A medical student’s observed score might not reflect their true ability as a future healthcare professional, so Dr Thomas Kropmans of NUI Galway wants to devise a more accurate, digital scoring method.
From a young age, students of every subject are familiar with sitting an exam and coming out of it with a better or worse score than they expected.
The reality is that assessing human performance is subjective: the score one examiner gives a student might not be the score another examiner would give. That can be seriously damaging when it comes to grading medical students.
That is why Dr Thomas Kropmans, a senior lecturer in medical informatics and medical education at NUI Galway, and CEO of Qpercom, is trying to find a digital solution to a paper problem.
What inspired you to become a researcher?
From an early age, I was aware of the unfairness of pass/fail decisions in medical and healthcare education. Lecturers, in their role as examiners, still assume that the observed score produced by an examination is the student’s ‘true’ score.
If you ask me about my ‘sparks’, I would point to the beautiful work we have done in paediatric physiotherapy, addressing developmental issues in neonates and newborns through to physical problems in teenagers and young adults.
Although highly criticised at the time by Dutch health insurance companies for a supposed ‘lack of evidence’, that work gave a scientific boost to physiotherapy in the Netherlands.
This kind of professionalism – evidence-based medicine, research methods and the statistical interpretation of test results – remains my inspiring area of interest, and one I teach with enthusiasm to young undergraduate medical and healthcare students.
Can you tell us about the research you’re currently working on?
In the study of medicine, students go through Objective Structured Clinical Examinations (OSCEs), a series of stations at each of which they have to perform a different clinical task.
They are observed and marked by an examiner – eg a doctor or physician, a more senior student or a simulated patient – while executing station-specific tasks such as interviewing a patient, measuring blood pressure, removing stitches or analysing a urine sample.
If 100 students go through 10 of these stations, 1,000 assessment forms will be generated at the end of the examination day.
This creates a tremendous paper trail and a huge delay in results. In addition, no written feedback was provided in 40 years of OSCEs. In 2009, we looked into these paper forms and discovered that 30pc contained serious errors: 300 of the 1,000 forms were either incomplete or had been totalled incorrectly by the examiners.
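As an illustration of what a digital form can catch that a paper one cannot, here is a minimal sketch in Python. The `StationForm` record and `validate()` helper are hypothetical names of my own, not Qpercom’s software; the sketch simply flags the two error types found in the 2009 audit, incomplete forms and incorrectly added totals.

```python
# Minimal illustrative sketch (not Qpercom's actual software): a digital
# OSCE station form that flags incomplete forms and incorrectly totalled scores.
from dataclasses import dataclass
from typing import Dict, List, Optional


@dataclass
class StationForm:
    student_id: str
    station: str
    items: Dict[str, Optional[float]]  # checklist item -> score; None = left blank
    examiner_total: float              # total as entered by the examiner


def validate(form: StationForm) -> List[str]:
    """Return the problems found on one form (an empty list means it is clean)."""
    problems: List[str] = []
    missing = [name for name, score in form.items.items() if score is None]
    if missing:
        problems.append("incomplete: no score for " + ", ".join(missing))
    else:
        # Recompute the total from the item scores instead of trusting
        # the figure the examiner added up by hand.
        computed = sum(form.items.values())
        if abs(computed - form.examiner_total) > 1e-9:
            problems.append(
                f"addition error: examiner wrote {form.examiner_total}, "
                f"items sum to {computed}"
            )
    return problems


# Two example forms: one with a wrong total, one left incomplete.
forms = [
    StationForm("S001", "blood pressure",
                {"introduces self": 2, "correct cuff size": 3, "reads result": 3}, 9),
    StationForm("S002", "suture removal",
                {"hand hygiene": 2, "aseptic technique": None, "documentation": 1}, 3),
]
for f in forms:
    for problem in validate(f):
        print(f"{f.student_id} / {f.station}: {problem}")
```

The point of checks like these is that a digital system can run them at the moment of entry, rather than weeks later in a paper audit.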
Also, it is a pity for a student to fail an exam, and even more so if that decision turns out to be incorrect.
A digital solution for this problem did not exist before. As a result of our research and development, the quality of these practical clinical examinations, of large-scale recruitment processes and of entrustable professional activities for consultant trainees has now improved considerably.
Furthermore, we provide feedback on performance both to students – young undergraduate or postgraduate healthcare professionals – and to the senior doctors and consultants acting as examiners.
In your opinion, why is your research important?
The cut score – the borderline between pass and fail – is generally 50pc in western Europe. In central and northern European universities, the cut scores are 75pc and 80pc respectively.
Does that mean that we, in western Europe, accept graduates who know only half of what they are supposed to know, with a cut-off pass score of 50pc? Or, in science, where the cut score is 40pc, even less?
Moreover, imagine a junior doctor or student going through 10 stations examining simulated patients in distress and, as an outcome, five ‘die’ and five ‘survive’.
With a 50pc pass score, that student is still considered a good doctor and passes. What message does that send to these students? What impact does this have on society?
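To make the arithmetic concrete, here is a small illustrative sketch – my own simplification, ignoring weighting and borderline methods – of how that same 10-station outcome reads under the different cut scores mentioned above.

```python
# Illustrative only: one 10-station result judged against different cut scores.
def decision(passed_stations: int, total_stations: int, cut_score: float) -> str:
    """Pass/fail for a given cut score, expressed as a fraction of stations passed."""
    return "PASS" if passed_stations / total_stations >= cut_score else "FAIL"

passed, total = 5, 10  # five simulated patients 'survive', five 'die'
for cut in (0.40, 0.50, 0.75, 0.80):  # science, western, central and northern Europe
    print(f"cut score {cut:.0%}: {decision(passed, total, cut)}")
# cut score 40%: PASS
# cut score 50%: PASS
# cut score 75%: FAIL
# cut score 80%: FAIL
```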
What commercial applications do you foresee for your research?
In December 2008, Qpercom spun out as a commercial entity to provide a sustainable assessment solution for NUI Galway’s College of Medicine, Nursing and Health Sciences.
Following on from this, 25 international universities and professional medical bodies embraced our advanced assessment solutions.
In 2017, a large postgraduate educational body – which recruits health professionals across the UK – began using our advanced assessment solution to recruit, select and give feedback to those applying for further postgraduate training.
Unfortunately, as our latest presentation to the European Board of Medical Assessors in November 2017 showed, quality assurance outcomes (reliability and validity), decision rules and examiner variability differ widely between institutions. Our mission is to fill that gap.
What are some of the biggest challenges you face as a researcher in your field?
When measuring human performance in a profession that deals with life and death, quality assurance and error management are crucial.
Professionals learn from mistakes and share those mistakes worldwide, but there is still a long way to go in this industry.
This is why we address error in assessment and hope to continue to publish shared assessment results, with informed consent, within the international community.
What are some of the areas of research you’d like to see tackled in the years ahead?
We recently developed an EPA (entrustable professional activity) management system with a FieldNotes app (Qpercom Entrust).
The concept of EPAs was developed by Dr Olle ten Cate in the Netherlands, and is not new. In conjunction with a team at the College of Anaesthetists in Dublin, we are one of the first to develop an electronic version of an EPA system.
Key learning outcomes are of extreme importance in undergraduate education, and in shaping tomorrow’s clinicians. We look forward to seeing the impact of Entrust in the coming years.
We hope this assessment solution will also spur more research into the awarding of competence, to continue to challenge and improve the quality standards of medical education.