Abstract
Purpose
Formative assessments can motivate students and support their learning through feedback. There is a pressing need to improve clinical pharmacotherapy (CPT) education, since junior doctors make many prescribing errors. The aim of this study was to determine whether a formative assessment with personalized narrative feedback helps medical students to improve their prescribing skills.
Methods
This retrospective cohort study was conducted among master's medical students at Erasmus Medical Center, The Netherlands. Students completed a formative and a summative skill-based prescription assessment, both taken during clerkships as part of their regular curriculum. Errors in the two assessments were categorized by type and possible consequence and compared with each other.
Results
A total of 388 students made 1964 errors in the formative assessment and 1016 in the summative assessment. The greatest improvement after the formative assessment was seen in mentioning the weight of a child on the prescription (n = 242, 19%). Most new and repeated errors in the summative assessment concerned missing usage instructions (n = 82, 16% and n = 121, 41%, respectively).
Conclusions
This formative assessment with personalized narrative feedback helped students to increase the technical correctness of their prescriptions. However, the errors that were repeated despite the feedback indicate that a single formative assessment is not yet enough to improve clinical prescribing.
Introduction
It has long been known that formative assessments not only assess students but can also motivate them and support their learning through feedback [1]. Formative assessments give students feedback on their progress without expressing it in grades. Summative assessments, in contrast, evaluate student learning, often by grading. Formative assessments for medical students have, for example, been shown to increase scores on summative assessments in pathophysiology [2] and to encourage students to learn epidemiology [3].
However, for a formative assessment to reach these goals, several requirements must be met. Firstly, the assessment should focus on learning and take place in a safe environment [4]. Furthermore, constructive feedback is required to enhance students' learning through formative assessments [5, 6].
There is a pressing need for improvement in clinical pharmacotherapy (CPT) education. Junior doctors make many prescribing errors [7], which can lead to patient complaints, avoidable side effects, hospital admissions, and even death [8, 9]. Many new CPT education interventions have been introduced [10], and CPT has been taught in many different ways [11], but none has achieved the desired reduction in prescribing errors. In the clinical setting, feedback from pharmacists on prescribing has been shown to reduce prescribing errors effectively [12, 13] and to have a positive influence on prescribing behavior [14,15,16,17]. However, feedback on prescribing in the form of a formative assessment has not yet been studied as a teaching method for clinical pharmacotherapy.
The aim of this study was to determine whether a formative assessment with personalized narrative feedback helps medical students to improve their prescribing skills, based on the errors made in the formative and the summative prescribing assessments. We hypothesized that the prescribing errors made in the formative assessment, on which narrative feedback was provided, would appear less often in the summative assessment. Furthermore, we hypothesized that the errors made in the summative assessment would be less severe than those made in the formative assessment.
Methods
At Erasmus Medical Center, Rotterdam, The Netherlands, students take a formative skill-based prescription assessment during the fourth year of their medical curriculum in the online environment P-scribe [18]. Students may choose a time and place to take this digital assessment in the first two educational weeks prior to their surgery clerkship. During the assessment, students answer six knowledge and application questions with immediate feedback, followed by two case-based prescriptions for primary care patients or patients in an outpatient clinic (see Appendix 1 for an example of the assessment). A CPT teacher assesses the case-based prescriptions. Students receive standardized feedback on the knowledge and application questions and personalized feedback on the prescriptions. Students do not receive a grade for this assessment.
In the fifth year of their medical curriculum, students take the Dutch National Pharmacotherapy Assessment [19], a knowledge-based assessment consisting of sixty multiple-choice questions on pharmacotherapy.
At the end of their medical curriculum, students take a summative skill-based prescription assessment (see Appendix 1 for an example). The summative assessment is taken in the same online environment as the formative assessment, but in an exam setting with a fixed time and place and with supervisors. As in the formative assessment, students write case-based prescriptions for primary care patients or patients in an outpatient clinic; however, the summative assessment consists of four case-based prescriptions instead of two. For one of the case-based prescriptions, students complete a WHO six-step model (see Fig. 1) [20], in which each single step is scored insufficient, sufficient, or well done. Since the summative assessment takes place almost 2 years later in the curriculum than the formative assessment, students have acquired more knowledge by then, and the cases in the summative assessment are therefore slightly more difficult. In preparation for the summative assessment, students can revisit the feedback previously given on their formative assessment in their P-scribe portfolio and can choose to take a practice test.
This retrospective cohort study was conducted among master's medical students at Erasmus Medical Center, Rotterdam, The Netherlands. Master students who took their summative prescribing assessment between 27 July 2020 and 4 October 2021 were included. Because the COVID-19 pandemic disrupted the educational program, the inclusion period was extended to October 2021 instead of the originally planned July 2021. Data on both the formative and the summative assessments, including the teachers' feedback given during the assessments, were extracted from the digital program P-scribe. Errors that teachers missed while correcting an assessment were not added to the dataset.
Students were excluded when either of the assessments was not available or when feedback from the teacher was absent. Only first attempts were included; re-sit assessments were excluded.
Prior to the study, each student had made a personal account in P-scribe for educational purposes. With their registration in P-scribe, students consented to have their data saved and used for research. We coded student data to ensure anonymity. The Medical Ethics Committee Erasmus MC reviewed the research proposal and determined that the Medical Research Involving Human Subjects Act was not applicable to this research.
Categorization of errors
From the teachers' feedback extracted from P-scribe, we categorized the errors by type and possible consequence. The categorization of error types (see Table 1) was based on previous studies and the Erasmus Medical Center guidelines for reporting an incident [7, 21,22,23,24]. The classification of the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) was used to categorize the possible consequences of the errors [25]. All errors were categorized based on the expert opinions of a medical doctor and a pharmacist. Complex cases were discussed with an independent pharmacist, an internal medicine physician, and a CPT teacher until consensus was reached.
Repeated errors
We checked the pattern of errors for each student and assigned every error to one of three categories: errors made in the formative assessment but not in the summative assessment, errors made only in the summative assessment, and errors made in both assessments. For this analysis, errors of the same type made multiple times by the same student in the same assessment were counted as one.
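The per-student comparison described above amounts to a set comparison of error types between the two assessments. The sketch below illustrates the idea in Python; the error-type labels and data structure are hypothetical examples, not taken from the study's dataset.

```python
# Hypothetical error types for one student. Using sets deduplicates
# repeated errors of the same type within one assessment, mirroring
# the rule that such errors were counted as one.
formative = {"missing weight", "missing usage instructions", "wrong dose"}
summative = {"missing usage instructions", "missing signature"}

resolved = formative - summative   # made only in the formative assessment
new = summative - formative        # made only in the summative assessment
repeated = formative & summative   # made in both assessments

print(sorted(resolved))  # ['missing weight', 'wrong dose']
print(sorted(new))       # ['missing signature']
print(sorted(repeated))  # ['missing usage instructions']
```

Repeating this per student and summing the three sets over all students yields the counts reported in the Results.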
Questionnaire on student preparation
To study how the feedback on the formative assessment was used in preparing for the summative assessment, we e-mailed an online questionnaire to all students who took their summative prescribing assessment between 17 May 2021 and 4 October 2021, 2 weeks after they completed the assessment. At that point, their result on the summative assessment was not yet known. The questionnaire consisted of four questions on the preparation for the summative assessment. To compare students who completed the questionnaire with those who did not, the scores of all students on the Dutch National Pharmacotherapy Assessment were used.
Data were analyzed using IBM SPSS Statistics 28.0 [26]. An independent t-test was used to test for comparability between the students who completed the questionnaire and the other students. We used a χ² test to study the differences in error types and whether errors were repeated or not. Further data analysis was done with descriptive statistics.
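As an illustration, the Pearson χ² statistic underlying such a test of independence can be computed from a contingency table of error counts. The stdlib-only Python sketch below is not the authors' SPSS analysis, and the example counts are hypothetical; the statistic would then be compared against a χ² distribution with (rows − 1) × (columns − 1) degrees of freedom to obtain a P value.

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for a contingency table given as rows."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of row and column factors.
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical 2x2 table: rows = error category, columns = repeated yes/no.
print(round(chi_square_stat([[30, 70], [60, 40]]), 3))  # 18.182
```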
Results
A total of 452 students took at least one of the assessments during the selected period. For 388 of these students, information on both assessments was available. These students had an average age of 25 years, and 67% were female.
On average, these students made 1.9 fewer errors per prescription in the summative assessment than in the formative assessment (95% CI 1.7–2.0, P < 0.001; see Table 2).
In the formative assessment, the majority of errors (n = 1018, 51.8%) were category B errors (an error occurred but did not reach the patient). In the summative assessment, the majority of errors (n = 455, 44.8%) were category C errors (an error reached the patient but did not cause harm).
WHO six-step
We analyzed the WHO six-step models completed by all 388 students. Per step, students could score insufficient, sufficient, or well done. Figure 2 shows that students scored lower on step 5 than on the other steps.
Repeated errors
Whether or not an error was repeated from the formative to the summative assessment differed significantly between error types (P < 0.001, Table 3). A total of 1249 errors were made only in the formative assessment and were not repeated in the summative assessment. Almost half of these errors (n = 591, 47%) fell into the category missing information. Within this category, the subcategory of mentioning a child's weight on the prescription (n = 242, 41%) showed the largest improvement.
Errors newly made in the summative assessment were errors that the student had not made in the formative assessment and therefore had received no feedback on. Most errors made only in the summative assessment (n = 82, 32%) were in the subcategory missing usage instructions, which falls within the overarching category missing information (see Table 4). An example in this category is forgetting to state that eardrops need to be applied in the ear canal.
Finally, there were errors made by a student in both the formative and the summative assessment. Again, most of these were in the category missing information (n = 145, 49%), and 83% of those (n = 121) were in the subcategory missing usage instructions.
Feedback checked
We sent all 202 students who took their summative assessment between 17 May 2021 and 4 October 2021 a questionnaire on their preparation for the summative assessment. Of these, 71 (35.1%) students filled out the questionnaire. One student was excluded because they had not taken the formative assessment. The remaining 70 students had an average age of 25.8 years, and 63% were female, which was comparable to the whole group of students (age, t-test P = 0.248; gender, χ² test P = 0.666). To check whether this sample was representative of the whole group, the scores of all students on the Dutch National Pharmacotherapy Assessment [19] were used to compare students who completed the questionnaire with those who did not. The average score did not differ between the two groups (89.0% vs 89.4%, P = 0.24).
Of the 70 students who filled out the questionnaire, 63 (88.7%) answered that they had checked the feedback on the formative assessment in preparation for the summative assessment. Of these 63 students, 43 (68%) found the feedback useful in their preparation; they mentioned that they felt well prepared and knew what was expected of them. The students who checked the feedback showed the same pattern of errors across both assessments as the total group of included students (see Table 5).
Discussion
The aim of this study was to determine whether personalized feedback on a formative assessment helps medical students to improve their prescribing skills. We categorized the errors in over 2300 prescriptions written by almost 400 medical students, which showed for which error categories personalized narrative feedback can facilitate students' learning.
Almost 46% of all errors that were resolved after the personalized narrative feedback concerned basic patient safety issues. Examples include over 300 administrative errors and almost 250 errors of not mentioning the weight of a child on a pediatric prescription, which the pharmacist needs to be able to check the calculated dose.
In contrast, errors that were repeated despite the feedback largely concerned the ability to empathize with a patient: to understand what information a prescription must contain for the patient to execute the treatment correctly.
Our findings confirm the results of Sabatino et al., in whose study nurse practitioner students received formative feedback from pharmacists on assignments in which they had to identify errors in prescriptions and write a correct prescription [27]. In line with our results, where the personalized feedback helped students learn the technical elements of prescribing, their students showed a greater increase in performance on technical elements than on clinical elements after a 14-week intervention with weekly assignments.
This distinction between technical errors and errors due to a lack of ability to empathize with the patient is also visible in the results of the WHO six-step model. Steps 1, 2, and 3 focus on the indication, while steps 4 and 5 require the ability to put oneself in the patient's place. While step 4, choosing a suitable treatment for the patient, was often answered correctly, students had the most difficulty with step 5, in which they are asked what information they would communicate to the patient on instructions, efficacy, side effects, and warnings. This supports the hypothesis that more frequent practice may be needed to train medical students in this skill, whereas the single formative assessment was able to improve the technical elements of prescriptions.
In our study, the possible severity of the majority of errors shifted from category B (an error occurred but did not reach the patient) in the formative assessment to category C (an error reached the patient but did not cause harm) in the summative assessment. This is not in line with the study by Lloyd et al., in which pharmacist-led feedback on prescribing in a hospital setting showed no change in the distribution of error severity before and after feedback but significantly reduced the frequency of all prescribing severity grades [12]. In our study, we saw a decline in administrative errors after the formative assessment. Because administrative errors are often categorized as category B, the shift in error severity from category B to category C is expected. In addition, the shift could be explained by the slightly greater difficulty of the cases in the summative assessment compared with the formative assessment.
Teaching medical students to prescribe safely and effectively is a complex task. Our results show that personalized narrative feedback is a way to teach students to write technically more correct prescriptions. However, even though Bertels et al. suggest that the personalized and individual way in which the feedback on this formative assessment is given is the preferred way [28], it does not seem to develop the clinical intuition that students need to write prescriptions. Future studies should investigate whether more frequent feedback on prescriptions during medical education, compared with this single moment of feedback through a formative assessment, helps this development.
There are some potential drawbacks to our study. For example, the summative assessment takes place one and a half years after the formative assessment. The results of our study may therefore not be a direct result of the formative assessment alone, but also of other classes or practice time. However, all classes on the technical aspects of writing a prescription are given prior to the formative assessment. In addition, the questionnaire showed that students use the feedback given on the formative assessment in their preparation for the summative assessment, which makes a relation between the feedback and the errors on the summative assessment plausible. Secondly, students may not have had the opportunity to make all possible errors in the formative assessment. While it is difficult to compare the cases in the two assessments, the teachers creating the assessments strive to make the cases equally difficult. Thirdly, we did not take the quality of the feedback into consideration; with higher-quality feedback, students might have been able to improve their skills even more. Lastly, the response rate to the questionnaire was relatively low, which could have biased the results. However, the respondents' scores on the knowledge assessment and the similar distribution of errors suggest that this group is representative of the whole group.
This is the first known study to examine the effect of a formative assessment in clinical pharmacotherapy education. A strength of our study is the number of prescriptions checked by a multidisciplinary team. Moreover, we not only counted the errors within both assessments but also studied per student whether errors were repeated, which yielded highly detailed information on almost 400 students.
Conclusion
Formative assessments not only assess students but can also support students' learning through feedback. Personalized narrative feedback can help students to increase the technical correctness of their prescriptions. However, the errors repeated in the summative assessment predominantly show that this single formative assessment has not yet sufficiently enhanced clinical prescribing. Future research should concentrate on interventions with more frequent personal feedback on the prescriptions of medical students.
Availability of data and materials
The datasets generated during and/or analyzed during the current study are available from the corresponding author upon reasonable request.
References
Rolfe I, McPherson J (1995) Formative assessment: how am I doing? Lancet 345(8953):837–839
Cong X, Zhang Y, Xu H, Liu LM, Zheng M, Xiang RL et al (2020) The effectiveness of formative assessment in pathophysiology education from students’ perspective: a questionnaire study. Adv Physiol Educ 44(4):726–733
Venugopal V, Dongre AR (2020) Effect of interactive lectures and formative assessment on learning of epidemiology by medical undergraduates - a mixed-methods evaluation. Indian J Community Med 45(4):526–530
Prashanti E, Ramnarayan K (2019) Ten maxims of formative assessment. Adv Physiol Educ 43(2):99–102
Lim YS (2019) Students’ perception of formative assessment as an instructional tool in medical education. Med Sci Educ 29(1):255–263
Rushton A (2005) Formative assessment: a key to deep learning? Med Teach 27(6):509–513
Ashcroft DM, Lewis PJ, Tully MP, Farragher TM, Taylor D, Wass V et al (2015) Prevalence, nature, severity and risk factors for prescribing errors in hospital inpatients: prospective study in 20 UK hospitals. Drug Saf 38(9):833–843
Gandhi TK, Burstin HR, Cook EF, Puopolo AL, Haas JS, Brennan TA et al (2000) Drug complications in outpatients. J Gen Intern Med 15(3):149–154
Leendertse AJ, Egberts AC, Stoker LJ, van den Bemt PM, Group HS (2008) Frequency of and risk factors for preventable medication-related hospital admissions in the Netherlands. Arch Intern Med 168(17):1890–1896
Omer U, Danopoulos E, Veysey M, Crampton P, Finn G (2021) A rapid review of prescribing education interventions. Med Sci Educ 31(1):273–289
Brinkman DJ, Tichelaar J, Okorie M, Bissell L, Christiaens T, Likic R et al (2017) Pharmacology and therapeutics education in the European Union needs harmonization and modernization: a cross-sectional survey among 185 medical schools in 27 countries. Clin Pharmacol Ther 102(5):815–822
Lloyd M, Watmough SD, O’Brien SV, Hardy K, Furlong N (2021) Evaluating the impact of a pharmacist-led prescribing feedback intervention on prescribing errors in a hospital setting. Res Social Adm Pharm 17(9):1579–1587
Yang J, Zheng L, Guan YY, Song C, Liu YY, Li PB (2021) Pharmacist-led, prescription intervention system-assisted feedback to reduce prescribing errors: a retrospective study. J Clin Pharm Ther 46(6):1606–1612
Lloyd M, Watmough SD, O’Brien SV, Furlong N, Hardy K (2018) Exploring the impact of pharmacist-led feedback on prescribing behaviour: a qualitative study. Res Social Adm Pharm 14(6):545–554
Ferguson J, Keyworth C, Tully MP (2018) ‘If no-one stops me, I’ll make the mistake again’: changing prescribing behaviours through feedback; a perceptual control theory perspective. Res Social Adm Pharm 14(3):241–247
Choi PW, Benzer JA, Coon J, Egwuatu NE, Dumkow LE (2021) Impact of pharmacist-led selective audit and feedback on outpatient antibiotic prescribing for UTIs and SSTIs. Am J Health Syst Pharm 78(Supplement_2):S62–S9
Lloyd M, Bennett N, Wilkinson A, Furlong N, Cardwell J, Michaels S (2021) A mixed-methods evaluation of the impact of a pharmacist-led feedback pilot intervention on insulin prescribing in a hospital setting. Res Social Adm Pharm 17(11):2006–2014
van Doorn A. Pscribe [Internet]. [cited December 2021]. Available from: https://www.pscribe.nl/en-GB/Entrance/Home/Index
Kramers C, Janssen BJ, Knol W, Hessel MHM, Mulder WM, Dumont G et al (2017) A licence to prescribe. Br J Clin Pharmacol 83(8):1860–1861
de Vries TPGM, Henning RH, Hogerzeil HV, Fresle DA (1994) Guide to good prescribing - a practical manual. World Health Organization
Devine EB, Wilson-Norton JL, Lawless NM, Hansen RN, Hazlet TK, Kelly K et al (2007) Characterization of prescribing errors in an internal medicine clinic. Am J Health Syst Pharm 64(10):1062–1070
Fijn R, Van den Bemt PM, Chow M, De Blaey CJ, De Jong-Van den Berg LT, Brouwers JR (2002) Hospital prescribing errors: epidemiological assessment of predictors. Br J Clin Pharmacol 53(3):326–331
Ross S, Bond C, Rothnie H, Thomas S, Macleod MJ (2009) What is the scale of prescribing errors committed by junior doctors? A systematic review. Br J Clin Pharmacol 67(6):629–640
Kalfsvel LS, Hoek K, Bethlehem C, van der Kuy PHM, van den Broek WW, Versmissen J et al (2022) How would final years’ medical students perform if their skill-based prescription assessment was real life? Br J Clin Pharmacol
Hartwig SC, Denger SD, Schneider PJ (1991) Severity-indexed, incident report-based medication error-reporting program. Am J Hosp Pharm 48:2611–2616
IBM Corp. Released 2021. IBM SPSS Statistics for Windows, Version 28.0. Armonk, NY: IBM Corp
Sabatino JA, Pruchnicki MC, Sevin AM, Barker E, Green CG, Porter K (2017) Improving prescribing practices: a pharmacist-led educational intervention for nurse practitioner students. J Am Assoc Nurse Pract 29(5):248–254
Bertels J, Almoudaris AM, Cortoos PJ, Jacklin A, Franklin BD (2013) Feedback on prescribing errors to junior doctors: exploring views, problems and preferred methods. Int J Clin Pharm 35(3):332–338
Acknowledgements
The authors wish to thank Dr. M. Dankbaar for her advice on the methods of this research.
Funding
Not applicable.
Author information
Authors and Affiliations
Contributions
L.K., L.P., F.R., and J.V. designed the study. The classification of errors was done by L.K., K.H., F.R., J.V., and C.B. L.K. and K.H. processed the data. L.K. performed the analysis and drafted the manuscript. L.K. interpreted the results with the help of F.R., L.P., and J.V. All authors discussed the results and commented on the manuscript.
Corresponding author
Ethics declarations
Ethical approval
The review of the research proposal by the Medical Ethics Committee Erasmus MC determined that the Medical Research Involving Human Subjects Act was not applicable to this research.
Consent to participate
Prior to the study, each student had made a personal account in the program P-scribe for educational purposes. With the registration in P-scribe, students consented to have their data saved and used for research.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Kalfsvel, L.S., Peeters, L.E.J., Hoek, K. et al. Does formative assessment help students to acquire prescribing skills?. Eur J Clin Pharmacol 79, 533–540 (2023). https://doi.org/10.1007/s00228-023-03456-w