Validation of a performance assessment instrument in problem-based learning tutorials using two cohorts of medical students
Although problem-based learning (PBL) has been widely used in medical schools, few studies have attended to the assessment of PBL processes using validated instruments. This study examined the reliability and validity of an instrument assessing PBL performance in four domains: Problem Solving, Use of Information, Group Process, and Professionalism. Two cohorts of medical students (N = 310) participated in the study, with 2 years of archived PBL evaluation data rated by a total of 158 faculty raters. Reliability was examined with analyses based on generalizability theory. Validity was examined by following the Standards for Educational and Psychological Testing to evaluate content validity, response processes, construct validity, predictive validity, and the relationship to the variable of training. For construct validity, correlations of PBL scores with six other outcome measures were examined: the Medical College Admission Test, United States Medical Licensing Examination (USMLE) Step 1, National Board of Medical Examiners (NBME) Comprehensive Basic Science Examination, NBME Comprehensive Clinical Science Examination, Clinical Performance Examination, and USMLE Step 2 Clinical Knowledge. Predictive validity was examined by using PBL scores to predict five medical school outcomes. The largest share of PBL total score variance was associated with students (60%), indicating that students in the study differed in their PBL performance. The generalizability and dependability coefficients were moderately high (Eρ² = .68, ϕ = .60), showing that the instrument is reliable for ranking students and for identifying competent PBL performers. The patterns of correlations between PBL domain scores and the outcome measures partially support construct validity. PBL performance ratings as a whole significantly (p < .01) predicted all the major medical school achievements. Second-year PBL scores were significantly higher than first-year scores, indicating a training effect. These psychometric findings support the reliability, and many aspects of the validity, of PBL performance assessment using the instrument.
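The generalizability (Eρ²) and dependability (ϕ) coefficients reported above follow from standard generalizability-theory formulas once the variance components are estimated. The sketch below illustrates the computation for a crossed persons × raters (p × r) design; the variance components used are hypothetical, chosen only so that students account for 60% of total variance as in the abstract, and are not the study's actual estimates.

```python
def g_coefficients(var_p, var_r, var_pr_e, n_r):
    """Generalizability (E-rho^2) and dependability (phi) coefficients
    for a crossed persons x raters (p x r) design.

    var_p    : variance component for persons (students)
    var_r    : variance component for raters
    var_pr_e : person-by-rater interaction confounded with error
    n_r      : number of raters per student in the decision study
    """
    rel_error = var_pr_e / n_r            # relative error: affects rankings only
    abs_error = (var_r + var_pr_e) / n_r  # absolute error: adds the rater main effect
    e_rho2 = var_p / (var_p + rel_error)  # for norm-referenced (ranking) decisions
    phi = var_p / (var_p + abs_error)     # for criterion-referenced decisions
    return e_rho2, phi

# Hypothetical components: persons carry 60% of total variance (0.60 of 1.00).
e_rho2, phi = g_coefficients(var_p=0.60, var_r=0.10, var_pr_e=0.30, n_r=1)
print(f"E-rho^2 = {e_rho2:.2f}, phi = {phi:.2f}")  # prints "E-rho^2 = 0.67, phi = 0.60"
```

Because the dependability coefficient counts the rater main effect as error while the generalizability coefficient does not, ϕ can never exceed Eρ², which is consistent with the pattern reported in the abstract (.60 vs. .68).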
Keywords: PBL assessment · Psychometric validation · Generalizability theory · Reliability and validity · Standards for Educational and Psychological Testing · Medical education, undergraduate
The authors thank Dr. Noreen Webb, Professor of Education at UCLA Graduate School of Education, for her review and valuable suggestions for improvement to the manuscript.
- American Educational Research Association, American Psychological Association & National Council on Measurement in Education. (2014). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
- Biggs, J. (2003). Teaching for quality learning at university. Buckingham: Open University Press.
- Blumberg, P. (2005). Assessing students during the problem-based learning (PBL) process. Journal of the International Association of Medical Science Educators, 15, 1–9. http://www.iamse.org/artman/publish/article_289.shtml. Accessed 11 April 2014.
- Bowerman, B. L., & O’Connell, R. T. (1990). Linear statistical models: An applied approach (2nd ed.). Belmont, CA: Duxbury.
- Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
- Crick, J. E., & Brennan, R. L. (1983). Manual for GENOVA: A generalized analysis of variance system. Iowa City, IA: The American College Testing Program.
- Dolmans, D., Gijselaers, W. H., Moust, J. H. C., de Grave, W. S., Wolfhagen, H. A. P., & van der Vleuten, C. P. M. (2002). Trends in research on the tutor in PBL: Conclusions and implications for educational practice and research. Medical Education, 36, 173–180.
- Elizondo-Montemayor, L. L. (2004). Formative and summative assessment of the problem-based learning tutorial session using a criterion-referenced system. Journal of the International Association of Medical Science Educators, 14, 8–14.
- Gijbels, D., van den Bossche, P., & Loyens, S. (2012). Student achievement in problem-based learning. In J. A. C. Hattie & E. M. Anderman (Eds.), International guide to student achievement (pp. 382–384). New York, NY: Routledge.
- Marzano, R. J., Pickering, D., & McTighe, J. (1993). Assessing student outcomes. Alexandria, VA: Association for Supervision and Curriculum Development.
- Menard, S. (1995). Applied logistic regression analysis. Sage university paper series on quantitative applications in the social sciences, 07-106. Thousand Oaks, CA: Sage.
- Myers, R. (1990). Classical and modern regression with applications (2nd ed.). Boston, MA: Duxbury.
- Norman, G. R., & Schmidt, H. G. (1992). The psychological basis of problem-based learning: A review of the evidence. Academic Medicine, 67, 557–565.
- Painvin, C., Neufeld, V., Norman, G., Walker, I., & Wheelan, G. (1979). The “triple jump” exercise—A structured measure of problem solving and self-directed learning. Annual Conference on Research in Medical Education. Conference Proceedings, 18, 73–77.
- Shavelson, R. J., & Webb, N. M. (1991). Generalizability theory: A primer. Newbury Park, CA: Sage.
- Sim, S., Azila, N. M. A., Lian, L., Tan, C. P. L., & Tan, N. (2006). A simple instrument for the assessment of student performance in problem-based learning tutorials. Annals Academy of Medicine Singapore, 35, 634–641.