Advances in Health Sciences Education, Volume 22, Issue 2, pp 327–336

CASPer, an online pre-interview screen for personal/professional characteristics: prediction of national licensure scores

  • Kelly L. Dore
  • Harold I. Reiter
  • Sharyn Kreuger
  • Geoffrey R. Norman

Abstract

Typically, only a minority of applicants to health professional training are invited to interview. However, pre-interview measures of cognitive skills predict national licensure scores (Gauer et al. in Med Educ Online 21, 2016), and licensure scores in turn predict performance in practice (Tamblyn et al. in JAMA 288(23):3019–3026, 2002; Tamblyn et al. in JAMA 298(9):993–1001, 2007). Assessment of personal and professional characteristics, with the same psychometric rigour as measures of cognitive abilities, is needed upstream in selection to health professions training programs. To fill that need, the Computer-based Assessment for Sampling Personal characteristics (CASPer), an on-line, video-based screening test, was created. In this paper, we examine the correlation between CASPer and Canadian national licensure examination outcomes in 109 doctors who took CASPer at the time of selection to medical school. Specifically, CASPer scores were correlated with performance on cognitive and 'non-cognitive' subsections of the Medical Council of Canada Qualifying Examination (MCCQE) Part I (end of medical school) and Part II (18 months into specialty training). Unlike most national licensure exams, the MCCQE has specific subcomponents examining personal/professional qualities, providing a unique opportunity for comparison. The results demonstrated moderate predictive validity of CASPer for national licensure outcomes of personal/professional characteristics three to six years after admission to medical school. Disattenuated correlations of this magnitude (r = 0.3–0.5) are not achieved by traditional screening measures. These data support the ability of a computer-based strategy to screen applicants with a feasible, reliable test that has now demonstrated predictive validity, lending evidence to its validity argument for medical school applicant selection.
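The disattenuated correlations quoted above correct the observed correlation for measurement error in both instruments. A standard form of this correction is Spearman's correction for attenuation (the abstract does not report the specific reliability estimates used; the symbols below are the conventional ones):

```latex
% Spearman's correction for attenuation:
% r_xx = reliability of the predictor (here, CASPer),
% r_yy = reliability of the criterion (here, an MCCQE subscore).
r_{\text{disattenuated}} = \frac{r_{\text{observed}}}{\sqrt{r_{xx}\, r_{yy}}}
```

For illustration, with hypothetical reliabilities of 0.80 and 0.60, an observed correlation of 0.25 disattenuates to 0.25 / sqrt(0.48) ≈ 0.36, i.e. within the r = 0.3–0.5 range reported.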

Keywords

Selection · Screening · Situational-judgment test · National licensing correlations · Predictive validation · Professionalism · Non-academic qualities

Notes

Acknowledgements

The authors would like to acknowledge the generous contributions of the group at CAPER, including Steve Slade, for their work matching data in a highly secure manner, as well as the Medical Council of Canada (Marguerite Roy) for the data collation. The authors wish to further acknowledge the support of McMaster faculty and students, who supported the implementation of CASPer, particularly Admissions Officer Wendy Edge, whose heroic efforts made implementation possible. Financial support for this project was received from both the Medical Council of Canada Medical Education Grant and the National Board of Medical Examiners (Stemmler Fund). The authors would like to acknowledge Ellen MacLellan for her assistance with the final preparation of the manuscript. KD, GN and HR are founders, and KD and HR are board members, of the company that now administers CASPer (Altus Assessments). All of the research noted above was completed prior to the establishment of Altus Assessments.

References

  1. Albanese, M. A., Snow, M. H., Skochelak, S. E., Huggett, K. N., & Farrell, P. M. (2003). Assessing personal qualities in medical school admissions. Academic Medicine, 78(3), 313–321.
  2. Axelson, R. D., & Kreiter, C. D. (2009). Rater and occasion impacts on the reliability of pre-admission assessments. Medical Education, 43(12), 1198–1202.
  3. Barlett, J. E., Kotrlik, J. W., & Higgins, C. C. (2001). Organizational research: Determining appropriate sample size in survey research. Information Technology, Learning, and Performance Journal, 19(1), 43.
  4. Barnett, R., Winterton, A., Firth, P., Davison, J., Willis, J., & O’Brien, A. (2015). Multiple mini interviews for undergraduate physiotherapy entry in the United Kingdom (UK). Physiotherapy, 101, e1649–e1650.
  5. Carraccio, C., Wolfsthal, S. D., Englander, R., Ferentz, K., & Martin, C. (2002). Shifting paradigms: From Flexner to competencies. Academic Medicine, 77(5), 361–367.
  6. Cook, D. A., Brydges, R., Ginsburg, S., & Hatala, R. (2015). A contemporary approach to validity arguments: A practical guide to Kane’s framework. Medical Education, 49, 560–575.
  7. Dore, K. L., Reiter, H. I., Eva, K. W., Kreuger, S., Scriven, E., Siu, E., et al. (2009). Extending the interview to all medical school candidates—computer-based multiple sample evaluation of non-cognitive skills (CMSENS). Academic Medicine, 84(10), S9–S12.
  8. Dunleavy, D. M., & Whittaker, K. M. (2011). The evolving medical school admissions interview. AAMC Analysis in Brief, 11(7), 1–2.
  9. Eva, K. W., & Macala, C. (2014). Multiple mini-interview test characteristics: ‘Tis better to ask candidates to recall than to imagine. Medical Education, 48(6), 604–613.
  10. Eva, K. W., Rosenfeld, J., Reiter, H. I., & Norman, G. R. (2004). An admissions OSCE: The multiple mini-interview. Medical Education, 38, 314–326.
  11. Eva, K. W., Reiter, H. I., Trinh, K., Wasi, P., Rosenfeld, J., & Norman, G. R. (2009). Predictive validity of the multiple mini-interview for selecting medical trainees. Medical Education, 43(8), 767–775.
  12. Eva, K. W., Reiter, H. I., Rosenfeld, J., Trinh, K., Wood, T. J., & Norman, G. R. (2012). Association between a medical school admission process using the multiple mini-interview and national licensing examination scores. JAMA, 308(21), 2233–2240.
  13. Ferguson, E., & Madeley, L. (2002). Factors associated with success in medical school: Systematic review of the literature. British Medical Journal, 324, 952–957.
  14. Frank, J. R., & Danoff, D. (2007). The CanMEDS initiative: Implementing an outcomes-based framework of physician competencies. Medical Teacher, 29, 642–647.
  15. Gauer, J. L., Wolff, J. M., & Jackson, J. B. (2016). Do MCAT scores predict USMLE scores? An analysis on 5 years of medical student data. Medical Education Online. doi: 10.3402/meo.v21.31795.
  16. Ginsburg, S., Regehr, G., Hatala, R., McNaughton, N., Frohna, A., Hodges, B., et al. (2000). Context, conflict, and resolution: A new conceptual framework for evaluating professionalism. Academic Medicine, 75(10), S6–S11.
  17. Harris, S., & Owen, C. (2007). Discerning quality: Using the multiple mini-interview in student selection for the Australian National University Medical School. Medical Education, 41(3), 234–241.
  18. Hecker, K., Donnon, T., Fuentealba, C., Hall, D., Illanes, I., Morck, D., et al. (2009). Assessment of applicants to the veterinary curriculum using a multiple mini-interview method. Journal of Veterinary Medical Education, 36(2), 166–173.
  19. Husbands, A., & Dowell, J. (2013). Predictive validity of the Dundee multiple mini-interview. Medical Education, 47(7), 717–725.
  20. Jerant, A., Griffin, E., Rainwater, J., Henderson, M., Sousa, F., Bertakis, K. D., et al. (2012). Does applicant personality influence multiple mini-interview performance and medical school acceptance offers? Academic Medicine, 87(9), 1250–1259.
  21. Kirch, D. G. (2012). Transforming admissions: The gateway to medicine. JAMA, 308(21), 2250–2251.
  22. Kulatunga-Moruzi, C., & Norman, G. R. (2002). Validity of admissions measures in predicting performance outcomes: The contribution of cognitive and non-cognitive dimensions. Teaching and Learning in Medicine, 14, 34–42.
  23. Lievens, F. (2013). Adjusting medical school admission: Assessing interpersonal skills using situational judgement tests. Medical Education, 47, 182–189.
  24. Lievens, F., & Sackett, P. R. (2006). Video-based versus written situational judgment tests: A comparison in terms of predictive validity. Journal of Applied Psychology, 91, 1181–1188.
  25. McManus, I. C., Dewberry, C., Nicholson, S., & Dowell, J. S. (2013a). The UKCAT-12 study: Educational attainment, aptitude test performance, demographic and socio-economic contextual factors as predictors of first year outcome in a cross-sectional collaborative study of 12 UK medical schools. BMC Medicine, 11(1), 1.
  26. McManus, I. C., Dewberry, C., Nicholson, S., Dowell, J. S., Woolf, K., & Potts, H. W. (2013b). Construct-level predictive validity of educational attainment and intellectual aptitude tests in medical student selection: Meta-regression of six UK longitudinal studies. BMC Medicine, 11(1), 1.
  27. Patterson, F., Zibarras, L., & Ashworth, V. (2016). Situational judgement tests in medical education and training: AMEE Guide No. 100. Medical Teacher, 38(1), 3–17.
  28. Ramsey, P. G., Carline, J. D., Inui, T. S., Larson, E. B., LoGerfo, J. P., & Wenrich, M. D. (1989). Predictive validity of certification by the American Board of Internal Medicine. Annals of Internal Medicine, 110(9), 719–726.
  29. Reiter, H. I., Eva, K. W., Rosenfeld, J., & Norman, G. R. (2007). Multiple mini-interviews predict clerkship and licensing examination performance. Medical Education, 41(4), 378–384.
  30. Salvatori, P. (2001). Reliability and validity of admissions tools used to select students for the health professions. Advances in Health Science Education, 6(2), 159–175.
  31. Siu, E., & Reiter, H. I. (2009). Overview: What’s worked and what hasn’t as a guide towards predictive admissions tool development. Advances in Health Science Education Theory and Practice, 14, 759–775.
  32. Streiner, D. L., & Norman, G. R. (2008). Health measurement scales: A practical guide to their development and use (4th ed.). New York: Oxford University Press.
  33. Tamblyn, R., Abrahamowicz, M., Dauphinee, W. D., Hanley, J. A., Norcini, J., Girard, N., et al. (2002). Association between licensure examination scores and practice in primary care. JAMA, 288(23), 3019–3026.
  34. Tamblyn, R., Abrahamowicz, M., Dauphinee, D., Wenghofer, E., Jacques, A., Klass, D., et al. (2007). Physician scores on a national clinical skills examination as predictors of complaints to medical regulatory authorities. JAMA, 298(9), 993–1001.
  35. Tiller, D., O’Mara, D., Rothnie, I., Dunn, S., Lee, L., & Roberts, C. (2013). Internet-based multiple mini-interviews for candidate selection for graduate entry programs. Medical Education, 47(8), 801–810.

Copyright information

© Springer Science+Business Media Dordrecht 2016

Authors and Affiliations

  1. Department of Medicine, PERD, 5003/C David Braley Health Sciences Centre, McMaster University, Hamilton, Canada
  2. Department of Oncology, PERD, McMaster University, Hamilton, Canada
  3. PERD, McMaster University, Hamilton, Canada
  4. Department of Clinical Epidemiology and Biostatistics, PERD, McMaster University, Hamilton, Canada
