A preceptor has a key role in evaluating medical graduates' performance in the clinical setting. This study was conducted to develop an instrument for preceptor evaluation of medical graduates' performance in the clinical setting.
A mixed-methods design with a sequential exploratory approach was chosen to develop the instrument. Initial semi-structured interviews were conducted with 4 preceptors at the teaching hospitals, from which five main themes emerged. The themes were developed into a 23-item questionnaire. Nineteen heads or assistant heads of clinical departments were asked to review the relevance of the content. The questionnaire was then sent to 34 preceptors and 35 paramedic staff, who participated in the construct validity study through exploratory factor analysis (EFA). Data were analyzed with SPSS version 21, and the Varimax rotation method was applied to simplify and describe the factor structure.
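The factor-extraction step described above can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' SPSS workflow: it uses scikit-learn's factor analysis with Varimax rotation on simulated Likert-type responses standing in for the 69 raters (34 preceptors and 35 paramedic staff) and 23 items; the simulated data and all variable names are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Simulated 5-point Likert responses: 69 raters x 23 items.
# In the actual study, this matrix would hold the survey data.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(69, 23)).astype(float)

# Extract 5 factors and apply Varimax rotation, mirroring the
# design choice reported in the study.
fa = FactorAnalysis(n_components=5, rotation="varimax", random_state=0)
fa.fit(responses)

# Rotated loadings: one row per questionnaire item, one column per factor.
# Items loading strongly on the same factor are grouped into one theme.
loadings = fa.components_.T
print(loadings.shape)
```

With real data, items would typically be retained or dropped based on the size of their rotated loadings, which is how iterative refinement of such an instrument usually proceeds.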
Review of the factor structures suggested that the most appropriate fit was 5 factors. Most of the questionnaire items were relevant for assessing performance (4.65 ± 0.15), except item 4 of the clinical skill factor. From the 23 items of the evaluation instrument, five factors were extracted, together explaining 73.9% of the variance. Construct validity was achieved after the instrument was run through eight iterations, with a Cronbach's alpha of 0.951.
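The reliability figure quoted above (Cronbach's alpha of 0.951) follows a standard formula: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch of that computation, using a toy matrix rather than the study's data, could look like this:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)          # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Toy sanity check: 23 identical items are perfectly consistent,
# so alpha should be exactly 1.
demo = np.tile(np.array([[1.0], [2.0], [3.0], [4.0]]), (1, 23))
print(round(cronbach_alpha(demo), 3))  # 1.0
```

Values above roughly 0.9, as reported here, are conventionally read as excellent internal consistency for a scale of this kind.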
The instrument achieved the desired content and construct validity. It can be used by other institutions to assess their medical graduates' performance in the clinical setting.
This study was funded by Research Resources Centre-Cyberjaya University College of Medical Sciences (RRC-CUCMS), Malaysia.
Conflict of Interest
The authors declare that they have no conflict of interest.
The Health Research Ethics Committee, Faculty of Medicine, Bandung Islamic University has approved this observational study with approval letter no. 005/Ethic committee FK/VI/2017.
The authors declare that informed consent was obtained both for the interviews with the 4 preceptors and for the quantitative surveys.
Kusmiati, M., Hamid, N.A.A., Sanip, S. et al. Development of an Instrument for Preceptor Evaluation of Medical Graduates' Performance: the Psychometric Properties. Med. Sci. Educ. 29, 935–940 (2019). https://doi.org/10.1007/s40670-019-00774-6
- Construct validity
- Content validity