Abstract
Purpose
To design, implement and evaluate a multisource feedback instrument to assess Foundation trainees across the UK.
Methods
mini-PAT (Peer Assessment Tool) was adapted from SPRAT (Sheffield Peer Review Assessment Tool), an established multisource feedback (360°) instrument for assessing more senior doctors, as part of a blueprinting exercise to identify instruments suitable for assessment in Foundation programmes (the first two years after graduation). mini-PAT’s content validity was assured by mapping it against the Foundation Curriculum. Trainees’ clinical performance was then assessed on two occasions during the pilot period, using 16 questions rated on a six-point scale. Responses were analysed to determine the instrument’s internal structure, potential sources of bias and measurement characteristics.
Results
Six hundred and ninety-three mini-PAT assessments were undertaken for 553 trainees across 12 Deaneries in England, Wales and Northern Ireland; 219 trainees were F1s or PRHOs and 334 were F2s. Trainees identified 5544 assessors, of whom 67% responded. The mean score for F2 trainees was 4.61 (SD = 0.43) and for F1s was 4.44 (SD = 0.56); an independent t test showed that the mean scores of the two groups differed significantly (t = −4.59, df = 390, p < 0.001). Forty-three F1s (19.6%) and 19 F2s (5.6%) were assessed as being below the expectations for F2 completion. Factor analysis produced two main factors, one concerning clinical performance and the other humanistic qualities. Seventy-four percent of F2 trainees could have been assessed by as few as eight assessors (95% CI ±0.6), as they scored an overall mean of either 4.4 or above or 3.6 or below; 53% of F1 trainees could have been assessed by as few as eight assessors (95% CI ±0.5), as they scored an overall mean of either 4.5 or above or 3.5 or below. Hierarchical regression, controlling for the year of the Foundation Programme, showed that bias related to the length of the working relationship, the occupation of the assessor and the working environment explained 7% of the variation in mean scores (R² change = 0.06, F change = 8.5, p for F change < 0.001).
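The group comparison above is a standard pooled-variance (Student's) independent-samples t test. As a minimal sketch of that computation (the helper function and the example ratings below are purely illustrative, not the study's data or analysis code):

```python
import math

def independent_t(sample_a, sample_b):
    """Pooled-variance (Student's) two-sample t statistic and degrees of freedom.

    Assumes roughly equal variances in the two groups, as the classic
    independent t test does.
    """
    na, nb = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / na
    mean_b = sum(sample_b) / nb
    # Unbiased sample variances (divide by n - 1)
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (nb - 1)
    # Pooled variance weights each group's variance by its degrees of freedom
    pooled = ((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)
    t = (mean_a - mean_b) / math.sqrt(pooled * (1 / na + 1 / nb))
    return t, na + nb - 2

if __name__ == "__main__":
    # Hypothetical six-point-scale ratings for two small groups of trainees
    f1_scores = [4.0, 4.4, 4.8]
    f2_scores = [4.2, 4.6, 5.0]
    t, df = independent_t(f1_scores, f2_scores)
    print(f"t = {t:.2f}, df = {df}")
```

In practice an analysis like this would use a statistics package (e.g. SPSS, R, or `scipy.stats.ttest_ind`) rather than a hand-rolled function; the sketch only makes explicit what the reported t and df summarise.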
Conclusions
As part of an assessment programme, mini-PAT appears to provide a valid way of collating colleague opinions to help reliably assess Foundation trainees.
Acknowledgements
Our thanks to Chris Loveman, Project Manager, and all the team members of the Administrative Centre, Sheffield, and to the members of the London Deanery Assessment Working Party for their invaluable input.
Contributors: JA and HD designed the original study and oversaw its implementation. JA analysed the data with assistance from JN. JA wrote the paper with assistance from JN, HD, SH and LS.
Ethics approval: Not sought; mini-PAT was implemented as part of the Foundation assessment programme.
Guarantor: Dr. Helena Davies, Sheffield Children’s Hospital, Western Bank, Sheffield S10 2HT, UK.
Funding: JA’s research fellowship was in part funded by Policy Research Programme funding, Department of Health, UK.
Archer, J., Norcini, J., Southgate, L. et al. mini-PAT (Peer Assessment Tool): A Valid Component of a National Assessment Programme in the UK?. Adv in Health Sci Educ 13, 181–192 (2008). https://doi.org/10.1007/s10459-006-9033-3