Advances in Health Sciences Education, Volume 13, Issue 2, pp 181–192

mini-PAT (Peer Assessment Tool): A Valid Component of a National Assessment Programme in the UK?

  • Julian Archer
  • John Norcini
  • Lesley Southgate
  • Shelley Heard
  • Helena Davies

Abstract

Purpose

To design, implement and evaluate a multisource feedback instrument to assess Foundation trainees across the UK.

Methods

mini-PAT (Peer Assessment Tool) was modified from SPRAT (Sheffield Peer Review Assessment Tool), an established multisource feedback (360°) instrument for assessing more senior doctors, as part of a blueprinting exercise to identify instruments suitable for assessment in Foundation programmes (the first 2 years after graduation). mini-PAT’s content validity was supported by a mapping exercise against the Foundation Curriculum. Trainees’ clinical performance was then assessed on two occasions during the pilot period, using 16 questions rated on a six-point scale. Responses were analysed to determine the instrument’s internal structure, potential sources of bias and measurement characteristics.
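For illustration, a minimal sketch (in Python; not the study’s own scoring code, and the handling of blank or “unable to comment” responses is an assumption) of how one trainee’s mini-PAT score could be aggregated from the 16 six-point items:

    import numpy as np

    def assessor_mean(ratings):
        """Mean of one assessor's 16 item ratings on the 1-6 scale.

        Items left blank or marked "unable to comment" arrive as None
        and are ignored (an assumed convention, not from the paper).
        """
        answered = [r for r in ratings if r is not None]
        return float(np.mean(answered))

    def trainee_score(assessor_returns):
        """Overall mean score: the average of the per-assessor means."""
        return float(np.mean([assessor_mean(r) for r in assessor_returns]))

A trainee rated by eight assessors thus contributes eight per-assessor means, whose average and spread feed the decision logic reported under Results.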

Results

Six hundred and ninety-three mini-PAT assessments were undertaken for 553 trainees across 12 Deaneries in England, Wales and Northern Ireland. Two hundred and nineteen trainees were F1s or PRHOs and 334 were F2s. Trainees identified 5544 assessors, of whom 67% responded. The mean score for F2 trainees was 4.61 (SD = 0.43) and for F1s was 4.44 (SD = 0.56); an independent t test showed that the mean scores of the two groups differed significantly (t = −4.59, df = 390, p < 0.001). Forty-three F1s (19.6%) and 19 F2s (5.6%) were assessed as being below expectations for F2 completion. Factor analysis produced two main factors, one concerning clinical performance and the other humanistic qualities. Seventy-four percent of F2 trainees could have been assessed by as few as 8 assessors (95% CI ±0.6), as they scored an overall mean of either 4.4 or above or 3.6 or below. Fifty-three percent of F1 trainees could have been assessed by as few as 8 assessors (95% CI ±0.5), as they scored an overall mean of either 4.5 or above or 3.5 or below. Hierarchical regression showed that bias related to the length of the working relationship, the occupation of the assessor and the working environment explained 7% of the variation in mean scores when controlling for the year of the Foundation Programme (R² change = 0.06, F change = 8.5, significant F change < 0.001).
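The two core calculations behind these figures can be illustrated with a short sketch (simulated data generated from the reported group means and SDs; the between-assessor SD of 0.87 is a hypothetical value chosen to give a half-width near ±0.6 at 8 assessors, not a figure from the study):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Independent-samples t test between F1 and F2 overall mean scores,
    # simulated from the reported group means and SDs.
    f1 = rng.normal(loc=4.44, scale=0.56, size=219)
    f2 = rng.normal(loc=4.61, scale=0.43, size=334)
    t, p = stats.ttest_ind(f1, f2)
    print(f"t = {t:.2f}, p = {p:.4f}")

    # How many assessors are enough? The 95% CI around a trainee's mean
    # narrows with the number of assessors n:
    #     half-width = 1.96 * (between-assessor SD) / sqrt(n)
    # A trainee whose CI sits wholly above or below the decision cut
    # needs no further assessors.
    rater_sd = 0.87  # hypothetical between-assessor SD
    for n in (4, 8, 12):
        print(f"n = {n:2d}: 95% CI half-width = ±{1.96 * rater_sd / np.sqrt(n):.2f}")

Under this logic, a larger between-assessor SD, or a mean score closer to the decision cut, demands more assessors before a pass/fail judgement is defensible.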

Conclusions

As part of an assessment programme, mini-PAT appears to provide a valid way of collating colleague opinions to help reliably assess Foundation trainees.

Keywords

Foundation programme · multisource feedback · reliability · validity · work-based assessment

Copyright information

© Springer Science+Business Media, Inc. 2006

Authors and Affiliations

  • Julian Archer (1)
  • John Norcini (2)
  • Lesley Southgate (3)
  • Shelley Heard (4)
  • Helena Davies (5)

  1. Medical Education Research Fellow to the Foundation Assessment Programme, University of Sheffield, Sheffield, UK
  2. Foundation for Advancement of International Medical Education and Research (FAIMER), Philadelphia, USA
  3. Medical and Healthcare Education, St George’s Hospital Medical School, London, UK
  4. London Deanery, London, UK
  5. Academic Unit of Child Health, University of Sheffield, Sheffield, UK
