
Using a Sampling Strategy to Address Psychometric Challenges in Tutorial-Based Assessments

Abstract

Introduction: Tutorial-based assessment, despite providing a good match with the philosophy adopted by educational programmes that emphasize small group learning, remains one of the greatest challenges for educators working in this context. The current study was performed to assess the psychometric characteristics of tutorial-based evaluation when a multiple sampling approach requiring minimal recording of observations is adopted.

Method: After reviewing the literature, a simple 3-item evaluation form was created. The items were "Professional Behaviour," "Contribution to Group Process," and "Contribution to Group Content," each defined explicitly on the evaluation form. Twenty-five tutors in five different programmes were asked to use the form to evaluate their students (N = 169) after every tutorial over the course of an academic unit. Each item was rated on a 10-point scale.

Results: Cronbach's alpha revealed appropriate internal consistency in all five programmes. Test-retest reliability of any single rating was low, but the reliability of the average rating was at least 0.75 in all cases. The construct validity of the tool was supported by the observation of increasing ratings over the course of the academic unit and by the finding that more senior students received higher ratings than more junior students.

Conclusion: Consistent with the context specificity phenomenon, adopting a "minimal observations often" approach to tutorial-based assessment appears to maintain better psychometric characteristics than do attempts to assess tutorial performance using more comprehensive measurement tools.
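
The reliability argument in the abstract, that averaging many low-reliability observations can yield a dependable overall score, follows from standard psychometric formulas. The Python sketch below is not part of the original article; the data, the single-rating reliability of 0.25, and the function names are invented purely to illustrate how Cronbach's alpha for a 3-item form and a Spearman-Brown estimate of averaged-rating reliability might be computed under those assumptions.

# A minimal sketch (not from the paper): estimating the reported psychometric
# quantities, assuming a students-by-items matrix of tutorial ratings.
import numpy as np

def cronbach_alpha(ratings):
    """Internal consistency of a multi-item form (rows = students, columns = items)."""
    k = ratings.shape[1]
    item_var = ratings.var(axis=0, ddof=1).sum()
    total_var = ratings.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

def spearman_brown(single_rating_reliability, n_ratings):
    """Reliability of the mean of n repeated ratings, given the reliability of one."""
    r = single_rating_reliability
    return n_ratings * r / (1 + (n_ratings - 1) * r)

# Hypothetical numbers: with a single-rating reliability as low as 0.25,
# averaging ratings from ten tutorials already clears the 0.75 mark,
# which is the "minimal observations often" argument in miniature.
rng = np.random.default_rng(0)
base = rng.integers(5, 10, size=(169, 1))   # simulated overall ability per student
items = np.clip(base + rng.integers(-1, 2, size=(169, 3)), 1, 10).astype(float)
print(f"Cronbach's alpha (simulated 3-item form): {cronbach_alpha(items):.2f}")
print(f"Averaged-rating reliability over 10 tutorials: {spearman_brown(0.25, 10):.2f}")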


Author information

Corresponding author

Correspondence to Kevin W. Eva.

About this article

Cite this article

Eva, K.W., Solomon, P., Neville, A.J. et al. Using a Sampling Strategy to Address Psychometric Challenges in Tutorial-Based Assessments. Adv Health Sci Educ Theory Pract 12, 19–33 (2007). https://doi.org/10.1007/s10459-005-2327-z
