Feedback Opportunities of Comparative Judgement: An Overview of Possible Features and Acceptance at Different User Levels

  • Roos Van Gasse
  • Anneleen Mortier
  • Maarten Goossens
  • Jan Vanhoof
  • Peter Van Petegem
  • Peter Vlerick
  • Sven De Maeyer
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 653)

Abstract

Given increasing criticism of common assessment practices (e.g. assessments using rubrics), the method of Comparative Judgement (CJ) is gaining ground because of the opportunities it offers for reliable and valid competence assessment. However, the emphasis in digital tools that use CJ has so far been on efficient CJ algorithms rather than on providing valuable feedback. The Digital Platform for the Assessment of Competences (D-PAC) project investigates the opportunities and constraints of CJ-based feedback and aims to examine its potential for learning. Reporting on design-based research, this paper describes the features of D-PAC feedback available at three user levels: the user being assessed (assessee), the user assessing others (assessor), and the user who coordinates the assessment (the Performance Assessment Manager, PAM). Interviews conducted with different users in diverse organizations show that both the characteristics of D-PAC feedback and its acceptance at each user level are promising for future use of D-PAC. Although further investigation is needed into the contribution of D-PAC feedback to user learning, these characteristics and this user acceptance are promising for extending the summative scope of CJ towards formative assessment and professionalization.
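For readers unfamiliar with how CJ algorithms turn pairwise judgements into scores, the sketch below gives a minimal, self-contained illustration of Bradley-Terry-style scaling, the statistical basis commonly used in CJ tools. It is not the D-PAC implementation; the item names, judgement data, and the small smoothing constant are invented for the example.

```python
# Minimal sketch (assumed example, not the D-PAC algorithm) of how a CJ tool can
# convert pairwise "A was judged better than B" decisions into a quality scale,
# using the Bradley-Terry model fitted with the classic MM (Zermelo) iteration.

from collections import defaultdict
import math

def bradley_terry(items, comparisons, iterations=200):
    """Estimate Bradley-Terry strengths from (winner, loser) pairs."""
    wins = defaultdict(int)         # number of wins per item
    pair_counts = defaultdict(int)  # number of comparisons per unordered pair
    for winner, loser in comparisons:
        wins[winner] += 1
        pair_counts[frozenset((winner, loser))] += 1

    strength = {item: 1.0 for item in items}
    for _ in range(iterations):
        new_strength = {}
        for i in items:
            denom = sum(
                pair_counts[frozenset((i, j))] / (strength[i] + strength[j])
                for j in items if j != i
            )
            # The tiny constant keeps items that never won from collapsing to zero.
            new_strength[i] = (wins[i] + 1e-3) / denom if denom > 0 else 1.0
        # Normalise so the geometric mean of the strengths is 1 (fixes the scale).
        log_mean = sum(math.log(v) for v in new_strength.values()) / len(items)
        strength = {k: v / math.exp(log_mean) for k, v in new_strength.items()}
    return strength

# Hypothetical usage: four essays, each judged against others several times.
essays = ["essay_A", "essay_B", "essay_C", "essay_D"]
judgements = [
    ("essay_A", "essay_B"), ("essay_A", "essay_C"), ("essay_B", "essay_C"),
    ("essay_A", "essay_D"), ("essay_C", "essay_D"), ("essay_B", "essay_D"),
    ("essay_B", "essay_A"), ("essay_D", "essay_C"),
]
for essay, score in sorted(bradley_terry(essays, judgements).items(),
                           key=lambda kv: -kv[1]):
    print(f"{essay}: {math.log(score):+.2f} (logit)")
```

The resulting logit scale is what a CJ platform can report back to assessees and assessors; D-PAC's own feedback features build on such a scale but are described in the paper itself.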


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Roos Van Gasse¹
  • Anneleen Mortier²
  • Maarten Goossens¹
  • Jan Vanhoof¹
  • Peter Van Petegem¹
  • Peter Vlerick²
  • Sven De Maeyer¹

  1. University of Antwerp, Antwerp, Belgium
  2. Ghent University, Ghent, Belgium
