
Using adaptive comparative judgment for student formative feedback and learning during a middle school design project

  • Scott R. Bartholomew
  • Greg J. Strimel
  • Emily Yoshikawa

Abstract

While design-based pedagogies have been increasingly emphasized, the assessment of design projects remains difficult due to the large number of potentially “correct” solutions. Adaptive comparative judgment (ACJ), an approach in which assessors/judges work through a series of paired comparisons and select the better of two items, has demonstrated high levels of inter-rater reliability with design projects. Efforts toward using ACJ to assess design have largely centered on summative assessment. However, evidence suggests that ACJ may be a powerful tool for formative assessment and design learning when undertaken by students. Therefore, this study investigated middle school students who participated in ACJ at the midpoint and conclusion of a design project, both providing feedback to and receiving feedback from their peers through the ACJ process. Findings demonstrated promise for using ACJ, as a formative assessment and feedback tool, to improve student learning and achievement.
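The abstract describes ACJ as building a rank order from judges' repeated "better of two" decisions. The snippet below is a minimal, illustrative sketch of how such pairwise judgments can be converted into a rank order using a simple Bradley–Terry style estimate; it is not the authors' implementation (ACJ systems typically use adaptive pairing and Rasch modelling), and the function and data names are hypothetical.

```python
# Illustrative only: derive a rank order from pairwise "better of two" judgments.
# A simple Bradley-Terry style gradient fit; not the adaptive pairing or
# Rasch-based procedure used by dedicated ACJ tools.
import math
from collections import defaultdict

def rank_from_judgments(judgments, items, iterations=200, lr=0.1):
    """judgments: list of (winner, loser) pairs produced by judges' comparisons."""
    theta = {item: 0.0 for item in items}  # latent "quality" estimate per item
    for _ in range(iterations):
        grad = defaultdict(float)
        for winner, loser in judgments:
            # P(winner beats loser) under a logistic (Bradley-Terry) model
            p = 1.0 / (1.0 + math.exp(theta[loser] - theta[winner]))
            grad[winner] += (1.0 - p)   # winner's estimate rises by the "surprise"
            grad[loser] -= (1.0 - p)    # loser's estimate falls by the same amount
        for item in items:
            theta[item] += lr * grad[item]
    # Higher theta means the item was judged better more often; sort best-first
    return sorted(items, key=lambda i: theta[i], reverse=True)

# Example: three portfolios (A, B, C) compared several times by judges
pairs = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "B")]
print(rank_from_judgments(pairs, ["A", "B", "C"]))  # ['A', 'B', 'C']
```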

Keywords

Adaptive comparative judgment · Middle school · Design · Design assessment · Formative assessment

Copyright information

© Springer Science+Business Media B.V., part of Springer Nature 2018

Authors and Affiliations

  1. Purdue University, West Lafayette, USA
  2. Purdue University, West Lafayette, USA
  3. Technology, Leadership and Innovation, Purdue University, West Lafayette, USA
