Abstract
Classroom research has demonstrated that engaging students in the evaluation of previously submitted work, as an intentional priming exercise, can significantly influence student learning; we call this experience Learning by Evaluating (LbE). Expanding on current LbE research, we investigated the impact on student learning of intentionally varying the quality of the examples students evaluated using adaptive comparative judgement. University design students (N = 468) were randomly assigned to one of three treatment groups; each group evaluated previously collected student work as an LbE priming activity, but the quality of the work evaluated differed by group. In this three-group experimental design, one group evaluated only high-quality examples, the second evaluated only low-quality examples, and the third evaluated a set of mixed-quality examples of the assignment they were about to complete. Following these LbE priming evaluations, students completed the assigned work, and their projects were then evaluated to determine whether student performance differed by treatment condition. Additional qualitative analysis of student LbE rationales explored similarities and differences in students' cognitive judgments across intervention groups. No significant difference in achievement was found between the groups, but several differences in the groups' approaches to judgement were identified, and areas needing future investigation are highlighted.
Funding
This material is based upon work supported by the National Science Foundation under Grant 2101235.
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Disclaimer
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Bartholomew, S., Yauney, J., Mentzer, N. et al. Investigating the impacts of differentiated stimulus materials in a learning by evaluating activity. Int J Technol Des Educ (2024). https://doi.org/10.1007/s10798-023-09871-5