Abstract
This study systematically reviewed 70 empirical studies on technology-supported peer assessment published in seven selected journals from 2007 to 2016. Several dimensions of these studies were investigated, including the adopted technologies, learning environments, application domains, peer-assessment modes, and research issues. It was found that, over the 10 years, the adopted technologies changed little, remaining mostly traditional computers. In terms of learning environments, in the first 5 years most activities were conducted online after class, while in the second 5 years more activities were conducted in the classroom during school hours. Moreover, several researchers have come to regard peer assessment as a frequently adopted teaching strategy and have tried to integrate other learning strategies into peer-assessment activities to strengthen their effectiveness. Meanwhile, little research engaged students in developing peer-assessment rubrics; that is, most studies employed rubrics developed by teachers. Among the research issues, developing students' higher-order thinking received the most attention. For future studies, it is suggested that researchers explore the value and effects of adopting emerging technologies (e.g., mobile devices) in peer assessment, as well as engaging students in developing peer-assessment rubrics, which might enable them to deeply experience the tacit knowledge underlying the standard rubrics provided by the teacher.
Acknowledgements
This study is supported in part by the Ministry of Science and Technology of the Republic of China under contract numbers MOST-105-2511-S-011-008-MY3 and MOST-106-2511-S-011-005-MY3.
Cite this article
Fu, QK., Lin, CJ. & Hwang, GJ. Research trends and applications of technology-supported peer assessment: a review of selected journal publications from 2007 to 2016. J. Comput. Educ. 6, 191–213 (2019). https://doi.org/10.1007/s40692-019-00131-x