
Closing the Circle: Use of Students’ Responses for Peer-Assessment Rubric Improvement

  • Yang Song
  • Zhewei Hu
  • Edward F. Gehringer
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9412)

Abstract

Educational peer assessment has proven to be a useful approach for providing students with timely feedback and allowing them to help and learn from each other. Reviewers are often expected to provide both formative feedback (textual comments telling the authors where and how to improve the artifact) and peer grading at the same time. Formative feedback is important for authors, because timely and insightful feedback can help them improve their artifacts; peer grading is important to the teaching staff, as it provides additional input to help determine final grades. In a large class or MOOC, where help from the teaching staff is limited, formative feedback from peers may be the best help that authors receive. To guarantee the quality of formative feedback and the reliability of peer grading, instructors should continually improve their peer-assessment rubrics. In this study we used three years of students' feedback in the Expertiza peer-assessment system to analyze the quality of 15 existing rubrics across 61 assignments. We identified a set of patterns in peer-grading reliability and comment length, and we derive a set of guidelines accordingly.
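The abstract does not specify how peer-grading reliability or comment length was measured, so the sketch below is a hypothetical illustration only, not the authors' method. It computes two simple per-criterion statistics from invented toy review data: the mean within-artifact standard deviation of peer grades (lower disagreement suggesting a more reliable rubric criterion) and the average comment length in words. All names and data are made up for illustration.

```python
# Hypothetical sketch: the paper's actual reliability metric is not given in
# this abstract. Here, a criterion is treated as "less reliable" when peer
# grades for the same artifact disagree more (higher standard deviation).

from statistics import mean, stdev

# Toy data: grades[criterion][artifact] = peer grades (0-5 scale) from
# the reviewers assigned to that artifact.
grades = {
    "code quality":  {"artifact1": [4, 5, 4], "artifact2": [2, 5, 1]},
    "test coverage": {"artifact1": [3, 3, 4], "artifact2": [4, 4, 3]},
}

# Toy data: comments[criterion] = textual comments across all reviews.
comments = {
    "code quality":  ["Refactor the long methods.", "Looks good."],
    "test coverage": ["Add edge-case tests for empty input."],
}

for criterion, per_artifact in grades.items():
    # Mean within-artifact standard deviation: how strongly reviewers of
    # the same artifact disagree on this criterion.
    disagreement = mean(stdev(g) for g in per_artifact.values() if len(g) > 1)
    # Average comment length in words for this criterion.
    avg_len = mean(len(c.split()) for c in comments[criterion])
    print(f"{criterion}: mean grade stdev = {disagreement:.2f}, "
          f"avg comment length = {avg_len:.1f} words")
```

Under this (assumed) reading, a rubric criterion with high grade disagreement and short comments would be a natural candidate for rewording or for splitting into more specific sub-criteria.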

Keywords

Educational peer-review · Review rubric · Peer assessment

Acknowledgments

This project is sponsored by the National Science Foundation under grant DUE-1432347.

References

  1. Kulik, J.A., Kulik, C.-L.C.: Timing of feedback and verbal learning. Rev. Educ. Res. 58(1), 79–97 (1988)
  2. Lundstrom, K., Baker, W.: To give is better than to receive: the benefits of peer review to the reviewer's own writing. J. Second Lang. Writ. 18(1), 30–43 (2009)
  3. Orsmond, P., Merry, S., Reiling, K.: The importance of marking criteria in the use of peer assessment. Assess. Eval. High. Educ. 21(3), 239–250 (1996)
  4. Reddy, Y.M., Andrade, H.: A review of rubric use in higher education. Assess. Eval. High. Educ. 35(4), 435–448 (2009)
  5. Stellmack, M.A., Konheim-Kalkstein, Y.L., Manor, J.E., Massey, A.R., Schmitz, J.A.P.: An assessment of reliability and validity of a rubric for grading APA-style introductions. Teach. Psychol. 36(2), 102–107 (2009)
  6. Allen, D., Tanner, K.: Rubrics: tools for making learning goals and evaluation criteria explicit for both teachers and learners. CBE-Life Sci. Educ. 5(3), 197–203 (2006)
  7. Stevens, D.D., Levi, A.: Introduction to Rubrics: An Assessment Tool to Save Grading Time, Convey Effective Feedback, and Promote Student Learning. Stylus Publishing, Sterling (2005)
  8. Kubincová, Z., Homola, M., Bejdová, V.: Motivational effect of peer review in blog-based activities. In: Wang, J.-F., Lau, R. (eds.) ICWL 2013. LNCS, vol. 8167, pp. 194–203. Springer, Heidelberg (2013)
  9. Gehringer, E., Ehresman, L., Conger, S.G., Wagle, P.: Reusable learning objects through peer review: the Expertiza approach
  10. Piech, C., Huang, J., Chen, Z., Do, C., Ng, A., Koller, D.: Tuned models of peer assessment in MOOCs. In: 6th International Conference on Educational Data Mining, Memphis, Tennessee, USA (2013)
  11. Song, Y., Hu, Z., Gehringer, E.: Pluggable reputation systems for peer review: a web-service approach. In: 45th IEEE Frontiers in Education Conference, FIE 2015, El Paso, Texas (2015)

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Department of Computer Science, North Carolina State University, Raleigh, USA
