
European Journal of Psychology of Education, Volume 33, Issue 1, pp. 51–73

Reciprocal peer assessment as a learning tool for secondary school students in modeling-based learning

  • Olia E. Tsivitanidou
  • Costas P. Constantinou
  • Peter Labudde
  • Silke Rönnebeck
  • Mathias Ropohl

Abstract

The aim of this study was to investigate how reciprocal peer assessment in modeling-based learning can serve as a learning tool for secondary school learners in a physics course. The participants were 22 upper secondary school students from a gymnasium in Switzerland. Working in groups of two, they were asked to model additive and subtractive color mixing after completing hands-on experiments in the laboratory. They then submitted their models and anonymously assessed the model of another peer group. While enacting the peer-assessor role, the students used a four-point rating scale with pre-specified assessment criteria. After the peer assessment, the students, now as peer assessees, were allowed to revise their models and were asked to complete a short questionnaire reflecting on their revisions. Data were collected through (i) peer-feedback reports, (ii) students’ initial and revised models, (iii) post-instructional interviews with students, and (iv) students’ responses to open-ended questions. The data were analyzed qualitatively and then quantitatively. The results revealed that, after the peer assessment, students’ revised models reflected a higher level of attainment in their model-construction practices and a better conceptual understanding of additive and subtractive color mixing. These findings suggest that reciprocal peer assessment, in which students experience both the assessor and the assessee role, facilitates students’ learning in science. Based on these findings, further research directions are proposed with respect to novel approaches to peer assessment for developing students’ modeling competence in science learning.
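To make the subject matter of the students’ modeling task concrete, the following is a minimal illustrative sketch (an assumption for illustration only, not taken from the study and not the students’ models) of the standard idealized rules for additive and subtractive color mixing, with colors represented as RGB triples in [0, 1]: additive mixing of overlapping light sources sums channel intensities, whereas subtractive mixing of stacked filters or pigments multiplies transmittances of white light.

# Minimal illustrative sketch (assumption, not from the study): idealized
# additive and subtractive color mixing with RGB triples in [0, 1].

def mix_additive(*lights):
    """Additive mixing of overlapping light sources: channel-wise sum, clipped to 1."""
    return tuple(min(1.0, sum(light[i] for light in lights)) for i in range(3))

def mix_subtractive(*filters):
    """Subtractive mixing of stacked filters or pigments: channel-wise product
    of transmittances, starting from white light."""
    result = (1.0, 1.0, 1.0)
    for f in filters:
        result = tuple(result[i] * f[i] for i in range(3))
    return result

if __name__ == "__main__":
    red, green = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
    print(mix_additive(red, green))       # (1.0, 1.0, 0.0): red + green light -> yellow
    cyan, yellow = (0.0, 1.0, 1.0), (1.0, 1.0, 0.0)
    print(mix_subtractive(cyan, yellow))  # (0.0, 1.0, 0.0): cyan + yellow filters -> green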

Keywords

Formative assessment · Reciprocal peer assessment · Peer feedback · Modeling competence · Model-construction practices

Notes

Acknowledgements

This study was conducted in the context of the research project ASSIST-ME, which is funded by the European Union’s Seventh Framework Programme for Research and Development (grant agreement no. 321428).


Copyright information

© Instituto Superior de Psicologia Aplicada, Lisboa, Portugal and Springer Science+Business Media B.V. 2017

Authors and Affiliations

  • Olia E. Tsivitanidou (1)
  • Costas P. Constantinou (1)
  • Peter Labudde (2)
  • Silke Rönnebeck (3, 4)
  • Mathias Ropohl (3)

  1. Learning in Science Group, Department of Educational Sciences, University of Cyprus, Nicosia, Cyprus
  2. Centre for Science and Technology Education, School of Education, University of Applied Sciences and Arts Northwestern Switzerland, Basel, Switzerland
  3. Leibniz Institute for Science and Mathematics Education, Kiel, Germany
  4. Kiel University, Kiel, Germany
