Grading Open-Ended Questions in an Educational Setting, via Non-exclusive Peer Evaluation

  • Conference paper
  • In: State-of-the-Art and Future Directions of Smart Learning

Abstract

A framework for the (semi-)automated grading of answers to open-ended questions (“open answers”) is presented. Grading combines peer assessment by the students with the teacher’s evaluation of a subset of the answers. The web of data associated with the peers’ and teacher’s assessments is represented as a Bayesian network (BN). Each student is modeled by two variables: their knowledge (K) and the effectiveness of their evaluations of peer work (J). The grades of the answers are represented in the network as variables whose values are estimated probability distributions. Grades are updated by evidence propagation, triggered by each teacher’s or peer’s evaluation. The framework is implemented in the OpenAnswer Web system. We report on experiments and discuss the effectiveness of the approach.
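To make the propagation mechanism concrete, the following is a minimal sketch of such a network in Python, using the pgmpy library. The structure and all conditional probabilities here are illustrative assumptions, not the model from the paper: binary variables K_a (the answer author’s knowledge), J_b (the peer assessor’s evaluation effectiveness), G_a (the grade of the answer), and E_ba (the peer’s mark for that answer) stand in for the paper’s richer grade scales and CPTs.

```python
# Minimal OpenAnswer-style Bayesian network sketch (assumed structure and
# CPT values, chosen only to illustrate evidence propagation).
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# K_a -> G_a: the author's knowledge drives the quality of the answer.
# G_a, J_b -> E_ba: the peer's mark depends on the true grade and on
# how effective the peer is as an evaluator.
model = BayesianNetwork([("K_a", "G_a"), ("G_a", "E_ba"), ("J_b", "E_ba")])

cpd_k = TabularCPD("K_a", 2, [[0.5], [0.5]])   # prior: low/high knowledge
cpd_j = TabularCPD("J_b", 2, [[0.5], [0.5]])   # prior: poor/good judgment
cpd_g = TabularCPD("G_a", 2,
                   [[0.8, 0.2],                # P(G_a=low  | K_a)
                    [0.2, 0.8]],               # P(G_a=high | K_a)
                   evidence=["K_a"], evidence_card=[2])
# P(E_ba | G_a, J_b): a good judge mostly reports the true grade,
# a poor judge is close to random (illustrative numbers).
cpd_e = TabularCPD("E_ba", 2,
                   [[0.6, 0.9, 0.4, 0.1],      # P(E_ba=low  | G_a, J_b)
                    [0.4, 0.1, 0.6, 0.9]],     # P(E_ba=high | G_a, J_b)
                   evidence=["G_a", "J_b"], evidence_card=[2, 2])
model.add_cpds(cpd_k, cpd_j, cpd_g, cpd_e)
assert model.check_model()

infer = VariableElimination(model)

# A peer mark is soft evidence about the grade: observing E_ba updates
# the estimated distribution of G_a (and, indirectly, of K_a and J_b).
print(infer.query(["G_a"], evidence={"E_ba": 1}))

# When the teacher grades this answer directly, G_a becomes hard
# evidence, and propagation instead refines the belief about the
# peer's evaluation effectiveness J_b.
print(infer.query(["J_b"], evidence={"E_ba": 1, "G_a": 1}))
```

This mirrors the two update directions described in the abstract: peer evaluations refine the estimated grade of an answer, while the teacher’s evaluation of a subset of answers calibrates, through the same propagation, how much each peer’s judgment can be trusted.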


Author information

Correspondence to Maria De Marsico.

Copyright information

© 2016 Springer Science+Business Media Singapore

About this paper

Cite this paper

De Marsico, M., Sterbini, A., Temperini, M. (2016). Grading Open-Ended Questions in an Educational Setting, via Non-exclusive Peer Evaluation. In: Li, Y., et al. State-of-the-Art and Future Directions of Smart Learning. Lecture Notes in Educational Technology. Springer, Singapore. https://doi.org/10.1007/978-981-287-868-7_44

  • DOI: https://doi.org/10.1007/978-981-287-868-7_44

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-287-866-3

  • Online ISBN: 978-981-287-868-7

  • eBook Packages: Education, Education (R0)
