Evaluating AI-Generated Questions: A Mixed-Methods Analysis Using Question Data and Student Perceptions

  • Conference paper
Artificial Intelligence in Education (AIED 2022)

Abstract

Advances in artificial intelligence (AI) have made it possible to generate courseware and formative practice questions from textbooks. Courseware applies a learn-by-doing approach by integrating formative practice with the text, a method shown to increase learning gains for students. By using AI for automatic question generation, the learn-by-doing method of courseware can be made available for nearly any textbook subject. Because the generated questions are a primary learning feature in this environment, it is necessary to ensure they function as well for students as those written by humans. In this paper, we use student data from AI-generated Psychology courseware deployed in an online course at the University of Central Florida. The courseware contains both generated and human-authored questions, allowing a unique comparison of question engagement, difficulty, and persistence using student data from a natural learning context. The evaluation of quality metrics is critical in automatic question generation research, yet on its own does not fully capture students' experience. Student perception is a meaningful qualitative metric, as perceptions can inform student behavior and decisions. Therefore, student perceptions of the courseware and its questions were also solicited via survey. Combining question data analysis with student perception feedback gives a more comprehensive evaluation of the quality of AI-generated questions used in a natural learning context.



Author information

Correspondence to Rachel Van Campenhout.


Copyright information

© 2022 Springer Nature Switzerland AG

About this paper

Cite this paper

Van Campenhout, R., Hubertz, M., Johnson, B.G. (2022). Evaluating AI-Generated Questions: A Mixed-Methods Analysis Using Question Data and Student Perceptions. In: Rodrigo, M.M., Matsuda, N., Cristea, A.I., Dimitrova, V. (eds) Artificial Intelligence in Education. AIED 2022. Lecture Notes in Computer Science, vol 13355. Springer, Cham. https://doi.org/10.1007/978-3-031-11644-5_28

  • DOI: https://doi.org/10.1007/978-3-031-11644-5_28

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-11643-8

  • Online ISBN: 978-3-031-11644-5

  • eBook Packages: Computer Science, Computer Science (R0)
