Creating Content for Educational Testing Using a Workflow That Supports Automatic Item Generation

  • Mark J. Gierl
  • Donna Matovinovic
  • Hollis Lai
Conference paper
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 532)

Abstract

Automatic item generation is a rapidly evolving research area in which cognitive theories, computer technologies, and psychometric practices are combined to create models that produce test items. The purpose of our study is to describe the workflow developed through a strategic partnership between researchers at the University of Alberta and content specialists at the testing company ACT Inc. In this workflow, technical expertise in automated item and content generation was combined with item-development and subject-matter expertise to produce large numbers of high-quality, content-specific test items. The methods and processes described in our study will also be used to help transform item and passage development at ACT Inc. from what is currently a manual, labor-intensive, non-scalable process into a specification-driven, automated, highly scalable process.
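
To make the notion of an item model concrete, the sketch below shows one minimal, hypothetical form of template-based generation: a stem with placeholders, a set of permissible values for each placeholder, and rules for computing the key and the distractors. Every name and rule here is an illustrative assumption for exposition; it is not the authors' or ACT's actual system.

```python
from itertools import product

# Minimal sketch of template-based automatic item generation (hypothetical
# structure and names, not the system described in the paper). An "item
# model" is a stem with placeholders, plus the values each placeholder may
# take and rules for deriving the key and distractors.
ITEM_MODEL = {
    "stem": "A train travels at {speed} km/h for {hours} hours. "
            "How many kilometres does it cover?",
    "elements": {"speed": [60, 80, 120], "hours": [2, 3]},
    "key": lambda speed, hours: speed * hours,
    # Plausible-but-wrong options built from common misconceptions.
    "distractors": [
        lambda speed, hours: speed + hours,      # added instead of multiplied
        lambda speed, hours: speed,              # ignored the duration
        lambda speed, hours: speed * (hours + 1) # off-by-one on hours
    ],
}

def generate_items(model):
    """Yield (stem, key, distractors) for every combination of element values."""
    names = list(model["elements"])
    for values in product(*(model["elements"][n] for n in names)):
        bindings = dict(zip(names, values))
        stem = model["stem"].format(**bindings)
        key = model["key"](**bindings)
        options = [rule(**bindings) for rule in model["distractors"]]
        yield stem, key, options

for stem, key, options in generate_items(ITEM_MODEL):
    print(stem, "| key:", key, "| distractors:", options)
```

A single model of this kind yields one item per combination of element values (here, six), which is the scalability property the workflow exploits: subject-matter experts author and review one model rather than each item individually.
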

Keywords

Automatic item generation · Item development · Technology-enhanced assessment

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Faculty of Education, University of Alberta, Edmonton, Canada
  2. ACT Inc., Iowa City, USA
  3. School of Dentistry, University of Alberta, Edmonton, Canada
