Learners’ Perceived Level of Difficulty of a Computer-Adaptive Test: A Case Study

  • Mariana Lilley
  • Trevor Barker
  • Carol Britton
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3585)

Abstract

A computer-adaptive test (CAT) is a software application that uses Item Response Theory (IRT) to create a test tailored to each individual learner. The CAT prototype introduced here comprised a graphical user interface, a question database and an adaptive algorithm based on the Three-Parameter Logistic Model from IRT. A sample of 113 Computer Science undergraduate students took part in an assessment session within the Human-Computer Interaction subject domain using our CAT prototype. At the end of the session, participants were asked to rate the level of difficulty of the overall test from 1 (very easy) to 5 (very difficult). The perceived level of difficulty of the test and the CAT scores obtained by this group of learners were analysed using Spearman's rank-order correlation. Findings from this statistical analysis suggest that the CAT prototype was effective in tailoring the assessment to each individual learner's proficiency level.
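As a minimal illustration of the Three-Parameter Logistic Model underlying the adaptive algorithm, the standard 3PL item response function gives the probability that a learner with proficiency θ answers an item correctly. The sketch below is not the authors' implementation; the parameter values in the usage note are hypothetical.

```python
import math

def three_pl_probability(theta, a, b, c):
    """Probability of a correct response under the 3PL model.

    theta -- learner proficiency
    a     -- item discrimination
    b     -- item difficulty
    c     -- guessing parameter (lower asymptote)
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))
```

For example, an average learner (theta = 0) facing an item of average difficulty (b = 0) with a = 1 and c = 0.2 has a response probability of 0.2 + 0.8 × 0.5 = 0.6. An adaptive algorithm typically selects the next item so that this probability is close to 0.5 after accounting for guessing, which is why well-calibrated CATs tend to feel similarly difficult to learners at very different proficiency levels.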

Keywords

Item Response Theory · Proficiency Level · Educational Software · Assessment Session · Computer Adaptive Testing


Copyright information

© IFIP International Federation for Information Processing 2005

Authors and Affiliations

  • Mariana Lilley 1
  • Trevor Barker 1
  • Carol Britton 1
  1. University of Hertfordshire, School of Computer Science, Hatfield, United Kingdom
