A Web-Based E-Testing System Supporting Test Quality Improvement

  • Gennaro Costagliola
  • Filomena Ferrucci
  • Vittorio Fuccella
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4823)


In e-testing it is important to administer tests composed of good-quality question items. By the term “quality” we mean an item’s potential to discriminate effectively between strong and weak students and to match the difficulty level intended by the tutor. Since preparing items is a difficult and time-consuming task, good items should be re-used in future tests. Among poorly performing items, some should be discarded, while others can be modified and re-used. This paper presents a Web-based e-testing system that detects defective question items and, where possible, advises tutors on how to improve their quality. The system detects defective items by firing rules, which are evaluated by a fuzzy logic inference engine. The proposed system has been used in a course at the University of Salerno.
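The item-quality notions used in the abstract (difficulty, and discrimination between strong and weak students) are standard item-analysis statistics. The sketch below is illustrative only, not the authors’ implementation: function names and thresholds are hypothetical, and the crisp `flag_defective` check stands in for the fuzzy rules that the described system evaluates with an inference engine.

```python
def item_difficulty(responses):
    """Proportion of correct answers to one item (1 = very easy, 0 = very hard)."""
    return sum(responses) / len(responses)

def item_discrimination(responses, total_scores):
    """Upper-lower discrimination index: difference in correct-answer rate
    between the top 27% and bottom 27% of examinees ranked by total score."""
    n = len(responses)
    k = max(1, round(0.27 * n))
    ranked = sorted(range(n), key=lambda i: total_scores[i], reverse=True)
    upper = sum(responses[i] for i in ranked[:k]) / k
    lower = sum(responses[i] for i in ranked[-k:]) / k
    return upper - lower

def flag_defective(difficulty, discrimination):
    """Crisp stand-in for the paper's fuzzy rules (thresholds are hypothetical):
    flag an item that is too easy, too hard, or fails to separate strong
    from weak students."""
    return discrimination < 0.2 or difficulty < 0.1 or difficulty > 0.9

# Toy data: correctness (1/0) of one item across 10 examinees,
# plus the same examinees' total test scores.
resp = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
scores = [9, 8, 8, 7, 6, 5, 4, 3, 2, 1]

p = item_difficulty(resp)              # 0.5
d = item_discrimination(resp, scores)  # 2/3: correct answers cluster in the top group
print(p, d, flag_defective(p, d))
```

In the real system such crisp thresholds would instead be fuzzy membership functions, so an item near a boundary is flagged with a degree of confidence rather than a hard yes/no.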


e-Testing, Computer Aided Assessment (CAA), item, item quality, questions, eWorkbook, Item Response Theory (IRT), test analysis, online testing, difficulty, discrimination, multiple choice, distractor




  1. Woodford, K., Bancroft, P.: Multiple Choice Items Not Considered Harmful. In: 7th Australasian Conference on Computing Education, Newcastle, Australia, pp. 109–116 (2005)
  2. Hambleton, R.K., Swaminathan, H.: Item Response Theory: Principles and Applications. Kluwer Academic Publishers Group, Netherlands (1985)
  3. Zadeh, L.A.: Fuzzy Sets and Their Applications to Pattern Classification and Clustering. World Scientific Publishing Co. Inc., River Edge, NJ, USA (1977)
  4. Bardossy, A., Duckstein, L.: Fuzzy Rule-Based Modeling with Applications to Geophysical, Biological, and Engineering Systems. CRC Press, Boca Raton, USA (1995)
  5. Lloyd, S.: Least Squares Quantization in PCM. IEEE Transactions on Information Theory 28(2), 129–137 (1982)
  6. Costagliola, G., Ferrucci, F., Fuccella, V., Gioviale, F.: A Web-based Tool for Assessment and Self-Assessment. In: 2nd International Conference on Information Technology: Research and Education, pp. 131–135 (2004)
  7. Stage, C.: A Comparison Between Item Analysis Based on Item Response Theory and Classical Test Theory. A Study of the SweSAT Subtest READ (1999), http://www.umu.se/edmeas/publikationer/pdf/enr3098sec.pdf
  8. Massey: The Relationship Between the Popularity of Questions and Their Difficulty Level in Examinations Which Allow a Choice of Question. Occasional Publication of the Test Dev. and Res. Unit, Cambridge
  9. Civanlar, M.R., Trussel, H.J.: Constructing Membership Functions Using Statistical Data. Fuzzy Sets and Systems 18, 1–14 (1986)
  10. jFuzzyLogic: Open Source Fuzzy Logic (Java), http://jfuzzylogic.sourceforge.net/html/index.html
  11. FCL: Fuzzy Control Programming, Committee Draft CD 1.0 (Rel. 19 Jan 97), http://www.fuzzytech.com/binaries/ieccd1.pdf
  12.
  13. ECMAScript, Standard ECMA-262, ECMAScript Language Specification, http://www.ecma-international.org/publications/files/ECMA-ST/Ecma.262.pdf
  14.
  15.
  16.
  17. Hsieh, C.T., Shih, T.K., Chang, W.C., Ko, W.C.: Feedback and Analysis from Assessment Metadata in E-learning. In: 17th International Conference on Advanced Information Networking and Applications, Xi’an, China, pp. 155–158 (2003)
  18. Ho, R.G., Yen, Y.C.: Design and Evaluation of an XML-Based Platform-Independent Computerized Adaptive Testing System. IEEE Transactions on Education 48(2), 230–237 (2005)
  19. Chen, C.M., Duh, L.J., Liu, C.Y.: A Personalized Courseware Recommendation System Based on Fuzzy Item Response Theory. In: IEEE Int. Conf. on e-Technology, e-Commerce and e-Service, Taipei, Taiwan, pp. 305–308 (2004)
  20. Sun, K.T.: An Effective Item Selection Method for Educational Measurement. In: International Workshop on Advanced Learning Technologies, pp. 105–106 (2000)
  21. Hung, J.C., Lin, L.J., Chang, W.C., Shih, T.K., Hsu, H.H., Chang, H.B., Chang, H.P., Huang, K.H.: A Cognition Assessment Authoring System for E-Learning. In: 24th Int. Conf. on Distributed Computing Systems Workshops, pp. 262–267 (2004)

Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Gennaro Costagliola (1)
  • Filomena Ferrucci (1)
  • Vittorio Fuccella (1)
  1. Dipartimento di Matematica e Informatica, Università degli Studi di Salerno, Fisciano (SA), Italy
