TestGardener: A Program for Optimal Scoring and Graphical Analysis

  • Juan Li
  • James O. Ramsay
  • Marie Wiberg
Conference paper
Part of the Springer Proceedings in Mathematics & Statistics book series (PROMS, volume 265)

Abstract

The aim of this paper is to demonstrate how to use TestGardener to analyze testing data with various item types and to explain some of its main displays. TestGardener is software designed to aid the development, evaluation, and use of multiple-choice examinations, psychological scales, questionnaires, and similar types of data. The software implements optimal scoring of binary and multi-option items and uses spline smoothing to obtain item characteristic curves (ICCs) that better fit real data. Using TestGardener requires no programming skill or formal statistical knowledge, which makes optimal scoring and item response theory more approachable for test analysts, test developers, researchers, and the general public.
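The core technique named in the abstract, spline smoothing of item characteristic curves, can be illustrated with a small sketch. The Python fragment below is not TestGardener's code and does not use its actual interface; it is a toy illustration under assumed choices (simulated responses, 20 score bins, smoothing level s=0.05): bin examinees by a provisional score, compute the proportion answering the item correctly in each bin, and smooth those proportions with a spline to obtain a nonparametric ICC estimate.

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    rng = np.random.default_rng(0)

    # Simulated data (assumption for illustration): 1000 examinees with
    # latent ability theta, one binary item following a 2PL-like curve.
    theta = rng.normal(size=1000)
    p_true = 1.0 / (1.0 + np.exp(-1.2 * (theta - 0.3)))
    responses = rng.binomial(1, p_true)

    # Bin examinees on a provisional score (here theta itself; in practice
    # a sum-score percentile would play this role) using quantile edges.
    n_bins = 20
    edges = np.quantile(theta, np.linspace(0, 1, n_bins + 1))
    bin_idx = np.digitize(theta, edges[1:-1])  # bin labels 0..n_bins-1

    # Observed proportion correct per bin, plotted against the bin's
    # mean provisional score.
    centers = np.array([theta[bin_idx == b].mean() for b in range(n_bins)])
    props = np.array([responses[bin_idx == b].mean() for b in range(n_bins)])

    # Spline-smooth the binned proportions into a smooth ICC estimate;
    # clipping keeps the curve inside the probability range [0, 1].
    icc = UnivariateSpline(centers, props, s=0.05)
    grid = np.linspace(-3, 3, 201)
    icc_hat = np.clip(icc(grid), 0.0, 1.0)

Because the curve is estimated by smoothing rather than by fitting a fixed parametric family, it can follow features of the data (such as non-monotone distractor behavior) that a logistic model would miss, which is the motivation the abstract gives for the spline-based approach.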

Keywords

Item response theory · Graphical analysis software · Optimal scoring · Spline smoothing

Acknowledgements

This research was funded by the Swedish Research Council (grant 2014-578).


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Mathematics and Statistics, McGill University, Montreal, Canada
  2. Department of Psychology, McGill University, Montreal, Canada
  3. Department of Statistics, USBE, Umeå University, Umeå, Sweden
