Abstract
Four sets of analyses were conducted on the 1996 Course Experience Questionnaire (CEQ) data: conventional item analysis, exploratory factor analysis, and confirmatory factor analysis were used, and finally the Rasch measurement model was applied to the data set. This study was undertaken to compare conventional analytic techniques with techniques that explicitly set out to implement genuine measurement of perceived course quality. Although conventional analytic techniques are informative, confirmatory factor analysis and, in particular, the Rasch measurement model reveal much more about the data set and about the construct being measured. Meaningful estimates of individual students’ perceptions of course quality are available through the use of the Rasch measurement model. The study indicates that the perceived course quality construct is measured by a subset of the items included in the CEQ, and that seven items of the original instrument do not contribute to the measurement of that construct. The analyses of this data set indicate that different analytical approaches provide different levels of information about the construct. In practice, the analysis of data arising from the administration of instruments like the CEQ would be better undertaken using the Rasch measurement model.
Copyright information
© 2005 Springer
Cite this chapter
Curtis, D.D. (2005). Comparing Classical and Contemporary Analyses and Rasch Measurement. In: Maclean, R., et al. Applied Rasch Measurement: A Book of Exemplars. Education in the Asia-Pacific Region: Issues, Concerns and Prospects, vol 4. Springer, Dordrecht. https://doi.org/10.1007/1-4020-3076-2_10
DOI: https://doi.org/10.1007/1-4020-3076-2_10
Publisher Name: Springer, Dordrecht
Print ISBN: 978-1-4020-3072-7
Online ISBN: 978-1-4020-3076-5