Using Rasch Analysis to Inform Rating Scale Development

Short Paper/Note · Research in Higher Education

Abstract

The use of surveys, questionnaires, and rating scales to measure important outcomes in higher education is pervasive, but reliability and validity evidence is often based on problematic Classical Test Theory approaches. Rasch Analysis, based on Item Response Theory, provides a better alternative for examining the psychometric quality of rating scales and informing scale improvements. This paper outlines a six-step process for using Rasch Analysis to review the psychometric properties of a rating scale. The Partial Credit Model and the Andrich Rating Scale Model are described in terms of the psychometric information (i.e., reliability, validity, and item difficulty) and diagnostic indices they generate. The approach is then illustrated with authentic data from a university-wide student evaluation of teaching.
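For orientation, here is a brief sketch of the two models named in the abstract. The notation (θ for the person measure, δ for item difficulty, τ for category thresholds) follows standard Rasch convention rather than anything specific to this paper. Under the Andrich Rating Scale Model, the probability that person n responds in category k of an item i scored 0, …, m is

$$
P(X_{ni}=k) \;=\; \frac{\exp \sum_{j=0}^{k}\bigl[\theta_n - (\delta_i + \tau_j)\bigr]}{\sum_{l=0}^{m} \exp \sum_{j=0}^{l}\bigl[\theta_n - (\delta_i + \tau_j)\bigr]}, \qquad \tau_0 \equiv 0,
$$

where the threshold offsets τ_1, …, τ_m are shared across all items. The Partial Credit Model relaxes that constraint by replacing δ_i + τ_j with item-specific step difficulties δ_{ij}.

Below is a minimal numerical sketch of the Rating Scale Model, using only NumPy; the function name and the example parameter values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def rsm_category_probs(theta, delta, taus):
    """Category probabilities under the Andrich Rating Scale Model.

    theta : person measure in logits
    delta : item difficulty in logits
    taus  : shared threshold offsets tau_1..tau_m in logits
    Returns probabilities for response categories 0..m.
    """
    # Step terms theta - delta - tau_j; category 0 corresponds to an empty sum (0).
    steps = theta - delta - np.asarray(taus, dtype=float)
    numerators = np.exp(np.concatenate(([0.0], np.cumsum(steps))))
    return numerators / numerators.sum()

# Illustrative values: a person 0.5 logits above an item on a 5-point scale.
probs = rsm_category_probs(theta=0.5, delta=0.0, taus=[-1.5, -0.5, 0.5, 1.5])
print(probs.round(3))  # [0.018 0.132 0.359 0.359 0.132]; sums to 1
```

The same function doubles as a Partial Credit Model sketch if delta is folded into item-specific taus, since the PCM simply lets the thresholds vary by item.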



Author information


Correspondence to Carol Van Zile-Tamsen.


About this article


Cite this article

Van Zile-Tamsen, C. Using Rasch Analysis to Inform Rating Scale Development. Res High Educ 58, 922–933 (2017). https://doi.org/10.1007/s11162-017-9448-0
