Advances in Health Sciences Education, Volume 15, Issue 5, pp 625–632

Likert scales, levels of measurement and the “laws” of statistics

Methodologist's Corner

Abstract

Reviewers of research reports frequently criticize the choice of statistical methods. While some of these criticisms are well-founded, the use of parametric methods such as analysis of variance, regression, and correlation is frequently faulted because: (a) the sample size is too small, (b) the data may not be normally distributed, or (c) the data are from Likert scales, which are ordinal, so parametric statistics cannot be used. In this paper, I dissect these arguments and show that many studies, dating back to the 1930s, consistently demonstrate that parametric statistics are robust with respect to violations of these assumptions. Hence, challenges like those above are unfounded, and parametric methods can be used without concern for “getting the wrong answer”.
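
The robustness claim can be illustrated with a small simulation. The sketch below is not part of the article; the response distributions, sample size, and seed are hypothetical choices. It generates two groups of 5-point Likert responses and compares a parametric t test with its non-parametric counterpart, the Mann-Whitney U test; with data like these, the two tests typically lead to the same conclusion.

```python
# Illustrative simulation (hypothetical numbers, not from the paper): compare a
# parametric t test with a non-parametric Mann-Whitney U test on simulated
# 5-point Likert responses.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2010)

# Two groups answering one 5-point Likert item; group B is shifted slightly upward.
p_a = [0.10, 0.25, 0.30, 0.25, 0.10]   # assumed response distribution, group A
p_b = [0.05, 0.15, 0.30, 0.30, 0.20]   # assumed response distribution, group B
n = 30                                  # deliberately small sample per group

group_a = rng.choice([1, 2, 3, 4, 5], size=n, p=p_a)
group_b = rng.choice([1, 2, 3, 4, 5], size=n, p=p_b)

# Parametric test (treats the ordinal scores as interval data).
t_stat, t_p = stats.ttest_ind(group_a, group_b)

# Rank-based counterpart (uses ordinal information only).
u_stat, u_p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

print(f"t test:        t = {t_stat:.2f}, p = {t_p:.3f}")
print(f"Mann-Whitney:  U = {u_stat:.1f}, p = {u_p:.3f}")
# The two p-values are usually close and support the same conclusion, which is
# the kind of robustness the abstract describes.
```

Re-running the sketch with different seeds, smaller samples, or more skewed response distributions is a simple way to probe how far this agreement extends.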

Keywords

Likert, Statistics, Robustness, ANOVA

References

  1. Bacchetti, P. (2002). Peer review of statistics in medical research: The other problem. British Medical Journal, 324, 1271–1273.
  2. Berk, R. A. (1979). Generalizability of behavioral observations: A clarification of interobserver agreement and interobserver reliability. American Journal of Mental Deficiency, 83, 460–472.
  3. Boneau, C. A. (1960). The effects of violations of assumptions underlying the t test. Psychological Bulletin, 57, 49–64.
  4. Carifio, J., & Perla, R. (2008). Resolving the 50-year debate around using and misusing Likert scales. Medical Education, 42, 1150–1152.
  5. Cohen, J. (1960). A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37–46.
  6. Cohen, J. (1968). Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin, 70, 213–220.
  7. Cronbach, L. J. (1957). The two disciplines of scientific psychology. American Psychologist, 12, 671–684.
  8. Dunlap, H. F. (1931). An empirical determination of means, standard deviations and correlation coefficients drawn from rectangular distributions. Annals of Mathematical Statistics, 2, 66–81.
  9. Fleiss, J. L., & Cohen, J. (1973). The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educational and Psychological Measurement, 33, 613–619.
  10. Fletcher, K. E., French, C. T., Corapi, K. M., Irwin, R. S., & Norman, G. R. (2010). Prospective measures provide more accurate assessments than retrospective measures of the minimal important difference in quality of life. Journal of Clinical Epidemiology (in press).
  11. Gaito, J. (1980). Measurement scales and statistics: Resurgence of an old misconception. Psychological Bulletin, 87, 564–567.
  12. Havlicek, L. L., & Peterson, N. L. (1976). Robustness of the Pearson correlation against violation of assumption. Perceptual and Motor Skills, 43, 1319–1334.
  13. Hunter, J. E., & Schmidt, F. L. (1990). Dichotomization of continuous variables: The implications for meta-analysis. Journal of Applied Psychology, 75, 334–349.
  14. Jamieson, S. (2004). Likert scales: How to (ab)use them. Medical Education, 38, 1217–1218.
  15. Kuzon, W. M., Urbanchek, M. G., & McCabe, S. (1996). The seven deadly sins of statistical analysis. Annals of Plastic Surgery, 37, 265–272.
  16. Pearson, E. S. (1931). The analysis of variance in the case of non-normal variation. Biometrika, 23, 114–133.
  17. Pearson, E. S. (1932a). The test of significance for the correlation coefficient. Journal of the American Statistical Association, 27, 128–134.
  18. Pearson, E. S. (1932b). The test of significance for the correlation coefficient: Some further results. Journal of the American Statistical Association, 27, 424–426.
  19. Suissa, S. (1991). Binary methods for continuous outcomes: A parametric alternative. Journal of Clinical Epidemiology, 44, 241–248.

Copyright information

© Springer Science+Business Media B.V. 2010

Authors and Affiliations

  1. McMaster University, Hamilton, Canada