Abstract
Many management researchers report the results of empirical studies in terms of difference scores. Yet difference scores are known to create problems of low reliability, spurious correlations, and restricted variance. Reframing the research model can substantially reduce these problems. This note empirically demonstrates what can go wrong when difference scores are used as dependent variables and illustrates an alternative method of data analysis.
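The abstract does not reproduce the note's analysis, but the kind of problem it describes is easy to simulate. The sketch below (Python, with made-up simulation parameters) contrasts a raw difference score, which is spuriously correlated with the initial measure, against residualized change, one commonly recommended alternative in the methodological literature; it is an illustration of the general issue, not the authors' own method.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Simulate two correlated true scores, then add measurement error
# to obtain the observed pre- and post-measures.
true_pre = rng.normal(0, 1, n)
true_post = 0.7 * true_pre + rng.normal(0, np.sqrt(1 - 0.7**2), n)
pre = true_pre + rng.normal(0, 0.5, n)    # observed pre-measure
post = true_post + rng.normal(0, 0.5, n)  # observed post-measure

# Problematic approach: use the raw difference score as the dependent variable.
diff = post - pre

# Alternative: residualized change -- regress post on pre and keep the residuals.
slope, intercept = np.polyfit(pre, post, 1)
residual_change = post - (intercept + slope * pre)

# The raw difference correlates strongly (and spuriously) with the
# pre-measure; the residualized score is uncorrelated with it by construction.
print(np.corrcoef(pre, diff)[0, 1])             # strongly negative
print(np.corrcoef(pre, residual_change)[0, 1])  # approximately zero
```

Because measurement error in the pre-measure enters the difference with a negative sign, the raw difference is negatively correlated with the pre-measure even when no true relationship exists; ordinary least-squares residuals, by contrast, are orthogonal to the regressor by construction.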
Cite this article
Balloun, J.L., Klein, G. A difference which makes a difference. Quality & Quantity 31, 317–324 (1997). https://doi.org/10.1023/A:1004254515556