Abstract
In a lead article published in the Journal of Marketing Research in 2007, Bergkvist and Rossiter (Journal of Marketing Research 44:175–84, 2007) recommend that "for the many constructs in marketing that consist of a concrete singular object and a concrete attribute, such as AAd or ABrand, single-item measures should be used" (p. 175). This conclusion is based on empirical analyses correlating single-item and multiple-item scales measuring attitudes toward advertisements and the advertised brands, collected simultaneously in a single survey instrument. Finding no statistically significant differences between the correlations obtained with the single and multiple items, the authors conclude that there is no loss in predictive validity with the use of single items, which is the basis for their recommendation. Obviously, their recommendation produces substantial savings in data-gathering costs. Consequently, their article has been highly cited (over 600 citations as of September 2013) by authors justifying their use of single-item measures. In this note, I revisit well-known concepts of psychometric theory to demonstrate that this practice is ill-advised. First, I argue that repeated measures are necessary not only to improve the validity of some measurement instruments but also, more importantly, to make it possible to assess and correct measurement instruments for random (non-systematic) measurement errors. Second, I argue that rather than testing for predictive validity, the authors actually tested for concurrent validity, using a common survey instrument, thereby confounding their results with spurious correlations due to common-method biases. Third, I conduct a true predictive validity test using two attitudinal scales and consumption behavior as a predictive criterion to show that, once corrected for measurement errors, multiple-item scales consistently outperform their single-item equivalents.
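The correction for measurement error mentioned above can be illustrated with standard psychometric formulas: Cronbach's alpha estimates the reliability of a multiple-item scale, and Spearman's correction for attenuation divides an observed correlation by the square root of the product of the two measures' reliabilities. The sketch below is illustrative only (synthetic data, hypothetical variable names), not the analysis reported in the article:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def disattenuated_r(x, y, rel_x, rel_y=1.0):
    """Spearman's correction: r_xy / sqrt(rel_x * rel_y)."""
    r = np.corrcoef(x, y)[0, 1]
    return r / np.sqrt(rel_x * rel_y)

# Toy data: a latent attitude drives a 3-item scale and a behavioral criterion.
rng = np.random.default_rng(0)
true_attitude = rng.normal(size=200)
items = true_attitude[:, None] + rng.normal(scale=1.0, size=(200, 3))
behavior = 0.5 * true_attitude + rng.normal(scale=1.0, size=200)

alpha = cronbach_alpha(items)          # reliability of the summed scale
scale_score = items.mean(axis=1)
r_raw = np.corrcoef(scale_score, behavior)[0, 1]
r_corrected = disattenuated_r(scale_score, behavior, alpha)
```

Because a single item provides no internal replication, its reliability cannot be estimated this way, which is the point of the "measure twice" argument: with only one item, alpha is undefined and the observed correlation cannot be corrected for attenuation.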
Notes
All concepts discussed here are in the common domain of classic measurement theory, normally taught in the first measurement course in the social sciences. A classic reference on this topic is Nunnally and Bernstein (1994), Psychometric Theory, New York: McGraw-Hill, or any of its newer editions.
References
Bergkvist, L., & Rossiter, J. R. (2007). The predictive validity of multiple-item versus single-item measures of the same constructs. Journal of Marketing Research, 44(May), 175–184.
Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests. Psychological Bulletin, 52(4), 281–302.
Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). New York: McGraw-Hill.
Perdue, B. C., & Summers, J. O. (1986). Checking the success of manipulations in marketing experiments. Journal of Marketing Research, 23(4), 317–326.
Podsakoff, P. M., MacKenzie, S. B., Lee, J.-Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: a critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879–903.
Simmons, C. J., Bickart, B., & Lynch, J. G., Jr. (1993). Capturing and creating public opinion in survey research. Journal of Consumer Research, 20(September), 316–329.
Zhao, X., Lynch, J. G., & Chen, Q. (2010). Reconsidering Baron and Kenny: myths and truths about mediation analysis. Journal of Consumer Research, 37(2), 197–206.
Cite this article
Kamakura, W.A. Measure twice and cut once: the carpenter’s rule still applies. Mark Lett 26, 237–243 (2015). https://doi.org/10.1007/s11002-014-9298-x