Do Not Cross Me: Optimizing the Use of Cross-Sectional Designs
The cross-sectional research design, especially when paired with self-report surveys, is held in low esteem despite its widespread use. It is generally accepted that the longitudinal design offers considerable advantages and should be preferred because of its ability to shed light on causal connections. In this paper, I argue that the longitudinal design's ability to reflect causality has been overstated, and that in most cases in which it is used it offers limited advantages over the cross-sectional design. Drawing on the nature of causal inference from a philosophy of science perspective, I illustrate how cross-sectional designs can provide evidence for relationships among variables and can rule out many potential alternative explanations for those relationships. Strategies for optimizing the use of cross-sectional designs are noted, including the inclusion of control variables to rule out spurious relationships, the addition of alternative sources of data, and the incorporation of experimental methods. Best-practice advice is offered for the use of both cross-sectional and longitudinal designs, as well as for authors writing, and reviewers evaluating, papers that report results of cross-sectional studies.
Keywords: Causal inference · Causality · Cross-sectional design · Longitudinal design · Method variance · Philosophy of science · Research design · Research methodology