HARKing: How Badly Can Cherry-Picking and Question Trolling Produce Bias in Published Results?
The practice of hypothesizing after results are known (HARKing) has been identified as a potential threat to the credibility of research results. We conducted simulations using input values based on comprehensive meta-analyses and reviews in applied psychology and management (e.g., strategic management studies) to determine the extent to which two forms of HARKing behaviors might plausibly bias study outcomes and to examine the determinants of the size of this effect. When HARKing involves cherry-picking, which consists of searching through data involving alternative measures or samples to find the results that offer the strongest possible support for a particular hypothesis or research question, HARKing has only a small effect on estimates of the population effect size. When HARKing involves question trolling, which consists of searching through data involving several different constructs, measures of those constructs, interventions, or relationships to find seemingly notable results worth writing about, HARKing produces substantial upward bias, particularly when it is prevalent and there are many effects from which to choose. Results identify the precise circumstances under which different forms of HARKing behaviors are more or less likely to have a substantial impact on a study’s substantive conclusions and the field’s cumulative knowledge. We offer suggestions for authors, consumers of research, and reviewers and editors on how to understand, minimize, detect, and deter detrimental forms of HARKing in future research.
Keywords: HARKing · Simulation · Publication bias · Data snooping
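The core selection mechanism behind both forms of HARKing can be illustrated with a small Monte Carlo sketch. This is a hypothetical illustration, not the authors' actual simulation: the true effect size (r = .10), sample size, number of candidate effects, and the correlation among alternative measures are all assumed values chosen for demonstration. Question trolling is modeled as picking the largest of many independent observed effects; cherry-picking is modeled as picking the best of a few highly correlated alternative measures of the same relationship, which leaves less room for selection to inflate the estimate.

```python
import random
import statistics

def best_of(true_r=0.10, n=100, k=20, rho=0.0, reps=5000):
    """Mean reported effect when an author examines k observed correlations
    and writes up only the largest one.

    Each observed r is the true effect plus sampling error; the sampling SD
    of r is approximately (1 - r^2) / sqrt(n - 1). rho is the correlation
    among the k estimates (high for alternative measures of one construct,
    near zero for unrelated effects). All parameter values are illustrative.
    """
    se = (1 - true_r**2) / ((n - 1) ** 0.5)
    picked = []
    for _ in range(reps):
        shared = random.gauss(0, 1)  # sampling error common to all k estimates
        draws = [
            true_r + se * (rho**0.5 * shared + (1 - rho) ** 0.5 * random.gauss(0, 1))
            for _ in range(k)
        ]
        picked.append(max(draws))  # report only the strongest result
    return statistics.mean(picked)

random.seed(1)
honest = best_of(k=1)              # no selection: recovers ~.10
cherry = best_of(k=5, rho=0.8)     # best of 5 correlated alternative measures
trolling = best_of(k=20, rho=0.0)  # best of 20 independent candidate effects
```

Under these assumed inputs, the honest estimate stays near the true .10, cherry-picking among correlated alternatives inflates it modestly, and question trolling across many independent effects inflates it substantially — mirroring the pattern the abstract describes, with the inflation growing as the number of effects to choose from increases.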