Null Findings, Replications and Preregistered Studies in Business Ethics Research

Commentary

Notes

Compliance with Ethical Standards

Conflict of interest

Both authors declare no conflict of interest.

Ethical approval

This article does not contain any studies with human participants or animals performed by any of the authors.

References

  1. Banks, G., O’Boyle, E. H., Jr., Pollack, J. M., White, C. D., Batchelor, J. H., Whelpley, C. E., et al. (2016). Questions about questionable research practices in the field of management: A guest commentary. Journal of Management, 42(1), 5–20.
  2. Berger, J. O., & Sellke, T. (1987). Testing a point null hypothesis: The irreconcilability of p values and evidence. Journal of the American Statistical Association, 82(397), 112–122.
  3. Bergh, D., Sharp, B., & Li, M. (2017). Tests for identifying “red flags” in empirical findings: Demonstration and recommendations for authors, reviewers and editors. Academy of Management Learning and Education, 16(1), 110–124.
  4. Bettis, R. A. (2012). The search for asterisks: Compromised statistical tests and flawed theories. Strategic Management Journal, 33, 108–113.
  5. Bettis, R., Gambardella, A., Helfat, C., & Mitchell, W. (2014). Quantitative empirical analysis in strategic management. Strategic Management Journal, 35, 949–953.
  6. Byington, E., & Felps, W. (2017). Solutions to the credibility crisis in management science. Academy of Management Learning and Education, 16(1), 142–162.
  7. Center for Open Science. (2017). Registered Reports: Peer review before results are known to align scientific values and practices. Available online under: https://cos.io/rr/?_ga=1.103210176.1532854806.1489421591. Accessed 4 April 2018.
  8. Chambers, C. (2014). Registered reports: A step change in scientific publishing. Available online under: https://www.elsevier.com/reviewers-update/story/innovation-in-publishing/registered-reports-a-step-change-in-scientific-publishing. Accessed 4 April 2018.
  9. Community for Responsible Research in Business and Management. (2017). A vision of responsible research in business and management: Striving for useful and credible knowledge. Position paper published online under: http://rrbm.network/wp-content/uploads/2017/11/Position_-Paper.pdf. Accessed 4 April 2018.
  10. Cortina, J. M., & Folger, R. G. (1998). When is it acceptable to accept a null hypothesis: No way, Jose? Organizational Research Methods, 1, 334–350.
  11. Cortina, J. M., & Landis, R. S. (2011). The earth is not round (p = .00). Organizational Research Methods, 14, 332–349.
  12. Cumming, G. (2014). The new statistics: Why and how. Psychological Science, 25, 7–29.
  13. Dewey, J. (1920). Reconstruction in philosophy. New York: Holt Publishing.
  14. Dewey, J. (1938). Logic: The theory of inquiry. New York: Holt Publishing.
  15. Du Gay, P. (2015). Organization (theory) as a way of life. Journal of Cultural Economy, 8(4), 399–417.
  16. Fanelli, D. (2011). Negative results are disappearing from most disciplines and countries. Scientometrics, 90(3), 891–904.
  17. Ferguson, C. J., & Heene, M. (2012). A vast graveyard of undead theories: Publication bias and psychological science’s aversion to the null. Perspectives on Psychological Science, 7, 555–561.
  18. Fish, S. (1985). Consequences. Critical Inquiry, 11, 433–458.
  19. Fish, S. (2003). Truth but no consequences: Why philosophy doesn’t matter. Critical Inquiry, 29, 389–417.
  20. Franco, A., Malhotra, N., & Simonovits, G. (2014). Publication bias in the social sciences: Unlocking the file drawer. Science, 345, 1502–1505.
  21. Gigerenzer, G., & Marewski, J. N. (2015). Surrogate science: The idol of a universal method for scientific inference. Journal of Management, 41(2), 421–440.
  22. Greenwald, A. G. (1975). Consequences of prejudice against the null hypothesis. Psychological Bulletin, 82(1), 1–20.
  23. Harzing, A.-W. (2016). Why replication studies are essential: Learning from failure and success. Cross Cultural and Strategic Management, 23(4), 563–568.
  24. Head, M. L., Holman, L., Lanfear, R., Kahn, A. T., & Jennions, M. D. (2015). The extent and consequences of p-hacking in science. PLoS Biology, 13(3), e1002106.
  25. Hopewell, S., Loudon, K., Clarke, M. J., Oxman, A. D., & Dickersin, K. (2009). Publication bias in clinical trials due to statistical significance or direction of trial results. Cochrane Database of Systematic Reviews, 1, MR000006.
  26. Hubbard, R., & Armstrong, J. S. (1992). Are null results becoming an endangered species in marketing? Marketing Letters, 3(2), 127–136.
  27. Ioannidis, J. P. A. (2014). How to make more published research true. PLoS Medicine, 11(10), e1001747.
  28. Jasanoff, S. (Ed.). (2004). States of knowledge: The co-production of science and the social order. New York: Routledge.
  29. Jasanoff, S. (2009). The fifth branch: Science advisers as policymakers. Cambridge: Harvard University Press.
  30. Jasanoff, S. (2010). Testing time for climate science. Science, 328, 695–696.
  31. Jasanoff, S. (2014). A mirror for science. Public Understanding of Science, 23, 21–26.
  32. John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23(5), 524–532.
  33. Johnson, V. E. (2013). Revised standards for statistical evidence. Proceedings of the National Academy of Sciences, 110(48), 19313–19317.
  34. Kepes, S., Banks, G. C., McDaniel, M., & Whetzel, D. L. (2012). Publication bias in the organizational sciences. Organizational Research Methods, 15, 624–662.
  35. Knorr-Cetina, K. D. (2009). Epistemic cultures: How the sciences make knowledge. Cambridge: Harvard University Press.
  36. Knorr-Cetina, K. D. (2013). The manufacture of knowledge: An essay on the constructivist and contextual nature of science. New York: Elsevier.
  37. Kuhn, T. S. (2012). The structure of scientific revolutions. Chicago: University of Chicago Press.
  38. Leonelli, S., Rappert, B., & Davies, G. (2017). Data shadows: Knowledge, openness, and absence. Science, Technology and Human Values, 42(2), 191–202.
  39. Lynch, J. G., Jr., Bradlow, E. T., Huber, J. C., & Lehmann, D. R. (2015). Reflections on the replication corner: In praise of conceptual replications. International Journal of Research in Marketing, 32, 333–342.
  40. Morey, R. D., Rouder, J. N., Verhagen, J., & Wagenmakers, E. J. (2014). Why hypothesis tests are essential for psychological science: A comment on Cumming (2014). Psychological Science, 25, 1289–1290.
  41. Nuzzo, R. (2014). Statistical errors. Nature, 506(7487), 150–152.
  42. O’Boyle, E. H., Jr., Banks, G. C., & Gonzalez-Mule, E. (2017). The chrysalis effect: How ugly initial results metamorphosize into beautiful articles. Journal of Management, 43(2), 376–399.
  43. Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716.
  44. Peirce, C. S. (1923). Chance, love, and logic: Philosophical essays. London: Kegan Paul, Trench, Trübner & Co.
  45. Poovey, M. (1998). A history of the modern fact: Problems of knowledge in the sciences of wealth and society. Chicago: University of Chicago Press.
  46. Porter, T. M. (1986). The rise of statistical thinking, 1820–1900. Princeton: Princeton University Press.
  47. Porter, T. M. (1996). Trust in numbers: The pursuit of objectivity in science and public life. Princeton: Princeton University Press.
  48. Rothstein, H. R., Sutton, A. J., & Borenstein, M. (Eds.). (2006). Publication bias in meta-analysis: Prevention, assessment and adjustments. New York: Wiley.
  49. Schwab, A., & Starbuck, W. H. (2017). A call for openness in research reporting: How to turn covert practices into helpful tools. Academy of Management Learning and Education, 16(1), 125–141.
  50. Sellke, T., Bayarri, M. T., & Berger, J. O. (2001). Calibration of p values for testing precise null hypotheses. The American Statistician, 55(1), 62–71.
  51. Shapin, S. (1994). A social history of truth: Civility and science in seventeenth-century England. Chicago: University of Chicago Press.
  52. Shapin, S. (2009). The scientific life: A moral history of a late modern vocation. Chicago: University of Chicago Press.
  53. Shapin, S., & Schaffer, S. (1985). Leviathan and the air-pump: Hobbes, Boyle, and the experimental life. Princeton: Princeton University Press.
  54. Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359–1366.
  55. Starbuck, W. H. (2016). 60th anniversary essay: How journals could improve research practices in social sciences. Administrative Science Quarterly, 61(2), 165–183.
  56. Strathern, M. (2000). The tyranny of transparency. British Educational Research Journal, 26(3), 309–321.
  57. Tsoukas, H. (1997). The tyranny of light: The temptations and the paradoxes of the information society. Futures, 29(9), 827–843.
  58. Van Fraassen, B. C. (2008). The empirical stance. New Haven: Yale University Press.
  59. Wasserstein, R. L., & Lazar, N. A. (2016). The ASA’s statement on p-values: Context, process, and purpose. The American Statistician, 70(2), 129–133.
  60. Zyphur, M. J., & Oswald, F. L. (2015). Bayesian estimation and inference: A user’s guide. Journal of Management, 41, 390–420.

Copyright information

© Springer Science+Business Media B.V., part of Springer Nature 2018

Authors and Affiliations

  1. Rennes School of Business, Centre for Responsible Business, Rennes, France
  2. Department of Management and Marketing, Faculty of Business and Economics, University of Melbourne, Melbourne, Australia
