Psychometrika, Volume 81, Issue 1, pp 1–15

Playing with Data—Or How to Discourage Questionable Research Practices and Stimulate Researchers to Do Things Right


Abstract

Recent fraud cases in psychological and medical research have emphasized the need to pay attention to Questionable Research Practices (QRPs). Whether deliberate or not, QRPs usually have a detrimental effect on the quality and the credibility of research results. QRPs must be revealed, but preventing them is more important than detecting them. I suggest two policy measures that I expect to be effective in improving the quality of psychological research. First, research data and research materials should be made publicly available so that results can be verified. Second, researchers should more readily consider consulting a methodologist or a statistician. These two measures are simple, but they run counter to the common practices of keeping data to oneself and overestimating one’s methodological and statistical skills, practices that allow secrecy and errors to enter research.

Keywords

Data fraud · Hiring a methodologist/statistician · Public availability of data · Questionable research practices

Copyright information

© The Psychometric Society 2015

Authors and Affiliations

Klaas Sijtsma, Department of Methodology and Statistics, Tilburg School of Social and Behavioral Sciences, Tilburg University, Tilburg, The Netherlands