Science and Engineering Ethics, Volume 24, Issue 1, pp 151–171

Questionable, Objectionable or Criminal? Public Opinion on Data Fraud and Selective Reporting in Science

  • Justin T. Pickett
  • Sean Patrick Roche
Original Paper


Abstract

Data fraud and selective reporting both present serious threats to the credibility of science. However, there remains considerable disagreement among scientists about how best to sanction data fraud, and about the ethicality of selective reporting. The public is arguably the largest stakeholder in the reproducibility of science; research is primarily paid for with public funds, and flawed science threatens the public’s welfare. Members of the public are able to make meaningful judgments about the morality of different behaviors using moral intuitions. Legal scholars emphasize that to maintain legitimacy, social control policies must be developed with some consideration given to the public’s moral intuitions. Although there is a large literature on popular attitudes toward science, there is no existing evidence about public opinion on data fraud or selective reporting. We conducted two studies—a survey experiment with a nationwide convenience sample (N = 821), and a follow-up survey with a representative sample of US adults (N = 964)—to explore community members’ judgments about the morality of data fraud and selective reporting in science. The findings show that community members make a moral distinction between data fraud and selective reporting, but overwhelmingly judge both behaviors to be immoral and deserving of punishment. Community members believe that scientists who commit data fraud or selective reporting should be fired and banned from receiving funding. For data fraud, most Americans support criminal penalties. Results from an ordered logistic regression analysis reveal few demographic and no significant partisan differences in punitiveness toward data fraud.


Keywords: Research misconduct · Fabrication and falsification · Questionable research practices · Researcher degrees of freedom · Publication bias · False positives

Supplementary material

Supplementary material 1 (DOC 112 kb)



Copyright information

© Springer Science+Business Media Dordrecht 2017

Authors and Affiliations

  1. School of Criminal Justice, University at Albany – State University of New York, Albany, USA
