This paper explores the gray area of questionable research practices (QRPs) between responsible conduct of research and severe research misconduct in the form of fabrication, falsification, and plagiarism (Steneck in SEE 12(1): 53–74, 2006). Until now, we have had very little knowledge of disciplinary similarities and differences in QRPs. This paper is the first systematic account of such differences and similarities. It reports the findings of a comprehensive study comprising 22 focus groups on practices and perceptions of QRPs across the main areas of research. The paper supports the relevance of the idea of epistemic cultures (Knorr Cetina in: Epistemic cultures: how the sciences make knowledge, Harvard University Press, Cambridge, 1999), also when it comes to QRPs. It shows which QRPs researchers from different areas of research (humanities, social sciences, medical sciences, natural sciences, and technical sciences) report as the most severe and prevalent within their fields. Furthermore, it shows where in the research process these self-reported QRPs occur, using a five-phase analytical model of the research process (idea generation, research design, data collection, data analysis, and scientific publication and reporting). The paper shows that QRPs are closely connected to the distinct research practices within the different areas of research. Many QRPs can therefore only be found within one area of research, and QRPs that cut across main areas often cover relatively different practices. In a few cases, QRPs in one area are considered good research practice in another.
This focus group study is part of the PRINT project (Practices, Perceptions, and Patterns of Research Integrity, preregistered at the OSF: https://osf.io/rf4bn/). The PRINT project also consists of a wide-ranging survey study and a number of more detailed studies on the categorizations, causes, and mechanisms of QRPs. Results from these studies will be presented in separate publications, while the findings on QRP perceptions and practices from the focus group study are the focal point of this article.
Two to three disciplines within each of the five main areas were represented in each focus group. The selection of disciplines was based on a list of disciplinary fields within the main areas represented at the Danish universities compiled by the ministry and revised by DKUNI in 2017.
The numbering of the focus groups goes beyond 22 because some of the planned focus groups had to be cancelled due to too few participants. These groups were reorganized, rescheduled, and assigned a new number.
Prevalence was coded according to the five categories: not prevalent, somewhat prevalent, prevalent, rather prevalent, very prevalent. Severity was coded according to the five categories: not severe, somewhat severe, severe, rather severe, very severe.
ALLEA (2017). The European code of conduct for research integrity. Revised Edition. https://allea.org/wp-content/uploads/2017/05/ALLEA-European-Code-of-Conduct-for-Research-Integrity-2017.pdf
Anderson, M. S., Martinson, B. C., & de Vries, R. (2007). Normative dissonance in science: Results from a national survey of U.S. scientists. Journal of Empirical Research on Human Research Ethics, 2(4), 3–14
Anderson, M. S., Ronning, E. A., De Vries, R., & Martinson, B. C. (2010). Extending the Mertonian norms: Scientists’ subscription to norms of research. Journal of Higher Education, 81(3), 366–393
Anderson, M. S., Shaw, M. A., Steneck, N. H., Konkle, E., & Kamata, T. (2013). Research integrity and misconduct in the academic profession. In M. Paulsen (Ed.), Higher education: Handbook of theory and research. Springer.
Banks, G. C., O’Boyle, E. H., Pollack, J. F., White, C. D., Batchelor, J. H., Whelpley, C. E., Abston, K. A., Bennett, A. A., & Adkins, C. L. (2016). Questions about questionable research practices in the field of management: A guest commentary. Journal of Management, 42(1), 5–20. https://doi.org/10.1177/0149206315619011
Bo, I. G. (2005). At sætte tavsheder i tale: fortolkning og forståelse i det kvalitative forskningsinterview. In M. H. Jacobsen, S. Kristiansen, & A. Prieur (Eds.), Liv, fortælling og tekst: Strejftog i kvalitativ sociologi. Aalborg University Press.
Bouter, L. M., et al. (2016). Ranking major and minor research misbehaviors: Results from a survey among participants of four World Conferences on Research Integrity. Research Integrity and Peer Review. https://doi.org/10.1186/s41073-016-0024-5
Bouter, L. (2020). What Research institutions can do to foster research integrity. Science and Engineering Ethics, 26, 2363–2369. https://doi.org/10.1007/s11948-020-00178-5
Butler, N., Delaney, H., & Spoelstra, S. (2017). The gray zone: Questionable research practices in the business school. Academy of Management Learning & Education, 16(1), 94–109
Dal-Ré, R., Bouter, L. M., Cuijpers, P., Gluud, C., & Holm, S. (2020). Should research misconduct be criminalized? Research Ethics, 16(1–2), 1–12
Davies, S. R. (2019). An ethics of the system: Talking to scientists about research integrity. Science and Engineering Ethics, 25, 1235–1253. https://doi.org/10.1007/s11948-018-0064-y
Fanelli, D. (2009). How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE, 4(5), e5738
Fleck, L. (1979). Genesis and development of a scientific fact. T. J. Trenn & R. K. Merton (Eds.). “Foreword” by T. S. Kuhn. Chicago University Press
Godecharle, S., Fieuws, S., Nemery, B., et al. (2018). Scientists still behaving badly? A survey within industry and universities. Science and Engineering Ethics, 24, 1697–1717. https://doi.org/10.1007/s11948-017-9957-4
Godecharle, S., Nemery, B., & Dierickx, K. (2014). Heterogeneity in European research integrity guidance: Relying on values or norms? Journal of Empirical Research on Human Research Ethics, 9(3), 79–90. https://doi.org/10.1177/1556264614540594
Halkier, B. (2016). Fokusgrupper. (3rd ed.). Samfundslitteratur.
Hall, J., & Martin, B. R. (2019). Towards a taxonomy of research misconduct: The case of business school research. Research Policy, 48(2), 414–427
Haven, T., Tijdink, J., Pasman, H. R., et al. (2019). Researchers’ perceptions of research misbehaviours: A mixed methods study among academic researchers in Amsterdam. Research Integrity and Peer Review. https://doi.org/10.1186/s41073-019-0081-7
Henriksen, D. (2016). The rise in co-authorship in the social sciences (1980–2013). Scientometrics, 107, 455–476. https://doi.org/10.1007/s11192-016-1849-x
Hofmann, B., & Holm, S. (2019). Research integrity: Environment, experience or ethos? Research Ethics, 15(3–4), 1–13. https://doi.org/10.1177/1747016119880844
Horbach, S. P. J. M., & Halffman, W. (2019). The extent and causes of academic text recycling or ‘self-plagiarism.’ Research Policy, 48(2), 492–502
Jensen, K. K. (2017). General introduction to responsible conduct of research. In: K. Klint Jensen, L. Whiteley, & P. Sandøe (Eds.), RCR: A Danish textbook for courses in Responsible Conduct of Research (pp. 12–24). Frederiksberg: Department of Food and Resource Economics, University of Copenhagen
John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23(5), 524–532
Knorr Cetina, K., & Reichmann, W. (2015). Epistemic cultures. In International encyclopedia of the social & behavioral sciences (2nd ed.) (pp. 873–880). Elsevier.
Knorr Cetina, K. (1999). Epistemic cultures: How the sciences make knowledge. Harvard University Press.
Lamont, M. (2009). How professors think: Inside the curious world of academic judgment. Harvard University Press.
Martinson, B., Anderson, M., & de Vries, R. (2005). Scientists behaving badly. Nature, 435, 737–738
Mejlgaard, N., Bouter, L. M., Gaskell, G., Kavouras, P., Allum, A., Bendtsen, A.-K., Charitidis, C. A., Claesen, N., Dierickx, K., Domaradzka, A., Reyes Elizondo, A. E., Foeger, N., Hiney, M., Kaltenbrunner, W., Labib, K., Marušić, A., Sørensen, M. P., Ravn, T., Ščepanović, R., … Veltri, G. A. (2020). Research integrity: nine ways to move from talk to walk. Nature, 586, 358–360
Moher, D., Bouter, L., Kleinert, S., Glasziou, P., Sham, M. H., Barbour, V., et al. (2020). The Hong Kong principles for assessing researchers: Fostering research integrity. PLoS Biology, 18(7), e3000737. https://doi.org/10.1371/journal.pbio.3000737
Montreal Statement on Research Integrity in Cross-Boundary Research Collaborations. (2013). Developed as part of the 3rd World Conference on Research Integrity. https://wcrif.org/montreal-statement/file
Morgan, D. L. (1997). Focus groups as qualitative research. Sage Publications.
National Academies of Sciences, Engineering, and Medicine. (2017). Fostering integrity in research. National Academies Press.
National Academy of Sciences. (1992). Responsible science: Ensuring the integrity of the research process. National Academy Press.
Palinkas, L. A., Horwitz, S. M., Green, C. A., Wisdom, J. P., Duan, N., & Hoagwood, K. (2015). Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Administration and Policy in Mental Health, 42(5), 533–544. https://doi.org/10.1007/s10488-013-0528-y
Penders, B., Vos, R., & Horstman, K. (2009). A question of style: Method, integrity and the meaning of proper science. Endeavour, 33(3), 93–98
Pickering, A. (1992). From science as knowledge to science as practice. In A. Pickering (Ed.), Science as practice and culture. (pp. 1–28). University of Chicago Press.
Pickersgill, M. (2012). The co-production of science, ethics, and emotion. Science, Technology, & Human Values, 37(6), 579–603
Resnik, D. B., & Shamoo, A. E. (2017). Reproducibility and research integrity. Accountability in Research, 24(2), 116–123. https://doi.org/10.1080/08989621.2016.1257387
Schwartz-Shea, P., & Yanow, D. (2012). Interpretive research design: concepts and processes. Routledge.
Shaw, D. (2016). The trojan citation and the “accidental” plagiarist. Bioethical Inquiry, 13, 7–9. https://doi.org/10.1007/s11673-015-9696-7
Shaw, D. (2019). The quest for clarity in research integrity: A conceptual schema. Science and Engineering Ethics, 25, 1085–1093. https://doi.org/10.1007/s11948-018-0052-2
Singapore Statement on Research Integrity. (2010). World Conference on Research Integrity (Drafting Committee: N. Steneck, T. Mayer, M. Anderson et al.). 2nd and 3rd World Conference on Research Integrity. https://wcrif.org/statement
Steneck, N. H. (2006). Fostering integrity in research: Definitions, current knowledge, and future directions. Science and Engineering Ethics, 12(1), 53–74
Tijdink, J. K., Verbeke, R., & Smulders, Y. M. (2014). Publication pressure and scientific misconduct in medical scientists. Journal of Empirical Research on Human Research Ethics, 9(5), 64–71. https://doi.org/10.1177/1556264614552421
The authors would like to thank the PRINT team, especially Jesper W. Schneider, who as PI of the PRINT project invited us to conduct the focus group study. Jesper also kindly commented on an earlier version of this paper. Together with Jesper, Niels Mejlgaard and Kaare Aagaard also deserve a special thanks for many inspirational discussions on the design and methodology of this study. Jesper, Niels, and Kaare also performed the facet analysis of Bouter et al.’s (2016) list of 60 QRPs that led to the eight prewritten cards/QRPs used in the focus group interviews. We would also like to thank Serge Horbach for his very inspirational comments to a previous version of the paper. We would further like to thank Alexander Kladakis, Asger Dalsgaard Pedersen, and Pernille Bak Pedersen for their invaluable help with organizing the interviews, and Anna-Kathrine Bendtsen, Signe Nygaard, Trine Byg, Astrid Marie Kierkgaard-Schmidt, Anders Møller Jørgensen, and Massimo Graae Losinno for transcribing the interviews. Finally, we would like to thank the 105 interviewees, who took part in this study: Thank you for sharing your experiences and understanding of QRPs with us!
This work has been supported by the PRINT project (Practices, Perceptions, and Patterns of Research Integrity) funded by the Danish Agency for Higher Education and Science (Ministry of Higher Education and Science) under Grant No 6183-00001B.
Appendix 1: Specifications on Research Design
The focus group interviews were conducted by the authors collectively as co-moderators and took place in October, November, and December 2017 at eight universities in Denmark: University of Copenhagen, Aarhus University, Technical University of Denmark, Copenhagen Business School, Roskilde University, Aalborg University, University of Southern Denmark, and the IT University of Copenhagen.
The focus group interview was chosen as the method because of its strength in producing data on complex and uncharted empirical subject matters that detail practices, group interpretations, norms, and assessments (Halkier, 2016; Morgan, 1997). Moreover, this method allowed us to discuss issues of a potentially sensitive character and to gain insights into researchers’ perceptions of the gray areas of conducting research.
Selection and Recruitment of Focus Group Participants
To ensure that the interviewees would understand each other’s work and ways of doing research, and could be expected to hold similar views on what constitutes good and bad research practice, respectively, we primarily focused on the participants’ research practices, i.e. on ‘how science is done’, when composing the groups, as described in the Methods section.
Further selection criteria concerned the number of interview participants. Each focus group had to consist of a minimum of four and a maximum of six participants with a balanced gender composition. Two to three subdisciplines within each of the five main research areas also had to be represented in each focus group, and the selected disciplines had to cover all major subfields of the five main areas (e.g. political science, economics, law, and psychology within the social sciences).
The selection of interviewees was based on an ‘information-oriented’ (Bo, 2005, p. 71) selection strategy with the objective of reaching a broad group of researchers from different main areas and subfields. Generally, the focus group literature points to sample homogeneity as the most effective composition strategy in terms of group dynamics, due to the presence of a common frame of reference (Halkier, 2016; Morgan, 1997). This is the predominant rationale for choosing segmented groups according to the selection strategy of ‘how science is done’, epitomized as variation in research practices and knowledge production models. At the same time, we wished to introduce some heterogeneity into the groups in order to create a balance between recognisability and knowledge exchange, as too much homogeneity may cause shared knowledge to be taken for granted and left unsaid during focus group discussions. We established diversity by ensuring that the groups were composed of both men and women and by including researchers at stratified career levels (which is likely to correspond to variation in age as well): postdoc/assistant professor, associate professor, and professor. Furthermore, while we wished to establish a setting of optimal interaction conditions, we also wished to pursue group variation to gain in-depth understandings and to foster a group dynamic from which it would be possible to elicit differences in QRP perceptions and practices as well as differences in terms of potential causes (individual, institutional, societal, etc.). As a sampling criterion, we avoided senior/junior constellations from the same institute in order to reduce participants’ reticence to speak openly. Likewise, we strived to recruit people with no prior research collaborations.
The recruitment of focus group participants was based on a combination of strategies. To increase credibility and variation, we aimed to conduct purposeful random sampling (Palinkas et al., 2015) by systematically trawling university webpages of the selected disciplines, starting from the top of the lists of researchers and selecting those who met our criteria. Concurrently, we informed heads of department that we were going to recruit participants for our study, and that we would be contacting employees via their public university emails. A snowball/chain sampling strategy supplemented the strategy of searching webpages, and we used our own network to find potential interview participants not known to the interviewers. Recruiting participants for such a large-scale focus group study with 22 focus groups was challenging, and we invited a total of 808 researchers. Each focus group had three to six participants, and a total of 105 researchers took part in the study.
The moderator guide was structured as a ‘funnel model’ (Halkier, 2016). After an introduction explaining the purpose of the study, communicating the interview guidelines, and introducing the participants, each interview opened with an explorative question inquiring into the participants’ thoughts on what constitutes a good or favorable research process and examples hereof. The discussion then moved on to the subject of QRPs, and the participants were asked to list challenges, if any, in upholding good research practices within their field of research. Before further exploring the pre-defined themes of QRP causes and developments, we spent a third of the scheduled time doing a ranking exercise. First, the participants were asked to write down a severe or prevalent QRP within their area of research. Then they were given eight pre-written cards with pre-defined QRPs and, based on group discussions and negotiations, asked to place the pre-written cards as well as their own cards on a simple QRP ‘severity scale’ ranging from ‘not severe’ to ‘severe’ to ‘very severe’. The participants were told that the scale covered the ‘gray area’ between RCR and FFP. We did not provide further specifications of the concept of severity prior to the exercise, as we wanted to encourage extempore researcher understandings of the particular “objects” exposed to harm, such as individual researchers and their careers, particular research projects or research fields or science in general in terms of research beneficiaries, general trust in science etc. After this first part of the exercise and a short break, the interviewees were asked to rank the same cards on a ‘prevalence scale’. Again, we used a very simple scale ranging from ‘not prevalent’ to ‘prevalent’ to ‘very prevalent’. The eight QRP examples used are well-known QRPs in the literature. To arrive at these particular eight examples, the 60 QRP examples in Bouter et al. 
(2016) were sorted and categorized into a smaller number of practices using facet analysis. The final set of eight QRPs was then selected based on an assessment of their importance and resonance within and across the different fields of research. The eight QRPs are: lack of transparency in the use of methods and empirical data; selective reporting of research findings; salami slicing; p-hacking and/or HARKing (Hypothesizing After the Results are Known); selective citing; unfair assignment of authorship; unfair reviewing; and inadequate data management and data storage.
The rationale for including the exercise, and especially for using the eight pre-written cards in all groups, was to gain an understanding of how different disciplinary research fields assess the prevalence and severity of different QRPs. Through participant deliberations, we also wanted to gain insight into their understanding of the different QRPs (e.g. what selective reporting of research findings or unfair assignment of authorship could mean within different fields). The exercise also created a structured, though dynamic and creative space for lively discussions that, presumably, produced comparable knowledge that might have been difficult to elicit otherwise.
Coding and Analysis Strategy
Each interview lasted one hour and 45 minutes, including a break. The interviews were carried out in Danish, and they were subsequently transcribed verbatim and coded in the qualitative data analysis software NVivo. In this article, we focus on the QRPs that the participants themselves reported during the first part of the exercise. The data coding of these emerging QRPs from the 107 written cards yielded 62 different ‘first rounds of codes’ that were also separately coded based on their prevalence and severity ranking. These rankings were coded based on the transcriptions and, in cases of doubt, checked against the photo and video material of the exercise in question. Subsequently, they were coded in a second and third round of coding by both authors. In this coding and validation process, many QRPs were conflated within similar categories and renamed. In this process, the 62 ‘in vivo codes’, which remained close to the data and the actual spoken words of the participants, were carefully assessed, compared, sorted, classified and, where appropriate, re-categorized into similar QRPs or QRPs already established within the literature. As to the latter, well-known QRPs were sometimes described under a different headline. Such QRPs were then re-categorized and renamed in keeping with existing practices. A total of 34 different main QRPs were identified within the humanities, medical sciences, technical sciences, social sciences, and natural sciences. To enhance transparency and reader assessment of the final set of emerging QRPs, the table in the supplementary material (in the folder “Focus Group Study” at the PRINT project's OSF site: https://osf.io/rf4bn/) lists all QRPs with their original labeling and description, their prevalence and severity ranking, the related field of research and research discipline, as well as relevant coding comments.
Furthermore, the full list of emerging QRPs displays the complexity, nuances, and variety of the practices that researchers identified as QRPs.
Appendix 2: In Which and in How Many Main Areas of Research Were the QRPs Mentioned?
| # | Name of emerging QRP | Mentioned in main research area | Number of main areas of research |
|---|---|---|---|
| 1 | Breach of ethical principles | Humanities | 1 |
| 2 | Cherry picking sources and data | All | 5 |
| 3 | Deselection of resource-demanding methods | Social sciences | 1 |
| 4 | Exclusion of other traditions on a questionable foundation | Medical sciences + humanities | 2 |
| 5 | Fashion-determined choice of research topic | Social sciences | 1 |
| 8 | Ignoring negative results | Technical sciences | 1 |
| 9 | Inadequate data management and data storage | Humanities | 1 |
| 10 | Inadequate use of data | Medical sciences | 1 |
| 11 | Inadequate use of methods | Humanities + natural sciences + technical sciences + medical sciences | 4 (minus social sciences) |
| 12 | Instrumental and marketable approach | Social sciences | 1 |
| 13 | Insufficient preparation and reading of existing literature | Technical sciences + humanities | 2 |
| 14 | Lack of a clear and well-defined research question | Medical sciences | 1 |
| 15 | Lack of controls | Medical sciences | 1 |
| 16 | Lack of critical reflection | Medical sciences | 1 |
| 17 | Lack of transparency about conflicts of interests | Social sciences | 1 |
| 18 | Lack of transparency in the use of methods and empirical data | Natural sciences + medical sciences | 2 |
| 19 | Lack of validation | Technical sciences + natural sciences + humanities + medical sciences | 4 (minus social sciences) |
| 20 | Non-representative sampling | Technical sciences | 1 |
| 21 | Overgeneralising results | Natural sciences + humanities | 2 |
| 22 | Overselling methods, data or results | Technical sciences | 1 |
| 23 | P-hacking | Social sciences + humanities + medical sciences + technical sciences | 4 (minus natural sciences) |
| 24 | Plagiarism-like behaviour | Humanities + technical sciences | 2 |
| 25 | Politicisation of one’s research | Natural sciences + social sciences | 2 |
| 26 | Re-use of research material | Humanities | 1 |
| 27 | Salami slicing | Social sciences + natural sciences | 2 |
| 28 | Selective citing | Humanities + social sciences + medical sciences | 3 |
| 29 | Selective reporting of research findings | Medical sciences + technical sciences + social sciences | 3 |
| 30 | Sloppy use of figures | Technical sciences | 1 |
| 31 | Theoretical and methodological authority arguments | Humanities | 1 |
| 32 | Tuning and calibrating a method | Natural sciences | 1 |
| 33 | Unfair assignment of authorship | Medical sciences + social sciences + technical sciences + natural sciences | 4 (minus humanities) |
| 34 | Unoriginality | Humanities + social sciences + natural sciences | 3 |
Ravn, T., Sørensen, M.P. Exploring the Gray Area: Similarities and Differences in Questionable Research Practices (QRPs) Across Main Areas of Research. Sci Eng Ethics 27, 40 (2021). https://doi.org/10.1007/s11948-021-00310-z
- Disciplinary differences
- Epistemic culture
- Research integrity
- Responsible conduct of research
- Research misconduct