Abstract
Breaches of research integrity have gained considerable attention due to high-profile scandals involving questionable research practices by reputable scientists. These practices include plagiarism, manipulation of authorship, biased presentation of findings, and misleading reports of significance. To combat such practices, policymakers tend to rely on top-down measures, mandatory ethics training, and stricter regulation, despite limited evidence of their effectiveness. In this study, we investigate the occurrence and underlying factors of questionable research practices (QRPs) through an original survey of 3,005 social and medical researchers at Swedish universities. Comparing the roles of organizational culture, researchers' norms and counter norms, and individual motivation, the study reveals that the counter norm of Biasedness, the opposite of universalism and skepticism, is the most important factor overall: Biasedness was related to 40–60% of the prevalence of the questionable practices. The analysis also reveals the contradictory impact of other elements in the organizational environment. Internal competition was positively associated with QRP prevalence, while group-level ethics discussions consistently displayed a negative association with such practices. Items covering ethics training and policies, by contrast, had only a marginal impact on the prevalence of these practices; the organizational climate and normative environment have a far greater influence. Based on these findings, we suggest that academic leaders should prioritize the creation and maintenance of an open and unbiased research environment, foster a collaborative and collegial climate, and promote bottom-up ethics discussions within and between research groups.
Introduction
Two Key Questions on Questionable Research Practices
High-profile research scandals (Berggren & Karabag, 2019; Jungebluth et al., 2011; Levelt et al., 2012; Lonnstedt & Eklov, 2016), an abundance of grey zone practices, and low replicability of findings even in top journals (Begley & Ellis, 2012; Camerer et al., 2016; Chalkia et al., 2020; Open Science Collaboration, 2015; Prinz et al., 2011) have brought attention to problems in contemporary science. While blatant fraud, such as falsification and fabrication of results, seems to be rare, questionable research practices (QRPs) aimed at increasing the likelihood of publishing significant findings appear to be widespread (Alberts et al., 2015; Banks et al., 2021). QRPs, as understood here, imply an intention to achieve success by dubious behavior, such as failure to acknowledge co-authors, selective presentation of findings, removal of data not supporting desired outcomes, and overstatement of a study's empirical or methodological foundation (Ravn & Sorensen, 2021). As indicated by "an increasing inflation of p-values directly below p < 0.05" and "a rising share of verified as opposed to falsified hypotheses" (Matthes et al., 2015), QRPs seem to be not only persistent (Helgesson et al., 2022) but also on the rise.
The problems of misconduct and QRPs have sparked a longstanding debate regarding two questions: (1) What factors are strongly associated with the observed prevalence of these devious practices? (2) How can these practices be prevented or at least minimized? Initially, the answers to the What-question focused on the individual characteristics of the perpetrator, so-called "bad apple" explanations (Huistra & Paul, 2022). In the last decade, the pendulum has swung in the opposite direction, and research studies emphasize systemic features such as publication pressure and competitive career systems (Aubert Bonn & Pinxten, 2019). Recently, however, a comprehensive study found no general association between misconduct and this type of systemic feature; instead, the study discovered country-level effects which reflected organizational reward strategies (Fanelli et al., 2022). These findings highlight the need to investigate the role of the organizational climate, norms, incentives, and policies and their interaction with individual behavior.
Prompted by requirements from governments and authorities (Poff & Tauginienė, 2022), policymakers early on promoted ethics education and ethics review boards. Later, several international codes and regulations, e.g., the European Code of Conduct for Research Integrity (ALLEA, 2023) and the Bonn PRINTEGER Statement (Forsberg et al., 2018), specified that research institutions and organizations should develop appropriate and adequate training in ethics and research integrity to ensure that all involved are made aware of the relevant codes and regulations. The measured effects of such ethics training vary between studies and research programs (Watts et al., 2017). Interpreting these results is complicated by the considerable variation in what is defined as "ethics training", in type, extent, and content alike (Abdi et al., 2021; Watts et al., 2017). Moreover, several studies have found that ethics training is ineffectual in preventing misconduct and may even normalize QRPs among young academics (DuBois et al., 2008; Hite et al., 2022; Sarauw, 2021).
Furthermore, ethics reviews at Western universities have been criticized for being overly bureaucratic, obsessed with control, and hindering research in controversial fields (Hickey et al., 2022). Recently, for example, 2489 researchers in Sweden attacked the expanded mandate of the ethics reviews as “an acute threat to the pursuit of research”.Footnote 1 Far from being settled, the question of how to effectively prevent misconduct and QRPs requires further consideration.
The purpose of this paper is twofold: (1) To analyze the association between the prevalence of QRPs and factors related to the organizational climate, normative environment, university policies, ethics training, demographic factors, and individual motivation. (2) To highlight the potential to reduce QRP-prevalence by means of organization- and culture-oriented strategies that have received insufficient attention thus far.
Searching for Situational Environmental Factors in Explaining Questionable Practices
Science has a history as a highly competitive field, known for great discoveries and theories, but also for instances of gross violations of the basic rules of honesty and transparency (Broad & Wade, 1983). Since the 1990s, with the huge expansion of academic research and education and the growing number of academics competing for positions, there has been a robust rise in retracted papers and in reports of scientific misconduct. According to Huistra and Paul (2022), recent research tends to attribute the rise in deceptive or questionable practices to systemic problems in contemporary academia, such as the pressure to publish for career advancement and for attracting research grants; see also Aubert Bonn and Pinxten (2019).
However, a comprehensive international study of factors associated with image manipulation (Fanelli et al., 2022) challenges this particular system-oriented view. At the same time, the authors found a strong country-level effect in relation to observed manipulations: “In countries where publications are rewarded with cash incentives, especially China, the risk of problematic image duplication was higher for more productive, more frequently cited, earlier-career researchers working in lower-ranking institutions… However, a null or opposite pattern was observed in all other countries.” (Fanelli et al., 2022).
The authors argue that cash incentives do not constitute any “pressure” on researchers but promote norms at odds with the scientific norms of disinterestedness and communalism. Moreover, cash rewards for publication were only observed in select countries and could be seen as a case of organizational policies rather than a general systemic feature.
This study highlights the need to shift the focus from systemic factors and to view researchers' behaviors as an interplay between micro and macro factors. A person's behavior is often due to a complex combination of individual and situational or environmental factors, with the environmental factors emerging at various levels (societal, institutional, departmental, and team-level), each with its own sub-organizational cultures and climates. The present paper continues this search for situational environmental factors associated with QRPs by analyzing the role of the normative environment and the organizational climate compared to the impact of demographic and policy-oriented factors. The belief of policymakers in training and education to reduce QRP prevalence (e.g., ALLEA, 2023; Forsberg et al., 2018) has gained some support from a study showing that scientists in a region with many research ethics transgressions, the Middle East, who have obtained ethics training have a higher awareness of research ethics issues and are less involved in transgressions of research ethics rules than those who have not (Felaefel et al., 2018).
However, in general, there is limited robust evidence supporting this type of remedy. Other studies suggest that supervisors of PhD and Master's students, as role models and mentors, may have an important influence on their supervisees' ethical understanding and future behavior (Gray & Jordan, 2012; Pizzolato & Dierickx, 2022). Unfortunately, there is a lack of evidence regarding this influence, which may be attributed to the challenges of conducting the required longitudinal studies.
The Importance of the Normative Environment and the Organizational Climate
Norms lie at the nexus of institutions and individuals (Thornton & Ocasio, 2008). They are distinct from attitudes, opinions, or behaviors and can be defined as the standards shared by a community or group, to which members are expected to adhere. As noted by Anderson et al. (2013, p. 239), “Social norms have been found to play an especially significant role in scientific communities.”
Robert Merton articulated the "classical" four norms of science in his CUDOS framework, which includes communism or communalism, in the extended sense of common ownership of goods; universalism, in the sense that the acceptance or rejection of claims in science is not to depend on the personal or social attributes of their protagonist; disinterestedness, reflecting a passion for knowledge without other distinctive motives; and finally organized skepticism, both as a methodological and an institutional mandate, with suspension of judgment until 'the facts are at hand' (Merton, 1942, 1973). According to Merton, these norms are derived from the spirit of science and support its prime goal of advancing knowledge. Merton's framework has been extensively discussed and challenged (Anderson et al., 2010; Chalmers & Glasziou, 2009; Kim & Kim, 2018; Macfarlane & Cheng, 2008). The most powerful challenge comes from a study of scientists involved in the Apollo moon mission by the organizational theorist Ian Mitroff (1974). According to his study, leading researchers were doggedly committed to their pet hypotheses, and Mitroff therefore proposed four counter norms: solitariness/secrecy ("Property rights are expanded to include protective control over the disposition of one's discoveries; secrecy thus becomes a necessary moral act"), self-interest ("Scientists are expected by their close colleagues to achieve the self-interest they have in work-satisfaction and in prestige through serving their special communities of interest, e.g., their invisible college"), particularism ("The work of certain scientists will be given priority over that of others"), and dogmatism ("The scientist must believe in his own findings with utter conviction, while doubting those of others with all his worth"). According to Mitroff, researchers may exhibit deep ambivalence between subscribing to Mertonian norms or counter norms, and their choice seemed to be situationally dependent.
More recently, sociologist Henry Etzkowitz promoted an “entrepreneurial ethos,” that displaces disinterestedness and communism with an “institutional imperative to translate research into economic and social use” (Etzkowitz, 2011). These “entrepreneurial” norms align more with the current trends in universities, which emphasize brand image, quantity of output, and the commercialization of research results (Pell & Amigud, 2023; Perkmann et al., 2013). Nevertheless, Mertonian norms remain the standard reference in discussions of core values in science.
According to Bruhn (2008), the “social significance of norms lies in the degree of moral outrage or indignation proscribed behaviors evoke when violated.” Therefore, norms are closely related to behavioral control, implying that failure to uphold the scientific norm system contributes to a prevalence of QRPs. In a recent study, inspired by the theory of planned behavior, the authors showed that the “model achieved its best fit when direct paths from perceived norms to plagiarism behavior were specified” (Curtis et al., 2018; Stone et al., 2009). Similar insights are found in the QRP study by Gopalakrishna et al. (2022).
Previous research has also demonstrated the integrity relevance of the organizational climate, particularly the balance between internal competition and collegiality (Anderson et al., 2007). A competitive climate and entrepreneurial norms may enhance creativity and productivity, but also increase the risk of questionable behavior (Nosek et al., 2015). Thus, the organizational climate, defined as “the shared perceptions of and the meaning attached to the policies, practices, and procedures employees experience and the behaviors they observe getting rewarded and that are supported and expected” (Schneider et al., 2013) is an important aspect to include in studies identifying factors associated with QRP prevalence.
In addition to organizational and environmental factors, individual motivation (Ryan, 2014) plays a role in the propensity to engage in questionable practices. Studies of QRP prevalence therefore need to consider both intrinsic motivation, which pertains to activities done "for their own sake" or for their inherent interest and enjoyment (Ryan & Deci, 2020), and extrinsic motivation, which concerns behaviors done for reasons other than their inherent satisfactions (Ryan & Deci, 2020).
In summary, this overview highlights the need for studies that capture the conflicting forces shaping contemporary academia and its research ethics.
Materials and Methods
Study Design
This paper is based on an original nationwide survey in Sweden of QRPs, norm adherence, organizational climate, individual motivation, ethics training, university policies, and demographic variables, including sex, age, and research field (medicine or social science). The selection frame for the study was based on the nationwide registration of university-employed researchers and PhD students by Statistics Sweden (SCB), a government authority responsible for official statistics that adheres to the European Statistics Code of Practice and the United Nations Fundamental Principles of Official Statistics. SCB aided in designing the survey according to institutional routines to ensure overall survey validity (see below), calculated the selection frame, distributed the survey, collected answers, and supplied the researchers with an anonymized data set.
Survey Instrument
The development of the survey started with a search for preliminary items in the literature indexed by International Scientific Indexing (https://isindexing.com) and Scopus using "construct names + survey or questionnaire". When items could not be identified in the relevant literature, we designed original items specifically for this survey. This was a particular challenge for qualitative research areas, where there is no consensus on what constitutes questionable practices.
A first version was tested on 200 European management scholars in 2017. The results were used to form preliminary factor structures and a second survey version for a pre-test by ten researchers at Swedish universities. SCB performed a technical assessment of all survey items regarding wording, sequence, and response alternatives. A modified version was translated into Swedish, retranslated, and reviewed by experts in research ethics and scientific misconduct. Next, SCB organized cognitive interviews with eight social and medical science researchers to capture ambiguities and difficulties in understanding specific items.Footnote 2 Based on this input, we finalized the English and Swedish survey versions.
The survey started with questions regarding the respondents' employment to assess the eligibility and environment of the respondent. This was followed by sections on the organizational climate, extrinsic vs. intrinsic motivation, scientific norms and counter norms, good and questionable research practices, ethics training and policies, and the respondents' perceived job security (Table S1). A separate section included items on ethics training during postgraduate studies, ethics training for new employees, routines for reporting suspicions of misconduct, written policy about research misconduct, and routines for data management and storage. In a closing section, the respondents could elaborate on their responses, discuss item quality, and provide personal reflections. Despite the lengthy questionnaire and the detailed questions, 650 respondents (21.6% of the sample) entered additional personal comments. Several criticized what they perceived as a negative and suspicious tone in the survey. Others, however, requested more critical questions on power, hierarchy, and discrimination, and projected a view of contemporary academia as marked by hyper-competition, publication pressure, exploitation of Ph.D.s, nepotism, and a toxic atmosphere. Some of these comments echoed the critical discussion of contemporary academia in recent qualitative studies, e.g., Drolet et al. (2023); as they were not accessible for analysis in the present, quantitatively oriented paper, they will hopefully provide inspiration for future studies.
Measuring Norms and QRPs
The survey's norm section was designed to gauge the respondents' normative environment, rather than the respondents' adherence to normative ideals, which is common in other surveys, e.g., Gopalakrishna et al. (2022). To achieve this, the survey positioned the respondent in a "non-self" position:
“The survey questions below address attitudes, behaviors, and practices among colleagues in your research environment, i.e., the group, department, clinic, or similar unit where you conduct most of your research. Please answer based on what you know or have good reason to assume about Ph.D. students, researchers, and teachers in this environment.”
Accordingly, the items were introduced as follows: “In my research environment, colleagues…” followed by a 5-step Likert scale response format, from “(1) never” to “(5) always”.
QRPs refer to the "design, analytic, or reporting practices that have been questioned because of the potential for the practice to be employed with the purpose of presenting biased evidence in favour of an assertion" (Banks et al., 2016). There is no consensus regarding standards to measure QRPs, which has resulted in a huge variation in item design and prevalence estimates (Ravn & Sorensen, 2021; Xie et al., 2021). Some surveys include aspects such as 'insufficient supervision and mentoring of junior colleagues' or 'choice of inadequate research design', which could be explained by limited resources or sloppy conduct rather than a purpose to present biased evidence. Other surveys focus only on dubious statistical practices, such as 'rounding off' p-values or excluding data and control variables to arrive at statistical significance, an approach that excludes qualitative studies (Andrade, 2021). In this study, we maintained the core assumption in Banks et al. (2016) that QRPs are related to a purpose to present evidence in favor of an assertion or a specific person and included items that feature high on the ranking list of minor and major research misbehaviors (Bouter et al., 2016). The survey's QRP section starts with items addressing practices irrespective of research approach, followed by items on questionable statistical practices, and finally, items on dubious practices in qualitative research. The nine selected QRP indicators are listed in Table 1. Their prevalence and relative distribution are reported on a 5-step Likert scale: "Always", "Often", "Sometimes", "Seldom", and "Never".
Participants and Data Collection
In 2019, when the survey population was defined, there were 48 universities, university colleges, and research institutes in Sweden. All researchers, teachers, and Ph.D. students employed at these entities are registered by SCB. After excluding small and specialized colleges, the selection frame comprised 39 academic entities with a total employment of 25,783 individuals. To avoid an overrepresentation of large universities and to secure perspectives from the many small universities, the entity population was divided into three strata based on the number of researchers, teachers, and Ph.D. students: > 1,000 individuals (7 universities and university colleges), 500–999 individuals (3 universities and colleges), and fewer than 500 individuals (29 universities and colleges). From these strata, SCB randomly sampled 35%, 45%, and 50% of the relevant employees, respectively, resulting in an unbound sample of 10,047 individuals. After coverage analysis and exclusion of wrongly included individuals, 9,626 individuals remained (Fig. 1).
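The stratified design can be sketched as follows. This is a minimal illustration: the stratum sizes and individual IDs below are synthetic stand-ins, and only the three stratum-specific sampling fractions follow the description above.

```python
import random

# Illustrative sketch of stratified sampling with stratum-specific fractions.
# Stratum sizes are invented; only the fractions (35%, 45%, 50%) follow the text.
random.seed(1)

strata = {
    "large (>1000 employees)":    {"size": 18000, "fraction": 0.35},
    "medium (500-999 employees)": {"size": 2500,  "fraction": 0.45},
    "small (<500 employees)":     {"size": 5000,  "fraction": 0.50},
}

sample = {}
for name, s in strata.items():
    population = range(s["size"])                 # stand-in for the SCB register
    n = round(s["size"] * s["fraction"])          # stratum-specific sample size
    sample[name] = random.sample(population, n)   # simple random sample in stratum
```

Oversampling the smaller strata in this way secures enough responses from small institutions for stratum-level comparisons, at the cost of requiring stratum weights for population-level estimates.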
The selected individuals were approached with a personal postal letter informing them about the project and the survey, with a link to the project website. The letter notified the respondents that they could choose language version and respond either on paper or online. The survey was open for data collection for two months, during which two reminders were sent to non-responders. After a series of standard consistency checks, SCB sent us the fully anonymized dataset in March 2020.
Ethical Consideration and Identity Protection
The Swedish Act concerning the Ethical Review of Research Involving Humans (2003:460) defines the types of studies that require ethics approval. In line with the General Data Protection Regulation (EU 2016/679), the act applies to studies that collect personal data revealing racial or ethnic origin, political opinions, trade union membership, religious or philosophical beliefs, or health and sexual orientation. The present study does not involve any of the above, which is why no formal ethics approval was required. The ethical aspects of the project and its compliance with the guidelines of the Swedish Research Council (2017) were also part of the review process at the project's public funding agency, Forte. To further eliminate any integrity risk for survey participants, the questions addressed the norms, attitudes, and perceptions in the respondents' research environment, not any individual preference or behavior. Because the survey was anonymous, neither the researchers nor any other reader could track who answered what. The cover letter from SCB informed respondents that:
“Once Statistics Sweden collects the answers, they are anonymized and converted into data files delivered to researchers at Linköping University…No single individual can be identified when the information is presented… Rules on processing personal data are contained in the EU General Data Protection Regulation, the Official Statistics Act (2001:99), and the Official Statistics Ordinance (2001:100).”
SCB also informed respondents about their right to abstain from participation, and 63 persons made use of this possibility.
Statistical Analysis in Three Steps
The survey analysis proceeded in three steps, starting with assessments of response quality and basic factor analyses, followed by univariate and multiple ordinal logistic regressions, and ending with relative importance analyses (Tonidandel & LeBreton, 2011) to ascertain the comparative weight of the explanatory factors.
In the first step, principal component analysis was used to factorize the items related to norms, organizational climate, and motivation to achieve conceptually consistent and robust factor structures (Matsunaga, 2010). This analysis, which was also performed on several subsamples, yielded three distinctive norm and counter norm factors. The first factor was named "Biasedness" and contained six items covering a research environment characterized by a combination of Mitroff's particularism and dogmatism (Mitroff, 1974). The second factor was named "Skepticism" and contained four items covering an environment characterized by Merton's organized skepticism (Merton, 1938, 1973). The third factor, named "Openness", contained three items covering readiness to share results with other researchers and to invite external researchers to review methods and results; it was reminiscent of Merton's communalism and opposed to Mitroff's counter norm solitariness. Two organizational climate factors, named by us "Competition" and "Collegiality", and two motivation factors, named "Extrinsic Motivation" and "Intrinsic Motivation", were also identified. Table 2 presents descriptive statistics and the mean value of the items included in each factor. See also Tables S4-S6 in the Supplement for the items, their sources in previous literature, whether they were re-coded, and their factor loadings.
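The factorization step can be illustrated with a minimal sketch: PCA on the correlation matrix of synthetic Likert-type items generated from a single latent dimension. The data, dimensions, and seed are invented for illustration; the real analysis used the survey items listed in Tables S4-S6.

```python
import numpy as np

# Minimal PCA sketch: items loading on one latent "norm" dimension plus noise.
rng = np.random.default_rng(0)
n_resp, n_items = 500, 6
latent = rng.normal(size=(n_resp, 1))                  # underlying factor
items = latent + 0.5 * rng.normal(size=(n_resp, n_items))

corr = np.corrcoef(items, rowvar=False)                # item correlation matrix
eigvals, eigvecs = np.linalg.eigh(corr)                # eigendecomposition
order = np.argsort(eigvals)[::-1]                      # sort by explained variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()                    # variance share per component
loadings = eigvecs * np.sqrt(eigvals)                  # component loadings
```

With a single strong latent dimension, the first component absorbs most of the common variance and its loadings identify the items belonging to the factor, mirroring how item groups such as "Biasedness" are formed.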
In step two, the analysis focused on the extent to which these factors and other explanatory variables were related to the perceived QRP prevalence. Given the ordinal nature of the items, we used ordinal logistic regressions (McCullagh, 1980), with the QRP items as dependent variables. After an initial univariate screening of all explanatory variables (Tables S7-S10), the analysis continued with a series of multiple logistic regressions.Footnote 3 Given the high number of respondents, several explanatory variables were significantly related to the QRP items but only marginally affected the goodness of fit. To avoid overloading in subsequent regression steps, we only included explanatory variables having a Nagelkerke pseudo-R2 > 0.01 and a p-value of < 0.001 in the univariate regressions. The variance inflation factors consistently yielded values of 2.5 or less, which suggested no or only minor multicollinearity (Johnston et al., 2018). This warranted further ordinal logistic regressions for each dependent variable.
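The multicollinearity check mentioned above can be sketched as follows: a variance inflation factor (VIF) is computed for each predictor by regressing it on the remaining predictors, with VIF_j = 1 / (1 - R²_j). The data below are synthetic; only the VIF formula and the 2.5 benchmark relate to the text.

```python
import numpy as np

# Synthetic predictors with mild collinearity between columns 0 and 3.
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 4))
X[:, 3] += 0.5 * X[:, 0]

def vif(X):
    """VIF_j = 1 / (1 - R^2_j), where R^2_j comes from regressing
    predictor j on all the other predictors (with an intercept)."""
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(y)), others])   # add intercept
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

vifs = vif(X)
```

Values near 1 indicate independent predictors; the study's observed ceiling of 2.5 is well below the common rule-of-thumb cutoffs of 5 or 10.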
Ordinal regression models require that the relationship between the explanatory variables and the responses does not depend on the level of the response. We assessed this "proportional odds" assumption visually for each regression by inspecting parallel line plots (Schlegel & Steenbergen, 2017). This inspection suggested that the proportional odds assumption was violated for the response alternative "Always" for most of the QRP variables. As this alternative accounted for only a few percent of the observations (Table 3), it was aggregated with "Often". The new four-step variable met the requirements of the proportional odds assumption for five of the nine QRP variables. For the four remaining QRP variables, the proportional odds assumption remained violated, so the model was relaxedFootnote 4 for one or two variables at a time until log-likelihood ratio tests indicated no further significant improvement.
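The stopping rule for relaxing the model can be illustrated with a minimal log-likelihood ratio test: the relaxed model is kept only if it fits significantly better than the constrained one. The log-likelihood values and degrees of freedom below are hypothetical; only the test logic follows the procedure described above.

```python
import math

# Hypothetical log-likelihoods of a constrained (proportional odds) model
# and a relaxed model with three extra parameters.
ll_constrained = -1520.4
ll_relaxed = -1512.1
extra_df = 3

# Likelihood ratio statistic: twice the log-likelihood improvement.
lr_stat = 2 * (ll_relaxed - ll_constrained)

def chi2_sf(x, df, steps=20000):
    """Survival function P(X > x) of a chi-square distribution, computed by
    midpoint-rule integration of the density; accurate enough for a sketch
    and avoids a scipy dependency."""
    k = df / 2.0
    h = x / steps
    norm = math.gamma(k) * 2.0 ** k
    cdf = sum(math.exp(-t / 2.0) * t ** (k - 1) * h
              for t in (h * (i + 0.5) for i in range(steps))) / norm
    return 1.0 - cdf

p_value = chi2_sf(lr_stat, extra_df)   # small p => keep the relaxed model
```

In the stepwise procedure described above, relaxation stops once this p-value is no longer significant for any additional variable.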
In the third and final step, the relative importance of the explanatory variables was computed using a modification of Pratt's linear regression method (Pratt, 1987). This modification implies that the pseudo-R2 coefficients are partitioned using a vector-based model into components representing the importance of each independent variable (Thomas et al., 1998). Obtained in this way, the relative importance indices sum to 1, which preserves their natural interpretation. For four QRP indicators, the relaxation of the proportional odds assumption resulted in separate importance values of the explanatory variables for each response level of the QRPs.
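Pratt's original measure for a linear model, the starting point of the adaptation described above, can be sketched on synthetic data: each predictor's importance is its standardized coefficient times its zero-order correlation with the outcome, divided by R², so the indices sum to 1 by construction. Data and coefficients below are invented for illustration.

```python
import numpy as np

# Synthetic data: three independent predictors with decreasing true effects.
rng = np.random.default_rng(7)
n = 1000
X = rng.normal(size=(n, 3))
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(size=n)

# Standardize so the regression coefficients are beta weights.
Xz = (X - X.mean(0)) / X.std(0)
yz = (y - y.mean()) / y.std()

beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)                 # standardized OLS
r = np.array([np.corrcoef(Xz[:, j], yz)[0, 1] for j in range(3)])
r2 = beta @ r        # for standardized data, R^2 = sum_j beta_j * r_j
pratt = beta * r / r2  # Pratt importance indices; they sum to 1
```

The study's modification replaces R² with the ordinal models' pseudo-R², partitioned analogously so that the indices retain the sum-to-one interpretation.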
Results
Descriptive Results
Of the 10,047 sampled individuals, 3,295 responded, an overall response rate of 34.2% (Fig. 1), with similar rates in all three strata (Table S2). This compares well with similar surveys elsewhere (de Vrieze, 2021). An analysis of missing value patterns revealed that 290 of the respondents either lacked all data for at least one of the factors or had too many missing values dispersed over several survey sections. After eliminating these responses, we used SPSS algorithms (IBM-SPSS Statistics 27) to analyze the remaining missing values and found them randomly distributed across the survey, constituting less than 5% of the data. Thus, they could be replaced using the imputation program in SPSS (Madley-Dowd et al., 2019). The curated dataset, which contained 3,005 responses, was used in all subsequent analyses.
As seen in Table 2, 55.6% of the respondents registered as female and 44.4% as male. This mirrors the composition of the original randomized sample of 10,047 individuals, where 56.2% were registered as female and 43.8% as male (Table S2). The same consistency applies to research field: 51.3% medical scientists and 48.7% social scientists among the respondents, compared to 53.5% and 46.5%, respectively, in the original sample (Table S2). The age distribution displayed a similar consistency. Together, this indicates that the composition of the respondents matches the original sample well and, by inference, the target population.
Across the demographic variables age, sex, and position, the factor values regarding norms, organizational climate, and motivation are remarkably similar, although the large sample tends to make even minor differences significant (Table 2). The same goes for descriptive data on ethics training, presence of a reliable person, perceived job security, routines and policies, and data management (Table S3). It should be noted that the data on the respondents' age, gender, position, and research field were provided by SCB independently of the respondents, which means that the similarities cannot be attributed to any common method bias.
As for QRP prevalence, the most frequent forms concerned authorship manipulation (indicators NOT CONTRIB and NOT ACKNOW), selective presentation of findings to confirm one's own opinion (indicator SELECTIVE), and misuse of control variables (indicator SELECT CONTROLS); see Table 2 and Fig. 2. These data refer to perceived prevalence in the respondents' research environment and should not be compared with prevalence data in studies where respondents refer to their own behavior.
Regression Results: Biasedness a Dominant Factor
The initial univariate regression analyses indicated that the norm factors as well as the climate and motivation factors were significantly related to all the nine different QRPs, with the counter norm Biasedness as the most important of all (Table S7). The analysis also indicated that the variable Discuss, referring to open discussions about ethical issues at the group or laboratory level, had a significant and negative relation to the prevalence of QRPs (Table S8). By contrast, the variables related to the conventional integrity measures, ethics training and policies, displayed no or weak relationships with the QRPs (Table S9). This pattern was also evident for the background variables of sex, age, position, perceived job security, and research field (Table S10).
Multiple ordinal regressions were conducted based on the four-step response variable which met the requirements of the proportional odds assumption for five of the nine QRP indicators. For the other four QRPs, we calculated separate regression coefficients for each level of the relaxed explanatory variable (see Table 4). Together, these regressions confirmed the dominant role of Biasedness, followed by Competition, for the odds of working in a research environment characterized by prevalent QRPs. Thus, Biasedness was significantly associated with a higher prevalence of all nine QRPs, with the highest z-scores of all explanatory variables for seven of the QRP indicators. Competition was significantly associated with a higher prevalence of five QRPs and was the factor with the highest z-score for the QRP Not acknowledging co-authors. Discuss was associated with a lower prevalence of six QRPs. Extrinsic Motivation related to an increased prevalence of seven QRPs, and Intrinsic Motivation to a lower prevalence of two QRPs. Collegiality displayed no significant relation to any QRP. Research Field was significantly associated with four QRPs and was the variable with the highest z-score for Not contributing co-author. This may be explained by differences in publication practices in the two research fields (Marušić et al., 2011). Except for Data management and storage, the items on ethics training and policies displayed no significant relation with any of the QRPs.
Inclusion of interaction variables in these regressions did not improve the statistical models, as indicated by a negligible impact on the Nagelkerke R2 values; rather, it led to overfitted models that diluted the contribution of each variable (Table S11).
Relative Importance Analysis
Finally, analyses of relative importance were performed to ascertain the contribution of each factor to the prevalence of the QRPs. These analyses confirmed the overall dominant role of Biasedness. This factor had the highest importance value of all factors for eight of the nine QRP indicators (Figs. 3, 4) and was the only factor of importance for the indicators SELECTIVE, SELFPLAG, SELECT CONTROLS and REMOVE DATA (Fig. 4B, C, H, I). For the indicator NOT CONTRIB, Competition and Discuss were almost equal as the second most important factor (Fig. 4D). For two QRP indicators related to qualitative research, WRONG METHODS and EXAGG INTERVIEWS, Skepticism was the most important factor after Biasedness. The influence of ethics training and management, and of the background variables position and job security, was negligible (Fig. 3C).
As the proportional odds assumption had to be relaxed for the four indicators PLAGIA, SELFPLAG, SELECTIVE, and NOT ACKNOW, separate relative importance values were calculated for each response level. For NOT ACKNOW, Competition displayed the highest importance value at all response levels, with Discuss and Biasedness in second and third place. For PLAGIA, Discuss competed with Biasedness for the leading position, with their relative ranks depending on the response level. The analysis showed that Biasedness had a higher relative importance at the lower response levels (i.e., less prevalent QRPs). By contrast, Discuss had a higher relative importance at response levels associated with more prevalent QRPs (Fig. 4A and E). As the variable Discuss was negatively associated with QRP prevalence (Table 4), this implies that discussions of ethical issues at the research-group level have a stronger beneficial impact in environments with high QRP prevalence.
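To illustrate how a Pratt-style relative importance decomposition works (the approach cited via Pratt, 1987, and Thomas et al., 1998), the following Python sketch computes importance values for synthetic, correlated predictors. The variable names are illustrative stand-ins, not the study's measured factors, and a linear model is used for simplicity rather than the ordinal models of the analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Synthetic predictors standing in for survey factors (names are
# hypothetical); "competition" is deliberately correlated with "bias".
bias = rng.normal(size=n)
competition = 0.4 * bias + rng.normal(size=n)
discuss = rng.normal(size=n)
X = np.column_stack([bias, competition, discuss])
y = 1.0 * bias + 0.5 * competition - 0.5 * discuss + rng.normal(size=n)

# Standardize so the regression coefficients are comparable.
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()

beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)   # standardized betas
r = np.array([np.corrcoef(Xs[:, j], ys)[0, 1] for j in range(3)])
r2 = beta @ r                # R^2 equals the sum of beta_j * r_j here
pratt = beta * r / r2        # Pratt importances, normalized to sum to 1
```

Pratt's measure attributes to each predictor the product of its standardized coefficient and its zero-order correlation with the outcome; dividing by R2 yields shares of explained variance, so the values can be ranked as in Figs. 3 and 4.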
Discussion
Key Characteristics of the Survey Study
This study builds on a survey of academic teachers and researchers in the social and medical sciences in Sweden regarding their normative environment, organizational climate, ethics infrastructure, and personal motivations. The survey targeted a random sample of 10,047 persons from a total population of 25,783 individuals and received responses from 34.2%. Compared to other surveys on research integrity, the study stands out in four respects.
-
The survey captures the contradictory normative framework of contemporary academia by including items covering classical (Mertonian) norms as well as their counter norms. The rationale is that absence of subscription to a particular norm does not per se imply subscription to its counter norm, which is why norms and counter norms need to be measured separately.
-
Subscription to norms and counter norms is measured by the respondents' estimates of norm adherence in their research group, not by their own adherence to ideal science norms.
-
By addressing attitudes and behaviors in the respondents' research group, the survey reduced the social desirability bias inherent in self-reports of norm-dissonant behaviors (Fanelli, 2009; Norwood et al., 2016). This also had the advantage of targeting a micro-organizational level of crucial importance for individual behavior (Charness et al., 2007).
-
By measuring QRP prevalence with indicators involving a “purpose of presenting biased evidence in favor of an assertion” (Ravn & Sorensen, 2021) and avoiding items indicating insufficient quality and resource scarcity, the estimated QRP prevalence is lower than in other studies (Agnoli et al., 2017; Bouter et al., 2016; John et al., 2012). At the same time, the results are easier to interpret.
The Crucial Role of the Counter Norm Biasedness
Contemporary academia is shaped by contradictory forces: classical scientific norms and counter norms, as well as competition and collegiality. The first question guiding this study concerns the strength of the factors in the organizational climate and the normative environment in relation to QRPs, compared with the role of university policies, ethics training, and individual and demographic factors. The results highlight the important role of the counter norm Biasedness, i.e., the particularism and dogmatism suggested by Mitroff (1974). Biasedness shows the strongest association of all factors with the prevalence of QRPs, overshadowing the influence of classical norm factors such as Skepticism and Openness, which are associated with non-prevalence of QRPs. Furthermore, Biasedness has a stronger association with QRP prevalence than the climate factor Competition.
The detailed analysis shows that, of all the variables studied, Biasedness is the most important factor for eight of the nine QRPs, while Competition is the most important factor for only one QRP indicator. The variable Discuss, indicating the presence of ethics discussions within the research group, is significantly and negatively associated with the prevalence of six of the nine QRPs. Although theoretically related to organizational collegiality, the effect of the Discuss variable cannot be attributed to this factor.
Consistent with existing literature (Gopalakrishna et al., 2022; Martinson et al., 2010), Skepticism is a significant but less important variable, negatively associated with the prevalence of REMOVE DATA and WRONG METHODS in qualitative studies. In contrast to previous studies (Franco et al., 2014), however, the norm Openness does not emerge as an important factor associated with lower QRP prevalence.
Both the ordered logistic regression and the importance analysis reveal that group-level ethics discussions are substantially related to lower QRP prevalence, whereas collegiality is not. This finding conflicts with claims in the literature that collegiality is an important tool for managing research integrity (Horn, 2013; Loui, 2002). The disparity may arise from different definitions of collegiality. In the literature, e.g., Davis et al. (2007), the organizational climate is often measured by negative features, such as conflicts, insufficient supervision, a non-collegial work environment, and misuse of colleagues' data. In our study, collegiality is assessed positively, as the extent of collaboration and collegial assistance. While these positive aspects are crucial for a productive workplace atmosphere, they do not appear to be directly linked to the prevalence of QRPs.
The associations between motivation types and QRPs have been discussed in several studies (Lefor, 2005; May, 2021). Our analysis reveals that individual motivators are much less important than normative and organizational factors. Extrinsic motivation is significantly related to the prevalence of several QRPs, but its relative importance is low. Intrinsic motivation shows a significant, but weak, negative association with two QRPs.
The study did not reveal any important relationships between QRP prevalence and individual demographics (gender, age, and years of experience). In contrast to Fanelli et al. (2015) or Gopalakrishna et al. (2022), we did not find any significant relationship between career stage and QRP prevalence. However, our study focuses on attitudes, behaviors and QRP prevalence in the respondents´ research environment rather than exploring the relationship between individual characteristics and behaviors. This might explain the lack of any relationship between the participants’ backgrounds and QRPs.
The second question raised in the introduction pertains to effective means of reducing QRP prevalence. However, the analysis did not reveal any significant association between QRP prevalence and ethics training, university policies, or reporting routines.
The lack of association with ethics training merits special comment. In general, ethics training takes the form of short courses, often only a few hours or days long, with large variations in content, conducted early in a researcher's career, and with a curriculum of rules and regulations pertaining to research misconduct (Abdi et al., 2021; Watts et al., 2017). It is not surprising that long-term beneficial effects of these brief endeavors on perceived QRP prevalence are difficult to observe.
In our study, most respondents had received such training: 70% during their PhD studies and 56% at employment (Table S3), which makes it hard to observe any statistically discriminating effect of this training. Moreover, the absence of any effect of ethics training in our multivariate analysis does not imply that moments of ethics training and reflection should not be part of the local infrastructure of a research group. On the contrary, it could be surmised that such elements would support the positive group-level dynamics indicated by the Discuss item.
The presence of a Reliable person is the only ethics-related variable that displays a significant, although weak, association with any QRP indicator. The stronger association between ethics discussions within the research group and lower QRP prevalence may be related to what Schein (2010) refers to as the deep layer of organizational culture. Further studies are needed to explore this possibility.
Conclusion
This analysis of a comprehensive survey of Swedish academics in the social and medical sciences makes important contributions to the ongoing discussion on research integrity. The study highlights the crucial association of counter norms with the prevalence of QRPs within the academic communities under investigation. Specifically, the counter norm factor Biasedness was found to be related to 40–60% of the prevalence of the observed QRP indicators, including selective reporting, self-plagiarism, and removal of data. Additionally, the study reveals the contradictory impact of other elements within the organizational environment. While internal competition was positively related to QRP prevalence, adherence to norms of skepticism and the presence of group-level ethics discussions demonstrated a negative association with QRPs. Conventional integrity measures, such as ethics training and university policies, were found to have only a marginal impact. Based on these findings, it is suggested that academic leaders should prioritize the creation and maintenance of an open and unbiased research environment, foster a collaborative and collegial climate, and promote bottom-up ethics discussions within all research groups.
Limitations and Needs for Future Research
The study targets two research fields, the social and medical sciences, which have figured prominently in the discussion of research integrity. Clinical studies without academically employed researchers and corporate research were excluded to ensure comparability and reduce noise in the analysis. Another limitation concerns the lack of established indicators for measuring QRPs in qualitative research, which is an important part of both the social and the medical sciences. This problem was solved provisionally by designing original, survey-specific items. Despite several pre-tests, open comments in the roll-out phase of the survey pointed to difficulties in answering some of these items, which then had to be dropped from the analysis. The lack of rigorous studies on QRPs in qualitative and interpretive research highlights a larger problem, which will hopefully attract focused studies in the future.
A general issue in studies on the causes of QRPs concerns the possible impact of endogeneity, i.e., the lack of independent measurements of the explanatory and dependent variables. Unfortunately, the nature of the QRP phenomena makes it impossible to employ objective indicators to measure their prevalence. In this study, Statistics Sweden provided data on the explanatory variables age, gender, position, and research field independently of the respondents, which ameliorated part of the problem. Most of these demographic variables, however, had only a marginal association, or none at all, with the QRP variables. The same goes for the two items on ethics training, which relate to possible long-term effects of training given in the past. We encourage future studies to assess when and how research ethics training can be effective in both the short and the long term, and how it can be related to the maintenance of positive research norms.
Data Availability
Replication code and data will be made available on diva-portal.org upon acceptance of the publication.
Notes
Dagens Nyheter (“Daily News”), May 16, 2023.
Wallenborg Likidis, J. (2019). Akademiska normer och vetenskapliga förhållningssätt. Mätteknisk granskning av en enkät till doktorander, forskare och akademiska lärare. (Academic norms and scientific attitudes: Metrological review of a survey for doctoral students, researchers and academic teachers). Statistics Sweden (SCB), Prod.nr. 8942146. In Swedish.
In this process, the original ordinal dependent variable is transformed into a new, binary dependent variable equal to 0 if the original dependent variable is less than a defined value, and 1 if it is greater than or equal to this value, which results in a model with category-specific regression coefficients (Peterson & Harrell, 1990). Technically, this relaxation was performed using vglm(…,parallel = FALSE ~ ….).
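The transformation described in this note can be sketched as follows (a minimal Python illustration with synthetic data and a hand-rolled logistic fit, not the VGAM implementation used in the study): a four-level ordinal outcome is split into three binary indicators for Y ≥ k, and one logistic regression is fitted per cutpoint. Under proportional odds the slopes are roughly equal across cutpoints; allowing them to differ yields exactly the category-specific coefficients the note describes.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3000
x = rng.normal(size=n)
# Four-level ordinal outcome generated from a latent logistic model,
# so the proportional odds assumption holds by construction.
latent = 1.2 * x + rng.logistic(size=n)
y = np.digitize(latent, [-1.0, 0.5, 2.0])   # ordinal levels 0..3

def fit_logit(xvec, target, iters=3000, lr=0.1):
    """Plain gradient-ascent logistic regression; returns (intercept, slope)."""
    A = np.column_stack([np.ones_like(xvec), xvec])
    w = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-A @ w))
        w += lr * A.T @ (target - p) / len(target)
    return w

# One binary logit per cutpoint: models P(Y >= k) for k = 1, 2, 3.
slopes = [fit_logit(x, (y >= k).astype(float))[1] for k in (1, 2, 3)]
```

Because the data were generated with a common slope of 1.2, the three fitted slopes come out close to each other; with real data, marked differences between them are what motivate relaxing the parallel (proportional odds) constraint.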
References
Abdi, S., Pizzolato, D., Nemery, B., & Dierickx, K. (2021). Educating PhD Students in Research Integrity in Europe. Science and Engineering Ethics, 27(1), 5. https://doi.org/10.1007/s11948-021-00290-0
Agnoli, F., Wicherts, J. M., Veldkamp, C. L. S., Albiero, P., & Cubelli, R. (2017). Questionable research practices among Italian research psychologists. PLoS ONE, 12(3), e0172792. https://doi.org/10.1371/journal.pone.0172792
Alberts, B., Cicerone, R. J., Fienberg, S. E., Kamb, A., McNutt, M., Nerem, R. M., & Jamieson, K. H. (2015). Self-correction in science at work. Science, 348(6242), 1420–1422. https://doi.org/10.1126/science.aab3847
ALLEA. (2023). The European Code of Conduct for Research Integrity – Revised Edition 2023. Berlin. https://doi.org/10.26356/ECOC
Anderson, M. S., Horn, A. S., Risbey, K. R., Ronning, E. A., De Vries, R., & Martinson, B. C. (2007). What do mentoring and training in the responsible conduct of research have to do with scientists’ misbehavior? Findings from a National Survey of NIH-funded scientists. Academic Medicine: Journal of the Association of American Medical Colleges, 82(9), 853–860. https://doi.org/10.1097/ACM.0b013e31812f764c
Anderson, M. S., Ronning, E. A., De Vries, R., & Martinson, B. C. (2010). Extending the Mertonian norms: Scientists' subscription to norms of research. Journal of Higher Education, 81(3), 366–393. https://doi.org/10.1353/jhe.0.0095
Anderson, M. S., Shaw, M. A., Steneck, N. H., Konkle, E., & Kamata, T. (2013). Research integrity and misconduct in the academic profession. In M. B. Paulsen (Ed.), Higher Education: Handbook of Theory and Research (Vol. 28, pp. 217–261). Netherlands: Springer. https://doi.org/10.1007/978-94-007-5836-0
Andrade, C. (2021). HARKing, cherry-picking, p-hacking, fishing expeditions, and data dredging and mining as questionable research practices. Journal of Clinical Psychiatry, 82(1), 2013804. https://doi.org/10.4088/JCP.20f13804
Aubert Bonn, N., & Pinxten, W. (2019). A decade of empirical research on research integrity: What have we (not) looked at? Journal of Empirical Research on Human Research Ethics, 14(4), 338–352. https://doi.org/10.1177/1556264619858534
Banks, G. C., Fischer, T., Gooty, J., & Stock, G. (2021). Ethical leadership: Mapping the terrain for concept cleanup and a future research agenda. The Leadership Quarterly, 32(2), 101471. https://doi.org/10.1016/j.leaqua.2020.101471
Banks, G. C., O’Boyle, E. H., Pollack, J. M., White, C. D., Batchelor, J. H., Whelpley, C. E., & Adkins, C. L. (2016). Questions about questionable research practices in the field of management: A guest commentary. Journal of Management, 42(1), 5–20. https://doi.org/10.1177/0149206315619011
Begley, C. G., & Ellis, L. M. (2012). Drug development: Raise standards for preclinical cancer research. Nature, 483(7391), 531–533. https://doi.org/10.1038/483531a
Berggren, C., & Karabag, S. F. (2019). Scientific misconduct at an elite medical institute: The role of competing institutional logics and fragmented control. Research Policy, 48(2), 428–443. https://doi.org/10.1016/j.respol.2018.03.020
Bouter, L. M., Tijdink, J., Axelsen, N., Martinson, B. C., & Ter Riet, G. (2016). Ranking major and minor research misbehaviors: Results from a survey among participants of four World Conferences on Research Integrity. Research Integrity and Peer Review, 1, 17. https://doi.org/10.1186/s41073-016-0024-5
Broad, W., & Wade, N. (1983). Betrayers of the Truth: Fraud and Deceit in the Hall of Science. Simon and Schuster Books. ISBN 10: 0671495496
Bruhn, J. G. (2008). Value dissonance and ethics failure in academia: A causal connection? Journal of Academic Ethics, 6(1), 17–32. https://doi.org/10.1007/s10805-008-9054-z
Camerer Colin, F., Dreber, A., Forsell, E., Ho, T.-H., Huber, J., Johannesson, M., & Wu, H. (2016). Evaluating replicability of laboratory experiments in economics. Science, 351(6280), 1433–1436. https://doi.org/10.1126/science.aaf0918
Chalkia, A., Schroyens, N., Leng, L., Vanhasbroeck, N., Zenses, A. K., Van Oudenhove, L., & Beckers, T. (2020). No persistent attenuation of fear memories in humans: A registered replication of the reactivation-extinction effect. Cortex, 129, 496–509. https://doi.org/10.1016/j.cortex.2020.04.017
Chalmers, I., & Glasziou, P. (2009). Avoidable waste in the production and reporting of research evidence. Lancet, 374(9683), 86–89. https://doi.org/10.1016/S0140-6736(09)60329-9
Charness, G., Rigotti, L., & Rustichini, A. (2007). Individual behavior and group membership. American Economic Review, 97(4), 1340–1352. https://doi.org/10.1257/aer.97.4.1340
Curtis, G. J., Cowcher, E., Greene, B. R., Rundle, K., Paull, M., & Davis, M. C. (2018). Self-control, injunctive norms, and descriptive norms predict engagement in plagiarism in a theory of planned behavior model. Journal of Academic Ethics, 16(3), 225–239. https://doi.org/10.1007/s10805-018-9309-2
Davis, M. S., Riske-Morris, M., & Diaz, S. R. (2007). Causal factors implicated in research misconduct: evidence from ORI case files. Science and Engineering Ethics, 13(4), 395–414. https://doi.org/10.1007/s11948-007-9045-2
de Vrieze, J. (2021). Large survey finds questionable research practices are common. Science, 373(6552), 265. https://doi.org/10.1126/science.373.6552.265
Drolet, M. J., Rose-Derouin, E., Leblanc, J. C., Ruest, M., & Williams-Jones, B. (2023). Ethical issues in research: Perceptions of researchers, research ethics board members and research ethics experts. Journal of Academic Ethics, 21(2), 269–292. https://doi.org/10.1007/s10805-022-09455-3
DuBois, J. M., Dueker, J. M., Anderson, E. E., & Campbell, J. (2008). The development and assessment of an NIH-funded research ethics training program. Academic Medicine: Journal of the Association of American Medical Colleges, 83(6), 596–603. https://doi.org/10.1097/ACM.0b013e3181723095
Etzkowitz, H. (2011). Normative change in science and the birth of the Triple Helix. Social Science Information, 50(3–4), 549–568. https://doi.org/10.1177/0539018411411403
Fanelli, D. (2009). How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE, 4(5). https://doi.org/10.1371/journal.pone.0005738
Fanelli, D., Costas, R., & Lariviere, V. (2015). Misconduct policies, academic culture and career stage, not gender or pressures to publish affect scientific integrity. Plos One, 10(6), e0127556. https://doi.org/10.1371/journal.pone.0127556
Fanelli, D., Schleicher, M., Fang, F. C., Casadevall, A., & Bik, E. M. (2022). Do individual and institutional predictors of misconduct vary by country? Results of a matched-control analysis of problematic image duplications. PLoS ONE, 17(3), e0255334. https://doi.org/10.1371/journal.pone.0255334
Felaefel, M., Salem, M., Jaafar, R., Jassim, G., Edwards, H., Rashid-Doubell, F., & Silverman, H. (2018). A cross-sectional survey study to assess prevalence and attitudes regarding research misconduct among investigators in the Middle East. Journal of Academic Ethics, 16(1), 71–87. https://doi.org/10.1007/s10805-017-9295-9
Forsberg, E.-M., Anthun, F. O., Bailey, S., Birchley, G., Bout, H., Casonato, C., & Zöller, M. (2018). Working with research integrity—guidance for research performing organisations: The Bonn PRINTEGER statement. Science and Engineering Ethics, 24(4), 1023–1034. https://doi.org/10.1007/s11948-018-0034-4
Franco, A., Malhotra, N., & Simonovits, G. (2014). Publication bias in the social sciences: Unlocking the file drawer. Science, 345(6203), 1502–1505. https://doi.org/10.1126/science.1255484
Gopalakrishna, G., ter Riet, G., Vink, G., Stoop, I., Wicherts, J. M., & Bouter, L. M. (2022). Prevalence of questionable research practices, research misconduct and their potential explanatory factors: A survey among academic researchers in The Netherlands. PLoS ONE, 17(2), e0263023. https://doi.org/10.1371/journal.pone.0263023
Gray, P. W., & Jordan, S. R. (2012). Supervisors and academic integrity: supervisors as exemplars and mentors. Journal of Academic Ethics, 10(4), 299–311. https://doi.org/10.1007/s10805-012-9155-6
Helgesson, G., Holm, S., Bredahl, L., Hofmann, B., & Juth, N. (2022). Misuse of co-authorship in medical PhD theses in Scandinavia: A questionnaire Survey. Journal of Academic Ethics. https://doi.org/10.1007/s10805-022-09465-1
Hickey, A., Davis, S., Farmer, W., Dawidowicz, J., Moloney, C., Lamont-Mills, A., & Maxwell, J. (2022). Beyond criticism of ethics review boards: strategies for engaging research communities and enhancing ethical review processes. Journal of Academic Ethics, 20(4), 549–567. https://doi.org/10.1007/s10805-021-09430-4
Hite, R. L., Shin, S., & Lesley, M. (2022). Reflecting on responsible conduct of research: A self-study of a research-oriented university community. Journal of Academic Ethics, 20(3), 399–419. https://doi.org/10.1007/s10805-021-09418-0
Horn, L. (2013). Promoting responsible research conduct in a developing world academic context. South African Journal of Bioethics and Law, 6(1), 21–24. https://doi.org/10.7196/SAJBL.256
Huistra, P., & Paul, H. (2022). Systemic explanations of scientific misconduct: Provoked by spectacular cases of norm violation? Journal of Academic Ethics, 20(1), 51–65. https://doi.org/10.1007/s10805-020-09389-8
John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23(5), 524–532. https://doi.org/10.1177/0956797611430953
Johnston, R., Jones, K., & Manley, D. (2018). Confounding and collinearity in regression analysis: A cautionary tale and an alternative procedure, illustrated by studies of British voting behaviour. Quality & Quantity, 52(4), 1957–1976. https://doi.org/10.1007/s11135-017-0584-6
Jungebluth, P., Alici, E., Baiguera, S., Blomberg, P., Bozoky, B., Crowley, C., & Macchiarini, P. (2011). Tracheobronchial transplantation with a stem-cell-seeded bioartificial nanocomposite: A proof-of-concept study. Lancet, 378(9808), 1997–2004. https://doi.org/10.1016/S0140-6736(11)61715-7
Kim, S. Y., & Kim, Y. (2018). The ethos of science and its correlates: An empirical analysis of scientists’ endorsement of Mertonian norms. Science, Technology and Society, 23(1), 1–24. https://doi.org/10.1177/0971721817744438
Lefor, A. T. (2005). Scientific misconduct and unethical human experimentation: Historic parallels and moral implications. Nutrition, 21(7), 878–882. https://doi.org/10.1016/j.nut.2004.10.011
Levelt, W. J., Drenth, P., & Noort, E. (Eds.). (2012). Flawed science: The fraudulent research practices of social psychologist Diederik Stapel. Commissioned by the Tilburg University, University of Amsterdam and the University of Groningen. http://hdl.handle.net/11858/00-001M-0000-0010-258A-9
Lonnstedt, O. M., & Eklov, P. (2016). Environmentally relevant concentrations of microplastic particles influence larval fish ecology. Science, 352(6290), 1213–1216. https://doi.org/10.1126/science.aad8828
Loui, M. C. (2002). Seven ways to plagiarize: Handling real allegations of research misconduct. Science and Engineering Ethics, 8(4), 529–539. https://doi.org/10.1007/s11948-002-0005-6
Macfarlane, B., & Cheng, M. (2008). Communism, universalism and disinterestedness: Re-examining contemporary support among academics for Merton’s scientific norms. Journal of Academic Ethics, 6, 67–78. https://doi.org/10.1007/s10805-008-9055-y
Madley-Dowd, P., Hughes, R., Tilling, K., & Heron, J. (2019). The proportion of missing data should not be used to guide decisions on multiple imputation. Journal of Clinical Epidemiology, 110, 63–73. https://doi.org/10.1016/j.jclinepi.2019.02.016
Martinson, B. C., Crain, A. L., De Vries, R., & Anderson, M. S. (2010). The importance of organizational justice in ensuring research integrity. Journal of Empirical Research on Human Research Ethics, 5(3), 67–83. https://doi.org/10.1525/jer.2010.5.3.67
Marušić, A., Bošnjak, L., & Jerončić, A. (2011). A systematic review of research on the meaning, ethics and practices of authorship across scholarly disciplines. PLoS ONE, 6(9), e23477–e23477. https://doi.org/10.1371/journal.pone.0023477
Matsunaga, M. (2010). How to factor-analyze your data right: Do’s, don’ts, and how-to’s. International Journal of Psychological Research, 3(1), 97–110. https://doi.org/10.21500/20112084.854
Matthes, J., Marquart, F., Naderer, B., Arendt, F., Schmuck, D., & Adam, K. (2015). Questionable research practices in experimental communication research: A systematic analysis from 1980 to 2013. Communication Methods and Measures, 9(4), 193–207. https://doi.org/10.1080/19312458.2015.1096334
May, J. (2021). Bias in science: Natural and social. Synthese, 199(1), 3345–3366. https://doi.org/10.1007/s11229-020-02937-0
McCullagh, P. (1980). Regression models for ordinal data. Journal of the Royal Statistical Society: Series B, 42(2), 109–142. https://www.jstor.org/stable/2984952
Merton, R. K. (1938). Science and the social order. Philosophy of Science, 5(3), 321–337. http://www.jstor.org/stable/184838
Merton, R. K. (1942). Science and technology in a democratic order. Journal of Legal and Political Sociology, 1, 115–126.
Merton, R. K. (1973). The Normative Structure of Science. Theoretical and empirical investigations. University of Chicago.
Mitroff, I. I. (1974). Norms and counter-norms in a select group of the Apollo moon scientists: A case study of the ambivalence of scientists. American Sociological Review, 39(4), 579–595. https://doi.org/10.2307/2094423
Norwood, M. S., Hughes, J. P., & Amico, K. R. (2016). The validity of self-reported behaviors: Methods for estimating underreporting of risk behaviors. Annals of Epidemiology, 26(9), 612–618.e612. https://doi.org/10.1016/j.annepidem.2016.07.011
Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., & Yarkoni, T. (2015). Promoting an open research culture. Science, 348(6242), 1422–1425. https://doi.org/10.1126/science.aab2374
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251). https://doi.org/10.1126/science.aac4716
Pell, D. J., & Amigud, A. (2023). The higher education dilemma: the views of faculty on integrity, organizational culture, and duty of fidelity. Journal of Academic Ethics, 21(1), 155–175. https://doi.org/10.1007/s10805-022-09445-5
Perkmann, M., Tartari, V., McKelvey, M., Autio, E., Broström, A., D’Este, P., & Sobrero, M. (2013). Academic engagement and commercialisation: A review of the literature on university–industry relations. Research Policy, 42(2), 423–442. https://doi.org/10.1016/j.respol.2012.09.007
Peterson, B., & Harrell, F. E., Jr. (1990). Partial proportional odds models for ordinal response variables. Journal of the Royal Statistical Society, 39(2), 205. https://doi.org/10.2307/2347760
Pizzolato, D., & Dierickx, K. (2022). Research integrity supervision practices and institutional support: A qualitative study. Journal of Academic Ethics. https://doi.org/10.1007/s10805-022-09468-y
Poff, D. C., & Tauginienė, L. (2022). Two decades of the Journal of Academic Ethics: From the idea to the present. Journal of Academic Ethics, 20(4), 451–455. https://doi.org/10.1007/s10805-022-09462-4
Pratt, J. W. (1987). Dividing the indivisible: Using simple symmetry to partition variance explained. Proceedings of the Second International Conference in Statistics, Tampere, Finland.
Prinz, F., Schlange, T., & Asadullah, K. (2011). Believe it or not: How much can we rely on published data on potential drug targets? Nature Reviews Drug Discovery, 10(9), 712. https://doi.org/10.1038/nrd3439-c1
R Development Core Team. (2011). The R project for statistical computing. Retrieved February 1, 2024 from http://www.R-project.org/
Ravn, T., & Sorensen, M. P. (2021). Exploring the gray area: Similarities and differences in questionable research practices (QRPs) across main areas of research. Science and Engineering Ethics, 27(4), 40. https://doi.org/10.1007/s11948-021-00310-z
Ryan, J. C. (2014). The work motivation of research scientists and its effect on research performance. R&D Management, 44(4), 355–369. https://doi.org/10.1111/radm.12063
Ryan, R. M., & Deci, E. L. (2020). Intrinsic and extrinsic motivation from a self-determination theory perspective: Definitions, theory, practices, and future directions. Contemporary Educational Psychology, 61, 101860. https://doi.org/10.1016/j.cedpsych.2020.101860
Sarauw, L. L. (2021). The reversed causalities of doctoral training on research integrity: A case study from a medical faculty in Denmark. Journal of Academic Ethics, 19(1), 71–93. https://doi.org/10.1007/s10805-020-09388-9
Schein, E. H. (2010). Organizational Culture and Leadership (4th ed.). Jossey-Bass.
Schlegel, B., & Steenbergen, M. (2017). Test for Parallel Regression Assumption. Retrieved February 1, 2024 from https://cran.r-project.org/web/packages/brant/brant.pdf
Schneider, B., Ehrhart, M. G., & Macey, W. H. (2013). Organizational climate and culture. Annual Review of Psychology, 64(1), 361–388. https://doi.org/10.1146/annurev-psych-113011-143809
Stone, T. H., Jawahar, I. M., & Kisamore, J. L. (2009). Using the theory of planned behavior and cheating justifications to predict academic misconduct. Career Development International, 14(3), 221–241. https://doi.org/10.1108/13620430910966415
Swedish Research Council. (2017). Good Research Practice. Vetenskapsrådet. https://www.vr.se/english/analysis/reports/our-reports/2017-08-31-good-research-practice.html
Thomas, D., Hughes, E., & Zumbo, B. (1998). On variable importance in linear regression. Social Indicators Research, 45(1), 253–275. https://doi.org/10.1023/A:1006954016433
Thornton, P. H., & Ocasio, W. (2008). Institutional Logics. In: R. Greenwood, C. Oliver, R. Suddaby, & K. Sahlin (Eds.), The SAGE Handbook of Organizational Institutionalism (pp. 99-128). SAGE Publications Ltd. https://doi.org/10.4135/9781849200387
Tonidandel, S., & LeBreton, J. M. (2011). Relative importance analysis: A useful supplement to regression analysis. Journal of Business and Psychology, 26(1), 1–9. https://doi.org/10.1007/s10869-010-9204-3
Venables, W. N., & Ripley, B. D. (2002). Modern Applied Statistics with S. (4th ed.). Springer, New York.
Watts, L. L., Medeiros, K. E., Mulhearn, T. J., Steele, L. M., Connelly, S., & Mumford, M. D. (2017). Are ethics training programs improving? A meta-analytic review of past and present ethics instruction in the sciences. Ethics and Behavior, 27(5), 351–384. https://doi.org/10.1080/10508422.2016.1182025
Xie, Y., Wang, K., & Kong, Y. (2021). Prevalence of research misconduct and questionable research practices: A systematic review and meta-analysis. Science and Engineering Ethics, 27(4), 41. https://doi.org/10.1007/s11948-021-00314-9
Yee, T. (2021). VGAM: Vector Generalized Linear and Additive Models. R package version 1.1-5. https://CRAN.R-project.org/package=VGAM
Acknowledgements
This paper is dedicated to the late Professor Per Aspenberg, M.D., Ph.D., who played a vital role in the initiation of the project. We thank Jennica Wallenborg, Statistics Sweden, for providing expert support in the survey design. We are grateful to many colleagues for their constructive comments: Anna Dreber, Peter Hedström, Jan-Ingvar Jönsson, Steven Frenkel, Dan Larhammar, Thomas Magnusson, and Kerstin Sahlin.
Funding
Open access funding provided by Linköping University. This work was supported by Forte (Swedish Research Council for Health, Working Life and Welfare), Grant No. 2018-00321 (Grant Recipient: Solmaz Filiz Karabag). https://www.vr.se/swecris?#/project/2018-00321_Forte
Author information
Contributions
Equally contributed: SFK, BG, CB. Conceptualization: SFK, BG, CB. Methodology: SFK, BG, CB, JP. Visualization: SFK, BG. Funding acquisition: SFK, BG, CB. Project administration and management: SFK. Writing – original draft: SFK, BG, CB. Writing – review & editing: SFK, BG, CB, JP. Approval of the final manuscript: SFK, BG, CB, JP.
Ethics declarations
Ethical Approval
The Swedish Act concerning the Ethical Review of Research Involving Humans (2003:460) defines the types of studies that require ethics approval. In line with the General Data Protection Regulation (EU 2016/679), the act applies to studies that collect personal data revealing racial or ethnic origin, political opinions, trade union membership, religious or philosophical beliefs, health, or sexual orientation. The present study does not involve any of the above, which is why no formal ethical permit was required. The ethical aspects of the project and its compliance with the guidelines of the Swedish Research Council (2017) were also part of the review process at the project's public funding agency, Forte.
Consent to Participate
Not applicable.
Competing Interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary Information
Below is the link to the electronic supplementary material.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Karabag, S.F., Berggren, C., Pielaszkiewicz, J. et al. Minimizing Questionable Research Practices – The Role of Norms, Counter Norms, and Micro-Organizational Ethics Discussion. J Acad Ethics (2024). https://doi.org/10.1007/s10805-024-09520-z