The sample was compiled from publicly available e-mail lists at Norwegian universities, university colleges and research institutes. The overview of the institutions was taken from the Nordic Institute for Studies in Innovation, Research and Education (NIFU) and includes nearly the entire research community in Norway. There are a few potential sources of error, such as double registration of researchers with more than one workplace, or underrepresentation, where a researcher for some reason is not included on the institution's e-mail list. A very small number of institutions are not represented at all because their employees' e-mail addresses were unavailable.
The questionnaire was e-mailed to 31,206 staff at the aforementioned types of institutions in three languages: Norwegian (Bokmål), New Norwegian (Nynorsk) and English (Footnote 5). The respondents were asked to choose their preferred language when answering the questionnaire. We have not been able to detect any systematic differences in response profiles that can be attributed to a language factor. The first batch was sent on Friday, 19 January 2018. Two reminders, each including a link to the questionnaire, were sent to every e-mail address. Wednesday, 1 March was set as the final date for data collection, i.e. a timeframe of almost seven weeks. At this point, a total of 7947 researchers had responded to all or parts of the survey, which implies a response rate of 25.5%. The number that completed the entire questionnaire was 7291, a response rate of 23.4% (Table 1). The number of respondents was larger than in many other international surveys on research misconduct and QRPs; for instance, Gopalakrishna et al. (2021) report a response rate of 21.2% and 6813 completed responses. One notable exception is a survey on authorship and citation manipulation, which analysed data from over 12,000 responses to a series of surveys sent to more than 110,000 scholars (Fong & Wilhite, 2017).
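As a quick check, the reported response rates follow directly from the counts stated above; a minimal sketch:

```python
# Recomputing the response rates reported in the text from the stated counts.
invited = 31_206         # questionnaires e-mailed
partial_or_full = 7_947  # responded to all or parts of the survey
completed = 7_291        # completed the entire questionnaire

response_rate = round(100 * partial_or_full / invited, 1)   # 25.5
completion_rate = round(100 * completed / invited, 1)       # 23.4
```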
The distribution in the RINO sample from 2018 is comparable to the distribution in the publicly available NIFU database for 2016, the official data on Norwegian research (www.nifu.no), as regards four key variables: gender, job category, discipline and age.
The gender distribution in the RINO survey and in the NIFU database is identical: 48% women and 52% men. However, broken down into the various job categories, some biases emerge. The percentage of female professors is somewhat higher in our sample, and the proportion of female associate professors is somewhat lower than in the population. However, this is a very weak bias, which we do not believe warrants corrective weighting.
Research positions such as professor and associate professor were overrepresented, while positions such as lecturer and senior lecturer were correspondingly underrepresented among the responses. Given the topic of the study—research integrity—this result was to be expected. We see this biased distribution as a result of self-selection.
In terms of disciplines, mathematics/natural sciences are overrepresented by 4.6%, while social sciences are underrepresented by 6.3% and medicine by 3.6% in the sample as a whole. However, among university staff, medicine is overrepresented by 6%. The greatest disparity is in the university college component of the sample, where the social sciences are more strongly underrepresented and the humanities are correspondingly overrepresented.
Finally, the response rate is somewhat higher among older than younger respondents, while the disparity between the population and the distribution in the university component of the sample is minimal.
Based on an overall assessment, the research group decided against weighting the sample by any of these variables or combinations of variables, since this could create new biases. Therefore, in the analyses that follow, all results presented are unweighted.
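For readers unfamiliar with the correction the research group decided against, a minimal sketch of post-stratification weighting follows. All stratum shares below are hypothetical illustration values, not figures from the RINO or NIFU data:

```python
# Post-stratification weighting: each respondent in stratum g receives
# weight P_g / p_g (population share over sample share), so that the
# weighted sample reproduces the population distribution.
# NOTE: all numbers here are hypothetical, for illustration only.

population_share = {"professor": 0.30, "assoc_professor": 0.35, "lecturer": 0.35}
sample_share = {"professor": 0.40, "assoc_professor": 0.35, "lecturer": 0.25}

weights = {g: population_share[g] / sample_share[g] for g in population_share}

# By construction, the weighted share of each stratum equals its
# population share — at the cost of inflating the variance of estimates
# and potentially introducing new biases, which is why it was avoided here.
weighted_share = {g: weights[g] * sample_share[g] for g in sample_share}
```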
The results for FFP are given in the tables below:
Let us briefly summarise these results. Firstly, Table 2 makes it quite obvious that Norwegian researchers have very little tolerance of FFP practices: they more or less uniformly condemn fabrication and falsification of material, while there is somewhat more uncertainty about plagiarism. Note, however, that among the more than 7200 respondents, slightly more than 100 still did not see major problems with each of these forms of scientific practice, and roughly 10% expressed some doubt about plagiarism.
Secondly, when it comes to self-reporting of FF practices (Table 3), our numbers deviate from those presented in the meta-analysis by Xie et al. (2021), where 2.9% of respondents admitted to FFP, and from the recent study by Gopalakrishna et al. (2021), where 4.3% and 4.2%, respectively, admitted to one form of FF. As mentioned earlier, these numbers are not directly comparable, but one still has the impression that the numbers in our study are relatively small: 0.2% and 0.3%, respectively, for the two FF practices. Whether this is due to different survey techniques or captures different research cultures with varying practices is open to interpretation.
With regard to plagiarism, we note that more than 0.5% admit to having done it at least once during the last three years. However, this result should also be seen in conjunction with the next table, in particular the question about copying citations.
Our survey contained nine different practices that can be termed QRPs. Here, we present the univariate analysis of attitudes towards and self-reported admissions of these nine practices:
The two tables reveal some variation both in attitudes towards the practices and in self-admission of them. In the following, we discuss the differences between attitudes and self-admission for four of these QRPs.
Concerning the first practice, creating the impression of having consulted a source by copying others' citations, 22.2% (Table 4) of the respondents report a relatively liberal attitude, classifying the practice as “somewhat problematic” or even “not problematic at all”. Roughly 20% (Table 5) report having done it at least once in the last three years. The way this particular question was formulated implies a practice that has been termed “citation plagiarism” (e.g. Serenko et al., 2021), an academic shortcut that may cause misleading or false information to be reproduced in the literature, contributing in some cases to the spread of “academic urban legends” (Rekdal, 2014).
Authorship issues are known to be contentious in the scientific community, and guidelines have been issued to clarify the rightful claim to be an author or co-author of a scientific paper. These guidelines are typically modelled on the Vancouver recommendations, demanding significant contributions to various stages in the production of the research and the resulting papers. The guidelines have been endorsed by COPE (Committee on Publication Ethics). However, despite the fact that many scientific publishers are members of COPE, experience indicates that compliance with these guidelines and their practical implementation is not uniform but varies between age groups, disciplines and countries. It is, therefore, of interest to see how the topic of gift authorship is viewed and practised among Norwegian researchers, in particular because the national Norwegian ethical guidelines are quite clear on this point (NENT, 2016, Guideline 5; NESH, 2016, Guideline 25).
The complexity of the issue is reflected in the results on attitudes towards gift authorship. About 14% of the respondents regard this as not problematic or only somewhat problematic. Whether this should be considered a high percentage is up for discussion, but it certainly indicates that the issue is seen as more complex and less straightforward than the ethical guidelines normally assume. Given this variation in views on gift authorship, it is not surprising to find that roughly 11% of the respondents admit to having been involved in gift authorship in the last three years.
We surmise that the issue of authorship assignment becomes more important with increased expectations and rewards for publications based on simple quantitative measures, such as the Hirsch index. “Pressure” was the term mentioned most frequently in the open comments section of the survey, and increased competition among researchers and research groups for funding and positions was mentioned in this connection.
Furthermore, the issue of gift authorship reflects internal power structures within the scientific community. Theoretically, motivations to assign gift authorship may be twofold. Firstly, the inclusion of the name of a highly regarded researcher will increase the likelihood of the paper having a substantial impact and being cited within the relevant community. Secondly, a supervisor or other senior, powerful member of the research group may expect to be mentioned as co-author. For many, in particular younger researchers, it is difficult to deny this request without negative repercussions. Both cases reflect the hierarchical structures within the research community and are reinforced by the reward system of funding and careers.
The practice of salami slicing, i.e. breaking research findings up into the maximum number of “least publishable units” (Broad, 1981), follows the logic of the authorship issue discussed above: the higher the number of publications, the greater the rewards. Here, we note that more than 35% of the respondents find this practice either not problematic or only somewhat problematic, despite the formulation used in the question, “at the expense of scientific quality”. There is, of course, a rationale for this viewpoint, namely that the very content of the scientific communication is presumably not essentially changed whether it is published in one piece or in several. The reason for regarding the practice as ethically problematic is anchored not in truth content, but in the ease of reception in the scientific community and in the chopping up of research that was designed to form a unit. A further reason is that it disproportionately distorts the work effort behind the publications: “hard numbers on a curriculum vitae no longer necessarily add up to hard work” (Broad, 1981, p. 1137).
Given the high rate of acceptance of salami slicing, it is perhaps a bit surprising to find that only about 8% admit to having engaged in this practice. Whether this is a true reflection of the researchers' reality might, of course, be questioned. There is a well-known bias towards assigning ethically problematic behaviour to others rather than to oneself; we tend to be more lenient with ourselves. Thus, researchers might find good reasons to segment their own research results, even though outsiders might question this and would have expected a more comprehensive publication in a single piece.
Response to External Pressure
One feature of the more recent realities of research is the closer interaction with external funders and stakeholders in a research project. Often, this interaction is a contractual requirement for the research funds. The interaction, and often collaboration, with these non-scientific bodies and actors could potentially raise the level of conflict, as they may enter the project with differing objectives and interests. This intermingling of interests and objectives has led to public concern that scientific research could become instrumentalised for powerful interests and that reported results could be biased, thus compromising full veracity (Footnote 6). One mechanism would be for funders or stakeholders to exert pressure on the scientists to change the study design, methodology or the published results of the study, according to their interests. There is a clear perception among most sectors of society that this would impede the freedom of science and diminish confidence in its operations. Yet there are frequent reports that this is indeed happening (Ingierd et al., 2019; National Research Ethics Committees, 2003), in spite of some institutional measures, for instance standard contracts for commissioned research, intended to prevent this practice.
The data (Table 5) show that 94% of the respondents find this unacceptable; correspondingly, 95% (Table 6) report not having experienced such pressure. Whether or not this result should be considered reassuring depends, among other things, on the percentage of respondents who regularly receive external funding and act as principal investigators in such research projects. Since 78% of our respondents were from universities or university colleges, and only 22% from research institutes, the average dependency on external funding in Norway may have been low.