Introduction

Research integrity is regarded as fundamental to the validity and reliability of scientific findings, and to the ethical conduct of research. Research integrity has multiple definitions (Martinson et al., 2013). The All European Academies (ALLEA) European Code of Conduct for Research Integrity describes ‘research integrity’ as good research practices, embodied by the principles of reliability, honesty, respect and accountability (All European Academies, 2017). Researchers broadly understand research integrity as akin to good scientific practice (for example, rigour and adherence to good methods), and some think it also includes personal integrity and the fair treatment of colleagues or research subjects (Metcalfe et al., 2020; Shaw & Satalkar, 2018).

Research misconduct typically includes plagiarism and the fabrication and falsification of data, but discussion is widening to encompass ‘sloppy science’ and questionable research practices (Bouter, 2015; Horbach & Halffman, 2017; Martinson et al., 2005). Sijtsma et al. (2016, p. 37) define questionable research practices as “debatable, disputable, doubtful, and problematic practices” in research. They can include unfair authorship attribution, ‘cherry-picking’ of data and strategic publication practices (Bouter, 2015; Martinson et al., 2005).

Regulation of researchers is on the rise (Resnik et al., 2015). More than half of the most research and development intensive countries in the world have national guidelines regulating research misconduct (Resnik et al., 2015). Several research organisations have instigated integrity campaigns (All European Academies, 2017; Estonian Research Council & Centre for Ethics at University of Tartu, 2017; United Kingdom Research Integrity Office (UKRIO) & The Royal Society, 2018; Universities UK, 2012). Some of these have been developed in collaboration with stakeholders and subsequently adopted by research institutions, for example in the UK, The Concordat to Support Research Integrity (Universities UK, 2012) and the Estonian Code of Conduct for Research Integrity (Estonian Research Council & Centre for Ethics at University of Tartu, 2017). In Norway, the government issued the Research Ethics Act in 2017, which extends former legislation’s emphasis on the legal responsibility of individual researchers by also imposing a legal responsibility on research institutions (Ministry of Education and Research Norway, 2017). These documents set out responsibilities for research institutions to promote research integrity and foster best practices in research, with emphasis on building a positive research culture.

Researchers, research administrators and research governance advisors are essential agents in research, generating project ideas, securing funding, conducting research, managing resources, organising project activities, and sharing results publicly. Cases of misconduct reported in the media tend to focus on the individual perpetrator as a ‘bad apple’ (Fanelli, 2009; Sovacool, 2008), and it has been suggested that Machiavellian and, to a lesser extent, narcissistic and psychopathic personality traits can play some role in research misconduct (Antes et al., 2007; Tijdink et al., 2016). However, it is widely thought that, in addition to internal psychological factors, the external research environment influences research integrity and plays a role in cases of research misconduct and a range of questionable research practices (Anderson, 2018; Fanelli et al., 2015; Horbach et al., 2018; Metcalfe et al., 2020; Nuffield Council on Bioethics, 2014; Tijdink et al., 2014). In addition to the narrative of ‘individual impurity’, two other narratives of scientific misconduct are recognised, namely ‘institutional failure’ and ‘structural crisis’ (Sovacool, 2008). Research activities typically take place within formal institutions such as universities and, as such, the integrity of this research is at least partially dependent on the structures in which those institutions are situated. In this context we take ‘structural’ to mean the systems in place to facilitate, regulate and evaluate research (for example, funding structures and research evaluation). By ‘institutional’ we mean the internal organisation of research and researchers (for example, employment contracts and university research ethics committees) within individual institutions (for example, universities).

Arguably, a strong culture of integrity with norms and values germane to research integrity is a vital part of promoting good research (Forsberg et al., 2018). It has been suggested that those involved in the practice of research have a responsibility to build a positive research culture to ensure that research conducted is valuable, ethical and of good quality (All European Academies, 2017; Bouter, 2015; Forsberg et al., 2018; Nuffield Council on Bioethics, 2014).

The institutional and structural conditions for the conduct of research are not static. For decades, research has seen a rise of academic capitalism, or “institutional and professorial market or marketlike efforts to secure external funds” (Slaughter & Leslie, 1997, p. 209). These changes have been structural in that an increasing amount of research funding is funnelled through governmental grants and industry contracts acquired through competition. Research has also seen an increased market-orientation in that governments increasingly base funding on the performance of research institutions along parameters such as publishing, a form of neoliberalism (Macfarlane, 2021). Research institutions have translated these structural conditions to institutional practices and pressures, such as rewarding individual researchers for publishing, and conditioning employment on an ability to secure external funding (Slaughter & Leslie, 1997).

Increasing competition in research can have positive effects, such as increasing the prestige of research at the institutional level and promoting fruitful collaborations with industry, which in turn can provide a more solid basis for providing high quality education (Grossi et al., 2020). However, there is evidence to suggest that structural and institutional pressures might also lead to questionable research practices and misconduct amongst researchers (Anderson et al., 2007; Buljan et al., 2018; Fanelli, 2010; Nuffield Council on Bioethics, 2014; Tijdink et al., 2016). These pressures include the ‘publish or perish’ system, unhealthy competition, and the way research quality and productivity are evaluated and rewarded (Anderson et al., 2007; Aubert Bonn & Pinxten, 2021a; Buljan et al., 2018; Fanelli, 2010; Nuffield Council on Bioethics, 2014; Royal Society, 2017; Tijdink et al., 2016; Wellcome Trust, 2020). As a result, promotion and recognition can effectively reward dubious behaviours (Biagioli et al., 2019; Nuffield Council on Bioethics, 2014). There is also a perception that conducting research to the highest ethical and methodological standards is not always fully recognised or rewarded (Buljan et al., 2018; Royal Society, 2017), and that institutions are responsible for perpetuating publication pressures (Tijdink et al., 2014, 2016).

This study aimed to broaden understanding of the pressures on research integrity, and their impact, within higher education institutions in Europe. The data here report researchers’, research administrators’, research managers’ and research governance advisors’ views and experiences of research integrity in practice. Our study adds to the growing body of work on researchers’ perspectives on the pressures on research integrity (Anderson et al., 2007; Aubert Bonn & Pinxten, 2021a, b; Buljan et al., 2018; Faria, 2015; Metcalfe et al., 2020; Nuffield Council on Bioethics, 2014; Tijdink et al., 2016; Wellcome Trust, 2020), presenting an international perspective detailing findings from research workers in Estonia, Italy, Norway and the UK. This study also includes the views of research governance personnel, who are under-represented in empirical research examining this topic.

Methods

The work presented here has its origins in a wider-scoped project report (Kennedy et al., 2018), part of an international, multi-disciplinary project: Promoting Integrity as an Integral Dimension of Excellence in Research (PRINTEGER, 2015).

A qualitative approach was used to investigate the voices of the research workforce (researchers, research managers, research administrators and research governance advisors) and their experiences of research integrity in practice. Four partners were involved in this task, from Estonia, Italy, Norway and the UK. All partners worked according to a common, collaboratively developed research protocol. The focus group question schedule and process were tested and refined following a pilot focus group conducted by the UK team.

Sample

The recruitment and data collection for all the focus groups reported in this paper occurred during 2017. Each partner country recruited researchers and research administrators/managers/governance advisors from higher education research institutions within their own country. Partner countries recruited participants from between one and three institutions. The selection of institutions was made by the research team in each country and did not seek to be representative of the country as a whole. Purposive sampling was used to identify potential participants (Ritchie et al., 2003), whereby each team selected participants with the aim of recruiting male and female researchers across a range of disciplinary backgrounds and at different levels of seniority. The research administrators/managers/governance advisors were university personnel in research management and research support roles and were sampled to include individuals with experience of different disciplinary fields where possible. Each research team used a variety of strategies to identify potential participants, such as utilising university staff information webpages, institutional contact directories and consultation with department heads or administrators. Invitations and information sheets were sent to potential participants by email, with a reminder after two weeks.

Participant recruitment was challenging, with all partner countries reporting low response in at least one of the focus groups. Moreover, scheduling focus groups was complicated by limited participant availability. Thus, although the aim was for focus groups to be mixed in discipline and gender, this was not always possible. All groups were homogeneous in the career level and/or role of participants. This composition allowed exploration of disciplinary variations between participants while maintaining a degree of homogeneity to encourage group cohesion. There were four types of focus group: (i) junior researchers: early-career researchers (fewer than 5 years post PhD), PhD students, research assistants/associates or equivalent; (ii) mid-level seniority researchers: mid-career researchers (5–10 years post PhD or equivalent professional experience), research fellows, assistant professors; (iii) senior researchers: professors or readers (research roles where an individual is a principal investigator or in a management role); and (iv) research managers, administrators and advisors: individuals involved in developing governance policies and supporting researchers. Participant demographics are detailed in Table 1 below.

It was a requirement that participants were fluent speakers of the language of the partner institution conducting the focus group (English, Estonian, Italian or Norwegian, respectively) or of English.

Table 1 Participant demographics per focus group

Data Collection

Partners conducted four focus groups each. The total number of focus groups was 16, and the total number of participants was 87. Focus groups were conducted in English or the native language of the national research team. Where a language other than English was used, the project documents were translated into that language. At the beginning of each session, participants gave their written informed consent. The question guides followed a format of a warm-up question to introduce participants, followed by questions that became increasingly specific, targeting key areas of interest for the research, and ending with closing questions that summed up and checked key points with participants (Krueger & Casey, 2014).

The focus groups were audio recorded, and a second member of the research team was present to take fieldnotes to gather contextual information about the session, such as observations about group dynamics, and body language (Finch & Lewis, 2003; Krueger & Casey, 2014). In the Norwegian focus groups, the notetaker also helped facilitate interviewing. The audio-recordings of each focus group were transcribed verbatim in the language spoken in the focus group.

Data Analysis

Each team independently analysed the data from their own focus groups, with analysis running concurrently with data collection, beginning on completion of the first focus group. To approach data analysis in a consistent and methodical manner across all four teams, each team followed Krueger and Casey’s (2014) ‘classic analysis strategy’. This method is a form of constant comparative analysis, and a systematic framework approach to identify inductive themes in focus group transcripts and categorise findings (Krueger & Casey, 2014).

Each research team wrote a report for each of their four focus groups, providing an overview of the session and detailing the findings of their analysis, including verbatim quotes to illustrate findings. Research participants were given the opportunity to check the report corresponding to their focus group and suggest minor changes, for example removing detail to ensure anonymity, thereby providing respondent validation of the accuracy of the analysis and write-up of the focus groups through clarification and feedback (Noble & Smith, 2015). Checked and finalised reports were then provided to the UK team (in English) and formed the basis for the synthesis of the data.

The synthesis of qualitative data across all the focus groups drew upon the seven-stage meta-ethnography technique originating from Noblit and Hare (1988). This technique provides a means to synthesise outcomes from separate qualitative studies (Noblit & Hare, 1988) and was thus chosen as a systematic means to organise the reported outcomes from each nation’s focus groups, and then identify any further patterns (themes) arising from them. The stages undertaken to synthesise our data were as follows:

Stage 1: Getting started - We chose the focus of the research to fit the aims of the PRINTEGER project (investigating research integrity at the ‘workfloor’ level). Stage 2: Deciding what is relevant to the initial interest - We designed the focus groups (participants and questions) to meet the aims and objectives of our chosen focus and to investigate our research questions.

Stage 3: Reading the studies - The researcher (MK) read through all the analysis reports in full (but not all the original datasets) so that she became familiar with the findings from all four countries.

Stage 4: Determining how the studies are related - From the outset, the focus groups and reports followed the same design across all the countries and participant groups, and closely related to each other around concepts of research integrity.

Stage 5: Translating the studies into one another - The data were organised in NVivo by grouping together answers to the same questions across the 16 focus groups and stratifying the answers according to participant type (i.e., junior, mid-level and senior researchers, and research administrators/managers/governance advisors) and country (i.e., Estonia, Italy, Norway and the UK). This created a framework of comparable data, enabling direct comparison of how the different focus groups related to each other across sub-topics and sub-groups. The interpretive content of each focus group report (written by the reports’ authors) was categorised along with any verbatim quotations from participants. These different elements of the reports were understood in a hierarchy in which verbatim participant quotations are first order constructs and author interpretations are second order constructs.
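
To make the shape of this stratified framework concrete, the short sketch below is a hypothetical illustration in Python of the kind of organisation described above; the study itself used NVivo, and the class, function and category names here are invented for illustration only.

```python
# Hypothetical sketch of the comparable-data framework described above.
# The study organised its data in NVivo; this is an illustration only.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Extract:
    """One unit of report content: a first order construct (verbatim
    participant quotation) or a second order construct (the report
    authors' interpretation)."""
    text: str
    order: int  # 1 = participant quotation, 2 = author interpretation

# framework[question_category][(country, participant_type)] -> list of extracts
framework = defaultdict(lambda: defaultdict(list))

def add_extract(question_category, country, participant_type, extract):
    """File an extract under its question category, stratified by country
    and participant type, so that answers to the same question can be
    compared directly across the 16 focus groups."""
    framework[question_category][(country, participant_type)].append(extract)

# Example usage (the quotation appears in the findings reported below):
add_extract(
    "barriers to research integrity",
    "Estonia",
    "mid-level researchers",
    Extract("it is very difficult for us to separate ourselves from this system", order=1),
)
```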

Stage 6: Synthesizing translations - Here, themes were developed for the data across all the reports. These themes are third order constructs that are an interpretation of the first and second order constructs. Themes were developed using an iterative process of coding both first order and second order accounts (stratified across each participant group, country, and question category) to identify and categorise the textual data dealing with a common phenomenon as well as any outliers.
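
Continuing the illustrative sketch above (again hypothetical, with invented names), the iterative coding in this stage can be pictured as tallying coded extracts per candidate theme within each country and participant-type stratum, which makes commonly occurring phenomena, and outliers, easy to see:

```python
# Hypothetical continuation of the sketch above: tallying coded extracts
# per third order theme within each (country, participant type) stratum.
from collections import Counter

def tally_themes(coded_extracts):
    """coded_extracts: iterable of (theme, country, participant_type)
    triples produced by iteratively coding first and second order
    accounts. Returns counts per (theme, country, participant_type)."""
    return Counter(coded_extracts)

counts = tally_themes([
    ("competition", "Norway", "junior researchers"),
    ("competition", "UK", "senior researchers"),
    ("workload", "UK", "research governance advisors"),
])
print(counts.most_common())  # most frequently coded theme/stratum combinations
```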

Stage 7: Expressing the synthesis - This involved describing the themes developed in the synthesis according to each question category. Here, similarities and differences in findings from different focus groups were described using the first order constructs (participant verbatim quotations) as evidence. The resulting report was shared with wider members of the team for peer review.

The synthesis of the focus groups identified a broad array of themes relating to different aspects of research integrity. In this paper, we describe themes arising from participants’ responses to questions about good and bad research, barriers to research integrity, and promoting research integrity.

Findings

Across the four participating countries, researchers and research administrators/managers/governance advisors identified a number of organisational influences on research integrity and offered their ideas for promoting research integrity. Structural matters such as the funding, evaluation and publication of research were highlighted, as well as institutional responses, which shape participants’ day-to-day work environment, including workload and research governance. The theme of ‘competition’ permeated all of these.

The quotations have been selected to best demonstrate the ideas and themes of our findings. These tended to be best communicated in the Estonian, Norwegian and UK focus groups because more verbatim quotations were presented in these focus group reports than in the Italian ones. Although this has resulted in fewer quotations from the Italian focus groups in this paper, the same themes were present in all focus groups.

Competition

Participants perceived academia as increasingly commercialised and competitive, describing it as “harsh scientific capitalism” from which it was hard to escape:

…what is a problem is that, on the one hand, this system is neoliberal and market-based, but it is very difficult for us to separate ourselves from this system.

Mid-level Researcher, Estonia

Competition was seen to have “grown and grown,” existing between faculties, universities, and globally. Moreover, where research sits within a highly commercialised area, for example machine learning, there was concern that a focus on “trying to beat each other and be the first to do x, y, and z” undermined the importance of collaboration in scientific research. One participant described a “catastrophic trend” towards favouring those who are prolific in publishing over the “erudite” who works more slowly and does “not publish original science so much in an international arena”. The competitive research system was viewed as influencing research integrity through its funding and evaluation procedures, publishing practices and the work environment. However, competition was seen as both “an opportunity and a challenge”, and it was thought a balance was needed to produce the best research:

…it is a kind of balance between security of work and competition. If it goes too far to one extreme… only tenure-based positions where you can just hang about and you end up crazy, or then places where you have to think each moment, ‘where do I get the next grant from?’ … you do not do the best science there either. There is, like a sweet spot, somewhere between.

Junior Researcher, Estonia

This theme of competition permeated all others, as it was seen as the driving force behind the organisational structure of research and the role institutions take. Competition was seen as both a driver for good research and a source of pressure against integrity.

Structural Influences

Structural influences are factors affecting research integrity stemming from the organisation and processes of the wider research system.

Funding

Participants voiced concern over private funders’ power and negative influence on academic freedom, believing that the publication of results “depends to a certain extent on the commercial interest of the financers of the research”. Participants noted the implications of a decrease in public sources of funding and an increase in the proportion of funding that is private. Where industry was involved, it was seen to compromise the integrity of the aims of the research.

“As research funding is squeezed in all directions, more and more are going. People are going to industry because they do have funding but that comes at a price in terms of the industry party would drive the research question a lot more potentially because they need something out of it as well, it’s not for the benefit of academia and research. It will be to improve their products and improve their sales.”

Research governance advisor, UK

One participant in Norway also described funders as being inflexible when unforeseen challenges would require researchers to make adaptations to the original research plans. It was thought that funders’ unwillingness to accommodate changes involving higher costs could compromise the quality of the research:

…[The topic] turned out to be more sensitive than we had anticipated, and people wanted to be interviewed, but they wanted to be interviewed alone... then we find ourselves in a huge dilemma because we are forced to do interviews in groups [because that was originally agreed with the funders].

Senior Researcher, Norway

It was thought that some funders (including public organisations) compromised research integrity by frustrating individuals’ wishes to pay their researchers fairly.

I simply cannot pay him no matter how much I would like to... the project money is so small that you are basically starving these people.

Mid-level researcher, Estonia

Here research integrity was understood in terms of respectful and fair treatment of colleagues. Participants believed that short-term funding, and therefore short-term employment contracts, meant that some researchers were driven by the need to find a follow-on job. In addition, they thought the emphasis on quantity over quality in evaluation was not conducive to conscientious working:

It is very tempting to just churn out publications, because I need food next year too. It is as simple as that. And it is this way because of the way research is financed and organized.

Junior researcher, Norway

Evaluation

Evaluation of research was identified as a challenge to integrity in focus groups in all participating countries. Across the board, participants linked the number of publications to career progression, and noted a tendency to value quantity of publications over their quality. This was thought to be damaging to research, encouraging and perpetuating a proliferation of low-grade publications and academic journals:

…when they started to measure science… it also started to create distortions in scientific activity… you get what you measure, and if your criteria for performance is how many articles… then you will get articles and perhaps less real substance. In my opinion this creates distortions both in the research itself and in the application for grants…

Mid-level researcher, Estonia

Participants thought the evaluation systems encouraged questionable research practices as researchers strive to get ahead in their careers and resort to practices such as salami-slicing (whereby an author or set of authors publishes two or more papers that answer similar research questions, when one paper would have been sufficient for communicating the findings):

…there is a lot of research that has to do with academia becoming an industry... people turning the handle on ‘this is what gets you promotions’ and ‘this will get you that,’ so it’s sort of salami sliced research.

Senior researcher, UK

In the UK, senior and mid-level researchers spoke critically of the Research Excellence Framework (REF) for failing to measure “things that really matter” and placing “pressures” on researchers to publish. Some Estonian researchers were disapproving of the Estonian Research Council for having a “superficial” review process which was an “international joke.” They were also critical that evaluation systems are “not really fair” because they function in a way that favours some disciplines over others.

Despite many criticisms of the evaluation systems, some individuals conceded that quantitative criteria for evaluation are better than “arbitrariness,” and thought that academic research needed to “invent” a “positive evaluation programme.”

Publishing

Across all focus groups, a great deal of researchers’ discussions about the challenges to research integrity revolved around publishing, which was viewed as a “huge … industry.” This pressure was regarded as positive up to a point:

… I am not negative to there being some pressure in a knowledge-producing institution. There should be, and we need to publish our work. But that pressure should not be so strong that it breaks you or makes somebody take shortcuts.

Senior researcher, Norway

Some of the criticism about publishing centred around the types of research academics are incentivised to publish, such as reporting positive findings over negative, or publishing new or “uncomplicated” material because these are more publishable. Researchers thought this emphasis could encourage questionable research practices such as disregarding data where “integrity may be sacrificed … to really sell your results.” The “pressure to publish things that are new” was thought to reduce the instances of replication, hindering the self-regulatory mechanisms of research.

“…the incentives structures are not necessarily aligned in a way that encourages research integrity… it’s much easier to get a positive result published than a negative one… so you were looking for some effect and thanks to a bit of data that can possibly be ignored for some slightly dodgy reason that you’re not quite getting the feedback that you want, you ignore those data… ‘cos if you think it’s unlikely that anybody’s going to try and repeat my research, then there are fewer incentives to carry it out right in the first place…”

Senior researcher, UK

Participants perceived an emphasis on impact and quantity of output over quality, especially as the evaluation system is based on the number of citations.

…without thinking that even a paper published in a famous journal as Nature can contain untrue results and get cited many times, basically because the experiment cannot be reproduced.

Senior researcher, Italy

It was thought that attention to publications was “unbalanced” and researchers “should have more time for research” because researchers spend less time developing original ideas, instead spending time writing papers:

…in our case [humanities] important things can be left unstudied, because you have to play by those rules. Publish in places where there are no interested and knowledgeable readers and do all the time new things instead of finishing something off properly.

Senior researcher, Estonia

This was thought to lead to unnecessary publications.

…today we are rushing into the publish or perish … [many papers are published even if] they have no reason to exist.

Junior researcher, Italy

The need to publish to be competitive was thought to lead early career researchers to submit their work for publication before it is truly ready:

…the pressure to publish … before you actually maybe feel ready or before … you’ve done enough primary research yourself …mean[s] you’re forced to publish work that you feel yourself isn’t quite ready maybe sometimes.

Junior researcher, UK

Some participants were concerned that fast-turnaround times imposed by demands to publish meant that “mistakes are going to be inevitable.”

Peer Review

Participants thought that peer review was not sufficient for ensuring that high quality research was published; for example, authors were not always truly anonymous, making reviews “fake blind.” There was also concern over “lazy” reviewers, and one participant blamed workload pressures for their own lower quality reviews.

…sometimes I think I’m guilty of that [not producing a high-quality review] because again it’s to do with pressures and so much reviewing to be done and sometimes I don’t think I’m particularly happy with the quality of my own review but, you know, there’s only so many hours in the day…

Senior researcher, UK

There was a belief that journals favouring positive results or endorsing certain methodological parameters could affect research quality, potentially stifling innovative research. In the UK, researchers cited examples of suspected poor behaviour by reviewers, who are thought to abuse the system by holding back publications by others for their own advantage:

I’m pretty sure I’ve had papers rejected with suggestions for lots of extra work and in the meantime two competitors published their own work so it’s almost certain that they were the reviewers.

Mid-level researcher, UK

Institutional Influences

Institutional influences are factors shaping participants’ day-to-day work environment, including workload, employment contracts and research governance support.

Workload

Participants reported having little time to participate in research ethics training or to discuss issues around research integrity, such that they do “not prioritise it, and do not have much time for it.” Notably, when there was time to discuss these matters and attend training, it was regarded as helpful. Researchers described having explicit training sessions which tended to occur early on, as undergraduate students, providing foundations for good practices, but there were some differences of opinion regarding the most effective timing for this training: some researchers thought it important “to instil it at an early level” but there were also concerns about the relevance of training delivered too soon: “it’s pointless telling someone who’s an undergraduate about it.” Researchers indicated that learning about research integrity becomes more implicit when working formally in research, where knowledge about issues directly relevant to their work is gained through practice and experience of doing research. In the UK, the research governance advisors advocated a mixture of structured online training and face-to-face discussion sessions such as case-based training.

Researchers experienced “time constraints and pressure to get things done” in their daily work, noting that projects are funded for a limited time, and “once it goes beyond that time it’s very difficult for [the researcher] to carry on working on the same project,” only adding to the pressure, which “leads to issues with people doing the right thing or the wrong thing.” One participant mentioned three forms of pressure that can lead to cheating among researchers:

“…[Researchers] cheat a little bit here and a little bit there because you are pressed on time and have to finish something or other. The funding runs out, your supervisor is pushing you… One takes shortcuts in such situations.”

Junior researcher, Norway

Research governance advisors in the UK recognised that formal ethical processes, although seen as “invaluable,” present researchers with yet another task, and create “a real barrier to them just being able to get on with their research” given they are “already stretched.”

Research governance advisors also cited workload and under-resourcing as problems in their own roles, leading to reactive, rather than proactive, governance.

… the work’s ever expanding, and it stretches everyone to full capacity and I think that’s where things can go wrong, that’s where things are missed … something will only happen when something has gone wrong and I don’t think that’s a really good position to be in at all … you’re firefighting aren’t you rather than having things in place and having the resources there to minimise the risk.

Research governance advisor, UK

Research Governance Support

Policies about research integrity were viewed as important for promoting discussion and “creating a culture” of research integrity, and of practical significance for setting standards:

I think it’s handy if there is a general direction somewhere. I think we live and work in a big, complicated institution, so it should probably be something that gets put down and that gets reviewed on a regular basis.

Mid-level researcher, UK

However, problems with policy were also identified. In Italy, an absence of specific guidelines on research integrity at institutions was reported, although some codes of ethics did mention integrity. In contrast, UK research governance advisors and researchers reported having a wealth of policies relating to different aspects of research integrity within their institutions.

… [we have a] core document for researchers which goes through the kind of hot topics and research integrity so things such as ethical approval, data protection, conflicts of interest, intellectual property, authorship and publications, so there’s a whole list of topics but pretty much all of those… have a separate university policy on them, … so it’s absolutely huge…

Research governance advisor, UK

In addition to institutional policies, researchers must also navigate policies relating to funding applications and publishing. In the UK, researchers and governance advisors reported information overload. Some researchers described how it can be difficult to navigate the “vast amount of information” found in numerous policies and guidelines.

Several participants (particularly in Estonia) were concerned about overregulation of research, whilst others voiced cynicism about the value and functionality of some policies. Some argued that policies grounded in ethics, without legislative backing and the “power of the law”, are ineffective because people can “circumvent” these codes. Others described problem policies as being “not-fit-for-purpose” or superficial “box-ticking exercises.”

In the UK, governance advisors reported variation in staff awareness of policies, and problems applying policies to particular circumstances in a meaningful way:

Where I think researchers struggle in my institution, and what we get a lot of queries about, is there’s some policies that are absolutely fit for purpose, in that they’ve been developed for a university wide purpose… [but] I think applicability to a specific case example is where they’re not always helpful.

Research governance advisor, UK

Several researchers encountered questions around research integrity which they felt could not be answered by policies or guidelines, and should be dealt with through discussion amongst peers:

One always gets questions which there are no right answers to, or codes of conduct for I believe. That one needs to discuss. I guess that is the right way, that researchers discuss it. Researchers who have similar experiences, so that one seeks each other’s counsel and do not make a private decision.

Mid-level researcher, Norway

Promoting Research Integrity

Participants had a number of ideas about how to promote research integrity.

As was evident from all focus groups, the highly competitive research environment was seen as putting people under enormous pressure to deliver. Thus, when asked how research integrity could be promoted, besides suggestions of how institutions could more effectively train individual researchers, strong emphasis was placed on the need for systemic changes in academic research to create conditions supportive of research integrity.

Across the focus groups, training on research integrity was considered essential in upholding the ideals inscribed in the various codes of conduct:

In order to train for research, it is important that in the degree and … PhD programs, opportunities for study, presentation and discussion of these tools of these materials are given. If we have to educate researchers, then it is important that these codes [of ethics] are presented to them.

Research manager, Italy

It was generally thought that training should be provided from the early stages of a research career, and that more senior research staff and research administrators should also undergo training regularly. Participants emphasised the importance of the quality of training and of avoiding creating a “tick-box” exercise. Participants considered that effective methods could include case-based training and having individuals who can be a “role model… leading by example.”

Nevertheless, there were concerns that training individual researchers was futile without systemic changes to the research environment, especially to the evaluation and incentive structures, to make them more compatible with research integrity, for example by including more qualitative measures in evaluation mechanisms such as valuing “other ways of generating impact and outreach” (i.e. public engagement activities), allocating more funds for repeating experiments and decreasing the bias towards positive results in publishing.

More generally, participants emphasised a need for (re)creating a positive research culture for integrity that would enhance resilience to external pressures and improve self-regulation within scientific communities. One research administrator in Norway said, “build it [research integrity] into the culture.” Junior researchers in Italy also thought that rules should be cogent and respected by researchers in order to develop a positive culture of research integrity.

Participants thought that, if researchers shared their experiences about their work and working methods, it may encourage feedback and communication about ways in which research could be improved, for example by learning from honest mistakes.

If you help them engage, they can become your research integrity champions... they can go out with their colleagues and say, ‘I made this mistake’ or ‘I wasn’t aware of this process but having gone through it, you know, I’m now fully informed of what I need to do to get my research to be done properly’.

UK research governance advisor

It was believed that building a stronger sense of community could lead to a better sense of “shared values” and therefore greater research integrity. Participants from all four countries argued that a more positive research culture and stronger research community would only be possible if institutions improved the working conditions of researchers. Providing more stable work contracts was highlighted as one solution: “Permanent contracts. I mean it, clean up employment relationships.” The widely shared feeling was that, while some competition is healthy, too many contracts are insecure and short-term, and there should be a better balance between security and competitiveness. For example, a Research Administrator in Estonia cited as “good practice” the provision of “bridge-financing” to project-based researchers who have a “funding gap.”

Some researchers also explained that access to existing practical tools, such as software for project management, document sharing and archiving, as well as collaboration between people and groups, facilitated research integrity. However, there was felt to be a need for more support in dealing with the pressures of the academic workplace, more attention to managing workloads, and assurance that researchers have adequate time to conduct quality research by “regulating the teaching load for scientists.”

Discussion

Our interpretation of these findings is that the dominant influences on research integrity are the research structures and institutions. Each anecdote and experience expressed by participants provides insight, and together they reveal a remarkable scene: individual researchers navigating a system whose mechanisms frequently put them in a position of feeling compromised between career success and behaving with full integrity.

To contextualise our findings, it is worth noting the academic and funding structures in place in Estonia, Italy, Norway, and the UK. In all these countries, researchers are employed on a range of contracts, with variation within and between countries. Researchers in Estonia typically have permanent contracts, reviewed at five-year intervals. The research positions (researcher, senior researcher, research professor) include a small element of teaching. In Italy, early career researchers are normally employed on short-term postdoc contracts of one year without teaching, or three years with teaching responsibilities; senior researchers have permanent positions. Researchers at Norwegian research institutions are employed on either permanent or fixed-term contracts. The two main types of permanent position are professor and associate professor, both of which include teaching. PhD positions and postdocs are the most common temporary positions and may or may not entail teaching duties. In the UK, researchers’ jobs are commonly short-term, even for mid-career researchers with several years of postdoctoral research experience. These contracts normally last the length of a particular research project and may include an element of teaching. Contracts may last weeks, months or years and may be salaried or hourly-paid. The traditional lecturer post can be permanent or fixed-term; it is an early to mid-career position with a hybrid of research and teaching. The senior versions of this are reader and then professor, both of which tend to be permanent.

In Norway, the research integrity system is governed through a national law. The law aims to hold research organisations accountable for introducing good systems. It does not seek to regulate the behaviour of individual researchers, except in the most serious cases of misconduct: falsification, fabrication, and plagiarism, which are explicitly illegal. The law establishes national committees for research ethics and integrity. These provide codes of conduct and guidelines for specific fields, and researchers can ask the committees for advice on ethical issues. The law also requires there to be a national investigatory body for research misconduct. Universities are required to introduce and maintain codes of conduct at the organisational level and to promote a culture conducive to integrity.

There is no dedicated law on research integrity in Estonia, Italy, or the UK; in these countries, there are national guidelines. For example, in Estonia the Ethics Code of the Estonian Scientists was approved by the Estonian Science Academy in 2002. In 2017, the Estonian Code of Conduct for Research Integrity was created by the Estonian Research Council and the Centre for Ethics at the University of Tartu; it was signed by all major research institutions in Estonia but has no legislative or regulatory force (Estonian Research Council & Centre for Ethics at University of Tartu, 2017). In Italy, the Commission for Research Ethics and Bioethics of the National Council of Research developed Guidelines for Research Integrity (in 2015, updated in 2019) (CNR Research Ethics and Integrity Committee, 2019). In the UK, there is a Concordat to Support Research Integrity (Universities UK, 2012), developed and supported by Universities UK, major funders and the UK Research Integrity Office (UKRIO). The UKRIO has an advisory role, but the office carries no legal authority. In Estonia, Italy and the UK, universities establish their own research ethics guidance and committees, and in the UK, research conducted with the National Health Service (NHS) has to be approved by an NHS Research Ethics Committee. In Italy, Estonia and the UK, research ethics committees focus on the conduct of research studies rather than on the treatment of results and publications. In Norway, research ethics committees are mostly concerned with the treatment of participants, and there is an additional system for approving the handling of personal data.

In all four countries in this study, university research funding comes from a mixture of public and commercial sources. Importantly, there is baseline public funding and competitively distributed public funding in all four countries.

Our findings suggest that participants across all four participating countries believe that competition is on the increase, and that it plays a dominant role in shaping the research environment and, consequently, the behaviour of researchers. Other studies report similar results: a high level of competition amongst researchers has been identified as a potential contributing factor in breaches of research integrity, cases of misconduct, and questionable research practices (Anderson et al., 2007; Metcalfe et al., 2020; Nuffield Council on Bioethics, 2014; Tijdink et al., 2014, 2016). Participants thought that competition had become integral to academic institutions, which now operate a market-based system. Competition seems to be fundamental to the way research is structured, as seen in evaluation frameworks and funding. Perhaps in response to this, institutional systems are structured around competition too. In our study, we noticed that the theme of competition permeated other themes. Participants explained in detail how their practice is influenced by various factors, from funding schemes to peer review and workload. What is striking is that the common feature is competition. This indicates to us that when research structures and institutional systems are based on competition, it becomes harder to maintain integrity.

The rationale for competition in academia is understandable. Participants regarded it as a positive influence on productivity, and indeed the intended purpose of competition is to drive innovation and better-quality research, and to distribute resources fairly (Edwards & Roy, 2017; Fang & Casadevall, 2015). High quality research is in the public interest, and one way to achieve this quality is by ensuring the resources go to the best researchers and best project ideas, which in principle may be ensured through competition between individuals, institutions, and project ideas. Structures can be set up with the intention of achieving this. Crudely put, institutions offer career rewards for success in research, measured in terms of quantity and quality of publications, funding awards and formal research evaluation schemes (e.g., the REF in the UK). These measurements have been described as creating ‘perverse incentives’ for questionable behaviour (Bouter, 2015; Edwards & Roy, 2017). One participant’s use of the saying “you get what you measure” encapsulates this. As an example, our participants believed the pressure to publish affected the quality of submissions to journals, for example authors submitting their work before it is ready, and a preference for sharing positive findings over negative results. This is supported by other literature (Fanelli, 2010; Metcalfe et al., 2020; Tijdink et al., 2014). Our participants reported generally favouring quantity over quality in order to be promoted, mirroring findings from research elsewhere (Aubert Bonn & Pinxten, 2021a; Buljan et al., 2018; Faria, 2015; Metcalfe et al., 2020; Nuffield Council on Bioethics, 2014; Wellcome Trust, 2020).

Our findings give a clear indication that, in the four European countries covered, competition can obstruct research integrity. We can also look to the USA, where the research environment is increasingly hypercompetitive, and where promotion and tenure processes have been presented as the most important barrier to collaborative research (Cohen & Siegel, 2005). Competition is heightened as funding is squeezed, the number of researchers outstrips the number of academic positions, and publications proliferate (Edwards & Roy, 2017; Fang & Casadevall, 2015). Academics are asked to compete for funding, publications, students, job roles, support staff and positions on boards and panels, with the goal changing from scientific discovery to generating money (Anderson et al., 2007), which has also been highlighted as problematic in the EU (Faria, 2015). It has been argued that, in the USA at least, the very values behind scientific enquiry have changed, so that the financial interests of researchers have caused a shift away from valuing knowledge and ethics towards valuing profit (Sovacool, 2008). Thus the ‘dark side’ of competition is acknowledged (Anderson et al., 2007; Fang & Casadevall, 2015), as it can push individuals to behave in ways that compromise research integrity (Anderson et al., 2007; Edwards & Roy, 2017). Hypercompetition is thought to encourage individuals to play the system strategically (Anderson et al., 2007; Edwards & Roy, 2017), as participants described in our own study. There is evidence that researchers are reluctant to share resources (Fang & Casadevall, 2015) or information (Anderson et al., 2007). Our participants and others report sabotage of others’ progress, including interference with the peer-review process (Biagioli et al., 2019), and deformation of relationships where, at its extreme, academics have deliberately impeded the progress of those they were supervising in order to serve their own gains (Anderson et al., 2007). In our international study, as in a UK-based study, researchers reported that aggressive, unkind behaviour may be fuelled by the competition for grants (Wellcome Trust, 2020).

While questionable research practices or misconduct may be committed by individuals, our findings show that the environment is set up in such a way that it inadvertently encourages such practices, and that the pressure comes from the structure of research systems and research institutions. The question, then, is what can be done to change this?

It is thought by some that integrity and misconduct policies, and an academic culture that supports research integrity, are likely to be important in preventing misconduct (Fanelli et al., 2015). Participants in our study had mixed views on the value of guidance and codes, although they did welcome clear, positive guidance that is fit for purpose. Other research in the UK suggests that governance processes are criticised by some researchers for being overly bureaucratic, time-consuming, and overregulating, but that most researchers think ethical review processes do have a positive effect in encouraging quality research (Metcalfe et al., 2020; Nuffield Council on Bioethics, 2014). Governance is important for indicating established good practice and drawing attention to the formation of bad habits, which is helpful for training researchers, something that is widely advocated in guidelines to promote research integrity (Estonian Research Council & Centre for Ethics at University of Tartu, 2017; The Royal Society, 2017; Universities UK, 2012). However, guidelines and codes of conduct are not sufficient to ensure integrity unless they operate within a “resilient research culture” (Zwart & ter Meulen, 2019). While codes of conduct are helpful in outlining the core values of research and appropriate behaviours, their effectiveness remains limited when researchers are struggling in an increasingly competitive research system that can reward questionable research practices and misconduct.

Our participants emphasised the need for systemic changes that would create conditions to support research integrity. They offered some ideas about what resilient research cultures that promote research integrity might look like: stronger research communities that emphasise collaboration over competition, reduced pressures to publish, and improved working conditions. Participants offered some practical solutions, such as clear institutional guidance and training, ensuring that research outputs are evaluated in terms of quality rather than quantity, and openness to recognising a variety of research outputs (for example, public engagement activities). Similar suggestions for improvement of the research system have been reported elsewhere (Aubert Bonn & Pinxten, 2021a; Metcalfe et al., 2020; Wellcome Trust, 2020). It remains to be seen whether these proposals would be helpful in practice, and there is a need for further research to elucidate what works so that the policies developed can be evidence-based (Bouter, 2020).

Participants’ concerns around research integrity and their suggested solutions indicate that a ‘slow science’ approach may be the keystone to a better research system. Slow science is a focus on quality over quantity and speed in research, “doing less but better” (Frith, 2020). Whilst participants regarded some degree of competition as positive, they thought the research system should steer away from fast-paced, hypercompetitive research that risks undermining integrity.

Our interpretation of our findings, supported by the literature, gives good reason to suppose that explanations of questionable research practices and cases of misconduct should not focus solely on ‘rotten apples’, or even ‘barrels’, as the problem (Faria, 2015); rather, it is the orchard that has been poorly landscaped. This supports a shift away from individual blame and towards structural and institutional changes, recognising that the problem lies not just with institutions of higher education but also with organisations in the wider research environment, for example funding bodies and publishing companies: everyone in the research system has a responsibility to act to safeguard research integrity. The Wellcome Trust (2020) drew similar conclusions, with Director Jeremy Farrar commenting: “A poor research culture ultimately leads to poor research. The pressures of working in research must be recognised and acted upon by all, from funders to leaders of research and to heads of universities and institutions.” (Wellcome Trust, 2020, p. 3). Individuals, institutions and organisations in the research community globally need to communicate and collaborate to reach understanding and develop solutions that will promote research integrity (Anderson, 2018; Aubert Bonn & Pinxten, 2021b).

Limitations and Future Research

A limitation of this study is that one cannot reliably generalise from the focus groups. Participants were self-selecting, which may have introduced bias into the study. As a qualitative study, there was no aim to be representative of all the researchers/research managers/administrators/governance advisors from all four countries. There are feasibly many relevant views and experiences that have not been captured here. A wider and more in-depth study could include a greater number of participants across more academic fields and administrative roles and facilitate detailed comparisons between these different dimensions.

It should also be recognised that these findings concern the perceptions of researchers, research managers, research administrators and research governance advisors, and are not themselves evidence of particular phenomena, which could be investigated using objective measures (Fanelli, 2009).

The study was limited to the university sector. It would be worthwhile investigating views and experiences from other research settings, for example the private sector. This is under-represented in the literature.

Conclusion

These focus groups give valuable insight into the views and experiences of research integrity of key players in research practice. Importantly, this included the views of research administrators, research managers and research governance advisors as well as researchers. The study covered four European countries: Estonia, Italy, Norway, and the UK. Our understanding of the findings is that various academic structures within research institutions create a perfect storm for questionable research practices and, in some cases, even misconduct (such as plagiarism and fabrication). These findings were evident across all four countries, indicating that competition is pervasive and can have a negative impact upon research integrity. Some suggestions for improvement include training on research integrity, better guidance and support, and the development of collaborative research communities with shared values that promote integrity. While these measures are laudable, on their own they do little to stem the structural and institutional pressures on individuals. What is needed are organisational measures and institutional policies that recognise the struggle of individual researchers within the structures set out by research institutions, whether they be universities, publishers, funders or research evaluation bodies. We suggest that everyone in the research system has a responsibility to act to safeguard research integrity, and that a slow science approach could be the keystone to developing a research system where integrity is promoted and supported.