
1 What Is Quality Governance?

The term governance is derived from the Greek verb kubernaein [kubernáo] (meaning to steer), and its first metaphorical use (to steer people) is attributed to Plato. The term later gave rise to the Latin verb gubernare and was subsequently adopted into various languages. Only since the 1990s has the term governance been used in a broad sense encompassing different types of activities in a wide range of public and private institutions and at different levels (European Commission, http://ec.europa.eu/governance/docs/doc5_fr.pdf).

As a result of the broad application of the term governance, there are multiple definitions, among which the following is a good example (http://www.businessdictionary.com/definition/governance.html): “Establishment of policies, and continuous monitoring of their proper implementation, by the members of the governing body of an organization. It includes the mechanisms required to balance the powers of the members (with the associated accountability), and their primary duty of enhancing the prosperity and viability of the organization.”

In simple terms, governance is (a) a means to monitor whether you are on a good path to achieve the intended outcomes and (b) a means to steer in the right direction so that you proactively prevent risks from turning into issues. It is almost like looking in the mirror every morning and making sure you look OK for whatever your plans are that day, or like watching your diet and getting regular exercise in order to keep your cholesterol levels under control.

How does this translate into quality governance? Quality simply means fitness for purpose; in other words, the end product of your work should be fit for the purpose it is meant to serve. In experimental pharmacology, this means that your experimental outcomes should be adequate for supporting conclusions and decisions, such as on the validity of a molecular target for a novel treatment approach, on the generation of a mode-of-action hypothesis for a new drug, or on the safety profile of a pharmaceutical ingredient. The activities you undertake, from planning your experiment and generating the raw data to processing these data and ultimately reporting experimental outcomes and conclusions, should be free from bias and should be documented in a way that allows full reconstruction of the experiment.

Quality governance in the context of this book means potential ways in which institutions can monitor research data quality over time and have a mechanism in place to detect and deal with signals of drift. The purpose and concept of quality governance are visualized in Fig. 1.

Fig. 1

A simple visualization of the purpose and concept of quality governance. Quality governance is the answer to the question: “How do I get in the green zone and stay there?” Each star represents a point in time where the organization reflects on or measures its quality level against what is considered fit for purpose. Over time, the measured outcomes will likely change, and when the outcome is in the “not acceptable” or “overengineering” zone, actions need to be taken to move back into the green fit for purpose zone. The curves represent theoretical examples of measured quality levels over time. The lower curve reflects an institution that has taken small continuous improvement steps to move data quality from not acceptable to fit for purpose level. The upper curve represents an institution that at a certain point in time finds itself “overengineering” and course-corrects to an appropriate level of quality

The definition of quality governance implies that, although very important, having policies or guidelines for good research practices is not sufficient. Such documents can be seen as an important building block for good quality research data. However, policies or guidelines will not reach their full impact potential and will not be sustainable over time when the monitoring component of governance is missing. Three aspects of governance are equally important: (1) there needs to be a mechanism to check whether people are applying the guidelines, (2) there needs to be a mechanism to make sure the guidelines remain adequate over time, and (3) there needs to be a mechanism to take the right actions when deviations are seen in (1) and (2). Having these three aspects of governance in place is expected to increase the likelihood of long-term sustainability and full engagement by those who are expected to apply the guidelines.

Common to all effective quality governance systems is the attention to both cultural (acceptance, engagement, sense of responsibility, etc.) and technical (guidelines, procedures, equipment, research data storage, etc.) aspects of quality. It is definitely worth investing time in an effective quality governance system, as it will help achieve quality research data (data fit for their purpose). Quality data lead to quality decision-making (for acceptance of the best publications, for grants to be given to the best project proposals, or for investing in the best active pharmaceutical ingredient). Altogether, quality research data will ultimately lead to unlocking the best possible innovation potential in order to help address unmet medical needs (Fig. 2).

Fig. 2

Quality research data lead to quality decision-making, which in turn is required for unlocking optimal innovation potential

The size and type of organization (university, biotech, pharma, contract research, etc.) and the type of work that is being conducted (exploratory, confirmatory, in vivo, in vitro, etc.) will determine the more specific quality governance needs. The next parts of this chapter offer a stepwise, practical approach to installing an effective, tailor-made quality governance system.

2 Looking in the Quality Mirror (How to Define the Green Zone)

How do you know where your lab or your institution is positioned relative to the different data quality zones schematically introduced in Fig. 1? The first thing to do is to define what success means in your specific situation. In quality management terms, success is about making sure you have controlled the risks that matter to your organization. Therefore, one may start by clearly defining the risks that matter. “What is at stake?” is a good question to trigger this thinking.

2.1 What Is at Stake?

Risks can vary according to the organization type. However, the following risks are generally recognized for organizations conducting biomedical research:

  (a)

    First and foremost comes the risk to patients’ safety. Poor research data quality can have dramatic consequences, as exemplified by the first-in-human trial conducted in Rennes, France, by the contract research organization Biotrial on behalf of the Portuguese pharmaceutical company Bial. During this trial, in January 2016, six healthy volunteers were hospitalized with severe neurological injuries after receiving an increased dose of the investigational compound. One volunteer died as a result (Regulatory Focus™ News articles 2016: https://www.raps.org/regulatory-focus%E2%84%A2/news-articles/2016/5/ema-begins-review-of-first-in-human-trial-safety-following-patient-death). The assigned investigation committee concluded that the investigator’s brochure (IB) contained several errors and mistranslations from source documents that made it difficult to understand. The committee recommended enhanced data transparency and sufficiently complete preclinical safety studies (Schofield 2016).

  (b)

    Another important risk in preclinical research is related to animal welfare. Animals should be treated humanely, and care should be taken that animal experiments are well designed and meticulously performed and reported to answer specific questions that cannot be answered by other means. Aspects of animal care and use can also have direct impact on research results. This topic is discussed in detail in chapter “Good Research Practice: Lessons from Animal Care and Use”.

  (c)

    There can be damage to public trust and to the reputation of the institution and/or individual scientist, for example, when a publication is retracted. Because retraction is often perceived as an indication of wrongdoing (although this is not always the case), many researchers are understandably sensitive when their paper(s) is questioned (Brainard 2018). Improved oversight at a growing number of journals is thought to be the prime factor in the rise of annual retractions. Extra vigilance is required in case of collaborations between multiple labs or when study conduct is delegated, for example, to junior staff.

  (d)

    Business risks may vary depending on the type of organization. The immediate negative financial consequences of using non-robust methods (e.g., poor or failing controls) are unnecessary repeats and delayed timelines. More delayed consequences can go as far as missed collaboration opportunities. Another example is failure to obtain grant approvals when the quality criteria of funders are not met. For biotechnological or pharmaceutical companies, there is the risk of inadequate decision-making based on poor-quality data and thus investing in assets with limited value, delaying the development of truly innovative solutions for patients in need.

  (e)

    In terms of intellectual property, the inability to reconstruct data can lead to refusal, unenforceability, or other loss of patent rights (Quinn 2017). If a patent is challenged, it is essential to have comprehensive documentation of research data in order to defend your case in court, potentially many years after the tests were performed. Also, expert scientists consulted in court may look for potential bias in your research data.

  (f)

    Pharmacology studies, with the exception of certain safety pharmacology studies, are not performed according to GLP regulations (OECD Principles of Good Laboratory Practice, as revised in 1997; Organisation for Economic Co-operation and Development, ENV/MC/CHEM(98)17).

    A large part of regulatory submission files for new drug approvals therefore consists of non-GLP studies. It is, however, important to realize that, also for non-GLP studies, regulators expect that the necessary measures are taken to obtain trustworthy, unbiased outcomes and that research data are retrievable on demand. The Japanese regulators are particularly strict when it comes to availability of data (Desai et al. 2018). Regulators are authorized by law to look into your research data. They will look for data traceability and integrity. If identified, irregularities can affect the review and approval of regulatory submissions and may lead to regulatory actions that go beyond the product/submission involved.

The above-mentioned risks are meant as examples, and the list is certainly not all-inclusive.

When considering what is at stake for your specific organization, you may just as well think in terms of opportunities. As the next step, similar to developing mitigations for potential risks, you can then develop a plan to maximize the benefits or success of the opportunities you have identified, for example, enhancing the likelihood of being seen as the best collaboration partner, getting the next grant, or having the next publication accepted.

2.2 What Do You Do to Protect and Maximize Your Stakes?

When you have a clear view of the risks (and opportunities) in your organization, the next step is to start thinking about what it is that affects those stakes; in other words, what can go wrong so that your key stakes are affected. Or, looking at it from the positive side, what can be done to maximize the identified opportunities?

Depending on the size and complexity of your organization, the approach to this exercise may vary. In smaller organizations, you may get quick answers from a limited number of subject matter experts; in larger organizations, it may take more time to get the full picture. In order to get the best possible understanding of the quality-related factors that influence your identified risks, it is recommended to perform a combination of interviews and reviews of research data and documents such as protocols and reports. Most often, reviews will help you find gaps, and interviews will help you define root causes.

Since you will gather a lot of information during this exercise, it is important to be well prepared, and it is advisable to define a structure for collecting and analyzing your findings.
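By way of illustration, the following minimal Python sketch shows one possible structure for capturing findings from reviews and interviews; the field names and example categories are hypothetical and would be adapted to your own organization:

```python
from collections import Counter
from dataclasses import dataclass, field
from typing import List


@dataclass
class Finding:
    """One observation captured during a data review or an interview."""
    source: str                       # e.g., "data review" or "interview"
    category: str                     # e.g., "data storage", "data retrieval", "bias"
    observation: str                  # what was seen or heard
    stakes_affected: List[str] = field(default_factory=list)  # risks/opportunities touched
    suspected_root_cause: str = ""    # filled in once interviews clarify the "why"


findings: List[Finding] = [
    Finding(
        source="data review",
        category="data storage",
        observation="Raw data stored only on a non-networked instrument PC",
        stakes_affected=["data retrieval", "business continuity"],
        suspected_root_cause="No agreed central storage location",
    ),
]

# A simple aggregation: number of findings per category, to spot focus areas.
print(Counter(f.category for f in findings))
```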

When reviewing research data, you can group your findings into categories such as:

  (a)

    Data storage: Proper storage of research data can be considered a way to safeguard your research investment. Your data may need to be accessed in the future to explain or augment subsequent research, or other researchers may wish to evaluate or use the results of your research. Some situations or practices may result in an inability to retrieve data, for example, storage of electronic data in personal folders or on non-networked instrument computers accessible only to the scientist involved, or temporary researchers coming and going without being trained on data storage practices.

  (b)

    Data retrieval: It is expected that there is a way to attribute reported outcomes (in study reports and publications) and conclusions to experimental data. An easy solution is to assign each experiment a unique identification number from the planning phase onward. This unique number can then be used in all subsequent recordings related to data capture, processing, and reporting (a minimal sketch of such an identification scheme is given after this list).

  (c)

    Data reconstruction: It is expected that an independent person is able to reconstruct the presented graphs, figures, and conclusions from the information in the research data records. If a scientist leaves the institution, colleagues should be able to reconstruct his/her research outcomes. Practices that may hinder reconstruction include missing essential information such as timepoints of blood collection, missing details on calculations, or missing batch IDs of test substances.

  (d)

    Risk for bias: According to the American National Standard (T1.523-2001), data (raw data, processed data, and reported data) “should be unchanged from its source and should not be accidentally or maliciously modified, altered or destroyed”. Bias can nevertheless occur in data collection, data analysis, interpretation, and publication, and can cause false conclusions. Bias can be either intentional or unintentional (Simundic 2013). Intentionally introducing bias into research is immoral. Nevertheless, considering the possible consequences of biased research, it is almost equally irresponsible to conduct and publish biased research unintentionally. Common sources of bias in experiments (see chapter “Resolving the Tension Between Exploration and Confirmation in Preclinical Biomedical Research”) include the lack of upfront-defined test acceptance criteria or of criteria and documentation for inclusion/exclusion of data points or test replicates, the use of non-robust assays, and multiple manual copy-paste steps without quality control checks. Specifically for animal intervention studies, selection, detection, and performance bias are commonly discussed, and the SYRCLE risk of bias tool has been developed to help assess methodological quality (Hooijmans et al. 2014). It is strongly advised to involve experts for statistical analysis as early as the experimental planning phase. Obviously, every study has confounding variables and limitations that cannot be completely avoided. However, awareness of and full transparency about these known limitations are important.

  (e)

    Review, sign off, and IP protection: Entry of research data in (electronic) lab notebooks and where applicable (for intellectual property reasons) witnessing of experiments is expected to occur in a timely manner (e.g., within 1 month of experimental conduct).
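To make the unique experiment identifier of item (b) concrete, here is a minimal Python sketch of one possible scheme; the ID format, file naming, and metadata fields are illustrative assumptions rather than a prescribed standard:

```python
import datetime
import itertools
import json

# In a real setting, a central registry or electronic lab notebook would issue the
# numbers; a simple in-memory counter is used here only to keep the sketch self-contained.
_counter = itertools.count(1)


def new_experiment_id(lab_code: str) -> str:
    """Create an identifier at the planning stage, e.g. 'PHARM-2024-0001'."""
    return f"{lab_code}-{datetime.date.today().year}-{next(_counter):04d}"


exp_id = new_experiment_id("PHARM")

# The same ID travels with every artifact, so reported outcomes can later be traced
# back through processing to the raw data and the original protocol.
protocol = {
    "experiment_id": exp_id,
    "title": "Receptor binding assay",
    "exclusion_criteria": "replicates with CV > 20% (defined before data collection)",
}
raw_data_file = f"{exp_id}_raw_plate_reader.csv"
report_entry = {"experiment_id": exp_id, "figure": "Fig. 3A", "source_files": [raw_data_file]}

print(json.dumps({"protocol": protocol, "report": report_entry}, indent=2))
```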

Likewise, when conducting interviews, you may want to prepare some general questions upfront to trigger the thinking and facilitate obtaining information that helps define root causes and thus focus areas for later actions. Some examples:

  • Culture and communication

    • How is the rewarding system set up, and how is it perceived? Is there a feeling that “truth-seeking” behavior and good science are rewarded and seen as a success, rather than positive experimental outcomes or artificial milestones? What are the positive and negative incentives that people in this lab/institution experience?

    • How are new employees trained on the importance of quality?

    • Where can employees find information on quality expectations?

    • Does leadership enforce the importance of quality? Is leadership perceived to “walk the walk and talk the talk”?

    • Is there a mechanism to prevent conflict of interest? Are there any undue commercial, financial, or other pressures and influences that may adversely affect the quality of research work?

  • Management of resources

    • Are there sufficient personnel with the necessary competence, skills, and qualifications?

    • Are facilities and equipment suitable to deliver valid results? How is this being monitored?

    • Do computerized systems ensure integrity of data?

  • Data storage

    • Is it clear to employees which research data to store?

    • Is it clear to employees where to store research data?

    • Are research data stored in a safe environment?

  • Data retrieval

    • How easy is it to retrieve data from an experiment you performed last month? 1 year ago? 5 years ago?

    • How easy is it to retrieve data from an experiment conducted by your peer?

    • How easy is it to retrieve data from an experiment conducted by the postdoc who left 2 years ago?

  • Data reconstruction

    • Is it clear to employees to which level of detail individual experiments should be recorded and reported?

    • Is there a process for review and approval of reports or publications?

    • During data reviews, is there attention to the ability to reconstruct experimental outcomes by following the data chain from protocol and raw data through processed data to reported data?

  • Bias prevention

    • In study reports, is it common practice to be transparent about the number of test repeats and their outcomes regardless of whether the outcome was positive or negative?

    • In study reports, is it common practice to be transparent about statistical power, known limitations of scientific or statistical methods, etc.?

    • Are biostatistical experts consulted during study setup, data analysis, and reporting in order to ensure adequate power calculations, use of correct statistical methods, and awareness of the limitations of certain methods?

    • Is there a process for review and approval of reports and publications?

    • Is there a mechanism to raise, review, and act upon any potential shortcomings in responsible conduct of research?

  • Collaborations

    • Is it common practice to communicate with collaborators/partners/subcontractors on research data quality expectations?

Performing a combination of research data reviews and interviews will require some time investment but will lead to a good understanding of how your current practices affect what is at stake for your specific institution. Oftentimes, while performing this exercise, the root causes of the identified gaps will become clear. For example, people store their research data on personal drives because no agreements or IT solutions are foreseen for central data storage, or data reconstruction is difficult because new employees are not systematically mentored or trained on why this is important and what level of documentation is expected.

Good reading material that ties in with this chapter is ICH Q9 (https://www.ich.org/fileadmin/Public_Web_Site/ICH_Products/Guidelines/Quality/Q9/Step4/Q9_Guideline.pdf), a comprehensive guideline on quality risk management including principles, process, tools, and examples.

3 Fixing Your Quality Image (How to Get in the Green Zone)

After having looked in the quality mirror, you can ask yourself: “Do I like what I see?”

Like many things in life, quality is to be considered a journey rather than a destination and requires a continued level of attention. Therefore, it is very likely that your assessment outcome indicates that small or large adjustments are advisable.

It is important to realize that quality cannot be the responsibility of one single person or an isolated group of quality professionals in your organization. Quality is really everyone’s responsibility and should be embedded in the DNA of both people and processes. The right people need to be involved in order to have true impact.

Therefore, after analyzing your quality image, it is important to articulate very clearly what you see, why improvement is needed, and who needs to get involved. Also, remember to bring a well-balanced message, adapted to the target audience, and to emphasize what is already good.

Another common mistake is to think that research quality principles can be installed just by writing a policy, generating procedures and guidelines, and organizing training sessions so that everyone knows what to do. This may look good on paper and may even be designed to be lean and fit for purpose. However, building quality into everyday activities and at all levels requires going beyond work instructions and policies. It is the emotional connection that is the basis of success (Fig. 3), since this will make sure that quality is built into the thinking and actions of everyone involved. One can never foresee all potential situations and exceptions in a procedure, but having people’s mindset right will trigger the right behavior in any situation.

Fig. 3

Schematic representation of emotional connection elements that are relevant to consider while enhancing quality approaches. Building quality into the everyday activities and at all levels requires emotional connection. The basis of a healthy research climate is people’s mindset, as the right mindset will trigger the right behavior in any situation

Here are some hints, tips, and practical examples that have been shown to be well received:

  • Use positive language

Positive communication is key to obtaining engagement. Phrases such as “How can we set ourselves up for success?” or “How can we increase our potential for innovation?” are much more effective than “We need to follow these rules for data recording.”

  • Use the scientists’ skills

Scientists are excellent problem solvers. This skill can also work magic beyond their scientific area of expertise, for example on quality-related topics. The only trigger the scientist needs is the awareness that there is a situation to be addressed. In this context, quality professionals are advised not to impose solutions on scientists, especially when these solutions may be perceived as adding even a small amount of bureaucracy. In the end, scientists usually know best what works well in their environment.

  • Use catchy visuals or analogies

As an example, a campaign appealing to scientists’ creativity can result in very engaging visuals that can be displayed in labs and corridors (example in Fig. 4) and will become the talk of the town. Positivity, creativity, and fun are key elements in building a research quality culture.

Fig. 4

Example of the use of visuals and analogies for research data quality. This example was derived from a campaign at Janssen Pharmaceutica N.V., appealing to scientists’ creativity to visualize quality. It is meant to trigger thinking on appropriateness and best practices for outlier exclusion

  • Support from the top

The first question you will be asked by your institution’s leaders is “Why is this important?”. You need a clear outline of what is at stake, what you learned during the current-state analysis, where you see gaps, and a plan for change. It is best to include real examples of what can go wrong, what is currently already working well, and where improvement is still possible. Without the support of the institution’s top leadership, the next steps will be extremely difficult or even impossible.

  • Choose your allies – and be inclusive

It is highly likely that during your initial analysis, you have met individuals who were well aware of (parts of) the gaps and have shown interest and energy to participate in finding solutions. Chances are high that they have already started taking steps toward improvement. It is key to involve these people as you move on.

It is also likely that your initial analysis showed that, in certain parts of the organization, people have already put in place solutions for gaps that still exist in other parts of the organization. These can serve as best practice examples going forward.

Also, you may have come across some very critical people, especially those who are afraid of additional workload and bureaucracy. Although it may sound counterintuitive, it is of high value to also include these people in your team or to reach out to them at regular intervals with proposed (draft) solutions and ask for their input.

Finally, since the gaps you have discovered may be of diverse nature, it is important to involve experts from other disciplines such as biostatisticians, communication professionals, patent attorneys, procurement experts, IT experts, etc.

  • Plan and prioritize

The gaps, best practices, and root causes defined during the initial analysis will be the starting point for change. Remember that root causes are not necessarily of a technical nature. Cultural habits or assumptions may be equally or even more important and will often take more time and effort to resolve.

It is also important to try not to change everything at once. It is better to take small incremental steps and focus on implementing some quick wins first.

Apart from prioritizing the topics that need to be tackled, it is good practice to define roles and responsibilities as well as a communication plan. For a large organization, it may be helpful to agree on a governance model for which an example is given in Fig. 5.

Fig. 5

Example of a governance model for the design phase of quality solutions in a large organization. There is a close collaboration between the quality department and the research community, whereby the quality department performs analyses (such as described in Sect. 2, Looking in the quality mirror) within the different research teams. Each research team assigns a quality champion who works closely with the quality professionals and the quality champions from other research teams to identify opportunities for improvement, generate concepts to fill gaps, and exchange best practices (cross-fertilization). A quality task force is formed consisting of quality champions, quality professionals, and representatives from other stakeholder organizations (“quality partners”). This quality task force works on solutions for the entire research community, such as research data storage, research data reporting processes, or a training program, according to the principles described in Sect. 3, Fixing your quality image. The quality task force provides regular updates to senior decision-makers, whose approval is required before implementation

Change is never an easy journey, and awareness of change management principles will help achieve the ultimate goal. On the web there is a lot of useful material on change management, such as Kotter’s 8-step change model (https://www.mindtools.com/pages/article/newPPM_82.htm), which explains the hard work required to change an organization successfully. Other useful references are the RQA (Research Quality Association) booklet “Quality in research: guidelines for working in non-regulated research,” which can be purchased via the RQA website (https://www.therqa.com/resources/publications/booklets/quality-in-research-booklet/), and the RQA quality systems guide, which can be downloaded for free (https://www.therqa.com/resources/publications/booklets/Quality_Systems_Guide/). Careful planning and building the proper foundations are key. Equally important are creating a sense of urgency and effective communication (real examples, storytelling). Having some quick wins will help build momentum and increase enthusiasm.

  • Keep it simple

Try not to make guidelines too detailed or specific; rather, go with general guidance. For example, it is essential that equipment is suitable for its intended use and that experimental records provide sufficient detail to enable reconstruction. What exactly this means depends on the type of equipment, the purpose for which it is used, and the type of experiment conducted. General templates to document equipment maintenance, or reporting templates, may be helpful tools for scientists; however, it is key not to go into too much detail when specifying expectations (a minimal illustration of such a general template follows below).
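As an illustration of what “general” can mean in practice, the sketch below shows a deliberately sparse equipment-maintenance record in Python; the field names are hypothetical, and what counts as an adequate entry would be defined per equipment type rather than in the template itself:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional


@dataclass
class MaintenanceRecord:
    """Minimal, general-purpose equipment maintenance entry."""
    equipment_id: str
    activity: str                 # e.g., "calibration", "preventive maintenance"
    performed_on: date
    performed_by: str
    outcome: str                  # e.g., "passed", "adjusted", "taken out of service"
    next_due: Optional[date] = None


record = MaintenanceRecord(
    equipment_id="BAL-007",
    activity="calibration",
    performed_on=date(2024, 3, 1),
    performed_by="J. Doe",
    outcome="passed",
    next_due=date(2025, 3, 1),
)
print(record)
```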

  • Make training fun

Interactive and fun activities during training sessions and hands-on workshops are generally well appreciated by the scientists. As an example, a quiz can be built into the training session during which scientists can work in small teams and get a small prize at the end of the session.

4 Looking Good! All Done Now? (How to Stay in the Green Zone)

After having led your organization through the change curve, you can sit back and enjoy the change you had envisioned. However, don’t stay in this mode for too long. As mentioned before, quality is a journey, not a destination.

Today, it may look as if your solutions are embedded in the way people work and have become the new norm. However, 2 years from now, your initial solutions may no longer work for new technologies, newly recruited staff may not have gone through the initial rollout and training, or new needs may show up for which new solutions are required.

For a quality management system to be sustainable over time, it needs to have a built-in continuous improvement mechanism, such as described by the PDCA (plan–do–check–act) cycle, also known as the Deming cycle (Deming 1986) and visualized in Fig. 6.

Fig. 6

Key attributes of a mature quality system represented by means of the Deming cycle (Deming 1986)

First, the quality solutions need to be sustainably integrated into the way of working, with clear roles and responsibilities (DO). In accordance with what has been established in the previous section (Fixing your quality image), clear quality expectations in the form of policies, best practices, or guidelines should be available to the organization, and mechanisms for training new employees and for refresher training of existing employees should be in place.

Ideally, all scientists advocate and apply these best practice solutions and also communicate expectations and monitor their application in their external collaborations.

Second, there needs to be a mechanism to monitor adherence to the green zone of Fig. 1 (CHECK). Once expectations are in place, it is not sufficient to simply assume that everyone will follow them. There may be strong quantitative pressures (time) that tend to make people act differently and may lead to quality being compromised. If people know that there is a certain likelihood that their work will be checked, this will provide a counterbalance to such quantitative pressures. Recent simulations support the idea that detailed scrutiny of a relatively small proportion of research output could help to affordably maintain overall research quality in a competitive research world where quantity is highly prized (Barnett et al. 2018).

Who performs these checks may depend on the type of organization. For smaller organizations, there can be a self-assessment checklist, or a person within the lab can be assigned to do spot checks for a percentage of their time. In larger organizations, an independent person or group of people (e.g., belonging to the quality department) can perform spot checks. The benefit of a person performing these checks across different labs is that practices that work well within one lab can be identified and shared with another lab that is in need of a practical solution. Such a person can also serve as a go-to person for advice (e.g., on how to deal with a specific situation).

Whoever is performing the checks, it is important to build in an element of positive reinforcement, making sure that people are recognized for applying best practices and do not feel blamed when something went wrong. It is also important to make people “part of” the exercise and discuss any unclear points with them. This maximizes the learning opportunity, as it is much more powerful to say “I am trying to reconstruct this experiment, but I am having some difficulties for which I would like to ask for your help” than to say “I noticed you forgot to document your 96-well plate layout.” When performed in the right way, these checks can be a learning exercise and provide a counter-incentive to time pressure. To clarify the latter point, in contrast to quantitative metrics (such as the number of publications or the number of projects completed within business timelines), most institutions are missing metrics on data quality. When checks are performed against agreed-upon quality standards (such as timely recording of data or setting data exclusion criteria before experimental conduct), the outcomes of these checks can be used as a metric for data quality (a simple sketch of such a metric is given below). This way, the common quantitative metrics will no longer be the only drivers within a research lab, and quality metrics can provide a platform to discuss root causes of poor-quality data, such as time pressure.
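As a sketch of how spot-check outcomes could be turned into such a data quality metric, the example below computes, per agreed standard, the fraction of checked experiments meeting the expectation; the check names and the pass/fail encoding are hypothetical:

```python
from typing import Dict, List

# Each spot check records, per agreed quality standard, whether the expectation was met.
spot_checks: List[Dict[str, bool]] = [
    {"data_recorded_within_1_month": True,  "exclusion_criteria_predefined": True},
    {"data_recorded_within_1_month": False, "exclusion_criteria_predefined": True},
    {"data_recorded_within_1_month": True,  "exclusion_criteria_predefined": False},
]


def compliance_per_standard(checks: List[Dict[str, bool]]) -> Dict[str, float]:
    """Fraction of checked experiments meeting each standard.
    The trend over successive rounds of checks, rather than any single value,
    is what feeds the governance discussion."""
    standards = {key for check in checks for key in check}
    return {s: round(sum(c.get(s, False) for c in checks) / len(checks), 2) for s in standards}


print(compliance_per_standard(spot_checks))
# e.g. {'data_recorded_within_1_month': 0.67, 'exclusion_criteria_predefined': 0.67}
```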

It is also worth considering measuring the “quality culture” or “research climate” in your institution. For this purpose, the SOuRCE (Survey of Organizational Research Climate) (Baker 2015) may be useful. This is a validated 32-question survey whose responses have been shown to correlate with self-reported behavior in the conduct of research. As such, it can provide a snapshot of the research climate in an organization through the aggregated perspectives of its members.
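For illustration only, aggregating such survey responses into a climate snapshot can be as simple as the sketch below; the items, subscale groupings, and 1–5 response scale are hypothetical and do not reflect the actual scoring of the instrument:

```python
from statistics import mean
from typing import Dict, List

# Hypothetical responses: one dict per respondent, item -> score on a 1-5 agreement scale.
responses: List[Dict[str, int]] = [
    {"Q1": 4, "Q2": 5, "Q3": 2},
    {"Q1": 3, "Q2": 4, "Q3": 3},
]

# Hypothetical mapping of survey items to climate subscales.
subscales: Dict[str, List[str]] = {
    "integrity norms": ["Q1", "Q2"],
    "supervisor support": ["Q3"],
}


def climate_snapshot(resp: List[Dict[str, int]], scales: Dict[str, List[str]]) -> Dict[str, float]:
    """Mean score per subscale, aggregated over all respondents."""
    return {name: round(mean(r[item] for r in resp for item in items), 2)
            for name, items in scales.items()}


print(climate_snapshot(responses, subscales))
# {'integrity norms': 4.0, 'supervisor support': 2.5}
```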

Other applicable indicators of quality that can be considered are, for example, the degree to which (electronic) lab notebooks are used and reviewed within preset timelines or attendance at training sessions.

Besides checking whether expectations are being followed, it is equally important to make it clear to people that the quality expectations are not carved in stone and that, for good reasons, some expectations may need to be refined, altered, deleted, or added over time. Everyone should feel empowered and know how to suggest a justified change to the quality expectations. Changes to quality expectations should be considered not only to prevent drift into the “not acceptable” zone but also to avoid or course-correct drift into the “overengineering” zone. The latter is often forgotten. Overengineering can have different causes: overinterpretation of guidelines, mitigations staying in place after the risks have disappeared (e.g., QC checks still occurring long after automation took place), solutions that are so complex that no one follows them, etc. Conversations are the best way to detect overengineering. Such conversations can be triggered by people voicing their frustration or coming with critical questions or suggestions on certain expectations. Root-cause conversations can also be started when the same issues are found repeatedly during audits, or a survey can be sent out from time to time to get feedback on the feasibility of quality expectations.

Third, whenever drift is seen, or when a suggestion comes up to change a certain quality expectation, there must be a mechanism to react and make decisions (ACT).

Monitoring outcomes, be they from research data spot checks or from a cultural survey, should be communicated and analyzed. If needed, follow-up actions should be defined and implemented by the responsible person(s).

Last but not least, the culture of quality needs to be kept alive, and expectations need to be updated as required (PLAN). For this purpose, regular communication on the importance of quality is crucial. This can be achieved in various ways, for example, by putting posters next to copy or coffee machines, by highlighting best practices during group meetings, by e-mailing relevant publications, or by inviting external speakers. Messages coming from mentors and leaders are most impactful in this respect, as is having people recognize that these leaders themselves are “walking the walk and talking the talk.”

5 Conclusion

Quality governance, when installed successfully, can be a way to provide simple and sustainable solutions that facilitate data quality and promote innovation. The basic principles of quality governance are very similar across different disciplines; however, the practical application of quality governance depends on multiple variables. Currently, existing guidance on quality governance for research is limited and fragmented. As a result, institutions may have policies or guidelines in place, but there is often no mechanism to monitor their application. An exception is the animal care and use aspect of research, where there is legislation as well as internal and external oversight bodies (see chapter “Good Research Practice: Lessons from Animal Care and Use”). Recently, the IMI project EQIPD (European Quality in Preclinical Data, https://quality-preclinical-data.eu/) has assembled a team of both industrial and academic researchers and quality professionals to work on practical solutions to improve preclinical data quality. One of their deliverables is a tool to help institutions set up a fit-for-purpose quality system, including governance aspects, aligned with the information in this chapter. Until the team has delivered this tool, we hope the guidance provided above can help institutions bring their research data quality to the right level.