Quality Governance in Biomedical Research
Quality research data are essential for quality decision-making and thus for unlocking true innovation potential to ultimately help address unmet medical needs.
The factors influencing quality are diverse. They depend on institution type and experiment type and can be of both technical and cultural nature. A well-thought-out governance mechanism will help understand, monitor, and control research data quality in a research institution.
In this chapter we provide practical guidance for simple, effective, and sustainable quality governance, tailored to the needs of an organization performing nonregulated preclinical research and owned by all stakeholders.
GLP regulations have been developed as a managerial framework under which nonclinical safety testing of pharmaceutical and other products should be conducted. One could argue whether these regulations should be applied to all nonclinical biomedical studies. However, the extensive technical requirements of GLP may not always be fit to the wide variety of studies outside the safety arena and may be seen as overly prescriptive and bureaucratic. In addition, GLP regulations do not take into account scientific excellence in terms of study design or adequacy of analytical methods. For these reasons and in order to allow a lean and fit for purpose approach, the content of this chapter is independent from GLP. Nevertheless, certain topics covered by GLP can be seen as valuable across biomedical research. Examples are focus on transparency and the importance of clear roles and responsibilities for different functions participating in a study.
Keywords: Change management · Fit-for-purpose approach · Quality governance · Research data quality · Sustainability
1 What Is Quality Governance?
The term governance is derived from the Greek verb kubernáo (meaning to steer), and its first metaphorical use (to steer people) is attributed to Plato. Subsequently the term gave rise to the Latin verb gubernare, and from there it was taken over into different languages. It is only since the 1990s that the term governance has been used in a broad sense, encompassing different types of activities in a wide range of public and private institutions and at different levels (European Commission, http://ec.europa.eu/governance/docs/doc5_fr.pdf).
As a result of the broad application of the term governance, there are multiple definitions, among which the following is a good example (http://www.businessdictionary.com/definition/governance.html): “Establishment of policies, and continuous monitoring of their proper implementation, by the members of the governing body of an organization. It includes the mechanisms required to balance the powers of the members (with the associated accountability), and their primary duty of enhancing the prosperity and viability of the organization.”
In simple terms, governance is (a) a means to monitor whether you are on a good path to achieve the intended outcomes and (b) a means to steer in the right direction so that you proactively prevent risks turning into issues. It is almost like looking in the mirror every morning and making sure you look ok for whatever your plans are that day, or like watching your diet and getting regular exercise in order to keep your cholesterol levels under control.
How does this translate into quality governance? Quality simply means fitness for purpose; in other words the end product of your work should be fit for the purpose it is meant to serve. In experimental pharmacology, this means that your experimental outcomes should be adequate for supporting conclusions and decisions such as on the validity of a molecular target for a novel treatment approach, on the generation of a mode of action hypothesis for a new drug, or on the safety profile of a pharmaceutical ingredient. The different activities you undertake from planning of your experiment, generating the raw data, processing these data to ultimately reporting experimental outcomes and conclusions should be free from bias and should be documented in a way that allows full reconstruction of the experiment.
The definition of quality governance implies that, although very important, having policies or guidelines for good research practices is not sufficient. Such documents can be seen as an important building block for good quality research data. However, policies or guidelines will not reach their full impact potential and will not be sustainable over time when the monitoring component of governance is missing. Three aspects of governance are equally important: (1) there needs to be a mechanism to check whether people are applying the guidelines, (2) there needs to be a mechanism to make sure the guidelines remain adequate over time, and (3) there needs to be a mechanism to take the right actions when deviations are seen in (1) and (2). Having these three aspects of governance in place is expected to increase the likelihood of long-term sustainability and full engagement by those who are expected to apply the guidelines.
The size and type of organization (university, biotech, pharma, contract research, etc.) and the type of work that is being conducted (exploratory, confirmatory, in vivo, in vitro, etc.) will determine the more specific quality governance needs. The next parts of this chapter are meant to offer a stepwise and practical approach to install an effective and tailor-made quality governance approach.
2 Looking in the Quality Mirror (How to Define the Green Zone)
How do you know where your lab or your institution is positioned relative to the different data quality zones schematically introduced in Fig. 1? The first thing to do is to define what success means in your specific situation. In quality management terms, success is about making sure you have controlled the risks that matter to your organization. Therefore, one may start with clearly defining the risks that matter. “What is at stake?” is a good question to trigger this thinking.
2.1 What Is at Stake?
First and foremost comes the risk to patients’ safety. Poor research data quality can have dramatic consequences, as exemplified in 2016 in Rennes, France, where a first-in-human trial was conducted by the contract research organization Biotrial on behalf of the Portuguese pharmaceutical company Bial. During the trial, six healthy volunteers were hospitalized with severe neurological injuries after receiving an increased dose of the investigational compound. One volunteer died as a result (Regulatory Focus™ News articles 2016: https://www.raps.org/regulatory-focus%E2%84%A2/news-articles/2016/5/ema-begins-review-of-first-in-human-trial-safety-following-patient-death). The assigned investigation committee concluded that the investigator’s brochure contained several errors and mistranslations from source documents that made it difficult to understand. The committee recommended enhanced data transparency and sufficiently complete preclinical safety studies (Schofield 2016).
Another important risk in preclinical research is related to animal welfare. Animals should be treated humanely, and care should be taken that animal experiments are well designed and meticulously performed and reported to answer specific questions that cannot be answered by other means. Aspects of animal care and use can also have direct impact on research results. This topic is discussed in detail in chapter “Good Research Practice: Lessons from Animal Care and Use”.
There can be damage to public trust and reputation of the institution and/or individual scientist, for example, in case a publication is retracted. Because retraction is often perceived as an indication of wrongdoing (although this is not always the case), many researchers are understandably sensitive when their paper(s) is questioned (Brainard 2018). Improved oversight at a growing number of journals is thought to be the prime factor in the rise of annual retractions. Extra vigilance is required in case of collaborations between multiple labs or in case of delegation of study conduct, for example, to junior staff.
Business risks may vary depending on the type of organization. The immediate negative financial consequences of using non-robust methods (e.g., poor or failing controls) are unnecessary repeats and missed timelines. More delayed consequences can go as far as missed collaboration opportunities. Another example is failure to obtain grant approvals when the quality criteria of funders are not met. For biotechnological or pharmaceutical companies, there is the risk of inadequate decision-making based on poor-quality data and thus of investing in assets with limited value, delaying the development of truly innovative solutions for patients in need.
In terms of intellectual property, insufficient data reconstruction can lead to refusal, unenforceability, or other loss of patent rights (Quinn 2017). In a case of a patent attack, it is essential to have comprehensive documentation of research data in order to defend your case in court, potentially many years after the tests were performed. Also, expert scientists are consulted in court and may look for potential bias in your research data.
Pharmacology studies, with the exception of certain safety pharmacology studies, are not performed according to GLP regulations (OECD Principles of Good Laboratory Practice, as revised in 1997; Organisation for Economic Co-operation and Development, ENV/MC/CHEM(98)17).
A large part of regulatory submission files for new drug approvals therefore consists of non-GLP studies. It is, however, important to realize that, also for non-GLP studies, regulators expect that the necessary measures are taken to obtain trustworthy, unbiased outcomes and that research data are retrievable on demand. The Japanese regulators are particularly strict when it comes to availability of data (Desai et al. 2018). Regulators are authorized by law to look into your research data. They will look for data traceability and integrity. If identified, irregularities can affect the review and approval of regulatory submissions and may lead to regulatory actions that go beyond the product/submission involved.
The above-mentioned risks are meant as examples, and the list is certainly not meant to be all inclusive.
When considering what is at stake for your specific organization, you may think just as well in terms of opportunities. As the next step, similar to developing mitigations for potential risks, you can then develop a plan to maximize the benefits or success of the opportunities you have identified, for example, enhancing the likelihood of being seen as the best collaboration partner, getting the next grant, or having the next publication accepted.
2.2 What Do You Do to Protect and Maximize Your Stakes?
When you have a clear view on the risks (and opportunities) in your organization, the next step is to start thinking about what it is that affects those stakes; in other words, what can go wrong so that your key stakes are affected. Or, looking from a positive side, what can be done to maximize the identified opportunities?
Depending on the size and complexity of your organization, the approach to this exercise may vary. In smaller organizations, you may get quick answers from a limited number of subject matter experts. However, in larger organizations, it may take more time to get the full picture. In order to get the best possible understanding about quality-related factors that influence your identified risks, it is recommended to perform a combination of interviews and reviews of research data and documents such as protocols and reports. Most often, reviews will help you find gaps, and interviews will help you define root causes.
Since you will gather a lot of information during this exercise, it is important to be well prepared, and it is advisable to define a structure for collecting and analyzing your findings.
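One simple way to structure such findings is a flat record per observation, capturing the area, the observed gap, the suspected root cause, and the stake it affects, so that gaps can later be counted and clustered. The sketch below is purely illustrative; the field names and example entries are hypothetical, not prescribed by any standard:

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class Finding:
    """One observation from an interview or data review (illustrative fields)."""
    area: str        # e.g., "data storage", "bias prevention"
    gap: str         # what was observed
    root_cause: str  # suspected root cause, refined through interviews
    stake: str       # which risk or opportunity it affects

def summarize(findings):
    """Count findings per area to see where gaps cluster."""
    return Counter(f.area for f in findings)

findings = [
    Finding("data storage", "raw data on personal drive",
            "no shared folder agreed", "data retrieval"),
    Finding("data storage", "instrument PC not networked",
            "no IT solution foreseen", "data retrieval"),
    Finding("bias prevention", "no predefined exclusion criteria",
            "no template in protocol", "biased conclusions"),
]
# A cluster of findings in one area hints at a systemic root cause
print(summarize(findings))
```

Even a spreadsheet with these four columns serves the same purpose; the point is to decide on the structure before the interviews start.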
Data storage: Proper storage of research data can be considered as a way to safeguard your research investment. Your data may need to be accessed in the future to explain or augment subsequent research, or other researchers may wish to evaluate or use the results of your research. Some situations or practices may result in inability to retrieve data, for example, storage of electronic data in personal folders or on non-networked instrument computers that are only accessible to the scientist involved and temporary researchers coming and going without being trained on data storage practices.
Data retrieval: It is expected that there is a way to attribute reported outcomes (in study reports and publications) and conclusions to experimental data. An easy solution is to assign to all experiments a unique identification number from the planning phase onwards. This unique number can then be used in all subsequent recordings related to data capture, processing, and reporting.
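A minimal sketch of such an identification scheme is shown below. The `EXP-YYYY-NNNN` format and the class and method names are arbitrary choices for illustration; any scheme works as long as the ID is assigned at planning time and carried through every downstream record:

```python
import itertools
from datetime import date

class ExperimentRegistry:
    """Assigns a unique ID at planning time; all later records reference it."""

    def __init__(self):
        self._counter = itertools.count(1)
        self._records = {}  # experiment ID -> list of (stage, reference) pairs

    def register(self, title):
        """Create the ID during the planning phase."""
        exp_id = f"EXP-{date.today().year}-{next(self._counter):04d}"
        self._records[exp_id] = [("plan", title)]
        return exp_id

    def link(self, exp_id, stage, reference):
        """Attach a capture/processing/reporting record to the experiment."""
        self._records[exp_id].append((stage, reference))

    def trace(self, exp_id):
        """Return the full chain from plan to report, for reconstruction on demand."""
        return self._records[exp_id]

reg = ExperimentRegistry()
eid = reg.register("Dose-response of compound X")          # hypothetical study
reg.link(eid, "raw", "plate_reader/2024-03-01.csv")        # hypothetical paths
reg.link(eid, "report", "reports/compoundX_doseresponse.pdf")
print(reg.trace(eid))
```

The same chain can then be followed in reverse: starting from a figure in a report, the ID leads back to the processed and raw data that produced it.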
Data reconstruction: It is expected that an independent person is able to reconstruct the presented graphs, figures, and conclusions from the information in the research data records. If a scientist leaves the institution, colleagues should be able to reconstruct his/her research outcomes. Practices that may hinder reconstruction are, for example, missing essential information such as timepoints of blood collection, missing details on calculations, or missing batch IDs of test substances.
Risk for bias: The American National Standard T1.523-2001 captures the related expectation of data integrity: “Data (raw data, processed data and reported data) should be unchanged from its source and should not be accidentally or maliciously modified, altered or destroyed”. Bias can occur in data collection, data analysis, interpretation, and publication and can cause false conclusions. Bias can be either intentional or unintentional (Simundic 2013). Deliberately introducing bias into research is immoral. Nevertheless, considering the possible consequences of biased research, it is almost equally irresponsible to conduct and publish biased research unintentionally. Common sources of bias in experiments (see chapter “Resolving the Tension Between Exploration and Confirmation in Preclinical Biomedical Research”) are the lack of upfront defined test acceptance criteria, the lack of criteria and documentation for inclusion/exclusion of data points or test replicates, the use of non-robust assays, or multiple manual copy-paste steps without quality control checks. Specifically for animal intervention studies, selection, detection, and performance bias are commonly discussed, and the SYRCLE risk of bias tool has been developed to help assess methodological quality (Hooijmans et al. 2014). It is strongly advised to involve experts in statistical analysis as early as the experimental planning phase. Obviously, every study has confounding variables and limitations that cannot be completely avoided. However, awareness of and full transparency on these known limitations are important.
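The idea of fixing inclusion/exclusion criteria before the experiment and documenting every excluded point can be made concrete with a small sketch. The criteria names and thresholds below are invented for illustration; the essential point is that the rules are written down at the planning stage and that each exclusion leaves an auditable reason:

```python
def apply_exclusion_criteria(values, criteria):
    """Apply exclusion rules that were fixed BEFORE the experiment, and
    record every excluded point together with the rule that triggered it."""
    kept, excluded = [], []
    for v in values:
        reasons = [name for name, rule in criteria.items() if rule(v)]
        if reasons:
            excluded.append((v, reasons))  # transparent audit trail
        else:
            kept.append(v)
    return kept, excluded

# Criteria defined at the planning stage (hypothetical thresholds)
criteria = {
    "below detection limit": lambda v: v < 0.05,
    "implausibly high": lambda v: v > 100.0,
}

kept, excluded = apply_exclusion_criteria([0.01, 2.3, 150.0, 4.1], criteria)
print(kept)      # [2.3, 4.1]
print(excluded)  # each excluded value with its documented reason
```

Because both the criteria and the exclusion log exist as records, an independent reviewer can verify that no data point was dropped post hoc to favor a desired outcome.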
Review, sign off, and IP protection: Entry of research data in (electronic) lab notebooks and where applicable (for intellectual property reasons) witnessing of experiments is expected to occur in a timely manner (e.g., within 1 month of experimental conduct).
- Culture and communication
How is the rewarding system set up and how is it perceived? Is there a feeling that “truth seeking” behavior and good science is rewarded and seen as a success rather than positive experimental outcomes or artificial milestones? Which are the positive and negative incentives that people in this lab/institution experience?
How are new employees trained on the importance of quality?
Where can employees find information on quality expectations?
Does leadership enforce the importance of quality? Is leadership perceived to “walk the walk and talk the talk”?
Is there a mechanism to prevent conflict of interest? Are there any undue commercial, financial, or other pressures and influences that may adversely affect the quality of research work?
- Management of resources
Is there sufficient personnel with the necessary competence, skills, and qualifications?
Are facilities and equipment suitable to deliver valid results? How is this being monitored?
Do computerized systems ensure integrity of data?
- Data storage
Is it clear to employees which research data to store?
Is it clear to employees where to store research data?
Are research data stored in a safe environment?
- Data retrieval
How easy is it to retrieve data from an experiment you performed last month? 1 year ago? 5 years ago?
How easy is it to retrieve data from an experiment conducted by your peer?
How easy is it to retrieve data from an experiment conducted by the postdoc who left 2 years ago?
- Data reconstruction
Is it clear to employees to which level of detail individual experiments should be recorded and reported?
Is there a process for review and approval of reports or publications?
During data reviews, is there attention to the ability to reconstruct experimental outcomes by following the data chain starting from protocol and raw data over processed data to reported data?
- Bias prevention
In study reports, is it common practice to be transparent about the number of test repeats and their outcomes regardless of whether the outcome was positive or negative?
In study reports, is it common practice to be transparent about statistical power, known limitations of scientific or statistical methods, etc.?
Are biostatistical experts consulted during study setup, data analysis, and reporting in order to ensure adequate power calculations, use of correct statistical methods, and awareness on limitations of certain methods?
Is there a process for review and approval of reports and publications?
Is there a mechanism to raise, review, and act upon any potential shortcomings in responsible conduct of research?
Is it common practice to communicate with collaborators/partners/subcontractors on research data quality expectations?
Performing a combination of research data reviews and interviews will require some time investments but will lead to a good understanding of how your current practices affect what is at stake for your specific institution. Oftentimes, while performing this exercise, the root causes for the identified gaps will become clear. For example, people store their research data on personal drives because there are no agreements or IT solutions foreseen for central data storage, or data reconstruction is difficult because new employees are not being systematically mentored or trained on why this is important and what level of documentation is expected.
Good reading material that links in with this chapter is ICH Q9 (https://www.ich.org/fileadmin/Public_Web_Site/ICH_Products/Guidelines/Quality/Q9/Step4/Q9_Guideline.pdf), a comprehensive guideline on quality risk management including principles, process, tools, and examples.
3 Fixing Your Quality Image (How to Get in the Green Zone)
After having looked in the quality mirror, you can ask yourself: “Do I like what I see?”
Like many things in life, quality is to be considered a journey rather than a destination and requires a continued level of attention. Therefore, it is very likely that your assessment outcome indicates that small or large adjustments are advisable.
It is important to realize that quality cannot be the responsibility of one single person or an isolated group of quality professionals in your organization. Quality is really everyone’s responsibility and should be embedded in the DNA of both people and processes. The right people need to be involved in order to have true impact.
Therefore, after analyzing your quality image, it is important to articulate very clearly what you see, why improvement is needed, and who needs to get involved. Also, one should not forget to bring a well-balanced message, adapted to the target audience, and remember to emphasize what is already good.
Use positive language
Use the scientists’ skills
Use catchy visuals or analogies
Support from the top
Choose your allies – and be inclusive
It is highly likely that during your initial analysis, you have met individuals who were well aware of (parts of) the gaps and have shown interest and energy to participate in finding solutions. Chances are high that they have already started taking steps toward improvement. It is key to involve these people as you move on.
It is also likely that your initial analysis showed that, in certain parts of the organization, people have already put in place solutions for gaps that still exist in other parts of the organization. These can serve as best practice examples going forward.
Also, you may have come across some very critical people, especially those who are afraid of additional workload and bureaucracy. Although this may sound counterintuitive, it is of high value to also include such people in your team or to reach out to them at regular intervals with proposed (draft) solutions and ask for their input.
Plan and prioritize
The gaps, best practices, and root causes defined during the initial analysis will be the starting point for change. Remember that root causes are not necessarily always of a technical nature. Cultural habits or assumptions may be equally or even more important and will often take more time and effort to resolve.
It is also important to try not to change everything at once. It is better to take small incremental steps and focus on implementing some quick wins first.
Change is never an easy journey, and awareness of change management principles will help achieve the ultimate goal. On the web there is a lot of useful material on change management, such as Kotter’s 8-step change model (https://www.mindtools.com/pages/article/newPPM_82.htm), which explains the hard work required to change an organization successfully. Other useful references are the RQA (Research Quality Association) booklet “Quality in research: guidelines for working in non-regulated research,” which can be purchased via the RQA website (https://www.therqa.com/resources/publications/booklets/quality-in-research-booklet/), and the RQA quality systems guide, which can be downloaded for free (https://www.therqa.com/resources/publications/booklets/Quality_Systems_Guide/). Careful planning and building the proper foundations are key. Equally important are creating a sense of urgency and effective communication (real examples, storytelling). Having some quick wins will help build momentum and increase enthusiasm.
Keep it simple
Make training fun
Interactive and fun activities during training sessions and hands-on workshops are generally well appreciated by the scientists. As an example, a quiz can be built into the training session during which scientists can work in small teams and get a small prize at the end of the session.
4 Looking Good! All Done Now? (How to Stay in the Green Zone)
After having led your organization through the change curve, you can sit back and enjoy the change you had envisioned. However, don’t stay in this mode for too long. As mentioned before, quality is a journey, not a destination.
Today, it may look as if your solutions are embedded in the way people work and have become the new norm. However, 2 years from now, your initial solutions may no longer work for new technologies, newly recruited staff may not have gone through the initial roll out and training, or new needs may show up for which new solutions are required.
First, the quality solutions need to be sustainably integrated into the way of working, with clear roles and responsibilities (DO). In line with what has been established in the previous section (Fixing your quality image), clear quality expectations in the form of policies, best practices, or guidelines should be available to the organization, and mechanisms for training new employees and for refresher training of existing employees should be in place.
Ideally, all scientists advocate and apply these best practice solutions and also communicate expectations and monitor their application in their external collaborations.
Secondly, there needs to be a mechanism to monitor adherence to the green zone of Fig. 1 (CHECK). When having expectations in place, it is not sufficient to just assume that everyone will now follow them. There may be strong quantitative pressures (time) that tend to make people act differently and may lead to quality being compromised. If people know that there is a certain likelihood that their work will be checked, this will provide a counterbalance for such quantitative pressures. Recent simulations support the idea that a detailed scrutiny of a relatively small proportion of research output could help to affordably maintain overall research quality in a competitive research world where quantity is highly prized (Barnett et al. 2018).
Who performs these checks may depend on the type of organization. In smaller organizations, there can be a self-assessment checklist, or a person within the lab can be assigned to do spot checks for a percentage of their time. In larger organizations, an independent person or group of people (e.g., belonging to the quality department) can perform spot checks. The benefit of one person performing these checks across different labs is that practices that work well within one lab can be identified and shared with another lab that is in need of a practical solution. Such a person can also serve as a go-to person for advice (e.g., on how to deal with a specific situation).
Whoever performs the checks, it is important to build in an element of positive reinforcement, making sure that people are recognized for applying best practices and do not feel blamed when something went wrong. It is also important to make people “part of” the exercise and discuss any unclarities with them. This maximizes the learning opportunity: it is much more powerful to say “I am trying to reconstruct this experiment, but I am having some difficulties for which I would like to ask for your help” than to say “I noticed you forgot to document your 96-well plate layout.”
When performed in the right way, these checks can be a learning exercise and provide a counter-incentive for time pressure. To clarify the latter point: in contrast to quantitative metrics (such as the number of publications or the number of projects completed within business timelines), most institutions are missing metrics on data quality. When checks are performed against agreed-upon quality standards (such as timely recording of data or setting data exclusion criteria before experimental conduct), the outcomes of these checks can be used as a metric for data quality.
This way, the common quantitative metrics will no longer be the only drivers within a research lab, and as such, quality metrics can provide a platform to discuss root causes for poor-quality data, such as time pressure.
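Such a data quality metric can be as simple as the pass rate per agreed standard across the spot-checked experiments. The sketch below assumes a setup where each check records pass/fail per standard; the standard names are hypothetical examples taken from the text above:

```python
def quality_metrics(checks):
    """checks: list of dicts mapping standard name -> pass (True) / fail (False).
    Returns the pass rate per agreed quality standard."""
    totals, passes = {}, {}
    for check in checks:
        for standard, ok in check.items():
            totals[standard] = totals.get(standard, 0) + 1
            passes[standard] = passes.get(standard, 0) + (1 if ok else 0)
    return {s: passes[s] / totals[s] for s in totals}

# Outcomes of spot checks on three experiments (hypothetical standards)
checks = [
    {"timely recording": True,  "exclusion criteria preset": True},
    {"timely recording": False, "exclusion criteria preset": True},
    {"timely recording": True,  "exclusion criteria preset": False},
]
print(quality_metrics(checks))  # pass rate per standard across the sample
```

Tracking these rates over time, rather than judging individuals on single findings, keeps the exercise aligned with the positive-reinforcement approach described above.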
It is also worth considering measuring the “quality culture” or “research climate” in your institution. For this purpose, the Survey of Organizational Research Climate (SOuRCe) (Baker 2015) may be useful. This is a validated 32-question survey whose responses have been shown to correlate with self-reported behavior in the conduct of research. As such, it can provide a snapshot of the research climate in an organization through the aggregated perspectives of its members.
Other applicable indicators of quality that can be considered are, for example, the degree to which (electronic) lab notebooks are used and reviewed within preset timelines or the attendance of trainings.
Besides checking whether expectations are being followed, it is equally important to make clear to people that the quality expectations are not carved in stone and that, for good reasons, some expectations may need to be refined, altered, deleted, or added over time. Everyone should feel empowered and know how to suggest a justified change to the quality expectations. Changes to quality expectations should not only be considered to prevent drift into the “not acceptable” zone but also to avoid or course-correct drifting into the “overengineering zone.” The latter is often forgotten. Overengineering can have different causes: overinterpretation of guidelines, mitigations staying in place after the risks have disappeared (e.g., QC checks still occurring long after automation took place), solutions so complex that no one follows them, etc. Conversations are the best way to detect overengineering. Such conversations can be triggered by people voicing their frustration or coming with critical questions or suggestions on certain expectations. Root-cause conversations can also be started when the same issues are found repeatedly during audits, or a survey can be sent out from time to time to get feedback on the feasibility of quality expectations.
Third, whenever drift is seen, or when a suggestion comes up to change a certain quality expectation, there must be a mechanism to react and make decisions (ACT).
Monitoring outcomes, be they from research data spot checks or from a cultural survey, should be communicated and analyzed. If needed, follow-up actions should be defined and implemented by the responsible person(s).
Last but not least, the culture of quality needs to be kept alive, and expectations need to be updated as required (PLAN). For this purpose, regular communication on the importance of quality is crucial. This can be achieved in various ways, for example, by putting posters next to copy or coffee machines, by highlighting best practices during group meetings, by e-mailing relevant publications, or by inviting external speakers. Messages coming from mentors and leaders are most impactful in this respect as well as having people recognize that these leaders themselves are “walking the walk and talking the talk.”
Quality governance, when installed successfully, can be a way to provide simple and sustainable solutions that facilitate data quality and promote innovation. The basic principles of quality governance are very similar across different disciplines; however, the practical application of quality governance depends on multiple variables. Existing guidance on quality governance for research is currently limited and fragmented. As a result, institutions may have policies or guidelines in place, but there is often no mechanism to monitor their application. An exception is the animal care and use aspect of research, where there is legislation as well as internal and external oversight bodies (see chapter “Good Research Practice: Lessons from Animal Care and Use”). Recently, the IMI project EQIPD (European Quality in Preclinical Data, https://quality-preclinical-data.eu/) has assembled a team of both industrial and academic researchers and quality professionals to work on practical solutions to improve preclinical data quality. One of their deliverables is a tool to help institutions set up a fit-for-purpose quality system including governance aspects, aligned with the information in this chapter. Until the team has delivered their tool, we hope the guidance provided above can be of help to institutions in bringing their research data quality to the right level.
- Deming WE (1986) Out of the crisis. Massachusetts Institute of Technology, Center for Advanced Engineering Study, Cambridge, p 88
- Quinn G (2017) Patent drafting: understanding the enablement requirement. http://www.ipwatchdog.com/2017/10/28/patentability-drafting-enablement-requirement/id=89721/
- Schofield I (2016) Phase I trials: French Body urges transparency, says don’t just follow the rules. Scrip regulatory affairs. https://pink.pharmaintelligence.informa.com/PS118626/Phase-I-Trials-French-Body-Urges-Transparency-Says-Dont-Just-Follow-The-Rules
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.