Reproducible pharmacokinetics

John P. A. Ioannidis
Review Paper

Abstract

Reproducibility is a highly desired feature of scientific investigation in general, and it has special connotations for research in pharmacokinetics, a vibrant field with over 500,000 publications to date. It is important to be able to differentiate genuine heterogeneity in pharmacokinetic parameters from heterogeneity that is due to errors and biases. This overview discusses efforts and opportunities to diminish the latter, undesirable type of heterogeneity. Several reporting and research guidance documents and standards have been proposed for pharmacokinetic studies, but their adoption is still rather limited. Quality problems in the methods used and in model evaluations have been examined in some empirical studies of the literature. Standardization of statistical and laboratory tools and procedures can be improved in the field. Only a small fraction of pharmacokinetic studies are pre-registered, and only 9955 such studies had been registered in ClinicalTrials.gov as of August 2018. It is likely that most pharmacokinetic studies remain unpublished. Publication bias affecting results and inferences has been documented in case studies, but its exact extent is unknown for the field at large. The use of meta-analyses in the field is still limited. Availability of raw data, detailed protocols, software, and code is hopefully improving with multiple ongoing initiatives. Several research practices can contribute to greater transparency and reproducibility of pharmacokinetic investigations.

Keywords

Reproducibility · Pharmacokinetics · Bias · Heterogeneity · Research practices · Trial registration · Data sharing

Pharmacokinetics is one of the most prolific fields of the biomedical literature. A search for “pharmacokinetics” in PubMed on August 7, 2018 yields 510,660 published items, and a random perusal of such articles suggests that almost all of them are indeed relevant to pharmacokinetics. The pivotal role of this literature for basic, translational, and clinical science is well known. However, a major question is: how reproducible is this literature?
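Even headline counts like this become more reproducible when the query itself is shared in executable form. The sketch below re-runs the search through NCBI’s public E-utilities (the esearch endpoint); it is illustrative only, and the returned count will naturally drift upward from the August 2018 figure as the literature grows.

    import requests

    # Re-run the PubMed query behind the count cited above via NCBI E-utilities.
    # The figure in the text reflects the August 2018 search date; the live
    # count will differ.
    resp = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
        params={"db": "pubmed", "term": "pharmacokinetics", "retmode": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json()["esearchresult"]["count"])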

One should first define what “reproducible” means in the case of pharmacokinetics. With increasing discussion about reproducibility across different scientific fields, it has become evident that different scientists may employ the term “reproducibility” with somewhat different (non-reproducible) notions. To navigate the ensuing conundrum, it has been proposed [1] that reproducibility can be dissected into three components: reproducibility of methods (being able to understand and make use of the methods, tools, scripts and software deployed in a study), reproducibility of results (performing a new study and getting similar results to the original), and reproducibility of inferences (different scientists reaching similar conclusions from the same body of evidence). It is also understood that the expectations and standards of reproducibility may differ across fields. Various disciplines may have different magnitudes of typical heterogeneity across the results of studies that try to address the same question. Moreover, there may be different key reasons contributing to this heterogeneity. Broadly categorized, heterogeneity may be due to genuine biological reasons (true and even potentially desirable heterogeneity) or to errors and biases (typically undesirable heterogeneity).

Pharmacokinetics research likely has several reasons for genuine heterogeneity. Results for pharmacokinetic parameters and kinetic curves may vary substantially across systems, individuals, settings, complexity of compartments, and many other factors. Dissecting these reasons is a worthwhile field of investigation, and understanding them can be a major contribution to science. For example, classical models may have difficulty describing complex, often chaotic phenomena. A whole literature on fractal kinetics has evolved in the last 25 years, with major contributions from Panos Macheras and other scientists [2, 3, 4]. Moreover, clear documentation of heterogeneity due to differences in methods or software options could be useful in dissecting some of the technical sources of heterogeneity.
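To make the contrast concrete, the following minimal sketch (with illustrative parameter values, not a reconstruction of any specific published model) compares classical first-order elimination with a simple fractal-kinetics variant in which the rate coefficient decays with time as k(t) = k0·t^(-h), one common entry point into this literature.

    import numpy as np

    # Classical one-compartment elimination: dC/dt = -k*C, so C(t) = C0*exp(-k*t).
    def c_classical(t, c0, k):
        return c0 * np.exp(-k * t)

    # Simple fractal-kinetics variant: dC/dt = -k0 * t**(-h) * C with 0 < h < 1,
    # which integrates to C(t) = C0 * exp(-k0 * t**(1-h) / (1-h)).
    def c_fractal(t, c0, k0, h):
        return c0 * np.exp(-k0 * t ** (1.0 - h) / (1.0 - h))

    t = np.linspace(0.1, 24.0, 6)  # hours; start above 0 to avoid the k(t) singularity
    print(np.round(c_classical(t, 100.0, 0.2), 1))
    print(np.round(c_fractal(t, 100.0, 0.2, 0.5), 1))

The fractal variant declines faster early and slower late than the simple exponential, the kind of behavior that classical models struggle to capture.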

The unavoidably large genuine heterogeneity poses even greater requirements and expectations for containing and minimizing sources of undesirable heterogeneity that are due to errors and biases. Otherwise, the effort to dissect genuine heterogeneity will be overwhelmed by these errors and biases with modeling becoming mostly modeling of noise. Are current research practices in pharmacokinetics sufficiently robust, transparent, and unbiased to ensure reproducibility of methods, results, and inferences? If not, what more can be done? Since new processes and practices may also require effort and resources and/or may have some inadvertent negative consequences, how do we ensure that the field selects the best choices? What follows is a discussion of some core issues affecting reproducibility and their current status and prospects in pharmacokinetics research.

Reporting and research standards

Standardized, consistent, informative reporting of how research has been done is an essential prerequisite for reproducibility of methods, but also for understanding what was done in any scientific study. Several reporting standards for pharmacokinetic studies have already been established by regulatory agencies, journals, and national or international initiatives [5, 6, 7, 8, 9, 10, 11, 12, 13]. However, their uptake is still limited. For example, the comprehensive reporting guidelines on population pharmacokinetic analyses published in 2015 [5] were cited only 15 times within 3 years of publication, and the ClinPK statement for reporting guidance on clinical pharmacokinetic studies [6] was cited only 17 times in the same time frame. The FDA guidance for industry on population pharmacokinetics has been cited 188 times since it was published almost 20 years ago [7] and has been used in an even larger number of studies, but its uptake still pertains to only a minority of population pharmacokinetic studies.

It is possible that these efforts have indirectly improved the reporting of pharmacokinetic studies, even where they are not cited, by creating awareness of these issues. Several of these efforts also intend to provide guidance not only on the reporting but also on the design and conduct of pharmacokinetic investigations. There have been only a few systematic empirical assessments of pharmacokinetic studies in terms of their quality of design, conduct and reporting. An assessment of 324 studies published between 2002 and 2004 found that only 28% of pharmacokinetic models (and an even smaller percentage of pharmacodynamic models) were adequately evaluated [14, 15]. Most studies performed only basic internal evaluation, reporting of statistical methods was rudimentary in the majority of studies, and external evaluation was performed for only 7% of models [14, 15]. Such a large-scale assessment has not been repeated with recent studies to see whether any major improvements can be documented. One hopes that evaluation practice has improved in more recent years; model evaluation is itself also evolving rather than a static feature. Other early assessments focused on studies at the interface of clinical pharmacology [16, 17] and also showed substantial reporting and quality deficits.
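For readers unfamiliar with what external evaluation involves at its simplest, the sketch below computes two classical predictive-performance summaries (bias as mean relative prediction error, precision as root mean squared error) on a hypothetical external dataset. The numbers are invented for illustration, and full evaluations of population models involve considerably more, e.g. simulation-based diagnostics.

    import numpy as np

    # Hypothetical observed vs. model-predicted concentrations (mg/L) in an
    # external dataset not used for model building.
    obs = np.array([12.1, 8.4, 5.9, 3.8, 2.5])
    pred = np.array([11.0, 8.9, 6.3, 3.5, 2.1])

    pe = 100.0 * (pred - obs) / obs              # relative prediction error, %
    mpe = pe.mean()                              # bias
    rmse = np.sqrt(np.mean((pred - obs) ** 2))   # precision, mg/L
    print(f"MPE = {mpe:+.1f}%, RMSE = {rmse:.2f} mg/L")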

The quality of peer review in the field has also not been formally, systematically assessed, but one suspects that there is considerable room for improvement given the suboptimal quality of much that gets published. Peer review is not systematically taught, and reviewers learn the “art” of review in practice. Guidance has been proposed on 10 items to look for while peer reviewing clinical pharmacology papers [18]. However, rapid solutions for training the scientific workforce to do better studies and better peer reviews are unlikely. The cost of such solutions is unknown, but one hopes that the payback would be greater than the investment. Empirical studies that assess the prevalence of specific errors and alert the scientific community to their presence and to the possibility of avoiding them (e.g. Ref. [19]) would be useful, but these are rare. The methodological and statistical literacy and numeracy of the larger scientific workforce involved in pharmacokinetic studies need to be buttressed. This probably requires combining core training with continuing education at all levels, since some techniques and methods, software, code scripts and computational environments are novel. One needs to be thoroughly informed about the strengths, weaknesses, and caveats of new tools before using them.

The field would benefit from building wider, international consensus about both statistical and laboratory standards for applied pharmacokinetic studies. While some of that role has been adopted by regulatory agencies, there is probably sufficient room for further standardization and improvement of the methods and tools used. Automating some processes may help [20], but automation cannot replace sufficient methodological literacy.

Publication bias and pre-registration

Publication bias refers to the situation where specific study results are reported and disseminated preferentially. Undesirable results are either not published or are analyzed and explored in various ways until they become desirable enough for publication. While publication bias has been documented extensively in diverse types of biomedical research, there is relatively limited evidence about its systematic existence in pharmacokinetic studies. The evidence comes mostly from case studies of specific drugs or topics and usually pertains to indirect evidence (e.g. detection of funnel-plot asymmetry or small-study effects in meta-analyses, where larger studies show smaller effect sizes) [21] rather than clear documentation of trials with unfavorable results that remain unpublished [22]. Meta-analyses remain quite uncommon in pharmacokinetics, and probably fewer than 0.1% of the over 100,000 meta-analyses to date [23] are about pharmacokinetics. This does not allow bird’s-eye views of the available evidence, even though many studies may be performed on the same or similar topics. Moreover, given that almost all pharmacokinetic studies on most topics use almost equally small sample sizes, it is usually meaningless to apply small-study effect tests [24]. Nevertheless, sample size considerations may vary for individual (rich sampling, fewer individuals) versus population pharmacokinetic studies (perhaps a mix of rich and sparse sampling, typically more individuals). One may speculate that many pharmacokinetic studies are never written up for journal submission and publication. Moreover, given that most pharmacokinetic studies use standard designs and most are unlikely to have extremely novel, extravagant results, many journals may unfortunately reject them for perceived lack of novelty, thus contributing to loss of information.
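For illustration, here is a minimal version of the funnel-asymmetry (Egger) regression mentioned above, applied to invented per-study effects and standard errors; as noted, such tests are rarely informative when all studies in a pharmacokinetic meta-analysis are similarly small.

    import numpy as np
    from scipy import stats  # SciPy >= 1.7 for intercept_stderr

    # Hypothetical study-level effects (e.g. log AUC ratios) and standard errors.
    effect = np.array([0.35, 0.28, 0.40, 0.15, 0.10])
    se = np.array([0.20, 0.15, 0.18, 0.08, 0.05])

    # Egger's test: regress standardized effect on precision; an intercept far
    # from zero suggests funnel-plot asymmetry (small-study effects).
    res = stats.linregress(1.0 / se, effect / se)
    t_int = res.intercept / res.intercept_stderr
    p_int = 2 * stats.t.sf(abs(t_int), df=len(effect) - 2)
    print(f"intercept = {res.intercept:.2f}, p = {p_int:.3f}")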

The most secure way to understand whether publication and other reporting biases exist would be to have pre-registration records of pharmacokinetic trials. Then one could track the publication fate of these studies. If the pre-registration includes sufficiently detailed information (and, even better, if a protocol is available), one can also compare notes between what was intended to be done, analyzed, and presented and what is eventually published. However, probably most pharmacokinetic studies are still not registered at all. A search in ClinicalTrials.gov (August 7, 2018) with “pharmacokinetic” yields 9955 such studies that have a record in this registry. This is a very small number even compared with the number of pharmacokinetic studies that are published (which is itself probably only a modest fraction of all pharmacokinetic studies done). Of those 9955 trials, 1116 are listed as “Recruiting”, and the number increases to 1855 if the categories “Not yet recruiting”, “Enrolling by invitation”, and “Active, not recruiting” are also included. The majority are listed as “Completed” (n = 6861 trials), but only 1200 of those have also posted their results in ClinicalTrials.gov.
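Registry counts such as these can likewise be pulled programmatically, which makes them easy to update and audit. A rough sketch follows; it assumes the shape of the ClinicalTrials.gov v2 REST API (the endpoint path and parameter names should be checked against the current API documentation), and the counts will differ from the August 2018 snapshot.

    import requests

    # Count registered studies matching "pharmacokinetic" (assumes the
    # ClinicalTrials.gov v2 API; verify endpoint and parameters before use).
    resp = requests.get(
        "https://clinicaltrials.gov/api/v2/studies",
        params={"query.term": "pharmacokinetic", "countTotal": "true", "pageSize": 1},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json().get("totalCount"))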

Based on the above, even though exact data are missing, it is likely that the large majority of pharmacokinetic studies are not pre-registered, that the large majority are not published, and that among those that are published, deviations from the original protocol (if a protocol existed) and exploratory analyses must be common. Clearly there is plenty of room to improve pre-registration and publication rates and protocol availability with minimal resources. Pharmacokinetic studies often do not stand alone but form part of a larger protocol. If so, efforts to improve transparency should focus on the larger protocol itself. It is less clear, however, whether the published results suffer from heavy publication bias. In contrast to late-phase trials, where there are some clearly desirable outcomes (e.g. favorable results for clinical outcomes or even widely recognized surrogates), in pharmacokinetics it is often less clear what outcome would be desirable.

The exception is the situation where the pharmacokinetic data are the outcome on which licensing or marketing would be based and no further clinical outcomes will be obtained, e.g. bioavailability in bioequivalence studies. Publication and other reporting biases may be common in bioequivalence trials. One of the best-known examples of bias in the literature is the “thyroid storm” saga about biased results on the bioequivalence of levothyroxine preparations [25, 26]. For pharmacokinetic trials where the obtained information is only an early step in clinical drug development, the sponsor would favor obtaining accurate and fair assessments rather than biased, inflated results. The reason is that pharmacokinetic information will be used, along with other efficacy and safety considerations, in carefully choosing the best dose. Wrong choices about dosage and implementation of the drug in later-phase trials would reduce the chances of success in late-stage pivotal trials. It is notable that industry investigators have been particularly active in raising problems with the reproducibility of preclinical research [27, 28], since they have every reason to want to optimize the chances of successful development of a new proposed drug. Thus, while industry involvement may lead to clear sponsor bias in some types of research [29], it may protect from bias in other settings. Moreover, publication and related biases may differ between large companies that are interested in the full development of a new drug and smaller companies and start-ups whose goal is to secure more investment and survive to the next phase without necessarily aiming to go all the way to final licensing. Furthermore, when pharmacokinetic publications are driven by academic rather than industry incentives, the patterns of bias are difficult to predict. In this case, the interest is primarily to obtain funding and publications, and there may be little or no regulatory or other oversight to guarantee the reproducibility of the work.
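As a reminder of why reporting incentives are so sharp in this setting: average bioequivalence is conventionally declared when the 90% confidence interval of the geometric mean test/reference ratio (for AUC and Cmax) falls within 80–125%. The sketch below applies that criterion to invented paired AUC values, using a simple paired analysis on the log scale as a stand-in for the full crossover ANOVA.

    import numpy as np
    from scipy import stats

    # Hypothetical within-subject AUC pairs (test vs. reference formulation).
    auc_test = np.array([102.0, 95.0, 110.0, 88.0, 105.0, 99.0])
    auc_ref = np.array([100.0, 92.0, 118.0, 85.0, 100.0, 104.0])

    # 90% CI of the geometric mean ratio, computed on the log scale.
    d = np.log(auc_test / auc_ref)
    n = len(d)
    half = stats.t.ppf(0.95, df=n - 1) * d.std(ddof=1) / np.sqrt(n)
    lo, hi = np.exp(d.mean() - half), np.exp(d.mean() + half)
    print(f"GMR 90% CI: {100*lo:.1f}% to {100*hi:.1f}% (criterion: within 80-125%)")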

Availability of raw data, protocols, software, and code

Even though availability of raw data and full detailed protocols is still the exception across biomedical research [30], there has been considerable improvement in this regard over the last several years. Pharmacokinetics would benefit from wider access to data and protocols. There are already several ongoing initiatives that try to enhance the sharing of resources around clinical trials at all phases. For example, ClinicalStudyDataRequest.com lists, as of February 3, 2019, a total of 4078 studies for which data can be requested, and a search with “pharmacokinetic” or “pharmacokinetics” yields 609 entries. Use of data for re-analyses [31], meta-analyses, secondary analyses, external validation studies [32], and entirely new projects that combine information from various sources to yield new insights is still a largely unexplored frontier. Data sharing is not necessarily a straightforward process, however, and each field needs to find the best way to operationalize such sharing: maximizing benefits, allowing optimal allocation of credit, using anonymization as needed, and avoiding or minimizing data misuse and wrong inferences [33, 34].

For some datasets, separate articles have already been published that describe them in sufficient detail (as, e.g., in the journal Data in Brief) to facilitate further use. In other projects, sharers need to reach a consensus on what will be shared and how. One example is the OrBiTo IMI project [35], where 13 pharmaceutical companies have agreed to share biopharmaceutics drug properties and performance data for simulations with in silico physiologically based pharmacokinetic tools.

Optimizing transparency about software and code and making these resources widely available in the scientific community is another challenge faced by many disciplines [36]. Many pharmacokinetic studies use widely available software with standard options (e.g. the popPK module in R), and this helps secure transparency. Some commercial software packages have also been very popular, e.g. NONMEM for non-linear mixed effects modeling, and they may come with diverse tool suites [37, 38], creating many analytical options with comparative advantages and disadvantages. Furthermore, much investigator-developed software and code is also employed, and these resources may often not be very transparent. Guidelines have been proposed on enhancing this aspect of computational reproducibility [36], and they largely apply to pharmacokinetic studies as well. New opportunities, but also challenges, may arise from new computational environments, e.g. cloud computing [39].
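One low-cost practice that helps with this kind of computational transparency is saving a small “run manifest” of the software environment next to every analysis output. A minimal sketch follows; the package names listed are illustrative placeholders for whatever an analysis actually imports.

    import importlib.metadata
    import json
    import platform
    import sys

    # Record interpreter, platform, and key package versions alongside results
    # so the computational environment of a run can be reconstructed later.
    manifest = {
        "python": sys.version,
        "platform": platform.platform(),
        "packages": {
            name: importlib.metadata.version(name)
            for name in ("numpy", "scipy", "pandas")  # illustrative list
        },
    }
    print(json.dumps(manifest, indent=2))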

Entirely independent pharmacometric analysis of the same dataset, followed by cross-critique and adjudication of differences in the pharmacometric methods applied, may be useful in some circumstances, especially when complex datasets are involved or major regulatory issues are at stake. Drawbacks include the time and expense of such exercises. There is a wide spectrum of options for appraising datasets and their analyses, ranging from superficial review by peer reviewers at the journal submission phase only to detailed independent re-analyses of the same data. Routine availability of full data and methods may make a shift towards the latter option more readily feasible.

There are currently several ongoing initiatives that try to enhance sharing of workflows [40]. Some efforts are disease-specific, while others try to form repositories of models and standardize model description language and model exchange across pharmacometrics at large, as in the case of the Drug Disease Model Resources (DDMoRe) consortium [41, 42, 43].

Moreover, in several preclinical science fields we are already witnessing a confluence of data production, publication, deposition in repositories, and generation of meta-data that can all happen concurrently. For example, in the field of enzyme kinetics there have been several efforts in this direction [44, 45, 46, 47], culminating more recently in the STRENDA DB database [48]. It may be possible to mobilize similar efforts in diverse areas of pharmacokinetics, leading to the generation of comprehensive, systematic databases of reliable data and meta-data. Collateral risks from more sharing are likely to be minimal: e.g. in theory there is a risk that multiple analyses of the same data may lead to some biased results, but this is unknown for pharmacokinetic data.

Conclusion

Table 1 summarizes suggestions on research practices that would probably be beneficial for pharmacokinetics research if adopted more widely. The exact way and sequence in which to implement research reforms can be debated. However, there is sufficient consensus in the scientific community that suboptimal reproducibility is a problem and that we need to act on it [49]. Not all research practices that have worked favorably in other fields may work equally well in improving reproducibility in pharmacokinetic research. The reproducibility of this literature and its response to various modifications of research practices need more empirical study. Research on these research practices [50] may help us understand better where we stand, what we do well, what we don’t do that well, and what we can do better and how.
Table 1 Some research practices that may improve reproducibility in pharmacokinetics

  • Reporting standards (adoption of existing ones, systematic development of new ones)
  • Standardization of statistical and laboratory tools and procedures
  • Improved statistical literacy, training of the scientific workforce in new methods and tools
  • Study pre-registration
  • Availability of all results (peer-reviewed publication or at least posting in trial registries)
  • Use of rigorous meta-analyses
  • Empirical evaluations of biases and errors in the field
  • Wider availability of raw data, full detailed protocols, software and code scripts and workflows
References

1. Goodman SN, Fanelli D, Ioannidis JP (2016) What does research reproducibility mean? Sci Transl Med 8:341ps12. https://doi.org/10.1126/scitranslmed.aaf5027
2. Macheras P, Argyrakis P, Polymilis C (1996) Fractal geometry, fractal kinetics and chaos en route to biopharmaceutical sciences. Eur J Drug Metab Pharmacokinet 21:77–86
3. Macheras P, Argyrakis P (1997) Gastrointestinal drug absorption: is it time to consider heterogeneity as well as homogeneity? Pharm Res 14:842–847
4. Pang KS, Weiss M, Macheras P (2007) Advanced pharmacokinetic models based on organ clearance, circulatory, and fractal concepts. AAPS J 9:E268–E283
5. Dykstra K, Mehrotra N, Tornøe CW, Kastrissios H, Patel B, Al-Huniti N, Jadhav P, Wang Y, Byon W (2015) Reporting guidelines for population pharmacokinetic analyses. J Clin Pharmacol 55:875–887. https://doi.org/10.1002/jcph.532
6. Kanji S, Hayes M, Ling A, Shamseer L, Chant C, Edwards DJ, Edwards S, Ensom MH, Foster DR, Hardy B, Kiser TH, la Porte C, Roberts JA, Shulman R, Walker S, Zelenitsky S, Moher D (2015) Reporting guidelines for clinical pharmacokinetic studies: the ClinPK statement. Clin Pharmacokinet 54:783–795. https://doi.org/10.1007/s40262-015-0236-8
7. Guidance for Industry (2018) Population pharmacokinetics. US Food and Drug Administration website. http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM072137.pdf. Accessed 3 Aug 2018
8. Romero K, Corrigan B, Tornøe CW et al (2010) Pharmacometrics as a discipline is entering the “industrialization” phase: standards, automation, knowledge sharing, and training are critical for future success. J Clin Pharmacol 50:9S–19S
9. Wade JR, Edholm M, Salmonson T (2005) A guide for reporting the results of population pharmacokinetic analyses: a Swedish perspective. AAPS J 7:E456
10. Guideline on reporting the results of population pharmacokinetic analyses (2018) Committee for Medicinal Products for Human Use (CHMP). http://www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2009/09/WC500003067.pdf. Accessed 3 Aug 2018
11. Byon W, Smith MK, Chan P, Tortorici MA et al (2013) Establishing best practices and guidance in population modeling: an experience with an internal population pharmacokinetic analysis guidance. CPT Pharmacometrics Syst Pharmacol 2:e51
12. Bonate PL, Strougo A, Desai A et al (2012) Guidelines for the quality control of population pharmacokinetic-pharmacodynamic analyses: an industry perspective. AAPS J 14:749–758
13. Jamsen KM, McLeay SC, Barras A, Green B (2014) Reporting a population pharmacokinetic-pharmacodynamic study: a journal’s perspective. Clin Pharmacokinet 53:111–122
14. Brendel K, Dartois C, Comets E et al (2007) Are population pharmacokinetic and/or pharmacodynamic models adequately evaluated? A survey of the literature from 2002 to 2004. Clin Pharmacokinet 46:221–234
15. Dartois C, Brendel K, Comets E et al (2007) Overview of model building strategies in population PK/PD analyses: 2002–2004 literature survey. Br J Clin Pharmacol 64:603–612
16. Mills E, Loke YK, Ping W, Victor MM, Daniel P, David M et al (2004) Determining the reporting quality of RCTs in clinical pharmacology. Br J Clin Pharmacol 58:61–65
17. Mills EJ, Chan AW, Ping W, Andy V, Gordon HG, Douglas GA (2009) Design, analysis, and presentation of crossover trials. Trials 10:27
18. Woodcock BG, Harder S (2017) The 10-D assessment and evidence-based medicine tool for authors and peer reviewers in clinical pharmacology. Int J Clin Pharmacol Ther 55:639–642
19. Jasińska-Stroschein M, Kurczewska U, Orszulak-Michalak D (2017) Errors in reporting on dissolution research: methodological and statistical implications. Pharm Dev Technol 22:103–110. https://doi.org/10.1080/10837450.2016.1194858
20. Schaefer P (2011) Automated reporting of pharmacokinetic study results: gaining efficiency downstream from the laboratory. Bioanalysis 3:1471–1478. https://doi.org/10.4155/bio.11.133
21. Uesawa Y, Takeuchi T, Mohri K (2010) Publication bias on clinical studies of pharmacokinetic interactions between felodipine and grapefruit juice. Pharmazie 65:375–378
22. Cowley AJ, Skene A, Stainer K, Hampton JR (1993) The effect of lorcainide on arrhythmias and survival in patients with acute myocardial infarction: an example of publication bias. Int J Cardiol 40:161–166
23. Ioannidis JP (2016) The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. Milbank Q 94:485–514. https://doi.org/10.1111/1468-0009.12210
24. Ioannidis JP, Trikalinos TA (2007) The appropriateness of asymmetry tests for publication bias in meta-analyses: a large survey. CMAJ 176:1091–1096
25. Rennie D (1997) Thyroid storm. JAMA 277:1238–1243
26. Dong BJ, Hauck WW, Gambertoglio JG, Gee L, White JR, Bubp JL, Greenspan FS (1997) Bioequivalence of generic and brand-name levothyroxine products in the treatment of hypothyroidism. JAMA 277:1205–1213
27. Begley CG, Ioannidis JP (2015) Reproducibility in science: improving the standard for basic and preclinical research. Circ Res 116:116–126. https://doi.org/10.1161/CIRCRESAHA.114.303819
28. Begley CG, Ellis LM (2012) Drug development: raise standards for preclinical cancer research. Nature 483:531–533. https://doi.org/10.1038/483531a
29. Lundh A, Lexchin J, Mintzes B, Schroll JB, Bero L (2017) Industry sponsorship and research outcome. Cochrane Database Syst Rev 2:MR000033. https://doi.org/10.1002/14651858.MR000033.pub3
30. Iqbal SA, Wallach JD, Khoury MJ, Schully SD, Ioannidis JP (2016) Reproducible research practices and transparency across the biomedical literature. PLoS Biol 14(1):e1002333. https://doi.org/10.1371/journal.pbio.1002333
31. Ebrahim S, Sohani ZN, Montoya L, Agarwal A, Thorlund K, Mills EJ, Ioannidis JP (2014) Reanalyses of randomized clinical trial data. JAMA 312:1024–1032. https://doi.org/10.1001/jama.2014.9646
32. Völler S, Flint RB, Stolk LM, Degraeuwe PLJ, Simons SHP, Pokorna P, Burger DM, de Groot R, Tibboel D, Knibbe CAJ, DINO study group (2017) Model-based clinical dose optimization for phenobarbital in neonates: an illustration of the importance of data sharing and external validation. Eur J Pharm Sci 109S:S90–S97. https://doi.org/10.1016/j.ejps.2017.05.026
33. Doshi P, Goodman SN, Ioannidis JP (2013) Raw data from clinical trials: within reach? Trends Pharmacol Sci 34:645–647. https://doi.org/10.1016/j.tips.2013.10.006
34. Anderson BJ, Merry AF (2009) Data sharing for pharmacokinetic studies. Paediatr Anaesth 19:1005–1010
35. Lacy-Jones K, Hayward P, Andrews S, Gledhill I, McAllister M, Abrahamsson B, Rostami-Hodjegan A, Pepin X (2017) Biopharmaceutics data management system for anonymised data sharing and curation: first application with OrBiTo IMI project. Comput Methods Programs Biomed 140:29–44. https://doi.org/10.1016/j.cmpb.2016.11.006
36. Stodden V, McNutt M, Bailey DH, Deelman E, Gil Y, Hanson B, Heroux MA, Ioannidis JP, Taufer M (2016) Enhancing reproducibility for computational methods. Science 354:1240–1241
37. Beal S, Sheiner LB, Boeckmann A, Bauer RJ (2009) NONMEM user’s guides. ICON Development Solutions, Ellicott City
38. Lindbom L, Pihlgren P, Jonsson EN (2005) PsN-Toolkit–a collection of computer intensive statistical methods for non-linear mixed effect modeling using NONMEM. Comput Methods Programs Biomed 79:241–257
39. Sanduja S, Jewell P, Aron E, Pharai N (2015) Cloud computing for pharmacometrics: using AWS, NONMEM, PsN, Grid Engine, and Sonic. CPT Pharmacometrics Syst Pharmacol 4:537–546. https://doi.org/10.1002/psp4.12016
40. Conrado DJ, Karlsson MO, Romero K, Sarr C, Wilkins JJ (2017) Open innovation: towards sharing of data, models and workflows. Eur J Pharm Sci 109S:S65–S71. https://doi.org/10.1016/j.ejps.2017.06.035
41. Swat MJ, Moodie S, Wimalaratne SM, Kristensen NR, Lavielle M, Mari A, Magni P, Smith MK, Bizzotto R, Pasotti L, Mezzalana E, Comets E, Sarr C, Terranova N, Blaudez E, Chan P, Chard J, Chatel K, Chenel M, Edwards D, Franklin C, Giorgino T, Glont M, Girard P, Grenon P, Harling K, Hooker AC, Kaye R, Keizer R, Kloft C, Kok JN, Kokash N, Laibe C, Laveille C, Lestini G, Mentré F, Munafo A, Nordgren R, Nyberg HB, Parra-Guillen ZP, Plan E, Ribba B, Smith G, Trocóniz IF, Yvon F, Milligan PA, Harnisch L, Karlsson M, Hermjakob H, Le Novère N (2015) Pharmacometrics Markup Language (PharmML): opening new perspectives for model exchange in drug development. CPT Pharmacometrics Syst Pharmacol 4:316–319. https://doi.org/10.1002/psp4.57
42. Smith MK, Moodie SL, Bizzotto R, Blaudez E, Borella E, Carrara L, Chan P, Chenel M, Comets E, Gieschke R, Harling K, Harnisch L, Hartung N, Hooker AC, Karlsson MO, Kaye R, Kloft C, Kokash N, Lavielle M, Lestini G, Magni P, Mari A, Mentré F, Muselle C, Nordgren R, Nyberg HB, Parra-Guillén ZP, Pasotti L, Rode-Kristensen N, Sardu ML, Smith GR, Swat MJ, Terranova N, Yngman G, Yvon F, Holford N, DDMoRe consortium (2017) Model description language (MDL): a standard for modeling and simulation. CPT Pharmacometrics Syst Pharmacol 6:647–650. https://doi.org/10.1002/psp4.12222
43. Wilkins JJ, Chan P, Chard J, Smith G, Smith MK, Beer M, Dunn A, Flandorfer C, Franklin C, Gomeni R, Harnisch L, Kaye R, Moodie S, Sardu ML, Wang E, Watson E, Wolstencroft K, Cheung S, DDMoRe Consortium (2017) Thoughtflow: standards and tools for provenance capture and workflow definition to support model-informed drug discovery and development. CPT Pharmacometrics Syst Pharmacol 6:285–292. https://doi.org/10.1002/psp4.12171
44. Chang A, Schomburg I, Placzek S, Jeske L, Ulbrich M, Xiao M, Sensen CW, Schomburg D (2015) BRENDA in 2015: exciting developments in its 25th year of existence. Nucleic Acids Res 43:D439–D446
45. Wittig U, Kania R, Golebiewski M, Rey M, Shi L, Jong L, Algaa E, Weidemann A, Sauer-Danzwith H, Mir S et al (2012) SABIO-RK–database for biochemical reaction kinetics. Nucleic Acids Res 40:D790–D796
46. Wittig U, Rey M, Kania R, Bittkowski M, Shi L, Golebiewski M, Weidemann A, Mueller W, Rojas I (2014) Challenges for an enzymatic reaction kinetics database. FEBS J 281:572–582
47. Wittig U, Kania R, Bittkowski M, Wetsch E, Shi L, Jong L, Golebiewski M, Rey M, Weidemann A, Rojas I et al (2014) Data extraction for the reaction kinetics database SABIO-RK. Perspect Sci 1:33–40
48. Swainston N, Baici A, Bakker BM, Cornish-Bowden A, Fitzpatrick PF, Halling P, Leyh TS, O’Donovan C, Raushel FM, Reschel U, Rohwer JM, Schnell S, Schomburg D, Tipton KF, Tsai MD, Westerhoff HV, Wittig U, Wohlgemuth R, Kettner C (2018) STRENDA DB: enabling the validation and sharing of enzyme kinetics data. FEBS J 285:2193–2204. https://doi.org/10.1111/febs.14427
49. Baker M (2016) 1,500 scientists lift the lid on reproducibility. Nature 533:452–454
50. Ioannidis JP, Fanelli D, Dunne DD, Goodman SN (2015) Meta-research: evaluation and improvement of research methods and practices. PLoS Biol 13:e1002264. https://doi.org/10.1371/journal.pbio.1002264

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

Departments of Medicine, Health Research and Policy, Biomedical Data Science, and Statistics, Stanford Prevention Research Center, Meta-Research Innovation Center at Stanford (METRICS), Stanford University, Stanford, USA