Application of Bayesian Optimization for Pharmaceutical Product Development
Bayesian optimization has been studied in many fields as a technique for the global optimization of black-box functions. We applied these techniques to optimize the formulation and manufacturing methods of pharmaceutical products, eliminating unnecessary experiments and accelerating method development.
A simulation dataset was generated by data augmentation from a design of experiments (DoE) executed to optimize the formulation and process parameters of orally disintegrating tablets. We defined a composite score that integrates multiple objective functions (physical properties of the tablets) so that the pharmaceutical criteria are met simultaneously. Performance measurements were used to compare the influence of the initial training set (controlling its size and variation), the acquisition function, and the schedule of hyperparameter tuning. Additionally, we investigated the performance improvements obtained with Bayesian optimization as opposed to a random search strategy.
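A composite score of this kind can be sketched as a weighted penalty on the deviation of each tablet property from its target. The property names, targets, tolerances, and weights below are purely illustrative assumptions, not the study's actual criteria:

```python
import numpy as np

def composite_score(props, criteria):
    """Combine multiple tablet properties into a single score.

    props:    dict mapping property name -> measured value
    criteria: dict mapping property name -> (target, tolerance, weight)
    Returns a score in (0, 1]; 1.0 means every criterion is met exactly.
    """
    penalty = 0.0
    for name, (target, tol, weight) in criteria.items():
        # Squared deviation from the target, normalized by the tolerance.
        penalty += weight * (abs(props[name] - target) / tol) ** 2
    return 1.0 / (1.0 + penalty)

# Hypothetical criteria for illustration only:
criteria = {
    "hardness_N":       (50.0, 10.0, 1.0),  # tablet hardness [N]
    "disintegration_s": (30.0, 15.0, 1.0),  # disintegration time [s]
    "friability_pct":   (0.5,  0.5,  0.5),  # mass loss in friability test [%]
}
score = composite_score(
    {"hardness_N": 48.0, "disintegration_s": 35.0, "friability_pct": 0.6},
    criteria,
)
```

Turning several objectives into one scalar lets a single surrogate model and acquisition function drive the search; the trade-off is that the weights encode how strongly each criterion is prioritized.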
Bayesian optimization efficiently reduces the number of experiments required to obtain the optimal formulation and process parameters, from about 25 experiments with DoE to 10. Repeated hyperparameter tuning during the Bayesian optimization process stabilizes the variation in performance among different optimization conditions, thus improving average performance.
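The sequential experiment loop behind such a reduction can be sketched as follows: fit a surrogate model to the experiments run so far, score untested candidates with an acquisition function, and run the single most promising candidate next. This is a minimal sketch using a Gaussian-process surrogate and expected improvement on a toy two-parameter objective, not the study's formulation data or model:

```python
import numpy as np
from scipy.stats import norm

def rbf_kernel(A, B, length=0.3, var=1.0):
    # Squared-exponential kernel between row vectors of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length**2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Standard GP regression posterior mean and stddev at test points Xs.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf_kernel(Xs, Xs)) - (v**2).sum(0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    # EI for maximization: expected gain over the best observation so far.
    z = (mu - best) / sigma
    return (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)

# Toy black-box "experiment" with an unknown optimum at x = (0.3, 0.7);
# in practice this would be a real formulation trial.
def run_experiment(x):
    return -((x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(size=(3, 2))               # small initial training set
y = np.array([run_experiment(x) for x in X])

for _ in range(10):                        # ~10 sequential experiments
    cand = rng.uniform(size=(500, 2))      # random candidate pool
    mu, sigma = gp_posterior(X, y, cand)
    x_next = cand[np.argmax(expected_improvement(mu, sigma, y.max()))]
    X = np.vstack([X, x_next])
    y = np.append(y, run_experiment(x_next))

best = X[np.argmax(y)]
```

The acquisition function balances exploiting regions the surrogate predicts to be good against exploring regions where it is uncertain; swapping in a different acquisition function (or retuning the surrogate's hyperparameters between iterations, as the study examines) only changes how `x_next` is chosen.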
We demonstrated the elimination of unnecessary experiments using Bayesian optimization. Simulations under different conditions revealed their dependencies, which will be useful in many real-world applications. Bayesian optimization is expected to reduce the reliance on individual skills and experience, increasing the efficiency and efficacy of optimization tasks and expediting formulation and manufacturing research in pharmaceutical development.
Keywords: Bayesian optimization · Pharmaceutical product · Design of experiment · Artificial neural network · Optimization