A Laplace-based algorithm for Bayesian adaptive design

Abstract

This article presents a novel Laplace-based algorithm for finding Bayesian adaptive designs under model and parameter uncertainty. Our algorithm uses Laplace importance sampling to provide a computationally efficient approach to adaptive design and inference when compared to standard approaches such as those based on the sequential Monte Carlo (SMC) algorithm. Like the SMC approach, our new algorithm requires very little problem-specific tuning and provides an efficient estimate of utility functions for parameter estimation and/or model choice. Further, within our algorithm, we adopt methods from Pareto smoothing to improve its robustness in forming particle approximations to posterior distributions. To evaluate the new adaptive design algorithm, three motivating examples from the literature are considered, including examples where binary, multiple-response and count data are observed under considerable model and parameter uncertainty. We benchmark the performance of our new algorithm against (1) the standard SMC algorithm and (2) a standard implementation of the Laplace approximation in adaptive design. We assess the performance of each algorithm by comparing computational efficiency and design selection. The results show that our new algorithm is computationally efficient and selects designs that perform as well as, or better than, those chosen by the other two approaches. As such, we propose our Laplace-based algorithm as an efficient approach for designing adaptive experiments.
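As a concrete illustration of the central idea in the abstract, the Python sketch below shows how Laplace importance sampling can form a weighted particle approximation to a posterior distribution, the building block from which design utilities can then be estimated. It is a minimal sketch under assumed ingredients (a two-parameter logistic model for binary responses with independent standard normal priors), not the authors' implementation; the model, prior and function names are illustrative, and the Pareto smoothing of the importance weights mentioned in the abstract is omitted here (it would be applied to the raw log-weights before normalisation).

    # Minimal sketch of Laplace importance sampling (illustrative only; not the
    # authors' implementation). Assumed model: logistic regression for binary
    # responses with independent standard normal priors on the two parameters.

    import numpy as np
    from scipy import optimize, stats

    def log_prior(theta):
        # Illustrative assumption: independent standard normal priors.
        return stats.norm.logpdf(theta).sum()

    def log_likelihood(theta, x, y):
        # Illustrative binary-response (logistic) likelihood at design points x.
        eta = theta[0] + theta[1] * x
        return np.sum(y * eta - np.log1p(np.exp(eta)))

    def log_posterior(theta, x, y):
        return log_prior(theta) + log_likelihood(theta, x, y)

    def laplace_importance_sample(x, y, n_draws=2000, rng=None):
        """Gaussian (Laplace) approximation at the posterior mode, corrected by
        self-normalised importance weights towards the exact posterior."""
        rng = np.random.default_rng() if rng is None else rng

        # 1. Find the posterior mode; BFGS also returns an inverse-Hessian
        #    estimate used as the covariance of the Gaussian approximation.
        res = optimize.minimize(lambda t: -log_posterior(t, x, y),
                                x0=np.zeros(2), method="BFGS")
        q = stats.multivariate_normal(mean=res.x, cov=res.hess_inv)

        # 2. Draw particles from the Gaussian importance distribution.
        draws = q.rvs(size=n_draws, random_state=rng)

        # 3. Importance weights: unnormalised posterior over Gaussian density.
        #    (Pareto smoothing would be applied to log_w at this point.)
        log_w = np.array([log_posterior(t, x, y) for t in draws]) - q.logpdf(draws)
        w = np.exp(log_w - log_w.max())
        return draws, w / w.sum()

    # Usage: a weighted particle approximation after 20 simulated observations,
    # from which utilities such as expected information gain could be estimated.
    rng = np.random.default_rng(1)
    x = rng.uniform(-2, 2, size=20)
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * x))))
    particles, weights = laplace_importance_sample(x, y, rng=rng)
    print("posterior mean estimate:", particles.T @ weights)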



Acknowledgements

SGJS was supported by a QUTPRA scholarship from the Queensland University of Technology. CCD was supported by an Australian Research Council Discovery Project (DP200102101). JMM was supported by an Australian Research Council Discovery Project (DP200101263). Computational resources and services used in this work were provided by the HPC and Research Support Group, Queensland University of Technology, Brisbane, Australia.

Author information

Corresponding author

Correspondence to S. G. J. Senarathne.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

See Figs. 14, 15 and 16.

Fig. 14

The posterior distributions obtained at the sixth iteration of the sequential design by each design algorithm, based on the same design and data from Example 1

Fig. 15

The 95% credible intervals of the distribution of the cumulative time required to run each algorithm in the simulation studies for each of the three examples

Fig. 16

The posterior distributions of the parameters of the data-generating model obtained from each design algorithm, using the design and data gathered from a single simulation of the SMC algorithm in Example 3

Cite this article

Senarathne, S.G.J., Drovandi, C.C. & McGree, J.M. A Laplace-based algorithm for Bayesian adaptive design. Stat Comput 30, 1183–1208 (2020). https://doi.org/10.1007/s11222-020-09938-6


Keywords

  • Importance sampling
  • Model discrimination
  • Parameter estimation
  • Pareto smoothing
  • Sequential Monte Carlo
  • Total entropy