
Part of the book series: Intelligent Systems Reference Library (ISRL, volume 223)


Abstract

In the previous chapter, we saw how to make optimal decisions with respect to a given utility function and belief. An important question is how to compute an updated belief from observations and a prior belief. More generally, we wish to examine how much information we can obtain about an unknown parameter from observations, and how to bound the resulting estimation error. While most of this chapter focuses on the Bayesian framework for estimating parameters, we also look at tools for drawing conclusions about the value of parameters without making specific assumptions about the data distribution, i.e., without providing specific prior information. In the Bayesian setting, we calculate posterior distributions of parameters given data. The basic problem can be stated as follows. Let \(\mathscr {P} \mathrel {\triangleq } \left\{ P_{\omega } \mid \omega \in \Omega \right\} \) be a family of probability measures on \(({\mathcal {S}}, {\mathcal {F}}_{\mathcal {S}})\) and let \(\xi \) be our prior probability measure on \((\Omega , {\mathcal {F}}_{\Omega })\).
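To make the setup concrete: by Bayes' theorem (a standard statement, not quoted from the chapter body), writing \(P_\omega (x)\) for the likelihood (density or mass) of an observation \(x\) under \(P_\omega \), the posterior probability of a set \(B \in {\mathcal {F}}_{\Omega }\) is

\[
\xi (B \mid x) = \frac{\int _B P_\omega (x) \,\mathrm {d}\xi (\omega )}{\int _\Omega P_\omega (x) \,\mathrm {d}\xi (\omega )}.
\]

A minimal computational sketch of this update for a finite parameter set, assuming the family members are Bernoulli distributions indexed by their bias; all names below are illustrative, not taken from the book:

```python
import numpy as np

# Finite family {P_w : w in Omega}: Bernoulli distributions indexed by bias w.
omega = np.array([0.1, 0.3, 0.5, 0.7, 0.9])    # parameter grid standing in for Omega
prior = np.full(len(omega), 1.0 / len(omega))  # uniform prior xi over Omega

def posterior(prior, omega, observations):
    """Bayes update: xi(w | x) is proportional to P_w(x) * xi(w)."""
    log_post = np.log(prior)
    for x in observations:                     # each observation x is in {0, 1}
        log_post += np.log(omega if x == 1 else 1.0 - omega)
    log_post -= log_post.max()                 # stabilise before exponentiating
    post = np.exp(log_post)
    return post / post.sum()                   # normalise to a distribution

# After three successes and one failure, mass shifts toward larger biases.
print(posterior(prior, omega, observations=[1, 1, 0, 1]))
```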


Notes

  1. There is an alternative definition, which replaces equality of posterior distributions with pointwise equality on the family members, i.e., \(P_\omega (x) = P_\omega (x')\) for all \(\omega \). This is a stronger definition, as it implies the Bayesian one we use here.
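
     A standard illustration (not drawn from the chapter text): for \(n\) Bernoulli observations \(x = (x_1, \dots , x_n)\) with success probability \(\omega \), the statistic \(s(x) = \sum _{i=1}^n x_i\) is sufficient in this stronger sense, since \(P_\omega (x) = \omega ^{s(x)} (1 - \omega )^{n - s(x)}\) depends on \(x\) only through \(s(x)\); thus \(s(x) = s(x')\) implies \(P_\omega (x) = P_\omega (x')\) for all \(\omega \), and equality of the posteriors follows by Bayes' theorem.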

  2. Typically \({\mathcal {Z}}\subset {\mathbb {R}}^k\) for finite-dimensional statistics.

  3. As before, the precision is the inverse of the covariance.
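
     For instance (a standard fact, added for illustration): if a Gaussian observation model has covariance \(\Sigma \), its precision is \(T \mathrel {\triangleq } \Sigma ^{-1}\). In the conjugate update for a normal mean with known \(\Sigma \), precisions add: after \(n\) observations the posterior precision is \(T_n = T_0 + n T\), where \(T_0\) is the prior precision.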



Author information

Correspondence to Christos Dimitrakakis.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Dimitrakakis, C., Ortner, R. (2022). Estimation. In: Decision Making Under Uncertainty and Reinforcement Learning. Intelligent Systems Reference Library, vol 223. Springer, Cham. https://doi.org/10.1007/978-3-031-07614-5_4

