Inference and Learning for Active Sensing, Experimental Design and Control

  • Conference paper
Pattern Recognition and Image Analysis (IbPRIA 2009)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 5524)


Abstract

In this paper we argue that maximum expected utility is a suitable framework for modeling a broad range of decision problems arising in pattern recognition and related fields. Examples include, among others, gaze planning and other active vision problems, active learning, sensor and actuator placement and coordination, intelligent human-computer interfaces, and optimal control. Building on this observation, we present a common inference and learning framework for attacking these problems. We demonstrate this approach on three examples: (i) active sensing with nonlinear, non-Gaussian, continuous models, (ii) optimal experimental design to discriminate among competing scientific models, and (iii) nonlinear optimal control.
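The maximum-expected-utility principle the abstract refers to can be illustrated with a small Monte Carlo sketch. The toy model below is an illustrative assumption, not the paper's actual experiments: a latent state with a Gaussian prior is measured by a sensor whose noise grows with distance from an (assumed) sweet spot, and the design d is chosen to maximize the expected utility E_{p(y|d)}[U(d, y)], here with U taken as negative squared estimation error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy active-sensing model (illustrative assumption, not from the paper):
# latent state theta ~ N(0, 1); placing the sensor at position d yields
# y = theta + noise, where the noise level grows with distance from d = 1.
def noise_std(d):
    return 0.1 + abs(d - 1.0)  # sensor is most accurate at d = 1

def expected_utility(d, n=50_000):
    """Monte Carlo estimate of E[U(d, y)] with U = -squared estimation error."""
    s = noise_std(d)
    theta = rng.normal(size=n)          # draws from the prior
    y = theta + s * rng.normal(size=n)  # simulated measurements
    theta_hat = y / (1.0 + s * s)       # posterior mean of the conjugate model
    return -np.mean((theta - theta_hat) ** 2)

# Maximize expected utility over a grid of candidate sensor positions.
candidates = np.linspace(-2.0, 2.0, 81)
best = max(candidates, key=expected_utility)
print(f"best design: {best:.2f}")
```

In this conjugate Gaussian case the expectation is available in closed form; the Monte Carlo loop stands in for the nonlinear, non-Gaussian settings the paper targets, where the inner posterior computation itself requires simulation-based inference.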

Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kueck, H., Hoffman, M., Doucet, A., de Freitas, N. (2009). Inference and Learning for Active Sensing, Experimental Design and Control. In: Araujo, H., Mendonça, A.M., Pinho, A.J., Torres, M.I. (eds) Pattern Recognition and Image Analysis. IbPRIA 2009. Lecture Notes in Computer Science, vol 5524. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-02172-5_1

  • DOI: https://doi.org/10.1007/978-3-642-02172-5_1

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-02171-8

  • Online ISBN: 978-3-642-02172-5

  • eBook Packages: Computer Science (R0)
