Abstract
This chapter is the counterpart for optimization problems of what Chapter 3 is for integration problems. We distinguish between two separate uses of computer-generated random variables. The first use, covered in Section 5.2, is to build stochastic techniques that reach the maximum (or minimum) of a function, devising random exploration schemes on the surface of this function that avoid being trapped in a local maximum (or minimum) while remaining sufficiently attracted by the global maximum (or minimum). The second use, described in Section 5.3, is closer to Chapter 3 in that the random variables serve to approximate the function to be optimized. The most popular algorithm in this perspective is the EM (Expectation-Maximization) algorithm.
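The first use described above, the stochastic search of Section 5.2, can be illustrated by a minimal simulated annealing sketch. The bimodal test function h, the logarithmic cooling schedule, the proposal scale, and the iteration count are illustrative assumptions, not the chapter's own example:

```python
import math
import random

def simulated_annealing(f, x0, n_iter=5000, t0=1.0, seed=0):
    """Maximize f by simulated annealing: uphill moves are always
    accepted, downhill moves with probability exp(delta / T), so the
    random exploration can escape a local maximum."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for i in range(1, n_iter + 1):
        t = t0 / math.log(1 + i)        # slowly decreasing temperature
        y = x + rng.gauss(0.0, 0.5)     # local random exploration
        fy = f(y)
        if fy >= fx or rng.random() < math.exp((fy - fx) / t):
            x, fx = y, fy
        if fx > best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Bimodal surface: local maximum near -2, global maximum near 2.
h = lambda x: math.exp(-0.5 * (x - 2) ** 2) + 0.6 * math.exp(-0.5 * (x + 2) ** 2)
x_hat, _ = simulated_annealing(h, x0=-2.0)
```

Started in the basin of the local maximum at -2, the occasional acceptance of downhill moves lets the chain cross the valley rather than stay trapped, so the best point found lies near the global maximum at 2.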
Keywords
- Simulated Annealing
- Stochastic Approximation
- Exponential Family
- Monte Carlo Optimization
“Remember, boy,” Sam Nakai would sometimes tell Chee, “when you’re tired of walking up a long hill you think about how easy it’s going to be walking down.”
—Tony Hillerman, A Thief of Time
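The second use, approximating the function to be optimized, is exemplified by the EM algorithm mentioned in the abstract. Below is a hedged toy sketch of EM for a balanced two-component normal mixture with unit variances; the simulated data, starting values, and iteration count are illustrative assumptions:

```python
import math
import random

def em_two_means(data, mu=(-1.0, 1.0), n_iter=50):
    """EM for 0.5*N(mu1,1) + 0.5*N(mu2,1): the E-step computes the
    posterior probability that each observation comes from component 1,
    the M-step updates the means by weighted averages."""
    mu1, mu2 = mu
    phi = lambda x, m: math.exp(-0.5 * (x - m) ** 2)  # shared-variance kernel
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each observation
        w = [phi(x, mu1) / (phi(x, mu1) + phi(x, mu2)) for x in data]
        # M-step: weighted means of the data
        s1 = sum(w)
        s2 = len(data) - s1
        mu1 = sum(wi * x for wi, x in zip(w, data)) / s1
        mu2 = sum((1 - wi) * x for wi, x in zip(w, data)) / s2
    return mu1, mu2

rng = random.Random(1)
data = [rng.gauss(-2, 1) for _ in range(200)] + [rng.gauss(2, 1) for _ in range(200)]
m1, m2 = em_two_means(data)
```

Each EM iteration cannot decrease the observed-data likelihood, so from the starting values (-1, 1) the mean estimates move toward the two cluster centers near -2 and 2.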
Copyright information
© 2004 Springer Science+Business Media New York
Cite this chapter
Robert, C.P., Casella, G. (2004). Monte Carlo Optimization. In: Monte Carlo Statistical Methods. Springer Texts in Statistics. Springer, New York, NY. https://doi.org/10.1007/978-1-4757-4145-2_5
DOI: https://doi.org/10.1007/978-1-4757-4145-2_5
Publisher Name: Springer, New York, NY
Print ISBN: 978-1-4419-1939-7
Online ISBN: 978-1-4757-4145-2
eBook Packages: Springer Book Archive