Summary.
We present and analyze a new speed-up technique for Monte Carlo optimization: the Iterated Energy Transformation (I.E.T.) algorithm, in which the Metropolis algorithm is applied repeatedly with more and more favourable energy functions, derived from the original one by increasing transformations. We show that this method allows a better speed-up than Simulated Annealing when convergence speed is measured by the probability that the algorithm has failed after a large number of iterations. We also study the limit of a large state space in the special case when the energy is a sum of independent terms. We show that the convergence time of the I.E.T. algorithm is polynomial in the size (number of coordinates) of the problem, but with a worse exponent than for Simulated Annealing. This indicates that the I.E.T. algorithm is well suited to moderate-size problems but not to very large ones. The independent-component case is a good model for the end of many optimization processes, when at low temperature a neighbourhood of a local minimum is explored by small, far-apart modifications of the current solution. We show that in this case both global optimization methods, Simulated Annealing and the I.E.T. algorithm, are less efficient than repeated local stochastic optimizations. Using the general concept of a “slow stochastic optimization algorithm”, we show that any “slow” global optimization scheme should be followed by a local one to perform the final approach to a minimum.
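The scheme described above — running Metropolis in stages, each stage with an increasing transformation applied to the original energy — can be sketched as follows. This is a minimal illustration, not the paper's construction: the function names (`metropolis_step`, `iet_optimize`) and the particular choice of transformations are assumptions made for the example; the paper's analysis concerns a specific family of favourable transformations.

```python
import math
import random

def metropolis_step(state, energy, neighbors, beta):
    """One Metropolis step at inverse temperature beta."""
    candidate = random.choice(neighbors(state))
    delta = energy(candidate) - energy(state)
    # Accept downhill moves always, uphill moves with probability exp(-beta*delta).
    if delta <= 0 or random.random() < math.exp(-beta * delta):
        return candidate
    return state

def iet_optimize(state, energy, neighbors, transforms, steps_per_stage, beta=1.0):
    """Run Metropolis in stages, each with a transformed energy F(E(.)).

    `transforms` is a list of increasing functions applied to the original
    energy; later stages use transformations that penalize high-energy
    states more strongly (illustrative choice, not the paper's).
    """
    for F in transforms:
        transformed = lambda x, F=F: F(energy(x))  # bind F at definition time
        for _ in range(steps_per_stage):
            state = metropolis_step(state, transformed, neighbors, beta)
    return state
```

For instance, on a one-dimensional toy landscape, successive linear rescalings of the energy play the same role as lowering the temperature, concentrating the chain near the global minimum.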
Received: 22 November 1994 / In revised form: 14 July 1997
Cite this article
Catoni, O. The energy transformation method for the Metropolis algorithm compared with Simulated Annealing. Probab Theory Relat Fields 110, 69–89 (1998). https://doi.org/10.1007/s004400050145
- AMS Subject Classification (1991): 60J10, 90C42