Simulated annealing with noisy or imprecise energy measurements

  • S. B. Gelfand
  • S. K. Mitter
Contributed Papers

Abstract

The annealing algorithm (Ref. 1) is modified to allow for noisy or imprecise measurements of the energy cost function. This is important when the energy cannot be measured exactly or when doing so is computationally expensive. Under suitable conditions on the noise/imprecision, it is shown that the modified algorithm exhibits the same convergence in probability to the globally minimum energy states as the annealing algorithm (Ref. 2). Since the annealing algorithm typically enters and exits the minimum energy states infinitely often with probability one, the minimum energy state visited by the algorithm is usually tracked. The effect of using noisy or imprecise energy measurements on tracking the minimum energy state visited by the modified algorithm is also examined.
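The idea described in the abstract can be sketched in code. The following is a minimal illustrative sketch, not the authors' construction: the Gaussian noise model, the averaging schedule `m = 1 + k // 100`, and the cooling constant `c` are all assumptions chosen for illustration. Averaging a growing number of noisy measurements is one simple way to make the effective measurement noise shrink as the temperature decreases, which is the flavor of condition such convergence results require.

```python
import math
import random

def noisy_annealing(energy, noise_std, neighbors, x0, n_iters=2000, c=2.0, seed=1):
    """Simulated annealing driven by noisy energy measurements (sketch).

    `energy` is the true cost; each measurement adds Gaussian noise with
    standard deviation `noise_std` (an assumed noise model). Repeated
    measurements are averaged, with the averaging count growing over time
    so the effective noise decays as the temperature falls.
    """
    rng = random.Random(seed)

    def measure(x, m):
        # average m independent noisy measurements of the energy at x
        return sum(energy(x) + rng.gauss(0.0, noise_std) for _ in range(m)) / m

    x = x0
    best_x, best_est = x0, measure(x0, 10)
    for k in range(1, n_iters + 1):
        T = c / math.log(k + 1)             # logarithmic cooling schedule (Ref. 2)
        m = 1 + k // 100                    # more averaging as T decreases (assumed schedule)
        y = rng.choice(neighbors(x))        # propose a neighboring state
        dE = measure(y, m) - measure(x, m)  # noisy estimate of the energy difference
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            x = y                           # Metropolis acceptance on the noisy estimate
        est = measure(x, m)
        if est < best_est:                  # track the minimum-energy state visited
            best_x, best_est = x, est
    return x, best_x
```

Returning both the final state and the tracked best state mirrors the abstract's point: the chain itself keeps entering and exiting the minimum energy states, so in practice one records the lowest-energy state seen, and with noisy measurements even that bookkeeping is only as reliable as the energy estimates.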

Key Words

Simulated annealing, combinatorial optimization, noisy measurements, Markov chains


References

  1. Kirkpatrick, S., Gelatt, C. D., and Vecchi, M., Optimization by Simulated Annealing, Science, Vol. 220, pp. 671–680, 1983.
  2. Hajek, B., Cooling Schedules for Optimal Annealing, Mathematics of Operations Research, Vol. 13, pp. 311–329, 1988.
  3. Cerny, V., A Thermodynamical Approach to the Travelling Salesman Problem: An Efficient Simulation Algorithm, Journal of Optimization Theory and Applications, Vol. 45, pp. 41–51, 1985.
  4. Geman, S., and Geman, D., Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-6, pp. 721–741, 1984.
  5. Golden, B., and Skiscim, C., Using Simulated Annealing to Solve Routing and Location Problems, Naval Research Logistics Quarterly, Vol. 33, pp. 261–279, 1986.
  6. Johnson, D. S., Aragon, C. R., McGeoch, L. A., and Schevon, C., Optimization by Simulated Annealing: An Experimental Evaluation, Preprint, 1985.
  7. El Gamal, A., Hemachandra, L., Shperling, I., and Wei, W., Using Simulated Annealing to Design Good Codes, IEEE Transactions on Information Theory, Vol. IT-33, pp. 116–123, 1987.
  8. Gidas, B., Nonstationary Markov Chains and Convergence of the Annealing Algorithm, Journal of Statistical Physics, Vol. 39, pp. 73–131, 1985.
  9. Mitra, D., Romeo, F., and Sangiovanni-Vincentelli, A., Convergence and Finite-Time Behavior of Simulated Annealing, Advances in Applied Probability, Vol. 18, pp. 747–771, 1986.
  10. Tsitsiklis, J., Markov Chains with Rare Transitions and Simulated Annealing, Mathematics of Operations Research, Vol. 14, pp. 70–90, 1989.
  11. Tsitsiklis, J., A Survey of Large Time Asymptotics of Simulated Annealing Algorithms, Massachusetts Institute of Technology, Laboratory for Information and Decision Systems, Report No. LIDS-P-1623, 1986.
  12. Grover, L., Simulated Annealing Using Approximate Calculations, Preprint, 1986.
  13. Billingsley, P., Probability and Measure, Wiley, New York, New York, 1978.
  14. Goles, E., and Vichniac, G., Lyapunov Functions for Parallel Neural Networks, Proceedings of the AIP Conference on Neural Networks for Computing, Snowbird, Utah, pp. 165–181, 1986.

Copyright information

© Plenum Publishing Corporation 1989

Authors and Affiliations

  • S. B. Gelfand
    1. Computer Vision and Image Processing Laboratory, School of Electrical Engineering, Purdue University, West Lafayette
  • S. K. Mitter
    2. Center for Intelligent Control Systems and Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge
