Hardware Annealing Theory

  • Bang W. Lee
  • Bing J. Sheu
Part of The Springer International Series in Engineering and Computer Science book series (SECS, volume 127)


Engineering optimization is an important subject in signal and image processing. A conventional technique for finding the optimal solution is gradient descent, which derives the direction of the next iteration from the gradient of the objective function. For complicated problems, gradient descent often becomes trapped in a local minimum where the objective function is surrounded by barriers. In addition, the complexity of most combinatorial optimization problems grows dramatically with the problem size, which makes it very difficult to obtain the global minimum within a reasonable amount of computation time. Several methods have been reported to help the network escape from local minima [1,2]. For example, the simulated annealing method is a heuristic approach that can be applied to a wide range of combinatorial optimization problems [3,4]. Solutions obtained by simulated annealing are close to the global minimum, are reached within a polynomial upper bound on the computation time, and are independent of the initial conditions. Simulated annealing has been applied successfully to VLSI layout generation [5] and to noise filtering in image processing [6].
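The contrast drawn above between gradient descent and simulated annealing can be sketched in a few lines of code. The following is a minimal, illustrative implementation of Metropolis-style simulated annealing with a geometric cooling schedule; the objective function, step distribution, and all parameter values are assumptions chosen for the example, not details taken from this chapter.

```python
import math
import random

def objective(x):
    # Illustrative multimodal function: the quadratic term gives one global
    # trend, while the sine term creates many local minima (barriers) that
    # would trap plain gradient descent.
    return x * x + 10.0 * math.sin(3.0 * x)

def simulated_anneal(f, x0, t_start=10.0, t_end=1e-3, alpha=0.97,
                     steps_per_t=100, seed=0):
    """Metropolis-style simulated annealing with geometric cooling.

    At each temperature t, random perturbations are proposed; downhill moves
    are always accepted, and uphill moves are accepted with probability
    exp(-delta / t), which lets the search climb out of local minima while
    t is still high. Returns the best state and best value seen.
    """
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    t = t_start
    while t > t_end:
        for _ in range(steps_per_t):
            # Propose a Gaussian perturbation of the current state.
            x_new = x + rng.gauss(0.0, 1.0)
            f_new = f(x_new)
            delta = f_new - fx
            if delta < 0.0 or rng.random() < math.exp(-delta / t):
                x, fx = x_new, f_new
                if fx < best_f:
                    best_x, best_f = x, fx
        t *= alpha  # geometric cooling schedule
    return best_x, best_f
```

Starting the search from a point inside one of the shallow side wells, the uphill acceptance rule lets the state cross the barriers early in the schedule, so the best value found is far below what a purely downhill search from the same start would reach.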


Keywords: Simulated Annealing · Global Minimum · Hopfield Neural Network · Amplifier Gain · Boltzmann Machine




References

  [1] B. W. Lee and B. J. Sheu, “An investigation on local minima of Hopfield network for optimization circuits,” Proc. of IEEE Inter. Conf. on Neural Networks, vol. I, pp. 45–51, San Diego, CA, July 1988.
  [2] P. J. M. van Laarhoven and E. H. L. Aarts, Simulated Annealing: Theory and Applications, Boston, MA: Reidel, 1987.
  [3] S. Kirkpatrick, C. D. Gelatt, Jr., and M. P. Vecchi, “Optimization by simulated annealing,” Science, vol. 220, no. 4598, pp. 671–680, May 1983.
  [4] E. H. L. Aarts and P. J. M. van Laarhoven, “A new polynomial-time cooling schedule,” Proc. of IEEE Inter. Conf. on Computer-Aided Design, pp. 206–208, Nov. 1985.
  [5] R. A. Rutenbar, “Simulated annealing algorithms: an overview,” IEEE Circuits and Devices Magazine, vol. 5, no. 1, pp. 19–26, Jan. 1989.
  [6] S. Geman and D. Geman, “Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. PAMI-6, no. 6, pp. 721–741, Nov. 1984.
  [7] D. E. Rumelhart, J. L. McClelland, and the PDP Research Group, Parallel Distributed Processing, vol. 1, Cambridge, MA: The MIT Press, pp. 282–317, 1986.
  [8] B. W. Lee and B. J. Sheu, “Hardware simulated annealing in electronic neural networks,” IEEE Trans. on Circuits and Systems, to appear in 1990.
  [9] J. J. Hopfield, “Neurons with graded response have collective computational properties like those of two-state neurons,” Proc. Natl. Acad. Sci. USA, vol. 81, pp. 3088–3092, May 1984.
  [10] G. D. Smith, Numerical Solution of Partial Differential Equations: Finite Difference Methods, Oxford University Press, pp. 60–63, 1985.
  [11] D. D. Pollock, Physical Properties of Materials for Engineers, Boca Raton, FL: CRC Press, pp. 14–18, 1982.

Copyright information

© Springer Science+Business Media New York 1991

Authors and Affiliations

  • Bang W. Lee — University of Southern California, USA
  • Bing J. Sheu — University of Southern California, USA
