
Use of dynamic programming to accelerate convergence of directional optimization algorithms

Abstract

In most directional algorithms for optimization of a mathematical function, the step along each search direction is chosen to optimize the function along that direction. Multistep procedures, which determine several directions and the steps along them, are more efficient than single-step procedures. This paper shows how dynamic programming can be used to determine simultaneously the optimal steps for a given set of directions, leading to accelerated convergence. Computational experience with two-step and three-step procedures is also described.
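The payoff of choosing the steps jointly rather than one at a time can be illustrated with a minimal sketch. The sketch below assumes a quadratic test function, for which the jointly optimal steps along a set of directions have a closed form (a small linear system), so it shows the benefit of simultaneous step determination rather than the paper's dynamic-programming machinery for general functions. All names (`f`, `joint_steps`, the test matrix `H`) are hypothetical illustrations, not from the paper.

```python
import numpy as np

# Hypothetical quadratic test problem f(x) = 0.5 x^T H x - b^T x.
# For this f, the steps a = (a1, ..., am) along directions D = [d1 ... dm]
# that jointly minimize f(x + D a) solve (D^T H D) a = -D^T grad f(x).
H = np.diag([1.0, 10.0])   # ill-conditioned Hessian
b = np.zeros(2)

def f(x):
    return 0.5 * x @ H @ x - b @ x

def grad(x):
    return H @ x - b

def sequential_steps(x, D):
    """Single-step baseline: exact line search along each direction in turn."""
    steps = []
    for d in D.T:
        a = -(grad(x) @ d) / (d @ H @ d)   # 1-D minimizer along d
        steps.append(a)
        x = x + a * d
    return np.array(steps), x

def joint_steps(x, D):
    """Multistep variant: all steps determined simultaneously."""
    A = D.T @ H @ D
    a = np.linalg.solve(A, -D.T @ grad(x))
    return a, x + D @ a

x0 = np.array([1.0, 1.0])
# Two descent directions: the negative gradient and a coordinate direction.
D = np.column_stack([-grad(x0), np.array([1.0, 0.0])])
a_seq, x_seq = sequential_steps(x0, D)
a_jnt, x_jnt = joint_steps(x0, D)
print(f(x_seq), f(x_jnt))  # the joint choice is never worse
```

On this example the sequential line searches leave a residual, while the joint solve reaches the exact minimizer in one pass, mirroring the acceleration the abstract claims for simultaneous step determination.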




Additional information

Communicated by G. L. Nemhauser



Cite this article

Turner, W.C., Ghare, P.M. Use of dynamic programming to accelerate convergence of directional optimization algorithms. J Optim Theory Appl 16, 39–47 (1975). https://doi.org/10.1007/BF00935622


Key Words

  • Unconstrained minimization
  • dynamic programming
  • gradient methods
  • descent methods
  • functional minimization