# Unconstrained Optimization

• Raphael T. Haftka
• Zafer Gürdal
Chapter
Part of the Solid Mechanics and Its Applications book series (SMIA, volume 11)

## Abstract

In this chapter we study mathematical programming techniques commonly used to extremize nonlinear functions of one or more (n) design variables subject to no constraints. Although most structural optimization problems involve constraints that bound the design space, the study of unconstrained optimization methods is important for several reasons. First, if the design is at a stage where no constraints are active, then determining a search direction and travel distance for minimizing the objective function calls for an unconstrained minimization algorithm; of course, in such a case one must constantly watch for constraint violations during the move in design space. Second, a constrained optimization problem can be cast as an unconstrained minimization problem even if the constraints are active. The penalty function and multiplier methods discussed in Chapter 5 are examples of such indirect methods, which transform the constrained minimization problem into an equivalent unconstrained one. Finally, unconstrained minimization strategies are becoming increasingly popular as techniques for linear and nonlinear structural analysis problems (see Kamat and Hayduk [1]), which involve the solution of a system of linear or nonlinear equations. The solution of such systems may be posed as finding the minimum of the potential energy of the system, or the minimum of the residuals of the equations in a least-squares sense.
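To make the last point concrete, here is a minimal sketch (not the authors' code) of recasting a system of nonlinear equations r(x) = 0 as the unconstrained minimization of the residual sum of squares f(x) = Σ rᵢ(x)², solved by plain steepest descent with a finite-difference gradient. The 2×2 test system, starting point, and step size are all illustrative assumptions.

```python
def residuals(x):
    """Hypothetical 2x2 nonlinear system with a root at x = (1, 1)."""
    return [x[0] ** 2 + x[1] - 2.0,
            x[0] + x[1] ** 2 - 2.0]

def sum_sq(x):
    """Objective: sum of squared residuals; zero exactly at a root."""
    return sum(r * r for r in residuals(x))

def grad(f, x, h=1e-6):
    """Central-difference approximation to the gradient of f at x."""
    g = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g

def steepest_descent(f, x0, step=0.05, iters=2000):
    """Fixed-step steepest descent: move opposite the gradient."""
    x = list(x0)
    for _ in range(iters):
        g = grad(f, x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

x = steepest_descent(sum_sq, [0.5, 0.5])  # converges toward the root (1, 1)
```

In practice a line search replaces the fixed step, and the conjugate gradient or quasi-Newton methods surveyed in this chapter converge far faster than steepest descent on such problems; the sketch only illustrates the reformulation itself.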

## Keywords

Unconstrained Optimization · Order Method · Steepest Descent Method · Unconstrained Minimization · Newton Direction

## References

1. Kamat, M.P. and Hayduk, R.J., "Recent Developments in Quasi-Newton Methods for Structural Analysis and Synthesis," AIAA J., 20(5), pp. 672–679, 1982.
2. Avriel, M., Nonlinear Programming: Analysis and Methods, Prentice-Hall, 1976.
3. Powell, M.J.D., "An Efficient Method for Finding the Minimum of a Function of Several Variables without Calculating Derivatives," Computer J., 7, pp. 155–162, 1964.
4. Kiefer, J., "Sequential Minimax Search for a Maximum," Proceedings of the American Mathematical Society, 4, pp. 502–506, 1953.
5. Walsh, G.R., Methods of Optimization, John Wiley, New York, 1975.
6. Dennis, J.E. and Schnabel, R.B., Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice-Hall, 1983.
7. Gill, P.E., Murray, W. and Wright, M.H., Practical Optimization, Academic Press, New York, p. 92, 1981.
8. Spendley, W., Hext, G.R., and Himsworth, F.R., "Sequential Application of Simplex Designs in Optimisation and Evolutionary Operation," Technometrics, 4(4), pp. 441–461, 1962.
9. Nelder, J.A. and Mead, R., "A Simplex Method for Function Minimization," Computer J., 7, pp. 308–313, 1965.
10. Chen, D.H., Saleem, Z., and Grace, D.W., "A New Simplex Procedure for Function Minimization," Int. J. of Modelling & Simulation, 6(3), pp. 81–85, 1986.
11. Cauchy, A., "Méthode générale pour la résolution des systèmes d'équations simultanées," Comptes Rendus de l'Académie des Sciences, Paris, 5, pp. 536–538, 1847.
12. Hestenes, M.R. and Stiefel, E., "Methods of Conjugate Gradients for Solving Linear Systems," J. Res. Nat. Bureau Stand., 49, pp. 409–436, 1952.
13. Fletcher, R. and Reeves, C.M., "Function Minimization by Conjugate Gradients," Computer J., 7, pp. 149–154, 1964.
14. Gill, P.E. and Murray, W., "Conjugate-Gradient Methods for Large-Scale Nonlinear Optimization," Technical Report 79-15, Systems Optimization Lab., Dept. of Operations Research, Stanford Univ., pp. 10–12, 1979.
15. Powell, M.J.D., "Restart Procedures for the Conjugate Gradient Method," Math. Prog., 12, pp. 241–254, 1975.
16. Polak, E., Computational Methods in Optimization: A Unified Approach, Academic Press, 1971.
17. Axelsson, O. and Munksgaard, N., "A Class of Preconditioned Conjugate Gradient Methods for the Solution of a Mixed Finite Element Discretization of the Biharmonic Operator," Int. J. Num. Meth. Engng., 14, pp. 1001–1019, 1979.
18. Johnson, O.G., Micchelli, C.A. and Paul, G., "Polynomial Preconditioners for Conjugate Gradient Calculations," SIAM J. Num. Anal., 20(2), pp. 362–376, 1983.
19. Broyden, C.G., "The Convergence of a Class of Double-Rank Minimization Algorithms 2. The New Algorithm," J. Inst. Math. Appl., 6, pp. 222–231, 1970.
20. Oren, S.S. and Luenberger, D., "Self-Scaling Variable Metric Algorithms, Part I," Manage. Sci., 20(5), pp. 845–862, 1974.
21. Davidon, W.C., Variable Metric Method for Minimization, Atomic Energy Commission Research and Development Report ANL-5990 (Rev.), November 1959.
22. Fletcher, R. and Powell, M.J.D., "A Rapidly Convergent Descent Method for Minimization," Computer J., 6, pp. 163–168, 1963.
23. Fletcher, R., "A New Approach to Variable Metric Algorithms," Computer J., 13(3), pp. 317–322, 1970.
24. Goldfarb, D., "A Family of Variable-Metric Methods Derived by Variational Means," Math. Comput., 24, pp. 23–26, 1970.
25. Shanno, D.F., "Conditioning of Quasi-Newton Methods for Function Minimization," Math. Comput., 24, pp. 647–656, 1970.
26. Dennis, J.E., Jr. and Moré, J.J., "Quasi-Newton Methods, Motivation and Theory," SIAM Rev., 19(1), pp. 46–89, 1977.
27. Powell, M.J.D., "Some Global Convergence Properties of a Variable Metric Algorithm for Minimization without Exact Line Searches," in Nonlinear Programming (R.W. Cottle and C.E. Lemke, eds.), American Mathematical Society, Providence, RI, pp. 53–72, 1976.
28. Shanno, D.F., "Conjugate Gradient Methods with Inexact Searches," Math. Oper. Res., 3(2), pp. 244–256, 1978.
29. Kamat, M.P., Watson, L.T. and Junkins, J.L., "A Robust Efficient Hybrid Method for Finding Multiple Equilibrium Solutions," Proceedings of the Third Int. Conf. on Numerical Methods in Engineering, Paris, France, pp. 799–807, March 1983.
30. Kwok, H.H., Kamat, M.P. and Watson, L.T., "Location of Stable and Unstable Equilibrium Configurations using a Model Trust Region, Quasi-Newton Method and Tunnelling," Computers and Structures, 21(6), pp. 909–916, 1985.
31. Matthies, H. and Strang, G., "The Solution of Nonlinear Finite Element Equations," Int. J. Num. Meth. Engng., 14, pp. 1613–1626, 1979.
32. Schubert, L.K., "Modification of a Quasi-Newton Method for Nonlinear Equations with a Sparse Jacobian," Math. Comput., 24, pp. 27–30, 1970.
33. Broyden, C.G., "A Class of Methods for Solving Nonlinear Simultaneous Equations," Math. Comput., 19, pp. 577–593, 1965.
34. Toint, Ph.L., "On Sparse and Symmetric Matrix Updating Subject to a Linear Equation," Math. Comput., 31, pp. 954–961, 1977.
35. Shanno, D.F., "On Variable-Metric Methods for Sparse Hessians," Math. Comput., 34, pp. 499–514, 1980.
36. Curtis, A.R., Powell, M.J.D. and Reid, J.K., "On the Estimation of Sparse Jacobian Matrices," J. Inst. Math. Appl., 13, pp. 117–119, 1974.
37. Powell, M.J.D. and Toint, Ph.L., "On the Estimation of Sparse Hessian Matrices," SIAM J. Num. Anal., 16(6), pp. 1060–1074, 1979.
38. Kamat, M.P., Watson, L.T. and VandenBrink, D.J., "An Assessment of Quasi-Newton Sparse Update Techniques for Nonlinear Structural Analysis," Comput. Meth. Appl. Mech. Engng., 26, pp. 363–375, 1981.
39. Kamat, M.P. and VandenBrink, D.J., "A New Strategy for Stress Analysis Using the Finite Element Method," Computers and Structures, 16(5), pp. 651–656, 1983.
40. Gill, P.E. and Murray, W., "Newton-Type Methods for Linearly Constrained Optimization," in Numerical Methods for Constrained Optimization (P.E. Gill and W. Murray, eds.), Academic Press, New York, pp. 29–66, 1974.
41. Griewank, A.O., Analysis and Modifications of Newton's Method at Singularities, Ph.D. Thesis, Australian National University, 1980.
42. Decker, D.W. and Kelley, C.T., "Newton's Method at Singular Points, I and II," SIAM J. Num. Anal., 17, pp. 66–70 and 465–471, 1980.
43. Hansen, E., "Global Optimization Using Interval Analysis: The Multi-Dimensional Case," Numer. Math., 34, pp. 247–270, 1980.
44. Kao, J.-J., Brill, E.D., Jr., and Pfeffer, J.T., "Generation of Alternative Optima for Nonlinear Programming Problems," Eng. Opt., 15, pp. 233–251, 1990.
45. Ge, R., "Finding More and More Solutions of a System of Nonlinear Equations," Appl. Math. Computation, 36, pp. 15–30, 1990.
46. van Laarhoven, P.J.M. and Aarts, E., Simulated Annealing: Theory and Applications, D. Reidel Publishing, Dordrecht, The Netherlands, 1987.
47. Goldberg, D.E., Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, Reading, Massachusetts, 1989.
48. Metropolis, N., Rosenbluth, A.W., Rosenbluth, M.N., Teller, A.H., and Teller, E., "Equation of State Calculations by Fast Computing Machines," J. Chem. Physics, 21(6), pp. 1087–1092, 1953.
49. Kirkpatrick, S., Gelatt, C.D., Jr., and Vecchi, M.P., "Optimization by Simulated Annealing," Science, 220(4598), pp. 671–680, 1983.
50. Cerny, V., "Thermodynamical Approach to the Traveling Salesman Problem: An Efficient Simulation Algorithm," J. Opt. Theory Appl., 45, pp. 41–52, 1985.
51. Rutenbar, R.A., "Simulated Annealing Algorithms: An Overview," IEEE Circuits and Devices, pp. 19–26, January 1989.
52. Johnson, D.S., Aragon, C.R., McGeoch, L.A., and Schevon, C., "Optimization by Simulated Annealing: An Experimental Evaluation. Part I, Graph Partitioning," Operations Research, 37, pp. 865–893, 1990.
53. Aarts, E. and Korst, J., Simulated Annealing and Boltzmann Machines: A Stochastic Approach to Combinatorial Optimization and Neural Computing, John Wiley & Sons, 1989.
54. Nahar, S., Sahni, S., and Shragowitz, E.V., in Proceedings of the 22nd Design Automation Conference, Las Vegas, pp. 748–752, June 1985.
55. Elperin, T., "Monte Carlo Structural Optimization in Discrete Variables with Annealing Algorithm," Int. J. Num. Meth. Eng., 26, pp. 815–821, 1988.
56. Kincaid, R.K. and Padula, S.L., "Minimizing Distortion and Internal Forces in Truss Structures by Simulated Annealing," Proceedings of the AIAA/ASME/ASCE/AHS/ASC 31st Structures, Structural Dynamics, and Materials Conference, Long Beach, CA, Part 1, pp. 327–333, 1990.
57. Balling, R.J. and May, S.A., "Large-Scale Discrete Structural Optimization: Simulated Annealing, Branch-and-Bound, and Other Techniques," presented at the AIAA/ASME/ASCE/AHS/ASC 32nd Structures, Structural Dynamics, and Materials Conference, Long Beach, CA, 1990.
58. Chen, G.-S., Bruno, R.J., and Salama, M., "Optimal Placement of Active/Passive Members in Structures Using Simulated Annealing," AIAA J., 29(8), pp. 1327–1334, August 1991.
59. Holland, J.H., Adaptation in Natural and Artificial Systems, The University of Michigan Press, Ann Arbor, MI, 1975.
60. De Jong, K.A., Analysis of the Behavior of a Class of Genetic Adaptive Systems, Doctoral Dissertation, The University of Michigan (University Microfilms No. 76-9381), Dissertation Abstracts International, 36(10), 5140B, 1975.
61. Booker, L., "Improving Search in Genetic Algorithms," in Genetic Algorithms and Simulated Annealing (L. Davis, ed.), Morgan Kaufmann, Los Altos, CA, pp. 61–73, 1987.
62. Goldberg, D.E. and Samtani, M.P., "Engineering Optimization via Genetic Algorithm," Proceedings of the Ninth Conference on Electronic Computation, ASCE, pp. 471–482, February 1986.
63. Hajela, P., "Genetic Search: An Approach to the Nonconvex Optimization Problem," AIAA J., 28(7), pp. 1205–1210, July 1990.
64. Rao, S.S., Pan, T.-S., and Venkayya, V.B., "Optimal Placement of Actuators in Actively Controlled Structures Using Genetic Algorithms," AIAA J., 29(6), pp. 942–943, June 1991.
65. Szu, H. and Hartley, R.L., "Nonconvex Optimization by Fast Simulated Annealing," Proceedings of the IEEE, 75(11), pp. 1538–1540, 1987.