
Numerical Methods for Unconstrained Optimization

Part of the book series: Solid Mechanics and Its Applications (SMIA, volume 242)

Abstract

Unconstrained optimization is the search for the maximum or minimum of a function with no restriction on the values of the variables. It also forms the basis for the methods of constrained optimization in the next chapter. Zero-order methods use only function values, with progress made in the previous step pointing the way to the next. The Hooke and Jeeves method is one such method, suitable for small problems and requiring little programming effort. First-order methods employ the gradient of the function, usually obtained by finite differences, to derive a search direction. A line search is then performed along this direction for the current maximum or minimum, either by progressive reduction of the region in which the optimum must lie or by polynomial interpolation. In its simplest form this is the steepest descent method. However, by using gradient data from the previous iteration an improved search direction can be found, with faster convergence: this is the Fletcher–Reeves method. A more general formulation, based on a quadratic approximation to the objective function, is referred to as a second-order or quasi-Newton method. It progressively builds up an approximation to the inverse of the Hessian matrix of second derivatives to deduce a search direction. A spreadsheet program for the Hooke and Jeeves method is also used in the next chapter for the penalty function method of constrained optimization.
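The first-order procedure outlined above, a finite-difference gradient followed by a line search along the negative gradient, can be sketched in a few lines of Python. The book itself works in Excel/VBA spreadsheets; the backtracking (Armijo) line search, the tolerances and the quadratic test function below are illustrative choices, not taken from the chapter:

```python
import numpy as np

def grad_fd(f, x, h=1e-6):
    """Forward-difference approximation to the gradient."""
    g = np.zeros_like(x, dtype=float)
    fx = f(x)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += h
        g[i] = (f(xp) - fx) / h
    return g

def steepest_descent(f, x0, tol=1e-4, max_iter=200):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_fd(f, x)
        if np.linalg.norm(g) < tol:   # stop when the gradient is small
            break
        fx = f(x)
        alpha = 1.0                   # backtracking line search along -g
        while f(x - alpha * g) > fx - 1e-4 * alpha * (g @ g):
            alpha *= 0.5
            if alpha < 1e-12:
                break
        x = x - alpha * g
    return x

f = lambda x: x[0]**2 + 2.0 * x[1]**2   # illustrative quadratic, minimum at the origin
x_min = steepest_descent(f, [1.0, 1.0])
```

The later methods in the chapter differ only in how the search direction is chosen; the line-search step stays essentially the same.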


Notes

  1.

    This spreadsheet, and those in the following chapters, contains macros. Depending on the chosen security settings, a Security Warning: ‘Macros have been disabled’ may appear on the Message Bar. Click Enable Content to continue with the spreadsheet.

References

  1. Hooke R, Jeeves TA (1961) ‘Direct search’ solution of numerical and statistical problems. J Assoc Comput Mach 8:212–229

  2. Walsh GR (1975) Methods of optimization. John Wiley & Sons, London

  3. Powell MJD (1964) An efficient method for finding the minimum of a function of several variables without calculating derivatives. Comput J 7:303–307

  4. Fletcher R, Reeves CM (1964) Function minimisation by conjugate gradients. Comput J 7(2):149–154

  5. Davidon WC (1959) Variable metric method for minimization. Argonne National Laboratory, ANL-5990 Rev., University of Chicago

  6. Fletcher R, Powell MJD (1963) A rapidly convergent descent method for minimization. Comput J 6(2):163–168

  7. Broyden CG (1970) The convergence of a class of double-rank minimization algorithms, parts I and II. J Inst Math Appl 6:76–90, 222–231

  8. Fletcher R (1970) A new approach to variable metric algorithms. Comput J 13:317–322

  9. Goldfarb D (1970) A family of variable-metric methods derived by variational means. Math Comput 24:23–36

  10. Shanno DF (1970) Conditioning of quasi-Newton methods for function minimization. Math Comput 24:647–656

  11. Kiefer J (1953) Sequential minimax search for a maximum. Proc Am Math Soc 4:502–506

  12. Reklaitis GV, Ravindran A, Ragsdell KM (1983) Engineering optimization. John Wiley & Sons, New York

  13. Vanderplaats GN (1984) Numerical optimization techniques for engineering design. McGraw-Hill, New York


Author information

Correspondence to Alan Rothwell.

Exercises

  4.1

    Repeat the first few steps of the problem in Example 4.1 for the same function \( f({\mathbf{x}}) \), from a different starting point.

    Follow the same analytical procedure as in the example. Plot values of the variables at each step in a figure similar to Fig. 4.1.

  4.2

    Perform the first few steps of the Fletcher–Reeves method to minimize the function:

    $$ f({\mathbf{x}}) = x_{1}^{2} + x_{2}^{4} + 1. $$

    Follow the same analytical procedure as in Example 4.2. Choose a suitable initial point to start the procedure.
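    For checking hand calculations, the Fletcher–Reeves iteration on this function can be sketched in Python. The backtracking line search, the restart safeguard, the tolerances and the starting point (1, 1) are my own choices, not specified in the exercise:

```python
import numpy as np

def fletcher_reeves(f, grad, x0, tol=1e-4, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:       # safeguard: restart with steepest descent
            d = -g
        alpha = 1.0          # backtracking (Armijo) line search along d
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d):
            alpha *= 0.5
            if alpha < 1e-14:
                break
        x = x + alpha * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)   # Fletcher–Reeves factor
        d = -g_new + beta * d              # conjugate search direction
        g = g_new
    return x

f = lambda x: x[0]**2 + x[1]**4 + 1.0
grad = lambda x: np.array([2.0 * x[0], 4.0 * x[1]**3])
x_min = fletcher_reeves(f, grad, [1.0, 1.0])
```

The minimum of this function is f = 1 at the origin; the quartic term makes the gradient very flat near the minimum, so convergence in x2 is noticeably slower than in x1.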

  4.3

    Repeat Exercise 4.2 using the DFP update formula in Eq. (4.4).

    Make an Excel spreadsheet to calculate the update formula. Use the MMULT function in Excel to perform the matrix multiplication.
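    As a cross-check on the spreadsheet, the same computation can be sketched in Python. Eq. (4.4) is not reproduced here, so the standard textbook form of the DFP inverse-Hessian update is used and assumed equivalent; the line search and tolerances are my own choices:

```python
import numpy as np

def dfp_update(H, s, y):
    """DFP update of the inverse-Hessian approximation H,
    with s = x_new - x_old and y = grad_new - grad_old."""
    Hy = H @ y
    return H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)

def quasi_newton(f, grad, x0, update, tol=1e-5, max_iter=200):
    x = np.asarray(x0, dtype=float)
    H = np.eye(len(x))        # start from the identity matrix
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g            # quasi-Newton search direction
        alpha = 1.0           # backtracking (Armijo) line search
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d):
            alpha *= 0.5
            if alpha < 1e-14:
                break
        s = alpha * d
        g_new = grad(x + s)
        y = g_new - g
        if s @ y > 1e-12:     # curvature condition keeps H positive definite
            H = update(H, s, y)
        x, g = x + s, g_new
    return x

f = lambda x: x[0]**2 + x[1]**4 + 1.0          # function from Exercise 4.2
grad = lambda x: np.array([2.0 * x[0], 4.0 * x[1]**3])
x_min = quasi_newton(f, grad, [1.0, 1.0], dfp_update)
```

The two outer products in `dfp_update` correspond directly to the two MMULT terms the spreadsheet would evaluate.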

  4.4

    Repeat Exercise 4.3 using the BFGS update formula in Eq. (4.5).

    Extend the spreadsheet in Exercise 4.3 to calculate the BFGS update formula.
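    A Python sketch of the BFGS update can serve as a check on the spreadsheet. Eq. (4.5) is not reproduced here, so the standard textbook form of the BFGS inverse-Hessian update is used and assumed equivalent; the quadratic and the step below are arbitrary illustrative data:

```python
import numpy as np

def bfgs_update(H, s, y):
    """BFGS update of the inverse-Hessian approximation H,
    with s = x_new - x_old and y = grad_new - grad_old."""
    rho = 1.0 / (s @ y)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

# Any quasi-Newton update must satisfy the secant (quasi-Newton)
# condition H_new @ y = s.  Check it on data from a quadratic
# f(x) = 0.5 x^T A x, for which the gradient change is exactly A s.
A = np.array([[2.0, 0.0],
              [0.0, 8.0]])   # illustrative Hessian
s = np.array([1.0, -0.5])    # illustrative step
y = A @ s
H_new = bfgs_update(np.eye(2), s, y)
```

Verifying `H_new @ y` against `s` in the spreadsheet is a convenient way to confirm the MMULT formulae are entered correctly.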

  4.5

    Use the golden section method to find the minimum of the function:

    $$ f(x) = 2x^{3} - 3x + 2. $$

    Use initial lower and upper bounds \( x_{L} = 0,\; x_{U} = 1 \) to start region elimination. Continue until the search interval has been reduced to about 10 per cent of its initial value.
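    The region-elimination steps can be checked against a short Python sketch of the golden section method (the 10 per cent stopping rule follows the exercise; the implementation details are my own):

```python
import math

def golden_section(f, a, b, tol_frac=0.1):
    """Shrink the bracket [a, b] until its width is tol_frac
    of the initial width, assuming f is unimodal on [a, b]."""
    invphi = (math.sqrt(5) - 1) / 2        # 1/phi, about 0.618
    width0 = b - a
    x1 = b - invphi * (b - a)              # interior points
    x2 = a + invphi * (b - a)
    f1, f2 = f(x1), f(x2)
    while (b - a) > tol_frac * width0:
        if f1 < f2:                        # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - invphi * (b - a)
            f1 = f(x1)
        else:                              # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + invphi * (b - a)
            f2 = f(x2)
    return a, b

f = lambda x: 2 * x**3 - 3 * x + 2
a, b = golden_section(f, 0.0, 1.0)
```

Only one new function evaluation is needed per step, because one interior point of the reduced interval always coincides with a point already evaluated.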

  4.6

    Complete the minimization of the function in Exercise 4.5 by parabolic interpolation.

    Use the standard formulae for parabolic interpolation in Sect. 4.2.2. Compare the result with the exact minimum found by differentiation of the original function.
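    The interpolation step can be checked in Python. The formula below is the standard three-point parabolic vertex formula, assumed to match the formulae of Sect. 4.2.2; the three bracketing points are my own choice (for instance, the final golden section interval could supply them):

```python
import math

def parabolic_min(x1, x2, x3, f1, f2, f3):
    """Vertex of the parabola through (x1,f1), (x2,f2), (x3,f3)."""
    num = (x2 - x1)**2 * (f2 - f3) - (x2 - x3)**2 * (f2 - f1)
    den = (x2 - x1) * (f2 - f3) - (x2 - x3) * (f2 - f1)
    return x2 - 0.5 * num / den

f = lambda x: 2 * x**3 - 3 * x + 2
# Three points bracketing the minimum; by differentiation the
# exact minimum is at x = 1/sqrt(2), approximately 0.7071.
x_est = parabolic_min(0.6, 0.7, 0.8, f(0.6), f(0.7), f(0.8))
```

A single interpolation already comes within a few thousandths of the exact minimum; repeating it with the three best points tightens the estimate further.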

  4.7

    Perform the first few steps of the procedure in the spreadsheet ‘Hooke and Jeeves Method’ by hand and compare the results with those in Table 4.3.

    Take the same quadratic function with the same initial values and step size. Observe the working of the series of ‘If’ and ‘ElseIf’ statements enabling a pattern move towards the end of function HJ.
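    The exploratory and pattern moves that the spreadsheet's 'If'/'ElseIf' logic implements can be sketched in Python. The step sizes, shrink factor and the quadratic test function below are illustrative choices; the spreadsheet's exact bookkeeping may differ:

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.5, step_tol=1e-4, shrink=0.5, max_iter=1000):
    """Hooke and Jeeves pattern search: exploratory moves about a
    base point, followed by a pattern move when exploration improves."""
    def explore(x, s):
        x = x.copy()
        for i in range(len(x)):
            for delta in (s, -s):          # try +s, then -s, in each variable
                trial = x.copy()
                trial[i] += delta
                if f(trial) < f(x):
                    x = trial
                    break
        return x

    base = np.asarray(x0, dtype=float)
    s = step
    for _ in range(max_iter):
        new = explore(base, s)
        if f(new) < f(base):
            # pattern move: extrapolate along the successful direction,
            # then explore about the extrapolated point
            pattern = new + (new - base)
            base = new
            trial = explore(pattern, s)
            if f(trial) < f(base):
                base = trial
        else:
            s *= shrink                    # no improvement: reduce step size
            if s < step_tol:
                break
    return base

f = lambda x: (x[0] - 1)**2 + 2 * (x[1] + 0.5)**2   # quadratic test function
x_min = hooke_jeeves(f, [0.0, 0.0])
```

Counting the calls to f in this sketch gives the same kind of function-evaluation tally the spreadsheet reports.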

  4.8

    Use the spreadsheet ‘Hooke and Jeeves Method’ to repeat the optimization of the quadratic function in Table 4.3 with different initial values and step sizes.

    Values of the variables and minimum of the function on completion of each step size can be seen by progressively increasing the number of reductions in step size entered on the spreadsheet. Observe the number of function evaluations necessary for the same accuracy with different initial values and step size.

  4.9

    Use the spreadsheet ‘Hooke and Jeeves Method’ to optimize the angles α and β of the seven-bar truss in Fig. 1.15 of Chap. 1.

    Formulae for analysis of the truss are given in Table 1.5 of Chap. 1. Replace the Visual Basic code in function FN with a few lines of code to calculate the volume of the truss in terms of the design variables \( x_{1} = D \) and \( x_{2} = H \). The variable FN in function FN must return the volume of the truss for any \( x_{1} \) and \( x_{2} \). Choose convenient values for the load on the truss, its span and the allowable stress. Express the result in terms of the angles α and β.


Copyright information

© 2017 Springer International Publishing AG

About this chapter

Cite this chapter

Rothwell, A. (2017). Numerical Methods for Unconstrained Optimization. In: Optimization Methods in Structural Design. Solid Mechanics and Its Applications, vol 242. Springer, Cham. https://doi.org/10.1007/978-3-319-55197-5_4


  • DOI: https://doi.org/10.1007/978-3-319-55197-5_4


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-55196-8

  • Online ISBN: 978-3-319-55197-5

