Numerical Algorithms, Volume 30, Issue 2, pp 113–139

Reduced Functions, Gradients and Hessians from Fixed-Point Iterations for State Equations

  • Andreas Griewank
  • Christèle Faure

Abstract

In design optimization and parameter identification, the objective, or response function(s) are typically linked to the actually independent variables through equality constraints, which we will refer to as state equations. Our key assumption is that it is impossible to form and factor the corresponding constraint Jacobian, but one has instead some fixed-point algorithm for computing a feasible state, given any reasonable value of the independent variables. Assuming that this iteration is eventually contractive, we will show how reduced gradients (Jacobians) and Hessians (in other words, the total derivatives) of the response(s) with respect to the independent variables can be obtained via algorithmic, or automatic, differentiation (AD). In our approach the actual application of the so-called reverse, or adjoint differentiation mode is kept local to each iteration step. Consequently, the memory requirement is typically not unduly enlarged. The resulting approximating Lagrange multipliers are used to compute estimates of the reduced function values that can be shown to converge twice as fast as the underlying state space iteration. By a combination with the forward mode of AD, one can also obtain extra-accurate directional derivatives of the reduced functions as well as feasible state space directions and the corresponding reduced or projected Hessians of the Lagrangian. Our approach is verified by test calculations on an aircraft wing with two responses, namely, the lift and drag coefficient, and two variables, namely, the angle of attack and the Mach number. The state is a 2-dimensional flow field defined as solution of the discretized Euler equation under transonic conditions.
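The piggyback scheme outlined in the abstract can be illustrated on a toy scalar problem. The sketch below is not the paper's method or test case: the linearly convergent state iteration, the response function, and all names are illustrative choices, and the partial derivatives are hand-coded as a stand-in for AD. The state equation is written in fixed-point form y = F(y, x) with fixed point y*(x) = sqrt(x); the adjoint iterate ybar plays the role of the approximating Lagrange multiplier, yielding both the reduced gradient and a corrected response estimate f(y) + ybar*(F(y, x) - y) that converges roughly twice as fast as the state iterates.

```python
import math

# Toy state equation in fixed-point form: y = F(y, x) with
# F(y, x) = y + a*(x - y^2), a linearly convergent (eventually
# contractive) iteration whose fixed point is y*(x) = sqrt(x).
# The response is f(y, x) = y, so the reduced function is sqrt(x).
# Partial derivatives are hand-coded here as a stand-in for AD.
a = 0.2

def F(y, x):   return y + a * (x - y * y)
def F_y(y, x): return 1.0 - 2.0 * a * y   # contraction factor at y
def F_x(y, x): return a
def f(y, x):   return y
def f_y(y, x): return 1.0
def f_x(y, x): return 0.0

def reduced_values(x, iters=60):
    """Piggyback the adjoint fixed-point iteration on the state iteration:
        ybar <- F_y(y, x) * ybar + f_y(y, x),
    keeping the reverse-mode step local to each iteration. Returns the
    final state, the reduced gradient f_x + ybar * F_x, and the corrected
    response estimate f(y) + ybar * (F(y, x) - y)."""
    y, ybar = 1.0, 0.0
    corrected = f(y, x)
    for _ in range(iters):
        corrected = f(y, x) + ybar * (F(y, x) - y)
        y, ybar = F(y, x), F_y(y, x) * ybar + f_y(y, x)
    grad = f_x(y, x) + ybar * F_x(y, x)
    return y, grad, corrected

y, grad, corrected = reduced_values(2.0)
print(y, grad, corrected)  # y -> sqrt(2), grad -> 1/(2*sqrt(2))
```

At the fixed point the adjoint equation gives ybar* = f_y/(1 - F_y) = 1/(2*a*sqrt(x)), so the reduced gradient a*ybar* = 1/(2*sqrt(x)) matches the implicit-function derivative of sqrt(x), and the correction term ybar*(F(y) - y) cancels the leading error in f(y).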

Keywords: fixed-point iteration; derivative convergence; algorithmic or automatic differentiation; implicit functions; reduced gradient; reduced Hessian; Q- and R-linear convergence



Copyright information

© Kluwer Academic Publishers 2002

Authors and Affiliations

  • Andreas Griewank, Institute of Scientific Computing, Technical University Dresden, Dresden, Germany
  • Christèle Faure, PolySpace Technologies, Paris, France
