Optimization Letters, Volume 11, Issue 7, pp 1217–1227

Compositions of convex functions and fully linear models

Original Paper

Abstract

Derivative-free optimization (DFO) is the mathematical study of optimization algorithms that do not use derivatives. One branch of DFO focuses on model-based methods, in which an approximation of the objective function guides the optimization algorithm. Proving convergence of such methods often relies on the assumption that the approximations form fully linear models—an assumption that requires the true objective function to be smooth. However, some recent methods have loosened this assumption and instead work with functions that are compositions of smooth functions with simple convex functions (the max-function or the \(\ell_1\) norm). In this paper, we examine the error bounds resulting from the composition of a convex lower semi-continuous function with a smooth vector-valued function when it is possible to provide fully linear models for each component of the vector-valued function. We derive error bounds for the resulting function values and subgradient vectors.
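The fully linear property referred to above requires, for each trust-region radius \(\Delta\), that the model's function-value error be \(O(\Delta^2)\) and its gradient error be \(O(\Delta)\) uniformly on the ball of radius \(\Delta\). The following Python sketch (not from the paper; the test function and the forward-difference construction are illustrative choices) builds a simplex-gradient linear model of a smooth function and empirically checks that the worst observed errors scale at those rates:

```python
import numpy as np

def f(x):
    # Smooth illustrative objective; its gradient is known in closed form.
    return np.sin(x[0]) + x[1] ** 2

def grad_f(x):
    return np.array([np.cos(x[0]), 2.0 * x[1]])

def linear_model(f, x0, delta):
    """Forward-difference (simplex-gradient) linear model of f around x0,
    built from samples at x0 + delta * e_i."""
    n = len(x0)
    g = np.array([(f(x0 + delta * e) - f(x0)) / delta for e in np.eye(n)])
    f0 = f(x0)
    return (lambda y: f0 + g @ (y - x0)), g

x0 = np.array([0.3, -0.5])
rng = np.random.default_rng(0)
for delta in [1e-1, 1e-2, 1e-3]:
    m, g = linear_model(f, x0, delta)
    # Record worst-case errors over points on the sphere of radius delta.
    val_err = grad_err = 0.0
    for _ in range(100):
        d = rng.standard_normal(2)
        y = x0 + delta * d / np.linalg.norm(d)
        val_err = max(val_err, abs(f(y) - m(y)))
        grad_err = max(grad_err, np.linalg.norm(grad_f(y) - g))
    # Fully linear: val_err / delta**2 and grad_err / delta stay bounded
    # by constants independent of delta.
    print(delta, val_err / delta ** 2, grad_err / delta)
```

The printed ratios remain bounded as \(\Delta\) shrinks, which is exactly the uniform behavior the fully linear definition demands; the paper's contribution is to propagate such bounds through a composition with a convex lower semi-continuous outer function.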

Keywords

Derivative-free optimization · Fully linear models · Subdifferential · Numerical analysis

Acknowledgements

This research was partially funded by the Natural Sciences and Engineering Research Council (NSERC) of Canada, Discovery Grant #355571-2013.


Copyright information

© Springer-Verlag Berlin Heidelberg 2017

Authors and Affiliations

  1. Mathematics, University of British Columbia, Kelowna, Canada
