Abstract
Simplex gradients are widely used in derivative-free optimization. This article clarifies several properties of simplex gradients and presents calculus rules analogous to those of the ordinary gradient. For example, the simplex gradient does not depend on the order of the sample points in the determined and underdetermined cases, but this property fails in the overdetermined case. Moreover, although the simplex gradient is the gradient of the corresponding linear model in the determined case, this is not necessarily true in the underdetermined and overdetermined cases; however, the simplex gradient is the gradient of an alternative linear model that is required to interpolate the reference data point. Also, the negative of the simplex gradient is a descent direction for any interpolating linear function in the determined and underdetermined cases, but again this is not necessarily true for the linear regression model in the overdetermined case. In addition, this article reviews a previously established error bound for simplex gradients. Finally, it treats the simplex gradient as a linear operator and provides formulas for the simplex gradients of products and quotients of two multivariable functions, along with a power rule for simplex gradients.
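To make the object of study concrete, the following is a minimal sketch of how a simplex gradient is commonly computed: given a reference point x0 and sample points x1, …, xk, it is the minimum-norm least-squares solution g of S g = δf, where the rows of S are the displacements xi − x0 and δf collects the function-value differences f(xi) − f(x0). The function name `simplex_gradient` and the example data are illustrative, not taken from the article.

```python
import numpy as np

def simplex_gradient(points, fvals):
    """Minimum-norm least-squares simplex gradient (illustrative sketch).

    points: (k+1, n) array of sample points; row 0 is the reference point x0.
    fvals:  (k+1,) array of the function values at those points.
    """
    x0, f0 = points[0], fvals[0]
    S = points[1:] - x0          # k x n matrix of displacements x_i - x0
    df = fvals[1:] - f0          # k-vector of differences f(x_i) - f(x0)
    # Minimum-norm least-squares solution covers the determined (k = n),
    # underdetermined (k < n), and overdetermined (k > n) cases uniformly.
    g, *_ = np.linalg.lstsq(S, df, rcond=None)
    return g

# Determined case in R^2: for the linear f(x, y) = 3x + 2y + 1,
# the simplex gradient recovers the true gradient (3, 2).
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
f = 3 * pts[:, 0] + 2 * pts[:, 1] + 1
print(simplex_gradient(pts, f))  # → [3. 2.]
```

In the determined case with affinely independent points, S is square and invertible, so the least-squares solution is the exact interpolation solution and the computed vector is the gradient of the interpolating linear model, consistent with the abstract.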
Acknowledgments
The author would like to thank the three anonymous referees. Their comments and suggestions greatly improved this article. In particular, the alternative linear model in Sect. 4 was suggested by one of the referees.
Regis, R.G. The calculus of simplex gradients. Optim Lett 9, 845–865 (2015). https://doi.org/10.1007/s11590-014-0815-x