Abstract
We use Lagrange interpolation polynomials to obtain accurate gradient estimates. This is important, for example, for nonlinear programming solvers. As an error criterion, we take the mean squared error, which can be split into a deterministic error and a stochastic error. We analyze these errors using N-times replicated Lagrange interpolation polynomials. We show that the mean squared error is of order \(N^{-1+\frac{1}{2d}}\) if we replicate the Lagrange estimation procedure N times and use 2d evaluations in each replicate. As a result, the order of the mean squared error converges to \(N^{-1}\) as the number of evaluation points increases to infinity. Moreover, we show that our approach is also useful for deterministic functions in which numerical errors are involved. We also provide an optimal division between the number of grid points and the number of replicates when the total number of evaluations is fixed. Furthermore, it is shown that the estimation of the derivatives becomes more robust when the number of evaluation points is increased. Finally, test results show the practical use of the proposed method.
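To illustrate the idea of the abstract, the sketch below estimates a one-dimensional derivative by fitting a Lagrange interpolation polynomial through 2d noisy function evaluations around the point of interest, differentiating it there, and averaging N independent replicates. It is a minimal, hypothetical sketch: the symmetric node placement, the spacing h, and the function and parameter names are illustrative assumptions, not the exact scheme analyzed in the paper.

```python
import numpy as np

def lagrange_derivative_weights(x0, nodes):
    """Weights w_j such that p'(x0) = sum_j w_j * f(nodes[j]),
    where p is the Lagrange interpolation polynomial through the nodes."""
    n = len(nodes)
    w = np.zeros(n)
    for j in range(n):
        # derivative of the j-th Lagrange basis polynomial at x0
        denom = np.prod([nodes[j] - nodes[k] for k in range(n) if k != j])
        dLj = sum(
            np.prod([x0 - nodes[k] for k in range(n) if k not in (j, m)])
            for m in range(n) if m != j
        )
        w[j] = dLj / denom
    return w

def replicated_gradient_estimate(f, x0, d=2, h=0.1, N=100):
    """Average N replicated Lagrange-based derivative estimates of f at x0,
    each replicate using 2d (noisy) evaluation points around x0."""
    # 2d nodes placed symmetrically around x0 (one possible choice)
    offsets = h * np.concatenate([-np.arange(d, 0, -1), np.arange(1, d + 1)])
    nodes = x0 + offsets
    w = lagrange_derivative_weights(x0, nodes)
    estimates = [w @ np.array([f(t) for t in nodes]) for _ in range(N)]
    return np.mean(estimates)

# Example: noisy evaluations of sin(x); the true derivative at 1.0 is cos(1.0) ≈ 0.5403
noisy_sin = lambda t: np.sin(t) + np.random.normal(scale=1e-3)
print(replicated_gradient_estimate(noisy_sin, 1.0, d=2, h=0.2, N=200))
```

For d = 1 the weights reduce to the familiar central finite-difference coefficients \(\pm 1/(2h)\); averaging over N replicates damps the stochastic error, in line with the \(N^{-1+\frac{1}{2d}}\) rate stated above.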
Additional information
Communicated by L.C.W. Dixon
We thank Jack Kleijnen, Gül Gürkan, and Peter Glynn for useful remarks on an earlier version of this paper. We thank Henk Norde for the proof of Lemma 2.2.
Rights and permissions
Open Access This is an open access article distributed under the terms of the Creative Commons Attribution Noncommercial License (https://creativecommons.org/licenses/by-nc/2.0), which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
Cite this article
Brekelmans, R.C.M., Driessen, L.T., Hamers, H.J.M. et al. Gradient Estimation Using Lagrange Interpolation Polynomials. J Optim Theory Appl 136, 341–357 (2008). https://doi.org/10.1007/s10957-007-9315-9