Abstract
Approximation schemes for functional optimization problems whose admissible solutions depend on a large number d of variables are investigated. Suboptimal solutions are considered, expressed as linear combinations of n-tuples from a basis set of simple computational units with adjustable parameters. Different choices of basis sets are compared that allow one to obtain suboptimal solutions, for a fixed desired accuracy, using a number n of basis functions that does not grow "fast" with the number d of variables in the admissible decision functions. In such cases, one mitigates the "curse of dimensionality," which often makes traditional linear approximation techniques unfeasible for functional optimization problems whose admissible solutions depend on a large number d of variables.
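The kind of suboptimal solution described above can be illustrated with a minimal sketch (not the authors' method; all names and parameter choices here are illustrative assumptions): a function of d variables is approximated by a linear combination of n sigmoidal computational units whose inner parameters are also adjusted, here by plain gradient descent on a mean squared error over random samples.

```python
import numpy as np

def fit_variable_basis(target, d, n, samples=500, steps=2000, lr=0.05, seed=0):
    """Fit f(x) ~ sum_i c_i * tanh(a_i . x + b_i) over n adjustable basis units.

    `target` maps an array of shape (samples, d) to values of shape (samples,).
    Returns the fitted parameters and the final mean squared error.
    """
    rng = np.random.default_rng(seed)
    X = rng.uniform(-1.0, 1.0, size=(samples, d))   # sample points in [-1, 1]^d
    y = target(X)
    A = rng.normal(size=(n, d))                     # inner weights a_i (adjustable)
    b = rng.normal(size=n)                          # inner biases b_i (adjustable)
    c = rng.normal(scale=0.1, size=n)               # outer coefficients c_i
    for _ in range(steps):
        H = np.tanh(X @ A.T + b)                    # (samples, n) basis outputs
        err = H @ c - y                             # residuals
        # gradients of the mean squared error (up to a constant factor)
        gZ = np.outer(err, c) * (1.0 - H**2) / samples
        c -= lr * (H.T @ err / samples)
        A -= lr * (gZ.T @ X)
        b -= lr * gZ.sum(axis=0)
    mse = float(np.mean((np.tanh(X @ A.T + b) @ c - y) ** 2))
    return (A, b, c), mse
```

For example, with `target = lambda X: np.sin(X.sum(axis=1))`, `d=5`, and `n=20`, the fitted combination drives the sample MSE well below the variance of the target values; the point of the approximation schemes studied in the paper is that, for suitable basis sets and target classes, the n needed for a given accuracy need not grow "fast" with d.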
Additional information
Communicated by T. Rapcsák.
Marcello Sanguineti was partially supported by a PRIN grant from the Italian Ministry for University and Research (project “Models and Algorithms for Robust Network Optimization”).
About this article
Cite this article
Giulini, S., Sanguineti, M. Approximation Schemes for Functional Optimization Problems. J Optim Theory Appl 140, 33–54 (2009). https://doi.org/10.1007/s10957-008-9471-6