
Approximation Schemes for Functional Optimization Problems

Published in: Journal of Optimization Theory and Applications

Abstract

Approximation schemes for functional optimization problems whose admissible solutions depend on a large number d of variables are investigated. Suboptimal solutions are considered, expressed as linear combinations of n computational units drawn from basis sets of simple units with adjustable parameters. Different choices of basis sets are compared that allow one to obtain, for a fixed desired accuracy, suboptimal solutions using a number n of basis functions that does not grow “fast” with the number d of variables in the admissible decision functions. In such cases, one mitigates the “curse of dimensionality,” which often makes traditional linear approximation techniques unfeasible for functional optimization problems whose admissible solutions depend on a large number d of variables.
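The approximation scheme described above — a suboptimal solution built as a linear combination of n simple computational units with adjustable parameters — can be sketched in a few lines. The sketch below is illustrative only and is not code from the paper: it uses sigmoidal units whose inner parameters are drawn at random (a simplification of the adjustable-parameter setting, in which the inner parameters would also be optimized), fits only the outer linear coefficients by least squares, and the target function, dimension d, and number of units n are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n = 10, 30          # number of variables, number of basis functions
m = 2000               # number of sample points

# Target: an arbitrary smooth function of d variables (illustrative choice).
X = rng.uniform(-1.0, 1.0, size=(m, d))
y = np.sin(X.sum(axis=1))

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Inner parameters (weights and biases) of the n sigmoidal units, drawn at
# random here; in the variable-basis setting they would be adjustable too.
W = rng.normal(size=(d, n))
b = rng.normal(size=n)

# Design matrix of unit outputs, plus a constant column; the suboptimal
# solution is the linear combination with least-squares outer coefficients.
Phi = np.hstack([sigmoid(X @ W + b), np.ones((m, 1))])
c, *_ = np.linalg.lstsq(Phi, y, rcond=None)

rmse = np.sqrt(np.mean((Phi @ c - y) ** 2))
print(f"RMSE with n={n} sigmoidal units: {rmse:.3f}")
```

Note the division of labor: the n units form a variable basis, and only the outer coefficients c are fitted here; optimizing the inner parameters W and b as well is what distinguishes the nonlinear, variable-basis schemes compared in the paper from fixed-basis linear approximation.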




Author information


Correspondence to M. Sanguineti.

Additional information

Communicated by T. Rapcsák.

Marcello Sanguineti was partially supported by a PRIN grant from the Italian Ministry for University and Research (project “Models and Algorithms for Robust Network Optimization”).


Cite this article

Giulini, S., Sanguineti, M. Approximation Schemes for Functional Optimization Problems. J Optim Theory Appl 140, 33–54 (2009). https://doi.org/10.1007/s10957-008-9471-6

