
Estimates of Variation with Respect to a Set and Applications to Optimization Problems

Journal of Optimization Theory and Applications

Abstract

A variational norm that plays a role in functional optimization and learning from data is investigated. For sets of functions obtained by varying some parameters in fixed-structure computational units (e.g., Gaussians with variable centers and widths), upper bounds on the variational norms associated with such units are derived. The results are applied to functional optimization problems arising in nonlinear approximation by variable-basis functions and in learning from data. They are also applied to the construction of minimizing sequences by an extension of the Ritz method.
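The construction of minimizing sequences from fixed-structure computational units can be illustrated with a small numerical sketch. The following code is not the paper's method: it is an illustrative greedy scheme, under assumed parameter choices, that incrementally builds an n-term linear combination of Gaussian units with variable centers and widths, refitting coefficients by least squares at each stage so the approximation error is non-increasing. All function names and grids of candidate centers/widths here are assumptions made for the example.

```python
import numpy as np

def gaussian(x, c, w):
    """Gaussian computational unit with variable center c and width w."""
    return np.exp(-((x - c) ** 2) / (2.0 * w ** 2))

def greedy_gaussian_fit(x, y, n_units=5, centers=None, widths=None):
    """Greedily build an n-term combination of Gaussian units.

    At each stage, the candidate unit (c, w) best correlated with the
    current residual is added, then all coefficients are refit jointly
    by least squares. Refitting guarantees the residual norm cannot
    increase, so the stages form a minimizing sequence of approximants.
    """
    if centers is None:
        centers = np.linspace(x.min(), x.max(), 25)
    if widths is None:
        widths = np.array([0.1, 0.3, 1.0])
    chosen, errors = [], []
    residual = y.copy()
    coefs = np.array([])
    for _ in range(n_units):
        # Score every candidate unit against the current residual.
        best, best_score = None, -1.0
        for c in centers:
            for w in widths:
                g = gaussian(x, c, w)
                score = abs(g @ residual) / np.linalg.norm(g)
                if score > best_score:
                    best, best_score = (c, w), score
        chosen.append(best)
        # Joint least-squares refit of all selected units.
        G = np.column_stack([gaussian(x, c, w) for c, w in chosen])
        coefs, *_ = np.linalg.lstsq(G, y, rcond=None)
        residual = y - G @ coefs
        errors.append(float(np.linalg.norm(residual)))
    return chosen, coefs, errors

# Example: approximate a smooth target on [-2, 2] with 6 Gaussian units.
x = np.linspace(-2.0, 2.0, 200)
y = np.sin(2.0 * x) * np.exp(-(x ** 2))
units, coefs, errors = greedy_gaussian_fit(x, y, n_units=6)
```

Because each least-squares refit can only shrink (or preserve) the residual, the list `errors` is non-increasing, mirroring the role of minimizing sequences in Ritz-type schemes; the variation-norm bounds studied in the paper control how fast such n-term errors can decay.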



Author information

Correspondence to M. Sanguineti.

Additional information

Communicated by R. Glowinski.

The authors were partially supported by a grant “Progetti di Ricerca di Ateneo 2008” of the University of Genova, project “Solution of Functional Optimization Problems by Nonlinear Approximators and Learning from Data”.

Cite this article

Gnecco, G., Sanguineti, M. Estimates of Variation with Respect to a Set and Applications to Optimization Problems. J Optim Theory Appl 145, 53–75 (2010). https://doi.org/10.1007/s10957-009-9620-6
