Abstract
A variational norm that plays a role in functional optimization and learning from data is investigated. For sets of functions obtained by varying some parameters in fixed-structure computational units (e.g., Gaussians with variable centers and widths), upper bounds on the variational norms associated with such units are derived. The results are applied to functional optimization problems arising in nonlinear approximation by variable-basis functions and in learning from data. They are also applied to the construction of minimizing sequences by an extension of the Ritz method.
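For orientation, the variational norm studied here is what the approximation-theory literature calls *variation with respect to a set* G (G-variation): the Minkowski functional of the closed convex symmetric hull of the set G of computational units. The formula below is a standard definition from that literature, not reproduced from this paper's body:

```latex
% G-variation of f: the Minkowski functional of cl conv(G ∪ -G),
% with the closure taken in the norm of the ambient Banach space
\|f\|_{G} \;=\; \inf\Bigl\{\, c > 0 \;:\; \tfrac{f}{c} \in \operatorname{cl\,conv}\bigl(G \cup (-G)\bigr) \,\Bigr\}.
```

For instance, when G is the set of functions computable by a single Gaussian unit with variable center and width, bounding the G-variation of a target function yields rates of approximation by networks with n such units, which is the type of application the abstract describes.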
Communicated by R. Glowinski.
The authors were partially supported by a grant “Progetti di Ricerca di Ateneo 2008” of the University of Genova, project “Solution of Functional Optimization Problems by Nonlinear Approximators and Learning from Data”.
Gnecco, G., Sanguineti, M. Estimates of Variation with Respect to a Set and Applications to Optimization Problems. J Optim Theory Appl 145, 53–75 (2010). https://doi.org/10.1007/s10957-009-9620-6