Abstract
At the beginning of this book we postulated (without any discussion) that learning is a problem of function estimation on the basis of empirical data. To solve this problem we used a classical inductive principle — the ERM principle. Later, however, we introduced a new principle — the SRM principle. Nevertheless, the general understanding of the problem remains based on the statistics of large samples: The goal is to derive the rule that possesses the lowest risk. The goal of obtaining the “lowest risk” reflects the philosophy of large sample size statistics: The rule with low risk is good because if we use this rule on a large test set, then with high probability the mean of the losses will be small.
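The ERM principle mentioned above can be stated concisely: among a set of candidate rules, choose the one whose empirical risk — the mean of the losses over the observed sample — is smallest. The following is a minimal sketch in Python, assuming a squared-loss regression setting; the candidate rules and the sample are hypothetical illustrations, not from the book.

```python
# Sketch of the ERM principle: pick the candidate rule with the
# lowest empirical risk (mean squared loss) on the sample.

def empirical_risk(rule, data):
    """Mean of the squared losses of `rule` over the sample."""
    return sum((rule(x) - y) ** 2 for x, y in data) / len(data)

def erm(rules, data):
    """ERM: return the candidate rule minimizing empirical risk."""
    return min(rules, key=lambda r: empirical_risk(r, data))

# Hypothetical sample, roughly following y = 2x.
sample = [(0, 0.1), (1, 2.0), (2, 3.9), (3, 6.1)]
candidates = [lambda x: x, lambda x: 2 * x, lambda x: 3 * x]
best = erm(candidates, sample)
```

On this sample, `erm` selects the rule `x -> 2x`, since its mean loss over the data is far below that of the other candidates. The large-sample philosophy described above says this choice is good because a rule with low risk will, with high probability, also incur a small mean loss on a large test set.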
Copyright information
© 2000 Springer Science+Business Media New York
Cite this chapter
Vapnik, V.N. (2000). Conclusion: What Is Important in Learning Theory?. In: The Nature of Statistical Learning Theory. Statistics for Engineering and Information Science. Springer, New York, NY. https://doi.org/10.1007/978-1-4757-3264-1_10
DOI: https://doi.org/10.1007/978-1-4757-3264-1_10
Publisher Name: Springer, New York, NY
Print ISBN: 978-1-4419-3160-3
Online ISBN: 978-1-4757-3264-1