Abstract
Deriving guarantees on the generalization performance of a learned model is a central topic in statistical learning theory [Valiant, 1984; Vapnik and Chervonenkis, 1971]. Assuming that data points are drawn independently and identically distributed (i.i.d.) according to some unknown but fixed distribution μ, one aims at bounding the deviation of the true risk of the learned model (its performance on unseen data) from its empirical risk (its performance on the training sample). This deviation is typically a function of the number of training examples and of some notion of complexity of the model class, such as the VC dimension [Vapnik and Chervonenkis, 1971], the fat-shattering dimension [Alon et al., 1997], or the Rademacher complexity [Bartlett and Mendelson, 2002; Koltchinskii, 2001].
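To make the shape of such guarantees concrete, a standard uniform deviation bound based on Rademacher complexity, in the spirit of Bartlett and Mendelson [2002], can be sketched as follows. This is a generic illustration, not the specific result derived in this chapter, and the constants vary across statements; the loss ℓ is assumed bounded in [0, 1].

% Illustrative Rademacher-complexity generalization bound (generic form
% under the stated assumptions; not this chapter's own theorem).
% With probability at least 1 - \delta over an i.i.d. sample
% S = (z_1, ..., z_n) drawn from \mu^n, simultaneously for all f in F:
\[
  R(f) \;\le\; \widehat{R}_S(f)
    \;+\; 2\,\mathfrak{R}_n(\ell \circ \mathcal{F})
    \;+\; \sqrt{\frac{\ln(1/\delta)}{2n}},
\]
% where the true risk, the empirical risk, and the Rademacher
% complexity of the loss class are defined by
\[
  R(f) = \mathbb{E}_{z \sim \mu}\!\left[\ell(f, z)\right],
  \qquad
  \widehat{R}_S(f) = \frac{1}{n}\sum_{i=1}^{n} \ell(f, z_i),
  \qquad
  \mathfrak{R}_n(\ell \circ \mathcal{F})
    = \mathbb{E}_{S,\sigma}\!\left[\sup_{f \in \mathcal{F}}
      \frac{1}{n}\sum_{i=1}^{n} \sigma_i\,\ell(f, z_i)\right],
\]
% with \sigma_1, ..., \sigma_n independent uniform {-1, +1} (Rademacher)
% variables. The deviation vanishes as n grows whenever
% \mathfrak{R}_n(\ell \circ \mathcal{F}) = o(1), which is exactly the
% role played by the complexity measures cited in the abstract.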
Cite this chapter
Bellet, A., Habrard, A., Sebban, M. (2015). Generalization Guarantees for Metric Learning. In: Metric Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. Springer, Cham. https://doi.org/10.1007/978-3-031-01572-4_8