Abstract
Any set of labeled examples consistent with some hidden orthogonal rectangle can be “compressed” to at most four examples: a topmost, a bottommost, a leftmost, and a rightmost positive example. These four examples represent an orthogonal rectangle (the smallest such rectangle that contains them) that is consistent with all examples: by construction it contains every positive example, and it contains no negative example, since it lies within the hidden rectangle.
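The scheme the abstract describes is easy to make concrete. Below is a minimal sketch in Python; the names compress and reconstruct and the (x, y, label) tuple encoding are illustrative assumptions, not notation from the paper.

```python
def compress(examples):
    """Keep at most four positive examples: a leftmost, a rightmost,
    a bottommost, and a topmost one; everything else is discarded."""
    positives = [(x, y) for x, y, label in examples if label]
    if not positives:
        return []  # all-negative data: nothing needs to be kept
    return [
        min(positives, key=lambda p: p[0]),  # leftmost
        max(positives, key=lambda p: p[0]),  # rightmost
        min(positives, key=lambda p: p[1]),  # bottommost
        max(positives, key=lambda p: p[1]),  # topmost
    ]


def reconstruct(kept):
    """Return the smallest orthogonal (axis-aligned) rectangle containing
    the kept points, as a membership predicate."""
    if not kept:
        return lambda x, y: False  # predict negative everywhere
    left, right = min(x for x, _ in kept), max(x for x, _ in kept)
    bottom, top = min(y for _, y in kept), max(y for _, y in kept)
    return lambda x, y: left <= x <= right and bottom <= y <= top


# The reconstructed rectangle classifies every original example correctly.
examples = [(1, 2, True), (3, 5, True), (4, 1, True),
            (0, 0, False), (6, 6, False)]
predict = reconstruct(compress(examples))
assert all(predict(x, y) == label for x, y, label in examples)
```

The size of the compressed set, four, matches the VC dimension of orthogonal rectangles in the plane, which is the pattern the paper's title refers to.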
Copyright information
© 2003 Springer-Verlag Berlin Heidelberg
Cite this paper
Warmuth, M.K. (2003). Compressing to VC Dimension Many Points. In: Schölkopf, B., Warmuth, M.K. (eds.) Learning Theory and Kernel Machines. Lecture Notes in Computer Science, vol. 2777. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-45167-9_60
DOI: https://doi.org/10.1007/978-3-540-45167-9_60
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-40720-1
Online ISBN: 978-3-540-45167-9