Abstract
In this paper we study a new restriction of the PAC learning framework, in which each label class is handled by an unsupervised learner that aims to fit an appropriate probability distribution to its own data. A hypothesis is derived by choosing, for any unlabeled instance, the label whose distribution assigns it the higher likelihood.
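To make the setting concrete, the following is a minimal sketch of the likelihood-comparison rule just described; the one-dimensional Gaussian model and all names here are illustrative assumptions rather than anything taken from the paper. Each class's learner sees only its own examples and fits a distribution to them, and the hypothesis labels a point by whichever fitted density is larger.

```python
# Minimal sketch of the two-unsupervised-learners setting: each label class
# fits its own distribution (a 1-D Gaussian, chosen purely for illustration),
# and an unlabeled point receives the label whose fitted density assigns it
# the higher likelihood.
import math

def fit_gaussian(samples):
    """Unsupervised learner: fit a Gaussian by sample mean and variance."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return mean, max(var, 1e-12)  # guard against zero variance

def density(x, params):
    mean, var = params
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def classify(x, params_pos, params_neg):
    """Hypothesis: assign the label whose distribution likes x more."""
    return "+" if density(x, params_pos) >= density(x, params_neg) else "-"

# Hypothetical data: each learner is given only its own class's examples.
pos = [1.9, 2.1, 2.0, 2.3]
neg = [-0.1, 0.2, 0.0, -0.3]
p_pos, p_neg = fit_gaussian(pos), fit_gaussian(neg)
print(classify(1.5, p_pos, p_neg))  # expected "+": closer to the positive class
```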
The motivation for the new learning setting is that the general approach of fitting separate distributions to each label class is often used in practice for classification problems. The set of probability distributions so obtained is more useful than a collection of decision boundaries. A question that arises, however, is whether it is ever more tractable (in terms of computational complexity or the sample size required) to find a simple decision boundary than to divide the problem into separate unsupervised learning problems and find appropriate distributions.
Within the framework, we give algorithms for learning various simple geometric concept classes. In the Boolean domain we show how to learn parity functions and functions having a constant upper bound on the number of relevant attributes. These results distinguish the new setting from various other well-known restrictions of PAC-learning. We give an algorithm for learning monomials over input vectors generated by an unknown product distribution. The main open problem is whether monomials (or any other concept class) distinguish learnability in this framework from standard PAC-learnability.
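For the product-distribution result above, the natural unsupervised step is to fit a product distribution over {0,1}^n by estimating each coordinate's marginal independently. The sketch below illustrates only that generic step and the framework's likelihood comparison, not the paper's actual monomial-learning algorithm; the Laplace smoothing and all names are assumptions added here.

```python
# Hedged sketch: fit a product distribution over {0,1}^n by per-coordinate
# frequency estimation (add-one smoothing keeps likelihoods nonzero; this is
# an assumption of this sketch, and the paper's algorithm may differ).
from math import prod

def fit_product_distribution(samples):
    """Estimate P(x_i = 1) for each coordinate from Boolean sample vectors."""
    n, m = len(samples[0]), len(samples)
    return [(sum(x[i] for x in samples) + 1) / (m + 2) for i in range(n)]

def likelihood(x, p):
    """Probability of vector x under the fitted product distribution."""
    return prod(pi if xi else 1 - pi for xi, pi in zip(x, p))

# Hypothetical usage: one product distribution per label class, then the
# framework's rule picks the label whose distribution assigns x more mass.
pos = [(1, 1, 0), (1, 1, 1), (1, 1, 0)]
neg = [(0, 1, 0), (0, 0, 1), (1, 0, 0)]
p_pos, p_neg = fit_product_distribution(pos), fit_product_distribution(neg)
x = (1, 1, 1)
print("+" if likelihood(x, p_pos) >= likelihood(x, p_neg) else "-")
```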