When Can Two Unsupervised Learners Achieve PAC Separation?
In this paper we study a new restriction of the PAC learning framework, in which each label class is handled by an unsupervised learner that aims to fit an appropriate probability distribution to its own data. A hypothesis is derived by choosing, for any unlabeled instance, the label whose distribution assigns it the higher likelihood.
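To make the decision rule concrete, the following is a minimal sketch in Python, assuming (purely for illustration) that each class's unsupervised learner fits an independent-Bernoulli product distribution to Boolean data; the helper names and the smoothing parameter are hypothetical and are not taken from the paper.

```python
import numpy as np

def fit_product_distribution(samples, smoothing=1.0):
    """Illustrative unsupervised learner: estimate per-attribute probabilities
    p_i = Pr[x_i = 1] from one class's sample, with Laplace smoothing."""
    samples = np.asarray(samples, dtype=float)
    n = samples.shape[0]
    return (samples.sum(axis=0) + smoothing) / (n + 2.0 * smoothing)

def log_likelihood(p, x):
    """Log-probability of a Boolean vector x under the product distribution p."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x * np.log(p) + (1.0 - x) * np.log(1.0 - p)))

def make_hypothesis(positives, negatives):
    """Each label class is handled by its own unsupervised learner; the
    hypothesis labels an instance with the class whose fitted distribution
    assigns it the higher likelihood."""
    p_pos = fit_product_distribution(positives)
    p_neg = fit_product_distribution(negatives)
    return lambda x: 1 if log_likelihood(p_pos, x) >= log_likelihood(p_neg, x) else 0

# Usage: positives/negatives are 0/1 vectors split by class label.
h = make_hypothesis([[1, 1, 0], [1, 1, 1]], [[0, 0, 1], [0, 1, 0]])
print(h([1, 1, 0]))  # -> 1
```

The key point of the setting is that each learner sees only its own class's data; the classes interact only through the final likelihood comparison.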
The motivation for the new learning setting is that the general approach of fitting separate distributions to each label class is often used in practice for classification problems. The set of probability distributions that is obtained is more useful than a collection of decision boundaries. A question that arises, however, is whether it is ever more tractable (in terms of computational complexity or sample size required) to find a simple decision boundary than to divide the problem up into separate unsupervised learning problems and find appropriate distributions.
Within this framework, we give algorithms for learning various simple geometric concept classes. In the Boolean domain we show how to learn parity functions and functions having a constant upper bound on the number of relevant attributes. These results distinguish the new setting from various other well-known restrictions of PAC-learning. We give an algorithm for learning monomials over input vectors generated by an unknown product distribution. The main open problem is whether monomials (or any other concept class) distinguish learnability in this framework from standard PAC-learnability.
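As a heavily simplified illustration of why fitting a distribution to the positive class alone can help with monomials under a product distribution, the sketch below flags attributes whose empirical frequency among positive examples is close to 1. This is only the textbook positive-examples heuristic, not the paper's algorithm (whose hypothesis must be a likelihood comparison between the two fitted distributions), and the threshold parameter is an assumption.

```python
import numpy as np

def candidate_literals(positives, threshold=0.99):
    """From the positive class's sample alone, flag attributes whose empirical
    frequency of being 1 is (near) 1; under a monotone monomial target these
    attributes include the true literals with high probability."""
    samples = np.asarray(positives, dtype=float)
    freqs = samples.mean(axis=0)
    return [i for i, f in enumerate(freqs) if f >= threshold]

def monomial_hypothesis(literals):
    """Classify x as positive iff every flagged attribute is set to 1."""
    return lambda x: 1 if all(x[i] == 1 for i in literals) else 0

# Usage: positives drawn from an unknown product distribution, conditioned
# on satisfying the (unknown) monomial x_0 AND x_2.
lits = candidate_literals([[1, 0, 1], [1, 1, 1], [1, 0, 1]])
h = monomial_hypothesis(lits)
print(lits, h([1, 1, 1]), h([0, 1, 1]))  # -> [0, 2] 1 0
```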
Keywords: Convex Hull · Discriminant Function · Class Label · Product Distribution · Unsupervised Learner