Encyclopedia of Machine Learning and Data Mining

2017 Edition
Editors: Claude Sammut, Geoffrey I. Webb

Active Learning

  • David Cohn
Reference work entry
DOI: https://doi.org/10.1007/978-1-4899-7687-1_916

Definition

The term Active Learning is generally used to refer to a learning problem or system where the learner has some role in determining the data on which it will be trained. This is in contrast to Passive Learning, where the learner is simply presented with a training set over which it has no control. Active learning is often used in settings where obtaining labeled data is expensive or time-consuming; by sequentially identifying which examples are most likely to be useful, an active learner can sometimes achieve good performance using far less training data than would otherwise be required.
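As a concrete illustration, the following is a minimal sketch of pool-based active learning with uncertainty sampling, one common selection heuristic. Everything in it is an assumption made for the example's sake: the hidden 1-D threshold, the tiny logistic learner, the label budget, and all names are illustrative, not prescribed by this entry.

```python
# Minimal sketch: pool-based active learning with uncertainty sampling.
# The learner repeatedly fits a model to the labeled set, then queries
# the pool point whose predicted probability is closest to 0.5.
import numpy as np

def fit_logistic(X, y, lr=0.5, steps=500):
    """Fit a logistic model by plain gradient descent (stand-in learner)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def predict_proba(X, w):
    return 1.0 / (1.0 + np.exp(-X @ w))

# Unlabeled pool: 1-D inputs plus a bias column. The true labels sit
# behind an "oracle" the learner must pay to query (hypothetical setup).
pool = np.linspace(-3, 3, 200)
X_pool = np.c_[pool, np.ones_like(pool)]
oracle = (pool > 0.37).astype(float)     # labeling rule, unknown to learner

labeled = [0, len(pool) - 1]             # seed with the two extreme points
for _ in range(8):                       # label budget of 8 queries
    w = fit_logistic(X_pool[labeled], oracle[labeled])
    p = predict_proba(X_pool, w)
    uncertainty = -np.abs(p - 0.5)       # largest where p is nearest 0.5
    uncertainty[labeled] = -np.inf       # never re-query a labeled point
    labeled.append(int(np.argmax(uncertainty)))

print("queried inputs:", np.round(pool[labeled[2:]], 2))
```

Because each query lands near the current decision boundary, the queries behave like a binary search for the hidden threshold: the learner pins it down with a handful of labels, where a passive learner sampling uniformly would need many more.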

Structure of Learning System

In many machine learning problems, the training data are treated as a fixed and given part of the problem definition. In practice, however, the training data are often not fixed beforehand. Rather, the learner has an opportunity to play a role in deciding what data will be acquired for training. This process is usually referred to as “active learning,” recognizing...
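One classical instantiation of this idea is selective sampling by a committee (see Seung, Opper, and Sompolinsky in the reading list below): train several hypotheses on the labeled data and query the candidate on which they disagree most. The sketch below renders that idea under assumed details, a committee of five 1-D threshold classifiers fit to bootstrap resamples, with disagreement measured by the vote split; none of these specifics come from the entry.

```python
# Sketch of query-by-committee selective sampling (cf. Seung, Opper &
# Sompolinsky in the reading list). Committee size, bootstrap resampling,
# and the vote-split disagreement measure are assumed details.
import numpy as np

rng = np.random.default_rng(1)

def fit_threshold(x, y):
    """Return the 1-D threshold c minimizing training error of (x > c)."""
    xs = np.sort(x)
    cuts = np.r_[xs[0] - 1.0, (xs[:-1] + xs[1:]) / 2]
    errs = [np.mean((x > c) != y) for c in cuts]
    return cuts[int(np.argmin(errs))]

x_pool = np.linspace(-3, 3, 200)
oracle = x_pool > 0.37                   # hidden labeling rule

labeled = [0, len(x_pool) - 1]           # two seed labels
for _ in range(8):                       # label budget of 8 queries
    xs, ys = x_pool[labeled], oracle[labeled]
    # Committee of 5 hypotheses, each fit to a bootstrap resample.
    committee = []
    for _ in range(5):
        idx = rng.integers(0, len(xs), len(xs))
        committee.append(fit_threshold(xs[idx], ys[idx]))
    votes = np.stack([x_pool > c for c in committee])   # shape (5, 200)
    frac = votes.mean(axis=0)                           # fraction voting "1"
    disagreement = frac * (1.0 - frac)                  # peaks at a 50/50 split
    disagreement[labeled] = -1.0                        # skip labeled points
    labeled.append(int(np.argmax(disagreement)))

print("queried inputs:", np.round(x_pool[labeled[2:]], 2))
```

The committee's disagreement region shrinks as labels arrive, so queries again concentrate where the surviving hypotheses differ: the same "query where you are uncertain" principle as above, realized without needing calibrated probabilities.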


Recommended Reading

  1. Angluin D (1987) Learning regular sets from queries and counterexamples. Inf Comput 75(2):87–106
  2. Angluin D (1988) Queries and concept learning. Mach Learn 2:319–342
  3. Box GEP, Draper N (1987) Empirical model-building and response surfaces. Wiley, New York
  4. Cleveland W, Devlin S, Gross E (1988) Regression by local fitting. J Econom 37:87–114
  5. Cohn D, Atlas L, Ladner R (1990) Training connectionist networks with queries and selective sampling. In: Touretzky D (ed) Advances in neural information processing systems. Morgan Kaufmann, San Mateo
  6. Cohn D, Ghahramani Z, Jordan MI (1996) Active learning with statistical models. J Artif Intell Res 4:129–145. http://citeseer.ist.psu.edu/321503.html
  7. Dasgupta S (1999) Learning mixtures of Gaussians. In: Proceedings of the 40th annual symposium on foundations of computer science, pp 634–644
  8. Fedorov V (1972) Theory of optimal experiments. Academic Press, New York
  9. Kearns M, Li M, Pitt L, Valiant L (1987) On the learnability of Boolean formulae. In: Proceedings of the 19th annual ACM conference on theory of computing. ACM Press, New York, pp 285–295
  10. Lewis DD, Gale WA (1994) A sequential algorithm for training text classifiers. In: Proceedings of the 17th annual international ACM SIGIR conference, Dublin, pp 3–12
  11. McCallum A, Nigam K (1998) Employing EM and pool-based active learning for text classification. In: Machine learning: proceedings of the fifteenth international conference (ICML’98), Madison, pp 359–367
  12. North DW (1968) A tutorial introduction to decision theory. IEEE Trans Syst Sci Cybern 4(3)
  13. Pitt L, Valiant LG (1988) Computational limitations on learning from examples. J ACM 35(4):965–984
  14. Robbins H (1952) Some aspects of the sequential design of experiments. Bull Am Math Soc 58:527–535
  15. Ruff R, Dietterich T (1989) What good are experiments? In: Proceedings of the sixth international workshop on machine learning, Ithaca
  16. Seung HS, Opper M, Sompolinsky H (1992) Query by committee. In: Proceedings of the fifth annual workshop on computational learning theory. Morgan Kaufmann, San Mateo, pp 287–294
  17. Steck H, Jaakkola T (2002) Unsupervised active learning in large domains. In: Proceedings of the conference on uncertainty in AI. http://citeseer.ist.psu.edu/steck02unsupervised.html

Copyright information

© Springer Science+Business Media New York 2017

Authors and Affiliations

  • David Cohn
  1. Mountain View, USA
  2. Edinburgh, UK