Machine Learning, Volume 15, Issue 2, pp 201–221

Improving Generalization with Active Learning

  • David Cohn
  • Les Atlas
  • Richard Ladner

DOI: 10.1023/A:1022673506211

Cite this article as:
Cohn, D., Atlas, L. & Ladner, R. Machine Learning (1994) 15: 201. doi:10.1023/A:1022673506211

Abstract

Active learning differs from “learning from examples” in that the learning algorithm assumes at least some control over what part of the input domain it receives information about. In some situations, active learning is provably more powerful than learning from examples alone, giving better generalization for a fixed number of training examples.

In this article, we consider the problem of learning a binary concept in the absence of noise. We describe a formalism for active concept learning called selective sampling and show how it may be approximately implemented by a neural network. In selective sampling, a learner receives distribution information from the environment and queries an oracle on parts of the domain it considers “useful.” We test our implementation, called an SG-network, on three domains and observe significant improvement in generalization.
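As a concrete illustration of the formalism described above, the sketch below applies selective sampling to the simplest noise-free binary concept, a one-dimensional threshold. The learner draws unlabeled points from the environment's distribution but spends oracle queries only inside its current region of uncertainty, where hypotheses consistent with the labels seen so far still disagree. This is a minimal toy under assumed conventions (the domain, the oracle, and all names are illustrative), not the authors' SG-network:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical setup: learn the threshold concept f(x) = 1 iff x >= t*.
    # 'oracle' plays the role of the membership oracle in the abstract.
    # The pair (g, s) is a crude stand-in for the version space: the most
    # general and most specific thresholds consistent with the labels so
    # far, loosely analogous to the boundaries an SG-network approximates.

    TRUE_T = 0.37

    def oracle(x):
        return int(x >= TRUE_T)

    def selective_sample(stream, budget):
        g, s = 0.0, 1.0          # region of uncertainty is (g, s)
        labels_used = 0
        for x in stream:
            if labels_used >= budget:
                break
            # Query only inside (g, s): points outside it are classified
            # identically by every consistent hypothesis, so a label
            # there cannot improve generalization.
            if g < x < s:
                if oracle(x):    # positive example: t* <= x
                    s = x
                else:            # negative example: t* > x
                    g = x
                labels_used += 1
        return (g + s) / 2.0, labels_used

    # Unlabeled points supplied by the environment's distribution.
    stream = rng.uniform(0.0, 1.0, size=10_000)
    estimate, used = selective_sample(stream, budget=20)
    print(f"estimated threshold {estimate:.4f} from {used} queries")

Each query shrinks the region of uncertainty, so the label budget concentrates where it actually reduces generalization error; a passive learner labeling every drawn point would spend most of its budget on uninformative examples.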

Keywords: queries, active learning, generalization, version space, neural networks

Copyright information

© Kluwer Academic Publishers 1994

Authors and Affiliations

  • David Cohn (1)
  • Les Atlas (2)
  • Richard Ladner (3)

  1. Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge
  2. Department of Electrical Engineering, University of Washington, Seattle
  3. Department of Computer Science and Engineering, University of Washington, Seattle