
Statistical Asymptotic Theory of Active Learning


Abstract

We study a parametric estimation problem in which the aim is to estimate, or to identify, the conditional probability of the output given the input, which we call the system. We suppose that we can select appropriate inputs to the system when gathering the training data. This kind of estimation is called active learning in the context of artificial neural networks. In this paper we propose new active learning algorithms and evaluate their risk using statistical asymptotic theory. The algorithms can be regarded as a version of experimental design with two-stage sampling. We verify the efficiency of active learning through simple computer simulations.
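The abstract does not spell out the algorithms, but the two-stage experimental-design idea can be illustrated with a minimal sketch: assuming a simple one-dimensional logistic model for the conditional probability, a first-stage (pilot) sample gives a preliminary estimate, and second-stage inputs are then chosen greedily to maximize the determinant of the Fisher information (a D-optimality criterion) evaluated at that estimate. The model, the D-optimal criterion, and all names below are illustrative assumptions, not the paper's actual algorithm.

```python
# A minimal sketch of two-stage active learning for a parametric conditional
# probability model (here a hypothetical 1-D logistic model) with a D-optimal
# second-stage design; the paper's actual algorithms and criterion may differ.
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([-1.0, 2.0])          # hypothetical "system" parameters

def features(x):
    return np.column_stack([np.ones_like(x), x])

def prob(x, theta):
    return 1.0 / (1.0 + np.exp(-features(x) @ theta))

def sample_y(x, theta):
    # Draw Bernoulli responses from the (assumed) true system.
    return (rng.random(x.shape) < prob(x, theta)).astype(float)

def mle(x, y, n_iter=50):
    # Newton iterations for the logistic-model maximum likelihood estimate.
    theta = np.zeros(2)
    X = features(x)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ theta))
        grad = X.T @ (y - p)
        hess = -(X * (p * (1 - p))[:, None]).T @ X - 1e-6 * np.eye(2)
        theta -= np.linalg.solve(hess, grad)
    return theta

def fisher(x, theta):
    # Fisher information of the design, summed over the inputs in x.
    X = features(x)
    w = prob(x, theta) * (1 - prob(x, theta))
    return (X * w[:, None]).T @ X

# Stage 1: pilot sample with inputs drawn uniformly from a candidate set.
candidates = np.linspace(-3, 3, 61)
x1 = rng.choice(candidates, size=50)
y1 = sample_y(x1, theta_true)
theta_pilot = mle(x1, y1)

# Stage 2: greedily pick inputs maximizing det(Fisher information) at the
# pilot estimate, then refit on the pooled first- and second-stage data.
x2 = []
for _ in range(50):
    base = fisher(np.concatenate([x1, np.array(x2)]), theta_pilot)
    gains = [np.linalg.det(base + fisher(np.array([c]), theta_pilot))
             for c in candidates]
    x2.append(candidates[int(np.argmax(gains))])
x2 = np.array(x2)
y2 = sample_y(x2, theta_true)
theta_final = mle(np.concatenate([x1, x2]), np.concatenate([y1, y2]))
print("pilot estimate:", theta_pilot, " final estimate:", theta_final)
```

The two-stage structure matters because the optimal design depends on the unknown parameter: the pilot estimate supplies a plug-in value at which the information matrix can be evaluated before the second-stage inputs are selected.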




Cite this article

Kanamori, T. Statistical Asymptotic Theory of Active Learning. Annals of the Institute of Statistical Mathematics 54, 459–475 (2002). https://doi.org/10.1023/A:1022446624428
