
Mutual information gaining algorithm and its relation to PAC-learning algorithm

  • Selected Papers
  • Conference paper
Algorithmic Learning Theory (AII 1994, ALT 1994)

Abstract

In this paper, the mutual information between a target concept and a hypothesis, rather than the accuracy of the hypothesis, is used to measure the goodness of the hypothesis, and the notion of mutual information gaining (MI-gaining) algorithms is introduced. In particular, strong and weak MI-gaining algorithms are defined according to the amount of information acquired, and their relation to strong and weak PAC-learning algorithms is investigated. It is shown that although a strong MI-gaining algorithm is equivalent to a strong PAC-learning algorithm, a weak MI-gaining algorithm does not necessarily imply a weak PAC-learning algorithm, and vice versa. Moreover, a general boosting scheme for weak MI-gaining algorithms is given: any weak MI-gaining algorithm can be used to build a strong one. Since a strong MI-gaining algorithm is also a strong PAC-learning algorithm, this result can be viewed as giving a sufficient condition for a class of algorithms to be boosted into strong learning algorithms.
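As context for the abstract (the full text is not available here), the following is a minimal sketch, not taken from the paper, of how the empirical mutual information between a target concept c and a hypothesis h can be estimated from labeled pairs, and how it can diverge from accuracy. The function name and the toy samples are illustrative assumptions, not the paper's constructions:

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Empirical mutual information I(C; H) in bits, estimated from a
    sample of (c(x), h(x)) label pairs.  Illustrative helper only."""
    n = len(pairs)
    joint = Counter(pairs)                # joint counts of (c, h)
    pc = Counter(c for c, _ in pairs)     # marginal counts of c
    ph = Counter(h for _, h in pairs)     # marginal counts of h
    mi = 0.0
    for (c, h), count in joint.items():
        # p(c,h) * log2( p(c,h) / (p(c) p(h)) ), keeping counts integral
        mi += (count / n) * math.log2(count * n / (pc[c] * ph[h]))
    return mi

# A constant hypothesis on a 90/10-skewed target: 90% accurate,
# yet it conveys no information about the target.
skewed = [(0, 0)] * 90 + [(1, 0)] * 10
print(mutual_information(skewed))  # → 0.0

# A hypothesis that disagrees with the target on every point: accuracy 0,
# yet a full bit of information (its complement predicts perfectly).
anti = [(0, 1)] * 50 + [(1, 0)] * 50
print(mutual_information(anti))    # → 1.0
```

The two toy samples illustrate the separation the abstract asserts between accuracy and information gain: a hypothesis can be highly accurate while carrying no information, and chance-defying in information while useless as a predictor taken at face value.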




Editor information

Setsuo Arikawa, Klaus P. Jantke


Copyright information

© 1994 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Takimoto, E., Tajika, I., Maruoka, A. (1994). Mutual information gaining algorithm and its relation to PAC-learning algorithm. In: Arikawa, S., Jantke, K.P. (eds) Algorithmic Learning Theory (AII/ALT 1994). Lecture Notes in Computer Science, vol 872. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-58520-6_89


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-58520-6

  • Online ISBN: 978-3-540-49030-2

