Abstract
In this paper, the mutual information between a target concept and a hypothesis, rather than the accuracy, is used to measure the goodness of the hypothesis, and the notion of mutual information gaining (MI-gaining) algorithms is introduced. In particular, strong and weak MI-gaining algorithms are defined according to the amount of information acquired, and their relation to strong and weak PAC-learning algorithms is investigated. It is shown that although a strong MI-gaining algorithm is equivalent to a strong PAC-learning algorithm, a weak MI-gaining algorithm does not necessarily imply a weak PAC-learning algorithm, and vice versa. Moreover, a general boosting scheme for weak MI-gaining algorithms is given: any weak MI-gaining algorithm can be used to build a strong one. Since a strong MI-gaining algorithm is also a strong PAC-learning algorithm, this result can be viewed as giving a sufficient condition for a class of algorithms to be boosted into strong learning algorithms.
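To make the quantity concrete, the following is a minimal sketch (not the paper's formal definitions) of the empirical mutual information between a target concept's labels and a hypothesis's predictions over a sample. The function name and setup are illustrative assumptions; the paper works with distributions over the instance space rather than a finite sample.

```python
from collections import Counter
import math

def mutual_information(concept_labels, hypothesis_labels):
    """Empirical mutual information I(c; h) in bits between the target
    concept's labels and a hypothesis's predictions on the same sample.
    Illustrative sketch only; the paper's definitions may differ."""
    n = len(concept_labels)
    joint = Counter(zip(concept_labels, hypothesis_labels))
    pc = Counter(concept_labels)   # marginal of the concept labels
    ph = Counter(hypothesis_labels)  # marginal of the predictions
    mi = 0.0
    for (c, h), count in joint.items():
        p_ch = count / n
        mi += p_ch * math.log2(p_ch / ((pc[c] / n) * (ph[h] / n)))
    return mi
```

Note that a hypothesis wrong on every example has the same mutual information with the concept as a perfectly accurate one (the labels determine each other), which gives some intuition for why an MI-gaining algorithm is not automatically a PAC-learning algorithm.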
© 1994 Springer-Verlag Berlin Heidelberg
Cite this paper
Takimoto, E., Tajika, I., Maruoka, A. (1994). Mutual information gaining algorithm and its relation to PAC-learning algorithm. In: Arikawa, S., Jantke, K.P. (eds) Algorithmic Learning Theory. AII ALT 1994 1994. Lecture Notes in Computer Science, vol 872. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-58520-6_89
Print ISBN: 978-3-540-58520-6
Online ISBN: 978-3-540-49030-2