Abstract
This chapter returns to the subject of the entropy of a training set. It explains the concept of entropy in detail, using the idea of encoding information as bits. It then discusses the important result that, when using the TDIDT algorithm, information gain must be positive or zero, followed by the use of information gain as a method of feature reduction for classification tasks.
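As a rough illustration of the ideas summarised above, the sketch below (in Python, using made-up attribute and class data rather than any example from the chapter) computes the entropy of a training set from its class-label frequencies and the information gain obtained by splitting on a single attribute; the gain comes out greater than or equal to zero, in line with the result the chapter discusses.

```python
# Illustrative sketch only: entropy and information gain for a small,
# hypothetical training set (not code from the chapter itself).
from collections import Counter
from math import log2

def entropy(labels):
    """Entropy (in bits) of a collection of class labels."""
    total = len(labels)
    counts = Counter(labels)
    return -sum((n / total) * log2(n / total) for n in counts.values())

def information_gain(rows, labels, attribute_index):
    """Entropy of the whole set minus the weighted average entropy
    of the subsets produced by splitting on one attribute."""
    total = len(labels)
    subsets = {}
    for row, label in zip(rows, labels):
        subsets.setdefault(row[attribute_index], []).append(label)
    weighted = sum(len(s) / total * entropy(s) for s in subsets.values())
    return entropy(labels) - weighted

# Hypothetical training set: one attribute value per row, plus a class label.
rows = [("sunny",), ("sunny",), ("rain",), ("rain",), ("overcast",)]
labels = ["no", "no", "yes", "yes", "yes"]

print(entropy(labels))                    # entropy of the full training set
print(information_gain(rows, labels, 0))  # gain is always >= 0
```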
Notes
1. The log2 function is defined in Appendix A for readers who are unfamiliar with it.