Abstract
This chapter returns to the subject of the entropy of a training set. It explains the concept of entropy in detail, using the idea of coding information in bits. It then discusses the important result that, when the TDIDT algorithm is used, information gain must be positive or zero, and concludes with the use of information gain as a method of feature reduction for classification tasks.
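The quantities the abstract refers to, entropy and information gain, can be sketched in a few lines of code. The following is a minimal illustration (not the book's own code; the function names and the small dataset are hypothetical), showing why the gain from a split is never negative: the weighted average entropy of the subsets cannot exceed the entropy of the whole set.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy, in bits, of a list of class labels."""
    total = len(labels)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(labels).values())

def information_gain(labels, feature_values):
    """Entropy reduction from splitting the set on one categorical feature."""
    total = len(labels)
    subsets = {}
    for value, label in zip(feature_values, labels):
        subsets.setdefault(value, []).append(label)
    remainder = sum(len(s) / total * entropy(s) for s in subsets.values())
    return entropy(labels) - remainder

# Hypothetical training set: the feature separates the classes perfectly,
# so the split removes all the entropy.
classes = ['yes', 'yes', 'no', 'no']
feature = ['a', 'a', 'b', 'b']
print(information_gain(classes, feature))  # 1.0 bit
```

A feature that carries no information about the class (for example, the same value on every instance) gives a gain of exactly zero, which is the boundary case of the positive-or-zero result the chapter discusses.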
Notes
1. The \(\log_{2}\) function is defined in Appendix A for readers who are unfamiliar with it.
Copyright information
© 2016 Springer-Verlag London Ltd.
About this chapter
Cite this chapter
Bramer, M. (2016). More About Entropy. In: Principles of Data Mining. Undergraduate Topics in Computer Science. Springer, London. https://doi.org/10.1007/978-1-4471-7307-6_10
Publisher Name: Springer, London
Print ISBN: 978-1-4471-7306-9
Online ISBN: 978-1-4471-7307-6
eBook Packages: Computer Science (R0)