
More About Entropy

Chapter in Principles of Data Mining

Part of the book series: Undergraduate Topics in Computer Science (UTICS)


Abstract

This chapter returns to the subject of the entropy of a training set. It explains the concept of entropy in detail, using the idea of coding information in bits. It then discusses the important result that, when the TDIDT algorithm is used, information gain must be positive or zero, followed by the use of information gain as a method of feature reduction for classification tasks.
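To make the two quantities in the abstract concrete, the following is a minimal Python sketch (not taken from the book; the toy data, function names, and attribute values are invented for illustration). It computes the entropy of a set of class labels in bits, and the information gain obtained by splitting on a single attribute, which, as the chapter discusses, is never negative:

    import math
    from collections import Counter

    def entropy(labels):
        # Entropy in bits: -sum over classes of p_i * log2(p_i),
        # where p_i is the relative frequency of class i.
        total = len(labels)
        return -sum((c / total) * math.log2(c / total)
                    for c in Counter(labels).values())

    def information_gain(labels, attribute_values):
        # Entropy of the whole training set minus the weighted average
        # entropy of the subsets produced by splitting on one attribute.
        total = len(labels)
        subsets = {}
        for value, label in zip(attribute_values, labels):
            subsets.setdefault(value, []).append(label)
        remainder = sum((len(s) / total) * entropy(s)
                        for s in subsets.values())
        return entropy(labels) - remainder  # positive or zero

    # Hypothetical toy data: an 'outlook' attribute and yes/no classes.
    classes = ['yes', 'yes', 'no', 'no', 'yes', 'no']
    outlook = ['sunny', 'rain', 'sunny', 'rain', 'rain', 'sunny']
    print(entropy(classes))                    # 1.0 bit for a 3/3 class split
    print(information_gain(classes, outlook))  # ~0.082, never below zero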


Notes

  1. The log2 function is defined in Appendix A for readers who are unfamiliar with it.



Copyright information

© 2013 Springer-Verlag London

About this chapter

Cite this chapter

Bramer, M. (2013). More About Entropy. In: Principles of Data Mining. Undergraduate Topics in Computer Science. Springer, London. https://doi.org/10.1007/978-1-4471-4884-5_10


  • DOI: https://doi.org/10.1007/978-1-4471-4884-5_10

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-4471-4883-8

  • Online ISBN: 978-1-4471-4884-5

  • eBook Packages: Computer Science (R0)
