
Avoiding Overfitting of Decision Trees

  • Chapter in Principles of Data Mining

Part of the book series: Undergraduate Topics in Computer Science (UTICS)


Abstract

This chapter begins by examining techniques for dealing with clashes (i.e. inconsistent instances) in a training set. This leads to a discussion of methods for avoiding or reducing overfitting of a decision tree to training data. Overfitting arises when a decision tree becomes excessively dependent on irrelevant features of the training data, with the result that its predictive power for unseen instances is reduced.

Two approaches to avoiding overfitting are distinguished: pre-pruning (generating a tree with fewer branches than would otherwise be the case) and post-pruning (generating a tree in full and then removing parts of it). Results are given for pre-pruning using either a size cutoff or a maximum depth cutoff; both cutoffs are illustrated in the sketch below. A method of post-pruning a decision tree based on comparing the static and backed-up estimated error rates at each node is also described.
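As a concrete illustration of the two pre-pruning cutoffs, here is a minimal sketch using scikit-learn. This library is an assumption for illustration only; the chapter develops its own tree-induction method and does not use it. The max_depth parameter imposes a maximum depth cutoff, while min_samples_split imposes a size cutoff by refusing to split nodes holding too few training instances:

    # A minimal pre-pruning sketch, assuming scikit-learn is available.
    # scikit-learn is used purely to illustrate the two cutoffs; it is
    # not the tree-induction method described in the chapter.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Maximum depth cutoff: stop splitting once the tree reaches depth 3.
    depth_pruned = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

    # Size cutoff: refuse to split a node holding fewer than 20 instances.
    size_pruned = DecisionTreeClassifier(min_samples_split=20).fit(X_train, y_train)

    for name, tree in [("depth cutoff", depth_pruned), ("size cutoff", size_pruned)]:
        print(name, "test accuracy:", tree.score(X_test, y_test))

Either cutoff deliberately sacrifices some fit to the training data in the hope of better predictive power on unseen instances.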


Notes

  1. In this and similar figures, the two numbers in parentheses at each node give the number of instances in the training set corresponding to that node and the estimated error rate at that node.

  2. From now on, for simplicity, we will generally refer to the ‘backed-up’ error rate and the ‘static’ error rate at a node, without using the word ‘estimated’ every time. However, it is important to bear in mind that these are only estimates, not the true values, which we have no way of knowing. A sketch of the resulting pruning rule is given below.
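Since the backed-up error rate at a node is the weighted average of its children's (post-pruning) error rates, the pruning rule can be stated compactly. Below is a minimal sketch assuming a hypothetical Node structure, not the chapter's own; the static error estimates are taken as given rather than computed by the chapter's estimation scheme:

    # A minimal post-pruning sketch. `Node` is a hypothetical structure;
    # static_error is the estimated error rate the node would have if it
    # were collapsed to a leaf.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        n_instances: int       # training instances reaching this node
        static_error: float    # estimated error rate if made a leaf
        children: list = field(default_factory=list)

    def prune(node: Node) -> float:
        """Post-prune the subtree in place; return its error rate afterwards."""
        if not node.children:
            return node.static_error
        # Backed-up error rate: the children's post-pruning error rates,
        # weighted by the share of instances each child receives.
        backed_up = sum(prune(c) * c.n_instances for c in node.children) / node.n_instances
        if node.static_error <= backed_up:
            node.children = []  # replace the subtree by a leaf
            return node.static_error
        return backed_up

    # Worked example: backed-up = 0.6 * 0.25 + 0.4 * 0.10 = 0.19, which is
    # below the static estimate of 0.2, so the subtree is retained.
    root = Node(n_instances=100, static_error=0.2,
                children=[Node(60, 0.25), Node(40, 0.10)])
    prune(root)
    print("pruned to a leaf?", not root.children)  # False

The rule prunes whenever collapsing the node to a leaf is estimated to do no worse than keeping the subtree, which is why reliable error estimates matter: both quantities are only estimates, as the note above stresses.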



Copyright information

© 2013 Springer-Verlag London

About this chapter

Cite this chapter

Bramer, M. (2013). Avoiding Overfitting of Decision Trees. In: Principles of Data Mining. Undergraduate Topics in Computer Science. Springer, London. https://doi.org/10.1007/978-1-4471-4884-5_9


  • DOI: https://doi.org/10.1007/978-1-4471-4884-5_9

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-4471-4883-8

  • Online ISBN: 978-1-4471-4884-5

