
Generating Rule Sets from Model Trees

  • Conference paper
Advanced Topics in Artificial Intelligence (AI 1999)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1747)


Abstract

Model trees—decision trees with linear models at the leaf nodes—have recently emerged as an accurate method for numeric prediction that produces understandable models. However, it is known that decision lists—ordered sets of If-Then rules—have the potential to be more compact and therefore more understandable than their tree counterparts.

We present an algorithm for inducing simple, accurate decision lists from model trees. Model trees are built repeatedly, and the best rule is selected at each iteration. This method produces rule sets that are as accurate as, but smaller than, the model tree constructed from the entire dataset. We report experimental results for various heuristics that attempt to find a compromise between rule accuracy and rule coverage, and show that our method produces rule sets that are smaller than, and comparably accurate to, those generated by the commercial state-of-the-art rule learning system Cubist.
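As a rough illustration of the iterative scheme described above, the following Python fragment sketches one way such a loop could look. It assumes a separate-and-conquer setting in which the instances covered by the chosen rule are removed before the next model tree is built; the helpers build_model_tree and extract_candidate_rules, the rule objects with covers/predict methods, the instance attribute target, and the particular accuracy-versus-coverage score are hypothetical placeholders, not the authors' implementation.

    # Minimal sketch of an iterative rule-extraction loop over model trees.
    # All helpers passed in (and the scoring heuristic) are illustrative
    # assumptions, not the method or code from the paper.

    def induce_decision_list(data, build_model_tree, extract_candidate_rules):
        """Repeatedly build a model tree on the remaining instances, keep the
        best-scoring leaf rule, remove the instances it covers, and iterate."""
        decision_list = []
        remaining = list(data)
        while remaining:
            tree = build_model_tree(remaining)            # model tree on what is left
            candidates = extract_candidate_rules(tree)    # one rule per leaf: path tests + linear model
            best = max(candidates, key=lambda r: score(r, remaining))
            decision_list.append(best)
            remaining = [x for x in remaining if not best.covers(x)]
        return decision_list

    def score(rule, instances):
        """Example heuristic trading off rule accuracy against rule coverage.
        The paper compares several such heuristics; this formula is only one
        plausible instance (more coverage and lower mean absolute error is better)."""
        covered = [x for x in instances if rule.covers(x)]
        if not covered:
            return float("-inf")
        error = sum(abs(rule.predict(x) - x.target) for x in covered) / len(covered)
        return len(covered) / (1.0 + error)

A decision list produced this way is read in order: the first rule whose conditions match an instance supplies the prediction, which is why the selection heuristic at each iteration matters for both the size and the accuracy of the final rule set.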




Copyright information

© 1999 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Holmes, G., Hall, M., Frank, E. (1999). Generating Rule Sets from Model Trees. In: Foo, N. (eds) Advanced Topics in Artificial Intelligence. AI 1999. Lecture Notes in Computer Science, vol 1747. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-46695-9_1


  • DOI: https://doi.org/10.1007/3-540-46695-9_1

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-66822-0

  • Online ISBN: 978-3-540-46695-6

  • eBook Packages: Springer Book Archive
