Generating Rule Sets from Model Trees

  • Geoffrey Holmes
  • Mark Hall
  • Eibe Frank
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1747)

Abstract

Model trees—decision trees with linear models at the leaf nodes—have recently emerged as an accurate method for numeric prediction that produces understandable models. However, it is known that decision lists—ordered sets of If-Then rules—have the potential to be more compact and therefore more understandable than their tree counterparts.

We present an algorithm for inducing simple, accurate decision lists from model trees. Model trees are built repeatedly, and the best rule is selected at each iteration. This method produces rule sets that are as accurate as, but smaller than, the model tree constructed from the entire dataset. We report experimental results for various heuristics that attempt to find a compromise between rule accuracy and rule coverage, and show that our method produces rule sets that are comparably accurate to, and smaller than, those of the commercial state-of-the-art rule learning system Cubist.
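The iterative scheme described above — grow a model tree, keep one rule, remove the training instances that rule covers, and repeat on the remainder — is a separate-and-conquer loop. The following is a minimal illustrative Python sketch, not the authors' implementation: it stands in for an M5-style model tree (linear models at the leaves) with a one-split stump whose leaves predict constants, and it uses raw coverage as the rule-selection heuristic rather than the accuracy/coverage compromises evaluated in the paper. All function names are our own.

```python
def _mean(ys):
    return sum(ys) / len(ys)

def _sse(ys):
    m = _mean(ys)
    return sum((y - m) ** 2 for y in ys)

def build_stump(data):
    """Toy one-split 'model tree' on a single numeric feature: pick the
    threshold minimising squared error and fit a constant in each leaf
    (a degenerate linear model).  Returns one (condition, prediction,
    coverage) triple per leaf."""
    best = None
    for t in sorted({x for x, _ in data})[1:]:
        left = [y for x, y in data if x < t]
        right = [y for x, y in data if x >= t]
        err = _sse(left) + _sse(right)
        if best is None or err < best[0]:
            best = (err, t, _mean(left), _mean(right))
    _, t, lo, hi = best
    return [
        ((lambda x, t=t: x < t), lo, sum(1 for x, _ in data if x < t)),
        ((lambda x, t=t: x >= t), hi, sum(1 for x, _ in data if x >= t)),
    ]

def induce_rules(data):
    """Separate-and-conquer: build a tree, keep the leaf covering the most
    training instances as a rule, discard the instances it covers, repeat."""
    rules = []
    while len({x for x, _ in data}) > 1:
        cond, pred, _ = max(build_stump(data), key=lambda leaf: leaf[2])
        rules.append((cond, pred))
        data = [(x, y) for x, y in data if not cond(x)]
    if data:  # default rule for whatever remains uncovered
        rules.append((lambda x: True, _mean([y for _, y in data])))
    return rules

def predict(rules, x):
    """Decision-list semantics: the first rule whose condition fires wins."""
    for cond, pred in rules:
        if cond(x):
            return pred
```

Because each extracted rule is tested in order, later rules only need to be accurate on instances the earlier rules do not cover, which is what allows the resulting decision list to be smaller than the full tree.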

Keywords

Model Tree · Multivariate Adaptive Regression Spline · Projection Pursuit · Percent Root · Decision List
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Copyright information

© Springer-Verlag Berlin Heidelberg 1999

Authors and Affiliations

  • Geoffrey Holmes (1)
  • Mark Hall (1)
  • Eibe Frank (1)

  1. Department of Computer Science, University of Waikato, New Zealand