Post-processing Data Mining Models for Actionability

Chapter in: Data Mining for Business Applications

Data mining and machine learning algorithms are, for the most part, aimed at generating statistical models for decision making. These models are typically mathematical formulas or classification results on test data. However, many of the output models do not themselves correspond to actions that can be executed. In this chapter, we consider how to take the output of data mining algorithms as input and produce collections of high-quality actions to perform in order to bring about the desired world states. We give an overview of two of our approaches within this actionable data mining framework: an algorithm that extracts actions from decision trees, and a system that generates high-utility association rules and learns relational action models from frequent itemsets for automatic planning. These two problems and their solutions highlight our novel computational framework for actionable data mining.
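
To make the decision-tree approach concrete, the sketch below shows the general shape of extracting actions from a decision tree: enumerate the tree's leaves, compute which attribute changes would move a case into a more profitable leaf, and rank those changes by the profit gain net of the cost of making them. This is a minimal sketch, assuming a toy tree over categorical attributes; the names (Node, classify, suggest_actions) and the profit and cost figures are illustrative assumptions, not the chapter's actual algorithm or data.

```python
# A minimal sketch, assuming a toy decision tree over categorical attributes.
# The class and function names are hypothetical; they only illustrate the idea
# of post-processing a decision tree into ranked, cost-aware actions.

from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple


@dataclass
class Node:
    """A node in a toy decision tree. Leaves carry an expected profit."""
    attribute: Optional[str] = None                             # None marks a leaf
    children: Dict[str, "Node"] = field(default_factory=dict)   # value -> subtree
    profit: float = 0.0                                         # used only at leaves


def classify(node: Node, case: Dict[str, str]) -> Node:
    """Follow the tree to the leaf that this case falls into."""
    while node.attribute is not None:
        node = node.children[case[node.attribute]]
    return node


def leaves_with_paths(node: Node,
                      path: Dict[str, str] = None) -> List[Tuple[Node, Dict[str, str]]]:
    """Enumerate (leaf, attribute assignments along the path to it) pairs."""
    path = dict(path or {})
    if node.attribute is None:
        return [(node, path)]
    pairs = []
    for value, child in node.children.items():
        pairs += leaves_with_paths(child, {**path, node.attribute: value})
    return pairs


def suggest_actions(root: Node, case: Dict[str, str],
                    cost: Dict[str, float]) -> List[Dict]:
    """Propose attribute changes that move the case into a more profitable
    leaf, ranked by net gain (profit difference minus the cost of changes)."""
    current = classify(root, case)
    suggestions = []
    for leaf, path in leaves_with_paths(root):
        changes = {a: v for a, v in path.items() if case.get(a) != v}
        net_gain = (leaf.profit - current.profit) - sum(cost.get(a, 0.0) for a in changes)
        if changes and net_gain > 0:
            suggestions.append({"changes": changes, "net_gain": net_gain})
    return sorted(suggestions, key=lambda s: s["net_gain"], reverse=True)


# Toy example: upgrading a customer's service level costs 250 but moves the
# customer into a leaf whose expected profit is 800 higher.
tree = Node(attribute="service", children={
    "basic": Node(profit=100.0),
    "gold": Node(profit=900.0),
})
print(suggest_actions(tree, {"service": "basic"}, cost={"service": 250.0}))
# -> [{'changes': {'service': 'gold'}, 'net_gain': 550.0}]
```

The key design point the sketch illustrates is that an action is not a leaf label but a set of feasible attribute changes, and ranking must account for the cost of executing those changes, not just the difference in predicted profit.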


Author information

Correspondence to Qiang Yang.

Copyright information

© 2009 Springer Science+Business Media, LLC

About this chapter

Cite this chapter

Yang, Q. (2009). Post-processing Data Mining Models for Actionability. In: Cao, L., Yu, P.S., Zhang, C., Zhang, H. (eds) Data Mining for Business Applications. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-79420-4_2

  • DOI: https://doi.org/10.1007/978-0-387-79420-4_2

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-0-387-79419-8

  • Online ISBN: 978-0-387-79420-4

  • eBook Packages: Computer Science (R0)
