Optimization of Decision Rules Based on Dynamic Programming Approach

  • Beata Zielosko
  • Igor Chikalov
  • Mikhail Moshkov
  • Talha Amin
Chapter
Part of the Studies in Computational Intelligence book series (SCI, volume 514)

Abstract

This chapter is devoted to the study of an extension of the dynamic programming approach that allows optimization of approximate decision rules relative to length and coverage. We introduce an uncertainty measure equal to the difference between the number of rows in a given decision table and the number of rows labeled with the most common decision for this table, divided by the number of rows in the decision table. We fix a threshold γ, such that 0 ≤ γ < 1, and study so-called γ-decision rules (approximate decision rules) that localize rows in subtables whose uncertainty is at most γ. The presented algorithm constructs a directed acyclic graph Δγ(T) whose nodes are subtables of the decision table T given by pairs “attribute = value”. The algorithm stops partitioning a subtable when its uncertainty is at most γ. The chapter also contains results of experiments with decision tables from the UCI Machine Learning Repository.
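As an illustration of the uncertainty measure described above, the following minimal Python sketch (not taken from the chapter; the function names and the plain list-of-decisions representation of a subtable's rows are assumptions made only for illustration) computes the uncertainty of a subtable and checks whether it is at most a fixed threshold γ:

    from collections import Counter

    def uncertainty(decisions):
        # Uncertainty of a (sub)table: (number of rows minus number of rows
        # labeled with the most common decision) divided by the number of rows.
        n = len(decisions)
        if n == 0:
            return 0.0
        n_most_common = Counter(decisions).most_common(1)[0][1]
        return (n - n_most_common) / n

    def is_terminal(decisions, gamma):
        # Partitioning of a subtable stops once its uncertainty is at most gamma.
        return uncertainty(decisions) <= gamma

    # Example: 10 rows, 7 of them labeled with the most common decision -> uncertainty 0.3
    decisions = [1, 1, 1, 1, 1, 1, 1, 2, 2, 3]
    print(uncertainty(decisions))        # 0.3
    print(is_terminal(decisions, 0.3))   # True for gamma = 0.3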


Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Beata Zielosko (1, 2)
  • Igor Chikalov (1)
  • Mikhail Moshkov (1)
  • Talha Amin (1)
  1. Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
  2. Institute of Computer Science, University of Silesia, Sosnowiec, Poland
