Annals of Operations Research, Volume 271, Issue 2, pp 279–295

Bi-criteria optimization problems for decision rules

  • Fawaz Alsolami (corresponding author)
  • Talha Amin
  • Igor Chikalov
  • Mikhail Moshkov
Original Research

Abstract

We consider bi-criteria optimization problems for decision rules and systems of decision rules relative to length and coverage. We study decision tables with many-valued decisions, in which each row is associated with a set of decisions, as well as decision tables with single-valued decisions, in which each row has exactly one decision. Shorter rules are easier to understand; rules that cover more rows are more general. Both problems (minimization of rule length and maximization of rule coverage) are NP-hard. We create dynamic programming algorithms that can find the minimum length and the maximum coverage of rules, and can construct the set of Pareto optimal points for the corresponding bi-criteria optimization problem. This approach is applicable to medium-sized decision tables. It also allows us to evaluate the quality of various heuristics for decision rule construction that are applicable to relatively large datasets. We evaluate these heuristics from two points of view: (i) single-criterion, by comparing the length or coverage of the rules constructed by the heuristics; and (ii) bi-criteria, by measuring the distance of the point (length, coverage) corresponding to a heuristic from the set of Pareto optimal points. The presented results show that the heuristics that are best from the point of view of bi-criteria optimization are not always the best from the point of view of single-criterion optimization.
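To illustrate the bi-criteria evaluation described above, the following sketch computes the Pareto-optimal (length, coverage) points among a set of candidate rules and the distance from a heuristic's point to that set. This is only an illustration of the evaluation idea, not the authors' dynamic programming algorithm; all names and the toy data are hypothetical.

```python
# Minimal sketch: Pareto front over (length, coverage) pairs, where smaller
# length and larger coverage are both preferred. Illustrative only; not the
# paper's dynamic programming construction over decision tables.
from math import hypot


def pareto_front(points):
    """Return the Pareto-optimal subset of (length, coverage) pairs.

    A point dominates another if its length is <= and its coverage is >=,
    with the two points not identical.
    """
    front = []
    for length, coverage in points:
        dominated = any(
            l2 <= length and c2 >= coverage and (l2, c2) != (length, coverage)
            for l2, c2 in points
        )
        if not dominated:
            front.append((length, coverage))
    return sorted(set(front))


def distance_to_front(point, front):
    """Euclidean distance from a heuristic's (length, coverage) point to the front."""
    return min(hypot(point[0] - l, point[1] - c) for l, c in front)


if __name__ == "__main__":
    # Toy (length, coverage) pairs for candidate rules; the values are made up.
    candidates = [(2, 10), (3, 15), (4, 15), (2, 8), (5, 20)]
    front = pareto_front(candidates)           # [(2, 10), (3, 15), (5, 20)]
    heuristic_point = (3, 12)                  # hypothetical greedy-heuristic rule
    print("Pareto front:", front)
    print("Distance to front:", distance_to_front(heuristic_point, front))
```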

Keywords

Decision tables with many-valued decisions · Systems of decision rules · Dynamic programming · Pareto optimal points · Greedy heuristics

Acknowledgements

Research reported in this publication was supported by King Abdullah University of Science and Technology (KAUST). We are greatly indebted to the anonymous reviewer for useful comments and suggestions.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  • Fawaz Alsolami (1), corresponding author
  • Talha Amin (2)
  • Igor Chikalov (2)
  • Mikhail Moshkov (2)

  1. Computer Science Department, King Abdulaziz University, Jeddah, Saudi Arabia
  2. Computer, Electrical and Mathematical Sciences and Engineering Division, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
