
MLIC: A MaxSAT-Based Framework for Learning Interpretable Classification Rules

  • Dmitry Malioutov
  • Kuldeep S. Meel
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11008)

Abstract

The wide adoption of machine learning approaches in industry, government, medicine, and science has renewed interest in interpretable machine learning: many decisions are too important to be delegated to black-box techniques such as deep neural networks or kernel SVMs. Historically, problems of learning interpretable classifiers, including classification rules or decision trees, have been approached by greedy heuristic methods, as essentially all the exact optimization formulations are NP-hard. Our primary contribution is a MaxSAT-based framework, called \(\mathcal {MLIC}\), which allows principled search for interpretable classification rules expressible in propositional logic. Our approach benefits from the revolutionary advances in the constraint satisfaction community to solve large-scale instances of such problems. In experimental evaluations over a collection of benchmarks arising from practical scenarios, we demonstrate its effectiveness: the formulation can solve large classification problems with tens or hundreds of thousands of examples and thousands of features, and it provides a tunable balance of accuracy vs. interpretability. Furthermore, we show that in many problems interpretability can be obtained at only a minor cost in accuracy.

The primary objective of the paper is to show that recent advances in the MaxSAT literature make it realistic to find optimal (or very high-quality near-optimal) solutions to large-scale classification problems. We also hope to encourage researchers in both the interpretable classification and constraint programming communities to take this further and to develop richer formulations and bespoke solvers attuned to the problem of interpretable ML.
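To make the flavour of the formulation concrete, below is a minimal sketch, assuming the PySAT library and its RC2 MaxSAT solver, of how a single disjunctive rule over binary features can be learned as a partial weighted MaxSAT instance. The variable layout (selector variables b_j, noise variables eta_i), the weight lam, and the toy data are illustrative assumptions; this is not the paper's exact MLIC encoding, which learns full CNF rules with several clauses.

    # A minimal sketch, not the authors' implementation: the function name,
    # the variable layout, and the weight 'lam' are illustrative assumptions.
    from pysat.formula import WCNF
    from pysat.examples.rc2 import RC2

    def learn_disjunctive_rule(X, y, lam=10):
        """Learn a rule "predict 1 iff some selected feature is 1" via MaxSAT."""
        n, m = len(X), len(X[0])
        b = lambda j: j + 1          # b_j: feature j is included in the rule
        eta = lambda i: m + i + 1    # eta_i: example i may be misclassified

        wcnf = WCNF()
        # Soft clauses: prefer small rules and few misclassifications.
        for j in range(m):
            wcnf.append([-b(j)], weight=1)      # cost 1 for using feature j
        for i in range(n):
            wcnf.append([-eta(i)], weight=lam)  # cost lam for misclassifying example i

        # Hard clauses: the rule must agree with every example whose
        # noise variable is not set.
        for i in range(n):
            if y[i] == 1:
                # Positive example: some selected feature must be 1 in it.
                wcnf.append([eta(i)] + [b(j) for j in range(m) if X[i][j] == 1])
            else:
                # Negative example: no selected feature may be 1 in it.
                for j in range(m):
                    if X[i][j] == 1:
                        wcnf.append([eta(i), -b(j)])

        solver = RC2(wcnf)
        model = solver.compute()     # optimal assignment minimising total soft cost
        solver.delete()
        return [j for j in range(m) if model[j] > 0]

    # Toy usage: the optimal rule selects only feature 0.
    X = [[1, 0], [1, 1], [0, 1], [0, 0]]
    y = [1, 1, 0, 0]
    print(learn_disjunctive_rule(X, y))   # -> [0]

Making lam large drives the solver towards rules that fit the training data exactly, while a small lam favours shorter rules at some cost in training accuracy; this weight plays the role of the tunable accuracy-vs-interpretability knob mentioned in the abstract.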

Acknowledgements

This work was supported in part by NUS ODPRT Grant, R-252-000-685-133 and IBM PhD Fellowship. The computational work for this article was performed on resources of the National Supercomputing Centre, Singapore, https://www.nscc.sg.


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. T. J. Watson IBM Research Center, Yorktown Heights, USA
  2. School of Computing, National University of Singapore, Singapore
