
Interpretable Machine Learning from Granular Computing Perspective

Applied Decision-Making

Part of the book series: Studies in Systems, Decision and Control ((SSDC,volume 209))

Abstract

Machine Learning (ML) aims to learn from data in order to identify patterns and make predictions. ML models have become ubiquitous in the services people use in their daily lives; consequently, these systems affect end users in many ways. Recently, there has been special interest in the right of the end user to know why a system generates a given output; this field is called Interpretable Machine Learning (IML). The Granular Computing (GrC) paradigm focuses on knowledge modeling inspired by human thinking. In this work we survey the state of the art in the IML and GrC fields to lay the groundwork for their mutual contribution, with the aim of building ML models that are both more interpretable and more accurate.





Acknowledgements

This research was partially supported by MyDCI (Maestría y Doctorado en Ciencias e Ingeniería).

Author information


Corresponding author

Correspondence to Raúl Navarro-Almanza.



Copyright information

© 2019 Springer Nature Switzerland AG

About this chapter


Cite this chapter

Navarro-Almanza, R., Castro, J.R., Sanchez, M.A. (2019). Interpretable Machine Learning from Granular Computing Perspective. In: Sanchez, M., Aguilar, L., Castañón-Puga, M., Rodríguez, A. (eds) Applied Decision-Making. Studies in Systems, Decision and Control, vol 209. Springer, Cham. https://doi.org/10.1007/978-3-030-17985-4_8

