Developing a Catalogue of Explainability Methods to Support Expert and Non-expert Users

  • Conference paper
  • Artificial Intelligence XXXVI (SGAI 2019)

Abstract

Organisations face growing legal requirements and ethical responsibilities to ensure that decisions made by their intelligent systems are explainable. However, the provisioning of an explanation is often application-dependent, causing an extended design phase and delayed deployment. In this paper we present an explainability framework formed of a catalogue of explanation methods, allowing integration into a range of projects within a telecommunications organisation. These methods are split into low-level explanations, high-level explanations and co-created explanations. We motivate and evaluate this framework using the specific case study of explaining the conclusions of field engineering experts to non-technical planning staff. Feedback from an iterative co-creation process and a qualitative evaluation indicates that this is a valuable development tool for use in future company projects.
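
The paper body is not openly available on this page, so the following is purely an illustrative sketch rather than the authors' implementation. Under assumed names (ExplainabilityCatalogue, ExplanationMethod, for_level, for_user, and so on, none of which come from the paper), it shows one way a catalogue split into low-level, high-level and co-created explanation methods could be organised so that individual projects query it by explanation level or by intended audience.

from dataclasses import dataclass, field
from enum import Enum
from typing import Callable, List


class ExplanationLevel(Enum):
    """Three families of methods, mirroring the split named in the abstract."""
    LOW_LEVEL = "low-level"      # e.g. feature-level relevance aimed at experts
    HIGH_LEVEL = "high-level"    # e.g. summaries aimed at non-expert users
    CO_CREATED = "co-created"    # built iteratively with the affected stakeholders


@dataclass
class ExplanationMethod:
    """A catalogue entry: a named method, its level, and its intended audience."""
    name: str
    level: ExplanationLevel
    target_users: List[str]
    explain: Callable[[object], str]  # hypothetical signature: model output -> explanation text


@dataclass
class ExplainabilityCatalogue:
    """A registry that individual projects can query by level or by user role."""
    methods: List[ExplanationMethod] = field(default_factory=list)

    def register(self, method: ExplanationMethod) -> None:
        self.methods.append(method)

    def for_level(self, level: ExplanationLevel) -> List[ExplanationMethod]:
        return [m for m in self.methods if m.level == level]

    def for_user(self, role: str) -> List[ExplanationMethod]:
        return [m for m in self.methods if role in m.target_users]


# Usage sketch: register one hypothetical low-level method, then look up
# which catalogue entries a given user role could draw on.
catalogue = ExplainabilityCatalogue()
catalogue.register(ExplanationMethod(
    name="word-relevance highlighting",
    level=ExplanationLevel.LOW_LEVEL,
    target_users=["field engineer"],
    explain=lambda output: f"most relevant terms: {output}",
))
print([m.name for m in catalogue.for_user("field engineer")])

In practice the catalogue entries would correspond to the concrete low-level, high-level and co-created methods described in the paper; the sketch only illustrates the registry structure implied by the abstract.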

Author information

Corresponding author

Correspondence to Kyle Martin.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Martin, K., Liret, A., Wiratunga, N., Owusu, G., Kern, M. (2019). Developing a Catalogue of Explainability Methods to Support Expert and Non-expert Users. In: Bramer, M., Petridis, M. (eds.) Artificial Intelligence XXXVI. SGAI 2019. Lecture Notes in Computer Science, vol 11927. Springer, Cham. https://doi.org/10.1007/978-3-030-34885-4_24

  • DOI: https://doi.org/10.1007/978-3-030-34885-4_24

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-34884-7

  • Online ISBN: 978-3-030-34885-4

  • eBook Packages: Computer Science, Computer Science (R0)
