Explainable AI: The New 42?

  • Randy Goebel
  • Ajay Chander
  • Katharina Holzinger
  • Freddy Lecue
  • Zeynep Akata
  • Simone Stumpf
  • Peter Kieseberg
  • Andreas Holzinger
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11015)

Abstract

Explainable AI is not a new field. Since at least the early exploitation of C.S. Peirce’s abductive reasoning in the expert systems of the 1980s, there have been reasoning architectures to support an explanation function for complex AI systems, including applications in medical diagnosis, complex multi-component design, and reasoning about the real world. So explainability is at least as old as early AI, and a natural consequence of the design of AI systems. While early expert systems consisted of handcrafted knowledge bases that enabled reasoning over narrowly well-defined domains (e.g., INTERNIST, MYCIN), such systems had no learning capabilities and only primitive uncertainty handling. The subsequent evolution of formal reasoning architectures to incorporate principled probabilistic reasoning helped address the capture and use of uncertain knowledge.

The recent and relatively rapid success of AI/machine learning solutions arises from neural network architectures. A new generation of neural methods now scales to exploit the practical applicability of statistical and algebraic learning approaches in arbitrarily high-dimensional spaces. But despite their huge successes, largely on problems that can be cast as classification problems, their effectiveness is still limited by their un-debuggability and their inability to “explain” their decisions in a human-understandable and reconstructable way. So while AlphaGo or DeepStack can crush the best humans at Go or Poker, neither program has any internal model of its task; their representations defy interpretation by humans, there is no mechanism to explain their actions and behaviour, and, furthermore, there is no obvious instructional value: these high-performance systems cannot help humans improve.

Even when we understand the underlying mathematical scaffolding of current machine learning architectures, it is often impossible to get insight into the internal workings of the models; we need explicit modeling and reasoning tools to explain how and why a result was achieved. We also know that a significant challenge for future AI is contextual adaptation, i.e., systems that incrementally help to construct explanatory models for solving real-world problems. Here it would be beneficial not to exclude human expertise, but to augment human intelligence with artificial intelligence.
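To make the kind of explanation tooling concrete, the sketch below (illustrative only, not a method from this paper) shows one widely used post-hoc strategy, a LIME-style local surrogate in the spirit of Ribeiro et al.: a black-box classifier is approximated around a single input by a proximity-weighted linear model whose coefficients serve as feature-level explanations. The synthetic data, the random-forest “black box”, and the helper `explain_locally` are hypothetical placeholders chosen to keep the example self-contained.

```python
# Illustrative sketch: a LIME-style local surrogate explanation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical black box: a random forest trained on synthetic data.
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)
black_box = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

def explain_locally(x, predict_proba, n_samples=1000, scale=0.5):
    """Fit a proximity-weighted linear surrogate to the black box around x."""
    # Perturb the instance and query the black box on the perturbations.
    Z = x + rng.normal(scale=scale, size=(n_samples, x.shape[0]))
    p = predict_proba(Z)[:, 1]
    # Weight perturbations by closeness to x (Gaussian kernel).
    w = np.exp(-np.linalg.norm(Z - x, axis=1) ** 2 / (2 * scale ** 2))
    surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
    return surrogate.coef_  # per-feature contribution near x

x0 = X[0]
print("local feature attributions:", explain_locally(x0, black_box.predict_proba))
```

In this sketch the attributions are valid only in the neighbourhood of the chosen input; a global, human-reconstructable account of the model, as called for above, requires richer modeling and reasoning machinery than a local linear fit.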

Keywords

Artificial intelligence · Machine learning · Explainability · Explainable AI

Acknowledgements

The authors thank their colleagues from local and international institutions for their valuable feedback, remarks, and criticism of this introduction to the MAKE-Explainable-AI workshop.

Copyright information

© IFIP International Federation for Information Processing 2018

Authors and Affiliations

  1. Alberta Machine Intelligence Institute, University of Alberta, Edmonton, Canada
  2. Fujitsu Labs of America, Sunnyvale, USA
  3. SBA-Research, Vienna, Austria
  4. INRIA, Sophia Antipolis, France
  5. Accenture Labs, Dublin, Ireland
  6. Amsterdam Machine Learning Lab, University of Amsterdam, Amsterdam, The Netherlands
  7. Max Planck Institute for Informatics, Saarbruecken, Germany
  8. City, University of London, London, UK
  9. University of Applied Sciences St. Pölten, St. Pölten, Austria
  10. Holzinger Group HCI-KDD, Institute for Medical Informatics, Statistics and Documentation, Medical University Graz, Graz, Austria
  11. Institute of Interactive Systems and Data Science, Graz University of Technology, Graz, Austria