
Choose for AI and for Explainability

  • Silvie Spreeuwenberg
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11878)

Abstract

As an expert in decision support systems development, I have long promoted transparent, self-explanatory systems that close the plan-do-check-act cycle. AI adoption tripled in 2018, moving AI toward the peak of the Gartner hype cycle [1, 2]. As AI goes mainstream, more conservative companies have good reasons to enter the arena. My impression is that the journey is starting all over again, as organizations adopt AI technology as black-box systems. I expect that these companies, too, will eventually ask for methods that yield more reliable project outcomes and integrated business systems. The idea that explainable AI comes at the expense of accuracy is deeply rooted in the AI community. Unfortunately, this has hampered research into good explainable models and shows that the human factor in AI is underestimated. Driven by governments asking for transparency, a public asking for unbiased decision making, and compliance requirements on business and industry, a new trend and research area is emerging. This talk explains why explainable artificial intelligence is needed, what makes an explanation good, and how ontologies, rule-based systems, and knowledge representation may contribute to the research area named XAI.
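The abstract itself contains no code, but to make its closing claim concrete, here is a minimal sketch of the rule-based idea, not the author's method: a hypothetical Python rule engine in which every decision carries the trace of the rules that produced it. All rule names, thresholds, and facts below are invented for illustration.

```python
# Minimal illustrative sketch: a rule-based decision whose outcome
# carries its own explanation. All rules and facts are hypothetical.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str                           # identifier used in the explanation
    condition: Callable[[dict], bool]   # predicate over the fact base
    conclusion: str                     # decision reached when the rule fires
    rationale: str                      # human-readable reason template

def decide(facts: dict, rules: list[Rule]) -> tuple[str, list[str]]:
    """Return the first matching conclusion plus an explanation trace."""
    trace = []
    for rule in rules:
        if rule.condition(facts):
            # Record why this rule fired, filling in the actual fact values.
            trace.append(f"{rule.name}: {rule.rationale.format(**facts)}")
            return rule.conclusion, trace
        trace.append(f"{rule.name}: did not apply")
    return "no decision", trace

# Hypothetical credit-check rules, purely for illustration.
rules = [
    Rule("R1", lambda f: f["income"] < 20_000,
         "reject", "income {income} is below the 20,000 threshold"),
    Rule("R2", lambda f: f["debt_ratio"] > 0.4,
         "refer", "debt ratio {debt_ratio} exceeds 0.4, manual review"),
    Rule("R3", lambda f: True,
         "accept", "no rejection or referral rule applied"),
]

decision, explanation = decide({"income": 18_500, "debt_ratio": 0.2}, rules)
print(decision)                  # -> reject
print("\n".join(explanation))    # -> R1: income 18500 is below the 20,000 threshold
```

The point of the sketch is that the explanation is a by-product of evaluation rather than a post-hoc reconstruction of a black box, which is what makes rule-based systems and explicit knowledge representation attractive for XAI.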

References

  1. Goasduff, L.: 2018 Will Mark the Beginning of AI Democratization. Gartner (2018). https://www.gartner.com/smarterwithgartner/2018-will-mark-the-beginning-of-ai-democratization/
  2. Walker, M.: Hype Cycle for Emerging Technologies. Gartner (2018). https://www.gartner.com/en/documents/3885468

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Lab for Intelligent Business Rules Technology – LIBRT, Amsterdam, The Netherlands
