Choose for AI and for Explainability
As an expert in decision support systems development, I have long promoted transparent, self-explanatory systems that close the plan-do-check-act cycle. AI adoption tripled in 2018 [1], moving AI toward the peak of the Gartner hype cycle [2]. As AI goes mainstream, more conservative companies have good reasons to enter this arena. My impression is that the journey is starting all over again, as organizations adopt AI technology as black-box systems. I expect that these companies, too, will eventually ask for methods that yield more reliable project outcomes and integrated business systems. The idea that explainability comes at the expense of accuracy is deeply rooted in the AI community. Unfortunately, this has hampered research into good explainable models and shows that the human factor in AI is underestimated. Driven by governments asking for transparency, a public asking for unbiased decision making, and compliance requirements on business and industry, a new trend and research area is emerging. This talk explains why explainable artificial intelligence is needed, what makes an explanation good, and how ontologies, rule-based systems and knowledge representation may contribute to the research area known as XAI.
- 1. Goasduff, L.: 2018 Will Mark the Beginning of AI Democratization. Gartner (2018). https://www.gartner.com/smarterwithgartner/2018-will-mark-the-beginning-of-ai-democratization/
- 2. Walker, M.: Hype Cycle for Emerging Technologies. Gartner (2018). https://www.gartner.com/en/documents/3885468