Explainable Machine Learning for Chatbots

  • Boris Galitsky
  • Saveli Goldberg
Chapter

Abstract

Machine learning (ML) has been successfully applied to a wide variety of fields, ranging from information retrieval, data mining, and speech recognition to computer graphics, visualization, and human-computer interaction. However, most users treat a machine learning model as a black box because its internal functions and working mechanisms are hard to comprehend (Liu et al. 2017). Without a clear understanding of how and why a model works, the development of high-performance models for chatbots typically relies on a time-consuming trial-and-error process. As a result, academic and industrial ML chatbot developers face a demand for more transparent and explainable systems that support better understanding and analysis of ML models, especially their inner working mechanisms.

In this chapter we focus on explainability. We first discuss what explainable ML is and why users want its features. We then introduce an example chatbot-related classification problem and show how it can be solved by a transparent rule-based method or an ML method; a toy illustration of the rule-based case follows this summary. After that, we present a decision-support-enabled chatbot that shares explanations to back up its decisions and to challenge those of a human peer. We conclude the chapter with a learning framework that takes a deterministic inductive approach with complete explainability.
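
To make the contrast between a black-box classifier and a transparent one concrete, the sketch below shows a minimal rule-based intent classifier that reports exactly which phrases triggered its decision. It is a hypothetical illustration written for this summary: the rule set, the intent labels, and the function classify_utterance are ours and are not taken from the chapter.

    # Minimal sketch of a transparent rule-based intent classifier for a chatbot.
    # All names below (RULES, classify_utterance, intent labels) are illustrative,
    # not the chapter's own classification problem.

    RULES = {
        "check_balance":  ["balance", "account", "how much"],
        "report_fraud":   ["fraud", "stolen", "unauthorized"],
        "reset_password": ["password", "reset", "locked out"],
    }

    def classify_utterance(utterance: str):
        """Return (intent, explanation): the best-matching intent and the phrases that fired."""
        text = utterance.lower()
        best_intent, best_hits = None, []
        for intent, phrases in RULES.items():
            hits = [p for p in phrases if p in text]
            if len(hits) > len(best_hits):
                best_intent, best_hits = intent, hits
        if best_intent is None:
            return None, "no rule fired; deferring to a fallback"
        return best_intent, f"matched phrases {best_hits} for intent '{best_intent}'"

    if __name__ == "__main__":
        intent, why = classify_utterance("My card was stolen and I see unauthorized charges")
        print(intent, "-", why)
        # report_fraud - matched phrases ['stolen', 'unauthorized'] for intent 'report_fraud'

A statistical classifier trained on the same utterances may well be more accurate, but it cannot point to the specific evidence behind each prediction in such a direct, human-readable form; that gap is what the chapter addresses.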

References

  1. Anshakov OM, Finn VK, Skvortsov DP (1989) On axiomatization of many-valued logics associated with formalization of plausible reasoning. Stud Logica 42(4):423–447
  2. Arras L, Horn F, Montavon G, Müller K-R, Samek W (2017) What is relevant in a text document?: an interpretable machine learning approach. PLoS One. https://doi.org/10.1371/journal.pone.0181142
  3. Baehrens D, Schroeter T, Harmeling S, Kawanabe M, Hansen K, Müller K-R (2010) How to explain individual classification decisions. J Mach Learn Res 11(June):1803–1831
  4. DARPA (2016) Explainable artificial intelligence (XAI). http://www.darpa.mil/program/explainable-artificial-intelligence. Last downloaded November 2018
  5. Galitsky B (2014) Learning parse structure of paragraphs and its applications in search. Eng Appl Artif Intell 32:160–184
  6. Galitsky B (2015) Finding a lattice of needles in a haystack: forming a query from a set of items of interest. FCA4AI@IJCAI, pp 99–106
  7. Galitsky B (2016) Theory of mind engine. In: Computational autism. Springer, Cham
  8. Galitsky B (2017) Matching parse thickets for open domain question answering. Data Knowl Eng 107:24–50
  9. Galitsky B, de la Rosa JL (2011) Concept-based learning of human behavior for customer relationship management. Inf Sci 181(10):2016–2035
  10. Galitsky B, Ilvovsky D (2017) Chatbot with a discourse structure-driven dialogue management. EACL Demo E17–3022. Valencia
  11. Galitsky B, Parnis A (2017) How children with autism and machines learn to interact. In: Autonomy and artificial intelligence: a threat or savior. Springer, Cham
  12. Galitsky B, Shpitsberg I (2016) Autistic learning and cognition. In: Computational autism. Springer, Cham, pp 245–293
  13. Galitsky B, Kuznetsov SO, Vinogradov DV (2007) Applying hybrid reasoning to mine for associative features in biological data. J Biomed Inform 40(3):203–220
  14. Galitsky B, González MP, Chesñevar CI (2009) A novel approach for classifying customer complaints through graphs similarities in argumentative dialogue. Decis Support Syst 46(3):717–729
  15. Ganter B, Kuznetsov S (2001) Pattern structures and their projections. In: Stumme G, Delugach H (eds) Proceedings of the 9th international conference on conceptual structures, ICCS'01. Lecture Notes in Artificial Intelligence, 2120, pp 129–142
  16. Ganter B, Wille R (1999) Formal concept analysis: mathematical foundations. Springer, Berlin
  17. Gilpin LH, Bau D, Yuan BZ, Bajwa A, Specter M, Kagal L (2018) Explaining explanations: an approach to evaluating interpretability of machine learning. https://arxiv.org/pdf/1806.00069.pdf
  18. Goldberg S, Shklovskiy-Kordi N, Zingerman B (2007) Time-oriented multi-image case history – way to the “disease image” analysis. VISAPP (Special Sessions):200–203
  19. Goldberg S, Niemierko A, Turchin A (2008) Analysis of data errors in clinical research databases. AMIA Annu Symp Proc 6:242–246
  20. Goodman B, Flaxman S (2017) European Union regulations on algorithmic decision-making and a “right to explanation”. AI Mag 38(3):50–57
  21. Hartono E, Santhanam R, Holsapple CW (2007) Factors that contribute to management support system success: an analysis of field studies. Decis Support Syst 43(1):256–268
  22. Krakovna V, Doshi-Velez F (2016) Increasing the interpretability of recurrent neural networks using hidden Markov models. CoRR abs/1606.05320
  23. Krawczyk B, Minku LL, Gama J, Stefanowski J, Wozniak M (2017) Ensemble learning for data stream analysis: a survey. Inf Fusion 37:132–156
  24. Lake BM, Salakhutdinov R, Tenenbaum JB (2015) Human-level concept learning through probabilistic program induction. Science 350(6266):1332–1338
  25. Lee CJ, Sugimoto CR, Zhang G, Cronin B (2013) Bias in peer review. J Am Soc Inf Sci Tec 64:2–17
  26. Liu M, Shi J, Li Z, Li C, Zhu J, Liu S (2017) Towards better analysis of deep convolutional neural networks. IEEE Trans Vis Comput Graph 23(1):91–100
  27. Mann W, Thompson S (1988) Rhetorical structure theory: towards a functional theory of text organization. Text-Interdiscip J Stud Discourse 8(3):243–281
  28. Mill JS (1843) A system of logic. Also available from University Press of the Pacific, Honolulu, 2002
  29. Newman S, Lynch T, Plummer A (2000) Success and failure of decision support systems: learning as we go. J Anim Sci 77:1–12
  30. Plous S (1993) The psychology of judgment and decision making, p 233
  31. Salton G, Yang CS (1973) On the specification of term values in automatic indexing. J Doc 29:351–372
  32. Shklovskiy-Kordi N, Shakin VV, Ptashko GO, Surin M, Zingerman B, Goldberg S, Krol M (2005) Decision support system using multimedia case history quantitative comparison and multivariate statistical analysis. CBMS:128–133
  33. Shklovsky-Kordi N, Zingerman B, Rivkind N, Goldberg S, Davis S, Varticovski L, Krol M, Kremenetzkaia AM, Vorobiev A, Serebriyskiy I (2005) Computerized case history – an effective tool for management of patients and clinical trials. In: Engelbrecht R et al (eds) Connecting medical informatics and bio-informatics, vol 2005. ENMI, pp 53–57
  34. Tan S (2005) Neighbor-weighted K-nearest neighbor for unbalanced text corpus. Expert Syst Appl 28:667–671
  35. Trstenjak B, Sasa M, Donko D (2013) KNN with TF-IDF based framework for text categorization. Procedia Eng 69:1356–1364
  36. Young T, Hazarika D, Poria S, Cambria E (2018) Recent trends in deep learning based natural language processing. https://arxiv.org/pdf/1708.02709.pdf

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Boris Galitsky
    Oracle (United States), San Jose, USA
  • Saveli Goldberg
    Department of Radiology, Massachusetts General Hospital, Boston, USA
