A bi-directional adversarial explainability for decision support

  • Research Article
  • Published in: Human-Intelligent Systems Integration

Abstract

In this paper, we present an approach to creating a bi-directional Decision Support System (DSS) that acts as an intermediary between an expert (U) and a machine learning (ML) system when choosing an optimal solution. As a first step, the DSS analyzes the stability of the expert's decision and looks for critical values in the data that support that decision. If the expert's decision and that of the ML system remain different, the DSS attempts to explain the discrepancy. We give a detailed description of this approach with examples and include three studies that illustrate some of its features.
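The abstract outlines a two-step protocol: first test whether the expert's decision is stable and collect the critical values in the case data that support it; then, if the expert and the ML system still disagree, use those critical values to explain the discrepancy. The sketch below is a minimal, hypothetical illustration of such a loop; the function names, the perturbation scheme, and the epsilon parameter are our assumptions, not the authors' implementation.

```python
# Minimal, hypothetical sketch of the bi-directional DSS loop described in the
# abstract. All names (find_critical_features, bidirectional_support, epsilon)
# are illustrative assumptions, not the authors' API.

from typing import Callable, Dict, List

Case = Dict[str, float]
Predictor = Callable[[Case], str]


def find_critical_features(case: Case, predict: Predictor,
                           epsilon: float = 0.1) -> List[str]:
    """Return features whose small (+/- epsilon) perturbation flips the
    ML decision for this case; these are candidate 'critical values'."""
    baseline = predict(case)
    critical = []
    for feature, value in case.items():
        for perturbed in (value * (1 - epsilon), value * (1 + epsilon)):
            if predict({**case, feature: perturbed}) != baseline:
                critical.append(feature)
                break
    return critical


def bidirectional_support(case: Case, expert_decision: str,
                          predict: Predictor) -> Dict[str, object]:
    """One round of expert/ML mediation: report agreement, or the features
    on which the ML decision hinges so the discrepancy can be explained."""
    ml_decision = predict(case)
    if ml_decision == expert_decision:
        return {"agreement": True, "decision": ml_decision}
    return {
        "agreement": False,
        "expert_decision": expert_decision,
        "ml_decision": ml_decision,
        "critical_features": find_critical_features(case, predict),
    }


if __name__ == "__main__":
    # Toy predictor: classifies a case as "flu" when temperature is high.
    toy_predict = lambda c: "flu" if c["temperature"] > 38.0 else "cold"
    report = bidirectional_support({"temperature": 38.2, "cough": 1.0},
                                   expert_decision="cold",
                                   predict=toy_predict)
    print(report)  # disagreement; 'temperature' is reported as critical
```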


Author information

Corresponding author

Correspondence to Eugene Pinsky.

Ethics declarations

Conflict of interest

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Goldberg, S., Pinsky, E. & Galitsky, B. A bi-directional adversarial explainability for decision support. Hum.-Intell. Syst. Integr. 3, 1–14 (2021). https://doi.org/10.1007/s42454-021-00031-5
