
KI - Künstliche Intelligenz

Volume 33, Issue 1, pp 35–44

Please delete that! Why should I?

Explaining learned irrelevance classifications of digital objects
  • Michael Siebers
  • Ute Schmid
Technical Contribution

Abstract

Dare2Del is an assistive system which facilitates intentional forgetting of irrelevant digital objects. For an assistive system to be helpful, the user has to trust the system’s decisions, and explanations are a crucial component in establishing this trust. We introduce different types of explanations, which vary along dimensions such as level of detail and modality and are thereby suited to different application contexts. We outline the cognitive companion system Dare2Del, which is intended to support users in managing digital objects in a working environment. The core of Dare2Del is an interpretable machine learning mechanism which induces decision rules to classify whether a digital object is irrelevant. In this paper, we focus on the irrelevance of files. We formalize the decision-making process as logic inference. Finally, we present a method to generate verbal explanations for irrelevance decisions and point out how such explanations can be constructed at different levels of detail. Furthermore, we show how verbal explanations can be related to the path context of the file. We conclude with a short discussion of the scope and restrictions of our approach.
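As a minimal illustration of the idea sketched in the abstract, the Python snippet below (in place of the logic programming setting used in the paper) encodes one hypothetical decision rule: a file is classified as irrelevant if a newer copy with the same name exists elsewhere, and the verbal explanation is generated directly from the rule body that fired. The File type, the newer_copy predicate, and the rule itself are illustrative assumptions, not Dare2Del's actual learned rules.

    # Hypothetical sketch: irrelevance of a file decided by rule inference
    # over file facts, with a verbal explanation built from the satisfied
    # rule body. Not Dare2Del's actual rule set.
    from __future__ import annotations

    import os
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class File:
        path: str   # fact: the file's full path
        mtime: int  # fact: last-modified timestamp

    def name(f: File) -> str:
        # base name of the file, e.g. "report.txt"
        return os.path.basename(f.path)

    def newer_copy(f: File, files: list[File]) -> File | None:
        """Rule body: another file with the same name, modified later."""
        for g in files:
            if g.path != f.path and name(g) == name(f) and g.mtime > f.mtime:
                return g
        return None

    def irrelevant(f: File, files: list[File]) -> tuple[bool, str]:
        """Rule head; the explanation verbalizes the satisfied body."""
        g = newer_copy(f, files)
        if g is not None:
            return True, (f"'{f.path}' appears irrelevant: a newer copy of "
                          f"'{name(f)}' exists at '{g.path}'.")
        return False, f"No rule classifies '{f.path}' as irrelevant."

    if __name__ == "__main__":
        files = [File("/backup/report.txt", 100),
                 File("/work/report.txt", 200)]
        for f in files:
            print(irrelevant(f, files)[1])

In the setting the abstract describes, such rules are induced by an interpretable machine learning mechanism rather than hand-written, and the level of detail of the explanation can be varied; the sketch fixes a single rule and a single level of detail.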

Keywords

Irrelevant digital objects · Verbal explanations · Inductive Logic Programming


Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2018

Authors and Affiliations

  1. Cognitive Systems, University of Bamberg, Bamberg, Germany
