Almeida, A., López-de-Ipiña, D.: Assessing ambiguity of context data in intelligent environments: towards a more reliable context managing system. Sensors 12(4), 4934–4951 (2012). http://www.mdpi.com/1424-8220/12/4/4934
Alvarez-Melis, D., Jaakkola, T.S.: On the robustness of interpretability methods (2018)
Barredo Arrieta, A., et al.: Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 58, 82–115 (2020)
DARPA: Broad agency announcement - explainable artificial intelligence (XAI). DARPA-BAA-16-53 (Aug 2016)
Dey, A.K.: Modeling and intelligibility in ambient environments. J. Ambient Intell. Smart Environ. 1(1), 57–62 (2009)
Goodman, B., Flaxman, S.: European union regulations on algorithmic decision-making and a “right to explanation”. arXiv preprint arXiv:1606.08813 (2016)
Hutter, F., Hoos, H.H., Leyton-Brown, K.: Sequential model-based optimization for general algorithm configuration (extended version). Technical Report TR-2010-10, University of British Columbia, Department of Computer Science (2010). http://www.cs.ubc.ca/~hutter/papers/10-TR-SMAC.pdf
Jannach, D., Manzoor, A., Cai, W., Chen, L.: A survey on conversational recommender systems (2020)
Lim, B.Y., Dey, A.K., Avrahami, D.: Why and why not explanations improve the intelligibility of context-aware intelligent systems. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 2119–2128, CHI 2009. ACM, New York (2009). https://doi.org/10.1145/1518701.1519023
Lundberg, S.M., Lee, S.I.: A unified approach to interpreting model predictions. In: Proceedings of the 31st International Conference on Neural Information Processing Systems, pp. 4768–4777, NIPS 2017. Curran Associates Inc. (2017)
Mohseni, S., Zarei, N., Ragan, E.D.: A multidisciplinary survey and framework for design and evaluation of explainable AI systems (2020)
Ribeiro, M.T., Singh, S., Guestrin, C.: “Why should I trust you?”: explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144, KDD 2016. Association for Computing Machinery, New York (2016). https://doi.org/10.1145/2939672.2939778
Ribeiro, M.T., Singh, S., Guestrin, C.: Anchors: high-precision model-agnostic explanations. In: Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (2018)
Robnik-Šikonja, M., Bohanec, M.: Perturbation-based explanations of prediction models. In: Zhou, J., Chen, F. (eds.) Human and Machine Learning. HIS, pp. 159–175. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-90403-0_9
Roy, N., Das, S.K., Julien, C.: Resource-optimized quality-assured ambiguous context mediation framework in pervasive environments. IEEE Trans. Mob. Comput. 11(2), 218–229 (2012). http://dblp.uni-trier.de/db/journals/tmc/tmc11.html#RoyDJ12
Schank, R.C.: Explanation: a first pass. In: Kolodner, J.L., Riesbeck, C.K. (eds.) Experience, Memory, and Reasoning, pp. 139–165. Lawrence Erlbaum Associates, Hillsdale (1986)
Selvaraju, R.R., Das, A., Vedantam, R., Cogswell, M., Parikh, D., Batra, D.: Grad-CAM: why did you say that? Visual explanations from deep networks via gradient-based localization. CoRR abs/1610.02391 (2016). http://arxiv.org/abs/1610.02391
Sokol, K., Flach, P.A.: Explainability fact sheets: a framework for systematic assessment of explainable approaches. CoRR abs/1912.05100 (2019)
Yeh, C.K., Hsieh, C.Y., Suggala, A.S., Inouye, D.I., Ravikumar, P.: On the (in)fidelity and sensitivity of explanations (2019)