Abstract
Artificial intelligence is drastically changing the process of creating art. However, in art, as in many other domains, algorithms and models are not immune to generating discriminatory and unfair artifacts or decisions. Explainable Artificial Intelligence (XAI) makes it possible to look into the “black box” and to identify biases and discriminatory behaviour. One of the main problems of XAI is that state-of-the-art explanation tools are usually tailored to AI experts. This paper evaluates how intuitively understandable the same tools are to laypeople. Using the prototypical use case of predictive sales and testing the results with users, the abstract ideas of XAI are transferred to a real-world setting so that their understandability can be studied.
Based on our analysis, it can be concluded that explanations are easier to understand if they are presented in a way that is familiar to users. A presentation in natural language is favorable because it states facts unambiguously. All relevant information should be accessible in an intuitive manner that avoids sources of misinterpretation. It is desirable to design the system interactively, allowing the user to request further details on demand; this makes the system more flexible and adjustable to the use case. The results presented in this paper can guide the development of explainability tools adapted to a non-expert audience.
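To make the recommendation concrete, the following minimal sketch shows how a local feature attribution for a predictive-sales classifier could be rendered as a natural-language sentence of the kind the study found easiest to understand. The model, feature names, ablation-based attribution, and wording templates are illustrative assumptions, not the tooling evaluated in the paper; production XAI libraries such as SHAP or LIME compute attributions more rigorously.

```python
# Minimal sketch: turning a local feature attribution into a plain-language
# sentence for non-experts. Everything here (data, features, templates) is
# an illustrative assumption, not the study's implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["ad_budget", "discount", "season_index"]  # hypothetical
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 1 = "sale", 0 = "no sale"

model = RandomForestClassifier(random_state=0).fit(X, y)

def local_attribution(x, model, X_background):
    """Crude single-feature ablation: how much does the predicted
    probability of a sale change when a feature is set to its mean?"""
    base = model.predict_proba(x.reshape(1, -1))[0, 1]
    scores = {}
    for i, name in enumerate(feature_names):
        x_ablated = x.copy()
        x_ablated[i] = X_background[:, i].mean()
        scores[name] = base - model.predict_proba(x_ablated.reshape(1, -1))[0, 1]
    return base, scores

def to_sentence(base, scores, top_k=2):
    """Template the strongest contributors into natural language."""
    top = sorted(scores.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_k]
    parts = [f"{name} {'raised' if s > 0 else 'lowered'} it by {abs(s):.0%}"
             for name, s in top]
    return (f"The model predicts a {base:.0%} chance of a sale; "
            + " and ".join(parts) + ".")

base, scores = local_attribution(X[0], model, X)
print(to_sentence(base, scores))
```

In an interactive setting of the kind the abstract recommends, a loop around `to_sentence` could reveal the next-strongest contributors only when the user asks for them, keeping the initial explanation short while leaving detail available on demand.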
Copyright information
© 2022 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering
Cite this paper
Schulze-Weddige, S., Zylowski, T. (2022). User Study on the Effects of Explainable AI Visualizations on Non-experts. In: Wölfel, M., Bernhardt, J., Thiel, S. (eds.) ArtsIT, Interactivity and Game Creation. ArtsIT 2021. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol. 422. Springer, Cham. https://doi.org/10.1007/978-3-030-95531-1_31