Abstract
[Context and motivation] Software systems are deeply integrated into our daily lives. Quality aspects such as ethics, fairness, and transparency have been discussed as essential for trustworthy software systems, and explainability has been identified as a means to achieve all three. [Question/problem] Like other quality aspects, explainability must be discovered and treated during the design of such systems. Although explainability has become a hot topic in several communities across different areas of knowledge, there is little research on systematic explainability engineering. Yet methods and techniques from requirements and software engineering would add substantial value to explainability research. [Principal ideas/results] As a first step toward exploring this research landscape, we held an interdisciplinary workshop to collect ideas from different communities and to discuss open research questions. In a subsequent working group, we analyzed and structured the workshop results to identify the most important research questions. As a result, we present a research roadmap for explainable systems. [Contribution] With our research roadmap, we aim to advance software and requirements engineering methods and techniques for explainable systems and to attract research on the most urgent open questions.
All authors have contributed equally to this paper and share the first authorship.
Acknowledgments
This work was supported by the research initiative Mobilise between the Technical University of Braunschweig and Leibniz University Hannover, funded by the Ministry for Science and Culture of Lower Saxony and by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy within the Cluster of Excellence PhoenixD (EXC 2122, Project ID 390833453). Work on this paper was also funded by the Volkswagen Foundation grant AZ 98514 “Explainable Intelligent Systems” (EIS) and by the DFG grant 389792660 as part of TRR 248.
© 2022 Springer Nature Switzerland AG
Cite this paper
Brunotte, W., Chazette, L., Klös, V., Speith, T. (2022). Quo Vadis, Explainability? – A Research Roadmap for Explainability Engineering. In: Gervasi, V., Vogelsang, A. (eds) Requirements Engineering: Foundation for Software Quality. REFSQ 2022. Lecture Notes in Computer Science, vol 13216. Springer, Cham. https://doi.org/10.1007/978-3-030-98464-9_3