Abstract
In this paper, we present the envisioned style and scope of the new topic “Explanation Paradigms Leveraging Analytic Intuition” (ExPLAIn) within the International Journal on Software Tools for Technology Transfer (STTT). The intention behind this new topic is (1) to explicitly address all aspects and issues that arise when trying to reveal and, where possible, confirm hidden properties of black-box systems, or (2) to enforce vital properties by embedding such systems into appropriate system contexts. Machine-learned systems, such as Deep Neural Networks, are particularly challenging black-box systems, and a wealth of formal methods for analysis and verification is waiting to be adapted and applied to them. The selection of papers in this first Special Section of ExPLAIn, most of which were co-authored by editorial board members, illustrates the envisioned style and scope: in addition to methodological papers on verification, explanation, and their scalability, case studies, tool papers, literature reviews, and position papers are also welcome.
Acknowledgements
As editors of the ExPLAIn theme, we would like to thank all authors for their contributions. We would also like to thank all reviewers for their insights and helpful comments.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Jansen, N., Nolte, G. & Steffen, B. Explanation Paradigms Leveraging Analytic Intuition (ExPLAIn). Int J Softw Tools Technol Transfer 25, 241–247 (2023). https://doi.org/10.1007/s10009-023-00715-0