Abstract
Explainable artificial intelligence (XAI) has emerged as a crucial topic in machine learning, providing insights into the reasoning performed by artificial intelligence (AI) systems. However, the lack of a clear definition of explanation and of a standard methodology for evaluating explanation quality has made it challenging to develop effective XAI systems. Local Linear Explanations are a commonly used approach, but the evaluation of their quality remains unclear due to theoretical inconsistencies. This issue is even more challenging in image recognition, where visual explanations often detect edges rather than providing clear explanations for decisions. To address this issue, several metrics that quantitatively measure different aspects of explanation quality in a robust and mathematically consistent manner have been proposed. In this work, we apply the REVEL framework, which standardizes the concept of explanation and allows both the comparison of different explanations and the absolute evaluation of individual explanations. We provide a guide to using the REVEL framework to perform an optimization process aimed at improving the explainability of machine learning models. We apply the five proposed metrics on the CIFAR-10 benchmark and demonstrate their descriptive, analytical and optimization power. Our work contributes to the development of XAI systems that provide reliable and interpretable explanations for AI reasoning.
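As a rough illustration of the kind of pipeline summarized above, the following minimal sketch generates a LIME local linear explanation for a single CIFAR-10-sized image and scores it with a placeholder quality measure. It assumes the open-source `lime` package; the classifier (`predict_proba`) and the scoring function (`revel_style_score`) are hypothetical stand-ins and do not reproduce the paper's five REVEL metrics.

# Minimal sketch (not from the paper): a LIME explanation on a CIFAR-10-sized
# image, followed by a placeholder score over the explanation's feature weights.
import numpy as np
from lime import lime_image

def predict_proba(images: np.ndarray) -> np.ndarray:
    """Stand-in for a trained CIFAR-10 classifier: returns (N, 10) class probabilities."""
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(len(images), 10))
    return np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

def revel_style_score(weights: np.ndarray) -> float:
    """Hypothetical proxy metric: fraction of attribution mass carried by the
    top 10% of features (a rough 'focus' indicator, not a REVEL metric)."""
    mass = np.abs(weights)
    top = np.sort(mass)[::-1][: max(1, len(mass) // 10)]
    return float(top.sum() / mass.sum())

image = np.random.rand(32, 32, 3)  # placeholder for a real CIFAR-10 image
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, predict_proba, top_labels=1, num_samples=500
)
label = explanation.top_labels[0]
weights = np.array([w for _, w in explanation.local_exp[label]])
print(f"focus score for label {label}: {revel_style_score(weights):.3f}")

In an optimization loop of the kind described in the paper, such scores would be computed over candidate explanation hyperparameters and used to select the configuration with the best metric values.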
References
Amparore, E., Perotti, A., Bajardi, P.: To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods. PeerJ Comput. Sci. 7, e479 (2021)
Arrieta, A.B., et al.: Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 58, 82–115 (2020)
Confalonieri, R., Coba, L., Wagner, B., Besold, T.R.: A historical perspective of explainable artificial intelligence. Wiley Interdisc. Rev. Data Min. Knowl. Discov. 11(1), e1391 (2021)
Doshi-Velez, F., Kim, B.: Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608 (2017)
Krizhevsky, A., Nair, V., Hinton, G.: CIFAR-10 (Canadian Institute for Advanced Research) (2010). http://www.cs.toronto.edu/kriz/cifar.html
LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436–444 (2015)
Miller, T.: "But why?" Understanding explainable artificial intelligence. XRDS: Crossroads, ACM Mag. Students 25(3), 20–25 (2019)
Ribeiro, M.T., Singh, S., Guestrin, C.: "Why should I trust you?": explaining the predictions of any classifier. In: KDD 2016, pp. 1135–1144. Association for Computing Machinery, New York, NY, USA (2016). https://doi.org/10.1145/2939672.2939778
Sevillano-García, I., Luengo, J., Herrera, F.: REVEL framework to measure local linear explanations for black-box models: deep learning image classification case study. Int. J. Intell. Syst. (2023). https://doi.org/10.48550/ARXIV.2211.06154, https://arxiv.org/abs/2211.06154
Slack, D., Hilgard, A., Singh, S., Lakkaraju, H.: Reliable post hoc explanations: modeling uncertainty in explainability. In: Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P., Vaughan, J.W. (eds.) Advances in Neural Information Processing Systems, vol. 34, pp. 9391–9404. Curran Associates, Inc. (2021)
Tan, M., Le, Q.: EfficientNet: rethinking model scaling for convolutional neural networks. In: International Conference on Machine Learning, pp. 6105–6114 (2019)
Acknowledgements
This work was supported by the Spanish Ministry of Science and Technology under project PID2020-119478GB-I00, financed by MCIN/AEI/10.13039/501100011033. This work was also partially supported by the Contract UGR-AM OTRI-6717, the Contract UGR-AM OTRI-5987, and project P18-FR-4961 of Proyectos I+D+i Junta de Andalucía 2018. The hardware used in this work is supported by the project with reference EQC2018-005084-P, granted by Spain's Ministry of Science and Innovation and the European Regional Development Fund (ERDF), and the project with reference SOMM17/6110/UGR, granted by the Andalusian "Consejería de Conocimiento, Investigación y Universidades" and the European Regional Development Fund (ERDF).
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG