
Optimizing LIME Explanations Using REVEL Metrics

  • Conference paper
Hybrid Artificial Intelligent Systems (HAIS 2023)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14001)


Abstract

Explainable artificial intelligence (XAI) has emerged as a crucial topic in machine learning, providing insight into the reasoning performed by artificial intelligence (AI) systems. However, the lack of a clear definition of explanation and of a standard methodology for evaluating explanation quality has made it challenging to develop effective XAI systems. One commonly used approach is Local Linear Explanations, but the evaluation of their quality remains unclear due to theoretical inconsistencies. This issue is even more challenging in image recognition, where visual explanations often detect edges rather than providing clear explanations for decisions. To address this issue, several metrics that quantitatively measure different aspects of explanation quality in a robust and mathematically consistent manner have been proposed. In this work, we apply the REVEL framework, which standardizes the concept of explanation and allows both the comparison of different explanations and the absolute evaluation of individual explanations. We provide a guide to using the REVEL framework to perform an optimization process that aims to improve the explainability of machine learning models. We apply the five proposed metrics to the CIFAR-10 benchmark and demonstrate their descriptive, analytical, and optimization power. Our work contributes to the development of XAI systems that provide reliable and interpretable explanations for AI reasoning.
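As a rough, self-contained illustration of the workflow the abstract describes (and not the authors' code), the sketch below uses the public `lime` package to explain one CIFAR-10-sized image and then applies a simple occlusion-based fidelity check. Both the `classifier_fn` stand-in and the fidelity score are hypothetical placeholders: the score is a generic proxy for explanation quality, not one of the five REVEL metrics, which are defined in the paper itself.

```python
import numpy as np
from lime import lime_image

def classifier_fn(images):
    # Hypothetical stand-in for a trained CIFAR-10 model: a fixed linear map
    # from pixels to 10 logits, followed by softmax. Replace with a real model.
    feats = np.asarray(images, dtype=np.float64).reshape(len(images), -1)
    w = np.linspace(-1.0, 1.0, feats.shape[1] * 10).reshape(feats.shape[1], 10)
    logits = feats @ w
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# One CIFAR-10-sized RGB image (32x32x3) with values in [0, 1].
image = np.random.default_rng(0).random((32, 32, 3))

# Standard LIME for images: perturb superpixels and fit a local linear model.
explainer = lime_image.LimeImageExplainer(random_state=0)
explanation = explainer.explain_instance(
    image, classifier_fn, top_labels=1, hide_color=0, num_samples=500
)
label = explanation.top_labels[0]

# Placeholder quality check (NOT a REVEL metric): occlude the segments LIME
# ranked most important and measure how much the predicted probability drops.
_, mask = explanation.get_image_and_mask(
    label, positive_only=True, num_features=5, hide_rest=False
)
occluded = image.copy()
occluded[mask == 1] = 0.0  # hide the regions the explanation marked important

p_orig = classifier_fn([image])[0, label]
p_occl = classifier_fn([occluded])[0, label]
print(f"probability drop when hiding important regions: {p_orig - p_occl:.4f}")
```

A larger drop suggests the explanation points at regions the model actually relies on; REVEL replaces this single ad hoc score with a set of complementary, mathematically consistent metrics suitable for optimization.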



Acknowledgements

This work was supported by the Spanish Ministry of Science and Technology under project PID2020-119478GB-I00, financed by MCIN/AEI/10.13039/501100011033. It was also partially supported by Contract UGR-AM OTRI-6717, Contract UGR-AM OTRI-5987, and project P18-FR-4961 (Proyectos I+D+i Junta de Andalucía 2018). The hardware used in this work is supported by the project with reference EQC2018-005084-P, granted by Spain's Ministry of Science and Innovation and the European Regional Development Fund (ERDF), and the project with reference SOMM17/6110/UGR, granted by the Andalusian “Consejería de Conocimiento, Investigación y Universidades” and the European Regional Development Fund (ERDF).

Author information


Corresponding author

Correspondence to Ivan Sevillano-Garcia.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Sevillano-Garcia, I., Luengo, J., Herrera, F. (2023). Optimizing LIME Explanations Using REVEL Metrics. In: García Bringas, P., et al. (eds.) Hybrid Artificial Intelligent Systems. HAIS 2023. Lecture Notes in Computer Science, vol 14001. Springer, Cham. https://doi.org/10.1007/978-3-031-40725-3_26

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-40725-3_26

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-40724-6

  • Online ISBN: 978-3-031-40725-3

  • eBook Packages: Computer Science, Computer Science (R0)
