From Explainable to Reliable Artificial Intelligence

  • Conference paper
  • In: Machine Learning and Knowledge Extraction (CD-MAKE 2021)

Abstract

Artificial Intelligence systems today interact ever less with humans, leading to autonomous decision-making processes. In this context, erroneous predictions can have severe consequences. As a solution, we design and develop a set of methods derived from eXplainable AI (XAI) models. The aim is to define “safety regions” in the feature space where the rate of false negatives (e.g., in a mobility scenario, a prediction of no collision when a collision actually occurs) tends to zero. We test and compare the proposed algorithms on two different datasets (physical fatigue and vehicle platooning) and reach quite different conclusions: the results depend strongly on the level of noise in the dataset rather than on the algorithms at hand.
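
The skope-rules package linked in the notes suggests how the rule-extraction side of this idea can be prototyped. The sketch below is a minimal illustration, not the authors' actual pipeline: it assumes a synthetic binary dataset (class 1 = safe, class 0 = unsafe, e.g. a collision), learns rules for the safe class with SkopeRules, and keeps a rule only if the region it defines contains zero false negatives on a held-out split. The dataset, thresholds, and variable names are all illustrative assumptions.

    # Minimal sketch of a rule-based "safety region" with skope-rules
    # (https://github.com/scikit-learn-contrib/skope-rules, note 2).
    # Illustrative assumptions throughout: the synthetic data, labels and
    # thresholds are not the paper's datasets or settings.
    import numpy as np
    import pandas as pd
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from skrules import SkopeRules

    # Stand-in for a dataset such as vehicle platooning:
    # y = 1 means "safe" (no collision), y = 0 means "unsafe" (collision).
    X, y = make_classification(n_samples=2000, n_features=4, random_state=0)
    feature_names = [f"f{i}" for i in range(X.shape[1])]
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3,
                                                random_state=0)

    # Learn interpretable if-then rules characterizing the safe class.
    clf = SkopeRules(feature_names=feature_names,
                     precision_min=0.95,  # keep only near-pure rules
                     recall_min=0.05,
                     n_estimators=30,
                     random_state=0)
    clf.fit(X_tr, y_tr)

    # Each rule defines a candidate safety region. Keep it only if no
    # unsafe validation sample (y = 0) falls inside it, i.e. predicting
    # "safe" inside the region yields zero false negatives.
    # Per the skope-rules docs, clf.rules_ holds tuples of the form
    # (rule_string, (precision, recall, n_trees)).
    df_val = pd.DataFrame(X_val, columns=feature_names)
    safety_rules = []
    for rule, (precision, recall, _) in clf.rules_:
        inside = df_val.query(rule).index  # points covered by the rule
        if np.sum(y_val[inside] == 0) == 0:
            safety_rules.append((rule, precision, recall))

    for rule, precision, recall in safety_rules:
        print(f"safety region: {rule} "
              f"(precision={precision:.2f}, recall={recall:.2f})")

Zero false negatives on a validation split does not, of course, guarantee zero false negatives in deployment; as the abstract notes, how tight such regions can be made depends mainly on how noisy the data are.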

Notes

  1. https://gdpr.eu/tag/gdpr/.

  2. https://github.com/scikit-learn-contrib/skope-rules.

  3. https://github.com/zahrame/FatigueManagement.github.io/tree/master/Data.

  4. https://github.com/mopamopa/Platooning.

Author information

Corresponding author

Correspondence to Sara Narteni.

Copyright information

© 2021 IFIP International Federation for Information Processing

About this paper

Cite this paper

Narteni, S., Ferretti, M., Orani, V., Vaccari, I., Cambiaso, E., Mongelli, M. (2021). From Explainable to Reliable Artificial Intelligence. In: Holzinger, A., Kieseberg, P., Tjoa, A.M., Weippl, E. (eds) Machine Learning and Knowledge Extraction. CD-MAKE 2021. Lecture Notes in Computer Science, vol. 12844. Springer, Cham. https://doi.org/10.1007/978-3-030-84060-0_17

  • DOI: https://doi.org/10.1007/978-3-030-84060-0_17

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-84059-4

  • Online ISBN: 978-3-030-84060-0

  • eBook Packages: Computer Science, Computer Science (R0)
