
On the evaluation of the symbolic knowledge extracted from black boxes

Original Research · AI and Ethics

Abstract

As opaque decision systems are increasingly adopted in almost every application field, their lack of transparency and human readability is a concrete concern for end-users. Among the existing proposals for associating human-interpretable knowledge with the accurate predictions of opaque models are rule extraction techniques, which are capable of distilling symbolic knowledge out of such models. However, the quantitative assessment of the quality of the extracted knowledge is still an open issue. For this reason, we provide here a first approach to measuring knowledge quality, encompassing several indicators and producing a compact score that reflects the readability, completeness and predictive performance associated with a symbolic knowledge representation. We also discuss the main criticalities of our proposal, related to readability assessment and evaluation, so as to direct future research efforts towards a more robust score formulation.
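The score formulation itself appears in the full article and is not reproduced in this excerpt. As a purely illustrative sketch of how the three indicators might be aggregated into one compact value, the snippet below assumes equal weights, indicators normalised to [0, 1], and the hypothetical name compact_score; none of these choices are taken from the paper.

```python
# Illustrative sketch only: the paper's actual score formulation is not
# shown in this excerpt. The function name, the [0, 1] normalisation and
# the equal weights are assumptions made for demonstration purposes.

def compact_score(readability: float,
                  completeness: float,
                  predictive_performance: float,
                  weights: tuple = (1 / 3, 1 / 3, 1 / 3)) -> float:
    """Aggregate three [0, 1]-normalised indicators into a single score."""
    indicators = (readability, completeness, predictive_performance)
    if not all(0.0 <= x <= 1.0 for x in indicators):
        raise ValueError("indicators must be normalised to [0, 1]")
    return sum(w * x for w, x in zip(weights, indicators))


# Example: very readable but incomplete knowledge with good predictions.
print(round(compact_score(0.9, 0.5, 0.8), 3))  # 0.733
```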


Data availability

Data are publicly available.

Notes

  1. We remark that \(pre_i\) and \(post_i\) denote the precondition and postcondition, respectively, associated with the i-th rule of the list (see the illustrative sketch after these notes).

  2. https://archive.ics.uci.edu/ml/datasets/ISTANBUL+STOCK+EXCHANGE.
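
Assuming the standard first-match semantics of ordered rule lists, the following minimal sketch illustrates the notation: the postcondition of the first rule whose precondition holds determines the prediction. The rules, feature names and thresholds below are invented for illustration and are not taken from the paper.

```python
# Hypothetical rule list illustrating the pre/post notation; the rules
# themselves are invented for this example and do not come from the paper.
rules = [
    (lambda x: x["age"] < 30, "low risk"),         # (pre_1, post_1)
    (lambda x: x["income"] > 50_000, "low risk"),  # (pre_2, post_2)
    (lambda x: True, "high risk"),                 # (pre_3, post_3): default
]

def predict(x: dict) -> str:
    # The first rule whose precondition pre_i holds fires, and its
    # postcondition post_i becomes the prediction.
    for pre, post in rules:
        if pre(x):
            return post

print(predict({"age": 45, "income": 30_000}))  # -> high risk
```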


Acknowledgements

This work has been partially supported by the EU ICT-48 2020 project TAILOR (No. 952215).

Author information


Corresponding author

Correspondence to Federico Sabbatini.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Sabbatini, F., Calegari, R. On the evaluation of the symbolic knowledge extracted from black boxes. AI Ethics 4, 65–74 (2024). https://doi.org/10.1007/s43681-023-00406-1

