Concept-wise granular computing for explainable artificial intelligence

  • Original Paper, published in Granular Computing

Abstract

Artificial neural networks offer great classification performance, but their internal models work as black boxes. This can prevent their outcomes from being employed in real-world decision-making processes, e.g., in smart manufacturing. To address this issue, a neural network should provide human-comprehensible explanations for its outcomes. This can be achieved by exploiting domain concepts and measuring their importance for the classification. To this aim, we implement an information granulation process via a neural network specifically trained to represent data instances featuring the same item of a concept close to each other, and instances featuring different items far apart. By combining the representations obtained for each concept, we derive the so-called conceptual space embedding. The classification is obtained by processing this embedding via a neural network classifier. The conceptual space embedding (i) organizes the data instances according to their concept-wise proximity, resulting in a very informative data representation; this translates into greater classification accuracy compared to a state-of-the-art concept-wise approach; and (ii) encodes each concept in a dedicated part of the embedding; this enables measuring the importance of a concept by manipulating the corresponding part. The proposed approach has been tested with real-world data from smart manufacturing.
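To make the pipeline concrete, the following PyTorch sketch shows one way to realize it: a separate encoder per domain concept, trained with a pairwise contrastive loss so that instances sharing a concept's item end up close and differing ones far apart, with the per-concept embeddings concatenated into a conceptual space embedding and fed to a classifier. The layer sizes, the number of concepts, and the contrastive loss (a stand-in for the paper's metric-learning objective) are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptEncoder(nn.Module):
    """Maps raw features to a space where instances sharing the same item
    of one concept lie close together and differing items lie far apart."""
    def __init__(self, in_dim: int, emb_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, emb_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(x), dim=-1)  # unit-norm embedding

def contrastive_loss(z, items, margin=0.5):
    """Stand-in metric-learning objective: attract same-item pairs,
    repel different-item pairs up to a margin."""
    d = torch.cdist(z, z)                              # pairwise distances
    same = (items[:, None] == items[None, :]).float()  # same-item mask
    pos = same * d.pow(2)
    neg = (1.0 - same) * F.relu(margin - d).pow(2)
    return (pos + neg).mean()

# One encoder per domain concept; concatenating their outputs yields the
# conceptual space embedding, which a plain classifier then processes.
# During training, each encoder would minimize contrastive_loss over the
# item labels of its own concept.
n_concepts, in_dim, emb_dim, n_classes = 3, 10, 16, 4
encoders = nn.ModuleList([ConceptEncoder(in_dim, emb_dim)
                          for _ in range(n_concepts)])
classifier = nn.Sequential(nn.Linear(n_concepts * emb_dim, 32),
                           nn.ReLU(), nn.Linear(32, n_classes))

x = torch.randn(8, in_dim)                             # a batch of 8 instances
conceptual_space = torch.cat([enc(x) for enc in encoders], dim=-1)
logits = classifier(conceptual_space)                  # class scores
```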

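Because each concept occupies a known slice of the embedding, its importance can be probed by manipulating that slice and observing the classifier's response. The sketch below zeroes out the slice as one possible manipulation; the paper's actual importance measure may differ.

```python
import torch

def concept_importance(embedding, classifier, concept_idx, emb_dim=16):
    """Importance of one concept as the mean shift in class probabilities
    when its slice of the conceptual space embedding is zeroed out."""
    with torch.no_grad():
        base = classifier(embedding).softmax(dim=-1)
        ablated = embedding.clone()
        ablated[:, concept_idx * emb_dim:(concept_idx + 1) * emb_dim] = 0.0
        perturbed = classifier(ablated).softmax(dim=-1)
    # A larger probability shift means the concept matters more.
    return (base - perturbed).abs().sum(dim=-1).mean().item()

# Rank all concepts for the batch built in the previous sketch:
# scores = [concept_importance(conceptual_space, classifier, k)
#           for k in range(3)]
```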

Data availability

Due to confidentiality agreements, supporting data can only be made available to bona fide researchers subject to a non-disclosure agreement.


Funding

This work has been partially supported by: (i) the company Koerber Tissue, in the project “Data-driven and Artificial Intelligence approaches for Industry 4.0”; (ii) the University of Pisa, in the project PRA_2022_101 “Decision Support Systems for territorial networks for managing ecosystem services”; (iii) the Tuscany Region, in the framework of the SecureB2C project, POR FESR 2014-2020, Project number 7429 31.05.2017; (iv) the Italian Ministry of University and Research (MUR), in the framework of the “Reasoning” project, PRIN 2020 LS Programme, Project number 2493 04-11-2021, and of the FISR 2019 Programme, under Grant No. 03602 of the project “SERICA”.

Author information


Contributions

All authors contributed to study conception and design, material preparation, data collection and analysis, as well as experiments and manuscript writing. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Guido Gagliardi.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest, and that an ethical statement is not applicable to this research.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Alfeo, A.L., Cimino, M.G.C.A. & Gagliardi, G. Concept-wise granular computing for explainable artificial intelligence. Granul. Comput. 8, 827–838 (2023). https://doi.org/10.1007/s41066-022-00357-8
