
Scalable Concept Extraction in Industry 4.0

  • Conference paper
  • Explainable Artificial Intelligence (xAI 2023)

Abstract

Industry 4.0 leverages digital technologies and machine learning techniques to connect and optimize manufacturing processes. Central to this idea is the ability to transform raw data into human-understandable knowledge for reliable data-driven decision-making. Convolutional Neural Networks (CNNs) have been instrumental in processing image data, yet their “black box” nature complicates the understanding of their prediction process. In this context, recent advances in the field of eXplainable Artificial Intelligence (XAI) have proposed the extraction and localization of concepts, that is, the visual cues that influence the prediction process of CNNs. This paper tackles the application of concept extraction (CE) methods to Industry 4.0 scenarios. To this end, we modify a recently developed technique, “Extracting Concepts with Local Aggregated Descriptors” (ECLAD), improving its scalability. Specifically, we propose a novel procedure for calculating concept importance, utilizing a wrapper function designed for CNNs. This process is aimed at decreasing the number of times each image needs to be evaluated. Subsequently, we demonstrate the potential of CE methods by applying them in three industrial use cases. We selected three representative use cases in the context of quality control for material design (tailored textiles), manufacturing (carbon fiber reinforcement), and maintenance (photovoltaic module inspection). In these examples, CE was able to successfully extract and locate concepts directly related to each task. That is, the visual cues related to each concept coincided with what human experts would use to perform the task themselves, even when the visual cues were entangled between multiple classes. Through empirical results, we show that CE can be applied for understanding CNNs in an industrial context, giving useful insights that can relate to domain knowledge.
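The ECLAD-style pipeline named in the abstract, aggregating multi-layer CNN activations into per-pixel descriptors and clustering them into concepts, can be sketched as follows. This is a minimal illustrative sketch in NumPy, not the authors' implementation: all function names, shapes, and the toy k-means routine are assumptions, and a scalable version would use mini-batch k-means over descriptors from many images.

```python
# Hedged sketch of ECLAD-style concept extraction (illustrative only).
import numpy as np

def local_aggregated_descriptors(activation_maps, out_hw):
    """Upscale each layer's activation maps to a common resolution and
    concatenate them channel-wise, yielding one descriptor per pixel."""
    H, W = out_hw
    upscaled = []
    for fmap in activation_maps:  # fmap: (channels, h, w)
        _, h, w = fmap.shape
        # nearest-neighbour upscaling via index repetition
        rows = np.arange(H) * h // H
        cols = np.arange(W) * w // W
        upscaled.append(fmap[:, rows][:, :, cols])
    stacked = np.concatenate(upscaled, axis=0)       # (total_channels, H, W)
    return stacked.reshape(stacked.shape[0], -1).T   # (H*W, total_channels)

def kmeans(X, k, iters=20, seed=0):
    """Toy k-means; a scalable variant would use mini-batch k-means."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Toy example: two "layers" of random activations for a single image.
rng = np.random.default_rng(42)
maps = [rng.normal(size=(4, 8, 8)), rng.normal(size=(8, 4, 4))]
descriptors = local_aggregated_descriptors(maps, out_hw=(16, 16))
labels, _ = kmeans(descriptors, k=3)
concept_mask = labels.reshape(16, 16)  # per-pixel concept assignment
```

In a real setting the activation maps would come from intermediate layers of the trained CNN (e.g. via forward hooks), and the resulting per-pixel cluster assignments localize each extracted concept in the input image.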


Notes

  1. The dataset is available at https://doi.org/10.5281/zenodo.7970596.

  2. The dataset is available at https://doi.org/10.5281/zenodo.7970490.


Acknowledgements

Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy-EXC-2023 Internet of Production-390621612.

Author information

Correspondence to Andrés Felipe Posada-Moreno.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Posada-Moreno, A.F., Müller, K., Brillowski, F., Solowjow, F., Gries, T., Trimpe, S. (2023). Scalable Concept Extraction in Industry 4.0. In: Longo, L. (eds) Explainable Artificial Intelligence. xAI 2023. Communications in Computer and Information Science, vol 1903. Springer, Cham. https://doi.org/10.1007/978-3-031-44070-0_26


  • DOI: https://doi.org/10.1007/978-3-031-44070-0_26


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44069-4

  • Online ISBN: 978-3-031-44070-0

  • eBook Packages: Computer Science; Computer Science (R0)
