A Novel Explainable Deep Learning Model with Class Specific Features

  • Conference paper
Image and Vision Computing (IVCNZ 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13836)

Abstract

The predictive accuracy of any machine learning model depends heavily on the features used to train it. It is therefore important to extract good discriminative features from the raw data, which is a challenging task. Deep learning models such as Convolutional Neural Networks (CNNs) can extract features from raw data automatically and also have excellent predictive capabilities. These two strengths have made CNNs very popular, especially in the field of computer vision. Despite this popularity, a CNN, like many other deep learning models, is a notoriously black-box model: its predictions cannot be explained in terms of the features that influenced them. In this work, we put forth an architecture that uses convolutional layers to extract features automatically and whose predictions can be explained in terms of the specific features/neurons that led to them. The proposed model achieves accuracy on par with state-of-the-art models, and its predictions are explainable with target-class-specific feature importance.
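To make the idea of class-specific feature importance concrete, the sketch below shows one plausible realisation in PyTorch: a small convolutional backbone extracts a feature vector, and a linear classification head makes each class logit a weighted sum of those features, so the product of a feature's activation and the predicted class's weight gives a per-class contribution score. The ExplainableCNN class name, layer sizes, and contribution rule are illustrative assumptions, not the architecture proposed in the paper.

# Minimal illustrative sketch (PyTorch), not the authors' exact architecture:
# a CNN backbone extracts a feature vector and a linear head scores each class,
# so feature-activation * class-weight gives a class-specific contribution.
import torch
import torch.nn as nn

class ExplainableCNN(nn.Module):  # hypothetical name, for illustration only
    def __init__(self, num_classes: int = 10, num_features: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(                    # automatic feature extraction
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, num_features, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                      # one activation per feature map
            nn.Flatten(),                                 # -> (batch, num_features)
        )
        self.head = nn.Linear(num_features, num_classes)  # linear, hence decomposable

    def forward(self, x):
        feats = self.backbone(x)
        return self.head(feats), feats

    @torch.no_grad()
    def explain(self, x, top_k: int = 5):
        # For each sample: predicted class and the features contributing most to its logit.
        logits, feats = self.forward(x)
        pred = logits.argmax(dim=1)
        contrib = feats * self.head.weight[pred]          # contribution of feature j to the predicted class
        return pred, contrib.topk(top_k, dim=1).indices

model = ExplainableCNN()
images = torch.randn(2, 3, 32, 32)                        # dummy RGB batch
pred_classes, top_features = model.explain(images)
print(pred_classes, top_features)

A readout of this kind reports, for each prediction, which features drove the chosen class; the head's bias term is ignored in the contribution scores because it is shared across all inputs for a class.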

Acknowledgments

This research was supported under the Australian Research Council's Discovery Projects funding scheme (project number DP210100640).

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Kuttichira, D.P., Azam, B., Verma, B., Rahman, A., Wang, L. (2023). A Novel Explainable Deep Learning Model with Class Specific Features. In: Yan, W.Q., Nguyen, M., Stommel, M. (eds) Image and Vision Computing. IVCNZ 2022. Lecture Notes in Computer Science, vol 13836. Springer, Cham. https://doi.org/10.1007/978-3-031-25825-1_5

  • DOI: https://doi.org/10.1007/978-3-031-25825-1_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-25824-4

  • Online ISBN: 978-3-031-25825-1

  • eBook Packages: Computer Science (R0)
