A Case Based Deep Neural Network Interpretability Framework and Its User Study

  • Rimmal Nadeem
  • Huijun Wu
  • Hye-young Paik
  • Chen Wang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11881)

Abstract

Despite the popularity of Deep Neural Networks (DNNs), their decision-making process is opaque to users, making it difficult to understand the behaviour of a model. We present the design of a Web-based DNN interpretability framework based on the core notions of case-based reasoning, where exemplars (e.g., data points considered similar to a chosen data point) are used to support effective interpretation. We demonstrate the framework via a Web-based tool called Deep Explorer (DeX) and present the results of user acceptance studies. The studies show that the tool is effective in helping users gain a better understanding of a DNN model's decision-making process, and that the case-based approach improves DNN interpretability.
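To make the exemplar idea concrete, the following is a minimal sketch of case-based retrieval in a DNN's embedding space. It assumes a trained model exposes an embed() function returning penultimate-layer activations; the function and variable names are illustrative and do not reflect DeX's actual implementation.

    # Minimal sketch (assumption: `embed(x)` gives penultimate-layer activations
    # of a trained classifier; names are illustrative, not DeX's API).
    import numpy as np

    def nearest_exemplars(query_emb: np.ndarray, train_embs: np.ndarray, k: int = 5):
        """Indices of the k training points closest to the query in embedding
        space (cosine similarity), shown to the user as explanatory 'cases'."""
        q = query_emb / np.linalg.norm(query_emb)
        X = train_embs / np.linalg.norm(train_embs, axis=1, keepdims=True)
        sims = X @ q                     # cosine similarity to every training point
        return np.argsort(-sims)[:k]     # most similar first

    # Hypothetical usage:
    #   embs = embed(train_images)                  # (N, d) training-set activations
    #   idx  = nearest_exemplars(embed(x), embs)
    #   show(train_images[idx], train_labels[idx])  # exemplars explaining the prediction for x

In this scheme, the retrieved exemplars and their labels act as the "cases" the user inspects to understand why the model classifies the query input as it does; any approximate nearest-neighbour index could replace the brute-force search for larger datasets.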

Keywords

Deep neural network interpretability · Visualisation · Decision boundaries · Interpretable machine learning

Notes

Acknowledgements

The authors thank all participants who took part in the application user study.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Rimmal Nadeem (1)
  • Huijun Wu (1, 2)
  • Hye-young Paik (1, 2), corresponding author
  • Chen Wang (2)
  1. School of Computer Science and Engineering, UNSW, Sydney, Australia
  2. Data61, CSIRO, Sydney, Australia