Principal Sensitivity Analysis

  • Sotetsu Koyamada
  • Masanori Koyama
  • Ken Nakae
  • Shin Ishii
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9077)

Abstract

We present a novel algorithm, Principal Sensitivity Analysis (PSA), to analyze the knowledge of classifiers obtained by supervised machine learning. In particular, we define the principal sensitivity map (PSM) as the direction in the input space to which the trained classifier is most sensitive, and use the analogously defined \(k\)-th PSMs to define a basis for the input space. We train neural networks on artificial and real data, apply the algorithm to the resulting classifiers, and visualize the PSMs to demonstrate PSA's ability to decompose the knowledge acquired by the trained classifiers.
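The abstract does not spell out how the PSMs are computed. A common construction in the sensitivity-analysis literature, and one plausible reading of the description above, is to take the \(k\)-th PSM to be the \(k\)-th eigenvector of the averaged outer product of the classifier's input gradients, so that the first PSM maximizes the mean squared directional derivative over the data. The NumPy sketch below illustrates that reading; the helper name grad_fn, the function principal_sensitivity_maps, and the gradient outer-product matrix are assumptions for illustration, not details confirmed by this page.

    import numpy as np

    def principal_sensitivity_maps(grad_fn, X):
        # grad_fn(x): gradient of a scalar classifier output (e.g. a class
        # log-probability) with respect to the input x (hypothetical helper).
        # X: array of shape (n_samples, n_features) of input data points.
        grads = np.stack([grad_fn(x) for x in X])   # (n_samples, n_features)
        K = grads.T @ grads / len(X)                # mean outer product of gradients
        eigvals, eigvecs = np.linalg.eigh(K)        # K is symmetric PSD; ascending order
        order = np.argsort(eigvals)[::-1]           # reorder from largest to smallest
        return eigvals[order], eigvecs[:, order].T  # row k-1 is the k-th PSM

Under this reading, the first returned row is the direction of maximum average sensitivity, the eigenvalues quantify how sensitive the classifier is along each PSM, and visualizing the leading rows as images would correspond to the PSM visualizations described in the abstract.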

Keywords

Sensitivity analysis · Sensitivity map · PCA · Dark knowledge · Knowledge decomposition

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Sotetsu Koyamada (1, 2)
  • Masanori Koyama (1)
  • Ken Nakae (1)
  • Shin Ishii (1, 2)
  1. Graduate School of Informatics, Kyoto University, Kyoto, Japan
  2. ATR Cognitive Mechanisms Laboratories, Kyoto, Japan
