Feature Salience for Neural Networks: Comparing Algorithms

  • Theodor Heinze
  • Martin von Löwis
  • Andreas Polze
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7666)

Abstract

One of the key problems in telemedicine is predicting changes in a patient’s health state from incoming, non-invasively measured vital data. Artificial Neural Networks (ANNs) are a powerful statistical modeling tool for this problem. Feature salience algorithms for ANNs quantify feature importance and help select relevant input variables. In our search for a reliable salience analysis algorithm, we found a relatively wide range of possible approaches, but also numerous methodological weaknesses in the corresponding evaluations. Perturb [11][7] and Connection Weight (CW) [1] are two of the most promising algorithms. In this paper, we propose an improvement to Connection Weight and evaluate it alongside Perturb and the original CW. We use three independent datasets with known feature salience rankings, as well as varying topologies and random feature rankings, to assess the suitability of the tested approaches for feature salience analysis in complex multi-layer perceptrons.
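For orientation, the two scoring rules named above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: `connection_weight` follows the CW rule from [1] (sum, over hidden units, of the input-hidden weight times the hidden-output weight), and `perturb` follows the general Perturb scheme from [11][7] (add noise to one input at a time and measure the increase in error). The function names, the Gaussian noise model, and the single-output assumption are illustrative choices.

```python
import numpy as np

def connection_weight(W_ih, W_ho):
    """Connection Weight (CW) salience, after Olden et al. [1].

    For each input i, sum over hidden units j of the product of the
    input-hidden weight w[i, j] and the hidden-output weight v[j].
    W_ih: (n_inputs, n_hidden); W_ho: (n_hidden,) for one output unit.
    """
    return W_ih @ W_ho

def perturb(predict, X, y, noise=0.1, seed=0):
    """Perturb salience: noise one input at a time, record the MSE increase.

    A larger increase means the network relies more on that input.
    `predict` is any callable mapping an (n, d) array to n outputs.
    """
    rng = np.random.default_rng(seed)
    base_mse = np.mean((predict(X) - y) ** 2)
    scores = np.zeros(X.shape[1])
    for i in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, i] = Xp[:, i] + rng.normal(0.0, noise, size=X.shape[0])
        scores[i] = np.mean((predict(Xp) - y) ** 2) - base_mse
    return scores

# Toy check: input 0 carries all the signal, input 1 none.
W_ih = np.array([[2.0, -1.0],   # input 0 -> two hidden units
                 [0.0,  0.0]])  # input 1 -> zero weights
W_ho = np.array([1.0, 1.0])
cw = connection_weight(W_ih, W_ho)  # -> [1.0, 0.0]

predict = lambda X: 3.0 * X[:, 0]          # model ignores input 1
X = np.linspace(-1.0, 1.0, 50).reshape(-1, 2)
scores = perturb(predict, X, predict(X))   # scores[0] > scores[1] == 0
```

Both rules return one score per input; the paper compares the feature *rankings* these scores induce against known ground-truth rankings, which is why sign and scale matter less than order.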

Keywords

Feature Salience · Sensitivity Analysis · Neural Networks · Machine Learning · Telemedicine

References

  1. Olden, J.D., Joy, M.K., Death, R.G.: An accurate comparison of methods for quantifying variable importance in artificial neural networks using simulated data. Ecological Modelling 178, 389–397 (2004)
  2. Farsiu, S., Elad, M., Milanfar, P.: Multi-frame demosaicing and super-resolution of color images. IEEE Trans. on Image Processing 15, 141–159 (2006)
  3. Lek, S., Delacoste, M., Baran, P., Dimopoulos, I., Lauga, J., Aulagnier, S.: Application of neural networks to modelling nonlinear relationships in ecology. Ecological Modelling 90, 39–52 (1996)
  4. Wang, W., Jones, P., Partridge, D.: Assessing the Impact of Input Features in a Feedforward Neural Network. Neural Computing & Applications 9, 101–112 (2000)
  5. Cui, X.R., Abbod, M.F., Liu, Q., Shieh, J.S., Chao, T.Y., Hsieh, C.Y., Yang, Y.C.: Ensembled artificial neural networks to predict the fitness score for body composition analysis. The Journal of Nutrition Health Aging 15, 341–348 (2011)
  6. Tchaban, T., Taylor, M.J., Griffin, J.P.: Establishing impacts of the inputs in a feedforward neural network. Neural Computing & Applications 7, 309–317 (1998)
  7. Bai, R., Jia, H., Cao, P.: Factor Sensitivity Analysis with Neural Network Simulation based on Perturbation System. Journal of Computers 6 (2011)
  8. Garson, G.D.: Interpreting neural-network connection weights. AI Expert 6, 46–51 (1991)
  9. Dimopoulos, I., Chronopoulos, J., Chronopoulou-Sereli, A., Lek, S.: Neural network models to study relationships between lead concentration in grasses and permanent urban descriptors in Athens city (Greece). Ecological Modelling 120, 157–165 (1999)
  10. Montaño, J.J., Palmer, A.: Numeric sensitivity analysis applied to feedforward neural networks. Neural Computing & Applications 12, 119–125 (2003)
  11. Gevrey, M., Dimopoulos, I., Lek, S.: Review and comparison of methods to study the contribution of variables in artificial neural network models. Ecological Modelling 160, 249–264 (2003)
  12. Cheng, A.Y., Yeung, D.S.: Sensitivity analysis of neocognitron. IEEE Transactions on Systems, Man and Cybernetics, Part C (Applications and Reviews) 29, 238–249 (1999)
  13. Dimopoulos, Y., Bourret, P., Lek, S.: Use of some sensitivity criteria for choosing networks with good generalization ability. Neural Processing Letters 2, 1–4 (1995)
  14. Prechelt, L.: Proben1. Technical Report 21/94 (1994)
  15. Riedmiller, M., Braun, H.: RPROP – A Fast Adaptive Learning Algorithm. In: Proc. of ISCIS VII (1992)
  16. Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature 323, 533–536 (1986), doi:10.1038/323533a0
  17. Heinze, T., von Löwis, M., Polze, A.: Joint multi-frame demosaicing and super-resolution with artificial neural networks. In: 2012 19th International Conference on Systems, Signals and Image Processing (IWSSIP), pp. 540–543 (2012)
  18. Olden, J.D., Jackson, D.A.: Illuminating the “black box”: a randomization approach for understanding variable contributions in artificial neural networks. Ecological Modelling 154, 135–150 (2002)
  19. UCI Machine Learning Repository, http://archive.ics.uci.edu/ml/datasets.html

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Theodor Heinze (1)
  • Martin von Löwis (1)
  • Andreas Polze (1)
  1. Hasso-Plattner-Institute for Software Systems Engineering, Germany
