Neural Processing Letters, Volume 10, Issue 3, pp 253–266

Sensitivity Analysis for Decision Boundaries

  • A.P. Engelbrecht

Abstract

A novel approach is presented to visualize and analyze decision boundaries for feedforward neural networks. First order sensitivity analysis of the neural network output function with respect to input perturbations is used to visualize the position of decision boundaries over input space. Similarly, sensitivity analysis of each hidden unit activation function reveals which boundary is implemented by which hidden unit. The paper shows how these sensitivity analysis models can be used to better understand the data being modelled, and to visually identify irrelevant input and hidden units.
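The idea in the abstract can be sketched numerically. The snippet below is an illustrative reconstruction, not the paper's implementation: for a small 2-input, 4-hidden-unit, 1-output logistic network with hypothetical weights, it computes the first-order derivative of the output with respect to the inputs via the chain rule, and the analogous derivative of each hidden activation. Decision boundaries lie where the output changes fastest, i.e. where the sensitivity norm peaks over input space; the per-hidden-unit sensitivities indicate which boundary each hidden unit implements.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical "trained" weights for a 2-4-1 network (illustrative values
# only, not taken from the paper).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 2)); b1 = rng.normal(size=4)   # input -> hidden
W2 = rng.normal(size=(1, 4)); b2 = rng.normal(size=1)   # hidden -> output

def output_sensitivity(x):
    """First-order derivative of the network output w.r.t. each input.

    With o = f(W2 h + b2), h = f(W1 x + b1), logistic f:
        do/dx_j = o(1-o) * sum_k W2[0,k] * h_k(1-h_k) * W1[k,j]
    """
    h = sigmoid(W1 @ x + b1)
    o = sigmoid(W2 @ h + b2)[0]
    return o * (1 - o) * (W2[0] * h * (1 - h)) @ W1  # shape (2,)

def hidden_sensitivity(x):
    """Derivative of each hidden activation w.r.t. each input.

    Row k gives dh_k/dx_j = h_k(1-h_k) * W1[k,j]; its magnitude over input
    space shows where hidden unit k's boundary lies.
    """
    h = sigmoid(W1 @ x + b1)
    return (h * (1 - h))[:, None] * W1  # shape (4, 2)

# Sample the sensitivity norm over a grid of input space; points where it
# is near its maximum approximate the decision boundary.
grid = [np.array([a, b]) for a in np.linspace(-2, 2, 21)
                         for b in np.linspace(-2, 2, 21)]
norms = np.array([np.linalg.norm(output_sensitivity(x)) for x in grid])
boundary = [x for x, n in zip(grid, norms) if n > 0.8 * norms.max()]
```

Plotting `boundary` (or the per-unit norms of `hidden_sensitivity`) over the grid gives the kind of visualization the paper describes; inputs or hidden units whose sensitivities stay near zero everywhere are candidates for pruning as irrelevant.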

Keywords: sensitivity analysis, decision boundary, feedforward neural network, feature extraction, pruning, irrelevant parameters



Copyright information

© Kluwer Academic Publishers 1999

Authors and Affiliations

  • A.P. Engelbrecht
  1. Department of Computer Science, University of Pretoria, Pretoria, South Africa
