Abstract
In this paper, we address architecture selection for a three-layer nonlinear feedforward network with linear output neurons and sigmoidal hidden neurons. The network is trained with the conventional backpropagation (BP) learning algorithm by minimizing the representation error. A new pruning algorithm based on statistical analysis quantifies the importance of each hidden unit. This is accomplished by introducing lateral connections among the neurons of the hidden layer and minimizing the variance of the hidden activations. Variance minimization decorrelates the hidden neurons, so the learning rule for the lateral connections becomes a variant of anti-Hebbian learning. The decorrelation process removes redundant information shared among the hidden neurons and therefore enables the network to capture the statistical properties of the required input-output mapping with a minimum number of hidden nodes. Hidden nodes that contribute least to error minimization at the output layer are then pruned. Experimental results show that the proposed pruning algorithm correctly prunes irrelevant hidden units.
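The three stages summarized above (BP training, anti-Hebbian lateral decorrelation of the hidden activations, and pruning of the least-contributing unit) can be sketched in a minimal NumPy toy. All sizes, learning rates, the lateral update form ΔL_ij ∝ −⟨h_i h_j⟩, and the contribution measure |w2_j|·std(h_j) are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: fit y = sin(x) with a 1-H-1 network
# (sigmoidal hidden layer, linear output), matching the paper's setting.
X = rng.uniform(-np.pi, np.pi, size=(256, 1))
Y = np.sin(X)

H_UNITS = 10
W1 = rng.normal(scale=1.0, size=(1, H_UNITS)); b1 = np.zeros(H_UNITS)
W2 = rng.normal(scale=0.1, size=(H_UNITS, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- Stage 1: conventional BP training (minimize representation error) ---
lr = 0.3
for _ in range(8000):
    H = sigmoid(X @ W1 + b1)
    E = (H @ W2 + b2) - Y
    dH = (E @ W2.T) * H * (1.0 - H)           # sigmoid derivative
    W2 -= lr * H.T @ E / len(X); b2 -= lr * E.mean(0)
    W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(0)

mse = float(np.mean(((sigmoid(X @ W1 + b1) @ W2 + b2) - Y) ** 2))

# --- Stage 2: anti-Hebbian lateral decorrelation of hidden activations ---
# Lateral weights L among hidden units are driven against pairwise
# correlations (a hypothetical form of the rule): Delta L_ij ~ -<h_i h_j>.
Hc = sigmoid(X @ W1 + b1)
Hc -= Hc.mean(0)                               # centre the activations
L = np.zeros((H_UNITS, H_UNITS))
for _ in range(300):
    Hd = Hc + Hc @ L                           # laterally adjusted activations
    C = Hd.T @ Hd / len(X)
    np.fill_diagonal(C, 0.0)                   # update off-diagonals only
    L -= 0.1 * C

def offdiag_norm(A):
    B = A.copy(); np.fill_diagonal(B, 0.0)
    return float(np.linalg.norm(B))

c0 = offdiag_norm(Hc.T @ Hc / len(X))          # redundancy before decorrelation
Hd = Hc + Hc @ L
c1 = offdiag_norm(Hd.T @ Hd / len(X))          # redundancy after decorrelation

# --- Stage 3: rank hidden units and mark the least important for pruning ---
contrib = np.abs(W2[:, 0]) * Hd.std(0)         # |output weight| x activation spread
prune = int(np.argmin(contrib))
print(f"MSE={mse:.4f}, offdiag {c0:.3f} -> {c1:.3f}, prune unit {prune}")
```

After decorrelation, the off-diagonal covariance shrinks, so each remaining unit carries non-redundant information and a small contribution score genuinely flags a removable unit rather than one whose role is duplicated elsewhere.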
© 2014 Springer International Publishing Switzerland
Cite this paper
Abbas, H.M. (2014). A Decorrelation Approach for Pruning of Multilayer Perceptron Networks. In: El Gayar, N., Schwenker, F., Suen, C. (eds) Artificial Neural Networks in Pattern Recognition. ANNPR 2014. Lecture Notes in Computer Science(), vol 8774. Springer, Cham. https://doi.org/10.1007/978-3-319-11656-3_2
Print ISBN: 978-3-319-11655-6
Online ISBN: 978-3-319-11656-3