Abstract
Optimizing the structure of neural networks is an essential step in discovering knowledge from data. This paper presents a new approach that identifies insignificant input and hidden neurons in order to determine the optimal structure of a feedforward neural network. The proposed pruning algorithm, called neural network pruning by significance (N2PS), is based on a new significance measure computed from the sigmoidal activation value of a node and the weights of all its outgoing connections. All nodes whose significance value falls below a threshold are considered insignificant and are eliminated. The advantages of this approach are illustrated on six real datasets: iris, breast-cancer, hepatitis, diabetes, ionosphere, and wave. The results show that the proposed algorithm prunes a significant number of neurons from the network models without sacrificing the networks' performance.
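The core idea can be sketched in a few lines. The abstract states only that a node's significance combines its sigmoidal activation with the weights of its outgoing connections; the sketch below assumes one plausible combination (sigmoid of the activation scaled by the total magnitude of outgoing weights) and a hypothetical threshold value, not the paper's exact N2PS formula.

```python
import numpy as np

def sigmoid(x):
    """Standard logistic activation."""
    return 1.0 / (1.0 + np.exp(-x))

def significance(activations, w_out):
    """Significance of each node: sigmoidal activation scaled by the
    summed magnitude of its outgoing weights (an assumed combination;
    the paper defines the exact N2PS measure)."""
    return sigmoid(activations) * np.abs(w_out).sum(axis=1)

def prune_mask(activations, w_out, threshold):
    """Boolean mask of nodes to KEEP: significance at or above threshold."""
    return significance(activations, w_out) >= threshold

# Toy example: 3 hidden nodes, each with 2 outgoing weights.
acts = np.array([2.0, 0.0, -3.0])
w = np.array([[0.5, -0.5],
              [0.3,  0.1],
              [0.0,  0.02]])
keep = prune_mask(acts, w, threshold=0.1)  # third node is pruned
```

A nearly inactive node with tiny outgoing weights (the third row above) contributes little to the next layer, so it falls below the threshold and is removed; the surviving structure is then retrained or fine-tuned as usual.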
Augasta, M.G., Kathirvalavakumar, T. A Novel Pruning Algorithm for Optimizing Feedforward Neural Network of Classification Problems. Neural Process Lett 34, 241–258 (2011). https://doi.org/10.1007/s11063-011-9196-7