A Novel Pruning Algorithm for Optimizing Feedforward Neural Network of Classification Problems


Abstract

Optimizing the structure of neural networks is an essential step for the discovery of knowledge from data. This paper presents a new approach that identifies insignificant input and hidden neurons in order to determine the optimum structure of a feedforward neural network. The proposed pruning algorithm, called neural network pruning by significance (N2PS), is based on a new significance measure computed from the sigmoidal activation value of a node and the weights of all its outgoing connections. Every node whose significance value falls below a threshold is considered insignificant and is eliminated. The advantages of this approach are illustrated on six real datasets, namely iris, breast-cancer, hepatitis, diabetes, ionosphere and wave. The results show that the proposed algorithm is efficient at pruning a significant number of neurons from neural network models without sacrificing the networks' performance.
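
The paper itself defines the exact N2PS significance measure; as a rough illustration of the general idea only (not the authors' formula), the sketch below scores each node by its mean sigmoidal activation over the data, scaled by the summed magnitude of its outgoing weights, and removes nodes that fall below a threshold. All function names, the combination rule, and the threshold value here are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    """Standard logistic activation."""
    return 1.0 / (1.0 + np.exp(-x))

def node_significance(pre_activations, w_out):
    """Illustrative significance score, NOT the paper's exact measure:
    mean sigmoidal activation of each node over the dataset, scaled by
    the total magnitude of the node's outgoing weights."""
    mean_act = sigmoid(pre_activations).mean(axis=0)  # shape: (n_nodes,)
    out_strength = np.abs(w_out).sum(axis=1)          # shape: (n_nodes,)
    return mean_act * out_strength

def prune_layer(pre_activations, w_out, threshold):
    """Return indices of nodes whose significance meets the threshold,
    plus the outgoing-weight matrix with insignificant rows removed."""
    s = node_significance(pre_activations, w_out)
    keep = np.flatnonzero(s >= threshold)
    return keep, w_out[keep, :]

# Toy usage: a hidden layer of 5 nodes feeding 3 output nodes.
rng = np.random.default_rng(0)
acts = rng.normal(size=(100, 5))   # pre-activations on 100 samples
w = rng.normal(size=(5, 3))        # outgoing weights (5 nodes -> 3 outputs)
keep, w_pruned = prune_layer(acts, w, threshold=0.5)
print("kept nodes:", keep.tolist())
```

In practice, a pruned network is typically retrained briefly after node removal so the remaining weights can compensate for the eliminated nodes.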

Author information

Correspondence to T. Kathirvalavakumar.


About this article

Cite this article

Augasta, M.G., Kathirvalavakumar, T. A Novel Pruning Algorithm for Optimizing Feedforward Neural Network of Classification Problems. Neural Process Lett 34, 241–258 (2011). https://doi.org/10.1007/s11063-011-9196-7
