Neural Processing Letters 34:241

A Novel Pruning Algorithm for Optimizing Feedforward Neural Network of Classification Problems

  • M. Gethsiyal Augasta
  • T. Kathirvalavakumar

Abstract

Optimizing the structure of neural networks is an essential step in the discovery of knowledge from data. This paper presents a new approach that identifies insignificant input and hidden neurons in order to determine the optimal structure of a feedforward neural network. The proposed pruning algorithm, called neural network pruning by significance (N2PS), is based on a new significance measure, calculated from the sigmoidal activation value of a node and the weights of all its outgoing connections. All nodes whose significance falls below a threshold are considered insignificant and eliminated. The advantages of this approach are illustrated by applying it to six real datasets, namely iris, breast-cancer, hepatitis, diabetes, ionosphere and wave. The results show that the proposed algorithm is effective at pruning a significant number of neurons from neural network models without sacrificing the networks' performance.
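
The pruning idea described above lends itself to a compact implementation. The sketch below is illustrative only: the abstract does not give the exact formula, so here a hidden node's significance is assumed to be the mean of its sigmoidal activations over the training set multiplied by the sum of the absolute values of its outgoing weights, and the names (`node_significance`, `prune_hidden_layer`) are hypothetical rather than the authors' own.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def node_significance(activations, outgoing_weights):
    """Score one node from its sigmoidal activation level and the
    magnitude of its outgoing connections. The combination used here
    (mean activation x summed |weights|) is an assumption; the paper
    defines its own significance measure."""
    return np.mean(np.abs(activations)) * np.sum(np.abs(outgoing_weights))

def prune_hidden_layer(X, W_in, b_in, W_out, threshold):
    """Remove hidden nodes whose significance falls below `threshold`.
    X: (n_samples, n_inputs) training data
    W_in: (n_inputs, n_hidden) input-to-hidden weights, b_in: (n_hidden,)
    W_out: (n_hidden, n_outputs) hidden-to-output weights"""
    H = sigmoid(X @ W_in + b_in)                 # hidden activations
    sig = np.array([node_significance(H[:, j], W_out[j])
                    for j in range(H.shape[1])])
    keep = sig >= threshold                      # keep significant nodes only
    return W_in[:, keep], b_in[keep], W_out[keep], sig
```

In a typical cycle the reduced network would then be retrained (the paper's keywords indicate backpropagation) and pruning repeated until performance starts to degrade; input neurons can be scored the same way through their outgoing weights to the hidden layer.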

Keywords

Input and hidden neurons pruning · Significance measure · Classification · Backpropagation training algorithm · Multilayer feedforward neural network · Data mining

Copyright information

© Springer Science+Business Media, LLC. 2011

Authors and Affiliations

  1. Department of Computer Applications, Sarah Tucker College, Tirunelveli, India
  2. Department of Computer Science, V.H.N.S.N. College, Virudhunagar, India
