Neural Processing Letters, Volume 13, Issue 1, pp 43–53

A Simple Neural Network Pruning Algorithm with Application to Filter Synthesis

  • Kenji Suzuki
  • Isao Horiba
  • Noboru Sugie


This paper describes an approach to synthesizing desired filters using a multilayer neural network (NN). To make the NN acquire the correct function of the target filter, a simple method for reducing the structures of both the input and hidden layers of the NN is proposed. In the proposed method, units are removed from the NN on the basis of the influence their removal has on the error, and the NN is then retrained to recover from the damage of the removal. These two steps are performed alternately until the structure is reduced. Experiments on synthesizing a known filter were performed; analysis of the NN obtained by the proposed method shows that it acquires the correct function of the target filter. A further experiment on synthesizing a filter for a real signal-processing task shows that the NN obtained by the proposed method is superior to one obtained by the conventional method in both filter performance and computational cost.

Keywords: generalization ability, image enhancement, neural filter, optimal structure, redundancy removal, right function, signal processing
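The pruning scheme summarized in the abstract (remove the unit whose deletion least increases the error, then retrain to recover from the removal, and alternate until the structure is reduced) can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the toy sine-fitting task, the single-hidden-layer network, and all function names (`train`, `remove_unit`, `prune`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task (an assumption): fit y = sin(x) on [-pi, pi].
X = rng.uniform(-np.pi, np.pi, size=(200, 1))
Y = np.sin(X)

def init_net(n_in, n_hidden, n_out):
    """One-hidden-layer tanh network with linear output."""
    return {
        "W1": rng.normal(0, 0.5, (n_in, n_hidden)),
        "b1": np.zeros(n_hidden),
        "W2": rng.normal(0, 0.5, (n_hidden, n_out)),
        "b2": np.zeros(n_out),
    }

def forward(net, X):
    H = np.tanh(X @ net["W1"] + net["b1"])
    return H, H @ net["W2"] + net["b2"]

def mse(net, X, Y):
    _, out = forward(net, X)
    return float(np.mean((out - Y) ** 2))

def train(net, X, Y, epochs=3000, lr=0.1):
    """Plain batch gradient descent (backpropagation)."""
    for _ in range(epochs):
        H, out = forward(net, X)
        d_out = 2.0 * (out - Y) / len(X)
        d_H = (d_out @ net["W2"].T) * (1.0 - H ** 2)
        net["W2"] -= lr * H.T @ d_out
        net["b2"] -= lr * d_out.sum(axis=0)
        net["W1"] -= lr * X.T @ d_H
        net["b1"] -= lr * d_H.sum(axis=0)
    return net

def remove_unit(net, j):
    """Return a copy of the network with hidden unit j deleted."""
    return {
        "W1": np.delete(net["W1"], j, axis=1),
        "b1": np.delete(net["b1"], j),
        "W2": np.delete(net["W2"], j, axis=0),
        "b2": net["b2"].copy(),
    }

def prune(net, X, Y, target_hidden):
    """Alternate: remove the unit whose deletion increases the
    error least, then retrain to recover from the removal."""
    while net["b1"].size > target_hidden:
        base = mse(net, X, Y)
        # Influence of removing each unit on the error.
        costs = [mse(remove_unit(net, j), X, Y) - base
                 for j in range(net["b1"].size)]
        net = remove_unit(net, int(np.argmin(costs)))
        net = train(net, X, Y, epochs=1000)
    return net
```

For example, a 12-hidden-unit network trained on the toy task and pruned down to 4 units with `prune(net, X, Y, target_hidden=4)` keeps a low fitting error because each removal is followed by retraining. Note that this sketch only prunes the hidden layer; the paper's method also reduces the input layer.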





Copyright information

© Kluwer Academic Publishers 2001

Authors and Affiliations

  • Kenji Suzuki (1)
  • Isao Horiba (1)
  • Noboru Sugie (2)
  1. Faculty of Information Science and Technology, Aichi Prefectural University, Nagakute, Aichi, Japan
  2. Faculty of Science and Technology, Meijo University, Nagoya, Japan
