Neural Computing & Applications, Volume 2, Issue 2, pp 61–68

A neural network construction algorithm with application to image compression

  • R. Setiono
  • G. Lu

Abstract

We propose an algorithm for constructing a feedforward neural network with a single hidden layer. This algorithm is applied to image compression and is shown to give satisfactory results. The construction algorithm begins with a simple network topology containing a single unit in the hidden layer. An optimal set of weights for this network is obtained by applying a variant of the quasi-Newton method for unconstrained optimisation. If this set of weights does not give a network with the desired accuracy, then one more unit is added to the hidden layer and the network is retrained. This process is repeated until the desired network is obtained. We show that each addition of a hidden unit to the network is guaranteed to increase the signal-to-noise ratio of the compressed image.
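The constructive loop described in the abstract can be sketched as follows. This is an illustrative reimplementation, not the authors' code: plain gradient descent stands in for their quasi-Newton training step, and the function name, learning rate, epoch count, and tanh/linear autoencoder architecture are all assumptions made for the sketch.

```python
import numpy as np

def train_growing_autoencoder(X, target_mse=1e-3, max_hidden=16,
                              lr=0.5, epochs=3000, seed=0):
    """Grow a single-hidden-layer autoencoder one unit at a time.

    Starts with one hidden unit, trains, and keeps appending units
    until the reconstruction MSE meets `target_mse` (or `max_hidden`
    is reached). Plain gradient descent is used here as a stand-in
    for the paper's quasi-Newton weight optimisation.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    h = 1
    W1 = rng.normal(scale=0.1, size=(d, h))   # input  -> hidden
    W2 = rng.normal(scale=0.1, size=(h, d))   # hidden -> output
    while True:
        for _ in range(epochs):
            H = np.tanh(X @ W1)               # hidden activations
            E = H @ W2 - X                    # reconstruction error
            gW2 = H.T @ E / n                 # output-layer gradient
            gW1 = X.T @ ((E @ W2.T) * (1.0 - H * H)) / n  # backprop
            W2 -= lr * gW2
            W1 -= lr * gW1
        mse = float(np.mean((np.tanh(X @ W1) @ W2 - X) ** 2))
        if mse <= target_mse or h >= max_hidden:
            return W1, W2, h, mse
        # Grow the network: keep the trained weights and append one
        # freshly initialised hidden unit, then retrain.
        W1 = np.hstack([W1, rng.normal(scale=0.1, size=(d, 1))])
        W2 = np.vstack([W2, rng.normal(scale=0.1, size=(1, d))])
        h += 1
```

For image compression, each row of `X` would be a flattened pixel block; the hidden-layer size `h` reached at termination determines the compression ratio, and the abstract's guarantee corresponds to the reconstruction error (and hence the signal-to-noise ratio) improving monotonically as units are added.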

Keywords

Backpropagation method, Feedforward neural network, Image compression, Neural network construction algorithm, Quasi-Newton method



Copyright information

© Springer-Verlag London Limited 1994

Authors and Affiliations

  • R. Setiono (1)
  • G. Lu (1)

  1. Department of Information Systems and Computer Science, National University of Singapore, Republic of Singapore
