An Individual Adaptive Gain Parameter Backpropagation Algorithm for Complex-Valued Neural Networks

  • Songsong Li
  • Toshimi Okada
  • Xiaoming Chen
  • Zheng Tang
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3971)


The complex-valued backpropagation algorithm has been widely used. However, learning often becomes trapped in local minima. We propose an individual adaptive gain parameter backpropagation algorithm for complex-valued neural networks to address this problem: the gain parameter of the hidden-layer sigmoid function is specified individually for each learning pattern. The proposed algorithm is tested on a benchmark problem, and the simulation results show that it is capable of preventing the learning of a complex-valued network from getting stuck in local minima.
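The abstract does not give the exact adaptation rule, but the core idea can be illustrated with a split-complex sigmoid whose steepness is controlled by a gain parameter held per training pattern. The following is a minimal sketch under that assumption; the function name, the pattern values, and the individual gain values are all hypothetical, not taken from the paper:

```python
import math

def split_sigmoid(z, gain):
    """Split-complex sigmoid with an adjustable gain parameter.

    Real and imaginary parts are squashed independently:
        f(z) = sigma(gain * Re z) + i * sigma(gain * Im z)
    A larger gain steepens the sigmoid. The paper's algorithm keeps
    one such gain per learning pattern for the hidden layer.
    """
    s = lambda x: 1.0 / (1.0 + math.exp(-gain * x))
    return complex(s(z.real), s(z.imag))

# Hypothetical training patterns and their individual hidden-layer gains.
patterns = [1 + 1j, -1 + 0.5j, 0.2 - 2j]
gains = [1.0, 2.0, 0.5]

# Hidden-layer activation computed with each pattern's own gain.
hidden = [split_sigmoid(z, g) for z, g in zip(patterns, gains)]
```

Because each pattern carries its own gain, the effective slope of the hidden-layer nonlinearity can differ across patterns, which is the mechanism the abstract credits with escaping local minima.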


Keywords: Hidden Layer · Output Layer · Sigmoid Function · Learning Pattern · Gain Parameter





Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Songsong Li (1)
  • Toshimi Okada (1)
  • Xiaoming Chen (2, 3)
  • Zheng Tang (2)
  1. Faculty of Engineering, Toyama Prefectural University, Toyama, Japan
  2. Faculty of Engineering, Toyama University, Toyama, Japan
  3. Faculty of Information Science and Engineering, Shenyang University of Technology, Shenyang, China
