
An Efficient Algorithm for Complex-Valued Neural Networks Through Training Input Weights

  • Qin Liu
  • Zhaoyang Sang
  • Hua Chen
  • Jian Wang
  • Huaqing Zhang (email author)
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10637)

Abstract

The complex-valued neural network is a type of neural network extended from the real number domain to the complex number domain. The fully complex extreme learning machine (CELM) is an efficient algorithm that converges faster than the common complex backpropagation (CBP) neural network; however, it needs more hidden neurons to reach competitive performance. Recently, an efficient learning algorithm, called the upper-layer-solution-aware algorithm (USA), was proposed for single-hidden-layer feed-forward neural networks. Motivated by USA, this paper proposes an efficient algorithm (GGICNN) that trains split complex-valued neural networks through training the input weights. A detailed illustrative experiment shows that, compared with CELM and CBP, the proposed algorithm achieves better generalization ability with a more compact architecture.
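As context for the comparison above, the CELM baseline referenced in the abstract can be sketched in a few lines: complex input weights and biases are drawn randomly and kept fixed, and only the output weights are solved analytically via the Moore-Penrose pseudoinverse. This is a minimal illustrative sketch, not the paper's implementation; all names and the toy task are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def celm_fit(X, T, n_hidden=50):
    """Fit a fully complex ELM: random fixed input layer, least-squares output layer."""
    n_in = X.shape[1]
    # Random complex input weights and biases, fixed after initialization.
    W = rng.standard_normal((n_in, n_hidden)) + 1j * rng.standard_normal((n_in, n_hidden))
    b = rng.standard_normal(n_hidden) + 1j * rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)          # fully complex activation (tanh extends to C)
    beta = np.linalg.pinv(H) @ T    # output weights by complex pseudoinverse
    return W, b, beta

def celm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy complex-valued regression task (illustrative only).
X = rng.standard_normal((200, 2)) + 1j * rng.standard_normal((200, 2))
T = (X[:, 0] * np.conj(X[:, 1]))[:, None]
W, b, beta = celm_fit(X, T, n_hidden=50)
err = np.mean(np.abs(celm_predict(X, W, b, beta) - T) ** 2)
```

Because only `beta` is learned, training reduces to one linear solve; the price, as the abstract notes, is that many hidden neurons may be needed, which is what motivates training the input weights instead.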

Keywords

Complex-valued neural networks · Extreme learning machine · Complex backpropagation · Gradient

Notes

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (No. 61305075), the China Postdoctoral Science Foundation (No. 2012M520624), the Natural Science Foundation of Shandong Province (No. ZR2013FQ004, ZR2013DM015, ZR2015AL014), the Specialized Research Fund for the Doctoral Program of Higher Education of China (No. 20130133120014) and the Fundamental Research Funds for the Central Universities (Nos. 14CX05042A, 15CX05053A, 15CX02079A, 15CX08011A, 15CX02064A).

References

  1. Hirose, A.: Complex-Valued Neural Networks. World Scientific, Singapore (2003)
  2. Cha, I., Kassam, S.A.: Channel equalization using adaptive complex radial basis function networks. IEEE J. Sel. Areas Commun. 13, 122–131 (1995)
  3. Aizenberg, I.: Complex-Valued Neural Networks with Multi-valued Neurons. Springer, Berlin (2011). doi:10.1007/978-3-642-20353-4
  4. Serre, D.: Matrices: Theory and Applications. Springer, New York (2002). doi:10.1007/978-1-4419-7683-3
  5. Rakkiyappan, R., Velmurugan, G., Cao, J.: Stability analysis of fractional-order complex-valued neural networks with time delays. Chaos Solitons Fractals 78, 297–316 (2015)
  6. Leung, H., Haykin, S.: The complex backpropagation algorithm. IEEE Trans. Signal Process. 39, 2101–2104 (1991)
  7. Nitta, T.: An extension of the back-propagation algorithm to complex numbers. Neural Netw. 10, 1391–1415 (1997)
  8. Zhang, H., Zhang, C., Wu, W.: Convergence of batch split-complex backpropagation algorithm for complex-valued neural networks. Discret. Dyn. Nat. Soc. 2009, 332–337 (2009)
  9. Zhang, H., Xu, D., Zhang, Y.: Boundedness and convergence of split-complex back-propagation algorithm with momentum and penalty. Neural Process. Lett. 39, 297–307 (2014)
  10. Zhang, H., Liu, X., Xu, D., Zhang, Y.: Convergence analysis of fully complex backpropagation algorithm based on Wirtinger calculus. Cogn. Neurodyn. 46, 5789–5796 (2014)
  11. Zhang, H., Mandic, D.P.: Is a complex-valued stepsize advantageous in complex-valued gradient learning algorithms? IEEE Trans. Neural Netw. Learn. Syst. 27, 1–6 (2015)
  12. Xu, D., Dong, J., Zhang, H.: Deterministic convergence of Wirtinger-gradient methods for complex-valued neural networks. Neural Process. Lett. 1–12 (2016)
  13. Rao, C.R., Mitra, S.K.: Generalized Inverse of Matrices and Its Applications. Wiley, New York (1971)
  14. Huang, G.B., Zhu, Q.Y., Siew, C.K.: Extreme learning machine: theory and applications. Neurocomputing 70, 489–501 (2006)
  15. Yu, D., Deng, L.: Efficient and effective algorithms for training single-hidden-layer neural networks. Pattern Recogn. Lett. 33, 554–558 (2012)
  16. Li, M.B., Huang, G.B., Saratchandran, P., Sundararajan, N.: Fully complex extreme learning machine. Neurocomputing 68, 306–314 (2005)
  17. Shukla, S., Yadav, R.N.: Regularized weighted circular complex-valued extreme learning machine for imbalanced learning. IEEE Access 3048–3057 (2016)
  18. Kim, T., Adalı, T.: Approximation by fully complex multilayer perceptrons. Neural Comput. 15, 1641–1666 (2003)
  19. Suresh, S., Savitha, R., Sundararajan, N.: Supervised Learning with Complex-Valued Neural Networks. Studies in Computational Intelligence. Springer, Heidelberg (2013). doi:10.1007/978-3-642-29491-4

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Qin Liu (1)
  • Zhaoyang Sang (1)
  • Hua Chen (1)
  • Jian Wang (1)
  • Huaqing Zhang (1) (email author)

  1. College of Science, China University of Petroleum, Qingdao, China
