Two Matrix-Type Projection Neural Networks for Solving Matrix-Valued Optimization Problems

  • Lingmei Huang
  • Youshen Xia (corresponding author)
  • Songchuan Zhang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11302)


In recent years, matrix-valued optimization algorithms have been studied as a way to improve the computational performance of vector-valued optimization algorithms. This paper presents two matrix-type projection neural networks, a continuous-time model and a discrete-time model, for solving matrix-valued optimization problems. The proposed continuous-time neural network may be viewed as a significant extension of the vector-type double projection neural network. More importantly, the proposed discrete-time projection neural network can be implemented in parallel over the matrix state space. Under a pseudo-monotonicity condition and a Lipschitz continuity condition, the two proposed matrix-type projection neural networks are guaranteed to be globally convergent to the optimal solution. Finally, numerical examples show that the two proposed matrix-type projection neural networks are much faster than the vector-type projection neural network.
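To illustrate the matrix-type, discrete-time idea the abstract describes, the following is a minimal sketch of a projected-gradient iteration that updates an entire matrix state at once, X_{k+1} = P_Ω(X_k − α ∇f(X_k)). This is not the paper's exact model: the objective f(X) = ½‖AX − B‖_F², the entrywise box constraint set Ω, and the step size α are all illustrative assumptions chosen so the example is self-contained.

```python
import numpy as np

def project_box(X, low, high):
    """Entrywise projection onto the box Omega = {X : low <= X <= high}."""
    return np.clip(X, low, high)

def matrix_projection_iteration(A, B, low, high, iters=2000):
    """Discrete-time projection iteration X_{k+1} = P_Omega(X_k - alpha*grad)
    for the illustrative objective f(X) = 0.5*||A X - B||_F^2 over a box set.

    The gradient A^T (A X - B) is computed for the whole matrix state in one
    shot, which is the parallel, matrix-type flavour described above.
    """
    alpha = 1.0 / np.linalg.norm(A, 2) ** 2   # step below 2/L, L = ||A||_2^2
    X = np.zeros((A.shape[1], B.shape[1]))    # matrix state variable
    for _ in range(iters):
        grad = A.T @ (A @ X - B)              # matrix-valued gradient
        X = project_box(X - alpha * grad, low, high)
    return X

# Hypothetical example data: the unconstrained minimizer lies inside the box,
# so the iteration should recover X* = A^{-1} B.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
B = np.array([[1.0, -0.4], [0.3, 0.2]])
X = matrix_projection_iteration(A, B, low=-1.0, high=1.0)
```

A vector-type network would instead vectorize X and work with the Kronecker-structured matrix I ⊗ A, which is far larger; keeping the state in matrix form avoids that blow-up and is the source of the speed advantage the abstract reports.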


Matrix-type neural network · Matrix-valued optimization · Global convergence · Computation time



Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. College of Mathematics and Computer Science, Fuzhou University, Fuzhou, China
  2. College of Mathematics and Computer Science, Wuyi University, Nanping, China
