
Neural Computing and Applications, Volume 31, Issue 7, pp 2905–2920

KKT condition-based smoothing recurrent neural network for nonsmooth nonconvex optimization in compressed sensing

  • Dan Wang
  • Zhuhong Zhang
Original Article

Abstract

This work develops a smoothing recurrent neural network (SRNN) based on a smoothing approximation technique and an equivalent form of the Karush–Kuhn–Tucker (KKT) condition. The network is designed to handle the \(L_0\hbox {-norm}\) minimization model arising in compressed sensing, after that model is replaced with a nonconvex nonsmooth approximation. The existence, uniqueness and limit behavior of the network's solutions are studied by means of standard mathematical tools. Several kinds of nonconvex approximation functions are examined in order to decide which of them is best suited to SRNN for sparse signal recovery under different kinds of sensing matrices. Comparative experiments validate that, among the chosen approximation functions, the transformed L1 function (TL1), the logarithm function (Log) and the arctangent penalty function are effective for sparse recovery; SRNN-TL1 is robust and insensitive to the coherence of the sensing matrix, and it is competitive against several existing discrete numerical algorithms and neural network methods for compressed sensing problems.
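As a rough illustration (not the paper's exact formulations), the three nonconvex surrogates named in the abstract can be sketched as separable penalties that approximate the count of nonzero entries; the parameter names `a` and `eps` below are assumptions for this sketch:

```python
import math

def tl1(x, a=1.0):
    """Transformed L1 (TL1): sum of (a+1)|x_i| / (a + |x_i|).
    Each term is 0 at x_i = 0 and saturates as |x_i| grows."""
    return sum((a + 1.0) * abs(xi) / (a + abs(xi)) for xi in x)

def log_penalty(x, eps=0.1):
    """Logarithm (Log) penalty: sum of log(1 + |x_i|/eps)."""
    return sum(math.log(1.0 + abs(xi) / eps) for xi in x)

def atan_penalty(x, a=1.0):
    """Arctangent penalty: sum of atan(a|x_i|)."""
    return sum(math.atan(a * abs(xi)) for xi in x)
```

All three vanish on zero entries and grow sublinearly on nonzero ones, which is what makes them tighter surrogates for the \(L_0\) norm than the convex \(L_1\) norm.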

Keywords

Compressed sensing · Nonsmooth and nonconvex approximation · Smoothing approximation · Neural networks · KKT condition
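The paper's exact SRNN dynamics are not reproduced here, but the general idea of continuous-time sparse recovery can be sketched as a gradient flow on a smoothed objective \(\tfrac12\|Ax-b\|_2^2 + \lambda\, p(x)\), integrated with forward Euler; the smoothed log-type penalty \(p(x)=\sum_i \log(1+x_i^2/\varepsilon)\), the step size `dt`, and the weight `lam` below are illustrative assumptions, not the paper's settings:

```python
def recover_gradient_flow(A, b, lam=0.01, eps=0.1, dt=0.01, steps=5000):
    """Forward-Euler integration of the gradient flow
        dx/dt = -A^T (A x - b) - lam * grad p(x),
    with smoothed penalty p(x) = sum_i log(1 + x_i^2 / eps)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(steps):
        # residual r = A x - b
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        for j in range(n):
            grad_fit = sum(A[i][j] * r[i] for i in range(m))
            # d/dx log(1 + x^2/eps) = 2x / (eps + x^2)
            grad_pen = 2.0 * x[j] / (eps + x[j] * x[j])
            x[j] -= dt * (grad_fit + lam * grad_pen)
    return x
```

Because the penalty's gradient vanishes at zero, coordinates that start at zero and receive no data-fit pressure stay exactly zero, which is the mechanism by which such flows favor sparse solutions.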

Notes

Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grant No. 61563009, the Science and Technology Foundation of Guizhou Province (No. LKQS201314) and the Foundation of Qiannan Normal University for Nationalities (No. 2014ZCSX18).

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.


Copyright information

© The Natural Computing Applications Forum 2017

Authors and Affiliations

  1. College of Computer Science and Technology, Guizhou University, Guiyang, People’s Republic of China
  2. College of Big Data and Information Engineering, Guizhou University, Guiyang, People’s Republic of China
  3. School of Mathematics and Statistics, Qiannan Normal University for Nationalities, Duyun, People’s Republic of China
