
Neural Computing and Applications, Volume 29, Issue 9, pp 389–400

An analog neural network approach for the least absolute shrinkage and selection operator problem

  • Hao Wang
  • Ching Man Lee
  • Ruibin Feng
  • Chi Sing Leung
Special Issue: ICONIP 2015

Abstract

This paper addresses analog optimization for non-differentiable functions. The Lagrange programming neural network (LPNN) approach provides a systematic way to build analog neural networks for constrained optimization problems. However, it cannot handle non-differentiable functions. In compressive sampling, one of the key optimization problems is the least absolute shrinkage and selection operator (LASSO), in which the constraint is non-differentiable. This paper adopts the hidden-state concept from the local competition algorithm to formulate an analog model for the LASSO problem, thereby overcoming the non-differentiability limitation of the LPNN approach. Under some conditions, the equilibrium points of the network correspond to the optimal solution of the LASSO, and we prove that these equilibrium points are stable. Simulation results illustrate that the proposed analog model and the traditional digital method have similar mean squared error performance.
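
To make the setting concrete, the following is a minimal numerical sketch of the dynamics of the local competition algorithm (also known as the locally competitive algorithm, LCA), from which the hidden-state concept is borrowed. Rather than the authors' LPNN-based model for the constrained LASSO, it simulates the standard LCA on the closely related penalized form, min_a 0.5‖y − Φa‖² + λ‖a‖₁, by Euler integration; the function names and parameter values below are illustrative assumptions, not the model proposed in the paper.

    import numpy as np

    def soft_threshold(u, lam):
        # Activation mapping hidden states u to outputs a = T_lam(u).
        return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

    def lca_lasso(Phi, y, lam=0.01, dt=0.01, steps=5000):
        # Euler integration of the LCA dynamics for
        #   min_a 0.5*||y - Phi a||^2 + lam*||a||_1
        # Hidden-state ODE: du/dt = Phi^T y - u - (Phi^T Phi - I) a.
        n = Phi.shape[1]
        u = np.zeros(n)                 # hidden (internal) neuron states
        G = Phi.T @ Phi - np.eye(n)     # lateral-inhibition weights
        b = Phi.T @ y                   # constant driving input
        for _ in range(steps):
            a = soft_threshold(u, lam)
            u += dt * (b - u - G @ a)
        return soft_threshold(u, lam)

    # Usage: recover a sparse vector from compressed measurements.
    rng = np.random.default_rng(0)
    n, m, k = 100, 40, 5                # signal length, measurements, sparsity
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    y = Phi @ x_true
    x_hat = lca_lasso(Phi, y)
    print("MSE:", np.mean((x_hat - x_true) ** 2))

In the LCA view, only the hidden state u evolves continuously, while the output a is obtained through a soft-threshold activation; this is the device that sidesteps the non-differentiability of the ℓ1 term in an analog circuit.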

Keywords

Analog neural network · Neural dynamics · LPNN · Local competition algorithm

Notes

Acknowledgements

This work was partially supported by the Research Grants Council, Hong Kong, under Grant Number CityU 115612.

Compliance with ethical standards

Conflict of interest

The authors declare that they have no commercial or associative interest that represents a conflict of interest in connection with the submitted work.


Copyright information

© The Natural Computing Applications Forum 2017

Authors and Affiliations

  • Hao Wang¹
  • Ching Man Lee¹
  • Ruibin Feng¹
  • Chi Sing Leung¹

  1. Department of Electronic Engineering, City University of Hong Kong, Kowloon, Hong Kong
