
Sparse Signal Recovery with Exponential-Family Noise

  • Irina Rish
  • Genady Grabarnik
Chapter
Part of the Signals and Communication Technology book series (SCT)

Abstract

The problem of sparse signal recovery from a relatively small number of noisy measurements has been studied extensively in the recent compressed sensing literature. Typically, the signal reconstruction problem is formulated as \(l_1\)-regularized linear regression. From a statistical point of view, this problem is equivalent to maximum a posteriori probability (MAP) parameter estimation with a Laplace prior on the vector of parameters (i.e., the signal) and linear measurements corrupted by Gaussian noise. Classical results in compressed sensing (e.g., [7]) state sufficient conditions for accurate recovery of noisy signals in such a linear-regression setting. A natural question to ask is whether one can accurately recover sparse signals under different noise assumptions. Herein, we extend the results of [7] to the general case of exponential-family noise, which includes Gaussian noise as a particular case; the recovery problem is then formulated as \(l_1\)-regularized Generalized Linear Model (GLM) regression. We show that, under standard restricted isometry property (RIP) assumptions on the design matrix, \(l_1\)-minimization can provide stable recovery of a sparse signal in the presence of exponential-family noise, and we state some sufficient conditions on the noise distribution that guarantee such recovery.
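To make the statistical equivalence concrete: with linear measurements \(y = Ax + \epsilon\), Gaussian noise \(\epsilon \sim \mathcal{N}(0, \sigma^2 I)\), and a Laplace prior \(p(x) \propto \exp(-\alpha\|x\|_1)\), MAP estimation minimizes the negative log-posterior

\[
\hat{x} = \arg\min_x \; \frac{1}{2\sigma^2}\,\|y - Ax\|_2^2 + \alpha\|x\|_1,
\]

which is exactly \(l_1\)-regularized linear regression (the Lasso). Replacing the Gaussian log-likelihood with a general exponential-family log-likelihood yields the \(l_1\)-regularized GLM regression problem studied in this chapter.

The sketch below is our illustration, not code from the chapter: it solves the \(l_1\)-regularized GLM objective by proximal gradient descent (ISTA), where the mean function `mu` is the inverse of the canonical link (the identity for Gaussian noise; `np.exp` would be the Poisson analogue). All names, dimensions, and parameter values are assumptions chosen for the demo.

```python
# Minimal sketch of l1-regularized GLM regression for sparse recovery.
# Assumptions: canonical link, unit dispersion, and a step size below 1/L,
# where L bounds the Lipschitz constant of the likelihood gradient.
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (soft thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_glm_ista(A, y, mu, lam, step=0.05, n_iter=3000):
    # ISTA for:  min_w  NLL(w) + lam * ||w||_1,
    # where the GLM negative log-likelihood gradient is A^T (mu(A w) - y).
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (mu(A @ w) - y)
        w = soft_threshold(w - step * grad, step * lam)
    return w

rng = np.random.default_rng(0)
n, m, k = 64, 256, 5                          # measurements, signal length, sparsity
A = rng.standard_normal((n, m)) / np.sqrt(n)  # random design (RIP holds w.h.p.)
x = np.zeros(m)
x[rng.choice(m, size=k, replace=False)] = rng.standard_normal(k)

# Gaussian noise: the mean function is the identity, recovering the Lasso.
y = A @ x + 0.01 * rng.standard_normal(n)
x_hat = l1_glm_ista(A, y, mu=lambda z: z, lam=0.05)
print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

Swapping `mu` (and the noise model used to generate `y`) extends the same routine to other exponential-family members, e.g., `mu=np.exp` for Poisson counts, in which case the smooth term becomes the \(l_1\)-penalized Poisson negative log-likelihood rather than a least-squares fit.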

Keywords

Exponential family · Sparse signal · Noise distribution · Restricted isometry property · Generalized linear model regression

References

  1. Banerjee A, Merugu S, Dhillon IS, Ghosh J (2005) Clustering with Bregman divergences. J Mach Learn Res 6:1705–1749
  2. Banerjee A, Merugu S, Dhillon I, Ghosh J (2004) Clustering with Bregman divergences. In: Proceedings of the fourth SIAM international conference on data mining, pp 234–245
  3. Beygelzimer A, Kephart J, Rish I (2007) Evaluation of optimization methods for network bottleneck diagnosis. In: Proceedings of ICAC-07
  4. Candes E (2006) Compressive sampling. Int Cong Math 3:1433–1452
  5. Candes E, Romberg J (2006) Quantitative robust uncertainty principles and optimally sparse decompositions. Found Comput Math 6(2):227–254
  6. Candes E, Romberg J, Tao T (2006) Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Trans Inf Theory 52(2):489–509
  7. Candes E, Romberg J, Tao T (2006) Stable signal recovery from incomplete and inaccurate measurements. Commun Pure Appl Math 59(8):1207–1223
  8. Candes E, Tao T (2005) Decoding by linear programming. IEEE Trans Inf Theory 51(12):4203–4215
  9. Carroll MK, Cecchi GA, Rish I, Garg R, Rao AR (2009) Prediction and interpretation of distributed neural activity with sparse models. Neuroimage 44(1):112–122
  10. Chandalia G, Rish I (2007) Blind source separation approach to performance diagnosis and dependency discovery. In: Proceedings of IMC-2007
  11. Donoho D (2006) Compressed sensing. IEEE Trans Inf Theory 52(4):1289–1306
  12. Donoho D (2006) For most large underdetermined systems of linear equations, the minimal \(l_1\)-norm near-solution approximates the sparsest near-solution. Commun Pure Appl Math 59(7):907–934
  13. Donoho D (2006) For most large underdetermined systems of linear equations, the minimal \(l_1\)-norm solution is also the sparsest solution. Commun Pure Appl Math 59(6):797–829
  14. Mitchell TM, Hutchinson R, Niculescu RS, Pereira F, Wang X, Just M, Newman S (2004) Learning to decode cognitive states from brain images. Mach Learn 57:145–175
  15. Negahban S, Ravikumar P, Wainwright MJ, Yu B (2009) A unified framework for the analysis of regularized \(M\)-estimators. In: Proceedings of neural information processing systems (NIPS)
  16. Negahban S, Ravikumar P, Wainwright MJ, Yu B (2010) A unified framework for the analysis of regularized \(M\)-estimators. Technical Report 797, Department of Statistics, UC Berkeley
  17. Park MY, Hastie T (2007) An L1 regularization-path algorithm for generalized linear models. J R Stat Soc Ser B 69(4):659–677
  18. Rish I, Brodie M, Ma S, Odintsova N, Beygelzimer A, Grabarnik G, Hernandez K (2005) Adaptive diagnosis in distributed systems. IEEE Trans Neural Netw (special issue on adaptive learning systems in communication networks) 16(5):1088–1109
  19. Rish I, Grabarnik G (2009) Sparse signal recovery with exponential-family noise. In: Proceedings of the 47th annual Allerton conference on communication, control, and computing
  20. Rockafellar RT (1970) Convex analysis. Princeton University Press, Princeton, NJ
  21. Zheng A, Rish I, Beygelzimer A (2005) Efficient test selection in active diagnosis via entropy approximation. In: Proceedings of UAI-05

Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  1. IBM T.J. Watson Research Center, Yorktown, USA
  2. St. John's University, Queens, USA
