
A Novel Monotonic Fixed-Point Algorithm for l1-Regularized Least Square Vector and Matrix Problem

  • Jiaojiao Jiang
  • Haibin Zhang
  • Shui Yu
Part of the Communications in Computer and Information Science book series (CCIS, volume 163)

Abstract

The least squares problem with l1 regularization has been proposed as a promising approach to sparse signal reconstruction (e.g., basis pursuit de-noising and compressed sensing) and feature selection (e.g., the Lasso) in signal processing, statistics, and related fields. Such problems can be cast as an l1-regularized least-squares program (LSP). In this paper, we propose a novel monotonic fixed-point method for solving large-scale l1-regularized LSPs, and we prove the stability and convergence of the proposed method. Furthermore, we generalize the method to the least-squares matrix problem and apply it to nonnegative matrix factorization (NMF). The method is illustrated on sparse signal reconstruction, pattern recognition, and blind source separation problems, where it tends to converge faster and yield sparser solutions than other l1-regularized algorithms.
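The paper's specific monotonic fixed-point update is not reproduced on this page. As a point of reference, the standard fixed-point (shrinkage/ISTA-style) iteration for the l1-regularized LSP, on which such methods build, can be sketched as follows; the function and parameter names here are illustrative, not taken from the paper:

```python
import numpy as np

def soft_threshold(x, t):
    # Elementwise soft-thresholding: the proximal operator of t * ||x||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def l1_ls_fixed_point(A, b, lam, tau=None, max_iter=500, tol=1e-8):
    """Fixed-point iteration for  min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.

    Iterates  x <- shrink(x - tau * A^T (A x - b), tau * lam);
    the fixed points of this map are exactly the minimizers.
    """
    if tau is None:
        # A step size below 1 / ||A||_2^2 gives monotone descent of the objective.
        tau = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        x_new = soft_threshold(x - tau * A.T @ (A @ x - b), tau * lam)
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            return x_new
        x = x_new
    return x
```

The soft-thresholding step is what produces exactly-sparse iterates: any coordinate whose gradient step falls below the threshold `tau * lam` is set to zero rather than merely made small.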

Keywords

l1-regularized LSP · fixed-point method · signal reconstruction · NMF



Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Jiaojiao Jiang (1)
  • Haibin Zhang (1)
  • Shui Yu (2)
  1. College of Applied Sciences, Beijing University of Technology, China
  2. School of Information Technology, Deakin University, Burwood, Australia
