Signal, Image and Video Processing

Volume 3, Issue 3, pp 251–264

Sparsity and persistence: mixed norms provide simple signal models with dependent coefficients

Original Paper


Sparse regression often uses ℓp norm priors (with p < 2). This paper demonstrates that introducing mixed norms in such contexts allows one to go a step further in signal modeling and to promote different, structured forms of sparsity. It is shown that the particular cases of the ℓ1,2 and ℓ2,1 norms lead to new group shrinkage operators. Mixed-norm priors are shown to be particularly efficient in a generalized basis pursuit denoising approach, and are also used in the context of morphological component analysis, for which a suitable version of the Block Coordinate Relaxation algorithm is derived. The group shrinkage operators are then modified to overcome some limitations of the mixed norms. The proposed operators are tested on simulated signals in specific situations to illustrate and compare their behaviors; results on real data further illustrate the relevance of the approach.
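The ℓ2,1 case mentioned in the abstract gives rise to a particularly simple shrinkage rule: each group of coefficients is rescaled by a factor that vanishes once the group's ℓ2 norm falls below the threshold, so a group is either kept as a whole (shrunk toward zero) or discarded entirely. A minimal sketch in plain Python, with illustrative function names that are not taken from the paper:

```python
import math

def l_pq_norm(groups, p, q):
    """ell_{p,q} mixed norm: ell_p norm inside each group,
    then ell_q norm across the per-group values."""
    inner = [sum(abs(x) ** p for x in g) ** (1.0 / p) for g in groups]
    return sum(v ** q for v in inner) ** (1.0 / q)

def group_soft_threshold(groups, lam):
    """Group shrinkage associated with an ell_{2,1} prior:
    scale each group by max(0, 1 - lam / ||group||_2)."""
    out = []
    for g in groups:
        norm = math.sqrt(sum(x * x for x in g))
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        out.append([scale * x for x in g])
    return out
```

Setting p = q recovers the usual ℓp norm, while the all-or-nothing behavior of the thresholding rule is exactly the structured, group-level sparsity that the mixed-norm prior promotes.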


Keywords: Mixed norms · Time–frequency decompositions · Sparse representations





Copyright information

© Springer-Verlag London Limited 2008

Authors and Affiliations

  1. LATP, CMI, Marseille Cedex 13, France
