Structured Dictionary Learning Based on Composite Absolute Penalties

Conference paper
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 206)

Abstract

In this paper, we focus on the problem of learning dictionaries with structural features for the sparse representation of natural images. Dictionaries learned by traditional techniques such as MOD and K-SVD lack structure: each atom is treated independently, and the potential relationships among atoms are not exploited, which is insufficient in some cases. We propose a framework for structured dictionary learning that integrates Composite Absolute Penalties (CAP) into the K-SVD algorithm. Atoms of the learned dictionary are organized in a predefined fashion, e.g., a group or tree structure. Such a layout is better suited to exploiting the latent relationships between the patches of natural images. Experiments show that dictionaries learned with our method achieve better results on image restoration tasks. The approach can also be integrated into other sparse-representation-based image processing applications.
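
The framework alternates between a structured sparse-coding step and a K-SVD-style dictionary update. The paper does not include code; the following is a minimal sketch under simplifying assumptions: the CAP term is reduced to a single level of non-overlapping groups (an L1/L2 group norm) handled by block soft-thresholding inside a proximal-gradient (ISTA) sparse-coding loop, and the function names and toy group layout are hypothetical, not the authors' implementation.

```python
import numpy as np

def group_soft_threshold(z, groups, lam):
    """Block soft-thresholding for a group (L1/L2) penalty on one coefficient vector.
    `groups` is a list of index arrays partitioning the entries of z (non-overlapping)."""
    x = z.copy()
    for g in groups:
        norm = np.linalg.norm(z[g])
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        x[g] = scale * z[g]
    return x

def structured_sparse_code(D, Y, groups, lam=0.1, n_iter=50):
    """ISTA for  min_X 0.5*||Y - D X||_F^2 + lam * sum_g ||x_g||_2, column by column.
    The group norm stands in for the CAP regularizer (single-level, non-overlapping case)."""
    X = np.zeros((D.shape[1], Y.shape[1]))
    step = 1.0 / np.linalg.norm(D, 2) ** 2          # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ X - Y)
        Z = X - step * grad
        X = np.column_stack([group_soft_threshold(Z[:, j], groups, step * lam)
                             for j in range(Z.shape[1])])
    return X

def dictionary_update(D, Y, X):
    """K-SVD-style atom-by-atom update: rank-1 SVD of the residual restricted
    to the signals that actually use the atom."""
    for k in range(D.shape[1]):
        used = np.abs(X[k]) > 1e-12
        if not used.any():
            continue
        E = Y[:, used] - D @ X[:, used] + np.outer(D[:, k], X[k, used])
        U, S, Vt = np.linalg.svd(E, full_matrices=False)
        D[:, k] = U[:, 0]
        X[k, used] = S[0] * Vt[0]
    return D, X

# Toy usage with a hypothetical two-group layout over 8 atoms.
rng = np.random.default_rng(0)
Y = rng.standard_normal((16, 100))
D = rng.standard_normal((16, 8)); D /= np.linalg.norm(D, axis=0)
groups = [np.arange(0, 4), np.arange(4, 8)]
for _ in range(5):                                  # alternate sparse coding and dictionary update
    X = structured_sparse_code(D, Y, groups)
    D, X = dictionary_update(D, Y, X)
```

A hierarchical (tree-structured) CAP penalty involves overlapping groups, whose proximal operator is more involved; the block soft-thresholding above covers only the non-overlapping case and is meant purely to illustrate where the structured penalty enters the K-SVD alternation.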

Keywords

Sparse representation · Dictionary learning · Structured sparsity · Denoising

References

1. Candès EJ, Donoho DL (2002) Recovering edges in ill-posed inverse problems: optimality of curvelet frames. Ann Stat 30:784–842
2. Do MN, Vetterli M (2003) Contourlets. In: Stoeckler J, Welland GV (eds) Beyond wavelets. Academic Press, New York
3. Donoho DL (1998) Wedgelets: nearly minimax estimation of edges. Ann Stat 27:859–897
4. Mallat S, Le Pennec E (2005) Sparse geometric image representation with bandelets. IEEE Trans Image Process 14:423–438
5. Freeman WT, Adelson EH (1991) The design and use of steerable filters. IEEE Trans Pattern Anal Mach Intell 13:891–906
6. Olshausen BA, Field DJ (1997) Sparse coding with an overcomplete basis set: a strategy employed by V1. Vis Res 37:3311–3325
7. Engan K, Aase SO, Hakon-Husoy JH (1999) Method of optimal directions for frame design. IEEE Int Conf Acoust Speech Signal Process 5:2443–2446
8. Kreutz-Delgado K, Rao BD (2000) FOCUSS-based dictionary learning algorithms. Wavelet Appl Signal Image Process 8:4119–4153
9. Aharon M, Elad M, Bruckstein AM (2006) K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans Signal Process 54:4311–4322
10. Jenatton R, Mairal J, Obozinski G, Bach F (2011) Proximal methods for hierarchical sparse coding. J Mach Learn Res 12:2297–2334
11. Zhao P, Rocha G, Yu B (2009) The composite absolute penalties family for grouped and hierarchical variable selection. Ann Stat 37:3468–3497
12. Elad M, Aharon M (2006) Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans Image Process 15:3736–3745
13. Roth S, Black MJ (2005) Fields of experts: a framework for learning image priors. IEEE Conf CVPR 2:860–867
14. Portilla J, Simoncelli EP (2003) Image restoration using Gaussian scale mixtures in the wavelet domain. In: 9th IEEE international conference on image processing, pp 965–968
15. Dabov K, Foi A, Katkovnik V, Egiazarian K (2007) Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans Image Process 16:2080–2095
16. Katkovnik V, Foi A, Egiazarian K, Astola J (2010) From local kernel to nonlocal multiple-model image denoising. Int J Comput Vis 86:1–32

Copyright information

© Springer-Verlag London 2013

Authors and Affiliations

1. College of Computer Science, Beijing University of Technology, Beijing, China