Towards Dictionaries of Optimal Size: A Bayesian Nonparametric Approach

Abstract

Solving inverse problems usually calls for adapted priors, such as a well-chosen representation of possible solutions. One family of approaches relies on learning redundant dictionaries for sparse representation. In image processing, dictionary learning is typically applied to sets of patches. Many methods work with a dictionary whose number of atoms is fixed in advance; moreover, optimization methods often require prior knowledge of the noise level to tune regularization parameters. We propose a Bayesian nonparametric approach that learns a dictionary of adapted size: an Indian Buffet Process prior makes it possible to infer an adequate number of atoms. The noise level is also accurately estimated, so that almost no parameter tuning is needed. We illustrate the relevance of the resulting dictionaries on numerical experiments.
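To make the role of the Indian Buffet Process (IBP) prior concrete, here is a minimal sketch of its sequential generative process: each new patch reuses an existing atom with probability proportional to how many previous patches already use it, then opens a Poisson-distributed number of new atoms, so the number of columns of the binary assignment matrix, i.e. the dictionary size, is an outcome of the draw rather than a fixed input. The Python sketch below illustrates the prior only, not the paper's full inference scheme; the function name sample_ibp and the parameter values are illustrative.

    import numpy as np

    def sample_ibp(n_items, alpha, rng=None):
        # Draw a binary matrix Z ~ IBP(alpha).
        # Z[i, k] = 1 means item i (e.g. an image patch) uses atom k;
        # the number of atoms K is a result of the draw, not an input.
        rng = np.random.default_rng() if rng is None else rng
        columns = []  # for each atom, the list of items using it
        for i in range(n_items):
            # Reuse atom k with probability m_k / (i + 1), where m_k is
            # the number of previous items already using atom k.
            for col in columns:
                if rng.random() < len(col) / (i + 1):
                    col.append(i)
            # Open Poisson(alpha / (i + 1)) brand-new atoms.
            for _ in range(rng.poisson(alpha / (i + 1))):
                columns.append([i])
        Z = np.zeros((n_items, len(columns)), dtype=int)
        for k, col in enumerate(columns):
            Z[col, k] = 1
        return Z

    Z = sample_ibp(n_items=100, alpha=2.0)
    print(Z.shape)  # (100, K) with K random; E[K] = alpha * sum_{i<=n} 1/i

In a complete dictionary-learning model, Z would gate which atoms each patch uses, while the atom values, the coefficients, and the noise variance receive their own priors and are inferred jointly, typically by Gibbs sampling.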

Keywords

Sparse representations · Dictionary learning · Inverse problems · Indian Buffet Process

Copyright information

© Springer Science+Business Media New York 2016

Authors and Affiliations

1. Université Lille, CNRS, Centrale Lille, UMR 9189 - CRIStAL - Centre de Recherche en Informatique, Signal et Automatique de Lille, Lille, France
