Abstract

The goals of this paper are: 1) to introduce a shift-invariant sparse coding model together with learning rules for this model; 2) to compare this model to the traditional sparse coding model; and 3) to analyse some limitations of the newly proposed approach. To evaluate the model, we show that it can learn features from a toy problem as well as note-like features from a polyphonic piano recording. We further show that the shift-invariant model can help overcome some of the limitations of the traditional model that arise when fewer functions are learned than are present in the true generative model. Finally, we show a limitation of the proposed model on problems in which mixtures of continuously shifted functions are used.
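
To make the generative model concrete, the following sketch draws a signal from a shift-invariant sparse code. It is a minimal illustration only: the NumPy formulation, the names (phi for the learned functions, a for the activations), and all sizes are assumptions for exposition, not details taken from the paper.

    # Minimal sketch of a shift-invariant sparse generative model.
    # All names (phi, a) and sizes (T, K, L) are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    T, K, L = 512, 3, 16               # signal length, number of functions, function length
    phi = rng.standard_normal((K, L))  # dictionary of K shiftable functions
    a = np.zeros((K, T))               # activations: sparse, mostly zero

    # Place a few nonzero coefficients per function at random onsets.
    for k in range(K):
        onsets = rng.choice(T - L, size=4, replace=False)
        a[k, onsets] = rng.standard_normal(4)

    # Shift invariance: each function may occur at any time offset, which the
    # convolution of its sparse activation sequence with the function expresses.
    x = sum(np.convolve(a[k], phi[k], mode="full")[:T] for k in range(K))

In the traditional sparse coding model, by contrast, the signal is a plain linear combination x = Φa, so every admissible shift of a function needs its own dictionary column; the convolutional form above shares one function across all shifts.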

Keywords

Sparse Code · Blind Source Separation · Neural Information Processing System · Inference Process · Sparse Component

Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Thomas Blumensath¹
  • Mike Davies¹

  1. Department of Electronic Engineering, Queen Mary, University of London, London, UK
