Projective Nonnegative Matrix Factorization for Image Compression and Feature Extraction

  • Zhijian Yuan
  • Erkki Oja
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3540)


Linear expansions are standard tools in image compression and feature extraction. Lee and Seung recently pointed out that positivity, or non-negativity, of a linear expansion is a very powerful constraint that seems to lead to sparse representations of images. Their technique, Non-negative Matrix Factorization (NMF), was shown to be useful for approximating high-dimensional data composed of non-negative components. We propose here a new variant of the NMF method, Projective Nonnegative Matrix Factorization (P-NMF), for learning spatially localized, sparse, part-based subspace representations of visual patterns. The algorithm is based on positively constrained projections and is related both to NMF and to the conventional SVD or PCA decomposition. Two iterative positive projection algorithms are suggested, one minimizing the Euclidean distance and the other the divergence between the original data matrix and its non-negative approximation. Experimental results show that P-NMF derives bases that are somewhat better suited to a localized representation than those of NMF.


Keywords: Singular Value Decomposition · Sparse Representation · Nonnegative Matrix Factorization · Positive Matrix Factorization · Original Data Matrix (keywords machine-generated, not provided by the authors)


References

  1. Bell, A., Sejnowski, T.: The independent components of images are edge filters. Vision Research 37, 3327–3338 (1997)
  2. Hyvärinen, A., Hoyer, P.: Emergence of phase and shift invariant features by decomposition of natural images into independent feature subspaces. Neural Computation 13, 1527–1558 (2001)
  3. Hyvärinen, A., Karhunen, J., Oja, E.: Independent Component Analysis. Wiley, New York (2001)
  4. Lee, D.D., Seung, H.S.: Learning the parts of objects by non-negative matrix factorization. Nature 401, 788–791 (1999)
  5. Lee, D.D., Seung, H.S.: Algorithms for non-negative matrix factorization. In: NIPS, pp. 556–562 (2000)
  6. Olshausen, B.A., Field, D.J.: Natural image statistics and efficient coding. Network 7, 333–339 (1996)
  7. Paatero, P., Tapper, U.: Positive Matrix Factorization: A non-negative factor model with optimal utilization of error estimations of data values. Environmetrics 5, 111–126 (1997)
  8. Kawamoto, T., Hotta, K., Mishima, T., Fujiki, J., Tanaka, M., Kurita, T.: Estimation of single tones from chord sounds using non-negative matrix factorization. Neural Network World 3, 429–436 (2000)
  9. Saul, L.K., Lee, D.D.: Multiplicative updates for classification by mixture models. In: Advances in Neural Information Processing Systems, vol. 14 (2002)
  10. van Hateren, J.H., van der Schaaf, A.: Independent component filters of natural images compared with simple cells in primary visual cortex. Proc. Royal Soc. London B 265, 2315–2320 (1998)

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Zhijian Yuan (1)
  • Erkki Oja (1)
  1. Neural Networks Research Centre, Helsinki University of Technology, Finland
