Nonnegative Tensor Factorization with Smoothness Constraints

  • Rafal Zdunek
  • Tomasz M. Rutkowski
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5226)


Nonnegative Tensor Factorization (NTF) is an emerging technique in multidimensional signal analysis that can be used to find parts-based representations of high-dimensional data. In many applications, such as multichannel spectrogram processing or multiarray spectra analysis, the unknown features have a locally smooth temporal or spatial structure. In this paper, we incorporate additional smoothness constraints into the NTF objective function, which considerably improves the estimated features. In our approach, we propose to use the Markov Random Field (MRF) model, which is commonly used in tomographic image reconstruction to model the local smoothness of 2D reconstructed images. We extend this model to the multidimensional case, so that smoothness can be enforced in all dimensions of a multidimensional array. We analyze different clique energy functions used in the MRF model. Numerical results obtained on a multidimensional image dataset are presented.
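The core idea of the abstract, adding a smoothness penalty to a nonnegative factorization and absorbing it into multiplicative updates, can be sketched in the simpler matrix (NMF) setting. The sketch below assumes a quadratic (Gaussian-MRF) clique energy, which reduces the smoothness term to a chain-graph Laplacian penalty on the columns of one factor; the function name `smooth_nmf` and the parameter `lam` are illustrative, and the paper's general clique energies and full tensor (PARAFAC) formulation are not reproduced here.

```python
import numpy as np

def smooth_nmf(Y, J, lam=0.1, n_iter=200, rng=None):
    """Multiplicative-update NMF with a quadratic smoothness penalty
    lam * sum_j a_j^T L a_j on the columns of A, where L = D - W is the
    Laplacian of a 1-D chain graph. A sketch of the smoothing idea only."""
    rng = np.random.default_rng(rng)
    I, T = Y.shape
    A = rng.random((I, J)) + 0.1   # nonnegative initialization
    B = rng.random((J, T)) + 0.1
    eps = 1e-9                     # guards against division by zero

    # Chain-graph adjacency W and degree matrix D, so that L = D - W.
    W = np.zeros((I, I))
    idx = np.arange(I - 1)
    W[idx, idx + 1] = W[idx + 1, idx] = 1.0
    D = np.diag(W.sum(axis=1))

    for _ in range(n_iter):
        # Smoothness-regularized update for A: the Laplacian is split into
        # its positive part (D, in the denominator) and negative part
        # (W, in the numerator) so the multiplicative rule keeps A >= 0.
        A *= (Y @ B.T + lam * (W @ A)) / (A @ (B @ B.T) + lam * (D @ A) + eps)
        # Plain Lee-Seung multiplicative update for B (no smoothing here).
        B *= (A.T @ Y) / ((A.T @ A) @ B + eps)
    return A, B
```

Splitting the Laplacian into its positive and negative parts is a standard way to keep a multiplicative update nonnegative; the same construction extends mode-by-mode to a tensor, which is how smoothness can be enforced along every dimension.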


Keywords: Nonnegative Tensor Factorization (NTF), multiarray spectra analysis, Markov Random Field (MRF)





Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Rafal Zdunek (1)
  • Tomasz M. Rutkowski (2)
  1. Institute of Telecommunications, Teleinformatics and Acoustics, Wroclaw University of Technology, Wroclaw, Poland
  2. RIKEN Brain Science Institute, Wako-shi, Japan
