An Information Theoretic Approach to Joint Approximate Diagonalization

  • Yoshitatsu Matsuda
  • Kazunori Yamaguchi
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7062)


Abstract

Joint approximate diagonalization (JAD) is a solution for blind source separation that can extract non-Gaussian sources without any other prior knowledge. However, because JAD is based on an algebraic approach, it is not robust when the sample size is small. Here, JAD is improved by an information theoretic approach. First, the "true" probabilistic distribution of the diagonalized cumulants in JAD is estimated under some simple conditions. Next, a new objective function is defined as the Kullback-Leibler divergence between this true distribution and the estimated distribution of the current cumulants. Although it is similar to the usual JAD objective function, it has a positive lower bound. Then, an improvement of JAD that exploits this lower bound is proposed. Numerical experiments verify the validity of the approach for small numbers of samples.
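The standard JAD objective that the abstract builds on minimizes the off-diagonal energy of a set of matrices (e.g. fourth-order cumulant matrices) under a joint orthogonal transform. A minimal sketch of that baseline objective, assuming NumPy; the function name and interface are illustrative, not the authors' code or their information-theoretic variant:

```python
import numpy as np

def jad_offdiag_cost(cumulant_matrices, V):
    """Sum of squared off-diagonal entries of each cumulant matrix
    after applying the joint orthogonal transform V.
    Classical JAD searches for the V minimizing this cost."""
    cost = 0.0
    for C in cumulant_matrices:
        D = V @ C @ V.T                      # transformed matrix
        cost += np.sum(D ** 2) - np.sum(np.diag(D) ** 2)  # off-diagonal energy
    return cost
```

A perfectly diagonalizing transform drives this cost to zero; the paper's point is that, with finite samples, the cumulants' "true" distribution implies a positive lower bound on such a divergence-based cost, which their modified objective accounts for.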


Keywords: Independent Component Analysis, Blind Source Separation, Blind Signal, True Distribution





Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Yoshitatsu Matsuda¹
  • Kazunori Yamaguchi²
  1. Department of Integrated Information Technology, Aoyama Gakuin University, Sagamihara-shi, Japan
  2. Department of General Systems Studies, Graduate School of Arts and Sciences, The University of Tokyo, Meguro-ku, Japan
