Joint Block Diagonalization Algorithms for Optimal Separation of Multidimensional Components

  • Dana Lahat
  • Jean-François Cardoso
  • Hagit Messer
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7191)


Abstract. This paper deals with non-orthogonal joint block diagonalization. We suggest two algorithms that minimize the Kullback-Leibler divergence between a set of real positive-definite matrices and a block-diagonal transformation thereof. One algorithm is based on the relative gradient, the other on a quasi-Newton method. These algorithms enable blind separation of multidimensional Gaussian components that is optimal in the mean square error sense. Simulations demonstrate the convergence properties of the suggested algorithms, as well as the dependence of the criterion on some of the model parameters.
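The relative-gradient approach described in the abstract can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' implementation: the criterion below is the standard KL-type block-diagonality measure (zero exactly when every transformed matrix is block diagonal), and the function names, step size, and backtracking rule are all assumptions added for the sketch.

```python
import numpy as np

def bdiag(M, blocks):
    """Zero out all entries of M outside the diagonal blocks.
    `blocks` is a list of block sizes summing to M's dimension."""
    out = np.zeros_like(M)
    i = 0
    for b in blocks:
        out[i:i + b, i:i + b] = M[i:i + b, i:i + b]
        i += b
    return out

def kl_criterion(B, Rs, blocks):
    """Sum over k of (1/2)[log det bdiag(M_k) - log det M_k],
    with M_k = B R_k B^T. Nonnegative by Fischer's inequality;
    zero iff every M_k is exactly block diagonal."""
    c = 0.0
    for R in Rs:
        M = B @ R @ B.T
        c += 0.5 * (np.linalg.slogdet(bdiag(M, blocks))[1]
                    - np.linalg.slogdet(M)[1])
    return c

def jbd_relative_gradient(Rs, blocks, n_iter=100, step=0.1):
    """Minimize the criterion by relative-gradient steps
    B <- (I - s*G) B, where G = mean_k [bdiag(M_k)^{-1} M_k - I].
    A simple backtracking rule (an assumption of this sketch)
    keeps the descent monotone."""
    n = Rs[0].shape[0]
    B = np.eye(n)
    c = kl_criterion(B, Rs, blocks)
    for _ in range(n_iter):
        G = np.zeros((n, n))
        for R in Rs:
            M = B @ R @ B.T
            G += np.linalg.solve(bdiag(M, blocks), M) - np.eye(n)
        G /= len(Rs)
        s = step
        while s > 1e-12:  # shrink the step until the criterion decreases
            B_new = (np.eye(n) - s * G) @ B
            c_new = kl_criterion(B_new, Rs, blocks)
            if c_new < c:
                B, c = B_new, c_new
                break
            s /= 2
    return B
```

The multiplicative update B ← (I − sG)B is what makes the gradient "relative": the algorithm is equivariant, so its trajectory does not depend on the conditioning of the mixing matrix, only on the sources.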


Keywords: Joint block diagonalization · Relative gradient · Quasi-Newton





Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Dana Lahat¹
  • Jean-François Cardoso²
  • Hagit Messer¹
  1. School of Electrical Engineering, Tel Aviv University, Tel Aviv, Israel
  2. LTCI, TELECOM ParisTech and CNRS, Paris, France
