ICA with Sparse Connections: Revisited

  • Kun Zhang
  • Heng Peng
  • Laiwan Chan
  • Aapo Hyvärinen
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5441)


When applying independent component analysis (ICA), we sometimes expect the connections between the observed mixtures and the recovered independent components (or the original sources) to be sparse, to make the interpretation easier or to reduce random effects in the results. In this paper we propose two methods to tackle this problem. One is based on adaptive Lasso, which exploits the L1 penalty with data-adaptive weights. We show the relationship between this method and classic information criteria such as BIC and AIC. The other is based on optimal brain surgeon, and we show how its stopping criterion is related to the information criteria. This method produces the solution path of the transformation matrix, with different numbers of zero entries. Both methods have low computational cost. Moreover, in each method, the parameter controlling the sparsity level of the transformation matrix has a clear interpretation. By setting such parameters to certain values, the results of the proposed methods are consistent with those produced by classic information criteria.
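To make the adaptive-Lasso idea concrete, the following is a minimal sketch (not the authors' ICA algorithm) of a weighted-L1 estimator solved by proximal gradient descent, where the penalty weights are set data-adaptively from a pilot least-squares estimate, w_j = 1/|b_ols_j|^gamma. The function name, the defaults for `lam`, `gamma`, and `n_iter`, and the regression setting itself are illustrative assumptions.

```python
import numpy as np

def adaptive_lasso(X, y, lam=0.5, gamma=1.0, n_iter=500):
    """Adaptive Lasso sketch via proximal gradient descent (ISTA).

    Minimizes 0.5*||y - X b||^2 + lam * sum_j w_j * |b_j|, with
    data-adaptive weights w_j = 1 / |b_ols_j|**gamma, so coefficients
    with small pilot estimates are penalized more heavily.
    Illustrative only; defaults are hypothetical, not from the paper.
    """
    n, p = X.shape
    # Pilot estimate (ordinary least squares) defines the adaptive weights.
    b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    w = 1.0 / (np.abs(b_ols) ** gamma + 1e-12)
    # Step size from the Lipschitz constant of the smooth part.
    lr = 1.0 / np.linalg.norm(X, 2) ** 2
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y)          # gradient of the squared loss
        z = b - lr * grad                 # gradient step
        # Soft-thresholding with per-coefficient threshold lam * w_j * lr:
        # large weights push small pilot coefficients exactly to zero.
        b = np.sign(z) * np.maximum(np.abs(z) - lam * w * lr, 0.0)
    return b
```

Because the weights blow up on coefficients whose pilot estimates are near zero, those entries are thresholded to exactly zero while large coefficients receive almost no shrinkage, which is the source of the oracle property discussed in the abstract.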


Keywords: Independent Component Analysis · Sparsity Level · Adaptive Lasso · Oracle Property




  1. Akaike, H.: Information theory and an extension of the maximum likelihood principle. In: Proc. 2nd Int. Symp. on Information Theory, pp. 267–281 (1973)
  2. Fan, J., Li, R.: Variable selection via nonconcave penalized likelihood and its oracle properties. J. Amer. Statist. Assoc. 96, 1348–1360 (2001)
  3. Hassibi, B., Stork, D.G.: Second order derivatives for network pruning: Optimal brain surgeon. In: NIPS 5, pp. 164–171. Morgan Kaufmann, San Francisco (1993)
  4. Hyvärinen, A., Karhunen, J., Oja, E.: Independent Component Analysis. John Wiley & Sons, Inc., Chichester (2001)
  5. Hyvärinen, A., Karthikesh, R.: Imposing sparsity on the mixing matrix in independent component analysis. Neurocomputing 49, 151–162 (2002)
  6. Pham, D.T., Garat, P.: Blind separation of mixture of independent sources through a quasi-maximum likelihood approach. IEEE Trans. on Signal Processing 45(7), 1712–1725 (1997)
  7. Schwarz, G.: Estimating the dimension of a model. The Annals of Statistics 6, 461–464 (1978)
  8. Shimizu, S., Hoyer, P.O., Hyvärinen, A., Kerminen, A.J.: A linear non-Gaussian acyclic model for causal discovery. JMLR 7, 2003–2030 (2006)
  9. Silva, F.M., Almeida, L.B.: Acceleration techniques for the backpropagation algorithm. In: Neural Networks, pp. 110–119. Springer, Heidelberg (1990)
  10. Tibshirani, R.: Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society 58(1), 267–288 (1996)
  11. Zhang, K., Chan, L.-W.: ICA with sparse connections. In: Corchado, E., Yin, H., Botti, V., Fyfe, C. (eds.) IDEAL 2006. LNCS, vol. 4224, pp. 530–537. Springer, Heidelberg (2006)
  12. Zhang, K., Chan, L.: Minimal nonlinear distortion principle for nonlinear independent component analysis. JMLR 9, 2455–2487 (2008)
  13. Zhao, P., Yu, B.: On model selection consistency of lasso. JMLR 7, 2541–2563 (2006)
  14. Zou, H.: The adaptive lasso and its oracle properties. Journal of the American Statistical Association 101(476), 1417–1429 (2006)

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Kun Zhang (1)
  • Heng Peng (2)
  • Laiwan Chan (3)
  • Aapo Hyvärinen (1, 4)

  1. Dept of Computer Science & HIIT, University of Helsinki, Finland
  2. Dept of Mathematics, Hong Kong Baptist University, Hong Kong
  3. Dept of Computer Science and Engineering, Chinese University of Hong Kong, Hong Kong
  4. Dept of Mathematics and Statistics, University of Helsinki, Finland
