ICA with Sparse Connections
When applying independent component analysis (ICA), it is sometimes desirable that the connections between the observed mixtures and the recovered independent components (or the original sources) be sparse, to make the interpretation easier or to reduce the model complexity. In this paper we propose natural gradient algorithms for ICA with a sparse separation matrix, as well as for ICA with a sparse mixing matrix. Sparsity of the matrix is achieved by applying suitable penalty functions to its entries. The properties of these penalty functions are investigated. Experimental results on both artificial data and causality discovery in financial stocks show the usefulness of the proposed methods.
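The approach described above can be illustrated with a minimal sketch of a natural gradient ICA update augmented with a sparsity penalty on the separation matrix. This is not the paper's exact algorithm: the tanh score function, the smooth L1-style penalty `tanh(5W)`, and all step sizes and weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two super-Gaussian (Laplacian) sources, linearly mixed.
n, T = 2, 5000
S = rng.laplace(size=(n, T))
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
X = A @ S

W = np.eye(n)          # separation matrix estimate
eta, lam = 0.05, 0.01  # step size and sparsity weight (assumed values)

for _ in range(300):
    Y = W @ X
    phi = np.tanh(Y)                        # score suited to super-Gaussian sources
    grad = (np.eye(n) - phi @ Y.T / T) @ W  # natural-gradient ICA update
    grad -= lam * np.tanh(5.0 * W)          # smooth L1-style penalty shrinks small entries
    W += eta * grad

Y = W @ X  # recovered sources (up to permutation and scaling)
```

The penalty term only perturbs the standard natural gradient update, so the usual ICA fixed points are approximately preserved while entries of `W` that contribute little are driven toward zero.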
Keywords: Independent Component Analysis · Penalty Function · Neural Information Processing System · Natural Gradient