Abstract
Nonlinear blind processing algorithms (deconvolution or post-nonlinear source separation) require complex mathematical estimations, which results in very slow algorithms. This is the case, for example, in speech processing, spike-signal deconvolution, or microarray data analysis. In this paper, we propose a simple method to reduce the computational time of the inversion of Wiener systems or the separation of post-nonlinear mixtures, by using a linear approximation in a minimum mutual information algorithm. Simulation results demonstrate that linear spline interpolation is fast and accurate, obtaining very good results (similar to those obtained without approximation) while dramatically decreasing computational time. Cubic spline interpolation also obtains similarly good results but, owing to its intrinsic complexity, makes the overall algorithm much slower and hence is not useful for our purpose.
References
Haykin S. Blind deconvolution, vol. II. New York: Wiley; 2000.
Haykin S. Blind source separation, vol. I. New York: Wiley; 2000.
Comon P, Jutten C. Handbook of blind source separation: independent component analysis and applications. New York: Academic Press; 2010.
Baccarelli E, Galli S. A new approach based on soft statistics to the nonlinear blind-deconvolution of unknown data channels. IEEE Trans Signal Process. 2001;49(7):1481–91.
Fukunaga S, Fujimoto K. Nonlinear blind deconvolution based on a state-space model. In: Proceedings of the 45th IEEE Conference on Decision & Control, San Diego, CA, USA, December 13–15; 2007.
Taleb A, Solé-Casals J, Jutten C. Quasi-nonparametric blind inversion of Wiener systems. IEEE Trans Signal Process. 2001;49(5):917–24.
Solé-Casals J, Jutten C, Taleb A. Parametric approach to blind deconvolution of nonlinear channels. Neurocomputing. 2002;48(1–4):339–355.
Taleb A, Jutten C. Batch algorithm for source separation in post-nonlinear mixtures. In: First International Workshop on Independent Component Analysis and Signal Separation (ICA 1999); 1999. p. 155–60.
Taleb A, Jutten C. Source separation in post-nonlinear mixtures. IEEE Trans Signal Process. 1999;47(10):2807–20.
Solé-Casals J, Caiafa C. A simple approximation for fast nonlinear deconvolution. In: Advances in Nonlinear Speech Processing: 5th International Conference on Nonlinear Speech Processing, NOLISP 2011, Las Palmas de Gran Canaria, Spain, November 7–9, 2011. Lecture Notes in Computer Science, vol. 7015. Berlin, Heidelberg: Springer; 2011. p. 55–62.
Hunter IW, Korenberg MJ. The identification of nonlinear biological systems: Wiener and Hammerstein cascade models. Biol Cybern. 1986;55(2):135–44.
Parzen E. On estimation of a probability density function and mode. Ann Math Stat. 1962;33(3):1065–76.
Härdle W. Smoothing techniques: with implementation in S. New York: Springer; 1991.
Silverman BW. Density estimation for statistics and data analysis. London: Chapman and Hall; 1986.
Härdle W. Smoothing techniques with implementation. Belgium: Springer; 1990.
Prenter PM. Splines and variational methods. New York: Dover; 2008.
Acknowledgments
We would like to thank the anonymous Reviewers for their useful and insightful comments which helped us to considerably improve this work. This work has been in part supported by the MINCYT-MICINN Research Program 2009-2011 (Argentina-Spain): Desarrollo de Herramientas de Procesado de Señales para el Análisis de Datos Bioinformáticos (Ref. AR2009-0010), by the University of Vic under the Grant R0904 and by CONICET under the Grant PIP 2012-2014, Number 11420110100021.
Appendix: Post-nonlinear Blind Source Separation (PNL-BSS)
The PNL-BSS Model
In [8, 9], Taleb and Jutten studied a realistic class of nonlinear mixtures, called post-nonlinear (PNL) mixtures, which are separable. As shown in Fig. 8, this two-stage system consists of a linear mixing matrix A followed by component-wise nonlinear distortions \(f_1(.),{\ldots},f_n(.)\). We assume that the input of the system consists of a set of n independent sources \(s_1,{\ldots},s_n\). The square matrix \({\bf A}\in{{{\rm I}\!{\rm R}}^{n\times n}}\) is assumed unknown and invertible, and the memoryless nonlinear distortions \({f_i:\mathbb{R}\rightarrow \mathbb{R}}\) (\(i=1,2,{\ldots},n\)) are also assumed unknown and invertible. Again, we assume that all involved stochastic processes are wide-sense stationary and ergodic, so expectations are computed using time samples.
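As a concrete illustration, the two-stage PNL model can be simulated as follows; the mixing matrix and the invertible distortions are arbitrary illustrative choices, not taken from the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 2, 5000

# n independent, zero-mean sources
s = rng.uniform(-1.0, 1.0, size=(n, T))

# unknown invertible mixing matrix A (arbitrary example)
A = np.array([[1.0, 0.6],
              [0.5, 1.0]])

# component-wise invertible, memoryless nonlinearities f_i (illustrative)
f = [np.tanh, lambda u: u + 0.3 * u**3]

x = A @ s                                      # linear mixing stage
e = np.vstack([f[i](x[i]) for i in range(n)])  # post-nonlinear observations e_i(t)
```

Only the observations e are available to the separating system; both A and the f_i must be compensated blindly.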
As in the blind deconvolution case, we will need to compute the score function of the outputs. Since now we have several outputs, each one with a corresponding score function \(\psi_i\), we also define a vector of score functions, the so-called marginal score function \(\Uppsi=[\psi_1, \psi_2,{\ldots},\psi_n]^T\).
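Each marginal score \(\psi_i(u)=-p_i'(u)/p_i(u)\) can be estimated nonparametrically, e.g., with a Parzen window. A minimal sketch (Gaussian kernel; the function name and bandwidth h are our own illustrative choices), evaluated directly at the sample points:

```python
import numpy as np

def score_function(y, h=0.3):
    """Parzen-window estimate of the score psi(u) = -p'(u)/p(u),
    evaluated at the sample points themselves (Gaussian kernel, bandwidth h)."""
    d = y[:, None] - y[None, :]          # pairwise differences y_i - y_j
    K = np.exp(-0.5 * (d / h) ** 2)      # Gaussian kernel (normalization cancels in the ratio)
    p = K.sum(axis=1)                    # proportional to the density estimate at y_i
    dp = (-(d / h**2) * K).sum(axis=1)   # proportional to its derivative at y_i
    return -dp / p
```

For a standard normal sample, the estimate is close to the true score \(\psi(u)=u\) (up to the kernel-smoothing bias).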
The Min-MI Post-Nonlinear Blind Source Separation Algorithm (Min-MI PNL-BSS)
An algorithm for solving this problem was introduced in [6] where the separating system coefficients (matrix B) and the compensation nonlinearities \(g_1,{\ldots},g_n\) are chosen such that the mutual information among estimated sources \(y_1,{\ldots},y_n\) is minimized. The mixing observations are

\(e_i(t)=f_i\left(\sum_{j=1}^{n} a_{ij}\, s_j(t)\right),\)

where \(i=1,2,{\ldots},n\), \(s_j(t)\) (\(j=1,2,{\ldots},n\)) are the independent sources, \(e_i(t)\) is the i-th observation, \(a_{ij}\) denotes the entries of the unknown mixing matrix A, and \(f_i\) is the unknown nonlinear mapping on component i.
We highlight that, since statistical independence among sources is the main assumption, the separation structure is tuned so that the components of its output become statistically independent. Similar to the previous case (NL-BD), where we minimized the MI between time samples of the (only) output, in the PNL-BSS case we have to minimize the MI between the estimated sources.
The gradient of the MI with respect to the parameters of the system can be derived analytically (see [8] for a detailed derivation of these equations). Special perturbation signals \(\varepsilon (x_i(t))\) are needed to compensate the nonlinear distortions; the vector composed of these perturbation signals is given by Eq. (9). The gradient of the MI with respect to the matrix B is given by Eq. (10), where K must satisfy an additional positivity condition in order to ensure convergence of the gradient method.
As proposed in the NL-BD case, replacing expectations by their corresponding time average, we can obtain practical equations analogous to Eqs. (5) and (6) for the deconvolution problem. Additionally, we are able to reduce the complexity of the algorithm by using the same strategy as proposed for NL-BD, that is, by computing Eq. (9) only at N points in a regular grid covering the ranges of the variables. In Algorithm 2, the steps of the Min-MI PNL-BSS Algorithm are shown.
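The grid-based strategy can be sketched as follows, using linear spline interpolation as in the paper's proposal; the function names and the choice N = 50 are illustrative assumptions:

```python
import numpy as np

def interpolated_score(x, score_fn, N=50):
    """Evaluate a costly score-function estimator only at N points of a
    regular grid covering the range of x, then extend the result to all
    T samples by linear spline interpolation (np.interp)."""
    grid = np.linspace(x.min(), x.max(), N)  # regular grid over the data range
    psi_grid = score_fn(grid)                # expensive estimate: N points instead of T
    return np.interp(x, grid, psi_grid)      # linear interpolation back to every sample
```

Since N is fixed and small while T may be large, the costly estimation is performed O(N) times instead of O(T), which is the source of the speed-up; the interpolation itself is exact whenever the true score is piecewise linear on the grid.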
Algorithm 2 Min-MI PNL-BSS [8]
INPUT: maximum number of iterations Niter and observed signals e(t)
OUTPUT: estimated source signals \(\widehat{{\bf s}}(t)\)
1: i = 1; B = I (initialization)
2: y = B e
3: while i ≤ Niter and convergence is not reached do
4:   for j = 1:n do
5:     estimate the score function \(\psi_j\) (Eq. (4))
6:     compute the perturbation signal \(\varepsilon (x_j(t))\) (Eq. (9))
7:     \(x_j(t)\leftarrow x_j(t)+\mu_1 \varepsilon(x_j(t))\) (nonlinear compensation)
8:     \(x_j(t)\leftarrow (x_j(t)-m_{x_j})/\sigma_{x_j}\) (normalization)
9:   end for
10:  estimate \(E=\frac{\partial {\bf I}({\bf y})}{\partial {\bf B}}\) (Eq. (10))
11:  \({\bf B}\leftarrow {\bf B}+\mu_2 E\) (separating matrix update)
12:  y = B x (current estimate of the source signals)
13:  for j = 1:n do
14:    \(y_j(t)\leftarrow (y_j(t)-m_{y_j})/\sigma_{y_j}\) (normalization)
15:    \({\bf B}\leftarrow {\bf \Uplambda}^{-1}{\bf B}\) (normalization)
16:  end for
17:  i = i + 1
18: end while
19: \(\widehat{{\bf s}}(t)={\bf y}(t)\)
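The steps above can be sketched in Python as follows. This is only a structural skeleton: the estimators of Eqs. (4), (9) and (10) are left as caller-supplied placeholders (`score_fn`, `pert_fn`, `grad_b_fn`, all hypothetical names), and the convergence test of step 3 is omitted for brevity.

```python
import numpy as np

def min_mi_pnl_bss(e, score_fn, pert_fn, grad_b_fn, n_iter=50, mu1=0.1, mu2=0.1):
    """Skeleton of the Min-MI PNL-BSS loop (Algorithm 2)."""
    n, _ = e.shape
    B = np.eye(n)                                     # step 1: initialization
    x = e.copy()
    y = B @ x                                         # step 2
    for _ in range(n_iter):                           # step 3: main loop
        for j in range(n):                            # steps 4-9
            psi = score_fn(x[j])                      # step 5: score function (Eq. (4))
            x[j] = x[j] + mu1 * pert_fn(x[j], psi)    # steps 6-7: nonlinear compensation
            x[j] = (x[j] - x[j].mean()) / x[j].std()  # step 8: normalization
        E = grad_b_fn(B, x)                           # step 10: dI/dB estimate (Eq. (10))
        B = B + mu2 * E                               # step 11: separating-matrix update
        y = B @ x                                     # step 12: current source estimate
        sd = y.std(axis=1, keepdims=True)             # steps 13-16: normalize outputs,
        y = (y - y.mean(axis=1, keepdims=True)) / sd
        B = B / sd                                    # rescale B by Lambda^{-1} accordingly
    return y, B
```

The row-wise division of B by the output standard deviations implements the \({\bf \Uplambda}^{-1}{\bf B}\) normalization of step 15, with \({\bf \Uplambda}\) the diagonal matrix of those standard deviations.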
Cite this article
Solé-Casals, J., Caiafa, C.F. A Fast Gradient Approximation for Nonlinear Blind Signal Processing. Cogn Comput 5, 483–492 (2013). https://doi.org/10.1007/s12559-012-9192-x