
Efficient Variant of Noncircular Complex FastICA Algorithm for the Blind Source Separation of Digital Communication Signals

  • Short Paper
  • Published in Circuits, Systems, and Signal Processing

Abstract

In this paper, an improved version of the noncircular complex FastICA (nc-FastICA) algorithm is proposed for the separation of digital communication signals. In contrast to the original nc-FastICA algorithm, the proposed algorithm is asymptotically efficient for digital communication signals; that is, its estimation error can be made much smaller by adaptively choosing an approximately optimal nonlinear function. The proposed algorithm therefore offers significantly improved separation performance for digital communication signals. Simulations confirm its efficiency.


References

  1. T. Adali, H. Li, M. Novey et al., Complex ICA using nonlinear functions. IEEE Trans. Signal Process. 56(9), 4536–4544 (2008)

  2. E. Bingham, A. Hyvärinen, A fast fixed-point algorithm for independent component analysis of complex valued signals. Int. J. Neural Syst. 10(1), 1–8 (2000)

  3. J.F. Cardoso, Source separation using higher order moments, in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Glasgow, UK, pp. 2109–2112 (1989)

  4. J.F. Cardoso, A. Souloumiac, Blind beamforming for non-Gaussian signals. IEE Proc. F 140(6), 362–370 (1993)

  5. A.L. de Almeida, X. Luciani, A. Stegeman, P. Comon, CONFAC decomposition approach to blind identification of underdetermined mixtures based on generating function derivatives. IEEE Trans. Signal Process. 60(11), 5698–5713 (2012)

  6. A. Dermoune, T. Wei, FastICA algorithm: five criteria for the optimal choice of the nonlinearity function. IEEE Trans. Signal Process. 61(8), 2078–2087 (2013)

  7. T. Dong, Y. Lei, J. Yang, An algorithm for underdetermined mixing matrix estimation. Neurocomputing 104, 26–34 (2013)

  8. F. Gu, H. Zhang, S. Wang, D. Zhu, Blind identification of underdetermined mixtures with complex sources using the generalized generating function. Circuits Syst. Signal Process. 34(2), 681–693 (2015)

  9. A. Hyvärinen, One-unit contrast functions for independent component analysis: a statistical analysis, in Proceedings of the IEEE Workshop on Neural Networks for Signal Processing, Amelia Island, FL, pp. 388–397 (1997)

  10. A. Hyvärinen, E. Oja, A fast fixed-point algorithm for independent component analysis. Neural Comput. 9(7), 1483–1492 (1997)

  11. A. Hyvärinen, Fast and robust fixed-point algorithms for independent component analysis. IEEE Trans. Neural Netw. 10(3), 626–634 (1999)

  12. A. Hyvärinen, J. Karhunen, E. Oja, Independent Component Analysis (Wiley, New York, 2001)

  13. A. Hyvärinen, Testing the ICA mixing matrix based on inter-subject or inter-session consistency. NeuroImage 58(1), 122–136 (2011)

  14. Z. Koldovsky, P. Tichavsky, E. Oja, Efficient variant of algorithm FastICA for independent component analysis attaining the Cramér–Rao lower bound. IEEE Trans. Neural Netw. 17(5), 1265–1277 (2006)

  15. H. Li, T. Adali, Algorithms for complex ML ICA and their stability analysis using Wirtinger calculus. IEEE Trans. Signal Process. 58(12), 6156–6167 (2010)

  16. X. Luciani, A.L. de Almeida, P. Comon, Blind identification of underdetermined mixtures based on the characteristic function: the complex case. IEEE Trans. Signal Process. 59(2), 540–553 (2011)

  17. J. Miettinen, K. Nordhausen, H. Oja, S. Taskinen, Fast equivariant JADE, in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 6153–6157 (2013)

  18. J. Miettinen, K. Nordhausen, H. Oja et al., Deflation-based FastICA with adaptive choice of nonlinearities. IEEE Trans. Signal Process. 62(21), 5716–5724 (2014)

  19. M. Novey, T. Adali, Adaptable nonlinearity for complex maximization of nongaussianity and a fixed-point algorithm, in Proceedings of the 16th IEEE Signal Processing Society Workshop on Machine Learning for Signal Processing (2006)

  20. M. Novey, T. Adali, On extending the complex FastICA algorithm to noncircular sources. IEEE Trans. Signal Process. 56(5), 2148–2154 (2008)

  21. G.R. Naik, D.K. Kumar, Determining number of independent sources in undercomplete mixture. EURASIP J. Adv. Signal Process. 51, 1–5 (2009)

  22. G.R. Naik, D.K. Kumar, Dimensional reduction using blind source separation for identifying sources. Int. J. Innov. Comput. Inf. Control 7(2), 989–1000 (2011)

  23. G.R. Naik, D.K. Kumar, An overview of independent component analysis and its applications. Informatica (Int. J. Comput. Inform.) 35(1), 63–81 (2011)

  24. D.T. Pham, Fast algorithms for mutual information based independent component analysis. IEEE Trans. Signal Process. 52(10), 2690–2700 (2004)

  25. D. Tse, P. Viswanath, Fundamentals of Wireless Communication (Cambridge University Press, Cambridge, 2005)

  26. S. Xie, L. Yang, J. Yang, G. Zhou, Y. Xiang, Time-frequency approach to underdetermined blind source separation. IEEE Trans. Neural Netw. Learn. Syst. 23(2), 306–316 (2012)

  27. G. Zhou, Z. Yang, S. Xie, J.M. Yang, Mixing matrix estimation from sparse mixtures with unknown number of sources. IEEE Trans. Neural Netw. 22(2), 211–221 (2011)

  28. H. Zhang, L. Li, W. Li, Independent component analysis based on fast proximal gradient. Circuits Syst. Signal Process. 10(1), 1–8 (2012)

Acknowledgments

The authors would like to thank the Editor-in-Chief, Prof. M. N. S. Swamy, for his help in improving the production of the paper.

Author information

Corresponding author

Correspondence to Guobing Qian.

Appendix: Proof of Theorem 1

As shown in [20], the cost function of the nc-FastICA algorithm is

$$\begin{aligned} J(\mathbf{w})=E\left\{ {G\left( \left| {\mathbf{w}^\mathrm{H}\mathbf{Vz}} \right| ^{2}\right) } \right\} \end{aligned}$$
(9)

where \(G:{\mathbb {R}}^{+}\cup \{0\}\rightarrow {\mathbb {R}}\) is a smooth even function, \(\mathbf{V}\) is the whitening matrix, and \(\mathbf{w}\in {\mathbb {C}}^{N}\) is constrained to have unit norm, \(\Vert \mathbf{w}\Vert =1\), i.e., \(E({|{\mathbf{w}^\mathrm{H}\mathbf{Vz}}|^{2}})=E( {|{{\bar{\mathbf{w}}}^\mathrm{H}\mathbf{z}}|^{2}})=1\), where \({\bar{\mathbf{w}}}=\mathbf{V}^\mathrm{H}\mathbf{w}\) is a column of the demixing matrix.
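To make the notation concrete, the following minimal Python sketch (ours, not the authors' code) evaluates the sample version of the cost (9) for a candidate unit-norm vector; the choice \(G(y)=\log (0.1+y)\) is only one admissible smooth nonlinearity and is assumed here purely for illustration.

```python
import numpy as np

# Minimal sketch (illustrative, not the authors' implementation): sample
# estimate of the nc-FastICA cost J(w) = E{ G(|w^H V z|^2) } from Eq. (9).
# G(y) = log(0.1 + y) below is just one admissible smooth choice of G.

def nc_fastica_cost(w, Z, V, G=lambda y: np.log(0.1 + y)):
    """w: (N,) complex with ||w|| = 1; Z: (N, T) observations; V: (N, N) whitening matrix."""
    y = np.abs(np.conj(w) @ (V @ Z)) ** 2   # |w^H V z(t)|^2, one value per sample t
    return np.mean(G(y))                    # sample mean approximating E{G(.)}
```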

After the orthogonal change of coordinates \(\mathbf{q}=\mathbf{A}^\mathrm{H}{\bar{\mathbf{w}}}=\mathbf{A}^\mathrm{H}\mathbf{V}^\mathrm{H}\mathbf{w}\), the Karush–Kuhn–Tucker condition for the constrained optimization problem (9) becomes

$$\begin{aligned} {\partial E\left\{ {G\left( {\left| {\mathbf{q}^\mathrm{H}\mathbf{s}} \right| ^{\mathbf{2}}} \right) } \right\} }\bigg /{\partial \mathbf{q}}=\lambda {\partial E\left( {\left| {\mathbf{q}^\mathrm{H}\mathbf{s}} \right| ^{2}} \right) }\bigg /{\partial \mathbf{q}} \end{aligned}$$
(10)

i.e.,

$$\begin{aligned} \sum _t {\mathbf{s}(t)\left( {\mathbf{q}^\mathrm{H}\mathbf{s}(t)} \right) ^{*}g\left( {\left| {\mathbf{q}^\mathrm{H}\mathbf{s}(t)} \right| ^{2}} \right) } =\lambda \sum _t {\mathbf{s}(t)\mathbf{s}^\mathrm{H}(t)\mathbf{q}} \end{aligned}$$
(11)

where \(t\) is the sample index (dropped for simplicity in the subsequent expressions), \(T\) is the sample size, and \(\lambda \) is the Lagrange multiplier.

Without loss of generality, assume \(\mathbf{q}\) is an approximately optimal solution for extracting \(s_1 \); then \(\lambda =E[{| {s_1 } |^{2}g({| {s_1}|^{2}})}]\). Using a first-order Taylor expansion and ignoring the perturbation of the first component, we can derive

$$\begin{aligned}&\mathbf{s}\left( {\mathbf{q}^\mathrm{H}\mathbf{s}} \right) ^{*}g\left( {\left| {\mathbf{q}^\mathrm{H}\mathbf{s}} \right| ^{2}} \right) \nonumber \\&\quad \approx \mathbf{s}\left\{ {\left( {s_1^*+\mathbf{s}_{-1}^\mathrm{H} \mathbf{q}_{-1} } \right) \left[ {g\left( {\left| {s_1 } \right| ^{2}} \right) +g^{\prime }\left( {\left| {s_1 } \right| ^{2}} \right) \left( {\left| {s_1 } \right| ^{2}\mathbf{s}_{-1}^\mathrm{H} \mathbf{q}_{-1} +s_1^{*2} \mathbf{q}_{-1}^\mathrm{H} \mathbf{s}_{-1} } \right) } \right] } \right\} \nonumber \\&\quad \approx \mathbf{s}\left\{ {s_1^*g\left( {\left| {s_1 } \right| ^{2}} \right) +\left[ {g\left( {\left| {s_1 } \right| ^{2}} \right) +g^{\prime }\left( {\left| {s_1 } \right| ^{2}} \right) \left| {s_1 } \right| ^{2}} \right] \mathbf{s}_{-1}^\mathrm{H} \mathbf{q}_{-1} +g^{\prime }\left( {\left| {s_1 } \right| ^{2}} \right) s_1^{*2} \mathbf{s}_{-1}^\mathrm{T} \mathbf{q}_{-1}^*} \right\} \end{aligned}$$
(12)
$$\begin{aligned}&\lambda \mathbf{ss}^\mathrm{H} \mathbf{q}\approx \lambda \,\mathbf{s}\left( {s_1^*+\mathbf{s}_{-1}^\mathrm{H} \mathbf{q}_{-1} } \right) \end{aligned}$$
(13)

where \(\mathbf{s}_{-1}\) and \(\mathbf{q}_{-1}\) denote \(\mathbf{s}\) and \(\mathbf{q}\), respectively, with their first components removed.
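As a quick numerical check of the first-order expansion in (12) (our illustrative sketch, not part of the paper), one can compare the exact term \(\mathbf{s}( {\mathbf{q}^\mathrm{H}\mathbf{s}})^{*}g( {| {\mathbf{q}^\mathrm{H}\mathbf{s}}|^{2}})\) with its linearization for a small perturbation \(\mathbf{q}_{-1}\); the unit-power 16-QAM constellation and \(g(y)=1/(0.1+y)\) are assumptions made only for this check.

```python
import numpy as np

# Illustrative check of the first-order expansion (12): for q = e_1 + q_{-1}
# with a small q_{-1} (first component kept at 1), the exact term
# s (q^H s)^* g(|q^H s|^2) and its linearization should agree to second order.
# Unit-power 16-QAM and g(y) = 1/(0.1 + y) are assumptions for this check only.

rng = np.random.default_rng(1)
N = 3
qam = np.array([a + 1j * b for a in (-3, -1, 1, 3) for b in (-3, -1, 1, 3)])
qam = qam / np.sqrt(np.mean(np.abs(qam) ** 2))         # normalize to unit power
s = rng.choice(qam, size=N)                            # one sample of the source vector

g  = lambda y: 1.0 / (0.1 + y)                         # g = G'
gp = lambda y: -1.0 / (0.1 + y) ** 2                   # g'

for eps in (1e-2, 1e-3):
    q = np.zeros(N, dtype=complex)
    q[0] = 1.0
    q[1:] = eps * (rng.standard_normal(N - 1) + 1j * rng.standard_normal(N - 1))

    z = np.conj(q) @ s                                 # q^H s
    exact = s * np.conj(z) * g(np.abs(z) ** 2)

    y1, s1 = np.abs(s[0]) ** 2, s[0]
    lin = s * (np.conj(s1) * g(y1)
               + (g(y1) + gp(y1) * y1) * (np.conj(s[1:]) @ q[1:])       # s_{-1}^H q_{-1}
               + gp(y1) * np.conj(s1) ** 2 * (s[1:] @ np.conj(q[1:])))  # s_{-1}^T q_{-1}^*
    print(eps, np.max(np.abs(exact - lin)))            # error shrinks roughly like eps^2
```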

Excluding the first component of (11) and substituting (12) and (13), we obtain

$$\begin{aligned}&\frac{1}{\sqrt{T}}\sum _t {\mathbf{s}_{-1} \left[ {s_1^*g\left( {\left| {s_1 } \right| ^{2}} \right) -\lambda s_1^*} \right] } \nonumber \\&\quad =\frac{1}{T}\sum _t {\mathbf{s}_{-1} \mathbf{s}_{-1}^H \left[ {\lambda -g\left( {\left| {s_1 } \right| ^{2}} \right) -\left| {s_1 } \right| ^{2}g^{\prime }\left( {\left| {s_1 } \right| ^{2}} \right) } \right] \mathbf{q}_{-1} \sqrt{T}} \nonumber \\&\qquad +\,\frac{1}{T}\sum _t {\mathbf{s}_{-1} \mathbf{s}_{-1}^T \left[ {-s_1^{*2} g^{\prime }\left( {\left| {s_1 } \right| ^{2}} \right) } \right] \mathbf{q}_{-1}^*\sqrt{T}} \end{aligned}$$
(14)

Moreover, (14) can be written more compactly as

$$\begin{aligned} \mathbf{u}=\mathbf{V}_1 {\hat{\mathbf{q}}}_{-1} \sqrt{T}+\mathbf{V}_2 {\hat{\mathbf{q}}}_{-1}^*\sqrt{T} \end{aligned}$$
(15)

where \(\mathbf{V}_1 \rightarrow E[ {| {s_1 } |^{2}g({| {s_1 }|^{2}})-g( {| {s_1 }|^{2}})-| {s_1 } |^{2}g^{\prime }( {| {s_1 }|^{2}})} ]\mathbf{I}_{N-1}\) and \(\mathbf{V}_2 \rightarrow -E[ {s_1^{*2} g^{\prime }( {| {s_1 }|^{2}} )} ]E( {\mathbf{s}_{-1} \mathbf{s}_{-1}^\mathrm{T}})\), with \(\mathbf{I}_{N-1}\) the \((N-1)\times (N-1)\) identity matrix. The vector \(\mathbf{u}\) converges in distribution to a zero-mean complex normal random vector with covariance \(\mathbf{V}_3 =\{ {E[ {| {s_1 } |^{2}g^{2}( {| {s_1 }|^{2}})} ]-E^{2}[ {| {s_1 } |^{2}g( {| {s_1 } |^{2}} )} ]} \}\mathbf{I}_{N-1} \) and pseudo-covariance \(\mathbf{V}_4 = \{ {E[ {s_1^{*2} g^{2}( {| {s_1 } |^{2}} )} ]+E[ {s_1^{*2} } ]E^{2}[ {| {s_1 } |^{2}g( {| {s_1 } |^{2}} )} ]-2E[ {| {s_1 } |^{2}g( {| {s_1 } |^{2}} )} ]E[ {s_1^{*2} g( {| {s_1 } |^{2}} )} ]} \}E( {\mathbf{s}_{-1} \mathbf{s}_{-1}^\mathrm{T} } )\).

For simplicity, we assume that all the sources share the same modulation. Then, \(\mathbf{V}_2 \rightarrow -E[ {s_1^{*2} g^{\prime }( {| {s_1 } |^{2}} )} ]E( {s_1^2 } )\mathbf{I}_{N-1}\) and \(\mathbf{V}_4 =\{ E[ {s_1^{*2} g^{2}( {| {s_1 } |^{2}} )} ]+E[ {s_1^{*2} } ] E^{2}[ {| {s_1 } |^{2}g( {| {s_1 } |^{2}} )}]-2E[ {| {s_1 } |^{2}g( {| {s_1 } |^{2}} )} ]E[ {s_1^{*2} g( {| {s_1 } |^{2}} )} ] \}E( {s_1^2 } )\mathbf{I}_{N-1}\).
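A side remark (not stated in the paper, but immediate from the expressions above): if, in addition, the common constellation is second-order circular, i.e., \(E( {s_1^2 })=0\) as for QPSK or square QAM, then \(\mathbf{V}_2\) and \(\mathbf{V}_4\) vanish, and the asymptotic variance derived next in (16) reduces to

$$\begin{aligned} \mathbf{V}=\frac{E\left[ {\left| {s_1 } \right| ^{2}g^{2}\left( {\left| {s_1 } \right| ^{2}} \right) } \right] -E^{2}\left[ {\left| {s_1 } \right| ^{2}g\left( {\left| {s_1 } \right| ^{2}} \right) } \right] }{E^{2}\left[ {\left| {s_1 } \right| ^{2}g\left( {\left| {s_1 } \right| ^{2}} \right) -g\left( {\left| {s_1 } \right| ^{2}} \right) -\left| {s_1 } \right| ^{2}g^{\prime }\left( {\left| {s_1 } \right| ^{2}} \right) } \right] }\,\mathbf{I}_{N-1} \end{aligned}$$

so the pseudo-moments of \(s_1\) play no role in that special case.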

Therefore, the asymptotic variance of \({\hat{\mathbf{q}}}_{-1}\) is

$$\begin{aligned} \mathbf{V}=\frac{1}{\left( {a_1^2 -a_2^2 } \right) ^{2}}\left[ {\left( {a_1^2 +a_2^2 } \right) a_3 -a_1 a_2 \left( {a_4 +a_4^*} \right) } \right] \mathbf{I}_{N-1} \end{aligned}$$
(16)

where \(a_1 =E[{| {s_1 } |^{2}g( {| {s_1 } |^{2}} )-g( {| {s_1 } |^{2}} )-| {s_1 } |^{2}g^{\prime }( {| {s_1 } |^{2}} )} ]\), \(a_2 =-E[ {s_1^{*2} g^{\prime }( {| {s_1 } |^{2}} )} ]E( {s_1^2 })\), \(a_3 =E[ {| {s_1 } |^{2}g^{2}( {| {s_1 } |^{2}} )} ]-E^{2}[ {| {s_1 } |^{2}g( {| {s_1 } |^{2}} )} ]\), and \(a_4 =\{ E[ {s_1^{*2} g^{2}( {| {s_1 } |^{2}} )} ]+E[ {s_1^{*2} } ]E^{2}[ {| {s_1 } |^{2}g( {| {s_1 } |^{2}} )}]-2E[ {| {s_1 } |^{2}g( {| {s_1 } |^{2}} )} ]E[ {s_1^{*2} g( {| {s_1 } |^{2}} )} ]\}E( {s_1^2 })\).
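To see how (16) behaves for a concrete digital constellation, the short Python sketch below (ours, purely illustrative) evaluates \(a_1 ,\ldots ,a_4 \) exactly over an equiprobable constellation and returns the scalar multiplying \(\mathbf{I}_{N-1}\) in (16); the unit-power 4-PAM alphabet (a strongly noncircular real constellation) and the nonlinearity \(g(y)=1/(0.1+y)\) are assumptions made only for this example.

```python
import numpy as np

# Illustrative evaluation of the asymptotic variance (16) for an equiprobable
# constellation.  The expectations in a_1..a_4 are computed exactly as symbol
# averages.  The 4-PAM alphabet and g(y) = 1/(0.1 + y) (with derivative gp)
# are assumptions for this example only.

def asymptotic_variance(symbols, g, gp):
    s = np.asarray(symbols, dtype=complex)
    s = s / np.sqrt(np.mean(np.abs(s) ** 2))        # unit-power source, E|s1|^2 = 1
    y = np.abs(s) ** 2                              # |s1|^2 per symbol
    Eg, Eyg = np.mean(g(y)), np.mean(y * g(y))      # E[g], E[|s1|^2 g]
    Es2 = np.mean(s ** 2)                           # pseudo-moment E[s1^2] (noncircularity)

    a1 = Eyg - Eg - np.mean(y * gp(y))
    a2 = -np.mean(np.conj(s) ** 2 * gp(y)) * Es2
    a3 = np.mean(y * g(y) ** 2) - Eyg ** 2
    a4 = (np.mean(np.conj(s) ** 2 * g(y) ** 2)
          + np.mean(np.conj(s) ** 2) * Eyg ** 2
          - 2 * Eyg * np.mean(np.conj(s) ** 2 * g(y))) * Es2

    num = (a1 ** 2 + a2 ** 2) * a3 - a1 * a2 * (a4 + np.conj(a4))
    return np.real(num / (a1 ** 2 - a2 ** 2) ** 2)

g  = lambda y: 1.0 / (0.1 + y)                      # one admissible choice of g = G'
gp = lambda y: -1.0 / (0.1 + y) ** 2
print(asymptotic_variance([-3, -1, 1, 3], g, gp))   # unit-power 4-PAM
```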

Thus, (3) can be easily obtained since \(\mathbf{q}_{-1} =\mathbf{B}\bar{\mathbf{w}}\), where \(\mathbf{B}\) is the conjugate transpose of \(\mathbf{A}\) without its first row.

Cite this article

Qian, G., Wei, P. & Liao, H. Efficient Variant of Noncircular Complex FastICA Algorithm for the Blind Source Separation of Digital Communication Signals. Circuits Syst Signal Process 35, 705–717 (2016). https://doi.org/10.1007/s00034-015-0078-5
