
Performance Study for Complex Independent Component Analysis

Chapter
Blind Source Separation

Part of the book series: Signals and Communication Technology (SCT)

Abstract

The goal of independent component analysis (ICA) is to decompose observed signals into components that are as independent as possible. In linear instantaneous blind source separation, ICA is used to separate linear instantaneous mixtures of source signals into signals that are as close as possible to the original signals. In the estimation of the so-called demixing matrix, one has to distinguish two different factors:

  1. Variance of the estimated inverse mixing matrix in the noiseless case due to randomness of the sources.

  2. Bias of the demixing matrix from the inverse mixing matrix.

This chapter studies both factors for circular and noncircular complex mixtures. It is important to note that the complex case is not directly equivalent to the real case of twice the dimension. In the derivations, we aim to clearly show the connections and differences between the complex and real cases. In the first part of the chapter, we derive a closed-form expression for the Cramér-Rao bound (CRB) of the demixing matrix for instantaneous noncircular complex mixtures. We also study the CRB numerically for the family of noncircular complex generalized Gaussian distributions (GGD) and compare it to simulation results of several ICA estimators. In the second part, we consider a linear noisy noncircular complex mixing model and derive an analytic expression for the demixing matrix of ICA based on the Kullback-Leibler divergence (KLD). We show that for a wide range of both the shape parameter and the noncircularity index of the GGD, the signal-to-interference-plus-noise ratio (SINR) of KLD-based ICA is close to that of linear minimum mean square error (MMSE) estimation. Furthermore, we show how to extend our derivations to the overdetermined case (\(M>N\)) with circular complex noise.

Sections 3.1.1 and 3.2 of this chapter are based on our previous journal publication [35]. © 2013 IEEE. Reprinted, with permission, from Loesch and Yang [35].

Sections 3.3.1–3.3.3 of this chapter are based on our previous conference publication [34]. First published in the Proceedings of the 20th European Signal Processing Conference (EUSIPCO-2012) in 2012, published by EURASIP.

An erratum to this chapter can be found at http://dx.doi.org/10.1007/978-3-642-55016-4_20


Notes

  1. See Sect. 3.1.1 for a definition.

  2. Examples of digital modulation schemes are phase shift keying (PSK), pulse amplitude modulation (PAM) or quadrature amplitude modulation (QAM).

  3. See Sect. 3.2.4 for a definition.

  4. For a large noise variance \(\sigma ^2\), the theoretical analysis cannot fully describe the behavior of KLD-based ICA since we only take into account terms of order \(\sigma ^2\). However, simulation results show that KLD-based ICA still performs similarly to linear MMSE estimation.

  5. Due to the inherent scaling ambiguity between the mixing matrix \({\mathbf A}\) and the source signals \(\mathbf{s}\), without loss of generality, we can scale \(\mathbf{s}\) and accordingly \({\mathbf A}\) such that \(\mathrm{E}\left[ |s_i|^2\right] = 1\) and \(\gamma _i \in [0,1]\).

  6. Some authors [5, 15, 47] prefer the so-called expected interference-to-source ratio (ISR) matrix whose elements \(\overline{\text {ISR}}_{ij}\) are defined (for \(i\ne j\) and unit variance sources) as \(\overline{\text {ISR}}_{ij}=\mathrm{E}\left[ \frac{\left| G_{ij}\right| ^2}{\left| G_{ii}\right| ^2}\right] \), where \(G_{ii}\) denotes the diagonal elements and \(G_{ij}\) the off-diagonal elements of \({\mathbf G}\). To compute \(\overline{\text {ISR}}_{ij}\), usually \(G_{ii} \approx 1\) (i.e., \({{\mathrm{var}}}(G_{ii}) \ll 1\)) is assumed such that \(\overline{\text {ISR}}_{ij} \approx {{\mathrm{var}}}(G_{ij})\). In this section, we do not use the ISR matrix but instead directly derive the iCRB for \({\mathbf G}\). An empirical ISR computation is sketched after these notes.

  7. Note that many alternative ICA estimators such as [7, 10, 14, 17, 20] exist.
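
As a complement to footnote 6, the following Python sketch (ours, not part of the chapter; the simulated gain matrices in `G_list` are hypothetical inputs) computes the empirical expected ISR matrix directly from its definition, without the approximation \(G_{ii} \approx 1\).

```python
# Minimal sketch: empirical expected ISR matrix from a set of global gain matrices
# G = W_hat @ A, illustrating footnote 6 (unit-variance sources assumed).
import numpy as np

def expected_isr(G_list):
    """Average |G_ij|^2 / |G_ii|^2 over a list of N x N gain matrices (i != j)."""
    G = np.stack(G_list)                          # shape (trials, N, N)
    diag = np.abs(np.einsum('tii->ti', G)) ** 2   # |G_ii|^2 per trial
    isr = np.mean(np.abs(G) ** 2 / diag[:, :, None], axis=0)
    np.fill_diagonal(isr, 0.0)                    # only off-diagonal entries are defined
    return isr
```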

References

  1. Adali, T., Li, H.: A practical formulation for computation of complex gradients and its application to maximum likelihood ICA. In: Proceedings of the IEEE Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 2, pp. II-633–II-636 (2007)
  2. Adali, T., Li, H.: Complex-valued adaptive signal processing, ch. 1. In: Adali, T., Haykin, S. (eds.) Adaptive Signal Processing: Next Generation Solutions, pp. 1–85. Wiley, New York (2010)
  3. Adali, T., Li, H., Novey, M., Cardoso, J.F.: Complex ICA using nonlinear functions. IEEE Trans. Signal Process. 56(9), 4536–4544 (2008)
  4. Adali, T., Schreier, P., Scharf, L.: Complex-valued signal processing: the proper way to deal with impropriety. IEEE Trans. Signal Process. 59(11), 5101–5125 (2011)
  5. Anderson, M., Li, X.L., Rodriquez, P.A., Adali, T.: An effective decoupling method for matrix optimization and its application to the ICA problem. In: Proceedings of the IEEE Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 1885–1888 (2012)
  6. Brandwood, D.H.: A complex gradient operator and its application in adaptive array theory. IEE Proc. 130, 11–16 (1983)
  7. Cardoso, J., Souloumiac, A.: Blind beamforming for non-Gaussian signals. IEE Proc. F Radar Signal Process. 140(6), 362–370 (1993)
  8. Cardoso, J.F.: On the performance of orthogonal source separation algorithms. In: Proceedings of the European Signal Processing Conference (EUSIPCO), pp. 776–779 (1994)
  9. Cardoso, J.F.: Blind signal separation: statistical principles. Proc. IEEE 86(10), 2009–2025 (1998)
  10. Cardoso, J.F., Adali, T.: The maximum likelihood approach to complex ICA. In: Proceedings of the IEEE Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 5, pp. 673–676 (2006)
  11. Cichocki, A., Sabala, I., Choi, S., Orsier, B., Szupiluk, R.: Self adaptive independent component analysis for sub-Gaussian and super-Gaussian mixtures with unknown number of sources and additive noise. In: Proceedings of the 1997 International Symposium on Nonlinear Theory and its Applications (NOLTA-97), vol. 2, pp. 731–734 (1997)
  12. Comon, P., Jutten, C. (eds.): Handbook of Blind Source Separation: Independent Component Analysis and Applications, 1st edn. Elsevier, Amsterdam (2010)
  13. Davies, M.: Identifiability issues in noisy ICA. IEEE Signal Process. Lett. 11(5), 470–473 (2004)
  14. De Lathauwer, L., De Moor, B.: On the blind separation of non-circular sources. In: Proceedings of the European Signal Processing Conference (EUSIPCO), vol. 2, pp. 99–102. Toulouse, France (2002)
  15. Doron, E., Yeredor, A., Tichavsky, P.: Cramér-Rao-induced bound for blind separation of stationary parametric Gaussian sources. IEEE Signal Process. Lett. 14(6), 417–420 (2007)
  16. Douglas, S., Cichocki, A., Amari, S.: A bias removal technique for blind source separation with noisy measurements. Electron. Lett. 34(14), 1379–1380 (1998)
  17. Douglas, S.C.: Fixed-point algorithms for the blind separation of arbitrary complex-valued non-Gaussian signal mixtures. EURASIP J. Appl. Signal Process. 2007(1), Article ID 36525 (2007)
  18. Eriksson, J., Koivunen, V.: Complex-valued ICA using second order statistics, pp. 183–192 (2004)
  19. Eriksson, J., Koivunen, V.: Complex random vectors and ICA models: identifiability, uniqueness, and separability. IEEE Trans. Inf. Theory 52(3), 1017–1029 (2006)
  20. Fiori, S.: Neural independent component analysis by maximum-mismatch learning principle. Neural Netw. 16(8), 1201–1221 (2003)
  21. Hjørungnes, A.: Complex-Valued Matrix Derivatives. Cambridge University Press, Cambridge (2011)
  22. Horn, R.A., Johnson, C.R.: Matrix Analysis, 1st edn. (1985), 10th printing. Cambridge University Press, Cambridge (1999)
  23. Hyvärinen, A.: Independent component analysis in the presence of Gaussian noise by maximizing joint likelihood. Neurocomputing 22, 49–67 (1998)
  24. Jagannatham, A., Rao, B.: Cramér-Rao lower bound for constrained complex parameters. IEEE Signal Process. Lett. 11(11), 875–878 (2004)
  25. Joho, M., Mathis, H., Lambert, R.H.: Overdetermined blind source separation: using more sensors than source signals in a noisy mixture. In: Proceedings of the International Conference on Independent Component Analysis and Blind Source Separation (ICA), pp. 81–86 (2000)
  26. Koldovsky, Z., Tichavsky, P.: Methods of fair comparison of performance of linear ICA techniques in presence of additive noise. In: Proceedings of the IEEE Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 5, pp. 873–876 (2006)
  27. Koldovsky, Z., Tichavsky, P.: Asymptotic analysis of bias of FastICA-based algorithms in the presence of additive noise. Technical Report 2181, UTIA, AV CR (2007)
  28. Koldovsky, Z., Tichavsky, P.: Blind instantaneous noisy mixture separation with best interference-plus-noise rejection. In: Proceedings of the International Conference on Independent Component Analysis and Blind Source Separation (ICA), pp. 730–737 (2007)
  29. Li, H., Adali, T.: Algorithms for complex ML ICA and their stability analysis using Wirtinger calculus. IEEE Trans. Signal Process. 58(12) (2010)
  30. Li, X.L., Adali, T.: Complex independent component analysis by entropy bound minimization. IEEE Trans. Circ. Syst. I Regul. Pap. 57(7), 1417–1430 (2010)
  31. Loesch, B.: Complex blind source separation with audio applications. Ph.D. thesis, University of Stuttgart (2013). http://www.hut-verlag.de/9783843911214.html
  32. Loesch, B., Yang, B.: On the relation between ICA and MMSE based source separation. In: Proceedings of the IEEE Conference on Acoustics, Speech, and Signal Processing (ICASSP), pp. 3720–3723 (2011)
  33. Loesch, B., Yang, B.: Cramér-Rao bound for circular complex independent component analysis. In: Proceedings of the International Conference on Latent Variable Analysis and Signal Separation (LVA/ICA), pp. 42–49 (2012)
  34. Loesch, B., Yang, B.: On the solution of circular and noncircular complex KLD-ICA in the presence of noise. In: Proceedings of the European Signal Processing Conference (EUSIPCO), pp. 1479–1483 (2012)
  35. Loesch, B., Yang, B.: Cramér-Rao bound for circular and noncircular complex independent component analysis. IEEE Trans. Signal Process. 61(2), 365–379 (2013)
  36. Mandic, D.P., Goh, V.S.L.: Complex Valued Nonlinear Adaptive Filters: Noncircularity, Widely Linear, and Neural Models, 1st edn. Wiley, Chichester (2009)
  37. Novey, M., Adali, T.: ICA by maximization of nongaussianity using complex functions. In: Proceedings of the IEEE Workshop on Machine Learning for Signal Processing (MLSP), pp. 21–26 (2005)
  38. Novey, M., Adali, T.: Adaptable nonlinearity for complex maximization of nongaussianity and a fixed-point algorithm. In: Proceedings of the IEEE Workshop on Machine Learning for Signal Processing (MLSP), pp. 79–84 (2006)
  39. Novey, M., Adali, T.: On extending the complex FastICA algorithm to noncircular sources. IEEE Trans. Signal Process. 56(5), 2148–2154 (2008)
  40. Novey, M., Adali, T., Roy, A.: A complex generalized Gaussian distribution—characterization, generation, and estimation. IEEE Trans. Signal Process. 58(3), 1427–1433 (2010)
  41. Ollila, E., Kim, H.J., Koivunen, V.: Compact Cramér-Rao bound expression for independent component analysis. IEEE Trans. Signal Process. 56(4), 1421–1428 (2008)
  42. Ollila, E., Koivunen, V., Eriksson, J.: On the Cramér-Rao bound for the constrained and unconstrained complex parameters, pp. 414–418 (2008)
  43. Remmert, R.: Theory of Complex Functions. Graduate Texts in Mathematics. Springer, New York (1991)
  44. Schreier, P.J., Scharf, L.L.: Statistical Signal Processing of Complex-Valued Data: The Theory of Improper and Noncircular Signals. Cambridge University Press, Cambridge (2010)
  45. Tichavsky, P., Koldovsky, Z., Oja, E.: Performance analysis of the FastICA algorithm and Cramér-Rao bounds for linear independent component analysis. IEEE Trans. Signal Process. 54(4) (2006)
  46. Wirtinger, W.: Zur formalen Theorie der Funktionen von mehr komplexen Veränderlichen. Math. Ann. 97(1), 357–375 (1927)
  47. Yeredor, A.: Blind separation of Gaussian sources with general covariance structures: bounds and optimal estimation. IEEE Trans. Signal Process. 58(10), 5057–5068 (2010)
  48. Yeredor, A.: Performance analysis of the strong uncorrelating transformation in blind separation of complex-valued sources. IEEE Trans. Signal Process. 60(1), 478–483 (2012)
  49. Zhang, L.Q., Cichocki, A., Amari, S.: Natural gradient algorithm for blind separation of overdetermined mixture with additive noise. IEEE Signal Process. Lett. 6(11), 293–295 (1999)
  50. Zhu, X.L., Zhang, X.D., Ye, J.M.: A generalized contrast function and stability analysis for overdetermined blind separation of instantaneous mixtures. Neural Comput. 18(3), 709–728 (2006)


Author information

Correspondence to Benedikt Loesch.

Appendices

Appendix 1

1.1 Values of \(\kappa \), \(\xi \), \(\beta \), \(\eta \) for Complex GGD

The pdf of a noncircular complex GGD with zero mean, variance \(\mathrm{E}[|s|^2]=1\) and noncircularity index \(\gamma \in [0,1]\) is given by

$$\begin{aligned} p(s,s^*) = \frac{c \alpha \cdot \exp \left( \!-\!\left[ \frac{\alpha /2}{\gamma ^2-1} \left( \gamma s^2 + \gamma {s^*}^2 - 2 s s^*\right) \right] ^c\right) }{\pi \varGamma (1/c)(1-\gamma ^2)^{1/2}} , \end{aligned}$$
(3.84)

where \(\alpha = \varGamma (2/c)/\varGamma (1/c)\) and \(\varGamma (\cdot )\) is the Gamma function. The function \(\varphi (s,s^*) = - \frac{\partial }{\partial s^*} \ln p(s,s^*)\) is then given by

$$\begin{aligned} \varphi (s,s^*) = \frac{2c (\alpha /2)^c}{(\gamma ^2-1)^c} \left( \gamma s^2 + \gamma (s^*)^2 - 2ss^*\right) ^{c-1} (\gamma s^* -s). \end{aligned}$$
(3.85)

By integration in polar coordinates, it can be shown that \(\kappa \), \(\xi \), \(\beta \) and \(\eta \) are given by:

$$\begin{aligned} \kappa&= \mathrm{E}\left[ |\varphi (s)|^2\right] = \frac{c^2 \varGamma (2/c)}{(1-\gamma ^2) \varGamma ^2(1/c)}, \end{aligned}$$
(3.86)
$$\begin{aligned} \xi&= \mathrm{E}\left[ (\varphi ^*(s))^2\right] = -\frac{c^2 \gamma \varGamma (2/c)}{(1-\gamma ^2) \varGamma ^2(1/c)} = - \gamma \kappa , \end{aligned}$$
(3.87)
$$\begin{aligned} \eta&= \mathrm{E}\left[ |s|^2 |\varphi (s)|^2\right] = \frac{(c+1) \cdot (2-\gamma ^2)}{2 (1-\gamma ^2)} ,\end{aligned}$$
(3.88)
$$\begin{aligned} \beta&= \mathrm{E}\left[ s^2 (\varphi ^*(s))^2\right] =\frac{(c+1) \cdot (2-3 \gamma ^2)}{2 (1-\gamma ^2)} . \end{aligned}$$
(3.89)
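
The closed-form expressions (3.86)–(3.89) are easy to evaluate numerically. The following Python sketch (ours, not part of the chapter) implements them and, for the circular Gaussian special case \(c=1\), \(\gamma =0\), where \(\varphi (s)=s\), cross-checks \(\kappa \) and \(\eta \) by Monte Carlo simulation.

```python
# Sketch: evaluate (3.86)-(3.89) for the noncircular complex GGD with shape c and
# noncircularity gamma; cross-check the circular Gaussian case c = 1, gamma = 0,
# where phi(s) = s, so kappa = E[|s|^2] = 1 and eta = E[|s|^4] = 2.
import numpy as np
from scipy.special import gamma as Gamma

def ggd_moments(c, g):
    """Return (kappa, xi, eta, beta) from the closed-form expressions."""
    kappa = c**2 * Gamma(2.0 / c) / ((1.0 - g**2) * Gamma(1.0 / c) ** 2)  # (3.86)
    xi = -g * kappa                                                       # (3.87)
    eta = (c + 1.0) * (2.0 - g**2) / (2.0 * (1.0 - g**2))                 # (3.88)
    beta = (c + 1.0) * (2.0 - 3.0 * g**2) / (2.0 * (1.0 - g**2))          # (3.89)
    return kappa, xi, eta, beta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # circular complex Gaussian with E[|s|^2] = 1: s = (x + jy)/sqrt(2), x, y ~ N(0, 1)
    s = (rng.standard_normal(200_000) + 1j * rng.standard_normal(200_000)) / np.sqrt(2)
    kappa, xi, eta, beta = ggd_moments(c=1.0, g=0.0)
    print(kappa, np.mean(np.abs(s) ** 2))   # both close to 1
    print(eta, np.mean(np.abs(s) ** 4))     # both close to 2
```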

1.2 Induced CRB for Real ICA

Here, we briefly review the iCRB for real ICA [41, 45]. In the following, all real quantities \(q\) are denoted as \(\mathring{q}\). In the derivation of the iCRB for the real case, \(\mathring{\varphi }(\mathring{s}) = - \partial \ln p(\mathring{s}) / \partial \mathring{s}\) and the parameters \(\mathring{\kappa }=\mathrm{E}\left[ \mathring{\varphi }^2(\mathring{s})\right] \), \(\mathring{\eta }=\mathrm{E}\left[ \mathring{s}^2 \mathring{\varphi }^2(\mathring{s})\right] = 2+\mathrm{E}\left[ \mathring{s}^2 \frac{\partial \mathring{\varphi }(\mathring{s})}{\partial \mathring{s}}\right] \) are defined using real derivatives. In [41, 45] it was shown that

$$\begin{aligned} {{\mathrm{var}}}(\hat{G}_{ii})&\ge \frac{1}{L (\mathring{\eta }_i-1)},\end{aligned}$$
(3.90)
$$\begin{aligned} {{\mathrm{var}}}(\hat{G}_{ij})&\ge \frac{1}{L} \frac{\mathring{\kappa }_j}{\mathring{\kappa }_i \mathring{\kappa }_j -1}. \end{aligned}$$
(3.91)
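
As a concrete illustration (ours, not from [41, 45]), the bounds (3.90) and (3.91) can be evaluated for given source statistics; the values \(\mathring{\kappa }=\mathring{\eta }=2\) used below correspond to unit-variance real Laplacian sources, for which \(\mathring{\varphi }(\mathring{s})=\sqrt{2}\,{{\mathrm{sign}}}(\mathring{s})\).

```python
# Sketch: evaluate the real-valued iCRB (3.90)-(3.91) for given source statistics.
# The example values kappa = eta = 2 correspond to unit-variance Laplacian sources.
def icrb_real(kappa_i, kappa_j, eta_i, L):
    var_Gii = 1.0 / (L * (eta_i - 1.0))                    # (3.90)
    var_Gij = kappa_j / (L * (kappa_i * kappa_j - 1.0))    # (3.91)
    return var_Gii, var_Gij

print(icrb_real(kappa_i=2.0, kappa_j=2.0, eta_i=2.0, L=1000))  # (0.001, 0.000666...)
```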

Appendix 2

Here we derive an analytic expression for \({\mathbf W}_{\text {ICA}}\) in the presence of noise by using a perturbation analysis. Motivated by \({\mathbf W}_{\text {ICA}} \mathop {=}\limits ^{\sigma ^2=0} {\mathbf A}^{-1}\), we assume that \({\mathbf W}_{\text {ICA}}\) can be written as \({\mathbf W}_{\text {ICA}} = {\mathbf A}^{-1} + \sigma ^2 {\mathbf B} + \fancyscript{O}(\sigma ^4)\) and derive \({\mathbf B}\) by a two-step perturbation analysis:

  1. Taylor series approximation of \(\mathrm{E}({\pmb {\varphi }}^*({\mathbf{y}}) {\mathbf{y}}^T)\) in (3.51) at \({\mathbf{y}}={\hat{\varvec{y}}} = {{\mathbf W}}_{\text {ICA}} {{\mathbf A}} {\mathbf{s}}\),

  2. Taylor series approximation of the result of the above step by exploiting \({\mathbf W}_{\text {ICA}} = {\mathbf A}^{-1}+\sigma ^2 {\mathbf B} + \fancyscript{O}(\sigma ^4)\) and \(\hat{\mathbf{y}} = {\mathbf{s}} + \sigma ^2 {\mathbf B} {\mathbf A} {\mathbf{s}}+\fancyscript{O}(\sigma ^4) = {\mathbf{s}} + \sigma ^2 {\mathbf C} {\mathbf{s}}+\fancyscript{O}(\sigma ^4)={\mathbf{s}}+\sigma ^2 {\mathbf{b}} + \fancyscript{O}(\sigma ^4)\) with \({\mathbf C}={\mathbf B}{\mathbf A}\) and \(\mathbf{b} = {\mathbf C} \mathbf{s} = [b_1, \dots , b_N]^T\).

In this way, we determine explicitly the deviation \(\sigma ^2{\mathbf B}\) of \({\mathbf W}_{\text {ICA}}\) from the inverse solution \({\mathbf A}^{-1}\).

The general Taylor series expansion of \(\varphi ^*(y) \mathop {\widehat{=}} \varphi ^*(y,y^*)\) is given as

$$\begin{aligned} \varphi ^*(y,y^*)&=\varphi ^*(\hat{y},\hat{y}^*) + \frac{\partial \varphi ^*}{\partial y} \Delta y + \frac{\partial \varphi ^*}{\partial y^*} \Delta y^* + \frac{1}{2}\left( \frac{\partial ^2 \varphi ^*}{(\partial y)^2} (\Delta y)^2 + \frac{\partial ^2 \varphi ^*}{(\partial y^*)^2} (\Delta y^*)^2\right) \nonumber \\&\quad + \frac{\partial ^2 \varphi ^*}{\partial y \partial y^*} \Delta y \Delta y^* + \ldots \nonumber \\&= \varphi ^*(\hat{y},\hat{y}^*) + \varpi (y,y^*) \Delta y + \vartheta (y,y^*) \Delta y^* \nonumber \\&\quad +\frac{1}{2}\left( \nu (y,y^*) (\Delta y)^2 + \zeta (y,y^*) (\Delta y^*)^2\right) + \epsilon (y,y^*) \Delta y \Delta y^* + \ldots \end{aligned}$$
(3.92)

with \(\varpi (y,y^*) = \frac{\partial \varphi ^*}{\partial y}\), \(\vartheta (y,y^*) = \frac{\partial \varphi ^*}{\partial y^*}\), \(\nu (y,y^*) = \frac{\partial ^2 \varphi ^*}{(\partial y)^2}\), \(\zeta (y,y^*) = \frac{\partial ^2 \varphi ^*}{(\partial y^*)^2}\) and \(\epsilon (y,y^*) = \frac{\partial ^2 \varphi ^*}{\partial y \partial y^*}\). To simplify notation, we will drop the dependence of \(\varphi ^*(\cdot )\), \(\varpi (\cdot )\), \(\vartheta (\cdot )\), \(\nu (\cdot )\), \(\zeta (\cdot )\), \(\epsilon (\cdot )\) on \(y^*\) and keep only the dependence on \(y\) in the following.
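
To make the expansion (3.92) concrete, the following small numerical check (our illustration; the test function \(f(y,y^*)=y^2 y^*\) is an arbitrary stand-in for \(\varphi ^*\)) compares the second-order Wirtinger Taylor approximation with the exact value; the residual equals \((\Delta y)^2 \Delta y^*\) and is thus of third order.

```python
# Sketch: second-order Wirtinger-calculus Taylor expansion (3.92) applied to the test
# function f(y, y*) = y^2 y*. Its Wirtinger derivatives are
# f_y = 2 y y*, f_y* = y^2, f_yy = 2 y*, f_y*y* = 0, f_yy* = 2 y.
import numpy as np

def f(y):
    return y**2 * np.conj(y)

def taylor2(yhat, dy):
    yc, dyc = np.conj(yhat), np.conj(dy)
    return (f(yhat)
            + 2 * yhat * yc * dy + yhat**2 * dyc      # first-order terms
            + 0.5 * (2 * yc) * dy**2                  # (1/2) f_yy (dy)^2; f_y*y* = 0
            + 2 * yhat * dy * dyc)                    # mixed term f_yy* dy dy*

yhat, dy = 0.7 + 0.3j, 0.01 - 0.02j
err = f(yhat + dy) - taylor2(yhat, dy)
print(err, dy**2 * np.conj(dy))   # identical: the residual is exactly (dy)^2 dy*
```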

Let

$$\begin{aligned} \rho _i&=\mathrm{E}\left[ \varpi _i(s_i) s_i^2\right] , \qquad \delta _i = \mathrm{E}\left[ \vartheta _i(s_i) s_i^* s_i\right] ,\end{aligned}$$
(3.93)
$$\begin{aligned} \kappa _i&=\mathrm{E}\left[ \vartheta _i(s_i)\right] , \qquad \quad \quad \xi _i = \mathrm{E}\left[ \varpi _i(s_i)\right] , \end{aligned}$$
(3.94)
$$\begin{aligned} \omega _i&=\mathrm{E}\left[ \nu _i(s_i) s_i\right] , \qquad \quad \tau _i =\mathrm{E}\left[ \zeta _i(s_i) s_i\right] , \end{aligned}$$
(3.95)
$$\begin{aligned} \lambda _i&= \mathrm{E}\left[ \epsilon _i(s_i) s_i\right] , \qquad \quad \gamma _i = \mathrm{E}[s_i^2]. \end{aligned}$$
(3.96)

As shown in [31, 34], \({\mathbf W}_{\text {ICA}} = ({\mathbf I} + \sigma ^2 {\mathbf C}){\mathbf A}^{-1}\), where the elements of \({\mathbf C}\) can be computed from

$$\begin{aligned} \rho _i C_{ii} + \delta _i C_{ii}^* + C_{ii} = - (\kappa _i + \lambda _i )\left[ {\mathbf R}_{-1}\right] _{ii} - (\xi _i + \frac{1}{2}\omega _i) \left[ \bar{{\mathbf R}}_{-1}\right] _{ii} - \frac{1}{2}\tau _i \left[ \bar{{\mathbf R}}_{-1}\right] _{ii}^* \end{aligned}$$
(3.97)

and

$$\begin{aligned} \gamma _j \xi _i C_{ij} + \kappa _i C_{ij}^* + C_{ji}&= - \kappa _i \left[ {\mathbf R}_{-1}\right] _{ij}^* - \xi _i \left[ \bar{{\mathbf R}}_{-1}\right] _{ij},\nonumber \\ \gamma _i \xi _j C_{ji} + \kappa _j C_{ji}^* + C_{ij}&= - \kappa _j \left[ {\mathbf R}_{-1}\right] _{ji}^* - \xi _j \left[ \bar{{\mathbf R}}_{-1}\right] _{ji}, \end{aligned}$$
(3.98)

with the transformed noise covariance matrix \({\mathbf R}_{-1} = {\mathbf W} {\mathbf R}_{\mathbf{v}} {\mathbf W}^H ={\mathbf A}^{-1} {\mathbf R}_{\mathbf{v}} {\mathbf A}^{-H} + \fancyscript{O}(\sigma ^2)\) and the transformed noise pseudo-covariance matrix \(\bar{{\mathbf R}}_{-1}\!=\! {\mathbf W} \bar{{\mathbf R}}_{\mathbf{v}} {\mathbf W}^T = {\mathbf A}^{-1} \bar{{\mathbf R}}_{\mathbf{v}} {\mathbf A}^{-T} + \fancyscript{O}(\sigma ^2)\). Note that \({\mathbf R}_{-1}^H = {\mathbf R}_{-1}\) and \(\bar{{\mathbf R}}_{-1}^T = \bar{{\mathbf R}}_{-1}\).

If \(p(s,s^*)\) is symmetric in the real part \(\mathfrak {R}s\) or imaginary part \(\mathfrak {I}s\) of \(s\), i.e., \(p(-\mathfrak {R}s, \mathfrak {I}s)=p(\mathfrak {R}s, \mathfrak {I}s)\) or \(p(\mathfrak {R}s, -\mathfrak {I}s)=p(\mathfrak {R}s, \mathfrak {I}s)\), the parameters \(\kappa _i\), \(\rho _i\), \(\delta _i\), \(\lambda _i\), \(\xi _i\), \(\omega _i\), \(\tau _i\) are real. For \(\rho _i + 1 \pm \delta _i \ne 0\), we then get from (3.97)

$$\begin{aligned} \mathfrak {R}{C_{ii}}&= - \frac{(\kappa _i + \lambda _i)\left[ {\mathbf R}_{-1}\right] _{ii} + (\xi _i + \frac{1}{2}(\omega _i + \tau _i) )\left[ \mathfrak {R}\bar{{\mathbf R}}_{-1}\right] _{ii}}{\rho _i + 1 + \delta _i}, \nonumber \\ \mathfrak {I}{C_{ii}}&= - \frac{(\xi _i + \frac{1}{2}(\omega _i - \tau _i) )\left[ \mathfrak {I}\bar{{\mathbf R}}_{-1}\right] _{ii}}{\rho _i + 1 - \delta _i}. \end{aligned}$$
(3.99)

For \((\gamma _j \xi _i+\kappa _i)(\gamma _i \xi _j + \kappa _j) \ne 1\) and \((\gamma _j \xi _i-\kappa _i)(\gamma _i \xi _j - \kappa _j) \ne 1\), we obtain from (3.98)

$$\begin{aligned} \mathfrak {R}{C_{ij}}&=\frac{(\kappa _j - \kappa _i ( \gamma _i \xi _j + \kappa _j)) \left[ \mathfrak {R}{\mathbf R}_{-1}\right] _{ij} + ( \xi _j - \xi _i (\gamma _i \xi _j +\kappa _j)) \left[ \mathfrak {R}\bar{{\mathbf R}}_{-1}\right] _{ij}}{(\gamma _j \xi _i+\kappa _i)(\gamma _i \xi _j + \kappa _j)- 1}, \nonumber \\ \mathfrak {I}{C_{ij}}&=\frac{(\kappa _j + \kappa _i ( \gamma _i \xi _j - \kappa _j) )\left[ \mathfrak {I}{\mathbf R}_{-1}\right] _{ij} + ( \xi _j - \xi _i (\gamma _i \xi _j -\kappa _j)) \left[ \mathfrak {I}\bar{{\mathbf R}}_{-1}\right] _{ij}}{(\gamma _j \xi _i-\kappa _i)(\gamma _i \xi _j - \kappa _j)- 1}. \end{aligned}$$
(3.100)
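
For this symmetric-pdf case, (3.99) and (3.100) determine \({\mathbf C}\) entry by entry. The following Python sketch (ours, not from the chapter; the function name and its argument conventions are our own choices) assembles \({\mathbf C}\) from the per-source moments (3.93)–(3.96) and returns the resulting first-order approximation \({\mathbf W}_{\text {ICA}} \approx ({\mathbf I} + \sigma ^2 {\mathbf C}){\mathbf A}^{-1}\).

```python
# Sketch: assemble C from (3.99)-(3.100) and return the first-order approximation
# W_ICA = (I + sigma^2 C) @ inv(A), valid to O(sigma^2). It assumes a source pdf that
# is symmetric in its real or imaginary part, so that all moment parameters are real.
import numpy as np

def kld_ica_demixing(A, Rv, Rv_bar, sigma2, kap, xi, rho, delta, omega, tau, lam, gam):
    """A: N x N mixing matrix; Rv, Rv_bar: noise covariance and pseudo-covariance;
    kap, xi, rho, delta, omega, tau, lam, gam: length-N arrays of the per-source
    moments defined in (3.93)-(3.96) (inputs supplied by the user)."""
    N = A.shape[0]
    Ainv = np.linalg.inv(A)
    R = Ainv @ Rv @ Ainv.conj().T    # transformed noise covariance R_{-1} (to O(sigma^2))
    Rb = Ainv @ Rv_bar @ Ainv.T      # transformed noise pseudo-covariance Rbar_{-1}
    C = np.zeros((N, N), dtype=complex)
    for i in range(N):
        # diagonal entries from (3.99); [R_{-1}]_ii is real since R_{-1} is Hermitian
        re = -((kap[i] + lam[i]) * R[i, i].real
               + (xi[i] + 0.5 * (omega[i] + tau[i])) * Rb[i, i].real) / (rho[i] + 1.0 + delta[i])
        im = -((xi[i] + 0.5 * (omega[i] - tau[i])) * Rb[i, i].imag) / (rho[i] + 1.0 - delta[i])
        C[i, i] = re + 1j * im
        for j in range(N):
            if j == i:
                continue
            # off-diagonal entries from (3.100)
            num_re = ((kap[j] - kap[i] * (gam[i] * xi[j] + kap[j])) * R[i, j].real
                      + (xi[j] - xi[i] * (gam[i] * xi[j] + kap[j])) * Rb[i, j].real)
            den_re = (gam[j] * xi[i] + kap[i]) * (gam[i] * xi[j] + kap[j]) - 1.0
            num_im = ((kap[j] + kap[i] * (gam[i] * xi[j] - kap[j])) * R[i, j].imag
                      + (xi[j] - xi[i] * (gam[i] * xi[j] - kap[j])) * Rb[i, j].imag)
            den_im = (gam[j] * xi[i] - kap[i]) * (gam[i] * xi[j] - kap[j]) - 1.0
            C[i, j] = num_re / den_re + 1j * num_im / den_im
    return (np.eye(N) + sigma2 * C) @ Ainv
```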

Copyright information

© 2014 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Loesch, B., Yang, B. (2014). Performance Study for Complex Independent Component Analysis. In: Naik, G., Wang, W. (eds) Blind Source Separation. Signals and Communication Technology. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-55016-4_3

  • DOI: https://doi.org/10.1007/978-3-642-55016-4_3

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-55015-7

  • Online ISBN: 978-3-642-55016-4
