
Analysis of Incremental Augmented Affine Projection Algorithm for Distributed Estimation of Complex-Valued Signals

Published in: Circuits, Systems, and Signal Processing

Abstract

In this paper, we address the problem of distributed estimation over an incremental network in which the measurements taken by the nodes follow a widely linear model. The proposed algorithm, which we refer to as the incremental augmented affine projection algorithm (incAAPA), utilizes the full second-order statistical information in the complex domain. Moreover, it exploits spatio-temporal diversity to improve the estimation performance. We derive a steady-state performance metric for the incAAPA in terms of the mean-square deviation, and we further derive sufficient conditions that ensure mean-square convergence. Our analysis shows that the proposed algorithm is able to process both second-order circular (proper) and non-circular (improper) signals. The validity of the theoretical results and the good performance of the proposed algorithm are demonstrated by several computer simulations.


Figs. 1–8 (available in the published article)


Notes

  1. For any matrices of compatible dimensions \(\{{\mathbf {Z}}_1, {\mathbf {Z}}_2, {\varvec{\Sigma }}_k\}\), it holds that

    $$\begin{aligned} {\mathsf {vec}}\{{\mathbf {Z}}_1 {\varvec{\Sigma }}_k {\mathbf {Z}}_2\} =\Big ({\mathbf {Z}}_2^{{\mathsf {T}}} \otimes {\mathbf {Z}}_1\Big ) {\mathsf {vec}}\{{\varvec{\Sigma }}_k\} \end{aligned}$$
  2. Note that \({\mathbf {x}}_{k,i}=[x_{k,i}(1), x_{k,i}(2), \ldots , x_{k,i}(L)]\).
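The vectorization identity in Note 1 can be checked numerically. A minimal sketch, assuming the standard column-stacking definition of \({\mathsf {vec}}\):

```python
import numpy as np

rng = np.random.default_rng(0)
Z1 = rng.standard_normal((3, 4))
Sigma = rng.standard_normal((4, 5))
Z2 = rng.standard_normal((5, 2))

def vec(M):
    # Column-stacking vectorization (Fortran order).
    return M.flatten(order="F")

lhs = vec(Z1 @ Sigma @ Z2)            # vec(Z1 Sigma Z2)
rhs = np.kron(Z2.T, Z1) @ vec(Sigma)  # (Z2^T kron Z1) vec(Sigma)
print(np.allclose(lhs, rhs))          # True
```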


Author information

Correspondence to Azam Khalili.

Appendices

Appendix A

To develop a distributed solution for (5), we convert the constrained minimization into an unconstrained one:

$$\begin{aligned} J({\mathbf {h}}_i,{\mathbf {g}}_i)\triangleq f({\mathbf {h}}_i,{\mathbf {g}}_i) + \sum _{k=1}^{K}{{\mathop {\mathfrak {R}}\nolimits }} \left\{ {{{\left( {{\mathbf {d}}_{k,i}} - {\mathbf {X}}_{k,i}^{{\mathsf {T}}} {{\mathbf {h}}_{i}} - {\mathbf {X}}_{k,i}^{{\mathsf {H}}}{{\mathbf {g}}_{i}}\right) }^{{\mathsf {H}}}} {\pmb {\beta }}_{k,i}}\right\} \end{aligned}$$
(50)

where the \(T \times 1\) vector \({\pmb {\beta }}_{k,i}\) comprises the Lagrange multipliers. Note that (50) is a real-valued function of the complex variables \({\mathbf {h}}\) and \({\mathbf {g}}\). Each node can use its local data \(\{d_{k}(i), {\mathbf {x}}_{k,i}\}\) to calculate an estimate of \(\{{\mathbf {h}}^{\circ }, {\mathbf {g}}^{\circ }\}\) on its own; however, when a set of nodes has access to data, we can exploit node cooperation and space-time diversity to improve the estimation performance. To solve (50), we first compute the gradients of \(J({\mathbf {h}}_i,{\mathbf {g}}_i)\) with respect to the conjugate weight vectors \({{\mathbf {h}}^{*}}\) and \({{\mathbf {g}}^{*}}\), which are defined as

$$\begin{aligned} \nabla _{{{\mathbf {h}}^{*}}}J({\mathbf {h}}_i,{\mathbf {g}}_i)= & {} \frac{\partial J({\mathbf {h}}_i,{\mathbf {g}}_i)}{\partial {\mathbf {h}}_{i}^{*}}= \frac{1}{2}\left( {\frac{{\partial J({\mathbf {h}}_i,{\mathbf {g}}_i)}}{{\partial {\mathbf {h}}^{\mathfrak {R}}_{i}}} + j \frac{{\partial J({\mathbf {h}}_i,{\mathbf {g}}_i)}}{{\partial {\mathbf {h}}_{i}^{{{\mathop {\mathfrak {I}}\nolimits }}}}}} \right) \end{aligned}$$
(51)
$$\begin{aligned} \nabla _{{\mathbf {g}}^{*}}J({\mathbf {h}}_i,{\mathbf {g}}_i)= & {} \frac{\partial J({\mathbf {h}}_i,{\mathbf {g}}_i)}{\partial {\mathbf {g}}_{i}^{*}}= \frac{1}{2}\left( {\frac{{\partial J({\mathbf {h}}_i,{\mathbf {g}}_i)}}{{\partial {\mathbf {g}}_{i}^{\mathfrak {R}}}} + j\frac{{\partial J({\mathbf {h}}_i,{\mathbf {g}}_i)}}{{\partial {\mathbf {g}}_{i}^{\mathop {\mathfrak {I}}\nolimits }}}} \right) \end{aligned}$$
(52)
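The Wirtinger derivative definitions (51)–(52) can be sanity-checked by finite differences. A sketch for the illustrative cost \(J({\mathbf {h}})=|d-{\mathbf {x}}^{{\mathsf {T}}}{\mathbf {h}}|^2\) (a simplified stand-in, not the full cost (50)), whose gradient with respect to \({\mathbf {h}}^{*}\) is \(-{\mathbf {x}}^{*}e\) with \(e=d-{\mathbf {x}}^{{\mathsf {T}}}{\mathbf {h}}\):

```python
import numpy as np

rng = np.random.default_rng(1)
L = 4
x = rng.standard_normal(L) + 1j * rng.standard_normal(L)
h = rng.standard_normal(L) + 1j * rng.standard_normal(L)
d = rng.standard_normal() + 1j * rng.standard_normal()

def J(h):
    # Real-valued cost of a complex vector: |d - x^T h|^2
    e = d - x @ h
    return (e * np.conj(e)).real

# Analytic Wirtinger gradient: dJ/dh* = -x* e
e = d - x @ h
grad_analytic = -np.conj(x) * e

# Numerical gradient via the definition in (51): 0.5 * (dJ/dh_R + j dJ/dh_I)
eps = 1e-6
grad_numeric = np.zeros(L, dtype=complex)
for n in range(L):
    dr = np.zeros(L)
    dr[n] = eps
    gR = (J(h + dr) - J(h - dr)) / (2 * eps)        # real-part derivative
    gI = (J(h + 1j * dr) - J(h - 1j * dr)) / (2 * eps)  # imaginary-part derivative
    grad_numeric[n] = 0.5 * (gR + 1j * gI)

print(np.allclose(grad_analytic, grad_numeric, atol=1e-5))  # True
```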

The required gradients are given by

$$\begin{aligned} \frac{\partial J({\mathbf {h}}_i,{\mathbf {g}}_i)}{\partial {\mathbf {h}}_{i}^{*}}&= {\mathbf {h}}_{i} - {\mathbf {h}}_{i-1} -\frac{1}{2}\sum _{k=1}^{K}{{\mathbf {X}}_{k,i}^{*}} {\pmb {\beta }}_{k,i} \end{aligned}$$
(53)
$$\begin{aligned} \frac{\partial J({\mathbf {h}}_i,{\mathbf {g}}_i)}{\partial {\mathbf {g}}_{i}^{*}}&= {\mathbf {g}}_{i} - {\mathbf {g}}_{i-1} -\frac{1}{2}\sum _{k=1}^{K}{{\mathbf {X}}_{k,i}} {\pmb {\beta }}_{k,i} \end{aligned}$$
(54)

Setting the gradients of \(J({\mathbf {h}}_i,{\mathbf {g}}_i)\) with respect to \({{\mathbf {h}}^{*}}\) and \({{\mathbf {g}}^{*}}\) equal to zero, we get

$$\begin{aligned} {{\mathbf {h}}_{i}}&= {{\mathbf {h}}_{i-1}} + \frac{1}{2}\sum _{k=1}^{K}{{\mathbf {X}}_{k,i}^{*}} {\pmb {\beta }}_{k,i} \end{aligned}$$
(55)
$$\begin{aligned} {{\mathbf {g}}_{i}}&= {{\mathbf {g}}_{i-1}} + \frac{1}{2}\sum _{k=1}^{K}{{\mathbf {X}}_{k,i}} {\pmb {\beta }}_{k,i} \end{aligned}$$
(56)

Clearly, the solution given by (55) and (56) is not a distributed solution for (50). To obtain a distributed and adaptive solution for (50), we apply the following modifications:

  1. Split the update equations (55) and (56) into K separate steps, where each step adds one term of the summation. Then, for every \(k \in {\mathcal {K}}\), we have

    $$\begin{aligned} {{\mathbf {h}}_{1,i}}&= {{\mathbf {h}}_{i-1}},\ \ \ {{\mathbf {g}}_{1,i}}= {{\mathbf {g}}_{i-1}} \end{aligned}$$
    (57a)
    $$\begin{aligned} {{\mathbf {h}}_{k,i}}&= {{\mathbf {h}}_{k-1,i}} + \frac{1}{2}{{\mathbf {X}}_{k,i}^{*}} {\pmb {\beta }}_{k,i} \end{aligned}$$
    (57b)
    $$\begin{aligned} {{\mathbf {g}}_{k,i}}&= {{\mathbf {g}}_{k-1,i}} + \frac{1}{2}{{\mathbf {X}}_{k,i}} {\pmb {\beta }}_{k,i} \end{aligned}$$
    (57c)
    $$\begin{aligned} {{\mathbf {h}}_{i}}&= {{\mathbf {h}}_{K,i}}, \quad {{\mathbf {g}}_{i}}= {{\mathbf {g}}_{K,i}} \end{aligned}$$
    (57d)

    where \({\mathbf {h}}_{k,i}\) and \({\mathbf {g}}_{k,i}\) denote the local estimates of \({\mathbf {h}}^{\circ }\) and \({\mathbf {g}}^{\circ }\) at node k and time i, respectively.

  2. Eliminate the Lagrange multiplier vectors \({\pmb {\beta }}_{k,i}\). To this end, we substitute (57b) and (57c) into the constraint relation of Eq. (5) to obtain

    $$\begin{aligned}&{{\mathbf {d}}_{k,i}} - {\mathbf {X}}_{k,i}^{{\mathsf {T}}} \Big ({{\mathbf {h}}_{k-1,i}} + \frac{1}{2}{{\mathbf {X}}_{k,i}^{*}} {\pmb {\beta }}_{k,i} \Big ) \nonumber \\&\quad -{\mathbf {X}}_{k,i}^{{\mathsf {H}}} \Big ({{\mathbf {g}}_{k-1,i}} + \frac{1}{2}{{\mathbf {X}}_{k,i}} {\pmb {\beta }}_{k,i} \Big ) =0 \end{aligned}$$
    (58)

    Solving (58) in terms of \({\pmb {\beta }}_{k,i}\) yields

    $$\begin{aligned} {\pmb {\beta }}_{k,i}=2\Big ({\mathbf {X}}_{k,i}^{{\mathsf {H}}}{\mathbf {X}}_{k,i}+{\mathbf {X}}_{k,i}^{{\mathsf {T}}}{\mathbf {X}}_{k,i}^{*} \Big )^{-1}{\mathbf {e}}_{k,i} \end{aligned}$$
    (59)

    where \({\mathbf {e}}_{k,i}\) is defined in (10). Finally, we replace (59) in (57b) and (57c), and introduce convergence factors \(\mu _k>0\) in order to trade off final misadjustment against convergence speed, which gives the following update equations

    $$\begin{aligned} {{\mathbf {h}}_{k,i}}&= {{\mathbf {h}}_{k-1,i}} + \mu _k {{\mathbf {X}}^{*}_{k,i}} \Big ({\mathbf {X}}_{k,i}^{{\mathsf {H}}}{\mathbf {X}}_{k,i}+{\mathbf {X}}_{k,i}^{{\mathsf {T}}}{\mathbf {X}}_{k,i}^{*} \Big )^{-1}{\mathbf {e}}_{k,i} \end{aligned}$$
    (60a)
    $$\begin{aligned} {{\mathbf {g}}_{k,i}}&= {{\mathbf {g}}_{k-1,i}} + \mu _k {{\mathbf {X}}_{k,i}} \Big ({\mathbf {X}}_{k,i}^{{\mathsf {H}}}{\mathbf {X}}_{k,i}+{\mathbf {X}}_{k,i}^{{\mathsf {T}}}{\mathbf {X}}_{k,i}^{*} \Big )^{-1}{\mathbf {e}}_{k,i} \end{aligned}$$
    (60b)

To avoid singularities due to the inversion of a rank-deficient matrix, a positive constant \(\delta \), called the regularisation parameter, is added to the above updates, giving

$$\begin{aligned} {{\mathbf {g}}_{k,i}}&= {{\mathbf {g}}_{k - 1,i}} \nonumber \\&\qquad + {\mu _k}{{\mathbf {X}}_{k,i}}{\left[ {{\mathbf {X}}_{k,i}^H{{\mathbf {X}}_{k,i}} + {\mathbf {X}}_{k,i}^T {\mathbf {X}}_{k,i}^ * + \delta {\mathbf {I}}} \right] ^{-1}}{{\mathbf {e}}_{k,i}}\end{aligned}$$
(61)
$$\begin{aligned} {{\mathbf {h}}_{k,i}}&= {{\mathbf {h}}_{k - 1,i}} \nonumber \\&\qquad + {\mu _k}{\mathbf {X}}_{k,i}^ * {\left[ {{\mathbf {X}}_{k,i}^H{{\mathbf {X}}_{k,i}} + {\mathbf {X}}_{k,i}^T {\mathbf {X}}_{k,i}^ * + \delta {\mathbf {I}}} \right] ^{-1}}{{\mathbf {e}}_{k,i}} \end{aligned}$$
(62)

and the proof is complete.
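The incremental cycle (57a)–(57d) with the regularised updates (61)–(62) can be sketched as follows. The network data here are synthetic placeholders, and the dimensions, step-sizes, and regularisation value are illustrative assumptions, not taken from the paper's simulations:

```python
import numpy as np

rng = np.random.default_rng(2)
K, M, T = 5, 4, 4            # nodes, filter length, projection order (assumed)
mu, delta = 1.0, 1e-6        # step-size mu_k and regularisation delta

# Widely linear ground truth (h°, g°) and noiseless per-node data
h_o = rng.standard_normal(M) + 1j * rng.standard_normal(M)
g_o = rng.standard_normal(M) + 1j * rng.standard_normal(M)
data = []
for _ in range(K):
    X = rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T))
    d = X.T @ h_o + X.conj().T @ g_o      # d = X^T h° + X^H g°
    data.append((X, d))

def incremental_cycle(h, g, data):
    """One pass i over the ring: each node refines the estimate it receives."""
    for X, d in data:                      # node k = 1, ..., K
        e = d - X.T @ h - X.conj().T @ g   # error vector e_{k,i}
        A = X.conj().T @ X + X.T @ X.conj() + delta * np.eye(T)
        s = np.linalg.solve(A, e)          # regularised projection of (61)-(62)
        h = h + mu * X.conj() @ s          # update of h_{k,i}
        g = g + mu * X @ s                 # update of g_{k,i}
    return h, g

h = np.zeros(M, dtype=complex)
g = np.zeros(M, dtype=complex)
msd0 = np.linalg.norm(h - h_o) ** 2 + np.linalg.norm(g - g_o) ** 2
for _ in range(100):
    h, g = incremental_cycle(h, g, data)
msd = np.linalg.norm(h - h_o) ** 2 + np.linalg.norm(g - g_o) ** 2
print(msd < 1e-3 * msd0)                   # estimates approach (h°, g°)
```

With \(\mu _k=1\) and small \(\delta \), each node's step is (up to regularisation) an exact projection onto its widely linear constraint set, so repeated cycles drive the constraint residuals toward zero across the network.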

Appendix B

Equating the weighted energies of both sides of (25), we arrive at the following space-time version of the weighted energy conservation relation for the incAAPA:

$$\begin{aligned} \Vert {\tilde{\mathbf {w}}}_{k,i}\Vert _{{\varvec{\Sigma }}_{k}}^2+{\mathbf {e}}^{{\varvec{\Sigma }}_{k}{\mathsf {H}}}_{a,k}{\mathbf {F}}_{k,i}^{-1}{\mathbf {e}}^{{\varvec{\Sigma }}_{k}}_{a,k}= \Vert {\tilde{\mathbf {w}}}_{k-1,i}\Vert _{{\varvec{\Sigma }}_{k}}^2+{\mathbf {e}}^{{\varvec{\Sigma }}_{k}{\mathsf {H}}}_{p,k}{\mathbf {F}}_{k,i}^{-1} {\mathbf {e}}^{{\varvec{\Sigma }}_{k}}_{p,k} \end{aligned}$$
(63)

Substituting (20) into (63) and rearranging the result, we obtain

$$\begin{aligned} \Vert {\tilde{\mathbf {w}}}_{k,i}\Vert _{{\varvec{\Sigma }}_{k}}^2= \Vert {\tilde{\mathbf {w}}}_{k-1,i}\Vert _{{\varvec{\Sigma }}_{k}}^2 -\mu _k {\mathbf {e}}^{{\varvec{\Sigma }}_{k}{\mathsf {H}}}_{a,k}{\mathbf {B}}_{k,i} {\mathbf {e}}_{k,i} -\mu _k {\mathbf {e}}_{k,i}^{{\mathsf {H}}} {\mathbf {B}}_{k,i} {\mathbf {e}}^{{\varvec{\Sigma }}_{k}}_{a,k}+\mu _k^2 {\mathbf {e}}_{k,i}^{{\mathsf {H}}} {\mathbf {B}}_{k,i} {\mathbf {F}}_{k,i} {\mathbf {B}}_{k,i} {\mathbf {e}}_{k,i} \end{aligned}$$
(64)

By using the error signal \({\mathbf {e}}_{k,i}={\mathbf {U}}_{k,i}^{{\mathsf {H}}} {\tilde{\mathbf {w}}}_{k-1,i}+{\mathbf {v}}_{k,i}\) we have

$$\begin{aligned} \Vert {\tilde{\mathbf {w}}}_{k,i}\Vert _{{\varvec{\Sigma }}_{k}}^2= & {} \Vert {\tilde{\mathbf {w}}}_{k-1,i}\Vert _{{\varvec{\Sigma }}_{k}}^2-\mu _k {\tilde{\mathbf {w}}}_{k-1,i}^{{\mathsf {H}}}{\varvec{\Sigma }}_{k}{\mathbf {U}}_{k,i}{\mathbf {B}}_{k,i}\left( {\mathbf {U}}_{k,i}^{{\mathsf {H}}}{\tilde{\mathbf {w}}}_{k-1,i}+{\mathbf {v}}_{k,i}\right) \nonumber \\&-\, \mu _k \left( {\tilde{\mathbf {w}}}_{k-1,i}^{{\mathsf {H}}}{\mathbf {U}}_{k,i}+{\mathbf {v}}_{k,i}^{{\mathsf {H}}}\right) {\mathbf {B}}_{k,i}{\mathbf {U}}_{k,i}^{{\mathsf {H}}}{\varvec{\Sigma }}_{k}{\tilde{\mathbf {w}}}_{k-1,i} \nonumber \\&+\,\mu _k^2\left( {\tilde{\mathbf {w}}}_{k-1,i}^{{\mathsf {H}}}{\mathbf {U}}_{k,i}+{\mathbf {v}}_{k,i}^{{\mathsf {H}}}\right) {\mathbf {B}}_{k,i}{\mathbf {F}}_{k,i}{\mathbf {B}}_{k,i}\left( {\mathbf {U}}_{k,i}^{{\mathsf {H}}}{\tilde{\mathbf {w}}}_{k-1,i}+{\mathbf {v}}_{k,i}\right) \end{aligned}$$
(65)

Taking expectations of both sides of (65) and applying Assumptions 1 and 2, we obtain

$$\begin{aligned} {\mathbb {E}} \Big [\Vert {\tilde{\mathbf {w}}}_{k,i}\Vert _{{\varvec{\Sigma }}_{k}}^2\Big ]&={\mathbb {E}} \Big [\Vert {\tilde{\mathbf {w}}}_{k-1,i}\Vert _{{\varvec{\Sigma }}'_{k}}^2\Big ]+\mu _k^2 {\mathbb {E}}\Big [{\mathbf {v}}_{k,i}^{{\mathsf {H}}}{\mathbf {B}}_{k,i}{\mathbf {F}}_{k,i}{\mathbf {B}}_{k,i}{\mathbf {v}}_{k,i}\Big ] \end{aligned}$$
(66)

where

$$\begin{aligned} {\varvec{\Sigma }}'_k&={\varvec{\Sigma }}_k-\mu _k{\mathbb {E}} \Big [{\varvec{\Sigma }}_k {\mathbf {U}}_{k,i} {\mathbf {B}}_{k,i} {\mathbf {U}}_{k,i}^{{\mathsf {H}}}\Big ] -\mu _k {\mathbb {E}} \Big [{\mathbf {U}}_{k,i} {\mathbf {B}}_{k,i} {\mathbf {U}}_{k,i}^{{\mathsf {H}}} {\varvec{\Sigma }}_k \Big ]+\mu _k^2 {\mathbb {E}} \Big [{\mathbf {U}}_{k,i} {\mathbf {B}}_{k,i} {\mathbf {U}}_{k,i}^{{\mathsf {H}}} {\varvec{\Sigma }}_k {\mathbf {U}}_{k,i}{\mathbf {B}}_{k,i}{\mathbf {U}}_{k,i}^{{\mathsf {H}}}\Big ] \end{aligned}$$
(67)

Then, (66) and (67) can be rewritten as

$$\begin{aligned} {\mathbb {E}} \Big [\Vert {\tilde{\mathbf {w}}}_{k,i}\Vert _{{\varvec{\Sigma }}_{k}}^2\Big ]&={\mathbb {E}} \Big [\Vert {\tilde{\mathbf {w}}}_{k-1,i}\Vert _{{\varvec{\Sigma }}'_{k}}^2\Big ] +\mu _k^2 {\mathbb {E}}\Big [{\mathbf {v}}_{k,i}^{{\mathsf {H}}}{\mathbf {C}}_{k,i}{\varvec{\Sigma }}_{k}{\mathbf {C}}_{k,i}^{{\mathsf {H}}}{\mathbf {v}}_{k,i}\Big ]\end{aligned}$$
(68)
$$\begin{aligned} {\varvec{\Sigma }}'_k&={\varvec{\Sigma }}_k-\mu _k {\varvec{\Sigma }}_k {\mathbb {E}} [{\mathbf {D}}_{k,i}] -\mu _k {\mathbb {E}} [{\mathbf {D}}_{k,i} {\varvec{\Sigma }}_k]+\mu _k^2 {\mathbb {E}} [{\mathbf {D}}_{k,i} {\varvec{\Sigma }}_k {\mathbf {D}}_{k,i}] \end{aligned}$$
(69)

and the proof is complete.
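The step from (67) to (69) only substitutes \({\mathbf {D}}_{k,i}={\mathbf {U}}_{k,i}{\mathbf {B}}_{k,i}{\mathbf {U}}_{k,i}^{{\mathsf {H}}}\) inside each expectation, so it holds realization by realization. A quick numerical check of this bookkeeping (here \({\mathbf {B}}_{k,i}\) is an arbitrary Hermitian matrix and \({\varvec{\Sigma }}_k\) a diagonal weighting; both are placeholder choices, since the specific forms do not matter for this step):

```python
import numpy as np

rng = np.random.default_rng(3)
M, T = 5, 3
mu = 0.2

# One realization of the quantities appearing in (67)
U = rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T))
R = rng.standard_normal((T, T)) + 1j * rng.standard_normal((T, T))
B = R + R.conj().T                            # any Hermitian B_{k,i}
Sigma = np.diag(rng.random(M)).astype(complex)

UBUh = U @ B @ U.conj().T                     # U B U^H
D = UBUh                                      # D_{k,i} = U B U^H

# Right-hand side of (67), expectations dropped (single realization)
lhs = (Sigma - mu * (Sigma @ UBUh) - mu * (UBUh @ Sigma)
       + mu**2 * (UBUh @ Sigma @ UBUh))
# The same expression written with D_{k,i}, as in (69)
rhs = Sigma - mu * (Sigma @ D) - mu * (D @ Sigma) + mu**2 * (D @ Sigma @ D)
print(np.allclose(lhs, rhs))                  # True
```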

Cite this article

Khalili, A., Rastegarnia, A., Bazzi, W.M. et al. Analysis of Incremental Augmented Affine Projection Algorithm for Distributed Estimation of Complex-Valued Signals. Circuits Syst Signal Process 36, 119–136 (2017). https://doi.org/10.1007/s00034-016-0295-6

