Robust Multitask Diffusion Affine Projection M-Estimate Algorithm: Design and Performance Analysis

Published in Circuits, Systems, and Signal Processing

Abstract

The distributed estimation performance of the multitask diffusion affine projection algorithm (MD-APA) degrades severely under impulsive noise interference. To overcome this defect, a robust multitask diffusion affine projection M-estimate algorithm (MD-APM) is derived by employing an M-estimate function to resist impulsive noise. The mean, mean-square, and steady-state performance of the MD-APM algorithm are studied, and the convergence range of the step size and the theoretical steady-state mean square deviation (MSD) are obtained. In addition, the computational complexity of the MD-APM algorithm is analyzed in detail. Simulation experiments show that the proposed MD-APM algorithm achieves better estimation performance than MD-APA and MD-APSA under impulsive noise interference, and that its theoretical steady-state MSD provides an accurate prediction.
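
As context for the abstract, the sketch below illustrates the error-weighting idea behind the M-estimate approach: per Appendix B, the weight \(q[e_k(i)]\) equals 1 when \(|e_k(i)| < \xi_k = \kappa\sqrt{\hat{\sigma}_{e_k}^2(i)}\) and 0 otherwise, so impulsive samples are excluded from the update. The single-node affine-projection step shown is only a minimal NumPy illustration under these assumptions (the function names, step size mu, and regularization delta are placeholders), not the full multitask diffusion recursion of the paper.

```python
import numpy as np

def m_estimate_weight(e, sigma_hat, kappa=2.576):
    # Hard-threshold M-estimate weight q[e]: 1 if |e| < xi = kappa*sigma_hat, else 0
    xi = kappa * sigma_hat
    return (np.abs(e) < xi).astype(float)

def ap_m_step(w, X, d, mu, sigma_hat, kappa=2.576, delta=1e-3):
    # One illustrative affine-projection M-estimate update at a single node.
    # X: (P, M) matrix of the last P input regressors; d: (P,) desired samples.
    e = d - X @ w                                # a priori error vector
    q = m_estimate_weight(e, sigma_hat, kappa)   # zero weight for impulsive samples
    Q = np.diag(q)
    A = X @ Q @ X.T + delta * np.eye(len(d))     # regularized weighted Gram matrix
    return w + mu * X.T @ Q @ np.linalg.solve(A, e)
```

Samples whose error magnitude exceeds the threshold contribute nothing to the correction term, which is what limits the influence of impulsive noise on the adaptation.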


Data Availability

The data that support the findings of this study are available from the corresponding author on request.

References

  1. M.S.E. Abadi, M.S. Shafiee, Distributed estimation over an adaptive diffusion network based on the family of affine projection algorithms. IEEE Trans. Signal Inf. Process. Netw. 5(2), 234–247 (2019)


  2. M.S.E. Abadi, M.S. Shafiee, Diffusion normalized subband adaptive algorithm for distributed estimation employing signed regressor of input signal. Digital Signal Process. 70, 73–83 (2017)


  3. R. Arablouei, K. Doğançay, S. Werner et al., Adaptive distributed estimation based on recursive least-squares and partial diffusion. IEEE Trans. Signal Process. 62(14), 3510–3522 (2014)


  4. F.S. Cattivelli, A.H. Sayed, Analysis of spatial and incremental LMS processing for distributed estimation. IEEE Trans. Signal Process. 59(4), 1465–1480 (2011)


  5. F.S. Cattivelli, A.H. Sayed, Distributed detection over adaptive networks using diffusion adaptation. IEEE Trans. Signal Process. 59(5), 1917–1932 (2011)


  6. F.S. Cattivelli, A.H. Sayed, Diffusion LMS strategies for distributed estimation. IEEE Trans. Signal Process. 58(3), 1035–1048 (2010)


  7. F.S. Cattivelli, C.G. Lopes, A.H. Sayed, Diffusion recursive least-squares for distributed estimation over adaptive networks. IEEE Trans. Signal Process. 56(5), 1865–1877 (2008)


  8. S.C. Chan, Y. Zhou, On the performance analysis of the least mean M-estimate and normalized least mean M-estimate algorithms with Gaussian inputs and additive Gaussian and contaminated Gaussian noises. J. Signal Process. Syst. 60(1), 81–103 (2010)


  9. F. Chen, X. Li, S. Duan et al., Diffusion generalized maximum correntropy criterion algorithm for distributed estimation over multitask network. Digital Signal Process. 81, 16–25 (2018)


  10. J. Chen, A.H. Sayed, Diffusion adaptation strategies for distributed optimization and learning over networks. IEEE Trans. Signal Process. 60(8), 4289–4305 (2012)


  11. J. Chen, C. Richard, A. H. Sayed, Diffusion LMS for clustered multitask networks, in 2014 IEEE International Conference on Acoustic, Speech and Signal Processing (ICASSP), Florence (2014), pp. 5487–5491.

  12. J. Chen, C. Richard, A.H. Sayed, Multitask diffusion adaptation over networks. IEEE Trans. Signal Process. 62(16), 4129–4144 (2014)


  13. J. Chen, C. Richard, A.H. Sayed, Diffusion LMS over multitask networks. IEEE Trans. Signal Process. 63(11), 2733–2748 (2015)


  14. J. Chen, A.H. Sayed, Distributed pareto optimization via diffusion strategies. IEEE J. Sel. Top. Signal Process. 7(2), 205–220 (2013)


  15. V. C. Gogineni, M. Chakraborty, Diffusion affine projection algorithm for multitask networks, in 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Honolulu (2018), pp. 201–206.

  16. V.C. Gogineni, M. Chakraborty, Improving the performance of multitask diffusion APA via controlled inter-cluster cooperation. IEEE Trans. Circuits Syst. I Regul. Pap. 67(3), 903–912 (2020)


  17. F. Huang, J. Zhang, S. Zhang, A family of robust adaptive filtering algorithms based on sigmoid cost. Signal Process. 149, 179–192 (2018)


  18. S. Kar, J.M. Moura, Distributed consensus algorithms in sensor networks with imperfect communication: link failures and channel noise. IEEE Trans. Signal Process. 57(1), 355–369 (2009)


  19. S.E. Kim, J.W. Lee, W.J. Song, A theory on the convergence behavior of the affine projection algorithm. IEEE Trans. Signal Process. 59(12), 6233–6239 (2011)


  20. M. Korki, H. Zayyani, Weighted diffusion continuous mixed p-norm algorithm for distributed estimation in non-uniform noise environment. Signal Process. 164, 225–233 (2019)


  21. L. Li, J. A. Chambers, Distributed adaptive estimation based on the APA algorithm over diffusion networks with changing topology, in 2009 IEEE/SP 15th Workshop on Statistical Signal Processing, Cardiff (2009), pp. 757–760.

  22. Z. Li, S. Guan, Diffusion normalized Huber adaptive filtering algorithm. J. Frankl. Inst. 355, 3812–3825 (2018)


  23. Y. Liu, W.K.S. Tang, Enhanced incremental LMS with norm constraints for distributed in-network estimation. Signal Process. 94, 373–385 (2014)


  24. C.G. Lopes, A.H. Sayed, Incremental adaptive strategies over distributed networks. IEEE Trans. Signal Process. 55(8), 4064–4077 (2007)


  25. C.G. Lopes, A.H. Sayed, Diffusion least-mean squares over adaptive networks: formulation and performance analysis. IEEE Trans. Signal Process. 56(7), 3122–3136 (2008)


  26. P.D. Lorenzo, A.H. Sayed, Sparse distributed learning based on diffusion adaptation. IEEE Trans. Signal Process. 61(6), 1419–1433 (2013)


  27. W. Ma, B. Chen, J. Duan et al., Diffusion maximum correntropy criterion algorithms for robust distributed estimation. Digital Signal Process. 58, 10–19 (2016)


  28. W. Ma, H. Qu, G. Gui et al., Maximum correntropy criterion based sparse adaptive filtering algorithms for robust channel estimation under non-Gaussian environments. J. Frankl. Inst. 352(7), 2708–2727 (2015)


  29. G. Mateos, I.D. Schizas, G.B. Giannakis, Performance analysis of the consensus-based distributed LMS algorithm. EURASIP J. Adv. Signal Process. 2009(1), 1–19 (2009)


  30. R. Nassif, C. Richard, A. Ferrari et al., Multitask diffusion adaptation over asynchronous networks. IEEE Trans. Signal Process. 64(11), 2835–2850 (2016)


  31. R. Nassif, C. Richard, A. Ferrari et al., Proximal multitask learning over networks with sparsity-inducing coregularization. IEEE Trans. Signal Process. 64(23), 6329–6344 (2016)


  32. J. Ni, L. Ma, Distributed subband adaptive filtering algorithms. Acta Electron. Sin. 43(11), 2225–2231 (2015)


  33. J. Ni, J. Chen, X. Chen, Diffusion sign-error LMS algorithm: formulation and stochastic behavior analysis. Signal Process. 128, 142–149 (2016)


  34. J. Ni, Diffusion sign subband adaptive filtering algorithm for distributed estimation. IEEE Signal Process. Lett. 22(11), 2029–2033 (2015)


  35. J. Ni, L. Ma, Distributed affine projection sign algorithms against impulsive interferences. Acta Electron. Sin. 44(7), 1555–1560 (2016)


  36. J. Ni, Y. Zhu, J. Chen, Multitask diffusion affine projection sign algorithm and its sparse variant for distributed estimation. Signal Process. 172, 107561 (2020)


  37. A. Rastegarnia, Reduced-communication diffusion RLS for distributed estimation over multi-agent networks. IEEE Trans. Circuits Syst. II: Express Br. 67(1), 177–181 (2020)


  38. P.J. Rousseeuw, A.M. Leroy, Robust regression and outlier detection (Wiley, New York, 1987)


  39. A.H. Sayed, Diffusion adaptation over networks. Acad. Press Libr. Signal Process. 3, 323–453 (2014)


  40. I.D. Schizas, G. Mateos, G.B. Giannakis, Distributed LMS for consensus-based in-network adaptive processing. IEEE Trans. Signal Process. 57(6), 2365–2382 (2009)


  41. H.C. Shin, A.H. Sayed, Mean-square performance of a family of affine projection algorithms. IEEE Trans. Signal Process. 52(1), 90–102 (2004)


  42. P. Song, H. Zhao, Affine-projection-like M-estimate adaptive filter for robust filtering in impulse noise. IEEE Trans. Circuits Syst. II Express Br. 66(12), 2087–2091 (2019)


  43. P. Song, H. Zhao, P. Li, L. Shi, Diffusion affine projection maximum correntropy criterion algorithm and its performance analysis. Signal Process. 181, 107918 (2021)


  44. P. Song, H. Zhao, Robust diffusion affine projection M-estimate algorithm for distributed estimation over network. IFAC-PapersOnline 52(24), 290–293 (2019)


  45. P. Song, H. Zhao, X. Zeng, Robust diffusion affine projection algorithm with variable step-size over distributed networks. IEEE Access 7, 150484–150491 (2019)


  46. N. Takahashi, I. Yamada, A.H. Sayed, Diffusion least-mean squares with adaptive combiners: formulation and performance analysis. IEEE Trans. Signal Process. 58(9), 4795–4810 (2010)


  47. S. Tu, A.H. Sayed, Diffusion strategies outperform consensus strategies for distributed estimation over adaptive networks. IEEE Trans. Signal Process. 60(12), 6217–6234 (2012)


  48. G. Wang, H. Zhao, Robust adaptive least mean M-estimate algorithm for censored regression. IEEE Trans. Systems, Man, Cybern. Syst. 52(8), 5165–5174 (2022)


  49. P. Wen, J. Zhang, Widely linear complex-valued diffusion subband adaptive filter algorithm. IEEE Trans. Signal Inf. Process. Over Netw. 5(2), 248–257 (2019)

  50. A.M. Wilson, T. Panigrahi, A. Dubey, Robust distributed Lorentzian adaptive filter with diffusion strategy in impulsive noise environment. Digital Signal Process. 96, 102589 (2020)


  51. L. Xiao, S. Boyd, S.J. Kim, Distributed average consensus with least-mean-square deviation. J. Parallel Distrib. Comput. 67(1), 33–46 (2007)


  52. X. Xu, H. Qu, J. Zhao et al., Diffusion maximum correntropy criterion based robust spectrum sensing in non-Gaussian noise environments. Entropy 20(4), 246 (2018)


  53. Y. Yu, H. Zhao, Incremental M-estimate-based least-mean algorithm over distributed network. Electron. Lett. 52(14), 1270–1272 (2016)


  54. Y. Yu, H. He, B. Chen, J. Li, Y. Zhang, L. Lu, M-estimate based normalized subband adaptive filter algorithm: performance analysis and improvements. IEEE/ACM Trans. Audio, Speech, Lang. Process. 28, 225–239 (2020)


  55. Y. Yu, H. He, T. Yang et al., Diffusion normalized least mean M-estimate algorithms: design and performance analysis. IEEE Trans. Signal Process. 68, 2199–2214 (2020)


  56. H. Zayyani, Robust minimum disturbance diffusion LMS for distributed estimation. IEEE Trans. Circuits Syst. II: Express Br. 68(1), 521–525 (2021)


  57. H. Zayyani, Communication reducing diffusion LMS robust to impulsive noise using smart selection of communication nodes. Circuits Syst. Signal Process. 41, 1788–1802 (2022)


  58. H. Zayyani, A. Javaheri, A robust generalized proportionate diffusion LMS algorithm for distributed estimation. IEEE Trans. Circuits Syst. II: Express Br. 68(4), 1552–1556 (2021)


  59. H. Zhao, B. Liu, P. Song, Variable step-size affine projection maximum correntropy criterion adaptive filter with correntropy induced metric for sparse system identification. IEEE Trans. Circuits Syst. II: Express Br. 67(11), 2782–2786 (2020)


  60. Y. Zhou, S.C. Chan, K.L. Ho, New sequential partial-update least mean M-estimate algorithms for robust adaptive system identification in impulsive noise. IEEE Trans. Ind. Electron. 58(9), 4455–4470 (2011)


  61. Y. Zhu, H. Zhao, X. Zeng, B. Chen, Robust generalized maximum correntropy criterion algorithms for active noise control. IEEE/ACM Trans. Audio, Speech, Lang. Process. 28, 1282–1292 (2020)



Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (Grants 62171388, 61871461, 61571374), in part by the Department of Science and Technology of Sichuan Province (Grants 2019YJ0225, 2020JDTD0009), and in part by the Fundamental Research Funds for the Central Universities (Grant 2682021ZTPY091).

Author information


Corresponding author

Correspondence to Haiquan Zhao.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A

Let \({\boldsymbol{z}} = {\text{col}} \left\{ {{\boldsymbol{z}}_{1} ,{\boldsymbol{z}}_{2} ,...,{\boldsymbol{z}}_{N} } \right\}\) be a block column vector, where each block \({\boldsymbol{z}}_{k}\), \(k = 1,2,...,N\), has size \(M \times 1\). The block maximum norm of \({\boldsymbol{z}}\) is defined as [31, 39]

$$ \left\| {\boldsymbol{z}} \right\|_{b,\infty } \triangleq \mathop {\max }\limits_{1 \le k \le N} \left\| {{\boldsymbol{z}}_{k} } \right\|_{2} $$
(A-1)

Accordingly, the block maximum norm of an arbitrary block matrix is defined as

$$ \left\| {{\boldsymbol{\Theta}}} \right\|_{b,\infty } \triangleq \mathop {\max }\limits_{{{\boldsymbol{z}} \ne {\boldsymbol{0}}}} \frac{{\left\| {{\boldsymbol{\Theta z}}} \right\|_{b,\infty } }}{{\left\| {\boldsymbol{z}} \right\|_{b,\infty } }} $$
(A-2)

where \({{\boldsymbol{\Theta}}}\) is an \(MN \times MN\) block matrix whose blocks are each of size \(M \times M\).

Consider the \(N \times N\) block diagonal Hermitian matrix \({{\boldsymbol{\varOmega}}} = {\text{diag}} \left\{ {{{\boldsymbol{\varOmega}}}_{1} ,{{\boldsymbol{\varOmega}}}_{2} ,...,{{\boldsymbol{\varOmega}}}_{N} } \right\}\), where each block \({{\boldsymbol{\varOmega}}}_{k}\) is an \(M \times M\) Hermitian matrix; then we obtain [39]

$$ \rho ({{\boldsymbol{\varOmega}}}) = \mathop {\max }\limits_{1 \le k \le N} \rho ({{\boldsymbol{\varOmega}}}_{k} ) = \left\| {{\boldsymbol{\varOmega}}} \right\|_{b,\infty } $$
(A-3)

where \(\rho ( \cdot )\) represents the spectral radius of a matrix.

$$ \begin{aligned} &- 2{\mathbb{E}}\left\{ {\mu {{\boldsymbol{\nu}}}^{T} (i){\boldsymbol{C}}^{T} (i){\boldsymbol{G}}^{T} \sum \left[ {{\boldsymbol{G}} - \mu {\boldsymbol{GZ}}(i)} \right]{\tilde{\boldsymbol{w}}}(i)} \right\} \\&\quad = 2\sum_{n = 1}^{P - 1} {{\mathbb{E}}\left\{ {\mu {{\boldsymbol{\nu}}}^{T} (i){\boldsymbol{C}}^{T} (i){\boldsymbol{G}}^{T} \sum \left\{ {\prod\limits_{l = 0}^{n - 1} {\left( {{\boldsymbol{G}} - \mu {\boldsymbol{GZ}}(i - l)} \right)} } \right\}\mu {\boldsymbol{GC}}(i - n){{\boldsymbol{\nu}}}(i - n)} \right\}} \\&\quad = 2\sum_{n = 1}^{P - 1} {{\mathbb{E}}\left\{ {{\text{Tr}} \left( {\sum \left\{ {\prod\limits_{l = 0}^{n - 1} {\left( {{\boldsymbol{G}} - \mu {\boldsymbol{GZ}}(i - l)} \right)} } \right\}\mu^{2} {\boldsymbol{GC}}(i - n){{\boldsymbol{\nu}}}(i - n){{\boldsymbol{\nu}}}^{T} (i){\boldsymbol{C}}^{T} (i){\boldsymbol{G}}^{T} } \right)} \right\}} \\&\quad { = }2\sum_{n = 1}^{P - 1} {{\text{bvec}}\left( {{\mathbb{E}}\left\{ {\left\{ {\prod\limits_{l = 0}^{n - 1} {\left( {{\boldsymbol{G}} - \mu {\boldsymbol{GZ}}(i - l)} \right)} } \right\}\mu^{2} {\boldsymbol{GC}}(i - n){{\boldsymbol{\nu}}}(i - n){{\boldsymbol{\nu}}}^{T} (i){\boldsymbol{C}}^{T} (i){\boldsymbol{G}}^{T} } \right\}^{T} } \right)}^{T} {{\boldsymbol{\sigma}}} \\&\quad = 2\sum_{n = 1}^{P - 1} {\delta^{T} (n)} {{\boldsymbol{\sigma}}} \\ \end{aligned} $$
(A-4)
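
As a numerical sanity check of definitions (A-1)–(A-3) (an assumed illustration, not part of the paper), the following NumPy snippet computes the block maximum norm of a block column vector and verifies that the spectral radius of a block-diagonal symmetric matrix equals the largest spectral radius of its blocks:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 3, 4  # block size and number of blocks

# Block column vector z = col{z_1, ..., z_N}: block maximum norm per (A-1)
z_blocks = [rng.standard_normal(M) for _ in range(N)]
block_max_norm = max(np.linalg.norm(zk) for zk in z_blocks)

# Block-diagonal symmetric matrix Omega = diag{Omega_1, ..., Omega_N}
omegas = []
for _ in range(N):
    A = rng.standard_normal((M, M))
    omegas.append((A + A.T) / 2)  # symmetric (Hermitian in the real case)
Omega = np.zeros((M * N, M * N))
for k, Ok in enumerate(omegas):
    Omega[k * M:(k + 1) * M, k * M:(k + 1) * M] = Ok

# Verify (A-3): rho(Omega) = max_k rho(Omega_k)
rho_full = max(abs(np.linalg.eigvals(Omega)))
rho_blocks = max(max(abs(np.linalg.eigvals(Ok))) for Ok in omegas)
assert np.isclose(rho_full, rho_blocks)
print(block_max_norm, rho_full, rho_blocks)
```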

Appendix B

Using Assumption 3 and defining the probabilistic event \(\left| {e_{k} (i)} \right| < \xi_{k}\), we obtain

$$ \begin{aligned} P_{e,k} (i) \triangleq P\left\{ {\left| {e_{k} (i)} \right| < \xi_{k} } \right\} \\ = p_{k} P\left\{ {\left| {e_{s,k} (i)} \right| < \xi_{k} } \right\} + (1 - p_{k} )P\left\{ {\left| {e_{v,k} (i)} \right| < \xi_{k} } \right\} \\ \end{aligned} $$
(B-1)

where \(e_{s,k} (i) \triangleq {\boldsymbol{x}}_{k}^{T} (i){\tilde{\boldsymbol{w}}}_{k} (i) + v_{k} (i) + \theta_{k} (i)\), \(e_{v,k} (i) \triangleq {\boldsymbol{x}}_{k}^{T} (i){\tilde{\boldsymbol{w}}}_{k} (i) + v_{k} (i)\), and \({\tilde{\boldsymbol{w}}}_{k} (i) \triangleq {\boldsymbol{w}}_{k}^{ * } - {\boldsymbol{w}}_{k} (i)\).

Since \(e_{s,k} (i)\) and \(e_{v,k} (i)\) are zero-mean Gaussian variables [55], it follows that

$$ \begin{aligned} P_{e,k} (i) &= p_{k} P\left\{ {\left| {e_{s,k} (i)} \right| < \xi_{k} } \right\} + (1 - p_{k} )P\left\{ {\left| {e_{v,k} (i)} \right| < \xi_{k} } \right\} \\& = p_{k} \frac{1}{{\sqrt {2\pi \sigma_{{e_{s,k} }}^{2} (i)} }}\int_{{ - \xi_{k} }}^{{\xi_{k} }} {\exp \left( { - \frac{{e_{s,k}^{2} (i)}}{{2\sigma_{{e_{s,k} }}^{2} (i)}}} \right)} de_{s,k} (i) \\&\quad+ (1 - p_{k} )\frac{1}{{\sqrt {2\pi \sigma_{{e_{v,k} }}^{2} (i)} }} \int_{{ - \xi_{k} }}^{{\xi_{k} }} {\exp \left( { - \frac{{e_{v,k}^{2} (i)}}{{2\sigma_{{e_{v,k} }}^{2} (i)}}} \right)} de_{v,k} (i) \\&\quad\triangleq p_{k} {\text{erf}} \left( {\frac{{\xi_{k} }}{{\sqrt {2\sigma_{{e_{s,k} }}^{2} (i)} }}} \right) + (1 - p_{k} ){\text{erf}} \left( {\frac{{\xi_{k} }}{{\sqrt {2\sigma_{{e_{v,k} }}^{2} (i)} }}} \right) \\ \end{aligned} $$
(B-2)

where \(\sigma_{{e_{s,k} }}^{2} (i) = {\text{Tr}} \left( {{\boldsymbol{W}}_{k} (i){\boldsymbol{R}}_{k} (i)} \right) + \sigma_{{s_{k} }}^{2}\), \(\sigma_{{e_{v,k} }}^{2} (i) = {\text{Tr}} \left( {{\boldsymbol{W}}_{k} (i){\boldsymbol{R}}_{k} (i)} \right) + \sigma_{{v_{k} }}^{2}\), \({\boldsymbol{W}}_{k} (i) \triangleq {\mathbb{E}}\left\{ {{\tilde{\boldsymbol{w}}}_{k} (i){\tilde{\boldsymbol{w}}}_{k}^{T} (i)} \right\}\), and \({\boldsymbol{R}}_{k} (i) \triangleq {\mathbb{E}}\left\{ {{\boldsymbol{x}}_{k} (i){\boldsymbol{x}}_{k}^{T} (i)} \right\}\).

Using (9), \({\mathbb{E}}\left\{ {q\left[ {e_{k} (i)} \right]} \right\}\) can be computed as

$$ \begin{aligned} {\mathbb{E}}\left\{ {q\left[ {e_{k} (i)} \right]} \right\} &= 1 \times P_{e,k} (i) + 0 \times \left( {1 - P_{e,k} (i)} \right) \\& = p_{k} {\text{erf}} \left( {\frac{{\xi_{k} }}{{\sqrt {2\sigma_{{e_{s,k} }}^{2} (i)} }}} \right) + (1 - p_{k} ){\text{erf}} \left( {\frac{{\xi_{k} }}{{\sqrt {2\sigma_{{e_{v,k} }}^{2} (i)} }}} \right) \\ \end{aligned} $$
(B-3)

Since \(\hat{\sigma }_{{e_{k} }}^{2} (i)\) is the variance of the error signal in the absence of impulsive noise and \(\xi_{k}\) is chosen as \(\kappa \sqrt {\hat{\sigma }_{{e_{k} }}^{2} (i)}\), we have

$$ \xi_{k} = \kappa \sqrt {\hat{\sigma }_{{e_{k} }}^{2} (i)} = \kappa \sqrt {\sigma_{{e_{v,k} }}^{2} (i)} $$
(B-4)

At steady state, \(\sigma_{{e_{s,k} }}^{2} (\infty ) \approx \sigma_{{s_{k} }}^{2}\) and \(\sigma_{{e_{v,k} }}^{2} (\infty ) \approx \sigma_{{v_{k} }}^{2}\); hence, we get

$$ \begin{aligned} {\mathbb{E}}\left\{ {q\left[ {e_{k} (\infty )} \right]} \right\} & = P_{e,k} (\infty ) \\ & = p_{k} {\text{erf}} \left( {\frac{{\kappa \sqrt {\sigma_{{v_{k} }}^{2} } }}{{\sqrt {2\sigma_{{s_{k} }}^{2} } }}} \right) + (1 - p_{k} ){\text{erf}} \left( {\frac{\kappa }{\sqrt 2 }} \right) \\ \end{aligned} $$
(B-5)
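
The expectation in (B-2)–(B-5) can be evaluated directly with the error function; the sketch below (with hypothetical values for \(p_k\), \(\sigma_{v_k}^2\), \(\sigma_{s_k}^2\), and \(\kappa\)) computes \({\mathbb{E}}\{q[e_k(i)]\}\) and its steady-state limit (B-5):

```python
import numpy as np
from scipy.special import erf

def expected_q(p_k, var_es, var_ev, kappa):
    # E{q[e_k(i)]} per (B-2)/(B-3), with xi_k = kappa*sqrt(var_ev) as in (B-4)
    xi = kappa * np.sqrt(var_ev)
    return (p_k * erf(xi / np.sqrt(2.0 * var_es))
            + (1.0 - p_k) * erf(xi / np.sqrt(2.0 * var_ev)))

# Steady state (B-5): var_es -> sigma_s^2, var_ev -> sigma_v^2 (example values assumed)
p_k, sigma_v2, sigma_s2, kappa = 0.05, 1e-2, 1e2, 2.576
print(expected_q(p_k, sigma_s2, sigma_v2, kappa))
```

With \(\kappa = 2.576\), \({\text{erf}}(\kappa/\sqrt{2}) \approx 0.99\), so at steady state the second term dominates whenever the impulse probability \(p_k\) is small.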

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Song, P., Zhao, H., Ma, LJ. et al. Robust Multitask Diffusion Affine Projection M-Estimate Algorithm: Design and Performance Analysis. Circuits Syst Signal Process 42, 540–563 (2023). https://doi.org/10.1007/s00034-022-02140-1

