An Adversary-Resilient Doubly Compressed Diffusion LMS Algorithm for Distributed Estimation

Published in: Circuits, Systems, and Signal Processing

Abstract

This paper proposes an adversary-resilient, communication-efficient distributed estimation algorithm for time-varying networks. It generalizes the doubly compressed diffusion least mean square (DC-DLMS) algorithm, which is not adversary-resilient. A major drawback of existing adversary detectors in the literature is that their detection criteria are chosen heuristically. In this paper, an adversary detector is derived theoretically from a Bayesian hypothesis test (BHT). It is proved that the test statistic of the detector is a distance metric compared against a threshold, as in related works in the literature; hence, the BHT establishes the validity of the detection criterion. Another difficulty encountered in existing works is the determination of the thresholds. In this paper, the optimum thresholds are derived in closed form. Since the optimum thresholds depend on unknown parameters, however, they cannot be computed in practice, so suboptimal procedures for determining the thresholds are provided. Moreover, the mean convergence of the algorithm is investigated analytically. In addition, the Cramer–Rao bound for distributed estimation based on all node observations in the presence of adversaries is calculated. The simulation results show the effectiveness of the proposed algorithms and demonstrate that they reach, with some delay, the performance of the algorithm in which the adversaries are ideally known in advance.


Data Availability

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.

Notes

  1. The false data injection error \(q_{k,i}\), across adversary nodes and time indices, is more covert if it is positive or negative equiprobably; if it were biased in one direction, the adversary could be detected from that bias. Hence, we assume such a symmetric distribution for \(q_{k,i}\). Other distributions could also be assumed, and the CRB derived under those assumptions.
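As a toy illustration of this modeling choice (the injection magnitude and sample count below are hypothetical, not taken from the paper), the following sketch draws equiprobable positive/negative injection errors and confirms that the resulting attack is empirically zero-mean, i.e. exhibits no directional bias a detector could exploit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical false-data-injection model: each injected error is
# +q or -q with equal probability, so the attack has zero mean and
# cannot be detected from a directional bias.
q_magnitude = 0.5          # assumed injection strength
n_samples = 200_000

signs = rng.choice([-1.0, 1.0], size=n_samples)   # equiprobable +/-
q = q_magnitude * signs

print(np.mean(q))   # close to 0: no directional bias
print(np.var(q))    # close to q_magnitude**2 for this two-point law
```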

References

  1. F.S. Abkenar, A. Jamalipour, Energy optimization in association-free fog-IoT networks. IEEE Trans. Green Commun. Netw. 4(2), 404–412 (2020)

  2. M. Amarlingam, K.V.V. Durga Prasad, P. Rajalakshmi, S.S. Channappayya, C.S. Sastry, A novel low-complexity compressed data aggregation method for energy-constrained IoT networks. IEEE Trans. Green Commun. Netw. 4(3), 717–730 (2020)

  3. R. Arablouei, S. Werner, Y. Huang, K. Dogancay, Distributed least mean-square estimation with partial diffusion. IEEE Trans. Signal Process. 62(2), 472–484 (2014)

  4. R. Arablouei, K. Dogancay, S. Werner, Y. Huang, Adaptive distributed estimation based on recursive least-squares and partial diffusion. IEEE Trans. Signal Process. 62(14), 3510–3522 (2014)

  5. R. Arablouei, S. Werner, K. Dogancay, Y. Huang, Analysis of a reduced-communication diffusion LMS algorithm. Elsevier Signal Process. 117, 355–361 (2015)

  6. S. Ashkezari-Toussi, H. Sadoghi-Yazdi, Robust diffusion LMS over adaptive networks. Elsevier Signal Process. 158, 201–209 (2019)

  7. N.J. Bershad, E. Eweda, J.C.M. Bermudez, Stochastic analysis of the diffusion LMS algorithm for cyclostationary white Gaussian inputs. Elsevier Signal Process. 185, 108081 (2021)

  8. F.S. Cattivelli, A.H. Sayed, Diffusion LMS strategies for distributed estimation. IEEE Trans. Signal Process. 58, 1035–1048 (2010)

  9. H. Chang, W. Li, Correction-based diffusion LMS algorithms for distributed estimation. Circuit Syst. Signal Process. 39, 4136–4154 (2020)

  10. H. Chang, W. Li, Correction-based diffusion LMS algorithms for secure distributed estimation under attacks. Digital Signal Process. 102, 102735 (2020)

  11. F. Chen, X. Shao, Broken-motifs diffusion LMS algorithm for reducing communication load. Elsevier Signal Process. 133, 213–218 (2017)

  12. F. Chen, S. Deng, Y. Hua, S. Duan, L. Wang, J. Wu, Communication-reducing algorithm of distributed least mean square algorithm with neighbor-partial diffusion. Circuit Syst. Signal Process. 39, 4416–4435 (2020)

  13. F. Chen, L. Hu, P. Liu, M. Feng, A robust diffusion estimation algorithm for asynchronous networks in IoT. IEEE Internet Things J. 7(9), 9103–9115 (2020)

  14. Y. Chen, S. Kar, J.M.F. Moura, Resilient distributed estimation through adversary detection. IEEE Trans. Signal Process. 66(9), 2455–2469 (2018)

  15. P. Cheng et al., Asynchronous fault detection observer for 2-D Markov jump systems. IEEE Trans. Cybern. (early access, 2021)

  16. H. Fang et al., Adaptive optimization algorithm for nonlinear Markov jump systems with partial unknown dynamics. Int. J. Robust Nonlinear Control 31, 2126–2140 (2021)

  17. V. Filipovic, N. Nedic, V. Stojanovic, Robust identification of pneumatic servo actuators in the real situations. Forsch. Ingenieurwes. 75, 183–196 (2011)

  18. E. Harrane, R. Flamary, C. Richard, On reducing the communication cost of the diffusion LMS algorithm. IEEE Trans. Signal Inf. Process. Netw. 5(1), 100–112 (2019)

  19. Y. Hua, F. Chen, S. Deng, S. Duan, L. Wang, Secure distributed estimation against false data injection attack. Inf. Sci. 515, 248–262 (2020)

  20. W. Huang, X. Yang, G. Shen, Communication-reducing diffusion LMS algorithm over multitask networks. Inf. Sci. 382–383, 115–134 (2017)

  21. S.M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory (Prentice Hall, 1993)

  22. M. Korki, H. Zayyani, Weighted diffusion continuous mixed p-norm algorithm for distributed estimation in non-uniform noise environment. Elsevier Signal Process. 164, 225–233 (2019)

  23. J.W. Lee, S.E. Kim, W.J. Song, Data-selective diffusion LMS for reducing communication overhead. Elsevier Signal Process. 113, 211–217 (2015)

  24. J.W. Lee, J.T. Kong, W.J. Song, S.E. Kim, Data-reserved periodic diffusion LMS with low communication cost over networks. IEEE Access 6, 54636–54650 (2018)

  25. X. Li, M. Feng, F. Chen, Q. Shi, J. Kurths, Robust distributed estimation based on a generalized correntropy logarithmic difference algorithm over wireless sensor networks. Elsevier Signal Process. 77, 107731 (2020)

  26. Y. Liu, C. Li, Secure distributed estimation over wireless sensor networks under attacks. IEEE Trans. Aerosp. Electron. Syst. 54(4), 1815–1831 (2018)

  27. K. Ntemos, J. Plata-Chaves, N. Kolokotronis, N. Kalouptsidis, M. Moonen, Secure information sharing in adversarial adaptive diffusion networks. IEEE Trans. Signal Inf. Process. Netw. 4(1), 111–124 (2018)

  28. G. Nunez, C. Borges, A. Chorti, Understanding the performance of software defined wireless sensor networks under denial of service attack. Open J. Internet Things 5(1), 58–68 (2019)

  29. J.G. Proakis, Digital Communications (McGraw-Hill, 2001)

  30. A.H. Sayed, Adaptation, Learning and Optimization over Networks, Foundations and Trends in Machine Learning (2014)

  31. M.O. Sayin, S.S. Kozat, Single bit and reduced dimension diffusion strategies over distributed networks. IEEE Signal Process. Lett. 20(10), 976–979 (2013)

  32. M.O. Sayin, S.S. Kozat, Compressive diffusion strategies over distributed networks for reduced communication load. IEEE Trans. Signal Process. 62(20), 5308–5323 (2014)

  33. Q. Shi, M. Feng, X. Li, S. Wang, F. Chen, A secure distributed information sharing algorithm based on attack detection in multi-task networks. IEEE Trans. Circuits Syst. I Regul. Pap. 67(12), 5125–5138 (2020)

  34. M. Shirazi, A. Vosoughi, On distributed estimation in hierarchical power constrained wireless sensor networks. IEEE Trans. Signal Inf. Process. Netw. 6, 442–459 (2020)

  35. H. Shiri, M.A. Tinati, M. Coudreanu, G. Azarnia, Distributed sparse diffusion estimation with reduced communication cost. IET Signal Process. 12(8), 1043–1052 (2018)

  36. Z. Yang, A. Gang, W.U. Bajwa, Adversary-resilient distributed and decentralized statistical inference and machine learning. IEEE Signal Process. Mag. 37(3), 146–159 (2020)

  37. H. Zayyani, M. Korki, F. Marvasti, A distributed 1-bit compressed sensing algorithm robust to impulsive noise. IEEE Commun. Lett. 20(6), 1132–1135 (2016)

  38. H. Zayyani, A. Javaheri, A robust generalized proportionate diffusion LMS algorithm for distributed estimation. IEEE Trans. Circuits Syst. II Express Briefs (early access, Oct 2020)

  39. H. Zayyani, Robust minimum disturbance diffusion LMS for distributed estimation. IEEE Trans. Circuit Syst. Part II Express Briefs 68(1), 521–525 (2021)

  40. H. Zayyani, Communication reducing diffusion LMS robust to impulsive noise using smart selection of communication nodes. Circuits Syst. Signal Process. (2021)

Acknowledgements

This research was financially supported by the Research Deputy of Qom University of Technology (Grant No. 553868). The work of I. Fijalkow was partially supported by the ELIOT project (ANR-18-CE40-0030 and FAPESP 2018/12579-7).

Author information

Corresponding author

Correspondence to Hadi Zayyani.

Appendices

Appendix A The Convergence of the Mean Condition

To derive a sufficient condition for mean convergence of the weight vectors, we first define some notation. We define \(\tilde{\varvec{\psi }}_{k,i}={{\textbf {w}}}_o-\varvec{\psi }_{k,i}\) and collect these vectors as \(\tilde{\varvec{\psi }}_i=\mathrm {col}\{\tilde{\varvec{\psi }}_{1,i},\tilde{\varvec{\psi }}_{2,i},\ldots ,\tilde{\varvec{\psi }}_{N,i}\}\). Let \(\mathcal {{\textbf {R}}}_{{{\textbf {u}}}_l,i}={{\textbf {u}}}_{l,i}{{\textbf {u}}}^T_{l,i}\). The remaining notation is defined as follows:

$$\begin{aligned}&{\mathcal {M}}=\mathrm {diag}\{\mu _1{{\textbf {I}}}_L,\mu _2{{\textbf {I}}}_L,\ldots ,\mu _N{{\textbf {I}}}_L\} \end{aligned}$$
(38)
$$\begin{aligned}&\mathbf {{\mathcal {R}}}_{u,i}=\mathrm {diag}\{\mathbf {\mathcal R}_{u_1,i},\ldots ,{{\mathcal {R}}}_{u_N,i}\} \end{aligned}$$
(39)
$$\begin{aligned}&\mathbf {{\mathcal {C}}}={{\textbf {C}}}\otimes {{\textbf {I}}}_L \end{aligned}$$
(40)
$$\begin{aligned}&{{\mathcal {R}}}_{Q,i}=\mathrm {diag}\Big \{\sum _{l\in \mathcal N_1}c_{l1}{{\textbf {Q}}}_{l,i}\mathcal {{\textbf {R}}}_{{{\textbf {u}}}_l,i},\ldots ,\sum _{l\in \mathcal N_N}c_{lN}{{\textbf {Q}}}_{l,i}\mathcal {{\textbf {R}}}_{{{\textbf {u}}}_l,i}\Big \} \end{aligned}$$
(41)
$$\begin{aligned}&{{\mathcal {R}}}_{\gamma Q,i}=\mathrm {diag}\Big \{\sum _{l\in \mathcal N_1}c_{l1}\gamma _{l1}{{\textbf {Q}}}_{l,i}{{\textbf {R}}}_{{{\textbf {u}}}_l,i},\ldots ,\sum _{l\in \mathcal N_N}c_{lN}\gamma _{lN}{{\textbf {Q}}}_{l,i}{{\textbf {R}}}_{{{\textbf {u}}}_l,i}\Big \} \end{aligned}$$
(42)
$$\begin{aligned}&{{\mathcal {H}}}_i=\mathrm {diag}\{\mathbf {H}_{1,i},\mathbf {H}_{2,i},\ldots ,\mathbf {H}_{N,i}\} \end{aligned}$$
(43)
$$\begin{aligned}&{{\mathcal {Q}}}^{'}_i=\mathrm {diag}\Big \{\sum _{l\in \mathcal N_1}c_{l1}({{\textbf {I}}}_L-{{\textbf {Q}}}_{l,i}),\ldots ,\sum _{l\in \mathcal N_N}c_{lN}({{\textbf {I}}}_L-{{\textbf {Q}}}_{l,i})\Big \} \end{aligned}$$
(44)
$$\begin{aligned}&{{\mathcal {Q}}}^{'}_{\gamma ,i}=\mathrm {diag}\Big \{\sum _{l\in \mathcal N_1}c_{l1}\gamma _{l1}({{\textbf {I}}}_L-{{\textbf {Q}}}_{l,i}),\ldots ,\sum _{l\in \mathcal N_N}c_{lN}\gamma _{lN}({{\textbf {I}}}_L-{{\textbf {Q}}}_{l,i})\Big \} \end{aligned}$$
(45)
$$\begin{aligned}&{{\mathcal {R}}}_{u,i}=\mathrm {diag}\{{{\textbf {R}}}_{{{\textbf {u}}}_1,i},{{\textbf {R}}}_{{{\textbf {u}}}_2,i},\ldots ,{{\textbf {R}}}_{{{\textbf {u}}}_N,i}\} \end{aligned}$$
(46)
$$\begin{aligned}&{{\mathcal {Q}}}_i=\mathrm {diag}\{{{\textbf {Q}}}_{1,i},{{\textbf {Q}}}_{2,i},\ldots ,{{\textbf {Q}}}_{N,i}\} \end{aligned}$$
(47)
$$\begin{aligned}&{{\mathcal {F}}}=\mathrm {diag}\Big \{\sum _{l\in \mathcal N_1}a_{l1}\gamma _{l_1,i}({{\textbf {I}}}_L-\mathbf {H}_{l,i}),\ldots ,\sum _{l\in \mathcal N_N}a_{lN}\gamma _{l_N,i}({{\textbf {I}}}_L-\mathbf {H}_{l,i})\Big \} \end{aligned}$$
(48)
$$\begin{aligned}&{{\mathcal {F}}}^{'}=\mathrm {diag}\Big \{\sum _{l\in \mathcal N_1}a_{l1}\gamma _{l_1,i}\mathbf {H}_{l,i},\ldots ,\sum _{l\in \mathcal N_N}a_{lN}\gamma _{l_N,i}\mathbf {H}_{l,i}\Big \} \end{aligned}$$
(49)
$$\begin{aligned}&\Big [{\mathcal R}_{Q(I-H),i}\Big ]_{kl}=c_{lk}{{\textbf {Q}}}_{l,i}\mathcal {{\textbf {R}}}_{u_l,i}({{\textbf {I}}}_L-\mathbf {H}_{k,i}) \end{aligned}$$
(50)
$$\begin{aligned}&\Big [{{\mathcal {R}}}_{\gamma Q(I-H),i}\Big ]_{kl}=c_{lk}\gamma _{lk,i}{{\textbf {Q}}}_{l,i}\mathcal {{\textbf {R}}}_{u_l,i}({{\textbf {I}}}_L-\mathbf {H}_{k,i}) \end{aligned}$$
(51)
$$\begin{aligned}&{{\mathcal {D}}}={{\textbf {D}}}_i\otimes {{\textbf {I}}}_L,\quad d_{kl,i}=\sum _{l\in {\mathcal {N}}}a_{lk}(1-\gamma _{lk,i}) \end{aligned}$$
(52)
$$\begin{aligned}&{{\mathcal {R}}}_{qQ,i}=\mathrm {diag}\Big \{\sum _{l\in \mathcal N_1}c_{l1}{{\textbf {Q}}}_{l,i}\mathcal {{\textbf {R}}}_{{{\textbf {u}}}_l,i}{{\textbf {q}}}_{l,i},\ldots ,\sum _{l\in \mathcal N_N}c_{lN}{{\textbf {Q}}}_{l,i}\mathcal {{\textbf {R}}}_{{{\textbf {u}}}_l,i}{{\textbf {q}}}_{l,i}\Big \} \end{aligned}$$
(53)
$$\begin{aligned}&{{\mathcal {R}}}_{\gamma qQ,i}=\mathrm {diag}\Big \{\sum _{l\in \mathcal N_1}c_{l1}\gamma _{l1}{{\textbf {Q}}}_{l,i}\mathcal {{\textbf {R}}}_{{{\textbf {u}}}_l,i}{{\textbf {q}}}_{l,i},\ldots ,\nonumber \\&\qquad \qquad \qquad \sum _{l\in {\mathcal {N}}_N}c_{lN}\gamma _{lN}{{\textbf {Q}}}_{l,i}\mathcal {{\textbf {R}}}_{{{\textbf {u}}}_l,i}{{\textbf {q}}}_{l,i}\Big \} \end{aligned}$$
(54)
$$\begin{aligned}&{{\mathcal {A}}}_{q,i}=\mathrm {diag}\Big \{\sum _{l\in \mathcal N_1}c_{l1}({{\textbf {I}}}_L-{{\textbf {Q}}}_{l,i})\mathcal {{\textbf {R}}}_{{{\textbf {u}}}_l,i}{{\textbf {q}}}_{l,i},\ldots , \nonumber \\&\qquad \qquad \qquad \sum _{l\in {\mathcal {N}}_N}c_{lN}({{\textbf {I}}}_L-{{\textbf {Q}}}_{l,i})\mathcal {{\textbf {R}}}_{{{\textbf {u}}}_l,i}{{\textbf {q}}}_{l,i} \Big \} \end{aligned}$$
(55)
$$\begin{aligned}&{{\mathcal {A}}}_{\gamma q,i}=\mathrm {diag}\Big \{\sum _{l\in \mathcal N_1}c_{l1}\gamma _{l1}({{\textbf {I}}}_L-{{\textbf {Q}}}_{l,i})\mathcal {{\textbf {R}}}_{{{\textbf {u}}}_l,i}{{\textbf {q}}}_{l,i},\ldots , \nonumber \\&\qquad \qquad \qquad \sum _{l\in {\mathcal {N}}_N}c_{lN}\gamma _{lN}({{\textbf {I}}}_L-{{\textbf {Q}}}_{l,i})\mathcal {{\textbf {R}}}_{{{\textbf {u}}}_l,i}{{\textbf {q}}}_{l,i} \Big \} \end{aligned}$$
(56)
$$\begin{aligned}&{{\mathcal {S}}}_i=\mathrm {col}\{{{\textbf {u}}}_{1,i}n_{1,i},\ldots ,{{\textbf {u}}}_{N,i}n_{N,i}\} \end{aligned}$$
(57)

With the above definitions, some manipulations show that:

$$\begin{aligned} \tilde{{{\textbf {w}}}}_i=({{\textbf {I}}}_{NL}+\mathbf {\mathcal F})\tilde{\varvec{\psi }}_i+\mathbf {\mathcal F}^{'}\tilde{{{\textbf {w}}}}_{i-1}+\mathbf {{\mathcal {D}}}\tilde{{{\textbf {w}}}}_{i-1} \end{aligned}$$
(58)

To investigate mean convergence, we take the expectation of the above equation, which yields:

$$\begin{aligned} E\{\tilde{{{\textbf {w}}}}_i\}=E\{({{\textbf {I}}}_{NL}+\mathbf {\mathcal F})\}E\{\tilde{\varvec{\psi }}_i\}+(E\{\mathbf {\mathcal F}^{'}\}+E\{\mathbf {{\mathcal {D}}}\})E\{\tilde{{{\textbf {w}}}}_{i-1}\} \end{aligned}$$
(59)

In the above formula, the term \(E\{\tilde{\varvec{\psi }}_i\}\) is difficult to compute: it requires a recursion expressing \(\tilde{\varvec{\psi }}_i\) in terms of \(\tilde{{{\textbf {w}}}}_{i-1}\). We derive this recursion, omitting the details for brevity. We have:

$$\begin{aligned} \tilde{\varvec{\psi }}_i= & {} \Big ({{\textbf {I}}}_{NL}-\mathbf {{\mathcal {M}}}\mathbf {\mathcal R}_{Q,i}\mathbf {{\mathcal {H}}}_i-\mathbf {{\mathcal {M}}}\mathbf {\mathcal Q}^{'}_i\mathbf {{\mathcal {R}}}_{u,i}-\mathbf {\mathcal M}\mathbf {{\mathcal {R}}}_{Q(I-H),i} \nonumber \\&\quad -\, \mathbf {{\mathcal {M}}}\mathbf {{\mathcal {R}}}_{\gamma Q,i}\mathbf {\mathcal H}_i-\mathbf {{\mathcal {M}}}\mathbf {\mathcal Q}^{'}_{\gamma ,i}\mathbf {{\mathcal {R}}}_{u,i}-\mathbf {\mathcal M}\mathbf {{\mathcal {R}}}_{\gamma Q(I-H),i}\Big )\tilde{{{\textbf {w}}}}_{i-1} \nonumber \\&\quad -\, \Big (\mathbf {{\mathcal {M}}}\mathbf {{\mathcal {C}}}^T\mathbf {\mathcal Q}_i+\mathbf {{\mathcal {M}}}\mathbf {{\mathcal {Q}}}^{'}_i+\mathbf {\mathcal M}\mathbf {{\mathcal {C}}}^T\mathbf {\mathcal Q}_{\gamma ,i}+\mathbf {{\mathcal {M}}}\mathbf {\mathcal Q}^{'}_{\gamma ,i}\Big )\mathbf {{\mathcal {S}}}_i \nonumber \\&\quad -\, \mathbf {{\mathcal {M}}}\mathbf {\mathcal R}_{qQ,i}-\mathbf {{\mathcal {M}}}\mathbf {{\mathcal {R}}}_{\gamma qQ,i}-\mathbf {{\mathcal {M}}}\mathbf {{\mathcal {A}}}_{q,i}-\mathbf {\mathcal M}\mathbf {{\mathcal {A}}}_{\gamma q,i} \end{aligned}$$
(60)

Since we assume that \(E\{{{\textbf {q}}}_{k,i}\}=0\) and the noise vectors are zero-mean, the expectations of the terms in the fourth row of (60) are zero. Some further calculations then lead to the following formula:

$$\begin{aligned} E\{\tilde{\varvec{\psi }}_i\}= & {} \left( {{\textbf {I}}}_{NL}-\frac{MM_{\varDelta }}{L^2}\mathbf {\mathcal M}\mathbf {{\mathcal {R}}}-\left( 1-\frac{M_{\varDelta }}{L}\right) \mathbf {\mathcal M}\mathbf {{\mathcal {R}}}_u \nonumber \right. \\&\left. \quad -\, \frac{M_{\varDelta }}{L}\left( 1-\frac{M}{L}\right) \mathbf {\mathcal M}\mathbf {{\mathcal {C}}}^T\mathbf {\mathcal R}_u-p_a\frac{MM_{\varDelta }}{L^2}\mathbf {{\mathcal {M}}}\mathbf {\mathcal R}-p_a\left( 1-\frac{M_{\varDelta }}{L}\right) \mathbf {{\mathcal {M}}}\mathbf {\mathcal R}_u \nonumber \right. \\&\quad \left. -\, p_a\frac{M_{\varDelta }}{L}\left( 1-\frac{M}{L}\right) \mathbf {\mathcal M}\mathbf {{\mathcal {C}}}^T\mathbf {\mathcal R}_u\right) E\{\tilde{{{\textbf {w}}}}_{i-1}\}=\mathbf {\mathcal B}E\{\tilde{{{\textbf {w}}}}_{i-1}\}, \end{aligned}$$
(61)

where

$$\begin{aligned}&\mathbf {{\mathcal {R}}}_u=E\{\mathbf {\mathcal R}_{u,i}\}=\mathrm {diag}\{\mathbf {\mathcal R}_{u_1},\ldots ,\mathbf {{\mathcal {R}}}_{u_N}\} \end{aligned}$$
(62)
$$\begin{aligned}&\mathbf {{\mathcal {R}}}=\mathrm {diag}\{\mathbf {\mathcal R}_1,\ldots ,\mathbf {{\mathcal {R}}}_N\} \end{aligned}$$
(63)

with

$$\begin{aligned} \mathbf {{\mathcal {R}}}_k=\sum _{l\in {\mathcal {N}}_k}c_{lk}\mathbf {\mathcal R}_{u_l}. \end{aligned}$$
(64)
Table 1 Final MSD of various diffusion algorithms

Then, substituting (61) into (59) and performing some calculations, we reach

$$\begin{aligned} E\{\tilde{{{\textbf {w}}}}_i\}= & {} \Big (1+p_a(1-\frac{M}{L})\Big )\mathbf {\mathcal B}E\{\tilde{{{\textbf {w}}}}_{i-1}\} \nonumber \\&\quad +\, \Big (p_a\frac{M}{L}+(1-p_a)\Big )E\{\tilde{{{\textbf {w}}}}_{i-1}\}=(\mathbf {\mathcal B}^{'}+\mathbf {{\mathcal {Y}}})E\{\tilde{{{\textbf {w}}}}_{i-1}\}, \end{aligned}$$
(65)

where \(\mathbf {\mathcal B}^{'}=\Big (1+p_a(1-\frac{M}{L})\Big )\mathbf {{\mathcal {B}}}\) and \(\mathbf {{\mathcal {Y}}}=[p_a\frac{M}{L}+(1-p_a)]{{\textbf {I}}}_{NL}\). Hence, (65) can be written in the following recursive form

$$\begin{aligned} E\{\tilde{{{\textbf {w}}}}_i\}=(\mathbf {{\mathcal {B}}}^{'}+\mathbf {\mathcal Y})E\{\tilde{{{\textbf {w}}}}_{i-1}\} \end{aligned}$$
(66)

Therefore, similar to [18], the proposed AR-DC-DLMS asymptotically converges in the mean toward \({{\textbf {w}}}_o\) if and only if \(\rho (\mathbf {{\mathcal {B}}}^{'}+\mathbf {{\mathcal {Y}}})<1\), where \(\rho (\cdot )\) denotes the spectral radius of its matrix argument. From matrix algebra, we have \(\rho ({{\textbf {X}}})\le ||{{\textbf {X}}}||\) for any induced norm. So, we have:

$$\begin{aligned} \rho (\mathbf {{\mathcal {B}}}^{'}+\mathbf {\mathcal Y})\le ||\mathbf {{\mathcal {B}}}^{'}+\mathbf {\mathcal Y}||_{b,\infty }\le \mathrm {max}||\Big [\mathbf {\mathcal B}^{'}+\mathbf {{\mathcal {Y}}}\Big ]_{kl}|| \end{aligned}$$
(67)

where \(||\cdot ||_{b,\infty }\) denotes the block maximum norm. From (67), we obtain:

$$\begin{aligned} \rho (\mathbf {{\mathcal {B}}}^{'}+\mathbf {\mathcal Y})\le & {} \mathrm {max}_{k,l}||{{\textbf {I}}}_L-\mu _k\left[ \frac{MM_{\varDelta }}{L^2}+\left( 1-\frac{M_{\varDelta }}{L}\right) \mathbf {\mathcal R}_{u_k}\nonumber \right. \\&\left. \quad +\, \frac{M_{\varDelta }}{L}\left( 1-\frac{M}{L}\right) c_{lk}\mathbf {\mathcal R}_{u_l}-p_a[\frac{MM_{\varDelta }}{L^2}\mathbf {\mathcal R}_k+\left( 1-\frac{M_{\varDelta }}{L}\right) \mathbf {{\mathcal {R}}}_{u_k} \nonumber \right. \\&\left. \quad +\, \frac{M_{\varDelta }}{L}\left( 1-\frac{M}{L}\right) c_{lk}\mathbf {\mathcal R}_{u_l}]+p_a\frac{M}{L}+(1-p_a)\right] ||<1 \end{aligned}$$
(68)

Similar to [18], the matrix in square brackets on the right-hand side of (68) is positive definite, since it is a linear combination with positive coefficients of the positive-definite matrices \(\mathbf {{\mathcal {R}}}_k\), \(\mathbf {{\mathcal {R}}}_{u_k}\), and \(\mathbf {{\mathcal {R}}}_{u_l}\). Hence, the condition on the right-hand side of (68) holds if (29) is satisfied. Then, \(\lambda _{\mathrm {max},k}\) is given by (30), and the proof is complete.
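The stability condition can be checked numerically. The sketch below builds the matrix \(\mathbf {{\mathcal {B}}}\) following the structure of (61), forms \(\mathbf {{\mathcal {B}}}^{'}+\mathbf {{\mathcal {Y}}}\) as in (65), and compares its spectral radius against an induced-norm upper bound, cf. (67). All numeric values (network size, filter length, step sizes, covariances, combination weights, \(M\), \(M_{\varDelta }\), \(p_a\)) are assumed for illustration only; mean stability for a given design requires the computed spectral radius to be below one.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed sizes/parameters mirroring the appendix symbols:
# N nodes, filter length L, M and M_delta compression parameters,
# p_a attack probability, mu_k step sizes.
N, L = 4, 3
M, M_delta, p_a = 2, 1, 0.1
mu = np.full(N, 0.01)

# Random SPD input covariances R_{u_k} and uniform combination weights.
R_u_blocks = []
for _ in range(N):
    A = rng.standard_normal((L, L))
    R_u_blocks.append(A @ A.T + L * np.eye(L))
C = np.full((N, N), 1.0 / N)                # c_{lk}

def blockdiag(blocks):
    # Assemble an (N*L) x (N*L) block-diagonal matrix from LxL blocks.
    out = np.zeros((N * L, N * L))
    for k, blk in enumerate(blocks):
        out[k*L:(k+1)*L, k*L:(k+1)*L] = blk
    return out

R_u = blockdiag(R_u_blocks)                                   # (62)
R_bar = blockdiag([sum(C[l, k] * R_u_blocks[l] for l in range(N))
                   for k in range(N)])                        # (63)-(64)
Mcal = np.kron(np.diag(mu), np.eye(L))                        # (38)
Ccal = np.kron(C, np.eye(L))                                  # (40)

# B from (61): identity minus (1 + p_a) times the covariance terms.
B = np.eye(N * L) - (1 + p_a) * (
    (M * M_delta / L**2) * Mcal @ R_bar
    + (1 - M_delta / L) * Mcal @ R_u
    + (M_delta / L) * (1 - M / L) * Mcal @ Ccal.T @ R_u)

B_prime = (1 + p_a * (1 - M / L)) * B       # B' of (65)
Y = (p_a * M / L + (1 - p_a)) * np.eye(N * L)

X = B_prime + Y
rho = np.abs(np.linalg.eigvals(X)).max()    # spectral radius rho(X)
bound = np.linalg.norm(X, np.inf)           # an induced-norm bound on rho
print(rho, bound)                           # mean stability needs rho < 1
```

The inequality \(\rho ({{\textbf {X}}})\le ||{{\textbf {X}}}||\) guarantees that the printed spectral radius never exceeds the induced-norm bound, which is the relation exploited in (67).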

Appendix B Calculating the Fisher-Information Matrix

To calculate the FIM, the total likelihood is written as

$$\begin{aligned} p({{\textbf {X}}}|{{\textbf {w}}})=\prod _{k=1}^Np({{\textbf {x}}}_k|{{\textbf {w}}})=\prod _{k=1}^Np_{{{\textbf {z}}}_k}({{\textbf {x}}}_k-{{\textbf {U}}}_k{{\textbf {w}}}). \end{aligned}$$
(69)

Since \({{\textbf {z}}}_k=[z_{k,1},z_{k,2},\ldots ,z_{k,T}]^T\) with \(z_{k,i}={{\textbf {u}}}^T_{k,i}\tilde{{{\textbf {q}}}}_{k,i}+n_{k,i}\), each \(z_{k,i}\) is Gaussian with zero mean and variance \(\sigma ^2_{z_{k,i}}=E(z^2_{k,i})\). Since \(n_{k,i}\) and \(\tilde{{{\textbf {q}}}}_{k,i}\) are assumed to be independent and zero-mean, we have \(\sigma ^2_{z_{k,i}}=E\{({{\textbf {u}}}^T_{k,i}\tilde{{{\textbf {q}}}}_{k,i})^2\}+E\{n^2_{k,i}\}\). This equals \(\sigma ^2_{z_{k,i}}=E\{{{\textbf {u}}}^T_{k,i}\tilde{{{\textbf {q}}}}_{k,i}\tilde{{{\textbf {q}}}}^T_{k,i}{{\textbf {u}}}_{k,i}\}+\sigma ^2_{k,n}\), where \(\sigma ^2_{k,n}\) is the noise variance at node k. Some simple calculations lead to the following formula

$$\begin{aligned} \sigma ^2_{z_{k,i}}={{\textbf {u}}}^T_{k,i}E\{\tilde{{{\textbf {q}}}}_{k,i}\tilde{{{\textbf {q}}}}^T_{k,i}\}{{\textbf {u}}}_{k,i}+\sigma ^2_{k,n}=p_a\sigma ^2_q||{{\textbf {u}}}_{k,i}||^2+\sigma ^2_{k,n}, \end{aligned}$$
(70)

where it is assumed that the elements of \(\tilde{{{\textbf {q}}}}_{k,i}\) are uncorrelated. Since the elements of \({{\textbf {z}}}_k\) are mutually independent, the vector \({{\textbf {z}}}_k\) is Gaussian with zero mean and diagonal covariance matrix \({{\textbf {P}}}_k=\mathrm {diag}(\sigma ^2_{z_{k,i}})\). So, from (69), we can write the log-likelihood as \(\log p({{\textbf {X}}}|{{\textbf {w}}})=\sum _{k=1}^N \Big [-\frac{T}{2}\log (2\pi )-\frac{1}{2}\sum _i\log (\sigma ^2_{z_{k,i}})-\frac{1}{2}\Big ({{\textbf {x}}}_k-{{\textbf {U}}}_k{{\textbf {w}}}\Big )^T{{\textbf {P}}}_k^{-1}\Big ({{\textbf {x}}}_k-{{\textbf {U}}}_k{{\textbf {w}}}\Big )\Big ]\). Hence, the partial derivative equals \(\frac{\partial \log p({{\textbf {X}}}|{{\textbf {w}}})}{\partial w_j}=-\frac{1}{2}\sum _{k=1}^N\frac{\partial }{\partial w_j}\Big [{{\textbf {e}}}_k^T{{\textbf {P}}}_k^{-1}{{\textbf {e}}}_k\Big ]\), where \({{\textbf {e}}}_k={{\textbf {x}}}_k-{{\textbf {U}}}_k{{\textbf {w}}}\). Writing \(B={{\textbf {e}}}_k^T{{\textbf {P}}}_k^{-1}{{\textbf {e}}}_k=\sum _{i=1}^T P^{-1}_{k,ii}e^2_{k,i}\), the partial derivative is \(\frac{\partial B}{\partial w_j}=\sum _{i=1}^T 2P^{-1}_{k,ii}e_{k,i}\frac{\partial e_{k,i}}{\partial w_j}\), with \(\frac{\partial e_{k,i}}{\partial w_j}=-U_{k,i,j}\). Taking the second partial derivative and doing some simple manipulations, we reach \(F_{l,j}=-E\{\frac{\partial ^2\log p({{\textbf {X}}}|{{\textbf {w}}})}{\partial w_l\partial w_j}\}=\sum _{k=1}^N\sum _{i=1}^T P^{-1}_{k,ii}U_{k,i,j}U_{k,i,l}\), which leads to (35).
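Under the Gaussian model above, the FIM entries collect into \(F=\sum _{k=1}^N {{\textbf {U}}}_k^T{{\textbf {P}}}_k^{-1}{{\textbf {U}}}_k\), and the CRB is the inverse of \(F\). A minimal numerical sketch of this computation follows, with random regressors and assumed problem sizes and attack/noise parameters (none taken from the paper's simulations):

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed sizes/parameters: N nodes, T samples per node, parameter
# length L_w, attack probability p_a, attack and noise powers.
N, T, L_w = 5, 50, 4
p_a, sigma2_q, sigma2_n = 0.1, 0.5, 0.01

F = np.zeros((L_w, L_w))                   # Fisher-information matrix
for _ in range(N):
    U_k = rng.standard_normal((T, L_w))    # regressor matrix of node k
    # Per-sample variance, cf. (70): p_a*sigma_q^2*||u_{k,i}||^2 + sigma_n^2.
    var_z = p_a * sigma2_q * np.sum(U_k**2, axis=1) + sigma2_n
    # Accumulate U_k^T P_k^{-1} U_k with diagonal P_k = diag(var_z).
    F += U_k.T @ (U_k / var_z[:, None])

crb = np.linalg.inv(F)                     # CRB on the unbiased estimators
print(np.trace(crb))                       # total bound over coefficients
```

Dividing each row of \(U_k\) by its sample variance implements the diagonal \({{\textbf {P}}}_k^{-1}\) without forming the \(T\times T\) matrix explicitly.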

Cite this article

Zayyani, H., Oruji, F. & Fijalkow, I. An Adversary-Resilient Doubly Compressed Diffusion LMS Algorithm for Distributed Estimation. Circuits Syst Signal Process 41, 6182–6205 (2022). https://doi.org/10.1007/s00034-022-02072-w
