Abstract
This paper proposes an adversary-resilient, communication-efficient distributed estimation algorithm for time-varying networks. It generalizes the doubly compressed diffusion least mean square algorithm, which is not adversary-resilient. The major drawback of existing adversary detectors in the literature is that their detection criteria are chosen heuristically. In this paper, an adversary detector is derived theoretically from a Bayesian hypothesis test (BHT). It is proved that the test statistic of the detector is a distance metric compared against a threshold, as in related work in the literature; the BHT therefore establishes the validity of this detection criterion. Another difficulty encountered in existing works is the determination of the thresholds. In this paper, the optimum thresholds are derived in closed form. Since the optimum thresholds depend on the values of unknown parameters, they cannot be computed directly in practice; hence, suboptimal procedures for determining the thresholds are provided. Moreover, the convergence in the mean of the algorithm is investigated analytically. In addition, the Cramér–Rao bound for distributed estimation based on all node observations in the presence of adversaries is calculated. The simulation results show the effectiveness of the proposed algorithms and demonstrate that, after some delay, they reach the performance of the algorithm in which the adversaries are ideally known in advance.
Data Availability
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
Notes
The false data injection error \(q_{k,i}\), across adversary nodes and time indexes, is more covert if it takes positive and negative values equiprobably: if it were biased in one direction, the adversary could be detected from that bias. We therefore assume such a symmetric distribution for \(q_{k,i}\). Other distributions could be assumed for \(q_{k,i}\), and the CRB could be derived under those assumptions instead.
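To illustrate why a symmetric injection is harder to detect, the following Python sketch (all parameter values hypothetical, not taken from the paper) compares a one-sided injection with an equiprobable \(\pm\) injection using a naive sample-mean check:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10_000            # number of samples at one node (hypothetical)
sigma_n = 0.5         # measurement-noise standard deviation (hypothetical)
amp = 1.0             # injection amplitude (hypothetical)

noise = rng.normal(0.0, sigma_n, T)
biased = noise + amp                                  # always +amp
symmetric = noise + amp * rng.choice([-1.0, 1.0], T)  # +/- amp equiprobably

def mean_detector(x):
    """Flag an attack if the sample mean exceeds 3 standard errors."""
    return abs(x.mean()) > 3.0 * x.std() / np.sqrt(len(x))

print(mean_detector(biased))     # True: the one-sided bias is exposed
print(mean_detector(symmetric))  # the zero-mean injection leaves no bias to exploit
```

This only motivates the zero-mean assumption; the paper's actual detector is the BHT-based test, not this sample-mean check.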
References
F.S. Abkenar, A. Jamalipour, Energy optimization in association-free fog-IoT networks. IEEE Trans. Green Commun. Netw. 4(2), 404–412 (2020)
M. Amarlingam, K.V.V. Durga Prasad, P. Rajalakshmi, S.S. Channappayya, C.S. Sastry, A novel low-complexity compressed data aggregation method for energy-constrained IoT networks. IEEE Trans. Green Commun. Netw. 4(3), 717–730 (2020)
R. Arablouei, S. Werner, Y. Huang, K. Dogancay, Distributed least mean-square estimation with partial diffusion. IEEE Trans. Signal Process. 62(2), 472–484 (2014)
R. Arablouei, K. Dogancay, S. Werner, Y. Huang, Adaptive distributed estimation based on recursive least-squares and partial diffusion. IEEE Trans. Signal Process. 62(14), 3510–3522 (2014)
R. Arablouei, S. Werner, K. Dogancay, Y. Huang, Analysis of a reduced-communication diffusion LMS algorithm. Signal Process. 117, 355–361 (2015)
S. Ashkezari-Toussi, H. Sadoghi-Yazdi, Robust diffusion LMS over adaptive networks. Signal Process. 158, 201–209 (2019)
N.J. Bershad, E. Eweda, J.C.M. Bermudez, Stochastic analysis of the diffusion LMS algorithm for cyclostationary white Gaussian inputs. Signal Process. 185, 108081 (2021)
F.S. Cattivelli, A.H. Sayed, Diffusion LMS strategies for distributed estimation. IEEE Trans. Signal Process. 58, 1035–1048 (2010)
H. Chang, W. Li, Correction-based diffusion LMS algorithms for distributed estimation. Circuits Syst. Signal Process. 39, 4136–4154 (2020)
H. Chang, W. Li, Correction-based diffusion LMS algorithms for secure distributed estimation under attacks. Digital Signal Process. 102, 102735 (2020)
F. Chen, X. Shao, Broken-motifs diffusion LMS algorithm for reducing communication load. Signal Process. 133, 213–218 (2017)
F. Chen, S. Deng, Y. Hua, S. Duan, L. Wang, J. Wu, Communication-reducing algorithm of distributed least mean square algorithm with neighbor-partial diffusion. Circuits Syst. Signal Process. 39, 4416–4435 (2020)
F. Chen, L. Hu, P. Liu, M. Feng, A robust diffusion estimation algorithm for asynchronous networks in IoT. IEEE Internet Things J. 7(9), 9103–9115 (2020)
Y. Chen, S. Kar, J.M.F. Moura, Resilient distributed estimation through adversary detection. IEEE Trans. Signal Process. 66(9), 2455–2469 (2018)
P. Cheng et al., Asynchronous fault detection observer for 2-D Markov jump systems. IEEE Trans. Cybern. (2021). Early access
H. Fang et al., Adaptive optimization algorithm for nonlinear Markov jump systems with partial unknown dynamics. Int. J. Robust Nonlinear Control 31, 2126–2140 (2021)
V. Filipovic, N. Nedic, V. Stojanovic, Robust identification of pneumatic servo actuators in the real situations. Forsch. Ingenieurwes. 75, 183–196 (2011)
E. Harrane, R. Flamary, C. Richard, On reducing the communication cost of the diffusion LMS algorithm. IEEE Trans. Signal Inf. Process. Netw. 5(1), 100–112 (2019)
Y. Hua, F. Chen, S. Deng, S. Duan, L. Wang, Secure distributed estimation against false data injection attack. Inf. Sci. 515, 248–262 (2020)
W. Huang, X. Yang, G. Shen, Communication-reducing diffusion LMS algorithm over multitask networks. Inf. Sci. 382–383, 115–134 (2017)
S.M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory (Prentice Hall, 1993)
M. Korki, H. Zayyani, Weighted diffusion continuous mixed p-norm algorithm for distributed estimation in non-uniform noise environment. Signal Process. 164, 225–233 (2019)
J.W. Lee, S.E. Kim, W.J. Song, Data-selective diffusion LMS for reducing communication overhead. Signal Process. 113, 211–217 (2015)
J.W. Lee, J.T. Kong, W.J. Song, S.E. Kim, Data-reserved periodic diffusion LMS with low communication cost over networks. IEEE Access 6, 54636–54650 (2018)
X. Li, M. Feng, F. Chen, Q. Shi, J. Kurths, Robust distributed estimation based on a generalized correntropy logarithmic difference algorithm over wireless sensor networks. Signal Process. 77, 107731 (2020)
Y. Liu, C. Li, Secure distributed estimation over wireless sensor networks under attacks. IEEE Trans. Aerosp. Electron. Syst. 54(4), 1815–1831 (2018)
K. Ntemos, J. Plata-Chaves, N. Kolokotronis, N. Kalouptsidis, M. Moonen, Secure information sharing in adversarial adaptive diffusion networks. IEEE Trans. Signal Inf. Process. Netw. 4(1), 111–124 (2018)
G. Nunez, C. Borges, A. Chorti, Understanding the performance of software defined wireless sensor networks under denial of service attack. Open J. Internet Things 5(1), 58–68 (2019)
J.G. Proakis, Digital Communications (McGraw-Hill, 2001)
A.H. Sayed, Adaptation, learning, and optimization over networks. Found. Trends Mach. Learn. 7(4–5), 311–801 (2014)
M.O. Sayin, S.S. Kozat, Single bit and reduced dimension diffusion strategies over distributed networks. IEEE Signal Process. Lett. 20(10), 976–979 (2013)
M.O. Sayin, S.S. Kozat, Compressive diffusion strategies over distributed networks for reduced communication load. IEEE Trans. Signal Process. 62(20), 5308–5323 (2014)
Q. Shi, M. Feng, X. Li, S. Wang, F. Chen, A secure distributed information sharing algorithm based on attack detection in multi-task networks. IEEE Trans. Circuits Syst. I Regul. Pap. 67(12), 5125–5138 (2020)
M. Shirazi, A. Vosoughi, On distributed estimation in hierarchical power constrained wireless sensor networks. IEEE Trans. Signal Inf. Process. Netw. 6, 442–459 (2020)
H. Shiri, M.A. Tinati, M. Codreanu, G. Azarnia, Distributed sparse diffusion estimation with reduced communication cost. IET Signal Process. 12(8), 1043–1052 (2018)
Z. Yang, A. Gang, W.U. Bajwa, Adversary-resilient distributed and decentralized statistical inference and machine learning. IEEE Signal Process. Mag. 37(3), 146–159 (2020)
H. Zayyani, M. Korki, F. Marvasti, A distributed 1-bit compressed sensing algorithm robust to impulsive noise. IEEE Commun. Lett. 20(6), 1132–1135 (2016)
H. Zayyani, A. Javaheri, A robust generalized proportionate diffusion LMS algorithm for distributed estimation. IEEE Trans. Circuits Syst. II Express Briefs (2020). Early access, Oct 2020
H. Zayyani, Robust minimum disturbance diffusion LMS for distributed estimation. IEEE Trans. Circuits Syst. II Express Briefs 68(1), 521–525 (2021)
H. Zayyani, Communication reducing diffusion LMS robust to impulsive noise using smart selection of communication nodes. Circuits Syst. Signal Process. (2021)
Acknowledgements
This research was financially supported by the research deputy of Qom University of Technology (Grant No. 553868). The work of I. Fijalkow was partially supported by the ELIOT project (ANR-18-CE40-0030 and FAPESP 2018/12579-7).
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix A The Condition for Convergence in the Mean
To establish a sufficient condition for mean convergence of the weight vectors, we first define some notation. Let \(\tilde{\varvec{\psi }}_{k,i}={{\textbf {w}}}_o-\varvec{\psi }_{k,i}\), and collect these vectors as \(\tilde{\varvec{\psi }}_i=\mathrm {col}\{\tilde{\varvec{\psi }}_{1,i},\tilde{\varvec{\psi }}_{2,i},\ldots ,\tilde{\varvec{\psi }}_{N,i}\}\). Let \(\mathcal {{\textbf {R}}}_{{{\textbf {u}}}_l,i}={{\textbf {u}}}_{l,i}{{\textbf {u}}}^T_{l,i}\). The remaining notation is defined as follows:
With the above definitions, some manipulations show that:
To investigate the mean convergence, we take the expectation of the above equation, which yields:
In the above formula, the term \(E\{\tilde{\varvec{\psi }}_i\}\) is difficult to compute; it requires a recursion formula for \(\tilde{{{\textbf {w}}}}_i\). Carrying this out (the details of the derivation are omitted for brevity) gives:
Since we assume \(E\{{{\textbf {q}}}_{k,i}\}=0\) and the noise vectors have zero mean, the expectations of the terms in the fourth row of (60) vanish. Some calculations then lead to the following formula:
where
with
Then, substituting (61) into (59), some calculations yield
where \(\mathbf {\mathcal B}^{'}=\Big (1+p_a(1-\frac{M}{L})\Big )\mathbf {{\mathcal {B}}}\) and \(\mathbf {{\mathcal {Y}}}=[p_a\frac{M}{L}+(1-p_a)]{{\textbf {I}}}_N\). Then, (65) can be written in the following recursive form
Therefore, similarly to [18], the proposed AR-DC-DLMS algorithm asymptotically converges in the mean toward \({{\textbf {w}}}_o\) if and only if \(\rho (\mathbf {{\mathcal {B}}}^{'}+\mathbf {{\mathcal {Y}}})<1\), where \(\rho (\cdot )\) denotes the spectral radius of its matrix argument. From matrix algebra, we have \(\rho ({{\textbf {X}}})\le ||{{\textbf {X}}}||\) for any induced norm. So, we have:
where \(||.||_{b,\infty }\) is the block maximum norm. From (67), we obtain:
As in [18], the matrix in square brackets on the RHS of (68) is positive definite, being a linear combination with positive coefficients of the positive-definite matrices \(\mathbf {{\mathcal {R}}}_k\), \(\mathbf {{\mathcal {R}}}_{u_k}\), and \(\mathbf {{\mathcal {R}}}_{u_l}\). Hence, the condition on the right side of (68) holds if (29) is satisfied; \(\lambda _{\mathrm {max},k}\) is then given by (30), and the proof is complete.
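The step from the spectral radius to an induced norm can be illustrated numerically. The Python sketch below uses hypothetical stand-in matrices, not the paper's actual \(\mathbf {{\mathcal {B}}}^{'}\) and \(\mathbf {{\mathcal {Y}}}\): it checks that the spectral radius never exceeds the induced \(\infty \)-norm, so a norm bound below one certifies the mean-convergence condition.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 4x4 stand-ins for B' and Y; the paper's actual matrices are
# built from step sizes, regressor covariances, and the attack probability
# p_a, which are not reproduced here.
B = rng.random((4, 4))
B = 0.5 * B / np.linalg.norm(B, np.inf)   # scale so that ||B||_inf = 0.5
Y = 0.3 * np.eye(4)
X = B + Y

spectral_radius = max(abs(np.linalg.eigvals(X)))
induced_norm = np.linalg.norm(X, np.inf)  # induced inf-norm: max abs row sum

# rho(X) <= ||X|| holds for any induced norm, so ||X|| < 1 is a
# (conservative) sufficient certificate for rho(X) < 1.
print(spectral_radius, induced_norm)
```

Here \(||B||_\infty + ||Y||_\infty = 0.8 < 1\), so the norm bound alone already guarantees convergence of the recursion in the mean for this example.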
Appendix B Calculating the Fisher-Information Matrix
To calculate the FIM, the total likelihood can be written as
Since \({{\textbf {z}}}_k=[z_{k,1},z_{k,2},\ldots ,z_{k,T}]^T\) with \(z_{k,i}={{\textbf {u}}}^T_{k,i}\tilde{{{\textbf {q}}}}_{k,i}+n_{k,i}\), each \(z_{k,i}\) is Gaussian with zero mean and variance \(\sigma ^2_{z_{k,i}}=E(z^2_{k,i})\). Since \(n_{k,i}\) and \(\tilde{{{\textbf {q}}}}_{k,i}\) are assumed independent with zero mean, we have \(\sigma ^2_{z_{k,i}}=E\{({{\textbf {u}}}^T_{k,i}\tilde{{{\textbf {q}}}}_{k,i})^2\}+E\{n^2_{k,i}\}=E\{{{\textbf {u}}}^T_{k,i}\tilde{{{\textbf {q}}}}_{k,i}\tilde{{{\textbf {q}}}}^T_{k,i}{{\textbf {u}}}_{k,i}\}+\sigma ^2_{k,n}\), where \(\sigma ^2_{k,n}\) is the noise variance at node k. Some simple calculations lead to the following formula
where the elements of \(\tilde{{{\textbf {q}}}}_{k,i}\) are assumed uncorrelated. Since the elements of \({{\textbf {z}}}_k\) are independent of each other, the vector \({{\textbf {z}}}_k\) is Gaussian with zero mean and diagonal covariance matrix \({{\textbf {P}}}_k=\mathrm {diag}(\sigma ^2_{z_{k,i}})\). So, from (69), the log-likelihood is \(\log p({{\textbf {X}}}|{{\textbf {w}}})=\sum _{k=1}^N \Big [-\frac{T}{2}\log (2\pi )-\frac{1}{2}\sum _{i=1}^T\log (\sigma ^2_{z_{k,i}})-\frac{1}{2}\Big ({{\textbf {x}}}_k-{{\textbf {U}}}_k{{\textbf {w}}}\Big )^T{{\textbf {P}}}_k^{-1}\Big ({{\textbf {x}}}_k-{{\textbf {U}}}_k{{\textbf {w}}}\Big )\Big ]\). Hence, the partial derivative equals \(\frac{\partial \log p({{\textbf {X}}}|{{\textbf {w}}})}{\partial w_j}=-\frac{1}{2}\sum _{k=1}^N\frac{\partial }{\partial w_j}\Big [{{\textbf {e}}}_k^T{{\textbf {P}}}_k^{-1}{{\textbf {e}}}_k\Big ]\), where \({{\textbf {e}}}_k={{\textbf {x}}}_k-{{\textbf {U}}}_k{{\textbf {w}}}\). Writing \(B={{\textbf {e}}}_k^T{{\textbf {P}}}_k^{-1}{{\textbf {e}}}_k=\sum _{i=1}^T P^{-1}_{k,ii}e^2_{k,i}\), we have \(\frac{\partial B}{\partial w_j}=\sum _{i=1}^T 2P^{-1}_{k,ii}e_{k,i}\frac{\partial e_{k,i}}{\partial w_j}\) with \(\frac{\partial e_{k,i}}{\partial w_j}=-U_{k,i,j}\). Taking the second partial derivative and performing some simple manipulations, we reach \(F_{l,j}=-E\{\frac{\partial ^2\log p({{\textbf {X}}}|{{\textbf {w}}})}{\partial w_l\partial w_j}\}=\sum _{k=1}^N\sum _{i=1}^T P^{-1}_{k,ii}U_{k,i,j}U_{k,i,l}\), which leads to (35).
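For concreteness, the FIM assembly above can be sketched numerically. The following Python snippet (with hypothetical dimensions and variances, not the paper's data-model parameters) builds \({{\textbf {F}}}=\sum _k {{\textbf {U}}}_k^T{{\textbf {P}}}_k^{-1}{{\textbf {U}}}_k\) and inverts it to obtain the CRB:

```python
import numpy as np

rng = np.random.default_rng(2)
N, T, L = 3, 50, 4   # hypothetical: N nodes, T samples per node, L = dim(w)

# F_{l,j} = sum_k sum_i P^{-1}_{k,ii} U_{k,i,j} U_{k,i,l},
# i.e. F = sum_k U_k^T P_k^{-1} U_k with P_k diagonal.
F = np.zeros((L, L))
for k in range(N):
    U_k = rng.normal(size=(T, L))   # regressor matrix of node k
    var_z = 0.1 + rng.random(T)     # sigma^2_{z_{k,i}}: noise + injection power
    F += U_k.T @ np.diag(1.0 / var_z) @ U_k

# The CRB on unbiased estimation of w is the inverse FIM; its trace lower
# bounds the total mean-squared estimation error.
crb = np.linalg.inv(F)
print(np.trace(crb))
```

With \(T>L\) and generic regressors, \({{\textbf {F}}}\) is positive definite, so the inverse exists and the bound is well defined.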
Cite this article
Zayyani, H., Oruji, F. & Fijalkow, I. An Adversary-Resilient Doubly Compressed Diffusion LMS Algorithm for Distributed Estimation. Circuits Syst Signal Process 41, 6182–6205 (2022). https://doi.org/10.1007/s00034-022-02072-w