Steady-state analysis of distributed incremental variable fractional tap-length LMS adaptive networks

  • Original Paper

Abstract

Recently, cooperative and distributed processing has attracted considerable attention, especially in wireless sensor networks, as a way to prolong the network's lifetime. This has led to the development of distributed adaptive filtering, in which estimation is performed in a distributed and adaptive manner. In distributed adaptive networks, not only the filter coefficients but also the length of the adaptive filter is generally unknown and must be estimated. The distributed incremental fractional tap-length (FT) algorithm is an approach for determining the adaptive filter length in a distributed scheme. In the current study, we analyze the steady-state behavior of the distributed incremental variable FT LMS algorithm and derive a mathematical expression for the steady-state tap-length at each sensor. The obtained results indicate that this algorithm overestimates the optimal tap-length. Numerical simulations are provided to confirm the theoretical analysis.

References

  1. Liu, J., Zhao, Z., Ji, J., & Miaolong, H. (2020). Research and application of wireless sensor network technology in power transmission and distribution system. Intelligent and Converged Networks, 1(2), 199–220.

  2. Zhou, Y., Yang, L., Yang, L., & Ni, M. (2019). Novel energy-efficient data gathering scheme exploiting spatial-temporal correlation for wireless sensor networks. Wireless Communications and Mobile Computing, 2019.

  3. Azarnia, G., Tinati, M. A., Sharifi, A. A., & Shiri, H. (2020). Incremental and diffusion compressive sensing strategies over distributed networks. Digital Signal Processing, 101, 102732.

  4. Saeed, M. O. B., & Zerguine, A. (2019). An incremental variable step-size LMS algorithm for adaptive networks. IEEE Transactions on Circuits and Systems II: Express Briefs, 67(10), 2264–2268.

  5. Dimple, K., Kotary, D. K., & Nanda, S. J. (2017). An incremental RLS for distributed parameter estimation of IIR systems present in computing nodes of a wireless sensor network. Procedia Computer Science, 115, 699–706.

  6. Li, L., Chambers, J. A., Lopes, C. G., & Sayed, A. H. (2009). Distributed estimation over an adaptive incremental network based on the affine projection algorithm. IEEE Transactions on Signal Processing, 58(1), 151–164.

  7. Huang, W., Li, L., Li, Q., & Yao, X. (2018). Diffusion robust variable step-size LMS algorithm over distributed networks. IEEE Access, 6, 47511–47520.

  8. Rastegarnia, A. (2019). Reduced-communication diffusion RLS for distributed estimation over multi-agent networks. IEEE Transactions on Circuits and Systems II: Express Briefs, 67(1), 177–181.

  9. Li, L., & Chambers, J. A. (2009). Distributed adaptive estimation based on the APA algorithm over diffusion networks with changing topology. In 2009 IEEE/SP 15th Workshop on Statistical Signal Processing (pp. 757–760). IEEE.

  10. Riera-Palou, F., Noras, J. M., & Cruickshank, D. G. M. (2001). Linear equalisers with dynamic and automatic length selection. Electronics Letters, 37(25), 1553–1554.

  11. Shaozhong, F., Jianhua, G., & Yong, W. Fast adaptive algorithms for updating variable length equalizer based on exponential. In 2008 4th International Conference on Wireless Communications, Networking and Mobile Computing.

  12. Rusu, C., & Cowan, C. F. N. (2001). Novel stochastic gradient adaptive algorithm with variable length. In European Conference on Circuit Theory and Design (ECCTD'01).

  13. Bilcu, R. C., Kuosmanen, P., & Egiazarian, K. (2002). A new variable length LMS algorithm: Theoretical analysis and implementations. In 9th International Conference on Electronics, Circuits and Systems (Vol. 3, pp. 1031–1034). IEEE.

  14. Bilcu, R. C., Kuosmanen, P., & Egiazarian, K. (2006). On length adaptation for the least mean square adaptive filters. Signal Processing, 86(10), 3089–3094.

  15. Yuantao, G., Tang, K., Cui, H., & Wen, D. (2003). Convergence analysis of a deficient-length LMS filter and optimal-length sequence to model exponential decay impulse response. IEEE Signal Processing Letters, 10(1), 4–7.

  16. Zhang, Y., Chambers, J. A., Sanei, S., Kendrick, P., & Cox, T. J. (2007). A new variable tap-length LMS algorithm to model an exponential decay impulse response. IEEE Signal Processing Letters, 14(4), 263–266.

  17. Shi, K., Ma, X., & Zhou, G. T. (2009). A variable step size and variable tap length LMS algorithm for impulse responses with exponential power profile. In 2009 IEEE International Conference on Acoustics, Speech and Signal Processing (pp. 3105–3108). IEEE.

  18. Mayyas, K. (2005). Performance analysis of the deficient length LMS adaptive algorithm. IEEE Transactions on Signal Processing, 53(8), 2727–2734.

  19. Yuantao, G., Tang, K., & Cui, H. (2004). LMS algorithm with gradient descent filter length. IEEE Signal Processing Letters, 11(3), 305–307.

  20. Gong, Yu., & Cowan, C. F. N. (2004). Structure adaptation of linear MMSE adaptive filters. IEE Proceedings-Vision, Image and Signal Processing, 151(4), 271–277.

  21. Gong, Yu., & Cowan, C. F. N. (2005). An LMS style variable tap-length algorithm for structure adaptation. IEEE Transactions on Signal Processing, 53(7), 2400–2407.

  22. Ali, A., Moinuddin, M., & Al-Naffouri, T. Y. (2021). The NLMS is steady-state Schur-convex. IEEE Signal Processing Letters, 28, 389–393.

  23. Lu, L., Chen, L., Zheng, Z., Yi, Yu., & Yang, X. (2020). Behavior of the LMS algorithm with hyperbolic secant cost. Journal of the Franklin Institute, 357(3), 1943–1960.

  24. Luo, L., & Xie, A. (2020). Steady-state mean-square deviation analysis of improved l0-norm-constraint LMS algorithm for sparse system identification. Signal Processing, 175, 107658.

  25. Li, L., Zhang, Y., & Chambers, J. A. (2008). Variable length adaptive filtering within incremental learning algorithms for distributed networks. In 2008 42nd Asilomar Conference on Signals, Systems and Computers (pp. 225–229). IEEE.

  26. Azarnia, G., & Tinati, M. A. (2015). Steady-state analysis of the deficient length incremental LMS adaptive networks. Circuits, Systems, and Signal Processing, 34(9), 2893–2910.

Author information

Corresponding author

Correspondence to Ghanbar Azarnia.

Ethics declarations

Conflicts of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A.

In [26], we presented the deficient-length DILMS algorithm and derived an expression for its steady-state MSD. This steady-state MSD is needed to evaluate the term \(E\{{\Vert {\ddot{\mathbf{P}}}_{k-1}\Vert }^2\}\) in (30), so the main results are summarized here. To this end, consider a set of N sensors whose goal is to estimate an \(L_{opt}\times 1\) unknown vector \(\mathbf{w}^o_{L_{opt}}\) of unknown length \(L_{opt}\) from the measurements collected at the N sensor nodes of the network. As mentioned earlier, a variable tap-length algorithm is needed to find the length of the unknown parameter. Here, however, we assume that such an algorithm is not applicable, for instance because of limited energy storage (energy consumption is a critical issue in WSNs). Instead, a conjectural length M is assumed for the unknown parameter at each sensor; that is, each sensor is equipped with an adaptive filter with M coefficients (\(M<L_{opt}\)). In this case, only algorithm (3), in which all vectors have length M, is applicable at each sensor node.
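
To make this setup concrete, the following is a minimal NumPy sketch of the deficient-length incremental LMS cycle (a real-valued stand-in for update (3)); the network size, step-sizes, noise levels, and all variable names are illustrative assumptions rather than the paper's simulation settings.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 6                             # number of sensor nodes in the incremental ring (assumed)
L_opt = 10                        # true (unknown) length of w^o (assumed)
M = 6                             # deficient filter length used at every node, M < L_opt
mu = np.full(N, 0.01)             # per-node step-sizes mu_k (assumed)
sigma_v2 = np.full(N, 1e-3)       # per-node noise variances sigma^2_{v,k} (assumed)

w_opt = rng.standard_normal(L_opt)   # unknown parameter w^o_{L_opt}
psi = np.zeros(M)                    # local estimate circulated around the ring

def incremental_cycle(psi, n_iters=2000):
    """Run n_iters cycles of the deficient-length incremental LMS update:
    each node observes a length-L_opt regressor but adapts only M taps."""
    for _ in range(n_iters):
        for k in range(N):
            u_full = rng.standard_normal(L_opt)                  # regressor at node k
            d = u_full @ w_opt + np.sqrt(sigma_v2[k]) * rng.standard_normal()
            u = u_full[:M]                                       # node uses only M taps
            e = d - u @ psi                                      # local estimation error
            psi = psi + mu[k] * e * u                            # LMS update at node k
    return psi

psi_ss = incremental_cycle(psi)
# MSD of the zero-padded estimate against w^o: modeled part plus unmodeled tail energy
msd = np.sum((psi_ss - w_opt[:M]) ** 2) + np.sum(w_opt[M:] ** 2)
print("empirical steady-state MSD (single run):", msd)
```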

We now present a steady-state analysis for this deficient-length case. To perform this analysis, we first partition the unknown parameter \(\mathbf{w}^o_{L_{opt}}\) as follows:

$$\begin{aligned} \mathbf{w}^o_{L_{opt}}=\left[ \begin{array}{c} {\dot{\mathbf{w}}}_M \\ {\ddot{\mathbf{w}}}_{L_{opt}-M} \end{array} \right] \end{aligned}$$
(59)

where \({\dot{\mathbf{w}}}_M\) is the partition of \(\mathbf{w}^o_{L_{opt}}\) that is modeled by \({\varvec{\psi }}^{(i)}_k\) at each sensor, and \({\ddot{\mathbf{w}}}_{L_{opt}-M}\) is the partition of \(\mathbf{w}^o_{L_{opt}}\) that is excluded from the estimation. This partitioning simplifies working with the variable-length vectors. Using this partitioning and the data model (11), the update equation (3) can be expressed as:

$$\begin{aligned}{\varvec{\psi }}^{(i)}_k&={\varvec{\psi }}^{(i)}_{k-1}-{\mu }_k\mathbf{u}^*_{k,i}\mathbf{u}_{L_{opt},k,i}{\overline{\varvec{\psi }}}^{(i)}_{L_{opt},k-1}\nonumber \\&\quad +{\mu }_kv_k(i)\mathbf{u}^*_{k,i} \end{aligned}$$
(60)

where the vector \({\overline{\varvec{\psi }}}^{(i)}_{L_{opt},k-1}\), of length \(L_{opt}\), denotes the difference between the weight estimate at sensor \(k-1\) and \(\mathbf{w}^o_{L_{opt}}\):

$$\begin{aligned} {\overline{\varvec{\psi }}}^{(i)}_{L_{opt},k-1}=\left[ \begin{array}{c} {\varvec{\psi }}^{(i)}_{k-1} \\ \mathbf{O}_{(L_{opt}-M)\times 1} \end{array} \right] -\left[ \begin{array}{c} {\dot{\mathbf{w}}}_M \\ {\ddot{\mathbf{w}}}_{L_{opt}-M} \end{array} \right] \end{aligned}$$
(61)

Padding the length-M vectors in (60) with \(L_{opt}-M\) zeros and subtracting the unknown parameter \(\mathbf{w}^o_{L_{opt}}\) from both sides results in:

$$\begin{aligned} {\overline{\varvec{\psi }}}^{(i)}_{L_{opt},k}={{\Lambda }}_k(i) {\overline{\varvec{\psi }}}^{(i)}_{L_{opt},k-1}+{\mu }_kv_k(i) \left[ \begin{array}{c} \mathbf{u}^*_{k,i} \\ \mathbf{O}_{(L_{opt}-M)\times 1} \end{array} \right] \ \end{aligned}$$
(62)

where

$$\begin{aligned} {{\Lambda }}_k(i)=I_{L_{opt}}-{\mu }_k\left[ \begin{array}{c} \mathbf{u}^*_{k,i} \\ \mathbf{O}_{(L_{opt}-M)\times 1} \end{array} \right] \mathbf{u}_{L_{opt},k,i} \end{aligned}$$
(63)

To derive the steady-state MSD, we first expand \({\Vert {\overline{\varvec{\psi }}}^{(i)}_{L_{opt},k}\Vert }^2\) as:

$$\begin{aligned} {\Vert {\overline{\varvec{\psi }}}^{(i)}_{L_{opt},k}\Vert }^2&={{\overline{\varvec{\psi }}}^{*}}^{(i)}_{L_{opt},k-1}{{\Lambda }}^*_k(i){{\Lambda }}_k(i){\overline{\varvec{\psi }}}^{(i)}_{L_{opt},k-1}\nonumber \\&\quad +{\mu }_kv_k(i){{\overline{\varvec{\psi }}}^{*}}^{(i)}_{L_{opt},k-1}{{\Lambda }}^*_k(i)\left[ \begin{array}{c} \mathbf{u}^*_{k,i} \\ \mathbf{O}_{(L_{opt}-M)\times 1} \end{array} \right] \nonumber \\&\quad +{\mu }_kv_k(i)\left[ \begin{array}{cc} \mathbf{u}_{k,i}&\mathbf{O}_{1\times (L_{opt}-M)} \end{array} \right] {{\Lambda }}_k(i){\overline{\varvec{\psi }}}^{(i)}_{L_{opt},k-1}\nonumber \\&\quad +{\mu }^2_kv^2_k(i){\Vert \mathbf{u}_{k,i}\Vert }^2 \end{aligned}$$
(64)

Under Assumptions 1 and 2, taking the expectation of both sides of (64) and carrying out some tedious algebra leads to:

$$\begin{aligned}&E\{{\Vert {\overline{\varvec{\psi }}}^{(i)}_{L_{opt},k}\Vert }^2\} ={\beta }_kE\{{\Vert {\overline{\varvec{\psi }}}^{(i)}_{L_{opt},k-1}\Vert }^2\}\nonumber \\&\quad +({\eta }_k-{\beta }_k){\Vert {\ddot{\mathbf{w}}}_{L_{opt}-M}\Vert }^2+{\tau }_k \end{aligned}$$
(65)

where

$$\begin{aligned} \begin{array}{c} {\beta }_k=1-2{\mu }_k{\sigma }^2_{u,k}+{\mu }^2_k{\sigma }^4_{u,k}(M+2) \\ {\eta }_k=1+{\mu }^2_k{\sigma }^4_{u,k}M \end{array} \end{aligned}$$
(66)

and

$$\begin{aligned} {\tau }_k={\mu }^2_k{\sigma }^2_{v,k}{\sigma }^2_{u,k}M \end{aligned}$$
(67)
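
As a quick numerical illustration, the constants \({\beta }_k\), \({\eta }_k\), and \({\tau }_k\) in (66) and (67) can be evaluated directly from the node statistics; the values of \({\mu }_k\), \({\sigma }^2_{u,k}\), \({\sigma }^2_{v,k}\), and M below are assumed for the example, not taken from the paper.

```python
# Evaluate beta_k, eta_k and tau_k of (66)-(67) for assumed node statistics.
mu_k, sigma_u2, sigma_v2, M = 0.01, 1.0, 1e-3, 6   # assumed mu_k, sigma^2_{u,k}, sigma^2_{v,k}, M

beta_k = 1 - 2 * mu_k * sigma_u2 + mu_k**2 * sigma_u2**2 * (M + 2)   # (66)
eta_k = 1 + mu_k**2 * sigma_u2**2 * M                                # (66)
tau_k = mu_k**2 * sigma_v2 * sigma_u2 * M                            # (67)

print(beta_k, eta_k, tau_k)   # 0.9808, 1.0006, 6e-07 for these assumed values
```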

We can now write (65) in the more compact form:

$$\begin{aligned} E\{{\Vert {\overline{\varvec{\psi }}}^{(i)}_{L_{opt},k}\Vert }^2\} ={\beta }_kE\{{\Vert {\overline{\varvec{\psi }}}^{(i)}_{L_{opt},k-1}\Vert }^2\}+f_k \end{aligned}$$
(68)

where

$$\begin{aligned} f_k=({\eta }_k-{\beta }_k){\Vert {\ddot{\mathbf{w}}}_{L_{opt}-M}\Vert }^2+{\tau }_k \end{aligned}$$
(69)

Letting \(i \rightarrow \infty \) in (68) for the steady state, and defining \(\mathbf{P}_{L_{opt},k}={\overline{\varvec{\psi }}}^{(\infty )}_{L_{opt},k}\), (68) can be rewritten as:

$$\begin{aligned} E\{{\Vert \mathbf{P}_{L_{opt},k}\Vert }^2\}={\beta }_kE\{{\Vert \mathbf{P}_{L_{opt},k-1}\Vert }^2\}+f_k \end{aligned}$$
(70)

This equation has the same structure as (47); solving the recursion in the same manner as (47) gives:

$$\begin{aligned} E\{{\Vert \mathbf{P}_{L_{opt},k-1}\Vert }^2\}={(1-\Pi _{k,1})}^{-1}s_k \end{aligned}$$
(71)

where

$$\begin{aligned} \Pi _{k,\ell }\triangleq {\beta }_{k-1}{\beta }_{k-2}\cdots {\beta }_1{\beta }_N{\beta }_{N-1}\cdots {\beta }_{k+\ell }{\beta }_{k+\ell -1},\quad \ell =1,\dots ,N \end{aligned}$$
(72)

and

$$\begin{aligned} s_k\triangleq \Pi _{k,2}f_k+\Pi _{k,3}f_{k+1}+\dots +\Pi _{k,N-1}f_{k-3}+\Pi _{k,N}f_{k-2}+f_{k-1} \end{aligned}$$
(73)
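
Combining (69) with (71)-(73), the theoretical steady-state MSD can be tabulated for every node of the ring. The following NumPy sketch does this under assumed network statistics; all numerical values, including the unmodeled tail energy \({\Vert {\ddot{\mathbf{w}}}_{L_{opt}-M}\Vert }^2\), are placeholders for illustration.

```python
import numpy as np

# Assumed network statistics (illustrative placeholders, not the paper's settings)
N = 6
mu = np.full(N, 0.01)                 # mu_k
sigma_u2 = np.full(N, 1.0)            # sigma^2_{u,k}
sigma_v2 = np.full(N, 1e-3)           # sigma^2_{v,k}
M = 6
tail_energy = 0.5                     # assumed ||w_ddot_{L_opt - M}||^2

beta = 1 - 2 * mu * sigma_u2 + mu**2 * sigma_u2**2 * (M + 2)   # (66)
eta = 1 + mu**2 * sigma_u2**2 * M                              # (66)
tau = mu**2 * sigma_v2 * sigma_u2 * M                          # (67)
f = (eta - beta) * tail_energy + tau                           # (69)

def Pi(k, ell):
    """Pi_{k,ell} of (72): product of beta over the N - ell + 1 nodes
    k-1, k-2, ..., 1, N, ..., k+ell-1 (1-indexed, cyclic order)."""
    return np.prod([beta[(k - 2 - j) % N] for j in range(N - ell + 1)])

def s(k):
    """s_k of (73)."""
    total = f[(k - 2) % N]                                     # f_{k-1}
    for ell in range(2, N + 1):
        total += Pi(k, ell) * f[(k + ell - 3) % N]             # Pi_{k,ell} * f_{k+ell-2}
    return total

# Steady-state MSD at node k-1 from (71), evaluated for every node of the ring
for k in range(1, N + 1):
    print(f"node {(k - 2) % N + 1}: E||P||^2 =", s(k) / (1.0 - Pi(k, 1)))
```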

About this article

Cite this article

Azarnia, G., Sharifi, A.A. Steady-state analysis of distributed incremental variable fractional tap-length LMS adaptive networks. Wireless Netw 27, 4603–4614 (2021). https://doi.org/10.1007/s11276-021-02754-4
