Abstract
The constrained least mean square (CLMS) algorithm is the most popular constrained adaptive filtering algorithm due to its simple structure and easy implementation. However, its convergence slows down when the input signal is colored. To address this issue, this paper first introduces the normalized subband adaptive filter (NSAF) into the constrained filtering problem and derives a constrained NSAF (CNSAF) algorithm using the Lagrange multiplier method. Benefiting from the good decorrelation capability of the NSAF, the proposed CNSAF algorithm significantly improves the convergence performance of the CLMS algorithm under colored inputs. The mean and mean-square stability of the CNSAF algorithm are then analyzed, and theoretical models characterizing its transient and steady-state mean square deviation (MSD) behaviors are derived using the Kronecker product property and the vectorization method. Furthermore, to extend the CNSAF algorithm to sparse system identification, a sparse version of the CNSAF algorithm (S-CNSAF) is derived. Finally, the validity of the derived theoretical MSD prediction models and the superiority of the proposed algorithms are confirmed by extensive computer simulations on system identification with colored inputs.
1 Introduction
Constrained adaptive filters (CAFs) possess an error-correction property that prevents the accumulation of quantization errors in digital implementations [4, 11, 12]. Benefiting from this excellent property, the CAF has received much attention in recent years [1, 17, 26], and various constrained adaptive filtering algorithms (CAFAs) have been developed for applications such as linear phase system identification, spectral analysis, interference cancellation in direct-sequence code-division multiple access (DS-CDMA), and beamforming [7, 13, 23]. A CAFA translates the application-specific requirements into a set of linear constraint equations that the filter coefficients must satisfy and incorporates them into the solution [8, 11]. Equivalently, the coefficient updates are performed in a subspace orthogonal to the subspace spanned by a constraint matrix. Thus, a CAFA can improve the robustness of the solution or obviate a training phase. In general, these constraints are deterministic and are derived from a priori knowledge of the system; examples include spreading codes in blind multiuser detection and the linear-phase property in system identification [20, 25, 27].
Existing CAFAs can be broadly classified into two main categories. The first category comprises stochastic gradient (SG) based algorithms [8, 24, 29], such as the constrained least mean square (CLMS) algorithm [7] and the constrained affine projection algorithm (CAPA) [9, 10]. The second comprises least squares (LS) based algorithms [5, 6], such as the constrained recursive least squares (CRLS) algorithm and the constrained conjugate gradient (CCG) algorithm [3]. The former class is usually simpler, more robust, more computationally efficient, and easier to implement than the latter; therefore, this paper focuses on the first class of CAFAs. Among the SG-based constrained algorithms, CLMS is the simplest and least computationally intensive, but its convergence speed is closely tied to the eigenvalue spread (or spectral dynamic range) of the autocorrelation matrix of the input signal. When the input signals are correlated, the eigenvalue spread is high and the convergence of the CLMS algorithm slows down. The popular CAFAs that can significantly accelerate the convergence of CLMS are the CAPA and its variants. The CAPA improves convergence speed through a data-reuse strategy, but the direct matrix inversion (DMI) operation in its update formula leads to high computational complexity, which is unfavorable for practical applications. For this reason, a constrained affine projection like (CAPL) algorithm was proposed [9]; by dropping the requirement that the a posteriori error vector be zero, it avoids the DMI operation and thus reduces the computational complexity.
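For concreteness, the CLMS recursion discussed above can be sketched as follows. This is a minimal NumPy sketch using the standard projection quantities of Frost's structure (C projects onto the null space of the constraints, f restores them); the toy system, constraint, and step size are illustrative assumptions.

```python
import numpy as np

def clms_step(w, u, d, mu, C, f):
    """One CLMS iteration: unconstrained LMS step, then projection back
    onto the constraint set H^T w = m via C and f."""
    e = d - u @ w                       # a priori estimation error
    return C @ (w + mu * e * u) + f     # project and correct

# Hypothetical toy setup: L = 4 taps, single constraint sum(w) = 1.
rng = np.random.default_rng(0)
L = 4
H = np.ones((L, 1))                     # constraint matrix
m = np.array([1.0])                     # response vector
P = H @ np.linalg.inv(H.T @ H)
C = np.eye(L) - P @ H.T                 # projection matrix
f = P @ m

w = f.copy()                            # feasible starting point
target = np.array([0.4, 0.3, 0.2, 0.1])  # hypothetical system (feasible)
for _ in range(2000):
    u = rng.standard_normal(L)          # white input regressor
    d = u @ target                      # noiseless desired signal
    w = clms_step(w, u, d, 0.01, C, f)
```

Because the projection is applied at every step, the constraint H^T w = m holds exactly at each iteration, which is the error-correction property mentioned above.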
Another class of SG-based algorithms suitable for processing colored input signals is the normalized subband adaptive filter (NSAF) algorithm [15, 18, 19]. It reduces the spectral dynamic range of the input signal by decomposing the signal into subband domains, thereby achieving fast convergence with colored inputs [14, 21, 28]. Moreover, its computational complexity is much lower than that of affine projection algorithms. However, to the best of our knowledge, the NSAF has not yet been used to solve the constrained filtering problem under colored inputs. In view of this, we propose the constrained normalized subband adaptive filtering algorithm in this paper, and the following works are carried out:
(1) Using a subband filter with the multiband structure to whiten the colored inputs, a novel constrained adaptive algorithm based on subband signals, called constrained NSAF (CNSAF), is derived by the Lagrange multiplier method; it converges fast with colored inputs and has low computational complexity.
(2) The statistical behavior of the proposed CNSAF algorithm, including mean and mean-square stability as well as transient and steady-state MSD performance, is analyzed, and transient and steady-state MSD prediction models for the CNSAF algorithm are derived.
(3) To efficiently identify sparse systems, an L1-norm constraint on the filter weight vector is introduced into the constrained optimization problem solved by the CNSAF algorithm, and a sparse version of the CNSAF algorithm (S-CNSAF) is obtained.
(4) Through simulation experiments on system identification, the accuracy of the theoretical MSD analysis results and the superiority of the proposed CNSAF algorithms over other existing constrained adaptive algorithms are verified.
The remainder of this paper is organized as follows. In Sect. 2, the proposed CNSAF algorithm is derived. The mean and mean-square stability and the theoretical transient and steady-state MSD performance of the CNSAF algorithm are analyzed in Sect. 3. Section 4 presents a sparse version of the CNSAF algorithm for sparse systems. In Sect. 5, computer simulations on system identification are provided. Section 6 draws conclusions.
2 The Proposed CNSAF Algorithm
Consider a linear phase system identification application, where the desired signal of the CAF is the output of an unknown linear system excited by some broadband signal, denoted as
where \({\mathbf{w}}_{0}\) is the system vector of length L to be estimated, \(\upsilon (k)\) represents the system noise with zero mean and variance \(\sigma_{\upsilon }^{2}\), k is the discrete time index, \({\varvec{u}}(k)\) is the input signal vector containing current and past input samples \([u(k), \ldots ,u(k - L + 1)]^{T}\), and \([ \cdot ]^{T}\) is the transpose operation. In most cases the input is a white noise signal, but colored inputs also arise in many applications [9, 22], such as the first-order auto-regressive (AR(1)) signal and the second-order auto-regressive (AR(2)) signal.
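For illustration, a colored AR(1) input of this kind can be generated as in the sketch below (the pole value 0.9 is an assumption for demonstration, not a parameter from the paper):

```python
import numpy as np

def ar1(n, a=0.9, rng=None):
    """Generate an AR(1) colored signal x(k) = a*x(k-1) + z(k),
    where z(k) is white Gaussian noise with unit variance."""
    rng = rng or np.random.default_rng(0)
    z = rng.standard_normal(n)
    x = np.zeros(n)
    for k in range(1, n):
        x[k] = a * x[k - 1] + z[k]
    return x

x = ar1(100_000, a=0.9)
# For a stationary AR(1), the normalized lag-1 autocorrelation tends to a.
r1 = np.dot(x[1:], x[:-1]) / np.dot(x, x)
```

The closer the pole a is to the unit circle, the larger the eigenvalue spread of the input autocorrelation matrix, which is exactly the regime where CLMS convergence degrades.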
To identify the unknown linear phase system \({\mathbf{w}}_{0}\), the filter is constrained by (2) to preserve the linear phase for each iteration [10].
where \({\varvec{H}}\) is an \(L \times M\) \((0 \le M \le L)\) constraint matrix, \({\mathbf{w}}\) is the filter weight vector, and \({\varvec{m}}\) is the response vector of length M.
To accelerate the convergence of the CLMS algorithm, the correlation of the input signal must be reduced, i.e., the spectral dynamic range (or eigenvalue spread) of the input signal must be reduced. An approach to reduce the spectral dynamic range of the signal is to decompose the signal into subband domains. Therefore, we propose a new constrained optimization problem based on the multi-band structured NSAF method shown in Fig. 1 as follows
where \(\left\| \cdot \right\|\) denotes the L2-norm. Figure 1 illustrates the principle of the NSAF method. First, the original signals \(u(k)\) and \(d(k)\) are passed through the analysis filter bank \(\left\{ {H_{i} (z),i = 1, \ldots ,N} \right\}\) to generate the N subband signal pairs \(\{ u_{i} (k),d_{i} (k)\}\). Then, the subband desired signal \(d_{i} (k)\) is critically decimated to obtain \(d_{i,D} (t)\), and feeding \(u_{i} (k)\) to the filter \({\mathbf{w}}(t)\) yields \(y_{i} (k) = {\varvec{u}}_{i}^{T} (k){\mathbf{w}}(t)\), where \({\varvec{u}}_{i} (k) = [u_{i} (k), \ldots ,u_{i} (k - L + 1)]^{T}\). Further, the decimated subband output signal of the filter is given by \(y_{i,D} (t) = y_{i} (tN) = {\varvec{u}}_{i}^{T} (t){\mathbf{w}}(t)\), where \({\varvec{u}}_{i} (t) = [u_{i} (tN), \ldots ,u_{i} (tN - L + 1)]^{T}\) is the decimated subband input signal vector. The indexes t and k stand for the decimated subband sequence and the original sequence, respectively.
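The analysis-and-decimation front end described above can be sketched as follows. The two-band Haar-like filter pair here is a toy assumption for illustration; the paper uses a cosine-modulated filter bank.

```python
import numpy as np

def analysis_decimate(u, h_bank):
    """Split u(k) into N subband signals using the analysis filters in
    h_bank (one FIR filter per row) and critically decimate each by N."""
    N = h_bank.shape[0]
    subbands = []
    for i in range(N):
        ui = np.convolve(u, h_bank[i])[:len(u)]  # filter u with H_i(z)
        subbands.append(ui[::N])                  # keep every N-th sample
    return subbands

# Hypothetical 2-band bank: lowpass [0.5, 0.5] and highpass [0.5, -0.5].
h_bank = np.array([[0.5, 0.5], [0.5, -0.5]])
u = np.arange(8, dtype=float)
u0, u1 = analysis_decimate(u, h_bank)
```

Each subband signal occupies a narrower frequency band than the original, so its spectral dynamic range, and hence the eigenvalue spread seen by the adaptive filter, is reduced.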
Based on the above defined quantities, the subband estimation error is obtained as
Using the Lagrange multiplier method, (3) becomes
where \({\varvec{\gamma}}\) is the Lagrange multiplier vector. Taking the gradient of \(J({\mathbf{w}}(t))\) with respect to \({\mathbf{w}}(t)\) yields
Utilizing the stochastic gradient descent (SGD) method, the weight vector of the CNSAF algorithm is updated by
Substituting (7) into the constraint condition in (3), the Lagrange multiplier vector \({\varvec{\gamma}}\) is derived as
Substituting (8) into (7), the weight update of the CNSAF algorithm is described as
where \({\varvec{C}} = {\mathbf{I}}_{L} - {\varvec{H}}({\varvec{H}}^{T} {\varvec{H}})^{ - 1} {\varvec{H}}^{T}\) is the projection matrix and \({\varvec{f}} = {\varvec{H}}({\varvec{H}}^{T} {\varvec{H}})^{ - 1} {\varvec{m}}\).
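The resulting CNSAF update can be sketched in NumPy as follows. This is a minimal sketch of the recursion in (9) under the standard projection quantities C and f of constrained adaptive filtering; the toy constraint, system, and step size are illustrative assumptions, and the subband regressors are stand-ins generated as white vectors rather than by an actual filter bank.

```python
import numpy as np

def cnsaf_step(w, U, dD, mu, C, f, eps=1e-8):
    """One CNSAF iteration: a normalized subband gradient step followed
    by projection onto the constraint set H^T w = m.
    U:  N x L matrix whose rows are the decimated subband inputs u_i(t).
    dD: length-N vector of decimated subband desired samples d_{i,D}(t)."""
    e = dD - U @ w                          # subband a priori errors, cf. (4)
    norms = np.sum(U * U, axis=1) + eps     # ||u_i(t)||^2, regularized
    grad = U.T @ (e / norms)                # sum_i u_i(t) e_{i,D}(t)/||u_i||^2
    return C @ (w + mu * grad) + f          # project and correct

# Hypothetical toy run: L = 6 taps, N = 2 subbands, constraint sum(w) = 1.
rng = np.random.default_rng(1)
L, N = 6, 2
H = np.ones((L, 1)); m = np.array([1.0])
P = H @ np.linalg.inv(H.T @ H)
C = np.eye(L) - P @ H.T
f = P @ m
w0 = np.full(L, 1.0 / L)                    # hypothetical feasible system
w = f.copy()
for _ in range(3000):
    U = rng.standard_normal((N, L))         # stand-in subband regressors
    dD = U @ w0                             # noiseless subband desired samples
    w = cnsaf_step(w, U, dD, 0.5, C, f)
```

The normalization by \|u_i\|^2 is what gives the NSAF-like insensitivity to the input power per subband.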
3 Performance Analysis
3.1 Optimal Solution
Letting the gradient vector \(\nabla J({\mathbf{w}}(t))\) in (6) be equal to zero, we have
After some simple calculations, the optimal solution of the constrained problem (3) is given by
where \({\varvec{p}}_{{\varvec{u}}} = E\left\{ {\sum\limits_{i = 1}^{N} {\frac{{{\varvec{u}}_{i}^{{}} (t)d_{i,D}^{{}} (t)}}{{\left\| {{\varvec{u}}_{i}^{{}} (t)} \right\|^{2} }}} } \right\}\) and \({\varvec{R}}_{{\varvec{u}}} = E\left\{ {\sum\limits_{i = 1}^{N} {\frac{{{\varvec{u}}_{i}^{{}} (t){\varvec{u}}_{i}^{T} (t)}}{{\left\| {{\varvec{u}}_{i}^{{}} (t)} \right\|^{2} }}} } \right\}\). Substituting (12) into the constraint equation \({\varvec{H}}^{T} {\mathbf{w}}_{opt} = {\varvec{m}}\) leads to
Combining (12) and (13), the optimal solution can be further expressed as
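As a numerical check, the constrained optimum implied by (12)-(14) matches the standard linearly constrained Wiener solution; the sketch below assumes that standard closed form and a generic positive-definite matrix in place of the normalized quantities \({\varvec{R}}_{{\varvec{u}}}\), \({\varvec{p}}_{{\varvec{u}}}\).

```python
import numpy as np

def constrained_wiener(R, p, H, m):
    """Closed-form solution of min over w subject to H^T w = m, in the
    standard form w = R^{-1}(p + H lam) with lam chosen to meet the
    constraint (assumed equivalent to the paper's (14))."""
    Ri = np.linalg.inv(R)
    lam = np.linalg.solve(H.T @ Ri @ H, m - H.T @ Ri @ p)
    return Ri @ (p + H @ lam)

# Synthetic positive-definite R and synthetic p, H, m for the check.
rng = np.random.default_rng(2)
L, M = 5, 2
A = rng.standard_normal((L, L))
R = A @ A.T + L * np.eye(L)
p = rng.standard_normal(L)
H = rng.standard_normal((L, M)); m = rng.standard_normal(M)
w = constrained_wiener(R, p, H, m)
g = R @ w - p   # gradient of the unconstrained quadratic at w
```

At the optimum, the constraint holds exactly and the gradient g lies entirely in the column space of H, i.e., it is cancelled by the Lagrange term, which is the stationarity condition behind (11).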
3.2 Mean Stability
To facilitate the analysis, the following two assumptions are made.
Assumption 1:
The input signal vector \({\varvec{u}}_{i}^{{}} (t)\) is zero-mean with positive-definite covariance matrix \(E\left\{ {{\varvec{u}}_{i}^{{}} (t){\varvec{u}}_{i}^{T} (t)} \right\}\).
Assumption 2:
The input signal vector \({\varvec{u}}_{i}^{{}} (t)\) and the weight error vector \({\tilde{\mathbf{w}}}(t)\) are independent of each other, and the system noise \(\upsilon (k)\) is statistically independent of any other signals.
Defining the weight error vector \({\tilde{\mathbf{w}}}(t)\) as
Subtracting \({\mathbf{w}}_{opt}\) from (9) yields
Since \({\varvec{C}}{\mathbf{w}}_{opt} - {\mathbf{w}}_{opt} + \user2{f = }{\mathbf{0}}\) [23, 25], (16) becomes
To facilitate the analysis, a new weight error vector is defined as
According to (15) and (18), the subband estimation error of (4) can be rewritten as
where \(\upsilon_{i,D} (t)\) represents the i-th decimated subband system output noise.
Inserting (19) into (17) yields
Using the relation \({\varvec{C}}^{2} = {\varvec{C}}\) [25] for (20), we have \({\varvec{C}}{\tilde{\mathbf{w}}}(t) = {\tilde{\mathbf{w}}}(t)\). Taking the expectations on both sides of (20) results in
where \({\varvec{A}}_{i} = {\varvec{C}}E\{ {\varvec{B}}_{i} (t)\} = {\varvec{CR}}_{i}\), and \({\mathbf{B}}_{i} (t) = {\varvec{u}}_{i} (t){\varvec{u}}_{i}^{T} (t)/\left\| {{\varvec{u}}_{i} (t)} \right\|^{2}\).
Let \(\lambda_{j} ({\varvec{A}}_{i} )\) denote the j-th eigenvalue of \({\varvec{A}}_{i}\). Then the mean stability of the CNSAF algorithm can be guaranteed if
holds, which is equivalent to
where \(\lambda_{\max } {(}{\varvec{A}}_{i} )\) denotes the maximum eigenvalue of \({\varvec{A}}_{i}\).
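The mean-stability bound on the step size can be evaluated numerically as in the sketch below. Since C is a symmetric projection (C² = C), the nonzero eigenvalues of C R_i equal those of the symmetric matrix C R_i C, so a symmetric eigensolver can be used. For white regressors, R_i = E{u uᵀ/‖u‖²} = I/L, so the bound is exactly 2L; that case is used here as a hypothetical check.

```python
import numpy as np

def mean_stability_bound(C, R_list):
    """Step-size upper bound from the mean-stability condition:
    mu < 2 / max_i lambda_max(C R_i). Uses eig(C R_i) = eig(C R_i C)
    for idempotent symmetric C."""
    lam = max(np.linalg.eigvalsh(C @ R @ C).max() for R in R_list)
    return 2.0 / lam

rng = np.random.default_rng(4)
L = 4
H = np.ones((L, 1))
C = np.eye(L) - H @ np.linalg.inv(H.T @ H) @ H.T

# Estimate R_i = E{u u^T / ||u||^2} from white Gaussian regressors.
U = rng.standard_normal((100_000, L))
R_hat = (U[:, :, None] * U[:, None, :] / (U**2).sum(1)[:, None, None]).mean(0)
bound = mean_stability_bound(C, [R_hat])   # should be close to 2L = 8
```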
When the CNSAF algorithm arrives at steady-state, i.e., \(\mathop {\lim }\limits_{t \to \infty } E\left\{ {{\tilde{\mathbf{w}}}(t)} \right\} \approx E\left\{ {{\tilde{\mathbf{w}}}(t + 1)} \right\}\), we can obtain from (21) that
In other words, the CNSAF algorithm is asymptotically unbiased.
3.3 Mean Square Stability
Using the relation \({\varvec{C}}{\tilde{\mathbf{w}}}(t) = {\tilde{\mathbf{w}}}(t)\), we can rewrite (20) as
where \({\mathbf{q}}_{i} (t) = {\varvec{u}}_{i} (t)/\left\| {{\varvec{u}}_{i} (t)} \right\|^{2}\). Taking the expectation of the squared Euclidean norm of (25) under Assumptions 1 and 2 results in
where
Assuming that the subband input signals are orthogonal at zero lag [18], the term (a) in (27) can be further rewritten as
Utilizing the Isserlis’ theorem [16], one achieves
where \(tr[ \cdot ]\) denotes the trace operation of matrix. Then, (27) can be written more concisely as
where \({\varvec{Q}}_{i}^{{}} \user2{ = CR}_{i}^{{}} {\varvec{C}}\). Let \(\lambda_{m} ({\varvec{Q}}_{i}^{{}} ),m = 1,2, \ldots ,L\) denote the eigenvalues of \({\varvec{Q}}_{i}^{{}}\). Then the eigenvalues of \({\varvec{\varPi}}\) can be obtained as
According to (30), the inequality
is established. Let \(\lambda_{a} ({\varvec{Q}}_{i} )\) and \(\lambda_{b} ({\varvec{Q}}_{i} )\) represent the eigenvalues of \({\varvec{Q}}_{i}\) corresponding to \(\min \left\{ {\kappa_{m} } \right\}\) and \(\max \left\{ {\kappa_{m} } \right\}\), respectively, i.e.,
Based on the above analysis, the mean-square stability condition of the CNSAF algorithm can be obtained as
that is,
Finally, the mean-square stability condition for the CNSAF algorithm regarding the step size is given by
3.4 Transient and Steady-State MSD Analysis
Define the auto-correlation matrix \({\tilde{\mathbf{W}}}(t)\) of the weight error vector \({\tilde{\mathbf{w}}}(t)\) as
According to (20) and (37), we have
Performing the vectorization on both sides of (38) and using the property \({\text{vec(}}{\varvec{X}}_{1} {\varvec{X}}_{2} {\varvec{X}}_{3} ) = ({\varvec{X}}_{3}^{{\text{T}}} \otimes {\varvec{X}}_{1} ){\text{vec(}}{\varvec{X}}_{2} {)}\), (38) becomes
where \({\varvec{T}}(t) = \sum\limits_{i = 1}^{N} {{\varvec{B}}_{i}^{{}} (t)} \otimes \sum\limits_{i = 1}^{N} {{\varvec{B}}_{i}^{{}} (t)}\), and
According to \(tr\left[ {{\varvec{X}}_{a} {\varvec{X}}_{b} } \right] = {\text{vec}}\left( {{\varvec{X}}_{a}^{T} } \right)^{T} {\text{vec}}\left( {{\varvec{X}}_{b} } \right)\), the transient mean-square deviation (MSD) at time t + 1 is given by
When the CNSAF algorithm converges to the steady state, the approximation \(\mathop {\lim }\limits_{t \to \infty } {\text{vec}}\left[ {{\tilde{\mathbf{W}}}(t)} \right] \approx {\text{vec}}\left[ {{\tilde{\mathbf{W}}}(t + 1)} \right]\) holds. Using the unbiasedness (24), the steady-state MSD of the CNSAF algorithm can be obtained from (39) as
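The two matrix identities invoked in this derivation, the Kronecker vectorization property and the trace-vectorization property, can be verified numerically with small random matrices (vec stacks columns, i.e., column-major order):

```python
import numpy as np

rng = np.random.default_rng(5)
X1, X2, X3 = (rng.standard_normal((3, 3)) for _ in range(3))

def vec(X):
    """Column-stacking vectorization, the convention behind vec()."""
    return X.flatten(order="F")

# Identity 1: vec(X1 X2 X3) = (X3^T kron X1) vec(X2)
lhs1 = vec(X1 @ X2 @ X3)
rhs1 = np.kron(X3.T, X1) @ vec(X2)

# Identity 2: tr[Xa Xb] = vec(Xa^T)^T vec(Xb)
Xa, Xb = X1, X2
lhs2 = np.trace(Xa @ Xb)
rhs2 = vec(Xa.T) @ vec(Xb)
```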
3.5 Computational Complexity
Table 1 summarizes the computational complexities of the CLMS, CAPA, CAPL, and the proposed CNSAF algorithms in terms of multiplications, additions, and DMI operations, where \(L_{d}\) is the length of the analysis filter bank and K is the projection order of the CAPA and CAPL algorithms. It can be observed from Table 1 that the CAPL algorithm has no DMI operation. In addition, the computational complexity of the proposed CNSAF algorithm is much lower than those of the CAPA and CAPL algorithms and close to that of the CLMS algorithm.
4 The Sparse Version of the CNSAF Algorithm
Considering that the system to be estimated may be sparse, the L1 norm of the filter weight vector is introduced as an additional constraint in the constrained optimization problem (3), which yields
where \(\eta\) is the prescribed L1 norm of the coefficient vector \({\mathbf{w}}\), \(\left\| {\mathbf{w}} \right\|_{1} = {\text{sign}}^{T} \left[ {\mathbf{w}} \right]{\mathbf{w}}\), and \({\text{sign}}\left[ w \right] \triangleq \frac{w}{\left| w \right|}\) is applied element-wise.
Using the Lagrange multiplier method, the constrained minimization problem (43) can be transformed into
Calculating the stochastic gradient of the cost function \(J({\mathbf{w}})\) with respect to \({\mathbf{w}}\) yields
Using the SGD method, the weight update of the S-CNSAF algorithm is obtained as
According to the relation \({\varvec{H}}^{T} {\mathbf{w}}(t + 1) = {\varvec{m}}\), the Lagrange multiplier vector \({\varvec{\gamma}}_{1} (t)\) is obtained by pre-multiplying (46) by \({\varvec{H}}^{T}\) as
Then (46) is pre-multiplied by \({\text{sign}}^{T} \left[ {{\mathbf{w}}(t)} \right]\) to get
where \(\eta = {\text{sign}}^{T} \left[ {{\mathbf{w}}(t)} \right]{\mathbf{w}}(t + 1)\) and \(\eta (t) = {\text{sign}}^{T} \left[ {{\mathbf{w}}(t)} \right]{\mathbf{w}}(t)\) [2]. After separating \(\gamma_{2} (t)\) from (48), one gets
where \(e_{{L_{1} }} (t) = \eta - \eta (t)\). By solving the system of equations stated by (47) and (49), the Lagrange multipliers \({\varvec{\gamma}}_{1} (t)\) and \(\gamma_{2} (t)\) can be derived [2]. The sparse version of the CNSAF algorithm is finally obtained, i.e.,
where \({\varvec{f}}_{{L_{1} }} (t) = e_{{L_{1} }} (t)\,{\varvec{C}}{\text{sign}}\left[ {{\mathbf{w}}(t)} \right]/\left\| {{\varvec{C}}{\text{sign}}\left[ {{\mathbf{w}}(t)} \right]} \right\|_{2}^{2}\), and \(\overline{\user2{C}} = \left[ {{\mathbf{I}}_{L} - {\varvec{C}}{\text{sign}}\left[ {{\mathbf{w}}(t)} \right]{\text{sign}}^{T} \left[ {{\mathbf{w}}(t)} \right]/\left\| {{\varvec{C}}{\text{sign}}\left[ {{\mathbf{w}}(t)} \right]} \right\|_{2}^{2} } \right]{\varvec{C}}\).
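One way to realize an update of this form is to project the normalized subband gradient step jointly onto the two constraint sets, by stacking \({\text{sign}}[{\mathbf{w}}(t)]\) next to \({\varvec{H}}\) as an augmented constraint matrix. The sketch below enforces both \({\varvec{H}}^{T}{\mathbf{w}} = {\varvec{m}}\) and \({\text{sign}}^{T}[{\mathbf{w}}(t)]{\mathbf{w}} = \eta\) exactly at each step, which is the effect the closed-form expressions above achieve; it is a sketch of the idea, not necessarily term-for-term identical to the paper's recursion (50).

```python
import numpy as np

def scnsaf_step(w, U, dD, mu, H, m, eta, eps=1e-8):
    """One S-CNSAF-style iteration via an augmented constraint:
    project so that H^T w = m and sign^T[w(t)] w = eta both hold."""
    s = np.sign(w)
    Ha = np.column_stack([H, s])            # augmented constraint matrix
    ma = np.append(m, eta)                  # augmented response vector
    P = Ha @ np.linalg.inv(Ha.T @ Ha)
    Ca = np.eye(len(w)) - P @ Ha.T          # joint projection matrix
    fa = P @ ma
    e = dD - U @ w                          # subband a priori errors
    norms = np.sum(U * U, axis=1) + eps
    return Ca @ (w + mu * (U.T @ (e / norms))) + fa

# Hypothetical toy step: L = 6 taps, N = 2 subbands, sum(w) = 1, eta = 1.6.
rng = np.random.default_rng(6)
L, N = 6, 2
H = np.ones((L, 1)); m = np.array([1.0])
w = np.array([0.5, 0.5, 0.3, -0.1, -0.1, -0.1])  # mixed signs (feasible)
eta = 1.6                                        # target L1 norm
s_before = np.sign(w)
U = rng.standard_normal((N, L))                  # stand-in subband regressors
w_new = scnsaf_step(w, U, U @ w, 0.5, H, m, eta)
```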
5 Simulation Results
In this section, the superiority of the proposed CNSAF algorithm over its competitors for colored inputs and the accuracy of the theoretical MSD analysis results are demonstrated via extensive computer simulations. The system output noise is zero-mean WGN with variance \(\sigma_{\upsilon }^{2} = 0.001\) in all simulations. The cosine-modulated analysis filter bank is utilized for the CNSAF algorithm. The MSD, defined as \(10\log_{10} E\left[ {\left\| {\tilde{\user2{w}}(t)} \right\|^{2} } \right]\) in dB, is used to measure performance. All simulation results are obtained by averaging over 100 independent runs.
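The MSD measure used throughout the simulations can be computed as in this small sketch (the helper name is ours):

```python
import numpy as np

def msd_db(w_err_runs):
    """MSD in dB: 10*log10 of E[||w_tilde||^2], with the expectation
    estimated by averaging squared error norms over independent runs."""
    errs = np.asarray(w_err_runs)
    return 10 * np.log10(np.mean(np.sum(errs**2, axis=-1)))

# Two runs with weight-error vectors [0.1, 0] and [0, 0.1]:
# E[||w_tilde||^2] = 0.01, so the MSD is -20 dB.
val = msd_db([[0.1, 0.0], [0.0, 0.1]])
```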
5.1 System Identification
In this subsection, the convergence performance of the CLMS, CAPA, CAPL and CNSAF algorithms is evaluated in the context of system identification. The system parameter vector \({\varvec{w}}_{0}\) of length L = 7 and the constraint matrix \({\varvec{H}}\) of size \(7 \times 3\) are randomly generated, where \({\varvec{w}}_{0}\) has unit energy, \({\varvec{H}}\) is full-rank, and the constraint vector is given by \(\user2{m = H}^{T} {\varvec{w}}_{0}\). We consider two different types of colored input signals: an AR(1) signal, i.e., \(x(k) = 0.99x(k - 1) + z(k)\), denoted simply as \(AR1(1, - 0.99)\), and an AR(2) signal, i.e., \(x(k) = - 1.9x(k - 1) - 0.95x(k - 2) + z(k)\), denoted simply as \(AR2(1,1.9,0.95)\), where \(z(k)\) is white Gaussian noise with unit variance. For a fair comparison, the step sizes of all algorithms are chosen so that their steady-state MSDs are the same. As shown in Fig. 2a and b, the convergence speed of the CLMS algorithm is severely degraded under colored input signals. Although the CAPL algorithm converges more slowly than the CAPA, it has lower computational complexity. From Fig. 2a, it can be seen that CNSAF converges in about 500 iterations, while the CAPA takes about 1000 iterations. In Fig. 2b, CNSAF is faster than the CAPA by about 200 iterations. The proposed CNSAF algorithm converges fastest because of the better whitening ability of the NSAF on colored input signals.
5.2 Linear Phase System Identification
In this subsection, we consider a linear phase system identification problem. To identify the unknown linear phase system, the filter needs to be constrained to preserve the linear phase at each iteration. To this end, the constraint matrix is chosen as \({\varvec{H}} = [{\varvec{I}}_{(L - 1)/2} ,{\mathbf{0}}, - {\varvec{V}}_{(L - 1)/2} ]^{{\text{T}}}\), where \({\varvec{V}}_{(L - 1)/2}\) denotes the reversal of the identity matrix, and the response vector of length \(M = (L - 1)/2\) is \({\varvec{m}} = {\mathbf{0}}\). The system parameter vector \({\varvec{w}}_{0}\) of length L has linear phase and unit energy and is generated by the Matlab command \({\varvec{w}}_{0} = fir1(L - 1,\omega_{\alpha } )/norm\left[ {fir1(L - 1,\omega_{\alpha } )} \right]\), where \(\omega_{\alpha } = 0.5\) is the cutoff frequency and \(L = 9\). The two AR signals used in Sect. 5.1 are employed in the simulation. The simulation results in Fig. 3 are similar to those in Fig. 2: the convergence performance of the CLMS algorithm deteriorates severely, the CAPA converges faster than the CAPL, and the CNSAF algorithm converges fastest. In the AR(1) experiment of Fig. 3, CNSAF is faster than the CAPA by about 500 iterations, and in the AR(2) experiment by about 100 iterations. The simulation results in both Figs. 2 and 3 confirm the superiority of the CNSAF algorithm.
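The linear-phase constraint matrix described above can be built as in this sketch (the helper name is an assumption); the constraint \({\varvec{H}}^{T}{\mathbf{w}} = {\mathbf{0}}\) forces the coefficient symmetry \(w_{j} = w_{L-1-j}\) that characterizes a linear-phase FIR filter.

```python
import numpy as np

def linear_phase_H(L):
    """Constraint matrix H = [I, 0, -V]^T of size L x (L-1)/2, whose
    constraint H^T w = 0 enforces the symmetry w[j] = w[L-1-j]."""
    M = (L - 1) // 2
    I = np.eye(M)
    V = np.fliplr(I)                      # reversal (exchange) matrix
    return np.vstack([I, np.zeros((1, M)), -V])

H = linear_phase_H(9)
# A symmetric (linear-phase) vector satisfies H^T w = 0 exactly.
w_sym = np.array([1, 2, 3, 4, 9, 4, 3, 2, 1], dtype=float)
```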
5.3 Sparse System Identification
In this simulation, the performance of the proposed S-CNSAF algorithm is evaluated via sparse system identification. Consider a sparse system with sparsity 15/512, as shown in Fig. 4. The constraint matrix is of size \(512 \times 40\), with elements drawn from a white Gaussian distribution with zero mean and unit variance, and the response vector is obtained from \({\varvec{m}} = {\varvec{H}}^{T} {\mathbf{w}}_{0}\). The input signals are the same as in Sect. 5.1. From Fig. 5a, it can be seen that the S-CNSAF algorithm has the fastest convergence speed, saving about 2000 iterations compared to the CAPA for the AR(1) input at the same steady-state estimation accuracy (about −34.5 dB). With the AR(2) input (Fig. 5b), the S-CNSAF and CAPA algorithms converge at almost the same speed, reaching a steady-state estimation accuracy of about −30 dB at around the 10000th iteration. In conclusion, the CAPA performs better than the CNSAF in sparse system identification, while the proposed S-CNSAF algorithm achieves even better convergence performance.
5.4 Validation of the Theoretical Analysis Results
In this subsection, the accuracy of the theoretical transient and steady-state MSD analysis results (40) and (41) is verified. The system parameter vector \({\varvec{w}}_{0}\) is randomly generated from zero-mean WGN with variance 0.1. The elements of the \(L \times M\) constraint matrix \({\varvec{H}}\) follow the standard normal distribution with M = 3. The response vector is \({\varvec{m}} = {\varvec{H}}^{T} {\mathbf{w}}_{0}\).
1) Transient MSD performance
In this simulation, \(AR1(1, - 0.8)\) is used as the input signal and the filter order is L = 16. The effect of the step size and the number of subbands on the transient MSD convergence behavior of the CNSAF algorithm is investigated, and the results are presented in Fig. 6. It can be seen that the larger the step size, the faster the CNSAF algorithm converges and the higher the steady-state MSD, and vice versa. When the number of subbands is less than 4, it mainly affects the convergence speed of the algorithm and has little effect on the steady-state accuracy; when it is greater than 4, it mainly affects the steady-state estimation accuracy of the CNSAF algorithm. In addition, the theoretical and simulated MSD curves are in good agreement, which demonstrates the accuracy of the theoretical transient analysis.
2) Steady-state MSD
In this subsection, the effect of the step size and the number of subbands on the steady-state MSD of the CNSAF algorithm is investigated. The filter order is L = 48 and \(AR1(1, - 0.9)\) is used as the input signal. As shown in Fig. 7, the theoretical and simulated steady-state MSD values are highly coincident, and the accuracy of the theoretical steady-state MSD of the CNSAF algorithm is verified.
In summary, the theoretical analysis in Sect. 3 provides valid and accurate theoretical predictive models for the transient and steady-state MSD statistical behavior of the CNSAF algorithm.
6 Conclusion
In this paper, a novel constrained adaptive filtering algorithm named CNSAF was proposed, which effectively overcomes the slow convergence of the CLMS algorithm for colored inputs while maintaining a computational complexity close to that of the CLMS algorithm. We then analyzed the mean and mean-square stability and the theoretical transient and steady-state MSD behaviors of the CNSAF algorithm and derived the corresponding theoretical MSD prediction models. To effectively identify sparse systems, a sparse version of the CNSAF algorithm (S-CNSAF) was further derived. Computer simulations on system identification show that the proposed CNSAF and S-CNSAF algorithms outperform other competing algorithms under colored input signals, and that the obtained theoretical MSD prediction models accurately predict the MSD statistical behavior of the CNSAF algorithm.
Data availability
The data that support the findings of this study are available from the corresponding author on request.
References
O.M. Abdelrhman, Y. Dou, S. Li, Performance analysis of the standard constrained maximum versoria criterion based adaptive algorithm. IEEE Signal Process. Lett. 30, 125–129 (2023)
J.A. Apolinario, M.L.R. de Campos, C.P. Bernal, The constrained conjugate-gradient algorithm. IEEE Signal Process. Lett. 7(12), 351–354 (2000)
J.A. Apolinario, S. Werner, P.S.R. Diniz, Constrained normalized adaptive filters for CDMA mobile communications. Proc. Euro. Signal Process. Conf. 4, 2053–2056 (1998)
R. Arablouei, K. Dogancay, Linearly-constrained recursive total least-squares algorithm. IEEE Signal Process. Lett. 19(12), 821–824 (2012)
R. Arablouei, K. Dogancay, Performance analysis of linear-equality-constrained least squares estimation. IEEE Trans. Signal Process. 63(14), 3802–3809 (2015)
R. Arablouei, K. Dogancay, On the mean-square performance of the constrained LMS algorithm. Signal Process. 117, 192–197 (2015)
S.S. Bhattacharjee, M.A. Shaikh, K. Kumar, N.Y. George, Robust constrained generalized correntropy and maximum versoria criterion adaptive filters. IEEE Trans. Circuits Syst. II. 68(8), 3002–3006 (2021)
M.Z.A. Bhotto, A. Antoniou, New constrained affine-projection adaptive-filtering algorithm, in 2013 IEEE International Symposium on Circuits and Systems (ISCAS), (2013).
J.F. de Andrade, M.L.R. de Campos, J.A. Apolinário, An L1-constrained normalized LMS algorithm and its application to thinned adaptive antenna arrays, in Proceedings of the 2013 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2013) (Vancouver, Canada, 2013), pp. 3806–3810.
M.L.R. de Campos, J.A. Apolinario, The constrained affine projection algorithm: development and convergence issues, in Proceedings of the IEEE Conference on Signal Processing, Communication, Circuits, and Systems (2000)
M.L.R. de Campos, S. Werner, J.A. Apolinario, Constrained Adaptive Filters (Springer, Berlin, 2004)
O. Frost, An algorithm for linearly constrained adaptive array processing. Proc. IEEE 60(8), 926–935 (1972)
V.C. Gogineni, S. Mula, Logarithmic cost based constrained adaptive filtering algorithms for sensor array beamforming. IEEE Sens. J. 18(14), 5897–5905 (2018)
Z. Habibi, H. Zayyani, M.S.E. Abadi, A robust subband adaptive filter algorithm for sparse and block-sparse systems identification. J. of Syst. Eng. Electron. 32(2), 487–497 (2021)
J. H. Husøy, A simplified normalized subband adaptive filter (NSAF) with NLMS-like complexity, in 2022 International Conference on Applied Electronics (AE), Pilsen, Czech Republic (2022), pp. 1–5.
L. Isserlis, On a formula for the product-moment coefficient of any order of a normal frequency distribution in any number of variables. Biometrika 12(1/2), 134–139 (1918)
Y. Ji, J. Ni, Constrained least total lncosh algorithm and its sparsity-induced version. Signal Process. 210, 109098 (2023)
K.-A. Lee and W.-S. Gan, On delayless architecture for the normalized subband adaptive filter, in Proceedings of the IEEE International Conference. Multimedia Expo (2007), pp. 1595–1598.
K.-A. Lee, W.-S. Gan, S.M. Kuo, Subband Adaptive Filtering: Theory and Implementation (Wiley, Hoboken, 2009)
T. Liang, Y. Li, Y. Xia, Recursive constrained adaptive algorithm under q-Rényi kernel function. IEEE Trans. Circuits Syst. II Exp. Briefs 68(6), 2227–2231 (2020)
D. Liu, H. Zhao, Statistics behavior of individual-weighting-factors SSAF algorithm under errors-in-variables model. IEEE Signal Process. Lett. 30, 319–323 (2023)
S. Lv, H. Zhao, W. Xu, Robust widely linear affine projection M-estimate adaptive algorithm: performance analysis and application. IEEE Trans. Signal Process. 71, 3623–3636 (2023)
S. Peng, B. Chen, L. Sun, W. Ser, Z. Lin, Constrained maximum correntropy adaptive filtering. Signal Process. 140, 116–126 (2017)
G. Qian, F. He, S. Wang, H.C.I. Herbert, Robust constrained maximum total correntropy algorithm. Signal Process. 181(8), 107903 (2021)
Z. Wang, H. Zhao, X. Zeng, Constrained least mean M-estimation adaptive filtering algorithm. IEEE Trans. Circuits Syst. II 68(4), 1507–1511 (2021)
W. Xu, H. Zhao, L. Zhou, Modified Huber M-estimate function-based distributed constrained adaptive filtering algorithm over sensor network. IEEE Sens. J. 22(20), 19567–19582 (2022)
H. Zayyani, Continuous mixed p-norm adaptive algorithm for system identification. IEEE Signal Process. Lett. 21(9), 1108–1110 (2014)
S. Zhang, W.X. Zheng, Mean-square analysis of multi-sampled multiband-structured subband filtering algorithm. IEEE Trans. Circuits Syst. I. 66(3), 1051–1062 (2019)
H. Zhao, Z. Cao, Robust generalized maximum Blake–Zisserman total correntropy adaptive filter for generalized Gaussian noise and noisy input. IEEE Trans. Syst. Man Cybern. Syst. 53(11), 6757–6765 (2023)
Acknowledgements
This work was supported by National Natural Science Foundation of China (grant: 62171388, 61871461, 61571374), Zhejiang Provincial Basic Public Welfare Research Program (Grant No. LGF21H180010).
Funding
Open access funding provided by Aalborg University.
Ethics declarations
Conflict of interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Xu, W., Zhao, H. & Lv, S. Constrained Normalized Subband Adaptive Filter Algorithm and Its Performance Analysis. Circuits Syst Signal Process (2024). https://doi.org/10.1007/s00034-024-02685-3