# Sequential convex combinations of multiple adaptive lattice filters in cognitive radio channel identification

## Abstract

Sequential convex combinations of multiple adaptive lattice filters using different exponential weighting factors in a cognitive radio (CR) channel identification framework have been considered in this presentation. First, the sequential processing multichannel lattice stages (SPMLSs) are modified so as to be used in the filter combination task. Then, two different combination schemes, i.e., the regular combination of multiple lattice filters (R-CMLF) and the decoupled combination of multiple lattice filters (D-CMLF), that utilize modified SPMLSs as the filter structure have been proposed. A modified Gram-Schmidt orthogonalization of the multiple channels of data, which is constituted in the multiple filter combination task, is accomplished. A highly modular, regular, and reconfigurable filter structure, which is suitable for cognitive radios, is achieved with the combination processing implemented in an order-recursive fashion. The mean square deviation (MSD) performances of the schemes under stationary and nonstationary conditions have been presented and compared with the performances of the multiple combination of least mean square (M-CLMS) and decoupled combination of least mean square (D-CLMS) schemes as well as those of the component filters. It has also been shown that the fault tolerances of the proposed schemes are better than those of the component filters due to the redundancy introduced by the combination processing, and that the proposed schemes bring together desired adaptive filter features, such as fast convergence and low steady-state MSD levels, which do not normally coexist.

## Keywords

Cognitive radio, CR, 5G, Combining filters, Channel identification, Lattice filters, Sequential processing

## 1 Introduction

Cognitive radio (CR) and 5G are two recent developments in the design of next-generation wireless communication systems. CR is built on a software radio, functions as an intelligent system that is aware of its environment and uses the methodology of understanding-by-building to learn from the environment, and adapts to statistical variations in the input stimuli in order to establish reliable communication by efficient utilization of the radio spectrum [1, 2, 3, 4].

A typical CR cycle includes spectrum sensing, analysis, reasoning, and adaptation to new operating parameter steps [5]. CR can detect the availability of a portion of frequency band through spectrum sensing and analysis steps [6]. During the reasoning step, it determines the optimum operating parameters, so that no harmful interference to other users of the spectrum is generated due to its transmission. In the adaptation step, the radio switches to transmission and reception mode using its reconfigurability and reprogrammability property [7, 8], and tunes its operating parameters according to its best response strategy.

The concept of software radio as the backbone of CR on the other hand relies on the development of DSP technology that is flexible, reconfigurable, and reprogrammable by software to adapt to an environment where there are multiple services, standards, and frequency bands [8, 9, 10, 11]. Correspondingly, the infrastructure in a software radio system is generally required to use reconfigurable VLSI hardware components such as DSP chip sets [12], FPGAs [13], embedded processors [14], and even general purpose processors [15].

An emerging requirement for CRs is location and environment awareness that involves modeling the capabilities of human beings and bats for realization of advanced and autonomous location and environment awareness features [16, 17]. Adaptive positioning, determining the coordinates of a cognitive radio in space, is a step towards realization of accurate location awareness in cognitive radios [18]. The author has recently proposed a receiver (equalizer) architecture for use in cognitive MIMO-OFDM radios that performs joint channel estimation and data detection, addresses the receiver complexity problems, and contributes to the flexibility, reconfigurability, and reprogrammability of receiver [19]. It was also shown in [20, 21, 22] that this receiver architecture can be configured for spectrum sensing as well as adaptive positioning function of cognitive radio virtually at no cost.

In parallel with the developments in CR, 5G has been introduced connoting features such as increased data rates, spectral efficiency, and low latency through extreme densification, massive MIMO, device-centric architectures, millimeter wave, smarter devices, native support for machine-to-machine (M2M) communication, and interference management [23, 24]. As the future mobile broadband will be largely driven by ultra-high-definition video and as the things around us will be always connected, it is envisioned that the new era of communication will be dominated by the need for more capacity as well as spectrum, which will result in the integration of cognitive radio concepts in 5G networks [25, 26].

Another important concept that can find application in CRs is related to combining of adaptive filters [27]. When a priori knowledge about the filtering scenario is limited or imprecise, as in a typical cognitive radio operational cycle [1], implementing the most adequate filter structure and adjusting its parameters becomes a difficult task, and inaccurate choices may result in poor performance. An intelligent way to overcome this difficulty is to rely on combining of adaptive filters, in an attempt to improve their properties in terms of convergence, tracking ability, steady-state misadjustment, robustness, or computational cost. Combining adaptive filters can also improve reliability by introducing redundant processing elements, i.e., by stepping up the natural fault tolerance inherent in adaptive filters. Note that the notion of improving reliability by introducing redundancy is in line with the recent developments in the field of reliable control, particularly, with the application of a system augmentation approach that reformulates the original system into a descriptor piecewise affine system and then takes advantage of the redundancy of this descriptor system formulation [28, 29], and that the concept of adaptivity itself can be considered as a form of intelligence built into a filtering mechanism.

It is possible to combine adaptive filters implementing different tasks or using different filter operating parameters, structures, and learning algorithms [30]. The recent examples of adaptive filter combination tasks include the combination of adaptive filters from different families such as one gradient and one Hessian based in [31], the adaptive combination of proportionate filters for sparse echo cancelation in [32], the adaptive combination of subband adaptive filters for acoustic echo cancelation in [33], the convex combination of *H*_{2} and *H*_{∞} filters for space-time adaptive equalization in [34], the online tracking of the changes in the nonlinearity within a signal by using a collaborative adaptive signal processing approach based on a combination (hybrid) filter in [35], the adaptive combination of Volterra kernels and its application to nonlinear echo cancelation in [36], the convex combination of nonlinear adaptive filters for active noise control in [37], the combination of adaptive filters for relative navigation in [38], finite impulse response (FIR)-infinite impulse response (IIR) adaptive hybrid combination in [39], the affine combination of two adaptive sparse filters for estimating large-scale multiple-input multiple-output (MIMO) channels in [40], the combinations of multiple kernel adaptive filters in [41], the low-complexity approximation to the Kalman filter using convex combinations of adaptive filters from different families in [42], and the proposition of a family of combined-step-size proportionate filters in [43].

Two possible strategies for combining filters are convex and affine combinations: in a convex combination, the mixing coefficients are restricted to be nonnegative and to sum up to one, whereas in an affine combination this condition is relaxed, allowing a mixing coefficient to be negative [44, 45, 46]. In fact, the affine combination scheme can be interpreted as a generalization of the convex combination since the mixing coefficient is not restricted to the interval [0,1]. Even though the affine combination scheme allows for smaller error levels in theory, it suffers from larger gradient noise in some situations [47]. The adaptation rule for the mixing coefficient in affine combinations is simpler than those in convex ones, but the correct adjustment of the step size for updating the mixing coefficient depends on some characteristics of the filtering scenario. Accordingly, the desired universal behavior of the affine combination, in which the combined estimate is at least as good as the best of the component filters in steady state, cannot always be ensured, whereas convex combinations have a built-in mechanism to attain such universality [47].
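As an illustration of the convex strategy, the following Python sketch combines two LMS component filters with different step sizes through a sigmoid-parameterized mixing coefficient, in the spirit of the schemes in [48, 49]. The channel, signal lengths, step sizes, and clipping range are illustrative assumptions, not values from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative scenario: identify a length-4 channel from noisy observations.
N, T = 4, 5000
w_true = rng.standard_normal(N)
X = rng.standard_normal((T, N))                 # regressor rows x(i)^T
d = X @ w_true + 0.01 * rng.standard_normal(T)  # desired (channel output)

w1 = np.zeros(N)   # fast component filter (large step size)
w2 = np.zeros(N)   # slow component filter (small step size)
mu1, mu2, mu_a = 0.08, 0.008, 10.0
a = 0.0            # auxiliary parameter; lam = sigmoid(a) stays in (0, 1)

for i in range(T):
    u = X[i]
    y1, y2 = w1 @ u, w2 @ u
    lam = 1.0 / (1.0 + np.exp(-a))
    y = lam * y1 + (1.0 - lam) * y2             # convex combination output
    e, e1, e2 = d[i] - y, d[i] - y1, d[i] - y2
    w1 += mu1 * e1 * u                          # components adapt independently
    w2 += mu2 * e2 * u
    a += mu_a * e * (y1 - y2) * lam * (1.0 - lam)  # gradient step on a
    a = float(np.clip(a, -4.0, 4.0))            # keep the sigmoid from saturating

w_comb = lam * w1 + (1.0 - lam) * w2
msd = float(np.sum((w_comb - w_true) ** 2))     # mean square deviation
```

The mixing coefficient is guaranteed to stay in (0, 1) by construction, which is the built-in universality mechanism mentioned above.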

Previously, plant identification via adaptive convex combination of least mean square (LMS) transversal filters with different step sizes and new algorithms for improved adaptive convex combination of least mean square (LMS) transversal filters were introduced in [48, 49] respectively. In this presentation, the objective is to address the implementation issues related to combining adaptive filters by developing modular, order-recursive, and reconfigurable combination schemes. In order to capture the statistical variations in the CR environment, the combination of multiple lattice filters with different exponential weighting factors has been considered so as to bring together the convergence properties of fast filters that have small exponential weighting factors and steady-state MSD properties of slow filters that have large exponential weighting factors. Accordingly, the author envisions multiple adaptive lattice filters as channels of sequential processing multichannel lattice stages (SPMLSs) [19, 20, 21, 22, 50] and proposes to sequentially combine these multiple lattice filters in a CR channel identification task. In view of the aforementioned issues concerning convex vs. affine combinations, the focus is on convex combinations of multiple adaptive lattice filters.

As the first contribution, the regular combination of multiple transversal filters in [49] is adapted to multiple lattice filters, and then, as the second contribution, the decoupled combination of two transversal filters in [49] is extended to multiple filters and tailored to multiple lattice filters by developing order-recursive algorithms for combination processing and also by making use of orthogonalized data and multichannel lattice filter parameters. To the best of the author’s knowledge, neither scheme exists in the literature. Note that a complete modified Gram-Schmidt orthogonalization of the multichannel input data, which avoids matrix inversions and enables scalar-only operations, is attained, and this feature is considered critical, as matrix inversion is a major bottleneck in the design of embedded receiver architectures [51]. Additionally, the computational complexity as well as the performances of the proposed lattice filter combination schemes in stationary and nonstationary channel identification scenarios are investigated and compared to those of the multiple combination of least mean square (M-CLMS) and decoupled combination of least mean square (D-CLMS) schemes in [49].

The organization of this paper is as follows. Section 2 is about the problem formulation in which the channel model and multiple channel identification are introduced. In Section 3, the original SPMLS is initially introduced, and then, the proposed modifications in the SPMLS are discussed. Subsequently, the regular combination of multiple lattice filters (R-CMLF) and decoupled combination of multiple lattice filters (D-CMLF) schemes are presented in Sections 4 and 5 respectively. The experimental results are accounted for in Section 6, and finally, Section 7 is concerned with conclusions. (∙)^{∗} represents the complex conjugate of (∙). (∙)^{T} and (∙)^{H} stand for the transpose and the Hermitian transpose of (∙) respectively. The variables *i* and *n* are the time indexes related to data and coefficients respectively, *m* is the index for the number of combining filters, and finally, *ℓ* stands for the stage number of lattice filters.

## 2 Problem formulation

### 2.1 Channel model

The channel output signal, *y*(*i*), in this model is given by:

\( y(i) = \mathbf {w}^{H}(n) \ \mathbf {x}(i) + u(i) \)

where **x**(*i*) = [ *x*(*i*),*x*(*i*−1),…,*x*(*i*−*N*+1)]^{T} refers to the input signal vector. *u*(*i*) is the channel noise signal at time instant *i* and is a realization of white, independently and identically distributed (i.i.d.) Gaussian random process with zero mean and constant variance \(\sigma _{u}^{2}\) and is also independent of **x**(*i*).

**w**(*n*) = [ *w*_{0}(*n*),*w*_{1}(*n*),…,*w*_{N−1}(*n*)]^{T} is the channel coefficient vector defined over the entire observation interval 1≤*i*≤*n*, and *N* is the channel length. The channel coefficient vector in the model is assumed to change in accordance with the first-order Markov process in [54]:

\( \mathbf {w}(n) = a \ \mathbf {w}(n-1) + \mathbf {q}(n) \)

where **q**(*n*) represents an i.i.d. Gaussian-distributed zero-mean random vector with diagonal covariance matrix **Q**(*n*)=*E*[**q**(*n*)**q**^{H}(*n*)]. Herein, *E*[∙] is defined as the statistical expectation operator. The initial values of the coefficient vector, **w**(−1), are also assumed Gaussian distributed with zero mean and variance \(\sigma _{w}^{2}\) and independent of **q**(*n*), *u*(*i*), and **x**(*i*). *a* is a constant close to 1.
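Under these assumptions, the nonstationary channel can be simulated directly. The Python sketch below generates a coefficient trajectory from the first-order Markov recursion **w**(*n*) = *a* **w**(*n*−1) + **q**(*n*); the channel length, value of *a*, and process noise level are illustrative choices, not values from this paper.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 4            # channel length (illustrative)
a = 0.998        # Markov constant close to 1
sigma_q = 0.01   # standard deviation of each entry of q(n)

w = rng.standard_normal(N)                # Gaussian zero-mean initial coefficients
trajectory = [w.copy()]
for n in range(1000):
    q = sigma_q * rng.standard_normal(N)  # i.i.d. Gaussian, diagonal Q(n)
    w = a * w + q                         # first-order Markov recursion
    trajectory.append(w.copy())
trajectory = np.asarray(trajectory)       # shape (1001, N)
```

Because *a* is close to 1, successive coefficient vectors are strongly correlated and the channel drifts slowly, which is the slow-variation regime discussed next.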

For slow statistical variations, *ξ*(*n*) is small, typically less than unity, whereas for fast statistical variations it is greater than unity, indicating that it is not advantageous to build an adaptive filter to solve the tracking problem.

### 2.2 Multiple channel identification

The coefficient vector of the *m*th filter can be found by minimizing the following cost function:

\( J_{m}(n) = \sum _{i=1}^{n} \lambda _{m}^{n-i} \mid e_{m}(i,n) \mid ^{2} + \delta \ \lambda _{m}^{n} \parallel \mathbf {P}_{m}(n) \parallel ^{2} \)

at time instant *n*, where *m*=1,2,…,*M* is the index for the component filters, and the use of prewindowing is assumed. *λ*_{m} is the exponential weighting factor for the *m*th filter, and *δ* is a positive real number called the regularization parameter [56]. Herein, the coefficient vector of the *m*th adaptive filter, **P**_{m}(*n*), at time instant *n*, is delineated as:

\( \mathbf {P}_{m}(n) = [ p_{m,0}(n), p_{m,1}(n), \ldots, p_{m,N_{m}-1}(n) ]^{T} \)

The *m*th estimation error at time instant *i*, computed using the input signal at time instant *i* and the filter coefficients at time instant *n*, is given by:

\( e_{m}(i,n) = y(i) - \mathbf {P}_{m}^{H}(n) \ \mathbf {x}_{m}(i) \)

The input signal vector of the *m*th adaptive filter, **x**_{m}(*i*), at time instant *i*, is defined as:

\( \mathbf {x}_{m}(i) = [ x_{m}(i), x_{m}(i-1), \ldots, x_{m}(i-N_{m}+1) ]^{T} \)

Note that *x*_{m}(*i*)=*x*(*i*), ∀*m*, and *N*_{m} is the length of the *m*th filter.

The *m*th optimal coefficient vector is found by differentiating *J*_{m}(*n*) with respect to **P**_{m}(*n*), setting the derivative to zero, and solving for **P**_{m}(*n*):

\( \mathbf {P}_{m}(n) = \mathbf {\Phi }_{m}^{-1}(n) \ \mathbf {z}_{m}(n) \)

where **Φ**_{m}(*n*) is the *N*_{m}×*N*_{m} correlation matrix of **x**_{m}(*i*) and is given by:

\( \mathbf {\Phi }_{m}(n) = \sum _{i=1}^{n} \lambda _{m}^{n-i} \ \mathbf {x}_{m}(i) \ \mathbf {x}_{m}^{H}(i) + \delta \ \lambda _{m}^{n} \ \mathbf {I} \)

Herein, the regularization term stems from *J*_{m}(*n*), and **I** is the *N*_{m}×*N*_{m} identity matrix. The *N*_{m}×1 cross-correlation matrix of **x**_{m}(*i*) and *y*(*i*) is expressed as:

\( \mathbf {z}_{m}(n) = \sum _{i=1}^{n} \lambda _{m}^{n-i} \ \mathbf {x}_{m}(i) \ y^{\ast }(i) \)

Note that the channel length information can be available through the implementation of a channel length estimation algorithm such as [57] in CR. Accordingly, the lengths of all adaptive filters are assumed equal to the length of the channel to be identified, that is, *N*_{m}=*N*. All prediction and estimation errors hereafter are shown for the end of the observation interval *i*=*n*.
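For reference, the exponentially weighted least squares estimate described above can also be computed directly from the normal equations. The real-valued Python sketch below does so for two illustrative weighting factors; the lattice filters in this paper solve the same problem order-recursively, without the explicit matrix solve. The scenario values (channel length, sample count, noise level) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

N, n_samples, delta = 4, 400, 1e-2
X = rng.standard_normal((n_samples, N))           # rows are x(i)^T
w_true = rng.standard_normal(N)
y = X @ w_true + 0.05 * rng.standard_normal(n_samples)

def ew_ls_estimate(X, y, lam, delta):
    """Solve the exponentially weighted, regularized normal equations
    Phi(n) p = z(n) directly (illustration only; a lattice filter reaches
    the same solution order-recursively without a matrix inversion)."""
    n, N = X.shape
    weights = lam ** (n - 1 - np.arange(n))        # lam^(n-i), most recent = 1
    Xw = X * weights[:, None]
    Phi = Xw.T @ X + delta * lam ** n * np.eye(N)  # correlation + regularization
    z = Xw.T @ y                                   # cross-correlation
    return np.linalg.solve(Phi, z)

p_fast = ew_ls_estimate(X, y, lam=0.95, delta=delta)   # short memory, fast tracking
p_slow = ew_ls_estimate(X, y, lam=0.999, delta=delta)  # long memory, low steady-state MSD
```

The two weighting factors trade tracking speed against steady-state accuracy, which is exactly the trade-off the proposed combination schemes exploit.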

## 3 Sequential multichannel lattice processing

In order to provide a modular, order-recursive, and sequential solution to the multiple filter combination problem, we propose to use SPMLSs, so that the channels of SPMLSs constitute multiple filters with different exponential weighting factors. In the following, we first present the original SPMLS and its algorithm utilizing the direct updating of the a priori reflection coefficient form of processing equations in [58, 59] respectively. We then introduce the modifications to be implemented in the SPMLS in order to be able to use its channels as filters in a combination task.

### 3.1 The original SPMLS

The elements of \(\hat {\mathbf {f}}_{\ell -1}(n)\) are fed into a backward prediction reference-orthogonalization processor (ROP) in order to predict the elements of **b**_{ℓ−1}(*n*−1) and to produce the stage output backward prediction error vector **b**_{ℓ}(*n*). The elements of \( \hat {\mathbf {b}}_{\ell -1}(n)\) are similarly fed into a ROP to perform *M*-channel joint process estimation and to produce the stage output estimation error vector **e**_{ℓ}(*n*). Subsequently, the elements of \( \hat {\mathbf {b}}_{\ell -1}(n)\) are delayed and fed into a forward prediction ROP to obtain the stage output forward prediction error vector **f**_{ℓ}(*n*).
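The ROPs effectively carry out a modified Gram-Schmidt orthogonalization of the channel signals using only scalar operations. The following Python sketch illustrates the basic modified Gram-Schmidt idea on a block of data; it is a deterministic, non-recursive illustration of the orthogonalization principle, not the per-sample SPMLS recursions themselves.

```python
import numpy as np

def modified_gram_schmidt(V):
    """Orthogonalize the columns of V using only scalar projection and
    subtraction steps -- no matrix inversion is required."""
    V = np.array(V, dtype=float)
    for k in range(V.shape[1]):
        for j in range(k + 1, V.shape[1]):
            # Remove from column j its component along the (already
            # orthogonalized) column k.
            coef = (V[:, k] @ V[:, j]) / (V[:, k] @ V[:, k])
            V[:, j] -= coef * V[:, k]
    return V

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 4))   # 4 channels of 50 samples each (illustrative)
Q = modified_gram_schmidt(A)
G = Q.T @ Q                        # Gram matrix: off-diagonal entries are ~0
```

Each inner step involves only inner products and scalar divisions, which is why this style of orthogonalization suits embedded, matrix-inversion-free receiver architectures.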

The original SPMLS algorithm

Stage inputs and initialization | |

\( \bar {b}^{1}_{m}(n)=b^{m}_{\ell -1}(n), \bar {f}^{1}_{m}(n)=f^{m}_{\ell -1}(n), \bar {e}^{1}_{m}(n)=e^{m}_{\ell -1}(n)\) | (T.1.1) |

\(\gamma ^{f}_{\ell -1,1}(n)=\gamma _{\ell -1}(n-1), \gamma ^{b}_{\ell -1,1}(n)=\gamma _{\ell -1}(n) \) | (T.1.2) |

\( r^{b}_{\ell -1,k}(-1)=r^{f}_{\ell -1,k}(-1)=\delta, (k=1,\ldots,M) \) | (T.1.3) |

\(\bar {\kappa }^{b}_{kj}(-1)=\bar {\kappa }^{f}_{kj}(-1)=\Delta ^{e}_{k\upsilon }(-1)=\Delta ^{f}_{k\upsilon }(-1)=\Delta ^{b}_{k\upsilon }(-1)=0.0\) | (T.1.4) |

| |

| |

Computations at SOPs | |

\( \hat {b}_{\ell -1}^{k}(n)=\bar {b}^{k}_{k}(n), \hat {f}_{\ell -1}^{k}(n)=\bar {f}^{k}_{k}(n) \) | (T.1.5) |

\( r^{b}_{\ell -1,k}(n) = \lambda \ r^{b}_{\ell -1,k}(n-1) + \gamma ^{b}_{\ell -1,k}(n) \ \mid \hat {b}_{\ell -1}^{k}(n) \mid ^{2} \) | (T.1.6) |

\( \gamma ^{b}_{\ell -1,k+1}(n)=\gamma ^{b}_{\ell -1,k}(n) - \mid \gamma ^{b}_{\ell -1,k}(n) \mid ^{2} \mid \hat {b}_{\ell -1}^{k}(n)\mid ^{2}/r^{b}_{\ell -1,k}(n) \) | (T.1.7) |

\( r^{f}_{\ell -1,k}(n) = \lambda \ r^{f}_{\ell -1,k}(n-1) + \gamma ^{f}_{\ell -1,k}(n) \ \mid \hat {f}_{\ell -1}^{k}(n) \mid ^{2} \) | (T.1.8) |

\( \gamma ^{f}_{\ell -1,k+1}(n)=\gamma ^{f}_{\ell -1,k}(n) - \mid \gamma ^{f}_{\ell -1,k}(n)\mid ^{2} \mid \hat {f}_{\ell -1}^{k}(n) \mid ^{2}/r^{f}_{\ell -1,k}(n) \) | (T.1.9) |

| |

\( \bar {b}^{k+1}_{j}(n)=\bar {b}^{k}_{j}(n) - \bar {\kappa }^{b^{*}}_{kj}(n-1) \ \hat {b}_{\ell -1}^{k}(n) \) | (T.1.10) |

\( \bar {\kappa }^{b}_{kj}(n)= \bar {\kappa }^{b}_{kj}(n-1) + \gamma ^{b}_{\ell -1,k}(n) \ \bar {b}^{k+1^{\ast }}_{j}(n) \hat {b}^{k}_{\ell -1}(n)/r^{b}_{\ell -1,k}(n) \) | (T.1.11) |

\( \bar {f}^{k+1}_{j}(n)=\bar {f}^{k}_{j}(n) - \bar {\kappa }^{f^{*}}_{kj}(n-1) \ \hat {f}_{\ell -1}^{k}(n) \) | (T.1.12) |

\( \bar {\kappa }^{f}_{kj}(n)= \bar {\kappa }^{f}_{kj}(n-1) + \gamma ^{f}_{\ell -1,k}(n) \ \bar {f}^{k+1^{\ast }}_{j}(n) \hat {f}^{k}_{\ell -1}(n)/r^{f}_{\ell -1,k}(n) \) | (T.1.13) |

| |

| |

Joint process estimation (ROP) | |

\( e_{\upsilon }^{k+1}(n)=e_{\upsilon }^{k}(n) - \Delta ^{{e}^{*}}_{k\upsilon }(n-1) \ \hat {b}_{\ell -1}^{k}(n) \) | (T.1.14) |

\( \Delta ^{e}_{k\upsilon }(n)= \Delta ^{e}_{k\upsilon }(n-1) + \gamma ^{b}_{\ell -1,k}(n) \ e^{k+1^{\ast }}_{\upsilon }(n) \hat {b}_{\ell -1}^{k}(n)/r^{b}_{\ell -1,k}(n) \) | (T.1.15) |

Forward error prediction (ROP) | |

\( f^{k+1}_{\upsilon }(n)=f^{k}_{\upsilon }(n) - \Delta ^{f^{*}}_{k\upsilon }(n-1) \ \hat {b}_{\ell -1}^{k}(n-1) \) | (T.1.16) |

\( \Delta ^{f}_{k\upsilon }(n)= \Delta ^{f}_{k\upsilon }(n-1) + \gamma ^{b}_{\ell -1,k}(n-1) \ f^{k+1^{\ast }}_{\upsilon }(n) \hat {b}_{\ell -1}^{k}(n-1)/r^{b}_{\ell -1,k}(n-1) \) | (T.1.17) |

Backward error prediction (ROP) | |

\( b^{k+1}_{\upsilon }(n)=b^{k}_{\upsilon }(n-1) - \Delta ^{b^{*}}_{k\upsilon }(n-1) \ \hat {f}_{\ell -1}^{k}(n) \) | (T.1.18) |

\( \Delta ^{b}_{k\upsilon }(n)= \Delta ^{b}_{k\upsilon }(n-1) + \gamma ^{f}_{\ell -1,k}(n) \ b^{k+1^{\ast }}_{\upsilon }(n) \hat {f}_{\ell -1}^{k}(n)/r^{f}_{\ell -1,k}(n) \) | (T.1.19) |

| |

| |

Stage outputs | |

\( b^{m}_{\ell }(n)=b^{M+1}_{m}(n), \ f^{m}_{\ell }(n)=f^{M+1}_{m}(n), \) | (T.1.20) |

\( e^{m}_{\ell }(n)=e^{M+1}_{m}(n), \ \gamma _{\ell }(n)=\gamma ^{b}_{\ell -1,M+1}(n)\) | (T.1.21) |

### 3.2 Modification of the SPMLS

The modified SPMLS algorithm

Stage inputs and initialization | |

\( \gamma ^{f}_{\ell -1,1}(n)=\gamma _{\ell -1}(n-1), \gamma ^{b}_{\ell -1,1}(n)=\gamma _{\ell -1}(n)\) | (T.2.1) |

\( r^{b}_{\ell -1,k}(-1)=r^{f}_{\ell -1,k}(-1)=\delta, (k=1,\ldots,M) \) | (T.2.2) |

\( \Delta ^{e}_{\ell,k}(-1)=\Delta ^{f}_{\ell,k}(-1)=\Delta ^{b}_{\ell,k}(-1)=0.0, \ (k=1,\ldots,M) \) | (T.2.3) |

| |

Computations at SOPs | |

\( r^{b}_{\ell -1,k}(n) = \lambda _{k} \ r^{b}_{\ell -1,k}(n-1) + \gamma ^{b}_{\ell -1,k}(n) \ \mid b_{\ell -1}^{k}(n) \mid ^{2} \) | (T.2.4) |

\( \gamma ^{b}_{\ell -1,k+1}(n)=\gamma ^{b}_{\ell -1,k}(n) - \mid \gamma ^{b}_{\ell -1,k}(n) \mid ^{2} \mid b_{\ell -1}^{k}(n)\mid ^{2}/r^{b}_{\ell -1,k}(n) \) | (T.2.5) |

\( r^{f}_{\ell -1,k}(n) = \lambda _{k} \ r^{f}_{\ell -1,k}(n-1) + \gamma ^{f}_{\ell -1,k}(n) \ \mid f_{\ell -1}^{k}(n) \mid ^{2} \) | (T.2.6) |

\( \gamma ^{f}_{\ell -1,k+1}(n)=\gamma ^{f}_{\ell -1,k}(n) - \mid \gamma ^{f}_{\ell -1,k}(n)\mid ^{2} \mid f_{\ell -1}^{k}(n) \mid ^{2}/r^{f}_{\ell -1,k}(n) \) | (T.2.7) |

Joint process estimation (ROP) | |

\( e_{\ell }^{k}(n)=e_{\ell -1}^{k}(n) - \Delta ^{e^{*}}_{\ell,k}(n-1) \ b_{\ell -1}^{k}(n) \) | (T.2.8) |

\( \Delta ^{e}_{\ell,k}(n)= \Delta ^{e}_{\ell,k}(n-1) + \gamma ^{b}_{\ell -1,k}(n) \ e^{k^{\ast }}_{\ell }(n) b^{k}_{\ell -1}(n)/r^{b}_{\ell -1,k}(n) \) | (T.2.9) |

Forward error prediction (ROP) | |

\( f_{\ell }^{k}(n)=f_{\ell -1}^{k}(n) - \Delta ^{f^{*}}_{\ell,k}(n-1) \ b_{\ell -1}^{k}(n-1) \) | (T.2.10) |

\( \Delta ^{f}_{\ell,k}(n)= \Delta ^{f}_{\ell,k}(n-1) + \gamma ^{b}_{\ell -1,k}(n-1) \ f^{k^{\ast }}_{\ell }(n) b^{k}_{\ell -1}(n-1)/r^{b}_{\ell -1,k}(n-1) \) | (T.2.11) |

Backward error prediction (ROP) | |

\( b^{k}_{\ell }(n)=b_{\ell -1}^{k}(n-1) - \Delta ^{b^{*}}_{\ell,k}(n-1) \ f_{\ell -1}^{k}(n) \) | (T.2.12) |

\( \Delta ^{b}_{\ell,k}(n)= \Delta ^{b}_{\ell,k}(n-1) + \gamma ^{f}_{\ell -1,k}(n) \ b^{k^{\ast }}_{\ell }(n) f^{k}_{\ell -1}(n)/r^{f}_{\ell -1,k}(n) \) | (T.2.13) |

| |

Stage outputs | |

\( \gamma _{\ell }(n)=\gamma ^{b}_{\ell -1,M+1}(n)\) | (T.2.14) |

## 4 Combinations of multiple lattice filters

In this section, we present the development of two combination schemes, namely, the R-CMLF and D-CMLF schemes, in order to sequentially combine multiple adaptive lattice filters. Even though these schemes may look similar, there is an important difference between them. In the R-CMLF scheme, the combination parameters and mixing coefficients are computed globally at the last stage of the combining filters and then fed back to the prior stages, whereas in the D-CMLF scheme they are computed locally and are therefore stage dependent. Hence, the total numbers of combination parameters and mixing coefficients in the R-CMLF and D-CMLF schemes are *M* and *M*×*N* respectively.

### 4.1 Regular combination of multiple lattice filters

The filter structure for the *M*=2 case, that is, the regular combination of two lattice filters (R-CTLF), is presented in Fig. 4. Two types of combination processors are introduced into the filter structure in this scheme: a type-1 combination processor per stage and a type-2 processor at the final stage. In the following, we present the development of the combination algorithms that will be implemented by the type-1 and type-2 processors in the R-CMLF scheme.

The estimate of the equivalent desired signal is computed in the *ℓ*th type-1 combination processor as:

where *v*^{k}(*n*) and *Δ*_{k,j}(*n*) represent the *k*th mixing coefficient and the *k*th estimation error reflection coefficient of the *j*th lattice stage at time instant *n* respectively. \(b^{k}_{j-1}(n)\) is the *k*th backward prediction error at the entrance of the *j*th stage at time instant *n*. The estimate of the equivalent desired signal can be expressed order-recursively as follows:

for *ℓ*=1,…,*N*, *m*=1,…,*M*, and \(\hat {d}^{eq}_{0,n}(n)= 0\). Then, Eq. (17) is substituted in:

where *d*(*n*) represents the desired signal at time instant *n*, which is the channel output signal *y*(*n*) in the channel identification problem. Subsequently, the equivalent estimation error at the (*ℓ*−1)th stage is defined as:

The equivalent estimation error at the *ℓ*th stage can also be expressed in terms of the estimation errors related to the channels of SPMLSs as:

where \(e^{k}_{\ell,n}(n)\) is the *k*th estimation error of the *ℓ*th stage. The corresponding equivalent reflection coefficient at the *ℓ*th stage can be computed using the mixing and reflection coefficients at time instant *n* as:

Hence, Eqs. (17), (21), or (22) and (23) are implemented in the type-1 combination processors shown in Fig. 4.

The *m*th mixing coefficient at time instant *n*+1 is computed at the output of the last lattice stage in a type-2 combination processor as follows:

\( v^{m}(n+1)= e^{a^{m}(n+1)}/\beta (n+1) \)

where *m*=1,2,…,*M*, and *a*^{m}(*n*+1) is the *m*th combination parameter at time instant *n*+1. Herein, the normalization parameter at time instant *n*+1, *β*(*n*+1), is computed according to:

\( \beta (n+1)= \sum _{k=1}^{M} e^{a^{k}(n+1)} \)

Note that 0<*v*^{m}(*n*)<1, ∀*m*, and \( \sum _{k=1}^{M} v^{k}(n)=1\).
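This mapping from combination parameters to mixing coefficients is a softmax-type normalization, which enforces the convexity constraints by construction. A minimal Python sketch, with illustrative parameter values:

```python
import numpy as np

def mixing_coefficients(a):
    """Map combination parameters a^m to mixing coefficients
    v^m = exp(a^m) / beta, where beta normalizes the coefficients
    so that they are positive and sum to one."""
    expa = np.exp(np.asarray(a, dtype=float))
    return expa / expa.sum()

v = mixing_coefficients([0.5, -1.0, 2.0])  # illustrative parameter values
```

Whatever real values the combination parameters take, the resulting mixing coefficients automatically satisfy 0 < *v*^{m} < 1 and sum to one, so no explicit projection step is needed.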

The time update of the *m*th combination parameter, *a*^{m}(*n*), can accordingly be expressed as:

where *μ*_{a} is the step size. The derivation in (26) is carried out so as to obtain the following expression:

The equivalent estimation error expression for *ℓ*=*N* is subsequently utilized in evaluating \(\frac {\partial {e^{eq}_{N,n}(n)}}{\partial {a^{m}(n)}}\), and Eq. (27) is expressed as in the following statement:

for *m*=1,…,*M*. The partial derivatives of *v*^{k}(*n*) with respect to *a*^{m}(*n*) are stated as follows:

The *k*=*m* and *k*≠*m* cases are substituted in Eq. (28) to attain the following statement for the time-update equation of the *m*th combination parameter:

The same expression for *ℓ*=*N* is once more used to find an equivalent expression for the summation term, \( \sum \limits _{k \neq m} v^{k}(n) \ e^{k}_{N,n}(n)\), in Eq. (30) as follows:

Finally, the time-update equation of the *m*th combination parameter in terms of the index *m* is attained as:

\( a^{m}(n+1)=a^{m}(n)-\mu _{a} \ e^{eq}_{N,n}(n) \ \left (e^{m}_{N,n}(n) - e^{eq}_{N,n}(n)\right) \ v^{m}(n) \)

for *m*=1,…,*M*. The term \( \left (e^{m}_{N,n}(n) - e^{eq}_{N,n}(n)\right)\) in Eq. (32) can give rise to a slowing-down effect in the learning of the combination parameters, which usually occurs during long stationary intervals during which the estimation errors \(e^{m}_{N,n}(n)\) and \(e^{eq}_{N,n}(n)\) are close. In order to alleviate this problem, a momentum term can be appended to the statement in Eq. (32) as in the following:

\( a^{m}(n+1)=a^{m}(n)-\mu _{a} \ e^{eq}_{N,n}(n) \ \left (e^{m}_{N,n}(n) - e^{eq}_{N,n}(n)\right) \ v^{m}(n) + \rho \ \left (a^{m}(n)-a^{m}(n-1)\right) \)

where 0<*ρ*<1 [49]. Accordingly, the new additive momentum term in Eq. (33) compensates for the pernicious effect related to the second term.
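A minimal Python sketch of this momentum-augmented update, written in vectorized form over *m*; the function name and example values are illustrative:

```python
import numpy as np

def update_combination_params(a, a_prev, v, e_comp, e_eq, mu_a=0.5, rho=0.9):
    """One time update of the combination parameters: a gradient step
    driven by (e^m - e_eq), plus a momentum term rho * (a(n) - a(n-1))
    that keeps the parameters moving when all component errors are close."""
    a = np.asarray(a, dtype=float)
    a_prev = np.asarray(a_prev, dtype=float)
    grad = e_eq * (np.asarray(e_comp, dtype=float) - e_eq) * np.asarray(v, dtype=float)
    return a - mu_a * grad + rho * (a - a_prev)

# When the component and equivalent errors coincide, the gradient term
# vanishes and only the momentum term moves the parameters.
a_new = update_combination_params([1.0, 2.0], [0.9, 1.9], [0.5, 0.5],
                                  [0.1, 0.1], 0.1)
```

In the example, the gradient term is exactly zero because each component error equals the equivalent error, so the whole update comes from the momentum term, which is precisely the stationary-interval situation the momentum term is designed for.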

The mixing coefficients, *v*^{m}(*n*), are then fed back to the type-1 combination processors so as to be used in the computation of the equivalent desired signals, estimation errors, and equivalent reflection coefficients in Eqs. (17), (21), or (22) and (23) respectively. We call the complete algorithm, which includes the modified SPMLS algorithm in Table 2 as well as the combination algorithm presented in this subsection, the R-CMLF algorithm and summarize it in Table 3.

The R-CMLF algorithm

Stage inputs | |

\(\gamma ^{f}_{\ell -1,1}(n)=\gamma _{\ell -1}(n-1), \gamma ^{b}_{\ell -1,1}(n)=\gamma _{\ell -1}(n)\) | (T.3.1) |

\( r^{b}_{\ell -1,k}(-1)=r^{f}_{\ell -1,k}(-1)=\delta, (k=1,\ldots,M)\) | (T.3.2) |

\( \Delta ^{e}_{\ell,k}(-1)=\Delta ^{f}_{\ell,k}(-1)=\Delta ^{b}_{\ell,k}(-1)= \Delta ^{eq}_{\ell }(0)=0.0, (k=1,\ldots,M) \) | (T.3.3) |

| |

Computations at SOPs | |

\( r^{b}_{\ell -1,k}(n) = \lambda _{k} \ r^{b}_{\ell -1,k}(n-1) + \gamma ^{b}_{\ell -1,k}(n) \ \mid b_{\ell -1}^{k}(n) \mid ^{2} \) | (T.3.4) |

\( \gamma ^{b}_{\ell -1,k+1}(n)=\gamma ^{b}_{\ell -1,k}(n) - \mid \gamma ^{b}_{\ell -1,k}(n) \mid ^{2} \mid b_{\ell -1}^{k}(n)\mid ^{2}/r^{b}_{\ell -1,k}(n) \) | (T.3.5) |

\( r^{f}_{\ell -1,k}(n) = \lambda _{k} \ r^{f}_{\ell -1,k}(n-1) + \gamma ^{f}_{\ell -1,k}(n) \ \mid f_{\ell -1}^{k}(n) \mid ^{2} \) | (T.3.6) |

\( \gamma ^{f}_{\ell -1,k+1}(n)=\gamma ^{f}_{\ell -1,k}(n) - \mid \gamma ^{f}_{\ell -1,k}(n)\mid ^{2} \mid f_{\ell -1}^{k}(n) \mid ^{2}/r^{f}_{\ell -1,k}(n) \) | (T.3.7) |

Joint process estimation (ROP) | |

\( e_{\ell }^{k}(n)=e_{\ell -1}^{k}(n) - \Delta ^{e^{*}}_{\ell,k}(n-1) \ b_{\ell -1}^{k}(n) \) | (T.3.8) |

\( \Delta ^{e}_{\ell,k}(n)= \alpha (\Delta ^{e}_{\ell,k}(n-1) + \gamma ^{b}_{\ell -1,k}(n) \ e^{k^{\ast }}_{\ell }(n) b^{k}_{\ell -1}(n)/r^{b}_{\ell -1,k}(n)) + (1-\alpha) \Delta ^{eq}_{\ell }(n-1) \) | (T.3.9) |

Combination processing (type-1 processor) | |

\( e_{\ell }^{eq}(n) = e_{\ell }^{eq}(n) + v^{k}(n) \ e_{\ell }^{k}(n) \) | (T.3.10) |

\( \Delta ^{eq}_{\ell }(n) = \Delta ^{eq}_{\ell }(n) + v^{k}(n) \ \Delta ^{e}_{\ell,k}(n) \) | (T.3.11) |

Forward error prediction (ROP) | |

\( f_{\ell }^{k}(n)=f_{\ell -1}^{k}(n) - \Delta ^{f^{*}}_{\ell,k}(n-1) \ b_{\ell -1}^{k}(n-1) \) | (T.3.12) |

\( \Delta ^{f}_{\ell,k}(n)= \Delta ^{f}_{\ell,k}(n-1) + \gamma ^{b}_{\ell -1,k}(n-1) \ f^{k^{\ast }}_{\ell }(n) b^{k}_{\ell -1}(n-1)/r^{b}_{\ell -1,k}(n-1) \) | (T.3.13) |

Backward error prediction (ROP) | |

\( b^{k}_{\ell }(n)=b_{\ell -1}^{k}(n-1) - \Delta ^{b^{*}}_{\ell,k}(n-1) \ f_{\ell -1}^{k}(n) \) | (T.3.14) |

\( \Delta ^{b}_{\ell,k}(n)= \Delta ^{b}_{\ell,k}(n-1) + \gamma ^{f}_{\ell -1,k}(n) \ b^{k^{\ast }}_{\ell }(n) f^{k}_{\ell -1}(n)/r^{f}_{\ell -1,k}(n) \) | (T.3.15) |

| |

Combination processing (type-2 processor) | |

| |

\( a^{k}(n+1)=a^{k}(n)-\mu _{a} \ e^{eq}_{N}(n) \ \left (e_{N}^{k}(n) - e^{eq}_{N}(n)\right) \ v^{k}(n) + \rho \ \left (a^{k}(n)-a^{k}(n-1)\right)\) | (T.3.16) |

\( \beta (n+1)= \beta (n+1) + e^{a^{k}(n+1)} \) | (T.3.17) |

\( v^{k}(n+1)= \frac {e^{a^{k}(n+1)}}{\beta (n+1)} \) | (T.3.18) |

| |

Stage outputs | |

\( \gamma _{\ell }(n)=\gamma ^{b}_{\ell -1,M+1}(n)\) | (T.3.19) |

The *m*th joint process estimation lattice reflection coefficient at the *ℓ*th stage is modified by incorporating the transfer parameter *α* and the equivalent reflection coefficient \(\Delta ^{eq}_{\ell }(n-1)\) at the *ℓ*th stage in the following manner:

The permissible range for the transfer parameter is 0<*α*<1, and the transfer of reflection coefficients is only applied when the filtered quadratic estimation errors meet the conditions discussed in [49] as indicators of worse performance.
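A minimal Python sketch of this blended update for a single reflection coefficient; the function name, the scalar `correction` term standing in for the usual recursive-update increment, and the `worse` flag are illustrative assumptions:

```python
def transfer_update(delta_prev, correction, delta_eq_prev, alpha, worse=True):
    """Reflection-coefficient update with optional transfer: when the
    component filter is performing worse, blend a fraction (1 - alpha)
    of the equivalent coefficient into its own updated coefficient;
    otherwise apply the ordinary recursive update unchanged."""
    own = delta_prev + correction          # ordinary recursive update
    if not worse:
        return own
    return alpha * own + (1.0 - alpha) * delta_eq_prev

blended = transfer_update(0.2, 0.05, 0.5, alpha=0.8)
plain = transfer_update(0.2, 0.05, 0.5, alpha=0.8, worse=False)
```

Gating the transfer on a worse-performance indicator, as described above, prevents the equivalent coefficient from contaminating a component filter that is already performing well.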

### 4.2 Decoupled combination of multiple lattice filters

The filter structure for the *M*=2 case, which can be named the decoupled combination of two lattice filters (D-CTLF), is shown in Fig. 5. A type-3 combination processor is in this case inserted into each lattice stage. In the sequel, we develop the combination algorithm that will be implemented in a type-3 processor.

In order to express the estimate of the equivalent desired signal at the *ℓ*th lattice stage in an order-recursive manner, it is first stated as follows:

where \(v^{k}_{j}(n)\) and *Δ*_{k,j}(*n*) are the *k*th mixing and estimation error reflection coefficients of the *j*th lattice stage at time instant *n* respectively. \(b^{k}_{j-1}(n)\) is the *k*th backward prediction error at the entrance of the *j*th stage at time instant *n*. The estimate of the equivalent desired signal can be expressed order-recursively as follows:

for *ℓ*=1,…,*N*, *m*=1,…,*M*, and \(\hat {d}^{eq}_{0,n}(n)= 0\). Note that the mixing coefficients are related to the *ℓ*th stage in this case.

The equivalent estimation error at the *ℓ*th stage is defined as:

where *d*(*n*) represents the desired signal at time instant *n* as in the previous subsection, and Eq. (36) is substituted in this equivalent estimation error expression to obtain the following statement:

The equivalent estimation error at the (*ℓ*−1)th stage is similarly defined as:

The equivalent estimation error at the *ℓ*th stage can also be expressed in terms of the estimation errors related to the channels of SPMLSs as:

where \(e^{k}_{\ell,n}(n)\) is the *k*th estimation error of the *ℓ*th stage. Similarly, the equivalent reflection coefficient at the *ℓ*th stage is stated in accordance with:

The *m*th mixing coefficient for the *m*th channel of the *ℓ*th stage is stated as:

\( v^{m}_{\ell }(n+1)= e^{a^{m}_{\ell }(n+1)}/\beta _{\ell }(n+1) \)

where \(a^{m}_{\ell }(n+1)\) and *β*_{ℓ}(*n*+1) are the *m*th combination parameter and the normalization factor at time instant *n*+1 for the *ℓ*th stage respectively. The normalization factor is defined as:

\( \beta _{\ell }(n+1)= \sum _{k=1}^{M} e^{a^{k}_{\ell }(n+1)} \)

Note that \( 0 < v^{m}_{\ell }(n) < 1, \forall {m} \) and *ℓ*, and \( \sum _{k=1}^{M} v^{k}_{\ell }(n)=1\).

The time update of the *m*th combination parameter at the *ℓ*th stage, \(a^{m}_{\ell }(n)\), can be stated by making use of the gradient descent method as:

where *μ*_{a} is the step size. The derivation in (45) is carried out so as to obtain the following expression:

for *m*=1,…,*M* and *ℓ*=1,…,*N*. The partial derivatives of \(v^{k}_{\ell }(n)\) with respect to \(a^{m}_{\ell }(n) \) are found as follows:

The *k*=*m* and *k*≠*m* cases are substituted in Eq. (47) to obtain the following statement for the time-update equation of the *m*th combination parameter at the *ℓ*th stage:

Finally, the resulting time update of the *m*th combination parameter at the *ℓ*th stage in terms of the indices *m* and *ℓ* is given as:

for *ℓ*=1,…,*N* and *m*=1,…,*M*. Accordingly, the D-CMLF algorithm is presented in Table 4.

The D-CMLF algorithm

Stage inputs | |

\( \gamma ^{f}_{\ell -1,1}(n)=\gamma _{\ell -1}(n-1), \gamma ^{b}_{\ell -1,1}(n)=\gamma _{\ell -1}(n)\) | (T.4.1) |

\( r^{b}_{\ell -1,k}(-1)=r^{f}_{\ell -1,k}(-1)=\delta, (k=1,\ldots,M) \) | (T.4.2) |

\( \Delta ^{e}_{\ell,k}(-1)=\Delta ^{f}_{\ell,k}(-1)=\Delta ^{b}_{\ell,k}(-1)= \Delta ^{eq}_{\ell }(0)=0.0, (k=1,\ldots,M) \) | (T.4.3) |


Computations at SOPs | |

\( r^{b}_{\ell -1,k}(n) = \lambda _{k} \ r^{b}_{\ell -1,k}(n-1) + \gamma ^{b}_{\ell -1,k}(n) \ \mid b_{\ell -1}^{k}(n) \mid ^{2} \) | (T.4.4) |

\( \gamma ^{b}_{\ell -1,k+1}(n)=\gamma ^{b}_{\ell -1,k}(n) - \mid \gamma ^{b}_{\ell -1,k}(n) \mid ^{2} \mid b_{\ell -1}^{k}(n)\mid ^{2}/r^{b}_{\ell -1,k}(n) \) | (T.4.5) |

\( r^{f}_{\ell -1,k}(n) = \lambda _{k} \ r^{f}_{\ell -1,k}(n-1) + \gamma ^{f}_{\ell -1,k}(n) \ \mid f_{\ell -1}^{k}(n) \mid ^{2} \) | (T.4.6) |

\( \gamma ^{f}_{\ell -1,k+1}(n)=\gamma ^{f}_{\ell -1,k}(n) - \mid \gamma ^{f}_{\ell -1,k}(n) \mid ^{2} \mid f_{\ell -1}^{k}(n) \mid ^{2}/r^{f}_{\ell -1,k}(n) \) | (T.4.7) |

Joint process estimation (ROP) | |

\( e_{\ell }^{k}(n)=e_{\ell -1}^{k}(n) - \Delta ^{e^{*}}_{\ell,k}(n-1) \ b_{\ell -1}^{k}(n) \) | (T.4.8) |

\( \Delta ^{e}_{\ell,k}(n)= \alpha (\Delta ^{e}_{\ell,k}(n-1) + \gamma ^{b}_{\ell -1,k}(n) e^{k^{\ast }}_{\ell }(n) b^{k}_{\ell -1}(n)/r^{b}_{\ell -1,k}(n)) + (1 - \alpha) \Delta ^{eq}_{\ell }(n-1) \) | (T.4.9) |

Combination processing (type-3 processor) | |

\( e_{\ell }^{eq}(n) = e_{\ell }^{eq}(n) + v_{\ell }^{k}(n) \ e_{\ell }^{k}(n) \) | (T.4.10) |

\( \Delta ^{eq}_{\ell }(n) = \Delta ^{eq}_{\ell }(n) + v_{\ell }^{k}(n) \ \Delta ^{e}_{\ell,k}(n) \) | (T.4.11) |

\( a_{\ell }^{k}(n+1)=a_{\ell }^{k}(n)-\mu _{a} \ e^{eq}_{\ell }(n) \ (e_{\ell }^{k}(n) - e^{eq}_{\ell }(n)) \ v_{\ell }^{k}(n) + \rho \ (a_{\ell }^{k}(n)-a_{\ell }^{k}(n-1)) \) | (T.4.12) |

\( \beta _{\ell }(n+1)= \beta _{\ell }(n+1) + e^{a_{\ell }^{k}(n+1)} \) | (T.4.13) |

\( v_{\ell }^{k}(n+1)= \frac {e^{a_{\ell }^{k}(n+1)}}{\beta _{\ell }(n+1)} \) | (T.4.14) |

Forward error prediction (ROP) | |

\( f_{\ell }^{k}(n)=f_{\ell -1}^{k}(n) - \Delta ^{f^{*}}_{\ell,k}(n-1) \ b_{\ell -1}^{k}(n-1) \) | (T.4.15) |

\( \Delta ^{f}_{\ell,k}(n)= \Delta ^{f}_{\ell,k}(n-1) + \gamma ^{b}_{\ell -1,k}(n-1) \ f^{k^{\ast }}_{\ell }(n) b^{k}_{\ell -1}(n-1)/r^{b}_{\ell -1,k}(n-1) \) | (T.4.16) |

Backward error prediction (ROP) | |

\( b^{k}_{\ell }(n)=b_{\ell -1}^{k}(n-1) - \Delta ^{b^{*}}_{\ell,k}(n-1) \ f_{\ell -1}^{k}(n) \) | (T.4.17) |

\( \Delta ^{b}_{\ell,k}(n)= \Delta ^{b}_{\ell,k}(n-1) + \gamma ^{f}_{\ell -1,k}(n) \ b^{k^{\ast }}_{\ell }(n) f^{k}_{\ell -1}(n)/r^{f}_{\ell -1,k}(n) \) | (T.4.18) |


Stage outputs | |

\( \gamma _{\ell }(n)=\gamma ^{b}_{\ell -1,M+1}(n)\) | (T.4.19) |

where 0<*ρ*<1 as before. To speed up the convergence of the slower component filters, Eq. (T.4.9) in Table 4, which is related to the computation of the *m*th joint state estimation lattice reflection coefficient at the *ℓ*th stage, is modified as in Eq. (34).
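To make the type-3 processing concrete, the per-stage combination update of Eqs. (T.4.10) and (T.4.12)-(T.4.14) can be sketched as follows. This is a minimal stand-alone illustration under our own naming, not the paper's implementation; it represents the mixing coefficients as an exponential (softmax) normalization of the combination parameters and omits the reflection-coefficient bookkeeping of Eqs. (T.4.9) and (T.4.11):

```python
import math

def type3_combination(e, a, a_prev, mu_a=0.1, rho=0.0):
    """One time step of a type-3 combination processor for a single
    lattice stage (sketch of Eqs. (T.4.10), (T.4.12)-(T.4.14)).

    e      : component estimation errors e_l^k(n), k = 1..M
    a      : combination parameters a_l^k(n)
    a_prev : a_l^k(n-1), used by the momentum term rho * (a(n) - a(n-1))
    Returns (e_eq, a_next, v_next): equivalent error, updated parameters,
    and updated mixing coefficients v_l^k(n+1).
    """
    # Current mixing coefficients v_l^k(n) as exponential weights of a_l^k(n)
    beta = sum(math.exp(ak) for ak in a)
    v = [math.exp(ak) / beta for ak in a]

    # (T.4.10): equivalent estimation error as a convex combination
    e_eq = sum(vk * ek for vk, ek in zip(v, e))

    # (T.4.12): gradient-descent update of the combination parameters
    a_next = [ak - mu_a * e_eq * (ek - e_eq) * vk + rho * (ak - akp)
              for ak, akp, ek, vk in zip(a, a_prev, e, v)]

    # (T.4.13)-(T.4.14): renormalize to obtain v_l^k(n+1)
    beta_next = sum(math.exp(ak) for ak in a_next)
    v_next = [math.exp(ak) / beta_next for ak in a_next]
    return e_eq, a_next, v_next
```

Iterating this update with one component error consistently larger than the others drives the corresponding mixing coefficient toward zero, which is the behavior intended for the convex combination.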

## 5 Computational complexity

The computational complexity of the proposed schemes can be calculated accordingly by taking into consideration the effect of the modifications on the complexity of a SPMLS and the added complexity due to combination processing in Eqs. (22) or (23), (24) and (25), and (32) in the R-CMLF scheme and Eqs. (41) or (42), (43) and (44), and (51) in the D-CMLF scheme. Note that computational complexity is expressed in terms of the number of required operations, where one operation is taken as one multiplication (or division) and one addition.

Due to the removal of all single circular cells in self-orthogonalizing processors and redundant single circular cells in referential-orthogonalizing processors of a SPMLS in combining multiple lattice filters, the complexity of a SPMLS reduces from (12*M*^{2}+7*M*) to (12*M*^{2}+*M*). Therefore, the total complexity for the R-CMLF scheme is (12*N**M*^{2}+2*M**N*+3*M*+1), whereas it is (12*N**M*^{2}+5*M**N*+*N*) for the D-CMLF scheme. If the convergence of the slower component filters is required to be sped up by transferring a part of the equivalent reflection coefficients to the reflection coefficients of the component filters, the complexities of the proposed schemes increase due to the transfer term in Eq. (34), and momentum terms in Eqs. (33) and (52), which together amount to an additional complexity of (2*M**N*+5*M*−*N*+3). Accordingly, the total complexities for the R-CMLF and D-CMLF schemes with transfer and momentum (t/m) terms become (12*N**M*^{2}+4*M**N*+8*M*−*N*+4) and (12*N**M*^{2}+7*M**N*+5*M*+3) respectively.

The computational complexity versus filter order (*N*) curves of the R-CMLF and D-CMLF schemes for the *M*=2, 4, and 8 cases, that is, R-CTLF, R-CTLF with t/m terms, D-CTLF, D-CTLF with t/m terms, regular combination of four lattice filters (R-CFLF), R-CFLF with t/m terms, decoupled combination of four lattice filters (D-CFLF), D-CFLF with t/m terms, regular combination of eight lattice filters (R-CELF), R-CELF with t/m terms, decoupled combination of eight lattice filters (D-CELF), and D-CELF with t/m terms, have been plotted in Figs. 6, 7, and 8 respectively.

We have also compared the complexities of the proposed methods with those of the M-CLMS (*M*=2,4,8) and D-CLMS schemes in [49]. Note that the complexities of the M-CLMS and D-CLMS schemes are 3*M**N*+5*M*+1 and 10*N*+3 respectively and that these complexities also increase by 2*M**N*+5*M*−*N*+3 when the transfer and momentum terms are implemented. It can be seen in these figures that the complexities of the M-CLMS and D-CLMS schemes are advantageous compared to those of the R-CMLF and D-CMLF schemes, mainly due to the well-known simplicity of LMS filters, and that this advantage grows with an increasing number of combined filters (*M*). However, it can also be noted that the complexities of the transfer and momentum terms are comparable to those of the core M-CLMS and D-CLMS schemes; therefore, the addition of transfer and momentum terms influences the complexities of the M-CLMS and D-CLMS schemes more noticeably.

The complexities of the proposed schemes grow with *M*. For *N*=30, the R-CTLF and D-CTLF schemes are approximately 3.9810 and 3.1622 times less complex than the R-CFLF and D-CFLF schemes respectively, whereas the R-CFLF and D-CFLF schemes are around 4.4668 times less complex than the R-CELF and D-CELF schemes. Note that, when making the aforementioned comparison, the slight complexity differences between the R-CFLF and D-CFLF schemes, and between the R-CELF and D-CELF schemes, have been ignored. The computational complexity expressions of the proposed methods as well as those of the M-CLMS and D-CLMS schemes are summarized in Table 5.

Computational complexity comparison

R-CMLF | 12*N**M*^{2}+2*M**N*+3*M*+1 |

R-CMLF with t/m terms | 12*N**M*^{2}+4*M**N*+8*M*−*N*+4 |

D-CMLF | 12*N**M*^{2}+5*M**N*+*N* |

D-CMLF with t/m terms | 12*N**M*^{2}+7*M**N*+5*M*+3 |

M-CLMS | 3*M**N*+5*M*+1 |

M-CLMS with t/m terms | 5*M**N*+10*M*−*N*+4 |

D-CLMS | 10*N*+3 |

D-CLMS with t/m terms | 2*M**N*+5*M*+9*N*+6 |
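The complexity expressions above can be evaluated directly; the following sketch (helper names are ours) encodes the operation counts stated in the text, with one operation taken as one multiplication (or division) plus one addition:

```python
# Operation counts per iteration as stated in the text, as functions of the
# number of combined filters M and the filter order N.
def r_cmlf_ops(M, N):      # R-CMLF scheme
    return 12 * N * M**2 + 2 * M * N + 3 * M + 1

def d_cmlf_ops(M, N):      # D-CMLF scheme
    return 12 * N * M**2 + 5 * M * N + N

def m_clms_ops(M, N):      # M-CLMS scheme of [49]
    return 3 * M * N + 5 * M + 1

def d_clms_ops(N):         # D-CLMS scheme of [49]
    return 10 * N + 3

def tm_extra_ops(M, N):    # added transfer/momentum (t/m) terms
    return 2 * M * N + 5 * M - N + 3
```

For example, with *N*=30 the R-CTLF case (*M*=2) costs 1567 operations per iteration, against 381 for M-CLMS with *M*=4, which mirrors the complexity advantage of the LMS-based combinations noted above.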

## 6 Simulation study

The performances of the schemes have been evaluated by means of mean square deviation MSD(*n*) vs. number of iterations (*n*) plots, where MSD(*n*) is defined as:

\( \text {MSD}(n)= E\{\parallel \mathbf {w}(n)-\boldsymbol {\Delta }(n)\parallel ^{2}\} \)

at time instant *n*. Herein, **w**(*n*) and **Δ**(*n*) represent the coefficients of the channel to be identified and the corresponding lattice identification filter respectively. We also carried out the same experiments with the M-CLMS and D-CLMS schemes in [49] so as to provide a comparison with the performances of the proposed methods.
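For reference, this ensemble-averaged figure of merit can be computed as in the following minimal sketch (the function name is ours); it averages the squared coefficient-deviation norm over independent runs and converts the result to decibels:

```python
import math

def msd_db(w_runs, delta_runs):
    """Ensemble-averaged mean square deviation in dB at one time instant.

    w_runs     : per-run true channel coefficient vectors w(n)
    delta_runs : per-run identification filter coefficient vectors Delta(n)
    """
    total = 0.0
    for w, d in zip(w_runs, delta_runs):
        total += sum((wi - di) ** 2 for wi, di in zip(w, d))
    msd = total / len(w_runs)          # estimate of E{ ||w(n) - Delta(n)||^2 }
    return 10.0 * math.log10(msd) if msd > 0.0 else float("-inf")
```

In the experiments reported below, the expectation is approximated by averaging over 100 independent runs at each time instant *n*.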

Two types of input signal *x*(*n*) were considered: white and colored Gaussian noise input cases. The filter with the following input-output relationship was used to generate the input signal *x*(*n*) to the channel:

in which *υ*(*n*) is a white Gaussian zero-mean noise process with unit variance. In the white Gaussian noise input case, *η*=0.0, whereas in the colored Gaussian input case, *η*=0.9. The channel noise, *u*(*n*), is also a white zero-mean Gaussian noise with a variance of \({\sigma _{u}^{2}=0.01}\) and is added to the channel output signal in all experiments.
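The exact form of Eq. (54) is not reproduced above; the sketch below assumes the common first-order autoregressive form *x*(*n*)=*η* *x*(*n*−1)+*υ*(*n*), which is consistent with the description (white for *η*=0.0, strongly correlated for *η*=0.9):

```python
import random

def generate_input(num_samples, eta, seed=0):
    """Channel input x(n) driven by unit-variance white Gaussian noise v(n),
    colored through the assumed AR(1) recursion x(n) = eta * x(n-1) + v(n)."""
    rng = random.Random(seed)
    x, prev = [], 0.0
    for _ in range(num_samples):
        prev = eta * prev + rng.gauss(0.0, 1.0)
        x.append(prev)
    return x
```

Under this assumed model, the lag-1 autocorrelation of the generated sequence is close to *η*, so *η*=0.9 yields a noticeably colored input.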

The channel coefficients were changed in accordance with Eq. (2). Under stationary operating conditions, **q**(*n*)=0, ∀*n*, whereas under nonstationary operating conditions, **q**(*n*) represents an independent and identically distributed Gaussian random zero-mean vector with diagonal covariance matrix **Q**(*n*), as introduced in Section 2. Under both stationary and nonstationary operating conditions, the initial values of the channel coefficients were zero-mean Gaussian distributed with a variance of \({\sigma ^{2}_{w}=0.1}\), so that they were between − 1 and 1. These initial values were kept constant under stationary operating conditions, whereas they were allowed to change in accordance with Eq. (2) under nonstationary operating conditions.

In order to imbue the channel output signal with alternating slow and fast statistical variations in accordance with Eq. (3) under nonstationary operating conditions, the trace of **Q**(*n*) was selected so as to alternate between two different values. In particular, it was 12×10^{−6} during the iteration intervals 0≤*n*≤1500, 3500≤*n*≤5500, and 7500≤*n*≤9000, whereas it was 12×10^{−2} for 1501≤*n*≤3499 and 5501≤*n*≤7499, with the degree of nonstationarity alternating between *ξ*(*n*)≈ 0.0345 and *ξ*(*n*)≈ 3.465 respectively.
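The quoted degrees of nonstationarity are consistent with the relation \(\xi (n)=\sqrt {\text {tr}(\mathbf {Q}(n))/\sigma _{u}^{2}}\) for \(\sigma _{u}^{2}=0.01\); this relation is our inference from the quoted numbers, not a formula stated in this section:

```python
import math

def degree_of_nonstationarity(trace_q, sigma_u_sq=0.01):
    """Assumed relation between tr(Q(n)) and the degree of nonstationarity
    xi(n); it reproduces the values quoted in the text for sigma_u^2 = 0.01."""
    return math.sqrt(trace_q / sigma_u_sq)

# Slow intervals: tr(Q) = 12e-6  ->  xi(n) ~ 0.0346
# Fast intervals: tr(Q) = 12e-2  ->  xi(n) ~ 3.4641
```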

We considered the combination of four filters in the simulation of the M-CLMS scheme. The following step sizes for the M-CLMS and D-CLMS schemes were used respectively: *μ*_{1}=0.005,*μ*_{2}=0.01,*μ*_{3}=0.02,*μ*_{4}=0.03, and *μ*_{1}=0.005 and *μ*_{2}=0.03. All experimental results are ensemble averages of 100 independent runs. The regularization parameter *δ* for the lattice filters was set to 1.0.

During the simulations, we did not allow the mixing coefficients, i.e., *v*^{m} in the R-CMLF and M-CLMS schemes or \(v_{\ell }^{m}\) in the D-CMLF and D-CLMS schemes, to increase above a threshold value of *ε* in order to avoid the other mixing coefficients getting too close to 0, which can stop the corresponding learning. Accordingly, it can be shown that *v*^{m}<*ε* in the R-CMLF and M-CLMS schemes or \(v_{\ell }^{m} < \epsilon \) in the D-CMLF and D-CLMS schemes are satisfied if \( \mid a^{m} \mid \leq 0.5 \log _{2}((\epsilon (M-1)/(1-\epsilon)))=\epsilon ^{'} \phantom {\dot {i}\!}\) or \( \mid a_{\ell }^{m} \mid \leq 0.5 \log _{2}((\epsilon (M-1)/(1-\epsilon)))=\epsilon ^{'} \) [49]. We used the following *ε* values: *ε*=1−0.09(*M*−1) under stationary operating conditions with white and colored noise input as well as nonstationary operating conditions with colored noise input, and *ε*=1−0.001(*M*−1) under nonstationary operating conditions with white noise input. We also implemented *μ*_{a}=100 in order to adapt the combination parameters in Eqs. (32), (33), (51), and (52).
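The threshold logic on the combination parameters can be sketched as follows. One assumption is flagged: the bound below uses the natural logarithm, under which \(\mid a^{m} \mid \leq \epsilon '\) makes the maximum mixing coefficient exactly *ε* for the exponential mixing of Eqs. (43) and (44); the log₂ form quoted above would apply if base-2 exponentials were used instead:

```python
import math

def clip_threshold(eps, M):
    """epsilon' such that clipping every |a^m| to at most epsilon' keeps
    every mixing coefficient v^m below eps (natural-log variant of the
    bound attributed to [49])."""
    return 0.5 * math.log(eps * (M - 1) / (1.0 - eps))

def mixing_coefficients(a):
    """Exponentially normalized mixing coefficients
    v^m = exp(a^m) / sum_k exp(a^k)."""
    beta = sum(math.exp(am) for am in a)
    return [math.exp(am) / beta for am in a]
```

For example, with *M*=4 and *ε*=1−0.09(*M*−1)=0.73, setting one parameter to +*ε*′ and the others to −*ε*′ yields a maximum mixing coefficient of exactly 0.73, so no coefficient can exceed the threshold.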

### 6.1 Stationary operating conditions

The effect of faulty elements on the MSD performance of component and combination filters under stationary operating conditions was investigated so as to demonstrate the intelligence gained, in the form of fault tolerance improvement, with combination processing. Accordingly, the number of coefficients of filter 2 was reduced from 12 to 6 while keeping the number of coefficients of the other filters at 12 in the proposed schemes. We considered combinations of 2, 4, and 8 filters using the D-CMLF and R-CMLF schemes, viz., the R-CTLF and D-CTLF, R-CFLF and D-CFLF, and R-CELF and D-CELF schemes. The exponential weighting factors were the following: *λ*_{1}=0.995 and *λ*_{2}=0.97 for the R-CTLF and D-CTLF schemes; *λ*_{1}=0.995, *λ*_{2}=0.99, *λ*_{3}=0.98, and *λ*_{4}=0.97 for the R-CFLF and D-CFLF schemes; and *λ*_{1}=0.9995, *λ*_{2}=0.999, *λ*_{3}=0.995, *λ*_{4}=0.99, *λ*_{5}=0.985, *λ*_{6}=0.98, *λ*_{7}=0.975, and *λ*_{8}=0.97 for the R-CELF and D-CELF schemes. The number of iterations was set to 4000.

#### 6.1.1 White input case

The mixing coefficients *v*^{m}(*n*) under stationary operating conditions fluctuated with small variance around a constant value, and therefore there was not much point in presenting the plots for all *n*; instead, we provide their time-averaged values, i.e., \(\bar {v}^{m}\) for the R-CFLF and M-CLMS schemes and \(\bar {v}^{m}_{\ell }\) for the D-CFLF and D-CLMS schemes, in Table 6. Note that the R-CFLF and M-CLMS (*M*=4) schemes have four mixing coefficients, whereas the D-CFLF and D-CLMS schemes use 48 (*M*×*N*=4×12) and 24 (*M*×*N*=2×12) coefficients respectively. It can be noticed in Table 6 that filter 2 of the R-CFLF and M-CLMS schemes, which is faulty, contributes more than the other three normally functioning filters and that the contribution of the second filter is larger in the M-CLMS scheme than in the R-CFLF scheme. On the other hand, it can also be seen in Table 6 that, for the D-CFLF scheme, the contributions of all four filters are close until the eighth stage, after which the proportion of contribution of the second filter lessens compared to the other three filters. Finally, in the case of the D-CLMS scheme, it can be deduced that the proportions of contribution of the two filters do not change much from stage to stage.

Comparison of time-averaged mixing coefficients under stationary conditions

R-CFLF | |||

\(\bar {v}^{1}=0.219662\) | \(\bar {v}^{2}=0.329039\) | \(\bar {v}^{3}=0.224948\) | \(\bar {v}^{4}=0.226642\) |

D-CFLF | |||

\(\bar {v}^{1}_{1}=0.250000\) | \(\bar {v}^{2}_{1}=0.250000\) | \(\bar {v}^{3}_{1}=0.250000\) | \(\bar {v}^{4}_{1}=0.250000\) |

\(\bar {v}^{1}_{2}=0.260383\) | \(\bar {v}^{2}_{2}=0.245273\) | \(\bar {v}^{3}_{2}=0.243011\) | \(\bar {v}^{4}_{2}=0.248331\) |

\(\bar {v}^{1}_{3}=0.260950\) | \(\bar {v}^{2}_{3}=0.238969\) | \(\bar {v}^{3}_{3}=0.240371\) | \(\bar {v}^{4}_{3}=0.256708\) |

\(\bar {v}^{1}_{4}=0.259955\) | \(\bar {v}^{2}_{4}=0.238969\) | \(\bar {v}^{3}_{4}=0.240596\) | \(\bar {v}^{4}_{4}=0.259483\) |

\(\bar {v}^{1}_{5}=0.259482\) | \(\bar {v}^{2}_{5}=0.235899\) | \(\bar {v}^{3}_{5}=0.240934\) | \(\bar {v}^{4}_{5}=0.260747\) |

\(\bar {v}^{1}_{6}=0.258075\) | \(\bar {v}^{2}_{6}=0.235125\) | \(\bar {v}^{3}_{6}=0.241462\) | \(\bar {v}^{4}_{6}=0.262336\) |

\(\bar {v}^{1}_{7}=0.256902\) | \(\bar {v}^{2}_{7}=0.234466\) | \(\bar {v}^{3}_{7}=0.242221\) | \(\bar {v}^{4}_{7}=0.263409\) |

\(\bar {v}^{1}_{8}=0.307697\) | \(\bar {v}^{2}_{8}=0.137990\) | \(\bar {v}^{3}_{8}=0.267800\) | \(\bar {v}^{4}_{8}=0.283511\) |

\(\bar {v}^{1}_{9}=0.300305\) | \(\bar {v}^{2}_{9}=0.146743\) | \(\bar {v}^{3}_{9}=0.267666\) | \(\bar {v}^{4}_{9}=0.282285\) |

\(\bar {v}^{1}_{10}=0.294155\) | \(\bar {v}^{2}_{10}=0.155295\) | \(\bar {v}^{3}_{10}=0.267576\) | \(\bar {v}^{4}_{10}=0.279972\) |

\(\bar {v}^{1}_{11}=0.287647\) | \(\bar {v}^{2}_{11}=0.167800\) | \(\bar {v}^{3}_{11}=0.266095\) | \(\bar {v}^{4}_{11}=0.275457\) |

\(\bar {v}^{1}_{12}=0.283007\) | \(\bar {v}^{2}_{12}=0.180514\) | \(\bar {v}^{3}_{12}=0.263361\) | \(\bar {v}^{4}_{12}=0.270116\) |

M-CLMS | |||

\(\bar {v}^{1}=0.194679\) | \(\bar {v}^{2}=0.430737\) | \(\bar {v}^{3}=0.185233\) | \(\bar {v}^{4}=0.186599\) |

D-CLMS | |||

\(\bar {v}^{1}_{1}=0.439073\) | \(\bar {v}^{2}_{1}=0.560926\) | ||

\(\bar {v}^{1}_{2}=0.439087\) | \(\bar {v}^{2}_{2}=0.560991\) | ||

\(\bar {v}^{1}_{3}=0.438953\) | \(\bar {v}^{2}_{3}=0.561046\) | ||

\(\bar {v}^{1}_{4}=0.438959\) | \(\bar {v}^{2}_{4}=0.560403\) | ||

\(\bar {v}^{1}_{5}=0.438893\) | \(\bar {v}^{2}_{5}=0.561116\) | ||

\(\bar {v}^{1}_{6}=0.439248\) | \(\bar {v}^{2}_{6}=0.560751\) | ||

\(\bar {v}^{1}_{7}=0.437971\) | \(\bar {v}^{2}_{7}=0.562028\) | ||

\(\bar {v}^{1}_{8}=0.436686\) | \(\bar {v}^{2}_{8}=0.563313\) | ||

\(\bar {v}^{1}_{9}=0.437216\) | \(\bar {v}^{2}_{9}=0.562783\) | ||

\(\bar {v}^{1}_{10}=0.436428\) | \(\bar {v}^{2}_{10}=0.563471\) | ||

\(\bar {v}^{1}_{11}=0.437432\) | \(\bar {v}^{2}_{11}=0.562567\) | ||

\(\bar {v}^{1}_{12}=0.436643\) | \(\bar {v}^{2}_{12}=0.563356\) |

#### 6.1.2 Colored input case

The parameter *η* of Eq. (54) controls the coloring of the input, so that *η*=0.0 and *η*=0.9 correspond to the white and colored input cases respectively. It can be seen in Fig. 12 that, when the input is colored, the performances of the R-CFLF and D-CFLF schemes degrade by approximately 2.1 and 2.0 dB respectively compared to the white input case.

### 6.2 Nonstationary operating conditions

The objective in this experiment is to display the advantage of combination processing in reacting to nonstationary operating conditions. The combinations of four lattice filters, i.e., the R-CFLF and D-CFLF schemes, were considered in this case using the exponential weighting factors: *λ*_{1}=0.995, *λ*_{2}=0.99, *λ*_{3}=0.98, and *λ*_{4}=0.97.

#### 6.2.1 White input case

It can be observed in Fig. 13 that the filters converge to their steady-state MSD levels in the slow statistical variation intervals (*ξ*(*n*)≈ 0.0345), whereas all of the filters merge to almost the same level of MSD (0 dB) abruptly in the fast statistical variation intervals (*ξ*(*n*)≈ 3.4657). It can also be observed in Fig. 13 that the component filters with smaller exponential weights converge to higher steady-state MSD levels, albeit faster, whereas the filters with larger exponential weights approach lower steady-state MSD levels, although slowly. Accordingly, the combination filter brings together the desired features of the component filters, that is, the fast convergence of the filters that have smaller exponential weights with the low steady-state MSD performance of the filters that have larger exponential weights.

The mixing coefficients of the R-CFLF scheme, *v*^{1}(*n*), *v*^{2}(*n*), *v*^{3}(*n*), *v*^{4}(*n*), are plotted as a function of the number of iterations in blue, together with the mixing coefficients related to the last stage of the D-CFLF scheme, \(v^{1}_{12}(n),v^{2}_{12}(n),v^{3}_{12}(n),v^{4}_{12}(n),\) in green. It can be seen in the figure that filter 1 in both schemes is the main contributor to the combination filter in the steady state, whereas the other three filters become conducive in the transient states, particularly filters 3 and 4 in the first transition and filters 2 and 3 in the second transition.

The transfer parameter (*α*) in this experiment was varied from *α*=1.0, which corresponds to the no-transfer case, down to *α*=0.2 in steps of 0.2. It can be seen in the figures that lower values of the transfer parameter cause slower convergence of the MSD curves to the steady state. In addition, the MSD curves for *α*≠1 converge more smoothly compared to the *α*=1.0 case.

We also investigated the effect of the momentum term (*ρ*), and the results are displayed in Figs. 18 and 19. The *ρ*=0 case corresponds to using no momentum term, whereas *ρ*=1.0 corresponds to adding the complete term. It can be seen in Fig. 18 that the minimum steady-state MSD level for the R-CFLF scheme is achieved when *ρ*=1.0; on the other hand, the minimum steady-state MSD level for the D-CFLF scheme is attained when *ρ*=0.8. The convergence of neither the R-CFLF nor the D-CFLF scheme was affected by the use of different values of the momentum term.

The best result for the R-CFLF scheme was obtained with no transfer (*α*=1.0) and the full momentum term (*ρ*=1.0), whereas the best result for the D-CFLF scheme, shown in Fig. 21, was made possible using *α*=1.0 and *ρ*=0.5.

#### 6.2.2 Colored input case

Finally, the colored input case under nonstationary operating conditions was considered when no transfer and momentum terms (*α*=1.0, *ρ*=0.0) are implemented. Clearly, when the input is colored, the performance difference between the R-CFLF and D-CFLF schemes diminishes, and their performances worsen by as much as 9 and 8 dB respectively in the steady state compared to the white input case.

## 7 Conclusions

Two schemes, R-CMLF and D-CMLF, for the sequential convex combination of lattice filters have been presented, in which the channels of modified SPMLSs represent the multiple filters. The main advantages of the proposed combination schemes are that they are order-recursive and conform to the high modularity, regularity, and reconfigurability of lattice filters. The MSD performances and complexities of the proposed schemes are close; however, the D-CMLF scheme is more modular and better complies with the structure of SPMLSs due to its use of a single in-stage combination processor, as opposed to the dual combination processors of the R-CMLF scheme. It has also been illustrated that the proposed schemes are better suited than a single filter to reacting to faulty components and statistical variations in the input signal. In addition, it has been determined that the transfer of coefficients and the use of a momentum term do not have much effect on the performance of the proposed schemes. Even though the M-CLMS and D-CLMS schemes are less complex than the proposed schemes, they perform worse, and in addition, they do not have the desired features such as modularity, order-recursiveness, and reconfigurability.

The application of the proposed methods to sparse channel equalization and identification problems in a cognitive radio framework is considered as an area to be explored. Another possibility for the future work can be to investigate the performance of the proposed methods in combining multiple adaptive lattice filters with different operating parameters and learning algorithms in worst-case scenarios where no assumptions are made about disturbances in the channel.

## Notes

### Acknowledgements

The author is grateful to the Editor and anonymous reviewers for their useful comments.

### Funding

All the costs are covered by the author.

### Availability of data and materials

All the data in Section 6 were generated by using computer simulations, and the algorithms implemented in the simulations are provided as tables in the paper.

### Author’s contributions

All the work related to this paper was carried out by the author. The author read and approved the final manuscript.

### Ethics approval and consent to participate

The research in this paper does not involve human subjects, human material, or human data; therefore, there was no need for the approval of an ethics committee nor the consent of human participants.

### Consent for publication

Figures appearing in this paper are not related to individual participants, and therefore, there was no need for consent for publication.

### Competing interests

The author declares that he has no competing interests.

### Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## References

- 1.J Mitola, GQ Maguire, Cognitive radio: making software radios more personal. IEEE Personal Commun.
**6**(4), 13–18 (1999). https://doi.org/10.1109/98.788210. - 2.S Haykin, Cognitive radio: brain-empowered wireless communications. IEEE J. Sel. Areas Commun.
**23**(2), 201–220 (2005). https://doi.org/10.1109/JSAC.2004.839380. - 3.FK Jondral, Cognitive radio: a communications engineering view. IEEE Wirel.Commun.
**14**(4), 28–33 (2007). https://doi.org/10.1109/MWC.2007.4300980. - 4.G Scutari, DP Palomar, S Barbaross, Cognitive MIMO radio. IEEE Signal Process. Mag.
**25**(6) (2008). https://doi.org/10.1109/MSP.2008.929297. - 5.B Wang, RJ Ray Liu, Advances in cognitive radio networks: a survey. IEEE J. Sel. Topics Signal Process.
**5**(1), 5–23 (2011). https://doi.org/10.1109/JSTSP.2010.2093210. - 6.E Axell, G Leus, EG Larsson, HV Poor, Spectrum sensing for cognitive radio: state-of-the-art and recent advances. IEEE Signal Process. Mag.
**29**(3), 101–116 (May 2012). https://doi.org/10.1109/MSP.2012.2183771. - 7.M Rais-Zadeh, JT Fox, DD Wentzloff, YB Gianchandani, Reconfigurable radios: possible solution to reduce entry costs in wireless phones. Proc. IEEE.
**103**(3), 438–451 (2015). https://doi.org/10.1109/JPROC.2015.2396903. - 8.T Weingard, DC Sicker, D Grunwald, A statistical method for reconfiguration of cognitive radios. IEEE Wirel.Commun.
**14**(4), 34–40 (Aug. 2007). https://doi.org/10.1109/MWC.2007.4300981. - 9.M Milliger, et al.,
*Software defined radio: architectures, systems and functions*(Wiley, New York, 2003).Google Scholar - 10.RG Machado, AM Wyglinski, Software-defined radio: bridging the analog-digital divide. Proc. IEEE.
**103**(3), 409–423 (2015). https://doi.org/10.1109/JPROC.2015.2399173. - 11.D Kreutz, FMV Ramos, PE Verissimo, CE Rothenberg, S Azodolmolky, S Uhlig, Software-defined networking: a comprehensive survey. Proc. IEEE.
**103**(1), 14–76 (2015). https://doi.org/10.1109/JPROC.2014.2371999. - 12.O Anjum, T Ahonen, F Garzia, J Nurmi, C Brunelli, H Berg, State of the art baseband DSP platforms for software defined radio: a survey. EURASIP J. Wirel. Commun. Netw.
**2011:**, 5 (2011). https://doi.org/10.1186/1687-1499-2011-5. - 13.K He, L Crockett, R Stewart, Dynamic reconfiguration technologies based on FPGA in software defined radio system. J. Sign. Process. Syst.
**69**(1), 75–85 (2012). https://doi.org/10.1007/s11265-011-0646-2. - 14.J Im, M Cho, Y Jung, Y Jung, J Kim, A low-power and low-complexity baseband processor for MIMO-OFDM WLAN systems. J. Sign. Process. Syst.
**68**(1), 19–30 (2012). https://doi.org/10.1007/s11265-010-0570-x. - 15.AP Vinod, EM-K Lai, A Omondi, Special issue on signal processing for software defined radio handsets. J. Sign. Process. Syst.
**62**(2), 113–115 (2011). https://doi.org/10.1007/s11265-009-0428-2. - 16.H Celebi, H Arslan, Enabling location and environment awareness in cognitive radios. Computer Commun.
**31:**, 1114–1125 (2008). https://doi.org/10.1016/j.comcom.2008.01.006. - 17.PJ Werbos, Intelligence in the brain: a theory of how it works and how to build it. Neural Networks.
**22**(3), 200–212 (2009). https://doi.org/10.1016/j.neunet.2009.03.012. - 18.AH Sayeed, A Tarighat, N Khajehnouri, Network-Based Wireless Location: challenges faced in developing techniques for accurate wireless location information. IEEE Signal Process.Mag.
**22**(4), 24–40 (2005). https://doi.org/10.1109/MSP.2005.1458275. - 19.MT Ozden, Adaptive reconfigurable V-BLAST type channel equalizer for cognitive MIMO-OFDM radios. EURASIP J. Adv. Signal Processing.
**2015:8:**(2015). https://doi.org/10.1186/s13634-015-0199-9. - 20.MT Ozden, Adaptive multichannel sequential lattice prediction filtering method for ARMA spectrum estimation in subbands. EURASIP J. Adv. Signal Process.
**2013:9:**(2013). https://doi.org/10.1186/1687-6180-2013-9. - 21.MT Ozden, Adaptive multichannel sequential lattice prediction filtering method for range estimation in cognitive radios. 2014 IEEE/ION Position, Locat. Navig. Symp. (PLANS), 426–433 (2014). https://doi.org/10.1109/PLANS.2014.6851400.
- 22.MT Ozden,
*Joint spectrum and AOA estimation for cognitive radios using adaptive multichannel sequential lattice prediction filtering method. 2nd IET International Conference on Intelligent Signal Processing 2015 (ISP)*, (London, 2015). https://doi.org/10.1049/cp.2015.1777. - 23.JG Andrews, S Buzzi, W Choi, SV Hanly, A Lozano, ACK Soong, JC Zhang, What will 5G be? IEEE J. Sel. Areas Commun.
**32**(6), 1065–1082 (2014). https://doi.org/10.1109/JSAC.2014.2328098. - 24.F Boccardi, RW Heath, A Lozano, TL Marzetta, P Popovski, Five disruptive technology directions for 5G. IEEE Comm. Mag.
**52**(2), 75–80 (2014). https://doi.org/10.1109/MCOM.2014.6736746. - 25.S Sasipriya, R Vigneshram,
*An overview of cognitive radio in 5G wireless communications. IEEE International Conference on Computational Intelligence and Computing Research (ICCIC)*, (Chennai India, 2016). https://doi.org/10.1109/ICCIC.2016.7919725. - 26.CX Wang, F Haider, X Gao, XH You, Y Yang, D Yang, D Yuan, HM Aggoune, H Haas, S Fletcher, E Hepsaydir, Cellular architecture and key technologies for 5G wireless communication networks. IEEE Comm. Mag.
**52**(2), 122–130 (2014). https://doi.org/10.1109/MCOM.2014.6736752. - 27.L Li, Y Xia, B Jelfs, J Cao, DP Mandic, Modelling of brain consciousness based on collaborative adaptive filters. Neurocomputing.
**76**(1), 36–43 (2012). https://doi.org/10.1016/j.neucom.2011.05.038. - 28.J Qiu, Y Wei, HR Karimi, H Gao, Reliable control of discrete-time piecewise-affine time-delay systems via output feedback. IEEE Trans.Rel.
**67**(1), 79–91 (2018). https://doi.org/10.1109/TR.2017.2749242. - 29.J Qiu, Y Wei, L Wu, A novel approach to reliable control of piecewise affine systems with actuator faults. IEEE Trans. Circuits Syst. II, Exp. Briefs.
**64**(8), 957–961 (2017). https://doi.org/10.1109/TCSII.2016.2629663. - 30.J Arenas-Garcia, LA Azpicueta-Ruiz, MTM Silva, VH Nascimento, AH Sayed, Combinations of adaptive filters: performance and convergence properties. IEEE Signal Process. Mag.
**33**(1), 120–140 (2016). https://doi.org/10.1109/MSP.2015.2481746. - 31.MTM Silva, VH Nascimento, Improving the tracking capability of adaptive filters via convex combination. IEEE Trans. Signal Process.
**56**(7), 3137–3149 (2008). https://doi.org/10.1109/TSP.2008.919105. - 32.J Arenas-Garcia, AR Figueiras-Vidal, Adaptive combination of proportionate filters for sparse echo cancellation. IEEE Trans. Audio, Speech, Language Process.
**17**(6), 1087–1098 (2009). https://doi.org/10.1109/TASL.2009.2019925. - 33.J Ni, F Li, Adaptive combination of subband adaptive filters for acoustic echo cancellation. IEEE Trans. Consum. Electron.
**56**(3), 1549–1555 (2010). https://doi.org/10.1109/TCE.2010.5606296. - 34.FS Chaves, JMT Romano, M Abbas-Turki, H Abou-Kandil, A convex combination of
*H*_{2}and*H*_{∞}filters for space-time adaptive equalization. 2011 IEEE Stat. Signal. Process Workshop (SSP), 717–720 (2011). Nice/France. https://doi.org/10.1109/SSP.2011.5967803. - 35.B Jelfs, S Javidi, P Vayanos, D Mandic, Characterisation of signal modality: exploiting signal nonlinearity in machine learning and signal processing. J. Sign. Process. Syst.
**61:105:**(2010). https://doi.org/10.1007/s11265-009-0358-z. - 36.LA Azpicueta-Ruiz, M Zeller, AR Figueiras-Vidal, J Arenas-Garcia, W Kellermann, Adaptive combination of Volterra kernels and its application to nonlinear acoustic echo cancellation. IEEE Trans. Audio, Speech, Language Process.
**19**(1), 97–110 (2011). https://doi.org/10.1109/TASL.2010.2045185. - 37.NV George, A Gonzalez, Convex combination of nonlinear adaptive filters for active noise control. Appl. Acoust.
**76:**, 157–161 (2014). https://doi.org/10.1016/j.apacoust.2013.08.005. - 38.LFO Chamon, CG Lopes,
*Combination of adaptive filters for relative navigation. 2011 19th European Signal Processing Conference*, (Barcelona/Spain, 2011). https://ieeexplore.ieee.org/document/7074291/. - 39.HF Ferro, LFO Chamon, CG Lopes, FIR-IIR filters hybrid combination. Electron. Lett.
**50**(7), 501–503 (2014). https://doi.org/10.1049/el.2014.0248. - 40.G Gui, L Xu,
*Affine combination of two adaptive sparse filters for estimating large scale MIMO channels. 2014 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA)*, (Siem Reap/Cambodia, 2014). https://doi.org/10.1109/APSIPA.2014.7041545. - 41.W Gao, Y Yan, L Zhang, Q Zhang,
*Combinations of multiple kernel adaptive filters. 2017 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC)*, (Xiamen/China, 2017). https://doi.org/10.1109/ICSPCC.2017.8242551. - 42.R Claser, VH Nascimento, Low-complexity approximation to the Kalman filter using convex combinations of adaptive filters from different families. 2017 25th European Signal Processing Conference (EUSIPCO), 2630–2633 (2017). https://doi.org/10.23919/EUSIPCO.2017.8081687.
- 43.F Huang, J Zhang, Y Pang, A novel combination scheme of proportionate. Sig. Process.
**143:**, 222–231 (2018). https://doi.org/10.1016/j.sigpro.2017.09.013. - 44.VH Nascimento, MTM Silva, R Candido, J Arenas-Garcia,
*A transient analysis for the convex combination of adaptive filters. IEEE/SP 15th Workshop on Statistical Signal Processing*, (Cardiff/UK, 2009). https://doi.org/10.1109/SSP.2009.5278642. - 45.NJ Bershad, JCM Bermudez, JY Tourneret, An affine combination of two LMS adaptive filters-transient mean-square analysis. IEEE Trans. Signal Process.
**56**(5), 1853–1864 (2008). https://doi.org/10.1109/TSP.2007.911486. - 46.AT Erdogan, SS Kozat, AC Singer, Comparison of convex combination and affine combination of adaptive filters. IEEE Int. Conf. Acoustics, Speech and Signal Process.(ICASSP)., 3089–3092 (2009). https://doi.org/10.1109/ICASSP.2009.4960277.
- 47.R Candido, MTM Silva, VH Nascimento, Transient and steady-state analysis of the affine combination of two adaptive filters. IEEE Trans. Signal Process.
**58**(8), 4064–4078 (2010). https://doi.org/10.1109/TSP.2010.2048210. - 48.J Arenas-Garcia, M Martinez-Ramon, A Navia-Vazquez, AR Figueiras-Vidal, Plant identification via adaptive combination of transversal filters. Sig. Process.
**86**(9), 2430–2438 (2006). https://doi.org/10.1016/j.sigpro.2005.11.008. - 49.J Arenas-Garcia, V Gomez-Verdejo, AR Figueiras-Vidal, New algorithms for improved adaptive convex combination of LMS transversal filters. IEEE Trans. Instrum. Meas.
**54**(6), 2239–2249 (2005). https://doi.org/10.1109/TIM.2005.858823. - 50.F Ling, JG Proakis, A generalized multichannel least squares lattice algorithm based on sequential processing stages. IEEE Trans. Acoust., Speech, Signal Process.
**32**(2), 381–389 (1984). https://doi.org/10.1109/TASSP.1984.1164325. - 51.J Ma, GY Li, BH Juang, Signal processing in cognitive radio. Proc. IEEE.
**97**(5), 805–823 (2009). https://doi.org/10.1109/JPROC.2009.2015707. - 52.AF Molisch, LJ Greenstein, M Shafi, Propogation issues for cognitive radio. Proc.IEEE.
**97**(5), 787–804 (2009). https://doi.org/10.1109/JPROC.2009.2015704. - 53.AF Molisch,
*Wireless communications, 2/E*(John Wiley and Sons, Chichester, 2011).Google Scholar - 54.S Haykin,
*Adaptive filter theory, 4/E*(Prentice-Hall, Upper Saddle River, NJ, 2002).Google Scholar - 55.O Macchi, Optimization of adaptive identification for time-varying filters. IEEE Trans. Autom. Control.
**31**(3), 283–287 (1986). https://doi.org/10.1109/TAC.1986.1104239. - 56.GV Moustakides, Study of the transient phase of the forgetting factor RLS. IEEE Trans. Signal Process.
**45**(10), 2468–2358 (1997). https://doi.org/10.1109/78.640712. - 57.V Lomi, D Tonetto, L Vangelista, False alarm probability-based estimation of multipath channel length. IEEE Trans. Commun.
**51**(9), 1432–1434 (2003). https://doi.org/10.1109/TCOMM.2003.816974. - 58.F Ling, D Manolakis, JG Proakis, Numerically robust least-squares lattice-ladder algorithms with direct updating of the reflection coefficients. IEEE Trans. Acoust., Speech, Signal Process.
**34**(4), 837–845 (1986). https://doi.org/10.1109/TASSP.1986.1164878. - 59.F Ling, D Manolakis, JG Proakis, A recursive modified Gram-Schmidt algorithm for least-squares estimation. IEEE Trans. Acoust., Speech, Signal Process.
**34**(4), 829–835 (1986). https://doi.org/10.1109/TASSP.1986.1164877.

## Copyright information

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.