1 Introduction

Spectral factorization of polynomials is a classical problem with important applications in control [2], optimal filtering [13, 14], and the design of invertible filters, among others. In the univariate case, the problem of spectral factorization can be summarized succinctly as follows. Let \(V(z^{-1})=v_0+v_1 z^{-1}+\dots +v_N z^{-N}\) denote a polynomial (with real coefficients) of the variable \(z^{-1}\). Denote by V(z) the same polynomial in the variable z, rather than \(z^{-1}\), \(V(z)=v_0+v_1 z +\dots +v_N z^N\). Note that \(z_0\) is a root of \(V(z^{-1})\) if and only if \(1/z_0\) is a root of V(z). A polynomial \(A(z^{-1})\) is said to be a spectral factorization of the Laurent polynomial \(V(z^{-1})V(z)\) if the following equality holds

$$\begin{aligned} A(z^{-1})A(z)=V(z^{-1})V(z)\;. \end{aligned}$$
(1)

Technically, for a polynomial \(V(z^{-1})\) of degree N there are up to \(2^N\) possible spectral factorizations, because replacing any root of \(A(z^{-1})\) with its reciprocal (a root of A(z)) also yields a valid spectral factorization. This ambiguity is typically removed by requiring a minimum-phase factorization, i.e., a spectral factorization with all roots inside the unit circle in the \({\mathcal {Z}}\)-plane.

Minimum-phase spectral factorization of small polynomials is easy nowadays, because it can be accomplished simply by numerically computing the roots of \(V(z^{-1})\) and “reflecting” those outside the unit circle into its interior, \(z_0 \leftarrow 1/z_0\). For high-degree polynomials, spectral factorization remains a challenging problem because of the high computational complexity and numerical sensitivity of solving for the roots directly. Multiple methods for solving this problem, as well as the more general problem of factoring multivariate rational spectral densities, have been developed; the reader is referred to, e.g., [1, 3, 4, 6, 11, 12] for more details and examples.
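For illustration, the root-reflection procedure can be sketched in a few lines of Python/NumPy (our own code, not taken from any of the cited works; the function name minimum_phase_factor is ours). Note that each reflected root must also contribute a factor of \(|z_0|\) to the gain so that equality (1) is preserved.

```python
import numpy as np

def minimum_phase_factor(v):
    """Minimum-phase spectral factor of V(z^{-1})V(z) via root reflection.

    v -- coefficients [v_0, ..., v_N] of V(z^{-1}), with v_0 != 0.
    Returns coefficients [a_0, ..., a_N] of A(z^{-1}) whose zeros all lie
    inside (or on) the unit circle and which satisfies (1).
    """
    v = np.asarray(v, dtype=float)
    roots = np.roots(v)                    # zeros of V in the z-plane
    outside = np.abs(roots) > 1.0
    # absorb |z_0| of each reflected zero into the gain so that the
    # magnitude response, and hence V(z^{-1})V(z), is unchanged
    gain = v[0] * np.prod(np.abs(roots[outside]))
    roots[outside] = 1.0 / np.conj(roots[outside])
    return np.real(gain * np.poly(roots))
```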

Unfortunately, the majority of existing approaches are not suitable for embedded applications that require spectral factorization to be performed online in real time, such as iterative learning active noise control [7]. A noteworthy exception to this rule is a spectral factorization scheme inspired by control theory proposed in [10]. The method is remarkably simple and can be implemented even on very modest microcontrollers. The spectral factorization is obtained through a process of iterative refinement driven by negative feedback.

Even though it is well known in control theory that feedback alone does not guarantee stability, no stability analysis of the algorithm was provided. In fact, our experience with the method showed that it often diverges. In this paper, we show analytically that the method does not, in general, guarantee convergence, irrespective of the value of the step-size parameter used.

2 Stability Analysis

2.1 Preliminaries

In this paper, we consider only the univariate case, as doing so simplifies the discussion. To demonstrate the instability in the multivariate case [9] on the basis of the univariate case, it suffices to consider multivariate polynomials whose coefficients are multiples of the identity matrix.

Let \(X(z^{-1})\) be a polynomial or a finite Laurent series, and denote by \(\left[ X\right] _{-}\) the column vector containing only those coefficients of \(X(z^{-1})\) that multiply nonnegative powers of \(z^{-1}\), sorted in ascending order of the exponent (or, equivalently, nonpositive powers of z sorted in descending order). For example, if \(X(z^{-1})=-3z^2 + 2z + 1 + 2z^{-1} -3z^{-2}\), then \(\left[ X(z^{-1})\right] _{-}=[1\ \ 2\ -3]^T\). Let \(L(z^{-1})=V(z^{-1})V(z)\) be the Laurent polynomial whose spectral factorization is sought. Using this square-bracket notation, the method proposed in [10] can be expressed as

$$\begin{aligned} \left[ W_{k+1}(z^{-1})\right] _{-}=\left[ W_{k}(z^{-1})\right] _{-} + \eta \varvec{\epsilon }_k\;, \end{aligned}$$
(2)

where \(W_{k}(z^{-1})\) is the iteratively refined estimate of the spectral factorization at iteration k,

$$\begin{aligned} \varvec{\epsilon }_k = \left[ L(z^{-1}) - W_{k}(z^{-1})W_{k}(z)\right] _{-} \end{aligned}$$
(3)

is the residual vector, and \(\eta >0\) is a small step-size parameter.
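A direct implementation of (2) and (3) is straightforward when \(W_k\) is stored as the coefficient vector \(\left[ W_k(z^{-1})\right] _{-}\): the nonnegative-power part of \(W_{k}(z^{-1})W_{k}(z)\) is simply the autocorrelation of that vector at lags \(0,\dots ,N\). The following Python/NumPy sketch (our own code; function and variable names are not from [10]) illustrates the iteration.

```python
import numpy as np

def nonneg_part_autocorr(x):
    """[X(z^{-1})X(z)]_- for X with coefficient vector x = [x_0, ..., x_N]."""
    n = len(x)
    return np.array([np.dot(x[k:], x[:n - k]) for k in range(n)])

def feedback_factorization(v, w0, eta, n_iter):
    """Iteration (2)-(3): w_{k+1} = w_k + eta * ([L]_- - [W_k(z^{-1})W_k(z)]_-)."""
    l_half = nonneg_part_autocorr(np.asarray(v, dtype=float))  # [L(z^{-1})]_-
    w = np.asarray(w0, dtype=float).copy()
    for _ in range(n_iter):
        eps = l_half - nonneg_part_autocorr(w)   # residual vector (3)
        w = w + eta * eps                        # update (2)
    return w
```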

2.2 Stability Analysis

Let \(A(z^{-1})\) be a spectral factorization of \(L(z^{-1})\). It is straightforward to see that \(A(z^{-1})\) is a stationary point of (2): when \(W_{k}(z^{-1})=A(z^{-1})\), one obtains \(\varvec{\epsilon }_k=\varvec{0}\) and hence \(W_{k+1}(z^{-1})=W_{k}(z^{-1})\). Overall, the algorithm has up to \(2^{N+1}\) stationary points.

We will analyze the behavior of (2) in a neighborhood of \(A(z^{-1})\), i.e., for \(W_k(z^{-1})\) close to \(A(z^{-1})\). Set

$$\begin{aligned} \varDelta W_k(z^{-1})=W_k(z^{-1})-A(z^{-1})\;. \end{aligned}$$

Then, it is straightforward to obtain that

$$\begin{aligned} \left[ \varDelta W_{k+1}(z^{-1})\right] _{-}=\left[ \varDelta W_{k}(z^{-1})\right] _{-} + \eta \varvec{\epsilon }_k\; \end{aligned}$$
(4)

and

$$\begin{aligned} \varvec{\epsilon }_k = -\left[ A(z^{-1})\varDelta W_{k}(z)+\varDelta W_{k}(z^{-1})A(z)+\varDelta W_{k}(z^{-1})\varDelta W_{k}(z)\right] _{-}\;. \end{aligned}$$
(5)

When expressed in this form, the stationary point of (4) corresponding to \(A(z^{-1})\) lies at the origin.
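Indeed, (5) can be verified directly: substituting \(W_k(z^{-1})=A(z^{-1})+\varDelta W_k(z^{-1})\) into (3) and using \(L(z^{-1})=A(z^{-1})A(z)\) gives

$$\begin{aligned} L(z^{-1})-W_k(z^{-1})W_k(z)&=A(z^{-1})A(z)-\bigl (A(z^{-1})+\varDelta W_k(z^{-1})\bigr )\bigl (A(z)+\varDelta W_k(z)\bigr )\\&=-A(z^{-1})\varDelta W_k(z)-\varDelta W_k(z^{-1})A(z)-\varDelta W_k(z^{-1})\varDelta W_k(z)\;, \end{aligned}$$

and (5) follows by applying \(\left[ \,\cdot \,\right] _{-}\) to both sides.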

One can analyze the behavior of discrete-time algorithms of the form (4) using the associated ordinary differential equation (ODE) method [5, 8]. The ODE associated with (4) has the form

$$\begin{aligned} \frac{d \varDelta W_s(z^{-1})}{ds}=\varvec{\epsilon }_s\;, \end{aligned}$$
(6)

where s is a continuous time variable corresponding to the rescaled original discrete time, \(s \sim \eta k\). For small values of the step-size parameter \(\eta \), the trajectories of (4) converge to those obtained by solving (6); see [5, 8] for more details.

For \(\varDelta W_s(z^{-1})\) sufficiently close to zero, one may neglect second-order terms in (5), which leads to

$$\begin{aligned} \varvec{\epsilon }_s \simeq -\left[ A(z^{-1})\varDelta W_{s}(z)+\varDelta W_{s}(z^{-1})A(z)\right] _{-}\;. \end{aligned}$$
(7)

Furthermore, it is straightforward to check that \(\varvec{\epsilon }_s\) can be expressed in the form

$$\begin{aligned} \varvec{\epsilon }_s = \varvec{M}\left[ \varDelta W_{s}(z^{-1})\right] _{-}\;, \end{aligned}$$
(8)

where

$$\begin{aligned} \varvec{M}=-(\varvec{M}_1+\varvec{M}_2) \end{aligned}$$
(9)

and \(\varvec{M}_1\), \(\varvec{M}_2\) are, respectively, the Toeplitz and the Hankel matrices built from the coefficients of \(A(z^{-1})=a_0+a_1z^{-1}+\dots +a_Nz^{-N}\), i.e., \((\varvec{M}_1)_{ij}=a_{j-i}\) for \(j\ge i\) (and zero otherwise) and \((\varvec{M}_2)_{ij}=a_{i+j}\) (with \(a_n=0\) for \(n>N\)), where the indices i, j run from 0 to N,

$$\begin{aligned} \varvec{M}_1=\begin{pmatrix} a_0 & a_1 & \cdots & a_{N-1} & a_N \\ 0 & a_0 & \cdots & a_{N-2} & a_{N-1} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & a_0 & a_1 \\ 0 & 0 & \cdots & 0 & a_0 \end{pmatrix},\qquad \varvec{M}_2=\begin{pmatrix} a_0 & a_1 & \cdots & a_{N-1} & a_N \\ a_1 & a_2 & \cdots & a_N & 0 \\ \vdots & \vdots & & \vdots & \vdots \\ a_{N-1} & a_N & \cdots & 0 & 0 \\ a_N & 0 & \cdots & 0 & 0 \end{pmatrix}\;. \end{aligned}$$
(10)

The associated ODE is locally stable if and only if all eigenvalues of \(\varvec{M}\) have strictly negative real parts. In this case, (2) is locally stable for a sufficiently small step size \(\eta \). On the other hand, if \(\varvec{M}\) has at least one eigenvalue with a nonnegative real part, then (2) is locally unstable around \(A(z^{-1})\) no matter how small \(\eta \) is [5, 8].
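The eigenvalue test is easy to automate. The sketch below (our own Python/NumPy code; the function names stability_matrix and locally_stable are ours) builds \(\varvec{M}\) entrywise as in (10) and checks the sign of the real parts of its eigenvalues.

```python
import numpy as np

def stability_matrix(a):
    """M = -(M_1 + M_2) of (9)-(10) for A(z^{-1}) with coefficients a = [a_0, ..., a_N]."""
    a = np.asarray(a, dtype=float)
    n = len(a)
    M1 = np.zeros((n, n))   # Toeplitz part: (i, j) -> a_{j-i} for j >= i
    M2 = np.zeros((n, n))   # Hankel part:   (i, j) -> a_{i+j} for i + j <= N
    for i in range(n):
        for j in range(n):
            if j >= i:
                M1[i, j] = a[j - i]
            if i + j < n:
                M2[i, j] = a[i + j]
    return -(M1 + M2)

def locally_stable(a):
    """True iff all eigenvalues of M have strictly negative real parts."""
    return bool(np.all(np.linalg.eigvals(stability_matrix(a)).real < 0.0))
```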

Note that, since there may exist more than one spectral factorization of a given \(L(z^{-1})\), to better understand the algorithm’s global behavior one should carry out the above stability analysis for all polynomials \(A(z^{-1})\) that satisfy \(L(z^{-1})=A(z^{-1})A(z)\). If none of these stationary points is found to be locally stable, then it can be concluded that the algorithm cannot converge to a spectral factorization of \(L(z^{-1})\). If, on the other hand, one or more stationary points are found to be locally stable, convergence is possible, but can only be guaranteed locally.

2.3 Computer Simulations

Fig. 1 Error trajectory for the example in the text

Analyzing the eigenvalues of (9), one can see that the algorithm is guaranteed to be locally stable for positive scalars (in which case it performs a recursive computation of a square root). In the case of a first-order polynomial \(A(z^{-1})\), it is straightforward to show that the algorithm is locally stable if and only if \(A(z^{-1})\) is a minimum-phase polynomial.
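As a quick check of the first-order claim (using the matrices in (10) and assuming the usual normalization \(a_0>0\)), for \(A(z^{-1})=a_0+a_1z^{-1}\) one obtains

$$\begin{aligned} \varvec{M}=-\left( \begin{pmatrix} a_0 & a_1\\ 0 & a_0 \end{pmatrix} + \begin{pmatrix} a_0 & a_1\\ a_1 & 0 \end{pmatrix} \right) =-\begin{pmatrix} 2a_0 & 2a_1\\ a_1 & a_0 \end{pmatrix}\;, \end{aligned}$$

whose eigenvalues both have strictly negative real parts if and only if \(3a_0>0\) and \(2(a_0^2-a_1^2)>0\), i.e., if and only if \(a_0>0\) and \(|a_1|<a_0\), which is precisely the condition that the zero \(z_0=-a_1/a_0\) lies strictly inside the unit circle.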

Establishing analytic stability conditions for polynomials of degree higher than one is difficult, if at all possible, but it is relatively easy to find counterexamples showing that the algorithm can be unstable even for minimum-phase polynomials. Consider as an example the polynomial \(V(z^{-1})=1+1.7826z^{-1}+0.7944z^{-2}\) and set \(L(z^{-1})=V(z^{-1})V(z)\). For the stationary point \(A(z^{-1})=V(z^{-1})\), the matrix \(\varvec{M}\) has eigenvalues \(-4.8435,\ 0.0246 \pm 0.0546j\), i.e., our analysis predicts that the method will be locally unstable. We set the initial conditions to \(W_0(z^{-1})=1.0007+1.7842z^{-1}+0.7937z^{-2}\), i.e., very close to \(A(z^{-1})\), and let algorithm (2) run for 100,000 iterations using a very small step size \(\eta =2.5 \cdot 10^{-5}\). The trajectory of the error, defined as \(\Vert \left[ \varDelta W_{k}(z^{-1})\right] _{-} \Vert ^2\) and shown in Fig. 1, is consistent with the analytic predictions. Moreover, all remaining spectral factorizations of \(L(z^{-1})\) are unstable stationary points, which means that the algorithm simply cannot converge to a factorization of \(L(z^{-1})\).
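For reference, the eigenvalues quoted above can be reproduced with the stability_matrix sketch given in Sect. 2.2 (the snippet below assumes that function is already defined):

```python
import numpy as np

# assumes stability_matrix() from the sketch in Sect. 2.2
M = stability_matrix([1.0, 1.7826, 0.7944])
print(np.linalg.eigvals(M))   # approx. -4.8435, 0.0246 +/- 0.0546j
```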

For Example 2 from [10], where \(A(z^{-1})=1+0.4573z^{-2}+0.4077z^{-4}+0.3101z^{-6}+0.11831z^{-8}-0.2587z^{-10}\) (note that [10] contains several typographic errors in the signs of the polynomial’s coefficients) and the algorithm converges, our method confirms that the stationary point is indeed stable.

We validated our analysis extensively using Monte Carlo simulations. In each experiment, a minimum-phase polynomial \(A(z^{-1})\) with degree between 4 and 20 was generated randomly. The algorithm (2) was initialized by adding small random perturbations to the coefficients of \(A(z^{-1})\) (Gaussian distributed, zero mean, standard deviation equal to \(10^{-3}\)) and run for 10,000–100,000 iterations. The behavior of the resulting error trajectory was compared with the analytic prediction of the algorithm’s stability obtained from (9). In several thousand such experiments, the results of the ODE analysis always agreed with the simulations.
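Random minimum-phase test polynomials of a prescribed degree can be generated, for instance, by drawing zeros inside the unit circle. The sketch below (our own construction; the radius bounds and the random seed are arbitrary choices, not taken from the experiments described above) shows one way to produce a test polynomial and the perturbed initialization.

```python
import numpy as np

def random_minimum_phase(degree, rng):
    """Random real polynomial A(z^{-1}), a_0 = 1, with all zeros inside the unit circle."""
    n_pairs, n_real = degree // 2, degree % 2
    radii = rng.uniform(0.1, 0.95, n_pairs)          # keep zeros away from |z| = 1
    angles = rng.uniform(0.0, np.pi, n_pairs)
    pairs = radii * np.exp(1j * angles)
    roots = np.concatenate([pairs, np.conj(pairs),
                            rng.uniform(-0.95, 0.95, n_real)])
    return np.real(np.poly(roots))                   # coefficients [a_0, ..., a_N]

rng = np.random.default_rng(seed=0)
a = random_minimum_phase(degree=8, rng=rng)
w0 = a + rng.normal(0.0, 1e-3, size=a.size)          # perturbed initialization
# Algorithm (2) is then run from w0 and its behavior compared with the
# eigenvalue test on M from (9)-(10) (see the earlier sketches).
```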

3 Global Behavior

Oddly enough, the global behavior of the feedback-based method is somewhat more benign. Very often the algorithm initially approaches a close approximation of a spectral factorization. The instability mechanism shown in Sect. 2 dominates at later stages, and the process eventually diverges. Therefore, monitoring \(\Vert \varvec{\epsilon }_k \Vert ^2\) and stopping the algorithm when it starts to grow may allow one to obtain an approximate spectral factorization with accuracy sufficient for some applications. Nevertheless, there are no guarantees that such behavior will take place or that such a heuristic modification will work reliably.
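A minimal sketch of such a safeguarded variant is given below (our own code; the patience parameter and the choice to return the best iterate seen so far are our assumptions, not part of [10]).

```python
import numpy as np

def nonneg_part_autocorr(x):
    """[X(z^{-1})X(z)]_- for a coefficient vector x (as in Sect. 2.1)."""
    n = len(x)
    return np.array([np.dot(x[k:], x[:n - k]) for k in range(n)])

def factor_with_early_stopping(v, w0, eta, n_iter, patience=50):
    """Run (2)-(3), keep the iterate with the smallest residual norm and
    stop once ||eps_k||^2 has not improved for `patience` iterations."""
    l_half = nonneg_part_autocorr(np.asarray(v, dtype=float))
    w = np.asarray(w0, dtype=float).copy()
    best_w, best_err, stalled = w.copy(), np.inf, 0
    for _ in range(n_iter):
        eps = l_half - nonneg_part_autocorr(w)
        err = float(np.dot(eps, eps))
        if err < best_err:
            best_w, best_err, stalled = w.copy(), err, 0
        else:
            stalled += 1
            if stalled >= patience:
                break                      # residual has started to grow
        w = w + eta * eps
    return best_w, best_err
```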

4 Conclusions

The control-theoretic spectral factorization method from [10] was analyzed using the method of associated ordinary differential equations, and it was shown that, in general, the method offers no stability guarantees.