Encyclopedia of Systems and Control

Living Edition
| Editors: John Baillieul, Tariq Samad

Spectral Factorization

  • Michael Šebek
Living reference work entry


DOI: https://doi.org/10.1007/978-1-4471-5102-9_240-2

Abstract

For more than half a century, spectral factorization has been encountered in various fields of science and engineering. It is a useful tool in robust and optimal control and filtering, among many other areas. It is also an elegant control-theoretic concept closely related to the Riccati equation. As a quadratic equation in polynomials, it poses a challenging algebraic task.

Keywords

Controller design, H2-optimal control, H∞-optimal control, J-spectral factorization, Linear systems, Polynomial, Polynomial matrix, Polynomial equation, Polynomial methods, Spectral factorization

Polynomial Spectral Factorization

As a mathematical tool, spectral factorization was invented by Wiener in the 1940s to find a frequency-domain solution of optimal filtering problems. Since then, the technique has found countless applications in system, network, and communication theory, robust and optimal control, filtering, prediction, and state reconstruction. Spectral factorization of scalar polynomials arises naturally in the area of single-input single-output systems.

In the context of continuous-time problems, real polynomials in a single complex variable s are typically used. For such a polynomial p(s), its adjoint \(p^{{\ast}}(s)\) is defined by
$$\displaystyle{ p^{{\ast}}(s) = p(-s), }$$
(1)
which results in flipping all roots across the imaginary axis. If the polynomial is symmetric, then \(p^{{\ast}}(s) = p(s)\) and its roots are placed symmetrically about the imaginary axis.
The symmetric spectral factorization problem is now formulated as follows: given a symmetric polynomial b(s),
$$\displaystyle{ b^{{\ast}}(s) = b(s), }$$
(2)
that is also positive on the imaginary axis
$$\displaystyle{ b(i\omega ) > 0\ \text{for all real}\ \omega , }$$
(3)
find a real polynomial x(s), which satisfies
$$\displaystyle{ x(s)x^{{\ast}}(s) = b(s) }$$
(4)
as well as
$$\displaystyle{ x(s)\neq 0,\mathrm{Re}\,s \geq 0. }$$
(5)
Such an x(s) is then called a spectral factor of b(s). By (5), the spectral factor is a stable polynomial in the continuous-time (Hurwitz) sense.

Obviously, (4) is a quadratic equation in polynomials and its stable solution is the desired spectral factor.

Example 1.

Given
$$\displaystyle{ b(s) = 4 + s^{4} = (1 + j + s)(1 - j + s)(1 + j - s)(1 - j - s), }$$
(4) results in the spectral factor
$$\displaystyle{ x(s) = 2 + 2s + s^{2} = (1 + j + s)(1 - j + s). }$$
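The factorization is easy to verify numerically; the following quick check (an illustrative Python/numpy sketch) multiplies x(s) by \(x^{{\ast}}(s) = x(-s)\), whose coefficients are obtained by flipping the signs of the odd powers, and recovers b(s).

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0])            # x(s) = s^2 + 2s + 2, decreasing powers of s
xstar = x * np.array([1.0, -1.0, 1.0])   # x*(s) = x(-s): flip signs of odd powers
print(np.polymul(x, xstar))              # [1. 0. 0. 0. 4.], i.e., b(s) = s^4 + 4
```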
When the right-hand side polynomial b(s) has some imaginary-axis roots, the problem as formulated above becomes unsolvable, since (3) does not hold and hence (5) cannot be fulfilled. A more relaxed formulation may then find its use, requiring only \(b(i\omega ) \geq 0\) instead of (3) and x(s) ≠ 0 only for Re s > 0 instead of (5). Clearly, the imaginary-axis roots of b(s) must then appear in both x(s) and \(x^{{\ast}}(s)\).
In the realm of discrete-time problems, one usually encounters two-sided polynomials, which are polynomial-like objects (In fact, one can stay with standard one-sided polynomials (either in nonnegative or in nonpositive powers only) if every adjoint \(p^{{\ast}}(z)\) is multiplied by a proper power of z to create a one-sided polynomial \(\bar{p}(z) = p^{{\ast}}(z)z^{n}\).) with positive and/or negative powers of a complex variable z, for example, \(p(z) = z^{-1} + 1 + 2z\). Here, the adjoint \(p^{{\ast}}(z)\) is defined simply by
$$\displaystyle{ p^{{\ast}}(z) = p(z^{-1}) }$$
(6)
and the operation results in flipping all roots across the unit circle. If the two-sided polynomial is symmetric, then \(p^{{\ast}}(z) = p(z)\) and its roots are placed symmetrically about the unit circle.
In its discrete-time version, the spectral factorization problem is stated as follows: given a symmetric two-sided polynomial b(z) that meets the conditions of symmetry
$$\displaystyle{ b^{{\ast}}(z) = b(z) }$$
(7)
and positiveness (here on the unit circle)
$$\displaystyle{ b(e^{i\omega }) > 0\ \text{for all real}\ \omega ,-\pi <\omega \leq \pi , }$$
(8)
find a real polynomial x(z) in nonnegative powers of z to satisfy
$$\displaystyle{ x(z)x^{{\ast}}(z) = b(z) }$$
(9)
and
$$\displaystyle{ x(z)\neq 0,\vert z\vert \geq 1. }$$
(10)
By (10), the spectral factor is a stable polynomial in the discrete-time (Schur) sense.

Example 2.

For
$$\displaystyle\begin{array}{rcl} b(z)=& 2z^{-2} + 6z^{-1} + 9 + 6z + 2z^{2} {}\\ =& 2z^{-2}(z + 0.5 + 0.5j)(z + 0.5 - 0.5j)(z + 1 + j)(z + 1 - j) {}\\ =& 4(z + 0.5 + 0.5j)(z + 0.5 - 0.5j) {}\\ & \times \ (z^{-1} + 0.5 + 0.5j)(z^{-1} + 0.5 - 0.5j) {}\\ \end{array}$$
(9) yields
$$\displaystyle{ x(z) = 1 + 2z + 2z^{2} = (z + 0.5 + 0.5j)(z + 0.5 - 0.5j) }$$
as the desired spectral factor.
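Since \(x(z)x^{{\ast}}(z)\) is just the autocorrelation of the coefficient sequence of x(z), this example can be checked by a single convolution; a quick Python/numpy sketch:

```python
import numpy as np

x = np.array([1.0, 2.0, 2.0])        # x(z) = 1 + 2z + 2z^2, increasing powers of z
print(np.convolve(x, x[::-1]))       # [2. 6. 9. 6. 2.]: the coefficients of b(z)
print(np.abs(np.roots(x[::-1])))     # both moduli < 1, so x(z) is Schur stable
```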

When the right-hand side b(z) possesses some roots on the unit circle, the problem turns out to be unsolvable as (8) fails. If necessary, a less restrictive formulation can then be applied, replacing (8) by \(b(e^{i\omega }) \geq 0\) and requiring x(z) ≠ 0 only for |z| > 1 instead of (10). Clearly, the unit-circle roots of b(z) must then appear in both x(z) and \(x^{{\ast}}(z)\).

When formulated as above, the spectral factorization problem is always solvable, and its solution is unique up to a change of sign (if x is a solution, so is −x, and no other solutions exist).

Polynomial Matrix Spectral Factorization

The matrix version of the problem has been encountered since the 1960s. In the world of continuous-time problems, real polynomial matrices in a single complex variable s are used. For such a real polynomial matrix P(s), its adjoint \(P^{{\ast}}(s)\) is defined as
$$\displaystyle{ P^{{\ast}}(s) = P^{T}(-s). }$$
(11)
A polynomial matrix P(s) is symmetric or, more precisely, para-Hermitian, if \(P^{{\ast}}(s) = P(s)\). Needless to say, only square polynomial matrices can be symmetric.
The matrix spectral factorization problem is defined as follows: given a symmetric polynomial matrix B(s),
$$\displaystyle{ B^{{\ast}}(s) = B(s), }$$
(12)
that is also positive definite on the imaginary axis
$$\displaystyle{ B(i\omega ) > 0\ \text{for all real}\ \omega , }$$
(13)
find a square real polynomial matrix X(s), which satisfies
$$\displaystyle{ X(s)X^{{\ast}}(s) = B(s) }$$
(14)
and has no zeros in the closed right half-plane Re s ≥ 0. Such an X(s) is then called a left spectral factor of B(s). A right spectral factor Y(s) is defined similarly by replacing (14) with
$$\displaystyle{ Y ^{{\ast}}(s)Y (s) = B(s). }$$
(15)

Example 3.

For a symmetric matrix
$$\displaystyle{ B(s) = \left [\begin{array}{ll} 2 - s^{2} & - 2 - s \\ - 2 + s&4 - s^{2}\\ \end{array} \right ], }$$
we have
$$\displaystyle{ X(s) = \left [\begin{array}{lc} 1.4 + s& - 0.2\\ - 1.2 &1.6 + s\\ \end{array} \right ] }$$
as a left spectral factor and
$$\displaystyle{ Y (s) = \left [\begin{array}{lc} 1 + s& 0\\ - 1 &2 + s\\ \end{array} \right ] }$$
as a right one.
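Matrix factors like these are easily verified symbolically; the following small check of Example 3 (a Python/sympy sketch) reproduces B(s):

```python
import sympy as sp

s = sp.symbols('s')
X = sp.Matrix([[sp.Rational(7, 5) + s, -sp.Rational(1, 5)],
               [-sp.Rational(6, 5),     sp.Rational(8, 5) + s]])
Xstar = X.T.subs(s, -s)           # the adjoint X*(s) = X^T(-s) of (11)
print((X * Xstar).expand())       # Matrix([[2 - s**2, -s - 2], [s - 2, 4 - s**2]])
```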

As in the scalar case, less restrictive definitions are sometimes used in which the given right-hand side matrix B(s) is only nonnegative definite on the imaginary axis, so that the spectral factor is only free of zeros in the open right half-plane Re s > 0.

In the kingdom of discrete-time problems, two-sided real polynomial matrices P(z) are used, having in general entries with both positive and negative powers of the complex variable z. For such a matrix, its adjoint \(P^{{\ast}}(z)\) is defined by
$$\displaystyle{ P^{{\ast}}(z) = P^{T}(z^{-1}). }$$
(16)
Clearly, if P(z) has only nonnegative powers of z, then \(P^{{\ast}}(z)\) has only nonpositive powers of z and vice versa. A square two-sided polynomial matrix P(z) is (para-Hermitian) symmetric if \(P^{{\ast}}(z) = P(z)\).
Here is the discrete-time version of the matrix spectral factorization problem. Given a two-sided polynomial matrix B(z) that is symmetric
$$\displaystyle{ B^{{\ast}}(z) = B(z) }$$
(17)
and positive definite on the unit circle
$$\displaystyle{ B(e^{i\omega }) > 0\ \text{for all real}\ \omega ,-\pi <\omega \leq \pi , }$$
(18)
find a real polynomial matrix X(z) in nonnegative powers of z that satisfies
$$\displaystyle{ X(z)X^{{\ast}}(z) = B(z) }$$
(19)
and has no zeros on or outside the unit circle. Such an X(z) is then called a left spectral factor of B(z). A right (The right and the left spectral factors are sometimes called the factor and the cofactor, respectively, but the terminology is not settled at all.) spectral factor Y(z) is defined similarly by replacing (19) with
$$\displaystyle{ Y ^{{\ast}}(z)Y (z) = B(z) }$$
(20)

Example 4.

A symmetric two-sided polynomial matrix
$$\displaystyle{ B(z) = \left [\begin{array}{cc} - 2z^{-1} + 5 - 2z& 2z^{-1} - 1 \\ - 1 + 2z &2z^{-1} + 6 + 2z\\ \end{array} \right ] }$$
has a left spectral factor
$$\displaystyle{ X(z)\mathop{\cong}\left [\begin{array}{cc} - 1.1 + 1.9z& 0.5\\ - 0.8z &0.95 + 2.1z\\ \end{array} \right ] }$$
and a right spectral factor
$$\displaystyle{ Y (z) = \left [\begin{array}{cc} 2z - 1& 1\\ 0 &1 +2z\\ \end{array} \right ]. }$$
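The right factor, which is exact, can be checked symbolically in the same way (the left factor above is given only to limited numerical accuracy, hence the ≅ sign); a Python/sympy sketch:

```python
import sympy as sp

z = sp.symbols('z')
Y = sp.Matrix([[2*z - 1, 1], [0, 1 + 2*z]])
Ystar = Y.T.subs(z, 1/z)          # the adjoint Y*(z) = Y^T(z^{-1}) of (16)
print((Ystar * Y).expand())       # reproduces B(z) of this example
```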
As before, less restrictive formulations are sometimes encountered in which the given symmetric B(z) is only nonnegative definite on the unit circle, so that the spectral factor need only have no zeros outside the unit circle.
When formulated as above, the matrix spectral factorization problem is always solvable. The spectral factors are unique up to an orthogonal matrix multiple. That is, if X and X′ are two left spectral factors of B, then
$$\displaystyle{ X^{{\prime}} = UX }$$
(21)
where U is a constant orthogonal matrix, \(UU^{T} = I\), while if Y and Y′ are two right spectral factors of B, then
$$\displaystyle{ Y ^{{\prime}} = Y V }$$
(22)
where V is a constant orthogonal matrix, \(V^{T}V = I\).

J-Spectral Factorization

In robust control, game theory, and several other fields, the symmetric right-hand side in the matrix spectral factorization may have a general signature. With such a right-hand side, the standard (positive or nonnegative definite) factorization becomes impossible. Here, the similar yet different J-spectral factorization takes its place.

In the context of continuous-time problems, the J-spectral factorization problem is formulated as follows. Given a symmetric polynomial matrix B(s),
$$\displaystyle{ B^{{\ast}}(s) = B(s), }$$
(23)
find a square real polynomial matrix X(s), which satisfies
$$\displaystyle{ X(s)JX^{{\ast}}(s) = B(s), }$$
(24)
where X(s) has no zeros in the open right half-plane Re s > 0 and J is a signature matrix of the form
$$\displaystyle{ J = \left [\begin{array}{ccl} I_{1} & 0 &0 \\ 0 & - I_{2} & 0 \\ 0 & 0 &0\\ \end{array} \right ] }$$
(25)
with \(I_{1}\) and \(I_{2}\) identity matrices of not necessarily the same dimensions. The bottom-right block of zeros is often absent, yet it is included here for generality. Such an X(s) is called a left J-spectral factor of B(s). A right J-spectral factor is defined by
$$\displaystyle{ Y ^{{\ast}}(s)JY (s) = B(s) }$$
(26)
instead of (24). For discrete-time problems, the J-spectral factorization is defined analogously.

The J-spectral factorization problem is quite general, having the standard (either positive or nonnegative) spectral factorization as a particular case. No necessary and sufficient existence conditions appear to be known for J-spectral factorization. A sufficient condition by Jakubovič (1970) states that the problem is solvable if the multiplicity of the imaginary-axis zeros of each invariant polynomial of the right-hand side matrix is even. In particular, this condition is satisfied whenever det B(s) has no zeros on the imaginary axis. In turn, the condition is violated if any of the invariant factors is itself not factorable. An example of such a nonfactorizable polynomial is \(1 + s^{2}\).

The J-spectral factors are unique up to a J-orthogonal matrix multiple. That is, if X and X′ are two left J-spectral factors of B, then
$$\displaystyle{ X^{{\prime}} = UX, }$$
(27)
where U is a J-orthogonal matrix, \(UJU^{T} = J\), while if Y and Y′ are two right J-spectral factors of B, then
$$\displaystyle{ Y ^{{\prime}} = Y V, }$$
(28)
where V is a J-orthogonal matrix, \(V^{T}JV = J\).

Example 5.

For
$$\displaystyle{ B(s) = \left [\begin{array}{cc} 0 & 1 - s\\ 1 + s &2 - s^{2} \\ \end{array} \right ] }$$
the signature matrix reads
$$\displaystyle{ J = \left [\begin{array}{lc} 1& 0\\ 0 & -1\\ \end{array} \right ] }$$
and the right J-spectral factor is
$$\displaystyle{ Y (s) = \left [\begin{array}{ll} 1 + s&\dfrac{3 - s^{2}} {2} \\ 1 + s&\dfrac{1 - s^{2}} {2}\\ \end{array} \right ]. }$$
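Again, the factor is readily verified symbolically; a short Python/sympy sketch:

```python
import sympy as sp

s = sp.symbols('s')
J = sp.diag(1, -1)
Y = sp.Matrix([[1 + s, (3 - s**2) / 2],
               [1 + s, (1 - s**2) / 2]])
Ystar = Y.T.subs(s, -s)                  # the adjoint Y*(s) = Y^T(-s)
print((Ystar * J * Y).expand())          # Matrix([[0, 1 - s], [1 + s, 2 - s**2]])
```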

Nonsymmetric Spectral Factorization

Spectral factorization can also be nonsymmetric. For a scalar polynomial p (either in s or in z), this means factoring it directly as
$$\displaystyle{ p = p^{+}p^{-} }$$
(29)
where \(p^{+}\) is the stable factor of p (having all its roots either in the open left half-plane or inside the open unit disc, depending on the variable type), while \(p^{-}\) is the remaining, unstable factor. Any roots of p on the stability boundary are assigned either to \(p^{+}\) or to \(p^{-}\), depending on the application problem at hand.
For a matrix polynomial P, the nonsymmetric factorization naturally comes in two versions: either
$$\displaystyle{ P = P^{+}P^{-} }$$
(30)
or
$$\displaystyle{ P = P^{-}P^{+}. }$$
(31)
For scalar polynomials, symmetric and nonsymmetric spectral factors are closely related. Given p, and having computed a symmetric factor x for \(p^{{\ast}}p\) as in (4) or (9) to get
$$\displaystyle{ x^{{\ast}}x = p^{{\ast}}p, }$$
(32)
then
$$\displaystyle{ p^{+} =\mathrm{ gcd}(p,x)\ \mathrm{and}\ p^{-} =\mathrm{ gcd}(p,x^{{\ast}}) }$$
(33)
where gcd stands for the greatest common divisor. In reverse,
$$\displaystyle{ x = p^{+}(p^{-})^{{\ast}}\ \mathrm{and}\ x^{{\ast}} = p^{-}(p^{+})^{{\ast}}. }$$
(34)
Unfortunately, no such relations exist for the matrix case.

Example 6.

The polynomial
$$\displaystyle{ p(s) = 1 - s^{2} }$$
factorizes into
$$\displaystyle{ p^{+}(s) = 1 + s,\ p^{-}(s) = 1 - s }$$
while for
$$\displaystyle{ P(s) = \left [\begin{array}{cc} 1 + s & 0\\ 1 + s^{2 } & 1 - s\\ \end{array} \right ] }$$
we have
$$\displaystyle{ P^{-}(s) = \left [\begin{array}{lc} 1&1 \\ s&1\\ \end{array} \right ],P^{+}(s) = \left [\begin{array}{ll} s& - 1 \\ 1&1\\ \end{array} \right ]. }$$
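The scalar relations (33) and (34) are easy to try out symbolically. In the sketch below (Python/sympy; recall that a gcd is defined only up to a nonzero constant), the polynomial p(s) = 1 − s² of Example 6 gives x(s) = (1 + s)² as the stable solution of \(x^{{\ast}}x = p^{{\ast}}p = (1 - s^{2})^{2}\):

```python
import sympy as sp

s = sp.symbols('s')
p = 1 - s**2
x = sp.expand((1 + s)**2)            # stable factor of x* x = p* p = (1 - s^2)^2
print(sp.gcd(p, x))                  # s + 1, i.e., p+ up to a constant
print(sp.gcd(p, x.subs(s, -s)))      # s - 1, i.e., p- up to a constant
```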

Algorithms and Software

Spectral factorization is a crucial step in the solution of various control, estimation, filtration, and other problems. It is no wonder that a variety of methods have been developed over the years for the computation of spectral factors. The most popular ones are briefly mentioned here. For details on particular algorithms, the reader is referred to the papers recommended for further reading.

Factor Extraction Method

If all roots of the right-hand side polynomial are known, the factorization becomes trivial: simply write the right-hand side as a product of first- and second-order factors and then collect the stable ones to form the stable factor. If the roots are not known, one can first compute them and then proceed as above. Somewhat surprisingly, a similar procedure can be used in the matrix case: for every zero, a proper matrix factor must be extracted. For further details, see Callier (1985) or Henrion and Šebek (2000).
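A minimal sketch of this procedure for the scalar continuous-time case, assuming the roots are obtained numerically (the function name and the normalization step are illustrative choices):

```python
import numpy as np

def factor_by_extraction(b):
    """b: coefficients of b(s) in decreasing powers of s, with b symmetric
    and positive on the imaginary axis.  Returns the coefficients of the
    Hurwitz spectral factor x(s), so that x(s) x(-s) = b(s)."""
    stable = [r for r in np.roots(b) if r.real < 0]   # left-half-plane roots only
    x = np.real(np.poly(stable))                      # monic polynomial from them
    return np.sqrt(abs(b[0])) * x                     # |lead(b)| = lead(x)^2

# Example 1: b(s) = s^4 + 4  ->  x(s) = s^2 + 2s + 2
print(factor_by_extraction([1.0, 0.0, 0.0, 0.0, 4.0]))
```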

Bauer’s Algorithm

This procedure is an iterative scheme with a linear rate of convergence. It relies on the equivalence between polynomial spectral factorization and the Cholesky factorization of a related infinite-dimensional Toeplitz matrix. For further details, see Youla and Kazanjian (1978).
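A minimal numerical sketch of this idea for the scalar discrete-time case: truncate the infinite Toeplitz matrix with entries \(b_{i-j}\) to a large finite section, compute its Cholesky factor, and read the (approximate) spectral factor off the last row. The function name and the truncation size N are illustrative choices.

```python
import numpy as np
from scipy.linalg import toeplitz

def bauer_factor(b, N=200):
    """b = [b_0, ..., b_n]: nonnegative-power coefficients of the symmetric
    two-sided b(z), assumed positive on the unit circle.  Returns approximate
    coefficients [x_0, ..., x_n] of the Schur-stable spectral factor x(z)."""
    n = len(b) - 1
    col = np.zeros(N)
    col[: n + 1] = b                  # first column of the banded Toeplitz matrix
    T = toeplitz(col)                 # T[i, j] = b_{|i - j|}
    L = np.linalg.cholesky(T)         # T = L L^T with L lower triangular
    return L[-1, -(n + 1):]           # the last row converges to (x_0, ..., x_n)

# Example 2: b(z) = 2z^-2 + 6z^-1 + 9 + 6z + 2z^2  ->  approximately [1, 2, 2]
print(bauer_factor(np.array([9.0, 6.0, 2.0])))
```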

Newton-Raphson Iterations

This is an iterative algorithm with a quadratic convergence rate, based on successive solutions of symmetric linear polynomial Diophantine equations. It is inspired by the classical Newton's method for finding a root of a function. To learn more, read Davis (1963), Ježek and Kučera (1985), and Vostrý (1975).
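To sketch the idea on the scalar continuous-time equation (4): linearizing \(xx^{{\ast}} = b\) around the current stable iterate \(x_{k}\) and neglecting the second-order term \(\delta \delta ^{{\ast}}\) leads to a symmetric linear polynomial equation for the correction \(\delta\),
$$\displaystyle{ x_{k}(s)\delta ^{{\ast}}(s) +\delta (s)x_{k}^{{\ast}}(s) = b(s) - x_{k}(s)x_{k}^{{\ast}}(s),\qquad x_{k+1}(s) = x_{k}(s) +\delta (s). }$$
Each step thus amounts to solving one linear Diophantine equation in polynomials; the cited papers give the precise formulation and convergence safeguards.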

Factorization via Riccati Equation

In the state-space solution of various problems, an algebraic Riccati equation plays the role of spectral factorization. It is therefore not surprising that the spectral factor itself can be calculated directly by solving a Riccati equation. For further information, see, e.g., Šebek (1992).
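A classical instance of this connection is the return-difference (Kalman–Yakubovich) identity, sketched here for a rational right-hand side \(\Phi (s) = R + G^{{\ast}}(s)QG(s)\) with \(G(s) = C(sI - A)^{-1}B\), \(R = R^{T} > 0\), and \(Q = Q^{T} \geq 0\), under standard stabilizability and detectability assumptions: the stabilizing solution P of the algebraic Riccati equation
$$\displaystyle{ A^{T}P + PA + C^{T}QC - PBR^{-1}B^{T}P = 0 }$$
directly yields the right spectral factor
$$\displaystyle{ Y (s) = R^{1/2}\left (I + R^{-1}B^{T}P(sI - A)^{-1}B\right ),\qquad Y ^{{\ast}}(s)Y (s) =\Phi (s). }$$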

FFT Algorithm

This is the most efficient and accurate procedure for the factorization of scalar polynomials of very high degrees (on the order of hundreds or thousands). Such polynomials appear in special signal-processing problems in advanced audio applications involving the inversion of loudspeaker dynamics or room acoustics. The algorithm is based on the fact that the logarithm of a product (such as the left-hand side of the spectral factorization equation) turns into a sum of logarithms of the particular factors. For details, see Hromčík and Šebek (2007).
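The following minimal sketch illustrates the underlying idea on the scalar discrete-time case; it is the classical cepstral construction (sample b on the unit circle, keep the causal half of the logarithm's Fourier series, exponentiate), not a reproduction of the cited algorithm, and N is an illustrative FFT length.

```python
import numpy as np

def fft_factor(b, N=4096):
    """b = [b_0, ..., b_n]: nonnegative-power coefficients of the symmetric
    b(z), assumed positive on the unit circle.  Returns approximate
    coefficients [x_0, ..., x_n] of the Schur-stable spectral factor."""
    n = len(b) - 1
    w = 2 * np.pi * np.arange(N) / N
    S = b[0] + 2 * sum(b[k] * np.cos(k * w) for k in range(1, n + 1))  # b(e^{iw})
    c = np.fft.ifft(np.log(S)).real              # real cepstrum (an even sequence)
    cmin = np.zeros(N)                           # fold it onto the causal part
    cmin[0], cmin[N // 2] = c[0] / 2, c[N // 2] / 2
    cmin[1 : N // 2] = c[1 : N // 2]
    h = np.fft.ifft(np.exp(np.fft.fft(cmin))).real[: n + 1]  # minimum-phase response
    return h[::-1]                               # reversed, to read x(z) in powers of z

# Example 2 once more: prints approximately [1, 2, 2]
print(fft_factor([9.0, 6.0, 2.0]))
```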

All the procedures above are either directly implemented in, or can easily be composed from, the functions of the Polynomial Toolbox for Matlab, a third-party Matlab toolbox for polynomials, polynomial matrices, and their applications in systems, signals, and control. For more details on the toolbox, visit www.polyx.com.

Consequences and Comments

Polynomial and polynomial matrix spectral factorization is an important step when frequency domain (polynomial) methods are used for optimal and robust control, filtering, estimation, or prediction. Numerous particular examples can be found throughout this encyclopedia as well as in the textbooks and papers recommended for further reading below.

Spectral factorization of rational functions and matrices is an equally important topic, but it is omitted here due to lack of space. Inquiring readers are referred to the papers of Oara and Varga (2000) and Zhong (2005).

Further Reading

  • Nice tutorial books on polynomials and polynomial matrices in control theory and design are Kučera (1979), Callier and Desoer (1982), and Kailath (1980).

  • The concept of spectral factorization was introduced by Wiener (1949); for further information, see the later original papers Wilson (1972) and Kwakernaak and Šebek (1994) as well as the survey papers Kwakernaak (1991), Sayed and Kailath (2001), and Kučera (2007).

  • Nice applications of spectral factorization in control problems can be found, e.g., in Green et al. (1990), Henrion et al. (2003), or Zhou and Doyle (1998). For its use in other engineering problems, see, e.g., Sternad and Ahlén (1993).

Bibliography

  1. Callier FM (1985) On polynomial matrix spectral factorization by symmetric extraction. IEEE Trans Autom Control 30:453–464
  2. Callier FM, Desoer CA (1982) Multivariable feedback systems. Springer, New York
  3. Davis MC (1963) Factorising the spectral matrix. IEEE Trans Autom Control 8:296
  4. Green M, Glover K, Limebeer DJN, Doyle J (1990) A J-spectral factorization approach to H-infinity control. SIAM J Control Optim 28:1350–1371
  5. Henrion D, Šebek M (2000) An algorithm for polynomial matrix factor extraction. Int J Control 73(8):686–695
  6. Henrion D, Šebek M, Kučera V (2003) Positive polynomials and robust stabilization with fixed-order controllers. IEEE Trans Autom Control 48:1178–1186
  7. Hromčík M, Šebek M (2007) Numerical algorithms for polynomial plus/minus factorization. Int J Robust Nonlinear Control 17(8):786–802
  8. Jakubovič VA (1970) Factorization of symmetric matrix polynomials. Dokl Akad Nauk SSSR 194(3):532–535
  9. Ježek J, Kučera V (1985) Efficient algorithm for matrix spectral factorization. Automatica 21:663–669
  10. Kailath T (1980) Linear systems. Prentice-Hall, Englewood Cliffs
  11. Kučera V (1979) Discrete linear control: the polynomial equation approach. Wiley, Chichester
  12. Kučera V (2007) Polynomial control: past, present, and future. Int J Robust Nonlinear Control 17:682–705
  13. Kwakernaak H (1991) The polynomial approach to H∞-optimal regulation. In: Mosca E, Pandolfi L (eds) H-infinity control theory. Lecture notes in mathematics, vol 1496. Springer, Berlin
  14. Kwakernaak H, Šebek M (1994) Polynomial J-spectral factorization. IEEE Trans Autom Control 39:315–328
  15. Oara C, Varga A (2000) Computation of general inner-outer and spectral factorizations. IEEE Trans Autom Control 45:2307–2325
  16. Sayed AH, Kailath T (2001) A survey of spectral factorization methods. Numer Linear Algebra Appl 8(6–7):467–496
  17. Šebek M (1992) J-spectral factorization via Riccati equation. In: Proceedings of the 31st IEEE CDC, Tucson, pp 3600–3603
  18. Sternad M, Ahlén A (1993) Robust filtering and feedforward control based on probabilistic descriptions of model errors. Automatica 29(3):661–679
  19. Vostrý Z (1975) New algorithm for polynomial spectral factorization with quadratic convergence. Kybernetika 11:415, 248
  20. Wiener N (1949) Extrapolation, interpolation and smoothing of stationary time series. Wiley, New York
  21. Wilson GT (1972) The factorization of matricial spectral densities. SIAM J Appl Math 23:420
  22. Youla DC, Kazanjian NN (1978) Bauer-type factorization of positive matrices and the theory of matrix polynomials orthogonal on the unit circle. IEEE Trans Circuits Syst 25:57
  23. Zhong QC (2005) J-spectral factorization of regular para-Hermitian transfer matrices. Automatica 41:1289–1293
  24. Zhou K, Doyle JC (1998) Essentials of robust control. Prentice-Hall, Upper Saddle River

Copyright information

© Springer-Verlag London 2015

Authors and Affiliations

  1. Faculty of Electrical Engineering, Department of Control Engineering, Czech Technical University in Prague, Prague 6, Czech Republic