Abstract
We assume the reader to be familiar with linear time-invariant (LTI) systems. In this chapter we merely summarise the main results of this theory. We refer to the quantities under consideration, namely the input, the output and some characterization of the system, as signals. This should evoke a meaningful interpretation in most of the systems that we are going to discuss. Mathematically they are distributions.
We assume the reader to be familiar with linear time-invariant (LTI) systems. In this chapter we merely summarise the main results of this theory. We refer to the quantities under consideration, namely the input, the output and some characterization of the system, as signals. This should evoke a meaningful interpretation in most of the systems that we are going to discuss. Mathematically they are distributions.
1 Basic Definitions
The meaning of time-invariant is very intuitive: suppose that we apply the input signal x(t) to a system represented by an operator \({\mathcal {H}}\) and observe the signal
$$ y(t) = {\mathcal {H}}[x(t)] $$
as its output (Fig. 8.1). The system is said to be time-invariant if by applying the delayed input signal \(x(t - \tau )\) we observe the same output signal as before, except for a delay in time by an amount \(\tau \), that is, if
$$ {\mathcal {H}}[x(t - \tau )] = y(t - \tau ). \qquad (8.1) $$
The concept of linearity is subtler. A defining property of a linear system is the validity of the superposition principle: if \(y_1(t)\) is the response of the system to the input \(x_1(t)\) and \(y_2(t)\) the one to \(x_2(t)\), then the response to a linear combination of these inputs is
$$ {\mathcal {H}}[c_1 x_1(t) + c_2 x_2(t)] = c_1 y_1(t) + c_2 y_2(t) \qquad (8.2) $$
with \(c_1\) and \(c_2\) constants. However, if we limit the definition of a linear system to this property, then we admit pathological systems such as the following one.
Example 8.1: A Discontinuous System [22]
Consider a system accepting as input a piecewise continuous function with at most a finite number of isolated jump discontinuities. The system response consists of the sum of the input signal jumps from \(-\infty \) to the present time t.
The system satisfies (8.2). However, the behaviour is rather peculiar. If we apply, say, a rectangular input then the output is also rectangular. But if we approximate the rectangular input to any degree of accuracy by a continuous function, then the output is always zero.
To exclude systems with such a bizarre behavior, we require linear systems to be continuous: if as \(m \in {\mathbb {N}}\) tends to \(\infty \) the sequence of input signals \(x_m(t)\) converges (in the sense of distributions) to the signal x(t), then the system response \(y_m(t)\) corresponding to input \(x_m(t)\) converges to the response y(t) corresponding to x(t).
Suppose that we apply an impulse \(\delta (t)\) to the input of the system \({\mathcal {H}}\) and observe the signal h(t) at its output. Then, by linearity and time invariance, if we apply a finite number of weighted and delayed pulses \(\sum _k c_k \delta (t - t_k)\) the output must be
$$ \sum _k c_k h(t - t_k). $$
In Sect. 3.3 we saw that every distribution can be represented as the limit of a sequence of finite sums of Dirac impulses. From this and the linearity of convolution (Eq. (3.19)) we obtain that, in the limit as n tends to infinity, if the input converges to the signal x(t) the output of the system converges to
$$ y(t) = (h * x)(t). $$
We therefore define
Definition 8.1
(LTI System) A single-input, single-output (SISO), linear time-invariant (LTI) system is a system that, when driven by an input signal x(t), produces the output
$$ y(t) = h(t) * x(t) \qquad (8.3) $$
with h(t) the impulse response of the system.
A system is called real if, when driven by a real distribution, its response is a real distribution; in other words, if its impulse response is a real distribution.
While we have been talking about signals depending on time, we can abstract from that and talk about signals depending on a generic n-dimensional independent variable \(\lambda \in {\mathbb {R}}^n\). In this case, instead of time-invariance, it makes more sense to adapt (8.1) to
$$ {\mathcal {H}}[x(\lambda - \tau )] = y(\lambda - \tau ), \qquad \tau \in {\mathbb {R}}^n $$
and talk about translation invariance. A single-input single-output, linear translation-invariant system is then still described by a convolution product similar to (8.3), where however the independent variable t is replaced by the abstract n-dimensional variable \(\lambda \). We are going to call a system of this type an LTI system as well.
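The convolution description of an LTI system is easy to illustrate numerically. The following sketch (the first-order impulse response, the rectangular input and the sampling grid are illustrative choices, not taken from the text) approximates \(y = h * x\) by a Riemann sum and checks the superposition and time-invariance properties on the samples:

```python
import numpy as np

dt = 0.01                          # sampling step of the uniform grid
t = np.arange(0, 5, dt)            # time axis, t >= 0

h = np.exp(-t)                     # sampled impulse response h(t) = e^{-t}, t >= 0
x = np.where(t < 1.0, 1.0, 0.0)    # input: rectangular pulse of width 1

# Riemann-sum approximation of the convolution y(t) = (h * x)(t)
y = np.convolve(h, x)[: t.size] * dt

# Superposition check: the response to 2*x equals 2*y
y2 = np.convolve(h, 2 * x)[: t.size] * dt
assert np.allclose(y2, 2 * y)

# Time-invariance check: delaying the input delays the output
shift = 50                         # delay of 0.5 s in samples
x_d = np.roll(x, shift); x_d[:shift] = 0.0
y_d = np.convolve(h, x_d)[: t.size] * dt
assert np.allclose(y_d[shift:], y[: t.size - shift])
```

Both checks hold exactly on the samples, since the discrete convolution is itself a linear, shift-invariant operation.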
2 Causality
Assume for simplicity that h and x are integrable functions of time. The response of a system characterized by h when driven by the input x can then be written in integral form
$$ y(t) = \int _{-\infty }^{\infty } h(\tau )\, x(t - \tau ) \,\mathrm {d}\tau . $$
Suppose now that the input vanishes for \(t<0\). Then from
$$ y(t) = \int _{0}^{\infty } h(t - \tau )\, x(\tau ) \,\mathrm {d}\tau $$
we see that in general the system may produce a nonzero response y(t) for \(t<0\), that is, before the input signal x(t) has been applied.
If a system is causal, that is, if its output at time \(t_0\) can only depend on values of the input signal at times \(t \le t_0\), then its impulse response h(t) must vanish for \(t <0\). In other words h must be a right-sided distribution in \({\mathcal {D_+'}}\).
Note that in our interpretation of signals as being functions of time, non-causal systems are not physically implementable and appear to be meaningless. However, non-causal systems are sometimes useful in theoretical studies. In addition, in many situations the theory of LTI systems can be applied to systems where the quantities of interest (the input and output) are not functions of time (see Example 7.6).
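A small numerical experiment makes the role of a right-sided impulse response visible. In the sketch below (the exponential responses and the grid are hypothetical choices) a causal system stays silent before the input is applied, while a two-sided, non-causal impulse response produces output at negative times:

```python
import numpy as np

dt = 0.01
t = np.arange(-2, 2, dt)               # time axis including t < 0

x = np.where(t >= 0, np.exp(-t), 0.0)  # input applied at t = 0

# Causal impulse response: vanishes for t < 0
h_causal = np.where(t >= 0, np.exp(-2 * t), 0.0)
# Non-causal impulse response: symmetric around t = 0
h_acausal = np.exp(-2 * np.abs(t))

def respond(h, x):
    """Riemann-sum convolution on the common grid, centered alignment."""
    return np.convolve(h, x, mode="same") * dt

y_c = respond(h_causal, x)
y_a = respond(h_acausal, x)

pre = t < -dt                          # strictly before the input is applied
assert np.allclose(y_c[pre], 0.0)      # causal system: silent before the input
assert y_a[pre].max() > 0.01           # non-causal system responds before t = 0
```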
3 Stability
An important aspect of a system is its stability. Let x(t) be a bounded function, that is, satisfying
$$ |x(t)| \le K < \infty $$
for every t. The response of a system characterized by the impulse response h(t) to such an input signal is
$$ y(t) = h(t) * x(t). $$
The output y(t) is well-defined if
$$ \langle y, \phi \rangle = \langle h * x, \phi \rangle $$
exists for every test function \(\phi \in {\mathcal {D}}\) and, for every sequence \((\phi _m)\) converging to zero in \({\mathcal {D}}\),
$$ \langle h * x, \phi _m \rangle \rightarrow 0. $$
In this case we say that the system is bounded-input bounded-output (BIBO) stable.
For a system to be BIBO stable the expression
$$ \langle h * x, \phi \rangle = \Bigl \langle h(\tau ), \int x(t - \tau )\, \phi (t) \,\mathrm {d}t \Bigr \rangle $$
must have a meaning. Observe that the inner integral is an indefinitely differentiable bounded function. For the convolution to have a meaning the impulse response of the system must therefore be extensible to a continuous linear form on \({\mathcal {B}}\). As we saw in Sect. 6.1 this is only the case if h is a summable distribution. Thus, for a system to be BIBO stable, its impulse response must be a summable distribution.
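For a summable impulse response the output obeys the bound \(|y(t)| \le K \int |h(\tau )| \,\mathrm {d}\tau \). The following sketch (with the illustrative summable impulse response \(h(t) = \mathrm {e}^{-t}\,\textsf{1}_{+}(t)\), not from the text) checks this bound numerically for a bounded, sign-flipping input:

```python
import numpy as np

dt = 0.001
t = np.arange(0, 10, dt)

h = np.exp(-t)                      # summable impulse response: integral of |h| is 1
K = 1.0                             # bound on the input, |x(t)| <= K
rng = np.random.default_rng(0)
x = K * np.sign(rng.standard_normal(t.size))  # bounded, rapidly sign-flipping input

L1_norm = np.sum(np.abs(h)) * dt    # numerical approximation of the L1 norm of h
y = np.convolve(h, x)[: t.size] * dt

# BIBO bound: |y(t)| <= K * L1 norm of h, for every t
assert np.abs(y).max() <= K * L1_norm + 1e-9
```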
We mention without going into details that the definition of a BIBO stable system can be extended to input signals that are so-called bounded distributions and usually denoted by \({\mathcal {B}}'\) or \({\mathcal {D}}'_{L^\infty }\) [16].
The series connection, or cascade, of two stable systems results in a stable system. This is so because the convolution of summable distributions is always well-defined and is itself a summable distribution. In addition, for linear systems the order of the connection is irrelevant as, if \(h_A\) and \(h_B\) are the impulse responses of the two systems,
$$ h_A * h_B = h_B * h_A. $$
4 Transfer Function
4.1 Stable Systems
If a system is stable then its impulse response h can be Fourier transformed and the transform \(\hat{h}\) is a continuous function of slow growth called the frequency response of the system. If the input signal x is also a summable distribution then it can be Fourier transformed as well and the Fourier transform of the output signal can be represented by the product
$$ \hat{y} = \hat{h}\, \hat{x}. $$
If the input signal x is \({\mathcal {T}}\)-periodic, then the system can be analysed in the convolution algebra of periodic distributions. To do so the impulse response h is converted into a periodic distribution by convolving it with the unit of the convolution algebra of periodic distributions \(\delta _{\mathcal {T}}\)
$$ h_{\mathcal {T}} = \delta _{\mathcal {T}} * h. $$
Provided that \(h_{\mathcal {T}}\) is well-defined, which for stable systems is always the case, the output of the system can be represented by
$$ y = h_{\mathcal {T}} * x. $$
Note that while the convolution used to define \(h_{\mathcal {T}}\) is the convolution in \({\mathcal {D'}}({\mathbb {R}})\), the latter is the convolution in \({\mathcal {D'}}({\mathbb {T}})\). As discussed in Sect. 7.5, the equation is most conveniently solved with the help of the Fourier series. If we denote by \(c_m(y), c_m(h_{\mathcal {T}})\) and \(c_m(x)\) the mth Fourier coefficient of \(y, h_{\mathcal {T}}\) and x respectively, then the equation is solved if
$$ c_m(y) = {\mathcal {T}}\, c_m(h_{\mathcal {T}})\, c_m(x) $$
for every \(m\in {\mathbb {Z}}\). From (4.24) we know that
$$ c_m(h_{\mathcal {T}}) = \frac{1}{{\mathcal {T}}}\, \hat{h}(m\omega _c) $$
with \(\omega _c=2\pi /{\mathcal {T}}\). Therefore, by knowing the Fourier transform of the impulse response we can immediately obtain the Fourier coefficients of the output signal by
$$ c_m(y) = \hat{h}(m\omega _c)\, c_m(x). $$
In particular, if the input is the complex tone \(\mathrm{{e}}^{\jmath \omega _ct}\), the output is also a complex tone at the exact same frequency
$$ y(t) = \hat{h}(\omega _c)\, \mathrm {e}^{\jmath \omega _c t}. $$
If the input of the system is the sum of two (or more) periodic signals \(x_A\) and \(x_B\) with incommensurate frequencies \(\omega _A\) and \(\omega _B\), that is, if the ratio of the two frequencies \(\omega _A/\omega _B\) is an irrational number, then the input signal is not periodic, but almost periodic. Due to the linearity and continuity of the system, the response can still be calculated by the above technique for each input separately and the results combined
$$ y = y_A + y_B. $$
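The response of a stable system to a complex tone can be checked numerically. In this sketch (the first-order lowpass \(h(t) = a\,\mathrm {e}^{-at}\,\textsf{1}_{+}(t)\) and the tone frequency are illustrative choices, not from the text) the convolution output converges, once the transient has died out, to \(\hat{h}(\omega _c)\,\mathrm {e}^{\jmath \omega _c t}\):

```python
import numpy as np

dt = 0.001
t = np.arange(0, 15, dt)

a = 1.0
h = a * np.exp(-a * t)                 # impulse response of a first-order lowpass
wc = 2 * np.pi * 2.0                   # tone frequency (rad/s)
x = np.exp(1j * wc * t)                # complex tone input, switched on at t = 0

y = np.convolve(h, x)[: t.size] * dt   # numerical system response

h_hat = a / (a + 1j * wc)              # Fourier transform of h at wc (analytic)
y_pred = h_hat * x                     # predicted steady-state response

# After the transient has died out the two agree
steady = t > 8
assert np.max(np.abs(y[steady] - y_pred[steady])) < 1e-2
```

The residual mismatch is the switch-on transient plus the Riemann-sum discretization error; both shrink as the grid is refined.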
4.2 Causal Systems
If the system is causal, that is, if its impulse response h is a distribution in \({\mathcal {D_+'}}\), and one is interested in the system response for right-sided input signals \(x\in {\mathcal {D_+'}}\), then the system response y can be calculated in the convolution algebra \({\mathcal {D_+'}}\). In particular, if h and x are Laplace transformable then the Laplace transform of the output signal can be calculated by
$$ Y(s) = H(s)\, X(s). $$
The Laplace transform H(s) of the impulse response h is called the transfer function of the system.
If the system is BIBO stable, then the ROC of H(s) includes the imaginary axis \(s=j\omega \). In this case the Fourier transform of h is immediately obtained from the transfer function by
$$ \hat{h}(\omega ) = H(j\omega ). $$
Note that if the system is not BIBO stable then this relation is not valid even if the Fourier transform of h does exist. See Example 5.4 for a simple example where the system corresponds to an ideal integrator.
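For a concrete stable case, take the impulse response \(h(t) = \mathrm {e}^{-t}\,\textsf{1}_{+}(t)\) with transfer function \(H(s) = 1/(s+1)\) and ROC \(\operatorname {Re} s > -1\) (a standard textbook example, not taken from the text). Since the ROC contains the imaginary axis, a numerically computed Fourier transform of h must agree with \(H(j\omega )\):

```python
import numpy as np

dt = 0.001
t = np.arange(0, 20, dt)
h = np.exp(-t)                       # causal, BIBO-stable impulse response

# Transfer function H(s) = 1/(s + 1), the Laplace transform of h (ROC: Re s > -1)
def H(s):
    return 1.0 / (s + 1.0)

# Since the ROC contains the imaginary axis, h_hat(w) = H(jw)
for w in (0.0, 1.0, 5.0):
    h_hat = np.sum(h * np.exp(-1j * w * t)) * dt   # numerical Fourier transform
    assert abs(h_hat - H(1j * w)) < 2e-3
```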
In the following we are going to denote distributions belonging to \({\mathcal {D_+'}}\cap {\mathcal {D}}_{L^1}'\) by \({\mathcal {D}}_{L^1+}'\).
5 Rational Transfer Functions
Consider a causal system described by a rational transfer function
$$ H(s) = \frac{Q(s)}{P(s)} $$
with Q(s) and P(s) polynomials. Given the Laplace transform X(s) of the input signal x, the Laplace transform of the output is
$$ Y(s) = H(s)\, X(s) = \frac{Q(s)}{P(s)}\, X(s). $$
If we multiply both sides of this equation by P(s) we obtain
$$ P(s)\, Y(s) = Q(s)\, X(s) $$
and by inverse Laplace transforming the equation we obtain the convolution equation
With the results of Sect. 7.3 we see that this equation corresponds to the initial value problem described by the linear differential equation with constant coefficients
$$ a_n y^{(n)}(t) + \cdots + a_1 y'(t) + a_0 y(t) = b_m x^{(m)}(t) + \cdots + b_0 x(t) $$
with
$$ P(s) = \sum _{k=0}^{n} a_k s^k , \qquad Q(s) = \sum _{k=0}^{m} b_k s^k $$
and zero initial conditions
$$ y(0) = y'(0) = \cdots = y^{(n-1)}(0) = 0 . $$
For this reason \(y(t) = h(t) *x(t)\) is called the zero state response of the system.
It is obvious that the procedure can be reversed. We have therefore established a one-to-one correspondence between systems described by a rational transfer function and systems described by a linear differential equation with constant coefficients and zero initial conditions.
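The correspondence can be checked numerically for the simplest case \(H(s) = 1/(s+1)\), i.e. the initial value problem \(y' + y = x\), \(y(0) = 0\) (an illustrative example, not from the text): the zero-state response computed as the convolution \(h * x\) satisfies the differential equation up to discretization error:

```python
import numpy as np

dt = 1e-3
t = np.arange(0, 5, dt)

# H(s) = 1/(s + 1) corresponds to the IVP  y' + y = x,  y(0) = 0
h = np.exp(-t)                         # impulse response, inverse transform of H(s)
x = np.sin(2 * t)                      # an arbitrary input, zero at t = 0

y = np.convolve(h, x)[: t.size] * dt   # zero-state response y = h * x

# Check the differential equation y' + y = x by finite differences
dy = np.gradient(y, dt)
residual = dy + y - x
assert np.max(np.abs(residual[5:-5])) < 1e-2   # interior points only
assert abs(y[0]) < 1e-6                        # zero initial condition
```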
If the transfer function H of the system is minimal, that is, if its numerator and its denominator are relatively prime polynomials, then, in the complement of \(t=0\), it is possible to recreate the same output that would be produced by solving the corresponding initial value problem with non-zero initial conditions. This is achieved by driving the system with an input signal consisting of a weighted sum of a Dirac pulse and its derivatives
$$ x(t) = \sum _{k=0}^{m-1} x_k\, \delta ^{(k)}(t) $$
and by suitably selecting the weighting coefficients \(x_0,\ldots ,x_{m-1}\) as described in Sect. 7.3 (see Example 7.4). Such a system is said to have order m and to be observable and controllable (see Sect. 8.6).
If H(s) is a proper rational transfer function, that is if \(m<n\), then it can be expanded into a sum of partial fractions of the form
$$ H(s) = \sum _{j} \sum _{k_j=1}^{l_j} \frac{c_{jk_j}}{(s - p_j)^{k_j}} $$
with \(p_j\) the jth zero of P(s), \(l_j\) its multiplicity and \(c_{jk_j}\) constants. From Example 7.2 and the properties of the Laplace transform we therefore see that the impulse response h is the sum of products of polynomials and exponential functions. In particular, we see that the system is stable if the real parts of the poles of H(s) are negative.
If m is not smaller than n then H(s) can be decomposed into the sum of a polynomial and a proper rational function. The impulse response h is then the sum of the above polynomial-exponential functions and a weighted sum of the Dirac impulse and its derivatives.
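As a worked instance of the partial fraction expansion (the transfer function below is an illustrative choice, not from the text), take \(H(s) = (s+3)/((s+1)(s+2))\), whose expansion is \(2/(s+1) - 1/(s+2)\), giving the impulse response \(h(t) = (2\mathrm {e}^{-t} - \mathrm {e}^{-2t})\,\textsf{1}_{+}(t)\); the system is stable since both poles have negative real part:

```python
import numpy as np

# Proper rational transfer function H(s) = (s + 3)/((s + 1)(s + 2))
def H(s):
    return (s + 3) / ((s + 1) * (s + 2))

# Its partial fraction expansion: residue 2 at the pole -1, residue -1 at -2
def H_pf(s):
    return 2 / (s + 1) - 1 / (s + 2)

# The two expressions agree everywhere away from the poles
s_test = np.array([0.5 + 1j, -0.3 + 2j, 3.0 + 0j])
assert np.allclose(H(s_test), H_pf(s_test))

# The impulse response h(t) = 2 e^{-t} - e^{-2t} decays: a stable system
t = np.linspace(0, 20, 200)
h = 2 * np.exp(-t) - np.exp(-2 * t)
assert abs(h[-1]) < 1e-6
```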
6 System State
In this section we review the concept of the state of a system. To this end consider the initial value problem described by the system of n differential equations
$$ \frac{\mathrm {d}u}{\mathrm {d}t}(t) = A u(t) + x(t) , \qquad u(0) = u_0 $$
with \(A \in {\mathbb {C}}^{n\times n}\) an \(n \times n\) matrix and u and x n-dimensional vectors of complex valued functions of time. As before, we can translate this initial value problem into the language of distributions by replacing the (conventional) derivative with the distributional one and working in the convolution algebra of right-sided distributions
$$ \mathrm {D}u = A u + x + u_0 \delta . $$
If we rearrange the equation and convolve each term with \(I\textsf{1}_{+}\) we obtain the equivalent equation
$$ (I\delta - A\textsf{1}_{+}) * u = \textsf{1}_{+} * x + u_0\, \textsf{1}_{+} . $$
This form shows that the equation can be solved by left convolving both sides of the equation with the inverse of \((I\delta - A\textsf{1}_{+})\). Observing the analogy with the geometric series, provided it converges, the latter can be represented by the following series, where the standard product of the geometric series has been replaced by the convolution product
$$ (I\delta - A\textsf{1}_{+})^{-1} = I\delta + A\textsf{1}_{+} + A^2\,\textsf{1}_{+}*\textsf{1}_{+} + A^3\,\textsf{1}_{+}*\textsf{1}_{+}*\textsf{1}_{+} + \cdots . $$
The iterated convolutions are easily evaluated
$$ \underbrace{\textsf{1}_{+} * \cdots * \textsf{1}_{+}}_{k\ \text {factors}} = \frac{t^{k-1}}{(k-1)!}\, \textsf{1}_{+} $$
and using the identity
we obtain
The last series can be expressed with the help of the exponential matrix defined by
$$ \mathrm {e}^{tA} = \sum _{k=0}^{\infty } \frac{(tA)^k}{k!} = I + tA + \frac{(tA)^2}{2!} + \cdots $$
which converges for every value of t
Having established the convergence of the series, using the linearity and continuity of convolution one readily sees that it indeed defines the desired inverse
$$ (I\delta - A\textsf{1}_{+}) * \bigl (I\delta + A\,\mathrm {e}^{tA}\textsf{1}_{+}\bigr ) = I\delta . $$
The solution of the equation is therefore given by
$$ u = \mathrm {e}^{tA}\textsf{1}_{+} * x + \mathrm {e}^{tA} u_0\, \textsf{1}_{+} . $$
The exponential matrix has several useful properties that are immediately verified using its defining series, for example
$$ \frac{\mathrm {d}}{\mathrm {d}t}\,\mathrm {e}^{tA} = A\,\mathrm {e}^{tA} , \qquad \mathrm {e}^{0A} = I , \qquad \bigl (\mathrm {e}^{tA}\bigr )^{-1} = \mathrm {e}^{-tA} . $$
Note however that in general
$$ \mathrm {e}^{t(A+B)} \ne \mathrm {e}^{tA}\, \mathrm {e}^{tB} . $$
Equality holds only if A and B commute, that is, if \(AB = BA\).
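The failure of the product rule for non-commuting matrices is easy to demonstrate. The sketch below implements the exponential matrix directly from its defining series (`expm` is a hypothetical helper written here, adequate for small matrices, not a library routine):

```python
import numpy as np

def expm(M, terms=60):
    """Matrix exponential via its defining power series (fine for small ||M||)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k       # term now holds M^k / k!
        out = out + term
    return out

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])

# A and B do not commute, and indeed e^{A+B} differs from e^A e^B ...
assert not np.allclose(expm(A + B), expm(A) @ expm(B))

# ... while for commuting matrices (e.g. A and the identity) equality holds
I = np.eye(2)
assert np.allclose(expm(A + I), expm(A) @ expm(I))
```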
Consider now the state space representation of a SISO LTI system
$$ \frac{\mathrm {d}u}{\mathrm {d}t} = A u + B x , \qquad y = C u + D x \qquad (8.11) $$
where now x represents the input signal of the system and y its output. The vector u is called the state of the system and (8.11) shows that its value \(u_0\) at a given point in time \(t_0\) is the minimum amount of information that, together with the input signal at times \(t \ge t_0\), allows determining the system behaviour at all times \(t > t_0\). In other words, the system state \(u_0\) at time \(t_0\) summarises the effect on the system of all past values of the input signal and of previous states.
6.1 Controllability
It is interesting to ask whether it is possible to design the input signal in such a way that the system can be put into an arbitrary state \(u_0\) in finite time. That is, can we design the input signal such that for \(t>t_0\) the state vector equals \(u(t) = \mathrm{{e}}^{At} u_0\)?
The problem is most easily analysed using impulsive inputs, starting from the zero state. From the above results we know that the system state dependence on the input signal x is given by
$$ u = \mathrm {e}^{tA}\textsf{1}_{+} B * x . $$
Suppose that for an n-dimensional system we use an input signal consisting of a weighted sum of a Dirac impulse and its derivatives up to order \(n-1\)
$$ x(t) = \sum _{k=0}^{n-1} x_k\, \delta ^{(k)}(t) . $$
Since the system is linear, we can analyse the contribution of each term individually
The terms replaced by dots on the last line are constituted by a weighted sum of a Dirac impulse and its derivatives, which are zero for \(t>0\). Putting all terms together we obtain for \(t>0\)
$$ u(t) = \mathrm {e}^{tA} \sum _{k=0}^{n-1} A^k B\, x_k . $$
From this we conclude that we can use a suitably designed input signal x to mimic the effect of an arbitrary initial state \(u_0\) if and only if the matrix
$$ {\mathcal {C}} = \begin{pmatrix} B&AB&\cdots&A^{n-1}B \end{pmatrix} $$
is invertible, in which case the weighting factors are
$$ \begin{pmatrix} x_0 \\ x_1 \\ \vdots \\ x_{n-1} \end{pmatrix} = {\mathcal {C}}^{-1} u_0 . $$
The matrix \(\mathcal {C}\) is called the controllability matrix.
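The controllability test is easy to sketch numerically (the diagonal system and input vectors below are illustrative choices, not from the text): the rank of \(\mathcal {C}\) reveals whether every state can be excited from the input:

```python
import numpy as np

def controllability_matrix(A, B):
    """C = [B, AB, A^2 B, ..., A^{n-1} B] for an n-dimensional system."""
    n = A.shape[0]
    cols = [B]
    for _ in range(n - 1):
        cols.append(A @ cols[-1])
    return np.column_stack(cols)

# A diagonal (semi-simple) system: each state is driven directly through B
A = np.diag([-1.0, -2.0])

B_good = np.array([1.0, 1.0])      # both states reachable from the input
B_bad = np.array([1.0, 0.0])       # second state never excited by the input

C_good = controllability_matrix(A, B_good)
C_bad = controllability_matrix(A, B_bad)

assert np.linalg.matrix_rank(C_good) == 2    # controllable
assert np.linalg.matrix_rank(C_bad) == 1     # not controllable
```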
While the state of a system plays an important theoretical and conceptual role, in practice, when dealing with controllable systems, we can always start from the zero state and drive the system into any desired state. Things are completely different for non-controllable systems. As discussed in Sect. 8.6.3, these are systems possessing sub-systems that are not influenced by the input signal. In those systems the initial state may play an important role.
6.2 Observability
Another interesting question is whether it is possible to reconstruct the initial state of a system at time \(t_0\) from the observation of its output at times \(t > t_0\), assuming that A, B, C, D and the input signal x are known. From linearity and knowledge of the input signal we can assume x to be zero. (Alternatively we could compute the part of the output signal due to the input signal, the zero state response of the system, and subtract it from the observed output.) The question is then if we can calculate \(u_0\) from the observation of
$$ y(t) = C\, \mathrm {e}^{tA} u_0 , \qquad t > 0 . $$
Suppose that the system is n-dimensional. Then if we compute the first \(n-1\) derivatives of the output signal we obtain
$$ y^{(k)}(t) = C A^k \mathrm {e}^{tA} u_0 + \cdots , \qquad k = 0, \ldots , n-1 , $$
where in the last equation we have represented by dots a weighted sum of a Dirac pulse and its derivatives as before. Thus, the observation of the output signal and of its first \(n-1\) derivatives at times \(t>0\) allows setting up the following system of equations
$$ \begin{pmatrix} y(t) \\ y'(t) \\ \vdots \\ y^{(n-1)}(t) \end{pmatrix} = \begin{pmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{pmatrix} \mathrm {e}^{tA}\, u_0 . $$
This system of equations can only be solved for \(u_0\) if the matrix
$$ {\mathcal {O}} = \begin{pmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{pmatrix} $$
is not singular. The matrix \(\mathcal {O}\) is called the observability matrix.
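The observability test can be sketched in the same way (again with an illustrative diagonal system, not from the text): the rank of \(\mathcal {O}\) reveals whether every state is visible at the output:

```python
import numpy as np

def observability_matrix(A, C):
    """O = [C; CA; CA^2; ...; CA^{n-1}] for an n-dimensional system."""
    n = A.shape[0]
    rows = [C]
    for _ in range(n - 1):
        rows.append(rows[-1] @ A)
    return np.vstack(rows)

A = np.diag([-1.0, -2.0])

C_good = np.array([[1.0, 1.0]])    # both states visible at the output
C_bad = np.array([[0.0, 1.0]])     # first state never reaches the output

assert np.linalg.matrix_rank(observability_matrix(A, C_good)) == 2  # observable
assert np.linalg.matrix_rank(observability_matrix(A, C_bad)) == 1   # not observable
```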
6.3 Jordan Normal Form
The simplest way to understand the structure of a system that is either not controllable or not observable is by considering the system in Jordan normal form.
Consider a system in the state space representation
$$ \frac{\mathrm {d}u}{\mathrm {d}t} = A u + B x , \qquad y = C u + D x . $$
In linear algebra it is shown that, by choosing a suitable basis, every linear operator can be represented by a matrix of the following block form, called the Jordan normal form
$$ A = \begin{pmatrix} J_1 & & \\ & J_2 & \\ & & \ddots \end{pmatrix} $$
with
$$ J_i = \begin{pmatrix} \lambda _i & 1 & & \\ & \lambda _i & \ddots & \\ & & \ddots & 1 \\ & & & \lambda _i \end{pmatrix} $$
the elementary Jordan matrix. The diagonal elements of \(J_i\) all correspond to the ith eigenvalue \(\lambda _i\) of A. If \(n_i\) denotes the algebraic multiplicity of the eigenvalue \(\lambda _i\) and \(\nu _i\) its geometric multiplicity, then there are \(\nu _i\) Jordan blocks \(J_i\) corresponding to the eigenvalue \(\lambda _i\). Thus, the total number of Jordan blocks corresponds to the number of linearly independent eigenvectors of A. The Jordan normal form of a linear operator is unique up to permutations of the blocks.
A matrix for which the geometric multiplicity equals the algebraic multiplicity for each eigenvalue is called semi-simple. In this case each block \(J_i\) is a \(1\times 1\) matrix and the Jordan normal form reduces to diagonal form.
A system in Jordan normal form can be interpreted as the parallel connection of independent sub-systems, each represented by a Jordan block \(J_i\). Figure 8.2 shows the block diagram for a system with a simple eigenvalue \(\lambda _0\) and a double eigenvalue \(\lambda _1\) with \(\nu _1 = 1\). From the figure it is easy to see that if \(b_0 = 0\) then the state variable \(u_0\) cannot be excited by the input signal x. The same is true for \(u_2\) if \(b_2 = 0\). In either case the system is not controllable. One can check that these are the two conditions under which the determinant of the matrix \(\mathcal {C}\) vanishes.
In a similar way the figure shows that if \(c_0 = 0\) there is no path from \(u_0\) to the output of the system and for \(c_1 = 0\) there is no path from \(u_1\). These are the two cases under which the system is not observable and correspond to the two conditions under which the determinant of the matrix \(\mathcal {O}\) vanishes.
From these considerations we conclude that a non-observable system includes a sub-system whose output does not reach the global system output as schematically depicted in Fig. 8.3b. A non-controllable system includes a sub-system that is not reached by the input signal as schematically depicted in Fig. 8.3a.
Example 8.2: Jordan Block
Consider the system described by the following state-space representation
with
We want to compute an explicit expression for the exponential matrix \(\mathrm{{e}}^{t A}\) allowing us to compute the response of the system to an arbitrary input signal x.
The matrix
is an elementary Jordan matrix and cannot be transformed into a diagonal matrix by a similarity transformation. In fact, as can be seen from the characteristic polynomial
$$ \det (\lambda I - A) = (\lambda - \omega _{3dB})^2 , $$
the matrix has a single eigenvalue \(\lambda = \omega _{3dB}\) with an algebraic multiplicity of 2, and the eigenspace belonging to this eigenvalue has dimension 1.
The matrix A can however be written as the sum of a diagonal matrix \(A_d\) and a particularly simple matrix \(A_c\)
Observe that the matrices \(A_d\) and \(A_c\) do commute. For this reason we can use the following property of the exponential matrix
$$ \mathrm {e}^{tA} = \mathrm {e}^{t(A_d + A_c)} = \mathrm {e}^{tA_d}\, \mathrm {e}^{tA_c} . $$
Since \(A_d\) is diagonal, the first exponential matrix \(\mathrm{{e}}^{t A_d}\) is easily calculated to be
The second exponential matrix \(\mathrm{{e}}^{t A_c}\) is easily calculated from the series defining the exponential matrix by noting that the square of the matrix \(A_c\) vanishes
Putting these results together we obtain
The above method can be used to calculate the exponential of any elementary Jordan matrix with the only modification that for an \(n\times n\) matrix A it is the nth power of the matrix \(A_c\) that vanishes.
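The decomposition used in the example can be verified numerically for a generic \(2\times 2\) elementary Jordan matrix (the eigenvalue and evaluation time below are arbitrary illustrative values, not those of the example):

```python
import numpy as np

lam, t = -2.0, 0.7            # eigenvalue and evaluation time (example values)

# Elementary 2x2 Jordan matrix split as A = A_d + A_c with commuting parts
A_d = np.array([[lam, 0.0], [0.0, lam]])   # diagonal part
A_c = np.array([[0.0, 1.0], [0.0, 0.0]])   # nilpotent part: A_c @ A_c = 0

assert np.allclose(A_c @ A_c, 0.0)
assert np.allclose(A_d @ A_c, A_c @ A_d)   # the parts commute

# e^{t A_d} is diagonal; e^{t A_c} = I + t A_c because the series terminates
e_td = np.exp(lam * t) * np.eye(2)
e_tc = np.eye(2) + t * A_c

# e^{tA} = e^{t A_d} e^{t A_c} = e^{lam t} [[1, t], [0, 1]]
e_tA = e_td @ e_tc
expected = np.exp(lam * t) * np.array([[1.0, t], [0.0, 1.0]])
assert np.allclose(e_tA, expected)

# Cross-check against the defining power series of e^{tA}
A = A_d + A_c
S, term = np.eye(2), np.eye(2)
for k in range(1, 40):
    term = term @ (t * A) / k
    S = S + term
assert np.allclose(S, e_tA)
```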
In the following we are always going to assume that the systems under consideration are controllable and observable.
Beffa, F. (2024). Linear Time Invariant Systems. In: Weakly Nonlinear Systems. Understanding Complex Systems. Springer, Cham. https://doi.org/10.1007/978-3-031-40681-2_8
Print ISBN: 978-3-031-40680-5
Online ISBN: 978-3-031-40681-2