# Switched Differential Algebraic Equations

Chapter
Part of the Advances in Industrial Control book series (AIC)

## Abstract

In this chapter, an electrical circuit with switches is modelled as a switched differential algebraic equation (switched DAE), i.e. each mode is described by a DAE of the form Ex′=Ax+Bu, where E is, in general, a singular matrix and u is the input. The resulting time-variance follows from the action of the switches present in the circuit, but can also be induced by faults occurring in the circuit. In general, switches or component faults induce jumps in certain state variables, and it is common to define additional jump maps based on physical arguments. However, it turns out that the formulation as a switched DAE already implicitly defines these jumps; no additional jump map needs to be given. In fact, an easy way to calculate these jumps in terms of the consistency projectors will be presented. It turns out that solutions of general switched DAEs can contain not only jumps but also Dirac impulses and/or their derivatives. In order to capture this impulsive behaviour, the space of piecewise-smooth distributions is used as the underlying solution space. Within this solution space it is possible to show existence and uniqueness of solutions of switched DAEs, including the uniqueness of the jumps induced by the switches. With the help of the consistency projectors, a condition is formulated that characterises whether a switch (or fault) can induce jumps or even Dirac impulses in the solutions. Furthermore, stability of the switched DAE is studied; again the consistency projectors play an important role.

## 6.1 Introduction

In this chapter, an electrical circuit with switches is modelled as a switched differential algebraic equation (switched DAE; DAEs are also known as descriptor systems or singular systems):
$$E_{\sigma(t)}\dot{x}(t) = A_{\sigma(t)} x(t) + B_{\sigma(t)} u(t),\quad\text{or in short form}\quad E_\sigma\dot{x} = A_\sigma x + B_\sigma u,$$
(6.1)
where $\sigma:{\mathbb {R}}\to\{1,\ldots,{\bar {p}}\}$ is the switching signal, $E_p,A_p\in {\mathbb {R}}^{n\times n}$, $B_p\in {\mathbb {R}}^{n\times m}$ for $$p\in\{1,\ldots,{\bar {p}}\}$$, $${\bar {p}}\in {\mathbb {N}}$$ is the number of different subsystems, $x:{\mathbb {R}}\to {\mathbb {R}}^n$ is the state variable and $u:{\mathbb {R}}\to {\mathbb {R}}^m$ is the input. In other words, the system is modelled as a time-varying linear differential-algebraic equation whose coefficient matrices are piecewise-constant. The time-variance follows from the action of the switches present in the circuit, but can also be induced by faults occurring in the circuit. Hence the proposed framework can be used to study the behaviour of the circuit for nominal switching as well as in the case of sudden component faults.

For a motivation and an illustration of the notation, consider the following example.

### Example 6.1.1

(ODE vs. DAE description)

Consider the two simple circuits as shown in Fig. 6.1.
Standard circuit analysis of these two circuits yields the following two ordinary differential equations (ODEs)
$$\frac {\mathrm {d}}{{\mathrm {d}}t}i_L = \frac{1}{L} u$$
and
\begin{aligned} \frac {\mathrm {d}}{{\mathrm {d}}t}i_L &= -\frac{1}{L} v_C, \\ \frac {\mathrm {d}}{{\mathrm {d}}t}v_C &= \frac{1}{C} i_L, \end{aligned}
i.e. each circuit is modelled as an ODE of the form $$\dot{x}=Ax+Bu$$ (possibly with B=0), where x=iL or x=(iL,vC).
Now assume that the two circuits from Fig. 6.1 originate from the analysis of the two modes of the circuit with a switch as shown in Fig. 6.2. Obviously, it is not possible to directly model the overall switched circuit as a switched ODE of the form
$$\dot{x}(t) = A_{\sigma(t)} x(t) + B_{\sigma(t)} u(t)$$
because the two modes are modelled with different state variables, in particular, the variable vC does not exist in the first mode’s description at all. Furthermore, it is not clear from the ODE description alone how vC is initialised after a switch. The underlying problem is that standard circuit analysis eliminates algebraic constraints (like the Kirchhoff’s law) to obtain an ODE description. However, in the presence of switches this elimination might be different for the different modes, so that the resulting ODEs are not compatible anymore. If, on the other hand, the algebraic constraints are not eliminated then this problem disappears. This approach leads to a DAE description of the form $$E\dot{x}=Ax+Bu$$ for each mode; for the two modes of the switched circuit from Fig. 6.2, these are given by, with x=(iL,vL,iC,vC), and Now the behaviour of the circuit can directly be modelled as a switched DAE of the form (6.1). In addition, if the switch does not change the modes instantaneously but spends some time between the two contacts then this can easily be modelled with a third mode given by

In general, switches or component faults induce jumps in certain state variables, and it is common to define additional jump maps based on physical arguments [9]. However, it turns out that the appropriate formulation as a switched DAE already implicitly defines these jumps; no additional jump map needs to be given. In fact, an easy way to calculate these jumps will be presented in terms of the consistency projectors.

In order to allow for jumps in the solution, the problem is embedded into a distributional solution framework. It turns out that general switched DAEs can have not only jumps in the solutions but also Dirac impulses and/or their derivatives. This is in agreement with a simple physical observation (see also the last part of Example 6.1.1): when connecting a coil via a switch to a constant voltage source, one can observe a spark when opening the switch, which can be explained by a voltage peak induced by the rapid drop of the current in the coil. For ideal elements this peak is, in fact, a Dirac impulse, because it is the (distributional) derivative of a jump. Unfortunately, it is not possible to simply take the classical distribution space as formally introduced by Schwartz in the 1950s [31] as a solution space for switched DAEs. Roughly speaking, the reason is that this space is too large. For example, it is not possible to define a restriction of a general distribution to some interval or, equivalently, to multiply a distribution with a piecewise-constant function, but these operations are needed to formulate the problem as a switched DAE. To overcome this problem, the smaller space of piecewise-smooth distributions [33, 34] is considered as the underlying solution space.

With the right underlying solution space it is possible to study existence and uniqueness of solutions of switched DAEs. The latter is strongly related to the so-called regularity of the coefficient matrices and it will turn out that regularity of the matrix pairs is necessary and sufficient for the existence of unique solutions. A remarkable consequence of this general existence and uniqueness result of solutions of switched DAEs is the above mentioned property that the jumps induced by switches (or faults) are already determined uniquely by the switched DAE description.

With the help of the aforementioned consistency projectors it is easy to formulate conditions determining whether a switch (or fault) can induce jumps or even Dirac impulses in the solutions. This has important applications, e.g. in the design of fault-tolerant systems, because if a component fault can induce Dirac impulses in the solution, the resulting peak in the voltages or currents of the circuit might destroy other components, possibly leading to a cascading total destruction of the circuit.

In the context of control theory, asymptotic stability of a switched DAE is an important property. It is well known for switched systems (not in DAE form) that switching between stable subsystems can yield an unstable overall system. This is not the case when one is able to find a common Lyapunov function. It will be shown that this result can be generalised to switched DAEs. The consistency projectors play a prominent role again.

The proposed framework also has limitations. The major drawback is that the distributional solution framework cannot be used in a general nonlinear context. In particular, state-dependent switching is not covered by the presented theory. Since diodes yield state-dependent switching and play an important role in power converters, extensions of the presented theory to encompass at least diodes are a topic of ongoing research.

## 6.2 Mathematical Preliminaries: Distribution Theory

Before introducing piecewise-smooth distributions in the second part of this section, classical distributions as formalised by Schwartz [31] are summarised and important properties are highlighted. A (real-valued) distribution is a continuous linear map from the space of test functions $\mathcal{C}^\infty_0$ into the real numbers ℝ, where $\mathcal{C}^\infty_0$ is the set of all functions φ:ℝ→ℝ which are smooth (i.e. arbitrarily often differentiable) and are zero outside some compact set. Continuity of a distribution is defined in terms of a certain topology on the space of test functions $\mathcal{C}^\infty_0$; however, this topology is rarely used. Instead, one works with the following characterisation of continuity.

### Lemma 6.2.1

A linear map $D:\mathcal{C}^\infty_0\to {\mathbb {R}}$ is continuous if and only if limn→∞D(φn)=0 for all sequences $(\varphi_n)_{n\in {\mathbb {N}}}$ in $\mathcal{C}^\infty_0$ with the following properties:
(C1)

$\exists$ compact K⊆ℝ ∀n∈ℕ ∀t∉K: φn(t)=0, and

(C2)

$$\forall i\in {\mathbb {N}}:\ \lim_{n\to\infty}\|\varphi^{(i)}_{n}\|_{\infty}=0$$,

where ∥⋅∥∞ denotes the supremum norm of a (bounded) function.
The space of all distributions is denoted by $${\mathbb {D}}$$, i.e.
$${\mathbb {D}}:= \bigl\{\, D:\mathcal{C}^\infty_0\to {\mathbb {R}}\ \bigm|\ D \text{ is linear and continuous}\,\bigr\}.$$
Distributions are also called generalised functions because of the following result, which shows that the fairly large class of locally integrable functions f:ℝ→ℝ (i.e. $\int_K |f|<\infty$ for any compact K⊆ℝ) is a “subspace” of $${\mathbb {D}}$$.

### Lemma 6.2.2

Each locally integrable f:ℝ→ℝ induces a distribution $$f_{{\mathbb {D}}}\in {\mathbb {D}}$$ given by
$$f_{\mathbb {D}}(\varphi) := \int_{\mathbb {R}}f\varphi,$$
and the correspondence is one-to-one in the following sense:
$$f_{\mathbb {D}}= g_{\mathbb {D}}\quad\Leftrightarrow\quad f=g \text{ \textit{almost everywhere}}.$$
Distributions induced by locally integrable functions are called regular distributions.

A very important and useful property of distributions is that all distributions have a derivative within $${\mathbb {D}}$$.

### Lemma 6.2.3

The distributional derivative of $$D\in {\mathbb {D}}$$ is given by
$$D'(\varphi) := -D(\varphi')$$
and is a distribution again. Furthermore, for differentiable f:ℝ→ℝ it holds that $$(f')_{{\mathbb {D}}}= (f_{{\mathbb {D}}})'$$.
Finally, distributions can be multiplied with smooth functions α:ℝ→ℝ as follows:
$$(\alpha D)(\varphi) := D(\alpha\varphi),$$
and again this generalises the multiplication of functions: $$(\alpha f)_{{\mathbb {D}}}= \alpha f_{{\mathbb {D}}}$$. Furthermore, the product rule for derivatives holds, i.e.
$$(\alpha D)' = \alpha' D + \alpha D'.$$
The most famous distribution which is not induced by a function, and which can be seen as the initiating object for the study of distributions in the first place, is the Dirac impulse (or Dirac delta) given by
$$\delta(\varphi) := \varphi(0).$$
In general, the Dirac impulse at t∈ℝ is given by δt(φ):=φ(t). The Dirac impulse is the (distributional) derivative of the Heaviside step function $\mathbb{1}_{[0,\infty)}$, i.e. $\delta = \bigl((\mathbb{1}_{[0,\infty)})_{\mathbb {D}}\bigr)'$.
The main contribution of Schwartz was the embedding of the Dirac impulse into a general functional analytical framework (viewing it as a linear operator on the space of test functions). However, this approach yields a very large space of distributions in which many distributions do not have intuitive properties and cannot be handled easily. For example, there exist continuous functions which are nowhere differentiable; in the distributional framework such a function has a derivative, but there is no intuition as to what this distribution looks like. This existence of “weird” distributions makes it impossible to simply plug a distribution x into the switched DAE (6.1), because it is not clear how the product of the piecewise-constant coefficient matrices Eσ and Aσ with $$\dot{x}$$ or x should be defined. If one rewrites (6.1) with the help of restrictions to intervals as
$$(E_{p_i} \dot{x})_{[t_i,t_{i+1})} = (A_{p_i} x + B_{p_i} u)_{[t_i,t_{i+1})},\quad\forall i\in {\mathbb {Z}},$$
(6.2)
where σ(t)=pi for t∈[ti,ti+1) and i∈ℤ then the following remark shows that there is no suitable definition for the terms in (6.1) in a general distributional framework.

### Remark 6.2.4

Consider the distribution
$$D=\sum_{i\in {\mathbb {N}}} d_i\delta_{d_i}, \quad d_i:=\frac{(-1)^i}{i+1},\ i\in {\mathbb {N}}.$$
The restriction to the interval (0,∞) should then be
$$D_{(0,\infty)} = \sum_{k\in {\mathbb {N}}} \frac{1}{2k+1} \delta_{\frac{1}{2k+1}},$$
but for any test function φ with φ(0)≠0 the infinite sum does not converge; hence the restriction is not well defined.
The problem in the above counterexample is the accumulation of Dirac impulses at zero. If, however, the Dirac impulses are isolated then it is straightforward to define a restriction to intervals. For example, the restriction of the Dirac impulse δ to the closed interval [0,∞) should be the Dirac impulse itself, while the restriction to the open interval (0,∞) should be the zero distribution. In order to be able to define a restriction for distributions, it therefore seems reasonable to first consider the space of piecewise-regular distributions
$${\mathbb {D}}_{\mathrm {pw}\mathrm {reg}}:=\left \{D = f_{\mathbb {D}}+ \sum _{t\in T} D_{t}\ \left \vert \vphantom {D = f_{\mathbb {D}}+ \sum _{t\in T} D_{t}}\ \begin{array}{l} f\text{ is locally integrable, }\ T\subset {\mathbb {R}}\ \text{locally finite}, \\[5pt] \forall\, t\in T: D_t\in \operatorname {span}\bigl\{\delta_t, \delta_t',\delta_t'', \ldots\bigr\} \end{array} \right .\right \},$$
where $$\operatorname {span}\{\delta_{t},\delta_{t}',\delta_{t}'',\ldots\}$$ denotes the set of all finite linear combinations of the Dirac impulse at t and its derivatives. For a piecewise-regular distribution $$D=f_{{\mathbb {D}}}+\sum_{t\in T}D_{t}$$, the restriction to some interval M⊆ℝ is simply defined by
$$D_M:= (f_M)_{\mathbb {D}}+ \sum _{t\in T\cap M} D_t,$$
where fM:ℝ→ℝ is the restriction of f given by fM(t)=f(t) if tM and fM(t)=0 otherwise. Although this space is suitable for defining a restriction, it is still too big as a solution space for the switched DAE (6.1) because of the following two reasons:
1. $${\mathbb {D}}_{\mathrm {pw}\mathrm {reg}}$$ is not closed under differentiation, i.e. $$x\in {\mathbb {D}}_{\mathrm {pw}\mathrm {reg}}$$ does not imply $$\dot{x}\in {\mathbb {D}}_{\mathrm {pw}\mathrm {reg}}$$; in particular, the restriction of $$\dot{x}$$ is still not well defined, and

2. it is not possible to specify initial values, since none of x(t), x(t−) and x(t+) is a well-defined quantity for $$x\in {\mathbb {D}}_{\mathrm {pw}\mathrm {reg}}$$.
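The restriction operation defined above can be made concrete in code. The following is a minimal toy sketch (the class name and internal representation are inventions for this illustration, not part of the theory): a piecewise-regular distribution is stored as a regular part given by a callable plus isolated impulsive parts, and restriction truncates the regular part and keeps only the impulses lying inside the interval.

```python
class PwRegularDistribution:
    """Toy model of D = f_D + sum_{t in T} D_t: a regular part given by a
    callable f and impulsive parts {t: [c0, c1, ...]} standing for
    c0*delta_t + c1*delta_t' + ...  (finitely many impulses only)."""

    def __init__(self, f, impulses=None):
        self.f = f
        self.impulses = dict(impulses or {})

    def restrict(self, a, b, closed_left=True):
        """Restriction to [a, b) (or (a, b) if closed_left is False)."""
        def inside(t):
            return (t >= a if closed_left else t > a) and t < b

        def f_M(t):  # truncated regular part: f on the interval, 0 outside
            return self.f(t) if inside(t) else 0.0

        return PwRegularDistribution(
            f_M, {t: c for t, c in self.impulses.items() if inside(t)})


# The Dirac impulse delta: zero regular part, one impulse at t = 0.
delta = PwRegularDistribution(lambda t: 0.0, {0.0: [1.0]})
```

As in the text, restricting δ to the closed interval [0,∞) keeps the impulse, while restricting to the open interval (0,∞) yields the zero distribution.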

Cobb [5] solved the second problem by considering the space of piecewise-continuous distributions, i.e. he considered piecewise-regular distributions $$D=f_{{\mathbb {D}}}+\sum_{t\in T}D_{t}$$ with piecewise-continuous f and defined D(t−):=f(t−) and D(t+):=f(t+). However, Cobb seems to have overlooked the first problem. The following definition of the space of piecewise-smooth distributions resolves this problem.

### Definition 6.2.5

(Piecewise-smooth distributions, [33, 34])

First, define the space of piecewise-smooth functions as
$$\mathcal{C}^\infty_{\mathrm{pw}} := \left\{\, f = \sum_{i\in {\mathbb {Z}}} (f_i)_{[t_i,t_{i+1})}\ \middle|\ \begin{array}{l} \{\,t_i\in {\mathbb {R}}\mid i\in {\mathbb {Z}}\,\}\ \text{locally finite with } t_i<t_{i+1}, \\[3pt] \forall\, i\in {\mathbb {Z}}:\ f_i\in\mathcal{C}^\infty({\mathbb {R}}\to {\mathbb {R}}) \end{array} \right\}.$$
The space of piecewise-smooth distributions is defined as
$${\mathbb {D}}_{\mathrm {pw}\mathcal{C}^\infty} := \left\{\, D = f_{\mathbb {D}}+ \sum_{t\in T} D_t\ \middle|\ \begin{array}{l} f\in\mathcal{C}^\infty_{\mathrm{pw}},\ T\subset {\mathbb {R}}\ \text{locally finite}, \\[3pt] \forall\, t\in T:\ D_t\in \operatorname {span}\bigl\{\delta_t,\delta_t',\delta_t'',\ldots\bigr\} \end{array} \right\}.$$
The space of piecewise-smooth distributions combines the idea of Cobb’s piecewise-continuous distributions (position of impulses not fixed a priori) with that of the space of impulsive-smooth distributions (closed under differentiation) introduced in [17] in the context of optimal control and later used for studying DAEs, see e.g. [13, 29]. The latter allows Dirac impulses and their derivatives only at zero, but it was already suggested in [14] (without carrying out the details) to allow Dirac impulses everywhere (without accumulation points). Another viewpoint of piecewise-smooth distributions is based on the axiomatic definition of general distributions as locally finite derivatives of continuous functions (see e.g. [8]), because any piecewise-smooth distribution can be represented locally as a finite derivative of a piecewise-smooth function. One important feature of the piecewise-smooth distributions is the existence of the Fuchssteiner multiplication [10, 11, 33], which defines an associative (but not commutative) multiplication between two arbitrary piecewise-smooth distributions and which fulfils Leibniz’s product rule. Here this multiplication will not be used in its full generality, because for studying (6.1) only the product of a piecewise-smooth function with a piecewise-smooth distribution is needed. The latter is given by
$$\alpha D := \sum_{i\in {\mathbb {Z}}} (\alpha_i D)_{[t_i,t_{i+1})},$$
where $$\alpha=\sum_{i\in {\mathbb {Z}}}(\alpha_{i})_{[t_{i},t_{i+1})}$$ with smooth $\alpha_i:{\mathbb {R}}\to {\mathbb {R}}$. The matrix–vector product in (6.1) is defined in the classical way, i.e. for $A\in(\mathcal{C}^\infty_{\mathrm{pw}})^{n\times n}$ and $x\in({\mathbb {D}}_{\mathrm {pw}\mathcal{C}^\infty})^n$ the product $Ax\in({\mathbb {D}}_{\mathrm {pw}\mathcal{C}^\infty})^n$ is a vector of sums of scalar products. Note that the Fuchssteiner multiplication makes the standard formulation (6.1) of a switched DAE equivalent to the restriction-based formulation (6.2).

### Definition 6.2.6

(Point-wise evaluation)

Let t∈ℝ and $$D=f_{{\mathbb {D}}}+\sum_{\tau\in T} D_{\tau}$$ then the left/right evaluation of D at t is given by
$$D(t-):=f(t-)=\lim_{{\varepsilon }\searrow0} f(t-{\varepsilon }),\qquad D(t+):=f(t+)=f(t)$$
and the impulsive part of D at t is
$$D[t]:= \begin{cases} D_t& \text{if } t\in T, \\ 0&\text{if } t\notin T. \end{cases}$$

## 6.3 Regularity of Matrix Pairs

It is assumed that each mode of the switched DAE (6.1) is described by a regular matrix pair (E,A). In this section, some properties and consequences of regularity are collected.

### Definition 6.3.1

(Regularity)

A matrix pair $(E,A)\in {\mathbb {R}}^{m\times n}\times {\mathbb {R}}^{m\times n}$, n,m∈ℕ, is called regular if and only if m=n and the polynomial $\det(sE-A)$ is not the zero polynomial.
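Since det(sE−A) is a polynomial of degree at most n, it is the zero polynomial if and only if it vanishes at n+1 distinct points. This observation yields a simple numerical test; the sketch below (sample points and tolerance are ad hoc choices of this illustration, and floating-point determinants make it a heuristic rather than a proof) exploits exactly that fact.

```python
import numpy as np

def is_regular(E, A, tol=1e-8):
    """Heuristic regularity test for the pair (E, A): evaluate the
    polynomial det(sE - A) (degree <= n) at n + 1 distinct sample points;
    it is the zero polynomial iff all n + 1 values vanish."""
    E, A = np.asarray(E, dtype=float), np.asarray(A, dtype=float)
    if E.shape != A.shape or E.shape[0] != E.shape[1]:
        return False  # regularity requires square matrices of equal size
    n = E.shape[0]
    samples = 1.0 + np.arange(n + 1)  # s = 1, 2, ..., n + 1
    return any(abs(np.linalg.det(s * E - A)) > tol for s in samples)
```

For the pair E=diag(1,0), A=diag(−1,1) one has det(sE−A)=−(s+1), so the pair is regular; replacing A by diag(1,0) gives det(sE−A)≡0, i.e. a non-regular pair.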

In the following theorem, characterisations of regularity of (E,A) are given which highlight the importance of regularity of (E,A) with respect to solvability of the corresponding DAE $$E\dot{x}=Ax+f$$. Note that here “solution” refers to a classical solution, i.e. a differentiable function x fulfilling the DAE.

### Theorem 6.3.2

(Characterisations of regularity)

The following statements are equivalent for matrix pairs (E,A) with square matrices E,A∈ℝn×n:
1. The matrix pair (E,A) is regular.

2. There exist invertible matrices S,T∈ℝn×n such that (E,A) is transformed into a quasi-Weierstrass form
$$(SET, SAT) = \left(\begin{bmatrix} I & 0 \\ 0 & N \end{bmatrix}, \begin{bmatrix} J & 0 \\ 0 & I \end{bmatrix}\right),$$
(6.3)
where J is some matrix and N is a nilpotent matrix.

3. For all smooth f:ℝ→ℝn there exists a solution x of $$E\dot{x}=Ax+f$$ and x is uniquely determined by the value x(t0) for any fixed t0∈ℝ.

4. The only solution x of $$E\dot{x}=Ax$$ with x(0)=0 is the trivial solution.

### Proof

1 ⇔ 2 This is a classical result going back to Weierstrass [36], for a proof see e.g. the textbook [19]. The prefix “quasi” in “quasi-Weierstrass form” is used because it is not assumed here that J and N are in Jordan canonical form [2].

2 ⇒ 3 It suffices to show that $$\dot{v}=Jv+f_{1}$$ and $$N\dot{w}=w+f_{2}$$ are solvable for all smooth f1,f2 and that the solutions are uniquely determined by the values v(t0) and w(t0). Classical ODE theory provides this already for v. Consider the operator $N\frac {\mathrm {d}}{{\mathrm {d}}t}-I$ acting on smooth functions, which is invertible with inverse given by
$$\biggl(N\frac {\mathrm {d}}{{\mathrm {d}}t}-I\biggr)^{-1}=-\sum_{i=0}^{\nu-1} \biggl(N\frac {\mathrm {d}}{{\mathrm {d}}t}\biggr)^i,$$
where ν∈ℕ is the nilpotency index of N, i.e. ν is the minimal value such that Nν=0. Hence the unique solution of $$N\dot{w}=w+f_{2}$$ is given by
$$w=-\sum_{i=0}^{\nu-1}\biggl(N\frac {\mathrm {d}}{{\mathrm {d}}t}\biggr)^i(f_2)=- \sum_{i=0}^{\nu-1} N^i f_2^{(i)}.$$

3 ⇒ 4 Choosing f=0 in 3 implies 4.

4 ⇒ 1 This is shown in [19, Thm. 2.14]. The basic idea is to choose n+1 different pairs $(\lambda_i,v_i)\in {\mathbb {R}}\times {\mathbb {R}}^n\setminus\{0\}$, i=1,…,n+1, such that $(\lambda_i E - A)v_i=0$ and a vanishing nontrivial linear combination $$\sum_{i=1}^{n+1}\alpha_{i} v_{i} =0$$ for αi∈ℝ not all zero. Then $$x(t)=\sum_{i=1}^{n+1} \alpha_{i} v_{i} e^{\lambda_{i} t}$$ is not identically zero but solves $$E\dot{x}=Ax$$, x(0)=0. □

Since the quasi-Weierstrass form (6.3) will play an important role in the following, a convenient method to obtain the transformation matrices S,T is presented in the following theorem, which utilises the Wong sequences (named after [37]).

### Theorem 6.3.3

(Wong sequences and the quasi-Weierstrass form [1, 2])

Let (E,A) be a regular matrix pair. Define the Wong sequences of subspaces by
$$\mathcal{V}_0 := {\mathbb {R}}^n,\qquad \mathcal{V}_{i+1} := A^{-1}(E\mathcal{V}_i),\qquad \mathcal{W}_0 := \{0\},\qquad \mathcal{W}_{i+1} := E^{-1}(A\mathcal{W}_i),\quad i\in {\mathbb {N}},$$
where $M^{-1}(\mathcal{S})$ denotes the pre-image of the set $\mathcal{S}\subseteq {\mathbb {R}}^n$ under the matrix M∈ℝn×n and $M\mathcal{S}$ denotes the image of $\mathcal{S}$ under M. These sequences become stationary after finitely many steps; denote the limits by $\mathcal{V}^*$ and $\mathcal{W}^*$. Choose full rank matrices V,W such that $\operatorname{im} V = \mathcal{V}^*$ and $\operatorname{im} W = \mathcal{W}^*$; then
$$T := [V,W],\qquad S := [EV,AW]^{-1}$$
are invertible and put (E,A) into a quasi-Weierstrass form (6.3).
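Theorem 6.3.3 translates directly into a small numerical procedure. The sketch below is one possible implementation (helper names and the SVD-based subspace computations are ad hoc choices of this illustration): it iterates the Wong sequences until the dimensions become stationary and assembles T = [V, W] and S = [EV, AW]⁻¹.

```python
import numpy as np

def _colspace(M, tol=1e-10):
    """Orthonormal basis of im(M)."""
    if M.shape[1] == 0:
        return np.zeros((M.shape[0], 0))
    U, s, _ = np.linalg.svd(M)
    return U[:, : int(np.sum(s > tol))]

def _kernel(M, tol=1e-10):
    """Orthonormal basis of ker(M)."""
    U, s, Vt = np.linalg.svd(M)
    return Vt[int(np.sum(s > tol)):].T

def _preimage(M, V, tol=1e-10):
    """Basis of M^{-1}(im V) = ker(P M), with P the orthogonal
    projector onto (im V)^perp."""
    U = _colspace(V, tol)
    P = np.eye(M.shape[0]) - U @ U.T
    return _kernel(P @ M, tol)

def wong_qwf(E, A, tol=1e-10):
    """Wong limits V*, W* and quasi-Weierstrass transformation
    T = [V, W], S = [EV, AW]^{-1} for a regular pair (E, A)."""
    E, A = np.asarray(E, float), np.asarray(A, float)
    n = E.shape[0]
    V = np.eye(n)                      # V_0 = R^n
    while True:
        Vn = _preimage(A, E @ V, tol)  # V_{i+1} = A^{-1}(E V_i)
        if Vn.shape[1] == V.shape[1]:
            break                      # nested spaces: equal dim => stationary
        V = Vn
    W = np.zeros((n, 0))               # W_0 = {0}
    while True:
        Wn = _preimage(E, A @ W, tol)  # W_{i+1} = E^{-1}(A W_i)
        if Wn.shape[1] == W.shape[1]:
            break
        W = Wn
    T = np.hstack([E @ V * 0 + V, W]) if False else np.hstack([V, W])
    S = np.linalg.inv(np.hstack([E @ V, A @ W]))
    return T, S, V.shape[1]            # n1 = dim V*
```

For the pair E=diag(1,0), A=diag(−1,1) one obtains 𝒱*=span{e1}, 𝒲*=span{e2}, and S E T, S A T indeed take the block-diagonal form (6.3) with J=−1 and N=0.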

## 6.4 Explicit Solution Formula for Non-switched DAE

In this section, two explicit formulas for the solutions of the DAE
$$E\dot{x}=Ax+f$$
(6.4)
are presented. The first is based on certain projectors defined with the help of the quasi-Weierstrass form (6.3); the second is based on the Drazin inverses of E and A. The Drazin inverse approach only works when E and A commute, i.e. EA=AE must hold. This is in general not true, but by multiplying both matrices with $(\lambda_0 E - A)^{-1}$ for some λ0 with $\det(\lambda_0 E - A)\neq 0$ this assumption can always be achieved [3].

### Definition 6.4.1

(Consistency, differential and impulse projector, [32])

Consider a regular matrix pair (E,A) and its quasi-Weierstrass form (6.3) obtained by corresponding transformation matrices S,T∈ℝn×n. Let the block sizes in the quasi-Weierstrass form be n1×n1 and n2×n2. The consistency projector is given by
$$\varPi _{(E,A)} := T\begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix}T^{-1},$$
the differential projector is given by
$$\varPi ^{\mathrm {diff}}_{(E,A)} := T\begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix}S,$$
and the impulse projector is given by
$$\varPi ^{\mathrm {imp}}_{(E,A)} := T\begin{bmatrix} 0 & 0 \\ 0 & I \end{bmatrix}S,$$
where I is the identity matrix of size n1×n1 for the consistency and differential projectors and of size n2×n2 for the impulse projector.

Note that, in contrast to the consistency projector, the differential and impulse projectors are not projectors in the usual sense because they are in general not idempotent matrices. Furthermore, it is not difficult to see that the definition of the projectors does not depend on the specific choices of the transformations S and T.
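For illustration, the three projectors can be assembled directly from quasi-Weierstrass data. The matrices T, S and the block sizes below are hypothetical values chosen for this sketch; any transformation pair obtained e.g. via the Wong sequences could be used instead.

```python
import numpy as np

# Hypothetical quasi-Weierstrass data: n1 = n2 = 1, T and S invertible.
T = np.array([[1.0, 1.0],
              [0.0, 1.0]])
S = np.array([[1.0, 0.0],
              [2.0, 1.0]])
n1, n2 = 1, 1
Tinv = np.linalg.inv(T)
sel_1 = np.diag([1.0, 0.0])   # diag(I_{n1}, 0)
sel_2 = np.diag([0.0, 1.0])   # diag(0, I_{n2})

Pi      = T @ sel_1 @ Tinv    # consistency projector (idempotent)
Pi_diff = T @ sel_1 @ S       # differential projector
Pi_imp  = T @ sel_2 @ S       # impulse projector (in general not idempotent)
```

A quick check confirms the remark above: Π is idempotent, while the impulse projector for these data is not.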

### Definition 6.4.2

(Drazin inverse)

Let M∈ℝn×n. Any matrix $M^D\in {\mathbb {R}}^{n\times n}$ is called a Drazin inverse of M if
1. $MM^D = M^D M$,

2. $M^D M M^D = M^D$, and

3. $\exists \nu\in {\mathbb {N}}:\ M^D M^{\nu+1}=M^\nu$.

### Lemma 6.4.3

For all M∈ℝn×n there exists a unique Drazin inverse $M^D$. Furthermore, if
$$M = T\begin{bmatrix} C & 0 \\ 0 & N \end{bmatrix}T^{-1},$$
where T∈ℝn×n is invertible, $$C\in {\mathbb {R}}^{n_{1}\times n_{1}}$$, n1∈ℕ, is an invertible matrix and $$N\in {\mathbb {R}}^{n_{2}\times n_{2}}$$, n2=n−n1, is nilpotent, then
$$M^D = T\begin{bmatrix} C^{-1} & 0 \\ 0 & 0 \end{bmatrix}T^{-1}.$$
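Lemma 6.4.3 gives a direct recipe for computing the Drazin inverse once M is available in the block form above. A sketch with hypothetical data (T, C, N are chosen for illustration only):

```python
import numpy as np

def blkdiag2(A, B):
    """Block diagonal of two square blocks."""
    n1, n2 = A.shape[0], B.shape[0]
    M = np.zeros((n1 + n2, n1 + n2))
    M[:n1, :n1], M[n1:, n1:] = A, B
    return M

# Hypothetical data: M = T diag(C, N) T^{-1}, C invertible, N nilpotent.
T = np.array([[1.0, 1.0],
              [0.0, 1.0]])
C = np.array([[2.0]])         # invertible 1x1 block
N = np.array([[0.0]])         # nilpotent 1x1 block

Tinv = np.linalg.inv(T)
M  = T @ blkdiag2(C, N) @ Tinv
MD = T @ blkdiag2(np.linalg.inv(C), np.zeros_like(N)) @ Tinv  # Lemma 6.4.3
```

The three defining properties of Definition 6.4.2 can then be checked numerically (here with ν = 1, since N is 1×1 and zero).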

The solution of the DAE (6.4) based on the above defined projectors as well as on the Drazin inverse is now given in the following theorem.

### Theorem 6.4.4

(Explicit solution formula)

Let (E,A) be a regular matrix pair and let Π(E,A), $$\varPi ^{\mathrm {diff}}_{(E,A)}$$ and $$\varPi ^{\mathrm {imp}}_{(E,A)}$$ be the consistency, differential and impulse projector, respectively, as in Definition 6.4.1. Furthermore, let
$$A^\mathrm {diff}:= \varPi _{(E,A)}^\mathrm {diff}A\quad\text{\textit{and}}\quad E^\mathrm {imp}:=\varPi _{(E,A)}^\mathrm {imp}E.$$
Then all solutions of (6.4) are given by, forc∈ℝn,
$$x(t) = e^{A^\mathrm {diff}t} \varPi _{(E,A)} c + \int_0^t e^{A^\mathrm {diff}(t-s)} \varPi _{(E,A)}^\mathrm {diff}f(s) \,{\mathrm {d}}s- \sum_{i=0}^{n-1} \bigl(E^\mathrm {imp}\bigr)^i\varPi _{(E,A)}^\mathrm {imp}f^{(i)}(t).$$
(6.5)
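For the homogeneous case f = 0, formula (6.5) reduces to x(t) = e^{A^diff t} Π(E,A) c, and one can verify numerically that this indeed solves E ẋ = A x. The pair below and its projectors are a hand-worked toy example assumed for this sketch; for it, S = T = I already gives the quasi-Weierstrass form with n1 = 1.

```python
import numpy as np

E = np.diag([1.0, 0.0])
A = np.diag([-1.0, 1.0])           # regular: det(sE - A) = -(s + 1)
# With S = T = I the quasi-Weierstrass form is immediate, hence:
Pi      = np.diag([1.0, 0.0])      # consistency projector
Pi_diff = np.diag([1.0, 0.0])      # differential projector
Adiff   = Pi_diff @ A              # A^diff = diag(-1, 0)

def x(t, c):
    """Homogeneous part of formula (6.5): x(t) = exp(A^diff t) Pi c.
    A^diff is diagonal here, so its exponential is taken elementwise."""
    return np.diag(np.exp(np.diag(Adiff) * t)) @ (Pi @ c)

c = np.array([2.0, 5.0])
t, h = 0.7, 1e-6
xdot = (x(t + h, c) - x(t - h, c)) / (2 * h)   # central difference
```

Note that x(0) = Π c, i.e. the second (inconsistent) component of c is projected away immediately, in line with Remark 6.4.5.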

### Proof

The quasi-Weierstrass form (6.3) with corresponding transformation matrices S,T∈ℝn×n yields that x solves $$E\dot{x}=Ax+f$$ if and only if $(v,w) := T^{-1}x$ solves $$\dot{v}=Jv+[I\ 0]Sf$$ and $$N\dot{w}=w+[0\ I]Sf$$. Hence, together with the step 2 ⇒ 3 in the proof of Theorem 6.3.2 and the definitions of the projectors in terms of S and T, the first proposed solution formula is obtained. The second solution formula is standard and can be found in [3, 19]. □

### Remark 6.4.5

(Remarks on the solution formulas)

1. 1.

If EA=AE then both solution formulas are almost identical, because then it can be shown, see [2], that $$E^{D}=\varPi _{(E,A)}^{\mathrm {diff}}$$ and $E^D E=\varPi _{(E,A)}$. However, it is in general not true that $$A^{D}=\varPi _{(E,A)}^{\mathrm {imp}}$$ or $EA^D=E^{\mathrm {imp}}$; therefore the second formula needs the “correction term” $I-E^D E$.

2. 2.
From the solution formula (6.5) it follows that
$$x(0) = \varPi _{(E,A)} c - \sum_{i=0}^{n-1} \bigl(E^\mathrm {imp}\bigr)^i\varPi _{(E,A)}^\mathrm {imp}f^{(i)}(0);$$
in particular, the initial value problem (6.4), x(0)=x0∈ℝn, has a solution if and only if
$$x^0 + \sum_{i=0}^{n-1} \bigl(E^\mathrm {imp}\bigr)^i\varPi _{(E,A)}^\mathrm {imp}f^{(i)}(0) \in \operatorname{im}\varPi _{(E,A)},$$
which characterises consistency of the initial value.

3. 3.

If x is allowed to have jumps or even Dirac impulses, i.e. $x\in({\mathbb {D}}_{\mathrm {pw}\mathcal{C}^\infty})^n$, then Theorem 6.4.4 remains true, i.e. considering distributional solutions does not add any new solutions to the problem (for which the DAE $$E\dot{x}=Ax+f$$ should hold globally). In particular, all distributional solutions also have consistent initial values. Furthermore, if f contains jumps or Dirac impulses, i.e. $f\in({\mathbb {D}}_{\mathrm {pw}\mathcal{C}^\infty})^n$, then the solution formula also holds; however, it is then necessary to define the notion of the antiderivative H=∫0D of the distribution D, which fulfils H′=D and H(0−)=0, see [34, Prop. 3].

4. 4.

The proof of Theorem 6.4.4 reveals that, instead of the consistency projector Π(E,A) in formula (6.5), any matrix M with $\operatorname{im} M = \operatorname{im}\varPi _{(E,A)}$ could be used. However, the special choice of Π(E,A) ensures that this formula also holds when an inconsistent initial value is given, see the next section.

## 6.5 Inconsistent Initial Values

In the presence of switches, the initial conditions need not be consistent, so that no solution (classical or distributional) with this initial value exists. It is therefore necessary to make precise what a “solution” to an inconsistent initial value problem should be. The viewpoint here is the following.

An inconsistent initial value can only occur if the considered DAE was not active before the initial time (say t0=0). This gives rise to the following initial trajectory problem (ITP), where x0:(−∞,0)→ℝn is some initial trajectory:
\begin{aligned} x_{(-\infty,0)} &= {x^0}_{(-\infty,0)}, \\ (E\dot{x})_{[0,\infty)} &= (Ax)_{[0,\infty)} + f_{[0,\infty)}. \end{aligned}
(6.6)

If x0(0) is not consistent for $$E\dot{x}=Ax+f$$ then no classical solution exists; however, it will be shown in the following that a distributional solution exists. Therefore, (6.6) is considered as an equation of piecewise-smooth distributions, in particular, the inhomogeneity f and the initial trajectory are also piecewise-smooth distributions.

### Theorem 6.5.1

(Solvability of the ITP)

Let (E,A) be a regular matrix pair. Then for every initial trajectory $x^0\in({\mathbb {D}}_{\mathrm {pw}\mathcal{C}^\infty})^n$ and any inhomogeneity $f\in({\mathbb {D}}_{\mathrm {pw}\mathcal{C}^\infty})^n$ the ITP (6.6) has a unique solution $x\in({\mathbb {D}}_{\mathrm {pw}\mathcal{C}^\infty})^n$. In particular, the jump from x0(0−) to x(0+) and the impulsive part x[0] are uniquely determined. In fact, if f[0]=0 then
$$x(0+) = \varPi x^0(0-) - \sum_{i=0}^{n-1}\bigl(E^\mathrm {imp}\bigr)^i\varPi ^\mathrm {imp}f^{(i)}(0+)$$
and
$$x[0] = -\sum_{i=0}^{n-2}\bigl(E^\mathrm {imp}\bigr)^{i+1} x^0(0-)\,\delta^{(i)} - \sum_{i=0}^{n-2}\bigl(E^\mathrm {imp}\bigr)^{i+1}\sum_{j=0}^{i}\varPi ^\mathrm {imp}f^{(i-j)}(0+)\,\delta^{(j)},$$
where Π is the consistency projector and Eimp=ΠimpE with impulse projector Πimp as in Definition 6.4.1. In particular, if f=0 then
$$x(0+)=\varPi x^0(0-)$$
and, on the open interval (0,∞),
$$\dot{x}=A^\mathrm {diff}x.$$
Furthermore, if f is smooth then the solution x restricted to the open interval (0,∞) is induced by the smooth function given by (6.5) where c=x0(0−).
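The jump and impulse rules of Theorem 6.5.1 can be made concrete on a purely algebraic toy pair (assumed for this sketch): E nilpotent, A = I, f = 0. With S = T = I the quasi-Weierstrass form has an empty ODE part (n1 = 0), so Π = 0 and Π^imp = I, hence E^imp = E; the only consistent value is x = 0, and an inconsistent initial value produces a Dirac impulse.

```python
import numpy as np

# Toy pair with empty ODE part: E x' = x forces x = 0 as only solution.
E = np.array([[0.0, 1.0],
              [0.0, 0.0]])   # nilpotent, plays the role of N
A = np.eye(2)
n = 2

Pi   = np.zeros((n, n))      # consistency projector: only x = 0 consistent
Eimp = E                     # impulse projector is I here, so E^imp = E

x0_minus = np.array([3.0, 7.0])   # inconsistent initial value
x_plus = Pi @ x0_minus            # jump rule: x(0+) = Pi x^0(0-)

# Impulsive part x[0] = -sum_{i=0}^{n-2} (E^imp)^{i+1} x^0(0-) delta^{(i)};
# for n = 2 only i = 0 contributes, so the coefficient of delta is:
delta_coeff = -(Eimp @ x0_minus)
```

The solution jumps to zero, and the first component carries the Dirac impulse −7δ: exactly the kind of impulsive behaviour discussed for the switched circuit.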

### Proof

Choose S,T∈ℝn×n invertible such that (E,A) is put into a quasi-Weierstrass form (6.3) with block sizes n1×n1 and n2×n2, respectively, and let $(v,w) := T^{-1}x$. In the new coordinates, the ITP (6.6) then decouples into
$$\begin{array}{@{}r@{\ }l@{\qquad}r@{\ }l@{}} v_{(-\infty,0)} &= {v^0}_{(-\infty,0)}, & w_{(-\infty,0)} &= {w^0}_{(-\infty,0)}, \\[5pt] \dot{v}_{[0,\infty)} &= \bigl(Jv+[I\quad0]Sf\bigr)_{[0,\infty)}, &(N \dot{w})_{[0,\infty)} &= \bigl(w+[0\quad I]Sf\bigr)_{[0,\infty)}, \end{array}$$
where $(v^0,w^0) := T^{-1}x^0$. It can be shown (see e.g. the proof of [33, Thm. 3.3.8]) that the ITP for v yields the same solutions on [0,∞) as the classical solution of the ODE initial value problem $$\dot{v}=Jv+[I\ 0]Sf$$, v(0)=v0(0−). In particular, v(0+)=v(0−) and v[0]=0. As shown in [33, Thm. 3.1.7], the DAE for w can equivalently be written as a special “switched” DAE without initial trajectory
$$N_{\mathrm {itp}}\dot{w} = w + f_{\mathrm {itp}},$$
where Nitp:=N[0,∞) and $$f_{\mathrm {itp}}=-{w^{0}}_{(-\infty,0)}+ \begin{bmatrix}0 & I \end{bmatrix} Sf_{[0,\infty)}$$. Since the operator $N_{\mathrm {itp}}\frac {\mathrm {d}}{{\mathrm {d}}t}$ is still nilpotent, the solution formula derived in the proof of Theorem 6.3.2 still holds, i.e. w is uniquely given by
\begin{aligned} w &= -\sum_{i=0}^{n_2-1} \biggl(N_{\mathrm {itp}}\frac {\mathrm {d}}{{\mathrm {d}}t}\biggr)^i(f_{\mathrm {itp}}) \\ &= \sum_{i=0}^{n-1} \biggl(N_{[0,\infty)} \frac {\mathrm {d}}{{\mathrm {d}}t}\biggr)^i\bigl({w^0}_{(-\infty,0)}\bigr) - \sum _{i=0}^{n-1} \biggl(N_{[0,\infty)}\frac {\mathrm {d}}{{\mathrm {d}}t}\biggr)^i \bigl([0\quad I]Sf_{[0,\infty)}\bigr). \end{aligned}
This shows existence and uniqueness of a solution x of the ITP (6.6) and Open image in new window. Some calculations within the piecewise-smooth distributional framework yield
$$\biggl(N_{[0,\infty)}\frac {\mathrm {d}}{{\mathrm {d}}t}\biggr)^i\bigl({w^0}_{(-\infty,0)}\bigr) = \begin{cases} w^0_{(-\infty,0)},&\text{if } i=0,\\ -N^i w^0(0-) \delta^{(i-1)},&\text{if } i>0, \end{cases}$$
and, with the abbreviation $$\tilde{f}:=[0\ I]S f$$,
$$\biggl(N_{[0,\infty)}\frac {\mathrm {d}}{{\mathrm {d}}t}\biggr)^i(\tilde{f}_{[0,\infty)})= \begin{cases} \tilde{f}_{(0,\infty)},&\text{if } i=0,\\[5pt] N^i {\tilde{f}^{(i)}}_{(0,\infty)} + N^i \sum_{j=0}^{i-1} \tilde {f}^{(i-1-j)}(0+) \delta^{(j)},&\text{if } i>0. \end{cases}$$
Hence,
$$w(0+) = -\sum_{i=0}^{n-1} N^i [0\quad I] S f^{(i)}(0+)$$
and
$$w[0] = -\sum_{i=0}^{n-2} N^{i+1} w^0(0-) \delta^{(i)} - \sum_{i=0}^{n-2} N^{i+1} \sum_{j=0}^{i} [0\quad I]S f^{(i-j)}(0+)\delta^{(j)}.$$
Together with analogous rearrangements of matrices as in the proof of Theorem 6.4.4, this yields the claimed expressions for x(0+) and x[0]. □

Viewing now the switched DAE (6.1) as a repeated ITP (where the switching times are the initial times) one obtains the following result. Note that it has to be assumed that the switching times do not accumulate, otherwise this approach does not work.

### Corollary 6.5.2

(Existence and uniqueness of solutions of a switched DAE)

Consider the switched DAE (6.1) with regular matrix pairs (Ep,Ap), $$p\in\{1,\ldots,{\bar {p}}\}$$and assume for the switching signalσ
$$\sigma\in \varSigma _0 := \left \{\sigma:{\mathbb {R}}\to\{1,\ldots,{\bar {p}}\}\ \left \vert \vphantom {\sigma:{\mathbb {R}}\to\{1,\ldots,{\bar {p}}\}}\ \begin{array}{l}\sigma\text{ \textit{has locally finitely many switches},} \\[3pt] \sigma_{(-\infty,0)}\text{ \textit{is constant}} \end{array} \right .\right \}.$$
Then for every input $u\in({\mathbb {D}}_{\mathrm {pw}\mathcal{C}^\infty})^m$ there exists a globally defined solution $x\in({\mathbb {D}}_{\mathrm {pw}\mathcal{C}^\infty})^n$ of (6.1), which is uniquely determined by x(0−).

### Remark 6.5.3

(On the assumption that σ(−∞,0) is constant)

The assumption in Corollary 6.5.2 that the switching signal σ in (6.1) is constant on (−∞,0) is just a technicality to ensure uniqueness of the solution also backward in time. If one is interested only in the behaviour of the solution on [0,∞), this assumption is not necessary to obtain the same result. More formally, one could also study the corresponding ITP with some initial trajectory $x^0\in({\mathbb {D}}_{\mathrm {pw}\mathcal{C}^\infty})^n$; then it does not matter how σ is defined on (−∞,0).

### Example 6.5.4

(Example 6.1.1 continued)

Consider the switched circuit from Fig. 6.2 and the corresponding switched DAE (6.1) with matrices (E1,A1,B1), (E2,A2,B2), (E3,A3,B3) as given at the end of Example 6.1.1, i.e. (E1,A1,B1) correspond to the switch in the left position, (E2,A2,B2) correspond to the switch in the right position and (E3,A3,B3) describes the system when the switch is in between. To check whether the switched DAE is uniquely solvable for all input signals u, regularity of the matrix pairs (E1,A1), (E2,A2), (E3,A3) must be checked:
$$\det(sE_1-A_1)=CLs^2,\qquad \det(sE_2-A_2)=CLs^2 + 1,\qquad \det(sE_3-A_3) = Cs.$$
Hence, Corollary 6.5.2 implies that the switched DAE (6.1) has a solution for every switching signal σ∈Σ0 and every (distributional) input u, and this solution is uniquely determined by x(0−). To characterise the jumps at the switching instances, the Wong sequences are calculated first; from their limit spaces the consistency, differential and impulse projectors of each mode are obtained (the explicit matrices are omitted here). With the help of these projectors, the re-initialised value x(0+) for a given (possibly inconsistent) initial value $$x(0-)=:(i_{L}^{0},u_{L}^{0},i_{C}^{0},u_{C}^{0})$$ and the corresponding impulse in the solution can be calculated for each mode by the formulas given in Theorem 6.5.1.
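The projector computation sketched in this example can be carried out numerically. The following Python sketch (with hypothetical helper names, applied to a toy regular pair rather than the circuit matrices, which are not reproduced here) iterates the Wong sequences via subspace preimages, forms the consistency projector Π = [V W] diag(I,0) [V W]⁻¹ of Definition 6.4.1, and applies the jump rule x(0+) = Π x(0−) from Theorem 6.5.1:

```python
import numpy as np
from scipy.linalg import null_space, orth

def preimage(M, S):
    """Basis of {x : M x in span(S)}, where S is given by its columns."""
    if S.shape[1] == 0:
        return null_space(M)          # preimage of the zero subspace
    Q = orth(S)
    P_perp = np.eye(Q.shape[0]) - Q @ Q.T   # projector onto span(S)^perp
    return null_space(P_perp @ M)

def wong_limits(E, A):
    """Limits V*, W* of the Wong sequences of a regular pair (E, A)."""
    n = E.shape[0]
    V = np.eye(n)                     # V_0 = R^n, sequence is decreasing
    while True:
        V_next = preimage(A, E @ V)
        if V_next.shape[1] == V.shape[1]:
            break
        V = V_next
    W = np.zeros((n, 0))              # W_0 = {0}, sequence is increasing
    while True:
        W_next = preimage(E, A @ W)
        if W_next.shape[1] == W.shape[1]:
            break
        W = W_next
    return V, W

def consistency_projector(E, A):
    """Pi = [V W] diag(I, 0) [V W]^{-1} (cf. Definition 6.4.1)."""
    V, W = wong_limits(E, A)
    T = np.hstack([V, W])             # invertible iff (E, A) is regular
    D = np.zeros_like(T)
    D[:V.shape[1], :V.shape[1]] = np.eye(V.shape[1])
    return T @ D @ np.linalg.inv(T)

# Toy regular pair: x1' = -x1 (differential part), 0 = x2 (algebraic part)
E = np.diag([1.0, 0.0])
A = np.diag([-1.0, 1.0])
Pi = consistency_projector(E, A)      # projects onto the consistency space
x_minus = np.array([2.0, 5.0])        # inconsistent initial value
x_plus = Pi @ x_minus                 # jump: x(0+) = Pi x(0-)
```

For this toy pair the algebraic variable x2 is reset to zero while the differential variable x1 is kept, i.e. Π = diag(1,0).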

## 6.6 Stability

In this section, only the homogeneous switched DAE (6.1), i.e.
$$E_\sigma\dot{x} = A_\sigma x$$
(6.7)
is considered, because stability analysis usually considers a closed loop where the input is already replaced by a feedback. Furthermore, certain simple inputs can be incorporated as new state variables, for example, a constant input signal u can be rewritten as the state equation $$\dot{u}=0$$ and a sinusoidal input u can be rewritten as $$\dot{u}=-\omega\overline{u}$$, $$\dot{\overline{u}}=\omega u$$, for some ω∈ℝ. Hence the switched DAE (6.1) with constant or sinusoidal inputs can be written as the DAE (6.7). Throughout this section, it is assumed that the switching signal σ is such that the switching times do not accumulate.
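The state augmentation described above can be written down generically. The sketch below (a hypothetical helper, assuming a scalar sinusoidal input) embeds E ẋ = A x + B u together with u̇ = −ω ū, ū̇ = ω u into a homogeneous DAE of the form (6.7) with augmented state z = (x, u, ū):

```python
import numpy as np

def augment_sinusoidal(E, A, B, omega):
    """Embed E x' = A x + B u with scalar sinusoidal input u into a
    homogeneous DAE E_aug z' = A_aug z with state z = (x, u, u_bar)."""
    n = E.shape[0]
    E_aug = np.block([
        [E,                np.zeros((n, 2))],
        [np.zeros((2, n)), np.eye(2)       ],
    ])
    A_aug = np.block([
        [A,                np.hstack([B, np.zeros((n, 1))])],
        [np.zeros((2, n)), np.array([[0.0, -omega], [omega, 0.0]])],
    ])
    return E_aug, A_aug

# Toy data (assumptions for illustration only):
E = np.diag([1.0, 0.0])
A = np.diag([-1.0, 1.0])
B = np.array([[1.0], [0.0]])
E_aug, A_aug = augment_sinusoidal(E, A, B, omega=2.0)
```

A constant input is the special case ω = 0, where the ū-row can be dropped.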

### Definition 6.6.1

(Asymptotic stability)

The switched DAE (6.7) is called asymptotically stable if and only if all solutions x of (6.7) have the following properties:
1. (S)

∀ε>0 ∃δ>0: ∥x(0−)∥<δ ⇒ ∀t>0: ∥x(t±)∥<ε,

2. (A)

x(t±)→0 as t→∞,

3. (I)

∀t≥0: x[t]=0.

The assumptions (S) (stability) and (A) (attractivity) are standard for the definition of asymptotic stability. The assumption (I) (impulse-freeness) is motivated by the following observation: Assume (6.7) has a solution x with x[t]≠0 for some t. By linearity of (6.7), scaling x by some ε>0 also yields a solution; in particular, the initial value εx(0−) can be made arbitrarily small, but the corresponding impulse εx[t] does not vanish. Since an impulse can be interpreted as an infinite peak (or the limit of an unbounded sequence of functions), the classical stability assumption (S) cannot be fulfilled in the sense that small initial values yield small, bounded solutions. The question whether the switched system (6.7) has impulses in its solutions, i.e. whether (I) holds or not, is also interesting in its own right. Consider, for example, the situation where the switched DAE (6.7) models a nominal system with additional sudden faults (like a short circuit in a circuit element). If the corresponding switch is able to produce a Dirac impulse in some of the variables (e.g. voltages and currents), then this impulse might destroy other components of the system, possibly leading to a cascading destruction of the system. Therefore, the topic of impulse-freeness is studied first.

### Theorem 6.6.2

(Impulse-freeness)

Consider the switched DAE (6.7) with regular matrix pairs (Ep,Ap), $$p\in\{1,\ldots,{\bar {p}}\}$$, and a switch from mode $$p\in\{1,\ldots,{\bar {p}}\}$$ to mode $$q\in\{1,\ldots,{\bar {p}}\}$$ at some switching time ts∈ℝ, i.e. σ(ts−)=p and σ(ts+)=q. Then this switch cannot produce an impulse, i.e. x[ts]=0 for all solutions x of (6.7), if the following impulse-freeness condition holds:
$$E_q(I-\varPi _q) \varPi _p = 0,$$
(6.8)
where $$\varPi _{p}:=\varPi _{(E_{p},A_{p})}$$, $$\varPi _{q}:=\varPi _{(E_{q},A_{q})}$$ are the consistency projectors as in Definition 6.4.1. In particular, if (6.8) holds for all $$p,q\in\{1,\ldots,{\bar {p}}\}$$ then all solutions of (6.7) are impulse-free independently of the switching signal.
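Condition (6.8) is a plain matrix identity, so it can be checked mechanically once the consistency projectors are known. A minimal sketch, with the projectors of a hypothetical two-mode system supplied directly (the function name is an assumption of this sketch):

```python
import numpy as np

def switch_is_impulse_free(E_to, Pi_from, Pi_to, tol=1e-10):
    """Check condition (6.8): E_to (I - Pi_to) Pi_from == 0."""
    n = E_to.shape[0]
    return np.linalg.norm(E_to @ (np.eye(n) - Pi_to) @ Pi_from) < tol

# Hypothetical two-mode system (projectors assumed already computed):
E_p, Pi_p = np.diag([1.0, 0.0]), np.diag([1.0, 0.0])              # mode p
E_q, Pi_q = np.array([[1.0, 0.0], [0.0, 0.0]]), np.zeros((2, 2))  # mode q

ok_p_to_q = switch_is_impulse_free(E_q, Pi_p, Pi_q)  # switch p -> q
ok_q_to_p = switch_is_impulse_free(E_p, Pi_q, Pi_p)  # switch q -> p
```

Here the switch from mode p to mode q violates (6.8) (an impulse is possible), while the reverse switch satisfies it, illustrating that the condition depends on the ordered pair of modes.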

### Proof

Let x be a solution of (6.7) with a switching signal σ such that σ(ts−)=p and σ(ts+)=q. Since x(ts−) lies in the consistency space of mode p, i.e. in the limit of the first Wong sequence corresponding to (Ep,Ap), it follows that x(ts−)=Πpx(ts−). The impulse formula from Theorem 6.5.1 now yields x[ts]=0 whenever (6.8) holds. □

### Example 6.6.3

(Example 6.1.1 continued)

Consider the circuit as shown in Fig. 6.2 and assume the input u is constant. As mentioned above, the input is reinterpreted as a state variable via $$\dot{u}=0$$; hence one obtains three DAEs with x=(u,iL,vL,iC,vC) describing the three different modes. It is easily seen that each corresponding consistency projector Πp, p=1,2,3, is built from the projector $$\overline{\varPi }_{p}$$ of Example 6.5.4 (the constant input u does not jump); the explicit matrices are omitted here. Hence, condition (6.8) can easily be checked for each mode change. This is consistent with the results obtained in Example 6.5.4, where it was apparent that only a switch to mode three can produce impulses in the solution. However, the method based on Theorem 6.6.2 is simpler (for example, the transformation matrices Sp, $$p\in\{1,\ldots,{\bar {p}}\}$$ are not needed) and it also takes into account which mode was active before the switch. For the considered circuit this does not make a difference, but in general the mode active before the switch might rule out certain initial values, so that the switch does not produce impulses although it would for a general initial value. This is exactly the case when (6.8) holds but $$E_q(I-\varPi _q)\neq 0$$. For a more complex example, see also [7].

### 6.6.2 Lyapunov Functions for Non-switched DAEs

In order to study stability of the switched DAE (6.7), the stability properties of the non-switched DAE
$$E\dot{x}=Ax$$
(6.9)
with regular matrix pair (E,A)∈ℝn×n×ℝn×n are studied first. From the quasi-Weierstrass form (6.3) of the pair (E,A) it is immediately clear that (6.9) is asymptotically stable if and only if the matrix J in (6.3) is Hurwitz. A direct method (i.e. without calculating the full quasi-Weierstrass form first) is given by the following result.

### Theorem 6.6.4

(Generalised Lyapunov equation, [27])2

The DAE (6.9) is asymptotically stable if and only if there exist a symmetric positive definite $$P=P^\top\in{\mathbb {R}}^{n\times n}$$ and a symmetric $$Q=Q^\top\in{\mathbb {R}}^{n\times n}$$ which is positive definite on the limit of the first Wong sequence of (E,A), such that
$$A^\top P E + E^\top P A = -Q.$$
(6.10)
This result makes it possible to define a Lyapunov function as
$$V(x) = (Ex)^\top P Ex$$
(6.11)
because for all (classical) solutions x of (6.9) it follows that
$$\frac{\mathrm {d}}{\mathrm {d}t}V\bigl(x(t)\bigr) = (E\dot{x})^\top PEx + (Ex)^\top PE\dot{x} = x^\top\bigl(A^\top PE + E^\top PA\bigr)x = -x^\top Q x,$$
which is negative for x(t)≠0 because every solution evolves within the limit of the first Wong sequence, on which Q is positive definite.
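The generalised Lyapunov equation (6.10) can be verified numerically for a given candidate P. A toy sketch (the DAE, the candidate P and the basis of the Wong limit are all assumptions chosen for illustration):

```python
import numpy as np

# Toy asymptotically stable DAE: x1' = -x1, 0 = x2 (so J = [-1] is Hurwitz).
E = np.diag([1.0, 0.0])
A = np.diag([-1.0, 1.0])

# Candidate solution of the generalised Lyapunov equation (6.10):
P = np.eye(2)                          # symmetric positive definite
Q = -(A.T @ P @ E + E.T @ P @ A)       # right-hand side -Q of (6.10)

O = np.array([[1.0], [0.0]])           # basis of V* = span{e1} (assumed known)
Q_on_V = O.T @ Q @ O                   # restriction of Q to V*
pos_def_on_V = bool(np.all(np.linalg.eigvalsh(Q_on_V) > 0))
```

Note that Q itself is only positive semi-definite here; positive definiteness is required (and holds) only on the Wong limit, exactly as Theorem 6.6.4 demands.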

### 6.6.3 Asymptotic Stability of Switched DAEs

Consider the switched DAE (6.7)
$$E_\sigma\dot{x} = A_\sigma x.$$
Clearly, if one mode $$E_{p}\dot{x}=A_{p} x$$, $$p\in\{1,\ldots,{\bar {p}}\}$$ is not asymptotically stable then (6.7) cannot be asymptotically stable for arbitrary switching (just choose the constant switching signal σ(t)=p for all t∈ℝ). Hence, for studying asymptotic stability for arbitrary switching signals, or at least for slow switching, it has to be assumed that each mode is asymptotically stable and hence admits a Lyapunov function (6.11). From the theory of switched ODEs (see e.g. [20]), it is known that switching between stable modes can lead to an unstable overall system, hence one might expect a similar behaviour for switched DAEs. This unstable behaviour cannot happen in the switched ODE case when there exists a common Lyapunov function. Surprisingly, this condition is no longer sufficient in the switched DAE case, because the possible jumps must also be taken into account in the right way. The following example shows that the induced jumps are important for stability (for more examples, see [21]).

### Example 6.6.5

(Unstable switched DAE with the same Lyapunov function for each subsystem)

Consider the switched DAE (6.7) with two subsystems (the explicit matrices and the resulting consistency projectors are omitted here). A possible solution behaviour of this switched DAE is shown in Fig. 6.4. Clearly, each subsystem is asymptotically stable. Furthermore, the solutions decrease along $$V(x)=x_{1}^{2}+x_{2}^{2}$$ for both subsystems. However, when switches occur, the consistency projectors yield jumps which can destabilise the overall system.

In general, the jumps must be “compatible” with the Lyapunov function. Furthermore, impulse-freeness must be ensured additionally. Altogether, the following result holds.

### Theorem 6.6.6

(Asymptotic stability under arbitrary switching)

Consider the switched DAE (6.7) with regular matrix pairs (Ep,Ap), $$p\in\{1,\ldots,{\bar {p}}\}$$. Let$$\varPi _{p}:=\varPi _{(E_{p},A_{p})}$$be the consistency projector as given by Definition 6.4.1. If, for all$$p\in\{1,\ldots,{\bar {p}}\}$$,
(Vp)

$$E_{p}\dot{x}=A_{p} x$$is asymptotically stable with Lyapunov functionVp,

(IC)

$$E_q(I-\varPi _q)\varPi _p=0$$ for all $$q\in\{1,\ldots,{\bar {p}}\}$$, and

(JC)

$$V_q(\varPi _q x)\leq V_p(x)$$ for all x in the consistency space of mode p,

then the switched DAE (6.7) is asymptotically stable for all switching signals.

### Proof

Attractivity is shown in [21] and stability (as well as attractivity in a nonlinear setting) is shown in [22]. The key idea is to consider the common Lyapunov function V given on each consistency space by V(x):=Vp(x); this is well-defined because for x in the intersection of the consistency spaces of modes p and q it follows that Πpx=x=Πqx, and therefore
$$V_q(x) = V_q(\varPi _q x) \leq V_p(x) = V_p(\varPi _p x) \leq V_q(x),$$
hence Vp(x)=Vq(x) on this intersection. It then follows that V is decreasing along solutions and V(x(t))→0 as t→∞. Finally, positive definiteness of V on each consistency space implies x(t)→0 as t→∞. □

As highlighted in the proof, the jump condition (JC) is a generalisation of the common-Lyapunov-function condition for switched ODEs: applied to a switched ODE (where all Πp=I) it reads Vp(x)≤Vq(x) for all p,q, hence Vp=Vq, i.e. it is equivalent to the existence of a common Lyapunov function.

It is known from the theory of switched ODEs that destabilisation of a switched system can only be achieved by sufficiently fast switching. This result also holds for switched DAEs. Slow switching is characterised here by an average dwell time [18]. To this end, denote by Nσ(t1,t2) the number of switches of σ within the interval [t1,t2). The class of switching signals with average dwell time τa>0 is then given by
$$\varSigma _{\tau_a}:=\biggl \{\sigma\in \varSigma \ \biggm \vert \ \exists N_0>0\ \forall t\in {\mathbb {R}}\ \forall\Delta t>0:\ N_\sigma(t,t+\Delta t)<N_0+\frac{\Delta t}{\tau_a} \biggr \}.$$
The number N0>0 in the definition of $$\varSigma _{\tau_{a}}$$ is the so-called chatter bound; it is an upper bound for the number of switches within an interval of length smaller than τa. For each switching signal $$\sigma\in \varSigma _{\tau_{a}}$$ this chatter bound is finite, but it is not uniformly bounded over the whole class $$\varSigma _{\tau_{a}}$$. The class of switching signals $$\sigma\in \varSigma _{\tau_{a}}$$ with chatter bound N0=1 is exactly the class of switching signals with dwell time τd=τa.
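Membership in $$\varSigma _{\tau_{a}}$$ can be probed numerically by counting switches over test windows. The sketch below uses hypothetical helper names; since the definition quantifies over all t and Δt, sampling finitely many windows only gives evidence, not a proof:

```python
import bisect

def N_sigma(switch_times, t1, t2):
    """Number of switches in [t1, t2) for a sorted list of switching times."""
    return bisect.bisect_left(switch_times, t2) - bisect.bisect_left(switch_times, t1)

def violates_adt(switch_times, tau_a, N0, windows):
    """Windows (t, dt) where N_sigma(t, t+dt) < N0 + dt/tau_a fails."""
    return [(t, dt) for (t, dt) in windows
            if not N_sigma(switch_times, t, t + dt) < N0 + dt / tau_a]

# Periodic switching every 0.5 time units (dwell time 0.5):
times = [0.5 * k for k in range(20)]
windows = [(t, dt) for t in times for dt in (0.25, 1.0, 5.0)]
bad = violates_adt(times, tau_a=0.5, N0=2, windows=windows)   # no violations
bad2 = violates_adt(times, tau_a=1.0, N0=1, windows=windows)  # violations
```

As expected, the periodic signal with period 0.5 is compatible with average dwell time τa=0.5 (and a chatter bound of 2), but not with τa=1.0 and N0=1.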

### Theorem 6.6.7

(Stability under slow switching, [22])

Consider the switched DAE (6.7) satisfying the conditions (Vp) and the impulse condition (IC). Then there exists τa>0 such that (6.7) is asymptotically stable whenever $$\sigma\in \varSigma _{\tau_{a}}$$. In fact, let Pp, Qp be the solutions of the Lyapunov equation (6.10) corresponding to (Ep,Ap), let Op be an orthonormal basis matrix of the consistency space of mode p, and let the decay rates λp>0 and growth factors μp,q≥1 be defined via the minimal and maximal eigenvalues λmin(⋅) and λmax(⋅) of suitable symmetric matrices built from Pp, Qp, Ep and Op (the explicit definitions are omitted here; see [22]). Then an average dwell time of
$$\tau_a > \frac{\max_{p,q}\ln\mu_{p,q}}{\min_p \lambda_p}$$
guarantees asymptotic stability of (6.7).

### Example 6.6.8

(Example 6.6.5 continued)

Consider the switched DAE from Example 6.6.5. It was already highlighted there that both subsystems share the same Lyapunov function $$V(x)=x_{1}^{2}+x_{2}^{2}$$, hence condition (Vp) holds. Furthermore, it is not difficult to check that the impulse-freeness condition (IC) holds. Since the jump condition (JC) does not hold, Theorem 6.6.6 is not applicable, but Theorem 6.6.7 yields asymptotic stability for switching signals with sufficiently large average dwell time. In order to calculate a sufficient average dwell time, the Lyapunov functions must be rewritten as V(x)=(Ex)⊤PEx, where P solves the generalised Lyapunov equation (6.10); suitable choices of P1, P2 (omitted here) lead to the Lyapunov functions $$V_{1}(x)=2x_{2}^{2}$$ and V2(x)=(x1+x2)2, which coincide with $$V(x)=x_{1}^{2}+x_{2}^{2}$$ on the corresponding consistency spaces. As orthonormal bases of the two consistency spaces choose
$$O_1 = \frac{1}{2} \begin{bmatrix}\sqrt{2}\\\sqrt{2} \end{bmatrix} \quad\text{and}\quad O_2= \begin{bmatrix}0\\1 \end{bmatrix} .$$
A short calculation then yields μ:=maxp,q μp,q=2 and λ:=minp λp=2. Therefore, the corresponding switched DAE is asymptotically stable for all switching signals σ∈Στa with τa>ln2/2. For this example the bound is actually sharp, because it is easily seen that for a periodic switching signal with period ln2/2 the corresponding solution is also periodic (cf. [21]).
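The role of the bound can be made concrete with the numbers of this example: over a dwell interval of length τ the Lyapunov function decays by the factor e^(−λτ) and may grow by at most μ at the switch, so net contraction requires μe^(−λτ)<1, i.e. τ>ln μ/λ. A small sketch:

```python
import math

# Constants from this example: mu = max mu_{p,q}, lam = min lambda_p
mu, lam = 2.0, 2.0
tau_a_bound = math.log(mu) / lam        # ln(2)/2, approx. 0.3466

# Net effect over one dwell interval of length tau: decay exp(-lam*tau),
# then growth by at most mu at the switch; contraction iff the product < 1.
contracts = mu * math.exp(-lam * 0.4) < 1      # tau = 0.4 > ln(2)/2
marginal = mu * math.exp(-lam * tau_a_bound)   # exactly 1 at the bound
```

The value `marginal == 1` reflects the sharpness of the bound: at period ln2/2 the decay and the jump growth exactly cancel, matching the periodic solution mentioned above.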

### 6.6.4 Commutativity and Asymptotic Stability

For switched ODEs with asymptotically stable subsystems, it is well known [26] that commuting A-matrices imply asymptotic stability for arbitrary switching signals and also guarantee a common quadratic Lyapunov function. The aim of this section is to generalise this result to switched DAEs (6.1). Example 6.6.5 shows that commutativity of the A-matrices is not the right condition to guarantee asymptotic stability for arbitrary switching. In particular, for switched DAEs instability can be induced by the jumps, hence one would expect that the consistency projectors play a prominent role again. Surprisingly, this is not the case, as the following result shows.

### Theorem 6.6.9

(Commutativity of Adiff-matrices implies stability, [23])

Consider a switched DAE (6.7) with corresponding matrices $$A^{\mathrm {diff}}_{p}$$ as defined in Theorem 6.4.4. If each subsystem is asymptotically stable, i.e. (Vp) holds, and the impulse condition (IC) holds, then
$$\bigl[A^\mathrm {diff}_p,A^\mathrm {diff}_q \bigr]:=A^\mathrm {diff}_p A^\mathrm {diff}_q - A^\mathrm {diff}_q A^\mathrm {diff}_p = 0 \quad\forall p,q\in\{1,2,\ldots,{\bar {p}}\}$$
implies asymptotic stability of the switched DAE (6.7) for arbitrary switching signals.
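The commutativity hypothesis is again a plain matrix check. A minimal sketch with hypothetical A^diff matrices standing in for those of Theorem 6.4.4 (diagonal here, hence trivially commuting; real A^diff matrices would come out of the quasi-Weierstrass form):

```python
import numpy as np

def commutator(X, Y):
    """[X, Y] = XY - YX."""
    return X @ Y - Y @ X

# Hypothetical A^diff matrices of two modes (flows on the consistency spaces):
A1_diff = np.diag([-1.0, 0.0])
A2_diff = np.diag([0.0, -2.0])

commute = bool(np.allclose(commutator(A1_diff, A2_diff), 0))
```

If `commute` holds for all pairs of modes (together with (Vp) and (IC)), Theorem 6.6.9 yields asymptotic stability for arbitrary switching.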

### Proof

The key observation is that $$[A_{p}^{\mathrm {diff}},A_{q}^{\mathrm {diff}}]=0$$ implies commutativity also with the consistency projectors:
$$\bigl[\varPi _p,A_p^\mathrm {diff}\bigr]=0,\qquad\bigl[ \varPi _p,A_q^\mathrm {diff}\bigr] =0, \qquad[ \varPi _p,\varPi _q] = 0.$$
Hence the flow commutes and asymptotic stability easily follows, see [23] for details. There it is also shown that a common quadratic Lyapunov function exists which is compatible with the jumps. □

## 6.7 Bibliographical Notes

The theory of DAEs gained popularity in the 1980s, see e.g. the textbooks [3, 4, 6, 16], and was already discussed by Gantmacher in the 1950s [12]. That DAEs are especially suitable to model electrical circuits was highlighted by Verghese et al. [35]. There impulsive behaviour was also discussed and motivated; however, the underlying distributional solution space was not formalised which is also true for the above mentioned textbooks. Later on, the focus of research was on numerics of DAEs and (smoothly) time-varying DAEs, see e.g. the textbook [19] and the references therein.

The existence of impulsive solutions was a recurring topic in the literature, see e.g. [5, 30]; for a more detailed overview, see the introduction of [34]. Apart from the author's own work (presented in this chapter), there is not much work on switched DAEs available [14, 15, 24, 25, 38], and none of these works resolve or even discuss the problem of multiplying a piecewise-constant function with a distribution, which naturally occurs when studying switched differential algebraic equations of the form (6.1). Another approach to modelling electrical circuits with switches uses the complementarity framework and is discussed in the next chapter. Combining both approaches seems fruitful and is a topic of ongoing research.

## 6.8 Summary

In this chapter, it was motivated that switched DAEs are a suitable modelling framework for electrical circuits with switches. The space of piecewise-smooth distributions was introduced as an underlying solution space. Explicit solution formulas were given, in particular for the impulsive parts induced by inconsistent initial values. A simple matrix condition was given which makes it possible to exclude impulsive solutions for arbitrary switching. Finally, stability of switched DAEs was studied. With the help of Lyapunov functions, sufficient conditions for asymptotic stability under arbitrary and slow switching were presented. The theoretical results were illustrated with examples stemming from a simple electrical circuit.

## Footnotes

1. This is actually a consequence of the standard identification of a vector x=(x1,…,xn) of distributions with the corresponding functional given by x(φ)=(x1(φ),…,xn(φ)). Some authors [19, 28] instead identify x with a functional on vector-valued test functions via $$x(\varphi):=\sum_{i=1}^{n} x_{i}(\varphi_{i})$$. This different viewpoint of the "vector" x makes it necessary to define the matrix–vector product differently; however, these authors do not give a motivation for this different viewpoint.

2. In [27], only the complex-valued case is studied. However, in the real-valued case it is easily seen that the real parts of the complex solutions P and Q are also solutions of the Lyapunov equation.

## Notes

### Acknowledgements

This work was supported by the DFG grant Wi1458/10-1. Many thanks to Roman Geiselhart for giving valuable comments on the manuscript of this book chapter.

### References

1. Armentano, V.A.: The pencil (sE−A) and controllability-observability for generalized linear systems: a geometric approach. SIAM J. Control Optim. 24, 616–638 (1986)
2. Berger, T., Ilchmann, A., Trenn, S.: The quasi-Weierstraß form for regular matrix pencils. Linear Algebra Appl. (2010). doi:10.1016/j.laa.2009.12.036
3. Campbell, S.L.: Singular Systems of Differential Equations I. Pitman, New York (1980)
4. Campbell, S.L.: Singular Systems of Differential Equations II. Pitman, New York (1982)
5. Cobb, J.D.: Controllability, observability and duality in singular systems. IEEE Trans. Autom. Control AC-29, 1076–1082 (1984)
6. Dai, L.: Singular Control Systems. Lecture Notes in Control and Information Sciences, vol. 118. Springer, Berlin (1989)
7. Domínguez-García, A.D., Trenn, S.: Detection of impulsive effects in switched DAEs with applications to power electronics reliability analysis. In: Proc. of the IEEE Conference on Decision and Control, Atlanta, Georgia, USA, pp. 5662–5667 (2010)
8. Ferreira, J.C.: Introduction to the Theory of Distributions. Pitman Monographs and Surveys in Pure and Applied Mathematics, vol. 87. Wesley, Harlow (1997). Translated by J. Sousa Pinto and R.F. Hoskins
9. Frasca, R., Çamlıbel, M.K., Goknar, I.C., Iannelli, L., Vasca, F.: Linear passive networks with ideal switches: Consistent initial conditions and state discontinuities. IEEE Trans. Circuits Syst. I, Regul. Papers 57(12), 3138–3151 (2010)
10. Fuchssteiner, B.: Eine assoziative Algebra über einen Unterraum der Distributionen. Math. Ann. 178, 302–314 (1968)
11. Fuchssteiner, B.: Algebraic foundation of some distribution algebras. Stud. Math. 76, 439–453 (1984)
12. Gantmacher, F.R.: The Theory of Matrices, vols. I & II. Chelsea, New York (1959)
13. Geerts, A.H.W.: Solvability conditions, consistency and weak consistency for linear differential-algebraic equations and time-invariant linear systems: The general case. Linear Algebra Appl. 181, 111–130 (1993)
14. Geerts, A.H.W., Schumacher, J.M.: Impulsive-smooth behavior in multimode systems. Part I: State-space and polynomial representations. Automatica 32(5), 747–758 (1996)
15. Geerts, A.H.W., Schumacher, J.M.: Impulsive-smooth behavior in multimode systems. Part II: Minimality and equivalence. Automatica 32(6), 819–832 (1996)
16. Griepentrog, E., März, R.: Differential-Algebraic Equations and Their Numerical Treatment. Teubner-Texte zur Mathematik, vol. 88. Teubner, Leipzig (1986)
17. Hautus, M.L.J., Silverman, L.M.: System structure and singular control. Linear Algebra Appl. 50, 369–402 (1983)
18. Hespanha, J.P., Morse, A.S.: Stability of switched systems with average dwell-time. In: Proc. of the IEEE Conference on Decision and Control, Phoenix, Arizona, USA, pp. 2655–2660 (1999)
19. Kunkel, P., Mehrmann, V.: Differential-Algebraic Equations. Analysis and Numerical Solution. EMS Publishing House, Zürich (2006)
20. Liberzon, D.: Switching in Systems and Control. Systems and Control: Foundations and Applications. Birkhäuser, Boston (2003)
21. Liberzon, D., Trenn, S.: On stability of linear switched differential algebraic equations. In: Proc. of the IEEE Conference on Decision and Control, Shanghai, China, pp. 2156–2161 (2009)
22. Liberzon, D., Trenn, S.: Switched nonlinear differential algebraic equations: Solution theory, Lyapunov functions, and stability. Automatica (2011). doi:10.1016/j.automatica.2012.02.041
23. Liberzon, D., Trenn, S., Wirth, F.: Commutativity and asymptotic stability for linear switched DAEs. In: Proc. of the 50th IEEE Conference on Decision and Control and European Control Conference, Orlando, Florida, USA, pp. 417–422 (2011)
24. Meng, B.: Observability conditions of switched linear singular systems. In: Proc. of the Chinese Control Conference, Harbin, Heilongjiang, China, pp. 1032–1037 (2006)
25. Meng, B., Zhang, J.F.: Reachability conditions for switched linear singular systems. IEEE Trans. Autom. Control 51(3), 482–488 (2006)
26. Narendra, K.S., Balakrishnan, J.: A common Lyapunov function for stable LTI systems with commuting A-matrices. IEEE Trans. Autom. Control 39(12), 2469–2471 (1994)
27. Owens, D.H., Debeljkovic, D.L.: Consistency and Liapunov stability of linear descriptor systems: A geometric analysis. IMA J. Math. Control Inf. 2(2), 139–151 (1985)
28. Rabier, P.J., Rheinboldt, W.C.: Classical and generalized solutions of time-dependent linear differential-algebraic equations. Linear Algebra Appl. 245, 259–293 (1996)
29. Rabier, P.J., Rheinboldt, W.C.: Time-dependent linear DAEs with discontinuous inputs. Linear Algebra Appl. 247, 1–29 (1996)
30. Rabier, P.J., Rheinboldt, W.C.: Theoretical and numerical analysis of differential-algebraic equations. In: Ciarlet, P.G., Lions, J.L. (eds.) Handbook of Numerical Analysis, vol. VIII, pp. 183–537. Elsevier, Amsterdam (2002)
31. Schwartz, L.: Théorie des Distributions I, II. Publications de l'institut de mathématique de l'Université de Strasbourg, vols. IX, X. Hermann, Paris (1950/1951)
32. Tanwani, A., Trenn, S.: On observability of switched differential-algebraic equations. In: Proc. of the IEEE Conference on Decision and Control, Atlanta, Georgia, USA, pp. 5656–5661 (2010)
33. Trenn, S.: Distributional differential algebraic equations. PhD thesis, Institut für Mathematik, Technische Universität Ilmenau, Universitätsverlag Ilmenau, Ilmenau, Germany (2009). URL http://www.db-thueringen.de/servlets/DocumentServlet?id=13581
34. Trenn, S.: Regularity of distributional differential algebraic equations. Math. Control Signals Syst. 21(3), 229–264 (2009)
35. Verghese, G.C., Levy, B.C., Kailath, T.: A generalized state-space for singular systems. IEEE Trans. Autom. Control AC-26(4), 811–831 (1981)
36.
37. Wong, K.T.: The eigenvalue problem λTx+Sx. Int. J. Differ. Equ. 16, 270–280 (1974)
38. Wunderlich, L.: Analysis and numerical solution of structured and switched differential-algebraic systems. PhD thesis, Fakultät II Mathematik und Naturwissenschaften, Technische Universität Berlin, Berlin, Germany (2008)