
A distributional solution framework for linear hyperbolic PDEs coupled to switched DAEs

Abstract

A distributional solution framework is developed for systems consisting of linear hyperbolic partial differential equations and switched differential-algebraic equations (DAEs) which are coupled via boundary conditions. Unique solvability is then characterized in terms of a switched delay DAE. The theory is illustrated with an example of electric power lines modelled by the telegraph equations and coupled via a switching transformer; simulations confirm the predicted impulsive solutions.

Introduction

In this paper, we develop a rigorous solution theory for systems where a linear hyperbolic partial differential equation (PDE) is coupled with a switched differential-algebraic equation (DAE) via boundary conditions (BC), see Fig. 1 for an overview.

Fig. 1: Coupling of a PDE with a switched DAE via boundary condition

Such systems occur, for example, when modelling power grids using the telegraph equation [8] with switches (e.g. induced by disconnecting lines), water flow networks with valves [13, 14], supply chain models with processor breakdowns [1, 7], district heating systems with rapid consumption changes [5] and blood flow with simplified valve models in the heart [15]. As in [3, 11], the closed-loop setting illustrated in Fig. 1 can accommodate general network structures.

In this coupled system, the output of the switched DAE provides the boundary condition for the PDE and the boundary values of the PDE serve as input to the DAE. Solutions of switched DAEs in general contain jumps and derivatives thereof, e.g. derivatives of Dirac impulses [17, 19]; hence, the solution concept of the PDE has to be extended to allow for jumps, Dirac impulses and their derivatives at the boundary. In particular, this is a wider class compared to the solutions of small bounded variation, e.g. used in [4] where a nonlinear hyperbolic PDE is coupled to an ODE. Similarly, in [2, 12], the investigations of switched linear PDEs with source terms are restricted to solutions with bounded variation. In [16], Dirac impulses are introduced at the position of an interface of nonlinear PDEs. A more general appearance of Dirac impulses is allowed in [6, 23] for a partially linear system. A detailed overview of existing solution concepts for linear hyperbolic PDEs is given in [10].

Since arbitrary high derivatives of Dirac impulses can occur as solutions of switched DAEs, the aforementioned approaches are not suitable to handle the coupled systems studied here. Indeed, our first main contribution is a suitable extension of the 1D piecewise-smooth distributional solution framework (developed to handle switched DAEs in [17, 18]) to a 2D piecewise-smooth distributional solution framework. This new class of solutions is large enough to allow distributional solutions such as Dirac impulses and their derivatives. At the same time, its elements are regular enough to allow a trace evaluation on the boundaries of the domain.

Towards our main existence and uniqueness result for solutions of the coupled system, we also establish a relationship between the solutions of the coupled systems and the solution of a switched delay DAE. For the latter, we generalize a recent existence and uniqueness result for delay DAEs in [20].

This paper is structured as follows.

After a detailed description of the coupled system (including an example of a simple power network), we review the classical solution theory of linear hyperbolic PDEs in Sect. 3 and the solution theory of switched DAEs in Sect. 4. The novel distributional solution framework for linear hyperbolic PDEs is introduced in Sect. 5, which is then used in Sect. 6 to establish a link between the coupled system and the solutions of a switched delay DAE (Theorem 23). Finally, we establish an existence and uniqueness result for general switched delay DAEs (Theorem 24), from which we conclude our main result on the existence and uniqueness of solutions of the coupled system (Corollary 26). We illustrate the results by numerical simulations of the power grid example.

Problem setup

System class

We consider linear hyperbolic PDEs on a bounded interval of the form:

$$\begin{aligned} \partial _t{\mathbf {u}}(t,x)+{\mathbf {A}}\partial _x{\mathbf {u}}(t,x)&= 0,&\qquad x\in [a,b],\ t&\ge {t_0}, \end{aligned}$$
(2.1a)
$$\begin{aligned} {\mathbf {y}}_P(t)&= {\mathbf {C}}_P {\mathbf {u}}_{ab}(t),&t&\ge {t_0} \end{aligned}$$
(2.1b)

where \(a,b,t_0\in {\mathbb {R}}\) with \(a<b\), \({\mathbf {u}}:[t_0,\infty )\times [a,b] \rightarrow {\mathbb {R}}^n\), \(n\in {\mathbb {N}}\), is the n-dimensional vector of unknowns of the PDE, \({\mathbf {A}}\in {\mathbb {R}}^{n\times n}\) and \({\mathbf {y}}_P:[t_0,\infty )\rightarrow {\mathbb {R}}^\nu \), \(\nu \in {\mathbb {N}}\) is the \(\nu \)-dimensional output of the PDE depending on \({\mathbf {u}}_{ab}(t):=({\mathbf {u}}(t,a)^\top ,{\mathbf {u}}(t,b)^\top )^\top \in {\mathbb {R}}^{2n}\) and \({\mathbf {C}}_P\in {\mathbb {R}}^{\nu \times 2n}\). The boundary conditions (BC) of the PDE have the form:

$$\begin{aligned} {\mathbf {P}}{\mathbf {u}}_{ab}(t)&= {\mathbf {y}}_D(t),&t&> {t_0}, \end{aligned}$$
(2.1c)

where \({\mathbf {P}}\in {\mathbb {R}}^{n\times 2n}\) and \({\mathbf {y}}_D:[t_0,\infty )\rightarrow {\mathbb {R}}^n\) is the output of the switched DAE

$$\begin{aligned} {\mathbf {E}}_\sigma {\dot{{\mathbf {w}}}}(t)&={\mathbf {H}}_\sigma {\mathbf {w}}(t)+{\mathbf {B}}_\sigma {\mathbf {y}}_P(t) + {\mathbf {f}}_\sigma (t),&t&\ge {t_0}, \end{aligned}$$
(2.1d)
$$\begin{aligned} {\mathbf {y}}_D(t)&={{\mathbf {C}}_D}_\sigma {\mathbf {w}}(t),&t&\ge {t_0}, \end{aligned}$$
(2.1e)

with the m-dimensional vector of unknowns \({\mathbf {w}}: [{t_0},\infty ) \rightarrow {\mathbb {R}}^m\), \(m\in {\mathbb {N}}\), the switching signal \({\sigma :{\mathbb {R}}\rightarrow \{1,2,\ldots ,N \}}\), \(N\in {\mathbb {N}}\), and \({\mathbf {E}}_\xi ,{\mathbf {H}}_\xi \in {\mathbb {R}}^{m\times m}\), \({\mathbf {B}}_\xi \in {\mathbb {R}}^{m\times \nu }\), \({\mathbf {f}}_\xi :[{t_0},\infty ) \rightarrow {\mathbb {R}}^m\), \({{\mathbf {C}}_D}_\xi \in {\mathbb {R}}^{n\times m}\) for each \(\xi \in \{1,2,\ldots ,N \}\).

The coupled system (2.1) has to be equipped with initial conditions

$$\begin{aligned} {\mathbf {u}}(t_0,x)&= {\mathbf {u}}^{t_0}(x),&x&\in [a,b], \end{aligned}$$
(2.2a)
$$\begin{aligned} {\mathbf {w}}(t_0)&= {\mathbf {w}}^{t_0}, \end{aligned}$$
(2.2b)

where \({\mathbf {u}}^{t_0}:[a,b]\rightarrow {\mathbb {R}}^n\) and \({\mathbf {w}}^{t_0}\in {\mathbb {R}}^m\).

Remark 1

We would like to stress that the coupling structure in (2.1) is quite general; in fact, arbitrary networks whose edges represent PDEs and whose nodes represent (switched) DAEs which couple the different PDEs are covered. Consider, for example, a network as illustrated in Fig. 2a, where on each edge \({\mathcal {E}}\) the quantity \({\mathbf {u}}^{{\mathcal {E}}}\) is governed by a linear PDE \({\mathbf {u}}_t^{{\mathcal {E}}}+{\mathbf {A}}{\mathbf {u}}_x^{{\mathcal {E}}}=0\).

At each node s, algebraic and/or differential conditions combine possible internal states \({\mathbf {w}}^s\) with certain boundary values \({\mathbf {q}}^s\) of the connected \({\mathbf {u}}^{{\mathcal {E}}}\), i.e. \({\mathbf {E}}_{\sigma }^s {\dot{{\mathbf {w}}}}^s = {\mathbf {H}}_{\sigma }^s {\mathbf {w}}^s + {\mathbf {B}}_{\sigma }^s {\mathbf {q}}^s + {\mathbf {f}}_{\sigma }^s\). This setup can be rewritten in the form (2.1) by first rescaling the spatial domain (which simply modifies the matrices \({\mathbf {A}}^{{\mathcal {E}}}\) by a constant multiple) so that all PDEs are defined on the same interval and can be viewed as a single PDE where the new unknown \({\mathbf {u}}\) consists of the unknowns \({\mathbf {u}}^{\mathcal {E}}\) of each edge stacked over each other. (The \({\mathbf {A}}\)-matrix then is a block diagonal matrix.) In a similar way, the switched differential-algebraic equations for each node can be stacked over each other (resulting in block diagonal coefficient matrices) and can then be viewed as single switched DAE, see Fig. 2b. A similar reduction is used in [3, 11]. This method is also used in the specific example of a simple power grid discussed in Sect. 2.3.
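The stacking and rescaling described above can be sketched numerically. The following is a minimal illustration with two hypothetical edges (all matrices and lengths below are made-up example data, not taken from the paper):

```python
import numpy as np

# Two hypothetical edges with different transport matrices and lengths.
A1 = np.array([[0.0, 2.0],
               [0.5, 0.0]])     # edge 1, length ell1
A2 = np.array([[0.0, 1.0],
               [1.0, 0.0]])     # edge 2, length ell2
ell1, ell2 = 2.0, 4.0

# Rescaling each edge to the unit interval multiplies its matrix by 1/ell.
Z = np.zeros((2, 2))
A = np.block([[A1 / ell1, Z],
              [Z, A2 / ell2]])  # block-diagonal matrix of the stacked PDE

# The stacked unknown u = (u^1, u^2) satisfies the single PDE
# u_t + A u_x = 0 on [0, 1].
print(A.shape)  # (4, 4)
```

The same construction applies to the node DAEs: stacking their coefficient matrices block-diagonally yields one switched DAE for the whole network.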

Fig. 2: An example of a network consisting of four nodes and five connecting edges, and how to reduce it to a one-node, one-edge (loop) network which still has all the features of the original network

Ill-posed coupling

As mentioned in the introduction, coupling a PDE to a switched DAE is challenging due to the need to consider distributional boundary terms. There is, however, a further challenge arising from the additional algebraic constraints present in a DAE. The following example illustrates that even if the PDE is well-posed (i.e. it has a unique solution for all initial and boundary conditions) and the DAE is well-posed (i.e. it has a unique solution for all consistent initial values and all sufficiently smooth input signals), the coupled system can nevertheless be ill-posed.

Example 2

Consider the scalar advection equation given by

$$\begin{aligned} \partial _t v(t,x) + \partial _x v(t,x)&= 0 , \qquad t \in [0,\infty ), \quad x \in [0,1], \end{aligned}$$
(2.3)

with initial condition \(v(0,x) = v_0(x)\), boundary condition \(v(t,0)=b(t)\), input \(b(\cdot )\) and output \(y_P(t):=v(t,0)\). It is well known that (2.3) has a unique solution for any sufficiently smooth \(v_0(\cdot )\) and \(b(\cdot )\). This PDE will be coupled to the (nonswitched) DAE given by

$$\begin{aligned} {\left[ {\begin{matrix} 1 &{}\quad 0 \\ 0 &{}\quad 0 \end{matrix}}\right] } {\left[ {\begin{matrix} {\dot{w}} \\ {\dot{z}} \end{matrix}}\right] } = {\left[ {\begin{matrix} 1 &{}\quad 0 \\ 0 &{}\quad 1 \end{matrix}}\right] } {\left[ {\begin{matrix} w \\ z \end{matrix}}\right] } - {\left[ {\begin{matrix} 0 \\ 1 \end{matrix}}\right] } q , \end{aligned}$$
(2.4)

with input \(q(\cdot )\) and output \(y_D (t) := {\left[ {\begin{matrix} 0&\quad 1 \end{matrix}}\right] } {\left[ {\begin{matrix} w \\ z \end{matrix}}\right] }\). It is easily seen that for every \(q(\cdot )\) and every initial value w(0) there exists a unique solution.

Now let (2.3) be coupled with (2.4) via \(b(t)=y_D(t)\) and \(q(t)=y_P(t)\). Since by construction \(b(t)=y_P(t)\), this coupling leads to \(q(t) = y_D(t) = z(t)\), which means that the DAE changes to

$$\begin{aligned} {\left[ {\begin{matrix} 1 &{}\quad 0 \\ 0 &{}\quad \quad 0 \end{matrix}}\right] } {\left[ {\begin{matrix} {\dot{w}} \\ {\dot{z}} \end{matrix}}\right] } = {\left[ {\begin{matrix} 1 &{}\quad 0 \\ 0 &{}\quad 1 \end{matrix}}\right] } {\left[ {\begin{matrix} w \\ z \end{matrix}}\right] } - {\left[ {\begin{matrix} 0 \\ 1 \end{matrix}}\right] } z = {\left[ {\begin{matrix} 1 &{}\quad 0 \\ 0 &{}\quad 0 \end{matrix}}\right] } {\left[ {\begin{matrix} w \\ z \end{matrix}}\right] }. \end{aligned}$$

However, in this new DAE, the variable z is completely free, which also means that the boundary of the PDE is completely free, i.e. the coupled system does not have a unique solution anymore.

For this simple example, it is easy to see why well-posedness of the coupled system is lost: the PDE has as an “output” the same value as the provided boundary value; hence, the two values are not independent and cannot be coupled via a DAE in an arbitrary way. In fact, just choosing a different output for the DAE (e.g. \(y_D = {\left[ {\begin{matrix} 1&\quad 0 \end{matrix}}\right] } {\left[ {\begin{matrix} w \\ z \end{matrix}}\right] }\)) results in a well-posed coupled system.

In general, it will not be so apparent as in the above example which couplings lead to a well-posed system and it is therefore necessary to better understand the underlying solution structure.
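To make this concrete, the following sketch simulates Example 2 with the well-posed output choice \(y_D = w\), using a first-order upwind discretization of the advection equation and an explicit Euler step for the differential part of the DAE. Grid size, time horizon and the initial profile are arbitrary choices for illustration, not taken from the paper:

```python
import numpy as np

# Upwind simulation of v_t + v_x = 0 on [0, 1], coupled to the DAE (2.4)
# with the WELL-POSED output choice y_D = w (not y_D = z).
nx = 100
dx = 1.0 / nx
dt = 0.5 * dx                    # CFL number 0.5 (wave speed is 1)
T = 0.5
x = np.linspace(0.0, 1.0, nx + 1)

v = np.sin(2 * np.pi * x)        # hypothetical initial profile v_0
w = 1.0                          # initial value w(0)

for _ in range(round(T / dt)):
    w += dt * w                               # differential part: w' = w
    v[1:] -= dt / dx * (v[1:] - v[:-1])       # upwind step for the PDE
    v[0] = w                                  # coupling: b(t) = y_D(t) = w(t)
    # algebraic part: z = q = y_P = v(t, 0); with y_D = w it does not
    # feed back into the boundary, so the coupling stays well-posed.

# The boundary value tracks w(t) = w(0) * exp(t)
print(abs(v[0] - np.exp(T)))     # small discretization error
```

With the ill-posed output \(y_D = z\) there is no analogous update rule: the scheme would have to determine \(v[0]\) from a variable that the reduced DAE leaves completely free.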

Power grid example

Consider the simple electrical power grid illustrated in Fig. 3.

Fig. 3: Simple electrical power grid with one generator node, one switching transformer node and two consumption nodes

Each line is modelled by the telegraph equation of the form:

$$\begin{aligned} \begin{aligned} \partial _t I(t,x) + \tfrac{1}{L} \partial _x V(t,x)&= 0\\ \partial _t V(t,x) + \tfrac{1}{C} \partial _x I(t,x)&=0, \end{aligned} \end{aligned}$$
(2.5)

where \(t\ge 0\), \(x\in [0,\ell ]\), \(I(t,x)\) denotes the current, \(V(t,x)\) the voltage, and the constants L and C are the inductance and capacitance, respectively. In particular, each line k has a position-dependent current \(I_k\) and voltage \(V_k\). By appropriately scaling the coefficients in the telegraph equation, we can assume that all PDEs are defined on the common domain \(T\times X = [0,\infty )\times [a,b]\). At the nodes, there is a coupling between the corresponding boundary values, where the “outputs” of the telegraph equations are the boundary currents \(I_k\) of each line k. A generator is located at the first node, where we assume an externally given voltage. This is modelled by the algebraic relations:

$$\begin{aligned} 0&= z_{1} - v_G, \end{aligned}$$
(2.6a)
$$\begin{aligned} y_D^G&= z_1, \end{aligned}$$
(2.6b)

where \(v_G(\cdot )\) is the externally given (time-varying) voltage of the generator; the corresponding boundary condition is \(V_1(\cdot ,a^+)=y_D^G\). At the consumption nodes, all voltages are assumed to be equal and the consumption is modelled as a simple Ohmic resistance, i.e. the sum of the (directed) currents at the boundary of the lines is proportional to the voltage at the node; this is modelled by the DAEs

$$\begin{aligned} \begin{aligned} 0&= z_{24} - R_{24} (I_4(\cdot ,a^+)-I_2(\cdot ,b^-)),&\quad y_D^{24}&= z_{24}, \\ 0&= z_{34} - R_{34} (I_3(\cdot ,b^-)+I_4(\cdot ,b^-)),&\quad y_D^{34}&= z_{34}, \end{aligned} \end{aligned}$$
(2.7)

where \(R_{24}, R_{34} > 0\) are the resistive loads. Further, we impose the boundary conditions

$$\begin{aligned} \begin{aligned} V_2(\cdot ,b^-)&= y_D^{24},&\quad V_3(\cdot ,b^-)&= y_D^{34},\\ V_4(\cdot ,a^+)&= y_D^{24},&\quad V_4(\cdot ,b^-)&= y_D^{34}.\\ \end{aligned} \end{aligned}$$

Finally, the switching transformer node is governed by the electric circuit given in Fig. 4.

Fig. 4: A node connecting three power lines with switching transformers

The switch-independent equations governing this switching transformer node are as follows:

$$\begin{aligned} \begin{aligned} L_{12} \tfrac{\text {d}}{\text {d} t}i_{12}&= v_{12},&\quad L_{13} \tfrac{\text {d}}{\text {d} t}i_{13}&= v_{13}, \end{aligned} \end{aligned}$$
(2.8)

with output

$$\begin{aligned} \begin{aligned} y_D^2&= \kappa _{12} v_{12},&\quad y_D^3&= \kappa _{13} v_{13}, \end{aligned} \end{aligned}$$

where \(\kappa _{12}, \kappa _{13}>0\) are the voltage amplification constants. Note that, in this example, we use amplifiers only for the voltage values, so the power grid example is a simplified model.

If the switch connects lines 1 and 2, then the following two algebraic constraints hold

$$\begin{aligned} 0 = i_{12} - I_1(\cdot ,b^-), \qquad 0 = i_{13} \end{aligned}$$

together with the output \(y_D^1 = v_{12}\) and, otherwise,

$$\begin{aligned} 0 = i_{12},\qquad 0 = i_{13} - I_1(\cdot ,b^-) \end{aligned}$$

with output \(y_D^1 = v_{13}\). With \({\widetilde{{\mathbf {w}}}} = (i_{12},i_{13},v_{12},v_{13})^\top \), the rules governing the switching transformer node can be written compactly as the switched DAE

$$\begin{aligned} \begin{aligned} {\widetilde{{\mathbf {E}}}}_\sigma \dot{{\widetilde{{\mathbf {w}}}}}&= {\widetilde{{\mathbf {H}}}}_\sigma {\widetilde{{\mathbf {w}}}} + {\widetilde{{\mathbf {B}}}}_\sigma {\widetilde{{\mathbf {q}}}} \\ {\widetilde{{\mathbf {y}}}}_D&= {\widetilde{{\mathbf {C}}}}_{D_{\sigma }} {\widetilde{{\mathbf {w}}}}, \end{aligned} \end{aligned}$$

where

$$\begin{aligned} \begin{aligned} {\widetilde{{\mathbf {E}}}}_1 = {\widetilde{{\mathbf {E}}}}_2&= {\left[ {\begin{matrix} L_{12} &{}\quad 0 &{} \quad 0 &{}\quad 0 \\ 0 &{}\quad L_{13} &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{} 0 \end{matrix}}\right] },&\quad {\widetilde{{\mathbf {H}}}}_1 = {\widetilde{{\mathbf {H}}}}_2&= {\left[ {\begin{matrix} 0 &{}\quad 0 &{}\quad 1 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 1 \\ 1 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 1 &{}\quad 0 &{}\quad 0 \end{matrix}}\right] },\\ {\widetilde{{\mathbf {B}}}}_1&= {\left[ {\begin{matrix} 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 \\ -1 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 \end{matrix}}\right] }=:[{\widetilde{{\mathbf {B}}}}_1^1,{\mathbf {0}},{\mathbf {0}}],&\quad \quad {\widetilde{{\mathbf {B}}}}_2&= {\left[ {\begin{matrix} 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 \\ -1 &{}\quad 0 &{}\quad 0 \end{matrix}}\right] }=:[{\widetilde{{\mathbf {B}}}}_2^1,{\mathbf {0}},{\mathbf {0}}],\\ {\widetilde{{\mathbf {C}}}}_{D_1}&= {\left[ {\begin{matrix} 0 &{}\quad 0 &{}\quad 1 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad \kappa _{12} &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad \kappa _{13} \end{matrix}}\right] },&\quad {\widetilde{{\mathbf {C}}}}_{D_2}&= {\left[ {\begin{matrix} 0 &{}\quad 0 &{}\quad 0 &{}\quad 1 \\ 0 &{}\quad 0 &{}\quad \kappa _{12} &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad \kappa _{13} \end{matrix}}\right] }, \end{aligned} \end{aligned}$$

with

$$\begin{aligned} {\widetilde{{\mathbf {q}}}}=(I_1(\cdot ,b^-),I_2(\cdot ,a^+),I_3(\cdot ,a^+))^\top ,\quad {\widetilde{{\mathbf {y}}}}_D = (y_D^1,y_D^2,y_D^3)^\top . \end{aligned}$$

The coupling via the boundaries of the lines 1, 2 and 3 is as follows:

$$\begin{aligned} V_1(\cdot ,b^-) = y_D^1,\quad V_2(\cdot ,a^+) = y_D^2,\quad V_3(\cdot ,a^+) = y_D^3. \end{aligned}$$

Thus, the overall coupled system has the form (2.1) with

$$\begin{aligned} {\mathbf {A}}= {\left[ {\begin{matrix} {\mathbf {A}}_1 &{}\quad {0}&{}\quad {0}&{}\quad {0}\\ {0}&{}\quad {\mathbf {A}}_2 &{}\quad {0}&{}\quad {0}\\ {0}&{}\quad {0}&{}\quad {\mathbf {A}}_3&{}\quad {0}\\ {0}&{}\quad {0}&{}\quad {0}&{}\quad {\mathbf {A}}_4 \end{matrix}}\right] } \text { where } {\mathbf {A}}_j = {\left[ {\begin{matrix} 0 &{}\quad \frac{1}{\ell _j L_j}\\ \frac{1}{\ell _j C_j}&{}\quad 0 \end{matrix}}\right] } \text { for each } j=1,2,3,4, \end{aligned}$$
(2.9)

\({\mathbf {u}}= \left( {\mathbf {u}}_1^\top ,{\mathbf {u}}_2^\top ,{\mathbf {u}}_3^\top , {\mathbf {u}}_4^\top \right) ^\top \) with \({\mathbf {u}}_j=(I_j,V_j)^\top \), and the output of the PDE (used as an input to the switched DAE) collects all currents at the boundaries of the lines. The switched DAE has the state vector \({\mathbf {w}}= (z_1,i_{12},i_{13},v_{12},v_{13},z_{24},z_{34})^\top \) and coefficient matrices

$$\begin{aligned} \begin{aligned}&{\mathbf {E}}_k = {\left[ {\begin{matrix} 0 &{}\quad {0}&{}\quad {0}&{}\quad {0}\\ {0}&{}\quad {\widetilde{{\mathbf {E}}}}_k &{}\quad {0}&{}\quad {0}\\ {0}&{}\quad {0}&{}\quad 0 &{}\quad {0}\\ {0}&{}\quad {0}&{}\quad {0}&{}\quad 0 \end{matrix}}\right] },\quad {\mathbf {H}}_k = {\left[ {\begin{matrix} 1 &{}\quad {0}&{}\quad {0}&{}\quad {0}\\ {0}&{}\quad {\widetilde{{\mathbf {H}}}}_k &{}\quad {0}&{}\quad {0}\\ {0}&{}\quad {0}&{}\quad 1 &{}\quad {0}\\ {0}&{}\quad {0}&{}\quad {0}&{}\quad 1 \end{matrix}}\right] },\\&\quad {\mathbf {B}}_k = {\left[ {\begin{matrix} 0 &{}\quad {0}&{}\quad {0}&{}\quad {0}&{}\quad {0}&{}\quad {0}&{}\quad {0}&{}\quad {0}\\ {0}&{}\quad 0 &{}\quad 0 &{}\quad {0}&{}\quad {\widetilde{{\mathbf {B}}}}^1_k &{}\quad {0}&{}\quad {0}&{}\quad {0}\\ {0}&{}\quad {0}&{}\quad {0}&{}\quad -R_{24} &{}\quad {0}&{}\quad R_{24} &{}\quad {0}&{}\quad {0}\\ {0}&{}\quad {0}&{}\quad {0}&{}\quad {0}&{}\quad {0}&{}\quad {0}&{}\quad -R_{34} &{}\quad -R_{34} \\ \end{matrix}}\right] },\\&{{\mathbf {C}}_D}_k = {\left[ {\begin{matrix} 1 &{}\quad {0}&{}\quad {0}&{}\quad {0}\\ {0}&{}\quad [0,1,0] &{}\quad {0}&{}\quad {0}\\ {0}&{}\quad [0,0,1] &{}\quad {0}&{}\quad {0}\\ {0}&{}\quad {0}&{}\quad 1 &{}\quad {0}\\ {0}&{}\quad [1,0,0] &{}\quad {0}&{}\quad {0}\\ {0}&{}\quad {0}&{}\quad 1 &{}\quad {0}\\ {0}&{}\quad {0}&{}\quad {0}&{}\quad 1 \\ {0}&{}\quad {0}&{}\quad {0}&{}\quad 1 \end{matrix}}\right] }\cdot {\left[ {\begin{matrix} 1 &{}\quad {0}&{}\quad {0}&{}\quad {0}\\ {0}&{}\quad {\widetilde{{\mathbf {C}}}}_{D_k} &{}\quad {0}&{}\quad {0}\\ {0}&{}\quad {0}&{}\quad 1 &{}\quad {0}\\ {0}&{}\quad {0}&{}\quad {0}&{}\quad 1\end{matrix}}\right] }, \quad {\mathbf {f}}_k={\left[ {\begin{matrix} -v_G \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{matrix}}\right] }, \end{aligned} \end{aligned}$$
(2.10)

where \(k=1,2\) and the coupling matrix \({\mathbf {P}}= {\left[ {\begin{matrix} {\mathbf {P}}_a &{}\quad 0 \\ 0 &{}\quad {\mathbf {P}}_b \end{matrix}}\right] }\) is given by

$$\begin{aligned} {\mathbf {P}}_a = {\mathbf {P}}_b = {\left[ {\begin{matrix} {0}&{}\quad 1 &{}\quad {0}&{}\quad {0}&{}\quad {0}&{}\quad {0}&{}\quad {0}&{}\quad {0}\\ {0}&{}\quad {0}&{}\quad {0}&{}\quad 1 &{}\quad {0}&{}\quad {0}&{}\quad {0}&{}\quad {0}\\ {0}&{}\quad {0}&{}\quad {0}&{}\quad {0}&{}\quad {0}&{}\quad 1 &{}\quad {0}&{}\quad {0}\\ {0}&{}\quad {0}&{}\quad {0}&{}\quad {0}&{}\quad {0}&{}\quad {0}&{}\quad {0}&{}\quad 1 \\ \end{matrix}}\right] }. \end{aligned}$$

Linear hyperbolic PDEs

In this section, we recall the solution properties of a system of linear PDEs on a bounded spatial domain

$$\begin{aligned} \partial _t {\mathbf {u}}(t,x) +{\mathbf {A}}\partial _x {\mathbf {u}}(t,x) = 0, \quad x \in [a,b], \quad t \ge t_0, \end{aligned}$$
(3.1)

where \(t_0\) is the initial time for the system, \({\mathbf {u}}: [t_0,\infty )\times [a,b] \rightarrow {\mathbb {R}}^n\) is the n-dimensional unknown vector and \({\mathbf {A}}\in {\mathbb {R}}^{n \times n}\) with the prescribed initial condition

$$\begin{aligned} {\mathbf {u}}(t_0,x)&= {\mathbf {u}}^{t_0}(x), \quad x\in [a,b]. \end{aligned}$$
(3.2)

Assumption 1

The system (3.1) is assumed to be hyperbolic, i.e. \({\mathbf {A}}\) is assumed to be nonsingular and diagonalizable by a real coordinate transformation.

Under Assumption 1, there exists a nonsingular matrix \({\mathbf {R}}\in {\mathbb {R}}^{n\times n}\) such that

$$\begin{aligned} {\mathbf {R}}^{-1}{\mathbf {A}}{\mathbf {R}} = \text {diag} (\lambda _1,\lambda _2,\ldots ,\lambda _n), \end{aligned}$$

where \(\lambda _i\), \(i=1,\dots , n\), are the (real) eigenvalues of \({\mathbf {A}}\). The matrix \({\mathbf {R}}\) is composed of the eigenvectors \({\mathbf {r}}_i\) corresponding to the eigenvalues \(\lambda _i\) of \({\mathbf {A}}\). Without loss of generality (and using the nonsingularity of \({\mathbf {A}}\)), we can assume that \( \lambda _1 \le \lambda _2 \le \ldots \le \lambda _{n-r}< 0 < \lambda _{n-r+1} \le \ldots \le \lambda _{n-1} \le \lambda _n, \) where \(r\in \{0,1,\ldots ,n\}\) is the number of positive eigenvalues of \({\mathbf {A}}\). Finally, we let \(\Lambda := \text {diag} (\lambda _1,\lambda _2,\ldots ,\lambda _n)\) and \({\mathbf {R}}= [{\mathbf {R}}^-, {\mathbf {R}}^+]\), where the columns of \({\mathbf {R}}^-\) and \({\mathbf {R}}^+\) consist of the eigenvectors corresponding to the negative and positive eigenvalues of \({\mathbf {A}}\), respectively.
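For a concrete instance, the following snippet diagonalizes the transport matrix of the telegraph equation (2.5) for hypothetical values of L and C (chosen for illustration) and verifies \({\mathbf {R}}^{-1}{\mathbf {A}}{\mathbf {R}}=\Lambda \):

```python
import numpy as np

# Transport matrix of the telegraph equation (2.5) with hypothetical L, C.
L, C = 2.0, 0.5
A = np.array([[0.0, 1.0 / L],
              [1.0 / C, 0.0]])

lam, R = np.linalg.eig(A)
# Sort eigenvalues ascending, so negative ones come first as in the text.
order = np.argsort(lam)
lam, R = lam[order], R[:, order]

Lam = np.linalg.inv(R) @ A @ R   # should equal diag(lambda_1, lambda_2)

# The characteristic speeds are +-1/sqrt(LC); here LC = 1, so +-1.
print(lam)
```

With one negative and one positive eigenvalue (\(r = 1\)), the telegraph equation needs exactly one boundary condition at each end of the line.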

By applying the coordinate transformation \({\mathbf {v}}= {\mathbf {R}}^{-1} {\mathbf {u}}\), we see that (3.1) is equivalent to a decoupled system of scalar PDEs, \(i=1,2,\ldots ,n\),

$$\begin{aligned} \partial _t v_i(t,x) + \lambda _i \partial _x v_i (t,x)&= 0 \end{aligned}$$
(3.3a)
$$\begin{aligned} v_i(t_0,x)&=: v_i^{t_0}(x), \end{aligned}$$
(3.3b)

where \(v_i^{t_0}={\mathbf {r}}_i {\mathbf {u}}^{t_0}\) and \({\mathbf {r}}_i\) here denotes the i-th row of \({\mathbf {R}}^{-1}\).

On a bounded spatial domain, say \(x \in [a,b]\), it is necessary to prescribe boundary conditions at the boundaries \(x=a\) and \(x=b\). The system (3.1) needs as many boundary conditions at \(x=a\) as there are positive eigenvalues and as many boundary conditions at \(x=b\) as there are negative eigenvalues. The boundary conditions take the form:

$$\begin{aligned} \begin{aligned} {\mathbf {P}}_a{\mathbf {u}}(t,a)&= {\mathbf {b}}^a(t),\\ {\mathbf {P}}_b{\mathbf {u}}(t,b)&= {\mathbf {b}}^b(t),\\ \end{aligned} \qquad t> t_0, \end{aligned}$$
(3.4)

where \({\mathbf {P}}_a\in {\mathbb {R}}^{r\times n}\), \({\mathbf {P}}_b\in {\mathbb {R}}^{(n-r)\times n}\), \({\mathbf {b}}^a : {\mathbb {R}}\rightarrow {\mathbb {R}}^r\) and \({\mathbf {b}}^b :{\mathbb {R}}\rightarrow {\mathbb {R}}^{n-r}\). For the well-posedness of the boundary condition (2.1c) for the hyperbolic PDE (3.1) satisfying Assumption 1, we will impose the following assumption on the coupling matrix \({\mathbf {P}}\).

Assumption 2

The coupling matrix in (2.1c) has the block structure \({\mathbf {P}}= {\left[ {\begin{matrix} {\mathbf {P}}_a &{}\quad 0 \\ 0 &{}\quad {\mathbf {P}}_b \end{matrix}}\right] }\) where \({\mathbf {P}}_a\in {\mathbb {R}}^{r\times n}\) and \({\mathbf {P}}_b\in {\mathbb {R}}^{(n-r)\times n}\) satisfy

$$\begin{aligned} \ker {\mathbf {P}}_a \oplus {{\,\mathrm{im}\,}}{\mathbf {R}}^+ = {\mathbb {R}}^n\quad \text {and}\quad \ker {\mathbf {P}}_b \oplus {{\,\mathrm{im}\,}}{\mathbf {R}}^- = {\mathbb {R}}^n. \end{aligned}$$

For the boundary conditions (3.4) in the characteristic variables, we define \({\mathbf {M}}=\begin{bmatrix} {\mathbf {M}}_1&{\mathbf {M}}_2 \end{bmatrix} \in {\mathbb {R}}^{r\times n}\) and \({\mathbf {N}}=\begin{bmatrix} {\mathbf {N}}_1&{\mathbf {N}}_2 \end{bmatrix} \in {\mathbb {R}}^{(n-r) \times n}\) by

$$\begin{aligned} \begin{aligned} \begin{bmatrix} {\mathbf {P}}_a{\mathbf {R}}^-&{}{\mathbf {P}}_a{\mathbf {R}}^+\\ {\mathbf {P}}_b{\mathbf {R}}^-&{}{\mathbf {P}}_b{\mathbf {R}}^+ \end{bmatrix} = \begin{bmatrix} {\mathbf {M}}_1&{}{\mathbf {M}}_2 \\ {\mathbf {N}}_1&{}{\mathbf {N}}_2 \end{bmatrix} \end{aligned}. \end{aligned}$$

From Assumption 2, it is easily seen that \({\mathbf {M}}_2\) and \({\mathbf {N}}_1\) are invertible, and we therefore can transform (3.4) as follows:

$$\begin{aligned} \begin{aligned} {\mathbf {v}}^a_+(t):= {\mathbf {v}}_+(t,a)&= -{\mathbf {M}}_2^{-1}{\mathbf {M}}_1 {\mathbf {v}}_-(t,a) + {\mathbf {M}}_2^{-1}{\mathbf {b}}^a(t),\\ {\mathbf {v}}^b_-(t):= {\mathbf {v}}_-(t,b)&= -{\mathbf {N}}_1^{-1}{\mathbf {N}}_2 {\mathbf {v}}_+(t,b) + {\mathbf {N}}_1^{-1}{\mathbf {b}}^b(t), \\ \end{aligned} \qquad t > t_0. \end{aligned}$$
(3.5)

Remark 3

In the case that the wave speed \(\lambda _i = 0\), Eq. (3.3) is simply \(\partial _t v_i = 0\), meaning that the change in the solution with respect to time is zero, with the initial condition given as in (3.3b). Hence, the solution is given by the initial condition \(v_i(t,x) = v_i^{t_0}(x)\).

Explicit solution formula in terms of characteristic variables

In order to give a solution formula in a compact form, we define two shift operators as follows.

Definition 4

(Shift operator for functions) Denote by \({\mathcal {F}}(A\rightarrow B)\) the set of all functions from a set A to a set B. Let \(T\subseteq {\mathbb {R}}\) and \(X\subseteq {\mathbb {R}}\); then the time shift operator \({\mathcal {S}}_{\text {time}}^{\lambda ,x_0}\) with speed \(\lambda \in {\mathbb {R}}\) and initial position \(x_0\in X\) is

$$\begin{aligned} {\mathcal {S}}_{\text {time}}^{\lambda ,x_0}:{\mathcal {F}}(T\rightarrow {\mathbb {R}})\rightarrow {\mathcal {F}}(T\times X\rightarrow {\mathbb {R}}),\quad f \mapsto \big [(t,x)\mapsto f\big (t-\tfrac{x-x_0}{\lambda }\big )\big ], \end{aligned}$$

with the convention that \(f(s)=0\) if \(s\not \in T\). The space shift operator \({\mathcal {S}}_{\text {space}}^{\lambda ,t_0}\) with speed \(\lambda \in {\mathbb {R}}\) and initial time \(t_0\in T\) is

$$\begin{aligned} {\mathcal {S}}_{\text {space}}^{\lambda ,t_0}:{\mathcal {F}}(X\rightarrow {\mathbb {R}})\rightarrow {\mathcal {F}}(T\times X\rightarrow {\mathbb {R}}),\quad g \mapsto \big [(t,x)\mapsto g\big (x-\lambda (t-t_0)\big )\big ], \end{aligned}$$

with the convention that \(g(y)=0\) if \(y \notin X\).
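The two shift operators can be implemented directly. The following sketch, with hypothetical data, also illustrates the zero-extension conventions outside T and X:

```python
import numpy as np

# Shift operators of Definition 4 for T = [t0, inf) and X = [a, b];
# values outside the respective domain are set to 0 by convention.
def S_time(f, lam, x0, t0):
    """Time shift: (t, x) -> f(t - (x - x0)/lam), zero if the argument < t0."""
    def shifted(t, x):
        s = t - (x - x0) / lam
        return f(s) if s >= t0 else 0.0
    return shifted

def S_space(g, lam, t0, a, b):
    """Space shift: (t, x) -> g(x - lam*(t - t0)), zero outside [a, b]."""
    def shifted(t, x):
        y = x - lam * (t - t0)
        return g(y) if a <= y <= b else 0.0
    return shifted

# A right-going wave (lam = 1) transporting a hypothetical initial profile g:
g = lambda x: x**2
u = S_space(g, lam=1.0, t0=0.0, a=0.0, b=1.0)
print(u(0.25, 0.75))   # g(0.5) = 0.25
print(u(2.0, 0.5))     # characteristic starts outside [a, b] -> 0
```

The zero-extension is what later allows the solution formulas to be written without explicitly distinguishing whether a characteristic originates from the initial or the boundary data.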

Consider the system (3.3a) with the initial condition (3.3b) and the boundary conditions (3.5). To ease the notation, let \(K^-:=\{ 1,\ldots ,n-r \}\) and \(K^+:=\{ n-r+1, \ldots , n \}\). Each scalar PDE (3.3a) can be solved individually in terms of its initial condition \(v_i^{t_0}\) from (3.3b) and its boundary value \(v_i^a(t):=v_i(t,a)\) or \(v_i^b(t):=v_i(t,b)\) from (3.5). The solution \({\mathbf {v}}(t,x)\) is expressed in terms of \({\mathbf {v}}_-(t,x)\) and \({\mathbf {v}}_+(t,x)\) separately below.

For left-going waves, the solution is of the form:

$$\begin{aligned} \begin{aligned} v_i(t,x) = {\left\{ \begin{array}{ll} \left( {\mathcal {S}}_{\text {space}}^{\lambda _i,t_0}v_i^{t_0}\right) (t,x), &{} \text { if } x-b\le \lambda _i (t-t_0),\\ \left( {\mathcal {S}}_{\text {time}}^{\lambda _i,b} v_i^{b}\right) (t,x), &{} \text { if } x-b> \lambda _i (t-t_0), \end{array}\right. } \end{aligned} \end{aligned}$$
(3.6)

where \(i \in K^-\) and \(x\in [a,b]\). In a similar fashion, the solution to right-going waves is of the form:

$$\begin{aligned} \begin{aligned} v_j(t,x) = {\left\{ \begin{array}{ll} \left( {\mathcal {S}}_{\text {space}}^{\lambda _j,t_0} v_j^{t_0}\right) (t,x), &{} \text { if } x-a\ge \lambda _j (t-t_0),\\ \left( {\mathcal {S}}_{\text {time}}^{\lambda _j,a} v_j^a\right) (t,x), &{} \text { if } x-a<\lambda _j (t-t_0), \end{array}\right. } \end{aligned} \end{aligned}$$
(3.7)

where \(j \in K^+\) and \(x\in [a,b]\).
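A minimal numerical illustration of the right-going case (3.7), with a hypothetical wave speed and made-up initial and boundary data:

```python
import numpy as np

# Scalar right-going wave (3.7): the solution is the shifted initial datum
# until the characteristic emanating from x = a arrives, and the shifted
# boundary datum afterwards. All data here are hypothetical.
lam, a, t0 = 2.0, 0.0, 0.0
v0 = lambda x: np.sin(np.pi * x)   # initial condition v_j^{t0}
va = lambda t: t                   # boundary value v_j^a at x = a

def v(t, x):
    if x - a >= lam * (t - t0):            # initial-data region
        return v0(x - lam * (t - t0))
    return va(t - (x - a) / lam)           # boundary-data region

print(v(0.1, 0.5))   # initial-data region: sin(0.3*pi)
print(v(0.5, 0.5))   # boundary-data region: va(0.25) = 0.25
```

The left-going case (3.6) is obtained symmetrically, with the boundary datum taken at \(x = b\) and a negative wave speed.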

The solutions \({\mathbf {v}}_-(t,x)\) and \({\mathbf {v}}_+(t,x)\) together form the solution \({\mathbf {v}}(t,x)\) to the system (3.3a) with the initial condition (3.3b) and the boundary conditions (3.5). Hence, \({\mathbf {v}}(t,x)\) can be written as

(3.8)

where \({\mathbf {e}}_i \in {\mathbb {R}}^{n-r}\) and \({\mathbf {e}}_j \in {\mathbb {R}}^{r}\) are the corresponding directional unit vectors. The solutions at \(x=a\) and \(x=b\) then are of the form:

(3.9)

Solution framework for the linear hyperbolic system

In this section, the solution \({\mathbf {u}}(t,x)\) to the system (3.1) with the initial and boundary conditions (3.2) and (3.4) will be formulated by using the results from Sect. 3.1 and passing to the representation in the original variables, i.e. \({\mathbf {u}}={\mathbf {R}}{\mathbf {v}}\).

Lemma 5

Consider the PDE (3.1) satisfying Assumptions 1 and 2, with a given initial condition \({\mathbf {u}}^{t_0}\) as in (3.2) and boundary conditions \({\mathbf {b}}^a\), \({\mathbf {b}}^b\) as in (3.4). Let

$$\begin{aligned} \begin{aligned} {\mathbf {u}}^a(t)&:= {\left\{ \begin{array}{ll} \sum _{i\in K^-} \mathbf {\Pi }_i\left( {\mathcal {S}}_{\text {space}}^{\lambda _i,t_0} {\mathbf {u}}^{t_0}\right) (t,a),&{}\quad t\le t_0+\frac{b-a}{-\lambda _i}\\ {\mathbf {F}}_a {\mathbf {b}}^a(t) + \sum _{k=1}^{n}{\mathbf {D}}_k^{ab} \left( {\mathcal {S}}_{\text {time}}^{\lambda _k,b} {\mathbf {u}}^b\right) (t,a),&{}\quad t> t_0+\frac{b-a}{-\lambda _k} \end{array}\right. }\\ {\mathbf {u}}^b(t)&:= {\left\{ \begin{array}{ll} \sum _{j\in K^+} \mathbf {\Pi }_j \left( {\mathcal {S}}_{\text {space}}^{\lambda _j,t_0} {\mathbf {u}}^{t_0}\right) (t,b),&{}\quad t\le t_0+\frac{b-a}{\lambda _j}\\ {\mathbf {F}}_b {\mathbf {b}}^b(t) + \sum _{k=1}^{n}{\mathbf {D}}_k^{ba} \left( {\mathcal {S}}_{\text {time}}^{\lambda _k,a} {\mathbf {u}}^a\right) (t,b),&{}\quad t> t_0+\frac{b-a}{\lambda _k} \end{array}\right. } \end{aligned} \end{aligned}$$
(3.10)

where \(\mathbf {\Pi }_p:= \begin{bmatrix} {\mathbf {R}}^-&{\mathbf {R}}^+ \end{bmatrix} \mathrm {diag}({\mathbf {e}}_p) \begin{bmatrix} {\mathbf {R}}^-&{\mathbf {R}}^+ \end{bmatrix}^{-1}\), with \({\mathbf {e}}_{p} \in {\mathbb {R}}^n\) the p-th directional unit vector, \(p=1,2,\ldots ,n,\) \(K^-=\{1,2,\ldots ,n-r\}\), \(K^+ = \{n-r+1,n-r+2,\ldots ,n\}\),

$$\begin{aligned}&{\mathbf {F}}_a = {\mathbf {R}}\, \begin{bmatrix} {\mathbf {0}}_{n-r, r}\\ {\mathbf {M}}_2^{-1} \end{bmatrix} \quad \text { and } \quad {\mathbf {F}}_b ={\mathbf {R}}\, \begin{bmatrix} {\mathbf {N}}_1^{-1}\\ {\mathbf {0}}_{r,n-r} \end{bmatrix}, \end{aligned}$$
(3.11a)
$$\begin{aligned}&{\mathbf {D}}_p^{ab} = {\mathbf {R}}{\left[ {\begin{matrix} {\mathbf {I}}_{n-r,n-r} &{}\quad {\mathbf {0}}_{n-r, r}\\ -{\mathbf {M}}_2^{-1}\, {\mathbf {M}}_1 &{}\quad {\mathbf {0}}_{r, r} \end{matrix}}\right] } {\mathbf {R}}^{-1} \, \mathbf {\Pi }_p \text { and } {\mathbf {D}}_p^{ba} = {\mathbf {R}} {\left[ {\begin{matrix} {\mathbf {0}}_{n-r, n-r} &{}\quad -{\mathbf {N}}_1^{-1}{\mathbf {N}}_2\\ {\mathbf {0}}_{r,n-r} &{}\quad {\mathbf {I}}_{r, r} \end{matrix}}\right] } {\mathbf {R}}^{-1} \mathbf {\Pi }_p, \end{aligned}$$
(3.11b)

and where it is assumed that \({\mathbf {u}}^{t_0}(x)=0\) for \(x\notin [a,b]\). Then, every classical solution is given by

$$\begin{aligned} {\mathbf {u}}(t,x) =\sum _{i \in K^-} \mathbf {\Pi }_i \left( {\mathcal {S}}_{\text {time}}^{\lambda _i,b} {\mathbf {u}}^b\right) (t,x) + \sum _{j \in K^+} \mathbf {\Pi }_j \left( {\mathcal {S}}_{\text {time}}^{\lambda _j,a} {\mathbf {u}}^a\right) (t,x). \end{aligned}$$

The proof is a direct consequence of the method of characteristics. Details can be found, for example, in [3].
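The characteristic projectors \(\mathbf {\Pi }_p\) of Lemma 5 can be assembled and checked numerically. The following sketch uses an invented \(2\times 2\) wave-speed matrix (not taken from the paper) and verifies the projector identities \(\mathbf {\Pi }_p^2=\mathbf {\Pi }_p\), \(\mathbf {\Pi }_p\mathbf {\Pi }_q={\mathbf {0}}\) for \(p\ne q\), \(\sum _p \mathbf {\Pi }_p={\mathbf {I}}\) and \({\mathbf {A}}\mathbf {\Pi }_p=\lambda _p\mathbf {\Pi }_p\):

```python
import numpy as np

# Hypothetical 2x2 hyperbolic system: A has one negative and one positive
# eigenvalue (wave speed), as required for the characteristic splitting.
A = np.array([[0.0, 2.0],
              [2.0, 0.0]])          # eigenvalues -2 (left-going) and +2 (right-going)

lam, R = np.linalg.eig(A)
order = np.argsort(lam)             # negative speeds first (K^-), then positive (K^+)
lam, R = lam[order], R[:, order]
Rinv = np.linalg.inv(R)

# Characteristic projectors Pi_p = R diag(e_p) R^{-1} as in Lemma 5.
n = A.shape[0]
Pi = [R @ np.diag(np.eye(n)[p]) @ Rinv for p in range(n)]

assert np.allclose(Pi[0] @ Pi[0], Pi[0])        # idempotent
assert np.allclose(Pi[0] @ Pi[1], np.zeros((n, n)))  # mutually annihilating
assert np.allclose(sum(Pi), np.eye(n))          # resolution of the identity
# Each projector isolates a single wave speed: A Pi_p = lam_p Pi_p.
assert np.allclose(A @ Pi[0], lam[0] * Pi[0])
```

Here the projector \(\mathbf {\Pi }_p\) extracts the part of the state travelling with speed \(\lambda _p\), which is exactly the part that the shift operators in (3.10) transport along the corresponding characteristic.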

Remark 6

In addition to the 2D shift operator defined as in Definition 4, we analogously define the 1D time shift operator \({\mathcal {S}}_{\text {time}}^{\tau }\) with \(\tau \in {\mathbb {R}}\) for functions \(f:T\subseteq {\mathbb {R}}\rightarrow {\mathbb {R}}\) as follows:

$$\begin{aligned} {\mathcal {S}}_{\text {time}}^{\tau } : {\mathcal {F}}(T \rightarrow {\mathbb {R}}) \mapsto {\mathcal {F}}(T\rightarrow {\mathbb {R}}), \quad f \mapsto [t \mapsto f(t-\tau )]. \end{aligned}$$
(3.12)

where \({\mathcal {F}}\) is as in Definition 4, with the convention that \(f(s)=0\) if \(s\notin T\).

Remark 7

Let \({\mathbf {u}}_{ab}(t) := \left( {\mathbf {u}}^a(t)^\top , {\mathbf {u}}^b(t)^\top \right) ^\top \in {\mathbb {R}}^{2n}\), where \({\mathbf {u}}^a(t)\) and \({\mathbf {u}}^b(t)\) are given as in (3.10). Below, we express \({\mathbf {u}}_{ab}\) in compressed form in terms of the 1D time shift operator \({\mathcal {S}}_{\text {time}}^{\tau }\):

$$\begin{aligned} {\mathbf {u}}_{ab}(t) = {\left[ {\begin{matrix} {\mathbf {F}}_a&{}\quad {\mathbf {0}}_{n,n-r}\\ {\mathbf {0}}_{n, r}&{}\quad {\mathbf {F}}_b \end{matrix}}\right] } {\left[ {\begin{matrix} {\mathbf {b}}^a(t) \\ {\mathbf {b}}^b(t) \end{matrix}}\right] } + \sum _{k=1}^n {\left[ {\begin{matrix} {\mathbf {0}}_{n, n} &{}\quad {\mathbf {D}}_k^{ab} \\ {\mathbf {D}}_k^{ba} &{}\quad {\mathbf {0}}_{n, n} \end{matrix}}\right] }\left( {\mathcal {S}}_{\text {time}}^{\tau _k} {\mathbf {u}}_{ab}(t)\right) , \end{aligned}$$
(3.13)

where \(\tau _k= \tfrac{b-a}{\mathop {\mathrm {sgn}}(\lambda _k) \lambda _k}\), the matrices \({\mathbf {F}}_a, {\mathbf {F}}_b, {\mathbf {D}}_k^{ab}, {\mathbf {D}}_k^{ba}\), \(k=1,2,\ldots ,n\), are given as in (3.11), and the initial conditions are extended as boundary values for negative times as in the proof of Lemma 5. The equality (3.13) then follows from Eq. (3.10).
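Remark 7 recasts the boundary traces as a delay system: current boundary values are a feed-through of the inputs plus delayed boundary values. The following scalarized sketch (one delay, hypothetical coefficients, zero history before \(t_0\)) shows how such a relation is marched forward in time:

```python
import numpy as np

# Minimal sketch of a delay relation of the type (3.13):
#   y(t) = F * b(t) + D * y(t - tau),   y = 0 for t < t0,
# with invented scalar coefficients, solved by marching forward in time.
tau, F, D = 1.0, 1.0, 0.5
t0, t_end, dt = 0.0, 5.0, 0.01
b = lambda t: 1.0 if t >= 0 else 0.0     # step input at the boundary

ts = np.arange(t0, t_end, dt)
y = np.zeros_like(ts)
shift = int(round(tau / dt))             # the delay in grid steps
for k, t in enumerate(ts):
    delayed = y[k - shift] if k >= shift else 0.0   # zero history before t0
    y[k] = F * b(t) + D * delayed

# On (t0 + m*tau, t0 + (m+1)*tau) the response is the partial geometric
# sum 1 + D + ... + D^m; at t = 4.99 (m = 4) this is 1.9375.
assert abs(y[-1] - 1.9375) < 1e-9
```

The time-stepping is well defined because each value only depends on strictly earlier values, which is the same causality structure that makes the switched delay DAE characterization of the coupled system solvable.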

Switched differential-algebraic equations

In this section, we consider switched differential-algebraic equations (swDAEs) of the form:

$$\begin{aligned} {\mathbf {E}}_\sigma {\dot{{\mathbf {w}}}}(t) = {\mathbf {H}}_\sigma {\mathbf {w}}(t) + {\mathbf {B}}_\sigma {\mathbf {q}}(t) + {\mathbf {f}}_\sigma (t), \end{aligned}$$
(4.1)

with the output \({\mathbf {y}}_D(t):={{\mathbf {C}}_D}_\sigma {\mathbf {w}}(t)\), where \({\mathbf {w}}: [t_0,\infty )\rightarrow {\mathbb {R}}^m\), \(m\in {\mathbb {N}},\) is the state variable of the system and \({\sigma :{\mathbb {R}}\rightarrow \{1,2,\ldots ,N \}}\), \(N\in {\mathbb {N}}\), is a right-continuous, piecewise constant switching signal with a locally finite set of jump points. Furthermore, \({\mathbf {E}}_\xi , {\mathbf {H}}_\xi \in {\mathbb {R}}^{m\times m}\) for each \(\xi \in \{ 1,2,\ldots ,N \}\), \({\mathbf {f}}_\xi : [t_0,\infty ) \rightarrow {\mathbb {R}}^m\) is some inhomogeneity, \({\mathbf {B}}_\xi \in {\mathbb {R}}^{m\times \nu }\), \({\mathbf {q}}: [{t_0},\infty ) \rightarrow {\mathbb {R}}^{\nu }\) is the input, and \({\mathbf {y}}_D \in {\mathbb {R}}^{m_1}\), \(m_1\in {\mathbb {N}}\), with \({{\mathbf {C}}_D}_\xi \in {\mathbb {R}}^{m_1\times m}\). Note that the matrices \({\mathbf {E}}_\xi \) are not assumed to be non-singular.

For the existence and uniqueness of solutions to (4.1), the following definition of regularity of matrix pairs will be employed.

Definition 8

(Regularity of a matrix pair) The matrix pencil \(s {\mathbf {E}}_\xi -{\mathbf {H}}_\xi \in {\mathbb {R}}^{m\times n}[s]\), \(\xi \in \{ 1,2,\ldots , N \}\), \(N\in {\mathbb {N}}\) is called regular if and only if \(n=m\) and \(\mathrm {det}(s{\mathbf {E}}_\xi -{\mathbf {H}}_\xi )\) is not the zero polynomial. The matrix pair \(({\mathbf {E}}_\xi ,{\mathbf {H}}_\xi )\) and the corresponding mode \(\xi \) for the swDAE (4.1) are called regular if \(s{\mathbf {E}}_\xi -{\mathbf {H}}_\xi \) is regular.
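Regularity in the sense of Definition 8 can be tested numerically, since \(\det (s{\mathbf {E}}-{\mathbf {H}})\) is a polynomial of degree at most m: if it vanishes at \(m+1\) distinct sample points, it is the zero polynomial. The following heuristic sketch (invented example matrices, a numerical tolerance instead of a symbolic check) illustrates this:

```python
import numpy as np

# Heuristic regularity check for a square pencil sE - H: det(sE - H) has
# degree at most m, so vanishing at m+1 generic points means it is the
# zero polynomial.  (Numerical tolerance, not a symbolic proof.)
def is_regular(E, H, rng=np.random.default_rng(0)):
    m = E.shape[0]
    samples = rng.standard_normal(m + 1) * 10
    return any(abs(np.linalg.det(s * E - H)) > 1e-9 for s in samples)

# Example: E singular, yet the pencil is regular
# (det(sE - H) = -1, a nonzero constant).
E = np.array([[1.0, 0.0],
              [0.0, 0.0]])
H = np.array([[0.0, 1.0],
              [1.0, 0.0]])
assert is_regular(E, H)

# A singular pencil: det(sE2 - H2) is identically zero.
E2 = np.array([[1.0, 0.0], [0.0, 0.0]])
H2 = np.array([[1.0, 0.0], [0.0, 0.0]])
assert not is_regular(E2, H2)
```

The first example shows that regularity of the pencil and invertibility of \({\mathbf {E}}_\xi \) are independent: the mode carries an algebraic constraint but is still uniquely solvable.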

Due to the potential singularity of \({\mathbf {E}}_\xi \), solutions of (4.1) need to satisfy certain algebraic constraints which in general differ from mode to mode. Hence, at a switching time \(t_s\in {\mathbb {R}}\) the value \({\mathbf {w}}(t_s^-)\) may be inconsistent with the mode after the switch, resulting in a jump in \({\mathbf {w}}\) at \(t_s\). In fact, derivatives of jumps, i.e. Dirac impulses, may also occur in response to switches [19]. Therefore, the solution space must be enlarged to distributions so that jumps and Dirac impulses are allowed in the solutions. To this end, piecewise-smooth distributions are considered as the solution space. Below, we first recall the definition of scalar piecewise-smooth functions and distributions as in [17].

Definition 9

(Piecewise-smooth function/distribution) Let \(T\subseteq {\mathbb {R}}\) be an open interval. A function \(\alpha :T\rightarrow {\mathbb {R}}\) is called piecewise-smooth if and only if

$$\begin{aligned} \alpha = \sum _{i\in {\mathbb {Z}}} (\alpha _i)_{[t_i,t_{i+1})} \end{aligned}$$
(4.2)

where the functions \(\alpha _i:T\rightarrow {\mathbb {R}}\), \(i\in {\mathbb {Z}}\), are (globally) smooth and the set \(\left\{ t_i\in T\,\left| \, i\in {\mathbb {Z}}\,\right. \!\!\right\} \) is an ordered discrete set, i.e. \(t_i<t_{i+1}\) for all \(i\in {\mathbb {Z}}\) and its intersection with any compact subset of T contains only finitely many points. The set of all piecewise-smooth functions is denoted by \({\mathcal {C}}^\infty _\text {pw}(T)\). A distribution \(D\in {\mathbb {D}}(T)\) is called piecewise-smooth if

$$\begin{aligned} D = \alpha _{\mathbb {D}}+ \sum _{\tau \in \Delta } D_\tau \end{aligned}$$

where \(\alpha \in {\mathcal {C}}^\infty _\text {pw}(T)\), \(\Delta \subseteq T\) is a discrete set and \({{\,\mathrm{supp}\,}}D_\tau \subseteq \{\tau \}\) for all \(\tau \in \Delta \). The space of all piecewise-smooth distributions is denoted by \({\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }(T)\).

Note that a distribution \(D_\tau \) has point support \(\{\tau \}\) if and only if there exist \(d_\tau \in {\mathbb {N}}\), \(c_0,\ldots ,c_{d_\tau }\in {\mathbb {R}}\) such that

$$\begin{aligned} D_\tau = \sum _{i=0}^{d_\tau } c_i \delta _{\tau }^{(i)}, \end{aligned}$$

where \(\delta _\tau ^{(k)}\) is the k-th derivative of the Dirac impulse \(\delta _{\tau }\) at \(\tau \in T\). It is easily seen that the space of piecewise-smooth distributions is closed with respect to differentiation and contains the space of piecewise-smooth functions as a subspace. In fact, a distribution is piecewise-smooth if, and only if, locally it is equal to a finite derivative of a piecewise-smooth function; in other words, for all \(D\in {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }\) and all compact subsets \(K\subseteq T\) there exist \(k\in {\mathbb {N}}\) and \(\alpha \in {\mathcal {C}}^\infty _\text {pw}(T;{\mathbb {R}})\) such that

$$\begin{aligned} D(\varphi ) = (\alpha _{\mathbb {D}})^{(k)}(\varphi )\quad \forall \varphi \in {\mathcal {C}}^\infty _0(T;{\mathbb {R}}) \text { with }{{\,\mathrm{supp}\,}}\varphi \subseteq K. \end{aligned}$$

Having defined the solution space \({\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }\), which admits distributional solutions to the swDAE (4.1), the state variable \({\mathbf {w}}\), the input \({\mathbf {q}}\) and the inhomogeneity \({\mathbf {f}}\) in (4.1) are henceforth regarded as vectors of piecewise-smooth distributions, i.e. \({\mathbf {w}}, {\mathbf {f}}\in \left( {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }\right) ^m\) and \({\mathbf {q}}\in \left( {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }\right) ^\nu \).

Theorem 10

(Existence and uniqueness of solutions of swDAEs, [17]) Consider the swDAE as given in (4.1) with regular matrix pairs \(({\mathbf {E}}_\xi ,{\mathbf {H}}_\xi )\) with \(\xi \in \{ 1,2,\ldots ,N \}\), and assume that the switching signal \(\sigma \) only has locally finitely many switches. Then, for every initial trajectory \({\mathbf {w}}^{t_0}\in \left( {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }\right) ^m\) with the initial time \(t_0 \in {\mathbb {R}}\), any input \({\mathbf {q}}\in \left( {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }\right) ^\nu \) and any inhomogeneity \({\mathbf {f}}_{\sigma } \in \left( {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }\right) ^m\), there exists a unique globally defined solution \({\mathbf {w}}\in \left( {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }\right) ^m\) of the initial trajectory problem

$$\begin{aligned} \begin{aligned} {\mathbf {w}}_{(-\infty ,t_0)}&= {\mathbf {w}}^{t_0}_{(-\infty ,t_0)}\, ,\\ \left( {\mathbf {E}}_{\sigma } {\dot{{\mathbf {w}}}} \right) _{[t_0,\infty )}&= \left( {\mathbf {H}}_{\sigma }{\mathbf {w}}+{\mathbf {B}}_\sigma {\mathbf {q}}+ {\mathbf {f}}_{\sigma } \right) _{[t_0,\infty )}. \end{aligned} \end{aligned}$$
(4.3)

Furthermore, the solution on \([t_0,\infty )\) is uniquely determined by \({\mathbf {w}}(t_0^-)\).
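To see the impulsive behaviour covered by Theorem 10 concretely, consider a hypothetical two-mode example (not from the paper): for \(t<1\) the mode is \({\dot{w}}_1=0\), \(w_2=0\), so \(w_1\equiv c\); at \(t=1\) the mode switches to the constraint \(w_1=0\) with \(w_2={\dot{w}}_1\), so \(w_1\) jumps from c to 0 and \(w_2=-c\,\delta _1\). The sketch below approximates the algebraic constraint by a stiff relaxation \(\varepsilon {\dot{w}}_1=-w_1\) and checks the impulse weight numerically:

```python
import numpy as np

# Regularized sketch of an impulsive switch (hypothetical system):
# after the switch at t = 1 the constraint w1 = 0 is replaced by the
# stiff relaxation eps*w1' = -w1, giving w1(t) = c*exp(-(t-1)/eps) and
# w2 = w1'.  As eps -> 0, w2 concentrates at t = 1 and approximates the
# Dirac impulse -c*delta_1 of the distributional solution.
c, eps = 3.0, 1e-3
t = np.linspace(1.0, 1.2, 200_001)
w1 = c * np.exp(-(t - 1.0) / eps)
w2 = -(c / eps) * np.exp(-(t - 1.0) / eps)   # = dw1/dt

area = np.sum(w2) * (t[1] - t[0])            # integral of w2 over [1, 1.2]
assert abs(area - (-c)) < 1e-2 * abs(c)      # impulse "weight" is -c
```

The regularized spike has area \(-c\) independently of \(\varepsilon \), which is the defining property of the impulsive part of the distributional solution; no classical function can reproduce this limit.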

Distributional solution of the PDE

In Sect. 3, we reviewed the classical solution of linear hyperbolic PDEs. When coupled with switched DAEs, however, the boundary data for the PDE are given by piecewise-smooth distributions. Thus, we need to extend the solution concept in the distributional sense, allowing for Dirac impulses and their derivatives. We cannot simply consider distributions on \({\mathbb {R}}^2\), since we still need to evaluate the traces at the initial time and at the boundaries. Therefore, we construct an appropriate solution space of piecewise-smooth distributions in time and space.

Distribution theory in time and space

Definition 11

(Piecewise-smooth functions in time and space) Denote by \(T\subseteq {\mathbb {R}}\) (time) and \(X\subseteq {\mathbb {R}}\) (space) open intervals. We say a family of subsets \((P_i)_{i\in {\mathcal {I}}}\) of \(T\times X\), for some index set \({\mathcal {I}}\), is a polyhedral partition of \(T\times X\) if and only if the sets \(P_i\) are polyhedral (i.e. intersections of finitely many open or closed half-spaces in \(T\times X\)), pairwise disjoint, and satisfy \(\bigcup _{i\in {\mathcal {I}}} P_i = T\times X\).

A function \(\beta :T\times X \rightarrow {\mathbb {R}}\) is called (polyhedral) piecewise-smooth if and only if there exist a locally finite polyhedral partition \((P_i)_{i\in {\mathcal {I}}}\) of \(T\times X\) and a family of smooth functions \(\beta _i:T\times X\rightarrow {\mathbb {R}}\), \(i\in {\mathcal {I}}\), such that

$$\begin{aligned} \beta = \sum _{i\in {\mathcal {I}}} \chi _{P_i} \beta _i, \end{aligned}$$
(5.1)

where \(\chi _{P_i}\) is the characteristic function of the set \(P_i\subseteq T\times X\).

In the definition of a polyhedral partition \((P_i)_{i\in {\mathcal {I}}}\), it is not excluded that some \(P_i\) have empty interior (i.e. have measure zero), and this has the advantage that then the corresponding space of piecewise-smooth functions is closed under addition and restriction to polyhedral sets. For a piecewise-smooth function \(\beta :T\times X\rightarrow {\mathbb {R}}\), it is easily seen that for any \(t\in T\) and \(x\in X\), the functions \(\beta (t,\cdot )\) and \(\beta (\cdot ,x)\) are scalar piecewise-smooth functions as in Definition 9 (where we treat two functions as equal when they are equal almost everywhere).

Definition 12

(Dirac segment, cf. [23]) Let \(L\subseteq T\times X\) be a line segment, i.e. there exist \(t_0,t_1\in T\), \(x_0,x_1\in X\) such that

$$\begin{aligned} L = \left\{ (t_0 + \xi (t_1-t_0),x_0 + \xi (x_1-x_0))\,\left| \, \xi \in [0,1]\,\right. \!\!\right\} . \end{aligned}$$
(5.2)

Then, the Dirac segment on L is

$$\begin{aligned} \delta _L : {\mathcal {C}}_0^\infty (T\times X \rightarrow {\mathbb {R}}) \rightarrow {\mathbb {R}}: \varphi \mapsto \int _L \varphi , \end{aligned}$$

where \(\int _L \varphi \) is the usual line integral given by

$$\begin{aligned} \int _L \varphi = \int _0^1 \varphi (t_0+\alpha (t_1-t_0),x_0+\alpha (x_1-x_0)) \sqrt{\Delta t^2 + \Delta x^2} \,\text {d} \alpha , \end{aligned}$$

where \(\Delta t = t_1-t_0\) and \(\Delta x = x_1-x_0\). For unbounded line segments (i.e. \(\xi \) ranges over an unbounded interval in (5.2)), the integral boundaries in the definition of \(\int _L \varphi \) are replaced by \(\pm \infty \) appropriately.

Note that if \(\Delta t\ne 0\), then

$$\begin{aligned} \int _L \varphi = \int _{t_0}^{t_1} \varphi (t,x_0+\tfrac{\Delta x}{\Delta t} (t-t_0)) \sqrt{1 + \tfrac{\Delta x^2}{\Delta t^2}} \,\text {d} t\end{aligned}$$

and if \(\Delta x \ne 0\) then

$$\begin{aligned} \int _L \varphi = \int _{x_0}^{x_1} \varphi (t_0 + \tfrac{\Delta t}{\Delta x} (x-x_0), x) \sqrt{1 + \tfrac{\Delta t^2}{\Delta x^2}} \,\text {d} x. \end{aligned}$$
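The line integral behind the Dirac segment is easy to check numerically. The sketch below (invented Gaussian test function and segment) compares a midpoint Riemann sum of the t-parametrization against the closed form \(\int _L \varphi = \sqrt{5}\int _0^1 e^{-5t^2}\,\text {d}t = \tfrac{\sqrt{\pi }}{2}\,\mathrm {erf}(\sqrt{5})\):

```python
import math

# Line integral over the segment L from (t0,x0) = (0,0) to (t1,x1) = (1,2)
# of phi(t,x) = exp(-(t^2+x^2)): along L we have x = 2t, so
#   int_L phi = sqrt(5) * int_0^1 exp(-5 t^2) dt = (sqrt(pi)/2) * erf(sqrt(5)).
t0, x0, t1, x1 = 0.0, 0.0, 1.0, 2.0
dT, dX = t1 - t0, x1 - x0
phi = lambda t, x: math.exp(-(t * t + x * x))

# Midpoint Riemann sum of the t-parametrization with arclength factor
# sqrt(1 + (Delta x / Delta t)^2).
N = 100_000
h = (t1 - t0) / N
I_t = sum(phi(t0 + (k + 0.5) * h,
              x0 + dX / dT * ((k + 0.5) * h)) for k in range(N))
I_t *= math.sqrt(1.0 + (dX / dT) ** 2) * h

I_exact = math.sqrt(math.pi) / 2.0 * math.erf(math.sqrt(5.0))
assert abs(I_t - I_exact) < 1e-8
```

The arclength factor \(\sqrt{1+\Delta x^2/\Delta t^2}\) is precisely what the normalization constants in the shift operators of Definition 18 later cancel.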

Lemma 13

Assume \(T={\mathbb {R}}\), \(X={\mathbb {R}}\) and consider the unbounded line \(L=\left\{ (t_0+\lambda \Delta t,x_0+\lambda \Delta x)\,\left| \, \lambda \in {\mathbb {R}}\,\right. \!\!\right\} \) for some \(t_0\in T\), \(x_0\in X\) and \(\Delta t>0, \Delta x >0\). For the step function along L given by

$$\begin{aligned}\begin{aligned} H_L(t,x)&= {\left\{ \begin{array}{ll} 1,&{}\quad t-t_0 \ge \tfrac{\Delta t}{\Delta x}(x-x_0),\\ 0,&{}\quad \mathrm {otherwise}\end{array}\right. } ={\left\{ \begin{array}{ll} 1,&{}\quad x-x_0 \le \tfrac{\Delta x}{\Delta t}(t-t_0),\\ 0,&{}\quad \mathrm {otherwise}\end{array}\right. } \end{aligned} \end{aligned}$$

we have

$$\begin{aligned} \begin{aligned} \partial _t {H_L}_{\mathbb {D}}&= \frac{1}{\sqrt{1+(\frac{\Delta t}{\Delta x})^2}} \delta _L,&\quad \partial _x {H_L}_{\mathbb {D}}&= -\frac{1}{\sqrt{1+(\frac{\Delta x}{\Delta t})^2}} \delta _L \end{aligned} \end{aligned}$$

in particular,

$$\begin{aligned} \partial _t {H_L}_{\mathbb {D}}= -\tfrac{\Delta x}{\Delta t} \partial _x {H_L}_{\mathbb {D}}\end{aligned}$$

Proof

Recall the general definition of the partial derivative of a distribution D on \(T\times X\):

$$\begin{aligned} \left( \partial _t D\right) (\varphi ):= -D\left( \partial _t\varphi \right) \quad \text { and }\quad \left( \partial _x D\right) (\varphi ):= -D\left( \partial _x\varphi \right) . \end{aligned}$$

Hence, we have

$$\begin{aligned} \left( \partial _t {H_L}_{\mathbb {D}}\right) (\varphi )&= - \int _X\int _T H_L(t,x) \partial _t\varphi (t,x) \,\text {d} t\,\text {d} x\\&= - \int _{-\infty }^{\infty } \int _{t_0+\tfrac{\Delta t}{\Delta x}(x-x_0)}^\infty \partial _t\varphi (t,x) \,\text {d} t\,\text {d} x\\&= \int _{-\infty }^{\infty } \varphi (t_0+\tfrac{\Delta t}{\Delta x}(x-x_0),x) \,\text {d} x= \frac{1}{\sqrt{1+\left( \frac{\Delta t}{\Delta x}\right) ^2}} \int _L \varphi ,\\ \left( \partial _x {H_L}_{\mathbb {D}}\right) (\varphi )&= - \int _T\int _X H_L(t,x) \partial _x\varphi (t,x) \,\text {d} x\,\text {d} t\\&= - \int _{-\infty }^{\infty } \int _{-\infty }^{x_0+\tfrac{\Delta x}{\Delta t}(t-t_0)} \partial _x\varphi (t,x) \,\text {d} x\,\text {d} t\\&= -\int _{-\infty }^{\infty } \varphi (t,x_0+\tfrac{\Delta x}{\Delta t}(t-t_0)) \,\text {d} t= -\frac{1}{\sqrt{1+\left( \frac{\Delta x}{\Delta t}\right) ^2}} \int _L \varphi . \end{aligned}$$

From this, the claims follow. \(\square \)
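The identities of Lemma 13 can also be verified numerically. The following rough grid-based check uses an invented test function \(\varphi (t,x)=e^{-(t^2+x^2)}\) and the line \(x=2t\) (for which \(\int _L\varphi = \sqrt{\pi }\)); the tolerances reflect the discretization error at the jump of \(H_L\) along L:

```python
import numpy as np

# Numerical check of Lemma 13 for L: x = 2t (t0 = x0 = 0, Dt = 1, Dx = 2)
# and phi = exp(-(t^2+x^2)).  Pairing H_L with -d(phi)/dt resp. -d(phi)/dx
# via a Riemann sum should give
#   (d_t H_L)(phi) = + int_L phi / sqrt(1+(Dt/Dx)^2),
#   (d_x H_L)(phi) = - int_L phi / sqrt(1+(Dx/Dt)^2).
Dt, Dx = 1.0, 2.0
g = np.linspace(-6.0, 6.0, 1201)           # step 0.01 in both variables
t, x = np.meshgrid(g, g, indexing="ij")
step = g[1] - g[0]

phi = np.exp(-(t**2 + x**2))
dphi_dt = -2.0 * t * phi
dphi_dx = -2.0 * x * phi
H = (x <= (Dx / Dt) * t).astype(float)     # step function H_L

pair_t = -np.sum(H * dphi_dt) * step**2    # (d_t H_L)(phi)
pair_x = -np.sum(H * dphi_dx) * step**2    # (d_x H_L)(phi)

line = np.sqrt(np.pi)                      # int_L phi for this phi and line
assert abs(pair_t - line / np.sqrt(1 + (Dt / Dx) ** 2)) < 2e-2
assert abs(pair_x + line / np.sqrt(1 + (Dx / Dt) ** 2)) < 2e-2
assert abs(pair_t + (Dx / Dt) * pair_x) < 2e-2   # d_t H = -(Dx/Dt) d_x H
```

The last assertion checks the relation \(\partial _t {H_L}_{\mathbb {D}}= -\tfrac{\Delta x}{\Delta t}\partial _x {H_L}_{\mathbb {D}}\), i.e. the distributional statement that \(H_L\) solves the advection equation with speed \(\Delta x/\Delta t\).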

Corollary 14

Let \(P\subseteq T\times X\) be a polyhedral set with the line segments \(L_1,L_2,\ldots ,L_p\) as boundaries. Then, the partial derivatives of \({\chi _P}_{\mathbb {D}}\) are linear combinations of \(\delta _{L_1},\delta _{L_2},\ldots ,\delta _{L_p}\).

Definition 15

(Piecewise-smooth distributions in space and time) A distribution \(D:{\mathcal {C}}_0^\infty (T\times X)\rightarrow {\mathbb {R}}\) is called piecewise smooth if and only if there exists a piecewise-smooth function \(\beta :T\times X\rightarrow {\mathbb {R}}\) and a locally finite family of line segments \((L_j)_{j\in {\mathcal {J}}}\) in \(T\times X\) and coefficients \(\alpha _{j}^{k,\ell }\in {\mathbb {R}}\), \(k=0,1,\ldots ,n_{j}^t\), \(\ell =0,1,\ldots ,n_{j}^x\) such that

$$\begin{aligned} D = \beta _{\mathbb {D}}+ \sum _{j\in {\mathcal {J}}} \sum _{k,\ell } \alpha _{j}^{k,\ell } \partial _t^{(k)} \partial _x^{(\ell )} \delta _{L_j}. \end{aligned}$$
(5.3)

The space of piecewise-smooth distributions on \(T\times X\) is denoted by \({\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }(T\times X)\).

Lemma 16

Let \(D\in {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }(T\times X)\) be given by (5.3) and (5.1), then

  1. 1.

    \(\partial _t D \in {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }(T\times X)\) and \(\partial _x D \in {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }(T\times X)\);

  2. 2.

The restriction of D to any polyhedral set \(P\subseteq T\times X\) given by \(D_P := (\sum _{i\in {\mathcal {I}}} \chi _{P_i\cap P} \beta _i)_{\mathbb {D}}+ \sum _{j\in {\mathcal {J}}} \sum _{k,\ell } \alpha _{j}^{k,\ell } \partial _t^{(k)} \partial _x^{(\ell )} \delta _{L_j\cap P}\) is well defined and is again a piecewise-smooth distribution.

Proof

  1. 1.

It suffices to show that the (partial) derivative of (the distribution induced by) a piecewise-smooth function (in the sense of Definition 11) is a piecewise-smooth distribution. Since the sum in (5.1) is locally finite, it suffices to consider a single summand; and since multiplication with a smooth function is well defined for general distributions, it suffices to show that the partial derivatives of the indicator function \(\chi _P\) of any polyhedral set \(P\subseteq T\times X\) are piecewise-smooth distributions, which was already established in Corollary 14.

  2. 2.

First note that the intersection of two polyhedral sets is again a polyhedral set; hence, \(\sum _{i\in {\mathcal {I}}} \chi _{P_i\cap P} \beta _i\) is a piecewise-smooth function. Furthermore, the intersection of a line segment with a polyhedral set is again a line segment (or empty); hence, \(D_P\) is again a piecewise-smooth distribution (taking into account the local finiteness property, which ensures that the evaluation at any test function reduces to a finite sum, so no convergence issues arise from the restriction). \(\square \)

For any \((t,x)\in T\times X\), we want to define in the following the partial evaluations \(D(t^+,\cdot )\), \(D(t^-,\cdot )\), \(D(\cdot ,x^-)\), \(D(\cdot ,x^+)\) such that they are piecewise-smooth distributions on X or T, respectively, and such that (partial) differentiation commutes with the partial evaluation, i.e.

$$\begin{aligned} (\partial _x D)(t^\pm ,\cdot ) = D(t^\pm ,\cdot )'\quad \text {and}\quad (\partial _t D)(\cdot ,x^\pm ) = D(\cdot ,x^\pm )', \end{aligned}$$

where \((\cdot )'\) denotes the (scalar) differentiation in \({\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }(T)\) or \({\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }(X)\), respectively. Clearly, for piecewise-smooth functions, such an evaluation is trivially defined. Due to the commutativity requirement for differentiation and evaluation, it is also clear that it suffices to define the evaluation of Dirac segments. By Lemma 13, there is, however, only one possible choice:

$$\begin{aligned}&\delta _L(t^+,\cdot ) := {\left\{ \begin{array}{ll} \sqrt{1+\tfrac{\Delta x^2}{\Delta t^2}} \delta _{x_0+\tfrac{\Delta x}{\Delta t}(t-t_0)}, &{} t\in [t_0,t_1),\\ 0,&{} \text {otherwise,} \end{array}\right. } \\&\delta _L(t^-,\cdot ) := {\left\{ \begin{array}{ll} \sqrt{1+\tfrac{\Delta x^2}{\Delta t^2}} \delta _{x_0+\tfrac{\Delta x}{\Delta t}(t-t_0)}, &{}\quad t\in (t_0,t_1],\\ 0,&{}\quad \text {otherwise,} \end{array}\right. } \\&\delta _L(\cdot ,x^+) := {\left\{ \begin{array}{ll} \sqrt{1+\tfrac{\Delta t^2}{\Delta x^2}} \delta _{t_0+\tfrac{\Delta t}{\Delta x}(x-x_0)}, &{}\quad x\in [x_0,x_1),\\ 0,&{}\quad \text {otherwise,} \end{array}\right. } \\&\delta _L(\cdot ,x^-) := {\left\{ \begin{array}{ll} \sqrt{1+\tfrac{\Delta t^2}{\Delta x^2}} \delta _{t_0+\tfrac{\Delta t}{\Delta x}(x-x_0)}, &{}\quad x\in (x_0,x_1],\\ 0,&{}\quad \text {otherwise.} \end{array}\right. } \end{aligned}$$

Definition 17

Let \(D\in {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }(T\times X)\) be given by (5.3) and (5.1). Then, for any \((t,x)\in T\times X\),

$$\begin{aligned}\begin{aligned} D(t^\pm ,\cdot ) := \beta (t^\pm ,\cdot )_{\mathbb {D}}+ \sum _{j\in {\mathcal {J}}} \sum _{k,\ell } \alpha _{j}^{k,\ell } \partial _t^{(k)} \partial _x ^{(\ell )}\left( \delta _{L_j}(t^\pm ,\cdot )\right) ,\\ D(\cdot ,x^\pm ) := \beta (\cdot ,x^\pm )_{\mathbb {D}}+ \sum _{j\in {\mathcal {J}}} \sum _{k,\ell } \alpha _{j}^{k,\ell } \partial _t^{(k)} \partial _x ^{(\ell )} \left( \delta _{L_j}(\cdot ,x^\pm )\right) . \end{aligned} \end{aligned}$$

Distributional solutions for linear hyperbolic PDE

Before addressing linear systems, we consider the scalar advection equation

$$\begin{aligned} \partial _t v + \lambda \partial _x v = 0, \end{aligned}$$
(5.5)

where \(\lambda \in {\mathbb {R}}\) is the wave speed and the initial condition is prescribed as

$$\begin{aligned} \mathbf{I}.C. \quad v(t_0^+,\cdot ) =v^{t_0} , \end{aligned}$$
(5.6)

where \(v^{t_0}\in {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }((a,b))\), and the boundary condition is given as

$$\begin{aligned} \mathbf{B}.C. {\left\{ \begin{array}{ll} v(\cdot ,a^+)=v^a,&{}\quad \text { if } \lambda > 0, \\ v(\cdot ,b^-)=v^b,&{} \quad \text { if } \lambda < 0, \end{array}\right. } \end{aligned}$$
(5.7)

where \(v^a, v^b \in {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }((t_0,\infty ))\).

We now extend the definition of the shift operator for continuous functions in Definition 4 to distributions.

Definition 18

(Shift operator for Dirac impulses) Let \(T,X \subseteq {\mathbb {R}}\) be open intervals. The distributional time shift of a Dirac impulse \(\delta _{t^*}\in {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }(T)\) at \(t^*\in T\) with speed \(\lambda \) and initial position \(x_0\) is given by

$$\begin{aligned} {\mathcal {S}}_{\text {time}}^{\lambda ,x_0}\delta _{t^*} := \frac{1}{\sqrt{1+ \tfrac{1}{\lambda ^2} }}\ \delta _{L_{\text {time}}^{\lambda ,(t^*,x_0)}}\, , \end{aligned}$$

where \(L_{\text {time}}^{\lambda ,(t^*,x_0)}:=\left\{ (t,x)\,\left| \, t\in T,x = x_0 + \lambda (t-t^*) \in X\,\right. \!\!\right\} \); the distributional space shift operator of a Dirac impulse \(\delta _{x^*}\in {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }(X)\) at \(x^*\in X\) with speed \(\lambda \) and initial time \(t_0\) is given by

$$\begin{aligned} {\mathcal {S}}_{\text {space}}^{\lambda ,t_0}\delta _{x^*} := \frac{1}{\sqrt{1+ \lambda ^2 }}\ \delta _{L_{\text {space}}^{\lambda ,(t_0,x^*)}}\, , \end{aligned}$$

where \(L_{\text {space}}^{\lambda ,(t_0,x^*)}:=\left\{ (t,x)\,\left| \, x\in X, t = t_0+(x-x^*)/\lambda \in T\,\right. \!\!\right\} \) .

Note that in the definition of the shift operator for Dirac impulses, the factors \(1/\sqrt{1+1/\lambda ^2}\) and \(1/\sqrt{1+\lambda ^2}\) are necessary to obtain the following equalities:

$$\begin{aligned} \left( {\mathcal {S}}_{\text {time}}^{\lambda ,x_0}\delta _{t^*}\right) (\cdot ,x^\pm ) = \delta _{t^* + (x-x_0)/\lambda }\text { and } \left( {\mathcal {S}}_{\text {space}}^{\lambda ,t_0}\delta _{x^*}\right) (t^\pm ,\cdot ) = \delta _{x^*+\lambda (t-t_0)}. \end{aligned}$$

In particular,

$$\begin{aligned} \left( {\mathcal {S}}_{\text {time}}^{\lambda ,x_0}\delta _{t^*}\right) (\cdot ,x_0^\pm ) = \delta _{t^*}\text { and } \left( {\mathcal {S}}_{\text {space}}^{\lambda ,t_0}\delta _{x^*}\right) (t_0^\pm ,\cdot ) = \delta _{x^*}. \end{aligned}$$

An illustration of the time- and space shift of Dirac impulses can also be found in Fig. 5.

Fig. 5
figure 5

PDE domain \((t_0,\infty )\times (a,b)\), \(\lambda >0\). An example of Dirac impulses prescribed in the initial and boundary conditions at certain locations in space, \(x_m^*\), \(m\in \{1,2,\ldots ,7 \}\), and in time, \(t_i^*\), \(i\in \{1,2,3,4 \}\), respectively. The green and pink lines correspond to the Dirac segments on these lines and show how the Dirac impulses are shifted within the domain

Definition 19

(Shift operator for piecewise-smooth distributions) Let \(D^T\in {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }(T)\) and \(D^X\in {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }(X)\) be given by \(D^T = d^T_{\mathbb {D}}+ \sum _{t^*\in T^*} D^T_{t^*}\) and \(D^X = d^X_{\mathbb {D}}+ \sum _{x^*\in X^*} D^X_{x^*}\), where \(d^T\in {\mathcal {C}}_{\text {pw}}^{\infty }(T)\), \(d^X\in {\mathcal {C}}_{\text {pw}}^{\infty }(X)\), \(T^*\subset T\) and \(X^*\subset X\) are locally finite sets, and for each \(t^*\in T^*\) and \(x^*\in X^*\) we have \(n^{t^*}\in {\mathbb {N}}\) and \(n^{x^*}\in {\mathbb {N}}\), \(c_i^{t^*}\in {\mathbb {R}}\), \(i=0,1,\ldots ,n^{t^*}\) and \(c_j^{x^*}\in {\mathbb {R}}\), \(j=0,1,\ldots ,n^{x^*}\) such that

$$\begin{aligned} D^T_{t^*} = \sum _{i=0}^{n^{t^*}} c_i^{t^*} \partial _t^{(i)}\delta _{t^*}\ \text { and } \ D^X_{x^*} = \sum _{j=0}^{n^{x^*}} c_j^{x^*} \partial _x ^{(j)} \delta _{x^*}. \end{aligned}$$

Then, the distributional time shift of \(D^T\) with speed \(\lambda \) and initial position \(x_0\) is given by

$$\begin{aligned} {\mathcal {S}}^{\lambda ,x_0}_{\text {time}} D^T := ({\mathcal {S}}^{\lambda ,x_0}_{\text {time}} d^T)_{\mathbb {D}}+ \sum _{t^*\in T^*} \sum _{i=0}^{n^{t^*}} c_i^{t^*} \partial _t^{(i)} {\mathcal {S}}^{\lambda ,x_0}_{\text {time}}\delta _{t^*}, \end{aligned}$$

and the distributional space shift of \(D^X\) with speed \(\lambda \) and initial time \(t_0\) is given by

$$\begin{aligned} {\mathcal {S}}^{\lambda ,t_0}_{\text {space}} D^X := ({\mathcal {S}}^{\lambda ,t_0}_{\text {space}} d^X)_{\mathbb {D}}+ \sum _{x^*\in X^*} \sum _{j=0}^{n^{x^*}} c_j^{x^*} \partial _x^{(j)} {\mathcal {S}}^{\lambda ,t_0}_{\text {space}}\delta _{x^*}. \end{aligned}$$

Assume \(\lambda >0\) and let \(v^{t_0}\) and \(v^a\) be given as in (5.6) and (5.7), respectively. Below, we formulate the solution to Eq. (5.5) in terms of the distributional space/time shift operators \({\mathcal {S}}_{\text {space}}^{\lambda ,t_0}\), \({\mathcal {S}}_{\text {time}}^{\lambda ,a}\). As seen in Sect. 3, the solution is constant along the characteristics; hence, exploiting the distributional space/time shift operators given in Definition 19, it can be written as

$$\begin{aligned} v(t^\pm , \cdot )&=\left( \left( \mathbb {1}_{\{x-a\ge \lambda (t-t_0)\}}\right) _{\mathbb {D}}{\mathcal {S}}_{\text {space}}^{\lambda ,t_0}v^{t_0}+\left( \mathbb {1}_{\{x-a < \lambda (t-t_0)\}}\right) _{\mathbb {D}}{\mathcal {S}}_{\text {time}}^{\lambda ,a}v^a \right) (t^\pm ,\cdot ), \end{aligned}$$
(5.8a)
$$\begin{aligned} v(\cdot , x^\pm )&=\left( \left( \mathbb {1}_{\{x-a\ge \lambda (t-t_0)\}}\right) _{\mathbb {D}}{\mathcal {S}}_{\text {space}}^{\lambda ,t_0}v^{t_0}+\left( \mathbb {1}_{\{x-a < \lambda (t-t_0)\}}\right) _{\mathbb {D}}{\mathcal {S}}_{\text {time}}^{\lambda ,a}v^a \right) (\cdot ,x^\pm ). \end{aligned}$$
(5.8b)

Then, the solution to the differential Eq. (5.5) at the right boundary with \(\lambda >0\) takes the form:

$$\begin{aligned} v(\cdot , b^-)&=\left( \left( \mathbb {1}_{\left( t_0,t_0 + \tfrac{b-a}{\lambda } \right) }\right) _{\mathbb {D}}{\mathcal {S}}_{\text {space}}^{\lambda ,t_0} v^{t_0}+\left( \mathbb {1}_{\left( t_0 + \tfrac{b-a}{\lambda }, \infty \right) }\right) _{\mathbb {D}}{\mathcal {S}}_{\text {time}}^{\lambda ,a}v^a \right) (\cdot ,b^-), \end{aligned}$$

which can be put in the form

$$\begin{aligned} v(\cdot ,b^-)=\left( {\mathcal {S}}_{\text {time}}^{\lambda ,a} v^a\right) (\cdot ,b^-), \end{aligned}$$
(5.9)

where \(\left( {\mathcal {S}}_{\text {time}}^{\lambda ,a}v^a\right) (\cdot ,b^-):=\left( {\mathcal {S}}_{\text {space}}^{\lambda ,t_0} v^{t_0}\right) (\cdot ,b^-)\) on \((t_0, t_0+\tfrac{b-a}{\lambda })\) with the convention that \(v^{t_0}=0\) outside (ab).
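The transport relation (5.9) can be observed numerically: boundary data injected at \(x=a\) reappear at \(x=b\) after the delay \((b-a)/\lambda \). The following first-order upwind sketch uses a narrow Gaussian as a mollified stand-in for a Dirac impulse at the boundary (all parameter values are illustrative):

```python
import numpy as np

# First-order upwind scheme for v_t + lam*v_x = 0 on (a, b), lam > 0,
# zero initial data, a smooth pulse fed in at the left boundary x = a.
# By (5.9), the pulse should reach x = b at t ~ t_peak + (b - a)/lam.
a, b, lam = 0.0, 1.0, 2.0
nx = 400
dx = (b - a) / nx
dt = 0.5 * dx / lam                        # CFL number 0.5 (stable)
t_peak = 0.1                               # time at which the boundary pulse peaks
v_a = lambda t: np.exp(-((t - t_peak) / 0.02) ** 2)

v = np.zeros(nx + 1)
t, trace_t, trace_vb = 0.0, [], []
while t < 0.8:
    v[1:] = v[1:] - lam * dt / dx * (v[1:] - v[:-1])   # upwind update
    v[0] = v_a(t + dt)                                 # boundary condition at x = a
    t += dt
    trace_t.append(t)
    trace_vb.append(v[-1])                             # trace v(., b^-)

arrival = trace_t[int(np.argmax(trace_vb))]
expected = t_peak + (b - a) / lam                      # = 0.6
assert abs(arrival - expected) < 0.05                  # numerical diffusion smears the pulse
```

The scheme cannot represent a genuine Dirac impulse, but the mollified pulse travels with the correct speed, matching the distributional statement that the boundary trace at \(x=b\) is the time-shifted boundary datum from \(x=a\).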

Now assume \(\lambda <0\) and let \(v^{t_0}\in {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }(X)\) and \(v^b\in {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }(T)\) be given as in (5.6) and (5.7), respectively. The solution formulae for Eq. (5.5) with \(\lambda <0\) are now of the form:

$$\begin{aligned} v(t^\pm ,\cdot )&=\left( \left( \mathbb {1}_{\{x-b\le \lambda (t-t_0)\}}\right) _{\mathbb {D}}{\mathcal {S}}_{\text {space}}^{\lambda ,t_0} v^{t_0}+\left( \mathbb {1}_{\{x-b> \lambda (t-t_0)\}}\right) _{\mathbb {D}}{\mathcal {S}}_{\text {time}}^{\lambda ,b}v^b\right) (t^\pm ,\cdot ),\\ v(\cdot , x^\pm )&=\left( \left( \mathbb {1}_{\{x-b\le \lambda (t-t_0)\}}\right) _{\mathbb {D}}{\mathcal {S}}_{\text {space}}^{\lambda ,t_0}v^{t_0}+\left( \mathbb {1}_{\{x-b > \lambda (t-t_0)\}}\right) _{\mathbb {D}}{\mathcal {S}}_{\text {time}}^{\lambda ,b}v^b \right) (\cdot ,x^\pm ). \end{aligned}$$

Similarly, the solution to the differential Eq. (5.5) with \(\lambda <0\) at the left boundary can be written as:

$$\begin{aligned} v(\cdot , a^+)&=\left( \left( \mathbb {1}_{\left( t_0,t_0 + \tfrac{b-a}{-\lambda } \right) }\right) _{\mathbb {D}}{\mathcal {S}}_{\text {space}}^{\lambda ,t_0}v^{t_0}+\left( \mathbb {1}_{\left( t_0 + \tfrac{b-a}{-\lambda }, \infty \right) }\right) _{\mathbb {D}}{\mathcal {S}}_{\text {time}}^{\lambda ,b}v^b\right) (\cdot ,a^+), \end{aligned}$$

which is written as

$$\begin{aligned} v(\cdot ,a^+)=\left( {\mathcal {S}}_{\text {time}}^{\lambda ,b} v^b\right) (\cdot ,a^+), \end{aligned}$$
(5.10)

where \(\left( {\mathcal {S}}_{\text {time}}^{\lambda ,b} v^b\right) (\cdot ,a^+):=\left( {\mathcal {S}}_{\text {space}}^{\lambda ,t_0} v^{t_0}\right) (\cdot ,a^+)\) on \((t_0, t_0+\tfrac{b-a}{-\lambda })\) with the convention that \(v^{t_0}=0\) outside (ab).

As a system of PDEs with boundary conditions, we consider

$$\begin{aligned}&\partial _t {\mathbf {u}}+ {\mathbf {A}} \partial _x {\mathbf {u}}&= {\mathbf {0}}, \end{aligned}$$
(5.11a)
$$\begin{aligned} \mathbf{I}.C.&\quad {\mathbf {u}}(t_0^+,\cdot )&= {\mathbf {u}}^{t_0}, \end{aligned}$$
(5.11b)
$$\begin{aligned} \mathbf{B}.C.&\quad {\mathbf {P}}_a {\mathbf {u}}(\cdot ,a^+) = {\mathbf {b}}^a, \quad&\text { and } \quad {\mathbf {P}}_b {\mathbf {u}}(\cdot ,b^-) = {\mathbf {b}}^b, \end{aligned}$$
(5.11c)

with the unknown \({\mathbf {u}}\in \left( {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }\left( \left( t_0,\infty \right) \times (a,b)\right) \right) ^n\), the initial condition \({\mathbf {u}}^{t_0} \in \left( {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }\left( a,b\right) \right) ^n\) and \({\mathbf {b}}^a \in \left( {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }\left( \left( t_0,\infty \right) \right) \right) ^{r}\), \({\mathbf {b}}^b \in \left( {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }\left( \left( t_0,\infty \right) \right) \right) ^{n-r}\) are left and right boundary conditions with \({\mathbf {P}}_a\in {\mathbb {R}}^{r\times n},\) and \({\mathbf {P}}_b\in {\mathbb {R}}^{(n-r)\times n}\).
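For concreteness, the decomposition of (5.11a) into characteristic variables can be computed numerically; the \(2\times 2\) matrix below is an invented example with one negative and one positive wave speed (not the telegraph-equation matrix of the paper):

```python
import numpy as np

# Characteristic decomposition u = R v for (5.11a): A = R Lambda R^{-1}
# with real distinct nonzero eigenvalues, ordered negative (left-going)
# before positive (right-going).
A = np.array([[0.0, 4.0],
              [1.0, 0.0]])                  # eigenvalues -2 and +2

lam, R = np.linalg.eig(A)
order = np.argsort(lam)
lam, R = lam[order], R[:, order]

assert np.allclose(R @ np.diag(lam) @ np.linalg.inv(R), A)
assert lam[0] < 0 < lam[1]                  # K^- = {1}, K^+ = {2}, so r = 1

# v = R^{-1} u decouples (5.11a) into the scalar advection equations
# d_t v_k + lam_k d_x v_k = 0 treated in (5.5)-(5.10).
u0 = np.array([1.0, 3.0])
v0 = np.linalg.inv(R) @ u0
assert np.allclose(R @ v0, u0)
```

With this ordering, the first component of \({\mathbf {v}}\) is the left-going wave \({\mathbf {v}}_-\) (needing a boundary condition at \(x=b\)) and the second the right-going wave \({\mathbf {v}}_+\) (needing one at \(x=a\)), matching the split of the boundary conditions below.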

As in Sect. 3 and with Assumption 1, the system is decomposed into its distributional characteristic variables \({\mathbf {v}}\in \left( {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }\left( \left( t_0,\infty \right) \times \left( a,b\right) \right) \right) ^n\) with the initial condition

(5.12)

where \({\mathbf {v}}_-^{t_0} \in \left( {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }\left( a,b\right) \right) ^{n-r}\), \({\mathbf {v}}_+^{t_0} \in \left( {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }\left( a,b\right) \right) ^{r}\) and the boundary conditions take the form:

$$\begin{aligned} {\mathbf {M}}{\mathbf {v}}(\cdot ,a^+) = {\mathbf {b}}^a, \quad \text { and } \quad {\mathbf {N}}{\mathbf {v}}(\cdot ,b^-) = {\mathbf {b}}^b, \end{aligned}$$

where the boundary conditions for the right- and left-going waves can be expressed as:

$$\begin{aligned} {\mathbf {v}}_+ (\cdot ,a^+)&= \widetilde{{\mathbf {b}}}_a, \end{aligned}$$
(5.13a)
$$\begin{aligned} {\mathbf {v}}_- (\cdot ,b^-)&= \widetilde{{\mathbf {b}}}_b, \end{aligned}$$
(5.13b)

with \(\widetilde{{\mathbf {b}}}_a\in \left( {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }(T)\right) ^r, \ \widetilde{{\mathbf {b}}}_b\in \left( {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }(T)\right) ^{n-r}\), \({\mathbf {v}}=({\mathbf {v}}^-,{\mathbf {v}}^+)^\top \) and \({\mathbf {v}}^{t_0}=({\mathbf {v}}^{t_0}_-,{\mathbf {v}}^{t_0}_+)^\top \), where \({\mathbf {v}}_- \in \left( {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }\left( (t_0,\infty ) \times (a,b) \right) \right) ^{n-r}\) stands for the left-going waves, whilst \({\mathbf {v}}_+ \in \left( {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }\left( ( t_0,\infty ) \times (a,b) \right) \right) ^{r}\) stands for the right-going waves.
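To make the characteristic decomposition concrete, the following sketch diagonalizes a coefficient matrix numerically and splits the state into left- and right-going components by the sign of the eigenvalues; the matrix `A` and the state `u` are hypothetical illustration data, not taken from the paper.

```python
import numpy as np

# Diagonalize a hypothetical 2x2 coefficient matrix A = R Lam R^{-1} and
# split the state into characteristic variables by the sign of the
# eigenvalues (negative: left-going, set K^-; positive: right-going, K^+).
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])       # eigenvalues +1 and -1

lam, R = np.linalg.eig(A)        # columns of R are the eigenvectors
Rinv = np.linalg.inv(R)

K_minus = np.where(lam < 0)[0]   # indices of left-going waves
K_plus = np.where(lam > 0)[0]    # indices of right-going waves

u = np.array([1.0, 0.0])         # a sample state vector
v = Rinv @ u                     # characteristic variables v = R^{-1} u
u_back = R @ v                   # inverse change of coordinates
```

The decomposition and its inverse are exact up to floating-point round-off, so `u_back` recovers `u`.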

The distributional solution \({\mathbf {v}}\) to the decomposed system of (5.11a) (obtained analogously to (3.3a)) with the initial condition (5.12) and boundary conditions (5.13a)–(5.13b) can be written in terms of the solutions of the left- and right-going waves

(5.14)

The distributional solution \({\mathbf {u}}\) to the IBVP (5.11a)–(5.11c) is now obtained via inversion of the distributional characteristic variables, \({\mathbf {u}}={\mathbf {R}}{\mathbf {v}}\). Let \(\mathbf {\Pi }_p:= {\mathbf {R}}\; \mathrm {diag}({\mathbf {e}}_p)\; {\mathbf {R}}^{-1}\), where \({\mathbf {e}}_p \in {\mathbb {R}}^n\) is the p-th unit vector. The solution is

$$\begin{aligned} {\mathbf {u}}&=\sum _{i \in K^-} \mathbf {\Pi }_i \left( \left( \mathbb {1}_{\{x-b\le \lambda _i (t-t_0)\}}\right) _{\mathbb {D}}{\mathcal {S}}_{\text {space}}^{\lambda _i,t_0} {\mathbf {u}}^{t_0} + \left( \mathbb {1}_{\{x-b > \lambda _i (t-t_0)\}}\right) _{\mathbb {D}}{\mathcal {S}}_{\text {time}}^{\lambda _i,b} {\mathbf {u}}^b \right) \\&\quad + \sum _{j\in K^+} \mathbf {\Pi }_j \left( \left( \mathbb {1}_{\{x-a\ge \lambda _j (t-t_0)\}}\right) _{\mathbb {D}}{\mathcal {S}}_{\text {space}}^{\lambda _j,t_0} {\mathbf {u}}^{t_0}+\left( \mathbb {1}_{\{x-a < \lambda _j (t-t_0)\}}\right) _{\mathbb {D}}{\mathcal {S}}_{\text {time}}^{\lambda _j,a} {\mathbf {u}}^a \right) , \end{aligned}$$

or, in the compact form

$$\begin{aligned} {\mathbf {u}}=\sum _{i \in K^-} \mathbf {\Pi }_i \left( {\mathcal {S}}_{\text {time}}^{\lambda _i,b} {\mathbf {u}}^b \right) + \sum _{j\in K^+} \mathbf {\Pi }_j \left( {\mathcal {S}}_{\text {time}}^{\lambda _j,a} {\mathbf {u}}^a \right) , \end{aligned}$$
(5.15)

where

$$\begin{aligned} {\left\{ \begin{array}{ll} \left( {\mathcal {S}}_{\text {time}}^{\Lambda ^-,b} {\mathbf {u}}^b\right) := \sum _{i\in K^-} \mathbf {\Pi }_i \left( {\mathcal {S}}_{\text {space}}^{\lambda _i,t_0} {\mathbf {u}}^{t_0}\right) , &{} \text { on } (t_0, t_0+\tfrac{b-x}{-\lambda _i}),\\ \left( {\mathcal {S}}_{\text {time}}^{\Lambda ^+,a} {\mathbf {u}}^a\right) :=\sum _{j\in K^+} \mathbf {\Pi }_j \left( {\mathcal {S}}_{\text {space}}^{\lambda _j,t_0} {\mathbf {u}}^{t_0} \right) , &{} \text { on } (t_0,t_0+\tfrac{x-a}{\lambda _j}), \end{array}\right. } \end{aligned}$$
(5.16)

with the convention that \({\mathbf {u}}^{t_0} = 0\) outside \((a,b)\).

At the left and right end of the spatial domain, the distributional solution \({\mathbf {u}}\) is as follows:

(5.17)

where \({\mathbf {u}}^a:={\mathbf {u}}(\cdot ,a^+)\), \({\mathbf {u}}^b:={\mathbf {u}}(\cdot ,b^-)\) and

$$\begin{aligned} {\left\{ \begin{array}{ll} \left( {\mathcal {S}}_{\text {time}}^{\Lambda ^-,b} {\mathbf {u}}^b\right) (\cdot , a^+) := \sum _{i\in K^-} \mathbf {\Pi }_i \left( {\mathcal {S}}_{\text {space}}^{\lambda _i,t_0} {\mathbf {u}}^{t_0}\right) (\cdot , a^+), &{}\quad \text { on } (t_0, t_0+\tfrac{b-a}{-\lambda _i}),\\ \left( {\mathcal {S}}_{\text {time}}^{\Lambda ^+,a} {\mathbf {u}}^a\right) (\cdot , b^-):=\sum _{j\in K^+} \mathbf {\Pi }_j \left( {\mathcal {S}}_{\text {space}}^{\lambda _j,t_0} {\mathbf {u}}^{t_0} \right) (\cdot , b^-), &{}\quad \text { on } (t_0,t_0+\tfrac{b-a}{\lambda _j}), \end{array}\right. } \end{aligned}$$
(5.18)

with the convention that \({\mathbf {u}}^{t_0} = 0\) outside \((a,b)\).

Remark 20

Similarly to the 1D time shift defined in Remark 6, we define the 1D distributional time shift \({\mathcal {S}}_{\text {time}}^{\tau }\) on \({\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }(T)\). Let \(D \in {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }(T)\) be given by \(D = d_{\mathbb {D}}+ \sum _{t^*\in T^*} D_{t^*}\), where \(d\in {\mathcal {C}}_{\text {pw}}^{\infty }(T)\), \(T^*\subset T\) is a locally finite set, and for each \(t^*\in T^*\) there are \(n^{t^*}\in {\mathbb {N}}\) and \(c_i^{t^*}\in {\mathbb {R}}\), \(i=0,1,\ldots ,n^{t^*}\), such that

$$\begin{aligned} D_{t^*} = \sum _{i=0}^{n^{t^*}} c_i^{t^*} \partial _t^{(i)} \delta _{t^*} \ . \end{aligned}$$

Then, the 1D distributional time shift of D is given by

$$\begin{aligned} {\mathcal {S}}^{\tau }_{\text {time}} D := ({\mathcal {S}}^{\tau }_{\text {time}} d)_{\mathbb {D}}+ \sum _{t^*\in T^*} \sum _{i=0}^{n^{t^*}} c_i^{t^*} \partial _t ^{(i)} {\mathcal {S}}^{\tau }_{\text {time}}\delta _{t^*}. \end{aligned}$$
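The representation used in Remark 20 suggests a simple data structure: a regular function part plus finitely many impulsive parts, each a list of coefficients of a Dirac impulse and its derivatives. The following sketch shifts both parts by \(\tau\); the class and function names (`PwSmoothDistribution`, `time_shift`) are hypothetical and chosen here for illustration only.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class PwSmoothDistribution:
    regular: Callable[[float], float]            # the function part d(t)
    # impulses: t* -> [c_0, ..., c_n], coefficients of delta_{t*} and its derivatives
    impulses: Dict[float, List[float]] = field(default_factory=dict)

def time_shift(D: PwSmoothDistribution, tau: float) -> PwSmoothDistribution:
    """Shift D by tau: compose the regular part with t - tau, move each impulse."""
    shifted_regular = lambda t, d=D.regular: d(t - tau)
    shifted_impulses = {t_star + tau: coeffs[:] for t_star, coeffs in D.impulses.items()}
    return PwSmoothDistribution(shifted_regular, shifted_impulses)

# a step function plus 2*delta_0 (an impulse of strength 2 at t* = 0)
D = PwSmoothDistribution(lambda t: 0.0 if t < 0 else 1.0, {0.0: [2.0]})
E = time_shift(D, 1.5)
```

After the shift, the jump and the impulse both sit at \(t = 1.5\), mirroring how the shift acts separately on the regular and impulsive parts in the remark.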

Remark 21

Equation (5.17) can now be written in compressed form in terms of \({\mathbf {u}}_{ab} \in \left( {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }(T)\right) ^{2n}\) and the 1D distributional time shift \({\mathcal {S}}_{\text {time}}^{\tau }\) as

$$\begin{aligned} {\mathbf {u}}_{ab} = {\mathbf {F}} {\left[ {\begin{matrix} {\mathbf {b}}^a \\ {\mathbf {b}}^b \end{matrix}}\right] } + \sum _{k=1}^n {\mathbf {D}}_k {\mathcal {S}}_{\text {time}}^{\tau _k} {\mathbf {u}}_{ab}, \end{aligned}$$
(5.19)

where \(\tau _k= \tfrac{b-a}{\mathop {\mathrm {sgn}}(\lambda _k) \lambda _k}\), \({\mathbf {F}}= {\left[ {\begin{matrix} {\mathbf {F}}_a&{}{\mathbf {0}}_{n, n-r}\\ {\mathbf {0}}_{n, r}&{} {\mathbf {F}}_b \end{matrix}}\right] },\) \({\mathbf {D}}_k = {\left[ {\begin{matrix} {\mathbf {0}}_{n, n} &{}\quad {\mathbf {D}}_k^{ab} \\ {\mathbf {D}}_k^{ba} &{}\quad {\mathbf {0}}_{n, n} \end{matrix}}\right] }\), \(k=1,2,\ldots , n\), and the matrices \({\mathbf {F}}_a, {\mathbf {F}}_b, {\mathbf {D}}_k^{ab}, {\mathbf {D}}_k^{ba}\) are given as in (3.11). The extension of the initial condition to boundary data for negative times is described in (5.18). Hence, the equality (5.19) follows from the equations in (5.17).
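The structure of (5.19) is that of a causal fixed-point relation: since every delay \(\tau_k\) is positive, the value at time \(t\) involves only already-determined values at \(t-\tau_k\). A scalar toy instance of such a relation (all data hypothetical, not the paper's system) can therefore be evaluated on a grid by a single forward sweep:

```python
import numpy as np

# Causal evaluation of a scalar delay relation of the form (5.19),
#     u(t) = f(t) + d * u(t - tau),   u = 0 for t < 0,
# on a uniform grid.  Since tau > 0, the right-hand side only uses
# values that have already been computed.
tau, d = 0.5, 0.25
dt = 0.01
t = np.arange(0.0, 3.0, dt)
f = np.sin(t)                    # hypothetical forcing (plays the role of F*b)
shift = int(round(tau / dt))     # grid points per delay

u = np.zeros_like(t)
for n in range(len(t)):
    delayed = u[n - shift] if n >= shift else 0.0
    u[n] = f[n] + d * delayed
```

For \(t < \tau\) the delayed term vanishes and `u` coincides with `f`, just as the solution on the first interval is determined by the boundary data alone.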

So far, we have constructed a piecewise-smooth distributional solution to the hyperbolic PDE (5.11a). This solution is unique by the following theorem.

Theorem 22

(Uniqueness of the distributional solution) The solutions to (5.11) are unique in the space of piecewise-smooth distributions.

Proof

As the PDE is linear, it is sufficient to show that \(u \equiv 0\) is the only solution to the problem with zero initial and boundary conditions. First, we verify that \(\delta _L\) can be a solution to the i-th characteristic component of the PDE only if the line segment L has slope \(\lambda _i\) and crosses the boundary or the initial time line:

$$\begin{aligned} \left( \partial _t\delta _L\right) \left( \varphi \right)&= \int _{t_0}^{t_1}\partial _1 \varphi (t,x_0+\tfrac{\Delta x}{\Delta t}(t-t_0)) \sqrt{1+\tfrac{\Delta x^2}{\Delta t^2}}\,\text {d} t\\ \left( \partial _x\delta _L\right) \left( \varphi \right)&= \int _{t_0}^{t_1}\partial _2 \varphi (t,x_0+\tfrac{\Delta x}{\Delta t}(t-t_0)) \sqrt{1+\tfrac{\Delta x^2}{\Delta t^2}}\,\text {d} t\\ \left( \partial _t\delta _L+\lambda _i \partial _x\delta _L\right) (\varphi )&= \sqrt{1+\tfrac{\Delta x^2}{\Delta t^2}} \int _{t_0}^{t_1} \tfrac{\text {d}}{\text {d} t}\varphi (t,x_0+\tfrac{\Delta x}{\Delta t}(t-t_0))\\&\qquad + \left( \lambda _i - \tfrac{\Delta x}{\Delta t}\right) \partial _2 \varphi (t,x_0 +\tfrac{\Delta x}{\Delta t}(t-t_0)) \,\text {d} t\\&= \sqrt{1+\tfrac{\Delta x^2}{\Delta t^2}} \Big (\varphi (t_1,x_1)-\varphi (t_0,x_0) \\&\qquad + \left( \lambda _i - \tfrac{\Delta x}{\Delta t}\right) \int _{t_0}^{t_1}\partial _2 \varphi (t,x_0+\tfrac{\Delta x}{\Delta t}(t-t_0)) \,\text {d} t\Big ) \end{aligned}$$

This expression is zero for all \(\varphi \) only if \(\lambda _i = \frac{\Delta x}{\Delta t}\) and \((t_0,x_0)\) as well as \((t_1,x_1)\) lie outside the support of \(\varphi \). Thus, the line has to have the slope given by the characteristic speed, and it has to fully cross the considered domain. But at the points where the line hits the initial time line or the boundaries of the domain, it has to satisfy the imposed conditions. Therefore, the strength of the impulse is zero, i.e. the factor c for \(\delta _L\) is zero. Due to linearity, the above computation extends directly to any combination of spatial and temporal derivatives of \(\delta _L\), which concludes the proof.\(\square \)

The coupled system

Existence and uniqueness of the coupled system

In this section, we consider the switched DAE (2.1d) with output \({\mathbf {y}}_D\) given by (2.1e) together with the boundary behaviour \({\mathbf {u}}_{ab} := \left( {\mathbf {u}}(\cdot ,a)^\top ,{\mathbf {u}}(\cdot ,b)^\top \right) ^\top \) of the PDE (2.1a). Based on the results from the previous section, we can now relate the solution \({\mathbf {w}}\) and \({\mathbf {u}}_{ab}\) of the coupled system one-to-one with the solution of a switched delay DAE as follows.

Theorem 23

Consider the coupled system (2.1) satisfying Assumptions 1 and 2. Then, \({\mathbf {z}}:=({\mathbf {w}}^\top ,{\mathbf {u}}_{ab}^\top )^\top \) is a solution of the coupled system if and only if \({\mathbf {z}}\) solves the switched delay DAE (swDDAE)

$$\begin{aligned} \begin{bmatrix} {\mathbf {E}}_\sigma &{}{\mathbf {0}}\\ {\mathbf {0}}&{}{\mathbf {0}} \end{bmatrix} {\dot{{\mathbf {z}}}} = \begin{bmatrix} {\mathbf {H}}_\sigma &{}{\mathbf {B}}_\sigma {\mathbf {C}}_P\\ {\mathbf {F}} {{\mathbf {C}}_D}_\sigma &{}-{\mathbf {I}} \end{bmatrix} {\mathbf {z}}+ \sum _{k=1}^{d} \left( \begin{bmatrix} {\mathbf {0}}&{}{\mathbf {0}}\\ {\mathbf {0}}&{}{\mathbf {D}}_k \end{bmatrix} {\mathcal {S}}_{\text {time}}^{\tau _k} {\mathbf {z}}\right) + \begin{bmatrix} {\mathbf {f}}_\sigma \\ {\mathbf {0}} \end{bmatrix}, \end{aligned}$$
(6.1)

where \(\tau _k\), \({\mathbf {u}}_{ab}\) and the matrices \({\mathbf {F}}\) and \({\mathbf {D}}_k\) are given as in Remark 21 and their components as in (3.11).

Proof

\((\Leftarrow )\) Assume that \({\mathbf {z}}=({\mathbf {w}}^\top ,{\mathbf {u}}_{ab}^\top )^\top \) solves the swDDAE (6.1). From the swDDAE (6.1), we obtain

$$\begin{aligned} {\mathbf {E}}_\sigma {\dot{{\mathbf {w}}}} = {\mathbf {H}}_\sigma {\mathbf {w}}+ {\mathbf {B}}_\sigma {\mathbf {C}}_P {\mathbf {u}}_{ab} + {\mathbf {f}}_\sigma , \end{aligned}$$

for which \({\mathbf {w}}\) is the solution with the input \({\mathbf {q}}= {\mathbf {C}}_P {\mathbf {u}}_{ab} \), as shown in Theorem 10. Moreover, from the swDDAE (6.1), we obtain

$$\begin{aligned} {\mathbf {u}}(\cdot , a^+)&= {\mathbf {F}}_a {\mathbf {b}}^a +\sum _{k=1}^{n} {\mathbf {D}}_k^{ab} \left( {\mathcal {S}}_{\text {time}}^{\tau _k} {\mathbf {u}}^b \right) \\&= {\mathbf {F}}_a {\mathbf {b}}^a +\sum _{k=1}^{n} {\mathbf {D}}_k^{ab} \left( {\mathcal {S}}_{\text {time}}^{\lambda _k,b} {\mathbf {u}}^b \right) (\cdot , a^+),\\ {\mathbf {u}}(\cdot , b^-)&= {\mathbf {F}}_b {\mathbf {b}}^b +\sum _{k=1}^{n} {\mathbf {D}}_k^{ba} \left( {\mathcal {S}}_{\text {time}}^{\tau _k} {\mathbf {u}}^a \right) \\&= {\mathbf {F}}_b {\mathbf {b}}^b +\sum _{k=1}^{n} {\mathbf {D}}_k^{ba} \left( {\mathcal {S}}_{\text {time}}^{\lambda _k,a} {\mathbf {u}}^a \right) (\cdot , b^-), \end{aligned}$$

where \({\left[ {\begin{matrix} {\mathbf {b}}^a \\ {\mathbf {b}}^b \end{matrix}}\right] }={\mathbf {C}}_{D_\sigma }{\mathbf {w}}\), which together yield the solution \({\mathbf {u}}\)

$$\begin{aligned} {\mathbf {u}}&= \sum _{i \in K^-} \mathbf {\Pi }_i \left( {\mathcal {S}}_{\text {time}}^{\lambda _i,b} {\mathbf {u}}^b \right) + \sum _{j\in K^+} \mathbf {\Pi }_j \left( {\mathcal {S}}_{\text {time}}^{\lambda _j,a} {\mathbf {u}}^a \right) , \end{aligned}$$

where

$$\begin{aligned} {\left\{ \begin{array}{ll} \left( {\mathcal {S}}_{\text {time}}^{\Lambda ^-,b} {\mathbf {u}}^b\right) := \sum _{i\in K^-} \mathbf {\Pi }_i \left( {\mathcal {S}}_{\text {space}}^{\lambda _i,t_0} {\mathbf {u}}^{t_0}\right) , &{}\quad \text { on } (t_0, t_0+\tfrac{b-x}{-\lambda _i}),\\ \left( {\mathcal {S}}_{\text {time}}^{\Lambda ^+,a} {\mathbf {u}}^a\right) :=\sum _{j\in K^+} \mathbf {\Pi }_j \left( {\mathcal {S}}_{\text {space}}^{\lambda _j,t_0} {\mathbf {u}}^{t_0} \right) , &{}\quad \text { on } (t_0,t_0+\tfrac{x-a}{\lambda _j}), \end{array}\right. } \end{aligned}$$
(6.2)

with the convention that \({\mathbf {u}}^{t_0} = 0\) outside \((a,b)\) and \(\mathbf {\Pi }_p:= {\mathbf {R}}\; \mathrm {diag}({\mathbf {e}}_p)\; {\mathbf {R}}^{-1}\), where \({\mathbf {e}}_p \in {\mathbb {R}}^n\) is the p-th unit vector.

\((\Rightarrow )\) Assume that \({\mathbf {w}}\) is the solution to the swDAE (2.1d)

$$\begin{aligned} {\mathbf {E}}_\sigma {\dot{{\mathbf {w}}}}={\mathbf {H}}_\sigma {\mathbf {w}}+{\mathbf {B}}_\sigma {\mathbf {y}}_P + {\mathbf {f}}_\sigma , \end{aligned}$$

where \({\mathbf {y}}_P={\mathbf {C}}_P {\mathbf {u}}_{ab}\) and that \({\mathbf {u}}\) is the solution to the PDE (2.1a) given as (5.15). Then \({\mathbf {u}}_{ab}\) is of the form

$$\begin{aligned} {\mathbf {u}}_{ab} = {\mathbf {F}} {\mathbf {C}}_{D_\sigma } {\mathbf {w}}+ \sum _{k=1}^n {\mathbf {D}}_k {\mathcal {S}}_{\text {time}}^{\tau _k} {\mathbf {u}}_{ab}, \end{aligned}$$

where \({\left[ {\begin{matrix} {\mathbf {b}}^a \\ {\mathbf {b}}^b \end{matrix}}\right] }={\mathbf {C}}_{D_\sigma }{\mathbf {w}}\). Hence, \({\mathbf {z}}=({\mathbf {w}}^\top , {\mathbf {u}}_{ab}^\top )^\top \) solves the swDDAE (6.1). \(\square \)

Since the solution of the coupled system on the whole domain can be recovered via (5.15), we have therefore shown that the solution properties of the coupled system can be equivalently characterized by the swDDAE (6.1). The following result establishes conditions for existence and uniqueness of solutions for general swDDAE.

Theorem 24

(Existence and uniqueness of solutions for swDDAEs) Consider the following switched delay differential-algebraic equations having \(d\in {\mathbb {N}}\) delays such that \(0<\tau _1<\tau _2<\cdots <\tau _d\)

$$\begin{aligned} {\mathcal {E}}_\sigma {\dot{{\mathbf {z}}}} = {\mathcal {H}}_\sigma {\mathbf {z}}+ \sum _{j=1}^d {\mathcal {D}}_j {\mathcal {S}}_{\text {time}}^{\tau _j} {\mathbf {z}}+ {\mathbf {g}}_\sigma , \end{aligned}$$
(6.3)

with \(\sigma :{\mathbb {R}}\rightarrow \{ 1,2,\ldots ,N \}\), \(N\in {\mathbb {N}}\), and \({\mathcal {E}}_\xi ,{\mathcal {H}}_\xi ,{\mathcal {D}}_1,\ldots ,{\mathcal {D}}_d\in {\mathbb {R}}^{m\times m}\) for each \(\xi \in \{ 1,2,\ldots ,N \}\). Assume that \(({\mathcal {E}}_\xi ,{\mathcal {H}}_\xi )\) is regular for each \(\xi \in \{ 1,2,\ldots ,N \}\). Then, for any initial trajectory \({\mathbf {z}}^{t_0}\in \left( {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }\right) ^m\) and any inhomogeneity \({\mathbf {g}}_\xi \in \left( {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }\right) ^m\), the corresponding initial trajectory problem has a unique solution.

Proof

The result is a simple consequence of the “method of steps”; the details for DDAEs with a single delay (\(d=1\)) can be found in [20]. In the following proof, we adopt the prime notation \((')\) to indicate the derivative of distributions in addition to the dot notation \((\, \dot{}\, )\), since restricting a distribution to an interval and then differentiating does not have the same meaning as differentiating and then restricting. In other words, the operations of restriction to an interval and differentiation of a distribution do not commute.

Let \(\tau \) be the smallest delay in the set \(\{ \tau _1,\ldots ,\tau _d \}\), i.e. \(\tau =\tau _1\). We show that the solution to the swDDAE (6.3) can be expressed as

$$\begin{aligned} {\mathbf {z}}&={\mathbf {z}}^{t_0}_{(-\infty ,t_0)} + \sum _{k=1}^{\infty } {\mathbf {z}}^k_{[{\widetilde{t}}_{k-1},\widetilde{t}_{k})}, \end{aligned}$$
(6.4)

where \(\widetilde{t}_k:=t_0 + k\tau \), \(k\in {\mathbb {N}}\), \(\widetilde{t}_0:=t_0\), and \({\mathbf {z}}^k \in \left( {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }\right) ^m\) is the unique solution (cf. Theorem 10) to the non-delay swDAE

$$\begin{aligned} {\mathbf {z}}^k_{(-\infty ,{\widetilde{t}_{k-1}})}&={\mathbf {z}}^{k-1}_{(-\infty ,{\widetilde{t}_{k-1}})} \end{aligned}$$
(6.5a)
$$\begin{aligned} \left( {\mathcal {E}}_{\sigma } {\dot{{\mathbf {z}}}}^k \right) _{[{\widetilde{t}}_{k-1},\infty )}&= \left( {\mathcal {H}}_{\sigma } {\mathbf {z}}^k + \mathbf {\widetilde{g}}_{\sigma } \right) _{[{\widetilde{t}}_{k-1},\infty )} \end{aligned}$$
(6.5b)

where \(\mathbf {\widetilde{g}}_{\sigma }:= \left( \sum _{j=1}^d {\mathcal {D}}_j {\mathcal {S}}_{\text {time}}^{\tau _j}{\mathbf {z}}^{k-1} + {\mathbf {g}}_{\sigma } \right) \). For each test function \(\varphi \), the formally infinite sum defining \({\mathbf {z}}(\varphi )\) is actually finite, because \(\varphi \) has compact support. Moreover, \({\mathbf {z}}\in \left( {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }\right) ^m\) since it is a locally finite sum of piecewise-smooth distributions. For any \(k\ge 1\),

$$\begin{aligned} \left( {\mathcal {E}}_{\sigma }{\dot{{\mathbf {z}}}} \right) _{[{\widetilde{t}}_{k-1},\widetilde{t}_k)}&={\mathcal {E}}_{\sigma }\left( \left( {\mathbf {z}}^{t_0}_{(-\infty ,t_0)}\right) ' + \sum _{p=1}^{\infty } \left( {\mathbf {z}}^p_{[{\widetilde{t}}_{p-1},\widetilde{t}_{p})} \right) '\right) _{[{\widetilde{t}}_{k-1},\widetilde{t}_k)}\\&= {\mathcal {E}}_{\sigma } \left( {\dot{{\mathbf {z}}}}^{t_0}_{(-\infty ,t_0)} - {\mathbf {z}}^{t_0}(t_0^-)\delta _{t_0} \right. \\&\left. \quad + \sum _{p=1}^\infty \left( {\dot{{\mathbf {z}}}}^p_{[{\widetilde{t}}_{p-1},\widetilde{t}_p)} + {\mathbf {z}}^p(\widetilde{t}^-_{p-1}) \delta _{\widetilde{t}_{p-1}} - {\mathbf {z}}^p(\widetilde{t}^-_p)\delta _{\widetilde{t}_p} \right) \right) _{[{\widetilde{t}}_{k-1},\widetilde{t}_k)}\\&= \left( {\mathcal {E}}_{\sigma }{\dot{{\mathbf {z}}}}^k \right) _{[{\widetilde{t}}_{k-1}, \widetilde{t}_k)}\\&= \left( {\mathcal {H}}_{\sigma } {\mathbf {z}}^k + \mathbf {\widetilde{g}}_{\sigma } \right) _{[{\widetilde{t}}_{k-1},{\widetilde{t}}_{k})}\\&= {\mathcal {H}}_{\sigma } {\mathbf {z}}_{[{\widetilde{t}}_{k-1},{\widetilde{t}}_{k})} + \sum _{j=1}^d {\mathcal {D}}_j \left( {\mathcal {S}}_{\text {time}}^{\tau _j} {\mathbf {z}}^{k-1}\right) _{[{\widetilde{t}}_{k-1}, {\widetilde{t}}_{k})} + \left( {\mathbf {g}}_{\sigma }\right) _{[{\widetilde{t}}_{k-1},{\widetilde{t}}_{k})}\\&= \left( {\mathcal {H}}_{\sigma } {\mathbf {z}}+ \sum _{j=1}^d {\mathcal {D}}_j {\mathcal {S}}_{\text {time}}^{\tau _j} {\mathbf {z}}+ {\mathbf {g}}_{\sigma } \right) _{[{\widetilde{t}}_{k-1},{\widetilde{t}}_{k})}, \end{aligned}$$

where we exploit the following relations (cf. [17]): \((D_{[s,t)})' = (D')_{[s,t)} + D(s^-)\delta _s - D(t^-)\delta _t\) and \((D_{(s,t)})' = (D')_{(s,t)} + D(s^+)\delta _s - D(t^-)\delta _t\) for \(D\in {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }\) and \(-\infty \le s \le t \le \infty \), with the convention \(\delta _{\mp \infty } = 0\). \(\square \)
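The method of steps underlying the proof can be sketched numerically: on each interval of length \(\tau\), the delayed term is already known from the previous interval, so only a non-delay DAE has to be solved there. The following toy example is a sketch only: the matrices are chosen for illustration (not from the paper), the history is taken constant, and implicit Euler replaces the exact solution operator for the DAE with regular pencil \((E,H)\) and singular \(E\).

```python
import numpy as np

# Method-of-steps sketch for a (non-switched) delay DAE
#     E z'(t) = H z(t) + D z(t - tau) + g(t),   z(t) = z0 for t <= 0,
# with a regular pencil (E, H) but singular E (an algebraic constraint).
E = np.array([[1.0, 0.0],
              [0.0, 0.0]])
H = np.array([[-1.0, 1.0],
              [0.0, -1.0]])       # det(sE - H) = s + 1, so (E, H) is regular
D = np.array([[0.0, 0.0],
              [0.5, 0.0]])        # the delayed term feeds the constraint
tau, dt, T = 0.5, 0.005, 2.0

t = np.arange(0.0, T, dt)
shift = int(round(tau / dt))      # number of grid points per delay
z0 = np.array([1.0, 0.0])
z = np.zeros((len(t), 2))
z[0] = z0

for n in range(len(t) - 1):
    # the delayed value is already known (method of steps)
    delayed = z[n + 1 - shift] if n + 1 >= shift else z0
    # implicit Euler: (E - dt*H) z_{n+1} = E z_n + dt * D @ delayed
    z[n + 1] = np.linalg.solve(E - dt * H, E @ z[n] + dt * (D @ delayed))
```

Here the second equation is purely algebraic, \(z_2(t) = 0.5\,z_1(t-\tau)\), and the scheme enforces it exactly at every step, including the consistency jump away from the (inconsistent) initial value.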

Remark 25

The existence and uniqueness result of Theorem 24 can easily be extended to the case that the delay coefficient matrices \({\mathcal {D}}_j\), \(j=1,2,\ldots ,d\) in (6.3) are switch dependent, i.e. (6.3) becomes

$$\begin{aligned} {\mathcal {E}}_\sigma {\dot{{\mathbf {z}}}} = {\mathcal {H}}_\sigma {\mathbf {z}}+ \sum _{j=1}^d {\mathcal {D}}_{j,\sigma } {\mathcal {S}}_{\text {time}}^{\tau _j} {\mathbf {z}}+ {\mathbf {g}}_\sigma . \end{aligned}$$

We can conclude now our main result about existence and uniqueness of solutions of the coupled system.

Corollary 26

Consider the coupled system (2.1) with a hyperbolic PDE (Assumption 1) and suitable boundary condition (Assumption 2). Furthermore, assume that for all \(\xi \in \{1,\ldots ,N\}\) the matrix pairs \(({\mathbf {E}}_\xi , {\mathbf {H}}_\xi + {\mathbf {B}}_\xi {\mathbf {C}}_P {\mathbf {F}}{{\mathbf {C}}_D}_\xi )\) with \({\mathbf {F}}\) as in Remark 21 are regular. Then, for any initial values \({\mathbf {u}}^{t_0}\in {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }(X)^n\), \({\mathbf {w}}^{t_0}\in {\mathbb {R}}^m\) and external inhomogeneities \({\mathbf {f}}_\xi \in {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }(T)^m\), there exists a unique solution \(({\mathbf {u}},{\mathbf {w}})\in {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }(T\times X)^n \times {\mathbb {D}}_{\text {pw}{\mathcal {C}}^\infty }(T)^m\) of the coupled system.

Proof

This is a consequence of Theorems 23 and 24 and the fact that \(\det (s{\mathcal {E}}_\xi -{\mathcal {H}}_\xi ) = \det \big (s{\mathbf {E}}_\xi - ({\mathbf {H}}_\xi + {\mathbf {B}}_\xi {\mathbf {C}}_P {\mathbf {F}}{{\mathbf {C}}_D}_\xi )\big )\). \(\square \)

Remark 27

For general delay differential equations, it is common to distinguish the three types retarded, neutral and advanced, see, for example, [9, 22]. For our existence and uniqueness result, we do not need any assumption on these types, and in general all three types are allowed. However, when numerical considerations are important, it is necessary to further analyse the properties of the coupled systems with respect to the index and type of the resulting delay DAE; in particular, the question of whether discontinuities in the initial data get smoothed out over time or not is highly relevant. This question was already investigated for non-switched DDAEs [21] and it should be possible to extend these results to the switched case; this is ongoing research.

Remark 28

As illustrated already in Example 2, even if the PDE is well-defined and the swDAE is regular, coupling these systems may yield an ill-posed system. Indeed, using the notation of Corollary 26, we see that \({\mathbf {F}}={\left[ {\begin{matrix} 1 \\ 0 \end{matrix}}\right] }\) (note that \({\mathbf {F}}_b\) is a \(1\times 0\) matrix) and hence

$$\begin{aligned} ({\mathbf {E}},{\mathbf {H}}+{\mathbf {B}}{\mathbf {C}}_P {\mathbf {F}}{\mathbf {C}}_D)= & {} \left( {\left[ {\begin{matrix} 1 &{}\quad 0 \\ 0 &{}\quad 0 \end{matrix}}\right] }, {\left[ {\begin{matrix} 1 &{}\quad 0 \\ 0 &{}\quad 1 \end{matrix}}\right] } + {\left[ {\begin{matrix} 0 \\ -1 \end{matrix}}\right] }\cdot {\left[ {\begin{matrix} 1&\quad 0 \end{matrix}}\right] } \cdot {\left[ {\begin{matrix} 1 \\ 0 \end{matrix}}\right] } \cdot {\left[ {\begin{matrix} 0&\quad 1 \end{matrix}}\right] } \right) \\= & {} \left( {\left[ {\begin{matrix} 1 &{}\quad 0 \\ 0 &{} \quad 0 \end{matrix}}\right] }, {\left[ {\begin{matrix} 1 &{}\quad 0 \\ 0 &{}\quad 0 \end{matrix}}\right] } \right) \end{aligned}$$

which is not a regular matrix pair anymore. Furthermore, the equivalent delay DAE according to Theorem 23 does not contain a delay term; hence, the closed-loop system results in a DAE which is not regular.
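The regularity computation of this remark can be checked numerically: a pencil \((E,H)\) is regular iff \(\det(sE-H)\) is not the zero polynomial, so evaluating the determinant at a few random values of \(s\) detects regularity with probability one. The sketch below reproduces the matrices of this remark; the helper `pencil_is_regular` is our own illustrative name, not from the paper.

```python
import numpy as np

# Probabilistic regularity test for a matrix pencil (E, H): regular
# iff det(s*E - H) is not identically zero, so sampling the determinant
# at a few random s almost surely gives a nonzero value when regular.
def pencil_is_regular(E, H, samples=5, tol=1e-12, seed=0):
    rng = np.random.default_rng(seed)
    return any(abs(np.linalg.det(s * E - H)) > tol
               for s in rng.uniform(-10.0, 10.0, samples))

E = np.array([[1.0, 0.0], [0.0, 0.0]])
H = np.array([[1.0, 0.0], [0.0, 1.0]])
B = np.array([[0.0], [-1.0]])
C_P = np.array([[1.0, 0.0]])
F = np.array([[1.0], [0.0]])
C_D = np.array([[0.0, 1.0]])

H_closed = H + B @ C_P @ F @ C_D     # equals [[1, 0], [0, 0]]
```

The open-loop pencil \((E,H)\) passes the test, while the closed-loop pencil \((E,H_{\text{closed}})\) fails it, matching the remark.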

Numerical results of the power grid example

In this section, we explain how we solve the coupled PDE system (2.5) with the switched DAE (2.10) numerically and illustrate the results obtained. We remark that the matrix pairs \(({\mathcal {E}}_\sigma ,{\mathcal {H}}_\sigma )\) in the power grid example are regular.

Denote by \({\mathcal {E}}:=\{1,2,3,4 \}\) the set of edges in the coupled network, over which we discretize the PDE. For discretizing the spatial domain \([a,b]\), we consider \(x_k=a+k\Delta x\), \(k=0,1,\ldots ,N\), \(\Delta x=(b-a)/N \), where N is the number of cells in the mesh. We then insert two ghost cells \(G_{-1}\) and \(G_{N+1}\) at both ends of the computational domain which are treated as boundaries. The time variable is discretized as \(t^{n+1}=t^{n}+\Delta t\), where \(\Delta t\) is chosen in accordance with the CFL number, which we set to 1. We denote by \({\mathbf {u}}_j^n={\mathbf {u}}(x_j,t^n)\) the approximate solution at time \(t^n\) and position \(x_j\) and write \({({\mathbf {u}}_k)}_j^n=({(u_k^1)}_j^n,{(u_k^2)}_j^n)^\top \) for the unknowns on every edge \(k\in {\mathcal {E}}\). At each time step, before solving the coupled system, we first decompose the discretized PDE over each edge \(k\in {\mathcal {E}}\) into its characteristic variables \({({\mathbf {v}}_k)}_j^n\) consisting of left-going, \({(v^-_k)}_j^n\), and right-going characteristic waves, \({(v^+_k)}_j^n\), where \((u^1_k)_j^n = (v_k^-)_j^n + (v_k^+)_j^n\) and \((u^2_k)_j^n = - (v_k^-)_j^n + (v_k^+)_j^n\). To solve the decomposed PDE and the swDAE numerically, we use the upwind scheme and the implicit Euler method, respectively. At each time iteration, we solve the decomposed PDE numerically and then update \({({\mathbf {u}}_k)}_j^{n+1}\) via the inverse coordinate change \({({\mathbf {u}}_k)}_j^{n+1} ={\mathbf {R}}_k{({\mathbf {v}}_k)}_j^{n+1}\), where \({({\mathbf {v}}_k)}_j^{n}=({(v^-_k)}_{j}^{n},{(v^+_k)}_{j}^{n})^\top \) and \({\mathbf {A}}_k={\mathbf {R}}_k \Lambda _k {\mathbf {R}}_k^{-1}\) for \(k\in {\mathcal {E}}\), with \({\mathbf {A}}_k\) the coefficient matrix given as in (2.9). The equations in (2.7) together result in coupling conditions in terms of the characteristic variables as follows:

$$\begin{aligned} {(v^-_2)}_{N+1}^{n}&=2{(v^-_4)}_{-1}^{n}-{(v^+_2)}_{N+1}^{n}, \qquad&{(v^-_4)}_{N+1}^{n}&=-\tfrac{2}{3}{(v^+_3)}_{N+1}^{n}+\tfrac{1}{3} {(v^+_4)}_{N+1}^{n},\\ {(v^+_4)}_{-1}^{n}&=2{(v^+_2)}_{N+1}^{n}-{(v^-_4)}_{-1}^{n}, \qquad&{(v^-_3)}_{N+1}^{n}&=\tfrac{1}{3}{(v^+_3)}_{N+1}^{n}-\tfrac{2}{3}{(v^+_4)}_{N+1}^{n}, \end{aligned}$$

which are four of the eight boundary conditions for the decoupled PDE, and hence, they provide the inputs to the swDAE as

$$\begin{aligned} (u^1_4)_{0}^{n}=(v_4^-)_{-1}^{n}+(v_4^+)_{-1}^{n}, \qquad&(u^1_2)_{N}^{n}=(v_2^-)_{N+1}^{n}+(v_2^+)_{N+1}^{n},\\ (u^1_3)_{N}^{n}=(v_3^-)_{N+1}^{n}+(v_3^+)_{N+1}^{n}, \qquad&(u^1_4)_{N}^{n}=(v_4^-)_{N+1}^{n}+(v_4^+)_{N+1}^{n}. \end{aligned}$$

The remaining four inputs to the swDAE are given in terms of the characteristic variables \((v_1^-)_{-1}^{n}\), \((v_1^+)_{N+1}^{n}\), \((v_2^-)_{-1}^{n}\) and \((v_3^-)_{-1}^{n}\). Then, at each time step, we solve the swDAE and obtain the boundary conditions \((v_1^+)_{-1}^{n}\), \((v_1^-)_{N+1}^{n}\), \((v_2^+)_{-1}^{n}\) and \((v_3^+)_{-1}^{n}\). At \(P_1\) in Fig. 4, the boundary condition is assigned as \({(u^2_1)}_{0}^{n}=v_G\), where \(v_G\) is a prescribed constant voltage source; hence, the boundary condition for the characteristics is \({(v_1^+)}_{-1}^{n}=v_G+{(v_1^-)}_{-1}^{n}\). At \(P_2\) over \({\mathcal {E}}_1\), the input to the swDAE is \({(u^1_1)}_{N}^{n}={(v_1^-)}_{N}^{n}+ {(v_1^+)}_{N+1}^{n}\) and the boundary condition for the characteristic variable is \({(u^2_1)}_{N}^{n}=-{(v_1^-)}_{N}^{n}+ {(v_1^+)}_{N+1}^{n}\). The swDAE assigns this boundary condition according to (2.8)

$$\begin{aligned} \left. \begin{aligned} i_{1\eta }^{n+1}&= {(u^1_1)}_N^{n}\\ v_{1\eta }^{n+1}&=\tfrac{\mathrm {d}}{\mathrm {d}t}i_{1\eta }^{n+1}\\ {(u^2_1)}_{N}^{n}&=v_{1\eta }^{n+1} \end{aligned} \right\} \Leftrightarrow \left\{ \begin{aligned}&i_{1\eta }^{n+1} = 2{(v^+_1)}_{N}^{n} - v_{1\eta }^{n+1}\\&\tfrac{\mathrm {d}}{\mathrm {d} t}\left( 2{(v^+_1)}_{N+1}^{n}-v_{1\eta } ^{n+1}\right) =v_{1\eta }^{n+1}\\&{(v^-_1)}_{N+1}^{n}={(v^+_1)}_{N}^{n} - v_{1\eta }^{n+1}, \end{aligned} \right. \end{aligned}$$

where \(\eta =2\) or \(\eta =3\) depending on to which edge the switch connects \({\mathcal {E}}_1\) and \(i_{12}^n, i_{13}^n, v_{12}^n, v_{13}^n\) are discretized state variables for the swDAE at time \(t^n\). If \(\eta =2\), \(v_{12}^{n+1}\) is computed as above, then the boundary condition \((v^-_1)^n_{N+1}= (v^+_1)^n_N - v_{12}^{n+1}\) is assigned; hence, \(i_{12}^{n+1}=(v^+_1)^n_N+(v^-_1)^n_{N+1}\). Furthermore, \(i_{13}^{n+1}=0\) and

$$\begin{aligned} \begin{aligned} v_{13}^{n+1}&=\tfrac{\mathrm {d}}{\mathrm {d} t}i_{13}^{n+1}\\&= \tfrac{i_{13}^{n+1}-i_{13}^{n}}{\Delta t}. \end{aligned} \end{aligned}$$
(6.8)
Fig. 6
figure 6

\(t=0.5\). After the first switch at \(t=0.4\), the peak on the 2nd edge occurs when edges 1 and 2 are disconnected. On the 3rd and 4th edges, no changes have happened yet (hence, plots are not included here)

Fig. 7
figure 7

\(t=0.8\). After the second switch at \(t=0.7\), the switching transformer disconnects edges 1 and 3; therefore, a peak occurs on the 3rd edge. On the 4th edge, there have not been any changes yet

Fig. 8
figure 8

PDE solutions on the edges for the domain \((t,x)\in [0,1.5] \times [0,0.5]\). The switching times are \(t=0.4\) and \(t=0.7\). The plots on the left show the values of \(I_k\), whereas those on the right show \(V_k\), for the edges \(k=1,2,3,4\) (\(I_k=(u^2_k)\), \(V_k=(u^1_k)\) in discretized variables)

If \(\eta = 3\), then \(i_{12}^{n+1}=0\) and \(v_{12}^{n+1}\) is found similarly as in (6.8). Then, the boundary conditions \({(u^2_2)}_0^{n}=\kappa _{12}v_{12}^{n}\) and \({(u^2_3)}_0^{n} =\kappa _{13}v_{13}^{n}\) are assigned; thus, the boundary conditions in characteristic variables \({(v^+_2)}_{-1}^{n}={(v^-_2)}_{-1}^{n}+{(u^2_2)}_0^{n}\) and \({(v^+_3)}_{-1}^{n}={(v^-_3)}_{-1}^{n}+{(u^2_3)}_0^{n}\) are assigned. Then, we update the solution for \(({\mathbf {u}}_k)_{j}^{n+1}\), for all \(k \in {\mathcal {E}}\) by using the eigenvector matrix \({\mathbf {R}}_k\) and \((v^-_k)_\ell ^{n+1}, \ \ell = 0,1,\ldots , N\) and \((v^+_k)_\psi ^{n+1}, \ \psi =1,2,\ldots , N+1\). Until we reach the prescribed final time, we update the time \(t^{n+1}=t^{n}+\Delta t\) and repeat the above steps.

If, instead of solving for \(({\mathbf {v}}_k)_j^n\), one attempts to solve for \({({\mathbf {u}}_k)}_j^n\) and assigns inputs/outputs without expressing them in characteristic variables, oscillations might occur at the boundaries at each time step. The method described in this section handles the Dirac impulses and ensures that such oscillations do not take place. Therefore, the numerical steps defined above should be carried out in characteristic variables, and then the original unknowns \(({\mathbf {u}}_k)_j^n\) should be updated accordingly.
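With CFL number 1, the upwind scheme described above reduces to an exact shift of the characteristic variables by one cell per step, with ghost cells carrying the boundary data. A minimal single-edge sketch (grid sizes as in the next subsection, initial profile hypothetical):

```python
import numpy as np

# One upwind step for the characteristic variables on a single edge with
# CFL = lam*dt/dx = 1; the scheme then transports each wave exactly one
# cell per step.  Ghost cells (indices 0 and N+1 here) hold boundary data.
N = 150
a, b = 0.0, 0.5
dx = (b - a) / N
lam = 1.0
dt = dx / lam                    # CFL number equal to 1

v_minus = np.zeros(N + 2)        # left-going wave (speed -lam), with ghost cells
v_plus = np.zeros(N + 2)         # right-going wave (speed +lam), with ghost cells
v_plus[1:N // 2] = 1.0           # a hypothetical initial profile

def upwind_step(v_minus, v_plus):
    vm, vp = v_minus.copy(), v_plus.copy()
    vp[1:] = v_plus[:-1]         # right-going: take the value from the left
    vm[:-1] = v_minus[1:]        # left-going: take the value from the right
    return vm, vp

v_minus, v_plus = upwind_step(v_minus, v_plus)
```

Because the update is an exact shift, discontinuities (and, in the discretized sense, impulsive peaks) are transported without smearing, which is what makes the characteristic formulation robust at the boundaries.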

Discontinuous initial condition

We consider the computational domain \([a,b]=[0,0.5]\), initial time \(t_0=0\), and final time \(t_{max}=1.5\), the number of cells \(N=150\), \(\Delta x=3.3 \times 10^{-3}.\) The constant voltage at \(P_1\) is \(v_G=0.5\). The constants are assumed to be \(L_{12}=1\), \(L_{13}=1\), \(\kappa _{12}=1\), \(\kappa _{13}=1\), \(R_{24}=1\), \(R_{34}=1\), \(L_k=1\) and \(C_k=1\) for each \(k\in {\mathcal {E}}\). The initial conditions for the PDE are \(I_1(0,x)=0\), \(x\in [0,0.5]\), \(V_1(0,x)=0\) for \(x\in [0,0.3)\) and \(V_1(0,x)=1\) for \(x\in [0.3,0.5]\), \(I_k(0,x) = 0\) and \(V_k(0,x) = 0\), \(x\in [0,0.5],\) for \(k=2,3,4\). The switch initially connects the edges 1 and 2 for \(t\in [0,0.4)\) and then connects the edges 1 and 3 for \(t\in [0.4,0.7)\). For \(t\in [0.7,1.5),\) it connects the edges 1 and 2 again. In Fig. 6, the plots for \({\mathcal {E}}_1\) and \({\mathcal {E}}_2\) are shown at \(t=0.5\). After the first switch at \(t=0.4\), a Dirac impulse occurs on \({\mathcal {E}}_2\).
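The discontinuous initial condition on edge 1 and its characteristic decomposition (using \(u^1 = v^- + v^+\) and \(u^2 = -v^- + v^+\) with the identification \(V_1 = u^1\), \(I_1 = u^2\) from the discretization section) can be set up as follows; this is a sketch of the initial data only, not of the full simulation:

```python
import numpy as np

# Discontinuous initial condition on edge 1 from the experiment:
#   V_1(0, x) = 0 on [0, 0.3),  V_1(0, x) = 1 on [0.3, 0.5],  I_1(0, x) = 0,
# and its characteristic decomposition v^+ = (u^1 + u^2)/2, v^- = (u^1 - u^2)/2.
N = 150
x = np.linspace(0.0, 0.5, N + 1)
V1 = np.where(x < 0.3, 0.0, 1.0)
I1 = np.zeros_like(x)

v_plus = 0.5 * (V1 + I1)         # right-going characteristic at t = 0
v_minus = 0.5 * (V1 - I1)        # left-going characteristic at t = 0
```

Recombining the two characteristic fields recovers the original initial data exactly, so the jump at \(x=0.3\) splits into a left- and a right-travelling discontinuity of equal strength.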

In Fig. 7, the plots for all edges at \(t=0.8\) are shown. After the second switch at \(t=0.7\), another Dirac impulse occurs on \({\mathcal {E}}_3\).

In Fig. 8, the solution over the whole domain \((t,x)\in [0,1.5]\times [0,0.5]\) is shown for all edges; the lines on \({\mathcal {E}}_2\), \({\mathcal {E}}_3\) and \({\mathcal {E}}_4\) show how the Dirac impulses travel through the domain.

Conclusion

We have presented an existence and uniqueness result for solutions of linear hyperbolic PDEs coupled to a linear switched DAE. Since switched DAEs can have distributional solutions (including Dirac impulses and their derivatives of arbitrarily high order), it was necessary to first define a suitable distributional solution space for the PDE such that evaluation at the boundary (the trace operator) is well defined. Within this solution space (the space of piecewise-smooth distributions), it is possible to relate the solutions of the coupled system to solutions of a switched delay DAE, and we obtain our main result by extending recent results on the solvability of delay DAEs.

More research is still necessary to extend the results to PDEs with a source term as well as to the mildly nonlinear case. Furthermore, the well-posedness condition we present is still rather technical, and there may be simpler sufficient conditions, e.g. in terms of the coupling and output matrices of the PDE alone (which would then rule out a situation as in Remark 28).

References

  1. Ambrosio L, Bressan A, Helbing D, Klar A, Zuazua E (2013) Modelling and optimisation of flows on networks. Lecture Notes in Mathematics, vol 2062. Springer, Heidelberg; Fondazione C.I.M.E., Florence. Lectures from the C.I.M.E. Summer School held in Cetraro, June 15–19, 2009, edited by Piccoli B and Rascle M

  2. Amin S, Hante FM, Bayen AM (2012) Exponential stability of switched linear hyperbolic initial-boundary value problems. IEEE Trans. Automat. Control 57(2):291–301


  3. Bastin G, Coron J-M (2016) Stability and boundary stabilization of 1-D hyperbolic systems. Progress in Nonlinear Differential Equations and Their Applications, vol 88. Birkhäuser/Springer, Cham

  4. Borsche R, Colombo RM, Garavello M (2012) Mixed systems: ODEs—balance laws. J Differ Equ 252(3):2311–2338


  5. Borsche R, Eimer M, Siedow N (2019) A local time stepping method for thermal energy transport in district heating networks. Appl Math Comput 353:215–229


  6. Danilov VG, Shelkovich VM (2005) Dynamics of propagation and interaction of \(\delta \)-shock waves in conservation law systems. J Differ Equ 211(2):333–381


  7. Fügenschuh A, Göttlich S, Herty M, Kirchner C, Martin A (2009) Efficient reformulation and solution of a nonlinear PDE-controlled flow network model. Computing 85(3):245–265


  8. Göttlich S, Herty M, Schillen P (2016) Electric transmission lines: control and numerical discretization. Optim Control Appl Methods 37(5):980–995


  9. Ha P, Mehrmann V (2016) Analysis and numerical solution of linear delay differential-algebraic equations. BIT Numer Math 56(2):633–657


  10. Haller S, Hörmann G (2008) Comparison of some solution concepts for linear first-order hyperbolic differential equations with non-smooth coefficients. Publ Inst Math (Beograd) (NS) 84(98):123–157


  11. Hante FM, Leugering G, Seidman TI (2009) Modeling and analysis of modal switching in networked transport systems. Appl Math Optim 59(2):275–292


  12. Hante FM, Leugering G, Seidman TI (2010) An augmented BV setting for feedback switching control. J Syst Sci Complex 23(3):456–466


  13. Izquierdo J, Iglesias PL (2002) Mathematical modelling of hydraulic transients in simple systems. Math Comput Model 35(7–8):801–812


  14. Kolb O (2011) Simulation and optimization of gas and water supply networks. PhD thesis, TU Darmstadt

  15. Müller LO, Toro EF (2014) A global multiscale mathematical model for the human circulation with emphasis on the venous system. Int J Numer Methods Biomed Eng 30(7):681–725


  16. Thein F, Hantke M (2016) Singular and selfsimilar solutions for Euler equations with phase transitions. Bull Braz Math Soc (NS) 47(2):779–786


  17. Trenn S (2009) Distributional differential algebraic equations. PhD thesis, Institut für Mathematik, Technische Universität Ilmenau, Universitätsverlag Ilmenau, Germany

  18. Trenn S (2009) Regularity of distributional differential algebraic equations. Math Control Signals Syst 21(3):229–264


  19. Trenn S (2012) Switched differential algebraic equations. In: Vasca F, Iannelli L (eds) Dynamics and control of switched electronic systems—advanced perspectives for modeling, simulation and control of power converters, chapter 6. Springer, London, pp 189–216


  20. Trenn S, Unger B (2019) Delay regularity of differential-algebraic equations. In: Proceedings of the 58th IEEE Conference on Decision and Control (CDC) 2019, Nice, France (to appear)

  21. Unger B (2018) Discontinuity propagation in delay differential-algebraic equations. Electron J Linear Algebra 34:582–601


  22. Wiener J (1993) Generalized solutions of functional differential equations. World Scientific, Singapore


  23. Yang H (1999) Riemann problems for a class of coupled hyperbolic systems of conservation laws. J Differ Equ 159(2):447–484



Corresponding author

Correspondence to S. Trenn.


This work was supported by DFG-Grants BO 4768/1-1 and TR 1223/4-1 as well as NWO Vidi Grant 639.032.733.



Borsche, R., Kocoglu, D. & Trenn, S. A distributional solution framework for linear hyperbolic PDEs coupled to switched DAEs. Math. Control Signals Syst. 32, 455–487 (2020). https://doi.org/10.1007/s00498-020-00267-7


Keywords

  • Solution theory
  • Coupled systems
  • Switched systems
  • Partial differential equations
  • Differential algebraic equations