1 Introduction

District heating (or cooling) networks (briefly DHN), also called teleheating, are systems that distribute heat, generated in a centralized place called a power station or combined heat and power plant (briefly CHP), to residential houses and to commercial and industrial sites. The heat is transported along a network of pipes filled with an incompressible fluid, typically water. Despite the initial costs of constructing the whole network, such a system typically results in high efficiency, in less pollution than localized heat generators, and in reduced costs for final users over a long time scale.

Here we present, from a mathematical point of view, a complete teleheating system, modeled by ordinary and partial differential equations on a graph; see also [3, 32]. These equations describe both the fluid dynamics inside the network of pipes and the energy distribution towards the final consumers. Our main result consists in proving the existence of a unique solution to this system and the continuous dependence of the solution on the initial and boundary conditions. This result opens the way to the study of optimal strategies for heat distribution. In simple cases we also present optimal control problems and show that optimal controls do exist; see [32] for the computation of optimal controls in a reduced case, and Remark 3.11.

The model considered here is constructed starting from the classic Euler system [29], which describes the evolution of a fluid through the macroscopic quantities density, linear momentum and energy. For results on Euler systems on networks we refer to [7,8,9, 12, 23] and the references therein. In the present framework, due to the fluid's incompressibility, the velocity in each pipe is described by an ordinary differential equation. Furthermore, at junctions the coupling conditions can be given in a fixed form, since no change of characteristics can occur. For compressible flows this is a truly challenging problem.

In the present case, the heat energy satisfies a hyperbolic advection equation, since it is transported along each pipe according to the fluid velocity. At junctions, suitable coupling conditions provide the conservation of mass and energy and the distribution rules over the pipes. We use coupling conditions slightly different from those considered in [32], with the aim of avoiding low regularity of the solution. Note that, although the energy equations are linear, nonlinear effects can occur due to the coupling. In [31, Sect. 3.2] a minimal example is given of how discontinuous solutions can occur even with smooth input data. In this example two neighboring consumers are directly supplied by the power plant, but are also mutually connected by an additional pipe. The flow in this additional pipe always points towards the consumer with the larger demand. Thus, if the demands change, the flow changes its direction and a different temperature can enter the pipe.

Partial differential equations on networks have attracted considerable interest in recent years, mainly due to applications ranging from traffic flow [25] to gas distribution [11, 26], irrigation channels [17, 21], sewer networks [2] and flexible strings [15, 22, 28]. In this setting, coupling conditions at the nodes play an important role, since they strongly influence the existence, the well posedness, and the qualitative properties of the solution.

The proof of the main result is obtained through the classical Banach Fixed Point Theorem and through ad hoc stability estimates. The most delicate part consists in deducing stability and total variation estimates for the energy subsystem. This is due to the fact that the energy evolves according to a transport equation whose characteristic curves move with a speed given by the fluid velocity. Even for simple networks the direction of the flow can change arbitrarily often. This produces solutions for the energy functions whose traces at the nodes have low regularity. Such stability estimates are missing, e.g., in [33].

The paper is organized as follows. In Sect. 2 we construct the model in detail, starting from the reduced systems which describe the hydrodynamics part and the energy part. In Sect. 3 we state and prove the main result of the paper. Moreover we introduce the control problems and show the existence of optimal controls in various situations. Finally, the appendix contains the detailed proofs of the stability of the advection equation with respect to the velocity of the flow, together with several auxiliary results.

2 The Model

In this section, we illustrate in detail the system for the description of the district heating network, starting from the definition of a network used in the paper.

Definition 2.1

The hydraulic network is a couple \({\mathcal {G}} = \left( {\mathcal {I}}, {\mathcal {J}}\right) \) with the following properties.

  1.

    \({\mathcal {I}} = \left\{ 1, \ldots , {N_{\mathcal {I}}}\right\} \) is the set of the indices enumerating the edges or pipes. For every \(i \in {\mathcal {I}}\), the corresponding edge \(I_i\) is modeled by the one-dimensional interval \([a_i, b_i]\). We denote by \(A^i\) the sectional area of the pipe \(I_i\), i.e. \(A^i = \pi r_i^2\), where \(r_i\) is the constant radius of the i-th tube.

  2.

    \({\mathcal {J}} = \left\{ J_1, \ldots , J_{M}\right\} \) is the set of internal nodes. Each node \(J_j\) is described by the set

    $$\begin{aligned} J_j = \left\{ {j_1}, \ldots , {j_{{M}_j}}\right\} \subseteq {\mathcal {I}}, \end{aligned}$$

    i.e. by the indices of the edges connected to the node \(J_j\). Moreover we denote by \(\textrm{inc}(J_j) \subseteq J_j\) and by \(\textrm{out}(J_j) \subseteq J_j\) respectively the incoming and outgoing edges for the node \(J_j\), according to the edge’s parametrization, i.e. we have \(i\in \textrm{inc}(J_j)\) (resp. \(i\in \textrm{out}(J_j)\)) with \(I_i = [a_i, b_i]\) if the junction \(J_j\) is at \(x = b_i\) (resp. at \(x = a_i\)).

  3.

    \({\mathcal {I}}_P = \{1\} \subseteq {\mathcal {I}}\), i.e. the first pipe \(I_1\) is the only one connected to the power station. We assume that \(a_1\) corresponds to the power station position.

  4.

    \({\mathcal {I}}_H = \left\{ {h_1}, \ldots , {h_{N_H}}\right\} \subseteq {\mathcal {I}}\) consists of all the indices corresponding to the pipes connected to the houses. Therefore, for \(h_k\in {\mathcal {I}}_H\), \(b_{h_k}\) denotes the position of the \(h_k\)-th house, while \(a_{h_k}\) is the position of the (last) internal node before the consumer’s house, i.e. of some junction \(J_k\).

Remark 2.2

Without loss of generality, we assume that, for every junction \(J \in {\mathcal {J}}\), the sets of incoming and outgoing edges are not empty, i.e.

$$\begin{aligned} \textrm{inc}\left( J\right) \ne \emptyset \qquad \text {and} \qquad \textrm{out}\left( J\right) \ne \emptyset . \end{aligned}$$

Note that for every \(J \in {\mathcal {J}}\),

$$\begin{aligned} J = \textrm{inc}(J) \cup \textrm{out}(J) \qquad \text {and} \qquad \textrm{inc}(J) \cap \textrm{out}(J) = \emptyset . \end{aligned}$$

Moreover we underline that the incoming and the outgoing edges are not determined by the sign of the velocity of the fluid inside the pipe, but only by the choice of the parametrization.
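To fix ideas, the following minimal Python snippet encodes a toy instance of Definition 2.1: a single supply pipe from the CHP to one junction, which feeds two consumer pipes. The geometric values are placeholders chosen only for illustration.

```python
import math

# Toy instance of Definition 2.1 (placeholder geometry, for illustration only).
pipes = {
    1: dict(a=0.0, b=100.0, r=0.10),   # supply pipe: a_1 is the position of the CHP
    2: dict(a=0.0, b=30.0,  r=0.05),   # consumer pipe: b_2 is the position of house h_1
    3: dict(a=0.0, b=45.0,  r=0.05),   # consumer pipe: b_3 is the position of house h_2
}
for p in pipes.values():
    p["A"] = math.pi * p["r"] ** 2     # sectional area A^i = pi * r_i^2

# The single internal node J_1 = {1, 2, 3}; incoming/outgoing refer to the parametrization,
# not to the sign of the fluid velocity (cf. Remark 2.2).
junctions = {"J1": dict(edges={1, 2, 3}, inc={1}, out={2, 3})}
I_P = {1}        # the only pipe connected to the power station
I_H = {2, 3}     # pipes connected to the houses

print(junctions["J1"]["inc"], junctions["J1"]["out"], pipes[1]["A"])
```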

As stated in the introduction, in each pipe the starting point is the Euler system (see [29, Chap. 1])

$$\begin{aligned} \left\{ \begin{array}{l} \partial _t \rho + \partial _x \left( \rho v\right) = 0, \\ \partial _t v + v \partial _x v = - \frac{\partial _x p}{\rho }, \\ \partial _t \left( \frac{1}{2}\rho v^2 + \rho e\right) = - \partial _x \left( \rho v \left( \frac{1}{2} v^2 + e + \frac{p}{\rho }\right) \right) , \end{array} \right. \end{aligned}$$
(2.1)

where t is time, \(x \in [a, b]\) is the spatial coordinate along the pipe, \(\rho \) denotes the density of the fluid, v the velocity, p the pressure, and e the energy density. In the case of an incompressible fluid, the macroscopic variable \(\rho \) is constant, so that the first equation in (2.1) simply becomes \(\partial _x v=0\) and (2.1) reduces to

$$\begin{aligned} \left\{ \begin{aligned} \partial _x v&= 0, \\ \partial _t v + \frac{1}{\rho } \partial _x p&= f(v), \\ \partial _t e + v\partial _x e&= g(e), \end{aligned} \right. \end{aligned}$$
(2.2)

where the right hand side \(f = f(v)\) models friction as well as gravitational effects, while the source term \(g = g(e)\) models the energy loss due to imperfect insulation.

Remark 2.3

Common choices for the source terms f and g in (2.2) are

$$\begin{aligned} \begin{aligned} f(v)&= -\frac{\uplambda }{2d}v|v|- \gamma \, \partial _x h, \\ g(e)&= -\frac{4 \kappa }{d}\left( T(e)-T_{\infty }\right) . \end{aligned} \end{aligned}$$
(2.3)

The term \(-\frac{\uplambda }{2d}v|v|\) corresponds to the Darcy-Weisbach friction law of the fluid with the pipe’s walls (d is the pipe’s diameter and \(\uplambda \) is the friction factor). The source term \(- \gamma \, \partial _x h\) represents static pressure due to different elevations at the ends of the tube, where the constant \(\gamma \) is the gravitational acceleration and \(\partial _x h\) is the slope of the pipe.

Finally, the term \(-\frac{4\kappa }{d}\left( T(e)-T_{\infty }\right) \) models the energy loss due to heat flux over the pipe wall, where \(\kappa \) is the heat transmission coefficient of the pipe, T(e) is the fluid temperature depending on its energy density e and \(T_\infty \) is the external temperature.
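As an illustration, the following Python sketch evaluates the source terms (2.3). All numerical values (friction factor, diameter, slope, heat transmission coefficient, external temperature, specific heat) are placeholders, and the linear relation \(T(e)=e/c_p\) is an additional assumption made only for this example.

```python
# Illustrative evaluation of the source terms (2.3); all values are placeholders.
lam   = 0.02       # friction factor lambda (Darcy-Weisbach)
d     = 0.1        # pipe diameter [m]
gamma = 9.81       # gravitational acceleration [m/s^2]
dxh   = 0.01       # slope of the pipe, dh/dx
kappa = 5.0        # heat transmission coefficient of the pipe wall
T_inf = 283.0      # external temperature [K]
c_p   = 4186.0     # specific heat of water, used for the assumed relation T(e) = e / c_p

def f(v):
    """Friction and gravity source term of the momentum equation, cf. (2.3)."""
    return -lam / (2.0 * d) * v * abs(v) - gamma * dxh

def g(e):
    """Heat-loss source term of the energy equation, cf. (2.3), with T(e) = e / c_p assumed."""
    return -4.0 * kappa / d * (e / c_p - T_inf)

print(f(1.0), g(c_p * 363.0))   # flow of 1 m/s, energy corresponding to 363 K
```

Under the assumed linear relation \(T(e)=e/c_p\), the function g is Lipschitz continuous with constant \(G = 4\kappa /(d\,c_p)\), in accordance with assumption (E.2) below.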

In a district heating network, the heating power is distributed through water in a system of pipes. This can be described by considering a copy of (2.2) for each pipe \(I_i\), \(i \in {\mathcal {I}}\), of the network, coupled through suitable conditions at the nodes. The full system can then be divided into three different parts, namely the hydrodynamics part (briefly HYD), the consumer part, and the energy part. We describe them separately.

2.1 The Hydrodynamics Part

The flow in each pipe of the network, except those connected to a house, can be modeled by a copy of the first two equations of the complete incompressible Euler system (2.2), i.e. for every \(i \in {\mathcal {I}} \setminus {\mathcal {I}}_H\), by the system

$$\begin{aligned} \left\{ \begin{array}{ll} \partial _x v^i = 0, &{} \\ \partial _t v^i +\frac{1}{\rho }\partial _x p^i = f(v^i), &{} \end{array} t>0, \, x \in [a_i, b_i]. \right. \end{aligned}$$
(2.4)

Note that, by the incompressibility condition \(\partial _x v^i = 0\), each \(v^i\) is a function of time only; hence, by the second equation, \(\partial _x p^i\) also depends only on time, so that \(p^i\) is affine in x. For every \(i \in {\mathcal {I}}\setminus {\mathcal {I}}_H\), the initial condition for \(v^i\) is given by

$$\begin{aligned} v^i(0) = v^{i,o} \in {\mathbb {R}}. \end{aligned}$$
(2.5)

Moreover, for every junction \(J \in {\mathcal {J}}\) and \(t > 0\), we consider the coupling conditions

$$\begin{aligned} \left\{ \begin{array}{ll} \displaystyle \sum _{I_j\in J} A^jv^j(t) = 0, \\ p^i (t, b_i) = p^j (t, a_j), &{} i \in \textrm{inc}(J),\, j \in \textrm{out}(J). \end{array} \right. \end{aligned}$$
(2.6)

The first equation states the conservation of mass. For constant density the mass flux in pipe j is the product of cross sectional area \(A^j\) and fluid velocity \(v^j\). The pipes are assumed to be filled completely for all times. The remaining equations provide the continuity of the pressure at the junction.

Finally, at the CHP we impose, for \(t > 0\), the boundary condition

$$\begin{aligned} p^1(t, a_1) = p^{1, b}(t). \end{aligned}$$
(2.7)

Note that no initial condition for the pressure on the edges is needed. Indeed, thanks to the particular form of the second equation in (2.4) and to the compatibility condition (2.6), it is sufficient to prescribe the boundary condition (2.7) at the CHP. For later use, we consider the following assumptions:

\(({\textbf {H.1}})\):

\(p^{1, b} \in \mathbf {L^{1}_{\textbf{loc}}} \left( (0, +\infty ); {\mathbb {R}}\right) \) and \(\textrm{TV}\left( p^{1, b}\right) < +\infty \).

\(({\textbf {H.2}})\):

\(f\in \mathbf {C^{1}}\left( {\mathbb {R}};{\mathbb {R}}\right) \) satisfies \(v f(v) \le 0\) for every \(v \in {\mathbb {R}}\).

\(({\textbf {H.3}})\):

The initial data \(v^{i,o} \in {\mathbb {R}}\), \(i \in {\mathcal {I}}\), satisfy \(\displaystyle \sum _{I_j\in J} A^jv^{j,o} = 0\) for every \(J \in {\mathcal {J}}\).

2.2 The Consumers’ Part

For every \({k} \in {\mathcal {I}}_H\), denote by \(J_k\) the junction at position \(a_k\) and let \(Q_k(t)\) be the time dependent power demand of the consumer k. For every \({k} \in {\mathcal {I}}_H\), the velocity \(v^k\) is determined by an ordinary differential equation depending on the power demand \(Q_k\). More precisely, \(v^k\) is governed by the equation

$$\begin{aligned} \partial _t v^k = \frac{1}{\alpha }\left( Q_k(t) - A^k v^k (e_{J_{k}}(t) - e^{k,out})\right) , \end{aligned}$$
(2.8)

where \(\alpha > 0\) is a relaxation parameter, \(e^{k,out}=e(T_{k,out})\) is the energy related to the fixed temperature level \(T_{k,out}\), to which the water inside the pipe is cooled down, and \(e_{J_{k}}\) is the energy at the starting node for consumer k. Note that equation (2.8) relaxes \(v^k\) towards the state where \(Q_k(t) = A^k v^k (e_{J_{k}}(t) - e^{k,out})\), i.e. the velocity at which the delivered power matches the demand.
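As a rough illustration of this relaxation mechanism, the sketch below integrates (2.8) for a single consumer with an explicit Euler scheme, freezing \(Q_k\) and \(e_{J_k}\) at constant values; all numbers are placeholders.

```python
# Explicit Euler integration of (2.8) for one consumer with frozen data (placeholder values).
alpha = 1.0e5      # relaxation parameter
A_k   = 0.005      # sectional area of the consumer pipe [m^2]
Q_k   = 5.0e4      # constant power demand
e_J   = 3.0e7      # energy at the node feeding the consumer (held constant here)
e_out = 2.0e7      # energy at the fixed return temperature level T_{k,out}

v, dt, t_end = 0.0, 0.01, 20.0
for _ in range(int(t_end / dt)):
    v += dt / alpha * (Q_k - A_k * v * (e_J - e_out))

# For frozen data the equilibrium of (2.8) is Q_k / (A_k * (e_J - e_out)),
# i.e. the velocity delivering exactly the requested power.
print(v, Q_k / (A_k * (e_J - e_out)))
```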

We consider the following assumptions for the consumer subsystem:

\(({\textbf {C.1}})\):

There exist two positive constants \(\overline{Q}_{max},{\overline{Q}}_{min} > 0\) such that, for every \(k \in \mathcal I_H\), \(Q_k \in \mathbf {L^{1}_{\textbf{loc}}} \left( (0, +\infty ); \left[ {\overline{Q}}_{min}, {\overline{Q}}_{max}\right] \right) \).

Finally we give the definition of solution for the system (2.4)–(2.8).

Definition 2.4

Fix \(T > 0\) and suppose that the assumptions (H.1), (H.2), (H.3), and (C.1) hold. Assume moreover that for every \(k \in {\mathcal {I}}_H\), the function \(t \mapsto e_{J_{k}}(t)\) is in \(\mathbf {L^1}\left( (0, T); {\mathbb {R}}\right) \).

A couple \(\left( p, v\right) = \left( \left( p^1, \ldots , p^{N_{\mathcal {I}}}\right) , \left( v^1, \ldots , v^{N_{\mathcal {I}}}\right) \right) \) is a solution to (2.4)–(2.8) on the time interval [0, T] if the following conditions are satisfied.

  1.

    For every \(i \in \left\{ 1, \ldots , {N_{\mathcal {I}}}\right\} \), the functions \(p^i\) and \(v^i\) satisfy the following regularity assumptions:

    (a)

      \(p^i \in \mathbf {L^1}\left( (0, T); \mathbf {C^{1}}\left( [a_i, b_i]; {\mathbb {R}}\right) \right) \) and, for a.e. \(t \in [0, T]\), \(\partial _x p^i(t)\) does not depend on x.

    (b)

      \(v^i \in \textbf{AC}\left( [0, T]; {\mathbb {R}}\right) \).

  2.

    For every \(i \in \left\{ 1, \ldots , {N_{\mathcal {I}}}\right\} \), \(v^i(0) = v^{i,o}\) and \(p^i\) satisfies, for all \(x \in I_i\) and for a.e. \(t \in [0, T]\),

    $$\begin{aligned} p^i(t, x) = p^i(t, a_i) + \rho \left( f(v^i(t)) - \dot{v}^i(t) \right) (x - a_i). \end{aligned}$$
  3.

    v is a solution to (2.8) in the sense of Carathéodory [19], i.e. for every \(k \in {\mathcal {I}}_H\) and for every \(t \in [0, T]\),

    $$\begin{aligned} v^{k}(t) = v^{k,o} + \int _0^t \frac{1}{\alpha } \left( Q_{k}(s) - A^k v^{k}(s) \left( e_{J_{k}}(s) - e^{{k}, out}\right) \right) \textrm{d}{s}. \end{aligned}$$
  4.

    The boundary condition \(p^1(t, a_1) = p^{1,b}(t)\) holds for a.e. \(t \in [0, T]\).

  5.

    For every junction \(J \in {\mathcal {J}}\)

    $$\begin{aligned} \left\{ \begin{array}{ll} \displaystyle \sum _{I_k \in J} A^k v^k(t) = 0, &{}\forall t\in [0,T] \\ p^i (t, b_i) = p^j (t, a_j) =:p_J(t), &{} for\ a.e.\ t\in [0,T], \begin{array}{c} i \in \textrm{inc}(J)\\ j \in \textrm{out}(J) \end{array} . \end{array} \right. \end{aligned}$$

2.3 The Energy Part

The energy transport in the network is described by the third equation in (2.2). Thus for every \(i \in {\mathcal {I}}\), we consider the advection equation

$$\begin{aligned} \partial _t e^i + v^i(t) \partial _x e^i = g(e^i), \end{aligned}$$
(2.9)

supplemented with the initial condition

$$\begin{aligned} e^i(0, x) = e^{i,o}(x), \qquad x \in I_i, \end{aligned}$$
(2.10)

and with suitable boundary and coupling conditions. More precisely, the boundary condition applies only at the CHP and, when admissible (i.e. when the characteristics enter the pipe at \(x = a_1\)), it reads

$$\begin{aligned} e^1(t, a_1) = e^{1,b}(t) \end{aligned}$$
(2.11)

for a.e. \(t > 0\). It prescribes the energy produced by the CHP.

The coupling conditions at each internal node \(J \in {\mathcal {J}}\) depend on the solution of a differential equation. Fix an internal node \(J \in {\mathcal {J}}\) and \(t > 0\). Define the (possibly time dependent) sets (see Fig. 1)

$$\begin{aligned}&\textrm{inc}{}^{+, t}(J) = \left\{ i \in \textrm{inc}(J):\, v^i(t)> 0\right\} ,\quad \textrm{inc}{}^{-, t}(J) = \left\{ i \in \textrm{inc}(J):\, v^i(t)< 0\right\} , \nonumber \\&\textrm{out}{}^{+, t}(J) = \left\{ i \in \textrm{out}(J):\, v^i(t) > 0\right\} , \quad \textrm{out}{}^{-, t}(J) = \left\{ i \in \textrm{out}(J):\, v^i(t) < 0\right\} \nonumber \\ \end{aligned}$$
(2.12)
Fig. 1

Scheme of the sets \(\textrm{inc}{}^{\pm ,t}(J)\) and \(\textrm{out}{}^{\pm ,t}(J)\) defined in (2.12). The bold arrows indicate the direction of the fluid flow, the narrow arrows indicate the orientation of the edge

and consider the Cauchy problem

$$\begin{aligned} \left\{ \begin{aligned} \dot{e}_J&= \frac{1}{V_J} \left[ \sum _{I_i\in \textrm{inc}{}^{+, t}(J)} A^i v^i(t) e^i(t, b_i) - \sum _{I_i\in \textrm{out}{}^{-, t}(J)} A^i v^i(t) e^i(t, a_i) \right. \\&\left. \qquad \qquad - e_{J} \sum _{I_i\in \textrm{inc}{}^{-, t}(J) \cup \textrm{out}{}^{+, t}(J)}A^i {\left| v^i(t)\right| } \right] , \\ e_J(0)&= \frac{1}{\sum _{I_i\in \textrm{inc}{}^{-, t}(J) \cup \textrm{out}{}^{+, t}(J)}A^i {\left| v^i(0)\right| }}\\&\qquad \times \left[ \sum _{I_i\in \textrm{inc}{}^{+, t}(J)} A^i v^i(0) e^i(0, b_i) - \sum _{I_i\in \textrm{out}{}^{-, t}(J)} A^i v^i(0) e^i(0, a_i) \right] , \end{aligned} \right. \end{aligned}$$
(2.13)

where \(V_J > 0\) represents the volume inside the junction J. At the edges that are outgoing according to the fluid velocity, we impose that the energy is equal to \(e_J\): for \(t>0\) and for every \(i\in \textrm{inc}{}^{-, t}(J)\) and \(j\in \textrm{out}{}^{+, t}(J)\), we set

$$\begin{aligned} e^i(t, b_i) = e^j(t, a_j) = e_J(t). \end{aligned}$$
(2.14)

Remark 2.5

The coupling conditions (2.13) can be justified in the following way.

Fig. 2

Schematic model of the node J with two incoming and one outgoing pipes, used in the derivation of the coupling condition (2.14)

For \(J \in {\mathcal {J}}\), let \(V_J > 0\) be the volume inside the junction J and denote by \(e_J(t)\) the energy inside the junction J at time t. The variation \(\Delta e_J\) of \(e_J\) during the time step \(\Delta t\) is given by

$$\begin{aligned} \begin{aligned} e_{J}(t) + \Delta e_J&= \frac{1}{V_J} \sum _{I_i\in \textrm{inc}^{+, t} (J)} A^i v^i(t) e^i(t, b_i) \Delta t \\&\quad - \frac{1}{V_J} \sum _{I_i\in \textrm{out}^{-, t} (J)} A^i v^i(t) e^i(t, a_i) \Delta t \\&\quad + \frac{1}{V_J} \left( V_J - \sum _{I_i \in \textrm{out}^{+, t} (J) \cup \textrm{inc}^{-, t}(J)}A^i {\left| v^i(t)\right| } \Delta t\right) e_{J}(t); \end{aligned} \end{aligned}$$

see Fig. 2. Thus, passing to the limit as \(\Delta t \rightarrow 0\), we obtain (2.13).
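A minimal numerical sketch of one step of this balance, for a node with three adjacent pipes as in Fig. 2, is given below. The areas, velocities, energies, junction volume and time step are placeholder values, and the classification of the pipes according to (2.12) is encoded by hand.

```python
# One step of the discrete balance of Remark 2.5 (placeholder data).
# 'role' encodes the sets of (2.12): parametrized towards/away from J, sign of the velocity.
pipes = [
    dict(A=0.01, v=1.2,  e_end=3.2e7, role="inc+"),   # parametrized into J, flow towards J
    dict(A=0.01, v=-0.4, e_end=3.0e7, role="out-"),   # parametrized out of J, flow towards J
    dict(A=0.02, v=0.8,  e_end=None,  role="out+"),   # parametrized out of J, flow away from J
]
# In this example the volumetric fluxes into and out of the node balance: 0.01*1.2 + 0.01*0.4 = 0.02*0.8.
V_J, e_J, dt = 0.05, 3.1e7, 1.0e-3

inflow  = sum(p["A"] * p["v"] * p["e_end"] for p in pipes if p["role"] == "inc+") \
        - sum(p["A"] * p["v"] * p["e_end"] for p in pipes if p["role"] == "out-")
outflow = sum(p["A"] * abs(p["v"]) for p in pipes if p["role"] in ("inc-", "out+"))

# Discrete update; dividing by dt and letting dt -> 0 yields the ODE in (2.13).
e_J_new = (inflow * dt + (V_J - outflow * dt) * e_J) / V_J
print(e_J_new)
```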

Remark 2.6

A reasonable alternative to the coupling condition (2.14) could be the following:

$$\begin{aligned} \left\{ \begin{array}{l} \displaystyle \sum _{i \in \textrm{inc}(J)} A^i v^i(t) e^i(t, b_i) + \sum _{i \in \textrm{out}(J)} A^i v^i(t) e^i(t, a_i) = 0 \\ e^i(t, b_i) = e^j(t, a_j) \qquad i \in \textrm{inc}^{-,t}(J), \, j \in \textrm{out}^{+,t}(J), \end{array} \right. \end{aligned}$$

stating the conservation of energy inside the node and the perfect mixture of energies; see also [3, 32].

The advantage of conditions (2.13)–(2.14) is mainly technical. Indeed, (2.13)–(2.14) allow us to deduce the total variation estimates needed in the proof of the main result.

We consider the following assumptions.

\(({\textbf {E.1}})\):

\(e^{1, b} \in \mathbf {L^{1}_{\textbf{loc}}}\left( (0, +\infty ); {\mathbb {R}}\right) \) and, for every \(i \in {\mathcal {I}}\), \(e^{i,o} \in \mathbf {L^1}\left( [a_i, b_i]; {\mathbb {R}}\right) \) with \(\textrm{TV}\left( e^{i,o}\right) < +\infty \).

\(({\textbf {E.2}})\):

There exists \(G> 0\) such that \(|g(e_1)-g(e_2)|\le G|e_1-e_2|\) for every \(e_1,e_2\in {\mathbb {R}}\).

We give now the definition of solution for the energy system.

Definition 2.7

Fix \(T > 0\) and assume (E.1) and (E.2). Furthermore, assume that for every \(i \in {\mathcal {I}}\), the function \(t \mapsto v^i(t)\) is in \(\mathbf {L^1}\left( (0, T); {\mathbb {R}}\right) \).

A function \(e = ( e^{{\mathcal {I}}},e_{{\mathcal {J}}})=\left( \left( e^1, \ldots , e^{N_{\mathcal {I}}}\right) ,\left( e_{J_1},\dots ,e_{J_{M}}\right) \right) \) is a solution to (2.9), (2.10), (2.11), (2.13), and (2.14) on the time interval [0, T] if the following conditions are satisfied.

  1.

    For every \(i \in \left\{ 1, \ldots , {N_{\mathcal {I}}}\right\} \), the function \(e^i \in \mathbf {C^{0}}\left( [0, T]; \mathbf {L^1}\left( [a_i, b_i]; {\mathbb {R}}\right) \right) \) and, for every \(t \in [0, T]\), \(e^i(t)\) has finite total variation.

  2.

    For every \(J\in {\mathcal {J}}\), the function \(e_{J} \in \mathbf {L^1}\left( (0,T);{\mathbb {R}}\right) \) has finite total variation.

  3.

    \(e^{{\mathcal {I}}}\) is an MV-solution to (2.9), i.e. for every \(i \in \left\{ 1, \ldots , {N_{\mathcal {I}}}\right\} \), for every \(\varphi \in \mathbf {C_c^{1}}\left( (-\infty , T) \times {\mathbb {R}}; {\mathbb {R}}_+\right) \), and for every \(k \in {\mathbb {R}}\),

    $$\begin{aligned}&\int _0^T \int _{a_i}^{b_i} \left( e^i(t, x) -k\right) ^\pm \left( \partial _t \varphi (t, x) + v^i(t) \partial _x \varphi (t, x)\right) \textrm{d}{x} \textrm{d}{t} \\&+ \int _0^T \int _{a_i}^{b_i} \textrm{sgn}^\pm \left( e^i(t, x) -k\right) g(e^i(t,x)) \varphi (t,x) \textrm{d}{x} \textrm{d}{t} \\&+ \int _{a_i}^{b_i} \left( e^{i,o}(x)-k\right) ^\pm \varphi (0, x) \textrm{d}{x} \\&+ {\left\| v^i\right\| }_{\mathbf {L^\infty }([0,T]; {\mathbb {R}})} \int _0^T \left( e_{J,a_i}(t)-k\right) ^\pm \varphi (t,a_i) \textrm{d}{t} \\&+ {\left\| v^i\right\| }_{\mathbf {L^\infty }([0,T]; {\mathbb {R}})} \int _0^T \left( e_{J,b_i}(t)-k\right) ^\pm \varphi (t,b_i) \textrm{d}{t} \ge 0, \end{aligned}$$

    where \(s^+= \max \{s,0\}\), \(s^- = \max \{-s,0\}\),

    $$\begin{aligned} \textrm{sgn}^+ (s) = \&{\left\{ \begin{array}{ll} 1 &{} \text{ if } s>0,\\ 0 &{} \text{ if } s\le 0, \end{array}\right. }&\textrm{sgn}^- (s) = \&{\left\{ \begin{array}{ll} 0 &{} \text{ if } s\ge 0,\\ -1 &{} \text{ if } s< 0, \end{array}\right. } \end{aligned}$$

    \((J,a_i), (J,b_i) \in {\mathcal {J}}\) denote the junctions located respectively at \(x=a_i\) and \(x=b_i\), and \(e_{J,a_i}, e_{J,b_i}\) are the Carathéodory solutions to (2.13).

For the definition of solution used in item 3 we refer to [30, Definition 1]; see also [34, Definition 2.1] for the 1-D case.

2.4 The Complete System

Finally we consider the complete DHN system, composed of the hydrodynamic part and the energy part. It is given by the system of differential equations

$$\begin{aligned} \left\{ \begin{array}{ll} \partial _x v^i = 0, &{} i \in {\mathcal {I}} \setminus {\mathcal {I}}_H, \\ \partial _t v^i +\frac{1}{\rho }\partial _x p^i = f(v^i), &{} i \in {\mathcal {I}} \setminus {\mathcal {I}}_H, \\ \partial _t v^k = \frac{1}{\alpha }\left( Q_k(t) - A^k v^k (e_{J_{k}}(t) - e^{k,out})\right) , &{} k \in {\mathcal {I}}_H, \\ \partial _t e^i + v^i(t) \partial _x e^i = g(e^i), &{} i \in {\mathcal {I}}, \end{array} \right. \end{aligned}$$
(2.15)

for \(i \in {\mathcal {I}}\), with initial conditions

$$\begin{aligned} \left\{ \begin{array}{l} v^i(0) = v^{i,o},\\ e^i(0, x) = e^{i,o}(x), \end{array} \right. \end{aligned}$$
(2.16)

with boundary conditions

$$\begin{aligned} p^1(t, a_1) = p^{1, b}(t) \qquad \text { and }\qquad e^1(t, a_1) = e^{1,b}(t) \end{aligned}$$
(2.17)

and with the coupling conditions (2.6) and (2.14).

We introduce here the concept of solution to the complete system.

Definition 2.8

Given \(T > 0\), a triple \(\left( e, p, v\right) \) is a solution on the time interval [0, T] to (2.15)–(2.16)–(2.17) with the coupling conditions (2.6) and (2.14) if the following conditions are satisfied.

  1.

    \(e = {( e^{{\mathcal {I}}},e_{{\mathcal {J}}})=(\left( e^1, \ldots , e^{N_{\mathcal {I}}}\right) ,\left( e_{J_1},\dots ,e_{J_{M}}\right) )}\), \(p = \left( p^1, \ldots , p^{N_{\mathcal {I}}}\right) \), and \(v = \left( v^1, \ldots , v^{N_{\mathcal {I}}}\right) \).

  2.

    For the given node energies \(e_{\mathcal {J}}\), \((p, v)\) is a solution as defined in Definition 2.4.

  3.

    For the given velocity v, e is a solution as defined in Definition 2.7.

3 Well Posedness Result

In this part we deal with the well posedness result for the district heating network, which is stated in Sect. 3.3. In Sect. 3.1 and in Sect. 3.2 we consider well posedness for the hydrodynamics and energy part respectively, which are the basic steps in the proof of the main result.

3.1 Hydrodynamics Part

Here we consider the system (2.4), with the initial conditions (2.5), coupling conditions (2.6), boundary conditions (2.7), and with the conditions (2.8) at consumers’ sites.

Theorem 3.1

Fix \(T > 0\) and assume that (H.1), (H.2), (H.3), and (C.1) hold. Then, system (2.4)–(2.8) admits a unique solution, in the sense of Definition 2.4.

Moreover, the following stability estimate holds. There exists a positive constant \(L > 0\), depending on the time T, such that, for every two sets of initial conditions \({\bar{v}}^{i,o}\) and \({\tilde{v}}^{i,o}\) (\(i \in {\mathcal {I}}\)) satisfying (H.3), two sets of power demands \({\bar{Q}}_k\), \({\tilde{Q}}_k\) (\(k \in {\mathcal {I}}_H\)) satisfying (C.1), and two sets of node energy functions \({\bar{e}}_{{\mathcal {J}}}\), \({\tilde{e}}_{{\mathcal {J}}}\) in \(\mathbf {L^1}\left( (0, T); {\mathbb {R}}^{M}\right) \), the corresponding solutions \(\left( {\bar{p}}, {\bar{v}}\right) \) and \(\left( {\tilde{p}}, {\tilde{v}}\right) \) satisfy, for a.e. \(t \in [0, T]\),

$$\begin{aligned} \begin{aligned} \sum _{i \in {\mathcal {I}}} \left\Vert {\bar{v}}^i - \tilde{v}^i\right\Vert _{\mathbf {C^{0}}\left( [0, t]\right) }&\le L \sum _{J \in {\mathcal {J}}} \left\Vert {\bar{e}}_{J} - {\tilde{e}}_{J}\right\Vert _{\mathbf {L^1}(0, t)} \\&\quad +L \sum _{k \in {\mathcal {I}}_H} \left\Vert {\bar{Q}}_k - \tilde{Q}_k\right\Vert _{\mathbf {L^1}(0, t)} \!\!+ L \sum _{i \in {\mathcal {I}}} {\left| \bar{v}^{i,o}-{\tilde{v}}^{i,o}\right| }. \end{aligned} \end{aligned}$$
(3.1)

Proof

The proof is very similar to the proof of Theorem 5.3 in [27], but here we use weaker regularity assumptions on the inputs and on the solution. We show that the system can be decomposed into purely algebraic and purely ODE parts, for which the existence of unique solutions follows directly from standard results on Cauchy problems with solutions in the Carathéodory sense [19].

Define the transformation

$$\begin{aligned} v=(A_Q^T \ A_t^T \ A_{PC}^T) \left( \begin{array}{c} v_0\\ v_1\\ v_2 \end{array} \right) = A_Q^T v_0+ A_t^T v_1+ A_{PC}^Tv_2, \end{aligned}$$
(3.2)

where \(A_Q^T \in {\mathbb {R}}^{{N_{\mathcal {I}}}\times {N_H}}\) rearranges the velocities of the consumer edges \(v_0\in {\mathbb {R}}^{{N_H}}\) onto the full velocity vector. The columns of \(A_Q^T\) are the unit vectors corresponding to the consumer edges, so that \(A_QA_Q^T=Id_{{N_H}}\).

The matrix \(A_t \in {\mathbb {R}}^{({M}-1)\times {N_{\mathcal {I}}}}\) maps the edges of a spanning tree of \({\mathcal {G}}\) onto the full edge set and \(A_{PC}\in {\mathbb {R}}^{({N_{\mathcal {I}}}-({M}-1))\times {N_{\mathcal {I}}}}\) is the incidence matrix of a set of fundamental cycles of \({\mathcal {G}}\) corresponding to the spanning tree.

Furthermore we define \(A=(A_r,A_r^p)\) as the incidence matrix of the whole graph, where \(A_r^p\in {\mathbb {R}}^{{N_{\mathcal {I}}}\times 1}\) is the column corresponding to the node of the CHP. \(A_r\) is the remaining matrix for the inner nodes and consumers. A formal definition of these matrices can be found in [27]. For those matrices, we use the following properties:

  (i)

    \(R:=\left( \begin{array}{c} A_t\\ A_{PC} \end{array}\right) \in {\mathbb {R}}^{{N_{\mathcal {I}}}\times {N_{\mathcal {I}}}}\) is nonsingular

  (ii)

    \(A_tA_r \in {\mathbb {R}}^{({M}-1)\times ({M}-1)}\) is nonsingular

  (iii)

    \(A_{PC}A_r = 0\)

  (iv)

    \(A_QA_t^T = 0\) and \(A_QA_{PC}^T = 0\)

A proof of (i)–(iii) can be found in [27]. Property (iv) holds because the rows of \(A_Q\) only involve consumer edges, while these edges are excluded in the definition of \(A_t\) and \(A_{PC}\).
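The following sketch illustrates properties (i)–(iii) numerically on a toy oriented graph with one cycle. The matrices used here (a selection matrix for a spanning tree, a fundamental-cycle matrix and an oriented edge-node incidence matrix) are generic and do not reproduce the exact construction of [27]; in particular, consumer edges and property (iv) are not modeled.

```python
import numpy as np

# Toy oriented graph: 4 nodes (node 0 plays the role of the CHP node), 4 edges, one cycle.
edges = [(0, 1), (1, 2), (1, 3), (2, 3)]          # (tail, head) pairs
n_nodes, n_edges = 4, len(edges)

# Oriented edge-node incidence matrix: -1 at the tail, +1 at the head.
# Its CHP column corresponds to A_r^p, the remaining columns to A_r.
A = np.zeros((n_edges, n_nodes))
for i, (u, w) in enumerate(edges):
    A[i, u], A[i, w] = -1.0, 1.0
A_rp, A_r = A[:, [0]], A[:, 1:]

# Spanning tree {e0, e1, e2}: A_t selects its edges; A_PC encodes the fundamental cycle e1 + e3 - e2.
A_t  = np.eye(n_edges)[[0, 1, 2], :]
A_PC = np.array([[0.0, 1.0, -1.0, 1.0]])

R = np.vstack([A_t, A_PC])
print(np.linalg.matrix_rank(R) == n_edges)        # (i): R is nonsingular
print(abs(np.linalg.det(A_t @ A_r)) > 0.0)        # (ii): A_t A_r is nonsingular
print(np.allclose(A_PC @ A_r, 0.0))               # (iii): cycles are orthogonal to the incidence columns
```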

System (2.4)–(2.8) can be compactly written as

$$\begin{aligned} \begin{aligned} \partial _t v + \frac{1}{\rho } A_rp_{{\mathcal {J}}}&= f(v)-\frac{1}{\rho }A_r^pp^{1,b}\\ A_r^Tv&=0\\ \partial _t (A_Qv)&= \frac{1}{\alpha }\left( Q(t) - A^K (A_Qv) (e_{{\mathcal {J}}}(t) - e^{out})\right) , \end{aligned} \end{aligned}$$
(3.3)

together with the corresponding initial conditions. Here, \(p_{\mathcal {J}}\in \mathbf {L^1}((0,T);{\mathbb {R}}^{M})\) is the vector of node pressures. Due to (2.6) this is well-defined. The last equation is the vectorized version of (2.8). We insert the transformation (3.2) and multiply by \(\left( \begin{array}{c} A_t\\ A_{PC} \end{array}\right) \) from the left. This leads to the equivalent formulation

$$\begin{aligned} \begin{aligned} \partial _t v_0&= \frac{1}{\alpha }\left( Q(t) - A^K v_0 (e_{{\mathcal {J}}}(t) - e^{out})\right) \\ v_1&=- \left( A_r^TA_t^T\right) ^{-1}\left( A_rA_Q^T\right) v_0 \\ \partial _t v_2&= \left( A_{PC}A_{PC}^T\right) ^{-1}A_{PC}\left( \frac{1}{\rho }A_r^pp^{1,b} +f(A_t^Tv_1+A_{PC}^Tv_2)-A_t^T\partial _tv_1\right) \\ p_{\mathcal {J}}&=\rho \left( A_tA_r^T\right) ^{-1} \!\!\! A_t\left( \frac{1}{\rho }A_r^pp^{1,b} \!+\!f(A_t^Tv_1+A_{PC}^Tv_2)-\left( A_t^T\partial _tv_1+A_{PC}^T\partial _t v_2\right) \right) . \end{aligned} \end{aligned}$$
(3.4)

For system (3.4) the requirements of the existence theorem for solutions in the Carathéodory sense are met, as the right hand side of the first equation is measurable in t and Lipschitz continuous in \(v_0\). We obtain a unique solution \(v_0\) and, due to the purely algebraic relation, also a unique \(v_1\). Similarly, the ODE for \(v_2\) has a continuously differentiable right hand side, and we obtain a unique \(v_2\).

Applying the Gronwall inequality provides the bound

$$\begin{aligned} |v(t)| \le v_{max}(T) \end{aligned}$$
(3.5)

after transforming back to the usual coordinates.

The stability estimate (3.1) follows directly from basic ODE theory and the Gronwall Lemma; see e.g. [6, 24]. \(\square \)

Remark 3.2

Similar stability estimates can be deduced for the pressure p. We have, for \(t>0\),

$$\begin{aligned} \begin{aligned}&\left\Vert {\bar{p}}-\tilde{p}\right\Vert _{\mathbf {L^1}\left( [0,t];\mathbf {C^{1}}([a,b];{\mathbb {R}}^{N_{\mathcal {I}}})\right) } \\ {}&\le \sum _{i \in {\mathcal {I}}} \left( \left\Vert \sup _x {\left| {\bar{p}}^i(x)-\tilde{p}^i(x)\right| }\right\Vert _{\mathbf {L^1}(0,t)}+\left\Vert \sup _x {\left| \partial _x \bar{p}^i(x)-\partial _x {\tilde{p}}^i(x)\right| }\right\Vert _{\mathbf {L^1}(0,t)}\right) . \end{aligned} \end{aligned}$$
(3.6)

Using the fact that \(\partial _x p^i\) does not depend on the space variable, we deduce that, for \(t>0\),

$$\begin{aligned} \begin{aligned}&\left\Vert {\bar{p}}-\tilde{p}\right\Vert _{\mathbf {L^1}\left( [0,t];\mathbf {C^{1}}([a,b];{\mathbb {R}}^{N_{\mathcal {I}}})\right) } \\ {}&\le \sum _{i \in {\mathcal {I}}} \left( 1+\frac{1}{b_i-a_i}\right) \left( \left\Vert {\bar{p}}(a_i)-{\tilde{p}}(a_i)\right\Vert _{\mathbf {L^1}(0,t)}+\left\Vert \bar{p}(b_i)-{\tilde{p}}(b_i)\right\Vert _{\mathbf {L^1}(0,t)} \right) \end{aligned} \end{aligned}$$
(3.7)

and thus

$$\begin{aligned} \left\Vert {\bar{p}}-{\tilde{p}}\right\Vert _{\mathbf {L^1}\left( [0,t];\mathbf {C^{1}}([a,b];{\mathbb {R}}^{N_{\mathcal {I}}})\right) } \le C\sum _{J \in {\mathcal {J}}} \left\Vert {\bar{p}}_J-{\tilde{p}}_J\right\Vert _{\mathbf {L^1}([0,t];{\mathbb {R}}^{M})}, \end{aligned}$$
(3.8)

where the individual node pressures \(p_J\) are taken from \(p_{{\mathcal {J}}}\) in (3.4). The form of the last equation in (3.4) immediately gives stability estimates for \(p_{\mathcal {J}}\), due to (H.2) and the boundedness of v. These estimates are similar to the ones for \(\partial _t v_2\). For ease of notation and clarity of the proofs, we omit the estimates for p. Note that only the velocity couples to the energy transport. The triangular structure of (3.4) allows the evaluation of v and its stability estimates independently of the pressure; thus the results for the coupled system do not depend on p.

Remark 3.3

The constant L in Theorem 3.1 depends exponentially on T, since it is obtained through the Gronwall Lemma. More precisely

$$\begin{aligned} L = O(1) e^{O(1) T}, \end{aligned}$$

where the Landau symbol O(1) denotes a suitable constant, which depends on the initial data only through \(\max _{k\in {\mathcal {I}}}\left| v^{k,o}\right| \), but not on T.

3.2 Energy Network

Here we consider system (2.9), supplemented with the initial conditions (2.10), with the coupling conditions (2.14), and with the boundary conditions (2.11).

Theorem 3.4

Fix \(T > 0\) and assume (E.1) and (E.2). Then, system (2.9)–(2.14) admits a unique solution, in the sense of Definition 2.7.

Moreover, there exists a positive constant \(L > 0\), depending on the time T, such that, for every two sets of initial conditions \({\bar{e}}^{i,o}\), \({\tilde{e}}^{i,o}\) (\(i \in {\mathcal {I}}\)) and of boundary data \({\bar{e}}^{1, b}\), \({\tilde{e}}^{1, b}\), both satisfying (E.1), and two sets of velocity functions \({\bar{v}}\), \({\tilde{v}} \in \mathbf {L^1}\left( (0, T); {\mathbb {R}}^{N_{\mathcal {I}}}\right) \), the corresponding solutions \({\bar{e}} = ( {\bar{e}}^{{\mathcal {I}}},{\bar{e}}_{{\mathcal {J}}})\) and \({\tilde{e}} = ( {\tilde{e}}^{{\mathcal {I}}},{\tilde{e}}_{{\mathcal {J}}})\) satisfy, for a.e. \(t\in [0, T]\),

$$\begin{aligned} \begin{aligned}&\, \sum _{J \in {\mathcal {J}}} \left\Vert {\bar{e}}_{J} - \tilde{e}_{J}\right\Vert _{\mathbf {C^{0}}(0, t)} + \sum _{i \in {\mathcal {I}}} \left\Vert \bar{e}^i(t) - {\tilde{e}}^i(t)\right\Vert _{\mathbf {L^1}\left( I_i\right) } \\ \le&\, L \sum _{i \in {\mathcal {I}}}\left( \left\Vert {\bar{v}}^i - \tilde{v}^i\right\Vert _{\mathbf {L^1}(0, t)} + \left\Vert {\bar{e}}^{i,o} - \tilde{e}^{i,o}\right\Vert _{\mathbf {L^1}(I_i)} + \left\Vert {\bar{e}}^{1, b} - {\tilde{e}}^{1, b}\right\Vert _{\mathbf {L^1}\left( 0, t\right) }\right) . \end{aligned} \end{aligned}$$
(3.9)

Before starting the proof, we state a lemma on the stability of the solution on a single edge.

Lemma 3.5

Fix \(a, b \in {\mathbb {R}}\), with \(a < b\), \(v_1, v_2, e^1_L, e^2_L, e^1_R, e^2_R \in \mathbf {L^1}({\mathbb {R}}^+) \cap \mathbf {L^\infty }\left( {\mathbb {R}}^+\right) \) and \(\bar{e}_1,\bar{e}_2\in \mathbf {L^\infty }(a,b)\), all with bounded total variation. Let \(G>0\) be fixed and let \(g :{\mathbb {R}}\rightarrow {\mathbb {R}}\) be a Lipschitz continuous function such that \({\left| g(y_1)-g(y_2)\right| }\le G {\left| y_1-y_2\right| }\) for every \(y_1, y_2 \in {\mathbb {R}}\). Let \(e_1\) and \(e_2\) be the solutions to the following initial-boundary value problems:

$$\begin{aligned} P_1:\, \left\{ \begin{array}{l} \partial _t e_1 + v_1 \partial _x e_1 = g(e_1) \\ e_1(t, a) = e_L^1(t) \\ e_1(t, b) = e_R^1(t) \\ e_1(0, x) = {\bar{e}}_1(x) \\ x \in (a, b),\, t> 0 \end{array} \right.{} & {} \text { and }{} & {} P_2:\, \left\{ \begin{array}{l} \partial _t e_2 + v_2 \partial _x e_2 = g(e_2) \\ e_2(t, a) = e_L^2(t)\\ e_2(t, b) = e_R^2(t) \\ e_2(0, x) = {\bar{e}}_2(x) \\ x \in (a, b),\, t > 0 \ . \end{array} \right. \end{aligned}$$

Then, for a.e. \(t>0\), the following stability estimate holds:

$$\begin{aligned} \begin{aligned} \left\Vert e_1(t) - e_2(t)\right\Vert _{\mathbf {L^1}(a, b)}&\le e^{Gt} K \Big [\left\Vert v_1 - v_2\right\Vert _{\mathbf {L^1}(0, t)} + \left\Vert \bar{e}_1 - \bar{e}_2\right\Vert _{\mathbf {L^1}(a, b)} \\&\quad \quad + \left\Vert e_L^1 - e_L^2\right\Vert _{\mathbf {L^1}(0, t)} + \left\Vert e_R^1 - e_R^2\right\Vert _{\mathbf {L^1}(0, t)}\Big ], \end{aligned} \end{aligned}$$
(3.10)

where the constant K depends on the total variation and on the \(\mathbf {L^\infty }\)-norm of \({\bar{e}}_1\), \({\bar{e}}_2\), \(e_L\), \(e_R\), and on the \(\mathbf {L^\infty }\)-norm of \(v_1\) and \(v_2\).

Furthermore, if \(t\le \frac{b-a}{v_{max}}\) with \({\left| v_1\right| },{\left| v_2\right| }\le v_{max}\), for the fluxes at the boundaries we have

$$\begin{aligned} \left\Vert v_1^-e_1(\cdot ,a)-v_2^-e_2(\cdot ,a)\right\Vert _{\mathbf {L^1}(0,t)}&\le K_{v,a} \left\Vert v_1 - v_2\right\Vert _{\mathbf {L^1}(0, t)}+K_{I,a} \left\Vert \bar{e}_1 - \bar{e}_2\right\Vert _{\mathbf {L^1}(a, b)}\nonumber \\&+K_{L,a} \left\Vert e_L^1 - e_L^2\right\Vert _{\mathbf {L^1}(0, t)}, \end{aligned}$$
(3.11)
$$\begin{aligned} \left\Vert v_1^+e_1(\cdot ,b)-v_2^+e_2(\cdot ,b)\right\Vert _{\mathbf {L^1}(0,t)}&\le K_{v,b} \left\Vert v_1 - v_2\right\Vert _{\mathbf {L^1}(0, t)}+K_{I,b} \left\Vert \bar{e}_1 - \bar{e}_2\right\Vert _{\mathbf {L^1}(a, b)} \nonumber \\&+K_{R,b} \left\Vert e_R^1 - e_R^2\right\Vert _{\mathbf {L^1}(0, t)}, \end{aligned}$$
(3.12)

where \(v^-_i(t) = -\min \{v_i(t),0\}\) and \(v^+_i(t) = \max \{v_i(t),0\}\). Arguing sequentially on consecutive time intervals and using (3.10), the restriction on t can be removed.

The proof of Lemma 3.5 is given in Appendix A.1.
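For the reader's convenience, we also sketch a first-order upwind discretization of the problem \(P_1\) of Lemma 3.5 (the velocity, boundary data, source term and initial datum are placeholder choices). At each time only the inflow boundary datum is used, depending on the sign of the velocity, which mirrors the roles of \(e_L\) and \(e_R\) in the estimates (3.10)–(3.12).

```python
import numpy as np

# Upwind scheme for  d_t e + v(t) d_x e = g(e)  on (a, b), with placeholder data.
a, b, nx = 0.0, 10.0, 200
x  = np.linspace(a, b, nx)
dx = x[1] - x[0]

v   = lambda t: 1.5 * np.cos(0.5 * t)          # velocity, changes sign in time
e_L = lambda t: 3.0 + 0.5 * np.sin(t)          # left boundary datum e_L(t)
e_R = lambda t: 2.5                            # right boundary datum e_R(t)
g   = lambda e: -0.1 * (e - 2.0)               # Lipschitz source term, cf. (E.2)
e   = 2.0 + np.exp(-(x - 5.0) ** 2)            # initial datum

t, t_end, v_max = 0.0, 8.0, 1.5
dt = 0.9 * dx / v_max                          # CFL condition for the upwind scheme
while t < t_end:
    vt = v(t)
    e_new = e.copy()
    if vt >= 0.0:   # information moves to the right: use the left (inflow) boundary datum
        e_new[1:] = e[1:] - vt * dt / dx * (e[1:] - e[:-1]) + dt * g(e[1:])
        e_new[0]  = e_L(t)
    else:           # information moves to the left: use the right (inflow) boundary datum
        e_new[:-1] = e[:-1] - vt * dt / dx * (e[1:] - e[:-1]) + dt * g(e[:-1])
        e_new[-1]  = e_R(t)
    e, t = e_new, t + dt

print(e.min(), e.max())
```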

Proof of Theorem 3.4

Existence of solutions to such transport problems on networks has already been proven, e.g. in [33]. We focus on the stability estimate (3.9), which, to our knowledge, has not been shown before.

The proof consists of two steps.

For both steps, we decompose the network problem into shorter time intervals. These intervals are chosen in such a way that information cannot travel from one node to another, so that the network problem decouples into a sequence of localized problems on segments. To this end we define intermediate times \(0=t_0<t_1<\dots <t_n=t\) with \(\Delta t_i = t_i-t_{i-1}\) such that \(\max _{i=1,\ldots ,n} \Delta t_i \le \frac{\min _{I_k\in J}\left( b_k-a_k\right) }{v_{max}}\), where the constant \(v_{max}\) denotes the maximal possible velocity. Then, in each interval \([t_{i-1},t_i]\), the information starting from one node cannot reach a neighboring one, so that a local consideration of each node with its adjacent edges is sufficient. To ease the notation, we assume here that the topological orientation of the edges points towards the node, so that positive velocities always represent an inflow, whereas negative velocities correspond to outgoing flows (in contrast with the more involved global definitions in (2.12)).

Step 1: Stability of node values \({\bar{e}}_J\) and \({\tilde{e}}_J\).

From classical results about ODEs, we deduce that, for the junction J, the solutions \({\bar{e}}_J\) and \({\tilde{e}}_J\) of (2.13) fulfill

$$\begin{aligned} {\left| {\bar{e}}_{J}(t)- {\tilde{e}}_{J}(t)\right| } \le {\hat{K}} \left\Vert \hat{{\bar{e}}}-\hat{{\tilde{e}}}\right\Vert _{L^1(0,t)}+K_{J,0} {\left| {\bar{e}}_{J}(0)-{\tilde{e}}_{J}(0)\right| }, \end{aligned}$$
(3.13)

where \(\hat{e}=\sum _{I_i\in \textrm{inc}{}(J)} A^i \left( v^i\right) ^+ e^i(\cdot , b_i) + \sum _{I_i\in \textrm{out}{}(J)} A^i \left( v^i\right) ^- e^i(\cdot , a_i)\), and \({\hat{K}}\) and \(K_{J,0}\) are suitable constants, not depending on time.

Using (3.11) and (3.12), we have that

$$\begin{aligned} \left\Vert \hat{{\bar{e}}}-\hat{{\tilde{e}}}\right\Vert _{L^1(0,t)} \le K_J\left\Vert {\bar{e}}_{J}- {\tilde{e}}_{J}\right\Vert _{L^1(0,t)} +K_v\left\Vert {\bar{v}}-{\tilde{v}}\right\Vert _{L^1(0,t)}+ K_0 \left\Vert \bar{e}^{0}- {\tilde{e}}^{0}\right\Vert _{L^1(a,b)}\ . \end{aligned}$$

Inserting it into (3.13) yields

$$\begin{aligned} {\left| {\bar{e}}_{J}(t)-{\tilde{e}}_{J}(t)\right| } \le \&\alpha (t)+\hat{K} K_J\left\Vert {\bar{e}}_{J}-{\tilde{e}}_{J}\right\Vert _{L^1(0,t)} \\ = \&\alpha (t) +\int _{0}^{t}{\hat{K}} K_J{\left| {\bar{e}}_{J}(\tau )- {\tilde{e}}_{J}(\tau )\right| } \textrm{d}{\tau }, \end{aligned}$$

where

$$\begin{aligned} \alpha (t) = \hat{K} \, K_v \left\Vert {\bar{v}}-{\tilde{v}}\right\Vert _{\mathbf {L^1}(0,t)} + \hat{K} \, K_0 \left\Vert \bar{e}^{0}- \tilde{e}^{0}\right\Vert _{\mathbf {L^1}(a,b)} + K_{J,0} {\left| {\bar{e}}_{J}(0)-{\tilde{e}}_{J}(0)\right| }. \end{aligned}$$

Applying Gronwall inequality, we obtain

$$\begin{aligned} {\left| {\bar{e}}_{J}(t)-{\tilde{e}}_{J}(t)\right| } \le \alpha (t) e^{{\hat{K}} \,K_J \, t}, \end{aligned}$$

and, due to the positivity of \(\hat{K}K_J\),

$$\begin{aligned} \left\Vert {\bar{e}}_{J}-{\tilde{e}}_{J}\right\Vert _{\mathbf {C^{0}}(0,t)} \le \&\alpha (t) \, e^{{\hat{K}} \, K_J \,t} \\ \le \&K \left( \left\Vert {\bar{v}}-{\tilde{v}}\right\Vert _{\mathbf {L^1}(0,t)} + \left\Vert \bar{e}^{0}- \tilde{e}^{0}\right\Vert _{\mathbf {L^1}(a,b)} +{\left| {\bar{e}}_{J}(0)-{\tilde{e}}_{J}(0)\right| }\right) , \end{aligned}$$

where \(K = e^{\hat{K}K_JT}\left( \hat{K}K_v+\hat{K}K_0+K_{J,0}\right) \).

Step 2: Stability estimates on the whole network

Here, for suitable \(L_1 > 0\) and \(L_2 > 0\), we aim to prove:

$$\begin{aligned} \begin{aligned}&\sum _{i \in {\mathcal {I}}} \left\Vert {\bar{e}}^i(t,\cdot ) - \tilde{e}^i(t,\cdot )\right\Vert _{\mathbf {L^1}(a_i,b_i)} \\&\quad \le L_1 \sum _{k \in {\mathcal {I}}}\left( \left\Vert {\bar{v}}^k - {\tilde{v}}^k\right\Vert _{\mathbf {L^1}(0, t)}+ \left\Vert {\bar{e}}^{k,0} - \tilde{e}^{k,0}\right\Vert _{\mathbf {L^1}(a_k,b_k)}+ \left\Vert {\bar{e}}^{1, b} - {\tilde{e}}^{1, b}\right\Vert _{\mathbf {L^1}\left( 0, t\right) } \right) \end{aligned} \end{aligned}$$
(3.14)
$$\begin{aligned} \begin{aligned}&\sum _{J \in {\mathcal {J}}} \left\Vert {\bar{e}}_J - \tilde{e}_J\right\Vert _{\mathbf {C^{0}}(0,t)} \\&\quad \le L_2 \sum _{k \in {\mathcal {I}}} \left( \left\Vert {\bar{v}}^k - {\tilde{v}}^k\right\Vert _{\mathbf {L^1}(0, t)}+ \left\Vert \bar{e}^{k,0} - {\tilde{e}}^{k,0}\right\Vert _{\mathbf {L^1}(a_k,b_k)} + \left\Vert {\bar{e}}^{1, b} - {\tilde{e}}^{1, b}\right\Vert _{\mathbf {L^1}\left( 0, t\right) }\right) . \end{aligned} \end{aligned}$$
(3.15)

For each edge \( i \in {\mathcal {I}}\) with \(I_i = [a_i,b_i]\) we can apply Lemma 3.5 and get

$$\begin{aligned}&\sum _{i \in {\mathcal {I}}} \left\Vert {\bar{e}}^i(t,\cdot ) - {\tilde{e}}^i(t,\cdot )\right\Vert _{\mathbf {L^1}(I_i)} \nonumber \\&\quad \le \ \sum _{k \in {\mathcal {I}}} \left[ K^k_v \left\Vert {\bar{v}}^k - \tilde{v}^k\right\Vert _{\mathbf {L^1}(0, t)} + K^k_I \left\Vert {\bar{e}}^{k,0} - {\tilde{e}}^{k,0}\right\Vert _{\mathbf {L^1}(I_k)}\right. \nonumber \\&\quad \left. +K^k_L \left\Vert {\bar{e}}_L^k - \tilde{e}_L^k\right\Vert _{\mathbf {L^1}(0, t)} +K^k_R \left\Vert {\bar{e}}_R^k - \tilde{e}_R^k\right\Vert _{\mathbf {L^1}(0, t)}\right] , \end{aligned}$$
(3.16)

where \(e_L,e_R\) are the left and right boundary values. In case the edge is connected to a node, these are the node energies \(e_J\).

Altogether we have

$$\begin{aligned} \begin{aligned} \sum _{i \in {\mathcal {I}}}&\left\Vert {\bar{e}}^i(t,\cdot ) - \tilde{e}^i(t,\cdot )\right\Vert _{\mathbf {L^1}(I_i)}\\&\le K_v\sum _{k \in {\mathcal {I}}} \left\Vert {\bar{v}}^k - \tilde{v}^k\right\Vert _{\mathbf {L^1}(0, t)} +K_I\sum _{k \in {\mathcal {I}}} \left\Vert {\bar{e}}^{k,0} -{\tilde{e}}^{k,0}\right\Vert _{\mathbf {L^1}(I_k)}\\&\quad +K_J\sum _{J \in {\mathcal {J}}} \left\Vert {\bar{e}}_J - \tilde{e}_J\right\Vert _{\mathbf {L^1}(0, t)} + K_J\left\Vert {\bar{e}}^{1, b} - \tilde{e}^{1, b}\right\Vert _{\mathbf {L^1}\left( 0, t\right) }\\&\le K_v\sum _{k \in {\mathcal {I}}} \left\Vert {\bar{v}}^k - \tilde{v}^k\right\Vert _{\mathbf {L^1}(0, t)} +K_I\sum _{k \in {\mathcal {I}}} \left\Vert {\bar{e}}^{k,0} -{\tilde{e}}^{k,0}\right\Vert _{\mathbf {L^1}(I_k)} \\&\quad +K_Jt\sum _{J \in {\mathcal {J}}} \left\Vert {\bar{e}}_J - \tilde{e}_J\right\Vert _{C^0(0, t)} + K_J\left\Vert {\bar{e}}^{1, b} - \tilde{e}^{1, b}\right\Vert _{\mathbf {L^1}\left( 0, t\right) }, \end{aligned} \end{aligned}$$
(3.17)

with \(K_v = \max _{k \in {\mathcal {I}}} K_v^k\), \(K_I = \max _{k \in {\mathcal {I}}} K_I^k\) and \(K_J = \textrm{deg}({\mathcal {G}})(\max _{k \in {\mathcal {I}}}K_L^k + \max _{k \in {\mathcal {I}}}K_R^k)\), where \(\textrm{deg}({\mathcal {G}})\) is the largest node degree in the network.

Consequently, once (3.15) is established, (3.14) follows directly from (3.17).

For a node \(J\in {\mathcal {J}}\) and a single time interval \([t_{i-1},t_i]\), using the result of step 1, we obtain

$$\begin{aligned}&\left\Vert {\bar{e}}_J - \tilde{e}_J\right\Vert _{\mathbf {C^{0}}([t_{i-1},t_i])} \nonumber \\&\quad \le \ A_J^i\sum _{k \in {\mathcal {I}}} \left\Vert {\bar{v}}^k - \tilde{v}^k\right\Vert _{\mathbf {L^1}(t_{i-1},t_i)} + B_J^i \sum _{k \in {\mathcal {I}}} \left\Vert {\bar{e}}^k(t_{i-1},\cdot ) - {\tilde{e}}^k(t_{i-1},\cdot )\right\Vert _{\mathbf {L^1}(I_k)} \nonumber \\&\qquad + {\left| {\bar{e}}_J(t_{i-1})-\tilde{e_J}(t_{i-1})\right| }, \end{aligned}$$
(3.18)

for suitable constants \(A_J^i\) and \(B_J^i\). Set \(A=\max _{i,J} A_J^i, B = \max _{i,J} B_J^i\). Then, by iterative insertion, we get

$$\begin{aligned}&\sum _{J \in {\mathcal {J}}} \left\Vert {\bar{e}}_J - {\tilde{e}}_J\right\Vert _{\mathbf {C^{0}}([0,t])} = \ \sum _{J \in {\mathcal {J}}} \sup _i \left\Vert {\bar{e}}_J - {\tilde{e}}_J\right\Vert _{\mathbf {C^{0}}([t_{i-1},t_i])} \nonumber \\&\quad \le \sum _{J \in {\mathcal {J}}} \sum _{i=1}^n \left\Vert {\bar{e}}_J - {\tilde{e}}_J\right\Vert _{\mathbf {C^{0}}([t_{i-1},t_i])} \nonumber \\&\quad {\mathop {\le }\limits ^{[(3.18)]}} \sum _{J \in {\mathcal {J}}} \sum _{i=1}^{n} \left( A \sum _{k \in {\mathcal {I}}} \left\Vert {\bar{v}}^k - \tilde{v}^k\right\Vert _{\mathbf {L^1}(t_{i-1},t_i)} \right. \nonumber \\&\quad \quad \quad + B\sum _{k \in {\mathcal {I}}} \left\Vert \left( {\bar{e}}^k - {\tilde{e}}^k\right) (t_{i-1},\cdot )\right\Vert _{\mathbf {L^1}(I_k)} +{\left| \left( {\bar{e}}_J-\tilde{e_J}\right) (t_{i-1})\right| }\Biggr ) \nonumber \\&\quad \le \sum _{J \in {\mathcal {J}}} \left( A\sum _{k \in {\mathcal {I}}} \left\Vert {\bar{v}}^k - \tilde{v}^k\right\Vert _{\mathbf {L^1}(0,t)} \right. \nonumber \\&\quad \quad + B \sum _{i=0}^{n-1} \sum _{k \in {\mathcal {I}}} \left\Vert \left( {\bar{e}}^k - {\tilde{e}}^k\right) (t_{i},\cdot )\right\Vert _{\mathbf {L^1}(I_k)} + \left\Vert e_J - {\tilde{e}}_J\right\Vert _{\mathbf {C^{0}}([0,t_{i-1}])} \Biggr ) \nonumber \\&\quad {\mathop {\le }\limits ^{[(3.17)]}} \sum _{J \in {\mathcal {J}}} \left( A \sum _{k \in {\mathcal {I}}} \left\Vert {\bar{v}}^k - \tilde{v}^k\right\Vert _{\mathbf {L^1}(0,t)} + B \sum _{i=0}^{n-1} \left( K_v \sum _{k \in {\mathcal {I}}} \left\Vert {\bar{v}}^k - \tilde{v}^k\right\Vert _{\mathbf {L^1}(0, t_i)} \right. \right. \nonumber \\&\quad \quad \left. \qquad + K_J \, t \sum _{J \in {\mathcal {J}}} \left\Vert {\bar{e}}_J - \tilde{e}_J\right\Vert _{\mathbf {C^{0}}([0, t_i])} + K_I\sum _{k \in {\mathcal {I}}} \left\Vert \left( {\bar{e}}^k- {\tilde{e}}^k\right) (t_{i-1},\cdot )\right\Vert _{\mathbf {L^1}(I_k)} \right) \nonumber \\&\quad \quad \left. \qquad + \left\Vert {\bar{e}}_J - {\tilde{e}}_J\right\Vert _{\mathbf {C^{0}}([0,t_{i-1}])} \right) \nonumber \\&{\mathop {\le }\limits ^{[(3.10)]}} \sum _{J \in {\mathcal {J}}} \Bigg [ A \sum _{k \in {\mathcal {I}}} \left\Vert {\bar{v}}^k - \tilde{v}^k\right\Vert _{\mathbf {L^1}(0,t)} \nonumber \\&\quad \qquad + (n-1) K_I^{(n-2)}\, B \sum _{i=0}^{n-1} \Big ( K_v \! \sum _{k \in {\mathcal {I}}} \left\Vert {\bar{v}}^k - \tilde{v}^k\right\Vert _{\mathbf {L^1}(0, t_i)} \nonumber \\&\quad \qquad + K_J \, t \sum _{J \in {\mathcal {J}}} \left\Vert {\bar{e}}_J - \tilde{e}_J\right\Vert _{\mathbf {C^{0}}([0, t_i])} + K_I \sum _{k \in {\mathcal {I}}} \left\Vert \left( {\bar{e}}^k - {\tilde{e}}^k\right) (t_{0},\cdot )\right\Vert _{\mathbf {L^1}(I_k)} \nonumber \\&\quad \qquad \qquad + \left\Vert {\bar{e}}_J - {\tilde{e}}_J\right\Vert _{\mathbf {C^{0}}([0,t_{i-1}])}\Big ) \Bigg ] \nonumber \\&\quad \le \quad {M}\, \left( \left( A+(n-1)B \, K_I^{(n-2)}\, K_v\right) \sum _{k \in {\mathcal {I}}} \left\Vert {\bar{v}}^k - \tilde{v}^k\right\Vert _{\mathbf {L^1}(0,t)} \right. \nonumber \\&\quad \quad + (n-1)^2 \, B \, K_I^{(n-1)}\sum _{k \in {\mathcal {I}}} \left\Vert {\bar{e}}^{k,0} - {\tilde{e}}^{k,0}\right\Vert _{\mathbf {L^1}(I_k)} \end{aligned}$$
(3.19)
$$\begin{aligned}&\quad \quad +\left. (n-1) B \, K_I^{(n-2)}(K_J \, t + 1) \sum _{i=1}^{n-1} \sum _{J \in {\mathcal {J}}} \left\Vert {\bar{e}}_J - \tilde{e}_J\right\Vert _{\mathbf {C^{0}}([0, t_i])} \right) . \end{aligned}$$
(3.20)

This is a recursive formula of the form

$$\begin{aligned} a(i) \le c_1 + c_2\sum _{l=1}^{i-1}a(l),\ a(1) = c_1, \end{aligned}$$
(3.21)

with

$$\begin{aligned} c_1&= \ c_1^1\sum _{k \in {\mathcal {I}}} \left\Vert {\bar{v}}^k - \tilde{v}^k\right\Vert _{\mathbf {L^1}(0,t)} + c_1^2\sum _{k \in {\mathcal {I}}} \left\Vert {\bar{e}}^{k,0} - {\tilde{e}}^{k,0}\right\Vert _{\mathbf {L^1}(I_k)}, \\ c_1^1&= \ {M}\left( A+(n-1)B\,K_I^{(n-2)}\,K_v\right) , \qquad c_1^2 = \ {M}(n-1)^2 B \, K_I^{(n-1)},\\ c_2= \ {}&{M}(n-1) B\, K_I^{(n-2)}(K_J\, t+1). \end{aligned}$$

Hence

$$\begin{aligned} a(i) \le c_1\left( 1+c_2\right) ^{(i-1)}. \end{aligned}$$
(3.22)
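Indeed, assuming (3.22) for all indices \(l < i\) and summing the geometric series in (3.21), we get

$$\begin{aligned} a(i) \le c_1 + c_2 \sum _{l=1}^{i-1} c_1\left( 1+c_2\right) ^{(l-1)} = c_1 + c_1\left( \left( 1+c_2\right) ^{(i-1)}-1\right) = c_1\left( 1+c_2\right) ^{(i-1)}. \end{aligned}$$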

So the explicit estimate of (3.20) is

$$\begin{aligned}&\sum _{J \in {\mathcal {J}}} \left\Vert {\bar{e}}_J - {\tilde{e}}_J\right\Vert _{\mathbf {C^{0}}(0,t)} \nonumber \\&\quad \le \ \left( 1+ c_2\right) ^{(n-1)} \left( c_1^1 \sum _{k \in {\mathcal {I}}} \left\Vert {\bar{v}}^k - \tilde{v}^k\right\Vert _{\mathbf {L^1}(0,t)} + c_1^2\sum _{k \in {\mathcal {I}}} \left\Vert {\bar{e}}^{k,0} - {\tilde{e}}^{k,0}\right\Vert _{\mathbf {L^1}(I_k)}\right) \nonumber \\&\quad \le \ L_2\sum _{k \in {\mathcal {I}}}\left( \left\Vert {\bar{v}}^k - \tilde{v}^k\right\Vert _{\mathbf {L^1}(0,t)} + \left\Vert {\bar{e}}^{k,0} - {\tilde{e}}^{k,0}\right\Vert _{\mathbf {L^1}(I_k)}\right) . \end{aligned}$$
(3.23)

The result follows directly from (3.18). \(\square \)

Remark 3.6

The constant L in Theorem 3.4 depends exponentially on T. More precisely

$$\begin{aligned} L = O(1) e^{O(1) T}, \end{aligned}$$

where the Landau symbol O(1) denotes a suitable constant, which depends on the initial data, but not on T. Regarding the initial data, the constants involve only \(\max _{i\in {\mathcal {I}}}\left\| e^{i,o}\right\| _\infty \) and the bound \(\textrm{TV}_e\) on the total variation.

3.3 The Complete System

In this part, we deal with the well posedness result for the complete system.

Theorem 3.7

Assume (H.1), (H.2), (H.3), (E.1), (E.2), and (C.1) hold. Then, for any \(T>0\), the system (2.15)–(2.16)–(2.17)–(2.6)–(2.14) admits a unique solution, in the sense of Definition 2.8.

Moreover, there exists a positive constant \(L > 0\) such that, for any two power demands \({\bar{Q}}_k\) and \({\tilde{Q}}_k\) (\(k \in {\mathcal {I}}_H\)), any two boundary data \({\bar{e}}^{1, b}\), \({\tilde{e}}^{1, b}\) and initial conditions \({\bar{e}}^{i,o}\), \({\tilde{e}}^{i,o}\) satisfying (E.1), and any two initial conditions \({\bar{v}}^{i,o}\), \({\tilde{v}}^{i,o}\) (\(i \in {\mathcal {I}}\)) satisfying (H.3), the following stability estimate holds for the corresponding solutions \(\left( {\bar{e}}, {\bar{p}}, {\bar{v}}\right) \) and \(\left( {\tilde{e}}, {\tilde{p}}, {\tilde{v}}\right) \) to (2.15)–(2.16)–(2.17)–(2.6)–(2.14): for \(t \in [0, T]\),

$$\begin{aligned} \begin{aligned} \sum _{i \in {\mathcal {I}}}&\left[ \left\Vert {\bar{v}}^i - {\tilde{v}}^i\right\Vert _{\mathbf {C^{0}}\left( [0, t]\right) } + \left\Vert {\bar{e}}^i(t) - {\tilde{e}}^i(t)\right\Vert _{\mathbf {L^1}(I_i)} \right] + \sum _{J \in {\mathcal {J}}} \left\Vert {\bar{e}}_{J} - \tilde{e}_{J}\right\Vert _{\mathbf {C^{0}}([0, t])} \\&\le \, L \left( \sum _{k \in {\mathcal {I}}_H} \left\Vert {\bar{Q}}_k - {\tilde{Q}}_k\right\Vert _{\mathbf {L^1}(0, t)} + \left\Vert {\bar{e}}^{1, b} - \tilde{e}^{1, b}\right\Vert _{\mathbf {L^1}\left( 0, t\right) } \right. \\&\qquad \left. + \sum _{i \in {\mathcal {I}}} {\left| \bar{v}^{i,o}-{\tilde{v}}^{i,o}\right| } + \left\Vert {\bar{e}}^{i,o} - \tilde{e}^{i,o}\right\Vert _{\mathbf {L^1}(I_i)}\right) . \end{aligned} \end{aligned}$$
(3.24)

Proof

Define the set

$$\begin{aligned} X_v = \left\{ v \in \mathbf {C^{0}}\left( [0, T]; {\mathbb {R}}^{N_{\mathcal {I}}}\right) : \sum _{j \in J} A^j v^j = 0 \text { for every } J \in {\mathcal {J}} \right\} , \end{aligned}$$
(3.25)

which is a closed subset of the Banach space \(\mathbf {C^{0}}\left( [0, T]; {\mathbb {R}}^{N_{\mathcal {I}}}\right) \) endowed with the norm

$$\begin{aligned} \left\Vert v\right\Vert _{X_{v}} = \sup _{t \in [0, T]} {\left| v(t)\right| }. \end{aligned}$$
(3.26)

Consider the operator

$$\begin{aligned} \begin{array}{rccc} {\mathcal {T}}: &{} X_v &{} \longrightarrow &{} X_v \\ &{} v &{} \longmapsto &{} w \end{array} \end{aligned}$$
(3.27)

where \({\mathcal {T}}\) is defined by two subsequent steps.

For given initial data \(e^0\) and external inputs \(e^b\), we first denote by \({\mathcal {T}}_e\) the operator producing the solution to the energy subsystem according to Theorem 3.4. The solution \(g = {\mathcal {T}}_e \left( v\right) \) is split into \(g = \left( g^{\mathcal {I}},g_{\mathcal {J}}\right) \), where \(g_{\mathcal {J}}\) denotes the node energy functions. Then, with \(v^0\) and \(Q_k\) given, the operator \({\mathcal {T}}_v\) provides the solution to the hydrodynamic part (see Theorem 3.1). We denote its output by \(w = {\mathcal {T}}_v\left( g_{\mathcal {J}}\right) \).

\({\mathcal {T}}\) is well defined. Fix \(v \in X_v\). Then, for every \(i \in \{1, \ldots , {N_{\mathcal {I}}}\}\), \(v^i \in \mathbf {L^1} ([0,T];{\mathbb {R}})\).

By Theorem 3.4, there exists a unique \(g = \left( g^{\mathcal {I}},g_{\mathcal {J}}\right) = {\mathcal {T}}_e \left( v\right) \), such that, in particular, \(g_J \in \mathbf {L^1} ([0,T]; {\mathbb {R}})\) for every \(J \in {\mathcal {J}}\) (see Item 2 in Definition 2.7).

Since the internal nodes attached to the pipes connected to the houses, i.e. to the pipes in the set \({\mathcal {I}}_H\), form a subset of \({\mathcal {J}}\), by Theorem 3.1 there exists a unique \(w = {\mathcal {T}}_v(g_{{\mathcal {J}}})\) such that, in particular, \(w \in \textbf{AC}([0,T];{\mathbb {R}}^{N_{\mathcal {I}}})\) and \(\sum _{h \in J} A^h w^h =0\) for every junction \(J \in {\mathcal {J}}\), that is \(w \in X_v\).

\({\mathcal {T}}\) is a contraction. Let \(T'\le T\). Fix two elements \({\bar{v}}, {\tilde{v}} \in X_v\) and denote

$$\begin{aligned} {\bar{w}} = {\mathcal {T}}\left( {\bar{v}}\right) , \qquad {\tilde{w}} = {\mathcal {T}}\left( {\tilde{v}}\right) . \end{aligned}$$

By Theorem 3.4 we deduce that

$$\begin{aligned} \sum _{J \in {\mathcal {J}}} \left\Vert {\bar{g}}_{J} - {\tilde{g}}_{J}\right\Vert _{\mathbf {C^{0}}\left( [0,T']\right) } \le \ {}&L_E(T) \sum _{i \in {\mathcal {I}}} \left\Vert {\bar{v}}^i - {\tilde{v}}^i\right\Vert _{\mathbf {L^1}(0, T')}\\&\le L_E(T) T' \sum _{i \in {\mathcal {I}}} \left\Vert {\bar{v}}^i - {\tilde{v}}^i\right\Vert _{\mathbf {C^{0}}\left( [0, T']\right) } \\&\le L_E(T) T' {N_{\mathcal {I}}}^2 \left\Vert {\bar{v}} - {\tilde{v}}\right\Vert _{X_v}. \end{aligned}$$

Therefore, using Theorem 3.1, we deduce that

$$\begin{aligned} \left\Vert {\bar{w}} - {\tilde{w}}\right\Vert _{X_v} = \ {}&\sup _{t \in [0, T']} {\left| {\bar{w}}(t) - {\tilde{w}}(t)\right| } \le \sum _{i \in {\mathcal {I}}} \left\Vert {\bar{w}}^i - {\tilde{w}}^i\right\Vert _{\mathbf {C^{0}}\left( [0, T']\right) } \\ \le \ {}&L_H(T) \sum _{J \in {\mathcal {J}}} \left\Vert {\bar{g}}_{J} - {\tilde{g}}_{J}\right\Vert _{\mathbf {L^1}(0, T')} \le L_H(T) T' \sum _{J \in {\mathcal {J}}} \left\Vert {\bar{g}}_{J} - {\tilde{g}}_{J}\right\Vert _{\mathbf {C^{0}}([0, T'])} \\ \le \ {}&L_H(T)L_E(T)\ T'^2 {N_{\mathcal {I}}}^2 \left\Vert {\bar{v}} - {\tilde{v}}\right\Vert _{X_v}. \end{aligned}$$

We choose \(T' >0\) such that \(L_H(T)L_E(T) T'^2 {N_{\mathcal {I}}}^2 < 1\), proving that \({\mathcal {T}}\) is a contraction on \([0,T']\).

Thus the requirements of the Banach fixed point theorem [1] are fulfilled, yielding the existence of a unique solution on \([0,T']\).

Since \(L_H(T)\) and \(L_E(T)\) do not depend on \(T'\), we can repeat this procedure to cover the full interval [0, T]. According to Remark 3.3 and Remark 3.6, these constants depend on the respective initial conditions only through \(v_{max}\), \(e_{max}\) and \(\textrm{TV}_e\).

Due to (3.5) and Lemmas A.3 and A.4, these quantities are globally bounded, so that \(L_H(T)\) and \(L_E(T)\) can be chosen independently of the individual initial conditions. Thus, by repeating the contraction argument finitely many times, we obtain a unique solution on [0, T].
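The structure of the contraction argument can be illustrated by a toy Picard iteration in which two nested integral operators play the roles of \({\mathcal {T}}_e\) and \({\mathcal {T}}_v\); their composition is Lipschitz with a constant proportional to \(T'^2\), hence a contraction for \(T'\) small. This is only a schematic surrogate with placeholder constants, not the actual operator \({\mathcal {T}}\).

```python
import numpy as np

# Discretize [0, T'] and build a cumulative (left-endpoint) integral operator.
T_prime, n = 0.5, 2001
t  = np.linspace(0.0, T_prime, n)
dt = t[1] - t[0]
cumint = lambda u: np.concatenate(([0.0], np.cumsum(u[:-1]) * dt))   # (I u)(t) ~ int_0^t u(s) ds

L_E, L_H = 2.0, 3.0      # placeholder Lipschitz constants of the two sub-operators

def T_op(v):
    """Toy surrogate of T = T_v o T_e: two nested Volterra-type integral operators."""
    g = 1.0 + L_E * cumint(v)        # 'energy' step, depends on the velocity iterate
    return 0.2 + L_H * cumint(g)     # 'hydrodynamic' step, depends on the node energies

v, prev_diff = np.zeros(n), None
for k in range(8):
    w = T_op(v)
    diff = np.max(np.abs(w - v))
    if prev_diff is not None:
        # each ratio stays below the contraction bound L_E * L_H * T_prime**2 / 2 < 1
        print(k, diff, diff / prev_diff)
    v, prev_diff = w, diff
```

In the proof above the corresponding smallness condition is \(L_H(T)L_E(T)\,T'^2 {N_{\mathcal {I}}}^2 < 1\).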

Stability estimate. Let \({\bar{Q}}_k\) and \({\tilde{Q}}_k\) (\(k \in {\mathcal {I}}_H\)) denote two power demands, \({\bar{e}}^{1,b}, {\tilde{e}}^{1,b}\) two boundary conditions, and \({\bar{v}}^{i,o},{\tilde{v}}^{i,o}\), \({\bar{e}}^{i,o}, \tilde{e}^{i,o}\) initial conditions for the velocity and the energy. Denote by \(\left( {\bar{e}}, {\bar{p}}, {\bar{v}}\right) \) and \(\left( {\tilde{e}}, {\tilde{p}}, {\tilde{v}}\right) \), respectively, the corresponding solutions. By Theorem 3.1, we deduce that

$$\begin{aligned} \sum _{i \in {\mathcal {I}}}&\left\Vert {\bar{v}}^i - \tilde{v}^i\right\Vert _{\mathbf {C^{0}}\left( [0, t]\right) } \\&\le L \left( \sum _{J \in {\mathcal {J}}} \left\Vert {\bar{e}}_J - {\tilde{e}}_J\right\Vert _{\mathbf {L^1}(0, t)} + \sum _{k \in \mathcal {I_H}} \left\Vert {\bar{Q}}_k - \tilde{Q}_k\right\Vert _{\mathbf {L^1}(0, t)} + \sum _{i \in {\mathcal {I}}} {\left| \bar{v}^{i,o}-{\tilde{v}}^{i,o}\right| } \right) \\&\le L \left( t \sum _{J \in {\mathcal {J}}} \left\Vert {\bar{e}}_J - \tilde{e}_J\right\Vert _{\mathbf {C^{0}}([0, t])} + \sum _{k \in \mathcal {I_H}} \left\Vert {\bar{Q}}_k - {\tilde{Q}}_k\right\Vert _{\mathbf {L^1}(0, t)} + \sum _{i \in {\mathcal {I}}} {\left| \bar{v}^{i,o}-{\tilde{v}}^{i,o}\right| } \right) . \end{aligned}$$

By Theorem 3.4 we deduce that

$$\begin{aligned} \begin{aligned} \sum _{J \in {\mathcal {J}}}&\left\Vert {\bar{e}}_J - \tilde{e}_J\right\Vert _{\mathbf {C^{0}}([0, t])} + \sum _{i \in {\mathcal {I}}} \left\Vert \bar{e}^i(t) - {\tilde{e}}^i(t)\right\Vert _{\mathbf {L^1}(I_i)} \\ \le \ {}&L \sum _{i \in {\mathcal {I}}} \left( \left\Vert {\bar{v}}^i - {\tilde{v}}^i\right\Vert _{\mathbf {L^1}(0, t)} + \left\Vert {\bar{e}}^{i,o} - \tilde{e}^{i,o}\right\Vert _{\mathbf {L^1}(I_i)}\right) + L\left\Vert {\bar{e}}^{1, b} - {\tilde{e}}^{1, b}\right\Vert _{\mathbf {L^1}\left( 0, t\right) }\\ \le \ {}&L \sum _{i \in {\mathcal {I}}}\left( t \left\Vert {\bar{v}}^i - \tilde{v}^i\right\Vert _{\mathbf {C^{0}}([0, t])} + \left\Vert {\bar{e}}^{i,o} - \tilde{e}^{i,o}\right\Vert _{\mathbf {L^1}(I_i)} \right) + L \left\Vert {\bar{e}}^{1, b} - {\tilde{e}}^{1, b}\right\Vert _{\mathbf {L^1}\left( 0, t\right) } . \end{aligned} \end{aligned}$$

From (3.17), we obtain

$$\begin{aligned} \begin{aligned}&\sum _{i \in {\mathcal {I}}} \left( \left\Vert {\bar{v}}^i - \tilde{v}^i\right\Vert _{\mathbf {C^{0}}\left( [0, t]\right) } + \left\Vert {\bar{e}}^i(t) - \tilde{e}^i(t)\right\Vert _{\mathbf {L^1}(I_i)} \right) +\sum _{J \in {\mathcal {J}}} \left\Vert {\bar{e}}_J - {\tilde{e}}_J\right\Vert _{\mathbf {C^{0}}([0, t])} \\&\le \left( K_vt+1\right) \sum _{k \in {\mathcal {I}}} \left\Vert {\bar{v}}^k - \tilde{v}^k\right\Vert _{\mathbf {C^{0}}\left( [0, t]\right) } +K_I\sum _{k \in {\mathcal {I}}} \left\Vert {\bar{e}}^{k,o} -{\tilde{e}}^{k,o}\right\Vert _{\mathbf {L^1}(I_k)} \\&\quad +\left( K_Jt+1\right) \sum _{J \in {\mathcal {J}}} \left\Vert {\bar{e}}_J - \tilde{e}_J\right\Vert _{\mathbf {C^{0}}([0, t])} + K_J\left\Vert {\bar{e}}^{1, b} - \tilde{e}^{1, b}\right\Vert _{\mathbf {L^1}\left( 0, t\right) }\ . \end{aligned} \end{aligned}$$
(3.28)

In particular, the estimates of Theorems 3.1 and 3.4 hold pointwise, so that

$$\begin{aligned} \begin{aligned}&\sum _{i \in {\mathcal {I}}} {\left| {\bar{v}}^i(t) - {\tilde{v}}^i(t)\right| } +\sum _{J \in {\mathcal {J}}} {\left| {\bar{e}}_J(t) - {\tilde{e}}_J(t)\right| } \\&\le L \left( \sum _{J \in {\mathcal {J}}} \left\Vert {\bar{e}}_J - \tilde{e}_J\right\Vert _{\mathbf {L^1}(0, t)} +\sum _{i \in {\mathcal {I}}} \left\Vert {\bar{v}}^i - {\tilde{v}}^i\right\Vert _{\mathbf {L^1}(0, t)} \right) + L \gamma , \end{aligned} \end{aligned}$$

where

$$\begin{aligned} \begin{aligned} \gamma&= \sum _{k \in \mathcal {I_H}} \left\Vert {\bar{Q}}_k - {\tilde{Q}}_k\right\Vert _{\mathbf {L^1}(0, t)} +\sum _{i \in {\mathcal {I}}} \left( {\left| \bar{v}^{i,o}-{\tilde{v}}^{i,o}\right| } + \left\Vert {\bar{e}}^{i,o} - \tilde{e}^{i,o}\right\Vert _{\mathbf {L^1}(I_i)}\right) \\&\quad + \left\Vert {\bar{e}}^{1, b} - {\tilde{e}}^{1, b}\right\Vert _{\mathbf {L^1}\left( 0, t\right) }. \end{aligned} \end{aligned}$$
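Indeed, denoting by \(\varphi (s)\) the left-hand side of the previous inequality evaluated at time \(s\), the estimate can be rewritten in the integral form

$$\begin{aligned} \varphi (t) \le L \int _0^t \varphi (s) \, \textrm{d}{s} + L \, \gamma . \end{aligned}$$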

We can now apply Gronwall's inequality to obtain

$$\begin{aligned} \sum _{i \in {\mathcal {I}}} {\left| {\bar{v}}^i(t) - {\tilde{v}}^i(t)\right| } +\sum _{J \in {\mathcal {J}}} {\left| {\bar{e}}_J(t) - {\tilde{e}}_J(t)\right| } \le e^{L \, t} \gamma . \end{aligned}$$

The monotonicity of the right-hand side and (3.28) imply

$$\begin{aligned}&\sum _{i \in {\mathcal {I}}} \left( \left\Vert {\bar{v}}^i - \tilde{v}^i\right\Vert _{\mathbf {C^{0}}\left( [0, t]\right) } + \left\Vert {\bar{e}}^i(t) - \tilde{e}^i(t)\right\Vert _{\mathbf {L^1}(I_i)} \right) +\sum _{J \in {\mathcal {J}}} \left\Vert {\bar{e}}_J - {\tilde{e}}_J\right\Vert _{\mathbf {C^{0}}([0, t])} \\&\quad \le \ \left( Kt+1\right) e^{Lt}\gamma +K_I\sum _{k \in {\mathcal {I}}} \left\Vert {\bar{e}}^{k,o} -{\tilde{e}}^{k,o}\right\Vert _{\mathbf {L^1}(I_k)} + K_J\left\Vert {\bar{e}}^{1, b} - \tilde{e}^{1, b}\right\Vert _{\mathbf {L^1}\left( 0, t\right) }\ , \end{aligned}$$

which concludes the proof. \(\square \)

3.4 Optimal Control Problems

In this part, we consider the complete teleheating system (2.15) as a control system, where the boundary term \(e^{1,b}\) at the CHP acts as a control function, and we assume that, given \(T > 0\), the solution exists on the time interval [0, T]. In this perspective, a natural problem consists in finding a control function \(e^{1,b}\), satisfying assumption (E.1), which minimizes the functional

$$\begin{aligned} \begin{aligned} J(e^{1,b})&= \sum _{k \in {\mathcal {I}}_{H}} \alpha _k \int _0^T \left( Q_k(t) - A^k v^k(t) (e_{J_{k_1}}(t) - e^{k,out})\right) ^2 \textrm{d}{t} \\&\quad + \sum _{k \in {\mathcal {I}}_{H}} \beta _k \left( Q_k(T) - A^k v^k(T) (e_{J_{k_1}}(T) - e^{k,out})\right) ^2 \\&\quad + \gamma _1 \int _0^T {\left| e^{1, b}(t)\right| } \textrm{d}{t} + \gamma _2 \textrm{TV}\left( e^{1,b}\right) \\&\quad + \gamma _3 \int _0^T {\left| e^{1, b}(t) v^1(t)\right| } \textrm{d}{t} + \gamma _4 \textrm{TV}\left( e^{1,b} v^1\right) , \end{aligned} \end{aligned}$$
(3.29)

for suitable coefficients \(\alpha _k \ge 0\), \(\beta _k \ge 0\), and \(\gamma _1, \gamma _2, \gamma _3, \gamma _4 \ge 0\). The minimization of the first two terms in (3.29) aims at producing the required temperature in the houses on the whole time interval and at the final time, respectively. The third term in (3.29) measures the total energy produced by the CHP. The minimization of the fourth term penalizes excessive oscillations in the energy production. The fifth and sixth terms have similar meanings with respect to the total power provided at the CHP. The next result deals with the lower semicontinuity of J.
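Before turning to that result, the following sketch illustrates how the functional (3.29) might be evaluated numerically once a discretized solution of (2.15) is available. The function names, the array layout, and the use of a uniform time grid with Riemann sums and a discrete total variation are assumptions made purely for illustration.

```python
import numpy as np

def total_variation(f):
    """Discrete total variation of a sampled function."""
    return float(np.sum(np.abs(np.diff(f))))

def cost_functional(dt, e1b, v1, Q, v, eJ, e_out, A, alpha, beta, gamma):
    """Riemann-sum approximation of the functional J in (3.29) on a uniform grid.

    dt          : time step of the uniform grid on [0, T]
    e1b, v1     : sampled control e^{1,b} and velocity v^1, shape (n,)
    Q, v, eJ    : demands Q_k, velocities v^k, node energies e_{J_{k_1}}, shape (n_houses, n)
    e_out, A    : return energies e^{k,out} and cross sections A^k, shape (n_houses,)
    alpha, beta : nonnegative weights, shape (n_houses,)
    gamma       : the four nonnegative weights (gamma_1, ..., gamma_4)
    """
    # mismatch between demanded and delivered power at each house
    mismatch = Q - A[:, None] * v * (eJ - e_out[:, None])

    J = float(np.sum(alpha * np.sum(mismatch**2, axis=1))) * dt  # tracking term
    J += float(np.sum(beta * mismatch[:, -1]**2))                # final-time term
    J += gamma[0] * float(np.sum(np.abs(e1b))) * dt              # energy produced at the CHP
    J += gamma[1] * total_variation(e1b)                         # oscillations of e^{1,b}
    J += gamma[2] * float(np.sum(np.abs(e1b * v1))) * dt         # power provided at the CHP
    J += gamma[3] * total_variation(e1b * v1)                    # oscillations of the power
    return J
```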

Proposition 3.8

Assume that the hypotheses (H.1), (H.2), (H.3), (C.1), (E.1), and (E.2) hold. Define the sets

$$\begin{aligned} \begin{aligned} E_1&= \mathbf {L^1}\left( (0,T); [0, {\bar{e}}_{\max }]\right) , \\ E_2&= \left\{ e \in \mathbf {L^1}\left( (0,T); [0, {\bar{e}}_{\max }]\right) :\, \textrm{TV}(e) < +\infty \right\} . \end{aligned} \end{aligned}$$
(3.30)

Then the functional \(J: E_2 \rightarrow {\mathbb {R}}\), defined in (3.29), is lower semicontinuous with respect to the \(\mathbf {L^1}\) topology. If \(\gamma _2 = 0\) and \(\gamma _4 = 0\), then \(J: E_1 \rightarrow {\mathbb {R}}\) is also continuous.

Proof

We prove that \(J: E_2 \rightarrow {\mathbb {R}}\) is lower semicontinuous with respect to the \(\mathbf {L^1}\) topology. The final statement can be proved similarly.

Fix \({\bar{e}}^{1,b} \in E_2\) and a sequence \(e_n^{1,b} \in E_2\) such that \(e_n^{1,b} \rightarrow {\bar{e}}^{1,b}\) in \(\mathbf {L^1}\left( 0, T\right) \) as \(n \rightarrow +\infty \). Denote by \(\left( {\bar{e}}, {\bar{p}}, {\bar{v}}\right) \) and \(\left( e_n, p_n, v_n\right) \) the solutions to the system (2.15) corresponding to the boundary data \({\bar{e}}^{1,b}\) and \(e_n^{1,b}\), respectively, according to Definition 2.8. These solutions exist by Theorem 3.7.

Clearly, by Theorem 3.7, we have that

$$\begin{aligned} \lim _{n \rightarrow + \infty } \int _0^T {\left| e^{1,b}_n (t)\right| } \textrm{d}{t} = \int _0^T {\left| {\bar{e}}^{1,b} (t)\right| } \textrm{d}{t} \end{aligned}$$

and

$$\begin{aligned} \lim _{n \rightarrow + \infty } \int _0^T {\left| e^{1,b}_n (t) v^1_n(t)\right| } \textrm{d}{t} = \int _0^T {\left| {\bar{e}}^{1,b} (t) {\bar{v}}^1(t)\right| } \textrm{d}{t}. \end{aligned}$$

Moreover, by [18, Theorem 1, Sect. 5.2.1],

$$\begin{aligned} \liminf _{n \rightarrow + \infty } \textrm{TV}\left( e^{1,b}_n\right) \ge \textrm{TV}\left( {\bar{e}}^{1,b}\right) \end{aligned}$$

and

$$\begin{aligned} \liminf _{n \rightarrow + \infty } \textrm{TV}\left( e^{1,b}_n v^1_n\right) \ge \textrm{TV}\left( {\bar{e}}^{1,b} {\bar{v}}^1\right) . \end{aligned}$$

By Theorem 3.7, we deduce that

$$\begin{aligned} \lim _{n \rightarrow + \infty } \left\Vert v_n - {\bar{v}}\right\Vert _{\mathbf {C^{0}}\left( [0, T]\right) } = 0 \end{aligned}$$

and

$$\begin{aligned} \lim _{n \rightarrow + \infty } \sup _{t \in [0, T]} \left\Vert e_n(t) - {\bar{e}}(t)\right\Vert _{\mathbf {L^1}} = 0. \end{aligned}$$

Therefore, for every \(k \in {\mathcal {I}}_{H}\), we deduce that

$$\begin{aligned} \begin{aligned}&\lim _{n \rightarrow +\infty } \int _0^T \left( Q_k(t) - A^k v^k_n(t) (e_{J_{k_1}}(t) - e^{k,out})\right) ^2 \textrm{d}{t} \\&\quad = \int _0^T \left( Q_k(t) - A^k {\bar{v}}^k(t) (e_{J_{k_1}}(t) - e^{k,out})\right) ^2 \textrm{d}{t} \end{aligned} \end{aligned}$$

and

$$\begin{aligned} \begin{aligned}&\lim _{n \rightarrow + \infty } \left( Q_k(T) - A^k v^k_n(T) (e_{J_{k_1}}(T) - e^{k,out})\right) ^2 \\&\quad = \left( Q_k(T) - A^k {\bar{v}}^k(T) (e_{J_{k_1}}(T) - e^{k,out})\right) ^2. \end{aligned} \end{aligned}$$

This concludes the proof. \(\square \)

The next result deals with optimal solutions.

Corollary 3.9

Assume that (H.1), (H.2), (H.3), (C.1), (E.1), and (E.2) hold. Let \({\mathcal {K}}\) be a subset of \(E_1\) or of \(E_2\), defined in (3.30), which is compact with respect to the \(\mathbf {L^1}\) topology. Then there exists \({\bar{e}}^{1, b} \in {\mathcal {K}}\) such that

$$\begin{aligned} J \left( {\bar{e}}^{1,b}\right) = \min _{e^{1,b} \in {\mathcal {K}}} J\left( e^{1,b}\right) . \end{aligned}$$

Proof

The proof follows the lines of the direct method of the calculus of variations; see [14, 16] for more details. Consider a minimizing sequence \(e^{1, b}_n \in {\mathcal {K}}\) for the functional J, i.e.

$$\begin{aligned} \lim _{n \rightarrow +\infty } J\left( e^{1,b}_n\right) = \inf _{e^{1, b} \in {\mathcal {K}}} J(e^{1,b}). \end{aligned}$$

Since \({\mathcal {K}}\) is compact, there exist \({\bar{e}}^{1,b} \in {\mathcal {K}}\) and a subsequence \(e^{1, b}_{n_k}\) such that \(e^{1, b}_{n_k} \rightarrow {\bar{e}}^{1,b}\) in \(\mathbf {L^1}\) as \(k \rightarrow +\infty \). By using the lower semicontinuity of J (see Proposition 3.8), we deduce that

$$\begin{aligned} \inf _{e^{1, b} \in {\mathcal {K}}} J(e^{1,b}) = \lim _{n \rightarrow +\infty } J\left( e^{1,b}_n\right) = \liminf _{k \rightarrow +\infty } J\left( e^{1,b}_{n_k}\right) \ge J({\bar{e}}^{1,b}), \end{aligned}$$

concluding the proof. \(\square \)

Remark 3.10

Note that the previous result holds if \({\mathcal {K}}\) is a compact subset of \(E_1\) (or of \(E_2\)) with respect to the \(\mathbf {L^1}\)-topology. Therefore Corollary 3.9 cannot be applied to general closed and bounded subsets \({\mathcal {K}}\) of \(E_1\) or \(E_2\). If instead \({\mathcal {K}}\) is a closed and bounded subset of a finite dimensional subspace of \(E_1\) or \(E_2\), then the functional J admits a minimum.
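For example, fixing a partition \(0 = t_0< t_1< \cdots < t_d = T\), one may consider the piecewise constant controls

$$\begin{aligned} {\mathcal {K}}_d = \left\{ \sum _{j=1}^{d} c_j \, \chi _{[t_{j-1}, t_j)} :\, c_j \in [0, {\bar{e}}_{\max }] \ \text {for every } j \right\} , \end{aligned}$$

which form a closed and bounded subset of a \(d\)-dimensional subspace of \(E_2\), hence a compact subset in the \(\mathbf {L^1}\) topology. A set of this kind can also serve as the finite-dimensional approximation \({\mathcal {K}}_d\) mentioned in Remark 3.11 below.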

Remark 3.11

The results in the present section deal only with the existence of optimal controls minimizing the functional J in (3.29). Conversely, no necessary conditions, which represent the main tool in the search for optimizers, are deduced here. From the analytic point of view, this is a challenging problem, since it requires some differentiability properties of the functional J with respect to the topology of the control set. The low regularity of the solutions makes the problem hard.

However, one can try to find approximate optimal controls using numerical schemes inspired, for example, by the gradient descent method [4] as in [31], even if the present regularity does not fully justify it. A possible, non-optimized algorithm for approximating an optimal control could be the following; a code sketch of the gradient steps is given after the list.

  1. Replace the control set \({\mathcal {K}}\) by a finite-dimensional approximation \({\mathcal {K}}_d\).

  2. Fix an initial control \(e_0 \in {\mathcal {K}}_d\).

  3. Find the numerical gradient \(\nabla J(e_0)\) of J at \(e_0\) and construct the control \(e_1 = e_0 - \gamma _0 \nabla J(e_0)\), possibly with a projection on the set \({\mathcal {K}}_d\), where \(\gamma _0 > 0\) is a sufficiently small learning rate.

  4. Recursively, given \(e_n \in {\mathcal {K}}_d\), find the numerical gradient \(\nabla J(e_n)\) of J at \(e_n\) and construct the control \(e_{n+1} = e_n - \gamma _n \nabla J(e_n)\), possibly with a projection on the set \({\mathcal {K}}_d\), where \(\gamma _n > 0\) is a sufficiently small learning rate.
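The following is a minimal sketch of steps 2–4, under several assumptions that are not part of the original algorithm: the control is already discretized as a vector \(e \in {\mathbb {R}}^d\) parametrizing an element of \({\mathcal {K}}_d\); the callable J(e) performs a numerical evaluation of the functional (3.29), for instance by solving (2.15) and calling a routine such as cost_functional above; and the projection onto \({\mathcal {K}}_d\) reduces to clipping onto the box \([0, {\bar{e}}_{\max }]^d\), as for piecewise constant controls.

```python
import numpy as np

def project_box(e, e_max):
    """Projection onto [0, e_max]^d; stands in for the projection onto K_d."""
    return np.clip(e, 0.0, e_max)

def numerical_gradient(J, e, h=1e-6):
    """Central finite-difference approximation of the gradient of J at e."""
    grad = np.zeros_like(e)
    for i in range(e.size):
        ep, em = e.copy(), e.copy()
        ep[i] += h
        em[i] -= h
        grad[i] = (J(ep) - J(em)) / (2.0 * h)
    return grad

def projected_gradient_descent(J, e0, e_max, rates, tol=1e-8):
    """Steps 2-4 of the algorithm: e_{n+1} = P_{K_d}(e_n - gamma_n grad J(e_n))."""
    e = project_box(np.asarray(e0, dtype=float), e_max)
    for gamma_n in rates:                       # gamma_n > 0: sufficiently small learning rates
        e_next = project_box(e - gamma_n * numerical_gradient(J, e), e_max)
        if np.linalg.norm(e_next - e) < tol:    # stop when the update stalls
            return e_next
        e = e_next
    return e
```

Note that each finite-difference gradient requires two evaluations of J, and hence two solves of (2.15), per degree of freedom of \({\mathcal {K}}_d\); a coarse discretization of the control keeps this cost manageable.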