1 Motivation

The subject of this paper is the mono-dimensional, inviscid Burgers’ equation, the simplest model at the entrance to the universe of fluid dynamics systems. From the mechanical viewpoint it describes pure transport of the velocity, modelling the formation of water waves. In the language of the material derivative it reads \(\frac{D}{Dt}u=0\), and reformulating we obtain the well-known equation

$$\begin{aligned} \partial _t u+u \partial _x u=0 \text{ on } D \times [0,T). \end{aligned}$$
(1)

The theory says that, starting from any smooth, compactly supported initial configuration, the solution need not remain smooth; it may develop a jump discontinuity. Thus, waves can create a shock, a phenomenon also called the gradient catastrophe. Passing to the weak formulation allows non-unique solutions, in which either non-physical shocks or physical rarefaction waves appear. To preserve mathematical well-posedness, the concept of entropy solutions has been introduced, which guarantees both uniqueness and decay to zero in time. Shocks are governed by the Rankine–Hugoniot condition, determining the speed of the jump, and the Lax condition, choosing between the continuous and the discontinuous solution. The general rule is that the bigger wave overtakes the smaller one; consequently, after the collision nothing more can be said about the smaller wave.

In this paper we address the following question: is there an approach to Burgers’ equation (1) which admits a certain preservation of the smaller wave after its collision with the bigger one? The answer is positive, but we are required to take the domain D to be a graph.

2 Introduction

The problem of the inviscid Burgers’ equation on networks belongs to the family of conservation laws on networks, which has been developed for about thirty years and still receives considerable interest [5, 12, 23]. The major motivation for studying this topic is traffic modelling, see for instance [8, 15, 17], initiated with the now well-established Lighthill–Whitham model [20]. The natural interpretation of a graph as a transportation network shifted research interest toward the case of non-convex flux, which enforced the application of either wave-front tracking approximations or vanishing viscosity methods [7]. Furthermore, the fixed direction of a lane set a beaten track for specifying conditions at vertices; a review can be found in [12]. In particular, there was no need to specify negative values of solutions at vertices, since such a flow reflects driving against the traffic, which does not take place in general. The different motivation of the research presented in this paper leads to new types of transmission conditions at the vertices of a graph. Furthermore, considering the pure Burgers’ equation, instead of a general conservation law, allows us to use methodology known from the Hamilton–Jacobi equation [10, Sec. 3.3] and consequently to obtain an explicit solution that is a counterpart of the well-known Lax–Oleinik formula.

Let us look at the Burgers’ network problem from a fluid dynamics perspective. We need a suitable language which allows us to analyze arbitrary directions of the flow and to control the total energy of the moving fluid. Imposing these conditions on the edges is a standard approach, but building a theory in which a backflow (a change of the direction of the flow at a vertex) appears is, to the authors’ best knowledge, new and therefore worth stressing.

Reaching for the graph structure can be interpreted either as an extension of the mono-dimensional case or as a non-standard discretization of the state space. The main question that arises is whether the introduction of this structural approach gives hope for alternative techniques for proving blowup and uniqueness criteria. If so, the development of a coherent language for describing fluid-type equations on metric graphs, which is the main subject of this study, allows us to proceed from Burgers’ equation to multi-dimensional systems like the Navier–Stokes or compressible Euler equations. To this end, we begin in this paper by addressing two preliminary questions:

  1.

    What is the appropriate description of the flow at vertices? The natural approach is to look at the change of the energy at redistribution points, namely taking the maximal or minimal change of the energy at vertices. It is formulated in Theorem 1 in Sect. 4 for non-negative flows, and generalized to solutions of different signs in Sect. 4.1. This strategy is essentially different than the transmission conditions for vehicular traffic [8], data networks [9] or T-nodes [21].

  2.

    What is the relation between the pure mono-dimensional case and its network counterpart? It turns out that, choosing the transmission conditions correctly, the network system is in some sense a generalisation of the mono-dimensional one. In particular, we will be able to answer positively the question posed in the motivation (Sect. 1). This issue is discussed further in Sect. 6.

Thus, to state the problem succinctly, this paper aims at constructing general weak solutions to Burgers’ equation on metric graphs, initiated by arbitrary TV initial data (possibly with different signs). The rules of transmission of the flow at vertices, in particular its direction and magnitude, are determined by the optimization of the energy at the vertex. Let us underline that on each edge we have an entropy solution in the sense of the standard mono-dimensional case.

3 Problem Formulation

Let us start with the formalism necessary to describe PDEs on metric graphs, compare also [8, 16].

3.1 Graph Theory Toolbox

Consider \(G=(V,E,\mathcal {L},\Phi )\) a directed, weighted and finite tree with no multiple edges. Namely, let

$$\begin{aligned} V:=\left\{ v_i:\,i\in I\right\} , \text{ for } I=\left\{ 1,\ldots ,n\right\} , \text{ and } E:=\left\{ e_j:\,j\in J\right\} , \text{ for } J=\left\{ 1,\ldots ,m\right\} , \end{aligned}$$

be respectively sets of vertices and edges of a graph; while \(\mathcal {L}:E\rightarrow \mathbb {R}_+\) be a weight (length) function of the edge; \(e_j\mapsto l_j\) for any \(j\in J\).

The structure of the network is defined by the incidence matrix \(\Phi \in M_{n\times m}(\mathbb {R})\), \(\Phi =(\phi _{ij})_{i\in I,j\in J}=\Phi ^+-\Phi ^-\), where \(\Phi ^+=(\phi ^+_{ij})_{i\in I,j\in J}\) and \(\Phi ^-=(\phi ^-_{ij})_{i\in I,j\in J}\) satisfy the conditions

$$\begin{aligned} \phi ^{+}_{ij}=\left\{ \begin{array}{ll}1 &{}\text {if}\quad {\mathop {\rightarrow }\limits ^{e_j}}v_i\\ 0 &{}\text {otherwise},\end{array}\right. \qquad \phi ^{-}_{ij}=\left\{ \begin{array}{ll}1 &{}\text {if}\quad v_i{\mathop {\rightarrow }\limits ^{e_j}}\\ 0 &{}\text {otherwise}.\end{array}\right. \end{aligned}$$

If \(\phi _{ij}\ne 0\), we say that the edge \(e_j\) is incident to \(v_i\). We say that there exists a multiple edge between vertices \(v_i, v_k\in V\) if there exist two edges \(e_p, e_q\in E\) such that, for \(z=p,q\), \(\phi ^+_{kz}=1\) and \(\phi ^-_{iz}=1\). Hence, the lack of multiple edges ensures the uniqueness of the assignment \(e_j=(v_i,v_k)\in E\) for some \(v_i,v_k\in V\). In further considerations we call \(v_i\) the head and \(v_k\) the tail of the edge \(e_j\). The vertex \(v_i\) is a source or a sink if, respectively, \(\phi ^+_{ij}=0\) or \(\phi ^-_{ij}=0\) for all \(j\in J\).
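The incidence convention above can be sketched in a few lines of Python. This is our own minimal illustration (the three-vertex path is a hypothetical example, not taken from the paper): \(\phi ^-_{ij}=1\) when \(e_j\) leaves \(v_i\) (its head), \(\phi ^+_{ij}=1\) when \(e_j\) points into \(v_i\) (its tail).

```python
# Sketch (hypothetical example): incidence matrices Phi^+, Phi^- for the
# directed path v_1 -e_1-> v_2 -e_2-> v_3, 0-based indices.
n, m = 3, 2
edges = [(0, 1), (1, 2)]          # e_j = (head, tail)

phi_plus  = [[0] * m for _ in range(n)]
phi_minus = [[0] * m for _ in range(n)]
for j, (head, tail) in enumerate(edges):
    phi_minus[head][j] = 1        # e_j starts at its head
    phi_plus[tail][j] = 1         # e_j ends at its tail

phi = [[phi_plus[i][j] - phi_minus[i][j] for j in range(m)]
       for i in range(n)]         # Phi = Phi^+ - Phi^-

# v_1 is a source (its row of Phi^+ vanishes), v_3 a sink (row of Phi^-)
print(phi)                        # [[-1, 0], [1, -1], [0, 1]]
```

The assertion-style check that row 1 of \(\Phi ^+\) and row 3 of \(\Phi ^-\) vanish reproduces the source/sink criterion of the text.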

By a path in the graph we understand a finite sequence of edges \(p_i=e_{k_1},\ldots ,e_{k_{N_i}}\) such that for any \(e_{k_j},e_{k_{j+1}}\) there exists a vertex \(v_{k_j}\in V\) such that

$$\begin{aligned}{\mathop {\rightarrow }\limits ^{e_{k_j}}} v_{k_j} {\mathop {\rightarrow }\limits ^{e_{k_{j+1}}}}\quad \text {(equivalently}\,\, \phi _{k_jk_j}^+=1=\phi ^-_{k_jk_{j+1}}\text {)}\end{aligned}$$

for \(j=1,\ldots ,N_i-1\). This means the path has the following form

$$\begin{aligned} v_{k_0} {\mathop {\rightarrow }\limits ^{e_{k_1}}} v_{k_1} {\mathop {\rightarrow }\limits ^{e_{k_2}}} v_{k_2} {\mathop {\rightarrow }\limits ^{e_{k_3}}} ... {\mathop {\rightarrow }\limits ^{e_{k_{N_i-1}}}} v_{k_{N_i-1}} {\mathop {\rightarrow }\limits ^{e_{k_{N_i}}}} v_{k_{N_i}}. \end{aligned}$$

By the length \(N_i\) of a path \(p_i\) we understand the number of edges on the path, while by its weighted length \(L_i\) the sum of the weights of all edges on the path.

We say that a graph is connected if there exists at least one path between every two vertices. A closed path, namely \(v_{k_0}=v_{k_{N_i}}\), is a cycle and the graph is called acyclic if it has no cycles. Finally, we say that a graph is a directed tree if it is connected and has no cycles.

In the following considerations we refer to special examples of trees obtained as restrictions of infinite graphs. We say that \(G'=(V',E', \mathcal {L}',\Phi ')\) is a subgraph of a graph \(G=(V,E,\mathcal {L},\Phi )\) if it satisfies the conditions

$$\begin{aligned} V'\subseteq V,\quad E'=E|_{V'\times V'}, \quad \mathcal {L}'=\mathcal {L}|_{E'},\quad \Phi '=\Phi |_{I'\times J'}, \end{aligned}$$

where \(I'=\left\{ i\in I:\,\, v_i\in V'\right\} \) and \(J'=\left\{ j\in J:\,\,e_j\in E'\right\} \).

Definition 1

Consider \(G=(V,E,\mathcal {L},\Phi )\) and \(v_i\in V\). We say that \(G_i=(V_i,E_i,\mathcal {L}_i,\Phi _i)\) is a \(v_i\)-subgraph of G if

$$\begin{aligned} V_i:=\left\{ v_j\in V:\,\,j\in J_i\right\} ,\qquad \text {and}\quad J_i:={\left\{ j\in J:\,\, \phi _{ij}\ne 0\right\} }. \end{aligned}$$
(2)

Definition 2

A path graph \(P_m\) is any connected subgraph of 1D Cartesian grid \(P=(V_P,E_P,\mathcal {L}_P,\Phi _P)\)

$$\begin{aligned} V_P=\left\{ v_i:\,i\in \mathbb {Z}\right\} ,\quad E_P=\left\{ e_j:\,j\in \mathbb {Z}\right\} , \quad \mathcal {L}_P\equiv 1,\quad \Phi _P=(\phi _{ij})_{i,j\in \mathbb {Z}},\,\,\phi _{ij}=\left\{ \begin{array}{ll} 1&{} \text {for}\,\,i=j \\ -1&{}\text {for}\,\,i=j-1\\ 0&{}\text {otherwise} \end{array}\right. \end{aligned}$$

having \(\underline{m}\) edges.

By the honeycomb tree \(H_m\) we understand any connected subgraph of directed hexagonal lattice \(H=(V_H,E_H,\mathcal {L}_H)\)

$$\begin{aligned}&V_H=\left\{ v_{(p+q,-q,p)},v_{(p+q+1,-q,p)}:\, p,q\in \mathbb {Z}\right\} ,\qquad \mathcal {L}_H=1,\\&E_H=\left\{ (v_{(p+q,-q,p)},v_{(p+q+1,-q,p)}),(v_{(p+q,-q,p)},v_{(p+q,-q+1,p)}), (v_{(p+q+1,-q,p)},v_{(p+q+1,-q,p+1)}):\, p,q\in \mathbb {Z}\right\} \end{aligned}$$

having \(\underline{m}\) edges. In further considerations we refer to \(v_{(p+q,-q,p)}\) as vertex of the first kind while to \(v_{(p+q+1,-q,p)}\) as the vertex of the second kind, see Fig. 1.

Note that any vertex of the hexagonal lattice H is described by a triple of the type \((p+q,-q,p)\) with two parameters \(p,q\), which corresponds to the three directions on the honeycomb.

Define now the in- and out-degree of a vertex \(v_i\) as the number of edges having, respectively, a tail or a head at \(v_i\), namely

$$\begin{aligned} \text {deg}_{+}(v_i)=\sum _{j\in J}\phi ^+_{ij},\qquad \text {deg}_{-}(v_i)=\sum _{j\in J}\phi ^-_{ij}, \qquad \text {and}\qquad \text {deg}(v_i)=\text {deg}_{+}(v_i)+\text {deg}_{-}(v_i). \end{aligned}$$

Then using the notation from Definition 2, we have for \(p,q\in \mathbb {Z}\)

$$\begin{aligned}&\text {deg}_{+}(v_{(p+q,-q,p)})=1,\qquad \text {deg}_{-}(v_{(p+q,-q,p)})=2;\\&\text {deg}_{+}(v_{(p+q+1,-q,p)})=2,\qquad \text {deg}_{-}(v_{(p+q+1,-q,p)})=1. \end{aligned}$$

Considering the restriction of H to a subgraph we also obtain additional types of vertices v, namely sources (\(\text {deg}_{+}(v)=0\), \(\text {deg}_{-}(v)\in \left\{ 1,2\right\} \)), sinks (\(\text {deg}_{+}(v)\in \left\{ 1,2\right\} \), \(\text {deg}_{-}(v)=0\)) and vertices of a path graph (\(\text {deg}_{-}(v)=\text {deg}_{+}(v)=1\)).
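The degree formulas above amount to row sums of \(\Phi ^+\) and \(\Phi ^-\). A hedged sketch (the single 1-in-2-out vertex below is a hypothetical configuration, chosen to match a vertex of the first kind):

```python
# Sketch: degrees of a hypothetical vertex v_0 with one incoming edge e_0
# and two outgoing edges e_1, e_2 — the shape of a first-kind honeycomb
# vertex. Rows of Phi^+ and Phi^- are summed as in the displayed formulas.
phi_plus_row  = [1, 0, 0]         # phi^+_{0j}: e_0 points into v_0
phi_minus_row = [0, 1, 1]         # phi^-_{0j}: e_1, e_2 leave v_0

deg_in  = sum(phi_plus_row)       # deg_+(v_0) = 1
deg_out = sum(phi_minus_row)      # deg_-(v_0) = 2
deg     = deg_in + deg_out        # deg(v_0)   = 3
print(deg_in, deg_out, deg)       # 1 2 3
```

These values agree with the degrees of a vertex of the first kind listed above.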

Furthermore, we introduce a direction of a vertex \(v_i\in V\) as an ordered pair of sets \(D_i=({D}^{in}_i,{D}^{out}_i)\), \(D^{in}_i,D^{out}_i\subset E\) such that

$$\begin{aligned} D^{in}_i:=\left\{ e_j\in E:\,\, {\mathop {\rightarrow }\limits ^{e_j}}v_i\right\} , \qquad \text {and}\qquad D^{out}_i:=\left\{ e_j\in E:\,\, v_i {\mathop {\rightarrow }\limits ^{e_j}}\right\} . \end{aligned}$$
(3)

Since directed trees G have no loops, \(D_i^{in}\cap D_i^{out}=\emptyset \) for any \(i\in I\). If we change the vertex \(v_i\) into \(v_i'\) in such a way that the parameterization of all edges incident to \(v_i\) is reversed, we say that \(v_i\) and \(v_i'\) have opposite directions. In the case of honeycomb trees the directions of vertices of the first and second kind are, for \(p,q\in \mathbb {Z}\), the following

$$\begin{aligned}&D_{(p+q,-q,p)}=\left( D^{in}_{(p+q,-q,p)},D^{out}_{(p+q,-q,p)}\right) \qquad D^{in}_{(p+q,-q,p)}=\left\{ (v_{(p+q,-q,p-1)},v_{(p+q,-q,p)})\right\} ,\\&D^{out}_{(p+q,-q,p)}= \left\{ (v_{(p+q,-q,p)},v_{(p+q+1,-q,p)}), (v_{(p+q,-q,p)},v_{(p+q,-q+1,p)})\right\} ,\\&D_{(p+q+1,-q,p)}=\left( D^{in}_{(p+q+1,-q,p)},D^{out}_{(p+q+1,-q,p)}\right) \qquad D^{out}_{(p+q+1,-q,p)}=\left\{ (v_{(p+q+1,-q,p)},v_{(p+q+1,-q,p+1)})\right\} \\&D^{in}_{(p+q+1,-q,p)}=\left\{ (v_{(p+q,-q-1,p)},v_{(p+q+1,-q,p)}), (v_{(p+q-1,-q,p)},v_{(p+q+1,-q,p)})\right\} . \end{aligned}$$

The above distinction is crucial to the considerations in Sect. 5.1.

Fig. 1

Two kinds of vertices in honeycomb trees \(H_{15}\). (i) \(v_{(0,0,0)}\) is of the first kind; (ii) \(v_{(1,0,0)}\) is of the second kind. Vertices’ directions are denoted symbolically in red. In (iii), a metric honeycomb tree \(\mathcal {H}_3\) introduced in Example 3

Finally, let us recall that for any tree it is possible to re-enumerate the edges in such a way that for any two edges \(e_{s}, e_{j}\in E\) and any chosen path \(e_{s}=e_{k_1},\ldots ,e_{k_N}=e_{j}\) we have \(k_i<k_{i+1}\) for all \(i\in 1,\ldots ,N-1\). Additionally, in the following considerations we choose the enumeration of edges so that all sources are associated with the first few edges, namely sources are heads of the edges \(e_{i}\), \(i=1,\ldots , s\). We call such a numeration an increasing order of edges and note that two trees with an increasing order of edges are homomorphic.

3.2 Introduction of Metric Graphs

To introduce a metric space into consideration we associate each edge of a graph with a compact interval in the following way: for \(d:E\rightarrow \mathcal {B}(\mathbb {R})\) let \(d(e_j)=[0,l_j]\), where \(\mathcal {B}(\mathbb {R})\) is the Borel algebra on \(\mathbb {R}\). We say that \(\mathcal {G}=(G,d)\) is a directed metric graph. In what follows we always consider the parametrisation of an edge that agrees with the direction of the edge. By an abuse of notation we denote a metric edge \(d(e_j)\) simply by \(e_j\), and the vertices at the endpoints of the edge \(e_j=(v_i,v_k)\) by \(e_j(0):=v_i\) and \(e_j(l_j):=v_k\). Further, when considering a function \(f_j\) defined on the metric edge \(d(e_j)=[0, l_j]\), we shall occasionally write \(f_j(v_i):=f_j(s)\) if \(e_j(s)=v_i\) for \(s=0,l_j\). By a function defined on the metric graph we understand a vector-valued function \(f:[0,1]\rightarrow \mathbb {R}^m\) such that \(f(x)=(f_j(l_jx))_{j\in J}\), where \(f_j:[0,l_j]\rightarrow \mathbb {R}\) is defined on the edge \(e_j\).
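The rescaling convention \(f(x)=(f_j(l_jx))_{j\in J}\) can be sketched in a few lines; the edge lengths and edge functions below are illustrative assumptions, not data from the paper.

```python
import math

# Sketch of the convention f(x) = (f_j(l_j * x))_{j in J}: each edge
# function f_j lives on [0, l_j], while f is evaluated on the common
# reference interval [0, 1]. Lengths and functions are hypothetical.
lengths = [1.0, 2.0]                       # l_1, l_2
edge_funcs = [lambda s: s,                 # f_1 on [0, l_1]
              lambda s: math.sin(s)]       # f_2 on [0, l_2]

def f(x):
    """Vector-valued function on the metric graph, x in [0, 1]."""
    return [f_j(l_j * x) for f_j, l_j in zip(edge_funcs, lengths)]

# x = 1 hits the endpoint e_j(l_j) of every edge simultaneously
print(f(1.0))   # [1.0, math.sin(2.0)]
```

Note that \(x=0\) and \(x=1\) reach all heads, respectively all tails, at once, which is what makes the common reference interval convenient for vertex conditions.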

The main idea of this paper is to find a function defined on the metric graph that satisfies both the weak formulation of Burgers’ equation on the edges and certain transmission conditions at the vertices. Based on general knowledge of the mono-dimensional case, it is obvious that the direction of the flow can disagree with the parameterization of an edge. Although this does not cause a difficulty on the edge, it complicates the transmission conditions. To properly define the conditions at vertices we extend the classical notion of the weighted adjacency matrix of a line graph, \(\mathcal {B}=(b_{ij})_{i,j\in J}\), which in the standard setting reads

$$\begin{aligned} b_{jk}\ne 0 \quad \text {if}\quad \exists _{v_i}\,\,{\mathop {\rightarrow }\limits ^{e_k}}v_i{\mathop {\rightarrow }\limits ^{e_j}}\qquad \text {and}\qquad b_{jk}=0\quad \text {otherwise}. \end{aligned}$$
(4)

Consider the following operators \(\mathcal {B}^{pq}=(b_{jk}^{pq})_{j,k\in J}\), for \(p,q\in \{0,1\}\) such that

$$\begin{aligned} b_{jk}^{01}&\ge 0 \quad \text {if}\quad \exists _{v_i}\,\,{\mathop {\rightarrow }\limits ^{e_k}}v_i{\mathop {\rightarrow }\limits ^{e_j}}\qquad \text {and}\qquad b_{jk}^{01}=0\quad \text {otherwise}; \end{aligned}$$
(5a)
$$\begin{aligned} b_{jk}^{00}&\ge 0 \quad \text {if}\quad \exists _{v_i}\,\,{\mathop {\leftarrow }\limits ^{e_k}}v_i{\mathop {\rightarrow }\limits ^{e_j}}\qquad \text {and}\qquad b_{jk}^{00}=0\quad \text {otherwise}; \end{aligned}$$
(5b)
$$\begin{aligned} b_{jk}^{10}&\ge 0 \quad \text {if}\quad \exists _{v_i}\,\,{\mathop {\leftarrow }\limits ^{e_k}}v_i{\mathop {\leftarrow }\limits ^{e_j}}\qquad \text {and}\qquad b_{jk}^{10}=0\quad \text {otherwise}; \end{aligned}$$
(5c)
$$\begin{aligned} b_{jk}^{11}&\ge 0 \quad \text {if}\quad \exists _{v_i}\,\,{\mathop {\rightarrow }\limits ^{e_k}}v_i{\mathop {\leftarrow }\limits ^{e_j}}\qquad \text {and}\qquad b_{jk}^{11}=0\quad \text {otherwise}. \end{aligned}$$
(5d)

Note that the new approach to the adjacency matrix definition given in (5), unlike the classical one (4), allows for the lack of flow between two edges even though they are physically connected. Obviously \(\mathcal {B}^{01}=\left( \mathcal {B}^{10}\right) ^T\), but we distinguish these cases due to their different meanings in the sense of the flow. Note that if \(b_{jk}^{01}>0\), then using the notation from (5a) and (3), \(e_k\in D^{in}_i\) and \(e_j\in D^{out}_i\). On the other hand, for \(b_{jk}^{10}>0\), \(e_j\in D^{in}_i\) and \(e_k\in D^{out}_i\). Consequently, in the first case the direction of the vertex \(v_i\) is opposite to the direction of the vertex in the second case.

If we replace the arbitrary nonzero coefficients in the matrices \(\mathcal {B}\), \(\mathcal {B}^{pq}\) with 1, we arrive at their unweighted counterparts; we call them adjacency matrices of a line graph and denote them by \(\mathcal {\overline{B}}\), \(\mathcal {\overline{B}}^{pq}\).

Due to the change in the definition of the adjacency matrices, it is possible to find a path in the metric graph in which there is no possibility of flow from one edge, say \(e_k\), to another, \(e_j\), because the coefficients \(b_{jk}^{pq}\), \(p,q\in \left\{ 0,1\right\} \), \(j,k\in J\), vanish. Therefore, in the whole paper we distinguish the definition of a path in the graph G from that in its metric counterpart \(\mathcal {G}\). By a path in the metric graph we understand a finite sequence of edges \(p_i=e_{k_1},\ldots ,e_{k_{N_i}}\) such that for any \(e_{k_j},e_{k_{j+1}}\) there exists a pair \((p,q)\), \(p,q\in \left\{ 0,1\right\} \), such that \(b_{k_{j+1}k_j}^{pq}\ne 0\). The notions of path length \(N_i\) and weighted path length \(L_i\) remain unchanged.
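The support of \(\mathcal {B}^{01}\) prescribed by (5a) can be computed mechanically from \(\Phi ^{\pm }\): the entry \((j,k)\) may be nonzero iff some vertex \(v_i\) has \(e_k\) pointing in (\(\phi ^+_{ik}=1\)) and \(e_j\) pointing out (\(\phi ^-_{ij}=1\)). A sketch on a hypothetical 1-in-2-out tree (our own example):

```python
# Sketch: support of B^{01} from (5a) for the tree
# v_0 -e_0-> v_1, then v_1 -e_1-> v_2 and v_1 -e_2-> v_3.
phi_plus  = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]]
phi_minus = [[1, 0, 0], [0, 1, 1], [0, 0, 0], [0, 0, 0]]
n, m = 4, 3

# (j, k) may carry flow iff e_k enters and e_j leaves a common vertex v_i
support_b01 = [[any(phi_plus[i][k] and phi_minus[i][j] for i in range(n))
                for k in range(m)] for j in range(m)]

# Flow may pass from e_0 into e_1 or e_2 (through v_1), nowhere else
print(support_b01)
# [[False, False, False], [True, False, False], [True, False, False]]
```

Whether an admissible entry is actually zero (no flow between physically connected edges) is exactly the freedom that distinguishes (5) from the classical definition (4).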

3.3 Burgers’ Equation on the Network

Let us define Burgers’ equation on the metric graph \(\mathcal {G}\), compare also with Eq. (1),

$$\begin{aligned} \partial _t u+u \partial _x u=0 \text{ on } \mathcal {G} \times [0,T). \end{aligned}$$

Namely, let \(u=(u_j(l_j\cdot ))_{j\in J}\) be the function defined on the metric graph \(\mathcal {G}\) which satisfies

$$\begin{aligned} \partial _t u_j(x,t)+u_j(x,t)\partial _x u_j(x,t)&=0,&x\in [0,l_j],\, t\in [0,T), \end{aligned}$$
(6a)
$$\begin{aligned} u_j(x,0)&=\mathring{u}_j(x),&x\in [0,l_j], \end{aligned}$$
(6b)

for every coordinate \(j\in J\). Now let us derive the transmission conditions that, on the one hand, incorporate the network structure into the formulation and, on the other hand, allow for a flow that agrees with the physical motivation.

Let us start with a formulation of the transfers that comes from a generalisation of the vertex conditions for network transport, see [16, Sec. 3a]. Consider operators \(u \mapsto \mathcal {B}_z(u)\in M_{m}(\mathbb {R})\), \(z=0,1\), and for almost all \(t\in [0,T)\) assume that

$$\begin{aligned} \mathcal {B}_0(u)u(0,t)+\mathcal {B}_1(u) u(1,t)=0, \qquad \text {with}\quad \mathcal {B}_z=\mathcal {B}^{z0}+\mathcal {B}^{z1},\quad \text {defined in (5).} \end{aligned}$$
(7)

Obviously such a general formulation has to be specified for a number of reasons. Even in the linear case, when \(\mathcal {B}_0, \mathcal {B}_1\) are independent of u, the uniqueness of the solution to (7) strictly depends on their rank. Furthermore, there is no clear relation with a graph structure because, again, for arbitrary operators \(\mathcal {B}_0,\mathcal {B}_1\in M_{m}(\mathbb {R})\) it is not always possible to build the graph, let alone the directed tree that is the object of these considerations. For details see [1].

Let us draw attention to one property that is important in further considerations. If the direction of the flow disagrees with the parametrization, a cyclic flow along the edges may appear even though the graph is a directed tree.

Example 1

Consider a graph \(G=(V,E,\mathcal {L},\Phi )\) such that

$$\begin{aligned} V=\left\{ v_i:\,i=1,2,3\right\} ,\quad E=\left\{ e_j:\,j=1,2,3\right\} ,\quad \mathcal {L}\equiv 2\pi \quad \text {and}\quad \Phi =\left[ \begin{array}{ccc} -1&{}0&{}-1\\ 1&{}-1&{}0\\ 0&{}1&{}1\end{array}\right] , \end{aligned}$$
(8)

presented also in Fig. 2. Problem (6)–(7) such that

$$\begin{aligned} \mathcal {B}_0=\left[ \begin{array}{ccc} 1&{}0&{}1\\ 0&{}-1&{}0\\ 0&{}0&{}0\end{array}\right] ,\quad \mathcal {B}_1=\left[ \begin{array}{ccc} 0&{}0&{}0\\ 1&{}0&{}0\\ 0&{}1&{}1\end{array}\right] ,\quad \text {and}\quad \mathring{u}_1,\mathring{u}_2>0,\quad \mathring{u}_3<0. \end{aligned}$$

is equivalent, locally in time, to Burgers’ equation on the circle of radius \(r=3\).
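Condition (7) for the matrices of Example 1 can be checked numerically. The rows encode \(u_1(0,t)=-u_3(0,t)\), \(u_2(0,t)=u_1(1,t)\) and \(u_3(1,t)=-u_2(1,t)\), i.e. \(e_1\), \(e_2\) and the reversed \(e_3\) glue into one circle; the endpoint values below are illustrative samples, not a solution of (6).

```python
# Numerical check of B_0 u(0,t) + B_1 u(1,t) = 0 for Example 1.
B0 = [[1, 0, 1], [0, -1, 0], [0, 0, 0]]
B1 = [[0, 0, 0], [1, 0, 0], [0, 1, 1]]

u0 = [0.5, 0.7, -0.5]   # (u_1(0,t), u_2(0,t), u_3(0,t)), sample values
u1 = [0.7, 0.2, -0.2]   # (u_1(1,t), u_2(1,t), u_3(1,t)), sample values

residual = [sum(B0[i][j] * u0[j] + B1[i][j] * u1[j] for j in range(3))
            for i in range(3)]
print(residual)          # [0.0, 0.0, 0.0]
```

Note how the sign pattern (\(u_1(0)=-u_3(0)\)) reflects the reversed parameterization of \(e_3\), consistent with \(\mathring{u}_3<0\).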

Fig. 2

The illustration depicts two network structures: the first is the graph G defined in (8) and considered in Example 1, while the second is the metric path graph \(\mathcal {P}_2\) introduced in Example 2

In Example 1 the cyclic structure appeared due to the disturbance of the flow in vertices \(v_1\) and \(v_3\). Note that the direction of the vertex \(v_1\) is \(D_1=(D_1^{in},D^{out}_1)=\left( \emptyset ,\left\{ e_1,e_2\right\} \right) \) while the mass flows from the edge \(e_3\) into \(e_1\). A similar problem appears in \(v_3\). In the further considerations we allow the flow to go both in line with the vertex direction and in the opposite direction. We ensure, however, that there is no exchange of mass between edges in the set \(D_i^{in}\) (nor in \(D_i^{out}\)) for any \(i\in I\), namely

$$\begin{aligned} \mathcal {B}^{00}=\mathcal {B}^{11}=0. \end{aligned}$$
(9)

Let us now fix a vertex \(v_i\), \(i\in I\), and a moment \(t\in [0,T)\), and consider two cases. If the flow at t agrees with the direction of the vertex, then the transmission conditions in vertex \(v_i\) read

$$\begin{aligned} u_j(0,t)=\sum _{\left\{ s\in J:\,e_s\in D_i^{in}\right\} }\,b^{01}_{js}(u)u_s(1,t),\qquad \text {for}\,\,j\in J\,\,\text {such that}\,\,\phi _{ij}^-\ne 0, \end{aligned}$$
(10)

where \((b^{01}_{js})_{j,s\in J}\) is the adjacency matrix defined in (5a). Similarly, for the flow opposite to the vertex direction we have

$$\begin{aligned} u_j(1,t)=\sum _{\left\{ s\in J:\,e_s\in D_i^{out}\right\} }\,b^{10}_{js}(u)u_s(0,t),\qquad \text {for}\,\,j\in J\,\,\text {such that}\,\,\phi _{ij}^+\ne 0, \end{aligned}$$
(11)

where \((b^{10}_{js})_{j,s\in J}\) is the adjacency matrix defined in (5c). In particular, we note that for the considered problem the matrices \(\mathcal {B}^{00}\) and \(\mathcal {B}^{11}\), defined respectively in (5b) and (5d), vanish.

Definition 3

We say that system (6)–(10)–(11) is the strong formulation of Burgers’ equation on the metric tree \(\mathcal {G}\).

The above definition is formal; the relation \(\mathcal {B}(u)\) is still not given. In order to move from the strong to the weak formulation we introduce a set of smooth functions over \(\mathcal {G}\), namely functions smooth on the edges which agree on germs given at each vertex \(v_i\), with the neighbourhood oriented in line with the direction \(D_i\). Below we give a weaker definition, which always allows for integration by parts.

Definition 4

We say that \(f=(f_j(l_j\cdot ))_{j\in J}\) defined on the metric graph \(\mathcal {G}\) is smooth on \(\mathcal {G}\), and we write \(f\in C^\infty (\mathcal {G})\), if the following conditions hold

  1. (i)

    \(f_j(l_j\,\cdot )\in C^\infty [0,l_j]\)    for any \(e_j \in E\);

  2. (ii)

    for any \(v_i\in V\), and any \(k \in \mathbb {N}\)

    $$\begin{aligned} \partial ^{(k)}f_{j}(l_j)=\partial ^{(k)}f_{s}(0) \text{ for } \text{ all } e_j \in D^{in}_i \text{ and } e_s \in D^{out}_{i}. \end{aligned}$$

Consider now a function \(\phi :[0,1]\times [0,\infty )\rightarrow \mathbb {R}^m\), \(\phi (\cdot ,t)\in C^\infty (\mathcal {G})\). In what follows the product of two vector functions is understood in the sense of the Hadamard product, namely \(fg=(f_jg_j)_{j\in J}\). Now define the integral over the metric graph \(\mathcal {G}\) as the sum of the integrals over all edges of the graph, namely, for any integrable function \(f=(f_j)_{j\in J}\) defined on \(\mathcal {G}\),

$$\begin{aligned} \int _{\mathcal {G}}f(x)dx=\sum _{j\in J}\int _0^{l_j}f_j(x)dx. \end{aligned}$$
(12)

The weak solution u should satisfy the condition

$$\begin{aligned} \int _0^T \int _{\mathcal {G}} \left( u \partial _t \phi + \frac{u^2}{2} \partial _x \phi \right) dxdt = \int _{\mathcal {G}} \mathring{u}(x)\phi (x,0)dx, \end{aligned}$$
(13)

for every test function \(\phi \) as above. Let us focus on the definition of the integral over \(\mathcal {G}\). To pass from (13) to the strong form of the equation we move the x derivative back onto u, namely, we consider

$$\begin{aligned} \int _{\mathcal {G}} \frac{u^2 \partial _x\phi }{2} dx= & {} \sum _{j\in J} \left( \left. \frac{ u_j^2\phi _j}{2}\right| _{x=l_j}-\left. \frac{ u_j^2\phi _j}{2}\right| _{x=0} -\int _{[0,l_j]}u_j\partial _x u_j\phi _j\right) \\= & {} -\int _{\mathcal {G}} u\partial _x u\phi +\sum _{i\in I}\left( \sum _{\left\{ j\in J:\, e_j\in D_i^{in}\right\} }\left. \frac{ u_j^2\phi _j}{2}\right| _{x=l_j}-\sum _{\left\{ j\in J:\, e_j\in D_i^{out}\right\} }\left. \frac{u_j^2\phi _j}{2}\right| _{x=0}\right) . \end{aligned}$$

So to eliminate the boundary terms at each vertex \(v_i\), using Definition 4(ii) for \(k=0\), we require that

$$\begin{aligned} \sum _{\left\{ j\in J:\, e_j\in D_i^{in}\right\} }\frac{u_j^2(l_j,t)}{2}=\sum _{\left\{ j\in J:\, e_j\in D_i^{out}\right\} }\frac{u_j^2(0,t)}{2}\qquad \text {for almost all}\,\, t\in [0,\infty ). \end{aligned}$$
(14)

Equation (14), known as the Kirchhoff condition, is one of the most classical transmission conditions considered on metric graphs, see [22, Sec. 2.2.1]. It describes the conservation of flux at each vertex of a network.
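A quick numerical illustration of the Kirchhoff condition (14) at a vertex with one incoming edge \(e_1\) and two outgoing edges \(e_2, e_3\); the endpoint values are hypothetical, chosen so that the fluxes balance.

```python
import math

# Sketch of (14): the incoming flux u_1^2/2 at x = l_1 must equal the
# sum of outgoing fluxes u_j^2/2 at x = 0. Values are hypothetical.
u_in  = [1.0]                                 # u_1(l_1, t)
u_out = [0.8, 0.6]                            # u_2(0, t), u_3(0, t)

flux_in  = sum(u * u / 2 for u in u_in)       # 0.5
flux_out = sum(u * u / 2 for u in u_out)      # 0.32 + 0.18 = 0.5

assert math.isclose(flux_in, flux_out)        # Kirchhoff condition holds
```

Note that (14) constrains only the squares, so the split \((0.8,0.6)\) is one of a whole circle of admissible redistributions; this non-uniqueness is exactly what the energy optimization of later sections resolves.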

Definition 5

We say that system (13)–(10)–(11) is the weak formulation of Burgers’ equation on the metric tree \(\mathcal {G}\), if weighted adjacency matrices of a line graph \(\mathcal {B}^{pq}\), \(p,q=0,1\), satisfy conditions (9), (14). The class of solutions to the problem in weak formulation we denote by \(B(\mathcal {G})\).

The hyperbolic character of Burgers’ equation requires determining the behaviour at vertices in order to obtain the transmission conditions for incoming characteristics, i.e. the coefficients of the matrices \(\mathcal {B}^{01}\) and \(\mathcal {B}^{10}\). In our setting we are obliged to take into account two restrictions. The first one is the Kirchhoff condition (14), while the second is the requirement that the dynamics on the graph \(\mathcal {G}\) be acyclic, namely (9). Note that the determination of a solution, even under the above restrictions, is not unique. To make our equation on \(\mathcal {G}\) well posed, we need to impose further conditions. The general case is rather complex, so in this paper we concentrate on two examples: the equation with non-negative velocities, and general velocities on the honeycomb tree, see Definition 2. In the latter case, the geometry of vertices is simple enough to consider all possible flow variations at vertices. It also gives some intuition for the more general case.

The article is organised as follows. Section 4 concentrates on the non-negative case. The coefficients of \(\mathcal {B}^{01}\) are related to the change of the energy of the solution, see Sect. 4.1, while the existence result in Theorem 2, our first main result, is derived using methodology known from the Hamilton–Jacobi equation. In Sect. 5 general velocities on honeycomb trees are considered. The generalisation of the energy methods applied to the vertices of the first and second kind, see Definition 2, with an arbitrary direction of the flow at a vertex, can be found in Sect. 5.1, while the existence result, the second main result, is stated as Theorem 3 in Sect. 5.3. Finally, in Sect. 6 we return to the motivating example of wave interference.

4 Non-negative Entropy Solutions

In this section the analysis is restricted to flow directions that agree with the parameterization of the edges. Consequently, we look for weak solutions such that for \(\mathring{u}>0\) the solution remains in the non-negative cone, \(u\ge 0\). Considerations in Sect. 4.1 relate the coefficients of \(\mathcal {B}^{01}(u)\) to certain properties of the solution u, while in Sect. 4.2 we derive the existence theorem for the problem of the form

$$\begin{aligned} \sum _{j\in J}\int _0^T \int _{0}^{l_j} \left( u_j \partial _t \phi _j + \frac{u_j^2 \partial _x\phi _j}{2}\right) dxdt&= \sum _{j\in J}\int _{0}^{l_j} \mathring{u}_j(x)\phi _j(x,0)dx, \end{aligned}$$
(15a)
$$\begin{aligned} u_j(x,0)&=\mathring{u}_j(x)>0, \qquad x\in [0,l_j],\,j\in J, \end{aligned}$$
(15b)
$$\begin{aligned} u_j(0,t)&=\sum _{\left\{ s\in J:\,e_s\in D_i^{in}\right\} }\,b^{01}_{js}(u) u_s(1,t),\qquad \text {for}\,\,\phi _{ij}^-\ne 0, \end{aligned}$$
(15c)
$$\begin{aligned} \sum _{\left\{ j\in J:\, e_j\in D_i^{in}\right\} } u_j^2(l_j,t)&=\sum _{\left\{ j\in J:\, e_j\in D_i^{out}\right\} } u_j^2(0,t),\qquad \text {for almost all}\, t\in [0,T]. \end{aligned}$$
(15d)

Before we go through the details let us formalise the notion of a non-negative solution.

Definition 6

We say that function u is a non-negative weak solution of network Burgers’ equation (15) if

  1. (i)

    \(t\mapsto u_j(\cdot ,t)\in L^{\infty }([0,l_j],\mathbb {R})\) is continuous almost everywhere on [0, T), for \(T>0\);

  2. (ii)

    for every \(\phi (\cdot ,t) \in C^{\infty }(\mathcal {G})\) u satisfies (15a),

  3. (iii)

    \(u\ge 0\) for every \(\mathring{u}\in L^{\infty }([0,1],\mathbb {R}_+^m)\),

  4. (iv)

    u satisfies transmission conditions (15c)–(15d).

4.1 Derivation of Transmission Conditions

The aim of this part is to understand how to derive the coefficients of the matrix \(\mathcal {B}^{01}(u)\) in (15c); hence, throughout Sect. 4.1, when referring to the network Burgers’ equation we consider the problem

$$\begin{aligned} \hbox {(15a)}\,\text {--}\,\hbox {(15b)}\,\text {--}\,\hbox {(15d)}. \end{aligned}$$
(16)

We learn from the mono-dimensional case that to obtain uniqueness of weak solutions one needs to specify the shock wave by the Rankine–Hugoniot condition and to exclude non-physical shocks by, for instance, the Lax condition. Namely, let \(\xi :[0,T)\rightarrow \mathbb {R}_+\) be a smooth curve describing the discontinuity of a scalar weak solution u, and let \(u(\xi ^{-}(t),t)\), \(u(\xi ^{+}(t),t)\) denote the left and right limits of u as x goes to \(\xi (t)\). Then

$$\begin{aligned} \frac{d}{dt}\xi (t)=\frac{u(\xi ^-(t),t)+u(\xi ^+(t),t)}{2}, \qquad \text {and}\qquad u(\xi ^-(t),t)>u(\xi ^+(t),t). \end{aligned}$$
(17)
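The two conditions in (17) are standard for the 1-D Burgers’ equation and are easy to state in code; the following sketch (our own illustration) classifies the wave produced by Riemann data \(u_l, u_r\) and returns the Rankine–Hugoniot speed when a shock is admissible.

```python
# Sketch (standard 1-D facts, cf. (17)): for Riemann data u_l, u_r the
# Rankine-Hugoniot speed of a shock is (u_l + u_r) / 2, and the Lax
# condition u_l > u_r decides between a shock and a rarefaction wave.
def riemann_wave(u_left, u_right):
    if u_left > u_right:                      # Lax-admissible shock
        return ("shock", (u_left + u_right) / 2)
    return ("rarefaction", None)              # continuous fan instead

print(riemann_wave(2.0, 1.0))   # ('shock', 1.5)
print(riemann_wave(1.0, 2.0))   # ('rarefaction', None)
```

The first call illustrates the "bigger wave overtakes the smaller one" rule from the motivation: the shock travels with the average of the two states.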

Definition 7

We say that a function \(u_j: [0,l_j]\times [0,T)\rightarrow \mathbb {R}\) is an entropy solution of the scalar Burgers’ equation on the edge \(e_j\), \(j=1,\ldots ,m\), if it is a weak solution to the scalar Burgers’ equation on the edge \(e_j\) which satisfies both the Rankine–Hugoniot and the Lax conditions at each discontinuity.

Furthermore, \(u=(u_j)_{j\in J}\) is an edge-entropy solution if it is an entropy solution on each edge.

Let us also recall that in the mono-dimensional case Oleinik’s one-sided inequality

$$\begin{aligned} u(x_2,t)-u(x_1,t)\le \frac{x_2-x_1}{t},\qquad \text {for}\,\,x_1\le x_2,\quad t>0. \end{aligned}$$
(18)

implies that u is an entropy solution.
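A quick numerical illustration (ours): the rarefaction \(u(x,t)=x/t\) saturates (18) with equality, while an upward jump, i.e. a non-physical shock, violates it.

```python
# Oleinik's one-sided inequality (18): u(x2,t) - u(x1,t) <= (x2 - x1)/t.

def oleinik_gap(u, x1, x2, t):
    # Non-negative exactly when (18) holds at the pair (x1, x2).
    return (x2 - x1) / t - (u(x2, t) - u(x1, t))

rarefaction = lambda x, t: x / t                     # entropy solution
upward_jump = lambda x, t: 0.0 if x < 0.25 else 1.0  # non-physical shock

assert abs(oleinik_gap(rarefaction, 0.1, 0.4, 0.5)) < 1e-12  # equality in (18)
assert oleinik_gap(upward_jump, 0.2, 0.3, 0.5) < 0           # (18) fails
```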

We concentrate on vertices now. Note first that the Kirchhoff condition in a vertex \(v_i\) which is a source or a sink assures the unique representation of the solution \(u_j(v_i,t)=0\), for \(e_j\in D^{out}_i\) or \(e_j\in D^{in}_i\) respectively, since there is no flow through these vertices. In the case of other vertices we may encounter ambiguity. Consequently, imposing only conditions (17) on the non-negative weak solution to (16) still does not guarantee uniqueness. Let us dwell on this statement for a moment. In order to define the fraction of mass that flows through the vertex \(v_i\) at some fixed time t, let us transform the classical notion of a Riemann solver into its transmission-in-the-vertex counterpart. Denote by \(u_j(v_i,t^\mp )\) the value of the solution (in a head or a tail of an edge, respectively for \(\phi _{ij}^-\ne 0\) and \(\phi _{ij}^+\ne 0\)), before the flow through the vertex for \(t^-\) and after the flow for \(t^+\).

Definition 8

Let \(\mathcal {G}=((V,E,\mathcal {L},\phi ), d)\) be a metric graph and fix \(v_i\in V\). We say that a mapping

$$\begin{aligned} {TS}_i: [0,\infty )^{\text {deg}(v_i)} \rightarrow [0,\infty )^{\text {deg}(v_i)},\qquad u(x,t^-)|_{J_i} \mapsto u(x,t^+)|_{J_i}, \end{aligned}$$

where \(J_i\) is defined in (2), is a transmission solver in vertex \(v_i\in V\), if it satisfies conditions (15d) for almost all \(t\in [0,T)\).

The first peculiarity implied by assuming only the Kirchhoff conditions in vertices is the lack of condition that joins values of solution before and after the flow through the vertex, namely at \(t^-\) and \(t^+\).

Example 2

Let \(P_2\) be a path graph, see Definition 2, and consider a Riemann problem on metric path graph \(\mathcal {P}_2\), presented in Fig. 2, of the form

$$\begin{aligned} \begin{array}{rcll} \displaystyle \sum _{j=1}^2\int _0^T \int _{0}^1 \left( u_j \partial _t \phi _j + \frac{u_j^2 \partial _x\phi _j}{2}\right) dxdt&{}=&{} \displaystyle \sum _{j=1}^2\int _{0}^{1} \mathring{u}_j(x)\phi _j(x,0)dx,&{}\\ \mathring{u}_1(x)=0,\qquad \mathring{u}_2(x)&{}=&{}1,&{}x\in [0,1],\\ u_1(0,t)=u_2(1,t)=0,\qquad u_1^2(1,t)&{}=&{}u_2^2(0,t), &{}t\ge 0.\end{array} \end{aligned}$$
(19)

The transmission solver \(TS_2\) does not have to be unique at vertex \(v_2=e_1(1)=e_2(0)\) in some neighbourhood of \(t=0\). Note that for any parameter \(a\in [0,\infty )\), the function u defined below is a non-negative, edge-entropy solution for \(t\in [0,\epsilon )\).

1. Let \(a\in [0,1)\), then

    $$\begin{aligned} u_1(x,t)= & {} \left\{ \begin{array}{ll} 0&{}\text {for }\,\, x\ne 1,\\ a&{}\text {for }\,\, x= 1,\end{array}\right. \\ u_2(x,t)= & {} \left\{ \begin{array}{ll} a&{}\text {for }\,\, \frac{x}{t}\le a,\\ \frac{x}{t}&{}\text {for }\,\, a< \frac{x}{t}\le 1,\\ 1&{}\text {for }\,\, \frac{x}{t}>1.\end{array}\right. \end{aligned}$$
2. Let \(a\in [1,\infty )\), then

    $$\begin{aligned} u_1(x,t)= & {} \left\{ \begin{array}{ll} 0&{}\text {for }\,\, x\ne 1,\\ a&{}\text {for }\,\, x= 1,\end{array}\right. \\ u_2(x,t)= & {} \left\{ \begin{array}{ll} a&{}\text {for }\,\, \frac{x}{t}<\frac{a+1}{2},\\ 1&{}\text {for }\,\, \frac{x}{t}>\frac{a+1}{2}.\end{array}\right. \end{aligned}$$

Obviously, each coordinate of u is a piece-wise continuous solution to the mono-dimensional Burgers’ equation and at each jump satisfies the Rankine–Hugoniot and Lax conditions. Consequently, by [4, Thm. 4.2], \(u_j\) is an entropy solution of the scalar Burgers’ equation on edge \(e_j\), \(j=1,2\), hence u is an edge-entropy solution to the network Burgers’ equation. Finally, we derive a family of transmission solvers in \(v_2\) at \(t=0\) that depends on the parameter a:

$$\begin{aligned} TS_2(0,1)=(a,a), \qquad a\in [0,\infty ). \end{aligned}$$
(20)

Considerations on a path graph allow us to build intuition about the behaviour in vertices, as the solution can easily be related to the scalar case. Let us compare the solutions presented in Example 2 with the standard solution of an initial-boundary value problem on the interval [0, 2], namely with a problem of the form

$$\begin{aligned} \begin{array}{rcll} \displaystyle \int _0^T \int _{0}^2 \left( u \partial _t \phi + \frac{u^2 \partial _x\phi }{2}\right) dxdt&{}=&{} \displaystyle \int _{0}^{2} \mathring{u}(x)\phi (x,0)dx,&{}\\ [.1cm] \mathring{u}(x)&{}=&{}{\left\{ \begin{array}{ll}0&{}x\in [0,1],\\ 1&{}x\in [1,2],\end{array}\right. }&{}\\ u(0,t)&{}=&{}u(2,t)=0&{}t\ge 0.\end{array} \end{aligned}$$

The comparison clearly indicates that to obtain an entropy solution in the mono-dimensional case we need to take \(a=0\), since otherwise we introduce a non-physical shock into the model. The choice of \(a\in (0,1]\) gives a weak solution that can be justified, while \(a>1\) seems to make no sense. To choose a physically reasonable solution in the network case, we assume continuity, a.e. in time, at some edges adjacent to the vertex \(v_i\): namely, continuity at the edges from \(D_i^{in}\) whenever the flow agrees with the direction of the vertex. In the case of a non-negative solution, this condition simplifies to

(LC):

        \(u_j(1,t^-)=u_j(1,t^+),\qquad \text {for}\,\,e_j\in D_i^{in}\) and a.e. \(t\in (0,T)\).

Condition (LC) transfers the problem of finding the value of the solution at \(t^+\) only onto the edges from \(D_i^{out}\). It is worth mentioning that it is well defined only for vertices other than sinks. For the path graph, see Example 2, it is sufficient to obtain uniqueness, but not in the general case \(\text {deg}_{-}(v_i)>1\). The next condition relates the value of the solution after the flow through the vertex to the change of the energy, which is a natural assumption in the context of fluid-type equations.

Let us recall that in the case of the scalar Burgers’ equation the change of energy of a piecewise continuous solution with one jump, defined on the interval [A, B], reads

$$\begin{aligned} \frac{d}{dt}E(t)=\frac{u^3(A,t)}{3}-\frac{u^3(B,t)}{3}-\frac{(u(\xi ^-(t),t)-u(\xi ^+(t),t))^3}{12}, \end{aligned}$$
(21)

where \(u(\xi ^{-}(t),t)\) and \(u(\xi ^{+}(t),t)\) are the one-sided limits at the discontinuity curve \(\xi (\cdot )\). We easily note that for each shock wave satisfying the Lax condition the energy decreases proportionally to the magnitude of the jump, while for non-physical shocks we observe an increase of the energy. In the following considerations we take into account only edge-entropy solutions, see Definition 7, which excludes the existence of non-physical shock waves. Now fix the vertex \(v_i\) and consider the Riemann problem, at \(x=1\) for incoming edges and \(x=0\) for outgoing ones, that arises due to the flow through the vertex. We define the change of the energy at \(v_i\) by \(\mathcal {E}_i:[0,\infty )^{\text {deg}(v_i)}\rightarrow \mathbb {R}\),

$$\begin{aligned} \mathcal {E}_i(u(v_i,t))= & {} \displaystyle \sum _{j:\,e_j\in D_i^{in}} \mathcal {E}_{ij}^+(u(1,t)) +\sum _{j:\,e_j\in D_i^{out}} \mathcal {E}_{ij}^-(u(0,t)),\nonumber \\ \mathcal {E}_{ij}^{\pm }(u(v_i,t))= & {} \displaystyle \frac{u_j^3(v_i,t^{\mp })-u_j^3(v_i,t^{\pm })}{3}\nonumber \\&\quad -\frac{\left( u_j(v_i,t^{\mp })-u_j(v_i,t^{\pm })\right) ^3}{12}\theta \left( u_j(v_i,t^{\mp })- u_j(v_i,t^{\pm })\right) , \end{aligned}$$
(22)

where \(\mathcal {E}_{ij}^{\pm }:[0,\infty )^{\text {deg}(v_i)}\rightarrow \mathbb {R}\) is the change of energy at the edge \(e_j\) and \(\theta \) is the Heaviside step function. The following transmission conditions are related to the extrema of \(\mathcal {E}_i\).

(\(\mathcal {E}_i^m\)):

transmission conditions (15c) in \(v_i\) minimize function \(\mathcal {E}_i\),

(\(\mathcal {E}_i^M\)):

transmission conditions (15c) in \(v_i\) maximize function \(\mathcal {E}_i\).

At the beginning let us remark that without condition (LC) the problem of minimization of \(\mathcal {E}_i\) with respect to \(u(v_i,t^+)\) does not have to be well-posed. Let us return to Example 2. For \(v_2\), at \(t=0\), we have

$$\begin{aligned} \text {min}_{a\in [0,\infty )}\,\, \mathcal {E}_2(0,1,a,a)=-\infty , \end{aligned}$$

since \(\mathcal {E}_2\) reads

$$\begin{aligned} \mathcal {E}_2(0,1,a,a)=\left\{ \begin{array}{ll} -\frac{1}{3},&{}\text {for }a\in [0,1)\\ [0.2cm] -\frac{(a-1)^3}{12}-\frac{1}{3}&{}\text {for }a\in [1,\infty ).\end{array}\right. \end{aligned}$$
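The piecewise expression above can be cross-checked directly against definition (22). In the sketch below (the function names are ours) the incoming edge \(e_1\) carries the pair of traces \((0,a)\) and the outgoing edge \(e_2\) the pair \((1,a)\):

```python
# Vertex energy change (22) specialised to v_2 of the path graph in Example 2.

def theta(s):
    # Heaviside step function.
    return 1.0 if s > 0 else 0.0

def E_in(u_minus, u_plus):
    # E^+_{ij}: incoming edge, upper signs in (22).
    return (u_minus**3 - u_plus**3) / 3 \
        - ((u_minus - u_plus)**3 / 12) * theta(u_minus - u_plus)

def E_out(u_minus, u_plus):
    # E^-_{ij}: outgoing edge, lower signs in (22).
    return (u_plus**3 - u_minus**3) / 3 \
        - ((u_plus - u_minus)**3 / 12) * theta(u_plus - u_minus)

def E2(a):
    return E_in(0.0, a) + E_out(1.0, a)

assert abs(E2(0.5) + 1/3) < 1e-12                    # a in [0,1): constant -1/3
assert abs(E2(2.0) + 1/3 + (2 - 1)**3 / 12) < 1e-12  # a >= 1: -1/3 - (a-1)^3/12
```

Letting a grow confirms the lack of a lower bound: E2(a) tends to minus infinity as a tends to infinity.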

On the contrary, maximizing \(\mathcal {E}_i\) we obtain \(a=1\), which is again not the solution we are aiming for. In order to build further intuition we consider a problem defined on the metric honeycomb tree.

Example 3

Let us consider the metric honeycomb tree \(\mathcal {H}_3\) being the v-subgraph of the honeycomb lattice for v being a vertex of the first kind, see Fig. 1(iii). Define on \(\mathcal {H}_3\) the network Burgers’ equation (16) with initial condition \(\mathring{u}(x):=(a,1,1)^T\), \(a\in [0,1]\). The edge-entropy solution which satisfies condition (LC) depends on one parameter \(b\in [0,a]\), for \(t\in [0,\epsilon )\), and reads

$$\begin{aligned} u_1(x,t)= & {} \,\,a,\\ u_2(x,t)= & {} \left\{ \begin{array}{ll} b&{}\text {for }\,\, \frac{x}{t}\le b,\\ \frac{x}{t}&{}\text {for }\,\, b< \frac{x}{t}\le 1,\\ 1&{}\text {for }\,\, \frac{x}{t}>1,\end{array}\right. \\ u_3(x,t)= & {} \left\{ \begin{array}{ll} \sqrt{a^2-b^2}&{}\text {for }\,\, \frac{x}{t}\le \sqrt{a^2-b^2},\\ \frac{x}{t}&{}\text {for }\,\, \sqrt{a^2-b^2}< \frac{x}{t}\le 1,\\ 1&{}\text {for }\,\, \frac{x}{t}>1.\end{array}\right. \end{aligned}$$

Now we build two transmission solvers which satisfy either (\(\mathcal {E}_i^m\)) or (\(\mathcal {E}_i^M\)), and denote them by \(TS_2^m\) and \(TS_2^M\), respectively. For \(t=0\), the function \(\mathcal {E}_2\) is given by

$$\begin{aligned} \mathcal {E}_2\left( a,1,1,a,b,\sqrt{a^2-b^2}\right) =\frac{b^3+(a^2-b^2)^{\frac{3}{2}}-2}{3} \end{aligned}$$

Calculating the critical points of \(\mathcal {E}_2\) and the values at the boundary, we arrive at three possible cases, namely \(b=0\), \(b=\frac{\sqrt{2}}{2}a\) and \(b=a\). We note that

$$\begin{aligned} \mathcal {E}_2(a,1,1,a,a,0)=\mathcal {E}_2(a,1,1,a,0,a)=\frac{a^3-2}{3} \quad \text {and} \quad \mathcal {E}_2\left( a,1,1,a,\frac{\sqrt{2}}{2}a,\frac{\sqrt{2}}{2}a\right) =\frac{\sqrt{2}a^3-4}{6}, \end{aligned}$$

hence \(TS_2^m(a,1,1)=\left( a,\frac{\sqrt{2}}{2}a,\frac{\sqrt{2}}{2}a\right) \) and \(TS_2^M(a,1,1)\in \left\{ (a,0,a),(a,a,0)\right\} \).
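A brute-force check of this computation (ours; the value of a and the grid are illustrative):

```python
import numpy as np

a = 0.8  # any a in (0, 1]

def E2(b):
    # Energy change at v_2 as a function of the split parameter b.
    return (b**3 + (a**2 - b**2)**1.5 - 2) / 3

b = np.linspace(0.0, a, 200001)
b_star = b[np.argmin(E2(b))]

assert abs(b_star - a / np.sqrt(2)) < 1e-4                     # interior minimum
assert abs(E2(0.0) - (a**3 - 2) / 3) < 1e-12                   # boundary value
assert abs(E2(a) - E2(0.0)) < 1e-12                            # two symmetric maxima
assert abs(E2(a / np.sqrt(2)) - (np.sqrt(2) * a**3 - 4) / 6) < 1e-9
```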

Example 3 is very specific since the value of the solution before the flow through the vertex is equal at \(e_2\) and \(e_3\), see Fig. 1 for the notation. Consequently, for \(t=0\), edges \(e_2\) and \(e_3\) can be considered as locally symmetric with respect to the flow. In order to exclude such a case in further considerations we introduce a technical condition called decreasing flow with respect to edge enumeration

(DF):

        \((TS_i^M)_j\ge (TS_i^M)_k\) for any \(j<k\), \(e_j,e_k\in D_i^{out}\).

It allows specifying the solution in which the highest flow is related to the edge with the lowest number. Since all tree graphs G having the same triplet \((V,E,\mathcal {L})\) but different mappings \(\Phi \), all satisfying an increasing order of edges, are homomorphic, any locally symmetric solution can be chosen depending on the choice of the representative. In particular, using the notation introduced in Example 3, assuming that \(TS_2\) satisfies (DF) we have \(TS_2^M(a,1,1)=(a,a,0)\).

What happens if edges \(e_2\) and \(e_3\) are not locally symmetric with respect to the flow? We expect that it leads to a different mass distribution when going through the vertex, depending on the values of \(\mathring{u}_{2}\) and \(\mathring{u}_{3}\). In such a case, the coefficients of matrix \(\mathcal {B}^{01}(u)\) in Eq. (15c) depend strictly on the solution u. On the other hand, it is worth underlining that the considered transmission solver works point-wise in time, and it seems justified to add a consistency condition that allows it to stabilize on a certain time interval. Namely, we expect that

$$\begin{aligned} TS_i\left( TS_i\left( u(v_i,t^-)\right) \right) =u(v_i,t^+). \end{aligned}$$
(23)

Condition (23) was also introduced in [12, Def. 5] as one of common assumptions imposed on different transmission solvers considered in the literature. In line with this reasoning, let us define minimal and maximal transmission solver in vertex as follows.

Definition 9

Let \(TS_i^m\) (resp. \(TS_i^M\)) be the transmission solver that, for some fixed \(t\in [0,T)\), satisfies conditions (LC)–(\(\mathcal {E}^m_i\)) (resp. (LC)–(\(\mathcal {E}^M_i\))–(DF)) in \(v_i\). We say that \((TS_i^{m})^{\star }\) (resp. \((TS_i^{M})^{\star }\)) is a minimal (resp. maximal) transmission solver in vertex \(v_i\) if it satisfies

$$\begin{aligned} (TS_i^{z})^{\star }u(v_i,t^-)=\lim _{n\rightarrow \infty } (TS_i^z)^{(n)}u(v_i,t^-),\qquad \text {for any}\,\, u(v_i,t^-)\in [0,\infty )^{\text {deg}(v_i)},\, z=m,M, \end{aligned}$$
(24)

where, by \((TS_i^z)^{(n)}\), we understand the n-th composition of the mapping \(TS_i^z\).

We now need to justify that Definition 9 is well-posed, hence that the limit in (24) exists. Whenever this limit exists, problem (15) transforms into

$$\begin{aligned} \sum _{j\in J}\int _0^T \int _{0}^{l_j} \left( u_j \partial _t \phi _j + \frac{u_j^2 \partial _x\phi _j}{2}\right) dxdt&= \sum _{j\in J}\int _{0}^{l_j} \mathring{u}_j(x)\phi _j(x,0)dx, \end{aligned}$$
(25a)
$$\begin{aligned} u_j(x,0)&=\mathring{u}_j(x)>0, \qquad x\in [0,l_j],\,j\in J, \end{aligned}$$
(25b)
$$\begin{aligned} u_j(0,t)&=\sum _{\left\{ s\in J:\,e_s\in D_i^{in}\right\} }\,b^{01}_{js}(u) u_s(1,t),\qquad \text {for}\,\,\phi _{ij}^-\ne 0, \end{aligned}$$
(25c)

where coefficients of \(\mathcal {B}^{01}\) in (25c) are given by

$$\begin{aligned} b_{js}^{01}(u)=\frac{(TS_i^{z})^{\star }_j(u)}{\sum _{\left\{ k\in J:\,e_k\in D_i^{in}\right\} }(TS_i^{z})^{\star }_k(u)}, \qquad \text {for}\quad z=m,M. \end{aligned}$$
(26)
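As a sanity check (ours), the coefficients (26) reproduce in (25c) exactly the outgoing value produced by the limit solver, because on incoming edges \((TS_i^{z})^{\star }_k(u)=u_k(1,t)\) by (LC), so the weights telescope:

```python
import math

u_in = [0.6, 0.8]   # incoming traces u_s(1, t)
n_out = 2           # number of outgoing edges

# Outgoing value of the minimal limit solver (our stand-in for (TS^m)*_j).
ts_out = math.sqrt(sum(u * u for u in u_in)) / math.sqrt(n_out)

# Coefficients (26) for one fixed outgoing edge j: numerator is the
# outgoing value, denominator the sum of the solver's incoming values.
b = [ts_out / sum(u_in) for _ in u_in]

# Right-hand side of the boundary rule (25c) recovers the outgoing value.
u_j0 = sum(b_s * u_s for b_s, u_s in zip(b, u_in))
assert abs(u_j0 - ts_out) < 1e-12
```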

Theorem 1

Consider a non-negative weak solution u of the Burgers’ equation (16) on the metric tree \(\mathcal {G}\) and fix \(t\in [0,T)\). The following statements hold.

(i) At each vertex \(v_i\in V\), there exists a unique transmission solver \((TS_i^m)^{\star }\) of the form

    $$\begin{aligned} (TS_i^m)^{\star }u(v_i,t^-)=\left\{ \begin{array}{ll} u_j(1,t^-)&{}\text {for}\,\, e_j\in D_i^{in},\\ \frac{1}{\sqrt{\text {deg}_+(v_i)}}\sqrt{\sum _{\left\{ s\in J:\,\,e_s\in D_i^{in}\right\} }u_s^2(1,t^-)}\qquad &{}\text {for}\,\, e_j\in D_i^{out}. \end{array}\right. \end{aligned}$$
    (27)
(ii) At each vertex \(v_i\in V\), there exists a unique transmission solver \((TS_i^M)^{\star }\) of the form

    $$\begin{aligned} (TS_i^M)^{\star }u(v_i,t^-)=\left\{ \begin{array}{ll} u_j(1,t^-)&{}\text {for}\,\, e_j\in D_i^{in},\\ [.2cm] \sqrt{\sum _{\left\{ s\in J:\,\,e_s\in D_i^{in}\right\} }u_s^2(1,t^-)}\qquad &{}\text {for}\,\, e_j=e_k,\\ [.2cm] 0&{}\text {for}\,\, e_j\in D_i^{out}\setminus \left\{ e_k\right\} , \end{array}\right. \end{aligned}$$
    (28)

    where \(k\in J\) satisfies condition

    $$\begin{aligned} k:=\min \left\{ j\in J:\,\, e_j\in D_i^{out}\right\} . \end{aligned}$$
    (29)
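The two limit solvers can be sketched numerically. In the encoding below (ours) a vertex is described by the list of incoming traces and the number of outgoing edges; incoming values are returned unchanged by (LC), and both solvers preserve the Kirchhoff balance (15d).

```python
import math

def ts_min_star(u_in, n_out):
    # Minimal solver, cf. (27): all outgoing edges receive the same value,
    # normalised by the number of outgoing edges so that (15d) holds.
    s = math.sqrt(sum(u * u for u in u_in))
    return [s / math.sqrt(n_out)] * n_out

def ts_max_star(u_in, n_out):
    # Maximal solver, cf. (28): the whole flow goes to a single outgoing
    # edge (under (DF), the distinguished one); the rest carry zero.
    s = math.sqrt(sum(u * u for u in u_in))
    return [s] + [0.0] * (n_out - 1)

u_in = [0.6, 0.8]   # incoming traces u_s(1, t^-)
for out in (ts_min_star(u_in, 3), ts_max_star(u_in, 3)):
    # Kirchhoff condition (15d): incoming and outgoing energies agree.
    assert abs(sum(u * u for u in u_in) - sum(u * u for u in out)) < 1e-12
```

Applying either map a second time changes nothing on the outgoing side, which is the stabilization expressed by (23)–(24).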

Proof

Let \(v_i\in V\) be an arbitrary vertex. Without loss of generality we assume that

$$\begin{aligned} \sum _{\left\{ j\in J:\,\,e_j\in D_i^{in}\right\} }\,\,u_j^2(1,t^{-})=1, \end{aligned}$$
(30)

and introduce the notation \(f^{\mp }=(f_j^\mp )_{j=1}^{\text {deg}(v_i)}:=(u_j^2(v_i,t^{\mp }))_{j=1}^{\text {deg}(v_i)}\). By (LC), finding \(TS_i^z\), \(z=m,M\), is equivalent to the optimization problem

$$\begin{aligned} \begin{array}{c} \bar{\mathcal {E}}(f^+)=\sum _{j=1}^{\text {deg}_+(v_i)} h_j\left( \sqrt{f_j^+}\right) \longrightarrow \min /\max , \\ [.2cm] \text {on the set}\,\, A=\left\{ f^+\in [0,1]^{\text {deg}_+(v_i)}:\,\, \sum _{j=1}^{\text {deg}_+(v_i)} f_j^+=1\right\} ,\end{array} \end{aligned}$$
(31)

where

$$\begin{aligned} h_j(u)={\left\{ \begin{array}{ll}\frac{1}{3}\left( u^3-(f_j^-)^{\frac{3}{2}}\right) &{}\text {for}\,\,u^2<f_j^-,\\ \frac{1}{4}\left( u^3+(f_j^-)^{\frac{1}{2}}u^2-f_j^-u-(f_j^-)^{\frac{3}{2}}\right) &{}\text {for}\,\,u^2\ge f_j^-. \end{array}\right. } \end{aligned}$$
(32)

Since we optimize a continuous function \(\bar{\mathcal {E}}\) on a compact set A, the only thing to prove is the uniqueness of the existing minimum/maximum. We show that \(\bar{\mathcal {E}}\) is strictly quasiconvex on a convex set A, namely

$$\begin{aligned} \bar{\mathcal {E}}(\lambda f^++(1-\lambda )g^+)< \max \left( \bar{\mathcal {E}}( f^+),\bar{\mathcal {E}}(g^+)\right) \end{aligned}$$

for \(f^+,g^+\in A\), \(f^+\ne g^+\), \(\lambda \in (0,1)\); and therefore attains a unique global minimum. Note that function \(\lambda \mapsto \bar{\mathcal {E}}(\lambda f^++(1-\lambda )g^+)\), for \(f^+,g^+\in A\) and \(\lambda \in [0,1]\) is convex since

$$\begin{aligned}&\frac{d^2}{d\lambda ^2}\bar{\mathcal {E}}(\lambda f^++(1-\lambda )g^+)=\\&\quad \sum _{j=1}^{\text {deg}_+(v_i)}\frac{(f_j^+-g_j^+)^2}{2(\lambda f_j^++(1-\lambda )g_j^+)}\left( \left. \frac{d^2}{du^2}h_j(u)\right| _{u=\sqrt{\lambda f^+_j+(1-\lambda )g^+_j}}-\frac{\left. \frac{d}{du}h_j(u)\right| _{u=\sqrt{\lambda f^+_j+(1-\lambda )g^+_j}}}{2\sqrt{\lambda f^+_j+(1-\lambda )g^+_j}}\right) >0. \end{aligned}$$

Hence, it attains maximum at the boundary and

$$\begin{aligned} \bar{\mathcal {E}}(\lambda f^++(1-\lambda )g^+)\le \max \left( \bar{\mathcal {E}}( f^+),\bar{\mathcal {E}}(g^+)\right) . \end{aligned}$$
(33)

Since the inequality (33) is strict for \(\lambda \in (0,1)\), \(\bar{\mathcal {E}}\) is strictly quasiconvex.

Using the methods of quasiconvex programming we know that the maximum is attained at the boundary of A, see [13, Lem. 3.2]. Adding condition (DF) we obtain the uniqueness of \(TS_i^M\).

We now derive the formula for \((TS_i^z)^{\star }\), \(z=m,M\), starting with the minimization condition. The idea is to describe sequences \((u(v_i,t_n^-))_{n\in \mathbb {N}}\) and \((u(v_i,t_n^+))_{n\in \mathbb {N}}\) in such a way that at each step

$$\begin{aligned} u(v_i,t_{n+1}^-):=u(v_i,t_{n}^+), \qquad \text {and}\quad t_1:=t. \end{aligned}$$
(34)

All velocities are non-negative, so for the next time step we obtain the same constraint for the velocities coming out of the chosen vertex. Let us fix an arbitrary \(n\in \mathbb {N}\) and denote by \(u^-\) and \(u^+\) the values of the solution in vertex \(v_i\) at the time step \(t_n\):

$$\begin{aligned}&u^-:=\left( (TS_i^m)^{(n-1)}u(v_i,t^-)\right) _k, \qquad u^+:=\left( (TS_i^m)^{(n)}u(v_i,t^-)\right) _k; \end{aligned}$$
(35)
$$\begin{aligned}&\text {where}\quad k\in \mathop {\mathrm {arg\,max}}\limits _{\left\{ j\in J:\,e_j\in D_i^{out}\right\} }\, \left( (TS_i^m)^{(n-1)}u(v_i,t^-)\right) _j. \end{aligned}$$
(36)

Now consider some index \(s\in J\) such that

$$\begin{aligned}&\bar{u}^-:=\left( (TS_i^m)^{(n-1)}u(v_i,t^-)\right) _s<u^-, \quad \text {and}\quad \bar{u}^+:=\left( (TS_i^m)^{(n)}u(v_i,t^-)\right) _s>\bar{u}^-. \end{aligned}$$
(37)

Without loss of generality assume that the flow through the vertex \(v_i\) at \(t_n\) changes the values of only two coordinates, corresponding to edges adjacent to \(v_i\). Since, for almost all t, the Kirchhoff condition needs to be satisfied, we have

$$\begin{aligned} (u^-)^2+(\bar{u}^-)^2=(u^+)^2+(\bar{u}^+)^2. \end{aligned}$$
(38)

We show that the choice of transmission conditions described in (35)–(37) minimizes the function \(\mathcal {E}_i\). Consequently, only the value given in (27) can be the limit \((TS_i^m)^{\star }\).

Indeed, for \(h>0\) and \(\bar{u}^+=\bar{u}^-+h\), we have by (38) that

$$\begin{aligned} u^+=\sqrt{(u^-)^2 - 2\bar{u}^- h -h^2}. \end{aligned}$$

The structure of the data implies that

$$\begin{aligned} \mathcal {E}_i(u(v_i,t_n))= & {} \sum _{j:\,e_j\in D_i^{out}\setminus \left\{ e_k,e_s\right\} } \mathcal {E}_{ij}^-(u(0,t_n))+\frac{(\bar{u}^+)^3-(\bar{u}^-)^3}{3}-\frac{(\bar{u}^+-\bar{u}^-)^3}{12} + \frac{(u^+)^3-(u^-)^3}{3} \\= & {} \sum _{j:\,e_j\in D_i^{out}\setminus \left\{ e_k,e_s\right\} } \mathcal {E}_{ij}^-(u(0,t_n))+\frac{(\bar{u}^-+h)^3-(\bar{u}^-)^3}{3} - \frac{h^3}{12}\\ [0.1cm]+ & {} \frac{ \left( (u^-)^2 - 2(\bar{u}^-) h -h^2\right) ^{\frac{3}{2}}-(u^-)^3}{3}=:\tilde{ \mathcal {E}}(h) \end{aligned}$$

But then we note that

$$\begin{aligned} \frac{d}{dh}\tilde{\mathcal {E}}(h)|_{h=0} = (\bar{u}^-)(\bar{u}^- -u^-) <0. \end{aligned}$$
(39)

Hence \(\mathcal {E}_i\) decreases locally as \(h>0\) grows.
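The sign in (39) can be confirmed by a central finite difference (our sketch; the two trace values are arbitrary subject to \(\bar{u}^-<u^-\)).

```python
u_minus, u_bar = 1.0, 0.3   # u^- and bar{u}^-, with bar{u}^- < u^-

def E_tilde(h):
    # The h-dependent part of E_i after substituting (38), as in the proof.
    shifted = (u_minus**2 - 2 * u_bar * h - h * h) ** 1.5
    return ((u_bar + h)**3 - u_bar**3) / 3 - h**3 / 12 + (shifted - u_minus**3) / 3

h = 1e-6
fd = (E_tilde(h) - E_tilde(-h)) / (2 * h)   # central difference at h = 0
assert abs(fd - u_bar * (u_bar - u_minus)) < 1e-6
assert u_bar * (u_bar - u_minus) < 0        # so the energy decreases locally
```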

Let us now turn to the energy maximization case. Since the above considerations also hold for negative h, the form of the derivative in (39) ensures that the maximum is attained at the boundary of the set A. Condition (DF) provides the final formula for \((TS_i^M)^{\star }\). \(\square \)

The assumptions of Theorem 1 are strictly related to non-negative velocities of the flow. In the general case the considerations are more subtle and generate a larger number of possibilities for the physical behaviour of a flow. For that reason, in Sect. 5.1 we confine ourselves to honeycomb trees. Note that this metric graph admits only three types of transmission conditions, according to formula (26). Two for the vertices \(v_i\) of the first kind, such that \(D_i=\left( \left\{ e_j\right\} ,\left\{ e_k,e_l\right\} \right) \), \(k<l\):

(i) \(u_k(0,t)=u_j(1,t),\,\, u_l(0,t)=0\);

(ii) \(u_k(0,t)=u_l(0,t)=\frac{\sqrt{2}}{2}u_j(1,t)\);

and one for the vertices of the second kind, such that \(D_i=\left( \left\{ e_j,e_k\right\} ,\left\{ e_l\right\} \right) \):

(iii) \(u_l(0,t)=\frac{u_j(1,t)}{\sqrt{u_j^2(1,t)+u_k^2(1,t)}}u_j(1,t)+\frac{u_k(1,t)}{\sqrt{u_j^2(1,t)+u_k^2(1,t)}}u_k(1,t)\).
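A one-line verification (ours) that the second-kind condition collapses to \(u_l(0,t)=\sqrt{u_j^2(1,t)+u_k^2(1,t)}\), so the Kirchhoff condition (15d) holds exactly at these vertices:

```python
import math

def merge(uj, uk):
    # The second-kind condition as written, with weights u_j/sqrt(.) and u_k/sqrt(.).
    norm = math.sqrt(uj * uj + uk * uk)
    return (uj / norm) * uj + (uk / norm) * uk

uj, uk = 0.9, 0.4
assert abs(merge(uj, uk) - math.sqrt(uj**2 + uk**2)) < 1e-12
```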

At the end of this part let us give the formal definition of an entropy solution of the network Burgers’ equation.

Definition 10

We say that a function \(u: [0,1]\times [0,T)\rightarrow \mathbb {R}^m\) is a vertex-entropy solution if it is a weak solution to the network Burgers’ equation (15).

Furthermore, \(u=(u_j)_{j\in J}\) is an entropy solution if it is both an edge- and a vertex-entropy solution. In particular, minimal- and maximal-entropy solutions are, respectively, the edge-entropy solutions to (25)–(26) with \(z=m\) and \(z=M\).

4.2 Existence of Solutions

We are finally ready to prove the existence result in the case of non-negative solutions.

Theorem 2

Problem (15) on a finite tree \(\mathcal {G}\) admits a non-negative entropy solution for any \(\mathring{u}\in L^{\infty }([0,1],\mathbb {R}_+^m)\). For almost all \(t>0\) the function \(x\mapsto u(x,t)\) has locally bounded total variation and can be calculated recursively from the formula

$$\begin{aligned} u_{j}(x,t)= & {} \frac{x-y_{j}(x,t)}{t}, \qquad \qquad \text {where}\quad y_j\quad \text {minimizes function} \end{aligned}$$
(40a)
$$\begin{aligned} y\mapsto G_{j}(x,t,y)= & {} \left( \frac{(x-y)^2}{2t}+\int _{0}^y\mathring{u}_j(s)ds\right) \chi _{[0,x]}(y) \end{aligned}$$
(40b)
$$\begin{aligned}&+\left( \frac{x(x-y)}{2t}-\int _{0}^{\frac{-y}{x-y}t}\frac{u_j^2(0,s)}{2}ds\right) \chi _{(-\infty ,0)}(y), \end{aligned}$$
(40c)

for any edge \(j\in J\).
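Formula (40) can be evaluated by direct minimisation over a grid of candidate points y. The sketch below (ours) treats a single source edge, so \(u_j(0,t)=0\) and the boundary integral in (40c) vanishes; with the initial datum \(\mathring{u}_j\equiv 1\) the exact entropy solution is a rarefaction, \(u=x/t\) for \(x\le t\) and \(u=1\) for \(x>t\).

```python
import numpy as np

def u_lax_oleinik(x, t, y_grid):
    # G from (40b)-(40c) with initial datum u0 = 1 (so int_0^y u0 = y)
    # and zero inflow at x = 0, i.e. the boundary integral vanishes.
    G = np.where(y_grid >= 0,
                 (x - y_grid)**2 / (2 * t) + np.clip(y_grid, 0.0, x),
                 x * (x - y_grid) / (2 * t))
    G = np.where(y_grid > x, np.inf, G)   # minimiser is sought in (-inf, x]
    y_star = y_grid[np.argmin(G)]
    return (x - y_star) / t               # formula (40a)

y_grid = np.linspace(-2.0, 2.0, 400001)
t = 0.5
for x in (0.1, 0.25, 0.4):                # inside the rarefaction fan: u = x/t
    assert abs(u_lax_oleinik(x, t, y_grid) - x / t) < 1e-3
for x in (0.7, 0.9):                      # ahead of the fan: u = 1
    assert abs(u_lax_oleinik(x, t, y_grid) - 1.0) < 1e-3
```

The grid minimiser plays the role of \(y_j(x,t)\); refining the grid refines the computed value towards the exact solution.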

Proof

In accordance with the proof of existence of a weak solution in the scalar case, see [19, Thm. 1.1] and [6], we show that formula (40) is valid for piece-wise smooth solutions satisfying the Lax shock inequality at discontinuities. To this end we define a solution recursively at each edge.

Necessity. Assume first that u is a solution of (15) as stated above. Then for any source \(e_j(0)\), \(j=1,\ldots ,s\), see Sect. 3.1, the right-hand side of (15c) vanishes and consequently \(u_j(0,t)= 0\) for all \(t>0\). Note that, due to the tree structure and the recursive procedure, we can choose such an order of edges that before calculating the solution on the k-th edge we have the values of all \(u_j(x,t)\) for \(j\in J\) such that \(b_{kj}> 0\), see Eq. (4). Consequently, the system of conservation laws on the network transforms into a sequence of initial-boundary value problems of the form

$$\begin{aligned} \int _0^T\int _{0}^{l_j} \left( u_j \partial _t \phi _j + \frac{u_j^2 \partial _x \phi _j}{2} \right) dxdt&= \int _{0}^{l_j} \mathring{u}_j(x)\phi _j(x,0)dx, \end{aligned}$$
(41a)
$$\begin{aligned} u_j(x,0)&=\mathring{u}_j(x)>0, \qquad x\in [0,l_j],\,j\in J, \end{aligned}$$
(41b)
$$\begin{aligned} u_j(0,t)&=\sum _{\left\{ s\in J:\,e_s\in D_i^{in}\right\} }\,b^{01}_{js}(t) u_s(1,t), \end{aligned}$$
(41c)

where \(i\in I\) satisfies \(v_i=e_j(0)\).

Let us fix \(j\in J\) and define auxiliary functions \(w_j:[0,l_j]\times [0,\infty )\rightarrow \mathbb {R}_+\) and \(\mathring{w}_j:[0,l_j]\rightarrow \mathbb {R}_+\) such that

$$\begin{aligned} w_{j}(x,t)=\int _{0}^x u_{j}(s,t)ds,\qquad \mathring{w}_j(x):=w_j(x,0). \end{aligned}$$
(42)

Note that u is a weak, piece-wise smooth solution to (15) if and only if it satisfies

$$\begin{aligned} \partial _t u_j+u_j\partial _x u_j=0 \end{aligned}$$

at each smoothness region in \([0,l_j]\times [0,T)\) and Rankine–Hugoniot condition along the discontinuity, see [19, Thm. 2.3]. We have

$$\begin{aligned} \int _{0}^x \partial _t u_j(s,t)+\partial _s\,\frac{u_j^2(s,t)}{2}ds= & {} \partial _t w_j(x,t)+\frac{u_j^2(x,t)-u_j^2(0,t)}{2}\nonumber \\= & {} \partial _t w_j(x,t)+\frac{(\partial _x w_j(x,t))^2}{2}-\left. \frac{(\partial _x w_j(x,t))^2}{2}\right| _{x=0}=0. \end{aligned}$$
(43)

By the properties of a square function we have that for any \(v\in [0,\infty )\) and \(z\in \mathbb {R}\)

$$\begin{aligned} vz-\frac{v^2}{2}\le \frac{z^2}{2}. \end{aligned}$$

For \(z=\partial _x w_j\), by (43),

$$\begin{aligned} v \partial _x w_j-\frac{v^2}{2}\le \frac{(\partial _x w_j)^2}{2}=\left. \frac{(\partial _x w_j)^2}{2}\right| _{x=0}-\partial _t w_j, \end{aligned}$$

and consequently,

$$\begin{aligned} \partial _t w_j+v\partial _x w_j \le \frac{v^2}{2}+\left. \frac{(\partial _x w_j)^2}{2}\right| _{x=0}. \end{aligned}$$
(44)

In order to determine the value of \(u_j\) at \((x,t)\in [0,l_j]\times [0,\infty )\) we choose some v. The line passing through \((x,t)\) with slope v either intersects the OX axis at \(y=x-vt\in [0,l_j]\) or hits the OY axis at \(t=-\frac{y}{v}\) for \(y<0\). We integrate (44) along the characteristic \(y=x-vt\) separately in the two mentioned cases.

If \(y\in [0,l_j]\), then integrating over [0, t], analogously to the proof in the scalar case, we have

$$\begin{aligned} w_j(x,t)\le & {} \frac{v^2t}{2}+\int _0^t \frac{(\partial _x w_j(0,s))^2}{2} ds+\mathring{w}_j(x)\nonumber \\= & {} \frac{(x-y)^2}{2t}+\int _0^t \frac{u_j^2(0,s)}{2}ds+\int _0^x\mathring{u}_j(s)ds. \end{aligned}$$
(45)

If \(y\in (-\infty ,0)\), then we integrate over \(\left[ -\frac{y}{v},t\right] \) and since \(v=\frac{x-y}{t}\) and by (42), we obtain

$$\begin{aligned} w_j(x,t)\le & {} \frac{v^2}{2}\left( t+\frac{y}{v}\right) +\int _{-\frac{y}{v}}^t \frac{(\partial _x w_j(0,s))^2}{2}ds+w_j\left( 0,-\frac{y}{v}\right) \nonumber \\\le & {} \frac{x(x-y)}{2t}+\int _{-\frac{yt}{x-y}}^t \frac{u_j^2(0,s)}{2}ds. \end{aligned}$$
(46)

Finally, by (45) and (46)

$$\begin{aligned} w_j(x,t)\le \int _0^t \frac{u_j^2(0,s)}{2}ds +G_j(x,t,y), \end{aligned}$$
(47)

where \(G_j\) is defined in (40b)–(40c). Since the left-hand side does not depend on y, we minimize the right-hand side over y. Let us choose the slope of the characteristic line \(v=u_j(x,t)\). Inserting v into (44) we obtain, by (43), equality. The minimum of \(G_j\) is attained at \(y_j\) since u satisfies the Lax condition. Finally,

$$\begin{aligned} w_j(x,t)= & {} \int _0^t \frac{u_j^2(0,s)}{2}ds+G_j(x,t,y_j), \qquad \text {where}\quad y_j:=\text {arg min}_{y\in \mathbb {R}}\,G_j(x,t,y), \end{aligned}$$
(48)

and since \(y_j(x,t)=x-u_j(x,t)t\) we derive formula (40a).

Sufficiency. Assume now that u is given by (40); we show that it is a weak solution to (15). Note first that u is well defined, since for any \(j=1,\ldots ,m\) there exists a unique minimizer of \(G_j\).

The existence of a minimizer of \(G_j\) for \(y\in [0,x]\) is obvious, since the first term in (40b) grows faster than linearly while the second has at most linear growth. The same argument works in the case \(y\in (-\infty ,0)\) if we transform the problem of minimization of \(G_j\) over \(y_j\) into the minimization of the function \(H_j:[0,x]\times (0,\infty )\times [0,\infty )\rightarrow \mathbb {R}\),

$$\begin{aligned} H_j(x,t,\tau _j):=\frac{x^2}{2(t-\tau _j)}-\int _{0}^{\tau _j}\frac{u_j^2(0,s)}{2}ds, \end{aligned}$$
(49)

over \(\tau _j\), where

$$\begin{aligned} \tau _j(x,t):= & {} \frac{-y_j(x,t)}{x-y_j(x,t)}t, \qquad \text {for some }y_j(x,t)<0. \end{aligned}$$
(50)

We show now that \(x\mapsto y_j(x,t)\) is non-decreasing. Consequently, it has locally bounded total variation and is continuous at all but countably many points. This is sufficient for the uniqueness of the minimizer of \(G_j\) and the well-posedness of u for almost all \((x,t)\).

Let us fix \(t>0\) and, by an abuse of notation, denote \(y_1:=y_j(x_1,t)\), \(y_2:=y_j(x_2,t)\) for \(x_1,x_2\in [0,x]\). Denote by \(x_0\in [0,x]\) an argument such that \(y_j(x_0,t)=0\). By contradiction we assume that \(x\mapsto y_j(x,t)\) is decreasing and consider three cases.

1. \(0 \le y_2<y_1\) and \(x_0\le x_1<x_2\)

    From the definition of \(y_1\), \(G_j(x_1,t,y_1)\le G_j(x_1,t,y_2)\). Additionally,

    $$\begin{aligned} \left( \frac{x_2-y_1}{t}\right) ^2+\left( \frac{x_1-y_2}{t}\right) ^2<\left( \frac{x_1-y_1}{t}\right) ^2+\left( \frac{x_2-y_2}{t}\right) ^2. \end{aligned}$$

    Finally, using (40b) we obtain the contradiction with the fact that \(y_2\) minimizes \(y\mapsto G_j(x_2,t,y)\)

    $$\begin{aligned} G_j(x_2,t,y_1)\le G_j(x_1,t,y_2)-G_j(x_1,t,y_1)+G_j(x_2,t,y_1)<G_j(x_2,t,y_2). \end{aligned}$$
2. \(y_2<y_1\le 0\) and \(x_1<x_2<x_0\)

    Using the notation in (49)–(50), introduce \(\tau _1:=\tau _j(x_1,t)\) and \(\tau _2:=\tau _j(x_2,t)\). Conditions \(y_2<y_1\) and \(x_1<x_2\) imply that \(\tau _1<\tau _2\) and therefore we can repeat the reasoning in point 1. Again \(H_j(x_1,t,\tau _1)\le H_j(x_1,t,\tau _2)\) and

    $$\begin{aligned} \frac{x_1^2}{t-\tau _2}+\frac{x_2^2}{t-\tau _1}<\frac{x_1^2}{t-\tau _1}+\frac{x_2^2}{t-\tau _2}. \end{aligned}$$

    Using (49), we obtain the contradiction with the fact that \(\tau _2\) minimizes \(\tau _j \mapsto H_j(x_2,t,\tau )\)

    $$\begin{aligned} H_j(x_2,t,\tau _1)\le H_j(x_1,t,\tau _2)-H_j(x_1,t,\tau _1)+H_j(x_2,t,\tau _1)<H_j(x_2,t,\tau _2). \end{aligned}$$
3. \(y_2<0\le y_1\) and \(x_1<x_2\). Note that \(x\mapsto y_j\) is non-decreasing on both intervals \([0,x_0]\) and \([x_0,x]\), so consequently \(x_0\le x_1<x_2\le x_0\), which leads to a contradiction.

We now show that (40) defines a weak solution. To this end define functions \(a_{j\epsilon }, u_{j\epsilon }, f_{j\epsilon }, v_{j\epsilon }\in L^{\infty }([0,1]\times \mathbb {R}_+)\) by

$$\begin{aligned} a_{j\epsilon }(x,t):= & {} \int _{-\infty }^0 e^{-\frac{1}{\epsilon }G_j(x,t,y)}dy+\int ^{\infty }_0e^{-\frac{1}{\epsilon }G_j(x,t,y)}dy,\\ [.1cm] u_{j\epsilon }(x,t):= & {} \frac{1}{a_{j\epsilon }(x,t)}\left( \int _{-\infty }^0\frac{2x-y}{2t}e^{-\frac{1}{\epsilon }G_j(x,t,y)}dy+\int ^{\infty }_0\frac{x-y}{t}e^{-\frac{1}{\epsilon }G_j(x,t,y)}dy\right) ,\\[.1cm] f_{j\epsilon }(x,t):= & {} \frac{1}{a_{j\epsilon }(x,t)}\left( \int _{-\infty }^0\frac{x(x-y)}{2t^2}e^{-\frac{1}{\epsilon }G_j(x,t,y)}dy+\int ^{\infty }_0\frac{(x-y)^2}{2t^2}e^{-\frac{1}{\epsilon }G_j(x,t,y)}dy\right) . \end{aligned}$$

Set additionally

$$\begin{aligned} v_{j\epsilon }(x,t)=\log {a_{j\epsilon }(x,t)}. \end{aligned}$$
(51)

Note now that the functions \((x,t) \mapsto G_j(x,t,y)\) and \((x,t) \mapsto v_{j \epsilon }(x,t)\) are differentiable with respect to x and t; hence, since \(u_{j\epsilon }=-\epsilon \, \partial _x v_{j\epsilon }(x,t)\) and \(f_{j\epsilon }(x,t)=\epsilon \, \partial _t v_{j\epsilon }(x,t)\), we have

$$\begin{aligned} \partial _t u_{j\epsilon }+\partial _x f_{j\epsilon }=0. \end{aligned}$$
(52)

We show that

$$\begin{aligned} \lim _{\epsilon \rightarrow 0^+}u_{j\epsilon }(x,t)=u_j(x,t)\quad \text {and}\quad \lim _{\epsilon \rightarrow 0^+}f_{j\epsilon }(x,t)=f_j(x,t), \end{aligned}$$
(53)

for any \((x,t)\) in which \(x\mapsto y_j(x,t)\) is continuous. Denote by \(\bar{y}_j(x,t)\) the unique minimizer of \(G_j\) at \((x,t)\) and define a mapping

$$\begin{aligned} y\mapsto \bar{G}_j(x,t,y):=G_j(x,t,y)-G_j(x,t,\bar{y}_j); \end{aligned}$$

which attains its minimum, equal to 0, at \(\bar{y}_{j}\). Since \(\bar{G}_j\) is locally Lipschitz continuous (which on the interval \((-\infty ,0)\) follows in particular from the reformulation (49)–(50)), for any \(\delta >0\) the estimate \(\bar{G}_j(x,t,y)\le C_{j1}(x,t)\,|y-\bar{y}_j(x,t)|\) holds for \(y\in [\bar{y}_j(x,t)-\delta ,\bar{y}_j(x,t)+\delta ]\) with a Lipschitz constant \(C_{j1}(x,t)\). Therefore

$$\begin{aligned} a_{j\epsilon }(x,t)=\int _{\mathbb {R}}e^{-\frac{1}{\epsilon }\bar{G}_j(x,t,y)}dy\ge & {} \int _{\bar{y}_j(x,t)-\delta }^{\bar{y}_j(x,t)+\delta }e^{-\frac{C_{j1}(x,t)}{\epsilon }|y-\bar{y}_j|}dy\\= & {} \frac{2}{C_{j1}(x,t)}\left( 1-e^{-\frac{C_{j1}(x,t)\delta }{\epsilon }}\right) \epsilon \ge C_{j2}(x,t)\epsilon , \end{aligned}$$

for all \(\epsilon <\delta \). On the other hand, for y such that \(|y-\bar{y}_j|\ge \delta \), \(\bar{G}_j\) is bounded away from zero and tends to infinity as \(|y|\rightarrow \infty \), hence

$$\begin{aligned} e^{-\frac{1}{\epsilon }\bar{G}_j(x,t,y)}\le e^{-\frac{1}{\epsilon }C_{j3}(x,t,\delta )|y-\bar{y}_j|}. \end{aligned}$$
(54)

Finally we have

$$\begin{aligned} |u_{j\epsilon }-u_j|\le & {} \frac{1}{a_{j\epsilon }(x,t)\,t}\left( \int _{\left\{ y:\,|y-\bar{y}_j|<\delta \right\} }|y-\bar{y}_j|e^{-\frac{1}{\epsilon }G_j(x,t,y)}dy+\int _{\left\{ y:\,|y-\bar{y}_j|\ge \delta \right\} }|y-\bar{y}_j|e^{-\frac{1}{\epsilon }G_j(x,t,y)}dy\right) \\\le & {} \frac{\delta }{t}+\frac{2}{C_{j2} t \epsilon }\int _{0}^{\infty }ye^{-\frac{C_{j3}}{\epsilon }y}dy=\frac{\delta }{t}+\frac{2}{C_{j2}C_{j3}^2 t}\epsilon . \end{aligned}$$

Passing with \(\epsilon \) to 0 (and then with \(\delta \)) we obtain the first limit in (53). The second limit is calculated analogously, and passing with \(\epsilon \rightarrow 0\) in (52) we conclude that u is a weak solution to (15).
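The limits in (53) are an instance of the Laplace method, and can be observed numerically for the classical half-line analogue of \(u_{j\epsilon }\). In the following Python sketch the datum, the grid and the evaluation point are assumptions of the illustration; it evaluates the quotient \(\int \frac{x-y}{t}e^{-G/\epsilon }dy \big / \int e^{-G/\epsilon }dy\) and compares it with \(\frac{x-\bar{y}}{t}\):

```python
import numpy as np

ys = np.linspace(-3.0, 4.0, 40001)
dy = ys[1] - ys[0]
u0 = np.maximum(ys, 0.0)                              # assumed datum u0(y) = max(y, 0)
U0 = np.concatenate([[0.0], np.cumsum(0.5 * (u0[1:] + u0[:-1]) * dy)])

def u_eps(x, t, eps):
    """Laplace-type average (1/a_eps) * int (x-y)/t * exp(-G/eps) dy."""
    G = U0 + (x - ys) ** 2 / (2.0 * t)
    w = np.exp(-(G - G.min()) / eps)   # factor exp(-G_min/eps) cancels in the quotient
    return np.sum((x - ys) / t * w * dy) / np.sum(w * dy)

x, t = 1.0, 1.0
# exact minimizer: d/dy [y^2/2 + (1-y)^2/2] = 0  =>  y_bar = 0.5, u = 0.5
errors = [abs(u_eps(x, t, eps) - 0.5) for eps in (0.1, 0.01, 0.001)]
assert errors[0] > errors[2]
assert errors[2] < 1e-3
```

The subtraction of `G.min()` before exponentiating is the standard numerical stabilisation; it does not change the quotient.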

Formula (40) defines an edge-entropy solution since it satisfies Oleinik's one-sided inequality (18). Indeed, by the fact that \(x\mapsto y_j(x,t)\) is non-decreasing and positive, for any \(x_1\le x_2\), \(x_1,x_2\in [0,l_j]\) and a.e. \(t>0\)

$$\begin{aligned} u_j(x_2,t)-u_j(x_1,t)=\frac{x_2-y_j(x_2,t)}{t}-\frac{x_1-y_j(x_1,t)}{t}\le \frac{x_2-x_1}{t}. \end{aligned}$$

For the case of negative \(y\)'s we get

$$\begin{aligned} u_j(x_2,t)-u_j(x_1,t)= & {} \frac{x_2}{t-\tau (x_2,t)}-\frac{x_1}{t-\tau (x_1,t)}=\frac{x_2-x_1}{t-\tau (x_2,t)} + \left( \frac{x_1}{t-\tau (x_2,t)} -\frac{x_1}{t-\tau (x_1,t)}\right) \\\le & {} \frac{x_2-x_1}{t-\tau (x_2,t)}. \end{aligned}$$

Since the transmission conditions in (15c) are defined uniquely, we arrive at an entropy solution.

Finally, on every edge the weak solution is a piece-wise \(C^1\) function, so taking the limit \(x_2-x_1\rightarrow 0\),

$$\begin{aligned} \partial _x u_j(x_1,t)=\lim _{x_2\rightarrow x_1}\frac{u_j(x_2,t)-u_j(x_1,t)}{x_2-x_1}\le \max \left\{ \frac{1}{t},\frac{1}{t-\tau (x_1,t)}\right\} , \end{aligned}$$
(55)

we arrive at the estimate on \(u_x\) at a.e. \((x_1,t)\). \(\square \)
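Oleinik's one-sided inequality can also be observed numerically for the classical formula on a single edge: difference quotients of u stay below \(1/t\) even across a shock, where they are large and negative. The datum and grid in the sketch below are assumptions of the illustration:

```python
import numpy as np

# Lax-Oleinik solution u(x,t) = (x - y(x,t))/t with y the grid minimizer of
# G(x,t,y) = U0(y) + (x-y)^2/(2t); datum chosen so that a shock forms.
ys = np.linspace(-4.0, 4.0, 16001)
u0 = np.where(ys < 0.0, 2.0, 1.0)          # faster wave behind a slower one
U0 = np.concatenate([[0.0], np.cumsum(0.5 * (u0[1:] + u0[:-1]) * np.diff(ys))])

def u(x, t):
    G = U0 + (x - ys) ** 2 / (2.0 * t)
    return (x - ys[np.argmin(G)]) / t

t = 1.0
xs = np.linspace(-2.0, 3.0, 400)
vals = np.array([u(x, t) for x in xs])
quotients = np.diff(vals) / np.diff(xs)
# one-sided inequality: quotients bounded above by 1/t ...
assert quotients.max() <= 1.0 / t + 1e-8
# ... while at the shock (around x = 1.5) they are large and negative
assert quotients.min() < -1.0
```

The upper bound follows directly from the monotonicity of the grid minimizer: \(\frac{\Delta x-\Delta y}{t\,\Delta x}\le \frac{1}{t}\) whenever \(\Delta y\ge 0\).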

Note that the formulas (40) are counterparts of the Lax–Oleinik formula, see [18, Eq. IV.1.3], for Burgers' equation on a tree. For a graph that satisfies the condition

$$\begin{aligned} \text {deg}_+(v_i)\le 1,\qquad \text {for any}\,\, i\in I, \end{aligned}$$
(56)

it is possible to relate this solution to the standard formulation on the straight line. The core property in this representation is to derive coefficients \(\mathcal {B}^{01}(u)\) that are independent of the flow when we move backward along the characteristic line.

Fig. 3

Two \(v_0\)-subgraphs of honeycomb tree, for \(v_0\) being respectively a vertex of (i) a first kind (ii) a second kind. Illustration for Example 4 and considerations in Sect. 5.1

Example 4

In order to explain this situation, consider again the two kinds of nodes in the honeycomb tree, see Fig. 3(i)(ii), and the transmission conditions in vertex \(v_0\) characterised by the minimal transmission solver \((TS^m)^{\star }\).

  1. (i)

    \(v_0\) is of a first kind

    The idea now is to define the solution on the graph at a point \((x,t)\) on the edge \(e_3\) along the path \(e_0e_1e_3\), where by \(e_0\) we understand the half line \(e_0=(-\infty ,0)\) with initial condition \(\mathring{u}_0=0\) and transmission conditions between edges \(e_0\) and \(e_1\) that conserve both the mass and the flux, namely

    $$\begin{aligned} u_0(1,t)=u_1(0,t),\qquad \text {for almost all}\,\,t>0. \end{aligned}$$
    (57)

    We change the reasoning in the proof of Theorem 2 in the following way. Considering the characteristic line passing through \((x_0,t_0)\) with slope \(v_0\) (assume \(y_0=x_0-v_0t_0<0\)), we allow it to go through the vertex and continue until it hits the initial line. Using the formula for the transmission conditions (25c)–(26), we conclude that the characteristic line passes through the point \(\left( 1,-\frac{y_0}{v_0}\right) \) on the edge \(e_1\) with slope \(\sqrt{2}v_0\). Then it intersects either \(e_1\) or \(e_0\) at \((y_1,0)\). Finally, the explicit formula for the solution is given by

    $$\begin{aligned} u_3(x_0,t_0)=\frac{x_0-y_3(x_0,t_0)}{t_0} \end{aligned}$$

    where \(y_3\) minimizes the function

    $$\begin{aligned} y\mapsto G_3(x_0,t_0,y)=\int _{-\infty }^{y_1} \mathring{u}_1(s)ds+\frac{(x_0-y)^2}{2t_0}. \end{aligned}$$
  2. (ii)

    \(v_0\) is of a second kind The first problem in repeating the reasoning from (i) for \(v_0\) is the lack of uniqueness of the path, since we can choose either \(e_0e_1e_3\) or \(e_0e_2e_3\). The more essential problem, however, is the fact that we can define neither the slope \(v_1\) of the characteristic line on \(e_1\), nor its counterpart \(v_2\) on \(e_2\). Such a representation does not follow from the transmission condition

    $$\begin{aligned} v_0=\frac{\sqrt{2}}{2}\left( v_1+v_2\right) . \end{aligned}$$

Example 4 indicates that condition (56) allows one to choose the unique path from any point \(x\in \mathcal {G}\) to the source and ensures the well-posedness of the following procedure. \(\mathcal {G}\) is a finite tree, so it is possible to re-enumerate the edges in such a way that for any two edges \(e_{s}, e_{j}\in E\) and any chosen path \(e_{s}=e_{k_1},\ldots ,e_{k_l}=e_{j}\) we have \(k_i<k_{i+1}\) for all \(i\in 1,\ldots ,l-1\). Fix \(e_j\in E\) and define a path \(P_{j}=e_{k_1}e_{k_2}\ldots e_{k_{N_j}}\) of length \(L_j=\sum _{s=1}^{N_j} l_{k_s}\) that starts in a source and ends in \(e_j\). Now define \(u_{P_j}:(-\infty ,l_j]\times [0,\infty )\rightarrow \mathbb {R}_+\) and \(\mathring{u}_{P_j}:(-\infty ,l_j]\rightarrow \mathbb {R}_+\) such that

$$\begin{aligned} u_{P_j}(x,t):= & {} \sum _{s=1}^{N_j}\, \left( \prod _{p=s}^{N_j-1}\frac{1}{b_{k_{p+1}k_{p}}^{01}}\right) \,\, u_{k_s}\left( x+\sum _{p=s}^{N_j-1}l_p,t\right) \, \chi _{\left( -\sum _{p=s}^{N_j-1}l_p, -\sum _{p=s+1}^{N_j-1}l_p\right] }(x),\\ \mathring{u}_{P_j}(x):= & {} u_{P_j}(x,0). \end{aligned}$$

Proposition 1

The solution to the problem (15) for a finite tree \(\mathcal {G}\) that satisfies (56) can be related to the mono-dimensional case using the counterpart of the Lax–Oleinik formula on the path sub-graph; namely, for any \(j\in J\) the formula is given by

$$\begin{aligned} u_j(x,t)=\frac{x-y_j(x,t)}{t}, \end{aligned}$$

where \(y_j\) minimizes the function

$$\begin{aligned} y\mapsto G_j(x,t,y)=\int _{-\infty }^y \mathring{u}_{P_j}(s)ds+\frac{(x-y)^2}{2t}. \end{aligned}$$
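The path formula of Proposition 1 can be sketched numerically for the simplest admissible tree: a path of two unit edges with trivial transmission weights. In the following Python illustration the weights \(b^{01}=1\), the edge data and all names are assumptions, not the general construction:

```python
import numpy as np

# Path datum for e_2 on a two-edge path graph: \mathring{u}_1 on (-1, 0],
# \mathring{u}_2 on (0, 1], extended by 0 on (-inf, -1]; b^{01} = 1 assumed.
def u0_path(y):
    y = np.asarray(y, dtype=float)
    out = np.zeros_like(y)
    out[(y > -1.0) & (y <= 0.0)] = 1.0     # \mathring{u}_1 = 1 (assumed)
    out[(y > 0.0) & (y <= 1.0)] = 0.5      # \mathring{u}_2 = 0.5 (assumed)
    return out

ys = np.linspace(-3.0, 1.0, 16001)
vals0 = u0_path(ys)
U0 = np.concatenate([[0.0], np.cumsum(0.5 * (vals0[1:] + vals0[:-1]) * np.diff(ys))])

def u2(x, t):
    """Solution on e_2 at (x, t), x in [0, 1], via the path Lax-Oleinik formula."""
    G = U0 + (x - ys) ** 2 / (2.0 * t)
    return (x - ys[np.argmin(G)]) / t

# for small t the faster wave from e_1 has not yet reached x = 0.5 on e_2
assert abs(u2(0.5, 0.1) - 0.5) < 1e-2
```

For larger times the minimizer moves into the \(e_1\)-part of the path datum, which is how the formula propagates information through the vertex.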

Finally, it is worth underlining that the considerations presented in the proof of Theorem 2 can be generalised in a number of directions. Firstly, we can examine a conservation law on the edges of a network, coupled by the linear transmission of mass that satisfies the conservation of flux condition, for \(f\in C^1([0,\infty ))\) such that

$$\begin{aligned} f''>0,\qquad f(0)=0\qquad \text {and}\qquad \lim _{u\rightarrow \infty } \frac{f(u)}{u}=+\infty . \end{aligned}$$
(58)

On the other hand, we can introduce some sources of mass in vertices \(v_i\) such that \(\phi _{ij}^+=0\) for any \(j\in J\).

We formalise those observations in the following proposition.

Proposition 2

Let f be a flux function that satisfies (58). For any \(\mathring{u}\in L^{\infty }([0,1],\mathbb {R}_+^m)\) and \(\bar{u}\in L^{\infty }([0,T],\mathbb {R}_+^m)\), the proof of Theorem 2 can be repeated for the following generalisation of problem (15), for almost all \(t\in [0,T]\),

$$\begin{aligned} \sum _{j\in J}\int _0^T \int _{0}^{l_j} \left( u_j \partial _t \phi _j + f(u_j )\partial _x\phi _j\right) dxdt&= \sum _{j\in J}\int _{0}^{l_j} \mathring{u}_j(x)\phi _j(x,0)dx, \end{aligned}$$
(59a)
$$\begin{aligned} u_j(x,0)&=\mathring{u}_j(x)>0, \qquad x\in [0,l_j],\,j\in J, \end{aligned}$$
(59b)
$$\begin{aligned} u_j(0,t)&=\sum _{\left\{ s\in J:\,e_s\in D_i^{in}\right\} }\,b^{01}_{js}(u) u_s(1,t),\qquad \text {for}\,\,\phi _{ij}^-\ne 0,\,\,\text {deg}_+(v_i)>0 \end{aligned}$$
(59c)
$$\begin{aligned} \sum _{\left\{ j\in J:\, e_j\in D_i^{in}\right\} } f(u_j(l_j,t))&=\sum _{\left\{ j\in J:\, e_j\in D_i^{out}\right\} } f(u_j(0,t)),\qquad \text {for}\,\,\text {deg}_+(v_i)>0, \end{aligned}$$
(59d)
$$\begin{aligned} u_j(0,t)&=\bar{u}_j(t)\ge 0,\qquad \text {for}\,\,\phi _{ij}^-\ne 0,\,\,\text {deg}_+(v_i)=0. \end{aligned}$$
(59e)

Proof

The proof of this fact can be found in [18, Thm. 2.1]. \(\square \)

4.3 Dense Subclass of Positive Solutions

Note that for positive solutions one can distinguish a special class of functions which is preserved under the flow. This class is the same as for the classical mono-dimensional Burgers' equation.

Proposition 3

Let \(\mathcal {G}\) be a metric tree. We introduce a class of functions \(\mathcal {W}^+\) such that

$$\begin{aligned} f \in \mathcal {W}^+ \text{ iff } \{ f \in B(\mathcal {G}): f \text{ is } \text{ piece-wise } C^1 \text{ non-decreasing } \text{ non-negative } \text{ function, }\nonumber \\ \text{ the } \text{ number } \text{ of } \text{ jumps } \text{ is } \text{ finite } \text{ and } \text{ side } \text{ derivatives } \text{ exist } \text{ at } \text{ each } \text{ point } \text{ of } \mathcal {G} \} \end{aligned}$$
(60)

Then the class \(\mathcal {W}^+\) is preserved by the flow generated by the Burgers’ equation (15), i.e. if \(\mathring{u}\in \mathcal {W}^+\) then \(u(t)\in \mathcal {W}^+\) for any \(t>0\).

Proof

Let \(\mathring{u}\in \mathcal {W}^+\). Since in the interior of each edge we have the mono-dimensional situation, the class \(\mathcal {W}^+\) is preserved there. The only element that needs to be clarified is the transmission condition, namely that \(u_{j}(0,t)\) is piece-wise \(C^1\), non-decreasing with a finite number of jumps for every \(j\in J\). The properties of the solution going out from an arbitrary vertex \(v_i\) in the tree \(\mathcal {G}\) can be considered as the composition of flows going out of two vertices \(v_i'\) and \(v_i''\) which are associated with \(v_i\) by the following relation

$$\begin{aligned} \text {deg}_+(v_i')=\text {deg}_+(v_i),\quad \text {deg}_-(v_i')=1;\qquad \text {and}\qquad \text {deg}_+(v_i'')=1,\quad \text {deg}_-(v_i'')=\text {deg}_-(v_i). \end{aligned}$$
(61)

See also Fig. 4. We can easily see that the vertex \(v_i'\) joins the flow, while \(v_i''\) splits it into outgoing edges. In the case of \(v_i'\), the flow in e(0) is the square root of the sum of the squared flows of the incoming edges. Since on each \(e_j\in D_i^{in}\) the flow is a non-decreasing \(C^1\) function, these properties are preserved for e(0) at all but a finite number of points. Since for \(v_i''\) the transmission conditions at the head of the outgoing edges are just proportions of the flow coming to the tail of edge e, the fine properties are guaranteed.

Finally, we note that the number of jumps can be multiplied by \(\text {deg}_-(v_i)\) as the shock crosses \(v_i\), but the finiteness of the graph ensures control of the number of jumps. We also recall that under the evolution some jumps may disappear. \(\square \)
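The two elementary vertices \(v_i'\) and \(v_i''\) from (61) admit a direct numerical sketch: joining traces by the Kirchhoff rule and splitting one trace by fixed proportions. The traces and the proportions \(\theta _z\) in the following Python illustration are invented for the example:

```python
import numpy as np

def join(traces):
    """Outgoing trace at v_i': Kirchhoff rule, square root of the sum of squares."""
    return np.sqrt(np.sum(np.asarray(traces) ** 2, axis=0))

def split(trace, thetas):
    """Outgoing traces at v_i'': proportions theta_z of the flux, sum(thetas) = 1."""
    return [np.sqrt(th) * trace for th in thetas]

t = np.linspace(0.0, 1.0, 101)
f1, f2 = t, 0.5 + 0.5 * t          # two non-negative, non-decreasing C^1 traces
out = join([f1, f2])
assert np.allclose(out ** 2, f1 ** 2 + f2 ** 2)   # squared fluxes add up
assert np.all(np.diff(out) >= 0.0)                # monotonicity is preserved
parts = split(out, [0.25, 0.75])
assert np.allclose(parts[0] ** 2 + parts[1] ** 2, out ** 2)
```

The assertions mirror the two observations used in the proof: the join preserves monotone non-negative traces, and the split conserves the flux since the proportions sum to one.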

Fig. 4

Transformation of arbitrary vertex \(v_i\) in a tree \(\mathcal {G}\), illustration (i), into two vertices \(v_i'\), \(v_i''\), illustration (ii), according to formula (61) introduced in Proposition 3

In further considerations we will also use the class \(\mathcal {W}^+_{opp}\) such that

$$\begin{aligned}&f \in \mathcal {W}^+_{opp} \text{ iff } \{ f \in B(\mathcal {G}): f \text{ is } \text{ piece-wise } C^1\quad \hbox { non-increasing non-negative function,} \nonumber \\&\quad \text{ the } \text{ number } \text{ of } \text{ jumps } \text{ is } \text{ finite } \text{ and } \text{ side } \text{ derivatives } \text{ exist } \text{ at } \text{ each } \text{ point } \text{ of } \mathcal {G} \}. \end{aligned}$$
(62)

Definition 11

Let u be a function defined over the graph \(\mathcal {G}\). We say that \(u\in TV(\mathcal {G})\) iff

$$\begin{aligned} \Vert u\Vert _{TV(\mathcal {G)}}=\sum _{j\in J} \Vert u_j\Vert _{TV(e_j)} \text{ is } \text{ finite }. \end{aligned}$$
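Definition 11 sums edgewise total variations; on sampled data this is a one-line computation. The graph encoding (a dict of per-edge arrays) and the test functions in the following Python sketch are assumptions of the illustration:

```python
import numpy as np

def tv_edge(values):
    """Total variation of a function sampled on one edge."""
    return float(np.abs(np.diff(values)).sum())

def tv_graph(edge_values):
    """||u||_{TV(G)} = sum over edges of ||u_j||_{TV(e_j)}."""
    return sum(tv_edge(v) for v in edge_values.values())

x = np.linspace(0.0, 1.0, 1001)
u = {
    "e1": x,                                  # TV = 1
    "e2": np.where(x < 0.5, 0.0, 2.0),        # one jump of height 2, TV = 2
}
assert abs(tv_graph(u) - 3.0) < 1e-12
```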

Let us start with estimates of the TV-norm of non-negative solutions for a specified family of graphs; they can be generalised to arbitrary metric trees.

Lemma 1

Let \(\mathcal {G}\) be a metric honeycomb tree with one source \(e_1(0)\) and one sink \(e_m(1)\). For u being a solution to the problem (15) given by Theorem 2, the following estimate holds

$$\begin{aligned} \sup _{t\in [0,T]} \Vert u^2(t)\Vert _{TV(\mathcal {G})} +\int _0^T|\partial _t u_m^2|(1,t) dt \le 2^{\kappa _\mathcal {G}}\left( \Vert \mathring{u}^2\Vert _{TV(\mathcal {G})} + \int _0^T |\partial _t u_1^2|(0,t)dt\right) , \end{aligned}$$
(63)

where \(\kappa _\mathcal {G}\) depends on the structure of the graph.

Proof

Let us first recall that the solutions u given by Theorem 2 are non-negative. We start by showing that \(u\in \mathcal {W}^+\) restricted to an arbitrary edge \(e_j\) satisfies

$$\begin{aligned} \frac{d}{dt} \int _{e_j} |\partial _x u_{j}^2|dx + |\partial _t u_j^2|(1,t) \le |\partial _t u_{j}^2|(0,t). \end{aligned}$$
(64)

If u is from \(\mathcal {W}^+\)-class, then for \(u_j\) there exists a finite sequence \(0=\xi _0(t)<\xi _1(t)< ... < \xi _{K(t)}(t)=1\) for a.e. \(t\in [0,T)\) such that

$$\begin{aligned} u_j(x,t)=\sum _{k=0}^{K(t)-1} u_j(x,t)\chi _{[\xi _k(t),\xi _{k+1}(t)]}(x), \end{aligned}$$

and on each interval \([\xi _k(t),\xi _{k+1}(t)]\) u is non-decreasing. We extend it by the left- and right-hand side limits. Furthermore, K(t) is piece-wise constant, so there exists a finite sequence \(0<t_0<t_1< ...< t_M<T\) such that K(t) is constant over each interval \((t_i,t_{i+1})\). Note also that K(t) is decreasing, as \(u_1(0,t)=0\) by the Kirchhoff condition. In accordance with the previous notation we distinguish the left and right limits at points \(\xi _k\) by \(u(\xi ^{\mp }_k(t),t)\) respectively. We have

$$\begin{aligned}&\frac{d}{dt} \int _{[0,1]} |\partial _x u_j^2| dx = \nonumber \\&\frac{d}{dt} \left[ \sum _{k=0}^{K(t)-1} (u_j^2(\xi _{k+1}^-(t),t) - u_j^2(\xi _k^+(t),t)) {+ \sum _{k=0}^{K(t)-2} (u_j^2(\xi _{k+1}^-(t),t) - u_j^{2}(\xi _{k+1}^+(t),t))} \right] . \end{aligned}$$
(65)

Since, for a.e. \(t\in (t_i,t_{i+1})\), K(t) is constant, then for \(0<k<K(t)\)

$$\begin{aligned} \frac{d}{dt} u_j(\xi _{k+1}^-(t),t) = \partial _t u_j(\xi _{k+1}^-(t),t) + \partial _x u_j(\xi _{k+1}^-(t),t) \frac{d\xi _{k+1}(t)}{dt}. \end{aligned}$$

But by the Rankine–Hugoniot and Lax conditions for \(\xi _{k+1}\), see (17), we conclude

$$\begin{aligned} \frac{d}{dt} u^2_j(\xi _{k+1}^-(t),t) = u_j(\xi _{k+1}^-(t),t) \partial _x u_j(\xi _{k+1}^-(t),t) \left( u_j(\xi _{k+1}^+(t),t) - u_j(\xi _{k+1}^-(t),t)\right) \le 0. \end{aligned}$$

In the same manner we prove that

$$\begin{aligned} \frac{d}{dt} u_j^2(\xi _{k+1}^+(t),t) \ge 0. \end{aligned}$$

Taking into account the boundary terms coming from \(k=0,K(t)\) we find that

$$\begin{aligned} \frac{d}{dt} \int _{[0,1]} |\partial _x u_j^2| dx\le \partial _t u_j^2(1,t)-\partial _t u_j^2(0,t). \end{aligned}$$

The boundary terms \(-\partial _t u_j^2(z,t)=2u_j^2(z,t)\partial _x u_j(z,t)\), \(z=0,1\) are non-negative since we are working in \(\mathcal {W}^+\)-class, which leads to (64).

The class \(\mathcal {W}^+\) is dense in \(TV(\mathcal {G})\), which allows us to approximate any TV-flow by an element of the \(\mathcal {W}^+\)-class. In order to pass to the limit we need a global estimate, namely one which is independent of K(t). Integrating (64) over \((t_i,t_{i+1})\) we get

$$\begin{aligned} \sup _{t\in [t_i,t_{i+1}]} \left( \Vert u_j^2\Vert _{TV(e_j)}(t) + \int _{t_i}^{t} |\partial _t u_j^2|(1,t)dt \right) \le \int _{t_i}^{t_{i+1}} |\partial _t u_j^2|(0,t)dt + \Vert u_j^2(t_i)\Vert _{TV(e_j)}. \end{aligned}$$

Summing up over all intervals \((t_i,t_{i+1})\) we get

$$\begin{aligned} \Vert u_j^2(T)\Vert _{TV(e_j)} + \int _{0}^{T} |\partial _t u_j^2|(1,t)dt \le \int _{0}^{T} |\partial _t u_j^2|(0,t)dt + \Vert \mathring{u}_j^2\Vert _{TV(e_j)}. \end{aligned}$$
(66)

In order to make the TV-norm of \(u_j^2\) T-independent we transform (66) into

$$\begin{aligned} \sup _{[0,T]} \Vert u_j^2\Vert _{TV(e_j)} + \int _{0}^{T} |\partial _t u_j^2|(1,t)dt \le 2\left( \int _{0}^{T} |\partial _t u_j^2|(0,t)dt + \Vert \mathring{u}_j^2\Vert _{TV(e_j)}\right) . \end{aligned}$$
(67)

Now we are ready to construct an approximation sequence tending to the desired solution for some general data \(\mathring{u}_j\in TV(e_j)\). For given \(\epsilon >0\) and \(\bar{u}_j=u_j(0,\cdot ) \in TV(0,T)\) we claim there exist

$$\begin{aligned} \mathring{u}_{j\epsilon } \in \mathcal {W}^+ \text{ and } \bar{u}_{j\epsilon } \in \mathcal {W}^+_{opp} \end{aligned}$$

such that

$$\begin{aligned} \Vert \mathring{u}_{j\epsilon }^2 - \mathring{u}_j^2\Vert _{L^1(e_j)}< \epsilon \text{ and } \Vert \bar{u}_{j\epsilon }^2 - \bar{u}_j^2\Vert _{L^1(0,T)} < \epsilon \end{aligned}$$

and

$$\begin{aligned} \Vert \mathring{u}_{j\epsilon }^2\Vert _{TV(e_j)} \le \Vert \mathring{u}_j^2\Vert _{TV(e_j)} \text{ and } \Vert \bar{u}^2_{j\epsilon }\Vert _{TV(0,T)} \le \Vert \bar{u}_j^2\Vert _{TV(0,T)}. \end{aligned}$$

So the considerations for \(u\in \mathcal {W}^+\) deliver the existence of solutions \(u_{\epsilon }\) on the time interval [0, T], and (67) implies the following estimate independent of \(\epsilon \):

$$\begin{aligned}&\sup _{[0,T]} \Vert u^2_{j\epsilon }\Vert _{TV(e_j)} + \int _{0}^{T} |\partial _t u^2_{j\epsilon }|(1,t)dt \le 2\left( \int _{0}^{T} |\partial _t u^2_{j\epsilon }|(0,t)dt + \Vert \mathring{u}^2_{j\epsilon }\Vert _{TV(e_j)}\right) \\&\le 2\left( \int _{0}^{T} |\partial _t \bar{u}_j^2|(0,t)dt + \Vert \mathring{u}_j^2\Vert _{TV(e_j)}\right) . \end{aligned}$$

Since \(\partial _t u_{j\epsilon }=-\frac{1}{2}\partial _x u_{j\epsilon }^2\), the above estimates imply the uniform bound for

$$\begin{aligned} \partial _t u_\epsilon \in L^\infty (0,T;\mathcal {M}(e_j)). \end{aligned}$$

This leads, up to a subsequence \(\epsilon \rightarrow 0\), to

$$\begin{aligned} \begin{array}{lcr} u_{j\epsilon } \rightarrow u_j &{} \text{ in } &{} L^1([0,T] \times e_j),\\ u_{j\epsilon } \rightharpoonup ^*u_j &{} \text{ in } &{} L^\infty ([0,T]\times e_j),\\ u_{j\epsilon }|_{x=l_j} \rightarrow u_j|_{x=l_j} &{} \text{ in } &{} L^1(0,T). \end{array} \end{aligned}$$

In particular, we have the point-wise convergence in the domain and at the boundary \(\{x=l_j\}\). So we conclude that u is the solution to Burgers' equation on \(e_j\).

Finally, to obtain the TV-estimate (63) for the whole graph we proceed recursively from the edge \(e_j\) to the source \(e_1(0)\), specifying the right hand side of (64). \(\mathcal {G}\) is a metric honeycomb tree, so there is a restricted number of vertex types, see Definition 2 and the remarks below it.

Consider a vertex of the first kind \(v_i\), and denote edges adjacent to it in the following way \(D_i=(\left\{ e_j\right\} ,\left\{ e_k,e_l\right\} )\). By the transmission conditions (15d)

$$\begin{aligned} u_z^2(0,t)=\theta _z u_{j}^2(1,t) \text{ with } \theta _z \le 1, \,\,z=k,l. \end{aligned}$$

Then by differentiation in time we find that

$$\begin{aligned} |\partial _t u_z^2|(0,t)=\theta _z |\partial _t u_{j}^2|(1,t),\quad z=k,l. \end{aligned}$$

So the identity (64) gives a term on the left hand side which dominates the terms \(|\partial _t u_{z}^2|(0,t)\), \(z=k,l\), namely

$$\begin{aligned} \frac{d}{dt}\left( \int _{e_j} 2 |\partial _x u_{j}^2|dx + \int _{e_k} |\partial _x u_{k}^2|dx + \int _{e_l} |\partial _x u_{l}^2|dx\right) + |\partial _t u_k^2 |(1,t) + |\partial _t u_l^2 |(1,t)\le 2|\partial _t u_j^2|(0,t). \end{aligned}$$
(68)

Note that \(v_i\) has two out-going edges and \(\theta _z\le 1\) for \(z=k,l\); therefore, the estimate for \(e_{j}\) is taken twice.

In the second case, when \(v_i\) is of the second kind, using the notation \(D_i=(\left\{ e_j,e_k\right\} ,\left\{ e_l\right\} )\), we have

$$\begin{aligned} u_{j}^2(1,t)+u_{k}^2(1,t)=u_{l}^2(0,t). \end{aligned}$$

Then we easily deduce that

$$\begin{aligned} |\partial _t u_l^2|(0,t) \le | \partial _t u_{j}^2|(1,t)+ | \partial _t u_{k}^2|(1,t), \end{aligned}$$
(69)

and analogously to (68) we obtain

$$\begin{aligned} \frac{d}{dt}\left( \int _{e_j} |\partial _x u_{j}^2|dx + \int _{e_k} |\partial _x u_{k}^2|dx + 2\int _{e_l} |\partial _x u_{l}^2|dx\right) + |\partial _t u_l^2 |(1,t)\le |\partial _t u_j^2|(0,t)+|\partial _t u_k^2|(0,t). \end{aligned}$$
(70)

Finally, taking a vertex from the path graph such that \(D_i=(\left\{ e_j\right\} ,\left\{ e_k\right\} )\), we have conservation of mass in the vertex and consequently

$$\begin{aligned} \frac{d}{dt}\left( \int _{e_j} |\partial _x u_{j}^2|dx + \int _{e_k} |\partial _x u_{k}^2|dx \right) + |\partial _t u_k^2 |(1,t)\le |\partial _t u_j^2|(0,t). \end{aligned}$$
(71)

Repeating the above steps iteratively, and taking all edges with the required multiplicity \(\kappa _j\), which depends on the degree of the vertex and its position in the graph, we obtain

$$\begin{aligned} \frac{d}{dt} \sum _{j\in J}\int _{e_j} 2^{\kappa _j}|\partial _x u_{j}^2|dx + |\partial _t u_N^2 |(1,t) \le 2^{\kappa _1}|\partial _t u_1^2|(0,t), \end{aligned}$$

since the graph \(\mathcal {G}\) has exactly one source \(e_1(0)\) and one sink \(e_m(1)\). Integration in time then implies (63). \(\square \)

Remark 1

The estimate derived in Lemma 1 can be extended to an arbitrary metric tree \(\mathcal {G}\) having sources \(e_j(0)\), \(j=1,\ldots , s\), and sinks \(e_j(l_j)\) for \(j=m-S+1,\ldots , m\):

$$\begin{aligned} \sup _{t\in [0,T]} \Vert u^2(t)\Vert _{TV(\mathcal {G})} +\int _0^T\sum _{j=m-S+1}^m|\partial _t u_j^2|(l_j,t) dt \le C_\mathcal {G}\left( \Vert \mathring{u}^2\Vert _{TV(\mathcal {G})} + \int _0^T\sum _{j=1}^s |\partial _t u_j^2|(0,t)dt\right) . \end{aligned}$$
(72)

Proof

The general case is slightly more involved. Assume that for the vertex \(v_i\)

$$\begin{aligned} D_i^{in}=\{e_{k_1}, ... , e_{k_p}\} \text{ and } D_i^{out}=\{ e_{r_1}, ... , e_{r_q} \}. \end{aligned}$$

Then of course by the Kirchhoff condition

$$\begin{aligned} \sum _{i=1}^p u^2_{k_i}(l_{k_i},t)=\sum _{j=1}^q u^2_{r_j}(0,t), \end{aligned}$$

and for appropriate constants \(\theta _{r_j}\le 1\), \(j=1,\ldots ,q\),

$$\begin{aligned} |\partial _t u_{r_j}^2|(0,t)\le \theta _{r_j} \sum _{i=1}^p |\partial _t u_{k_i}^2|(l_{k_i},t). \end{aligned}$$

Taking into account multiplicity of incoming and outgoing edges, it leads to

$$\begin{aligned}&\frac{d}{dt}\left( \text {deg}_{-}(v_i)\sum _{i=1}^p\int _{e_{k_i}} |\partial _x u_{k_i}^2|dx + \text {deg}_{+}(v_i)\sum _{j=1}^q\int _{e_{r_j}} |\partial _x u_{r_j}^2|dx \right) + \text {deg}_{+}(v_i)\sum _{j=1}^q |\partial _t u_{r_j}^2 |(l_{r_j},t)\\&\quad \le \text {deg}_{-}(v_i)\sum _{i=1}^p |\partial _t u_{k_i}^2|(0,t). \end{aligned}$$

The rest of the estimates follows as for the honeycomb tree in Lemma 1. \(\square \)
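The vertex-wise inequality behind (72) is a triangle-type bound; on sampled traces it can be checked directly. The number of edges, the proportions \(\theta _{r_j}\) and the traces in the following Python sketch are invented for the illustration:

```python
import numpy as np

# One general vertex: p incoming and q outgoing edges; outgoing squared
# traces are fixed proportions theta_{r_j} of the incoming total.
rng = np.random.default_rng(0)
p, q, n = 3, 2, 200
in_sq = np.abs(rng.normal(size=(p, n)))    # sampled squared traces u_{k_i}^2
theta = np.array([0.4, 0.6])               # proportions, sum = 1
total = in_sq.sum(axis=0)
out_sq = theta[:, None] * total

# Kirchhoff condition: squared traces balance at the vertex
assert np.allclose(out_sq.sum(axis=0), total)
# discrete analogue of |d/dt u_{r_j}^2| <= theta_{r_j} * sum_i |d/dt u_{k_i}^2|
lhs = np.abs(np.diff(out_sq, axis=1))
rhs = theta[:, None] * np.abs(np.diff(in_sq, axis=1)).sum(axis=0)
assert np.all(lhs <= rhs + 1e-12)
```

The bound holds for any traces by the triangle inequality applied to the increments, which is exactly the step used before summing over the vertices with the degree multiplicities.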

5 Stitching Solutions on the Honeycomb Tree

In this part we generalise the considerations from Sect. 4.1 to the case of solutions of an arbitrary sign. Unsurprisingly, the major problem of this construction is the determination of physically justified behaviour in vertices when the velocities on adjacent edges have different signs. In the whole of Sect. 5 we restrict ourselves to honeycomb trees, since they consist of exactly two kinds of vertices which additionally provide the same possible cases to consider.

5.1 Derivation of Transmission Conditions

To keep the well-posedness of the solution in terms of the distributional formulation, see the reasoning in (13)–(14), we are required to control the Kirchhoff conditions (14). Using the notation introduced in Sect. 4.1, we denote by \(t^{\mp }\) the time shortly before/after the flow through the vertex at t. Denote the set of edges through which the mass enters the vertex \(v_i\) at \(t>0\) by \(\mathcal {F}_i(t):=\mathcal {F}_i^{in}(t)\cup \mathcal {F}_i^{out}(t)\), where

$$\begin{aligned} \mathcal {F}_i^{in}(t):=\left\{ e_j\in D_i^{in}:\,\,u_j(1,t^-)\ge 0\right\} \quad \text {and} \quad \mathcal {F}_i^{out}(t):=\left\{ e_j\in D_i^{out}:\,\,u_j(0,t^-)\le 0\right\} . \end{aligned}$$
(73)

Furthermore, we need to specify the direction of a flow through the vertex. We say that flow agrees with (is opposite to) the direction of a vertex \(D_i=(D_i^{in},D_i^{out})\) at \(t>0\), for some \(v_i\in V\), if

$$\begin{aligned} \sum _{e_j\in \mathcal {F}_i^{in}(t)} u_j^2(1,t^-) \gtrless \sum _{e_j\in \mathcal {F}_i^{out}(t)} u_j^2(0,t^-). \end{aligned}$$

If there is equality in the above relation, then there is no flow through the vertex \(v_i\) at t. Let us define the flow direction of a vertex \(v_i\), \(\mathcal {D}_i=\left( \mathcal {D}_i^{in},\mathcal {D}_i^{out}\right) \), which is a counterpart of the vertex direction in the case of a metric graph. Namely,

$$\begin{aligned} \mathcal {D}_i^{in/out}=\left\{ \begin{array}{ll} D_i^{in/out}&{}\text {for flow that agrees with the direction of a vertex}, \\ D_i^{out/in}&{} \text {for flow opposite to the direction of a vertex}. \end{array}\right. \end{aligned}$$
(74)

We say that the flow direction is positive (negative) in the first (second) case in (74) and write respectively \(\text {sgn}(\mathcal {D}_i)=1\) (\(\text {sgn}(\mathcal {D}_i)=-1\)).
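The classification (73)–(74) is a finite comparison of one-sided traces, and can be transcribed directly. The data layout (two lists of boundary traces) in the following Python sketch is an assumption of the illustration:

```python
import numpy as np

def flow_direction(in_traces, out_traces):
    """Return sgn(D_i): +1, -1 or 0 for the net flow through the vertex.

    in_traces:  values u_j(1, t^-) for e_j in D_i^in
    out_traces: values u_j(0, t^-) for e_j in D_i^out
    """
    # F_i^in: incoming edges with non-negative trace carry mass into v_i
    inflow = sum(u ** 2 for u in in_traces if u >= 0.0)
    # F_i^out: outgoing edges with non-positive trace also feed v_i
    backflow = sum(u ** 2 for u in out_traces if u <= 0.0)
    return int(np.sign(inflow - backflow))

assert flow_direction([2.0], [-1.0, 0.5]) == 1    # 4 > 1: agrees with D_i
assert flow_direction([1.0], [-2.0, 0.0]) == -1   # 1 < 4: opposite direction
assert flow_direction([1.0], [-1.0]) == 0         # balanced: no flow through v_i
```

Note that the outgoing edge with trace 0.5 in the first example does not contribute, since it does not belong to \(\mathcal {F}_i^{out}(t)\).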

In the following considerations we redefine the maximal and minimal transmission solver \(TS_i^z(t)\), \(z=m,M\), generalising conditions presented in Sect. 4.1. We assume

  1. (i)

    Kirchhoff conditions (14),

  2. (ii)

    continuity conditions in vertices other than sources or sinks, generalising (LC); namely, for a.e. \(t\in (0,T)\)

    figure a
  3. (iii)

    energy minimization/maximization condition, with the function \(\mathcal {E}_i:\Pi _{e_j\in D_i^{in}\cup D_i^{out}}U_j\rightarrow \mathbb {R}\) being a generalization of (22), given by the formula

    $$\begin{aligned} \begin{array}{lcl} \mathcal {E}_i(u(v_i,t))&{}=&{} \displaystyle \text {sgn}(\mathcal {D}_i) \left[ \sum _{j:\,e_j\in \mathcal {D}_i^{in}} \mathcal {E}_{ij}^+(u(v_i,t))+\sum _{j:\,e_j\in \mathcal {D}_i^{out}} \mathcal {E}_{ij}^-(u(v_i,t))\right] , \\ \mathcal {E}_{ij}^{\pm }(u(v_i,t))&{}=&{} \displaystyle \frac{u_j^3(v_i,t^{\mp })-u_j^3(v_i,t^{\pm })}{3}\\ &{}&{}\quad -\displaystyle \frac{\left( u_j(v_i,t^{\mp })-u_j(v_i,t^{\pm })\right) ^3}{12}\theta \left( \text {sgn}(\mathcal {D}_i)\left( u_j(v_i,t^{\mp })- u_j(v_i,t^{\pm })\right) \right) , \end{array} \end{aligned}$$

    with a domain

    $$\begin{aligned} U_j= & {} \mathbb {R}\qquad \text {for}\,\,e_j\in \mathcal {D}_i^{in},\\ U_j= & {} \left( \min (-u_j(v_i,t^-), \text {sgn}(\mathcal {D}_i)\infty ) ,\max (-u_j(v_i,t^-), \text {sgn}(\mathcal {D}_i)\infty )\right) \cup \left\{ -u_j(v_i,t^-)\right\} ,\\&\text {for}\,\,e_j\in \mathcal {D}_i^{out} \cap \mathcal {F}_i^{out}, \\ U_j= & {} \left( \min (0, \text {sgn}(\mathcal {D}_i)\infty ) ,\max (0, \text {sgn}(\mathcal {D}_i)\infty )\right) \cup \left\{ 0\right\} \quad \text {for}\,\,e_j\in \mathcal {D}_i^{out} \setminus \mathcal {F}_i^{out}. \end{aligned}$$
  4. (iv)

    decreasing flow with respect to edge enumeration in the case of \(\mathcal {E}_i\) maximization, (DF).

It is easy to notice that the form of condition \((\textit{FC})\) assures that there is no flow within the sets \(\mathcal {D}_i^{in/out}\). In the case of condition (iii) the generalisation is based on the change of the domain of \(\mathcal {E}_i\). The restriction of the value of solutions for \(e_j\in \mathcal {D}_i^{out}\) prevents the situation in which there exists an edge \(e_j(0)=v_i\) (resp. \(e_j(l_j)=v_i\)) where the flow direction at \(e_j(0)\) (resp. \(e_j(l_j)\)) is opposite to the flow at the vertex \(v_i\) and there is no shock at \(e_j(0)\) (resp. \(e_j(l_j)\)).

Based on \(TS^z_i\), \(z=m,M\), which satisfy conditions (i)–(iii), we can repeat the definition of \((TS_i^z)^{\star }\), \(z=m,M\), given in Definition 9. Finally, we are ready to present the transmission conditions derived by \((TS_i^z)^{\star }\) for the honeycomb tree.

Case I. Sources and sinks

In analogy to the non-negative case we assume that for \(v_i\) being a source (a sink) we have \(u_j(v_i,t)=0\), \(e_j\in D^{out}_i\) (\(e_j\in D^{in}_i\)).

Case II. Vertices from a path graph

Let \(v_i\) be a vertex related to the path graph such that \(D_i=(\left\{ e_j\right\} , \left\{ e_k\right\} )\). By (FC) we have behaviour analogous to the mono-dimensional case.

  1. 1.

    \(u_j(1,t^-)\ge 0\) and \(u_k(0,t^-)\le 0\). Let us specify the flow through the vertex.

    (a):

    \(u_j^2(1,t^-)\ge u_k^2(0,t^-)\) We have the flow that agrees with the direction of the vertex and

    $$\begin{aligned} u_j(1,t^+)=u_k(0,t^+)=u_j(1,t^-). \end{aligned}$$
    (75)
    (b):

    \(u_j^2(1,t^-)< u_k^2(0,t^-)\) We have the flow opposite to the direction of the vertex and

    $$\begin{aligned} u_j(1,t^+)=u_k(0,t^+)=u_k(0,t^-). \end{aligned}$$
    (76)
  2. 2.

    \(u_j(1,t^-)< 0\) and \(u_k(0,t^-)> 0\) There is no flow directed towards the vertex, hence

    $$\begin{aligned} u_j(1,t^+)=u_k(0,t^+)=0. \end{aligned}$$
  3. 3.

    \(u_j(1,t^-)\cdot u_k(0,t^-)\ge 0\) We have either (75) for \(u_j(1,t^-)\ge 0\), or (76) for \(u_j(1,t^-)\le 0\).
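The three cases of Case II determine a single common trace at the vertex, and can be transcribed as a small decision function. The following Python sketch is a direct illustration of (75)–(76) and the no-flow case; the function name and the input convention (the \(t^-\) traces) are assumptions:

```python
def path_vertex_trace(uj, uk):
    """Common trace u_j(1, t^+) = u_k(0, t^+) at a path vertex D_i = ({e_j}, {e_k}).

    uj = u_j(1, t^-), uk = u_k(0, t^-).
    """
    if uj >= 0.0 and uk <= 0.0:
        # competing flows: the one with larger energy wins, cf. (75)-(76)
        return uj if uj ** 2 >= uk ** 2 else uk
    if uj < 0.0 and uk > 0.0:
        return 0.0        # both flows leave the vertex: no flow through it
    # same-sign traces: transported in the common direction
    return uj if uj >= 0.0 else uk

assert path_vertex_trace(2.0, -1.0) == 2.0    # case 1(a), i.e. (75)
assert path_vertex_trace(1.0, -2.0) == -2.0   # case 1(b), i.e. (76)
assert path_vertex_trace(-1.0, 1.0) == 0.0    # case 2: no flow towards the vertex
assert path_vertex_trace(1.0, 0.5) == 1.0     # case 3 with non-negative traces
```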

Case III. Vertices of the hexagonal grid of the first and second kind

For the illustration see Fig. 3, with the notation changed from \(e_1,e_2,e_3\) to \(e_j,e_k,e_l\) respectively. It is worth mentioning that the considerations for vertices of the first and second kind are analogous, hence we concentrate only on a vertex of the second kind, see Fig. 3(ii).

  1. 1.

    \(u_j(1,t^-)\ge 0, \qquad u_k(0,t^-) \ge 0, \qquad u_l(0,t^-) < 0\) Firstly we need to specify the direction of the flow through the vertex.

    1. (a)

      \(u_j^2(1,t^-) \ge u_l^2(0,t^-)\) In this case the flow agrees with the direction of the vertex (goes through the vertex to the right) and therefore the values of the solution after the flow should not depend on the values on edges from \(D_i^{out}\). Intuitively, the character of the vertex should therefore fit the case of constant-sign flows. Obviously \(u_j(1,t^+)=u_j(1,t^-)\). Consider now three cases related to the choice of the maximal and minimal transmission solver.

      • In the first (maximal) one, \(k<l\), the total energy should go to the edge \(e_k\), and zero to \(e_l\). Since \(u_l(0,t^-)<0\), the flow reaches the vertex and the influence of this flow needs to be balanced to maintain the proper direction of the flow. Therefore, we divide the flow from \(e_j\) into two parts in such a way that

        $$\begin{aligned} u_l(0,t^+)=-u_l(0,t^-) \text{ and } u_k(0,t^+)=\sqrt{u_j^2(1,t^-) - u_l^2(0,t^-)}. \end{aligned}$$
        (77)
      • The second (maximal) case, \(k>l\), is when the whole energy goes to \(e_l\); then

        $$\begin{aligned} u_k(0,t^+)=0 \text{ and } u_l(0,t^+)=u_j(1,t^-). \end{aligned}$$
        (78)
      • The last case, related to energy minimization, is more involved. We should have

        $$\begin{aligned} u_k(0,t^+)=u_l(0,t^+)=\frac{\sqrt{2}}{2}u_j(1,t^-), \end{aligned}$$
        (79)

        but it is valid only for \(\frac{\sqrt{2}}{2} u_j(1,t^-)\ge -u_l(0,t^-)\). Otherwise it does not agree with the domain \(U_l\). Instead, the minimum is attained at the boundary of \(U_l\), hence we arrive at (77).

    (b)

      \(u_j^2(1,t^-) < u_l^2(0,t^-)\) Now the flow is opposite to the direction of the vertex (it goes through the vertex to the left) and therefore the values of the solution after the flow should not depend on the values on edges from \(D_i^{in}\). We put

      $$\begin{aligned} u_k(0,t^+)=0, \quad u_l(0,t^+)=u_k(0,t^-) \text{ and } u_j(1,t^+)=u_l(0,t^-), \end{aligned}$$
      (80)

      where the last quantity is negative. This is the only possibility.

  2.

    \(u_j(1,t^-)\ge 0, \qquad u_k(0,t^-) < 0, \qquad u_l(0,t^-) \ge 0\) This case is analogous to Case 1 due to the symmetry of the honeycomb tree.

  3.

    \(u_j(1,t^-)\le 0, \qquad u_k(0,t^-) \ge 0, \qquad u_l(0,t^-) \ge 0\) This case is trivial since the mass flows in the direction opposite to the vertex on all edges and the vertex becomes a kind of source. The only possible boundary constraint is

    $$\begin{aligned} u_j(1,t^+)=u_k(0,t^+)=u_l(0,t^+)=0. \end{aligned}$$
    (81)
  4.

    \(u_j(1,t^-) \ge 0, \qquad u_k(0,t^-) \le 0, \qquad u_l(0,t^-) \le 0\) Now the situation is more interesting since the vertex resembles a sink and again there is a need to specify the direction of the flow.

    (a)

      \(u_j^2(1,t^-) \le u_k^2(0,t^-) + u_l^2(0,t^-)\) The flow is opposite to the direction of a vertex (goes through the vertex to the left) and the shock wave appears on the edge \(e_j\). Obviously \(u_z(0,t^+)=u_z(0,t^-)\) for \(z=k,l\) and

      $$\begin{aligned} u_j(1,t^+) = -\sqrt{u_k^2(0,t^-) + u_l^2(0,t^-)}. \end{aligned}$$
      (82)
    (b)

      \(u_j^2(1,t^-) > u_k^2(0,t^-) + u_l^2(0,t^-)\) The flow agrees with the direction of the vertex (it goes through the vertex to the right), the shock wave appears on the edge \(e_j\), and we need to choose the conditions for \(u_k(0)\) and \(u_l(0)\) at \(t^+\). Again, by the energy maximization method we have two options.

      • for \(k<j\) we repeat condition (77),

      • for \(k>j\)

        $$\begin{aligned} u_k(0,t^+)=-u_k(0,t^-) \text{ and } u_l(0,t^+)=\sqrt{u_j^2(1,t^-) - u_k^2(0,t^-)}. \end{aligned}$$
        (83)

      In the case of minimization:

      • for \(\frac{\sqrt{2}}{2}u_j(1,t^-)>\max (-u_k(0,t^-),-u_l(0,t^-))\) we have (79),

      • for \(\frac{\sqrt{2}}{2}u_j(1,t^-)>-u_k(0,t^-)\) and \(\frac{\sqrt{2}}{2}u_j(1,t^-)<-u_l(0,t^-)=\frac{\sqrt{2}}{2}u_j(1,t^-)+\alpha \), \(\alpha ^2>\sqrt{2}u_j\), we arrive at (77),

      • finally, for \(\frac{\sqrt{2}}{2}u_j(1,t^-)>-u_l(0,t^-)\) and \(\frac{\sqrt{2}}{2}u_j(1,t^-)<-u_k(0,t^-)=\frac{\sqrt{2}}{2}u_j(1,t^-)+\alpha \), \(\alpha ^2>\sqrt{2}u_j\), we obtain (83).
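To make the case analysis concrete, here is a minimal numerical sketch of the Case III.1(a) transmission solver. The function name, signature, and mode labels are our own illustration, not part of the paper; the minimization fallback assumes the constraint \(\frac{\sqrt{2}}{2}u_j(1,t^-)\ge -u_l(0,t^-)\) for the validity of (79).

```python
import math

def case_III1a_solver(uj, uk, ul, mode):
    """Hypothetical sketch of the Case III.1(a) transmission solver.

    uj = u_j(1, t^-) >= 0 is the incoming value; uk = u_k(0, t^-) >= 0 is
    unused, since outgoing edges with outgoing flow do not influence the
    result; ul = u_l(0, t^-) < 0; and u_j^2 >= u_l^2, so the flow goes to
    the right.  Returns (u_j(1, t^+), u_k(0, t^+), u_l(0, t^+)).
    """
    assert uj >= 0 > ul and uj ** 2 >= ul ** 2
    if mode == "max_k":      # k < l, condition (77): reflect u_l, rest to e_k
        return uj, math.sqrt(uj ** 2 - ul ** 2), -ul
    if mode == "max_l":      # k > l, condition (78): whole energy to e_l
        return uj, 0.0, uj
    if mode == "min":        # energy minimization, condition (79)
        half = uj / math.sqrt(2.0)
        if half >= -ul:
            return uj, half, half
        return uj, math.sqrt(uj ** 2 - ul ** 2), -ul   # fallback to (77)
    raise ValueError(mode)
```

For instance, for \((u_j,u_k,u_l)=(3,1,-2)\) the maximal solver with \(k<l\) returns \((3,\sqrt{5},2)\), in agreement with (77); in each mode the outgoing energy equals \(u_j^2(1,t^-)\).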

5.2 Different Sign Solutions

In this part we construct an approximation of a solution which consists piecewise of elements from the classes \(\mathcal {W}^+\) and \(\mathcal {W}^-\). We say that \(f\in \mathcal {W}^-\) if \(-f \in \mathcal {W}^+\), for \(\mathcal {W}^+\) defined in (60). Let us now explain how to stitch the two mentioned types of solutions.

Let \((U_k)_{k\in K}\) be a partition of the set d(E) of metric edges of \(\mathcal {G}=(G,d)\), namely a family of closed and connected intervals such that for any \(U_k\) there exists exactly one metric edge \(e_{k_j}\) such that \(U_k\subset e_{k_j}\),

$$\begin{aligned} \bigcup _{k\in K} U_k = d(E)\qquad \text {and}\qquad \text {int } U_k \cap \text {int } U_l= \emptyset \quad \text{ for } k\ne l. \end{aligned}$$

Define now a class of solutions \(\mathcal {W}\) such that for any fixed \(\mathring{u} \in TV(\mathcal {G})\)

$$\begin{aligned}&\mathcal {W} = \{ u \in B(\mathcal {G}): \text{ there } \text{ exists } \text{ a } \text{ partition }\,\, (U_k)_{k\in K}\,\,\text{ of } \text{ the } \text{ set } \text{ of } \text{ metric } \text{ edges }\,\,d(E) \nonumber \\&\quad \text{ such } \text{ that } \text{ for } \text{ each } k \text{ either } \left. u\right| _{U_k}\in \mathcal {W}^+ \text{ or } \left. u\right| _{U_k}\in \mathcal {W}^- \}. \end{aligned}$$
(84)

Proposition 4

Let \(\mathcal {G}\) be a metric honeycomb tree. Then the class \(\mathcal {W}\) is preserved by the flow generated by the Burgers’ equation (15) and the total variation norm is controlled in time, namely

$$\begin{aligned} \sup _{t\in [0,T]} \int _{\mathcal {G}} |\partial _x u^2| dx + \sum _j \int _0^T (|\partial _t u^2|(0,t) +|\partial _t u^2|(1,t)) dt \le C\int _{\mathcal {G}} |\partial _x \mathring{u}^2| dx + CT\Vert u\Vert _{\infty }^3. \end{aligned}$$
(85)

Proof

We prove the proposition by stitching the solutions from \(\mathcal {W}^+\) and \(\mathcal {W}^-\) in several steps.

Step 1. In order to construct the general solution we introduce auxiliary solutions related to each of \(U_k\). Let \(u^{(k)}\) be a solution to the Burgers’ equation on \(\mathcal {G}\) initiated by the initial datum

$$\begin{aligned} \mathring{u}^{(k)}=\mathring{u} \chi _{U_k}. \end{aligned}$$
(86)

Since \(\left. \mathring{u}\right| _{U_k}\in \mathcal {W}^{\pm }\), it follows that \(\mathring{u}^{(k)}\in \mathcal {W}^{\pm }\) and consequently, by Proposition 3, \(u^{(k)}\) is a constant sign solution over the graph.

Step 2. Now we define the interaction between two neighbouring solutions in the interior of \(e_j\). Introduce a function \(u^{(kl)}\) for two chosen intervals \(U_k\) and \(U_l\) such that \(U_k \cap U_l \ni \xi (0)\) for some \(\xi (0)\in (0,l_j)\). We need to determine the evolution of the contact point \(\xi (t)\) starting from \(\xi (0)\). Without loss of generality assume that \(\min U_k<\min U_l\).

  (i)

    If \(u^{(k)}<0<u^{(l)}\), then the solution in the neighbourhood of \(\xi (0)\) is constructed as a rarefaction wave, namely

    $$\begin{aligned} u^{(kl)}(x,t)=\left\{ \begin{array}{ll} u^{(k)}(x,t)&{}\text {for}\,\,\frac{x}{t}<u^{(k)}(\xi (t),t),\\ [.1cm] \frac{x}{t}&{}\text {for}\,\,u^{(k)}(\xi (t),t)<\frac{x}{t}<u^{(l)}(\xi (t),t),\\ [.1cm] u^{(l)}(x,t)&{}\text {for}\,\,u^{(l)}(\xi (t),t)<\frac{x}{t}. \end{array}\right. \end{aligned}$$
  (ii)

    If \(u^{(k)}>0>u^{(l)}\), then \(u^{(k)}\) and \(u^{(l)}\) are stitched together by the Rankine–Hugoniot condition

    $$\begin{aligned} \frac{d}{dt} \xi (t)=\frac{u^{(k)}(\xi (t),t)+ u^{(l)}(\xi (t),t)}{2}. \end{aligned}$$

    In the neighbourhood of \((\xi (0),0)\) we have

    $$\begin{aligned} u^{(kl)}(x,t)=u^{(k)}(x,t) \text{ for } x < \xi (t) \text{ and } u^{(kl)}(x,t)=u^{(l)}(x,t) \text{ for } x> \xi (t). \end{aligned}$$
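Step 2 can be sketched numerically for locally constant states; `stitch` below is a hypothetical helper of our own, selecting between the rarefaction fan of case (i) and the Rankine–Hugoniot shock of case (ii).

```python
def stitch(uk, ul, xi0, x, t):
    """Hypothetical helper for Step 2 with two constant states: uk to the
    left and ul to the right of the contact point xi0; valid for t > 0."""
    assert t > 0
    if uk < ul:                        # case (i): rarefaction fan at xi0
        s = (x - xi0) / t
        if s <= uk:
            return uk
        if s >= ul:
            return ul
        return s                       # inside the fan the profile is linear
    xi = xi0 + 0.5 * (uk + ul) * t     # case (ii): Rankine-Hugoniot shock
    return uk if x < xi else ul
```

For example, `stitch(-1.0, 1.0, 0.0, 0.5, 1.0)` lies inside the fan and returns `0.5`, while `stitch(1.0, -1.0, 0.0, 0.1, 1.0)` sits to the right of the stationary shock and returns `-1.0`.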

Step 3. Finally, we concentrate on the case of a change of sign at the vertex, using the transmission solver derived in Sect. 5.1. For given conditions in vertices at \(t^-\) there exists a unique representation after the flow through the vertex, at \(t^+\). Since the flow through the vertex \(\mathcal {D}_i\) is fixed, we solve the equations on the outgoing edges \(\mathcal {D}_i^{out}\), knowing that, at least locally near the vertex, the solutions are of constant sign.

Let us be more precise about the choice of the time interval where the solution is defined. We consider the case III.1(a) from Sect. 5.1. We build the solution on edges \(e_j\) and \(e_l\) for some time \(T_1>0\), and the transmission condition gives the boundary data for the equation on \(e_k\), at least in a vicinity of the vertex. Then we solve Burgers’ equation in the interior of the edge \(e_k\), obtaining the solution locally in time. In general it may happen that the solution \(u_k\) stays positive at the vertex only for a time \(T_2>0\), which can be smaller than \(T_1\). Hence the procedure of deriving the solution in the neighbourhood of the vertex is well-defined on a time interval given by the minimum of \(T_1\) and \(T_2\). Nevertheless, since the speed of wave propagation is finite and the number of vertices is finite, such a time always exists. Note that the construction of the solution is based on approximation in the \(\mathcal {W}\)-class. It follows that the transmission conditions need to be modified by a suitable approximation, with a controlled error. To preserve the \(\mathcal {W}\)-class the boundary term must be in \(\mathcal {W}_{opp}\), and this modification is explained in the next step.

Step 4. Steps 1–3 allow for a unique definition of the solution for all times, since the structure of the \(\mathcal {W}\)-class guarantees that the solutions are locally of constant sign on edges and uniquely determined in vertices. At the end we need to estimate the TV-norm. Repeating the considerations from the proof of Lemma 1, for each edge we find the following bound

$$\begin{aligned} \frac{d}{dt} \int _{e_j} |\partial _x u_j^2|dx +|\partial _t u_j^2(1,t)| \,\mathrm{sgn\,} (u_j(1,t)) - |\partial _t u_j^2(0,t)|\,\mathrm{sgn \,}(u_j(0,t)) \le 0. \end{aligned}$$

Of course, the above inequality does not deliver the needed information, since in general we control neither the boundary terms nor the sign of the solution at the ends of the edge.

However, based on the construction proposed for \(\mathcal {W}^+\)-functions, we obtain local versions of the above inequality. Introduce a smooth function \(\pi :e_j \rightarrow [0,1]\) such that \(\mathrm{supp}\, \pi \subset \subset e_j\) and \(\pi \equiv 1\) on an internal subinterval of \(e_j\). Then

$$\begin{aligned} 2\partial _t(\pi u_j) + 2u_j \partial _x(\pi u_j)- 2u_j^2 \pi _x=0. \end{aligned}$$

Hence we find

$$\begin{aligned} \frac{d}{dt} \int _{e_j} |\partial _x (\pi u_j^2)|dx \le \left\| \pi _x\right\| _\infty \left\| u_j\right\| ^3_\infty . \end{aligned}$$

This gives information about the interior of the edges.

However, the key element lies in the vertices, so for each vertex we again use the localization argument. In order to explain the construction of the local estimate we consider a concrete case from Sect. 5.1, namely the case III.1(a). Recall that the solution is given by \(u_l^2(0,t)=u_j^2(1,t)-u_k^2(0,t)\). Before we start the estimation, let us look closer at this definition. We aim at constructing the flow in the \(\mathcal {W}\)-class, so the boundary condition is required to be in \(\mathcal {W}^+_{opp}\). However, the above formula does not ensure that this holds. But from (87) we deduce that \(\int _0^T |\partial _t u_l^2|(0,t) dt\) is bounded. Thus, given \(\epsilon >0\) we find a new \(u_l^{new}(0,t) \in \mathcal {W}^+_{opp}\) such that \(\int _0^T |\partial _t {u_l^{new}}^2|(0,t) dt \le \int _0^T |\partial _t u_l^2|(0,t) dt\) and \(\Vert u_l^{new}(0,\cdot )-u_l(0,\cdot )\Vert _{L^1(0,T)} \le \epsilon \). This way the \(\mathcal {W}\)-structure of solutions is preserved, and the TV-norm over \(\mathcal {G}\) is controlled too.

Take \(\pi \) defined around the vertex \(v_i\), equal to 1 over a sufficiently large neighbourhood of \(v_i\) and supported in \(e_j\cup e_k \cup e_l\). Then we find

$$\begin{aligned}&\frac{d}{dt} \int _{e_j} |\partial _x (\pi u_j^2)|dx +|\partial _t u^2_j|(1,t) \le \Vert \pi _x\Vert _\infty \Vert u\Vert ^3_\infty ,\\&\frac{d}{dt} \int _{e_k} |\partial _x (\pi u_k^2)|dx +|\partial _t u^2_k|(0,t) \le \Vert \pi _x\Vert _\infty \Vert u\Vert ^3_\infty ,\\&\frac{d}{dt} \int _{e_l} |\partial _x (\pi u_l^2)|dx \le |\partial _t u^2_l|(0,t)+ \Vert \pi _x\Vert _\infty \Vert u\Vert ^3_\infty . \end{aligned}$$

Since \(u_l^2(0,t)=u_j^2(1,t)-u_k^2(0,t)\), differentiating in time and applying the triangle inequality we conclude that

$$\begin{aligned} |\partial _t u_l^2|(0,t)\le |\partial _t u_j^2|(1,t) +|\partial _t u_k^2|(0,t). \end{aligned}$$
(87)

Summing all these together we get

$$\begin{aligned} \frac{d}{dt} \left( \int _{e_j} |\partial _x (\pi u_j^2)|dx+\int _{e_l} |\partial _x (\pi u_l^2)|dx +\int _{e_k} |\partial _x (\pi u_k^2)|dx\right) \le C\Vert \pi _x\Vert _\infty \Vert u\Vert ^3_\infty . \end{aligned}$$
(88)

Although the above information is sufficient, we can obtain an even stronger condition which controls the transmission relation. Due to the form of the inequalities for \(e_j\) and \(e_k\), counting them twice, we improve (88), namely

$$\begin{aligned}&\frac{d}{dt} \left( \int _{e_j} |\partial _x (\pi u_j^2)|dx+\int _{e_l} |\partial _x (\pi u_l^2)|dx +\int _{e_k} |\partial _x (\pi u_k^2)|dx\right) \nonumber \\&\quad +\left( |\partial _t u^2_l|(0,t)+|\partial _t u^2_k|(0,t)+|\partial _t u^2_j|(1,t)\right) \le C\Vert \pi _x\Vert _\infty \Vert u\Vert ^3_\infty . \end{aligned}$$
(89)

In the general case the signs at the vertex may be different; we get the stronger information, with the boundary term, whenever the flow on an edge comes into the vertex. Counting such inequalities twice, we obtain (89) in the general case. Note that there is only one case in which there is no incoming flow, but then all boundary terms are zero, so the time derivatives vanish too.

Finally, repeating the steps from Lemma 1 in the general case, we get (85). \(\square \)

5.3 Existence of General Solutions

In the last part of this section we show the following existence result, which is in line with Definition 3. Note that \(\mathring{u}\) may be a function of non-constant sign.

Theorem 3

Let \(\mathring{u} \in TV(\mathcal {G})\). There exists a weak solution to the Burgers’ equation on graph \(\mathcal {G}\) such that

$$\begin{aligned} u^2 \in L_\infty (0,T;TV(\mathcal {G})). \end{aligned}$$

Proof

For given \(\mathring{u} \in TV(\mathcal {G})\), let us proceed in the following steps.

Step 1. Firstly, we approximate the initial condition. For given \(\epsilon > 0\), one finds \(\mathring{u}_\epsilon =(\mathring{u}_\epsilon )_++(\mathring{u}_\epsilon )_-\) such that \((\mathring{u}_\epsilon )_+ \in \mathcal {W}^+\), \((\mathring{u}_\epsilon )_- \in \mathcal {W}^-\) and

$$\begin{aligned} \Vert \mathring{u}-\mathring{u}_\epsilon \Vert _{L^1(\mathcal {G})} < \epsilon \text{ and } \Vert \mathring{u}_\epsilon ^2\Vert _{TV(\mathcal {G})}\le \Vert \mathring{u}^2\Vert _{TV(\mathcal {G})}. \end{aligned}$$

We solve the equation starting from \(\mathring{u}_\epsilon \) in the class \(\mathcal {W}\) according to the steps presented in Proposition 4. Then we find the uniform bounds

$$\begin{aligned} \sup _{t\in (0,T)} \Vert u_\epsilon ^2(t)\Vert _{TV(\mathcal {G})} \le C \qquad \text{ and } \qquad \sup _{t\in (0,T)} \Vert \partial _t u_\epsilon (t)\Vert _{\mathcal {M(G)}} \le C. \end{aligned}$$

Step 2. Using the Aubin–Lions lemma we find a subsequence such that

$$\begin{aligned} u_\epsilon \rightarrow u^* \text{ in } L^p(\mathcal {G} \times (0,T)) \text{ for } \text{ any } p<\infty , \end{aligned}$$

hence \(u^*\) is a weak solution. Weak limits guarantee that

$$\begin{aligned} u^*\in L^\infty (\mathcal {G}\times (0,T)) \text{ and } u^*\in L^\infty (0,T;TV(\mathcal {G})). \end{aligned}$$
(90)

Step 3. The boundary conditions follow from the information carried by (85), although in the limit this information survives only in the sense of measures. Compactness ensures that the approximating sequence converges strongly at the boundary points, since \(u_\epsilon \rightarrow u\) in \(L^p(0,T)\) in the vertices.

Step 4. As the last step let us comment on the uniqueness. The above properties of solutions to the Burgers’ equation fulfil the conditions known from the classical mono-dimensional case. We obtain an entropy solution as a bounded distributional solution satisfying the bound (55).

We claim that the solution is unique. Unfortunately, in order to restate the proof from the textbook of Evans, see [10], the method of characteristics on metric graphs for transport type equations with smooth coefficients is needed. To the best of our knowledge there is still no such result in the literature. It will be the subject of our further investigations, hence at this moment we state the uniqueness only as a conjecture.

\(\square \)

6 Conclusions

At the end of this paper we return to our questions from Sect. 2 to understand how well the developed theory reflects fluid motion observed in real-life networks and what its relation to the classical approach is.

1. What is the appropriate description of the flow in vertices?

The main argument that supports the energy perspective in vertices is the emergence of a natural phenomenon, a backflow, known from networks of fluids, for instance from the cardiovascular system. Using the transmission conditions defined for arbitrary initial data in Sect. 5.1, one can mimic such behaviour on networks.

Example 5

The backflow presented in this example is related to the collision of opposite speed waves in a vertex. Here we illustrate the feature of conditions for Case III from Sect. 5.1.

Let \(\mathcal {G}=(G,d)\) be the following metric tree: \(V=\left\{ v_1\right\} \), \(E=\left\{ e_1,e_2,e_3\right\} \),

$$\begin{aligned} \mathcal {L}(e_i)=10 \text{ for } i=1,2,3, \quad \phi = \left[ \begin{array}{ccc} 1&-1&-1 \end{array}\right] ,\quad d(e_i)=[0,10] \text{ for } i=1,2,3. \end{aligned}$$

Note that \(\mathcal {G}\) can be interpreted as the interval \([-10,10]\) split at 0 into two.

(Figure: the metric tree \(\mathcal {G}\) of Example 5.)

As the initial datum we consider

$$\begin{aligned} \mathring{u}_1(x)=3 \theta (x+3/2), \qquad \mathring{u}_2(x)=-\theta (x-1/2), \qquad \mathring{u}_3(x)=-2\theta (x-1). \end{aligned}$$

Then at time \(t=1\) the waves collide at \(v_1\) and, using the energy minimization/maximization transmission conditions, we obtain the following. In the first case (minimization), since \(9>4+1\), we have

$$\begin{aligned} u_2(0,1^+)=3 \frac{\sqrt{2}}{2}, \qquad u_3(0,1^+) = 3 \frac{\sqrt{2}}{2} \end{aligned}$$

and then the solution reads for \(t>1\)

$$\begin{aligned} u_2(x,t)= \left\{ \begin{array}{lr} 3/\sqrt{2} &{} x< \frac{1}{2} (\frac{3}{\sqrt{2}} -1)(t-1) \\[4pt] -1 &{} x> \frac{1}{2} (\frac{3}{\sqrt{2}} -1)(t-1) \end{array} \right. ,\qquad u_3(x,t)= \left\{ \begin{array}{lr} 3/\sqrt{2} &{} x < \frac{1}{2} (\frac{3}{\sqrt{2}} -2)(t-1) \\[4pt] -2 &{} x> \frac{1}{2} (\frac{3}{\sqrt{2}} -2)(t-1) \end{array} \right. . \end{aligned}$$

In the case of maximization of the energy towards the edge \(e_2\), we get

$$\begin{aligned} u_2(0,1^+)=\sqrt{5}, \qquad u_3(0,1^+)=2. \end{aligned}$$

Then the solution reads (\(t>1\))

$$\begin{aligned} u_2(x,t)= \left\{ \begin{array}{lr} \sqrt{5} &{} x < \frac{1}{2} ({\sqrt{5}} -1)(t-1) \\[4pt] -1 &{} x> \frac{1}{2} ({\sqrt{5}} -1)(t-1) \end{array} \right. ,\qquad u_3(x,t)=-2. \end{aligned}$$

Hence, in both cases we observe a backflow that appears either on one (\(e_2\)) or on two (\(e_2\) and \(e_3\)) edges.
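The two outcomes of Example 5 can be checked numerically; the script below is our own sketch, reproducing the post-collision values and verifying that both transmission solvers conserve the incoming energy \(u_1^2(10,1^-)\).

```python
import math

# states at the vertex at t = 1^-: u_1 from e_1, u_2 from e_2, u_3 from e_3
u1, u2, u3 = 3.0, -1.0, -2.0
assert u1 ** 2 > u2 ** 2 + u3 ** 2    # 9 > 1 + 4: flow passes to the right

# energy minimization, condition (79): equal split of the energy
u2_min = u3_min = u1 * math.sqrt(2) / 2
# energy maximization towards e_2, condition (77): reflect u_3, rest to e_2
u3_max = -u3
u2_max = math.sqrt(u1 ** 2 - u3 ** 2)

# both solvers conserve the energy carried into the vertex
assert abs(u2_min ** 2 + u3_min ** 2 - u1 ** 2) < 1e-12
assert abs(u2_max ** 2 + u3_max ** 2 - u1 ** 2) < 1e-12
```

The values agree with the text: \(u_2(0,1^+)=u_3(0,1^+)=3\sqrt{2}/2\) in the minimization case and \(u_2(0,1^+)=\sqrt{5}\), \(u_3(0,1^+)=2\) in the maximization case.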

Now let us move to the second question.

2. What is the relation between the pure mono-dimensional case and the network counterpart?

The answer to this question is based on the global properties of the network. Namely, depending on the type of transmission conditions (maximizing or minimizing the energy) and their relative location, we may obtain either qualitatively similar dynamics or an essentially different one. In order to illustrate this, we return to the interpretation of Burgers’ equation in the spirit of wave interference presented in Sect. 1.

Let us start again with a mono-dimensional equation, namely (1) with \(D=\mathbb {R}\). Take the following initial configuration on the line.

$$\begin{aligned} u|_{t=0}=\chi _{[-4,-3]} - \chi _{[3,4]}. \end{aligned}$$
(91)

For simplicity, consider distributional non-physical solutions, being shifts with speeds determined by the Rankine–Hugoniot condition. This means that, at least for small times, the solution is given by

$$\begin{aligned} u(x,t)=\chi _{[-4+\frac{1}{2} t,-3+\frac{1}{2} t]}(x) - \chi _{[3-\frac{1}{2} t, 4 -\frac{1}{2} t]}(x). \end{aligned}$$
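The speeds of both fronts follow from the Rankine–Hugoniot condition for the Burgers flux \(f(u)=u^2/2\); here is a minimal sketch (the function name is ours).

```python
def rh_speed(u_left, u_right):
    """Rankine-Hugoniot speed for Burgers' flux f(u) = u^2 / 2:
    s = (f(u_l) - f(u_r)) / (u_l - u_r) = (u_l + u_r) / 2."""
    return 0.5 * (u_left + u_right)

# the box wave chi_{[-4,-3]} of height 1 travels with speed 1/2:
assert rh_speed(0.0, 1.0) == 0.5    # leading front
assert rh_speed(1.0, 0.0) == 0.5    # trailing front
# the opposite box -chi_{[3,4]} travels with speed -1/2:
assert rh_speed(0.0, -1.0) == -0.5
```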

In the case of different velocities of the waves, the bigger wave overtakes the smaller one, which is a consequence of the weak formulation and the regime of the Rankine–Hugoniot conditions. To overcome this weakness we put the system onto a metric graph. We rewrite the system as

$$\begin{aligned} \partial _t u +u\partial _x u=0 \text{ on } \mathcal {G} \times [0,T), \qquad u|_{t=0}= \mathring{u} \text{ on } \mathcal {G}, \end{aligned}$$

where \(\mathcal {G}\) is the metric graph. This way we shall be able to obtain a rich structure of solutions even for initial data like (91). Let us look at the following example.

Example 6

Let \(\mathcal {G}=(G,d)\) be the following metric tree \(V=\left\{ v_1,v_2\right\} \), \(E=\left\{ e_1,\ldots ,e_4\right\} \),

$$\begin{aligned} \mathcal {L}(e_i)=\left\{ \begin{array}{ll} 9 & \text {for}\,\,i=1,4\\ 2 & \text {for}\,\,i=2,3 \end{array}\right. ,\quad \phi = \left[ \begin{array}{cccc} 1&-1&-1&0 \\ 0&1&1&-1 \end{array}\right] ,\quad d(e_i)=\left\{ \begin{array}{ll} {[}0,9] & \text {for}\,\,i=1,4\\ {[}0,2] & \text {for}\,\,i=2,3 \end{array}\right. . \end{aligned}$$

Note that \(\mathcal {G}\) can be interpreted as the interval \([-10,10]\) split at \((-1,0)\) into two and joined again at \((1,0)\).

(Figure: the metric graph \(\mathcal {G}\) of Example 6.)

We consider Burgers’ equation on \(\mathcal {G}\) with the following initial condition

$$\begin{aligned} u_1|_{t=0}=\chi _{[6,7]}, \qquad u_4|_{t=0}=-\chi _{[2,3]}, \qquad u_2|_{t=0}=u_3|_{t=0}=0. \end{aligned}$$
(92)

Note that condition (92) on the network is an analogue of condition (91) on the straight line; at time \(t=0\) we can illustrate it in the following way

(Figure: the initial condition (92) on \(\mathcal {G}\).)

To avoid problems with definitions and argumentation, we present only the very schematic behaviour of the proposed system. We assume that the waves are non-physical, of the kind \(\chi _{[\frac{1}{2} t, 1+\frac{1}{2} t]}(x)\). The character of the dynamics is determined by the rules in the vertices, describing the partition of the solutions onto different paths. Consider three situations:

Case I. In vertex \(v_1\) the wave from edge \(e_1\) goes onto \(e_2\), and in vertex \(v_2\) the wave from \(e_4\) goes onto \(e_3\). So at a suitably chosen \(t=t_1\) we have

(Figure: the solution at \(t=t_1\) in Case I.)

Then the waves pass through without direct interaction, so the energy is not lost. For large times we obtain the solution of the form

$$\begin{aligned} u(x,t)= -\chi _{[3-\frac{1}{2} t,4-\frac{1}{2} t]}+ \chi _{[-4+\frac{1}{2} t,-3+\frac{1}{2} t]}, \end{aligned}$$
(93)

so there is no interaction of the waves. This is not possible in the description by the classical Burgers’ equation.

Case II. In vertex \(v_1\) the wave divides into two equal parts (in the sense of energy), and the same happens in vertex \(v_2\). For \(t=t_1\) we have

(Figure: the solution at \(t=t_1\) in Case II.)

Now the waves meet on both \(e_2\) and \(e_3\) and since they are anti-symmetric, they annihilate. Thus for large time

$$\begin{aligned} u(x,t)= 0. \end{aligned}$$
(94)

This case reproduces the classical behaviour of Burgers’ equation, as without a graph.

Case III. In the vertex \(v_1\) the wave divides into two equal parts, but in the vertex \(v_2\) the wave from \(e_4\) goes onto the edge \(e_3\). This means that the upper part of the wave goes onto \(e_4\), while on the lower edge \(e_3\) we have a shock of two waves. For \(t=t_1\)

(Figure: the solution at \(t=t_1\) in Case III.)

Since the wave coming from the right side is larger, the smaller one is overtaken and the wave flows onto the edge \(e_1\). Hence, up to a small modification of time related to the Rankine–Hugoniot conditions, for large times we have

$$\begin{aligned} u(x,t)= - \chi _{[3-\frac{1}{2} t,4-\frac{1}{2} t]}+ \frac{\sqrt{2}}{2} \chi _{[-4+\frac{1}{2} t,-3+\frac{1}{2} t]}. \end{aligned}$$

This case is the most interesting since we obtain a practical interference. One part is damped while the second one is preserved in its magnitude.
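The amplitude \(\frac{\sqrt{2}}{2}\) appearing above is exactly the equal-energy split of a unit wave at \(v_1\); a short check, as our own illustration:

```python
import math

h = 1.0                       # height of the incoming wave on e_1
h_split = h / math.sqrt(2)    # height of each part after the equal-energy split
# the split conserves the energy h^2 carried into the vertex:
assert abs(2 * h_split ** 2 - h ** 2) < 1e-12
# Case II: each split part meets its anti-symmetric counterpart and annihilates;
# Case III: only the part on e_3 is cancelled, so the amplitude sqrt(2)/2
# survives on the upper path
```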

We can conclude that the developed theory can be interpreted as an extension of the mono-dimensional case to networks. The enlargement of the domain of consideration allows for phenomena that cannot be observed in a single dimension. It is definitely worth continuing the research, firstly to formally show the uniqueness of the solution starting from an arbitrary TV initial datum. Going further, it is interesting to understand the relation of Burgers’ equation considered on planar networks with classical two-dimensional problems.