Abstract
The paper deals with the analysis of Burgers’ equation on acyclic metric graphs. The main goal is to establish the existence of weak solutions in the TV class of regularity. A key point is the transmission conditions at vertices, which obey the Kirchhoff law. First, we consider positive solutions on arbitrary acyclic networks and highlight two kinds of vertices, describing two mechanisms of flow splitting at a vertex. Next, we design vertex rules for solutions of arbitrary sign on any subgraph of the hexagonal grid, which leads to a construction of general solutions with TV regularity for this class of networks. The introduced transmission conditions are motivated by the change of the energy estimate.
1 Motivation
The subject of this paper is the mono-dimensional, inviscid Burgers’ equation, the simplest model at the root of the whole universe of systems of fluid dynamics. From the mechanical viewpoint it describes pure transport of the velocity, modelling the creation of water waves. In the language of the material derivative it reads \(\frac{D}{Dt}u=0\), and by reformulating we obtain the well-known equation
The theory says that starting from any smooth, compactly supported initial configuration, the solution need not stay smooth but can develop a jump discontinuity. Thus waves can create a shock, a phenomenon also called the gradient catastrophe. When we move to the weak formulation, non-unique solutions are allowed, and either non-physical shocks or physical rarefaction waves appear. In order to keep the mathematical well-posedness, the concept of entropy solutions has been introduced; it guarantees both uniqueness and decay to zero in time. The shocks are governed by the Rankine–Hugoniot condition, determining the speed of the jump, and the Lax condition, choosing between continuous and discontinuous solutions. The general rule is that the bigger wave overtakes the smaller one; consequently, after a long time we are not able to say anything about the smaller wave.
In this paper we address the following question: Is there any approach to the Burgers’ equation (1) which admits a certain preservation of the smaller wave after collision with the bigger one? The answer is positive, but we are required to take the domain D to be a graph.
2 Introduction
The problem of the inviscid Burgers’ equation on networks belongs to the family of conservation laws on networks, which has been developed for about thirty years and still receives considerable interest [5, 12, 23]. The major motivation for studying this topic is traffic modelling, see for instance [8, 15, 17], initiated with the now well-established Lighthill–Whitham model [20]. The natural interpretation of a graph as a transportation network shifted research interest to the case of a non-convex flux, which enforced the application of either wave-front tracking approximations or vanishing viscosity methods [7]. Furthermore, the fixed direction of a lane formed a beaten track for specifying conditions at vertices; a review can be found in [12]. In particular, there was no need to specify negative values of solutions at vertices, since such a flow reflects driving against the current, which in general does not take place. The different motivation of the research presented in this paper leads to new types of transmission conditions at the vertices of a graph. Furthermore, considering the pure Burgers’ equation, instead of a general conservation law, allows us to use methodology known from the Hamilton–Jacobi equation [10, Sec. 3.3] and consequently to obtain an explicit solution that is a counterpart of the well-known Lax–Oleinik formula.
Let us look at the Burgers’ network problem from a fluid dynamics perspective. We need to find a suitable language which will allow us to analyze arbitrary directions of the flow and to control the total energy of the moving fluid. Imposing these conditions on the edges is a standard approach, but building a theory in which a backflow (a change of the direction of the flow at a vertex) appears is, to the authors’ best knowledge, new, and therefore worth stressing.
Reaching for the graph structure can be interpreted either as an extension of the mono-dimensional case or as a non-standard discretization of the state space. The main question that arises is whether the introduction of this structural approach gives hope for alternative techniques for proving blow-up and uniqueness criteria. If so, the development of a coherent language for describing fluid-type equations on metric graphs, which is the main subject of this study, allows us to proceed from Burgers’ equation to multi-dimensional systems like Navier–Stokes or compressible Euler. To this end, we begin in this paper by addressing two preliminary questions:
-
1.
What is the appropriate description of the flow at vertices? The natural approach is to look at the change of the energy at redistribution points, namely to take the maximal or minimal change of the energy at vertices. This is formulated in Theorem 1 in Section 4 for non-negative flows, and generalized to solutions of different signs in Sect. 5.1. This strategy is essentially different than the transmission conditions for vehicular traffic [8], data networks [9] or T-nodes [21].
-
2.
What is the relation between the pure mono-dimensional case and the network counterpart? It turns out that, choosing the transmission conditions correctly, the network system is in some sense a generalisation of the mono-dimensional one. In particular, we will be able to answer positively the question posed as a motivation in Sect. 1. This issue is discussed further in Sect. 6.
Thus, to state the problem succinctly, our paper aims at constructing general weak solutions to Burgers’ equation on metric graphs with arbitrary TV initial data (possibly of different signs). The rules of transition of the flow at vertices, in particular its direction and magnitude, are determined by the optimization of the energy at the vertex. Let us underline that on each edge we have an entropy solution in the sense of the standard mono-dimensional case.
3 Problem Formulation
Let us start with the formalism necessary to describe the language for PDEs on metric graphs; compare also [8, 16].
3.1 Graph Theory Toolbox
Consider \(G=(V,E,\mathcal {L},\Phi )\) a directed, weighted and finite tree with no multiple edges. Namely, let
be respectively the sets of vertices and edges of the graph, and let \(\mathcal {L}:E\rightarrow \mathbb {R}_+\) be a weight (length) function of the edges, \(e_j\mapsto l_j\) for any \(j\in J\).
The structure of the network is defined by incidence matrix \(\Phi \in M_{n\times m}(\mathbb {R})\), \(\Phi =(\phi _{ij})_{i\in I,j\in J}=\Phi ^+-\Phi ^-\) such that \(\Phi ^+=(\phi ^+_{ij})_{i\in I,j\in J}\) and \(\Phi ^-=(\phi ^-_{ij})_{i\in I,j\in J}\) satisfy conditions
If \(\phi _{ij}\ne 0\), we say that edge \(e_j\) is incident to \(v_i\). We say that there exists a multiple edge between vertices \(v_i, v_k\in V\) if there exist two edges \(e_p, e_q\in E\) such that, for \(z=p,q\), \(\phi ^+_{kz}=1\) and \(\phi ^-_{iz}=1\). Hence, the lack of multiple edges ensures the uniqueness of the assignment \(e_j=(v_i,v_k)\in E\) for some \(v_i,v_k\in V\). In further considerations we call \(v_i\) a head and \(v_k\) a tail of the edge \(e_j\). The vertex \(v_i\) is a source or a sink if respectively \(\phi ^+_{ij}=0\) or \(\phi ^-_{ij}=0\) for all \(j\in J\).
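The incidence conventions above can be made concrete. The sketch below (a small illustrative tree: one edge \(e_1\) feeding a vertex from which \(e_2\), \(e_3\) leave; all names are ours) builds \(\Phi ^{\pm }\) and recovers sources and sinks from the stated criterion.

```python
import numpy as np

# Convention from the text: for e_j = (v_i, v_k), v_i is the head (start,
# phi^-_{ij} = 1) and v_k the tail (end, phi^+_{kj} = 1).
n, m = 4, 3                       # 4 vertices, 3 edges (illustrative sizes)
edges = [(0, 1), (1, 2), (1, 3)]  # e_1=(v_1,v_2), e_2=(v_2,v_3), e_3=(v_2,v_4)

Phi_minus = np.zeros((n, m), dtype=int)   # marks heads (edge starts)
Phi_plus = np.zeros((n, m), dtype=int)    # marks tails (edge ends)
for j, (head, tail) in enumerate(edges):
    Phi_minus[head, j] = 1
    Phi_plus[tail, j] = 1
Phi = Phi_plus - Phi_minus                # incidence matrix Phi = Phi^+ - Phi^-

sources = [i for i in range(n) if Phi_plus[i].sum() == 0]   # no edge ends here
sinks = [i for i in range(n) if Phi_minus[i].sum() == 0]    # no edge starts here
print(sources, sinks)   # v_1 is a source; v_3 and v_4 are sinks
```

Each column of \(\Phi \) sums to zero, reflecting that every edge contributes exactly one head and one tail.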
By the path in the graph we understand a finite sequence of edges \(p_i=e_{k_1},\ldots ,e_{k_{N_i}}\) such that for any \(e_{k_j},e_{k_{j+1}}\) there exists a vertex \(v_{k_j}\in V\) such that
for \(j=1,\ldots ,N_i-1\). This means the path has the following form
By the length \(N_i\) of a path \(p_i\) we understand the number of edges on the path, and by the weighted length \(L_i\) the sum of the weights of all edges on the path.
We say that a graph is connected if there exists at least one path between every two vertices. A closed path, namely \(v_{k_0}=v_{k_{N_i}}\), is a cycle and the graph is called acyclic if it has no cycles. Finally, we say that a graph is a directed tree if it is connected and has no cycles.
In the following considerations we refer to the special examples of trees being a restriction of finite graphs. We say that \(G'=(V',E', \mathcal {L}',\Phi ')\) is a subgraph of a graph \(G=(V,E,\mathcal {L},\Phi )\) if it satisfies the conditions
where \(I'=\left\{ i\in I:\,\, v_i\in V'\right\} \) and \(J'=\left\{ j\in J:\,\,e_j\in E'\right\} \).
Definition 1
Consider \(G=(V,E,\mathcal {L},\Phi )\) and \(v_i\in V\). We say that \(G_i=(V_i,E_i,\mathcal {L}_i,\Phi _i)\) is a \(v_i\)-subgraph of G if
Definition 2
A path graph \(P_m\) is any connected subgraph of 1D Cartesian grid \(P=(V_P,E_P,\mathcal {L}_P,\Phi _P)\)
having \(m\) edges.
By the honeycomb tree \(H_m\) we understand any connected subgraph of directed hexagonal lattice \(H=(V_H,E_H,\mathcal {L}_H)\)
having \(m\) edges. In further considerations we refer to \(v_{(p+q,-q,p)}\) as a vertex of the first kind and to \(v_{(p+q+1,-q,p)}\) as a vertex of the second kind, see Fig. 1.
Note that any vertex of hexagonal lattice H is described by a triple of type \((p+q,-q,p)\) with two parameters p, q, which corresponds to the three directions on the honeycomb.
Define now the in- and out degree of vertex \(v_i\) which is the number of edges having respectively a tail or a head in vertex \(v_i\), namely
Then using the notation from Definition 2, we have for \(p,q\in \mathbb {Z}\)
Considering the restriction of H to a subgraph, we also obtain additional types of vertices v, namely sources (\(\text {deg}_{+}(v)=0\), \(\text {deg}_{-}(v)\in \left\{ 1,2\right\} \)), sinks (\(\text {deg}_{+}(v)\in \left\{ 1,2\right\} \), \(\text {deg}_{-}(v)=0\)) and vertices of the path graph (\(\text {deg}_{-}(v)=\text {deg}_{+}(v)=1\)).
Furthermore, we introduce a direction of a vertex \(v_i\in V\) as an ordered pair of sets \(D_i=({D}^{in}_i,{D}^{out}_i)\), \(D^{in}_i,D^{out}_i\subset E\) such that
Since directed trees G do not have loops, \(D_i^{in}\cap D_i^{out}=\emptyset \) for any \(i\in I\). If we change the vertex \(v_i\) into \(v_i'\) in such a way that the parameterization of all edges incident to \(v_i\) becomes opposite, we say that \(v_i\) and \(v_i'\) have opposite directions. In the case of honeycomb trees the directions of vertices of the first and second kind are, for \(p,q\in \mathbb {Z}\), the following
The above distinction is crucial to the considerations in Sect. 5.1.
Finally, let us recall that for any tree it is possible to re-enumerate the edges in such a way that for any two edges \(e_{s}, e_{j}\in E\) and any chosen path \(e_{s}=e_{k_1},\ldots ,e_{k_N}=e_{j}\) we have \(k_i<k_{i+1}\) for all \(i\in 1,\ldots ,N-1\). Additionally, in the following considerations we choose the enumeration of edges in such a way that all sources are associated with the first few edges, namely sources are heads of the edges \(e_{i}\), \(i=1,\ldots , s\). We call such a numeration an increasing order of edges and note that two trees with an increasing order of edges are homomorphic.
3.2 Introduction of Metric Graphs
To introduce a metric space into consideration we associate each edge of a graph with a compact interval in the following way: for \(d:E\rightarrow \mathcal {B}(\mathbb {R})\) let \(d(e_j)=[0,l_j]\), where \(\mathcal {B}(\mathbb {R})\) is the Borel algebra on \(\mathbb {R}\). We say that \(\mathcal {G}=(G,d)\) is a directed metric graph. In what follows we always consider the parametrisation of an edge that agrees with the direction of the edge. By an abuse of notation we denote a metric edge \(d(e_j)\) simply by \(e_j\), and the vertices at the endpoints of the edge \(e_j=(v_i,v_k)\) by \(e_j(0):=v_i\) and \(e_j(l_j):=v_k\). Further, when considering a function \(f_j\) defined on the metric edge \(d(e_j)=[0, l_j]\), we shall occasionally write \(f(v_i):=f (s)\) if \(e_j(s)=v_i\) for \(s=0,l_j\). By a function defined on the metric graph we understand a vector-valued function \(f:[0,1]\rightarrow \mathbb {R}^m\) such that \(f(x)=(f_j(l_jx))_{j\in J}\), where \(f_j:[0,l_j]\rightarrow \mathbb {R}\) is defined on the edge \(e_j\).
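A possible data layout for a function on a metric graph, following the parameterization \(f(x)=(f_j(l_jx))_{j\in J}\); the edge lengths and component functions below are purely illustrative.

```python
import numpy as np

# Edge weights l_j and per-edge functions f_j defined on [0, l_j] (illustrative).
lengths = [1.0, 2.0, 0.5]
edge_funcs = [lambda s: s, lambda s: s**2, np.cos]

def f(x):
    """Vector-valued function on the metric graph, x in [0, 1].

    Coordinate j evaluates f_j at the rescaled point l_j * x, so x = 0 and
    x = 1 correspond to the head and tail endpoints of every edge at once.
    """
    return np.array([fj(lj * x) for fj, lj in zip(edge_funcs, lengths)])

print(f(1.0))   # values at the tail endpoints s = l_j of each edge
```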
The main idea of this paper is to find the function defined on the metric graph that satisfies both the weak formulation of Burgers’ equation on edges and certain transmission conditions at vertices. Based on general knowledge of the mono-dimensional case, it is obvious that the direction of the flow can disagree with the parameterization of an edge. Although this does not cause a difficulty on the edge, it complicates the transmission conditions. To properly define conditions at vertices we extend the classical notion of the weighted adjacency matrix of a line graph \(\mathcal {B}=(b_{ij})_{i,j\in J}\), which in the standard setting reads
Consider the following operators \(\mathcal {B}^{pq}=(b_{jk}^{pq})_{j,k\in J}\), for \(p,q\in \{0,1\}\) such that
Note that the new approach to the adjacency matrix definition given in (5), unlike the classical one (4), allows for the lack of flow between two edges even though they are physically connected. Obviously \(\mathcal {B}^{01}=\left( \mathcal {B}^{10}\right) ^T\), but we distinguish these cases due to their different meaning in the sense of flow. Note that if \(b_{jk}^{01}\ge 0\), then using the notation from (5a) and (3), \(e_k\in D^{in}_i\) and \(e_j\in D^{out}_i\). On the other hand, for \(b_{jk}^{10}\ge 0\), \(e_j\in D^{in}_i\) and \(e_k\in D^{out}_i\). Consequently, in the first case the direction of the vertex \(v_i\) is opposite to the direction of the vertex in the second case.
If we replace the arbitrary nonzero coefficients in the matrices \(\mathcal {B}\), \(\mathcal {B}^{pq}\) with 1, we arrive at the unweighted counterparts of these matrices; we call them adjacency matrices of a line graph and denote them by \(\mathcal {\overline{B}}\), \(\mathcal {\overline{B}}^{pq}\).
Due to the change in the definition of adjacency matrices, it is possible to find a path in the graph G along which there is no possibility of flow from one edge, say \(e_k\), to another \(e_j\), due to the vanishing of the coefficients \(b_{jk}^{pq}\), \(p,q\in \left\{ 0,1\right\} \), \(j,k\in J\). Therefore, throughout the paper we distinguish the notion of a path in the graph G from that in its metric counterpart \(\mathcal {G}\). By a path in the metric graph we understand a finite sequence of edges \(p_i=e_{k_1},\ldots ,e_{k_{N_i}}\) such that for any \(e_{k_j},e_{k_{j+1}}\) there exists a pair (p, q), \(p,q\in \left\{ 0,1\right\} \), such that \(b_{k_{j+1}k_j}^{pq}\ne 0\). The notions of the path length \(N_i\) and the weighted path length \(L_i\) remain unchanged.
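Since the defining formula (5) is not reproduced above, the sketch below only recovers the support pattern of \(\mathcal {B}^{01}\) implied by the text: \(b^{01}_{jk}\) can be nonzero exactly when at some vertex \(v_i\) the edge \(e_k\) is incoming and \(e_j\) is outgoing, i.e. on the support of \((\Phi ^-)^{T}\Phi ^+\). The graph and all names are illustrative.

```python
import numpy as np

# Same 3-edge star as before: e_1 arrives at v_2, from which e_2 and e_3 leave.
edges = [(0, 1), (1, 2), (1, 3)]
n, m = 4, 3
Phi_minus = np.zeros((n, m), dtype=int)   # phi^-: edge starts (heads)
Phi_plus = np.zeros((n, m), dtype=int)    # phi^+: edge ends (tails)
for j, (head, tail) in enumerate(edges):
    Phi_minus[head, j] = 1
    Phi_plus[tail, j] = 1

# Entry (j, k) counts vertices where e_j leaves and e_k arrives; a nonzero
# entry marks a place where b^{01}_{jk} may carry a transmission weight.
support_B01 = Phi_minus.T @ Phi_plus
support_B10 = support_B01.T               # flow against the vertex direction
print(support_B01)
```

For this star, only the entries coupling \(e_2, e_3\) (rows) to \(e_1\) (column) are nonzero, matching \(D^{in}_2=\{e_1\}\), \(D^{out}_2=\{e_2,e_3\}\).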
3.3 Burgers’ Equation on the Network
Let us define Burgers’ equation on the metric graph \(\mathcal {G}\), compare also with Eq. (1),
Namely, let \(u=(u_j(l_j\cdot ))_{j\in J}\) be the function defined on the metric graph \(\mathcal {G}\) which satisfies
for every coordinate \(j\in J\). Now let us derive the transmission conditions that, on the one hand, incorporate the network structure into the formulation and, on the other, allow for a flow that agrees with the physical motivation.
Let us start with the formulation of transfers that comes from the generalisation of vertex conditions for network transport, see [16, Sec. 3a]. Consider operators \(u \mapsto \mathcal {B}_z(u)\in M_{m}(\mathbb {R})\), \(z=0,1\) and for almost all \(t\in [0,T)\) assume that
Obviously such a general formulation has to be specified for a number of reasons. Even in the linear case, when \(\mathcal {B}_0, \mathcal {B}_1\) are independent of u, the uniqueness of the solution to (7) strictly depends on their rank. Furthermore, there is no clear relation with a graph structure, because again, for arbitrary operators \(\mathcal {B}_0,\mathcal {B}_1\in M_{m}(\mathbb {R})\), it is not always possible to build the graph, not to mention the directed tree, that is the object of these considerations. For details see [1].
Let us draw attention to one property that is important in further considerations: if the direction of the flow disagrees with the parametrization, it may allow for a cyclic flow along the edges even though the graph is a directed tree.
Example 1
Consider a graph \(G=(V,E,\mathcal {L},\Phi )\) such that
presented also in Fig. 2. Problem (6)–(7) such that
is locally in time equivalent to the Burgers’ equation on the circle of radius \(r=3\).
In Example 1 the cyclic structure appeared due to the disturbance of the flow at vertices \(v_1\) and \(v_3\). Note that the direction of the vertex \(v_1\) is \(D_1=(D_1^{in},D^{out}_1)=\left( \emptyset ,\left\{ e_1,e_2\right\} \right) \) while the mass flows from the edge \(e_3\) into \(e_1\). A similar problem appears at \(v_3\). In further considerations we allow the flow to go both in line with the vertex direction and in the opposite direction. We ensure, however, that there is no exchange of mass between edges in the sets \(D_i^{in}\) (nor in \(D_i^{out}\)) for any \(i\in I\), namely
Let us now fix the vertex \(i\in I\) and the moment \(t\in [0,T)\), and consider two cases. First, suppose the flow at t agrees with the direction of the vertex. Then the transmission conditions at vertex \(v_i\) read
where \((b^{01}_{js})_{j,s\in J}\) is the adjacency matrix defined in (5a). Similarly, for the flow opposite to the vertex direction we have
where \((b^{10}_{js})_{j,s\in J}\) is the adjacency matrix defined in (5c). In particular we note that for the considered problem the matrices \(\mathcal {B}^{00}\) and \(\mathcal {B}^{11}\), defined respectively in (5b) and (5d), vanish.
Definition 3
We say that system (6)–(10)–(11) is the strong formulation of Burgers’ equation on the metric tree \(\mathcal {G}\).
The above definition is formal; the relation \(\mathcal {B}(u)\) is still not given. In order to move from the strong to the weak formulation we introduce a set of smooth functions over \(\mathcal {G}\), namely functions smooth over the edges which agree on germs given at each vertex \(v_i\), with the neighbourhood oriented in line with the direction \(D_i\). Below we give a weaker definition, which always allows integration by parts.
Definition 4
We say that \(f=(f_j(l_j\cdot ))_{j\in J}\) defined on the metric graph \(\mathcal {G}\) is smooth on \(\mathcal {G}\), and we write \(f\in C^\infty (\mathcal {G})\), if the following conditions hold
-
(i)
\(f_j(l_j\,\cdot )\in C^\infty [0,l_j]\) for any \(e_j \in E\);
-
(ii)
for any \(v_i\in V\), and any \(k \in \mathbb {N}\)
$$\begin{aligned} \partial ^{(k)}f_{j}(l_j)=\partial ^{(k)}f_{s}(0) \text{ for } \text{ all } e_j \in D^{in}_i \text{ and } e_s \in D^{out}_{i}. \end{aligned}$$
Consider now a function \(\phi :[0,1]\times [0,\infty )\rightarrow \mathbb {R}^m\), \(\phi (\cdot ,t)\in C^\infty (\mathcal {G})\). In what follows the product of two vector functions is understood in the sense of the Hadamard product, namely \(fg=(f_jg_j)_{j\in J}\). Now define integration over the metric graph \(\mathcal {G}\) as the sum of the integrals over all edges of the graph, namely for any integrable function \(f=(f_j)_{j\in J}\) defined on \(\mathcal {G}\)
The weak solution u should satisfy the condition
for some \(t\in [0,\infty )\). Let us focus on the definition of the integral over \(\mathcal {G}\). To pass from (13) to the strong form of the equation we move the x derivative back onto the equation, namely, we consider
So to eliminate boundary terms at each vertex \(v_i\), using the Definition 4ii) for \(k=0\), we require that
Equation (14), known as the Kirchhoff condition, is one of the most classical transmission conditions considered on metric graphs, see [22, Sec. 2.2.1]. It describes the conservation of flux in each vertex of a network.
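A minimal numerical sketch of such a flux balance at a single vertex. Since Eq. (14) is not displayed above, we assume here that it equates the Burgers flux \(u^2/2\) summed over incoming and outgoing edges; the function names are ours.

```python
# Flux of the Burgers' equation u_t + (u^2/2)_x = 0.
def burgers_flux(u):
    return 0.5 * u * u

def kirchhoff_residual(u_in, u_out):
    """Residual of the flux balance at one vertex; zero means conservation."""
    return sum(map(burgers_flux, u_in)) - sum(map(burgers_flux, u_out))

a = 0.8
# A symmetric split (a/sqrt(2), a/sqrt(2)) of one incoming value a conserves
# the flux exactly: a^2/2 = (a^2/4) + (a^2/4) ... twice a^2/4 on each branch.
print(kirchhoff_residual([a], [a / 2**0.5, a / 2**0.5]))  # ~0.0
```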
Definition 5
We say that system (13)–(10)–(11) is the weak formulation of Burgers’ equation on the metric tree \(\mathcal {G}\), if weighted adjacency matrices of a line graph \(\mathcal {B}^{pq}\), \(p,q=0,1\), satisfy conditions (9), (14). The class of solutions to the problem in weak formulation we denote by \(B(\mathcal {G})\).
The hyperbolic character of Burgers’ equation requires determining the behaviour at vertices in order to obtain the transmission condition for incoming characteristics, i.e. the coefficients of the matrices \(\mathcal {B}^{01}\) and \(\mathcal {B}^{10}\). In our setting we are obliged to take into account two restrictions. The first one is the Kirchhoff condition (14), while the second is the requirement that the dynamics on the graph \(\mathcal {G}\) is acyclic, namely (9). Note that the determination of a solution, even under the above restrictions, is not unique. To make our equation on \(\mathcal {G}\) well-posed, there is a need to impose more conditions. The general case is rather complex, so in this paper we concentrate on two examples: the equation with non-negative velocities, and general velocities on the honeycomb tree, see Definition 2. In the latter case, the geometry of vertices is simple enough to consider all possible flow variations at vertices. It also gives some intuition for the more general case.
The article is organised as follows. Section 4 concentrates on the non-negative case. The coefficients of \(\mathcal {B}^{01}\) are related to the change of energy of the solution, see Sect. 4.1, while the existence result in Theorem 2, our first main result, is derived using methodology known from the Hamilton–Jacobi equation. In Sect. 5 general velocities on honeycomb trees are considered. The generalisation of the energy methods, applied to vertices of the first and second kind (see Definition 2) with an arbitrary direction of the flow at a vertex, can be found in Sect. 5.1, while the existence result, the second main result, is stated as Theorem 3 in Sect. 5.3. Finally, in Sect. 6 we return to the motivating example of wave interference.
4 Non-negative Entropy Solutions
In this section the analysis is restricted to flow directions that agree with the parameterization of edges. Consequently, we look for weak solutions such that for \(\mathring{u}>0\) the solution remains in the non-negative cone, \(u\ge 0\). Considerations in Sect. 4.1 relate the coefficients of \(\mathcal {B}^{01}(u)\) to properties of the solution u, while in Sect. 4.2 we derive the existence theorem for the problem of the form
Before we go through the details let us formalise the notion of non-negative solution.
Definition 6
We say that function u is a non-negative weak solution of network Burgers’ equation (15) if
-
(i)
\(t\mapsto u_j(\cdot ,t)\in L^{\infty }([0,l_j],\mathbb {R})\) is continuous almost everywhere on [0, T), for \(T>0\);
-
(ii)
for every \(\phi (\cdot ,t) \in C^{\infty }(\mathcal {G})\) u satisfies (15a),
-
(iii)
\(u\ge 0\) for every \(\mathring{u}\in L^{\infty }([0,1],\mathbb {R}_+^m)\),
- (iv)
4.1 Derivation of Transmission Conditions
The aim of this part is to understand how to derive the coefficients of the matrix \(\mathcal {B}^{01}(u)\) in (15c); hence, throughout Sect. 4.1, when referring to the network Burgers’ equation we consider the problem
We learn from the mono-dimensional case that to obtain uniqueness of weak solutions one needs to specify the shock wave by the Rankine–Hugoniot condition and exclude non-physical shocks by, for instance, the Lax condition. Namely, let \(\xi :[0,T)\rightarrow \mathbb {R}_+\) be a smooth curve describing the discontinuity of a scalar weak solution u, and by \(\xi ^{\pm }(t)\) denote the left and right limits as x goes to \(\xi (t)\). Then
Definition 7
We say that a function \(u_j: [0,l_j]\times [0,T)\rightarrow \mathbb {R}\) is an entropy solution of the scalar Burgers’ equation on edge \(e_j\), \(j=1,\ldots ,m\), if it is a weak solution to the scalar Burgers’ equation on edge \(e_j\) which satisfies both the Rankine–Hugoniot and Lax conditions at each discontinuity.
Furthermore, \(u=(u_j)_{j\in J}\) is an edge-entropy solution if it is an entropy solution on each edge.
Let us also recall that in the mono-dimensional case Oleinik’s one-sided inequality
implies that u is an entropy solution.
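The Rankine–Hugoniot and Lax conditions above admit a minimal numerical sketch for the Burgers flux \(u^2/2\); the function names are ours.

```python
def shock_speed(u_left, u_right):
    """Rankine-Hugoniot speed for Burgers' equation.

    xi'(t) = [u^2/2] / [u] = (u_left + u_right) / 2.
    """
    return 0.5 * (u_left + u_right)

def lax_admissible(u_left, u_right):
    """Lax condition: characteristics run into the shock, u_left > xi' > u_right."""
    return u_left > u_right

print(shock_speed(1.0, 0.0), lax_admissible(1.0, 0.0))   # 0.5 True
print(lax_admissible(0.0, 1.0))  # False: this jump opens into a rarefaction
```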
We now concentrate on vertices. Note first that the Kirchhoff condition at a vertex \(v_i\) being a source or a sink assures the unique representation of the solution \(u_j(v_i,t)=0\), for \(e_j\in D^{out}_i\) and \(e_j\in D^{in}_i\) respectively, since there is no flow through these vertices. In the case of other vertices we may encounter ambiguity. Consequently, imposing only conditions (17) on a non-negative weak solution to (16) still does not guarantee uniqueness. Let us dwell on this statement for a moment. In order to define the fraction of mass that flows through the vertex \(v_i\) at some fixed time t, let us transform the classical notion of a Riemann solver into its transmission-at-the-vertex counterpart. Denote by \(u_j(v_i,t^\mp )\) the value of the solution (at a head or a tail of an edge, respectively for \(\phi _{ij}^-\ne 0\) and \(\phi _{ij}^+\ne 0\)), before the flow through the vertex for \(t^-\) and after the flow for \(t^+\).
Definition 8
Let \(\mathcal {G}=((V,E,\mathcal {L},\phi ), d)\) be a metric graph and fix \(v_i\in V\). We say that a mapping
where \(J_i\) is defined in (2), is a transmission solver in vertex \(v_i\in V\), if it satisfies conditions (15d) for almost all \(t\in [0,T)\).
The first peculiarity implied by assuming only the Kirchhoff conditions at vertices is the lack of a condition joining the values of the solution before and after the flow through the vertex, namely at \(t^-\) and \(t^+\).
Example 2
Let \(P_2\) be a path graph, see Definition 2, and consider a Riemann problem on metric path graph \(\mathcal {P}_2\), presented in Fig. 2, of the form
The transmission solver \(TS_2\) does not have to be unique at the vertex \(v_2=e_1(1)=e_2(0)\) in some neighbourhood of \(t=0\). Note that for any parameter \(a\in [0,\infty )\), the function u defined below is a non-negative, edge-entropy solution for \(t\in [0,\epsilon )\).
-
1.
Let \(a\in [0,1)\), then
$$\begin{aligned} u_1(x,t)= & {} \left\{ \begin{array}{ll} 0&{}\text {for }\,\, x\ne 1,\\ a&{}\text {for }\,\, x= 1,\end{array}\right. \\ u_2(x,t)= & {} \left\{ \begin{array}{ll} a&{}\text {for }\,\, \frac{x}{t}\le a,\\ \frac{x}{t}&{}\text {for }\,\, a< \frac{x}{t}\le 1,\\ 1&{}\text {for }\,\, \frac{x}{t}>1.\end{array}\right. \end{aligned}$$ -
2.
Let \(a\in [1,\infty )\), then
$$\begin{aligned} u_1(x,t)= & {} \left\{ \begin{array}{ll} 0&{}\text {for }\,\, x\ne 1,\\ a&{}\text {for }\,\, x= 1,\end{array}\right. \\ u_2(x,t)= & {} \left\{ \begin{array}{ll} a&{}\text {for }\,\, \frac{x}{t}<\frac{a+1}{2},\\ 1&{}\text {for }\,\, \frac{x}{t}>\frac{a+1}{2}.\end{array}\right. \end{aligned}$$
Obviously, each coordinate of u is a piecewise continuous solution to the mono-dimensional Burgers’ equation and at each jump satisfies the Rankine–Hugoniot and Lax conditions. Consequently, by [4, Thm. 4.2] \(u_j\) is an entropy solution of the scalar Burgers’ equation on the edge \(e_j\), \(j=1,2\), and hence u is an edge-entropy solution to the network Burgers’ equation. Finally, we obtain a family of transmission solvers at \(v_2\) at \(t=0\) that depends on the parameter a.
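For the branch \(a\ge 1\) of Example 2, the jump in \(u_2\) indeed travels with the Rankine–Hugoniot speed \((a+1)/2\). A quick sketch (the function names are ours):

```python
def u2(x, t, a):
    """Second coordinate of the solution in Example 2 for a >= 1 (piecewise constant)."""
    return a if x / t < (a + 1) / 2 else 1.0

def rh_speed(u_left, u_right):
    # Rankine-Hugoniot speed for the Burgers flux u^2/2.
    return 0.5 * (u_left + u_right)

a = 2.0
assert rh_speed(a, 1.0) == (a + 1) / 2    # the shock speed agrees with the jump
print(u2(1.0, 1.0, a), u2(2.0, 1.0, a))   # 2.0 1.0 (left / right of the shock)
```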
Considerations on a path graph allow us to build intuition related to the behaviour at vertices, as the solution can easily be related to the scalar case. Let us compare the solutions presented in Example 2 with the standard solution of the initial-boundary value problem on the interval [0, 2], namely with a problem of the form
The comparison clearly indicates that to obtain an entropy solution in the mono-dimensional case we need to take \(a=0\), since otherwise we introduce a non-physical shock into the model. The choice of \(a\in (0,1]\) gives a weak solution that can be justified, while \(a>1\) seems to make no sense. To choose a physically reasonable solution in the network case, we assume continuity on some edges adjacent to the vertex \(v_i\), a.e. in time; namely, continuity on the edges from \(D_i^{in}\) if the flow agrees with the direction of the vertex. In the case of a non-negative solution this condition simplifies to
- (LC):
-
\(u_j(1,t^-)=u_j(1,t^+),\qquad \text {for}\,\,e_j\in D_i^{in}\) and a.e. \(t\in (0,T)\).
Condition (LC) transfers the problem of finding the value of the solution at \(t^+\) only to the edges from \(D_i^{out}\). It is worth mentioning that it is well defined only for vertices other than sinks. For the path graph, see Example 2, it is sufficient to obtain uniqueness, but not in the general case \(\text {deg}_{-}(v_i)>1\). The next condition relates the value of the solution after the flow through the vertex to the change of the energy, which is a natural assumption in the context of fluid-type equations.
Let us recall that in the case of the scalar Burgers’ equation the change of energy of a piecewise continuous solution with one jump, defined on the interval [A, B], reads
where \(u(\xi ^{\pm }(t),t)\) are the right and left limits at the discontinuity curve \(\xi (\cdot )\). We easily note that for each shock wave that satisfies the Lax condition the energy decreases proportionally to the magnitude of the jump, while for non-physical shocks we observe an increase of the energy. In the following considerations we take into account only edge-entropy solutions, see Definition 7, which excludes the existence of non-physical shock waves. Now fix the vertex \(v_i\) and consider the Riemann problem, at \(x=1\) for incoming edges and \(x=0\) for outgoing ones, that arises due to the flow through the vertex. We define the change of the energy at \(v_i\) by \(\mathcal {E}_i:[0,\infty )^{\text {deg}(v_i)}\rightarrow \mathbb {R}\)
where \(\mathcal {E}_{ij}^{\pm }:[0,\infty )^{\text {deg}(v_i)}\rightarrow \mathbb {R}\) is the change of energy at the edge \(e_j\) and \(\theta \) is a Heaviside step function. The following transmission conditions are related to extremes of \(\mathcal {E}_i\).
- (\(\mathcal {E}_i^m\)):
-
transmission conditions (15c) in \(v_i\) minimize function \(\mathcal {E}_i\),
- (\(\mathcal {E}_i^M\)):
-
transmission conditions (15c) in \(v_i\) maximize function \(\mathcal {E}_i\).
At the beginning let us remark that without condition (LC) the problem of minimization of \(\mathcal {E}_i\) with respect to \(u(v_i,t^+)\) does not have to be well-posed. Let us return to Example 2. For \(v_2\), at \(t=0\), we have
since \(\mathcal {E}_2\) reads
On the contrary, maximizing \(\mathcal {E}_i\) we obtain \(a=1\), which is again not the solution we aim at. In order to build further intuition we consider a problem defined on the metric honeycomb tree.
Example 3
Let us consider the metric honeycomb tree \(\mathcal {H}_3\) being a v-subgraph of the honeycomb lattice for v a vertex of the first kind, see Fig. 1(iii). Define on \(\mathcal {H}_3\) the network Burgers’ equation (16) with initial condition \(\mathring{u}(x):=(a,1,1)^T\), \(a\in [0,1]\). The edge-entropy solution which satisfies condition (LC) depends on one parameter \(b\in [0,a]\), for \(t\in [0,\epsilon )\), and reads
Now we build two transmission solvers which satisfy either (\(\mathcal {E}_i^m\)) or (\(\mathcal {E}_i^M\)), and denote them respectively by \(TS_2^m\) and \(TS_2^M\). Function \(\mathcal {E}_2\) is, for \(t=0\), formulated by
Calculating critical points of \(\mathcal {E}_2\) and values at the boundary we arrive at three possible cases, namely \(b=0\), \(b=\frac{\sqrt{2}}{2}a\) and \(b=a\). We note that
hence \(TS_2^m(a,1,1)=\left( a,\frac{\sqrt{2}}{2}a,\frac{\sqrt{2}}{2}a\right) \) and \(TS_2^M(a,1,1)\in \left\{ (a,0,a),(a,a,0)\right\} \).
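The two solvers can be checked numerically. Writing the outgoing pair as \((b,\sqrt{a^2-b^2})\), which is consistent with the three displayed candidates assuming the Kirchhoff flux balance fixes the second outgoing value, every \(b\in [0,a]\) conserves the Burgers flux at the vertex; the variable names below are ours.

```python
import math

def outgoing(a, b):
    """Outgoing values (u_2, u_3) for parameter b, assuming a^2 = u_2^2 + u_3^2."""
    return b, math.sqrt(a * a - b * b)

a = 0.6
# All three critical candidates b = 0, a/sqrt(2), a balance the flux a^2/2.
for b in (0.0, a / math.sqrt(2), a):
    u2, u3 = outgoing(a, b)
    assert abs(a**2 - (u2**2 + u3**2)) < 1e-12   # Kirchhoff flux balance

print(outgoing(a, a / math.sqrt(2)))   # the minimal solver's symmetric split
print(outgoing(a, a))                  # the maximal solver's split under (DF)
```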
Example 3 is very specific, since the value of the solution before the flow through the vertex is equal on \(e_2\) and \(e_3\), see Fig. 1 for the notation. Consequently, for \(t=0\) the edges \(e_2\) and \(e_3\) can be considered as locally symmetric with respect to the flow. In order to exclude such a case in further considerations we introduce a technical condition called the decreasing flow with respect to the edge enumeration
- (DF): \((TS_i^M)_j\ge (TS_i^M)_k\) for any \(j<k\), \(j,k\in D_i^{out}\).
It allows us to specify the solution in which the highest flow is related to the edge with the lowest number. Since all tree graphs G having the same triplet \((V,E,\mathcal {L})\) but different mappings \(\Phi \), all satisfying an increasing order of edges, are homomorphic, any locally symmetric solution can be chosen depending on the choice of representative. In particular, using the notation introduced in Example 3, assuming that \(TS_2\) satisfies (DF) we have \(TS_2^M(a,1,1)=(a,a,0)\).
What happens if edges \(e_2\) and \(e_3\) are not locally symmetric with respect to the flow? We expect that this leads to a different mass distribution when going through the vertex, depending on the values of \(\mathring{u}_{2}\) and \(\mathring{u}_{3}\). In such a case, the coefficients of matrix \(\mathcal {B}^{01}(u)\) in Eq. (15c) depend strictly on the solution u. On the other hand, it is worth underlining that the considered transmission solver works point-wise in time, and it seems justified to add a consistency condition that allows it to stabilize on a certain time interval. Namely, we expect that
Condition (23) was also introduced in [12, Def. 5] as one of common assumptions imposed on different transmission solvers considered in the literature. In line with this reasoning, let us define minimal and maximal transmission solver in vertex as follows.
Definition 9
Let \(TS_i^m\) (resp. \(TS_i^M\)) be the transmission solver that, for some fixed \(t\in [0,T)\), satisfies conditions (LC)–(\(\mathcal {E}^m_i\)) (resp. (LC)–(\(\mathcal {E}^M_i\))–(DF)) in \(v_i\). We say that \((TS_i^{m})^{\star }\) (resp. \((TS_i^{M})^{\star }\)) is a minimal (resp. maximal) transmission solver in vertex \(v_i\) if it satisfies
where, by \((TS_i^z)^{(n)}\), we understand the n-th composition of the mapping \(TS_i^z\).
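Operationally, the limit (24) is a fixed point of repeated application of the solver. A minimal numerical sketch follows; the mapping `ts` is a hypothetical stand-in for a concrete \(TS_i^z\), and the function name is ours, not the paper's.

```python
def star_solver(ts, u, tol=1e-12, max_iter=10_000):
    """Iterate a transmission solver ts until the vertex values stabilise,
    mimicking the n-th composition (TS_i^z)^(n) in (24)."""
    for _ in range(max_iter):
        v = ts(u)
        if max(abs(a - b) for a, b in zip(u, v)) < tol:
            return v
        u = v
    return u
```

For any contractive solver the loop terminates at the stabilised vertex values; for an idempotent solver it terminates after one step.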
We now need to justify that Definition 9 is well-posed, i.e. that the limit in (24) exists. If it does not depend on u, then problem (15) transforms into
where coefficients of \(\mathcal {B}^{01}\) in (25c) are given by
Theorem 1
Consider non-negative weak solution u of Burgers’ equation (16) on the metric tree \(\mathcal {G}\) and fix \(t\in [0,T)\). The following statements hold.
-
(i)
At each vertex \(v_i\in V\), there exists a unique transmission solver \((TS_i^m)^{\star }\) of the form
$$\begin{aligned} (TS_i^m)^{\star }u(v_i,t^-)=\left\{ \begin{array}{ll} u_j(1,t^-)&{}\text {for}\,\, e_j\in D_i^{in},\\ \frac{1}{\sqrt{|D_i^{out}|}}\sqrt{\sum _{\left\{ s\in J:\,\,e_s\in D_i^{in}\right\} }u_s^2(1,t^-)}\qquad &{}\text {for}\,\, e_j\in D_i^{out}. \end{array}\right. \end{aligned}$$(27) -
(ii)
At each vertex \(v_i\in V\), there exists a unique transmission solver \((TS_i^M)^{\star }\) of the form
$$\begin{aligned} (TS_i^M)^{\star }u(v_i,t^-)=\left\{ \begin{array}{ll} u_j(1,t^-)&{}\text {for}\,\, e_j\in D_i^{in},\\ [.2cm] \sqrt{\sum _{\left\{ s\in J:\,\,e_s\in D_i^{in}\right\} }u_s^2(1,t^-)}\qquad &{}\text {for}\,\, e_j=e_k,\\ [.2cm] 0&{}\text {for}\,\, e_j\in D_i^{out}\setminus \left\{ e_k\right\} , \end{array}\right. \end{aligned}$$(28)where \(k\in J\) satisfies condition
$$\begin{aligned} k:=\max \left\{ j\in J:\,\, e_j\in D_i^{out}\right\} . \end{aligned}$$(29)
Proof
Let \(v_i\in V\) be an arbitrary vertex. Without loss of generality we assume that
and introduce a notation \(f^{\mp }=(f_j^\mp )_{j=1}^{\text {deg}(v_i)}:=(u_j^2(v_i,t^{\mp }))_{j=1}^{\text {deg}(v_i)}\). By (LC), finding \(TS_i^z\), \(z=m,M\), is equivalent to the optimization problem
where
Since we optimize a continuous function \(\bar{\mathcal {E}}\) on a compact set A, the only thing left to prove is the uniqueness of the minimum/maximum. We show that \(\bar{\mathcal {E}}\) is strictly quasiconvex on the convex set A, namely
for \(f^+,g^+\in A\), \(f^+\ne g^+\), \(\lambda \in (0,1)\); therefore it attains a unique global minimum. Note that the function \(\lambda \mapsto \bar{\mathcal {E}}(\lambda f^++(1-\lambda )g^+)\), for \(f^+,g^+\in A\) and \(\lambda \in [0,1]\), is convex since
Hence, it attains maximum at the boundary and
Since the inequality (33) is strict for \(\lambda \in (0,1)\), \(\bar{\mathcal {E}}\) is strictly quasiconvex.
Using the methods of quasiconvex programming we know that the maximum is attained at the boundary of A, see [13, Lem. 3.2]. Adding condition (DF) we obtain uniqueness of \(TS_i^M\).
We now derive the formula for \((TS_i^z)^{\star }\), \(z=m,M\), starting with the minimization condition. The idea is to describe sequences \((u(v_i,t_n^-))_{n\in \mathbb {N}}\) and \((u(v_i,t_n^+))_{n\in \mathbb {N}}\) in such a way that at each step
All velocities are non-negative, so for the next time step we obtain the same regulation for the velocities coming out of the chosen vertex. Let us fix an arbitrary \(n\in \mathbb {N}\) and denote by \(u^-\) and \(u^+\) the value of the solution in vertex \(v_i\) at the time step \(t_n\).
Now consider some index \(s\in J\) such that
Without loss of generality assume that the flow through the vertex \(v_i\) at \(t_n\) changes only the values at two coordinates of edges adjacent to \(v_i\). Since, for almost all t, the Kirchhoff condition needs to be satisfied, we have
We show that the choice of transmission conditions described in (35)–(37) minimizes the function \(\mathcal {E}_i\). Consequently, only the value given in (27) can be the limit \((TS_i^m)^{\star }\).
Indeed, for \(h>0\) and
The structure of the data implies that
But then we note that
Hence \(\mathcal {E}_i\) decreases locally with a growth of \(h>0\).
Let us turn now to the energy maximization case. Since the above considerations also work for negative h, the form of the derivative in (39) ensures that the maximum is realised at the boundary of the set A. Condition (DF) provides the final formula for \((TS_i^M)^{\star }\). \(\square \)
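The two limiting solvers of Theorem 1 admit a compact numerical sketch. We write the normalisation of the minimal solver via the number of outgoing edges, which reproduces the factor \(\frac{\sqrt{2}}{2}\) of Example 3; which outgoing edge receives the mass in the maximal solver is fixed by (29)/(DF), here simply the first slot. Function names are ours, not the paper's.

```python
import math

def minimal_star(incoming, n_out):
    """(TS^m)*: outgoing edges share the incoming squared mass equally,
    so the Kirchhoff condition sum(out^2) = sum(in^2) holds."""
    s = sum(u * u for u in incoming)
    return [math.sqrt(s / n_out)] * n_out

def maximal_star(incoming, n_out):
    """(TS^M)*: the whole squared mass is routed to a single outgoing
    edge (placed in the first slot here); the remaining values are zero."""
    s = math.sqrt(sum(u * u for u in incoming))
    return [s] + [0.0] * (n_out - 1)
```

For the first-kind honeycomb vertex, `minimal_star([a], 2)` returns twice the value \(\frac{\sqrt{2}}{2}a\), matching \(TS_2^m(a,1,1)\) in Example 3.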
The assumptions of Theorem 1 are strictly related to non-negative flow velocities. In the general case the considerations are more subtle and generate a larger number of possible physical behaviours of a flow. For that reason, in Sect. 5.1 we confine ourselves to honeycomb trees. Note that this metric graph provides only three types of transmission conditions, according to formula (26): two for vertices \(v_i\) of the first kind, such that \(D_i=\left( \left\{ e_j\right\} ,\left\{ e_k,e_l\right\} \right) \), \(j<k\)
-
(i)
\(u_k(0,t)=u_j(1,t),\,\, u_l(0,t)=0\)
-
(ii)
\(u_k(0,t)=u_l(0,t)=\frac{\sqrt{2}}{2}u_j(1,t)\);
and one for the vertices of the second kind such that \(D_i=\left( \left\{ e_j,e_k\right\} ,\left\{ e_l\right\} \right) \)
-
(iii)
\(u_l(0,t)=\frac{u_j(1,t)}{\sqrt{u_j^2(1,t)+u_k^2(1,t)}}u_j(1,t)+\frac{u_k(1,t)}{\sqrt{u_j^2(1,t)+u_k^2(1,t)}}u_k(1,t)\).
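For reference, the three honeycomb transmission rules (i)–(iii) can be transcribed directly; the Kirchhoff condition (conservation of \(u^2\) through the vertex) can then be checked on each of them. Function names are ours.

```python
import math

def first_kind_max(uj):
    """Rule (i): all mass to e_k, nothing to e_l."""
    return uj, 0.0

def first_kind_min(uj):
    """Rule (ii): equal split of the squared mass between e_k and e_l."""
    return math.sqrt(2) / 2 * uj, math.sqrt(2) / 2 * uj

def second_kind(uj, uk):
    """Rule (iii): the quotients collapse to u_l = sqrt(u_j^2 + u_k^2)."""
    return math.sqrt(uj * uj + uk * uk)
```

In each case the sum of squares of the outgoing values equals the sum of squares of the incoming ones, which is exactly the Kirchhoff law the transmission conditions are built on.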
At the end of this part let us give the formal definition of an entropy solution of the network Burgers’ equation.
Definition 10
We say that a function \(u: [0,1]\times [0,T)\rightarrow \mathbb {R}^m\) is a vertex-entropy solution if it is a weak solution to the network Burgers’ equation (15).
Furthermore, \(u=(u_j)_{j\in J}\) is an entropy solution if it is both an edge- and a vertex-entropy solution. In particular, minimal- and maximal-entropy solutions are respectively the edge-entropy solutions to (25)–(26) with \(z=m,M\).
4.2 Existence of Solutions
We are finally ready to prove the existence result in the case of non-negative solutions.
Theorem 2
Problem (15) on a finite tree \(\mathcal {G}\) admits a non-negative entropy solution for any \(\mathring{u}\in L^{\infty }([0,1],\mathbb {R}_+^m)\). For almost all \(t>0\) the function \(x\mapsto u(x,t)\) has locally bounded total variation and can be calculated recursively from the formula
for any edge \(j\in J\).
Proof
In accordance with the proof of existence of a weak solution in the scalar case, see [19, Thm. 1.1] and [6], we show that formula (40) is valid for piece-wise smooth solutions satisfying the Lax shock inequality at discontinuities. To this end we define the solution recursively on each edge.
Necessity. Assume first that u is a solution of (15) as stated above. Then for any source \(e_j(0)\), \(j=1,\ldots ,s\), see Sect. 3.1, the right-hand side of (15c) vanishes and consequently \(u_j(0,t)= 0\) for all \(t>0\). Note that, due to the tree structure and the recursive procedure, we can choose an order of edges such that before calculating the solution on the k-th edge we have the values of all \(u_j(x,t)\) for \(j\in J\) such that \(b_{kj}> 0\), see Eq. (4). Consequently, the system of conservation laws on the network transforms into a sequence of initial-boundary-value problems of the form
where \(i\in I\) satisfies \(v_i=e_j(0)\).
Let us fix \(j\in J\) and define auxiliary functions \(w_j:[0,l_j]\times [0,\infty )\rightarrow \mathbb {R}_+\) and \(\mathring{w}_j:[0,l_j]\rightarrow \mathbb {R}_+\) such that
Note that u is a weak, piece-wise smooth solution to (15) if and only if it satisfies
at each smoothness region in \([0,l_j]\times [0,T)\) and Rankine–Hugoniot condition along the discontinuity, see [19, Thm. 2.3]. We have
By the properties of a square function we have that for any \(v\in [0,\infty )\) and \(z\in \mathbb {R}\)
For \(z=\partial _x w_j\), by (43),
and consequently,
In order to determine the value of \(u_j\) at \((x,t)\in [0,l_j]\times [0,\infty )\) we choose some v. The line passing through (x, t) with slope v either intersects the Ox axis at \(y=x-vt\in [0,l_j]\) or hits the Oy axis at \(t=-\frac{y}{v}\) for \(y<0\). We integrate (44) along the characteristic \(y=x-vt\) separately in the two mentioned cases.
If \(y\in [0,l_j]\), then integrating over [0, t], analogously to the proof in the scalar case, we have
If \(y\in (-\infty ,0)\), then we integrate over \(\left[ -\frac{y}{v},t\right] \) and since \(v=\frac{x-y}{t}\) and by (42), we obtain
where \(G_j\) is defined in (40b)–(40c). Since the left-hand side does not depend on y, we minimize the right-hand side over y. Let us choose the slope of the characteristic line \(v=u_j(x,t)\). Inserting v into (44) we obtain, by (43), the equality. The minimum in (31) is attained at \(y_j\) since u satisfies the Lax condition. Finally,
and since \(y_j(x,t)=x-u_j(x,t)t\) we derive formula (40a).
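The minimization just described can be sketched numerically by a grid search. The block below evaluates the minimiser of \(y\mapsto \int _0^y \mathring{u}(s)\,ds+\frac{(x-y)^2}{2t}\) on a single edge for \(y\in [0,l]\) only (the interior case, without the boundary term \(G_j\) carries for \(y<0\)); the function name and grid parameters are ours.

```python
import numpy as np

def lax_oleinik(u0, x, t, l=1.0, n=20001):
    """Grid search for the minimiser y* of
    G(x,t,y) = int_0^y u0(s) ds + (x - y)^2 / (2 t),  y in [0, l],
    returning u(x,t) = (x - y*) / t  (a numerical sketch of (40a))."""
    y = np.linspace(0.0, l, n)
    vals = u0(y)
    # primitive of u0 via a cumulative trapezoidal rule
    U = np.concatenate(([0.0], np.cumsum(0.5 * (vals[1:] + vals[:-1]) * np.diff(y))))
    G = U + (x - y) ** 2 / (2.0 * t)
    y_star = y[np.argmin(G)]
    return (x - y_star) / t
```

As a sanity check, for the linear datum \(\mathring{u}(s)=s\) the classical solution is \(u(x,t)=\frac{x}{1+t}\), which the grid search reproduces up to the grid resolution.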
Sufficiency. Assume that u is given by (40); we show that it is a weak solution to (15). Note first that u is well defined since for any \(j=1,\ldots ,m\) there exists a unique minimizer of \(G_j\).
The existence of a minimizer of \(G_j\) for \(y\in [0,x]\) is obvious since the first entry in (40b) grows faster than linearly while the second has at most linear growth. The same argument works in the case \(y\in (-\infty ,0)\) if we transform the problem of minimization of \(G_j\) over \(y_j\) into minimization of the function \(H_j:[0,x]\times (0,\infty )\times [0,\infty )\rightarrow \mathbb {R}\),
over \(\tau _j\), where
We show now that \(x\mapsto y_j(x,t)\) is non-decreasing. Consequently it has locally bounded total variation and is continuous at all but countably many points. This is sufficient for the uniqueness of the minimizer of \(G_j\) and the well-posedness of u for almost all (x, t).
Let us fix \(t>0\) and, by an abuse of notation, denote \(y_1:=y_j(x_1,t)\), \(y_2:=y_j(x_2,t)\) for \(x_1,x_2\in [0,x]\). Denote by \(x_0\in [0,x]\) an argument such that \(y_j(x_0,t)=0\). By contradiction we assume that \(x\mapsto y_j(x,t)\) is decreasing somewhere, i.e. \(y_2<y_1\) for some \(x_1<x_2\), and we consider three cases.
-
1.
\(0 \le y_2<y_1\) and \(x_0\le x_1<x_2\)
From the definition of \(y_1\), \(G_j(x_1,t,y_1)\le G_j(x_1,t,y_2)\). Additionally,
$$\begin{aligned} \left( \frac{x_2-y_1}{t}\right) ^2+\left( \frac{x_1-y_2}{t}\right) ^2<\left( \frac{x_1-y_1}{t}\right) ^2+\left( \frac{x_2-y_2}{t}\right) ^2. \end{aligned}$$Finally, using (40b) we obtain the contradiction with the fact that \(y_2\) minimizes \(y\mapsto G_j(x_2,t,y)\)
$$\begin{aligned} G_j(x_2,t,y_1)\le G_j(x_1,t,y_2)-G_j(x_1,t,y_1)+G_j(x_2,t,y_1)<G_j(x_2,t,y_2). \end{aligned}$$ -
2.
\(y_2<y_1\le 0\) and \( x_1<x_2<x_0\)
Using the notation in (49)–(50), introduce \(\tau _1:=\tau _j(x_1,t)\) and \(\tau _2:=\tau _j(x_2,t)\). Conditions \(y_2<y_1\) and \(x_1<x_2\) imply that \(\tau _1<\tau _2\) and therefore we can repeat the reasoning in point 1. Again \(H_j(x_1,t,\tau _1)\le H_j(x_1,t,\tau _2)\) and
$$\begin{aligned} \frac{x_1^2}{t-\tau _2}+\frac{x_2^2}{t-\tau _1}<\frac{x_1^2}{t-\tau _1}+\frac{x_2^2}{t-\tau _2}. \end{aligned}$$Using (49), we obtain a contradiction with the fact that \(\tau _2\) minimizes \(\tau \mapsto H_j(x_2,t,\tau )\)
$$\begin{aligned} H_j(x_2,t,\tau _1)\le H_j(x_1,t,\tau _2)-H_j(x_1,t,\tau _1)+H_j(x_2,t,\tau _1)<H_j(x_2,t,\tau _2). \end{aligned}$$ -
3.
\(y_2<0\le y_1\) and \(x_1<x_2\) Note that \(x\mapsto y_j\) is non-decreasing on both intervals \([0,x_0]\) and \([x_0,x]\), so consequently \(x_0\le x_1<x_2\le x_0\), which leads to a contradiction.
We show that (40) is a weak solution. Define now functions \(a_{j\epsilon }, u_{j\epsilon }, f_{j\epsilon }, v_{j\epsilon }\in L^{\infty }([0,1]\times \mathbb {R}_+)\) such that
Set additionally
Note now that the functions \((x,t) \mapsto G_j(x,t,y)\) and \((x,t) \mapsto v_{j \epsilon }(x,t,y)\) are differentiable with respect to x and t. Hence, since \(u_{j\epsilon }=-\epsilon \, \partial _t v_{j\epsilon }(x,t)\) and \(f_{j\epsilon }(x,t,y)=\epsilon \, \partial _x v_{j\epsilon }(x,t,y)\), we have
We show that
for any (x, t) in which \(x\mapsto y_j(x,t)\) is continuous. Denote by \(\bar{y}_j(x,t)\) the unique minimizer of \(G_j\) at (x, t) and define a mapping
which attains at \(\bar{y}_{j}\) its minimum equal to 0. Since \(\bar{G}_j\) is locally Lipschitz continuous (which on the interval \((-\infty ,0)\) follows in particular from the reformulation (49)–(50)), for any \(\delta >0\) the estimate holds for \(y\in [\bar{y}_j(x,t)-\delta ,\bar{y}_j(x,t)+\delta ]\) with Lipschitz constant \(C_{j1}(x,t)\). Therefore
for all \(\epsilon <\delta \). On the other hand, for y such that \(|y-\bar{y}_j|\ge \delta \), \(\bar{G}_j\) is bounded away from zero and tends to infinity at infinity, hence
Finally we have
Passing with \(\epsilon \) to 0 we recover the first limit in (53). Analogously we calculate the second and, passing with \(\epsilon \rightarrow 0\) in (52), we conclude that u is a weak solution to (15).
Formula (40) defines an edge-entropy solution since it satisfies Oleinik’s one-sided inequality (18). Indeed, by the fact that \(x\mapsto y_j(x,t)\) is non-decreasing and positive, for any \(x_1\le x_2\), \(x_1,x_2\in [0,l_j]\) and a.e. \(t>0\)
For the case with negative \(y\)'s we get
Since transmission conditions in (15c) are defined uniquely we arrive at an entropy solution.
Finally, on every edge the weak solution is a piece-wise \(C^1\) function, so taking the limit \(x_2-x_1\rightarrow 0\) we arrive at the estimate on \(u_x\) at a.e. \((x_1,t)\). \(\square \)
Note that Eqs. (40) are the counterparts of the Lax-Oleinik formulas, see [18, Eq. IV.1.3], for Burgers’ equation on a tree. For a graph that satisfies condition
it is possible to relate this solution to the standard formulation on the straight line. The core property in this representation is to derive coefficients \(\mathcal {B}^{01}(u)\) that are independent of the flow when we move backward along the characteristic line.
Example 4
In order to explain this situation consider again the two kinds of nodes in the honeycomb tree, see Fig. 3(i)–(ii), and the transmission conditions in vertex \(v_0\) characterised by the minimal transmission solver \((TS^m)^{\star }\).
-
(i)
\(v_0\) is of the first kind
The idea now is to define the solution on the graph at a point (x, t) on the edge \(e_3\) along the path \(e_0e_1e_3\), where by \(e_0\) we understand the half line \(e_0=(-\infty ,0)\) with initial condition \(\mathring{u}_0=0\) and transmission conditions between edges \(e_0\) and \(e_1\) that conserve both the mass and the flux, namely
$$\begin{aligned} u_0(1,t)=u_1(0,t),\qquad \text {for almost all}\,\,t>0. \end{aligned}$$(57)We change the reasoning in the proof of Theorem 2 in the following way. Considering the characteristic line passing through \((x_0,t_0)\) with slope \(v_0\) (assume \(y_0=x_0-v_0t_0<0\)) we allow it to go through the vertex and continue until it hits the initial line. Using the formula for transmission conditions (25c)–(26), we conclude that characteristic line passes through the point \(\left( 1,-\frac{y_0}{v_0}\right) \) on the edge \(e_1\) with a slope \(\sqrt{2}v_0\). Then it intersects either \(e_1\) or \(e_0\) at \((y_1,0)\). Finally the explicit formula for the solution is given by
$$\begin{aligned} u_3(x_0,t_0)=\frac{x_0-y_3(x_0,t_0)}{t_0} \end{aligned}$$where \(y_3\) minimizes function
$$\begin{aligned} y\mapsto G_3(x_0,t_0,y)=\int _{-\infty }^{y_1} \mathring{u}_1(s)ds+\frac{(x_0-y)^2}{2t_0}. \end{aligned}$$ -
(ii)
\(v_0\) is of the second kind The first problem in repeating the reasoning of (i) for \(v_0\) is the lack of uniqueness of the path, since we can choose either \(e_0e_1e_3\) or \(e_0e_2e_3\). The more essential problem, however, is the fact that we can define neither the slope of the characteristic line \(v_1\) on \(e_1\), nor its counterpart \(v_2\) on \(e_2\). Such a representation does not result from the transmission condition
$$\begin{aligned} v_0=\frac{\sqrt{2}}{2}\left( v_1+v_2\right) . \end{aligned}$$
Example 4 indicates that condition (56) allows us to choose a unique path from any point \(x\in \mathcal {G}\) to the source and ensures the well-posedness of the following procedure: \(\mathcal {G}\) is a finite tree, so it is possible to re-enumerate the edges in such a way that for any two edges \(e_{s}, e_{j}\in E\) and any chosen path \(e_{s}=e_{k_1},\ldots ,e_{k_l}=e_{j}\) we have \(k_i<k_{i+1}\) for all \(i\in 1,\ldots ,l-1\). Fix \(e_j\in E\) and define a path \(P_{j}=e_{k_1}e_{k_2}\ldots e_{k_{N_j}}\) of length \(L_j=\sum _{s=1}^{N_j} l_s\) that starts at a source and ends at \(e_j\). Now define \(u_{P_j}:(-\infty ,l_j]\times [0,\infty )\rightarrow \mathbb {R}_+\) and \(\mathring{u}_{P_j}:(-\infty ,l_j]\rightarrow \mathbb {R}_+\) such that
Proposition 1
The solution to problem (15) on a finite tree \(\mathcal {G}\) satisfying (56) can be related to the mono-dimensional case using the counterpart of the Lax-Oleinik formula on the path sub-graph; namely, for any \(j\in J\) it is given by
where \(y_j\) minimizes function
Finally, it is worth underlining that the considerations presented in the proof of Theorem 2 can be generalised in a number of directions. Firstly, we can examine a conservation law on the edges of a network, coupled by the linear transmission of mass that satisfies the conservation of flux condition, for \(f\in C^1([0,\infty ))\) such that
On the other hand, we can introduce some sources of mass in vertices \(v_i\) such that \(\phi _{ij}^+=0\) for any \(j\in J\).
We formalise these observations in the following statement.
Proposition 2
Let f be a flux function that satisfies (58). For any \(\mathring{u}\in L^{\infty }([0,1],\mathbb {R}_+^m)\) and \(\bar{u}\in L^{\infty }([0,T],\mathbb {R}_+^m)\), the proof of Theorem 2 can be repeated for the following generalisation of problem (15), for almost all \(t\in [0,T]\),
Proof
The proof of this fact can be found in [18, Thm. 2.1]. \(\square \)
4.3 Dense Subclass of Positive Solutions
Note that for positive solutions one can distinguish a special class of functions which is preserved under the flow. This class is the same as for the classical mono-dimensional Burgers’ equation.
Proposition 3
Let \(\mathcal {G}\) be a metric tree. We introduce a class of functions \(\mathcal {W}^+\) such that
Then the class \(\mathcal {W}^+\) is preserved by the flow generated by the Burgers’ equation (15), i.e. if \(\mathring{u}\in \mathcal {W}^+\) then \(u(t)\in \mathcal {W}^+\) for any \(t>0\).
Proof
Let \(\mathring{u}\in \mathcal {W}^+\). Since in the interior of each edge we have the mono-dimensional situation, the class \(\mathcal {W}^+\) is preserved there. The only element that needs to be clarified is the transmission condition, namely that \(u_{j}(0,t)\) is piece-wise \(C^1\) and non-decreasing with a finite number of jumps for every \(j\in J\). The properties of the solution going out of an arbitrary vertex \(v_i\) in the tree \(\mathcal {G}\) can be considered as the composition of flows going out of two vertices \(v_i'\) and \(v_i''\) which are associated with \(v_i\) by the following relation
See also Fig. 4. We can easily see that vertex \(v_i'\) joins the flow, while \(v_i''\) splits it into outgoing edges. In the case of \(v_i'\), the flow in e(0) is the square root of the sum of squared flows of the incoming edges. Since on each \(e_j\in D_i^{in}\) the flow is a non-decreasing \(C^1\) function, these properties are preserved for e(0) at all but a finite number of points. Since for \(v_i''\) the transmission conditions on the outgoing edges are just proportions of the flow arriving along the edge e, the fine properties are guaranteed.
Finally, we note that the number of jumps can be multiplied by \(\text {deg}_-(v_i)\) as a shock crosses \(v_i\), but the finiteness of the graph ensures control of the number of jumps. We also recall that under the evolution some jumps may disappear. \(\square \)
In further considerations we will also use the class \(\mathcal {W}^+_{opp}\) such that
Definition 11
Let u be a function defined over the graph \(\mathcal {G}\). We say that \(u\in TV(\mathcal {G})\) iff
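One natural numerical reading of this definition (an assumption on our part: the variation of u over \(\mathcal {G}\) is the sum of the edge-wise variations of the sampled profiles) can be sketched as follows; function names are ours.

```python
import numpy as np

def edge_tv(samples):
    """Total variation of one edge profile sampled on a grid."""
    return float(np.sum(np.abs(np.diff(samples))))

def graph_tv(edge_profiles):
    """TV over the metric graph as the sum of edge-wise variations
    (an assumed reading of Definition 11)."""
    return sum(edge_tv(p) for p in edge_profiles)
```

This is the quantity the estimates below control edge by edge before summing over the tree.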
Let us start with estimates of the TV-norm of non-negative solutions for a specified family of graphs; these estimates can then be generalised to arbitrary metric trees.
Lemma 1
Let \(\mathcal {G}\) be a metric honeycomb tree with one source \(e_1(0)\) and one sink \(e_m(1)\). For u being a solution to the problem (15) given by Theorem 2, the following estimate holds
where \(\kappa _\mathcal {G}\) depends on graph structure.
Proof
Let us first recall that the solutions u given by Theorem 2 are non-negative. We start by showing that \(u\in \mathcal {W}^+\) restricted to an arbitrary edge \(e_j\) satisfies
If u is from \(\mathcal {W}^+\)-class, then for \(u_j\) there exists a finite sequence \(0=\xi _0(t)<\xi _1(t)< ... < \xi _{K(t)}(t)=1\) for a.e. \(t\in [0,T)\) such that
and on each interval \([\xi _k(t),\xi _{k+1}(t)]\) u is non-decreasing. We extend it by the left- and right-hand side limits. Furthermore, K(t) is piece-wise constant, so there exists a finite sequence \(0<t_0<t_1< \cdots< t_M<T\) such that K(t) is constant on each interval \((t_i,t_{i+1})\). Note also that K(t) is decreasing, as \(u_1(0,t)=0\) by the Kirchhoff condition. In accordance with the previous notation we distinguish the left and right limits at points \(\xi _k\) by \(u(\xi ^{\mp }_k(t),t)\), respectively. We have
Since, for a.e. \(t\in (t_i,t_{i+1})\), K(t) is constant, then for \(0<k<K(t)\)
But by the Rankine–Hugoniot and Lax conditions for \(\xi _{k+1}\), see (17), we conclude
In the same manner we prove that
Taking into account the boundary terms coming from \(k=0,K(t)\) we find that
The boundary terms \(-\partial _t u_j^2(z,t)=2u_j^2(z,t)\partial _x u_j(z,t)\), \(z=0,1\) are non-negative since we are working in \(\mathcal {W}^+\)-class, which leads to (64).
The class \(\mathcal {W}^+\) is dense in \(TV(\mathcal {G})\), which allows us to approximate any TV-flow by an element of the \(\mathcal {W}^+\)-class. In order to pass to the limit we need a global estimate, namely one independent of K(t). Integrating (64) over \((t_i,t_{i+1})\) we get
Summing up over all intervals \((t_i,t_{i+1})\) we get
In order to make the TV-norm of \(u_j^2\) independent of T we transform (66) into
Now we are ready to construct an approximating sequence tending to the desired solution for general data \(\mathring{u}_j\in TV(e_j)\). For given \(\epsilon >0\) and \(\bar{u}_j=u_j(0,\cdot ) \in TV(0,T)\) we claim there exist
such that
and
So the considerations for \(u\in \mathcal {W}^+\) deliver the existence of solutions \(u_{\epsilon }\) on the time interval [0, T], and (67) implies the following estimate, independent of \(\epsilon \).
The above estimates imply the uniform bound for
This leads, up to a subsequence \(\epsilon \rightarrow 0\), to
In particular we have point-wise convergence in the domain and at the boundary \(\{x=1\}\), so we conclude that u is the solution to Burgers’ equation on \(e_j\).
Finally, to obtain the TV-estimate (63) for the whole graph we proceed recursively from the edge \(e_j\) to the source \(e_1(0)\), specifying the right-hand side of (64). \(\mathcal {G}\) is a metric honeycomb tree, so there is only a restricted number of vertex types, see Definition 2 and the remarks below it.
Consider a vertex of the first kind \(v_i\), and denote edges adjacent to it in the following way \(D_i=(\left\{ e_j\right\} ,\left\{ e_k,e_l\right\} )\). By the transmission conditions (15d)
Then by differentiation in time we find that
So the identity (64) gives a term on the left hand side which dominates the terms \(|\partial _t u_{z}^2|(0,t)\), \(z=k,l\), namely
Note that \(v_i\) has two out-going edges and \(\theta _z\le 1\) for \(z=k,l\), therefore, the equation for \(e_{j}\) is taken twice.
In the second case, when \(v_i\) is of the second kind, using the notation \(D_i=(\left\{ e_j,e_k\right\} ,\left\{ e_l\right\} )\), we have
Then we easily deduce that
and analogously to (68) we obtain
Finally, taking a vertex from the path graph such that \(D_i=(\left\{ e_j\right\} ,\left\{ e_k\right\} )\), we have conservation of mass in the vertex and consequently
Iteratively repeating the above steps, and taking all edges with the required multiplicity \(\kappa _j\), which depends on the degree of a vertex and its position in the graph, we obtain
since the graph \(\mathcal {G}\) has exactly one source \(e_1(0)\) and one sink \(e_m(1)\). Integration by parts then implies (63). \(\square \)
Remark 1
The estimate derived in Lemma 1 can be extended to an arbitrary metric tree \(\mathcal {G}\) having sources \(e_j(0)\), \(j=1,\ldots , s\), and sinks \(e_j(l_j)\), \(j=m-S+1,\ldots , m\).
Proof
The general case is slightly more involved. Assume that for vertex \(v_i\)
Then of course by the Kirchhoff condition
and for appropriate constants \(\theta _{r_j}\le 1\), \(j=1,\ldots ,q\),
Taking into account the multiplicity of incoming and outgoing edges, this leads to
The rest of estimates follows as for honeycomb tree in Lemma 1. \(\square \)
5 Stitching Solutions on the Honeycomb Tree
In this part we generalise the considerations of Sect. 4.1 to the case of solutions of arbitrary sign. Unsurprisingly, the major problem of this construction is the determination of physically justified behaviour in vertices when the velocities on adjacent edges have different signs. In the whole of Sect. 5 we restrict ourselves to honeycomb trees, since they consist of exactly two kinds of vertices which, additionally, provide the same possible cases to consider.
5.1 Derivation of Transmission Conditions
To keep the well-posedness of the solution in terms of the distributional formulation, see the reasoning in (13)–(14), we are required to control the Kirchhoff conditions (14). Using the notation introduced in Sect. 4.1, we denote by \(t^{\mp }\) the time shortly before/after the flow through the vertex at t. Denote the set of edges in which the mass enters the vertex \(v_i\) at \(t>0\) by \(\mathcal {F}_i(t):=\mathcal {F}_i^{in}(t)\cup \mathcal {F}_i^{out}(t)\), where
Furthermore, we need to specify the direction of the flow through the vertex. We say that the flow agrees with (is opposite to) the direction of a vertex \(D_i=(D_i^{in},D_i^{out})\) at \(t>0\), for some \(v_i\in V\), if
If equality holds in the above relation, then there is no flow through the vertex \(v_i\) at t. Let us define \(\mathcal {D}_i=\left( \mathcal {D}_i^{in},\mathcal {D}_i^{out}\right) \), the flow direction of a vertex \(v_i\), which is the counterpart of the vertex direction in the case of a metric graph. Namely,
We say that the flow direction is positive (negative) in the first (second) case in (74) and write respectively \(\text {sgn}(\mathcal {D}_i)=1\) (\(\text {sgn}(\mathcal {D}_i)=-1\)).
In the following considerations we redefine the maximal and minimal transmission solver \(TS_i^z(t)\), \(z=m,M\), generalising conditions presented in Sect. 4.1. We assume
-
(i)
Kirchhoff conditions (14),
-
(ii)
continuity conditions in vertices different from sources and sinks, generalising (LC), namely for a.e. \(t\in (0,T)\)
-
(iii)
energy minimization/maximization condition, with the function \(\mathcal {E}_i:\Pi _{e_j\in D_i^{in}\cup D_i^{out}}U_j\rightarrow \mathbb {R}\) being a generalization of (22), given by the formula
$$\begin{aligned} \begin{array}{lcl} \mathcal {E}_i(u(v_i,t))&{}=&{} \displaystyle \text {sgn}(\mathcal {D}_i) \left[ \sum _{j:\,e_j\in \mathcal {D}_i^{in}} \mathcal {E}_{ij}^+(u(v_i,t))+\sum _{j:\,e_j\in \mathcal {D}_i^{out}} \mathcal {E}_{ij}^-(u(v_i,t))\right] , \\ \mathcal {E}_{ij}^{\pm }(u(v_i,t))&{}=&{} \displaystyle \frac{u_j^3(v_i,t^{\mp })-u_j^3(v_i,t^{\pm })}{3}\\ &{}&{}\quad -\displaystyle \frac{\left( u_j(v_i,t^{\mp })-u_j(v_i,t^{\pm })\right) ^3}{12}\theta \left( \text {sgn}(\mathcal {D}_i)\left( u_j(v_i,t^{\mp })- u_j(v_i,t^{\pm })\right) \right) , \end{array} \end{aligned}$$with a domain
$$\begin{aligned} U_j= & {} \mathbb {R}\qquad \text {for}\,\,e_j\in \mathcal {D}_i^{in},\\ U_j= & {} \left( \min (-u_j(v_i,t^-), \text {sgn}(\mathcal {D}_i)\infty ) ,\max (-u_j(v_i,t^-), \text {sgn}(\mathcal {D}_i)\infty )\right) \cup \left\{ -u_j(v_i,t^-)\right\} ,\\&\text {for}\,\,e_j\in \mathcal {D}_i^{out} \cap \mathcal {F}_i^{out}, \\ U_j= & {} \left( \min (0, \text {sgn}(\mathcal {D}_i)\infty ) ,\max (0, \text {sgn}(\mathcal {D}_i)\infty )\right) \cup \left\{ 0\right\} \quad \text {for}\,\,e_j\in \mathcal {D}_i^{out} \setminus \mathcal {F}_i^{out}. \end{aligned}$$ -
(iv)
decreasing flow with respect to edge enumeration in the case of \(\mathcal {E}_i\) maximization, (DF).
It is easy to notice that the form of condition \((\textit{FC})\) ensures that there is no flow within the sets \(\mathcal {D}_i^{in/out}\). In the case of condition (iii) the generalisation is based on the change of the domain of \(\mathcal {E}_i\). The restriction of the value of solutions for \(e_j\in \mathcal {D}_i^{out}\) prevents the situation in which there exists an edge \(e_j(0)=v_i\) (resp. \(e_j(l_j)=v_i\)) where the flow direction at \(e_j(0)\) (resp. \(e_j(l_j)\)) is opposite to the flow at the vertex \(v_i\) and there is no shock at \(e_j(0)\) (resp. \(e_j(l_j)\)).
Based on \(TS^z_i\), \(z=m,M\), which satisfy conditions (i)–(iii) (and (iv) in the maximization case), we can repeat the definition of \((TS_i^z)^{\star }\), \(z=m,M\), given in Definition 9. Finally, we are ready to present the transmission conditions derived by \((TS_i^z)^{\star }\) for the honeycomb tree.
Case I. Sources and sinks
In analogy to the non-negative case we assume that for \(v_i\) being a source (a sink) we have \(u_j(v_i,t)=0\), \(e_j\in D^{out}_i\) (\(e_j\in D^{in}_i\)).
Case II. Vertices from a path graph
Let \(v_i\) be a vertex related to the path graph, i.e. \(D_i=(\left\{ e_j\right\} , \left\{ e_k\right\} )\). By (FC) we have behaviour analogous to the mono-dimensional case.
-
1.
\(u_j(1,t^-)\ge 0\) and \(u_k(0,t^-)\le 0\). Let us specify the flow through the vertex.
- (a):
-
\(u_j^2(1,t^-)\ge u_k^2(0,t^-)\) We have a flow that agrees with the direction of the vertex and
$$\begin{aligned} u_j(1,t^+)=u_k(0,t^+)=u_j(1,t^-). \end{aligned}$$(75) - (b):
-
\(u_j^2(1,t^-)< u_k^2(0,t^-)\) We have a flow opposite to the direction of the vertex and
$$\begin{aligned} u_j(1,t^+)=u_k(0,t^+)=u_k(0,t^-). \end{aligned}$$(76)
-
2.
\(u_j(1,t^-)< 0\) and \(u_k(0,t^-)> 0\) There is no flow towards the vertex, hence
$$\begin{aligned} u_j(1,t^+)=u_k(0,t^+)=0. \end{aligned}$$ -
3.
\(u_j(1,t^-)\cdot u_k(0,t^-)\ge 0\) We have either (75) for \(u_j(1,t^-)\ge 0\), or (76) for \(u_j(1,t^-)\le 0\).
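Cases 1–3 above can be transcribed directly. The helper below returns the common post-flow value \(u_j(1,t^+)=u_k(0,t^+)\) at a path-graph vertex; it is a sketch with a function name of our choosing.

```python
def path_vertex(uj, uk):
    """Common post-flow value at a degree-2 (path-graph) vertex,
    transcribing cases 1-3 of Case II: uj = u_j(1,t^-), uk = u_k(0,t^-)."""
    if uj >= 0.0 and uk <= 0.0:
        # case 1: the wave with the larger flux wins the vertex
        return uj if uj * uj >= uk * uk else uk
    if uj < 0.0 and uk > 0.0:
        return 0.0                        # case 2: both waves leave the vertex
    return uj if uj >= 0.0 else uk        # case 3: same sign, upwind value
```

The same-sign branch reproduces (75) for non-negative data and (76) for non-positive data.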
Case III. Vertices of the hexagonal grid of the first and second kind
For the illustration see Fig. 3, with the notation changed from \(e_1,e_2,e_3\) to \(e_j,e_k,e_l\), respectively. It is worth mentioning that the considerations for vertices of the first and second kind are analogous, hence we concentrate only on a vertex of the second kind, see Fig. 3(ii).
1. \(u_j(1,t^-)\ge 0\), \(u_k(0,t^-) \ge 0\), \(u_l(0,t^-) < 0\). Firstly we need to specify the direction of the flow through the vertex.

(a) \(u_j^2(1,t^-) \ge u_l^2(0,t^-)\). In this case the flow agrees with the direction of the vertex (goes through the vertex to the right) and therefore the values of the solution after the flow should not depend on the values on edges from \(D_i^{out}\). Intuitively, the character of the vertex should therefore correspond to the case of constant-sign flows. Obviously \(u_j(1,t^+)=u_j(1,t^-)\). Consider now three cases related to the choice of the maximal and minimal transmission solver.

In the first (maximal) one, \(k<l\), the total energy should go to the edge \(e_k\), and zero to \(e_l\). Since \(u_l(0,t^-)<0\), the flow reaches the vertex and the influence of this flow needs to be balanced to maintain the proper direction of the flow. Therefore, we divide the flow from \(e_j\) into two parts in such a way that
$$\begin{aligned} u_l(0,t^+)=-u_l(0,t^-) \text{ and } u_k(0,t^+)=\sqrt{u_j^2(1,t^-) - u_l^2(0,t^-)}. \end{aligned}$$(77)

The second (maximal) case, \(k>l\), is when the whole energy goes to \(e_l\); then
$$\begin{aligned} u_k(0,t^+)=0 \text{ and } u_l(0,t^+)=u_j(1,t^-). \end{aligned}$$(78)

The last case, related to energy minimization, is more involved. We should have
$$\begin{aligned} u_k(0,t^+)=u_l(0,t^+)=\frac{\sqrt{2}}{2}u_j(1,t^-), \end{aligned}$$(79)
but it is valid only for \(\frac{\sqrt{2}}{2} u_j(1,t^-)\ge -u_l(0,t^-)\). Otherwise it does not agree with the domain \(U_l\). Instead, the minimum is attained at the boundary of \(U_l\), hence we arrive at (77).
(b) \(u_j^2(1,t^-) < u_l^2(0,t^-)\). Now the flow is opposite to the direction of the vertex (goes through the vertex to the left) and therefore the values of the solution after the flow should not depend on the values on edges from \(D_i^{in}\). We put
$$\begin{aligned} u_k(0,t^+)=0, \quad u_l(0,t^+)=u_k(0,t^-) \text{ and } u_j(1,t^+)=u_l(0,t^-), \end{aligned}$$(80)
where the last quantity is negative. It is the only possibility.
2. \(u_j(1,t^-)\ge 0\), \(u_k(0,t^-) < 0\), \(u_l(0,t^-) \ge 0\). This case is analogous to case 1. due to the symmetry of the honeycomb tree.

3. \(u_j(1,t^-)\le 0\), \(u_k(0,t^-) \ge 0\), \(u_l(0,t^-) \ge 0\). This case is trivial since the mass flows in the direction opposite to the vertex on all edges and the vertex becomes a kind of source. The only possible boundary constraint is
$$\begin{aligned} u_j(1,t^+)=u_k(0,t^+)=u_l(0,t^+)=0. \end{aligned}$$(81)

4. \(u_j(1,t^-) \ge 0\), \(u_k(0,t^-) \le 0\), \(u_l(0,t^-) \le 0\). Now the situation is more interesting since the vertex resembles a sink and again there is a need to specify the direction of the flow.

(a) \(u_j^2(1,t^-) \le u_k^2(0,t^-) + u_l^2(0,t^-)\). The flow is opposite to the direction of the vertex (goes through the vertex to the left) and the shock wave appears on the edge \(e_j\). Obviously \(u_z(0,t^+)=u_z(0,t^-)\) for \(z=k,l\) and
$$\begin{aligned} u_j(1,t^+) = -\sqrt{u_k^2(0,t^-) + u_l^2(0,t^-)}. \end{aligned}$$(82)

(b) \(u_j^2(1,t^-) > u_k^2(0,t^-) + u_l^2(0,t^-)\). The flow agrees with the direction of the vertex (goes through the vertex to the right), the shock wave appears on the edge \(e_j\), and we need to choose the condition for \(e_k(0)\) and \(e_l(0)\) at \(t^+\). Again, by the energy maximization method we have two options:

for \(k<j\) we repeat condition (77);

for \(k>j\)
$$\begin{aligned} u_k(0,t^+)=-u_k(0,t^-) \text{ and } u_l(0,t^+)=\sqrt{u_j^2(1,t^-) - u_k^2(0,t^-)}. \end{aligned}$$(83)

In the case of minimization:

for \(\frac{\sqrt{2}}{2}u_j(1,t^-)>\max (-u_k(0,t^-),-u_l(0,t^-))\) we have (79);

for \(\frac{\sqrt{2}}{2}u_j(1,t^-)>-u_k(0,t^-)\) and \(\frac{\sqrt{2}}{2}u_j(1,t^-)<-u_l(0,t^-)=\frac{\sqrt{2}}{2}u_j(1,t^-)+\alpha \), \(\alpha ^2>\sqrt{2}u_j\), we arrive at (77);

finally, for \(\frac{\sqrt{2}}{2}u_j(1,t^-)>-u_l(0,t^-)\) and \(\frac{\sqrt{2}}{2}u_j(1,t^-)<-u_k(0,t^-)=\frac{\sqrt{2}}{2}u_j(1,t^-)+\alpha \), \(\alpha ^2>\sqrt{2}u_j\), we obtain (83).
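The energy bookkeeping behind the Case III formulas (77)–(79) and (82) can be checked numerically. The sketch below is our own (function names, the `mode` switch, and the admissibility test via \(\frac{\sqrt{2}}{2}u_j \ge -u_l\) are assumptions based on the discussion above, not the paper's notation); in every branch the outgoing energies sum to \(u_j^2(1,t^-)\).

```python
import math

# Our own numerical sketch of the Case III energy splitting at a vertex of
# the second kind. In case 1(a): uj = u_j(1,t^-) >= 0 enters, ul = u_l(0,t^-) < 0
# opposes, and uj**2 >= ul**2. The `mode` switch (our naming) selects between
# the two maximal solvers and the energy-minimising one.

def second_kind_split(uj: float, ul: float, mode: str):
    """Return (u_k(0,t^+), u_l(0,t^+)) following (77)-(79)."""
    assert uj >= 0 and ul < 0 and uj**2 >= ul**2
    if mode == "max_k":               # k < l: energy goes to e_k, formula (77)
        return math.sqrt(uj**2 - ul**2), -ul
    if mode == "max_l":               # k > l: energy goes to e_l, formula (78)
        return 0.0, uj
    # Energy minimisation: symmetric split (79) when admissible, otherwise
    # the minimum sits on the boundary of U_l and we fall back to (77).
    if math.sqrt(0.5) * uj >= -ul:
        half = math.sqrt(0.5) * uj
        return half, half
    return math.sqrt(uj**2 - ul**2), -ul

def sink_trace(uk: float, ul: float) -> float:
    """Case III.4(a), formula (82): the trace forced on e_j by the sink."""
    return -math.sqrt(uk**2 + ul**2)
```

In every branch of `second_kind_split` the outgoing energies sum to `uj**2`, which is exactly the Kirchhoff-type balance behind the construction.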
5.2 Different Sign Solutions
In this part we construct an approximation of a solution which consists piece-wise of elements from the classes \(\mathcal {W}^+\) and \(\mathcal {W}^-\). We say that \(f\in \mathcal {W}^-\) if \(-f \in \mathcal {W}^+\), for \(\mathcal {W}^+\) defined in (60). Let us now explain how to stitch the two mentioned types of solutions.
Let \((U_k)_{k\in K}\) be a partition of the set d(E) of metric edges of \(\mathcal {G}=(G,d)\), namely a family of closed and connected intervals such that for any \(U_k\) there exists exactly one metric edge \(e_{k_j}\) such that \(U_k\subset e_{k_j}\),
Define now a class of solutions \(\mathcal {W}\) such that for any fixed \(\mathring{u} \in TV(\mathcal {G})\)
Proposition 4
Let \(\mathcal {G}\) be a metric honeycomb tree. Then the class \(\mathcal {W}\) is preserved by the flow generated by the Burgers’ equation (15) and the total variation norm is controlled in time, namely
Proof
We prove the proposition by stitching the solutions from \(\mathcal {W}^+\) and \(\mathcal {W}^-\) in several steps.
Step 1. In order to construct the general solution we introduce auxiliary solutions related to each of \(U_k\). Let \(u^{(k)}\) be a solution to the Burgers’ equation on \(\mathcal {G}\) initiated by the initial datum
Since \(\mathring{u}\in \mathcal {W}^{\pm }\), it follows that \(\mathring{u}^{(k)}\in \mathcal {W}^{\pm }\) and consequently, by Proposition 3, \(u^{(k)}\) is a constant sign solution over the graph.
Step 2. Now we define the interaction between two neighbouring solutions in the interior of \(e_j\). Introduce a function \(u^{(kl)}\) for two chosen intervals \(U_k\) and \(U_l\) such that \(D_k \cap D_l \ni \xi (0)\) for some \(\xi (0)\in (0,l_j)\). We need to determine the evolution of the contact point \(\xi (t)\) starting from \(\xi (0)\). Without loss of generality assume that \(\min U_k<\min U_l\).
(i) If \(u^{(k)}<0<u^{(l)}\), then the solution in the neighbourhood of \(\xi (0)\) is constructed as a rarefaction wave, namely
$$\begin{aligned} u^{(kl)}(x,t)=\left\{ \begin{array}{ll} u^{(k)}(x,t)&{}\text {for}\,\,\frac{x}{t}<u_k(\xi (t),t),\\ [.1cm] \frac{x}{t}&{}\text {for}\,\,u_k(\xi (t),t)<\frac{x}{t}<u_l(\xi (t),t),\\ [.1cm] u^{(l)}(x,t)&{}\text {for}\,\,u_l(\xi (t),t)<\frac{x}{t}. \end{array}\right. \end{aligned}$$

(ii) If \(u^{(k)}>0>u^{(l)}\), then \(u^{(k)}\) and \(u^{(l)}\) are stitched together by the Rankine–Hugoniot condition
$$\begin{aligned} \frac{d}{dt} \xi (t)=\frac{u^{(k)}(\xi (t),t)+ u^{(l)}(\xi (t),t)}{2}. \end{aligned}$$
In the neighbourhood of \((\xi (0),0)\) we have
$$\begin{aligned} u^{(kl)}(x,t)=u^{(k)}(x,t) \text{ for } x < \xi (t) \text{ and } u^{(kl)}(x,t)=u^{(l)}(x,t) \text{ for } x> \xi (t). \end{aligned}$$
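As a minimal illustration of Step 2(ii), assuming piecewise-constant states near the contact point, the Rankine–Hugoniot condition can be integrated by an explicit Euler step (our own discretisation, not part of the construction in the paper):

```python
# Our own explicit-Euler sketch of Step 2(ii): the contact point xi(t)
# between the constant-sign pieces u^(k) and u^(l) moves with the
# Rankine-Hugoniot speed (u^(k) + u^(l)) / 2; here both states are constant.

def evolve_contact(xi0: float, uk: float, ul: float,
                   dt: float, steps: int) -> float:
    """Track the shock position between constant states uk > 0 > ul."""
    xi = xi0
    for _ in range(steps):
        xi += dt * (uk + ul) / 2.0    # Rankine-Hugoniot condition
    return xi
```

For `uk = 2` and `ul = -1` the shock drifts to the right with speed 1/2, so after one time unit `evolve_contact(0.0, 2.0, -1.0, 0.1, 10)` returns 0.5 (up to rounding).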
Step 3. Finally, we concentrate on the case of changing sign at the vertex using the transmission solver derived in Sect. 5.1. For given conditions at a vertex at \(t^-\) there exists a unique representation after the flow through the vertex, at \(t^+\). Since the flow through the vertex \(\mathcal {D}_i\) is fixed, we solve the equations on the outgoing edges \(\mathcal {D}_i^{out}\), knowing that at least locally near the vertex the solutions are of constant sign.
Let us be more precise about the choice of the time interval on which the solution is defined. We consider case III.1(a) from Sect. 5.1. We build the solution on the edges \(e_j\) and \(e_l\) for some time \(T_1>0\), and the transmission condition gives the boundary data for the equation on \(e_k\), at least in a vicinity of the vertex. Then we solve Burgers’ equation in the interior of the edge \(e_k\), obtaining the solution locally in time. In general it may happen that the solution \(u_k\) stays positive at the vertex only for a time \(T_2>0\), which can be smaller than \(T_1\). Hence, the procedure of deriving the solution in the neighbourhood of the vertex is well-defined for a time being the minimum of \(T_1\) and \(T_2\). Nevertheless, since the speed of wave propagation and the number of vertices are finite, such a time always exists. Note that the construction of the solution is based on approximation in the \(\mathcal {W}\)-class. It follows that the transmission conditions need to be modified by a suitable approximation, with an error which is controlled. To preserve the \(\mathcal {W}\)-class the boundary term must be in \(\mathcal {W}_{opp}\), and this modification is explained in the next step.
Step 4. Steps 1–3 allow for a unique definition of the solution for any time, since the structure of the \(\mathcal {W}\)-class guarantees that the solutions are locally of constant sign on the edges and are uniquely determined at the vertices. At the end we need to estimate the TV-norm. Repeating the considerations from the proof of Lemma 1, we note that for each edge we find the following bound
Of course, the above inequality does not deliver the needed information, since in general we control neither the boundary terms nor the sign of the solution at the ends of the edge.
However, based on the construction proposed for \(\mathcal {W}^+\)-functions, we obtain local versions of the above inequality. Introduce a smooth function \(\pi :e_j \rightarrow [0,1]\) such that \(supp\, \pi \subset \subset e_j\) and \(\pi \equiv 1\) on an internal interval in \(e_j\). Then
Then we find
This gives information about the interior of the edges.
However, the key element is at the vertices, so for each vertex we again use the localization argument. In order to explain the construction of the local estimate we consider a concrete case from Sect. 5.1, namely case III.1(a). Recall that the solution is given by \(u_l^2(0,t)=u_j^2(1,t)-u_k^2(0,t)\). Before we start the estimation, let us look closer at this definition. We aim at constructing the flow in the \(\mathcal {W}\)-class, so the boundary condition is required to be in \(\mathcal {W}^+_{opp}\). However, the above formula does not ensure that this holds. But from (87) we deduce that \(\int _0^T |\partial _t u_l^2|(0,t)\, dt\) is bounded. Thus, given \(\epsilon >0\), we find a new \(u_l^{new}(0,t) \in \mathcal {W}^+_{opp}\) such that \(\int _0^T |\partial _t {u_l^{new}}^2|(0,t)\, dt \le \int _0^T |\partial _t u_l^2|(0,t)\, dt\) and \(\Vert u_l^{new}(0,\cdot )-u_l(0,\cdot )\Vert _{L^1(0,T)} \le \epsilon \). This way the \(\mathcal {W}\)-structure of solutions is preserved, and the TV-norm over \(\mathcal {G}\) is controlled too.
Take \(\pi \) defined around the vertex \(v_i\), equal to 1 on a sufficiently large neighbourhood of \(v_i\) and supported in \(e_j\cup e_k \cup e_l\). Then we find
Since \(u_l^2(0,t)=u_j^2(1,t)-u_k^2(0,t)\), we conclude that
So summing all together we get
Although the above information is sufficient, we can obtain an even stronger condition which controls the transmission relation. Because of the form of the inequalities for \(e_j\) and \(e_k\), taking them twice, we improve (88), namely
In the general case the signs at the vertex may differ, and we obtain the better estimate, with the boundary term, for the edges on which the flow enters the vertex. Taking such an inequality twice, we obtain (89) in the general case. Note that there is only one case in which there is no incoming flow, but then all boundary terms are simply zero, so the time derivatives vanish too.
Finally, repeating the steps from Lemma 1 in the general case, we get (85). \(\square \)
5.3 Existence of General Solutions
In the last part of this section we show the following existence result, which goes in line with Definition 3. Note that \(\mathring{u}\) may change sign.
Theorem 3
Let \(\mathring{u} \in TV(\mathcal {G})\). There exists a weak solution to the Burgers’ equation on graph \(\mathcal {G}\) such that
Proof
For given \(\mathring{u} \in TV(\mathcal {G})\), let us proceed in the following steps.
Step 1. Firstly, we approximate the initial condition. For given \(\epsilon > 0\), one finds \(\mathring{u}_\epsilon =(\mathring{u}_\epsilon )_++(\mathring{u}_\epsilon )_-\) such that \((\mathring{u}_\epsilon )_+ \in \mathcal {W}^+\), \((\mathring{u}_\epsilon )_- \in \mathcal {W}^-\) and
We solve the equation starting from \(\mathring{u}_\epsilon \) in the class \(\mathcal {W}\) according to the steps presented in Proposition 4. Then the uniform bound
Step 2. Using the Aubin-Lions lemma we find a subsequence such that
hence \(u^*\) is a weak solution. Weak limits guarantee that
Step 3. The boundary conditions follow from the information carried by (85), although in the limit this condition can be recovered only as a measure. The compactness ensures that the approximating sequence converges strongly at the boundary, pointwise in time, since \(u_\epsilon \rightarrow u\) in \(L^p(0,T)\) at the vertices.
Step 4. As the last step let us comment on the uniqueness. The above properties of solutions to the Burgers’ equation fulfil the conditions for the classical mono-dimensional case. We obtain an entropy solution as a bounded distributional solution with the bound (55).
We claim that the solution is unique. Unfortunately, in order to restate the proof from Evans’ textbook, see [10], the method of characteristics on metric graphs for transport-type equations with smooth coefficients is needed. To the best of our knowledge there is still no such result in the literature. It will be the subject of our further investigations, hence at this moment we state the uniqueness only as a conjecture.
\(\square \)
6 Conclusions
At the end of this paper we return to our questions from Sect. 2 in order to understand how well the developed theory reflects fluid motion observed in real-life networks and what its relation to the classical approach is.
1. What is the appropriate description of the flow in vertices?
The main argument that supports the energy perspective at vertices is the emergence of a natural phenomenon, the backflow, known from networks of fluids, for instance from the cardiovascular system. Using the transmission conditions defined for arbitrary initial data in Sect. 5.1, one can mimic such behaviour on networks.
Example 5
The backflow presented in this example is related to the collision of opposite speed waves in a vertex. Here we illustrate the feature of conditions for Case III from Sect. 5.1.
Let \(\mathcal {G}=(G,d)\) be the following metric tree: \(V=\left\{ v_1\right\} \), \(E=\left\{ e_1,e_2,e_3\right\} \),
Note that \(\mathcal {G}\) can be interpreted as the interval \([-10,10]\) split at 0 into two parts.
As the initial datum we consider
Then at time \(t=1\) the waves collide at \(v_1\), and using the energy minimization/maximization transmission conditions we obtain the following. In the first case, since \(9>4+1\), we have
and then the solution reads for \(t>1\)
In the case of maximization of the energy to edge \(e_2\), we get
Then the solution reads (\(t>1\))
Hence, in both cases we observe a backflow that appears either on one (\(e_2\)) or on two (\(e_2\) and \(e_3\)) edges.
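Since the displayed formulas of Example 5 were not reproduced above, the following self-contained check (our own sketch; variable names and the chosen traces \(3,-2,-1\) are our reading of "since \(9>4+1\)") verifies the energy bookkeeping of the collision: the flow indeed passes through the vertex, and the symmetric split (79) is admissible and conserves the energy.

```python
import math

# Traces at the vertex v_1 at t = 1^-: u1 on the incoming edge, u2, u3 on
# the outgoing ones (values chosen to match "9 > 4 + 1" in the example).
u1, u2, u3 = 3.0, -2.0, -1.0
assert u1**2 > u2**2 + u3**2             # case III.4(b): flow passes through

# Energy-minimising split (79): both outgoing traces equal sqrt(2)/2 * u1,
# which yields a backflow on both e_2 and e_3.
half = math.sqrt(0.5) * u1
assert half > max(-u2, -u3)              # (79) is admissible here
assert abs(2 * half**2 - u1**2) < 1e-12  # total energy is conserved
```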
Now let us move to the second question.
2. What is the relation between the pure mono-dimensional case and the network counterpart?
The answer to this question is based on the global properties of a network. Namely, depending on the type of transmission conditions (maximizing or minimizing the energy) and their reciprocal location, we may obtain either qualitatively similar dynamics or an essentially different one. In order to illustrate it, we return to the interpretation of Burgers’ equation in the spirit of wave interference presented in the Motivation, Sect. 1.
Let us start again with a mono-dimensional equation, namely (1) with \(D=\mathbb {R}\). Take the following initial configuration on the line.
For simplicity consider distributional non-physical solutions being a shift with a speed determined by the Rankine–Hugoniot condition. It means that the solution, at least for a small time, is given by
In the case of different velocities of the waves, the stronger one overtakes the smaller one, which is a consequence of the weak formulation and the regime of the Rankine–Hugoniot conditions. To overcome this weakness we put the system onto a metric graph. We rewrite the system as
where \(\mathcal {G}\) is the metric graph. This way we shall be able to obtain a rich structure of solutions even for initial data like (91). Let us look at the following example.
Example 6
Let \(\mathcal {G}=(G,d)\) be the following metric tree \(V=\left\{ v_1,v_2\right\} \), \(E=\left\{ e_1,\ldots ,e_4\right\} \),
Note that \(\mathcal {G}\) can be interpreted as the interval \([-10,10]\) split at \((-1,0)\) into two parts and joined again at \((1,0)\).
We consider Burgers’ equation on \(\mathcal {G}\) with the following initial condition
Note that condition (92) for network is an analogue of condition (91) for a straight line and at time \(t=0\) we can illustrate it in the following way
To avoid problems with definitions and argumentation, we present only the very schematic behaviour of the proposed system. We assume that the waves are non-physical, of the kind \(\chi _{[\frac{1}{2} t, 1+\frac{1}{2} t]}(x)\). The character of the dynamics is determined by the rules at the vertices, describing the partition of the solutions onto different paths. Consider three situations:
Case I. In vertex \(v_1\) the wave from edge \(e_1\) goes onto \(e_2\), and in vertex \(v_2\) the wave from \(e_4\) goes onto \(e_3\). So at a suitably chosen \(t=t_1\) we have
Then waves pass through without direct interaction, so the energy is not lost. For large time we obtain the solution of the form
so there is no interaction of waves. This is not possible in the description by the classical Burgers’ equation.
Case II. In vertex \(v_1\) the wave divides into two equal parts (in the sense of energy), and the same happens for vertex \(v_2\). For \(t=t_1\) we have
Now the waves meet on both \(e_2\) and \(e_3\), and since they are anti-symmetric, they annihilate. Thus for large time
This case covers the classical result for Burgers’ equation, as without a graph.
Case III. In the vertex \(v_1\) the wave divides into two equal parts, but in the vertex \(v_2\) the wave from \(e_4\) goes onto the edge \(e_3\). It means that the upper part of the wave goes onto \(e_4\), but on the lower edge \(e_3\) we have a shock of two waves. For \(t=t_1\)
Since the wave coming from the right side is larger, the smaller one is overtaken and the wave flows onto the edge \(e_1\). Hence, up to a small modification of time related to the Rankine–Hugoniot conditions, for large time we have
This case is the most interesting since we obtain a practical interference. One part is damped while the second one preserves its magnitude.
We can conclude that the developed theory can be interpreted as an extension of the mono-dimensional case to networks. The enlargement of the domain of consideration allows for phenomena that cannot be observed in a single dimension. It is definitely worth continuing the research, firstly to formally show the uniqueness of the solution starting from an arbitrary TV initial datum. Going further, it is interesting to understand the relation of Burgers’ equation considered on planar networks to classical two-dimensional problems.
References
Banasiak, J., Falkiewicz, A.: Some transport and diffusion processes on networks and their graph realizability. Appl. Math. Lett. 45, 25–30 (2015)
Bressloff, P., Dwyer, V., Kearney, M.: Burgers’ equation on a branching structure. Phys. Lett. A 229, 37–43 (1997)
Biggs, N.L.: Finite Groups of Automorphisms. London Mathematical Society Lecture Note Series, vol. 6. Cambridge University Press, Cambridge (1971)
Bressan, A.: Hyperbolic Systems of Conservation Laws in One Space Dimension, Oxford Lecture Series in Mathematics and Its Applications 20. Oxford University Press, Oxford (2005)
Bressan, A., Çanić, S., Garavello, M., Herty, M., Piccoli, B.: Flows on networks: Recent results and perspectives. EMS Surv. Math. Sci. 1, 47–111 (2014)
Bressan, A., Han, K.: Optima and equilibria for a model of traffic flow. SIAM J. Math. Anal. 43(5), 2384–2417 (2011)
Coclite, G.M., Garavello, M.: Vanishing viscosity for traffic on networks. SIAM J. Math. Anal. 42(4), 1761–1783 (2010)
Coclite, G.M., Garavello, M., Piccoli, B.: Traffic flow on a road network. SIAM J. Math. Anal. 36, 1862–1886 (2005)
D’Apice, C., Manzo, R., Piccoli, B.: Packet flow on telecommunication networks. SIAM J. Math. Anal. 38(3), 717–740 (2006)
Evans, L.C.: Partial Differential Equations, 2nd edn. Graduate Studies in Mathematics, vol. 19. AMS, Providence (2010)
Feireisl, E., Novotný, A.: Stability of planar rarefaction waves under general viscosity perturbation of the isentropic Euler system. Ann. Inst. H. Poincaré Anal. Non Linéaire 38(6), 1725–1737 (2021)
Garavello, M., Piccoli, B.: Conservation laws on complex networks. Ann. I. H. Poincaré 26, 1925–1951 (2009)
Flores-Bazán, F., Flores-Bazán, F., Vera, C.: Maximizing and minimizing quasiconvex functions: related properties, existence and optimality conditions via radial epiderivatives. J. Glob. Optim. 63, 99–123 (2015). https://doi.org/10.1007/s10898-015-0267-6
Hinz, M., Meinert, M.: On the viscous Burgers equation on metric graphs and fractals. J. Fractal Geom. 7(2), 137–182 (2020)
Holden, H., Risebro, N.H.: Models for dense multilane vehicular traffic. SIAM J. Math. Anal. 51(5), 3694–3713 (2018)
Kramar-Fijavž, M., Puchalska, A.: Semigroups for dynamical processes on metric graphs. Philos. Trans. A R Soc A 378, 20190619 (2020)
Laurent-Brouty, N., Keimer, A., Goatin, P., Bayen, A.: A macroscopic traffic flow model with finite buffers on networks: well-posedness by means of Hamilton-Jacobi equations. Commun. Math. Sci. 18(6) (2020)
Le Floch, P.: Explicit formula for scalar non-linear conservation laws with boundary condition. Math. Methods Appl. Sci. 10(3), 265–287 (1988)
Le Floch, P.: Hyperbolic Systems of Conservation Laws: The Theory of Classical and Nonclassical Shock Waves. Lectures in Mathematics. Birkhäuser, Basel (2002)
Lighthill, M.J., Whitham, G.B.: On kinematic waves II. A theory of traffic flow on long crowded roads. Proc. R. Soc. Lond. Ser. A 229, 317–345 (1955)
Marigo, A., Piccoli, B.: A fluid dynamic model for T -junctions. SIAM J. Math. Anal. 39(6), 2016–2032 (2008)
Mugnolo, D.: Semigroup Methods for Evolution Equations on Networks. Understanding Complex Systems. Springer, Cham (2014)
Musch, M., Skre Fjordholm, U., Risebro, N.H.: Well-posedness theory for nonlinear scalar conservation laws on networks. Netw. Heterog. Media 17(1), 101–128 (2022)
Novotný, A., Pokorný, M.: Continuity equation and vacuum regions in compressible flows. J. Evol. Equ. 21(3), 2891–2922 (2021)
Shukla, A., Mehra, M., Leugering, G.: A fast adaptive spectral graph wavelet method for the viscous Burgers’ equation on a star-shaped connected graph. Math Meth Appl Sci. (2019). https://doi.org/10.1002/mma.5907
Acknowledgements
The authors have been partly supported by National Science Centre Grant 2018/29/B/ST1/00339 (Opus). Additionally, the research of AP was partially supported by National Science Centre Grant 2017/25/N/ST1/00787 (Preludium).
Communicated by M. Pokorny
This article is part of the topical collection “In memory of Antonin Novotny” edited by Eduard Feireisl, Paolo Galdi, and Milan Pokorny.
Mucha, P.B., Puchalska, A. Burgers’ Equation Revisited: Extension of Mono-Dimensional Case on a Network. J. Math. Fluid Mech. 24, 112 (2022). https://doi.org/10.1007/s00021-022-00737-9
Keywords
- Burgers’ equation
- Networks
- Hexagonal grid
- Transmission conditions in vertices
- Conservation laws
- Weak solutions
- PDEs on metric graphs