## 1 Introduction

The theory of graph limits is only understood to a somewhat satisfactory degree in the case of dense graphs, where the limit objects are graphons, and (at the opposite end of the scale) in the case of bounded-degree graphs, where the limit objects are graphings. There is, however, a lot of work being done on the intermediate cases. It appears that the most important constituents of graph limits in the general case will be Markov spaces (Markov chains on measurable spaces with a stationary distribution). Markov spaces can be described by a (Boolean) sigma-algebra, endowed with a measure on its square whose two marginals are equal.

A finite directed graph $$G=(V,E)$$ can be thought of as a sigma-algebra $$2^V$$, endowed with a measure on $$V\times V$$, the counting measure of the set of edges. This motivates our goal to extend some important theorems from finite graphs to measures on squares of sigma-algebras. In this paper we show that much of flow theory, one of the most important areas in graph theory, can be extended to such spaces.

In the finite case, a flow is a function on the edges; we often sum its values on subsets of edges (e.g. cuts), which means we are also using the corresponding measure on subsets. In the case of an infinite point set J (endowed with a sigma-algebra $${\mathcal {A}}$$), these two notions diverge: we can try to generalize the notion of a flow either as a function on ordered pairs of points, or as a measure on the subsets of $$J\times J$$ measurable with respect to the sigma-algebra $${\mathcal {A}}\times {\mathcal {A}}$$. While the first notion is perhaps more natural, flows as measures are easier to define, and we explore this possibility in this paper. Note that even the definition of the flow condition “inflow$$=$$outflow” in the infinite case needs some additional hypothesis or structure: Laczkovich [18] uses an underlying measure on the nodes, while Marks and Unger [24] restrict their attention to finite-degree graphs. Of course, one can go back and forth between measures and functions under appropriate circumstances (by integration and Radon–Nikodym differentiation, respectively), but the measure-theoretic formulation seems to involve the fewest extra conditions.

In particular, we generalize the Hoffman Circulation Theorem to measurable spaces. This connects us with the theory of Markov spaces, which can be described as measurable spaces endowed with a nonnegative normalized circulation, called the ergodic circulation. Our main concern will be the existence of circulations; in this sense, these studies can be thought of as preliminaries for the study of Markov spaces or Markov chains, which are concerned with measurable spaces with a given ergodic circulation.

Flows between two points, and more generally, between two measures can then be handled using the results about circulations (by the same reductions as in the finite case). In particular, we prove an extension of the Max-Flow-Min-Cut Theorem, and a measure-theoretic generalization of the Multicommodity Flow Theorem of Iri and of Shahrokhi and Matula.

A few caveats. First, graph limit theory has served as the motivation of these studies, but in this paper we don’t study how, for a graph sequence that is convergent in some well-defined sense, parameters and properties of flows converge to those of flows on the measurable spaces serving as their limit objects.

Second, Markov spaces only capture the edge measure of graphons and graphings; to get a proper generalization, one needs to add a further measure on the nodes, to get a double measure space. This node measure is not needed for our development of measure-theoretic flow theory, but it is clearly needed for extending other graph-theoretic notions, like expansion or matchings (see e.g. [12]).

Third, our proofs for the existence of various (generalized) flows in this paper are not constructive, because of the use of the Hahn–Banach Theorem. Of course, in these infinite structures no “algorithmic” proof can be given, but replacing our proofs by iterative constructions modeled on algorithmic proofs in the finite setting would be desirable.

## 2 Preliminaries

### 2.1 Flow theory on finite graphs.

As a motivation of the results in this paper, let us recall some basic results on finite graphs in this area.

Let $$G=(V,E)$$ be a finite directed graph and $$g:~E\rightarrow {\mathbb {R}}$$. The flow condition at node i is that the “inflow” equals the “outflow”; formally,

\begin{aligned} \sum _{j:\,ij\in E} g(ij) = \sum _{j:\,ji\in E} g(ji). \end{aligned}
(1)
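For a finite digraph the flow condition is easy to verify mechanically. A minimal sketch in Python (the graph and the edge values are invented for illustration):

```python
# Checking the flow condition (1) at every node of a small directed graph.
# Edge values chosen so that they circulate around the triangle 0 -> 1 -> 2 -> 0.
g = {(0, 1): 2.0, (1, 2): 2.0, (2, 0): 2.0, (1, 0): 0.0}

def excess(g, v):
    """Outflow minus inflow at node v; zero at every node for a circulation."""
    out = sum(val for (i, j), val in g.items() if i == v)
    inn = sum(val for (i, j), val in g.items() if j == v)
    return out - inn

assert all(excess(g, v) == 0.0 for v in (0, 1, 2))
```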

A circulation on G is a function $$f:~E\rightarrow {\mathbb {R}}$$ satisfying the flow condition at every node i. Circulations could also be defined by the condition

\begin{aligned} \sum _{i\in A, j\in A^c} g(ij) = \sum _{i\in A^c, j\in A} g(ij) \end{aligned}

for every $$A\subseteq V$$ (here $$A^c=V\setminus A$$ denotes the complement of A). A basic result about the existence of circulations satisfying prescribed bounds is the following [13].

Hoffman’s Circulation Theorem. Let $$a,b:~E\rightarrow {\mathbb {R}}$$ be two functions on the edges of a directed graph $$G=(V,E)$$. Then there is a circulation $$g:~E\rightarrow {\mathbb {R}}$$ such that $$a(ij)\le g(ij)\le b(ij)$$ for every edge ij if and only if $$a\le b$$ and

\begin{aligned} \sum _{i\in A, j\in A^c} a(ij) \le \sum _{i\in A^c, j\in A} b(ij) \end{aligned}

for every $$A\subseteq V$$.

The most important consequence of the Hoffman Circulation Theorem is the Max-Flow-Min-Cut Theorem of Ford and Fulkerson [10]. Let $$s,t\in V$$ and let $$c:~E\rightarrow {\mathbb {R}}_+$$ be an assignment of nonnegative “capacities” to the edges. An s-t cut is a set of edges from A to $$A^c$$, where $$s\in A$$ and $$t\notin A$$. The capacity of this cut is the sum $$\sum _{i\in A,\,j\in A^c} c(ij)$$.

An s-t flow is a function $$f:~E\rightarrow {\mathbb {R}}$$ satisfying the flow condition (1) at every node $$i~\not = s,t$$. The value of the flow is

\begin{aligned} \mathrm{val}(f)=\sum _{j:\,sj\in E} f(sj) - \sum _{j:\,js\in E} f(js) =\sum _{j:\,jt\in E} f(jt) - \sum _{j:\,tj\in E} f(tj). \end{aligned}

A flow is feasible, if $$0\le f\le c$$.

Max-Flow-Min-Cut Theorem. The maximum value of a feasible s-t flow is the minimum capacity of an s-t cut.
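For small networks both sides of the theorem can be computed and compared. A sketch using BFS augmenting paths (the Edmonds–Karp variant of the Ford–Fulkerson method; the example network is invented):

```python
from collections import deque

def max_flow(n, cap, s, t):
    """Edmonds-Karp on an n-node digraph. cap: dict {(i, j): capacity}.
    Returns (max flow value, capacity of the min cut found)."""
    res = {}                                  # residual capacities
    for (i, j), c in cap.items():
        res[(i, j)] = res.get((i, j), 0) + c
        res.setdefault((j, i), 0)
    value = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in range(n):
                if v not in parent and res.get((u, v), 0) > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            break
        # Collect the path, find the bottleneck, augment.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(res[e] for e in path)
        for (u, v) in path:
            res[(u, v)] -= aug
            res[(v, u)] += aug
        value += aug
    # Min cut: A = nodes reachable from s in the final residual graph.
    A = set(parent)
    cut = sum(c for (i, j), c in cap.items() if i in A and j not in A)
    return value, cut

cap = {(0, 1): 3, (0, 2): 2, (1, 2): 1, (1, 3): 2, (2, 3): 3}
val, cut = max_flow(4, cap, 0, 3)
assert val == cut == 5
```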

Instead of specifying just two nodes, we can specify a supply and a demand at each node, and require that the difference between the outflow and the inflow be the difference between the supply and the demand.

Suppose that there is a circulation g satisfying the given conditions $$a(e)\le g(e)\le b(e)$$ for every (directed) edge e (for short, a feasible circulation). Also suppose that we are given a “cost” function $$c:~E\rightarrow {\mathbb {R}}_+$$. What is the minimum of the “total cost” $$\sum _e c(e)g(e)$$ for a feasible circulation? This can be answered by solving a linear program, where the Duality Theorem applies; the condition is somewhat awkward, we’ll state it later for the general (measure) case.

Let $$G=(V,E)$$ be a (finite) directed graph. A multicommodity flow is a family of flows $$(f_{st}:~s,t\in V)$$, where $$f_{st}$$ is a (nonnegative) s-t flow. Suppose that we are given capacities $$c(i,j)\ge 0$$ for the edges and demands $$\sigma (s,t)\ge 0$$ for all pairs of nodes. Then we say that the multicommodity flow is feasible, if $$f_{st}$$ has value $$\sigma (s,t)$$, and

\begin{aligned} \sum _{s,t} f_{st}(ij)\le c(i,j) \end{aligned}

for every edge ij. (We may assume, if convenient, that the graph is a bidirected complete graph, since missing edges can be added with capacity 0.)

The question is whether a feasible multicommodity flow exists. This is not really hard to decide, since the conditions can be written as a system of linear inequalities, treating the values $$f_{st}(i,j)$$ as variables, and we can apply Linear Programming. However, working out the dual we get conditions that are not too transparent. But for undirected graphs there is a very nice form of the condition due to Iri [14] and to Shahrokhi and Matula [25].

Let $$G=(V,E)$$ be an undirected graph, where we consider each undirected edge as a pair of oppositely directed edges. Let us assume that the demand function $$\sigma (i,j)$$ and the capacity function are symmetric: $$\sigma (i,j)=\sigma (j,i)$$ and $$c(i,j)=c(j,i)$$. Consider a pseudometric D on V (a function $$D:~V\times V\rightarrow {\mathbb {R}}$$ that is nonnegative, symmetric and satisfies the triangle inequality, but $$D(x,y)$$ may be zero for $$x\not =y$$). If a feasible multicommodity flow exists, then

\begin{aligned} \sum _{s,t\in V} \sigma (s,t) D(s,t) \le \sum _{ij\in E} c(i,j) D(i,j) \end{aligned}
(2)

(Just write each s-t flow as a nonnegative linear combination of paths and cycles, and use that the sum of D-lengths of the edges along each path is at least D(s,t).) We call this inequality the volume condition. When required for every pseudometric, it is also sufficient:

Multicommodity Flow Theorem. There exists a feasible multicommodity flow satisfying the demands if and only if the volume condition (2) is satisfied for every pseudometric D on V.
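For a fixed pseudometric the volume condition is a single linear inequality, so it is easy to test. A sketch on a three-node path with the shortest-path pseudometric (the instance is invented for illustration; all sums run over ordered pairs):

```python
from itertools import product

def volume_condition(V, sigma, cap, D):
    """Check inequality (2) for one pseudometric D:
    sum_{s,t} sigma(s,t) D(s,t) <= sum_{ij} c(i,j) D(i,j)."""
    lhs = sum(sigma.get((s, t), 0) * D[s][t] for s, t in product(V, V))
    rhs = sum(c * D[i][j] for (i, j), c in cap.items())
    return lhs <= rhs + 1e-12

# Path 0 - 1 - 2, unit capacity on each directed edge, demand between 0 and 2.
V = [0, 1, 2]
cap = {(0, 1): 1, (1, 0): 1, (1, 2): 1, (2, 1): 1}
D = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]       # shortest-path pseudometric

sigma = {(0, 2): 1, (2, 0): 1}              # demand 1 fits: route 0 - 1 - 2
assert volume_condition(V, sigma, cap, D)

sigma_big = {(0, 2): 3, (2, 0): 3}          # demand 3 violates (2), so no
assert not volume_condition(V, sigma_big, cap, D)   # feasible flow exists
```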

### 2.2 Graph limits.

#### 2.2.1 Graphons.

Let $$(J,{\mathcal {A}})$$ be a standard Borel space, and let $$W:~J\times J\rightarrow [0,1]$$ be a measurable function. Let us endow $$(J,{\mathcal {A}})$$ with a node measure, a probability measure $$\lambda$$. If W is symmetric (i.e. $$W(x,y)=W(y,x)$$), then the quadruple $$(J,{\mathcal {A}},\lambda ,W)$$ is called a graphon. Dropping the assumption that W is symmetric, we get a digraphon.

The edge measure of a graphon or digraphon is the integral measure of W,

\begin{aligned} \eta (S) = \int \limits _S W\,d(\lambda \times \lambda ). \end{aligned}

The node measure and edge measure of a graphon determine the graphon, up to a set of $$(\lambda \times \lambda )$$-measure zero. Indeed, $$\eta$$ is absolutely continuous with respect to $$\lambda \times \lambda$$, and $$W=d\eta /d(\lambda \times \lambda )$$ almost everywhere.
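The edge measure of a graphon can be approximated numerically. A sketch with $$\lambda$$ the Lebesgue measure on [0, 1] and an invented symmetric kernel $$W(x,y)=(x+y)/2$$:

```python
n = 400                             # grid resolution
h = 1.0 / n

def W(x, y):                        # an invented symmetric kernel
    return (x + y) / 2

def eta_rect(a1, b1, a2, b2):
    """Midpoint Riemann sum for eta(A x B), A = [a1, b1), B = [a2, b2)."""
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        if not (a1 <= x < b1):
            continue
        for j in range(n):
            y = (j + 0.5) * h
            if a2 <= y < b2:
                total += W(x, y) * h * h
    return total

# W symmetric  =>  eta symmetric: eta(A x B) = eta(B x A).
assert abs(eta_rect(0, 0.5, 0.5, 1) - eta_rect(0.5, 1, 0, 0.5)) < 1e-9
# eta of the whole square is the integral of W, here 1/2.
assert abs(eta_rect(0, 1, 0, 1) - 0.5) < 1e-3
```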

Graphons can represent limit objects of sequences of dense graphs that are convergent in the local sense [4, 23]. For this representation, we may limit the underlying sigma-algebra to standard Borel spaces.

#### 2.2.2 Graphings.

Let $$(J,{\mathcal {A}})$$ be a standard Borel space. A Borel graph is a simple (infinite) graph on node set J, whose edge set E belongs to $${\mathcal {A}}\times {\mathcal {A}}$$. By “graph” we mean a simple undirected graph, so we assume that $$E\subseteq J\times J$$ avoids the diagonal of $$J\times J$$ and is invariant under interchanging the coordinates. A graphing is a Borel graph, with all degrees bounded by a finite constant, endowed with a probability measure $$\lambda$$ on $$(J,{\mathcal {A}})$$, satisfying the following “measure-preservation” condition for any two subsets $$A,B\in {\mathcal {A}}$$:

\begin{aligned} \int \limits _A \deg _B(x)\,d\lambda (x) =\int \limits _B \deg _A(x)\,d\lambda (x). \end{aligned}
(3)

Here $$\deg _B(x)$$ denotes the number of edges connecting $$x\in J$$ to points of B. (It can be shown that this is a bounded Borel function of x.) We call $$\lambda$$ the node measure of the graphing.

We can define Borel digraphs (directed graphs) in the natural way, by allowing E to be any set in $${\mathcal {A}}\times {\mathcal {A}}$$. To define a digraphing, we assume that both the indegrees and outdegrees are finite and bounded. In this case we have to define two functions: $$\deg ^+_B(x)$$ denotes the number of edges from x to B, and $$\deg ^-_B(x)$$ denotes the number of edges from B to x. The “measure-preservation” condition says that

\begin{aligned} \int \limits _A \deg ^+_B(x)\,d\lambda (x) =\int \limits _B \deg ^-_A(x)\,d\lambda (x) \end{aligned}
(4)

for $$A,B\in {\mathcal {A}}$$. Such a digraphing defines a measure on Borel subsets of $$J^2$$, the edge measure of the digraphing: on rectangles we define

\begin{aligned} \eta (A\times B) = \int \limits _A \deg ^+_B(x)\,d\lambda (x), \end{aligned}

which extends to Borel subsets in the standard way. This measure is concentrated on the set E of edges. In the case of graphings, the edge measure is symmetric in the sense that interchanging the two coordinates does not change it. The node measure and the edge measure determine the (di)graphing up to a set of edges of $$\eta$$-measure zero.
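A classical example of a digraphing is an irrational rotation of the circle: J = [0, 1) with the Lebesgue node measure, and a single directed edge from x to x + α (mod 1). Then $$\deg ^+_B(x)$$ is the indicator of $$x+\alpha \in B$$ and $$\deg ^-_A(y)$$ that of $$y-\alpha \in A$$, so both sides of (4) equal $$\lambda (A\cap (B-\alpha ))$$. A numerical sketch (the sets and α are chosen arbitrarily; integrals are approximated on an equispaced grid):

```python
import math

alpha = math.sqrt(2) - 1          # an irrational rotation angle
A = (0.1, 0.6)                    # A = [0.1, 0.6)
B = (0.4, 0.9)                    # B = [0.4, 0.9)

def inside(x, lo, hi):
    return lo <= x < hi

N = 100_000
pts = [(k + 0.5) / N for k in range(N)]
# integral over A of deg^+_B   vs   integral over B of deg^-_A:
lhs = sum(inside(x, *A) and inside((x + alpha) % 1, *B) for x in pts) / N
rhs = sum(inside(y, *B) and inside((y - alpha) % 1, *A) for y in pts) / N
assert abs(lhs - rhs) < 1e-3      # measure-preservation (4), up to grid error
```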

Graphings can represent limit objects of sequences of bounded-degree graphs that are convergent in the local (Benjamini–Schramm) sense [2, 8], but also in a stronger, local-global sense [11].

#### 2.2.3 Double measure spaces.

For both graphons and graphings, all essential information is contained in the quadruple $$(J,{\mathcal {A}},\lambda ,\eta )$$, where the node measure $$\lambda$$ is a probability measure on $$(J,{\mathcal {A}})$$ and the edge measure $$\eta$$ is a symmetric measure on $$(J\times J,{\mathcal {A}}\times {\mathcal {A}})$$. Such a quadruple will be called a double measure space. Graphons are those double measure spaces where $$\eta$$ is dominated by $$\lambda \times \lambda$$; the function W describing the graphon is the Radon-Nikodym derivative $$d\eta /d(\lambda \times \lambda )$$. Graphings, on the other hand, are those double measure spaces whose edge measure is extremely singular with respect to $$\lambda \times \lambda$$.

It turns out that double measure spaces play a role in other recent work in graph limit theory, as limit objects for graph sequences that are neither dense nor bounded-degree, but convergent in some well-defined sense: shape convergence [17] or action convergence [1]. We don’t describe these limit theories here, but as an example for which a very reasonable limit can be defined in terms of double measure spaces we mention the sequence of hypercubes.

We can scale the edge measure of a double measure space to get a probability measure; if we drop the node measure (or restrict our interest to the case when $$\lambda$$ is the marginal of $$\eta$$), we get to our main object of study, Markov spaces. Except for the scaling factor, this generalizes regular graphs. To construct limits of non-regular graphs we need the additional information contained in the node measure; the marginal of $$\eta$$ corresponds to the degree sequence.

#### 2.2.4 Markov spaces.

A Markov space consists of a sigma-algebra $${\mathcal {A}}$$, together with a probability measure $$\eta$$ on $${\mathcal {A}}^2$$ whose marginals are equal. We call $$\eta$$ the ergodic circulation, and its marginals $$\pi =\eta ^1=\eta ^2$$, the stationary distribution of the Markov space $$({\mathcal {A}},\eta )$$.

As the terminology above suggests, Markov spaces are intimately related to Markov chains. To define a Markov chain, we need a sigma-algebra $${\mathcal {A}}$$ and a probability measure $$P_u$$ on $${\mathcal {A}}$$ for every $$u\in J$$, called the transition distribution from u. One assumes that for every $$A\in {\mathcal {A}}$$, the value $$P_u(A)$$ is a measurable function of $$u\in J$$. This structure is sometimes called a Markov scheme.

If we also have a starting distribution on $$(J,{\mathcal {A}})$$, then we can generate a Markov chain, i.e. a sequence of random points $$({\mathbf {w}}^0, {\mathbf {w}}^1, {\mathbf {w}}^2,\ldots )$$ of J such that $${\mathbf {w}}^0$$ is chosen from the starting distribution, and $${\mathbf {w}}^{i+1}$$ is chosen from distribution $$P_{{\mathbf {w}}^i}$$ (independently of the previous elements $${\mathbf {w}}^0,\ldots ,{\mathbf {w}}^{i-1}$$ of the Markov chain). Sometimes we call this sequence a random walk.

A probability measure $$\pi$$ on $$(J,{\mathcal {A}})$$ is a stationary distribution for the Markov scheme if choosing $${\mathbf {w}}^0$$ from this distribution, the next point $${\mathbf {w}}^1$$ of the walk will have the same distribution. While finite Markov schemes always have a stationary distribution, this is not true for infinite underlying sigma-algebras. Furthermore, a Markov scheme may have several stationary distributions. (In the finite case, this happens only if the underlying directed graph is not strongly connected.)

A Markov scheme $$(J,\{P_u:~u\in J\})$$ with a fixed stationary distribution $$\pi$$ defines a Markov space, whose ergodic circulation is the joint distribution measure $$\eta$$ of $$({\mathbf {w}}^0,{\mathbf {w}}^1)$$, where $${\mathbf {w}}^0$$ is a random point from the stationary distribution. Both marginals of this ergodic circulation equal the stationary distribution $$\pi$$.

The ergodic circulation $$\eta$$ determines the Markov scheme (except for a set of measure zero in the stationary measure). Using the Disintegration Theorem (Proposition 3.3 below), one can show that every Markov space is obtained by this construction from a Markov scheme with a stationary distribution.

It is clear that if $$({\mathcal {A}},\eta )$$ is a Markov space, then so is $$({\mathcal {A}},\eta ^*)$$ (where $$\eta ^*$$ is obtained from $$\eta$$ by interchanging the two coordinates), with the same stationary distribution. The corresponding Markov chain is called the reverse chain. A Markov space is reversible, if $$\eta =\eta ^*$$. A Markov space $$({\mathcal {A}},\eta )$$ is indecomposable, if $$\eta (A\times A^c)>0$$ for every set $$A\in {\mathcal {A}}$$ with $$0<\pi (A)<1$$.
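For a finite Markov chain the ergodic circulation is simply $$\eta (i,j)=\pi _iP_{ij}$$. A sketch checking the marginal and reversibility conditions on an invented three-state chain (the stationary distribution is found by power iteration, which is only one convenient way to compute it):

```python
# A lazy random walk on a path with 3 nodes (an invented example).
P = [[0.5, 0.5, 0.0],
     [0.25, 0.5, 0.25],
     [0.0, 0.5, 0.5]]

# Stationary distribution by power iteration.
pi = [1/3, 1/3, 1/3]
for _ in range(200):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

# Ergodic circulation eta(i, j) = pi_i * P[i][j].
eta = [[pi[i] * P[i][j] for j in range(3)] for i in range(3)]

marg1 = [sum(eta[i][j] for j in range(3)) for i in range(3)]   # eta^1
marg2 = [sum(eta[i][j] for i in range(3)) for j in range(3)]   # eta^2
assert all(abs(marg1[k] - marg2[k]) < 1e-9 for k in range(3))  # circulation
assert all(abs(marg1[k] - pi[k]) < 1e-9 for k in range(3))     # marginals = pi

# This walk is reversible: eta = eta^*.
assert all(abs(eta[i][j] - eta[j][i]) < 1e-9
           for i in range(3) for j in range(3))
```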

Flow problems on graphons and graphings can be formulated as flow problems on double measure spaces; we’ll see that many of them can be formulated as flow problems on Markov spaces, without reference to the node measure. The solutions we obtain yield solutions in the settings of graphings and graphons, via Radon–Nikodym derivatives. However, as mentioned in the introduction, these are just “pure existence proofs” (cf. also Remark 4.5).

## 3 Auxiliaries

### 3.1 Measures.

Let $$(J,{\mathcal {A}})$$ be a sigma-algebra. Unless specifically emphasized otherwise, we assume that $$(J,{\mathcal {A}})$$ is a standard Borel space of continuum cardinality; in particular, $${\mathcal {A}}$$ separates any two points, and it is countably generated. Since the sigma-algebra $${\mathcal {A}}$$ determines its underlying set, we can talk about the standard Borel space as a sigma-algebra (where, in the case of the sigma-algebra denoted by $${\mathcal {A}}$$, the underlying set will be denoted by J). We denote by $${\mathfrak {M}}({\mathcal {A}})$$ the linear space of finite signed (countably additive) measures on $${\mathcal {A}}$$, and by $${\mathfrak {M}}_+({\mathcal {A}})$$, the set of nonnegative measures in $${\mathfrak {M}}({\mathcal {A}})$$. We denote by $$\delta _s$$ the Dirac measure, the probability distribution concentrated on $$s\in J$$.

If $$\mu \in {\mathfrak {M}}({\mathcal {A}})$$ and $$f:~J\rightarrow {\mathbb {R}}$$ is a $$\mu$$-integrable function, then we define a signed measure $$f\cdot \mu \in {\mathfrak {M}}({\mathcal {A}})$$ and a number $$\mu (f)$$ by

\begin{aligned} (f\cdot \mu )(A)=\int \limits _A f\,d\mu \quad (A\in {\mathcal {A}}),\qquad \mu (f)=(f\cdot \mu )(J) = \int \limits _J f\,d\mu . \end{aligned}

We endow the linear space $${\mathfrak {M}}({\mathcal {A}})$$ with the total variation norm

\begin{aligned} \Vert \alpha \Vert = \sup _{A\in {\mathcal {A}}}\alpha (A)-\inf _{B\in {\mathcal {A}}}\alpha (B). \end{aligned}
(5)

We note that the supremum and the infimum are attained, when $$J=A\cup B$$ is a Hahn decomposition of $$\alpha$$. With this norm, $${\mathfrak {M}}({\mathcal {A}})$$ becomes a Banach space. This norm defines a metric on $${\mathfrak {M}}({\mathcal {A}})$$, the total variation distance

\begin{aligned} d_\mathrm{tv}(\alpha ,\beta ) = \Vert \alpha -\beta \Vert . \end{aligned}

Warning: if $$\alpha$$ and $$\beta$$ are probability measures, then $$\sup _{A\in {\mathcal {A}}}(\alpha (A)-\beta (A))=-\inf _{A\in {\mathcal {A}}}(\alpha (A)-\beta (A))$$, and so $$d_\mathrm{tv}(\alpha ,\beta )= 2 \sup _A (\alpha (A)-\beta (A))$$. In probability theory, the total variation distance is often defined as $$\sup _A (\alpha (A)-\beta (A))$$, a factor of 2 smaller.
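For discrete measures the norm (5) and this factor-of-2 relation can be checked directly from the Hahn decomposition (the two distributions below are invented):

```python
# Total variation distance of two probability measures on {0, 1, 2, 3}.
alpha = [0.5, 0.3, 0.1, 0.1]
beta  = [0.2, 0.2, 0.3, 0.3]
diff = [a - b for a, b in zip(alpha, beta)]

# Hahn decomposition: A carries the positive part, B the negative part.
sup_part = sum(d for d in diff if d > 0)    # sup_A (alpha - beta)(A)
inf_part = sum(d for d in diff if d < 0)    # inf_B (alpha - beta)(B)
d_tv = sup_part - inf_part                  # the norm (5) of alpha - beta

# For probability measures the two parts cancel, giving the factor of 2:
assert abs(sup_part + inf_part) < 1e-12
assert abs(d_tv - 2 * sup_part) < 1e-12
assert abs(d_tv - sum(abs(d) for d in diff)) < 1e-12
```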

For $$\mu \in {\mathfrak {M}}({\mathcal {A}})$$ and $$A\in {\mathcal {A}}$$, we define the restriction measure $$\mu _A\in {\mathfrak {M}}({\mathcal {A}})$$ by $$\mu _A(X)=\mu (A\cap X)$$. We denote the Jordan decomposition of a signed measure $$\alpha \in {\mathfrak {M}}({\mathcal {A}})$$ by $$\alpha =\alpha _+-\alpha _-$$, and its total variation measure by $$|\alpha |=\alpha _++\alpha _-$$. So $$\Vert \alpha \Vert = \alpha _+(J)+\alpha _-(J) = |\alpha |(J)$$. For two measures $$\alpha ,\beta$$ on $${\mathcal {A}}$$, we consider the Jordan decomposition of their difference $$\alpha -\beta =(\alpha -\beta )_+-(\alpha -\beta )_-=(\alpha -\beta )_+-(\beta -\alpha )_+$$, and define the measures

\begin{aligned} \alpha \setminus \beta = (\alpha -\beta )_+,\qquad \alpha \wedge \beta = \alpha -(\alpha -\beta )_+ = \beta -(\beta -\alpha )_+. \end{aligned}

The measure $$\alpha \wedge \beta$$ is the largest nonnegative measure $$\gamma$$ dominated by both $$\alpha$$ and $$\beta$$.
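For discrete measures these Jordan-decomposition formulas reduce to pointwise operations. A minimal sketch (values invented):

```python
# alpha \ beta and alpha ∧ beta for measures on a 3-point space.
alpha = [3.0, 1.0, 0.0]
beta  = [1.0, 2.0, 2.0]

minus = [max(a - b, 0.0) for a, b in zip(alpha, beta)]   # alpha \ beta
meet  = [a - m for a, m in zip(alpha, minus)]            # alpha ∧ beta

assert meet == [min(a, b) for a, b in zip(alpha, beta)]  # pointwise minimum
# alpha ∧ beta is dominated by both measures:
assert all(m <= a and m <= b for m, a, b in zip(meet, alpha, beta))
```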

If $${\mathcal {A}}$$ is a sigma-algebra, we denote by $${\mathcal {A}}^2={\mathcal {A}}\times {\mathcal {A}}$$ the product sigma-algebra of $${\mathcal {A}}$$ with itself; $${\mathcal {A}}^3$$ etc. are defined analogously. Sometimes it will be necessary to distinguish the factors (even though they are identical), and we write $${\mathcal {A}}^3={\mathcal {A}}_1\times {\mathcal {A}}_2\times {\mathcal {A}}_3 = {\mathcal {A}}^{\{1,2,3\}}$$ etc. For a measure $$\mu \in {\mathfrak {M}}({\mathcal {A}}^n)$$, and $$T\subseteq \{1,\dots ,n\}$$, we let $$\mu ^T$$ denote its marginal on all coordinates in T. To simplify notation, we write $$\mu ^{34}=\mu ^{\{3,4\}}$$, etc.

We need some further definitions for the sigma-algebra $${\mathcal {A}}^2$$ and for measures on it. For $$X\subseteq J\times J$$, let $$X^*=\{(x,y):~(y,x)\in X\}$$. For a function $$f:~J\times J\rightarrow {\mathbb {R}}$$, we define $$f^*(x,y) = f(y,x)$$. For a signed measure $$\mu$$ on $${\mathcal {A}}\times {\mathcal {A}}$$, we define $$\mu ^*(X)=\mu (X^*)$$. A measure $$\mu$$ on $$J\times J$$ is symmetric if $$\mu ^*=\mu$$.

We set $$\mu ^B(A)=\mu (A\times B)$$. So $$\mu ^1=\mu ^J$$ and $$\mu ^2=(\mu ^*)^J$$ for $$\mu \in {\mathfrak {M}}({\mathcal {A}}^2)$$. If $$\mu ^1=\lambda _1$$ and $$\mu ^2=\lambda _2$$, then we say that $$\mu$$ is a coupling of the measures $$\lambda _1$$ and $$\lambda _2$$.
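A coupling is far from unique. A sketch exhibiting two couplings of the same (invented) pair of marginals on a two-point space:

```python
# Two couplings of the marginals lam1 and lam2 on a 2-point space.
lam1 = [0.5, 0.5]
lam2 = [0.3, 0.7]

indep = [[lam1[i] * lam2[j] for j in range(2)] for i in range(2)]
# A coupling putting as much mass as possible on the diagonal:
mono = [[0.3, 0.2], [0.0, 0.5]]

for mu in (indep, mono):
    assert all(abs(sum(mu[i]) - lam1[i]) < 1e-12 for i in range(2))        # mu^1
    assert all(abs(sum(mu[i][j] for i in range(2)) - lam2[j]) < 1e-12
               for j in range(2))                                          # mu^2
```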

A circulation is a finite signed measure $$\alpha \in {\mathfrak {M}}({\mathcal {A}}^2)$$ with equal marginals: $$\alpha ^1=\alpha ^2$$. Every symmetric measure is a circulation in a trivial way. We’ll return to circulations in the next section. We say that a measure $$\beta \in {\mathfrak {M}}_+({\mathcal {A}}^2)$$ is acyclic, if there is no nonzero circulation $$\alpha$$ such that $$0\le \alpha \le \beta$$. Every measure in $${\mathfrak {M}}_+({\mathcal {A}}^2)$$ can be written as the sum of a nonnegative acyclic measure and a nonnegative circulation (this decomposition is not necessarily unique).
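In the finite case the decomposition into a nonnegative acyclic measure plus a nonnegative circulation can be carried out by repeatedly cancelling directed cycles in the support. A sketch (this greedy procedure is illustrative only, not the argument used in the paper; the example weights are invented):

```python
def split_acyclic_plus_circulation(beta):
    """Write nonnegative edge weights beta on a finite digraph as
    (acyclic part) + (nonnegative circulation) by cycle cancelling."""
    beta = dict(beta)
    circ = {e: 0.0 for e in beta}

    def find_cycle():
        # DFS for a directed cycle among edges with positive weight.
        succ = {}
        for (i, j), w in beta.items():
            if w > 1e-12:
                succ.setdefault(i, []).append(j)
        color, stack = {}, []
        def dfs(u):
            color[u] = 1
            stack.append(u)
            for v in succ.get(u, []):
                if color.get(v, 0) == 1:          # back edge: cycle found
                    return stack[stack.index(v):] + [v]
                if color.get(v, 0) == 0:
                    cyc = dfs(v)
                    if cyc:
                        return cyc
            color[u] = 2
            stack.pop()
            return None
        for u in list(succ):
            if color.get(u, 0) == 0:
                cyc = dfs(u)
                if cyc:
                    return list(zip(cyc, cyc[1:]))
        return None

    while True:
        cyc = find_cycle()
        if cyc is None:
            return beta, circ          # acyclic remainder, circulation
        m = min(beta[e] for e in cyc)  # cancel the bottleneck along the cycle
        for e in cyc:
            beta[e] -= m
            circ[e] += m

# Triangle 0 -> 1 -> 2 -> 0 plus an extra edge 0 -> 2: one unit of cycle
# mass is split off, and the remainder is acyclic.
beta = {(0, 1): 1.0, (1, 2): 2.0, (2, 0): 1.0, (0, 2): 3.0}
acyc, circ = split_acyclic_plus_circulation(beta)
assert circ[(0, 1)] == circ[(1, 2)] == circ[(2, 0)] == 1.0
assert acyc[(0, 1)] < 1e-9 and abs(acyc[(1, 2)] - 1.0) < 1e-9
```

As the text notes, the decomposition is not unique: a different cycle-search order can split off different circulations.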

We need some well-known facts about measures.

### Lemma 3.1

Let $$(J,{\mathcal {A}})$$ be a standard Borel space, and $$\psi \in {\mathfrak {M}}_+({\mathcal {A}})$$. Let $$\mu _1,\mu _2,\dots \in {\mathfrak {M}}({\mathcal {A}})$$ be signed measures with $$|\mu _n|\le \psi$$. Then there is a subsequence $$n_1<n_2<\dots$$ of natural numbers and a signed measure $$\mu \in {\mathfrak {M}}({\mathcal {A}})$$ such that $$|\mu |\le \psi$$ and $$\mu _{n_i}(A)\rightarrow \mu (A)$$ for every $$A\in {\mathcal {A}}$$.

It follows easily that, more generally, $$\mu _{n_i}(f)\rightarrow \mu (f)$$ for every bounded measurable function $$f:~J\rightarrow {\mathbb {R}}$$.

### Proof

We may assume that $$\mu _n\ge 0$$ (just add $$\psi$$ to every measure; then $$0\le \mu _n\le 2\psi$$). Let $${\mathcal {B}}$$ be a countable set algebra generating $${\mathcal {A}}$$. The sequence $$(\mu _n(B):~n=1,2,\dots )$$ is bounded for every $$B\in {\mathcal {B}}$$, so choosing an appropriate subsequence, we may assume that there is a function $$\mu :~{\mathcal {B}}\rightarrow {\mathbb {R}}$$ such that $$\mu _n(B)\rightarrow \mu (B)$$ for all $$B\in {\mathcal {B}}$$. Clearly each $$\mu _n$$ is a pre-measure on $${\mathcal {B}}$$. We claim that $$\mu$$ is a pre-measure on $${\mathcal {B}}$$ as well. Finite additivity of $$\mu$$ is trivial, and so is $$0\le \mu (B)\le 2\psi (B)$$ for $$B\in {\mathcal {B}}$$. If $$B_1\supseteq B_2\supseteq \dots$$ $$(B_i\in {\mathcal {B}})$$ and $$\cap _k B_k=\emptyset$$, then $$\mu (B_k)\le 2\psi (B_k)$$, and since $$\psi (B_k)\rightarrow 0$$ as $$k\rightarrow \infty$$, we have $$\mu (B_k)\rightarrow 0$$ as well.

It follows that $$\mu$$ extends to a measure on $${\mathcal {A}}$$. Uniqueness of the extension implies that $$0\le \mu \le 2\psi$$ on the whole sigma-algebra $${\mathcal {A}}$$. Let $$S\in {\mathcal {A}}$$; we claim that $$\mu _n(S)\rightarrow \mu (S)$$ ($$n\rightarrow \infty$$). For every $$\varepsilon >0$$, there is a set $$B\in {\mathcal {B}}$$ such that $$\psi (S\triangle B)\le \varepsilon /6$$. This implies that $$|\mu _n(S)-\mu _n(B)|\le \mu _n(S\triangle B)\le 2\psi (S\triangle B)\le \varepsilon /3$$, and similarly $$|\mu (S)-\mu (B)|\le \varepsilon /3$$. Thus $$|\mu _n(S)-\mu (S)|\le |\mu _n(B)-\mu (B)|+2\varepsilon /3$$. Since $$\mu _n(B)\rightarrow \mu (B)$$ by the definition of $$\mu$$, we have $$|\mu _n(S)-\mu (S)|\le \varepsilon$$ if n is large enough. $$\square$$

The following fact follows by a very similar argument.

### Lemma 3.2

Let $$(J,{\mathcal {A}})$$ be a standard Borel space, and let $$\lambda _1,\lambda _2$$ be probability measures on $$(J,{\mathcal {A}})$$. Let $$\mu _n\in {\mathfrak {M}}({\mathcal {A}}^2)$$ $$(n=1,2,\dots )$$ be measures coupling $$\lambda _1$$ and $$\lambda _2$$. Then there is an infinite subsequence $$\mu _{n_1},\mu _{n_2},\dots$$ and a measure $$\mu$$ coupling $$\lambda _1$$ and $$\lambda _2$$ such that $$\mu _{n_i}(A\times B)\rightarrow \mu (A\times B)$$ for all sets $$A,B\in {\mathcal {A}}$$. $$\square$$

We need a special version of the important construction of disintegration; see [3, 6, 7, 15] for more details.

### Proposition 3.3

Let $$(J,{\mathcal {A}})$$ be a standard Borel space, and let $$\psi \in {\mathfrak {M}}({\mathcal {A}}\times {\mathcal {A}})$$. Then there is a family of signed measures $$\varphi _x\in {\mathfrak {M}}({\mathcal {A}})$$ $$(x\in J)$$ such that $$\varphi _x(A)$$ is a measurable function of x for every $$A\in {\mathcal {A}}$$, and

\begin{aligned} \psi (B)=\int \limits _J \varphi _x(B_x)\,d\psi ^1(x) \end{aligned}

for every $$B\in {\mathcal {A}}^2$$, where $$B_x=\{y\in J:~(x,y)\in B\}$$ denotes the section of B at x. $$\square$$

One can think of $$\varphi _x$$ as $$\psi$$ conditioned on $$\{x\}\times J$$, even though the condition has (typically) probability 0, and so the conditional probability in the usual sense is not defined.

### 3.2 Linear functionals.

We need some simple facts of Banach space theory; for completeness, we include their simple derivations from standard results.

### Lemma 3.4

Let $$K_1,\dots ,K_n$$ be open convex sets in a Banach space B. Then $$K_1\cap \dots \cap K_n=\emptyset$$ if and only if there are bounded linear functionals $${\mathcal {L}}_1,\dots ,{\mathcal {L}}_n$$ on B and real numbers $$a_1,\dots ,a_n$$ such that $${\mathcal {L}}_1+\dots +{\mathcal {L}}_n=0$$, $$a_1+\dots +a_n=0$$, and for each i, either $${\mathcal {L}}_i=0$$ and $$a_i=0$$, or $${\mathcal {L}}_i(x)>a_i$$ for $$x\in K_i$$; moreover, for at least one i the second possibility holds. $$\square$$

If $${\mathcal {L}}_i=0$$ and $$a_i=0$$ for some i, then already the intersection of the sets $$K_j$$ $$(j\not =i)$$ is empty.

### Proof

The sufficiency of the condition is trivial. To prove the necessity, consider the Banach space $$B'=B\oplus \dots \oplus B$$ (n copies) and the open convex set $$K'{=}K_1\times \dots \times K_n\subseteq B'$$. If any $$K_i$$ is empty, then the conclusion is trivial, so suppose that $$K'\not =\emptyset$$. Also consider the closed linear subspace (“diagonal”) $$\Delta =\{(x,\dots ,x): ~x\in B\}\subseteq B'$$. Then $$\Delta \cap K'=\emptyset$$. By the Hahn–Banach Theorem, there is a bounded linear functional $${\mathcal {L}}$$ on $$B'$$ such that $${\mathcal {L}}(y)=0$$ for $$y\in \Delta$$, and $${\mathcal {L}}(y)>0$$ for $$y\in K'$$.

Define $${\mathcal {L}}_i(x) = {\mathcal {L}}(0,\dots ,0,x,0,\dots ,0)$$ (with x in the i-th position) and $$a_i=\inf _{x\in K_i}{\mathcal {L}}_i(x)$$. Then $${\mathcal {L}}_i$$ is a bounded linear functional on B, and $${\mathcal {L}}(x_1,\dots ,x_n)={\mathcal {L}}_1(x_1)+\dots +{\mathcal {L}}_n(x_n)$$. The condition that $${\mathcal {L}}(y)=0$$ for $$y\in \Delta$$ means that $${\mathcal {L}}_1(x)+\dots +{\mathcal {L}}_n(x)=0$$ for all $$x\in B$$. For each i, either $${\mathcal {L}}_i=0$$ and $$a_i=0$$, or $${\mathcal {L}}_i(x)>a_i$$ for $$x\in K_i$$ (as $$K_i$$ is open). Since $${\mathcal {L}}(y)>0$$ for $$y\in K'$$, there must be at least one i with $${\mathcal {L}}_i\not =0$$. Furthermore, $$a_1+\dots +a_n = \inf _{y\in K'}{\mathcal {L}}(y)\ge 0$$. We can decrease any $$a_i$$ to get equality in the last inequality. $$\square$$

### Proposition 3.5

Let $$B_1$$ and $$B_2$$ be Banach spaces and $${\mathcal {T}}:~B_1\rightarrow B_2$$, a bounded linear transformation whose range is closed in $$B_2$$. Let $${\mathcal {L}}:~B_1\rightarrow {\mathbb {R}}$$ be a bounded linear functional. Then $${\mathcal {L}}$$ vanishes on $$\mathrm{Ker}({\mathcal {T}})$$ if and only if there is a bounded linear functional $${\mathcal {K}}:~B_2\rightarrow {\mathbb {R}}$$ such that $${\mathcal {L}}={\mathcal {K}}\circ {\mathcal {T}}$$. $$\square$$

### Proof

The “if” direction is trivial. To prove the converse, note that $$\mathrm{Ker}({\mathcal {T}})$$ is a closed linear subspace of $$B_1$$, and so $$B_0=B_1/\mathrm{Ker}({\mathcal {T}})$$ is a well defined Banach space. The maps $${\mathcal {T}}$$ and $${\mathcal {L}}$$ induce bounded linear maps $${\mathcal {T}}_0:~B_0\rightarrow B_2$$ and $${\mathcal {L}}_0:~B_0\rightarrow {\mathbb {R}}$$ (since $${\mathcal {L}}$$ vanishes on $$\mathrm{Ker}({\mathcal {T}})$$). Furthermore, $${\mathcal {T}}_0$$ is bijective. Since $$\mathrm{Rng}({\mathcal {T}}_0)=\mathrm{Rng}({\mathcal {T}})$$ is closed in $$B_2$$ and therefore a Banach space, the Inverse Mapping Theorem implies that $${\mathcal {T}}_0^{-1}$$ is bounded. So we can define $${\mathcal {K}}$$ on $$\mathrm{Rng}({\mathcal {T}})$$ by $${\mathcal {K}}(x)={\mathcal {L}}_0({\mathcal {T}}_0^{-1}(x))$$. By the Hahn–Banach Theorem, $${\mathcal {K}}$$ can be extended to $$B_2$$. $$\square$$

We will need linear functionals on the Banach space of measures. These functionals do not seem to have a useful complete description, but the following fact is often a reasonable substitute.

### Proposition 3.6

Let $${\mathcal {L}}$$ be a bounded linear functional on $${\mathfrak {M}}({\mathcal {A}})$$ and $$\psi \in {\mathfrak {M}}_+({\mathcal {A}})$$. Then there is a bounded measurable function $$g:~J\rightarrow {\mathbb {R}}$$ such that $${\mathcal {L}}(\mu )=\mu (g)$$ for every $$\mu \in {\mathfrak {M}}({\mathcal {A}})$$ with $$\mu \ll \psi$$. $$\square$$

### Proof

We define a functional $${\mathcal {N}}:~L_1({\mathcal {A}},\psi )\rightarrow {\mathbb {R}}$$ by $${\mathcal {N}}(f)={\mathcal {L}}(f\cdot \psi )$$ for $$f\in L_1({\mathcal {A}},\psi )$$. Then $${\mathcal {N}}$$ is a bounded linear functional on $$L_1({\mathcal {A}},\psi )$$, and so there is a bounded measurable function g on $$(J,{\mathcal {A}})$$ such that $${\mathcal {N}}(f)=\psi (fg)$$ for all $$f\in L_1({\mathcal {A}},\psi )$$.

The condition that $$\mu \ll \psi$$ implies that the Radon-Nikodym derivative $$h=d\mu /d\psi \in L_1({\mathcal {A}},\psi )$$ exists, and $$h\cdot \psi =\mu$$. Thus

\begin{aligned} {\mathcal {L}}(\mu ) ={\mathcal {N}}(h) =\int \limits _J \frac{d\mu }{d\psi } g\,d\psi = \mu (g). \end{aligned}

$$\square$$

We conclude with a technical lemma.

### Lemma 3.7

Let $${\mathcal {L}}$$ be a bounded linear functional on $${\mathfrak {M}}({\mathcal {A}}^2)$$. Then there is a bounded linear functional $${\mathcal {Q}}$$ on $${\mathfrak {M}}({\mathcal {A}})$$ such that for all $$\psi \in {\mathfrak {M}}_+({\mathcal {A}})$$,

\begin{aligned} {\mathcal {Q}}(\psi )=\sup \{{\mathcal {L}}(\mu ):~\mu \in {\mathfrak {M}}_+({\mathcal {A}}^2),~\mu ^1=\psi \}. \end{aligned}

### Proof

The formula in the lemma defines a functional $${\mathcal {Q}}$$ on $${\mathfrak {M}}_+({\mathcal {A}})$$; we begin by showing that it is bounded, positively homogeneous and additive on nonnegative measures. For every $$\mu \in {\mathfrak {M}}_+({\mathcal {A}}^2)$$ with $$\mu ^1=\psi$$, we have $$\Vert \mu \Vert =\Vert \psi \Vert$$, and so $${\mathcal {L}}(\mu ) \le \Vert {\mathcal {L}}\Vert \,\Vert \mu \Vert =\Vert {\mathcal {L}}\Vert \,\Vert \psi \Vert$$. Thus $${\mathcal {Q}}(\psi ) \le \Vert {\mathcal {L}}\Vert \,\Vert \psi \Vert$$. It is also clear that $${\mathcal {Q}}(c\psi )=c{\mathcal {Q}}(\psi )$$ for $$c>0$$.

Let $$\psi = \psi _1+\psi _2$$ ($$\psi _i\in {\mathfrak {M}}_+({\mathcal {A}})$$); we claim that

\begin{aligned} {\mathcal {Q}}(\psi )= {\mathcal {Q}}(\psi _1)+{\mathcal {Q}}(\psi _2). \end{aligned}
(6)

For $$\varepsilon >0$$, choose $$\mu _i\in {\mathfrak {M}}_+({\mathcal {A}}^2)$$ so that $$\mu _i^1=\psi _i$$ and $${\mathcal {L}}(\mu _i)\ge {\mathcal {Q}}(\psi _i)-\varepsilon$$. Then

\begin{aligned} {\mathcal {Q}}(\psi ) \ge {\mathcal {L}}(\mu _1+\mu _2) = {\mathcal {L}}(\mu _1)+{\mathcal {L}}(\mu _2) \ge {\mathcal {Q}}(\psi _1)+{\mathcal {Q}}(\psi _2)-2\varepsilon . \end{aligned}

Since this holds for every $$\varepsilon >0$$, this proves that $${\mathcal {Q}}(\psi )\ge {\mathcal {Q}}(\psi _1)+{\mathcal {Q}}(\psi _2)$$. To prove the reverse inequality, let $$\mu \in {\mathfrak {M}}_+({\mathcal {A}}^2)$$ with $$\mu ^1=\psi$$. Define the measures

\begin{aligned} \mu _i(U)=\int \limits _U \frac{d\psi _i}{d\psi }(x) \,d\mu (x,y) \qquad (U\in {\mathcal {A}}^2). \end{aligned}

It is easy to check that

\begin{aligned} \mu _1+\mu _2=\mu , \quad \text {and}\quad \mu _i^1=\psi _i\quad (i=1,2). \end{aligned}
(7)

It follows that

\begin{aligned} {\mathcal {L}}(\mu ) ={\mathcal {L}}(\mu _1)+{\mathcal {L}}(\mu _2) \le {\mathcal {Q}}(\psi _1)+{\mathcal {Q}}(\psi _2). \end{aligned}

Since this holds for every $$\mu \in {\mathfrak {M}}_+({\mathcal {A}}^2)$$ with $$\mu ^1=\psi$$, we get that $${\mathcal {Q}}(\psi )\le {\mathcal {Q}}(\psi _1)+{\mathcal {Q}}(\psi _2)$$. This implies (6).

Thus $${\mathcal {Q}}$$ is nonnegative, positively homogeneous and additive on $${\mathfrak {M}}_+({\mathcal {A}})$$. We extend it to $${\mathfrak {M}}({\mathcal {A}})$$ by $${\mathcal {Q}}(\mu )={\mathcal {Q}}(\mu _+)-{\mathcal {Q}}(\mu _-)$$. In particular, if $$\mu \le 0$$, then $${\mathcal {Q}}(\mu )=-{\mathcal {Q}}(-\mu )$$. This implies that the extended $${\mathcal {Q}}$$ is homogeneous.

Let $$\varphi ,\psi \in {\mathfrak {M}}({\mathcal {A}})$$; we claim that

\begin{aligned} {\mathcal {Q}}(\varphi +\psi ) = {\mathcal {Q}}(\varphi )+{\mathcal {Q}}(\psi ). \end{aligned}
(8)

We know that this holds if $$\varphi ,\psi \ge 0$$, and it follows that it holds if $$\varphi ,\psi \le 0$$. If $$\varphi \ge 0$$, $$\psi \le 0$$, and $$\varphi +\psi \ge 0$$, then $${\mathcal {Q}}(\varphi ) = {\mathcal {Q}}(\varphi +\psi )+{\mathcal {Q}}(-\psi ) = {\mathcal {Q}}(\varphi +\psi )-{\mathcal {Q}}(\psi )$$, so (8) holds. It follows easily that (8) holds whenever none of $$\varphi$$, $$\psi$$ and $$\varphi +\psi$$ changes sign.

To verify the general case, we consider the common refinement of the Hahn decompositions for $$\varphi$$, $$\psi$$ and $$\varphi +\psi$$. We get a partition $${\mathcal {P}}$$ into at most 8 parts such that none of $$\varphi$$, $$\psi$$ and $$\varphi +\psi$$ changes sign on any partition class. Then

\begin{aligned} {\mathcal {Q}}(\varphi )&= {\mathcal {Q}}(\varphi _+)-{\mathcal {Q}}(\varphi _-) = \sum _{X\in {\mathcal {P}}:\,\varphi _X\ge 0} {\mathcal {Q}}(\varphi _X) - \sum _{X\in {\mathcal {P}}:\,\varphi _X\le 0} {\mathcal {Q}}((\varphi _-)_X) = \sum _{X\in {\mathcal {P}}} {\mathcal {Q}}(\varphi _X). \end{aligned}

(Note: (8) has been applied to the restrictions of $$\varphi$$ to subsets of the positive support, and separately to subsets of the negative support.) Similarly,

\begin{aligned} {\mathcal {Q}}(\psi ) = \sum _{X\in {\mathcal {P}}} {\mathcal {Q}}(\psi _X), \quad \text {and} \quad {\mathcal {Q}}(\varphi +\psi ) = \sum _{X\in {\mathcal {P}}} {\mathcal {Q}}((\varphi +\psi )_X). \end{aligned}

Since we know already that $${\mathcal {Q}}((\varphi +\psi )_X)={\mathcal {Q}}(\varphi _X)+{\mathcal {Q}}(\psi _X)$$, this proves that $${\mathcal {Q}}$$ is additive.

Clearly $$|{\mathcal {Q}}(\varphi )|\le |{\mathcal {Q}}(\varphi _+)|+|{\mathcal {Q}}(\varphi _-)|\le 2\Vert {\mathcal {L}}\Vert \,\Vert \varphi \Vert$$, so $${\mathcal {Q}}$$ is continuous. $$\square$$

## 4 Potentials, circulations and flows

### 4.1 Potentials.

Let $$(J,{\mathcal {A}})$$ be a measurable space. A measurable function $$F:~J\times J\rightarrow {\mathbb {R}}$$ is a potential if there is a measurable function $$f:~ J\rightarrow {\mathbb {R}}$$ such that $$F(x,y)=f(x)-f(y)$$. It is easy to see that a bounded measurable function $$F:~J\times J\rightarrow {\mathbb {R}}$$ is a potential if and only if $$F(x,y)+F(y,z)+F(z,x)=0$$ for all $$x,y,z\in J$$.
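In the finite setting this characterization can be checked, and inverted, directly. A minimal sketch (toy data and names are ours): the cyclic identity holds for every difference function, and conversely the identity lets us recover f from F up to an additive constant.

```python
import itertools
import random

J = range(5)
f = {x: random.uniform(-1.0, 1.0) for x in J}
F = {(x, y): f[x] - f[y] for x in J for y in J}

# the cyclic sum F(x,y) + F(y,z) + F(z,x) vanishes for every triple
assert all(abs(F[x, y] + F[y, z] + F[z, x]) < 1e-12
           for x, y, z in itertools.product(J, repeat=3))

# conversely, fixing a base point x0 = 0 recovers f up to a constant:
# g(x) = F(x, 0) = f(x) - f(0)
g = {x: F[x, 0] for x in J}
assert all(abs(F[x, y] - (g[x] - g[y])) < 1e-12 for x in J for y in J)
```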

Of particular importance will be cut potentials of the form $${\mathbb {1}}_A(x)-{\mathbb {1}}_A(y)={\mathbb {1}}_{A\times A^c}(x,y)-{\mathbb {1}}_{A^c\times A}(x,y)$$, where $$A\in {\mathcal {A}}$$. Every potential F can be expressed by cut potentials as

\begin{aligned} F(x,y) = \int \limits _{-C}^C ({\mathbb {1}}_{A_t}(x)-{\mathbb {1}}_{A_t}(y))\,dt, \end{aligned}
(9)

where C is an upper bound on |F|, and $$A_t$$ ($$-C\le t\le C$$) is a measurable subset of J such that $$A_s\subseteq A_t$$ for $$t<s$$, $$\cap _t A_t=\emptyset$$ and $$\cup _t A_t=J$$. To see this, let $$F(x,y)=f(x)-f(y)$$ for some bounded measurable function f, which we may shift so that its values lie in $$(-C,C)$$, and define $$A_t=\{x\in J:~f(x)\ge t\}$$ $$(-C\le t\le C)$$.
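A numeric sanity check of (9) on a finite point set, with $$A_t=\{x: f(x)\ge t\}$$ as in the text (toy data; the discretization step is ours):

```python
import random

J = range(4)
# values of f kept in (-1/2, 1/2), so that |F| < 1 = C
f = {x: random.uniform(-0.5, 0.5) for x in J}
C = 1.0

def cut_integral(x, y, steps=20000):
    # midpoint Riemann sum of 1_{A_t}(x) - 1_{A_t}(y) over t in [-C, C]
    dt = 2 * C / steps
    total = 0.0
    for k in range(steps):
        t = -C + (k + 0.5) * dt
        total += ((f[x] >= t) - (f[y] >= t)) * dt
    return total

# the integral of the cut potentials recovers F(x, y) = f(x) - f(y)
for x in J:
    for y in J:
        assert abs(cut_integral(x, y) - (f[x] - f[y])) < 1e-3
```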

### 4.2 Circulations.

#### 4.2.1 Circulations and potentials.

Recall that $$\alpha \in {\mathfrak {M}}({\mathcal {A}}^2)$$ is a circulation if its two marginals $$\alpha ^1$$ and $$\alpha ^2$$ are equal. This is clearly equivalent to saying that

\begin{aligned} \alpha (X\times X^c)=\alpha (X^c\times X)\qquad (\forall X\in {\mathcal {A}}) \end{aligned}
(10)

(just cancel the common part $$X\times X$$ in $$\alpha (X\times J)=\alpha (J\times X)$$). Circulations form a linear subspace $${\mathfrak {C}}={\mathfrak {C}}({\mathcal {A}})$$ of the space $${\mathfrak {M}}({\mathcal {A}}^2)$$ of finite signed measures.
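The equivalence (10) can be checked directly in the finite setting, where a digraph is identified with the counting measure of its edge set. A minimal sketch (toy directed cycle; names are ours):

```python
from itertools import chain, combinations

n = 5
edges = [(i, (i + 1) % n) for i in range(n)]      # directed cycle C_n

def alpha(pairs):
    # counting measure of the edge set, evaluated on a set of pairs
    return sum(1 for e in edges if e in pairs)

# alpha(X x X^c) = alpha(X^c x X) for every subset X: condition (10)
V = set(range(n))
for X in map(set, chain.from_iterable(combinations(V, r)
                                      for r in range(n + 1))):
    Xc = V - X
    assert alpha({(u, v) for u in X for v in Xc}) == \
           alpha({(u, v) for u in Xc for v in X})
```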

In the finite case, circulations of the form $$\delta _{x_1x_2}+\dots +\delta _{x_{n-1}x_n}+\delta _{x_nx_1}$$ generate the space of all circulations (even those with $$n\le 3$$ do). In the measure case, this is not always so, as the next example shows.

### Example 4.1

(Cyclic graphing and digraphing). For a fixed $$a\in (0,1)$$, let $${\mathbf {C}}_a$$ be the graphing on [0, 1) obtained by connecting every point x to $$x+a\pmod 1$$ and $$x-a\pmod 1$$. If a is irrational, this graph consists of two-way infinite paths; if a is rational, it consists of cycles. We will also use the directed version $$\overrightarrow{C}_a$$, obtained by connecting x to $$x+a\pmod 1$$ by a directed edge.

The uniform measure $$\mu$$ on the edges of $$\overrightarrow{C}_a$$ is trivially a circulation, both of its marginals being the uniform measure $$\lambda$$ on [0, 1). Every circulation $$\alpha$$ supported on the edges is a constant multiple of this. Indeed, $$\alpha ^1(A)=\alpha (A\times (A+a))=\alpha ^2(A+a)=\alpha ^1(A+a)$$ for every Borel set $$A\subseteq [0,1)$$, which means that $$\alpha ^1$$ is invariant under translation by a. It is well-known that only scalar multiples of $$\lambda$$ have this property.

We need two lemmas describing “duality” relations between potentials and circulations.

### Lemma 4.2

A signed measure $$\alpha \in {\mathfrak {M}}({\mathcal {A}}^2)$$ is a circulation if and only if $$\alpha (F)=0$$ for every potential F.

### Proof

The “if” part follows by applying the condition to the potential $${\mathbb {1}}_{A}(x)-{\mathbb {1}}_{A}(y)$$:

\begin{aligned} \alpha (A\times J)-\alpha (J\times A)= \int \limits _{J\times J}({\mathbb {1}}_{A}(x)-{\mathbb {1}}_{A}(y))\,d\alpha (x,y) =0. \end{aligned}

To prove the converse, let $$\alpha$$ be a circulation, then for every potential $$F(x,y)=f(x)-f(y)$$, we have

\begin{aligned} \alpha (F)=\int \limits _{J\times J}(f(x)-f(y))\,d\alpha (x,y) = \int \limits _J f(x)\,d\alpha ^1(x)-\int \limits _J f(y)\,d\alpha ^2(y)=0. \end{aligned}

$$\square$$

### Lemma 4.3

Let $${\mathcal {L}}:~{\mathfrak {M}}({\mathcal {A}}^2)\rightarrow {\mathbb {R}}$$ be a continuous linear functional. Then $${\mathcal {L}}$$ vanishes on the space $${\mathfrak {C}}$$ of circulations if and only if there is a continuous linear functional $${\mathcal {K}}:~{\mathfrak {M}}({\mathcal {A}})\rightarrow {\mathbb {R}}$$ such that $${\mathcal {L}}(\mu )={\mathcal {K}}(\mu ^1-\mu ^2)$$ for all $$\mu \in {\mathfrak {M}}({\mathcal {A}}^2)$$.

### Proof

The kernel of the linear operator $${\mathcal {T}}:~\varphi \mapsto \varphi ^1-\varphi ^2$$ ($$\varphi \in {\mathfrak {M}}({\mathcal {A}}^2)$$) is $${\mathfrak {C}}$$. The range of this operator is

\begin{aligned} \mathrm{Rng}({\mathcal {T}})=\{\nu \in {\mathfrak {M}}({\mathcal {A}}):~\nu (J)=0\}. \end{aligned}
(11)

Indeed, if $$\nu =\mu ^1-\mu ^2\in \mathrm{Rng}({\mathcal {T}})$$, then $$\nu (J)=\mu (J\times J)-\mu (J\times J)=0$$. Conversely, if $$\nu (J)=0$$, then for any probability measure $$\gamma$$ on $${\mathcal {A}}$$,

\begin{aligned} {\mathcal {T}}(\nu \times \gamma ) = \gamma (J)\nu -\nu (J)\gamma = \nu , \end{aligned}

so $$\nu$$ is in the range of $${\mathcal {T}}$$. It is easy to check that $$\nu (J)=0$$ defines a closed subspace of $${\mathfrak {M}}({\mathcal {A}})$$. Hence Proposition 3.5 implies the necessity of the condition. The sufficiency is straightforward, since $$\mu ^1-\mu ^2=0$$ for every circulation $$\mu$$. $$\square$$

Let $${\mathcal {L}}\in {\mathfrak {C}}^\perp$$ and $$\psi \in {\mathfrak {M}}_+({\mathcal {A}}^2)$$. Restricting $${\mathcal {L}}$$ to measures $$\mu \ll \psi$$, we get a more explicit representation: there is a potential F such that

\begin{aligned} {\mathcal {L}}(\mu )=\mu (F)\qquad (\mu \ll \psi ). \end{aligned}
(12)

Indeed, consider the continuous linear functional $${\mathcal {K}}$$ constructed in Lemma 4.3, and its representation $${\mathcal {K}}(\nu )=\nu (g)$$ by a bounded measurable function $$g:~J\rightarrow {\mathbb {R}}$$ in Proposition 3.6, valid for every $$\nu \ll \psi ^1+\psi ^2$$. Then for the potential $$F(x,y)=g(x)-g(y)$$ and every $$\mu \ll \psi$$,

\begin{aligned} \mu (F)&=\int \limits _{J\times J} g(x)-g(y)\,d\mu (x,y) = \mu ^1(g)-\mu ^2(g) = {\mathcal {K}}(\mu ^1-\mu ^2)={\mathcal {L}}(\mu ). \end{aligned}

#### 4.2.2 Existence of circulations.

Now we begin to carry out our program of extending basic flow-theoretic results in combinatorial optimization to measures. Our first goal is to generalize the Hoffman Circulation Theorem and to characterize optimal circulations.

Given two measures $$\varphi$$ and $$\psi$$ on $$J\times J$$, we can ask whether there exists a circulation $$\alpha$$ such that $$\varphi \le \alpha \le \psi$$. Clearly $$\varphi \le \psi$$ is a necessary condition, but it is not sufficient in general. The following theorem generalizes the Hoffman Circulation Theorem.

### Theorem 4.4

For two signed measures $$\varphi ,\psi \in {\mathfrak {M}}(J\times J)$$, there exists a circulation $$\alpha$$ such that $$\varphi \le \alpha \le \psi$$ if and only if $$\varphi \le \psi$$ and $$\varphi (X\times X^c)\le \psi (X^c\times X)$$ for every set $$X\in {\mathcal {A}}$$.

### Proof

The necessity of the condition is trivial: if the circulation $$\alpha$$ exists, then $$\varphi (X\times X^c)\le \alpha (X\times X^c)=\alpha (X^c\times X) \le \psi (X^c\times X)$$.

To prove sufficiency, consider the set $${\mathfrak {X}}=\{\mu \in {\mathfrak {M}}({\mathcal {A}}^2):~\varphi \le \mu \le \psi \}$$. We may assume (by adding a sufficiently large circulation, say $$|\varphi |+|\varphi |^*$$, to both $$\varphi$$ and $$\psi$$) that $$0\le \varphi \le \psi$$. We want to prove that $${\mathfrak {C}}\cap {\mathfrak {X}}\not =\emptyset$$.

First, we prove the weaker fact that

\begin{aligned} d_{\mathrm{tv}}({\mathfrak {C}},{\mathfrak {X}})=0. \end{aligned}
(13)

Suppose that $$c=d_{\mathrm{tv}}({\mathfrak {C}},{\mathfrak {X}})>0$$. Let $${\mathfrak {X}}'=\{\mu \in {\mathfrak {M}}({\mathcal {A}}^2): ~d_\mathrm{tv}(\mu ,{\mathfrak {X}})<c\}$$, then $${\mathfrak {X}}'$$ is a convex open subset of $${\mathfrak {M}}({\mathcal {A}}^2)$$. Since $${\mathfrak {X}}'\cap {\mathfrak {C}}=\emptyset$$, the Hahn–Banach Theorem implies that there is a bounded linear functional $${\mathcal {L}}$$ on $${\mathfrak {M}}({\mathcal {A}}^2)$$ such that $${\mathcal {L}}(\mu )=0$$ for all $$\mu \in {\mathfrak {C}}$$, and $${\mathcal {L}}(\mu )<0$$ for all $$\mu$$ in the interior of $${\mathfrak {X}}'$$, in particular for every $$\mu \in {\mathfrak {X}}$$.

The first condition on $${\mathcal {L}}$$ implies, by representation (12), that there is a potential function $$F(x,y)=g(x)-g(y)$$ (with a bounded and measurable function $$g:~J\rightarrow {\mathbb {R}}$$) such that $${\mathcal {L}}(\mu )=\mu (F)$$ for every $$\mu \in {\mathfrak {M}}({\mathcal {A}}^2)$$ such that $$\mu \ll \psi$$. Let $$|g|\le C$$.

Let $$S=\{(x,y):~g(x)>g(y)\}$$ and $$A_t=\{x\in J:~g(x)\ge t\}$$. Clearly $$A_t\times A_t^c\subseteq S$$ and $$A_t^c\times A_t\subseteq S^c$$. We can write

\begin{aligned} g(x)+C = \int \limits _{-C}^C {\mathbb {1}}_{A_t}(x)\,dt, \end{aligned}

then

\begin{aligned} {\mathcal {L}}(\mu ) = \int \limits _{-C}^C \int \limits _{J\times J} {\mathbb {1}}_{A_t}(x)-{\mathbb {1}}_{A_t}(y)\,d\mu (x,y)\,dt = \int \limits _{-C}^C \mu (A_t\times A_t^c)-\mu (A_t^c\times A_t)\,dt. \end{aligned}
(14)

Let us apply this formula with $$\mu (X)=\psi (X\cap S)+\varphi (X\setminus S)$$. Then

\begin{aligned} {\mathcal {L}}(\mu ) = \int \limits _{-C}^C \mu (A_t\times A_t^c)-\mu (A_t^c\times A_t)\,dt =\int \limits _{-C}^C \psi (A_t\times A_t^c)-\varphi (A_t^c\times A_t)\,dt \ge 0 \end{aligned}

by hypothesis. On the other hand, we have $$\varphi \le \mu \le \psi$$, so $$\mu \in {\mathfrak {X}}$$, so $${\mathcal {L}}(\mu )<0$$. This contradiction proves (13).

To conclude, we select circulations $$\alpha _n\in {\mathfrak {C}}$$ and measures $$\beta _n\in {\mathfrak {X}}$$ such that $$\Vert \alpha _n-\beta _n\Vert \rightarrow 0$$ ($$n\rightarrow \infty$$). By Lemma 3.1, there is a measure $$\beta \in {\mathfrak {X}}$$ such that, for an appropriate subsequence of the indices n, $$\beta _n(S)\rightarrow \beta (S)$$ ($$n\rightarrow \infty$$) for all $$S\in {\mathcal {A}}^2$$. Hence

\begin{aligned} |\alpha _n(S)-\beta (S)|\le & {} |\alpha _n(S)-\beta _n(S)|+|\beta _n(S)-\beta (S)|\\\le & {} \Vert \alpha _n-\beta _n\Vert +|\beta _n(S)-\beta (S)|\rightarrow 0. \end{aligned}

In particular, for every $$A\in {\mathcal {A}}$$ we have

\begin{aligned} 0=\alpha _n(A\times A^c)-\alpha _n(A^c\times A)\rightarrow \beta (A\times A^c)-\beta (A^c\times A), \end{aligned}

and so $$\beta$$ is a circulation in $${\mathfrak {X}}$$, completing the proof. $$\square$$
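In the finite case, the condition of Theorem 4.4 is the classical Hoffman cut condition and can be checked by enumerating all cuts. A minimal sketch with hypothetical toy data (only the cut condition is tested; the circulation itself, when it exists, would come from a linear program, which we omit):

```python
from itertools import chain, combinations

V = range(3)

def cut_ok(phi, psi):
    # check phi(X x X^c) <= psi(X^c x X) over all subsets X of V
    subsets = chain.from_iterable(combinations(V, r) for r in range(4))
    for X in map(set, subsets):
        Xc = set(V) - X
        lo = sum(phi.get((u, v), 0) for u in X for v in Xc)
        hi = sum(psi.get((u, v), 0) for u in Xc for v in X)
        if lo > hi:
            return False
    return True

cycle = [(0, 1), (1, 2), (2, 0)]
phi = {e: 0.5 for e in cycle}              # lower bounds
psi = {e: 1.0 for e in cycle}              # capacities
assert cut_ok(phi, psi)                    # e.g. alpha = 0.5 on each edge works

path = [(0, 1), (1, 2)]                    # a directed path has no cycle,
assert not cut_ok({(0, 1): 1.0},           # so a forced positive lower bound
                  {e: 1.0 for e in path})  # violates the cut condition
```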

### Remark 4.5

As long as we restrict our attention to circulations $$\alpha$$ that are absolutely continuous with respect to a given measure $$\psi \in {\mathfrak {M}}_+({\mathcal {A}}^2)$$, we can define them as functions, considering the Radon–Nikodym derivative $$f=d\alpha /d\psi$$. Then f is a $$\psi$$-integrable function satisfying

\begin{aligned} \int \limits _{A\times A^c} f\,d\psi = \int \limits _{A^c\times A} f\,d\psi \end{aligned}

for all $$A\in {\mathcal {A}}$$. The value f(xy) can be interpreted as the flow value on the edge xy. The marginals of $$\alpha$$, meaning the flow in and out of a point, could also be defined using a disintegration of $$\psi$$. However, this definition of circulation would depend on the measure $$\psi$$, while our definition above does not depend on any such parameter.

Similar remarks apply to notions like flows below, and will not be repeated.

#### 4.2.3 Optimal circulations

If a feasible circulation exists, we may be interested in finding a feasible circulation $$\mu$$ which minimizes a “cost”, or maximizes a “value” $$\mu (v)$$, given by a bounded measurable function v on $$J\times J$$. Equivalently, we want to characterize when a value of 1 (say) can be achieved. This cannot be characterized in terms of cut conditions any more, but an elegant necessary and sufficient condition can still be formulated.

### Theorem 4.6

Given a bounded measurable function $$v:~J\times J\rightarrow {\mathbb {R}}_+$$ and measures $$\varphi ,\psi \in {\mathfrak {M}}_+({\mathcal {A}}^2)$$, $$\varphi \le \psi$$, there is a circulation $$\alpha$$ with $$\varphi \le \alpha \le \psi$$ and $$\alpha (v)=c$$ if and only if the following three conditions are satisfied for every potential F:

\begin{aligned}&\psi (|F+v|_+) \ge \varphi (|F+v|_-) + c, \end{aligned}
(15)
\begin{aligned}&\psi (|F-v|_+) \ge \varphi (|F-v|_-) - c, \end{aligned}
(16)
\begin{aligned}&\psi (|F|_+) \ge \varphi (|F|_-). \end{aligned}
(17)

Condition (17) is equivalent to the condition given for the existence of a circulation in Theorem 4.4, which is obtained when $$F(x,y)={\mathbb {1}}_X(x)-{\mathbb {1}}_X(y)$$. If $$\varphi =0$$, then only (15) is nontrivial. Applying the conditions with $$F=0$$ we get that $$\varphi (v)\le c\le \psi (v)$$.

### Proof

We may assume that $$c=1$$. The necessity of the condition is trivial: if such a circulation $$\alpha$$ exists, then

\begin{aligned} \psi (|F+v|_+) - \varphi (|F+v|_-) \ge \alpha (|F+v|_+) - \alpha (|F+v|_-)= \alpha (F+v) = \alpha (v) = 1, \end{aligned}

and a similar calculation proves the other two conditions.

To prove the converse, we proceed along similar lines as in the proof of Theorem 4.4. Consider the subspace $${\mathfrak {C}}\subseteq {\mathfrak {M}}({\mathcal {A}}^2)$$ of circulations, the affine hyperplane $${\mathfrak {H}}=\{\alpha \in {\mathfrak {M}}({\mathcal {A}}^2):~ \alpha (v)=1\}$$ and the “box” $${\mathfrak {X}}=\{\alpha \in {\mathfrak {M}}({\mathcal {A}}^2):~ \varphi \le \alpha \le \psi \}$$. We want to prove that $${\mathfrak {C}}\cap {\mathfrak {H}}\cap {\mathfrak {X}}\not =\emptyset$$.

Clearly the sets $${\mathfrak {C}}$$, $${\mathfrak {H}}$$ and $${\mathfrak {X}}$$ are nonempty. Fix an $$\varepsilon >0$$, and replace them by their $$\varepsilon$$-neighborhoods $${\mathfrak {C}}'=\{\mu \in {\mathfrak {M}}({\mathcal {A}}^2):~d_\mathrm{tv}(\mu ,{\mathfrak {C}})<\varepsilon \}$$ etc. We start with proving the weaker statement that

\begin{aligned} {\mathfrak {C}}'\cap {\mathfrak {H}}'\cap {\mathfrak {X}}'\not =\emptyset . \end{aligned}
(18)

Suppose not. Then Lemma 3.4 implies that there are bounded linear functionals $${\mathcal {L}}_1,{\mathcal {L}}_2,{\mathcal {L}}_3$$ on $${\mathfrak {M}}({\mathcal {A}}^2)$$, not all zero, and real numbers $$a_1,a_2,a_3$$ such that $${\mathcal {L}}_1+{\mathcal {L}}_2+{\mathcal {L}}_3=0$$, $$a_1+a_2+a_3=0$$, and $${\mathcal {L}}_i(\mu )\ge a_i$$ for all $$\mu \in {\mathfrak {C}}'$$, $${\mathfrak {H}}'$$ and $${\mathfrak {X}}'$$, respectively, and $${\mathcal {L}}_i(\mu )>a_i$$ for at least one i.

The functional $${\mathcal {L}}_1$$ is bounded from below on $${\mathfrak {C}}$$, and since $${\mathfrak {C}}$$ is a linear subspace, this implies that

\begin{aligned} {\mathcal {L}}_1(\alpha )=0 \qquad (\alpha \in {\mathfrak {C}}). \end{aligned}
(19)

By similar reasoning, $${\mathcal {L}}_2$$ must be equal to a constant b on the hyperplane $${\mathfrak {H}}$$; we may scale $${\mathcal {L}}_1$$, $${\mathcal {L}}_2$$ and $${\mathcal {L}}_3$$ so that $$b\in \{-1,0,1\}$$. It is easy to see that this implies the more general formula

\begin{aligned} {\mathcal {L}}_2(\mu ) = b\mu (v)\qquad (\mu \in {\mathfrak {M}}({\mathcal {A}}^2)). \end{aligned}
(20)

Finally, we can express $${\mathcal {L}}_3$$ as

\begin{aligned} {\mathcal {L}}_3(\mu )=-{\mathcal {L}}_1(\mu )-{\mathcal {L}}_2(\mu )\quad (\mu \in {\mathfrak {M}}({\mathcal {A}}^2)). \end{aligned}
(21)

Using the representation (12), we can write

\begin{aligned} {\mathcal {L}}_1(\mu ) = \mu (F)\qquad (0\le \mu \le \psi ) \end{aligned}
(22)

with some potential F on $$J\times J$$. Hence

\begin{aligned} {\mathcal {L}}_3(\mu )=-\mu (F)-b\mu (v)=-\mu (F+bv) \qquad (0\le \mu \le \psi ). \end{aligned}

We also know that for any $$\alpha \in {\mathfrak {C}}$$, $$\nu \in {\mathfrak {H}}$$ and $$\mu \in {\mathfrak {X}}$$, we have

\begin{aligned} 0=a_1+a_2+a_3<{\mathcal {L}}_1(\alpha )+{\mathcal {L}}_2(\nu )+{\mathcal {L}}_3(\mu ) = 0+b+{\mathcal {L}}_3(\mu ) = b-\mu (F+bv), \end{aligned}

and hence $$\mu (F+bv)<b$$ for all $$\mu \in {\mathfrak {X}}$$.

The tightest choice for $$\mu \in {\mathfrak {X}}$$ is $$\mu =\psi _U+\varphi _{U^c}$$, where $$U=\{(x,y):~F(x,y)+bv(x,y)\ge 0\}$$. This gives that

\begin{aligned} \psi (|F+bv|_+) - \varphi (|F+bv|_-) = \psi _U(F+bv) + \varphi _{U^c}(F+bv) = \mu (F+bv) <b. \end{aligned}

This contradicts one of the conditions in the theorem (depending on b). This proves (18).

To prove the stronger statement that $${\mathfrak {C}}\cap {\mathfrak {H}}\cap {\mathfrak {X}}\not =\emptyset$$, note that since (18) holds for every $$\varepsilon >0$$, there are sequences of measures $$\alpha _n\in {\mathfrak {C}}$$, $$\nu _n\in {\mathfrak {H}}$$ and $$\mu _n\in {\mathfrak {X}}$$ such that $$d_\mathrm{tv}(\mu _n,\alpha _n)\rightarrow 0$$ and $$d_\mathrm{tv}(\mu _n,\nu _n)\rightarrow 0$$. Furthermore, since $$0\le \mu _n\le \psi$$, Lemma 3.1 applies, and so there is a measure $$\mu \in {\mathfrak {X}}$$ such that for an appropriate infinite subsequence of indices, $$\mu _n(U)\rightarrow \mu (U)$$ for all $$U\in {\mathcal {A}}^2$$. This implies that $$\alpha _n(U)\rightarrow \mu (U)$$ and $$\nu _n(U)\rightarrow \mu (U)$$ for this subsequence.

Thus

\begin{aligned} \mu (A\times A^c) = \lim _{n\rightarrow \infty } \alpha _n(A\times A^c) = \lim _{n\rightarrow \infty } \alpha _n(A^c\times A) = \mu (A^c\times A) \end{aligned}

for every $$A\in {\mathcal {A}}$$, so $$\mu \in {\mathfrak {C}}$$. Similarly, by Lemma 3.1, $$\mu (v) = \lim _{n\rightarrow \infty }\nu _n(v) =1$$, whence $$\mu \in {\mathfrak {H}}$$. $$\square$$

A straightforward application of Theorem 4.6 allows us to answer a question about the existence of Markov spaces, where an upper bound on the ergodic circulation is prescribed.

### Corollary 4.7

Given a measure $$\psi \in {\mathfrak {M}}_+({\mathcal {A}}^2)$$, there exists an ergodic circulation $$\eta$$ such that $$\eta \le \psi$$ if and only if every potential $$F:~J\times J\rightarrow {\mathbb {R}}$$ satisfies

\begin{aligned} \psi (|1+F|_+) \ge 1. \end{aligned}

#### 4.2.4 Integrality.

In the case when $$v\equiv 1$$ and $$\varphi \equiv 0$$, the condition in Corollary 4.7 implies that

\begin{aligned} \psi (A\times A^c)-\psi (A^c\times A)\le \psi (J\times J)-1 \qquad (A\in {\mathcal {A}}). \end{aligned}

One may wonder whether, at least in this special case, such a cut condition is also sufficient in Corollary 4.7. This, however, fails even in the finite case: on the directed path of length 2 where the edges have capacity 1, these cut conditions for the existence of an ergodic circulation are satisfied, but the only feasible circulation is the 0-circulation.
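The arithmetic behind this counterexample can be checked mechanically (vertex names are ours): on the path, every cut satisfies the displayed inequality, yet the graph contains no cycle, so the only circulation below $$\psi$$ is 0.

```python
from itertools import chain, combinations

V = ['s', 'm', 't']
psi = {('s', 'm'): 1.0, ('m', 't'): 1.0}   # directed path, unit capacities
total = sum(psi.values())                  # psi(J x J) = 2

# psi(A x A^c) - psi(A^c x A) <= psi(J x J) - 1 for every subset A
subsets = chain.from_iterable(combinations(V, r) for r in range(4))
for A in map(set, subsets):
    Ac = set(V) - A
    out_cut = sum(psi.get((u, v), 0) for u in A for v in Ac)
    in_cut = sum(psi.get((u, v), 0) for u in Ac for v in A)
    assert out_cut - in_cut <= total - 1
```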

However, the following weaker requirement can be imposed on F:

### Supplement 4.8

In Theorem 4.6, if the function v has only integral values, then it suffices to require conditions (15)–(17) for potentials F having integral values.

This property of F is clearly equivalent to saying that in the representation $$F(x,y)=f(x)-f(y)$$, the function f can be required to have integral values. For finite graphs, this assertion follows easily from the fact that the matrix of flow conditions is totally unimodular. In the infinite case, we have to use another proof.

### Proof

Suppose that there is a potential $$F(x,y)=f(x)-f(y)$$ violating (say) (15). Let $$S=\{(x,y):~F(x,y)+v(x,y)>0\}$$. Consider the modified potentials $$\widehat{F}(x,y)=\lfloor f(x)\rfloor -\lfloor f(y)\rfloor$$ and $$\widetilde{F}(x,y)=\langle f(x)\rangle -\langle f(y)\rangle$$, where $$\langle t\rangle = t-\lfloor t\rfloor$$ denotes the fractional part of the real number t. We claim that

\begin{aligned} \psi (|F+v|_+) - \varphi (|F+v|_-) = \psi (|\widehat{F}+v|_+) - \varphi (|\widehat{F}+v|_-) + \psi _S(\widetilde{F}) + \varphi _{S^c}(\widetilde{F}). \end{aligned}
(23)

Indeed, note that for $$(x,y)\in S$$ we have $$\widehat{F}(x,y)+v(x,y)\ge 0$$, and for $$(x,y)\notin S$$ we have $$\widehat{F}(x,y)+v(x,y)\le 0$$. Hence

\begin{aligned} \psi (|F+v|_+)= \psi _S(F+v) = \psi _S(\widehat{F}+v) + \psi _S(\widetilde{F})&= \psi (|\widehat{F}+v|_+) + \psi _S(\widetilde{F}). \end{aligned}

Similarly,

\begin{aligned} \varphi (|F+v|_-) = \varphi (|\widehat{F}+v|_-) - \varphi _{S^c}(\widetilde{F}). \end{aligned}

This proves (23).

Replacing f by $$f+a$$ with any real constant a, the potential F and the set S do not change, but the potentials $$\widehat{F}_a(x,y)=\lfloor f(x)+a\rfloor -\lfloor f(y)+a\rfloor$$ and $$\widetilde{F}_a(x,y) = \langle f(x)+a\rangle -\langle f(y)+a\rangle$$ do depend on a. We have

\begin{aligned} \psi (|F+v|_+) - \varphi (|F+v|_-) = \psi (|\widehat{F}_a+v|_+)- \varphi (|\widehat{F}_a+v|_-) +\psi _S(\widetilde{F}_a)+\varphi _{S^c}(\widetilde{F}_a). \end{aligned}

Choosing a randomly and uniformly from [0, 1], the expectation of the last two terms is 0, since $$\mathsf{E}(\langle f(x)+a\rangle ) = 1/2$$ for any x, and so $$\mathsf{E}(\widetilde{F}_a(x,y)) =0$$ for all x and y. Thus

\begin{aligned} \psi (|F+v|_+) - \varphi (|F+v|_-) =\mathsf{E}\bigl (\psi (|\widehat{F}_a+v|_+) - \varphi (|\widehat{F}_a+v|_-)\bigr ). \end{aligned}

This implies that there is an $$a\in [0,1]$$ for which

\begin{aligned} \psi (|F+v|_+) - \varphi (|F+v|_-) \ge \psi (|\widehat{F}_a+v|_+) - \varphi (|\widehat{F}_a+v|_-). \end{aligned}

So replacing f by $$\lfloor f+a\rfloor$$, we get an integer valued potential that violates condition (15) even more, which proves the Supplement. $$\square$$

We can give a more combinatorial reformulation of Corollary 4.7.

### Corollary 4.9

Given a measure $$\psi \in {\mathfrak {M}}_+({\mathcal {A}}^2)$$, there exists an ergodic circulation $$\eta$$ such that $$\eta \le \psi$$ if and only if for every partition $$J=S_1\cup \dots \cup S_k$$ into a finite number of Borel sets

\begin{aligned} \sum _{1\le i\le j\le k} (j-i+1) \psi (S_j\times S_i) \ge 1. \end{aligned}

The (insufficient) cut condition discussed above corresponds to the case when $$k=2$$.

### Proof

Let $$F(x,y)=f(x)-f(y)$$ be a bounded integral valued potential. We may assume that f is integral valued and $$1\le f\le k$$ for some integer k. Then the sets $$S_i=\{x\in J:~f(x)=i\}$$ $$(i=1,\dots ,k)$$ form a partition of J. For $$x\in S_j$$ and $$y\in S_i$$, we have

\begin{aligned} |F(x,y)+1|_+ = {\left\{ \begin{array}{ll} j-i+1, &{} \text {if }i\le j, \\ 0, &{} \text {otherwise}. \end{array}\right. } \end{aligned}

Thus the condition in Corollary 4.7 is equivalent to the condition in Corollary 4.9. $$\square$$
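For the directed path discussed in Section 4.2.4, the partition into singleton level sets (a labelling of ours) already witnesses the failure of the condition in Corollary 4.9, even though all $$k=2$$ cut conditions hold:

```python
# directed path s -> m -> t with unit capacities (hypothetical toy data)
psi = {('s', 'm'): 1.0, ('m', 't'): 1.0}
level = {'s': 1, 'm': 2, 't': 3}          # S_i = {x : level(x) = i}, k = 3

# sum over 1 <= i <= j <= k of (j - i + 1) psi(S_j x S_i): only pairs going
# from a higher level to a lower-or-equal one contribute
lhs = sum(w * (level[x] - level[y] + 1)
          for (x, y), w in psi.items()
          if level[x] >= level[y])
assert lhs < 1                            # the condition of Corollary 4.9 fails
```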

### 4.3 Flows.

Let $$\sigma ,\tau \in {\mathfrak {M}}({\mathcal {A}})$$ be two measures with $$\sigma (J)=\tau (J)$$. We consider $$\sigma$$ the “supply” and $$\tau$$, the “demand”. We call a measure $$\varphi \in {\mathfrak {M}}_+({\mathcal {A}}^2)$$ a flow from $$\sigma$$ to $$\tau$$, or briefly a $$\sigma$$-$$\tau$$ flow, if $$\varphi ^1-\varphi ^2=\sigma -\tau$$. We may assume, if convenient, that the supports of $$\sigma$$ and $$\tau$$ are disjoint, since subtracting $$\sigma \wedge \tau$$ from both does not change their difference. If this is the case, we call $$\sigma (J)=\tau (J)$$ the value of the flow.

Given two points $$s,t\in J$$, a measure $$\varphi$$ on $${\mathcal {A}}^2$$ such that $$\varphi ^1-\varphi ^2 = a(\delta _s-\delta _t)$$ will be called an s-t flow of value a. So $$\varphi$$ is a flow serving supply $$a\delta _s$$ and demand $$a\delta _t$$.

Note that every measure $$\varphi \in {\mathfrak {M}}_+({\mathcal {A}}^2)$$ is a flow from $$\varphi ^1$$ to $$\varphi ^2$$, and also a flow from $$\varphi ^1\setminus \varphi ^2$$ to $$\varphi ^2\setminus \varphi ^1$$. But we are usually interested in starting with the supply and the demand, and constructing appropriate flows. We may require $$\varphi$$ to be acyclic, since subtracting a circulation does not change $$\varphi ^1-\varphi ^2$$.

As before, we may also be given a nonnegative measure $$\psi$$ on $${\mathcal {A}}^2$$ (the “edge capacity”). We call a flow $$\varphi$$ feasible, if $$\varphi \le \psi$$.

#### 4.3.1 Max-Flow-Min-Cut and Supply-Demand.

These fundamental theorems follow from the results on circulations by the same tricks as in the finite case.

### Theorem 4.10

(Max-Flow-Min-Cut). Given a capacity measure $$\psi \in {\mathfrak {M}}_+({\mathcal {A}}^2)$$ and two points $$s,t\in J$$, there is a feasible s-t flow of value 1 if and only if $$\psi (A\times A^c)\ge 1$$ for every $$A\in {\mathcal {A}}$$ with $$s\in A$$ and $$t\notin A$$.

### Proof

For every feasible flow $$\phi \le \psi$$ of value 1, the measure $$\phi +\delta _{ts}$$ is a circulation such that $$\delta _{ts}\le \phi +\delta _{ts}\le \psi +\delta _{ts}$$. Conversely, for every circulation $$\alpha$$ with $$\delta _{ts}\le \alpha \le \psi +\delta _{ts}$$, the measure $$\alpha -\delta _{ts}$$ is a feasible s-t flow of value 1. The conditions in Theorem 4.4 on the existence of such a circulation are trivial except for the second condition for sets X with $$t\in X$$ and $$s\notin X$$; writing $$A=X^c$$, this gives the condition in the theorem. $$\square$$
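In the finite case, Theorem 4.10 specializes to the classical Max-Flow-Min-Cut Theorem, which can be verified computationally. The sketch below (toy graph and capacities are ours) computes a maximum flow with the textbook Ford–Fulkerson augmenting-path method and compares it with a brute-force minimum cut:

```python
from itertools import chain, combinations

V = ['s', 'a', 'b', 't']
psi = {('s', 'a'): 2.0, ('s', 'b'): 1.0, ('a', 'b'): 1.0,
       ('a', 't'): 1.0, ('b', 't'): 2.0}

def max_flow(cap, s, t):
    # Ford-Fulkerson with DFS augmenting paths on the residual graph
    res = dict(cap)
    def augment():
        stack, seen, parent = [s], {s}, {}
        while stack:
            u = stack.pop()
            if u == t:
                break
            for v in V:
                if v not in seen and res.get((u, v), 0) > 1e-12:
                    seen.add(v); parent[v] = u; stack.append(v)
        if t not in seen:
            return 0.0
        path, v = [], t                    # trace the path back to s
        while v != s:
            path.append((parent[v], v)); v = parent[v]
        eps = min(res[e] for e in path)    # bottleneck residual capacity
        for (u, v) in path:
            res[u, v] -= eps
            res[v, u] = res.get((v, u), 0) + eps
        return eps
    total = 0.0
    while True:
        eps = augment()
        if eps == 0.0:
            return total
        total += eps

# brute-force minimum of psi(A x A^c) over cuts with s in A, t not in A
inner = [v for v in V if v not in ('s', 't')]
cuts = chain.from_iterable(combinations(inner, r)
                           for r in range(len(inner) + 1))
min_cut = min(sum(psi.get((u, v), 0)
                  for u in {'s', *X} for v in set(V) - {'s', *X})
              for X in cuts)
assert abs(max_flow(psi, 's', 't') - min_cut) < 1e-9
```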

The more general Supply-Demand Theorem can be stated as follows.

### Theorem 4.11

Let $$\psi \in {\mathfrak {M}}_+({\mathcal {A}}^2)$$, and let $$\sigma ,\tau \in {\mathfrak {M}}_+({\mathcal {A}})$$ with $$\sigma (J)=\tau (J)$$. Then there is a feasible $$\sigma$$-$$\tau$$ flow if and only if $$\psi (S\times S^c)\ge \sigma (S)-\tau (S)$$ for every $$S\in {\mathcal {A}}$$.

### Proof

We may assume that $$\sigma (J)=\tau (J)=1$$. Add two new points s and t to J, and extend $${\mathcal {A}}$$ to a sigma-algebra $${\mathcal {A}}'$$ on $$J'=J\cup \{s,t\}$$ generated by $${\mathcal {A}}$$, $$\{s\}$$ and $$\{t\}$$. Define a new capacity measure $$\psi '$$ by

\begin{aligned} \psi '(X)= {\left\{ \begin{array}{ll} \psi (X), &{} \text {if }X\subseteq J\times J, \\ \sigma (Y), &{} \text {if }X= \{s\}\times Y\text { with }Y\subseteq J, \\ \tau (Y), &{} \text {if }X= Y\times \{t\}\text { with }Y\subseteq J,\\ 0, &{} \text {if }X\subseteq (\{t\}\times J) \cup (J\times \{s\}) \cup \{st,ts\}, \end{array}\right. } \end{aligned}

and extend it to all Borel sets by additivity. For every feasible $$\sigma$$-$$\tau$$ flow $$\phi$$ on $$(J,{\mathcal {A}})$$, the measure $$\phi +\psi '_{\{s\}\times J}+\psi '_{J\times \{t\}}$$ is a feasible s-t flow of value 1. Conversely, for every feasible s-t flow of value 1, its restriction to the original space $$(J,{\mathcal {A}})$$ is a feasible $$\sigma$$-$$\tau$$ flow. Applying the condition in the Max-Flow-Min-Cut Theorem completes the proof. $$\square$$

The measure-theoretic Max-Flow-Min-Cut Theorem is closely related to a result of Laczkovich [18], who works in the function setting. He also states an integrality result, which is in a sense dual to our integrality result in Section 4.2.4.

A condition for the minimum cost of a feasible $$\sigma$$-$$\tau$$ flow of a given value can be derived from Theorem 4.6 using the same kind of constructions as in the proof above. This gives the following result.

### Theorem 4.12

Given a bounded measurable “cost” function $$v:~J\times J\rightarrow {\mathbb {R}}_+$$, a “capacity” measure $$\psi \in {\mathfrak {M}}_+({\mathcal {A}}^2)$$ and “supply-demand” measures $$\sigma ,\tau \in {\mathfrak {M}}_+({\mathcal {A}})$$ with $$\sigma (J)=\tau (J)$$, there is a feasible $$\sigma$$-$$\tau$$ flow $$\varphi$$ with $$\varphi (v)=1$$ if and only if

\begin{aligned} \psi (|f(y)-f(x)+bv(x,y)|_+) \ge \tau (f)-\sigma (f)+b \end{aligned}
(24)

for every bounded measurable function $$f:~J\rightarrow {\mathbb {R}}$$ and $$b\in \{-1,0,1\}$$. $$\square$$

#### 4.3.2 Transshipment.

An optimization problem closely related to flows is the transshipment problem. In its simplest measure-theoretic version, we are given two measures $$\alpha ,\beta \in {\mathfrak {M}}_+({\mathcal {A}})$$ with $$\alpha (J)=\beta (J)$$. An $$\alpha$$-$$\beta$$ transshipment is a measure $$\mu \in {\mathfrak {M}}_+({\mathcal {A}}\times {\mathcal {A}})$$ coupling $$\alpha$$ and $$\beta$$; in other words, $$\mu ^1=\alpha$$ and $$\mu ^2=\beta$$. Note the difference from the notion of an $$\alpha$$-$$\beta$$ flow: there, only the difference $$\mu ^1-\mu ^2$$ is prescribed. In transshipment problems, one can think of $$J\times J$$ as the edge set of a (complete) bipartite graph whose color classes are the two copies of J. This observation can be used to derive the following result from the Supply-Demand Theorem 4.11:

### Theorem 4.13

Let $$(J,{\mathcal {A}})$$ be a standard Borel space, and $$\alpha ,\beta \in {\mathfrak {M}}_+({\mathcal {A}})$$ with $$\alpha (J)=\beta (J)$$. Let $$\psi \in {\mathfrak {M}}_+({\mathcal {A}}\times {\mathcal {A}})$$. Then there exists an $$\alpha$$-$$\beta$$ transshipment $$\mu$$ with $$\mu \le \psi$$ if and only if

\begin{aligned} \psi (S\times T)\ge \alpha (S)+\beta (T)-\alpha (J) \end{aligned}

for every $$S,T\in {\mathcal {A}}$$. $$\square$$

Suppose that every edge $$(x,y)\in J\times J$$ has a given cost $$c(x,y)\ge 0$$. We want to find a transshipment minimizing the cost $$\mu (c)$$. We note that the minimum is attained by Lemma 3.2.

### Theorem 4.14

Let $$(J,{\mathcal {A}})$$ be a standard Borel space, and $$\alpha ,\beta \in {\mathfrak {M}}_+({\mathcal {A}})$$ with $$\alpha (J)=\beta (J)$$. Let $$c:~J\times J\rightarrow {\mathbb {R}}_+$$ be a bounded measurable function. Then the minimum cost of an $$\alpha$$-$$\beta$$ transshipment is $$\sup _{g,h} \alpha (g)+\beta (h)$$, where g and h range over all bounded measurable functions $$J\rightarrow {\mathbb {R}}$$ satisfying $$g(x)+h(y)\le c(x,y)$$ for all $$x,y\in J$$.

The proof follows by an easy reduction to Theorem 4.12.
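In the finite case, Theorem 4.14 is the linear-programming duality behind optimal assignment. The toy check below uses a made-up cost matrix and hand-found potentials; taking both marginals to be the counting measure (each point has mass 1), the couplings are the doubly stochastic matrices, whose extreme points are permutation matrices by the Birkhoff-von Neumann theorem mentioned in Remark 4.16, so it suffices to minimize over permutation couplings.

```python
from itertools import permutations

# a made-up cost matrix on a 3-point space; both marginals are the
# counting measure, so couplings are doubly stochastic matrices
c = [[4, 1, 3],
     [2, 0, 5],
     [3, 2, 2]]
n = len(c)

# primal: minimum-cost coupling, minimized over permutation couplings only
primal = min(sum(c[i][p[i]] for i in range(n)) for p in permutations(range(n)))

# dual: potentials g, h with g[x] + h[y] <= c[x][y], found by hand
g, h = [1, 0, 1], [2, 0, 1]
assert all(g[x] + h[y] <= c[x][y] for x in range(n) for y in range(n))
dual = sum(g) + sum(h)               # alpha(g) + beta(h) for counting measures

assert dual <= primal                # weak duality, as in Theorem 4.14
assert dual == primal == 5           # this pair of potentials is optimal
```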

As a third variation on the Transshipment Problem, we ask for a transshipment supported on a specified set E of pairs. The following result is a slight generalization of a theorem of Strassen [26], and essentially equivalent to Proposition 3.8 of Kellerer [16]. See also [9]. It is also a rather straightforward generalization of Theorem 2.5.2 in [22]. The result could also be considered as a limiting case of Theorem 4.14, using the capacity “measure” with infinite values on E.

### Proposition 4.15

Let $$(J,{\mathcal {A}})$$ be a standard Borel space, and $$\alpha ,\beta \in {\mathfrak {M}}_+({\mathcal {A}})$$ with $$\alpha (J)=\beta (J)=1$$. Let $$E\in {\mathcal {A}}\times {\mathcal {A}}$$ be a Borel set such that $$J\times J\setminus E$$ is the union of a countable number of product sets $$A\times B$$ $$(A,B\in {\mathcal {A}})$$. Then there exists an $$\alpha$$-$$\beta$$ transshipment $$\mu$$ concentrated on E if and only if $$\alpha (S)+\beta (T)\le 1$$ for any two sets $$S,T\in {\mathcal {A}}$$ with $$(S\times T)\cap E=\emptyset$$.
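A finite sanity check of this condition (all numbers made up): we verify the inequality by brute force over all pairs of sets, and exhibit a hand-built coupling concentrated on E.

```python
from itertools import combinations, product

J = [0, 1, 2]
alpha = [0.5, 0.3, 0.2]
beta = [0.2, 0.3, 0.5]
E = {(x, y) for x, y in product(J, J) if x <= y}    # allowed support

def condition_holds():
    """alpha(S) + beta(T) <= 1 whenever (S x T) misses E."""
    for rs in range(len(J) + 1):
        for S in combinations(J, rs):
            for rt in range(len(J) + 1):
                for T in combinations(J, rt):
                    if all((x, y) not in E for x in S for y in T):
                        if sum(alpha[x] for x in S) + sum(beta[y] for y in T) > 1 + 1e-12:
                            return False
    return True

# a coupling concentrated on E with the prescribed marginals, found by hand
mu = {(0, 0): 0.2, (0, 1): 0.3, (1, 2): 0.3, (2, 2): 0.2}
assert set(mu) <= E
assert all(abs(sum(w for (x, y), w in mu.items() if x == a) - alpha[a]) < 1e-12 for a in J)
assert all(abs(sum(w for (x, y), w in mu.items() if y == b) - beta[b]) < 1e-12 for b in J)
assert condition_holds()
```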

### Remark 4.16

In the finite case, the fundamental Birkhoff–von Neumann Theorem describes the extreme points of the convex polytope formed by doubly stochastic matrices: these are exactly the permutation matrices, or in the language of bipartite graphs, perfect matchings. One generalization of this problem to the measurable case is to consider the set of coupling measures between two copies of a probability space $$(J,{\mathcal {A}},\pi )$$, forming a convex set in $${\mathfrak {M}}_+({\mathcal {A}}^2)$$. What are the extreme points (coupling measures) of this convex set? Unfortunately, these extreme points seem to be too complex for an explicit description. See [19] for several examples.

#### 4.3.3 Path decomposition.

In finite graph theory, it is often useful to decompose an s-t flow into a convex combination of flows along single paths from s to t and circulations along cycles. We will also need a generalization of this construction to measurable spaces.

Let $$K=J\cup J^2\cup J^3\cup \dots$$ be the set of all finite nonempty sequences of points of J; we also call these walks. The set K is endowed with the sigma-algebra $${\mathcal {B}}={\mathcal {A}}\oplus {\mathcal {A}}^2 \oplus \dots$$. Let K(s, t) be the subset of K consisting of walks starting at s and ending at t ($$s,t\in J$$); such a walk is called an s-t walk.

Let $$\tau \in {\mathfrak {M}}_+({\mathcal {B}})$$. For $$Q=(u^0,u^1,\dots ,u^m)\in K$$, let $$Q'=(u^0,\dots ,u^{m-1})$$, $$V(Q)=\{u^0,\dots ,u^m\}$$, $$E(Q)=\{u^0u^1,u^1u^2,\dots ,u^{m-1}u^m\}$$, and $$Z(Q)=\{u^0u^m\}$$. Define

\begin{aligned} V(\tau )(X)&= \int \limits _K |V(Q')\cap X|\,d\tau (Q) \qquad (X\in {\mathcal {A}}),\\ E(\tau )(Y)&= \int \limits _K |E(Q)\cap Y|\,d\tau (Q) \qquad (Y\in {\mathcal {A}}^2),\\ Z(\tau )(Y)&= \int \limits _K |Z(Q)\cap Y|\,d\tau (Q) \qquad (Y\in {\mathcal {A}}^2). \end{aligned}

Then $$V(\tau )$$ is a measure on $${\mathcal {A}}$$, and $$E(\tau )$$ and $$Z(\tau )$$ are measures on $${\mathcal {A}}^2$$. The measure $$Z(\tau )$$ is finite, but $$V(\tau )$$ and $$E(\tau )$$ may be infinite at this point. If $$\tau$$ is a probability measure and we follow a random walk drawn from $$\tau$$, then $$V(\tau )(X)$$ is the expected number of times we exit a point in X (so the starting point counts, but the last point does not), and $$E(\tau )(Y)$$ is the expected number of times we traverse an edge in Y. Mapping each walk $$W\in K$$ to its first point, and pushing $$\tau$$ forward by this map, we get the measure $$Z(\tau )^1\in {\mathfrak {M}}({\mathcal {A}})$$. The measure $$Z(\tau )^2$$ is characterized analogously by mapping each walk to its last point. It is easy to see that $$E(\tau )$$ is a flow from $$Z(\tau )^1$$ to $$Z(\tau )^2$$.
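On a finite space these operators are simple bookkeeping. A minimal sketch (walks and weights made up; points and edges are counted with multiplicity, matching the "expected number of exits/traversals" interpretation), ending with a check that $$E(\tau )$$ is a flow from $$Z(\tau )^1$$ to $$Z(\tau )^2$$:

```python
from collections import Counter

# a finite measure tau on walks: each walk gets a weight
tau = {
    (0, 1, 2): 0.5,      # walk 0 -> 1 -> 2
    (0, 2): 0.3,         # walk 0 -> 2
    (1, 1, 0): 0.2,      # walks may repeat points
}

V, E, Z = Counter(), Counter(), Counter()
for Q, w in tau.items():
    for u in Q[:-1]:                 # Q' = all points but the last
        V[u] += w
    for e in zip(Q, Q[1:]):          # consecutive pairs = edges of the walk
        E[e] += w
    Z[(Q[0], Q[-1])] += w            # the endpoint pair

# E(tau) is a flow from Z(tau)^1 to Z(tau)^2: at every point x,
# outflow - inflow = (mass of walks starting at x) - (mass ending at x)
for x in {0, 1, 2}:
    net = sum(w for (u, v), w in E.items() if u == x) \
        - sum(w for (u, v), w in E.items() if v == x)
    starts = sum(w for (u, v), w in Z.items() if u == x)
    ends = sum(w for (u, v), w in Z.items() if v == x)
    assert abs(net - (starts - ends)) < 1e-12
```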

### Theorem 4.17

For every acyclic measure $$\varphi \in {\mathfrak {M}}_+({\mathcal {A}}^2)$$ there is a finite measure $$\tau \in {\mathfrak {M}}_+({\mathcal {B}})$$ for which $$E(\tau )=\varphi$$.

We need a simple (folklore) fact about Markov chains.

### Lemma 4.18

Let $${\mathbf {G}}$$ be an indecomposable Markov space, and let $$S\in {\mathcal {A}}$$ have $$\pi (S)>0$$. Then for $$\pi$$-almost-all starting points x, a random walk started at x hits S almost surely.

### Proof

Let R be the set of starting points $$x\in J$$ for which the random walk starting at x avoids S with positive probability, and suppose that $$\pi (R)>0$$. Since clearly $$R\cap S=\emptyset$$, we also have $$\pi (R)<1$$. Hence $$\eta (R^c\times R)>0$$ by indecomposability, and so there must be a point $$x\in R^c$$ with $$P_x(R)>0$$. But this means that starting at x, the walk moves to R with positive probability, and then avoids S with positive probability, so we would have $$x\in R$$, a contradiction. $$\square$$
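For a finite irreducible chain, the lemma can be checked by iterating the substochastic kernel restricted to the complement of S: the avoidance probabilities decay geometrically to zero. A sketch with a made-up 3-state chain:

```python
# transition matrix of a small irreducible chain on {0, 1, 2}; take S = {2}
P = [[0.5, 0.5, 0.0],
     [0.25, 0.5, 0.25],
     [0.0, 0.5, 0.5]]
S = {2}

# q[x] = probability that a walk from x avoids S for n steps; one step of
# the recursion multiplies by the kernel restricted to the complement of S
q = [0.0 if x in S else 1.0 for x in range(3)]
for _ in range(200):
    q = [0.0 if x in S else
         sum(P[x][y] * q[y] for y in range(3) if y not in S)
         for x in range(3)]

assert max(q) < 1e-6     # the avoidance probability tends to 0: S is hit a.s.
```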

### Proof of Theorem 4.17

We start with the special case when $$\varphi$$ is an s-t flow for $$s,t\in J$$; we may scale it to have value 1. Just as in the proof of Theorem 4.10, we see that the measure $$\alpha =\varphi + \delta _{ts}$$ is a nonnegative circulation on $${\mathcal {A}}^2$$. Let $$a=\alpha (J\times J) = \varphi (J\times J)+1$$, then $$\eta =\alpha /a$$ is the ergodic circulation of a Markov space. The stationary distribution of this Markov space is $$\pi =\alpha ^1/a=\alpha ^2/a$$, and

\begin{aligned} \varphi ^1 = a\pi - \delta _t. \end{aligned}
(25)

It is easy to see that $$\varphi (\{(s,s)\})=0$$, since $$\xi =\varphi (\{(s,s)\})\delta _{\{(s,s)\}}$$ is a nonnegative circulation such that $$\xi \le \varphi$$, and since $$\varphi$$ is acyclic, we must have $$\xi =0$$.

### Claim 1

The Markov space $$({\mathcal {A}},\eta )$$ is indecomposable.

Indeed, suppose that there is a set $$A\in {\mathcal {A}}$$ with $$0<\pi (A)<1$$ and $$\eta (A\times A^c)=\eta (A^c\times A)=0$$. Clearly s and t either both belong to A or both belong to $$A^c$$; we may assume that $$s,t\in A^c$$. Then $$\varphi _{A\times A}$$ is a circulation, and $$\varphi =(\varphi -\varphi _{A\times A})+\varphi _{A\times A}$$ is a decomposition showing that $$\varphi$$ is not acyclic, contrary to the hypothesis.

To specify a probability distribution on s-t walks, we describe how to generate a random s-t walk: Start a random walk at s, and follow it until you hit t or return to s, whichever comes first. This happens almost surely by Lemma 4.18: the distribution $$\delta _s$$ is absolutely continuous with respect to $$\pi$$, and $$\pi (t)>0$$. This gives a probability distribution $$\tau$$ on the set $$K(s,\{s,t\})$$ of walks from s to $$\{s,t\}$$.

Let us stop the walk after k steps, or when it hits t, or when it returns to s, whichever comes first. This gives us a distribution $$\tau _k$$ over walks starting at s of length at most k. We claim that this distribution satisfies the following identity for every $$X\subseteq J\setminus \{s,t\}$$:

\begin{aligned} V(\tau _n)(X)= \int \limits _{J\setminus \{t\}} P_u(X)\,dV(\tau _{n-1})(u). \end{aligned}
(26)

Indeed, let $$\sigma _k(X)$$ $$(X\in {\mathcal {A}})$$ be the probability that starting at s, we walk k steps without hitting t or returning to s, and after k steps we are in X. It is clear that $$\sigma _0=\delta _s$$. It is also easy to see that for $$n\ge 1$$, we have $$V(\tau _n)=\sigma _0+\sigma _1+\dots +\sigma _{n-1}$$, and for $$X\subseteq J\setminus \{s,t\}$$,

\begin{aligned} \sigma _n(X)= \int \limits _{J\setminus \{t\}} P_u(X)\,d\sigma _{n-1}(u). \end{aligned}
(27)

Thus

\begin{aligned} V(\tau _n)(X) = \sum _{k=1}^{n-1} \sigma _k(X) = \sum _{k=1}^{n-1} \int \limits _{J\setminus \{t\}} P_u(X)\,d\sigma _{k-1}(u) = \int \limits _{J\setminus \{t\}} P_u(X)\,dV(\tau _{n-1})(u). \end{aligned}

This proves (26).

Next we show that

\begin{aligned} V(\tau _n) \le \varphi ^1\qquad (n\ge 1). \end{aligned}
(28)

We prove the inequality by induction on n. For $$n=1$$ it is obvious. Let $$n\ge 2$$. If $$s,t\notin X$$, then $$\sigma _0(X)=0$$, and so using (26) and (25),

\begin{aligned} V(\tau _n)(X)&= \int \limits _{J\setminus \{t\}} P_u(X)\,dV(\tau _{n-1})(u)\\&\le \int \limits _{J\setminus \{t\}} P_u(X)\,d\varphi ^1(u)\le a\int \limits _{J\setminus \{t\}} P_u(X)\,d\pi (u) \\&\le a\int \limits _J P_u(X)\,d\pi (u) = a\pi (X)= \varphi ^1(X). \end{aligned}

If $$t\in X$$ but $$s\notin X$$, then

\begin{aligned} V(\tau _n)(X) = V(\tau _n)(X\setminus \{t\}) \le \varphi ^1(X\setminus \{t\}) \le \varphi ^1(X). \end{aligned}

If $$s\in X$$, then (using that every random walk we constructed exits s only once)

\begin{aligned} V(\tau _n)(X) = 1+V(\tau _n)(X\setminus \{s\}) \le 1+\varphi ^1(X\setminus \{s\})\le \varphi ^1(X). \end{aligned}

Next, we consider $$E(\tau )$$, which is an s-t flow by the discussion before the theorem. It follows easily that

\begin{aligned} E(\tau _n) \le \varphi \qquad (n\ge 1). \end{aligned}
(29)

Indeed, for $$A,B\in {\mathcal {A}}$$,

\begin{aligned} E(\tau _n)(A\times B) = \int \limits _A P_u(B)\,dV(\tau _n)(u) \le \int \limits _A P_u(B)\,d\varphi ^1(u)=\varphi (A\times B). \end{aligned}

This implies that $$E(\tau _n)(X)\le \varphi (X)$$ for every $$X\in {\mathcal {A}}^2$$, proving (29).

### Claim 2

$$V(\tau _n)\rightarrow V(\tau )$$ in total variation distance.

Since clearly $$V(\tau _n)\le V(\tau )$$, we have $$d_\mathrm{tv}(V(\tau _n),V(\tau ))=V(\tau )(J)-V(\tau _n)(J)$$. Let $$p_n$$ be the probability that a random walk started at s first hits $$\{s,t\}$$ in exactly n steps. Then

\begin{aligned} V(\tau )(J)=\sum _{k=1}^\infty p_k\,k, \qquad \text {and}\qquad V(\tau _n)(J)=\sum _{k=1}^n p_k\,k. \end{aligned}

By (28), $$V(\tau _n)(J)\le \varphi ^1(J)<\infty$$, and hence the series representing $$V(\tau )(J)$$ is convergent. This proves the claim.

### Claim 3

The probability that a random walk started at s returns to s before hitting t is zero. So $$\tau$$ can be considered as a probability distribution on walks from s to t.

Indeed, we can split $$K(s,\{s,t\})=K(s,s)\cup K(s,t)$$. Define $$\rho =\tau _{K(s,s)}$$. Then $$E(\rho )\le E(\tau )\le \varphi$$ and it is easy to see that $$E(\rho )$$ is a circulation. Since $$\varphi$$ is acyclic, we must have $$\rho =0$$, and so $$\tau (K(s,s))=0$$.

Inequalities (28), (29) and Claim 2 imply that $$V(\tau )\le \varphi ^1$$ and $$E(\tau ) \le \varphi$$. To complete the proof, consider the measure $$\varphi -E(\tau )$$. This is a nonnegative circulation, and since $$\varphi$$ is acyclic, it follows that $$\varphi -E(\tau )=0$$. This proves the theorem for s-t flows.

The general case can be reduced to the special case of an s-t flow by the following construction, similar to that used in the proof of Theorem 4.11. Let $$\varphi \in {\mathfrak {M}}_+({\mathcal {A}}^2)$$ be an acyclic measure, let $$\sigma =\varphi ^1\setminus \varphi ^2$$ and $$\tau =\varphi ^2\setminus \varphi ^1$$, so that $$\varphi$$ is an acyclic $$\sigma$$-$$\tau$$ flow. Create two new points s and t, extend $${\mathcal {A}}$$ to a sigma-algebra $${\mathcal {A}}'$$ on $$J'=J\cup \{s,t\}$$ generated by $${\mathcal {A}}$$, $$\{s\}$$ and $$\{t\}$$, and extend the measure $$\varphi$$ to $$\varphi '\in {\mathfrak {M}}({\mathcal {A}}'\times {\mathcal {A}}')$$ by

\begin{aligned} \varphi '(X)= {\left\{ \begin{array}{ll} \varphi (X), &{} \text {if }X\subseteq J\times J, \\ \sigma (Y), &{} \text {if }X= \{s\}\times Y\text { with }Y\subseteq J, \\ \tau (Y), &{} \text {if }X= Y\times \{t\}\text { with }Y\subseteq J,\\ 0, &{} \text {if }X\subseteq (\{t\}\times J) \cup (J\times \{s\}) \cup \{st,ts\}. \end{array}\right. } \end{aligned}

It is easy to check that $$\varphi '$$ is an acyclic s-t flow. Applying the special case of the theorem to this s-t flow, we get a measure $$\kappa$$ on s-t paths in which the trivial path (st) has zero measure. So $$\kappa$$ defines a measure on nontrivial s-t paths, and since there is a natural bijection between these and walks in K, we get a measure on $$(K,{\mathcal {B}})$$. It is easy to check that this measure has the desired properties. $$\square$$
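The finite analogue of Theorem 4.17 is the classical decomposition of an acyclic s-t flow, obtained by repeatedly peeling off a path of positive flow. A sketch (names and values made up; termination relies on flow conservation and acyclicity):

```python
def decompose(phi, s, t):
    """Peel weighted s-t paths off a finite acyclic s-t flow.

    phi: dict {(u, v): positive flow value} satisfying flow conservation.
    Returns a dict {path tuple: weight} whose edge measure reproduces phi.
    """
    phi = dict(phi)
    paths = {}
    while phi:
        walk = [s]
        while walk[-1] != t:         # follow positive-flow edges; acyclicity
            u = walk[-1]             # guarantees we cannot loop forever
            walk.append(next(y for (x, y) in phi if x == u))
        edges = list(zip(walk, walk[1:]))
        b = min(phi[e] for e in edges)          # bottleneck of this path
        for e in edges:
            phi[e] -= b
            if phi[e] == 0:
                del phi[e]
        key = tuple(walk)
        paths[key] = paths.get(key, 0) + b
    return paths

phi = {('s', 'a'): 2, ('s', 'b'): 1, ('a', 'b'): 1, ('a', 't'): 1, ('b', 't'): 2}
tau = decompose(phi, 's', 't')

E = {}                               # E(tau) should reproduce phi
for Q, w in tau.items():
    for e in zip(Q, Q[1:]):
        E[e] = E.get(e, 0) + w
assert E == phi
```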

### Remark 4.19

Theorem 4.17 raises the question whether circulations have analogous decompositions. In finite graph theory, a circulation can be decomposed into a nonnegative linear combination of directed cycles. In the infinite case, we have to consider, in addition, directed paths infinite in both directions (see Example 4.1); but even so, the decomposition is not well understood.

Suppose that we have a nonnegative circulation $$\eta \not =0$$ on $${\mathcal {A}}$$. We may assume (by scaling) that it is a probability measure, so it is the ergodic circulation of a Markov space. From every point $$u\in J$$, we can start an infinite random walk $$(v^0=u, v^1,\dots )$$, and also an infinite random walk $$(v^0=u, v^{-1},\dots )$$ of the reverse chain. Choosing u from $$\pi$$, this gives us a probability distribution $$\beta$$ on rooted two-way infinite (possibly periodic) sequences, i.e., on $$J^{{\mathbb {Z}}}$$. However, it seems to be difficult to reconstruct the circulation $$\eta$$ from $$\beta$$.

## 5 Multicommodity measures

### 5.1 Metrical linear functionals.

A bounded linear functional $${\mathcal {D}}$$ on $${\mathfrak {M}}({\mathcal {A}}^2)$$ will be called metrical if it satisfies the following conditions:

(a) $${\mathcal {D}}(\mu )=0$$ for every measure $$\mu \in {\mathfrak {M}}({\mathcal {A}}^2)$$ concentrated on the diagonal $$\Delta =\{(x,x):~x\in J\}$$;

(b) $${\mathcal {D}}(\mu )={\mathcal {D}}(\mu ^*)$$ for every measure $$\mu \in {\mathfrak {M}}({\mathcal {A}}^2)$$;

(c) $${\mathcal {D}}(\kappa ^{12})+{\mathcal {D}}(\kappa ^{23})\ge {\mathcal {D}}(\kappa ^{13})$$ for every measure $$\kappa \in {\mathfrak {M}}_+({\mathcal {A}}^3)$$.

These conditions imply that $${\mathcal {D}}$$ is nonnegative on nonnegative measures. Indeed, for a measure $$\mu \in {\mathfrak {M}}_+({\mathcal {A}}^2)$$ and an arbitrary probability distribution $$\gamma$$ on $${\mathcal {A}}$$, define $$\kappa =(\mu +\mu ^*)\times \gamma$$. Then $$\kappa ^{12}=\mu +\mu ^*$$ and $$\kappa ^{13}=\kappa ^{23}= (\mu ^1+\mu ^2)\times \gamma$$. Applying (c), we get that $${\mathcal {D}}(\mu )+{\mathcal {D}}(\mu ^*)+ {\mathcal {D}}((\mu ^1+\mu ^2)\times \gamma )\ge {\mathcal {D}}((\mu ^1+\mu ^2)\times \gamma )$$, and (b) implies that $${\mathcal {D}}(\mu )\ge 0$$.

The name “metrical” refers to the fact that if $${\mathcal {D}}$$ is defined by a bounded measurable pseudometric r on J as $${\mathcal {D}}(\mu )=\mu (r)$$, then conditions (a)-(c) are satisfied. Conditions (a) and (b) are trivial, and condition (c) also follows easily:

\begin{aligned} {\mathcal {D}}(\kappa ^{12})+{\mathcal {D}}(\kappa ^{23})-{\mathcal {D}}(\kappa ^{13}) = \kappa ^{12}(r) + \kappa ^{23}(r) -\kappa ^{13}(r) = \kappa (r(x,y)+r(y,z)-r(x,z)) \ge 0. \end{aligned}
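On a finite space, condition (c) for a functional of the form $${\mathcal {D}}(\mu )=\mu (r)$$ is just the triangle inequality averaged against $$\kappa$$. A quick numerical check with a made-up metric and measure:

```python
# r = the path metric |x - y| on {0, 1, 2}; kappa a measure on triples
r = {(x, y): abs(x - y) for x in range(3) for y in range(3)}
kappa = {(0, 1, 2): 0.4, (2, 2, 0): 0.6}

# D applied to the three pair marginals of kappa
D12 = sum(w * r[(x, y)] for (x, y, z), w in kappa.items())
D23 = sum(w * r[(y, z)] for (x, y, z), w in kappa.items())
D13 = sum(w * r[(x, z)] for (x, y, z), w in kappa.items())
assert D12 + D23 >= D13 - 1e-12      # condition (c): triangle inequality
```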

Can every metrical linear functional $${\mathcal {D}}$$ be represented as $${\mathcal {D}}(\varphi )=\varphi (g)$$ with some pseudometric $$g:~J^2\rightarrow {\mathbb {R}}_+$$? I expect that the answer is negative, but perhaps the following is true:

### Conjecture 1

For every metrical linear functional $${\mathcal {D}}$$ on $${\mathfrak {M}}({\mathcal {A}}^2)$$ and every $$\psi \in {\mathfrak {M}}_+({\mathcal {A}}^2)$$ there is a pseudometric $$g:~J^2\rightarrow {\mathbb {R}}_+$$ such that $${\mathcal {D}}(\varphi )=\varphi (g)$$ for all measures $$\varphi \ll \psi$$.

The conjecture can be proved in several special cases, in particular, for measures $$\psi$$ defined by graphons and graphings (details will be published elsewhere).

We need a lemma relating metrical functionals and flows. Informally, the lemma expresses that in a flow, every particle must travel at least as much as the distance between its starting and ending points.

### Lemma 5.1

Let $${\mathcal {D}}$$ be a metrical linear functional on $${\mathfrak {M}}({\mathcal {A}}^2)$$, and let $$\tau \in {\mathfrak {M}}_+({\mathcal {B}})$$. Then $${\mathcal {D}}(E(\tau ))\ge {\mathcal {D}}(Z(\tau ))$$.

### Proof

Let $$\tau _k$$ denote the measure $$\tau$$ restricted to sequences in $${\mathcal {B}}$$ of length k $$(k\ge 1)$$. For $$0\le i_1<i_2<\dots<i_m<k$$, the measure $$\tau _k^{i_1\dots i_m}$$ is the marginal of $$\tau _k$$ on $$\{i_1,\dots , i_m\}\subseteq \{0,\dots ,k-1\}$$. For $$i\le j$$, let $$[i,j]=\{i,i+1,\dots ,j\}$$. Then $$Z(\tau ) = \sum _{k\ge 1} \tau _k^{0,k-1}$$.

We claim that

\begin{aligned} {\mathcal {D}}(E(\tau _k^{[i,j]}))\ge {\mathcal {D}}(E(\tau _k^{ij}))\qquad (0\le i<j<k). \end{aligned}
(30)

We use induction on $$j-i$$. For $$j-i=1$$ the assertion is trivial. Let $$j-i>1$$, and choose r with $$i<r<j$$. Then

\begin{aligned} E(\tau _k^{irj})^{23} = E(\tau _k^{rj}),\quad E(\tau _k^{irj})^{13} = E(\tau _k^{ij}),\quad E(\tau _k^{irj})^{12} = E(\tau _k^{ir}). \end{aligned}

Using that $${\mathcal {D}}$$ is metrical, this implies that

\begin{aligned} {\mathcal {D}}(E(\tau _k^{ir}))+{\mathcal {D}}(E(\tau _k^{rj})) \ge {\mathcal {D}}(E(\tau _k^{ij})). \end{aligned}

By induction, we know that $${\mathcal {D}}(E(\tau _k^{[i,r]}))\ge {\mathcal {D}}(E(\tau _k^{ir}))$$ and $${\mathcal {D}}(E(\tau _k^{[r,j]}))\ge {\mathcal {D}}(E(\tau _k^{rj}))$$. Using that $$E(\tau _k^{[i,r]})+E(\tau _k^{[r,j]})=E(\tau _k^{[i,j]})$$, we get

\begin{aligned} {\mathcal {D}}(E(\tau _k^{[i,j]})) = {\mathcal {D}}(E(\tau _k^{[i,r]}))+{\mathcal {D}}(E(\tau _k^{[r,j]})) \ge {\mathcal {D}}(E(\tau _k^{ir}))+{\mathcal {D}}(E(\tau _k^{rj})) \ge {\mathcal {D}}(E(\tau _k^{ij})). \end{aligned}

This proves (30). In particular, we have

\begin{aligned} {\mathcal {D}}(E(\tau _k))= {\mathcal {D}}(E(\tau _k^{[0,k-1]})) \ge {\mathcal {D}}(E(\tau _k^{0,k-1})) = {\mathcal {D}}(Z(\tau _k)). \end{aligned}
(31)

Thus

\begin{aligned} {\mathcal {D}}(E(\tau )) = \sum _{k=1}^\infty {\mathcal {D}}(E(\tau _k)) \ge \sum _{k=1}^\infty {\mathcal {D}}(Z(\tau _k)) ={\mathcal {D}}(Z(\tau )). \end{aligned}

$$\square$$

### 5.2 Multicommodity flows.

A multicommodity flow on a Borel space $${\mathcal {A}}$$ consists of a symmetric measure $$\sigma \in {\mathfrak {M}}_+({\mathcal {A}}^2)$$, and of a family of s-t flows $$\varphi _{st}$$ of value 1, one for each pair $$(s,t)\in J\times J$$. We require that $$\varphi _{st}(U)$$ is measurable as a function of $$(s,t)\in J\times J$$ for every $$U\in {\mathcal {A}}^2$$.

Since we are going to put only symmetric upper bounds (capacity constraints) on the sum of these flows, we may also require that each $$\varphi _{st}$$ is acyclic. A further requirement we can impose is that $$\varphi _{ts}=\varphi _{st}^*$$ (replacing each $$\varphi _{st}$$ by $$(\varphi _{st}+\varphi _{ts}^*)/2$$).

Such a multicommodity flow $$F=(\sigma ;~\varphi _{st}:~(s,t)\in J\times J)$$ defines a symmetric measure (the total load) by

\begin{aligned} \varphi _F(S)=\int \limits _{J\times J} \varphi _{xy}(S)\,d\sigma (x,y)\qquad (S\in {\mathcal {A}}^2). \end{aligned}

A trivial multicommodity flow is defined by $$\varphi _{st}=\delta _{st}$$ for any $$\sigma$$. The total load of this trivial multicommodity flow is $$\sigma$$.

If we are also given a symmetric “capacity” measure $$\psi \in {\mathfrak {M}}_+({\mathcal {A}}^2)$$, then we say that the multicommodity flow $$F=(\sigma ;~\varphi _{st})$$ is feasible, if $$\varphi _F\le \psi$$. Our question is: Given $$\psi$$ and $$\sigma$$, does there exist a feasible multicommodity flow? Our goal is to generalize the Multicommodity Flow Theorem.

To state our main result in this section, we need to relax the capacity constraint $$\varphi _F\le \psi$$, and define the overload over $$\psi$$ as $$\Vert \varphi _F\setminus \psi \Vert$$. In other words, this overload is less than $$\varepsilon$$ if there is a measure $$\psi '\in {\mathfrak {M}}_+({\mathcal {A}}^2)$$ such that $$\Vert \psi -\psi '\Vert <\varepsilon$$ and F is feasible with respect to $$\psi '$$.

### Theorem 5.2

(Multicommodity Flow Theorem for Measures). Let $$\sigma$$ and $$\psi$$ be symmetric measures on $${\mathcal {A}}^2$$. There is a feasible multicommodity flow for demands $$\sigma$$ with arbitrarily small overload over $$\psi$$ if and only if $${\mathcal {D}}(\sigma )\le {\mathcal {D}}(\psi )$$ for every metrical linear functional $${\mathcal {D}}$$ on $${\mathfrak {M}}_+({\mathcal {A}}^2)$$.

I don’t know whether allowing an arbitrarily small overload is needed (probably so). If Conjecture 1 above is true, then the condition $${\mathcal {D}}(\sigma )\le {\mathcal {D}}(\psi )$$ could be replaced by the more explicit condition that $$\sigma (d)\le \psi (d)$$ for every bounded Borel pseudometric d on J.

A cut-metric is perhaps the simplest nontrivial pseudometric, defined as $$d(x,y)={\mathbb {1}}_{A\times A^c}(x,y)+{\mathbb {1}}_{A^c\times A}(x,y)$$ for a fixed $$A\in {\mathcal {A}}$$. For cut-metrics, the condition $${\mathcal {D}}(\sigma )\le {\mathcal {D}}(\psi )$$ in the theorem gives that $$\sigma (A\times A^c)\le \psi (A\times A^c)$$. If the demand measure $$\sigma$$ is concentrated on a single pair $$\{s,t\}$$ of nodes (more exactly, on the two orderings of an unordered pair), then we recover the Max-Flow-Min-Cut Theorem (at least in the case of symmetric capacities). But in general, it does not suffice to apply the condition to cut-metrics only, even in the finite case.

#### 5.2.1 Formulation as a single measure.

We want to formulate the multicommodity flow problem in terms of a single measure; unfortunately, we have to go up to $${\mathcal {A}}^4$$. If $$\Phi \in {\mathfrak {M}}({\mathcal {A}}^4)$$, then we use the notation

\begin{aligned} \Phi ^*(T\times U)=\Phi (T^*\times U),\quad \Phi ^{**}(T\times U)=\Phi (T^*\times U^*), \quad \Phi ^{\circ *}(T\times U)=\Phi (T\times U^*). \end{aligned}

Every multicommodity flow $$(\sigma ;~\varphi _{st}:~s,t\in J)$$ defines a load measure $$\Phi$$ on $${\mathcal {A}}^4 = {\mathcal {A}}^2\times {\mathcal {A}}^2$$ by

\begin{aligned} \Phi (T\times U) = \int \limits _U \varphi _{st}(T)\,d\sigma (s,t). \end{aligned}

This number expresses how much load the subset of demands U puts on the edges in T. For the trivial solution $$\varphi _{st}=\delta _{st}$$ (sending the stuff directly from s to t) we get

\begin{aligned} \int \limits _U \delta _{xy}(T)\,d\sigma (x,y) = \sigma (T\cap U). \end{aligned}

Sometimes it will be convenient to consider the right hand side as a measure $$\sigma _\Delta (T\times U)=\sigma (T\cap U)$$ defined on $${\mathcal {A}}^4$$. Of course, this trivial solution is not feasible in general.

We can express the multicommodity flow problem in terms of this single measure $$\Phi$$. The condition that $$\varphi _{st}^*=\varphi _{ts}$$ can be expressed as $$\Phi (T\times U)=\Phi (T^*\times U^*)$$, or more compactly,

\begin{aligned} \Phi ^{**}=\Phi . \end{aligned}
(32)

The fact that $$\varphi _{st}-\delta _{st}$$ is a circulation implies that

\begin{aligned} \varphi _{st}^1(A) - \varphi _{st}^2(A)= \delta _{st}^1(A) -\delta _{st}^2(A) = \delta _s(A)-\delta _t(A) \qquad (A\in {\mathcal {A}}). \end{aligned}

Integrating over $$U\in {\mathcal {A}}^2$$ with respect to $$\sigma$$, we get that

\begin{aligned} \Phi ^{134}-\Phi ^{234} = {\overline{\sigma }}, \end{aligned}
(33)

where $${\overline{\sigma }}(A\times U) = \sigma ((A\times J)\cap U) - \sigma ((J\times A)\cap U)$$.

Finally, the feasibility conditions mean that $$\Phi \ge 0$$ and $$\Phi (A\times J\times J)\le \psi (A)$$, which, using our notation, can be expressed as

\begin{aligned} \Phi \ge 0, \qquad \Phi ^{12} \le \psi . \end{aligned}
(34)

Our next observation is that we can forget about condition (32). Indeed, suppose that $$\Phi \in {\mathfrak {M}}({\mathcal {A}}^4)$$ satisfies (33) and (34). Then the measure $$\Phi ^{**}$$ also satisfies these conditions, and the symmetrized measure $$\frac{1}{2}(\Phi +\Phi ^{**})$$ satisfies these conditions and, in addition, (32) as well.
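On a finite space, the encoding of a multicommodity flow as a single measure $$\Phi$$ and the marginal identity (33) can be verified directly. A sketch (demands and routes made up for illustration):

```python
J = [0, 1, 2]
sigma = {(0, 2): 0.5, (2, 0): 0.5}          # symmetric demand measure
flows = {                                    # unit s-t flows, one per demand pair
    (0, 2): {(0, 1): 1.0, (1, 2): 1.0},
    (2, 0): {(2, 1): 1.0, (1, 0): 1.0},
}

# the load measure Phi on A^4: Phi(edge, demand) = phi_demand(edge) * sigma(demand)
Phi = {(e, d): f * w for d, w in sigma.items() for e, f in flows[d].items()}

# check (33) pointwise: (Phi^{134} - Phi^{234}) at (x, (s, t)) equals
# sigma(s, t) * (delta_s(x) - delta_t(x)), i.e. the measure sigma-bar
for x in J:
    for d, w in sigma.items():
        s, t = d
        lhs = sum(v for ((a, b), dd), v in Phi.items() if a == x and dd == d) \
            - sum(v for ((a, b), dd), v in Phi.items() if b == x and dd == d)
        rhs = w * ((s == x) - (t == x))
        assert abs(lhs - rhs) < 1e-12
```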

Conversely, we show that every measure $$\Phi$$ satisfying (33) and (34) yields a feasible multicommodity flow.

We may assume that $$\Phi ^{34}\ll \sigma$$. Indeed, suppose that this fails, and let $$S\in {\mathcal {A}}^2$$ be a set with $$\sigma (S)=0$$ and $$\Phi ^{34}(S)$$ maximum (such a set clearly exists). Define $$\Phi _1=\Phi _{J^2\times (J^2\setminus S)}$$ and $$\Phi _2=\Phi _{J^2\times S}$$, then $$\Phi =\Phi _1+\Phi _2$$. We claim that $$\Phi _1^{34}\ll \sigma$$. Indeed, for $$X\subseteq J^2$$ with $$\sigma (X)=0$$ we have $$\sigma (X\cup S)=0$$, hence $$\Phi ^{34}(X\cup S)\le \Phi ^{34}(S)$$, which implies that $$\Phi ^{34}_1(X)=\Phi ^{34}(X\setminus S)=0$$.

Furthermore, $$\Phi _1$$ satisfies (33) and (34). The second of these is trivial. For the first,

\begin{aligned} \Phi _1^{134}(A\times U)&-\Phi _1^{234}(A\times U) = \Phi _1(A\times J\times U)-\Phi _1(J\times A\times U)\\&=\Phi (A\times J\times (U\setminus S))-\Phi (J\times A\times (U\setminus S))\\&={\overline{\sigma }}(A\times (U\setminus S)) = \sigma ((A\times J)\cap (U\setminus S)) - \sigma ((J\times A)\cap (U\setminus S))\\&= \sigma ((A\times J)\cap U) - \sigma ((J\times A)\cap U) = {\overline{\sigma }}(A\times U) \end{aligned}

(we have used that $$\sigma (S)=0$$). Replacing $$\Phi$$ by $$\Phi _1$$ we get a solution of (33) and (34) such that $$\Phi ^{34}\ll \sigma$$. Thus the Radon–Nikodym derivative $$f=d\Phi ^{34}/d\sigma$$ exists.

The Disintegration Theorem 3.3 implies that there is a family $$(\theta _{st}:~s,t\in J)$$ of measures on $${\mathcal {A}}^2$$ such that $$\theta _{st}(U)$$ is a measurable function of (s, t) for every $$U\in {\mathcal {A}}^2$$, and

\begin{aligned} \Phi (T\times U)=\int \limits _U \theta _{st}(T)\,d\Phi ^{34}(s,t). \end{aligned}
(35)

for $$T,U\in {\mathcal {A}}^2$$. Defining $$\varphi _{st}= f(s,t)\cdot \theta _{st}$$, Equation (35) can be written as

\begin{aligned} \Phi (T\times U)=\int \limits _U \varphi _{st}(T)\,d\sigma (s,t). \end{aligned}
(36)

Let $$A\in {\mathcal {A}}$$ and $$U\in {\mathcal {A}}^2$$, then

\begin{aligned} \int \limits _U(\varphi _{st}^1(A)&-\varphi _{st}^2(A))\,d\sigma (s,t) = \Phi ^{134}(A\times U)-\Phi ^{234}(A\times U)\\&= {\overline{\sigma }}(A\times U) = \sigma ((A\times J)\cap U)-\sigma ((J\times A)\cap U) =\int _U ({\mathbb {1}}_{A\times J}-{\mathbb {1}}_{J\times A})\,d\sigma . \end{aligned}

This holds for every $$U\in {\mathcal {A}}^2$$, so it follows that for all $$A\in {\mathcal {A}}$$,

\begin{aligned} \varphi _{st}^1(A)-\varphi _{st}^2(A) = {\mathbb {1}}_{A\times J}(s,t)-{\mathbb {1}}_{J\times A}(s,t) = \delta _s(A)-\delta _t(A), \end{aligned}
(37)

holds for $$\sigma$$-almost all (s, t). We need to argue that for $$\sigma$$-almost all (s, t), Equation (37) holds for all A.

Let $$R_A$$ denote the set of pairs (s, t) for which (37) does not hold. Let $$\{A_1,A_2,\dots \}$$ be a countable set algebra generating $${\mathcal {A}}$$. Then $$R=\cup _i R_{A_i}$$ has $$\sigma (R)=0$$, and if $$(s,t)\notin R$$, then

\begin{aligned} \varphi _{st}^1(A_i)+\delta _t(A_i)=\varphi _{st}^2(A_i) + \delta _s(A_i). \end{aligned}

By the uniqueness of measure extension, this equality holds if we replace $$A_i$$ by any $$A\in {\mathcal {A}}$$. This shows that $$\varphi _{st}$$ is an s-t flow of value 1. Replacing $$\varphi _{st}$$ by $$\delta _{st}$$ for $$(s,t)\in R$$, we may assume that $$\varphi _{st}$$ is an s-t flow of value 1 for every s and t.

Equation (36) implies that

\begin{aligned} \int \limits _{J\times J}\varphi _{st}(T)\,d\sigma (s,t) = \Phi (T\times J\times J)\le \psi (T), \end{aligned}

so this multicommodity flow is feasible. If $$\Phi$$ violates the second inequality in (34) slightly, meaning that $$\Vert \Phi ^{12}\setminus \psi \Vert =\varepsilon >0$$, then by a similar computation the multicommodity flow we constructed has an overload of $$\varepsilon$$.

To sum up, it suffices to find a measure $$\Phi \in {\mathfrak {M}}_+({\mathcal {A}}^4)$$ such that $$\Phi ^{134}-\Phi ^{234} = {{\overline{\sigma }}}$$ and $$\Vert \Phi ^{12} \setminus \psi \Vert \le \varepsilon$$.

#### 5.2.2 Proof of the Multicommodity Flow Theorem.

I. The “only if” direction. Consider a multicommodity flow $$F=(\sigma ;~\varphi _{uv})$$ with demands $$\sigma$$ and with overload over $$\psi$$ less than $$\varepsilon$$ ($$\varepsilon >0$$). We may assume that $$\sigma$$ is a probability distribution. By Theorem 4.17, there is a probability distribution $$\kappa _{uv}$$ on u-v paths for every pair $$(u,v)$$ such that $$E(\kappa _{uv})=\varphi _{uv}$$. Let $$\tau$$ be the mixture of the $$\kappa _{uv}$$ by $$\sigma$$; in other words, we generate a random path from $$\tau$$ by selecting a random pair $$(u,v)$$ from $$\sigma$$, and then selecting a random path from $$\kappa _{uv}$$. Then $$E(\tau ) = \varphi _F$$ and $$Z(\tau )=\sigma$$. By the definition of overload, we have $$\varphi _F\le \psi +\beta$$, where $$\Vert \beta \Vert \le \varepsilon$$. By Lemma 5.1,

\begin{aligned} {\mathcal {D}}(\sigma )={\mathcal {D}}(Z(\tau )) \le {\mathcal {D}}(E(\tau ))={\mathcal {D}}(\varphi _F) \le {\mathcal {D}}(\psi ) + {\mathcal {D}}(\beta ) \le {\mathcal {D}}(\psi )+\Vert {\mathcal {D}}\Vert \varepsilon . \end{aligned}

Since $$\varepsilon$$ can be arbitrarily small, this proves that $${\mathcal {D}}(\sigma )\le {\mathcal {D}}(\psi )$$.

II. The “if” direction. Consider the convex sets of measures

\begin{aligned} {\mathfrak {H}}_1&=\{\Phi \in {\mathfrak {M}}({\mathcal {A}}^4):~\Phi ^{134}-\Phi ^{234}={\overline{\sigma }}\},\\ {\mathfrak {H}}_2&={\mathfrak {M}}_+({\mathcal {A}}^4),\\ {\mathfrak {H}}_3&=\{\Phi \in {\mathfrak {M}}({\mathcal {A}}^4):~\Phi ^{12}\le \psi \}. \end{aligned}

To make these sets open, let $$\delta >0$$, and consider the $$\delta$$-neighborhoods $${\mathfrak {H}}_i^\delta =\{\mu \in {\mathfrak {M}}({\mathcal {A}}^4):~d_\mathrm{tv}(\mu ,{\mathfrak {H}}_i)<\delta \}$$. Note that all these sets are convex and invariant under the map $$\Phi \mapsto \Phi ^{**}$$.

The main step in the proof is proving that

\begin{aligned} {\mathfrak {H}}_1^\delta \cap {\mathfrak {H}}_2^\delta \cap {\mathfrak {H}}_3^\delta \not =\emptyset . \end{aligned}
(38)

Suppose that this intersection is empty. The intersection of any two of these sets is nonempty, so by Lemma 3.4 there are bounded linear functionals $${\mathcal {L}}_1,{\mathcal {L}}_2,{\mathcal {L}}_3$$ on $${\mathfrak {M}}({\mathcal {A}}^4)$$ and real numbers $$a_1,a_2,a_3$$ such that $${\mathcal {L}}_1+{\mathcal {L}}_2+{\mathcal {L}}_3=0$$, $$a_1+a_2+a_3=0$$, and $${\mathcal {L}}_i > a_i$$ on $${\mathfrak {H}}_i^\delta$$. Note that $$0\in {\mathfrak {H}}_2$$ and $$0\in {\mathfrak {H}}_3$$, which implies that $$a_2,a_3<0$$ (since $${\mathcal {L}}_i(0)=0$$), and hence $$a_1>0$$. Since the sets are invariant under the map $$\Phi \mapsto \Phi ^{**}$$, we may assume that the linear functionals $${\mathcal {L}}_1, {\mathcal {L}}_2,{\mathcal {L}}_3$$ are invariant under this map as well.

These conditions have the following implications for the functionals $${\mathcal {L}}_i$$:

(a) The affine subspace $${\mathfrak {H}}_1$$ is not empty, since the trivial multicommodity flow satisfies it. The condition that $${\mathcal {L}}_1(\Phi )> a_1$$ for $$\Phi \in {\mathfrak {H}}_1^\delta$$ implies that $${\mathcal {L}}_1$$ is constant on $${\mathfrak {H}}_1$$ (a linear functional bounded from below on an affine subspace must be constant on it). Since $$a_1>0$$, this constant is positive, and we may assume (by scaling the $${\mathcal {L}}_i$$ and the $$a_i$$) that it is 1. Then $$a_1<1$$. It follows that $${\mathcal {L}}_1(\Phi )=0$$ whenever $$\Phi ^{134}=\Phi ^{234}$$.

We can apply Proposition 3.5 to the linear operator $${\mathcal {T}}:~\Phi \mapsto \Phi ^{134}-\Phi ^{234}$$, as in the proof of Lemma 4.3. We get a linear functional $${\mathcal {Z}}$$ on $${\mathfrak {M}}({\mathcal {A}}^3)$$ such that

\begin{aligned} {\mathcal {L}}_1(\Phi )={\mathcal {Z}}(\Phi ^{134}-\Phi ^{234})\qquad (\Phi \in {\mathfrak {M}}({\mathcal {A}}^4)). \end{aligned}
(39)

Substituting the trivial multicommodity flow in (39), we get that $${\mathcal {Z}}({\overline{\sigma }})=1$$. It also follows that

\begin{aligned} {\mathcal {L}}_1(\Phi ^*) = {\mathcal {Z}}((\Phi ^*)^{134}-(\Phi ^*)^{234}) = {\mathcal {Z}}(\Phi ^{234}-\Phi ^{134}) = - {\mathcal {L}}_1(\Phi ), \end{aligned}
(40)

and

\begin{aligned} {\mathcal {L}}_1(\Phi ^{\circ *}) = {\mathcal {L}}_1((\Phi ^{**})^*) = -{\mathcal {L}}_1(\Phi ^{**}) = - {\mathcal {L}}_1(\Phi ). \end{aligned}
(41)

(b) The condition that $${\mathcal {L}}_2(\Phi )> a_2$$ for $$\Phi \in {\mathfrak {H}}_2^\delta$$ implies that $${\mathcal {L}}_2(\mu )\ge 0$$ for every $$\mu \ge 0$$ (indeed, $$t\mu \in {\mathfrak {H}}_2$$ for every $$t>0$$, so $${\mathcal {L}}_2(\mu )>a_2/t\rightarrow 0$$), so $${\mathcal {L}}_2$$ is a nonnegative functional.

(c) The condition that $${\mathcal {L}}_3(\Phi )> a_3$$ for $$\Phi \in {\mathfrak {H}}_3^\delta$$ implies that $${\mathcal {L}}_3(\mu )\ge 0$$ whenever $$\mu \in {\mathfrak {M}}({\mathcal {A}}^4)$$ and $$\mu ^{12}\le 0$$ (indeed, $$t\mu \in {\mathfrak {H}}_3$$ for every $$t>0$$). This implies that $${\mathcal {L}}_3(\mu )=0$$ whenever $$\mu ^{12}=0$$. We can apply Proposition 3.5 to the operator $${\mathcal {S}}:~\varphi \mapsto \varphi ^{12}$$ as in (a); it is easy to see that the range of $${\mathcal {S}}$$ is the whole space $${\mathfrak {M}}({\mathcal {A}}^2)$$, so it is closed. We get a bounded linear functional $${\mathcal {R}}$$ on $${\mathfrak {M}}({\mathcal {A}}^2)$$ such that $${\mathcal {L}}_3(\mu )=-{\mathcal {R}}(\mu ^{12})$$. It also follows that $${\mathcal {R}}$$ is a nonnegative functional.

From $${\mathcal {L}}_1+{\mathcal {L}}_2+{\mathcal {L}}_3=0$$ we get that

\begin{aligned} {\mathcal {R}}(\Phi ^{12}) = -{\mathcal {L}}_3(\Phi ) = {\mathcal {L}}_1(\Phi )+{\mathcal {L}}_2(\Phi ) \ge {\mathcal {L}}_1(\Phi ) ={\mathcal {Z}}(\Phi ^{134}-\Phi ^{234}) \end{aligned}
(42)

for every $$\Phi \in {\mathfrak {M}}_+({\mathcal {A}}^4)$$. From the fact that $$\psi \times \gamma \in {\mathfrak {H}}_3$$ for any probability measure $$\gamma \in {\mathfrak {M}}({\mathcal {A}}^2)$$, it follows that $${\mathcal {R}}(\psi )< - a_3 = a_1+a_2<1$$.
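In more detail: $$\psi \times \gamma \in {\mathfrak {H}}_3\subseteq {\mathfrak {H}}_3^\delta$$ and $$(\psi \times \gamma )^{12}=\psi$$ (as $$\gamma$$ is a probability measure), so using the relation $${\mathcal {R}}(\Phi ^{12})=-{\mathcal {L}}_3(\Phi )$$ from (42),

\begin{aligned} -{\mathcal {R}}(\psi ) = {\mathcal {L}}_3(\psi \times \gamma ) > a_3, \qquad \text {whence}\qquad {\mathcal {R}}(\psi )< -a_3 = a_1+a_2< 1, \end{aligned}

where the last inequality uses $$a_1<1$$ and $$a_2<0$$.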

By Lemma 3.7, there is a bounded linear functional $${\mathcal {Q}}$$ on $${\mathfrak {M}}({\mathcal {A}}^2)$$ such that

\begin{aligned} {\mathcal {Q}}(\mu )=\sup \{{\mathcal {L}}_1(\Phi ):~\Phi ^{12}=\mu ,~ \Phi \ge 0\} = \sup \{{\mathcal {Z}}(\Phi ^{134}-\Phi ^{234}):~\Phi ^{12}=\mu ,~ \Phi \ge 0\} \end{aligned}

for all $$\mu \ge 0$$. Note that $${\mathcal {Q}}(\mu )\le {\mathcal {R}}(\mu )$$ and

\begin{aligned} {\mathcal {Q}}(\Phi ^{12})\ge {\mathcal {Z}}(\Phi ^{134}-\Phi ^{234}) \end{aligned}
(43)

for every $$\Phi \ge 0$$. Note also that in the definition, the measure $$\Phi ^{\circ *}$$ competes for the supremum along with $$\Phi$$, and since $${\mathcal {L}}_1(\Phi ^{\circ *})=-{\mathcal {L}}_1(\Phi )$$, we can also write

\begin{aligned} {\mathcal {Q}}(\mu )=\sup \{|{\mathcal {L}}_1(\Phi )|:~\Phi ^{12}=\mu ,~ \Phi \ge 0\}\ge 0. \end{aligned}
(44)

We also have $$\sigma _\Delta \ge 0$$ and $$(\sigma _\Delta )^{12}=\sigma$$, and so

\begin{aligned} {\mathcal {Q}}(\sigma )\ge {\mathcal {L}}_1(\sigma _\Delta ) = 1. \end{aligned}
(45)

### Claim 4

The functional $${\mathcal {Q}}$$ is metrical.

First, suppose that $$\mu$$ is concentrated on the diagonal of $${\mathcal {A}}^2$$. Then every measure $$\Phi \in {\mathfrak {M}}_+({\mathcal {A}}^4)$$ with $$\Phi ^{12}=\mu$$ is concentrated on the set $$\{(x,x,u,v):~x,u,v\in J\}$$, and hence $$\Phi ^{134}=\Phi ^{234}$$, so $${\mathcal {Q}}(\mu )=0$$.

Second, for every $$\mu \ge 0$$ we have

\begin{aligned} {\mathcal {Q}}(\mu ^*)&= \sup \{{\mathcal {L}}_1(\Phi ):~\Phi ^{12}=\mu ^*,~ \Phi \ge 0\} = \sup \{{\mathcal {L}}_1(\Phi ):~(\Phi ^{**})^{12}=\mu ,~ \Phi \ge 0\}\\&= \sup \{{\mathcal {L}}_1(\Phi ^{**}):~\Phi ^{12}=\mu ,~ \Phi \ge 0\} = {\mathcal {Q}}(\mu ). \end{aligned}

Third, let $$\kappa \in {\mathfrak {M}}_+({\mathcal {A}}^3)$$ and $$\eta >0$$ (we use a fresh letter, since $$\delta$$ is already fixed). By the definition of $${\mathcal {Q}}$$, there is a measure $$\Phi \in {\mathfrak {M}}_+({\mathcal {A}}^4)$$ such that

\begin{aligned} {\mathcal {Q}}(\kappa ^{12})\le {\mathcal {L}}_1(\Phi )+\eta ,\qquad \text {and}\qquad \Phi ^{12}=\kappa ^{12}. \end{aligned}

Consider the space $${\mathfrak {M}}({\mathcal {A}}^{\{12345\}})$$, where the space of $$\kappa$$ is identified with $${\mathfrak {M}}({\mathcal {A}}^{\{125\}})$$ (the space of $$\Phi$$ remains $${\mathfrak {M}}({\mathcal {A}}^{\{1234\}})$$). The equation $$\Phi ^{12}=\kappa ^{12}$$ implies that there is a measure $$\Gamma \in {\mathfrak {M}}_+({\mathcal {A}}^{\{12345\}})$$ such that $$\Gamma ^{1234}=\Phi$$ and $$\Gamma ^{125}=\kappa$$. Using (39), we get

\begin{aligned} {\mathcal {Q}}(\kappa ^{12})&={\mathcal {Q}}(\Phi ^{12})\le {\mathcal {L}}_1(\Phi )+\eta = {\mathcal {Z}}(\Phi ^{134}-\Phi ^{234})+\eta = {\mathcal {Z}}(\Gamma ^{134}-\Gamma ^{234})+\eta \\&={\mathcal {Z}}(\Gamma ^{134}-\Gamma ^{345})+{\mathcal {Z}}(\Gamma ^{345}-\Gamma ^{234})+\eta . \end{aligned}

Applying (43) with $$\Gamma ^{1345}$$ in place of $$\Phi$$ and index 5 in place of 2, we get that $${\mathcal {Z}}(\Gamma ^{134}-\Gamma ^{345})\le {\mathcal {Q}}(\Gamma ^{15}) = {\mathcal {Q}}(\kappa ^{15})$$. Similarly, $${\mathcal {Z}}(\Gamma ^{345}-\Gamma ^{234})\le {\mathcal {Q}}(\kappa ^{25})$$, and so

\begin{aligned} {\mathcal {Q}}(\kappa ^{12}) \le {\mathcal {Q}}(\kappa ^{15})+{\mathcal {Q}}(\kappa ^{25})+\eta . \end{aligned}

Since this holds for every $$\eta >0$$, we get that $${\mathcal {Q}}(\kappa ^{12})\le {\mathcal {Q}}(\kappa ^{15}) + {\mathcal {Q}}(\kappa ^{25})$$, proving that $${\mathcal {Q}}$$ is metrical.
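In summary, relabeling index 5 as 3 in the last property, we have verified the three properties

\begin{aligned} &{\mathcal {Q}}(\mu )=0\quad \text {if }\mu \text { is concentrated on the diagonal},\\ &{\mathcal {Q}}(\mu ^*)={\mathcal {Q}}(\mu )\quad \text {for every }\mu \ge 0,\\ &{\mathcal {Q}}(\kappa ^{12})\le {\mathcal {Q}}(\kappa ^{13})+{\mathcal {Q}}(\kappa ^{23})\quad \text {for every }\kappa \in {\mathfrak {M}}_+({\mathcal {A}}^3), \end{aligned}

which together constitute the defining conditions of a metrical functional used in the claim.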

Now $${\mathcal {Q}}(\psi )\le {\mathcal {R}}(\psi )<1$$ but $${\mathcal {Q}}(\sigma )\ge 1$$, so the hypothesis of the theorem is violated. This proves (38).

This implies the (seemingly) stronger statement that

\begin{aligned} {\mathfrak {H}}_1 \cap {\mathfrak {H}}_2^\delta \cap {\mathfrak {H}}_3^\delta \not =\emptyset \end{aligned}
(46)

for all $$\delta >0$$. Indeed, if $$\Phi \in {\mathfrak {H}}_1^{\delta /2} \cap {\mathfrak {H}}_2^{\delta /2}\cap {\mathfrak {H}}_3^{\delta /2}$$, then there is a measure $$\Phi '\in {\mathfrak {H}}_1$$ such that $$d_\mathrm{tv}(\Phi ,\Phi ')<\delta /2$$, and then $$\Phi '\in {\mathfrak {H}}_1 \cap {\mathfrak {H}}_2^\delta \cap {\mathfrak {H}}_3^\delta$$.
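The last step is just the triangle inequality for the total variation distance: for $$i=2,3$$,

\begin{aligned} d_\mathrm{tv}(\Phi ',{\mathfrak {H}}_i) \le d_\mathrm{tv}(\Phi ',\Phi ) + d_\mathrm{tv}(\Phi ,{\mathfrak {H}}_i)< \frac{\delta }{2}+\frac{\delta }{2} = \delta . \end{aligned}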

Our next step is to prove that for every $$\delta >0$$,

\begin{aligned} {\mathfrak {H}}_1 \cap {\mathfrak {H}}_2 \cap {\mathfrak {H}}_3^\delta \not =\emptyset . \end{aligned}
(47)

Indeed, let $$\Phi \in {\mathfrak {H}}_1 \cap {\mathfrak {H}}_2^{\delta /3}\cap {\mathfrak {H}}_3^{\delta /3}$$. From $$d_\mathrm{tv}(\Phi ,{\mathfrak {H}}_2)<\delta /3$$ it follows that $$\Vert \Phi _-\Vert <\delta /3$$. Consider the measure $$\Psi =\Phi _++\Phi _-^* \in {\mathfrak {M}}_+({\mathcal {A}}^4)$$; then

\begin{aligned} \Psi ^{134}-\Psi ^{234}&=(\Phi _+)^{134} + (\Phi _-^*)^{134} - (\Phi _+)^{234} - (\Phi _-^*)^{234}\\&=(\Phi _+)^{134} + (\Phi _-)^{234} - (\Phi _+)^{234} - (\Phi _-)^{134}=\Phi ^{134} -\Phi ^{234}={\overline{\sigma }}. \end{aligned}

Thus $$\Psi \in {\mathfrak {H}}_1\cap {\mathfrak {H}}_2$$. Furthermore,

\begin{aligned} d_\mathrm{tv}(\Psi ,{\mathfrak {H}}_3) \le d_\mathrm{tv}(\Phi ,{\mathfrak {H}}_3) + \Vert \Phi -\Psi \Vert< \frac{1}{3}\delta + 2\Vert \Phi _-\Vert < \delta , \end{aligned}
(48)

so $$\Psi \in {\mathfrak {H}}_3^\delta$$. The multicommodity flow $$\Psi$$ satisfies (33) and (34), and it is easy to check that it violates capacity $$\psi$$ by at most $$\Vert \Psi ^{12}\setminus \psi \Vert \le d_\mathrm{tv}(\Psi ,{\mathfrak {H}}_3) <\delta$$.
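The bound $$\Vert \Phi -\Psi \Vert \le 2\Vert \Phi _-\Vert$$ used in (48) is immediate from the Jordan decomposition $$\Phi =\Phi _+-\Phi _-$$:

\begin{aligned} \Psi -\Phi = (\Phi _++\Phi _-^*)-(\Phi _+-\Phi _-) = \Phi _-+\Phi _-^*, \qquad \Vert \Psi -\Phi \Vert \le \Vert \Phi _-\Vert +\Vert \Phi _-^*\Vert =2\Vert \Phi _-\Vert . \end{aligned}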

This completes the proof of Theorem 5.2.