Flows on measurable spaces

The theory of graph limits is only understood to any nontrivial degree in the cases of dense graphs and of bounded degree graphs. There is, however, a lot of interest in the intermediate cases. It appears that the most important constituents of graph limits in the general case will be Markov spaces (Markov chains on measurable spaces with a stationary distribution). This motivates our goal to extend some important theorems from finite graphs to Markov spaces or, more generally, to measurable spaces. In this paper, we show that much of flow theory, one of the most important areas in graph theory, can be extended to measurable spaces. Surprisingly, even the Markov space structure is not fully needed to get these results: all we need is a standard Borel space with a measure on its square. Our results may be considered as extensions of flow theory for directed graphs to the measurable case.


Introduction
The theory of graph limits is only understood to some nontrivial degree in the case of dense graphs, and (on the opposite end of the scale) in the case of bounded degree graphs. There is, however, a lot of work being done on the intermediate cases. It appears that the most important constituents of graph limits in the general case will be Markov spaces (Markov chains on measurable spaces with a stationary distribution). Markov spaces can be described by a sigma-algebra, endowed with a measure on its square, such that its two marginals are equal; often we drop even this last assumption.
A finite directed graph G = (V, E) can be thought of as the sigma-algebra 2^V, endowed with a measure on V × V, the counting measure of the set of edges. This motivates our goal to extend some important theorems from finite graphs to measures on squares of sigma-algebras. In this paper we show that much of flow theory, one of the most important areas in graph theory, can be extended to such spaces.
In the finite case, a flow is a function on the edges; we often sum its values on subsets of edges (e.g. cuts), which means we are also using the corresponding measure on subsets.
In the case of an infinite point set J (endowed with a sigma-algebra A), these two notions diverge: we can try to generalize the notion of a flow either as a function on ordered pairs of points, or as a measure on the subsets of J × J measurable with respect to the sigma-algebra A × A. While the first notion is perhaps more natural, flows as measures are easier to define, and we explore this possibility in this paper. Marks and Unger [19] define and use flows as functions on the edges, but they restrict their attention to finite-degree graphs, in which case the flow condition "inflow=outflow" is easier to define. In particular, we generalize the Hoffman Circulation Theorem to measurable spaces. This connects us with the theory of Markov spaces, which can be described as measurable spaces endowed with a nonnegative normalized circulation, called the ergodic circulation. Our main concern will be the existence of circulations; in this sense, these studies can be thought of as preliminaries for the study of Markov spaces or Markov chains, which are concerned with measurable spaces with a given ergodic circulation. Flows between two points, and more generally, between two measures can then be handled using the results about circulations (by the same reductions as in the finite case). In particular, we prove an extension of the Max-Flow-Min-Cut Theorem.
In the last part of the paper, we prove a measure-theoretic generalization of the Multicommodity Flow Theorem of Iri and of Shahrokhi and Matula. This result is not fully satisfactory though; we formulate several problems left open by our results.

Flow theory on finite graphs
As a motivation of the results in this paper, let us recall some basic results on finite graphs in this area.
Let G = (V, E) be a finite directed graph and g : E → R. The flow condition at node i is the equation

∑_{j: ji∈E} g(ji) = ∑_{j: ij∈E} g(ij),     (1)

where the summation extends over those nodes j for which ji ∈ E and ij ∈ E, respectively. A circulation on G is a function g : E → R satisfying the flow condition at every node i. Circulations could also be defined by the condition ∑_{i∈A, j∈A^c} g(ij) = ∑_{i∈A^c, j∈A} g(ij) for every A ⊆ V (here A^c = V \ A denotes the complement of A). A basic result about the existence of circulations satisfying prescribed bounds is the following [11]: given bounds a, b : E → R with a ≤ b, there is a circulation g with a(e) ≤ g(e) ≤ b(e) for every edge e if and only if

∑_{i∈A, j∈A^c} a(ij) ≤ ∑_{i∈A^c, j∈A} b(ij)

for every A ⊆ V.
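In the finite case, both the flow condition and Hoffman's cut condition can be checked by brute force. The following sketch does this on a small digraph; the graph, bounds, and the exhibited circulation are invented for illustration only.

```python
from itertools import combinations, chain

# A small directed graph on nodes 0..3 with lower/upper bounds a <= g <= b.
nodes = [0, 1, 2, 3]
a = {(0, 1): 1, (1, 2): 1, (2, 3): 0, (3, 0): 0, (2, 0): 0}
b = {(0, 1): 2, (1, 2): 2, (2, 3): 2, (3, 0): 2, (2, 0): 2}

def hoffman_condition(nodes, a, b):
    """Check a(A, A^c) <= b(A^c, A) for every subset A of the nodes."""
    subsets = chain.from_iterable(
        combinations(nodes, k) for k in range(len(nodes) + 1))
    for A in map(set, subsets):
        out_lower = sum(v for (i, j), v in a.items() if i in A and j not in A)
        in_upper = sum(v for (i, j), v in b.items() if i not in A and j in A)
        if out_lower > in_upper:
            return False
    return True

def is_circulation(g, nodes):
    """Flow condition: inflow equals outflow at every node."""
    return all(
        sum(v for (i, j), v in g.items() if j == n) ==
        sum(v for (i, j), v in g.items() if i == n)
        for n in nodes)

# A feasible circulation for these bounds (found by inspection).
g = {(0, 1): 1, (1, 2): 1, (2, 3): 1, (3, 0): 1, (2, 0): 0}
assert hoffman_condition(nodes, a, b)
assert is_circulation(g, nodes)
assert all(a[e] <= g[e] <= b[e] for e in g)
```

As Hoffman's theorem predicts, the cut condition holds exactly because a feasible circulation exists.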
The most important consequence of the Hoffman Circulation Theorem is the Max-Flow-Min-Cut Theorem. Let G = (V, E) be a directed graph and s, t ∈ V. Let c : E → R⁺ be an assignment of nonnegative "capacities" to the edges. An s-t cut is a set of edges from A to A^c, where s ∈ A ⊆ V \ {t}. The capacity of this cut is the sum ∑_{i∈A, j∈A^c} c(ij).
An s-t flow is a function f : E → R satisfying the flow condition (1) at every node i ≠ s, t.
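The Max-Flow-Min-Cut Theorem can be verified by computation on a small instance. A sketch, using the standard Edmonds-Karp augmenting-path method and brute-force cut enumeration (the graph and capacities are invented for illustration):

```python
from itertools import combinations
from collections import deque

# Capacities on a small directed graph; s = 0, t = 3.
cap = {(0, 1): 3, (0, 2): 2, (1, 2): 1, (1, 3): 2, (2, 3): 3}
s, t = 0, 3
nodes = {0, 1, 2, 3}

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual s-t paths."""
    res = dict(cap)
    for (i, j) in list(cap):
        res.setdefault((j, i), 0)  # residual back-edges
    value = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for (i, j), c in res.items():
                if i == u and c > 0 and j not in parent:
                    parent[j] = u
                    q.append(j)
        if t not in parent:
            return value
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        delta = min(res[e] for e in path)  # bottleneck capacity
        for (i, j) in path:
            res[(i, j)] -= delta
            res[(j, i)] += delta
        value += delta

def min_cut(cap, s, t, nodes):
    """Minimum capacity of an s-t cut, by enumerating all sets A with s in A."""
    best = float("inf")
    others = tuple(nodes - {s, t})
    for k in range(len(others) + 1):
        for extra in combinations(others, k):
            A = {s} | set(extra)
            best = min(best, sum(c for (i, j), c in cap.items()
                                 if i in A and j not in A))
    return best

assert max_flow(cap, s, t) == min_cut(cap, s, t, nodes) == 5
```

On this instance both quantities equal 5, as the theorem guarantees.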
The value of the flow is val(f) = ∑_{j: sj∈E} f(sj) − ∑_{j: js∈E} f(js). A flow is feasible if 0 ≤ f ≤ c. The Max-Flow-Min-Cut Theorem asserts that the maximum value of a feasible s-t flow equals the minimum capacity of an s-t cut. Suppose that there is a circulation g satisfying the given conditions a(e) ≤ g(e) ≤ b(e) for every (directed) edge e (for short, a feasible circulation). Also suppose that we are given a "cost" function c : E → R⁺. What is the minimum of the "total cost" ∑_e c(e)g(e) for a feasible circulation? This can be answered by solving a linear program, to which the Duality Theorem applies. This leads to the following.
Theorem 2.4. The minimum value of ∑_e c(e)g(e) over circulations g satisfying a(ij) ≤ g(ij) ≤ b(ij) for every edge ij is given by the maximum of

∑_{ij∈E} ( a(ij)(c(ij) − q(j) + q(i))⁺ − b(ij)(c(ij) − q(j) + q(i))⁻ ),

where q ranges over all functions V → R, and x⁺ = max(x, 0) and x⁻ = max(−x, 0) denote the positive and negative parts.
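Weak duality in Theorem 2.4 is easy to check numerically: for any feasible circulation and any potential q, the dual expression is bounded by the primal cost. A sketch of this check on an invented triangle instance, reading the dual objective as derived from LP duality:

```python
# Weak-duality check for the min-cost circulation dual. The graph, bounds,
# and costs below are invented for illustration.
edges = [(0, 1), (1, 2), (2, 0)]
a = {e: 1 for e in edges}               # lower bounds
b = {e: 3 for e in edges}               # upper bounds
c = {(0, 1): 2, (1, 2): 1, (2, 0): 4}   # costs

def primal_cost(g):
    return sum(c[e] * g[e] for e in edges)

def dual_value(q):
    """sum over edges of a*(x)^+ - b*(x)^- with x = c(ij) - q(j) + q(i)."""
    total = 0.0
    for (i, j) in edges:
        x = c[(i, j)] - q[j] + q[i]
        total += a[(i, j)] * max(x, 0) - b[(i, j)] * max(-x, 0)
    return total

# Any constant flow on the directed triangle is a feasible circulation.
for g_val in (1, 2, 3):
    g = {e: g_val for e in edges}
    for q in ({0: 0, 1: 0, 2: 0}, {0: 1, 1: 2, 2: 0}, {0: -3, 1: 5, 2: 2}):
        assert dual_value(q) <= primal_cost(g) + 1e-9
```

Here the minimum cost is attained at g ≡ 1 (cost 7), and the zero potential already certifies it, so the maximum in the dual is attained as well.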
Let G = (V, E) be a (finite) directed graph. A multicommodity flow is a family of flows (f_st : s, t ∈ V), where f_st is a (nonnegative) s-t flow of specified value σ(s, t). Suppose we are given capacities c(i, j) for the edges. Then we say that the multicommodity flow is feasible if ∑_{s,t} f_st(i, j) ≤ c(i, j) for every edge ij. (We may assume, if convenient, that the graph is a bidirected complete graph, since missing edges can be added with capacity 0.) The question is whether a feasible multicommodity flow exists. This is not hard to decide in principle, since the conditions can be written as a system of linear inequalities, treating the values f_st(i, j) as variables, and we can apply the Farkas Lemma. However, working out the dual we get conditions that are not transparent at all. But for undirected graphs there is a very nice form of the condition due to Iri [12] and to Shahrokhi and Matula [21].
Let G = (V, E) be an undirected graph, where we consider each undirected edge as a pair of oppositely directed edges. We assume that the demand function σ(i, j) and the capacity function are symmetric: σ(i, j) = σ(j, i) and c(i, j) = c(j, i). In the definition of multicommodity flows, we require that f_st(i, j) = f_ts(j, i) for every edge ij. Consider a semimetric D on V. Let us describe an informal (physical) derivation of the conditions. Think of an edge ij as a pipe with cross section c(i, j) and length D(i, j). Then the total volume of the system is ∑_{i,j} c(i, j)D(i, j). If the multicommodity problem is feasible, then every flow f_st occupies a volume of at least σ(s, t)D(s, t), and hence

∑_{s,t} σ(s, t)D(s, t) ≤ ∑_{i,j} c(i, j)D(i, j).     (2)

We call this inequality the volume condition. This condition, when required for every possible semimetric, is also sufficient: Theorem 2.5. Let G be an undirected graph, and let a demand function σ(s, t) ∈ R⁺ (s, t ∈ V) and a capacity function c(i, j) ∈ R⁺ (ij ∈ E) be given. Then there exists a feasible multicommodity flow (f_st : s, t ∈ V) satisfying the demand conditions val(f_st) = σ(s, t) if and only if the volume condition (2) is satisfied for every semimetric D on V.
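The necessity direction of the volume condition can be illustrated on a tiny instance: when every demand is routed on a direct edge within capacity, inequality (2) holds for every semimetric, in particular for all cut semimetrics. The triangle, capacities, and demands below are invented for illustration.

```python
from itertools import combinations, chain

# Undirected triangle with unit capacities; each pair demands 1/2, which can
# be routed on the direct edge, so the instance is feasible.
V = [0, 1, 2]
cap = {frozenset(e): 1.0 for e in combinations(V, 2)}
dem = {frozenset(e): 0.5 for e in combinations(V, 2)}

def volume_condition(D):
    """sum sigma(s,t) D(s,t) <= sum c(i,j) D(i,j) for a semimetric D."""
    lhs = sum(dem[p] * D[p] for p in dem)
    rhs = sum(cap[p] * D[p] for p in cap)
    return lhs <= rhs + 1e-9

# Cut semimetrics: D_A(i,j) = 1 iff the cut {A, V \ A} separates i and j.
subsets = chain.from_iterable(combinations(V, k) for k in range(len(V) + 1))
for A in map(set, subsets):
    D = {p: (1.0 if len(p & A) == 1 else 0.0) for p in cap}
    assert volume_condition(D)
```

Since the demands here are pointwise below the capacities, the volume condition holds for every nonnegative D; the cut semimetrics are the extreme rays one would check first in general.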

Graphons
Let (J, A) be a standard Borel space, and let W : J × J → [0, 1] be a measurable function.
Let us endow (J, A) with a node measure, a probability measure λ. If W is symmetric (W (x, y) = W (y, x)), then the quadruple (J, A, λ, W ) is called a graphon. Dropping the assumption that W is symmetric, we get a digraphon.
The edge measure of a graphon or digraphon is the integral measure of W:

η(A × B) = ∫_{A×B} W(x, y) dλ(x) dλ(y)   (A, B ∈ A).

The node measure and edge measure of a graphon determine the graphon, up to a set of (λ × λ)-measure zero. Indeed, η is absolutely continuous with respect to λ × λ, and W = dη/d(λ × λ) almost everywhere.
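The edge measure can be approximated numerically for a concrete kernel. A sketch with the illustrative graphon W(x, y) = xy on [0, 1] with λ the Lebesgue measure, for which η([0,1] × [0,1]) = (∫₀¹ x dx)² = 1/4:

```python
def edge_measure(W, A, B, n=200):
    """Midpoint-rule approximation of eta(A x B) = ∫∫ W dλ dλ,
    where A and B are intervals given as (lo, hi) pairs."""
    total = 0.0
    for i in range(n):
        x = A[0] + (A[1] - A[0]) * (i + 0.5) / n
        row = 0.0
        for j in range(n):
            y = B[0] + (B[1] - B[0]) * (j + 0.5) / n
            row += W(x, y)
        total += row * (B[1] - B[0]) / n
    return total * (A[1] - A[0]) / n

W = lambda x, y: x * y          # a symmetric kernel, hence a graphon
assert abs(edge_measure(W, (0, 1), (0, 1)) - 0.25) < 1e-3
```

Recovering W as the Radon-Nikodym derivative dη/d(λ × λ) corresponds, in this discretized picture, to dividing the mass of a small rectangle by its area.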
Graphons can represent limit objects of sequences of dense graphs that are convergent in the local sense [3,18]. For this representation, we may limit the underlying sigma-algebra to standard Borel spaces.

Graphings
Let (J, A) be a standard Borel space. A Borel graph is a simple (infinite) graph on node set J, whose edge set E belongs to A × A. By "graph" we mean a simple undirected graph, so we assume that E ⊆ J × J avoids the diagonal of J × J and is invariant under interchanging the coordinates. A graphing is a Borel graph, with all degrees bounded by a finite constant, endowed with a probability measure λ on (J, A), satisfying the following "measure-preservation" condition for any two sets A, B ∈ A:

∫_A deg_B(x) dλ(x) = ∫_B deg_A(x) dλ(x).

Here deg_B(x) denotes the number of edges connecting x ∈ J to points of B. (It can be shown that this is a bounded Borel function of x.) We call λ the node measure of the graphing.
We can define Borel digraphs (directed graphs) in the natural way, by allowing E to be any set in A × A. To define a digraphing, we assume that both the indegrees and outdegrees are finite and bounded. In this case we have to define two functions: deg⁺_B(x) denotes the number of edges from x to B, and deg⁻_B(x) denotes the number of edges from B to x. The "measure-preservation" condition says that

∫_A deg⁺_B(x) dλ(x) = ∫_B deg⁻_A(x) dλ(x)

for A, B ∈ A. Such a digraphing defines a measure on Borel subsets of J², the edge measure of the digraphing: on rectangles we define

η(A × B) = ∫_A deg⁺_B(x) dλ(x),

which extends to Borel subsets in the standard way. This measure is concentrated on the set E of edges. In the case of graphings, the edge measure is symmetric in the sense that interchanging the two coordinates does not change it. It can be shown that the node measure and the edge measure determine the (di)graphing up to a set of edges of η-measure zero.
Example 2.6 (Cyclic graphing and digraphing). For a fixed a ∈ (0, 1), let C_a be the graphing on [0, 1] obtained by connecting every point x to x + a (mod 1) and x − a (mod 1). If a is irrational, this graph consists of two-way infinite paths; if a is rational, the graph consists of cycles. We will also use the directed version C⃗_a, obtained by connecting x to x + a (mod 1) by a directed edge.
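The dichotomy in Example 2.6 is easy to observe by iterating the shift. A sketch (the particular values 2/5 and √2 − 1 are chosen for illustration; the irrational case is of course only checked for finitely many steps):

```python
from fractions import Fraction
import math

def orbit(x0, a, steps):
    """Iterate the shift x -> x + a (mod 1), returning the trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append((xs[-1] + a) % 1)
    return xs

# Rational shift: every point lies on a cycle whose length is the denominator.
a = Fraction(2, 5)
xs = orbit(Fraction(0), a, 5)
assert xs[5] == xs[0]                 # back to the start after 5 steps
assert len(set(xs[:5])) == 5          # ...and not earlier

# Irrational shift: the orbit never returns, so the component is a path.
b = math.sqrt(2) - 1
ys = orbit(0.0, b, 1000)
assert len(set(ys)) == 1001           # all iterates distinct
```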
Graphings can represent limit objects of sequences of bounded-degree graphs that are convergent in the local (Benjamini-Schramm) sense [2,8], but also in a stronger, local-global sense [10].

Double measure spaces
For both graphons and graphings, all essential information is contained in the quadruple (J, A, λ, η), where λ is a probability measure on (J, A) and η is a measure on (J × J, A × A). Such a quadruple will be called a double measure space. It turns out that double measure spaces play a role in other recent work in graph limit theory, as limit objects for graph sequences that are neither dense nor bounded-degree, but convergent in some well-defined sense [14,1]. Describing these limit theories would take too much space, but as an example for which a very reasonable limit can be defined in terms of double measure spaces we mention the sequence of hypercubes.
As indicated in the introduction, we study the even simpler structure obtained by dropping the node measure. So we are left with a (standard Borel) sigma-algebra with a measure on its square, generalizing the counting measure of edges. If this measure is symmetric, then it corresponds to a time-reversible Markov chain with a stationary distribution; in the general case, the structure is even more general than a (non-reversible) Markov chain with a stationary distribution. It turns out that a large part of flow theory has a natural and nontrivial generalization to this quite simple structure.

Notation
Let (J, A) be a sigma-algebra. Unless specifically emphasized otherwise, we assume that (J, A) is a standard Borel space of continuum cardinality; in particular, A separates any two points, and it is countably generated. Since the sigma-algebra A determines its underlying set, we can talk about the standard Borel space as a sigma-algebra (where, in the case of the sigma-algebra denoted by A, the underlying set will be denoted by J). We denote by 𝟙_A(·) the indicator function of a set A. A stepfunction is a measurable function that has a finite range; equivalently, it is a finite linear combination of indicator functions of measurable sets. We denote by δ_s the Dirac measure, the probability distribution concentrated on s ∈ J.
If A is a sigma-algebra, we denote by A² = A × A the product sigma-algebra of A with itself; A³ etc. are defined analogously. Sometimes it will be necessary to distinguish the factors (even though they are identical), and we write A² = A₁ × A₂. For µ ∈ M(A) and A ∈ A, we define the restriction measure µ_A(X) = µ(A ∩ X). For a set X ⊆ J × J, let X* = {(y, x) : (x, y) ∈ X}; for a signed measure µ on A × A, we define µ*(X) = µ(X*). We set µ_J(A) = µ(A × J) and µ*_J(A) = µ*(A × J) = µ(J × A). So µ_J and µ*_J are the two marginals of µ.
For a measure µ ∈ M(Aⁿ), let µ₁ denote its marginal on all coordinates but the first: µ₁(X) = µ(J × X) for X ∈ Aⁿ⁻¹. We define µ₂, µ₃₄, etc. analogously. For a permutation π of {1, . . . , n}, we denote by µ^π the measure obtained by permuting the coordinates according to π. So for n = 2 we have µ^(12) = µ* and µ₁ = µ*_J. We denote the Jordan decomposition of a signed measure α ∈ M(A) by α = α⁺ − α⁻, and its total variation measure by |α| = α⁺ + α⁻. For two measures α, β on A, we consider the Jordan decomposition of their difference and define the measures

α ∧ β = α − (α − β)⁺   (the meet)   and   α ∨ β = α + (β − α)⁺   (the join).

An equivalent way of defining the meet is

(α ∧ β)(X) = α(X ∩ J⁻) + β(X ∩ J⁺)   (X ∈ A),

where J = J⁺ ∪ J⁻ is the Hahn decomposition of J according to the sign of α − β. This measure is the largest nonnegative measure γ such that γ ≤ α and γ ≤ β.
Another useful operation is the following way of lifting a measure ϕ ∈ M(A) to M(A²):

ϕ^∆(X) = ϕ({x ∈ J : (x, x) ∈ X})   (X ∈ A²).

So ϕ^∆ is supported on the diagonal {(x, x) : x ∈ J}, and it is obtained by pushing ϕ forward by the map x ↦ (x, x).

Total variation distance
We endow the linear space M(A) with the total variation norm

‖α‖ = |α|(J) = sup_{A∈A} α(A) − inf_{B∈A} α(B).

We note that the supremum and the infimum are attained when J = A ∪ B is a Hahn decomposition of α. With this norm, M(A) becomes a Banach space. For two signed measures α and β on A, we define their total variation distance by d_tv(α, β) = ‖α − β‖. Warning: if α and β are probability measures, then sup_{A∈A}(α(A) − β(A)) = − inf_{A∈A}(α(A) − β(A)), and so d_tv(α, β) = 2 sup_A(α(A) − β(A)). In probability theory, the total variation distance is often defined as sup_A(α(A) − β(A)), which is a factor of 2 smaller.
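For measures with finitely many atoms, the Jordan decomposition and the total variation norm reduce to elementary arithmetic. A discrete sketch (the measures are illustrative point-mass toys, not the general case):

```python
# Signed measures with finite support, represented as dicts point -> mass.
def jordan(mu):
    """Jordan decomposition mu = pos - neg into nonnegative parts."""
    pos = {x: v for x, v in mu.items() if v > 0}
    neg = {x: -v for x, v in mu.items() if v < 0}
    return pos, neg

def tv_norm(mu):
    """Total variation norm ||mu|| = mu^+(J) + mu^-(J)."""
    pos, neg = jordan(mu)
    return sum(pos.values()) + sum(neg.values())

def d_tv(alpha, beta):
    diff = {x: alpha.get(x, 0) - beta.get(x, 0)
            for x in set(alpha) | set(beta)}
    return tv_norm(diff)

alpha = {"a": 0.5, "b": 0.5}
beta = {"b": 0.5, "c": 0.5}
# Probability measures: d_tv = 2 * sup_A (alpha(A) - beta(A)) = 2 * 0.5 = 1.
assert d_tv(alpha, beta) == 1.0
assert tv_norm({"a": 2, "b": -3}) == 5
```

Note the factor of 2 against the probabilists' convention, exactly as in the warning above.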
We'll need an explicit description of the total variation distance of a measure from some simply defined sets.
Proof. Statements (a) and (b) are easy. To prove (a), we use that the bound holds for every σ ∈ M⁺(A); (b) follows by applying (a) with ψ − µ in place of µ.
The ≥ direction in (c) follows by a similar argument: for every σ ∈ M(A 2 ), we have The proof of equality is a bit trickier. Let f = d(ψ ∧ µ J )/dµ J and Then Hence σ J ≤ ψ and so σ ∈ K, and σ J ≤ µ J , which implies that

Weak and sequential convergence
We collect some technical lemmas about convergence of measures.
Lemma 3.2. Let (J, A) be a standard Borel space, and let ψ ∈ M⁺(A) be a nonnegative measure. Let µ₁, µ₂, · · · ∈ M(A) be signed measures with |µₙ| ≤ ψ. Then there is a subsequence n₁ < n₂ < . . . of natural numbers and a signed measure µ ∈ M(A) with |µ| ≤ ψ such that µ_{n_i}(f) → µ(f) for every bounded measurable function f : J → R. In particular, it follows that µ_{n_i}(A) → µ(A) for every A ∈ A. In other words, every interval in the lattice of finite signed measures is sequentially compact with respect to convergence on each set in A.
Proof. We may assume that µₙ ≥ 0 (just add ψ to every measure). Let B be a countable generating set of A; we may assume that B is a set algebra. The sequence (µₙ(B) : n = 1, 2, . . . ) is bounded for every B ∈ B, since µₙ(B) ≤ ψ(B) ≤ ψ(J). By choosing an appropriate subsequence, we may assume that µₙ(B) tends to some value µ(B) for all B ∈ B. Clearly µₙ is a pre-measure on B. We claim that µ too is a pre-measure on B. Finite additivity of µ is trivial, and so is 0 ≤ µ ≤ ψ on B. It follows that µ extends to a measure on A. Uniqueness of the extension implies that 0 ≤ µ ≤ ψ on the whole sigma-algebra A. Let S ∈ A; we claim that µₙ(S) → µ(S) (n → ∞). For every ε > 0, there is a set B ∈ B such that ψ(S△B) ≤ ε/3. This implies that |µₙ(S) − µₙ(B)| ≤ µₙ(S△B) ≤ ψ(S△B) ≤ ε/3, and similarly |µ(S) − µ(B)| ≤ ε/3. Thus |µₙ(S) − µ(S)| ≤ |µₙ(S) − µₙ(B)| + |µₙ(B) − µ(B)| + |µ(B) − µ(S)| ≤ ε if n is large enough. So we see that, for the subsequence we selected, the conclusion of the lemma holds true for indicator functions 𝟙_A (A ∈ A). It follows that it holds for stepfunctions (linear combinations of indicator functions), and hence for every bounded measurable function, since these can be uniformly approximated by stepfunctions.
Then µₙ → µ weakly if and only if µₙ(A₁ × A₂) → µ(A₁ × A₂) for all A₁ ∈ B₁ and A₂ ∈ B₂. Similarly as in Lemma 3.2, we can (formally) strengthen the conclusion. Let us call a finite linear combination of indicator functions 𝟙_{A₁×A₂}, Aᵢ ∈ Bᵢ, a 2-stepfunction. Then µₙ(f) → µ(f) for every function f : K₁ × K₂ → R that is the limit of a uniformly convergent sequence of 2-stepfunctions. However, this family of functions does not seem to have a nice characterization.
Proof. (Necessity.) Let ε > 0. There are continuous functions fᵢ approximating the indicators 𝟙_{A_i}. The first term telescopes; here the first term can be estimated easily. Similarly, the second term in (7) is at most ε, so the first term in (6) is at most 2ε. Similarly, the last term in (6) is at most 2ε. Finally the middle term in (6) is at most ε if n is large enough, by the weak convergence µₙ → µ. Thus |µₙ(A₁ × A₂) − µ(A₁ × A₂)| ≤ 5ε for n large enough. Here is a topology-free corollary: Let (Jᵢ, Bᵢ) (i = 1, 2) be Borel spaces, let λᵢ be a probability measure on (Jᵢ, Bᵢ), and let µ₁, µ₂, · · · ∈ M(J₁ × J₂) be coupling measures between λ₁ and λ₂. Then there is an infinite subsequence µ_{n₁}, µ_{n₂}, . . . and a measure µ coupling λ₁ and λ₂ such that µ_{n_i}(A₁ × A₂) → µ(A₁ × A₂) for all A₁ ∈ B₁ and A₂ ∈ B₂.

Decomposition of measures
We need the fact that every measure σ majorized by another measure ψ is a convex combination of restrictions of ψ. To be precise: Lemma 3.5. For every two measures 0 ≤ σ ≤ ψ on J there are subsets B n ∈ A and real numbers c n ≥ 0 (n ∈ N) such that n c n = 1, ψ(B n ) = σ(J) and n c n ψ Bn = σ.
Proof. We may assume that ψ(J) > σ(J) = 1. Let X consist of all measurable functions g : J → R such that 0 ≤ g ≤ 1 and ∫_J g dψ = 1. Our first goal is to find a decomposition

f = c₁g₁ + d₁f₁,   where g₁ = 𝟙_{B₁} for some B₁ ∈ A with ψ(B₁) = 1, f₁ ∈ X, c₁ > 0, d₁ ≥ 0 and c₁ + d₁ = 1.   (8)

It is easy to check that t₁ > 0. If t₁ ≥ 1, then we must have f = 𝟙_{B₁} and we are done. So suppose that 0 < t₁ < 1. Then c₁ > 0, and on the other hand, it is easy to check that ∫_J f₁ dψ = ∫_J g₁ dψ = 1, and so f₁ ∈ X. Thus a decomposition as in (8) exists.
Among all decompositions (8), we choose one where c₁ is close to its supremum; say, at least half of it. Repeating this construction, we get a decomposition f = c₁g₁ + c₂g₂ + · · · + cₙgₙ + dₙfₙ, where gᵢ = 𝟙_{Bᵢ} for some Bᵢ ∈ A with ψ(Bᵢ) = 1, fₙ ∈ X, cᵢ > 0, dₙ ≥ 0 and c₁ + · · · + cₙ + dₙ = 1. If dₙ = 0 for some n, then we are done. Otherwise, we get a decomposition into an infinite sum. Since dₙfₙ is pointwise decreasing, the limit function r exists, and clearly r ≥ 0.
First, suppose that b > 0. Then r/b ∈ X, and it has a decomposition as in (8). We find an n such that cₙ < c/2; then the decomposition contradicts the almost-maximal choice of cₙ.
Proof. Define Hence for every A ∈ A, and for every U ∈ A 2 ,

Disintegration
We need the following (special) version of the Disintegration Theorem; for more general statements, see [6,5].
Theorem 3.7. Let (J, A) and (K, B) be standard Borel spaces, let f : K → J be a measurable map, let ψ be a probability measure on (K, B), and let π(.) = ψ(f⁻¹(.)) be the push-forward of ψ under f. Then there is a family of probability measures (ψ_x : x ∈ J) on (K, B) such that x ↦ ψ_x(B) is measurable for every B ∈ B, and

ψ(B ∩ f⁻¹(A)) = ∫_A ψ_x(B) dπ(x)   for all A ∈ A and B ∈ B.

It follows that for every A ∈ A with π(A) > 0 and every B ∈ B (applying the equation above),

ψ(B | f⁻¹(A)) = (1/π(A)) ∫_A ψ_x(B) dπ(x).

This illustrates that one can think of ψ_x as ψ conditioned on f⁻¹(x), even though the condition has (typically) probability 0, and so the conditional probability is not defined.
Proof. Apply Theorem 3.7 with K = J × J and B = A × A, where f is the canonical projection on the first coordinate.
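In the discrete case, disintegration is just ordinary conditioning, and the defining identity of Theorem 3.7 can be checked directly. A toy sketch with an invented joint distribution, where f is the projection to the first coordinate:

```python
from collections import defaultdict

# psi: a joint distribution on K = J x J; f projects to the first coordinate.
psi = {(0, 0): 0.1, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.4}

pi = defaultdict(float)          # push-forward of psi under f
for (x, y), p in psi.items():
    pi[x] += p

def psi_cond(x):
    """psi_x: psi conditioned on the fiber f^{-1}(x)."""
    return {y: p / pi[x] for (u, y), p in psi.items() if u == x}

# Check: psi(B ∩ f^{-1}(A)) = sum_{x in A} psi_x(B) * pi(x).
A, B = {0}, {1}
lhs = sum(p for (x, y), p in psi.items() if x in A and y in B)
rhs = sum(sum(psi_cond(x).get(y, 0) for y in B) * pi[x] for x in A)
assert abs(lhs - rhs) < 1e-12
```

In the measurable setting the fibers typically have probability 0, which is exactly what the disintegration theorem circumvents.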

Markov spaces
A Markov space consists of a sigma-algebra A, together with a probability measure η on A 2 whose marginals are equal. We call η the ergodic circulation, and its marginals π(A) = η(A × J) = η(J × A), the stationary measure of the Markov space (A, η).
As the terminology above suggests, Markov spaces are intimately related to Markov chains. A Markov chain is usually defined on a sigma-algebra A, specifying a probability measure P_u on A for every u ∈ J, called the transition distributions. One assumes that for every A ∈ A, the value p_A(u) = P_u(A) is a measurable function of u ∈ J. This structure is sometimes called a Markov scheme.
Specifying, in addition, a starting distribution σ on (J, A), we get a Markov chain, i.e. a sequence of random variables (w₀, w₁, w₂, . . .) with values from J such that w₀ is chosen from distribution σ₀ = σ and w_{i+1} is chosen from distribution P_{w_i} (whatever the previous part w₀, . . . , w_{i−1} of the Markov chain is). Sometimes we call the sequence (w₀, w₁, w₂, . . .) a random walk. The distribution of wᵢ will be denoted by σᵢ.
A probability measure π on (J, A) is a stationary distribution for the Markov scheme if choosing w₀ from this distribution, the next point w₁ of the walk will have the same distribution. Then of course, wᵢ has this same distribution for every i ≥ 0. This is equivalent to saying that

π(A) = ∫_J P_u(A) dπ(u)

for every A ∈ A. In this case, we call (w₀, w₁, w₂, . . .) a stationary random walk. While finite Markov schemes always have a stationary distribution, this is not true for infinite underlying sigma-algebras. Furthermore, a Markov scheme may have several stationary distributions. (In the finite case, this happens only if the underlying directed graph is not strongly connected.) A Markov scheme (J, {P_u : u ∈ J}) with a fixed stationary distribution π defines a Markov space. The ergodic circulation of this Markov space is the joint distribution measure η of (w₀, w₁), where w₀ is a random point from the stationary distribution. More explicitly,

η(A × B) = ∫_A P_u(B) dπ(u)   (A, B ∈ A)

(which extends from product sets to all sets in A²). The marginals of this ergodic circulation equal the stationary distribution π. The ergodic circulation η determines the Markov scheme (except for a set of measure zero in the stationary measure). Indeed, the stationary measure can be recovered as the marginal of η. Writing the definition of η as above, we see that P_u(B), as a function of u, can be expressed as the Radon-Nikodym derivative

P_u(B) = (dη(· × B)/dπ)(u)

almost everywhere.
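In the finite case, the correspondence between a Markov scheme and its ergodic circulation is a matrix computation: η(i, j) = π(i)P(i, j), and both marginals of η equal π. A sketch with an invented transition matrix:

```python
# An invented 3-state row-stochastic transition matrix P.
P = [[0.5, 0.5, 0.0],
     [0.2, 0.3, 0.5],
     [0.4, 0.0, 0.6]]

# Approximate the stationary distribution by power iteration pi <- pi P.
pi = [1 / 3] * 3
for _ in range(2000):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

# Ergodic circulation: joint distribution of one stationary step (w0, w1).
eta = [[pi[i] * P[i][j] for j in range(3)] for i in range(3)]

row = [sum(eta[i][j] for j in range(3)) for i in range(3)]  # eta(A x J)
col = [sum(eta[i][j] for i in range(3)) for j in range(3)]  # eta(J x A)
assert all(abs(row[k] - pi[k]) < 1e-9 for k in range(3))
assert all(abs(col[k] - pi[k]) < 1e-9 for k in range(3))
```

Recovering P from η by dividing row i of η by π(i) is the finite analogue of the Radon-Nikodym derivative above.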
With some refinement of this argument, using the Disintegration Theorem 3.7, one can show that every Markov space is obtained from a Markov scheme with a stationary distribution: Proposition 3.9. Let G = (J, A, η) be a Markov space. Then there is a Markov scheme on the sigma-algebra (J, A) with ergodic circulation η. The transition distributions P u are uniquely determined up to a set of points u of π-measure zero.
It is clear that if (A, η) is a Markov space, then (A, η*) is a Markov space with the same stationary distribution. The corresponding Markov chain is called the reverse chain. If (wₙ : n ∈ Z) is a two-way infinite stationary random walk, then (w_k, w_{k−1}, w_{k−2}, . . .) is a stationary random walk for the reverse space (A, η*), for any starting index k ∈ Z.
We say that a Markov space is irreducible if η(A × A^c) > 0 for every A ∈ A with 0 < π(A) < 1. We will need the following simple (folklore) lemma.
Lemma 3.10. Let G be an irreducible Markov space, and let S ∈ A have π(S) > 0. Then for π-almost-all starting points x, a random walk started at x hits S infinitely often almost surely.

Banach spaces
We need two (folklore) consequences of basic Banach space theory. The first is a consequence of the Hahn-Banach Theorem.
Theorem 3.11. Let K 1 , . . . , K n be open convex sets in a Banach space B. Then K 1 ∩ · · · ∩ K n = ∅ if and only if there are bounded linear functionals L 1 , . . . L n on B and real numbers a 1 , . . . , a n such that L 1 + · · · + L n = 0, a 1 + · · · + a n = 0, and for each i, either L i = 0 and a i = 0, or L i (x) > a i for x ∈ K i , and for at least one i, the second possibility holds.
If L i = 0 and a i = 0 for some i, then already the intersection of the sets K j (j = i) is empty.
Proof. The sufficiency of the condition is trivial.
To prove the necessity, consider the Banach space B′ = B ⊕ · · · ⊕ B (n copies) and the open convex set K′ = K₁ × · · · × Kₙ ⊆ B′. If any Kᵢ is empty, then the conclusion is trivial, so suppose that K′ ≠ ∅. Also consider the closed linear subspace ("diagonal") ∆ = {(x, . . . , x) : x ∈ B}. By the Hahn-Banach Theorem, there is a bounded linear functional L on B′ such that L(y) = 0 for y ∈ ∆, and L(y) > 0 for y ∈ K′. Define Lᵢ(x) = L(0, . . . , 0, x, 0, . . . , 0); then Lᵢ is a bounded linear functional on B, and L(x₁, . . . , xₙ) = L₁(x₁) + · · · + Lₙ(xₙ). The condition that L(y) = 0 for y ∈ ∆ means that L₁(x) + · · · + Lₙ(x) = 0 for all x ∈ B. Let aᵢ = inf_{x∈Kᵢ} Lᵢ(x); then either Lᵢ = 0 and aᵢ = 0, or Lᵢ(x) > aᵢ for every x ∈ Kᵢ, since Kᵢ is open. Since L(y) > 0 for y ∈ K′, there must be at least one i with Lᵢ ≠ 0. Furthermore, a₁ + · · · + aₙ = inf_{y∈K′} L(y) ≥ 0. We can decrease any aᵢ to get equality in the last inequality.
The other lemma in this section is an easy consequence of the Inverse Mapping Theorem and the Hahn-Banach Theorem.

The dual space of measures
The dual space of M(A) does not seem to have a useful complete description, but the following lemma is often a reasonable substitute.
Note that this implies that for any countable set {µ₁, µ₂, . . . } of measures, L has such a representation valid for every µᵢ. Indeed, every µᵢ is absolutely continuous with respect to the measure ρ = ∑ᵢ µᵢ/(2ⁱ‖µᵢ‖). Proof.
The formula in the lemma defines a functional on M⁺(A²); we start by showing that this is bounded and linear on nonnegative measures. This proves that Q(ψ) ≥ ∑ᵢ Q(ψᵢ).
To prove the reverse inequality, note that Q(ψ) ≤ ∑ᵢ Q(ψᵢ), and so (13) follows.

Simple properties
The following is an almost trivial fact: (a) F is a potential; (c) F can be expressed as Hence the formula in the lemma follows.
(c)⇒(b) It suffices to note that the conditions in (b) are trivially satisfied by every function of the form 𝟙_A(x) − 𝟙_A(y), and hence also by their integrals.
and hence We can say that F is a nonnegative linear combination of "cuts".
Remark 4.2. A more general case of interest is when the function F is defined on a subset E ⊆ J × J (the edges of an oriented graph G). For a (not necessarily oriented) cycle C ⊆ E, let C⁺ and C⁻ denote the sets of forward and backward edges of C (walking around it in an arbitrary direction). We say that F has the potential property if ∑_{e∈C⁺} F(e) = ∑_{e∈C⁻} F(e) for every cycle C ⊆ E. For every function f : J → R, the function F(x, y) = f(y) − f(x) is skew symmetric and has the potential property for any E. In the finite case, every function F : E → R which is skew symmetric and has the potential property arises from a function f as F(x, y) = f(y) − f(x). In the general case, this may not hold.
Example 4.3. Consider an irrational cyclic graphing C_a (Example 2.6), and define F(x, x + a) = 1 on every edge (x, x + a) (addition modulo 1), and F(x + a, x) = −1. This is a skew symmetric function, and it has the potential property (since all cycles are trivial in this graph). But there is no measurable function f : J → R with F(x, y) = f(y) − f(x) on the edges. To see this, consider such a function, and define S_k = f⁻¹[k, k + 1). Then S_{k+1} = S_k + a, and hence λ(S_{k+1}) = λ(S_k). But this is clearly impossible, since the sets S_k partition the circle. Note that it is easy to construct a non-measurable such function: just select a representative node y_P from every two-way infinite path P in C_a, and define f(y_P + ka) = k.
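A finite contrast to Example 4.3: on a directed n-cycle, F ≡ 1 already fails the potential property (the cycle sum is n ≠ 0), so the obstruction is combinatorial; in the irrational graphing all cycles are trivial, and the obstruction becomes purely measure-theoretic. A sketch of the finite computation:

```python
# On a directed n-cycle, F = 1 on every edge sums to n != 0 around the
# cycle, so no f with F(x, y) = f(y) - f(x) can exist.
n = 7
edges = [(i, (i + 1) % n) for i in range(n)]
F = {e: 1 for e in edges}

cycle_sum = sum(F[e] for e in edges)
assert cycle_sum == n != 0

# Trying to integrate a potential along the cycle forces f(0) = f(0) + n:
f = {0: 0}
for (i, j) in edges[:-1]:
    f[j] = f[i] + F[(i, j)]        # f grows by 1 along each edge
assert f[n - 1] + F[(n - 1, 0)] != f[0]   # the closing edge fails
```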

Potent linear functionals
We say that a bounded linear functional P on M(A²) is potent if

P(κ₃) + P(κ₁) = P(κ₂)

for every κ ∈ M⁺(A³) (where, in the marginal notation of Section 3, κ₃, κ₁ and κ₂ are the marginals of κ on the coordinate pairs (1, 2), (2, 3) and (1, 3), respectively). If F : J × J → R is a potential, then the functional P(µ) = µ(F) is potent. The converse is a bit trickier.
for every measure µ ≪ ψ₀. Since this applies, in particular, to every restriction of ψ₀, it follows that g(x, y) = −g(y, x) for ψ₀-almost all pairs (x, y). Changing g on a set of ψ₀-measure 0 does not change the condition that P(µ) = µ(g) for all µ ≤ ψ₀, so we may assume that g is skew symmetric. Since the potency condition holds for every restriction of κ as well, it follows that g(x, y) + g(y, z) = g(x, z) for κ-almost all triples (x, y, z). Define

g₁(x, y) = ∫_J (g(x, u) + g(u, y)) dπ(u).

Then for every S ∈ A², we get that g₁ = g ψ-almost everywhere, and so g₁ defines the same functional P as g for all measures µ ≪ ψ. Furthermore, using that g(u, z) = −g(z, u), we have

g₁(x, y) + g₁(y, z) = ∫_J (g(x, u) + g(u, y)) dπ(u) + ∫_J (g(y, u) + g(u, z)) dπ(u) = ∫_J (g(x, u) + g(u, z)) dπ(u) = g₁(x, z),

so g₁ is a potential by Lemma 4.1.

Simple properties
A circulation is a finite signed measure α ∈ M(A²) that satisfies

α(X × J) = α(J × X)   for every X ∈ A.   (17)

In other words, the two marginals α₁ and α₂ are equal. This is clearly equivalent to saying that α(X × X^c) = α(X^c × X) for every X ∈ A (just cancel the common part X × X in (17)). Circulations form a linear subspace C = C(A) of the space M(A²) of finite signed measures. Recall from Section 3.6 that a sigma-algebra endowed with a nonnegative circulation η that is a probability measure (i.e., η(J × J) = 1) is a Markov space, with ergodic circulation η.
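In the finite case, the two equivalent forms of the circulation condition can be compared directly on a matrix: equal marginals on the one hand, balanced cuts on the other. A sketch with an invented 3 × 3 measure:

```python
from itertools import combinations, chain

# A measure alpha on J x J given as a matrix (illustrative values).
alpha = [[0.0, 0.2, 0.1],
         [0.1, 0.0, 0.2],
         [0.2, 0.1, 0.0]]
J = range(3)

# Marginal form: alpha(X x J) = alpha(J x X) for singletons X suffices here.
marginals_equal = all(
    abs(sum(alpha[i][j] for j in J) - sum(alpha[j][i] for j in J)) < 1e-12
    for i in J)

def cut_form():
    """Equivalent cut form: alpha(X x X^c) = alpha(X^c x X) for every X."""
    subsets = chain.from_iterable(combinations(J, k) for k in range(4))
    for X in map(set, subsets):
        out = sum(alpha[i][j] for i in X for j in J if j not in X)
        inn = sum(alpha[i][j] for i in J if i not in X for j in X)
        if abs(out - inn) > 1e-12:
            return False
    return True

assert marginals_equal and cut_form()
```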
We say that a signed measure µ on A² is simple if it is nonnegative and supported on a measurable set S such that S ∩ S* = ∅. (We could call S the "support" of µ, but without an underlying topology this support is defined only up to a set of µ-measure zero.) In other words, µ ∧ µ* = 0. Every measure α on J × J that is symmetric (in the sense that α* = α) is automatically a circulation (for probability measures α, this means that the corresponding Markov chain is reversible). We say that a (nonnegative) measure β on A² is acyclic if there is no nonzero circulation α such that 0 ≤ α ≤ β. Note that an acyclic measure β is necessarily simple, since β ∧ β* ≤ β and β ∧ β* is symmetric and therefore a circulation, and so it must be zero.
In the finite case, these circulations generate the space of all circulations (even those with n ≤ 3 do).   We don't claim that the decomposition in (b) is unique.
Then β and γ are nonnegative, β is symmetric. Furthermore, if J = J + ∪ J − is the Hahn decomposition of J with respect to the sign of α − α * , then γ = 0 on J − and γ * = 0 on J + , so γ is simple.
To prove the converse, let α be a circulation; then α(F) = 0 for every potential F. The following lemma describes a "dual" connection between potentials and circulations. Proof. The "if" part of the first assertion is trivial. To prove the "only if" part, we apply Lemma 3.12, which implies the first assertion. The second follows by the representation of K in Lemma 3.13: there is a bounded measurable function g : J → R such that K(ν) = ν(g) for every ν ≪ ψ₁ + ψ₂. Then for the potential F(x, y) = g(y) − g(x) and all µ ≪ ψ,

Existence of circulations
Our main goal in this section is to extend basic results on circulations in combinatorial optimization, like the Hoffman Circulation Theorem and a characterization of optimal circulations, to measures.
Given two measures ϕ and ψ on J × J, we can ask whether there exists a circulation α such that ϕ ≤ α ≤ ψ. Clearly ϕ ≤ ψ is a necessary condition, but it is not sufficient in general. The following theorem generalizes the Hoffman Circulation Theorem.

Theorem 4.11. Let ϕ and ψ be measures on A² with ϕ ≤ ψ. Then there exists a circulation α with ϕ ≤ α ≤ ψ if and only if ϕ(X × X^c) ≤ ψ(X^c × X) for every X ∈ A.
Proof. The necessity of the condition is trivial: if the circulation α exists, then ϕ(X × X c ) ≤ α(X × X c ) = α(X c × X) ≤ ψ(X c × X).
First, we prove the weaker fact that the set X = {µ ∈ M(A²) : ϕ ≤ µ ≤ ψ} is at total variation distance zero from the set C of circulations. Suppose, by way of contradiction, that c = d_tv(C, X) > 0. Let X′ = {µ ∈ M(A²) : d_tv(µ, X) < c}. Then X′ is a convex subset of M(A²) with a nonempty interior. Since X′ ∩ C = ∅, the Hahn-Banach Theorem implies that there is a bounded linear functional L on M(A²) such that L(µ) = 0 for all µ ∈ C, and L(µ) < 0 for all µ in the interior of X′, in particular for every µ ∈ X. The first condition on L implies, by Lemma 4.10, that there is a potential function F(x, y) = g(x) − g(y) (with a bounded and measurable function g : J → R) such that L(µ) = µ(F) for every µ ∈ M(A²) such that µ ≪ ψ.
To conclude, we select circulations αₙ ∈ C and measures βₙ ∈ X such that αₙ − βₙ → 0 (n → ∞). By Lemma 3.2, there is a measure β such that βₙ(S) → β(S) (n → ∞) for all S ∈ A² along an appropriate subsequence of the indices n; hence αₙ(S) → β(S) as well. In particular, for every A ∈ A we have β(A × J) = β(J × A), and so β is a circulation; by a similar argument, β ∈ X. Similar remarks apply to notions like flows below, and will not be repeated.
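For finite graphs, the theorem specializes to Hoffman's Circulation Theorem, and the cut condition can be checked by brute force. Below is a small sketch (the triangle example and all names are hypothetical, not from the text): on a single directed cycle, every circulation takes a constant value on the edges, so a feasible circulation exists exactly when max lower ≤ min upper, and the cut condition agrees with this.

```python
from itertools import combinations

def hoffman_condition(nodes, lower, upper):
    """Check the cut condition lower(X, X^c) <= upper(X^c, X) for every cut.

    lower/upper map directed edges (u, v) to bounds -- the finite analogue
    of the measures phi <= psi in the theorem."""
    def cut(bounds, A, B):
        return sum(w for (u, v), w in bounds.items() if u in A and v in B)
    for r in range(1, len(nodes)):
        for X in combinations(nodes, r):
            X = set(X)
            Xc = set(nodes) - X
            if cut(lower, X, Xc) > cut(upper, Xc, X):
                return False
    return True

# Directed triangle a -> b -> c -> a: a circulation is constant on the cycle,
# so one exists within the bounds iff max(lower) <= min(upper).
edges = [("a", "b"), ("b", "c"), ("c", "a")]
lower = dict(zip(edges, [1.0, 0.5, 0.0]))
upper = dict(zip(edges, [2.0, 2.0, 1.5]))

feasible = max(lower.values()) <= min(upper.values())
print(hoffman_condition(["a", "b", "c"], lower, upper), feasible)
```

On this instance both checks agree; lowering an upper bound below the largest lower bound makes both fail together.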

Optimal circulations
If a feasible circulation exists, we may be interested in finding a feasible circulation µ which minimizes a "cost", or maximizes a "value" µ(v), given by a bounded measurable function v on J × J. Equivalently, we want to characterize when a value of 1 (say) can be achieved.
This cannot be characterized in terms of cut conditions any more, but an elegant necessary and sufficient condition can still be formulated.
Using that −F is a potential whenever F is, the condition can be split into three inequalities, one for every potential F. Condition (25) is equivalent to the condition given for the existence of a circulation in Theorem 4.11, obtained with F(x, y) = 𝟙_X(x) − 𝟙_X(y). If ϕ = 0, then only (23) is nontrivial. Applying the conditions with F = 0, we get that ϕ(v) ≤ 1 ≤ ψ(v).
Proof. The necessity of the condition is trivial: if such a circulation α exists, the inequalities follow. To prove the converse, we proceed along similar lines as in the proof of Theorem 4.11. Clearly the sets C, H and X are nonempty. Fix an ε > 0, and replace them by their ε-neighborhoods C′ = {µ ∈ M(A²) : d_tv(µ, C) < ε} etc. We start by proving the weaker statement that these neighborhoods have a common point. Suppose not. Then Theorem 3.11 implies that there are bounded linear functionals L₁, L₂, L₃ on M(A²), not all zero, and real numbers a₁, a₂, a₃ such that L₁ + L₂ + L₃ = 0, a₁ + a₂ + a₃ = 0, and Lᵢ(µ) > aᵢ for all µ ∈ C′, H′ and X′, respectively. The functional L₁ remains bounded from below for every circulation α ∈ C, and since C is a linear subspace, this implies that L₁ vanishes on C. By a similar reasoning, L₂ must be a constant b on the hyperplane H; we may scale the Lᵢ so that b ∈ {−1, 0, 1}. It is easy to see that this implies the more general formula L₂(µ) = b·µ(v). Finally, we can express L₃ as −L₁ − L₂. By Lemma 4.10, we can write L₁(µ) = µ(F) with some potential F on J × J. (Warning: this representation is not necessarily valid for all µ.) We also know that Lᵢ(µ) > aᵢ for any α ∈ C, ν ∈ H and µ ∈ X, and hence µ(F + bv) < b for all µ ∈ X,
which contradicts the condition in the theorem, and thus proves (26).
To prove the stronger statement that C ∩ H ∩ X ≠ ∅, note that (26) implies that there are sequences of measures αₙ ∈ C, νₙ ∈ H and µₙ ∈ X such that d_tv(µₙ, αₙ) → 0 and d_tv(µₙ, νₙ) → 0. Furthermore, since 0 ≤ µₙ ≤ ψ, Lemma 3.2 applies, and so there is a measure µ ∈ X such that, for an appropriate infinite subsequence of indices, µₙ(U) → µ(U) for all U ∈ A². This implies that αₙ(U) → µ(U) and νₙ(U) → µ(U) along this subsequence. Thus µ(A × J) = µ(J × A) for every A ∈ A, so µ ∈ C. Similarly, by Lemma 3.2,

Integrality
In the case when v ≡ 1 and ϕ ≡ 0, the condition in Corollary 4.14 implies a cut condition. One may wonder whether, at least in this special case, such a cut condition is also sufficient in Corollary 4.14. This, however, fails even in the finite case: on the directed path of length 2 in which the edges have capacity 1, these cut conditions for the existence of an ergodic circulation are satisfied, but the only feasible circulation is the zero circulation.
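The counterexample is easy to verify numerically (a small sketch; the grid search below is purely illustrative): on the path s → m → t, flow conservation at s already forces the first edge to carry 0, and conservation at m then kills the second edge too.

```python
# The directed path s -> m -> t with capacities 1: the cut conditions hold,
# yet the only circulation (inflow = outflow at *every* node) is the zero one.
def is_circulation(f, nodes, edges):
    return all(
        abs(sum(f[e] for e in edges if e[0] == v)
            - sum(f[e] for e in edges if e[1] == v)) < 1e-12
        for v in nodes
    )

nodes = ["s", "m", "t"]
edges = [("s", "m"), ("m", "t")]
steps = [i / 10 for i in range(11)]   # grid of capacity-respecting edge values
circulations = [
    (a, b) for a in steps for b in steps
    if is_circulation({("s", "m"): a, ("m", "t"): b}, nodes, edges)
]
print(circulations)   # only the zero circulation survives
```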
However, the following weaker requirement can be imposed on F. This property of F is clearly equivalent to saying that in the representation F(x, y) = f(x) − f(y), the function f can be required to take integer values. For finite graphs, this assertion follows easily from the fact that the matrix of flow conditions is totally unimodular. In the infinite case, we have to use a different proof.
Replacing f by f + c with any real constant c, the potential F and the set S do not change, but the potential F_c(x, y) = ⌊f(x) + c⌋ − ⌊f(y) + c⌋ does. Choosing c randomly and uniformly from [0, 1], the expectation of the last two terms is 0, since E({f(x) + c}) = 1/2 for any x (where {·} denotes fractional part). This implies that there is a c ∈ [0, 1] for which the desired inequality holds. So replacing f by ⌊f + c⌋, we get an integer-valued potential that violates condition (22) even more, which proves the Supplement.
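The averaging step can be illustrated numerically. The sketch below (with a made-up f) approximates a uniform random shift c by a fine grid: the fractional part {f(x)+c} averages to 1/2, and consequently rounding with a random shift preserves potential differences in expectation, which is why some fixed c does at least as well against any linear objective.

```python
import math

# Hypothetical potential values f; any bounded measurable f would do.
f = {"a": 0.3, "b": 1.72, "c": -0.45, "d": 2.0}

def frac(t):
    return t - math.floor(t)

# Approximate a uniform random shift c in [0, 1) by a fine midpoint grid.
N = 10_000
shifts = [(k + 0.5) / N for k in range(N)]

# E_c {f(x) + c} = 1/2 for every fixed x:
avg_frac = {x: sum(frac(v + c) for c in shifts) / N for x, v in f.items()}

# Hence E_c [ floor(f(x)+c) - floor(f(y)+c) ] = f(x) - f(y): rounding with a
# random shift preserves potential differences in expectation.
avg_rounded = sum(
    math.floor(f["b"] + c) - math.floor(f["c"] + c) for c in shifts
) / N
print(avg_frac["a"], avg_rounded)
```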
Here is a more combinatorial reformulation for the case v ≡ 1: Corollary 4.16. Given two measures ϕ, ψ ∈ M⁺(A²) such that ϕ ≤ ψ, there is a circulation α with ϕ ≤ α ≤ ψ and α(J × J) = 1 if and only if the corresponding inequality holds for every partition J = J₁ ∪ · · · ∪ J_k into a finite number of sets in A. The (insufficient) cut condition discussed above corresponds to the case when k = 2.

Flows
Let σ, τ ∈ M(A) be two measures with σ(J) = τ (J). We consider σ the "supply" and τ , the "demand". We call a measure ϕ ∈ M + (A 2 ) a flow from σ to τ , or briefly a σ-τ flow, if ϕ J − ϕ * J = σ − τ . We may assume, if convenient, that the supports of σ and τ are disjoint, since subtracting σ ∧ τ from both does not change their difference.
Note that every measure ϕ ∈ M + (A 2 ) is a flow from ϕ J to ϕ * J , and also a flow from ϕ J \ϕ * J to ϕ * J \ϕ J . But we are usually interested in starting with the supply and the demand, and constructing appropriate flows. We may require ϕ to be acyclic, since subtracting a circulation does not change ϕ J − ϕ * J .
As before, we may also be given a nonnegative measure ψ on A 2 (the "edge capacity").

Supply-Demand and Max-Flow-Min-Cut
We can generalize the Supply-Demand Theorem as follows. Proof. Let us add a new point r to J, and extend A to the sigma-algebra A′ on J′ = J ∪ {r} generated by A and {r}. For a set X ⊆ {r} × J, define X′ = {j ∈ J : (r, j) ∈ X}, and define X″ ⊆ J analogously for X ⊆ J × {r}. Extend ψ to a measure ψ′ on A′ × A′ as follows: Apply Theorem 4.11 with ϕ(X) = ψ′(X ∩ ({r} × J)) and ψ(X) = ψ′(X). The condition is fulfilled: indeed, trivially ϕ ≤ ψ; if r ∉ S, then So there is a circulation α on (J′, A′) such that ϕ ≤ α ≤ ψ. Let β be the restriction of α to J × J; then 0 ≤ β ≤ ψ. Furthermore, α is a circulation, so and similarly Since α is a circulation on A′ × A′, we have α(J′ × Y) = α(Y × J′), and so So β is a feasible σ-τ flow.
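In the finite case, the construction above is the classical reduction of a supply-demand problem to a single-source flow problem: attach an apex node r with an edge r → v of capacity σ(v) for each supply node and v → r of capacity τ(v) for each demand node. A sketch (the network, names, and the split of the apex into a source `r` and a sink `r2` for the max-flow formulation are all illustrative assumptions), using a small Edmonds–Karp max-flow:

```python
from collections import defaultdict, deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on a capacity dict {(u, v): c}."""
    flow = defaultdict(float)
    def residual(u, v):
        return cap.get((u, v), 0.0) - flow[(u, v)] + flow[(v, u)]
    nodes = {u for e in cap for u in e}
    total = 0.0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in nodes:
                if v not in parent and residual(u, v) > 1e-12:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total, flow
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        push = min(residual(u, w) for u, w in path)
        for u, w in path:
            flow[(u, w)] += push
        total += push

def supply_demand_feasible(cap, supply, demand):
    """Reduce to max flow: apex edges r -> v for supplies, v -> r2 for demands."""
    aux = dict(cap)
    for v, s in supply.items():
        aux[("r", v)] = s
    for v, d in demand.items():
        aux[(v, "r2")] = d
    total, _ = max_flow(aux, "r", "r2")
    return abs(total - sum(demand.values())) < 1e-9

cap = {("a", "b"): 2.0, ("a", "c"): 1.0, ("b", "c"): 1.0}
print(supply_demand_feasible(cap, {"a": 2.0}, {"c": 2.0}))
print(supply_demand_feasible(cap, {"a": 3.0}, {"c": 3.0}))
```

The demand of 2 can be served (one unit directly a → c, one via b); the demand of 3 cannot, since the total capacity of the cut around a is 3 but b → c carries at most 1.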
Given two points s, t ∈ J, a measure ϕ on A² such that ϕ_J − ϕ*_J = a(δ_s − δ_t) will be called an s-t flow of value a. So ϕ is a flow serving supply aδ_s and demand aδ_t.
We omit the details of the proof.

Transshipment
An optimization problem closely related to flows is the transshipment problem. In its simplest measure-theoretic version, we are given two measures α, β ∈ M(A) with α(J) = β(J). An α-β transshipment is a measure µ ∈ M⁺(A × A) coupling α and β; in other words, µ₂ = α and µ₁ = β. Note the difference with the notion of an α-β flow: there, only the difference µ_J − µ*_J is prescribed. In transshipment problems, one can think of J × J as the edge set of a (complete) bipartite graph whose color classes are the two copies of J.
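In the finite case, an α-β transshipment is just a nonnegative matrix with prescribed marginals, and without capacity constraints one always exists: the product coupling works. A sketch with hypothetical marginals:

```python
# Finite analogue of an alpha-beta transshipment: a nonnegative matrix mu
# whose marginals are the prescribed measures alpha and beta.
alpha = {"x": 0.5, "y": 0.3, "z": 0.2}
beta = {"u": 0.6, "v": 0.4}

total = sum(alpha.values())   # equals sum(beta.values()) by assumption

# The product coupling mu(a, b) = alpha(a) * beta(b) / total always couples
# alpha and beta, so an uncapacitated transshipment always exists.
mu = {(a, b): alpha[a] * beta[b] / total for a in alpha for b in beta}

marg_alpha = {a: sum(mu[(a, b)] for b in beta) for a in alpha}
marg_beta = {b: sum(mu[(a, b)] for a in alpha) for b in beta}
print(marg_alpha, marg_beta)
```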
First, we state a condition for the existence of a transshipment satisfying a given capacity constraint. Proof. We construct a new Borel space (J′, A′) by taking the disjoint union of two copies (J₁, A₁) and (J₂, A₂) of (J, A). We define a capacity measure ψ′ ∈ M⁺((A′)²) by ψ′(X) = ψ(X ∩ (J₁ × J₂)), and supply and demand measures α′ and β′, the copies of α on J₁ and of β on J₂. Then an α′-β′ flow satisfying the capacity constraint ψ′, restricted to J₁ × J₂, is equivalent to an α-β transshipment satisfying the capacity constraint ψ. Applying the Supply-Demand Theorem 4.17, we get the condition in the theorem.
Suppose that every edge (x, y) ∈ J × J has a given cost c(x, y) ≥ 0. We want to find a transshipment minimizing the cost µ(c) = ∫_{J×J} c dµ. We note that the minimum is attained, by Corollary 3.4. The proof follows by an easy reduction to Theorem 4.19. As a third variation on the Transshipment Problem, we can ask for a transshipment supported on a specified set E of pairs. The following result is a slight generalization of a theorem of Strassen [22], and essentially equivalent to Proposition 3.8 of Kellerer [13]. It is also a rather straightforward generalization of Theorem 2.5.2 in [17]. The result could also be considered as a limiting case of Theorem 4.21, using an appropriate capacity "measure"; see also [9]. Then there exists a probability measure µ on A₁ × A₂ supported on D and coupling π₁ and π₂.
Proof. Let {B 1 , B 2 , . . . } be a countable basis of A, then it is easy to verify that This shows that (J × J) \ D is the union of a countable number of product sets, and also that D ∈ A × A.
Proof. Let f 1 and f 2 be the projection maps of J 1 × J 2 and J 2 × J 3 onto J 2 , respectively, and let By Corollary 4.23, there is a measure µ on A 1 × A 2 × A 2 × A 3 supported on D and coupling µ 1 and µ 2 . The measure ϕ obtained by pushing forward µ by the map (x 1 , x 2 , x 3 , x 4 ) → (x 1 , x 2 , x 4 ) proves the Corollary.

Path decomposition
In finite graph theory, it is often useful to decompose an s-t flow into a convex combination of flows along single paths from s to t and circulations along cycles. We can generalize this construction to measurable spaces as follows.
Let K = J ∪ J 2 ∪ J 3 ∪ . . . be the set of all finite nonempty sequences of points of J; we also call these walks. The set K is endowed with the sigma-algebra B = A ⊕ A 2 ⊕ . . . . Let K(s, t) be the subset of K consisting of walks starting at s and ending at t (s, t ∈ J); such a walk is called an s-t walk.
Given a measure τ on (K, B), we obtain a measure τ̂ on A and a measure τ̃ on A². The measure τ is finite, but τ̂ and τ̃ may take infinite values for now. If τ is a probability measure, then walking along a randomly chosen walk from the distribution τ, τ̂(X) is the expected number of times we exit a point in X (so the starting point counts, but the last point does not), and τ̃(Y) is the expected number of times we traverse an edge in Y. Mapping each walk W ∈ K to its first point, and pushing τ forward by this map, we get a measure τ_a = τ̃_J ∈ M(A). The measure τ_z is defined analogously by mapping each walk to its last point.
It is easy to see that τ̃ is a flow from τ_a to τ_z.
Proof. This theorem can be reduced to the special case when ϕ is an s-t flow by the following construction. Let us extend the universe J by two new points s and t, the sigma-algebra A by adding all sets of the form A ∪ {s}, A ∪ {t} and A ∪ {s, t} (A ∈ A), and extend the measure ϕ accordingly. Using the theorem in the special case of this s-t flow, we get a measure τ on paths starting at s, in which the trivial path (s, t) has measure zero. So τ defines a measure on nontrivial s-t paths, and since there is a natural bijection with paths in K, we get a measure on (K, B).
It is easy to check that this measure has the desired properties.
So we may also assume that ϕ is an s-t flow; we may scale it to have value 1. By the definition of flows, we have ϕ₂ − ϕ₁ = δ_s − δ_t, so the measure α = ϕ + δ_{(t,s)} satisfies α₂ = ϕ₂ + δ_t = ϕ₁ + δ_s = α₁, and so α is a nonnegative circulation on A². Let a = α(J × J) = ϕ(J × J) + 1; then η = α/a is the ergodic circulation of a Markov space. The stationary distribution of this Markov space is π = α₂/a. In particular, s and t are atoms of π, and (t, s) is an atom of η. It is easy to see that ϕ({(s, s)}) = 0: ξ = ϕ({(s, s)}) δ_{(s,s)} is a nonnegative circulation with ξ ≤ ϕ, and since ϕ is acyclic, we must have ξ = 0.
Indeed, suppose that there is a set A ∈ A with 0 < π(A) < 1 and η(A×A c ) = η(A c ×A) = 0. Clearly s and t either both belong to A or both belong to A c ; we may assume that s, t ∈ A c .
Then ϕ A×A is a circulation, and ϕ = (ϕ − ϕ A×A ) + ϕ A×A is a decomposition showing that ϕ is not acyclic, contrary to the hypothesis.
To specify a probability distribution on s-t walks, we describe how to generate a random s-t walk: Start a random walk at s, and follow it until you hit t or return to s, whichever comes first. This happens almost surely by Lemma 3.10: the distribution δ s is absolutely continuous with respect to π, and π(t) > 0. This gives a probability distribution τ on the set K(s, {s, t}) of walks from s to {s, t}.
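The stopped random walk in this argument is easy to simulate in the finite case. The chain below is a hypothetical example (its transition probabilities are not derived from the text); we run walks from s until they first hit {s, t} and observe that in this chain they all end at t:

```python
import random

random.seed(1)

# Hypothetical transition probabilities of a small chain with states s, a, t.
P = {"s": [("a", 1.0)], "a": [("t", 0.7), ("a", 0.3)]}

def walk_until(start, targets):
    """Random walk from `start`, stopped on the first arrival in `targets`."""
    w, x = [start], start
    while True:
        r, acc = random.random(), 0.0
        for y, p in P[x]:
            acc += p
            if r < acc:
                x = y
                break
        w.append(x)
        if x in targets:
            return w

walks = [walk_until("s", {"s", "t"}) for _ in range(200)]
print(all(w[0] == "s" and w[-1] == "t" for w in walks))
```

Since this chain never returns to s, every stopped walk is an s-t walk, mirroring the claim that the return event has probability zero for an acyclic flow.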
The probability that a random walk started at s returns to s before hitting t is zero.
Indeed, we can split K(s, {s, t}) = K(s, s) ∪ K(s, t). Define ρ = τ|_{K(s,s)}. Then ρ̃ ≤ τ̃ ≤ ϕ, and it is easy to see that ρ̃ is a circulation. Since ϕ is acyclic, we must have ρ̃ = 0, and so τ(K(s, s)) = 0. So τ can be considered as a probability distribution on walks from s to t.
We are going to show that τ̃ = ϕ.
Let us stop the walk after k steps, or when it hits t, or when it returns to s, whichever comes first. This gives us a distribution τ_k over walks starting at s of length at most k. Let σ_k(X) (X ∈ A) be the probability that starting at s, we walk k steps without hitting t or returning to s, and after k steps we are in X. It is clear that σ₀ = δ_s. It is also easy to see that for n ≥ 1 we have τ̂_n = σ₀ + σ₁ + · · · + σ_{n−1}, and that (33) holds for X ⊆ J \ {t}. Claim 2. τ̂_n ≤ ϕ₂ for every n ≥ 1.
We prove the inequality by induction on n. For n = 1 this is obvious. Let n ≥ 2. If s, t ∉ X, then σ₀(X) = 0, and the bound follows using (33). If t ∈ X but s ∉ X, the bound follows similarly. If s ∈ X, we use in addition that every random walk we constructed exits s only once. Claim 3. τ̃_n ≤ ϕ for every n ≥ 1.
Indeed, for A, B ∈ A the corresponding inequality holds; this implies that τ̃_n(X) ≤ ϕ(X) for every X ∈ A², proving the claim.
Since clearly τ_n ≤ τ, we have d_tv(τ_n, τ) = τ(J) − τ_n(J). Let p_n be the probability that a random walk started at s first hits {s, t} in exactly n steps. Then τ(J) = Σ_{k=1}^{∞} p_k·k, and τ_n(J) = Σ_{k=1}^{n} p_k·k.
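In the finite case, the path decomposition this proof establishes can be computed greedily: repeatedly follow positive-flow edges from s (conservation and acyclicity guarantee we reach t) and subtract the bottleneck value along the path. A sketch with a hypothetical acyclic flow:

```python
def path_decomposition(flow, s, t, eps=1e-12):
    """Decompose an acyclic s-t flow {(u, v): value} into weighted s-t paths."""
    f = {e: v for e, v in flow.items() if v > eps}
    paths = []
    while f:
        # follow positive-flow edges from s; acyclicity guarantees progress
        path, u = [], s
        while u != t:
            v = next(w for (x, w) in f if x == u)
            path.append((u, v))
            u = v
        w = min(f[e] for e in path)
        paths.append(([s] + [e[1] for e in path], w))
        for e in path:
            f[e] -= w
            if f[e] <= eps:
                del f[e]
    return paths

# Hypothetical acyclic s-t flow of value 3.
flow = {("s", "a"): 2.0, ("s", "b"): 1.0, ("a", "t"): 2.0, ("b", "t"): 1.0}
paths = path_decomposition(flow, "s", "t")
print(paths, sum(w for _, w in paths))
```

The path weights sum to the flow value, and recombining the weighted paths recovers the original edge values, which is the finite analogue of τ̃ = ϕ.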
The name "metrical" refers to the fact that D can be defined by a bounded measurable semimetric r on J. Can every metrical linear functional D be represented as D(ϕ) = ϕ(g) with some semimetric g : J² → R₊? I expect that the answer is negative, but perhaps the following is true: Conjecture 1. For every metrical linear functional D on M(A²) and every ψ ∈ M⁺(A²) there is a semimetric g : J² → R₊ such that D(ϕ) = ϕ(g) for all measures ϕ ≪ ψ. Lemma 4.4 is an example of a similar (simpler) result. In the Appendix the conjecture is proved in some special cases, in particular for measures ψ defined by graphons and graphings.

Multicommodity flows
A multicommodity flow on a Borel space A consists of a symmetric measure σ ∈ M⁺(A²) and a family of simple s-t flows ϕ_st of value 1, one for each pair (s, t) ∈ J × J. We require that ϕ_ts = ϕ*_st, and that ϕ_st(U) is measurable as a function of (s, t) ∈ J × J for every U ∈ A². Such a multicommodity flow F = (σ; ϕ_st) defines a symmetric measure ϕ_F, the total load. A trivial multicommodity flow is defined by ϕ_st = δ_st for any σ; its total load is σ.
If we are also given a symmetric "capacity" measure ψ ∈ M⁺(A²), then we say that the multicommodity flow F = (σ; ϕ_st) is feasible if ϕ_F ≤ ψ. Our question is: given ψ and σ, does there exist a feasible multicommodity flow? Our goal is to generalize Theorem 2.5.
To state our main result in this section, we need to relax the capacity constraint ϕ_F ≤ ψ and consider the overload (ϕ_F \ ψ)(J × J) over ψ. The proof of Theorem 5.1 will need several auxiliary considerations.
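A tiny finite sketch of the overload quantity (the demands and capacities below are made up): the trivial multicommodity flow has total load σ itself, so its overload over ψ is simply the total amount by which σ exceeds ψ, and any positive overload signals that demands must be rerouted.

```python
# Finite sketch: the trivial multicommodity flow phi_st = delta_st routes each
# demand (s, t) along the single "edge" (s, t); its total load is sigma, so it
# is feasible precisely when sigma <= psi entrywise.
sigma = {("a", "b"): 0.3, ("b", "a"): 0.3, ("a", "c"): 0.2, ("c", "a"): 0.2}
psi = {("a", "b"): 0.5, ("b", "a"): 0.5, ("a", "c"): 0.1, ("c", "a"): 0.1}

# Finite analogue of the overload (sigma \ psi)(J x J).
overload = sum(max(sigma[e] - psi.get(e, 0.0), 0.0) for e in sigma)
print(overload > 0)   # the pair (a, c) is overloaded, so rerouting is needed
```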

Formulation as a single measure
We describe a reformulation needed for the proof. Every multicommodity flow (σ; ϕ_st : s, t ∈ J) defines a (more complex) load measure Φ on A⁴. The quantity Φ(U × S) expresses how much load the subset of demands U puts on the edges in S.
Conversely, we can express the flows in a feasible family as Radon–Nikodym derivatives. It is not hard to see that conditions (37) and (38) imply that this defines a feasible multicommodity flow. Indeed, ϕ_st is defined for σ-almost all pairs (s, t); let ϕ_st = δ_st when it is not defined. By definition, for A ∈ A and S ∈ A² the corresponding identity holds for every S, showing that ϕ_st is an s-t flow of value 1. Equation (40) implies that this multicommodity flow is feasible.
We use induction on j − i; for j − i = 1 the assertion is trivial. Let j − i > 1, and choose r with i < r < j. Using that D is metrical, and by induction, we know that D(τ^k_{i..r}) ≥ D(τ^k_{i,r}) and D(τ^k_{r..j}) ≥ D(τ^k_{r,j}). Using that τ^k_{i..r} + τ^k_{r..j} = τ^k_{i..j}, we get the Claim. In particular, we obtain the desired bound.

Proof of the Multicommodity Flow Theorem
I. Consider a multicommodity flow (ϕ_uv : uv ∈ S), serving demand σ and feasible with respect to ψ. We may assume that σ is a probability distribution. By Theorem 4.25, there is a probability distribution κ_uv on u-v paths for every uv ∈ S such that κ̃_uv = ϕ_uv. Let τ be the mixture of the κ_uv by σ; in other words, we generate a random path from τ by selecting a random pair uv from σ, and then selecting a random path from κ_uv. Then τ̃ ≤ ψ by feasibility, and the distribution of the endpoint pair of a random path from τ is clearly σ. By Lemma 5.2,

II. Consider the convex sets of measures
To make these sets open, let δ > 0, and consider the δ-neighborhoods H^δ_i = {µ ∈ M⁺(A⁴) : d_tv(µ, H_i) < δ}. Note that all these sets are convex and invariant under the map Φ ↦ Φ^{(12)(34)}.
The main step in the proof is proving that the sets H^δ_1, H^δ_2, H^δ_3 have a common point. Suppose that this intersection is empty. The intersection of any two of these sets is nonempty, so by Theorem 3.11 there are bounded linear functionals L₁, L₂, L₃ on M(A⁴) and real numbers a₁, a₂, a₃ such that L₁ + L₂ + L₃ = 0, a₁ + a₂ + a₃ = 0, and Lᵢ > aᵢ on H^δ_i. Note that 0 ∈ H₂ and 0 ∈ H₃, which implies that a₂, a₃ < 0, and hence a₁ > 0. Since the sets are invariant under the map Φ ↦ Φ^{(12)(34)}, we may assume that the linear functionals L₁, L₂, L₃ are invariant under this map as well.
These conditions have the following implications for the functionals Lᵢ. (a) The affine subspace H₁ is not empty, since σ^∆ ∈ H₁ by the definition of σ. The condition that L₁(Φ) > a₁ for Φ ∈ H^δ_1 implies that L₁ is constant on H₁. Since a₁ > 0, this constant is positive, and we may assume (by scaling the Lᵢ and the aᵢ) that it is 1. We can apply Lemma 3.12 to the linear operator T : ϕ ↦ ϕ₁ − ϕ₂, similarly as in the proof of Theorem 4.11, and get a linear functional Z on M(A³). Substituting σ^∆ in (43), we get that Z(σ) = L₁(σ^∆) = 1, together with further identities. (b) The condition that L₂(Φ) > a₂ for Φ ∈ H^δ_2 implies that L₂(µ) ≥ 0 for µ ≥ 0, so L₂ is a nonnegative functional.
(c) The condition that L 3 (Φ) > a 3 for Φ ∈ H δ 3 implies that L 3 (µ) ≥ 0 whenever µ ∈ M(A 4 ) and µ 3,4 ≤ 0. This implies that L 3 (µ) = 0 whenever µ 3,4 = 0. We can apply Lemma 3.12 to the operator S : ϕ → ϕ 3,4 similarly as in (a); it is easy to see that the range of S is the whole space M(A 2 ), so it is closed. We get a bounded linear functional R on M(A 2 ) such that L 3 (µ) = R(µ 3,4 ). It also follows that −R is a nonnegative functional.
This completes the proof of Theorem 5.1.

Open problems
Problem 1 (Decomposition of circulations). Theorem 4.25 raises the question whether circulations have analogous decompositions. In finite graph theory, a circulation can be decomposed into a nonnegative linear combination of directed cycles. In the infinite case, we have to consider, in addition, directed paths infinite in both directions; but even so, the decomposition is not well understood.
A simple example is the ergodic flow of the directed graphing C⃗_α (Example 2.6). The ergodic flow is concentrated on the edges. If α is irrational, then this graphing consists of disjoint two-way infinite paths; it contains no finite directed cycles at all. Suppose that we have a nonnegative circulation η ≠ 0 on A². We may assume (by scaling) that it is a probability measure, so it is the ergodic circulation of a Markov space. From every point u ∈ J, we can start an infinite random walk (v₀ = u, v₁, . . .), and also an infinite random walk (v₀ = u, v₋₁, . . .) of the reverse chain. Choosing u from π, this gives us a probability distribution β on rooted two-way infinite (possibly periodic) sequences, i.e., on J^Z. However, it seems to be difficult to reconstruct the circulation η from β.
Problem 2 (Birkhoff-von Neumann for measures). In the finite case, the fundamental Birkhoff-von Neumann Theorem describes the extreme points of the convex polytope formed by doubly stochastic matrices: these are exactly the permutation matrices, or in the language of bipartite graphs, perfect matchings. One generalization of this problem to the measurable case is to consider the set of coupling measures between two copies of a probability space (J, A, π), forming a convex set in M(A 2 ). What are the extreme points (coupling measures) of this convex set? Unfortunately, these extreme points seem to be too complex for an explicit description. See [15] for several examples.
Problem 3 (Category of measures). A consequence of Corollary 4.23 is that we can define a "matrix product" of two measures µ, ν ∈ M(A × A), provided µ*_J = ν_J. Let πᵢ denote the projection of J × J onto the i-th coordinate (i = 1, 2). By Corollary 4.23, there is a measure τ on (J × J) × (J × J) coupling µ and ν, and supported on the set {(x₁, x₂, x₃, x₄) : π₂(x₁, x₂) = π₁(x₃, x₄)}, i.e., on the set of points (x₁, x₂, x₂, x₄). We denote by µ ∘ ν the marginal of τ on the first and last coordinates. Note that the marginal of τ on the first coordinate is µ_J, and on the last coordinate is ν*_J.
Using the Disintegration Theorem 3.7, let (µ*_x : x ∈ J) be a disintegration of µ with respect to the projection π₂, and (ν_x : x ∈ J) a disintegration of ν with respect to the projection π₁. We can say that we get a category: the objects are the measures on A, the morphisms are the measures on A × A, and a morphism µ points from µ_J to µ*_J. Perhaps this category has been studied in some other formulation or setting.
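In the finite case the composition µ ∘ ν is an ordinary matrix product after disintegration: dividing ν(y, z) by its first marginal ν_J(y) turns ν into a kernel, and the coupling supported on {x₂ = x₃} is then summed out. A sketch (the example measures are hypothetical):

```python
def marginals(mu):
    """First and second marginals of a measure on pairs {(x, y): weight}."""
    first, second = {}, {}
    for (x, y), w in mu.items():
        first[x] = first.get(x, 0.0) + w
        second[y] = second.get(y, 0.0) + w
    return first, second

def compose(mu, nu):
    """Finite 'matrix product' mu o nu, assuming the second marginal of mu
    equals the first marginal of nu (the composability condition)."""
    nu1, _ = marginals(nu)
    out = {}
    for (x, y), w in mu.items():
        for (y2, z), v in nu.items():
            if y == y2:
                # nu(y, z) / nu1(y) is the disintegration kernel of nu
                out[(x, z)] = out.get((x, z), 0.0) + w * v / nu1[y]
    return out

mu = {("a", "x"): 0.4, ("a", "y"): 0.1, ("b", "y"): 0.5}
nu = {("x", "p"): 0.2, ("x", "q"): 0.2, ("y", "p"): 0.6}
prod = compose(mu, nu)
print(marginals(prod))
```

As the problem statement predicts, the first marginal of µ ∘ ν is µ_J and the last is ν*_J, so composition behaves like arrows in a category.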
Problem 4 (Error-free multicommodity flows). Can the overload ε in Theorem 5.1 be eliminated? Using Lemma 3.2, such a "last minute correction" was possible in the proofs of Theorems 4.11 and 4.13, but the same argument does not seem to apply in the case of Theorem 5.1.
Problem 5 (Convergence of flows). Describe how, for a graph sequence that is convergent in some well-defined sense, parameters and properties of flows converge to the flows on measurable spaces serving as their limit objects.
Problem 6 (Node measure). It would be natural to endow our structure also with a node measure (corresponding to the counting measure on the nodes in the finite case). Surprisingly, neither this, nor the Markov space structure was needed to generalize flow theory. Clearly we will need these additional features for generalizing other areas of graph theory to measure theory.
Problem 7 (Limit theory of directed graphs). Most of this paper (except for the last section) deals with "directed graphs", i.e., non-symmetric measures on the edges. Limit theory for directed graphs has not been formally developed even in the two well-studied extreme cases of graphings and graphons. While most of the analytic tools for limit theory extend to the directed case without substantial difficulties, there are some, most notably arguments using the spectrum, which do not, at least not directly. Perhaps this paper will motivate this (probably) partly routine but partly non-routine work.