1 Introduction

Modulus on graphs (or networks) is a very flexible and general tool for measuring the richness of families of objects defined on a network. For example, the underlying graphs can be directed or undirected, simple graphs or multigraphs, weighted or unweighted. Also, the objects being measured can be of very different types. For instance, here are some flavors of modulus that the first and last authors have been studying:

  • Connecting modulus: This quantifies the richness of families of walks connecting two given sets of vertices. By varying a parameter p, modulus generalizes classical quantities such as effective resistance (which only makes sense on undirected graphs), max flow/min cut, and shortest-path distance, see [2]. Applications include new flexible centrality measures that have been used for modeling epidemic mitigation, see [23].

  • Loop modulus: Looking at families of cycles in a graph gives information about clustering and community detection, see [22].

  • Spanning tree modulus: The modulus of the family of all spanning trees gives deep insight into the degree of connectedness of a network and exposes an interesting hierarchical structure, see [3].

The purpose of this paper is to develop the theory of Fulkerson blocking duality for modulus. In Sect. 2, we recall the theory of modulus on networks. Then, in Sects. 3 and 4 we develop the theory of Fulkerson duality for modulus. Also, in Sect. 5, we relate Fulkerson duality to Lagrangian duality and the probabilistic interpretation of modulus developed in [2, 5, 6]. Finally, we propose several applications of Fulkerson duality to demonstrate its power and flexibility:

  • In Sect. 6, we give a new proof of the well-known fact that effective resistance is a metric on graphs, see for instance [14, Corollary 10.8] for a proof based on commute times and [14, Exercise 9.8] for one based on current flows. Assuming Fulkerson duality, our proof in Theorem 8 is very short and compelling. It also has the advantage of being the only proof we know that easily generalizes to a wider family of graph metrics based on modulus, one that continuously interpolates among the shortest-path metric, the effective resistance metric, and an ultrametric related to min cuts. None of the other classical proofs that effective resistance is a metric appear to generalize in this fashion.

  • Furthermore, our proof in Theorem 8, based on Fulkerson duality, allows us to establish the “anti-snowflaking” exponent for this family of graph metrics. Namely, we find the exact largest exponent to which each such metric can be raised while still remaining a metric on arbitrary graphs.

  • In Sect. 7, we establish some useful monotonicity properties of modulus on a weighted graph \(G=(V,E,\sigma )\) with respect to the edge-conductances \(\sigma (e)\) (Theorem 10). Two of these properties generalize well-known facts about the behavior of resistor networks when a resistor’s value is changed. The Fulkerson blocker approach provides a third monotonicity property related to the expected edge usages of certain random objects on a graph.

  • Finally, in Sect. 8, we use Fulkerson duality and the previously mentioned monotonicity property to study randomly weighted graphs. We first reinterpret and expand on some results of Lovász from [16]. We then establish a lower bound for the expected p-modulus of a family of objects in terms of the modulus of the same family on the deterministic graph whose edge weights are the expected values of the random weights (Theorem 12).

2 Preliminaries

2.1 Modulus in the continuum

The theory of conformal modulus was originally developed in complex analysis, see Ahlfors’ comment on p. 81 of [1]. The more general theory of p-modulus grew out of the study of quasiconformal maps, which generalize the notion of conformal maps to higher-dimensional real Euclidean spaces and, in fact, to abstract metric measure spaces. Intuitively, p-modulus provides a method for quantifying the richness of a family of curves, in the sense that a family with many short curves will have a larger modulus than a family with fewer and longer curves. The parameter p tends to favor the “many curves” aspect when p is close to 1 and the “short curves” aspect as p becomes large. This phenomenon was explored more precisely in [2] in the context of networks. The concept of discrete modulus on networks is not new, see for instance [9, 12, 21]. However, recently the authors have started developing the theory of p-modulus as a graph-theoretic quantity [2, 6], with the goal of finding applications, for instance to the study of epidemics [11, 23].

The concept of blocking duality explored in this paper is an analog of the concept of conjugate families in the continuum. As motivation for the discrete theory to follow, then, let us recall the relevant definitions from the continuum theory. For now, it is convenient to restrict attention to the 2-modulus of curves in the plane, which, as it happens, is a conformal invariant and thus has been carefully studied in the literature.

Let \(\varOmega \) be a domain in \({\mathbb {C}}\), and let E, F be two continua in \({\overline{\varOmega }}\). Define \(\varGamma =\varGamma _{\varOmega }(E,F)\) to be the family of all rectifiable curves connecting E to F in \(\varOmega \). A density is a Borel measurable function \(\rho {:}\,\varOmega \rightarrow [0,\infty )\). We say that \(\rho \) is admissible for \(\varGamma \) and write \(\rho \in {\text {Adm}}(\varGamma )\), if

$$\begin{aligned} \int _\gamma \rho \;\mathrm{d}s \ge 1\quad \forall \gamma \in \varGamma . \end{aligned}$$
(1)

Now, we define the modulus of \(\varGamma \) as

$$\begin{aligned} {\text {Mod}}_2(\varGamma ) \mathrel {\mathop :}=\inf _{\rho \in {\text {Adm}}(\varGamma )} \int _{\varOmega } \rho ^{2} \mathrm{d}A. \end{aligned}$$
(2)

Example 1

(The Rectangle) Consider a rectangle

$$\begin{aligned} \varOmega \mathrel {\mathop :}=\left\{ z = x + i y \in {\mathbb {C}}{:}\,0< x< L, 0< y < H \right\} \end{aligned}$$

of height H and length L. Set \(E \mathrel {\mathop :}=\{ z \in {\overline{\varOmega }}{:}\,{\text {Re}}z = 0 \}\) and \(F \mathrel {\mathop :}=\{ z \in {\overline{\varOmega }}{:}\,{\text {Re}}z = L\}\) to be the leftmost and rightmost vertical sides, respectively. If \(\varGamma = \varGamma _{\varOmega }(E,F)\), then

$$\begin{aligned} {\text {Mod}}_2(\varGamma ) = \frac{H}{L}. \end{aligned}$$
(3)

To see this, assume \(\rho \in {\text {Adm}}(\varGamma )\). Then, for all \(0< y < H\), the horizontal segment \(\gamma _{y}(t) \mathrel {\mathop :}=t + iy\), for \(0<t<L\), is a curve in \(\varGamma \), so

$$\begin{aligned} \int _{\gamma _y}\rho \mathrm{d}s=\int _{0}^{L} \rho (t,y) \mathrm{d}t \ge 1. \end{aligned}$$

Using the Cauchy–Schwarz inequality, we obtain

$$\begin{aligned} 1 \le \left[ \int _{0}^{L} \rho (t,y) \mathrm{d}t \right] ^{2} \le L \int _{0}^{L} \rho ^{2}(t,y) \mathrm{d}t. \end{aligned}$$

In particular, \(L^{-1}\le \int _{0}^{L} \rho ^{2}(t,y) \mathrm{d}t\). Integrating over y, we get

$$\begin{aligned} \frac{H}{L} \le \int _{\varOmega } \rho ^{2} \mathrm{d}A. \end{aligned}$$

Since \(\rho \) was an arbitrary admissible density, \({\text {Mod}}_2(\varGamma ) \ge \frac{H}{L}\).

In the other direction, define \(\rho _{0}(z) = \frac{1}{L} \mathbb {1}_{\varOmega }(z)\) and observe that \(\int _{\varOmega } \rho _0^{2}\mathrm{d}A = \frac{H L}{L^{2}} = \frac{H}{L}\). Hence, if we show that \(\rho _0 \in {\text {Adm}}(\varGamma )\), then \({\text {Mod}}_2(\varGamma ) \le \frac{H}{L}\). To see this, note that for any \(\gamma \in \varGamma \), parameterized on \([0,1]\):

$$\begin{aligned} \int _{0}^{1} \frac{1}{L} | {\dot{\gamma }}(t)| \mathrm{d}t \ge \frac{1}{L} \int _{0}^{1} | {\text {Re}}{\dot{\gamma }}(t) | \mathrm{d}t \ge \frac{1}{L} \left( {\text {Re}}\gamma (1) - {\text {Re}}\gamma (0) \right) \ge 1. \end{aligned}$$

This proves the formula (3).

A famous and very useful concept in this context is that of the conjugate family of a connecting family. For instance, in the case of the rectangle, the conjugate family \(\varGamma ^*=\varGamma ^*_{\varOmega }(E,F)\) of \(\varGamma _{\varOmega }(E,F)\) consists of all curves that “block,” or intercept, every curve \(\gamma \in \varGamma _{\varOmega }(E,F)\). It is clear in this case that \(\varGamma ^*\) is also a connecting family; namely, it includes every curve connecting the two horizontal sides of \(\varOmega \). In particular, by (3), we must have \({\text {Mod}}_2(\varGamma ^*)=L/H\). So we deduce that

$$\begin{aligned} {\text {Mod}}_2(\varGamma _{\varOmega }(E,F))\cdot {\text {Mod}}_2(\varGamma ^*_{\varOmega }(E,F)) =1. \end{aligned}$$
(4)

One reason this reciprocal relation is useful is that upper bounds for modulus are fairly easy to obtain by choosing reasonable admissible densities and computing their energy. Lower bounds, however, are typically harder to obtain. When an equation like (4) holds, upper bounds for the modulus of the conjugate family translate into lower bounds for the given family.

In higher dimensions, say in \({\mathbb {R}}^3\), the conjugate family of a connecting family of curves consists of a family of surfaces, and therefore one must consider the concept of surface modulus, see for instance [18] and references therein. It is also possible to generalize the concept of modulus by replacing the exponent 2 in (2) with \(p\ge 1\) and by replacing \(\mathrm{d}A\) with a different measure.

The principal aim of this paper is to establish a conjugate duality formula similar to (4) for p-modulus on networks, which we call blocking duality.

2.2 Modulus on networks

A general framework for modulus of objects on networks was developed in [5]. In what follows, \(G= (V,E,\sigma )\) is taken to be a finite graph with vertex set V and edge set E. The graph may be directed or undirected and need not be simple. In general, we shall assume a weighted graph with each edge assigned a corresponding weight \(0<\sigma (e)<\infty \). When we refer to an unweighted graph, we shall mean a graph for which all weights are assumed equal to one.

The theory in [5] applies to any finite family of “objects” \(\varGamma \) for which each \(\gamma \in \varGamma \) can be assigned an associated function \({\mathcal {N}}(\gamma ,\cdot ){:}\,E\rightarrow {\mathbb {R}}_{\ge 0}\) that measures the usage of edge e by \(\gamma \). Notationally, it is convenient to consider \({\mathcal {N}}(\gamma ,\cdot )\) as a row vector \({\mathcal {N}}(\gamma ,\cdot )\in {\mathbb {R}}_{\ge 0}^E\), indexed by \(e\in E\). In order to avoid pathologies, it is useful to assume that \(\varGamma \) is non-empty and that each \(\gamma \in \varGamma \) has positive usage on at least one edge. When this is the case, we will say that \(\varGamma \) is non-trivial. In the following, it will be useful to define the quantity:

$$\begin{aligned} {\mathcal {N}}_{\mathrm{min}}:=\min _{\gamma \in \varGamma }\min _{e{:}\,{\mathcal {N}}(\gamma ,e)\ne 0} {\mathcal {N}}(\gamma ,e). \end{aligned}$$
(5)

Note that, for \(\varGamma \) non-trivial, \({\mathcal {N}}_{\mathrm{min}}>0\).

Some examples of objects and their associated usage functions are the following.

  • To a walk \(\gamma =x_0\ e_1\ x_1\ \ldots \ e_n\ x_n\), we can associate the traversal-counting function \({\mathcal {N}}(\gamma ,e)=\) the number of times \(\gamma \) traverses e. In this case, \({\mathcal {N}}(\gamma ,\cdot )\in {\mathbb {Z}}_{\ge 0}^E\).

  • To each subset of edges \(T\subset E\), we can associate the characteristic function \({\mathcal {N}}(T,e)=\mathbb {1}_T(e)=1\) if \(e\in T\) and 0 otherwise. Here, \({\mathcal {N}}(T,\cdot )\in \{0,1\}^E\).

  • To each flow f, we can associate the volume function \({\mathcal {N}}(f,e)=|f(e)|\). Here, \({\mathcal {N}}(f,\cdot )\in {\mathbb {R}}_{\ge 0}^E\).

As a function of two variables, the function \({\mathcal {N}}\) can be thought of as a matrix in \({\mathbb {R}}^{\varGamma \times E}\), indexed by pairs \((\gamma ,e)\) with \(\gamma \) an object in \(\varGamma \) and e an edge in E. This matrix \({\mathcal {N}}\) is called the usage matrix for the family \(\varGamma \). Each row of \({\mathcal {N}}\) corresponds to an object \(\gamma \in \varGamma \) and records the usage of each edge e by \(\gamma \). At times we will write \({\mathcal {N}}(\varGamma )\) instead of \({\mathcal {N}}\), to avoid ambiguity. Note that the families \(\varGamma \) under consideration may very well be infinite (e.g., families of walks), so \({\mathcal {N}}\) may have infinitely many rows. For this paper, we shall assume \(\varGamma \) is finite.

This assumption is not quite as restrictive as it might seem. In [6], it was shown that any family \(\varGamma \) with an integer-valued \({\mathcal {N}}\) can be replaced, without changing the modulus, by a finite subfamily. For example, if \(\varGamma \) is the set of all walks between two distinct vertices, the modulus can be computed by considering only simple paths. This result implies a similar finiteness result for any family \(\varGamma \) whose usage matrix \({\mathcal {N}}\) is rational with positive entries bounded away from zero.
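To make the usage matrix concrete, here is a minimal computational sketch, assuming the networkx and numpy packages are available; the small graph, the vertices a and b, and all variable names are illustrative choices, not notation from the text. It builds \({\mathcal {N}}\) for the family of simple paths connecting two vertices, with \({\mathcal {N}}(\gamma ,e)\) counting traversals as in the first bullet above.

    import networkx as nx
    import numpy as np

    # Illustrative graph and endpoints (assumptions, not from the text).
    G = nx.Graph([(0, 1), (1, 2), (0, 2), (2, 3)])
    edges = list(G.edges())
    edge_index = {frozenset(e): i for i, e in enumerate(edges)}

    a, b = 0, 3
    paths = list(nx.all_simple_paths(G, a, b))  # a finite family Gamma

    # One row per object; entry (gamma, e) counts traversals of e by gamma.
    N = np.zeros((len(paths), len(edges)))
    for r, path in enumerate(paths):
        for u, v in zip(path, path[1:]):
            N[r, edge_index[frozenset((u, v))]] += 1

    print(N)  # one row per simple path, one column per edge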

By analogy to the continuous setting, we define a density on G to be a nonnegative function on the edge set: \(\rho {:}\,E\rightarrow [0,\infty )\). The value \(\rho (e)\) can be thought of as the cost of using edge e. It is notationally useful to think of such functions as column vectors in \({\mathbb {R}}_{\ge 0}^E\). In order to mimic (1), we define for an object \(\gamma \in \varGamma \)

$$\begin{aligned} \ell _\rho (\gamma ):=\sum _{e\in E} {\mathcal {N}}(\gamma ,e)\rho (e) = ({\mathcal {N}}\rho )(\gamma ), \end{aligned}$$

representing the total usage cost for \(\gamma \) with the given edge costs \(\rho \). In linear algebra notation, \(\ell _\rho (\cdot )\) is the column vector resulting from the matrix-vector product \({\mathcal {N}}\rho \). As in the continuum case, then, a density \(\rho \in {\mathbb {R}}_{\ge 0}^E\) is called admissible for \(\varGamma \) if

$$\begin{aligned} \ell _\rho (\gamma ) \ge 1\qquad \forall \gamma \in \varGamma ; \quad \text {or equivalently, if} \quad \ell _\rho (\varGamma )\mathrel {\mathop :}=\inf _{\gamma \in \varGamma }\ell _\rho (\gamma ) \ge 1. \end{aligned}$$

In matrix notation, \(\rho \) is admissible if

$$\begin{aligned} {\mathcal {N}}\rho \ge {{\mathbf {1}}}, \end{aligned}$$

where \({\mathbf {1}}\) is the column vector of ones and the inequality is understood to hold elementwise. By analogy, we define the set

$$\begin{aligned} {\text {Adm}}(\varGamma )=\left\{ \rho \in {\mathbb {R}}_{\ge 0}^E{:}\,{\mathcal {N}}\rho \ge 1\right\} \end{aligned}$$
(6)

to be the set of admissible densities.

Now, given an exponent \(p\ge 1\) we define the p-energy on densities, corresponding to the area integral from the continuum case, as

$$\begin{aligned} {\mathcal {E}}_{p,\sigma }(\rho ) \mathrel {\mathop :}=\sum _{e\in E} \sigma (e)\rho (e)^p, \end{aligned}$$

with the weights \(\sigma \) playing the role of the area element dA. In the unweighted case (\(\sigma \equiv 1\)), we shall use the notation \({\mathcal {E}}_{p,1}\) for the energy. For \(p=\infty \), we also define the unweighted and weighted \(\infty \)-energy, respectively, as

$$\begin{aligned} {\mathcal {E}}_{\infty ,1}(\rho ) \mathrel {\mathop :}=\lim _{p\rightarrow \infty }\left( {\mathcal {E}}_{p,\sigma }(\rho )\right) ^{\frac{1}{p}} = \max _{e\in E}\rho (e) \end{aligned}$$

and

$$\begin{aligned} {\mathcal {E}}_{\infty ,\sigma }(\rho ) \mathrel {\mathop :}=\lim _{p\rightarrow \infty }\left( {\mathcal {E}}_{p,\sigma ^p}(\rho )\right) ^{\frac{1}{p}} = \max _{e\in E}\sigma (e)\rho (e). \end{aligned}$$

This leads to the following definition.

Definition 1

Given a graph \(G= (V,E,\sigma )\), a family of objects \(\varGamma \) with usage matrix \({\mathcal {N}}\in {\mathbb {R}}^{\varGamma \times E}\), and an exponent \(1\le p\le \infty \), the p-modulus of \(\varGamma \) is

$$\begin{aligned} {\text {Mod}}_{p,\sigma }(\varGamma )\mathrel {\mathop :}=\inf _{\rho \in {\text {Adm}}(\varGamma )}{\mathcal {E}}_{p,\sigma }(\rho ). \end{aligned}$$

Equivalently, p-modulus corresponds to the following optimization problem

$$\begin{aligned} \begin{aligned} \text {minimize}&\quad {\mathcal {E}}_{p,\sigma }(\rho ) \\ \text {subject to}&\quad \rho \ge 0,\quad {\mathcal {N}}\rho \ge 1 \end{aligned} \end{aligned}$$
(7)

where each object \(\gamma \in \varGamma \) determines one inequality constraint.
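To make the definition concrete, the following is a minimal computational sketch of problem (7), assuming the cvxpy and numpy packages are available; the helper name modulus and the toy family of two parallel single-edge paths are illustrative choices, not notation from the text.

    import cvxpy as cp
    import numpy as np

    def modulus(N, sigma, p=2):
        """Solve (7): minimize sum_e sigma(e) rho(e)^p s.t. N rho >= 1, rho >= 0."""
        rho = cp.Variable(N.shape[1], nonneg=True)
        energy = cp.sum(cp.multiply(sigma, cp.power(rho, p)))
        problem = cp.Problem(cp.Minimize(energy), [N @ rho >= 1])
        problem.solve()
        return problem.value, rho.value

    # Toy family: two parallel single-edge paths between the same endpoints.
    N = np.array([[1.0, 0.0],
                  [0.0, 1.0]])
    value, rho_star = modulus(N, np.ones(2), p=2)
    print(value, rho_star)  # 2.0 and the extremal density (1, 1)

For \(p=2\) this is a quadratic program; for other \(1<p<\infty \), cvxpy can handle the power objective through its conic reformulation.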

Remark 1

  (a) When \(\rho _0\equiv 1\), we drop the subscript and write \(\ell (\gamma )\mathrel {\mathop :}=\ell _{\rho _0}(\gamma )\). If \(\gamma \) is a walk, then \(\ell (\gamma )\) simply counts the number of hops that the walk \(\gamma \) makes.

  (b) For \(1<p<\infty \), a unique extremal density \(\rho ^*\) always exists and satisfies \(0\le \rho ^*\le {\mathcal {N}}_{\text {min}}^{-1}\), where \({\mathcal {N}}_{\text {min}}\) is defined in (5). Existence and uniqueness follow from compactness and the strict convexity of \({\mathcal {E}}_{p,\sigma }\); see also Lemma 2.2 of [2]. The upper bound on \(\rho ^*\) follows from the fact that each row of \({\mathcal {N}}\) contains at least one nonzero entry, which must be at least as large as \({\mathcal {N}}_{\text {min}}\). In the special case when \({\mathcal {N}}\) is integer-valued, the upper bound can be taken to be 1.

The next result shows that modulus is a “capacity,” in the mathematical sense, on families of objects. This is a known fact, see [6, Prop. 3.4] for the case of families of walks. We reproduce a proof here for completeness.

Proposition 1

(Basic properties) Let \(G=(V,E,\sigma )\) be a simple finite graph with edge weights \(\sigma \in {\mathbb {R}}_{>0}^E\). For simplicity, all families of objects on G are assumed to be non-trivial. Then, for \(p\in [1,\infty ]\), the following hold:

  (a) Monotonicity: Suppose \(\varGamma \) and \(\varGamma '\) are families of objects on G such that \(\varGamma \subset \varGamma '\), meaning that the matrix \({\mathcal {N}}(\varGamma )\) is the restriction of the matrix \({\mathcal {N}}(\varGamma ')\) to the rows from \(\varGamma \). Then,

    $$\begin{aligned} {\text {Mod}}_{p,\sigma }(\varGamma )\le {\text {Mod}}_{p,\sigma }(\varGamma '). \end{aligned}$$
    (8)
  (b) Countable Subadditivity: Suppose \(1\le p<\infty \), and let \(\{\varGamma _j\}_{j=1}^\infty \) be a sequence of families of objects on G. Then,

    $$\begin{aligned} {\text {Mod}}_{p,\sigma }\left( \bigcup _{j=1}^\infty \varGamma _j\right) \le \sum _{j=1}^\infty {\text {Mod}}_{p,\sigma }(\varGamma _j). \end{aligned}$$
    (9)

Proof

For monotonicity, note that \({\text {Adm}}(\varGamma ')\subset {\text {Adm}}(\varGamma )\).

For subadditivity, we first fix \(p\in [1,\infty )\). Let \(\varGamma \mathrel {\mathop :}=\bigcup _{j=1}^{\infty } \varGamma _j\). For each j, choose \(\rho _{j} \in {\text {Adm}}(\varGamma _j)\) such that

$$\begin{aligned} {\mathcal {E}}_{p,\sigma }(\rho _j) = {\text {Mod}}_{p,\sigma }\left( \varGamma _j \right) . \end{aligned}$$

We may assume that the right-hand side of (9) is finite, since otherwise there is nothing to prove. Then, since \(\sigma >0\) and \(\rho _j\ge 0\),

$$\begin{aligned} \sum _{e\in E}\sigma (e)\sum _{j=1}^\infty \rho _j(e)^p = \sum _{j=1}^\infty \sum _{e\in E}\sigma (e)\rho _j(e)^p = \sum _{j=1}^\infty {\text {Mod}}_{p,\sigma }(\varGamma _j) < \infty . \end{aligned}$$

So, \(\rho \mathrel {\mathop :}=\left( \sum _{j=1}^{\infty } \rho _{j}^{p} \right) ^{\frac{1}{p}}\) is also finite. For any \(\gamma \in \varGamma \), there exists \(k \in {\mathbb {N}}\) so that \(\gamma \in \varGamma _{k}\). In particular, since \(\rho \ge \rho _{k}\), we have \(\ell _{\rho }(\gamma ) \ge 1\). This shows that \(\rho \in {\text {Adm}}(\varGamma )\). Moreover,

$$\begin{aligned} {\text {Mod}}_{p,\sigma } \varGamma&\le {\mathcal {E}}_{p,\sigma }(\rho ) = \sum _{e \in E}\sigma (e) \rho (e)^{p} = \sum _{e \in E}\sigma (e) \sum _{j=1}^{\infty } \rho _{j}(e)^{p} = \sum _{j=1}^{\infty } \sum _{e \in E}\sigma (e) \rho _{j}(e)^{p} \\&= \sum _{j=1}^{\infty } {\mathcal {E}}_{p,\sigma }(\rho _{j}) =\sum _{j=1}^{\infty } {\text {Mod}}_{p,\sigma }(\varGamma _j). \end{aligned}$$

We leave the case \(p=\infty \) to the reader (in that case, one can even replace the sum in (9) with a maximum). \(\square \)

Remark 2

The following is another useful basic property to add to monotonicity and countable subadditivity:

  (c) Subordination: With the hypotheses of Proposition 1, suppose that \(\varGamma \) and \(\varGamma '\) are families of objects on G, and suppose that for every object \(\gamma \in \varGamma \) there is an object \(\gamma '\in \varGamma '\) such that \({\mathcal {N}}(\gamma ',e)\le {\mathcal {N}}(\gamma ,e)\), for all \(e\in E\) (we say \(\varGamma \) is subordinated to \(\varGamma '\)). Then,

    $$\begin{aligned} {\text {Mod}}_{p,\sigma }(\varGamma )\le {\text {Mod}}_{p,\sigma }(\varGamma '). \end{aligned}$$
    (10)

Proof

Assume \(\rho \in {\text {Adm}}(\varGamma ')\). Then, for every \(\gamma \in \varGamma \), there is \(\gamma '\in \varGamma '\) such that

$$\begin{aligned} \sum _{e\in E}{\mathcal {N}}(\gamma ,e)\rho (e)\ge \sum _{e\in E}{\mathcal {N}}(\gamma ',e)\rho (e)\ge 1. \end{aligned}$$

Namely, \(\rho \) is admissible for \(\varGamma \) as well. Hence, \({\text {Adm}}(\varGamma ')\subset {\text {Adm}}(\varGamma )\). \(\square \)

2.3 Connection to classical quantities

The concept of p-modulus generalizes known classical ways of measuring the richness of a family of walks. Let \(G=(V,E)\) and two vertices a and b in V be given. We define the connecting family \(\varGamma (a,b)\) to be the family of all simple paths in G that start at a and end at b. To this family, we assign the usage function \({\mathcal {N}}(\gamma ,e)\) to be 1 when \(e\in \gamma \) and 0 otherwise. Classically, there are three main ways to measure the richness of \(\varGamma (a,b)\).

  • Min cut: A subset \(S\subset V\) is called an ab-cut if \(a\in S\) and \(b\not \in S\). To every ab-cut S, we assign the edge usage \({\mathcal {N}}(S,e)=1\) for every \(e=\{x,y\}\in E\) such that \(x\in S\) and \(y\not \in S\); and \({\mathcal {N}}(S,e)=0\) otherwise. The support of \({\mathcal {N}}(S,\cdot )\) is also known as the edge-boundary \(\partial S\). Given edge weights \(\sigma \), the size of an ab-cut is measured by \(|\partial S|:=\sum _{e\in E}\sigma (e){\mathcal {N}}(S,e)\). We define the min cut between a and b to be:

    $$\begin{aligned} {\text {MC}}(a,b) := \min \left\{ |\partial S|{:}\,S\text { is an }ab\text {-cut}\right\} . \end{aligned}$$
  • Effective Resistance: When G is undirected, it can be thought of as an electrical network with edge-conductances given by the weights \(\sigma \), see [8]. Then, the effective resistance \({\mathcal {R}}_{\text {eff}}(a,b)\) is the voltage drop necessary to pass one ampere of current between a and b through G [8]. We write \(\mathop {{\mathcal {C}}_{\text {eff}}}(a,b):={\mathcal {R}}_{\text {eff}}(a,b)^{-1}\) for the effective conductance between a and b.

  • Shortest-path: Finally, the (unweighted) shortest-path distance between a and b refers to the length of the shortest path from a to b, where the length of a path \(\gamma \) is \(\ell (\gamma ):=\sum _{e\in E}{\mathcal {N}}(\gamma ,e),\) and we write

    $$\begin{aligned} \ell (\varGamma ):=\inf _{\gamma \in \varGamma }\ell (\gamma ) \end{aligned}$$

    for the shortest length of a family \(\varGamma \).

The following result is a slight modification of the results in [2, Section 5], taking into account the definition of \({\mathcal {N}}_{\text {min}}\) in (5).

Theorem 1

[2] Let \(G=(V,E,\sigma )\) be a graph with edge weights \(\sigma \). Let \(\varGamma \) be a non-trivial family of objects on G with usage matrix \({\mathcal {N}}\) and let \(\sigma (E) := \sum _{e\in E}\sigma (e)\). Then, the function \( p\mapsto {\text {Mod}}_{p,\sigma }(\varGamma )\) is continuous for \(1\le p< \infty \), and the following two monotonicity properties hold for \(1\le p \le p' <\infty \).

$$\begin{aligned} {\mathcal {N}}_{\text {min}}^p {\text {Mod}}_{p,\sigma }(\varGamma )&\ge {\mathcal {N}}_{\text {min}}^{p'} {\text {Mod}}_{p',\sigma }(\varGamma ), \end{aligned}$$
(11)
$$\begin{aligned} \left( \sigma (E)^{-1}{\text {Mod}}_{p,\sigma }(\varGamma )\right) ^{1/p}&\le \left( \sigma (E)^{-1}{\text {Mod}}_{p',\sigma }(\varGamma )\right) ^{1/p'}. \end{aligned}$$
(12)

Moreover, let \(a\ne b\) in V be given and set \(\varGamma \) equal to the connecting family \(\varGamma (a,b)\). Then,

$$\begin{aligned} \begin{array}{lll} \bullet \ \text {For }p=1, &{}\quad {\text {Mod}}_{1,\sigma }(\varGamma )=\min \{|\partial S|{:}\,S \text { an }ab\text {-cut}\} = {\text {MC}}(a,b) &{}\quad \text {Min cut.}\\ \bullet \ \text {For }p=2,&{}\quad {\text {Mod}}_{2,\sigma }(\varGamma )=\mathop {{\mathcal {C}}_{\text {eff}}}(a,b) = {\mathcal {R}}_{\text {eff}}(a,b)^{-1} &{}\quad \text {Effective conductance.}\\ \bullet \ \text {For }p=\infty , &{}\quad {\text {Mod}}_{\infty ,1}(\varGamma )=\lim \limits _{p\rightarrow \infty }{\text {Mod}}_{p,\sigma }(\varGamma )^{\frac{1}{p}} =\ell (\varGamma )^{-1} &{}\quad \text {Reciprocal of shortest-path.}\\ \end{array} \end{aligned}$$

Remark 3

An early version of the case \(p=2\) is due to Duffin [9]. The proof in [2] was guided by a very general result in metric spaces [13, Theorem 7.31].

The theorem stated in [2, Section 5] does not hold in this context verbatim, but can be easily adapted. The only issue to take care of is the value of \({\mathcal {N}}_{\text {min}}\). Since the previous paper dealt only with families of walks, \({\mathcal {N}}\) was integer-valued and, thus, \({\mathcal {N}}_{\text {min}}\) could be assumed no smaller than 1. This gave rise to an inequality of the form \(0\le \rho ^*\le 1\) that was used to establish a monotonicity property. When \({\mathcal {N}}\) is not restricted to integer values, the bound on \(\rho ^*\) should be replaced by \(0\le \rho ^*\le {\mathcal {N}}_{\text {min}}^{-1}\) [see Remark 1 (b)]. Repeating the proof of [2, Thm. 5.2] with the corrected upper bound and rephrasing in the current context yields Theorem 1.

Example 2

(Basic example) Let G be a graph consisting of k simple paths in parallel, each path taking \(\ell \) hops to connect a given vertex s to a given vertex t. Assume also that G is unweighted, that is \(\sigma \equiv 1\). Let \(\varGamma \) be the family consisting of the k simple paths from s to t. Then, \(\ell (\varGamma )=\ell \) and the size of the minimum cut is k. A straightforward computation shows that

$$\begin{aligned} {\text {Mod}}_p(\varGamma )=\frac{k}{\ell ^{p-1}}\quad \text{ for } 1\le p<\infty ,\qquad {\text {Mod}}_{\infty ,1}(\varGamma )=\frac{1}{\ell }. \end{aligned}$$

In particular, \({\text {Mod}}_p(\varGamma )\) is continuous in p, and \(\lim _{p\rightarrow \infty }{\text {Mod}}_p(\varGamma )^{1/p}={\text {Mod}}_{\infty ,1}(\varGamma )\). Intuitively, when \(p\approx 1\), \({\text {Mod}}_p(\varGamma )\) is more sensitive to the number of parallel paths, while for \(p\gg 1\), it is more sensitive to the lengths of the paths.
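The formula in Example 2 is easy to verify numerically. Here is a minimal sketch, assuming cvxpy and numpy are available; the values of k, \(\ell \), and p are illustrative.

    import cvxpy as cp
    import numpy as np

    k, ell = 3, 4
    # k disjoint parallel paths, each using its own ell edges exactly once.
    N = np.kron(np.eye(k), np.ones((1, ell)))

    for p in [1.0, 1.5, 2.0, 3.0]:
        rho = cp.Variable(k * ell, nonneg=True)
        prob = cp.Problem(cp.Minimize(cp.sum(cp.power(rho, p))),
                          [N @ rho >= 1])
        prob.solve()
        print(p, prob.value, k / ell ** (p - 1))  # computed vs. k / ell^(p-1)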

2.4 Lagrangian duality and the probabilistic interpretation

The optimization problem (7) is an ordinary convex program, in the sense of [19, Sec. 28]. Existence of a minimizer follows from compactness, and uniqueness holds when \(1<p<\infty \) by strict convexity of the objective function. Furthermore, it can be shown that strong duality holds in the sense that a maximizer of the Lagrangian dual problem exists and has dual energy equal to the modulus. The Lagrangian dual problem was derived in detail in [2]. The Lagrangian dual was later reinterpreted in a probabilistic setting in [5].

In order to formulate the probabilistic dual, we let \({\mathcal {P}}(\varGamma )\) represent the set of probability mass functions (pmfs) on the set \(\varGamma \). In other words, \({\mathcal {P}}(\varGamma )\) is the set of vectors \(\mu \in {\mathbb {R}}_{\ge 0}^\varGamma \) with the property that \(\mu ^T{\mathbf {1}}= 1\). Given such a \(\mu \), we can define a \(\varGamma \)-valued random variable \({\underline{\gamma }}\) with distribution given by \(\mu \): \({\mathbb {P}}_\mu \left( {\underline{\gamma }}=\gamma \right) = \mu (\gamma )\). Given an edge \(e\in E\), the value \({\mathcal {N}}({\underline{\gamma }},e)\) is again a random variable, and we represent its expectation (depending on the pmf \(\mu \)) as \({\mathbb {E}}_\mu \left[ {\mathcal {N}}({\underline{\gamma }},e)\right] \). The probabilistic interpretation of the Lagrangian dual can now be stated as follows.

Theorem 2

Let \(G=(V,E)\) be a finite graph with edge weights \(\sigma \), and let \(\varGamma \) be a non-trivial finite family of objects on G with usage matrix \({\mathcal {N}}\). Then, for any \(1<p<\infty \), letting \(q:=p/(p-1)\) be the conjugate exponent to p, we have

$$\begin{aligned} {\text {Mod}}_{p,\sigma }(\varGamma )^{-\frac{1}{p}} = \left( \min _{\mu \in {\mathcal {P}}(\varGamma )}\sum _{e\in E}\sigma (e)^{-\frac{q}{p}} {\mathbb {E}}_\mu \left[ {\mathcal {N}}({\underline{\gamma }},e)\right] ^q \right) ^{\frac{1}{q}}. \end{aligned}$$
(13)

Moreover, \(\mu \in {\mathcal {P}}(\varGamma )\) is optimal for the right-hand side of (13) if and only if

$$\begin{aligned} {\mathbb {E}}_{\mu }\left[ {\mathcal {N}}({\underline{\gamma }},e)\right] =\frac{\sigma (e)\rho ^*(e)^{\frac{p}{q}}}{{\text {Mod}}_{p,\sigma }(\varGamma )}\qquad \forall e\in E, \end{aligned}$$
(14)

where \(\rho ^*\) is the unique extremal density for \({\text {Mod}}_{p,\sigma }(\varGamma )\).

Theorem 2 is a consequence of the theory developed in [5]. However, since it was only remarked on in [5], we provide a detailed proof here.

Proof

The optimization problem (7) is a standard convex optimization problem. Its Lagrangian dual problem, derived in [2], is

$$\begin{aligned} \begin{aligned} \text {maximize}&\quad \sum _{\gamma \in \varGamma }\lambda (\gamma ) - (p-1)\sum _{e\in E}\sigma (e)\left( \frac{1}{p\sigma (e)}\sum _{\gamma \in \varGamma }{\mathcal {N}}(\gamma ,e)\lambda (\gamma ) \right) ^{\frac{p}{p-1}}\\ \text {subject to}&\quad \lambda (\gamma )\ge 0\quad \forall \gamma \in \varGamma . \end{aligned} \end{aligned}$$
(15)

It can be readily verified that strong duality holds [i.e., that the minimum in (7) equals the maximum in (15)] and that both extrema are attained. Moreover, if \(\rho ^*\) is the unique minimizer of the modulus problem and \(\lambda ^*\) is any maximizer of the Lagrangian dual, then the optimality conditions imply that

$$\begin{aligned} \rho ^*(e) = \left( \frac{1}{p\sigma (e)}\sum _{\gamma \in \varGamma } {\mathcal {N}}(\gamma ,e)\lambda ^*(\gamma )\right) ^{\frac{1}{p-1}}. \end{aligned}$$
(16)

By decomposing \(\lambda \in {\mathbb {R}}_{\ge 0}^\varGamma \) as \(\lambda =\nu \mu \) with \(\nu \ge 0\) and \(\mu \in {\mathcal {P}}(\varGamma )\), we can rewrite (15) as

$$\begin{aligned} \max _{\nu \ge 0}\left\{ \nu - (p-1)\left( \frac{\nu }{p}\right) ^q\min _{\mu \in {\mathcal {P}}(\varGamma )} \sum _{e\in E}\sigma (e)^{-\frac{q}{p}} \left( \sum _{\gamma \in \varGamma }{\mathcal {N}}(\gamma ,e)\mu (\gamma ) \right) ^q \right\} . \end{aligned}$$

The minimum over \(\mu \) can be recognized as the minimum in (13). Let \(\alpha \) be its minimum value. Then, the maximum over \(\nu \ge 0\) is attained at \(\nu ^* := p\alpha ^{-\frac{p}{q}}\), and strong duality implies that

$$\begin{aligned} {\text {Mod}}_{p,\sigma }(\varGamma ) = \nu ^*-(p-1)\left( \frac{\nu ^*}{p}\right) ^q\alpha = \alpha ^{-\frac{p}{q}}. \end{aligned}$$

Thus,

$$\begin{aligned} \min _{\mu \in {\mathcal {P}}(\varGamma )}\sum _{e\in E}\sigma (e)^{-\frac{q}{p}} {\mathbb {E}}_\mu \left[ {\mathcal {N}}({\underline{\gamma }},e)\right] ^q = \alpha = {\text {Mod}}_{p,\sigma }(\varGamma )^{-\frac{q}{p}}, \end{aligned}$$

proving (13). The remainder of the theorem follows from (16):

$$\begin{aligned} \rho ^*(e) = \left( \frac{\nu ^*}{p\sigma (e)}\sum _{\gamma \in \varGamma } {\mathcal {N}}(\gamma ,e)\mu ^*(\gamma )\right) ^{\frac{1}{p-1}} = \alpha ^{-1}\sigma (e)^{-\frac{q}{p}} {\mathbb {E}}_{\mu ^*}\left[ {\mathcal {N}}({\underline{\gamma }},e)\right] ^{\frac{q}{p}} \end{aligned}$$

and the fact that, if \(\mu \in {\mathcal {P}}(\varGamma )\) satisfies (14), then \(\lambda :=\nu ^*\mu \) is admissible for (15) and has the same objective value as any optimal \(\lambda ^*\). \(\square \)

Remark 4

The probabilistic interpretation is particularly informative when \(p=2\), \(\sigma \equiv 1\), and \(\varGamma \) is a collection of subsets of E, so that \({\mathcal {N}}\) is a (0, 1)-matrix defined as \({\mathcal {N}}(\gamma ,e)=\mathbb {1}_\gamma (e)\). In this case, the duality relation (13) can be expressed as

$$\begin{aligned} {\text {Mod}}_2(\varGamma )^{-1} = \min _{\mu \in {\mathcal {P}}(\varGamma )}{\mathbb {E}}_\mu \left| {\underline{\gamma }}\cap {\underline{\gamma }}' \right| , \end{aligned}$$

where \({\underline{\gamma }}\) and \({\underline{\gamma }}'\) are two independent random variables chosen according to the pmf \(\mu \), and \(\left| {\underline{\gamma }}\cap {\underline{\gamma }}' \right| \) is their overlap (also a random variable). In other words, computing the 2-modulus in this setting is equivalent to finding a pmf that minimizes the expected overlap of two iid \(\varGamma \)-valued random variables.
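For a (0, 1)-matrix \({\mathcal {N}}\), independence gives \({\mathbb {E}}_\mu \left| {\underline{\gamma }}\cap {\underline{\gamma }}'\right| =\sum _{e\in E}{\mathbb {E}}_\mu \left[ \mathbb {1}_{{\underline{\gamma }}}(e)\right] ^2\), so the minimization above is a quadratic program over the probability simplex. The following minimal sketch checks the identity numerically, assuming cvxpy and numpy are available; the toy family of three overlapping subsets is an illustrative choice.

    import cvxpy as cp
    import numpy as np

    # Three subsets of a 4-edge ground set, as rows of a (0,1) usage matrix.
    N = np.array([[1, 1, 0, 0],
                  [0, 1, 1, 0],
                  [0, 0, 1, 1]], dtype=float)

    # Minimum expected overlap of two iid mu-distributed objects.
    mu = cp.Variable(3, nonneg=True)
    overlap = cp.sum_squares(N.T @ mu)  # sum_e P(e in gamma)^2
    cp.Problem(cp.Minimize(overlap), [cp.sum(mu) == 1]).solve()

    # Unweighted 2-modulus of the same family.
    rho = cp.Variable(4, nonneg=True)
    mod2 = cp.Problem(cp.Minimize(cp.sum_squares(rho)), [N @ rho >= 1])
    mod2.solve()

    print(overlap.value, 1 / mod2.value)  # the two values should agree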

In the present work, we are interested in a different but closely related duality called blocking duality.

3 Blocking duality and p-modulus

In this section, we introduce blocking duality for modulus. If \(\varGamma \) is a finite non-trivial family of objects on a graph G, the admissible set \({\text {Adm}}(\varGamma )\), defined in (6), is determined by finitely many inequalities:

$$\begin{aligned} \sum _{e\in E}{\mathcal {N}}(\gamma ,e)\rho (e) \ge 1\quad \forall \gamma \in \varGamma . \end{aligned}$$

Thus, it is possible to identify \(\varGamma \) with the rows of its edge usage matrix \({\mathcal {N}}\) or, equivalently, with the corresponding points in \({\mathbb {R}}^E_{\ge 0}\).

3.1 Fulkerson theorem

First, we recall some general definitions. Let \({\mathcal {K}}\) be the set of all closed convex sets \(K\subset {\mathbb {R}}_{\ge 0}^E\) that are recessive, in the sense that \(K+{\mathbb {R}}_{\ge 0}^E=K\). To avoid trivial cases, we shall assume that \(\varnothing \subsetneq K\subsetneq {\mathbb {R}}_{\ge 0}^E\), for \(K\in {\mathcal {K}}\).

Definition 2

For each \(K\in {\mathcal {K}}\), there is an associated blocking polyhedron, or blocker,

$$\begin{aligned} {\text {BL}}(K) := \left\{ \eta \in {\mathbb {R}}_{\ge 0}^E{:}\,\eta ^T\rho \ge 1,\;\;\forall \rho \in K \right\} . \end{aligned}$$

Definition 3

Given \(K\in {\mathcal {K}}\) and a point \(x\in K\), we say that x is an extreme point of K if the equality \(x=tx_1+(1-t)x_2\), for some \(x_1,x_2\in K\) and some \(t\in (0,1)\), implies that \(x_1=x_2=x\). Moreover, we let \(\mathrm{ext}(K)\) be the set of all extreme points of K.

Definition 4

The dominant of a set \(P\subset {\mathbb {R}}_{\ge 0}^E\) is the recessive closed convex set

$$\begin{aligned} {\text {Dom}}(P)={\text {co}}(P)+{\mathbb {R}}_{\ge 0}^E, \end{aligned}$$

where \({\text {co}}(P)\) is the convex hull of P.

When \(\varGamma \) is finite, \({\text {Adm}}(\varGamma )\) has finitely many faces. Moreover, \({\text {Adm}}(\varGamma )\) is also determined by its finitely many extreme points, or “vertices,” in \({\mathbb {R}}^{E}_{\ge 0}\). In fact, since \({\text {Adm}}(\varGamma )\) is a recessive closed convex set, it equals the dominant of its set of extreme points \(\mathrm{ext}({\text {Adm}}(\varGamma ))\), see [19, Theorem 18.5]. In the present notation,

$$\begin{aligned} {\text {Adm}}(\varGamma ) = {\text {Dom}}({\text {ext}}({\text {Adm}}(\varGamma ))). \end{aligned}$$
(17)

Definition 5

Suppose \(G=(V,E)\) is a finite graph and \(\varGamma \) is a finite non-trivial family of objects on G. We say that the family

$$\begin{aligned} {\hat{\varGamma }}:=\mathrm{ext}({\text {Adm}}(\varGamma ))=\{{\hat{\gamma }}_1,\dots ,{\hat{\gamma }}_s\}\subset {\mathbb {R}}_{\ge 0}^E, \end{aligned}$$

consisting of the extreme points of \({\text {Adm}}(\varGamma )\), is the Fulkerson blocker of \(\varGamma \). We define the matrix \({\hat{{\mathcal {N}}}}\in {\mathbb {R}}_{\ge 0}^{{\hat{\varGamma }}\times E}\) to be the matrix whose rows are the vectors \({\hat{\gamma }}^T\), for \({\hat{\gamma }}\in {\hat{\varGamma }}\).

Theorem 3

(Fulkerson [10]) Let \(G=(V,E)\) be a graph and let \(\varGamma \) be a non-trivial finite family of objects on G. Let \({\hat{\varGamma }}\) be the Fulkerson blocker of \(\varGamma \). Then,

  (1) \({\text {Adm}}(\varGamma )={\text {Dom}}({\hat{\varGamma }})={\text {BL}}({\text {Adm}}({\hat{\varGamma }}));\)

  (2) \({\text {Adm}}({\hat{\varGamma }})={\text {Dom}}(\varGamma )={\text {BL}}({\text {Adm}}(\varGamma ));\)

  (3) \(\hat{{\hat{\varGamma }}}\subset \varGamma .\)

In words, (3) says that the extreme points of \({\text {Adm}}({\hat{\varGamma }})\) are a subset of \(\varGamma \). Combining (1) and (2), we get the following relationships in terms of \(\varGamma \) alone.

Corollary 1

Let \(G=(V,E)\) be a graph and let \(\varGamma \) be a non-trivial finite family of objects on G. Then,

$$\begin{aligned} {\text {BL}}({\text {BL}}({\text {Adm}}(\varGamma )))={\text {Adm}}(\varGamma )\quad \text {and}\quad {\text {BL}}({\text {BL}}({\text {Dom}}(\varGamma )))={\text {Dom}}(\varGamma ), \end{aligned}$$

as well as

$$\begin{aligned} {\text {Adm}}(\varGamma )={\text {BL}}\left( {\text {Dom}}(\varGamma )\right) \quad \text {and}\quad {\text {BL}}({\text {Adm}}(\varGamma ))={\text {Dom}}(\varGamma ). \end{aligned}$$

We include a proof of Theorem 3 for the reader’s convenience.

Proof

We first prove (2). Suppose \(\eta \in {\text {BL}}({\text {Adm}}(\varGamma ))\). Then, \(\eta ^T\rho \ge 1\), for every \(\rho \in {\text {Adm}}(\varGamma )\). In particular, since every row of \({\hat{{\mathcal {N}}}}\) is an extreme point of \({\text {Adm}}(\varGamma )\), we have

$$\begin{aligned} {\hat{{\mathcal {N}}}}\eta \ge 1. \end{aligned}$$
(18)

In other words, \(\eta \in {\text {Adm}}({\hat{\varGamma }})\). Conversely, suppose \(\eta \in {\text {Adm}}({\hat{\varGamma }})\), that is (18) holds. Since

$$\begin{aligned} {\text {Adm}}(\varGamma )=\mathrm{co}({\hat{\varGamma }})+{\mathbb {R}}_{\ge 0}^E, \end{aligned}$$

for every \(\rho \in {\text {Adm}}(\varGamma )\), there is a probability measure \(\nu \in {\mathcal {P}}({\hat{\varGamma }})\) and a vector \(z\ge 0\) such that

$$\begin{aligned} \rho = {\hat{{\mathcal {N}}}}^T\nu + z. \end{aligned}$$

Then, by (18),

$$\begin{aligned} \eta ^T\rho =\eta ^T{\hat{{\mathcal {N}}}}^T\nu + \eta ^Tz \ge \nu ^T1 + \eta ^Tz \ge 1. \end{aligned}$$

So \(\eta \in {\text {BL}}({\text {Adm}}(\varGamma ))\).

Note that \(\eta \in {\text {BL}}({\text {Adm}}(\varGamma ))\) if and only if the value of the following linear program is greater than or equal to 1.

$$\begin{aligned} \begin{aligned} \text {minimize}&\quad \eta ^T\rho \\ \text {subject to}&\quad {\mathcal {N}}\rho \ge {\mathbf {1}},\ \rho \ge 0, \end{aligned} \end{aligned}$$
(19)

where \({\mathcal {N}}\) is the usage matrix for \(\varGamma \). The Lagrangian for this problem is

$$\begin{aligned} {\mathcal {L}}(\rho ,\lambda ,t):= \eta ^T\rho +\lambda ^T({\mathbf {1}}-{\mathcal {N}}\rho )-t^T\rho =\lambda ^T{\mathbf {1}}+\rho ^T(\eta -{\mathcal {N}}^T\lambda -t), \end{aligned}$$

with \(\rho \in {\mathbb {R}}^E\), \(\lambda \in {\mathbb {R}}_{\ge 0}^\varGamma \) and \(t\in {\mathbb {R}}_{\ge 0}^E\). In particular, the dual problem is

$$\begin{aligned} \begin{aligned} \text {maximize}&\quad \lambda ^T{\mathbf {1}}\\ \text {subject to}&\quad {\mathcal {N}}^T \lambda \le \eta ,\ \lambda \ge 0. \end{aligned} \end{aligned}$$
(20)

Splitting \(\lambda =s\nu \), with \(s\ge 0\) and \(\nu \in {\mathcal {P}}(\varGamma )\), we can rewrite this problem as

$$\begin{aligned} \begin{aligned} \text {maximize}&\quad s\\ \text {subject to}&\quad s{\mathcal {N}}^T \nu \le \eta ,\ \nu \in {\mathcal {P}}(\varGamma ). \end{aligned} \end{aligned}$$
(21)

By strong duality, \(\eta \in {\text {BL}}({\text {Adm}}(\varGamma ))\) if and only if there is \(s\ge 1\) and \(\nu \in {\mathcal {P}}(\varGamma )\) so that

$$\begin{aligned} \eta \ge s{\mathcal {N}}^T\nu . \end{aligned}$$

Namely, \(\eta \in {\text {BL}}({\text {Adm}}(\varGamma ))\) implies that \(\eta \ge {\mathcal {N}}^T\nu \), so \(\eta \in {\text {Dom}}(\varGamma )\).

Conversely, if \(\eta \in {\text {Dom}}(\varGamma )\), then there is a \(\nu \in {\mathcal {P}}(\varGamma )\) such that \(\eta \ge {\mathcal {N}}^T\nu \), and hence \(\eta ^T\rho \ge \nu ^T{\mathcal {N}}\rho \ge \nu ^T{\mathbf {1}}=1\) for every \(\rho \in {\text {Adm}}(\varGamma )\). So we have proved (2). In particular, since \(\hat{{\hat{\varGamma }}}\) is the set of extreme points of \({\text {Adm}}({\hat{\varGamma }})\) by Definition 5, it follows from (2) that

$$\begin{aligned} \hat{{\hat{\varGamma }}} = {\text {ext}}({\text {Adm}}({\hat{\varGamma }})) = {\text {ext}}({\text {Dom}}(\varGamma )). \end{aligned}$$

Since any extreme point of \({\text {Dom}}(\varGamma )\) must be present in \(\varGamma \), we conclude that \(\hat{{\hat{\varGamma }}}\subset \varGamma \), and hence (3) is proved as well.

To prove (1), we apply (2) to \({\hat{\varGamma }}\) and find that

$$\begin{aligned} {\text {BL}}({\text {Adm}}({\hat{\varGamma }}))={\text {Adm}}(\hat{{\hat{\varGamma }}})\supset {\text {Adm}}(\varGamma ), \end{aligned}$$

where the last inclusion follows from (3), since \(\hat{{\hat{\varGamma }}}\subset \varGamma \). Also, by (3) applied to \({\hat{\varGamma }}\), the extreme points of \({\text {Adm}}(\hat{{\hat{\varGamma }}})\) are a subset of \({\hat{\varGamma }}\) and therefore they are a subset of \({\text {ext}}({\text {Adm}}(\varGamma ))\). This implies that \({\text {Adm}}(\hat{{\hat{\varGamma }}})\subset {\text {Adm}}(\varGamma )\). So we have \({\text {BL}}({\text {Adm}}({\hat{\varGamma }}))= {\text {Adm}}(\varGamma )\).

Moreover, by (2) applied to \({\hat{\varGamma }}\), we get that

$$\begin{aligned} {\text {BL}}({\text {Adm}}({\hat{\varGamma }}))={\text {Dom}}({\hat{\varGamma }}). \end{aligned}$$

So (1) is proved as well. \(\square \)

3.2 Blocking duality for p-modulus

Theorem 4

Let \(G=(V,E)\) be a graph and let \(\varGamma \) be a non-trivial finite family of objects on G with Fulkerson blocker \({\hat{\varGamma }}\). Let the exponent \(1<p<\infty \) be given, with \(q:=p/(p-1)\) its Hölder conjugate exponent. For any set of weights \(\sigma \in {\mathbb {R}}_{>0}^E\), define the dual set of weights \({\hat{\sigma }}\) as \({\hat{\sigma }}(e) := \sigma (e)^{-\frac{q}{p}}\), for all \(e\in E\).

Then,

$$\begin{aligned} {\text {Mod}}_{p,\sigma }(\varGamma )^{\frac{1}{p}}{\text {Mod}}_{q,{\hat{\sigma }}}({\hat{\varGamma }})^{\frac{1}{q}} = 1. \end{aligned}$$
(22)

Moreover, the optimal \(\rho ^*\in {\text {Adm}}(\varGamma )\) and \(\eta ^*\in {\text {Adm}}({\hat{\varGamma }})\) are unique and are related as follows:

$$\begin{aligned} \eta ^*(e) = \frac{\sigma (e)\rho ^*(e)^{p-1}}{{\text {Mod}}_{p,\sigma }(\varGamma )}\qquad \forall e\in E. \end{aligned}$$
(23)

Remark 5

The case \(p=2\), namely

$$\begin{aligned} {\text {Mod}}_{2,\sigma }(\varGamma ){\text {Mod}}_{2,\sigma ^{-1}}({\hat{\varGamma }})= 1, \end{aligned}$$

is essentially contained in [16, Lemma 2], although stated with different terminology and with a different proof. In this case,  (23) can be rewritten as

$$\begin{aligned} \sigma (e)\rho ^*(e) = {\text {Mod}}_{2,\sigma }(\varGamma )\eta ^*(e)\qquad \forall e\in E. \end{aligned}$$

Proof

For all \(\rho \in {\text {Adm}}(\varGamma )\) and \(\eta \in {\text {Adm}}({\hat{\varGamma }})\), we have \(\eta ^T\rho \ge 1\) by Theorem 3(2), and Hölder’s inequality implies that

$$\begin{aligned} \begin{aligned} 1 \le \sum _{e\in E}\rho (e)\eta (e)&= \sum _{e\in E}\left( \sigma (e)^{1/p}\rho (e)\right) \left( \sigma (e)^{-1/p}\eta (e)\right) \\&\le \left( \sum _{e\in E}\sigma (e)\rho (e)^p \right) ^{1/p} \left( \sum _{e\in E}{\hat{\sigma }}(e)\eta (e)^{q} \right) ^{1/q}, \end{aligned} \end{aligned}$$
(24)

so

$$\begin{aligned} {\text {Mod}}_{p,\sigma }(\varGamma )^{1/p}{\text {Mod}}_{q,{\hat{\sigma }}}({\hat{\varGamma }})^{1/q} \ge 1. \end{aligned}$$
(25)

Now, let \(\alpha := {\text {Mod}}_{q,{\hat{\sigma }}}({\hat{\varGamma }})^{-1}\) and let \(\eta ^*\in {\text {Adm}}({\hat{\varGamma }})\) be the minimizer for \({\text {Mod}}_{q,{\hat{\sigma }}}({\hat{\varGamma }})\). Then, (25) implies that

$$\begin{aligned} {\text {Mod}}_{p,\sigma }(\varGamma ) \ge \alpha ^{\frac{p}{q}} = \alpha ^{\frac{1}{q-1}}. \end{aligned}$$
(26)

Define

$$\begin{aligned} \rho ^*(e) := \alpha \left( \frac{{\hat{\sigma }}(e)}{\sigma (e)}\eta ^*(e)^{q} \right) ^{1/p} = \alpha {\hat{\sigma }}(e)\eta ^*(e)^{q/p}. \end{aligned}$$
(27)

Note that

$$\begin{aligned} {\mathcal {E}}_{p,\sigma }(\rho ^*) = \sum _{e\in E}\sigma (e)\rho ^*(e)^p = \alpha ^p\sum _{e\in E}{\hat{\sigma }}(e)\eta ^*(e)^{q} = \alpha ^{p-1} = \alpha ^{\frac{1}{q-1}}. \end{aligned}$$

Thus, if we can show that \(\rho ^*\in {\text {Adm}}(\varGamma )\), then (26) is attained and \(\rho ^*\) must be extremal for \({\text {Mod}}_{p,\sigma }(\varGamma )\). In particular, (22) would follow. Moreover, (23) is another way of writing (27).

To see that \(\rho ^*\in {\text {Adm}}(\varGamma )\), we will verify that \(\sum _{e\in E}\rho ^*(e)\eta (e)\ge 1\) for all \(\eta \in {\text {Adm}}({\hat{\varGamma }})\); by Theorem 3(1), this shows that \(\rho ^*\in {\text {BL}}({\text {Adm}}({\hat{\varGamma }}))={\text {Adm}}(\varGamma )\). First, consider \(\eta =\eta ^*\). In this case,

$$\begin{aligned} \sum _{e\in E}\rho ^*(e)\eta ^*(e) = \alpha \sum _{e\in E}{\hat{\sigma }}(e)\eta ^*(e)^q = 1. \end{aligned}$$

Now let \(\eta \in {\text {Adm}}({\hat{\varGamma }})\) be arbitrary. Since \({\text {Adm}}({\hat{\varGamma }})\) is convex, we have that \((1-\theta )\eta ^* + \theta \eta \in {\text {Adm}}({\hat{\varGamma }})\) for all \(\theta \in [0,1]\). So, using Taylor’s theorem, we have

$$\begin{aligned} \begin{aligned} \alpha ^{-1}&= {\mathcal {E}}_{q,{\hat{\sigma }}}(\eta ^*) \le {\mathcal {E}}_{q,{\hat{\sigma }}}((1-\theta )\eta ^* + \theta \eta ) = \sum _{e\in E}{\hat{\sigma }}(e)\left[ (1-\theta )\eta ^*(e) + \theta \eta (e)\right] ^{q} \\&= \alpha ^{-1} + q\theta \sum _{e\in E}{\hat{\sigma }}(e)\eta ^*(e)^{q-1} \left( \eta (e)-\eta ^*(e)\right) + O(\theta ^2)\\&= \alpha ^{-1} + \alpha ^{-1}q\theta \sum _{e\in E}\rho ^*(e) \left( \eta (e)-\eta ^*(e)\right) + O(\theta ^2). \end{aligned} \end{aligned}$$

Since this inequality must hold for arbitrarily small \(\theta >0\), it follows that

$$\begin{aligned} \sum _{e\in E}\rho ^*(e)\eta (e) \ge \sum _{e\in E}\rho ^*(e)\eta ^*(e) = 1, \end{aligned}$$

and the proof is complete. \(\square \)
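As a numerical illustration of Theorem 4 in the case \(p=q=2\) of Remark 5, the following minimal sketch checks the product formula (22) on a triangle graph, assuming cvxpy and numpy are available; the graph, the weights, and all names are illustrative. Its connecting family consists of two paths, and its Fulkerson blocker consists of the two minimal ab-cuts (see Sect. 4.2).

    import cvxpy as cp
    import numpy as np

    def mod2(N, sigma):
        # 2-modulus of the family with usage matrix N and weights sigma.
        rho = cp.Variable(N.shape[1], nonneg=True)
        energy = cp.sum(cp.multiply(sigma, cp.square(rho)))
        prob = cp.Problem(cp.Minimize(energy), [N @ rho >= 1])
        prob.solve()
        return prob.value

    # Triangle on {a, c, b}; edges e1 = {a,c}, e2 = {c,b}, e3 = {a,b}.
    N_paths = np.array([[1., 1., 0.],   # the path a-c-b
                        [0., 0., 1.]])  # the single-edge path a-b
    N_cuts = np.array([[1., 0., 1.],    # the minimal cut S = {a}
                       [0., 1., 1.]])   # the minimal cut S = {a, c}

    sigma = np.array([1.0, 2.0, 3.0])
    print(mod2(N_paths, sigma) * mod2(N_cuts, 1 / sigma))  # ~ 1.0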

3.3 The cases \(p=1\) and \(p=\infty \)

Now, we turn our attention to establishing the duality relationship in the cases \(p=1\) and \(p=\infty \). Recall that by Theorem 1,

$$\begin{aligned} \lim _{p\rightarrow \infty }{\text {Mod}}_{p,\sigma }(\varGamma )^{\frac{1}{p}} = {\text {Mod}}_{\infty ,1}(\varGamma ) = \frac{1}{\ell (\varGamma )}, \end{aligned}$$

where \(\ell (\varGamma )\) is defined to be the smallest element of the vector \({\mathcal {N}}{\mathbf {1}}\).

In order to pass to the limit in (22), we need to establish the limit of the second factor in the product on the left-hand side.

Lemma 1

Under the assumptions of Theorem 4,

$$\begin{aligned} \begin{aligned} \lim \limits _{q\rightarrow 1}{\text {Mod}}_{q, {\hat{\sigma }}}({\hat{\varGamma }})^{\frac{1}{q}}&= {\text {Mod}}_{1, 1}({\hat{\varGamma }})\quad \text {and}\\ \lim \limits _{q\rightarrow \infty }{\text {Mod}}_{q, {\hat{\sigma }}}({\hat{\varGamma }})^{\frac{1}{q}}&= {\text {Mod}}_{\infty , \sigma ^{-1}}({\hat{\varGamma }}), \end{aligned} \end{aligned}$$
(28)

where \(\sigma ^{-1}(e)=\sigma (e)^{-1}\).

Proof

Let \({\mathcal {N}}\in {\mathbb {R}}_{\ge 0}^{\varGamma \times E}\) and \({\hat{{\mathcal {N}}}}\in {\mathbb {R}}_{\ge 0}^{{\hat{\varGamma }}\times E}\) be the usage matrices for \(\varGamma \) and \({\hat{\varGamma }}\), respectively. Let \(\varvec{\sigma }\in {\mathbb {R}}^{E\times E}\) be the diagonal matrix with entries \(\varvec{\sigma }(e,e) = \sigma (e)\), and define \({\tilde{{\mathcal {N}}}} = {\hat{{\mathcal {N}}}}\varvec{\sigma }\), with \({\tilde{\varGamma }}\) its associated family in \({\mathbb {R}}_{\ge 0}^E\). Note that \(\eta \in {\text {Adm}}({\hat{\varGamma }})\) if and only if \(\varvec{\sigma }^{-1}\eta \in {\text {Adm}}({\tilde{\varGamma }})\). Moreover, for every \(\eta \in {\text {Adm}}({\hat{\varGamma }})\),

$$\begin{aligned} {\mathcal {E}}_{q,{\hat{\sigma }}}(\eta ) = \sum _{e\in E}{\hat{\sigma }}(e)\eta (e)^q = \sum _{e\in E}\sigma (e)\left( \frac{\eta (e)}{\sigma (e)}\right) ^q = {\mathcal {E}}_{q,\sigma }(\varvec{\sigma }^{-1}\eta ), \end{aligned}$$

which implies that

$$\begin{aligned} {\text {Mod}}_{q,{\hat{\sigma }}}({\hat{\varGamma }}) = {\text {Mod}}_{q,\sigma }({\tilde{\varGamma }}). \end{aligned}$$

Taking the limit as \(q\rightarrow 1\) and using the continuity of p-modulus with respect to p, see Theorem 1, we get that

$$\begin{aligned} \lim _{q\rightarrow 1}{\text {Mod}}_{q,{\hat{\sigma }}}({\hat{\varGamma }})^{\frac{1}{q}} = \lim _{q\rightarrow 1}{\text {Mod}}_{q,\sigma }({\tilde{\varGamma }})^{\frac{1}{q}} = {\text {Mod}}_{1,\sigma }({\tilde{\varGamma }}) = \min _{\eta \in {\text {Adm}}({\hat{\varGamma }})}\sum _{e\in E}\sigma (e) \left( \frac{\eta (e)}{\sigma (e)}\right) = {\text {Mod}}_{1,1}({\hat{\varGamma }}). \end{aligned}$$

Taking the limit as \(q\rightarrow \infty \) and using Theorem 1 shows that

$$\begin{aligned} \lim _{q\rightarrow \infty }{\text {Mod}}_{q,{\hat{\sigma }}}({\hat{\varGamma }})^{\frac{1}{q}} = \lim _{q\rightarrow \infty }{\text {Mod}}_{q,\sigma }({\tilde{\varGamma }})^{\frac{1}{q}} = {\text {Mod}}_{\infty ,1}({\tilde{\varGamma }}) = \min _{\eta \in {\text {Adm}}({\hat{\varGamma }})}\max _{e\in E}\left( \frac{\eta (e)}{\sigma (e)}\right) = {\text {Mod}}_{\infty ,\sigma ^{-1}}({\hat{\varGamma }}). \end{aligned}$$

\(\square \)

Taking the limit as \(p\rightarrow 1\) in Theorem 4 then gives the following theorem.

Theorem 5

Under the assumptions of Theorem 4,

$$\begin{aligned} {\text {Mod}}_{1,\sigma }(\varGamma ){\text {Mod}}_{\infty , \sigma ^{-1}}({\hat{\varGamma }}) =1. \end{aligned}$$
(29)

Note that taking the limit as \(p\rightarrow \infty \) simply yields the same result for the unweighted case.

4 Blocking duality for families of objects

4.1 Duality for 1-modulus

Suppose that \(G=(V,E,\sigma )\) is a weighted graph, with weights \(\sigma \in {\mathbb {R}}_{> 0}^E\), and \(\varGamma \) is a non-trivial, finite family of subsets of E, with \({\mathcal {N}}\) the corresponding usage matrix. In this case, we can identify each \(\gamma \in \varGamma \) with the vector \(\mathbb {1}_{\gamma }\in {\mathbb {R}}_{\ge 0}^E\), so we think of \(\varGamma \) as living in \(\{0,1\}^E\subset {\mathbb {R}}_{\ge 0}^E\). Recall that \({\text {Mod}}_{1,\sigma }(\varGamma )\) is the value of the linear program:

$$\begin{aligned} \begin{aligned} \text {minimize}&\quad \sigma ^T\rho \\ \text {subject to}&\quad \rho \ge 0,\quad {\mathcal {N}}\rho \ge {\mathbf {1}}\end{aligned} \end{aligned}$$
(30)

Since this is a feasible linear program, strong duality holds, and the dual problem is

$$\begin{aligned} \begin{aligned} \text {maximize}&\quad \lambda ^T{\mathbf {1}}\\ \text {subject to}&\quad \lambda \ge 0,\quad {\mathcal {N}}^T \lambda \le \sigma . \end{aligned} \end{aligned}$$
(31)

We think of (31) as a (generalized) max-flow problem, given the weights \(\sigma \). That is because the condition \({\mathcal {N}}^T \lambda \le \sigma \) says that for every \(e\in E\)

$$\begin{aligned} \sum _{\gamma \in \varGamma }\lambda (\gamma ){\mathcal {N}}(\gamma ,e) = \sum _{{\mathop {e\in \gamma }\limits ^{\gamma \in \varGamma }}}\lambda (\gamma )\le \sigma (e). \end{aligned}$$

However, to think of (30) as a (generalized) min-cut problem, we would need to be able to restrict the densities \(\rho \) to the indicators of certain given subsets of E. That is exactly what the Fulkerson blocker provides.

Proposition 2

Suppose \(G=(V,E)\) is a finite graph and \(\varGamma \) is a family of subsets of E with Fulkerson blocker family \({\hat{\varGamma }}\). Then, for any set of weights \(\sigma \in {\mathbb {R}}_{>0}^E\),

$$\begin{aligned} {\text {Mod}}_{1,\sigma }(\varGamma )=\min _{{\hat{\gamma }}\in {\hat{\varGamma }}} \sum _{e\in E}{\hat{{\mathcal {N}}}}({\hat{\gamma }},e)\sigma (e). \end{aligned}$$
(32)

Moreover, for every \({\hat{\gamma }}\in {\hat{\varGamma }}\) there is a choice of \(\sigma \in {\mathbb {R}}_{\ge 0}^E\) such that \({\hat{\gamma }}\) is the unique solution of (32) and the corresponding density \(\rho _{{\hat{\gamma }}}(e) := \hat{{\mathcal {N}}}({\hat{\gamma }},e)\) is the unique minimizer of (30).

Proof

By Theorem 3(1),

$$\begin{aligned} {\text {Adm}}(\varGamma )={\text {Dom}}({\hat{\varGamma }}). \end{aligned}$$

So if \(\sigma \in {\mathbb {R}}_{>0}^E\) is a given set of weights, then, by (30), \({\text {Mod}}_{1,\sigma }(\varGamma )\) is the value of the linear program

$$\begin{aligned} \begin{aligned} \text {minimize}&\quad \sigma ^T\rho \\ \text {subject to}&\quad \rho \in {\text {Dom}}({\hat{\varGamma }}). \end{aligned} \end{aligned}$$
(33)

In particular, the optimal value is attained at a vertex of \({\text {Dom}}({\hat{\varGamma }})\), namely at an object \({\hat{\gamma }}\in {\hat{\varGamma }}\). Therefore, the optimization can be restricted to \({\hat{\varGamma }}\).

The “moreover” part of the proposition follows from [19, Thm. 18.6] since \({\text {Adm}}(\varGamma )\) is a recessive polyhedron with finitely many extreme points. \(\square \)
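Here is a minimal numerical check of (32), assuming cvxpy and numpy are available; the triangle graph with vertices a, c, b and the weights are illustrative, as in the sketch following the proof of Theorem 4. The linear program (30) is compared against the smallest \(\sigma \)-weight of a row of the Fulkerson blocker.

    import cvxpy as cp
    import numpy as np

    # Triangle on {a, c, b}; edges e1 = {a,c}, e2 = {c,b}, e3 = {a,b}.
    N_paths = np.array([[1., 1., 0.],   # the ab-paths
                        [0., 0., 1.]])
    N_cuts = np.array([[1., 0., 1.],    # the minimal ab-cuts (blocker rows)
                       [0., 1., 1.]])
    sigma = np.array([1.0, 2.0, 3.0])

    # Left-hand side of (32): the linear program (30).
    rho = cp.Variable(3, nonneg=True)
    lp = cp.Problem(cp.Minimize(sigma @ rho), [N_paths @ rho >= 1])
    lp.solve()

    # Right-hand side of (32): the cheapest blocker row.
    print(lp.value, (N_cuts @ sigma).min())  # both equal the min cut, 4.0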

Remark 6

When \(\varGamma \) is a family of subsets of E, it is customary to say that \(\varGamma \) has the max-flow-min-cut property if its Fulkerson blocker \({\hat{\varGamma }}\) is also a family of subsets of E. For more details, we refer to the discussion in [15, Chapter 3].

4.2 Connecting families

Let G be an undirected graph and let \(\varGamma =\varGamma (a,b)\) be the family of all simple paths connecting two distinct nodes a and b, i.e., the ab-paths in G. Consider the family \(\varGamma _{\mathrm{cut}}(a,b)\) of all minimal ab-cuts. Recall that an ab-cut S is called minimal if its boundary \(\partial S\) does not contain the boundary of any other ab-cut as a strict subset.

Note that (31) in this case is exactly the max-flow problem. It is not surprising, then, that (30) is closely related to the min-cut problem. In fact, the Fulkerson blocker of \(\varGamma (a,b)\) is \({\hat{\varGamma }}(a,b)=\varGamma _{\mathrm{cut}}(a,b)\). One way to see this is as follows. Every ab-cut \(S\subset V\) yields a density \(\rho _S := \mathbb {1}_{\partial S}\). In this way, we may recognize \(\varGamma _{\mathrm{cut}}(a,b)\) as the set of extreme points

$$\begin{aligned} \varGamma _{\mathrm{cut}}(a,b) = {\text {ext}}\left( {\text {Dom}}\left( \{\rho _S{:}\,S\text { is an } ab\text {-cut}\}\right) \right) . \end{aligned}$$

Moreover, every such \(\rho _S\) is admissible for (30), since every path \(\gamma \in \varGamma (a,b)\) must have at least one edge in common with \(\partial S\). Thus,

$$\begin{aligned} {\text {Dom}}(\varGamma _{\mathrm{cut}}(a,b)) \subseteq {\text {Dom}}({\hat{\varGamma }}(a,b)), \end{aligned}$$

and it suffices to show that \({\hat{\varGamma }}(a,b)\subseteq \varGamma _\mathrm{cut}(a,b)\).

Let \({\hat{\gamma }}\in {\hat{\varGamma }}(a,b)\) and let \(\sigma \) be chosen as in the “moreover” part of Proposition 2. Then, \(\rho _{{\hat{\gamma }}}\) is the unique minimizer of (30) and, by strong duality, \(\sigma ^T\rho _{{\hat{\gamma }}}\) must equal the value of (31), which is the maximum flow. By the max-flow min-cut theorem, there exists a cut \(S\in \varGamma _{\mathrm{cut}}(a,b)\) such that \(\sigma ^T\rho _S\) equals this value. Uniqueness implies that \(\rho _{{\hat{\gamma }}}=\rho _S\), showing that \(\hat{{\mathcal {N}}}({\hat{\gamma }},\cdot )=\mathbb {1}_{\partial S}\). In other words, \({\hat{\gamma }}\) corresponds to a minimal ab-cut, and \({\hat{\varGamma }}(a,b)\subseteq \varGamma _{\mathrm{cut}}(a,b)\) follows.

The duality

$$\begin{aligned} {\text {Mod}}_{p,\sigma }(\varGamma )^{\frac{1}{p}}{\text {Mod}}_{q,{\hat{\sigma }}}({\hat{\varGamma }})^{\frac{1}{q}}=1 \end{aligned}$$

can be viewed as a generalization of the max-flow min-cut theorem. To see this, consider the limiting case (29). As discussed above, \({\text {Mod}}_{1,\sigma }(\varGamma )\) equals the value of the minimum ab-cut with edge weights \(\sigma \).

With a little work, the second modulus in (29) can be recognized as the reciprocal of the value of the corresponding max-flow problem. Using the standard trick for \(\infty \)-norms, the modulus problem \({\text {Mod}}_{\infty ,\sigma ^{-1}}({\hat{\varGamma }})\) can be transformed into a linear program taking the form

$$\begin{aligned} \begin{aligned} \text {minimize}&\quad t \\ \text {subject to}&\quad \sigma (e)^{-1}\eta (e) \le t\;\forall e\in E\\&\quad \eta \ge 0,\quad {\hat{{\mathcal {N}}}} \eta \ge 1 \end{aligned} \end{aligned}$$

The minimum must occur somewhere on the boundary of \({\text {Adm}}({\hat{\varGamma }})\) and, therefore, by Theorem 3(2), must take the form

$$\begin{aligned} \eta (e)=\sum _{\gamma \in \varGamma }\lambda (\gamma )\mathbb {1}_{\gamma }(e)\qquad \lambda (\gamma )\ge 0,\;\sum _{\gamma \in \varGamma }\lambda (\gamma )=1. \end{aligned}$$

In other words, the minimum occurs at a unit ab-flow \(\eta \), and the problem can be restated as

$$\begin{aligned} \begin{aligned} \text {minimize}&\quad t \\ \text {subject to}&\quad \frac{1}{t}\eta (e) \le \sigma (e)\; \forall e\in E\\&\quad \eta \;\text {a unit }ab\text {-flow} \end{aligned} \end{aligned}$$

The minimum is attained when \(\frac{1}{t}\eta \) is a maximum ab-flow respecting the edge capacities \(\sigma (e)\); the value of such a flow is \(1/t\), thus establishing the connection between the \(\infty \)-modulus and the max-flow problem.
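To make the correspondence concrete, here is a minimal numerical sketch of the linear program above, assuming numpy and scipy are available; the 4-cycle, its unit weights, and the explicit list of minimal cuts are illustrative choices of ours rather than data from the text.

```python
# Solve: minimize t subject to eta(e) <= t, N_hat @ eta >= 1, eta >= 0,
# on the 4-cycle a-c-b-d-a with sigma = 1 on every edge.
import numpy as np
from scipy.optimize import linprog

# Edges: e1={a,c}, e2={c,b}, e3={a,d}, e4={d,b}.
# Rows of N_hat are the indicators of the four minimal ab-cuts.
N_hat = np.array([[1, 0, 1, 0],    # boundary of S = {a}
                  [0, 1, 1, 0],    # boundary of S = {a, c}
                  [1, 0, 0, 1],    # boundary of S = {a, d}
                  [0, 1, 0, 1]])   # boundary of S = {a, c, d}

# Variables x = (eta_1, ..., eta_4, t).
c = np.array([0, 0, 0, 0, 1.0])
A_ub = np.block([[np.eye(4), -np.ones((4, 1))],    # eta(e) - t <= 0
                 [-N_hat, np.zeros((4, 1))]])      # -N_hat @ eta <= -1
b_ub = np.concatenate([np.zeros(4), -np.ones(4)])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 5)

print(res.fun)   # 0.5 = 1/(max ab-flow), since the max flow here is 2
```

The solver returns \(t^*=1/2\), the reciprocal of the maximum ab-flow, as predicted.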

4.3 Spanning tree modulus

When \(\varGamma \) is the set of spanning trees on an unweighted, undirected graph G with \({\mathcal {N}}(\gamma ,\cdot )=\mathbb {1}_\gamma (\cdot )\), the Fulkerson blocker \({\hat{\varGamma }}\) can be interpreted as the set of (weighted) feasible partitions [7].

Definition 6

A feasible partition P of a graph \(G=(V,E)\) is a partition of the vertex set V into two or more subsets, \(\{V_1, \ldots , V_{k_P}\}\), such that each of the induced subgraphs \(G(V_i)\) is connected. The corresponding edge set, \(E_P\), is defined to be the set of edges in G that connect vertices belonging to different \(V_i\)’s.

The results of [7] imply the following theorem.

Theorem 6

Let \(G=(V,E)\) be a simple, connected, unweighted, undirected graph and let \(\varGamma \) be the family of spanning trees on G. Then, the Fulkerson blocker of \(\varGamma \) is the set of all vectors

$$\begin{aligned} \frac{1}{k_P-1}\mathbb {1}_{E_P} \end{aligned}$$

ranging over all feasible partitions P.

This fact plays an important role in [3].
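To see Definition 6 and Theorem 6 in action on a small example, the following sketch enumerates the feasible partitions of the triangle \(K_3\) and prints the corresponding vectors \(\frac{1}{k_P-1}\mathbb {1}_{E_P}\). It assumes networkx is available; the example graph and the partition generator are our own illustrative choices.

```python
# Enumerate feasible partitions of K_3 and print the blocker vertices
# (1/(k_P - 1)) * indicator(E_P) from Theorem 6.
import networkx as nx

def set_partitions(items):
    """Yield all partitions of a list into nonempty blocks."""
    if len(items) == 1:
        yield [items]
        return
    first, rest = items[0], items[1:]
    for smaller in set_partitions(rest):
        for i, block in enumerate(smaller):
            yield smaller[:i] + [[first] + block] + smaller[i + 1:]
        yield [[first]] + smaller

G = nx.cycle_graph(3)            # the triangle on vertices 0, 1, 2
edges = list(G.edges())

for P in set_partitions(list(G.nodes())):
    if len(P) < 2:               # a feasible partition has at least two blocks
        continue
    if not all(nx.is_connected(G.subgraph(B)) for B in P):
        continue                 # each block must induce a connected subgraph
    block_of = {v: i for i, B in enumerate(P) for v in B}
    E_P = [e for e in edges if block_of[e[0]] != block_of[e[1]]]
    k_P = len(P)
    print(P, {e: round(1.0 / (k_P - 1), 3) for e in E_P})
```

For \(K_3\) this yields four blocker vertices: each two-block partition contributes the indicator of its two boundary edges, while the all-singletons partition contributes the constant vector 1/2 on all three edges.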

5 Blocking duality and the probabilistic interpretation

At the end of Sect. 2.4, it was claimed that blocking duality was closely related to Lagrangian duality. In this section, we make this connection explicit.

Theorem 7

Let \(G=(V,E,\sigma )\) be a graph and \(\varGamma \) a finite family of objects on G with Fulkerson blocker \({\hat{\varGamma }}\). For a given \(1<p<\infty \), let \(\mu ^*\) be an optimal pmf for the minimization problem in (13) and let \(\eta ^*\) be optimal for \({\text {Mod}}_{q,{\hat{\sigma }}}({\hat{\varGamma }})\). Then, in the notation of Sect. 2.4,

$$\begin{aligned} \eta ^*(e) = {\mathbb {E}}_{\mu ^*}\left[ {\mathcal {N}}({\underline{\gamma }},e)\right] . \end{aligned}$$
(34)

Proof

Every \(\eta \in {\text {Adm}}({\hat{\varGamma }})\) can be written as the sum of a convex combination of the vertices of \({\text {Adm}}({\hat{\varGamma }})\) and a nonnegative vector. In other words, \(\eta \in {\text {Adm}}({\hat{\varGamma }})\) if and only if there exists \(\mu \in {\mathcal {P}}(\varGamma )\) and \(\eta _0\in {\mathbb {R}}_{\ge 0}^E\) such that \(\eta = {\mathcal {N}}^T\mu + \eta _0\). Or, in probabilistic notation,

$$\begin{aligned} \eta (e) = \sum _{\gamma \in \varGamma }{\mathcal {N}}(\gamma ,e)\mu (\gamma ) + \eta _0(e) = {\mathbb {E}}_{\mu }\left[ {\mathcal {N}}({\underline{\gamma }},e)\right] + \eta _0(e). \end{aligned}$$

For such an \(\eta \),

$$\begin{aligned} {\mathcal {E}}_{q,{\hat{\sigma }}}(\eta ) = \sum _{e\in E}\sigma (e)^{-\frac{q}{p}}\eta (e)^q \ge \sum _{e\in E}\sigma (e)^{-\frac{q}{p}} {\mathbb {E}}_{\mu }\left[ {\mathcal {N}}({\underline{\gamma }},e)\right] ^q \end{aligned}$$

with equality holding if and only if \(\eta _0=0\). This implies that the optimal \(\eta ^*\) must be of the form \(\eta ^*={\mathcal {N}}^T\mu '={\mathbb {E}}_{\mu '}\left[ {\mathcal {N}}({\underline{\gamma }},\cdot )\right] \) for some \(\mu '\in {\mathcal {P}}(\varGamma )\).

Now, let \(\mu ^*\) be any optimal pmf for (13) and let \(\eta ':={\mathcal {N}}^T\mu ^*\). Since \(\eta '\in {\text {Dom}}(\varGamma )\), Theorem 3(2) implies that \(\eta '\in {\text {Adm}}({\hat{\varGamma }})\). Moreover, by optimality of \(\mu ^*\),

$$\begin{aligned} {\mathcal {E}}_{q,{\hat{\sigma }}}(\eta ') = \sum _{e\in E}\sigma (e)^{-\frac{q}{p}} {\mathbb {E}}_{\mu ^*}\left[ {\mathcal {N}}({\underline{\gamma }},e)\right] ^q \le \sum _{e\in E}\sigma (e)^{-\frac{q}{p}} {\mathbb {E}}_{\mu '}\left[ {\mathcal {N}}({\underline{\gamma }},e)\right] ^q = {\mathcal {E}}_{q,{\hat{\sigma }}}(\eta ^*). \end{aligned}$$

But, since \(1<q<\infty \), the minimizer for \({\text {Mod}}_{q,{\hat{\sigma }}}({\hat{\varGamma }})\) is unique and, therefore, \(\eta '=\eta ^*\). So \(\eta ^*={\mathcal {N}}^T\mu ^*={\mathbb {E}}_{\mu ^*}\left[ {\mathcal {N}}({\underline{\gamma }},\cdot )\right] \) as claimed. \(\square \)

6 The \(\delta _p\) metrics and a new proof that effective resistance is a metric

We saw in Theorem 1 that in the case of connecting families \({\text {Mod}}_{p,\sigma }(\varGamma (a,b))\) satisfies:

  • \({\text {Mod}}_{\infty ,1}(\varGamma (a,b))^{-1}=\ell (\varGamma (a,b))\) is the (unweighted) shortest-path length;

  • \({\text {Mod}}_{2,\sigma }(\varGamma (a,b))^{-1}={\mathcal {R}}_{\text {eff}}(a,b)\) is the effective resistance metric;

  • \({\text {Mod}}_{1,\sigma }(\varGamma (a,b))^{-1}={\text {MC}}(a,b)^{-1}\) is the reciprocal of min cut.

If G is a connected graph, each of these three quantities defines a distance (i.e., a metric) on V. The fact that the shortest-path distance \(d_\mathrm{SP}(a,b):=\ell (\varGamma (a,b))\) is a metric on V is well known and follows easily from the definition.

The fact that \(d_{\mathrm{MC}}(a,b):={\text {MC}}(a,b)^{-1}\) is an ultrametric (i.e., that the sum can be replaced by the maximum in the triangle inequality) is left as an exercise, or see [4] where a proof is given.

The fact that effective resistance \(d_{\mathrm{ER}}(a,b):={\mathcal {R}}_{\text {eff}}(a,b)\) is a metric has several known proofs. See [14, Exercise 9.8], for a proof using current flows, and see [14, Corollary 10.8], for one using commute times. As a consequence of Theorem 8, we will provide yet another proof that effective resistance is a metric on graphs.

Definition 7

Let \(G=(V,E,\sigma )\) be a weighted, connected, simple graph. Given \(a,b\in V\), let \(\varGamma (a,b)\) be the connecting family of all paths between a and b. Fix \(1<p<\infty \) and let \(q:=p/(p-1)\) be the Hölder conjugate exponent. Then, we define

$$\begin{aligned} \delta _p(a,b):= {\left\{ \begin{array}{ll} 0 &{}\quad \text {if}\,\,a = b,\\ {\text {Mod}}_{p,\sigma }(\varGamma (a,b))^{-q/p} &{}\quad \text {if}\,\,a\ne b. \end{array}\right. } \end{aligned}$$

Theorem 8

Suppose \(G=(V,E,\sigma )\) is a weighted, connected, simple graph. Then, \(\delta _p\) is a metric on V. Moreover,

  (a) \(\lim _{p\uparrow \infty }\delta _p= d_{\mathrm{SP}}\);

  (b) \(\delta _2=d_{\mathrm{ER}}\);

  (c) for \(1<p<2\), \({\text {Mod}}_{p,\sigma }(\varGamma (a,b))^{-1}\) is a metric and it tends to \(d_{{\text {MC}}}(a,b)\) as \(p\rightarrow 1\).

Finally, for every \(\epsilon >0\) and every \(p\in (1,\infty )\) there is a connected graph for which \(\delta _p^{1+\epsilon }\) is not a metric.

Remark 7

Note that, in light of Theorem 1, when \(p=2\), the proof of Theorem 8 gives an alternative modulus-based proof that effective resistance is a metric.

Remark 8

It is straightforward to show that an arbitrary positive power of an ultrametric is also an ultrametric, so \((d_{\mathrm{MC}})^t\) is a metric for any \(t>0\). Using (11) and (12), it can be shown that as \(p\downarrow 1\), \(\delta _p\) converges to the limit

$$\begin{aligned} \lim _{t\rightarrow \infty }(d_{\mathrm{MC}}(a,b))^t = {\left\{ \begin{array}{ll} 0 &{} \text {if}\,\,d_{\mathrm{MC}}(a,b) < 1,\\ 1 &{} \text {if}\,\,d_{\mathrm{MC}}(a,b) = 1,\\ \infty &{} \text {if}\,\,d_{\mathrm{MC}}(a,b) > 1. \end{array}\right. } \end{aligned}$$

For unweighted graphs, this limit essentially decomposes the graph into its 2-edge-connected components. All nodes in the same component are distance zero from one another, while nodes in different components are at distance one.
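As a quick illustration, the 2-edge-connected components can be read off with standard graph tools. The following sketch assumes networkx and uses a graph of our own choosing, two triangles joined by a bridge, so that the limiting distance is 0 within each triangle and 1 between the triangles.

```python
# The p -> 1 limit of delta_p on an unweighted graph separates the
# 2-edge-connected components.
import networkx as nx

# two triangles joined by the bridge {2, 3}
G = nx.Graph([(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)])
print(list(nx.k_edge_components(G, k=2)))   # the components {0,1,2} and {3,4,5}
```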

Proof

Assuming the claim that \(\delta _p\) is a metric, the “moreover” parts (a) and (b) follow from Theorem 1. For (c), recall that a metric can always be raised to an exponent \(t\in (0,1)\) and still remain a metric. Since \(p/q<1\) for \(1<p<2\), it follows that \({\text {Mod}}_{p,\sigma }(\varGamma (a,b))^{-1}=\delta _p^{p/q}\) is a metric, and the claim follows from continuity in p. Finally, the fact that the exponent 1 is sharp for the metrics \(\delta _p\) is shown in [4]. For completeness, we repeat the argument here. Consider the (unweighted) path graph \(P_3\) with edges \(\{a,c\}\) and \(\{c,b\}\), and fix \(p\in (1,\infty )\). First, \({\text {Mod}}_p(\varGamma (a,c)) = 1\): any admissible density \(\rho \) must satisfy \(\rho (a,c)\ge 1\), and to minimize the energy we set \(\rho (a,c)=1\) and \(\rho (c,b)=0\). Likewise, \({\text {Mod}}_p(\varGamma (c,b)) =1\). For \({\text {Mod}}_p(\varGamma (a,b))\), the energy is minimized when \(\rho (a,c)=\rho (c,b)=1/2\). Thus,

$$\begin{aligned} {\text {Mod}}_p(\varGamma (a,b)) = (1/2)^p + (1/2)^p = 2^{1-p} \end{aligned}$$

Hence, \( \delta _p(a,b) = 2^{q(p-1)/p} = 2=1+1=\delta _p(a,c)+\delta _p(c,b)\). Since \(2^t>1^t+1^t\) whenever \(t>1\), the triangle inequality fails for \(\delta _p^t\) as soon as \(t>1\).

The proof of the main claim hinges on the dual formulation in terms of Fulkerson blocker duality. Fix \(p\in (1,\infty )\). Recall from Sect. 4.2 that the Fulkerson blocker family for \(\varGamma (a,b)\) is the family of all minimal ab-cuts \({\hat{\varGamma }}(a,b)\). By Theorem 4,

$$\begin{aligned} {\text {Mod}}_{p,\sigma }(\varGamma (a,b))^{-q/p}={\text {Mod}}_{q,{\hat{\sigma }}}({\hat{\varGamma }}(a,b)), \end{aligned}$$

where \(q:=p/(p-1)\) is the Hölder conjugate exponent of p and \({\hat{\sigma }}=\sigma ^{-q/p}\).

An important observation at this point is that the family of all ab-cuts, minimal or not, which we denote again by \(\varGamma _\mathrm{cut}(a,b)\) with a slight abuse of notation, is subordinated to \({\hat{\varGamma }}(a,b)\), since every ab-cut contains a minimal ab-cut; see Remark 2.

Now suppose \(a,b,c\in V\) are distinct. Then, every ab-cut \(S\in {\hat{\varGamma }}(a,b)\), say with \(a\in S\) and \(b\notin S\), falls into exactly one of two cases: either \(c\not \in S\), so that S separates a from c and is an ac-cut, or \(c\in S\), so that S separates c from b and is a cb-cut. Therefore,

$$\begin{aligned} {\hat{\varGamma }}(a,b)\subset \varGamma _{\mathrm{cut}}(a,c)\cup \varGamma _{\mathrm{cut}}(c,b). \end{aligned}$$
(35)

The triangle inequality then follows from monotonicity (8) and subadditivity (9) of modulus:

$$\begin{aligned} \delta _p(a,b)&= {\text {Mod}}_{p,\sigma }(\varGamma (a,b))^{-q/p}&\text {(Definition)}\\&={\text {Mod}}_{q,{\hat{\sigma }}}({\hat{\varGamma }}(a,b))&\text {(Fulkerson duality)}\\&\le {\text {Mod}}_{q,{\hat{\sigma }}}(\varGamma _{\mathrm{cut}}(a,c)\cup \varGamma _{\mathrm{cut}}(c,b))&\text {(by }(35) \text { and Monotonicity)}\\&\le {\text {Mod}}_{q,{\hat{\sigma }}}(\varGamma _{\mathrm{cut}}(a,c))+{\text {Mod}}_{q,{\hat{\sigma }}}(\varGamma _{\mathrm{cut}}(c,b))&\text {(Subadditivity)}\\&\le {\text {Mod}}_{q,{\hat{\sigma }}}({\hat{\varGamma }}(a,c))+{\text {Mod}}_{q,{\hat{\sigma }}}({\hat{\varGamma }}(c,b))&\text {(Subordination)}\\&=\delta _p(a,c)+\delta _p(c,b).&\text {(Fulkerson duality)} \end{aligned}$$

Verifying the remaining metric axioms is left to the reader. \(\square \)
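As a numerical sanity check on the sharpness argument above, the following sketch (numpy and scipy assumed; a generic constrained solver stands in for the closed-form minimization) recovers \({\text {Mod}}_p(\varGamma (a,b))=2^{1-p}\) on \(P_3\) together with the equality \(\delta _p(a,b)=\delta _p(a,c)+\delta _p(c,b)\).

```python
# Verify Mod_p(Gamma(a,b)) = 2^{1-p} on P_3 and delta_p(a,b) = 2 = 1 + 1.
import numpy as np
from scipy.optimize import minimize

p = 1.7                          # any fixed exponent in (1, infinity)
q = p / (p - 1)                  # Hoelder conjugate exponent

def modulus(N):
    """p-modulus on P_3 (edges e1={a,c}, e2={c,b}); rows of N are objects."""
    cons = [{'type': 'ineq', 'fun': lambda rho, row=row: row @ rho - 1}
            for row in N]
    res = minimize(lambda rho: np.sum(rho ** p), x0=np.ones(2),
                   bounds=[(0, None), (0, None)], constraints=cons)
    return res.fun

mod_ab = modulus(np.array([[1.0, 1.0]]))   # the single a-b path uses e1 and e2
mod_ac = modulus(np.array([[1.0, 0.0]]))   # the a-c path uses only e1
mod_cb = modulus(np.array([[0.0, 1.0]]))   # the c-b path uses only e2

delta = lambda m: m ** (-q / p)
print(mod_ab, 2 ** (1 - p))                          # agree within tolerance
print(delta(mod_ab), delta(mod_ac) + delta(mod_cb))  # both approximately 2
```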

7 Edge-conductance monotonicity

When studying the p-modulus of a family of objects \(\varGamma \) on a weighted graph \(G=(V,E,\sigma )\), we often refer to the weights \(\sigma (e)\) as edge-conductances. This terminology originates in the special case of connecting families \(\varGamma (a,b)\) on undirected graphs with \(p=2\). In that case, \({\text {Mod}}_{2,\sigma }(\varGamma (a,b))\) coincides with the effective conductance, and we can give an electrical network interpretation to the various quantities of interest. In particular, the optimal density \(\rho ^*(e)\) represents the absolute value of the potential drop across e, \(\sigma (e)\) is the conductance of e, and therefore \(\sigma (e)\rho ^*(e)\) is the current flow across e (by Ohm’s law). Moreover, recall the optimal density \(\eta ^*(e)\) for the Fulkerson blocker problem, which probabilistically is the expected usage of e by random paths under an optimal pmf (see Theorem 7). We know that \(\eta ^*(e)\) is related to \(\rho ^*(e)\) via (23), which in this case can be written as

$$\begin{aligned} \eta ^*(e)=\frac{\sigma (e)\rho ^*(e)}{{\text {Mod}}_{2,\sigma }(\varGamma (a,b))}. \end{aligned}$$

Therefore, \(\eta ^*(e)\) is proportional to the current flow across e. Moreover,

$$\begin{aligned} \rho ^*(e)\eta ^*(e)=\frac{\sigma (e)\rho ^*(e)^2}{\sum _{e'\in E}\sigma (e')\rho ^*(e')^2} \end{aligned}$$

is the fraction of the total dissipated power due to the resistor on edge e.
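The following sketch, using only numpy, makes this electrical dictionary concrete on a 4-cycle with conductances chosen arbitrarily for illustration: it computes node potentials from the weighted Laplacian, reads off \(\rho ^*\) and \(\eta ^*\), and checks that the energy of \(\rho ^*\) equals the effective conductance and that \(\eta ^*\) is a unit flow.

```python
# Electrical interpretation of Mod_2(Gamma(a,b)) on a 4-cycle.
import numpy as np

# vertices 0=a, 1=c, 2=b, 3=d; edges of the cycle a-c-b-d-a with conductances
edges = [(0, 1), (1, 2), (0, 3), (3, 2)]
sigma = np.array([1.0, 2.0, 3.0, 4.0])

L = np.zeros((4, 4))                         # weighted graph Laplacian
for (u, w), s in zip(edges, sigma):
    L[u, u] += s; L[w, w] += s
    L[u, w] -= s; L[w, u] -= s

b = np.zeros(4); b[0], b[2] = 1.0, -1.0      # unit current in at a, out at b
v = np.linalg.pinv(L) @ b                    # node potentials
R_eff = v[0] - v[2]                          # effective resistance between a, b

drop = np.abs(np.array([v[u] - v[w] for u, w in edges]))
rho_star = drop / R_eff                      # optimal density (unit voltage a-b)
eta_star = sigma * drop                      # edge currents of the unit a-b flow

print(np.isclose(sigma @ rho_star**2, 1.0 / R_eff))     # energy = Mod_2
print(np.isclose(eta_star[0] + eta_star[2], 1.0))       # unit flow out of a
print(np.allclose(eta_star, sigma * rho_star * R_eff))  # eta* = sigma rho*/Mod_2
```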

In the theory of electrical networks, the following edge-conductance monotonicity property is well known; see, for instance, Spielman’s notes [24, Problem 4].

Proposition 3

Let \(G = (V, E)\) be an undirected, connected graph and let r be the edge resistances. Let e be an edge of E and let \(\tilde{r}\) be another set of resistances such that \({\tilde{r}}(e') = r(e')\), for all \(e'\ne e\), and \({\tilde{r}}(e) \ge r(e)\). Fix an edge \(\{s, t\}\) of G. If one unit of current flows from s to t, the amount of current that flows through edge e under resistances \({\tilde{r}}\) is no larger than the amount that flows under resistances r.

Our goal is to generalize Proposition 3 to the p-modulus of arbitrary families of objects. In the language of modulus, Proposition 3 says that if \(\{s,t\}\) is an edge in E and we are computing \({\text {Mod}}_{2,\sigma }(\varGamma (s,t))\), then lowering \(\sigma (e)\) on some edge \(e\in E\) results in a new modulus problem \({\text {Mod}}_{2,{\tilde{\sigma }}}(\varGamma (s,t))\) whose optimal blocker density (the unit-flow current, by the discussion above) satisfies \(\eta ^*_{{\tilde{\sigma }}}(e)\le \eta ^*_\sigma (e)\).

Theorem 9 is a reformulation, in the context of general families of objects, of results from [2, Section 6.2] that were formulated in terms of families of walks. In order to keep the flow of the paper intact, we have relegated the proof of Theorem 9 to the Appendix.

Theorem 9

[2] Let \(G=(V,E,\sigma )\) be a graph and \(\varGamma \) a non-empty and non-trivial finite family of objects on G. Fix \(1< p<\infty \) and let \(\rho ^*_\sigma \) be the extremal density for \({\text {Mod}}_{p,\sigma }(\varGamma )\). Then,

  1. the map \(\phi {:}\,{\mathbb {R}}_{> 0}^E\rightarrow {\mathbb {R}}\) given by \(\phi (\sigma ):={\text {Mod}}_{p,\sigma }(\varGamma )\) is Lipschitz continuous;

  2. the extremal density \(\rho ^*_\sigma \) is also continuous in \(\sigma \);

  3. the map \(\phi \) is concave;

  4. the map \(\phi \) is differentiable, and the partial derivatives of \(\phi \) satisfy

    $$\begin{aligned} \frac{\partial \phi }{\partial \sigma (e)}=\rho ^*_{\sigma }(e)^p\quad \forall e\in E. \end{aligned}$$

Theorem 10

Under the hypotheses of Theorem 9, with \(\eta ^*_\sigma \) given by (23), we have that, in each variable \(\sigma (e)\):

  (a) \({\text {Mod}}_{p,\sigma }(\varGamma )\) is weakly increasing;

  (b) \(\rho ^*_\sigma (e)\) is weakly decreasing;

  (c) \(\eta ^*_\sigma (e)\) is weakly increasing.

Remark 9

Note that Theorem 10(c) can be reformulated using the probabilistic interpretation (34) as saying that if \(\sigma (e)\) increases (and the other weights are left alone), then the expected usage of edge e cannot decrease.

Proof of Theorem 10

For part (a), by Theorem 9 (1), \({\text {Mod}}_{p,\sigma }(\varGamma )\) is absolutely continuous in \(\sigma (e)\). In particular, the fundamental theorem of calculus applies, and the result follows from Theorem 9 (4), since the partial derivatives \(\rho ^*_\sigma (e)^p\) are nonnegative.

For part (b), write \(f(h):={\text {Mod}}_{p,\sigma _h}(\varGamma )\), where \(\sigma _h:=\sigma +h\mathbb {1}_{e}\), and fix \(h>0\). Then, by concavity and differentiability (Theorem 9 (3) and (4)),

$$\begin{aligned} f'(0)\ge \frac{f(h)-f(0)}{h}\ge f'(h). \end{aligned}$$

The result follows from Theorem 9 (4) since

$$\begin{aligned} f'(h) = \frac{\partial }{\partial \sigma _h(e)}\phi (\sigma _h) = \rho _{\sigma _h}^*(e)^p. \end{aligned}$$

Note that (23) is not sufficient to prove part (c), since it is not immediately clear how the right-hand side varies with \(\sigma (e)\). Instead, we use the fact that, by Theorem 4, \(\eta ^*_\sigma \) is the optimal density for \({\text {Mod}}_{q,{\hat{\sigma }}}({\hat{\varGamma }})\), where \({\hat{\sigma }}=\sigma ^{-q/p}\) is a smooth, decreasing function of \(\sigma \), and then apply part (b). \(\square \)
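The following numerical sketch (numpy and scipy assumed; the 4-cycle and the weights are illustrative choices) displays all three monotonicity properties of Theorem 10 for \(p=2\), with \(\eta ^*\) computed from the identity \(\eta ^*(e)=\sigma (e)\rho ^*(e)/{\text {Mod}}_{2,\sigma }\) recalled at the beginning of this section.

```python
# Raise sigma on edge e1 = {a,c} of the 4-cycle a-c-b-d-a and watch:
# Mod_2 increases, rho*(e1) decreases, eta*(e1) increases.
import numpy as np
from scipy.optimize import minimize

# Rows are the edge-usage vectors of the two a-b paths over the edges
# (e1={a,c}, e2={c,b}, e3={a,d}, e4={d,b}).
N = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])

def mod2(sigma):
    cons = [{'type': 'ineq', 'fun': lambda r, row=row: row @ r - 1}
            for row in N]
    res = minimize(lambda r: sigma @ r**2, x0=np.full(4, 0.5),
                   bounds=[(0, None)] * 4, constraints=cons)
    return res.fun, res.x

for s1 in (1.0, 2.0, 4.0):                   # increase sigma on e1 only
    sigma = np.array([s1, 1.0, 1.0, 1.0])
    m, rho = mod2(sigma)
    eta1 = sigma[0] * rho[0] / m             # the p = 2 identity for eta*
    print(f"sigma(e1)={s1}: Mod={m:.4f}  rho*(e1)={rho[0]:.4f}  eta*(e1)={eta1:.4f}")
```

As \(\sigma (e_1)\) grows from 1 to 4, the modulus rises from 1 to 1.3, \(\rho ^*(e_1)\) falls from 0.5 to 0.2, and \(\eta ^*(e_1)\) rises from 0.5 to about 0.615, in agreement with parts (a), (b), and (c).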

8 Randomly weighted graphs

In this section, we explore the main arguments in [16] and recast them in the language of modulus. The goal is to study graphs \(G=(V,E,\sigma )\) where the weights \(\sigma \in {\mathbb {R}}_{>0}^E\) are random variables and compare modulus computed on G to the corresponding modulus computed on the deterministic graph \({\mathbb {E}}G:=(V,E,{\mathbb {E}}\sigma )\). Theorem 11 is a reformulation of Theorem 7 in [16], which generalized Theorem 2.1 in [17]. In Theorem 12, we combine Theorem 11 with the monotonicity properties in Theorem 1 to obtain a new lower bound for the expected p-modulus in terms of p-modulus on \({\mathbb {E}}G\).

First, we recall a lemma from Lovász’s paper.

Lemma 2

[16, Lemma 9] Let \(W\in {\mathbb {R}}_{> 0}^E\) be a random variable with survival function

$$\begin{aligned} S(t):={\mathbb {P}}\left( W\ge t\right) , \qquad \text {for }t\in {\mathbb {R}}_{\ge 0}^E. \end{aligned}$$

If S(t) is log-concave, then the survival function of \(\min _{e\in E} W(e)\) is also log-concave and W satisfies

$$\begin{aligned} {\mathbb {E}}\left( \min _{e\in E} W(e)\right) \ge \left( \sum _{e\in E} \frac{1}{{\mathbb {E}}(W(e))}\right) ^{-1}. \end{aligned}$$
(36)

Property (36) is satisfied if, for instance, the random variables \(\{W(e)\}_{e\in E}\) are independent and exponentially distributed, \(W(e)\sim \mathrm{Exp}(\lambda (e))\), i.e., such that \({\mathbb {P}}(W(e)>t)=\min \{\exp (-\lambda (e)t), 1\}\).
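The exponential case can also be checked empirically. In the following Monte Carlo sketch (numpy assumed; the rates are illustrative), (36) holds with equality, since the minimum of independent \(\mathrm{Exp}(\lambda (e))\) variables is itself exponential with rate \(\sum _{e}\lambda (e)\).

```python
# Monte Carlo check of (36) for independent exponential weights.
import numpy as np

rng = np.random.default_rng(0)
lam = np.array([1.0, 2.0, 5.0])              # rates; E W(e) = 1/lam(e)
W = rng.exponential(scale=1.0 / lam, size=(10**6, 3))

lhs = W.min(axis=1).mean()                   # E[ min_e W(e) ]
rhs = 1.0 / lam.sum()                        # (sum_e 1/E W(e))^{-1} = 1/8
print(lhs, rhs)                              # both approximately 0.125
```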

It is useful to collect some properties of random variables with log-concave survival functions.

Proposition 4

Let \(W\in {\mathbb {R}}_{> 0}^E\) be a random variable with log-concave survival function. Then, the following random variables also have log-concave survival function:

  (a) CW, where C is an \(E\times E\) diagonal matrix with positive diagonal elements;

  (b) \(W^*\), where \(E^*\subset E\), and \(W^*\in {\mathbb {R}}_{>0}^{E^*}\) is the projection of W onto \({\mathbb {R}}_{>0}^{E^*}\).

Proof

We define \(S(t):={\mathbb {P}}(W\ge t)\) for \(t\in {\mathbb {R}}_{\ge 0}^E\). For (a), note that

$$\begin{aligned} \log {\mathbb {P}}\left( CW \ge t\right) = \log S(C^{-1}t), \end{aligned}$$

which is the composition of a concave function with an affine map. Likewise, (b) follows by composing \(\log S\) with the embedding \(t^*\mapsto (t^*,0)\) of \({\mathbb {R}}_{\ge 0}^{E^*}\) into \({\mathbb {R}}_{\ge 0}^{E}\), since \({\mathbb {P}}(W^*\ge t^*)={\mathbb {P}}(W\ge (t^*,0))\). \(\square \)

Theorem 11

Let \(G=(V,E,\sigma )\) be a simple finite graph. Assume that \(\sigma \) is a random variable in \({\mathbb {R}}_{>0}^E\) whose survival function is log-concave. Let \(\varGamma \) be a finite non-trivial family of objects on G, with \({\mathcal {N}}_{\mathrm{min}}\) defined as in (5). Then,

$$\begin{aligned} {\mathbb {E}}{\text {Mod}}_{1,\sigma }(\varGamma )\ge {\mathcal {N}}_{\mathrm{min}}{\text {Mod}}_{2,{\mathbb {E}}\sigma }(\varGamma ). \end{aligned}$$

Proof

Let \({\hat{\varGamma }}\) be the Fulkerson blocker of \(\varGamma \). Let \(\rho ^*\) be extremal for \({\text {Mod}}_{2,{\mathbb {E}}\sigma }(\varGamma )\) and \(\eta ^*\) be extremal for \({\text {Mod}}_{2,({\mathbb {E}}\sigma )^{-1}}({\hat{\varGamma }})\). Also, let \(\mu ^*\in {\mathcal {P}}(\varGamma )\) be an optimal pmf. Then, we know that

$$\begin{aligned} \eta ^*(e) = \frac{{\mathbb {E}}\sigma (e)\rho ^*(e)}{{\text {Mod}}_{2,{\mathbb {E}}\sigma }(\varGamma )}=\sum _{\gamma \in \varGamma }\mu ^*(\gamma ){\mathcal {N}}(\gamma ,e)={\mathbb {E}}_{\mu ^*}\left( {\mathcal {N}}({\underline{\gamma }},e)\right) ,\qquad \forall e\in E. \end{aligned}$$
(37)

To avoid dividing by zero, let \(E^*:=\{e\in E{:}\,\eta ^*(e)>0\}\) and let \(\varGamma ^*:=\{\gamma \in \varGamma {:}\,\mu ^*(\gamma )>0\}\). Note that, if \(e\not \in E^*\), then

$$\begin{aligned} 0=\eta ^*(e)=\sum _{\gamma \in \varGamma ^*}\mu ^*(\gamma ){\mathcal {N}}(\gamma ,e), \end{aligned}$$

hence \({\mathcal {N}}(\gamma ,e)=0\) for all \(\gamma \in \varGamma ^*\). Therefore, for any \(\rho \in {\text {Adm}}(\varGamma )\) and \(\gamma \in \varGamma ^*\),

$$\begin{aligned} \sum _{e\in E^*}{\mathcal {N}}(\gamma ,e)\rho (e)=\sum _{e\in E}{\mathcal {N}}(\gamma ,e)\rho (e)=\ell _\rho (\gamma ) \ge 1. \end{aligned}$$
(38)

Now, fix an arbitrary \(\rho \in {\text {Adm}}(\varGamma )\). Then, by (37),

$$\begin{aligned} {\mathcal {E}}_{1,\sigma }(\rho ) \ge \sum _{e\in E^*}\sigma (e)\rho (e) = {\text {Mod}}_{2,{\mathbb {E}}\sigma }(\varGamma )\sum _{e\in E^*}\sigma (e)\rho (e)\frac{1}{{\mathbb {E}}\sigma (e)\rho ^*(e)}{\mathbb {E}}_{\mu ^*}\left( {\mathcal {N}}({\underline{\gamma }},e)\right) , \end{aligned}$$
(39)

where the denominator is positive since \(\sigma >0\) and since \(\rho ^*> 0\) on \(E^*\) by (37). Note that

$$\begin{aligned} \sum _{e\in E^*}\sigma (e)\rho (e)\frac{1}{{\mathbb {E}}\sigma (e)\rho ^*(e)}{\mathbb {E}}_{\mu ^*}\left( {\mathcal {N}}({\underline{\gamma }},e)\right)&= \sum _{\gamma \in \varGamma ^*}\mu ^*(\gamma )\sum _{e\in E^*}\frac{\sigma (e)}{{\mathbb {E}}\sigma (e)\rho ^*(e)}{\mathcal {N}}(\gamma ,e) \rho (e) \\&\ge \sum _{\gamma \in \varGamma ^*}\mu ^*(\gamma )\min _{\begin{array}{c} e\in E^*\\ {\mathcal {N}}(\gamma ,e)\ne 0 \end{array}}\frac{\sigma (e)}{{\mathbb {E}}\sigma (e)\rho ^*(e)}\sum _{e\in E^*}{\mathcal {N}}(\gamma ,e)\rho (e) \\&\ge \sum _{\gamma \in \varGamma ^*}\mu ^*(\gamma )\min _{\begin{array}{c} e\in E^*\\ {\mathcal {N}}(\gamma ,e)\ne 0 \end{array}}\frac{\sigma (e)}{{\mathbb {E}}\sigma (e)\rho ^*(e)}, \end{aligned}$$

where the last inequality follows by (38).

Minimizing in (39) over \(\rho \in {\text {Adm}}(\varGamma )\), we find

$$\begin{aligned} {\text {Mod}}_{1,\sigma }(\varGamma ) \ge {\text {Mod}}_{2,{\mathbb {E}}\sigma }(\varGamma )\sum _{\gamma \in \varGamma }\mu ^*(\gamma )\min _{\begin{array}{c} e\in E^*\\ {\mathcal {N}}(\gamma ,e)\ne 0 \end{array}}\frac{\sigma (e)}{{\mathbb {E}}\sigma (e)\rho ^*(e)} \end{aligned}$$
(40)

Note that, for each \(\gamma \in \varGamma ^*\), by Proposition 4 (a) and (b) and Lemma 2, the scaled random variables

$$\begin{aligned} X(e):=\frac{\sigma (e)}{{\mathbb {E}}\sigma (e)\rho ^*(e)}\qquad \text {for }e\in E^*\text { with } {\mathcal {N}}(\gamma ,e)\ne 0, \end{aligned}$$

have the property that

$$\begin{aligned} {\mathbb {E}}\left( \min _{\begin{array}{c} e\in E^*\\ {\mathcal {N}}(\gamma ,e)\ne 0 \end{array}}X(e)\right) \ge \left( \sum _{\begin{array}{c} e\in E^*\\ {\mathcal {N}}(\gamma ,e)\ne 0 \end{array}} \frac{1}{{\mathbb {E}}(X(e))}\right) ^{-1}= \left( \sum _{\begin{array}{c} e\in E^*\\ {\mathcal {N}}(\gamma ,e)\ne 0 \end{array}} \rho ^*(e)\right) ^{-1}. \end{aligned}$$

Moreover, by (5),

$$\begin{aligned} \left( \sum _{\begin{array}{c} e\in E^*\\ {\mathcal {N}}(\gamma ,e)\ne 0 \end{array}} \rho ^*(e)\right) ^{-1}\ge {\mathcal {N}}_{\mathrm{min}}\left( \sum _{\begin{array}{c} e\in E^*\\ {\mathcal {N}}(\gamma ,e)\ne 0 \end{array}}{\mathcal {N}}(\gamma ,e)\rho ^*(e)\right) ^{-1}. \end{aligned}$$

Finally, by complementary slackness, since \(\gamma \in \varGamma ^*\), we have \(\mu ^*(\gamma )>0\), hence

$$\begin{aligned} \sum _{\begin{array}{c} e\in E^*\\ {\mathcal {N}}(\gamma ,e)\ne 0 \end{array}} {\mathcal {N}}(\gamma ,e)\rho ^*(e) = \sum _{e\in E} {\mathcal {N}}(\gamma ,e)\rho ^*(e) = 1. \end{aligned}$$

Taking the expectation on both sides of (40) gives the claim. \(\square \)

Theorem 11 has some interesting consequences for p-modulus on randomly weighted graphs. First, recall from Theorem 9 (3) that the map

$$\begin{aligned} \sigma \mapsto {\text {Mod}}_{p,\sigma }(\varGamma ) \end{aligned}$$

is concave for \(1\le p<\infty \). In particular, if \(\sigma \in {\mathbb {R}}_{>0}^E\) is a random variable, then by Jensen’s inequality:

$$\begin{aligned} {\mathbb {E}}{\text {Mod}}_{p,\sigma }(\varGamma )\le {\text {Mod}}_{p,{\mathbb {E}}\sigma }(\varGamma ). \end{aligned}$$
(41)

The following theorem gives a lower bound.

Theorem 12

Let \(G=(V,E,\sigma )\) be a simple finite graph. Assume \(\sigma \) is a random variable in \({\mathbb {R}}_{>0}^E\) with log-concave survival function. Let \(\varGamma \) be a finite non-trivial family of objects on G with \({\mathcal {N}}_{\mathrm{min}}\) defined as in (5). Then, for \(1\le p \le 2\),

$$\begin{aligned} {\mathbb {E}}{\text {Mod}}_{p,\sigma }(\varGamma )\ge \frac{{\mathcal {N}}_\mathrm{min}^p}{{\mathbb {E}}\sigma (E)}{\text {Mod}}_{p,{\mathbb {E}}\sigma }(\varGamma )^2. \end{aligned}$$
(42)

Proof

When \(1<p\le 2\), we have, by (12),

$$\begin{aligned} {\text {Mod}}_{2,{\mathbb {E}}\sigma }(\varGamma )\ge {\mathbb {E}}\sigma (E)^{1-2/p}{\text {Mod}}_{p,{\mathbb {E}}\sigma }(\varGamma )^{2/p}. \end{aligned}$$

So by Theorem 11, we get

$$\begin{aligned} {\mathbb {E}}{\text {Mod}}_{1,\sigma }(\varGamma ) \ge {\mathcal {N}}_\mathrm{min}{\mathbb {E}}\sigma (E)^{1-2/p}{\text {Mod}}_{p,{\mathbb {E}}\sigma }(\varGamma )^{2/p}. \end{aligned}$$
(43)

Letting \(p\rightarrow 1\) and using continuity in p (Theorem 1), we obtain the case \(p=1\) of (42):

$$\begin{aligned} {\mathbb {E}}{\text {Mod}}_{1,\sigma }(\varGamma ) \ge \frac{{\mathcal {N}}_\mathrm{min}}{{\mathbb {E}}\sigma (E)}{\text {Mod}}_{1,{\mathbb {E}}\sigma }(\varGamma )^2. \end{aligned}$$

Moreover, estimating the 1-modulus in terms of the p-modulus by using (12) a second time, and then applying Hölder’s inequality, gives

$$\begin{aligned} {\mathbb {E}}{\text {Mod}}_{1,\sigma }(\varGamma ) \le {\mathbb {E}}\left( \sigma (E)^{1/q}{\text {Mod}}_{p,\sigma }(\varGamma )^{1/p} \right) \le {\mathbb {E}}\sigma (E)^{1/q}{\mathbb {E}}\left( {\text {Mod}}_{p,\sigma }(\varGamma )\right) ^{1/p}. \end{aligned}$$
(44)

Combining (43) and (44) gives (42). \(\square \)

Remark 10

By combining (41) with (42), we find that, for \(1\le p\le 2\),

$$\begin{aligned} {\text {Mod}}_{p,{\mathbb {E}}\sigma }(\varGamma )\le \frac{{\mathbb {E}}\sigma (E)}{{\mathcal {N}}_{\mathrm{min}}^p}. \end{aligned}$$

This is not a contradiction: the inequality is always satisfied, since the constant density \(\rho \equiv {\mathcal {N}}_{\mathrm{min}}^{-1}\) is always admissible, which gives \({\text {Mod}}_{p,{\mathbb {E}}\sigma }(\varGamma )\le \sum _{e\in E}{\mathbb {E}}\sigma (e)\,{\mathcal {N}}_{\mathrm{min}}^{-p}={\mathbb {E}}\sigma (E)/{\mathcal {N}}_{\mathrm{min}}^p\).

Theorem 12 leads one to wonder what lower bounds can be established for \({\mathbb {E}}{\text {Mod}}_{2,\sigma }(\varGamma )\) when \(\sigma \) is allowed to vanish and its survival function is not necessarily log-concave. For instance, it would be interesting to study what happens when the weights \(\sigma (e)\) are independent Bernoulli variables, namely when G is an Erdős–Rényi graph. The situation there is complicated by the fact that the family \(\varGamma \) will change with every new sample of the weights \(\sigma \). For instance, the family of all spanning trees will be different for different choices of \(\sigma \).