An MBO Scheme for Minimizing the Graph Ohta–Kawasaki Functional

We study a graph-based version of the Ohta-Kawasaki functional, which was originally introduced in a continuum setting to model pattern formation in diblock copolymer melts and has been studied extensively as a paradigmatic example of a variational model for pattern formation. Graph-based problems inspired by partial differential equations (PDEs) and variational methods have been the subject of many recent papers in the mathematical literature, because of their applications in areas such as image processing and data classification. This paper extends the area of PDE-inspired graph-based problems to pattern-forming models. We introduce a mass-conserving Merriman-Bence-Osher (MBO) scheme for minimizing the graph Ohta-Kawasaki functional with a mass constraint. We present three main results: (1) the Lyapunov functionals associated with this MBO scheme Γ-converge to the Ohta-Kawasaki functional (which includes the standard graph-based MBO scheme and the graph total variation functional as a special case); (2) there is a class of graphs on which the Ohta-Kawasaki MBO scheme corresponds to a standard MBO scheme on a transformed graph and for which generalized comparison principles hold; (3) this MBO scheme allows for the numerical computation of (approximate) minimizers of the graph Ohta-Kawasaki functional with a mass constraint.


Introduction
In this paper, we study the minimization problem

min_u TV(u) + (γ/2) ‖u − A(u)‖²_{H^{−1}}

on undirected graphs. Here TV and ‖·‖_{H^{−1}} are graph-based analogues of the continuum total variation seminorm and continuum H^{−1} Sobolev norm, respectively, γ ≥ 0, and u is allowed to vary over the set of node functions with prescribed average mass A(u). These concepts will be made precise later in the paper, culminating in formulation (34) of the minimization problem. The main contributions of this paper are the introduction of the graph Ohta-Kawasaki functional into the literature, the development of an algorithm to produce (approximate) minimizers, and the study of that algorithm, which leads to, among other results, further insight into the connection between the graph Merriman-Bence-Osher (MBO) method and the graph total variation, following on from initial investigations in van Gennip et al. (2014).
There are various reasons to study this minimization problem. First of all, it is the graph-based analogue of the continuum Ohta-Kawasaki variational model (Ohta and Kawasaki 1986; Kawasaki et al. 1988). This model was originally introduced as a model for pattern formation in diblock copolymer systems and has become a paradigmatic example of a variational model which exhibits pattern formation. It spawned a large mathematical literature which explores its properties analytically and computationally. A complete literature overview for this area is outside the scope of this paper. For a brief overview of the continuum Ohta-Kawasaki model, see Section S1 in Supplementary Materials. (We use the prefix "S" to indicate a reference to Supplementary Materials.) For a sample of mathematical papers on this topic, see for example (Ren and Wei 2000; Choksi and Ren 2003; van Gennip et al. 2009; Choksi et al. 2009; Le 2010; Choksi et al. 2011; Glasner 2017) and other references mentioned in Section S1. The problem studied in this paper thus follows in the footsteps of a rich mathematical heritage, but at the same time, being the graph analogue of the continuum functional, connects with the recent interest in discrete PDE-inspired problems.
Recently, there has been a growing enthusiasm in the mathematical literature for graph-based variational methods and graph-based dynamics which mimic continuum-based variational methods and partial differential equations (PDEs), respectively. This is partly driven by novel applications of such methods in data science and image analysis (Ta et al. 2011; Elmoataz et al. 2012; Bertozzi and Flenner 2012; Merkurjev et al. 2013; Hu et al. 2013; Garcia-Cardona et al. 2014; Calatroni et al. 2017; Bosch et al. 2016; Merkurjev et al. 2017; Elmoataz et al. 2017) and partly by theoretical interest in the new connections between graph theory and PDEs (van Gennip and Bertozzi 2012; van Gennip et al. 2014; Trillos and Slepčev 2016). Broadly speaking, these studies fall into one (or more) of three categories: papers connecting graph problems with continuum problems, for example through a limiting process (van Gennip and Bertozzi 2012; Trillos and Slepčev 2016; Trillos et al. 2016); papers adapting a PDE approach to a graph context in order to tackle a graph problem such as graph clustering and classification (Bertozzi and Flenner 2016; Bresson et al. 2014; Merkurjev et al. 2016), maximum cut computations (Keetch and van Gennip in prep), and bipartite matching (Caracciolo et al. 2014; Caracciolo and Sicuro 2015); and papers studying the graph analogue of a PDE or variational problem that has interesting properties in the continuum, to explore what (potentially similar) properties are present in the graph-based version of the problem (van Gennip et al. 2014; Luo and Bertozzi 2017; Elmoataz and Buyssens 2017). This paper mostly falls in the latter category.
The study of the graph-based Ohta-Kawasaki model is also of interest because it connects with graph methods, concepts, and questions that have recently attracted attention, such as the graph MBO method (also known as threshold dynamics), graph curvature, and the question of how these concepts relate to each other. The MBO scheme was originally introduced (in a continuum setting) to approximate motion by mean curvature (Merriman et al. 1992, 1993, 1994). It is an iterative scheme, which alternates between a short-time diffusion step and a threshold step. Not only have these dynamics been proven to converge to motion by mean curvature (Evans 1993; Barles and Georgelin 1995; Swartz and Yip 2017), but they have been a very useful basis for numerical schemes as well, both in the continuum and on graphs. Without aiming for completeness, we mention some of the papers that investigate or use the MBO scheme: (Mascarenhas 1992; Ruuth 1998a, b; Chambolle and Novaga 2006; Esedoḡlu et al. 2008, 2010; Hu et al. 2013; Merkurjev et al. 2013, 2014; Hu et al. 2015; Esedoḡlu and Otto 2015).
In this paper, we study two different MBO schemes, (OKMBO) and (mcOKMBO). The former is an extension of the standard graph MBO scheme of van Gennip et al. (2014) in the sense that it replaces the diffusion step in the scheme with a step whose dynamics are related to the Ohta-Kawasaki model and reduce to diffusion in the special case when γ = 0 (for details, see Sect. 5.1). The latter uses the same dynamics as the former in the first step, but incorporates mass conservation in the threshold step. The (mcOKMBO) scheme produces approximate graph Ohta-Kawasaki minimizers and is the one we use in our simulations, which are presented in Sect. 7 and Section S9 of Supplementary Materials. The scheme (OKMBO) is of interest both as a precursor to (mcOKMBO) and as an extension of the standard graph MBO scheme. In van Gennip et al. (2014), it was conjectured that the standard graph MBO scheme is related to graph mean curvature flow and minimizers of the graph total variation functional. This paper furthers the study of that conjecture (but does not provide a definitive answer): in Sect. 5.2 it is shown that the Lyapunov functionals associated with the (OKMBO) scheme Γ-converge to the graph Ohta-Kawasaki functional (which reduces to the total variation functional in the case when γ = 0). Moreover, in Sect. 6 we introduce a special class of graphs, C_γ, dependent on γ. For graphs from this class the (OKMBO) scheme can be interpreted as the standard graph MBO scheme on a transformed graph. For such graphs we extend existing elliptic and parabolic comparison principles for the graph Laplacian and graph diffusion equation to our new Ohta-Kawasaki operator and dynamics (Lemmas 6.13 and 6.15).
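To make the mass-conserving threshold step concrete, here is a small sketch in Python. The ranking-based assignment (fill the set with the nodes of largest diffused value until the target mass is reached) is our hypothetical reading of such a step; the function name, tie-breaking, and tolerance are our own, and the precise (mcOKMBO) routine is the one specified in Sect. 5.

```python
import numpy as np

def mass_conserving_threshold(u, d, r, M):
    """Hypothetical sketch of a mass-conserving threshold step:
    set chi_i = 1 at the nodes with the largest values of u, adding
    nodes (each contributing mass d_i^r) until the target mass M is
    reached; all other nodes get chi_i = 0."""
    chi = np.zeros_like(u)
    mass = 0.0
    for i in np.argsort(-u):            # nodes in decreasing order of u
        if mass + d[i]**r > M + 1e-12:  # next node would overshoot M
            break
        chi[i] = 1.0
        mass += d[i]**r
    return chi
```

For r = 0 every node contributes mass 1, so this step simply keeps the M largest entries of the diffused function.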
A significant role in the analysis presented in this paper is played by the equilibrium measures associated with a given node subset (Bendito et al. 2000b, 2003), especially in the construction of the aforementioned class C_0. In Sect. 3 we study these equilibrium measures and the role they play in constructing Green's functions for the graph Dirichlet and Poisson problems. The Poisson problem, in particular, is an important ingredient in the definition of the graph H^{−1} norm and the graph Ohta-Kawasaki functional as they are introduced in Sect. 4. Both the equilibrium measures and the Ohta-Kawasaki functional itself are related to the graph curvature, which was introduced in van Gennip et al. (2014), as is shown in Lemma 3.6 and Corollary 4.12, respectively.
The structure of the paper is as follows. In Sect. 2 we define our general setting. Section 3 introduces the equilibrium measures from Bendito et al. (2003) into the paper (the terminology is derived from potential theory; see, e.g., Simon 2007 and references therein) and uses them to study the Dirichlet and Poisson problems on graphs, generalizing some results from Bendito et al. (2003). In Sect. 4 we define the H^{−1} inner product and norm and use those to construct the object at the centre of our paper: the (sharp interface) Ohta-Kawasaki functional on graphs, F_0. We also briefly consider F_ε, a diffuse interface version of the Ohta-Kawasaki functional, and its relationship with F_0. Moreover, in this section we start using tools from spectral analysis to study F_0. These tools will be one of the main ingredients in the remainder of the paper. In Sect. 5 the algorithms (OKMBO) and (mcOKMBO) are introduced and analysed. It is shown that both these algorithms have an associated Lyapunov functional (which extends a result from van Gennip et al. 2014) and that these functionals Γ-converge to F_0 in the limit when τ (the time parameter associated with the first step in the MBO iteration) goes to zero. We introduce the class C_γ in Sect. 6 and prove that the Ohta-Kawasaki dynamics [i.e. the dynamics used in the first steps of both (OKMBO) and (mcOKMBO)] on graphs from this class correspond to diffusion on a transformed graph. We also prove comparison principles for these graphs. In Sect. 7 we then use (mcOKMBO) to numerically construct (approximate) minimizers of F_0, before ending with a discussion of potential future research directions in Sect. 8. This paper is accompanied by Supplementary Materials, which contain further background information, results, examples, numerical simulations, and deferred proofs.

Setup
In this paper we consider graphs G ∈ G, where G is the set consisting of all finite, simple, connected, undirected, edge-weighted graphs G = (V, E, ω) with n := |V| ≥ 2 nodes. If we want to consider an unweighted graph, we view it as a weighted graph with ω = 1 on E.
Assume G ∈ G is given. Let V be the set of node functions u : V → R and E the set of skew-symmetric edge functions ϕ : E → R. For i ∈ V, u ∈ V, we write u_i := u(i) and, for (i, j) ∈ E, ϕ ∈ E, we write ϕ_ij := ϕ((i, j)). To simplify notation, we extend each ϕ ∈ E to a function ϕ : V² → R (without changing notation) by setting ϕ_ij := 0 if (i, j) ∉ E. The condition that ϕ is skew-symmetric means that, for all i, j ∈ V, ϕ_ij = −ϕ_ji. Similarly, for the edge weights we write ω_ij := ω((i, j)) and we extend ω (without changing notation) to a function ω : V² → R by setting ω_ij := 0 if (i, j) ∉ E. The degree of node i ∈ V is d_i := Σ_{j∈V} ω_ij, and we write d₋ := min_{i∈V} d_i and d₊ := max_{i∈V} d_i. Because G ∈ G is connected and n ≥ 2, there are no isolated nodes and thus d₋, d₊ > 0.
For a node i ∈ V, we denote the set of its neighbours by N(i) := {j ∈ V : ω_ij > 0}. For simplicity of notation, we will assume that the nodes of a given graph G ∈ G are labelled such that V = {1, ..., n}. For definiteness and to avoid confusion we specify that we consider 0 ∉ N, i.e., N = {1, 2, 3, ...}, and when using the subset notation A ⊂ B we allow for the possibility that A = B. The characteristic function (or indicator function) χ_S of a node set S ⊂ V is defined by (χ_S)_i := 1 if i ∈ S and (χ_S)_i := 0 otherwise. If S = {i}, we can use the Kronecker delta to write (χ_{i})_j = δ_ij := 1 if i = j and 0 otherwise. As justified in earlier work (Hein et al. 2007; van Gennip and Bertozzi 2012; van Gennip et al. 2014), we introduce the following inner products,

⟨u, v⟩_V := Σ_{i∈V} d_i^r u_i v_i,  ⟨ϕ, ψ⟩_E := (1/2) Σ_{i,j∈V} ω_ij^{2q−1} ϕ_ij ψ_ij,

for u, v ∈ V and ϕ, ψ ∈ E, with parameters r ∈ [0, 1] and q ∈ [1/2, 1]. We define the gradient ∇ : V → E by, for all i, j ∈ V,

(∇u)_ij := ω_ij^{1−q} (u_j − u_i) if (i, j) ∈ E, and (∇u)_ij := 0 otherwise.

Note that ⟨·,·⟩_V is indeed an inner product on V if G has no isolated nodes (i.e. if d_i > 0 for all i ∈ V), as is the case for G ∈ G. Furthermore, ⟨·,·⟩_E is an inner product on E (since functions in E are either only defined on E or are required to be zero on V²\E, depending on whether we consider them as edge functions or as extended edge functions, as explained above).
Using these building blocks, we define the divergence as the adjoint of the gradient and the (graph) Laplacian as the divergence of the gradient, leading to, for all i ∈ V,

(div ϕ)_i := d_i^{−r} Σ_{j∈V} ω_ij^q ϕ_ji,  (Δu)_i := d_i^{−r} Σ_{j∈V} ω_ij (u_i − u_j),

as well as the following norms: ‖u‖_V := √⟨u, u⟩_V and ‖ϕ‖_E := √⟨ϕ, ϕ⟩_E. Note that we indeed have, for all u ∈ V and all ψ ∈ E,

⟨∇u, ψ⟩_E = ⟨u, div ψ⟩_V. (2)

For a function u ∈ V, we define its support as supp(u) := {i ∈ V : u_i ≠ 0}. For S ⊂ V we define vol(S) := Σ_{i∈S} d_i^r and, for u ∈ V, the mass M(u) := Σ_{i∈V} d_i^r u_i. Note that, if r = 0, then vol(S) = |S|, where |S| denotes the number of elements in S. Using (2), we find the useful property that, for all u ∈ V, M(Δu) = 0. For u ∈ V, define the average mass function of u as

A(u) := M(u)/vol(V). (5)

We also define the Dirichlet energy of a function u ∈ V,

(1/2) ‖∇u‖_E² = (1/4) Σ_{i,j∈V} ω_ij (u_i − u_j)², (6)

and the total variation of u ∈ V,

TV(u) := (1/2) Σ_{i,j∈V} ω_ij^q |u_i − u_j|.

Remark 2.1 We have introduced two parameters, q ∈ [1/2, 1] and r ∈ [0, 1], in our definitions so far. As we will see later in this paper, the choice q = 1 is the natural one for our purposes. In those cases where we do not require q = 1, however, we do keep the parameter q unspecified, because there are papers in the literature in which the choice q = 1/2 is made, such as Gilboa and Osher (2009). One reason for the choice q = 1/2 is that in that case ω_ij appears in the graph gradient, graph divergence, and graph total variation with the same power (1/2), allowing one to think of √ω_ij as analogous to a reciprocal distance. The parameter r is the more interesting one of the two, as the choices r = 0 and r = 1 lead to two different graph Laplacians that appear in the spectral graph theory literature under the names combinatorial (or unnormalized) graph Laplacian and random walk (or normalized, or non-symmetrically normalized) graph Laplacian, respectively. Many of the results in this paper hold for all r ∈ [0, 1], and we will clearly indicate whether and when further assumptions on r are required. We note that, besides the graph Laplacian, also the mass of a function depends on r, whereas the total variation of a function does not depend on r, but does depend on q. The Dirichlet energy depends on neither parameter.
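The definitions above can be made concrete in a few lines of code. The sketch below assumes the conventions of this section (weight matrix ω with zero diagonal, degrees d_i = Σ_j ω_ij); the prefactors in the Dirichlet energy and total variation follow our reading of the q- and r-dependence discussed in Remark 2.1, so treat them as illustrative rather than authoritative.

```python
import numpy as np

def graph_laplacian(W, r):
    """(Delta u)_i = d_i^{-r} sum_j w_ij (u_i - u_j), as a matrix:
    r = 0 gives the combinatorial Laplacian D - W,
    r = 1 the random walk Laplacian D^{-1}(D - W)."""
    d = W.sum(axis=1)
    return np.diag(d**(-r)) @ (np.diag(d) - W)

def dirichlet_energy(W, u):
    # (1/2)||grad u||_E^2 = (1/4) sum_{i,j} w_ij (u_i - u_j)^2;
    # note this depends on neither q nor r
    return 0.25 * np.sum(W * (u[:, None] - u[None, :])**2)

def total_variation(W, u, q=1.0):
    # TV(u) = (1/2) sum_{i,j} w_ij^q |u_i - u_j|; depends on q, not r
    return 0.5 * np.sum(W**q * np.abs(u[:, None] - u[None, :]))
```

On the unweighted path graph 1-2-3 with u = (0, 1, 1), both Laplacians annihilate constant functions, TV(u) = 1, and the Dirichlet energy equals 1/2.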
Unless we explicitly mention any further restrictions on q or r , only the conditions q ∈ [1/2, 1] and r ∈ [0, 1] are implicitly assumed.
Given a graph G = (V, E, ω) ∈ G, we define the following useful subsets of V: the subset of node functions with a given mass M ∈ R, V_M := {u ∈ V : M(u) = M}; the subset of nonnegative node functions, V⁺ := {u ∈ V : for all i ∈ V, u_i ≥ 0}; the subset of {0, 1}-valued binary node functions, V^b := {u ∈ V : for all i ∈ V, u_i ∈ {0, 1}}; the subset of {0, 1}-valued binary node functions with a given mass M ≥ 0, V^b_M := V^b ∩ V_M; and the subset of [0, 1]-valued node functions with a given mass M ≥ 0, K_M := {u ∈ V_M : for all i ∈ V, u_i ∈ [0, 1]}. The space of zero mass node functions, V_0, will play an important role, as it is the space of admissible 'right-hand side' functions in the Poisson problem (17). Note that every u ∈ V^b is of the form u = χ_S for some S ⊂ V.
Observe that, for M > vol(V), V^b_M = ∅. In fact, for a given finite graph there are only finitely many M ∈ [0, vol(V)] such that V^b_M ≠ ∅. For a given graph, we define the (finite) set of admissible masses as

M := {M(u) : u ∈ V^b}.

In Lemma S6.5 of Supplementary Materials we construct M for the example of a star graph.

Dirichlet and Poisson Equations
Lemma 3.1 (Comparison principle) Let G ∈ G, let S be a proper subset of V, and let u, v ∈ V be such that, for all i ∈ S, (Δu)_i ≥ (Δv)_i, and, for all i ∈ V\S, u_i ≥ v_i. Then, for all i ∈ V, u_i ≥ v_i.

Proof The result follows as a special case of the comparison principle for uniformly elliptic partial differential equations on graphs with Dirichlet boundary conditions in Manfredi et al. (2015, Theorem 1). For completeness (and future use in the proof of Lemma 6.13) we provide the proof of this special case here. In particular, we will prove that if w ∈ V is such that, for all i ∈ S, (Δw)_i ≥ 0, and, for all i ∈ V\S, w_i ≥ 0, then, for all i ∈ V, w_i ≥ 0. Applying this to w = u − v gives the desired result.

If S = ∅, the result follows trivially. In what follows we assume that S ≠ ∅. Define the set U := {i ∈ V : w_i = min_{j∈V} w_j}. Note that U ≠ ∅. For a proof by contradiction, assume min_{j∈V} w_j < 0; then U ⊂ S. By assumption S ≠ V, hence ∅ ≠ V\S ⊂ V\U. Let i* ∈ V\U. Since G is connected, there is a path from U to i*. Fix such a path, let k* be the first node along this path such that k* ∈ V\U, and let j* ∈ U be the node immediately preceding k* in the path. Then, for all k ∈ V, (∇w)_{kj*} ≤ 0, and (∇w)_{k*j*} < 0, hence (Δw)_{j*} < 0. Since j* ∈ S, this contradicts one of the assumptions on w; hence min_{i∈V} w_i ≥ 0 and the result is proven.
We will see a generalization of Lemma 3.1 as well as another comparison principle in Sect. 6.2, but their proofs require some groundwork which is interesting in its own right as well. That is the topic of Sect. 6.1.

Equilibrium Measures
Let G = (V, E, ω) ∈ G. Given a proper subset S ⊂ V, consider the equation

(Δν)_i = 1, if i ∈ S,  ν_i = 0, if i ∈ V\S. (12)

We recall some properties that are proven in Bendito et al. (2003, Section 2).
Lemma 3.2 The following results and properties hold:
1. The Laplacian Δ is positive semidefinite on V and positive definite on V_0.
2. The Laplacian satisfies a maximum principle on the set V⁺ of nonnegative node functions.
3. For each proper subset S ⊂ V, (12) has a unique solution in V. If ν^S is this solution, then ν^S ∈ V⁺ and supp(ν^S) = S.
4. If R ⊂ S are both proper subsets of V and ν^R, ν^S ∈ V⁺ are the corresponding solutions of (12), then ν^S ≥ ν^R.
Proof These properties are proven to hold in Bendito et al. (2003, Section 2) for r = 0; in Section S10.1 of Supplementary Materials, we give our own proofs for the general case in detail.
Using property 3 in Lemma 3.2, we can now define the concept of the equilibrium measure of a node subset S.
Definition 3.3 For any proper subset S ⊂ V, the equilibrium measure for S, ν^S, is the unique function in V⁺ which satisfies, for all i ∈ V, the equation in (12).
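Numerically, an equilibrium measure is obtained from one linear solve. The sketch below assumes that (12) takes the form suggested by the proof of Lemma 3.6, namely (Δν^S)_i = 1 for i ∈ S and ν_i^S = 0 for i ∈ V\S; the right-hand side normalization may differ from the paper's, so this is an illustration only.

```python
import numpy as np

def equilibrium_measure(W, S, r=0.0):
    """Solve (Delta nu)_i = 1 for i in S, nu_i = 0 off S, by
    restricting the Laplacian to the rows and columns indexed by S
    (a nonsingular system since S is a proper subset of the node
    set of a connected graph)."""
    d = W.sum(axis=1)
    L = np.diag(d**(-r)) @ (np.diag(d) - W)
    S = np.asarray(S)
    nu = np.zeros(W.shape[0])
    nu[S] = np.linalg.solve(L[np.ix_(S, S)], np.ones(len(S)))
    return nu
```

On the path graph 1-2-3, the monotonicity property ν^S ≥ ν^R for R ⊂ S (property 4 of Lemma 3.2) can be checked directly.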
In Lemmas S6.4 and S6.5 in Supplementary Materials, we construct equilibrium measures on a bipartite graph and a star graph, respectively.

Graph Curvature
We recall the concept of graph curvature, which was introduced in van Gennip et al. (2014, Section 3).

Definition 3.4 Let G ∈ G and S ⊂ V. Then we define the graph curvature of the set S by, for all i ∈ V,

(κ_S^{q,r})_i := d_i^{−r} Σ_{j∈V} ω_ij^q ((χ_S)_i − (χ_S)_j).

We are mainly interested in the case q = 1 in this paper and, in any given situation, if there are any restrictions on r ∈ [0, 1], they will be clear from the context. Hence, for notational simplicity, we will write κ_S := κ_S^{1,r}. For future use, we also define

κ_S⁺ := max_{i∈V} (κ_S)_i. (13)

The following lemma collects some useful properties of the graph curvature.
Lemma 3.5 Let G ∈ G, S ⊂ V, and let κ_S^{q,r} and κ_S be the graph curvatures from Definition 3.4. Then

TV(χ_S) = ⟨κ_S^{q,r}, χ_S⟩_V, (14)

and

Δχ_S = κ_S. (15)

Moreover, if κ_S⁺ is as in (13), then κ_S⁺ = max_{i∈S} (κ_S)_i.

Proof The properties in (14) and (15) are proven in van Gennip et al. (2014, Section 3) and can be checked by a direct computation. Note that the latter requires q = 1. The property for κ_S⁺ follows from the fact that κ_S is nonnegative on S and nonpositive on S^c.

We can use Lemma 3.1 to connect the equilibrium measures from (12) with the graph curvature.
Lemma 3.6 Assume G = (V, E, ω) ∈ G and let S be a proper subset of V. Let ν^S be the equilibrium measure for S from (12) and let κ_S be the graph curvature of S (for q = 1) and κ_S⁺ its maximum value, as in Definition 3.4. Then, for all i ∈ S,

ν_i^S ≥ (κ_S⁺)^{−1}.

Proof Define x := (κ_S⁺)^{−1} = (max_{i∈S} (κ_S)_i)^{−1}. Since G is connected and S is a proper subset of V, max_{i∈S} (κ_S)_i > 0, and hence x is well-defined. Using (15), we compute Δ(xχ_S) = xκ_S ≤ 1 on V (and in particular on S). Hence, for i ∈ S, (Δ(xχ_S))_i ≤ 1 = (Δν^S)_i. Furthermore, for i ∈ V\S, x(χ_S)_i = 0 = ν_i^S. Thus, by Lemma 3.1, for all i ∈ S, x = x(χ_S)_i ≤ ν_i^S.

We illustrate Lemma 3.6 with bipartite and star graph examples in Remark S6.6 in Supplementary Materials.

Green's Functions
Next we use the equilibrium measures to construct Green's functions for Dirichlet and Poisson problems, following the discussion in Bendito et al. (2003, Section 3); see also Bendito et al. (2000a) and Chung and Yau (2000).
All results in this section assume the context of a given graph G ∈ G. In this section and in some selected places later in the paper, we will also denote Green's functions by the symbol G. It will always be clear from the context whether G denotes a graph or a Green's function in any given situation.

Definition 3.7 For a given subset S ⊂ V, we denote by V(S) the set of all real-valued node functions whose domain is S. Note that V(V) = V.
Given a nonempty, proper subset S ⊂ V and a function f ∈ V(S), the (semihomogeneous) Dirichlet problem is to find u ∈ V such that, for all i ∈ V,

(Δu)_i = f_i, if i ∈ S,  u_i = 0, if i ∈ V\S. (16)

Given k ∈ V and f ∈ V_0, the Poisson problem is to find u ∈ V such that

Δu = f on V, and u_k = 0. (17)

Remark 3.8 Note that a general Dirichlet problem which prescribes u = g on V\S, for some g ∈ V(V\S), can be transformed into a semihomogeneous problem by considering the function u − g̃, where, for all i ∈ S, g̃_i := 0 and, for all i ∈ V\S, g̃_i := g_i.
Lemma 3.9 Let S ⊂ V be a nonempty, proper subset, and f ∈ V(S). Then the Dirichlet problem (16) has at most one solution. Similarly, given k ∈ V and f ∈ V_0, the Poisson problem (17) has at most one solution.
Proof Given two solutions u and v to the Dirichlet problem, we have Δ(u − v) = 0 on S and u − v = 0 on V\S. Since the graph is connected, this has the unique solution u − v = 0 on V (see the uniqueness proof in point 3 of Lemma 3.2, which uses the comparison principle of Lemma 3.1). A similar argument proves the result for the Poisson problem.
Next we will show that solutions to both the Dirichlet and Poisson problems exist, by explicitly constructing them using Green's functions.

Definition 3.10 Let S be a nonempty, proper subset of V. The function G : V × S → R is a Green's function for the Dirichlet equation, (16), if, for all f ∈ V(S), the function u ∈ V defined by, for all i ∈ V,

u_i := Σ_{j∈S} d_j^r G_ij f_j, (18)

solves (16). Similarly, the function G : V × V → R is a Green's function for the Poisson equation, (17), if, for all f ∈ V_0, (17) is satisfied by the function u ∈ V defined by, for all i ∈ V,

u_i := ⟨G_{i•}, f⟩_V = Σ_{j∈V} d_j^r G_ij f_j, (19)

where, for all i ∈ V, G_{i•} : V → R. In either case, for fixed j ∈ S (Dirichlet) or fixed j ∈ V (Poisson), we define the function G^j := G_{•j} : V → R.

Lemma 3.11 Let S be a nonempty, proper subset of V and let G : V × S → R. Then G is a Green's function for the Dirichlet equation, (16), if and only if, for all i ∈ V and for all j ∈ S,

d_j^r (ΔG^j)_i = δ_ij, if i ∈ S,  G^j_i = 0, if i ∈ V\S. (21)

Let G : V × V → R. Then G is a Green's function for the Poisson equation, (17), if and only if there is a q ∈ V which satisfies (22) and there is a C ∈ R such that G satisfies (23), for all i, j ∈ V.

Proof For the Dirichlet case, let u be given by (18); then, for all i ∈ S, (Δu)_i = Σ_{j∈S} d_j^r (ΔG^j)_i f_j. If the function G is a Green's function, then, for all f ∈ V(S) and for all i ∈ S, (Δu)_i = f_i. In particular, if we apply this to f = χ_{j} for j ∈ S, we find, for all i, j ∈ S, d_j^r (ΔG^j)_i = δ_ij. Moreover, for all f ∈ V(S) and for all i ∈ V\S, u_i = 0. Applying this again to f = χ_{j} for j ∈ S, we find, for all i ∈ V\S, d_j^r G^j_i = 0. Hence, for all i ∈ V\S and for all j ∈ S, G^j_i = 0. This gives us (21). Next assume G satisfies (21). By substituting G into (18), we find that u satisfies (16) and thus G is a Green's function.
Now we consider the Poisson case and we let u be given by (19). Let q satisfy (22); in other words, there is a q ∈ V such that the corresponding identity holds for all i ∈ V and for all j ∈ V. Using the expression for G_ii that we found above, we solve for G_ki. Combining the above, we find, for all i, j ∈ V, that G is of the form (23). Next assume G satisfies (23). By substituting G into (19), we find that u satisfies (17). In particular, remember that f ∈ V_0. Thus, since q does not depend on j, we have ⟨q, f⟩_V = 0 and, moreover, u_k = C M(f) = 0. Thus G is a Green's function.
Remark 3.12 Any choice of q in (23) consistent with (22) will lead to a valid Green's function for the Poisson equation and hence to the same (and only) solution u of the Poisson problem (17) via (19). We make the convenient choice of q given in (24). In Lemma 3.16, we will see that this choice of q leads to a symmetric Green's function. Also, any choice of C ∈ R in (23) leads to a valid Green's function.

Corollary 3.13 For a given nonempty, proper subset S ⊂ V, if there is a solution to (21), it is unique. Moreover, for given k ∈ V, q ∈ V_{−1}, and C ∈ R, if there is a solution to (23), it is unique.
Proof Let j ∈ S (or j ∈ V). If G^j and H^j both satisfy (21) [or (23)], then G^j − H^j satisfies a Dirichlet (or Poisson) problem of the form (16) [or (17)]. Hence, by a similar argument as in the proof of Lemma 3.9, G^j − H^j = 0.
For the following lemma, recall the definition of equilibrium measure from Definition 3.3.

Lemma 3.14 Let S be a nonempty, proper subset of V. The function G : V × S → R, defined, for all i ∈ V and all j ∈ S, by the expression in (26), is the Green's function for the Dirichlet equation, satisfying (21).
Proof This can be checked via direct computations.We provide the details in Section S10.2 of Supplementary Materials.
Remark 3.15 Let G be the Green's function from (27) for the Poisson equation. As shown in Lemma 3.14, G satisfies (23) with (24) and (25). Now let us try to find another Green's function satisfying (23) with (25) and with a different choice of q.
Fix k ∈ V and define q̃ ∈ V by, for all i ∈ V, q̃_i := q_i + d_k^{−r} δ_ik. Then, by (22), M(q̃) = 0. Hence, using (19) with the Green's function G, we find a function v ∈ V which satisfies Δv = q̃ and v_k = 0. Hence, for all i, j ∈ V, the stated identity holds, where the third equality follows from the computation above. Now let i, j ∈ S and use the equality above with u = G^j to deduce the symmetry of G on S. Next we consider the Poisson case with Green's function G : V × V → R, satisfying (23) with (24) and (25). Let k ∈ V and u ∈ V with u_k = 0. Then, similar to the computation above, for i ∈ V, we find an analogous identity, where we used (24). If we use the identity above with u = G^j, we obtain, for i, j ∈ V, the symmetry of G, where we have applied (24). If G̃ satisfies (23) with (24) and with C ≠ 0, then G̃ = G + C and hence G̃ is also symmetric.
The symmetry and support of the Green's functions are discussed in some more detail in Remarks S5.3 and S5.4 in Supplementary Materials.Section S2 of Supplementary Materials gives a random walk interpretation for the Green's function for the Poisson equation.

A Negative Graph Sobolev Norm and Ohta-Kawasaki
In analogy with the negative H^{−1} Sobolev norm (and underlying inner product) in the continuum (see, for example, Evans 2002; Adams and Fournier 2003; Brezis 1999), we introduce the graph H^{−1} inner product and norm.

Definition 4.1 The H^{−1} inner product of u, v ∈ V_0 is given by

⟨u, v⟩_{H^{−1}} := ⟨∇ϕ, ∇ψ⟩_E,

where ϕ, ψ ∈ V are any functions such that Δϕ = u and Δψ = v hold on V.

Remark 4.2
The zero mass conditions on u and v in Definition 4.1 are necessary and sufficient conditions for the solutions ϕ and ψ to the Poisson equations above to exist, as we have seen in Sect. 3.4. These solutions are unique up to an additive constant. Note that the choice of this constant does not influence the value of the inner product.
Remark 4.4 Note that for a connected graph the expression in Definition 4.1 indeed defines an inner product on V_0, as ⟨u, u⟩_{H^{−1}} = 0 implies that (∇ϕ)_ij = 0 for all i, j ∈ V for which ω_ij > 0. Hence, by connectivity, ϕ is constant on V and thus u = Δϕ = 0 on V.
The H^{−1} inner product then also gives us the H^{−1} norm, for u ∈ V_0,

‖u‖_{H^{−1}} := √⟨u, u⟩_{H^{−1}}.

Let k ∈ V. By (5), if u ∈ V, then u − A(u) ∈ V_0, and hence there exists a unique solution to the Poisson problem

Δϕ = u − A(u) on V, and ϕ_k = 0, (29)

which can be expressed using the Green's function from (27). We say that this solution ϕ solves (29) for u. Because the kernel of Δ contains only the constant functions, the solution ϕ for any other choice of k will only differ by an additive constant. Hence the norm is independent of the choice of k. Note also that this norm in general does depend on r, since ϕ does. Contrast this with the Dirichlet energy in (6), which is independent of r. The norm does not depend on q.
Using the Green's function expansion from (19) for ϕ, with G being the Green's function for the Poisson equation from (27), we can also write

‖u − A(u)‖²_{H^{−1}} = Σ_{i,j∈V} d_i^r d_j^r G_ij (u_i − A(u)) (u_j − A(u)).

Note that this expression seems to depend on the choice of k, via G, but by the discussion above we know in fact that it does not depend on k. This can also be seen as follows. A different choice for k leads to an additive constant change in the function G, which leaves the norm unchanged, since Σ_{i∈V} d_i^r (u_i − A(u)) = 0. Let W : R → R be the double-well potential defined by, for all x ∈ R,

W(x) := x² (x − 1)².

Note that W has wells of equal depth located at x = 0 and x = 1.
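A direct way to evaluate the H^{−1} norm numerically is to solve the Poisson equation with a pseudoinverse, in the spirit of Remark 4.8 below. This sketch (with r = 0 for simplicity) is our own illustration, not code from the paper.

```python
import numpy as np

def h_minus1_norm_sq(W, u):
    """||u||_{H^{-1}}^2 = <phi, u>_V with Delta phi = u, for a
    zero-mass u on a connected graph (r = 0, so <.,.>_V is the
    Euclidean inner product).  phi is obtained with the
    Moore-Penrose pseudoinverse; the additive constant in phi
    does not affect the result because u has zero mass."""
    d = W.sum(axis=1)
    L = np.diag(d) - W                   # combinatorial Laplacian
    assert abs(u.sum()) < 1e-10, "u must have zero mass when r = 0"
    phi = np.linalg.pinv(L) @ u          # solves L phi = u
    return float(phi @ u)
```

On the path graph 1-2-3 with u = (1, 0, −1), the solution of Δϕ = u with zero mean is ϕ = (1, 0, −1), so the squared norm is 2.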
Definition 4.5 For ε > 0, γ ≥ 0 and u ∈ V, we now define both the (epsilon) Ohta-Kawasaki functional (or diffuse-interface Ohta-Kawasaki functional), F_ε, and the limit Ohta-Kawasaki functional (or sharp-interface Ohta-Kawasaki functional), F_0. The nomenclature and notation are justified by the fact that F_0 [with its domain restricted to V^b, see (8)] is the Γ-limit of F_ε for ε → 0 (this is shown by a straightforward adaptation of the results and proofs in van Gennip and Bertozzi (2012, Section 3); see Section S3 in Supplementary Materials).
There are two minimization problems of interest here:

min {F_ε(u) : u ∈ V_M}, (33)

and

min {F_0(u) : u ∈ V^b_M}, (34)

for a given M ∈ R for the first problem and a given M ∈ M for the second. In this paper we will mostly be concerned with the second problem, (34).
In Lemma S6.7 in Supplementary Materials, we describe a useful symmetry of F 0 for the star graph example.

Ohta-Kawasaki in Spectral Form
Because of the role the graph Laplacian plays in the Ohta-Kawasaki energies, it is useful to consider its spectrum. As is well known (see, for example, Chung 1997; von Luxburg 2007; van Gennip et al. 2014), for any r ∈ [0, 1], the eigenvalues of Δ, which we will denote by

0 = λ_0 ≤ λ_1 ≤ ... ≤ λ_{n−1},

are real and nonnegative. The (algebraic and geometric) multiplicity of 0 as eigenvalue is equal to the number of connected components of the graph and the corresponding eigenspace is spanned by the indicator functions of those components. If G ∈ G, then G is connected, and thus, for all m ∈ {1, ..., n−1}, λ_m > 0. We consider a set of corresponding V-orthonormal eigenfunctions φ_m ∈ V, i.e., for all m, l ∈ {0, ..., n−1},

⟨φ_m, φ_l⟩_V = δ_ml,

where δ_ml denotes the Kronecker delta. Note that, since Δ and ⟨·,·⟩_V depend on r, but not on q, so do the eigenvalues λ_m and the eigenfunctions φ_m. For definiteness we choose

φ_0 := vol(V)^{−1/2} χ_V.

The eigenfunctions form a V-orthonormal basis for V, hence, for any u ∈ V, we have

u = Σ_{m=0}^{n−1} ⟨u, φ_m⟩_V φ_m. (38)

As an example, Laplacian eigenvalues and eigenfunctions for the star graph are given in Lemma S6.8 in Supplementary Materials.
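The V-orthonormal eigenpairs used above can be computed by symmetrizing the (for r > 0 generally non-symmetric) Laplacian. The similarity transform below is a standard trick and our own implementation choice, not taken from the paper.

```python
import numpy as np

def laplacian_spectrum(W, r=0.0):
    """Eigenvalues and V-orthonormal eigenfunctions of the graph
    Laplacian D^{-r}(D - W), via the symmetric matrix
    A = D^{-r/2}(D - W)D^{-r/2}: if A v = lambda v, then
    phi = D^{-r/2} v satisfies Delta phi = lambda phi, and the
    columns of the returned matrix are orthonormal in <.,.>_V."""
    d = W.sum(axis=1)
    Dh = np.diag(d**(-r / 2))
    lam, V = np.linalg.eigh(Dh @ (np.diag(d) - W) @ Dh)  # ascending order
    return lam, Dh @ V
```

For the unweighted triangle K3, the combinatorial (r = 0) spectrum is (0, 3, 3) and the random walk (r = 1) spectrum is (0, 3/2, 3/2), in line with the multiplicity statement above.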
The following result will be useful later.
and therefore, for all m ∈ {1, . . ., n − 1}, Substituting these expressions for a 0 and a m into the expansion of ϕ, we find that ϕ is as in (39).
Conversely, if ϕ is as in (39), a direct computation shows that ϕ = u − A(u).
Remark 4.8 From Lemma 4.7 we see that we can write ϕ = Δ†(u − A(u)), where Δ† is the Moore-Penrose pseudoinverse of Δ (Dresden 1920; Bjerhammar 1951; Penrose 1955).

Lemma 4.9 Let q ∈ [1/2, 1], S ⊂ V, and let κ_S^{q,r}, κ_S be the graph curvatures from Definition 3.4; then (41) and (42) hold.

Proof Using an expansion as in (38) for χ_S together with (14), we find the first expression, where the last equality follows from (38). Moreover, we use (15). If q = 1, so that κ_S^{q,r} = κ_S, then (41) and (42) follow.
Lemma 4.10 Let q ∈ [1/2, 1], S ⊂ V, and let κ_S be the graph curvature (with q = 1) from Definition 3.4; then (43) and (44) hold.

Proof As in (40) (with u replaced by χ_S), we have, for m ≥ 1, ⟨φ_m, A(χ_S)⟩_V = 0, and thus the result follows.

Remark 4.11 Note that ‖χ_S − A(χ_S)‖²_{H^{−1}} is independent of q and thus the results from Lemma 4.10 hold for all q ∈ [1/2, 1]. However, the formulation involving the graph curvature relies on (43) and thus on the identity (15), which holds for κ_S only, not for any κ_S^{q,r}. If q ≠ 1 this leads to the somewhat unnatural situation of using κ_S (which corresponds to the case q = 1) in a situation where q ≠ 1. Hence the curvature formulation in Lemma 4.10 is more natural, in this sense, when q = 1.

Corollary 4.12 Let q = 1, S ⊂ V, and let F_0 be the limit Ohta-Kawasaki functional from (32); then F_0(χ_S) is given by combining the expressions from Lemmas 4.9 and 4.10.

Proof This follows directly from the definition in (32) and Lemmas 4.9 and 4.10.
Lemma S6.9 in Supplementary Materials explicitly computes F 0 (χ S ) for the unweighted star graph example, which allows us, in Corollary S6.10, to solve the binary minimization problem (34) for this example graph.Remarks S6.11 and S6.12 provide further discussion on these results.
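To make Remark 4.8 concrete, the following sketch (our own, with r = 0, so that A(u) is the plain mean and M(ϕ) = Σ_i ϕ_i) computes ϕ = Δ†(u − A(u)) and the quantity ‖χ_S − A(χ_S)‖²_{H^{−1}} = ⟨χ_S − A(χ_S), ϕ⟩ via the Moore–Penrose pseudoinverse. The variable names are hypothetical.

```python
import numpy as np

# Unweighted star graph on 5 nodes (node 0 is the centre), r = 0.
n = 5
W = np.zeros((n, n))
W[0, 1:] = W[1:, 0] = 1.0
L_graph = np.diag(W.sum(axis=1)) - W

L_pinv = np.linalg.pinv(L_graph)          # Moore-Penrose pseudoinverse of Delta

chi_S = np.zeros(n)
chi_S[[1, 2]] = 1.0                       # indicator of the subset S = {1, 2}
u0 = chi_S - chi_S.mean()                 # chi_S - A(chi_S): the zero-mass part

phi = L_pinv @ u0                         # solves Delta phi = u0 with M(phi) = 0
assert np.allclose(L_graph @ phi, u0)     # phi solves (45) for chi_S
assert np.isclose(phi.sum(), 0.0)         # zero-mass condition M(phi) = 0

Hm1_sq = u0 @ phi                         # ||chi_S - A(chi_S)||_{H^{-1}}^2
assert Hm1_sq > 0.0
```

Since the pseudoinverse annihilates the constant functions (the kernel of Δ on a connected graph), the mean-zero condition on ϕ is satisfied automatically.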

The Graph Ohta-Kawasaki MBO Scheme
One way in which we can attempt to solve the F_ε minimization problem in (33) (and thus, approximately, the F_0 minimization problem in (34), in the Γ-convergence sense of Section S3 in Supplementary Materials) is via a gradient flow. In Section S4 in Supplementary Materials we derive gradient flows with respect to the V-inner product (which, if r = 0 and each u ∈ V is identified with a vector in R^n, is just the Euclidean inner product on R^n) and with respect to the H^{−1} inner product; these lead to graph Allen–Cahn and graph Cahn–Hilliard type systems of equations, respectively. In our simulations later in the paper, however, we do not use these gradient flows, but the MBO approximation.
Heuristically, graph MBO type schemes [originally introduced in the continuum setting in Merriman et al. (1992) and Merriman et al. (1993)] can be seen as approximations to graph Allen–Cahn type equations [as in (S1)], obtained by replacing the double-well potential term in that equation by a hard thresholding step. This leads to the algorithm (OKMBO). In the algorithm we use the set V_∞, which we define to be the set of all functions u : [0, ∞) × V → R which are continuously differentiable in their first argument (which we will typically denote by t). For such functions, we will use the notation u_i(t) := u(t, i). We note that where before u and ϕ denoted functions in V, here these same symbols are used to denote functions in V_∞.
For reasons that are explored in Remark S5.1 in Supplementary Materials, in the algorithm we use a variant of (29), given in (45); if ϕ ∈ V satisfies (45) for a given u ∈ V, we say ϕ solves (45) for u.
For a given γ ≥ 0, we define the operator L : V → V as follows. For u ∈ V, let Lu be given by (46), where ϕ ∈ V is the solution to (45).
Threshold step. Define the subset S^k ⊂ V to be S^k := {i ∈ V : u_i(τ) ≥ 1/2}.

Remark 5.1 Since L, as defined in (46), is a continuous linear operator from V to V (see (55)), by standard ODE theory (Hale 2009, Chapter 1; Coddington and Levinson 1984, Chapter 1) there exists a unique, continuously differentiable-in-t, solution u of (47) on (0, ∞) × V. In the threshold step of (OKMBO), however, we only require u(τ); hence it suffices to compute the solution u on (0, τ]. By standard ODE arguments (Hale 2009, Chapter III.4) we can write (and interpret) the solution of (47) as an exponential function: u(t) = e^{−tL} u_0.
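The two steps of (OKMBO) can be sketched as follows. This is a hedged reading of (45)–(47) for r = 0: we take Lu = Δu + γ Δ†(u − A(u)), solve the ODE step via the matrix exponential, and threshold u(τ) at 1/2. The function `okmbo` and the example graph are our own, not code from the paper.

```python
import numpy as np
from scipy.linalg import expm

def okmbo(W, S0, gamma, tau, max_iter=100):
    """A sketch of the (OKMBO) scheme with r = 0 (our reading of (45)-(47))."""
    n = W.shape[0]
    Delta = np.diag(W.sum(axis=1)) - W
    # L u = Delta u + gamma * phi, with phi = pinv(Delta)(u - A(u));
    # pinv(Delta) annihilates constants, so it already removes A(u).
    L_op = Delta + gamma * np.linalg.pinv(Delta)
    prop = expm(-tau * L_op)               # ODE step propagator e^{-tau L}
    S = np.asarray(S0, dtype=bool)
    for _ in range(max_iter):
        u_tau = prop @ S.astype(float)     # solve (47) up to time tau
        S_new = u_tau >= 0.5               # threshold step
        if np.array_equal(S_new, S):       # stationary state reached
            break
        S = S_new
    return S

# Example: a 6-node unweighted cycle with an interval as initial set.
n = 6
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0
S_final = okmbo(W, [1, 1, 1, 0, 0, 0], gamma=0.1, tau=0.4)
print(S_final.astype(int))
```

The loop exits as soon as an iterate repeats, which mirrors the finite-step stationarity established later in this section.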
The next lemma will come in handy later in the paper.
is decreasing. Moreover, if u is not constant on V, then the function in (48) is strictly decreasing.
Assume that, for all m ∈ {1, . . ., n − 1}, ⟨u, φ^m⟩_V = 0. Then, by the expansion in (38) and the expression in (37), u is constant. Thus, if u is not constant, then the function in (48) is strictly decreasing.
The proof of the mass conservation property follows very closely the proof of (van Gennip et al. 2014, Lemma 2.6(a)). Using (4) and (45), we find that (d/dt) M(u(t)) = 0.

The following lemma introduces a Lyapunov functional for the (OKMBO) scheme.
Then the functional J_τ is strictly concave and Fréchet differentiable, with directional derivative at u ∈ V in the direction v ∈ V given by (51), where K is as defined in (9). Moreover, J_τ is a Lyapunov functional for the (OKMBO) scheme in the sense that, for all k ∈ {1, . . ., N}, J_τ(χ_{S^k}) ≤ J_τ(χ_{S^{k−1}}), with equality if and only if S^k = S^{k−1}.

Proof This follows immediately from the proofs of (van Gennip et al. 2014, Lemma 4.5, Proposition 4.6) (which in turn were based on the continuum case established in Esedoḡlu and Otto 2015), as replacing Δ in those proofs by L does not invalidate any of the statements. It is useful, however, to reproduce the proof here, especially with an eye to incorporating a mass constraint into the (OKMBO) scheme in Sect. 5.3.
First let u, v ∈ V and s ∈ R; then we compute (d²/ds²) J_τ(u + sv) = −2⟨v, e^{−τL} v⟩_V, where we used that e^{−τL} is a self-adjoint operator. For v ≠ 0 this second derivative is negative, where the inequality follows for example from the spectral expansion in (49). Hence J_τ is strictly concave.
To construct a minimizer v for the linear functional dJ_τ(χ_{S^{k−1}}), we set v_i = 1 wherever (e^{−τL} χ_{S^{k−1}})_i > 1/2, v_i = 0 wherever (e^{−τL} χ_{S^{k−1}})_i < 1/2, and choose v_i ∈ {0, 1} arbitrarily where (e^{−τL} χ_{S^{k−1}})_i = 1/2. The sequence {S^k}_{k=1}^N generated in this way by setting S^k = {i ∈ V : v_i = 1} corresponds exactly to the sequence generated by (OKMBO).
Finally we note that, since J_τ is strictly concave, J_τ(χ_{S^{k+1}}) ≤ J_τ(χ_{S^k}) + dJ_τ(χ_{S^k})(χ_{S^{k+1}} − χ_{S^k}) ≤ J_τ(χ_{S^k}), where the last inequality follows because of (52). Clearly, if χ_{S^{k+1}} = χ_{S^k}, then J_τ(χ_{S^{k+1}}) − J_τ(χ_{S^k}) = 0.
Remark 5.4 It is worth elaborating briefly on the underlying reason why (52) is the right minimization problem to consider in the setting of Lemma 5.3. As is standard in sequential linear programming, the minimization of J_τ over K is attempted by approximating J_τ by its linearization and minimizing this linear approximation over all admissible u ∈ K.
We can use Lemma 5.3 to prove that the (OKMBO) scheme converges in a finite number of steps to a stationary state, in the sense of the following corollary.
Corollary 5.5 If {S^k}_{k=1}^N is a sequence generated by (OKMBO), then there is a K ≥ 0 such that, for all k ≥ K, S^k = S^K.

Proof If N ∈ N the statement is trivially true, so now assume N = ∞. Because |V| < ∞, there are only finitely many different possible subsets of V, hence there exist K, k ∈ N such that k > K and S^K = S^k. Hence the set {l ∈ N : S^K = S^{K+l}} is not empty, l* := min{l ∈ N : S^K = S^{K+l}} exists, and l* ≥ 1. If l* ≥ 2, then by Lemma 5.3 we know that J_τ(χ_{S^{K+1}}) < J_τ(χ_{S^K}) = J_τ(χ_{S^{K+l*}}), which contradicts the fact that J_τ does not increase along the iterates; hence l* = 1 and thus S^K = S^{K+1}. Because equation (47) has a unique solution (as noted in Remark 5.1), we have, for all k ≥ K, S^k = S^K. (Note that the arbitrary choice for those i for which (1 − 2 e^{−τL} χ_{S^{k−1}})_i = 0 introduces non-uniqueness into the minimization problem (52).)

Remark 5.6 For given τ > 0, the minimization problem (53) has a solution u ∈ V_b, because J_τ is strictly concave and K is compact and convex. This solution is not unique; for example, if ũ = χ_V − u, then, since e^{−τL} is self-adjoint, we have J_τ(ũ) = J_τ(u). Lemma 5.3 shows that J_τ does not increase in value along a sequence {S^k}_{k=1}^N of sets generated by the (OKMBO) algorithm, but this does not guarantee that (OKMBO) converges to the solution of the minimization problem in (53). In fact, we see in Lemma S5.10 and Remark S5.11 in Supplementary Materials that for every S^0 ⊂ V there is a value τ_ρ(S^0) such that S^1 = S^0 if τ < τ_ρ(S^0). Hence, unless S^0 happens to be a solution to (53), if τ < τ_ρ(S^0) the (OKMBO) algorithm will not converge to a solution. This observation and related issues concerning the minimization of J_τ will become important in Sect. 5.2; see for example Remarks 5.12 and 5.16.
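The Lyapunov property of Lemma 5.3 is easy to check numerically. In our own notation for r = 0, and under the same assumed reading of L as before, J_τ(u) = ⟨χ_V − u, e^{−τL} u⟩ should be non-increasing along the thresholding iterates; the sketch below verifies this on a random weighted graph.

```python
import numpy as np
from scipy.linalg import expm

n = 8
rng = np.random.default_rng(1)
W = rng.random((n, n))
W = np.triu(W, 1)
W = W + W.T                               # connected random weighted graph
Delta = np.diag(W.sum(axis=1)) - W
gamma, tau = 0.5, 0.3
prop = expm(-tau * (Delta + gamma * np.linalg.pinv(Delta)))   # e^{-tau L}

def J_tau(u):
    # Lyapunov functional J_tau(u) = <chi_V - u, e^{-tau L} u>  (r = 0)
    return (1.0 - u) @ (prop @ u)

u = (rng.random(n) < 0.5).astype(float)   # random binary initial condition
values = [J_tau(u)]
for _ in range(10):
    u = (prop @ u >= 0.5).astype(float)   # one (OKMBO) iteration
    values.append(J_tau(u))

# J_tau never increases along the iterates (Lemma 5.3).
assert np.all(np.diff(values) <= 1e-12)
```

The monotone decrease combines the strict concavity of J_τ with the fact that thresholding minimizes its linearization, exactly as in the proof above.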
We end this section with a look at the spectrum of L, which plays a role in our further study of (OKMBO). More information is given in Section S5.2 of Supplementary Materials. Moreover, in Section S5.3 we use this information about the spectrum to derive pinning and spreading bounds on the parameter τ in (OKMBO), along similar lines as corresponding results in van Gennip et al. (2014).
Remark 5.7 Remembering from Remark 4.8 the Moore–Penrose pseudoinverse of Δ, which we denote by Δ†, we see that the condition M(ϕ) = 0 in (45) allows us to write ϕ = Δ†(u − A(u)). In particular, if ϕ satisfies (45), then ϕ can be expanded in terms of the eigenvalues λ_m of Δ and the corresponding eigenfunctions φ^m, as in (35), (36). Hence, if we expand u as in (38) and L is the operator defined in (46), then Lu has the corresponding spectral form. In particular, L : V → V is a continuous, bounded, linear, self-adjoint operator, and for every c ∈ R, L(cχ_V) = 0. If, given a u_0 ∈ V, u ∈ V_∞ solves (47), then u(t) = e^{−tL} u_0. Note that the operator e^{−tL} is self-adjoint, because L is self-adjoint.
In the remainder of this paper we use the notation λ_m for the eigenvalues of Δ and Λ_m for the eigenvalues of L, with corresponding eigenfunctions φ^m, as in (35), (36), and Lemma 5.8.
Using an expansion as in (38) and the eigenfunctions and eigenvalues as in Lemma 5.8 in the main paper, we can write the solution to (47) as in (57).
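Concretely, with r = 0 and under our reading of Lemma 5.8 (eigenvalues Λ_0 = 0 and Λ_m = λ_m + γ/λ_m for m ≥ 1, with the same eigenfunctions as Δ; this identification is our assumption, not a quotation from the paper), the spectral form of the solution can be cross-checked against a direct matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

n, gamma, t = 6, 0.7, 0.25
rng = np.random.default_rng(2)
W = rng.random((n, n))
W = np.triu(W, 1)
W = W + W.T                               # random connected weighted graph
Delta = np.diag(W.sum(axis=1)) - W
lam, phi = np.linalg.eigh(Delta)          # lam[0] = 0; columns of phi = phi_m

Lam = np.zeros(n)
Lam[1:] = lam[1:] + gamma / lam[1:]       # assumed eigenvalues Lambda_m of L

u0 = rng.random(n)
a = phi.T @ u0                            # expansion coefficients <u0, phi_m>
u_spec = phi @ (np.exp(-t * Lam) * a)     # spectral form of u(t) = e^{-tL} u0

L_op = Delta + gamma * np.linalg.pinv(Delta)
u_mat = expm(-t * L_op) @ u0              # direct matrix exponential
assert np.allclose(u_spec, u_mat)
```

The agreement of the two computations illustrates that solving (47) reduces to damping each spectral coefficient of u_0 by e^{−tΛ_m}.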

Γ-Convergence of the Lyapunov Functional
In this section we prove that the functionals J̃_τ : K → R̄, defined for τ > 0 by (58), Γ-converge to F̃_0 : K → R̄ as τ → 0, where F̃_0 is defined by (59), with F_0 as in (32) with q = 1. We use the notation R̄ := R ∪ {−∞, +∞} for the extended real line. Remember that the set K was defined in (9) as the subset of all [0, 1]-valued functions in V. We note that J̃_τ = (1/τ) J_τ|_K, where J_τ is the Lyapunov functional from Lemma 5.3. Compare the results in this section with the continuum results in (Esedoḡlu and Otto 2015, Appendix).
In this section we will encounter different variants of the functional (1/τ)J_τ, namely J̃_τ, J̄_τ, and J̄^k_τ, and similar variants of F_0. The differences between these functionals are the domains on which they are defined and the parts of their domains on which they take finite values: J̃_τ is defined on all of K, while J̄_τ and J̄^k_τ (which will be defined later in this section) incorporate a mass constraint and a relaxed mass constraint in their domains, respectively. For technical reasons, we thought it prudent to distinguish these functionals through their notation, but intuitively they can be thought of as the same functional with different types of constraints (or lack thereof).
For sequences in V we use convergence in the V-norm, i.e. if {u_k}_{k∈N} ⊂ V and u ∈ V, then we say u_k → u as k → ∞ if ‖u_k − u‖_V → 0 as k → ∞. Note, however, that all norms on the finite-dimensional space V induce equivalent topologies, so different norms can be used without this affecting the convergence results in this section.
Then there exist an i ∈ V, an η > 0, and a K > 0 such that, for all k ≥ K, we have (u_k)_i ∈ (η, 1 − η).

Proof With J_τ the Lyapunov functional from Lemma 5.3, we compute, for τ > 0 and u ∈ V, the identity (60), where we used the mass conservation property from Lemma 5.2. Using the expansion in (38) and Lemma 4.6, we find (61). Now we prove (LB). Let {τ_k}_{k∈N} and {u_k}_{k∈N} ⊂ K be as stated in the theorem. Then, for all m ∈ {0, . . ., n − 1}, we have that (1 − e^{−τ_k Λ_m})/τ_k → Λ_m as k → ∞. Moreover, if u ∈ K ∩ V_b, then, combining the above with (44) (remember that q = 1) and Lemma 5.8, we find the required lower bound. Furthermore, since, for every k ∈ N, u_k ∈ K, we have that, for all i ∈ V, 0 ≤ (u_k)_i ≤ 1. Assume now that u ∈ K\V_b instead; then by Lemma 5.9 it follows that there are an i ∈ V and an η > 0 such that, for all k large enough, (u_k)_i ∈ (η, 1 − η). Thus, for all k large enough, the corresponding term is bounded from below by a positive constant divided by τ_k. Combining this with (60), we deduce that J̃_{τ_k}(u_k) → +∞, which completes the proof of (LB).
To prove (UB), first we note that, if u ∈ K\V_b, then F̃_0(u) = +∞ and the upper bound inequality is trivially satisfied. If instead u ∈ K ∩ V_b, then we define a so-called recovery sequence as follows: for all k ∈ N, u_k := u. We trivially have u_k → u, and the upper bound inequality follows by a similar calculation as in (61).
Theorem 5.11 (Equi-coercivity) Let G = (V, E, ω) ∈ G and γ ≥ 0. Let {τ_k}_{k∈N} be a sequence of positive real numbers such that τ_k → 0 as k → ∞, and let {u_k}_{k∈N} ⊂ K be a sequence for which there exists a C > 0 such that, for all k ∈ N, J̃_{τ_k}(u_k) ≤ C. Then there is a subsequence {u_{k_l}}_{l∈N} ⊂ {u_k}_{k∈N} and a u ∈ V_b such that u_{k_l} → u as l → ∞.
Proof Since, for all k ∈ N, we have u_k ∈ K, it follows that, for all k ∈ N, 0 ≤ ‖u_k‖_2 ≤ √n, where ‖·‖_2 denotes the usual Euclidean norm on R^n pulled back to V via the natural identification of each function in V with one and only one vector in R^n (thus, it is the norm ‖·‖_V if r = 0). By the Bolzano–Weierstrass theorem it follows that there is a subsequence {u_{k_l}}_{l∈N} converging to some u ∈ K. Next we compute the lower bound (63), where we used the mass conservation property from Lemma 5.2. Assume that u ∈ K\V_b; then there is an i ∈ V such that 0 < u_i < 1. Hence, by Lemma 5.9, there is a δ > 0 such that, for all l large enough, δ < (u_{k_l})_i < 1 − δ, and thus (64) holds. Let l be large enough such that C τ_{k_l} < d_i^r δ² and large enough such that (64) holds. Then we have arrived at a contradiction with (63), and thus we conclude that u ∈ V_b.
Remark 5.12 The computation in (62) shows that, for all τ > 0 and for all u ∈ K, J̃_τ(u) ≥ (1/τ)⟨χ_V − u, u⟩_V ≥ 0. Furthermore, since each term of the sum in the inner product is nonnegative, we have ⟨χ_V − u, u⟩_V = 0 if and only if u = 0 or u = χ_V. Hence we also have J̃_τ(u) = 0 if and only if u = 0 or u = χ_V. The minimization of J̃_τ over K is thus not a very interesting problem. Therefore we now extend our Γ-convergence and equi-coercivity results from above to incorporate a mass constraint.
As an aside, note that Lemma S5.10 and Remark S5.11 in Supplementary Materials guarantee that, for τ large enough and S^0 such that vol(S^0) ≠ (1/2) vol(V), the (OKMBO) algorithm converges in at most one step to the minimizer ∅ or the minimizer V.
Let M ∈ M, where M is the set of admissible masses as defined in (11). Remember from (10) that K_M is the set of [0, 1]-valued functions in V with mass equal to M. For τ > 0 we define the following functionals with restricted domain. Define J̄_τ : K_M → R̄ as the restriction J̄_τ := J̃_τ|_{K_M}, where J̃_τ is as defined above in (58). Also define F̄_0 := F̃_0|_{K_M}, where F̃_0 is as in (59), with q = 1. Note that, by definition, F̃_0, and thus F̄_0, do not assign a finite value to functions u that are not in V_b.
Theorem 5.13 Let G = (V, E, ω) ∈ G, q = 1, γ ≥ 0, and M ∈ M. Let {τ_k}_{k∈N} be a sequence of positive real numbers such that τ_k → 0 as k → ∞, and let u ∈ K_M. Then the lower bound (LB) holds for every converging sequence in K_M with limit u, and the upper bound (UB) holds for some sequence in K_M converging to u.

Proof We note that any converging sequence in K_M with limit u is also a converging sequence in K with limit u. Moreover, on K_M we have J̄_{τ_k} = J̃_{τ_k} and F̄_0 = F̃_0. Hence (LB) follows directly from (LB) in Theorem 5.10.
For (UB) we note that if we define, for all k ∈ N, u k := u, then trivially the mass constraint on u k is satisfied for all k ∈ N and the result follows by a proof analogous to that of (UB) in Theorem 5.10.
Finally, for the equi-coercivity result, we first note that by Theorem 5.11 we immediately get the existence of a subsequence {v k l } l∈N ⊂ {v k } k∈N which converges to some v ∈ K. Since the functional M is continuous with respect to V-convergence, we conclude that in fact v ∈ K M .
Remark 5.14 Note that, for τ > 0, M ∈ M, and u ∈ K_M, we have J̄_τ(u) = (1/τ)(M − ⟨u, e^{−τL} u⟩_V). Hence finding the minimizer of J̄_τ in K_M is equivalent to finding the maximizer of u ↦ ⟨u, e^{−τL} u⟩_V over K_M.

The following result shows that the Γ-convergence and equi-coercivity results still hold, even if the mass conditions are not strictly satisfied along the sequence.
Corollary 5.15 Let G = (V, E, ω) ∈ G, q = 1, and γ ≥ 0, and let C ⊂ M be a set of admissible masses. For each k ∈ N, let C_k ⊂ [0, ∞) be such that C ⊆ C_k, and define, for all k ∈ N, the corresponding constrained sets K^k_M and functionals J̄^k_{τ_k} and F̄^k_0. Let {τ_k}_{k∈N} be a sequence of positive real numbers such that τ_k → 0 as k → ∞. Then the results of Theorem 5.13 hold with J̄_{τ_k} and F̄_0 replaced by J̄^k_{τ_k} and F̄^k_0, respectively, and with the sequences {u_k}_{k∈N} and {v_k}_{k∈N} in (LB), (UB), and the equi-coercivity result taken in the sets K^k_M.

Proof The proof is a slightly tweaked version of the proof of Theorem 5.13. On K^k_M we have that J̄^k_{τ_k} = J̃_{τ_k} and F̄^k_0 = F̃_0. Hence (LB) follows from (LB) in Theorem 5.10. For (UB) we note that, since C ⊆ C_k for every k ∈ N, the recovery sequence defined by, for all k ∈ N, u_k := u, is admissible, and the proof follows as in the proof of Theorem 5.10. Finally, for the equi-coercivity result, we obtain a converging subsequence {v_{k_l}}_{l∈N} ⊂ {v_k}_{k∈N} with limit v ∈ K by Theorem 5.11. By continuity of M it follows that M(v) = lim_{l→∞} M(v_{k_l}).

Remark 5.16 By a standard Γ-convergence result (Maso 1993, Chapter 7; Braides 2002, Section 1.5) we conclude from Theorem 5.13 that (for fixed M ∈ M) minimizers of J̄_τ converge (up to a subsequence) to a minimizer of F̄_0 (with q = 1) when τ → 0.
By Lemma 5.3 we know that iterates of (OKMBO) solve (52) and decrease the value of J_τ, for fixed τ > 0 (and thus of J̃_τ). By Lemma S5.10 in Supplementary Materials, however, we know that when τ is sufficiently small, the (OKMBO) dynamics is pinned, in the sense that each iterate is equal to the initial condition. Hence, unless the initial condition is a minimizer of J_τ, for small enough τ the (OKMBO) algorithm does not generate minimizers of J_τ, and thus we cannot use Theorem 5.13 to conclude that solutions of (OKMBO) approximate minimizers of F_0 when τ → 0.
As an interesting aside that may provide an angle for future work, we note that it is not uncommon in sequential linear programming for the constraints (such as the constraint that the domain of J̃_τ consists of [0, 1]-valued functions only) to be an obstacle to convergence; compare for example the Zoutendijk method with the Topkis and Veinott method (Bazaraa et al. 1993, Chapter 10). An analogous relaxation of the constraints might be a worthwhile direction for alternative MBO type methods for minimization of functionals like J̃_τ. We will not follow that route in this paper. Instead, in the next section, we will look at a variant of (OKMBO) which conserves mass in each iteration.

A Mass Conserving Graph Ohta-Kawasaki MBO Scheme
In Sect. 5.2 we saw that, for given M ∈ M, any solution to the J_τ minimization problem over K_M, where J_τ is as in (50), is an approximate solution to the F_0 minimization problem in (34) (with q = 1), in the Γ-convergence sense discussed in Remark 5.16. We propose the (mcOKMBO) scheme described below to include the mass condition into the (OKMBO) scheme. As part of the algorithm we need a node relabelling function. For u ∈ V, let R_u : V → {1, . . ., n} be a bijection such that, for all i, j ∈ V, if R_u(i) < R_u(j), then u_i ≥ u_j. Note that such a function need not be unique, as it is possible that u_i = u_j while i ≠ j. Given a relabelling function R_u, we define the relabelled version of u, denoted by u^R ∈ V, by (66); in other words, R_u relabels the nodes in V with labels in {1, . . ., n} such that in the new labelling u^R is nonincreasing: u^R_1 ≥ u^R_2 ≥ · · · ≥ u^R_n. Because this will be of importance later in the paper, we introduce the new set V^ab_M of almost binary functions with prescribed mass M ≥ 0.

Algorithm (mcOKMBO):
The mass conserving graph Ohta–Kawasaki Merriman–Bence–Osher algorithm.

Data: A prescribed mass value M ∈ M, a function v^0 ∈ V^ab_M, a parameter r ∈ [0, 1], a parameter γ ≥ 0, a time step τ > 0, and the number of iterations N ∈ N ∪ {∞}.

Output: A sequence of functions {v^k}_{k=1}^N ⊂ V^ab_M, which is the (mcOKMBO) evolution of v^0.

for k = 1 to N, do

ODE step. Compute u ∈ V_∞ by solving (47), where u_0 = v^{k−1}.

Mass conserving threshold step. Let R_u be a relabelling function and u^R the relabelled version of u as in (66). Let i* be the unique i ∈ V such that assigning the value 1 to the first i relabelled nodes uses up mass at most M, while assigning it to the first i + 1 would exceed M; set v^k = 1 on the first i* relabelled nodes, assign the remaining mass to node i* + 1, and set v^k = 0 on all other nodes.

We see that the ODE step in (mcOKMBO) is as the ODE step in (OKMBO), using the outcome of the previous iteration as initial condition. However, the threshold step is significantly different. In creating the function v^k, it assigns the available mass to the nodes {1, . . ., i*} on which u has the highest value. Note that if r = 0, there is exactly enough mass to assign the value 1 to each node in {1, . . ., i*}, since we assumed that M ∈ M and each node contributes the same value to the mass via the factor d_i^r = 1. In this case we see that v^k_{i*+1} = 0. However, if r ∈ (0, 1], this is not necessarily the case, and it is possible to end up with a value in (0, 1) being assigned to node i* + 1. Of course there is no issue in evaluating F_0(v^k) for almost binary functions v^k, but strictly speaking an almost binary v^N cannot serve as an approximate solution to the F_0 minimization problem in (34), as it is not admissible. We can either accept that the qualifier "approximate" refers not only to approximate minimization, but also to the fact that v^N is binary when restricted to V\{i* + 1} but not necessarily on all of V, or we can apply a final thresholding step to v^N and set the value at node i* + 1 to either 0 or 1, depending on which choice leads to the lowest value of F_0 and/or the smallest deviation of the mass from the prescribed mass M.
In the latter case, the function will be binary, but the adherence to the mass constraint will be "approximate". We emphasize again that this is not an issue when r = 0 (or on a regular graph, i.e. a graph in which each node has the same degree). The case r = 0 is also the most interesting one, as the mass condition can be very restrictive when r ∈ (0, 1], especially on (weighted) graphs in which most nodes each have a different degree. When r ∈ (0, 1], our definition of (mcOKMBO) suggests the first interpretation of "approximate", i.e. we use v^N as is and accept that its value at node i* + 1 may be in (0, 1). All our numerical examples in Sect. 7 (and Section S9 in Supplementary Materials) use r = 0.
Note that the sequence {v^k}_{k=1}^N generated by the (mcOKMBO) scheme is not necessarily unique, as the relabelling function R_u in the mass conserving threshold step is not uniquely determined if there are two different nodes i, j ∈ V such that u_i = u_j. This non-uniqueness of R_u can lead to non-uniqueness in v^k if exchanging the labels R_u(i) and R_u(j) of those nodes leads to a different 'threshold node' i*. In practice, in our examples in Sect. 7 (and Section S9 in Supplementary Materials), we used the MATLAB function sort(•, 'descend') to order the nodes.
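For r = 0 and integer M, the mass conserving threshold step reduces to: sort the diffused values in descending order (the role of R_u; below we mirror MATLAB's sort(•, 'descend') with a stable argsort) and set the M nodes with the highest values to 1. This is our own minimal sketch, with ties broken by node index, which is one of the admissible choices.

```python
import numpy as np

def mc_threshold(u, M):
    """Mass conserving threshold step of (mcOKMBO) for r = 0 and integer M."""
    order = np.argsort(-u, kind="stable")   # descending order; stable on ties
    v = np.zeros_like(u, dtype=float)
    v[order[:M]] = 1.0                      # nodes with the M highest u-values
    return v

u = np.array([0.2, 0.9, 0.4, 0.9, 0.1])
v = mc_threshold(u, 3)
print(v)                                    # -> [0. 1. 1. 1. 0.]
assert v.sum() == 3.0                       # the mass M(v) = M is conserved
```

Unlike the plain (OKMBO) threshold at 1/2, this step always outputs a function of the prescribed mass, regardless of how much the ODE step has spread the values.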
Lemma 5.18 shows that some of the important properties of (OKMBO) from Lemma 5.3 and Corollary 5.5 also hold for (mcOKMBO).First we state an intermediate lemma.
Lemma 5.17 Let w * ∈ V satisfy the constraints in (67).Then w * is a minimizer for (67) if and only if for all i, j ∈ V , if z i < z j , then w * i = d r i or w * j = 0.
Proof See Section S10.3 in Supplementary Materials.
Lemma 5.18 Let {v^k}_{k=1}^N ⊂ V^ab_M be a sequence generated by (mcOKMBO). Then, for all k ∈ {1, . . ., N}, v^k solves the minimization problem (68), where dJ_τ is given in (51). Moreover, for all k ∈ {1, . . ., N}, J_τ(v^k) ≤ J_τ(v^{k−1}), and the sequence becomes stationary after finitely many iterations.

Proof For all i ∈ V, define z_i as indicated, so that the minimization problem (68) turns into (67). Hence, by Lemma 5.17, v* is a solution of (68) if and only if v* satisfies the constraints in (68) and, for all i, j ∈ V, if z_i < z_j, then v*_i = d_i^r or v*_j = 0. The function v^k generated by one iteration of the (mcOKMBO) algorithm satisfies these properties.
We note that (68) differs from (52) only in the set of admissible functions over which the minimization takes place. This difference does not necessitate any change in the proof of the second part of the lemma compared to the proof of the equivalent statements at the end of Lemma 5.3.
The final part of the lemma is trivially true if N ∈ N. Now assume N = ∞. The proof is similar to that of Corollary 5.5. In the current case, however, our functions v^k are not necessarily binary. We note that for each k, there is at most one node i(k) ∈ V at which v^k can take a value in (0, 1). For fixed k and i(k), there are only finitely many different possible functions that v^k restricted to V\{i(k)} can be. Because M(v^k) = M, this leads to finitely many possible values that v^k_{i(k)} can take. Since i(k) can be only one of finitely many (at most n) nodes, there are only finitely many possible functions that v^k can be. Hence the proof now follows as in Corollary 5.5.

Remark 5.19
Similar to what we saw in Remark 5.4 about (OKMBO), we note that (68) is a sequential linear programming approach to minimizing J_τ over K_M: instead of J_τ itself, its linear approximation is minimized over K_M. Remark S8.2 in Supplementary Materials discusses the behaviour of (mcOKMBO) at small and large τ. If τ is too small, pinning can occur, similar to, but for different reasons than, the pinning behaviour of (OKMBO) at small τ.

Special Classes of Graphs
There are certain classes of graphs on which the dynamics of equation (47) can be directly related to graph diffusion equations, in a way which we will make precise in Sect. 6.1. The tools which we develop in that section will again be used in Sect. 6.2 to prove additional comparison principles.

Graph Transformation
Definition 6.1 Let G = (V, E, ω) ∈ G. For all j ∈ V, let ν^{V\{j}} be the equilibrium measure which solves (12) for S = V\{j}, and define the functions f^j ∈ V as in (69). Now we introduce the classes of graphs C_γ (for γ > 0) and C specified below; for γ = 0, we define C_0 := G.

Remark 6.2 Let us have a closer look at the properties of graphs in these classes.

Lemma 6.3 Let the setting and notation be as in Definition 6.1. Then C ⊂ C_0 and, for all γ ≥ 0, the stated inclusions involving C_γ hold.

Proof The first two inclusions stated in the lemma follow immediately from the definitions of the sets involved. If γ = 0, then G ∈ C_γ in the final statement is trivially true. To prove it for γ ≠ 0, note that if G ∉ C, there are j ∈ V and i ∈ V\{j} for which the defining inequality fails; then, by definition of C_0, we have ω_ij > 0, and we can proceed as indicated.

Lemma S7.1 in Supplementary Materials shows C is not empty; in particular, unweighted star graphs with three or more nodes are in C. Remark S7.2 shows that C_0\C ≠ ∅. Lemma S7.3 and Remarks S7.4 and S7.6 give and illustrate different sufficient conditions for graphs to be in C or C_0, which are used in Corollary S7.5 to show that complete graphs are in C_0.
The following lemma hints at the reason for our interest in the functions f^j from (69).

Lemma 6.4 Let G = (V, E, ω) ∈ G, let j ∈ V, and let f^j ∈ V be as in (69). Then the function ϕ^j ∈ V, defined by (70), solves (45) for χ_{{j}}.
Corollary 6.5 Let G = (V, E, ω) ∈ G. Let λ_m and φ^m be the eigenvalues and corresponding eigenfunctions of the graph Laplacian Δ (with parameter r), as in (35), (36). Let j ∈ V. If ϕ^j ∈ V is as defined in (70), then, for all i ∈ V, the spectral representation (71) holds. In particular, if f^j is as in (69) and i ∈ V, then f^j_i ≥ 0 if and only if ϕ^j_i ≤ 0.

Proof Let j ∈ V. By Lemma 6.4 we know that ϕ^j solves (45) for χ_{{j}}. Then by (54) we can write ϕ^j_i, for all i ∈ V, in the form (71), where we used that (χ_{{j}})_k = δ_{jk}, with δ_{jk} the Kronecker delta.
The final statement follows from the definition of ϕ^j in Lemma 6.4, which shows that, for all i ∈ V, f^j_i ≥ 0 if and only if ϕ^j_i ≤ 0.

Corollary 6.6 Let G = (V, E, ω) ∈ G. For all j ∈ V, let ϕ^j be as in (70), let f^j be as in (69), and let ν^{V\{j}} be the equilibrium measure for V\{j} as in (12). If r = 0, then, for all i, j ∈ V, ϕ^j_i can be expressed explicitly in terms of f^j and ν^{V\{j}}.

Proof This follows immediately from (71), (70), and (69).
Remark 6.7 The result of Corollary 6.5 is not only an ingredient in the proof of Theorem 6.9, but can also be useful when testing numerically whether or not a graph is in C or in C 0 .
For i, j ∈ V we also compute (Δχ_{{j}})_i; hence, if i ≠ j, then ω_ij = −d_i^r (Δχ_{{j}})_i. Combining the above with (70), we find, for i ≠ j, the expression for ω̃_ij, where the inequality follows since G ∈ C_γ (note that for γ = 0 the inequality follows from the nonnegativity of ω). Moreover, if ω_ij > 0, then, by definition of C_γ, the inequality is strict, and thus ω̃_ij > 0. (This property is used to prove connectedness of G̃ in Theorem 6.9, which is why we did not define C_γ by a weaker condition.) If additionally G ∈ C, then, for i ≠ j, f^j_i ≥ 0 and thus, by (76), ω̃_ij ≥ ω_ij.

Lemma 6.8 suggests that, given a graph G ∈ C_γ with edge weights ω, we can construct a new graph G̃ with edge weights ω̃ as in (73), which are also nonnegative. The next theorem shows that, in fact, if r = 0, then this new graph is in G and the graph Laplacian Δ̃ on G̃ is related to L.

Theorem 6.9 Let γ ≥ 0 and let G = (V, E, ω) ∈ C_γ. Let L be as defined in (46). Let λ_m and φ^m be the eigenvalues and corresponding eigenfunctions of the graph Laplacian (with parameter r), as in (35), (36). Assume r = 0 and let ω̃ be as in (73). Let Ẽ ⊂ V² contain an undirected edge (i, j) between i ∈ V and j ∈ V if and only if ω̃_ij > 0. Then G̃ = (V, Ẽ, ω̃) ∈ G. Let Δ̃ be the graph Laplacian (with parameter r̃) on G̃. If r̃ = 0, then Δ̃ = L.

Proof In the following it is instructive to keep r, r̃ ∈ [0, 1] as unspecified parameters in the proof and point out explicitly where the assumptions r = 0 and r̃ = 0 are used. From the definition of ω̃_ij in (73) it follows directly that G̃ has no self-loops (ω̃_ii = 0). Moreover, using r = 0 in (73), we see that ω̃_ij = ω̃_ji and thus G̃ is undirected. Furthermore, by Lemma 6.8 we know that, for all i, j ∈ V, if ω_ij > 0, then ω̃_ij > 0. Thus G̃ is connected, because G is connected. Hence G̃ ∈ G.
Repeating the computation from (75) for Δ̃ instead of Δ, we find, for i, j ∈ V, the analogous expression (77), where d̃_i := Σ_{j∈V} ω̃_ij. Combining this with (74), we find that, if j ∈ V and i ∈ V\{j}, then (78) holds. By (74) and (77) with i = j, we then have (79). Now we use r̃ = 0 in (78) and (79) to deduce that, for all j ∈ V, Δ̃χ_{{j}} = Lχ_{{j}}. Since {χ_{{i}} ∈ V : i ∈ V} is a basis for the vector space V, we conclude Δ̃ = L.

Remark 6.10 In the proof of Theorem 6.9, we can trace the roles that r and r̃ play. We only used the assumption r = 0 in order to deduce that G̃ is undirected. The assumption r̃ = 0 is necessary to obtain equality between Δ̃ and L in equations (78) and (79). These assumptions on r and r̃ have a further interesting consequence. Since the graphs G and G̃ have the same node set, both graphs also have the same associated set of node functions V. Moreover, since r = r̃ = 0, the V-inner product is the same for both graphs. Hence we can view V corresponding to G as the same inner product space as V corresponding to G̃. In this setting the operator equality Δ̃ = L from Theorem 6.9 holds not only between operators on the vector space V, but also between operators on the inner product space V.

Lemma 6.11 Let γ ≥ 0, q = 1, and let G = (V, E, ω) ∈ C_γ. Assume r = 0. Let ω̃ be as in (73) and Ẽ as in Theorem 6.9. Let r̃ be the r-parameter corresponding to the graph G̃ = (V, Ẽ, ω̃). Suppose S ⊂ V, F_0 is as in (32), d̃_i := Σ_{j∈V} ω̃_ij for all i ∈ V, and κ̃_S is the graph curvature of S as in Definition 3.4 corresponding to ω̃. Then F_0(χ_S) = Σ_{i,j∈S} ω̃_ij. Moreover, if r̃ = 0, then F_0(χ_S) = Σ_{i∈S} (d̃_i − (κ̃_S)_i).
Proof From Corollary 4.12 and (56) we find F_0(χ_S) = Σ_{i,j∈S} ω̃_ij, where we used that r = 0. Moreover, if r̃ = 0, then Σ_{i,j∈S} ω̃_ij = Σ_{i∈S} (Σ_{j∈V} ω̃_ij − Σ_{j∈V\S} ω̃_ij) = Σ_{i∈S} (d̃_i − (κ̃_S)_i).
Lemma S7.7 in Supplementary Materials gives upper and lower bounds on ω̃ − ω in terms of the Laplacian eigenvalues and eigenfunctions. Remarks S7.8 and S7.9 interpret these conditions in terms of the algebraic connectivity of the graph and use them to give some intuition about the (mcOKMBO) dynamics. Lemma S7.10 and Remarks S7.11 and S7.12 use the star graph to illustrate the results from this section.

More Comparison Principles
Theorem 6.9 tells us that, if γ ≥ 0 is such that G ∈ C_γ and if r = 0, then the dynamics in (47) can be viewed as graph diffusion on a new graph with the same node set as the original graph G, but a different edge set and weights. We can use this to prove that properties of Δ also hold for L on such graphs. Note that, when γ = 0, L = Δ, so this can be viewed as a generalization of results for Δ to L.
In this section, we prove a generalization of Lemma 3.1 and a generalization of the comparison principle in (van Gennip et al. 2014, Lemma 2.6(d)). In fact, despite the new graph construction in Theorem 6.9 requiring r = 0 for symmetry reasons (see Remark 6.10), the crucial ingredient that allows these generalizations is that G ∈ C_γ; the assumption on r is not required. We will also see a counterexample illustrating that this generalization does not extend (at least not without further assumptions) to graphs that are not in C_γ. Lemma 6.12 gives a result which we need to prove the comparison principles in Lemmas 6.13 and 6.15.

Lemma 6.12 Let γ ≥ 0, G = (V, E, ω) ∈ C_γ, w ∈ V, and let i* ∈ V be such that w_{i*} = min_{i∈V} w_i. Let ϕ ∈ V solve (45) for w. Then ϕ_{i*} ≤ 0.
Lemma 6.13 (Generalization of comparison principle I) Let γ ≥ 0, G = (V, E, ω) ∈ C_γ, and let V′ be a proper subset of V. Assume that u, v ∈ V are such that, for all i ∈ V′, (Lu)_i ≥ (Lv)_i and, for all i ∈ V\V′, u_i ≥ v_i. Then, for all i ∈ V, u_i ≥ v_i.
Proof When r = 0, we know that L = Δ̃, where Δ̃ is the graph Laplacian on the graph G̃, in the notation from Theorem 6.9. Because G and G̃ have the same node set V, the result follows immediately by applying Lemma 3.1 to Δ̃. We will, however, prove the generalization for any r ∈ [0, 1].
Let the situation and notation be as in the proof of Lemma 3.1, with the exception that now, for all i ∈ V, (Lw)_i ≥ 0 (instead of (Δw)_i ≥ 0). Let ϕ ∈ V be such that Δϕ = w − A(w) and M(ϕ) = 0. Proceed with the proof in the same way as the proof of Lemma 3.1, up to and including the assumption that min_{j∈V} w_j < 0 and the subsequent construction of the path from U to i* and the special nodes j* and k* on this path. Then, as in that proof, we know that (Δw)_{j*} < 0. Moreover, since w_{j*} = min_{i∈V} w_i, we know by Lemma 6.12 that ϕ_{j*} ≤ 0. Hence, for all γ ≥ 0, (Lw)_{j*} < 0. This contradicts the assumption that, for all i ∈ V, (Lw)_i ≥ 0; hence min_{i∈V} w_i ≥ 0 and the result is proven.
Proof Define w := ũ − u; then, for all i ∈ V, w_i ≥ 0 and w_{i*} = 0. Let ϕ ∈ 𝒱 solve (45) for w. Since w_{i*} = min_{i∈V} w_i, we have by Lemma 6.12 that ϕ_{i*} ≤ 0.

Lemma 6.15 (Generalization of comparison principle II) Let γ ≥ 0, G = (V, E, ω) ∈ C_γ, and let u, v ∈ 𝒱_∞ be solutions to (47), with initial conditions u_0 and v_0, respectively. If, for all i ∈ V, (u_0)_i ≤ (v_0)_i, then, for all t ≥ 0 and for all i ∈ V, u_i(t) ≤ v_i(t).
Proof If r = 0, we note that, by Theorem 6.9, L can be rewritten as a graph Laplacian on a new graph G̃ with the same node set V. The result in (van Gennip et al. 2014, Lemma 2.6(d)) shows that the desired conclusion holds for graph Laplacians (i.e. when γ = 0), and thus we can apply it to the graph Laplacian on G̃ to obtain the result for L on G.
In the general case when r ∈ [0, 1], Corollary 6.14 tells us that L satisfies the condition which is called W+ in Szarski (1965, Section 4). Since, for a given initial condition, the solution to (47) is unique, the result now follows by applying (Szarski 1965, Theorem 9.3 or Theorem 9.4).

Corollary 6.16 Let γ ≥ 0, G = (V, E, ω) ∈ C_γ, and let w ∈ 𝒱_∞ be a solution to (47) with initial condition w_0 ∈ 𝒱. Let c_1, c_2 ∈ R be such that, for all i ∈ V, c_1 ≤ (w_0)_i ≤ c_2. Then, for all t ≥ 0 and for all i ∈ V, c_1 ≤ w_i(t) ≤ c_2.
Proof First note that c 1 and c 2 always exist, since V is finite.
If u ∈ 𝒱_∞ solves (47) with initial condition u_0 = c_1 χ_V ∈ 𝒱, then, for all t ≥ 0, u(t) = c_1 χ_V. Applying Lemma 6.15 with v_0 = w_0 and v = w, we obtain that, for all t ≥ 0 and for all i ∈ V, w_i(t) ≥ c_1. Similarly, if v ∈ 𝒱_∞ solves (47) with initial condition v_0 = c_2 χ_V ∈ 𝒱, then, for all t ≥ 0, v(t) = c_2 χ_V. Hence, Lemma 6.15 with u_0 = w_0 and u = w tells us that, for all t ≥ 0 and for all i ∈ V, w_i(t) ≤ c_2.
The final statement follows by noting that, for all i ∈ V, −‖w_0‖_{𝒱,∞} ≤ (w_0)_i ≤ ‖w_0‖_{𝒱,∞}.
Remark 6.17 Numerical simulations show that, when G ∉ C_γ, the results from Corollary 6.16 do not necessarily hold for all t > 0. For example, consider an unweighted 4-regular graph (in the notation of Section S9.1 in Supplementary Materials we take the graph G_torus(900)) with r = 0 and γ = 0.7. We compute min_{i,j∈V} (d_i^{−r} ω_{ij} + γ d_j^{r} vol(V) f_{ji}) ≈ −0.1906 in MATLAB using (70), (71), so the graph is not in C_{0.7}. Computing v(0.01) = e^{−0.01L} v_0, where v_0 is a {0, 1}-valued initial condition, we find min_{i∈V} v_i(0.01) ≈ −0.0033 < 0 and max_{i∈V} v_i(0.01) ≈ 1.0033 > 1. Hence the conclusions of Corollary 6.16 do not hold in this case.
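The range check in Remark 6.17 is straightforward to reproduce in outline. The sketch below uses a small randomly weighted graph with γ = 0 rather than the paper's G_torus(900) with γ = 0.7, so here Corollary 6.16 applies and the evolved state stays inside [0, 1]; repeating the same check with the operator L of (47) for γ = 0.7 on G_torus(900) would produce the out-of-range values reported above. All names and sizes are illustrative.

```python
import numpy as np
from scipy.linalg import expm

# Range check in the spirit of Remark 6.17, on a small illustrative
# graph (not the paper's G_torus(900)).  With gamma = 0 the relevant
# operator is the plain graph Laplacian, Corollary 6.16 applies, and
# v(t) = exp(-tL) v0 must stay within [0, 1].
n = 6
rng = np.random.default_rng(1)
W = rng.random((n, n))
W = np.triu(W, 1) + np.triu(W, 1).T    # symmetric weights, zero diagonal
L = np.diag(W.sum(axis=1)) - W

v0 = np.zeros(n)
v0[:3] = 1.0                           # {0,1}-valued initial condition

v = expm(-0.01 * L) @ v0               # v(0.01)
print(v.min(), v.max())                # both remain within [0, 1]
```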
We can use the result from Corollary 6.16 to prove a second pinning bound, in the vein of Lemma S5.10, for graphs in C γ ; see Lemma S8.1 in Supplementary Materials.

Numerical Implementations
We implement (mcOKMBO) (in MATLAB version 9.2.0.538062 (R2017a)) by computing the eigenvalues λ_m with their corresponding eigenfunctions and then using the spectral expansion from (57) to solve (47). This is similar in spirit to the spectral expansion methods used in, for example, Bertozzi and Flenner (2012) and Calatroni et al. (2017). However, in those papers an iterative method is used to deal with additional terms in the equation; here, we can deal with the operator L in (47) in one go. Note that in other applications of spectral expansion methods, such as those in Bertozzi and Flenner (2016), sometimes only a subset of the eigenvalues and corresponding eigenfunctions is used. When n is very large, computation time can be saved, often without a great loss of accuracy, by using a truncated version of (38) which uses only the K ≪ n smallest eigenvalues λ_m with their corresponding eigenfunctions. The examples we show in this paper (and Supplementary Materials) are small enough that such an approximation was not necessary, but it might be considered if the method is to be run on large graphs.
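For concreteness, the spectral expansion strategy described above can be sketched as follows in Python (the paper's implementation is in MATLAB). We assume L is symmetric, so an orthogonal eigendecomposition applies; the function name and the optional truncation parameter K are illustrative, not the paper's notation.

```python
import numpy as np

# Sketch of the spectral expansion approach: diagonalize L once, then
# evaluate v(t) = exp(-tL) v0 through the eigenpairs, optionally
# truncating to the K smallest eigenvalues for large graphs.  Assumes L
# is symmetric; names are illustrative.
def spectral_solve(L, v0, t, K=None):
    lam, phi = np.linalg.eigh(L)       # eigenvalues (ascending), eigenvectors
    if K is not None:                  # truncated expansion: K smallest modes
        lam, phi = lam[:K], phi[:, :K]
    coeffs = phi.T @ v0                # coefficients <v0, phi_m>
    return phi @ (np.exp(-t * lam) * coeffs)

# Small path-graph example.
n = 5
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W
v0 = np.arange(n, dtype=float)
v = spectral_solve(L, v0, t=0.5)
```

Because the full expansion is exact, it satisfies the semigroup property: evolving twice for time t/2 agrees with evolving once for time t, which gives a simple correctness check.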
Figure 1 shows the initial conditions and final states for three runs of (mcOKMBO) on a two-moon graph G_moons, with different values for γ. Figure 2 shows the corresponding values of J_τ(v_k) and F_0(v_k) as a function of the iteration number k. In each case, the algorithm was terminated when v_k = v_{k−1}, which is why in each plot in Fig. 2 the final two values are the same.
As expected from Lemma 5.18, J_τ decreases along the iterates. By and large F_0 also decreases, although Fig. 2d shows that this is not always the case; note also that the value at the final iterate is not guaranteed to be the minimum value among all iterates (although in our tests it is always close, if not equal, to that minimum; see the figures in Section S9 in Supplementary Materials).
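To make the structure of the iteration concrete, here is a minimal Python sketch of one (mcOKMBO)-style step: diffusion followed by a rank-based threshold that conserves mass. It assumes γ = 0 (so the diffusion operator is the plain graph Laplacian) on an unweighted graph with r = 0, so that the mass of a {0, 1}-valued state is simply its number of ones; this is a simplified reading of the scheme, not a verbatim transcription of the paper's algorithm.

```python
import numpy as np
from scipy.linalg import expm

# One mass-conserving MBO step, in the spirit of (mcOKMBO): diffuse the
# current {0,1}-valued state for time tau, then set to 1 the M nodes
# with the largest diffused values.  Simplified sketch: gamma = 0,
# unweighted graph, r = 0 (so mass = number of ones); not a verbatim
# transcription of the paper's algorithm.
def mcmbo_step(L, v, tau, M):
    u = expm(-tau * L) @ v             # diffusion step
    v_new = np.zeros_like(v)
    v_new[np.argsort(-u)[:M]] = 1.0    # threshold by rank, conserving mass M
    return v_new

# Small path-graph example.
n = 6
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
L = np.diag(W.sum(axis=1)) - W
v = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])
v_next = mcmbo_step(L, v, tau=1.0, M=3)
```

In practice such a step would be repeated until v_k = v_{k−1}, matching the stopping rule used for the runs discussed above.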
In Section S9 of Supplementary Materials we provide additional results obtained by running (mcOKMBO) on various graphs, as well as in-depth discussions of those results and of the choice of τ, the initial condition, and the other parameters (γ, q, r, M, N) in the graph Ohta–Kawasaki model and the (mcOKMBO) algorithm.

Discussion and Future Work
In this paper we presented three main results: the Lyapunov functionals associated with the (mass conserving) Ohta–Kawasaki MBO schemes Γ-converge to the sharp-interface Ohta–Kawasaki functional; there exists a class of graphs on which this MBO scheme can be interpreted as a standard graph MBO scheme on a transformed graph (and for which additional comparison principles hold); and the mass conserving Ohta–Kawasaki MBO scheme works well in practice when used to minimize the sharp-interface graph Ohta–Kawasaki functional under a mass constraint. Along the way we have also further developed the theory of PDE inspired graph problems and added to the theoretical underpinnings of this field.
Future research on the graph Ohta–Kawasaki functional can mirror the research on the continuum Ohta–Kawasaki functional and attempt to prove the existence of certain structures in minimizers on certain graphs, analogous to structures such as lamellae and droplets in the continuum case. The numerical methods presented in this paper might also prove useful for simulations of minimizers of the continuum functional.
The Γ-convergence results presented in this paper also fit in well with the ongoing programme, started in van Gennip et al. (2014), aimed at improving our understanding of how various PDE inspired graph-based processes, such as the graph MBO scheme, the graph Allen–Cahn equation, and graph mean curvature flow, are connected.
One of the initial hopes for the graph Ohta–Kawasaki functional when starting this research was that it might be helpful for detecting particular structures in graphs (similar to how the graph Ginzburg–Landau functional can be used to detect cluster structures (Bertozzi and Flenner 2012) and how the signless graph Ginzburg–Landau functional detects bipartite structures (Keetch and van Gennip in prep)). So far this line of research has not yielded concrete results, but it is worth keeping in mind as a potential application, if such a structure can be identified.
We thank the anonymous referee of the first draft of this paper for the suggestion that the mass conserving MBO scheme can be useful for data clustering with prescribed cluster sizes.It would be interesting to pursue this idea in future research.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/),which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
The minimum and maximum degrees of the graph are defined as d_− := min_{1≤i≤n} d_i and d_+ := max_{1≤i≤n} d_i, respectively.
will lead to a valid Green's function for the Poisson equation. A function G satisfies (23) with C = C̃ ∈ R if and only if G − C̃ satisfies (23) with C = 0. Hence, in Lemma 3.14, we give a Green's function for the Poisson equation satisfying (23) with (24) (and any choice of C).

Remark 4.3 It is useful to realize that we can rewrite the inner product from Definition 4.1 as

Fig. 1 Initial (left column) and final (right column) states of Algorithm (mcOKMBO) applied to G_moons with r = 0, M = 300, τ = 1, with a different value of γ in each row. The initial conditions are eigenfunction based in the sense of option (c) in Section S9.3 in Supplementary Materials. The values of F_0 at the final iterates are approximately 109.48 (top row), 230.48 (middle row), and 626.89 (bottom row). a Initial condition for γ = 0.1, b Final iterate (k = 21) for γ = 0.1, c Initial condition for γ = 1, d Final iterate (k = 9) for γ = 1, e Initial condition for γ = 10, f Final iterate (k = 7) for γ = 10

Fig. 2 Plots of J_1(v_k) (left column) and F_0(v_k) (right column) for the applications of (mcOKMBO) corresponding to Fig. 1. a Plot of J_1(v_k) for γ = 0.1, b Plot of F_0(v_k) for γ = 0.1, c Plot of J_1(v_k) for γ = 1, d Plot of F_0(v_k) for γ = 1, e Plot of J_1(v_k) for γ = 10, f Plot of F_0(v_k) for γ = 10