A Note on the Majority Dynamics in Inhomogeneous Random Graphs

In this note, we study discrete-time majority dynamics over an inhomogeneous random graph G obtained by including each edge e of the complete graph K_n independently with probability p_n(e). Each vertex is independently assigned an initial state +1 (with probability p_+) or −1 (with probability 1 − p_+), and updates its state at each time step following the majority of its neighbors' states.
Under some regularity and density conditions on the edge probability sequence, if p_+ is smaller than a threshold, then G displays the unanimous state −1 asymptotically almost surely, meaning that the probability of reaching consensus tends to one as n → ∞. The consensus-reaching process differs markedly with respect to the initial state assignment probability: in a dense random graph p_+ can be near one half, while in a sparse random graph p_+ has to be vanishing. The size of a dynamic monopoly in G is also discussed.


Introduction
Majority dynamics is a discrete-time deterministic process over a graph G with n vertices V = {1, 2, ..., n}, where each vertex i ∈ V holds a state C_t(i) ∈ {−1, +1} at time step t ≥ 0. The state configuration of G at time t can be represented as a mapping C_t : V → {−1, +1}. Given an initial configuration C_0, the process evolves following the synchronous majority rule

C_{t+1}(i) = sgn( Σ_{j∈N(i)} C_t(j) ) if Σ_{j∈N(i)} C_t(j) ≠ 0, and C_{t+1}(i) = C_t(i) otherwise, (1.1)

where N(i) denotes the set of neighbors of i. This can be viewed as a model of information spreading in social networks, where each individual defers to the majority of its neighbors and keeps its own opinion in case of a tie. Majority dynamics (1.1) is the noiseless special case of the well-known majority-vote model in statistical physics [1][2][3], where each voter chooses the minority state of its neighbors with probability q (interpreted as noise or temperature) and the majority state with probability 1 − q. Thresholds and phase transitions with respect to noise and other order parameters are the focus of these studies, mainly via mean-field approximation. Another closely related model is majority bootstrap percolation [4,5], where each vertex has one of two colors, red (informed) or blue (uninformed); blue vertices update their color according to the majority rule, while red vertices invariably retain their color. Majority bootstrap percolation can be used to model monotone processes such as infection and rumor diffusion, which is essentially different from majority dynamics, where a vertex may change its state many times. For a finite graph, beginning with any initial configuration C_0, the process of majority dynamics will become recurrent at some point, and interestingly, it is shown in [6] that the period is at most 2 once t is sufficiently large.
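The update rule (1.1) can be illustrated with a short simulation (an illustrative sketch, not from the paper; the adjacency-list representation and function names are our own):

```python
def majority_step(adj, state):
    """One synchronous round of majority dynamics (1.1): each vertex adopts
    the majority state of its neighbors, keeping its current state in case
    of a tie (or when it has no neighbors)."""
    new_state = {}
    for i, neighbors in adj.items():
        s = sum(state[j] for j in neighbors)
        new_state[i] = state[i] if s == 0 else (1 if s > 0 else -1)
    return new_state

def run_majority_dynamics(adj, state, max_rounds=100):
    """Iterate until the configuration stops changing (a fixed point) or
    max_rounds is reached; returns the final configuration.  The cap also
    guards against the period-2 orbits mentioned in [6]."""
    for _ in range(max_rounds):
        nxt = majority_step(adj, state)
        if nxt == state:
            break
        state = nxt
    return state

# A 4-cycle with a single +1 opinion: the lone +1 vertex flips to -1
# in the first round and the all-(-1) fixed point is reached.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
state = {0: +1, 1: -1, 2: -1, 3: -1}
final = run_majority_dynamics(adj, state)
```
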
Given a random initial configuration C 0 with each vertex independently taking state +1 with probability p + and −1 with probability p − = 1 − p + , majority dynamics has been investigated for several classes of graphs including lattice [7,8], infinite lattice [9], infinite trees [10], random regular graphs [11], and Erdős-Rényi random graphs [2,[12][13][14][15]. For example, it is shown in [14] that majority dynamics undergoes a phase transition at the threshold of connectivity p n = n −1 ln n for Erdős-Rényi random graph G(n, p n ), where p n is the edge probability [16].
In the present note, we continue this line of research by considering majority dynamics in inhomogeneous random graphs, where edges are independent but may have different probabilities. For i ≠ j, let e_ij = e_ji be the edge connecting vertices i and j in V. Define mutually independent Bernoulli random variables {X(e_ij)}_{1≤i<j≤n} by setting p_n(e_ij) := P(X(e_ij) = 1) = 1 − P(X(e_ij) = 0). The edge e_ij is present if X(e_ij) = 1 and absent if X(e_ij) = 0. Given the sequence p_n := {p_n(e_ij)}_{1≤i<j≤n}, the inhomogeneous random graph G(n, p_n) is the probability space of all graphs in which each edge e_ij is present independently with probability p_n(e_ij). Clearly, if p_n(e_ij) ≡ p_n ∈ (0, 1), we recover the classical Erdős-Rényi random graph model G(n, p_n).
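A minimal sketch of sampling from G(n, p_n) (illustrative only; the particular kernel in the example is our own choice, not one from the paper):

```python
import random

def sample_inhomogeneous_graph(n, edge_prob, seed=None):
    """Sample a graph on vertices 0..n-1 with each edge {i, j} present
    independently with probability edge_prob(i, j) = p_n(e_ij)."""
    rng = random.Random(seed)
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < edge_prob(i, j):
                adj[i].append(j)
                adj[j].append(i)
    return adj

# Example: a rank-1 kernel p_n(e_ij) = w_i * w_j with weights w_i in (0, 1).
# Taking edge_prob constant recovers the Erdos-Renyi model G(n, p_n).
n = 50
w = [0.1 + 0.8 * i / n for i in range(n)]
adj = sample_inhomogeneous_graph(n, lambda i, j: w[i] * w[j], seed=1)
```
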
In the rest of the note, we present the main results in Sect. 2 and provide the proofs in Sect. 3. By convention, we are interested in the properties pertaining to random graphs as n → ∞, and a property is said to hold asymptotically almost surely (a.a.s.) if the probability of achieving it tends to 1 as n → ∞. Some standard asymptotic notations such as o, O, Θ, ω, will be adopted; see e.g., the textbook [16]. All logarithms have base e.

Main Results
For a subset S ⊆ V, the expected neighbor densities of a vertex i ∈ V in G(n, p_n) and in the subgraph induced by S are defined by d_n(i) = (n − 1)^{−1} Σ_{j∈V\{i}} p_n(e_ij) and d_n(i, S) = |S\{i}|^{−1} Σ_{j∈S\{i}} p_n(e_ij), respectively, where |·| denotes the size of a set. Clearly, we have d_n(i) = d_n(i, V).
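The two densities can be computed directly from the edge-probability kernel (an illustrative sketch with our own function names, under the assumption that the kernel is given as a callable):

```python
def expected_neighbor_density(p, i, S):
    """d_n(i, S): the average of p_n(e_ij) over j in S \\ {i},
    where p(i, j) returns the edge probability p_n(e_ij).
    With S = V this is d_n(i)."""
    others = [j for j in S if j != i]
    return sum(p(i, j) for j in others) / len(others)

# For a constant kernel p_n(e_ij) = 0.3, both densities equal 0.3,
# as in the homogeneous (Erdos-Renyi) case.
n = 10
p = lambda i, j: 0.3
d_full = expected_neighbor_density(p, 0, range(n))  # d_n(0) = d_n(0, V)
d_sub = expected_neighbor_density(p, 0, range(5))   # d_n(0, S), S = {0,...,4}
```
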
Our first result shows that if the graph is dense and the probability p_+ of a vertex having state +1 in C_0 is only slightly smaller than one half, then the vertices in V unanimously hold the state −1 after a constant number of rounds.

Theorem 1. (Dense regime). Suppose there exists a sequence p_n ∈ (0, 1) and positive constants α < 1, β < 1, and 4(1 − β)/β ≤ γ ≤ 1 such that for all sufficiently large n the regularity condition (2.1) and the density condition (2.2) hold. If p_+ = 1/2 − ρ with ρ = ω(1/√(np_n)), then a.a.s. the vertices in G(n, p_n) unanimously hold state −1 after a constant number of rounds.

The assumption ρ = ω(1/√(np_n)) is tight in the sense that if it is replaced by ρ = c/√(np_n) for any constant c > 0, the result does not hold; cf. [14]. In fact, assume p_+ = 1/2 − c/√n and p_n ≡ 1. Let Y be the number of vertices having state +1 in C_0; then Y follows the binomial distribution Bin(n, 1/2 − c/√n), and by the central limit theorem P(Y > n/2) → P(Z ≥ 2c) > 0 as n → ∞, where Z is the standard normal random variable. With a random initial configuration C_0 (independently taking +1 with probability p_+ and −1 otherwise) and G(n, p_n) being a complete graph, on the event {Y > n/2} any vertex holding state −1 in C_0 will change its state to +1 in C_1. In other words, with positive probability all vertices will hold state +1 after just one round.

The next result concerns the sparse graph regime. It says that if the graph is sparse, then in order for the state −1 to take over the graph, the probability p_+ of a vertex having state +1 in C_0 must tend to zero sufficiently fast. This is in stark contrast to dense graphs, because small components that persistently hold state +1 may impede consensus if p_+ is not sufficiently small.

Theorem 2. (Sparse regime).
Suppose there exists a sequence p_n ∈ (0, 1) and positive constants α > 1, β < 1, and γ ≥ 1 such that for all sufficiently large n the corresponding regularity and density conditions hold. If p_+ = ω(n^{−1}e^{np_nγ}), then a.a.s. G(n, p_n) does not reach the unanimous state −1; if p_+ = o(n^{−1}e^{np_n}), then a.a.s. the vertices in G(n, p_n) unanimously have state −1 after two rounds.
The regularity and density conditions above are satisfied with γ = 1, any β ∈ (0, 1), and p_n(e_ij) ≡ p_n. We thus obtain the following result for the homogeneous random graph G(n, p_n) [14, Theorem 2.4].

Corollary 2.
Assume that p_n ≤ (ln n)/(αn) for α ∈ (0, 1) and all sufficiently large n. If p_+ = o(n^{−1}e^{np_n}), then a.a.s. the vertices in G(n, p_n) unanimously have state −1 after two rounds.
In majority dynamics, a set S ⊆ V is said to be a dynamic monopoly [17,18] if, starting from any configuration with all vertices in S holding state −1 (regardless of the states in V\S), the vertices in V will unanimously have state −1 eventually. It is shown in [19] that for any n ≥ 1 there exists a graph G with n vertices which possesses a dynamic monopoly of constant size. In the following we show that in G(n, p_n) the minimal size of a dynamic monopoly is a.a.s. at least nearly half of the graph size.

Theorem 3. Suppose there exist a sequence p_n ∈ (0, 1) and positive constants γ ≤ 1, c ≥ 4(1/2 − θ)^{−1}/γ, and c/√(np_n) ≤ θ < 1/2 such that for all sufficiently large n the regularity condition (2.6) and the density condition hold. Then the size of any dynamic monopoly in G(n, p_n) is at least (1/2 − δ)n a.a.s.

Note that when δ ∈ (0, 1/2), we have f(δ) ∈ (0, 1). All conditions in Theorem 3 are satisfied with γ = 1, c sufficiently large, and p_n(e_ij) ≡ p_n.
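The dynamic-monopoly definition can be made concrete with a small brute-force check (an illustrative sketch with our own helper names; it enumerates all configurations of V\S, so it is feasible only for tiny graphs):

```python
from itertools import product

def majority_step(adj, state):
    """One synchronous round of rule (1.1): adopt the neighbors'
    majority state, keep the current state on a tie."""
    new_state = {}
    for i in adj:
        s = sum(state[j] for j in adj[i])
        new_state[i] = state[i] if s == 0 else (1 if s > 0 else -1)
    return new_state

def is_dynamic_monopoly(adj, S, max_rounds=50):
    """Check whether starting every vertex of S at -1 forces eventual
    unanimity at -1 from *every* initial configuration of V \\ S.
    (Vertices of S are fixed at -1 only at time 0 and evolve freely
    afterwards, matching the definition above.)"""
    rest = [i for i in adj if i not in S]
    for bits in product([-1, +1], repeat=len(rest)):
        state = {i: -1 for i in S}
        state.update(dict(zip(rest, bits)))
        for _ in range(max_rounds):
            state = majority_step(adj, state)
        if any(v == +1 for v in state.values()):
            return False
    return True

# On the complete graph K4, three vertices at -1 form a dynamic monopoly,
# while two do not (two +1 vertices can sustain a period-2 oscillation).
K4 = {i: [j for j in range(4) if j != i] for i in range(4)}
```
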

Proofs
In this section, we present the proofs for the above Theorems 1, 2, and 3. Let N (i) be the set of neighbors of vertex i ∈ V in the inhomogeneous random graph G(n, p n ). The following lemma will be used in the proof of Theorem 1.

Lemma 1.
Suppose there exists a sequence p_n ∈ (0, 1) and a positive constant α < 1 such that for all sufficiently large n the density condition holds.

Proof. If a random variable Y is the sum of a list of independent Bernoulli random variables, then for any ε ∈ (0, 1) we have the tail estimate P(|Y − EY| ≥ εEY) ≤ 2e^{−ε²EY/3} by [16, Theorem 2.1, Remark 2.9]. For any c > 1, there exists a constant c_1 > 1 satisfying (n − 1)c/n > c_1 for all sufficiently large n. Since α < 1, we can choose c sufficiently large so that c_1 is large and the resulting bound tends to zero as n → ∞.

Proof of Theorem 1. Here, we follow the idea of [14] by dividing the proof into two regimes: (i) max_{i,j∈V} p_n(e_ij) ≥ n^{θ−1} and (ii) max_{i,j∈V} p_n(e_ij) ≤ n^{θ−1}, for some sufficiently small constant θ > 0.

Regime (i). Claim 1: For any vertex i ∈ V, P(C_1(i) = +1) = o(1) as n → ∞.
To show Claim 1, we fix i ∈ V and define a random variable Y(i) as the number of vertices in N(i) holding state −1 in C_0, which, given N(i), is the sum of a list of independent Bernoulli random variables. By the Hoeffding inequality [20, Corollary 21.7] and majority dynamics (1.1), we can bound P(C_1(i) = +1 | N(i)), where we take p_+ = 1/2 − ρ with ρ = ω(1/√(np_n)) and ρ ≤ 1/2. Applying Lemma 1 and the total probability formula, we can choose c sufficiently large such that P(C_1(i) = +1) = o(1), which concludes the proof of Claim 1.
Given a graph G and two subsets S_1, S_2 ⊆ V, we say S_1 controls S_2 [14] if all vertices in S_1 holding the state +1 in C_0 always leads to all vertices in S_2 holding the state +1 in C_1 (regardless of the states of the other vertices). By Claim 1, the expected number of vertices holding the state +1 in C_1 is o(n). Hence, the probability that the number of vertices holding +1 in C_1 is greater than (1 − β)n is no more than o(n)/((1 − β)n) = o(1) by Markov's inequality (cf. [20, Lemma 20.1]). We present the following Claim 2.
Claim 2: Any subset S with 1 ≤ |S| = s ≤ (1 − β)n can control no more than sn^{−θ/2} vertices a.a.s., where θ > 0 is the sufficiently small constant fixed at the beginning of the proof.

If Claim 2 is true, then starting from the (at most) (1 − β)n vertices holding +1 in C_0, there will be at most s(n^{−θ/2})^{2/θ} ≤ (1 − β)n · n^{−1} < 1 vertices holding +1 after 2/θ rounds a.a.s., by repeatedly applying Claim 2. This concludes the proof for Regime (i).
What remains to show is Claim 2. To this end, let S′ be a set with |S′| = s′ = sn^{−θ/2}. By the assumption 4(1 − β)/β ≤ γ ≤ 1, we know β ≥ 4/5. Noticing that |V\S| ≥ βn ≥ (1 − β)n, by invoking the regularity condition (2.1) we estimate the expected number of edges between S and V\S, where e(S_1, S_2) = Σ_{i∈S_1} Σ_{j∈S_2} 1{e_ij ∈ E(G(n, p_n))} and E(G(n, p_n)) denotes the edge set of G(n, p_n). Utilizing the concentration inequality [21, Theorem 3.3] and recalling that max_{i,j∈V} p_n(e_ij) ≥ n^{θ−1}, we obtain a concentration bound for e(S, V\S). Similarly, we calculate Ee(S′, S) = Σ_{i∈S′} Σ_{j∈S} p_n(e_ij) = Σ_{i∈S′} |S| d_n(i, S). By the regularity condition (2.1), s′sγ max_{i,j∈V} p_n(e_ij) ≤ Ee(S′, S) ≤ s′s max_{i,j∈V} p_n(e_ij). Since γ ≥ 4(1 − β)/β and s ≤ (1 − β)n, we have βγn/(2s) > 2. Another application of the concentration inequality completes the proof of Claim 2.

Regime (ii). Claim 3: There exists a sufficiently small constant θ > 0 such that when max_{i,j∈V} p_n(e_ij) ≤ n^{θ−1}, a.a.s. G(n, p_n) contains none of the following four types of subgraphs: two triangles sharing one vertex or one edge, and two 4-cycles sharing one vertex or one edge. Claim 3 follows by directly estimating the probability of containing any such subgraph. Since these subgraphs have 4 to 7 vertices and each has exactly one more edge than vertices, this probability is upper bounded by Σ_{k=4}^{7} n^k (n^{θ−1})^{k+1} = Σ_{k=4}^{7} n^{(k+1)θ−1} = o(1) for a sufficiently small θ. This proves Claim 3.

Fix a vertex i ∈ V and list its neighbors as j_1, j_2, ..., j_{|N(i)|}; see Fig. 1 for a schematic illustration of N(i), where i is in at most one 4-cycle with j_1 and j_2, and in at most one triangle with j_3 and j_4. In the following we estimate the probability P(C_2(i) = +1). For any 5 ≤ l ≤ |N(i)|, we consider whether the vertex j_l is good in C_1. Without loss of generality, we assume that ρ < 1/2. For any constant c > 0, using the Hoeffding inequality and the density condition (2.2), we obtain a bound that vanishes as n → ∞.
Hence, taking a sufficiently large c > 0 and an arbitrarily small ε > 0, it follows from Lemma 1 and the total probability formula that the probability of a fixed j_l being good in C_1 satisfies the bound (3.5). In view of Claim 3, we know that a.a.s. no two vertices in S := {j_5, ..., j_{|N(i)|}} can (a) be adjacent or (b) share a neighbor other than i. An example scenario of N(i) is shown in Fig. 1. Define a random variable Y to be the number of vertices among S which are good in C_1. By majority dynamics (1.1) and the above comment about good vertices, we obtain the bound (3.6). Given that (a) and (b) hold, the events in question are independent. Since their probabilities are upper bounded by (3.5), the right-hand side of (3.6) vanishes as n → ∞, because ε > 0 can be arbitrarily small and the density condition (2.2) holds with α < 1. Using Lemma 1 and the total probability formula again, we conclude that P(C_2(i) = +1) = o(1/n). As there are n vertices in V, a union bound shows that the probability of having some vertex i with C_2(i) = +1 is o(1). The proof of Regime (ii) is complete.
Proof of Theorem 2. We first consider the case p_+ = ω(n^{−1}e^{np_nγ}). It suffices to show that a.a.s. there exists an isolated vertex k ∈ V with C_0(k) = +1. Let Y be the number of isolated vertices holding the state +1 in C_0; we will resort to the second moment method, see e.g. [20, Sect. 20.1].
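The obstruction used here is concrete: under rule (1.1) an isolated vertex has no neighbors, so it never updates, and a single isolated +1 vertex blocks unanimity at −1 forever. A minimal sketch (illustrative only; helper names are our own):

```python
def majority_step(adj, state):
    """One synchronous round of rule (1.1); a vertex with no neighbors
    (or a tied neighborhood) keeps its current state."""
    new_state = {}
    for i in adj:
        s = sum(state[j] for j in adj[i])
        new_state[i] = state[i] if s == 0 else (1 if s > 0 else -1)
    return new_state

# A triangle at -1 plus one isolated vertex at +1: the isolated vertex
# never changes, so the graph never reaches the unanimous state -1.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: []}
state = {0: -1, 1: -1, 2: -1, 3: +1}
for _ in range(10):
    state = majority_step(adj, state)
```
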
To this end, we estimate EY from below, where in the first inequality we use the density condition (2.4) together with 1 − x ≥ e^{−x−x²} for x ∈ [0, 1/2], and in the last inequality we apply the density condition again to derive p_n²(e_ij) ≤ p_n(e_ij)γp_n. Furthermore, it follows from the density condition (2.4) and max_{i∈V} d_n(i) ≤ max_{i,j∈V} p_n(e_ij) that EY → ∞, since 0 ≤ np_n² ≤ (ln n)²/(nα²) → 0 as n → ∞. Next, we estimate the variance Var(Y). By direct calculation, using the density condition (2.4) in the last inequality, and since p_n → 0, we have Var(Y) = o((EY)²) in view of (3.7). An application of the second moment method yields P(Y ≥ 1) → 1 as n → ∞. The first statement of Theorem 2 is proved.

Next, we assume p_+ = o(n^{−1}e^{np_n}). We will show the second statement of Theorem 2 by proving that C_2(i) = −1 a.a.s. for every i ∈ V in three separate cases, (i)-(iii) below. In the estimate we use (1 − p_+)^{d+1−k} ≤ 1 and the binomial theorem. Since p_+ = o(n^{−1}e^{np_n}) and the density condition (2.4) holds, the right-hand side of (3.8) is o(1) as n → ∞, and the first moment method gives Z_1 = Z_2 = 0 a.a.s.

(iii): From (i) and (ii) we know that any non-leaf vertex takes the state −1 in C_1 a.a.s. Hence, any vertex that is not adjacent to a leaf vertex a.a.s. takes the state −1 in C_2 by majority dynamics (1.1). For a vertex i adjacent to at least one leaf vertex, we will show that C_2(i) = −1 a.a.s. The strategy here is to consider all possibilities that could lead to C_2(i) = +1 and show that each happens with probability o(1). Possibility (a): i is adjacent to at least two leaf vertices, say i_1 and i_2. In this case, C_0(i) = +1. [Otherwise, those leaf neighbors take −1 in C_1, and all non-leaf neighbors of i (if any exist) also take −1 in C_1. This leads to C_2(i) = −1, which contradicts the assumption.] Possibility (b): i is adjacent to only one leaf vertex, say i_3. In this case, i has no other neighbors.
[Otherwise, these non-leaf neighbors and i must take the state −1 in C_1 a.a.s. By (1.1), C_2(i) = −1. This contradicts the assumption.] Moreover, C_0(i) = +1. [Otherwise, the leaf neighbor i_3 holds −1 in C_1, and hence C_2(i) = −1, which is a contradiction.]