Abstract
We study binary opinion dynamics in a fully connected network of interacting agents. The agents are assumed to interact according to one of the following rules: (1) Voter rule: An updating agent simply copies the opinion of another randomly sampled agent; (2) Majority rule: An updating agent samples multiple agents and adopts the majority opinion in the selected group. We focus on the scenario where the agents are biased towards one of the opinions, called the preferred opinion. Using suitably constructed branching processes, we show that under both rules the mean time to reach consensus is \(\varTheta (\log N)\), where N is the number of agents in the network. Furthermore, under the majority rule model, we show that consensus can be achieved on the preferred opinion with high probability even if it is initially the opinion of the minority. We also study the majority rule model when stubborn agents with fixed opinions are present. Using mean field techniques, we characterise the stationary distribution of opinions in the network in the large system limit.
1 Introduction
The social learning literature [4, 12, 13, 29] studies how social agents, interacting under simple rules, learn the true utilities of their choices, opinions or technologies over time. In this context, the two central questions we study are: (1) Can social agents learn/adopt the better technology/opinion through simple rules of interactions and, if so, how fast? and (2) What are the effects of the presence of stubborn agents (having fixed opinions) on the dynamics of opinion diffusion?
We consider a setting where the choices available to each agent are binary and are represented by \(\left\{ {0}\right\} \) and \(\left\{ {1}\right\} \) [2, 4]. These are referred to as opinions of the agents. The interactions among the agents are modelled using two simple rules: the voter rule [7, 10, 19] and the majority rule [5, 6, 11, 21]. In the voter rule, an agent randomly samples one of its neighbours at an instant when it decides to update its opinion. The updating agent then adopts the opinion of the sampled neighbour. This simple rule captures the tendency of an individual to mimic other individuals in the society. In the majority rule, instead of sampling a single agent, an updating agent samples 2K (\(K \ge 1\)) neighbours and adopts the opinion of the majority of the sampled neighbours (including itself). This rule captures the tendency of the individuals to conform with the majority opinion in their local neighbourhoods.
1.1 Related Literature
The voter model and its variants have been studied extensively (see [3] for a recent survey) for different network topologies, e.g., finite integer lattices in different dimensions [10, 20], complete graphs with three states [30], heterogeneous graphs [31], random d-regular graphs [8], Erdős–Rényi random graphs, and random geometric graphs [33]. It is known [18, 28] that if the underlying graph is connected, then the classical voter rule leads to a consensus where all agents adopt the same opinion. Furthermore, if A is the set of all agents having an opinion \(i\in \left\{ {0,1}\right\} \) initially, then the probability that consensus is achieved on opinion i (referred to as the exit probability to opinion i) is given by d(A)/2m, where d(A) is the sum of the degrees of the vertices in A and m is the total number of edges in the graph. It is also known that for most network topologies the mean consensus time is \(\varOmega (N)\), where N is the total number of agents. The voter model in the presence of individuals who prefer one of the opinions over the other was first studied in [23] for finite dimensional integer lattices. It was found that the presence of even one such agent can significantly affect the network dynamics. In [24, 34], the voter model has been studied under the presence of stubborn individuals who do not update their opinions. In such a scenario, the network cannot reach a consensus. Using coalescing random walk techniques the average opinion in the network and the variance of opinions have been computed at steady state.
The majority rule model was first studied in [16], where it was assumed that, at every iteration, groups of random sizes are formed by the agents. Within each group, the majority opinion is adopted by all the agents. Similar models with fixed (odd) group size have been considered in [5, 21]. A more general majority rule based model has been analysed in [11] for complete graphs. It has been shown that with high probability (probability tending to one as \(N \rightarrow \infty \)) consensus is achieved on the opinion with the initial majority and the mean time to reach consensus is \(\varTheta (\log N)\). A synchronous majority rule model for \(K=1\) has been studied for random d-regular graphs on N vertices in [9]. It has been shown that when the initial difference between the fractions of agents having the two opinions is above \(c\sqrt{1/d+d/N}\) (for some constant \(c>0\)) then consensus is achieved with high probability in \(O(\log N)\) time on the opinion with the initial majority. A deterministic version of the majority rule model, where an agent, instead of randomly sampling a subset of its neighbours, adopts the majority opinion among all its neighbours, is considered in [1, 14, 25, 26]. In such models, given the graph structure of the network, the opinions of the agents at any time are a deterministic function of the initial opinions of the agents. The interest there is to find the initial distribution of opinions for which the network converges to some specific absorbing state.
1.2 Contributions
In all the prior works on the voter and the majority rule models, it is assumed that opinions or technologies are indistinguishable. However, in a social learning model, one opinion/technology may be inherently ‘better’ than the other, leading to more utility to individuals choosing the better option in a round of update. As a result, individuals currently using the better technology will update less frequently than individuals with the worse technology. To model this scenario, we assume that an agent having opinion \(i \in \left\{ {0,1}\right\} \) performs an update with probability \(q_i\). By choosing \(q_1 < q_0\), we make the agents ‘biased’ towards the opinion \(\left\{ {1}\right\} \), which is referred to as the preferred opinion. We study the opinion dynamics under both voter and majority rules when the agents are biased. We focus on the case where the underlying graph is complete, which closely models situations where the agents are mobile and can therefore sample any other agent in the population.
For the voter model with biased agents, we show that the probability of reaching consensus on the non-preferred opinion decreases exponentially with the network size. Furthermore, the mean consensus time is shown to be logarithmic in the network size. This is in sharp contrast to the voter model with unbiased agents where the probability of reaching consensus on any opinion remains constant and the mean consensus time grows linearly with the network size. Therefore, in the biased voter model consensus is achieved exponentially faster than that in the unbiased voter model.
For the majority rule model with biased agents, we show that the network reaches consensus on the preferred opinion with high probability only if the initial fraction of agents with the preferred opinion is above a certain threshold determined by the biases of the agents. In particular, even if the preferred opinion is the minority opinion in the initial population, consensus can be achieved on the preferred opinion with high probability. This is in contrast to the majority rule model with unbiased agents where the opinion with the initial majority wins with high probability. The mean consensus time for the biased majority rule model is shown to be \(\varTheta (\log N)\), which is the same as in the unbiased majority rule model. However, existing proofs for the unbiased majority rule model [11, 21] cannot be extended to the biased case as they crucially rely on the fact that opinions are indistinguishable. We use suitably constructed branching processes and monotonicity of certain rational functions to prove the results for the biased model.
We also study the majority rule model in the presence of agents having fixed opinions at all times. These agents are referred to as ‘stubborn’ agents. A similar study of the voter model in the presence of stubborn agents was done in [34]. In the presence of stubborn agents, the network cannot reach a consensus state. The key objective, therefore, is to study the stationary distribution of opinions among the non-stubborn agents. In [34], coalescing random walk techniques were used to study this stationary distribution of opinions. However, such techniques do not apply to majority rule dynamics. We analyse the network dynamics in the large scale limit using mean field techniques. In particular, we show that depending on the proportions of stubborn agents the mean field can have either a single equilibrium point or multiple equilibrium points. If multiple equilibrium points are present, the network shows metastability, in which it switches between stable configurations, spending a long time in each configuration.
An earlier version of this work [27] contained some of the results of this paper and an analysis of the majority rule model for \(K=1\). However, only sketches of the proofs were provided. In the current paper, we provide rigorous proofs of all results and a more general analysis of the majority rule model (for \(K \ge 1\)).
1.3 Organisation
The rest of the paper is organised as follows. In Sect. 2, we introduce the model with biased agents. In Sects. 3 and 4, we state the main results for the voter model and the majority rule model with biased agents, respectively. Section 5 analyses the majority rule model with stubborn agents. In Sects. 6–10, we provide the detailed proofs of the main results on the voter and majority rule models with biased agents. In Sect. 11, we study the behaviour of the biased voter and majority rule models on d-regular random graphs. Finally, the paper is concluded in Sect. 12.
2 Model with Biased Agents
We consider a network of N social agents. The opinion of each agent is assumed to be a binary variable taking values in the set \(\{0, 1\}\). Initially, every agent adopts one of the two opinions. Each agent considers updating its opinion at points of an independent unit rate Poisson point process associated with itself. At such a point, an agent either updates its opinion or retains its past opinion. We assume that an agent with opinion \(i \in \left\{ {0,1}\right\} \) updates its opinion at a point of the Poisson process associated with itself with probability \(q_i \in (0,1)\) and retains its opinion with probability \(p_i=1-q_i\). To make the agents ‘biased’ towards opinion \(\left\{ {1}\right\} \), we assume that \(q_0 > q_1\), which implies that an agent with opinion \(\left\{ {1}\right\} \) updates its opinion less frequently than an agent with opinion \(\left\{ {0}\right\} \).
In case the agent decides to update its opinion, it does so either using the voter rule or the majority rule. In the voter rule, an updating agent samples an agent uniformly at random from the N agents (with replacement) in the network and adopts the opinion of the sampled agent. In the majority rule, an updating agent samples 2K agents (\(K \ge 1\)) uniformly at random (with replacement) and adopts the opinion of the majority of the \(2K+1\) agents including itself. The results derived in this paper can be extended to the case where the updating agent samples an agent from a random group of size O(N). However, for simplicity we only focus on the case where sampling occurs from the whole population.
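The two update rules above can be sketched in a few lines of Python. This is an illustrative simulation of a single asynchronous update on the complete graph, not code from the paper; the default values of \(q_0\), \(q_1\) and K are placeholders.

```python
import random

def update(opinions, agent, rule, q=(0.9, 0.5), K=1):
    """One asynchronous update of `agent` on the complete graph.

    opinions: list of 0/1 opinions; q = (q_0, q_1): update probabilities;
    rule: 'voter' or 'majority'. Sampling is uniform with replacement.
    """
    N = len(opinions)
    # with probability p_i = 1 - q_i the agent retains its opinion
    if random.random() >= q[opinions[agent]]:
        return
    if rule == 'voter':
        # copy the opinion of one uniformly sampled agent
        opinions[agent] = opinions[random.randrange(N)]
    else:
        # sample 2K agents; adopt the majority among the 2K+1 (incl. itself)
        ones = opinions[agent] + sum(opinions[random.randrange(N)]
                                     for _ in range(2 * K))
        opinions[agent] = 1 if ones >= K + 1 else 0
```

Note that both consensus states are absorbing under either rule: once all opinions agree, no update can change them.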
3 Main Results for the Voter Model with Biased Agents
We first consider the voter model with biased agents. In this case, clearly, the network reaches consensus in a finite time with probability 1. Our interest is to find out the probability with which consensus is achieved on the preferred opinion \(\{1\}\). This is referred to as the exit probability of the network. We also intend to characterise the average time to reach the consensus.
The case \(q_1=q_0=1\) is referred to as the voter model with unbiased agents, which has been analysed in [7, 19]. It is known that for unbiased agents the probability with which consensus is reached on a particular opinion is simply equal to the initial fraction \(\alpha \) of agents having that opinion and the expected time to reach consensus for large N is approximately given by \(N h(\alpha )\), where \(h(\alpha )=-[\alpha \ln (\alpha )+(1-\alpha )\ln (1-\alpha )]\). We now proceed to characterise these quantities for the voter model with biased agents.
Let \(X^{(N)}(t)\) denote the number of agents with opinion \(\{1\}\) at time \(t \ge 0\). Clearly, \(X^{(N)}(\cdot )\) is a Markov process on state space \(\{0,1,\ldots ,N\}\), with absorbing states 0 and N. The transition rates from state k are given by
$$\begin{aligned} q(k \rightarrow k+1)=q_0\,(N-k)\,\frac{k}{N}, \qquad q(k \rightarrow k-1)=q_1\,k\,\frac{N-k}{N}, \end{aligned}$$
where \(q(i \rightarrow j)\) denotes the rate of transition from state i to state j. The embedded discrete-time Markov chain \({\tilde{X}}^{(N)}\) for \(X^{(N)}\) is a one-dimensional random walk on \(\left\{ {0,1,\ldots ,N}\right\} \) with jump probability \(p=q_0/(q_0+q_1)\) to the right and \(q=1-p\) to the left. We define \(r=q/p <1\) and \(\bar{r}=1/r\). Let \(T_k\) denote the first hitting time of state k, i.e.,
$$\begin{aligned} T_k=\inf \left\{ {t \ge 0: X^{(N)}(t)=k}\right\} . \end{aligned}$$
We are interested in the asymptotic behaviour of the quantities \(E_N(\alpha ):={\mathbb {P}}_{\left\lfloor {\alpha N}\right\rfloor }\left( {T_N < T_0}\right) \) and \(t_N(\alpha )={\mathbb {E}}_{{\left\lfloor {\alpha N}\right\rfloor }}\left[ {T_0 \wedge T_N}\right] \), where \({\mathbb {P}}_x\left( {\cdot }\right) \) and \({\mathbb {E}}_x\left[ {\cdot }\right] \), respectively, denote the probability measure and expectation conditioned on the event \(X^{(N)}(0)=x\). To characterise the above quantities we require the following lemma, which follows from the gambler's ruin identity for one-dimensional asymmetric random walks [32].
Lemma 1
For \(0 \le a< x < b \le N\), we have
$$\begin{aligned} {\mathbb {P}}_x\left( {T_b < T_a}\right) =\frac{1-r^{x-a}}{1-r^{b-a}}. \end{aligned}$$
From the above lemma it follows that
$$\begin{aligned} 1-E_N(\alpha )=\frac{r^{\left\lfloor {\alpha N}\right\rfloor }-r^{N}}{1-r^{N}} \le c\, r^{\alpha N} \end{aligned}$$
for some constant \(c >0\) (since \(r < 1\)). Hence, the probability of having a consensus on the nonpreferred opinion approaches 0 exponentially fast in N. This is unlike the voter model with unbiased agents where the probability of having consensus on either opinion remains constant with respect to N.
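For a concrete check, the exit probability in the lemma can be evaluated directly. Below is a minimal sketch with illustrative values of \(q_0\) and \(q_1\); the identity used is the gambler's ruin formula for the embedded walk with \(r=q_1/q_0\).

```python
def exit_probability(N, alpha, q0=0.9, q1=0.5):
    """P(consensus on the preferred opinion 1) for the biased voter
    model, starting from a fraction alpha of 1-agents, via the
    gambler's-ruin identity for the embedded walk with r = q1/q0 < 1."""
    r = q1 / q0
    x = int(alpha * N)  # initial number of agents with opinion 1
    return (1 - r**x) / (1 - r**N)
```

For N = 100 and alpha = 0.2 this already exceeds 0.9999, illustrating the exponential decay of the probability of consensus on the non-preferred opinion.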
The following theorem characterises the mean time \({t}_N(\alpha )\) to reach the consensus state starting from \(\alpha \) fraction of agents having opinion \(\{1\}\).
Theorem 1
For all \(\alpha \in (0,1)\) we have \(t_N(\alpha )=\varTheta (\log N)\).
Hence, the above theorem shows that the mean consensus time in the biased voter model is logarithmic in the network size. This is in contrast to the voter model with unbiased agents where the mean consensus time is linear in the network size. Thus, with biased agents, the network reaches consensus exponentially faster.
We now consider the measure-valued process \(x^{(N)}=X^{(N)}/N\), which describes the evolution of the fraction of agents with opinion \(\{1\}\). We show that the following convergence takes place.
Theorem 2
If \(x^{(N)}(0) \Rightarrow \alpha \), then \(x^{(N)}(\cdot ) \Rightarrow x(\cdot )\), where \(\Rightarrow \) denotes weak convergence and \(x(\cdot )\) is a deterministic process with initial condition \(x(0)=\alpha \) and is governed by the following differential equation
$$\begin{aligned} \dot{x}(t)=(q_0-q_1)\,x(t)(1-x(t)). \end{aligned}$$(5)
According to the above result, for large N, the process \(x^{(N)}(\cdot )\) is well approximated by the deterministic process \(x(\cdot )\), which is generally referred to as the mean field limit of the system. Using the mean field limit, we can approximate the mean consensus time \(t_N(\alpha )\) by the time the process \(x(\cdot )\) takes to reach the state \(1-1/N\) starting from \(\alpha \).
Theorem 3

1.
For the process \(x(\cdot )\) defined by (5), we have \(x(t) {\rightarrow } 1\) as \(t \rightarrow \infty \) for any \(x(0) \in (0,1)\).

2.
Let \(t(\epsilon ,\alpha )\) denote the time required by the process \(x(\cdot )\) to reach \(\epsilon \in (0,1)\) starting from \(x(0)=\alpha \in (0,1)\). Then
$$\begin{aligned} t(\epsilon ,\alpha )=\frac{1}{q_0-q_1}\left( {\log \frac{\epsilon }{1-\epsilon }-\log \frac{\alpha }{1-\alpha }}\right) . \end{aligned}$$(6)In particular, for \(\epsilon =1-1/N\) we have
$$\begin{aligned} t(1-1/N,\alpha )&=\frac{1}{q_0-q_1} \log (N-1)-\frac{1}{q_0-q_1} \log \left( {\frac{\alpha }{1-\alpha }}\right) . \end{aligned}$$(7)
Proof
Since \( q_0 > q_1\) and \(x(t) \in (0,1)\) for all \(t \ge 0\), we have from (5) that \({\dot{x}}(t) \ge 0\) for all \(t \ge 0\). Hence, \(x(t) \rightarrow 1\) as \(t \rightarrow \infty \).
The second assertion follows directly by solving (5) with initial condition \(x(0)=\alpha \). \(\square \)
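The closed-form estimate (7) is easy to evaluate numerically; a short sketch, with illustrative parameter values:

```python
import math

def mf_consensus_time(N, alpha, q0=0.9, q1=0.5):
    """Mean-field estimate t(1 - 1/N, alpha) of the consensus time,
    i.e. the time the logistic mean-field trajectory needs to climb
    from alpha to 1 - 1/N. Parameter values are illustrative."""
    return (math.log(N - 1) - math.log(alpha / (1 - alpha))) / (q0 - q1)
```

Squaring N (i.e. doubling log N) roughly doubles the estimate, consistent with the \(\varTheta (\log N)\) behaviour of Theorem 1.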
Remark 1
Note that the process \(x(\cdot )\) does not reach 1 in finite time even though the process \(x^{(N)}(\cdot )\) does reach 1 in finite time with probability 1. However, it is ‘reasonable’ to assume that \(t_N(\alpha )\) is ‘closely’ approximated by \(t(1-1/N,\alpha )\). Such approximation of the absorption time of an absorbing Markov chain using its corresponding mean field limit is common in the literature [21, 30]. However, except for a few special cases, e.g. [17], there is no rigorous theory justifying such approximations. It is also worth noting that for the unbiased voter rule (\(q_0=q_1\)) the mean field limit is simply \(x(t)=\alpha \) for all \(t \ge 0\). Hence, in this case the mean consensus time cannot be approximated with the mean field limit.
3.1 Simulation Results
In Fig. 1, we plot the exit probability for both unbiased (\(q_0=q_1=1\)) and biased (\(1=q_0 > q_1=0.5\)) cases as functions of the number of agents N for \(\alpha =0.2\). As expected from our theory, we observe that in the biased case the exit probability increases exponentially to 1 with the increase in N. This is in contrast to the unbiased case, where the exit probability remains constant at \(\alpha \) for all N.
In Fig. 2a, we plot the mean consensus time \({t}_N(\alpha )\) for both unbiased and biased cases as a function of N for \(\alpha =0.4\). We observe a good match between the estimate obtained in Theorem 3 and the simulation results. The observation also verifies the statement of Theorem 1. In Fig. 2b, we plot the mean consensus time as a function of \(\alpha \) for both biased and unbiased cases. The network size is kept fixed at \(N=100\). We observe that for the unbiased case, the consensus time increases in the range \(\alpha \in (0,0.5)\) and decreases in the range \(\alpha \in (0.5,1)\). In contrast, for the biased case, the consensus time steadily decreases with the increase in \(\alpha \). This is expected since, in the unbiased case, consensus is achieved faster on a particular opinion if the initial number of agents having that opinion is more than the initial number of agents having the other opinion. On the other hand, in the biased case, consensus is achieved with high probability on the preferred opinion and therefore increasing the initial fraction of agents having the preferred opinion always decreases the mean consensus time.
4 Main Results for the Majority Rule Model with Biased Agents
In this section, we consider the majority rule model with biased agents. As in the voter model, it is easy to see that in this case a consensus is achieved in a finite time with probability 1. We proceed to find the exit probability to opinion 1 and the mean consensus time.
Let \(X^{(N)}(t)\) denote the number of agents with opinion \(\{1\}\) at time \(t \ge 0\). Clearly, \(X^{(N)}(\cdot )\) is a Markov process on state space \(\{0,1,\ldots ,N\}\). The jump rates of \(X^{(N)}\) from state n to states \(n+1\) and \(n-1\) are given by
$$\begin{aligned} q(n \rightarrow n+1)&=q_0\,(N-n)\sum _{i=K+1}^{2K} \left( {\begin{array}{c}2K\\ i\end{array}}\right) \left( {\frac{n}{N}}\right) ^{i}\left( {1-\frac{n}{N}}\right) ^{2K-i},\\ q(n \rightarrow n-1)&=q_1\, n\sum _{i=K+1}^{2K} \left( {\begin{array}{c}2K\\ i\end{array}}\right) \left( {1-\frac{n}{N}}\right) ^{i}\left( {\frac{n}{N}}\right) ^{2K-i}, \end{aligned}$$
respectively. Let \({{\tilde{X}}}^{(N)}\) denote the embedded Markov chain corresponding to \(X^{(N)}\). Then the jump probabilities for the embedded chain \({{\tilde{X}}}^{(N)}\) are given by
$$\begin{aligned} {\mathbb {P}}\left( {n \rightarrow n+1}\right) =\frac{g_K(n/N)}{g_K(n/N)+r}, \qquad {\mathbb {P}}\left( {n \rightarrow n-1}\right) =\frac{r}{g_K(n/N)+r}, \end{aligned}$$
where \(1\le n \le N-1\), \(r=q_1/q_0 < 1\), and \(g_K:(0,1) \rightarrow (0,\infty )\) is defined as
$$\begin{aligned} g_K(x)=\frac{(1-x)\sum _{i=K+1}^{2K} \left( {\begin{array}{c}2K\\ i\end{array}}\right) x^{i}(1-x)^{2K-i}}{x\sum _{i=K+1}^{2K} \left( {\begin{array}{c}2K\\ i\end{array}}\right) (1-x)^{i}x^{2K-i}}. \end{aligned}$$(13)
With probability 1, \(X^{(N)}\) gets absorbed in one of the states 0 or N in finite time. We are interested in the probability of absorption in the state N and the average time till absorption. We first state the following lemma which is key to proving many of the results in this section.
Lemma 2
The function \(g_K:(0,1) \rightarrow (0,\infty )\) as defined by (13) is strictly increasing and is therefore also one-to-one.
In the following theorem, we characterise the exit probability to state N.
Theorem 4

1.
Let \(E_N(n)\) denote the probability that the process \(X^{(N)}\) gets absorbed in state N starting from state n. Then, we have
$$\begin{aligned} E_N(n)=\frac{\sum _{t=0}^{n-1} \prod _{j=1}^t \frac{r}{g_K(j/N)}}{\sum _{t=0}^{N-1} \prod _{j=1}^t \frac{r}{g_K(j/N)}}, \end{aligned}$$(14)
2.
Define \(E_N(\alpha ):=E_N(\left\lfloor {\alpha N}\right\rfloor )\) and \(\beta :=g_K^{-1}(r)\). Then \(E_N(\alpha ) \rightarrow 1\) (resp. \(E_N(\alpha ) \rightarrow 0\)) as \(N \rightarrow \infty \) if \(\alpha > \beta \) (resp. \(\alpha < \beta \)) and this convergence is exponential in N.
Hence, a phase transition of the exit probability occurs at \(\beta =g_K^{-1}\left( {r}\right) \) for all values of \(K \ge 1\). This implies that even though the agents are biased towards the preferred opinion, consensus may not be obtained on the preferred opinion if the initial fraction of agents having the preferred opinion is below the threshold \(\beta \). This is in contrast to the voter model, where consensus is obtained on the preferred opinion irrespective of the initial state. The threshold \(\beta \) can be computed by solving \(g_K(\beta )=r\) using either the Newton–Raphson method or other fixed point methods.
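Since \(g_K\) is strictly increasing (Lemma 2), bisection is perhaps the simplest such method. The sketch below writes \(g_K\) in terms of binomial tail probabilities, as reconstructed from the jump rates; for \(K=1\) it reduces to \(x/(1-x)\), which gives \(\beta =q_1/(q_0+q_1)\), the value used in Sect. 4.1.

```python
from math import comb

def g(x, K):
    """g_K(x): ratio of up- to down-jump rates at fraction x, with the
    bias factor r = q1/q0 removed; strictly increasing in x."""
    up = sum(comb(2 * K, i) * x**i * (1 - x)**(2 * K - i)
             for i in range(K + 1, 2 * K + 1))
    down = sum(comb(2 * K, i) * (1 - x)**i * x**(2 * K - i)
               for i in range(K + 1, 2 * K + 1))
    return (1 - x) * up / (x * down)

def beta(q0, q1, K, tol=1e-12):
    """Solve g_K(beta) = r = q1/q0 by bisection, using monotonicity."""
    r, lo, hi = q1 / q0, tol, 1 - tol
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid, K) < r else (lo, mid)
    return (lo + hi) / 2
```

For example, beta(1, 0.6, 1) returns approximately 0.375, in agreement with the closed form for \(K=1\).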
Remark 2
We note that for the unbiased majority rule model we have \(r=1\) and \(\beta =g_K^{-1}(r)=g_K^{-1}(1)=1/2\). Thus, the known results [11, 21] for the majority rule model with unbiased agents are recovered.
We now characterise the mean time \({t}_N(\alpha )\) to reach the consensus state starting from \(\alpha \) fraction of agents having opinion \(\{1\}\). As before, we define \(T_n\) to be the time of first hitting the state n, i.e., \(T_n=\inf \left\{ {t\ge 0: X^{(N)}(t)\ge n}\right\} \).
Theorem 5
For \(\alpha \in (0,\beta ) \cup (\beta ,1)\) we have \(t_N(\alpha )=\varTheta (\log N)\).
The theorem above shows that the mean consensus time is logarithmic in the network size. To prove the theorem we use branching processes and the monotonicity shown in Lemma 2. Our proof does not require the indistinguishability of the opinions and is therefore more general than existing proofs for the unbiased majority rule model [11, 21].
It is easy to derive the mean field limit corresponding to the empirical measure process \(x^{(N)}=X^{(N)}/N\). Using the transition rates of the process \(X^{(N)}(\cdot )\) it can be verified that if \(x^{(N)}(0) \Rightarrow \alpha \) as \(N \rightarrow \infty \), then \(x^{(N)}(\cdot ) \Rightarrow x(\cdot )\) as \(N \rightarrow \infty \), where the process \(x(\cdot )\) satisfies the initial condition \(x(0)=\alpha \) and is governed by the following ODE:
$$\begin{aligned} \dot{x}(t)=x(t)(1-x(t))\,h_K(x(t))\left[ {q_0\, g_K(x(t))-q_1}\right] , \end{aligned}$$
where \(h_K\) is defined as \(h_K(x)=\sum _{i=K+1}^{2K} \left( {\begin{array}{c}2K\\ i\end{array}}\right) (1-x)^{i-1}x^{2K-i}\). By definition \(h_K(x) > 0\) for \(x \in (0,1)\). Hence, from Lemma 2, it follows that the process \(x(\cdot )\) has three equilibrium points at 0, 1, and \(\beta \), respectively. Furthermore, using the monotonicity of \(g_K\) established in Lemma 2 and the nonnegativity of \(h_K\) we have that \(\dot{x}(t) > 0\) for \(x(t) > \beta \) and \(\dot{x}(t) < 0\) for \(x(t) < \beta \). This shows that the only stable equilibrium points of the mean field limit \(x(\cdot )\) are 0 and 1. At \(\beta \), \(x(\cdot )\) has an unstable equilibrium point.
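This bistability is easy to see numerically. The sketch below integrates the mean field by the Euler method, writing the drift directly from the jump rates (a 0-agent flips when at least \(K+1\) of its 2K samples hold opinion 1, and symmetrically); the default parameter values match the simulation setting of Sect. 4.1 and are otherwise illustrative.

```python
from math import comb

def mf_trajectory(x0, q0=1.0, q1=0.6, K=1, T=30.0, dt=1e-3):
    """Euler integration of the majority-rule mean-field ODE from x0."""
    x = x0
    for _ in range(int(T / dt)):
        # probability that a 0-agent (resp. 1-agent) flips at an update
        up = sum(comb(2 * K, i) * x**i * (1 - x)**(2 * K - i)
                 for i in range(K + 1, 2 * K + 1))
        down = sum(comb(2 * K, i) * (1 - x)**i * x**(2 * K - i)
                   for i in range(K + 1, 2 * K + 1))
        x += dt * (q0 * (1 - x) * up - q1 * x * down)
    return x
```

With these parameters \(\beta =0.375\): trajectories started above \(\beta \) converge to 1, and those started below converge to 0.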
4.1 Simulation Results
In Fig. 3a, we plot the exit probability \(E_N(\alpha )\) as a function of the total number N of agents in the network. The parameters are chosen to be \(q_0=1\), \(q_1=0.6\), \(K=1\). For this parameter setting, we can explicitly compute the threshold \(\beta \) to be \(\beta =g_K^{-1}(r)=q_1/(q_0+q_1)=0.375\). We observe that for \(\alpha > \beta \) the exit probability exponentially increases to 1 with the increase in N and for \(\alpha < \beta \) the exit probability decreases exponentially to zero with the increase in N. This is in accordance with the assertion made in Theorem 4. Similarly, in Fig. 3b, we plot the exit probability as a function of the initial fraction \(\alpha \) of agents having opinion \(\{1\}\) for the same parameter setting and different values of N. The plot shows a clear phase transition at \(\beta =0.375\). The sharpness of the transition increases as N increases.
In Fig. 4a, we plot the mean consensus time under the majority rule as a function of N for different values of \(\alpha \). As predicted by Theorem 5, we find that the mean consensus time is logarithmic in the network size. In Fig. 4b, we study the mean consensus time as a function of K for \(q_0=1,q_1=0.6,\alpha =0.5, N=50\). We observe that with the increase in K, the mean time to reach consensus decreases. This is expected since the slope of the mean field x(t) increases with K. This leads to faster convergence to the stable equilibrium points.
5 Majority Model with Stubborn Agents
In this section, we consider the majority rule model in the presence of ‘stubborn agents’. These are agents that never update their opinions. The other agents, referred to as the non-stubborn agents, are assumed to update their opinions at all points of the Poisson processes associated with themselves. We focus on the case where the updates occur according to the majority rule model. The voter model with stubborn agents was studied before in [34] using coalescing random walks. However, this technique does not apply to the majority rule model. We use mean field techniques to study the opinion dynamics under the majority rule model.
We denote by \(\gamma _i\), \(i \in \{0,1\}\), the fraction of agents in the network who are stubborn and have opinion i at all times. Thus, \((1-\gamma _0-\gamma _1)\) is the fraction of non-stubborn agents in the network. The presence of stubborn agents prevents the network from reaching a consensus state. This is because at all times there are at least \(N \gamma _0\) stubborn agents having opinion \(\{0\}\) and \(N \gamma _1\) stubborn agents having opinion \(\{1\}\). Furthermore, since each non-stubborn agent may interact with some stubborn agents at every update instant, it is always possible for a non-stubborn agent to change its opinion. Below we characterise the equilibrium fraction of non-stubborn agents having opinion \(\{1\}\) in the network for large N using mean field techniques. For analytical tractability, we consider the case \(K=1\), i.e., when an agent samples two agents at each update instant. However, similar results hold even for larger values of K.
Let \(x^{(N)}(t)\) denote the fraction of non-stubborn agents having opinion \(\{1\}\) at time \(t \ge 0\). Clearly, \(x^{(N)}(\cdot )\) is a Markov process with possible jumps at the points of a rate \(N(1-\gamma _0-\gamma _1)\) Poisson process. The process \(x^{(N)}(\cdot )\) jumps from the state x to the state \(x+1/(N(1-\gamma _0-\gamma _1))\) when one of the non-stubborn agents having opinion \(\{0\}\) becomes active (which happens with rate \(N(1-\gamma _0-\gamma _1)(1-x)\)) and samples two agents with opinion \(\{1\}\). The probability of sampling an agent having opinion \(\{1\}\) from the entire network is \((1-\gamma _0-\gamma _1)x+\gamma _1\). Hence, the total rate at which the process transits from state x to the state \(x+1/(N(1-\gamma _0-\gamma _1))\) is given by
$$\begin{aligned} N(1-\gamma _0-\gamma _1)(1-x)\left[ {(1-\gamma _0-\gamma _1)x+\gamma _1}\right] ^2. \end{aligned}$$
Similarly, the rate of the other possible transition is given by
$$\begin{aligned} N(1-\gamma _0-\gamma _1)\,x\left[ {(1-\gamma _0-\gamma _1)(1-x)+\gamma _0}\right] ^2. \end{aligned}$$
As in Theorem 2, it can be shown from the above transition rates that the process \(x^{(N)}(\cdot )\) converges weakly to the mean field limit \(x(\cdot )\), which satisfies the following differential equation
$$\begin{aligned} \dot{x}(t)=(1-x(t))\left[ {(1-\gamma _0-\gamma _1)x(t)+\gamma _1}\right] ^2-x(t)\left[ {(1-\gamma _0-\gamma _1)(1-x(t))+\gamma _0}\right] ^2. \end{aligned}$$(18)
We now study the equilibrium distribution \(\pi _N\) of the process \(x^{(N)}(\cdot )\) for large N via the equilibrium points of the mean field \(x(\cdot )\).
From (18) we see that \({\dot{x}}(t)\) is a cubic polynomial in x(t). Hence, the process \(x(\cdot )\) can have at most three equilibrium points in [0, 1]. We first characterise the stability of these equilibrium points.
Proposition 1
The process \(x(\cdot )\) defined by (18) has at least one equilibrium point in (0, 1). Furthermore, the number of stable equilibrium points of \(x(\cdot )\) in (0, 1) is either two or one. If there exists only one equilibrium point of \(x(\cdot )\) in (0, 1), then the equilibrium point must be globally stable (attractive).
Proof
Define \(f(x)=(1-x)[(1-\gamma _0-\gamma _1)x+\gamma _1]^2-x[(1-\gamma _0-\gamma _1)(1-x)+\gamma _0]^2\). Clearly, \(f(0)=\gamma _1^2 > 0\) and \(f(1)=-\gamma _0^2 < 0\). Hence, there exists at least one root of \(f(x)=0\) in (0, 1). This proves the existence of an equilibrium point of \(x(\cdot )\) in (0, 1).
Since f(x) is a cubic polynomial and \(f(0)f(1) < 0\), either all three roots of \(f(x)=0\) lie in (0, 1) or exactly one root of \(f(x)=0\) lies in (0, 1). Let the three (possibly complex and non-distinct) roots of \(f(x)=0\) be denoted by \(r_1, r_2, r_3\), respectively. By expanding f(x) we see that the coefficient of the cubic term is \(-2(1-\gamma _0-\gamma _1)^2\). Hence, f(x) can be written as
$$\begin{aligned} f(x)=-2(1-\gamma _0-\gamma _1)^2(x-r_1)(x-r_2)(x-r_3). \end{aligned}$$(19)
We first consider the case when \(0< r_1, r_2, r_3 <1\) and not all of them are equal. Let us suppose, without loss of generality, that the roots are arranged in the increasing order, i.e., \(0< r_1 \le r_2< r_3 < 1\) or \(0< r_1< r_2 \le r_3 < 1\). From (19) and (18), it is clear that, if \(x(t)> r_2\) and \(x(t) > r_3\), then \({\dot{x}}(t) < 0\). Similarly, if \(x(t)> r_2\) and \(x(t) < r_3\), then \({\dot{x}}(t) > 0\) . Hence, if \(x(0) > r_2\) then \(x(t) \rightarrow r_3\) as \(t \rightarrow \infty \). Using similar arguments we have that for \(x(0) < r_2\), \(x(t) \rightarrow r_1\) as \(t \rightarrow \infty \). Hence, \(r_1,r_3\) are the stable equilibrium points of \(x(\cdot )\). This proves that there exist at most two stable equilibrium points of the mean field \(x(\cdot )\).
Now suppose that there exists only one equilibrium point of \(x(\cdot )\) in (0, 1). This is possible either i) if there exists exactly one real root of \(f(x)=0\) in (0, 1), or ii) if all the roots of \(f(x)=0\) are equal and lie in (0, 1). Let \(r_1\) be a root of \(f(x)=0\) in (0, 1). Now by expanding f(x) from (19), we see that the product of the roots must be \(\gamma _1^2/[2(1-\gamma _0-\gamma _1)^2] >0\). This implies that the other roots, \(r_2\) and \(r_3\), must satisfy one of the following conditions: 1) \(r_2, r_3 >1\), 2) \(r_2, r_3 < 0\), 3) \(r_2, r_3\) are complex conjugates, 4) \(r_2=r_3=r_1\).
In all the above cases, we have that \((xr_2)(xr_3) \ge 0\) for all \(x \in [0,1]\) with equality if and only if \(x=r_1=r_2=r_3\). Hence, from (19) and (18), it is easy to see that \({\dot{x}}(t) > 0\) when \(0 \le x(t) < r_1\) and \({\dot{x}}(t) < 0\) when \(1 \ge x(t) > r_1\). This implies that \(x(t) \rightarrow r_1\) for all \(x(0) \in [0,1]\). In other words, \(r_1\) is globally stable. \(\square \)
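The equilibrium structure described in the proposition can be checked numerically by locating the roots of the cubic f in (0, 1). A minimal sketch (the stubborn fractions in the usage note are illustrative):

```python
def f(x, g0, g1):
    """Drift of the mean field with stubborn fractions g0, g1 (K = 1)."""
    c = 1 - g0 - g1
    return (1 - x) * (c * x + g1)**2 - x * (c * (1 - x) + g0)**2

def equilibria(g0, g1, grid=10000, tol=1e-12):
    """Equilibrium points of the mean field in (0, 1): sign changes of
    the cubic f on a grid, refined by bisection."""
    roots = []
    for k in range(grid):
        a, b = k / grid, (k + 1) / grid
        if f(a, g0, g1) == 0 and a > 0:
            roots.append(a)
        elif f(a, g0, g1) * f(b, g0, g1) < 0:
            while b - a > tol:
                m = (a + b) / 2
                a, b = (m, b) if f(a, g0, g1) * f(m, g0, g1) > 0 else (a, m)
            roots.append((a + b) / 2)
    return roots
```

For equal stubborn fractions \(\gamma _0=\gamma _1=0.1\) this finds three equilibria, with the middle (unstable) one at 1/2 by symmetry; for \(\gamma _0=0.3\), \(\gamma _1=0.05\) only one equilibrium survives.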
In the next proposition, we provide the conditions on \(\gamma _0\) and \(\gamma _1\) for which there exist multiple stable equilibrium points of the mean field \(x(\cdot )\).
Proposition 2
There exist two distinct stable equilibrium points of the mean field \(x(\cdot )\) in (0, 1) if and only if

1.
\(D(\gamma _0,\gamma _1)=(\gamma _0-\gamma _1)^2+3(1-2\gamma _0-2\gamma _1) > 0\)

2.
\(0< z_1, z_2 < 1\), where
$$\begin{aligned} z_1&= \frac{(3-\gamma _0-5\gamma _1)+ \sqrt{D(\gamma _0,\gamma _1)}}{6(1-\gamma _0-\gamma _1)}, \end{aligned}$$(20)$$\begin{aligned} z_2&= \frac{(3-\gamma _0-5\gamma _1)- \sqrt{D(\gamma _0,\gamma _1)}}{6(1-\gamma _0-\gamma _1)}. \end{aligned}$$(21) 
3.
\(f(z_1)f(z_2) \le 0\), where \(f(x)=(1-x)[(1-\gamma _0-\gamma _1)x+\gamma _1]^2-x[(1-\gamma _0-\gamma _1)(1-x)+\gamma _0]^2\).
If any one of the above conditions is not satisfied then \(x(\cdot )\) has a unique, globally stable equilibrium point in (0, 1).
Proof
From Proposition 1, we have seen that \(x(\cdot )\) has two stable equilibrium points in (0, 1) if and only if \(f(x)=0\) has three real roots in (0, 1) among which at least two are distinct. This happens if and only if \(f'(x)=0\) has two distinct real roots \(z_1, z_2\) in the interval (0, 1) and \(f(z_1) f(z_2) \le 0\). Since \(f'(x)\) is a quadratic polynomial in x, the above conditions are satisfied if and only if

1.
The discriminant of \(f'(x)=0\) is positive. This corresponds to the first condition of the proposition.

2.
The two roots \(z_1,z_2\) of \(f'(x) =0\) must lie in (0, 1). This corresponds to the second condition of the proposition.

3.
\(f(z_1) f(z_2) \le 0\). This is the third condition of the proposition.
Clearly, if any one of the above conditions is not satisfied, then \(x(\cdot )\) has a unique equilibrium point in (0, 1). According to Proposition 1 this equilibrium point must be globally stable. \(\square \)
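The three conditions of the proposition are easy to evaluate programmatically. The sketch below does so for two illustrative parameter pairs: the first is the bistable setting used in Sect. 5.1, while the second violates condition 1 (the discriminant is negative there) and hence yields a unique equilibrium.

```python
from math import sqrt

def f(x, g0, g1):
    """f(x) from condition 3 of Proposition 2."""
    a = 1.0 - g0 - g1
    return (1 - x) * (a * x + g1) ** 2 - x * (a * (1 - x) + g0) ** 2

def has_two_stable_equilibria(g0, g1):
    """Evaluate conditions 1-3 of Proposition 2."""
    D = (g0 - g1) ** 2 + 3 * (1 - 2 * g0 - 2 * g1)            # condition 1
    if D <= 0:
        return False
    z1 = ((3 - g0 - 5 * g1) + sqrt(D)) / (6 * (1 - g0 - g1))  # (20)
    z2 = ((3 - g0 - 5 * g1) - sqrt(D)) / (6 * (1 - g0 - g1))  # (21)
    if not (0 < z1 < 1 and 0 < z2 < 1):                       # condition 2
        return False
    return f(z1, g0, g1) * f(z2, g0, g1) <= 0                 # condition 3

bistable = has_two_stable_equilibria(0.2, 0.2)    # setting of Sect. 5.1
unistable = has_two_stable_equilibria(0.2, 0.45)  # D < 0: unique equilibrium
```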
Hence, depending on the values of \(\gamma _0\) and \(\gamma _1\), there may exist multiple stable equilibrium points of the mean field \(x(\cdot )\). However, for every finite N, the process \(x^{(N)}(\cdot )\) has a unique stationary distribution \(\pi _N\) (since it is irreducible on a finite state space). In the next result, we establish that any limit point of the sequence of stationary probability distributions \((\pi _N)_N\) is a convex combination of the Dirac measures concentrated on the equilibrium points of the mean field \(x(\cdot )\) in [0, 1].
Theorem 6
Any limit point of the sequence of probability measures \((\pi _N)_N\) is a convex combination of the Dirac measures concentrated on the equilibrium points of \(x(\cdot )\) in [0, 1]. In particular, if there exists a unique equilibrium point r of \(x(\cdot )\) in [0, 1] then \(\pi _N \Rightarrow \delta _r\), where \(\delta _{r}\) denotes the Dirac measure concentrated at the point r.
Proof
We first note that since the sequence of probability measures \((\pi _N)_N\) is defined on the compact space [0, 1], it must be tight. Hence, Prokhorov’s theorem implies that \((\pi _N)_N\) is relatively compact. Let \(\pi \) be any limit point of the sequence \((\pi _N)_N\). Then by the mean field convergence result we know that \(\pi \) must be an invariant distribution of the maps \(\alpha \mapsto x(t,\alpha )\) for all \(t \ge 0\), i.e., \(\int \varphi (x(t,\alpha ))d\pi (\alpha )=\int \varphi (\alpha )d\pi (\alpha )\), for all \(t \ge 0\), and all continuous (and hence bounded) functions \(\varphi : [0,1] \mapsto \mathbb {R}\). In the above, \(x(t,\alpha )\) denotes the process \(x(\cdot )\) started at \(x(0)=\alpha \). Hence we have
The second equality follows from the first by the dominated convergence theorem and the continuity of \(\varphi \). Now, let \(r_1, r_2\), and \(r_3\) denote the three equilibrium points of the mean field \(x(\cdot )\). Hence, by Proposition 1, we have that for each \(\alpha \in [0,1]\), \(\varphi (\lim _{t \rightarrow \infty }x(t,\alpha )) =\varphi (r_1) I_{N_{r_1}}(\alpha )+\varphi (r_2) I_{N_{r_2}}(\alpha )+\varphi (r_3) I_{N_{r_3}}(\alpha )\), where for \(i=1,2,3\), \(N_{r_i} \subseteq [0,1]\) denotes the set of initial conditions \(x(0)\) for which \(x(t) \rightarrow r_i\) as \(t \rightarrow \infty \), and I denotes the indicator function. Hence, by (23), we have that for all continuous functions \(\varphi : [0,1] \mapsto \mathbb {R}\)
This proves that \(\pi \) must be of the form \(\pi =c_1 \delta _{r_1}+c_2 \delta _{r_2}+c_3 \delta _{r_3}\), where \(c_1, c_2, c_3 \in [0,1]\) are such that \(c_1+c_2+c_3=1\). This completes the proof. \(\square \)
Thus, according to the above theorem, if there exists a unique equilibrium point of the process \(x(\cdot )\) in [0,1], then the sequence of stationary distributions \((\pi _N)_N\) concentrates on that equilibrium point as \(N \rightarrow \infty \). In other words, for large N, the fraction of nonstubborn agents having opinion \(\{1\}\) (at equilibrium) will approximately be equal to the unique equilibrium point of the mean field.
5.1 Simulation Results
In Fig. 5, we plot the equilibrium point of \(x(\cdot )\) (when it is unique) as a function of the fraction \(\gamma _1\) of agents having opinion \(\{1\}\) who are stubborn keeping the fraction \(\gamma _0\) of stubborn agents having opinion \(\{0\}\) fixed. We choose the parameter values so that there exists a unique equilibrium point of \(x(\cdot )\) in [0, 1] (such parameter settings can be obtained using the conditions of Proposition 2). We see that as \(\gamma _1\) is increased in the range \((0,1\gamma _0)\), the equilibrium point shifts closer to unity. This is expected since increasing the fraction of stubborn agents with opinion \(\{1\}\) increases the probability with which a nonstubborn agent samples an agent with opinion \(\{1\}\) at an update instant.
If there exist multiple equilibrium points of the process \(x(\cdot )\), then the convergence \(x^{(N)}(\cdot ) \Rightarrow x(\cdot )\) implies that, at steady state, the process \(x^{(N)}(\cdot )\) spends long intervals near one of the stable equilibrium points of \(x(\cdot )\). Then, due to some rare sequence of events, it passes through the unstable equilibrium point and reaches a region corresponding to the other stable equilibrium point of \(x(\cdot )\). These fluctuations repeat indefinitely, giving the process \(x^{(N)}(\cdot )\) a unique stationary distribution. This behaviour is formally known as metastability.
To demonstrate metastability, we simulate a network with \(N=100\) agents and \(\gamma _0=\gamma _1=0.2\). For the above parameters, the mean field \(x(\cdot )\) has two stable equilibrium points at 0.127322 and 0.872678. In Fig. 6, we show the sample path of the process \(x^{(N)}(\cdot )\). We see that at steady state the process switches back and forth between regions corresponding to the stable equilibrium points of \(x(\cdot )\). This provides numerical evidence of the metastable behavior of the finite system.
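The bimodality behind this metastable behaviour can also be seen without long simulations by computing the stationary distribution of a finite birth-death chain exactly via detailed balance. The jump rates below are a sketch: they are taken proportional to the birth and death terms of the drift f (the \(K=1\) case), which is an assumption made for illustration rather than the paper's exact finite-N dynamics.

```python
from math import exp, log

# Assumed jump rates (a sketch): proportional to the birth/death terms of f, K = 1.
N, g0, g1 = 100, 0.2, 0.2
a = 1.0 - g0 - g1

def up(n):    # n -> n+1: a 0-opinion agent samples two opinion-1 agents
    x = n / N
    return (1 - x) * (a * x + g1) ** 2

def down(n):  # n -> n-1: a 1-opinion agent samples two opinion-0 agents
    x = n / N
    return x * (a * (1 - x) + g0) ** 2

# Detailed balance for the birth-death chain: pi(n) up(n) = pi(n+1) down(n+1).
logpi = [0.0]
for n in range(1, N + 1):
    logpi.append(logpi[-1] + log(up(n - 1)) - log(down(n)))
m = max(logpi)
w = [exp(v - m) for v in logpi]      # shift before exponentiating for stability
Z = sum(w)
pi = [v / Z for v in w]

n_lo = max(range(0, N // 2), key=lambda n: pi[n])      # lower mode
n_hi = max(range(N // 2, N + 1), key=lambda n: pi[n])  # upper mode
```

The resulting distribution has modes near the stable points 0.1273 and 0.8727 quoted above and is exponentially small near the unstable point 0.5, which is why the switches visible in Fig. 6 are rare events.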
6 Proof of Theorem 1
Let \(T=T_0\wedge T_N\) denote the random time to reach consensus. Then we have
where \(Z_k\) denotes the number of visits to state k before absorption and \(M_{k,j}\) denotes the time spent in the \(j^{\text {th}}\) visit to state k. Clearly, the random variables \(Z_k\) and \((M_{k,j})_{j\ge 1}\) are independent with each \(M_{k,j}\) being an exponential random variable with rate \((q_0+q_1)k(N-k)/N\). Hence, using Wald’s identity we have
We now proceed to find lower and upper bounds of \(t_N(\alpha )\).
Let \(A=\left\{ {\omega : T_N(\omega ) < T_0(\omega )}\right\} \) denote the event that the Markov chain gets absorbed in state N. We have
Lower bound of \(t_N(\alpha )\): We first obtain a lower bound for \(t_N(\alpha )\). Clearly, we have the following
Using the above in (29) we have
where \(\mathbb {1}_{\varOmega }\) denotes the indicator function for the set \(\varOmega \). Using the above in (28), we have
Upper bound for \(t_{N}(\alpha )\): We first obtain an upper bound on \({\mathbb {E}}_x\left[ {Z_k;A}\right] \) for \(k \ge x\) with any \(0< x < N\). Given A, let \(\zeta _k\) denote the number of times the embedded chain \({{\tilde{X}}}^{(N)}\) jumps from k to \(k-1\) before absorption. It is easy to observe that conditioned on A, the embedded chain \({{\tilde{X}}}^{(N)}\) is a Markov chain with jump probabilities given by
where equality (a) follows from Lemma 1. Furthermore, we have
The above relationship follows from the facts that (i) the states \(k \ge x\) are visited at least once and (ii) the number of visits to state k is the sum of the numbers of jumps of \(\tilde{X}^{(N)}\) to the left and to the right from state k.
Given A, we must have \(\zeta _N=0\). Let \(\xi _{l,k}\) denote the random number of left-jumps from state k between the \(l^{\text {th}}\) and \((l+1)^{\text {th}}\) left-jumps from state \(k+1\). Then \((\xi _{l,k})_{l\ge 0}\) are i.i.d. with geometric distribution having mean \(p^{A}_{k,k-1}/p^{A}_{k,k+1}\). Moreover, we have the following recursion
(the above sum starts from \(l=0\) because for \(k \ge x\) left-jumps from k can occur even before the chain visits \(k+1\) for the first time). Thus, we see that \((\zeta _k)_{x\le k \le N}\) forms a branching process with immigration of one individual in each generation. Applying Wald’s identity to solve the above recursion, we have for \(x \le k \le N-1\)
where equality (a) follows from (33). Taking expectation in (34) and substituting (36), we obtain that for \(x \le k \le N-1\)
But we also have
which provides the required bound on \({\mathbb {E}}_x\left[ {Z_k; A}\right] \). We note that the above bound is independent of x. In particular, it is true when \(x=\left\lfloor {\alpha N}\right\rfloor \) and \(k \ge \left\lfloor {\alpha N}\right\rfloor \).
For \(1 \le k < x\) we have
where the equality (a) follows from the Markov property and inequality (b) follows from (37). Hence, combining all the results above we have that for all \(0< k < N\)
Using similar arguments for the process conditioned on \(A^c\), it follows that for any \(0<x < N\) and any \(0 < k \le x\) we have
Furthermore, for \(N> k > x\) we have
Combining all the above results, we have \({\mathbb {E}}_{\left\lfloor {\alpha N}\right\rfloor }\left[ {Z_k}\right] \le (1+r)/(1-r)\) for all \(0< k < N\). Hence from (28) we obtain
which completes the proof.
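The \(\varTheta (\log N)\) bound can be checked by direct Monte Carlo simulation. The sketch below simulates the embedded chain (up with probability \(q_0/(q_0+q_1)\)) and, instead of sampling exponential holding times, accumulates the mean sojourn time \(N/((q_0+q_1)k(N-k))\) of each visited state, which leaves the estimate of the mean consensus time unchanged; the parameter values are illustrative.

```python
import random

def mean_consensus_time(N, q0, q1, alpha, runs, rng):
    """Estimate E[T] for the biased voter model on the complete graph."""
    p_up = q0 / (q0 + q1)
    total = 0.0
    for _ in range(runs):
        k, t = int(alpha * N), 0.0
        while 0 < k < N:
            t += N / ((q0 + q1) * k * (N - k))   # mean sojourn in state k
            k += 1 if rng.random() < p_up else -1
        total += t
    return total / runs

rng = random.Random(1)
t_small = mean_consensus_time(100, 0.6, 0.4, 0.5, 60, rng)
t_big = mean_consensus_time(3200, 0.6, 0.4, 0.5, 60, rng)
```

Increasing N by a factor of 32 should roughly scale the mean consensus time by \(\log 3200/\log 100 \approx 1.75\), far below the 32-fold increase a linear law would give.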
7 Proof of Theorem 2
The process \(x^{(N)}(\cdot )\) jumps from the state x to the state \(x+1/N\) when one of the \(N(1-x)\) agents having opinion \(\{0\}\) updates (with probability \(q_0\)) its opinion by interacting with an agent with opinion \(\{1\}\). Since the agents update their opinions at points of independent unit rate Poisson processes, the rate at which one of the \(N(1-x)\) agents having opinion \(\{0\}\) decides to update its opinion is \(N(1-x)q_0\). The probability with which the updating agent interacts with an agent with opinion \(\{1\}\) is x. Hence, the total rate of transition from x to \(x+1/N\) is given by \(r(x \rightarrow x+1/N)= {q_0 N x(1-x)}\). Similarly, the rate of transition from x to \(x-1/N\) is given by \(r(x \rightarrow x-1/N)= {q_1 N x(1-x)}\). From the above transition rates it can be easily seen that the generator of the process \(x^{(N)}(\cdot )\) converges uniformly as \(N \rightarrow \infty \) to the generator of the deterministic process \(x(\cdot )\) defined by (5). From the classical results (see e.g., Kurtz [22]), the theorem follows.
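Subtracting the down-rate from the up-rate and dividing by N gives the drift of \(x^{(N)}\), which suggests that the limiting ODE (5) is of logistic type; under that reading (the exact statement of (5) appears earlier in the paper) it integrates in closed form:

```latex
\dot{x}(t) = q_0\,x(t)\bigl(1-x(t)\bigr) - q_1\,x(t)\bigl(1-x(t)\bigr)
           = (q_0 - q_1)\,x(t)\bigl(1-x(t)\bigr),
\qquad
x(t) = \frac{x(0)\,e^{(q_0-q_1)t}}{1 - x(0) + x(0)\,e^{(q_0-q_1)t}} .
```

For \(q_0 > q_1\) the solution tends to 1 and reaches \(1-1/N\) after time roughly \(\log N/(q_0-q_1)\), consistent with the \(\varTheta (\log N)\) mean consensus time of Theorem 1.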
8 Proof of Lemma 2
We can write \(g_K(x)=\phi (\psi (x))\), where \(\psi (x)=\frac{x}{1-x}:[0,1) \rightarrow [0,\infty )\) and
Clearly, \(\psi (x):[0,1) \rightarrow [0,\infty )\) is strictly increasing. Thus, it is sufficient to show that \(\phi :(0,\infty ) \rightarrow (0,\infty )\) is also strictly increasing. Clearly, \(\phi '(t)=A(t)/(\sum _{i=1}^{K} \left( {\begin{array}{c}2K\\ i-1\end{array}}\right) t^{i})^2\), where
with
We note that in the above sum the running variable i satisfies \(i \le \max (K,j-K)\). Furthermore, from (38), we have that \(K+1 \le j \le 3K-1\). Hence, we have \(i \le \max (K,j-K) < \frac{j+1}{2}\) for any \(K \ge 1\). This implies that \(M_j > 0\) for all j satisfying \(K+1 \le j \le 3K-1\). Hence, \(\phi '(t) > 0\), \(\forall t > 0\), which implies that \(\phi (t)\) is strictly increasing in \((0,\infty )\).
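As a numerical sanity check of the lemma, the sketch below evaluates a candidate form of \(g_K\) — the ratio \((1-x)\,\mathbb {P}(\text {Bin}(2K,x)\ge K+1) / \big (x\,\mathbb {P}(\text {Bin}(2K,x)\le K-1)\big )\), which is an assumption consistent with the representation \(g_K=\phi \circ \psi \) used in the proof but not copied from the paper — and verifies on a grid that it is strictly increasing and satisfies \(g_K(x)=1/g_K(1-x)\).

```python
from math import comb

def g(K, x):
    """Candidate g_K (an assumption): ratio of 0->1 to 1->0 flip probabilities."""
    win = sum(comb(2 * K, j) * x ** j * (1 - x) ** (2 * K - j)
              for j in range(K + 1, 2 * K + 1))   # sampled majority holds 1
    lose = sum(comb(2 * K, j) * x ** j * (1 - x) ** (2 * K - j)
               for j in range(K))                 # sampled majority holds 0
    return (1 - x) * win / (x * lose)

xs = [i / 1000 for i in range(1, 1000)]
monotone = {K: all(g(K, a) < g(K, b) for a, b in zip(xs, xs[1:]))
            for K in (1, 2, 3)}
reciprocal_ok = all(abs(g(K, x) * g(K, 1 - x) - 1) < 1e-9
                    for K in (1, 2, 3) for x in (0.1, 0.3, 0.42))
```

For \(K=1\) this candidate reduces to \(x/(1-x)=\psi (x)\), which is manifestly increasing.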
9 Proof of Theorem 4
From the first step analysis of the embedded chain \(\tilde{X}^{(N)}(\cdot )\) it follows that
which upon rearranging gives
Putting \(D_N(n)=E_N(n+1)-E_N(n)\), we find that (40) reduces to a first order recursion in \(D_N(n)\) which satisfies the following relation for \(1 \le n \le N-1\)
To compute \(D_N(0)\) we use the boundary conditions \(E_N(0)=0\) and \(E_N(N)=1\), which imply that \(\sum _{n=0}^{N-1} D_N(n)=1\). Hence, we have
Thus, using \(E_N(n)=\sum _{k=0}^{n-1} D_N(k)\), we have the required expression for \(E_N(n)\) for all \(0 \le n \le N\).
It is also important to note that \(D_N\) defines a probability distribution on the set \(\left\{ {0,1,\ldots ,N1}\right\} \). Furthermore, using the monotonicity of \(g_K\) proved in Lemma 2 and (41) we have
Thus, the mode of the distribution \(D_N\) is at \(\left\lfloor {\beta N}\right\rfloor \). Now for any \(\alpha > \beta \) we choose \(\beta '\) such that \(\alpha> \beta ' > \beta \). Hence, by the monotonicity of \(g_K\) we have
Also using the monotonicity of \(g_K\) and (41) we have for any \(j \ge 1\)
where the last step follows since \(\beta 'N < \left\lfloor {\beta ' N}\right\rfloor +1\). Hence, we have
The proof for \(\alpha < \beta \) follows similarly.
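The construction of \(E_N(n)\) from the increments \(D_N(n)\) can be illustrated numerically. The sketch below assumes the first-order recursion takes the form \(D_N(n)=D_N(n-1)\,r/g_K(n/N)\) with \(r=q_1/q_0\) (consistent with the mode of \(D_N\) being at \(\lfloor \beta N\rfloor \), where \(g_K(\beta )=r\)), together with a candidate binomial form of \(g_K\); both are assumptions made for illustration.

```python
from math import comb

def g(K, x):
    """Candidate g_K (an assumption): ratio of 0->1 to 1->0 flip probabilities."""
    win = sum(comb(2 * K, j) * x ** j * (1 - x) ** (2 * K - j)
              for j in range(K + 1, 2 * K + 1))
    lose = sum(comb(2 * K, j) * x ** j * (1 - x) ** (2 * K - j)
               for j in range(K))
    return (1 - x) * win / (x * lose)

N, K, r = 200, 2, 0.8                  # r = q1/q0 < 1: bias toward opinion {1}

# Assumed form of (41): D_N(n) proportional to prod_{j=1}^{n} r/g_K(j/N).
D = [1.0]
for n in range(1, N):
    D.append(D[-1] * r / g(K, n / N))
s = sum(D)
D = [d / s for d in D]                 # D_N is a distribution on {0,...,N-1}

E = [0.0]
for n in range(N):
    E.append(E[-1] + D[n])             # E_N(n) = sum_{k=0}^{n-1} D_N(k)

# E must satisfy the first-step equations E(n) = p_n E(n+1) + (1-p_n) E(n-1),
# with p_n = g_K(n/N) / (g_K(n/N) + r).
resid = max(abs(E[n] - (g(K, n / N) * E[n + 1] + r * E[n - 1]) / (g(K, n / N) + r))
            for n in range(1, N))
```

With these parameters \(\beta \approx 0.46\), and the computed exit probability jumps from nearly 0 below \(\beta N\) to nearly 1 above it, the same sharp phase transition seen in Fig. 8b.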
10 Proof of Theorem 5
Let \(T=T_0\wedge T_N\) denote the random time to reach consensus. Then we have
where \(Z_n\) denotes the number of visits to state n before absorption and \(M_{n,j}\) denotes the time spent in the \(j^{\text {th}}\) visit to state n. Clearly, the random variables \(Z_n\) and \((M_{n,j})_{j\ge 1}\) are independent with each \(M_{n,j}\) being an exponential random variable with rate \(q(n\rightarrow n+1)+q(n \rightarrow n-1)\). Using Wald’s identity we have
Below we find lower and upper bounds of \(t_N(\alpha )\). Let \(A=\left\{ {\omega : T_N(\omega ) < T_0(\omega )}\right\} \) denote the event that the Markov chain gets absorbed in state N. We have
Lower bound of \(t_N(\alpha )\): Applying Markov’s inequality to the RHS of (9) and (11), we obtain
Furthermore, as in the case of the voter model, we have
Using (44) and the above inequalities we obtain
Upper bound for \(t_{N}(\alpha )\) From (9) and (11) we have
where \(c=q_1 \mathbb {P}_{}\left( {\text {Bin}\left( {2K,\frac{1}{2}}\right) \ge K+1}\right) \). Using the above inequalities in (44) we have
Hence, to show that \(t_N(\alpha )=\varTheta (\log N)\), it is sufficient to show that \({\mathbb {E}}_{\left\lfloor {\alpha N}\right\rfloor }\left[ {Z_n}\right] =O(1)\) for all \(1 \le n \le N-1\).
For the rest of the proof we assume \( \alpha > \beta \). The case \(\alpha < \beta \) can be handled similarly.
Let \(x=\left\lfloor {\alpha N}\right\rfloor \). We first find upper bound of \({\mathbb {E}}_{x}\left[ {Z_n;A}\right] \). Conditioned on A, the embedded chain \(\tilde{X}^{(N)}\) is a Markov chain with jump probabilities given by
We have
where equality (a) follows from (12) and Theorem 4. Inequality (b) follows from the facts that (i) \(\mathbb {P}_{n-1}\left( {A}\right) \le \mathbb {P}_{n+1}\left( {A}\right) \) and (ii) for a monotonically nonincreasing nonnegative sequence \((y_n)_{n\ge 1}\) the following inequality holds
(this follows simply by comparing the terms in the numerator with the middle \(n-1\) terms in the denominator).
Given A, let \(\zeta _n\) denote the number of times the embedded chain \({{\tilde{X}}}^{(N)}\) jumps from n to \(n-1\) before absorption. Then, as in the voter model, we have
where \(\zeta _n\) follows the recursion
with \(\zeta _N=0\) and \(\xi _{l,n}\) denoting the random number of left-jumps from state n between the \(l^{\text {th}}\) and \((l+1)^{\text {th}}\) left-jumps from state \(n+1\). Clearly \((\xi _{l,n})_{l\ge 0}\) are i.i.d. with geometric distribution having mean \(p^{A}_{n,n-1}/p^{A}_{n,n+1}\). Hence, applying Wald’s identity to solve the above recursion, we have
Now using inequality (51), monotonicity of \(g_K\), and the fact that for \(n \ge x=\left\lfloor {\alpha N}\right\rfloor > \left\lfloor {\beta N}\right\rfloor \), \(1 > r_\alpha :=r/g_K(\alpha ) \ge r/g_K(n/N)\) we have for \(n \ge x=\left\lfloor {\alpha N}\right\rfloor \)
Hence, using (52) we have for \(n \ge x=\left\lfloor {\alpha N}\right\rfloor \)
For \(n < x=\left\lfloor {\alpha N}\right\rfloor \) we have
where (a) follows from (54) and (51) and (b) follows from (55). Hence, from (52) we have for \(n < x=\left\lfloor {\alpha N}\right\rfloor \)
Similarly, conditioned on \(A^c\) we have
where \({\bar{\zeta }}_n\) denotes the number of times \({{\tilde{X}}}^{(N)}\) jumps to the right from state n given \(A^c\). Hence, \({\bar{\zeta }}_n\) follows the recursion given by
where \({\bar{\zeta }}_0=0\) and \({\bar{\xi }}_{l,n}\) denotes the random number of right-jumps from state n between the \(l^{\text {th}}\) and \((l+1)^{\text {th}}\) right-jumps from state \(n-1\) given \(A^c\). Clearly \(({\bar{\xi }}_{l,n})_{l\ge 0}\) are i.i.d. with geometric distribution having mean \(p^{A^c}_{n,n+1}/p^{A^c}_{n,n-1}\), where
As before, we have
Solving (60) using Wald’s identity we obtain
For \(1 \le n \le x\) after some simplification of (63) we obtain
We observe that for \(j \le \left\lfloor {\beta N}\right\rfloor \) we have \(\frac{g_K(j/N)}{r} \le 1\), and using the fact that \(g_K(x)=1/g_K(1-x)\) we have \(\prod _{j=1}^{N-1} \frac{g_K(j/N)}{r}=1/r^{N-1}\). Hence, using (64) for \(n \le \left\lfloor {\beta N}\right\rfloor \) we have
Furthermore, for \(\left\lfloor {\beta N}\right\rfloor < n \le x\) we have
where (a) follows from the fact that \(\frac{g_K(j/N)}{r} \ge 1\) for \(j> n > \left\lfloor {\beta N}\right\rfloor \). Hence, we have shown that \({\mathbb {E}}_{x}\left[ {{\bar{\zeta }}_n}\right] =O(1)\) for \(1 \le n \le x\). Now, using (63) and inequality (62), we have for \(x< n < N-1\) that \({\mathbb {E}}_{x}\left[ {{\bar{\zeta }}_n}\right] \le {\mathbb {E}}_{x}\left[ {{\bar{\zeta }}_x}\right] =O(1)\). Hence, from (59) we see that \({\mathbb {E}}_{x}\left[ {Z_n;A^c}\right] \le {\mathbb {E}}_{x}\left[ {Z_n\vert A^c}\right] =O(1)\), thereby completing the proof.
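As with the voter model, the \(\varTheta (\log N)\) consensus time can be probed by simulation. The rates below are a sketch of the biased majority dynamics (an updating agent flips when the sampled majority, here of size \(2K=4\), holds the opposite opinion), with illustrative parameters; they are consistent with, but not copied from, the rate definitions in the paper.

```python
import random
from math import comb

def tail(K, x):
    """P(Bin(2K, x) >= K+1): the sampled majority disagrees with a 0-opinion agent."""
    return sum(comb(2 * K, j) * x ** j * (1 - x) ** (2 * K - j)
               for j in range(K + 1, 2 * K + 1))

def mean_consensus_time(N, K, q0, q1, alpha, runs, rng):
    """Accumulate mean sojourn times along sampled paths of the embedded chain."""
    total = 0.0
    for _ in range(runs):
        n, t = int(alpha * N), 0.0
        while 0 < n < N:
            x = n / N
            up = q0 * (N - n) * tail(K, x)      # a 0-opinion agent flips to 1
            down = q1 * n * tail(K, 1 - x)      # a 1-opinion agent flips to 0
            t += 1.0 / (up + down)              # mean sojourn in state n
            n += 1 if rng.random() < up / (up + down) else -1
        total += t
    return total / runs

rng = random.Random(7)
t_small = mean_consensus_time(100, 2, 0.6, 0.4, 0.6, 40, rng)
t_big = mean_consensus_time(3200, 2, 0.6, 0.4, 0.6, 40, rng)
```

The dominant contribution comes from states near the absorbing boundary, where the total rate is of order \(N-n\); summing the sojourn times there gives the harmonic-sum \(\log N\) growth.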
11 Effects of the Network Topology
In this section, we present some numerical studies on the effects of network topology on the exit probability and mean consensus time of the voter and majority rule models. In particular, we consider connected d-regular random graphs (\(d \ge 3\)), which are known to have near-optimal expansion properties [15]. Note that for fixed d, the neighbourhood size of each agent remains constant with respect to N, as opposed to complete graphs, where it grows as \(\varTheta (N)\).
In Fig. 7a, we study the mean consensus time under the voter model (both with and without the presence of biased agents) for 3-regular random graphs. We observe that the mean consensus time scales as \(\varTheta (\log N)\) when the agents are biased (as opposed to \(\varTheta (N)\) for unbiased agents). Furthermore, in Fig. 7b, the exit probability is observed to increase exponentially to one with the increase in \(\alpha \) for the biased voter model, as opposed to a linear increase in the unbiased voter model. Hence, the observations are qualitatively similar to those in the case of complete graphs.
In Fig. 8a, we plot the mean consensus time for 3-regular random graphs under the majority rule model as a function of N for different values of \(\alpha \). We observe that the mean consensus time grows as \(\varTheta (\log N)\). In Fig. 8b, we plot the exit probability for 3-regular random graphs under the majority rule model as a function of \(\alpha \) for different values of N. We observe that in each case a phase transition occurs near \(\alpha =0.2\) and the transition becomes sharper with the increase in N. Again, the results are qualitatively similar to those obtained for complete graphs.
Hence, the results lead us to conjecture that the asymptotic behaviour of d-regular random graphs under the biased voter and majority rule models is independent of the neighbourhood size of each agent as long as \(d \ge 3\).
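The conjecture can be probed with a small pure-Python experiment: build a random 3-regular graph via the configuration model (rejecting pairings until they are simple and connected) and run the biased voter rule on it. The update rule below (an agent copies a sampled disagreeing neighbour with probability \(q_0\) if it holds opinion 0 and \(q_1\) if it holds opinion 1) is a sketch of the biased dynamics with illustrative parameters.

```python
import random

def random_3_regular(n, rng):
    """Configuration model with rejection: a simple, connected 3-regular graph."""
    assert n % 2 == 0                       # 3n stubs must pair up evenly
    while True:
        stubs = [v for v in range(n) for _ in range(3)]
        rng.shuffle(stubs)
        edges, ok = set(), True
        for u, v in zip(stubs[::2], stubs[1::2]):
            if u == v or (u, v) in edges or (v, u) in edges:
                ok = False                  # self-loop or multi-edge: reject
                break
            edges.add((u, v))
        if not ok:
            continue
        adj = {v: [] for v in range(n)}
        for u, v in edges:
            adj[u].append(v)
            adj[v].append(u)
        seen, stack = {0}, [0]              # connectivity check (DFS)
        while stack:
            for w in adj[stack.pop()]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        if len(seen) == n:
            return adj

def biased_voter_consensus(adj, q0, q1, rng, cap=200_000):
    """Run the biased voter rule; True if consensus on {1}, None if no consensus."""
    n = len(adj)
    op = [1] * (n // 2) + [0] * (n - n // 2)   # alpha = 1/2 initial split
    rng.shuffle(op)
    ones = sum(op)
    for _ in range(cap):
        if ones == 0 or ones == n:
            return ones == n
        i = rng.randrange(n)                   # updating agent
        j = rng.choice(adj[i])                 # sampled neighbour
        if op[i] != op[j] and rng.random() < (q0 if op[i] == 0 else q1):
            ones += 1 if op[j] == 1 else -1
            op[i] = op[j]
    return None

rng = random.Random(3)
adj = random_3_regular(60, rng)
results = [biased_voter_consensus(adj, 0.8, 0.2, rng) for _ in range(20)]
```

With a strong bias towards the preferred opinion, essentially all runs reach consensus on \(\{1\}\), in line with the exit probabilities of Fig. 7b.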
12 Conclusion
We analysed the voter and the majority rule models of social interaction in the presence of biased and stubborn agents. We observed that for the voter model the presence of biased agents reduces the mean consensus time exponentially in comparison to the voter model with unbiased agents. For the majority rule model with biased agents, we showed that the network can reach consensus on the preferred opinion even if the preferred opinion is not initially the opinion of the majority. Finally, we analysed the majority rule model with stubborn agents and showed that the network exhibits metastability, where it fluctuates between multiple stable configurations, spending long intervals in each configuration.
Several interesting directions for future work exist. For example, analytically studying the behaviour of random d-regular graphs under the biased voter and majority rule models remains an open problem. Furthermore, the effects of the presence of more than two opinions and of time-varying update probabilities are unknown. It will also be interesting to study the network dynamics under the majority rule model for general network topologies when stubborn agents are present.
Notes
In the large N limit, sampling with or without replacement leads to the same results.
References
Agur, Z., Fraenkel, A., Klein, S.: The number of fixed points of the majority rule. Discret. Math. 70(3), 295–302 (1988)
Bandyopadhyay, A., Roy, R., Sarkar, A.: On the one dimensional learning from neighbours model. Electron. J. Probab. 15(51), 1574–1593 (2010)
Becchetti, L., Clementi, A., Natale, E.: Consensus dynamics: an overview. SIGACT News 51(1), 58–104 (2020). https://doi.org/10.1145/3388392.3388403
Chatterjee, K., Xu, S.H.: Technology diffusion by learning from neighbours. Adv. Appl. Probab. 36(2), 355–376 (2004)
Chen, P., Redner, S.: Consensus formation in multistate majority and plurality models. J. Phys. A 38(33), 7239 (2005)
Chen, P., Redner, S.: Majority rule dynamics in finite dimensions. Phys. Rev. E 71(3), 036101 (2005)
Clifford, P., Sudbury, A.: A model for spatial conflict. Biometrika 60(3), 581–588 (1973)
Cooper, C., Elsässer, R., Ono, H., Radzik, T.: Coalescing random walks and voting on connected graphs. SIAM J. Discret. Math. 27(4), 1748–1758 (2013). https://doi.org/10.1137/120900368
Cooper, C., Elsässer, R., Radzik, T.: The power of two choices in distributed voting. In: Esparza, J., Fraigniaud, P., Husfeldt, T., Koutsoupias, E. (eds.) Automata, Languages, and Programming, pp. 435–446. Springer, Berlin (2014)
Cox, J.T.: Coalescing random walks and voter model consensus times on the torus in \({\mathbb{Z}}^d\). Ann. Probab. 17(4), 1333–1366 (1989)
Cruise, J., Ganesh, A.: Probabilistic consensus via polling and majority rules. Queueing Syst. Theory Appl. 78(2), 99–120 (2014). https://doi.org/10.1007/s11134-014-9397-7
Ellison, G., Fudenberg, D.: Rules of thumb for social learning. J. Polit. Econ. 101(4), 612–643 (1993)
Feder, G., Umali, D.L.: The adoption of agricultural innovations: a review. Technol. Forecast. Soc. Change 43(3), 215–239 (1993). https://doi.org/10.1016/0040-1625(93)90053-A
Flocchini, P., Lodi, E., Luccio, F., Pagli, L., Santoro, N.: Dynamic monopolies in tori. Discret. Appl. Math. 137(2), 197–212 (2004)
Friedman, J.: A proof of Alon’s second eigenvalue conjecture and related problems. Mem. Am. Math. Soc. (2004). https://doi.org/10.1090/memo/0910
Galam, S.: Minority opinion spreading in random geometry. Eur. Phys. J. B 25(4), 403–406 (2002)
Gast, N., Gaujal, B.: Computing absorbing times via fluid approximations. Adv. Appl. Probab. 49(3), 768 (2017)
Hassin, Y., Peleg, D.: Distributed probabilistic polling and applications to proportionate agreement. Inf. Comput. 171(2), 248–268 (2002). https://doi.org/10.1006/inco.2001.3088
Holley, R.A., Liggett, T.M.: Ergodic theorems for weakly interacting infinite systems and the voter model. Ann. Probab. 3(4), 643–663 (1975)
Krapivsky, P.L.: Kinetics of monomer–monomer surface catalytic reactions. Phys. Rev. A 45(2), 1067 (1992)
Krapivsky, P.L., Redner, S.: Dynamics of majority rule in twostate interacting spin systems. Phys. Rev. Lett. 90(23), 238701 (2003)
Kurtz, T.G.: Solutions of ordinary differential equations as limits of pure jump Markov processes. J. Appl. Probab. 7(1), 49–58 (1970)
Mobilia, M.: Does a single zealot affect an infinite group of voters? Phys. Rev. Lett. 91, 028701 (2003). https://doi.org/10.1103/PhysRevLett.91.028701
Mobilia, M., Petersen, A., Redner, S.: On the role of zealotry in the voter model. J. Stat. Mech. 2007(8), P08029 (2007)
Moran, G.: The rmajority vote action on 0–1 sequences. Discret. Math. 132(1), 145–174 (1994)
Mossel, E., Neeman, J., Tamuz, O.: Majority dynamics and aggregation of information in social networks. Auton. Agent. MultiAgent Syst. 28(3), 408–429 (2014)
Mukhopadhyay, A., Mazumdar, R.R., Roy, R.: Binary opinion dynamics with biased agents and agents with different degrees of stubbornness. In: 2016 28th International Teletraffic Congress (ITC 28), vol. 01, pp. 261–269 (2016). https://doi.org/10.1109/ITC28.2016.143
Nakata, T., Imahayashi, H., Yamashita, M.: Probabilistic local majority voting for the agreement problem on finite graphs. In: Asano, T., Imai, H., Lee, D.T., Nakano, S.I., Tokuyama, T. (eds.) Computing and Combinatorics, pp. 330–338. Springer, Berlin (1999)
Norton, J.A., Bass, F.M.: A diffusion theory model of adoption and substitution for successive generations of hightechnology products. Manage. Sci. 33(9), 1069–1086 (1987). https://doi.org/10.1287/mnsc.33.9.1069
Perron, E., Vasudevan, D., Vojnovic, M.: Using three states for binary consensus on complete graphs. IEEE INFOCOM 2009, 2527–2535 (2009). https://doi.org/10.1109/INFCOM.2009.5062181
Sood, V., Redner, S.: Voter model on heterogeneous graphs. Phys. Rev. Lett. 94(17), 178701 (2005)
Spitzer, F.: Principles of Random Walk. Springer, New York (1964)
Yildiz, M.E., Pagliary, R., Ozdaglar, A., Scaglione, A.: Voting models in random networks. In: Information Theory and Applications Workshop (ITA), pp. 1–7 (2010)
Yildiz, E., Ozdaglar, A., Acemoglu, D., Saberi, A., Scaglione, A.: Binary opinion dynamics with stubborn agents. ACM Trans. Econ. Comput. 1(4), 1–30 (2013)
Acknowledgements
RR acknowledges support from the University of Waterloo during various visits and also support from the Matrics Grant MTR/2017/000141.
Communicated by Deepak Dhar.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Mukhopadhyay, A., Mazumdar, R.R. & Roy, R. Voter and Majority Dynamics with Biased and Stubborn Agents. J Stat Phys 181, 1239–1265 (2020). https://doi.org/10.1007/s10955-020-02625-w