Voter and Majority Dynamics with Biased and Stubborn Agents

We study binary opinion dynamics in a fully connected network of interacting agents. The agents are assumed to interact according to one of the following rules: (1) Voter rule: An updating agent simply copies the opinion of another randomly sampled agent; (2) Majority rule: An updating agent samples multiple agents and adopts the majority opinion in the selected group. We focus on the scenario where the agents are biased towards one of the opinions, called the {\em preferred opinion}. Using suitably constructed branching processes, we show that under both rules the mean time to reach consensus is $\Theta(\log N)$, where $N$ is the number of agents in the network. Furthermore, under the majority rule model, we show that consensus can be achieved on the preferred opinion with high probability even if it is initially the opinion of the minority. We also study the majority rule model when stubborn agents with fixed opinions are present. We find the stationary distribution of opinions in the network in the large system limit using mean field techniques.


Introduction
The problem of social learning [4,13] is concerned with the rate at which social agents, interacting under simple rules, can learn/discover the true utilities of their choices, opinions or technologies. In this context, the two central questions we study are: (1) Can social agents learn/adopt the better technology/opinion through simple rules of interaction, and if so, how fast? and (2) What are the effects of the presence of stubborn agents (having fixed opinions) on the dynamics of opinion diffusion?
We consider a setting where the choices available to each agent are binary and are represented by {0} and {1} [2,4]. These are referred to as opinions of the agents. The interactions among the agents are modelled using two simple rules: the voter rule [7,10,18] and the majority rule [5,6,11,20]. In the voter rule, an agent randomly samples one of its neighbours at an instant when it decides to update its opinion. The updating agent then adopts the opinion of the sampled neighbour. This simple rule captures the tendency of an individual to mimic other individuals in the society. In the majority rule, instead of sampling a single agent, an updating agent samples 2K (K ≥ 1) neighbours and adopts the opinion of the majority of the sampled neighbours (including itself). This rule captures the tendency of the individuals to conform with the majority opinion in their local neighbourhoods.
Related literature: The voter model and its variants have been studied extensively (see [3] for a recent survey) for different network topologies, e.g., finite integer lattices in different dimensions [10,19], complete graphs with three states [27], heterogeneous graphs [28], random d-regular graphs [9], Erdos-Renyi random graphs, and random geometric graphs [32]. It is known [17,26] that if the underlying graph is connected, then the classical voter rule leads to a consensus where all agents adopt the same opinion. Furthermore, if A is the set of all agents initially having opinion i ∈ {0, 1}, then the probability that consensus is achieved on opinion i (referred to as the exit probability to opinion i) is given by d(A)/2m, where d(A) is the sum of the degrees of the vertices in A and m is the total number of edges in the graph. It is also known that for most network topologies the mean consensus time is Ω(N), where N is the total number of agents. In [22,31], the voter model was studied in the presence of stubborn individuals who do not update their opinions. In such a scenario, the network cannot reach a consensus because stubborn agents holding both opinions are present. Using coalescing random walk techniques, the steady-state average opinion in the network and the variance of opinions were computed. The majority rule model was first studied in [15], where it was assumed that, at every iteration, groups of random sizes are formed by the agents. Within each group, the majority opinion is adopted by all the agents. Similar models with fixed (odd) group size have been considered in [5,20]. A more general majority rule based model is analysed in [11] for complete graphs. It has been shown that with high probability (probability tending to one as N → ∞) consensus is achieved on the opinion with the initial majority and that the mean time to reach consensus is Θ(log N).
The majority rule model is studied for random d-regular graphs on N vertices in [8]. It is shown that when the initial imbalance between the two opinions is above $c\sqrt{1/d + d/N}$, for some constant c > 0, then consensus is achieved in O(log N) time on the initial majority opinion with high probability. A deterministic version of the majority rule model, where an agent, instead of randomly sampling some of its neighbours, adopts the majority opinion among all its neighbours, is considered in [1,14,23,24]. In such models, given the graph structure of the network, the opinions of the agents at any time are a deterministic function of the initial opinions of the agents. The interest there is to find the initial distributions of opinions for which the network converges to a specific absorbing state.
Contributions: In all the prior works on the voter and the majority rule models, it is assumed that the opinions or technologies are indistinguishable. However, in a social learning model, one opinion/technology may be inherently 'better' than the other, yielding more utility to individuals choosing the better option in a round of update. As a result, individuals currently using the better technology will update less frequently than individuals with the worse technology. To model this scenario, we assume that an agent having opinion i ∈ {0, 1} performs an update with probability $q_i$. By choosing $q_1 < q_0$, we make the agents 'biased' towards opinion {1}, which is referred to as the preferred opinion. We study the opinion dynamics under both the voter and the majority rules when the agents are biased. We focus on the case where the underlying graph is complete, which closely models situations where agents are mobile and can therefore sample any other agent depending on their current neighbourhood.
For the voter model with biased agents, we show that the probability of reaching consensus on the non-preferred opinion decreases exponentially with the network size. Furthermore, the mean consensus time is shown to be logarithmic in the network size. This is in sharp contrast to the voter model with unbiased agents where the probability of reaching consensus on any opinion remains constant and the mean consensus time grows linearly with the network size. Therefore, in the biased voter model consensus is achieved exponentially faster than that in the unbiased voter model.
For the majority rule model with biased agents, we show that the network reaches consensus on the preferred opinion with high probability only if the initial fraction of agents with the preferred opinion is above a certain threshold determined by the biases of the agents. Furthermore, as in the voter model, the mean consensus time is shown to be logarithmic in the network size. Our results generalise existing results on the majority rule model with unbiased agents where it is known that consensus is achieved (with high probability) on the opinion with the initial majority and the mean consensus time is logarithmic in the network size. However, existing proofs for the unbiased majority rule model [11,20] cannot be extended to the biased case as they crucially rely on the fact that opinions are indistinguishable. We use suitably constructed branching processes and monotonicity of certain rational polynomials to prove the results for the biased model.
We also study the majority rule model in the presence of agents having fixed opinions at all times. These agents are referred to as 'stubborn' agents. A similar study of the voter model in the presence of stubborn agents was done in [31]. In the presence of stubborn agents, the network cannot reach a consensus state. The key objective, therefore, is to study the stationary distribution of opinions among the non-stubborn agents. In [31], coalescing random walk techniques were used to study this stationary distribution of opinions. However, such techniques do not apply to majority rule dynamics. We analyse the network dynamics in the large scale limit using mean field techniques. In particular, we show that, depending on the proportions of stubborn agents, the mean field can have either a single equilibrium point or multiple equilibrium points. If multiple equilibrium points are present, the network exhibits metastability, in which it switches between stable configurations, spending a long time in each configuration.
An earlier version of this work [25] contained some of the results of this paper and an analysis of the majority rule model for K = 1. However, only sketches of the proofs were provided. In the current paper, we provide rigorous proofs of all results and a more general analysis of the majority rule model (for K ≥ 1).
Organisation: The rest of the paper is organised as follows. In Section 2, we introduce the model with biased agents. In Sections 3 and 4, we state the main results for the voter model and the majority rule model with biased agents, respectively. Section 5 analyses the majority rule model with stubborn agents. In Sections 6-10, we provide the detailed proofs of the main results on the voter and majority rule models with biased agents. Finally, the paper is concluded in Section 11.

Model with biased agents
We consider a network of N social agents. The opinion of each agent is assumed to be a binary variable taking values in the set {0, 1}. Initially, every agent adopts one of the two opinions. Each agent considers updating its opinion at the points of an independent unit rate Poisson point process associated with itself. At such a point, the agent either updates its opinion or retains its past opinion: an agent with opinion i ∈ {0, 1} updates its opinion with probability $q_i \in (0, 1)$ and retains its opinion with probability $p_i = 1 - q_i$. To make the agents 'biased' towards opinion {1}, we assume that $q_0 > q_1$, which implies that an agent with opinion {1} updates its opinion less frequently than an agent with opinion {0}.
In case the agent decides to update its opinion, it does so using either the voter rule or the majority rule. In the voter rule, an updating agent samples an agent uniformly at random (with replacement) from the N agents in the network and adopts the opinion of the sampled agent. In the majority rule, an updating agent samples 2K agents (K ≥ 1) uniformly at random (with replacement) and adopts the opinion of the majority of the 2K + 1 agents including itself. The results derived in this paper can be extended to the case where the updating agent samples agents from a random group of size O(N). However, for simplicity, we only focus on the case where sampling occurs from the whole population.
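To make the two update rules concrete, the following minimal sketch implements a single update attempt. The function and parameter names are our own, and the default bias values are purely illustrative; this is a sketch of the dynamics described above, not code from the paper.

```python
import random

def update(opinions, i, rule="voter", K=1, q=(0.9, 0.5)):
    """One update attempt by agent i. q = (q0, q1) are the update
    probabilities for opinions 0 and 1 (illustrative values)."""
    if random.random() >= q[opinions[i]]:
        return  # agent retains its opinion (probability p_i = 1 - q_i)
    N = len(opinions)
    if rule == "voter":
        # copy the opinion of a uniformly sampled agent (with replacement)
        opinions[i] = opinions[random.randrange(N)]
    else:
        # majority among the 2K sampled agents plus the agent itself
        group = [opinions[random.randrange(N)] for _ in range(2 * K)]
        ones = sum(group) + opinions[i]
        opinions[i] = 1 if ones >= K + 1 else 0
```

Note that an agent with opinion 0 flips under the majority rule exactly when at least K + 1 of its 2K samples hold opinion 1, which is the event driving the jump rates used later.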

Main results for the voter model with biased agents
We first consider the voter model with biased agents. In this case, clearly, the network reaches consensus in a finite time with probability 1. Our interest is to find out the probability with which consensus is achieved on the preferred opinion {1}. This is referred to as the exit probability of the network. We also intend to characterise the average time to reach the consensus.
The case $q_1 = q_0 = 1$ is referred to as the voter model with unbiased agents, which has been analysed in [7,18]. It is known that for unbiased agents the probability with which consensus is reached on a particular opinion is simply equal to the initial fraction α of agents having that opinion, and the expected time to reach consensus for large N is approximately given by $N\left(\alpha \ln \frac{1}{\alpha} + (1-\alpha)\ln\frac{1}{1-\alpha}\right)$. We now proceed to characterise these quantities for the voter model with biased agents.
Let $X^{(N)}(t)$ denote the number of agents with opinion {1} at time $t \ge 0$. Clearly, $X^{(N)}(\cdot)$ is a Markov process on the state space $\{0, 1, \ldots, N\}$, with absorbing states $0$ and $N$. The transition rates from state $k$ are given by
$$q(k \to k+1) = q_0\,\frac{k(N-k)}{N}, \qquad q(k \to k-1) = q_1\,\frac{k(N-k)}{N},$$
where $q(i \to j)$ denotes the rate of transition from state $i$ to state $j$. The embedded discrete-time Markov chain $\hat{X}^{(N)}$ for $X^{(N)}$ is a one-dimensional random walk on $\{0, 1, \ldots, N\}$ with jump probability $p = q_0/(q_0 + q_1)$ to the right and $q = 1 - p$ to the left. We define $r = q/p < 1$ and $\bar{r} = 1/r$. Let $T_k$ denote the first hitting time of state $k$, i.e., $T_k = \inf\{t \ge 0 : X^{(N)}(t) = k\}$. We are interested in the asymptotic behaviour of the quantities $E_N(\alpha) := \mathbb{P}_{\lfloor \alpha N \rfloor}(T_N < T_0)$ and $t_N(\alpha) := \mathbb{E}_{\lfloor \alpha N \rfloor}[T_0 \wedge T_N]$, where $\mathbb{P}_x(\cdot)$ and $\mathbb{E}_x[\cdot]$, respectively, denote the probability measure and expectation conditioned on the event $X^{(N)}(0) = x$. To characterise the above quantities, we require the following lemma, which follows from the gambler's ruin identity for one-dimensional asymmetric random walks [29].
From the above lemma it follows that $1 - E_N(\alpha) \le e^{-cN}$ for some constant c > 0 (since r < 1). Hence, the probability of having a consensus on the non-preferred opinion approaches 0 exponentially fast in N. This is unlike the voter model with unbiased agents, where the probability of having consensus on either opinion remains constant with respect to N.
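This exponential decay is easy to check numerically, assuming the standard gambler's-ruin identity $\mathbb{P}_x(T_N < T_0) = (1-r^x)/(1-r^N)$ for the embedded walk started at $x = \lfloor \alpha N \rfloor$. The parameter values below are illustrative:

```python
def exit_prob(alpha, N, q0=1.0, q1=0.5):
    """Gambler's-ruin probability that the embedded walk started at
    x = floor(alpha * N) hits N before 0, with r = q1/q0 < 1."""
    r = q1 / q0
    x = int(alpha * N)
    return (1 - r**x) / (1 - r**N)
```

For α = 0.2 and r = 0.5, the miss probability $1 - E_N(\alpha)$ is roughly $r^{\lfloor \alpha N \rfloor}$, i.e., it halves with every additional agent holding the preferred opinion.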
The following theorem characterises the mean time t N (α) to reach the consensus state starting from α fraction of agents having opinion {1}.
Hence, the above theorem shows that the mean consensus time in the biased voter model is logarithmic in the network size. This is in contrast to the voter model with unbiased agents where the mean consensus time is linear in the network size. Thus, with biased agents, the network reaches consensus exponentially faster.
We now consider the measure-valued process x (N ) = X (N ) /N , which describes the evolution of the fraction of agents with opinion {1}. We show that the following convergence takes place.
where ⇒ denotes weak convergence and x(·) is a deterministic process with initial condition x(0) = α, governed by the following differential equation:
$$\dot{x}(t) = (q_0 - q_1)\,x(t)\,(1 - x(t)). \qquad (5)$$
According to the above result, for large N, the process $x^{(N)}(\cdot)$ is well approximated by the deterministic process x(·), which is generally referred to as the mean field limit of the system. Using the mean field limit, we can approximate the mean consensus time $t_N(\alpha)$ by the time the process x(·) takes to reach the state 1 − 1/N starting from α.
Theorem 3 1. For any $x(0) \in (0, 1)$, we have $x(t) \to 1$ as $t \to \infty$.
2. Let $t(\epsilon, \alpha)$ denote the time required by the process x(·) to reach $\epsilon \in (0, 1)$ starting from $x(0) = \alpha \in (0, 1)$. Then
$$t(\epsilon, \alpha) = \frac{1}{q_0 - q_1} \ln\left(\frac{\epsilon\,(1-\alpha)}{\alpha\,(1-\epsilon)}\right).$$
In particular, for $\epsilon = 1 - 1/N$ we have $t(1 - 1/N, \alpha) = \frac{1}{q_0 - q_1} \ln\left(\frac{(N-1)(1-\alpha)}{\alpha}\right) = \Theta(\log N)$.

Proof Since $q_0 > q_1$ and $x(t) \in (0, 1)$ for all $t \ge 0$, we have from (5) that $\dot{x}(t) > 0$ for all $t \ge 0$; hence x(t) increases monotonically to 1. The second assertion follows directly by solving (5) with initial condition $x(0) = \alpha$.

Remark 1 It is worth noting here that the process x(·) does not reach 1 in finite time even though the process $x^{(N)}(\cdot)$ does reach 1 in finite time with probability 1. However, it is 'reasonable' to expect that $t_N(\alpha)$ is 'closely' approximated by $t(1 - 1/N, \alpha)$. Such approximation of the absorption time of an absorbing Markov chain using its corresponding mean field limit is common in the literature [20,27]. However, except for a few special cases, e.g. [16], there is no general theory justifying such approximations.

Simulation Results: In Figure 1, we plot the exit probability for both the unbiased ($q_0 = q_1 = 1$) and the biased ($1 = q_0 > q_1 = 0.5$) cases as functions of the number of agents N for α = 0.2. As expected from our theory, we observe that in the biased case the exit probability increases exponentially to 1 as N increases. This is in contrast to the unbiased case, where the exit probability remains constant at α for all N.
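The closed-form hitting time of the logistic mean field can be cross-checked against a direct Euler integration of the ODE $\dot{x} = (q_0 - q_1)x(1-x)$. The step size and parameter values in this sketch are illustrative:

```python
import math

def t_hit(eps, alpha, q0=1.0, q1=0.5):
    """Closed-form time for x' = (q0-q1) x (1-x) to go from alpha to eps."""
    return math.log(eps * (1 - alpha) / (alpha * (1 - eps))) / (q0 - q1)

def t_hit_euler(eps, alpha, q0=1.0, q1=0.5, dt=1e-4):
    """Forward-Euler integration of the same ODE until x reaches eps."""
    x, t = alpha, 0.0
    while x < eps:
        x += dt * (q0 - q1) * x * (1 - x)
        t += dt
    return t
```

For $\epsilon = 1 - 1/N$ the closed form grows like $\ln N / (q_0 - q_1)$, matching the logarithmic consensus-time scaling.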
In Figure 2a, we plot the mean consensus time $t_N(\alpha)$ for both the unbiased and the biased cases as a function of N for α = 0.4. We observe a good match between the estimate obtained in Theorem 3 and the simulation results. The observation also verifies the statement of Theorem 1. In Figure 2b, we plot the mean consensus time as a function of α for both the biased and the unbiased cases. The network size is kept fixed at N = 100. We observe that for the unbiased case, the consensus time increases in the range α ∈ (0, 0.5) and decreases in the range α ∈ (0.5, 1). In contrast, for the biased case, the consensus time steadily decreases as α increases. This is expected since, in the unbiased case, consensus is achieved faster on a particular opinion if the initial number of agents having that opinion is more than the initial number of agents having the other opinion. On the other hand, in the biased case, consensus is achieved with high probability on the preferred opinion, and therefore increasing the initial fraction of agents having the preferred opinion always decreases the mean consensus time.

Main results for the majority rule model with biased agents
In this section, we consider the majority rule model with biased agents. As in the voter model, it is easy to see that in this case a consensus is achieved in finite time with probability 1. We proceed to find the exit probability to opinion {1} and the mean consensus time. Let $X^{(N)}(t)$ denote the number of agents with opinion {1} at time $t \ge 0$. Clearly, $X^{(N)}(\cdot)$ is a Markov process on the state space $\{0, 1, \ldots, N\}$. The jump rates of $X^{(N)}$ from state $n$ to states $n+1$ and $n-1$ are given by
$$q(n \to n+1) = q_0\,(N-n)\,\mathbb{P}\left(\mathrm{Bin}(2K, n/N) \ge K+1\right), \qquad (9)$$
$$q(n \to n-1) = q_1\,n\,\mathbb{P}\left(\mathrm{Bin}(2K, n/N) \le K-1\right), \qquad (11)$$
respectively, where $\mathrm{Bin}(m, p)$ denotes a binomial random variable with parameters $m$ and $p$. Let $\hat{X}^{(N)}$ denote the embedded Markov chain corresponding to $X^{(N)}$. Then the jump probabilities for the embedded chain $\hat{X}^{(N)}$ are given by
$$p_{n,n+1} = \frac{g_K(n/N)}{g_K(n/N) + r}, \qquad p_{n,n-1} = \frac{r}{g_K(n/N) + r},$$
where $1 \le n \le N-1$, $r = q_1/q_0 < 1$, and $g_K : (0, 1) \to (0, \infty)$ is defined as
$$g_K(x) = \frac{(1-x)\,\mathbb{P}\left(\mathrm{Bin}(2K, x) \ge K+1\right)}{x\,\mathbb{P}\left(\mathrm{Bin}(2K, x) \le K-1\right)}.$$
With probability 1, $X^{(N)}$ gets absorbed in one of the states 0 or N in finite time. We are interested in the probability of absorption in the state N and the average time till absorption. We first state the following lemma, which is key to proving many of the results in this section.
In the following theorem, we characterise the exit probability to state N .
Theorem 4 1. Let $E_N(n)$ denote the probability that the process $X^{(N)}$ gets absorbed in state N starting from state n. Then, we have
$$E_N(n) = \frac{\sum_{k=0}^{n-1} \prod_{j=1}^{k} \left(r/g_K(j/N)\right)}{\sum_{k=0}^{N-1} \prod_{j=1}^{k} \left(r/g_K(j/N)\right)}.$$
2. Let $\beta = g_K^{-1}(r)$. Then $E_N(\lfloor \alpha N \rfloor) \to 1$ exponentially fast in N for $\alpha > \beta$, and $E_N(\lfloor \alpha N \rfloor) \to 0$ exponentially fast in N for $\alpha < \beta$.
Hence, a phase transition of the exit probability occurs at $\beta = g_K^{-1}(r)$ for all values of K ≥ 1. This implies that, even though the agents are biased towards the preferred opinion, consensus may not be obtained on the preferred opinion if the initial fraction of agents having the preferred opinion is below the threshold β. This is in contrast to the voter model, where consensus is obtained on the preferred opinion irrespective of the initial state. The threshold β can be computed by solving $g_K(\beta) = r$ using the Newton-Raphson method or other fixed point methods.
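As a sketch of such a root-finding computation, the snippet below solves $g_K(\beta) = r$ by bisection, assuming $g_K$ has the binomial form implied by the jump rates; for K = 1 this form reduces to $g_1(x) = x/(1-x)$, so the threshold is $\beta = r/(1+r) = q_1/(q_0+q_1)$, consistent with the value used in the simulation results below. All names here are our own:

```python
from math import comb

def g(x, K=1):
    """Assumed form of g_K: ratio of up- to down-flip factors."""
    up = sum(comb(2*K, j) * x**j * (1-x)**(2*K-j) for j in range(K+1, 2*K+1))
    dn = sum(comb(2*K, j) * x**j * (1-x)**(2*K-j) for j in range(0, K))
    return (1 - x) * up / (x * dn)

def beta(r, K=1, tol=1e-12):
    """Solve g_K(beta) = r by bisection (g_K is increasing on (0,1))."""
    lo, hi = tol, 1 - tol
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid, K) < r:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

By symmetry of the binomial at $x = 1/2$, $g_K(1/2) = 1$ for every K, which recovers the unbiased threshold $\beta = 1/2$ noted in Remark 2.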

Remark 2
We note that for the unbiased majority rule model we have $r = 1$ and $\beta = g_K^{-1}(r) = g_K^{-1}(1) = 1/2$. Thus, the known results [11,20] for the majority rule model with unbiased agents are recovered.
We now characterise the mean time $t_N(\alpha)$ to reach the consensus state starting from a fraction α of agents having opinion {1}. As before, we define $T_n$ to be the random time of first hitting the state n, i.e., $T_n = \inf\{t \ge 0 : X^{(N)}(t) \ge n\}$.
The theorem above shows that the mean consensus time is logarithmic in the network size. To prove the theorem, we use branching processes and the monotonicity shown in Lemma 2. Our proof does not require the indistinguishability of the opinions and is therefore more general than existing proofs for the unbiased majority rule model [11,20].
It is easy to derive the mean field limit corresponding to the empirical measure process $x^{(N)} = X^{(N)}/N$. Using the transition rates of the process $X^{(N)}$, it can be shown, as in Theorem 2, that $x^{(N)}(\cdot) \Rightarrow x(\cdot)$, where the process x(·) satisfies the initial condition x(0) = α and is governed by the following ODE:
$$\dot{x}(t) = h_K(x(t))\left(g_K(x(t)) - r\right),$$
where $h_K(x) = q_0\,x\,\mathbb{P}\left(\mathrm{Bin}(2K, x) \le K-1\right) \ge 0$. Hence, from Lemma 2, it follows that the process x(·) has three equilibrium points at 0, 1, and β, respectively. Furthermore, using the monotonicity of $g_K$ established in Lemma 2 and the non-negativity of $h_K$, we have that $\dot{x}(t) > 0$ for $x(t) > \beta$ and $\dot{x}(t) < 0$ for $x(t) < \beta$. This shows that the only stable equilibrium points of the mean field limit x(·) are 0 and 1; at β, x(·) has an unstable equilibrium point.
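The sign structure of the drift can be verified numerically. The sketch below assumes the drift is the difference of the two jump rates per agent (an assumption consistent with the rates given earlier); for the illustrative parameters $q_0 = 1$, $q_1 = 0.6$, K = 1, the drift should vanish at β = 0.375 and be negative below and positive above it:

```python
from math import comb

def binom_tail(n, p, lo, hi):
    """P(lo <= Bin(n, p) <= hi)."""
    return sum(comb(n, j) * p**j * (1-p)**(n-j) for j in range(lo, hi + 1))

def drift(x, q0=1.0, q1=0.6, K=1):
    """Assumed mean-field drift: per-agent up-rate minus down-rate."""
    up = q0 * (1 - x) * binom_tail(2*K, x, K + 1, 2*K)
    dn = q1 * x * binom_tail(2*K, x, 0, K - 1)
    return up - dn
```

The sign change at β confirms that β is the unstable equilibrium separating the basins of attraction of 0 and 1.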
Simulation Results: In Figure 3a, we plot the exit probability $E_N(\alpha)$ as a function of the total number N of agents in the network. The parameters are chosen to be $q_0 = 1$, $q_1 = 0.6$, K = 1. For this parameter setting, we can explicitly compute the threshold to be $\beta = g_K^{-1}(r) = q_1/(q_0 + q_1) = 0.375$. We observe that for α > β the exit probability increases exponentially to 1 as N increases, and for α < β the exit probability decreases exponentially to zero as N increases. This is in accordance with the assertion made in Theorem 4. Similarly, in Figure 3b, we plot the exit probability as a function of the initial fraction α of agents having opinion {1} for the same parameter setting and different values of N. The plot shows a clear phase transition at β = 0.375. The sharpness of the transition increases as N increases.
In Figure 4a, we plot the mean consensus time under the majority rule as a function of N for different values of α. As predicted by Theorem 5, we find that the mean consensus time is logarithmic in the network size. In Figure 4b, we study the mean consensus time as a function of K for $q_0 = 1$, $q_1 = 0.6$, α = 0.5, N = 50. We observe that as K increases, the mean time to reach consensus decreases. This is expected since the slope of the mean field x(t) increases with K, which leads to faster convergence to the stable equilibrium points.

Majority rule model with stubborn agents

In this section, we consider the majority rule model in the presence of 'stubborn agents'. These are agents that never update their opinions. The other agents, referred to as the non-stubborn agents, are assumed to update their opinions at all points of the Poisson processes associated with themselves. We focus on the case where the updates occur according to the majority rule. The voter model with stubborn agents was studied before in [31] using coalescing random walks. However, this technique does not apply to the majority rule model. We use mean field techniques to study the opinion dynamics under the majority rule model.
We denote by $\gamma_i$, $i \in \{0, 1\}$, the fraction of agents in the network who are stubborn and have opinion i at all times. Thus, $(1 - \gamma_0 - \gamma_1)$ is the fraction of non-stubborn agents in the network. The presence of stubborn agents prevents the network from reaching a consensus state. This is because at all times there are at least $N\gamma_0$ stubborn agents having opinion {0} and $N\gamma_1$ stubborn agents having opinion {1}. Furthermore, since each non-stubborn agent may interact with some stubborn agents at every update instant, it is always possible for a non-stubborn agent to change its opinion. Below, we characterise the equilibrium fraction of non-stubborn agents having opinion {1} in the network for large N using mean field techniques. For analytical tractability, we consider the case K = 1, i.e., when an agent samples two agents at each update instant. However, similar results hold for larger values of K.
Let $x^{(N)}(t)$ denote the fraction of non-stubborn agents having opinion {1} at time $t \ge 0$. Clearly, $x^{(N)}(\cdot)$ is a Markov process with possible jumps at the points of a rate $N(1-\gamma_0-\gamma_1)$ Poisson process. The process $x^{(N)}(\cdot)$ jumps from the state x to the state $x + 1/N(1-\gamma_0-\gamma_1)$ when one of the non-stubborn agents having opinion {0} becomes active (which happens with rate $N(1-\gamma_0-\gamma_1)(1-x)$) and samples two agents with opinion {1}. The probability of sampling an agent having opinion {1} from the entire network is $(1-\gamma_0-\gamma_1)x + \gamma_1$. Hence, the total rate at which the process transits from state x to the state $x + 1/N(1-\gamma_0-\gamma_1)$ is given by
$$N(1-\gamma_0-\gamma_1)(1-x)\left((1-\gamma_0-\gamma_1)x + \gamma_1\right)^2.$$
Similarly, the rate of the other possible transition is given by
$$N(1-\gamma_0-\gamma_1)\,x\left((1-\gamma_0-\gamma_1)(1-x) + \gamma_0\right)^2.$$
As in Theorem 2, it can be shown from the above transition rates that the process $x^{(N)}(\cdot)$ converges weakly to the mean field limit x(·), which satisfies the following differential equation:
$$\dot{x}(t) = (1-x(t))\left((1-\gamma_0-\gamma_1)x(t) + \gamma_1\right)^2 - x(t)\left((1-\gamma_0-\gamma_1)(1-x(t)) + \gamma_0\right)^2. \qquad (18)$$
We now study the equilibrium distribution $\pi_N$ of the process $x^{(N)}(\cdot)$ for large N via the equilibrium points of the mean field x(·). From (18), we see that $\dot{x}(t)$ is a cubic polynomial in x(t). Hence, the process x(·) can have at most three equilibrium points in [0, 1]. We first characterise the stability of these equilibrium points.
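The equilibrium points of the cubic drift can be located numerically by scanning for sign changes and bisecting. The sketch below assumes the drift is the difference of the two transition rates per non-stubborn agent, and uses the symmetric setting $\gamma_0 = \gamma_1 = 0.2$ as an illustration; in that case the roots found are approximately 0.1273, 0.5, and 0.8727:

```python
def drift(x, g0=0.2, g1=0.2):
    """Assumed K=1 mean-field drift with stubborn fractions g0, g1."""
    s = 1 - g0 - g1
    return (1 - x) * (s * x + g1)**2 - x * (s * (1 - x) + g0)**2

def equilibria(g0=0.2, g1=0.2, n=1000):
    """Roots of the drift in (0, 1): scan a grid, bisect sign changes."""
    xs = [(i + 0.5) / n for i in range(n)]
    roots = []
    for a, b in zip(xs, xs[1:]):
        if drift(a, g0, g1) * drift(b, g0, g1) < 0:
            for _ in range(60):  # bisection to machine precision
                m = (a + b) / 2
                if drift(a, g0, g1) * drift(m, g0, g1) <= 0:
                    b = m
                else:
                    a = m
            roots.append((a + b) / 2)
    return roots
```

The outer roots are stable (the drift changes sign from positive to negative) while the middle root is unstable, matching the stability classification established below.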

Proposition 1
The process x(·) defined by (18) has at least one equilibrium point in (0, 1). Furthermore, the number of stable equilibrium points of x(·) in (0, 1) is either two or one. If there exists only one equilibrium point of x(·) in (0, 1), then the equilibrium point must be globally stable (attractive).
In the next proposition, we provide the conditions on γ 0 and γ 1 for which there exist multiple stable equilibrium points of the mean field x(·).
Proposition 2 There exist two distinct stable equilibrium points of the mean field x(·) in (0, 1) if and only if both of the following conditions hold: ... 2. $0 < z_1, z_2 < 1$. If any one of the above conditions is not satisfied, then x(·) has a unique, globally stable equilibrium point in (0, 1).
Proof From Proposition 1, we have seen that x(·) can have two stable equilibrium points in (0, 1) only if the cubic on the right-hand side of (18) has three distinct roots in (0, 1), which is precisely what the above conditions guarantee. Clearly, if any one of the above conditions is not satisfied, then x(·) has a unique equilibrium point in (0, 1). According to Proposition 1, this equilibrium point must be globally stable.
Hence, depending on the values of $\gamma_0$ and $\gamma_1$, there may exist multiple stable equilibrium points of the mean field x(·). However, for every finite N, the process $x^{(N)}(\cdot)$ has a unique stationary distribution $\pi_N$ (since it is irreducible on a finite state space). In the next result, we establish that any limit point of the sequence of stationary probability distributions $(\pi_N)_N$ is a convex combination of the Dirac measures concentrated on the equilibrium points of the mean field x(·) in [0, 1].
Theorem 6 Any limit point of the sequence of probability measures (π N ) N is a convex combination of the Dirac measures concentrated on the equilibrium points of x(·) in [0, 1]. In particular, if there exists a unique equilibrium point r of x(·) in [0, 1] then π N ⇒ δ r , where δ r denotes the Dirac measure concentrated at the point r.
Thus, according to the above theorem, if there exists a unique equilibrium point of the process x(·) in [0,1], then the sequence of stationary distributions (π N ) N concentrates on that equilibrium point as N → ∞. In other words, for large N , the fraction of non-stubborn agents having opinion {1} (at equilibrium) will approximately be equal to the unique equilibrium point of the mean field.
Simulation Results: In Figure 5, we plot the equilibrium point of x(·) (when it is unique) as a function of the fraction $\gamma_1$ of stubborn agents having opinion {1}, keeping the fraction $\gamma_0$ of stubborn agents having opinion {0} fixed. We choose the parameter values so that there exists a unique equilibrium point of x(·) in [0, 1] (such parameter settings can be obtained using the conditions of Proposition 2). We see that as $\gamma_1$ is increased in the range $(0, 1 - \gamma_0)$, the equilibrium point shifts closer to unity. This is expected since increasing the fraction of stubborn agents with opinion {1} increases the probability with which a non-stubborn agent samples an agent with opinion {1} at an update instant.
If there exist multiple equilibrium points of the process x(·), then the convergence $x^{(N)}(\cdot) \Rightarrow x(\cdot)$ implies that, at steady state, the process $x^{(N)}(\cdot)$ spends long intervals near the region corresponding to one of the stable equilibrium points of x(·). Then, due to some rare events, it moves, via the unstable equilibrium point, to a region corresponding to the other stable equilibrium point of x(·). This fluctuation repeats, giving the process $x^{(N)}(\cdot)$ a unique stationary distribution. This behaviour is formally known as metastability. To demonstrate metastability, we simulate a network with N = 100 agents and $\gamma_0 = \gamma_1 = 0.2$. For the above parameters, the mean field x(·) has two stable equilibrium points, at 0.127322 and 0.872678. In Figure 6, we show a sample path of the process $x^{(N)}(\cdot)$. We see that at steady state the process switches back and forth between regions corresponding to the stable equilibrium points of x(·). This provides numerical evidence of the metastable behaviour of the finite system.
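A crude count-level simulation in the spirit of this experiment can be sketched as follows. It tracks only the number of non-stubborn agents holding opinion {1} (exploiting exchangeability) and starts from the unstable point 1/2; the time horizon, seed, and sampling shortcut are our own choices, not the paper's:

```python
import random

def simulate(T=200000, N=100, g0=0.2, g1=0.2, seed=1):
    """K=1 majority updates among non-stubborn agents; returns the
    trajectory of their opinion-{1} fraction."""
    rng = random.Random(seed)
    ns = int(N * (1 - g0 - g1))   # number of non-stubborn agents
    n1 = int(N * g1)              # number of stubborn opinion-{1} agents
    n0 = int(N * g0)              # number of stubborn opinion-{0} agents
    ones = ns // 2                # start at the unstable point 1/2
    traj = []
    for _ in range(T):
        i_is_one = rng.randrange(ns) < ones  # opinion of the updating agent
        def sample():  # opinion of a uniformly sampled agent (whole network)
            j = rng.randrange(N)
            if j < n1:
                return 1
            if j < n1 + n0:
                return 0
            return 1 if rng.randrange(ns) < ones else 0
        s = sample() + sample()
        new = 1 if s + (1 if i_is_one else 0) >= 2 else 0  # majority of 3
        ones += new - (1 if i_is_one else 0)
        traj.append(ones / ns)
    return traj
```

Plotting the returned trajectory reproduces the qualitative picture of Figure 6: long sojourns near the two stable fractions with occasional switches between them.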

Proof of Theorem 1
Let $T = T_0 \wedge T_N$ denote the random time to reach consensus. Then we have $T = \sum_{k=1}^{N-1} \sum_{j=1}^{Z_k} M_{k,j}$, where $Z_k$ denotes the number of visits to state $k$ before absorption and $M_{k,j}$ denotes the time spent in the $j$th visit to state $k$. Clearly, the random variables $Z_k$ and $(M_{k,j})_{j \ge 1}$ are independent, with each $M_{k,j}$ being an exponential random variable with rate $(q_0+q_1)k(N-k)/N$. Hence, using Wald's identity, we have
$$t_N(\alpha) = \sum_{k=1}^{N-1} \frac{N\,\mathbb{E}_{\lfloor \alpha N \rfloor}[Z_k]}{(q_0+q_1)\,k(N-k)}. \qquad (28)$$
We now proceed to find lower and upper bounds on $t_N(\alpha)$. Let $A = \{\omega : T_N(\omega) < T_0(\omega)\}$ denote the event that the Markov chain gets absorbed in state N, so that
$$\mathbb{E}_x[Z_k] = \mathbb{E}_x[Z_k; A] + \mathbb{E}_x[Z_k; A^c]. \qquad (29)$$
Lower bound of $t_N(\alpha)$: We first obtain a lower bound for $t_N(\alpha)$. Clearly, on the event A, every state $k \ge \lfloor \alpha N \rfloor$ is visited at least once, i.e., $Z_k \ge \mathbb{1}_A$ for $k \ge \lfloor \alpha N \rfloor$, where $\mathbb{1}_\Omega$ denotes the indicator function of the set $\Omega$. Using the above in (29), we have $\mathbb{E}_{\lfloor \alpha N \rfloor}[Z_k] \ge \mathbb{P}_{\lfloor \alpha N \rfloor}(A) = E_N(\alpha)$ for $k \ge \lfloor \alpha N \rfloor$. Using the above in (28), we have $t_N(\alpha) \ge \frac{E_N(\alpha)}{q_0+q_1} \sum_{k=\lfloor \alpha N \rfloor}^{N-1} \frac{N}{k(N-k)} = \Omega(\log N)$.
Upper bound for $t_N(\alpha)$: We first obtain an upper bound on $\mathbb{E}_x[Z_k; A]$ for $k \ge x$ with any $0 < x < N$. Given A, let $\zeta_k$ denote the number of times the embedded chain $\hat{X}^{(N)}$ jumps from $k$ to $k-1$ before absorption. It is easy to observe that, conditioned on A, the embedded chain $\hat{X}^{(N)}$ is a Markov chain with jump probabilities given by
$$p^A_{k,k+1} = p\,\frac{1-r^{k+1}}{1-r^k}, \qquad p^A_{k,k-1} = q\,\frac{1-r^{k-1}}{1-r^k},$$
which follows from Lemma 1. Furthermore, conditioned on A, we have $Z_k = 2\zeta_k + 1$ for $k \ge x$. The above relationship follows from the facts that (i) the states $k \ge x$ are visited at least once and (ii) the number of visits to state $k$ is the sum of the numbers of jumps of $\hat{X}^{(N)}$ to the left and to the right from state $k$.
Given A, we must have $\zeta_N = 0$. Let $\xi_{l,k}$ denote the random number of left-jumps from state $k$ between the $l$th and $(l+1)$th left-jumps from state $k+1$. Then $(\xi_{l,k})_{l \ge 0}$ are i.i.d. with geometric distribution having mean $p^A_{k,k-1}/p^A_{k,k+1}$. Moreover, we have the following recursion:
$$\zeta_k = \sum_{l=0}^{\zeta_{k+1}} \xi_{l,k}$$
(the above sum starts from $l = 0$ because, for $k \ge x$, left jumps from $k$ can occur even before the chain visits $k+1$ for the first time). Thus, we see that $(\zeta_k)_{x \le k \le N}$ forms a branching process with immigration of one individual in each generation. Applying Wald's identity to solve the above recursion, we have $\mathbb{E}[\zeta_k] = \left(\mathbb{E}[\zeta_{k+1}] + 1\right) p^A_{k,k-1}/p^A_{k,k+1}$. Since $p^A_{k,k-1}/p^A_{k,k+1} \le r$ and $\zeta_N = 0$, iterating this recursion yields $\mathbb{E}[\zeta_k] \le r + r^2 + \cdots \le r/(1-r)$. Combining this with the relation $Z_k = 2\zeta_k + 1$, which holds on A for $k \ge x$, we obtain $\mathbb{E}_x[Z_k; A] \le \frac{1+r}{1-r}$, which provides the required bound on $\mathbb{E}_x[Z_k; A]$. We note that the above bound is independent of $x$. In particular, it is true when $x = \lfloor \alpha N \rfloor$ and $k \ge \lfloor \alpha N \rfloor$.
The bound derived above, combined with the Markov property, in fact applies to $\mathbb{E}_x[Z_k; A]$ for all $k \ge x$. Using similar arguments for the process conditioned on $A^c$, it follows that for any $0 < x < N$ and any $0 < k \le x$ we have $\mathbb{E}_x[Z_k; A^c] \le \frac{1+r}{1-r}\,\mathbb{P}_x(A^c)$; analogous bounds hold for $N > k > x$. Combining all the above results, we have $\mathbb{E}_{\lfloor \alpha N \rfloor}[Z_k] \le (1+r)/(1-r)$ for all $0 < k < N$. Hence, from (28) we obtain
$$t_N(\alpha) \le \frac{1+r}{1-r} \sum_{k=1}^{N-1} \frac{N}{(q_0+q_1)\,k(N-k)} = O(\log N),$$
which completes the proof.

Proof of Theorem 2
The process $x^{(N)}(\cdot)$ jumps from the state x to the state $x + 1/N$ when one of the $N(1-x)$ agents having opinion {0} updates (with probability $q_0$) its opinion by interacting with an agent with opinion {1}. Since the agents update their opinions at the points of independent unit rate Poisson processes, the rate at which one of the $N(1-x)$ agents having opinion {0} decides to update its opinion is $N(1-x)q_0$. The probability with which the updating agent interacts with an agent with opinion {1} is x. Hence, the total rate of transition from x to $x + 1/N$ is given by $r(x \to x + 1/N) = N q_0 (1-x) x$. Similarly, the rate of transition from x to $x - 1/N$ is given by $r(x \to x - 1/N) = N q_1 x (1-x)$. From the above transition rates it can easily be seen that the generator of the process $x^{(N)}(\cdot)$ converges uniformly as $N \to \infty$ to the generator of the deterministic process x(·) defined by (5). The theorem then follows from classical results (see, e.g., Kurtz [21]).

Proof of Theorem 4
From the first-step analysis of the embedded chain $X^{(N)}(\cdot)$ it follows that, upon rearranging, putting $D_N(n) = E_N(n+1) - E_N(n)$ reduces (40) to a first-order recursion in $D_N(n)$, which satisfies the stated relation for $1 \leq n \leq N-1$. To compute $D_N(0)$ we use the boundary conditions $E_N(0) = 0$ and $E_N(N) = 1$. Thus, using $E_N(n) = \sum_{k=0}^{n-1} D_N(k)$, we have the required expression for $E_N(n)$ for all $0 \leq n \leq N$.
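The first-step analysis above is the standard birth-death absorption computation. A small sketch (with generic jump probabilities, not the paper's specific majority-rule ones) solves the same recursion numerically: $E(n) = p_n E(n+1) + (1-p_n) E(n-1)$, so $D(n) = \rho_n D(n-1)$ with $\rho_n = (1-p_n)/p_n$, and the boundary conditions fix $D(0)$.

```python
def absorption_prob(p):
    """p[n] = probability of jumping n -> n+1 for 1 <= n <= N-1
    (p[0] and p[N] are unused; states 0 and N are absorbing).
    Returns E, where E[n] = P_n(hit N before 0)."""
    N = len(p) - 1
    # prods[k] = prod_{j=1}^{k} rho_j with rho_j = (1 - p[j]) / p[j]
    prods = [1.0]
    for n in range(1, N):
        prods.append(prods[-1] * (1.0 - p[n]) / p[n])
    total = sum(prods)                     # normalisation: E(N) = 1
    E, acc = [0.0], 0.0
    for k in range(N):
        acc += prods[k] / total            # E(n) = sum_{k=0}^{n-1} D(k)
        E.append(acc)
    return E

E = absorption_prob([0.5] * 11)   # unbiased chain on {0, ..., 10}
print(E[3])   # approximately 0.3 = n/N, the gambler's-ruin value
```

For constant bias $p_n \equiv p$ this reproduces the classical formula $E(n) = (1-\rho^n)/(1-\rho^N)$ with $\rho = (1-p)/p$.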
It is also important to note that $D_N$ defines a probability distribution on the set $\{0, 1, \ldots, N-1\}$. Furthermore, using the monotonicity of $g_K$ proved in Lemma 2 and (41), we see that the mode of the distribution $D_N$ is at $\lfloor \beta N \rfloor$. Now for any $\alpha > \beta$ we choose $\beta'$ such that $\alpha > \beta' > \beta$. Hence, by the monotonicity of $g_K$ we have the first bound. Also, using the monotonicity of $g_K$ and (41), we have for any $j \geq 1$ the second bound, where the last step follows since $\beta' N < \lfloor \beta' N \rfloor + 1$. Hence, we have the result. The proof for $\alpha < \beta$ follows similarly.

Proof of Theorem 5
Let $T = T_0 \wedge T_N$ denote the random time to reach consensus. Then we have $T = \sum_{n=1}^{N-1} \sum_{j=1}^{Z_n} M_{n,j}$, where $Z_n$ denotes the number of visits to state $n$ before absorption and $M_{n,j}$ denotes the time spent in the $j$th visit to state $n$. Clearly, the random variables $Z_n$ and $(M_{n,j})_{j \geq 1}$ are independent, with each $M_{n,j}$ being an exponential random variable with rate $q(n \to n+1) + q(n \to n-1)$. Using Wald's identity we have
$$t_N(\alpha) = \mathbb{E}_{\lfloor \alpha N \rfloor}[T] = \sum_{n=1}^{N-1} \frac{\mathbb{E}_{\lfloor \alpha N \rfloor}[Z_n]}{q(n \to n+1) + q(n \to n-1)}. \qquad (44)$$
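The Wald's identity step can be illustrated with a quick Monte Carlo check (a toy example with hypothetical parameters, independent of the paper's chain): for a nonnegative integer $Z$ independent of i.i.d.\ exponentials $(M_j)$, $\mathbb{E}\big[\sum_{j=1}^{Z} M_j\big] = \mathbb{E}[Z]\,\mathbb{E}[M]$.

```python
import random

random.seed(1)

# Toy check of Wald's identity: E[ sum_{j=1}^{Z} M_j ] = E[Z] * E[M],
# with Z ~ Geometric(p) (number of failures before first success, mean (1-p)/p)
# and M_j ~ Exp(rate), all independent. Parameters are illustrative.
p, rate, trials = 0.5, 2.0, 20000
total = 0.0
for _ in range(trials):
    z = 0
    while random.random() > p:        # draw Z
        z += 1
    total += sum(random.expovariate(rate) for _ in range(z))

empirical = total / trials
predicted = ((1 - p) / p) * (1.0 / rate)   # E[Z] * E[M] = 1 * 0.5
print(empirical, predicted)
```

In the proof, $Z$ is the visit count $Z_n$ and $\mathbb{E}[M] = 1/(q(n \to n+1) + q(n \to n-1))$, which is exactly how (44) arises.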
Below we find lower and upper bounds on $t_N(\alpha)$. Let $A = \{\omega : T_N(\omega) < T_0(\omega)\}$ denote the event that the Markov chain gets absorbed in state $N$.

Lower bound on $t_N(\alpha)$: Applying Markov's inequality to the RHS of (9) and (11) we obtain the first estimate. Furthermore, as in the case of the voter model, we have the matching inequalities. Using (44) and the above inequalities, we obtain the lower bound.

Upper bound on $t_N(\alpha)$: From (9) and (11) we have
$$q(n \to n+1) + q(n \to n-1) \;\geq\; \begin{cases} c\,n, & \text{for } \frac{n}{N} \leq \frac{1}{2},\\[2pt] c\,(N-n), & \text{for } \frac{n}{N} > \frac{1}{2},\end{cases}$$
where $c = q_1\, \mathbb{P}\!\left(\mathrm{Bin}\!\left(2K, \tfrac{1}{2}\right) \geq K+1\right)$. Using the above inequalities in (44), to show that $t_N(\alpha) = O(\log N)$ it suffices to show that $\mathbb{E}_{\lfloor \alpha N \rfloor}[Z_n] = O(1)$ uniformly in $n$. For the rest of the proof we assume $\alpha > \beta$. The case $\alpha < \beta$ can be handled similarly.
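The upper bound then reduces to a harmonic-type sum: once the visit counts are shown to be $O(1)$ (as in the remainder of the proof), (44) is bounded by a constant times $\sum_{n \leq N/2} \frac{1}{cn} + \sum_{n > N/2} \frac{1}{c(N-n)}$, which grows like $\log N$. A short numerical reminder of this growth rate:

```python
import math

def harmonic(N):
    """H_N = sum_{n=1}^{N} 1/n, which satisfies H_N = log N + gamma + O(1/N)."""
    return sum(1.0 / n for n in range(1, N + 1))

for N in (10**3, 10**5):
    print(N, harmonic(N) - math.log(N))   # approaches the Euler-Mascheroni constant 0.5772...
```

Each of the two sums in the bound is at most $H_{\lceil N/2 \rceil}/c$, giving $t_N(\alpha) = O(\log N)$, matching the $\Theta(\log N)$ claim.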
Let $x = \lfloor \alpha N \rfloor$. We first find an upper bound on $\mathbb{E}_x[Z_n; A]$. Conditioned on $A$, the embedded chain $X^{(N)}$ is a Markov chain with the stated jump probabilities, where equality (a) follows from (12) and Theorem 4. Inequality (b) follows from the facts that (i) $P_{n-1}(A) \leq P_{n+1}(A)$ and (ii) for a monotonically non-increasing non-negative sequence $(y_n)_{n \geq 1}$ the following inequality holds:
$$\frac{y_n \sum_{t=0}^{n-2} \prod_{j=1}^{t} y_j}{\sum_{t=0}^{n} \prod_{j=1}^{t} y_j} \;\leq\; 1$$
(this follows simply by comparing the terms in the numerator with the middle $n-1$ terms in the denominator). Given $A$, let $\zeta_n$ denote the number of times the embedded chain $X^{(N)}$ jumps from $n$ to $n-1$ before absorption. Then, as in the voter model, $\zeta_n$ follows the recursion
$$\zeta_n = \begin{cases} \sum_{l=0}^{\zeta_{n+1}} \xi_{l,n}, & \text{for } x \leq n \leq N-1,\\[2pt] \sum_{l=1}^{\zeta_{n+1}} \xi_{l,n}, & \text{for } 1 \leq n < x, \end{cases} \qquad (53)$$
with $\zeta_N = 0$ and $\xi_{l,n}$ denoting the random number of left-jumps from state $n$ between the $l$th and $(l+1)$th left-jumps from state $n+1$. Clearly, $(\xi_{l,n})_{l \geq 0}$ are i.i.d.\ with geometric distribution having mean $p^A_{n,n-1}/p^A_{n,n+1}$. Hence, applying Wald's identity to solve the above recursion, we obtain the solution for $x \leq n \leq N-1$. Now using inequality (51), the monotonicity of $g_K$, and the fact that for $n \geq x = \lfloor \alpha N \rfloor > \lfloor \beta N \rfloor$ we have $1 > r_\alpha := r/g_K(\alpha) \geq r/g_K(n/N)$, we obtain the bound for $n \geq x = \lfloor \alpha N \rfloor$. Hence, using (52), we have the corresponding bound for $n \geq x$. For $n < x = \lfloor \alpha N \rfloor$ we have the bound where (a) follows from (54) and (51). Similarly, conditioned on $A^c$, we have
$$Z_n \,\big|\, A^c = \begin{cases} 1 + \bar\zeta_n + \bar\zeta_{n-1}, & \text{for } 1 \leq n \leq x,\\[2pt] \bar\zeta_n + \bar\zeta_{n-1}, & \text{for } x < n < N-1, \end{cases}$$
where $\bar\zeta_n$ denotes the number of times $X^{(N)}$ jumps to the right from state $n$ given $A^c$. Hence, $\bar\zeta_n$ follows the recursion
$$\bar\zeta_n = \begin{cases} \sum_{l=0}^{\bar\zeta_{n-1}} \bar\xi_{l,n}, & \text{for } 1 \leq n \leq x,\\[2pt] \sum_{l=1}^{\bar\zeta_{n-1}} \bar\xi_{l,n}, & \text{for } x < n < N-1, \end{cases}$$
where $\bar\zeta_0 = 0$ and $\bar\xi_{l,n}$ denotes the random number of right-jumps from state $n$ between the $l$th and $(l+1)$th right-jumps from state $n-1$ given $A^c$. Clearly, $(\bar\xi_{l,n})_{l \geq 0}$ are i.i.d.\ with geometric distribution having mean $p^{A^c}_{n,n+1}/p^{A^c}_{n,n-1}$, where
$$p^{A^c}_{n,n+1} = 1 - p^{A^c}_{n,n-1} = p_{n,n+1}\, \frac{P_{n+1}(A^c)}{P_n(A^c)}.$$
As before, we solve (60) using Wald's identity. For $1 \leq n \leq x$, after some simplification of (63), we obtain the stated expression. We observe that for $j \leq \lfloor \beta N \rfloor$ we have $g_K(j/N)/r \leq 1$, and using the fact that $g_K(x) = 1/g_K(1-x)$ we obtain the first bound. Furthermore, for $\lfloor \beta N \rfloor < n \leq x$ we have the second bound, where (a) follows from the fact that $g_K(j/N)/r \geq 1$ for $j > n > \lfloor \beta N \rfloor$. Hence, we have shown that $\mathbb{E}_x \bar\zeta_n = O(1)$ for $1 \leq n \leq x$. Now, using (63) and inequality (62), we have for $x < n < N-1$ that $\mathbb{E}_x \bar\zeta_n \leq \mathbb{E}_x \bar\zeta_x = O(1)$. Hence, from (59) we see that $\mathbb{E}_x[Z_n; A^c] \leq \mathbb{E}_x[Z_n \mid A^c] = O(1)$, thereby completing the proof.

Conclusion
In this paper, we analysed the voter model and the majority rule model of social interaction in the presence of biased and stubborn agents. We observed that for the voter model the presence of biased agents reduces the mean consensus time exponentially in comparison to the voter model with unbiased agents. For the majority rule model with biased agents, we saw that the network reaches the consensus state in which all agents adopt the preferred opinion only if the initial fraction of agents holding the preferred opinion exceeds a certain threshold value. Finally, we have seen that for the majority rule model with stubborn agents the network exhibits metastability, where it fluctuates between multiple stable configurations, spending long intervals in each configuration.
Several interesting directions for future work exist. For example, the behaviour of random $d$-regular networks under the biased voter and majority rule models has not been analysed yet. Furthermore, the effect of the presence of more than two opinions on the opinion dynamics is unknown. It will also be interesting to study the network dynamics under the majority rule model for general network topologies when stubborn agents are present.