Abstract
We study opinion formation games based on the famous model proposed by Friedkin and Johnsen (FJ model). In today’s huge social networks the assumption that in each round agents update their opinions by taking into account the opinions of all their friends is unrealistic. So, we are interested in the convergence properties of simple and natural variants of the FJ model that use limited information exchange in each round and converge to the same stable point. As in the FJ model, we assume that each agent i has an intrinsic opinion \(s_i \in [0,1]\) and maintains an expressed opinion \(x_i(t) \in [0,1]\) in each round t. To model limited information exchange, we consider an opinion formation process where each agent i meets with one random friend j at each round t and learns only her current opinion \(x_j(t)\). The amount of influence j imposes on i is reflected by the probability \(p_{ij}\) with which i meets j. Then, agent i suffers a disagreement cost that is a convex combination of \((x_i(t) - s_i)^2\) and \((x_i(t) - x_j(t))^2\). An important class of dynamics in this setting are no regret dynamics, i.e. dynamics that ensure that the regret of each agent, with respect to the experienced disagreement cost, is vanishing. We show an exponential gap between the convergence rate of no regret dynamics and of more general dynamics that do not ensure no regret. We prove that no regret dynamics require roughly \(\varOmega (1/\varepsilon )\) rounds to be within distance \(\varepsilon \) from the stable point \(x^*\) of the FJ model. On the other hand, we provide an opinion update rule that does not ensure no regret and converges to \(x^*\) in \(\tilde{O}(\log ^2(1/\varepsilon ))\) rounds.
Finally, in our variant of the FJ model, we show that the agents can adopt a simple opinion update rule that ensures no regret to the experienced disagreement cost and results in an opinion vector that converges to the stable point \(x^*\) of the FJ model within distance \(\varepsilon \) in \(\textrm{poly}(1/\varepsilon )\) rounds. In view of our lower bound for no regret dynamics this rate of convergence is close to best possible.
1 Introduction
The study of Opinion Formation has a long history (see e.g. [29]). Opinion Formation is a dynamic process in the sense that socially connected people (e.g. family, friends, colleagues) exchange information and this leads to changes in their expressed opinions over time. Today, the advent of the internet and social media makes the study of opinion formation in large social networks even more important; realistic models of how people form their opinions by interacting with each other are of great practical interest for prediction, advertisement etc. In an attempt to formalize the process of opinion formation, several models have been proposed over the years (see e.g., [13, 14, 18, 28]). The common assumption underlying all these models, which dates back to DeGroot [13], is that opinions evolve through a form of repeated averaging of information collected from the agents’ social neighborhoods.
Our work builds on the model proposed by Friedkin and Johnsen [18]. The FJ model is a variation of the DeGroot model capturing the fact that consensus on the opinions is rarely reached. According to the FJ model, each person i has a public opinion \(x_i \in [0,1]\) and an internal opinion \(s_i\in [0,1]\), which is private and invariant over time. There also exists a weighted graph G(V, E) representing a social network, where V stands for the persons (\(|V|=n\)) and E for their social relations. Initially, all nodes start with their internal opinion and at each round t, each node i updates her public opinion \(x_i(t)\) to a weighted average of the public opinions of her neighbors and her internal opinion,
$$\begin{aligned} x_i(t) = \frac{\sum _{j \in N_i} w_{ij}x_j(t-1) + w_{ii}s_i}{\sum _{j \in N_i} w_{ij} + w_{ii}}, \end{aligned}$$(1)
where \(N_i =\{j \in V:(i,j) \in E\}\) is the set of i’s neighbors, the weight \(w_{ij}\) associated with the edge \((i,j) \in E\) measures the extent of the influence that j exerts on i, and the weight \(w_{ii}>0\) quantifies how susceptible i is to adopting opinions that differ from her internal opinion \(s_i\).
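The repeated-averaging rule above is easy to simulate. The sketch below is our own illustration (the function name `fj_step` is not from the paper); it assumes a dense weight matrix `W` whose diagonal stores the self-weights \(w_{ii}\):

```python
import numpy as np

def fj_step(x, s, W):
    """One synchronous round of the FJ update: each agent i moves to the
    weighted average of her neighbors' public opinions and her own
    internal opinion s_i (weighted by w_ii, stored on the diagonal)."""
    n = len(x)
    x_new = np.empty(n)
    for i in range(n):
        total = W[i].sum()  # sum_{j in N_i} w_ij + w_ii
        # neighbors contribute w_ij * x_j; the diagonal slot contributes w_ii * s_i
        x_new[i] = (W[i] @ x - W[i, i] * x[i] + W[i, i] * s[i]) / total
    return x_new
```

Iterating `fj_step` from \(x(0)=s\) converges linearly to the stable point \(x^*\) discussed below.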
The FJ model is one of the most influential models for opinion formation. It has a very simple update rule, making it plausible for modeling natural behavior, and its basic assumptions are aligned with empirical findings on the way opinions are formed [1, 32]. At the same time, it admits a unique stable point \(x^* \in [0,1]^n\) to which it converges with a linear rate [23]. The FJ model has also been studied under a game theoretic viewpoint. Bindel et al. considered its update rule as the minimizer of a quadratic disagreement cost function and based on it they defined the following opinion formation game [5]. Each node i is a selfish agent whose strategy is the public opinion \(x_i\) that she expresses, incurring her a disagreement cost
$$\begin{aligned} C_i(x_i,x_{-i}) = \sum _{j \in N_i} w_{ij}(x_i - x_j)^2 + w_{ii}(x_i - s_i)^2. \end{aligned}$$
Note that the FJ model is the simultaneous best response dynamics of this game and its stable point \(x^*\) is the unique Nash equilibrium. In [5] they quantified its inefficiency with respect to the total disagreement cost. They proved that the Price of Anarchy (PoA) is 9/8 in case G is undirected and \(w_{ij}=w_{ji}\), and they also provided PoA bounds for unweighted Eulerian directed graphs. We remark that [5] also introduced an alternative framework for studying the way opinions evolve: the opinion formation process can be described as the dynamics of an opinion formation game. This framework is much more comprehensive, since different aspects of the opinion formation process can easily be captured by defining suitable games. Subsequent works [3, 4, 15] considered variants of the above game and studied the convergence properties of the best response dynamics.
1.1 Motivation and Our Setting
Many recent works study the Nash equilibrium \(x^*\) of the opinion formation game defined in [5] under various perspectives. In [10] they extended the PoA bounds to more general classes of directed graphs, while many recently introduced influence maximization problems [2, 24, 33] are defined with respect to \(x^*\). The reason for this scientific interest is evident: the equilibrium \(x^*\) is considered an appropriate way to model the final opinions formed in a social network, since the well established FJ model converges to it.
Our work is motivated by the fact that there are notable cases in which the FJ model is not an appropriate model for the dynamics of opinions, due to the large amount of information exchange that it implies. More precisely, at each round its update rule (1) requires that every agent learns the opinions of all her social neighbors. In today’s large social networks, where users usually have several hundreds of friends, it is highly unlikely that, each day, they learn the opinions of all their social neighbors. In such environments it is far more reasonable to assume that individuals randomly meet a small subset of their acquaintances and that these are the only opinions they learn. Such information exchange constraints render the FJ model unsuitable for modeling the opinion formation process in such large networks and, therefore, it is not clear whether \(x^*\) captures the limiting behavior of the opinions. In this work we ask:
Question 1
Is the equilibrium \(x^*\) an adequate way to model the final formed opinions in large social networks? Namely, are there simple variants of the FJ model that require limited information exchange and converge fast to \(x^*\)? Can they be justified as natural behavior for selfish agents under a game-theoretic solution concept?
To address these questions, one could define precise dynamical processes whose update rules require limited information exchange between the agents and study their convergence properties. Instead of doing so, we describe the opinion formation process in such large networks as dynamics of a suitable opinion formation game that captures these information exchange constraints. This way we can precisely define which dynamics are natural and, more importantly, study general classes of dynamics (e.g. no regret dynamics) without explicitly defining their update rules. The opinion formation game that we consider is a variant of the game in [5], based on interpreting the weight \(w_{ij}\) as a measure of how frequently i meets j.
Definition 1
For a given opinion vector \(x \in [0,1]^n\), the disagreement cost of agent i is the random variable \(C_i(x_i,x_{-i})\) defined as follows:

Agent i meets one of her neighbors j with probability \(p_{ij}= w_{ij}/\sum _{j\in N_i}w_{ij}\).

Agent i suffers cost \(C_i(x_i, x_{-i}) = (1-\alpha _i)(x_i-x_j)^2 + \alpha _i(x_i-s_i)^2\), where
$$\begin{aligned} \alpha _i = \frac{w_{ii}}{\sum _{j\in N_i}w_{ij}+w_{ii}}. \end{aligned}$$
Note that the expected disagreement cost of each agent in the above game is the same as the disagreement cost in [5], divided by \(\sum _{j \in N_i}w_{ij}+w_{ii}\). Moreover, its Nash equilibrium, with respect to the expected disagreement cost, is \(x^*\). This game provides us with a general template for all the dynamics examined in this paper. At round t, each agent i selects an opinion \(x_i(t)\) and suffers a disagreement cost based on the opinion of the neighbor that she randomly met. At the end of round t, she is informed only about the opinion and the index of this neighbor and may use this information to update her opinion in the next round. Obviously, different update rules lead to different dynamics; however, all of them respect the information exchange constraints: at every round each agent learns the opinion of just one of her neighbors. Question 1 now takes the following more concrete form.
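As a quick sanity check of Definition 1, the sketch below (our own illustration; the helper names are not from the paper) draws the random meeting and compares the sampled cost with its closed-form expectation:

```python
import numpy as np

def sample_cost(i, x, s, alpha, P, rng):
    """One round of Definition 1 for agent i: meet neighbor j with
    probability p_ij, then suffer (1-a_i)(x_i-x_j)^2 + a_i(x_i-s_i)^2."""
    j = rng.choice(len(x), p=P[i])
    return (1 - alpha[i]) * (x[i] - x[j]) ** 2 + alpha[i] * (x[i] - s[i]) ** 2

def expected_cost(i, x, s, alpha, P):
    """Expectation over the random meeting; this is the cost of the game
    in [5] rescaled by the total weight incident to i."""
    return (1 - alpha[i]) * (P[i] @ (x[i] - x) ** 2) + alpha[i] * (x[i] - s[i]) ** 2
```

Averaging `sample_cost` over many independent rounds recovers `expected_cost`, which is the sense in which the sampling noise is zero-mean.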
Question 2
Can the agents update their opinions according to the limited information that they receive such that the produced opinion vector x(t) converges to the equilibrium \(x^*\)? How is the convergence rate affected by the limited information exchange? Are there dynamics that ensure that the cost that the agents experience is minimal?
In what follows, we are mostly concerned with the dependence of the rate of convergence on the distance \(\varepsilon \) from the equilibrium \(x^*\). Thus, we suppress the dependence on other parameters, such as the size n of the graph. We remark that the dependence of our dynamics on these parameters is in fact rather good (see Sect. 2), and we suppress it only for clarity of exposition.
Definition 2
(Informal) We say that a dynamics converges slowly (resp. fast) to the equilibrium \(x^*\) if it requires \(\textrm{poly}(1/\varepsilon )\) (resp. \(\textrm{poly}(\log (1/\varepsilon ))\)) rounds to be within (expected) error \(\varepsilon \) of \(x^*\).
1.2 Contribution
The major contribution of this paper is an exponential separation between the convergence rate of no regret dynamics and the convergence rate of more general dynamics produced by update rules that do not ensure no regret.
No regret dynamics are produced by update rules that ensure no regret to any agent that adopts them. Namely, the total disagreement cost of an agent that follows such a rule is close to the total disagreement cost that she would experience by selecting the best fixed opinion in hindsight. The latter must hold regardless of the way the other agents update their opinions and of the neighbors that the agent gets to meet. This powerful property renders no regret dynamics natural dynamics for describing the behavior of agents [8, 16, 31, 37]. We prove that if all the agents adopt an update rule that ensures no regret, then there exists an instance of the game such that the produced opinion vector x(t) requires roughly \(\varOmega (1/\varepsilon )\) rounds to be \(\varepsilon \)-close to \(x^*\). No regret comes at the price of slow convergence because it provides robust guarantees. Agents who adopt no regret update rules suffer minimal total disagreement cost even if the other agents play irrationally or adversarially. In order to provide such strong guarantees, no regret rules must depend only on the opinions that the agent observes and must not take into account the weights \(w_{ij}\) of the outgoing edges (see Sect. 5). We call update rules with the latter property graph oblivious. In Sect. 5 we use a novel information theoretic argument to prove the aforementioned lower bound for this more general class.
In Sect. 6, we present a simple update rule whose resulting dynamics converges fast, i.e. the opinion vector x(t) is \(\varepsilon \)-close to \(x^*\) in \(O(\log ^2 (1/\varepsilon ))\) rounds. The reason that the previous lower bound does not apply is that this rule does not ensure no regret to the agents that adopt it. In fact, there is a very simple example with two agents, in which the first follows the rule while the second selects her opinions adversarially, where the first agent experiences regret (see Example 1 in Sect. 6).
We introduce an intuitive no regret update rule and show that if all agents adopt it, the resulting opinion vector x(t) converges to \(x^*\). Our rule is a Follow the Leader algorithm, meaning that at round t, each agent updates her opinion to the minimizer of the total disagreement cost that she experienced up to round \(t-1\). It also has a very simple form: it is roughly the time average of the opinions that the agent observes. In Sect. 3, we bound its convergence rate and show that \(\textrm{poly}(1/\varepsilon )\) rounds are sufficient to achieve distance \(\varepsilon \) from \(x^*\). In view of our lower bound this rate is close to best possible. In Sect. 4, we prove its no regret property. This could also be derived from the more general results in [25]; however, we give a short and simple proof that may be of independent interest.
In conclusion, our results reveal that the equilibrium \(x^*\) is a robust choice for modeling the limiting behavior of the opinions of the agents since, even in our limited information setting, there exist simple and natural dynamics that converge to it. The convergence rate crucially depends on whether the agents act selfishly, i.e. whether they are only concerned about their individual disagreement cost. We present an update rule that selfish agents can adopt (a no regret update rule) and show that the resulting opinion vector converges to \(x^*\), but at a slow rate, while, for non-selfish agents, the update rule in Sect. 6 leads to a dynamics with a fast convergence rate.
1.3 Related Work
There exists a large amount of literature concerning the FJ model. Many recent works [3, 4, 12, 15] bound the inefficiency of equilibria in variants of the opinion formation game defined in [5]. In [23] they bound the convergence time of the FJ model in special graph topologies. In [3], a variant of the opinion formation game, in which social relations depend on the expressed opinions, is studied. They prove that the discretized version of the above game admits a potential function and thus best-response dynamics converges to the Nash equilibrium. Convergence results in other discretized variants of the FJ model can be found in [17, 40]. In [19] they provide convergence results for limited information variants of the Hegselmann–Krause model [28] and the FJ model. Although the considered limited information variant of the FJ model is very similar to ours, their convergence results are much weaker, since they only concern the expected value of the opinion vector.
Other works that relate to ours concern the convergence properties of dynamics based on no regret learning algorithms. In [20, 21, 36, 37] it is proved that in a finite n-person game, if each agent updates her mixed strategy according to a no regret algorithm, the resulting time-averaged strategy vector converges to a Coarse Correlated Equilibrium. The convergence properties of no regret dynamics for games with infinite strategy spaces were considered in [16]. They proved that for a large class of games with concave utility functions (socially concave games), the time-averaged strategy vector converges to a Pure Nash Equilibrium (PNE). More recent work investigates a stronger notion of convergence of no regret dynamics. In [11] they show that, in n-person finite generic games that admit a unique Nash equilibrium, the strategy vector converges locally and fast to it. They also provide conditions for global convergence. Our results fit in this line of research, since we show that for a game with an infinite strategy space, the strategy vector (and not just its time average) converges to the Nash equilibrium \(x^*\).
No regret dynamics in limited information settings have recently received substantial attention from the scientific community, since they provide realistic models for the practical applications of game theory. Perfect payoff information is rare in practice; agents act based on random or noisy past-payoff observations. Kleinberg et al. in [30] treated load-balancing in distributed systems as a repeated game and analyzed the convergence properties of no regret learning algorithms under the full information assumption that each agent learns the load of every machine. In subsequent work [31], the same authors considered the same problem in a limited information setting (“bulletin board model”), in which each agent learns the load of just the machine that served her. Most relevant to ours are the works [6, 11, 27, 35], which examine the convergence properties of no regret learning algorithms when the agents observe their payoffs with some additive zero-mean random noise. In our limited information setting the agents experience a random disagreement cost whose expected value equals the actual cost. The main difference is that our noise is not additive but due to a sampling process.
2 Our Results and Techniques
We adopt the convention of using \(\ln \) to denote the natural logarithm. We also write \(\log \) without specifying a base inside big-O notation, or when an arbitrary constant C can absorb the change of base. As previously mentioned, an instance of the game in [5] is also an instance of the game of Definition 1. Following the notation introduced earlier, we have \(p_{ij} = w_{ij}/\sum _{j \in N_i}w_{ij}\) if \(j \in N_i\) and \(p_{ij} = 0\) otherwise. Moreover, \(\alpha _i=w_{ii}/(\sum _{j \in N_i}w_{ij}+w_{ii})>0\), since \(w_{ii}>0\) by the definition of the game in [5]. If an agent i does not have outgoing edges (\(N_i = \emptyset \)), then \(p_{ij} = 0\) for all j. Therefore, \(\sum _{j=1}^n p_{ij}=0\) and \(\alpha _i=1\) if \(N_i= \emptyset \), while \(\sum _{j=1}^n p_{ij}=1\) and \(\alpha _i \in (0,1)\) otherwise. For simplicity we adopt the following notation for an instance of the game of Definition 1.
Definition 3
We denote an instance of the opinion formation game of Definition 1 as \(I=(P,s,\alpha )\), where P is an \(n \times n\) matrix with nonnegative entries \(p_{ij}\), such that \(p_{ii}=0\) and each row sum \(\sum _{j=1}^n p_{ij}\) is either 0 or 1, \(s \in [0,1]^n\) is the internal opinion vector, and \(\alpha \in (0,1]^n\) is the self-confidence vector.
An instance \(I=(P,s,\alpha )\) is also an instance of the FJ model, since by the update rule (1), \(x_i(t)=(1-\alpha _i)\sum _{j \in N_i}p_{ij}x_j(t-1) + \alpha _i s_i\). It also defines the opinion vector \(x^* \in [0,1]^n\), which is the stable point of the FJ model and the Nash equilibrium of the game in [5].
Definition 4
For a given instance \(I=(P,s,\alpha )\), the equilibrium \(x^*\in [0,1]^n\) is the unique solution of the following linear system: for every \(i \in V\),
$$\begin{aligned} x_i^* = (1-\alpha _i)\sum _{j \in N_i}p_{ij}x_j^* + \alpha _i s_i. \end{aligned}$$(2)
The fact that the above linear system always admits a unique solution follows from matrix norm properties. Throughout the paper we study dynamics of the game of Definition 1. We denote by \(W_i^t\) the neighbor that agent i meets at round t, which is a random variable whose probability distribution is determined by the instance \(I=(P,s,\alpha )\) of the game: \({\varvec{\textrm{P}}}\left[W_i^t=j\right]=p_{ij}\). Another parameter of an instance I that we often use is \(\rho =\min _{i \in V}\alpha _i\).
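In matrix form the system reads \(x^* = \textrm{diag}(1-\alpha )\,P\,x^* + \textrm{diag}(\alpha )\,s\), so \(x^*\) can be computed with one linear solve. A minimal sketch (the function name is ours, not from the paper):

```python
import numpy as np

def equilibrium(P, s, alpha):
    """Solve (I - diag(1-alpha) P) x* = diag(alpha) s.
    The system is uniquely solvable: every row of diag(1-alpha) P sums
    to at most 1 - min_i alpha_i < 1, so the coefficient matrix is
    strictly diagonally dominant."""
    n = len(s)
    A = np.eye(n) - np.diag(1.0 - alpha) @ P
    return np.linalg.solve(A, alpha * s)
```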
In Sect. 3, we examine the convergence properties of the opinion vector x(t) when all agents update their opinions according to the Follow the Leader principle. Since each agent i must select \(x_i(t)\) before knowing which of her neighbors she will meet and what opinion her neighbor will express, this update rule says “play the best according to what you have observed”. For a given instance \(I=(P,s,\alpha )\) of the game, the Follow the Leader dynamics x(t) is defined in Dynamics 1, and Theorem 1 bounds its convergence rate to \(x^*\).
Theorem 1
Let \(I = (P,s, \alpha )\) be an instance of the opinion formation game of Definition 1 with equilibrium \(x^* \in [0,1]^n\). The opinion vector \(x(t)\in [0,1]^n\) produced by update rule (3) satisfies, for all \(t \ge 6\),
$$\begin{aligned} {\varvec{\textrm{E}}}_{} \left[ \Vert x(t) - x^* \Vert _{\infty } \right] \le C\, \frac{(\ln (nt))^{3/2}}{t^{\min (\rho ,1/2)}}, \end{aligned}$$
where \(\rho = \min _{i \in V} \alpha _i\) and C is a universal constant.
In Sect. 4 we argue that, apart from its simplicity, update rule (3) ensures no regret to any agent that adopts it, and therefore the FTL dynamics can be considered natural dynamics for selfish agents. Since each agent i selfishly wants to minimize the disagreement cost that she experiences, it is natural to assume that she selects \(x_i(t)\) according to a no regret algorithm for the online convex optimization problem in which the adversary chooses a function \(f_t(x)=(1-\alpha _i)(x-b_t)^2 + \alpha _i(x-s_i)^2\) at each round t. In Theorem 2 we prove that Follow the Leader is a no regret algorithm for this OCO problem. We remark that this does not hold if the adversary can pick functions from a different class (see e.g. chapter 5 in [26]).
Theorem 2
Consider the function \(f:[0,1]^2 \mapsto [0,1]\) with
$$\begin{aligned} f(x,b) = (1-\alpha )(x-b)^2 + \alpha (x-s)^2 \end{aligned}$$
for some constants \(s,\alpha \in [0,1]\). Let \((b_t)_{t=0}^\infty \) be an arbitrary sequence with \(b_t \in [0,1]\). If
$$\begin{aligned} x_t \in \mathop {\textrm{argmin}}_{x \in [0,1]} \sum _{\tau =0}^{t-1} f(x,b_\tau ), \end{aligned}$$
then for all t,
$$\begin{aligned} \sum _{\tau =0}^{t} f(x_\tau ,b_\tau ) - \min _{x \in [0,1]} \sum _{\tau =0}^{t} f(x,b_\tau ) = O(\log t). \end{aligned}$$
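Theorem 2's guarantee is easy to probe numerically. In this one-dimensional problem the FTL point has a closed form: setting the derivative of \(\sum _{\tau <t} f(x,b_\tau )\) to zero gives \(x_t = (1-\alpha )\bar{b}_t + \alpha s\), where \(\bar{b}_t\) is the average of the observed values. The sketch below is our own experiment (not from the paper), measuring the regret against the best fixed opinion in hindsight:

```python
import numpy as np

def ftl_regret(b, s, alpha):
    """Run FTL on the cost sequence f_t(x) = (1-a)(x-b_t)^2 + a(x-s)^2
    and return the regret against the best fixed opinion in hindsight."""
    f = lambda x, bt: (1 - alpha) * (x - bt) ** 2 + alpha * (x - s) ** 2
    total, cum = 0.0, 0.0
    x = s  # before any observation, play the internal opinion
    for t, bt in enumerate(b):
        total += f(x, bt)
        cum += bt
        x = (1 - alpha) * cum / (t + 1) + alpha * s  # FTL closed form
    best_x = (1 - alpha) * np.mean(b) + alpha * s    # best fixed opinion
    best = sum(f(best_x, bt) for bt in b)
    return total - best
```

Even on an adversarial alternating sequence 0, 1, 0, 1, … the regret grows very slowly with the horizon, in line with the theorem, since these losses are strongly convex.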
On the positive side, the FTL dynamics converges to \(x^*\), and its update rule is simple and ensures no regret to the agents. On the negative side, its convergence rate is outperformed by that of the FJ model. For a fixed instance \(I=(P,s,\alpha )\), the FTL dynamics converges with rate \(\widetilde{O}(1/t^{\min (\rho ,1/2)})\), while the FJ model converges with rate \(O(e^{-\rho t})\) [23].
Question 3
Can the agents adopt other no regret update rules such that the resulting dynamics converges fast to \(x^*\)?
The answer is no. In Sect. 5, we prove that fast convergence cannot be established for any no regret dynamics. The reason that the FTL dynamics converges slowly is that rule (3) depends only on the opinions of the neighbors that agent i meets, on \(\alpha _i\), and on \(s_i\). The same is true for any update rule that ensures no regret to the agents (see Sect. 5). As already mentioned, we call this larger class of update rules graph oblivious, and we prove that fast convergence cannot be established for graph oblivious dynamics.
Definition 5
(graph oblivious update rule) A graph oblivious update rule A is a sequence of functions \((A_t)_{t=0}^\infty \) where \(A_t: [0,1]^{t+2}\mapsto [0,1]\).
Definition 6
(graph oblivious dynamics) Let A be a graph oblivious update rule. For a given instance \(I=(P,s,\alpha )\) the rule A produces a graph oblivious dynamics \(x_A(t)\) defined as follows:

Initially each agent i selects her opinion \(x_i^A(0)=A_0(s_i,\alpha _i)\)

At round \(t\ge 1\), each agent i selects her opinion
$$\begin{aligned}x_i^A(t)=A_t\left(x_{W_i^0}(0),\dots ,x_{W_i^{t-1}}(t-1),s_i,\alpha _i\right),\end{aligned}$$where \(W_i^t\) is the neighbor that i meets at round t.
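For concreteness, the FTL rule of Sect. 3 is graph oblivious in exactly this sense. A sketch of it in the interface of Definition 5 (our own illustration):

```python
def ftl_rule(observed, s_i, alpha_i):
    """A graph-oblivious update rule: the output depends only on the
    opinions observed so far, the internal opinion s_i and the
    self-confidence alpha_i -- never on the weights p_ij or on which
    neighbor produced each observation."""
    if not observed:  # A_0(s_i, alpha_i): start from the internal opinion
        return s_i
    avg = sum(observed) / len(observed)
    return (1 - alpha_i) * avg + alpha_i * s_i
```

Note that the fast rule of Sect. 6 does not fit this interface, since it also consumes the indices of the met neighbors and the i-th row of P.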
Theorem 3 states that for any graph oblivious dynamics there exists an instance \(I = (P,s,\alpha )\), where roughly \(\varOmega (1/\varepsilon )\) rounds are required to achieve convergence within error \(\varepsilon \).
Theorem 3
Let A be a graph oblivious update rule, which all agents use to update their opinions. For any \(c>0\) there exists an instance \(I=(P,s,\alpha )\) such that
where \(x_A(t)\) denotes the opinion vector produced by A for the instance \(I=(P,s,\alpha )\).
To prove Theorem 3, we show that graph oblivious rules whose dynamics converge fast imply the existence of estimators for Bernoulli distributions with “small” sample complexity. The key part of the proof lies in Lemma 6, in which it is proven that such estimators cannot exist. We also briefly discuss two wellknown sample complexity lower bounds from the statistics literature and explain why they do not work in our case.
In Sect. 6, we present a simple update rule that achieves error rate \(\textrm{e}^{-\widetilde{\varOmega }(\sqrt{t})}\). This update rule is a function of the opinions and the indices of the neighbors that i met, of \(s_i\) and \(\alpha _i\), and of the i-th row of the matrix P. Obviously this rule is not graph oblivious, due to its dependence on the i-th row and on the indices; indeed, it does not ensure no regret to an agent that adopts it (see Example 1 in Sect. 6). However, it reveals that slow convergence is not a generic property of the limited information dynamics, but comes with the assumption that agents act selfishly.
3 Convergence Rate of FTL Dynamics
In this section we prove Theorem 1 which bounds the convergence time of FTL dynamics to the unique equilibrium point \(x^*\). Notice that for an instance \(I=(P,s,\alpha )\), the opinion vector \(x(t) \in [0,1]^n\) of the FTL dynamics (see Dynamics 1) can be written equivalently as follows:

Initially all agents adopt their internal opinion, \(x_i(0)=s_i\).

At round \(t \ge 1\), each agent i updates her opinion
$$\begin{aligned} x_i(t)=(1-\alpha _i)\sum _{\tau =0}^{t-1} \frac{x_{W_i^\tau }(\tau )}{t}+ \alpha _i s_i, \end{aligned}$$where \(W_i^\tau \) is the neighbor that i met at round \(\tau \).
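The dynamics above can be simulated directly; a minimal sketch (our own, not from the paper), assuming each row of P is either a probability distribution or all zeros for agents without outgoing edges:

```python
import numpy as np

def ftl_dynamics(P, s, alpha, rounds, rng):
    """Simulate the FTL dynamics: each agent keeps a running sum of the
    opinions of the neighbors she met and plays
    x_i(t) = (1 - a_i) * (average observed opinion) + a_i * s_i."""
    n = len(s)
    x = np.array(s, dtype=float)  # x(0) = s
    sums = np.zeros(n)
    for t in range(1, rounds + 1):
        x_prev = x.copy()         # meetings observe the opinions of round t-1
        for i in range(n):
            if P[i].sum() > 0:    # agents with no out-edges keep x_i = s_i
                j = rng.choice(n, p=P[i])
                sums[i] += x_prev[j]
                x[i] = (1 - alpha[i]) * sums[i] / t + alpha[i] * s[i]
    return x
```

Running this for many rounds brings x(t) close to the equilibrium \(x^*\) of the instance, as Theorem 1 predicts.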
Since the opinion vector x(t) is a random vector, the convergence metric used in Theorem 1 is \( {\varvec{\textrm{E}}}_{} \left[ \Vert x(t)  x^* \Vert _{\infty } \right] \) where the expectation is taken over the random meeting of the agents. At first we present a high level idea of the proof. We remind that the unique equilibrium \(x^* \in [0,1]^n\) of the instance \(I=(P,s,\alpha )\) satisfies the following equations for each agent \(i \in V\),
Since our metric is \( {\varvec{\textrm{E}}}_{} \left[ \Vert x(t)-x^* \Vert _{\infty } \right] \), we can use the above equations to bound \(x_i(t)-x_i^*\).
Now assume that, for all \(t\ge 1\),
$$\begin{aligned} \frac{1}{t}\sum _{\tau =0}^{t-1} x^*_{W_i^\tau } = \sum _{j \in N_i} p_{ij}x_j^*. \end{aligned}$$
Then, with simple algebraic manipulations, one can prove that \(\Vert x(t)-x^* \Vert _{\infty } \le e(t)\), where e(t) satisfies the recursive equation
$$\begin{aligned} e(t) = (1-\rho )\frac{1}{t}\sum _{\tau =0}^{t-1} e(\tau ), \end{aligned}$$
where \(\rho = \min _{i \in V} \alpha _i\). It follows that \(\Vert x(t)-x^* \Vert _{\infty } \le 1/t^\rho \), meaning that x(t) converges to \(x^*\). Obviously the latter assumption does not hold; however, since the \(W_i^{\tau }\) are independent random variables with \({\varvec{\textrm{P}}}\left[W_i^\tau = j\right]=p_{ij}\), the quantity
$$\begin{aligned} \left| \frac{1}{t}\sum _{\tau =0}^{t-1} x^*_{W_i^\tau } - \sum _{j \in N_i} p_{ij}x_j^* \right| \end{aligned}$$
tends to 0 with probability 1. In Lemma 1 we use this fact to obtain a similar recursive equation for e(t), and in Lemma 2 we upper bound its solution.
Lemma 1
Let e(t) be the solution of the recursion
$$\begin{aligned} e(t) = (1-\rho )\left( \frac{1}{t}\sum _{\tau =0}^{t-1} e(\tau ) + \delta (t)\right) , \end{aligned}$$
where \(e(0)=\Vert x(0) - x^*\Vert _{\infty }\), \(\delta (t) = \sqrt{\ln (\pi ^2n t^2/(6p))/t}\) and \(\rho = \min _{i \in V}\alpha _i\). Then, with probability at least \(1-p\), \(\Vert x(t) - x^* \Vert _{\infty } \le e(t)\) for all \(t\ge 1\).
Proof
At first we prove that, with probability at least \(1-p\), for all \(t \ge 1\) and all agents i:
$$\begin{aligned} \left| \frac{1}{t}\sum _{\tau =0}^{t-1} x^*_{W_i^\tau } - \sum _{j \in N_i} p_{ij}x_j^* \right| \le \delta (t). \end{aligned}$$(4)
The \(W_i^\tau \) are independent random variables with \({\varvec{\textrm{P}}}\left[W_i^\tau = j\right]=p_{ij}\) and \( {\varvec{\textrm{E}}}_{} \left[ x^*_{W_i^\tau } \right] = \sum _{j \in N_i} p_{ij} x^*_j\). Hence, by Hoeffding’s inequality, the probability that inequality (4) fails for agent i at round t is at most \(6p/(\pi ^2 n t^2)\). To bound the probability of error over all rounds \(t\ge 1\) and all agents i, we apply the union bound: since \(\sum _{t\ge 1} 1/t^2 = \pi ^2/6\), the total probability of error is at most \(n \cdot \frac{\pi ^2}{6} \cdot \frac{6p}{\pi ^2 n} = p\).
As a result, with probability at least \(1-p\), inequality (4) holds for all \(t\ge 1\) and all agents i. We now prove our claim by induction. Assume \(\Vert x(\tau )-x^* \Vert _{\infty } \le e(\tau )\) for all \(\tau \le t-1\). Then
We get (5) from the induction step and (6) from inequality (4). Similarly, we can prove that
As a result \(\Vert x(t)-x^* \Vert _{\infty } \le e(t)\) and the induction is complete. Therefore, with probability at least \(1-p\), \(\Vert x(t) - x^* \Vert _{\infty } \le e(t)\) for all \(t\ge 1\). \(\square \)
Now that we have obtained the recursive equation for the error, we can solve it using straightforward computation. The idea is to express the term \(e(t+1)\) in terms of the previous term e(t) and to apply this expression repeatedly to obtain a formula for e(t). The main technical difficulty is upper bounding the sums that arise during this computation. This is done in the following lemma.
Lemma 2
Let e(t) be a function satisfying the recursion
$$\begin{aligned} e(t) = (1-\rho )\left( \frac{1}{t}\sum _{\tau =0}^{t-1} e(\tau ) + \delta (t)\right) , \end{aligned}$$
where \(\delta (t) = \sqrt{\ln (D t^{5/2})/t} \), \(\delta (0) = 0 \), and \(D > \textrm{e}^{5/2}\) is a positive constant. Then
$$\begin{aligned} e(t) \le 2\sqrt{5}\, \frac{(\ln (Dt))^{3/2}}{t^{\min (\rho ,1/2)}}. \end{aligned}$$
Therefore, for all \(t \ge 6\):
$$\begin{aligned} e(t) \le 2\sqrt{5}\, \frac{(\ln D)^{3/2}(\ln t)^{3/2}}{t^{\min (\rho ,1/2)}}. \end{aligned}$$
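Under our reading, the recursion in question is \(e(t) = (1-\rho )\big (\frac{1}{t}\sum _{\tau <t} e(\tau ) + \delta (t)\big )\), which can be unrolled numerically in linear time by maintaining the running sum. A quick sketch (our own check, not from the paper; it takes the worst case \(e(0)=1\), and the constants in the usage below are arbitrary test values):

```python
import math

def solve_recursion(rho, D, T):
    """Unroll e(t) = (1 - rho) * ((1/t) * sum_{tau < t} e(tau) + delta(t))
    with e(0) = 1 (the worst case, since opinions lie in [0,1]) and
    delta(t) = sqrt(ln(D * t^{5/2}) / t), delta(0) = 0."""
    e = [1.0]
    running = 1.0  # sum of e(0), ..., e(t-1)
    for t in range(1, T + 1):
        delta = math.sqrt(math.log(D * t ** 2.5) / t)
        e.append((1 - rho) * (running / t + delta))
        running += e[-1]
    return e
```

For instance, with \(\rho = 0.3\) the computed tail decays roughly like \(t^{-0.3}\) up to polylogarithmic factors, which matches the behavior the lemma describes.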
Proof
Observe that for all \(t\ge 0\) the function e(t) satisfies the following recursive relation
For \(t=0\) we have that
Observe that for \(D>\textrm{e}^{2.5}\), \(\delta (t)\) is decreasing for all \(t\ge 1\). Therefore,
Also, note that
where in the last inequality we used the fact that \((t+1)/t \le 2\) for all \(t \ge 1\) and \(\ln t \le \ln (t+1)\). Thus, from equations (7) and (8) we get that for all \(t\ge 0\)
Now that we have expressed \(e(t+1)\) in terms of e(t), we can apply this expression to obtain a formula for e(t). We denote by \(H_t\) the tth partial sum of the harmonic series. To simplify notation, we define
In the following, we will make heavy use of the following elementary inequality, which holds for all \(p > 0\):
By “unrolling” the recurrence of Eq. 9 we obtain:
Next, by using (10) we obtain that
We now use the following well known upper and lower bounds for \(H_n\), which hold for all n and can be found in page 2 of [22]:
$$\begin{aligned} \ln n + \gamma \le H_n \le \ln n + \gamma + \frac{1}{2n}, \end{aligned}$$(11)
where \(\gamma \) is the Euler–Mascheroni constant. For \(n = t\), this immediately gives
Also, by (11) for \(n = t+1\) we have
which implies that
Putting everything together, we have obtained the following for all t
This implies that for all t and \(\tau \le t\), we have
Thus, we obtain:
Now observe that
Putting all these together, we obtain
Now the remaining task is to bound the sum on the right hand side. A standard way of bounding a sum of decreasing terms is with the corresponding Riemann integral. Indeed, we observe that
since, \(\tau \mapsto \frac{\sqrt{\ln (D\tau )}}{\tau ^{3/2\rho }}\) is a decreasing function of \(\tau \) for all \(\rho \in [0,1]\). To see that, notice that the derivative of this function is
where the inequality holds since \(\rho < 1\). Now, for any \(\tau \ge 1\) we have that
which implies that
meaning that the function is indeed decreasing for all \(\tau \ge 1\). To bound the integral in (12), we have to distinguish cases for \(\rho \). Intuitively, if \(\rho \) is small, then the fraction decays faster than 1/t, which translates to the overall integral being polylogarithmic. If \(\rho \) is large, then a polynomial term with a small exponent might arise in the calculation.

If \(\rho \le 1/2\) then
$$\begin{aligned} \int _{\tau =1}^t \tau ^\rho \frac{\sqrt{\ln (D\tau )}}{\tau ^{3/2}}\textrm{d}\tau \le \sqrt{\ln (Dt) } \int _{\tau =1}^t \frac{1}{\tau }\textrm{d}\tau = \sqrt{\ln (Dt) } \ln t \le (\ln (Dt))^{3/2} , \end{aligned}$$since \(\ln (Dt) > \ln t\) for all \(t \ge 1\). Hence
$$\begin{aligned} e(t)&\le \frac{e(0)}{t^\rho } + \frac{\sqrt{5}}{t^\rho } \sum _{\tau =1}^t\frac{\sqrt{\ln \tau }}{\tau ^{3/2-\rho }} \nonumber \\&\le \frac{e(0)}{t^\rho } + \frac{\sqrt{5}}{t^\rho }(\ln (Dt))^{3/2} \le 2\sqrt{5} \frac{(\ln (Dt))^{3/2}}{t^{\rho }} \,, \end{aligned}$$(13)where we used the fact that \(e(0) \le 1\) and \(\sqrt{5}(\ln (Dt))^{3/2} \ge \sqrt{5}(\ln (D))^{3/2}> 1\) for all \(t \ge 1\).

If \(\rho > 1/2\) then
$$\begin{aligned}&\int _{\tau =1}^t \tau ^\rho \frac{\sqrt{\ln (D\tau )}}{\tau ^{3/2}}\textrm{d}\tau = \int _{\tau =1}^t \tau ^{\rho -1/2}\frac{\sqrt{\ln (D\tau )}}{\tau }\textrm{d}\tau \\&\quad = \frac{2}{3} \int _{\tau =1}^t \tau ^{\rho -1/2}((\ln (D\tau ))^{3/2})'\textrm{d}\tau \\&\quad = \frac{2}{3}t^{\rho - 1/2}(\ln (Dt))^{3/2} - (\rho -1/2)\frac{2}{3} \int _{\tau =1}^t \tau ^{\rho -3/2}(\ln (D\tau ))^{3/2}\textrm{d}\tau \\&\quad \le \frac{2}{3} t^{\rho - 1/2}(\ln (Dt))^{3/2} \,. \end{aligned}$$Hence
$$\begin{aligned} e(t)&\le \frac{e(0)}{t^\rho } + \frac{\sqrt{5}}{t^\rho } \sum _{\tau =1}^t\frac{\sqrt{\ln (D\tau )}}{\tau ^{3/2-\rho }} \nonumber \\&\le \frac{e(0)}{t^\rho } + \frac{\sqrt{5}}{t^\rho }\frac{2}{3} t^{\rho - 1/2}(\ln (Dt))^{3/2} \le \frac{4\sqrt{5}}{3} \frac{(\ln (Dt))^{3/2}}{t^{1/2}} \,. \end{aligned}$$(14)
For the last inequality, we used the fact that \(\ln D >0\) to conclude that
for all \(t \ge 1\). Combining inequalities (13) and (14) yields that for all \(t\ge 1\)
which proves the first claim of the lemma. We would like the following inequality to be satisfied:
which is equivalent to
If \(\ln D > 2.5\), the right hand side is at most \(1 + 2/3\). Numerically, we observe that for \(t \ge 6\), \(\ln t \ge 1 + 2/3\). Thus, the second inequality of the lemma follows for \(t \ge 6\).
\(\square \)
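The case analysis above can also be checked numerically. A minimal sketch, with the illustrative values \(D = e^{2.5}\) (the smallest D allowed by \(\ln D > 2.5\)) and \(t = 2000\), compares the sum against the first summand plus the corresponding integral bound:

```python
import math

# Numeric sanity check of the two cases of Lemma 2's proof.
# D = e^{2.5} and t = 2000 are illustrative choices, not from the paper.
D, t = math.exp(2.5), 2000

def term(tau, rho):
    # the summand sqrt(ln(D*tau)) / tau^(3/2 - rho)
    return math.sqrt(math.log(D * tau)) / tau ** (1.5 - rho)

cases = [
    (0.4, math.log(D * t) ** 1.5),                                # case rho <= 1/2
    (0.8, (2 / 3) * t ** (0.8 - 0.5) * math.log(D * t) ** 1.5),   # case rho > 1/2
]
for rho, bound in cases:
    total = sum(term(tau, rho) for tau in range(1, t + 1))
    # the sum of a decreasing function is at most its first term plus the integral
    assert total <= term(1, rho) + bound, (rho, total, bound)
```

The first summand is added separately because the Riemann integral bound only covers the sum starting from \(\tau = 2\).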
An interesting consequence of Lemma 2 is that the rate of convergence is never better than \(1/\sqrt{t}\) regardless of the value of \(\rho \). In Sect. 5 we provide evidence that no reasonable protocol can achieve a better convergence rate.
We are now ready to prove Theorem 1.
Theorem 1
Let \(I = (P,s, \alpha )\) be an instance of the opinion formation game of Definition 1 with equilibrium \(x^* \in [0,1]^n\). The opinion vector \(x(t)\in [0,1]^n\) produced by update rule (3) after t rounds satisfies
where \(\rho = \min _{i \in V} a_i\), C is a universal constant, and \(t \ge 6\).
Proof
By Lemma 1 we have that for all \(t\ge 1\) and \(p \in [0,1]\),
where \(e_p(t)\) is the solution of the recursion
with \(\delta (t)=\sqrt{ \frac{\ln (\pi ^2 n t^2/(6 p))}{t}}\). Setting \(p=\frac{1}{12\sqrt{t}}\) we have that
where e(t) is the solution of the recursion
with \(\delta (t)=\sqrt{\frac{\ln (2\pi ^2 n t^{2.5})}{t}}\). Since \(2\pi ^2 \ge e^{2.5}\), Lemma 2 applies and
for some universal constant C and for all \(t\ge 6\). Finally,
\(\square \)
Hence, FTL dynamics converges to the same equilibrium point as the original FJ model, albeit more slowly. In the next section we justify why this strategy is a natural one for the players to adopt, given that they operate in an adversarial environment.
4 Follow the Leader Ensures No Regret
In this section we provide rigorous definitions of no regret algorithms and explain why update rule (3) ensures no regret to any agent that repeatedly plays the game of Definition 1. Based on the cost that the agents experience, we consider an appropriate Online Convex Optimization problem. This problem can be viewed as a “game” played between an adversary and a player. At round \(t\ge 0\),

1.
the player selects a value \(x_t \in [0,1]\).

2.
the adversary observes \(x_t\) and selects a \(b_t \in [0,1]\).

3.
the player receives cost \(f(x_t,b_t)=(1-\alpha )(x_t-b_t)^2 + \alpha (x_t - s)^2\).
where \(s,\alpha \) are constants in [0, 1]. The goal of the player is to pick \(x_t\) based on the history \((b_0,\ldots ,b_{t1})\) in a way that minimizes her total cost. Generally, different OCO problems can be defined by a set of functions \(\mathcal {F}\) that the adversary chooses from and a feasibility set \(\mathcal {K}\) from which the player picks her value (see [26] for an introduction to the OCO framework). In our case the feasibility set is \(\mathcal {K}=[0,1]\) and the set of functions is
As a result, each selection of the constants \(s,\alpha \) leads to a different OCO problem.
Definition 7
An algorithm A for the OCO problem with \(\mathcal {F}_{s,\alpha }\) and \(\mathcal {K}=[0,1]\) is a sequence of functions \((A_t)_{t=0}^\infty \) where \(A_t:[0,1]^t \mapsto [0,1]\).
Definition 8
An algorithm A is no regret for the OCO problem with \(\mathcal {F}_{s,\alpha }\) and \(\mathcal {K}=[0,1]\) if and only if for all sequences \((b_t)_{t=0}^\infty \) that the adversary may choose, if \(x_t = A_t(b_0,\dots ,b_{t-1})\) then for all t,
Informally speaking, if the player selects the value \(x_t\) according to a no regret algorithm, then she does not regret not playing any fixed value, no matter what the choices of the adversary are. Theorem 2 states that Follow the Leader, i.e.
is a no regret algorithm for all the OCO problems with \(\mathcal {F}_{s,\alpha }\).
Returning to the dynamics of the game in Definition 1, it is reasonable to assume that each agent i selects \(x_i(t)\) according to a no regret algorithm \(A_i\) for the OCO problem with \(\mathcal {F}_{s_i,\alpha _i}\), since by Definition 8,
The latter means that the time averaged total disagreement cost that she suffers is close to the time averaged cost of expressing the best fixed opinion, and this holds regardless of the opinions of the neighbors that i meets. This means that even if the other agents selected their opinions maliciously, her total experienced cost would still be, in a sense, minimal. From this perspective, update rule (3) is a rational choice for selfish agents, and as a result FTL dynamics is a natural limited information variant of the FJ model. We would like to prove the following.
Theorem 2
Consider the function \(f:[0,1]^2 \mapsto [0,1]\) with
for some constants \(s,\alpha \in [0,1]\). Let \((b_t)_{t=0}^\infty \) be an arbitrary sequence with \(b_t \in [0,1]\). If
then for all t,
We now present the key steps for proving Theorem 2. We first prove that a similar strategy that also takes into account the value \(b_t\) admits no regret (Lemma 3). Obviously, knowing the value \(b_t\) before selecting \(x_t\) is in direct contrast with the OCO framework; however, the no regret property of this algorithm easily extends to the no regret property of Follow the Leader. Theorem 2 follows by direct application of Lemma 4.
Lemma 3
Let \((b_t)_{t=0}^\infty \) be an arbitrary sequence with \(b_t \in [0,1]\). Let \(y_t = \mathop {{{\,\textrm{argmin}\,}}}\limits _{x \in [0,1]}\sum _{\tau =0}^tf(x,b_\tau )\). Then for all t,
Proof
By definition of \(y_t\), \(\sum _{\tau =0}^t f(y_t,b_\tau )=\min _{ x \in [0,1]} \sum _{\tau =0}^t f(x,b_\tau )\), so
The last inequality follows from the fact that \(y_{t-1} = \mathop {{{\,\textrm{argmin}\,}}}\limits _{x \in [0,1]}\sum _{\tau =0}^{t-1}f(x,b_\tau )\). Inductively, we prove that \(\sum _{\tau =0}^t f(y_\tau ,b_\tau ) \le \min _{ x \in [0,1]} \sum _{\tau =0}^t f(x,b_\tau )\). \(\square \)
Now we can understand why Follow the Leader admits no regret. Since the cost incurred by the sequence \(y_t\) is at most that of the best fixed value, we can compare the cost incurred by \(x_t\) with that of \(y_t\). Since the functions in \(\mathcal {F}_{s,\alpha }\) are quadratic, the extra term \(f(x,b_t)\) that \(y_t\) takes into account does not change the minimum of the total sum dramatically. Namely, \(x_t\) and \(y_t\) are relatively close. Hence, the costs incurred by the two sequences are not very different.
Lemma 4
For all \(t\ge 0\), \( f(x_t,b_t) \le f(y_t,b_t) + 2\frac{1-\alpha }{t+1} + \frac{(1-\alpha )^2}{(t+1)^2} \).
Proof
We first prove that the two sequences are close. Namely, for all t,
By definition \(x_t = \alpha s + (1-\alpha )\frac{\sum _{\tau = 0}^{t-1} b_\tau }{t}\) and \( y_t = \alpha s + (1-\alpha )\frac{\sum _{\tau = 0}^t b_\tau }{t+1}\).
The last inequality follows from the fact that \(b_\tau \in [0,1]\). We now use inequality (15) to bound the difference \( f(x_t,b_t) - f(y_t,b_t) \). Since f is a quadratic function, the bound follows by direct calculation.
\(\square \)
We are now ready to prove that FTL dynamics has the no regret property.
Theorem 2
Consider the function \(f:[0,1]^2 \mapsto [0,1]\) with
for some constants \(s,\alpha \in [0,1]\). Let \((b_t)_{t=0}^\infty \) be an arbitrary sequence with \(b_t \in [0,1]\). If
then for all t,
Proof
Theorem 2 easily follows by combining Lemma 3 and Lemma 4.
\(\square \)
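As a quick empirical check of Theorem 2, the sketch below simulates FTL against an adversarial alternating sequence. The values \(s = 0.3\), \(\alpha = 0.5\), the horizon T, and the sequence \(b_t\) are illustrative choices, not part of the analysis above:

```python
import math

# Simulate Follow the Leader on the OCO problem with F_{s,alpha}.
# s, alpha, T and the adversarial sequence b_t are illustrative.
s, alpha, T = 0.3, 0.5, 10_000

def f(x, b):
    # the cost f(x, b) = (1 - alpha)(x - b)^2 + alpha(x - s)^2
    return (1 - alpha) * (x - b) ** 2 + alpha * (x - s) ** 2

ftl_cost, run = 0.0, 0.0          # run = sum of b_0, ..., b_{t-1}
for t in range(T):
    b_t = float(t % 2)            # adversary alternates between 0 and 1
    # closed-form FTL play: x_t = alpha*s + (1-alpha)*mean of past b's
    x_t = s if t == 0 else alpha * s + (1 - alpha) * run / t
    ftl_cost += f(x_t, b_t)
    run += b_t

# best fixed opinion in hindsight: minimizer of the total quadratic cost
x_star = alpha * s + (1 - alpha) * run / T
fixed_cost = sum(f(x_star, float(t % 2)) for t in range(T))

regret = ftl_cost - fixed_cost
assert regret <= math.log(T) + 2  # O(log t) total regret, as Lemma 4 suggests
assert regret / T < 0.01          # the time-averaged regret vanishes
```

The observed total regret grows only logarithmically in T, matching the per-round \(O(1/t)\) overhead established in Lemma 4.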
In the next section, we prove that FTL dynamics is close to the fastest possible no regret protocol for the problem of opinion formation.
5 Lower Bound for Graph Oblivious Dynamics
In this section we prove that no regret dynamics cannot converge much faster than FTL dynamics (Dynamics 1).
Definition 9
(no regret dynamics) Consider a collection of no regret algorithms such that for each \((s,\alpha ) \in [0,1]^2\) a no regret algorithm \(A_{s,\alpha }\) (see Footnote 1) for the OCO problem with \(\mathcal {F}_{s,\alpha }\) and \(\mathcal {K}=[0,1]\) is selected. For a given instance \(I=(P,s,\alpha )\) this selection produces the no regret dynamics x(t) defined as follows:

Initially each agent i selects her opinion \(x_i(0)=A_0^{s_i,\alpha _i}(s_i,\alpha _i)\)

At round \(t\ge 1\), each agent i selects her opinion
$$\begin{aligned}x_i(t)=A_t^{s_i,\alpha _i}\left(x_{W_i^0}(0),\dots ,x_{W_i^{t-1}}(t-1),s_i,\alpha _i\right)\end{aligned}$$where \(W_i^t\) is the neighbor that i meets at round t.
Such a selection of no regret algorithms can be encoded as a graph oblivious update rule. Specifically, the function \(A_t:[0,1]^{t+2} \mapsto [0,1]\) is defined as \(A_t(b_0,\ldots ,b_{t-1},s,\alpha ) = A_t^{s,\alpha }(b_0,\ldots ,b_{t-1})\). Theorem 3 applies and establishes the existence of an instance \(I=(P,s,\alpha )\) such that the produced x(t) converges only slowly to \(x^*\).
The rest of the section is dedicated to proving Theorem 3. In Lemma 5 we show that any graph oblivious update rule A can be used as an estimator of the parameter \(p \in [0,1] \) of a Bernoulli random variable. Since we prove Theorem 3 via a reduction to an estimation problem, we first briefly introduce some definitions and notation. For simplicity, we restrict the following definitions of estimators and risk to the case of estimating the parameter p of a Bernoulli random variable. Given t independent samples from a Bernoulli random variable B(p), an estimator is an algorithm that takes these samples as input and outputs an answer in [0, 1].
Definition 10
An estimator \(\theta =(\theta _t)_{t=1}^{\infty }\) is a sequence of functions, \(\theta _t: \{0,1\}^t\mapsto [0,1]\).
Perhaps the first estimator that comes to one’s mind is the sample mean, that is \(\theta _t = \sum _{i=1}^t X_i/t\). To measure the efficiency of an estimator we define the risk, which corresponds to the expected error of an estimator.
Definition 11
Let P be a Bernoulli distribution with mean p and \(P^t\) be the corresponding t-fold product distribution. The risk of an estimator \(\theta =(\theta _t)_{t=1}^\infty \) is \( {\varvec{\textrm{E}}}_{(X_1,\ldots ,X_t) \sim P^t} \left[ \left| \theta _t(X_1,\ldots ,X_t) - p \right| \right] \), which we will denote by
for brevity.
The risk \( {\varvec{\textrm{E}}}_{p} \left[ \left| \theta _t - p \right| \right] \) quantifies how fast the estimate \(\hat{p} =\theta _t(X_1,\ldots ,X_t)\) approaches the real parameter p as the number of samples t grows. Since p is unknown, any meaningful estimator \(\theta =(\theta _t)_{t=1}^\infty \) must guarantee that \(\lim _{t \rightarrow \infty } {\varvec{\textrm{E}}}_{p} \left[ \left| \theta _t - p \right| \right] =0\) for all p. For example, the sample mean has error rate
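For illustration, the \(O(1/\sqrt{t})\) decay of the sample mean's risk is easy to observe by simulation. The parameter \(p = 0.3\), the sample sizes, and the trial count below are illustrative choices:

```python
import math, random

# Monte Carlo estimate of the risk E|theta_t - p| of the sample mean.
# p, the sample sizes t, and the number of trials are illustrative.
random.seed(0)
p, trials = 0.3, 5000

for t in [25, 100, 400]:
    risk = sum(abs(sum(random.random() < p for _ in range(t)) / t - p)
               for _ in range(trials)) / trials
    # by Jensen: E|mean - p| <= sqrt(p(1-p)/t) <= 1/(2 sqrt(t))
    assert risk <= 0.5 / math.sqrt(t)
```

Doubling the exponent of t in the sample sizes halves the observed risk, consistent with the \(1/\sqrt{t}\) rate.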
Lemma 5
Let A be a graph oblivious update rule such that for all instances \(I=(P,s,\alpha )\),
Then there exists an estimator \(\theta _A=(\theta _t^A)_{t=1}^\infty \) such that for all \(p \in [0,1]\),
Proof
We construct an estimator \(\theta _A = (\theta ^A_t)_{t=1}^\infty \) using the update rule A. Consider the instance \(I_p\) described in Fig. 1. By straightforward computation, we get that the equilibrium point of the graph is \(x_c^* = p/3, x_1^* = p/6+1/2, x_0^* = p/6\). Now consider the opinion vector \(x_A(t)\) produced by the update rule A for the instance \(I_p\). Note that for \(t \ge 1\),

\(x_1^A(t)=A_t(x_c(0),\ldots ,x_c(t-1),1,1/2)\)

\(x_0^A(t)=A_t(x_c(0),\ldots ,x_c(t-1),0,1/2)\)

\(x_c^A(t)=A_t(x_{W_c^0}(0),\ldots ,x_{W_c^{t-1}}(t-1),0,1/2)\)
The key observation is that the opinion vector \(x_A(t)\) is a deterministic function of the index sequence \(W_c^0,\ldots ,W_c^{t-1}\) and does not depend on p. Thus, we can construct the estimator \(\theta _A\) with \(\theta _t^A(W_c^0,\ldots ,W_c^{t-1}) = 3x_c^A(t)\). For a given instance \(I_p\) the choice of neighbor \(W_c^t\) is given by the value of a Bernoulli random variable with parameter p (\({\varvec{\textrm{P}}}\left[W_c^t=1\right]=p\)). As a result,
Since for any instance \(I_p\), we have that
it follows that
for all \(p \in [0,1]\). \(\square \)
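Since Fig. 1 is not reproduced here, the following sketch reconstructs the instance \(I_p\) from the stated equilibrium (an assumption about the exact weights: agents c, 0, 1 with \(s_c = s_0 = 0\), \(s_1 = 1\), all \(\alpha = 1/2\); agent c meets agent 1 with probability p and agent 0 with probability \(1-p\), while agents 0 and 1 always meet c) and verifies \(x_c^* = p/3\), \(x_0^* = p/6\), \(x_1^* = p/6+1/2\) numerically:

```python
# Reconstruction of the instance I_p (an assumption inferred from the
# stated equilibrium; Fig. 1 itself is not available here).
def equilibrium(p, iters=200):
    # synchronous FJ iteration; a contraction since all alpha_i = 1/2 > 0
    xc = x0 = x1 = 0.0
    for _ in range(iters):
        xc, x0, x1 = 0.5 * (p * x1 + (1 - p) * x0), 0.5 * xc, 0.5 * (1 + xc)
    return xc, x0, x1

for p in [0.0, 0.25, 0.7, 1.0]:
    xc, x0, x1 = equilibrium(p)
    assert abs(xc - p / 3) < 1e-9        # x_c* = p/3
    assert abs(x0 - p / 6) < 1e-9        # x_0* = p/6
    assert abs(x1 - (p / 6 + 0.5)) < 1e-9  # x_1* = p/6 + 1/2
```

The check confirms that \(3 x_c^A(t)\) is the natural rescaling of the stable opinion of agent c into an estimate of p.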
In order to prove Theorem 3 we just need to prove the following claim.
Claim
For any estimator \(\theta = (\theta _t)_{t=1}^\infty \) there exists a \(p \in [0,1]\) such that \( \lim _{t \rightarrow \infty } t^{1+c} {\varvec{\textrm{E}}}_{p} \left[ \left| \theta _t - p \right| \right] > 0. \)
The above claim states that for any estimator \(\theta =(\theta _t)_{t=1}^\infty \), we can inspect the functions \(\theta _t: \{0,1\}^t \mapsto [0,1]\) and then choose a \(p \in [0,1]\) such that \( {\varvec{\textrm{E}}}_{p} \left[ \left| \theta _t-p \right| \right] = \varOmega (1/t^{1+c})\). As a result, we have reduced the construction of a lower bound on the round complexity of a dynamical process to a lower bound on the sample complexity of estimating the parameter p of a Bernoulli distribution. The claim follows from Lemma 6, which we present at the end of the section.
At this point we should mention that it is known that \(\varOmega (1/\varepsilon ^2)\) samples are needed to estimate the parameter p of a Bernoulli random variable within additive error \(\varepsilon \). Another well-known result is that taking the average of the samples is the best way to estimate the mean of a Bernoulli random variable. These results would indicate that the best possible rate of convergence for a graph oblivious dynamics is \(O(1/\sqrt{t})\). However, there is some fine print in these results that does not allow us to use them directly. In order to explain the various limitations of these methods and results we briefly discuss some of them. We remark that this discussion is not needed to understand the proof of Lemma 6.
The oldest sample complexity lower bound for estimation problems is the well-known Cramér-Rao inequality. Let \(\theta _t: \{0,1\}^t \mapsto [0,1]\) be a function such that \( {\varvec{\textrm{E}}}_{p} \left[ \theta _t \right] =p\) for all \(p \in [0,1]\); then
Since \( {\varvec{\textrm{E}}}_{p} \left[ \left| \theta _t - p \right| \right] \) can be lower bounded by \( {\varvec{\textrm{E}}}_{p} \left[ (\theta _t - p)^2 \right] \), we can apply the Cramér-Rao inequality and prove our claim in the case of unbiased estimators, i.e. when \( {\varvec{\textrm{E}}}_{p} \left[ \theta _t \right] =p\) for all t. Obviously, we need to prove the claim for any estimator \(\theta \); however, this is a first indication that our claim holds.
Sample complexity lower bounds without assumptions about the estimator are usually given as lower bounds for the minimax risk, which was defined (see Footnote 2) by Wald in [39] as
Minimax risk captures the idea that after we pick the best possible algorithm, an adversary inspects it and picks the worst possible \(p \in [0,1]\) to generate the samples that our algorithm gets as input. The methods of Le Cam, Fano, and Assouad are well-known information-theoretic methods to establish lower bounds for the minimax risk. For more on these methods see [38, 41]. As we stated before, it is well known that the minimax risk for estimating the mean of a Bernoulli is lower bounded by \(\varOmega (1/\sqrt{t})\), and this lower bound can be established by Le Cam’s method. In order to show why such results do not work for our purposes, we sketch how one would apply Le Cam’s method to obtain this lower bound. To apply Le Cam’s method, one typically chooses two Bernoulli distributions whose means are far apart but whose total variation distance is small. Le Cam showed that when two distributions are close in total variation, then given a sequence of samples \(X_1, \ldots , X_t\) it is hard to tell whether these samples were produced by \(P_1\) or \(P_2\). The hardness of this testing problem implies the hardness of estimating the parameters of a family of distributions. For our problem the two distributions would be \(B(1/2 - 1/\sqrt{t})\) and \(B(1/2 + 1/\sqrt{t})\). It is not hard to see that their total variation distance is at most O(1/t), which implies a lower bound \(\varOmega (1/\sqrt{t})\) for the minimax risk. The problem here is that the parameters of the two distributions depend on the number of samples t. The more samples the algorithm gets to see, the closer the adversary takes the two distributions to be. For our problem we would like to fix an instance and then argue about the rate of convergence of any algorithm on this instance. Namely, having an instance that depends on t does not work for us.
To obtain a lower bound without assumptions about the estimators, while respecting our need for a fixed p (independent of t), we prove Lemma 6. In fact, we show something stronger: for almost all \(p \in [0,1]\), no estimator \(\theta \) can achieve rate \(o(1/t^{1+c})\).
Lemma 6
Let \(\theta =(\theta _t)_{t=1}^\infty \) be a Bernoulli estimator with error rate \( {\varvec{\textrm{E}}}_{p} \left[ \left| \theta _t - p \right| \right] \). For any \(c>0\), if we select p uniformly at random in [0, 1] then
with probability 1.
Proof
Since \(\theta _t\) is a function from \(\{0,1\}^t\) to [0, 1], \(\theta _t\) can take at most \(2^t\) different values. Without loss of generality, we assume that \(\theta _t\) takes the same value \(\theta _t(x)\) for all \(x \in \{0,1\}^t\) with the same number of 1’s. For example, \(\theta _3(1,0,0)=\theta _3(0,1,0)=\theta _3(0,0,1)\). This is due to the fact that for any \(p \in [0,1]\), by Jensen’s inequality, we have
Therefore, for any estimator \(\theta \) with error rate \( {\varvec{\textrm{E}}}_{p} \left[ \left| \theta _t - p \right| \right] \) there exists another estimator \(\theta '\) that satisfies the above property and
for all \(p \in [0,1]\). Thus, we can assume that \(\theta _t\) takes at most \(t+1\) different values.
Let A denote the set of p for which the estimator has error rate \(o(1/t^{1+c})\), that is
We show that if we select p uniformly at random in [0, 1] then \({\varvec{\textrm{P}}}\left[p \in A\right] = 0\). We also define the set
Observe that if \(p \in A\) then there exists \(t_p\) such that \(p \in A_{t_p}\), meaning that \(A \subseteq \bigcup _{k=1}^{\infty }A_k\). As a result,
To complete the proof we show that \({\varvec{\textrm{P}}}\left[p \in A_k\right]=0\) for all k. Notice that \(p \in A_k\) implies that for \(t \ge k\), the estimator \(\theta \) must always have a value \(\theta _t(i)\) close to p. Using this intuition we define the set
We now show that \(A_k \subseteq B_k\). Since \(p \in A_k\) we have that for all \(t\ge k\)
Thus, \({\varvec{\textrm{P}}}\left[p \in A_k\right] \le {\varvec{\textrm{P}}}\left[p \in B_k\right]\). We write the set \(B_k\) as
As a result,
Each value \(\theta _t(i)\) “covers” length \(1/t^{1+c}\) from its left and right, as shown in Fig. 2, and since there are at most \(t+1\) such values,
by the union bound we get \({\varvec{\textrm{P}}}\left[p \in B_k\right] \le 2(t+1)/t^{1+c}\), for all \(t \ge k\). More formally, for a fixed i we get
since p is picked uniformly at random. By the union bound, we have that
We conclude that \({\varvec{\textrm{P}}}\left[p \in B_k\right] =0\). \(\square \)
Lemma 6 essentially shows that we cannot construct a protocol that is graph oblivious and converges exponentially fast to the equilibrium, as the dynamics of the original FJ model does. However, as we show in the next section, even a small amount of information about the topology of the graph leads to faster protocols.
6 Limited Information Dynamics with Fast Convergence
We already discussed that the reason that graph oblivious dynamics suffer slow convergence is that the update rule depends only on the observed opinions. Based on works for asynchronous distributed minimization algorithms [7, 9], we provide an update rule showing that information about the graph G combined with agents that do not act selfishly can restore the fast convergence rate. Our update rule depends not only on the expressed opinions of the neighbors that an agent i meets, but also on the ith row of matrix P.
In update rule (6), each agent stores the most recent opinions of the random neighbors that she meets in an array and then updates her opinion according to their weighted sum (each agent knows row i of P). For a given instance \(I=(P,s,\alpha )\) we call the produced dynamics Row Dependent dynamics (Dynamics 2). We have already mentioned that while this update rule guarantees fast convergence it does not guarantee no regret to the agents. To make this concrete we include a simple example.
Example 1
The purpose of this example is to illustrate that update rule (6) does not ensure the no regret property. If some agents, for various reasons, exhibit irrational or adversarial behavior, agents that adopt update rule (6) may experience regret. This is the reason that Row Dependent dynamics can converge exponentially faster than any no regret dynamics, including FTL dynamics.
Consider the instance of the game of Definition 1 consisting of two agents. Agent 1 adopts update rule (6) and has \(s_1=0,\alpha _1=1/2,p_{12}=1\), and agent 2 plays adversarially. Thus, \(s_2,\alpha _2,p_{21}\) need not be specified. By update rule (6), \(x_1(t)=x_2(t-1)/2\), and thus the total disagreement cost that agent 1 experiences until round t is
Since agent 2 plays adversarially, she selects \(x_2(t)=0\) if t is even and 1 otherwise. As a result, the total cost that agent 1 experiences is \(\sum _{\tau =0}^t \frac{1}{2}x_1(\tau )^2+\frac{1}{2}(x_1(\tau ) - x_2(\tau ))^2 \simeq 3t/8\). Agent 1 now regrets not adopting the fixed opinion 1/3 during the whole game play. Selecting \(x_1(t)=1/3\) for all t would incur total disagreement cost
which is less than 3t/8.
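The calculation in Example 1 can be replayed in a few lines. The horizon T and the initial opinion \(x_1(0)=0\) are illustrative choices:

```python
# Simulation of Example 1: agent 1 uses update rule (6), which here reduces
# to x1(t) = x2(t-1)/2 (since s1 = 0, alpha1 = 1/2, p12 = 1), against the
# alternating adversary. T is an illustrative even horizon.
T = 100_000

def cost(x1, x2):
    # agent 1's per-round disagreement cost with alpha1 = 1/2
    return 0.5 * x1 ** 2 + 0.5 * (x1 - x2) ** 2

rule6_cost = fixed_cost = 0.0
x1 = 0.0                                   # agent 1 starts at s1 = 0 (assumption)
for t in range(T):
    x2 = 0.0 if t % 2 == 0 else 1.0        # adversarial opinion of agent 2
    rule6_cost += cost(x1, x2)
    fixed_cost += cost(1.0 / 3.0, x2)      # the fixed opinion 1/3 in hindsight
    x1 = x2 / 2                            # update rule (6) for the next round

assert abs(rule6_cost / T - 3 / 8) < 1e-3   # total cost ~ 3t/8, as claimed
assert abs(fixed_cost / T - 7 / 36) < 1e-3  # fixed opinion 1/3 pays ~ 7t/36
assert fixed_cost < rule6_cost              # so agent 1 experiences regret
```

The fixed opinion 1/3 pays roughly \(7t/36 < 3t/8\), which makes the regret of agent 1 concrete.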
The problem with the approach of Row Dependent dynamics is that the opinions of the neighbors that an agent keeps in her array are outdated, i.e. the opinion of a neighbor of agent i may have changed since their last meeting. The good news is that as long as this outdatedness is bounded, we can still achieve fast convergence to the equilibrium. By bounded outdatedness we mean that there exists a number of rounds B such that all agents have met all their neighbors at least once between rounds \(t-B\) and t. This is formally stated in Lemma 7, which shows that if such a B exists, then the protocol converges exponentially fast to \(x^*\). For convenience, we call such a sequence of B rounds an epoch.
Remark 1
Update rule (6), apart from the opinions and the indices of the neighbors that an agent meets, also depends on the exact values of the weights \(p_{ij}\), and that is why Row Dependent dynamics converge fast. We mention that the lower bound of Sect. 5 still holds even if the agents also use the indices of the neighbors that they meet to update their opinions, since Lemma 5 can be easily modified to cover this case. The latter implies that any update rule that ensures fast convergence requires each agent i to be aware of the ith row of matrix P.
The idea behind the proof of Lemma 7 is simple: if during each epoch of B rounds an agent meets all her neighbors at least once, then at the end of the epoch a full step of the original FJ dynamics will have been computed. This means that convergence will be slower than that of the FJ model by a factor of B.
Lemma 7
Let \(\rho = \min _i a_i\), and let \(\pi _{ij}(t)\) be the most recent round before round t at which agent i met her neighbor j. If for all \(t\ge B\), \(t-B \le \pi _{ij}(t)\), then for all \(t \ge k B\),
Proof
To prove our claim we use induction on k. For the induction base \(k=1\),
Assume that for all \(t\ge (k-1)B\) we have that \(\Vert x(t)-x^* \Vert _{\infty }\le (1-\rho )^{k-1}\). For \(k\ge 2\), we again have that
Since \(t-B \le \pi _{ij}(t)\) and \(t \ge kB\), we obtain that \(\pi _{ij}(t) \ge (k-1)B\). As a result, the inductive hypothesis applies, so \(|x_j(\pi _{ij}(t))-x_j^*| \le (1-\rho )^{k-1}\) and \(|x_i(t) - x_i^*|\le (1-\rho )^k\). \(\square \)
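A sketch of Lemma 7 in action: under a deterministic round-robin meeting schedule (an illustrative setting where the lemma's requirement holds with certainty, with B equal to the maximum degree), the error after t rounds stays below \((1-\rho )^{\lfloor t/B \rfloor }\). The small instance below is an arbitrary example:

```python
# Sanity check of Lemma 7 under a deterministic round-robin schedule
# (illustrative; the actual dynamics uses random meetings). With round
# robin, every neighbor is met within B = max degree rounds.
n = 3
P = [[0.0, 0.5, 0.5], [0.5, 0.0, 0.5], [0.5, 0.5, 0.0]]  # complete graph, uniform weights
s = [0.0, 0.5, 1.0]                                       # intrinsic opinions (illustrative)
a = [0.5, 0.4, 0.6]                                       # self-confidence weights alpha_i
rho, B = min(a), n - 1

# the equilibrium x* of the FJ model, via synchronous iteration (a contraction)
xs = s[:]
for _ in range(400):
    xs = [a[i] * s[i] + (1 - a[i]) * sum(P[i][j] * xs[j] for j in range(n))
          for i in range(n)]

# asynchronous dynamics: each agent stores the last opinion seen per neighbor
x = s[:]
stored = [x[:] for _ in range(n)]
T = 40
for t in range(1, T + 1):
    x_prev = x[:]
    for i in range(n):
        j = [k for k in range(n) if k != i][(t - 1) % B]  # round-robin neighbor choice
        stored[i][j] = x_prev[j]
        x[i] = a[i] * s[i] + (1 - a[i]) * sum(P[i][k] * stored[i][k] for k in range(n))

err = max(abs(x[i] - xs[i]) for i in range(n))
assert err <= (1 - rho) ** (T // B) + 1e-9                # the bound of Lemma 7
```

With T = 40 and B = 2 the bound is \((1-\rho )^{20}\), and the simulated error indeed stays below it.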
In Row Dependent dynamics there does not exist a fixed length window B that satisfies the requirements of Lemma 7. However, we can select a window length such that the requirements hold with high probability. To do this, observe that agent i needs to collect the opinions of all of her neighbors, which resembles the coupon collector problem. We first state a useful fact concerning this problem, whose proof uses just elementary probability.
Lemma 8
(see e.g. [34]) Suppose that the collector picks coupons with different probabilities, where n is the number of distinct coupons. Let w be the minimum of these probabilities. If the collector selects \(\ln n/w+ c/w\) coupons, then:
It is now clear that each agent i essentially has to wait to meet the neighbor j with the smallest weight \(p_{ij}\). Therefore, after roughly \(\log (1/\delta )/\min _{j \in N_i} p_{ij}\) rounds, we have that with probability at least \(1-\delta \) agent i has met all her neighbors at least once. Since we want this to be true for all agents, we shall roughly take \(B = 1/\min _{i\in V,j\in N_i}{p_{ij}}\). These calculations are made precise in the following lemma.
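Since the conclusion of Lemma 8 is only implicit above, the sketch below checks the standard form of the bound (which follows from a union bound, since each coupon is missed with probability at most \((1-w)^{(\ln n + c)/w} \le e^{-\ln n - c}\)): after \((\ln n + c)/w\) draws, the probability that some coupon is still missing is at most \(e^{-c}\). The parameters n = 5, c = 2 and the uniform probabilities are illustrative:

```python
import math, random

# Monte Carlo check of the coupon-collector bound: with n coupons, minimum
# probability w, and ceil((ln n + c)/w) draws, P[some coupon missing] <= e^{-c}.
# n, c and the uniform probabilities are illustrative choices.
random.seed(1)
n, c = 5, 2.0
probs = [0.2, 0.2, 0.2, 0.2, 0.2]             # uniform, so w = 0.2
w = min(probs)
draws = math.ceil((math.log(n) + c) / w)

trials, misses = 5000, 0
for _ in range(trials):
    seen = set(random.choices(range(n), probs, k=draws))
    misses += len(seen) < n

assert misses / trials <= math.exp(-c)        # empirical failure rate within e^{-c}
```

Here 19 draws suffice, and the empirical failure rate stays well below \(e^{-2} \approx 0.135\).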
Lemma 9
Let \(\pi _{ij}(t)\) be the most recent round before round t at which agent i met agent j, and let \(B=2\ln (\frac{nt}{\delta })/w\) where \(w=\min _{i \in V}\min _{j\in N_i}p_{ij}\). Then with probability at least \(1-\delta \), for all \(\tau \ge B\), for all \(i \in V\) and all \(j \in N_i\),
Proof
Consider an agent i at round \(\tau \ge B\) where \(B=2\ln (\frac{nt}{\delta })/w\), and assume that there exists an agent \(j\in N_i\) such that \(\pi _{ij}(\tau )< \tau -B\). Agent i can be viewed as a coupon collector that has bought B coupons but has not found the coupon corresponding to agent j. Since \(|N_i|<n\) and \(\min _{j \in N_i}p_{ij}\ge w\), by Lemma 8 we have that
The proof follows by a union bound for all agents i and all rounds \(B\le \tau \le t\).
\(\square \)
Our goal is to prove Theorem 4, showing that the convergence rate of update rule (6) is exponentially fast in expectation (although not as fast as the original FJ dynamics).
Theorem 4
Let \(I = (P,s, \alpha )\) be an instance of the opinion formation game of Definition 1 with equilibrium \(x^* \in [0,1]^n\). Then for all rounds \(t \ge 6\ln n/w+36/w^2+9\rho ^2/\ln ^2 n\),
where \(x(t)\in [0,1]^n\) is the opinion vector produced by update rule (6), \(\rho = \min _{i \in V} a_i\), \(w =\min _{i \in V}\min _{j\in N_i}p_{ij}\).
By direct application of Lemma 7 and Lemma 9, we obtain the following corollary that will be useful in proving Theorem 4.
Corollary 1
Let x(t) be the opinion vector produced by update rule (6) for the instance \(I=(P,s,\alpha )\). Then with probability at least \(1-\delta \), for all \(t \ge 2\ln (\frac{nt}{\delta })/w\),
where \(\rho = \min _{i\in V}\alpha _i\) and \(w = \min _{i\in V, j\in N_i}p_{ij}\).
Proof
Let \(B=2\ln (\frac{nt}{\delta })/w\). By Lemma 9 we have that with probability at least \(1-\delta \), for all \(i\in V,j\in N_i\) and for all \(\tau \ge B\),
As a result, with probability at least \(1-\delta \) the requirements of Lemma 7 are satisfied. Thus for all \(t \ge B\),
\(\square \)
Corollary 1 states that convergence happens with high probability. We want to translate this result into one involving the expected error after t iterations of the dynamics. The standard way of doing so is via conditional expectations. The proof of Theorem 4 then reduces to choosing a suitable value for the probability \(\delta \) of the protocol failing. We would like \(\delta \) to be as small as possible without blowing up the upper bound on \(\Vert x(t)-x^* \Vert _{\infty }\) of Corollary 1.
6.1 The Proof of Theorem 4
Proof
Let \(u(t) = \Vert x(t)-x^* \Vert _{\infty }\). From Corollary 1 we obtain that for any \(\delta \in (0,1)\), for all rounds \(t \ge 2\ln (\frac{nt}{\delta })/w\),
Since all the parameters of the problem lie in [0, 1], we have \( {\varvec{\textrm{E}}} \left[ u(t) \mid u(t) > r \right] \le 1\). Now, by conditioning on the event that \(u(t) >r\), we get:
where \(r = \frac{1}{1-\rho }\exp \left(-\frac{\rho wt}{2\ln ( \frac{nt}{\delta })}\right)\). If we set \(\delta = \exp \left(-\frac{\rho w\sqrt{t}}{2\ln (nt)}\right)\), then:
We now evaluate r for our choice of probability \(\delta \):
Using the previous calculation, we obtain:
At this point we have established that for \(t \ge 2 \ln (\frac{nt}{\delta })/w\) with \(\delta = \exp \left(-\frac{\rho w\sqrt{t}}{2\ln (nt)}\right)\),
The inequality \(t \ge 2 \ln (\frac{nt}{\delta })/w\) with \(\delta = \exp \left(-\frac{\rho w\sqrt{t}}{2\ln (nt)}\right)\) can be rewritten as
Notice that

\(\frac{t}{3} \ge 2\frac{\ln n}{w}\) in case \(t \ge 6\frac{\ln n}{w}\)

\(\frac{t}{3 \ln t} \ge \frac{\sqrt{t}}{3} \ge \frac{2}{w} \) in case \(t \ge \frac{36}{w^2}\)

\(\frac{t}{3} \ge \frac{\rho \sqrt{t}}{\ln n} \ge \frac{\rho \sqrt{t}}{\ln (nt)}\) in case \(t \ge \frac{9\rho ^2}{\ln ^2 n}\)
As a result, for all \( t \ge 6\frac{\ln n}{w}+\frac{36}{w^2}+\frac{9\rho ^2}{\ln ^2 n}\) we get that \(t \ge 2 \ln (\frac{nt}{\delta })/w\) with \(\delta = \exp \left(-\frac{\rho w\sqrt{t}}{2\ln (nt)}\right)\), and thus
\(\square \)
Notes
These \(s,\alpha \) are scalars in [0, 1] and should not be confused with the internal opinion vector s and the self confidence vector \(\alpha \) of an instance \(I=(P,s,\alpha )\).
Although the minimax risk is defined for any estimation problem and loss function, for simplicity, we write the minimax risk for estimating the mean of a Bernoulli random variable.
References
Alford, J.R., Funk, C.L., Hibbing, J.R.: Are political orientations genetically transmitted? Am. Polit. Sci. Rev. 99(2), 153–167 (2005)
Abebe, R., Kleinberg, J.M., Parkes, D.C., Tsourakakis, C.E.: Opinion dynamics with varying susceptibility to persuasion. CoRR arXiv:1801.07863 (2018)
Bilò, V., Fanelli, A., Moscardelli, L.: Opinion formation games with dynamic social influences. In Cai, Y., Vetta, A. (eds.) Web and Internet Economics, pp. 444–458. Springer, Berlin (2016)
Bhawalkar, K., Gollapudi, S., Munagala, K.: Coevolutionary opinion formation games. In: Symposium on Theory of Computing Conference, STOC’13, Palo Alto, CA, USA, June 1–4, 2013, pp. 41–50 (2013)
Bindel, D., Kleinberg, J.M., Oren, S.: How bad is forming your own opinion? In: IEEE 52nd Annual Symposium on Foundations of Computer Science, FOCS 2011, Palm Springs, CA, USA, October 22–25, 2011, pp. 57–66 (2011)
Bravo, M., Mertikopoulos, P.: On the robustness of learning in games with stochastically perturbed payoff observations. Games Econ. Behav. 103, 41–66 (2017)
Bertsekas, D.P., Tsitsiklis, J.N.: Parallel and Distributed Computation: Numerical Methods. Athena Scientific (1997)
Cesa-Bianchi, N., Lugosi, G.: Potential-based algorithms in online prediction and game theory. Mach. Learn. 51(3), 239–261 (2003)
Cheung, Y.K., Cole, R.: A unified approach to analyzing asynchronous coordinate descent and tatonnement. CoRR, arXiv:1612.09171 (2016)
Chen, P.A., Chen, Y.L., Lu, C.J.: Bounds on the price of anarchy for a more general class of directed graphs in opinion formation games. Oper. Res. Lett. 44(6), 808–811 (2016)
Cohen, J., Héliou, A., Mertikopoulos, P.: Hedging under uncertainty: regret minimization meets exponentially fast convergence. In: Proceedings of Algorithmic Game Theory—10th International Symposium, SAGT 2017, L’Aquila, Italy, September 12–14, 2017, pp. 252–263 (2017)
Chierichetti, F., Kleinberg, J.M., Oren, S.: On discrete preferences and coordination. In: ACM Conference on Electronic Commerce, EC ’13, Philadelphia, PA, USA, June 16–20, 2013, pp. 233–250 (2013)
DeGroot, M.H.: Reaching a consensus. J. Am. Stat. Assoc. 69, 118–121 (1974)
Deffuant, G., Neau, D., Amblard, F., Weisbuch, G.: Mixing beliefs among interacting agents. Adv. Complex Syst. 3(1–4), 87–98 (2000)
Epitropou, M., Fotakis, D., Hoefer, M., Skoulakis, S.: Opinion formation games with aggregation and negative influence. In: Algorithmic Game Theory—10th International Symposium, SAGT 2017, L’Aquila, Italy, September 12–14, 2017, Proceedings, pp. 173–185 (2017)
Even-Dar, E., Mansour, Y., Nadav, U.: On the convergence of regret minimization dynamics in concave games. In: Proceedings of the 41st Annual ACM Symposium on Theory of Computing, STOC 2009, Bethesda, MD, USA, May 31–June 2, 2009, pp. 523–532 (2009)
Ferraioli, D., Goldberg, P.W., Ventre, C.: Decentralized dynamics for finite opinion games. Theor. Comput. Sci. 648, 96–115 (2016)
Friedkin, N.E., Johnsen, E.C.: Social influence and opinions. J. Math. Sociol. 15(3–4), 193–206 (1990)
Fotakis, D., Palyvos-Giannas, D., Skoulakis, S.: Opinion dynamics with local interactions. In: Proceedings of the 25th International Joint Conference on Artificial Intelligence, IJCAI 2016, New York, NY, USA, 9–15 July 2016, pp. 279–285 (2016)
Freund, Y., Schapire, R.E.: Adaptive game playing using multiplicative weights. Games Econ. Behav. 29(1), 79–103 (1999)
Foster, D.P., Vohra, R.V.: Calibrated learning and correlated equilibrium. Games Econ. Behav. 21(1), 40–55 (1997)
Gourdon, X., Sebah, P.: The Euler constant: \(\gamma \). Young 1, 2n (2004)
Ghaderi, J., Srikant, R.: Opinion dynamics in social networks with stubborn agents: equilibrium and convergence rate. Automatica 50(12), 3209–3215 (2014)
Gionis, A., Terzi, E., Tsaparas, P.: Opinion maximization in social networks. In: Proceedings of the 13th SIAM International Conference on Data Mining, May 2–4, 2013. Austin, Texas, USA, pp. 387–395 (2013)
Hazan, E., Agarwal, A., Kale, S.: Logarithmic regret algorithms for online convex optimization. Mach. Learn. 69(2), 169–192 (2007)
Hazan, E.: Introduction to online convex optimization. Found. Trends Optim. 2(3–4), 157–325 (2016)
Héliou, A., Cohen, J., Mertikopoulos, P.: Learning with bandit feedback in potential games. In: Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4–9 December 2017, Long Beach, CA, USA, pp. 6372–6381 (2017)
Hegselmann, R., Krause, U.: Opinion dynamics and bounded confidence models, analysis, and simulation. J. Artif. Soc. Soc. Simul. 5, 1 (2002)
Jackson, M.O.: Social and Economic Networks. Princeton University Press, Princeton (2008)
Kleinberg, R., Piliouras, G., Tardos, É.: Multiplicative updates outperform generic no-regret learning in congestion games: Extended abstract. In: Proceedings of the Forty-first Annual ACM Symposium on Theory of Computing, STOC ’09, pp. 533–542. ACM, New York (2009)
Kleinberg, R., Piliouras, G., Tardos, É.: Load balancing without regret in the bulletin board model. Distrib. Comput. 24(1), 21–29 (2011)
Krackhardt, D.: A plunge into networks. Science 326(5949), 47–48 (2009)
Musco, C., Musco, C., Tsourakakis, C.E.: Minimizing polarization and disagreement in social networks. CoRR, arXiv:1712.09948 (2017)
Motwani, R., Raghavan, P.: Randomized Algorithms. Cambridge University Press, Cambridge (1995)
Mertikopoulos, P., Staudigl, M.: Convergence to Nash equilibrium in continuous games with noisy first-order feedback. In: 56th IEEE Annual Conference on Decision and Control, CDC 2017, Melbourne, Australia, December 12–15, 2017, pp. 5609–5614 (2017)
Hart, S., Mas-Colell, A.: A simple adaptive procedure leading to correlated equilibrium. Econometrica 68(5), 1127–1150 (2000)
Syrgkanis, V., Agarwal, A., Luo, H., Schapire, R.E.: Fast convergence of regularized learning in games. In: NIPS, pp. 2989–2997 (2015)
Tsybakov, A.B.: Introduction to Nonparametric Estimation, 1st ed. Springer, New York (2008)
Wald, A.: Contributions to the theory of statistical estimation and testing hypotheses. Ann. Math. Stat. 10(4), 299–326 (1939)
Yildiz, M.E., Ozdaglar, A.E., Acemoglu, D., Saberi, A., Scaglione, A.: Binary opinion dynamics with stubborn agents. ACM Trans. Econ. Comput. 1(4), 19:1–19:30 (2013)
Yu, B.: Assouad, Fano, and Le Cam. In: Festschrift for Lucien Le Cam, pp. 423–435. Springer, New York (1997)
Funding
Open Access funding provided by the MIT Libraries.
Ethics declarations
Competing Interests
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Fotakis, D., Kandiros, V., Kontonis, V. et al. Opinion Dynamics with Limited Information. Algorithmica 85, 3855–3888 (2023). https://doi.org/10.1007/s00453-023-01157-5