An upper bound for the \(\ell _1\)-variation along the road to agreement

Original Paper · International Journal of Game Theory

Abstract

Two agents with a common prior on the possible states of the world participate in a process of information transmission, consisting of sharing posterior probabilities of an event of interest. Aumann’s Agreement Theorem implies that such a process must end with both agents having the same posterior probability. We show that the \(\ell _1\)-variation of the sequence of posteriors of each agent, obtained along this process, must be finite, and provide an upper bound for its value.


Notes

  1. The state of the world is not distributed according to \(\mathbb {P}\); rather, it is one of the elements of \(\Omega \). The probability measure \(\mathbb {P}\) reflects the uncertainty of the agents regarding the state of the world.

  2. We assume that the public announcements of current posteriors are conducted simultaneously at each stage of the information transmission.

  3. We denote by |B| the cardinality of a set B.

  4. Such a setup does not guarantee agreement, as each agent i can announce \(B^i_n(\omega ) = [0,1]\) for all \(n \in \mathbb {N}\).

  5. A positive probability event A is an atom of a probability space \((\Omega ,{\mathcal {G}},\mathbb {P})\) if for every event \(B \subseteq A\), either \(\mathbb {P}(B)=\mathbb {P}(A)\) or \(\mathbb {P}(B)=0\).

References

  • Aumann RJ (1976) Agreeing to disagree. Ann Stat 4(6):1236–1239

  • Burkholder DL (1966) Martingale transforms. Ann Math Stat 37(6):1494–1504

  • Di Tillio A, Lehrer E, Samet D (2021) Monologues, dialogues and common priors. Theoretical Economics (to appear)

  • Doob JL (1953) Stochastic processes, vol 101. Wiley, New York

  • Geanakoplos JD, Polemarchakis HM (1982) We can’t disagree forever. J Econ Theory 28(1):192–200

  • Maschler M, Solan E, Zamir S (2013) Game theory. Cambridge University Press, Cambridge

  • Williams D (1991) Probability with martingales. Cambridge University Press, Cambridge


Acknowledgements

The author would like to thank his Ph.D. advisor Prof. Ehud Lehrer for introducing him to the topic. The author is also grateful to Prof. Eilon Solan, Prof. David Gilat, and Andrei Iacob for their suggestions, remarks and overall contribution to this work. The author would also like to thank two anonymous reviewers for their thoughtful comments which led to the improvement of the paper.

Author information

Corresponding author

Correspondence to Dimitry Shaiderman.


Appendices

Appendix A: \(\ell _1\)-Variation of the conditional means of a martingale

Let \((\Omega , {\mathcal {B}}, \mathbb {P})\) be a probability space. A filtration \(\mathcal {F}= (\mathcal {F}_n)_{n=1}^{\infty }\) is an increasing sequence of sub-\(\sigma \)-fields of \({\mathcal {B}}\). A sequence \((X_n)_{n=1}^{\infty }\) of random variables is said to be a discrete-time martingale with respect to the filtration \((\mathcal {F}_n)_{n=1}^{\infty }\) if

  (i) \(\mathbb {E}|X_n| < \infty \) for all \(n \in \mathbb {N}\),

  (ii) \((X_n)_n\) is adapted to \(\mathcal {F}\), i.e., \(X_n\) is measurable with respect to \(\mathcal {F}_n\) for all \(n \in \mathbb {N}\),

  (iii) \(\mathbb {E}(X_{n+1}\,|\,\mathcal {F}_n)=X_n\) for all \(n \in \mathbb {N}\).

An important example of a discrete-time martingale that we will consider in the paper is a Doob martingale. Such a martingale arises when we consider the sequence of conditional expectations \(X_n = {\mathbb {E}}(Y\,|\,\mathcal {F}_n)\) of a random variable Y satisfying \({\mathbb {E}}|Y|<\infty \), with respect to a filtration \(\mathcal {F}= (\mathcal {F}_n)_{n=1}^{\infty }\). In case \(Y = \mathbb {1}_C\) for some event C (i.e., \(C \in {\mathcal {B}}\)), we will use the standard notation and write \(X_n = \mathbb {P}(C\,|\,\mathcal {F}_n)\).

We say that the discrete-time martingale sequence \(X = (X_n)_{n=1}^{\infty }\) is an \({\mathcal {L}}_2\) (\({\mathcal {L}}_1\)) bounded martingale if \(\sup _n {\mathbb {E}}(|X_n|^2)<\infty \) (\(\sup _n {\mathbb {E}}|X_n|<\infty \)). Let us now introduce two distinct types of variation for discrete-time martingales. The \(\ell _1\)-variation of the discrete-time martingale sequence \(X = (X_n)_{n=1}^{\infty }\) (with respect to \(\mathcal {F}\)) is the random variable

$$\begin{aligned} V(X) = \sum \limits _{n=1}^{\infty } |X_{n+1}-X_{n}|. \end{aligned}$$
(13)

Next, for every event A with \(\mathbb {P}(A)>0\), we define the \(\ell _1\)-variation of the conditional means of X on A by

$$\begin{aligned} V(X,A) = \sum \limits _{n=1}^{\infty } \big |{\mathbb {E}}(X_{n+1}\,|\,A) -{\mathbb {E}}(X_{n}\,|\,A)\big |, \end{aligned}$$
(14)

where

$$\begin{aligned} {\mathbb {E}}(X_{n}\,|\,A) = \frac{{\mathbb {E}} (X_n\mathbb {1}_A)}{\mathbb {P}(A)}, \ \ \forall n \in {\mathbb {N}}. \end{aligned}$$
(15)

We say that X has finite \(\ell _1\)-variation of the conditional means if \(V(X,A)<\infty \) for all events A with \(\mathbb {P}(A)>0\).
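
To make Eqs. (13)–(15) concrete, here is a minimal numerical sketch (ours, not from the paper): a Doob martingale \(X_n = \mathbb {P}(C\,|\,\mathcal {F}_n)\), where \(\mathcal {F}_n\) is generated by the first n of N fair coin tosses and C is the event of seeing at least k heads among all N tosses; the event A is taken to be "the first toss is heads". The parameters N, k, NUM_PATHS and all helper names are illustrative choices, and both V(X) and V(X, A) are estimated by Monte Carlo rather than computed exactly.

```python
# Illustrative sketch: pathwise l1-variation V(X) (Eq. 13) and the l1-variation
# of the conditional means V(X, A) (Eqs. 14-15) for a simple Doob martingale.
import math
import random

N, k = 20, 12          # horizon and threshold defining C = {at least k heads in N tosses}
NUM_PATHS = 20000      # Monte Carlo sample size (assumption, not from the paper)


def tail_prob(m, j):
    """P(Binomial(m, 1/2) >= j): chance of at least j heads in m remaining fair tosses."""
    if j <= 0:
        return 1.0
    if j > m:
        return 0.0
    return sum(math.comb(m, i) for i in range(j, m + 1)) / 2 ** m


def doob_path():
    """One sample path (X_1, ..., X_N) of X_n = P(C | F_n); also report the first toss."""
    tosses = [random.random() < 0.5 for _ in range(N)]
    heads, path = 0, []
    for n in range(1, N + 1):
        heads += tosses[n - 1]
        path.append(tail_prob(N - n, k - heads))
    return tosses[0], path


samples = [doob_path() for _ in range(NUM_PATHS)]

# Average pathwise l1-variation V(X) as in Eq. (13).
v_x = sum(
    sum(abs(path[n + 1] - path[n]) for n in range(N - 1))
    for _, path in samples
) / NUM_PATHS

# V(X, A) as in Eqs. (14)-(15), with A = {first toss is heads}:
# E(X_n | A) is estimated by averaging X_n over the paths lying in A.
on_a = [path for first_heads, path in samples if first_heads]
cond_means = [sum(path[n] for path in on_a) / len(on_a) for n in range(N)]
v_xa = sum(abs(cond_means[n + 1] - cond_means[n]) for n in range(N - 1))

print(f"average pathwise V(X) over {NUM_PATHS} paths: {v_x:.3f}")
print(f"estimated V(X, A) for A = {{first toss heads}}: {v_xa:.3f}")
```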

Our main technical result regarding martingales bounds the \(\ell _1\)-variation of the conditional means.

Theorem A1

If X is an \({\mathcal {L}}_2\)-bounded martingale, then

$$\begin{aligned} V(X,A) \le \frac{{\mathbb {E}}(X_{\infty }^2)-{\mathbb {E}}(X_{1}^2)}{2\mathbb {P}(A)} + \frac{{\mathbb {E}}(Y_{\infty }^2)-{\mathbb {E}}(Y_{1}^2)}{2\mathbb {P}(A)} \end{aligned}$$
(16)

for all events A with \(\mathbb {P}(A)>0\), where \(X_{\infty }\) is the \({{\mathcal {L}}}_2\)-limit of X, and \(Y_{\infty }\) is the \({{\mathcal {L}}}_2\)-limit of the \({\mathcal {L}}_2\)-bounded martingale Y defined by \(Y_n = \mathbb {P}(A \,|\,\mathcal {F}_n)\) for all \(n \in \mathbb {N}\). In particular, X has finite \(\ell _1\)-variation of conditional means.

Remark A1

The conditional mean of the \(\ell _1\)-variation of an \({\mathcal {L}}_2\)-bounded martingale can be infinite on any event of positive probability. For instance, let \((d_n)_{n=1}^{\infty }\) be a sequence of independent random variables distributed according to the law \(\mathbb {P}\left( d_n = \frac{1}{n}\right) = \mathbb {P}\left( d_n = -\frac{1}{n}\right) = \frac{1}{2}\). Define the martingale \(M = (M_n)_{n=1}^{\infty }\) by \(M_n = d_1 + \cdots + d_n\) for all \(n \in {\mathbb {N}}\). Then M is an \({\mathcal {L}}_2\)-bounded martingale satisfying

$$\begin{aligned} {\mathbb {E}}\left( \sum \limits _{n=1}^{\infty } |M_{n+1}-M_n|\,\big |\,A\right) = \sum \limits _{n=1}^{\infty } \frac{1}{n} = \infty , \end{aligned}$$

for every event A with \(\mathbb {P}(A)>0\).
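
A quick numerical sanity check of this remark (a sketch of ours, not from the paper): since \(|M_{n+1}-M_n| = 1/(n+1)\) deterministically, the partial \(\ell _1\)-variation grows like the harmonic series on every path, while \({\mathbb {E}}(M_n^2) = \sum _{k\le n} 1/k^2\) stays below \(\pi ^2/6\), confirming that M is \({\mathcal {L}}_2\)-bounded.

```python
# Illustrative sketch for Remark A1: M_n = d_1 + ... + d_n with d_n = +/- 1/n.
# E(M_n^2) = sum_{k<=n} 1/k^2 stays bounded (L2-boundedness), while the pathwise
# l1-variation sum |M_{k+1}-M_k| = sum 1/(k+1) grows like log n on every path.
import math

for n in (10, 100, 1000, 10_000, 100_000):
    second_moment = sum(1.0 / k ** 2 for k in range(1, n + 1))    # E(M_n^2)
    l1_variation = sum(1.0 / (k + 1) for k in range(1, n))        # sum_{k=1}^{n-1} |M_{k+1}-M_k|
    print(f"n={n:>7}: E(M_n^2)={second_moment:.4f} (< pi^2/6={math.pi**2/6:.4f}), "
          f"partial l1-variation={l1_variation:.2f}")
```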

As a corollary of Theorem A1 we obtain the following result.

Corollary A1

Suppose that X is a discrete-time \({\mathcal {L}}_2\)-bounded martingale and that the probability space \((\Omega , {\mathcal {B}}, \mathbb {P})\) contains an atom A (see footnote 5). Let \((a_n)_{n=1}^{\infty }\) be the (a.s. fixed) values of \((X_n)_{n=1}^{\infty }\) on A. Note that \(a_n = {\mathbb {E}}(X_n\,|\,A)\) for all \(n \in {\mathbb {N}}\), thus implying that \(V(X) = V(X,A)\) a.s. on A. With the help of Theorem A1 we may deduce that

$$\begin{aligned} V(X) \le \frac{{\mathbb {E}}(X_{\infty }^2)-{\mathbb {E}}(X_{1}^2)}{2\mathbb {P}(A)} + \frac{{\mathbb {E}}(Y_{\infty }^2)-{\mathbb {E}}(Y_{1}^2)}{2\mathbb {P}(A)}, \end{aligned}$$
(17)

a.s. on A.

This result is strongly related to the following theorem of Burkholder (1966), which holds for a wider class of martingales.

Theorem A2

(Burkholder, 1966) Suppose that X is an \({\mathcal {L}}_1\)-bounded martingale. If A is an atom of the probability space, then

$$\begin{aligned} V(X)<\infty , \end{aligned}$$

almost everywhere on A.

Let us now proceed to the proof of Theorem A1.

Proof of Theorem A1

By conditioning on \({\mathcal {F}}_{n+1}\), one has

$$\begin{aligned} {\mathbb {E}}\left( X_{n+1}\mathbb {1}_{A}\right) = {\mathbb {E}}\left( X_{n+1}Y_{n+1}\right) ,\quad \forall n \ge 0. \end{aligned}$$
(18)

Similarly, by conditioning on \({\mathcal {F}}_{n}\) we have \({\mathbb {E}}\left( X_{n+1}Y_{n}\right) = {\mathbb {E}}\left( X_{n}Y_{n}\right) \) for all \(n \in {\mathbb {N}}\). Therefore, with the use of Eq. (18) we obtain

$$\begin{aligned} {\mathbb {E}}\left( X_{n}\mathbb {1}_{A}\right) = {\mathbb {E}}\left( X_{n}Y_{n}\right) = {\mathbb {E}}\left( X_{n+1}Y_{n}\right) ,\, \ \ \forall n \in {\mathbb {N}}. \end{aligned}$$
(19)

Hence by combining Eqs. (18) and (19) we have

$$\begin{aligned} \sum \limits _{n=1}^{\infty } \big |{\mathbb {E}}(X_{n+1}\mathbb {1}_A)-{\mathbb {E}}(X_n\mathbb {1}_A)\big |&= \sum \limits _{n=1}^{\infty }\big |{\mathbb {E}}X_{n+1}(Y_{n+1}-Y_n)\big | \\&= \sum \limits _{n=1}^{\infty }\big |{\mathbb {E}}(X_{n+1}-X_n)(Y_{n+1}-Y_n)\big | \\&\le \sum \limits _{n=1}^{\infty }{\mathbb {E}}\big |(X_{n+1}-X_n)(Y_{n+1}-Y_n)\big | \\&\le \sum \limits _{n=1}^{\infty }{\mathbb {E}}\left( \frac{(X_{n+1}-X_n)^2+(Y_{n+1}-Y_n)^2}{2}\right) \\&= \frac{1}{2} \sum \limits _{n=1}^{\infty } {\mathbb {E}}(X_{n+1}-X_n)^2 + \frac{1}{2}\sum \limits _{n=1}^{\infty }{\mathbb {E}}(Y_{n+1}-Y_n)^2 \\&= \frac{1}{2}\left( {\mathbb {E}}(X_{\infty }^2)-{\mathbb {E}}(X_{1}^2)\right) + \frac{1}{2}\left( {\mathbb {E}}(Y_{\infty }^2)-{\mathbb {E}}(Y_{1}^2)\right) , \end{aligned}$$
(20)

where the second inequality holds since \(|ab| \le \frac{a^2+b^2}{2}\) for every \(a,b \in \mathbb {R}\), and the last equality follows from Theorem 12.1 in Williams (1991). Thus, combining Eq. (20) together with Eqs. (14) and (15) we deduce that

$$\begin{aligned} V(X,A) \le \frac{{\mathbb {E}}(X_{\infty }^2)-{\mathbb {E}}(X_{1}^2)}{2\mathbb {P}(A)} + \frac{{\mathbb {E}}(Y_{\infty }^2)-{\mathbb {E}}(Y_{1}^2)}{2\mathbb {P}(A)} \end{aligned}$$

as desired. \(\square \)
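
The inequality (16) can also be checked exactly on a toy example. The sketch below (ours, not from the paper) enumerates the eight outcomes of three fair coin tosses, takes \(X_n = \mathbb {P}(C\,|\,\mathcal {F}_n)\) for C = "at least two heads" and A = "the second toss is heads", and compares V(X, A) with the right-hand side of Eq. (16); all identifiers are illustrative.

```python
# Exact toy check of Theorem A1 on the space of three fair coin tosses.
from itertools import product

outcomes = list(product((0, 1), repeat=3))     # 1 = heads; each outcome has probability 1/8
prob = 1.0 / len(outcomes)

def in_C(w):
    return sum(w) >= 2                         # at least two heads

def in_A(w):
    return w[1] == 1                           # second toss is heads

def cond_prob(event, w, n):
    """P(event | F_n) at outcome w: average over outcomes agreeing with w on the first n tosses."""
    block = [v for v in outcomes if v[:n] == w[:n]]
    return sum(event(v) for v in block) / len(block)

# Martingales X_n = P(C | F_n) and Y_n = P(A | F_n) for n = 1, 2, 3 (X_3, Y_3 are the limits).
X = {w: [cond_prob(in_C, w, n) for n in (1, 2, 3)] for w in outcomes}
Y = {w: [cond_prob(in_A, w, n) for n in (1, 2, 3)] for w in outcomes}

p_A = sum(prob for w in outcomes if in_A(w))

def cond_mean_X(n):
    """E(X_{n+1} | A) with zero-based index n, as in Eq. (15)."""
    return sum(prob * X[w][n] for w in outcomes if in_A(w)) / p_A

def second_moment(Z, n):
    return sum(prob * Z[w][n] ** 2 for w in outcomes)

lhs = sum(abs(cond_mean_X(n + 1) - cond_mean_X(n)) for n in range(2))       # V(X, A)
rhs = (second_moment(X, 2) - second_moment(X, 0)) / (2 * p_A) \
    + (second_moment(Y, 2) - second_moment(Y, 0)) / (2 * p_A)              # bound in Eq. (16)

print(f"V(X, A) = {lhs:.4f}  <=  Theorem A1 bound = {rhs:.4f}")
```

In this sketch A is deliberately chosen not to be \(\mathcal {F}_1\)-measurable; otherwise \(Y_n\) would be constant in n and the second term of the bound would vanish.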

The following example shows that Theorem A1 cannot be extended to \({{\mathcal {L}}}_1\)-bounded martingales, by providing an \({{\mathcal {L}}}_1\)-bounded martingale M and an event A for which \(V(M,A) = \infty \).

Example A1

Consider the i.i.d. random variables \((d_n)_{n=1}^{\infty }\) distributed according to the law

$$\begin{aligned} \mathbb {P}\left( d_n = 0\right) =\mathbb {P}\left( d_n = 2\right) = \frac{1}{2}. \end{aligned}$$

Let \({\mathcal {D}} \subseteq {\mathcal {G}}\) be the smallest \(\sigma \)-field on which the random variables \((d_n)_{n=1}^{\infty }\) are measurable. Define the martingale \(M = (M_n)_{n=1}^{\infty }\) on \((\Omega ,{\mathcal {D}},\mathbb {P})\) with respect to the natural filtration induced by \((d_n)_{n=1}^{\infty }\) by \( M_n = \prod _{k=1}^{n} d_k,\ \ \forall n \in \mathbb {N}. \) Since \(M_n\ge 0\) for every \(n \in \mathbb {N}\), we have \(\mathbb {E}|M_n| = \mathbb {E}M_n = 1\), implying that M is bounded in \({{\mathcal {L}}}_1\). For each \(n \in \mathbb {N}\) define the event \( A_n = \big \lbrace M_1 =2 , M_2 =4,\ldots ,M_n=2^n,M_{n+1}=0\big \rbrace , \) and let \(A = \bigcup _{n=1}^{\infty }A_n\). We have

$$\begin{aligned} \mathbb {P}(A)V(M,A) = \sum \limits _{n=1}^{\infty } \mathbb {P}(A_n)V(M,A_n)= \sum \limits _{n=1}^{\infty } 2^{-(n+1)}(2+4+\cdots +2^n)=\infty , \end{aligned}$$

and so \(V(M,A) = \infty \).
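
The divergence above can also be seen numerically. The short sketch below (ours, not from the paper) uses the fact that, conditionally on \(A_n\), the path of M is the deterministic sequence \(2, 4, \ldots , 2^n, 0, 0, \ldots \), so each term \(\mathbb {P}(A_n)V(M,A_n) = 2^{-(n+1)}(2+4+\cdots +2^n)\) is close to 1 and the partial sums grow without bound.

```python
# Illustrative sketch for Example A1: partial sums of P(A_n) * V(M, A_n).
def contribution(n):
    """P(A_n) * V(M, A_n) = 2^{-(n+1)} * (2 + 4 + ... + 2^n)."""
    p_An = 2.0 ** -(n + 1)
    variation_on_An = sum(2 ** k for k in range(1, n + 1))   # |M_2-M_1| + ... + |M_{n+1}-M_n|
    return p_An * variation_on_An

partial = 0.0
for n in range(1, 31):
    partial += contribution(n)
    if n in (1, 5, 10, 20, 30):
        # Each term is 1 - 2^{-n}, so the sum grows roughly like n: P(A)V(M, A) = infinity.
        print(f"sum over A_1..A_{n:<2}: {partial:.3f}")
```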

Remark A2

The events \((A_n)_{n=1}^{\infty }\) in Example A1 are disjoint atoms of the probability space \((\Omega ,{\mathcal {D}},\mathbb {P})\); thus the result of Burkholder (Theorem A2) for \({{\mathcal {L}}}_1\)-bounded martingales cannot be extended from a single atom to an infinite union of atoms.

Appendix B: Complements to Aumann’s Bayesian dialogue

Proof of Proposition 1

We begin by proving (i). It is easily verified that \(\mathbb {P}(A) = Q^1_1\). Since the partition element of agent i is determined by the pair \((n,x) \in \mathbb {N}\times \{0,\ldots ,n\}\), where n is the number of tosses he was allotted and x is the number of H outcomes he observed, following the notation of Theorem 2 we have

$$\begin{aligned} \sum \limits _{s \in {\hat{Y}}_i} \frac{\mathbb {P}(F_i(s)\cap A)^2}{\mathbb {P}(F_i(s))}&= \sum \limits _{n=1}^{\infty } \sum \limits _{x=0}^n \left[ \left( t^i_n \left( {\begin{array}{c}n\\ x\end{array}}\right) \int _0^1 \theta ^{x+1} (1-\theta )^{n+1-(x+1)} dQ(\theta )\right) ^2 \bigg /(t^i_n Q^n_x) \right] \\&= \sum \limits _{n=1}^{\infty } \sum \limits _{x=0}^n \left[ \left( t^i_n \frac{(x+1)Q^{n+1}_{x+1}}{n+1}\right) ^2 \bigg /(t^i_n Q^n_x)\right] \\&= \sum \limits _{n=1}^{\infty } t^i_n \sum \limits _{x=0}^n \frac{(Q^{n+1}_{x+1})^2}{Q^n_x}\left( \frac{x+1}{n+1}\right) ^2. \end{aligned}$$
(21)

Moreover,

$$\begin{aligned} \mathbb {P}(C(\omega ))&= t(n_1,n_2)\left( {\begin{array}{c}n_1\\ x_1\end{array}}\right) \left( {\begin{array}{c}n_2\\ x_2\end{array}}\right) \int _0^1 \theta ^{x_1+x_2} (1-\theta )^{n_1+n_2-(x_1+x_2)}dQ(\theta )\\&= t(n_1,n_2)\gamma (n_1,x_1,n_2,x_2) Q_{x_1+x_2}^{n_1+n_2}. \end{aligned}$$
(22)

Combining the latter with Eqs. (7) and (21) we obtain Eq. (11), thus showing (i). Let us now move on and assume that Q is the uniform distribution on [0, 1]. For each \(n \in \mathbb {N}\) and \(x \in \{0,\ldots ,n\}\) we have

$$\begin{aligned} Q_x^n = \left( {\begin{array}{c}n\\ x\end{array}}\right) \int _0^1 \theta ^x (1-\theta )^{n-x}d\theta&= \left( {\begin{array}{c}n\\ x\end{array}}\right) \mathbf{B }(x+1,n-x+1)\\&= \left( {\begin{array}{c}n\\ x\end{array}}\right) \frac{\Gamma (x+1)\Gamma (n-x+1)}{\Gamma (n+2)}\\&= \frac{1}{n+1}, \end{aligned}$$
(23)

where \(\mathbf{B }\) is the beta-function, for which we used the identity \(\mathbf{B }(\alpha ,\beta ) = \frac{\Gamma (\alpha )\Gamma (\beta )}{\Gamma (\alpha +\beta )}\). In turn, Eq. (23) implies that

$$\begin{aligned}&Q^1_1-\sum \limits _{n=1}^{\infty } t^i_n \sum \limits _{x=0}^n \frac{(Q^{n+1}_{x+1})^2}{Q^n_x}\left( \frac{x+1}{n+1}\right) ^2 \\&\quad = \frac{1}{2} - \sum \limits _{n=1}^{\infty } t^i_n \sum \limits _{x=0}^n \frac{n+1}{(n+2)^2}\left( \frac{x+1}{n+1}\right) ^2 \\&\quad = \frac{1}{2} - \sum \limits _{n=1}^{\infty } t^i_n \frac{1}{(n+1)(n+2)^2} \sum \limits _{x=0}^n (x+1)^2 \\&\quad = \frac{1}{2} - \sum \limits _{n=1}^{\infty } t^i_n \frac{(n+1)(n+2)(2n+3)}{6(n+1)(n+2)^2} \\&\quad = \frac{1}{2} - \sum \limits _{n=1}^{\infty } t^i_n \frac{2n+3}{6n+12} \le \frac{1}{2} - \frac{1}{4} \sum \limits _{n=1}^{\infty } t^i_n = \frac{1}{4}. \end{aligned}$$
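
Both ingredients of this computation, Eq. (23) and the closed form \(\sum _{x=0}^n (x+1)^2 = \frac{(n+1)(n+2)(2n+3)}{6}\), are easy to verify numerically; the following sketch (ours, not from the paper) does so for a few values of n, where the function name Q and the tolerance are illustrative choices.

```python
# Illustrative check of Eq. (23) for the uniform prior and of the sum of squares identity.
import math

def Q(n, x):
    """Q_x^n = C(n, x) * B(x+1, n-x+1), computed via the Gamma-function identity for B."""
    beta = math.gamma(x + 1) * math.gamma(n - x + 1) / math.gamma(n + 2)
    return math.comb(n, x) * beta

for n in (1, 5, 10, 20):
    # Eq. (23): Q_x^n = 1/(n+1) for every x in {0, ..., n}.
    assert all(abs(Q(n, x) - 1.0 / (n + 1)) < 1e-12 for x in range(n + 1))
    # Closed form used in the last chain of equalities above.
    lhs = sum((x + 1) ** 2 for x in range(n + 1))
    rhs = (n + 1) * (n + 2) * (2 * n + 3) // 6
    assert lhs == rhs
    print(f"n={n:>2}: Q_x^n = 1/{n+1} for all x, and sum (x+1)^2 = {lhs}")
```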

Item (i) in Proposition 1 yields the bound

$$\begin{aligned} \sum \limits _{n=1}^{\infty } |p_{n+1}^{i}(\omega )-p_{n}^{i}(\omega )| \le \frac{1}{2} + \frac{n_1+n_2+1}{8t(n_1,n_2)\gamma (n_1,x_1,n_2,x_2)}, \end{aligned}$$
(24)

proving the second item of Proposition 1. \(\square \)

Remark B1

In the proof of Proposition 1 we did not fully utilize the bound presented in Theorem 2. In fact, we did not subtract the quantity \(\mathbb {P}(C(\omega ))/2\mathbb {P}(F_i(\omega ))\). The reason for this is that \(\mathbb {P}(C(\omega ))/2\mathbb {P}(F_i(\omega )) \le \frac{1}{2}\) \(\forall \omega \in \Omega \), making it negligible compared to the right-hand side of Eq. (11), which need not be bounded across different values of \(\omega \in \Omega \).

About this article


Cite this article

Shaiderman, D. An upper bound for the \(\ell _1\)-variation along the road to agreement. Int J Game Theory 50, 1053–1067 (2021). https://doi.org/10.1007/s00182-021-00781-1
