The famous theorem of R. Aumann and M. Maschler states that the sequence of values of an \(N\)-stage zero-sum game \(\varGamma _N(\rho )\) with incomplete information on one side and prior distribution \(\rho \) converges as \(N\rightarrow \infty \), and that the error term \({\mathrm {err}}[\varGamma _N(\rho )]={\mathrm {val}}[\varGamma _N(\rho )]- \lim _{M\rightarrow \infty }{\mathrm {val}}[\varGamma _{M}(\rho )]\) is bounded by \(C N^{-\frac{1}{2}}\) if the set of states \(K\) is finite. This paper deals with the case of infinite \(K\). It turns out that, if the prior distribution \(\rho \) is countably supported and has heavy tails, then the error term can be of the order of \(N^{\alpha }\) with \(\alpha \in \left( -\frac{1}{2},0\right) \), i.e., the convergence can be anomalously slow. The maximal possible \(\alpha \) for a given \(\rho \) is determined in terms of a family of entropy-like functionals. Our approach is based on the well-known connection between the behavior of the maximal variation of measure-valued martingales and the asymptotic properties of repeated games with incomplete information.
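The two regimes described above can be restated in display form (this merely collects the abstract's own definitions; the constant \(C\) and the exponent \(\alpha\) are as in the text):

```latex
\[
  {\mathrm{err}}[\varGamma_N(\rho)]
    \;=\; {\mathrm{val}}[\varGamma_N(\rho)]
      \;-\; \lim_{M\rightarrow\infty}{\mathrm{val}}[\varGamma_M(\rho)],
\]
\[
  \bigl|{\mathrm{err}}[\varGamma_N(\rho)]\bigr| \;\le\; C\,N^{-\frac{1}{2}}
  \quad\text{if } K \text{ is finite},
  \qquad
  {\mathrm{err}}[\varGamma_N(\rho)] \;\sim\; N^{\alpha},
  \quad \alpha\in\left(-\tfrac{1}{2},0\right),
\]
where the second asymptotic can occur when \(K\) is infinite and \(\rho\) is countably supported with heavy tails.
```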

Keywords

Repeated games with incomplete information · Error term · Bayesian learning · Maximal variation of martingales · Entropy