# Diffusion and Localization of Relative Strategy Scores in The Minority Game


## Abstract

We study the equilibrium distribution of relative strategy scores of agents in the asymmetric phase (\(\alpha \equiv P/N\gtrsim 1\)) of the basic Minority Game using sign-payoff, with *N* agents holding two strategies over *P* histories. We formulate a statistical model that makes use of the gauge freedom with respect to the ordering of an agent’s strategies to quantify the correlation between the attendance and the distribution of strategies. The relative score \(x\in \mathbb {Z}\) of the two strategies of an agent is described in terms of a one dimensional random walk with asymmetric jump probabilities, leading either to a static and asymmetric exponential distribution centered at \(x=0\) for fickle agents or to diffusion with a positive or negative drift for frozen agents. In terms of scaled coordinates \(x/\sqrt{N}\) and *t* / *N* the distributions are uniquely given by \(\alpha \) and in quantitative agreement with direct simulations of the game. As the model avoids the reformulation in terms of a constrained minimization problem it can be used for arbitrary payoff functions with little calculational effort and provides a transparent and simple formulation of the dynamics of the basic Minority Game in the asymmetric phase.

## Keywords

Minority game, Market dynamics, Agent-based models

## 1 Introduction

A minority game can be exemplified by the following simple market analogy: an odd number *N* of traders (agents) must at each time step choose between two options, buying or selling a share, with the aim of picking the minority group. If sell is in the minority and buy in the majority, one may expect the price to go up to satisfy demand, and vice versa if buy is in the minority, thus motivating the minority character of the game. Clearly, there is no way to make everyone content: at least half of the agents will inevitably end up in the majority group each round. As the losing agents will try to improve their lot there is no static equilibrium. Instead, agents might be expected to adapt their buy or sell strategies based on perceived trends in the history of outcomes [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12].

The Minority Game proposed by Challet and Zhang [2, 3] formalizes this type of market dynamics, where agents of limited intellect compete for a scarce resource by adapting to the aggregate input of all others [1, 12]. Each agent has a set of strategies that, depending on the recent history of minority groups going *m* time steps back, predict whether the next minority will be buy or sell. At each time step the agent uses her highest-scoring strategy, the one that has most accurately predicted the minority group historically. The state space of the game is given by the strategy scores of each agent together with the recent history of minority groups, and the discrete time evolution in this space represents an intricate dynamical system.

What makes the game appealing from a physics perspective is that it can be described using methods for the statistical physics of disordered systems, with the set of randomly assigned strategies corresponding to quenched disorder [5, 8, 13, 14, 15, 16, 17]. In particular Challet, Marsili, and co-workers showed that the model can be formulated in terms of the gradient descent dynamics of an underlying Hamiltonian [13], plus noise. The asymptotic dynamics corresponds to minimizing the Hamiltonian with respect to the frequency at which agents use each strategy, a problem which in turn can be solved using the replica method [8, 17, 18]. In a complementary development Coolen solved the statistical dynamics of the problem in its full complexity using generating functionals [14, 15, 16].

The game is controlled by the parameter \(\alpha =P/N\), where \(P=2^m\) is the number of distinct histories that agents take into account, which tunes the system through a phase transition (for \(N\rightarrow \infty \)) at a critical value \(\alpha _c=0.3374\ldots \). In the symmetric (or crowded) phase, \(\alpha < \alpha _c\), the game is quasi-periodic with period 2*P*, in which a given history alternately gives one or the other outcome for the minority group [4, 19]. A somewhat oversimplified characterization of the dynamics is that the information about the last winning minority group for a given history gives a crowding effect [20]: many agents want to repeat the last winning outcome, which counterproductively puts them in the majority group instead. The crowding also gives large fluctuations of the size of the minority group.

In this paper we study the dynamics of the Minority Game in the asymmetric phase by formulating a simplified statistical model, focusing on finding probability distributions for the relative strategy scores. In particular, we study the original formulation of the game with sign-payoff for which quantitative results are challenging to derive. By sorting the strategies based on how strongly they are correlated with the average over all strategies in the game, we find that sufficient statistical information can be extracted to formulate a quantitatively accurate model for \(\alpha \gtrsim 1\).

We discuss how the relative score for each agent can be derived from the master equation of a random walk on a chain with asymmetric jump probabilities to nearest neighbor sites, and how these jump probabilities can be calculated from the basic dynamic update equation of the scores. The corresponding probability distributions of scores are either of the form of exponential localization or diffusion with a drift. In the appendices we show that the model is related to but independent from the Hamiltonian formulation and we show how it can also be readily applied to the game with linear payoff where the master equation has long-range hopping.

Although the Minority Game is well understood from the classic works discussed above, it is our hope that the simplified model of the steady state attendance and score distributions presented in this paper provides an alternative and readily accessible perspective on this fascinating model.

## 2 Definition of the Game and Outline

In order to give an overview of our results and for completeness we start by providing the formal definition of the Minority Game and some basic properties [2, 3, 10, 11].

At each time step, every agent places a *bid* \(a_i(t)=\pm 1\), all of which are collected into a total *attendance* \(A_t=\sum _{i=1}^{N}a_i(t)\). With *N* odd the attendance is nonzero, and the winning minority group is then identified through \(-\text {sign}(A_t)\). A binary string of the *m* past winning bids, called a history \(\mu \), is provided as global information to each agent upon which to base her decision for the following round. There are thus \(\mu =1,\ldots ,P\) with \(P=2^m\) different histories. At her disposal each agent has two randomly assigned strategies (a.k.a. strategy tables) that provide a unique bid for each history. The bid of strategy \(j=1,2\) of agent \(i=1,\ldots ,N\) in response to history \(\mu \) is given by \(a_{i,j}^\mu =\pm 1\), and the full strategy is the *P*-dimensional random binary vector \(\vec {a}_{i,j}\). There are thus a total of \(2^P\) distinct strategies available.

The agent uses at each time step the strategy that has historically made the best predictions of the minority group. This is decided by a score \(U_{i,j}(t)\) for each strategy, which is updated according to \(U_{i,j}(t+1)=U_{i,j}(t)-a_{i,j}^{\mu }\text {sign}(A^{\mu }_t)\), irrespective of whether the strategy is actually being used. (Here the superscript \(\mu \) on \(A_t\) just indicates that the attendance will depend on the history \(\mu (t)\) giving the bids at time *t*.) Ties, i.e. \(U_{i,1}=U_{i,2}\), are decided by a coin toss.
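As a concrete reference for these update rules, the following is a minimal simulation sketch in Python. It is our own illustrative implementation, not code from the paper; the function name and parameter defaults are ours.

```python
import numpy as np

def play_minority_game(N=101, m=7, T=2000, seed=0):
    """Minimal sketch of the basic Minority Game with sign payoff.

    Returns the attendance time series and the final strategy scores U
    (shape N x 2). Illustrative implementation, not the paper's code.
    """
    rng = np.random.default_rng(seed)
    P = 2 ** m
    # Two random strategy tables per agent: a bid of +-1 for each of P histories.
    a = rng.choice([-1, 1], size=(N, 2, P))
    U = np.zeros((N, 2))                  # strategy scores U_{i,j}
    mu = int(rng.integers(P))             # current history, encoded as m bits
    attendance = np.empty(T, dtype=int)
    for t in range(T):
        # Each agent plays her highest-scoring strategy; ties broken by coin toss.
        best = np.where(U[:, 0] == U[:, 1],
                        rng.integers(2, size=N),
                        np.argmax(U, axis=1))
        A = int(a[np.arange(N), best, mu].sum())
        attendance[t] = A
        # Both strategies are scored, whether in play or not (sign payoff).
        U -= a[:, :, mu] * np.sign(A)
        # Endogenous information: shift in the winning bid -sign(A) as one bit.
        mu = (2 * mu + (0 if A > 0 else 1)) % P
    return attendance, U
```

With *N* odd the attendance is always odd and hence nonzero, so \(\text{sign}(A_t)\ne 0\) and ties in the bid never occur.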

To make the dynamics generated by these equations more concrete, Fig. 1 shows the scores of the strategies of four particular agents \(U_{i,1/2}\), \(i=1,\ldots ,4\), for one realization of a game with \(N=101\), \(P=2^7\), together with the corresponding relative scores \(x_i\) (inset), over a limited time interval. As exemplified by this figure, agents come in two flavors, known as "frozen" and "fickle" [5, 14]. An agent is frozen if one of her strategies performs consistently better than the other, such that on average the score difference diverges, whereas a fickle agent has a relative score that meanders around \(x=0\), switching the strategy in use. The motion of \(x_i\) for both fickle and frozen agents is a random walk with a bias towards or away from \(x=0\). A basic problem is to characterize and understand this random walk and derive the corresponding probability distribution \(P_i(x,t)\): the probability to find agent *i* at position *x* at time *t* [10, 16].

### 2.1 Outline and Results

As presented in Sect. 3 we can quantify the correlation between an agent’s strategies, specified by \(\xi _i^\mu \), and the total attendance \(A_t^\mu \), which in turn allows for characterizing the mean (time averaged) step size \(\Delta _i=\langle x_i(t+1)-x_i(t)\rangle \) in terms of a distribution over agents \(P(\Delta _i)\). In agreement with earlier work we find that \(\Delta _i\) has two contributions: a center-seeking (\(x=0\)) bias term, which arises from self interaction (the used strategy contributes to the attendance and as such is more likely to be in the majority group [17]), and a fitness term, which reflects the relative adaptation of the agent’s two strategies to the time averaged stochastic environment of the game. The distribution of step sizes over the population of agents is shown in Fig. 3, where frozen agents are simply those for whom the fitness overcomes the bias, such that \(\Delta _i>0\) for \(x>0\) or \(\Delta _i<0\) for \(x<0\), whereas for fickle agents \(\Delta _i<0\) for \(x>0\) and vice versa.

Knowing the mean step size of an agent allows for a formulation in terms of a one dimensional random walk (Fig. 4) with corresponding jump probabilities, as presented in Sect. 4. Depending on whether it is more likely to jump towards the center or not (fickle or frozen respectively) the master equation on the chain can be solved in terms of a stationary exponential distribution centered at \(x=0\) or (in the continuum limit) a normal distribution with a variance and mean that grow linearly in time (diffusion with drift). These are the distributions \(P_i(x,t)\) depending on \(\Delta _i\).

In simulations over many agents it is natural to consider the full distribution \(P(x,t)=\frac{1}{N}\sum _{i=1}^{N} P_i(x,t)=\int P(\Delta _i)P_i(x,t)\,d\Delta _i\), with \(N\,P(x,t)\) thus the expected number of agents at time *t* with relative score *x*. In terms of scaled coordinates \(x/\sqrt{N}\) and \(t/N\) we find that the distribution depends only on \(\alpha \). The model distributions show excellent agreement with direct numerical simulations (Figs. 5 and 6) with no fitting parameters. This result for the full distribution of relative scores, together with its systematic derivation for the original sign-payoff game, represents the main result of this paper.

In Appendix 2 we discuss the relation between the model presented in this work and the formulation in terms of a minimization problem of a Hamiltonian generator of the asymptotic dynamics [8, 13]. We find that one way to view the present model is as a reduced ansatz for the ground state where the only parameters are the fraction of positively and negatively frozen agents (solved for self-consistently) instead of the full space of the frequency of use of each strategy. With this ansatz closed expressions can be derived for the steady state distributions irrespective of the form of the Hamiltonian.

In Appendix 3 we show how the model applies to the game with linear payoff \(\Delta _i(t)=-\xi _i^\mu A^\mu _t\).

## 3 Statistical Model

Following Challet and Marsili [5] we write the two strategies of each agent in terms of \(\vec {\omega }_i=(\vec {a}_{i,1}+\vec {a}_{i,2})/2\) and \(\vec {\xi }_i=(\vec {a}_{i,1}-\vec {a}_{i,2})/2\). For histories where \(\omega _i^\mu =\pm 1\) (and thus \(\xi _i^\mu =0\)) agent *i* always has the same bid for history \(\mu \) independently of which strategy it has in play. The sum over all agents, \(\vec {\Omega }=\sum _{i=1}^N\vec {\omega }_i\), thus gives a constant history dependent but time independent background contribution to the attendance. (In the sense that every time history \(\mu \) occurs in the time series it gives the same contribution.) This background \(\Omega ^\mu \) is, for large *N*, normally distributed with mean zero and variance *N*/2.

We now fix the gauge by ordering the strategies of each agent *i* such that \(\vec {\Omega }\cdot \vec {\xi }_i\le 0\), and write the attendance at time *t* with history \(\mu \) as \(A_t^\mu =\Omega ^\mu +\sum _{i=1}^N s_i(t)\,\xi _i^\mu \), where \(s_i(t)=\pm 1\) indicates which strategy agent *i* is playing [5]. Again, the relative strategy score \(x_i\) of agent *i* is updated according to Eq. 4. Given the background contribution to the attendance \(\vec {\Omega }\) we expect there to be a surplus of \(s_i=1\) in the steady state with our choice of gauge, because strategy 1 is expected to be favored by the score update function. (In other words, strategy 1 is expected to have a higher fitness.) However, this correlation is not trivial, as the accumulated score also depends on the dynamically generated contribution to the attendance. As discussed previously, some fraction \(\phi \) of the agents are frozen, in the sense of always using the same strategy, \(s_i=\text {constant}\). We make an additional distinction (made significant by our choice of gauge) and separate the group of frozen agents into those with \(s_i(t)=1\) (fraction \(\phi _1\)) and those with \(s_i(t)=-1\) (fraction \(\phi _2\)), such that \(\phi =\phi _1+\phi _2\). Clearly, we expect the former to be more plentiful than the latter.
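The gauge ordering and the decomposition of the attendance into background plus strategy-dependent parts can be checked numerically. The following is our own illustrative sketch (variable names ours), not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
N, P = 101, 128
a = rng.choice([-1, 1], size=(N, 2, P))   # two strategy tables per agent
omega = (a[:, 0] + a[:, 1]) / 2           # common part of the two strategies
xi = (a[:, 0] - a[:, 1]) / 2              # differing part
Omega = omega.sum(axis=0)                 # background attendance per history

# Gauge choice: order each agent's strategies such that Omega . xi_i <= 0,
# i.e. flip the sign of xi_i (swap strategies 1 and 2) where needed.
flip = np.sign(Omega @ xi.T)
xi_gauged = xi * np.where(flip > 0, -1, 1)[:, None]
assert np.all(Omega @ xi_gauged.T <= 0)

# For any choice of played strategies s_i = +-1 (s_i = +1 meaning strategy 1),
# the attendance decomposes as A^mu = Omega^mu + sum_i s_i xi_i^mu.
s = rng.choice([-1, 1], size=N)
j = np.where(s == 1, 0, 1)                # strategy index actually in play
A_direct = a[np.arange(N), j].sum(axis=0)
A_decomp = Omega + (s[:, None] * xi).sum(axis=0)
assert np.allclose(A_direct, A_decomp)
```

The decomposition holds identically because each agent's bid is \(\omega _i^\mu + s_i\xi _i^\mu \) regardless of which strategy is in play.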

To proceed, we need to find the distribution of \(\vec {\xi }_i\), i.e. how it varies over the set of agents. (Henceforth we will usually drop the index *i* and regard the objects as drawn from a distribution.) Begin by defining \(\vec {\psi }=\text {Random}(\pm 1)\vec {\xi }\), which is thus disordered with respect to the sign of \(\vec {\Omega }\cdot \vec {\psi }\) ^{1}. The object \(\psi ^\mu \) is independent of \(\Omega ^\mu \) (ignoring \(1/N\) corrections due to \(\Omega ^\mu \ne 0\) limiting the available bids \(\pm 1\)), taking values \((1,0,-1)\) with probability \((1/4,1/2,1/4)\), which gives mean zero and variance 1/2. Consider the joint object \(h=\frac{1}{P}\vec {\Omega }\cdot \vec {\psi }\); for large *P* this becomes normally distributed with mean zero and variance \(\sigma _h^2=\frac{1}{P}(N/2)(1/2)=1/(4\alpha )\) [5].

We denote the normal distribution over *x* with mean \(\mu \) and variance \(\sigma ^2\) as \(\mathcal{N}_x(\mu ,\sigma )=\frac{1}{\sqrt{2\pi }\sigma }e^{-(x-\mu )^2/2\sigma ^2}\). In the gauge \(\vec {\Omega }\cdot \vec {\xi }\le 0\) the overlap \(\frac{1}{P}\vec {\Omega }\cdot \vec {\xi }=-|h|\) is thus distributed as the negative half-normal \(2\mathcal{N}(0,\sigma _h)\). This quantifies that \(\xi ^\mu \) is on average anticorrelated with \(\Omega ^\mu \), which is expected to place strategy 1 in the minority group more often than strategy 2.
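The variance \(\sigma _h^2=1/(4\alpha )\) is easy to verify numerically. A quick sanity check (our own sketch, with illustrative system sizes):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 400                  # alpha = P/N = 2
n_agents = 20000                 # many independent psi's for statistics

# Background Omega^mu: sum of N iid omega^mu taking (-1, 0, +1)
# with probabilities (1/4, 1/2, 1/4).
Omega = rng.choice([-1, 0, 0, 1], size=(N, P)).sum(axis=0)

# psi^mu per agent, same single-site distribution, independent of Omega.
psi = rng.choice([-1, 0, 0, 1], size=(n_agents, P))
h = psi @ Omega / P              # h = (1/P) Omega . psi, one value per agent

alpha = P / N
print(h.mean(), h.var(), 1 / (4 * alpha))  # mean ~ 0, variance ~ 1/(4 alpha)
```

For the chosen sizes the sample variance comes out close to \(1/(4\alpha )=0.125\), up to finite-size fluctuations of the fixed background realization.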

### 3.1 Distribution of Step Sizes

To calculate the distribution of mean step sizes we will assume that histories occur with the same frequency, such that \(\Delta =\frac{1}{P}\sum _\mu \Delta ^\mu \). This is in fact not the case for a single realization of the game in the dilute phase: some histories occur more often than others, as one can see directly from any simulation in this regime. Nevertheless, for large *P* we will assume that this variation in the occurrence of \(\mu \) averages out. As discussed extensively in the literature, the overall behavior of the game is insensitive to whether the actual history is used as input to the agents (endogenous information) or a random history is supplied (exogenous information) [10, 11, 16, 21, 22]. This is also confirmed by the present work through the good agreement between the model, which uses exogenous information, and simulations in which we use the actual history.

For large *P*, and given the assumption of independence of the distributions \(\Omega ,\xi ,X,Y\) for different \(\mu \), we expect the distribution \(P(\Delta )\) to approach a Gaussian (by the central limit theorem). Its mean contains a bias term and a fitness term, both scaling as \(1/\sqrt{N}\), expressed through scaled functions \(\tilde{\Delta }_{\text {bias}}(\alpha ,\phi _1,\phi _2)\) and \(\tilde{\Delta }_{\text {fit}}(\alpha ,\phi _1,\phi _2)\) which depend on *N* and *P* only through \(\alpha =P/N\), change slowly as a function of the arguments in the physically relevant regime \(0\le \phi _1+\phi _2\le 1\) (Fig. 7), and satisfy \(\tilde{\Delta }_{\text {bias}}(\alpha ,0,0)=\frac{1}{\sqrt{2\pi }}\) and \(\tilde{\Delta }_{\text {fit}}(\alpha ,0,0)=\frac{1}{\pi }\). As seen from Eq. 17, the mean bias is towards \(x=0\) (the used strategy is penalized), while the mean fitness is positive, acting to increase the relative score *x*, consistent with our choice of gauge as discussed earlier.

The fitness has a variance that scales with 1/*P*, whereas the bias has a variance that scales with 1/(*NP*) and is thus negligible (as is the cross term). The total variance of \(P(\Delta )\) can be written in a corresponding scaled form.

### 3.2 Fraction of Frozen Agents

Requiring self-consistency between the Gaussian distribution of mean step sizes \(P(\Delta )\) and the fractions of frozen agents determines \(\phi _1\) and \(\phi _2\), and the resulting total fraction \(\phi \) is compared to simulations in Fig. 2^{2}. The fit is good, but there is no indication of a phase transition for small \(\alpha \) in this simplified model.

From simulations we can also measure the distribution of mean step sizes to compare to the model, as shown in Fig. 3. There we show an intermediate value of \(\alpha \); the fit in terms of mean and width is not as good close to \(\alpha _c\) and almost perfect for large \(\alpha \), but everywhere the data seem well represented by a normal distribution. We also use the mean step size distributions from simulations to calculate the fraction of frozen agents, Fig. 2. (The naive way to distinguish between frozen and fickle agents, introducing a cut-off \(x_{\text {cut}}\) at some time *t* and considering any agent with \(|x_t|>x_{\text {cut}}\) frozen, makes it difficult to classify agents with \(\Delta \) near 0.)

## 4 Distributions Over *x*

We now consider the distribution over the relative score *x* on the set of integers. Consider that the agent at time step *t* has score difference *x*: what is the probability that at time \(t+1\) the score difference is \(x'\)? In each time step, *x* can only change by \(-1,0,1\) as given by the basic score update Eq. 4. We specify the respective probabilities \(p_-,p_0,p_+\) with \(p_-+p_0+p_+=1\) for \(x>0\) and \(q_-,q_0,q_+\) for \(x<0\). The mean probability that *x* remains unchanged is \(p_0=q_0=\frac{1}{2}\) as this corresponds to \(\xi _i^\mu =0\), meaning that the agent’s two strategies have the same bid, which on average (over \(\mu \)) will be the case for half of the histories. It should also be clear that the stepping probabilities cannot depend on the magnitude of *x*, only the sign, because the difference in score between strategies does not enter the game, only which strategy is currently used. The case \(x=0\) has to be treated separately; we toss a coin to decide which strategy is used, thus the probability for a \(+1\) increment is \((p_++q_+)/2\) and for a \(-1\) increment is \((p_-+q_-)/2\). The movement of *x* thus corresponds to a one-dimensional random walk on a chain, with asymmetric jump probabilities, as sketched in Fig. 4.

Given \(p_0=1/2\), the jump probabilities for \(x>0\) follow directly from the mean step on that side, \(p_\pm =\frac{1}{4}\pm \frac{1}{2}\Delta ^+\), and the probabilities *q* follow from the same analysis for \(x<0\). Keeping in mind that for a fickle agent \(\Delta ^+<0\) and \(\Delta ^->0\), this is of course consistent with \(p_+<p_-\) and \(q_-<q_+\). A frozen agent is instead characterized by \(p_+>p_-\) or \(q_->q_+\).
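To make the chain picture concrete, here is a small simulation sketch of the walk with illustrative center-seeking jump probabilities (chosen by us for the sketch, not computed from the game). It checks that the stationary distribution decays by the factor \(p_+/p_-\) per site for \(x>0\):

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative jump probabilities for a fickle agent: center-seeking on
# both sides, with p0 = q0 = 1/2 (values are ours, not derived).
p_plus, p_minus = 0.20, 0.30      # steps +1 / -1 for x > 0
q_plus, q_minus = 0.30, 0.20      # steps +1 / -1 for x < 0 (mirror image)

T = 500_000
x = 0
counts = {}
for _ in range(T):
    r = rng.random()
    if x > 0:
        step = 1 if r < p_plus else (-1 if r < p_plus + p_minus else 0)
    elif x < 0:
        step = 1 if r < q_plus else (-1 if r < q_plus + q_minus else 0)
    else:
        # At x = 0 a coin toss picks the strategy, mixing the two sides.
        pp, pm = (p_plus + q_plus) / 2, (p_minus + q_minus) / 2
        step = 1 if r < pp else (-1 if r < pp + pm else 0)
    x += step
    counts[x] = counts.get(x, 0) + 1

# Detailed balance on the chain predicts pi(x+1)/pi(x) = p_plus/p_minus
# for x > 0, i.e. an exponentially localized stationary distribution.
ratio = counts[3] / counts[2]
print(ratio, p_plus / p_minus)    # both close to 2/3
```

Reversing the inequalities (\(p_+>p_-\)) instead produces an unbounded drift away from the center, the frozen case.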

For a fickle agent the stationary solution of the master equation is exponentially localized around \(x=0\), with a matching condition for *x* at the interface. This can be solved exactly, but given that the exponential prefactor is small we settle for an approximate expression.
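The two types of solutions can be sketched as follows (a standard birth–death-chain calculation under the jump probabilities above; normalization and the matching at \(x=0\) are omitted):

```latex
% Fickle agent, x > 0: detailed balance \pi(x)\,p_+ = \pi(x+1)\,p_-
% gives exponential localization,
\pi(x) \propto \left(\frac{p_+}{p_-}\right)^{x} = e^{-x/\ell},
\qquad \ell^{-1} = \ln\frac{p_-}{p_+} ,
% while for a frozen agent the continuum limit gives diffusion with a drift,
P(x,t) \approx \mathcal{N}_x\!\left(vt,\sqrt{2Dt}\right),
\qquad v = p_+ - p_- , \qquad D \approx \tfrac{1}{2}\left(p_+ + p_-\right),
% with D valid to leading order in the drift.
```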

### 4.1 Full Score Distributions

The asymmetry of these plots (Figs. 5 and 6) is an artefact of our gauge choice \(\vec {\xi }_i\cdot \vec {\Omega }\le 0\), which implies that on average agents will use strategy 1 (\(x>0\)) more frequently than strategy 2 (\(x<0\)). Restoring the full symmetry is simply a matter of symmetrizing the distributions around \(x=0\).

## 5 Summary

We have studied the asymmetric phase of the basic Minority Game, focusing on the statistical distribution of relative strategy scores in the original sign-payoff formulation of the game. We formulate a statistical model for the attendance that relies on a specific gauge choice in which the two strategies of each agent are ordered with respect to the background (\(\vec {\xi }_i\cdot \vec {\Omega }\le 0\) for all agents *i*). Using this model we derive a distribution of the mean step per time increment for the relative scores, specified in terms of a bias for the used strategy and the relative fitness of the two strategies. The relative strategy score of each agent is conveniently described as a random walk on an integer chain, with jump probabilities calculated from the mean step. The probability distribution of observing an agent at some position on the chain at a given time is either a static asymmetric exponential localized around \(x=0\), for fickle agents, or diffusion with a drift, for frozen agents. Excellent agreement with direct simulations of the game for the score distribution confirms the basic validity of the modelling. At the same time, as discussed in the appendix, the fluctuations of the attendance are overestimated by the model. By contrasting with the Hamiltonian formulation of the dynamics, the reason for this discrepancy is readily understood from viewing the model as a crude ansatz for the full minimization problem. This also opens the way to improving the model by introducing additional variational parameters, without having to confront the full complexity of minimizing a non-quadratic Hamiltonian for general payoff functions.

We thank Erik Werner for valuable discussions. Simulations were performed on resources at Chalmers Centre for Computational Science and Engineering (C3SE) provided by the Swedish National Infrastructure for Computing (SNIC).

## Footnotes

- 1.
Note that what we here refer to as \(\psi \) is what is called \(\xi \) in the literature [5]. In this paper we reserve \(\xi \) for the object where strategies are ordered such that \(\vec {\Omega }\cdot \vec {\xi }_i\le 0\), corresponding to \(\xi _i^\mu =-\psi _i^\mu \text {sign}(\vec {\Omega }\cdot \vec {\psi }_i)\).

- 2.
- 3.
The exact expressions for these quantities are derived from the integral formulas as explained, but we are also happy to share them directly. Contact the first author.

## References

- 1. Arthur, W.B.: Inductive reasoning and bounded rationality: the El Farol problem. Am. Econ. Rev. **84**, 406 (1994)
- 2. Challet, D., Zhang, Y.-C.: Emergence of cooperation and organization in an evolutionary game. Physica A **246**, 407 (1997)
- 3. Zhang, Y.-C.: Evolving models of financial markets. Europhys. News **29**, 51 (1998)
- 4. Savit, R., Manuca, R., Riolo, R.: Adaptive competition, market efficiency, and phase transitions. Phys. Rev. Lett. **82**, 2203 (1999)
- 5. Challet, D., Marsili, M.: Phase transition and symmetry breaking in the Minority Game. Phys. Rev. E **60**, R6271(R) (1999)
- 6. de Cara, M.A.R., Pla, O., Guinea, F.: Competition, efficiency and collective behavior in the "El Farol" bar model. Eur. Phys. J. B **10**, 187 (1999)
- 7. Cavagna, A., Garrahan, J.P., Giardina, I., Sherrington, D.: Thermal model for adaptive competition in a market. Phys. Rev. Lett. **83**, 4429 (1999)
- 8. Challet, D., Marsili, M., Zecchina, R.: Statistical mechanics of systems with heterogeneous agents: minority games. Phys. Rev. Lett. **84**, 1824 (2000)
- 9. Jefferies, P., Hart, M.L., Hui, P.M., Johnson, N.F.: From market games to real-world markets. Eur. Phys. J. B **20**, 493 (2001)
- 10. Challet, D., Marsili, M., Zhang, Y.-C.: Minority Games. Oxford University Press, Oxford (2005)
- 11. Yeung, C.H., Zhang, Y.-C.: Minority Games. In: Encyclopedia of Complexity and Systems Science, pp. 5588–5604. Springer, New York (2009)
- 12. Chakraborti, A., Challet, D., Chatterjee, A., Marsili, M., Zhang, Y.-C., Chakrabarti, B.K.: Statistical mechanics of competitive resource allocation using agent-based models. Phys. Rep. **552**, 1 (2015)
- 13. Marsili, M., Challet, D.: Continuum time limit and stationary states in the minority game. Phys. Rev. E **64**, 056138 (2001)
- 14. Heimel, J.A.F., Coolen, A.C.C.: Generating functional analysis of the dynamics of the batch minority game with random external information. Phys. Rev. E **63**, 056121 (2001)
- 15. Coolen, A.C.C.: Generating functional analysis of minority games with real market histories. J. Phys. A **38**, 2311 (2005)
- 16. Coolen, A.C.C.: The Mathematical Theory of Minority Games: Statistical Mechanics of Interacting Agents. Oxford University Press, Oxford (2005)
- 17. Marsili, M., Challet, D., Zecchina, R.: Exact solution of a modified El Farol's bar problem: efficiency and the role of market impact. Physica A **280**, 522 (2000)
- 18. Mezard, M., Parisi, G., Virasoro, M.: Spin Glass Theory and Beyond: An Introduction to the Replica Method and Its Applications. World Scientific Lecture Notes in Physics, vol. 9. World Scientific, Singapore (1987)
- 19. Acosta, G., Caridi, I., Guala, S., Marenco, J.: The quasi-periodicity of the minority game revisited. Physica A **392**, 4450 (2013)
- 20. Hart, M., Jefferies, P., Hui, P.M., Johnson, N.F.: Crowd-anticrowd theory of multi-agent market games. Eur. Phys. J. B **20**, 547 (2001)
- 21. Cavagna, A.: Irrelevance of memory in the minority game. Phys. Rev. E **59**, R3783 (1999)
- 22. Challet, D., Marsili, M.: Relevance of memory in minority games. Phys. Rev. E **62**, 1862 (2000)

## Copyright information

**Open Access**This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.