Boltzmann and Fokker-Planck equations modelling the Elo rating system with learning effects

In this paper we propose and study a new kinetic rating model for a large number of players, which is motivated by the well-known Elo rating system. Each player is characterised by an intrinsic strength and a rating, which are both updated after each game. We state and analyse the respective Boltzmann type equation and derive the corresponding nonlinear, nonlocal Fokker-Planck equation. We investigate the existence of solutions to the Fokker-Planck equation and discuss their behaviour in the long time limit. Furthermore, we illustrate the dynamics of the Boltzmann and Fokker-Planck equations with various numerical experiments.


Introduction
In 1950 the Hungarian physicist Arpad Elo developed a rating system to calculate the relative skill level of players in competitor-versus-competitor games, see [18]. The Elo rating system was initially used in chess competitions, but was quickly adopted by the US Chess Federation as well as the World Chess Federation, and the National Football Foundation. In June 2018, FIFA announced switching their world football ranking to an Elo system, following two years of reviews and studies of different alternatives. The Elo rating system assigns each player a rating, which is updated according to wins and losses as well as the difference of the ratings. It is hoped that the rating converges to the relative strength level and is a valid measure of the player's skills. However, assigning an initial rating to a new player is a delicate issue, since it is not clear how an inaccurate initial rating influences later performance. Elo himself tried to validate the model using computational experiments, while Glickman used statistical techniques to understand the dynamics [19]. The first rigorous proof of convergence of the ratings to the individual strength was presented by Junca and Jabin in [20], who introduced a continuous version of the Elo rating system. In this continuous model every player is characterised by an intrinsic strength ρ and a rating R. The intrinsic strength is fixed in time. If two players with ratings R_i and R_j meet in a game, their ratings after the game, R*_i and R*_j, are given by
R*_i = R_i + K(S_ij − b(R_i − R_j)),   R*_j = R_j − K(S_ij − b(R_i − R_j)). (1)
In (1) the random variable S_ij is the score of the game; it takes the value 1 if player i wins and the value −1 if player j wins. The mean score (i.e. the expected value of S_ij) is assumed to be equal to b(ρ_i − ρ_j), hence the result of each game depends on the difference of the players' intrinsic strengths. The rating of each player increases or decreases proportionally with the outcome of the game, relative to the predicted mean score b(R_i − R_j).
The speed of the adjustment is controlled by the constant parameter K. The function b is chosen in such a way that extreme differences are moderated; a typical choice is b(z) = tanh(cz), (2) where c is a suitably chosen positive constant. This choice weighs the impact of the outcome with respect to the relative rating. If a player with a high rating wins a game against a player with a low rating, the players' ratings change little. However, if the player with the low rating wins against a highly rated player, the ratings are strongly adjusted.
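The moderating effect of b can be made concrete in a few lines of code; this is a minimal sketch of the rating rule described above, where the constants K = 0.1 and c = 1 are illustrative choices of ours, not values taken from the paper.

```python
import math

def b(z, c=1.0):
    # Moderating function b(z) = tanh(cz), cf. (2).
    return math.tanh(c * z)

def elo_update(R_i, R_j, S_ij, K=0.1, c=1.0):
    # One Elo-type rating update: the change is proportional to the gap
    # between the actual score S_ij (+1 / -1) and the predicted mean
    # score b(R_i - R_j); what one player gains, the other loses.
    delta = K * (S_ij - b(R_i - R_j, c))
    return R_i + delta, R_j - delta

# An expected win by the much higher-rated player barely moves the ratings,
hi_wins = elo_update(2.0, -2.0, +1)
# while an upset win by the lower-rated player adjusts them strongly.
upset = elo_update(2.0, -2.0, -1)
```

With these numbers the expected win changes the winner's rating by less than 10^-3, while the upset shifts both ratings by roughly 0.2, illustrating the moderation described above.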
Junca and Jabin proposed a Boltzmann type equation, (3), to describe the evolution of the distribution of players f = f(r, t) with respect to their ratings. This equation describes a more general setup than the microscopic equations. Here two players only interact according to the interaction rate function w, which depends on the difference of their ratings. The function w is assumed to be even and nonnegative. Junca and Jabin analysed the long time behaviour of solutions to (3). They proved that in the case w = 1, a so-called 'all-play-all' tournament, the ratings converge exponentially fast to the intrinsic strength. In the case of local interactions, that is if individuals only play when their ratings are close, the ratings may not converge to the intrinsic strength and the rating fails to give a fair representation of the players' strength distribution.
Rather recently Krupp [21] proposed an extension of the model by Jabin and Junca [20]. In her model not only the rating but also the intrinsic strength changes as players continuously compete in games. In particular, she assumes that the intrinsic strength ρ changes in every game according to an update rule (4), where K̃ is a positive constant and Z_ij takes the value z_1 ∈ N or z_2 ∈ N. In case of a win the inner strength ρ_i increases by z_1 K̃, in case of a loss by z_2 K̃. Hence if z_1 < z_2 the loser benefits more from the game, while if z_1 > z_2 the winner learns more. If z_1 = z_2 both learn the same. The corresponding Boltzmann type equation (5) governs the distribution of the players f = f(r, ρ, t) with respect to their strength and rating. Krupp analysed the qualitative behaviour of solutions to (5). Due to the continuous increase in strength, the ratings increase in time. Therefore, an appropriately shifted problem was studied, in which the ratings converge exponentially fast to the intrinsic strength in the case w = 1.
In this paper we propose a more general approach to describe how a player's strength changes in encounters. We assume that individuals benefit from every game and increase their strength because of these interactions. However, the extent of the benefit depends on several factors: first, players with a lower rating benefit more; second, the stronger the opponent, the more a win pushes the intrinsic strength.
The kinetic description of the Elo rating system allowed Junca and Jabin to analyse the qualitative behaviour of solutions. In the last decades kinetic models have been used successfully to describe the behaviour of large multi-agent systems in socio-economic applications. In all these applications interactions among individuals are modelled as 'collisions', in which agents exchange goods [12,17,6], wealth [13,14,4,11], opinions [28,5,15,23,1,16] or knowledge [25,7]. For a general overview of interacting multi-agent systems and kinetic equations we refer to the book of Pareschi and Toscani [24].
This paper is organised as follows. We introduce a generalization of the kinetic Elo model with variable intrinsic strength due to learning in Section 2. In Section 3 we derive the corresponding Fokker-Planck type equation as the quasi-invariant limit of the Boltzmann type model. Convergence towards steady states of a suitable shifted Fokker-Planck model is analysed in Section 4. We conclude by presenting various numerical simulations of the Boltzmann and the Fokker-Planck type equation in Section 5.

An Elo model with learning
In this section we introduce an Elo model, in which the rating and the intrinsic strength of the players change in time. The dynamics are driven by similar microscopic binary interactions as in the original model by Jabin and Junca [20] and Krupp [21]. We state the specific microscopic interaction rules in each encounter and derive the corresponding limiting Fokker-Planck equation.
2.1. Kinetic model. We follow the notation introduced in Section 1 and denote the individual strength by ρ and the rating by R. If two players with ratings R_i and R_j meet, their ratings and strengths after the game are given by the interaction rules (6). These rules are motivated by the following considerations: player ratings change with the outcome of each game (as in the original model (1) proposed by Jabin and Junca [20]). The random variable S_ij corresponds to the score of the match and depends on the difference in strength of the two players. We assume that S_ij takes the values ±1 with mean ⟨S_ij⟩ = b(ρ_i − ρ_j). Note that one could also assume that S_ij is continuous, for example S_ij ∈ [−1, +1]. The constant parameter γ > 0 controls the speed of adjustment.
The variables η and η̄ are independent identically distributed random variables with mean zero and variance σ², which model small day-to-day fluctuations in the mental strength or personal fitness.
The function h describes the learning mechanism. We assume that h splits into two parts, h = h_1 + h_2. The function h_1 corresponds to the increase in knowledge or skills because of interactions. We assume that each player learns in a game; however, players with a lower strength benefit more. A possible choice for h_1, which we shall use throughout this paper, is given in (8), where b is given by (2). Note that b is an odd function. Since h_1 is positive, both players are able to learn and improve in each game, to an extent which depends on the difference in strengths, with the player with lower strength benefiting more.
The second function, h_2, models a change of strength due to gain or loss of self-confidence after winning or being defeated in a game. We assume that the loss of the stronger player is the same as the gain of the weaker one. Hence, we choose h_2(ρ_j − ρ_i) = S_ij l(ρ_j − ρ_i) to be an odd, regular, bounded function vanishing at infinity, where the function l corresponds to the net change of self-confidence. A possible choice, which we adopt in the following, is given in (9). Note that the expectation of the learning function follows from ⟨S_ij⟩ = b(ρ_i − ρ_j). The interaction rules based on the function b(·) preserve the total value of the rating pointwise and in mean, that is R*_i + R*_j = R_i + R_j. The evolution of the total strength depends on the choices of the functions h_1 and h_2. Note that the function h_2 does not affect the total strength, since its contributions to the two players cancel. We see that the proposed interaction rules result in a net increase of the total knowledge in every interaction. Therefore, we expect an overall increase in strength for all times.
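As a concrete illustration, the sketch below uses the hypothetical choices h_1(z) = α(1 − b(z)) and l(z) = β b(z), with α, β echoing the learning parameters of Section 5; these particular formulas and the sign convention for the confidence transfer are our assumptions, chosen only to reproduce the qualitative properties stated above: h_1 > 0, l odd and bounded, and a confidence exchange that cancels in the sum.

```python
import math

def b(z, c=1.0):
    return math.tanh(c * z)

def h1(z, alpha=0.1):
    # Hypothetical learning gain, z = rho_i - rho_j: positive for every
    # player, larger for the weaker one.
    return alpha * (1.0 - b(z))

def l(z, beta=0.05):
    # Hypothetical net self-confidence change: odd and bounded.
    return beta * b(z)

def strength_update(rho_i, rho_j, S_ij, gamma=0.05):
    # Assumed sign convention: the winner takes the confidence the loser
    # gives up, so the h2-type terms cancel in the total strength.
    conf = S_ij * l(rho_i - rho_j)
    new_i = rho_i + gamma * (h1(rho_i - rho_j) + conf)
    new_j = rho_j + gamma * (h1(rho_j - rho_i) - conf)
    return new_i, new_j
```

Since h_1(z) + h_1(−z) = 2α for this choice, every game raises the total strength by exactly 2γα, matching the net increase of knowledge discussed in the text.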
The proposed interaction rules are a first step towards more realistic modelling. Alternative learning mechanisms, such as the one proposed in the context of knowledge exchange in a large society, see [7], could be considered in the future. There the individual with the lower knowledge level assumes the higher level after the interaction, while the stronger one does not gain anything in the encounter. Hence the overall knowledge level is bounded by the maximum initial knowledge level for all times, and the distribution of individuals converges to a Dirac delta at that point. We expect similar dynamics if we were to apply that rule instead of (6). Developing learning mechanisms which combine limitations of individual learning with the continuous evolution of the collective knowledge will be an important aspect of future research.
Now we are able to state the evolution equation for the distribution of players f_γ = f_γ(ρ, R, t) with respect to their rating R and intrinsic strength ρ. For a fixed number of players, N, the interactions (6) induce a discrete-time Markov process with N-particle joint probability distribution P_N(ρ_1, R_1, ρ_2, R_2, ..., ρ_N, R_N, τ). One can write a kinetic equation for the one-marginal distribution function,
P_1(ρ, R, τ) = ∫ P_N(ρ, R, ρ_2, R_2, ..., ρ_N, R_N, τ) dρ_2 dR_2 ... dρ_N dR_N,
using only one- and two-particle distribution functions [8,9]. Here, ⟨·⟩ denotes the mean operation with respect to the random variables η, η̄, and the function w(·) corresponds to the interaction rate function, which depends on the difference of the ratings. This process can be continued to give a hierarchy of equations of so-called BBGKY type [8,9], describing the dynamics of a system of a large number of interacting agents.
A standard approximation is to neglect correlations and assume the factorisation P_2(ρ, R, ρ_j, R_j, τ) = P_1(ρ, R, τ) P_1(ρ_j, R_j, τ). By scaling time as t = 2τ/N and performing the thermodynamic limit N → ∞, we can use standard methods of kinetic theory [8,9] to show that the time evolution of the one-agent distribution function f_γ is governed by a Boltzmann-type equation, (11), where φ(·) is a (smooth) test function with support supp(φ) ⊆ Ω. The function w(·) corresponds to the interaction rate function, which depends on the difference of the ratings. If w ≡ 1 we consider a so-called all-play-all game. If w has compact support, only players with close ratings compete. Possible choices for w are indicator functions of the form w(R − R_j) = χ_{|R−R_j| ≤ c̄} for some c̄ > 0, where χ denotes the indicator function (or smoothed variants thereof).
In the following we shall analyse (11) as well as different asymptotic limits of it. The presented analysis is based on the following assumptions:
(A2) Let f_0 ∈ H^1(Ω) with f_0 ≥ 0 and compact support. Furthermore we assume that it has mean value zero and bounded moments up to order two.
(A3) The random variables η, η̄ in (6) have the same distribution, zero mean, ⟨η⟩ = 0, and variance σ_η².
(A4) Let the interaction rate function w ≥ 0 be an even function with w ∈ C²(Ω) ∩ L^∞(Ω).
The kinetic Elo model can be formulated on the whole space as well as on a bounded domain. In reality, the Elo ratings of top chess players vary between 2000 and 3000, which provides evidence for the assumption of a bounded domain Ω. However, sometimes it is easier to study the dynamics of models on the whole space, i.e. without boundary effects. We will generally work on the bounded domain, and clearly state where we deviate from this assumption, e.g. when we study the asymptotic behaviour of moments. The second assumption states the necessary regularity assumptions on the initial data, which we shall use in the analysis of the moments and the existence proof.

2.2.
Analysis of the moments. We start by studying basic properties of the Boltzmann type equation (11) such as mass conservation and the evolution of the first and second moments with respect to the strength and the ratings. Throughout this section we consider the problem in the whole space.
Conservation of mass: Setting φ(ρ_i, R_i) = 1 in equation (11), we see that the right-hand side vanishes. Therefore, the total mass is conserved. Moments with respect to the rating: The s-th moment with respect to R_i, for s ∈ N, is defined in the usual way. Setting φ(ρ_i, R_i) = R_i shows that the mean value w.r.t. the rating is preserved in time, and therefore m_{R_i}(t) = 0 for all times t ≥ 0.
The evolution of the second moment can be obtained by setting φ(ρ_i, R_i) = R_i². The second term in the integral is non-positive and we obtain a bound which shows that the second moment grows at most linearly and remains bounded for finite times. Note that the integral is negative for γ small enough, which implies a decreasing second moment.
Moments with respect to the strength: The moments with respect to the strength are defined in an analogous way, that is for s ∈ N, using again m_{ρ_i}(t) = M_{1,ρ_i}. Since (A2) holds, we see that the mean value is bounded for all times t ∈ [0, T] and that |m_{ρ_i}(t)| grows at most linearly in time if h(·) is bounded. If we consider the specific interaction rules (8)-(12), we obtain a sharper bound, with equality holding in the 'all-play-all' case w = 1. The evolution of the second moment M_{2,ρ_i} can be computed similarly. If h(·) is bounded, the second moment grows at most at a polynomial rate. Since the second moment of f_0 is bounded (see assumption (A2)), it remains finite for all times t ∈ [0, T].

The Fokker-Planck limit
In the last section we analysed the evolution of moments of the Boltzmann type equation (11). However, it is often more useful to study the dynamics of simplified models (generally of Fokker-Planck type), which can be derived in particular asymptotic limits. These asymptotics provide a good approximation of the stationary profiles of the kinetic equation. In what follows we consider the so-called quasi-invariant limit, in which diffusion and the outcome of the game influence the long-time dynamics. More specifically, we consider the limit γ → 0, σ_η → 0 such that σ_η²/γ =: σ² is kept fixed. In Appendix A we derive the corresponding Fokker-Planck limit in weak form; its differential form is given by (16). We consider equation (16) with initial datum f_0 satisfying assumption (A2) in the following. Note that (16) includes the nonlocal operator a[f], corresponding to the change of the ratings, similar to the Fokker-Planck equations (3) and (5). Next we study properties of the Fokker-Planck equation (16). We shall see that several properties which we observed for the Boltzmann type equation (11) can be transferred.
Conservation of mass and positivity of solutions: Due to mass conservation and (A2), the total mass of the solution remains equal to one. Using similar arguments as in [27], we can directly prove that the Fokker-Planck equation maintains the positivity of the solution. Let v_m(t) = (ρ_m(t), R_m(t)) denote the point where the minimum is attained. Clearly, if at a certain time t̃ ≥ 0 the function equals zero, i.e. f(ρ_m, R_m, t̃) = 0, this point is a stationary point or a local minimum. Evaluating the equation at this point shows that f is non-decreasing in time there, hence f cannot assume negative values.
Evolution of the moments: We now consider the evolution of the moments of the solution of (16) using the interaction rules (8) and (9). Similar calculations as in Section 2.2 confirm the expected behaviour: due to the continuous increase in strength in each game, the system does not converge to a steady state and the respective mean of the solution is non-decreasing in time. The results confirm that, due to the continuous increase in strength in each game, rating and skills tend to become increasingly distant from each other. Therefore, we adopt an idea by Krupp [21] and study the evolution of a suitably shifted problem instead. We define a shifted density g(ρ, R, t) by translating f in ρ and R by a scaling function H(t), chosen such that the mean value is preserved in time. The corresponding evolution equation for g(ρ, R, t) is stated below. Now the mean value of g(ρ, R, t) is constant w.r.t. both R and ρ and we can normalise it to zero. In a general setting it is not possible to compute the scaling function explicitly. However, in all-play-all tournaments, that is w(R − R_j) = 1, and in case of the specific interaction rules (8)-(9), an explicit expression is available. Therefore, in the rest of this paper, we consider the following problem on a bounded domain Ω ⊂ R², with no-flux boundary conditions on ∂Ω.

Here ν denotes the unit outer normal vector. Note that the existence of solutions to (21a) on the whole space is more involved, since we would need to prove that the solution decays sufficiently fast as R and ρ tend to infinity. Therefore, we consider the equation on a bounded domain only.

Analysis of the Fokker-Planck equation.
In this section we prove existence of weak solutions to (21). The main result reads as follows.
The presented existence proof is adapted from a similar argument for a nonlinear Fokker-Planck equation describing the dynamics of agents in an economic market, see [17]. However, equation (21a) has an additional nonlinearity in the derivative w.r.t. the rating R. We divide the proof into several steps for ease of presentation. In Step 0 we regularise the nonlinear Fokker-Planck equation (21a) by adding a Laplace operator with small diffusivity µ > 0. We linearise the equation in Step 1 and show existence of a unique solution for this problem. In Step 2 we derive the necessary L^∞ estimates to use Leray-Schauder's fixed point theorem and show existence of solutions to the nonlinear regularised problem. In Step 3 we present additional H^1 estimates, which allow us to pass to the limit µ → 0 in Step 4.
Proof.
Step 3: uniform H^1 bound. Our aim is to derive an H^1 bound which is independent of µ. Choosing v = g_µ in (23) with t instead of T, we obtain an energy estimate. Because of the assumptions on h_1, h_2 and b we have that −(1/2) ∂/∂R a[g_µ] + ∂/∂ρ c̃[g_µ] < C. Therefore, we can rewrite the above estimate accordingly. Using Gronwall's lemma, the previous estimate guarantees µ-independent bounds for g_µ(t), i.e. ‖g_µ‖_{L^∞(0,T;L²(Ω))} ≤ C.
Furthermore, by direct computation, we obtain a bound on the remaining terms. The first term on the right-hand side of the previous inequality goes to 0 as µ → 0, because c̃[g_µ] is bounded and g_µ → g strongly in L²(0, T; L²(Ω)). Using the Cauchy-Schwarz inequality and the fact that the domain Ω is bounded, the constant is bounded from above by the L^∞-norms of h and w; hence this term also goes to 0 as µ → 0.
Since c̃[g_µ]g_µ is bounded, convergence holds in L^p for all p < ∞. The same argument holds for the difference ‖a[g_µ]g_µ − a[g]g‖_{L²(0,T;L²(Ω))}. Therefore, we can pass to the limit µ → 0 in equation (23) and obtain the limiting weak formulation for all v ∈ L²(0, T; H^1(Ω)). This completes the proof.

Long time behaviour of ratings and strength
In this section we study possible steady states of the proposed Elo model and discuss the convergence of the ratings to the strength. We recall that Junca and Jabin [20] showed that the ratings of players converge to their intrinsic strength in the case w = 1. This corresponds to the concentration of mass along the diagonal. In our model the intrinsic strength is continuously increasing in time. Hence, to be able to identify steady states, we consider the shifted Fokker-Planck equation (21a). Throughout this section we consider the problem in the whole space.
Since the diffusion part in (21a) is singular, the equation is degenerate parabolic. Despite their lack of coercivity, degenerate Fokker-Planck equations frequently exhibit exponential convergence to equilibrium, a behaviour which Villani termed hypocoercivity in [29]. For subsequent research on hypocoercivity in linear Fokker-Planck equations, see [2,3]. Since (21a) is a nonlinear, nonlocal Fokker-Planck equation, these results do not apply here, but it is conceivable that generalisations of this approach can be used to study the decay to equilibrium for (21a); this is, however, beyond the scope of the present paper. In the following we present some results on the long-time behaviour of solutions to (21a).
Due to the normalisation of the mean value, the only point at which the formation of a steady state is possible is (ρ_0, R_0) = (0, 0). Let us assume that we have a measure-valued steady state at (0, 0), that is g_∞(ρ, R) = δ(ρ)δ(R). Then direct computations using the weak form of (21a) lead to an identity which is not satisfied for all test functions φ. Therefore, we investigate the possibility of more complex steady states, which have a similar form as the one identified by Junca and Jabin. Let us assume that g_∞ is of the form (37), or alternatively (38), where g̃(·) in both cases is not a δ-Dirac.
By direct computation in the weak form of (21a) with φ(ρ, R) = ρ² and φ(ρ, R) = R², respectively, we obtain expressions (39) and (40) for the second moments of the density function g(ρ, R, t). The analysis of the second moment w.r.t. ρ leads us to conclude that the diffusion prevents the formation of a steady state as in (37) if w = 1. Indeed, in this case the first integral in (39) equals σ². If at a certain time t > 0 we have ρ ≈ ρ_j, or g(ρ, R, t) = δ(ρ − ρ_0)g̃(R, t), the integral becomes small or vanishes (in any case smaller than σ²) and then (d/dt) M_{2,ρ_i}(t) ≥ 0. Thus, we can conclude that the diffusion prevents the accumulation of the mass at ρ = 0. For a general choice of w, the long time behaviour of solutions is less clear.
Conversely, the second moment w.r.t. R is decreasing. Due to the symmetry of the functions b and w, we can rewrite (40) accordingly. The resulting inequality does not contradict the assumption of a steady state of the form (38).
In order to evaluate whether, with the scaling (20), the rating converges to the intrinsic strength, let us define the energy E_2 as the second moment of the difference between strength and rating. We are interested in the evolution of E_2 and compute it in (42). For general functions w it is not possible to determine the signs of the respective integrals. Therefore, we consider the case w = 1 only. For all odd functions b(·) (and the same holds true for h_2(ρ − ρ_j)) we are able to show that ∫_{Ω²} ρ b(R − R_j) g(ρ, R, t) g(ρ_j, R_j, t) dρ_j dR_j dρ dR = 0. In this case we can rewrite equation (42) as (43). Again we would like to know if a concentration of mass along the diagonal is possible. Let us assume that at a certain time the solution is g(ρ, R, t) = δ(ρ − R)g̃(ρ, R, t). Inserting this ansatz in (43) shows that the diffusion counteracts the accumulation of the mass along the diagonal. On the other hand, the four integrals in (43) are strictly negative. Hence, if σ² is small enough, the distance between rating and intrinsic strength becomes small and the diffusive term can be controlled. This indicates concentration of the mass in a certain neighbourhood of the diagonal in the long run.
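The expected contraction of the distance between ratings and strengths in the all-play-all case can be illustrated with a small deterministic sketch, in which every pair interacts through the expected rating change γ(b(ρ_i − ρ_j) − b(R_i − R_j)) while learning and diffusion are switched off; the player data and step sizes are arbitrary illustrative choices of ours.

```python
import math

def b(z):
    return math.tanh(z)

def energy(rho, R):
    # Discrete analogue of E2: mean squared distance between strength
    # and rating.
    return sum((p - r) ** 2 for p, r in zip(rho, R)) / len(rho)

def sweep(rho, R, gamma=0.05):
    # Expected ('mean-field') rating update in the all-play-all case
    # w = 1: every pair plays and the score is replaced by its mean.
    N = len(R)
    new_R = R[:]
    for i in range(N):
        for j in range(N):
            if i != j:
                new_R[i] += gamma / N * (b(rho[i] - rho[j]) - b(R[i] - R[j]))
    return new_R

rho = [-1.0, -0.3, 0.4, 1.2]   # fixed strengths
R = [1.0, -1.0, 0.0, -0.5]     # initially poorly assigned ratings
E_start = energy(rho, R)
for _ in range(2000):
    R = sweep(rho, R)
E_end = energy(rho, R)
```

The ratings converge to the strengths up to a common shift (the conserved mean-rating offset), so the discrete energy decays to roughly the square of that offset rather than to zero, which is consistent with the role of the scaling (20).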

Numerical simulations
In this section we discuss the numerical discretisation of the Boltzmann equation (11) and the shifted Fokker-Planck equation (21a). We initialise the distribution of players with respect to their strength and rating with values from the unit interval and consider appropriately shifted interaction rules to ensure that the distribution remains inside the unit square for all times t > 0.

Monte Carlo simulations of the Boltzmann equation.
We use the classical Monte Carlo method to compute a series of realisations of the Boltzmann equation (11). In the direct Monte Carlo method, also known as Bird's scheme, pairs of players are randomly and non-exclusively selected for two-player games. The outcome of each game is determined by (6). Note that we consider appropriately shifted interaction rules for the ratings, to ensure that ρ ∈ [0, 1] and R ∈ [0, 1]. The microscopic interactions are simulated as follows: the outcome of the game S_ij is the realisation of a discrete random variable which takes the values ±1 with probabilities (1 ± b(ρ_i − ρ_j))/2. The random variables η are generated such that they assume the values η = ±0.025 with equal probability; the parameter γ is set to 0.05. Further information on Monte Carlo methods for Boltzmann type equations can be found in [24].
In each simulation we consider N = 5000 players and compute the steady state distribution by performing 10^8 time steps. The result is then averaged over another 10^5 time steps. We perform M = 10 realisations and compute the density from the averaged steady states.
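A compact version of this Monte Carlo procedure can be sketched as follows; it uses far fewer particles and steps than the production runs above, and both the hypothetical learning gain α(1 − b(z)) and the placement of the noise on the strength variable are our assumptions.

```python
import math
import random

def b(z, c=5.0):
    return math.tanh(c * z)

def monte_carlo(N=1000, steps=20000, gamma=0.05, alpha=0.1, seed=1):
    # Direct (Bird-type) Monte Carlo: random, non-exclusive pairs play;
    # states are clamped to the unit square, mimicking the shifted rules.
    rng = random.Random(seed)
    clamp = lambda x: min(1.0, max(0.0, x))
    rho = [rng.random() for _ in range(N)]
    R = [rng.random() for _ in range(N)]
    m0 = sum(rho) / N
    for _ in range(steps):
        i, j = rng.sample(range(N), 2)
        # S_ij = +/-1 with probabilities (1 +/- b(rho_i - rho_j)) / 2.
        S = 1 if rng.random() < 0.5 * (1 + b(rho[i] - rho[j])) else -1
        d = gamma * (S - b(R[i] - R[j]))
        R[i], R[j] = clamp(R[i] + d), clamp(R[j] - d)
        gi = gamma * alpha * (1 - b(rho[i] - rho[j])) + rng.choice((-0.025, 0.025))
        gj = gamma * alpha * (1 - b(rho[j] - rho[i])) + rng.choice((-0.025, 0.025))
        rho[i], rho[j] = clamp(rho[i] + gi), clamp(rho[j] + gj)
    return rho, R, m0

rho, R, m0 = monte_carlo()
```

A steady-state histogram of (ρ, R) can then be obtained by binning the particle positions and averaging over realisations, as described above.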

Finite volume discretisation and simulations of the nonlinear Fokker-Planck equation.
The solver for the Fokker-Planck equation is based on a Strang splitting and an upwind finite volume scheme. We recall that we discretise the shifted Fokker-Planck equation (21a), which allows us to perform simulations on a bounded domain. Because of the splitting we consider the interactions in the rating and the strength variable separately. We define two operators, which correspond to
(S1): the interaction step in the rating variable R, subject to the initial condition g*(ρ, R, t) = g̃(ρ, R, t). Note that we compute the interaction integrals using g̃, which corresponds to the solution at the previous time step in the full splitting scheme.
(S2): the interaction step in the strength variable ρ. We approximate all integrals appearing in the interaction coefficients using the trapezoidal rule.
Let ĝ^k denote the solution at time t_k = k∆t, where ∆t is the time step size. The Strang splitting then results in a scheme in which the superscripts denote the solutions of g* and g at the discrete time steps t_{k+1} = (k+1)∆t and t_{k+1/2} = (k+1/2)∆t. We use a conservative upwind finite volume discretisation for the respective operators. The corresponding explicit-in-time upwind finite volume method is given by
ĝ_j^{k+1} = ĝ_j^k − λ_1 (ĉ_{j+1/2} − ĉ_{j−1/2}) + λ_2 (d̂_{j+1/2} − d̂_{j−1/2}),
where ĉ is the upwind flux and the diffusive flux is given by d̂_{j+1/2} = D(ĝ_{j+1})ĝ_{j+1} − D(ĝ_j)ĝ_j. Here λ_1 = ∆t/∆x and λ_2 = ∆t/∆x². The simulations show the decay of the energy E_2 in time.
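The structure of such a conservative scheme is illustrated below for a one-dimensional model problem g_t + (vg)_x = D g_xx with no-flux boundaries; the velocity field, grid and coefficients are illustrative choices of ours, not the coefficients of (21a).

```python
def upwind_fv_step(g, vel, dx, dt, D):
    # One explicit conservative upwind finite-volume step.
    # c_hat[j] is the upwind advective flux through the face between
    # cells j-1 and j; d_hat[j] is the diffusive flux D*(g[j] - g[j-1]).
    # Boundary faces keep zero flux (no-flux boundary conditions).
    n = len(g)
    lam1, lam2 = dt / dx, dt / dx ** 2
    c_hat = [0.0] * (n + 1)
    d_hat = [0.0] * (n + 1)
    for j in range(n - 1):
        v = 0.5 * (vel[j] + vel[j + 1])          # face velocity
        c_hat[j + 1] = v * (g[j] if v >= 0 else g[j + 1])  # upwind choice
        d_hat[j + 1] = D * (g[j + 1] - g[j])
    return [g[j] - lam1 * (c_hat[j + 1] - c_hat[j])
                 + lam2 * (d_hat[j + 1] - d_hat[j]) for j in range(n)]

# Advect and diffuse a bump on [-1, 1] with a drift towards the origin.
n, dx, dt = 50, 0.04, 0.0005
x = [(j + 0.5) * dx - 1.0 for j in range(n)]
g = [max(0.0, 1.0 - 10.0 * (xj + 0.5) ** 2) for xj in x]
vel = [-xj for xj in x]
mass0 = sum(g) * dx
for _ in range(400):
    g = upwind_fv_step(g, vel, dx, dt, 0.01)
mass1 = sum(g) * dx
```

Because the boundary fluxes are zero and the interior fluxes telescope, the scheme conserves the total mass exactly (up to round-off), mirroring the no-flux boundary conditions of the shifted problem.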

5.3.2.
Competitions of players with similar ratings. Assigning initial ratings to players in the Elo system is a delicate issue, since inaccurate initial ratings may prevent the rating from converging to a 'good' rating reflecting the players' intrinsic strengths. We illustrate the difficulties by studying the dynamics when players with close ratings compete.
We set the interaction rate function to (45); hence individuals only play against each other if the difference between their ratings is small. We consider two groups of players with different strength and rating levels as initial distribution. The first group is underrated, that is all players have rating R = 0.2 but their strength is distributed as ρ ∼ N(0.75, 0.1). The second group is overrated, with rating R = 0.9 and a uniform distribution in strength. We use this initial configuration in two computational experiments. In the first, we choose the learning parameters α = 0.1 and β = 0. We see that the two groups remain separated due to their different ratings in this case, see Figure 4. However, players compete within their own group, and since β = 0 the overall rating improves. In the overrated group the strongest players accumulate at the highest possible rating, while the underrated group forms a diagonal pattern, with the underrated players evolving to the maximum possible rating level.
In the second experiment, using the same initial configuration but α = 0.1 and β = 0.05, the steady state profile looks completely different. In this setting stronger players lose strength when losing against a weaker opponent. Therefore, the ratings of the overrated group decrease, while the ratings of the underrated group increase. After a while the two groups merge, accumulating on a diagonal which underestimates the intrinsic strength of players by approximately 0.1, see Figure 5.
These examples show the importance of the initial ratings as well as the influence of the adapted learning mechanism.
5.3.3.
Foul play. Finally, we consider a series of games in which one player, without loss of generality the first one, plays unfairly, e.g. through cheating, doping or bribing of referees. This means that the outcome of every microscopic game which involves this player is biased in their favour. In particular, we assume that the probability of winning is increased by a factor b̄ for player 1 and decreased by b̄ for the other contestant. Figure 6 shows the stationary profile in the case of a uniform initial distribution of agents, α = 0.1, β = 0, w = 1 and b̄ = 0.2. The star indicates the position of the unfair first player.
While the distribution of players with respect to their ratings and their strengths accumulates along the diagonal, we see that the first player is rated higher than implied by his or her strength.
Appendix A. Derivation of the Fokker-Planck equation
In this section we derive the limiting Fokker-Planck equation in the case γ → 0, σ_η → 0 such that σ_η²/γ =: σ² is kept fixed. Based on the interaction rules (6), which define the outcome of a game, we compute the expected values of the following quantities.
Figure 4. Computed stationary profiles in competitions of players with similar ratings in case of two initially separated groups (one underrated with high strength but low rating and one overrated with variable strength but rating 0.9). Due to the limited interaction between the groups and the chosen learning mechanism, they remain separated.
Figure 5. Computed stationary profiles in competitions of players with similar ratings in case of two initially separated groups (one underrated with high strength but low rating and one overrated with variable strength but rating 0.9). Despite the limited interaction between the groups, the adapted learning mechanism leads to convergence of the ratings to a slightly shifted diagonal.
Figure 6. Computed stationary profile in a foul play scenario where the first player has an unfair advantage in each game. We observe that the ratings and strengths of all players except the first one converge. The cheating player (indicated by a star) ends up with a higher rating than his or her strength would imply.
Using a Taylor expansion of φ(ρ*_i, R*_i) up to order two around (ρ_i, R_i), we obtain an expansion whose remainder term R_γ involves intermediate points, for some 0 ≤ θ_1, θ_2 ≤ 1, with ρ̂_i and R̂_i defined accordingly. Next we rescale time as τ = γt and insert the expansion in (11). We then show that the remainder (1/(2γ)) ∫_{R²} R_γ(φ, ρ*_i, R*_i, ρ_i, R_i, τ) f_γ(ρ_i, R_i, τ) dR_i dρ_i vanishes for γ → 0. Let us assume that φ(ρ_i, R_i) belongs to the space C^{2+δ}(R²) = {h : R² → R, ‖D^ζ h‖_δ < +∞}, where 0 < δ ≤ 1, ζ is a multi-index with |ζ| ≤ 2 and ‖·‖_δ is the usual Hölder seminorm. With this choice of φ(ρ_i, R_i), all the terms which contain ∂²φ/∂ρ_i² and ∂²φ/∂R_i² vanish by the same arguments as in [28,10]. Hence, we focus on the mixed derivative ∂²φ/∂ρ_i∂R_i. Since φ(ρ_i, R_i) ∈ C^{2+δ}(R²), and since, due to (2), (8) and (9), the interaction terms are bounded, we can estimate the mixed term accordingly. Hence the remainder term converges to 0 as γ → 0, and the density f_γ(ρ_i, R_i, τ) converges to a limit density f(ρ, R, τ). It remains to show that under suitable boundary conditions equation (47) gives the desired weak formulation of the Fokker-Planck equation. We split the boundary terms BT into the different parts BT_i, i = 1, 2, 3, that arise from the respective integrals. These three terms are zero if suitable no-flux boundary conditions are satisfied. These boundary conditions are guaranteed for the Boltzmann equation f_γ(ρ_i, R_i, τ) by mass conservation and the upper and lower bounds on the mean, see (14). Therefore, (47) is the weak form of the Fokker-Planck equation.