
On Dynamic Games with Randomly Arriving Players

Published in Dynamic Games and Applications.

Abstract

We consider a dynamic game where additional players (assumed identical, even if there will be a mild departure from that hypothesis) join the game randomly according to a Bernoulli process. The problem solved here is that of computing their expected payoff as a function of time and the number of players present when they arrive, if the strategies are given. We consider both a finite horizon game and an infinite horizon, discounted game. As illustrations, we discuss some examples relating to oligopoly theory (Cournot, Stackelberg, cartel).


Notes

  1. On the one hand, in a joint action, this country chooses to charge the total harm to the liable party. On the other hand, it allows each victim to choose not to prosecute, to prosecute individually, or to prosecute collectively. It follows that the liable party may be made to pay several times the damage she caused: these are punitive damages.

  2. We count powers as sequences of multiplications


Acknowledgments

We thank Céline Savard-Chambard, Saïd Souam, Nicolas Eber and Hervé Moulin for conversations and comments that improved the paper. We received very useful suggestions from Sylvain Béal. Comments by anonymous reviewers helped improve the presentation of the model and complete the literature survey (mean field games and stopping by vote).

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Pierre Bernhard.

Additional information

Marc Deschamps acknowledges the financial support of ANR Damage (programme ANR-12-JSH1-0001, 2012-2015). The usual disclaimer applies.

Appendices

Appendix 1: Proofs of the Theorems

1.1 Proof of Theorem 1

We remark that the maximum number of players is \(T-t_1+1\); it can only be attained at time T, and only if a player has arrived at each instant of time. We then have

$$\begin{aligned} \varPi ^e_{T-t_1+1}(t_1,t_1+1,\ldots ,T) = L_{T-t_1+1}(T) = M_{T-t_1+1}(T). \end{aligned}$$

For \(m\le T-t_1\) and a compatible \(\tau _m\), we have

$$\begin{aligned} J_m(\tau _m)= \left\{ \begin{array}{lcl}\displaystyle \sum _{t=t_m}^T r^{t-t_m}L_m(\tau _m,t) &{}\text{ if }&{} t_{m+1}> T,\\ \displaystyle \sum _{t=t_m}^{t_{m+1}-1}r^{t-t_m}L_m(\tau _m,t)+r^{t_{m+1}-t_m}J_{m+1}(\tau _m,t_{m+1}) &{}\text{ if }&{} t_{m+1}\le T. \end{array}\right. \end{aligned}$$

Now, \(t_{m+1}>T\) with a probability \((1-p)^{T-t_m}\), and the occurrence of a given \(t_{m+1}\le T\) has a probability \(p(1-p)^{t_{m+1}-t_m-1}\). Hence, writing \(t_+\) for \(t_{m+1}\):

$$\begin{aligned} \varPi ^e_m(\tau _m)= & {} (1-p)^{T-t_m}\sum _{t=t_m}^Tr^{t-t_m}L_m(\tau _m,t) \\&\quad +\,\sum _{t_+=t_m+1}^T p(1-p)^{t_+-t_m-1} \left[ \sum _{t=t_m}^{t_+-1} r^{t-t_m}L_m(\tau _m,t) + r^{t_+-t_m}\varPi ^e_{m+1}(\tau _m,t_+)\right] . \end{aligned}$$

Introduce the notation \(q=1-p\). Interchanging the order of summation,

$$\begin{aligned} \varPi ^e_m(\tau _m)&=q^{T-t_m}\sum _{t=t_m}^Tr^{t-t_m}L_m(\tau _m,t) \\&\quad \,+\,\sum _{t=t_m}^{T-1}r^{t-t_m}L_m(\tau _m,t) \sum _{t_+=t+1}^T\!pq^{t_+-t_m-1} +\sum _{t_+=t_m+1}^T\!pq^{t_+-t_m-1}r^{t_+-t_m} \varPi ^e_{m+1}(\tau _m,t_+). \end{aligned}$$

Using classical formulas for the sum of a geometric series and regrouping terms, we obtain for \(m\le T-t_1\):

$$\begin{aligned} \varPi ^e_m(\tau _m) = \sum _{t=t_m}^Tq^{t-t_m}\left[ r^{t-t_m}L_m(\tau _m,t)+ pr^{t+1-t_m}\varPi ^e_{m+1}(\tau _m,t+1)\right] , \end{aligned}$$

with the convention that \({\varPi ^e_{m+1}}(\tau _m,T+1)=0\). A form of this formula more useful for the sequel is the following:

$$\begin{aligned} \varPi ^e_m(\tau _m) = \sum _{t=t_m}^T(qr)^{t-t_m}L_m(\tau _m,t) +\frac{p}{q}\sum _{t_{m+1}=t_m+1}^T(qr)^{t_{m+1}-t_m} \varPi ^e_{m+1}(\tau _m,t_{m+1}), \end{aligned}$$
(6)

where the second term of the right-hand side is absent if \(t_m=T\).
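The recursion (6) is easy to check numerically. The following sketch uses a made-up stage payoff \(L_m\) and arbitrary values of T, p and r (none taken from the paper), and computes \(\varPi ^e_1\) both by the recursion and by brute-force enumeration of all arrival patterns:

```python
from itertools import product

# Hypothetical per-stage payoff L_m(tau_m, t): any function of the number of
# players m, their arrival times tau, and the current time t will do here.
def L(m, tau, t):
    return 1.0 / m + 0.01 * t

T, t1, p, r = 6, 1, 0.3, 0.9   # arbitrary horizon, entry probability, discount
q = 1 - p

def pi_e(tau):
    """Expected payoff of the first player, by the recursion (6)."""
    m, tm = len(tau), tau[-1]
    total = sum((q * r) ** (t - tm) * L(m, tau, t) for t in range(tm, T + 1))
    total += (p / q) * sum((q * r) ** (tp - tm) * pi_e(tau + (tp,))
                           for tp in range(tm + 1, T + 1))
    return total

def pi_brute():
    """Same expectation, by enumerating every arrival pattern on (t1, T]."""
    total = 0.0
    for pattern in product([0, 1], repeat=T - t1):   # 1 = a new player arrives
        prob = p ** sum(pattern) * q ** (T - t1 - sum(pattern))
        tau, payoff = (t1,), 0.0
        for t in range(t1, T + 1):
            if t > t1 and pattern[t - t1 - 1]:
                tau += (t,)
            payoff += r ** (t - t1) * L(len(tau), tau, t)
        total += prob * payoff
    return total

print(abs(pi_e((t1,)) - pi_brute()) < 1e-12)
```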

Remark 14

It is a not-so-easy calculation to check that if \(L_m(\tau _m,t) \le L\) and \(\varPi ^e_{m+1}(\tau _m,t_{m+1}) \le (T-t_{m+1}+1)L\), then this formula implies, as it should, \(\varPi ^e_m(\tau _m) \le (T-t_m+1)L\).

A hint about how to make the above check is as follows: in the second term of the formula, write

$$\begin{aligned} \sum _{t=t_m+1}^Tq^{t-t_m}(T-t+1) = \left( T-t_m+1 +(1-p)\frac{\mathrm {d}}{\mathrm {d}p}\right) \sum _{t=t_m+1}^T(1-p)^{t-t_m}, \end{aligned}$$

and use the classic formula for the sum of the (finite) geometric series.

We use formula (6) recursively: write first \(\varPi ^e_1\) as a function of \(\varPi ^e_2\), and using again the same formula substitute for \(\varPi ^e_2\) as a function of \(\varPi ^e_3\), and again for \(\varPi ^e_3\) as a function of \(\varPi ^e_4\). Then interchange the order of the summations, placing them in the order \(t,t_2,t_3,t_4\). In the following formula, every time the lower bound of a sum is larger than the upper bound, the term is just absent. We obtain

$$\begin{aligned} \varPi ^e_1&= \sum _{t=t_1}^T(qr)^{t-t_1}L_1(t_1,t) + \frac{p}{q}\sum _{t=t_1+1}^T(qr)^{t-t_1}\sum _{t_2=t_1+1}^tL_2(t_1,t_2,t)\\&\quad \,+\left( \frac{p}{q}\right) ^2\sum _{t=t_1+2}^T(qr)^{t-t_1}\sum _{t_2=t_1+1}^T\sum _{t_3=t_2+1}^t L_3(t_1,t_2,t_3,t)\\&\quad \,+ \left( \frac{p}{q}\right) ^3\sum _{t_2=t_1+1}^T\sum _{t_3=t_2+1}^T\sum _{t_4=t_3+1}^T (qr)^{t_4-t_1}\varPi ^e_4(t_1,t_2,t_3,t_4). \end{aligned}$$

Continuing in the same way up to \(m=T-t_1+1\), we obtain

$$\begin{aligned} \varPi ^e_1 = \sum _{m=1}^{T-t_1}\left( \frac{p}{q}\right) ^{\!m-1} \sum _{t=t_1+m-1}^T(qr)^{t-t_1}M_m(t) +\left( \frac{p}{q}\right) ^{T-t_1} (qr)^{T-t_1}L_{T-t_1+1}(T). \end{aligned}$$

The last term can be identified as the term \(m=T-t_1+1\) of the first sum, as the range of t in the embedded (second) sum is limited to \(t=T\), and we have seen that \(L_{T-t_1+1}(T)=M_{T-t_1+1}(T)\). It suffices now to shift m by one unit to obtain the formula

$$\begin{aligned} \varPi ^e_1= \sum _{m=1}^{T-t_1+1}\left( \frac{p}{1-p}\right) ^{m-1} \sum _{t=t_1+m-1}^T[(1-p)r]^{t-t_1}\!M_m(t). \end{aligned}$$

Interchanging the order of the summations one last time yields formula (1). \(\square \)

Remark 15

As a consequence of formula (1), if for some fixed L, for all m, \(\tau _m\) and t, \(L_m(\tau _m,t)=L\), then \(\varPi ^e_1=[(1-r^{T-t_1+1})/(1-r)]L\) (whose limit as \(r\rightarrow 1\) is \((T-t_1+1)L\)), and if \(L_m(\tau _m,t)\le L\), then \(\varPi ^e_1\) is bounded above by that number.
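Remark 15 can be verified numerically: with a constant stage payoff L, the aggregate is \(M_m(t)=\binom{t-t_1}{m-1}L\), and the double sum above must collapse to a geometric sum. A minimal sketch with arbitrary parameter values:

```python
from math import comb

T, t1, p, r, L = 12, 1, 0.35, 0.9, 2.0   # arbitrary parameters
q = 1 - p

# With constant stage payoff L, M_m(t) = |T_m(t)| * L = C(t - t1, m - 1) * L,
# and the last displayed formula of the proof gives:
pi = sum((p / q) ** (m - 1)
         * sum((q * r) ** (t - t1) * comb(t - t1, m - 1) * L
               for t in range(t1 + m - 1, T + 1))
         for m in range(1, T - t1 + 2))

closed_form = (1 - r ** (T - t1 + 1)) / (1 - r) * L   # value claimed in Remark 15
print(abs(pi - closed_form) < 1e-9)
```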

1.2 Proof of Theorem 5

We start with formula (1) where we set \(t_1=0\), and recall by a superscript (T) that it is a formula for a finite horizon T, the horizon we want to let go to infinity:

$$\begin{aligned} {\varPi ^e_1}^{(T)}= \sum _{t=0}^T(1-p)^tr^t\sum _{m=1}^{t+1} \left( \frac{p}{1-p}\right) ^{m-1} M_m(t). \end{aligned}$$

The only task left to prove the theorem is to show that the series obtained as \(T\rightarrow \infty \) converges absolutely. To do this, we need an estimate of \(M_m(t)\). Observe that the cardinality of the set \(\mathscr {T}_m(t)\) is simply the binomial coefficient

$$\begin{aligned} |\mathscr {T}_m(t)| = \left( \begin{array}{c} t \\ m-1 \end{array} \right) = \frac{t!}{(m-1)!\,(t-m+1)!}. \end{aligned}$$

As a consequence, if \(|L_m| \le L^m\), we have

$$\begin{aligned} |M_m(t)| \le \left( \!\begin{array}{c} t \\ m-1 \end{array}\!\right) L^m \end{aligned}$$

and

$$\begin{aligned} \left| \sum _{m=1}^{t+1}\left( \frac{p}{1-p}\right) ^{m-1}\!M_m(t)\right|&\le \sum _{m=1}^{t+1}\left( \frac{p}{1-p}\right) ^{m-1}\!|M_m(t)| \end{aligned}$$
(7)
$$\begin{aligned}&\le \sum _{m=1}^{t+1}\left( \!\begin{array}{c} t \\ m-1 \end{array}\!\right) \left( \frac{p}{1-p}\right) ^{m-1}\!L^m \end{aligned}$$
(8)
$$\begin{aligned}&= L\left( \frac{p}{1-p} L + 1\right) ^t. \end{aligned}$$
(9)

Therefore,

$$\begin{aligned} (1-p)^tr^t\!\sum _{m=1}^{t+1}\left( \frac{p}{1-p}\right) ^{m-1}\!|M_m(t)| \le Lr^t(pL+1-p)^t. \end{aligned}$$

The series converges absolutely provided that

$$\begin{aligned} r\Bigl (p(L-1)+1\Bigr ) < 1\,, \end{aligned}$$

which is always true if \(L\le 1\), and ensured for all \(p\le 1\) if \(rL < 1\). This proves the theorem in the exponentially bounded case.
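The chain (7)–(9) is just the binomial theorem; a quick numerical check, with arbitrary values of p, L and t:

```python
from math import comb

p, L, t = 0.3, 1.4, 12   # arbitrary values
q = 1 - p

# Left-hand side of (8) versus the closed form (9):
s = sum(comb(t, m - 1) * (p / q) ** (m - 1) * L ** m for m in range(1, t + 2))
print(abs(s - L * (p / q * L + 1) ** t) < 1e-9)
```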

In the uniformly bounded case, with \(|L_m|\le L\), we obtain similarly

$$\begin{aligned} |M_m(t)| \le \left( \!\begin{array}{c} t \\ m-1 \end{array}\!\right) L \end{aligned}$$

and

$$\begin{aligned} (1-p)^tr^t\!\sum _{m=1}^{t+1}\left( \frac{p}{1-p}\right) ^{m-1}\!|M_m(t)| \le Lr^t(1-p)^t\left( \frac{p}{1-p}+1\right) ^t = Lr^t, \end{aligned}$$

and the series is always absolutely convergent. \(\square \)

1.3 Proof of Theorem 10

We aim to apply formula (3). The term \(t=0\) requires a special treatment: the only term in the sum over m is \(m=1\) and \(M_1(0)=L_1(0)=c\). For \(t > 0\), we have three cases:

  1.

    For \(m=1\), there has not been any new entrant; therefore, \(L_1(t) = c\).

  2.

    For \(1< m < t+1\), we sum first over the \(\tau _m\) such that \(t_m < t\), i.e. \(\tau _m\in \mathscr {T}_m(t-1)\), then over the \(\tau _m\) such that \(t_m=t\); there the sum is over the values of \(\tau _{m-1}\in \mathscr {T}_{m-1}(t-1)\).

  3.

    For \(m=t+1\), there has been a new entrant at each time step. Therefore, \(L_{t+1}(\tau _{t+1},t) = c/(2t)\). We summarize this in the following calculation:

    $$\begin{aligned} \text{ for }\quad m=1,\quad M_1(t)&= c = c\frac{(t-1)!}{(m-1)!(t-m)!}\frac{1}{m},\\ \text{ for }\quad 1<m<t+1,\quad M_m(t)&= \sum _{\tau _m\in \mathscr {T}_m(t-1)} \frac{c}{m} + \sum _{\tau _{m-1}\in \mathscr {T}_{m-1}(t-1)}\frac{c}{2(m-1)}\\&= c\frac{(t-1)!}{(m-1)!(t-m)!}\frac{1}{m}\\&\quad + c\frac{(t-1)!}{(m-2)!(t-m+1)!}\frac{1}{2(m-1)},\\ \text{ for }\; m=t+1,\; M_{t+1}(t)&= \frac{c}{2t} = c\frac{(t-1)!}{(m-2)!(t-m+1)!}\frac{1}{2(m-1)}. \end{aligned}$$

We therefore obtain, for \(t>0\):

$$\begin{aligned} \sum _{m=1}^{t+1}\left( \frac{p}{q}\right) ^{m-1}\!M_m(t)&= \sum _{m=1}^t\left( \frac{p}{q}\right) ^{m-1}\!\frac{(t-1)!}{(m-1)!(t-m)!}\,\frac{c}{m}\\&\quad \,\,+\frac{1}{2}\sum _{m=2}^{t+1} \left( \frac{p}{q}\right) ^{m-1}\!\frac{(t-1)!}{(m-2)!(t-m+1)!}\,\frac{c}{m-1}\\&= \left( 1+\frac{p}{2q}\right) \sum _{m=1}^t \left( \frac{p}{q}\!\right) ^{m-1}\frac{(t-1)!}{(m-1)!(t-m)!}\,\frac{c}{m}. \end{aligned}$$

Finally, summing over t as in formula (3), without forgetting the term \(t=0\),

$$\begin{aligned} \varPi ^e_1=c+\left( 1+\frac{p}{2q}\right) \sum _{t=1}^\infty r^tq^t \sum _{m=1}^t \left( \frac{p}{q}\right) ^{m-1}\frac{(t-1)!}{(m-1)!(t-m)!}\,\frac{c}{m}. \end{aligned}$$

It now suffices to take one power of rq out of the sum over t, shift the summation index by one unit, recognize the expected payoff of the simple sharing scheme, and replace it by formula (4) to prove the theorem. \(\square \)
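The combination of the three cases for \(M_m(t)\) into the last single sum can be checked numerically; the sketch below uses arbitrary values of c, p and t:

```python
from math import comb

c, p, t = 1.0, 0.3, 9   # arbitrary values, with t > 0
q = 1 - p

def M(m):
    """M_m(t) assembled case by case, as in the proof (t_1 = 0)."""
    if m == 1:
        return c
    if m == t + 1:
        return c / (2 * t)
    return comb(t - 1, m - 1) * c / m + comb(t - 1, m - 2) * c / (2 * (m - 1))

lhs = sum((p / q) ** (m - 1) * M(m) for m in range(1, t + 2))
rhs = (1 + p / (2 * q)) * sum((p / q) ** (m - 1) * comb(t - 1, m - 1) * c / m
                              for m in range(1, t + 1))
print(abs(lhs - rhs) < 1e-12)
```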

Appendix 2: Static Equilibria

We consider n identical firms (a clone economy) sharing a market with a linear inverse demand function, linking the price P to the total production level Q as

$$\begin{aligned} P = b-aQ\,. \end{aligned}$$

Production costs have been lumped into b and are thereby normalized to zero. We compute the optimal production level Q, and the resulting price P and profit \(\varPi \) of each firm, for various equilibria.

1.1 Cartel

In a pure cartel, firms behave as a single player, only sharing the optimal production level equally among them. Let Q be that level. The profit of the (fictitious) single player is

$$\begin{aligned} \varPi = Q(b-aQ) = -a\left( Q-\frac{b}{2a}\right) ^2 + \frac{b^2}{4a}. \end{aligned}$$

Hence, the optimal production level is \(Q=b/(2a)\), and the corresponding profit is \(\varPi = b^2/(4a)\); both are divided equally among the firms. The price is then \(P= b/2\), and the individual production level q and profit \(\varPi _i\) are

$$\begin{aligned} q = \frac{b}{2an},\quad \varPi _i = \frac{b^2}{4an}. \end{aligned}$$

1.2 Cartel–Stackelberg

We investigate the case where \(n-1\) firms form a cartel, behaving as a leader vis-à-vis one firm acting as a follower.

Let \(q_L\) be the quantity produced by each incumbent, \(q_F\) that of the follower. Hence, \(Q=(n-1)q_L+q_F\). The follower’s profit is

$$\begin{aligned} \varPi _F = q_F[b-a(n-1)q_L-aq_F] = -a\left[ q_F^2-\left( \frac{b}{a}-(n-1)q_L\right) q_F\right] \end{aligned}$$

hence

$$\begin{aligned} \varPi _F = -a\left[ q_F-\frac{1}{2}\left( \frac{b}{a}-(n-1)q_L\right) \right] ^2 +\frac{a}{4}\left( \frac{b}{a}-(n-1)q_L\right) ^2. \end{aligned}$$

Therefore, the optimal reaction curve \(q_F\) as a function of \(q_L\) is

$$\begin{aligned} q_F = \frac{1}{2}\left( \frac{b}{a}-(n-1)q_L\right) . \end{aligned}$$

With such a strategy, each incumbent's profit is

$$\begin{aligned} \varPi _L&= q_L\left[ b-a(n-1)q_L- \frac{1}{2}b +\frac{1}{2}a(n-1)q_L\right] \\&=-\frac{a(n-1)}{2}\left[ q_L^2 - \frac{b}{a(n-1)}q_L\right] \\&= -\frac{a(n-1)}{2}\left[ q_L-\frac{b}{2a(n-1)}\right] ^2 +\frac{b^2}{8a(n-1)}. \end{aligned}$$

Therefore, the optimal production level of each incumbent and their profit are

$$\begin{aligned} q_L = \frac{b}{2a(n-1)},\quad \varPi _L = \frac{b^2}{8a(n-1)}. \end{aligned}$$

Placing this back into the optimal follower’s reaction curve, its production level and profit are

$$\begin{aligned} q_F = \frac{b}{4a},\quad \varPi _F = \frac{b^2}{16a}. \end{aligned}$$

The price of the commodity is then \(P=b/4\). All these results are summarized in a table in the last subsection.

1.3 Cournot–Nash

We have n identical firms competing à la Cournot. The Cournot–Nash equilibrium is obtained as follows. Let q be the individual production level, therefore \(Q=nq\), and P the resulting price:

$$\begin{aligned} P = b - naq. \end{aligned}$$

The individual profit of player i is

$$\begin{aligned} \varPi _i&= q_i[b-a(q_i + (n-1)q)] \\&= -a\left[ q_i - \frac{1}{2}\left( \frac{b}{a}-(n-1)q\right) \right] ^2 + \frac{a}{4}\left( \frac{b}{a}-(n-1)q\right) ^2. \end{aligned}$$

It follows that the optimum \(q_i\) is

$$\begin{aligned} q_i = \frac{1}{2}\left( \frac{b}{a}-(n-1)q\right) , \end{aligned}$$

but we seek a symmetric equilibrium where \(q_i=q\), and therefore,

$$\begin{aligned} q = \frac{b}{(n+1)a}. \end{aligned}$$

Placing this back into the law for P, we find

$$\begin{aligned} P= \frac{b}{(n+1)},\quad \varPi = \frac{b^2}{a(n+1)^2}. \end{aligned}$$
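These closed forms can be checked as a fixed point of the best-response map; a minimal sketch with arbitrary values of a, b and n:

```python
b, a, n = 10.0, 0.5, 5   # arbitrary demand parameters and number of firms

def best_response(q_others):
    """Maximizer of q * (b - a * (q + q_others)) over q."""
    return 0.5 * (b / a - q_others)

q_star = b / ((n + 1) * a)          # symmetric solution of 2q = b/a - (n-1)q
P = b - n * a * q_star              # resulting price
profit = q_star * P                 # individual profit

print(abs(best_response((n - 1) * q_star) - q_star) < 1e-12)  # fixed point
print(abs(P - b / (n + 1)) < 1e-12)
print(abs(profit - b ** 2 / (a * (n + 1) ** 2)) < 1e-12)
```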

1.4 Cournot–Stackelberg

We finally consider \(n-1\) firms competing à la Cournot–Nash within their group, each producing a quantity \(q_L\), but with that group behaving as a leader vis-à-vis a single follower, who produces a quantity \(q_F\). We therefore have

$$\begin{aligned} P=b-a(n-1)q_L-aq_F. \end{aligned}$$

The calculations are similar to the previous ones. The follower’s profit is therefore

$$\begin{aligned} \varPi _F&= -\,a\left[ q_F^2-\left( \frac{b}{a}-(n-1)q_L\right) q_F\right] \\&=-\,a\left[ q_F-\frac{1}{2}\left( \frac{b}{a}-(n-1)q_L\right) \right] ^2 +\frac{a}{4}\left( \frac{b}{a}-(n-1)q_L\right) ^2. \end{aligned}$$

Hence,

$$\begin{aligned} q_F = \frac{1}{2}\left( \frac{b}{a}-(n-1)q_L\right) ,\quad \varPi _F = \frac{a}{4}\left( \frac{b}{a}-(n-1)q_L\right) ^2. \end{aligned}$$

With this strategy,

$$\begin{aligned} P = \frac{1}{2}\Bigl (b-a(n-1)q_L\Bigr ) =\frac{1}{2}\Bigl (b-a(n-2)q_L-aq_i\Bigr ) \end{aligned}$$

Consequently, for player i, one of the leaders,

$$\begin{aligned} \varPi _i&= -\frac{a}{2}\left[ q_i^2-\left( \frac{b}{a} -(n-2)q_L\right) q_i\right] \\&= -\frac{a}{2}\left[ q_i-\frac{1}{2} \left( \frac{b}{a}-(n-2)q_L\right) \right] ^2 +\frac{a}{8}\left( \frac{b}{a}-(n-2)q_L\right) ^2. \end{aligned}$$

It follows that

$$\begin{aligned} \begin{array}{lcl} \displaystyle q_i=q_L = \frac{1}{2}\left( \frac{b}{a}-(n-2)q_L\right) &{} \Rightarrow &{} \displaystyle q_L = \frac{b}{an}\,,\\ \displaystyle \varPi _i = \frac{a}{8}\left( \frac{b}{a}-(n-2)q_L\right) ^2 &{} \Rightarrow &{} \displaystyle \varPi _i = \frac{b^2}{2an^2}, \end{array} \end{aligned}$$

while \(P = b/(2n)\), and

$$\begin{aligned} q_F = \frac{b}{2an} \qquad \text{ and }\qquad \varPi _F = \frac{b^2}{4an^2}. \end{aligned}$$
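As a sanity check of these closed forms, the sketch below verifies, for arbitrary a, b and n, that \(q_F\) is the follower's best response and that \(q_L\) satisfies each leader's first-order condition, the other leaders playing \(q_L\) and the follower reacting optimally:

```python
b, a, n = 9.0, 1.0, 4    # arbitrary demand parameters and number of firms

q_L = b / (a * n)        # each leader's claimed equilibrium output
q_F = b / (2 * a * n)    # follower's claimed equilibrium output

# The follower best-responds to the n-1 leaders:
br_F = 0.5 * (b / a - (n - 1) * q_L)

def leader_profit(q_i):
    """Profit of one leader playing q_i, the other n-2 leaders playing q_L,
    the follower reacting optimally to the whole group."""
    q_f = 0.5 * (b / a - (n - 2) * q_L - q_i)
    return q_i * (b - a * ((n - 2) * q_L + q_i + q_f))

eps = 1e-6
deriv = (leader_profit(q_L + eps) - leader_profit(q_L - eps)) / (2 * eps)

print(abs(br_F - q_F) < 1e-12)   # follower's optimality
print(abs(deriv) < 1e-6)         # leader's first-order condition
```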

1.5 Summary

We regroup the results of these calculations in the following table:

$$\begin{aligned} \begin{array}{l|ccc} & q & P & \varPi \\ \hline \text{Cartel} & b/(2an) & b/2 & b^2/(4an) \\ \text{Cartel--Stackelberg, leader} & b/(2a(n-1)) & b/4 & b^2/(8a(n-1)) \\ \text{Cartel--Stackelberg, follower} & b/(4a) & b/4 & b^2/(16a) \\ \text{Cournot--Nash} & b/((n+1)a) & b/(n+1) & b^2/(a(n+1)^2) \\ \text{Cournot--Stackelberg, leader} & b/(an) & b/(2n) & b^2/(2an^2) \\ \text{Cournot--Stackelberg, follower} & b/(2an) & b/(2n) & b^2/(4an^2) \end{array} \end{aligned}$$

Appendix 3: Complexity

In this appendix, we undertake to count the number of arithmetic operations involved in computing \(\varPi ^e_1\) for a finite horizon by four different methods: (direct) path enumeration, backward dynamic programming, path enumeration and forward dynamic programming, and use of formula (1). To ease the calculations, we let \(t_1=1\), so that T is the number of time steps. We shall refer to the tree shown in Fig. 1.

If we assume no regularity in their definition, the data are made of the collection of all \(L_m(t)\), \(t=1,\ldots ,T\), that is, as many numbers as there are branches in the tree, i.e.

$$\begin{aligned} \sum _{t=1}^T 2^{t-1} = 2^T-1 \end{aligned}$$

numbers. As all of them must be used, no general method can use fewer than that number of arithmetic operations. Therefore, we expect a complexity of the order of \(2^T\) (of the order of \(10^6\) for \(T=20\), and \(10^{15}\) for \(T=50\), a completely unrealistic case!), and the difference between methods can only lie in the coefficient multiplying that number.

1.1 Path Enumeration

The tree has \(2^{T-1}\) paths from the root to a leaf. Let \(\nu \in [1,2^{T-1}]\) number them. We denote by \(\pi _\nu \) the path number \(\nu \), and let \(m_\nu \) be the number of players present at the end of path \(\pi _\nu \). Each path is followed with probability

$$\begin{aligned} \mathbb {P}(\pi _\nu ) = p^{m_\nu -1}(1-p)^{T-m_\nu }. \end{aligned}$$

Let \(L_\nu (t)\) denote the \(L_m(\tau _m,t)\) on path \(\pi _\nu \). Each path yields a payoff

$$\begin{aligned} \varPi _1(\pi _\nu ) = \sum _{t=1}^T L_\nu (t). \end{aligned}$$

And we have

$$\begin{aligned} \varPi ^e_1 = \sum _{\nu =1}^{2^{T-1}} \mathbb {P}(\pi _\nu ) \varPi _1(\pi _\nu ). \end{aligned}$$
(10)

A direct method of evaluating \(\varPi ^e_1\) is therefore as follows:

  1.

    Compute \(\mathbb {P}(\pi _\nu )\) for each \(\nu \). The computation of each involves \(T-2\) multiplications.Footnote 2 Therefore, this step involves \(2^{T-1}(T-2)\) arithmetic operations.

  2.

    Compute \(\varPi _1(\pi _\nu )\) for each \(\nu \). Each involves \(T-1\) additions; therefore, this step involves \(2^{T-1}(T-1)\) arithmetic operations.

  3.

    Compute \(\varPi ^e_1\) according to formula (10), involving \(2^{T-1}\) multiplications and as many additions (minus one), that is, \(2^T\) operations.

Therefore, the total number of arithmetic operations is

$$\begin{aligned} N = 2^{T-1}(T-2+T-1+2) = (T-\frac{1}{2})2^T, \end{aligned}$$

that is of the order of \(T2^T\).

1.2 Dynamic Programming (DP)

Denote the nodes of the tree by the sequence \(\sigma (t)\) of t indices, 0 or 1, the 1 denoting the times when an arrival occurred, a branch sloping up in our figure. (All sequences \(\sigma (t)\) begin with a one.) The possible successors of a given \(\sigma (t)\) are \((\sigma (t),0)\) and \((\sigma (t),1)\) that we denote as

$$\begin{aligned} \sigma (t+1) = (\sigma (t),i(t))\,,\quad i(t) \in \{0,1\}. \end{aligned}$$

Denote by \(L(\sigma (t))\) the \(L_m\) of the branch reaching the node \(\sigma (t)\).

1.2.1 Backward DP

Let \(V(\sigma (t))\) be the expected future payoff when at node \(\sigma (t)\). It obeys the dynamic programming equation

$$\begin{aligned} V(\sigma (t)) = p[L(\sigma (t),1)+V(\sigma (t),1)]+(1-p) [L(\sigma (t),0)+V(\sigma (t),0)], \end{aligned}$$

and \(\varPi ^e_1 = V(\mathrm {root}) = V(1)+L_1(1)\).

There are thus four arithmetic operations to perform at each node of the tree (not counting the leaves), that is,

$$\begin{aligned} N= 4\sum _{t=1}^T 2^{t-1} = 4\times (2^T-1) \end{aligned}$$

arithmetic operations, i.e. of the order of \(4\times 2^T\).
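The backward DP equation can be sketched directly on the binary tree. The branch payoff \(L(\sigma )\) below is made up for illustration; the result is cross-checked against path enumeration:

```python
from itertools import product

T, p = 8, 0.3   # arbitrary horizon and entry probability

# Made-up branch payoff L(sigma): sigma is the 0/1 arrival sequence reaching
# a node (all sequences begin with a one); sum(sigma) is the number of players.
def L(sigma):
    return 1.0 / sum(sigma) + 0.05 * len(sigma)

def V(sigma):
    """Backward DP: expected payoff-to-go at node sigma."""
    if len(sigma) == T:
        return 0.0
    return (p * (L(sigma + (1,)) + V(sigma + (1,)))
            + (1 - p) * (L(sigma + (0,)) + V(sigma + (0,))))

pi_dp = L((1,)) + V((1,))

# Cross-check against direct path enumeration.
pi_paths = 0.0
for path in product([0, 1], repeat=T - 1):
    sigma = (1,) + path
    prob = p ** sum(path) * (1 - p) ** (T - 1 - sum(path))
    pi_paths += prob * sum(L(sigma[:t]) for t in range(1, T + 1))

print(abs(pi_dp - pi_paths) < 1e-12)
```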

1.2.2 Path Enumeration and Forward DP

This is a variant of the path enumeration method (10) on two counts:

  1.

    Compute once each probability \(p^{m-1}(1-p)^{T-m}\) and store it. This costs \(T(T-2)\) arithmetic operations.

  2.

    Compute the \(\varPi (\pi _\nu )\) according to the forward dynamic programming method

    $$\begin{aligned} \varPi (\sigma (t-1),i(t-1)) = \varPi (\sigma (t-1)) + L(\sigma (t-1),i(t-1)). \end{aligned}$$

    This is one addition at each node of the tree, i.e. \(2^T\) operations.

It remains to implement formula (10), using \(2^T\) arithmetic operations, for a total of \(2^{T+1} + T(T-2)\). This is of the order of \(2\times 2^T\).

1.3 Using the \(M_m(t)\)

We rewrite formula (1) as

$$\begin{aligned} \varPi ^e_1= \sum _{t=1}^T\sum _{m=1}^t p^{m-1}(1-p)^{t-m} M_m(t). \end{aligned}$$
(11)

1.3.1 Computing the \(M_m(t)\)

The first task is to compute the collection of \(M_m(t)\). For each t, there are \(2^{t-1}\) values \(L_m(\tau _m,t)\) to combine into t terms, that is, \(2^{t-1}-t\) additions. There is no computation for the time steps \(t=1\) and \(t=2\). The total number of additions is therefore

$$\begin{aligned} \sum _{t=3}^T(2^{t-1}-t) = 2^T-\frac{T(T+1)}{2}-1. \end{aligned}$$

1.3.2 Computing the \(p^{m-1}(1-p)^{t-m}\)

We set

$$\begin{aligned} u_m(t) = p^{m-1}(1-p)^{t-m}\,. \end{aligned}$$

We compute them according to the following method:

$$\begin{aligned} u_1(1)&= 1,\\ \forall t \in [2,T],\quad u_1(t)&= (1-p)u_1(t-1),\\ \forall m \in [1,t-1],\quad u_{m+1}(t)&= pu_m(t-1). \end{aligned}$$

Counting the arithmetic operations, this leads to \(T-1\) multiplications to compute the \(u_1(t)\), and to

$$\begin{aligned} \sum _{t=3}^T(t-1) =\frac{T(T-1)}{2}-3 \end{aligned}$$

multiplications to compute the rest of the \(u_m(t)\). That is for this step

$$\begin{aligned} \frac{T(T-1)}{2}+T-4 \end{aligned}$$

arithmetic operations.
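The recursion above can be sketched as follows (arbitrary T and p), checking that it reproduces \(u_m(t) = p^{m-1}(1-p)^{t-m}\):

```python
T, p = 10, 0.4   # arbitrary values

# Build the u_m(t) table by the multiplicative recursion of the text.
u = {(1, 1): 1.0}
for t in range(2, T + 1):
    u[(1, t)] = (1 - p) * u[(1, t - 1)]
    for m in range(1, t):                  # u_{m+1}(t) = p * u_m(t - 1)
        u[(m + 1, t)] = p * u[(m, t - 1)]

# Check against the closed form u_m(t) = p^(m-1) * (1-p)^(t-m).
ok = all(abs(u[(m, t)] - p ** (m - 1) * (1 - p) ** (t - m)) < 1e-12
         for t in range(1, T + 1) for m in range(1, t + 1))
print(ok)
```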

1.3.3 Applying Formula (11)

Finally, there are \(T(T+1)/2\) terms in formula (11), each involving a multiplication and an addition, i.e. \(T(T+1)\) arithmetic operations (minus one addition).

Summing all steps, this is

$$\begin{aligned} N= 2^T + T^2 + T - 6 \end{aligned}$$

i.e. of the order of \(2^T\) arithmetic operations, half as many as in the best DP method.
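The two approaches can be compared directly. The sketch below evaluates \(\varPi ^e_1\) with \(t_1=1\), \(r=1\) and a made-up unstructured payoff \(L_m(\tau _m,t)\) (all assumptions for illustration), once by path enumeration (10) and once by formula (11), and checks that they agree:

```python
from itertools import combinations, product

T, p = 7, 0.3    # arbitrary horizon and entry probability (t_1 = 1, r = 1)
q = 1 - p

# Made-up "unstructured" payoff L_m(tau_m, t); tau lists the arrival times.
def L(tau, t):
    return 1.0 / len(tau) + 0.05 * t

# Method 1: path enumeration, formula (10).
pi_paths = 0.0
for path in product([0, 1], repeat=T - 1):
    m_end = 1 + sum(path)
    prob = p ** (m_end - 1) * q ** (T - m_end)
    tau = (1,) + tuple(t for t in range(2, T + 1) if path[t - 2])
    pi_paths += prob * sum(L(tuple(s for s in tau if s <= t), t)
                           for t in range(1, T + 1))

# Method 2: formula (11), via the aggregated M_m(t).
def M(m, t):
    return sum(L((1,) + c, t) for c in combinations(range(2, t + 1), m - 1))

pi_M = sum(p ** (m - 1) * q ** (t - m) * M(m, t)
           for t in range(1, T + 1) for m in range(1, t + 1))

print(abs(pi_paths - pi_M) < 1e-12)
```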

1.4 Simple Case

In the case where the \(L_m(\tau _m,t)\) are actually independent of \(\tau _m\), the computation of each \(M_m\) requires just a multiplication by a binomial coefficient, that is, an overall complexity for all the \(M_m\) of \(T(T-1)/2\) or \(T(T-1)\), depending on whether the binomial coefficients are given or have to be computed (via the Pascal triangle algorithm). The complexity of our method then drops to \(2T^2\) or \(2.5 T^2\). This huge simplification makes it possible to actually compute the result for large T.

1.5 Conclusion

Our theory gives the fastest algorithm in this general “unstructured” case, with half as many arithmetic operations as the next best. But of course, its main advantage lies elsewhere: it allows one to take advantage of any regularity in the definition of the \(L_m\), and it yields closed formulas in the infinite horizon case. Formula (4) is a typical example.

A general remark is that in going from the “direct” method, which uses \(T2^T\) arithmetic operations, to one with a constant coefficient, we trade, in a typical computer-science fashion, computing time for storage space. However, if the \(L_m\) need to be stored as data (as opposed to being given by some explicit formula), then in all the faster methods they are used only once, so that their memory space can be reused for intermediate results.


Cite this article

Bernhard, P., Deschamps, M. On Dynamic Games with Randomly Arriving Players. Dyn Games Appl 7, 360–385 (2017). https://doi.org/10.1007/s13235-016-0197-z
