Teamwise Mean Field Competitions

Abstract

This paper studies competitions with rank-based rewards among a large number of teams. Within each sizable team, we consider a mean field contribution game in which each team member contributes to the jump intensity of a common Poisson project process; across all teams, a mean field competition game is formulated on the rank of the completion time, namely the jump time of the Poisson project process, and each team is rewarded according to its ranking. At the layer of the teamwise competition game, three optimization problems are introduced, in which the team size is determined by: (i) the team manager; (ii) the central planner; (iii) the team members’ voting as a partnership. We propose a relative performance criterion for each team member to share the team’s reward and formulate some special cases of mean field games of mean field games, which are new to the literature. In all problems with homogeneous parameters, the equilibrium control of each worker and the equilibrium or optimal team size can be computed explicitly, which allows us to examine analytically the impact of some model parameters and to discuss their economic implications. Two numerical examples are also presented to illustrate the parameter dependence and to compare the different team size decision mechanisms.


Notes

  1. Using the information generated by \(\rho (t)\) would lead to the closed loop equilibrium in the two-layer mean field games that will be investigated later. As is well known in the mean field game literature, such a closed loop equilibrium also gives an open loop equilibrium, which means that each team member does not need to observe \(\rho (t)\) in equilibrium.

  2. One can also think of the team sizes as determining the change of measure from \({\mathbb {P}}\) to some \({\mathbb {Q}}\) under which \(Z^i\) is an exponential random variable with rate \(z_i\).

  3. To simplify the presentation, let us assume that \(\beta \) is a constant. The main results can be easily extended to the case where \(\beta \) is a function of the team size z. In particular, Theorem 1 remains valid.

  4. Here the intra-team division effect only applies to the regular team members’ share of the reward, namely, \((1-\theta )K(1+p)(1-\rho (\tau ))^p\). In the public good allocation scheme (\(\varepsilon =0\)), the manager’s and each member’s rewards have the same order of magnitude; in the budget allocation scheme (\(\varepsilon =1\)), the manager receives a chunk of the fixed pie while each member shares a negligible piece of the remaining pie.

  5. One could also consider other criteria for the central planner, such as minimizing a given quantile of the completion time distribution or maximizing the total welfare of a team.

  6. Here we refer to \(V_{z\alpha _z, z}(0)\) as “equilibrium reward” to separate it from size-related costs in the definition of \(V^c\). It should be understood that \(V_{z\alpha _z, z}(0)\) includes the cost of effort.

  7. Here and in the sequel, the average is taken in the space \((J, {\mathscr {J}}, \nu )\).

References

  1. Aliprantis, C.D., Border, K.C.: Infinite Dimensional Analysis: A Hitchhiker’s Guide, 3rd edn. Springer, Berlin (2006)

  2. Battaglini, M., Nunnari, S., Palfrey, T.R.: Dynamic free riding with irreversible investments. Am. Econ. Rev. 104(9), 2858–2871 (2014)

  3. Bayraktar, E., Zhang, Y.: A rank-based mean field game in the strong formulation. Electron. Commun. Probab. 21(72), 1–12 (2016)

  4. Bayraktar, E., Zhang, Y.: Terminal ranking games (2019). Forthcoming in Mathematics of Operations Research

  5. Bayraktar, E., Cvitanić, J., Zhang, Y.: Large tournament games. Ann. Appl. Probab. 29(6), 3695–3744 (2019)

  6. Bensoussan, A., Huang, T., Laurière, M.: Mean field control and mean field game models with several populations. Minimax Theory Appl. 3(2), 173–209 (2018)

  7. Bonatti, A., Hörner, J.: Collaborating. Am. Econ. Rev. 101(2), 632–663 (2011)

  8. Campbell, A., Ederer, F., Spinnewijn, J.: Delay and deadlines: freeriding and information revelation in partnerships. Am. Econ. J. 6(2), 163–204 (2014)

  9. Carmona, R., Wang, P.: Finite-state contract theory with a principal and a field of agents. Forthcoming in Management Science (2020)

  10. Compte, O., Jehiel, P.: Gradualism in bargaining and contribution games. Rev. Econ. Stud. 71(4), 975–1000 (2004)

  11. Cvitanić, J., Georgiadis, G.: Achieving efficiency in dynamic contribution games. Am. Econ. J. 8(4), 309–342 (2016)

  12. Élie, R., Mastrolia, T., Possamaï, D.: A tale of a principal and many many agents. Math. Oper. Res. 44(2), 440–467 (2019)

  13. Fujii, M.: Probabilistic approach to mean field games and mean field type control problems with multiple populations. Forthcoming in Minimax Theory and its Applications (2020)

  14. Georgiadis, G.: Projects and team dynamics. Rev. Econ. Stud. 82(1), 187–218 (2015)

  15. Huang, M., Nguyen, S.L.: Linear-quadratic mean field teams with a major agent. In: 2016 IEEE 55th Conference on Decision and Control (CDC), pp. 6958–6963 (2016)

  16. Legros, P., Matthews, S.A.: Efficient and nearly-efficient partnerships. Rev. Econ. Stud. 60(3), 599–611 (1993)

  17. Lockwood, B., Thomas, J.P.: Gradualism and irreversibility. Rev. Econ. Stud. 69(2), 339–356 (2002)

  18. Nutz, M., Zhang, Y.: A mean field competition. Math. Oper. Res. 44(4), 1245–1263 (2019)

  19. Sanjari, S., Yüksel, S.: Optimal solutions to infinite-player stochastic teams and mean-field teams. IEEE Trans. Autom. Control 66(3), 1071–1086 (2021)

  20. Sun, Y.: The exact law of large numbers via Fubini extension and characterization of insurable risks. J. Econ. Theory 126(1), 31–69 (2006)

  21. Yildirim, H.: Getting the ball rolling: voluntary contributions to a large-scale public project. J. Public Econ. Theory 8(4), 503–528 (2006)

Acknowledgements

Yuchong Zhang is supported by NSERC Discovery Grant RGPIN-2020-06290.

Author information

Corresponding author

Correspondence to Xiang Yu.

Proofs

1.1 Proof of Theorem 1

Proof

(i) The function \(V_{\lambda , z}\) in (6) is well-defined by the non-negativity and integrability of \(\lambda \) (see the definition of \({\mathscr {A}}\)). It is straightforward to verify that \(V_{\lambda , z}\) satisfies (5) and, equivalently, (3) with \({\bar{\alpha }}=\alpha _z\), and that \(\alpha _z\in {\mathscr {A}}\). A standard verification argument shows that \(V_{\lambda , z}\) is the value function (in response to \((\lambda , z,\alpha _z)\)) and that \(\alpha _z\) is an optimal control.

(ii) Let \({{\hat{\alpha }}} \in {\mathscr {A}}\) be any equilibrium control and \({{\hat{V}}}\) be the corresponding equilibrium value function within the team. Since the best response problem within a team is time-consistent, the restriction of \({{\hat{\alpha }}}\) on [r, 1] is optimal for \({{\hat{V}}}(r)\) (in response to \((\lambda , z,{{\hat{\alpha }}})\)) for any \(r<1\). By the optimality of \({{\hat{\alpha }}}\), we have \({{\hat{V}}}(r)\le G_z(r)\). On the other hand, taking the admissible control \(\alpha =\epsilon G_z\in {\mathscr {A}}\), we obtain

$$\begin{aligned} {{\hat{V}}}(r)\ge & {} \beta G_z(1)-c \epsilon ^2 {\mathbb {E}}\left[ \int _{\rho (0)}^{\rho (\tau )} \frac{G^2_z(y)}{\lambda (y)(1-y)}dy\right] \ge \beta G_z(1)\\&-\, c \epsilon ^2 {\mathbb {E}}\left[ \int _{0}^{1} \frac{G^2_z(y)}{\lambda (y)(1-y)}dy\right] . \end{aligned}$$

Letting \(\epsilon \rightarrow 0+\) yields \({{\hat{V}}}(r)\ge \beta G_z(1)\). Since \(G_z(1-)=G_z(1)=0\) and \({{\hat{V}}}(r)\le G_z(r)\), we must have \({{\hat{V}}}(1-)=0\).

We claim that \({{\hat{V}}}\) is absolutely continuous. Once this is proved, dynamic programming yields that \({{\hat{V}}}\) must satisfy (3) a.e. with \({\bar{\alpha }}={{\hat{\alpha }}}\), and that \({{\hat{\alpha }}}\) coincides with \(\alpha _z\), which further implies that \({{\hat{V}}}\) satisfies (5). It is easy to check that (5) has at most one absolutely continuous solution, namely, (6). By this uniqueness, we must have \({{\hat{V}}}=V_{\lambda , z}\).

The rest of the proof is devoted to the absolute continuity of \({{\hat{V}}}\), which is shown by a control-theoretic argument adapted from [18]. Fix an arbitrary \(r_0<1\). Since \(\lambda \) is assumed to be locally piecewise Lipschitz and strictly positive on [0, 1), it is uniformly bounded away from zero on \([0,r_0]\). This implies that \(\rho \) will reach \(r_0\) in finite time. Let \(0\le r<r+h\le r_0\); we wish to bound \({{\hat{V}}}(r)-{{\hat{V}}}(r+h)\) by a constant times h. There are two subtle differences from the proof in [18]. First, due to the bonus payment, monotonicity of the value function is unclear; thus, a lower bound for \({{\hat{V}}}(r)-{{\hat{V}}}(r+h)\) is no longer trivial. Second, in our model a single member has negligible impact on the team’s completion time. Hence \(\tau ^{r}\) and \(\tau ^{r+h}\) (where the superscript indicates the dependence on \(\rho (0)\)) are different regardless of how we choose a single member’s control.

Denote by \(\rho ^r\) the state process starting at \(\rho (0)=r\), and let \(t_h\) be the first time \(\rho ^r\) hits \(r+h\), which is finite. By the memoryless property of exponential random variables and the flow property of \(\rho ^r\), we have that

$$\begin{aligned} {\mathbb {P}}\left( \tau ^r\ge t_h+s| \tau ^r\ge t_h \right)&={\mathbb {P}}\left( Z^i>\int _{0}^{t_h+s} z{{\hat{\alpha }}} (\rho ^r(u))du\Big | Z^i>\int _{0}^{t_h} z{{\hat{\alpha }}} (\rho ^r(u))du\right) \\&={\mathbb {P}}\left( Z^i>\int _{t_h}^{t_h+s} z{{\hat{\alpha }}} (\rho ^r(u))du\right) \\&={\mathbb {P}}\left( Z^i>\int _{0}^{s} z{{\hat{\alpha }}} (\rho ^r(t_h+u))du\right) \\&={\mathbb {P}}\left( Z^i>\int _{0}^{s} z{{\hat{\alpha }}} (\rho ^{r+h}(u))du\right) ={\mathbb {P}}\left( \tau ^{r+h}\ge s \right) . \end{aligned}$$

In other words, the distribution of \(\tau ^r-t_h\) conditioned on the event \(\tau ^r\ge t_h\) is the same as the distribution of \(\tau ^{r+h}\). Using this, we deduce that

$$\begin{aligned} {{\hat{V}}}(r)&=J(r, {{\hat{\alpha }}};\lambda , z, {{\hat{\alpha }}})={\mathbb {E}}\left[ G_z(\rho ^r(\tau ^r))-c\int _0^{\tau ^r}{{\hat{\alpha }}}(\rho ^r(t))^2\,dt\, \right] \\&{\le } {\mathbb {E}}\left[ 1_{\{\tau ^r\le t_h\}} G_z(r)\right] {+} {\mathbb {P}}(\tau ^r{>}t_h){\mathbb {E}}\left[ G_z(\rho ^r(\tau ^r)){-}c\int _{t_h}^{\tau ^r}{{\hat{\alpha }}}(\rho ^r(t))^2\,dt \Big | \tau ^r{>}t_h\right] \\&{=}{\mathbb {P}}(\tau ^r{\le } t_h)G_z(r){+}{\mathbb {P}}(\tau ^r{>}t_h){\mathbb {E}}\left[ G_z(\rho ^r(\tau ^{r+h}{+}t_h)){-}c\int _{t_h}^{\tau ^{r+h}{+}t_h}{{\hat{\alpha }}}(\rho ^r(t))^2\,dt \right] \\&{=}{\mathbb {P}}(\tau ^r{\le } t_h)G_z(r){+}{\mathbb {P}}(\tau ^r{>}t_h){\mathbb {E}}\left[ G_z(\rho ^{r+h}(\tau ^{r+h})){-}c\int _{0}^{\tau ^{r+h}}{{\hat{\alpha }}}(\rho ^{r+h}(t))^2\,dt \right] \\&= {\mathbb {P}}(\tau ^r\le t_h)G_z(r)+{\mathbb {P}}(\tau ^r>t_h) J(r+h,{{\hat{\alpha }}};\lambda , z, {{\hat{\alpha }}}) \\&\le {\mathbb {P}}(\tau ^r\le t_h)G_z(0)+{{\hat{V}}}(r+h). \end{aligned}$$

Similarly, with

$$\begin{aligned} \alpha := {\left\{ \begin{array}{ll} \epsilon &{} \text{ on } [r,r+h),\\ {\hat{\alpha }} &{} \text{ on } [r+h,1), \end{array}\right. } \end{aligned}$$

we can show that

$$\begin{aligned} {{\hat{V}}}(r)&\ge J(r,\alpha ;\lambda , z, {{\hat{\alpha }}})\\&\ge {\mathbb {P}}(\tau ^r> t_h) {\mathbb {E}}\left[ G_z(\rho ^r(\tau ^r))\left( \beta +(1-\beta )\frac{\alpha }{{{\hat{\alpha }}}}(\rho ^r(\tau ^r))\right) -c\int _{t_h}^{\tau ^r} {{\hat{\alpha }}}(\rho ^r(t))^2\,dt \Big | \tau ^r>t_h\right] - c \epsilon ^2 t_h\\&={\mathbb {P}}(\tau ^r> t_h) {\mathbb {E}}\left[ G_z(\rho ^{r+h}(\tau ^{r+h})) -c\int _{0}^{\tau ^{r+h}}{{\hat{\alpha }}}(\rho ^{r+h}(t))^2\,dt\right] - c \epsilon ^2 t_h\\&={\mathbb {P}}(\tau ^r> t_h)J(r+h,{{\hat{\alpha }}};\lambda , z, {{\hat{\alpha }}})- c \epsilon ^2 t_h={\mathbb {P}}(\tau ^r> t_h){{\hat{V}}}(r+h)- c \epsilon ^2 t_h\\&\ge {{\hat{V}}}(r+h)-{\mathbb {P}}(\tau ^r\le t_h)G_z(0)- c \epsilon ^2 t_h. \end{aligned}$$

Taking the limit as \(\epsilon \rightarrow 0+\) and combining the two chains of inequalities, we obtain

$$\begin{aligned} |{{\hat{V}}}(r)-{{\hat{V}}}(r+h)|\le {\mathbb {P}}(\tau ^r\le t_h)G_z(0). \end{aligned}$$

It remains to note that

$$\begin{aligned} {\mathbb {P}}(\tau ^r\le t_h)&=1-\exp \left( -\int _0^{t_h}z{{\hat{\alpha }}}(\rho ^r(s))ds\right) \\&\le \int _0^{t_h}z{{\hat{\alpha }}}(\rho ^r(s))ds=\int _r^{r+h} z{{\hat{\alpha }}}(y)d\rho ^{-1}(y)=\int _r^{r+h} \frac{z{{\hat{\alpha }}}(y)}{\lambda (y)(1-y)}dy\\&\le \mathop {{\mathrm{ess~sup}}}\limits _{y\in [0, r_0]}\left| \frac{z{{\hat{\alpha }}}(y)}{\lambda (y)(1-y)}\right| h. \end{aligned}$$

We conclude that \({{\hat{V}}}\) is Lipschitz continuous on \([0, r_0]\) for any \(r_0<1\), and thus absolutely continuous on [0, 1). \(\square \)

1.2 Proof of Theorem 2

Proof

We only show (ii) and (iii). Suppose \(K(1+p)\theta >\kappa _0\) so that zero is not an equilibrium team size. For each \({{\bar{z}}} >0\), define

$$\begin{aligned} F_{{{\bar{z}}}}(z):=K(1+p)\theta \left[ 1+ p \left( \frac{z}{{{\bar{z}}}}\right) ^{\varepsilon -2} \right] ^{-1}-k z^\delta , \quad z\ge 0. \end{aligned}$$

By (11), \(z^*\) is an equilibrium team size if and only if \(z^*\in \mathop {{\mathrm{arg~max}}}\limits _{z> 0}F_{z^*}(z)\) and \(K\theta -\kappa _0-\kappa (z^*)\ge 0\). Since \(F_{{{\bar{z}}}}\) is continuous and \(\lim _{z\rightarrow \infty }F_{{{\bar{z}}}}(z)=-\infty \), the maximum of \(F_{{{\bar{z}}}}\) is attained either at \(z=0\) or at some interior point where the first derivative

$$\begin{aligned} F'_{{{\bar{z}}}}(z)=\frac{K(1+p)\theta p(2-\varepsilon )(z/{{\bar{z}}})^{\varepsilon -3}({{\bar{z}}})^{-1}}{\left( 1+p(z/{{\bar{z}}})^{\varepsilon -2}\right) ^2}- k\delta z^{\delta -1} \end{aligned}$$

vanishes. Any positive equilibrium team size \(z^*\) must satisfy \(F_{z^*}'(z^*)=0\), giving the unique candidate \(z_m^*\) in (15). It remains to check that \(F_{z_m^*}(z)\) attains its global maximum at \(z=z_m^*\) and that \(K\theta -\kappa _0-\kappa (z_m^*)\ge 0\).
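
Concretely, evaluating \(F'_{{{\bar{z}}}}\) at \(z={{\bar{z}}}=z^*\) and setting it to zero gives

$$\begin{aligned} \frac{K\theta p(2-\varepsilon )}{(1+p)z^*}=k\delta (z^*)^{\delta -1}, \quad \text {i.e.,}\quad z_m^*=\left( \frac{K\theta p(2-\varepsilon )}{k\delta (1+p)}\right) ^{\frac{1}{\delta }}, \end{aligned}$$

which identifies the candidate \(z_m^*\) explicitly.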

Let us rewrite the function

$$\begin{aligned} F_{z_m^*}(z)=K(1+p)\theta f(z/z_m^*), \end{aligned}$$

where

$$\begin{aligned} f(x):=\frac{x^{2-\varepsilon }}{x^{2-\varepsilon }+p}-\frac{(2-\varepsilon )p}{\delta (1+p)^2}x^\delta , \quad x\ge 0. \end{aligned}$$

\(F_{z_m^*}(z)\) attains its global maximum at \(z=z_m^*\) if and only if f(x) attains its global maximum on \({\mathbb {R}}_+\) at \(x=1\). We have

$$\begin{aligned} f'(x)=(2-\varepsilon )px^{1-\varepsilon }\left[ \frac{1}{(x^{2-\varepsilon }+p)^2}-\frac{x^{\varepsilon +\delta -2}}{(1+p)^2}\right] . \end{aligned}$$

It is easy to see that for \(x>0\), \({{\,\mathrm{sgn}\,}}(f'(x))={{\,\mathrm{sgn}\,}}(h(x))\), where

$$\begin{aligned} h(x):=(1+p)x^{1-\frac{\varepsilon +\delta }{2}}-x^{2-\varepsilon }-p. \end{aligned}$$
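
Indeed, since \((2-\varepsilon )px^{1-\varepsilon }>0\) for \(x>0\), the sign of \(f'(x)\) is that of the bracket, and

$$\begin{aligned} \frac{1}{(x^{2-\varepsilon }+p)^2}>\frac{x^{\varepsilon +\delta -2}}{(1+p)^2} \;\Longleftrightarrow \;(1+p)> x^{\frac{\varepsilon +\delta -2}{2}}\left( x^{2-\varepsilon }+p\right) \;\Longleftrightarrow \;(1+p)x^{1-\frac{\varepsilon +\delta }{2}}> x^{2-\varepsilon }+p, \end{aligned}$$

that is, \(h(x)>0\); the reverse inequalities hold when \(h(x)<0\).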

Notice that \(h(1)=0\) and \(h'(1)=p+\varepsilon -1-(1+p)(\varepsilon +\delta )/2.\) Consider two cases:

(i) \(\varepsilon +\delta \ge 2\). In this case, h is strictly decreasing, which implies \(f'\) is positive when \(0<x<1\) and negative when \(x>1\). Consequently, the global maximum of f is attained at \(x=1\) as desired.

(ii) \(\varepsilon +\delta < 2\). In this case, h is strictly concave, which implies that it can cross the x-axis at most twice. As \(h(0)=-p<0\), \(x=1\) is a global maximum of f if and only if \(h'(1)<0\) and \(f(1)\ge f(0)\), i.e.,

$$\begin{aligned} \delta >\frac{(2-\varepsilon )p+\varepsilon -2}{1+p}\quad \text {and}\quad \delta \ge \frac{(2-\varepsilon )p}{(1+p)}. \end{aligned}$$

Note that \(\delta \ge (2-\varepsilon )p/(1+p)\) is equivalent to \(\varepsilon +\delta \ge 2-\delta /p\) (multiply both sides by \((1+p)/p\) and rearrange). Combining the two cases, we see that f(x) attains its global maximum at \(x=1\) if and only if \(\delta \ge (2-\varepsilon )p/(1+p)\). We also have that

$$\begin{aligned} K\theta -\kappa _0-\kappa (z_m^*)=K\theta -\kappa _0- \frac{K\theta p (2-\varepsilon )}{\delta (1+p)}\ \ge 0 \end{aligned}$$

if and only if

$$\begin{aligned} \left[ 1-\frac{\kappa _0}{K\theta }\right] \delta \ge \frac{(2-\varepsilon )p}{1+p}, \end{aligned}$$

which implies \(\delta \ge (2-\varepsilon )p/(1+p)\). The rest of the theorem statement follows from direct computation using (9), (15), (13) and (14). \(\square \)

1.3 Proof of Theorem 4

Proof

Part 1: Let us first examine the candidate equilibrium team size \(z^*=0\).

Case I: \(\varepsilon =0\). For \(\kappa _0=0\), the conclusion I(i) of Theorem 4 is easy to verify. Now assume \(\kappa _0>0\) and \(\delta \in (0,1]\cup [2,\infty )\). Let us define

$$\begin{aligned} J(z):=\frac{(1+\beta ) K(1+p)}{2}-\frac{\kappa _0}{z}-kz^{\delta -1},\quad z>0. \end{aligned}$$

We get \(J'(z)=\kappa _0z^{-2}-k(\delta -1)z^{\delta -2}\) and \(J''(z)=-2\kappa _0z^{-3}-k(\delta -1)(\delta -2)z^{\delta -3}<0\) as \(\delta \le 1\) or \(\delta \ge 2\). Therefore, the unique interior critical point \({\hat{z}}:=\left( \frac{\kappa _0}{k(\delta -1)}\right) ^{\frac{1}{\delta }}\) is the global maximum point. We have that

$$\begin{aligned} {\hat{z}}J({\hat{z}})=\frac{(1+\beta ) K(1+p)}{2}\left( \frac{\kappa _0}{k(\delta -1)}\right) ^{\frac{1}{\delta }} -\kappa _0-k\left( \frac{\kappa _0}{k(\delta -1)}\right) . \end{aligned}$$

Then \(z^*=0\) is the equilibrium team size if and only if \({{\hat{z}}} J({{\hat{z}}})\le 0\), which is equivalent to

$$\begin{aligned} \frac{2^\delta (\kappa _0)^{\delta -1}k\delta ^{\delta }}{(1+\beta )^{\delta }K^{\delta }(1+p)^{\delta }(\delta -1)^{\delta -1}}\ge 1. \end{aligned}$$
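
In detail, when \(\delta \ge 2\), the expression for \({\hat{z}}J({\hat{z}})\) above gives

$$\begin{aligned} {\hat{z}}J({\hat{z}})\le 0 \;\Longleftrightarrow \;\frac{(1+\beta ) K(1+p)}{2}\left( \frac{\kappa _0}{k(\delta -1)}\right) ^{\frac{1}{\delta }}\le \frac{\kappa _0\delta }{\delta -1} \;\Longleftrightarrow \;\frac{(1+\beta )^{\delta } K^{\delta }(1+p)^{\delta }}{2^{\delta }}\cdot \frac{\kappa _0}{k(\delta -1)}\le \frac{\kappa _0^{\delta }\delta ^{\delta }}{(\delta -1)^{\delta }}, \end{aligned}$$

and rearranging the last inequality yields the displayed condition.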

Case II: \(\varepsilon =1\). It is clear that \(z^*=0\) is an equilibrium team size if \(\frac{(1+\beta )}{2} K(1+p)\le \kappa _0\). Now suppose that \(\frac{(1+\beta )}{2} K(1+p)>\kappa _0\) and let us define

$$\begin{aligned} J(z):=\frac{\frac{(1+\beta )}{2} K(1+p)-\kappa _0}{z}-kz^{\delta -1},\quad z>0. \end{aligned}$$

We get

$$\begin{aligned}&\lim _{z\rightarrow 0+}\frac{J(z)}{1/z}=\lim _{z\rightarrow 0+}zJ(z)=\lim _{z\rightarrow 0+} \frac{(1+\beta )}{2} K(1+p)-\kappa _0-kz^{\delta }\\&=\frac{(1+\beta )}{2} K(1+p)-\kappa _0. \end{aligned}$$

It then follows that \(\lim _{z\rightarrow 0+}J(z)=+\infty \) and hence \(z^*=0\) is not an equilibrium team size in view of its definition.

Part 2: Next, consider the candidate equilibrium team size \(z^*>0\). We can compute from (19) that

$$\begin{aligned} H'(z;z^*)=&\frac{(1-\varepsilon )K(1+p)(1+\beta )}{(z^*)^{2-\varepsilon }} \frac{z^{1-2\varepsilon }}{p+(\frac{z}{z^*})^{2-\varepsilon }}- \frac{(2-\varepsilon )K(1+p)(1+\beta )}{2(z^*)^{4-2\varepsilon }}\\&\frac{z^{3-3\varepsilon }}{(p+(\frac{z}{z^*})^{2-\varepsilon })^2} +\frac{\kappa _0}{z^2}+k(1-\delta )z^{\delta -2}. \end{aligned}$$

Again, as team sizes are required to be positive, we only need to consider interior maxima of \(H(z;z^*)\). Therefore, if \(z^*\) is an equilibrium team size under the partnership, then \(z^*\) satisfies \(H'(z^*;z^*)=0\), which, after simplification, means that \(z^*\) solves the algebraic equation

$$\begin{aligned} \left[ (1-\varepsilon )K(1+\beta )-\frac{(2-\varepsilon )K(1+\beta )}{2(p+1)} \right] (z^*)^{1-\varepsilon }+k(1-\delta )(z^*)^{\delta }+\kappa _0=0. \end{aligned}$$
(22)

Case I: \(\varepsilon =0\). Suppose that \(\delta \ge 3\). If \(\kappa _0=0\), it is clear that the algebraic equation (22) admits a unique positive root \(z^*_p=\left( \frac{A}{k(\delta -1)}\right) ^{\frac{1}{\delta -1}}\), where we denote \(A:=\frac{pK(1+\beta )}{(p+1)}>0\). If \(\kappa _0>0\), let us denote \(\gamma (x):=Ax+k(1-\delta )x^{\delta }+\kappa _0\). We have that \(\lim _{x\rightarrow 0} \gamma (x)=\kappa _0>0\) and \(\lim _{x\rightarrow \infty } \gamma (x)=-\infty \). Therefore, the equation \(\gamma (x)=0\) admits at least one positive root. Moreover, we also know that \(\gamma '(x)=A+k\delta (1-\delta )x^{\delta -1}\) and therefore \(\gamma (x)\) is strictly increasing for \(x\le x^*\) and strictly decreasing for \(x>x^*\), where

$$\begin{aligned} x^*:=\left( \frac{A}{k\delta (\delta -1)}\right) ^{\frac{1}{\delta -1}}. \end{aligned}$$
(23)

It then follows that the curve \(y=\gamma (x)\) hits the x-axis only once, which implies that \(\gamma (x)=0\) admits a unique positive root \(z_p^*\).

Case II: \(\varepsilon =1\). The algebraic equation (22) can be simplified as

$$\begin{aligned} -\frac{K(1+\beta )}{2(p+1)}+k(1-\delta )(z^*)^{\delta }+\kappa _0=0. \end{aligned}$$

It is clear that if \(\delta \ge 2\) and \(\frac{K(1+\beta )}{2(p+1)}<\kappa _0\), we can obtain the unique positive solution given in (20).
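
Explicitly, solving the simplified equation yields

$$\begin{aligned} z_p^*=\left( \frac{\kappa _0-\frac{K(1+\beta )}{2(p+1)}}{k(\delta -1)}\right) ^{\frac{1}{\delta }}, \end{aligned}$$

which, since \(\delta \ge 2\), is positive precisely when \(\frac{K(1+\beta )}{2(p+1)}<\kappa _0\).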

It then suffices to verify that \(H(z;z^*_p)\) attains its global maximum at the unique point \(z=z^*_p\) in two cases.

Case I: \(\varepsilon =0\). Let us assume that \(\delta \ge 3\) and \(p\ge 1/3\). We first have

$$\begin{aligned} H(z;z_p^*)=\frac{K(1+p)(1+\beta )}{2} \frac{(\frac{z}{z^*_p})^{2}}{p+(\frac{z}{z^*_p})^{2}}-\frac{\kappa _0+kz^\delta }{z}, \end{aligned}$$

and

$$\begin{aligned} H'(z;z_p^*)=&\frac{K(1+p)(1+\beta )}{(z_p^*)^2} \frac{z}{p+(\frac{z}{z_p^*})^{2}}- \frac{K(1+p)(1+\beta )}{(z_p^*)^{4}} \frac{z^{3}}{(p+(\frac{z}{z_p^*})^{2})^2} \\&+\frac{\kappa _0}{z^2}+k(1-\delta )z^{\delta -2}. \end{aligned}$$

It is straightforward to verify that the sign of \(H'(z;z_p^*)\) coincides with the sign of \(h(z;z_p^*)\), which is defined by

$$\begin{aligned} h(z;z_p^*)&:=pK(1{+}p)(1{+}\beta )\left( \frac{z}{z_p^*}\right) ^{3}{+} \left( \frac{z}{z_p^*}\right) \left( \frac{\kappa _0}{z}{+}k(1{-}\delta )z^{\delta {-}1}\right) \left( p+\left( \frac{z}{z_p^*}\right) ^{2}\right) ^2\\&= pK(1+p)(1+\beta )\left( \frac{z}{z_p^*}\right) ^{3}+ \left( \frac{z}{z_p^*}\right) \left( \frac{\kappa _0}{\frac{z}{z_p^*}}\frac{1}{z_p^*}\right. \\&\quad \left. +k(1-\delta )\left( \frac{z}{z_p^*}\right) ^{\delta -1}(z_p^*)^{\delta -1}\right) \left( p+\left( \frac{z}{z_p^*}\right) ^{2}\right) ^2. \end{aligned}$$

Since \(z_p^*\) solves the equation \(Ax+k(1-\delta )x^{\delta }+\kappa _0=0\), we get that

$$\begin{aligned} h(z;z_p^*)&=pK(1+p)(1+\beta )\left( \frac{z}{z_p^*}\right) ^{3}\\&\quad + \left[ -A\left( \frac{z}{z_p^*}\right) ^{\delta }+\left( 1-\left( \frac{z}{z_p^*}\right) ^{\delta } \right) \kappa _0(z_p^*)^{-1} \right] \left( p+\left( \frac{z}{z_p^*}\right) ^{2}\right) ^2. \end{aligned}$$

After changing variable \(x=\frac{z}{z_p^*}\), we can consider the function

$$\begin{aligned} h(x)&=pK(1+p)(1+\beta )x^{3}+ \left[ -Ax^{\delta }+\left( 1-x^{\delta } \right) \kappa _0(z_p^*)^{-1} \right] \left( p+x^{2}\right) ^2\\&=Bx^{3}+ \left[ C -(A+C ) x^{\delta } \right] \left( p+x^{2}\right) ^2, \end{aligned}$$

with \(B:=pK(1+p)(1+\beta )\) and \(C:=\kappa _0(z_p^*)^{-1}\). First, we have \(h(1)=B-(p+1)^2A=0\) by recalling that \(A=\frac{pK(1+\beta )}{(p+1)}\). Moreover, we have that

$$\begin{aligned} h'(1)=3B-(p+1)^2(A+C)\delta -4A(p+1). \end{aligned}$$

As \(A,C>0\) and \(\delta \ge 3\), it follows that \((A+C)\delta \ge 3A\). As \(4A(p+1)>0\), we can then deduce that \(h'(1)<3B-3A(p+1)^2=0\) and hence \(z=z_p^*\) is a local maximum of the function \(H(z;z_p^*)\).

We then claim that the equation \(h(x)=0\), \(x>0\), admits a unique solution at \(x=1\). As we already know that \(h(1)=0\) and \(h'(1)<0\), we will show that for any other \({\bar{x}}>0\) such that \(h({\bar{x}})=0\), we always have \(h'({\bar{x}})<0\); since h(x) is a continuous function, \({\bar{x}}=1\) must then be the unique solution. Let us then assume that \({\bar{x}}\ne 1\) also satisfies

$$\begin{aligned} B{\bar{x}}^{3}+ \left[ C -(A+C ) {\bar{x}}^{\delta } \right] \left( p+{\bar{x}}^{2}\right) ^2=0, \end{aligned}$$

and we have

$$\begin{aligned}&\!\!\!\!\!\!\frac{{\bar{x}}}{p+{\bar{x}}^2}h'({\bar{x}})\\&= \left[ 3B{\bar{x}}^3-(A+C)\delta {\bar{x}}^{\delta }(p+{\bar{x}}^2)^2+[C-(A+C){\bar{x}}^{\delta }]4{\bar{x}}^2(p+{\bar{x}}^2)\right] \frac{1}{p+{\bar{x}}^2}\\&= -3 \left[ C {-}(A+C ) {\bar{x}}^{\delta } \right] \left( p+{\bar{x}}^{2}\right) {-}(A+C)\delta {\bar{x}}^{\delta }(p+{\bar{x}}^2)+[C-(A+C){\bar{x}}^{\delta }]4{\bar{x}}^2\\&= C\left[ {\bar{x}}^2-3p- (1+\delta ){\bar{x}}^{2+\delta }+(3-\delta )p{\bar{x}}^{\delta }\right] +A\left[ (-1-\delta ){\bar{x}}^{2+\delta }+(3-\delta )p{\bar{x}}^{\delta }\right] \\&= C\left[ ({\bar{x}}^2{-}3p)(1{-}{\bar{x}}^{\delta }){-}\delta {\bar{x}}^{\delta }({\bar{x}}^2{+}p) \right] {+}A\left[ {-}{\bar{x}}^{\delta }\left( (1{+}\delta ){\bar{x}}^2{+}(\delta {-}3)p\right) \right] =:g({\bar{x}}). \end{aligned}$$

To show \(h'({\bar{x}})<0\) for \({\bar{x}}\ne 1\), it is equivalent to show that \(g({\bar{x}})<0\). First, for the second term of \(g({\bar{x}})\), the condition \(\delta \ge 3\) clearly implies that \((1+\delta ){\bar{x}}^2+(\delta -3)p>0\) for any \({\bar{x}}>0\), and therefore the second term is always negative for \({\bar{x}}>0\). For the first term of \(g({\bar{x}})\), recall the condition \(p\ge 1/3\) and hence \(\sqrt{3p}\ge 1\). It is then clear that if either \({\bar{x}}\le 1\) or \({\bar{x}}\ge \sqrt{3p}\), we have \(({\bar{x}}^2-3p)(1-{\bar{x}}^{\delta })\le 0\), and it follows that the first term is nonpositive. Otherwise, for \(1<{\bar{x}}<\sqrt{3p}\), we can also write

$$\begin{aligned} {\bar{x}}^2-3p- (1+\delta ){\bar{x}}^{2+\delta }+(3-\delta )p{\bar{x}}^{\delta }\le {\bar{x}}^2-3p- (1+\delta ){\bar{x}}^{2}+(3-\delta )p<0. \end{aligned}$$
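
Indeed, collecting terms, the upper bound simplifies to

$$\begin{aligned} {\bar{x}}^2-3p- (1+\delta ){\bar{x}}^{2}+(3-\delta )p=-\delta ({\bar{x}}^{2}+p)<0. \end{aligned}$$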

This verifies the claim that \(g({\bar{x}})<0\) for any \({\bar{x}}>0\). We can then conclude that \(x=1\) is the unique solution of \(h(x)=0\). It follows that \(H(z;z_p^*)\) admits a unique critical point \(z=z_p^*\), and hence the local maximum is also the global maximum.

Case II: \(\varepsilon =1\). We have that

$$\begin{aligned} H(z;z_p^*)=\frac{K(1+p)(1+\beta )}{2}\frac{1}{pz_p^*+z}-\frac{\kappa _0+kz^\delta }{z}, \end{aligned}$$

as well as

$$\begin{aligned} H'(z;z_p^*)=\frac{-K(1+p)(1+\beta )}{2}\frac{1}{(pz_p^*+z)^2}+\frac{\kappa _0}{z^2}+k(1-\delta )z^{\delta -2}. \end{aligned}$$

It is clear that the sign of the function \(H'(z;z_p^*)\) coincides with the sign of the function

$$\begin{aligned} h(z;z_p^*):=\frac{-K(1+p)(1+\beta )}{2}\left( \frac{z}{z_p^*}\right) ^2{+}\left[ \kappa _0{+}k(1{-}\delta )(z_p^*)^{\delta } \left( \frac{z}{z_p^*}\right) ^{\delta }\right] \left[ p{+}\frac{z}{z_p^*}\right] ^2. \end{aligned}$$

After the same change of variable \(x=z/z_p^*\), we can consider the function

$$\begin{aligned} h(x)=A x^2+B\left( x^{2+\delta }+2x^{1+\delta }+x^{\delta } \right) +\kappa _0(p+x)^2, \end{aligned}$$

where \(A:=\frac{-K(1+p)(1+\beta )}{2}\) and \(B:=(z_p^*)^{\delta }k(1-\delta )=\frac{K(1+\beta )}{2(p+1)}-\kappa _0\).

Note that \(h(1)=0\). Moreover, we have that

$$\begin{aligned} h'(1)=\left[ \frac{2(1+\delta )}{(p+1)^2}-1\right] K(1+p)(1+\beta )-\kappa _0\left[ 4(1+\delta )-2(p+1)\right] . \end{aligned}$$

By the assumption that \(2(1+\delta )>(p+1)^2\), we get \(2(1+\delta )>(p+1)\) as well and hence \(\frac{\frac{2(1+\delta )}{(p+1)^2}-1}{4(1+\delta )-2(p+1)}\le \frac{1}{2(p+1)^2}\). We obtain that \(h'(1){<}\left[ \frac{K(1{+}\beta )}{2(p{+}1)}{-}\kappa _0\right] \left[ 4(1+\delta ){-}2(p{+}1)\right] <0\) as we assume \(\frac{K(1+\beta )}{2(p+1)}<\kappa _0\). Again, it follows that \(z_p^*\) is a local maximum of the function \(H(z;z_p^*)\).
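
Here the first inequality for \(h'(1)\) follows because the ratio bound rearranges to

$$\begin{aligned} \left[ \frac{2(1+\delta )}{(p+1)^2}-1\right] K(1+p)(1+\beta )\le \frac{K(1+\beta )}{2(p+1)}\left[ 4(1+\delta )-2(p+1)\right] , \end{aligned}$$

and subtracting \(\kappa _0\left[ 4(1+\delta )-2(p+1)\right] \) from both sides yields the stated bound.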

We then claim that the equation \(h(x)=0\), \(x>0\), admits a unique solution. We again show that for any point \({\bar{x}}\) such that \(h({\bar{x}})=0\), we always have \(h'({\bar{x}})<0\). Let us assume that \({\bar{x}}\) satisfies

$$\begin{aligned} A {\bar{x}}^2+B\left( {\bar{x}}^{2+\delta }+2{\bar{x}}^{1+\delta }+{\bar{x}}^{\delta } \right) +\kappa _0(p+{\bar{x}})^2=0, \end{aligned}$$

and check that

$$\begin{aligned} {\bar{x}}h'({\bar{x}})=&2A{\bar{x}}^2+B\left( (2+\delta ){\bar{x}}^{2+\delta }+2(1+\delta ){\bar{x}}^{1+\delta }+\delta {\bar{x}}^{\delta } \right) +\kappa _02{\bar{x}}(p+{\bar{x}})\\ =&-2B\left( {\bar{x}}^{2+\delta }+2{\bar{x}}^{1+\delta }+{\bar{x}}^{\delta } \right) -2\kappa _0(p+{\bar{x}})^2\\&+B\left( (2+\delta ){\bar{x}}^{2+\delta }+2(1+\delta ){\bar{x}}^{1+\delta }+\delta {\bar{x}}^{\delta } \right) +\kappa _02{\bar{x}}(p+{\bar{x}})\\ =&B\left( \delta {\bar{x}}^{2+\delta }+(2\delta -2){\bar{x}}^{1+\delta }+(\delta -2) {\bar{x}}^{\delta }\right) -2p\kappa _0(p+{\bar{x}}). \end{aligned}$$

As \(\delta \ge 2\), the quadratic function \(\delta {\bar{x}}^{2}+(2\delta -2){\bar{x}}+(\delta -2)>0\) for any \({\bar{x}}>0\). Thanks to \(B<0\), we have \(h'({\bar{x}})<0\) for any \({\bar{x}}>0\) such that \(h({\bar{x}})=0\). This leads to the fact that \({\bar{x}}=1\) is the unique solution to the equation \(h(x)=0\), as h(x) is a continuous function. It then follows that the function \(H(z;z_p^*)\) admits a unique critical point. Therefore, \(z_p^*\) is the global maximum point of \(H(z;z_p^*)\), which completes the proof. \(\square \)

1.4 Proof of Proposition 1

Proof

Recall the function H defined in (19). A positive team size \(z_p^*>0\) is an equilibrium only if \(H'(z;z_p^*)|_{z=z_p^*}=0\), which is equivalent to

$$\begin{aligned} K\alpha (\varepsilon ,p)(1+\beta )(z_p^*)^{1-\varepsilon }=-\kappa _0+k(\delta -1)(z_p^*)^\delta , \end{aligned}$$
(24)

where

$$\begin{aligned} \alpha (\varepsilon ,p):=1-\varepsilon -\frac{2-\varepsilon }{2(p+1)}. \end{aligned}$$

As \(\delta >1\), the functions \(z\mapsto K\alpha (\varepsilon ,p)(1+\beta )z^{1-\varepsilon }\) and \(z\mapsto -\kappa _0+k(\delta -1)z^\delta \) have at most one intersection point \(z_p^*>0\) for \(z>0\).
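
Alternatively, implicit differentiation of (24) with respect to \(\beta \) (carried out in the proof of Proposition 2 below) gives

$$\begin{aligned} \frac{d}{d\beta }z_p^*(\beta )=\frac{K\alpha (\varepsilon ,p)}{\kappa _0(1-\varepsilon )(z_p^*)^{\varepsilon -2}+k(\delta -1)(\delta +\varepsilon -1)(z_p^*)^{\delta +\varepsilon -2}}, \end{aligned}$$

whose denominator is positive whenever \(\varepsilon \le 1\) (recall \(\delta >1\)), so that the sign of \(\frac{d}{d\beta }z_p^*(\beta )\) coincides with that of \(\alpha (\varepsilon ,p)\).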

Note that \(\alpha (\varepsilon ,p)>0 \Leftrightarrow 2p-2p\varepsilon -\varepsilon >0\), and if this holds, it can be easily seen that \(z_p^*\) is increasing w.r.t. \(\beta \). Similarly, we can show the monotonicity of \(z_p^*\) w.r.t. \(\beta \) when \(\alpha (\varepsilon ,p)=0\) and when \(\alpha (\varepsilon ,p)<0\). \(\square \)

1.5 Proof of Proposition 2

Proof

Consider

$$\begin{aligned} \frac{d}{d\beta }V^p(\beta ,z_*^p(\beta ))&=\frac{\partial }{\partial \beta }V^p(\beta ,z_*^p(\beta ))+\frac{\partial }{\partial z}V^p(\beta ,z_*^p(\beta ))\cdot \frac{d}{d\beta }z_p^*(\beta )\\&=\frac{K}{2}(z_p^*)^{-\varepsilon }+\frac{\partial }{\partial z}V^p(\beta ,z_*^p(\beta ))\cdot \frac{d}{d\beta }z_p^*(\beta ). \end{aligned}$$

We have that

$$\begin{aligned} \frac{\partial }{\partial z}V^p(\beta ,z_*^p(\beta ))&=(z_p^*)^{-2}\left[ -\frac{K(1+\beta )\varepsilon }{2}(z_p^*)^{1-\varepsilon }+\kappa _0-k(\delta -1)(z_p^*)^\delta \right] \\&=-(z_p^*)^{-2}\left[ \frac{K(1+\beta )\varepsilon }{2}(z_p^*)^{1-\varepsilon }+K\alpha (\varepsilon ,p)(1+\beta )(z_p^*)^{1-\varepsilon }\right] \\&=-(z_p^*)^{-1-\varepsilon }\cdot \frac{K(1+\beta )}{2(p+1)}\cdot p(2-\varepsilon ), \end{aligned}$$

where the second equality follows from (24) and the third from the identity \(\frac{\varepsilon }{2}+\alpha (\varepsilon ,p)=\frac{(2-\varepsilon )p}{2(p+1)}\). Therefore,

$$\begin{aligned} \frac{d}{d\beta }V^p(\beta ,z_*^p(\beta ))=\frac{K(z_p^*)^{-1-\varepsilon }}{2}\left[ z_p^*-\frac{1+\beta }{1+p}\cdot p(2-\varepsilon )\cdot \frac{d}{d\beta }z_p^*(\beta )\right] . \end{aligned}$$

By Proposition 1, when \(2p-2p\varepsilon -\varepsilon \le 0\), \(\frac{d}{d\beta }z_p^*(\beta )\le 0\) and thus \(\frac{d}{d\beta }V^p(\beta ,z_*^p(\beta ))\ge 0\). For the rest of the proof, we assume \(2p-2p\varepsilon -\varepsilon >0\).

By (24) we have that

$$\begin{aligned} \frac{d}{d\beta }z_p^*(\beta )=\frac{K\alpha (\varepsilon ,p)}{\kappa _0(1-\varepsilon )(z_p^*)^{\varepsilon -2}+k(\delta -1)(\delta +\varepsilon -1)(z_p^*)^{\delta +\varepsilon -2}}. \end{aligned}$$
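
Indeed, differentiating (24) with respect to \(\beta \) yields

$$\begin{aligned} K\alpha (\varepsilon ,p)(z_p^*)^{1-\varepsilon }+K\alpha (\varepsilon ,p)(1+\beta )(1-\varepsilon )(z_p^*)^{-\varepsilon }\frac{d}{d\beta }z_p^*(\beta )=k\delta (\delta -1)(z_p^*)^{\delta -1}\frac{d}{d\beta }z_p^*(\beta ), \end{aligned}$$

and substituting \(K\alpha (\varepsilon ,p)(1+\beta )(z_p^*)^{1-\varepsilon }=-\kappa _0+k(\delta -1)(z_p^*)^{\delta }\) from (24) into the second term and solving for \(\frac{d}{d\beta }z_p^*(\beta )\) recovers the display above.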

Since \(2p-2p\varepsilon -\varepsilon>0\Leftrightarrow \alpha (\varepsilon ,p)> 0\), we deduce that

$$\begin{aligned} \frac{d}{d\beta }z_p^*(\beta )\le \frac{K\alpha (\varepsilon ,p)}{k(\delta -1)(\delta +\varepsilon -1)(z_p^*)^{\delta +\varepsilon -2}}. \end{aligned}$$

It follows that

$$\begin{aligned} \frac{d}{d\beta }V^p(\beta ,z_*^p(\beta ))&\ge \frac{K(z_p^*)^{-1-\varepsilon }}{2}\left[ z_p^*-\frac{1+\beta }{1+p}\cdot p(2-\varepsilon )\cdot \frac{K\alpha (\varepsilon ,p)}{k(\delta -1)(\delta +\varepsilon -1)(z_p^*)^{\delta +\varepsilon -2}}\right] \\&=\frac{K(z_p^*)^{-1-\varepsilon }}{2}\left[ z_p^*-\frac{p(2-\varepsilon )}{1+p}\cdot \frac{-\kappa _0(z_p^*)^{\varepsilon -1}+k(\delta -1)(z_p^*)^{\delta +\varepsilon -1}}{k(\delta -1)(\delta +\varepsilon -1)(z_p^*)^{\delta +\varepsilon -2}}\right] \\&\ge \frac{K(z_p^*)^{-\varepsilon }}{2}\left[ 1-\frac{p(2-\varepsilon )}{(1+p)(\delta +\varepsilon -1)}\right] , \end{aligned}$$

where the second equality follows from (24), and the last inequality follows by dropping the nonpositive term \(-\kappa _0(z_p^*)^{\varepsilon -1}\) in the numerator. This completes the proof. \(\square \)

About this article

Cite this article

Yu, X., Zhang, Y. & Zhou, Z. Teamwise Mean Field Competitions. Appl Math Optim 84 (Suppl 1), 903–942 (2021). https://doi.org/10.1007/s00245-021-09789-1
