
On the computation of equilibria in monotone and potential stochastic hierarchical games

  • Full Length Paper
  • Series B
  • Published in: Mathematical Programming

Abstract

We consider a class of noncooperative hierarchical \({\textbf{N}}\)-player games in which the ith player solves a parametrized stochastic mathematical program with equilibrium constraints (MPEC), with the caveat that the implicit form of the ith player’s MPEC is convex in that player’s strategy, given rival decisions. Few, if any, general-purpose schemes exist for computing equilibria of such games, motivating the development of computational schemes in two regimes: (a) Monotone regimes. When the player-specific implicit problems are convex, the necessary and sufficient equilibrium conditions are given by a stochastic inclusion. Under a monotonicity assumption on the associated operator, we develop a variance-reduced stochastic proximal-point scheme that achieves deterministic rates of convergence, in terms of proximal-point subproblems solved, in monotone and strongly monotone regimes, with optimal or near-optimal sample-complexity guarantees. Moreover, the generated sequences are shown to converge to an equilibrium in an almost-sure sense in both regimes; (b) Potentiality. When the implicit form of the game admits a potential function, we develop an asynchronous relaxed inexact smoothed proximal best-response framework, which requires the efficient computation of an approximate solution of an MPEC with a strongly convex implicit objective. To this end, we consider an \(\eta \)-smoothed counterpart of the game in which each player’s problem is smoothed via randomized smoothing; a Nash equilibrium of the smoothed counterpart is an \(\eta \)-approximate Nash equilibrium of the original game. The proposed scheme, as well as a relaxed variant, produces sequences that converge almost surely to an \(\eta \)-approximate Nash equilibrium. The scheme relies on resolving the proximal problem, a stochastic MPEC whose implicit form has a strongly convex objective, with increasing accuracy in finite time.
The smoothing framework also enables the development of a variance-reduced zeroth-order scheme for such problems that admits a fast rate of convergence. Numerical studies on a class of multi-leader multi-follower games suggest that variance-reduced proximal schemes provide significantly better accuracy with far lower run-times. The relaxed best-response scheme scales well with problem size and generally displays more stability than its unrelaxed counterpart.


References

  1. Facchinei, F., Pang, J.-S.: Nash equilibria: the variational approach. In: Convex Optimization in Signal Processing and Communications, pp. 443–495. Cambridge University Press, Cambridge (2009)

  2. Ravat, U., Shanbhag, U.V.: On the characterization of solution sets of smooth and nonsmooth convex stochastic Nash games. SIAM J. Optim. 21(3), 1168–1199 (2011)

  3. Koshal, J., Nedić, A., Shanbhag, U.V.: Regularized iterative stochastic approximation methods for stochastic variational inequality problems. IEEE Trans. Autom. Control 58(3), 594–609 (2013)

  4. Monderer, D., Shapley, L.S.: Potential games. Games Econom. Behav. 14(1), 124–143 (1996)

  5. Sherali, H.D.: A multiple leader Stackelberg model and analysis. Oper. Res. 32(2), 390–404 (1984)

  6. DeMiguel, V., Xu, H.: A stochastic multiple-leader Stackelberg model: analysis, computation, and application. Oper. Res. 57(5), 1220–1235 (2009)

  7. Aussel, D., Svensson, A.: A short state of the art on multi-leader-follower games. In: Bilevel Optimization, pp. 53–76, Springer (2020)

  8. Pang, J.-S., Fukushima, M.: Quasi-variational inequalities, generalized Nash equilibria, and multi-leader-follower games. CMS 2(1), 21–56 (2005)

  9. Hu, X., Ralph, D.: Using EPECs to model bilevel games in restructured electricity markets with locational prices. Oper. Res. 55(5), 809–827 (2007)

  10. Su, C.-L.: Analysis on the forward market equilibrium model. Oper. Res. Lett. 35(1), 74–82 (2007)

  11. Kulkarni, A.A., Shanbhag, U.V.: An existence result for hierarchical Stackelberg v/s Stackelberg games. IEEE Trans. Autom. Control 60(12), 3379–3384 (2015)

  12. Leyffer, S., Munson, T.: Solving multi-leader-common-follower games. Optim. Methods Softw. 25(4), 601–623 (2010)

  13. Herty, M., Steffensen, S., Thünen, A.: Solving quadratic multi-leader-follower games by smoothing the follower’s best response. Optim. Methods Softw. 37, 1–28 (2020)

  14. Kulkarni, A.A., Shanbhag, U.V.: A shared-constraint approach to multi-leader multi-follower games. Set-Valued Var. Anal. 22, 691–720 (2014)

  15. Luo, Z.-Q., Pang, J.-S., Ralph, D.: Mathematical Programs with Equilibrium Constraints. Cambridge University Press, Cambridge (1996)

  16. Ichiishi, T.: Game Theory for Economic Analysis. Economic Theory, Econometrics, and Mathematical Economics. Academic Press Inc [Harcourt Brace Jovanovich Publishers], New York (1983)

  17. Caruso, F., Lignola, M.B., Morgan, J.: Regularization and Approximation Methods in Stackelberg Games and Bilevel Optimization, pp. 77–138. Springer International Publishing, Cham (2020)

  18. Pang, J., Scutari, G.: Nonconvex games with side constraints. SIAM J. Optim. 21(4), 1491–1522 (2011)

  19. Hobbs, B.F., Pang, J.-S.: Nash-Cournot equilibria in electric power markets with piecewise linear demand functions and joint constraints. Oper. Res. 55(1), 113–127 (2007)

  20. Allaz, B., Vila, J.-L.: Cournot competition, forward markets and efficiency. J. Econ. Theory 59, 1–16 (1993)

  21. Shanbhag, U.V., Infanger, G., Glynn, P.W.: A complementarity framework for forward contracting under uncertainty. Oper. Res. 59(4), 810–834 (2011)

  22. Hu, M., Fukushima, M.: Existence, uniqueness, and computation of robust Nash equilibria in a class of multi-leader-follower games. SIAM J. Optim. 23(2), 894–916 (2013)

  23. Hu, M., Fukushima, M.: Multi-leader-follower games: models, methods and applications. J. Oper. Res. Soc. Jpn. 58(1), 1–23 (2015)

  24. Mallozzi, L., Messalli, R.: Multi-leader multi-follower model with aggregative uncertainty. Games 8(3), 1–14 (2017)

  25. Leleno, J.M., Sherali, H.D.: A leader-follower model and analysis for a two-stage network of oligopolies. Ann. Oper. Res. 34(1), 37–72 (1992)

  26. Murphy, F.H., Smeers, Y.: Generation capacity expansion in imperfectly competitive restructured electricity markets. Oper. Res. 53(4), 646–661 (2005)

  27. Wogrin, S., Hobbs, B.F., Ralph, D., Centeno, E., Barquin, J.: Open versus closed loop capacity equilibria in electricity markets under perfect and oligopolistic competition. Math. Program. 140(2), 295–322 (2013)

  28. De Wolf, D., Smeers, Y.: A stochastic version of a Stackelberg-Nash-Cournot equilibrium model. Manag. Sci. 43(2), 190–197 (1997)

  29. Facchinei, F., Pang, J.-S.: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, Berlin (2007)

  30. Scutari, G., Palomar, D.P., Facchinei, F., Pang, J.-S.: Monotone Games for Cognitive Radio Systems, pp. 83–112. Springer, London (2012)

  31. Scutari, G., Facchinei, F., Pang, J., Palomar, D.P.: Real and complex monotone communication games. IEEE Trans. Inf. Theory 60(7), 4197–4231 (2014)

  32. Hu, M., Fukushima, M.: Variational inequality formulation of a class of multi-leader-follower games. J. Optim. Theory Appl. 151(3), 455–473 (2011)

  33. Martinet, B.: Détermination approchée d’un point fixe d’une application pseudo-contractante. CR Acad. Sci. Paris 274(2), 163–165 (1972)

  34. Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control. Optim. 14(5), 877–898 (1976)

  35. Rockafellar, R.T.: Augmented Lagrangians and applications of the proximal point algorithm in convex programming. Math. Oper. Res. 1(2), 97–116 (1976)

  36. Solodov, M., Svaiter, B.: A hybrid approximate extragradient-proximal point algorithm using the enlargement of a maximal monotone operator. Set-Valued Anal. 7(4), 323–345 (1999)

  37. Monteiro, R.D., Svaiter, B.F.: On the complexity of the hybrid proximal extragradient method for the iterates and the ergodic mean. SIAM J. Optim. 20(6), 2755–2787 (2010)

  38. Monteiro, R.D., Svaiter, B.F.: Complexity of variants of Tseng’s modified FB splitting and Korpelevich’s methods for hemivariational inequalities with applications to saddle-point and convex optimization problems. SIAM J. Optim. 21(4), 1688–1720 (2011)

  39. Corman, E., Yuan, X.: A generalized proximal point algorithm and its convergence rate. SIAM J. Optim. 24(4), 1614–1638 (2014)

  40. Patrascu, A., Necoara, I.: Nonasymptotic convergence of stochastic proximal point methods for constrained convex optimization. J. Mach. Learn. Res. 18(1), 7204–7245 (2017)

  41. Davis, D., Drusvyatskiy, D.: Stochastic model-based minimization of weakly convex functions. SIAM J. Optim. 29(1), 207–239 (2019)

  42. Schmidt, M., Roux, N. L., Bach, F. R.: Convergence rates of inexact proximal-gradient methods for convex optimization. In: Advances in Neural Information Processing Systems, pp. 1458–1466 (2011)

  43. Ghadimi, S., Lan, G., Zhang, H.: Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization. Math. Program 155(1–2 Ser. A), 267–305 (2016)

  44. Jalilzadeh, A., Shanbhag, U. V., Blanchet, J. H., Glynn, P. W.: Smoothed variable sample-size accelerated proximal methods for nonsmooth stochastic convex programs. Stoch. Syst. (2022)

  45. Jofré, A., Thompson, P.: On variance reduction for stochastic smooth convex optimization with multiplicative noise. Math. Program. 174(1–2), 253–292 (2019)

  46. Ryu, E. K., Boyd, S.: Stochastic proximal iteration: a non-asymptotic improvement upon stochastic gradient descent. https://web.stanford.edu/~boyd/papers/pdf/spi.pdf (2014)

  47. Asi, H., Duchi, J.C.: Stochastic (approximate) proximal point methods: convergence, optimality, and adaptivity. SIAM J. Optim. 29(3), 2257–2290 (2019)

  48. Bianchi, P.: Ergodic convergence of a stochastic proximal point algorithm. SIAM J. Optim. 26(4), 2235–2260 (2016)

  49. Douglas, J., Rachford, H.H.: On the numerical solution of heat conduction problems in two and three space variables. Trans. Am. Math. Soc. 82(2), 421–439 (1956)

  50. Peaceman, D.W., Rachford, H.H., Jr.: The numerical solution of parabolic and elliptic differential equations. J. Soc. Ind. Appl. Math. 3(1), 28–41 (1955)

  51. Lions, P.-L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16(6), 964–979 (1979)

  52. Passty, G.B.: Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. J. Math. Anal. Appl. 72(2), 383–390 (1979)

  53. Rosasco, L., Villa, S., Vũ, B. C.: Convergence of stochastic proximal gradient algorithm. Appl. Math. Optim., pp. 1–27 (2019)

  54. Combettes, P.L., Pesquet, J.-C.: Stochastic approximations and perturbations in forward-backward splitting for monotone operators. Pure Appl. Anal. 1(1), 13–37 (2016)

  55. Rosasco, L., Villa, S., Vũ, B.C.: Stochastic forward-backward splitting for monotone inclusions. J. Optim. Theory Appl. 169(2), 388–406 (2016)

  56. Chen, X., Wets, R.J.-B., Zhang, Y.: Stochastic variational inequalities: residual minimization smoothing sample average approximations. SIAM J. Optim. 22(2), 649–673 (2012)

  57. Shapiro, A., Xu, H.: Stochastic mathematical programs with equilibrium constraints, modelling and sample average approximation. Optimization 57(3), 395–418 (2008)

  58. Brezis, H.: Opérateurs Maximaux Monotones et Semi-Groupes de Contractions Dans Les Espaces de Hilbert. Number 5 in North Holland Math. Studies. North-Holland, Amsterdam (1973)

  59. Polyak, B.T.: Introduction to Optimization. Optimization Software Inc, Publications Division, New York (1987)

  60. Ahmadi, H.: On the analysis of data-driven and distributed algorithms for convex optimization problems. PhD dissertation, The Pennsylvania State University (2016)

  61. Robinson, S.M.: Generalized equations. In: Bachem, A., Korte, B., Grötschel, M. (eds.) Mathematical Programming The State of the Art: Bonn 1982, pp. 346–367. Springer, Berlin (1983)

  62. Chen, X., Shapiro, A., Sun, H.: Convergence analysis of sample average approximation of two-stage stochastic generalized equations. SIAM J. Optim. 29(1), 135–161 (2019)

  63. Aumann, R.J.: Integrals of set-valued functions. J. Math. Anal. Appl. 12(1), 1–12 (1965)

  64. Dantzig, G. B.: Linear programming under uncertainty. In: Stochastic Programming, pp. 1–11. Springer (2010)

  65. Birge, J.R., Louveaux, F.: Introduction to Stochastic Programming. Springer, Berlin (2011)

  66. Shapiro, A., Dentcheva, D., Ruszczyński, A.: Lectures on Stochastic Programming: Modeling and Theory. SIAM, Philadelphia (2014)

  67. Jiang, H., Xu, H.: Stochastic approximation approaches to the stochastic variational inequality problem. IEEE Trans. Autom. Control 53(6), 1462–1475 (2008)

  68. Juditsky, A., Nemirovski, A., Tauvel, C.: Solving variational inequalities with stochastic mirror-prox algorithm. Stoch. Syst. 1(1), 17–58 (2011)

  69. Shanbhag, U. V.: Stochastic variational inequality problems: applications, analysis, and algorithms. In: Theory Driven by Influential Applications, pp. 71–107, INFORMS (2013)

  70. Ravat, U., Shanbhag, U.V.: On the existence of solutions to stochastic quasi-variational inequality and complementarity problems. Math. Program. 165(1), 291–330 (2017)

  71. Hobbs, B.F., Metzler, C.B., Pang, J.-S.: Strategic gaming analysis for electric power systems: an MPEC approach. IEEE Trans. Power Syst. 15, 638–645 (2000)

  72. Steklov, V.A.: Sur les expressions asymptotiques de certaines fonctions définies par les équations différentielles du second ordre et leurs applications au problème du développement d’une fonction arbitraire en séries procédant suivant les diverses fonctions. Comm. Charkov Math. Soc. 2(10), 97–199 (1907)

  73. Yousefian, F., Nedić, A., Shanbhag, U.V.: On stochastic gradient and subgradient methods with adaptive steplength sequences. Automatica 48(1), 56–67 (2012)

  74. Nesterov, Y., Spokoiny, V.: Random gradient-free minimization of convex functions. Found. Comput. Math. 17(2), 527–566 (2017)

  75. Yousefian, F., Nedić, A., Shanbhag, U.V.: Self-tuned stochastic approximation schemes for non-Lipschitzian stochastic multi-user optimization and Nash games. IEEE Trans. Autom. Control 99, 1 (2015)

  76. Patriksson, M., Wynter, L.: Stochastic mathematical programs with equilibrium constraints. Oper. Res. Lett. 25, 159–167 (1999)

  77. Cui, S., Shanbhag, U. V., Yousefian, F.: Complexity guarantees for an implicit smoothing-enabled method for stochastic MPECs. To appear in Mathematical Programming (2022)

  78. Outrata, J., Kočvara, M., Zowe, J.: Nonsmooth Approach to Optimization Problems with Equilibrium Constraints: Theory, Applications and Numerical Results. Nonconvex Optimization and its Applications, vol. 28. Kluwer Academic Publishers, Dordrecht (1998)

  79. Yousefian, F., Nedić, A., Shanbhag, U. V.: Convex nondifferentiable stochastic optimization: A local randomized smoothing technique. In: Proceedings of the 2010 American Control Conference, pp. 4875–4880 (2010)

  80. Facchinei, F., Pang, J.-S.: Nash equilibria: the variational approach. In: Convex Optimization in Signal Processing and Communication, ch. 12, pp. 443–495. Cambridge University Press, Cambridge (2009)

  81. Ermoliev, Y.M., Norkin, V.I., Wets, R.J.-B.: The Minimization of Semicontinuous Functions: Mollifier Subgradients. SIAM J. Control. Optim. 33(1), 149–167 (1995)

  82. Gürkan, G., Pang, J.: Approximations of Nash equilibria. Math. Program. 117(1–2), 223–253 (2009)

  83. Fudenberg, D., Levine, D.K.: The Theory of Learning In Games. MIT Press Series on Economic Learning and Social Evolution, vol. 2. MIT Press, Cambridge (1998)

  84. Basar, T., Olsder, G.J.: Dynamic Noncooperative Game Theory, vol. 23. SIAM, Philadelphia (1999)

  85. Scutari, G., Palomar, D.P.: MIMO cognitive radio: a game theoretical approach. IEEE Trans. Signal Process. 58(2), 761–780 (2010)

  86. Altman, E., Hayel, Y., Kameda, H.: Evolutionary dynamics and potential games in non-cooperative routing. In: 5th International Symposium on Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks and Workshops, 2007. WiOpt 2007, pp. 1–5, IEEE (2007)

  87. Pang, J.-S., Sen, S., Shanbhag, U.V.: Two-stage non-cooperative games with risk-averse players. Math. Program. 165(1), 235–290 (2017)

  88. Lei, J., Shanbhag, U.V., Pang, J.-S., Sen, S.: On synchronous, asynchronous, and randomized best-response schemes for stochastic Nash games. Math. Oper. Res. 45(1), 157–190 (2020)

  89. Facchinei, F., Piccialli, V., Sciandrone, M.: Decomposition algorithms for generalized potential games. Comput. Optim. Appl. 50(2), 237–262 (2011)

  90. Lei, J., Shanbhag, U.V.: Asynchronous schemes for stochastic and misspecified potential games and nonconvex optimization. Oper. Res. 68(6), 1742–1766 (2020)

  91. Sherali, H.D., Soyster, A.L., Murphy, F.H.: Stackelberg–Nash–Cournot equilibria: characterizations and computations. Oper. Res. 31(2), 253–276 (1983)

Author information

Corresponding author

Correspondence to Uday V. Shanbhag.

Research was partially supported by NSF CMMI-1538605, DOE ARPA-E award DE-AR0001076, and ONR grant N00014-22-1-2589 (Shanbhag).

Appendix

1.1 Variational inequality problems, inclusions, and monotonicity

(a) Variational inequality problems and inclusions. Consider a variational inequality problem VI\(({\mathcal {X}}, F)\), where \({\mathcal {X}}\) is a closed and convex set and \(F: \mathbb {R}^n \rightarrow \mathbb {R}^n\) is a single-valued continuous map. Such a problem requires an \(\textbf{x}\in {\mathcal {X}}\) such that

$$\begin{aligned} ({\tilde{\textbf{x}}}-\textbf{x})^{\textsf{T}} F(\textbf{x}) \ge 0, \qquad \forall {\tilde{\textbf{x}}} \in \mathcal{X}. \end{aligned}$$

Furthermore, VI\((\mathcal{X},F)\) can also be written as an inclusion problem, i.e.

$$\begin{aligned} \textbf{x}\text { solves } \text {VI}(\mathcal{X},F) \qquad \iff \qquad 0 \in F(\textbf{x}) + {\mathcal {N}}_{\mathcal{X}}(\textbf{x}). \end{aligned}$$
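
As a quick illustration of this equivalence (not drawn from the paper), a candidate \(\textbf{x}\) solves VI\((\mathcal{X},F)\) if and only if the natural-map residual \(\Vert \textbf{x}- \Pi _{\mathcal{X}}(\textbf{x}- \gamma F(\textbf{x}))\Vert \) vanishes for any \(\gamma > 0\). The following Python sketch, with a hypothetical strongly monotone affine map and a box constraint set, computes a solution by projection iterations and checks this residual:

```python
import numpy as np

# Hypothetical instance: affine map F(x) = A x + b with A symmetric positive
# definite (hence F strongly monotone), and X = [0, 1]^2 a box.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
b = np.array([-1.0, -1.0])

def F(x):
    return A @ x + b

def proj(x):
    # Euclidean projection onto the box X = [0, 1]^2
    return np.clip(x, 0.0, 1.0)

# Projection (fixed-point) iteration: x <- Pi_X(x - gamma F(x)).
x = np.zeros(2)
for _ in range(2000):
    x = proj(x - 0.1 * F(x))

# x solves VI(X, F) iff the natural-map residual vanishes.
residual = np.linalg.norm(x - proj(x - F(x)))
```

Since the computed solution lies in the interior of the box, the normal cone reduces to \(\{0\}\) and the inclusion \(0 \in F(\textbf{x}) + {\mathcal {N}}_{\mathcal{X}}(\textbf{x})\) reduces to \(F(\textbf{x}) = 0\).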

Consider an \({\textbf{N}}\)-player game \({\mathscr {G}}\) where, for \(i=1, \cdots , {\textbf{N}}\), the ith player solves the parametrized smooth convex optimization problem

$$\begin{aligned} \min _{\textbf{x}^i \in \mathcal{X}_i} \ f_i(\textbf{x}^i,\textbf{x}^{-i}). \end{aligned}$$

Under these convexity assumptions, the set of Nash equilibria of \({\mathscr {G}}\) coincides with the solution set of the variational inequality problem VI\(({\mathcal {X}}, F)\), where \( {\mathcal {X}} \triangleq \prod _{i=1}^{\textbf{N}}\mathcal{X}_i\) and

$$\begin{aligned} F(\textbf{x}) \triangleq \begin{pmatrix} \nabla _{\textbf{x}^1} f_1(\textbf{x}) \\ \vdots \\ \nabla _{\textbf{x}^{\textbf{N}}} f_{\textbf{N}}(\textbf{x}) \end{pmatrix}.\end{aligned}$$
(39)

If f is a nonsmooth convex function, then its subdifferential \(\partial f\) is a monotone set-valued (or multi-valued) map on \(\mathcal{X}\). Accordingly, if \(f_i(\bullet , \textbf{x}^{-i})\) is not necessarily smooth, then the associated set of equilibria is given by the solution set of VI\((\mathcal{X}, T)\), where

$$\begin{aligned} T(\textbf{x}) \triangleq \prod _{i=1}^{\textbf{N}}\partial _{\textbf{x}^i} f_i(\textbf{x}^i,\textbf{x}^{-i}). \end{aligned}$$
(40)

(b) Monotonicity properties. Consider VI\((\mathcal{X}, F)\). The map F is monotone on \({\mathcal {X}}\) if \((F(\textbf{x})-F(\textbf{y}))^{\textsf{T}}(\textbf{x}-\textbf{y}) \ge 0\) for all \(\textbf{x}, \textbf{y}\in \mathcal{X}\). Monotonicity also emerges in the context of \({\textbf{N}}\)-player noncooperative games. In particular, one may view \({\mathscr {G}}\) as monotone if and only if the associated map F, defined in (39), is monotone on \(\mathcal{X}\). In the special case \({\textbf{N}}= 1\), this reduces to requiring that the gradient map of a smooth convex function f, denoted by \(\nabla f\), be monotone. The notion extends to set-valued regimes: the map T, defined in (40), arising from a noncooperative game \({\mathscr {G}}\) with nonsmooth player-specific objectives is said to be monotone if for any \(\textbf{x}, \textbf{y}\in \mathcal{X}\) and any \(u \in T(\textbf{x})\) and \(v \in T(\textbf{y})\), we have \((u-v)^{\textsf{T}}(\textbf{x}-\textbf{y}) \ge 0\).
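
For an affine map \(F(\textbf{x}) = A\textbf{x}+ b\), monotonicity on \(\mathbb {R}^n\) is equivalent to positive semidefiniteness of \(A + A^{\textsf{T}}\); A itself need not be symmetric. A small Python sketch (with an illustrative, non-symmetric A) samples the defining inequality:

```python
import numpy as np

# Hypothetical affine map F(x) = A x; monotone on R^n iff A + A^T is positive
# semidefinite. Here A + A^T = [[2, 2], [2, 2]] has eigenvalues 0 and 4.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])

def F(x):
    return A @ x

rng = np.random.default_rng(0)
ok = True
for _ in range(1000):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    # Defining inequality: (F(x) - F(y))^T (x - y) >= 0
    if (F(x) - F(y)) @ (x - y) < -1e-12:
        ok = False
```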

(c) Monotonicity in the context of single-leader single-follower games. Consider a single-leader single-follower problem in which the follower’s objective \(g(\textbf{x},\bullet )\) is a strongly convex function on \(\mathcal{Y}\), a closed and convex set, and \(\mathcal{X}\) is also closed and convex:

$$\begin{aligned} \min _{\textbf{x}\in \mathcal{X}} \ f(\textbf{x},\textbf{y}(\textbf{x})), \quad \text { where } \end{aligned}$$
(Leader)
$$\begin{aligned} \textbf{y}(\textbf{x}) = \text {arg}\min _{\textbf{y}\in \mathcal{Y}} \ g(\textbf{x},\textbf{y}). \end{aligned}$$
(Follower)

There are many instances in which \(f(\bullet , \textbf{y}(\bullet ))\) is a convex function on \(\mathcal{X}\) (see [5, 6, 10, 91]), implying that \(\partial f(\bullet , \textbf{y}(\bullet ))\) is a monotone map on \(\mathcal{X}\). In other words, the implicit problem in the leader-level decisions is characterized by a convex objective with a monotone subdifferential map. Consider, however, the problem posed in the full space of \(\textbf{x}\) and \(\textbf{y}\), i.e.

$$\begin{aligned} \begin{aligned} \min _{\textbf{x}\in \mathcal{X}, \textbf{y}} \ f(\textbf{x},\textbf{y})& \\ \text {subject to } \ ({\tilde{\textbf{y}}}-\textbf{y})^{\textsf{T}} \nabla _{\textbf{y}} g(\textbf{x},\textbf{y})&\ge 0, \qquad \forall {\tilde{\textbf{y}}} \ \in \ \mathcal{Y}. \end{aligned} \end{aligned}$$

In the full space of \(\textbf{x}\) and \(\textbf{y}\), this is indeed a nonconvex optimization problem [15]; however, the implicit problem in \(\textbf{x}\) may be convex under suitable assumptions, in which case the resulting subdifferential map is monotone.
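
A hypothetical one-dimensional instance (not from the paper) makes the implicit reformulation concrete: the follower's strongly convex problem admits a closed-form solution \(\textbf{y}(\textbf{x})\), and the leader then minimizes the single-level implicit objective \(f(\textbf{x},\textbf{y}(\textbf{x}))\) directly:

```python
# Follower: y(x) = argmin_{y in [0, 1]} (y - x)^2, strongly convex in y,
# with the closed form y(x) = projection of x onto [0, 1].
def follower(x):
    return min(max(x, 0.0), 1.0)

# Leader's implicit objective f(x, y(x)) with f(x, y) = (x - 2)^2 + (x - y)^2;
# with y = y(x) both terms are convex in x, so this implicit problem is convex.
def implicit_obj(x):
    y = follower(x)
    return (x - 2.0) ** 2 + (x - y) ** 2

# Minimize the implicit objective over X = [0, 3] by grid search; the exact
# minimizer is x = 1.5 (balancing (x - 2)^2 against (x - 1)^2 for x >= 1).
xs = [3.0 * i / 10000 for i in range(10001)]
x_star = min(xs, key=implicit_obj)
```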

1.2 Proofs

Proof of Proposition 1

In both cases, it is not difficult to see that H is a monotone map, where \( H(\textbf{x}) \triangleq \prod _{i=1}^{\textbf{N}}\partial _{\textbf{x}^i} h_i(\textbf{x}^i,\textbf{x}^{-i})\). Consequently, if \(T(\textbf{x})=G(\textbf{x})+H(\textbf{x}) \), where \(G(\textbf{x}) \triangleq \prod _{i=1}^{\textbf{N}}\partial _{\textbf{x}^i} g_i(\textbf{x}^i,\textbf{x}^{-i})\), then T is a monotone map, being the sum of the monotone maps G and H. \(\square \)

Proof of Proposition 2

For (a), potentiality follows by noting that for any \(\textbf{x}^i, {\tilde{\textbf{x}}}^i \in \mathcal{X}_i\) and \(\textbf{x}^{-i} \in \mathcal{X}^{-i}\), we have

$$\begin{aligned} {\widehat{P}}(\textbf{x}^i,\textbf{x}^{-i}) - {\widehat{P}}({\tilde{\textbf{x}}}^i,\textbf{x}^{-i})&= P(\textbf{x}^i,\textbf{x}^{-i}) - P({\tilde{\textbf{x}}}^i,\textbf{x}^{-i}) + h_i(\textbf{x}^i,\textbf{y}^i(\textbf{x}^i)) - h_i({\tilde{\textbf{x}}}^i,\textbf{y}^i({\tilde{\textbf{x}}}^i)) \\&= g_i(\textbf{x}^i,\textbf{x}^{-i})+h_i(\textbf{x}^i,\textbf{y}^i(\textbf{x}^i,\textbf{x}^{-i}))\\&\quad -(g_i({\tilde{\textbf{x}}}^i,\textbf{x}^{-i})+h_i({\tilde{\textbf{x}}}^i,\textbf{y}^i({\tilde{\textbf{x}}}^i,\textbf{x}^{-i}))). \end{aligned}$$

For (b), proceeding in a similar fashion, it follows that for any \(\textbf{x}^i, {\tilde{\textbf{x}}}^i \in \mathcal{X}_i\) and \(\textbf{x}^{-i} \in \mathcal{X}^{-i}\), we have

$$\begin{aligned} {\widehat{P}}(\textbf{x}^i,\textbf{x}^{-i}) - {\widehat{P}}({\tilde{\textbf{x}}}^i,\textbf{x}^{-i})&= P(\textbf{x}^i,\textbf{x}^{-i}) - P({\tilde{\textbf{x}}}^i,\textbf{x}^{-i}) + h(\textbf{x}^i,\textbf{x}^{-i}) - h({\tilde{\textbf{x}}}^i,\textbf{x}^{-i})\\&= g_i(\textbf{x}^i,\textbf{x}^{-i})+h_i(\textbf{x}^i,\textbf{y}^i(\textbf{x}^i,\textbf{x}^{-i}))\\&\quad -(g_i({\tilde{\textbf{x}}}^i,\textbf{x}^{-i})+h_i({\tilde{\textbf{x}}}^i,\textbf{y}^i({\tilde{\textbf{x}}}^i,\textbf{x}^{-i}))). \end{aligned}$$

\(\square \)

Proof of Lemma 2

Let \(J_2\) denote a positive integer such that \((1-2c\alpha _j) \ge 0\) for \(j \ge J_2\), i.e., \(J_2 = \lceil 2c\theta \rceil \ge 2c\theta .\) Let \(J \triangleq \max \{J_1,J_2\}\) and \({\mathcal {D}} \triangleq \max \left\{ \tfrac{{\mathcal {M}}^2 \theta ^2}{2(2c\theta -1)},J {\mathcal {A}}_J\right\} .\) For \(j = J\), the inductive hypothesis holds trivially. If it holds for some \(j \ge J\), then

$$\begin{aligned} {\mathcal {A}}_{j+1}&\le (1-2c\alpha _j) {\mathcal {A}}_j + \tfrac{\alpha _j^2 {\mathcal {M}}^2}{2} \le (1-2c\alpha _j) \tfrac{{\mathcal {D}}}{j} + \tfrac{\alpha _j^2 {\mathcal {M}}^2}{2}\\&= (1-2c\alpha _j) \tfrac{{\mathcal {D}}}{j} + \tfrac{2(2c\theta -1)}{2j}\tfrac{\theta ^2 {\mathcal {M}}^2}{2(2c\theta -1)j} \le (1-2c\alpha _j) \tfrac{{\mathcal {D}}}{j} + \tfrac{2c\theta -1}{j}\tfrac{{\mathcal {D}}}{j} \\&\le (1-\tfrac{2c\theta }{j}) \tfrac{{\mathcal {D}}}{j} + \tfrac{2c\theta -1}{j}\tfrac{{\mathcal {D}}}{j} = \tfrac{{\mathcal {D}}}{j} - \tfrac{2c\theta {\mathcal {D}}}{j^2} + \tfrac{2c\theta {\mathcal {D}}}{j^2} - \tfrac{{\mathcal {D}}}{j^2} \le \tfrac{{\mathcal {D}}}{j} - \tfrac{{\mathcal {D}}}{j(j+1)} = \tfrac{{\mathcal {D}}}{(j+1)}. \end{aligned}$$

It remains to get a bound on \({\mathcal {A}}_J\).

$$\begin{aligned} {\mathcal {A}}_J&\le (1-2c\alpha _{J-1}){\mathcal {A}}_{{J-1}} + \tfrac{\alpha _{J-1}^2 {\mathcal {M}}^2}{2} \le {\mathcal {A}}_{{J-1}} + \tfrac{\alpha _{J-1}^2 {\mathcal {M}}^2}{2}\nonumber \\&\le \left( (1-2c\alpha _{J-2}) {\mathcal {A}}_{J-2} + \tfrac{\alpha _{J-2}^2{\mathcal {M}}^2}{2}\right) + \tfrac{\alpha _{J-1}^2{\mathcal {M}}^2}{2} \le {\mathcal {A}}_{{1}} + {\mathcal {M}}^2\sum _{\ell = 1}^{J-1} \tfrac{\alpha _{\ell }^2 }{2} \nonumber \\&\le {\mathcal {A}}_{{1}} + \tfrac{{\mathcal {M}}^2\theta ^2\pi ^2}{12} \triangleq {{\mathcal {A}}_1 + B {\mathcal {M}}^2}, {\text { since } \sum _{\ell =1}^{J-1} \tfrac{1}{\ell ^2} \le \tfrac{ \pi ^2}{6}.} \end{aligned}$$
(41)

Consequently, for \(j \ge J\), \({\mathcal {A}}_{j} \le \frac{\max \left\{ \tfrac{{\mathcal {M}}^2\theta ^2}{2(2c\theta -1)}, J {\mathcal {A}}_{J}\right\} }{2j} \le \frac{\tfrac{{\mathcal {M}}^2\theta ^2}{2(2c\theta -1)}+ J ({{\mathcal {A}}_{{1}} + B{\mathcal {M}}^2})}{2j}.\) \(\square \)
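
As a sanity check on the \({\mathcal {O}}(1/j)\) rate asserted by Lemma 2, one may iterate the recursion \({\mathcal {A}}_{j+1} = (1-2c\alpha _j){\mathcal {A}}_j + \alpha _j^2{\mathcal {M}}^2/2\) with equality; the parameter values below are illustrative:

```python
# Iterate the Lemma 2 recursion with equality:
#   A_{j+1} = max(0, 1 - 2 c alpha_j) A_j + alpha_j^2 M^2 / 2,  alpha_j = theta / j;
# the clamp at zero mirrors the requirement (1 - 2 c alpha_j) >= 0 for j >= J_2.
c, theta, M = 0.5, 3.0, 1.0        # 2 c theta = 3 > 1, as the lemma requires
A = 1.0                            # A_1
N = 20000
for j in range(1, N):
    alpha = theta / j
    A = max(0.0, 1.0 - 2.0 * c * alpha) * A + 0.5 * alpha ** 2 * M ** 2

# j * A_j should approach theta^2 M^2 / (2 (2 c theta - 1)) = 2.25,
# consistent with the O(1/j) bound.
scaled = N * A
```

The scaled iterate stabilizes near \(\theta ^2{\mathcal {M}}^2/(2(2c\theta -1)) = 2.25\), matching the leading term of the bound.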

Proof of Proposition 5

Throughout this proof, we denote \(J_{\lambda }^T(\textbf{x}^k)\) by \(\textbf{z}^{k,*}\) to ease the exposition. Consider the update rule given by (SA) with \(\textbf{z}_{0} = \textbf{x}^k\). We have that

$$\begin{aligned} \Vert \textbf{z}_{j+1}-\textbf{z}^{k,*}\Vert ^2 \! =\! \Vert \textbf{z}_j \!-\! \alpha _j u_j - \textbf{z}^{k,*}\Vert ^2 \!=\! \Vert \textbf{z}_j - \textbf{z}^{k,*}\Vert ^2 + \alpha _j^2 \Vert u_j\Vert ^2 \!-\! 2\alpha _j u_j^{\textsf{T}}(\textbf{z}_j-\textbf{z}^{k,*}). \end{aligned}$$

Taking expectations on both sides, we obtain that

$$\begin{aligned}&{\mathbb {E}}[\Vert \textbf{z}_{j+1} - \textbf{z}^{k,*}\Vert ^2 \mid {\mathcal {F}}_{k,j}] = \Vert \textbf{z}_j - \textbf{z}^{k,*}\Vert ^2 + \alpha _j^2 {\mathbb {E}}[\Vert u_j\Vert ^2 \mid {\mathcal {F}}_{k,j}]\\&\qquad - 2\alpha _j {\mathbb {E}}[u_j^{\textsf{T}}(\textbf{z}_j-\textbf{z}^{k,*}) \mid {\mathcal {F}}_{k,j}] \\&\quad = \Vert \textbf{z}_j - \textbf{z}^{k,*}\Vert ^2 + \alpha _j^2 {\mathbb {E}}[\Vert u_j\Vert ^2 \mid {\mathcal {F}}_{k,j}] - 2\alpha _j{\mathbb {E}}[u_j \mid {\mathcal {F}}_{k,j}]^{\textsf{T}}(\textbf{z}_j-\textbf{z}^{k,*}) \\&\quad = \Vert \textbf{z}_j - \textbf{z}^{k,*}\Vert ^2 + \alpha _j^2 {\mathbb {E}}[\Vert u_j\Vert ^2\mid {\mathcal {F}}_{k,j}] - 2\alpha _j {\bar{u}}_j^{\textsf{T}}(\textbf{z}_j-\textbf{z}^{k,*}) \\&\quad - 2\underbrace{{\mathbb {E}}[\alpha _j(u_j-{\bar{u}}_j)^{\textsf{T}}(\textbf{z}_j-\textbf{z}^{k,*})\mid {\mathcal {F}}_{k,j}]}_{\ = \ 0}\\&\quad = \Vert \textbf{z}_j - \textbf{z}^{k,*}\Vert ^2 + \alpha _j^2 {\mathbb {E}}[\Vert u_j\Vert ^2 \mid {\mathcal {F}}_{k,j}] - 2\alpha _j ({\bar{u}}_j-{\bar{u}}_{k}^*)^{\textsf{T}}(\textbf{z}_j-\textbf{z}^{k,*}), \end{aligned}$$

where \({\bar{u}}_j \in F_k(\textbf{z}_j)\), \({\mathbb {E}}[\alpha _j(u_j-{\bar{u}}_j)^{\textsf{T}}(\textbf{z}_j-\textbf{z}^{k,*})\mid {\mathcal {F}}_{k,j}] = \alpha _j({\mathbb {E}}[u_j \mid {\mathcal {F}}_{k,j}]-{\bar{u}}_j)^{\textsf{T}}(\textbf{z}_j-\textbf{z}^{k,*}) = 0\), \(0 = {\bar{u}}_k^* \in F_k(\textbf{z}^{k,*})\) and \(({\bar{u}}_j-{\bar{u}}_{k}^*)^{\textsf{T}}(\textbf{z}_j-\textbf{z}^{k,*}) \ge \tfrac{1}{\lambda }\Vert \textbf{z}_j-\textbf{z}^{k,*}\Vert ^2\) by the \(\tfrac{1}{\lambda }\)-strong monotonicity of \(F_k\). Consequently, we have that

$$\begin{aligned}&{\mathbb {E}}[\Vert \textbf{z}_{j+1} - \textbf{z}^{k,*}\Vert ^2 \mid {\mathcal {F}}_{k,j}] \le (1-\tfrac{2\alpha _j}{\lambda })\Vert \textbf{z}_j - \textbf{z}^{k,*}\Vert ^2 + \alpha _j^2 {\mathbb {E}}[\Vert u_j\Vert ^2 \mid {\mathcal {F}}_{k,j}] \\&\quad \overset{{(5)}}{\le } (1-\tfrac{2\alpha _j}{\lambda })\Vert \textbf{z}_j - \textbf{z}^{k,*}\Vert ^2 + \alpha _j^2 (4M_1^2 \Vert \textbf{x}^k\Vert ^2 + 2M_2^2 + (4M_1^2+\tfrac{2}{\lambda ^2})\Vert \textbf{z}_j-\textbf{x}^k\Vert ^2) \\&\quad \le (1-\tfrac{2\alpha _j}{\lambda })\Vert \textbf{z}_j - \textbf{z}^{k,*}\Vert ^2 + \alpha _j^2 (({8}M_1^2+\tfrac{{4}}{\lambda ^2}) \Vert \textbf{z}_j-\textbf{z}^{k,*}\Vert ^2 + 4M_1^2\Vert \textbf{x}^k\Vert ^2 \\&\qquad + 2M_2^2 + (8M_1^2+\tfrac{4}{\lambda ^2})\Vert \textbf{z}^{k,*}-\textbf{x}^k\Vert ^2) \\&\quad \le (1-2\alpha _j(\tfrac{1}{\lambda }-\alpha _j (4M_1^2+\tfrac{2}{\lambda ^2}))\Vert \textbf{z}_j - \textbf{z}^{k,*}\Vert ^2\\&\qquad + \alpha _j^2 (4M_1^2\Vert \textbf{x}^k\Vert ^2+ 2M_2^2 + (8M_1^2+\tfrac{4}{\lambda ^2})\Vert \textbf{z}^{k,*}-\textbf{x}^k\Vert ^2) \\&\quad \le (1-\tfrac{\alpha _j}{\lambda })\Vert \textbf{z}_j - \textbf{z}^{k,*}\Vert ^2+ \alpha _j^2 (4M_1^2\Vert \textbf{x}^k\Vert ^2+ 2M_2^2 + (8M_1^2+\tfrac{4}{\lambda ^2})\Vert \textbf{z}^{k,*}-\textbf{x}^k\Vert ^2), \end{aligned}$$

where the last inequality follows from \(\alpha _j (8M_1^2+\tfrac{4}{\lambda ^2}) \le \tfrac{1}{2\lambda }\) for \(j \ge J_1\), where \(J_1 \triangleq \lceil 2\lambda \theta (8M_1^2+\tfrac{4}{\lambda ^2})\rceil \). Taking expectations conditioned on \({\mathcal {F}}_k\) and recalling that \({\mathbb {E}}[{\mathbb {E}}[\Vert \textbf{z}_{j+1} - \textbf{z}^{k,*}\Vert ^2 \mid {\mathcal {F}}_{k,j}]\mid {\mathcal {F}}_k] = {\mathbb {E}}[\Vert \textbf{z}_{j+1} - \textbf{z}^{k,*}\Vert ^2\mid {\mathcal {F}}_k]\) by the tower property of conditional expectation (since \({\mathcal {F}}_k \subset {\mathcal {F}}_{k,j}\)), we obtain the following inequality for \(j \ge J_1\):

$$\begin{aligned} {\mathbb {E}}[\Vert \textbf{z}_{j+1} - \textbf{z}^{k,*}\Vert ^2 \mid {\mathcal {F}}_k]&\le (1-\tfrac{\alpha _j}{\lambda }){\mathbb {E}}[\Vert \textbf{z}_j - \textbf{z}^{k,*}\Vert ^2\mid {\mathcal {F}}_k]\\&\quad + \alpha _j^2 (4M_1^2{\mathbb {E}}[\Vert \textbf{x}^k\Vert ^2 \mid {\mathcal {F}}_k]+ 2M_2^2 \\&\quad + (8M_1^2+\tfrac{4}{\lambda ^2}){\mathbb {E}}[\Vert \textbf{z}^{k,*}-\textbf{x}^k\Vert ^2 \mid {\mathcal {F}}_k]). \end{aligned}$$
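The chain of inequalities above is the standard mean-squared error recursion for the stochastic approximation update \(\textbf{z}_{j+1} = \textbf{z}_j - \alpha _j u_j\) applied to the \(\tfrac{1}{\lambda }\)-strongly monotone map \(F_k\). A minimal one-dimensional simulation, with an assumed map and Gaussian noise purely for illustration, shows the resulting decay of the error:

```python
import random

# Stochastic approximation z_{j+1} = z_j - alpha_j * u_j, where u_j is an
# unbiased noisy evaluation of the strongly monotone map F(z) = z / lam
# (whose zero is z* = 0). All constants below are illustrative.
random.seed(0)
lam, theta, trials = 2.0, 3.0, 2000

def mean_sq_error(steps):
    # Estimate E||z_{steps+1} - z*||^2 over independent trials.
    total = 0.0
    for _ in range(trials):
        z = 5.0                                   # z_1, distance 5 from z*
        for j in range(1, steps + 1):
            alpha = theta / j
            u = z / lam + random.gauss(0.0, 1.0)  # unbiased sample of F(z)
            z -= alpha * u
        total += z * z
    return total / trials

e_short, e_long = mean_sq_error(250), mean_sq_error(1000)
print(e_long < e_short)  # longer runs give smaller mean-squared error
```

With \(\theta /\lambda > 1\), the theory predicts an \(O(1/j)\) mean-squared error, so quadrupling the horizon should roughly quarter the error.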

Consequently, if \(\alpha _j = \tfrac{\theta }{j}\), we have a recursion given by

$$\begin{aligned} {\mathcal {A}}_{j+1} \le (1-2c \alpha _j) {\mathcal {A}}_j + \tfrac{\alpha _j^2 {\mathcal {M}}^2}{2}, \quad j \ge J_1 \end{aligned}$$

where \({\mathcal {A}}_j \triangleq {\mathbb {E}}[\Vert \textbf{z}_{j}-\textbf{z}^{k,*}\Vert ^2 \mid {\mathcal {F}}_k]\), \(c = \tfrac{1}{2\lambda }\), \(\alpha _j = \tfrac{\theta }{j}\), and \({\mathcal {M}}^2/2 = 4M_1^2{\mathbb {E}}[\Vert \textbf{x}^k\Vert ^2\mid {\mathcal {F}}_k]+ 2M_2^2 + (8M_1^2+\tfrac{4}{\lambda ^2}){\mathbb {E}}[\Vert \textbf{z}^{k,*}-\textbf{x}^k\Vert ^2\mid {\mathcal {F}}_k]\). By Lemma 2, we have that

$$\begin{aligned} {\mathcal {A}}_j \le \frac{\tfrac{{\mathcal {M}}^2\theta ^2}{2(2c\theta -1)}+ J {({\mathcal {A}}_1+B{\mathcal {M}}^2)}}{2j}, \quad j \ge J \end{aligned}$$
(42)

where \(J \triangleq \max \{J_1,J_2\}\), \(J_2 \triangleq \lceil 2c\theta \rceil \), and \({B \triangleq \tfrac{\theta ^2 \pi ^2}{12}}\). Since \({\mathcal {A}}_1 = {\mathbb {E}}[\Vert \textbf{z}^{k,*}-\textbf{x}^k\Vert ^2 \mid {\mathcal {F}}_k]\), the numerator in (42) may be further bounded as follows.

$$\begin{aligned}&\tfrac{{\mathcal {M}}^2\theta ^2}{2(2c\theta -1)}+J {({\mathcal {A}}_1+B{\mathcal {M}}^2)} = \left( \tfrac{\theta ^2 }{2(2c\theta -1)}+J {B}\right) {\mathcal {M}}^2 + {J{\mathcal {A}}_1} \nonumber \\&\quad \le \left( \tfrac{\theta ^2}{2(2c\theta -1)}+J{B}\right) (8M_1^2\Vert \textbf{x}^k\Vert ^2+ 4M_2^2 + (16M_1^2+\tfrac{8}{\lambda ^2}){\mathbb {E}}[\Vert \textbf{z}^{k,*}-\textbf{x}^k\Vert ^2\mid {\mathcal {F}}_k]) \nonumber \\&\qquad + J{\mathbb {E}}[\Vert \textbf{z}^{k,*}-\textbf{x}^k\Vert ^2 \mid {\mathcal {F}}_k] \nonumber \\&\quad = \left( \tfrac{\theta ^2}{2(2c\theta -1)}+J {B}\right) (8M_1^2\Vert \textbf{x}^k\Vert ^2+ 4M_2^2) \nonumber \\&\qquad + \left( \left( \tfrac{\theta ^2}{2(2c\theta -1)}+J{B}\right) \left( 16M_1^2+\tfrac{8}{\lambda ^2}\right) +J\right) {\mathbb {E}}[\Vert \textbf{z}^{k,*}-\textbf{x}^k\Vert ^2\mid {\mathcal {F}}_k]. \end{aligned}$$
(43)
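To make the role of Lemma 2 concrete, the recursion \({\mathcal {A}}_{j+1} \le (1-2c\alpha _j){\mathcal {A}}_j + \tfrac{\alpha _j^2{\mathcal {M}}^2}{2}\) with \(\alpha _j = \theta /j\) can be iterated numerically. The sketch below uses illustrative constants (\(c=1/2\), \(\theta =3\), \({\mathcal {M}}^2=1\), \({\mathcal {A}}_1=10\)) and checks that \(j\,{\mathcal {A}}_j\) stays bounded, i.e. \({\mathcal {A}}_j = O(1/j)\) as asserted in (42):

```python
# Iterate A_{j+1} <= (1 - 2*c*alpha_j)*A_j + alpha_j**2 * M2 / 2 with
# alpha_j = theta / j. Constants are illustrative; the coefficient is
# clamped at zero, which preserves the upper bound since A_j >= 0.
c, theta, M2, A = 0.5, 3.0, 1.0, 10.0
values = []
for j in range(1, 100001):
    alpha = theta / j
    A = max(0.0, 1.0 - 2.0 * c * alpha) * A + alpha ** 2 * M2 / 2.0
    values.append(A)                      # values[0] holds A_2, and so on
# j * A_j remains bounded, so A_j decays at the O(1/j) rate of (42)
print(max(j * a for j, a in enumerate(values, start=2)))
```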

We have that

$$\begin{aligned}&{\mathbb {E}}[\Vert \textbf{z}^{k,*}-\textbf{x}^k\Vert ^2 \mid {\mathcal {F}}_k] \le 2\Vert \textbf{x}^k - \textbf{x}^*\Vert ^2 + 2{\mathbb {E}}[\Vert \textbf{z}^{k,*} - \textbf{x}^*\Vert ^2 \mid {\mathcal {F}}_k] \\&\qquad = 2\Vert \textbf{x}^k - \textbf{x}^*\Vert ^2 + 2{\mathbb {E}}[\Vert J_{\lambda }^T(\textbf{x}^k) - \textbf{x}^*\Vert ^2 \mid {\mathcal {F}}_k] \\&\qquad \le 2\Vert \textbf{x}^k - \textbf{x}^*\Vert ^2 + 2\Vert \textbf{x}^k - \textbf{x}^*\Vert ^2 \le 8 \Vert \textbf{x}^k\Vert ^2+8 \Vert \textbf{x}^*\Vert ^2, \end{aligned}$$

where the second inequality follows from \(\Vert J_{\lambda }^T(\textbf{x}^k)-\textbf{x}^*\Vert = \Vert J^T_{\lambda }(\textbf{x}^k)-J^T_{\lambda }(\textbf{x}^*)\Vert \le \Vert \textbf{x}^k-\textbf{x}^*\Vert .\) Consequently, from (43), \(\tfrac{{\mathcal {M}}^2\theta ^2}{2(2c\theta -1)}+J {({\mathcal {A}}_1+B{\mathcal {M}}^2)} \le \nu _1^2 \Vert \textbf{x}^k\Vert ^2 + \nu _2^2\), where

$$\begin{aligned} \nu _1^2&\triangleq \left( \left( \tfrac{\theta ^2}{2(2c\theta -1)}+JB\right) \left( {136}M_1^2+\tfrac{64}{\lambda ^2}\right) +{8J}\right) \text { and } \\ \nu _2^2&\triangleq 4\left( \tfrac{\theta ^2}{2(2c\theta -1)}+J {B}\right) M_2^2 + 8\left( \left( \tfrac{\theta ^2}{2(2c\theta -1)}+J{B}\right) \left( 16M_1^2+\tfrac{8}{\lambda ^2}\right) +J\right) \Vert \textbf{x}^*\Vert ^2. \end{aligned}$$

\(\square \)
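The second inequality in the last display relied on the nonexpansiveness of the resolvent \(J^T_{\lambda }\). This property is easy to verify numerically for an assumed affine monotone map \(T(\textbf{x}) = A\textbf{x}+b\) with \(A\) positive semidefinite; the sketch below is illustrative and not tied to the operators of the paper:

```python
import numpy as np

# Resolvent J_lam = (I + lam*T)^{-1} of the monotone affine map
# T(x) = A x + b, where A = M M^T is positive semidefinite.
# Nonexpansiveness: ||J(x) - J(y)|| <= ||x - y|| for all x, y.
rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n))
A = M @ M.T
b = rng.standard_normal(n)
lam = 0.7

def resolvent(x):
    # z = J_lam(x) solves z + lam*(A z + b) = x
    return np.linalg.solve(np.eye(n) + lam * A, x - lam * b)

x, y = rng.standard_normal(n), rng.standard_normal(n)
lhs = np.linalg.norm(resolvent(x) - resolvent(y))
rhs = np.linalg.norm(x - y)
print(lhs <= rhs)  # prints True: the resolvent is nonexpansive
```

Here the resolvent differences satisfy \(J(x)-J(y) = (I+\lambda A)^{-1}(x-y)\), and the spectral norm of \((I+\lambda A)^{-1}\) is at most one, which is exactly the contraction used above.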

Proposition 15

For \(i=1,\cdots , {\textbf{N}}\), consider the problem (Player\(_i(\textbf{x}^{-i})\)) and suppose that (a.i) and (a.ii) hold.

(a.i) \(\mathcal{X}_i \subseteq \mathbb {R}^{n_i}\) and \(\mathcal{Y}_i \subseteq \mathbb {R}^{m_i}\) are closed and convex sets.

(a.ii) \(F_i(\textbf{x},\bullet ,\omega )\) is a \(\mu _F(\omega )\)-strongly monotone and \(L_F(\omega )\)-Lipschitz continuous map on \(\mathcal{Y}\) uniformly in \(\textbf{x}\in \mathcal{X}\) for every \(\omega \in \Omega \), and there exist scalars \(\mu _F,L_F > 0\) such that \(\inf _{\omega \in \Omega }\mu _F(\omega ) \ge \mu _F\) and \(\sup _{\omega \in \Omega }L_F(\omega ) \le L_F\).

Suppose \({\tilde{f}}_i(\textbf{x},\textbf{y}_i,\omega )\) is continuously differentiable on \(\mathcal{C}\times \mathbb {R}^{m_i}\) for every \(\omega \in \Omega \), where \(\mathcal{C}\) is an open set containing \(\mathcal{X}\) and \(\mathcal{X}\) is bounded. Then the function \(f_i^\textbf{imp}\), defined as \({f_i^{\textbf{imp}}(\textbf{x})} \triangleq {\mathbb {E}}[{\tilde{f}}_i(\textbf{x},\textbf{y}_i(\textbf{x},\omega ), \omega )]\), is Lipschitz continuous and directionally differentiable on \(\mathcal{X}\).
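Proposition 15 can be illustrated on a toy instance where the inner solution \(\textbf{y}_i(\textbf{x},\omega )\) is available in closed form. The sketch below assumes a scalar inner map \(F(x,y,\omega ) = (\mu +\omega )y - x\), which is \(\mu \)-strongly monotone in \(y\); the constants and the choice of \({\tilde{f}}_i\) are purely illustrative:

```python
import numpy as np

# Toy implicit objective f_imp(x) = E[ f(x, y(x,w), w) ], where y(x, w)
# solves the strongly monotone inner problem (mu + w)*y - x = 0, i.e.
# y(x, w) = x / (mu + w). With f(x, y, w) = (x - y)^2, f_imp is
# Lipschitz on any bounded set, matching the conclusion above.
rng = np.random.default_rng(1)
mu = 1.0
w = rng.uniform(0.0, 1.0, size=100_000)    # samples of the randomness

def f_imp(x):
    y = x / (mu + w)                       # closed-form inner solution
    return float(np.mean((x - y) ** 2))    # Monte Carlo expectation

# Difference quotients stay bounded on [-2, 2]: a Lipschitz-type check
xs = np.linspace(-2.0, 2.0, 41)
quotients = [abs(f_imp(xs[i + 1]) - f_imp(xs[i])) / (xs[i + 1] - xs[i])
             for i in range(len(xs) - 1)]
print(max(quotients) < 10.0)
```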

Cite this article

Cui, S., Shanbhag, U.V. On the computation of equilibria in monotone and potential stochastic hierarchical games. Math. Program. 198, 1227–1285 (2023). https://doi.org/10.1007/s10107-022-01897-2
