1 Introduction

A wide range of settings, such as communication, environmental, macroeconomic, epidemiological, transportation, and energy systems, are characterized by dynamical systems in which large populations interact in either a competitive or a cooperative way. In recent decades, there has been increasing interest in studying the interaction within these systems using the framework of dynamic games, that is, by appropriately modeling the underlying dynamics of the system and the objective functions of the agents involved, and by specifying the information structure (often including the specification of some assumed equilibrium concept). In environmental economics and macroeconomic policy coordination, references and examples of using dynamic games to model policy coordination problems can be found, e.g., in the books [6, 15, 18, 30]. In engineering, applications of this theory are reported, e.g., in areas like finance, robust optimal control, and pursuit-evasion problems. Particularly in the area of robust optimal control, which can be modeled as a problem where the controller fights a “nature” that produces worst-case disturbances, the theory of linear quadratic differential games has been extensively developed, see, e.g., [3, 8, 20, 26]. Using this framework, engineering applications are reported from diverse areas: robot formation control [16]; interconnection of electric power systems [26]; multipath routing in communication networks [2, 23]; solving mixed \(H_2{/}H_{\infty }\) control problems [22]; and military operations of autonomous vehicles [21]. Furthermore, in games involving a large number of players, mean-field approximation techniques have been used to arrive at decentralized strategies that constitute an \(\epsilon \)-Nash equilibrium for the original game [17].

In linear quadratic differential games, the “real world” is modeled/approximated by a set of linear differential equations, and the objectives are modeled using quadratic functions. Assuming that players do not cooperate and look for linear feedback strategies from which a unilateral deviation leads to a worse performance, one arrives at the study of so-called linear feedback Nash equilibria (FNE). The resulting equilibrium strategies have the important property that they are strongly time consistent, a property which, e.g., does not hold under an open-loop information structure (see, e.g., [4, Chapter 6.5]).

Linear quadratic feedback Nash differential games have been considered by many authors, and their study dates back to the seminal work of Starr and Ho [32]. For a fixed finite planning horizon, there exists at most one FNE (see, e.g., [24]). For an infinite planning horizon, the affine-quadratic differential game is solved in [13]. Finding the FNE in this game involves solving a set of coupled algebraic Riccati-type equations (ARE). Only for some special cases of these equations have conditions been reported under which FNE exist (see, e.g., [1, 8, 29]). It has been shown (see, e.g., [8, 28]) that the number of equilibria can vary between zero and infinity. Many numerical approaches to finding a solution of the ARE are reported in the literature (see, e.g., [10] for an overview). Usually, these approaches find only one solution (if convergence occurs), and it remains unclear whether more solutions exist. More recently, in [11] and [31], algorithms are reported which, in principle, are capable of finding all solutions. These algorithms seem to work if the number of players is not too large. Engwerda [11] uses an eigenvalue-based approach to find all solutions for the general scalar game, whereas [31] uses techniques from algebraic geometry to recast the problem of computing all FNE into that of finding the zeros of a single polynomial function in a scalar variable.

Particularly in the context of large-scale systems, it seems worthwhile to have a conclusive algorithm that determines whether no, or a unique, equilibrium exists and that calculates this solution, if it exists, within a reasonable computation time. Apart from direct applications, this may be helpful, e.g., in assessing the costs/gains of different information/cooperation structures (see, e.g., [5]), or in finding areas where approximate solutions of certain nonlinear differential games exist (see, e.g., [27]). Following the analysis performed in [7] (see also [8, Chapter 8.4]), an exhaustive description is given in [12] of conditions under which the simplest linear scalar game, where the performance criterion is a strictly positive quadratic function of both states and controls, has either no, one, or multiple FNE. In this paper, we use this approach to present a numerical algorithm that is capable of answering, for these games, the question whether a unique FNE exists and, if so, of calculating it. Using standard MATLAB code, we show in a simulation study that for systems with up to 100,000 players the FNE can be calculated within 1 s. This is because the main body of the algorithm just requires the calculation of the zero of a scalar function, where the bounds of the search interval are known.

So, whereas, e.g., [9] and [11] concentrate on finding all FNE in general scalar (disturbed) games with a small number of players (due to computation time constraints), this paper concentrates in the first instance on the question whether a special subclass of these games with a large number of players has a unique equilibrium. Furthermore, if this question is answered affirmatively, this equilibrium can be calculated using, e.g., a brute-force halving technique, which keeps the computation time small. For instance, in the example dealt with in Sect. 5, the calculation time of the equilibrium (for a reasonable accuracy) remains below 1 s for a game that has 100,000 players. The numerical algorithms that are established use the conditions derived in [12]. They are summarized in Sect. 3.

The outline of the paper is as follows. Section 2 recalls from [8] the basic model and approach. In Sect. 3, we recall from [12] the conditions under which the game will have either no, one, or multiple equilibria. Based on the results presented in Sects. 2 and 3, we develop in Sect. 4 numerical algorithms to calculate the unique FNE. Details on proofs of Sect. 4 are presented in the separate “Appendix 1”. Section 5 illustrates the algorithm for a simple oligopoly game. Finally, Sect. 6 concludes.

2 Preliminaries

In this paper, we consider the problem where N players try to minimize their performance criterion in a noncooperative setting. Each player controls a different set of inputs to a single system. The system is described by the following scalar differential equation

$$\begin{aligned} \dot{x}(t) = ax(t) + \sum _{i=1}^{N} b_iu_i(t),\quad x(0)=x_0. \end{aligned}$$
(1)

Here x is the state of the system, \(u_{i}\) is a (control) variable player i can manipulate, \(x_{0}\) is the arbitrarily chosen initial state of the system, a (the state feedback parameter), \(b_i, i \in \mathbf{N} {:}= \{1,\ldots ,N\}\), are constant system parameters, and \(\dot{x}\) denotes the time derivative of x. All variables are scalar.

The aim of player \(i \in \mathbf{N}\) is to minimize:

$$\begin{aligned} J_{i}\left( u_{1},\ldots ,u_{N}\right) {:}= \int _{0}^{\infty }\left\{ q_{i} x^2(t) + r_i u^{2}_{i}(t) \right\} \hbox {d}t, \end{aligned}$$
(2)

where \(r_i\) is positive and both \(b_i\) and \(q_i\) differ from zero. So, player i is not directly concerned with the control efforts player \(j\ne i\) uses to manipulate the system. This assumption is crucial for the analysis below.

We assume that players act noncooperatively and use time-invariant feedback strategies, \(u_i(t)=f_ix(t)\), to control the system. This reflects the supposition that they do not want to destabilize the system. So, the set of strategies is restricted to

$$\begin{aligned} {\mathcal {F}}_{N} {:}= \left\{ \left( f_1,\ldots ,f_N\right) \ |\ a+\sum _{i=1}^{N}b_{i}f_{i} < 0\right\} . \end{aligned}$$

This restriction is essential. Indeed, there exist feedback Nash equilibria in which a player can improve unilaterally by choosing a feedback for which the closed-loop system is unstable (see [25]). Any \(f \in {\mathcal {F}}_{N}\) is called a stabilizing solution. A set of feedback strategies is called a Nash equilibrium if none of the players can improve his performance by unilaterally deviating from his strategy within this class of stationary stabilizing feedback controls. More formally, using the notation \(\bar{f}_{-i}(f_i) {:}=(\bar{f}_1,\ldots ,\bar{f}_{i-1},f_i,\bar{f}_{i+1}, \ldots ,\bar{f}_N)\):

Definition 2.1

The N-tuple \(\bar{f} {:}= (\bar{f}_1,\ldots ,\bar{f}_N)\) is called a set of (linear stabilizing stationary) feedback Nash equilibrium strategies if for all \(i \in \mathbf{N}\) the following inequalities hold:

$$\begin{aligned} J_i\left( \bar{f},x_0\right) \le J_i\left( \bar{f}_{-i}(f_i),x_0\right) \end{aligned}$$

for all initial states \(x_0\) and for all \(f_i \in \mathbb {R}\) such that \(\bar{f}_{-i}(f_i) \in {\mathcal {F}_{N}}\).

In the sequel, we will drop the adjectives linear, stabilizing, and stationary in the above definition, use the shorthand notation FNE to denote the equilibrium cost implied by these actions, and refer to the actions themselves as FNE actions or strategies.

In this problem setting, we may assume, without loss of generality, that the \(r_i\) are positive and that both \(b_i\) and \(q_i\) differ from zero. This is because, in case \(r_i \le 0\), the problem has no solution, and in case either \(b_i=0\) or \(q_i=0\), the optimal choice for player i is to use no control, i.e., \(u_i(\cdot )=0\), at every point in time. So, in the last-mentioned case, the player can be discarded from the game. For this game, we distinguish three cases.

Definition 2.2

Consider the cost function (2). The game is called an economic game if \(q_i < 0,\ i \in \mathbf{N}\); a regulator game if \(q_i > 0,\ i \in \mathbf{N}\); a mixed game if for some indices \(q_i\) is negative, and for other indices this parameter is positive.

The attached names are inspired by the fact that, in case \(q_i < 0,\ i \in \mathbf{N}\), the game can be interpreted as a game between players who all like to maximize their profits (measured by the state variable x) using their inputs (measured by \(u_i\)) as efficiently as possible, whereas in case \(q_i >0,\ i \in \mathbf{N}\), the game can be interpreted as a problem where all players like to track the system’s state, x, to zero as fast as possible using as little control effort, \(u_i\), as possible.

The FNE for the game (1)–(2) are completely characterized by the solutions of a set of coupled algebraic Riccati equations (ARE). With \(s_i {:}= \frac{b_{i}^{2}}{r_{i}}\), these equations in the variables \(k_i\) reduce to (see, e.g., [8]):

$$\begin{aligned} \left( a-\sum _{j=1}^{N}k_js_j\right) k_{i} + k_{i}\left( a-\sum _{j=1}^{N}s_jk_j\right) + q_i + k_{i}s_{i}k_{i} = 0,\ i \in \mathbf{N}. \end{aligned}$$
(3)

The precise statement is as follows:

Theorem 2.3

The game (1)–(2) has a FNE if and only if (iff.) there exist N scalars \(k_i\) such that (3) holds and \(a-\sum _{j=1}^{N}s_jk_j < 0\). If this condition holds, the N-tuple \((\bar{f}_1,\ldots ,\bar{f}_N)\) with \(\bar{f}_i {:}=-r_i^{-1}b_ik_i\) is a FNE and \(J_i(\bar{f}_1,\ldots ,\bar{f}_N,x_0)=k_ix_{0}^{2}\).
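The condition of Theorem 2.3 can be checked numerically. The following sketch (in Python rather than the paper's MATLAB, with illustrative parameter values) uses the fact that, for a symmetric two-player game with \(s_1=s_2=s\) and \(q_1=q_2=q\), the equations (3) with \(k_1=k_2=k\) reduce to the quadratic \(-3sk^2+2ak+q=0\); it verifies that the stabilizing root indeed satisfies (3) and the stability condition:

```python
import math

def are_residual(k, a, s, q):
    """Residual of the coupled ARE (3) when both players of a symmetric
    two-player game (s_i = s, q_i = q) use the same k."""
    a_cl = a - 2.0 * s * k          # closed-loop state feedback parameter
    return 2.0 * a_cl * k + q + s * k * k

# Illustrative symmetric regulator game: a < 0, q > 0, b = r = 1.
a, s, q = -1.0, 1.0, 3.0
# (3) reduces to -3sk^2 + 2ak + q = 0; this root gives a stable closed loop.
k = (a + math.sqrt(a * a + 3.0 * s * q)) / (3.0 * s)

assert abs(are_residual(k, a, s, q)) < 1e-12   # k solves (3)
assert a - 2.0 * s * k < 0.0                   # stabilizing solution
```

By Theorem 2.3, the corresponding equilibrium cost of each player is then \(kx_0^2\).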

So, to determine the set of FNE we have to find all stabilizing solutions of (3). To determine these solutions, following [8, Section 8.5.1], we introduce (for notational convenience) the variables:

$$\begin{aligned} \sigma _i {:}= s_iq_i,\ y_i {:}= s_ik_i, \ i \in \mathbf{N},\quad \hbox {and} \quad y_{N+1} {:}= -a_{cl} {:}= -\left( a-\sum _{j=1}^{N}y_j\right) . \end{aligned}$$

Note that, by relabeling the player indices, we can enforce that \(\sigma _1 \ge \cdots \ge \sigma _N\). This ordering is assumed to hold throughout. Furthermore, since \(f_i=\frac{-1}{b_i}y_i\), there is a bijection between \((f_1,\ldots ,f_N)\) and \((y_1,\ldots ,y_N)\). Using this notation, (3) can be rewritten as

$$\begin{aligned} y_{i}^2 - 2y_{N+1}y_{i} + \sigma _i = 0,\quad i \in \mathbf{N}. \end{aligned}$$
(4)

The above problem can then be reformulated as follows: under which conditions do the above N quadratic equations, together with the equation

$$\begin{aligned} y_{N+1} = -a+\sum _{j=1}^{N}y_j, \end{aligned}$$
(5)

have a real solution \(y_i,\ i \in \mathbf{N}\), with \(y_{N+1}>0\)?

Remark 2.4

In the rest of this paper, we concentrate on providing numerical algorithms which give a conclusive answer to the question whether (3)–(5) have either no, one, or more than one real solution \(y_i,\ i \in \mathbf{N}\), with \(y_{N+1}>0\). We will distinguish three algorithms, based on the signs of the involved \(\sigma _i\) parameters. Since there is a one-to-one correspondence between the sign of parameter \(q_i\) in performance criterion (2) and the sign of parameter \(\sigma _i\), we will use the notation introduced in Definition 2.2 for these equations.

As we will see later in Sect. 5 on the oligopolistic competition example, the determination of the unique solutions to Eqs. (3)–(5) also enables the determination of, e.g., FNE for different/higher-dimensional systems. In those cases, the interpretation in terms of the original scalar game is, of course, only indirect.

The solutions of (4) are \(y_i = y_{N+1} + \sqrt{y_{N+1}^{2}-\sigma _i}\) and \(y_i = y_{N+1} - \sqrt{y_{N+1}^{2}-\sigma _i},\ i \in \mathbf{N}\). Substitution of these into (5) yields the next result (see [12]).

Lemma 2.5

  1. 1.

    The set of Eqs. (4)–(5) has a solution iff. there exist \(t_i \in \{-1,1\},\ i \in \mathbf{N}\), such that the equation

    $$\begin{aligned} (N-1)y_{N+1} + t_1 \sqrt{y_{N+1}^2-\sigma _1}+ \cdots + t_N \sqrt{y_{N+1}^2-\sigma _N} = a \end{aligned}$$
    (6)

    has a solution \(y_{N+1}\). In fact, for all solutions satisfying \(y_{N+1}^{2} > \sigma _1\), there is a one-to-one correspondence between solutions \((y_1,\ldots ,y_{N+1})\) of (4)–(5) and \((y_{N+1},t_1,\ldots ,t_N)\) satisfying (6).

  2. 2.

    The game (1)–(2) has a FNE iff. there exist \(t_i \in \{-1,1\},\ i \in \mathbf{N}\), such that (6) has a solution \(y_{N+1} > 0\) with \(y_{N+1}^{2} \ge \sigma _{1}\).
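Lemma 2.5 reduces the search for FNE to root finding for (6) over the \(2^N\) sign vectors. For a small number of players, this can be exercised directly. The following Python sketch (illustrative only; it is not the paper's algorithm, and tangencies or boundary solutions are not treated separately) counts sign changes of (6) on a grid over the admissible domain:

```python
import itertools
import math

def count_fne(a, sigma, grid=20000):
    """Count FNE via Lemma 2.5: for every sign vector (t_1,...,t_N),
    count sign changes of x -> (N-1)x + sum_i t_i*sqrt(x^2 - sigma_i) - a
    on a grid over the admissible domain (x > 0 and x^2 >= sigma_1)."""
    n = len(sigma)
    x_lo = max(1e-9, math.sqrt(max(max(sigma), 0.0)))
    x_hi = abs(a) + math.sqrt(sum(abs(s) for s in sigma)) + 1.0
    xs = [x_lo + i * (x_hi - x_lo) / grid for i in range(grid + 1)]
    count = 0
    for t in itertools.product((-1.0, 1.0), repeat=n):
        vals = [(n - 1) * x
                + sum(ti * math.sqrt(max(x * x - si, 0.0))
                      for ti, si in zip(t, sigma))
                - a for x in xs]
        count += sum(1 for v0, v1 in zip(vals, vals[1:]) if v0 * v1 < 0.0)
    return count

# Regulator game with a < 0: a unique FNE.
print(count_fne(-3.0, [2.0, 1.0]))    # 1
# Very unstable regulator game: 2^N - 1 equilibria.
print(count_fne(10.0, [2.0, 1.0]))    # 3
```

Since the number of sign vectors grows as \(2^N\), this brute-force enumeration is only feasible for small N, which is exactly the limitation the algorithms of Sect. 4 avoid.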

3 Solvability Conditions

Theorem 3.3 below presents conditions under which the game will have either no, precisely one, or multiple equilibria. The results are obtained by a detailed study of the set of functions

$$\begin{aligned} {\mathcal {F}} {:}= \left\{ f(x)=(N-1)x + \sum _{i=1}^{N}t_{i}\sqrt{x^{2}-\sigma _{i}}\ \Big |\ t_{i} \in \{-1,1\},\ i \in \mathbf{N}\right\} \end{aligned}$$

appearing on the left-hand side of (6). In particular, the next three functions from this set \({\mathcal {F}}\) play a crucial role.

$$\begin{aligned}&f_1(x) \in {\mathcal {F}}, \quad \hbox {where}\quad t_i=-1,\quad i\in \mathbf{N}; \end{aligned}$$
(7)
$$\begin{aligned}&f_2(x) \in {\mathcal {F}}, \quad \hbox {where} \quad t_1=1 \quad \hbox {and}\quad t_i=-1,\ i\ne 1; \end{aligned}$$
(8)
$$\begin{aligned}&f_3(x) \in {\mathcal {F}}, \quad \hbox {where} \quad t_2=1 \quad \hbox {and}\quad t_i=-1,\ i\ne 2. \end{aligned}$$
(9)

Geometrically, the number of FNE is obtained by counting, for a fixed level a, the number of intersection points of this level with all functions from \({\mathcal {F}}\). From [12, Lemma A.1], we recall the following properties of the above-mentioned functions.

Lemma 3.1

Let \(f(x) \in {\mathcal {F}}\) with \(f(x) \ne f_i(x),\ i=1,2,3\). Then

  1. 1.

    \(f_1(x) \le f_2(x) \le f_3(x) \le f(x)\). If \(\sigma _1=\sigma _2\), \(f_2(x)=f_3(x)\).

  2. 2.
    1. (a)

      \(\lim _{x \rightarrow \infty } f_1(x) = -\infty \) and \(\lim _{x \rightarrow \infty } f_2(x) = \infty \).

    2. (b)

      \(\lim _{x \rightarrow \infty } f_{1}^{\prime }(x) = -1\) and \(\lim _{x \rightarrow \infty } f_{i}^{\prime }(x) = 1,\ i=2,3\).

Lemma 3.1 above shows that for all \(a\ge f_3(0)\) the game will have more than one equilibrium. Consequently, a detailed study of the functions \(f_i(x),\ i=1,2,3,\) provides complete information on the conditions under which the game will have either no, one, or more than one equilibrium. This study is performed in [12]. To create some intuition for the behavior of these functions, Lemma 3.2 recalls some general properties of the functions \(f_i(x),\ i=1,2,3\). The next subsections report existence conditions from that study.

Lemma 3.2

  1. 1.

    In the regulator game:

    1. (i)

      \(f_1(x)\) is monotonically decreasing.

    2. (ii)

      If \(\sigma _1 \ne \sigma _2\), \(f_2(x)\) has at most two stationary points \(x_1\) and \(x_2\ge x_1\), yielding a local maximum at \(x_1\) and a minimum (which might be global) at \(x_2\). If \(x_1=x_2\), \(f_2\) has an inflection point at \(x_1\).

    3. (iii)

      If \(\sigma _1 \ne \sigma _2\), \(f_3(x)\) has precisely one stationary point, yielding a global minimum.

    4. (iv)

      If \(\sigma _1=\sigma _2\), \(f_2(x)=f_3(x)\) has at most one stationary point, yielding a global minimum.

  2. 2.

    In the economic game:

    1. (i)

      \(f_1(x)\) has exactly one stationary point \(x^* > 0\), where it attains a (global) maximum.

    2. (ii)

      \(f_2(x)\) and \(f_3(x)\) are strictly increasing.

  3. 3.

    In the mixed game:

    1. (i)

      \(f_1(x)\) has at most two stationary points, yielding a local minimum and a maximum (which might be global), respectively (and an inflection point if there is just one stationary point).

    2. (ii)

      \(f_2(x)\) and \(f_3(x)\) have the same properties as in item 1.

3.1 The 2-Player Case

For the 2-player case, the question for which values of the state feedback parameter a the game will have either no, one, or more than one FNE is solved by calculating, for fixed a, the total number of solutions of the equations

$$\begin{aligned}&x-\sqrt{x^2-\sigma _1}-\sqrt{x^2-\sigma _2}=a,\quad x+\sqrt{x^2-\sigma _1}-\sqrt{x^2-\sigma _2}=a, \quad \hbox {or} \\&x-\sqrt{x^2-\sigma _1}+\sqrt{x^2-\sigma _2}=a. \end{aligned}$$

Here, it should be understood that in the regulator and mixed game only solutions \(x \ge \sqrt{\sigma _1}\) are relevant, whereas for the economic game only solutions \(x\ge 0\) apply.

By determining the various extremal points (including the boundary points) of the functions \(f_i(x)\), \(i=1,2,3\), and using the monotonicity properties of these functions, one can then characterize the areas where either no, one, or more than one FNE occurs in terms of these extremal points.

Introducing the notation \(f_{i*},\ f_{i}^{*}\) for the minimum and maximum of function \(f_i(x)\), respectively, Tables 1 and 2 below present the results for the 2-player case. In this case, an analytic solution of the problem is possible. Table 1 reports how many equilibria exist for every choice of the state feedback parameter a in the generic case \(\sigma _1 > \sigma _2\). For the case \(\sigma _1=\sigma _2\), a separate analysis is required, as the functions \(f_2\) and \(f_3\) then coincide. The corresponding results for that case are displayed in Table 2. Let

$$\begin{aligned} y{:}=t_1S+\frac{1}{2}\sqrt{-4S^2+\frac{12}{\sigma _1\sigma _2}- t_1\frac{4}{S}\frac{\sigma _1+\sigma _2}{\sigma _{1}^{2}\sigma _{2}^{2}}} \quad \mathrm{with} \quad S=\sqrt{\frac{1}{\sigma _1\sigma _2}+ \left( \frac{(\sigma _1-\sigma _2)^2}{4\sigma _{1}^{4}\sigma _{2}^{4}}\right) ^{1{/}3}}; \end{aligned}$$
(10)

where \(t_1=-1\) if \(\sigma _1+\sigma _2>0\); \(t_1=1\) if \(\sigma _1+\sigma _2<0\); and, in case \(\sigma _1+\sigma _2=0\): \(y{:}=\frac{1}{\sigma _1}\sqrt{\sqrt{12}-3}\).

With y as defined above, \(f_{3*}{:}=\frac{1-\sqrt{1-\sigma _1y}+\sqrt{1-\sigma _2 y}}{\sqrt{y}}\) and, for the economic game (where \(t_1=1\)), \(f_{1}^{*}{:}=\frac{1-\sqrt{1-\sigma _1y}-\sqrt{1-\sigma _2 y}}{\sqrt{y}}\).

Table 1 Number of equilibria if \(N=2\), \(\sigma _1 > \sigma _2\)

In case the game is symmetric, i.e., \(\sigma _1=\sigma _2=\sigma \), \(f_2(x)=f_3(x)=x\). So, these functions are monotonically increasing. Furthermore, for the regulator game \(f_i(\sqrt{\sigma })=\sqrt{\sigma },\ i=1,2,3\). Due to this, it can be shown that at \(a=\sqrt{\sigma }\) a unique equilibrium occurs. Consequently, if \(\sigma _1=\sigma _2=\sigma >0\), \(f_{3*}=\sqrt{\sigma }\), whereas in case \(\sigma _1=\sigma _2=\sigma <0\), \(f_{1}^{*}=-\sqrt{3}\sqrt{-\sigma }\), \(f_{1}(0)=-2\sqrt{-\sigma }\), and \(f_2(0)=f_3(0)=0\). In particular, this last observation implies that in the symmetric economic game the interval \((f_{2}(0),f_{3}(0))\) is empty. These observations give rise to Table 2.

Table 2 Number of equilibria if \(N=2\), \(\sigma _1 = \sigma _2\)

3.2 The Symmetric Case

Next, we consider the symmetric case when the number of players exceeds two, that is, the case \(\sigma _i=\sigma ,\ i\in \mathbf{N}\), with \(N>2\). From this definition, it follows that in the symmetric case \({\mathcal {F}}\) reduces to the set of functions

$$\begin{aligned} g_j(x) {:}= (N-1)x + \left( 2(j-1)-N\right) \sqrt{x^{2}-\sigma },\quad j=1,\ldots ,N+1, \end{aligned}$$

where \(j-1\) equals the number of signs \(t_i\) equal to 1. In particular, \(f_1\) coincides with \(g_1\) and both \(f_2\) and \(f_3\) coincide with \(g_2\). That is,

$$\begin{aligned} f_1(x)=(N-1)x-N\sqrt{x^2-\sigma } \quad \mathrm{and }\quad f_2(x)=f_3(x)=(N-1)x-(N-2)\sqrt{x^2-\sigma }. \end{aligned}$$

It can be easily shown in this case that optima of \(f_i(x),\ i=1,2,3,\) occur at stationary points. Some elementary calculations show that

$$\begin{aligned} f_{2*}= & {} f_{3*}=\sqrt{\sigma }\sqrt{2N-3}\mathrm{\ for\ the\ regulator\ game,\ and} \end{aligned}$$
(11)
$$\begin{aligned} f_{1}^{*}= & {} -\sqrt{2N-1}\sqrt{-\sigma } \mathrm{\ for\ the\ economic\ game.} \end{aligned}$$
(12)

Consequently, again, analytic conditions can be derived under which there exists either no, precisely one, or more than one equilibrium. Table 3 presents the results for the two games.

Table 3 Number of equilibria if \(\sigma _i=\sigma _j\), \(N>2\)
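The closed-form extrema (11) and (12) can be cross-checked against a crude grid search; a Python sketch with illustrative choices \(N=5\) and \(\sigma =\pm 1\):

```python
import math

# Grid-based cross-check of the closed-form extrema (11)-(12) in the
# symmetric case; N = 5 and sigma = +/-1 are illustrative choices.
n = 5

# Regulator game (sigma > 0): f2(x) = f3(x) = (N-1)x - (N-2)sqrt(x^2 - sigma).
sig = 1.0
xs = [math.sqrt(sig) + i * 1e-4 for i in range(200000)]
f2_min = min((n - 1) * x - (n - 2) * math.sqrt(x * x - sig) for x in xs)
assert abs(f2_min - math.sqrt(sig) * math.sqrt(2 * n - 3)) < 1e-4    # (11)

# Economic game (sigma < 0): f1(x) = (N-1)x - N sqrt(x^2 - sigma).
sig_e = -1.0
xs = [i * 1e-4 for i in range(200000)]
f1_max = max((n - 1) * x - n * math.sqrt(x * x - sig_e) for x in xs)
assert abs(f1_max + math.sqrt(2 * n - 1) * math.sqrt(-sig_e)) < 1e-4  # (12)
```

For these values, (11) gives \(f_{2*}=\sqrt{7}\) and (12) gives \(f_{1}^{*}=-3\), which the grid search reproduces up to the grid resolution.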

3.3 The General Case

In the previous two subsections, results were recalled for which analytic solutions could be obtained. Unfortunately, this is not possible for the general case. The results presented below show that the question whether the game will have no, one, or more than one equilibrium basically requires the calculation of the stationary points of the functions \(f_i(x),\ i=1,2,3\). The numerical algorithms developed in the next section use these results, together with estimates of these stationary points, to determine efficiently, in a large number of cases, whether the game will have no, one, or more than one equilibrium. And, in case there is a unique equilibrium, to calculate it.

Theorem 3.3

Consider the scalar game (1)–(2).

  1. 1.

    In the regulator and mixed game, there always exists a FNE. Moreover, for every FNE strategy the closed-loop state feedback parameter \(a_{cl}\) satisfies: \(a_{cl}\le -\sqrt{\sigma _1}\).

  2. 2.

    The economic game has no FNE iff. \(f_{1}^{*}< a < \sqrt{-\sigma _1} -\sum _{i=2}^{N} \sqrt{-\sigma _i}.\) Here \(f_{1}^{*} =\max _{x \ge 0} f_1(x)\).

  3. 3.

    For very stable systems (i.e., \(a\ll 0\)) all three games have a unique FNE. For very unstable systems (i.e., \(a\gg 0\)), all three games have \(2^{N}-1\) equilibria.

Theorem 3.4

Let \(f_i(x),\ i=1,2,3,\) be as defined in (7)–(9). Then, the items below present both necessary and sufficient conditions for the existence of a unique feedback Nash equilibrium for the considered games.

  1. 1.

    Consider the regulator game. Let \(S_{f_2}{:}= \{ x\ |\ f_{2}^{\prime }(x)=0\}\), \(f_{2*}{:}= \min _{x\in S_{f_2}} f_2(x)\), \(f_{2}^{*}{:}= \max _{x \in S_{f_2}} f_2(x)\) if \(\sigma _1>\sigma _2\) and \(f_{2}^{*}{:}= -\infty \) if \(\sigma _1=\sigma _2\); \(S_{f_3}{:}= \{\sqrt{\sigma _1}\} \cup \{ x\ |\ f_{3}^{\prime }(x)=0\}\) and \(f_{3*}{:}= \min _{x\in S_{f_3}} f_3(x)\).

    1. (a)

      Case \(f_2(x)\) is monotonically increasing. If \(\sigma _1=\sigma _2\), there is a unique equilibrium iff. \(a \le f_{3*}\). Otherwise, there is a unique equilibrium iff. \(a < f_{3*}\).

    2. (b)

      Case \(f_2(x)\) is not monotonically increasing. Then, there is a unique equilibrium iff. i) \(a < f_{2*}\) or ii) \(f_{2}^{*}< a < f_{3*}\).

  2. 2.

    Consider the economic game. Let \(f_{1}^{*}{:}= \max f_1(x)\). This game has a unique equilibrium if \(a<f_{1}(0)\). Furthermore,

    1. (a)

      If \(f_2(0)\le f_{1}^{*}<f_3(0)\), additionally there is a unique equilibrium if \(f_{1}^{*}< a < f_{3}(0)\).

    2. (b)

      If \(f_{1}^{*}< f_2(0)\), additionally there is a unique equilibrium if i) \(f_{2}(0) \le a < f_{3}(0)\) or ii) \(a=f_{1}^{*}\).

  3. 3.

    Consider the mixed game with the notation from item 1.

    1. (a)

      If \(f_1(x)\) is monotonically decreasing, the existence conditions for the regulator game in item 1 apply.

    2. (b)

      If \(f_1(x)\) is not monotonically decreasing let \(S_{f_1}{:}= \{ x\ |\ f_{1}^{\prime }(x)=0\}\) and \(f_{1*}{:}=\min _{x \in S_{f_1}} f_1(x)\), \(f_{1}^{*}:=\max _{x \in S_{f_1}} f_1(x)\). Then, there is a unique equilibrium iff. i) \(a<f_{1*}\); ii) \(\max \{f_{1}^{*},f_{2}^{*}\}< a < f_{3*}\) or iii) \(f_{1}^{*}<a<f_{2*}\).

Table 4 summarizes the above results.

Table 4 Number of equilibria, general case

Remark 3.5

  1. 1.

    In [12], it is shown that the sets \(S_{f_i}\) have at most two elements. If in the mixed game, case b., \(f_{2}(x)\) is monotonically increasing, the corresponding maximum and minimum values are not defined. It is easily verified that the corresponding statements continue to hold if we define \(f_{2*}=f_{2}^{*}=-\infty \) in that case.

  2. 2.

    Since \(f_2(x)\) has a global minimum and \(f_2(x)\le f_3(x)\), the global minimum of \(f_2(x)\) is smaller than \(f_{3*}\). So, by Theorem 3.4, item 1., the regulator game has a unique FNE for all a smaller than the global minimum of \(f_2(x)\).

  3. 3.

    From Theorem 3.4 and its proof, we obtain the next specializations in case \(\sigma _1=\sigma _2\) (implying \(f_2(x)=f_3(x)\)).

    1. (i)

      The regulator game has a unique FNE if \(a < f_{3*}\). Additionally, a unique equilibrium occurs at \(a=f_2(\sqrt{\sigma _1})=f_{3*}\) only if \(f_2\) is monotonically increasing.

    2. (ii)

      The economic game has a unique FNE iff. a) \(a < f_1(0)\) or b) \(a=f_{1}^{*}\) if \(f_{1}^{*} < f_2(0)\).

    3. (iii)

      The mixed game has a unique FNE iff., in case \(f_1(x)\) is monotonically decreasing the conditions mentioned under item i) apply; and, in case \(f_1(x)\) is not monotonically decreasing:

      1. (a)

        \(a < f_{1*}\), or

      2. (b)

        \(f_{1}^{*}< a < f_{3*}\).

Proposition 3.6 reports some sufficient analytic conditions from [12] under which the game has a unique FNE.

Proposition 3.6

The game (1)–(2) has a unique FNE in the following situations.

  1. a.

    In the regulator game, with \(z_1:=\sqrt{\frac{\sum _{i=2}^{N} \sigma _i}{2}}\), \(a_1:=\sqrt{\sum _{i=2}^{N} 2\sigma _i}-\sqrt{\sigma _1}\) and \(a_2:=\frac{1}{2\sqrt{\sigma _1}}\sum _{i=2}^{N} \sigma _i\): i) if \(\sqrt{\sigma _1}< z_1\), for all \(a \le a_1\), and ii) if \(z_1 \le \sqrt{\sigma _1}\), for all \(a \le a_2\).

  2. b.

    In the economic game: for all \(a < -\sum _{i=1}^{N} \sqrt{-\sigma _i}\).

  3. c.

    In the mixed game: let \(\sigma _i > 0,\ i \in \mathbf{N_1}\) and \(\sigma _i < 0,\ i=N_1+1,\ldots ,N\). Assume \(\sum _{i=N_1+1}^{N} \frac{\sqrt{\sigma _1}}{\sqrt{\sigma _1+\sigma _i}}\ge N-N_1-1\). Let \(a_0:=\sum _{i=N_1+1}^{N}(\sqrt{\sigma _1}-\sqrt{\sigma _1+\sigma _i})\) and, if \(N_1=1\): \(z_2:=\sqrt{\sigma _1}\) and \(a_4:=a_0\), whereas for \(N_1\ge 2\): \(z_2:=\sqrt{\frac{\sum _{i=2}^{N_1} \sigma _i}{2}}\), \(a_3:=\sqrt{\sum _{i=2}^{N_1} 2\sigma _i}-\sqrt{\sigma _1}+a_0\) and \(a_4:=\frac{1}{2\sqrt{\sigma _1}}\sum _{i=2}^{N_1} \sigma _i + a_0\). Then, the mixed game has a unique equilibrium: i) if \(\sqrt{\sigma _1}< z_2\), for all \(a \le a_3\), and ii) if \(z_2 \le \sqrt{\sigma _1}\), for all \(a \le a_4\).

In particular, this results in the next observation for the regulator game.

Corollary 3.7

The regulator game has a unique equilibrium if \(a<0\).
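For the regulator game, item a. of Proposition 3.6 translates into a few lines of code. The following Python sketch (with illustrative parameter values) computes the resulting uniqueness threshold:

```python
import math

def regulator_unique_bound(sigma):
    """Threshold a_* from Proposition 3.6, item a. (sigma sorted in
    descending order, all entries positive): the regulator game has a
    unique FNE for every a <= a_*."""
    tail = sum(sigma[1:])
    z1 = math.sqrt(tail / 2.0)
    if math.sqrt(sigma[0]) < z1:
        return math.sqrt(2.0 * tail) - math.sqrt(sigma[0])     # a_1
    return tail / (2.0 * math.sqrt(sigma[0]))                  # a_2

# Illustrative data: here z1 <= sqrt(sigma_1), so case ii applies and
# a_* = a_2 > 0, extending the uniqueness region a < 0 of Corollary 3.7.
a_star = regulator_unique_bound([2.0, 1.0])
assert abs(a_star - 1.0 / (2.0 * math.sqrt(2.0))) < 1e-12
assert a_star > 0.0
```

Since \(a_2>0\) whenever all \(\sigma _i>0\), the sufficient condition of Proposition 3.6 always covers at least the region of Corollary 3.7.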

4 Numerical Algorithms

In this section, we provide computational schemes to verify in an efficient way whether the game will have no, one, or more than one equilibrium and, in case there is a unique equilibrium, to calculate it. Below, we present the computational scheme for each game separately. Proofs can be found in “Appendix 1”. The setup of the algorithms is the same. First, there is a general check on the number of players and the symmetry of the game. If the game has two (nonsymmetric) players, we use results from Sect. 3.1 to present the solution. In case the game is symmetric, results from Sect. 3.2 are used to calculate the solution. Next, for every game, we calculate numbers \(a^{*}\) and \(a_{*}\), respectively. These numbers have the property that they can be calculated easily and determine large regions where either a unique equilibrium exists (for all state feedback parameters a smaller than \(a_{*}\)) or multiple equilibria exist (for all \(a > a^{*}\)). The third step in the algorithms uses more game-specific information to explore efficiently what happens if \(a_{*} \le a \le a^{*}\).

In case there exists a unique equilibrium, the corresponding solution, \(k_i\), of the coupled algebraic Riccati equations (3) can be easily obtained from the intersection point of the level a with either \(f_1(x)\) or \(f_2(x)\). This follows from Lemma 2.5, item 1. In case \(f_1(x)=a\) has a solution y, \(y_i:= y-\sqrt{y^2-\sigma _i},\ i \in \mathbf{N},\) solves (4)–(5). Or, using the definition of \(y_i\),

$$\begin{aligned} k_i =\frac{y_i}{s_i}=\left( y-\sqrt{y^2-\sigma _i}\right) {/}s_i,\ i \in \mathbf{N}, \end{aligned}$$
(13)

solves the set of coupled algebraic Riccati equations (3). Similarly, the solutions of (3) are obtained in case \(f_2(x)=a\) has a solution y. The only component that is obtained differently in that case is \(y_1\), which is then given by \(y_1:= y+\sqrt{y^2-\sigma _1}\). So, in that case, the solutions of (3) are

$$\begin{aligned} k_1 =\left( y+\sqrt{y^2-\sigma _1}\right) {/}s_1 \quad \mathrm{and}\quad k_i =\left( y-\sqrt{y^2-\sigma _i}\right) {/}s_i,\ i >1. \end{aligned}$$
(14)

The corresponding equilibrium actions, \(\bar{f}_i=-\frac{b_ik_i}{r_i}\), then follow directly from this (see Theorem 2.3).
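The recovery of the \(k_i\) via (13) can be illustrated as follows. In this Python sketch, the data are reverse-engineered for transparency: with \(b_i=r_i=1\) (so \(s_i=1\) and \(\sigma _i=q_i\)), the level a is chosen as \(a=f_1(y)\) for a prescribed intersection point \(y=2\), after which (13) is applied and the coupled equations (3) are checked:

```python
import math

# Recovering k_i via (13) from an intersection point of the level a
# with f1.  Illustrative data with b_i = r_i = 1, so s_i = 1 and
# sigma_i = q_i; a is chosen as a = f1(y) for y = 2, so that the
# intersection point is known exactly.
sigma = [2.0, 1.0]
y = 2.0
a = y - sum(math.sqrt(y * y - si) for si in sigma)        # a = f1(y)

k = [y - math.sqrt(y * y - si) for si in sigma]           # (13), s_i = 1

a_cl = a - sum(k)                                         # equals -y
assert a_cl < 0.0                                         # stable closed loop
for ki, qi in zip(k, sigma):                              # q_i = sigma_i here
    assert abs(2.0 * a_cl * ki + qi + ki * ki) < 1e-12    # residual of (3)
```

In this normalization, the equilibrium actions are simply \(\bar{f}_i=-k_i\).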

4.1 The Regulator Game

Algorithm 4.1

  1. 1.

    2-player case

    1. a.

      Calculate S and y from (10).

    2. b.

      If \(a< f_{3*}\) (see Table 1), proceed with item c. Otherwise, there is no unique equilibrium.

    3. c.

      If \(a< f_{1}(\sqrt{\sigma _1})\), \(k_i\) is given by (13). Else \(k_i\) is given by (14).

  2. 2.

    Symmetric case

    1. a.

      If \(a< f_{3*}\) (\(a\le f_{3*}\) if \(N=2\)), see Tables 2 and 3, proceed with item b. Otherwise, there is no unique equilibrium.

    2. b.

      \(k_i\) is given by (13).

  3. 3.

    General case

    1. a.

      Let \(a^{*}:=f_3(\sqrt{\sigma _1})=(N-1)\sqrt{\sigma _1} + \sqrt{\sigma _1-\sigma _2} - \sum _{i=3}^{N} \sqrt{\sigma _1-\sigma _i}\); \(a_{*}:=a_1\) if item a.i of Proposition 3.6 applies, and \(a_{*}:=a_2\) if item a.ii of Proposition 3.6 applies.

      If \(a \le a_{*}\), go to item c. If \(a \ge a^{*}\) there is more than one equilibrium. If \(a_{*}< a < a^{*}\), proceed with item b.

    2. b.

      Case i: \(\sigma _1=\sigma _2\) and \(N-1-\sum _{i=3}^{N} \sqrt{\frac{\sigma _1}{\sigma _1-\sigma _i}} \ge 0\).

      No equilibrium exists for \(a \ge f_1(\sqrt{\sigma _1})\). If \(a<f_1(\sqrt{\sigma _1})\), go to item c.

      Case ii: otherwise.

      Solve \(f_{3}^{\prime }(\bar{x})=0\) on \([\sqrt{\sigma _1},\sqrt{\frac{N^2}{2N-1}\sigma _1}]\). Denote \(f_{3*}:=f_{3}(\bar{x})\) and \(I:=[\sqrt{\sigma _1},\bar{x}]\).

      1. I.

        If \(a \ge f_{3*}\), there exists more than one equilibrium.

      2. II.

        If \(f_1(\sqrt{\sigma _1}) \le a < f_{3*}\): Calculate solution(s) of \(f_2(y)=a\) on I.

        Case i: There exists more than one solution. Multiple equilibria exist.

        Case ii: There exists no solution. \(k_i\) is given by (14) where \(y>\bar{x}\) solves \(a=f_2(y)\) (see Remark 4.2.2).

        Case iii: There exists one solution y. If \(f_{2}^{\prime }(y)=0\) and \(f_{2}^{\prime \prime }(y)<0\), two equilibria occur at y. Otherwise, one equilibrium occurs at y, and \(k_i\) is given by (14).

      3. III.

        If \(a_{*}< a < f_1(\sqrt{\sigma _1})\). Calculate solution(s) of \(f_2(y)=a\) on I.

        Case i: There exists at least one solution. Multiple equilibria exist.

        Case ii: There exists no solution. Go to item c.

    3. c.

      \(k_i\) is given by (13) where y solves \(a=f_1(y)\) (see Remark 4.2.1).

In Remark 4.2, we discuss some computational issues that arise in Algorithm 4.1.

Remark 4.2

1. To determine the solution of \(f_1(y)=a\) in item 3.c, notice that \(F_1(x):=f_1(x)-a=0\) has a unique solution; \(F_1(\sqrt{\sigma _1})>0\); and, with \(x_r:=\sqrt{\sum _{i=1}^{N} \sigma _i}+|a|\), \(F_1(x_r)<0\). Using this, e.g., a brute-force halving (bisection) technique or Newton–Raphson can be used to calculate the solution of \(F_1(y)=0\). Other estimates that may be appropriate to narrow the initial search interval are, e.g., \(F_1(-a)>0\) if \(a<-\sqrt{\sigma _1}\); \(F_1(\frac{a}{N-1})<0\) if \(\frac{a}{N-1}>\sqrt{\sigma _1}\); and, with \(x_r:=\frac{-a+\sqrt{a^2+4N\sigma _N}}{2},\) \(F_1(x_r)<0\) if \(x_{r}^{2} > \sigma _1\).

2. Notice that \(x_r:=\sqrt{\frac{N^2}{2N-1}\sigma _1}+|a| > \bar{x}\) (see item 3.b, case ii) and, for all \(x\ge x_r\), \(F_2(x):=f_2(x)-a=\sqrt{x^2-\sigma _1}+\sum _{i=2}^{N} \frac{\sigma _i}{x+\sqrt{x^2-\sigma _i}}-a > 0\). So, the solution of \(f_2(y)=a\) in item 3.b.II.ii can be obtained, e.g., by calculating the unique zero of \(F_2(x)\) on the interval \([\bar{x},x_r]\).
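The halving procedure of item 1 can be sketched as follows (the explicit form of \(f_1\) is an assumption, obtained from Remark 4.6.1 by specializing to all \(\sigma _i>0\), and the numerical data are hypothetical):

```python
import math

def f1(x, sigma):
    # Assumed explicit form (Remark 4.6.1 with all sigma_i > 0):
    # f1(x) = (N-1)*x - sum_i sqrt(x^2 - sigma_i)
    N = len(sigma)
    return (N - 1) * x - sum(math.sqrt(max(x * x - sg, 0.0)) for sg in sigma)

def solve_item_3c(a, sigma, tol=1e-12):
    """Halving (bisection) for F1(y) = f1(y) - a = 0 on [sqrt(sigma_1), x_r]."""
    lo = math.sqrt(sigma[0])                 # sigma_1 is the largest sigma_i; F1(lo) > 0
    hi = math.sqrt(sum(sigma)) + abs(a)      # x_r of Remark 4.2.1; F1(x_r) < 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f1(mid, sigma) - a > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

sigma = [3.0, 2.0, 1.0]   # hypothetical sigma_i, ordered decreasingly
a = -1.0                  # assumed to satisfy a <= a_*, so item 3.c applies
y = solve_item_3c(a, sigma)
print(y, f1(y, sigma) - a)
```

Newton–Raphson, as mentioned in the remark, would converge faster once the bisection has produced a good starting point; the tighter brackets listed in item 1 can shrink the initial interval further.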

4.2 The Economic Game

Algorithm 4.3

1. 2-player case

   a. Calculate, with \(t_1=1\), S and y from (10). If either \(a<f_{1}(0)\) or \(a \in (f_{2}(0),f_{3}(0))\) holds, proceed with item c. Otherwise, proceed with item b.

   b. Calculate \(f_{1}^{*}\) (see Table 1). If \(a=f_{1}^{*}\), proceed with item c. Otherwise, there is no unique equilibrium.

   c. If either \(a<f_{1}(0)\) or \(a =f_{1}^{*}\) holds, \(k_i\) is given by (13). Else \(k_i\) is given by (14).

2. Symmetric case

   a. If either \(a<f_{1}(0)\) or, in case \(N=2,3,\) or 4, \(a=f_{1}^{*}\) (see Tables 2 and 3), proceed with item b. Otherwise, there is no unique equilibrium.

   b. If \(a<f_{1}(0)\), \(y:=\frac{a^2+N^2\sigma }{(N-1)a+N\sqrt{a^2+(2N-1)\sigma }}\) and \(k_i\) is given by (13). If \(N=2,3,\) or 4 and \(a=f_{1}^{*}\), \(y:=\frac{N-1}{\sqrt{2N-1}}\sqrt{-\sigma }\) and \(k_i\) is again given by (13).

3. General case

   a. If \(a<f_1(0)=-\sum _{i=1}^{N} \sqrt{-\sigma _i}\), go to item c. If \(a\ge f_3(0)=\sqrt{-\sigma _2} - \sum _{i\ne 2}^{N} \sqrt{-\sigma _i}\), there is more than one equilibrium. If \(f_1(0) \le a < f_3(0)\), proceed with item b.

   b. Solve \(f_{1}^{\prime }(\bar{x})=0\) on \(\left[ 0,\sqrt{\frac{\sum _{i=1}^{N} -\sigma _i}{2}}\right] \).

      Case i: \(f_{1}(\bar{x}) \ge f_3(0)=\sqrt{-\sigma _2}-\sum _{i\ne 2}^{N} \sqrt{-\sigma _i}\). Multiple equilibria exist.

      Case ii: \(\sqrt{-\sigma _1}-\sum _{i=2}^{N} \sqrt{-\sigma _i}=f_2(0) \le f_{1}(\bar{x})< f_3(0)\).

      If \(f_{1}(\bar{x})< a < f_3(0)\), \(k_i\) is given by (14), where \(y \in [0,x_r]\) solves \(a=f_2(y)\). Here \(x_r=\sqrt{|(a+\sum _{i=2}^{N}\sqrt{-\sigma _i})^2+\sigma _1|}\).

      If \(f_1(0) \le a \le f_{1}(\bar{x})\) or \(a \ge f_3(0)\), multiple equilibria exist.

      Case iii: \(f_{1}(\bar{x}) < f_2(0)\).

      If \(f_2(0) \le a < f_3(0)\), \(k_i\) is given by (14), where \(y \in [0,x_r]\) solves \(a=f_2(y)\), with \(x_r\) as in Case ii.

      If \(a=f_{1}(\bar{x})\), go to item c, with \(y:=\bar{x}\).

      If \(f_1(0)< a < f_2(0)\), there exists no equilibrium. If \(a \ge f_3(0)\), multiple equilibria exist.

   c. \(k_i\) is given by (13), where y solves \(a=f_1(y)\) (see Remark 4.4.1).

Remarks similar to those in Remark 4.2 apply to the above algorithm.

Remark 4.4

1. To determine the solution of \(f_1(y)=a\) in item 3.c, notice that \(F_1(x):=f_1(x)-a\) has a unique zero; \(F_1(0)>0\); and \(F_1(|a|)<0\). So, the search can be restricted to the interval \([0,|a|]\). Clearly, the bounds of this interval are not tight, so to improve calculation speed in certain applications it might be worthwhile to improve on them.

2. A similar remark as in item 1 applies w.r.t. \(F_2(x)=f_2(x)-a\). Also here, when N becomes large, it might be worthwhile from a computational point of view to improve the bound \(x_r\) in item 3.b.
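The closed-form expression for y in item 2.b of Algorithm 4.3 can be checked numerically with the following sketch (hypothetical data; the symmetric form \(f_1(x)=(N-1)x-N\sqrt{x^2-\sigma }\) is assumed by analogy with Remark 4.6.1):

```python
import math

def y_symmetric(a, sigma, N):
    # Closed-form y of Algorithm 4.3, item 2.b (valid for a < f1(0), sigma < 0)
    return (a * a + N * N * sigma) / (
        (N - 1) * a + N * math.sqrt(a * a + (2 * N - 1) * sigma))

def f1_symmetric(x, sigma, N):
    # Assumed symmetric form of f1: (N-1)*x - N*sqrt(x^2 - sigma), sigma < 0
    return (N - 1) * x - N * math.sqrt(x * x - sigma)

N, sigma = 3, -1.0
a = -5.0                             # satisfies a < f1(0) = -N*sqrt(-sigma) = -3
y = y_symmetric(a, sigma, N)
print(y, f1_symmetric(y, sigma, N))  # f1(y) should reproduce a
```

Substituting y back into \(f_1\) recovers a, so no root search is needed at all in the symmetric case; this is what makes item 2.b of the algorithm so cheap compared with the general case.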

4.3 The Mixed Game

Algorithm 4.5

Following the notation of Proposition 3.6, let \(\sigma _i > 0,\ i \in \mathbf{N_1}\) \((N_1>1)\) and \(\sigma _i < 0,\ i=N_1+1,\ldots ,N\) (\(N > N_1\)).

1. 2-player case

   See item 1 of the regulator game (Algorithm 4.1).

2. Symmetric case

   Not applicable.

3. General case

   a. Let \(a^{*}:=f_{3}(0)=(N-1)\sqrt{\sigma _1} + \sqrt{\sigma _1-\sigma _2} - \sum _{i=3}^{N} \sqrt{\sigma _1-\sigma _i}\).

      Next, consider the notation from Proposition 3.6, item c. Calculate \(z_2\). If \(\sqrt{\sigma _1} < z_2\), \(a_{*}:=a_3\). Else, \(a_{*}:=a_4\).

      If \(a \le a_{*}\), go to item c. If \(a \ge a^{*}\), there is more than one equilibrium. If \(a_{*}< a < a^{*}\), proceed with item b.

   b. Case i: \(\sigma _1=\sigma _2\) and \(N-1-\sum _{i=3}^{N} \sqrt{\frac{\sigma _1}{\sigma _1-\sigma _i}} \ge 0\).

      No equilibrium exists for \(a \ge f_1(\sqrt{\sigma _1})\). If \(a<f_1(\sqrt{\sigma _1})\), go to item c.

      Case ii: otherwise. Let \(s:=\sqrt{\frac{\sum _{i=N_1}^{N} -\sigma _i}{2}}\). Solve \(f_{3}^{\prime }(\bar{x})=0\) on \([\sqrt{\sigma _1},\sqrt{\frac{N_{1}^{2}}{2N_1-1}\sigma _1}]\). Denote \(f_{3*}:=f_{3}(\bar{x})\), \(I:=[\sqrt{\sigma _1},\bar{x}]\), and \(I_2:=[\bar{x},s]\).

      Case I: \(f_{1}(s) \ge f_{1}(\bar{x})\).

         Case i: \(a\in [f_1(a_{*}),f_1(\bar{x}))\). Solve \(f_1(x)=a\) on \(I_2\). If a solution exists, multiple equilibria occur. If no solution exists, go to item c, where \(y \in [s,x_r]\) (see Remark 4.6.1 below).

         Case ii: \(a \in (f_1(\bar{x}),f_1(s)]\). Multiple equilibria exist.

         Case iii: \(a \in [f_1(s),f_{3*}]\) (if \(f_{3*}\le f_1(s)\), multiple equilibria occur). Solve \(f_1(x)=a\) on \(I_2\). If a solution exists, multiple equilibria occur. If no solution exists, solve \(f_2(x)=a\) on I:

         1. If no solution exists, \(k_i\) is given by (14), where \(y \in [\bar{x},x_r]\) (see Remark 4.6.2 below) solves \(a=f_2(y)\).

         2. If precisely one solution occurs at y and either \(f_{2}^{\prime }(y)\ne 0\), or \(f_{2}^{\prime }(y)=0\) and \(f_{2}^{\prime \prime }(y) > 0\), \(k_i\) is given by (14).

         3. Otherwise, multiple equilibria occur.

      Case II: \(f_{1}(s) < f_{1}(\bar{x})\).

         Case i: \(a\in [f_1(a_{*}),f_1(s))\). Solve \(f_1(x)=a\) on \(I_2\). If a solution exists, multiple equilibria occur. If no solution exists, go to item c, where \(y \in [s,x_r]\).

         Case ii: \(a \in [f_1(s),f_1(\bar{x})]\). Solve \(f_1(x)=a\) on \(I_2\). If a unique solution y exists, go to item c. Otherwise, multiple equilibria exist.

         Case iii: \(a \in (f_1(\bar{x}),f_{1}(\sqrt{\sigma _1})]\). If both \(f_1(x)=a\) has no solution on \(I_2\) and \(f_2(x)=a\) has no solution on I, go to item c, where \(y \in I\). Otherwise, multiple equilibria occur.

         Case iv: \(a \in (f_{1}(\sqrt{\sigma _1}),f_{3*})\). Solve \(f_1(x)=a\) on \(I_2\). If a solution exists, multiple equilibria occur. If no solution exists, solve \(f_2(x)=a\) on I:

         1. If no solution exists, \(k_i\) is given by (14), where \(y \in [\bar{x},x_r]\) (see Remark 4.6.2 below) solves \(a=f_2(y)\).

         2. If precisely one solution occurs at y and either \(f_{2}^{\prime }(y)\ne 0\), or \(f_{2}^{\prime }(y)=0\) and \(f_{2}^{\prime \prime }(y) > 0\), \(k_i\) is given by (14).

         3. Otherwise, multiple equilibria occur.

   c. \(k_i\) is given by (13), where y solves \(a=f_1(y)\) (see Remark 4.2.1).

Remark 4.6

1. Note that, similar to the regulator game (see Remark 4.2.1), we have

   $$\begin{aligned} F_1(x):=f_1(x)-a= & {} (N_1-1)x-\sum _{i=1}^{N_1}\sqrt{x^2-\sigma _i}-\sum _{i=N_1+1}^{N}\left( \sqrt{x^2-\sigma _i}-x\right) \\\le & {} (N_1-1)x-\sum _{i=1}^{N_1}\sqrt{x^2-\sigma _i} <0, \end{aligned}$$

   if \(x>x_r := \sqrt{\sum _{i=1}^{N_1} \sigma _i}+|a|\). Again, since \(x_r\) is not tight, to improve calculation speed in certain applications it might be worthwhile to improve on this bound.

2. Note that, similar to the economic game, we have

   $$\begin{aligned} F_2(x)=f_2(x)-a= & {} \sqrt{x^2-\sigma _1}+\sum _{i=2}^{N_1} \frac{\sigma _i}{x+\sqrt{x^2-\sigma _i}}-\sum _{i=N_1+1}^{N}\frac{-\sigma _i}{x+\sqrt{x^2-\sigma _i}}-a\\>&\sqrt{x^2-\sigma _1}+\sum _{i=2}^{N_1} \frac{\sigma _i}{x+\sqrt{x^2-\sigma _i}}-\sum _{i=N_1+1}^{N}\sqrt{-\sigma _i}-a>0, \end{aligned}$$

   if \(x>x_r:=\sqrt{|(a+\sum _{i=N_1+1}^{N}\sqrt{-\sigma _i})^2+\sigma _1|}\). Again, \(x_r\) here is clearly just an upper bound, which may be improved depending on the specific case considered.
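The bound \(x_r\) of item 2 can be probed numerically as follows (a sketch with hypothetical \(\sigma _i\); it merely samples \(F_2\) on a grid beyond \(x_r\) and is of course no proof):

```python
import math

def F2(x, a, sigma):
    # F2(x) = f2(x) - a, written uniformly: sigma_i/(x + sqrt(x^2 - sigma_i)) covers
    # both the positive terms (i <= N1) and the negative ones (i > N1) of Remark 4.6.2
    return (math.sqrt(x * x - sigma[0])
            + sum(sg / (x + math.sqrt(x * x - sg)) for sg in sigma[1:]) - a)

sigma = [4.0, 3.0, -1.0, -2.0]   # hypothetical mixed game: N1 = 2 positive sigmas
N1, a = 2, 1.0
x_r = math.sqrt(abs((a + sum(math.sqrt(-sg) for sg in sigma[N1:])) ** 2 + sigma[0]))
min_F2 = min(F2(x_r + 0.01 * j, a, sigma) for j in range(1, 500))
print(x_r, min_F2)
```

A positive minimum on the sampled grid is consistent with the claim that \(F_2\) has no zero beyond \(x_r\), so the root search for \(f_2(y)=a\) can indeed be restricted to \([\bar{x},x_r]\).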

5 An Example on Oligopolistic Competition

In this example, we consider the model on oligopolistic competition with sticky prices that was analyzed by Fershtman and Kamien [14] (see also [6]). This model describes a market where N companies sell, more or less, the same product. It is assumed that the market price does not adjust instantaneously to the price indicated by the demand function. There is a “lag” in the market price adjustment. Therefore, the price is called “sticky.”

Assume at any point in time, t, company \(i,\ i \in \mathbf{N}\), produces \(u_i(t)\) with cost function

$$\begin{aligned} C_i(u_i(t)) =c_i u_i(t)+\frac{1}{2}u_{i}^{2}(t), \end{aligned}$$

and sells it in the market at the price of p. The inverse linear demand function for the product is given by

$$\begin{aligned} p(t) = \bar{p}-\sum _{i=1}^{N} \beta _iu_i. \end{aligned}$$

Here, \(\bar{p}\), \(\beta _i\) are positive constants. Next, Eq. (15) models that the market price does not adapt instantaneously to the price indicated by the demand function.

$$\begin{aligned} \dot{p}(t)=s \left\{ \bar{p}-\sum _{i=1}^{N} \beta _i u_i(t)-p(t) \right\} , \ p(0)=p_0. \end{aligned}$$
(15)

Here, \(s \in (0,\infty )\) is the adjustment speed parameter. For larger values of s, the market price adjusts more quickly to the price indicated by the demand function.

Within this framework, we look for affine stabilizing feedback Nash equilibria of the game, where every player wants to maximize his discounted profits

$$\begin{aligned} J_i(u_i)=\int _{0}^{\infty }\hbox {e}^{-rt} \left\{ p(t)u_i(t)-c_iu_i(t)-\frac{1}{2}u_{i}^{2}(t) \right\} \hbox {d}t,\ i \in \mathbf{N}. \end{aligned}$$
(16)

More precisely, we assume that players choose their actions from the following set \({\mathcal {F}}\) of stabilizing affine functions of the price p.

$$\begin{aligned} {\mathcal {F}} := \left\{ (u_1,\ldots ,u_N) \ |\ u_i(t)=f_{i}p(t)+g_i, \quad \mathrm{with }\quad -s\left( 1+\sum _{i=1}^{N} \beta _if_{i}\right) < \frac{1}{2}r\right\} . \end{aligned}$$
(17)

To determine the feedback Nash equilibrium actions for this game (15)–(16), we first reformulate it into the standard linear quadratic framework. Following [8, Example 8.5], introduce the variables

$$\begin{aligned} x^{\mathrm{T}}(t):=\hbox {e}^{-\frac{1}{2}rt} \left[ \begin{array}{cc} p(t)&1 \end{array} \right] \quad \mathrm{and }\quad v_i(t):=\hbox {e}^{-\frac{1}{2}rt}u_i(t)+ \left[ \begin{array}{cc} -1&c_i \end{array} \right] x(t),\ i \in \mathbf{N}. \end{aligned}$$
(18)

Then, the problem can be rewritten as the minimization of

$$\begin{aligned} J_i := \int _{0}^{\infty } \left\{ x^{\mathrm{T}}(t)Q_ix(t)+v_{i}^{\mathrm{T}}(t)R_iv_i(t) \right\} \hbox {d}t,\ i \in \mathbf{N}, \end{aligned}$$

subject to the dynamics

$$\begin{aligned} \dot{x}(t)=Ax(t)+\sum _{i=1}^{N} B_iv_i(t),\ x^{\mathrm{T}}(0)= \left[ \begin{array}{cc} p_0&1 \end{array} \right] . \end{aligned}$$

Here

$$\begin{aligned} A= & {} \left[ \begin{array}{cc} -\frac{1}{2}r-s\left( 1+\sum \limits _{i=1}^{N}\beta _i\right) &{}s\left( \bar{p}+\sum \limits _{i=1}^{N}\beta _ic_i\right) \\ 0 &{} -\frac{1}{2}r \end{array} \right] ;\\ B_i= & {} \left[ \begin{array}{c} -s\beta _i \\ 0 \end{array} \right] ;\ Q_i= \left[ \begin{array}{cc} -\frac{1}{2} &{}\frac{1}{2}c_i\\ \frac{1}{2}c_i &{} -\frac{1}{2}c_{i}^{2} \end{array} \right] \quad \mathrm{and }\quad R_i=\frac{1}{2}. \end{aligned}$$

According to, e.g., [8, Theorem 8.5] this game has a feedback Nash equilibrium if and only if

$$\begin{aligned} \left( A-\sum _{i=1}^{N} S_iK_i\right) ^{\mathrm{T}}K_j+K_j\left( A-\sum _{i=1}^{N} S_iK_i\right) +K_jS_jK_j+Q_j=0,\ j \in \mathbf{N}, \end{aligned}$$
(19)

has a set of symmetric solutions \(K_j\) such that matrix \(A-\sum _{i=1}^{N} S_iK_i\) is stable, where \(S_i= \left[ \begin{array}{cc} 2s^2\beta _{i}^{2} &{} 0\\ 0 &{} 0 \end{array} \right] \). Introducing \( K_i:= \left[ \begin{array}{cc} k_{i} &{} l_{i}\\ l_{i} &{} m_{i} \end{array} \right] \), \(a:=-\frac{1}{2}r-s(1+\sum _{i=1}^{N}\beta _i)\), \(a_2:=s(\bar{p}+\sum _{i=1}^{N}\beta _ic_i)\) and \(s_i:=2s^2\beta _{i}^{2}\), simple calculations show that (19) reduces to the following 3N equations.

$$\begin{aligned}&2\left( a-\sum _{i=1}^{N} s_ik_i\right) k_j+s_jk_{j}^{2}-\frac{1}{2} = 0, \end{aligned}$$
(20)
$$\begin{aligned}&\left( -a+\frac{1}{2}r+\sum _{i=1}^{N} s_ik_i\right) l_j+ k_j\sum _{i\ne j}^{N} s_il_i = -\frac{1}{2}c_j+a_2k_j, \end{aligned}$$
(21)
$$\begin{aligned}&\frac{1}{r}\left( 2\left( a_2-\sum _{i=1}^{N} s_il_i\right) l_j+s_jl_{j}^{2}-\frac{1}{2}c_{j}^{2}\right) = m_j,\ j\in \mathbf{N}, \end{aligned}$$
(22)

where \(k_i\) should be such that \(a-\sum _{i=1}^{N}s_ik_i <0\).

Or, stated differently, the oligopolistic market has a noncooperative affine feedback Nash equilibrium iff (3) has a stabilizing solution with \(a:=-\frac{1}{2}r-s(1+\sum _{i=1}^{N}\beta _i)\), \(s_i:=2s^2\beta _{i}^{2}\) and \(q_i:=-\frac{1}{2},\ i \in \mathbf{N}.\) So, in the terminology of Definition 2.2, this is an economic game. Note that once \(k_i\) has been determined from (20), \(l_i\) can be calculated from the linear Eq. (21). Finally, \(m_i\) can then be calculated directly from (22).

The equilibrium actions now follow directly from (18). That is,

$$\begin{aligned} u_i(t)=(2sk_i+1)p(t) + 2sl_i-c_i,\ i \in \mathbf{N}, \end{aligned}$$
(23)

with \(k_i\) given by (20).

The resulting dynamics of the equilibrium price path are

$$\begin{aligned} \dot{p}(t)=-s\left( 1+\sum _{i=1}^{N} \beta _i(1+2sk_i)\right) p(t)+s\left( \bar{p}-\sum _{i=1}^{N}\beta _i(2sl_i-c_i)\right) . \end{aligned}$$

Or, stated differently, with \(p_s:=\frac{\bar{p}-\sum _{i=1}^{N}\beta _i(2sl_i-c_i)}{1+\sum _{i=1}^{N} \beta _i(1+2sk_i)}\),

$$\begin{aligned} p(t)=\alpha \hbox {e}^{-s\left( 1+\sum _{i=1}^{N} \beta _i(1+2sk_i)\right) t}+p_s, \mathrm{\ where\ } \alpha =p_0-p_s. \end{aligned}$$

In particular, we infer from this that the price in this oligopolistic market converges to \(p_s\).

Finding this equilibrium price for specific values of the parameters and a large number of players requires, apart from solving (20) for \(k_i\), the solution of the linear Eq. (21). In “Appendix 2”, we show how \(l_i,\ i\in \mathbf{N},\) can be solved efficiently as (24).

To solve \(k_i\) from (20), we use Algorithm 4.3. Note that

$$\begin{aligned} a=-\frac{1}{2}r-s\left( 1+\sum _{i=1}^{N}\beta _i\right) < -\sum _{i=1}^{N} s\beta _i=-\sum _{i=1}^{N}\sqrt{s^2\beta _{i}^{2}}=f_1(0). \end{aligned}$$

So, by item 3.a of this algorithm, there is a unique solution that can be calculated as outlined in item 3.c.
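The computation described above can be sketched as follows (Python with NumPy; the explicit forms of \(f_1\) and (13) are assumptions by analogy with Remark 4.6.1, the benchmark parameters are those of Table 5 for a modest N, and no Table 5 values are asserted):

```python
import numpy as np

# Benchmark parameters of the text, for a modest number of players (illustration only)
N = 5
c = np.ones(N)
beta = np.ones(N)
p_bar, s, r = 80.0, 2.0, 0.1

a = -0.5 * r - s * (1.0 + beta.sum())
a2 = s * (p_bar + (beta * c).sum())
s_i = 2.0 * s**2 * beta**2
sigma = -(s**2) * beta**2                 # sigma_i = s_i * q_i with q_i = -1/2

def f1(x):
    # Assumed explicit form (Remark 4.6.1): f1(x) = (N-1)x - sum_i sqrt(x^2 - sigma_i)
    return (N - 1) * x - np.sqrt(x * x - sigma).sum()

# a < f1(0), so item 3.c of Algorithm 4.3 applies: bisect F1 on [0, |a|] (Remark 4.4.1)
lo, hi = 0.0, abs(a)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f1(mid) > a else (lo, mid)
y = 0.5 * (lo + hi)

k = (y - np.sqrt(y * y - sigma)) / s_i    # (13), assumed '-' branch for every player

# Linear system (21) for l_j: diagonal (-a + r/2 + sum_i s_i k_i), off-diagonal k_j s_i
M = np.outer(k, s_i)
np.fill_diagonal(M, -a + 0.5 * r + (s_i * k).sum())
l = np.linalg.solve(M, -0.5 * c + a2 * k)

# Equilibrium feedback (23) and the steady-state price p_s
f = 2.0 * s * k + 1.0
g = 2.0 * s * l - c
p_s = (p_bar - (beta * g).sum()) / (1.0 + (beta * f).sum())
print(y, p_s)
```

Substituting the computed \(k_i\) back into (20) and checking that \(a-\sum _i s_ik_i<0\) provides an independent verification that a stabilizing solution has been found.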

Table 5 reports some simulation results for the benchmark parameters \(c_i=1\); \(\bar{p}=80\); \(\beta _i=1\); \(s=2\); and \(r=0.1\) as a function of the number of players, N. The table reports the parameters \(f_i\) and \(g_i\) of the affine equilibrium state feedback controls (17), together with the steady-state price \(p_s\).

Table 5 Equilibrium actions in case \(\beta _i=1\)
Table 6 Equilibrium actions in case \(\beta _1=10\) and \(\beta _i=1,\ i>1\)

To see the impact of a change in a single \(\beta _i\) parameter, in our second experiment we considered the case \(\beta _1=10\) with the other parameters unchanged. Corresponding results are reported in Table 6. In this table, no results are reported for \(N>100\) because they coincide with those of Table 5.

We also performed a third experiment in which \(\beta _i\) were chosen uniformly in the interval [0.9, 1.1]. In this simulation, already for \(N=10\), almost the same results occurred as those reported in Table 5.

From these simulation results, we see that for a small number of players an increase in the number of players significantly affects equilibrium prices and strategies, whereas there is almost no further impact once the number of players exceeds 10. Furthermore, we see from Table 6 that an increase in the \(\beta _1\) parameter has more or less the same impact as increasing the number of players in the game.

For the implementation of Algorithm 4.3, we used MATLAB. The computation time using our code was less than 1 s with an accuracy of \(10^{-5}\). Using a higher accuracy provided similar results. Only when \(N=10{,}000\) did problems occur, apparently because the square root function implemented in MATLAB reached its numerical precision limit.

Finally, we also implemented Algorithm 4.1 with \(\sigma _i=i,\ i\in \mathbf{N},\) for this example and were able to calculate equilibria up to \(N=100{,}000\) within 1 s. But also in this case, numerical problems seemed to occur for a larger number of players.

6 Concluding Remarks

In this paper, we presented numerical algorithms to determine the unique linear feedback Nash policies of the basic scalar linear quadratic differential game. We distinguished three cases: the regulator, the economic, and the mixed game. For each game, we provided a separate algorithm that calculates the unique equilibrium if it exists. Furthermore, for the economic game, in case no unique equilibrium exists, the algorithm indicates whether no equilibrium or multiple equilibria occur.

A straightforward MATLAB implementation of the main body of the algorithm shows that equilibrium actions can be calculated within a second for games with up to 100,000 players. For a larger number of players, numerical problems seem to occur due to the use of the predefined MATLAB square root function. Implementing the algorithms for a larger number of players seems to require more subtle coding, as numerical precision then deteriorates. Also, for such large numbers of players, it might be worthwhile to determine more accurate search intervals for the zero of the involved function in order to improve computation time. The full implementation of the algorithm also requires verification of whether the functions \(f_1\) and \(f_2\) have no, one, or more than one zero on a certain interval. For this problem, a more detailed study of conditions under which the involved function is monotonic may help to limit considerably the interval in which one has to search for solutions.

Finally, it remains an open question how to extend these results to more general games.