Systemic-Risk and Evolutionary Stable Strategies in a Financial Network

  • Published in: Dynamic Games and Applications

Abstract

We consider a financial network represented at any time instance by a random liability graph which evolves over time. The agents connect through credit instruments borrowed from each other or through direct lending, and these create the liability edges. These random edges are modified (locally) by the agents over time, as they learn from their experiences and (possibly imperfect) observations. The settlement of the liabilities of the various agents at the end of the contract period (at any time instance) can be expressed as the solution of random fixed point equations. Our first step is to derive the solutions of these equations (asymptotically, one for each time instance), using a recent result on random fixed point equations. The agents, at any time instance, adopt one of the two available strategies, risky or less risky investments, with an aim to maximize their returns. We aim to study the emerging strategies of the replicator dynamics that drives the financial network. We theoretically reduce the analysis of the complex system to that of an appropriate ordinary differential equation (ODE). Using the attractors of the resulting ODE, we show that the replicator dynamics converges to one of the two pure evolutionary stable strategies (all risky or all less risky agents); a mixed limit can arise only when the observations are imperfect. We verify our theoretical findings using exhaustive Monte Carlo simulations. The dynamics avoids the emergence of the systemic-risk regime (where the majority default). However, if all the agents blindly adopt the risky strategy, it can lead to the systemic-risk regime.

Fig. 1, Fig. 2, Fig. 3

Data Availability

Data sharing is not applicable to this article as no datasets were generated or analysed during the current study. We generated synthetic data which are included in the article itself.

Notes

  1. This approximation is valid for \(\epsilon \in (0, 1)\). In the Appendix we also provide the details for \(\epsilon \in \{0,1\}\).

  2. Given that the agent switching is from \(G_1\), the probability that the contacted agent is from \(G_1\) is actually \(\epsilon _t n(t)/(n(t)-1) \approx \epsilon _t\), the approximation improves as \(n(t) \uparrow \).

  3. We basically assume that each of the \(n_2(k)\) agents of \(G_2\) defaults (asymptotically) independently of the others; this is valid by [13, Section 4] and [7, Theorem 1], as the clearing vectors are asymptotically independent.

  4. See Definition 2 of Appendix, for the definition of the asymptotically stable attractor.

  5. (i) As seen from the proofs in the Appendix, the random trajectory \(\psi (t)\) is upper bounded by \(1+{\bar{\mathcal{N}}}\); hence it is sufficient to consider domains within \(\mathcal{D}_\psi \);

    (ii) it is easy to verify that \(\psi (t) = (\psi _{t_0} -a_i) e^{-t} + a_i\) when \(\epsilon (t)\) starts in interval \(\mathcal{I}_i\), while the solution \(\epsilon (t)\) can be derived using elementary calculus-based steps like

    $$\begin{aligned} \int _{\epsilon _{t_0}}^{\epsilon (t)} \frac{\textrm{d} \epsilon }{\kappa \epsilon (1-\epsilon ) + \epsilon E[\mathcal{L}] } = \int _{t_0}^t \frac{\textrm{d}s}{\psi _s}. \end{aligned}$$

    (iii) The sign of \(\kappa + E[\mathcal{L}] - \kappa \epsilon \) remains the same for any \(\epsilon \in \mathcal{I}_i\), and hence

    $$\begin{aligned} \log \left( \frac{ \mid \kappa + E[\mathcal{L}] - \kappa \epsilon (t) \mid }{ \mid \kappa + E[\mathcal{L}] - \kappa \epsilon _{t_0} \mid } \right) = \log \left( \frac{\kappa + E[\mathcal{L}] - \kappa \epsilon (t) }{ \kappa + E[\mathcal{L}] - \kappa \epsilon (t_0) } \right) . \end{aligned}$$

    (iv) Observe that the resultant solution \(\epsilon (t)\) is strictly monotone as long as \(\epsilon (t)\) is confined in \(\mathcal{I}_i\).

  6. One can partially justify similar approximation for systems with \(\epsilon \in \{0, 1\}\) using [13, Theorem 1 and Subsection 4.2].
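The closed-form trajectory in Note 5(ii) can be checked numerically; a minimal sketch, assuming (as the stated solution implies) that the \(\psi \)-component evolves as \(\dot{\psi } = a_i - \psi \) while \(\epsilon (t)\) stays in \(\mathcal{I}_i\); all numeric values below are illustrative:

```python
import math

def psi_closed_form(t, psi0, a_i):
    # Note 5(ii): psi(t) = (psi0 - a_i) e^{-t} + a_i solves psi' = a_i - psi
    return (psi0 - a_i) * math.exp(-t) + a_i

def psi_euler(t, psi0, a_i, steps=200_000):
    # forward-Euler integration of psi' = a_i - psi, as an independent check
    h, psi = t / steps, psi0
    for _ in range(steps):
        psi += h * (a_i - psi)
    return psi

# hypothetical values of psi_{t_0} and a_i; the two computations agree
assert abs(psi_closed_form(2.0, 0.3, 1.2) - psi_euler(2.0, 0.3, 1.2)) < 1e-3
```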

References

  1. Acemoglu D, Ozdaglar A, Tahbaz-Salehi A (2015) Systemic risk and stability in financial networks. Am Econ Rev 105(2):564–608

  2. Allen F, Gale D (2000) Financial contagion. J Polit Econ 108(1):1–33

  3. Easley D, Kleinberg J (2010) Networks, crowds, and markets: reasoning about a highly connected world. Cambridge University Press

  4. Eisenberg L, Noe TH (2001) Systemic risk in financial systems. Manag Sci 47(2):236–249

  5. Ghatak A, Mallikarjuna Rao K, Shaiju A (2012) Evolutionary stability against multiple mutations. Dyn Games Appl 2(4):376–384

  6. Glasserman P, Young HP (2015) How likely is contagion in financial networks? J Bank Finance 50:383–399

  7. Kavitha V, Saha I, Juneja S (2018) Random fixed points, limits and systemic risk. In: 2018 IEEE conference on decision and control (CDC). IEEE, pp 5813–581

  8. Kushner H, Yin GG (2003) Stochastic approximation and recursive algorithms and applications. Springer

  9. Li H, Wu C, Yuan M (2013) An evolutionary game model of financial markets with heterogeneous players. Procedia Comput Sci 17:958–964

  10. Miekisz J (2008) Evolutionary game theory and population dynamics. Springer, Berlin, pp 269–316

  11. Perko L (2013) Differential equations and dynamical systems, vol 7. Springer, Berlin

  12. Saha I, Kavitha V (2021) Financial replicator dynamics: emergence of systemic-risk-averting strategies. In: International conference on network games, control and optimization. Springer, pp 211–228

  13. Saha I, Kavitha V (2022) Random fixed points, systemic risk and resilience of heterogeneous financial network. Ann Oper Res. https://doi.org/10.1007/s10479-022-05137-w

  14. Singh V, Agarwal K, Kavitha V, et al (2021) Evolutionary vaccination games with premature vaccines to combat ongoing deadly pandemic. In: EAI international conference on performance evaluation methodologies and tools. Springer, pp 185–206

  15. Tembine H, Altman E, El-Azouzi R, Hayel Y (2009) Evolutionary games in wireless networks. IEEE Trans Syst Man Cybern Part B 40(3):634–646

  16. Yang K, Yue K, Wu H, Li J, Liu W (2016) Evolutionary analysis and computing of the financial safety net. In: International workshop on multi-disciplinary trends in artificial intelligence. Springer, pp 255–267

Author information

Authors and Affiliations

Authors

Contributions

All authors contributed to the study conception and design. The first draft of the manuscript was written by Indrajit Saha, while Veeraruna Kavitha helped in improving the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Veeraruna Kavitha.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

1.1 Details of Asymptotic Approximation

Clearing vector (5), defined using Eqs. (2)–(5), can be viewed as the solution of random fixed point equations, which depend upon the realizations of the economic shocks \(\{K_i\}_{i \in {G}_2}\) to the network. We obtain an approximate clearing vector by applying the single-group results of [13, Corollary 1 and Subsection 4.2] only to group \(G_2\). Towards this, we consider a fictitious big node (as in [13]), and from each node \(j \in G_2 \) a dedicated fraction \((1-c_\epsilon ) \), with \( c_\epsilon := \frac{\alpha +\alpha \epsilon }{\alpha +\epsilon }\), is directed towards the fictitious node. This financial system is identical to the graphical model described by [13, equations (1)–(6) and (40)–(41)], after the following mapping:

$$\begin{aligned} G_i= & {} K_i, \ \eta _i^{bs} =0 \ (a.s.), \ W_{j,i}=\frac{L_{ji}}{y}= \frac{I_{ji}c_\epsilon }{np_{ss}(1-\epsilon )}, \ p_{c_1} =p_{c_2}= 0, \nonumber \\ p_1 = p_2= & {} p_{ss}, \text{ and } \ p^{sb}_1= p^{sb}_2 = 1- c_\epsilon . \end{aligned}$$
(37)

With the above mapping, the required assumptions of [13] are satisfied: assumption B.1 is immediately satisfied (see (5)); assumption B.3 is satisfied with \(\sigma = 1\) and with any \(0 \le \varsigma < 1\) (as the fixed point equations do not depend upon \(x_b\)). The weight factors are as in [13, Subsection 4.2], and hence assumption B.2 is not required. Finally, assumption B.4 is satisfied with \(\rho =1\). Hence by [13, Corollary 1] (as \(0< c_\epsilon < 1\) with \(\epsilon \in (0,1)\)), the solution of the random fixed point equations (5) can be approximated using that corresponding to the limit system given in [13, Corollary 1]. Thus we have the convergence provided in Eqs. (7)–(9) (see Note 6) for any \(\epsilon \in (0,1)\) for \(G_2\) (as the network size increases to infinity). This in turn provides the convergence results for \(G_1\).

Network with only risky agents (\(\epsilon =0\)): The total amount lent by any agent to its neighbours equals \(\approx w \alpha / (1-\alpha )\), where the approximation is again accurate in the limit. Similarly, the total amount borrowed by any agent also equals \(\approx w \alpha / (1-\alpha )\). Thus any agent invests \(\approx w\) (their initial wealth) in risky assets. In this case the limit aggregate clearing vector (8) reduces to the following:

$$\begin{aligned} {{\bar{x}}}^{ \infty } =E \bigg [\min \left\{ \bigg ( K_i+ {{\bar{x}}}^{ \infty } - v\bigg )^+, y \right\} \bigg ]. \end{aligned}$$
(38)

When \(w(1+d) \ge v\), it is easy to observe that \(\bar{x}^{\infty}= y\) is the unique solution of the above equation. Hence \(P_d=0\), and therefore \(R_i^{2}=K_i -v\) in the limit for any i. For this case [13, Corollary 1] is not applicable; however, [13, Theorem 1] (applied to a single group as in [13, Subsection 4.2]) partially justifies the above approximation.
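Equation (38) can also be solved by simple Picard iteration; a minimal sketch with a two-point shock distribution (\(K = k_u\) w.p. \(\delta \) and \(K = k_d\) w.p. \(1-\delta \)) and hypothetical parameters chosen so that the downward shock is absorbed, in which case the iteration settles at \({\bar{x}}^{\infty } = y\):

```python
def clearing_fixed_point(k_d, k_u, delta, v, y, iters=1000):
    # Picard iteration for x = E[min((K + x - v)^+, y)],
    # with K = k_u w.p. delta and K = k_d w.p. 1 - delta (hypothetical values).
    x = 0.0
    for _ in range(iters):
        x = ((1 - delta) * min(max(k_d + x - v, 0.0), y)
             + delta * min(max(k_u + x - v, 0.0), y))
    return x

# Parameters chosen so the downward shock is absorbed (k_d + y - v >= y):
# the unique solution is full repayment, x = y.
x_bar = clearing_fixed_point(k_d=1.05, k_u=1.6, delta=0.5, v=1.0, y=0.8)
assert abs(x_bar - 0.8) < 1e-9
```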

Network with only less risky agents (\(\epsilon =1\)): On the other hand, with \(\epsilon = 1\) all agents are less risky and they invest completely in risk-free assets. Thus the return of any agent \(i\in G_1\) equals \(R_i^{1}=w(1+r_s) - v\).

1.2 Proofs

Proof of Lemma 1:

We consider the following scenarios with \(v< k_d\). The average clearing vector for the group \(G_2\) agents satisfies (see (8)):

$$\begin{aligned} {{\bar{x}}}^{ \infty }= & {} \min \left\{ \bigg ( k_d+ c_\epsilon {{\bar{x}}}^{ \infty } - v\bigg )^+, y \right\} (1-\delta ) \nonumber \\{} & {} + \min \left\{ \bigg ( k_u+ c_\epsilon {{\bar{x}}}^{ \infty } - v\bigg )^+, y \right\} \delta . \end{aligned}$$
(39)

Case 1: First consider the case when the downward shock can be absorbed, i.e., the default probability is \(P_d= 0\). If we have \(k_d +yc_\epsilon -v \ge y\), then the average clearing vector \({{\bar{x}}}^{ \infty } = y \delta + y(1-\delta ) =y\), and the above condition simplifies to the bound:

$$\begin{aligned} k_d-v +yc_{\epsilon } \ge y \implies c_{\epsilon } \ge \frac{y-{\underline{w}}}{y}. \end{aligned}$$

Case 2: Consider the case in which only the agents that receive the shock default, i.e., when \(P_d =1-\delta \). The corresponding average clearing vector equals:

$$\begin{aligned} {{\bar{x}}}^{ \infty } = y\delta + ({\underline{w}} +c_\epsilon {{\bar{x}}}^{ \infty })(1-\delta ) \text{ and } \text{ satisfies } k_d- v +c_{\epsilon } {{\bar{x}}}^{ \infty } < y, k_u- v +c_{\epsilon } {{\bar{x}}}^{ \infty } \ge y. \end{aligned}$$

In this case the average clearing vector reduces to \( {{\bar{x}}}^{ \infty } = \frac{y\delta + {\underline{w}}(1-\delta ) }{1-c_\epsilon (1-\delta ) }, \) and using the same in the bounds we have:

$$\begin{aligned} \frac{y-{\overline{w}}}{y-(1-\delta )({\overline{w}}-{\underline{w}})} \le c_\epsilon <\frac{y-{\underline{w}}}{y}. \end{aligned}$$

Case 3: (Systemic-risk regime) Consider the case in which all the agents default, i.e., when \(P_d = 1\). Here we first calculate \({{\bar{x}}}^{ \infty }\), obtained by solving the following fixed point equation: if \(k_u- v+ c_\epsilon {\bar{x}}^{ \infty } < y\), then from (39) the average clearing vector reduces to:

$$\begin{aligned} {{\bar{x}}}^{ \infty }= & {} (k_d- v+ c_\epsilon {{\bar{x}}}^{ \infty })(1-\delta ) + (k_u- v+ c_\epsilon {{\bar{x}}}^{ \infty })\delta =\frac{E [W]}{1-c_\epsilon }. \end{aligned}$$

Substituting \({{\bar{x}}}^{ \infty }\) we have the required bound: \( c_{\epsilon } < \frac{y- {\overline{w}}}{y- (1-\delta )({\overline{w}}-{\underline{w}})}. \)
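The three cases of Lemma 1 amount to classifying \(P_d\) by the position of \(c_\epsilon \) relative to the two thresholds derived above; a small sketch, where the readings \({\underline{w}} = k_d - v\) and \({\overline{w}} = k_u - v\) are assumptions inferred from the bounds above, and all numeric values are hypothetical:

```python
def default_regime(c_eps, k_d, k_u, v, delta, y):
    # Classify P_d using the thresholds of Lemma 1:
    # w_lo = k_d - v, w_hi = k_u - v (assumed readings of the bounds).
    w_lo, w_hi = k_d - v, k_u - v
    a1 = (y - w_lo) / y                                   # P_d = 0 iff c_eps >= a1
    a2 = (y - w_hi) / (y - (1 - delta) * (w_hi - w_lo))   # P_d = 1 iff c_eps < a2
    if c_eps >= a1:
        return 0.0            # shock absorbed: no defaults (Case 1)
    if c_eps >= a2:
        return 1 - delta      # only shocked agents default (Case 2)
    return 1.0                # systemic-risk regime: all default (Case 3)

# hypothetical parameters with v < k_d; thresholds are a1 = 0.85, a2 ~ 0.606
params = dict(k_d=0.5, k_u=1.2, v=0.2, delta=0.5, y=2.0)
assert default_regime(0.9, **params) == 0.0
assert default_regime(0.7, **params) == 0.5
assert default_regime(0.5, **params) == 1.0
```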

Proof of Theorem 1.(a):

By Lemma 4, the mapping \(\epsilon \mapsto q_\epsilon \) is monotone; there exists \({{\bar{\epsilon }}}\), with \({{\bar{\epsilon }}}_1 < {{\bar{\epsilon }}} \le {{\bar{\epsilon }}}_2\), such that \(q_\epsilon = 1-\delta \) for all \(\epsilon < {\bar{\epsilon }}\) and equals 1 for the rest; here \({{\bar{\epsilon }}}_1\), \({{\bar{\epsilon }}}_2\) are given by Lemma 5.

Proof of part (b) From Lemma 4, \({{\bar{\epsilon }}} < 1\) if and only if the conditions of part (b) are satisfied.

Proof of part (c) By the proof of Lemma 4 and the definition of \(g(\epsilon )\), \({\bar{\epsilon }}\) equals the zero of \(\epsilon ^2 + m_1 \epsilon + m_2 = 0\) (when condition (19) is satisfied), with \(m_1, m_2\) appropriate constants. \(\blacksquare \)

Lemma 4

There exists a unique \({\bar{\epsilon }}\), with \({{\bar{\epsilon }}}_1 \le {{\bar{\epsilon }}} \le {\bar{\epsilon }}_2\) and with \(\{{{\bar{\epsilon }}}_i\}\) as in Lemma 5, such that \(q_\epsilon := P(R^1 \ge R^2)\) satisfies the following threshold property:

$$\begin{aligned} q_\epsilon = {\left\{ \begin{array}{ll} 1-\delta &{} \text { if }\ \epsilon < \bar{\epsilon } \\ 1 &{} \text { if }\ \epsilon \ge \bar{\epsilon }. \end{array}\right. } \end{aligned}$$
(40)

Further \({{\bar{\epsilon }}} < 1\) if and only if equation (19) is satisfied.

Proof

Case A: When \(P_d = 0\) for some \(\epsilon \), the returns of the agents are given by the following (recall \(r_b \ge r_s > d\), \(w(1+d) \ge v\)):

$$\begin{aligned} R^1 (\epsilon )= & {} \left( w\epsilon (1+r_s)+ \frac{(1-\alpha )(1-\epsilon )}{(\alpha +\epsilon )} y -v \right) ^+ \nonumber \\= & {} \ \left( w\epsilon (1+r_s)+ w (1-\epsilon ) (1+r_b) -v \right) ^+ = \ \left( w (1+r_b) - v + w \epsilon (r_s - r_b) \right) \text{, } \text{ and } \nonumber \\ \end{aligned}$$
(41)
$$\begin{aligned} R^2 (\epsilon )= & {} \left( K_i+\frac{\alpha (1+\epsilon ) }{\alpha + \epsilon } y -v-y \right) ^+ = \left( K_i - w \epsilon (1+r_b) -v \right) ^+ \nonumber \\= & {} \left\{ \begin{array}{llll} R^2_u &{} \text{ w.p. } \delta &{} \text{ where } R^2_u := w (1+u) - v + w \epsilon (u-r_b) \\ \left( R^2_d \right) ^+ &{} \text{ w.p. } 1- \delta &{} \text{ where } R^2_d := w (1+d) - v + w \epsilon (d-r_b) . \end{array} \right. \end{aligned}$$
(42)

Hence with upward movement, \(R^2 - R^1 = R^2_u - R^1 = w(u-r_b) +w\epsilon (u- r_s) > 0. \) It is also clear that \( \left( R^2_d (\epsilon ) \right) ^+- R^1 (\epsilon ) < 0\). Thus \(q_\epsilon = 1-\delta \), when \(P_d = 0\). By Lemma 5, this regime lasts for all \(\epsilon \) satisfying \(0 \le \epsilon \le \bar{\epsilon }_1 \).
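The comparisons in Case A can be verified numerically from (41)–(42); a minimal sketch with illustrative (hypothetical) parameters satisfying \(r_b \ge r_s > d\) and \(w(1+d) \ge v\):

```python
def returns_no_default(w, eps, r_s, r_b, u, d, v):
    # Returns in the no-default regime (P_d = 0), following (41)-(42):
    r1 = w * (1 + r_b) - v + w * eps * (r_s - r_b)            # less risky, G1
    r2_up = w * (1 + u) - v + w * eps * (u - r_b)             # risky, upward shock
    r2_down = max(w * (1 + d) - v + w * eps * (d - r_b), 0.0) # risky, downward shock
    return r1, r2_up, r2_down

# hypothetical parameters: r_b >= r_s > d and w(1+d) >= v
w, eps, r_s, r_b, u, d, v = 1.0, 0.3, 0.04, 0.06, 0.2, -0.1, 0.5
r1, r2u, r2d = returns_no_default(w, eps, r_s, r_b, u, d, v)
# With an upward shock, the risky return dominates by w(u-r_b) + w*eps*(u-r_s):
assert abs((r2u - r1) - (w * (u - r_b) + w * eps * (u - r_s))) < 1e-12
assert r2u > r1 and r2d < r1   # hence q_eps = 1 - delta in this regime
```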

Case B: When \(P_d = 1-\delta \), by Lemma 5 we have \({{\bar{\epsilon }}}_1 < \epsilon \le {\bar{\epsilon }}_2\). In this case \( R^2_u >0\), but \(R^1 (\epsilon ) \) can be positive or zero. Since \(R^2_d = 0\), we have that \(q_\epsilon = P(R^1(\epsilon ) \ge R^2(\epsilon ) ) \ge 1-\delta \). Further, in this case,

$$\begin{aligned} R^1 (\epsilon )= & {} \left( w\epsilon (1+r_s)+ \frac{(1-\alpha )(1-\epsilon )}{(\alpha +\epsilon )} {{\bar{x}}}^\infty -v \right) ^+ \text{, } \text{ and, } \\ R^2_u= & {} \left( w(1+\epsilon )(1+u) +\frac{\alpha (1+\epsilon ) }{\alpha + \epsilon } {{\bar{x}}}^\infty -v-y \right) . \end{aligned}$$

And hence

$$\begin{aligned} R^2_u (\epsilon ) - R^1 (\epsilon )\le & {} w(1+u) +w\epsilon (u- r_s)-y + {{\bar{x}}}^\infty \bigg (\frac{2\alpha +\epsilon -1}{\alpha +\epsilon }\bigg ) \nonumber \\= & {} w(1+u) +w\epsilon (u- r_s) + ({{\bar{x}}}^\infty -y) -{\bar{x}}^\infty \bigg ( \frac{1-\alpha }{\alpha +\epsilon }\bigg ) \nonumber \\= & {} w(u-r_b) +w\epsilon (u- r_s) + ({{\bar{x}}}^\infty -y)\bigg (1- \frac{1-\alpha }{\alpha +\epsilon }\bigg ) . \end{aligned}$$
(43)

If the upper bound on RHS of (43) is negative, then clearly \(R^2_u (\epsilon ) < R^1(\epsilon )\). When the RHS is positive and \(R^1(\epsilon ) = 0\), then clearly \(R^2_u (\epsilon ) > R^1(\epsilon )\). On the other hand, when RHS is positive and \(R^1(\epsilon ) > 0\), then the RHS is the exact value and not the upper bound, and hence again \(R^2_u (\epsilon ) > R^1(\epsilon )\). Thus in all, \(R^2_u (\epsilon ) > R^1(\epsilon )\) if and only if the RHS of (43) is positive.

Hence, with \(P_d = 1-\delta \), we compute the following and derive the required conclusions by checking the sign of the RHS of (43). First observe from Lemma 1 that

$$\begin{aligned} ({{\bar{x}}}^\infty - y )\bigg (1- \frac{1-\alpha }{\alpha +\epsilon }\bigg ) = (1-\delta ) \frac{ w(1+d) -v - w\epsilon (r_b-d) }{\epsilon + \alpha \delta - \alpha \epsilon (1-\delta ) } \bigg (2\alpha +\epsilon - 1 \bigg ) \end{aligned}$$
(44)

and consider the following function, constructed from the RHS of (43) (the denominator in the above is positive, so we multiply it with the remaining terms of the RHS):

$$\begin{aligned} g(\epsilon ):= & {} w(u-r_b)(\epsilon +\alpha \delta -\alpha \epsilon (1-\delta ))+w\epsilon (u- r_s) (\epsilon +\alpha \delta -\alpha \epsilon (1-\delta )) \nonumber \\{} & {} + (2\alpha -1+\epsilon )(1-\delta )(w(1+d) -v - w\epsilon (r_b-d)),\\ g^{'}(\epsilon )= & {} w(u-r_b) (1-\alpha (1-\delta ))+w(u-r_s)(2\epsilon +\alpha \delta -2\alpha \epsilon (1-\delta )) \nonumber \\{} & {} +(1-\delta )\bigg (w(1+d) -v - w\epsilon (r_b-d) -w(r_b-d) (2\alpha -1+\epsilon )\bigg ), \nonumber \\ g^{''}= & {} 2w(u-r_s)(1-\alpha (1-\delta ))-2w(1-\delta )(r_b-d),\nonumber \\ g(\bar{\epsilon }_1):= & {} w(u-r_b)(\bar{\epsilon }_1+\alpha \delta -\alpha \bar{\epsilon }_1(1-\delta ))+w\bar{\epsilon }_1(u- r_s) (\bar{\epsilon }_1+\alpha \delta -\alpha \bar{\epsilon }_1(1-\delta )) > 0 , \nonumber \\ g(1)= & {} w(1+2\alpha \delta - \alpha )(2u-r_s-r_b)+ 2\alpha (1-\delta )(w(1+d)-v-w(r_b-d)) \nonumber . \end{aligned}$$
(45)

Observe that \(g(\cdot )\) is concave in \(\epsilon \) if \((u-r_s)(1-\alpha (1-\delta ))< (1-\delta )(r_b-d)\), else it is convex in \(\epsilon \).

Sub-case 1: Consider the regime when \((u-r_s)(1-\alpha (1-\delta ))< (1-\delta )(r_b-d)\). Observe that \(g(\bar{\epsilon }_1) > 0\) (by the definition of \({{\bar{\epsilon }}}_1\)). By the concavity of \(g(\cdot )\), the function (and hence the RHS of (43)) can change sign at most once. Further, when the RHS is zero, clearly \(q_\epsilon = 1\). In other words, there exists \({{\bar{\epsilon }}}\) with \(\bar{\epsilon }_1 \le {{\bar{\epsilon }}} \le \bar{\epsilon }_2\), such that \(q_\epsilon = 1-\delta \) for all \(\epsilon < {{\bar{\epsilon }}}\) and equals 1 when \( {{\bar{\epsilon }}} \le \epsilon < {\bar{\epsilon }}_2\).

Sub-case 2: Consider the regime \((u-r_s)(1-\alpha (1-\delta )) \ge (1-\delta )(r_b-d)\). With this, it is easy to verify that \(g(1) > 0\).

With \(g(1) > 0\) we then have \(g^{'}(\bar{\epsilon }_1) > 0\), because \(2w(u-r_s)(1-\alpha (1-\delta ))+ (1-\alpha )(1-2\alpha (1-\delta )) \ge 0\). Once again, \(g(\bar{\epsilon }_1) > 0\). In this case, due to convexity, \(g(\epsilon )\) does not change sign for all \(\bar{\epsilon }_1 \le \epsilon \le \bar{\epsilon }_2\), and hence \(q_\epsilon = 1-\delta \) for all \(\bar{\epsilon }_1 \le \epsilon \le \bar{\epsilon }_2\).

Case C: When \(P_d = 1\), then \(R^2 = 0\) a.s., while \(R^1 \ge 0\); thus \(q_\epsilon = 1\), and this holds for all \(\epsilon > \bar{\epsilon }_2\). This establishes the threshold property. Further, one can have \({{\bar{\epsilon }}} < 1\) either when \(g(1) < 0\) (which also ensures \({{\bar{\epsilon }}}_1 < 1\)) or when \({\bar{\epsilon }}_2 < 1\), and hence the result. \(\square \)
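As a quick numerical check of the function \(g\) in (45): at \(\epsilon = \bar{\epsilon }_1\) the third term vanishes (by the definition of \(\bar{\epsilon }_1\) in Lemma 5), leaving the stated positive value \(g(\bar{\epsilon }_1)\); a sketch with hypothetical parameters satisfying \(r_b \ge r_s > d\) and \(w(1+d) \ge v\):

```python
def g(eps, w, u, d, r_s, r_b, v, alpha, delta):
    # g from (45); its sign matches the sign of the RHS of (43)
    a = eps + alpha * delta - alpha * eps * (1 - delta)
    return (w * (u - r_b) * a
            + w * eps * (u - r_s) * a
            + (2 * alpha - 1 + eps) * (1 - delta)
            * (w * (1 + d) - v - w * eps * (r_b - d)))

# hypothetical parameters; eps1 = (w(1+d)-v) / (w(r_b-d)) as in Lemma 5
w, u, d, r_s, r_b, v, alpha, delta = 1.0, 0.3, -0.1, 0.04, 0.06, 0.8, 0.4, 0.5
eps1 = (w * (1 + d) - v) / (w * (r_b - d))
a1 = eps1 + alpha * delta - alpha * eps1 * (1 - delta)
expected = w * (u - r_b) * a1 + w * eps1 * (u - r_s) * a1
assert abs(g(eps1, w, u, d, r_s, r_b, v, alpha, delta) - expected) < 1e-12
assert expected > 0   # i.e. g(eps1) > 0, as used in the proof
```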

Lemma 5

Define \({{\bar{\epsilon }}}_1 := \frac{w(1+d)-v}{w(r_b-d)} \). There exists \({{\bar{\epsilon }}}_2 \ge {{\bar{\epsilon }}}_1\), which is strictly greater whenever \( {\bar{\epsilon }}_1<1\), such that

$$\begin{aligned} P_d= & {} \left\{ \begin{array}{lll} 0 &{} \text{ if } 0 \le \epsilon \le {{\bar{\epsilon }}}_1 \\ 1-\delta &{} \text{ if } {{\bar{\epsilon }}}_1 < \epsilon \le {{\bar{\epsilon }}}_2, \\ 1 &{} \text{ if } \epsilon > {{\bar{\epsilon }}}_2. \end{array} \right. \end{aligned}$$
(46)

Further \({{\bar{\epsilon }}}_2 < 1\) if and only if the second line of Eq. (19) is satisfied.

Proof

First consider the following function of \(\epsilon \), constructed using the bound \(a_1\) and \(c_\epsilon \) of Lemma 1; both denominators are positive, and \(h\) is the numerator of the difference \( c_\epsilon - a_1\) after placing it over a common denominator:

$$\begin{aligned} h(\epsilon ) = \bigg (\alpha +\alpha \epsilon \bigg )y- \bigg (y-k_d+v\bigg )\bigg (\alpha +\epsilon \bigg ) = (\alpha +\epsilon ) \bigg (w(1+d) - v + w \epsilon (d-r_b) \bigg ). \end{aligned}$$

From Lemma 1, if \(h(\epsilon ) \ge 0\) then \(P_d= 0\), and clearly this regime lasts for all \(\epsilon \) with \(0 \le \epsilon \le \bar{\epsilon }_1\). Furthermore, beyond \(\bar{\epsilon }_1\) we have \(P_d > 0\).

Next consider the following function of \(\epsilon \), constructed similarly using the bound \(a_2\) and \(c_\epsilon \) of Lemma 1:

$$\begin{aligned} f(\epsilon )= & {} \bigg (\alpha +\alpha \epsilon \bigg )\bigg (y-(1-\delta )w(1+\epsilon )(u-d)\bigg ) - \bigg (y-k_u+v\bigg ) \bigg (\alpha +\epsilon \bigg )\nonumber \\= & {} \ (\alpha +\epsilon ) \bigg (w(1+d)-v -w\epsilon (r_b-d)\bigg )\nonumber \\ {}{} & {} + \bigg ( (\alpha +\alpha \epsilon ) \delta + (1-\alpha ) \epsilon \bigg ) w (u-d) (1+\epsilon ). \end{aligned}$$
(47)

From Lemma 1, if \(f(\epsilon ) \ge 0\) then \( P_d \le 1-\delta \), and if \(f(\epsilon ) < 0\) then \( P_d =1\). Therefore it suffices to study this function. By (47), the values f(0), f(1) and the derivatives of f are given by:

$$\begin{aligned} f(0)= & {} \alpha \bigg (w(1+d)-v +w\delta (u-d)\bigg ) > 0,\\ f(1)= & {} (1+\alpha ) \bigg ( (w(1+d)-v) - w(r_b -d) \bigg ) + w \bigg ( 4 \alpha \delta + 2(1-\alpha ) \bigg ) (u-d), \\ f^{''} (\epsilon )= & {} 2w\bigg (u-r_b -\alpha (u-d)(1-\delta )\bigg ) \text{ for } \text{ all } \epsilon \bigg [\text{ as } d-r_b = u-r_b - (u-d)\bigg ]\text{, } \text{ and, } \\ f^{'}(0)= & {} w\bigg ( \alpha (u- r_b) -(2 \alpha (1-\delta ) -1) (u-d)\bigg )+ w(1+d) -v. \end{aligned}$$

It is clear that \(f(0) >0\). When the second derivative \(f^{''} <0\), the function is concave in \(\epsilon \); then clearly \(f(\epsilon ) \ge 0\) for all \(\epsilon \le {{\bar{\epsilon }}}_2\) (for some \(0< {{\bar{\epsilon }}}_2\le 1\)) and \(f(\epsilon ) <0\) for the rest, where \( {{\bar{\epsilon }}}_2 < 1\) if and only if \(f(1) < 0\).

Now consider the case with \(f^{''} \ge 0\), so that f is convex (or linear). Under this condition, clearly the first derivative \(f^{'}(0) > 0\) (as \(\alpha ^2 -2 \alpha (1-\delta ) +1 >0\)). Thus \(f(0) >0\) implies \(f(\epsilon ) > 0\) for all \(\epsilon \le {\bar{\epsilon }}_2\), and so \( P_d \le 1-\delta \) for all \(\epsilon \le {{\bar{\epsilon }}}_2\), where \({{\bar{\epsilon }}}_2=1\).

In all, we have the existence of an \({{\bar{\epsilon }}}_2\) such that \(f(\epsilon ) \ge 0\) (and hence \(P_d \le 1-\delta \)) if and only if \(\epsilon \le {{\bar{\epsilon }}}_2\). Also observe from the second equality of (47) that, at \(\epsilon = \bar{\epsilon }_1\), we have \(f(\bar{\epsilon }_1)> 0\). Thus \({\bar{\epsilon }}_1 < {{\bar{\epsilon }}}_2\), whenever \({{\bar{\epsilon }}}_1 <1\). Recall \(P_d = 0\) if and only if \(\epsilon \le {{\bar{\epsilon }}}_1\). Hence we have (46).

Further \(\bar{\epsilon }_2 < 1\) if and only if \(f(1) < 0\) (and \(f'' <0\)) which is equivalent to the second line of (19). \(\square \)

We begin with some definitions.

Definition 2

Asymptotically stable (Attractor): A set A is said to be asymptotically stable in the sense of Lyapunov, which we refer to as an attractor, if there exists a neighbourhood (called the domain of attraction) such that every ODE trajectory starting in it converges to A as time progresses (e.g. [8]).

Definition 3

Equicontinuous in the extended sense [8]: Suppose that, for each n, \(h_n(\cdot )\) is an \({\mathcal {R}}^r\)-valued measurable function on \((-\infty ,\infty )\), and \(\lbrace h_n(0)\rbrace \) is bounded. Also suppose that for each T and \(\varepsilon > 0\) there is a \(\kappa > 0\) such that

$$\begin{aligned} \limsup _{n} \sup _{0 \le t-s \le \kappa , \ t \le T} \Vert h_n(t)-h_n(s) \Vert \le \varepsilon . \end{aligned}$$

Then we say that \(\lbrace h_n(\cdot )\rbrace \) is equicontinuous in the extended sense. By [8, Theorem 2.2], there exists a subsequence that converges to some continuous limit function.

Proof of Theorem 2:

We prove the result using [8, Theorem 2.2, pp. 131], as \({{\bar{g}}}_\epsilon (\cdot )\) is only measurable. Towards this, we first need to prove (a.s.) equicontinuity in the extended sense of the following sequence of two-dimensional functions defined for each n (for any \(t \ge 0\)):

$$\begin{aligned}{}[\epsilon ^n(t), \psi ^n(t) ] := [\epsilon _n, \psi _n ] + \sum _{i=n}^{m(t_n+t)-1} \gamma _i Y_i, m(t) := \max \left\{ n: \sum _{k=0}^{n-1} \gamma _k \le t \right\} . \end{aligned}$$
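The time-to-index map \(m(t)\) above is straightforward to compute; a minimal sketch, with \(\gamma _k = 1/(k+1)\) used purely as an illustrative (assumed) step-size sequence:

```python
def m_of_t(t, gamma):
    # m(t) = max{ n : sum_{k < n} gamma_k <= t }, the piecewise-constant
    # map from continuous time back to the iterate index.
    s, n = 0.0, 0
    while s + gamma(n) <= t:
        s += gamma(n)
        n += 1
    return n

gamma = lambda k: 1.0 / (k + 1)   # illustrative step sizes (assumption)
assert m_of_t(1.0, gamma) == 1    # gamma_0 = 1 <= 1, but 1 + 1/2 > 1
assert m_of_t(1.5, gamma) == 2    # 1 + 1/2 <= 1.5, but 1 + 1/2 + 1/3 > 1.5
```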

This proof goes through almost exactly as the proof of [8, Theorem 2.1, pp. 127], and we follow exactly the same pattern. We begin by discussing some initial steps: (i) the random vector \(Y_t\) depends on \((\epsilon _t, \psi _t)\) and \((W_{t+1}, \mathcal {N}_{t+1})\); (ii) observe \(\sup _t E|Y_{t} |^2 < \infty \), which is trivially true by the law of large numbers (LLN); (iii) clearly \(\epsilon _t \le 1\) and \(\psi _t \le 1+{\bar{\mathcal {N}}}\) for all t and all sample paths; (iv) we have \(E[Y_t |\mathcal{G}_t] = \bar{\textbf{g}} (\epsilon _t, \psi _t)+[e_t, 0]\), where

$$\begin{aligned} e_t := E\bigg [\bigg (\frac{1}{\psi _{t+1}}- \frac{1}{\psi _{t}} \bigg )\bigg (W_{t+1} -\mathcal {N}_{t+1}\epsilon _t \bigg ) \Bigg |{\mathcal {G}}_t \bigg ], \end{aligned}$$

and (v) the projection term in [8], \(Z_t \equiv 0\).

We further need to handle the error term \(e_t\). We now show that \(e_t\) converges to zero in the limit, and then continue with the rest of the proof.

Towards this, observe that \(|\psi _{t} - \mathcal {N}_{t}|\le \psi _{t}+ \bar{\mathcal {N}} \) where \(\bar{\mathcal {N}}\) is such that \(P(\mathcal {N}\le \bar{\mathcal {N}}) = 1\). Thus from (15),

$$\begin{aligned} \bigg |\frac{1}{\psi _{t+1}}- \frac{1}{\psi _{t}} \bigg |= \left|\frac{ \gamma _t \left( \psi _t - \mathcal {N}_{t}\right) }{\psi _{t} \psi _{t+1}} \right|\le \frac{\gamma _t}{\psi _{t+1}} \left( 1+ \frac{\bar{\mathcal {N}}}{\psi _{t} } \right) . \end{aligned}$$
(48)

Consider the (almost sure) sample paths in which \(\psi _t \rightarrow \ E[\mathcal {N}] \) by LLN, as one can rewrite:

$$\begin{aligned} \psi _{t} = \frac{ n(t)}{t+n_0} = \frac{n_0 + \sum _{k=1}^t \mathcal {N}_k }{t+n_0} . \end{aligned}$$
(49)
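Since (49) is a running sample mean, \(\psi _t \rightarrow E[\mathcal {N}]\) a.s. by the LLN; a quick simulation with a stand-in distribution for \(\mathcal {N}\) (uniform over \(\{1,\dots ,5\}\), an assumption for illustration only):

```python
import random

def psi_T(T, n0=5, seed=0):
    # psi_t = (n0 + sum_{k <= t} N_k) / (t + n0), as in (49);
    # N_k ~ Uniform{1,...,5} is a stand-in distribution (assumption)
    rng = random.Random(seed)
    n = n0
    for _ in range(T):
        n += rng.randint(1, 5)
    return n / (T + n0)

assert abs(psi_T(200_000) - 3.0) < 0.05   # LLN: psi_t -> E[N] = 3
```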

For such sample paths (i.e. almost surely), \(\psi _{t} \ge \varepsilon \) for all t for some appropriate \(\varepsilon > 0\) and hence using (48)

$$\begin{aligned} |e_t |\le \frac{\bar{s} \gamma _t}{\varepsilon } E\bigg [\big |W_{t+1} - \mathcal {N}_{t+1}\epsilon _t \big |\bigg |{\mathcal {G}}_t \bigg ] \le \frac{ (2E[\mathcal {N}] +E[\mathcal {S}]) \bar{s} }{\varepsilon } \gamma _t \quad \text{ a.s., } \text{ with } {{\bar{s}} } := 1+\frac{{{\bar{\mathcal {N}}}}}{\varepsilon },\nonumber \\ \end{aligned}$$
(50)

and recall \(W_{t+1} = \xi _t + \Xi _1(t) - \Xi _2(t)\).

Thus \(e_t \rightarrow 0\) and \(\sum _t |e_t |\gamma _t < \infty \) a.s. The update equation (starting at n and for any \(t\ge 0\)) can be written as below (as in [8]):

$$\begin{aligned} {[} \epsilon ^{n}(t), \psi ^n(t)]= & {} [\epsilon _n, \psi _n] + \int _{0}^{t} \bar{\textbf{g}} (\epsilon ^n(s), \psi ^n(s))\textrm{d}s + S^{n}(t) +\rho ^n(t) + \sum _{i=n}^{m(t+t_n)-1} \gamma _i e_i \\ \text{ with, } \rho ^n(t):= & {} \sum _{i=n}^{m(t+t_n)-1} \gamma _i\bar{\textbf{g}} (\epsilon _n,\psi _n) - \int _{0}^{t} \bar{\textbf{g}} (\epsilon ^n(s),\psi ^n(s))\textrm{d}s, \text{ and } \\ S^n(t):= & {} \sum _{i=n}^{m(t+t_n)-1} \gamma _i \Delta S_i = S_{m(t+t_n)-1} - S_n, \ \Delta S_n \ = \ Y_n- \bar{\textbf{g}} (\epsilon _n, \psi _n) -e_n. \end{aligned}$$

Observe in the above that \(\epsilon ^n(t_k) = \epsilon _k\) for any \(k > n\), where \(t_k := \sum _{i <k} \gamma _i\). By the LLN, \(\psi _k \rightarrow E[\mathcal {N}]\) in almost all sample paths, and the idea is to show the equicontinuity of the functions \(\{\epsilon ^n (\cdot ), \psi ^n(\cdot ) \}_n\) for those sample paths; this guarantees the existence of a limit function (along a sub-sequence) as in [8], and proceeding further as in [8] we can show that the limit function satisfies ODE (17).

The arguments required to show the equicontinuity in the extended sense are exactly as in [8], because of the following: (i) \(\{S_n\}\) is a martingale, and using the well-known martingale inequality (as in [8]), \( \lim _m P \big \lbrace \sup _{j\ge m}|S_j-S_m |\ge \mu \big \rbrace =0 \) for any \(\mu \), because

$$\begin{aligned}\sum _i \gamma _i^2< \infty \text{ and } E[\Delta S_i \Delta S_j] =E[E[\Delta S_i \Delta S_j|\mathcal{G}_j ]] = 0, \forall j < i; \end{aligned}$$

(ii) recall \(\psi _k \rightarrow E[\mathcal {N}]\) as \(k\rightarrow \infty \) for the chosen sample paths; there exist \(C_g, {{\bar{k}}} < \infty \) such that \( |{{\bar{g}}}_\epsilon (\epsilon _k, \psi _k) |\le |\beta |/\psi _k < C_g\) and \(|E[\mathcal {N}] - \psi _k |\le C_g\), and hence \(\sup _{t} |\rho ^k (t) |\le 2 C_g \gamma _k \) for all \(k\ge {{\bar{k}}}\); (iii) the sequence \(\{[\epsilon ^n(0), \psi ^n(0) ]\}_{n\ge \bar{k}} = \{[\epsilon _n, \psi _n]\}_{n \ge \bar{k}} \) is bounded a.s. by \([1, 1+{ {\bar{\mathcal {N}}}}]\); and (iv) finally for any \(t \ge t'\):

$$\begin{aligned}{} & {} \bigg |\int _{0}^{t} {\bar{g}}_\epsilon (\epsilon ^n(s),\psi ^n(s))\textrm{d}s - \int _{0}^{t'} {\bar{g}}_\epsilon (\epsilon ^n(s),\psi ^n(s))\textrm{d}s \bigg |\\{} & {} \quad \le (t-t') \sup _{ \epsilon \in [0,1], k \ge n} |{\bar{g}}_\epsilon (\epsilon ,\psi _k) |= C_g (t-t') . \end{aligned}$$

Hence with \(\Theta ^n (\cdot ) := (\epsilon ^n(\cdot ), \psi ^n(\cdot ))\), the sequence \(\{ \Theta ^n(\cdot ) \}_n\) is equicontinuous in the extended sense almost surely (observe again that the above proof uses arguments similar to those in the proof of [8, Theorem 2.1], with the extension to measurable \({\bar{g}}\) made possible by the boundedness of \({{\bar{g}}}\)).

In Corollary 1 we identified the attractors of (17) and showed that the combined domain of attraction is the whole of \([0,1] \times [0, {{\bar{C}}}]\) (for any \({{\bar{C}}} < \infty \)). Choose \({{\bar{C}}}\) such that the dynamics visits \([0,1]\times [0, {{\bar{C}}}]\) infinitely often (possible because \(P(\mathcal{N}_t < {{\bar{\mathcal {N}}}}) =1\)); hence the dynamics converges (a.s.) to one of the limit points of Theorem 1 and \(\beta \), by [8, Theorem 2.2, pp. 131]. \(\square \)

Proof of Theorem 3:

We need to prove (a.s.) equicontinuity in the extended sense of the sequence \([\epsilon ^n(t), \psi ^n(t) ]\) of two-dimensional functions defined for each n; the proof goes through almost exactly as the proof of Theorem 2. We mention only the modifications: (i) the random vector \(Y_t\) depends on \((\epsilon _t, \psi _t)\) and \((W_{t+1}, \mathcal {N}_{t+1}, {\mathcal {L}}_{t+1})\); (ii) clearly \(\epsilon _t \le 1\) and \(\psi _t \le 1+{{\bar{\mathcal {N}}}} + {\bar{\mathcal{L}}}\) for all t and all sample paths; (iii) we have \(E[Y_t |\mathcal{G}_t] = \bar{\textbf{g}} (\epsilon _t, \psi _t)+[e_t, 0]\), where

$$\begin{aligned} e_t := E\bigg [\bigg (\frac{1}{\psi _{t+1}}- \frac{1}{\psi _{t}} \bigg )\bigg (W_{t+1} -\mathcal {N}_{t+1}\epsilon _t + {\mathcal {L}}_{t+1} \epsilon _t \bigg ) \Bigg |{\mathcal {G}}_t \bigg ], \end{aligned}$$

and (iv) the projection term \(Z_t\) of [8] is identically zero, \(Z_t \equiv 0\).

We now show that the error term \(e_t\) converges to zero. Toward this, first observe that \(|\psi _{t} - \mathcal {N}_{t} -{\mathcal {L}}_t |\le \psi _{t}+ \bar{\mathcal {N}} + {\bar{\mathcal{L}}} \), where \(\bar{\mathcal {N}}\) and \({\bar{\mathcal{L}}}\) are such that \(P(\mathcal {N}\le \bar{\mathcal {N}}) = 1\) and \(P({\mathcal {L}} \le {\bar{\mathcal{L}}}) = 1\). Thus, from the update rule for \(\psi _t\),

$$\begin{aligned} \bigg |\frac{1}{\psi _{t+1}}- \frac{1}{\psi _{t}} \bigg |= \left|\frac{ \gamma _t \left( \psi _t - \mathcal {N}_{t} - {\mathcal {L}}_t \right) }{\psi _{t} \psi _{t+1}} \right|\le \frac{\gamma _t}{\psi _{t+1}} \left( 1+ \frac{\bar{\mathcal {N}} +{\bar{\mathcal{L}}}}{\psi _{t} } \right) . \end{aligned}$$
(51)
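As a sanity check on (51), here is the short algebra behind it (a sketch, assuming the update rule \(\psi _{t+1} = \psi _t + \gamma _t \left( \mathcal {N}_{t} + {\mathcal {L}}_t - \psi _t \right) \) implied by the numerator of (51)):

$$\begin{aligned} \frac{1}{\psi _{t+1}}- \frac{1}{\psi _{t}} = \frac{\psi _t - \psi _{t+1}}{\psi _t \psi _{t+1}} = \frac{ \gamma _t \left( \psi _t - \mathcal {N}_{t} - {\mathcal {L}}_t \right) }{\psi _{t} \psi _{t+1}}, \end{aligned}$$

and then the bound \(|\psi _t - \mathcal {N}_{t} - {\mathcal {L}}_t |\le \psi _t + \bar{\mathcal {N}} + {\bar{\mathcal{L}}}\) noted above gives (51) after dividing through by \(\psi _t\).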

Further, observe that

$$\begin{aligned} \psi _{t+1} \ge \psi _t + \frac{1}{t+n_0+1} \left( \mathcal {N}_t -\mathcal{L}_t - \psi _t \right) , \end{aligned}$$

hence the sequence \(\{\psi _t\}\) can be lower bounded (term-wise) by a sequence of sample means constructed from \(\mathcal {N}_t - \mathcal{L}_t\) as in (49); this ensures \(\psi _t \ge \varepsilon \), where \(\varepsilon \) now depends upon \(E[{ \mathcal{N}}] - E[{ \mathcal{L}}]\), which is strictly positive by hypothesis. The final bound (50) now changes accordingly, and, since \(\mathcal {N}_{t+1}-\mathcal{D}_{t+1} < \mathcal {N}_{t+1}\):
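The sample-mean lower bound rests on a standard identity: with step sizes \(\gamma _t = 1/(t+n_0+1)\), the averaging recursion is exactly a running (pseudo-)sample mean. A minimal sketch (variable names `n0`, `psi0`, `x` are illustrative stand-ins, with `x` playing the role of \(\mathcal {N}_t - \mathcal{L}_t\)):

```python
import numpy as np

# Sketch: psi_{t+1} = psi_t + (x_t - psi_t)/(t + n0 + 1) equals the mean of
# x_0, ..., x_{T-1} together with n0 pseudo-observations equal to psi_0.
rng = np.random.default_rng(0)
n0, T = 5, 1000
psi0 = 0.3
x = rng.uniform(0.5, 1.5, size=T)  # stand-in samples with mean 1 > 0

psi = psi0
for t in range(T):
    gamma = 1.0 / (t + n0 + 1)
    psi += gamma * (x[t] - psi)

closed_form = (n0 * psi0 + x.sum()) / (n0 + T)
assert abs(psi - closed_form) < 1e-9
```

Since a sample mean of terms with strictly positive expectation eventually stays bounded away from zero, the same holds for the lower-bounding sequence, which is the role of \(\varepsilon \) above.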

$$\begin{aligned} |e_t |\le \frac{\bar{s} \gamma _t}{\varepsilon } E\bigg [\big |W_{t+1} - (\mathcal {N}_{t+1}-\mathcal{D}_{t+1})\epsilon _t \big |\bigg |{\mathcal {G}}_t \bigg ] \le \frac{ (2E[\mathcal {N}] +E[\mathcal {S}]) \bar{s} }{\varepsilon } \gamma _t \text{ a.s., } \end{aligned}$$

and recall \(W_{t+1} = \xi _t + \Xi _1(t) - \Xi _2(t)\). Thus \(e_t \rightarrow 0\) and \(\sum _t |e_t |\gamma _t < \infty \) a.s. The rest of the proof follows along exactly the same lines as that of Theorem 2, now using the fact that, for large t, \(\psi _t \) can be lower bounded by \(\varepsilon \) almost surely, together with the modified bounds.
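The summability claim \(\sum _t |e_t |\gamma _t < \infty \) follows because \(|e_t| \le K \gamma _t\) for a constant \(K\), so the series is dominated by \(K \sum _t \gamma _t^2\), a convergent tail of \(\sum 1/t^2 = \pi ^2/6\). A minimal numerical illustration (the constants `n0` and the implicit `K` are illustrative stand-ins, not the paper's):

```python
import math

# Sketch: with gamma_t = 1/(t + n0), the partial sums of gamma_t^2 are
# increasing and bounded above by pi^2/6, hence sum_t |e_t| gamma_t, which
# is dominated by K * sum_t gamma_t^2, is finite.
n0 = 1
partial = sum(1.0 / (t + n0) ** 2 for t in range(100000))
assert partial < math.pi ** 2 / 6   # bounded above by the full series
assert partial > 1.64               # already close to pi^2/6 ~ 1.6449
```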

Thus with \(\Theta ^n (\cdot ) := (\epsilon ^n(\cdot ), \psi ^n(\cdot ))\), the sequence \(\{ \Theta ^n(\cdot ) \}_n\) is equicontinuous in the extended sense almost surely (say on a set F), by the remaining steps of the proof of Theorem 2. Hence, by the extended version of the Arzelà–Ascoli theorem [8, Section 4, Theorem 2.2, pp. 127], there exists a sub-sequence \((\Theta ^{k_m}(\omega , \cdot ))\) which converges, uniformly on each bounded interval, to some continuous limit \(\Theta (\omega , \cdot )\) for \(\omega \in F\), such that, with \({\bar{\textbf{g}}}^D := ({{\bar{g}}}_\epsilon ^D, {{\bar{g}}}_\psi ^D)\),

$$\begin{aligned} \Theta (t) = \lim _{k_m \rightarrow \infty } \Theta _{k_m}(\omega ) + \int _0^t {\bar{\textbf{g}}}^D (\Theta (s)) \textrm{d}s. \end{aligned}$$
(52)

Thus, for every \(\epsilon > 0\) and \(T > 0\), there exists \(N_\epsilon ^T\) such that (note that \({\Theta }^{k_m}(t) = {\Theta }_l\) for \(t = t_l - t_{k_m}\) (\(l > k_m\)) such that \(0 \le t \le T\)):

$$\begin{aligned} \sup _{l: t_l \in [t_{k_m}, \ T + t_{k_m} ]} d({\Theta }_l , \Theta (t_l - t_{k_m})) \le \epsilon /2 \text{ for } \text{ all } k_m \ge N_\epsilon ^T. \end{aligned}$$
(53)

This completes part (i). Part (ii) follows by equicontinuity and by [8, Theorem 2.2, pp. 131] under assumption A. \(\square \)


Cite this article

Saha, I., Kavitha, V. Systemic-Risk and Evolutionary Stable Strategies in a Financial Network. Dyn Games Appl 13, 897–928 (2023). https://doi.org/10.1007/s13235-022-00488-2
