Abstract
Order execution is an important operational activity in portfolio investment and risk management. We study a sequential Stackelberg order execution game that arises naturally from the practice of algorithmic trading in financial markets. The game consists of two risk-neutral traders, a leader and a follower, who compete to maximize their respective expected payoffs by trading a single risky asset whose price dynamics follow a linear price-impact model over a finite horizon. This new Stackelberg game departs from the Nash games that have been the main focus of the algorithmic trading literature. We derive a closed-form solution for the unique open-loop Stackelberg equilibrium by exploiting the special structure of the model. This analytic solution enables us to develop new and complementary managerial insights by examining both players' equilibrium behavior in terms of trading speeds and positions, expected price dynamics, price of anarchy, first-mover advantage, and trading-horizon effect.
References
Acevedo, A., & Infante, J. (2014). Optimal execution and price manipulations in time-varying limit order books. Applied Mathematical Finance, 21(3), 201–237.
Alfonsi, A., Fruth, A., & Schied, A. (2010). Optimal execution strategies in limit order books with general shape functions. Quantitative Finance, 10(2), 143–157.
Almgren, R., & Chriss, N. (2001). Optimal execution of portfolio transactions. Journal of Risk, 3, 5–40.
Becherer, D., Bilarev, T., & Frentrup, P. (2018). Optimal liquidation under stochastic liquidity. Finance and Stochastics, 22(1), 39–68.
Bertsimas, D., & Lo, A. W. (1998). Optimal control of execution costs. Journal of Financial Markets, 1(1), 1–50.
Bouchard, B., Fukasawa, M., Herdegen, M., & Muhle-Karbe, J. (2017). Equilibrium liquidity premia. Finance and Stochastics, pp. 1–33.
Brolley, M. (2020). Price improvement and execution risk in lit and dark markets. Management Science, 66(2), 863–886.
Brown, D. B., Carlin, B. I., & Lobo, M. S. (2010). Optimal portfolio liquidation with distress risk. Management Science, 56(11), 1997–2014.
Brown, D. B., & Smith, J. E. (2011). Dynamic portfolio optimization with transaction costs: Heuristics and dual bounds. Management Science, 57, 1752–1770.
Brunnermeier, M. K., & Pedersen, L. H. (2005). Predatory trading. The Journal of Finance, 60(4), 1825–1863.
Brunovskỳ, P., Černỳ, A., & Komadel, J. (2018). Optimal trade execution under endogenous pressure to liquidate: Theory and numerical solutions. European Journal of Operational Research, 264(3), 1159–1171.
Carlin, B. I., Lobo, M. S., & Viswanathan, S. (2007). Episodic liquidity crises: Cooperative and predatory trading. The Journal of Finance, 62(5), 2235–2274.
Carmona, R. (2016). Lectures on BSDEs, stochastic control, and stochastic differential games with financial applications, vol. 1. SIAM.
Carmona, R. A., & Yang, J. (2011). Predatory trading: a game on volatility and liquidity. Preprint. http://www.princeton.edu/rcarmona/download/fe/PredatoryTradingGameQF.pdf.
Engle, R. F., Ferstenberg, R., & Russell, J. R. (2012). Measuring and modeling execution cost and risk. The Journal of Portfolio Management, 38(2), 14–28.
Forsyth, P. A., Kennedy, J. S., Tse, S., & Windcliff, H. (2012). Optimal trade execution: A mean quadratic variation approach. Journal of Economic Dynamics and Control, 36(12), 1971–1991.
Gatheral, J., & Schied, A. (2013). Dynamical models of market impact and algorithms for order execution. Handbook on Systemic Risk, Jean-Pierre Fouque, Joseph A. Langsam, Eds, pp. 579–599.
Gelfand, I., & Fomin, S. (1963). Calculus of variations. Englewood Cliffs, NJ: Prentice-Hall Inc.
Guo, X. (2013). Optimal placement in a limit order book. Theory Driven by Influential Applications, pp. 191–200.
Holthausen, R. W., Leftwich, R. W., & Mayers, D. (1990). Large-block transactions, the speed of response, and temporary and permanent stock-price effects. Journal of Financial Economics, 26(1), 71–95.
Huberman, G., & Stanzl, W. (2000). Arbitrage-free price update and price-impact functions. Unpublished manuscript, Columbia University.
Iancu, D. A., & Trichakis, N. (2014). Fairness and efficiency in multiportfolio optimization. Operations Research, 62(6), 1285–1301.
Jaimungal, S., Donnelly, R., & Cartea, Á. (2018). Portfolio liquidation and ambiguity aversion. High-Performance Computing in Finance (pp. 77–114). Chapman and Hall/CRC.
Koutsoupias, E., & Papadimitriou, C. (2009). Worst-case equilibria. Computer Science Review, 3(2), 65–69.
Kratz, P. (2014). An explicit solution of a nonlinear-quadratic constrained stochastic control problem with jumps: Optimal liquidation in dark pools with adverse selection. Mathematics of Operations Research, 39(4), 1198–1220.
Kraus, A., & Stoll, H. R. (1972). Price impacts of block trading on the New York Stock Exchange. The Journal of Finance, 27(3), 569–588.
Lachapelle, A., Lasry, J.-M., Lehalle, C.-A., & Lions, P.-L. (2016). Efficiency of the price formation process in presence of high frequency participants: A mean field game analysis. Mathematics and Financial Economics, 10(3), 223–262.
Long, J. B. D., Shleifer, A., Summers, L. H., & Waldmann, R. J. (1990). Noise trader risk in financial markets. Journal of Political Economy, 98(4), 703–738.
Madhavan, A. (2000). Market microstructure: A survey. Journal of Financial Markets, 3(3), 205–258.
Madhavan, A., & Cheng, M. (1997). In search of liquidity: Block trades in the upstairs and downstairs markets. The Review of Financial Studies, 10(1), 175–203.
Markowitz, H. (1959). Portfolio selection: Efficient diversification of investments. Cowles Foundation monograph no. 16. New York: John Wiley & Sons, Inc.
Mitchell, D., & Chen, J. (2020). Market or limit orders? Quantitative Finance, 20(3), 447–461.
Moallemi, C. C., Park, B., & Van Roy, B. (2012). Strategic execution in the presence of an uninformed arbitrageur. Journal of Financial Markets, 15(4), 361–391.
Moallemi, C. C., & Sağlam, M. (2013). OR Forum-the cost of latency in high-frequency trading. Operations Research, 61(5), 1070–1086.
Muni Toke, I. (2015). The order book as a queueing system: Average depth and influence of the size of limit orders. Quantitative Finance, 15(5), 795–808.
Park, B., & Van Roy, B. (2015). Adaptive execution: Exploration and learning of price impact. Operations Research, 63(5), 1058–1076.
Platania, F., Serrano, P., & Tapia, M. (2018). Modelling the shape of the limit order book. Quantitative Finance, 18(9), 1575–1597.
Schied, A., Schöneborn, T., & Tehranchi, M. (2010). Optimal basket liquidation for CARA investors is deterministic. Applied Mathematical Finance, 17(6), 471–489.
Schied, A., Strehle, E., & Zhang, T. (2017). High-frequency limit of Nash equilibria in a market impact game with transient price impact. SIAM Journal on Financial Mathematics, 8(1), 589–634.
Schied, A., & Zhang, T. (2017). A state-constrained differential game arising in optimal portfolio liquidation. Mathematical Finance, 27(3), 779–802.
von Stackelberg, H. (1952). Theory of the market economy. Oxford University Press.
Tsoukalas, G., Wang, J., & Giesecke, K. (2019). Dynamic portfolio execution. Management Science, 65(5), 2015–2040.
Vayanos, D. (1998). Transaction costs and asset prices: A dynamic equilibrium model. The Review of Financial Studies, 11(1), 1–58.
Yang, L., & Zhu, H. (2020). Back-running: Seeking and hiding fundamental information in order flows. The Review of Financial Studies, 33(4), 1484–1533.
Funding
Yinhong Dong’s research is partly supported by the CSC (China Scholarship Council, 2017-2018). Donglei Du’s research is partially supported by the NSERC grant (No. 283106), and NSFC grants (Nos. 11771386 and 11728104). Qiaoming Han’s research is partially supported by the NSFC grants (No. 11771386 and No. 11728104). Jianfeng Ren’s research is partially supported by Shandong Province Natural Science Fund (No. ZR2019MA061 and ZR2022MA038). Dachuan Xu’s research is partially supported by the NSFC (No. 12131003).
Appendices
1.1 Proof of Theorem 1
Proof
Recall the Stackelberg game (2).
We first show that the inner problem in (2) admits a unique optimal solution \(Y_t^*(X_t)\) for any fixed \(X_t\). We then show that the outer problem admits a unique optimal solution \(X_t^*\). Together, these imply that the Stackelberg game admits a unique equilibrium \((X^*_t, Y_t^*)\).
We consider the inner problem first. For any fixed \(X_t\), denote the integrand therein as a function of \(Y_t\) and \({\dot{Y}}_t\) only: \(G(Y_t, {\dot{Y}}_t):=-\left( \theta (X_t+Y_t)+\gamma \left( {\dot{X}}_t+{\dot{Y}}_t\right) \right) {\dot{Y}}_t.\) The associated Euler equation is a non-homogeneous second-order linear ordinary differential equation (ODE):
with boundary conditions \(Y_0=y_0, Y_T=y_T\). This ODE admits a unique solution; namely, for \(t\in [0,T]\),
with trading rate
To show that the unique stationary point in (A.1) or (A.2) is indeed the maximum solution to the inner problem, we rewrite the inner objective function as follows, for any solution \(X_t\) and \(Y_t\),
The second equation follows from the integration by parts formula, and the inequality follows from Jensen’s inequality applied to the last term therein.
The third equation above and (A.2) yield that the objective value for \(Y_t^*\) is
Together (A.3) and (A.4) imply that \(\mu _F(X_t,Y^*_t)\ge \mu _F(X_t,Y_t)\) for any fixed \(X_t\) and all \(Y_t\). Therefore \(Y_t^*\) is the unique maximum solution of the inner problem for any fixed \(X_t\).
We now consider the outer problem. Apply the integration by parts formula to rewrite the objective function as follows
Inserting \(Y_t^*\) from (A.1), we obtain
The first term is a constant, and maximizing the outer problem therefore is equivalent to maximizing the second term after discarding the coefficient \(\gamma /2\):
Consider the first-order variation: for any admissible direction \(h\in C^1[0,T]\) (\(h(0) = h(T) = 0\)), the first variation of J is given by
Therefore
Taking the first derivative with respect to t, we obtain a third-order homogeneous ODE: \(\dddot{X}_t-\frac{\theta ^2}{\gamma ^2}{\dot{X}}_t=0\), whose general solution is \(X_t=c_0+c_1e^{-\frac{\theta }{\gamma }t}+c_2e^{\frac{\theta }{\gamma }t}\). Inserting \(X_t\) into the Euler-Lagrange equation (A.5), we obtain \(c_1\left( e^{\frac{\theta T}{\gamma }}-1\right) +c_2\left( e^{-\frac{\theta T}{\gamma }}-1\right) =-\frac{\theta \Delta y}{\gamma }.\) Together with the boundary conditions \(X_0=c_0+c_1+c_2=x_0\) and \(X_T=c_0+c_1e^{-\frac{\theta }{\gamma }T}+c_2e^{\frac{\theta }{\gamma }T}=x_T\), we have \(c_1=\frac{\Delta x+\Delta y}{2}\left( 1-e^{-\frac{\theta T}{\gamma }}\right) ^{-1};\ c_2=\frac{\Delta x-\Delta y}{2}\left( e^{\frac{\theta T}{\gamma }}-1\right) ^{-1};\ c_0=x_0-c_1-c_2\). Set \(a=\frac{\theta }{\gamma }c_1=\frac{\theta }{2\gamma }(\Delta x+\Delta y)\left( 1-e^{-\frac{\theta T}{\gamma }}\right) ^{-1}\) and \(b=\frac{\theta }{\gamma }c_2=\frac{\theta }{2\gamma }(\Delta x-\Delta y)\left( e^{\frac{\theta T}{\gamma }}-1\right) ^{-1}\). Then \(c_0=x_0-c_1-c_2=x_0-\frac{\gamma }{\theta }(b-a)\). So the equilibrium position of the leader is given as \(X^*_t=c_0+c_1e^{-\frac{\theta }{\gamma }t}+c_2e^{\frac{\theta }{\gamma }t}=x_0+\frac{a\gamma }{\theta }\left( 1-e^{-\frac{\theta }{\gamma }t}\right) +\frac{b\gamma }{\theta }\left( e^{\frac{\theta }{\gamma }t}-1\right) \). The corresponding speed is \(x_t^*={\dot{X}}^*_t=ae^{-\frac{\theta }{\gamma }t}+be^{\frac{\theta }{\gamma }t}\). Inserting this quantity into (A.1)-(A.2), we obtain the equilibrium speed and position of the follower: \(Y^*_t=y_0+\frac{\Delta x+\Delta y}{2T}t-\frac{b\gamma }{\theta }\left( e^{\frac{\theta }{\gamma }t}-1\right) ;\ y_t^*=\dot{Y}^*_t=\frac{\Delta x+\Delta y}{2T}-be^{\frac{\theta }{\gamma }t}\).
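As a numerical sanity check on these closed-form expressions, the following sketch (illustrative parameter values; the helper names are ours, and we write \(m=\theta /\gamma \), consistent with the exponents \(e^{\theta t/\gamma }\) above) verifies that the equilibrium positions satisfy the boundary conditions and that the stated speeds are the time derivatives of the positions:

```python
import math

# Illustrative parameter values (theta: permanent impact, gamma: temporary
# impact, per the paper's notation); m = theta/gamma is assumed.
theta, gamma, T = 0.5, 1.0, 2.0
x0, xT, y0, yT = 10.0, 0.0, 8.0, 0.0
dx, dy = xT - x0, yT - y0
m = theta / gamma

a = (theta / (2 * gamma)) * (dx + dy) / (1 - math.exp(-m * T))
b = (theta / (2 * gamma)) * (dx - dy) / (math.exp(m * T) - 1)

def X_star(t):
    """Leader's equilibrium position X*_t."""
    return (x0 + (a * gamma / theta) * (1 - math.exp(-m * t))
               + (b * gamma / theta) * (math.exp(m * t) - 1))

def Y_star(t):
    """Follower's equilibrium position Y*_t."""
    return (y0 + (dx + dy) * t / (2 * T)
               - (b * gamma / theta) * (math.exp(m * t) - 1))

def x_speed(t):
    """Leader's equilibrium speed x*_t."""
    return a * math.exp(-m * t) + b * math.exp(m * t)

# Boundary conditions of Theorem 1 hold.
assert abs(X_star(0) - x0) < 1e-9 and abs(X_star(T) - xT) < 1e-9
assert abs(Y_star(0) - y0) < 1e-9 and abs(Y_star(T) - yT) < 1e-9

# The stated speed is the time derivative of the position (finite difference).
h = 1e-6
assert abs((X_star(1 + h) - X_star(1 - h)) / (2 * h) - x_speed(1)) < 1e-5
```

The same check passes for any choice of \(x_0, x_T, y_0, y_T\) and positive \(\theta , \gamma , T\), since the boundary terms cancel identically.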
To show that \(X^*_t\) is the maximum solution of the outer problem, we consider the second-order variation of the objective function \(J(X_t)=\int _0^T F(X_t, {\dot{X}}_t)dt\). For any admissible direction \(h\in C^1[0,T]\) (\(h_0 = h_T = 0\)), the second-order variation of J is given by
where \(||\cdot ||_{1,2}\) is the natural norm in the Sobolev space \(W^{1,2}\) and the third equation follows from the integration by parts formula and h being admissible (\(h_0 = h_T = 0\)):
This implies that \(2\int _0^T\dot{h_t}h_tdt=0.\) The condition \(\delta ^2 J_h(X_t)\le -\frac{\gamma }{2}||h_t||^2_{1,2}\) is a sufficient condition for \(X_t\) to be the maximizer of the outer problem (Gelfand & Fomin, 1963, Chapter 5, Section 24, Theorem 2). \(\square \)
1.2 The expected payoffs and variance for Stackelberg game
We calculate the expected payoffs for the Stackelberg game, the Nash game (Carlin et al., 2007), and the single-agent problem (Almgren and Chriss, 2001), respectively.
1.2.1 Expected price
From Theorem 1, the expected price for the Stackelberg game is \({\mathbb {E}}[P_t]=\gamma \left( a+ (1+m t)\frac{\Delta x+\Delta y}{2T}\right) \).
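This formula can be cross-checked against the equilibrium paths of Theorem 1. The sketch below (illustrative parameters; we assume \(m=\theta /\gamma \), and the decomposition of the expected price into a permanent-impact term on cumulative net order flow plus a temporary-impact term on the aggregate trading rate is our reading of the linear-impact model, not a formula quoted verbatim) confirms the identity numerically:

```python
import math

# Illustrative parameters; m = theta/gamma is assumed.
theta, gamma, T = 0.5, 1.0, 2.0
x0, xT, y0, yT = 10.0, 0.0, 8.0, 0.0
dx, dy = xT - x0, yT - y0
m = theta / gamma
a = (theta / (2 * gamma)) * (dx + dy) / (1 - math.exp(-m * T))
b = (theta / (2 * gamma)) * (dx - dy) / (math.exp(m * T) - 1)

def EP_closed(t):
    # Closed-form expected price from Appendix A.2.1.
    return gamma * (a + (1 + m * t) * (dx + dy) / (2 * T))

def EP_model(t):
    # Expected price rebuilt from the Theorem 1 equilibrium paths:
    # permanent impact on cumulative net flow + temporary impact on rate
    # (assumed decomposition).
    ex, emx = math.exp(m * t), math.exp(-m * t)
    X = x0 + (a * gamma / theta) * (1 - emx) + (b * gamma / theta) * (ex - 1)
    Y = y0 + (dx + dy) * t / (2 * T) - (b * gamma / theta) * (ex - 1)
    xdot = a * emx + b * ex
    ydot = (dx + dy) / (2 * T) - b * ex
    return theta * ((X - x0) + (Y - y0)) + gamma * (xdot + ydot)

for t in (0.0, 0.7, 1.3, T):
    assert abs(EP_closed(t) - EP_model(t)) < 1e-9
```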
1.2.2 Expected payoff
From Theorem 1, we obtain the expected payoffs for the Stackelberg equilibrium.
1.2.3 Variance
In the following derivation, we apply the following Ito version of integration-by-parts formula:
From Theorem 1, we obtain the variance of the payoffs
1.3 The simultaneous Nash game (Carlin et al., 2007)
Theorem 2
(Carlin et al. (2007)) Let \(\Delta x=x_T-x_0\) and \(\Delta y=y_T-y_0\). The Nash game (A.6)-(A.7) admits the following unique open-loop equilibrium. For any \(t\in [0, T]\),
or equivalently in terms of trading positions
where
1.3.1 Expected price
From Theorem 2, the expected price for the Nash game is \({\mathbb {E}}[P^N_t]=\gamma c\left( 6-4e^{-\frac{\theta }{3\gamma }t}\right) \).
1.3.2 Expected payoffs
From Theorem 2, the expected payoffs of the Nash equilibrium are respectively
1.3.3 Variance
From Theorem 2, we obtain the variance of the payoffs
1.4 The single agent optimal order execution problem (Almgren and Chriss, 2001)
Theorem 3
(Almgren and Chriss (2001)) Let \(\Delta x=x_T-x_0\). The problem (A.9) admits the following unique optimal solution: \(\forall t\in [0, T]\), \(\dot{X}_t^S=\frac{\Delta x}{T}\); or in terms of trading positions: \(X^S_t=x_0+\frac{\Delta x}{T}t.\)
1.4.1 Expected price
From Theorem 3, the expected price for the single agent who trades a total amount of \(\Delta x+\Delta y\) is \({\mathbb {E}}[P^S_t]=(\gamma +\theta t)\frac{\Delta x+\Delta y}{T}.\)
1.4.2 Expected payoff
From Theorem 3, the expected payoff for the single agent who trades a total amount of \(\Delta x+\Delta y\) is \(\mu ^S=-\frac{\theta (\Delta x+\Delta y)^2}{2}\left( 1+\frac{2}{mT}\right) .\)
1.4.3 Variance
From Theorem 3, the variance of the payoff for the single agent who trades a total amount of \(\Delta x+\Delta y\) is \((\sigma ^S)^2=\int _0^T(X_T-X_t)^2dt=\frac{T}{3}(\Delta x+\Delta y)^2.\)
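The single-agent quantities above admit a direct numerical check by Riemann sums. The sketch below uses illustrative parameters, writes \(m=\theta /\gamma \), and assumes that the expected payoff equals the negative of the expected execution cost \(\int _0^T {\mathbb {E}}[P^S_t]\,\dot{X}^S_t\,dt\) (our reading of the payoff definition):

```python
# Riemann-sum check of the single-agent expected payoff and variance.
# Assumptions: m = theta/gamma; payoff = -(expected execution cost),
# i.e. minus the integral of E[P^S_t] * (dX/dt) over [0, T].
theta, gamma, T = 0.5, 1.0, 2.0
m = theta / gamma
Delta = 6.0  # total traded amount (Delta x + Delta y), uniform rate Delta/T

N = 100000
dt = T / N
cost = 0.0
var = 0.0
for i in range(N):
    t = (i + 0.5) * dt                         # midpoint rule
    EP = (gamma + theta * t) * Delta / T       # expected price E[P^S_t]
    cost += EP * (Delta / T) * dt              # expected execution cost
    var += (Delta - Delta * t / T) ** 2 * dt   # (X_T - X_t)^2, X_t linear

mu_S = -theta * Delta ** 2 / 2 * (1 + 2 / (m * T))   # closed form (A.4.2)
sigma2_S = T / 3 * Delta ** 2                        # closed form (A.4.3)
assert abs(-cost - mu_S) < 1e-6 * abs(mu_S)
assert abs(var - sigma2_S) < 1e-6 * sigma2_S
```

The closed forms follow analytically as well: \(\int _0^T(\gamma +\theta t)\frac{\Delta ^2}{T^2}dt=\Delta ^2(\frac{\gamma }{T}+\frac{\theta }{2})=\frac{\theta \Delta ^2}{2}(1+\frac{2}{mT})\).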
1.5 Proof of Proposition 1
From Appendices A.2.1, A.3.1, and A.4.1, the expected prices for the Stackelberg game, the Nash game, and the single-agent problem are as follows.
where a and c are defined in (4) and (A.8) (Appendix A.3), respectively. Without loss of generality, we assume that \(\Delta x+\Delta y>0\).
-
(1)
As \(t\rightarrow 0\),
$$\begin{aligned} {\mathbb {E}}[P_t]= & {} \gamma \left( a+ \frac{\Delta x+\Delta y}{2T}\right) ,\\ {\mathbb {E}}[P^N_t]= & {} 2\gamma c,\\ {\mathbb {E}}[P^S_t]= & {} \gamma \frac{\Delta x+\Delta y}{T}. \end{aligned}$$Obviously,
$$\begin{aligned} \frac{{\mathbb {E}}[P^N_t]}{{\mathbb {E}}[P^S_t]}= & {} \frac{mT}{3(1-e^{-mT/3})}\ge \frac{mT}{3 mT/3}=1,\\ {\mathbb {E}}[P_t]-{\mathbb {E}}[P^S_t]= & {} \gamma \left( a- \frac{\Delta x+\Delta y}{2T}\right) =\gamma \frac{\Delta x+\Delta y}{2T}(\frac{mT}{1-e^{-mT}}-1)\ge 0. \end{aligned}$$Therefore, \({\mathbb {E}}[P_t] > {\mathbb {E}}[P^S_t]\) and \( {\mathbb {E}}[P^N_t]> {\mathbb {E}}[P^S_t]\) when \(t\rightarrow 0\).
As \(t\rightarrow T\),
$$\begin{aligned} {\mathbb {E}}[P^S_t]-{\mathbb {E}}[P_t]= & {} \gamma \frac{\Delta x+\Delta y}{2T}\left( 1+mT-\frac{mT}{1-e^{-mT}}\right) =\gamma \frac{\Delta x+\Delta y}{2T}f(x),\\ f'(x)= & {} \frac{e^{-x} x}{\left( 1-e^{-x}\right) ^2}-\frac{1}{1-e^{-x}}+1 \ge 0, \end{aligned}$$where \(f(x)=1+x-\frac{x}{1-e^{-x}}\) with \(x=mT\). Note that \(\min _{x\ge 0} f(x)=0\). Therefore, \({\mathbb {E}}[P^S_t]\ge {\mathbb {E}}[P_t]\). Meanwhile,
$$\begin{aligned} \frac{{\mathbb {E}}[P^S_t]}{{\mathbb {E}}[P^N_t]}= & {} \frac{3(1+\frac{1}{mT})}{2+(1-e^{-\frac{mT}{3}})^{-1}} =1+\frac{3(e^x-1)-x}{x(3e^x-2)} \ge 1+ \frac{2}{3e^x-2} > 1. \end{aligned}$$Therefore, \({\mathbb {E}}[P^S_t] > {\mathbb {E}}[P^N_t]\).
-
(2)
Consider the difference \({\mathbb {E}}[P_t]-{\mathbb {E}}[P^S_t]\):
$$\begin{aligned} {\mathbb {E}}[P_t]-{\mathbb {E}}[P^S_t]= & {} \gamma \left( a-(1+m t)\frac{\Delta x+\Delta y}{2T}\right) \\= & {} \frac{(\Delta x+\Delta y)m\gamma }{2T} (\frac{T}{1-e^{-mT}}-\frac{1}{m}-t)\\ \end{aligned}$$Denote \(t_1=(\frac{1}{1-e^{-mT}}-\frac{1}{mT})T\) and we can prove that \(t_1 \in \left[ \frac{T}{2}, T\right] \). Then we have the following relationships: When \(t\in \left[ 0, t_1\right] \), \({\mathbb {E}}[P^S_t] \le {\mathbb {E}}[P_t]\); and when \(t\in \left( t_1, T \right] \), \({\mathbb {E}}[P^S_t] > {\mathbb {E}}[P_t]\).
-
(3)
Consider the function \({\mathbb {E}}[P_t]-{\mathbb {E}}[P^N_t]\)
$$\begin{aligned} {\mathbb {E}}[P_t]-{\mathbb {E}}[P^N_t]= & {} \frac{4 m T e^{-\frac{1}{3} (m t)}}{3 \left( 1-e^{-\frac{1}{3} (m T)}\right) }+m t+\frac{m T}{1-e^{-m T}}-\frac{2 m T}{1-e^{-\frac{1}{3} (m T)}}+1=f(t)\\ f'(t)= & {} m-\frac{4 m^2 T e^{-\frac{1}{3} (m t)}}{9 \left( 1-e^{-\frac{1}{3} (m T)}\right) }\\ f''(t)= & {} \frac{4 m^3 T e^{-\frac{1}{3} (m t)}}{27 \left( 1-e^{-\frac{1}{3} (m T)}\right) }\ge 0. \end{aligned}$$Since \(f'(t)\) is an increasing function, we have that \(f'(0)\le f'(t)\le f'(T)\). Take mT as a parameter. We consider three cases.
-
when \(mT \le 1.65\), where \(mT\approx 1.65\) is the root of \(f'(T)=0\) viewed as an equation in mT, f(t) is a decreasing function and \(f_{max}=f(0)=\frac{mT}{1-e^{-mT}}-\frac{2mT}{3(1-e^{-mT/3})}+1>0\), \(f_{min}=f(T)=\frac{mT}{1-e^{-mT}}-\frac{2mT}{3(1-e^{-mT/3})}-\frac{mT}{3}+1 < 0\). There exists a unique point \(t_2\) such that \(f(t_2)=0\), and \(t_2\) is a function of mT.
-
when \(mT \in [1.65, 3.06]\), where \(mT\approx 3.06\) is the root of \(f(T)=0\) viewed as an equation in mT, f(t) is a convex function. Note that \(f_{max}=f(0)=\frac{mT}{1-e^{-mT}}-\frac{2mT}{3(1-e^{-mT/3})}+1>0\), \(f(T)=\frac{mT}{1-e^{-mT}}-\frac{2mT}{3(1-e^{-mT/3})}-\frac{mT}{3}+1 < 0\), and \(f_{min}=\min \{f(T),f(t^*)\}< 0\), where \(t^*\) is the interior stationary point of f. There exists a unique point \(t_2\) such that \(f(t_2)=0\).
-
when \(mT > 3.06\), f(t) is a convex function and \(f_{max}=f(0)=\frac{mT}{1-e^{-mT}}-\frac{2mT}{3(1-e^{-mT/3})}+1>0\), \(f(T)=\frac{mT}{1-e^{-mT}}-\frac{2mT}{3(1-e^{-mT/3})}-\frac{mT}{3}+1 > 0\), and \(f_{min}=f(t^*)< 0\). There exist two points \(t_2\) and \(t_3\) such that \(f(t_2)=f(t_3)=0\).
Note that the first two cases have the same conclusion. Therefore, we merge all cases into the following two cases.
-
when \(mT \in (0, 3.06] \)
-
\(t \in [0,t_2]\), \({\mathbb {E}}[P_t]\ge {\mathbb {E}}[P^N_t]\);
-
\(t \in (t_2, T]\), \({\mathbb {E}}[P_t]<{\mathbb {E}}[P^N_t]\).
-
-
when \(mT > 3.06\)
-
\(t \in [0,t_2]\), \({\mathbb {E}}[P_t] \ge {\mathbb {E}}[P^N_t]\);
-
\(t \in (t_2, t_3)\), \({\mathbb {E}}[P_t]<{\mathbb {E}}[P^N_t]\);
-
\(t \in [t_3, T]\), \({\mathbb {E}}[P_t] \ge {\mathbb {E}}[P^N_t]\).
-
-
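The numerical thresholds in the case analysis of part (3), namely the root \(mT\approx 1.65\) of \(f'(T)=0\) and the root \(mT\approx 3.06\) of \(f(T)=0\), as well as the claim \(t_1\in [T/2, T]\) in part (2), can be reproduced by bisection. A sketch (helper names are ours; the formulas are transcribed from the displayed expressions for f):

```python
import math

def bisect(fn, lo, hi):
    """Plain bisection; fn(lo) and fn(hi) must have opposite signs."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if fn(lo) * fn(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

# f'(T) and f(T) from part (3), written as functions of s = mT
# (the common positive factor m is dropped from f'(T)).
def fprime_at_T(s):
    return 1 - 4 * s * math.exp(-s / 3) / (9 * (1 - math.exp(-s / 3)))

def f_at_T(s):
    e3 = math.exp(-s / 3)
    return (4 * s * e3 / (3 * (1 - e3)) + s
            + s / (1 - math.exp(-s)) - 2 * s / (1 - e3) + 1)

root1 = bisect(fprime_at_T, 0.5, 3.0)   # where f'(T) = 0
root2 = bisect(f_at_T, 1.0, 4.0)        # where f(T) = 0
assert abs(root1 - 1.65) < 0.01
assert abs(root2 - 3.06) < 0.01

# Part (2): t1/T = 1/(1 - e^{-mT}) - 1/(mT) lies in [1/2, 1].
for s in (0.01, 0.5, 1.0, 5.0, 20.0):
    ratio = 1 / (1 - math.exp(-s)) - 1 / s
    assert 0.5 <= ratio <= 1.0
```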
1.6 Proof of Proposition 2 and Proposition 3
\(\mu _T\) and \(\mu _T^N\) are given in Appendices A.2.2 and A.3.2, respectively, while \(\mu ^S\) is given in Appendix A.4.2.
Obviously, \(\phi ''(x)=2e^x-2 \ge 0\) and \(\phi '(x)=2e^x-2x-2 \ge 0\), so \(\phi (x) \ge 0\). That is, \(\textsc {PoA}\) is increasing in mT, with \(\lim _{mT \rightarrow 0} \textsc {PoA}=1\) and \(\lim _{mT \rightarrow +\infty } \textsc {PoA}=\frac{5}{4}\). Therefore, \(1\le \textsc {PoA}\le \frac{5}{4}\). The same argument shows that \(1\le \textsc {PoA(N)}\le \frac{4}{3}\).
1.7 Proof of Proposition 4
The expected payoff difference of the leader under the Stackelberg and Nash games is
where \(x:=mT\). Consider the second term above
Take the first derivative of f(x),
The denominator being positive for \(x>0\), we shall show that the numerator \(\phi (x)\) above is also positive, implying that \(f'(x)>0\). Take the first derivative of \(\phi (x)\),
Take the first and second derivatives of the second term p(x),
Note that \(p^{\prime \prime }(x)\ge 0\) for \(x\ge 0\) implies that p(x) is convex for \(x\ge 0\) and zero is the only root of \(p^{\prime }(x)=0\). So \(p(x)>p(0)=0\) for \(x>0\), implying that \(\phi (x)>0\) for \(x> 0\), and hence \(f^{\prime }(x)>0\) for \(x>0\).
1.8 Follower’s relative payoffs function
The expected payoff difference of the follower under the Stackelberg and Nash games is
where \(x=mT\). Denote the term in the parentheses above as
1.8.1 Proof for the properties of k(x)
Lemma 1
The function k(x) (\(x\ge 0\)) has the following properties
-
(i)
\(\sup _{x\ge 0} k(x)\): \(k(0)=0\) is the maximum value of k(x) achieved asymptotically when \(x\rightarrow 0\);
-
(ii)
\(\min _{x\ge 0} k(x)\): \(k(x^*)\approx -1.46\) is the unique minimum value of k(x) achieved at the minimum point \(x^*\approx 8.87\) in the non-negative domain;
-
(iii)
\(\sup _{x\ge x^*} k(x)\): \(k(\infty )=-1\) is the maximum value of k(x) achieved asymptotically when \(x\rightarrow \infty \);
-
(iv)
k(x) is a unimodal (a.k.a, quasi-convex) function: decreasing when \(x\in [0, x^*]\) and increasing when \(x\ge x^*\).
Proof
Take the first derivative of k(x)
Denote the numerator above as
In the following, we prove that \(g(x)\le 0\) for \(x\in [0,x^*]\) and \(g(x)\ge 0\) for \(x\in [x^*,\infty )\) (where \(x^*\approx 8.87\) is the numerical solution for \(g(x)=0\)).
Note that \(z(x)>0\) for all \(x\ge 0\) and \(w''(x)\) is increasing for \(x\ge 0\). So \(\min _{x\ge 0}w''(x)=w''(0)>0\) and \(\min _{x\ge 0}w'(x)=w'(0)>0\). However, \(w(0)=-2793<0\) and \(w(1)>0\), so there exists a unique root \(m_1\) such that \(w(m_1)=0\) (where \(m_1\approx 1.24\)), implying that \(q''(x)\) is decreasing for \(x \in [0,m_1]\) and increasing for \(x \in [m_1,\infty )\). On \([0,m_1]\) the maximum of \(q''(x)\) is \(q''(0)<0\), while on \([m_1,x^*]\) its minimum is attained at \(x=m_1\). Moreover, \(q''(3)>0\), so there exists a unique root \(m_2\) of \(q''(x)=0\) (where \(m_2\approx 2.27\)). Since \(q''(x)\le 0\) for \(x \in [0,m_2]\) and \(q''(x)\ge 0\) for \(x \in [m_2, \infty )\), the function \(q'(x)\) is decreasing for \(x \in [0,m_2]\) and increasing for \(x \in [m_2,\infty )\). Because \(q'(0)<0\), \(q'(m_2)\le 0\) and \(q'(4)>0\), there exists a unique root \(m_3\) such that \(q'(m_3)=0\) (where \(m_3\approx 3.29\)). Repeating the same steps, we find the unique root of each successive function: \(m_4\approx 4.31\), \(m_5\approx 5.07\), \(m_6\approx 5.82\), \(m_7\approx 6.57\), \(m_8\approx 7.17\), \(m_9\approx 7.77\), \(m_{10}\approx 8.37\), and \(m_{11}\approx 8.87\).
Therefore \(g(x)\le 0\) for \(x\in [0,x^*]\) and \(g(x)\ge 0\) for \(x\in [x^*,\infty )\). In other words, k(x) is decreasing in x for \(x\in [0,x^*]\) and increasing for \(x\in [x^*,\infty )\).
We have thus shown that k(x) is unimodal: monotonically decreasing for \(x \le x^*\) and monotonically increasing for \(x\ge x^*\). The unimodality of k(x), together with \({\lim _{x \rightarrow 0}} k(x)=0\), \({\lim _{x \rightarrow \infty }} k(x)=-1\) and \(k_{min}=k(x^*)\), implies the desired results in (i), (ii) and (iii). \(\square \)
1.8.2 Proof of Proposition 5
Proof
Items (i) and (ii) being easy, we show the rest. The expected payoff difference of the follower under the Stackelberg and Nash games is
where \(x:=mT\) and \(a:=\frac{\gamma }{T}>0\). Consider the second term above
Take the first derivative of w(x),
The denominator being positive for \(x>0\), we shall show that the numerator \(-\phi (x)\) above is also positive, implying that \(L_2'(\theta )<0\). Take the first derivative of \(-\phi (x)\),
In the following, we prove that \(h(x)>0\) for \(x\ge 0\).
Note that \(q''(x)>0\) for all \(x\ge 0\) and \(q'(x)\) is increasing for \(x\ge 0\). So \(\min _{x\ge 0}q'(x)=q'(0)>0\) and \(\min _{x\ge 0}q(x)=q(0)>0\). Repeating the process, \(\min _{x\ge 0}p(x)=p(0)>0\) and \(\min _{x\ge 0}h(x)=h(0)>0\). Therefore \(\phi (x)< 0\) for all \(x\ge 0\), and \(L_2(x)\) is decreasing in x for \(x>0\). Since \(x=\frac{\theta }{\gamma }T\) and \(a:=\frac{\gamma }{T}>0\), \(L_2(\theta )\) is decreasing in \(\theta \) for \(\theta >0\). \(\square \)
1.9 Total relative payoffs
The total expected payoff difference under the Stackelberg and Nash games is
where \(x=mT\). Recall that for \(x\ge 0\),
1.9.1 Proof for the properties of g(x)
Lemma 2
The function g(x) (\(x\ge 0\)) has the following properties
-
(i)
\(\inf _{x\ge 0} g(x)\): \(g(\infty )=-1\) is the minimum value of g(x) achieved asymptotically when \(x\rightarrow +\infty \).
-
(ii)
\(\max _{x\ge 0} g(x)\): \(g(x^*)\approx 0.778\) is the unique maximum value of g(x) achieved at the maximum point \(x^*\approx 5.105\) in the positive domain;
-
(iii)
The unique positive root of g(x) is \(m_0\approx 17.6\); and \(g(x)\ge 0\) for all \(x\in [0, m_0]\) and \(g(x)\le 0\) for all \(x\in [m_0,\infty )\).
-
(iv)
g(x) is a unimodal function: increasing when \(x\in [0, x^*]\) and decreasing when \(x\ge x^*\).
Proof
Take the first derivative of g(x)
Then \(g'(x)\ge 0\) if and only if \(0\le x\le x^*\).
Denote
In the following, we prove that the function g(x) is increasing for \(x\in [0,x^*]\) and decreasing for \(x\in [x^*,\infty )\) (where \(x^*\approx 5.105\) is the numerical solution of \(\psi (x)=0\)).
Obviously, \(z(x)>0\) for \(0\le x\le m_0\) and \(w''(x)\) is increasing in x, so \(\min w''(x)=w''(0)>0\). Repeating the argument, \(q'''(x)>0\) and \(q''(x)>0\), and both are increasing in x for \(x \in [0,x^*]\). However, since \(\min q'(x)=-826<0\) and \(\max q'(x)>0\), there exists a unique root \(m_1\) of \(q'(x)=0\) (where \(m_1\approx 0.218\) can be found by bisection and the intermediate value theorem). Hence \(q(x)\) is decreasing in x for \(x \in [0,m_1]\) and increasing for \(x \in [m_1,x^*]\). Since \(q(0)<0\) and \(q(2)>0\), there exists a unique root \(m_2\) of \(q(x)=0\) (where \(m_2\approx 1.112\), also found by bisection).
Repeating the above derivation, there exist a unique root \(m_3\) of \(p''(x)=0\) (\(m_3\approx 1.791\)), a unique root \(m_4\) of \(p'(x)=0\) (\(m_4\approx 2.453\)), and a unique root \(m_5\) of \(p(x)=0\) (\(m_5\approx 3.099\)). Therefore \(h'''(x)\) is decreasing in x for \(x \in [0,m_5]\) and increasing for \(x \in [m_5,x^*]\). Analogously, there exist unique roots \(m_6\), \(m_7\), \(m_8\) of \(h''(x)=0\) (\(m_6\approx 3.626\)), \(h'(x)=0\) (\(m_7\approx 4.148\)), and \(h(x)=0\) (\(m_8\approx 4.664\)), respectively.
Therefore, \(g'(x)\) decreases in x for \(x \in [0,m_8]\) and increases for \(x \in [m_8,x^*]\). Moreover, there exists a unique solution \(x=x^*\) of \(g'(x)=0\). So \(g'(x)\ge 0\) for \(x \in [0,x^*]\) and \(g'(x)<0\) for \(x \in (x^*,\infty )\).
We have thus shown that g(x) is unimodal: monotonically increasing for \(x \le x^*\) and monotonically decreasing for \(x\ge x^*\). The unimodality of g(x), together with \({\lim _{x \rightarrow 0}} g(x)=0\), \({\lim _{x \rightarrow \infty }} g(x)=-1\) and \(g(m_0)=0\), implies the desired results in (i), (ii) and (iii). \(\square \)
1.9.2 Proof of Proposition 6
Proof
Items (i) and (ii) being easy, we only show the rest. The total expected payoff difference under the Stackelberg and Nash games is
where \(x=mT\) and \(a:=\frac{\gamma }{T}>0\). Consider the second term above
Take the first derivative of w(x),
The denominator being positive for \(x>0\), we shall show that the numerator \(\phi (x)\) above is unimodal in x: increasing when \(\theta \in (0,\frac{{\hat{x}}\gamma }{T}]\) and decreasing when \(\theta \ge \frac{{\hat{x}}\gamma }{T}\) (where \({\hat{x}}\approx 8.48\)).
In the following, we prove that \(h(x)>0\) for \(x\ge 0\).
Obviously, \(z''(x)<0\) implies that \(z'(x)\) is decreasing, and \(\max z'(x)<0\) implies that z(x) is decreasing with \(\max z(x)<0\). Since \(r'(x)\) is decreasing, \(r'(0)>0\) and \(r'(2)<0\), there exists a unique root \(m_1\) of \(r'(x)=0\) (where \(m_1\approx 1.25\) can be found by bisection and the intermediate value theorem). Therefore r(x) is increasing for \(x \in [0,m_1]\) and decreasing for \(x \in [m_1,{\hat{x}}]\). Owing to \(r(0)>0\) and \(r(4)<0\), there exists a unique root \(m_2\) of \(r(x)=0\) (where \(m_2\approx 2.96\), also found by bisection).
Repeating the above derivation, there exist unique roots \(m_3\) of \(q'(x)=0\) (\(m_3\approx 4.08\)), \(m_4\) of \(q(x)=0\) (\(m_4\approx 5.16\)), and \(m_5\) of \(p'(x)=0\) (\(m_5\approx 5.93\)), respectively. Therefore, p(x) increases in x for \(x \in [0,m_5]\) and decreases for \(x \in [m_5,{\hat{x}}]\). Respectively, there exist unique roots \(m_6\), \(m_7\), \(m_8\) and \(m_9\) of \(p(x)=0\) (\(m_6\approx 6.72\)), \(h'(x)=0\) (\(m_7\approx 7.28\)), \(h(x)=0\) (\(m_8\approx 7.96\)), and \(\phi (x)=0\) (\(m_9\approx 8.48\)).
Therefore, \(L_3(\theta )\) is a unimodal function: increasing when \(\theta \in (0,\frac{{\hat{x}}\gamma }{T}]\) and decreasing when \(\theta \ge \frac{{\hat{x}}\gamma }{T}\) (where \({\hat{x}}\approx 8.48\)). \(\square \)
1.10 Proof of Proposition 7
Proof
where \(x=mT\). Recall that
Note that \(\lim _{x \rightarrow \infty } h(x)=\frac{1}{2}\), \(\lim _{x \rightarrow 0} h(x)=0\). Take the first derivative of h(x),
Take the first derivative of the numerator \(\phi \)
Take the first derivative of the second term p(x),
Then \(p(x)\ge p(0)=0\) for \(x\ge 0\). Likewise, \(\phi (x)\ge \phi (0)=0\), so \(h'(x)\ge 0\) and \(h(x)\ge h(0)=0\). Therefore, h(x) is nonnegative for \(x\ge 0\) and increasing in mT.
Consider the function \(L_4(\theta ,\gamma ,T)\) when given \(\gamma >0\) and trading horizon T,
Obviously, \(L_4(\theta ,\gamma ,T)\) is increasing in x. Since \(x=mT=\frac{\theta }{\gamma }T\), it follows that \(L_4(\theta ,\gamma ,T)\) is increasing in \(\theta \) and T and decreasing in \(\gamma \). \(\square \)
About this article
Cite this article
Dong, Y., Du, D., Han, Q. et al. A Stackelberg order execution game. Ann Oper Res 336, 571–604 (2024). https://doi.org/10.1007/s10479-022-05120-5