
A Stackelberg order execution game

  • Original Research
  • Published in: Annals of Operations Research

Abstract

Order execution is an important operational-level activity in portfolio investment and risk management. We study a sequential Stackelberg order execution game which arises naturally from the practice of algorithmic trading in financial markets. The game consists of two risk-neutral traders, a leader and a follower, who compete to maximize their respective expected payoffs by trading a single risky asset whose price dynamics follow a linear-price market impact model over a finite horizon. This Stackelberg game departs from the Nash games that have been the main focus of the algorithmic trading literature. We derive a closed-form solution for the unique open-loop Stackelberg equilibrium by exploiting the special structure of the model. This analytic solution enables us to develop new and complementary managerial insights into both players’ equilibrium behavior in terms of trading speeds and positions, expected price dynamics, the price of anarchy, the first mover’s advantage, and the trading horizon effect.


References

  • Acevedo, A., & Infante, J. (2014). Optimal execution and price manipulations in time-varying limit order books. Applied Mathematical Finance, 21(3), 201–237.

  • Alfonsi, A., Fruth, A., & Schied, A. (2010). Optimal execution strategies in limit order books with general shape functions. Quantitative Finance, 10(2), 143–157.

  • Almgren, R., & Chriss, N. (2001). Optimal execution of portfolio transactions. Journal of Risk, 3, 5–40.

  • Becherer, D., Bilarev, T., & Frentrup, P. (2018). Optimal liquidation under stochastic liquidity. Finance and Stochastics, 22(1), 39–68.

  • Bertsimas, D., & Lo, A. W. (1998). Optimal control of execution costs. Journal of Financial Markets, 1(1), 1–50.

  • Bouchard, B., Fukasawa, M., Herdegen, M., & Muhle-Karbe, J. (2017). Equilibrium liquidity premia. Finance and Stochastics, pp. 1–33.

  • Brolley, M. (2020). Price improvement and execution risk in lit and dark markets. Management Science, 66(2), 863–886.

  • Brown, D. B., Carlin, B. I., & Lobo, M. S. (2010). Optimal portfolio liquidation with distress risk. Management Science, 56(11), 1997–2014.

  • Brown, D. B., & Smith, J. E. (2011). Dynamic portfolio optimization with transaction costs: Heuristics and dual bounds. Management Science, 57, 1752–1770.

  • Brunnermeier, M. K., & Pedersen, L. H. (2005). Predatory trading. The Journal of Finance, 60(4), 1825–1863.

  • Brunovskỳ, P., Černỳ, A., & Komadel, J. (2018). Optimal trade execution under endogenous pressure to liquidate: Theory and numerical solutions. European Journal of Operational Research, 264(3), 1159–1171.

  • Carlin, B. I., Lobo, M. S., & Viswanathan, S. (2007). Episodic liquidity crises: Cooperative and predatory trading. The Journal of Finance, 62(5), 2235–2274.

  • Carmona, R. (2016). Lectures on BSDEs, stochastic control, and stochastic differential games with financial applications (Vol. 1). SIAM.

  • Carmona, R. A., & Yang, J. (2011). Predatory trading: A game on volatility and liquidity. Preprint. http://www.princeton.edu/rcarmona/download/fe/PredatoryTradingGameQF.pdf.

  • Engle, R. F., Ferstenberg, R., & Russell, J. R. (2008). Measuring and modeling execution cost and risk. Social Science Electronic Publishing, 38(2), 14–28.

  • Forsyth, P. A., Kennedy, J. S., Tse, S., & Windcliff, H. (2012). Optimal trade execution: A mean quadratic variation approach. Journal of Economic Dynamics and Control, 36(12), 1971–1991.

  • Gatheral, J., & Schied, A. (2013). Dynamical models of market impact and algorithms for order execution. In J.-P. Fouque & J. A. Langsam (Eds.), Handbook on Systemic Risk (pp. 579–599).

  • Gelfand, I., & Fomin, S. (1963). Calculus of variations. Englewood Cliffs, NJ: Prentice-Hall.

  • Guo, X. (2013). Optimal placement in a limit order book. Theory Driven by Influential Applications, pp. 191–200.

  • Holthausen, R. W., Leftwich, R. W., & Mayers, D. (1990). Large-block transactions, the speed of response, and temporary and permanent stock-price effects. Journal of Financial Economics, 26(1), 71–95.

  • Huberman, G., & Stanzl, W. (2000). Arbitrage-free price update and price-impact functions. Unpublished manuscript, Columbia University.

  • Iancu, D. A., & Trichakis, N. (2014). Fairness and efficiency in multiportfolio optimization. Operations Research, 62(6), 1285–1301.

  • Jaimungal, S., Donnelly, R., & Cartea, Á. (2018). Portfolio liquidation and ambiguity aversion. In High-Performance Computing in Finance (pp. 77–114). Chapman and Hall/CRC.

  • Koutsoupias, E., & Papadimitriou, C. (2009). Worst-case equilibria. Computer Science Review, 3(2), 65–69.

  • Kratz, P. (2014). An explicit solution of a nonlinear-quadratic constrained stochastic control problem with jumps: Optimal liquidation in dark pools with adverse selection. Mathematics of Operations Research, 39(4), 1198–1220.

  • Kraus, A., & Stoll, H. R. (1972). Price impacts of block trading on the New York Stock Exchange. The Journal of Finance, 27(3), 569–588.

  • Lachapelle, A., Lasry, J.-M., Lehalle, C.-A., & Lions, P.-L. (2016). Efficiency of the price formation process in presence of high frequency participants: A mean field game analysis. Mathematics and Financial Economics, 10(3), 223–262.

  • Long, J. B. D., Shleifer, A., Summers, L. H., & Waldmann, R. J. (1990). Noise trader risk in financial markets. Journal of Political Economy, 98(4), 703–738.

  • Madhavan, A. (2000). Market microstructure: A survey. Journal of Financial Markets, 3(3), 205–258.

  • Madhavan, A., & Cheng, M. (1997). In search of liquidity: Block trades in the upstairs and downstairs markets. The Review of Financial Studies, 10(1), 175–203.

  • Markowitz, H. (1959). Portfolio selection: Efficient diversification of investments (Cowles Foundation Monograph No. 16). New York: John Wiley & Sons.

  • Mitchell, D., & Chen, J. (2020). Market or limit orders? Quantitative Finance, 20(3), 447–461.

  • Moallemi, C. C., Park, B., & Van Roy, B. (2012). Strategic execution in the presence of an uninformed arbitrageur. Journal of Financial Markets, 15(4), 361–391.

  • Moallemi, C. C., & Sağlam, M. (2013). OR Forum: The cost of latency in high-frequency trading. Operations Research, 61(5), 1070–1086.

  • Muni Toke, I. (2015). The order book as a queueing system: Average depth and influence of the size of limit orders. Quantitative Finance, 15(5), 795–808.

  • Park, B., & Van Roy, B. (2015). Adaptive execution: Exploration and learning of price impact. Operations Research, 63(5), 1058–1076.

  • Platania, F., Serrano, P., & Tapia, M. (2018). Modelling the shape of the limit order book. Quantitative Finance, 18(9), 1575–1597.

  • Schied, A., Schöneborn, T., & Tehranchi, M. (2010). Optimal basket liquidation for CARA investors is deterministic. Applied Mathematical Finance, 17(6), 471–489.

  • Schied, A., Strehle, E., & Zhang, T. (2017). High-frequency limit of Nash equilibria in a market impact game with transient price impact. SIAM Journal on Financial Mathematics, 8(1), 589–634.

  • Schied, A., & Zhang, T. (2017). A state-constrained differential game arising in optimal portfolio liquidation. Mathematical Finance, 27(3), 779–802.

  • von Stackelberg, H. (1952). Theory of the market economy. Oxford University Press.

  • Tsoukalas, G., Wang, J., & Giesecke, K. (2019). Dynamic portfolio execution. Management Science, 65(5), 2015–2040.

  • Vayanos, D. (1998). Transaction costs and asset prices: A dynamic equilibrium model. The Review of Financial Studies, 11(1), 1–58.

  • Yang, L., & Zhu, H. (2020). Back-running: Seeking and hiding fundamental information in order flows. The Review of Financial Studies, 33(4), 1484–1533.


Funding

Yinhong Dong’s research is partly supported by the CSC (China Scholarship Council, 2017-2018). Donglei Du’s research is partially supported by the NSERC grant (No. 283106), and NSFC grants (Nos. 11771386 and 11728104). Qiaoming Han’s research is partially supported by the NSFC grants (No. 11771386 and No. 11728104). Jianfeng Ren’s research is partially supported by Shandong Province Natural Science Fund (No. ZR2019MA061 and ZR2022MA038). Dachuan Xu’s research is partially supported by the NSFC (No. 12131003).

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Donglei Du.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices


1.1 Proof of Theorem 1

Proof

Recall the Stackelberg game (2).

$$\begin{aligned} \max \limits _{X_t}\quad&\int _0 ^T -\left( \theta (X_t+Y_t)+\gamma ({\dot{X}}_t+{\dot{Y}}_t)\right) {\dot{X}}_t dt\\ \text {s.t.}\quad&Y_t=\arg \max \limits _{Y_t}\left[ \int _0 ^T -\left( \theta (X_t+Y_t)+\gamma ({\dot{X}}_t+{\dot{Y}}_t)\right) {\dot{Y}}_t dt: Y_0=y_0, Y_T=y_T\right] \\&X_0=x_0, X_T=x_T \end{aligned}$$

We first show that the inner problem in (2) admits a unique optimal solution \(Y_t^*(X_t)\) for any fixed \(X_t\). Then we show that the outer problem admits a unique optimal solution \(X_t^*\). Together, these two steps imply that the Stackelberg game admits a unique open-loop equilibrium \((X^*_t, Y_t^*)\).

We consider the inner problem first. For any fixed \(X_t\), denote the integrand therein as a function of \(Y_t\) and \({\dot{Y}}_t\) only: \(G(Y_t, {\dot{Y}}_t):=-\left( \theta (X_t+Y_t)+\gamma \left( {\dot{X}}_t+{\dot{Y}}_t\right) \right) {\dot{Y}}_t.\) The associated Euler equation is a non-homogeneous second-order linear ordinary differential equation (ODE):

$$\begin{aligned} \frac{\partial G}{\partial Y}-\frac{d}{dt}\left( \frac{\partial G}{\partial {\dot{Y}}}\right) =0\iff {\ddot{Y}}_t=-\frac{1}{2}{\ddot{X}}_t-\frac{\theta }{2\gamma }{\dot{X}}_t \end{aligned}$$

with boundary conditions \(Y_0=y_0, Y_T=y_T\). This ODE admits a unique solution; namely, for \(t\in [0,T]\),

$$\begin{aligned} Y^*_t= & {} -\frac{1}{2}(X_t-x_0)-\frac{\theta }{2\gamma }\int _0^t X_sds+\frac{t \theta }{2T \gamma }\int _0^T X_sds+\left( \frac{\Delta y+\frac{1}{2}\Delta x}{T}\right) t+y_0, \end{aligned}$$
(A.1)

with trading rate

$$\begin{aligned} y^*_t= & {} {\dot{Y}}^*_t=-\frac{1}{2}{\dot{X}}_t-\frac{\theta }{2\gamma }X_t+\frac{\theta }{2T \gamma }\int _0^T X_sds+\frac{\Delta y+\frac{1}{2}\Delta x}{T}. \end{aligned}$$
(A.2)
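The closed form (A.1) can be sanity-checked numerically (this check is not part of the original proof): for any smooth leader path it must satisfy the Euler equation \({\ddot{Y}}_t=-\frac{1}{2}{\ddot{X}}_t-\frac{\theta }{2\gamma }{\dot{X}}_t\) together with the boundary conditions. The sketch below assumes the illustrative choices \(X_t=t^2\), \(\theta =2\), \(\gamma =1\), \(T=1\).

```python
import math

# Illustrative parameters (assumptions, not from the paper)
theta, gamma, T = 2.0, 1.0, 1.0
x0, y0, yT = 0.0, 0.0, 3.0

X     = lambda t: t * t          # an arbitrary smooth leader path with X(0) = x0
int_X = lambda t: t ** 3 / 3.0   # its running integral
dx = X(T) - x0                   # induced Delta x
dy = yT - y0

def Y_star(t):   # follower best response, formula (A.1)
    return (-0.5 * (X(t) - x0)
            - theta / (2 * gamma) * int_X(t)
            + t * theta / (2 * T * gamma) * int_X(T)
            + (dy + 0.5 * dx) / T * t + y0)

def dd(f, t, h=1e-5):   # central second difference
    return (f(t + h) - 2 * f(t) + f(t - h)) / h ** 2

# boundary conditions of the inner problem
assert abs(Y_star(0.0) - y0) < 1e-9
assert abs(Y_star(T) - yT) < 1e-9

# Euler equation: Y'' = -X''/2 - (theta/(2 gamma)) X', with X'' = 2, X' = 2t
for t in (0.2, 0.5, 0.8):
    assert abs(dd(Y_star, t) - (-1.0 - theta / gamma * t)) < 1e-4
```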

To show that the unique stationary point in (A.1) or (A.2) is indeed the maximum solution to the inner problem, we rewrite the inner objective function as follows, for any solution \(X_t\) and \(Y_t\),

$$\begin{aligned} \mu _F(X_t,Y_t)= & {} \int _0^T -\left[ \theta (X_t+Y_t)+\gamma ({\dot{X}}_t+{\dot{Y}}_t)\right] {\dot{Y}}_t dt\nonumber \\= & {} -\frac{\theta }{2}\left( Y_T^2-Y_0^2\right) -\gamma \int _0^T \left( {\dot{Y}}_t+{\dot{X}}_t+\frac{\theta }{\gamma }X_t\right) {\dot{Y}}_t dt \nonumber \\= & {} -\frac{\theta }{2}\left( Y_T^2-Y_0^2\right) + \gamma \int _0^T\left( \frac{1}{2}{\dot{X}}_t+\frac{\theta }{2\gamma }X_t\right) ^2 dt \nonumber \\{} & {} -\gamma \int _0^T\left( {\dot{Y}}_t+\frac{1}{2}{\dot{X}}_t+\frac{\theta }{2\gamma }X_t\right) ^2 dt\nonumber ,\\\le & {} -\frac{\theta }{2}(Y_T^2-Y_0^2)+ \gamma \int _0^T\left( \frac{1}{2}{\dot{X}}_t+\frac{\theta }{2\gamma }X_t\right) ^2 dt\nonumber \\{} & {} -\frac{\gamma }{T}\left( \int _0 ^T \left( {\dot{Y}}_t+\frac{1}{2}{\dot{X}}_t+\frac{\theta }{2\gamma }X_t\right) dt\right) ^2\nonumber \\= & {} -\frac{\theta }{2}(Y_T^2-Y_0^2)+ \gamma \int _0^T\left( \frac{1}{2}{\dot{X}}_t+\frac{\theta }{2\gamma }X_t\right) ^2 dt\nonumber \\{} & {} -\frac{\gamma }{T}\left( \Delta y+\frac{1}{2}\Delta x+\frac{\theta }{2\gamma }\int _0 ^T X_t dt\right) ^2. \end{aligned}$$
(A.3)

The second equality follows from the integration by parts formula, and the inequality follows from Jensen’s inequality applied to the last term therein.

The third equality above and (A.2) yield that the objective value for \(Y_t^*\) is

$$\begin{aligned} \mu _F(X_t,Y_t^*)= & {} -\frac{\theta }{2}\left( Y_T^2-Y_0^2\right) + \gamma \int _0^T\left( \frac{1}{2}{\dot{X}}_t+\frac{\theta }{2\gamma }X_t\right) ^2 dt \nonumber \\{} & {} -\gamma \int _0^T\left( {\dot{Y}}_t^*+\frac{1}{2}{\dot{X}}_t+\frac{\theta }{2\gamma }X_t\right) ^2 dt\nonumber \\= & {} -\frac{\theta }{2}\left( Y_T^2-Y_0^2\right) + \gamma \int _0^T\left( \frac{1}{2}{\dot{X}}_t+\frac{\theta }{2\gamma }X_t\right) ^2 dt \nonumber \\{} & {} -\gamma \int _0^T\left( \frac{\Delta y+\frac{1}{2}\Delta x}{T}+\frac{\theta }{2T\gamma }\int _0^T X_sds\right) ^2 dt\nonumber \\= & {} -\frac{\theta }{2}\left( Y_T^2-Y_0^2\right) + \gamma \int _0^T\left( \frac{1}{2}{\dot{X}}_t+\frac{\theta }{2\gamma }X_t\right) ^2 dt \nonumber \\{} & {} -\frac{\gamma }{T}\left( \Delta y+\frac{1}{2}\Delta x+\frac{\theta }{2\gamma }\int _0 ^T X_t dt\right) ^2 \end{aligned}$$
(A.4)

Together, (A.3) and (A.4) imply that \(\mu _F(X_t,Y^*_t)\ge \mu _F(X_t,Y_t)\) for any fixed \(X_t\) and all \(Y_t\). Therefore \(Y_t^*\) is the unique maximum solution of the inner problem, for any fixed \(X_t\).

We now consider the outer problem. Apply the integration by parts formula to rewrite the objective function as follows

$$\begin{aligned} \mu _L(X_t, Y_t)= & {} \int _0 ^T -\left( \theta (X_t+Y_t)+\gamma \left( {\dot{X}}_t+{\dot{Y}}_t\right) \right) {\dot{X}}_t dt\\= & {} \theta \left( (X_0+Y_0)x_0-(X_T+Y_T)X_T\right) -\gamma \int _0 ^T\left( {\dot{X}}_t+{\dot{Y}}_t\right) \left( {\dot{X}}_t-\frac{\theta }{\gamma }X_t\right) dt. \end{aligned}$$

Inserting \(Y_t^*\) from (A.1), we obtain

$$\begin{aligned} \mu _L(X_t, Y^*_t)= & {} \theta \left( (X_0+Y_0)x_0-(X_T+Y_T)X_T\right) \\{} & {} -\frac{\gamma }{2}\int _0 ^T\left( {\dot{X}}_t-\frac{\theta }{\gamma }X_t+\frac{\theta }{T \gamma }\int _0^T X_sds+\frac{\Delta x+2\Delta y}{T}\right) \left( {\dot{X}}_t-\frac{\theta }{\gamma }X_t\right) dt. \end{aligned}$$

The first term is a constant, so maximizing the outer objective is equivalent to maximizing the second term; discarding the positive factor \(\gamma /2\), this amounts to

$$\begin{aligned} \max _{X_t} \left[ J(X_t):= -\int _0 ^T\left( {\dot{X}}_t-\frac{\theta }{\gamma }X_t+\frac{\theta }{T \gamma }\int _0^T X_sds+\frac{\Delta x+2\Delta y}{T}\right) \left( {\dot{X}}_t-\frac{\theta }{\gamma }X_t\right) dt\right] . \end{aligned}$$

Consider the first-order variation. For any admissible direction \(h\in C^1[0,T]\) (\(h(0) = h(T) = 0\)), the first-order variation of J is given by

$$\begin{aligned} \delta J_h(X_t)=\int _0^T\left( \gamma {\ddot{X}}_t-\frac{\theta ^2}{\gamma }{X}_t+\frac{\theta ^2}{T\gamma }\int _0^T{X}_tdt+\frac{\theta }{T}\Delta y\right) h_tdt. \end{aligned}$$

Setting \(\delta J_h(X_t)=0\) for all admissible directions \(h\) yields the Euler-Lagrange equation

$$\begin{aligned} \gamma {\ddot{X}}_t-\frac{\theta ^2}{\gamma }{X}_t+\frac{\theta ^2}{T\gamma }\int _0^T{X}_tdt+\frac{\theta }{T}\Delta y=0. \end{aligned}$$
(A.5)

Taking the first derivative with respect to t, we have a homogeneous ODE of third order: \(\dddot{X}_t-\frac{\theta ^2}{\gamma ^2}{\dot{X}}_t=0\), whose general solution is \(X_t=c_0+c_1e^{-\frac{\theta }{\gamma }t}+c_2e^{\frac{\theta }{\gamma }t}\). Inserting \(X_t\) into the Euler-Lagrange equation (A.5), we obtain \(c_1\left( 1-e^{-\frac{\theta T}{\gamma }}\right) +c_2\left( e^{\frac{\theta T}{\gamma }}-1\right) =-\Delta y.\) Together with the boundary conditions \(X_0=c_0+c_1+c_2=x_0\) and \(X_T=c_0+c_1e^{-\frac{\theta }{\gamma }T}+c_2e^{\frac{\theta }{\gamma }T}=x_T\), we have \(c_1=-\frac{\Delta x+\Delta y}{2}\left( 1-e^{-\frac{\theta T}{\gamma }}\right) ^{-1};\ c_2=\frac{\Delta x-\Delta y}{2}\left( e^{\frac{\theta T}{\gamma }}-1\right) ^{-1};\ c_0=x_0-c_1-c_2\). Set \(a=-\frac{\theta }{\gamma }c_1=\frac{\theta }{2\gamma }(\Delta x+\Delta y)\left( 1-e^{-\frac{\theta T}{\gamma }}\right) ^{-1}\) and \(b=\frac{\theta }{\gamma }c_2=\frac{\theta }{2\gamma }(\Delta x-\Delta y)\left( e^{\frac{\theta T}{\gamma }}-1\right) ^{-1}\). Then \(c_0=x_0-c_1-c_2=x_0-\frac{\gamma }{\theta }(b-a)\). So the equilibrium position of the leader is given as \(X^*_t=c_0+c_1e^{-\frac{\theta }{\gamma }t}+c_2e^{\frac{\theta }{\gamma }t}=x_0+\frac{a\gamma }{\theta }\left( 1-e^{-\frac{\theta }{\gamma }t}\right) +\frac{b\gamma }{\theta }\left( e^{\frac{\theta }{\gamma }t}-1\right) \). The corresponding speed is \(x_t^*={\dot{X}}^*_t=ae^{-\frac{\theta }{\gamma }t}+be^{\frac{\theta }{\gamma }t}\). Inserting this quantity into (A.1)-(A.2), we obtain the equilibrium speed and position of the follower: \(Y^*_t=y_0+\frac{\Delta x+\Delta y}{2T}t-\frac{b\gamma }{\theta }\left( e^{\frac{\theta }{\gamma }t}-1\right) ;\ y_t^*=\dot{Y}^*_t=\frac{\Delta x+\Delta y}{2T}-be^{\frac{\theta }{\gamma }t}\).
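As a numerical sanity check (not part of the proof), the closed forms above can be verified against the boundary conditions and the Euler-Lagrange equation (A.5); all parameter values below are illustrative assumptions.

```python
import math

# Illustrative parameters (assumptions)
theta, gamma, T = 2.0, 1.0, 1.0
x0, xT, y0, yT = 0.0, 1.0, 0.0, 3.0
dx, dy = xT - x0, yT - y0
m = theta / gamma

a = theta / (2 * gamma) * (dx + dy) / (1 - math.exp(-m * T))
b = theta / (2 * gamma) * (dx - dy) / (math.exp(m * T) - 1)

def X(t):   # leader's equilibrium position
    return (x0 + a * gamma / theta * (1 - math.exp(-m * t))
               + b * gamma / theta * (math.exp(m * t) - 1))

def Y(t):   # follower's equilibrium position
    return y0 + (dx + dy) / (2 * T) * t - b * gamma / theta * (math.exp(m * t) - 1)

def ddX(t):  # analytic second derivative of X
    return -m * a * math.exp(-m * t) + m * b * math.exp(m * t)

# boundary conditions
for val, target in ((X(0.0), x0), (X(T), xT), (Y(0.0), y0), (Y(T), yT)):
    assert abs(val - target) < 1e-9

# integral of X over [0, T] by Simpson's rule
n = 2000
h = T / n
I = (X(0.0) + X(T) + sum((4 if k % 2 else 2) * X(k * h) for k in range(1, n))) * h / 3

# residual of (A.5): gamma X'' - (theta^2/gamma) X + (theta^2/(T gamma)) I + (theta/T) dy
max_res = max(abs(gamma * ddX(t) - theta ** 2 / gamma * X(t)
                  + theta ** 2 / (T * gamma) * I + theta / T * dy)
              for t in (0.1, 0.5, 0.9))
assert max_res < 1e-6
```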

To show that \(X^*_t\) is the maximum solution of the outer problem, we consider the second-order variation of the objective function \(J(X_t)=\int _0^T F(X_t, {\dot{X}}_t)dt\). For any admissible direction \(h\in C^1[0,T]\) (\(h_0 = h_T = 0\)), the second-order variation of J is given by

$$\begin{aligned} \delta ^2 J_h(X_t)= & {} -\frac{\gamma }{2}\left( \int _0^T\left( \dot{h}_t-\frac{\theta }{\gamma }h_t\right) ^2 dt-\frac{\theta ^2}{T\gamma ^2}\left( \int _0 ^T h_t dt\right) ^2\right) \\\le & {} -\frac{\gamma }{2}\left( \int _0^T\dot{h}_t^2dt+\frac{\theta ^2}{\gamma ^2}\int _0^T h_t^2 dt-\frac{2\theta }{\gamma }\int _0^T\dot{h}_th_tdt-\frac{\theta ^2}{\gamma ^2}\int _0^T h_t^2 dt\right) \\= & {} -\frac{\gamma }{2}\int _0^T\dot{h}_t^2dt, \end{aligned}$$

where the inequality follows from the Cauchy-Schwarz bound \(\left( \int _0 ^T h_t dt\right) ^2\le T\int _0^T h_t^2dt\), and the last equality uses \(\int _0^T\dot{h}_th_tdt=0\), which follows from the integration by parts formula and h being admissible (\(h_0 = h_T = 0\)):

$$\begin{aligned} \int _0^T\dot{h_t}h_tdt=h^2_t\left| _0^T\right. -\int _0^T\dot{h_t}h_tdt=h^2_T-h^2_0-\int _0^T\dot{h_t}h_tdt=-\int _0^T\dot{h_t}h_tdt, \end{aligned}$$

so that \(2\int _0^T\dot{h_t}h_tdt=0.\) Combining the above with the Poincaré inequality \(\int _0^T h_t^2dt\le \frac{T^2}{\pi ^2}\int _0^T\dot{h}_t^2dt\) (valid since \(h_0=h_T=0\)) gives \(\delta ^2 J_h(X_t)\le -\frac{\gamma }{2}\left( 1+\frac{T^2}{\pi ^2}\right) ^{-1}||h_t||^2_{1,2}\), where \(||\cdot ||_{1,2}\) is the natural norm in the Sobolev space \(W^{1,2}\). Such a bound is a sufficient condition for \(X_t\) to be the maximum solution of the outer problem (Gelfand & Fomin, 1963, Chapter 5: Section 24: Theorem 2). \(\square \)

1.2 The expected payoffs and variance for the Stackelberg game

We calculate the expected payoffs for the Stackelberg game, the Nash game (Carlin et al., 2007), and the single agent problem (Almgren and Chriss, 2001), respectively.

1.2.1 Expected price

From Theorem 1, the expected price for the Stackelberg game equals \({\mathbb {E}}[P_t]=\gamma \left( a+ (1+m t)\frac{\Delta x+\Delta y}{2T}\right) \), where \(m:=\theta /\gamma \).

1.2.2 Expected payoff

From Theorem 1, we obtain the expected payoffs for the Stackelberg equilibrium.

$$\begin{aligned} -\mu _L= & {} \int _0^T\left( \theta (X_t+Y_t)+\gamma ({\dot{X}}_t+{\dot{Y}}_t)\right) {\dot{X}}_tdt\\= & {} \theta (x_0+y_0) \Delta x+\left( \frac{\gamma }{2 T}-\frac{\theta }{4\left( e^{mT}-1\right) }\right) (\Delta x+\Delta y)^2+\frac{\theta (\Delta x+\Delta y)}{4\left( 1-e^{-mT}\right) }(3\Delta x-\Delta y)\\ -\mu _F= & {} \int _0^T\left( \theta (X_t+Y_t)+\gamma ({\dot{X}}_t+{\dot{Y}}_t)\right) {\dot{Y}}_tdt\\ -\mu= & {} -\mu _L-\mu _F =\theta (x_0+y_0) (\Delta x+\Delta y)+\left( \frac{3\gamma }{4T}+\frac{5\theta }{8}+\frac{\theta }{4(e^{mT}-1)}\right) (\Delta x+\Delta y)^2 \end{aligned}$$
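The closed-form expressions for \(-\mu _L\) and \(-\mu \) can be cross-checked by integrating the payoff integrands along the equilibrium paths of Theorem 1; the sketch below uses Simpson quadrature and illustrative parameter values (assumptions, not from the text).

```python
import math

# Illustrative parameters (assumptions)
theta, gamma, T = 2.0, 1.0, 1.0
x0, xT, y0, yT = 0.5, 1.5, 0.5, 3.5
dx, dy = xT - x0, yT - y0
m = theta / gamma

a = m / 2 * (dx + dy) / (1 - math.exp(-m * T))
b = m / 2 * (dx - dy) / (math.exp(m * T) - 1)
s = (dx + dy) / (2 * T)

# Equilibrium positions and speeds from Theorem 1
X  = lambda t: x0 + a / m * (1 - math.exp(-m * t)) + b / m * (math.exp(m * t) - 1)
Y  = lambda t: y0 + s * t - b / m * (math.exp(m * t) - 1)
xd = lambda t: a * math.exp(-m * t) + b * math.exp(m * t)
yd = lambda t: s - b * math.exp(m * t)

def simpson(f, n=4000):
    h = T / n
    return h / 3 * (f(0.0) + f(T) + sum((4 if k % 2 else 2) * f(k * h) for k in range(1, n)))

price  = lambda t: theta * (X(t) + Y(t)) + gamma * (xd(t) + yd(t))
cost_L = simpson(lambda t: price(t) * xd(t))              # numerical -mu_L
cost   = simpson(lambda t: price(t) * (xd(t) + yd(t)))    # numerical -mu

emT = math.exp(m * T)
closed_L = (theta * (x0 + y0) * dx
            + (gamma / (2 * T) - theta / (4 * (emT - 1))) * (dx + dy) ** 2
            + theta * (dx + dy) * (3 * dx - dy) / (4 * (1 - math.exp(-m * T))))
closed   = (theta * (x0 + y0) * (dx + dy)
            + (3 * gamma / (4 * T) + 5 * theta / 8
               + theta / (4 * (emT - 1))) * (dx + dy) ** 2)

assert abs(cost_L - closed_L) < 1e-6
assert abs(cost - closed) < 1e-6
```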

1.2.3 Variance

In the following derivation, we apply the following Itô version of the integration-by-parts formula, valid for a deterministic, continuously differentiable \(X_t\) and a standard Brownian motion \(B_t\):

$$\begin{aligned} {\mathbb {V}}\left[ \int _0^T B_tdX_t\right]= & {} {\mathbb {V}}[B_TX_T]+{\mathbb {V}}\left[ \int _0^T X_tdB_t\right] -2X_T\textrm{Cov}\left[ B_T, \int _0^T X_tdB_t\right] \\= & {} TX_T^2+\int _0^T X^2_tdt-2X_T{\mathbb {E}}\left[ \int _0^TdB_t\int _0^T X_tdB_t\right] \\= & {} TX_T^2+\int _0^T X^2_tdt-2X_T\int _0^TX_tdt=\int _0^T(X_T-X_t)^2dt \end{aligned}$$
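This identity can be illustrated by simulation (not part of the derivation): for a deterministic differentiable \(X_t\), the sample variance of \(\int _0^T B_tdX_t\) should match \(\int _0^T(X_T-X_t)^2dt\). The sketch below assumes \(X_t=t^2\) on \([0,1]\) and uses a left-point discretization of the stochastic integral.

```python
import random, math

random.seed(7)

T, n, N = 1.0, 100, 20000      # horizon, time steps, number of sample paths
h = T / n
X = lambda t: t * t            # a deterministic, differentiable path (assumption)

# right-hand side: integral of (X_T - X_t)^2 by the midpoint rule
exact = sum((X(T) - X((k + 0.5) * h)) ** 2 for k in range(n)) * h

samples = []
for _ in range(N):
    B, s = 0.0, 0.0
    for k in range(n):
        s += B * (X((k + 1) * h) - X(k * h))   # left-point sum for the integral of B dX
        B += random.gauss(0.0, math.sqrt(h))   # Brownian increment
    samples.append(s)

mean = sum(samples) / N
var = sum((v - mean) ** 2 for v in samples) / (N - 1)

assert abs(exact - 8 / 15) < 1e-3   # closed form of the integral for X_t = t^2, T = 1
assert abs(var - exact) < 0.06      # sample variance matches the identity
```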

From Theorem 1, we obtain the variance of the payoffs

$$\begin{aligned} \sigma ^2_L= & {} T(X_T)^2-2X_T\int _0^TX_tdt+\int _0^T(X_t)^2dt =\int _0^T(X_t-X_T)^2dt\\= & {} \left( \frac{\gamma }{\theta }(a-b)-\Delta x\right) ^2T+\frac{\theta }{4\gamma }(\Delta x^2+\Delta y^2)-2\left( \frac{\gamma ^2}{\theta ^2}(a-b)-\frac{\gamma }{\theta }\Delta x\right) \Delta y-\frac{2abT\theta ^2}{\gamma ^2}\\ \sigma ^2_F= & {} TY_T^2-2Y_T\int _0^TY_tdt+\int _0^TY_t^2dt=\int _0^T(Y_t-Y_T)^2dt.\\= & {} \frac{T}{12}\left( \Delta x^2-4\Delta x\Delta y+7\Delta y^2\right) -\frac{3\gamma }{8\theta }(\Delta x-\Delta y)^2-\frac{b\gamma ^2}{2\theta ^2}(3\Delta x+\Delta y)\\{} & {} +\frac{b\gamma T}{2\theta }(\Delta x-3\Delta y)+\frac{\gamma ^2}{2T\theta ^2}(\Delta x^2-\Delta y^2)+\frac{b^2\gamma ^2}{\theta ^2}T-\frac{2b\gamma ^2}{\theta ^2}\Delta y. \end{aligned}$$

1.3 The simultaneous Nash game (Carlin et al., 2007)

$$\begin{aligned} \max \limits _{X}{} & {} \left[ \mu _L(X,Y): X_0=x_0, X_T=x_T\right] \end{aligned}$$
(A.6)
$$\begin{aligned} \max \limits _{Y}{} & {} \left[ \mu _F(X,Y): Y_0=y_0, Y_T=y_T\right] \end{aligned}$$
(A.7)

Theorem 2

(Carlin et al. (2007)) Let \(\Delta x=x_T-x_0\) and \(\Delta y=y_T-y_0\). The Nash game (A.6)-(A.7) admits the following unique open-loop equilibrium. For any \(t\in [0, T]\),

$$\begin{aligned} \dot{X}_t^N= & {} ce^{-\frac{\theta }{3\gamma }t}+b e^{\frac{\theta }{\gamma }t};\ \dot{Y}_t^N=ce^{-\frac{\theta }{3\gamma }t}-be^{\frac{\theta }{\gamma }t}, \end{aligned}$$

or equivalently in terms of trading positions

$$\begin{aligned} X^N_t= & {} x_0+\frac{3c\gamma }{\theta }\left( 1-e^{-\frac{\theta }{3\gamma }t}\right) +\frac{b\gamma }{\theta }\left( e^{\frac{\theta }{\gamma }t}-1\right) ;\\ Y^N_t= & {} y_0+\frac{3c\gamma }{\theta }\left( 1-e^{-\frac{\theta }{3\gamma }t}\right) -\frac{b\gamma }{\theta }\left( e^{\frac{\theta }{\gamma }t}-1\right) , \end{aligned}$$

where

$$\begin{aligned} a= & {} \frac{\theta }{\gamma }\left( 1-e^{-mT}\right) ^{-1}\frac{\Delta x+\Delta y}{2};\ b=\frac{\theta }{\gamma }\left( e^{mT}-1\right) ^{-1}\frac{\Delta x-\Delta y}{2},\nonumber \\ c= & {} \frac{\theta }{3\gamma }\left( 1-e^{-\frac{\theta }{3\gamma }T}\right) ^{-1}\frac{\Delta x+\Delta y}{2}. \end{aligned}$$
(A.8)
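As a numerical sanity check (not part of the original result), the Nash equilibrium paths above must satisfy the boundary conditions and each player's Euler equation given the other's path, \({\ddot{X}}_t=-\frac{1}{2}{\ddot{Y}}_t-\frac{\theta }{2\gamma }{\dot{Y}}_t\) and symmetrically for \(Y\); all parameter values below are illustrative assumptions.

```python
import math

# Illustrative parameters (assumptions)
theta, gamma, T = 2.0, 1.0, 1.0
x0, xT, y0, yT = 0.0, 1.0, 0.0, 3.0
dx, dy = xT - x0, yT - y0
m = theta / gamma

b = m / 2 * (dx - dy) / (math.exp(m * T) - 1)
c = m / 6 * (dx + dy) / (1 - math.exp(-m * T / 3))

XN = lambda t: x0 + 3 * c / m * (1 - math.exp(-m * t / 3)) + b / m * (math.exp(m * t) - 1)
YN = lambda t: y0 + 3 * c / m * (1 - math.exp(-m * t / 3)) - b / m * (math.exp(m * t) - 1)

def d1(f, t, h=1e-5):   # central first difference
    return (f(t + h) - f(t - h)) / (2 * h)

def d2(f, t, h=1e-4):   # central second difference
    return (f(t + h) - 2 * f(t) + f(t - h)) / h ** 2

# boundary conditions
assert abs(XN(T) - xT) < 1e-9 and abs(YN(T) - yT) < 1e-9

# each player's Euler equation given the other's path:
#   X'' = -Y''/2 - (theta/(2 gamma)) Y'   and symmetrically for Y
foc_err = max(max(abs(d2(XN, t) + 0.5 * d2(YN, t) + m / 2 * d1(YN, t)),
                  abs(d2(YN, t) + 0.5 * d2(XN, t) + m / 2 * d1(XN, t)))
              for t in (0.25, 0.5, 0.75))
assert foc_err < 1e-3
```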

1.3.1 Expected price

From Theorem 2, the expected price for the Nash game equals \({\mathbb {E}}[P^N_t]=\gamma c\left( 6-4e^{-\frac{\theta }{3\gamma }t}\right) \).

1.3.2 Expected payoffs

From Theorem 2, the expected payoffs of the Nash equilibrium are respectively

$$\begin{aligned} -\mu ^N_L= & {} \int _0^T\left( \theta (X^N_t+Y^N_t)+\gamma ({\dot{X}}^N_t+{\dot{Y}}^N_t)\right) {\dot{X}}^N_tdt\\= & {} \theta (x_0+y_0) \Delta x+\frac{\theta (\Delta x+\Delta y)^2}{6}+\gamma \left( c+a\right) \Delta x+\gamma \left( c-a\right) \Delta y\\ -\mu ^N_F= & {} \int _0^T\left( \theta (X^N_t+Y^N_t)+\gamma ({\dot{X}}^N_t+{\dot{Y}}^N_t)\right) {\dot{Y}}^N_tdt\\= & {} \theta (x_0+y_0) \Delta y+\frac{\theta (\Delta x+\Delta y)^2}{6}+\gamma \left( c+a\right) \Delta y+\gamma \left( c-a\right) \Delta x\\ -\mu ^N= & {} -\mu ^N_L-\mu ^N_F\\= & {} \theta (x_0+y_0)(\Delta x+ \Delta y)+\frac{\theta (\Delta x+\Delta y)^2}{3}+2\gamma c(\Delta x+ \Delta y). \end{aligned}$$
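These closed forms can be cross-checked by quadrature along the Nash equilibrium paths of Theorem 2 (a numerical sketch under illustrative parameter assumptions, not part of the text):

```python
import math

# Illustrative parameters (assumptions)
theta, gamma, T = 2.0, 1.0, 1.0
x0, xT, y0, yT = 0.5, 1.5, 0.5, 3.5
dx, dy = xT - x0, yT - y0
m = theta / gamma

a = m / 2 * (dx + dy) / (1 - math.exp(-m * T))
b = m / 2 * (dx - dy) / (math.exp(m * T) - 1)
c = m / 6 * (dx + dy) / (1 - math.exp(-m * T / 3))

XN  = lambda t: x0 + 3 * c / m * (1 - math.exp(-m * t / 3)) + b / m * (math.exp(m * t) - 1)
YN  = lambda t: y0 + 3 * c / m * (1 - math.exp(-m * t / 3)) - b / m * (math.exp(m * t) - 1)
xdN = lambda t: c * math.exp(-m * t / 3) + b * math.exp(m * t)
ydN = lambda t: c * math.exp(-m * t / 3) - b * math.exp(m * t)

def simpson(f, n=4000):
    h = T / n
    return h / 3 * (f(0.0) + f(T) + sum((4 if k % 2 else 2) * f(k * h) for k in range(1, n)))

price  = lambda t: theta * (XN(t) + YN(t)) + gamma * (xdN(t) + ydN(t))
cost_L = simpson(lambda t: price(t) * xdN(t))               # numerical -mu^N_L
cost   = simpson(lambda t: price(t) * (xdN(t) + ydN(t)))    # numerical -mu^N

closed_L = (theta * (x0 + y0) * dx + theta * (dx + dy) ** 2 / 6
            + gamma * (c + a) * dx + gamma * (c - a) * dy)
closed   = (theta * (x0 + y0) * (dx + dy) + theta * (dx + dy) ** 2 / 3
            + 2 * gamma * c * (dx + dy))

assert abs(cost_L - closed_L) < 1e-6
assert abs(cost - closed) < 1e-6
```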

1.3.3 Variance

From Theorem 2, we obtain the variance of the payoffs

$$\begin{aligned} (\sigma ^N_L)^2= & {} T(X^N_T)^2-2X^N_T\int _0^TX^N_tdt+\int _0^T(X_t^N)^2dt\\= & {} \frac{\gamma }{4\theta }(7\Delta x^2+12\Delta x \Delta y-\Delta y^2)+\frac{b\gamma ^2}{2\theta ^2}(5\Delta x+7\Delta y)-\frac{3c\gamma ^2}{2\theta ^2}(\Delta x+5\Delta y)\\{} & {} -\frac{3b\gamma ^2}{2\theta ^2}(\Delta x+\Delta y)\left( e^{\frac{2\theta }{3\gamma }T}+e^{\frac{\theta }{3\gamma }T}\right) +\left( \frac{\gamma }{\theta }(3c-b)-\Delta x\right) ^2 \\ (\sigma ^N_F)^2= & {} T(Y^N_T)^2-2Y^N_T\int _0^TY^N_tdt+\int _0^T(Y_t^N)^2dt\\= & {} \frac{\gamma }{4\theta }(7\Delta y^2+12\Delta x \Delta y-\Delta x^2)-\frac{b\gamma ^2}{2\theta ^2}(5\Delta y+7\Delta x)-\frac{3c\gamma ^2}{2\theta ^2}(\Delta y+5\Delta x)\\{} & {} +\frac{3b\gamma ^2}{2\theta ^2}(\Delta x+\Delta y)\left( e^{\frac{2\theta }{3\gamma }T}+e^{\frac{\theta }{3\gamma }T}\right) +\left( \frac{\gamma }{\theta }(3c+b)-\Delta y\right) ^2. \end{aligned}$$

1.4 The single agent optimal order execution problem (Almgren and Chriss, 2001)

$$\begin{aligned} \max \limits _{X}{} & {} \left[ \int _0^T -\left( \theta X_t+\gamma {\dot{X}}_t\right) {\dot{X}}_t dt: X_0=x_0, X_T=x_T\right] \end{aligned}$$
(A.9)

Theorem 3

(Almgren and Chriss (2001)) Let \(\Delta x=x_T-x_0\). The problem (A.9) admits the following unique optimal solution: for all \(t\in [0, T]\), \(\dot{X}_t^S=\frac{\Delta x}{T}\); or, in terms of trading positions, \(X^S_t=x_0+\frac{\Delta x}{T}t.\)

1.4.1 Expected price

From Theorem 3, the expected price for the single agent who trades a total amount of \(\Delta x+\Delta y\) equals \({\mathbb {E}}[P^S_t]=(\gamma +\theta t)\frac{\Delta x+\Delta y}{T}.\)

1.4.2 Expected payoff

From Theorem 3, the expected payoff for the single agent who trades a total amount of \(\Delta x+\Delta y\) equals \(\mu ^S=-\frac{\theta (\Delta x+\Delta y)^2}{2}\left( 1+\frac{2}{mT}\right) .\)
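This closed form can be cross-checked by integrating the execution cost along the constant-speed schedule of Theorem 3 (a numerical sketch; the values of \(\theta \), \(\gamma \), \(T\), and the total amount \(D=\Delta x+\Delta y\) are illustrative assumptions, with \(X_0=0\)):

```python
import math

theta, gamma, T = 2.0, 1.0, 1.0    # illustrative parameters (assumptions)
D = 4.0                            # total traded amount Delta x + Delta y; X_0 = 0
m = theta / gamma

X  = lambda t: D * t / T           # constant-speed schedule of Theorem 3
xd = D / T

def simpson(f, n=2000):
    h = T / n
    return h / 3 * (f(0.0) + f(T) + sum((4 if k % 2 else 2) * f(k * h) for k in range(1, n)))

cost = simpson(lambda t: (theta * X(t) + gamma * xd) * xd)   # execution cost -mu^S
closed = theta * D ** 2 / 2 * (1 + 2 / (m * T))

assert abs(cost - closed) < 1e-9
```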

1.4.3 Variance

From Theorem 3, the variance of the payoff for the single agent who trades a total amount of \(\Delta x+\Delta y\) equals \((\sigma ^S)^2=\int _0^T(X_T-X_t)^2dt=\frac{T}{3}(\Delta x+\Delta y)^2.\)

1.5 Proof of Proposition 1

From Appendices A.2.1, A.3.1, and A.4.1, the expected prices for the Stackelberg game, the Nash game, and the single agent problem are as follows.

$$\begin{aligned} {\mathbb {E}}[P_t]= & {} \gamma \left( a+ (1+m t)\frac{\Delta x+\Delta y}{2T}\right) ,\\ {\mathbb {E}}[P^N_t]= & {} \gamma \left( 6c-4ce^{-\frac{m}{3}t}\right) ,\\ {\mathbb {E}}[P^S_t]= & {} \gamma \left( (1+mt)\frac{\Delta x+\Delta y}{T}\right) . \end{aligned}$$

where a and c are defined in (4) and (A.8) (Appendix A.3), respectively. Without loss of generality, we assume that \(\Delta x+\Delta y>0\).

  1. (1)

    As \(t\rightarrow 0\),

    $$\begin{aligned} {\mathbb {E}}[P_t]= & {} \gamma \left( a+ \frac{\Delta x+\Delta y}{2T}\right) ,\\ {\mathbb {E}}[P^N_t]= & {} 2\gamma c,\\ {\mathbb {E}}[P^S_t]= & {} \gamma \frac{\Delta x+\Delta y}{T}. \end{aligned}$$

    Obviously,

    $$\begin{aligned} \frac{{\mathbb {E}}[P^N_t]}{{\mathbb {E}}[P^S_t]}= & {} \frac{mT}{3(1-e^{-mT/3})}\ge \frac{mT}{3 mT/3}=1,\\ {\mathbb {E}}[P_t]-{\mathbb {E}}[P^S_t]= & {} \gamma \left( a- \frac{\Delta x+\Delta y}{2T}\right) =\gamma \frac{\Delta x+\Delta y}{2T}(\frac{mT}{1-e^{-mT}}-1)\ge 0. \end{aligned}$$

    Therefore, \({\mathbb {E}}[P_t] > {\mathbb {E}}[P^S_t]\) and \( {\mathbb {E}}[P^N_t]> {\mathbb {E}}[P^S_t]\) when \(t\rightarrow 0\).

    As \(t\rightarrow T\), writing \(x:=mT\),

    $$\begin{aligned} {\mathbb {E}}[P^S_t]-{\mathbb {E}}[P_t]= & {} \gamma \frac{\Delta x+\Delta y}{2T}\left( 1+mT-\frac{mT}{1-e^{-mT}}\right) =\gamma \frac{\Delta x+\Delta y}{2T}f(x),\\ f'(x)= & {} \frac{e^{-x} x}{\left( 1-e^{-x}\right) ^2}-\frac{1}{1-e^{-x}}+1 \ge 0. \end{aligned}$$

    Since \(f(0^+)=0\) and \(f'(x)\ge 0\), we have \(\min _{x\ge 0} f(x)=0\) and therefore \({\mathbb {E}}[P^S_t]\ge {\mathbb {E}}[P_t]\). Meanwhile,

    $$\begin{aligned} \frac{{\mathbb {E}}[P^S_t]}{{\mathbb {E}}[P^N_t]}= & {} \frac{3\left( 1+\frac{1}{mT}\right) }{2+\left( 1-e^{-\frac{mT}{3}}\right) ^{-1}} =1+\frac{3\left( e^{\frac{x}{3}}-1\right) -x}{x\left( 3e^{\frac{x}{3}}-2\right) } > 1, \end{aligned}$$

    since \(3(e^{x/3}-1)>x\) for \(x>0\). Therefore, \({\mathbb {E}}[P^S_t] > {\mathbb {E}}[P^N_t]\).

  2. (2)

    Consider the difference \({\mathbb {E}}[P_t]-{\mathbb {E}}[P^S_t]\):

    $$\begin{aligned} {\mathbb {E}}[P_t]-{\mathbb {E}}[P^S_t]= & {} \gamma \left( a-(1+m t)\frac{\Delta x+\Delta y}{2T}\right) \\= & {} \frac{(\Delta x+\Delta y)m\gamma }{2T} (\frac{T}{1-e^{-mT}}-\frac{1}{m}-t)\\ \end{aligned}$$

    Denote \(t_1=\left( \frac{1}{1-e^{-mT}}-\frac{1}{mT}\right) T\); one can show that \(t_1 \in \left[ \frac{T}{2}, T\right] \). We then have the following relationships: when \(t\in \left[ 0, t_1\right] \), \({\mathbb {E}}[P^S_t] \le {\mathbb {E}}[P_t]\); and when \(t\in \left( t_1, T \right] \), \({\mathbb {E}}[P^S_t] > {\mathbb {E}}[P_t]\).

  3. (3)

    Consider the difference \({\mathbb {E}}[P_t]-{\mathbb {E}}[P^N_t]\), normalized by the positive factor \(\gamma \frac{\Delta x+\Delta y}{2T}\) (which does not affect its sign):

    $$\begin{aligned} \frac{2T}{\gamma (\Delta x+\Delta y)}\left( {\mathbb {E}}[P_t]-{\mathbb {E}}[P^N_t]\right)= & {} \frac{4 m T e^{-\frac{mt}{3}}}{3 \left( 1-e^{-\frac{mT}{3}}\right) }+m t+\frac{m T}{1-e^{-m T}}-\frac{2 m T}{1-e^{-\frac{mT}{3}}}+1=f(t),\\ f'(t)= & {} m-\frac{4 m^2 T e^{-\frac{mt}{3}}}{9 \left( 1-e^{-\frac{mT}{3}}\right) },\\ f''(t)= & {} \frac{4 m^3 T e^{-\frac{mt}{3}}}{27 \left( 1-e^{-\frac{mT}{3}}\right) }\ge 0. \end{aligned}$$

    Since \(f''\ge 0\), \(f'(t)\) is increasing, so \(f'(0)\le f'(t)\le f'(T)\). Treating \(mT\) as a parameter, we consider three cases.

    • When \(mT \le 1.65\), where \(mT\approx 1.65\) is the root of \(f'(T)=0\) viewed as an equation in \(mT\), \(f(t)\) is decreasing, with \(f_{max}=f(0)=\frac{mT}{1-e^{-mT}}-\frac{2mT}{3\left( 1-e^{-mT/3}\right) }+1>0\) and \(f_{min}=f(T)=\frac{mT}{1-e^{-mT}}-\frac{2mTe^{-mT/3}}{3\left( 1-e^{-mT/3}\right) }-mT+1 < 0\). Hence there exists a unique point \(t_2\) with \(f(t_2)=0\), and \(t_2\) is a function of \(mT\).

    • When \(mT \in [1.65, 3.06]\), where \(mT\approx 3.06\) is the root of \(f(T)=0\) viewed as an equation in \(mT\), \(f(t)\) is convex, with \(f_{max}=f(0)>0\), \(f(T)<0\), and \(f_{min}=\min \{f(T),f(t^*)\}< 0\), where \(t^*\) is the interior stationary point of f. Again there exists a unique point \(t_2\) with \(f(t_2)=0\).

    • When \(mT > 3.06\), \(f(t)\) is convex, with \(f_{max}=f(0)>0\), \(f(T)>0\), and \(f_{min}=f(t^*)< 0\). Hence there exist two points \(t_3\) and \(t_4\) with \(f(t_3)=f(t_4)=0\).

    Note that the first two cases have the same conclusion. Therefore, we merge all cases into the following two cases.

    • when \(mT \in (0, 3.06] \)

      • \(t \in [0,t_2]\), \({\mathbb {E}}[P_t]>{\mathbb {E}}[P^N_t]\);

      • \(t \in [t_2, T]\), \({\mathbb {E}}[P_t]<{\mathbb {E}}[P^N_t]\).

    • when \(mT > 3.06\)

      • \(t \in [0,t_3]\), \({\mathbb {E}}[P_t] \ge {\mathbb {E}}[P^N_t]\);

      • \(t \in (t_3, t_4)\), \({\mathbb {E}}[P_t]<{\mathbb {E}}[P^N_t]\);

      • \(t \in [t_4, T]\), \({\mathbb {E}}[P_t] \ge {\mathbb {E}}[P^N_t]\).
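The case analysis above can be checked numerically (a sketch, not part of the proof): \(t_1/T\) stays in \([1/2,1]\), and the scaled gap \(f(t)\) changes sign once for small \(mT\) and twice for large \(mT\); the sample values \(mT=2\) and \(mT=6\) are illustrative assumptions.

```python
import math

def f(t, m, T):   # scaled gap (E[P_t] - E[P^N_t]) / (gamma (dx+dy) / (2T))
    return (4 * m * T * math.exp(-m * t / 3) / (3 * (1 - math.exp(-m * T / 3)))
            + m * t + m * T / (1 - math.exp(-m * T))
            - 2 * m * T / (1 - math.exp(-m * T / 3)) + 1)

def sign_changes(m, T, n=10000):
    vals = [f(k * T / n, m, T) for k in range(n + 1)]
    return sum(1 for v1, v2 in zip(vals, vals[1:]) if v1 * v2 < 0)

def t1(x, T):     # crossing time of E[P_t] and E[P^S_t], item (2)
    return (1 / (1 - math.exp(-x)) - 1 / x) * T

T = 1.0
for x in (0.1, 1.0, 3.0, 10.0):
    assert T / 2 <= t1(x, T) <= T          # t1 lies in [T/2, T]

assert sign_changes(2.0, T) == 1           # one crossing for mT <= 3.06
assert sign_changes(6.0, T) == 2           # two crossings for mT > 3.06
```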

1.6 Proof of Proposition 2 and Proposition 3

The total expected payoffs \(\mu \), \(\mu ^N\), and \(\mu ^S\) are given in Appendices A.2.2, A.3.2, and A.4.2, respectively.

$$\begin{aligned} \textsc {PoA}= & {} \frac{\frac{2mT}{e^{mT}-1}+5mT+6}{4(mT+2)}=\frac{\frac{2x}{e^{x}-1}+5x+6}{4(x+2)}=p(x)\\ p'(x)= & {} \frac{e^x \left( -x^2-2 x+2 e^x-2\right) }{2 \left( e^x-1\right) ^2 (x+2)^2}=\frac{e^x \phi (x)}{2 (e^x-1)^2 (x+2)^2} \end{aligned}$$

Obviously, \(\phi ''(x)=2e^x-2 \ge 0\) and \(\phi '(x)=2e^x-2x-2 \ge 0\) for \(x\ge 0\), so \(\phi (x) \ge \phi (0)= 0\) and hence \(p'(x)\ge 0\). That is, \(\textsc {PoA}\) is increasing in mT, with \(\lim _{mT \rightarrow 0} \textsc {PoA}=1\) and \(\lim _{mT \rightarrow +\infty } \textsc {PoA}=\frac{5}{4}\). Therefore, \(1\le \textsc {PoA}\le \frac{5}{4}\). The same argument shows that \(1\le \textsc {PoA(N)}\le \frac{4}{3}\).
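A quick numerical check of this monotonicity and of the two limits (the grid and the function name poa are our own choices):

```python
import math

def poa(x):
    # PoA as a function of x = mT, from the displayed formula.
    return (2 * x / (math.exp(x) - 1) + 5 * x + 6) / (4 * (x + 2))

xs = [j / 100 for j in range(1, 3001)]  # grid on (0, 30]
vals = [poa(x) for x in xs]
assert all(b >= a for a, b in zip(vals, vals[1:]))  # non-decreasing on the grid
print(round(vals[0], 3), round(vals[-1], 3))        # near 1 at 0+, approaching 5/4
```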

1.7 Proof of Proposition 4

The expected payoff difference of the leader under the Stackelberg and Nash games is

$$\begin{aligned} \mu _L-\mu ^N_L= & {} \frac{\theta (\Delta x+\Delta y)^2}{2}\left( \frac{1+e^{-\frac{\theta }{3\gamma }T}}{6\left( 1-e^{-\frac{\theta }{3\gamma }T}\right) }-\frac{\gamma }{\theta T}\right) \\= & {} \frac{\theta (\Delta x+\Delta y)^2}{2}\left( \frac{1+e^{-\frac{x}{3}}}{6\left( 1-e^{-\frac{x}{3}}\right) }-\frac{1}{x}\right) , \end{aligned}$$

where \(x:=mT\). Consider the factor in the parentheses above

$$\begin{aligned} f(x)=\frac{1+e^{-\frac{x}{3}}}{6\left( 1-e^{-\frac{x}{3}}\right) }-\frac{1}{x}, x\ge 0. \end{aligned}$$

Take the first derivative of f(x),

$$\begin{aligned} f'(x)= & {} \frac{1}{6}\left( \frac{6}{x^2}-\frac{2 e^{x/3}}{3 \left( e^{x/3}-1\right) ^2}\right) =\frac{18(e^{\frac{x}{3}}-1)^2-2x^2e^{\frac{x}{3}}}{18x^2(e^{\frac{x}{3}}-1)^2}:=\frac{\phi (x)}{18x^2(e^{\frac{x}{3}}-1)^2}. \end{aligned}$$

Since the denominator is positive for \(x>0\), it suffices to show that the numerator \(\phi (x)\) above is also positive, which implies \(f'(x)>0\). Take the first derivative of \(\phi (x)\),

$$\begin{aligned} \phi '(x)=e^{\frac{x}{3}}\left( 12e^{\frac{x}{3}}-4x-\frac{2}{3}x^2-12\right) :=e^{\frac{x}{3}}p(x). \end{aligned}$$

Take the first and second derivatives of the second term p(x),

$$\begin{aligned} p^{\prime }(x)= & {} 4e^{\frac{x}{3}}-4-\frac{4}{3}x,\\ p^{\prime \prime }(x)= & {} \frac{4}{3}\left( e^{\frac{x}{3}}-1\right) \ge 0, \forall x\ge 0. \end{aligned}$$

Note that \(p^{\prime \prime }(x)\ge 0\) for \(x\ge 0\) implies that p(x) is convex for \(x\ge 0\) and zero is the only root of \(p^{\prime }(x)=0\). So \(p(x)>p(0)=0\) for \(x>0\), implying that \(\phi (x)>0\) for \(x> 0\), and hence \(f^{\prime }(x)>0\) for \(x>0\).
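Numerically, the positivity and monotonicity of f(x) can be confirmed on a grid (an illustrative sketch; the sampling choices are ours):

```python
import math

def f(x):
    # f(x) = (1 + e^{-x/3}) / (6(1 - e^{-x/3})) - 1/x
    return (1 + math.exp(-x / 3)) / (6 * (1 - math.exp(-x / 3))) - 1 / x

xs = [j / 10 for j in range(1, 501)]  # grid on (0, 50]
vals = [f(x) for x in xs]
assert all(v > 0 for v in vals)                    # f > 0, so mu_L >= mu_L^N
assert all(b > a for a, b in zip(vals, vals[1:]))  # f is increasing on the grid
print(round(vals[-1], 4))                          # tends to 1/6 as x grows
```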

1.8 Follower’s relative payoffs function

The expected payoff difference of the follower under the Stackelberg and Nash games is

$$\begin{aligned} L_2(x):=\mu _F-\mu ^N_F= & {} \frac{\theta (\Delta x+\Delta y)^2}{24}\left( \frac{4}{1-e^{-\frac{x}{3}}}-\frac{6}{1-e^{-x}}-\frac{6}{x}+1\right) , \end{aligned}$$

where \(x=mT\). Denote the term in the parentheses above as

$$\begin{aligned} k(x)= & {} \frac{4}{1-e^{-\frac{x}{3}}}-\frac{6}{1-e^{-x}}-\frac{6}{x}+1. \end{aligned}$$

1.8.1 Proof for the properties of k(x)

Lemma 1

The function k(x) (\(x\ge 0\)) has the following properties

  1. (i)

    \(\sup _{x\ge 0} k(x)\): \(k(0)=0\) is the maximum value of k(x) achieved asymptotically when \(x\rightarrow 0\);

  2. (ii)

    \(\min _{x\ge 0} k(x)\): \(k(x^*)\approx -1.46\) is the unique minimum value of k(x) achieved at the minimum point \(x^*\approx 8.87\) in the non-negative domain;

  3. (iii)

    \(\sup _{x\ge x^*} k(x)\): \(k(\infty )=-1\) is the maximum value of k(x) achieved asymptotically when \(x\rightarrow \infty \);

  4. (iv)

    k(x) is a unimodal (i.e., quasi-convex) function: decreasing when \(x\in [0, x^*]\) and increasing when \(x\ge x^*\).

Fig. 6: The function k(x): \(x^*\approx 8.87\) is the unique minimum point of k(x) and the minimum value of k(x) is \(-1.46\)

Proof

Take the first derivative of k(x)

$$\begin{aligned} k'(x)= & {} \frac{6}{x^2}+\frac{6 e^{-x}}{\left( 1-e^{-x}\right) ^2}-\frac{4 e^{-\frac{x}{3}}}{3 \left( 1-e^{-\frac{x}{3}}\right) ^2}\\= & {} \frac{2 \left( -2 e^{\frac{x}{3}} x^2-4 e^{\frac{2 x}{3}} x^2-4 e^{\frac{4 x}{3}} x^2-2 e^{\frac{5 x}{3}} x^2+3 e^x x^2-18 e^x+9 e^{2 x}+9\right) }{3 \left( e^{x/3}-1\right) ^2 \left( e^{\frac{x}{3}}+e^{\frac{2 x}{3}}+1\right) ^2 x^2}. \end{aligned}$$

Denote the numerator above as

$$\begin{aligned} g(x)=-2 e^{\frac{x}{3}} x^2-4 e^{\frac{2 x}{3}} x^2-4 e^{\frac{4 x}{3}} x^2-2 e^{\frac{5 x}{3}} x^2+3 e^x x^2-18 e^x+9 e^{2 x}+9. \end{aligned}$$

In the following, we prove that \(g(x)\le 0\) for \(x\in [0,x^*]\) and \(g(x)\ge 0\) for \(x\in [x^*,\infty )\) (where \(x^*\approx 8.87\) is the numerical solution for \(g(x)=0\)).

$$\begin{aligned} g'(x)= & {} \frac{1}{3} e^{\frac{x}{3}} \left( 9 e^{\frac{2 x}{3}} \left( x^2+2 x-6\right) -8 e^{\frac{x}{3}} x (x+3)+54 e^{\frac{5 x}{3}}-2 x (x+6)-8 e^x x (2 x+3)\right) \\{} & {} -\frac{2}{3} e^{\frac{5 x}{3}} x (5 x+6)=\frac{1}{3} e^{\frac{x}{3}}h(x)\\ h'''(x)= & {} \frac{2}{27} e^{\frac{x}{3}} \left( -4 \left( x^2+21 x+81\right) -108 e^{\frac{2 x}{3}} \left( 2 x^2+15 x+21\right) +18 e^{\frac{x}{3}} \left( 2 x^2+22 x+33\right) \right) \\+ & {} \frac{2}{27} e^{\frac{x}{3}}(-8 e^x \left( 40 x^2+228 x+243\right) +3375 e^{\frac{4 x}{3}})=\frac{2}{27} e^{\frac{x}{3}}p(x).\\ p'''(x)= & {} \frac{2}{3} e^{\frac{x}{3}} \left( 2 x^2+58 x+339-24 e^{\frac{x}{3}} \left( 4 x^2+66 x+231\right) -12 e^{\frac{2 x}{3}} \left( 40 x^2+468 x+1167\right) +12000 e^{x}\right) \\= & {} \frac{2}{3} e^{\frac{x}{3}} q(x)\\ q'''(x)= & {} \frac{8}{9} e^{\frac{x}{3}} \left( -4 x^2-4 e^{\frac{x}{3}} \left( 40 x^2+828 x+3813\right) -138 x+13500 e^{\frac{2 x}{3}}-1041\right) =\frac{8}{9} e^{\frac{x}{3}}w(x)\\ w'''(x)= & {} \frac{4}{27} e^{\frac{x}{3}} \left( -40 x^2-1548 x+27000 e^{\frac{x}{3}}-13425\right) =\frac{4}{27} e^{\frac{x}{3}}z(x)\\ z(x)= & {} -40 x^2-1548 x+27000 e^{\frac{x}{3}}-13425\\ z'(x)= & {} -80 x+9000 e^{\frac{x}{3}}-1548. \end{aligned}$$

Note that \(z(x)>0\) for all \(x\ge 0\), so \(w''(x)\) is increasing for \(x\ge 0\). Thus \(\min _{x\ge 0}w''(x)=w''(0)>0\) and \(\min _{x\ge 0}w'(x)=w'(0)>0\), so w(x) is increasing. However, \(w(0)=-2793<0\) and \(w(2)>0\), so there exists a unique root \(m_1\) such that \(w(m_1)=0\) (where \(m_1\approx 1.24\)), implying that \(q''(x)\) is decreasing for \(x \in [0,m_1]\) and increasing for \(x \in [m_1,\infty )\). On \([0,m_1]\) the maximum value of \(q''(x)\) is \(q''(0)<0\), while on \([m_1,\infty )\) its minimum is attained at \(x=m_1\). Moreover, \(q''(3)>0\), so there exists a unique root \(m_2\) of \(q''(x)=0\) (where \(m_2\approx 2.27\)). Hence \(q''(x)\le 0\) for \(x \in [0,m_2]\) and \(q''(x)\ge 0\) for \(x \in [m_2,\infty )\), so \(q'(x)\) is decreasing for \(x \in [0,m_2]\) and increasing for \(x \in [m_2,\infty )\). Since \(q'(0)<0\), \(q'(m_2)\le 0\) and \(q'(4)>0\), there exists a unique root \(m_3\) such that \(q'(m_3)=0\) (where \(m_3\approx 3.29\)). Repeating the same steps for q, \(p''\), \(p'\), p, \(h''\), \(h'\), h and g in turn, we find the unique root of each: \(m_4\approx 4.31\), \(m_5\approx 5.07\), \(m_6\approx 5.82\), \(m_7\approx 6.57\), \(m_8\approx 7.17\), \(m_9\approx 7.77\), \(m_{10}\approx 8.37\), and \(m_{11}\approx 8.87\).
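Each root \(m_i\) in this chain can be located by bisection together with the intermediate value theorem. A sketch for the first root (our own code; the helper name bisect is ours):

```python
import math

def w(x):
    # w(x) = -4x^2 - 4e^{x/3}(40x^2 + 828x + 3813) - 138x + 13500 e^{2x/3} - 1041
    e = math.exp(x / 3)
    return -4 * x**2 - 4 * e * (40 * x**2 + 828 * x + 3813) - 138 * x + 13500 * e**2 - 1041

def bisect(fn, lo, hi, tol=1e-10):
    # Bisection; assumes fn(lo) and fn(hi) have opposite signs.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if fn(lo) * fn(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

assert w(0) < 0 < w(2)   # sign change on [0, 2]
m1 = bisect(w, 0.0, 2.0)
print(round(m1, 2))      # ≈ 1.24
```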

Therefore \(g(x)\le 0\) for \(x\in [0,x^*]\) and \(g(x)\ge 0\) for \(x\in [x^*,\infty )\). In other words, k(x) is decreasing in x for \(x\in [0,x^*]\) and increasing for \(x\in [x^*,\infty )\).

This shows that k(x) is a unimodal function, monotonically decreasing for \(x \le x^*\) and monotonically increasing for \(x\ge x^*\). The unimodality of k(x) together with \({\lim _{x \rightarrow 0}} k(x)=0\), \({\lim _{x \rightarrow \infty }} k(x)=-1\) and \(k_{min}=k(x^*)\) implies the desired results in (i), (ii) and (iii). \(\square \)
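The minimum point and minimum value in (ii) can be confirmed by direct evaluation on a grid (our own sketch; the sampling choices are ours):

```python
import math

def k(x):
    # k(x) = 4/(1 - e^{-x/3}) - 6/(1 - e^{-x}) - 6/x + 1
    return 4 / (1 - math.exp(-x / 3)) - 6 / (1 - math.exp(-x)) - 6 / x + 1

xs = [j / 100 for j in range(1, 5001)]  # grid on (0, 50]
vals = [k(x) for x in xs]
i = min(range(len(vals)), key=vals.__getitem__)
print(round(xs[i], 2), round(vals[i], 2))  # minimum near x* ≈ 8.87 with k(x*) ≈ -1.46
```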

1.8.2 Proof of Proposition 5

Proof

Items (i) and (ii) being straightforward, we show the rest. The expected payoff difference of the follower under the Stackelberg and Nash games is

$$\begin{aligned} L_2(x)=\mu _F-\mu ^N_F= & {} \frac{a(\Delta x+\Delta y)^2}{24}\left( \frac{4 x}{1-e^{-\frac{x}{3}}}-\frac{6 x}{1-e^{-x}}+x-6\right) \end{aligned}$$

where \(x:=mT\) and \(a:=\frac{\gamma }{T}>0\). Consider the factor in the parentheses above

$$\begin{aligned} w(x)=\frac{4 x}{1-e^{-\frac{x}{3}}}-\frac{6 x}{1-e^{-x}}+x-6, x\ge 0. \end{aligned}$$

Take the first derivative of w(x),

$$\begin{aligned} w'(x)= & {} \frac{-4 e^{\frac{5 x}{3}} (x-3)-3 e^{2 x}+6 e^x x-4 e^{\frac{x}{3}} (x+3)-4 e^{\frac{2 x}{3}} (2 x+3)+e^{\frac{4 x}{3}} (12-8 x)+3}{3 \left( e^x-1\right) ^2} \end{aligned}$$

The denominator is positive for \(x>0\). Denote the numerator above by \(\phi (x)\); we shall show that \(-\phi (x)>0\), implying that \(w'(x)<0\) and hence \(L_2'(\theta )<0\). Take the first derivative of \(-\phi (x)\),

$$\begin{aligned} -\phi '(x)= & {} \frac{2}{3} e^{x/3} \left( -9 e^{\frac{2 x}{3}} (x+1)+9 e^{\frac{5 x}{3}}+8 e^{x/3} (x+3)+2 (x+6)+4 e^x (4 x-3)+2 e^{\frac{4 x}{3}} (5 x-12)\right) \\= & {} \frac{2}{3} e^{x/3} h(x). \end{aligned}$$

In the following, we prove that \(h(x)\ge 0\) for \(x\ge 0\), with equality only at \(x=0\).

$$\begin{aligned} h''(x)= & {} \frac{1}{9} e^{x/3} \left( -36 e^{x/3} (x+4)+225 e^{\frac{4 x}{3}}+8 (x+9)+36 e^{\frac{2 x}{3}} (4 x+5)+16 e^x (10 x-9)\right) \\ {}= & {} \frac{1}{9} e^{x/3}p(x).\\ p''(x)= & {} 4 e^{x/3} \left( -x+100 e^x+4 e^{x/3} (4 x+17)+e^{\frac{2 x}{3}} (40 x+44)-10\right) =4 e^{x/3}q(x)\\ q''(x)= & {} \frac{4}{9} e^{x/3} \left( 4 x+225 e^{\frac{2 x}{3}}+4 e^{x/3} (10 x+41)+41\right) . \end{aligned}$$

Note that \(q''(x)>0\) for all \(x\ge 0\), so \(q'(x)\) is increasing for \(x\ge 0\). Thus \(\min _{x\ge 0}q'(x)=q'(0)>0\) and \(\min _{x\ge 0}q(x)=q(0)>0\). Repeating the process, \(\min _{x\ge 0}p(x)=p(0)>0\); hence \(h'(x)\) is increasing with \(h'(0)=0\), so \(h(x)\ge h(0)=0\) and \(h(x)>0\) for \(x>0\). Therefore \(\phi (x)< 0\) for all \(x> 0\), so \(L_2(x)\) is decreasing in x for \(x>0\). Since \(x=\frac{\theta }{\gamma }T\) with \(\gamma \) and T fixed, \(L_2(\theta )\) is decreasing in \(\theta \) for \(\theta >0\). \(\square \)
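As a numerical cross-check, the factor \(x\,k(x)\) multiplying the positive constant in \(L_2\) is indeed decreasing (our own sketch, with k(x) as defined in Appendix 1.8):

```python
import math

def k(x):
    # k(x) as defined in Appendix 1.8
    return 4 / (1 - math.exp(-x / 3)) - 6 / (1 - math.exp(-x)) - 6 / x + 1

xs = [j / 10 for j in range(1, 301)]  # grid on (0, 30]
vals = [x * k(x) for x in xs]         # the factor multiplying a(dx + dy)^2 / 24
assert all(b < a for a, b in zip(vals, vals[1:]))  # strictly decreasing on the grid
print(round(vals[0], 4), round(vals[-1], 2))
```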

1.9 Total’s relative payoffs

The total expected payoff difference under the Stackelberg and Nash games is

$$\begin{aligned} L_3(x):=\mu _T-\mu ^N_T= & {} \left( \frac{3\gamma }{4T}+\frac{\theta }{24}+\frac{\theta }{4\left( 1-e^{-mT}\right) }-\frac{\theta }{3\left( 1-e^{-\frac{\theta }{3\gamma }T}\right) }\right) (\Delta x+\Delta y)^2\\= & {} \frac{\theta (\Delta x+\Delta y)^2}{24}\left( \frac{3\left( e^{x}+1\right) }{e^{x}-1}-\frac{4\left( e^{\frac{x}{3}}+1\right) }{e^{\frac{x}{3}}-1}+\frac{18}{x}\right) , \end{aligned}$$

where \(x=mT\). Denote, for \(x\ge 0\),

$$\begin{aligned} g(x)= & {} \frac{3\left( e^{x}+1\right) }{e^{x}-1}-\frac{4\left( e^{\frac{x}{3}}+1\right) }{e^{\frac{x}{3}}-1}+\frac{18}{x},\\= & {} -1+\frac{6}{e^{x}-1}-\frac{8}{e^{\frac{x}{3}}-1}+\frac{18}{x}. \end{aligned}$$

1.9.1 Proof for the properties of g(x)

Lemma 2

The function g(x) (\(x\ge 0\)) has the following properties

  1. (i)

    \(\inf _{x\ge 0} g(x)\): \(g(\infty )=-1\) is the minimum value of g(x) achieved asymptotically when \(x\rightarrow +\infty \).

  2. (ii)

    \(\max _{x\ge 0} g(x)\): \(g(x^*)\approx 0.778\) is the unique maximum value of g(x) achieved at the maximum point \(x^*\approx 5.105\) in the positive domain;

  3. (iii)

    The unique positive root of g(x) is \(m_0\approx 17.6\); and \(g(x)\ge 0\) for all \(x\in [0, m_0]\) and \(g(x)\le 0\) for all \(x\in [m_0,\infty )\).

  4. (iv)

    g(x) is a unimodal function: increasing when \(x\in [0, x^*]\) and decreasing when \(x\ge x^*\).

Proof

Take the first derivative of g(x)

$$\begin{aligned} \frac{1}{2}g'(x)= & {} -\frac{3e^x}{\left( e^{x}-1\right) ^2}+\frac{\frac{4}{3}e^{\frac{x}{3}}}{\left( e^{\frac{x}{3}}-1\right) ^2}-\frac{9}{x^2}. \end{aligned}$$

We shall show that \(g'(x)\ge 0\) if and only if \(0\le x\le x^*\).

Denote

$$\begin{aligned} \psi (x)=27e^{2x}-54e^x+27-x^2(4e^{\frac{x}{3}}+8e^{\frac{2x}{3}}+3e^x+8e^{\frac{4x}{3}}+4e^{\frac{5x}{3}}). \end{aligned}$$

Note that \(\psi (x)=-\frac{3}{2}x^2(e^x-1)^2 g'(x)\), so \(g'(x)\ge 0\) if and only if \(\psi (x)\le 0\). In the following, we prove that \(\psi (x)\le 0\) for \(x\in [0,x^*]\) and \(\psi (x)\ge 0\) for \(x\in [x^*,\infty )\) (where \(x^*\approx 5.10\) is the numerical solution of \(\psi (x)=0\)), so that g(x) is increasing on \([0,x^*]\) and decreasing on \([x^*,\infty )\).

$$\begin{aligned} \psi '(x)= & {} \frac{1}{3} e^{\frac{x}{3}} \left( -9e^{\frac{2x}{3}} \left( x^2+2 x+18\right) -16 e^{\frac{x}{3}} x (x+3)+162 e^{\frac{5 x}{3}}-4 x (x+6)\right) \\{} & {} + \frac{1}{3} e^{\frac{x}{3}}\left( -16 e^x x (2 x+3)-4 e^{\frac{4 x}{3}} x (5 x+6)\right) =\frac{1}{3} e^{\frac{x}{3}}h(x)\\ h'''(x)= & {} \frac{2}{27} e^{\frac{x}{3}} \{-8 \left( x^2+21 x+81\right) -216 e^{\frac{2 x}{3}} \left( 2 x^2+15 x+21\right) -18 e^{\frac{x}{3}} (2 x^2+22 x+81)\}\\{} & {} +\frac{2}{27} e^{\frac{x}{3}}\{-16 e^x \left( 40 x^2+228 x+243\right) +10125 e^{\frac{4 x}{3}}\}=\frac{2}{27} e^{\frac{x}{3}}p(x).\\ p'''(x)= & {} \frac{2}{3} e^{\frac{x}{3}} \left( -2 x^2-48 e^{x/3} \left( 4 x^2+66 x+231\right) -24 e^{\frac{2 x}{3}} \left( 40 x^2+468 x+1167\right) \right) +\\{} & {} \frac{2}{3} e^{\frac{x}{3}} \left( -58 x+36000 e^x-387\right) =\frac{2}{3} e^{\frac{x}{3}} q(x)\\ q'''(x)= & {} \frac{16}{9} e^{\frac{x}{3}} \left( -4 x^2-4 e^{\frac{x}{3}} \left( 40 x^2+828 x+3813\right) -138 x+20250 e^{\frac{2 x}{3}}-1041\right) =\frac{16}{9} e^{\frac{x}{3}}w(x)\\ w(x)= & {} -4 x^2-4 e^{\frac{x}{3}} \left( 40 x^2+828 x+3813\right) -138 x+20250 e^{\frac{2 x}{3}}-1041\\ w'''(x)= & {} \frac{4}{27} e^{\frac{x}{3}} \left( -40 x^2-1548 x+40500 e^{\frac{x}{3}}-13425\right) =\frac{4}{27} e^{\frac{x}{3}}z(x)\\ z(x)= & {} -40 x^2-1548 x+40500 e^{\frac{x}{3}}-13425 \end{aligned}$$

Obviously, \(z(x)>0\) for all \(x\ge 0\), so \(w''(x)\) is increasing in x. Then \(\min w''(x)=w''(0)>0\). Repeating this process, \(q'''(x)>0\) and \(q''(x)>0\), and both are increasing in x. However, since \(\min q'(x)=q'(0)=-826<0\) while \(q'(x)\rightarrow +\infty \), there exists a unique root \(m_1\) of \(q'(x)=0\) (where \(m_1=0.218\) can be found by bisection and the intermediate value theorem). Hence q(x) is decreasing in x for \(x \in [0,m_1]\) and increasing for \(x \in [m_1,\infty )\). Since \(q(0)<0\) and \(q(2)>0\), there exists a unique root \(m_2\) of \(q(x)=0\) (also found by bisection, \(m_2=1.112\)).

Repeat the above derivation. Then there exists a unique root \(m_3\) of \(p''(x)=0\) (\(m_3=1.791\)), a unique root \(m_4\) of \(p'(x)=0\) (\(m_4=2.453\)) and a unique root \(m_5\) of \(p(x)=0\) (\(m_5=3.099\)). Therefore \(h''(x)\) is decreasing in x when \(x \in [0,m_5]\) and increasing for \(x \in [m_5,\infty )\). Analogously, there exist unique roots \(m_6\), \(m_7\), \(m_8\) of \(h''(x)=0\) (\(m_6=3.626\)), \(h'(x)=0\) (\(m_7=4.148\)), and \(h(x)=0\) (\(m_8=4.664\)), respectively.

Therefore, \(\psi (x)\) decreases in x when \(x \in [0,m_8]\) and increases for \(x \in [m_8,\infty )\). Since \(\psi (0)=0\), there exists a unique positive solution \(x=x^*\) of \(\psi (x)=0\). So \(\psi (x)\le 0\) for \(x \in [0,x^*]\) and \(\psi (x)>0\) for \(x \in (x^*,\infty )\), i.e., g(x) is increasing on \([0,x^*]\) and decreasing on \([x^*,\infty )\).

This shows that g(x) is a unimodal function, monotonically increasing for \(x \le x^*\) and monotonically decreasing for \(x\ge x^*\). The unimodality of g(x) together with \({\lim _{x \rightarrow 0}} g(x)=0\), \({\lim _{x \rightarrow \infty }} g(x)=-1\) and \(g(m_0)=0\) implies the desired results in (i), (ii) and (iii). \(\square \)
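The stated maximum point, maximum value and root of g(x) can be confirmed numerically (our own sketch; the sampling choices are ours):

```python
import math

def g(x):
    # g(x) = -1 + 6/(e^x - 1) - 8/(e^{x/3} - 1) + 18/x
    return -1 + 6 / math.expm1(x) - 8 / math.expm1(x / 3) + 18 / x

xs = [j / 100 for j in range(1, 3001)]  # grid on (0, 30]
vals = [g(x) for x in xs]
i = max(range(len(vals)), key=vals.__getitem__)
root = next(x for x, v in zip(xs, vals) if x > xs[i] and v < 0)
print(round(vals[i], 3), round(root, 1))  # maximum ≈ 0.778, root ≈ 17.6
```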

1.9.2 Proof of Proposition 6

Proof

Items (i) and (ii) being straightforward, we only show the rest. The total expected payoff difference under the Stackelberg and Nash games is

$$\begin{aligned} L_3(x)= & {} \frac{a(\Delta x+\Delta y)^2}{24}\left( \frac{6 x}{1-e^{-x}}-\frac{8 x}{1-e^{-x/3}}+x+18\right) \end{aligned}$$

where \(x=mT\) and \(a:=\frac{\gamma }{T}>0\). Consider the factor in the parentheses above

$$\begin{aligned} w(x)=\frac{6 x}{1-e^{-x}}-\frac{8 x}{1-e^{-x/3}}+x+18, x\ge 0. \end{aligned}$$

Take the first derivative of w(x),

$$\begin{aligned} w'(x)= & {} \frac{8 e^{x/3} x+16 e^{\frac{2 x}{3}} x+16 e^{\frac{4 x}{3}} x+8 e^{\frac{5 x}{3}} x+6 e^x x+24 e^{x/3}+24 e^{\frac{2 x}{3}}-24 e^{\frac{4 x}{3}}-24 e^{\frac{5 x}{3}}-3 e^{2 x}+3}{3 \left( e^{x/3}-1\right) ^2 \left( e^{x/3}+e^{\frac{2 x}{3}}+1\right) ^2}. \end{aligned}$$

The denominator is positive for \(x>0\). Denote the numerator above by \(\phi (x)\); we shall show that \(\phi (x)\) is unimodal in x, positive on \((0,{\hat{x}})\) and negative on \(({\hat{x}},\infty )\) (where \({\hat{x}}\approx 8.48\) is its unique positive root), so that w(x) is increasing on \((0,{\hat{x}}]\) and decreasing on \([{\hat{x}},\infty )\).

$$\begin{aligned} \phi '(x)= & {} \frac{2}{3}e^{x/3} \left( 9 e^{\frac{2 x}{3}} (x+1)-9 e^{\frac{5 x}{3}}\right. \\ {}{} & {} \left. +16 e^{x/3} (x+3)+4 (x+6)+8 e^x (4 x-3)-e^{\frac{4 x}{3}} (48-20 x)\right) \\= & {} \frac{2}{3} e^{x/3}h(x). \end{aligned}$$

In the following, we analyze the sign of h(x) for \(x\ge 0\).

$$\begin{aligned} h''(x)= & {} \frac{1}{9} e^{x/3} \left( 36 e^{x/3} (x+4)-225 e^{\frac{4 x}{3}}+16 (x+9)+72 e^{\frac{2 x}{3}} (4 x+5)+32 e^x (10 x-9)\right) \\ {}= & {} \frac{1}{9} e^{x/3}p(x).\\ p''(x)= & {} 4 e^{x/3} \left( x-100 e^x+8 e^{x/3} (4 x+17)+e^{\frac{2 x}{3}} (80 x+88)+10\right) =4 e^{x/3}q(x)\\ q''(x)= & {} \frac{4}{9}e^{x/3} \left( 8 x-225 e^{\frac{2 x}{3}}+8 e^{x/3} (10 x+41)+82\right) =\frac{4}{9}e^{x/3}r(x)\\ r''(x)= & {} \frac{4}{9} e^{x/3} \left( 20 x-225 e^{x/3}+202\right) =\frac{4}{9}e^{x/3}z(x)\\ z''(x)= & {} -25e^{\frac{x}{3}} \end{aligned}$$

Obviously, \(z''(x)<0\) implies that \(z'(x)\) is decreasing, and \(z'(0)<0\) then implies that z(x) is decreasing with \(z(0)<0\), so \(z(x)<0\) for all \(x\ge 0\). Hence \(r'(x)\) is decreasing. Since \(r'(0)>0\) and \(r'(2)<0\), there exists a unique root \(m_1\) of \(r'(x)=0\) (where \(m_1=1.25\) can be found by bisection and the intermediate value theorem). Therefore r(x) is increasing when \(x \in [0,m_1]\) and decreasing when \(x \in [m_1,\infty )\). Owing to \(r(0)>0\) and \(r(4)<0\), there exists a unique root \(m_2\) of \(r(x)=0\) (also found by bisection, \(m_2=2.96\)).

Repeat the above derivation. There exist unique roots \(m_3\) of \(q'(x)=0\) (\(m_3=4.08\)), \(m_4\) of \(q(x)=0\) (\(m_4=5.16\)) and \(m_5\) of \(p'(x)=0\) (\(m_5=5.93\)), respectively. Therefore, p(x) increases in x when \(x \in [0,m_5]\) and decreases for \(x \in [m_5,\infty )\). Likewise, there exist unique roots \(m_6\), \(m_7\), \(m_8\) and \(m_9\) of \(p(x)=0\) (\(m_6=6.72\)), \(h'(x)=0\) (\(m_7=7.28\)), \(h(x)=0\) (\(m_8=7.96\)), and \(\phi (x)=0\) (\(m_9=8.48\)).

Therefore, \(L_3(\theta )\) is a unimodal function of \(\theta \): increasing when \(\theta \in (0,\frac{{\hat{x}}\gamma }{T}]\) and decreasing when \(\theta \ge \frac{{\hat{x}}\gamma }{T}\) (where \({\hat{x}}\approx 8.48\)). \(\square \)
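The interior maximizer \({\hat{x}}\approx 8.48\) can be cross-checked by evaluating \(x\,g(x)\), with g(x) as defined in Appendix 1.9, on a grid (our own sketch):

```python
import math

def g(x):
    # g(x) as defined in Appendix 1.9
    return -1 + 6 / math.expm1(x) - 8 / math.expm1(x / 3) + 18 / x

xs = [j / 100 for j in range(1, 3001)]  # grid on (0, 30]
vals = [x * g(x) for x in xs]           # the factor multiplying a(dx + dy)^2 / 24
i = max(range(len(vals)), key=vals.__getitem__)
print(round(xs[i], 1))                  # peak near x-hat ≈ 8.5
```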

1.10 Proof of Proposition 7

Proof

$$\begin{aligned} L_4(x):=\mu _L-\mu _F= & {} \frac{\theta (\Delta x+\Delta y)^2}{4}\left( \frac{1}{1-e^{-x}}-\frac{1}{2}-\frac{1}{x}\right) , \end{aligned}$$

where \(x=mT\). Denote the term in the parentheses above as

$$\begin{aligned} h(x)=\frac{1}{1-e^{-x}}-\frac{1}{2}-\frac{1}{x}, x\ge 0. \end{aligned}$$

Note that \(\lim _{x \rightarrow \infty } h(x)=\frac{1}{2}\), \(\lim _{x \rightarrow 0} h(x)=0\). Take the first derivative of h(x),

$$\begin{aligned} h'(x)=\frac{-e^x x^2-2 e^x+e^{2 x}+1}{\left( e^x-1\right) ^2 x^2}:=\frac{\phi (x)}{\left( e^x-1\right) ^2 x^2}. \end{aligned}$$

Take the first derivative of the numerator \(\phi \)

$$\begin{aligned} \phi '(x)=e^x(-2+2e^x-x^2-2x):=e^x p(x). \end{aligned}$$

Take the first derivative of the second term p(x),

$$\begin{aligned} p'(x)=2e^x-2x-2\ge 0. \end{aligned}$$

Then \(p(x)\ge p(0)=0\) when \(x\ge 0\). Likewise, \(\phi (x)\ge \phi (0)=0\), so \(h'(x)\ge 0\) and \(h(x)\ge h(0)=0\). Therefore, h(x) is nonnegative for \(x\ge 0\) and increasing in \(x=mT\).

Consider the function \(L_4(\theta ,\gamma ,T)\) when given \(\gamma >0\) and trading horizon T,

$$\begin{aligned} L_4(\theta ,\gamma ,T)=\frac{a(\Delta x+\Delta y)^2}{4}\left( \frac{x}{1-e^{-x}}-\frac{x}{2}-1\right) , x\ge 0. \end{aligned}$$

Obviously, \(L_4(\theta ,\gamma ,T)\) is increasing in x. Since \(x=\frac{\theta }{\gamma }T\), it follows that \(L_4(\theta ,\gamma ,T)\) is increasing in \(\theta \) and T, and decreasing in \(\gamma \). \(\square \)
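The monotonicity of the factor in \(L_4\) can be confirmed numerically (our own sketch; the function name ell4 is ours):

```python
import math

def ell4(x):
    # The factor x/(1 - e^{-x}) - x/2 - 1 multiplying a(dx + dy)^2 / 4
    return x / (1 - math.exp(-x)) - x / 2 - 1

xs = [j / 10 for j in range(1, 501)]  # grid on (0, 50]
vals = [ell4(x) for x in xs]
assert all(b > a for a, b in zip(vals, vals[1:]))  # increasing in x, hence in theta
print(round(vals[0], 4), round(vals[-1], 2))
```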


About this article


Cite this article

Dong, Y., Du, D., Han, Q. et al. A Stackelberg order execution game. Ann Oper Res 336, 571–604 (2024). https://doi.org/10.1007/s10479-022-05120-5

