
Optimal trading of a basket of futures contracts

  • Research Article
  • Published in: Annals of Finance

Abstract

We study the problem of dynamically trading multiple futures contracts with different underlying assets. To capture the joint dynamics of stochastic bases for all traded futures, we propose a new model involving a multi-dimensional scaled Brownian bridge that is stopped before price convergence. This leads to the analysis of the corresponding Hamilton–Jacobi–Bellman equations, whose solutions are derived in semi-explicit form. The resulting optimal trading strategy is a long-short policy that accounts for whether the futures are in contango or backwardation. Our model also allows us to quantify and compare the values of trading in the futures markets when the underlying assets are traded or not. Numerical examples are provided to illustrate the optimal strategies and the effects of model parameters.


Notes

  1. See CFTC Press Release 5542-08: https://www.cftc.gov/PressRoom/PressReleases/pr5542-08.

  2. According to the CME Group report: https://www.cmegroup.com/daily_bulletin/monthly_volume/Web_ADV_Report_CMEG.pdf.

  3. See Kaldor (1939), Working (1949), and Brennan (1958).

  4. See Cootner (1960).

  5. A commodity is said to be contangoed if its forward curve (which is the plot of its futures prices against time-to-delivery) is increasing. The commodity is backwardated if its forward curve is decreasing.

  6. That is, the number of futures contracts held multiplied by the futures price.

  7. A notable exception is “basis trading”; see Angoshtari and Leung (2019) for further discussion.

  8. See Leung and Li (2016) and Leung et al. (2016) for discussions of such strategies involving a single futures contract.

References

  • Angoshtari, B., Leung, T.: Optimal dynamic basis trading. Ann Finance 15(3), 307–335 (2019)

  • Brennan, M.J.: The supply of storage. Am Econ Rev 48(1), 50–72 (1958)

  • Brennan, M.J., Schwartz, E.S.: Optimal arbitrage strategies under basis variability. In: Sarnat, M. (ed.) Essays in Financial Economics. Amsterdam: North Holland (1988)

  • Brennan, M.J., Schwartz, E.S.: Arbitrage in stock index futures. J Bus 63(1), S7–S31 (1990)

  • Carmona, R., Ludkovski, M.: Spot convenience yield models for the energy markets. Contemp Math 351, 65–80 (2004)

  • Cootner, P.H.: Returns to speculators: Telser versus Keynes. J Polit Econ 68(4), 396–404 (1960)

  • Dai, M., Zhong, Y., Kwok, Y.K.: Optimal arbitrage strategies on stock index futures under position limits. J Futures Mark 31(4), 394–406 (2011)

  • Fleming, W.H., Soner, H.M.: Controlled Markov Processes and Viscosity Solutions, vol. 25. New York: Springer (2006)

  • Gibson, R., Schwartz, E.S.: Stochastic convenience yield and the pricing of oil contingent claims. J Finance 45(3), 959–976 (1990)

  • Hilliard, J.E., Reis, J.: Valuation of commodity futures and options under stochastic convenience yields, interest rates, and jump diffusions in the spot. J Financ Quant Anal 33(1), 61–86 (1998)

  • Kaldor, N.: Speculation and economic stability. Rev Econ Stud 7(1), 1–27 (1939)

  • Karatzas, I., Shreve, S.: Brownian Motion and Stochastic Calculus. Berlin: Springer (1991)

  • Leung, T., Li, X.: Optimal Mean Reversion Trading: Mathematical Analysis and Practical Applications. Singapore: World Scientific (2016)

  • Leung, T., Yan, R.: Optimal dynamic pairs trading of futures under a two-factor mean-reverting model. Int J Financ Eng 5(3), 1850027 (2018)

  • Leung, T., Yan, R.: A stochastic control approach to managed futures portfolios. Int J Financ Eng 6(1), 1950005 (2019)

  • Leung, T., Li, J., Li, X., Wang, Z.: Speculative futures trading under mean reversion. Asia-Pac Financ Mark 23(4), 281–304 (2016)

  • Liu, J., Longstaff, F.A.: Losing money on arbitrage: optimal dynamic portfolio choice in markets with arbitrage opportunities. Rev Financ Stud 17(3), 611–641 (2004)

  • Miffre, J.: Long-short commodity investing: a review of the literature. J Commod Mark 1(1), 3–13 (2016)

  • Reid, T.W.: Riccati Differential Equations, Mathematics in Science and Engineering, vol. 86. New York: Academic Press (1972)

  • Schwartz, E.S.: The stochastic behavior of commodity prices: implications for valuation and hedging. J Finance 52(3), 923–973 (1997)

  • Working, H.: The theory of price of storage. Am Econ Rev 39(6), 1254–1262 (1949)

Author information

Correspondence to Bahman Angoshtari.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Proof of Theorem 1

The proof relies on the following well-known comparison result for Riccati differential equations, which we include for readers’ convenience. Let \(A\ge 0\) (resp. \(A>0\)) denote that A is positive semi-definite (resp. positive definite) and \(A\ge B\) (resp. \(A>B\)) denote that \(A-B\ge 0\) (resp. \(A-B>0\)).

Lemma 2

(Reid (1972), Theorem 4.3, p. 122) Let A(t), B(t), C(t), and \(\widetilde{C}(t)\) be continuous \(N\times N\) matrix functions on an interval \([a,b]\subset \mathbb {R}\), and let \(H_0\) and \(\widetilde{H}_0\) be two symmetric \(N\times N\) matrices. Furthermore, assume that for all \(t\in [a,b]\), \(B(t)\ge 0\), and that C(t) and \(\widetilde{C}(t)\) are symmetric with \(C(t)\ge \widetilde{C}(t)\). Consider the Riccati matrix differential equations

$$\begin{aligned} {\left\{ \begin{array}{ll} H'(t) + H(t)\,B(t)\, H(t) + H(t)A(t) + A(t)^\top H(t) - C(t) =0; \quad a\le t \le b,\\ H(a) = H_0, \end{array}\right. } \end{aligned}$$
(30)

and

$$\begin{aligned} {\left\{ \begin{array}{ll} \widetilde{H}'(t) + \widetilde{H}(t)\,B(t)\, \widetilde{H}(t) + \widetilde{H}(t)A(t) + A(t)^\top \widetilde{H}(t) - \widetilde{C}(t) =0; \quad a\le t \le b,\\ \widetilde{H}(a) = \widetilde{H}_0. \end{array}\right. } \end{aligned}$$
(31)

If \(H_0>\widetilde{H}_0\) (resp. \(H_0\ge \widetilde{H}_0\)) and (31) has a symmetric solution \(\widetilde{H}(t)\), then (30) also has a symmetric solution H(t) such that \(H(t)>\widetilde{H}(t)\) (resp. \(H(t)\ge \widetilde{H}(t)\)) for all \(t\in [a,b]\).\(\square \)
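
As a simple numerical illustration of Lemma 2, the following sketch integrates instances of (30) and (31) with constant, illustrative placeholder coefficients satisfying \(B\ge 0\) and \(C\ge \widetilde{C}\), with equal initial values, and checks that \(H(t)-\widetilde{H}(t)\) stays positive semi-definite along a time grid. All matrices below are placeholders chosen only to satisfy the hypotheses of the lemma.

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 2
rng = np.random.default_rng(0)

# Illustrative constant coefficients: B >= 0 and C >= C_tilde, both symmetric.
A = rng.standard_normal((N, N))
B = np.eye(N)                       # positive semi-definite
C_tilde = np.zeros((N, N))
C = 0.5 * np.eye(N)                 # C - C_tilde = 0.5 I >= 0
H0 = np.zeros((N, N))               # common symmetric initial value H_0 = H_tilde_0

def riccati_rhs(t, h_flat, C_mat):
    """Right-hand side of H' = C - H B H - H A - A^T H, rearranged from (30)."""
    H = h_flat.reshape(N, N)
    dH = C_mat - H @ B @ H - H @ A - A.T @ H
    return dH.reshape(-1)

def solve_riccati(C_mat, t_grid):
    sol = solve_ivp(riccati_rhs, (t_grid[0], t_grid[-1]), H0.reshape(-1),
                    t_eval=t_grid, args=(C_mat,), rtol=1e-9, atol=1e-12)
    return sol.y.T.reshape(len(t_grid), N, N)

t_grid = np.linspace(0.0, 1.0, 101)
H = solve_riccati(C, t_grid)              # solution of (30)
H_tilde = solve_riccati(C_tilde, t_grid)  # solution of (31); identically zero here

# The comparison result predicts H(t) - H_tilde(t) >= 0 on the whole interval.
gaps = [np.linalg.eigvalsh(Ht - Kt).min() for Ht, Kt in zip(H, H_tilde)]
print("min eigenvalue of H(t) - H_tilde(t) over the grid:", min(gaps))
```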

Proof of (i): Consider the matrix Riccati differential equation

$$\begin{aligned} {\left\{ \begin{array}{ll} \widetilde{H}'(\tau ) + \widetilde{H}(\tau )\, \big (\gamma \,C^\top \Sigma \,C + (1-\gamma )A\big )\, \widetilde{H}(\tau )\\ \quad \quad \quad - 2\left( {\varvec{\eta }}(T-\tau )^\top C + \left( \frac{1}{\gamma }-1\right) {\varvec{\eta }}_F(T-\tau )^\top B\right) \, \widetilde{H}(\tau )=0_{N\times N},\\ \widetilde{H}(0)=0_{N\times N}, \end{array}\right. } \end{aligned}$$

which has the trivial solution \(\widetilde{H}(\tau )\equiv 0_{N\times N}\). Assume, for now, that \(\gamma \,C^\top \Sigma \,C + (1-\gamma )A >0\). By Lemma 2, it follows that (20) has a positive semi-definite solution \(H(\tau )\) on [0, T]. That \(H(\tau )\) is positive definite on (0, T] follows from Lemma 2 and the fact that \(H'(0) = \frac{\gamma -1}{\gamma ^2} {\varvec{\eta }}_F(T)^\top \Sigma _\mathbf F ^{-1}{\varvec{\eta }}_F(T)>0\). The uniqueness of the solution follows from the uniqueness theorem for first-order differential equations.

It only remains to show that \(\gamma \,C^\top \Sigma \,C + (1-\gamma )A >0\). By (4), (18), and (19), we have

$$\begin{aligned} A&= \Sigma _\mathbf F + \Sigma _\mathbf S - \Sigma _\mathbf{F \mathbf S } - \Sigma _\mathbf{F \mathbf S }^\top - \Sigma _\mathbf S + \Sigma _\mathbf{F \mathbf S }^\top \Sigma _\mathbf F ^{-1}\Sigma _\mathbf{F \mathbf S }\\&= C^\top \Sigma C - \left( \Sigma _\mathbf S - \Sigma _\mathbf{F \mathbf S }^\top \Sigma _\mathbf F ^{-1}\Sigma _\mathbf{F \mathbf S }\right) . \end{aligned}$$

From (4) and (5), we obtain

$$\begin{aligned} \Sigma _\mathbf S - \Sigma _\mathbf{F \mathbf S }^\top \Sigma _\mathbf F ^{-1}\Sigma _\mathbf{F \mathbf S }&= \widetilde{\Sigma }_\mathbf S \widetilde{\Sigma }_\mathbf S ^\top + \widetilde{\Sigma }_\mathbf{F \mathbf S }^\top \widetilde{\Sigma }_\mathbf{F \mathbf S } - \widetilde{\Sigma }_\mathbf{F \mathbf S }^\top \widetilde{\Sigma }_\mathbf F ^\top \left( \widetilde{\Sigma }_\mathbf{F }\widetilde{\Sigma }_\mathbf{F }^\top \right) ^{-1} \widetilde{\Sigma }_\mathbf F \widetilde{\Sigma }_\mathbf{F \mathbf S }\\&=\widetilde{\Sigma }_\mathbf S \widetilde{\Sigma }_\mathbf S ^\top . \end{aligned}$$

Finally, combining the assumption \(\gamma >1\) with the last two results yields

$$\begin{aligned} \gamma \,C^\top \Sigma \,C + (1-\gamma )A&= C^\top \Sigma \,C + (\gamma -1)\left( \Sigma _\mathbf S - \Sigma _\mathbf{F \mathbf S }^\top \Sigma _\mathbf F ^{-1}\Sigma _\mathbf{F \mathbf S }\right) \\&= C^\top \Sigma \,C + (\gamma -1) \widetilde{\Sigma }_\mathbf S \widetilde{\Sigma }_\mathbf S ^\top >0, \end{aligned}$$

as we set out to prove.
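
Both facts used in this computation are easy to check numerically. The sketch below uses randomly generated placeholder factor matrices (with \(\widetilde{\Sigma }_\mathbf F \) taken square and invertible) together with block definitions read off the displays above — these are working assumptions here, not a restatement of (4)–(5) — and verifies the Schur-complement identity as well as the positive definiteness of \(C^\top \Sigma \,C + (\gamma -1)\widetilde{\Sigma }_\mathbf S \widetilde{\Sigma }_\mathbf S ^\top \) for \(\gamma >1\).

```python
import numpy as np

rng = np.random.default_rng(1)
N, gamma = 3, 2.5                        # gamma > 1, as assumed in the proof

# Placeholder factor matrices; Sigma_F_t is taken square and invertible.
Sigma_F_t  = rng.standard_normal((N, N)) + 2.0 * np.eye(N)   # \widetilde{\Sigma}_F
Sigma_S_t  = rng.standard_normal((N, N))                     # \widetilde{\Sigma}_S
Sigma_FS_t = rng.standard_normal((N, N))                     # \widetilde{\Sigma}_{FS}

# Assumed block definitions, consistent with the displayed computation.
Sigma_F  = Sigma_F_t @ Sigma_F_t.T
Sigma_S  = Sigma_S_t @ Sigma_S_t.T + Sigma_FS_t.T @ Sigma_FS_t
Sigma_FS = Sigma_F_t @ Sigma_FS_t

# Schur-complement identity: Sigma_S - Sigma_FS^T Sigma_F^{-1} Sigma_FS = S_S S_S^T.
schur = Sigma_S - Sigma_FS.T @ np.linalg.solve(Sigma_F, Sigma_FS)
print("identity error:", np.max(np.abs(schur - Sigma_S_t @ Sigma_S_t.T)))

# gamma C^T Sigma C + (1 - gamma) A = C^T Sigma C + (gamma - 1) S_S S_S^T, with
# C^T Sigma C = Sigma_F + Sigma_S - Sigma_FS - Sigma_FS^T (from the first display).
CtSC = Sigma_F + Sigma_S - Sigma_FS - Sigma_FS.T
M = CtSC + (gamma - 1.0) * Sigma_S_t @ Sigma_S_t.T
print("smallest eigenvalue (should be positive):", np.linalg.eigvalsh(M).min())
```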

Proof of (ii) and (iii): As we argue later, \(V_F(t,x,\mathbf {z})\) is the solution of the Hamilton–Jacobi–Bellman (HJB) equation

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle v_t + \sup _{{\varvec{\theta }}\in \mathbb {R}^N} \mathscr {J}_{\varvec{\theta }}v= 0,\\ v(T,x,\mathbf {z}) = \frac{x^{1-\gamma }}{1-\gamma }, \end{array}\right. } \end{aligned}$$
(32)

for \((t,x,\mathbf {z})\in [0,T]\times \mathbb {R}_+\times \mathbb {R}^N\), in which the differential operator \(\mathscr {J}_{\varvec{\theta }}\) is given by

$$\begin{aligned} \mathscr {J}_{\varvec{\theta }}\varphi (t, x,\mathbf {z}) :={}&(\mathbf {m}+C^\top {\varvec{\eta }}(t)\mathbf {z})^\top \varphi _\mathbf {z}+ \frac{1}{2}{\text {tr}}(C^\top \Sigma \,C\,\varphi _{\mathbf {z}\mathbf {z}})\\&{}+ \big (r x + {\varvec{\theta }}^\top ({\varvec{\mu }}_\mathbf F +{\varvec{\eta }}_F(t)\mathbf {z})\big )\,\varphi _x\\&{}+ \frac{1}{2} {\varvec{\theta }}^\top \Sigma _\mathbf F \,{\varvec{\theta }}\,\varphi _{xx} + {\varvec{\theta }}^\top (\Sigma _\mathbf F - \Sigma _\mathbf{F \mathbf S })\, \varphi _{x\mathbf {z}}, \end{aligned}$$

for any \({\varvec{\theta }}\in \mathbb {R}^N\) and any \(\varphi (t,x,\mathbf {z}):[0,T]\times \mathbb {R}_+\times \mathbb {R}^N\rightarrow \mathbb {R}\) that is twice continuously differentiable in \((x,\mathbf {z})\). Here, \({\text {tr}}(A)\) is the trace of the matrix A, and we have used the shorthand notation \(\varphi _\mathbf {z}\) and \(\varphi _{\mathbf {z}\mathbf {z}}\) to denote the gradient vector and the Hessian matrix of \(\varphi (t,x,\mathbf {z})\) with respect to \(\mathbf {z}\), that is,

$$\begin{aligned} \varphi _\mathbf {z}(t,x,\mathbf {z}) := \left( \frac{\partial \varphi }{\partial z_1},\dots , \frac{\partial \varphi }{\partial z_N}\right) ,\quad \text {and}\quad \varphi _{\mathbf {z}\mathbf {z}}(t,x,\mathbf {z}) := \left[ \frac{\partial ^2 \varphi }{\partial z_i\partial z_j}\right] _{N\times N}. \end{aligned}$$

Assuming that \(v_{xx}<0\) [which is verified by the form of the solution (36) below], the maximizer in the supremum on the left side of the differential equation in (32) is

$$\begin{aligned} {\varvec{\theta }}^*(t,x,\mathbf {z}) = - \frac{v_{x}(t,x,\mathbf {z})}{v_{xx}(t,x,\mathbf {z})} \Sigma _\mathbf F ^{-1}({\varvec{\mu }}_\mathbf F +{\varvec{\eta }}_F(t)\mathbf {z}) - B\,\frac{v_{x\mathbf {z}}(t,x,\mathbf {z})}{v_{xx}(t,x,\mathbf {z})}. \end{aligned}$$
(33)
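
The first-order condition behind (33) can be checked directly: differentiating the \({\varvec{\theta }}\)-dependent part of \(\mathscr {J}_{\varvec{\theta }}v\) and setting the gradient to zero recovers \({\varvec{\theta }}^*\) above with \(B=\Sigma _\mathbf F ^{-1}(\Sigma _\mathbf F -\Sigma _\mathbf{F \mathbf S })\); this expression for B is what matching the first-order condition with (33) implies and is used below only as a working assumption, since the model's definition of B is given in the main text. A minimal numerical check with placeholder inputs:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 3

# Illustrative placeholder inputs (not the paper's calibration).
L = rng.standard_normal((N, N))
Sigma_F = L @ L.T + 0.1 * np.eye(N)
Sigma_FS = 0.3 * rng.standard_normal((N, N))
drift = rng.standard_normal(N)                 # stands for mu_F + eta_F(t) z
v_x, v_xx = 1.7, -0.9                          # v_x > 0 and v_xx < 0, as assumed
v_xz = rng.standard_normal(N)

# B as implied by matching the first-order condition with (33).
B = np.linalg.solve(Sigma_F, Sigma_F - Sigma_FS)

theta_star = -(v_x / v_xx) * np.linalg.solve(Sigma_F, drift) - (B @ v_xz) / v_xx

# Gradient of the theta-dependent part of J_theta v, evaluated at theta*:
#   (mu_F + eta_F z) v_x + Sigma_F theta v_xx + (Sigma_F - Sigma_FS) v_xz.
grad = drift * v_x + (Sigma_F @ theta_star) * v_xx + (Sigma_F - Sigma_FS) @ v_xz
print("gradient norm at theta*:", np.linalg.norm(grad))                 # ~ 0
print("Hessian v_xx * Sigma_F negative definite:",
      bool(np.all(np.linalg.eigvalsh(v_xx * Sigma_F) < 0)))             # True since v_xx < 0
```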

Substituting \(\sup _{{\varvec{\theta }}\in \mathbb {R}^N} \mathscr {J}_{\varvec{\theta }}v=\mathscr {J}_{{\varvec{\theta }}^*}v\) into (32) yields

$$\begin{aligned} v_t&{}+ (\mathbf {m}+C^\top {\varvec{\eta }}(t)\mathbf {z})^\top v_\mathbf {z}+ \frac{1}{2}{\text {tr}}(C^\top \Sigma \,C\,v_{\mathbf {z}\mathbf {z}}) + rx\, v_x\nonumber \\&{}-\frac{1}{2}\big ({\varvec{\mu }}_\mathbf F +{\varvec{\eta }}_F(t)\mathbf {z}\big )^\top \Sigma _\mathbf F ^{-1}\big ({\varvec{\mu }}_\mathbf F +{\varvec{\eta }}_F(t)\mathbf {z}\big )\frac{v_x^2}{v_{xx}}\nonumber \\&{}-\frac{1}{2v_{xx}}v_{x\mathbf {z}}^\top A\,v_{x\mathbf {z}} -\frac{v_x}{v_{xx}}({\varvec{\mu }}_\mathbf F +{\varvec{\eta }}_F(t)\mathbf {z})^\top B\, v_{x\mathbf {z}} = 0, \end{aligned}$$
(34)

for \((t,x,\mathbf {z})\in [0,T]\times \mathbb {R}_+\times \mathbb {R}^N\), subject to the terminal condition

$$\begin{aligned} v(T,x,\mathbf {z}) = \frac{x^{1-\gamma }}{1-\gamma }. \end{aligned}$$
(35)

To solve (34), we consider the ansatz

$$\begin{aligned} v(t,x,\mathbf {z})&= \frac{x^{1-\gamma }}{1-\gamma } \mathrm {e}^{\gamma \left( f(T-t) + \mathbf {z}^\top \mathbf {g}(T-t) - \frac{1}{2} \mathbf {z}^\top H(T-t)\mathbf {z}\right) }, \end{aligned}$$
(36)

for \((t,x,\mathbf {z})\in [0,T]\times \mathbb {R}_+\times \mathbb {R}^N\), in which f(t), \(\mathbf {g}(t)=\big (g_1(t),\ldots ,g_N(t)\big )^\top \), and

$$\begin{aligned} H(t) = \begin{pmatrix} h_{11}(t) &{} \dots &{} h_{1N}(t)\\ \vdots &{} \ddots &{} \vdots \\ h_{N1}(t) &{} \dots &{} h_{NN}(t) \end{pmatrix} \end{aligned}$$

are unknown functions to be determined. Without loss of generality, we further assume that H(t) is symmetric, that is, \(h_{ij}(t)=h_{ji}(t)\) for \(0\le t\le T\). Substituting this ansatz into (34) yields

$$\begin{aligned}&\frac{\gamma }{2}\mathbf {z}^\top \bigg [ H' + H\, \big (\gamma \,C^\top \Sigma \,C + (1-\gamma )A\big )\, H\\&\qquad \qquad {}- 2\left( E^\top C + \left( \frac{1}{\gamma }-1\right) {\varvec{\eta }}_F^\top B\right) \, H + \frac{1-\gamma }{\gamma ^2} {\varvec{\eta }}_F^\top \Sigma _\mathbf F ^{-1}{\varvec{\eta }}_F \bigg ]\mathbf {z}\\&{}+\gamma \mathbf {z}^\top \bigg [ -\mathbf {g}' + \left( E^\top C - \gamma \,H\,C^\top \Sigma \,C - (1-\gamma )HA + \left( \frac{1}{\gamma }-1\right) {\varvec{\eta }}_F^\top B\right) \mathbf {g}\\&\quad \quad \quad {}- H\left( \mathbf {m}+ \left( \frac{1}{\gamma }-1\right) B^\top {\varvec{\mu }}_\mathbf F \right) +\frac{1-\gamma }{\gamma ^2}\,{\varvec{\eta }}_F^\top \Sigma _\mathbf F ^{-1}{\varvec{\mu }}_\mathbf F \bigg ]\\&{}- \gamma f' +(1-\gamma )\left( r+\frac{{\varvec{\mu }}_\mathbf F ^\top \Sigma _\mathbf F ^{-1}{\varvec{\mu }}_\mathbf F }{2\gamma }\right) +\frac{\gamma }{2}\mathbf {g}^\top \left( (1-\gamma ) A + \gamma \, C^\top \Sigma \,C\right) \mathbf {g}\\&{}+ \gamma \left( \mathbf {m}^\top +\left( \frac{1}{\gamma }-1\right) {\varvec{\mu }}_\mathbf F ^\top B\right) \mathbf {g}-\frac{\gamma }{2}{\text {tr}}(C^\top \Sigma \,C\, H) =0, \end{aligned}$$

for all \((t,\mathbf {z})\in [0,T]\times \mathbb {R}^N\), where we have omitted the t arguments to simplify the notation. Taking the terminal condition (35) into account, it then follows that H, \(\mathbf {g}\), and f must satisfy (20), (22), and (23), respectively.

By statement (i) of the theorem, (20) has a unique solution that is positive definite on (0, T]. Using the classical existence and uniqueness theorem for systems of ordinary differential equations, we then deduce that (22) also has a unique bounded solution on [0, T]. Finally, f given by (23) is continuously differentiable since the integrand on the right side is continuous. Thus, \(v(t,x,\mathbf {z})\) given by (21) is a solution of the HJB equation (32).
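
For concreteness, here is a minimal numerical sketch of this ODE system. The right-hand sides are read off the three brackets displayed above (with the H-equation symmetrized, since H is symmetric), and all model quantities — \(\Sigma \), C, A, B, \(\mathbf {m}\), \({\varvec{\mu }}_\mathbf F \), \({\varvec{\eta }}\), \({\varvec{\eta }}_F\) — are constant placeholders chosen to be mutually consistent; in particular, C is taken as the stacked matrix \((I_N,-I_N)^\top \), an assumption compatible with the identity \(C^\top \Sigma \,C=\Sigma _\mathbf F +\Sigma _\mathbf S -\Sigma _\mathbf{F \mathbf S }-\Sigma _\mathbf{F \mathbf S }^\top \) used earlier. The values are illustrative only, not the calibration behind the paper's numerical examples.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(3)
N, gamma, r, T = 2, 4.0, 0.02, 1.0

# Placeholder, mutually consistent model matrices (illustrative only).
L = rng.standard_normal((2 * N, 2 * N))
Sigma = L @ L.T + 0.1 * np.eye(2 * N)                 # joint 2N x 2N covariance
C = np.vstack([np.eye(N), -np.eye(N)])                # assumed stacked form of C
Sigma_F, Sigma_FS = Sigma[:N, :N], Sigma[:N, N:]
SigF_inv = np.linalg.inv(Sigma_F)
A = Sigma_F - Sigma_FS - Sigma_FS.T + Sigma_FS.T @ SigF_inv @ Sigma_FS
B = SigF_inv @ (Sigma_F - Sigma_FS)                   # consistent with the first-order condition
mu_F, m = 0.05 * np.ones(N), np.zeros(N)
eta = np.vstack([-0.5 * np.eye(N), 0.2 * np.eye(N)])  # eta(t), held constant (2N x N)
eta_F = -0.5 * np.eye(N)                              # eta_F(t), held constant (N x N)

Q = gamma * C.T @ Sigma @ C + (1.0 - gamma) * A       # positive definite for gamma > 1
M = eta.T @ C + (1.0 / gamma - 1.0) * eta_F.T @ B     # coefficient of the terms linear in H

def rhs(tau, y):
    """ODEs for (H, g, f) obtained from the three brackets of the substituted ansatz."""
    H = y[:N * N].reshape(N, N)
    g = y[N * N:N * N + N]
    dH = -H @ Q @ H + M @ H + H @ M.T \
         - (1.0 - gamma) / gamma**2 * eta_F.T @ SigF_inv @ eta_F
    dg = (M - H @ Q) @ g - H @ (m + (1.0 / gamma - 1.0) * B.T @ mu_F) \
         + (1.0 - gamma) / gamma**2 * eta_F.T @ SigF_inv @ mu_F
    df = (1.0 / gamma - 1.0) * (r + mu_F @ SigF_inv @ mu_F / (2.0 * gamma)) \
         + 0.5 * g @ (Q @ g) + (m + (1.0 / gamma - 1.0) * B.T @ mu_F) @ g \
         - 0.5 * np.trace(C.T @ Sigma @ C @ H)
    return np.concatenate([dH.reshape(-1), dg, [df]])

y0 = np.zeros(N * N + N + 1)                          # H(0) = 0, g(0) = 0, f(0) = 0
sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-8, atol=1e-10)
H_T = sol.y[:N * N, -1].reshape(N, N)
print("eigenvalues of H(T) (positive, mirroring statement (i)):", np.linalg.eigvalsh(H_T))
```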

It only remains to show that the solution of the HJB equation is the value function, that is, \(v(t,x,\mathbf {z})=V_F(t,x,\mathbf {z})\) for all \((t,x,\mathbf {z})\in [0,T]\times \mathbb {R}_+\times \mathbb {R}^N\). Note that, for \(0\le t <T\), we have

$$\begin{aligned} f(T-t)&{}+ \mathbf {z}^\top \mathbf {g}(T-t) - \frac{1}{2} \mathbf {z}^\top H(T-t)\mathbf {z}\\&= f(T-t) + \frac{1}{2} \mathbf {g}(T-t)^\top H(T-t)^{-1}\,\mathbf {g}(T-t) -\frac{1}{2}\left\| \widetilde{H}(T-t)^\top \mathbf {z}- \widetilde{H}(T-t)^{-1}\mathbf {g}(T-t)\right\| ^2\\&\le f(T-t) + \frac{1}{2} \mathbf {g}(T-t)^\top H(T-t)^{-1}\,\mathbf {g}(T-t), \end{aligned}$$

in which \(\widetilde{H}(T-t)\) is the Cholesky factor of the positive definite matrix \(H(T-t)\). It then follows that \(v(t,x,\mathbf {z})\) in (36) is bounded in \(\mathbf {z}\) and has polynomial growth in x. A standard verification result, such as Theorem 3.8.1 on page 135 of Fleming and Soner (2006), then yields that \(v(t,x,\mathbf {z})=V_F(t,x,\mathbf {z})\) for all \((t,x,\mathbf {z})\in [0,T]\times \mathbb {R}_+\times \mathbb {R}^N\).

The verification result also states that the optimal control in feedback form is \({\varvec{\theta }}^*\) given by (33). Using (36), one obtains \({\varvec{\theta }}^*(t,x,\mathbf {z})\) in terms of H and \(\mathbf {g}\) as in (24).
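
In particular, combining (33) with the derivatives of the ansatz (36) gives the feedback form \({\varvec{\theta }}^*(t,x,\mathbf {z}) = \frac{x}{\gamma }\,\Sigma _\mathbf F ^{-1}\big ({\varvec{\mu }}_\mathbf F +{\varvec{\eta }}_F(t)\mathbf {z}\big ) + x\,B\big (\mathbf {g}(T-t)-H(T-t)\mathbf {z}\big )\), which should agree with (24) up to notation. A small illustrative implementation, with placeholder model inputs and simple stand-ins for the ODE solutions H and \(\mathbf {g}\):

```python
import numpy as np

def theta_star(t, x, z, T, gamma, Sigma_F, mu_F, eta_F, B, H_fun, g_fun):
    """Feedback futures position implied by (33) and the ansatz (36).

    H_fun(tau) and g_fun(tau) return H(tau) and g(tau), e.g. interpolants of a
    numerical solution of the Riccati/linear ODE system, with tau = T - t.
    """
    tau = T - t
    H, g = H_fun(tau), g_fun(tau)
    myopic = (x / gamma) * np.linalg.solve(Sigma_F, mu_F + eta_F @ z)  # mean-variance-type term
    hedging = x * B @ (g - H @ z)                                      # intertemporal term from v_{xz}
    return myopic + hedging

# Toy usage with constant placeholder inputs (illustrative only).
N, T, gamma = 2, 1.0, 4.0
Sigma_F = np.array([[0.04, 0.01], [0.01, 0.09]])
mu_F = np.array([0.03, 0.05])
eta_F = -0.5 * np.eye(N)
B = np.eye(N)                                # placeholder for the model's matrix B
H_fun = lambda tau: 0.1 * tau * np.eye(N)    # stand-in for the solution of (20)
g_fun = lambda tau: 0.02 * tau * np.ones(N)  # stand-in for the solution of (22)
print(theta_star(0.25, 100.0, np.array([0.1, -0.2]), T, gamma,
                 Sigma_F, mu_F, eta_F, B, H_fun, g_fun))
```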

Proof of Theorem 2

The proof is similar to the proof of Theorem 1 and, thus, is presented in less detail.

(i): Similar to the proof of statement (i) of Theorem 1, the proof here involves comparing (26) with the homogeneous equation

$$\begin{aligned} {\left\{ \begin{array}{ll} \widetilde{H}_0'(\tau ) + \widetilde{H}_0(\tau ) C^\top \Sigma \, C\, \widetilde{H}_0(\tau ) - \frac{2}{\gamma }\,{\varvec{\eta }}(T-\tau )^\top C\, \widetilde{H}_0(\tau )=0; \quad 0\le \tau \le T,\\ \widetilde{H}_0(0)=0_{N\times N}, \end{array}\right. } \end{aligned}$$

using Lemma 2.

(ii) and (iii): As we later verify, \(V(t,x,\mathbf {z})\) solves the HJB equation

$$\begin{aligned} {\left\{ \begin{array}{ll} \displaystyle v_t + \sup _{{\varvec{\Theta }}\in \mathbb {R}^{2N}} \mathscr {L}_{\varvec{\Theta }}v= 0,\\ v(T,x,\mathbf {z}) = \frac{x^{1-\gamma }}{1-\gamma }, \end{array}\right. } \end{aligned}$$
(37)

for \((t,x,\mathbf {z})\in [0,T]\times \mathbb {R}_+\times \mathbb {R}^N\), in which the differential operator \(\mathscr {L}_{\varvec{\Theta }}\) is given by

$$\begin{aligned} \mathscr {L}_{\varvec{\Theta }}\varphi (t, x,\mathbf {z}) :={}&(\mathbf {m}+C^\top {\varvec{\eta }}(t)\mathbf {z})^\top \varphi _\mathbf {z}+ \frac{1}{2}{\text {tr}}(C^\top \Sigma \,C\,\varphi _{\mathbf {z}\mathbf {z}})\\&{}+ \big (r x + {\varvec{\Theta }}^\top ({\varvec{\mu }}+{\varvec{\eta }}(t)\mathbf {z})\big )\varphi _x + \frac{1}{2} {\varvec{\Theta }}^\top \Sigma \,{\varvec{\Theta }}\,\varphi _{xx} + {\varvec{\Theta }}^\top \Sigma \, C\, \varphi _{x\mathbf {z}}, \end{aligned}$$

for any \({\varvec{\Theta }}\in \mathbb {R}^{2N}\) and any \(\varphi (t,x,\mathbf {z}):[0,T]\times \mathbb {R}_+\times \mathbb {R}^N\rightarrow \mathbb {R}\) that is twice continuously differentiable in \((x,\mathbf {z})\). Assuming that \(v_{xx}<0\), which is readily verified by the form of the solution in (27), we obtain that the maximizer \({\varvec{\Theta }}^*\) in (37) is given by

$$\begin{aligned} {\varvec{\Theta }}^* =\begin{pmatrix} {\varvec{\theta }}^*(t,x,\mathbf {z})\\ {\varvec{\pi }}^*(t,x,\mathbf {z}) \end{pmatrix} = - \frac{v_{x}(t,x,\mathbf {z})}{v_{xx}(t,x,\mathbf {z})} \Sigma ^{-1} ({\varvec{\mu }}+{\varvec{\eta }}(t)\mathbf {z}) - C\,\frac{v_{x\mathbf {z}}(t,x,\mathbf {z})}{v_{xx}(t,x,\mathbf {z})}. \end{aligned}$$

Substituting \({\varvec{\Theta }}^*\) into (37) yields

$$\begin{aligned} v_t&{}+ (\mathbf {m}+C^\top {\varvec{\eta }}(t)\mathbf {z})^\top v_\mathbf {z}+ \frac{1}{2}{\text {tr}}(C^\top \Sigma \,C\,v_{\mathbf {z}\mathbf {z}}) + rx\, v_x\\&{}-\frac{1}{2}({\varvec{\mu }}+{\varvec{\eta }}(t)\mathbf {z})^\top \Sigma ^{-1}({\varvec{\mu }}+{\varvec{\eta }}(t)\mathbf {z})\frac{v_x^2}{v_{xx}}\\&{}-\frac{1}{2v_{xx}}v_{x\mathbf {z}}^\top C^\top \Sigma \,C\,v_{x\mathbf {z}} -\frac{v_x}{v_{xx}}({\varvec{\mu }}+{\varvec{\eta }}(t)\mathbf {z})^\top C\, v_{x\mathbf {z}} = 0, \end{aligned}$$

for \((t,x,\mathbf {z})\in [0,T]\times \mathbb {R}_+\times \mathbb {R}^N\), subject to the terminal condition

$$\begin{aligned} v(T,x,\mathbf {z}) = \frac{x^{1-\gamma }}{1-\gamma }. \end{aligned}$$
(38)

This partial differential equation is similar to (34) and can be solved using the same ansatz. Indeed, applying (36) yields that f(t), \(\mathbf {g}(t)\), and H(t) satisfy

$$\begin{aligned}&\frac{\gamma }{2}\mathbf {z}^\top \left[ H' + H\, C^\top \Sigma \,C\, H - \frac{2}{\gamma } E^\top C\, H + \frac{1-\gamma }{\gamma ^2} E^\top \Sigma ^{-1}E \right] \mathbf {z}\\&{}+\gamma \mathbf {z}^\top \bigg [ -\mathbf {g}' + (\frac{1}{\gamma } E^\top -H\,C^\top \Sigma )\,C\,\mathbf {g}\\&\quad \quad \quad {}- H\left( \mathbf {m}+ \left( \frac{1}{\gamma }-1\right) C^\top {\varvec{\mu }}\right) +\frac{1-\gamma }{\gamma ^2}\,E^\top \Sigma ^{-1}{\varvec{\mu }}\bigg ]\\&{}- \gamma f' +(1-\gamma )\left( r+\frac{{\varvec{\mu }}^\top \Sigma ^{-1}{\varvec{\mu }}}{2\gamma }\right) +\frac{\gamma }{2}\mathbf {g}^\top C^\top \Sigma \,C\,\mathbf {g}\\&{}+ \gamma \left( \mathbf {m}^\top +\left( \frac{1}{\gamma }-1\right) {\varvec{\mu }}^\top C\right) \mathbf {g}-\frac{\gamma }{2}{\text {tr}}(C^\top \Sigma \,C\, H) =0, \end{aligned}$$

for all \((t,\mathbf {z})\in [0,T]\times \mathbb {R}^N\), in which we have omitted the t arguments to simplify the notation. Taking the terminal condition (38) into account, it then follows that H, \(\mathbf {g}\), and f must satisfy (26), (28), and (29), respectively.

The verification result and the optimal trading strategy are obtained in a similar fashion as in the proof of Theorem 1.

Cite this article

Angoshtari, B., Leung, T. Optimal trading of a basket of futures contracts. Ann Finance 16, 253–280 (2020). https://doi.org/10.1007/s10436-019-00357-w

