Abstract
A trailing stop is a popular stop-loss trading strategy by which the investor sells the asset once its price experiences a pre-specified percentage drawdown from its running maximum. In this paper, we study the problem of timing to buy and then sell an asset subject to a trailing stop. Under a general linear diffusion framework, we study an optimal double stopping problem with a random path-dependent maturity. Specifically, we first analytically solve the optimal liquidation problem with a trailing stop, and in turn derive the optimal timing to buy the asset. Our method of solution reduces the problem of determining the optimal trading regions to solving the associated differential equations. For illustration, we implement an example and conduct a sensitivity analysis under the exponential Ornstein–Uhlenbeck model.
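To fix ideas, here is a minimal simulation sketch of the trailing-stop exit described above, assuming the price is the exponential of an Ornstein–Uhlenbeck process and that the stop triggers when the price falls by a fraction \(\alpha \) from its running maximum; all parameter values are illustrative, and the discretized path is only meant to convey the mechanics, not the paper's analytical solution.

```python
import numpy as np

def simulate_trailing_stop(alpha=0.10, theta=2.0, mu=0.0, sigma=0.3,
                           x0=0.0, T=1.0, n=10_000, seed=0):
    """Simulate an exponential OU price S = exp(X), where
    dX = theta*(mu - X) dt + sigma dW, and return the first time the price
    drops to (1 - alpha) times its running maximum (the trailing-stop exit),
    or T if the stop is never triggered. Parameters are illustrative only."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = x0
    running_max = np.exp(x0)
    for i in range(1, n + 1):
        x += theta * (mu - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        price = np.exp(x)
        running_max = max(running_max, price)
        if price <= (1.0 - alpha) * running_max:
            return i * dt, price, running_max
    return T, np.exp(x), running_max

if __name__ == "__main__":
    t_stop, price, peak = simulate_trailing_stop()
    print(f"trailing stop triggered at t = {t_stop:.4f}, price = {price:.4f}, peak = {peak:.4f}")
```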
Notes
As usual, we set \(\inf \emptyset =\infty \).
If there is an \(x\in I\) such that \(h_b(x)<h(x)\), then immediate selling after purchasing when the asset price is at x yields a strictly positive profit with certainty, hence an arbitrage.
In this case, \(h(x)=x-c_s\) where \(c_s\ge 0\) is a transaction fee.
It is well-known that if \(\mu \ge q\), then the optimal stopping region is the empty set.
Notice that in the expectation (19) we don’t have the indicator \(\mathbf {1}_{\{\tau _X^+(b(y))\wedge \tau _X^-(y)<\infty \}}\), as it is equal to 1 almost surely.
The procedure can be conveniently generalized to allow for distinct discounting rates for the acquisition and liquidation problems.
References
Dai, M., Zhang, Q., Zhu, Q.: Trend following trading under a regime switching model. SIAM J. Financ. Math. 1(1), 780–810 (2010)
Lehoczky, J.: Formulas for stopped diffusion processes with stopping times based on the maximum. Ann. Probab. 5(4), 601–607 (1977)
Zhang, H.: Occupation time, drawdowns, and drawups for one-dimensional regular diffusion. Adv. Appl. Probab. 47(1), 210–230 (2015)
Zhang, H., Hadjiliadis, O.: Drawdowns and the speed of market crash. Methodol. Comput. Appl. Probab. 14, 739–752 (2012)
Shepp, L., Shiryaev, A.N.: The Russian option: reduced regret. Ann. Appl. Probab. 3(3), 603–631 (1993)
Egami, M., Oryu, T.: A direct solution method for pricing options involving maximum process. Financ. Stoch., forthcoming (2017)
Zhang, H., Rodosthenous, N., Hadjiliadis, O.: Robustness of the N-CUSUM stopping rule in a Wiener disorder problem. Ann. Appl. Probab. 25(6), 3405–3433 (2015)
Glynn, P., Iglehart, D.: Trading securities using trailing stops. Manag. Sci. 41, 1096–1106 (1995)
Warburton, A., Zhang, Z.: A simple computational model for analyzing the properties of stop-loss, take profit, and price breakout trading strategies. Comput. Oper. Res. 33(1), 32–42 (2006)
Yin, G., Zhang, Q., Zhuang, C.: Recursive algorithms for trailing stop: stochastic approximation approach. J. Optim. Theory Appl. 146(1), 209–231 (2010)
Imkeller, N., Rogers, L.: Trading to stops. SIAM J. Financ. Math. 5(1), 753–781 (2014)
Rodosthenous, N., Zhang, H.: Beating the Omega clock: an optimal stopping problem with random time-horizon under spectrally negative Lévy models. Ann. Appl. Probab., forthcoming (2017a)
Rodosthenous, N., Zhang, H.: How to sell a stock amid anxiety about drawdowns—an optimal stopping approach. Working paper (2017b)
Leung, T., Yamazaki, K.: American step-up and step-down credit default swaps under Lévy models. Quantit. Financ. 13(1), 137–157 (2013)
Leung, T., Li, X.: Optimal mean reversion trading with transaction costs and stop-loss exit. Int. J. Theor. Appl. Financ. 18(3), 1550020 (2015)
Carr, P., Jarrow, R., Myneni, R.: Alternative characterizations of American put options. Math. Financ. 2, 87–105 (1992)
Leung, T., Ludkovski, M.: Optimal timing to purchase options. SIAM J. Financ. Math. 2(1), 768–793 (2011)
Borodin, A., Salminen, P.: Handbook of Brownian Motion: Facts and Formulae. Birkhäuser, Basel (2002)
Dayanik, S., Karatzas, I.: On the optimal stopping problem for one-dimensional diffusions. Stoch. Process. Appl. 107(2), 173–212 (2003)
Cartea, A., Jaimungal, S., Penalva, J.: Algorithmic and High-Frequency Trading. Cambridge University Press, Cambridge (2015)
Leung, T., Li, X., Wang, Z.: Optimal starting-stopping and switching of a CIR process with fixed costs. Risk Decis. Anal. 5(2), 149–161 (2014)
Leung, T., Li, X., Wang, Z.: Optimal multiple trading times under the exponential OU model with transaction costs. Stoch. Models 31(4), 554–587 (2015)
Temme, N.: Numerical and asymptotic aspects of parabolic cylinder functions. J. Comput. Appl. Math. 121(1–2), 221–246 (2000)
Zhang, H., Zhang, Q.: Trading a mean-reverting asset: buy low and sell high. Automatica 44(6), 1511–1518 (2008)
Zervos, M., Johnson, T., Alazemi, F.: Buy-low and sell-high investment strategies. Math. Financ. 23(3), 560–578 (2013)
Leung, T., Wang, Z.: Optimal risk-averse timing of an asset sale: trending versus mean-reverting price dynamics. Ann. Financ., forthcoming (2018)
Salminen, P., Vallois, P., Yor, M.: On the excursion theory for linear diffusions. Jpn. J. Math. 2(1), 97–127 (2007)
A Proofs
Proof of Lemma 2.2
Following [19], let us define
For any \(b\in I\), let us define
By [19, Proposition 5.11], we know that the value function
is given by \(\phi _q^-(x){\hat{H}}(\psi _q(x))\), where \({\hat{H}}(\cdot )\) is the smallest nonnegative concave majorant of \(H(\cdot )\) on \({\mathbb {R}}_+\). On the other hand, by [19, Section 6], we have
So Assumption 2.1 implies that \(H(\cdot )\) is convex on \((0,\psi _q(x_0))\), and concave on \((\psi _q(x_0),\infty )\). We now examine the behavior of \(H(\cdot )\) near 0 and \(\infty \). From (34) we know that,
1. if \(h(l+)\ge 0\), then \(h(l+)\) is finite, and \(H(0+)=\lim _{x\downarrow l}\frac{h(x)}{\phi _q^-(x)}=0\);
2. if \(h(l+)<0\), then \(H(z)<0\) for sufficiently small \(z>0\).
Moreover, from
we know \(H(z)>0\) for sufficiently large \(z>0\). Here, the function \(F(\cdot )\) is twice continuously differentiable on \(\mathbb {R}_+\), and by Assumption 2.1 we know that \(\sup _{z\ge \psi _q(x_0)}F(z)=F(z_*)\) for some \(z_*\in [\psi _q(x_0),\infty )\). Obviously \(F(z_*)>0\), which implies that \(H(z)=\frac{h(x)}{\phi _q^-(x)}>0\) for all \(z>z_*\) since \(h(\cdot )\) is monotone. Furthermore, \(z_*\) must satisfy the first order condition
Now define the function
which is clearly continuously differentiable and concave on \({\mathbb {R}}_+\), thanks to (35). Function \({\tilde{H}}(\cdot )\) is also positive on \({\mathbb {R}}_+\), which is evident from the construction. Hence we conclude that \({\tilde{H}}(\cdot )\) is the smallest concave majorant of \(H(\cdot )\). So the optimal stopping region is given by
Therefore, \(x^\star =\psi _q^{-1}(z_*)\) is the optimal stopping threshold. \(\square \)
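As a numerical illustration of the concave-majorant construction in this proof, the following sketch locates the stopping threshold for a hypothetical transformed reward \(H\) with the shape assumed above (convex near 0, concave thereafter, \(H(0+)=0\)); it uses the fact that the majorant is linear through the origin below the tangency point \(z_*\) and coincides with \(H\) above it, so that \(z_*\) maximizes \(H(z)/z\).

```python
import numpy as np

def stopping_threshold(z, H):
    """Locate the point z_* where the smallest nonnegative concave majorant of H
    (a straight line through the origin below z_*, H itself above z_*) touches H.
    On a grid, z_* is where H(z)/z is maximized."""
    slopes = H / z
    i_star = int(np.argmax(slopes))
    z_star = z[i_star]
    H_hat = np.where(z <= z_star, slopes[i_star] * z, H)  # candidate majorant
    return z_star, H_hat

# Hypothetical transformed reward: convex on (0, 1), concave on (1, inf), H(0+) = 0
z = np.linspace(1e-4, 10.0, 10_001)
H = np.log1p(z ** 2)
z_star, H_hat = stopping_threshold(z, H)
assert np.all(H_hat >= H - 1e-12)   # the candidate indeed majorizes H on the grid
print(f"approximate optimal threshold z_* = {z_star:.4f}")
```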
Proof of Proposition 3.1
The proof is similar to that of Lemma 2.2. In the spirit of [19], we derive the optimal value function and the stopping region by constructing the smallest concave majorant of H(z) on \([\psi _q(y),\infty )\). By the convexity of \(H(\cdot )\), we know this concave majorant is given by
where z(y) is defined as
Thus, the optimal stopping region is given by
Therefore, the optimal stopping barrier is given by \(b(y):=\psi _q^{-1}(z(y))\).
From Remark 3.1 we know that, for \(l\le y_1<y_2<x_0\), the following equalities hold:
Thus necessarily, \(b(y_2)\le b(y_1)\le b(l)=x_*<r\). Because z(y) is an interior maximizer of the objective function in (37), it must satisfy the first order condition:
This gives (20).
As \(y\uparrow x_0\), b(y) converges to some limit in \([x_0,r)\). Suppose that \(b(x_0-)\equiv {\underline{b}}>x_0\); then the concavity of \(H(\cdot )\) over \((\psi _q(x_0),\infty )\) implies that
However, taking the limit in (38) as \(y\uparrow x_0\), we know that the above inequality is in fact an equality. This, together with the concavity of \(H(\cdot )\), implies that \(H(\cdot )\) is in fact a straight line over \([\psi _q(x_0), \psi _q({\underline{b}})]\), but then (by the definition of z(y), again) we must have \(b(x_0-)=x_0\) instead.
We use implicit differentiation to prove b(y) is strictly decreasing and differentiable on \((l,x_0)\). To that end, we denote \(z=z(y)\) and \(u=\psi _q(y)\), then the first order equation in (38) reads as
By the definition of \(z\equiv z(y)\) we have
Thus, we know that z(y) is strictly decreasing and differentiable in \(\psi _q(y)\). In other words, z(y) is differentiable in y and \(z'(y)<0\) for any \(y\in (l,x_0)\). \(\square \)
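For the reader's convenience, the implicit differentiation step can be sketched as follows, assuming that the first-order condition (38) takes the standard tangency form \(H'(z)\,(z-u)=H(z)-H(u)\) with \(u=\psi _q(y)\) and \(z=z(y)\). Differentiating both sides in u yields
\[
H''(z)\,z'(u)\,(z-u)+H'(z)\,\bigl(z'(u)-1\bigr)=H'(z)\,z'(u)-H'(u),
\qquad \text{so that}\qquad
z'(u)=\frac{H'(z)-H'(u)}{H''(z)\,(z-u)}.
\]
Since \(u<\psi _q(x_0)<z\), the convexity of \(H(\cdot )\) near u combined with the tangency gives \(H'(u)<H'(z)\), while (strict) concavity at z gives \(H''(z)<0\) and clearly \(z-u>0\); hence \(z'(u)<0\), which is the monotonicity claimed above.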
Proof of Corollary 4.1
From Theorem 3.1 we know that \({\bar{x}}\mapsto b(f({\bar{x}}))\) is strictly decreasing and continuous over \((f^{-1}(l),f^{-1}(x_0))\), and the mapping \({\bar{x}}\mapsto {\bar{x}}\) is strictly increasing over the same domain. Therefore, the difference \(D({\bar{x}}):=b(f({\bar{x}}))-{\bar{x}}\) is strictly decreasing, and \(D({\bar{x}})\ge D(x_0)> 0\) for all \({\bar{x}}\in (f^{-1}(l),x_0]\), and by Proposition 3.1,
As a consequence, we can define \(b_f^\star :=\inf \{{\bar{x}}<f^{-1}(x_0): D({\bar{x}})\le 0\}\), and \(b_f^\star \in (x_0,f^{-1}(x_0))\), so \(f(b_f^\star )\le x_0\).
Now, for all \({\bar{x}}<b_f^\star \), by the construction of \(b_f^\star \) we have \(b(f({\bar{x}}))>{\bar{x}}\), and by the definition of \(z(f({\bar{x}}))\equiv \psi _q(b(f({\bar{x}})))\) in the proof of Proposition 3.1 we know that \(z(f({\bar{x}}))>\psi _q({\bar{x}})\). Because the line segment \(l_0\) connecting \((\psi _q(f({\bar{x}})), H(\psi _q(f({\bar{x}}))))\) and \((z(f({\bar{x}})), H(z(f({\bar{x}}))))\) gives part of the concave majorant of \(H(\cdot )\), we know that the line segment \(l_1\) connecting \((\psi _q(f({\bar{x}})), H(\psi _q(f({\bar{x}}))))\) and \((\psi _q({\bar{x}}), H(\psi _q({\bar{x}})))\), which lies below line segment \(l_0\), must go below the graph of \(H(\cdot )\) at \(\psi _q({\bar{x}})\). This implies that the derivative of \(H(\cdot )\) at \(\psi _q({\bar{x}})\) must be strictly greater than the slope of line segment \(l_1\). That is,
On the other hand, for all \(f^{-1}(x_0)>{\bar{x}}>b_f^\star \), we have \(b(f({\bar{x}}))<{\bar{x}}\). Using a similar argument as above, we know that \(z(f({\bar{x}}))=\psi _q(b(f({\bar{x}})))<\psi _q({\bar{x}})\). Since the line segment \(l_1\) connecting \((\psi _q(f({\bar{x}})), H(\psi _q(f({\bar{x}}))))\) and \((\psi _q({\bar{x}}), H(\psi _q({\bar{x}})))\) joins two points on the graph of the concave function \({\hat{H}}(\cdot )\), which is the smallest concave majorant of \(H(\cdot )\) over \([\psi _q(f({\bar{x}})),\infty )\), we know that
Expressing \(H(\cdot )\) and its derivative with \(h(\cdot ), \phi _q^-(\cdot ), \psi _q(\cdot )\) and their derivatives yields (21) and completes the proof. \(\square \)
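As a numerical aside, once a liquidation boundary \(b(\cdot )\) is available (below we use a purely hypothetical decreasing stand-in rather than the boundary of Proposition 3.1), the critical level \(b_f^\star \) can be located as the unique zero of \(D({\bar{x}})=b(f({\bar{x}}))-{\bar{x}}\). A minimal sketch with \(f(x)=(1-\alpha )x\):

```python
from scipy.optimize import brentq

alpha = 0.10
f = lambda x: (1.0 - alpha) * x       # trailing-stop floor, as in Example 4.1

def b(y):
    """Hypothetical stand-in for the liquidation boundary b(y) of
    Proposition 3.1: continuous and strictly decreasing in y."""
    return 2.0 - 0.5 * y

def D(x_bar):
    # D is strictly decreasing, hence it has at most one zero
    return b(f(x_bar)) - x_bar

# b_f_star is the unique root of D on any bracketing interval where D changes sign
b_f_star = brentq(D, 0.1, 10.0)
print(f"b_f_star = {b_f_star:.4f}")
```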
Proof of Lemma 4.1
Let us denote by \(\mathbf {e}_q\) an exponential random variable with mean \(1/q\), which is independent of X. Then we notice that
To calculate the right-hand sides of the above, we consider an excursion of X below u (notice that \(\tau _X^+(u-)=\inf \{t>0: X_t\ge u\}\) is the first hitting time of X to u):
which is defined for all \(u\ge X_0=\overline{X}_0={\bar{x}}\) such that its lifetime \(\zeta (\epsilon _u):=\tau _X^+(u)-\tau _X^+(u-)>0\). When \(\zeta (\epsilon _u)=0\) we set \(\epsilon _u=\partial \), an isolated point. Then the process \(\{(u,\epsilon _u)\}_{u\ge {\bar{x}}}\) is a Poisson point process with jump measure \(d u\times d n_u\), where \(n_u\) is the excursion measure for \(\epsilon _u\). Define \(T_f(\epsilon _u):=\inf \{0<s<\zeta (\epsilon _u): \epsilon _{u}(s)>u-f(u)\}\). It is known from [27] and Lemma 2.1 that,
Hence,
Let A be the space of all excursions \(\epsilon _u\) such that \(T_f(\epsilon _u)<\zeta (\epsilon _u)\wedge \mathbf {e}_q\), and B be the space of all excursions \(\epsilon _u\) such that \(\mathbf {e}_q<\zeta (\epsilon _u)\wedge T_f(\epsilon _u)\). We have that \(A\cap B=\emptyset \). Consider a Poisson process (with time indexed by the running maximum \(\overline{X}\)) that jumps whenever the current excursion \(\epsilon _{\overline{X}}\in A\cup B\), then from the above calculation, we know that this Poisson process has jump intensity \(n_u(\mathbf {e}_q<\zeta (\epsilon _u)\wedge T_f(\epsilon _u)\text { or }T_f(\epsilon _u)<\zeta (\epsilon _u)\wedge \mathbf {e}_q)\). So \(\mathbb {P}_{{\bar{x}},{\bar{x}}}(\tau _X^+(b)<\rho _f\wedge \mathbf {e}_q)\) is the same as the probability that this Poisson process has no jump over \([{\bar{x}},b)\), which is given by
Moreover, for any \(v\in [{\bar{x}},b)\), the probability that the Poisson process will have the first jump at “time” \(d v\) as a result of \(\epsilon _v\in A\), is given by
which is the same as \(\mathbb {P}_{{\bar{x}},{\bar{x}}}(\overline{X}_{\rho _f}\in d v, \rho _f<\tau _X^+(b)\wedge \mathbf {e}_q)\). The proof is complete by integrating in v over \([{\bar{x}},b)\). \(\square \)
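For readers less familiar with excursion theory, the two displays referred to above have the standard Poisson form; written as a sketch in the notation of the proof,
\[
\mathbb {P}_{{\bar{x}},{\bar{x}}}\bigl(\tau _X^+(b)<\rho _f\wedge \mathbf {e}_q\bigr)
=\exp \Bigl(-\int _{{\bar{x}}}^{b} n_v\bigl(\mathbf {e}_q<\zeta (\epsilon _v)\wedge T_f(\epsilon _v)\ \text {or}\ T_f(\epsilon _v)<\zeta (\epsilon _v)\wedge \mathbf {e}_q\bigr)\,\mathrm {d}v\Bigr),
\]
and the first-jump probability is the corresponding density: the same exponential factor taken over \([{\bar{x}},v)\), multiplied by the intensity \(n_v(\epsilon _v\in A)\,\mathrm {d}v\) of a jump caused by an excursion in A.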
Proof of Lemma 4.2
Let us define for any \(b\ge {\bar{x}}\)
It is clear that \({\bar{H}}(\psi _q({\bar{x}}),{\bar{x}})=H(\psi _q({\bar{x}}))=\frac{h({\bar{x}})}{\phi _q^-({\bar{x}})}\), and for \(b>{\bar{x}}\) we have the right derivative of \(H_f(\psi _q({\bar{x}}),b)\) in b:
It follows that the sign of \(\frac{\partial }{\partial b}{\bar{H}}(\psi _q({\bar{x}}), b)\) depends on that of
But the latter is known to be positive for all \(b<b_f^\star \), thanks to Corollary 4.1. Since \(H'(\psi _q(\cdot ))\) is continuous, so is \(\Gamma (\cdot )\). Hence, we know that
This completes the proof. \(\square \)
Proof of Corollary 4.2
If \(f({\bar{x}})<x\le {\bar{x}}<b_f^\star \), then by the strong Markov property of X, we have
where \(\mathbb {E}_{b_f^\star ,b_f^\star }(\mathrm {e}^{-q\rho _f}h(X_{\rho _f})\mathbf {1}_{\{\rho _f<\infty \}})=g_f(b_f^\star ,b_f^\star )\) is given in Lemma 4.1, which is finite since we know that it is dominated from above by \(v_f(b_f^\star ,b_f^\star )=h(b_f^\star )\). On the other hand, by the analysis in (22) and the results in Lemma 4.1, we have
We obtain the claimed formula by combining the above results.
If \(f({\bar{x}})<x_0\) and \({\bar{x}}\ge b_f^\star \), then from Theorem 3.1 and Theorem 4.1 we know that \(b(f({\bar{x}}))\le {\bar{x}}\), and for all \(f({\bar{x}})<x<b(f({\bar{x}}))\),
By using Lemma 2.1 we obtain that
The claim in this case follows from Lemma 4.1.
In the last case, where \(f({\bar{x}})<x_0\), \({\bar{x}}\ge b_f^\star \) and \(b(f({\bar{x}}))\le x\le {\bar{x}}\), or \(f({\bar{x}})\ge x_0\) and \(f({\bar{x}})<x\le {\bar{x}}\), from Theorem 3.1 and Theorem 4.1 we know that the optimal stopping rule for problem (4) is 0, so we have
This completes the proof. \(\square \)
Proof of Lemma 4.3
The convexity of \(H(\cdot )\) has already been proved in the proof of Lemma 2.2, so we only need to prove that for \(H_f(\cdot )\). To that end, we recall (30) that
from which we obtain that, for \(z\in (0,z_f^\star )\),
We prove that the bracketed expression in (39) is positive, which implies that \(H_f'(\cdot )\) is increasing, so \(H_f(\cdot )\) is convex.
To prove the claim, we notice that for \(z\in (0,z_f^\star )\), we have \(\varphi (z)<\varphi (z_f^\star )=\psi _q(f(\psi _q^{-1}(z_f^\star )))=\psi _q(f(b_f^\star ))<\psi _q(x_0)\), thanks to Corollary 4.1. We now prove that the line segment connecting \((\varphi (z),H(\varphi (z)))\) and \((z, H(z))\) stays above the graph of \(H(\cdot )\). Suppose not; then by the convexity of \(H(\cdot )\), this can happen only if the line segment crosses the graph of \(H(\cdot )\) twice and \(z>\psi _q(b(\psi _q^{-1}(\varphi (z))))\), where the latter is the point at which the tangent line of \(H(\cdot )\) through \((\varphi (z),H(\varphi (z)))\) touches the graph of \(H(\cdot )\). In other words,
On the other hand, by the monotonicity of b(y) (see Proposition 3.1) we know that
where we used the definition of \(b_f^\star \) in Corollary 4.1. However, (40) contradicts (41). Thus, the line segment connecting \((\varphi (z),H(\varphi (z)))\) and \((z, H(z))\) stays above the graph of \(H(\cdot )\). Given that \(H(\cdot )\) is convex at \(\varphi (z)\), we know that the slope of this line segment, \(\frac{H(z)-H(\varphi (z))}{z-\varphi (z)}\), is larger than \(H'(\varphi (z))\). \(\square \)
Lemma A.1
Define the constant \(\beta ^\pm := -\delta \pm \gamma \), where \(\delta :=\frac{\mu }{\sigma ^2}-\frac{1}{2}\) and \(\gamma :=\sqrt{\delta ^2+\frac{2q}{\sigma ^2}}\).
Then, we have \(\beta ^+>1\) and
Proof
First, since \(g(1)=\mu -q<g(\beta ^+)=0\), where \(g(\beta )=\frac{1}{2}\sigma ^2\beta (\beta -1)+\mu \beta -q\), we conclude that \(1<\beta ^+\). It follows from \(\delta <\gamma \) that \(-\beta ^-=\delta +\gamma <2\gamma \), so \(-\frac{\beta ^-}{2\gamma }<1\). From \(g(-\epsilon )<g(\beta ^-)=0\), we know that \(-\epsilon >\beta ^-\). Moreover, \(1-\beta ^--2\gamma =1+\delta -\gamma =1+\delta -\sqrt{\delta ^2+\frac{2q}{\sigma ^2}}<\delta +1-\sqrt{\delta ^2+\frac{2\mu }{\sigma ^2}}=\delta +1-\sqrt{\delta ^2+2\delta +1}\le 0\), so \(\frac{1-\beta ^-}{2\gamma }<1\). \(\square \)
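As a quick numerical sanity check of the lemma (an illustration only, using the definitions of \(\delta \) and \(\gamma \) above and illustrative parameters chosen so that \(q>\mu \)):

```python
import numpy as np

mu, sigma, q = 0.05, 0.30, 0.10      # illustrative values with q > mu

delta = mu / sigma**2 - 0.5
gamma = np.sqrt(delta**2 + 2.0 * q / sigma**2)
beta_plus, beta_minus = -delta + gamma, -delta - gamma

# beta^{+,-} are the two roots of g(beta) = (1/2)*sigma^2*beta*(beta-1) + mu*beta - q
g = lambda b: 0.5 * sigma**2 * b * (b - 1.0) + mu * b - q
assert abs(g(beta_plus)) < 1e-12 and abs(g(beta_minus)) < 1e-12

# Claims of Lemma A.1: beta^+ > 1, -beta^-/(2*gamma) < 1, (1 - beta^-)/(2*gamma) < 1
assert beta_plus > 1.0
assert -beta_minus / (2.0 * gamma) < 1.0
assert (1.0 - beta_minus) / (2.0 * gamma) < 1.0
print(beta_plus, beta_minus)
```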
Proof of Example 4.1
First of all, we verify that \(h(\cdot )\) satisfies Assumption 2.1. To that end, we calculate
from which we know that (12) holds. From [18] we know that
where \(\beta ^\pm \) is defined in Lemma A.1. Condition (13) holds since \(\beta ^+>1\), and thus, Assumption 2.1 holds.
Using (43) and \(f(x)=(1-\alpha )x\) we obtain that
It follows that
Notice that (42) ensures that the two denominators in the last line of (45) are negative.
Using (43), (44) and (45), we obtain
where k(u) is a polynomial in u:
with
and unambiguous definitions of the coefficients A, B, and C. We can show that \(C>0\). In view of the fraction inside C, we let \(g(x)=x^p+p(1-x)-1\) for \(p=\frac{-\epsilon -\beta ^-}{2\gamma }\in (0,1)\). Then \(g(1)=0\) and \(g'(x)=p(x^{p-1}-1)>0\) for all \(x\in (0,1)\) so \(g(\cdot )\) is strictly increasing over (0, 1). In particular, \(g({\bar{\alpha }})<g(1)=0\). Since the denominator in C is also negative, we conclude that \(C>0\).
Also, observe that \(k(0+)=0=H^{(1)}(0+)\). Now, taking the derivative of k(u) in (47), we get
From \(\lim _{u\downarrow 0}u^{1-n_3}k'(u)=Cn_3>0\) we know that \(H^{(1),\prime }(z)>0\) for sufficiently small \(z>0\). Moreover,
By a standard argument (taking the derivative), it can be shown that functions of the form of the right-hand side of (49) can change monotonicity at most once over (0, 1). Clearly, the right-hand side of (49) converges to \(Cn_3(n_3-1)<0\) as \(u\downarrow 0\). On the other hand, because \(H_f(z)-H(z)\) is convex over \((\psi _q(x_0), \psi _q(b_f^\star ))\) (see Lemma 4.3), we know that the right-hand side of (49) is positive as \(u\uparrow 1\). Given that k(u) is maximized at \(\frac{\overline{z}_f^\star }{z_f^\star }\), we know that \(k''(u)\) changes sign exactly once over (0, 1). More specifically, there is \(u_1\in (0,1)\) such that \(k''(u)<0\) for all \(u\in (0,u_1)\), and \(k''(u)>0\) for all \(u\in (u_1,1)\). This proves the pattern of convexity change for \(H^{(1)}(\cdot )\). It follows that \(H^{(1)}(\cdot )\) is strictly increasing from 0 to \(\overline{z}_f^\star \); in particular, \(H^{(1)}(z)>0\) for all \(z\in (0,z_f^\star )\). Thus, the smallest nonnegative concave majorant of \(H^{(1)}(\cdot )\) is given by
and the optimal stopping region for (24) is given by \(\psi _q^{-1}((0,\overline{z}_f^\star ])=(0,\underline{b}_f^\star ]\). Finally, the global maximum \(\overline{z}_f^\star \) is the unique solution to
\(\square \)