Perpetual American Double Lookback Options on Drawdowns and Drawups with Floating Strikes

We present closed-form solutions to the problems of pricing the perpetual American double lookback put and call options on the maximum drawdown and the maximum drawup with floating strikes in the Black-Merton-Scholes model. It is shown that the optimal exercise times are the first times at which the underlying risky asset price process reaches some lower or upper stochastic boundaries depending on the current values of its running maximum or minimum as well as of the maximum drawdown or maximum drawup. The proof is based on the reduction of the original double optimal stopping problems to appropriate sequences of single optimal stopping problems for three-dimensional continuous Markov processes. The latter problems are solved as the equivalent free-boundary problems by means of the smooth-fit and normal-reflection conditions for the value functions at the optimal stopping boundaries and at the edges of the three-dimensional state spaces. We show that the optimal exercise boundaries are determined as either the unique solutions of the associated systems of arithmetic equations or the minimal and maximal solutions of the appropriate first-order nonlinear ordinary differential equations.


Introduction
The main aim of this paper is to compute closed-form expressions for the values of the discounted optimal double stopping problems in (1) and (2), for some given constants L_1 ≥ 1 ≥ K_1 > 0 and K_2 ≥ 1 ≥ L_2 > 0. In order to give a precise mathematical formulation of the problem, we consider a probability space (Ω, F, P) with a standard Brownian motion B = (B_t)_{t≥0}. For simplicity of presentation, we assume that the process X = (X_t)_{t≥0} is a geometric Brownian motion which solves the stochastic differential equation dX_t = (r − δ) X_t dt + σ X_t dB_t (X_0 = x), where r > 0, δ > 0, and σ > 0 are given constants, and x > 0 is fixed. The process X can be interpreted as the price of a risky asset on a financial market, where r is the riskless interest rate, δ is the dividend rate paid to the asset holders, and σ is the volatility rate. Suppose that the suprema in (1) and (2) are taken over all stopping times with respect to the natural filtration (F_t)_{t≥0} of the process X, and the expectations there are taken with respect to the risk-neutral probability measure P. In this case, the values of (1) and (2) can therefore be interpreted as the rational (or no-arbitrage) prices of the perpetual American double lookback options on the maximum drawdown and maximum drawup with floating strikes K_i X and L_i X, for i = 1, 2, in the Black-Merton-Scholes model, respectively (see, e.g. Shiryaev [Chapter VIII; Section 2a] (1999), Peskir and Shiryaev [Chapter VII; Section 25] (2006), or Detemple (2006) for an extensive overview of other related results in the area).
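As a simple numerical illustration of this model, the geometric Brownian motion X can be simulated exactly from its log-dynamics; the sketch below is only illustrative, and the parameter values r, δ (delta), and σ (sigma) are hypothetical rather than taken from the paper.

```python
import math
import random

def simulate_gbm(x0, r, delta, sigma, T, n, seed=0):
    """Simulate dX_t = (r - delta) X_t dt + sigma X_t dB_t on a grid of n
    steps over [0, T], starting from X_0 = x0, via the exact log-scheme."""
    rng = random.Random(seed)
    dt = T / n
    drift = (r - delta - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    path = [x0]
    for _ in range(n):
        path.append(path[-1] * math.exp(drift + vol * rng.gauss(0.0, 1.0)))
    return path

path = simulate_gbm(x0=1.0, r=0.05, delta=0.02, sigma=0.3, T=1.0, n=252)
```

Since the scheme exponentiates Gaussian increments, the simulated prices stay strictly positive, in line with the state space of X.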
Compound options are financial contracts which give their holders the right (but not the obligation) to buy or sell some other options at certain times in the future at the strike prices agreed in advance. Such contingent claims and the related hedging strategies are widely used in various financial markets for the purpose of risk protection (see, e.g. Geske (1977, 1979) and Hodges and Selby (1987) for the first applications of compound options of European type with fixed maturity times). Other important versions of such contracts are compound contingent claims of American type, in which both the outer and inner options can be exercised at any random (stopping) times up to maturity. The rational pricing problems for these options can thus be embedded into double (two-step) optimal stopping problems for the underlying asset price processes. The latter problems are decomposed into appropriate sequences of single (one-step) optimal stopping problems which can then be solved separately. Moreover, in the real financial world, a common application of such contracts is the hedging of suggestions for business opportunities which may or may not be accepted in the future, and which become available only after the previous ones are undertaken. This fact makes compound options an important example of the real options to undertake business decisions which can be expressed in the presented perspective (see Dixit and Pindyck [Chapter X] (1994) for an extensive introduction).
Apart from the singular and impulse stochastic control problems, the multiple (multi-step) optimal stopping problems for one-dimensional diffusion processes have recently drawn considerable attention in the related literature. Duckworth and Zervos (2000) studied an investment model with entry and exit decisions alongside a choice of the production rate for a single commodity. The initial valuation problem was reduced to a double (two-step) optimal stopping problem which was solved through the associated dynamic programming differential equation. Carmona and Touzi (2008) derived a constructive solution to the problem of pricing of perpetual swing contracts, the recall components of which could be viewed as contingent claims with multiple exercises of American type, using the connection between optimal stopping problems and the Snell envelopes associated with them. Carmona and Dayanik (2008) then obtained a closed-form solution of a multiple (multi-step) optimal stopping problem for a general linear regular diffusion process and a general payoff function. Algorithmic constructions of the related exercise boundaries were also proposed and illustrated with several examples of such optimal stopping problems for linear and mean-reverting diffusions. Other infinite-horizon optimal stopping problems with finite sequences of stopping times, which are related to hiring and firing options, have been considered by Egami and Xu (2008) among others.
Discounted optimal stopping problems for certain reward functionals depending on the running maxima and minima of continuous Markov (diffusion-type) processes were initiated by Shepp and Shiryaev (1993) and further developed by Pedersen (2000), Guo and Shepp (2001), Gapeev (2007), Guo and Zervos (2010), Peskir (2012, 2014), Glover et al. (2013), Rodosthenous and Zervos (2017), Gapeev (2019, 2020), Gapeev and Al Motairi (2021), Gapeev and Li (2021), and Gapeev et al. (2022) among others. The main feature in the analysis of such optimal stopping problems was that the normal-reflection conditions hold for the value functions at the diagonals of the state spaces of the multi-dimensional continuous Markov processes having the initial processes and their running extrema as their components. It was shown, by using the maximality principle for solutions of optimal stopping problems established by Peskir (1998), which is equivalent to the superharmonic characterisation of the value functions, that the optimal stopping boundaries are characterised by the appropriate extremal solutions of certain (systems of) first-order nonlinear ordinary differential equations. Other optimal stopping problems in models with spectrally negative Lévy processes and their running maxima were studied by Asmussen et al. (2003), Avram et al. (2004), Ott (2013), and Kyprianou and Ott (2014) among others.
We further consider the problems of (1) and (2) as the associated double (two-step) optimal stopping problems of (5) and (6) for the three-dimensional continuous Markov processes having the underlying risky asset price X and either its running maximum S and the maximum drawdown Y or its running minimum Q and the maximum drawup Z as their state space components. The resulting problems turn out to be necessarily three-dimensional in the sense that they cannot be reduced to optimal stopping problems for Markov processes of lower dimensions. The original optimal double stopping problems are reduced to appropriate sequences of single optimal stopping problems, which are solved as the equivalent free-boundary problems. We then specify the structure of the optimal exercise times and formulate the equivalent free-boundary problems for the value functions.

The Multiple Optimal Stopping Problems
It is seen that the problems of (1) and (2) can naturally be embedded into the optimal double stopping problems for the (time-homogeneous strong) Markov processes (X, S, Y) = (X_t, S_t, Y_t)_{t≥0} and (X, Q, Z) = (X_t, Q_t, Z_t)_{t≥0} with the values in (5) and (6), for some L_1 ≥ 1 ≥ K_1 > 0 and K_2 ≥ 1 ≥ L_2 > 0 fixed, where the suprema are taken over all stopping times with respect to the filtration (F_t)_{t≥0}. The processes S = (S_t)_{t≥0} and Q = (Q_t)_{t≥0} are the running maximum and running minimum associated with X, defined by S_t = s ∨ max_{0≤u≤t} X_u and Q_t = q ∧ min_{0≤u≤t} X_u, while the processes Y = (Y_t)_{t≥0} and Z = (Z_t)_{t≥0} are the running maximum drawdown and running maximum drawup associated with X, defined by Y_t = y ∨ max_{0≤u≤t} (S_u − X_u) and Z_t = z ∨ max_{0≤u≤t} (X_u − Q_u), for arbitrary 0 < s − y ≤ x ≤ s and 0 < q ≤ x ≤ q + z, respectively. In this case, by virtue of the strong Markov property of the processes (X, S, Y) and (X, Q, Z), the original problems of (5) and (6) can be reduced to the optimal stopping problems with the values in (9), where the suprema are taken over all stopping times of (X, S, Y) or (X, Q, Z), and the functions in (10) are set, for some K_2 ≥ 1 ≥ K_1 > 0 fixed, respectively. Here, the functions U*_1(x, s, y) and U*_2(x, q, z) represent the values of the optimal stopping problems formulated in (99) and (100), where the optimal stopping times, for i = 1, 2, have the form of (148), for some boundaries 0 < s − y < b(s, y) < s and 0 < q < g(q, z) < q + z determined in Theorem 5.1 below.
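Along a discretised path of X, the quadruple (S, Q, Y, Z) can be updated by simple recursions; the sketch below uses the additive drawdown and drawup conventions of this section, with arbitrary admissible starting values (s, q, y, z) (the numerical values are illustrative only).

```python
def running_statistics(path, s, q, y, z):
    """Update the running maximum S, running minimum Q, maximum drawdown Y,
    and maximum drawup Z along a path, started from the values (s, q, y, z)."""
    states = []
    for x in path:
        s = max(s, x)        # running maximum
        q = min(q, x)        # running minimum
        y = max(y, s - x)    # maximum drawdown: largest fall below the maximum
        z = max(z, x - q)    # maximum drawup: largest rise above the minimum
        states.append((x, s, q, y, z))
    return states

states = running_statistics([1.0, 1.2, 0.9, 1.1], s=1.0, q=1.0, y=0.0, z=0.0)
```

Every visited state satisfies the constraints 0 < s − y ≤ x ≤ s and 0 < q ≤ x ≤ q + z, which describe the state spaces E_1 and E_2 used below.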

The Outer Optimal Stopping Problems
Let us first find convenient representations for the reward functionals of the optimal stopping problems from (9) with (10). For this purpose, we use the facts that the functions U*_1(x, s, y) and U*_2(x, q, z) from (99) and (100) satisfy the free-boundary problems in (112)-(117), which particularly imply that the processes e^{−rt} U*_1(X_t, S_t, Y_t) and e^{−rt} U*_2(X_t, Q_t, Z_t), for all t ≥ 0, are continuous uniformly integrable martingales under the probability measure P. Note that the processes S and Y may change their values only at the times when X_t = S_t and X_t = S_t − Y_t, while the processes Q and Z may change their values only at the times when X_t = Q_t and X_t = Q_t + Z_t, for t ≥ 0, respectively, and such times accumulated over the infinite horizon form sets of Lebesgue measure zero, so that the indicators in the expressions of (11) and (13) as well as (15) and (16) can be ignored (see also the proof of Theorem 4.1 below for more explanations and references). Then, inserting the stopping time in place of t and applying Doob's optional sampling theorem (see, e.g. Liptser and Shiryaev) to the expressions in (11) and (13), we get that the equalities in (17) and (18) hold, for any stopping time with respect to the filtration (F_t)_{t≥0}. Hence, taking into account the expressions in (17) and (18), we conclude that the optimal stopping problems with the values of (9) are equivalent to the optimal stopping problems with the value functions in (19) and (20), where the functions H_1(x, s, y) and H_2(x, q, z) are defined in (12) and (14), for (x, s, y) ∈ E_1 and (x, q, z) ∈ E_2, respectively. Here, we denote by E_{x,s,y} and E_{x,q,z} the expectations with respect to the probability measures P_{x,s,y} and P_{x,q,z} under which the three-dimensional (time-homogeneous strong Markov) processes (X, S, Y) and (X, Q, Z) start at (x, s, y) ∈ E_1 and (x, q, z) ∈ E_2, and by E_1 = {(x, s, y) ∈ ℝ³ | 0 < s − y ≤ x ≤ s} and E_2 = {(x, q, z) ∈ ℝ³ | 0 < q ≤ x ≤ q + z} the state spaces of (X, S, Y) and (X, Q, Z), respectively.
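The reduction above replaces a payoff at the stopping time by an equivalent integral reward functional. Values of the form E[e^{−rτ} G(X_τ)] for a fixed threshold exercise rule can be estimated by Monte Carlo; the sketch below uses an illustrative payoff G and a one-sided threshold rule, not the actual functionals H_1 and H_2 from (12) and (14), and all parameter values are hypothetical.

```python
import math
import random

def mc_stopped_value(x0, r, delta, sigma, b, payoff,
                     T=50.0, n_steps=2000, n_paths=400, seed=1):
    """Monte Carlo estimate of E[exp(-r * tau) * payoff(X_tau)], where
    tau is the first time the simulated price falls to the level b or below
    (paths that never reach b before the horizon T contribute zero)."""
    rng = random.Random(seed)
    dt = T / n_steps
    drift = (r - delta - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        x, t = x0, 0.0
        for _ in range(n_steps):
            if x <= b:
                break
            x *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
            t += dt
        if x <= b:
            total += math.exp(-r * t) * payoff(x)
    return total / n_paths

est = mc_stopped_value(1.0, 0.05, 0.02, 0.3, b=0.8,
                       payoff=lambda x: max(1.0 - x, 0.0))
```

For a downward threshold the analytic discount factor is E[e^{−rτ}] = (x0/b)^{γ_2} with γ_2 < 0 the negative root of the characteristic equation, so the estimate can be benchmarked against (x0/b)^{γ_2}(1 − b) up to the discretisation overshoot.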
We further obtain solutions to the optimal stopping problems in (19) and (20) and verify below that the value functions V * 1 (x, s, y) and V * 2 (x, q, z) are the solutions of the problems in (9), and thus, give the solutions of the original optimal double stopping problems in (1) and (2), under s = x with y = 0 and q = x with z = 0 , respectively.
It follows from the general theory of optimal stopping problems for Markov processes (see, e.g. Peskir and Shiryaev [Chapter I, Section 2.2] (2006)) that the continuation regions for the optimal stopping problems of (5) and (6) have the form of (21)-(22), so that the appropriate stopping regions are given by (23)-(24). It is seen from the results of Theorem 4.1 proved below that the value functions V*_1(x, s, y) and V*_2(x, q, z) are continuous, so that the sets C*_{1,j}, for j = 1, 2, in (21)-(22) are open, while the sets D*_{1,j}, for j = 1, 2, in (23)-(24) are closed.

The Structure of Optimal Stopping Times
Let us now specify the structure of the optimal stopping times in the outer optimal stopping problems of (19)-(20).
(i) It follows from the structure of the second and the third integrals of (19) and (20), as well as from the facts that the process S is increasing and the process Q is decreasing, while the processes Y and Z are both increasing, that it is not optimal to exercise the outer parts of the contracts (or exercise the compound options for the first time) whenever the appropriate integrands are positive, so that the corresponding sets belong to the continuation regions C*_{1,i}, for i = 1, 2, in (21)-(22), respectively. Moreover, it follows from the structure of the first integrals in (19) and (20) that it is not optimal to exercise the outer parts of the contracts (or exercise the compound options for the first time) when the inequality H_1(X_t, S_t, Y_t) ≥ 0 or H_2(X_t, Q_t, Z_t) ≥ 0 holds, for all t ≥ 0, respectively. In other words, these facts mean that the set {(x, s, y) ∈ E_1 | 0 < (s − y) ∨ rs/(δK_1) ≤ x < b(s, y) ∧ s} belongs to the continuation region C*_{1,1}, while the set {(x, q, z) ∈ E_2 | 0 < q ∨ g(q, z) < x ≤ rq/(δK_2) ∧ (q + z)} belongs to the continuation region C*_{1,2} in (21)-(22). (ii) We now observe that it follows from the definitions of the processes (X, S, Y) and (X, Q, Z) in (3) and (7)-(8) and the structure of the rewards in (19) and (20) that, for each 0 < s − y < s fixed, there may exist a sufficiently small or large 0 < s − y ≤ x ≤ s such that the point (x, s, y) belongs to the stopping region D*_{1,1}, while, for each 0 < q < q + z fixed, there may exist a sufficiently large or small 0 < q ≤ x ≤ q + z such that the point (x, q, z) belongs to D*_{1,2}. By virtue of arguments similar to the ones applied in Dubins et al.
[Subsection 3.3] (1993) and Peskir [Subsection 3.3] (1998), these properties can be explained by the facts that the costs of waiting until the process X, coming from either such a small x > 0, increases to the current value of the running maximum process S (or the process Q + Z), or, coming from such a large x > 0, decreases to the current value of the running minimum process Q (or the process S − Y), may be too large, due to the presence of the discounting factors in the reward functionals of (19) and (20). Furthermore, by virtue of the properties of the running maximum S and minimum Q from (7) (as well as of the running maximum drawdown Y and drawup Z) of the geometric Brownian motion X from (3)-(4), the reward functionals of (19) and (20) infinitesimally increase when X_t = Q_t or X_t = S_t (as well as when X_t = S_t − Y_t or X_t = Q_t + Z_t), for each t ≥ 0. We now show the existence of parts of the stopping regions D*_{1,i}, for i = 1, 2, with the left-hand and right-hand stopping boundaries, respectively, while the existence of parts of the same regions with the right-hand and left-hand stopping boundaries can be shown by means of arguments similar to the ones applied to the stopping regions D*_{2,i}, for i = 1, 2, in Part (ii) of Subsection 5.2 below. On the one hand, if we take some (x, s, y) ∈ D*_{1,1} from (23) such that 0 < x < (r/(δK_1) ∧ 1)s and use the fact that the process (X, S, Y) started at some (x′, s, y) such that 0 < s − y < x′ < x < (r/(δK_1) ∧ 1)s passes through the point (x, s, y) before hitting the diagonal d_{1,1} = {(x, s, y) ∈ ℝ³ | 0 < s − y < x = s}, then the representation of (17) for the reward functional in (19) implies that the point (x′, s, y) belongs to D*_{1,1} too.
Moreover, if we take some (x, q, z) ∈ D*_{1,2} from (24) such that x > (1 ∨ r/(δK_2))q and use the fact that the process (X, Q, Z) started at some (x′, q, z) such that 0 < (1 ∨ r/(δK_2))q < x < x′ < q + z passes through the point (x, q, z) before hitting the plane d_{2,1} = {(x, q, z) ∈ ℝ³ | 0 < x = q < q + z}, then the representation of (18) for the reward functional in (20) implies that the point (x′, q, z) belongs to D*_{1,2} too. Thus, we conclude that the stopping regions D*_{1,i}, for i = 1, 2, from (23)-(24) may have parts with the left-hand and right-hand stopping boundaries, respectively.
(iii) We may therefore conclude that there exist functions a*(s, y) and b*(s, y) such that the inequality H_1(x, s, y) < 0 holds, for (x, s, y) ∈ E_1 with x ≤ a*(s, y) and x ≥ b*(s, y). Also, there exist functions g*(q, z) and h*(q, z) such that the inequality H_2(x, q, z) < 0 holds, for (x, q, z) ∈ E_2 with x ≤ g*(q, z) and x ≥ h*(q, z). In this respect, the continuation regions C*_{1,j}, for j = 1, 2, in (21) and (22) have the form of (25) and (26), while the stopping regions D*_{1,j}, for j = 1, 2, in (23) and (24) are given by (27) and (28) (see Figs. 1-2 below for computer drawings of the optimal stopping boundaries a*(s, y) and b*(s, y) as well as Figs. 3-4 below for computer drawings of the optimal stopping boundaries g*(q, z) and h*(q, z)).
(iv) Let us now clarify the location of the boundaries b*(s, y) and g*(q, z) in relation to the optimal stopping boundaries b(s, y) and g(q, z) from (110)-(111) for the optimal stopping problems with the value functions U*_1(x, s, y) and U*_2(x, q, z) in (99)-(100). For this purpose, we use the notations of the functions F_1(x, s, y) and F_2(x, q, z) from (47) and (49) below. Suppose that the inequality b*(s, y) < b(s, y) holds, for some 0 < s − y < s, and the inequality g*(q, z) > g(q, z) holds, for some 0 < q < q + z. In this case, for each point (x, s, y) such that x ∈ (b*(s, y), b(s, y)), we would obtain a contradiction, so that the inequality b*(s, y) ≥ b(s, y) holds, for all 0 < s − y < s, and the inequality g*(q, z) ≤ g(q, z) holds, for all 0 < q < q + z. Let us finally clarify the location of the boundaries a*(s, y) and h*(q, z) in relation to the optimal stopping boundaries a(s) and h(q) from (158) for the optimal stopping problems with the value functions W*_1(x, s) and W*_2(x, q) in (157). For this purpose, we suppose that the inequality a*(s, y) > a(s) holds, for some 0 < s − y < s, and the inequality h*(q, z) < h(q) holds, for some 0 < q < q + z. In this case, for each point (x, s, y) such that x ∈ (a(s), a*(s, y)), we would again obtain a contradiction. Thus, we may conclude that the inequality a*(s, y) ≤ a(s) holds, for all 0 < s − y < s, and the inequality h*(q, z) ≥ h(q) holds, for all 0 < q < q + z.
Fig. 1. A computer drawing of the optimal exercise boundaries a*(s, y), b*(s, y), and b(s, y), for each y > 0 fixed.

The Free-Boundary Problems
By means of standard arguments based on an application of Itô's formula, it is shown that the infinitesimal operator of the process (X, S, Y) or (X, Q, Z) from (3)-(4) and (7)-(8) has the form of (29) (see, e.g. Gapeev and Rodosthenous (2014b) and Gapeev and Rodosthenous (2016b)). In order to find analytic expressions for the unknown value functions V*_1(x, s, y) and V*_2(x, q, z) from (19) and (20) with the unknown boundaries a*(s, y) and b*(s, y) from (25) and (27) as well as g*(q, z) and h*(q, z) from (26) and (28), we use the results of the general theory of optimal stopping problems for Markov processes (see, e.g. Peskir and Shiryaev [Chapter IV, Section 8] (2006)) as well as of optimal stopping problems for maximum processes (see, e.g. Peskir and Shiryaev [Chapter V] (2006) and the references therein). We can therefore reduce the optimal stopping problem of (19) to the equivalent free-boundary problem of (32)-(38), where the function H_1(x, s, y) is defined in (12), the left-hand conditions of (33)-(34) are satisfied, when s − y ≤ a(s, y) < s holds, and the right-hand conditions of (33)-(34) are satisfied, when s − y < b(s, y) ≤ s holds, as well as the left-hand condition of (35) is satisfied, when a(s, y) < s − y holds, and the right-hand condition of (35) is satisfied, when b(s, y) > s holds, for all 0 < s − y < s. Similarly, the optimal stopping problem of (20) is reduced to the equivalent free-boundary problem, where the function H_2(x, q, z) is defined in (14), the left-hand conditions of (40)-(41) are satisfied, when q ≤ g(q, z) < q + z holds, and the right-hand conditions of (40)-(41) are satisfied, when q < h(q, z) ≤ q + z holds, as well as the left-hand condition of (42) is satisfied, when g(q, z) < q holds, and the right-hand condition of (42) is satisfied, when h(q, z) > q + z holds, for all 0 < q < q + z. Observe that the superharmonic characterisation of the value function (see, e.g. Peskir and Shiryaev [Chapter IV, Section 9] (2006)) implies that V*_1(x, s, y) and V*_2(x, q, z) are the smallest functions satisfying the equalities in (32)-(33) and the properties in (36)-(37) with the boundaries a*(s, y) and b*(s, y), or the equalities in (39)-(40) and the properties in (43)-(44) with the boundaries g*(q, z) and h*(q, z), respectively.
Fig. 3. A computer drawing of the optimal exercise boundaries g*(q, z), h*(q, z), and g(q, z), for each z > 0 fixed.
Fig. 4. A computer drawing of the optimal exercise boundaries g*(q, z), h*(q, z), and g(q, z), for each q > 0 fixed.

Solutions to the Free-Boundary Problems
In this section, we obtain closed-form expressions for the value functions V*_1(x, s, y) and V*_2(x, q, z) in (19) and (20) of the perpetual American double lookback put and call options on the maximum drawdown and the maximum drawup. We also derive first-order nonlinear ordinary differential equations for the optimal exercise boundaries a*(s, y) and b*(s, y) as well as g*(q, z) and h*(q, z) in (25)-(28).

The Candidate Value Functions
We first observe that the general solutions of the second-order ordinary differential equations in (32) and (39) with (29) have the form of (46) and (48), with the particular solutions of (47) and (49), respectively. Here, C_{1,j}(s, y) and C_{2,j}(q, z), for j = 1, 2, are some arbitrary continuously differentiable functions, for 0 < s − y < s and 0 < q < q + z, respectively, and the numbers γ_j, for j = 1, 2, are given by:

γ_j = 1/2 − (r − δ)/σ² − (−1)^j ((1/2 − (r − δ)/σ²)² + 2r/σ²)^{1/2},

so that γ_2 < 0 < 1 < γ_1 holds. We will further use the obvious fact that the identity γ_1 γ_2 = −2r/σ² is satisfied. Then, by applying the conditions from (33)-(35) to the function in (46), we get that the equalities in (52)-(57) hold, for some boundaries a(s, y) ≤ a(s) and b(s, y) ≥ b(s, y), where the conditions of (52) and (54) are satisfied, when s − y ≤ a(s, y) < b(s, y) ∧ s holds, and the conditions of (53) and (55) are satisfied, when b(s, y) < b(s, y) ≤ s holds, as well as the condition of (56) is satisfied, when a(s, y) < s − y holds, and the condition of (57) is satisfied, when b(s, y) > s holds, for all 0 < s − y < s. Furthermore, by applying the conditions from (40)-(42) to the function in (48), we get that the equalities in (58)-(63) hold, for some boundaries g(q, z) ≤ g(q, z) and h(q, z) ≥ h(q), where the conditions of (58) and (60) are satisfied, when q ≤ g(q, z) ≤ q + z holds, and the conditions of (59) and (61) are satisfied, when q ∨ g(q, z) < h(q, z) ≤ q + z holds, as well as the condition of (62) is satisfied, when g(q, z) < q holds, and the condition of (63) is satisfied, when h(q, z) > q + z holds, for all 0 < q < q + z.
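For the geometric Brownian motion X from (3)-(4), the numbers γ_1 and γ_2 above are the roots of the characteristic equation (σ²/2) γ (γ − 1) + (r − δ) γ − r = 0. A small sketch computing them and checking the stated ordering (the parameter values are illustrative only):

```python
import math

def characteristic_roots(r, delta, sigma):
    """Return the roots gamma_1 > gamma_2 of the characteristic equation
    (sigma^2 / 2) g (g - 1) + (r - delta) g - r = 0 of the homogeneous ODE
    (sigma^2 / 2) x^2 V''(x) + (r - delta) x V'(x) - r V(x) = 0."""
    beta = 0.5 - (r - delta) / sigma ** 2
    disc = math.sqrt(beta ** 2 + 2.0 * r / sigma ** 2)
    return beta + disc, beta - disc

g1, g2 = characteristic_roots(r=0.05, delta=0.02, sigma=0.3)
# with r > 0 and delta > 0 the ordering g2 < 0 < 1 < g1 holds, so that
# x**g1 and x**g2 span the general solution of the homogeneous equation
```

The product of the roots recovers the Vieta identity γ_1 γ_2 = −2r/σ² mentioned in the text.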
Then, by solving the system of equations in (52)+(53), we obtain that the candidate value function admits the representation of (64)-(65). Also, by solving the system of equations in (58)+(59), we obtain that the candidate value function admits the representation of (66)-(67). Hence, by solving the system of equations in (52)+(54), we obtain that the candidate value function admits the representation of (68)-(69), for every i = 1, 2. Also, by solving the system of equations in (59)+(61), we obtain that the candidate value function admits the representation of (70)-(71), for 0 < g(q, z) < q < h(q, z) ≤ q + z, for every j = 1, 2. Thus, by solving the system of equations in (53)+(55), we obtain that the candidate value function admits the representation of (72)-(73). Also, by solving the system of equations in (58)+(60), we obtain that the candidate value function admits the representation of (74)-(75). Moreover, by means of straightforward computations, it can be deduced from the expression in (64) with (47) that the first-order and second-order partial derivatives ∂_x V_1(x, s, y; a(s, y), b(s, y)) and ∂_{xx} V_1(x, s, y; a(s, y), b(s, y)) of the function V_1(x, s, y; a(s, y), b(s, y)) take the form of (76) and (77) on the interval (s − y) ∨ a(s, y) < x < b(s, y) ∧ s, for each 0 < s − y < s. Also, it can be deduced from the expression in (66) with (49) that the first-order and second-order partial derivatives ∂_x V_2(x, q, z; g(q, z), h(q, z)) and ∂_{xx} V_2(x, q, z; g(q, z), h(q, z)) of the function V_2(x, q, z; g(q, z), h(q, z)) take the form of (78) and (79) on the interval q ∨ g(q, z) < x < h(q, z) ∧ (q + z), for each 0 < q < q + z.

The Candidate Stopping Boundaries
We first apply the conditions of (54)-(55) to the functions C_{1,j}(s, y; a(s, y), b(s, y)), for j = 1, 2, in (65) to obtain the equalities in (80), for every j = 1, 2 and all 0 < s − y ≤ a(s, y) < b(s, y) ≤ s. Observe that the existence and uniqueness of solutions of the system of arithmetic equations in (80) can be established by means of standard arguments (see, e.g. Gapeev and Rodosthenous (2014b) for similar considerations). Note that the system of arithmetic equations in (80) satisfies the conditions of the classical (two-dimensional) implicit function theorem, so that the resulting solutions a*(s, y) and b*(s, y) turn out to be continuously differentiable. Furthermore, assuming that the candidate boundary functions a(s, y) and b(s, y) are continuously differentiable, we apply the condition of (57) to the functions C_{1,j}(s, y; a(s, y)), for j = 1, 2, in (69) to conclude that the candidate boundary a(s, y) satisfies the first-order ordinary differential equation in (81). Here, the candidate boundary a(s, y) is proportional to s, with the proportionality constant in (0, 1) taken from (167), for all s > s(y) and each y > 0. We also apply the condition of (56) to the functions C_{1,j}(s, y; b(s, y)), for j = 1, 2, in (73) to conclude that the candidate boundary b(s, y) satisfies the first-order ordinary differential equation in (82), for a(s, y) < s − y < b(s, y) ≤ s. In order to specify the optimal exercise boundaries for the outer lookback put options, let us consider the boundaries a*(s, y) and b*(s, y), which provide a unique solution to the system of arithmetic equations in (80), for each 0 < s − y < s fixed. On the one hand, we can define the functions s(y) = sup{0 < s ≤ s*(y) | b*(s, y) ≤ s} and s(y) = inf{0 < s ≤ s*(y) | a*(s, y) ≥ s − y}, where the value s*(y) > y is specified by means of the constant from (143), for each y > 0 fixed.
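The way in which a pair of value-matching and smooth-fit conditions pins down an exercise boundary can be illustrated in the simplest one-boundary case of a perpetual American put, where the analogous two-equation system collapses to a closed form. This is only a stand-in for the system (80) of the paper (whose explicit equations are not reproduced here), and the parameter values are illustrative.

```python
import math

def perpetual_put_boundary(r, delta, sigma, K):
    """Solve the smooth-fit pair for a perpetual American put:
        value matching:  C * b**g2 = K - b
        smooth fit:      C * g2 * b**(g2 - 1) = -1
    Eliminating C yields the closed form b = g2 * K / (g2 - 1)."""
    beta = 0.5 - (r - delta) / sigma ** 2
    g2 = beta - math.sqrt(beta ** 2 + 2.0 * r / sigma ** 2)  # negative root
    b = g2 * K / (g2 - 1.0)          # exercise boundary, 0 < b < K
    C = (K - b) / b ** g2            # matching coefficient of x**g2
    return b, C

b, C = perpetual_put_boundary(r=0.05, delta=0.02, sigma=0.3, K=1.0)
```

In the paper's setting the same mechanism produces a genuine two-dimensional system for (a*(s, y), b*(s, y)), which is then handled by the implicit function theorem rather than in closed form.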
In other words, the boundary b*(s, y) enters the region E_1 from the side of the diagonal d_{1,1} = {(x, s, y) ∈ ℝ³ | 0 < s − y < x = s} by passing through the point (s(y), s(y), y), while the boundary a*(s, y) exits E_1 from the side of the plane d_{1,2} = {(x, s, y) ∈ ℝ³ | 0 < x = s − y < s} by passing through the point (s(y) − y, s(y), y), for each y > 0 fixed. Hence, the candidate value function V_1(x, s, y; a(s, y)) admits the representation of (68)-(69) and the candidate stopping boundary a*(s, y) is proportional to s, with the proportionality constant in (0, 1) taken from (167), for all s > s(y), while the candidate value function V_1(x, s, y; a(s, y), b(s, y)) admits the representation of (64)-(65) and the candidate stopping boundaries a*(s, y) and b*(s, y) are uniquely determined from the system of arithmetic equations in (80), for all s(y) < s ≤ s(y), and each y > 0 fixed.
On the other hand, we can define the functions z(q) = sup{z > z*(q) | g*(q, z) ≥ q} and z(q) = inf{z > z*(q) | h*(q, z) ≤ q + z}, where the value z*(q) > 0 is specified by means of the constant from (145), for each q > 0 fixed. In other words, the boundary g*(q, z) enters the region E_2 from the side of the diagonal d_{2,1} = {(x, q, z) ∈ ℝ³ | 0 < x = q < q + z} by passing through the point (q, q, z(q)), while the boundary h*(q, z) exits E_2 from the side of the plane d_{2,2} = {(x, q, z) ∈ ℝ³ | 0 < q < x = q + z} by passing through the point (q + z(q), q, z(q)), for each q > 0 fixed. Hence, the candidate value function V_2(x, q, z; g(q, z), h(q, z)) admits the representation of (66)-(67) and the candidate stopping boundaries g*(q, z) and h*(q, z) are uniquely determined from the system of arithmetic equations in (83), for all z(q) ≤ z ≤ z(q), while the candidate value function V_2(x, q, z; g(q, z)) admits the representation of (74)-(75) and the candidate stopping boundary g*(q, z) solves the first-order nonlinear ordinary differential equation in (85), for all z > z(q), and each q > 0 fixed. Note that the candidate value function V_2(x, q, z; g(q, z)) in (74)-(75) is increasing in g(q, z), so that we should take the candidate stopping boundary g*(q, z) as the maximal solution of the first-order nonlinear ordinary differential equation in (85) located below the plane d_{2,2}.

The Minimal and Maximal Admissible Solutions b * (s, y) and g * (q, z)
We further consider the minimal and maximal admissible solutions of the first-order nonlinear ordinary differential equations, that is, the smallest and largest possible solutions b*(s, y) and g*(q, z) of the equations in (82) and (85) which satisfy the inequalities 0 < s − y < b(s, y) ≤ b*(s, y) ≤ s, for all 0 < s − y < s, and 0 < q ≤ g*(q, z) ≤ g(q, z) < q + z, for all 0 < q < q + z. By virtue of the classical results on the existence and uniqueness of solutions for first-order nonlinear ordinary differential equations, we may conclude that these equations admit (locally) unique solutions, because their right-hand sides represent (locally) continuous functions in (s, y, b(s, y)) and (q, z, g(q, z)) and (locally) Lipschitz functions in b(s, y) and g(q, z), for each 0 < s − y < s and 0 < q < q + z fixed (see also Peskir [Subsection 3.9] (1998) for similar arguments based on the analysis of other first-order nonlinear ordinary differential equations). Then, it is shown by means of technical arguments based on Picard's method of successive approximations that there exist unique solutions b(s, y) and g(q, z) to the equations in (82) and (85) started at some points (x′_0, y_0, y_0) and (x′′_0, z_0, z_0), for each 0 < x′_0 < y_0 and 0 < x′′_0 < z_0 fixed (see also Graversen and Peskir [Subsection 3.2] (1998) and Peskir [Example 4.4] (1998) for similar arguments based on the analysis of other first-order nonlinear ordinary differential equations).
Hence, in order to construct the appropriate functions b*(s, y) and g*(q, z), which satisfy the equations in (82) and (85) and stay strictly above and below the appropriate planes d_{1,2} and d_{2,2}, respectively, we construct the sequences of solutions satisfying such properties as well as the sequences of solutions intersecting those planes (see also Peskir [Subsection 3.5] (2014) (among others) for a similar procedure applied to solutions of other first-order nonlinear ordinary differential equations). For this purpose, for any decreasing and increasing sequences (x′_l)_{l∈ℕ} and (x′′_l)_{l∈ℕ}, such that 0 < x′_l < s and x′′_l > q, we can construct the sequences of solutions b_l(s, y) and g_l(q, z), for l ∈ ℕ, to the equations in (82) and (85), such that b_l(s, y) = x′_l and g_l(q, z) = x′′_l holds, for each 0 < s − y < s and 0 < q < q + z, and l ∈ ℕ. It follows from the structure of the equations in (82) and (85) that the inequalities ∂_y b_l(s, y) > −1 and ∂_z g_l(q, z) < 1 should hold for the derivatives of the corresponding functions, for each 0 < s − y < s and 0 < q < q + z, and l ∈ ℕ (see also the references above for the analysis of solutions of other first-order nonlinear differential equations). Observe that, by virtue of the uniqueness of solutions mentioned above, we know that any two curves y ↦ b_l(s, y) and y ↦ b_m(s, y) as well as z ↦ g_l(q, z) and z ↦ g_m(q, z) cannot intersect, for each 0 < s − y < s and 0 < q < q + z, and l, m ∈ ℕ such that l ≠ m, and thus, we see that the sequence (b_l(s, y))_{l∈ℕ} is decreasing and the sequence (g_l(q, z))_{l∈ℕ} is increasing, so that the limits b*(s, y) = lim_{l→∞} b_l(s, y) and g*(q, z) = lim_{l→∞} g_l(q, z) exist, for each 0 < s − y < s and 0 < q < q + z, respectively.
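The monotone-family construction above can be mimicked numerically: one integrates the boundary equation from a decreasing sequence of starting values and uses the fact that distinct solutions cannot cross, so the resulting values are ordered and converge monotonically. The right-hand side f below is a hypothetical stand-in for the actual equations (82) and (85), which are not reproduced here.

```python
def euler_solve(f, s0, b0, s1, n=1000):
    """Forward Euler integration of db/ds = f(s, b) from (s0, b0) up to s1."""
    h = (s1 - s0) / n
    s, b = s0, b0
    for _ in range(n):
        b += h * f(s, b)
        s += h
    return b

# Hypothetical right-hand side: its solutions b(s) = const * s from distinct
# starting values never intersect, so the family is totally ordered.
f = lambda s, b: b / s

starts = [2.0, 1.5, 1.2, 1.1, 1.05]                  # decreasing starting values
ends = [euler_solve(f, 1.0, x, 3.0) for x in starts]
```

The list `ends` inherits the strict ordering of `starts`; in the paper, the analogous decreasing limit of such a family singles out the minimal admissible solution b*(s, y).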
We may therefore conclude that b * (s, y) and g * (q, z) provide the minimal and maximal solutions to the equations in (82) and (85) such that b * (s, y) > s − y and g * (q, z) < q + z hold, for all 0 < s − y < s and 0 < q < q + z .
Moreover, since the right-hand sides of the first-order nonlinear ordinary differential equations in (82) and (85) are (locally) Lipschitz in (s, y) and (q, z), respectively, one can deduce by means of Gronwall's inequality that the functions b l (s, y) and g l (q, z) , for each l ∈ ℕ , are continuous, so that the functions b * (s, y) and g * (q, z) are continuous too, for 0 < s − y < s and 0 < q < q + z . The appropriate maximal admissible solutions of first-order nonlinear ordinary differential equations and the associated maximality principle for solutions of optimal stopping problems, which is equivalent to the superharmonic characterisation of the value functions, were established in Peskir (1998) and further developed in Graversen and Peskir (1998), Pedersen (2000), Guo and Shepp (2001), and Gapeev (2007).
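The monotone construction described above can be illustrated numerically. The sketch below is purely schematic: the right-hand sides of (82) and (85) are not reproduced in this excerpt, so the function `F` is a hypothetical Lipschitz stand-in, and an explicit Euler scheme plays the role of the (locally unique) solution map. The point it demonstrates is that solutions started from a monotone sequence of points cannot cross, so their pointwise limit exists, as in the argument for b * (s, y).

```python
# Schematic illustration of the monotone-sequence construction of a limit
# solution to a first-order ODE b'(y) = F(y, b).  F is a HYPOTHETICAL
# Lipschitz right-hand side, not the one in (82) or (85).

def F(y, b):
    # hypothetical (locally) Lipschitz right-hand side
    return -0.5 * (b - y)

def euler_solve(b0, y0, y1, n=1000):
    """Explicit Euler scheme for b'(y) = F(y, b) on [y0, y1] with b(y0) = b0."""
    h = (y1 - y0) / n
    y, b = y0, b0
    path = [b]
    for _ in range(n):
        b += h * F(y, b)
        y += h
        path.append(b)
    return path

# A decreasing sequence of starting points x'_l produces a decreasing
# sequence of solutions b_l; by uniqueness they cannot intersect, so the
# pointwise limit approximates the minimal admissible solution b*.
starts = [2.0, 1.5, 1.2, 1.1, 1.05]
solutions = [euler_solve(x, 0.0, 1.0) for x in starts]

# solutions stay ordered like their starting points at every grid node
for hi_path, lo_path in zip(solutions, solutions[1:]):
    assert all(u > v for u, v in zip(hi_path, lo_path))
```

The non-crossing property checked in the final loop is exactly what makes the sequence (b l (s, y)) l∈ℕ monotone in the argument above.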
Recall that we can put s = x and q = x as well as y = 0 and z = 0 to obtain the values of the original perpetual American double floating-strike lookback maximum drawdown put and maximum drawup call option pricing problems of (1) and (2) from the values of the optimal double stopping problems of (5) and (6), which are equivalent to the sequences of single optimal stopping problems of (19) with (99) and (20) with (100), respectively. Note that, since both parts of the assertion stated above are proved using similar arguments, we only give a proof for the case of the three-dimensional single optimal stopping problem of (19), which is related to the outer perpetual American lookback put options on the maximum drawdown.
Proof In order to verify the assertion of Part (i) stated above, it remains for us to show that the function defined in (86) coincides with the value function in (19) and that the stopping time τ * 1 in (88) is optimal with the boundaries a * (s, y) and b * (s, y) being the solution of the system in (52)-(57) specified in (64)-(65) with (80), or (68)-(69) with (81), or (72)-(73) with (82). For this purpose, let us denote by V 1 (x, s, y) the right-hand side of the expression in (86) associated with a * (s, y) and b * (s, y) . Then, it is shown by means of straightforward calculations from the previous section that the function V 1 (x, s, y) solves the left-hand system of (32)-(38). Recall that the function V 1 (x, s, y) is C 2,1,1 on the closure C̄ 1,1 of C 1,1 and is equal to 0 on D 1,1 , which are defined as C * 1,1 , C̄ * 1,1 and D * 1,1 in (21)-(22) and (23)-(24) with a(s, y) and b(s, y) instead of a * (s, y) and b * (s, y) , respectively. Hence, taking into account the assumption that the boundaries a * (s, y) and b * (s, y) are (at least piecewise) continuously differentiable, for all 0 < s − y < s , by applying the change-of-variable formula from Peskir [Theorem 3.1] (2007) to the process e −rt V 1 (X t , S t , Y t ) (see also Peskir and Shiryaev [Chapter II, Section 3.5] (2006) for a summary of the related results and further references), we obtain the expression: for all t ≥ 0 . Here, the process M 1,1 = (M 1,1 t ) t≥0 defined by: for all t ≥ 0 , is a continuous local martingale with respect to the probability measure P x,s,y . Note that, since the time spent by the process (X, S, Y) at the (part of the) boundary surface ∂C 1,1 = {(x, s, y) ∈ E 1 | x = a(s, y)} as well as at the diagonals d 1,1 = {(x, s, y) ∈ ℝ 3 | 0 < s − y < x = s} and d 1,2 = {(x, s, y) ∈ ℝ 3 | 0 < x = s − y < s} is of the Lebesgue measure zero, the indicators in the formula of (90) can also be set equal to one.
It follows from straightforward calculations and the arguments of the previous section that the function V 1 (x, s, y) satisfies the left-hand second-order ordinary differential equation in (32), which together with the left-hand conditions of (33)-(34) and (36) as well as the fact that the left-hand inequality in (38) holds imply that the inequality (𝕃V 1 − rV 1 )(x, s, y) ≤ −H 1 (x, s, y) is satisfied, for all (x, s, y) ∈ E 1 such that 0 < s − y < x < s with x ≠ a * (s, y) and x ≠ b * (s, y) . Moreover, we observe directly from the expressions in (64)-(65) or (68)-(69) or (72)-(73) with (76)-(77) that the function V 1 (x, s, y) + F 1 (x, s, y) is convex, because its first-order partial derivative ∂ x (V 1 (x, s, y) + F 1 (x, s, y)) is increasing, while its second-order partial derivative, which is equal to ∂ xx V 1 (x, s, y) , is positive, on the interval (s − y) ∨ a * (s, y) < x < b * (s, y) ∧ s . Thus, we may conclude that the left-hand inequality in (37) holds, which together with the left-hand conditions of (33)-(34) and (36) imply that the inequality V 1 (x, s, y) ≥ 0 is satisfied, for all (x, s, y) ∈ E 1 . Let (κ n ) n∈ℕ be the localising sequence of stopping times for the process M 1,1 from (91) such that κ n = inf{t ≥ 0 | |M 1,1 t | ≥ n} , for each n ∈ ℕ . It therefore follows from the expression in (90) that the inequalities: hold with any stopping time τ of the process X, for each n ∈ ℕ fixed. Then, taking the expectation with respect to P x,s,y in (92), by means of Doob's optional sampling theorem, we get: for all (x, s, y) ∈ E 1 and each n ∈ ℕ . Hence, letting n go to infinity and using Fatou's lemma, we obtain from the expressions in (93) that the inequalities: are satisfied with any stopping time τ , for all (x, s, y) ∈ E 1 .
We now prove the fact that the couple of boundaries a * (s, y) and b * (s, y) specified above is optimal in the problem of (19). By virtue of the fact that the function V 1 (x, s, y) from the right-hand side of the expression in (86) associated with the boundaries a * (s, y) and b * (s, y) satisfies the equation of (32) and the conditions of (33), and taking into account the structure of τ * 1 in (88), it follows from the expression in (90) that the equalities: hold, for all (x, s, y) ∈ E 1 and each n ∈ ℕ . Observe that, taking into account the arguments from Shepp and Shiryaev [pages 635-636] (1993), it follows from the structure of the stopping time τ * 1 in (88) that the property: holds, where the function G 1 (x, s, y) is defined in (10), for all (x, s, y) ∈ E 1 . We also note that the variable e −rτ * 1 G 1 (X τ * 1 , S τ * 1 , Y τ * 1 ) is finite on the event {τ * 1 = ∞} , as well as recall from the arguments of Beibel and Lerche (1997) and Pedersen (2000) that the property P x,s,y (τ * 1 < ∞) = 1 holds, for all (x, s, y) ∈ E 1 . Hence, letting n go to infinity and using the conditions of (33), we can apply the Lebesgue dominated convergence theorem to the expression of (95) to obtain the equality: for all (x, s, y) ∈ E 1 , which together with the inequalities in (94) directly implies the desired assertion. We finally recall from the results of Part (iv) of Subsection 2.2 above, implied by standard comparison arguments applied to the value functions of the appropriate optimal stopping problems, that the inequalities a * (s, y) ≤ a(s) = λ * s and b * (s, y) ≥ b(s, y) ≡ ν * (s − y) with 0 < λ * < 1 from (167) and ν * > 1 from (143), for 0 < s − y < s , should hold for the optimal stopping boundaries, which completes the verification. ◻
Corollary 4.2 The optimal method of exercising the perpetual American double lookback options with the values in (1) and (2), which are equivalent to the ones of (5) and (6), acts as follows. After the outer options with the equivalent value functions from (19) and (20) are exercised at the first exit times τ * i , for i = 1, 2 , from (88) and (89) with the boundaries a * (s, y) and b * (s, y) specified in Theorem 4.1 above, the inner options should be exercised at the first hitting times:

with the boundaries b(s, y) and g(q, z) specified in Theorem 5.1 below, respectively.

Remark 4.3
Note that in the cases in which one starts from the stretch, that is, when x = s with y = 0 or x = q with z = 0 holds, the subsequent exercise of the outer and inner perpetual American lookback put and call options on the maximum drawdown and the maximum drawup with the value functions in (19) and (20) actually follows the subsequent exercise of the standard perpetual American lookback options with the value functions in (157) and (99)-(100). More precisely, when the underlying asset price process X starts at some x = s with y = 0 or x = q with z = 0 , by virtue of the facts that the inequalities ν * > 1 and 0 < λ * < 1 hold for the unique solutions of the arithmetic equations in (143) and (145) below, the outer options should be exercised only when the process X reaches a lower boundary a * (S, Y) [≤ a(S)] or an upper boundary h * (Q, Z) [≥ h(Q)] , respectively. However, in the cases in which the process X starts at some x < s with y > 0 or x > q with z > 0 , the outer perpetual American lookback put and call options on the maximum drawdown and the maximum drawup should also be exercised at the times at which the underlying asset price process reaches an upper boundary b * (S, Y) [≥ b(S, Y)] or a lower boundary g * (Q, Z) [≤ g(Q, Z)] , respectively, and then the appropriate inner options should be exercised at the same time.
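The state-dependent exercise rule of the remark can be sketched as a small decision function. The boundary functions below are hypothetical linear stand-ins (the true a * (s, y) and b * (s, y) solve the systems referenced in Theorem 4.1); only the decision logic mirrors the text.

```python
# Illustrative sketch of the exercise rule for the outer drawdown put.
# a_star and b_star below are HYPOTHETICAL boundaries for illustration only.

def should_exercise_outer_put(x, s, y, a_star, b_star):
    """Exercise when X reaches the lower boundary a*(s, y); if the state did
    not start 'from the stretch' (i.e. x < s with y > 0), exercise also when
    X reaches the upper boundary b*(s, y)."""
    if x <= a_star(s, y):
        return True
    if y > 0 and x < s and x >= b_star(s, y):
        return True
    return False

# hypothetical boundaries: lower below, upper above the diagonal x = s - y
a_star = lambda s, y: 0.8 * (s - y)
b_star = lambda s, y: 1.1 * (s - y)
```

For example, starting from the stretch (y = 0) only the lower boundary can trigger exercise, whereas with y > 0 the upper boundary becomes active as well, matching the two cases distinguished in the remark.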

Solutions to the Inner Optimal Stopping Problems
In this section, we obtain explicit expressions for the value functions U * 1 (x, s, y) and U * 2 (x, q, z) in (99) and (100) and the optimal exercise boundaries b(s, y) and g(q, z) of the perpetual American lookback put and call options on the maximum drawdown and the maximum drawup in (110) and (111) below. Note that the optimal stopping problem of (99) was solved in Gapeev and Rodosthenous (2016b), and we present its solution here for completeness.

The Inner Optimal Stopping Problems
We now consider the optimal stopping problems with the values: and for some L 1 ≥ 1 ≥ L 2 > 0 fixed, where the suprema are taken over all stopping times of the processes (X, S, Y) or (X, Q, Z), respectively.
It follows from the arguments above that the continuation regions C * 2,j , for j = 1, 2 , for the optimal stopping problems of (99) and (100) have the form: and so that the appropriate stopping regions D * 2,j , for j = 1, 2 , are given by: and It is seen from the results of Theorem 5.1 proved below that the value functions U * 1 (x, s, y) and U * 2 (x, q, z) are continuous, so that the sets C * 2,j , for j = 1, 2 , in (101)- (102) are open, while the sets D * 2,j , for j = 1, 2 , in (103)-(104) are closed.

The Structure of Optimal Stopping Times
Let us now specify the structure of the optimal stopping times in the inner optimal stopping problems of (99) and (100).
(i) Following the arguments of Subsection 2.2 above, we apply Itô's formula to the processes e −rt (L 1 X t − S t + Y t ) and e −rt (Q t + Z t − L 2 X t ) to get: for each 0 < s − y ≤ x ≤ s , and for each 0 < q ≤ x ≤ q + z , and all t ≥ 0 . Here, the processes N 2,j = (N 2,j t ) t≥0 , for j = 1, 2 , defined by: for all t ≥ 0 , are continuous uniformly integrable martingales under P x,s,y and P x,q,z , respectively. Here, we have used the fact mentioned in Subsection 2.2 and in the proof of Theorem 4.1 above that the processes S and Y may change their values only at the times when X t = S t and X t = S t − Y t , while the processes Q and Z may change their values only at the times when X t = Q t and X t = Q t + Z t , for t ≥ 0 , respectively, and such times accumulated over the infinite horizon form sets of Lebesgue measure zero. Then, inserting τ in place of t and applying Doob's optional sampling theorem to the expressions in (105) and (106), we get that the equalities: and hold, for any stopping time τ . Hence, it follows from the structure of the integrands in the first integrals of (108) and (109) and the fact that the second integrals there increase whenever the processes (X, S, Y) and (X, Q, Z) are located at the planes d 1,2 = {(x, s, y) ∈ ℝ 3 | 0 < x = s − y < s} and d 2,2 = {(x, q, z) ∈ ℝ 3 | 0 < q < x = q + z} that it is not optimal to exercise the inner parts of the contracts (or exercise the compound options for the second time) when either the inequalities S t − Y t < X t ≤ r(S t − Y t )∕(δ L 1 ) and r(Q t + Z t )∕(δ L 2 ) ≤ X t < Q t + Z t or the equalities X t = S t − Y t and X t = Q t + Z t hold, for any t ≥ 0 , respectively.
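The state dynamics underlying (105) and (106) can be checked by simulation. The sketch below (illustrative parameter values, standard library only) evolves the geometric Brownian motion X under the risk-neutral drift r − δ together with its running maximum S, maximum drawdown Y, running minimum Q, and maximum drawup Z, and verifies the state-space constraints s − y ≤ x ≤ s and q ≤ x ≤ q + z.

```python
import math
import random

def simulate_state(x0, r=0.05, delta=0.02, sigma=0.2, T=1.0, n=10000, seed=1):
    """Simulate X (geometric Brownian motion with drift r - delta) together
    with its running maximum S, maximum drawdown Y, running minimum Q, and
    maximum drawup Z on [0, T]."""
    random.seed(seed)
    dt = T / n
    x = s = q = x0
    y = z = 0.0
    for _ in range(n):
        dw = random.gauss(0.0, math.sqrt(dt))
        x *= math.exp((r - delta - 0.5 * sigma ** 2) * dt + sigma * dw)
        s = max(s, x)      # S may increase only at times when X_t = S_t
        q = min(q, x)      # Q may decrease only at times when X_t = Q_t
        y = max(y, s - x)  # Y may increase only at times when X_t = S_t - Y_t
        z = max(z, x - q)  # Z may increase only at times when X_t = Q_t + Z_t
    return x, s, y, q, z

x, s, y, q, z = simulate_state(100.0)
assert s - y <= x <= s and q <= x <= q + z  # the state-space constraints
```

The comments make explicit the fact used in the text: each of S, Q, Y, Z can move only when X sits on the corresponding diagonal of the state space.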
In other words, these facts mean that the sets {(x, s, y) ∈ ℝ 3 | 0 < s − y < x ≤ r(s − y)∕(δ L 1 )} and {(x, q, z) ∈ ℝ 3 | 0 < r(q + z)∕(δ L 2 ) ≤ x < q + z} (whenever they exist) as well as the planes d 1,2 = {(x, s, y) ∈ ℝ 3 | 0 < x = s − y < s} and d 2,2 = {(x, q, z) ∈ ℝ 3 | 0 < q < x = q + z} belong to the continuation regions C * 2,1 and C * 2,2 in (101) and (102) above. (ii) We now show the existence of the right-hand and left-hand parts of the stopping regions D * 2,i , for i = 1, 2 , respectively. On the one hand, if we take some (x, s, y) ∈ D * 2,1 from (103) such that x > (r∕(δ L 1 ) ∨ 1)(s − y) and use the fact that the process (X, S, Y) started at some (x′, s, y) such that (r∕(δ L 1 ) ∨ 1)(s − y) < x < x′ < s passes through the point (x, s, y) before hitting the plane d 1,2 = {(x, s, y) ∈ ℝ 3 | 0 < x = s − y < s} , then the representation of (108) for the reward functional in (99) implies that U * 1 (x′, s, y) − (L 1 x′ − s + y) ≤ U * 1 (x, s, y) − (L 1 x − s + y) = 0 holds, so that (x′, s, y) ∈ D * 2,1 . Moreover, if we take some (x, q, z) ∈ D * 2,2 from (104) such that x < (r∕(δ L 2 ) ∧ 1)(q + z) and use the fact that the process (X, Q, Z) started at some (x′, q, z) such that q < x′ < x < (r∕(δ L 2 ) ∧ 1)(q + z) passes through the point (x, q, z) before hitting the plane d 2,2 = {(x, q, z) ∈ ℝ 3 | 0 < q < x = q + z} , then the representation of (109) for the reward functional in (100) implies that U * 2 (x′, q, z) − (q + z − L 2 x′) ≤ U * 2 (x, q, z) − (q + z − L 2 x) = 0 holds, so that (x′, q, z) ∈ D * 2,2 . On the other hand, if we take some (x, s, y) ∈ C * 2,1 from (101) and use the fact that the process (X, S, Y) started at (x, s, y) passes through some point (x″, s, y) such that 0 < s − y < x″ < x < s before hitting the plane d 1,2 , then the representation of (108) for the reward functional in (99) implies that U * 1 (x″, s, y) − (L 1 x″ − s + y) ≥ U * 1 (x, s, y) − (L 1 x − s + y) > 0 holds, so that (x″, s, y) ∈ C * 2,1 .
Moreover, if we take some (x, q, z) ∈ C * 2,2 from (102) and use the fact that the process (X, Q, Z) started at (x, q, z) passes through some point (x″, q, z) such that 0 < q < x < x″ < q + z before hitting the plane d 2,2 , then the representation of (109) for the reward functional in (100) implies that U * 2 (x″, q, z) − (q + z − L 2 x″) ≥ U * 2 (x, q, z) − (q + z − L 2 x) > 0 holds, so that (x″, q, z) ∈ C * 2,2 . Hence, we may conclude that there exist functions b(s, y) and g(q, z) satisfying the inequalities b(s, y) > (r∕(δ L 1 ) ∨ 1)(s − y) and g(q, z) < (r∕(δ L 2 ) ∧ 1)(q + z) , such that the continuation regions C * 2,j , for j = 1, 2 , in (101)-(102) have the form: while the stopping regions D * 2,j , for j = 1, 2 , in (103)-(104) are given by: (see Figs. 1-2 above for computer drawings of the optimal stopping boundary b(s, y) as well as Figs. 3-4 above for computer drawings of the optimal stopping boundary g(q, z)).
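The split of the state space in (110)-(111) amounts to a simple membership test once a boundary is fixed. In the sketch below all parameter values and the linear boundary shape b(s, y) = ν (s − y) are hypothetical illustrations; the check at the end reflects the lower bound b(s, y) > (r∕(δ L 1 ) ∨ 1)(s − y) stated above.

```python
# Membership test for the continuation/stopping split of the inner
# drawdown put, with HYPOTHETICAL parameter values and boundary slope.

r, delta, L1, nu = 0.05, 0.1, 1.2, 1.2

def region_put(x, s, y):
    """Return 'C' (continue) or 'D' (stop) for the state (x, s, y),
    assuming the linear boundary b(s, y) = nu * (s - y)."""
    assert 0 < s - y <= x <= s  # admissible state
    b = nu * (s - y)
    return 'D' if x >= b else 'C'

# the boundary must lie strictly above (r/(delta*L1) or 1) times (s - y)
lower = max(r / (delta * L1), 1.0)
assert nu > lower
```

With these illustrative numbers, states just below the boundary fall in the continuation region and states at or above it in the stopping region, matching the shape of C * 2,1 and D * 2,1 in (110).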

The Free-Boundary Problems
In order to find analytic expressions for the unknown value functions U * 1 (x, s, y) and U * 2 (x, q, z) from (99) and (100) as well as the unknown boundaries b(s, y) and g(q, z) from (110) and (111), we formulate the equivalent free-boundary problems: where the left-hand conditions of (113)-(114) and (116) are satisfied, when s − y < b(s, y) ≤ s holds, and the left-hand conditions of (115) and (116) are satisfied, when b(s, y) > s holds, for all 0 < s − y < s , while the right-hand conditions of (113)-(114) and (116) are satisfied, when q ≤ g(q, z) < q + z holds, and the right-hand conditions of (115) and (116) are satisfied, when g(q, z) < q holds, for all 0 < q < q + z . The superharmonic characterisation of the value functions implies that U * 1 (x, s, y) and U * 2 (x, q, z) are the smallest functions satisfying the left-hand and the right-hand sides of the equations in (112)-(113) with (117)-(118) with the boundaries b(s, y) and g(q, z) , respectively.

The Candidate Stopping Boundaries
Finally, by applying the condition of (125) to the functions D 1,j (s, y; b(s, y)) , for j = 1, 2 , in (131), we conclude that the candidate boundary b(s, y) satisfies the first-order nonlinear ordinary differential equation: for 0 < s − y < b(s, y) < s . It follows from the structure of the ordinary differential equation in (142) that b(s, y) ≡ ν * (s − y) should hold for the solution, for all 0 < s − y < s , where ν * > 1 is the unique solution of the arithmetic equation: on the interval (1, ∞) . The proof of uniqueness of the solution of the arithmetic equation in (143) is given in Gapeev and Rodosthenous [Appendix] (2016b). We therefore have b(s, y(s)) ≡ ν * (s − y(s)) = s , so that y(s) = (ν * − 1)s∕ν * < s , for each s > 0 . In this case, we have U 1 (x, s, y) with D 1,j (s, y) , for j = 1, 2 , as solutions of a system of first-order partial differential equations, for 0 < s − y(s) ≡ s∕ν * ≤ s − y < s , while we have the ordinary differential equation for the boundary b(s, y) (which is equivalent to an arithmetic equation), for 0 < s − y < s − y(s) ≡ s∕ν * < s .
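Since the arithmetic equation (143) itself is not reproduced in this excerpt, the sketch below uses a hypothetical monotone function `psi` in its place; the numerical recipe carries over: a sign change on (1, ∞) plus bisection pins down ν *, after which the boundary b(s, y) = ν * (s − y) and the critical level y(s) = (ν * − 1)s∕ν * follow in closed form.

```python
import math

def bisect(f, lo, hi, tol=1e-12):
    """Standard bisection for a root of f on [lo, hi] with f(lo)*f(hi) < 0."""
    assert f(lo) * f(hi) < 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def psi(nu):
    # HYPOTHETICAL stand-in for the left-hand side of the arithmetic
    # equation (143): increasing on (1, inf) with a unique root there
    return nu * math.log(nu) - 1.0

nu_star = bisect(psi, 1.0 + 1e-9, 10.0)

def b(s, y):
    """Candidate boundary b(s, y) = nu* (s - y)."""
    return nu_star * (s - y)

s = 100.0
y_crit = (nu_star - 1.0) * s / nu_star
assert abs(b(s, y_crit) - s) < 1e-6  # b(s, y(s)) = s by construction
```

The final assertion reproduces the identity b(s, y(s)) = s used above to locate the critical level y(s).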

The Results
Summarising the facts shown above, we state the following result, which is proved by means of the same arguments as Theorem 4.1 above in combination with the arguments from Gapeev (2020).
Since both parts of the assertion stated above are proved using similar arguments, we only give a proof for the case of the three-dimensional single optimal stopping problem of (99), which is related to the inner perpetual American floating-strike lookback call options on the maximum drawdown.
Proof In order to verify the assertion of Part (i) stated above, it remains for us to show that the function defined in (146) coincides with the value function in (99) and that the stopping time ζ * 1 in (148) is optimal with the boundary b(s, y) being the minimal solution of the first-order nonlinear ordinary differential equation in (142). For this purpose, let us denote by U 1 (x, s, y) the right-hand side of the expression in (146) associated with b(s, y) = ν * (s − y) , for each 0 < s − y < s , with ν * > 1 being the unique solution of the arithmetic equation in (143). Then, it is shown by means of straightforward calculations from the previous section that the function U 1 (x, s, y) solves the left-hand system of (112)-(119). Recall that the function U 1 (x, s, y) is C 2,1,1 on the closure C̄ 2,1 of C 2,1 and is equal to L 1 x − s + y on D 2,1 , which are defined as C * 2,1 and D * 2,1 in (110) and (111). Hence, taking into account the simple structure of the boundary b(s, y) , by applying the change-of-variable formula from Peskir [Theorem 3.1] (2007) to the process e −rt U 1 (X t , S t , Y t ) , we obtain the expression:

e −rt U 1 (X t , S t , Y t ) = U 1 (x, s, y) + M 2,1 t + ∫ t 0 e −ru (𝕃U 1 − rU 1 )(X u , S u , Y u ) I(S u − Y u < X u < S u ) du   (149)

for all t ≥ 0 . Here, the process M 2,1 = (M 2,1 t ) t≥0 defined by: for all t ≥ 0 , is a continuous local martingale with respect to the probability measure P x,s,y . Note that, since the time spent by the process (X, S, Y) at the (part of the) boundary surface ∂C 2,1 = {(x, s, y) ∈ E 1 | x = b(s, y)} as well as at the diagonals d 1,1 = {(x, s, y) ∈ ℝ 3 | 0 < s − y < x = s} and d 1,2 = {(x, s, y) ∈ ℝ 3 | 0 < x = s − y < s} is of the Lebesgue measure zero, the indicators in the second line of the formula in (149) as well as in the expression of (150) can be ignored. Moreover, the component S increases only when the process (X, S, Y) is located on the diagonal d 1,1 .

The Results
Summarising the facts shown above, we state the following result, which can be proved by means of the same arguments as Theorem 4.1 above in combination with the arguments from Gapeev (2020).
Corollary 6.1 Let the processes (X, S) and (X, Q) be given by (3) and (7) with r > 0 , δ > 0 , and σ > 0 . Then, the value functions of the inner optimal stopping problems in (157), for some K 2 ≥ 1 ≥ K 1 > 0 fixed, admit the representations: and while the optimal stopping times have the form of (158) above, where the candidate value functions and the candidate exercise boundaries are specified as follows: (i) the function W 1 (x, s; a(s)) is given by (166), while the boundary has the form a(s) = λ * s , for each s > 0 , with 0 < λ * < 1 being the unique solution of the arithmetic equation in (167) on (0, 1); (ii) the function W 2 (x, q; h(q)) is given by (168), while the boundary has the form h(q) = η * q , for each q > 0 , with η * > 1 being the unique solution of the arithmetic equation in (169) on (1, ∞).