Abstract
Optimal liquidation of an asset with unknown constant drift and stochastic regime-switching volatility is studied. The uncertainty about the drift is represented by an arbitrary probability distribution; the stochastic volatility is modelled by an m-state Markov chain. Using filtering theory, an equivalent reformulation of the original problem as a four-dimensional optimal stopping problem is found and then analysed by constructing approximating sequences of three-dimensional optimal stopping problems. An optimal liquidation strategy and various structural properties of the problem are determined. The analysis of the two-point prior case is presented in detail; building on it, an outline of the extension to the general prior case is given.
1 Introduction
Selling is a fundamental and ubiquitous economic operation. As the prices of goods fluctuate over time, ‘What is the best time to sell an asset to maximise revenue?’ qualifies as a basic question in Finance. Suppose that an asset needs to be sold before a known deterministic time \(T>0\) and that the only source of information available to the seller is the price history. A natural mathematical reformulation of the aforementioned optimal selling question is to find a selling time \(\tau ^{*} \in {\mathcal {T}}_{T}\) such that
where \(\{S_{t}\}_{t\ge 0}\) denotes the price process and \(\mathcal {T}_{T}\) denotes the set of stopping times with respect to the price process S.
Many popular continuous models for the price process are of the form
where \(\alpha \in {\mathbb {R}}\) is called the drift, and \(\sigma \ge 0\) is known as the volatility process. Imposing the simplifying assumptions that the volatility is independent of W as well as time-homogeneous, an m-state time-homogeneous Markov chain stands out as a basic though still rather flexible stochastic volatility model (proposed in [11]), which we choose to use in this article. The flexibility comes from the fact that we can choose the state space as well as the transition intensities between the states.
Though the problem (1.1) in which S follows (1.2) is well-posed mathematically, from a financial point of view, the known drift assumption is widely accepted to be unreasonable (e.g. see [32, Sect. 4.2 on p. 144]) and needs to be relaxed. Hence, using the Bayesian paradigm, we model the initial uncertainty about the drift by a probability distribution (known as the prior in Bayesian inference), which incorporates all the available information about the parameter and its uncertainty (see [15] for more on the interpretation of the prior). If the quantification of initial uncertainty is subjective, then the prior represents one’s beliefs about how likely the drift is to take different values. To be able to incorporate arbitrary prior beliefs, we set out to solve the optimal selling problem (1.1) under an arbitrary prior for the drift.
In the present paper, we analyse and solve the asset liquidation problem (1.1) in the case when S follows (1.2) with m-state time-homogeneous Markov chain volatility and unknown drift, the uncertainty of which is modelled by an arbitrary probability distribution. The first time a particular four-dimensional process hits a specific boundary determining the stopping set is shown to be optimal. This stopping boundary has attractive monotonicity properties and can be found using the approximation procedure developed.
Let us elucidate our study of the optimal selling problem in more depth. Using nonlinear filtering theory, the original selling problem with parameter uncertainty is rewritten as an equivalent optimal stopping problem of a standard form (i.e. without unknown parameters). In this new optimal stopping problem, the posterior mean serves as the underlying process and acts as a stochastic creation rate; the payoff function in the problem is constant. The posterior mean is shown to be the solution of an SDE depending on the prior and the whole volatility history. Embedding the optimal stopping problem into a Markovian framework is non-trivial because the whole posterior distribution needs to be included as a variable. Fortunately, we show that, having fixed the prior, the posterior is fully characterised by only two real-valued parameters: the posterior mean and what we call the effective learning time. As a result, we are able to define an associated Markovian value function with four underlying variables (time, posterior mean, effective learning time, and volatility) and study the optimal stopping problem as a four-dimensional Markovian optimal stopping problem (the volatility takes values in a finite set, but slightly abusing terminology, we still call it a dimension). Exploiting that the volatility is constant between the regime switches, we construct m sequences of simpler auxiliary three-dimensional Markovian optimal stopping problems whose values converge monotonically to the true value function in the limit. The main advantage of this approximating-sequence approach compared with tackling the full variational inequality of the problem directly is that dealing with the analytically complicated coupled system is avoided altogether. Instead, only much simpler standard uncoupled free-boundary problems need to be analysed or solved numerically to arrive at a desired result.
We show that the value function is decreasing in time and effective learning time as well as increasing and convex in posterior mean. The first hitting time of a region specified by a stopping boundary that is a function of time, effective learning time, and volatility is shown to be optimal. The stopping boundary is increasing in time and effective learning time, and is the limit of a monotonically increasing sequence of boundaries from the auxiliary problems. Moreover, the approximation procedure using the auxiliary problems yields a method to calculate the value function as well as the optimal stopping boundary numerically.
In the two-point prior case, the posterior mean fully characterises the posterior distribution, making the problem more tractable and allowing us to obtain some additional results. In particular, we prove that, under a skip-free volatility assumption, the Markovian value function is decreasing in the volatility and that the stopping boundary is increasing in the volatility.
In a broader mathematical context, the selling problem investigated appears to be the first optimal stopping problem with parameter uncertainty and stochastic volatility to be studied in the literature. Thus it is plausible that ideas presented herein will find uses in other optimal stopping problems of the same type; for example, in classical problems of Bayesian sequential analysis (e.g. see [29, Chapter VI]) with stochastically evolving noise magnitude. It is clear to the author that with additional effort a number of results of the article can be refined or generalised. However, the objective chosen is to provide an intuitive understanding of the problem and the solution while still maintaining readability and clarity. This also explains why, for the most part, we focus on the two-point prior case and outline an extension to the general prior case only at the end.
1.1 Related Literature
There is a strand of research on asset liquidation problems in models with regime-switching volatility; alas, they either concern only a special class of suboptimal strategies or treat the drift as observable. In [36], a restrictive asset liquidation problem was proposed and studied; the drift as well as the volatility were treated as unobservable, and the possibility to learn about the parameters from the observations was disregarded. The subsequent papers [17, 34, 35] explored various aspects of the same formulation. An optimal selling problem with the payoff \(e^{-r\tau }(S_{\tau }-K)\) was studied in [26] for the Black–Scholes model, in [21] for a two-state regime-switching model, and in [35] for an m-state model with finite horizon. In all three cases, the drift and the volatility are assumed to be fully observable.
In another strand of research, the optimal stopping problem (1.1) has been solved and analysed in the Black–Scholes model under arbitrary uncertainty about the drift. The two-point prior case was studied in [12], while the general prior case was solved in [15] using a different approach. This article can be viewed as a generalisation of [15] to include stochastic regime-switching volatility. Related option valuation problems under incomplete information were studied in [18, 33], both in the two-point prior case, and in [10] in the n-point prior case.
The approach we take to approximate a Markovian value function by a sequence of value functions of simpler constant-volatility problems was used before in [24] to investigate a finite-horizon American put problem (also, its slight generalisation) in a regime-switching model with full information. Regrettably, in the case of 3 or more volatility states, the recursive approximation step in [24, Sect. 5] contains a blunder; we rectify it in Sect. 3.2 of this article. A possible alternative route to analysing and solving the optimal stopping problem is to tackle the system of variational inequalities directly using weak-solution techniques (e.g., see [6, 30]), similarly as in [7] for American options with regime-switching volatility. Structural and regularity properties would need to be established using PDE techniques. If appropriate theoretical results can be obtained, the numerical PDE schemes discussed in [22] should yield a numerical solution. However, this alternative approach requires a different toolkit, appears to be more demanding analytically, and hence is not investigated further in the present article.
Though it is true that the current paper is a generalisation of [15] from constant volatility to the regime-switching stochastic volatility model, the extension is definitely not a straightforward one. Novel statistical learning intuitions were needed, and new proofs were developed to arrive at the results of the paper. One of the main insights of the optimal liquidation problem with constant volatility in [15] was that the current time and price were sufficient statistics for the optimal selling problem. However, changing the volatility from constant to stochastic makes the posterior distribution of the drift truly dependent on the price path. This raises the question of whether an optimal liquidation problem can be treated using the mainstream finite-dimensional Markovian techniques at all, and also whether any of the developments from the constant volatility case can be taken advantage of. In the two-point prior case with regime-switching volatility, the following new insight was key. Despite the posterior being a path-dependent function of the stock price, we can show that the current time, posterior mean, and instantaneous volatility (extracted from the price process) are sufficient statistics for the optimal liquidation problem. Alas, for any prior with more than two points in the support, the same triplet is no longer a sufficient statistic. Fortunately, if in addition to the time-price-volatility triplet we introduce an additional statistic, which we name the effective learning time, the resulting 4-tuple becomes a sufficient statistic for the selling problem under a general prior. Besides these insights, some new technicalities (in particular, Lemma 2.3) stemming from stochastic volatility had to be resolved to reformulate the optimal selling problem into the standard Markovian form.
In relation to [24], though we employ the same general iterative approximation idea to construct an approximating sequence for the Markovian value function, the particulars, including proofs and results, are notably distinct. Firstly, we work in a more general setting, proving and formulating more abstract as well as, in multiple instances, new types of results. For example, we prove things in the m-state rather than the two-state regime-switching model. This allowed us to catch and correct an erroneous construction of the approximating sequence in [24] for models with more than two volatility states. Moreover, almost all the proofs follow different arguments, either because of the structural differences in the selling problem or because we prefer another way, which seems to be more transparent and direct, to arrive at the results. Lastly, many of the results in the present paper are problem-specific and do not even depend on the iterative approximation of the value function at all.
The idea to iteratively construct a sequence of auxiliary value functions that converge to the true value function in the limit is generic and has been successfully applied many times to optimal stopping problems with a countable number of discrete events (e.g. jumps, discrete observations). In the setting with partial observations, an iterative approximation scheme was employed in [5] to study the Poisson disorder detection problem with unknown post-disorder intensity, then later, in [9], to analyse a combined Poisson–Wiener disorder detection problem, and, more recently, in [4], to investigate Wiener disorder detection under discrete observations. In the fully observable setting, such iterative approximations go back to at least as early as [19], which deals with a Markovian optimal stopping problem with a piecewise deterministic underlying. In Financial Mathematics, iteratively constructed approximations were used in [2, 3] to study the value functions of finite and perpetual American put options, respectively, for a jump diffusion. Besides optimal stopping, the iterative approximation technique was utilised for the singular control problem of optimal dividend policy [13].
2 Problem Set-Up
We model a financial market on a filtered probability space \((\Omega ,{\mathcal {F}}, \{\mathcal {F}_{t} \}_{t \ge 0}, {{\mathbb {P}}})\) satisfying the usual conditions. Here the measure \({{\mathbb {P}}}\) denotes the physical probability measure. The price process is modelled by
where X is a random variable having probability distribution \(\mu \), W is a standard Brownian motion, and \(\sigma \) is a time-homogeneous right-continuous m-state Markov chain with a generator \(\Lambda = (\lambda _{ij})_{1\le i,j \le m}\) and taking values \(\sigma _{m} \ge \cdots \ge \sigma _{1} >0\). Moreover, we assume that X, W, and \(\sigma \) are independent. Since the volatility can be estimated from the observations of S in an arbitrarily short period of time (at least in theory), it is reasonable to assume that the volatility process \(\{ \sigma (t) \}_{t \ge 0}\) is observable. Hence the available information is modelled by the filtration \({\mathbb {F}}^{S, \sigma } = \{ {\mathcal {F}}^{S, \sigma }_t \}_{t \ge 0}\) generated by the processes S and \(\sigma \) and augmented by the null sets of \({\mathcal {F}}\). Note that the drift X and the random driver W are not directly observable.
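For concreteness, the model can be simulated directly. Below is a hedged sketch (all numerical values are illustrative, not from the paper): one Euler path of the price with a two-state Markov-chain volatility, where the drift X is drawn once from a two-point prior and then held fixed, matching the unknown-constant-drift assumption.

```python
import numpy as np

# Illustrative parameters (ours, not the paper's): two volatility states,
# a generator Lam for the volatility chain, and a two-point prior for X.
rng = np.random.default_rng(0)
T, n = 1.0, 1000
dt = T / n
sigma_states = np.array([0.2, 0.4])
Lam = np.array([[-1.0, 1.0],
                [2.0, -2.0]])               # generator of the volatility chain

X = rng.choice([-0.5, 0.5])                 # drift drawn once from the prior
state = 0
logS = np.zeros(n + 1)
for k in range(n):
    s = sigma_states[state]
    logS[k + 1] = (logS[k] + (X - 0.5 * s * s) * dt
                   + s * np.sqrt(dt) * rng.standard_normal())
    # regime switch with probability ~ lambda_i * dt per step
    if rng.random() < -Lam[state, state] * dt:
        state = 1 - state
S = np.exp(logS)                            # price path with S_0 = 1
```

A seller observes the path S (and, by assumption, the realised volatility states), but never X or W directly, which is exactly the informational constraint encoded by \({\mathbb {F}}^{S,\sigma }\).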
The optimal selling problem that we are interested in is
where \(\mathcal {T}_T^{S, \sigma }\) denotes the set of \({\mathbb {F}}^{S, \sigma }\)-stopping times that are less than or equal to a prespecified time horizon \(T >0\).
Remark 2.1
It is straightforward to include a discount factor \(e^{-r\tau }\) in (2.2). In fact, it simply corresponds to a shift of the prior distribution \(\mu \) in the negative direction by r.
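Indeed, assuming the usual exponential form of the solution to (2.1) with the constant drift X, a one-line computation shows the equivalence:

```latex
e^{-r\tau} S_{\tau}
  = S_{0}\exp\!\Big( (X-r)\tau
      - \tfrac{1}{2}\int_{0}^{\tau}\sigma(s)^{2}\,\mathrm{d}s
      + \int_{0}^{\tau}\sigma(s)\,\mathrm{d}W_{s} \Big),
```

so the discounted problem is the original problem (2.2) for an asset with unknown drift \(X-r\), whose distribution is \(\mu \) shifted by \(-r\).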
Let \(l:=\inf \mathop {\mathrm {supp}}\nolimits (\mu )\) and \(h:=\sup \mathop {\mathrm {supp}}\nolimits (\mu )\). It is easy to see that if \(l \ge 0\), then it is optimal to stop at the terminal time T. Likewise, if \(h \le 0\), then stopping immediately, i.e. at time zero, is optimal. The rest of the article focuses on the remaining and most interesting case.
Assumption 2.2
\(l< 0 < h\).
2.1 Equivalent Reformulation Under a Measure Change
Let us write \({\hat{X}}_{t} := {\mathbb {E}}[X \mid \mathcal {F}^{S, \sigma }_{t} ]\). Then the process
called the innovation process, is an \({\mathbb {F}}^{S,\sigma }\)-Brownian motion (see [1, Proposition 2.30 on p. 33]).
Lemma 2.3
The volatility process \(\sigma \) and the innovation process \({\hat{W}}\) are independent.
Proof
Since X, W, and \(\sigma \) are independent, we can think of \((\Omega , {\mathcal {F}}, {{\mathbb {P}}})\) as a product space \( \left( \Omega _{X,W} \times \Omega _{\sigma }, {\mathcal {F}}_{X,W} \otimes {\mathcal {F}}_{\sigma }, {{\mathbb {P}}}_{X,W} \times {{\mathbb {P}}}_{\sigma } \right) \). Let \( A, A' \in \mathcal {B}({\mathbb {R}}^{[0,T]}) \). Then
where the penultimate equality is justified by the fact that, for any fixed \(\omega _{\sigma }\), the innovation process \({\hat{W}} (\cdot , \omega _{\sigma })\) is a Brownian motion under \({{\mathbb {P}}}_{X,W}\). Hence from (2.3), the processes \({\hat{W}}\) and \(\sigma \) are independent. \(\square \)
Defining a new equivalent measure \({\tilde{{{\mathbb {P}}}}} \sim {{\mathbb {P}}}\) on \((\Omega , {\mathcal {F}}_{T})\) via the Radon–Nikodym derivative
and writing
we have that, for any \(\tau \in \mathcal {T}^{S, \sigma }_{T}\),
Moreover, by Girsanov’s theorem, the process \(B_t:= -\int _{0}^{t} \sigma (s) \,\mathrm {d}s + \hat{W}_t\) is a \({\tilde{{{\mathbb {P}}}}}\)-Brownian motion on [0, T]. In addition, Lemma 2.3 together with [1, Proposition 3.13] tells us that the law of \(\sigma \) is the same under \({\tilde{{{\mathbb {P}}}}}\) and \({{\mathbb {P}}}\), as well as that B and \(\sigma \) are independent under \({\tilde{{{\mathbb {P}}}}}\).
Without loss of generality, we set \(S_0=1\) throughout the article, so the optimal stopping problem (2.2) can be cast as
Between the volatility jumps, the stock price is a geometric Brownian motion with known constant volatility and unknown drift. Hence, by Corollary 3.4 in [15], we have that \({\mathbb {F}}^{S, \sigma } = {\mathbb {F}}^{{\hat{X}}, \sigma }\) and \(\mathcal {T}^{S, \sigma }_T=\mathcal {T}^{{\hat{X}}, \sigma }_T\), where \({\mathbb {F}}^{{\hat{X}}, \sigma }\) denotes the usual augmentation of the filtration generated by \({\hat{X}}\) and \(\sigma \), and \(\mathcal {T}^{{\hat{X}} , \sigma }_{T}\) denotes the set of \({\mathbb {F}}^{{\hat{X}}, \sigma }\)-stopping times not exceeding T. As a result, an equivalent reformulation of (2.4) is
which we will study in the subsequent parts of the article.
2.2 Markovian Embedding
In all except the last section of this article, we will focus on the special case when X has a two-point distribution \(\mu = \pi \delta _{h} + (1-\pi ) \delta _{l}\), where \(h>l\), \(\pi \in (0,1)\) are constants, and \(\delta _{h}, \delta _{l}\) are Dirac measures at h and l, respectively. In this special case, expressions are simpler and arguments are easier to follow than in the general prior case; still, most underlying ideas of the arguments are the same. Hence, we choose to understand the two-point prior case first, after which generalising the results to the general prior case will become a rather easy task.
Since the volatility is a known constant between the jump times, using the dynamics of \({\hat{X}}\) in the constant volatility case (equation (3.9) in [15]), the process \({\hat{X}}\) is a unique strong solution of
where
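For intuition, the posterior mean in the two-point case can also be computed by direct Bayes updating on discretised data. A minimal sketch (the discretisation and all parameter values are ours, not from the paper), assuming that, given \(X=x\) and volatility s on a step of length dt, the log-price increment is \(N((x-s^{2}/2)\,dt,\ s^{2}\,dt)\)-distributed:

```python
import numpy as np

# Hedged sketch (not from the paper): Bayes updating of P(X = h) for the
# two-point prior mu = pi*delta_h + (1 - pi)*delta_l, from discretised
# log-price increments with piecewise-constant observed volatility.
def posterior_mean(pi, h, l, dlogS, sigmas, dt):
    log_odds = np.log(pi / (1.0 - pi))
    for dL, s in zip(dlogS, sigmas):
        a = (h - 0.5 * s * s) * dt       # mean increment under X = h
        b = (l - 0.5 * s * s) * dt       # mean increment under X = l
        # Gaussian log-likelihood ratio of the hypotheses X = h vs X = l
        log_odds += (-(dL - a) ** 2 + (dL - b) ** 2) / (2.0 * s * s * dt)
    p = 1.0 / (1.0 + np.exp(-log_odds))  # posterior probability of X = h
    return p * h + (1.0 - p) * l         # posterior mean, always in [l, h]
```

Feeding increments whose empirical drift matches h drives the posterior mean towards h, which is the discrete-time shadow of the learning mechanism that the SDE above encodes in continuous time; note that each update weights the data by the current volatility state, so the volatility path genuinely matters.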
Now, we can embed the optimal stopping problem (2.4) into a Markovian framework by defining a Markovian value function
Here \({\hat{X}}^{t,x, \sigma }\) denotes the process \({\hat{X}}\) in (2.6) started at time t with \({\hat{X}}_{t} = x\), \(\sigma (t) = \sigma \), and \({\mathcal {T}}_{T-t}\) stands for the set of stopping times less than or equal to \(T-t\) with respect to the usual augmentation of the filtration generated by \(\{ {\hat{X}}^{t,x,\sigma }_{t+s}\}_{s \ge 0}\) and \(\{ \sigma (t+s)\}_{s \ge 0}\). The formulation (2.7) has an interpretation of an optimal stopping problem with the constant payoff 1 and the discount rate \(-{\hat{X}}_{s}\); from now onwards, we will study this discounted problem. The notation \(v_{i} := v( \cdot , \cdot , \sigma _{i})\) will often be used.
3 Approximation Procedure
It is not clear how to compute v in (2.7) or analyse it directly. Hence, in this section, we develop a way to approximate the value function v by a sequence of value functions, corresponding to simpler constant volatility optimal stopping problems.
3.1 Operator \(J_{i}\)
For the succinctness of notation, let \(\lambda _{i}:=\sum _{j \ne i} \lambda _{ij}\) denote the total intensity with which the volatility jumps from state \(\sigma _{i}\). Also, let us define
which is an Exp(\(\lambda _{i}\))-distributed random variable representing the duration up to the first volatility change if started from the volatility state \(\sigma _{i}\) at time t.
Furthermore, let us define an operator J acting on a bounded \(f : [0, T] \times (l,h) \rightarrow {\mathbb {R}}\) by
where \({\mathcal {T}}_{T-t}\) denotes the set of stopping times less than or equal to \(T-t\) with respect to the usual augmentation of the filtration generated by \(\{ {\hat{X}}^{t,x,\sigma _{i}}_{t+s}\}_{s \ge 0}\) and \(\{ \sigma (t+s)\}_{s \ge 0}\). To simplify notation, we also define an operator \(J_{i}\) by
Intuitively, \((J_{i} f)\) represents a Markovian value function corresponding to optimal stopping before \(t+ \eta ^{t}_{i}\), i.e. before the first volatility change after t, when, at time \(t+\eta ^{t}_{i} < T\), the payoff \(f\left( t+\eta ^{t}_{i}, {\hat{X}}^{{t,x, \sigma _{i}}}_{t+\eta ^{t}_{i}} \right) \) is received provided stopping has not occurred yet.
Proposition 3.1
Let \(f : [0, T] \times (l,h) \rightarrow {\mathbb {R}}\) be bounded. Then
 (i)
Jf is bounded;
 (ii)
f increasing in the second variable x implies that Jf is increasing in the second variable x;
 (iii)
f decreasing in the first variable t implies that Jf is decreasing in the first variable t;
 (iv)
f increasing and convex in the second variable x implies that Jf is increasing and convex in the second variable x;
 (v)
J preserves order, i.e. \(f_{1} \le f_{2}\) implies \(J f_{1} \le J f_{2}\);
 (vi)
\(J f \ge 1\).
Proof
All except claim (iv) are straightforward consequences of the representation (3.2). To prove (iv), we will approximate the optimal stopping problem (3.2) by Bermudan options.
Let i and n be fixed. We will approximate the value function \(J_{i}f\) by a value function \(w^{(f)}_{i,n}\) of a corresponding Bermudan problem with stopping allowed only at times \(\left\{ \frac{kT}{2^{n}} \,:\, k\in \{0,1, \ldots , 2^{n}\}\right\} \). We define \(w^{(f)}_{i,n}\) recursively as follows. First,
Then, starting with \(k =2^{n}\) and continuing recursively down to \(k=1\), we define
where the function g is given by
Next, we show by backward induction on k that \(w^{(f)}_{i,n}\) is increasing and convex in the second variable x. Suppose that for some \(k \in \{1, 2,\ldots , 2^{n}\}\), the function \(w^{(f)}_{i,n}\left( \frac{kT}{2^{n}}, \cdot \right) \) is increasing and convex (the assumption clearly holds for the base step \(k=2^{n}\)). Let \(t\in [ \frac{(k-1)T}{2^{n}}, \frac{kT}{2^{n}})\). Then, since f is also increasing and convex in the second variable x, we have that the function \(g(t,\cdot , \frac{kT}{2^{n}})\), and so \(w^{(f)}_{i,n}(t,\cdot )\), is convex by [14, Theorem 5.1]. Moreover, from (3.4) and [31, Theorem IX.3.7], it is clear that \(w^{(f)}_{i,n}(t,\cdot )\) is increasing. Consequently, by backward induction, we obtain that the Bermudan value function \(w^{(f)}_{i,n}\) is increasing and convex in the second variable.
Letting \(n \nearrow \infty \), the Bermudan value \(w^{(f)}_{i,n} \nearrow J_{i}f\) pointwise. As a result, \(J_{i}f\) is increasing and convex in the second argument, since convexity and monotonicity are preserved when taking pointwise limits. \(\square \)
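The convexity-preservation mechanism in the proof can be seen numerically. In the sketch below the true dynamics (3.4) are replaced by a hypothetical symmetric random walk on a grid, and the state-dependent discounting and regime-switch payoff term are dropped; the only point is that a Bermudan recursion of the form \(w(t_{k},\cdot ) = \max (\text {payoff}, {\mathbb {E}}[w(t_{k+1},\cdot )])\) preserves monotonicity and convexity in the spatial variable.

```python
import numpy as np

# Toy backward induction: at each step, the continuation value is the
# one-step expectation under a symmetric random walk, and the holder may
# stop for the immediate payoff. Averaging preserves convexity, and the
# pointwise maximum of convex functions is convex.
def bermudan_values(payoff, n_steps):
    w = payoff.copy()                        # terminal condition w(T, .) = payoff
    for _ in range(n_steps):
        cont = w.copy()
        cont[1:-1] = 0.5 * (w[:-2] + w[2:])  # E[w(t_{k+1}, X_{k+1}) | X_k = x]
        w = np.maximum(payoff, cont)         # stop or continue
    return w

x = np.linspace(-1.0, 1.0, 201)
g = np.maximum(1.0 - x, 0.0)                 # a decreasing, convex payoff
w0 = bermudan_values(g, 50)
```

The resulting value stays above the payoff and remains decreasing and convex after every induction step, mirroring the argument for \(w^{(f)}_{i,n}\); pointwise limits then carry these properties over to \(J_{i}f\).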
The sets
correspond to the continuation and stopping sets for the stopping problem (3.2), as the next proposition shows.
Proposition 3.2
(Optimal stopping time) The stopping time
is optimal for the problem (3.2).
Proof
A standard application of Theorem D.12 in [23]. \(\square \)
Proposition 3.3
If a bounded \(f :[0,T] \times (l,h) \rightarrow {\mathbb {R}}\) is decreasing in the first variable as well as increasing and convex in the second, then \(J_{i}f\) is continuous.
Proof
The argument is a trouble-free extension of the proof of the third part of Theorem 3.10 in [15]; still, we include it for completeness. Before we begin, in order to simplify notation, we will write \(u:= J_{i} f\).
Firstly, we let \(r\in (l, h)\) and will prove that there exists \(K>0\) such that, for every \(t\in [0,T]\), the map \(x\mapsto J_{i}f(t,x)\) is \(K\)-Lipschitz continuous on (l, r]. To obtain a contradiction, assume that there is no such K. Then, by convexity of u in the second variable, there is a sequence \(\{ t_{n}\}_{n\ge 0} \subset [0,T]\) such that the left-derivatives \(\partial ^{-}_{2} u(t_{n}, r) \nearrow \infty \). Hence, for \(r' \in (r, h)\), the sequence \(u(t_{n}, r') \rightarrow \infty \), which contradicts that \(u(t_{n}, r') \le u(0, r') < \infty \) for all \(n \in {\mathbb {N}}\).
Now, it remains to show that u is continuous in time. Assume for a contradiction that the map \(t \mapsto u(t, x_{0})\) is not continuous at \(t=t_{0}\) for some \(x_{0}\). Since u is decreasing in time, \(u(\cdot , x_{0})\) has a negative jump at \(t_{0}\). Next, we will investigate the cases \(u(t_{0}-, x_{0}) > u(t_{0}, x_{0})\) and \(u(t_{0}, x_{0}) > u(t_{0}+, x_{0})\) separately.
Suppose \(u(t_{0}-, x_{0}) > u(t_{0}, x_{0})\). By Lipschitz continuity in the second variable, there exists \(\delta >0\) such that, writing \(\mathcal {R} = (t_{0}-\delta , t_{0}) \times (x_{0} - \delta , x_{0}+\delta )\),
Thus \(\mathcal {R} \subseteq \mathcal {C}^{f}_{i}\). Let \(t \in (t_{0}-\delta , t_{0})\) and \(\tau _{\mathcal {R}} := \inf \{ s \ge 0\,:\, (t+s, {\hat{X}}^{t,x,\sigma _{i}}_{t+s}) \notin \mathcal {R} \}\). Then, by the martingality in the continuation region,
as \(t \rightarrow t_{0}\), contradicting (3.7).
The other case to consider is \(u(t_{0}, x_{0}) > u(t_{0}+, x_{0})\); we look into the situation \(u(t_{0}, x_{0})> u(t_{0}+, x_{0})>1\) first. The local Lipschitz continuity in the second variable and the decay in the first variable imply that there exist \(\epsilon >0\) and \(\delta >0\) such that, writing \(\mathcal {R} = (t_{0}, t_{0}+\epsilon ] \times [x_{0}-\delta , x_{0}+\delta ]\),
Hence, \({\mathcal {R}} \subseteq \mathcal {C}^{f}_{i}\) and writing \(\tau _{\mathcal {R}}:=\inf \{s\ge 0: (t_{0}+s, {\hat{X}}^{t_{0},x_0, \sigma _{i}}_{t_{0}+s})\notin {\mathcal {R}}\}\) we have
as \(\epsilon \searrow 0\), which contradicts (3.8).
Lastly, suppose that \(u(t_{0}, x_{0}) > u(t_{0}+, x_{0}) = 1\). By Lipschitz continuity in the second variable, there exists \(\delta >0\) such that
Consequently, \((t_{0}, T]\times (x_{0}-\delta , x_{0}) \subseteq \mathcal {D}^{f}_{i}\). Hence the process \({\hat{X}}^{t_{0}, x_{0}-\delta /2, \sigma _{i}}\) hits the stopping region immediately and so \((t_{0}, x_{0}-\delta /2) \in \mathcal {D}^{f}_{i}\), which contradicts (3.9). \(\square \)
Proposition 3.4
(Optimal stopping boundary) Let \(f: [0,T] \times (l, h) \rightarrow {\mathbb {R}}\) be bounded, decreasing in the first variable as well as increasing and convex in the second variable. Then the following hold.
 (i)
There exists a function \(b^{f}_{\sigma _{i}}: [0,T) \rightarrow [l,h]\) that is increasing and right-continuous with left limits, and satisfies
$$\begin{aligned} \mathcal {C}^{f}_{i} = \{ (t,x) \in [0,T) \times (l, h) \,:\, x > b^{f}_{\sigma _{i}}(t) \}. \end{aligned}$$(3.10)  (ii)
The pair \((J_{i}f, b^{f}_{\sigma _{i}})\) satisfies the freeboundary problem
$$\begin{aligned} \left\{ \begin{array}{ll} \partial _{t}u(t,x) + {\sigma _{i}} \phi (x, \sigma _{i}) \partial _{x} u(t,x) + \frac{1}{2} \phi (x, \sigma _{i})^{2} \partial _{xx} u(t,x)\\ \quad +\,(x-\lambda _{i})u(t,x)+\lambda _{i} f(t,x) = 0, &{}\quad \text { if } x > b^{f}_{\sigma _{i}}(t), \\ u(t,x) = 1, &{}\quad \text { if } x \le b^{f}_{\sigma _{i}}(t) \text { or } t=T. \end{array} \right. \end{aligned}$$(3.11)
Proof

(i)
By Proposition 3.1 (iv), there exists a unique function \(b^{f}_{\sigma _{i}}\) satisfying (3.10). Moreover, by Proposition 3.1 (iii), this boundary \(b^{f}_{\sigma _{i}}\) is increasing. Hence, using Proposition 3.3, we also obtain that \(b^{f}_{\sigma _{i}}\) is right-continuous with left limits.

(ii)
The proof follows a wellknown standard argument (e.g. see [23, Theorem 7.7 in Chapter 2]), thus we omit it.
\(\square \)
3.2 A Sequence of Approximating Problems
Let us define a sequence of stopping times \(\{\xi ^{t}_{n}\}_{n \ge 0}\) recursively by
Here \(\xi ^{t}_{n}\) represents the time until the \(n\)th volatility jump after time t. Furthermore, let us define a sequence of operators \(\{ J^{(n)} \}_{n\ge 0}\) by
where \(f : [0, T] \times (l,h) \rightarrow {\mathbb {R}}\) is bounded. In particular, note that \(J^{(0)} f = f\) and \(J^{(1)}f=Jf\). Similarly as for the operator J, we define \(J^{(n)}_{i}\) by
Proposition 3.5
Let \(n \ge 0\) and \(i \in \{1,\ldots , m\}\). Then
Proof
The proof is by induction. In order to present the argument of the proof while keeping intricate notation at bay, we will only prove that, for a bounded \(f:[0,T]\times (l,h) \rightarrow {\mathbb {R}}\) and \(x\in (l,h)\), the identity \((J^{(2)}_{i}f)(t,x)= (J_{i}(\sum _{j\ne i} \frac{\lambda _{ij}}{\lambda _{i}} J_{j}f))(t,x)\) holds. The induction step \(J^{(n+1)}_{i}= J_{i} \left( \sum _{j\ne i} \frac{\lambda _{ij}}{\lambda _{i}} J^{(n)}_{j} \right) \) follows a similar argument, though with more abstract notation. Note that without loss of generality, we can assume \(t=0\), which we do.
Firstly, we will show \((J_{i}^{(2)}f)(0,x) \le J_{i}\bigg ( \sum _{j \ne i}\frac{\lambda _{ij}}{\lambda _{i}} (J_{j}f)\bigg )(0,x)\) and then the opposite inequality. For \(j \in {\mathbb {N}}\), we will write \(\xi _{j}\) instead of \(\xi ^{0}_{j}\) and use the notation \(\eta _{j}:=\xi _{j}-\xi _{j-1}\). Let \(\tau \in \mathcal {T}_{T}\) and consider
where \(\{N_{t}\}_{t\ge 0}\) denotes the process counting the volatility jumps. The inner conditional expectation in (3.14) satisfies
where \({\tilde{\tau }} = \tau - \eta _{1}\) in the case \(\eta _{1} \le \tau \le T\). Therefore, substituting (3.15) into (3.14) and then taking a supremum over \({\tilde{\tau }}\), we get
Taking a supremum over \(\tau \) in (3.16), we obtain
It remains to establish the opposite inequality. Let \(\tau \in \mathcal {T}_{T}\) and define
where \(\tau _{\sigma (\eta _{1})} := \tau ^{f}_{\sigma (\eta _{1})} (\eta _{1} \wedge T, {\hat{X}}^{0,x,\sigma _{i}}_{\eta _{1} \wedge T})\). Clearly, \(\check{\tau } \in \mathcal {T}_{T}\). Then
where Proposition 3.2 was used to obtain the last equality. Hence, by taking supremum over stopping times \(\tau \in \mathcal {T}_{T}\), we get
Finally, (3.17) and (3.19) taken together imply
\(\square \)
Remark 3.6
In [24], the authors use the same approximation procedure as in this article for an optimal stopping problem with regime-switching volatility. Unfortunately, a mistake is made in equation (18) of [24], which invalidates the subsequent approximation procedure when the number of volatility states is greater than 2. The identity (18) therein should be replaced by (3.13).
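A quick way to see where the weights in (3.13) come from: \(\lambda _{ij}/\lambda _{i}\) are exactly the transition probabilities of the embedded jump chain of the volatility process, i.e. the distribution of the state entered at the first regime switch. A small sketch with a hypothetical 3-state generator (values are illustrative):

```python
import numpy as np

# For a generator Lam = (lambda_ij), the embedded jump chain moves from
# state i to state j != i with probability lambda_ij / lambda_i, where
# lambda_i = sum_{j != i} lambda_ij = -Lam[i, i].
Lam = np.array([[-3.0, 1.0, 2.0],
                [0.5, -1.5, 1.0],
                [2.0, 2.0, -4.0]])
lam = -np.diag(Lam)              # total jump intensities lambda_i
P = Lam / lam[:, None]           # divide row i by lambda_i
np.fill_diagonal(P, 0.0)         # the embedded chain never jumps to itself
```

With three or more states these weights genuinely mix the auxiliary value functions of all the other regimes, which is what the corrected recursion (3.13) accounts for.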
3.3 Convergence to the Value Function
Proposition 3.7
(Properties of the approximating sequence)
 (i)
The sequence of functions \(\{ J^{(n)} 1 \}_{n \ge 0}\) is increasing, bounded from below by 1 and from above by \(e^{hT}\).
 (ii)
Every \(J^{(n)} 1 \) is decreasing in the first variable t as well as increasing and convex in the second variable x.
 (iii)
The sequence of functions
$$\begin{aligned} J^{(n)} 1 \nearrow v \quad \text { pointwise as } n \nearrow \infty . \end{aligned}$$Moreover, the approximation error satisfies
$$\begin{aligned} \Vert v - J^{(n)} 1 \Vert _{\infty } \le e^{hT} \lambda T \frac{(\lambda T)^{n-1}}{(n-1)!} \quad \text { for all } n \ge 1, \end{aligned}$$(3.20)where \(\lambda := \max \{ \lambda _{i}\,:\, 1 \le i \le m \}\).
 (iv)
For every \(n \in {\mathbb {N}}\cup \left\{ 0\right\} \),
$$\begin{aligned} J_{m}^n 1 \le J^{(n)} 1 \le J_{1}^n 1. \end{aligned}$$(3.21)
Proof

(i)
The statement that \(\{ J^{(n)}_{i} 1 \}_{n \ge 0}\) is increasing, bounded from below by 1 and from above by \(e^{hT}\) is a direct consequence of the definition (3.12).

(ii)
The claim that every \(J^{(n)}_{i} 1 \) is decreasing in the first variable t as well as increasing and convex in the second variable x follows by a straightforward induction on n, using Proposition 3.1 (iii),(iv) and Proposition 3.5 at the induction step.

(iii)
First, let \(i \in \{ 1, \ldots , m\}\) and note that, for any \(n \in {\mathbb {N}}\),
$$\begin{aligned} J^{(n)}_{i}1 \le v_{i}. \end{aligned}$$Here the inequality holds by suboptimality, since \(J^{(n)}_{i}1\) corresponds to an expected payoff of a particular stopping time in the problem (2.4). Next, define
$$\begin{aligned} U^{(i)}_{n}(t,x):= & {} \sup _{\tau \in \mathcal {T}_{T-t}} {\tilde{{\mathbb {E}}}} \left[ e^{\int _0^\tau {\hat{X}}^{t,x, \sigma _{i}}_{t+s} \,\mathrm {d}s } \mathbb {1}_{\{ \tau < \xi ^{t}_{n} \}}\right] . \end{aligned}$$Then
$$\begin{aligned} U^{(i)}_{n}(t,x) \le (J^{(n)}_{i}1) (t,x) \le v_{i}(t,x) \le U^{(i)}_{n}(t,x) + e^{h(T-t)} {{\mathbb {P}}}(\xi ^{t}_{n} \le T-t). \end{aligned}$$(3.22)Since it is a standard fact that the \(n^{\text {th}}\) jump time, call it \(\zeta _{n}\), of a Poisson process with jump intensity \(\lambda := \max \{ \lambda _{i}\,:\, 1 \le i \le m \}\) follows the Erlang distribution, we have
$$\begin{aligned} {{\mathbb {P}}}(\xi ^{t}_{n} \le T-t)\le & {} {{\mathbb {P}}}(\zeta _{n} \le T-t)\\= & {} \frac{1}{(n-1)!} \int _{0}^{\lambda (T-t)} u^{n-1} e^{-u} \,\mathrm {d}u \\\le & {} \lambda T \frac{(\lambda T)^{n-1}}{(n-1)!}. \end{aligned}$$Therefore, by (3.22),
$$\begin{aligned} \Vert v - J^{(n)} 1 \Vert _{\infty } \le e^{hT} \lambda T \frac{(\lambda T)^{n-1}}{(n-1)!} \rightarrow 0 \text { as } n \rightarrow \infty . \end{aligned}$$ 
(iv)
The string of inequalities (3.21) will be proved by induction. First, the base step is obvious. Now, suppose that (3.21) holds for some \(n \ge 0\). Then, for any \(i \in \{1, \ldots , m\}\),
$$\begin{aligned} J^{n}_{m} 1 \le \sum _{j\ne i} \frac{\lambda _{ij}}{\lambda _{i}} J^{(n)}_{j} 1 \le J^n_1 1. \end{aligned}$$(3.23)Let us fix \(i \in \left\{ 1, \ldots , m \right\} \). By Proposition 3.1 (iv), every function in (3.23) is convex in the spatial variable x, thus [14, Theorem 6.1] yields
$$\begin{aligned} J^{n+1}_{m} 1 \le J_i \left( \sum _{j\ne i} \frac{\lambda _{ij}}{\lambda _{i}} J^{(n)}_{j} 1 \right) \le J^{n+1}_1 1. \end{aligned}$$As i was arbitrary, we also have
$$\begin{aligned} J_{\sigma _{m}}^{n+1} 1 \le J^{(n+1)} 1 \le J_{\sigma _{1}}^{n+1} 1. \end{aligned}$$(3.24)
\(\square \)
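As an illustration (not part of the proof), the Erlang tail estimate used in part (iii) can be checked numerically. The sketch below, with hypothetical values \(\lambda = 2\) and \(T = 1\), compares the Erlang distribution function with the bound \(\lambda T (\lambda T)^{n-1}/(n-1)!\).

```python
import math

def erlang_cdf(n, lam, s, steps=20000):
    """P(zeta_n <= s) for the n-th jump time of a rate-lam Poisson process,
    computed from the incomplete-gamma integral by the trapezoidal rule."""
    upper = lam * s
    h = upper / steps
    f = lambda u: u ** (n - 1) * math.exp(-u)
    total = 0.5 * (f(0.0) + f(upper))
    for k in range(1, steps):
        total += f(k * h)
    return h * total / math.factorial(n - 1)

def paper_bound(n, lam, s):
    """The bound lam*s * (lam*s)^(n-1) / (n-1)! appearing in (3.20)."""
    return lam * s * (lam * s) ** (n - 1) / math.factorial(n - 1)

lam, T = 2.0, 1.0  # hypothetical intensity and horizon
for n in range(1, 8):
    assert erlang_cdf(n, lam, T) <= paper_bound(n, lam, T) + 1e-9
```

The bound is crude for small n but decays factorially, which is what drives the convergence rate in (3.20).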
Remark 3.8
If we apply the operators \(J^{(n)}_{i}\) to the constant function \(e^{hT}\) instead of 1, then, following the same strategy as above, \(\{ J^{(n)}_{i} e^{hT} \}_{n \ge 0}\) is a decreasing sequence of functions satisfying \( J^{(n)}_{i} e^{hT} \searrow v_{i}\) pointwise as \(n \nearrow \infty \).
Let \(\mathcal {B}_{b}([0,T]\times (l,h); {\mathbb {R}})\) denote the set of bounded functions from \([0,T]\times (l,h)\) to \({\mathbb {R}}\) and define an operator \( {\tilde{J}} : \mathcal {B}_{b}([0,T]\times (l,h); {\mathbb {R}})^{m} \rightarrow \mathcal {B}_{b}([0,T]\times (l,h); {\mathbb {R}})^{m}\) by
Proposition 3.9

(i)
Let \(f \in \mathcal {B}_{b}([0,T]\times (l,h); {\mathbb {R}})^{m}\). Then
$$\begin{aligned} \lim _{n\rightarrow \infty } {\tilde{J}}^{n} f= & {} \left( \begin{array}{c} v_{1}\\ \vdots \\ v_{m} \end{array} \right) . \end{aligned}$$ 
(ii)
The vector \((v_{1}, \ldots , v_{m})^{tr}\) of value functions is a fixed point of the operator \({\tilde{J}}\), i.e.
$$\begin{aligned} {\tilde{J}} \left( \begin{array}{c} v_{1} \\ \vdots \\ v_{m} \end{array} \right)= & {} \left( \begin{array}{c} v_{1} \\ \vdots \\ v_{m} \end{array}\right) . \end{aligned}$$(3.25)
Proof

(i)
Observe that the argument in the proof of part (iii) of Proposition 3.7 also gives that \(J^{(n)}_{i} g \rightarrow v_{i}\) as \(n \rightarrow \infty \) for any bounded g. Hence to finish the proof it is enough to recall the relation (3.13) in Proposition 3.5.

(ii)
Let \(i \in \{1, \ldots , m\}\). By Proposition 3.5,
$$\begin{aligned} J^{(n+1)}_{i}1= J_{i} \left( \sum _{j\ne i} \frac{\lambda _{ij}}{\lambda _{i}} J^{(n)}_{j} 1\right) . \end{aligned}$$(3.26)By Proposition 3.7 (iii), for every \(j \in \{1, \ldots , m\}\), the sequence \(J^{(n)}_{j} 1 \nearrow v_{j}\) as \(n \nearrow \infty \), so, letting \(n \nearrow \infty \) in (3.26), the monotone convergence theorem tells us that
$$\begin{aligned} v_{i}= J_{i} \left( \sum _{j\ne i} \frac{\lambda _{ij}}{\lambda _{i}} v_{j} \right) . \end{aligned}$$(3.27)
\(\square \)
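The fixed-point property (3.25) mirrors classical value iteration for optimal stopping. The following toy sketch (a Bermudan stopping problem for a reflected random walk with a hypothetical payoff and discount factor, not the operator \({\tilde{J}}\) itself) illustrates how monotone iterates increase to a fixed point of the Bellman operator, in analogy with \(J^{(n)}1 \nearrow v\).

```python
import numpy as np

N, beta = 10, 0.95
g = np.maximum(5 - np.arange(N + 1), 0).astype(float)  # hypothetical payoff

def bellman(v):
    """One application of the Bellman operator: stop (take g) or continue."""
    cont = np.empty_like(v)
    for x in range(N + 1):
        up, down = min(x + 1, N), max(x - 1, 0)  # reflected simple random walk
        cont[x] = beta * 0.5 * (v[up] + v[down])
    return np.maximum(g, cont)

v = g.copy()
for _ in range(500):
    v_new = bellman(v)
    assert np.all(v_new >= v - 1e-12)  # iterates increase monotonically
    v = v_new
assert np.max(np.abs(bellman(v) - v)) < 1e-8  # (approximate) fixed point
```

The contraction factor here is the discount \(\beta\); in the paper the same role is played by the killing at the volatility jump times.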
4 The Value Function and the Stopping Strategy
In this section, we show that the value function v has attractive structural properties and identify an optimal strategy for the liquidation problem (2.7). The first passage time below a boundary, which is an increasing function of time and volatility, is proved to be optimal. Moreover, we provide a method to approximate the optimal stopping boundary by demonstrating that it is a limit of an increasing sequence of stopping boundaries coming from easier auxiliary problems of Sect. 3.
Theorem 4.1
(Properties of the value function)
 (i)
v is decreasing in the first variable t as well as increasing and convex in the second variable x.
 (ii)
\(v_{i}\) is continuous for every \(i \in \{1, \ldots , m\}\).
 (iii)$$\begin{aligned} \check{v}_{\sigma _m} \le v \le \check{v}_{\sigma _1}, \end{aligned}$$(4.1)
where \(\check{v}_{\sigma _i}:[0,T]\times (l, h) \rightarrow {\mathbb {R}}\) denotes the Markovian value function as in (2.7), but for a price process (2.1) with constant volatility \(\sigma _i\).
Proof

(i)
Since, by Proposition 3.7 (ii), every \(J^{(n)} 1\) is decreasing in the first variable t, increasing and convex in the second variable x, these properties are also preserved in the pointwise limit \(\lim _{n \rightarrow \infty } J^{(n)}1\), which is v by Proposition 3.7 (iii).

(ii)
Using part (i) above, the claim follows from Proposition 3.9 (ii), i.e. from the fact that \((v_{1}, \ldots , v_{m})^{tr}\) is a fixed point of a regularising operator \({\tilde{J}}\) in the sense of Proposition 3.3.

(iii)
Letting \(n \rightarrow \infty \) in (3.21), Proposition 3.7 (iii) gives us (4.1).
\(\square \)
For the optimal liquidation problem (2.4) with constant volatility \(\sigma \), i.e. in the case \(\sigma _{1}= \ldots =\sigma _{m} =\sigma \), it has been shown in [15] that an optimal liquidation strategy is characterised by an increasing continuous stopping boundary \(\check{b}_{\sigma } :[0,T) \rightarrow [l, 0]\) with \(\check{b}_{\sigma }(T)=0\) such that the stopping time \(\check{\tau }_{\sigma }= \inf \{t \ge 0 \,:\, {\hat{X}}_{t} \le \check{b}_{\sigma }(t) \}\wedge T\) is optimal. It turns out that the optimal liquidation strategy within our regime-switching volatility model shares some similarities with the constant volatility case, as the next theorem shows.
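The first-passage structure of such a strategy can be sketched in a few lines of code; the boundary b and the path dynamics below are hypothetical stand-ins, not the boundary and posterior-mean dynamics determined by the model.

```python
import math, random

def liquidation_time(xhat, b, T, dt):
    """First time a (pre-simulated) path xhat falls to or below the
    boundary b(t), capped at the horizon T."""
    t = 0.0
    for x in xhat:
        if x <= b(t):
            return min(t, T)
        t += dt
    return T

random.seed(1)
T, dt = 1.0, 1e-3
b = lambda t: -0.5 * (1.0 - t / T)  # increasing boundary with b(T) = 0 (hypothetical)

# toy mean-reverting diffusion standing in for the posterior mean
x, path, t = 0.2, [], 0.0
while t < T:
    path.append(x)
    x += -0.5 * x * dt + 0.3 * math.sqrt(dt) * random.gauss(0, 1)
    t += dt

tau = liquidation_time(path, b, T, dt)
assert 0.0 <= tau <= T
```

The rule is: sell as soon as the path touches the boundary, and sell at the deadline T in any case.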
Theorem 4.2
(Optimal liquidation strategy)
 (i)
For every \(i \in \{1,\ldots , m\}\), there exists \(b_{\sigma _{i}}: [0,T) \rightarrow [l, 0]\) that is increasing, right-continuous with left limits, satisfies the equality \(b_{\sigma _{i}}(T) = 0\) and the identity
$$\begin{aligned} \mathcal {C}^{u_{i}}_{i} = \{ (t,x) \in [0,T) \times (l, h) \,:\, x > b_{\sigma _{i}}(t) \}, \end{aligned}$$(4.2)where \(u_{i} := { \sum _{j\ne i} \frac{\lambda _{ij}}{\lambda _{i}} v_{j}}\). Moreover,
$$\begin{aligned} \check{b}_{\sigma _{1}} \le b_{\sigma _{i}} \le \check{b}_{\sigma _{m}}. \end{aligned}$$for any \(i \in \left\{ 1, \ldots , m \right\} \).
 (ii)
The stopping strategy
$$\begin{aligned} \tau ^{*} := \inf \{ s \in [0,Tt) \,:\, {\hat{X}}^{t,x, \sigma }_{t+s} \le b_{\sigma (t+s)}(t+s) \} \wedge (Tt)\,. \end{aligned}$$is optimal for the optimal selling problem (2.7).
 (iii)
For \(i \in \{1,\ldots , m\}\), the boundaries
$$\begin{aligned} b_{\sigma _{i}}^{g_i^{(n)}} \searrow b_{\sigma _{i}} \quad \text {pointwise as } n \nearrow \infty , \end{aligned}$$where \(g_i^{(n)} := \sum _{j\ne i} \frac{\lambda _{ij}}{\lambda _{i}}J^{(n)}_{j}1\).
 (iv)
The pairs \((v_{1}, b_{\sigma _{1}}), (v_{2},b_{\sigma _{2}}), \ldots ,(v_{m},b_{\sigma _{m}})\) satisfy a coupled system of m free-boundary problems with each being
$$\begin{aligned} \left\{ \begin{array}{ll} \partial _{t}v_{i}(t,x) +\,{\sigma _{i}} \phi (x, \sigma _{i}) \partial _{x} v_{i}(t,x) + \frac{1}{2} \phi (x, \sigma _{i})^{2} \partial _{xx} v_{i}(t,x) \\ \quad +(x - \lambda _{i})v_{i}(t,x)+\sum _{j \ne i} \lambda _{ij}v_{j}(t,x) = 0, &{} \text { if } x > b_{i}(t),\\ v_{i}(t,x) = 1, &{} \text { if } x \le b_{i}(t) \text { or } t = T, \end{array} \right. \end{aligned}$$(4.3)where \(i \in \{1, \ldots , m\}\).
Proof

(i)
The existence of \(b_{\sigma _{i}} : [0, T) \rightarrow [l, h]\) that is increasing, right-continuous with left limits, and satisfies (4.2) follows from the fixed-point property (3.25) and Theorem 4.1 (i),(ii). Since the range of \(\check{b}_{\sigma _{1}}, \check{b}_{\sigma _{m}}\) is [l, 0] and \(\check{b}_{\sigma _{1}}(T)= \check{b}_{\sigma _{m}}(T)=0\), using Theorem 4.1 (iii), we also conclude that \(\check{b}_{\sigma _{1}} \le b_{\sigma _{i}} \le \check{b}_{\sigma _{m}}\) and that \(b_{\sigma _{i}}(T) = 0\) for every i.

(ii)
Let us define \(\mathcal {D}:= \{ (t,x, \sigma ) \in [0,T]\times (l,h)\times \{\sigma _1, \ldots ,\sigma _m\} \,:\, v(t,x, \sigma ) = 1 \}\). Then \(\tau _{\mathcal {D}} := \inf \{ s \ge 0 \,:\, (t+s, {\hat{X}}^{t,x, \sigma (t)}_{t+s}, \sigma (t+s)) \in \mathcal {D}\}\) is optimal for the problem (2.7) by [29, Corollary 2.9]. Lastly, from the fixed-point property (3.25) and Proposition 3.2, we conclude that \(\tau ^*=\tau _{\mathcal {D}}\), which finishes the proof.

(iii)
Since \(J^{(n)}_{i} 1 \nearrow v_{i}\) as \(n \nearrow \infty \) and \(J^{(n)}_{i} 1 \ge 1\) for all n, we have that \(\lim _{n\nearrow \infty } b_{\sigma _{i}}^{g_i^{(n)}} \ge b_{\sigma _{i}}\). Also, if \(x < \lim _{n\nearrow \infty } b_{\sigma _{i}}^{g_i^{(n)}} (t)\), then \(J^{(n)}_{i} 1 (t,x) = 1\) for all \( n \in {\mathbb {N}}\) and so \(v_{i}(t,x)= \lim _{n \nearrow \infty } J^{(n)}_{i} 1 (t,x)=1\). Hence, \(\lim _{n\nearrow \infty } b_{\sigma _{i}}^{g_i^{(n)}} \le b_{\sigma _{i}}\). As a result, \( \lim _{n\nearrow \infty } b_{\sigma _{i}}^{g_i^{(n)}} = b_{\sigma _{i}}\).

(iv)
The free-boundary problem is a consequence of Proposition 3.4 (ii) and the fixed-point property (3.25).
\(\square \)
Remark 4.3
Establishing uniqueness of a classical solution to a time-nonhomogeneous free-boundary problem is typically a technical task (see [27] for an example). As this question is not central to the mission of the paper, the uniqueness of solutions to the free-boundary problems (4.3) and (3.11) has not been pursued.
Remark 4.4
(A possible alternative approach) It is worth pointing out that a potential alternative approach for the study of the value function and the optimal strategy is to directly analyse the variational inequality formulation (e.g., see [30, Sect. 5.2]) arising from the optimal stopping problem (2.7). The coupled system of variational inequalities would need to be studied using weak solution techniques from the PDE theory (e.g., see [6, 30]) to obtain desired regularity and structural properties of the value function and the stopping region. Though the author is unaware of any work studying exactly this type of free-boundary problem directly in detail, there are available theoretical results [7] that include existence, uniqueness of viscosity solutions, and a comparison principle for the pricing of American options in regime-switching models. Also, under some conditions, convergence of stable, monotone, and consistent approximation schemes to the value function is shown. Suitable numerical PDE methods and their pros and cons for such a coupled system are discussed in [22]. With this alternative route in mind (provided all the needed technical results can be established), our approach has clear benefits: avoiding many analytical complications that arise in the study of the full system (compare [7]) and yielding a very intuitive monotone approximation scheme for the value function and the stopping boundary.
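To illustrate the kind of numerical scheme alluded to in this remark, the sketch below solves a two-regime coupled obstacle problem of the same shape as (4.3) with a crude explicit finite-difference scheme. The diffusion and drift coefficients, intensities, and boundary conditions are hypothetical stand-ins (constants in place of \(\phi\)), so the output is illustrative only, not the value function of the paper's model.

```python
import numpy as np

sig = [0.2, 0.4]                            # hypothetical regime coefficients
lam = np.array([[0.0, 1.0], [1.0, 0.0]])    # switching intensities lam_ij
lam_i = lam.sum(axis=1)
T, l, h, nx = 1.0, -1.0, 1.0, 81
x = np.linspace(l, h, nx)
dx = x[1] - x[0]
dt = 0.2 * dx ** 2 / max(sig) ** 2          # explicit-scheme stability constraint
nt = int(T / dt) + 1
dt = T / nt

v = np.ones((2, nx))                        # terminal condition v_i(T, .) = 1
for _ in range(nt):                         # march backward in time
    new = v.copy()
    for i in range(2):
        j = 1 - i
        vx = (v[i, 2:] - v[i, :-2]) / (2 * dx)
        vxx = (v[i, 2:] - 2 * v[i, 1:-1] + v[i, :-2]) / dx ** 2
        gen = (sig[i] * vx + 0.5 * sig[i] ** 2 * vxx
               + (x[1:-1] - lam_i[i]) * v[i, 1:-1]
               + lam[i, j] * v[j, 1:-1])
        new[i, 1:-1] = np.maximum(v[i, 1:-1] + dt * gen, 1.0)  # obstacle v >= 1
        new[i, 0], new[i, -1] = 1.0, new[i, -2]  # crude boundary conditions
    v = new

assert np.all(v >= 1.0)                     # value dominates the payoff
assert np.all(np.diff(v, axis=1) >= -1e-6)  # increasing in x, cf. Theorem 4.1
```

A monotone explicit scheme like this preserves the obstacle constraint and the spatial monotonicity, at the price of a restrictive time-step.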
For further study of the problem in this section, we will make a structural assumption about the Markov chain modelling the volatility.
Assumption 4.5
The Markov chain \(\sigma \) is skip-free, i.e. for all \(i \in \{1, \ldots , m \}\),
As many popular financial stochastic volatility models have continuous trajectories, and a skip-free Markov chain is a natural discrete state-space approximation of a continuous process, Assumption 4.5 does not appear to be a severe restriction.
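In generator terms, a skip-free (birth–death) chain is one whose generator matrix is tridiagonal: \(\lambda_{ij} = 0\) whenever \(|i-j| \ge 2\). A minimal sketch with hypothetical rates:

```python
import numpy as np

def skip_free_generator(up, down):
    """Generator matrix of a birth-death (skip-free) chain with upward
    rates up[i] (state i -> i+1) and downward rates down[i] (i+1 -> i)."""
    m = len(up) + 1
    Q = np.zeros((m, m))
    for i in range(m - 1):
        Q[i, i + 1] = up[i]
        Q[i + 1, i] = down[i]
    np.fill_diagonal(Q, -Q.sum(axis=1))  # rows of a generator sum to zero
    return Q

Q = skip_free_generator(up=[1.0, 2.0], down=[0.5, 1.5])  # 3 states, hypothetical rates
assert np.allclose(Q.sum(axis=1), 0.0)                   # conservative generator
# only nearest-neighbour transitions: lam_ij = 0 when |i - j| >= 2
assert np.allclose(np.triu(Q, 2), 0.0) and np.allclose(np.tril(Q, -2), 0.0)
```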
Lemma 4.6
Let \(\delta >0\), \(g:(l,h)\times [0, \infty ) \rightarrow [0,\infty ) \) be increasing and convex in the first variable as well as decreasing in the second. Then \(u : (l, h)\times \{ \sigma _{1}, \ldots , \sigma _{m}\} \rightarrow {\mathbb {R}}\) defined by
is increasing and convex in the first variable as well as decreasing in the second.
Proof
We will prove the claim using a coupling argument. Let \((\Omega ', {\mathcal {F}}', {\tilde{{{\mathbb {P}}}}}')\) be a probability triplet supporting a Brownian motion B, and two volatility processes \(\sigma ^{1}\), \(\sigma ^{2}\) with the state space and transition intensities as in (2.1). In addition, we assume that B is independent of \((\sigma ^{1}, \sigma ^{2})\), that the starting values satisfy \(\sigma ^{1}(0) = \sigma _{i} \le \sigma _{j} = \sigma ^{2}(0)\), and that \(\sigma ^{1}(t)\le \sigma ^{2}(t)\) for all \(t \ge 0\). Also, let \({\hat{X}}^{1}\) and \({\hat{X}}^{2}\) denote the solutions to (2.6) when \(\sigma \) is replaced by \(\sigma ^{1}\) and \(\sigma ^{2}\), respectively.
Let us fix an arbitrary \(\omega _{0} \in \Omega '\). Since \({\hat{W}}\) is independent of \(\sigma ^{1}\),
where \({\tilde{X}}^{1}\) denotes the process \({\hat{X}}^{1}\) with the volatility process \(\sigma ^{1}\) replaced by a deterministic function \(\sigma ^{1}(\cdot , \omega _{0})\). Furthermore, the right-hand side (and so the left-hand side) in (4.5) as a function of x is increasing by [31, Theorem IX.3.7] as well as convex by [14, Theorem 5.1]. Hence
is increasing and convex. Next, we observe that
In the above, having in mind that the conditional expectations can be rewritten as ordinary expectations similarly as in (4.5), the first inequality follows from [14, Theorem 6.1] and the second from the decay of g in the second variable. Integrating both sides of (4.6) over all possible \(\omega _{0} \in \Omega '\) with respect to \(\mathrm {d}{{\mathbb {P}}}'\), we get that
Thus we can conclude that u is increasing and convex in the first variable as well as decreasing in the second. \(\square \)
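The coupling used above can be simulated: two skip-free chains started from ordered states, run independently until they first meet and glued together afterwards, never cross, because single \(\pm 1\) jumps cannot leapfrog the other chain. The state space, rates (taken symmetric for simplicity), and horizon below are hypothetical.

```python
import random

def simulate_pair(m, rate, i0, j0, T, rng):
    """Two skip-free chains on {0, ..., m-1} started from i0 <= j0, run
    independently until they meet, then glued together (a monotone coupling)."""
    t, i, j = 0.0, i0, j0
    history = [(t, i, j)]
    while t < T:
        t += rng.expovariate(2 * rate)        # next jump event of either chain
        lower_jumps = rng.random() < 0.5      # which chain attempts the jump
        step = 1 if rng.random() < 0.5 else -1
        if i == j:
            i = j = min(max(i + step, 0), m - 1)  # glued after meeting
        elif lower_jumps:
            i = min(max(i + step, 0), m - 1)
        else:
            j = min(max(j + step, 0), m - 1)
        assert i <= j                          # skip-free moves cannot cross
        history.append((t, i, j))
    return history

rng = random.Random(7)
for _ in range(200):
    hist = simulate_pair(m=5, rate=3.0, i0=1, j0=3, T=2.0, rng=rng)
    assert all(i <= j for _, i, j in hist)
```

This pathwise ordering is exactly the property the proof of Lemma 4.6 relies on.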
Theorem 4.7
(Ordering in volatility)
 (i)
v is decreasing in the volatility variable, i.e.
$$\begin{aligned} v_{\sigma _{1}} \ge v_{\sigma _{2}} \ge \cdots \ge v_{\sigma _{m}} . \end{aligned}$$  (ii)
The boundaries are ordered in volatility as
$$\begin{aligned} b_{\sigma _{1}} \le b_{\sigma _{2}} \le \cdots \le b_{\sigma _{m}}. \end{aligned}$$
Proof

(i)
We will prove the claim by approximating the value function v by a sequence of value functions \(\{v_{n}\}_{n\ge 0}\) of corresponding Bermudan optimal stopping problems. Let \(v_{n}\) denote the value function as in (2.7), but when stopping is allowed only at times \(\left\{ \frac{kT}{2^{n}} \,:\, k\in \{0,1, \ldots , 2^{n}\}\right\} \). Let us fix \(n \in {\mathbb {N}}\). We will show that, for any given \(k\in \{0, \ldots ,2^n\}\) and any \(t \in [\frac{k}{2^n}T, T]\), the value function \(v_{n}(t,x, \sigma )\) is increasing and convex in x as well as decreasing in \(\sigma \) (note that here \(\sigma \) denotes the initial value of the process \(t\mapsto \sigma (t)\)). The proof is by backwards induction from \(k=2^n\) down to \(k=0\). Since \(v_{n}(T, \cdot , \cdot )=1\), the base step \(k=2^n\) holds trivially. Now, suppose that, for some given \(k \in \{0, \ldots , 2^n\}\), the value \(v_{n}(t,x, \sigma )\) is increasing and convex in x as well as decreasing in \(\sigma \) for any \(t\in [\frac{k}{2^n}T, T]\). Then, Lemma 4.6 tells us that for any fixed \(t \in [\frac{(k-1)T}{2^n}, \frac{kT}{2^n})\),
$$\begin{aligned} f(t, x, \sigma ) := {\tilde{{\mathbb {E}}}} \left[ e^{\int _{t}^{\frac{kT}{2^n}} {\hat{X}}^{t,x, \sigma }_{u} \,\mathrm {d}u} v_{n}\left( \frac{kT}{2^n}, {\hat{X}}^{t, x, \sigma }_{\frac{kT}{2^n}}, \sigma \left( \frac{kT}{2^n} \right) \right) \right] , \end{aligned}$$is increasing and convex in x as well as decreasing in \(\sigma \). Consequently, since
$$\begin{aligned} v_{n}(t,x, \sigma )= & {} \left\{ \begin{array}{ll} f(t,x, \sigma ), &{}\quad t \in (\frac{(k-1)T}{2^n}, \frac{kT}{2^n}), \\ f(t,x, \sigma ) \vee 1, &{}\quad t = \frac{(k-1)T}{2^n}, \end{array} \right. \end{aligned}$$(4.7)the value \(v_{n}(t,x, \sigma )\) is increasing and convex in x as well as decreasing in \(\sigma \) for any fixed \(t \in [ \frac{k-1}{2^n}T, T]\). Hence, by backwards induction, \(v_{n}\) is increasing and convex in the second argument x as well as decreasing in the third argument \(\sigma \). Finally, since \(v_{n} \rightarrow v\) pointwise as \(n \rightarrow \infty \), we can conclude that the value function v is decreasing in \(\sigma \).

(ii)
From the proof of Theorem 4.2 (ii), the claim is a direct consequence of part (i) above.
\(\square \)
Remark 4.8

1.
The value function is decreasing in the initial volatility (Theorem 4.7 (i)) also when the volatility is any continuous time-homogeneous positive Markov process independent of the driving Brownian motion W. The assertion is justified by an inspection of the proof of Lemma 4.6, in which what mattered was that the volatility trajectories do not cross, not the Markov chain structure.

2.
Though there are no grounds to believe that any of the boundaries \(b_{\sigma _{1}}, \ldots ,b_{\sigma _{m}}\) is discontinuous, proving their continuity, except for the lowest one, is beyond the power of customary techniques. Continuity of the lowest boundary can be proved similarly as in the proof of part 4 of [15, Theorem 3.10], exploiting the ordering of the boundaries. The stumbling block for proving continuity of the upper boundaries is that, at a downward volatility jump time, the value function has a positive jump whose magnitude is difficult to quantify.
5 Generalisation to an Arbitrary Prior
In this section, we generalise most results of the earlier parts to the general prior case. In what follows, the prior \(\mu \) of the drift is no longer a two-point but an arbitrary probability distribution.
5.1 Two-Dimensional Characterisation of the Posterior Distribution
Let us first think a bit more abstractly to develop intuition for the arbitrary prior case. According to the Kushner–Stratonovich stochastic partial differential equation (SPDE) for the posterior distribution (see [8, Sect. 3.2]), if we take the innovation process driving the SPDE and the volatility as the available information sources, then the posterior distribution is a measure-valued Markov process. Unfortunately, there are no applicable general methods for solving optimal stopping problems for measure-valued stochastic processes. If only we were able to characterise the posterior distribution process by an \({\mathbb {R}}^n\)-valued Markovian process (with respect to the filtration generated by the innovation and the volatility processes), then we should manage to reduce our optimal stopping problem with a measure-valued underlying to an optimal stopping problem with an \({\mathbb {R}}^n\)-valued Markovian underlying. Mercifully, this wishful thinking turns out to be possible in reality as we shall soon see.
Unlike in the problem with constant volatility studied in [15], when the volatility is varying, the pair consisting of the elapsed time t and the posterior mean \({\hat{X}}_{t}\) is not sufficient (with the exception of the two-point prior case studied before) to characterise the posterior distribution \(\mu _{t}\) of X given \({\mathcal {F}}^{S, \sigma }_{t}\). Hence we need some additional information to describe the posterior distribution. Quite surprisingly, all the needed additional information can be captured in a single additional observable statistic, which we will name the ‘effective learning time’. We start the development by first introducing some useful notation.
Define \(Y^{(i)}_t:=X t + \sigma _{i} W_{t}\) and let \(\mu ^{(i)}_{t,y}\) denote the posterior distribution of X at time t given \(Y^{(i)}_{t}=y\). It needs to be mentioned that, for any given prior \(\mu \), the distributions of X given \({\mathcal {F}}^{Y^{(i)}}_{t}\) and X given \(Y^{(i)}_{t}\) are equal (see Proposition 3.1 in [15]), which justifies our conditioning only on the last value \(Y^{(i)}_{t}\). Also, recall that \(l= \inf \mathop {\mathrm {supp}}\nolimits (\mu )\), \(h = \sup \mathop {\mathrm {supp}}\nolimits (\mu )\).
The next lemma provides the key insight that allows the posterior distribution to be characterised by only two parameters.
Lemma 5.1
Let \(\sigma _{2} \ge \sigma _{1} > 0\). Then
i.e. the sets of possible conditional distributions of X in both cases are the same.
Proof
Let \(t>0\), \(y \in {\mathbb {R}}\). By the standard filtering theory (a generalised Bayes’ rule),
Then taking \(r = \left( \frac{\sigma _{1}}{\sigma _{2}} \right) ^{2}t\) and \(y_{1} = \left( \frac{\sigma _{1}}{\sigma _{2}} \right) ^{2} y\), we have that
\(\square \)
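For a discrete prior, the change of variables in the proof can be verified numerically by computing both posteriors with Bayes' rule and the Gaussian likelihood of \(Y_t = Xt + \sigma W_t\); the prior and the parameter values below are hypothetical.

```python
import math

def posterior(prior, t, y, sigma):
    """P(X = u | Y_t = y) for Y_t = X*t + sigma*W_t and a discrete prior,
    via Bayes' rule: the likelihood is proportional to
    exp(u*y/sigma^2 - u^2*t/(2*sigma^2))."""
    w = {u: p * math.exp(u * y / sigma**2 - u**2 * t / (2 * sigma**2))
         for u, p in prior.items()}
    z = sum(w.values())
    return {u: wu / z for u, wu in w.items()}

prior = {-0.3: 0.2, 0.0: 0.5, 0.4: 0.3}   # hypothetical three-point prior
s1, s2, t, y = 0.2, 0.5, 0.8, 0.15
r, y1 = (s1 / s2) ** 2 * t, (s1 / s2) ** 2 * y   # scaling from Lemma 5.1

p_a = posterior(prior, t, y, s2)    # posterior under volatility s2 at (t, y)
p_b = posterior(prior, r, y1, s1)   # posterior under volatility s1 at (r, y1)
assert all(abs(p_a[u] - p_b[u]) < 1e-12 for u in prior)
```

The two posteriors agree exactly because the likelihood exponents match term by term under the scaling \(r = (\sigma_1/\sigma_2)^2 t\), \(y_1 = (\sigma_1/\sigma_2)^2 y\).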
From Lemma 5.1 and [15, Lemma 3.3] we obtain the following important corollary, telling us that, having fixed a prior, any possible posterior distribution can be fully characterised by only two parameters.
Corollary 5.2
Let \(t >0\). Then, for any posterior distribution \(\mu _{t}(\cdot ) = {{\mathbb {P}}}(X \in \cdot \mid {\mathcal {F}}^{S, \sigma }_{t})(\omega )\), there exists \((r, x) \in (0,T]\times (l,h)\) such that \(\mu _{t}= \mu ^{(1)}_{r,y_{1}(r,x)}\), where \(y_{1}(r,x)\) is defined as the unique value satisfying \({\mathbb {E}}[X \mid Y^{(1)}_{r}=y_{1}(r,x) ] =x\). In particular, we can take \(r = \int _0^t \left( \frac{\sigma _1}{\sigma (u)(\omega )}\right) ^2 \,\mathrm {d}u\) and \(y_1(r,x) = \int _0^t \left( \frac{\sigma _1}{\sigma (u)(\omega )}\right) ^2 \,\mathrm {d}Y_u(\omega )\), where \(Y_u = \log (S_u) + \frac{1}{2}\int _0^u \sigma (b)^2 \,\mathrm {d}b\).
When the volatility varies, so does the speed of learning about the drift. The corollary tells us that we can interpret r as the effective learning time measured under the constant volatility \(\sigma _{1}\). The intuition for the name is that even though the volatility is varying over time, the same posterior distribution \(\mu _t\) can also be obtained in a constant volatility model with the constant volatility \(\sigma _1\), just at a different time r and at a different value of the price S.
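For a piecewise-constant volatility trajectory, the effective learning time of Corollary 5.2 reduces to a weighted sum of segment lengths; a minimal sketch with hypothetical segments:

```python
# piecewise-constant volatility path: (duration, level) pairs (hypothetical)
segments = [(0.3, 0.2), (0.5, 0.4), (0.2, 0.2)]
sigma1 = 0.2  # the reference (lowest) volatility state

# r_t = int_0^t (sigma1 / sigma(u))^2 du, the effective learning time
r = sum(duration * (sigma1 / level) ** 2 for duration, level in segments)
t = sum(duration for duration, _ in segments)

assert r <= t + 1e-12  # learning is slowed down whenever sigma(u) > sigma1
```

Here \(r = 0.3 + 0.5 \cdot (0.2/0.4)^2 + 0.2 = 0.625\) while the elapsed time is \(t = 1\): the high-volatility stretch contributes only a quarter of its length to the learning.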
Remark 5.3
It is worth remarking that Corollary 5.2 also holds for any reasonable positive volatility process. Indeed, using the Kallianpur–Striebel formula with time-dependent volatility (see Theorem 2.9 on page 39 of [8]), the proof of Lemma 5.1 equally applies for an arbitrary positive time-dependent volatility and immediately yields the result of the corollary.
Next, we make a convenient technical assumption about the prior distribution \(\mu \).
Assumption 5.4
The prior distribution \(\mu \) is such that
 1.
\( \int _{\mathbb {R}}e^{a u^2}\mu (\mathrm {d}u)<\infty \) for some \(a>0\),
 2.
\(\psi (\cdot ,\cdot ) :[0,T]\times (l,h) \rightarrow {\mathbb {R}}\) defined by
$$\begin{aligned} \psi (t,x) := \frac{1}{\sigma _{1}} \left( {\mathbb {E}}[X^{2}\mid Y^{(1)}_{t}=y_{1}(t,x)] - x^{2} \right) = \frac{1}{\sigma _{1}} \mathop {\mathrm {Var}}\nolimits \left( X\mid Y^{(1)}_{t}= y_{1}(t,x) \right) \end{aligned}$$is a bounded function that is Lipschitz continuous in the second variable.
In particular, all compactly supported distributions as well as the normal distribution are known to satisfy Assumption 5.4 (see [15]), so it is an inconsequential restriction for practical applications.
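For a normal prior, \(\psi\) is available in closed form, since the posterior of X given \(Y^{(1)}_t = y\) is Gaussian with a variance that does not depend on y (and hence not on x), which makes Assumption 5.4 easy to check; the parameter values below are hypothetical.

```python
def psi_gaussian(t, sigma1, gamma0):
    """psi(t, x) = Var(X | Y_t) / sigma1 for a N(m0, gamma0^2) prior.
    Posterior precision = prior precision + t / sigma1^2, so the posterior
    variance is deterministic and independent of x."""
    var = 1.0 / (1.0 / gamma0**2 + t / sigma1**2)
    return var / sigma1

sigma1, gamma0 = 0.2, 0.5  # hypothetical parameters
# bounded by its value at t = 0 and decreasing in t, as learning proceeds
assert abs(psi_gaussian(0.0, sigma1, gamma0) - gamma0**2 / sigma1) < 1e-12
assert psi_gaussian(1.0, sigma1, gamma0) < psi_gaussian(0.5, sigma1, gamma0)
```

Boundedness and (trivial) Lipschitz continuity in x are immediate here; for general priors they are the substance of Assumption 5.4.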
5.2 Markovian Embedding
Similarly as in the two-point prior case, we will study the optimal stopping problem (2.5) by embedding it into a Markovian framework. With Corollary 5.2 telling us that the effective learning time r and the posterior mean x fully characterise the posterior distribution, we can now embed the optimal stopping problem (2.5) into the standard Markovian framework by defining the Markovian value function
Here the process \({\hat{X}}= {\hat{X}}^{t,x, r, \sigma _{i}}\) evolves according to
The given dynamics of \({\hat{X}}\) is a consequence of Corollary 5.2 and the evolution equation of \({\hat{X}}\) in the constant volatility case (see the equation (3.9) in [15]). Also, in (5.3), the process \(B_{t}= \int _{0}^{t} \sigma (u) \,\mathrm {d}u + {\hat{W}}_{t}\) is a \({\tilde{{{\mathbb {P}}}}}\)-Brownian motion. Lastly, in (5.2), \({\mathcal {T}}_{T-t}\) denotes the set of stopping times less than or equal to \(T-t\) with respect to the usual augmentation of the filtration generated by \(\{ {\hat{X}}^{t, x, r, \sigma _{i}}_{t+s}\}_{s \ge 0}\) and \(\{ \sigma (t+s)\}_{s \ge 0}\).
Remark 5.5
Let us note that, in light of the observations of Sect. 5.1, if the regime-switching volatility were replaced by a different stochastic volatility process, the same Markovian embedding (5.2) could still be useful for the study of the altered problem.
5.3 Outline of the Approximation Procedure and Main Results
Under an arbitrary prior, the approximation procedure of Sect. 3 can also be applied; however, the operators J and \(J^{(n)}\) need to be redefined in a suitable way. We redefine the operator J to act on a function \(f:[0, T] \times (l,h) \times [0,T] \rightarrow {\mathbb {R}}\) as
and then the operator \(J_{i}\) as \(J_{i} f := (Jf)(\cdot , \cdot , \sigma _{i})\). Intuitively, \((J_{i} f)\) represents a Markovian value function corresponding to optimal stopping before \(t+ \eta ^{t}_{i}\), i.e. before the first volatility change after t, when, at time \(t+\eta ^{t}_{i} < T\), the payoff \(f\left( t+\eta ^{t}_{i}, {\hat{X}}^{{t, x, r, \sigma _{i}}}_{t+\eta ^{t}_{i}}, r^{t,r}_{t+\eta ^{t}_{i}} \right) \) is received, provided stopping has not occurred yet. The underlying process in the optimal stopping problem \(J_{i} f\) is the diffusion \((t, {\hat{X}}_{t}, r_{t})\).
The majority of the results in Sects. 3 and 4 generalise nicely to an arbitrary prior case. Proposition 3.1 extends word by word; the proofs are analogous, just the second property of \(\psi \) from [15, Proposition 3.6] needs to be used for Proposition 3.1 (iv). In addition, we have that f decreasing in r implies that \(J_{i}f\) is decreasing in r, which is proved by a Bermudan approximation argument as in Proposition 3.1 (iv) using the time decay of \(\psi \) from [15, Proposition 3.6]. As a result, for \(f :[0,T] \times (l,h) \times [0,T] \rightarrow {\mathbb {R}}\) that is decreasing in the first and third variables as well as increasing (though not too fast as \(x \nearrow \infty \)) and convex in the second, there exists a function (a stopping boundary) \(b^{f}_{\sigma _{i}} : [0,T)\times [0, T) \rightarrow [l,0]\) that is increasing in both variables and such that the continuation region \( \mathcal {C}^{f}_{i} := \{ (t, x, r) \in [0,T) \times (l,h) \times \big [0, T \big ) \,:\, (J_{i}f) (t,x, r) > 1 \}\) (optimality shown as in Proposition 3.2) satisfies
In addition, each pair \((J_{i}f,b_{\sigma _{i}}^{f})\) solves the freeboundary problem
With the operator \(J^{(n)}\) redefined as
the crucial Proposition 3.5 holds word by word. Furthermore, the sequence of functions \(\{ J^{(n)} 1 \}_{n \ge 0}\) is increasing, bounded from below by 1 with each \(J^{(n)} 1 \) being decreasing in the first and third variables as well as increasing and convex in the second variable x. As desired,
so the value function v is decreasing in the first and third variables as well as increasing and convex in the second variable; again, v is a fixed point of \({{\tilde{J}}}\). Moreover, the uniform approximation error result (3.20) also holds for compactly supported priors (with an obvious reinterpretation \(h = \sup (\mathop {\mathrm {supp}}\nolimits \mu )\)). We can also show (by a similar argument as in Theorem 4.2 (iii)) that
where \(g_i^{(n)} := \sum _{j\ne i} \frac{\lambda _{ij}}{\lambda _{i}}J^{(n)}_{j}1\) and the limit \(b_{\sigma _{i}}\) is a function increasing in both variables. Lastly, by similar arguments as before, the stopping time
is optimal for the liquidation problem (2.5).
Remark 5.6
The higher the volatility, the slower the learning about the drift, so under Assumption 4.5 it is tempting to expect that the value function v is decreasing in the volatility variable and so that the stopping boundaries satisfy \(b_{\sigma _{1}} \le b_{\sigma _{2}} \le \cdots \le b_{\sigma _{m}}\) also in the case of an arbitrary prior distribution \(\mu \). Regrettably, proving (or disproving) such monotonicity in volatility has not been achieved by the author.
References
Bain, A., Crisan, D.: Fundamentals of stochastic filtering. In: Stochastic Modelling and Applied Probability, vol. 60. Springer, New York (2009)
Bayraktar, E.: A proof of the smoothness of the finite time horizon American put option for jump diffusions. SIAM J. Control Optim. 48(2), 551–572 (2009)
Bayraktar, E.: On the perpetual American put options for level dependent volatility models with jumps. Quant. Financ. 11(3), 335–341 (2011)
Bayraktar, E., Kravitz, R.: Quickest detection with discretely controlled observations. Seq. Anal. 34(1), 77–133 (2015)
Bayraktar, E., Dayanik, S., Karatzas, I.: Adaptive Poisson disorder problem. Ann. Appl. Probab. 16(3), 1190–1261 (2006)
Bensoussan, A.: Applications of Variational Inequalities in Stochastic Control. Studies in Mathematics and Its Applications, vol. 12. NorthHolland, Amsterdam (1982)
Crépey, S.: About the pricing equations in finance. In: Paris-Princeton Lectures on Mathematical Finance 2010, pp. 63–203. Springer, Berlin (2011)
Crisan, D., Rozovskii, B.: The Oxford Handbook of Nonlinear Filtering. Oxford University Press, Oxford (2011)
Dayanik, S., Poor, H.V., Sezer, S.O.: Multisource Bayesian sequential change detection. Ann. Appl. Probab. 18(2), 552–590 (2008)
Décamps, J.P., Mariotti, T., Villeneuve, S.: Investment timing under incomplete information. Math. Oper. Res. 30(2), 472–500 (2005)
Di Masi, G.B., Kabanov, Y.M., Runggaldier, W.J.: Mean-variance hedging of options on stocks with Markov volatilities. Theory Probab. Appl. 39(1), 172–182 (1995)
Ekström, E., Lu, B.: Optimal selling of an asset under incomplete information. Int. J. Stoch. Anal. 2011, ID 543590 (2011)
Ekström, E., Lu, B.: The optimal dividend problem in the dual model. Adv. Appl. Probab. 46(3), 746–765 (2014)
Ekström, E., Tysk, J.: Convexity theory for the term structure equation. Financ. Stoch. 12(1), 117–147 (2008)
Ekström, E., Vaicenavicius, J.: Optimal liquidation of an asset under drift uncertainty. SIAM J. Financ. Math. 7(1), 357–381 (2016)
Elie, R., Kharroubi, I.: Probabilistic representation and approximation for coupled systems of variational inequalities. Stat. Probab. Lett. 80(17–18), 1388–1396 (2010)
Eloe, P., Liu, R.H., Yatsuki, M., Yin, G., Zhang, Q.: Optimal selling rules in a regime-switching exponential Gaussian diffusion model. SIAM J. Appl. Math. 69(3), 810–829 (2008)
Gapeev, P.: Pricing of perpetual American options in a model with partial information. Int. J. Theor. Appl. Financ. 15(1), ID 1250010 (2012)
Gugerli, U.S.: Optimal stopping of a piecewise-deterministic Markov process. Stochastics 19(4), 221–236 (1986)
Guo, X., Zhang, Q.: Closed-form solutions for perpetual American put options with regime switching. SIAM J. Appl. Math. 64(6), 2034–2049 (2004)
Guo, X., Zhang, Q.: Optimal selling rules in a regime switching model. IEEE Trans. Autom. Control 50, 1450–1455 (2005)
Huang, Y., Forsyth, P.A., Labahn, G.: Methods for pricing American options under regime switching. SIAM J. Sci. Comput. 33(5), 2144–2168 (2011)
Karatzas, I., Shreve, S.: Methods of Mathematical Finance. Applications of Mathematics, vol. 39. Springer, New York (1998)
Le, H., Wang, C.: A finite time horizon optimal stopping problem with regime switching. SIAM J. Control Optim. 48(8), 5193–5213 (2010)
Lu, B.: Optimal selling of an asset with jumps under incomplete information. Appl. Math. Financ. 20(6), 599–610 (2013)
Øksendal, B.: Stochastic Differential Equations: An Introduction with Applications, 6th edn. Springer, New York (2007)
Pascucci, A.: Free boundary and optimal stopping problems for American Asian options. Financ. Stoch. 12(1), 21–41 (2008)
Pemy, M., Zhang, Q.: Optimal stock liquidation in a regime switching model with finite time horizon. J. Math. Anal. Appl. 321(2), 537–552 (2006)
Peskir, G., Shiryaev, A.: Optimal Stopping and Free-Boundary Problems. Lectures in Mathematics, ETH Zürich. Birkhäuser Verlag, Basel (2006)
Pham, H.: Continuous-Time Stochastic Control and Optimization with Financial Applications, vol. 61. Springer, Berlin (2009)
Revuz, D., Yor, M.: Continuous Martingales and Brownian Motion. Grundlehren der Mathematischen Wissenschaften, vol. 293, 3rd edn. Springer, Berlin (1999)
Rogers, L.C.G.: Optimal Investment. Springer Briefs in Quantitative Finance. Springer, New York (2013)
Vannestål, M.: Exercising American options under uncertainty. Working paper (2017)
Yin, G., Liu, R.H., Zhang, Q.: Recursive algorithms for stock liquidation: a stochastic optimization approach. SIAM J. Optim. 13(1), 240–263 (2002)
Yin, G., Zhang, G., Liu, F., Liu, R.H., Cheng, Y.: Stock liquidation via stochastic approximation using Nasdaq daily and intraday data. Math. Financ. 16(1), 217–236 (2006)
Zhang, Q.: Stock trading: an optimal selling rule. SIAM J. Control Optim. 40(1), 64–87 (2001)
Zhang, Q., Yin, G., Liu, R.H.: A near-optimal selling rule for a two-time-scale market model. Multiscale Model. Simul. 4(1), 172–193 (2005)
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Vaicenavicius, J. Asset Liquidation Under Drift Uncertainty and Regime-Switching Volatility. Appl Math Optim 81, 757–784 (2020). https://doi.org/10.1007/s00245-018-9518-5
Keywords
 Optimal liquidation
 Drift uncertainty
 Regime-switching volatility
 Sequential analysis
 Optimal stopping
 Stochastic filtering