1 Introduction

Optimal stopping problems, a variant of optimization problems in which investors are free to stop before or at the maturity in order to maximize their profits, have been implemented in practice and have attracted attention in academic areas such as science, engineering, economics and, particularly, finance. For instance, pricing American-style derivatives is a classical optimal stopping problem in which the stopping time is adapted to the information generated over time. The underlying dynamic system is usually described by stochastic differential equations (SDEs), and research on optimal stopping has consequently focused mainly on the underlying dynamic system itself. In the field of financial investment, however, an investor frequently faces decisions about when to stop investing in risky assets so as to maximize the expected utility of wealth over a finite investment horizon. Such optimal stopping problems depend on the underlying dynamic system as well as on the investor's optimization decisions (controls). This naturally leads to a mixed optimal control and stopping problem, and Ceci and Bassan [1] is one of the typical works along this line of research. In the general formulation of such models, the control is mixed, composed of a control and a stopping time. The theory has also been studied in Bensoussan and Lions [2], Elliott and Kopp [3], Yong and Zhou [4] and Fleming and Soner [5], and applied in finance in Dayanik and Karatzas [6], Henderson and Hobson [7], Li and Zhou [8], Li and Wu [9, 10] and Shiryaev, Xu and Zhou [11].

In the finance field, finding an optimal stopping time has been extensively studied for pricing American-style options, which allow option holders to exercise before or at the maturity. Typical examples include, but are not limited to, those presented in Chang, Pang and Yong [12], Dayanik and Karatzas [6] and Rüschendorf and Urusov [13]. In the mathematical finance literature, choosing an optimal stopping time is often related to a free boundary problem for a class of diffusions (see Fleming and Soner [5] and Peskir and Shiryaev [14]). In many applied areas, especially in broader investment problems, however, one often encounters more general controlled diffusion processes. In real financial markets the situation is even more complicated, since investors wish to choose the best time to stop portfolio selection over a given investment horizon so as to maximize their profits (see Samuelson [15], Karatzas and Kou [16], Karatzas and Sudderth [17], Karatzas and Wang [18], Karatzas and Ocone [19], Ceci and Bassan [1], Henderson [20], Li and Zhou [8] and Li and Wu [9, 10]).

The initial motivation of this paper comes from our recent studies on choosing an optimal time at which an investor stops investing and/or sells all of his risky assets (see Choi, Koo and Kwak [21] and Henderson and Hobson [7]). The objective is to find an optimization process and a stopping time that meet certain investment criteria, such as maximizing an expected utility value before or at the maturity. This is a typical problem in the area of financial investment. However, there are fundamental difficulties in handling such optimization problems. Firstly, our investment problems, unlike classical American-style options, involve an optimization process over the entire time horizon. Secondly, they involve the portfolio in both the drift and volatility terms, so that problems with multi-dimensional financial assets are more realistic than those addressed in the finance literature (see Carpenter [22]). Therefore, it is difficult to solve these problems either analytically or numerically with the methods developed in the framework of American-style options. In our model, the corresponding HJB equation takes the form of a variational inequality for a fully nonlinear equation. We apply a dual transformation to obtain a new free boundary problem with a linear equation. By tackling this new free boundary problem, we establish the properties of the free boundary and the optimal strategy for the original problem.

The remainder of the paper is organized as follows. In Section 2, the mathematical formulation of the model is presented, and the corresponding HJB equation is posed. In Section 3, a dual transformation converts the free boundary problem of a fully nonlinear PDE into a new free boundary problem with a linear equation but with the complicated constraint (3.16). In Section 4 we simplify the constraint (3.16) to obtain a new free boundary problem with the simple condition (4.4). Moreover, we show that the solution of problem (4.5) must be the solution of problem (3.15). Section 5 is devoted to the study of the free boundary of problem (4.5). In Section 6, we return to the original problem (2.6) to show that its free boundary is monotonically increasing and differentiable, and we discuss its financial meaning. Section 7 concludes the paper.

2 Model formulation

2.1 The manager’s problem

The manager operates in a complete, arbitrage-free, continuous-time financial market consisting of a riskless asset with instantaneous interest rate $r$ and $n$ risky assets. The risky asset prices $S_i$ are governed by the stochastic differential equations

$$\frac{dS_{i,t}}{S_{i,t}} = (r+\mu_i)\,dt + \sigma_i\,dW_t, \quad i=1,2,\ldots,n,$$
(2.1)

where the interest rate $r$, the excess appreciation rates $\mu_i$, and the volatility vectors $\sigma_i$ are constants, and $W$ is a standard $n$-dimensional Brownian motion. In addition, the covariance matrix $\Sigma = \sigma\sigma^{\top}$ is strongly nondegenerate.

A trading strategy for the manager is an $n$-dimensional process $\pi_t$ whose $i$th component $\pi_{i,t}$ is the amount held in the $i$th risky asset at time $t$. An admissible trading strategy $\pi_t$ must be progressively measurable with respect to $\{\mathcal{F}_t\}$ and such that $X_t \ge 0$. Note that $X_t = \pi_{0,t} + \sum_{i=1}^{n}\pi_{i,t}$, where $\pi_{0,t}$ is the amount invested in the money market. The wealth $X_t$ evolves according to

$$dX_t = \bigl(rX_t + \mu^{\top}\pi_t\bigr)\,dt + \pi_t^{\top}\sigma\,dW_t.$$
(2.2)

In addition, short-selling is allowed.
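To make the wealth dynamics concrete, the following minimal Euler-Maruyama sketch simulates (2.2) under an arbitrary fixed strategy. All numerical values (rates, volatilities, the constant holdings $\pi$) are illustrative assumptions, not part of the model.

```python
import numpy as np

# Illustrative parameters (assumptions, not taken from the paper)
r, T, n = 0.03, 1.0, 2
mu = np.array([0.04, 0.06])              # excess appreciation rates mu_i
sigma = np.array([[0.20, 0.00],          # volatility vectors sigma_i (rows)
                  [0.10, 0.30]])
N = 252
dt = T / N
rng = np.random.default_rng(0)

X = 1.0                                  # initial wealth x
pi = np.array([0.3, 0.3])                # amounts held in the risky assets (kept constant here)
for _ in range(N):
    dW = rng.normal(0.0, np.sqrt(dt), size=n)
    # wealth dynamics (2.2): dX_t = (r X_t + mu' pi_t) dt + pi_t' sigma dW_t
    X += (r * X + mu @ pi) * dt + pi @ sigma @ dW
print("simulated terminal wealth X_T ≈", round(X, 4))
```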

The manager controls assets with initial value $x$. The manager's dynamic problem is to choose an admissible trading strategy $\pi_t$ and a stopping time $\tau$ to maximize his expected utility of the exercise wealth:

$$V(x,t) = \max_{\pi,\tau}\,\mathbb{E}\bigl[e^{-r(\tau-t)}U(X_{\tau}+K)\bigr],$$
(2.3)

where $r > 0$ is the interest rate and $K$ is a positive constant (e.g., a fixed salary), and

$$U(x) = \frac{1}{\gamma}x^{\gamma}, \quad 0<\gamma<1,$$

is the utility function.

2.2 HJB equation

Applying the dynamic programming principle, we get the following Hamilton-Jacobi-Bellman (HJB) equation:

$$\begin{cases} \min\Bigl\{-\partial_t V - \max_{\pi}\bigl[\frac{1}{2}\bigl(\pi^{\top}\Sigma\pi\bigr)\partial_{xx}V + \mu^{\top}\pi\,\partial_x V\bigr] - rx\,\partial_x V + rV,\; V - \frac{1}{\gamma}(x+K)^{\gamma}\Bigr\} = 0, & x>0,\ 0<t<T,\\ V(0,t) = \frac{1}{\gamma}K^{\gamma}, & 0<t<T,\\ V(x,T) = \frac{1}{\gamma}(x+K)^{\gamma}, & x>0. \end{cases}$$
(2.4)

Suppose that $V(\cdot,t)$ is strictly increasing and strictly concave in $x$, i.e., $\partial_x V > 0$ and $\partial_{xx}V < 0$. Note that the gradient of $\pi^{\top}\Sigma\pi$ with respect to $\pi$ is

$$\nabla_{\pi}\bigl(\pi^{\top}\Sigma\pi\bigr) = 2\Sigma\pi,$$

so the first-order condition for the inner maximization in (2.4) gives

$$\pi^{*} = -\Sigma^{-1}\mu\,\frac{\partial_x V(x,t)}{\partial_{xx}V(x,t)}.$$
(2.5)

Thus (2.4) becomes

$$\begin{cases} \min\Bigl\{-\partial_t V + \frac{a^{2}}{2}\frac{(\partial_x V)^{2}}{\partial_{xx}V} - rx\,\partial_x V + rV,\; V - \frac{1}{\gamma}(x+K)^{\gamma}\Bigr\} = 0, & x>0,\ 0<t<T,\\ V(0,t) = \frac{1}{\gamma}K^{\gamma}, & 0<t<T,\\ V(x,T) = \frac{1}{\gamma}(x+K)^{\gamma}, & x>0, \end{cases}$$
(2.6)

where $a^{2} = \mu^{\top}\Sigma^{-1}\mu$. Now we find a condition under which the free boundary exists. A simple calculation shows

$$U(x+K) = \frac{1}{\gamma}(x+K)^{\gamma}, \qquad \partial_x U(x+K) = (x+K)^{\gamma-1}, \qquad \partial_{xx}U(x+K) = -(1-\gamma)(x+K)^{\gamma-2}.$$

It follows that

$$-\partial_t U(x+K) + \frac{a^{2}}{2}\,\frac{\bigl(\partial_x U(x+K)\bigr)^{2}}{\partial_{xx}U(x+K)} - rx\,\partial_x U(x+K) + rU(x+K) = -\frac{a^{2}}{2}\,\frac{1}{1-\gamma}(x+K)^{\gamma} - rx(x+K)^{\gamma-1} + \frac{r}{\gamma}(x+K)^{\gamma} \ge 0.$$

Eliminating the common factor $\frac{1}{\gamma}(x+K)^{\gamma-1}$ yields

$$-\frac{a^{2}\gamma}{2(1-\gamma)}(x+K) - r\gamma x + r(x+K) \ge 0,$$

i.e.,

$$\Bigl(\frac{a^{2}\gamma}{2(1-\gamma)} - r + r\gamma\Bigr)x \le \Bigl(-\frac{a^{2}\gamma}{2(1-\gamma)} + r\Bigr)K.$$
(2.7)

If

$$\frac{a^{2}\gamma}{2(1-\gamma)} - r \le -r\gamma,$$
(2.8)

then (2.7) holds for any $x>0$, and the solution to problem (2.6) is $U(x+K)$.

If

$$\frac{a^{2}\gamma}{2(1-\gamma)} - r \ge 0,$$
(2.9)

then (2.7) is impossible for any $x>0$. Therefore, in this case, the solution to problem (2.6) satisfies

$$\begin{cases} -\partial_t V + \frac{a^{2}}{2}\frac{(\partial_x V)^{2}}{\partial_{xx}V} - rx\,\partial_x V + rV = 0, & x>0,\ 0<t<T,\\ V(0,t) = \frac{1}{\gamma}K^{\gamma}, & 0<t<T,\\ \partial_x V(+\infty,t) = 0, & 0<t<T,\\ V(x,T) = \frac{1}{\gamma}(x+K)^{\gamma}, & x>0. \end{cases}$$
(2.10)

We summarize the above results in the following theorem.

Theorem 2.1 In the following cases, problem (2.6) has a trivial solution.

(1) If (2.8) holds, the solution to problem (2.6) is $U(x+K)$.

(2) If (2.9) holds, the solution to problem (2.10) is the solution to problem (2.6) as well.

Recalling (2.8) and (2.9), in the following we always assume that

$$-r\gamma < \frac{a^{2}\gamma}{2(1-\gamma)} - r < 0.$$
(2.11)

In the case of (2.11), there exists a free boundary.
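Since the trichotomy (2.8), (2.9), (2.11) depends on the data only through $a^{2} = \mu^{\top}\Sigma^{-1}\mu$, $r$ and $\gamma$, it is easy to check numerically which case a given parameter set falls into. The sketch below (illustrative values, assumed) also evaluates the candidate feedback strategy (2.5) for user-supplied, hypothetical values of the derivatives of $V$.

```python
import numpy as np

def regime(mu, sigma, r, gamma):
    """Classify the parameters according to (2.8), (2.9) and (2.11)."""
    Sigma = sigma @ sigma.T                    # covariance matrix Sigma = sigma sigma'
    a2 = mu @ np.linalg.solve(Sigma, mu)       # a^2 = mu' Sigma^{-1} mu
    q = a2 * gamma / (2 * (1 - gamma)) - r     # the quantity appearing in (2.8)-(2.11)
    if q <= -r * gamma:
        return a2, "trivial case (2.8): V = U(x+K)"
    if q >= 0:
        return a2, "trivial case (2.9): solve the unconstrained problem (2.10)"
    return a2, "case (2.11): a free boundary exists"

def optimal_pi(mu, sigma, Vx, Vxx):
    """Candidate feedback strategy (2.5): pi* = -Sigma^{-1} mu * Vx / Vxx."""
    Sigma = sigma @ sigma.T
    return -np.linalg.solve(Sigma, mu) * Vx / Vxx

mu = np.array([0.04, 0.06])
sigma = np.array([[0.20, 0.00], [0.10, 0.30]])
print(regime(mu, sigma, r=0.03, gamma=0.5))
print(optimal_pi(mu, sigma, Vx=1.0, Vxx=-0.5))   # hypothetical derivative values
```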

3 Dual transformation

Define a dual transformation of V(x,t) (see Pham [23])

$$v(y,t) := \max_{x>0}\bigl(V(x,t) - xy\bigr), \quad 0 < y \le y_0.$$
(3.1)

If $\partial_x V(\cdot,t)$ is strictly decreasing, which is equivalent to the strict concavity of $V(\cdot,t)$ (we will show this fact at the end of Section 5), then the maximum in (3.1) is attained at exactly one point

$$x = I(y,t),$$
(3.2)

which is the unique solution of

$$y = \partial_x V(x,t).$$
(3.3)

Using the coordinate transformation (3.2) yields

$$v(y,t) = \bigl[V(x,t) - x\,\partial_x V(x,t)\bigr]\big|_{x=I(y,t)} = V\bigl(I(y,t),t\bigr) - y\,I(y,t).$$
(3.4)

Differentiating with respect to y and t, we get

$$\partial_y v(y,t) = \partial_x V\bigl(I(y,t),t\bigr)\partial_y I(y,t) - y\,\partial_y I(y,t) - I(y,t) = -I(y,t),$$
(3.5)
$$\partial_{yy} v(y,t) = -\partial_y I(y,t) = -\frac{1}{\partial_{xx}V(I(y,t),t)},$$
(3.6)
$$\partial_t v(y,t) = \partial_t V\bigl(I(y,t),t\bigr) + \partial_x V\bigl(I(y,t),t\bigr)\partial_t I(y,t) - y\,\partial_t I(y,t) = \partial_t V\bigl(I(y,t),t\bigr).$$
(3.7)

Substituting (3.5) into (3.4), we have

$$V\bigl(I(y,t),t\bigr) = v(y,t) - y\,\partial_y v(y,t).$$
(3.8)

By the transformation (3.2) and (3.3)-(3.8), the HJB equation in (2.6) becomes

$$\min\Bigl\{-\partial_t v - \frac{a^{2}}{2}y^{2}\partial_{yy}v + rv,\; v - y\,\partial_y v - \frac{1}{\gamma}(K - \partial_y v)^{\gamma}\Bigr\} = 0, \quad 0<y<y_0,\ 0<t<T.$$
(3.9)

Now we derive the terminal condition for v(y,T). Note that

$$V(x,T) = \frac{1}{\gamma}(x+K)^{\gamma},$$
(3.10)

so $\partial_x V(x,T) = (x+K)^{\gamma-1}$, i.e., $\bigl[\partial_x V(x,T)\bigr]^{\frac{1}{\gamma-1}} = x+K$. It follows that

$$y^{\frac{1}{\gamma-1}} - K = x = I(y,T) = -\partial_y v(y,T),$$
(3.11)

and by (3.8), we have

$$v(y,T) = V\bigl(I(y,T),T\bigr) + y\,\partial_y v(y,T) = \frac{1}{\gamma}y^{\frac{\gamma}{\gamma-1}} + y\bigl(K - y^{\frac{1}{\gamma-1}}\bigr) = \frac{1-\gamma}{\gamma}y^{\frac{\gamma}{\gamma-1}} + Ky.$$
(3.12)

Next, we determine the upper bound $y_0$ for $y$. In fact, $V(x,t) = \frac{1}{\gamma}(x+K)^{\gamma}$ in a neighborhood of $x=0$, so the upper bound is

$$y_0 = \partial_x V(0,t) = K^{\gamma-1}.$$
(3.13)

In addition, we need to determine the value $v(y_0,t)$. By (3.8), we also have

$$v(y_0,t) = V(0,t) + y_0\cdot 0 = \frac{1}{\gamma}K^{\gamma}.$$
(3.14)

Combining (3.9) and (3.12)-(3.14), we obtain

$$\begin{cases} \min\Bigl\{-\partial_t v - \frac{a^{2}}{2}y^{2}\partial_{yy}v + rv,\; v - y\,\partial_y v - \frac{1}{\gamma}(K - \partial_y v)^{\gamma}\Bigr\} = 0, & 0<y<K^{\gamma-1},\ 0<t<T,\\ v(K^{\gamma-1},t) = \frac{1}{\gamma}K^{\gamma}, & 0<t<T,\\ v(y,T) = \frac{1-\gamma}{\gamma}y^{\frac{\gamma}{\gamma-1}} + Ky, & 0<y<K^{\gamma-1}. \end{cases}$$
(3.15)

In (3.15), the equation is a linear parabolic equation, but the constraint condition

$$v \ge y\,\partial_y v + \frac{1}{\gamma}(K - \partial_y v)^{\gamma}$$
(3.16)

is very complicated. In the following section, we simplify this condition.

Remark The equation in (3.15) degenerates on the boundary $y=0$. According to Fichera's theorem (see Oleĭnik and Radkević [24]), no boundary condition should be imposed at $y=0$.
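Before simplifying (3.16), it is reassuring to verify the terminal condition (3.12) and the upper bound (3.13) numerically. The sketch below (with assumed values of $\gamma$ and $K$) performs the maximization in (3.1) at $t=T$ on a grid of $x$ and compares the result with the closed form (3.12).

```python
import numpy as np

gamma, K = 0.5, 1.0                                # illustrative parameters (assumed)
x = np.linspace(0.0, 50.0, 200001)                 # grid for the inner maximization in (3.1)
V_T = (x + K) ** gamma / gamma                     # terminal value V(x,T) = U(x+K)

y0 = K ** (gamma - 1.0)                            # upper bound y_0 = K^{gamma-1}, see (3.13)
for y in (0.25 * y0, 0.5 * y0, 0.9 * y0):
    v_numeric = np.max(V_T - x * y)                # dual transformation (3.1) at t = T
    v_formula = (1 - gamma) / gamma * y ** (gamma / (gamma - 1.0)) + K * y   # (3.12)
    print(f"y = {y:.3f}:  grid maximum {v_numeric:.6f}   closed form {v_formula:.6f}")
```

The two columns agree up to the grid resolution, as they should.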

4 Simplifying the complicated constraint condition

Note that in the domain $\{(x,t) \mid V(x,t) = \frac{1}{\gamma}(x+K)^{\gamma}\}$, we have

$$\partial_x V(x,t) = (x+K)^{\gamma-1}, \quad \text{if } V(x,t) = \frac{1}{\gamma}(x+K)^{\gamma}.$$
(4.1)

In terms of the $y$ coordinate, this reads

$$y = (K - \partial_y v)^{\gamma-1}, \quad \text{if } v - y\,\partial_y v = \frac{1}{\gamma}(K - \partial_y v)^{\gamma}.$$
(4.2)

Solving the first equality in (4.2) for $\partial_y v$ yields

$$\partial_y v = K - y^{\frac{1}{\gamma-1}},$$
(4.3)

and then substituting (4.3) into (3.16), we have

$$v \ge \frac{1-\gamma}{\gamma}y^{\frac{\gamma}{\gamma-1}} + Ky.$$
(4.4)
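As a quick symbolic sanity check on this substitution, one can verify (a sketch using sympy) that inserting (4.3) into the right-hand side of (3.16) reproduces the right-hand side of (4.4).

```python
import sympy as sp

y, K, gamma = sp.symbols('y K gamma', positive=True)
vy = K - y ** (1 / (gamma - 1))                          # expression (4.3) for the derivative of v
rhs_316 = y * vy + (K - vy) ** gamma / gamma             # right-hand side of (3.16)
rhs_44 = (1 - gamma) / gamma * y ** (gamma / (gamma - 1)) + K * y   # right-hand side of (4.4)
# .equals() verifies the identity (numerically at random points), robust to exponent forms
print(rhs_316.equals(rhs_44))                            # expected output: True
```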

Condition (4.4) is the simplified constraint condition. We assume that $u(y,t)$ satisfies

$$\begin{cases} \min\Bigl\{-\partial_t u - \frac{a^{2}}{2}y^{2}\partial_{yy}u + ru,\; u - \frac{1-\gamma}{\gamma}y^{\frac{\gamma}{\gamma-1}} - Ky\Bigr\} = 0, & (y,t)\in Q_y,\\ u(K^{\gamma-1},t) = \frac{1}{\gamma}K^{\gamma}, & 0<t<T,\\ u(y,T) = \frac{1-\gamma}{\gamma}y^{\frac{\gamma}{\gamma-1}} + Ky, & 0<y<K^{\gamma-1}, \end{cases}$$
(4.5)

where

$$Q_y = \bigl(0, K^{\gamma-1}\bigr) \times (0,T).$$

Moreover, we split the domain $Q_y$ into two parts (see Figure 1):

$$ER_y = \Bigl\{u(y,t) = \frac{1-\gamma}{\gamma}y^{\frac{\gamma}{\gamma-1}} + Ky\Bigr\}, \quad \text{the exercise region},$$
(4.6)
$$CR_y = \Bigl\{u(y,t) > \frac{1-\gamma}{\gamma}y^{\frac{\gamma}{\gamma-1}} + Ky\Bigr\}, \quad \text{the continuation region}.$$
(4.7)
Figure 1: $CR_y$ and $ER_y$.

Theorem 4.1 The solution $u(y,t)$ to problem (4.5) is the solution to problem (3.15) as well.

In order to prove this theorem, we first show the following two lemmas.

Lemma 4.1 For any $(y,t) \in Q_y$, we have

$$\partial_y u = K - y^{\frac{1}{\gamma-1}}, \quad (y,t) \in ER_y,$$
(4.8)
$$\partial_y u \le K - y^{\frac{1}{\gamma-1}}, \quad (y,t) \in CR_y.$$
(4.9)

Proof Equation (4.8) follows directly from the definition (4.6). Moreover, in $CR_y$,

$$-\partial_t u - \frac{a^{2}}{2}y^{2}\partial_{yy}u + ru = 0, \quad (y,t) \in CR_y.$$
(4.10)

Differentiating (4.10) with respect to $y$ yields

$$-\partial_t(\partial_y u) - \frac{a^{2}}{2}y^{2}\partial_{yy}(\partial_y u) - a^{2}y\,\partial_y(\partial_y u) + r\,\partial_y u = 0, \quad (y,t) \in CR_y.$$
(4.11)

Note that

$$\partial_y u(y,T) = K - y^{\frac{1}{\gamma-1}}, \quad 0<y<K^{\gamma-1},$$
(4.12)
$$\partial_y u(y,t) = K - y^{\frac{1}{\gamma-1}}, \quad (y,t) \in \partial(CR_y) \cap Q_y,$$
(4.13)

where $\partial(CR_y)$ is the boundary of $CR_y$.

Denote $w = K - y^{\frac{1}{\gamma-1}}$. We now show that $w$ is a supersolution to problem (4.11)-(4.13). Indeed,

$$\partial_y w = \frac{1}{1-\gamma}y^{\frac{1}{\gamma-1}-1} = \frac{1}{1-\gamma}y^{\frac{2-\gamma}{\gamma-1}}, \qquad \partial_{yy}w = -\frac{2-\gamma}{(1-\gamma)^{2}}y^{\frac{1}{\gamma-1}-2},$$

and

$$-\partial_t w - \frac{a^{2}}{2}y^{2}\partial_{yy}w - a^{2}y\,\partial_y w + rw = \frac{a^{2}}{2}\,\frac{2-\gamma}{(1-\gamma)^{2}}y^{\frac{1}{\gamma-1}} - \frac{a^{2}}{1-\gamma}y^{\frac{1}{\gamma-1}} + r\bigl(K - y^{\frac{1}{\gamma-1}}\bigr) = rK + \Bigl(\frac{a^{2}\gamma}{2(1-\gamma)^{2}} - r\Bigr)y^{\frac{1}{\gamma-1}} > 0 \quad \bigl(\text{by the first inequality in (2.11)}\bigr).$$

So $w$ is a supersolution of (4.11)-(4.13). Since $\partial_y u$ solves (4.11) with the data (4.12)-(4.13), which coincide with $w$, the comparison principle gives $\partial_y u \le w$ in $CR_y$, which is (4.9). □

Lemma 4.2 The function

$$y\,\partial_y u + \frac{1}{\gamma}(K - \partial_y u)^{\gamma}$$

is increasing with respect to $\partial_y u$ if $\partial_y u \le K - y^{\frac{1}{\gamma-1}}$.

Proof Define a function

$$f(z) = yz + \frac{1}{\gamma}(K-z)^{\gamma}, \quad z \le K - y^{\frac{1}{\gamma-1}}.$$

Then

$$f'(z) = y - (K-z)^{\gamma-1} \ge 0$$

if $z \le K - y^{\frac{1}{\gamma-1}}$. □

Proof of Theorem 4.1 Note that, from (4.5),

$$-\partial_t u - \frac{a^{2}}{2}y^{2}\partial_{yy}u + ru \ge 0, \quad (y,t) \in ER_y,$$
(4.14)
$$u = \frac{1-\gamma}{\gamma}y^{\frac{\gamma}{\gamma-1}} + Ky, \quad (y,t) \in ER_y.$$
(4.15)

Rewrite (4.15) as

$$u = y\bigl(K - y^{\frac{1}{\gamma-1}}\bigr) + \frac{1}{\gamma}\bigl(K - \bigl[K - y^{\frac{1}{\gamma-1}}\bigr]\bigr)^{\gamma}, \quad (y,t) \in ER_y.$$
(4.16)

Applying (4.8) to (4.16), we have

$$u = y\,\partial_y u + \frac{1}{\gamma}(K - \partial_y u)^{\gamma}, \quad (y,t) \in ER_y.$$
(4.17)

On the other hand, from (4.5), in $CR_y$,

$$-\partial_t u - \frac{a^{2}}{2}y^{2}\partial_{yy}u + ru = 0, \quad (y,t) \in CR_y,$$
(4.18)
$$u \ge \frac{1-\gamma}{\gamma}y^{\frac{\gamma}{\gamma-1}} + Ky, \quad (y,t) \in CR_y.$$
(4.19)

We rewrite (4.19) as

$$u \ge y\bigl(K - y^{\frac{1}{\gamma-1}}\bigr) + \frac{1}{\gamma}\bigl(K - \bigl[K - y^{\frac{1}{\gamma-1}}\bigr]\bigr)^{\gamma}, \quad (y,t) \in CR_y.$$
(4.20)

Applying (4.9) and Lemma 4.2, we get

$$u \ge y\,\partial_y u + \frac{1}{\gamma}(K - \partial_y u)^{\gamma}, \quad (y,t) \in CR_y.$$

Together with (4.17), this shows that $u$ satisfies the constraint (3.16); combining this with (4.14) and (4.18), $u$ is a solution of problem (3.15).

 □

5 The free boundary of problem (4.5)

Denote

$$W^{2,1}_{p,\mathrm{loc}}(Q_y) = \bigl\{u(y,t) : u,\ \partial_y u,\ \partial_{yy}u,\ \partial_t u \in L^{p}(Q),\ \forall Q \subset\subset Q_y\bigr\}.$$

Theorem 5.1 Problem (4.5) has a unique solution $u \in W^{2,1}_{p,\mathrm{loc}}(Q_y) \cap C\bigl(\overline{Q}_y \setminus \{y=0\}\bigr)$, and

$$\frac{1-\gamma}{\gamma}y^{\frac{\gamma}{\gamma-1}} + Ky \le u(y,t) \le e^{A(T-t)}\Bigl(\frac{1-\gamma}{\gamma}y^{\frac{\gamma}{\gamma-1}} + Ky\Bigr),$$
(5.1)
$$\partial_y\Bigl(u - \frac{1-\gamma}{\gamma}y^{\frac{\gamma}{\gamma-1}} - Ky\Bigr) \le 0,$$
(5.2)
$$\partial_t\Bigl(u - \frac{1-\gamma}{\gamma}y^{\frac{\gamma}{\gamma-1}} - Ky\Bigr) \le 0,$$
(5.3)

where $A = \frac{a^{2}\gamma}{2(1-\gamma)^{2}}$.

Proof The existence and uniqueness of a solution $u \in W^{2,1}_{p,\mathrm{loc}}(Q_y) \cap C\bigl(\overline{Q}_y \setminus \{y=0\}\bigr)$ to system (4.5) can be proved by a standard penalty method (see Friedman [25]); here we omit the details. The first inequality in (5.1) follows from (4.5) directly, and now we prove the second inequality in (5.1). Denote

$$W(y,t) := e^{A(T-t)}\Bigl(\frac{1-\gamma}{\gamma}y^{\frac{\gamma}{\gamma-1}} + Ky\Bigr),$$

where $A>0$ is to be determined. We first show that $W(y,t)$ is a supersolution to problem (4.5). In fact,

$$-\partial_t W - \frac{a^{2}}{2}y^{2}\partial_{yy}W + rW = A e^{A(T-t)}\Bigl(\frac{1-\gamma}{\gamma}y^{\frac{\gamma}{\gamma-1}} + Ky\Bigr) + e^{A(T-t)}\Bigl[\Bigl(-\frac{a^{2}}{2}\,\frac{1}{1-\gamma} + r\,\frac{1-\gamma}{\gamma}\Bigr)y^{\frac{\gamma}{\gamma-1}} + rKy\Bigr] \ge e^{A(T-t)}\Bigl(A\,\frac{1-\gamma}{\gamma} - \frac{a^{2}}{2}\,\frac{1}{1-\gamma}\Bigr)y^{\frac{\gamma}{\gamma-1}} = 0$$

if

$$A = \frac{a^{2}\gamma}{2(1-\gamma)^{2}}.$$

So, W(y,t) is a supersolution to problem (4.5). Hence, the second inequality in (5.1) holds.

In addition, inequality (5.2) follows from (4.8) and (4.9). In order to prove (5.3), we define

$$w(y,t) = u(y,t-\delta) \quad \text{for small } \delta>0.$$

From (4.5), we know that $w(y,t)$ satisfies

$$\begin{cases} \min\Bigl\{-\partial_t w - \frac{a^{2}}{2}y^{2}\partial_{yy}w + rw,\; w - \frac{1-\gamma}{\gamma}y^{\frac{\gamma}{\gamma-1}} - Ky\Bigr\} = 0, & 0<y<K^{\gamma-1},\ \delta<t<T,\\ w(K^{\gamma-1},t) = \frac{1}{\gamma}K^{\gamma}, & \delta<t<T,\\ w(y,T) = u(y,T-\delta) \ge \frac{1-\gamma}{\gamma}y^{\frac{\gamma}{\gamma-1}} + Ky, & 0<y<K^{\gamma-1}. \end{cases}$$
(5.4)

Applying the comparison principle to variational inequalities (4.5) and (5.4) with respect to terminal values (see Friedman [26]), we obtain

$$u(y,t) \le w(y,t) = u(y,t-\delta), \quad 0<y<K^{\gamma-1},\ \delta<t<T.$$

Thus $\partial_t u \le 0$ and (5.3) holds. □

Based on (5.2), we define the free boundary

$$h(t) := \min\Bigl\{y \;\Big|\; u(y,t) = \frac{1-\gamma}{\gamma}y^{\frac{\gamma}{\gamma-1}} + Ky\Bigr\}, \quad 0 \le t < T.$$
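Although existence is established above via a penalty method, one can visualize $h(t)$ with a crude projected explicit finite-difference scheme for (4.5): step backward from $t=T$ and take the maximum with the obstacle after every step. The sketch below is purely illustrative (assumed parameters chosen so that (2.11) holds, an ad hoc treatment of the truncated left end near the degenerate point $y=0$, and no convergence claims); it also compares the numerical boundary near $t=T$ with the closed form (5.5) proved below.

```python
import numpy as np

# Illustrative parameters (assumed), chosen so that (2.11) holds:
# -r*gamma < a2*gamma/(2*(1-gamma)) - r < 0
gamma, K, r, a2, T = 0.5, 1.0, 0.03, 0.045, 1.0
y0 = K ** (gamma - 1.0)

M, N = 400, 10000                                   # space / time grid sizes
y = np.linspace(0.0, y0, M + 1)[1:]                 # drop y = 0 (degenerate point, no BC there)
dy, dt = y[1] - y[0], T / N
phi = (1 - gamma) / gamma * y ** (gamma / (gamma - 1.0)) + K * y   # obstacle in (4.5)

u = phi.copy()                                      # terminal condition u(y,T) = phi(y)
h = np.full(N + 1, np.nan)                          # h[k] approximates h(k*dt)

for k in range(N - 1, -1, -1):                      # march backward from t = T to t = 0
    uyy = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dy ** 2
    unew = u.copy()
    unew[1:-1] = u[1:-1] + dt * (0.5 * a2 * y[1:-1] ** 2 * uyy - r * u[1:-1])
    unew[-1] = K ** gamma / gamma                   # boundary condition at y = K^{gamma-1}
    unew[0] = 2.0 * unew[1] - unew[2]               # ad hoc linear extrapolation at the left end
    u = np.maximum(unew, phi)                       # projection onto the obstacle constraint
    free = np.where(u - phi > 1e-10)[0]             # continuation nodes at this time level
    h[k] = y[free[-1] + 1] if free.size else np.nan # left edge of the exercise interval

hT_formula = (r * K / (0.5 * a2 / (1 - gamma) - r * (1 - gamma) / gamma)) ** (gamma - 1.0)
print("h(T-) numeric ≈", round(h[N - 1], 3), "   closed form (5.5):", round(hT_formula, 3))
print("h(0)  numeric ≈", round(h[0], 3))
```

With these assumed parameters the scheme produces a decreasing boundary, consistent with Theorem 5.2 below.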

Theorem 5.2 The free boundary $h(t)$ is monotonically decreasing (Figure 2) with

$$h(T) := \lim_{t\to T}h(t) = \biggl(\frac{rK}{\frac{a^{2}}{2}\,\frac{1}{1-\gamma} - r\,\frac{1-\gamma}{\gamma}}\biggr)^{\gamma-1}.$$
(5.5)

Moreover, $h(t) \in C[0,T] \cap C^{\infty}[0,T)$.

Figure 2: $y = h(t)$ and $\varphi(y) = \frac{1-\gamma}{\gamma}y^{\frac{\gamma}{\gamma-1}} + Ky$.

Proof First, from (5.3), the function $u(y,t) - \frac{1-\gamma}{\gamma}y^{\frac{\gamma}{\gamma-1}} - Ky$ is nonincreasing in $t$, so the exercise region expands as $t$ increases; hence $h(t)$ is monotonically decreasing. Denote

$$\varphi(y) := \frac{1-\gamma}{\gamma}y^{\frac{\gamma}{\gamma-1}} + Ky.$$

In $ER_y$,

$$-\partial_t\varphi - \frac{a^{2}}{2}y^{2}\partial_{yy}\varphi + r\varphi = \Bigl(-\frac{a^{2}}{2}\,\frac{1}{1-\gamma} + r\,\frac{1-\gamma}{\gamma}\Bigr)y^{\frac{\gamma}{\gamma-1}} + rKy \ge 0,$$

so

$$h(t) \ge \biggl(\frac{rK}{\frac{a^{2}}{2}\,\frac{1}{1-\gamma} - r\,\frac{1-\gamma}{\gamma}}\biggr)^{\gamma-1}, \quad 0 \le t < T.$$

Hence,

$$h(T) \ge \biggl(\frac{rK}{\frac{a^{2}}{2}\,\frac{1}{1-\gamma} - r\,\frac{1-\gamma}{\gamma}}\biggr)^{\gamma-1}.$$

In order to prove (5.5), we suppose

$$h(T) > \biggl(\frac{rK}{\frac{a^{2}}{2}\,\frac{1}{1-\gamma} - r\,\frac{1-\gamma}{\gamma}}\biggr)^{\gamma-1};$$
(5.6)

then it is not hard to get

$$\partial_t u(y,T) > 0 \quad \text{for } \biggl(\frac{rK}{\frac{a^{2}}{2}\,\frac{1}{1-\gamma} - r\,\frac{1-\gamma}{\gamma}}\biggr)^{\gamma-1} < y < h(T),$$

which is a contradiction to (5.3). Therefore, the desired result (5.5) holds.

Finally, the proof of $h(t) \in C[0,T] \cap C^{\infty}[0,T)$ is similar to that in Friedman [25]; here we omit the details. □

Theorem 5.3 For any $(y,t) \in Q_y$, we have

$$\partial_{yy}u(y,t) > 0.$$
(5.7)

Proof If $(y,t) \in ER_y$, then $u = \frac{1-\gamma}{\gamma}y^{\frac{\gamma}{\gamma-1}} + Ky$. Thus,

$$\partial_{yy}u = \frac{1}{1-\gamma}y^{\frac{1}{\gamma-1}-1} > 0, \quad (y,t) \in ER_y.$$

If $(y,t) \in CR_y$, then

$$-\partial_t u - \frac{a^{2}}{2}y^{2}\partial_{yy}u + ru = 0, \quad (y,t) \in CR_y.$$
(5.8)

Differentiating (5.8) twice with respect to $y$ yields

$$-\partial_t(\partial_{yy}u) - \frac{a^{2}}{2}y^{2}\partial_{yy}(\partial_{yy}u) - 2a^{2}y\,\partial_y(\partial_{yy}u) + \bigl(r - a^{2}\bigr)\partial_{yy}u = 0, \quad (y,t) \in CR_y.$$
(5.9)

Note that

$$\partial_{yy}u(y,t) > 0 \quad \text{on } t = T \text{ or } y = h(t).$$

Applying the minimum principle, we obtain

$$\partial_{yy}u(y,t) > 0, \quad (y,t) \in CR_y.$$

 □

Remark From (3.6) and (5.7), we have $\partial_{xx}V < 0$, which means that $V$ is strictly concave in $x$.

6 The free boundary of original problem (2.6)

Recall that on the free boundary $y = h(t)$,

$$u(y,t) = \frac{1-\gamma}{\gamma}y^{\frac{\gamma}{\gamma-1}} + Ky, \quad y = h(t),$$
(6.1)
$$\partial_y u(y,t) = -y^{\frac{1}{\gamma-1}} + K, \quad y = h(t).$$
(6.2)

From the dual transformation (3.2) and (3.5), we know

$$x = -\partial_y u(y,t).$$
(6.3)

Denote the free boundary of (2.6) by $x = g(t)$. Applying (6.2) and (6.3) yields

$$g(t) = -\partial_y u\bigl(h(t),t\bigr) = h(t)^{\frac{1}{\gamma-1}} - K.$$
(6.4)

Moreover,

$$g'(t) = \frac{1}{\gamma-1}h(t)^{\frac{1}{\gamma-1}-1}h'(t) > 0,$$
(6.5)
$$g(T) = h(T)^{\frac{1}{\gamma-1}} - K = \frac{rK}{\frac{a^{2}}{2}\,\frac{1}{1-\gamma} - r\,\frac{1-\gamma}{\gamma}} - K \quad \bigl(\text{by (5.5)}\bigr).$$
(6.6)

Thus, we have the following theorem.

Theorem 6.1 The free boundary $x = g(t)$ of problem (2.6) is monotonically increasing (Figure 3), and $g(T)$ is determined by (6.6). Moreover, $g(t) \in C[0,T] \cap C^{\infty}[0,T)$.

Figure 3: $x = g(t)$.

Financial meaning At time $t$, the manager should continue to invest according to (2.5) if $x > g(t)$, and should stop investing if $x < g(t)$.
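In the notation above, the terminal threshold (6.6) and the resulting stop/continue rule can be written down directly. The sketch below reuses the illustrative parameters from the earlier snippets (assumed values, not calibrated to any market).

```python
def g_T(a2: float, r: float, gamma: float, K: float) -> float:
    """Terminal free boundary g(T) from (6.6)."""
    return r * K / (0.5 * a2 / (1.0 - gamma) - r * (1.0 - gamma) / gamma) - K

def decision(x: float, g_t: float) -> str:
    """Stop/continue rule at the free boundary x = g(t)."""
    return "continue investing (use (2.5))" if x > g_t else "stop investing"

gT = g_T(a2=0.045, r=0.03, gamma=0.5, K=1.0)   # equals 1.0 for these illustrative values
print("g(T) =", gT)
print(decision(x=1.5, g_t=gT))                 # wealth above the boundary -> keep investing
print(decision(x=0.5, g_t=gT))                 # wealth below the boundary -> stop
```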

7 Concluding remarks

We explore a class of optimal investment problems mixed with optimal stopping in financial investment. The corresponding HJB equation, a free boundary problem for a fully nonlinear equation, is posed. By means of a dual transformation, we obtain a new free boundary problem with a linear equation under a complicated constraint condition. The key step is to simplify this constraint condition. In this way we study the properties of the free boundary and the optimal strategy for investors.

Remark on the constant K (salary) If $K$ is a function of time $t$, $K = K(t)$, the only difficulty is the proof of (5.3). If $K(t)$ is decreasing, then (5.3) still holds and all the results remain valid. In the general case, if $K(t)$ is not decreasing, the free boundary may fail to be monotonic. We will consider this problem in future work.