Abstract
In this paper, maximization of expected utility from consumption over a finite time horizon is considered for discrete time markets with bid and ask prices and a strictly concave utility function. The notion of a weak shadow price, i.e. an illiquid price depending on the portfolio, under which the model without bid and ask prices is equivalent to the model with bid and ask prices, is introduced. The existence and the form of the weak shadow price are shown. Using the weak shadow price, the usual shadow price (called in the paper the strong shadow price) is then constructed.
1 Introduction
In this paper we study the problem of maximizing expected utility in a discrete time market with finite horizon and transaction costs. We introduce the so-called weak shadow price, i.e. a portfolio-state-dependent price process taking values between the bid and ask prices for which the optimal value of expected utility in the corresponding frictionless market is the same as in the market with transaction costs. Using the weak shadow price we construct a shadow price, called in the paper a strong shadow price: a sequence of random variables playing the role of asset prices, taking values between the bid and ask prices and depending on the initial portfolio position, such that the optimal value of expected utility in the market with these asset prices is the same as in the market with transaction costs.
The problem of the existence and construction of a shadow price was first studied for the Black–Scholes model with transaction costs and a discounted logarithmic utility function (see [9, 10, 15]). The existence of a shadow price was then shown for a discrete time finite market in [16]. It turns out that in some cases one cannot find a frictionless market with a price process taking values between the bid and ask prices which gives the same optimal strategy as the market with transaction costs (see [2] and [6]).
In this paper we study a general discrete time finite horizon problem with a strictly concave utility function. We consider the so-called weak shadow price, i.e. a price system in an illiquid frictionless market, depending on our portfolio, for which the optimal value of the expected utility (and thus also the optimal strategies) coincides with the optimal value of the expected utility in the market with transaction costs. This price system is not a shadow price in the sense considered in [15] or [16]. It is in fact a more general notion, which later enables us to construct the strong shadow price studied in [15] and [16]. Furthermore, under our assumptions, for power and logarithmic utilities the strong and weak shadow prices are uniquely defined.
The method used in this paper differs significantly from those of the other papers (see [1, 2, 5, 6, 9, 10, 13, 15, 16, 19]). Because of discrete time we do not have the differential structure of the model as in [15]. We also do not use the Lagrange method studied in [16]. Our method is based on the strict concavity of the utility function, which yields uniqueness and continuity of the optimal strategies. We also use a number of geometric properties of the selling, buying and no transaction zones. The main construction of the weak shadow price is based on the Merton proportion, i.e. the optimal proportion between the value of the stocks and the wealth.
We assume that on a probability space \((\Omega , \mathcal {F}, \mathbb {P})\) with filtration \((\mathcal {F}_{n})^{N}_{n=0}\) we are given strictly positive adapted processes \(\underline{S} = (\underline{S}_{n})_{n=0}^{N}\) and \(\overline{S}=(\overline{S}_{n})_{n=0}^{N}\) such that \(\overline{S}_{n} > \underline{S}_{n}\) for \(n = 0, 1, 2, \ldots , N\), satisfying the following version of the conditional full support condition (CFS) almost surely
for \(k = 0, 1, 2, \ldots , N\), where conv stands for convex hull and supp is the support of the random vector. This condition is similar to the condition (CFS) considered in [11].
Assume we are given a market \(\mathcal {M}\) with a safe bank account and a risky stock account with infinitely divisible assets. For simplicity, the interest rate on the bank account is equal to \(0\). At time moment \(n = 0, 1, \ldots , N\) we can buy or sell stocks paying \(\overline{S}_{n}\) or receiving \(\underline{S}_{n}\), respectively. Throughout the paper we assume that every conditional expected value has a regular version, the existence of which is guaranteed by Theorem 3.1 in [12]. We shall also use the convention that \(\mathbb {E}(- \infty | \mathcal {G}) := - \infty \) for any \(\sigma \)-field \(\mathcal {G} \subseteq \mathcal {F}\).
Our financial position will be denoted by the pair \((x,y)\), where \(x\) is the amount in the bank account and \(y\) is the number of assets in our portfolio. Given a position \((x, y)\) at a fixed time moment, we are allowed to trade stocks only in such a way that we cannot go bankrupt. Since the random variables \(\underline{S}_{n}\) and \(\overline{S}_{n}\), which represent the bid and ask prices of the stocks, are fully supported (see [11]), an investment policy must be such that at the next time moment the amounts in the bank and stock accounts are nonnegative almost surely. Consequently, we have short selling and short buying constraints.
Our aim is to maximize the value:
over all \(u\) from the set of admissible strategies \(\mathcal {U}_{(x, y)}(\underline{S}, \overline{S})\) which are defined in Sect. 2, with a constant discount factor \(\gamma \in (0,1]\), where our initial position \((x_0, y_0)=(x,y) \in \mathbb {R}_{+}^{2}\) is such that \(x + y > 0\) and \(c_{n}\) is our consumption at time moment \(n = 0, 1, \ldots , N\), \(\underline{S}_0=\underline{s}\), \(\overline{S}_0=\overline{s}\) and \(g\) is a utility function that is a strictly increasing, strictly concave function defined on \((0,\infty )\) with \(g(0)\) finite or \(g(0)=-\infty \). We shall also assume that \(g(u) = - \infty \) for \(u < 0\). The class of such utility functions contains in particular \(g(c) = \ln c\), \(g(c) = c^{\alpha }\) with \(\alpha \in (0,1)\) or \(g(c)=1-e^{-c}\).
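Reading the functional (1.2), per the description above, as the expected discounted sum \(\mathbb {E}\sum _{n=0}^{N}\gamma ^{n}g(c_{n})\), a single realized consumption path can be evaluated as in the following sketch (the function name and the sample utilities are illustrative, not from the paper):

```python
import math

def discounted_utility(consumption, gamma=0.95, g=math.log):
    # sum_{n=0}^{N} gamma^n * g(c_n) along one realized consumption path
    return sum(gamma ** n * g(c) for n, c in enumerate(consumption))

# g(c) = ln c and g(c) = c^alpha are both admissible utilities here
print(discounted_utility([1.0, 1.0, 1.0], gamma=0.5))          # ln 1 = 0 each period
print(discounted_utility([1.0, 4.0], gamma=0.5, g=math.sqrt))  # 1 + 0.5 * 2 = 2.0
```

The expectation in (1.2) would then be taken over the randomness of the prices and of the chosen admissible strategy.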
We assume that the processes \(\underline{S}\) and \(\overline{S}\) are such that assumption (A1) (see Sect. 3), which guarantees integrability of certain finite horizon value functions, is satisfied.
We will introduce a notion of weak shadow price, i.e. a price system
where
such that for \(n=0, 1,\ldots , N\):
the random variable \(\hat{S}_{n}(x, y, \underline{S}_{n}, \overline{S}_{n})\) is \(\mathcal {F}_{n}\)-measurable for \((x, y) \in \mathbb {R}_{+}^{2} \setminus \{(0, 0)\}\), and the optimal expected value of the discounted utility functional (1.2) in the market with price system \(\hat{S}\) is the same as in the market \(\mathcal {M}\). More precisely, in this shadow market the current price of a unit of the stock depends on our position at the beginning of the period. In other words, we translate the problem of maximizing (1.2) in the liquid market with transaction costs into the problem of maximizing (1.2) in a frictionless illiquid market with price system \(\hat{S}\).
Then we construct a shadow price (strong shadow price): a sequence of random variables, depending on the initial position and taking values between the bid and ask prices, such that the optimal value of the cost functional (1.2) for the market with the shadow price is the same as for the market with transaction costs.
The problem of constructing weak and then strong shadow prices for the functional (1.2) is solved for all price processes \(\underline{S}\) and \(\overline{S}\) satisfying (1.1) and (A1). Importantly, we do not impose any additional conditions (besides (1.1) and (A1)) on the processes \(\underline{S}\) and \(\overline{S}\), and we study the case of a general strictly concave utility function.
2 Properties of the Set of Constraints
In this section we introduce the notion of constraints on admissible strategies. Generally speaking, strategies are admissible if they are adapted to the filtration \((\mathcal {F}_{n})^{N}_{n=0}\) and they do not lead to bankruptcy almost surely. Note that because of the conditional full support condition (1.1), after a possible transaction we should have nonnegative positions in the bank and stock accounts, since otherwise with positive probability our wealth at the next time moment would be strictly negative.
For \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\) with \(\mathbb {D}\) defined in (1.3) let
Equivalently we have
The set \({\mathbb {A}}(x, y, \underline{s}, \overline{s})\) consists of the one-step consumption, buying and selling strategies we are allowed to use starting from the position \((x, y)\). We summarize below important properties of this set.
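The displayed definition of \({\mathbb {A}}(x, y, \underline{s}, \overline{s})\) is not reproduced above; the sketch below encodes a reading consistent with the implications (2.3)–(2.4) of Proposition 2.1 below: consuming \(c\), buying \(l\) stocks at the ask price and selling \(m\) stocks at the bid price must leave both accounts nonnegative. All names are illustrative.

```python
def in_A(c, l, m, x, y, s_bid, s_ask):
    # (c, l, m): consume c, buy l stocks at s_ask, sell m stocks at s_bid;
    # admissible iff the resulting bank and stock positions are nonnegative
    if c < 0 or l < 0 or m < 0:
        return False
    bank = x - c - s_ask * l + s_bid * m
    stock = y + l - m
    return bank >= 0 and stock >= 0
```

For example, from \((x,y)=(1,1)\) with \(\underline{s}=1\), \(\overline{s}=2\), buying one stock is infeasible (`in_A(0, 1, 0, 1, 1, 1, 2)` is `False`), while selling one stock is feasible.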
Proposition 2.1
Let \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\). Then we have
-
(i)
\({\mathbb {A}}(\rho x, \rho y, \underline{s}, \overline{s}) = \rho {\mathbb {A}}(x, y, \underline{s}, \overline{s}) \), for \(\rho \ge 0\),
-
(ii)
the set \({\mathbb {A}}(x, y, \underline{s}, \overline{s})\) is convex,
-
(iii)
for \(\overline{s} > \underline{s} > 0\) the set \({\mathbb {A}}(x, y, \underline{s}, \overline{s})\) is compact,
-
(iv)
for \(\overline{s} > \underline{s} > 0\) the following implications hold
$$\begin{aligned} (0, \hat{l}, 0) \in {\mathbb {A}}(x, y, \underline{s},\overline{s}) \Longrightarrow \forall _{l \in [0, \hat{l}]} \ (0, \hat{l} - l, 0) \in {\mathbb {A}}(x - \overline{s} l, y + l, \underline{s}, \overline{s}), \end{aligned}$$(2.3)$$\begin{aligned} (0, 0, \hat{m}) \in {\mathbb {A}}(x, y, \underline{s},\overline{s}) \Longrightarrow \forall _{m \in [0, \hat{m}]} \ (0, 0, \hat{m} - m) \in {\mathbb {A}}(x + \underline{s} m, y - m, \underline{s}, \overline{s}), \end{aligned}$$(2.4)$$\begin{aligned} (c, l, m) \in {\mathbb {A}}(x, y, \underline{s}, \overline{s}) \Longrightarrow \forall _{\rho \in [0, 1]} \ (\rho c, \rho l, \rho m) \in {\mathbb {A}}(x, y, \underline{s}, \overline{s}), \end{aligned}$$(2.5)and
$$\begin{aligned} (c, l, m) \in \mathbb {R}_{+}^{3} \setminus {\mathbb {A}}(x, y, \underline{s}, \overline{s}) \Longrightarrow \forall _{\rho \ge 1} \ (\rho c, \rho l, \rho m) \not \in {\mathbb {A}}(x, y, \underline{s}, \overline{s}), \end{aligned}$$(2.6) -
(v)
for \((x_{1}, y_{1}), (x_{2}, y_{2}) \in \mathbb {R}_{+}^{2}\), \(\underline{s}, \overline{s} \in \mathbb {R}_{+}\) such that \(\overline{s} > \underline{s} > 0\) and all \(t \in [0, 1]\) the following inclusion holds
$$\begin{aligned} t {\mathbb {A}}(x_{1}, y_{1}, \underline{s}, \overline{s}) + (1 - t){\mathbb {A}}(x_{2}, y_{2}, \underline{s}, \overline{s}) \nonumber \\ \subseteq {\mathbb {A}}(t x_{1} + (1 - t) x_{2}, t y_{1} + (1 - t) y_{2}, \underline{s}, \overline{s}), \end{aligned}$$(2.7) -
(vi)
for \((x_{1}, y_{1},\underline{s}_{1}, \overline{s}_{1}), (x_{2}, y_{2},\underline{s}_{2}, \overline{s}_{2}) \in \mathbb {D}\) if \(x_{1} \le x_{2}, y_{1} \le y_{2}, \underline{s}_{1} \le \underline{s}_{2}\) and \(\overline{s}_{1} \ge \overline{s}_{2}\), we have \({\mathbb {A}}(x_{1}, y_{1}, \underline{s}_{1}, \overline{s}_{1}) \subseteq {\mathbb {A}}(x_{2}, y_{2}, \underline{s}_{2}, \overline{s}_{2}),\)
-
(vii)
if a sequence \((x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})\in \mathbb {D}\) converges to \((x_{0}, y_{0}, \underline{s}_{0}, \overline{s}_{0}) \in \mathbb {D}\), then the set \(cl({\mathbb {A}}(x_{0}, y_{0}, \underline{s}_{0}, \overline{s}_{0})\cup \bigcup _{n=1}^{\infty }{\mathbb {A}}(x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n}))\) is compact, where cl stands for the closure.
The proof is in the Appendix.
Denote by \(h\) the Hausdorff metric defined on the space \(\mathcal {H}(\mathbb {R}^{3}_{+})\) of compact subsets of \(\mathbb {R}^{3}_{+}\) as follows
with \(d(A,B) := \sup \{ dist(a, B): a \in A\}\) and \(dist (x, A):= \inf \{ d(x,a) : a \in A\}\). Clearly \((\mathcal {H}(\mathbb {R}^{3}_{+}),h)\) is a complete metric space (see e.g. [4]). We have
Theorem 2.1
Let \((x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})_{n=1}^{\infty }\) be a sequence from \(\mathbb {D}\), which converges to \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\). Then
The proof is in the Appendix.
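For intuition about the metric \(h\), the two one-sided deviations \(d(A,B)\) and \(d(B,A)\) can be computed directly for finite point sets; a minimal illustration (not tied to the sets \({\mathbb {A}}\) of the paper):

```python
import math

def hausdorff(A, B):
    # h(A, B) = max(d(A, B), d(B, A)), where d(A, B) = sup_{a in A} dist(a, B)
    def one_sided(S, T):
        return max(min(math.dist(p, q) for q in T) for p in S)
    return max(one_sided(A, B), one_sided(B, A))

print(hausdorff([(0.0, 0.0)], [(3.0, 4.0)]))               # 5.0
print(hausdorff([(0.0, 0.0), (1.0, 0.0)], [(0.0, 0.0)]))   # 1.0
```

The second call shows the asymmetry that `max` removes: every point of the singleton is close to the first set, but not conversely.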
3 Bellman Equations
Following Theorem 1 of [8] we introduce now a system of Bellman equations. For \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\) let
The function \(w_{N}\) is continuous and concave. For \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\) and \((c, l, m) \in {\mathbb {A}}(x, y, \underline{s}, \overline{s})\) let
It is obvious that the random function \(V_{N-1}\) is continuous on its domain for every \(\omega \in \Omega \).
Tacitly assuming integrability of \(V_{N-1}\) (with respect to \(\omega \)), by Theorem I.3.1 of [12] (see also [18]) there exists a regular conditional probability
given \(\mathcal {F}_{N-1}\) defined for \(\omega \in \Omega \) and \(A \in \mathcal {F}_{N-1}\) such that the mapping
is well defined for \(\omega \in \Omega \) and
for \(\mathbb {P}\)-almost all \(\omega \in \Omega \).
In other words, the mapping defined in (3.3) is a version of the conditional expected value of \(V_{N-1}(x, y, \underline{s}, \overline{s}, c, l, m)\) given \(\mathcal {F}_{N-1}\), and, as mentioned in the Introduction, in what follows we shall consider only such versions of the conditional expected value.
For \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\) let
and define inductively
and
for \(k = 1, 2, \ldots , N\). By Theorem 1 of [8] we know that the optimal control problem with gain functional (1.2) is solved using the sequence of Bellman equations (3.6) introduced above. In what follows we shall assume that
- (A1) :
-
bid and ask prices \(\underline{S} = (\underline{S}_{n})_{n=0}^{N}\) and \(\overline{S}=(\overline{S}_{n})_{n=0}^{N}\) are such that:
\(\forall _{(x, y) \in \mathbb {R}_{+}^{2} \setminus \{ (0 , 0) \}}\) we have integrability of \(w_i(x,y,\underline{S}_{i},\overline{S}_{i})\) (with respect to \(\omega \)) and \( \mathbb {E}g(x + \underline{S}_{i} y)^{-} < \infty \) as well as \(\forall _{(x, y, \underline{s}, \overline{s}) \in \mathbb {D}} \ \mathbb {E}w_{i}(x, y, \underline{s}, \overline{s}) < \infty \) for \(i=1,2,\ldots ,N\).
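Since the displayed equations (3.5)–(3.6) are not reproduced above, the following toy recursion only illustrates their structure: the terminal value is the utility of the liquidation wealth, and each earlier value maximizes current utility plus the discounted conditional expectation of the next value over one-step strategies. The prices, grids and the i.i.d. two-scenario randomness are illustrative assumptions (in particular, the two-point randomness does not satisfy the full support condition (1.1)).

```python
import math

GAMMA = 0.9                          # discount factor, gamma in (0, 1]
NEXT = [(0.9, 1.1), (1.1, 1.4)]      # equally likely next-period (bid, ask) pairs

def g(c):
    # logarithmic utility with g(c) = -inf for c <= 0
    return math.log(c) if c > 0 else float("-inf")

def w(k, x, y, s_bid, s_ask, grid=8):
    # k trading dates to go; terminal value = utility of liquidation wealth
    if k == 0:
        return g(x + s_bid * y)
    best = float("-inf")
    wealth = x + s_bid * y
    for i in range(1, grid + 1):                 # consumption on a coarse grid
        c = wealth * i / (grid + 1)
        for j in range(grid + 1):                # post-trade stock holding
            y2 = (y + x / s_ask) * j / grid
            cost = s_ask * (y2 - y) if y2 > y else s_bid * (y2 - y)
            x2 = x - c - cost
            if x2 < 0 or y2 < 0:                 # inadmissible position
                continue
            cont = sum(w(k - 1, x2, y2, b, a, grid) for b, a in NEXT) / len(NEXT)
            best = max(best, g(c) + GAMMA * cont)
    return best
```

Buying is charged at the ask price and selling is credited at the bid price, so trading across the spread is costly, which is exactly the friction the weak shadow price removes.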
We have
Proposition 3.1
Under (A1) for \(\overline{s}>\underline{s}>0\) and \(k=1,2,\ldots ,N\) the random mappings
and
(considered as a regular conditional expected value) with \((x,y)\in \mathbb {R}_+^2\setminus \left\{ (0,0)\right\} \) and \((c,l,m) \in \mathbb {A}(x, y, \underline{s}, \overline{s})\) are well defined continuous \(\mathcal {F}_{N-k}\)-measurable random functions.
The proof by induction is postponed to the Appendix.
In Lemma 10.1 in the Appendix we give sufficient conditions on the processes \((\underline{S}_{n})_{n=0}^{N}\) and \((\overline{S}_{n})_{n=0}^{N}\) under which assumption (A1) is satisfied.
Based on the continuity results of Proposition 3.1, we obtain the existence of selectors in the Bellman equations (3.6).
Lemma 3.1
Let \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\). Then there exists an \(\mathcal {F}_{N-k}\)-measurable random variable \((\hat{c}, \hat{l}, \hat{m})\) which takes values in the set \({\mathbb {A}}(x, y, \underline{s}, \overline{s})\) such that for \(\omega \in \Omega \) we have
The proof is in the Appendix.
Remark 3.1
Notice that in Lemma 3.1, thanks to the suitable continuity, we have a nice result on the existence of measurable selectors without the necessity of using the more general results of [8] or Theorem B of Section 6 in Chapter 2 of [7]. See also [17].
For \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\) denote by \(\mathcal {A}_{N-k}(x, y, \) \(\underline{s}, \overline{s})\) the set of all \(\mathcal {F}_{N-k}\)-measurable random variables taking values in the set \(\mathbb {A}(x, y, \underline{s}, \overline{s})\).
Corollary 3.1
Let \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\). Then
The meaning of (3.8) is very important: it says that when dealing with the Bellman equation for \(w_{N-k}\) we can consider not only the deterministic triples from \(\mathbb {A}(x, y, \underline{s}, \overline{s})\) but also \(\mathcal {F}_{N-k}\)-measurable random variables taking values in this set.
We also have
Corollary 3.2
Let \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\). Let \((\hat{c}, \hat{l}, \hat{m}) \in \mathcal {A}_{N-k}(x, y, \underline{s}, \overline{s})\) be such that
Then for every random variable \((\tilde{c}, \tilde{l}, \tilde{m})\) from \(\mathcal {A}_{N-k}(x, y, \underline{s}, \overline{s})\) we have
Now we define the set \(\mathcal {U}_{(x, y)}(\underline{S}, \overline{S})\) of all admissible strategies in the market with transaction costs and with initial position \((x, y) \in \mathbb {R}_{+}^{2}\). A sequence \(u = (u_{n})_{n=0}^{N} = (c_{n}, l_{n}, m_{n})_{n=0}^{N}\) is called an admissible strategy if for \(n = 0, 1,\ldots , N\) the triple \((c_{n}, l_{n}, m_{n}) \in \mathcal {A}_{n}(x_{n}, y_{n}, \underline{S}_{n}, \overline{S}_{n})\), where the sequences \((x_{n})_{n=0}^{N}\) and \((y_{n})_{n=0}^{N}\) are defined inductively as follows:
Note that any admissible strategy \(u \in \mathcal {U}_{(x, y)}(\underline{S}, \overline{S})\) defines by (3.9) a unique predictable sequence \((x_{n}, y_{n})_{n = 0}^{N}\). Thus writing \(u \in \mathcal {U}_{(x, y)}(\underline{S}, \overline{S})\) we may think that \(u = (c_{n}, l_{n}, m_{n}, x_{n}, y_{n})_{n = 0}^{N}\).
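The displayed recursion (3.9) is not reproduced above; the sketch below rolls positions forward under an admissible strategy in the way consistent with the one-step constraints of Sect. 2 (buy \(l_{n}\) at \(\overline{S}_{n}\), sell \(m_{n}\) at \(\underline{S}_{n}\)). The names are illustrative.

```python
def evolve(x, y, strategy, bids, asks):
    # strategy: list of one-step triples (c_n, l_n, m_n)
    path = [(x, y)]
    for (c, l, m), s_bid, s_ask in zip(strategy, bids, asks):
        x = x - c - s_ask * l + s_bid * m   # bank account
        y = y + l - m                        # stock account
        if x < 0 or y < 0:
            raise ValueError("strategy is not admissible: negative position")
        path.append((x, y))
    return path

print(evolve(10.0, 0.0, [(1.0, 2.0, 0.0)], bids=[1.0], asks=[2.0]))
# [(10.0, 0.0), (5.0, 2.0)]
```

Given the strategy, the position path is determined, which is exactly the uniqueness of the predictable sequence \((x_{n}, y_{n})_{n=0}^{N}\) noted below.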
We have
Proposition 3.2
Let \(\hat{u}=(\hat{c}_{n}, \hat{l}_{n}, \hat{m}_{n})_{n=0}^{N}\) be an admissible strategy such that for the corresponding sequence of market positions \((\hat{x}_n,\hat{y}_n)\) defined by (3.9) and for \(k = 1, 2, \ldots , N\) the following equalities hold
Then we have
with \(\underline{S}_{0}=\underline{s}\) and \(\overline{S}_{0}=\overline{s}\).
Proof
It is obvious that in (3.11) we have “\(\le \)”, because the sequence \((\hat{c}_{n}, \hat{l}_{n}, \hat{m}_{n})_{n=0}^{N}\) is an admissible strategy, and hence it must be
Moreover, since \(\hat{u}\) is an admissible strategy, for any \(u = (c_{n}, l_{n}, m_{n})_{n=0}^{N}\) \(\in \mathcal {U}_{(x, y)}(\underline{S}, \overline{S})\) from (3.10) we have
Thus, indeed, we have “\(\ge \)” in (3.11).
In effect, we have the equality in (3.11). This ends the proof. \(\square \)
To simplify the notation, any element of \(\mathcal {A}_{N-k}(x, y, \underline{s}, \overline{s})\) will also be called an admissible strategy.
Almost immediately we obtain
Lemma 3.2
The random functions \(w_{N-k}(\cdot , \cdot , \underline{s}, \overline{s})\) are concave for \(k = 0, 1, 2, \ldots , N\).
Proof
This follows easily by induction from the concavity of the utility function \(g\). \(\square \)
The next result plays an important role in the uniqueness of the optimal strategies.
Theorem 3.1
Under (A1) the random mapping
is strictly concave for \(k = 1, 2, \ldots , N\).
Proof
We use induction on \(k=1,2, \ldots , N\). The case \(k=1\) follows directly from the strict concavity of \(g\). Assume inductively the strict concavity of the random mapping
Let \(F(x, y) :=\mathbb {E} [w_{N-k+1}(x, y, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) | \mathcal {F}_{N-k}]\). By Lemma 3.2 the mapping \((x,y)\mapsto F(x,y)\) is concave. Suppose that it is not strictly concave. Then there exists a pair of distinct financial positions \((x_{1}, y_{1}), (x_{2}, y_{2}) \in \mathbb {R}_{+}^{2}\) such that for any \(t \in (0, 1)\) we have
Let \((x_{3}, y_{3}) := t (x_{1}, y_{1}) + (1 - t)(x_{2}, y_{2})\) and \((\hat{c}_{i}, \hat{l}_{i}, \hat{m}_{i})\) be optimal one step strategies in \(w_{N-k+1}\) (the existence of which is guaranteed by Corollary 3.1) for \((x_{i}, y_{i})\) with \(i=1,2\). By concavity of \(g\) and \(w_{N-k+2}\) taking into account (3.12) we clearly have that \((\hat{c}_{3}, \hat{l}_{3}, \hat{m}_{3}) := t (\hat{c}_{1}, \hat{l}_{1}, \hat{m}_{1}) + (1 - t) (\hat{c}_{2}, \hat{l}_{2}, \hat{m}_{2})\) is a.s. optimal for \((x_3,y_3)\). Furthermore, by strict concavity of \(g\) we have a.s. that \(\hat{c}_3=\hat{c}_1=\hat{c}_2\). Therefore by (3.12) and (3.7) we have
Since by concavity
we have equality in (3.13) only when we have equality a.s. in (3.14). By induction hypothesis the random mapping
is strictly concave so that from a.s. equality in (3.14) taking into account that \(\hat{c}_3=\hat{c}_2=\hat{c}_1\) we should have a.s.
Since the strategies are optimal we have that \(\hat{m}_{1}\hat{l}_{1}=0=\hat{m}_{2}\hat{l}_{2}\). Therefore, the cases \(x_{1}\ge x_{2}\) and \(y_{1} > y_{2}\) or \(x_{2} > x_{1}\) and \(y_{2} \ge y_{1}\) are not allowed. Assume \(x_{1} < x_{2}\) and \(y_{1} \ge y_{2}\). Then \(\hat{m}_2=0=\hat{l}_1\) and solving (3.15) we obtain a.s.
Notice that \({y_1-y_2 \over x_2-x_1}\) is fixed, while \(\overline{S}_{N-k+1}\) is random. By the conditional full support assumption (1.1), since \(\hat{m}_1\) is bounded by \(y_1\) (so that \(b \ge 0\) is bounded), \(a={1 \over \overline{S}_{N-k+1}}\) can be arbitrarily large with positive probability, which contradicts the requirement that (3.16) holds a.s. The case \(x_{1} \ge x_{2}\) and \(y_{1} < y_{2}\) can be rejected in a similar way. Consequently, (3.12) does not hold and \(F\) is strictly concave. \(\square \)
Immediately from Theorem 10.2 we obtain
Corollary 3.3
For each \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\) there exists a unique \(\mathcal {F}_{N-k}\)-measurable random variable
which takes values in the set \(\mathbb {A}(x, y, \underline{s}, \overline{s})\) and such that
Moreover, the random mapping
is continuous on the set \(\mathbb {D}\).
By simple induction we obtain
Lemma 3.3
For \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\) we have
when \(g(u) = \ln u\), while
when \(g(u) = u^{\alpha }\).
4 Properties of the Optimal Strategies
In this section, based on the Bellman equations introduced in Sect. 3, we characterize the classes of optimal one-step strategies. Let \((\underline{s}, \overline{s}) \in \mathbb {R}_{+}^{2}\) be such that \(\overline{s} > \underline{s} > 0\).
For \(k = 1, 2,\ldots , N\) let us define the following random sets corresponding respectively to no transaction, selling and buying zones:
and
If the bid and ask prices of a unit of stock are \(\underline{s}\) and \(\overline{s}\) respectively, then after optimal consumption we do not trade, we sell, or we buy stocks if our position is in \(\mathbf {NT}_{N-k}(\underline{s}, \overline{s})\), \(\mathbf {S}_{N-k}(\underline{s}, \overline{s})\) or \(\mathbf {B}_{N-k}(\underline{s}, \overline{s})\), respectively. Furthermore, by Lemma 3.3, for \(g(u) = \ln u\) or \(g(u) = u^{\alpha }\) these sets are cones.
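Since the zones are characterized by whether the (unique) optimal one-step strategy trades, membership can be read off directly from the optimal buying and selling amounts \((\hat{l}, \hat{m})\). A trivial classifier, as a sketch with illustrative names; the guard reflects the fact, used in the proof of Theorem 3.1, that optimal strategies satisfy \(\hat{m}\hat{l}=0\):

```python
def zone(l_hat, m_hat):
    # l_hat / m_hat: optimal one-step buying / selling amounts after consumption
    if l_hat > 0 and m_hat > 0:
        raise ValueError("optimal strategies never buy and sell simultaneously")
    if l_hat > 0:
        return "B"    # buying zone
    if m_hat > 0:
        return "S"    # selling zone
    return "NT"       # no transaction zone
```

By Lemma 4.1 below, the three labels partition the admissible positions.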
Lemma 4.1
For \(\underline{s}, \overline{s} \in \mathbb {R}_{+}\) such that \(\overline{s} > \underline{s} > 0\) the random sets \(\mathbf {NT}_{N-k}(\underline{s}, \overline{s})\), \(\mathbf {S}_{N-k}(\underline{s}, \overline{s})\) and \(\mathbf {B}_{N-k}(\underline{s}, \overline{s})\) are pairwise disjoint.
Proof
Fix \(\underline{s}, \overline{s} \in \mathbb {R}_{+}\) such that \(\overline{s} > \underline{s} > 0\). Clearly, by the definition the random sets \(\mathbf {S}_{N-k}(\underline{s}, \overline{s})\) and \(\mathbf {B}_{N-k}(\underline{s}, \overline{s})\) do not have common points with \(\mathbf {NT}_{N-k}(\underline{s}, \overline{s})\). From the uniqueness of the optimal strategy (see Corollary 3.3) we obtain that \(\mathbf {S}_{N-k}(\underline{s}, \overline{s}) \cap \mathbf {B}_{N-k}(\underline{s}, \overline{s}) = \emptyset \). \(\square \)
Proposition 4.1
For \(\underline{s}, \overline{s} \in \mathbb {R}_{+}\) such that \(\overline{s} > \underline{s} > 0\) the random sets \(\mathbf {NT}_{N-k}(\underline{s}, \overline{s})(\omega )\), \(\mathbf {S}_{N-k}(\underline{s}, \overline{s})(\omega )\), \(\mathbf {B}_{N-k}(\underline{s}, \overline{s})(\omega )\) are connected for each \(\omega \in \Omega \).
Proof
Assume that \(\mathbf {NT}_{N-k}(\underline{s}, \overline{s})(\omega )\) is not connected for some \(\omega \in \Omega \). Then the convex envelope of \(\mathbf {NT}_{N-k}(\underline{s}, \overline{s})(\omega )\) contains either elements of \(\mathbf {S}_{N-k}(\underline{s}, \overline{s})(\omega )\) or elements of \(\mathbf {B}_{N-k}(\underline{s}, \overline{s})(\omega )\). Assume first that it contains elements of \(\mathbf {S}_{N-k}(\underline{s}, \overline{s})(\omega )\). To simplify notation we shall skip the dependence on \(\omega \). Since the sets \(\mathbf {NT}_{N-k}(\underline{s}, \overline{s})\) and \(\mathbf {S}_{N-k}(\underline{s}, \overline{s})\) are closed, we may assume that there exist \((x_1,y_1), (x_1+\underline{s}m_1,y_1-m_1) \in \mathbf {NT}_{N-k}(\underline{s}, \overline{s})\) with \(m_1\) positive such that for some positive \(m'<m_1\), setting \((x_2,y_2):=(x_1+\underline{s}m',y_1-m')\), we have \((x_2+\underline{s}m,y_2-m)\in \mathbf {S}_{N-k}(\underline{s}, \overline{s})\) for any \(m\in [0,m_1-m')\). Let \(c^*\) be an optimal consumption for \((x_2,y_2)\). Clearly, \((c^*,0,m_1-m')\) is an optimal one-step strategy for \((x_2,y_2)\) and
Furthermore, \(w_{N-k}(x_1,y_1, \underline{s}, \overline{s})\ge w_{N-k}(x_2, y_2, \underline{s}, \overline{s})\), and by the concavity of \(w_{N-k}(\cdot , \cdot , \underline{s}, \overline{s})\) we must have \(w_{N-k}(x_1,y_1, \underline{s}, \overline{s})= w_{N-k}(x_2, y_2, \underline{s}, \overline{s})\), since otherwise, using concavity, we would obtain that for \(m\in (0,m_1-m')\) we have \(w_{N-k}(x_2+\underline{s}m,y_2-m, \underline{s}, \overline{s})> w_{N-k}(x_2, y_2, \underline{s}, \overline{s})\).
But if \(w_{N-k}(x_1,y_1, \underline{s}, \overline{s})= w_{N-k}(x_2, y_2, \underline{s}, \overline{s})\), then the strategy \((c^*,0,m_1-m')\) cannot be optimal for \((x_2,y_2)\) (by the uniqueness of optimal strategies, see Corollary 3.3, selling is not allowed). The case when the convex envelope of \(\mathbf {NT}_{N-k}(\underline{s}, \overline{s})\) contains elements of \(\mathbf {B}_{N-k}(\underline{s}, \overline{s})\) can be rejected in a similar way. Finally, since the set \(\mathbf {NT}_{N-k}(\underline{s}, \overline{s})(\omega )\) is closed and the right boundary of \(\mathbf {S}_{N-k}(\underline{s}, \overline{s})\) and the left boundary of \(\mathbf {B}_{N-k}(\underline{s}, \overline{s})\) lie in \(\mathbf {NT}_{N-k}(\underline{s}, \overline{s})\), these sets are connected. \(\square \)
5 Local Weak Shadow Price
Consider now the case when at a given time moment \(N-k\), where \(k= 0, 1,\ldots , N\), instead of bid and ask prices \(\underline{s}, \overline{s}\) we have a single price \(\hat{s}\) at which we are allowed to both sell and buy assets, while at the following time moments we again have bid and ask prices.
Define the set
For \((x, y, \hat{s}) \in \hat{\mathbb {D}}\) define
where
or equivalently
In fact, this is the set of constraints imposed on admissible strategies at time moment \(N-k\) in the case when the asset price equals \(\hat{s}\) (there are no frictions) and we do not want to have a negative position in the bank or stock account at the next time moment.
Let
for \((x, y, \hat{s}) \in \hat{\mathbb {D}}\). Clearly,
Moreover we have
Lemma 5.1
Let \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\). Then
and
By analogy with Theorem 2.1 we obtain
Proposition 5.1
For \((x, y, \hat{s}) \in \hat{\mathbb {D}}\) the set \(\overline{\mathbb {B}}(x, y, \hat{s})\) is convex and compact. Furthermore the mapping
is continuous in Hausdorff metric.
From Theorem 3.1, using also Theorems 10.1 and 10.2, we obtain
Proposition 5.2
The random mapping
is strictly concave for \((x, y, \hat{s}) \in \hat{\mathbb {D}}\). Moreover for each \((x, y, \hat{s}) \in \hat{\mathbb {D}}\) there exists a unique \(\mathcal {F}_{N-k}\)-measurable random variable \((\hat{c}(x,y,\hat{s}), \hat{K}(x,y,\hat{s}))\) taking values in the set \(\overline{\mathbb {B}}(x, y, \hat{s})\) which is an optimal one step strategy, i.e.
Furthermore, the random mapping \((x, y, \hat{s}) \longmapsto (\hat{c}(x,y,\hat{s}), \hat{K}(x,y,\hat{s}))\) is continuous.
We now introduce the notion of weak shadow price, which we consider first locally.
Definition 5.1
A family \(\{ \hat{S}_{N - k}(x, y, \underline{s}, \overline{s}) : (x, y, \underline{s}, \overline{s}) \in \mathbb {D} \}\) of random variables is called local weak shadow price at time \(N-k\), if
-
(i)
for every \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\) the random variable \(\hat{S}_{N-k}(x, y, \underline{s}, \overline{s})\) is \(\mathcal {F}_{N - k}\) -measurable,
-
(ii)
\(\forall _{(x, y, \underline{s}, \overline{s}) \in \mathbb {D}} \ \underline{s} \le \hat{S}_{N-k}(x, y, \underline{s}, \overline{s}) \le \overline{s}\),
-
(iii)
\(\forall _{(x, y, \underline{s}, \overline{s}) \in \mathbb {D}} \ v_{N - k}(x, y, \hat{S}_{N-k}(x, y, \underline{s}, \overline{s})) = w_{N - k}(x, y, \underline{s}, \overline{s})\).
The notion of the local weak shadow price is crucial for the construction of the global weak shadow price. At time moment \(N-k\) we look for a price \(\hat{S}_{N-k}(x, y, \underline{s}, \overline{s})\), lying between the bid and ask prices, for which the value of our functional, in the case when at time \(N-k\) we have just the one price \(\hat{S}_{N-k}(x, y, \underline{s}, \overline{s})\) and at the following time moments we again have bid and ask prices, is the same as in the case when we have bid and ask prices at all times. The local weak shadow price depends on the values of the bid and ask prices \(\underline{s}, \overline{s}\) at time moment \(N-k\) and on the \emph{initial} portfolio position at the beginning of this time moment.
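Numerically, given (approximations of) \(v_{N-k}(x,y,\cdot )\) and the value \(w_{N-k}(x, y, \underline{s}, \overline{s})\), a local weak shadow price can be searched for on \([\underline{s}, \overline{s}]\). The sketch below is a plain grid search under the assumption that \(v \ge w\) on the whole interval (trading at a single price within the spread is at least as favorable as facing the spread), so that the candidate is a minimizer of \(v\); this is only an illustration, not the paper's construction, which is based on the Merton proportion.

```python
def local_weak_shadow_price(v, w_value, s_bid, s_ask, grid=1000, tol=1e-6):
    # grid search for s_hat in [s_bid, s_ask] with v(s_hat) == w_value,
    # exploiting the assumption v >= w_value: take the minimizer of v
    candidates = [s_bid + i * (s_ask - s_bid) / grid for i in range(grid + 1)]
    s_hat = min(candidates, key=v)
    return s_hat if abs(v(s_hat) - w_value) <= tol else None
```

For a toy profile such as `v = lambda s: 5.0 + (s - 1.3) ** 2` with `w_value = 5.0` on \([1, 2]\), the search returns a point close to \(1.3\); if no price on the grid attains `w_value` within tolerance, `None` signals failure.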
For \(\hat{s} > 0\) and for \(k = 1,\ldots , N\) let
and
The sets \(\hat{\mathbf {NT}}_{N-k}(\hat{s})\), \(\hat{\mathbf {S}}_{N-k}(\hat{s})\) and \(\hat{\mathbf {B}}_{N-k}(\hat{s})\) correspond to the no transaction, selling and buying zones in the case of a single selling and buying price equal to \(\hat{s}\). For \(g(u) = \ln u\) or \(g(u) = u^{\alpha }\) these sets are clearly cones.
Proposition 5.3
For each \(\omega \in \Omega \) there exists a continuous function \(f^{\omega } : \mathbb {R}_{+}^2 \longrightarrow \mathbb {R}_{+}^2\) such that
Furthermore if the mapping \((x,y) \mapsto v_{N-k}(x,y,\hat{s})\) is differentiable for any \(\hat{s}>0\) then for \(\hat{s}\ne \hat{s}'\) we have
Proof
From Proposition 5.2 and Theorem 3.1 we get that there exists a unique \(\mathcal {F}_{N - k}\)-measurable continuous random function \((\hat{c}, \hat{K}) : \hat{\mathbb {D}} \longrightarrow \mathbb {R}^{2}\) such that for each \((x, y, \hat{s}) \in \hat{\mathbb {D}}\) the random variable \((\hat{c}(x, y, \hat{s}), \hat{K}(x, y, \hat{s}))\) takes values in the set \(\overline{\mathbb {B}}(x, y, \hat{s})\) and for each \((x, y, \hat{s}) \in \hat{\mathbb {D}}\) we have that
Since on the line \(x+y\hat{s}=t\) there is a unique point belonging to the no transaction zone we have that
from which (5.9) and continuity of \(f^{\omega }\) follows.
Assume now that \((\bar{x},\bar{y})\in \hat{\mathbf {NT}}_{N - k}(\hat{s})(\omega ) \cap \hat{\mathbf {NT}}_{N - k}(\hat{s}')(\omega )\) for \(\hat{s}<\hat{s}'\) and \(\bar{y}>0\). Since for \(s=\hat{s}\) or \(s=\hat{s}'\)
we have that \(v_{N-k}(\bar{x}, \bar{y}, \hat{s})=v_{N-k}(\bar{x}, \bar{y}, \hat{s}')=\hat{v}_{N-k}(\bar{x},\bar{y})\). Moreover, for \((x,y)\in \mathbb {R}_+^{2} \) such that \(x+y\hat{s}=\bar{x}+\bar{y}\hat{s}\) or \(x+y\hat{s}'=\bar{x}+\bar{y}\hat{s}'\) we have \(v_{N-k}(x, y, \hat{s})=v_{N-k}(x, y, \hat{s}')=v_{N-k}(\bar{x}, \bar{y}, \hat{s})\). Furthermore, one can easily show that for any \(\tilde{s}\in [\hat{s},\hat{s}']\), whenever \(x+y \tilde{s}=\bar{x}+\bar{y}\tilde{s}\), we also have \( v_{N-k}(x, y, \tilde{s})=v_{N-k}(\bar{x}, \bar{y}, \tilde{s})=v_{N-k}(\bar{x}, \bar{y}, \hat{s})\). Therefore the directional derivative of \(v_{N-k}(\bar{x}, \bar{y},\hat{s})\) along the line \(x+y \tilde{s}=\bar{x}+\bar{y}\tilde{s}\) must be equal to \(0\), as the derivative of a constant function; in particular at \((\bar{x},\bar{y})\) we have
for any \(\tilde{s}\in (\hat{s},\hat{s}')\), which means that \(v_{N-k,x}^{\prime }(\bar{x}, \bar{y},\hat{s})=0= v_{N-k,y}^{\prime }(\bar{x}, \bar{y},\hat{s})\), which contradicts the fact that \(v_{N-k}(x, y, \hat{s})\) is strictly increasing in \(x\) and \(y\). Consequently we obtain (5.10). \(\square \)
Taking into account that \(\hat{\mathbf {NT}}_{N-k}(\hat{s})\) is the image of a continuous function \(f^\omega \), which has exactly one intersection point with each line \(x+\hat{s}y=t\) for \(t\ge 0\), we easily obtain
Corollary 5.1
The sets \(\hat{\mathbf {B}}_{N-k}(\hat{s})\) and \(\hat{\mathbf {S}}_{N-k}(\hat{s})\) are connected for \(\hat{s}>0\).
Remark 5.1
In the case when \(g(u) = \ln u\) or \(g(u) = u^{\alpha }\), the sets \(\hat{\mathbf {NT}}_{N-k}(\hat{s})\), \(\hat{\mathbf {B}}_{N-k}(\hat{s})\) and \(\hat{\mathbf {S}}_{N-k}(\hat{s})\) are cones, and therefore from Proposition 5.3 we get that the set \(\hat{\mathbf {NT}}_{N-k}(\hat{s})\) is a half line starting from the point \((0, 0)\).
6 Optimal Consumption in the Markets Locally Without Friction with Logarithmic and Power Utility Functions
In this section we derive formulas for optimal consumption in a market in which at a given time moment there is a single price for both selling and buying (locally there are no frictions). Notice first that in equation (5.4) we can replace the control variable \(K\) by \(b\in [0,1]\), representing the portion of our wealth invested in the stock market. Then for \((x, y, \hat{s}) \in \hat{\mathbb {D}}\) we have
In the case when \(g(u) = \ln u\) using Lemma 3.3 we obtain
and then by Lemma 10.2 the supremum is attained for \(c = \hat{c}_{N-k}(x, y, \hat{s})\), where
In the case when \(g(u) = u^{\alpha }\) by Lemma 3.3 we have
where \(\tilde{D}_{N-k}(\hat{s}):= \gamma \sup _{b\in [0,1]}\mathbb {E}[w_{N-k+1}(1 - b, \frac{b}{\hat{s}}, \underline{S}_{N-k+1}, \overline{S}_{N-k+1})|\mathcal {F}_{N-k}]\). The supremum in
by Lemma 10.3 is attained for \(c = \hat{c}_{N-k}(x, y, \hat{s})\), where
Substituting (6.3) into (6.2) and (6.6) into (6.4) we immediately obtain
Corollary 6.1
If \(g(u) = \ln u\) or \(g(u) = u^{\alpha }\), then the function \((x,y)\mapsto v_{N-k}(x, y, \hat{s})\) is differentiable, and consequently (5.10) holds.
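The structure of the logarithmic case can be illustrated numerically. The following sketch is a toy one-period problem of my own construction (the two-point return distribution, the wealth levels and the discount factor \(\gamma=0.9\) are illustrative assumptions, not data from the paper): it maximizes \(\ln c + \gamma \,\mathbb {E}[\ln ((t-c)(1-b+bR))]\) over consumption \(c\in (0,t)\) and stock fraction \(b\in [0,1]\) by grid search. The objective separates into a consumption part and an investment part, so the optimal consumption is a fixed proportion \(t/(1+\gamma )\) of the wealth \(t=x+\hat{s}y\), while the optimal fraction \(b\) does not depend on \(t\).

```python
import numpy as np

# Toy one-period sketch of the separation effect in the logarithmic case:
#   maximize  ln(c) + gamma * E[ ln((t - c) * (1 - b + b * R)) ]
# over consumption c in (0, t) and stock fraction b in [0, 1], where
# t = x + s*y is current wealth and R the gross return of the stock.
# The objective splits into a consumption part and an investment part,
# so c* = t / (1 + gamma) while b* does not depend on t.

def optimal_log_policy(t, gamma, returns, probs, n_grid=2000):
    cs = np.linspace(1e-6, t - 1e-6, n_grid)
    bs = np.linspace(0.0, 1.0, n_grid)
    # investment part E[ln(1 - b + b*R)] is independent of c and t
    growth = np.array([(probs * np.log(1.0 - b + b * returns)).sum() for b in bs])
    b_star = bs[np.argmax(growth)]
    values = np.log(cs) + gamma * (np.log(t - cs) + growth.max())
    c_star = cs[np.argmax(values)]
    return c_star, b_star

returns = np.array([0.8, 1.3])   # hypothetical two-point return distribution
probs = np.array([0.5, 0.5])
c1, b1 = optimal_log_policy(10.0, 0.9, returns, probs)
c2, b2 = optimal_log_policy(50.0, 0.9, returns, probs)
```

Running the sketch for wealth levels \(10\) and \(50\) returns (up to grid error) consumptions \(10/1.9\) and \(50/1.9\) with the same stock fraction, in line with the cone property of Remark 5.1.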
7 Properties of Selling and Buying Zones
The construction of the shadow price is based on relations between the random sets \(\hat{\mathbf {S}}_{N-k}\), \(\hat{\mathbf {B}}_{N-k}\) and \({\mathbf {S}}_{N-k}\), \({\mathbf {B}}_{N-k}\), respectively, which we establish in this section. We start with a simple but useful
Lemma 7.1
For \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\) and \(\hat{s} \in [\underline{s}, \overline{s}]\) we have:
and consequently
First we consider the relation between \(\hat{\mathbf {S}}_{N-k}\) and \({\mathbf {S}}_{N-k}\).
Proposition 7.1
For \(\underline{s}, \overline{s} \in \mathbb {R}_{+}\) such that \(\overline{s} > \underline{s} > 0\) and all \(\omega \in \Omega \) we have
Proof
Assume that \((x,y) \in \hat{\mathbf {S}}_{N-k}(\underline{s})(\omega )\) for certain \(\omega \in \Omega \). Then there is an \(\mathcal {F}_{N-k}\)-measurable triple \((\tilde{c}(\omega ),0,\tilde{m}(\omega ))\) taking values in \(\mathbb {B}(x, y, \underline{s})(\omega )\) such that \(\tilde{m}(\omega )>0\) and
where to simplify notation we drop the dependence on \(\omega \). Since also \((\tilde{c},0,\tilde{m})\in \mathbb {A}(x, y, \underline{s}, \overline{s})\), taking into account (7.2) we have
which means that \((x,y) \in \mathbf {S}_{N-k}(\underline{s}, \overline{s})(\omega )\).
Assume now that \((x,y) \in {\mathbf {S}}_{N-k}(\underline{s},\overline{s})(\omega )\). Then there exists an \(\mathcal {F}_{N-k}\)-measurable random triple \((\hat{c}, 0, \hat{m})\) which takes values in \(\mathbb {A}(x, y, \underline{s}, \overline{s})\) such that
If \((x,y) \in \hat{\mathbf {NT}}_{N-k}(\underline{s})(\omega )\) then there is \((\tilde{c},0,0)\in \mathbb {B}(x, y, \underline{s})\) such that
and since also \((\tilde{c},0,0)\in \mathbb {A}(x, y, \underline{s}, \overline{s})\), taking into account (7.2) we obtain that
which means that \((x,y)\in {\mathbf {NT}}_{N-k}(\underline{s},\overline{s})(\omega )\), which is a contradiction. If \((x,y) \in \hat{\mathbf {B}}_{N-k}(\underline{s})(\omega )\) then there is \((\tilde{c},\tilde{l},0)\in \mathbb {B}(x, y, \underline{s})\) such that \(\tilde{l}>0\) and
Consider now the triple \(\lambda (\hat{c}, 0, \hat{m})+ (1-\lambda ) (\tilde{c},\tilde{l},0)\) with \(\lambda ={\tilde{l} \over \hat{m}+\tilde{l}}\in [0,1]\). Note that \(\lambda (\hat{c}, 0, \hat{m})+ (1-\lambda ) (\tilde{c},\tilde{l},0)\in \mathbb {A}(x, y, \underline{s}, \overline{s})\). By concavity of the random function \(F: \mathbb {B}(x, y, \underline{s}) \longrightarrow \mathbb {R}\) defined in the following way
we have using again (7.2) that
From (7.11) we have that
which means that \((x,y)\in {\mathbf {NT}}_{N-k}(\underline{s},\overline{s})(\omega )\), which is a contradiction. \(\square \)
The next relation, between the sets \(\hat{\mathbf {B}}_{N-k}(\overline{s})\) and \(\mathbf {B}_{N-k}(\underline{s}, \overline{s})\), requires two technical lemmas.
Lemma 7.2
Let \((x, y,\underline{s}, \overline{s})\in \mathbb {D}\) and \((\hat{c}, \hat{l}, 0) \in \mathbb {A}(x, y, \underline{s}, \overline{s})\), \((\tilde{c}, 0, \tilde{m}) \in \mathbb {B}(x, y, \overline{s})\) be such that \(\hat{l}, \tilde{m} > 0\). Then for \(\lambda \in (0, 1)\) such that \(\lambda >{\tilde{m} \over \hat{l}+ \tilde{m}}\) we have
Proof
For any \(\lambda \in [0, 1]\) we have \(0\le \lambda \hat{c} + (1 - \lambda ) \tilde{c}\le x+\underline{s}y\) and
Whenever \(1 >\lambda >{\tilde{m} \over \hat{l}+ \tilde{m}}\) we have \(\lambda \hat{l} - (1 - \lambda ) \tilde{m} > 0\) and (7.12) holds. \(\square \)
Lemma 7.3
Let \(\underline{s}, \overline{s} \in \mathbb {R}_{+}\) be such that \(\overline{s} > \underline{s} > 0\). Then for \(\omega \in \Omega \) we have
Proof
Assume this is not true. Then there exists a pair \((x, y)\) of strictly positive numbers such that the event
Let \((\tilde{c}, \tilde{l}, \tilde{m})\) be an optimal one step strategy in the market locally without frictions with the price \(\overline{s}\), i.e. let \((\tilde{c}, \tilde{l}, \tilde{m})\) be an \(\mathcal {F}_{N-k}\)-measurable random variable which takes values in the set \(\mathbb {B}(x, y, \overline{s})\) such that
Let \((\hat{c}, \hat{l}, \hat{m})\) be an optimal one step strategy in the primary market, i.e. \((\hat{c}, \hat{l}, \hat{m})\) is an \(\mathcal {F}_{N-k}\)-measurable random variable taking values in the set \(\mathbb {A}(x, y, \underline{s}, \overline{s})\) such that
On the event \(A\) we clearly have \((\tilde{c}, \tilde{l}, \tilde{m}) = (\tilde{c}, 0, \tilde{m})\), \((\hat{c}, \hat{l}, \hat{m}) = (\hat{c}, \hat{l}, 0)\) and \(\hat{l}, \tilde{m} > 0\).
Let \(\lambda \) be an \(\mathcal {F}_{N-k}\)-measurable random variable taking values in the interval \([0, 1]\) such that on \(A\) we have \(\lambda \hat{l} - (1 - \lambda ) \hat{m} > 0\). From the property (7.12) we have that \((\lambda \hat{c} + (1 - \lambda ) \tilde{c}, \lambda \hat{l} - (1 - \lambda ) \tilde{m}, 0)\) is a well defined \(\mathcal {F}_{N-k}\)-measurable random variable, which on \(A\) takes values in the set \(\mathbb {A}(x, y, \underline{s}, \overline{s})\).
Since on \(A\) we have
from the strict concavity of the function \(g\) and of the random function
taking into account the property (7.2) we get that on \(A\) we have
which is a contradiction, and therefore (7.13) holds. \(\square \)
We are now in a position to compare the sets \(\hat{\mathbf {B}}_{N-k}(\overline{s})(\omega )\) and \(\mathbf {B}_{N-k}(\underline{s}, \overline{s})(\omega )\).
Proposition 7.2
Let \(\underline{s}, \overline{s} \in \mathbb {R}_{+}\) be such that \(\overline{s} > \underline{s} > 0\). Then for \(\omega \in \Omega \)
Proof
Notice first that by Lemma 7.3 we have that \(\hat{\mathbf {S}}_{N-k}(\overline{s})(\omega ) \cap \mathbf {B}_{N-k}(\underline{s}, \overline{s})(\omega ) = \emptyset \) for \(\omega \in \Omega \). For \((x,y) \in \mathbb {R}_{+}^{2}\) let
be an optimal \(\mathcal {F}_{N-k}\)-measurable one step strategy in the market locally without frictions with the price \(\overline{s}\), i.e.
Let \((x, y) \in \hat{\mathbf {B}}_{N-k}(\overline{s})(\omega )\). Without loss of generality we can assume that \(\tilde{l}(\omega )> \tilde{m}(\omega ) = 0\) and \((\tilde{c}(\omega ), \tilde{l}(\omega ), 0)\in \mathbb {B}(x, y, \overline{s})\) and by (5.6) we also have \((\tilde{c}(\omega ), \tilde{l}(\omega ), 0) \in \mathbb {A}(x, y, \underline{s}, \overline{s})\). Therefore,
which means that \((x,y) \in \mathbf {B}_{N-k}(\underline{s}, \overline{s})(\omega )\). Let now \((x,y) \in \mathbf {B}_{N-k}(\underline{s}, \overline{s})(\omega )\) and assume that \((x,y) \notin \hat{\mathbf {B}}_{N-k}(\overline{s})(\omega )\). By (7.13) we have also that \((x,y) \notin \hat{\mathbf {S}}_{N-k}(\overline{s})(\omega )\). Therefore, \((x,y)\in \hat{\mathbf {NT}}_{N-k}(\overline{s})(\omega )\) and for \((\tilde{c},0,0)\in \mathbb {B}(x, y, \overline{s})\) we have
Since also \((\tilde{c},0,0)\in \mathbb {A}(x, y, \underline{s}, \overline{s})\) we have that
which means that \((x,y)\in \mathbf {NT}_{N-k}(\underline{s}, \overline{s})(\omega )\), which is a contradiction. \(\square \)
Remark 7.1
From Corollary 5.1 taking into account Propositions 7.1 and 7.2 we obtain an alternative proof of the fact that the sets \(\mathbf {B}_{N-k}(\underline{s}, \overline{s})(\omega )\) and \(\mathbf {S}_{N-k}(\underline{s}, \overline{s})(\omega )\) are connected for \(\omega \in \Omega \).
8 Construction of Local Weak Shadow Price
In this section we construct a local weak shadow price. For this purpose we shall need a number of properties of the selling and buying cones corresponding to different asset prices in the market locally without friction. We start with an obvious
Lemma 8.1
Let \((x, y) \in \mathbb {R}_{+}^{2}\) and let \(0 < s_{1} \le s_{2}\). Then the following implications hold
and
The next lemma shows relations between selling and buying cones for different asset prices.
Lemma 8.2
Let \(s_{1}, s_{2} \in \mathbb {R}_{+}\) be such that \(0 < s_{1} \le s_{2}\). Then
and
Proof
We will prove only (8.4). The proof of (8.3) is similar. Fix \(\omega \in \Omega \). Let \((x,y)\in \hat{\mathbf {B}}_{N-k}(s_{2})(\omega )\). Then there is an optimal one step strategy \((\tilde{c},\tilde{l}, \tilde{m})\) such that
Let \((c^{*}, 0, m^{*})\) be an \(\mathcal {F}_{N-k}\)-measurable triple taking values in the set \(\mathbb {B}(x, y, s_{1})\). Taking into account that by (8.1) the random variable \((c^{*}, 0, m^{*})\) takes values in \(\mathbb {B}(x,y,s_{2})\), we have
Consequently, taking into account that the strategy \((c^{*}, 0, m^{*})\) was arbitrary, we have \((x,y) \notin \hat{\mathbf {S}}_{N-k}(s_{1})(\omega )\cup \hat{\mathbf {NT}}_{N-k}(s_{1})(\omega )\), which means that \((x,y)\in \hat{\mathbf {B}}_{N-k}(s_{1})(\omega )\). This completes the proof. \(\square \)
The following two properties of the no transaction zone will be important later.
Lemma 8.3
Assume that \(\underline{s}, \overline{s} \in \mathbb {R}_{+}\) are such that \(\overline{s} > \underline{s} > 0\). Then
Proof
From (8.3) and (8.4) together with (7.3) and (7.16) we have
\(\square \)
Lemma 8.4
If \(s_{1}, s_{2}, \hat{s} \in \mathbb {R}_{+}\) are such that \(0 < s_{1} \le \hat{s} \le s_{2}\) then
Proof
\(\square \)
In what follows we shall try to characterize \(\mathcal {F}_{N - k}\)-measurable random variables \({s}_{N - k}^{*}(x, y, \underline{s}, \overline{s})\), taking values in \([\underline{s}, \overline{s}]\), such that \((x,y)\in \hat{\mathbf {NT}}_{N - k}({s}_{N - k}^{*}(x, y, \underline{s}, \overline{s}))\).
Proposition 8.1
Let for \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\)
and
Then \(\overline{s}_{N - k}^{*}\) and \(\underline{s}_{N - k}^{*}\) are well defined \(\mathcal {F}_{N - k}\)-measurable random functions from \(\mathbb {D}\) to \((0, \infty )\). Moreover \(\overline{s}_{N - k}^{*}\) and \(\underline{s}_{N - k}^{*}\) are lower and upper semicontinuous on the event \(\{ (x, y) \in \mathbf {NT}_{N - k}(\underline{s}, \overline{s}) \}\). Furthermore, for each \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\) on the event \(\{ (x, y) \in \mathbf {NT}_{N - k}(\underline{s}, \overline{s}) \}\) we have
Proof
Fix \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\). We are going to show first that \(\overline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s})\) and \(\underline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s})\) are well defined \(\mathcal {F}_{N - k}\)-measurable random variables.
Notice first that from (8.5) for each \(\omega \in \Omega \) and each \((x, y) \in \mathbf {NT}_{N - k}(\underline{s}, \overline{s})(\omega )\) there is \(s_{N - k}(x, y, \omega ) \in [\underline{s}, \overline{s}]\) such that \((x, y) \in \hat{\mathbf {NT}}_{N - k}(s_{N - k}(x, y, \omega ))(\omega )\).
Furthermore
and
Moreover using (8.3) and (8.4) we obtain
and
For any \(t \in (\underline{s}, \overline{s})\) we have
and
which means that \(\overline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s})\) and \(\underline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s})\) are well defined \(\mathcal {F}_{N - k}\)-measurable random variables.
We now show that on the event \(\{ (x, y) \in \mathbf {NT}_{N - k}(\underline{s}, \overline{s}) \}\) we have (8.9). Fix \(\omega \in \{ (x, y) \in \mathbf {NT}_{N - k}(\underline{s}, \overline{s}) \}\) and let \((\overline{s}_{\omega }^{n})_{n = 1}^{\infty }\) and \((\underline{s}_{\omega }^{n})_{n = 1}^{\infty }\) be any two sequences from the interval \([\underline{s}, \overline{s}]\) convergent respectively to \(\overline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s})\) and \(\underline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s})\) such that for any \(n \in \mathbb {N}\) we have \((x, y) \in \hat{\mathbf {NT}}_{N - k}(\overline{s}_{\omega }^{n})(\omega ) \cap \hat{\mathbf {NT}}_{N - k}(\underline{s}_{\omega }^{n})(\omega )\). Then for \(n \in \mathbb {N}\) we have
By continuity of \({v}_{N - k}(x, y, \cdot )\) letting \(n \longrightarrow \infty \) we obtain
Therefore we have (8.9) on the event \(\{ (x, y) \in \mathbf {NT}_{N - k}(\underline{s}, \overline{s}) \}\).
It remains to show that \(\overline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s})\) and \(\underline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s})\) are also measurable functions (of their coordinates). For this purpose it suffices to prove their lower and upper semicontinuity, respectively, on the set \(\{ (x, y) \in \mathbf {NT}_{N - k}(\underline{s}, \overline{s}) \}\). Fix \(\omega \in \{ (x, y) \in \mathbf {NT}_{N - k}(\underline{s}, \overline{s}) \}\). Let \((x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})_{n = 1}^{\infty }\) be a sequence from \(\mathbb {D}\) convergent to \((x, y, \underline{s}, \overline{s})\). We have to show that
and
We shall show only the first inequality since the other can be shown in a similar way. There are three cases:
- \(1^{o}\): for infinitely many \(n \in \mathbb {N}\) we have \((x_{n}, y_{n}) \in \mathbf {S}_{N - k}(\underline{s}_{n}, \overline{s}_{n})(\omega )\). Choosing a suitable subsequence we can assume that \((x_{n}, y_{n}) \in \mathbf {S}_{N - k}(\underline{s}_{n}, \overline{s}_{n})(\omega )\) for \(n \in \mathbb {N}\) and then \(\overline{s}_{N - k}^{*}(x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})(\omega ) = \underline{s}_{n} \xrightarrow {n \rightarrow \infty } \underline{s}\). By (8.9) for \(n \in \mathbb {N}\) we have
$$\begin{aligned}&{v}_{N - k}(x_{n}, y_{n}, \overline{s}_{N - k}^{*}(x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})(\omega ))(\omega ) = \sup _{(c, l, m) \in {\mathbb {B}}(x_{n}, y_{n}, \overline{s}_{N - k}^{*}(x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})(\omega ))} \mathbb {E}[g(c) + \\&\gamma w_{N - k + 1}(x_{n} - c + \overline{s}_{N - k}^{*}(x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})(\omega ) \cdot (m - l), y_{n} - m + l, \underline{S}_{N - k + 1}, \\&\overline{S}_{N - k + 1}) | \mathcal {F}_{N - k}](\omega ) = \sup _{(c, l, m) \in {\mathbb {B}}(x_{n}, y_{n}, \overline{s}_{N - k}^{*}(x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})(\omega ))} \mathbb {E}[g(c) + \\&\gamma w_{N - k + 1}(x_{n} - c, y_{n}, \underline{S}_{N - k + 1}, \overline{S}_{N - k + 1}) | \mathcal {F}_{N - k}](\omega ). \end{aligned}$$By continuity of \({v}_{N - k}\) using Theorem 10.1 and letting \(n \longrightarrow \infty \) we obtain
$$\begin{aligned}&{v}_{N - k}(x, y, \underline{s})(\omega ) = \\&\sup _{(c, l, m) \in {\mathbb {B}}(x, y, \underline{s})} \mathbb {E}[g(c) + \gamma w_{N - k + 1}(x - c, y, \underline{S}_{N - k + 1}, \overline{S}_{N - k + 1}) | \mathcal {F}_{N - k}](\omega ), \end{aligned}$$which means that \((x, y) \in \hat{\mathbf {NT}}_{N - k}(\underline{s})(\omega )\) and \(\overline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s})(\omega ) = \underline{s}\).
- \(2^{o}\): for infinitely many \(n \in \mathbb {N}\) we have \((x_{n}, y_{n}) \in \mathbf {B}_{N - k}(\underline{s}_{n}, \overline{s}_{n})(\omega )\). As above we may assume (choosing a suitable subsequence) that \((x_{n}, y_{n}) \in \mathbf {B}_{N - k}(\underline{s}_{n}, \overline{s}_{n})(\omega )\) for \(n \in \mathbb {N}\). Then \(\overline{s}_{N - k}^{*}(x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})(\omega ) = \overline{s}_{n} \xrightarrow {n \rightarrow \infty } \overline{s} \ge \overline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s})(\omega )\).
- \(3^{o}\): for infinitely many \(n \in \mathbb {N}\) we have \((x_{n}, y_{n}) \in \mathbf {NT}_{N - k}(\underline{s}_{n}, \overline{s}_{n})(\omega )\). We may assume that \((x_{n}, y_{n}) \in \mathbf {NT}_{N - k}(\underline{s}_{n}, \overline{s}_{n})(\omega )\) for \(n \in \mathbb {N}\) and by (8.9) for \(n \in \mathbb {N}\) we have \((x_{n}, y_{n}) \in \hat{\mathbf {NT}}_{N - k}(\overline{s}_{N - k}^{*}(x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})(\omega ))(\omega )\). Let \((n_{l})_{l = 1}^{\infty }\) be such a subsequence that \(\overline{s}_{N - k}^{*}(x_{n_{l}}, y_{n_{l}}, \underline{s}_{n_{l}}, \overline{s}_{n_{l}})(\omega ) \xrightarrow {l \rightarrow \infty } \liminf _{n \longrightarrow \infty } \overline{s}_{N - k}^{*}(x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})(\omega ).\) By continuity of \({v}_{N - k}\) and Theorem 10.1 we obtain
$$\begin{aligned}&{v}_{N - k}(x_{n_{l}}, y_{n_{l}}, \overline{s}_{N - k}^{*}(x_{n_{l}}, y_{n_{l}}, \underline{s}_{n_{l}}, \overline{s}_{n_{l}})(\omega ))(\omega ) \\&\xrightarrow {l \rightarrow \infty } {v}_{N - k}(x, y, \liminf _{n \longrightarrow \infty } \overline{s}_{N - k}^{*}(x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})(\omega ))(\omega ) \end{aligned}$$and
$$\begin{aligned}&{v}_{N - k}(x_{n_{l}}, y_{n_{l}}, \overline{s}_{N - k}^{*}(x_{n_{l}}, y_{n_{l}}, \underline{s}_{n_{l}}, \overline{s}_{n_{l}})(\omega ) )(\omega ) = \\&\sup _{(c, 0, 0) \in {\mathbb {B}}(x_{n_{l}}, y_{n_{l}}, \overline{s}_{N - k}^{*}(x_{n_{l}}, y_{n_{l}}, \underline{s}_{n_{l}}, \overline{s}_{n_{l}})(\omega ) )} \mathbb {E}[g(c)+\\&\gamma w_{N - k + 1}(x_{n_{l}} - c, y_{n_{l}}, \underline{S}_{N - k + 1}, \overline{S}_{N - k + 1}) | \mathcal {F}_{N - k}](\omega ) \xrightarrow {l \rightarrow \infty } \\&\sup _{(c, 0, 0) \in {\mathbb {B}}(x, y, \liminf _{n \longrightarrow \infty } \overline{s}_{N - k}^{*}(x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})(\omega ) )} \mathbb {E}[g(c) + \\&\gamma w_{N - k + 1}(x - c, y, \underline{S}_{N - k + 1}, \overline{S}_{N - k + 1}) | \mathcal {F}_{N - k}](\omega ) . \end{aligned}$$Therefore \((x, y) \in \hat{\mathbf {NT}}_{N - k}(\liminf _{n \longrightarrow \infty }\overline{s}_{N - k}^{*}(x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})(\omega ))(\omega )\), which completes the proof of lower semicontinuity of \(\overline{s}_{N - k}^{*}\).
\(\square \)
Corollary 8.1
Let \(\hat{s}_{N - k} : \mathbb {D} \longrightarrow (0, \infty )\) be defined by the formula
for \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\). Then on the event \(\{ (x, y) \in \mathbf {NT}_{N - k}(\underline{s}, \overline{s}) \}\) we have
Proof
It follows directly from (8.6) and (8.9). \(\square \)
Remark 8.1
Note that due to Proposition 5.3, under differentiability of \(v_{N-k}\) the random function \(\hat{s}_{N-k}\) defined by (8.10) is the unique random function for which (8.11) holds, since then \(\underline{s}^{*}_{N-k}(x, y,\underline{s}, \overline{s} ) = \overline{s}^{*}_{N-k}(x, y, \underline{s}, \overline{s})\). When \(\underline{s}^{*}_{N-k}(x, y,\underline{s}, \overline{s}) < \overline{s}^{*}_{N-k}(x, y, \underline{s}, \overline{s})\), the random function \(\hat{s}_{N-k}\) for which (8.11) holds is not uniquely determined.
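In simple settings the price \(\hat{s}_{N-k}(x, y, \underline{s}, \overline{s})\) can be located numerically by exploiting the monotonicity of Lemma 8.2: as the quoted price grows, the buy zone shrinks and the sell zone grows. The sketch below is a toy one-period model with logarithmic utility (the two-point future price distribution \(\{6, 14\}\), the position \((3, 0.15)\) and the bounds \([9, 10.5]\) are illustrative assumptions of mine, not data from the paper): bisection finds the price at which the investor neither buys nor sells, i.e. at which \((x,y)\) lies in the no transaction zone.

```python
# Toy one-period illustration (log utility) of finding a price in
# [s_bid, s_ask] whose no-transaction zone contains the position (x, y).
# With a two-point future price {6, 14}, the optimal stock fraction
# b*(s) decreases in the quoted price s, while the fraction of wealth
# currently held in stock increases in s, so bisection locates the price
# at which the holdings are already optimal (no trade occurs).  All
# numbers are hypothetical and serve only to illustrate the mechanism.

def b_star(s, lo=6.0, hi=14.0):
    """Optimal log-utility stock fraction for a two-point return S1/s."""
    u, d = hi / s - 1.0, lo / s - 1.0          # net returns, d < 0 < u
    return min(1.0, max(0.0, -(u + d) / (2.0 * u * d)))

def held_fraction(x, y, s):
    return y * s / (x + y * s)

def nt_price(x, y, s_bid, s_ask, tol=1e-10):
    """Bisection on b*(s) - held fraction, which is decreasing in s."""
    a, b = s_bid, s_ask
    while b - a > tol:
        m = 0.5 * (a + b)
        if b_star(m) > held_fraction(x, y, m):
            a = m        # still wants to buy: the NT price is higher
        else:
            b = m        # wants to sell: the NT price is lower
    return 0.5 * (a + b)

s_hat = nt_price(x=3.0, y=0.15, s_bid=9.0, s_ask=10.5)
```

In this example the bisection converges to \(\hat{s}=142/15\approx 9.467\), the unique price in the interval at which the currently held stock fraction equals the frictionless optimal one.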
Having defined \(\hat{s}_{N-k}(x, y)\) we can formulate the main result of this section.
Theorem 8.1
For \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\) let \(\hat{S}_N(x,y, \underline{s}, \overline{s})= \underline{s}\) and for \(k=1,\ldots , N\)
where the random mapping \(\hat{s}\) is defined by (8.10). Then the family \(\{ \hat{S}_{N-k}(x, y, \underline{s}, \overline{s}) : (x, y, \underline{s}, \overline{s}) \in \mathbb {D}\}\) is a local weak shadow price at time moment \(N-k\), for \(k=1,2, \dots ,N\), i.e. it is \(\mathcal {F}_{N-k}\)-measurable and
and the optimal strategies at time moment \(N-k\) in the market with price \(\hat{S}_{N-k}\) and in the market with bid and ask prices \(\underline{s}\), \(\overline{s}\) respectively, are the same.
Proof
It is a consequence of previous facts, namely Propositions 7.1, 7.2 and (8.11). We have to show equality (8.13). By Propositions 7.1 and 7.2 we have equality (8.13) for \((x, y)\) in \(\mathbf {S}_{N-k}(\underline{s}, \overline{s})\) or in \(\mathbf {B}_{N-k}(\underline{s}, \overline{s})\), respectively. For \((x,y)\in \mathbf {NT}_{N-k}(\underline{s}, \overline{s})\) we have \((x, y) \in \hat{\mathbf {NT}}_{N-k}(\hat{s}_{N-k}(x, y,\underline{s}, \overline{s}))\), which again implies equality (8.13). Equality of the optimal strategies at time moment \(N-k\) in the markets with price \(\hat{S}_{N-k}\) and with bid and ask prices \(\underline{s}\), \(\overline{s}\) follows directly from (8.13). \(\square \)
9 Weak Shadow Price and Shadow Price (Strong Shadow Price)
In the previous four sections we considered a market which, at a given time moment, was locally without friction but with the asset price depending on our financial position, while at the other moments of time we had transaction costs (bid and ask prices). Now we shall introduce a shadow price over the whole time horizon. The main result of the paper states that the expected values of discounted utilities and the optimal strategies are the same for the original market with bid and ask prices and for the market with a suitably defined shadow price. We start with the following
Definition 9.1
A family \(\hat{S} := \{ \hat{S}_{n}(x, y, \underline{s}, \overline{s}) :\ n \in \{ 0, 1, 2, \dots , N \}, (x, y, \underline{s}, \overline{s}) \in \mathbb {D} \}\) will be called weak shadow price, if
- (i) for each \(n \in \{ 0, 1, 2, \dots , N \}\) and for each \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\) the random variable \(\hat{S}_{n}(x, y, \underline{s}, \overline{s})\) is \(\mathcal {F}_{n}\)-measurable,
- (ii) \(\forall _{n \in \{ 0, 1, 2, \dots , N \}} \forall _{(x, y, \underline{s}, \overline{s}) \in \mathbb {D}} \ \ \underline{s} \le \hat{S}_{n}(x, y, \underline{s}, \overline{s}) \le \overline{s}\),
- (iii) the optimal value of the functional (1.2) of an investor in the frictionless market starting at time \(n \in \{ 0, 1, 2, \dots , N \}\) from a position \((x, y)\) and trading stocks at the price \(\hat{S}_{n}(x, y, \underline{S}_{n}, \overline{S}_{n})\) is the same as in the market with transaction costs.
In the case when the market is governed by a family \(\hat{S}\) of asset prices satisfying conditions (i)–(ii) of Definition 9.1 we will say that we have a market with price system \(\hat{S}\).
Proposition 9.1
Let \(\hat{u} \in \mathcal {U}_{(x, y)}(\underline{S}, \overline{S})\) be the optimal strategy in the market with transaction costs with initial position \((x, y)\). Assume there exists a weak shadow price \(\hat{S}\). Then the strategy \(\hat{u}\) is also optimal in the frictionless market with price system \(\hat{S}\).
Proof
Denote by \(\mathcal {U}_{(x, y)}(\hat{S})\) the set of all admissible strategies in the frictionless market with price system \(\hat{S}\) and with initial position \((x, y)\). From condition (iii) of Definition 9.1 we have that
Therefore it remains to show that \(\hat{u}\) is admissible in the frictionless market with the price system \(\hat{S}\). Since for each \(n \in \{0, 1, 2, \dots , N \}\) we have
Taking into account that \(\hat{u} \in \mathcal {U}_{(x, y)}(\underline{S}, \overline{S})\) we have that
which means that, indeed, \(\hat{u} \in \mathcal {U}_{(x, y)}(\hat{S})\). This ends the proof. \(\square \)
In the following definition we stress the dependence on the initial position in the market with transaction costs.
Definition 9.2
For a given initial position \((x, y) \in \mathbb {R}_{+}^{2} \setminus \{ (0, 0) \}\) a process \(\tilde{S} = (\tilde{S}_{n})_{n = 0}^{N}\) depending on this initial position will be called shadow price (strong shadow price), if
- (i) it is adapted,
- (ii) \(\forall _{n \in \{ 0, 1, 2, \ldots , N \}} \ \ \underline{S}_{n} \le \tilde{S}_{n} \le \overline{S}_{n}\),
- (iii) the optimal value of the functional (1.2) in a frictionless market with price process \(\tilde{S}\) is the same as in the market with transaction costs with the initial position \((x, y)\).
In an analogous way to the proof of Proposition 9.1 we can show
Proposition 9.2
Let \(\hat{u} \in \mathcal {U}_{(x, y)}(\underline{S}, \overline{S})\) be the optimal strategy in the market with transaction costs with initial position \((x, y)\). If there exists a shadow price (strong shadow price) \(\tilde{S}\), then the strategy \(\hat{u}\) is also optimal in the frictionless market with price process \(\tilde{S}\).
One can notice a clear difference between weak and strong shadow prices. A weak shadow price is in fact a random field satisfying (i)–(iii) of Definition 9.1, while a strong shadow price is just a sequence of random variables whose choice is adjusted to the initial position at time \(0\).
We now formulate the main result of the paper.
Theorem 9.1
Let the family \(\hat{S}\) be defined by (8.12). Then \(\hat{S}\) is a weak shadow price. Furthermore, the optimal strategies in the market with shadow price are also optimal in the original market with bid and ask prices.
Proof
The proof is by backward induction. Our induction hypothesis \(I_k\) is the equality
for \(t\in \{N-k, N-k+1, \ldots , N \}\) and \((x,y)\in \mathbb {R}_{+}^{2}\setminus \left\{ (0,0)\right\} \) and the fact that optimal strategies in the markets with shadow price and bid and ask prices over time span \(\{N-k, N-k+1, \ldots , N \}\) coincide. First, consider the case \(k=0\). Let \((x, y) \in \mathbb {R}^{2}_{+}\backslash \{(0,0)\}\) be our position. Clearly, the shadow price \(\hat{S}_{N}(x, y, \underline{S}_{N}, \overline{S}_{N}) = \underline{S}_{N}\) because at time moment \(N\) it is optimal to sell all assets. For \((x, y) \in \mathbb {R}_{+}^{2}\setminus \{(0,0)\}\) we have
Therefore for \(k=0\) the hypothesis \(I_0\) is satisfied. Assume it also holds for \(k\le n-1\). Then for \((x, y) \in \mathbb {R}_{+}^{2}\setminus \{(0,0)\}\) we have that
Since by the Bellman equation
using (9.2) we obtain
which coincides with \(v_{N-n}\) defined in (5.2). By Theorem 8.1 we obtain \(I_n\). \(\square \)
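The message of the theorem can be checked numerically in a toy one-period model (all market data below, bid \(9\), ask \(11\), liquidation prices \(\{8, 13\}\), position \((12, 1)\) and \(\gamma =0.9\), are hypothetical and chosen only for illustration). For every price \(s\in [\underline{s},\overline{s}]\) the frictionless market dominates the bid and ask market, since any trade becomes weakly cheaper; one therefore expects the value of the market with transaction costs to be recovered as the minimum of the frictionless values over \([\underline{s},\overline{s}]\), attained at the shadow price.

```python
import numpy as np

# Hedged numerical check in a one-period toy model with log utility:
# for every price s in [bid, ask] the frictionless market dominates the
# bid-ask market, with (approximate) equality at the minimizing price,
# which plays the role of the shadow price.  All market data below
# (bid 9, ask 11, liquidation prices {8, 13}, position (12, 1),
# gamma = 0.9) are hypothetical.

gamma = 0.9
bid, ask = 9.0, 11.0
liq = np.array([8.0, 13.0])        # time-1 liquidation prices, prob 1/2 each
x0, y0 = 12.0, 1.0

c_grid = np.linspace(1e-3, x0 + bid * y0, 400)
d_grid = np.linspace(-y0, x0 / ask, 400)   # net stock trade (negative = sell)

def value(buy_price, sell_price):
    """Grid-search sup of E[ln c0 + gamma ln c1] over consumption and trade."""
    c, d = np.meshgrid(c_grid, d_grid, indexing="ij")
    cash = x0 - c - np.where(d > 0.0, buy_price * d, sell_price * d)
    stock = y0 + d
    feasible = (cash >= 0.0) & (stock >= 0.0)
    c1 = cash[..., None] + stock[..., None] * liq   # liquidate at time 1
    with np.errstate(invalid="ignore", divide="ignore"):
        obj = np.log(c) + gamma * np.log(c1).mean(axis=-1)
    return obj[feasible].max()

v_friction = value(ask, bid)                        # bid-ask market
s_grid = np.linspace(bid, ask, 81)
v_hat = np.array([value(s, s) for s in s_grid])     # frictionless markets
s_shadow = s_grid[np.argmin(v_hat)]
```

On identical control grids the frictionless value is never below the bid-ask value, and the minimum over the price grid comes out (up to grid error) equal to it, mirroring the role of \(\hat{S}\) in Theorem 9.1.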
Notice that
Corollary 9.1
Let \(\hat{S}\) be defined by (8.12). For each \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\) and for every \(k \in \{1, 2, 3, \dots , N \}\) we have that
In the next theorem we prove the existence of a strong shadow price and show the connection between strong and weak shadow prices.
Theorem 9.2
If \(\hat{u} \in \mathcal {U}_{(x, y)}(\underline{S}, \overline{S})\) is the optimal strategy in the market with transaction costs with initial position \((x, y)\) and \((\hat{x}_{n}, \hat{y}_{n})_{n = 0}^{N}\) are the corresponding market positions, then there exists a strong shadow price \(\tilde{S} = (\tilde{S}_{n})_{n = 0}^{N}\) which is of the form
for \(n \in \{ 0, 1, 2, \dots , N \}\), where \(\hat{S}\) is a weak shadow price.
Proof
Notice first that for \(n \in \{ 0, 1, 2, \dots , N-1 \}\) we have
where \(\hat{c}_n\) is the optimal consumption corresponding to \(\hat{u}\).
For \((x, y, \tilde{s}) \in \hat{\mathbb {D}}\) let
and for \(k \in \{ 1, 2, 3, \dots , N \}\) let
Clearly
By Theorem 8.1 and (8.13) we have that
and therefore by (9.6)
Assume now that for \(k \in \{2, \dots , N \}\)
By (8.13) we have again that
and therefore by (9.10) and (9.6) we have that
Consequently \(w_{0}({x},{y},\underline{S}, \overline{S})=\tilde{v}_{0}({x},{y}, \tilde{S}_{0})\) which means that \((\tilde{S}_n)_{n=0}^N\) is a strong shadow price. This ends the proof. \(\square \)
10 Examples
In this section we show two examples to which the results presented in the paper can be applied. The first example concerns the so-called Markov price model considered in [3]. Let \(\xi _{1}, \xi _{2}, \xi _{3}, \dots , \xi _{N}\) be a sequence of independent and identically distributed random variables such that \(\mathrm {supp}\, \xi _{1} = [-1, \infty )\). Moreover let \(\mathcal {F}_{n} := \sigma (\xi _{1}, \xi _{2}, \xi _{3}, \dots , \xi _{n})\) for \(n \in \{ 1, 2, 3, \dots , N \}\) and
where \(S_{0} > 0\) is a fixed constant. Define bid and ask prices
for \(n \in \{0, 1, 2, \dots , N \}\) with constants \(\mu \in [0,1)\) and \( \lambda \in [0, \infty )\).
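The price dynamics (10.1) and the proportional quotes (10.2) are straightforward to simulate. In the sketch below the distribution of \(\xi \) (a shifted lognormal, so that \(1+\xi >0\)) and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal simulation of the bid and ask prices (10.1)-(10.2): the price
# S_n is driven by i.i.d. factors 1 + xi_i with supp(xi_1) = [-1, inf),
# and the quotes are proportional spreads around S_n.  The distribution
# of xi used here (a shifted lognormal) and the parameter values are
# illustrative assumptions only.

rng = np.random.default_rng(0)

def simulate_bid_ask(S0=100.0, N=10, mu=0.02, lam=0.03, n_paths=5):
    xi = rng.lognormal(mean=-0.02, sigma=0.2, size=(n_paths, N)) - 1.0  # values in (-1, inf)
    S = S0 * np.cumprod(1.0 + xi, axis=1)                # S_n = S_0 * prod(1 + xi_i)
    S = np.concatenate([np.full((n_paths, 1), S0), S], axis=1)
    return (1.0 - mu) * S, (1.0 + lam) * S               # bid and ask paths

bid_paths, ask_paths = simulate_bid_ask()
```

Every path stays strictly positive and the ask always exceeds the bid by the fixed relative spread \((1+\lambda )/(1-\mu )\).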
We are going to maximize the functional
in the market with the bid and ask price processes \(\underline{S}\) and \(\overline{S}\) given by (10.1) and (10.2).
By (3.1) and (3.6) the sequence of Bellman equations corresponding to the above problem is of the form
and
for \((x, y, s) \in \hat{\mathbb {D}}\) and \(k \in \{ 1, 2, 3, \dots , N \}\).
Moreover, by Corollary 3.3 for all \((x, y, s) \in \hat{\mathbb {D}}\) and for all \(k \in \{ 1, 2, 3, \dots , N \}\) there exists a unique
for which the supremum in (10.4) is attained.
where \((x, y, \hat{s}) \in \hat{\mathbb {D}}\) and \(k \in \{ 1, 2, 3, \dots , N \}\). By the properties of logarithm (see (6.3)) we get that the optimal consumption in (10.6) is given by the formula
Let \(u \in \mathcal {U}_{(x, y, (1 - \mu ) S_{0}, (1 + \lambda ) S_{0})}\) be an optimal strategy in the market with proportional transaction costs. Clearly the sequence \((c_{n}(x_{n}, y_{n}, S_{n}))_{n = 0}^{N}\) is uniquely determined.
By (10.7) there exists exactly one \(\mathcal {F}_{N - k}\)-measurable random variable \(\tilde{S}_{N - k}\) such that
Namely, from (10.8) we obtain that
The sequence \((\tilde{S}_{n})_{n = 0}^{N}\) given by (10.9) is the strong shadow price.
Note that the optimal one step strategy given by (10.5) after consumption can be characterized, by Lemma 3.3, by the cone \(\mathbf {NT}_{N - k}((1 - \mu ) s, (1 + \lambda ) s)\), which has the following representation
Denote by
and
the slopes of the lower and upper boundaries of \(\mathbf {NT}_{N - k}((1 - \mu ) s, (1 + \lambda ) s)\). Note that \(0 \le \overline{t}_{N - k}(s) \le \underline{t}_{N - k}(s) \le \infty \).
Assume now that at the time moment \(N - k\) after consumption the investor has the position \((x, y) \in \mathbb {R}_{+}^{2}\) and the bid and ask prices are given by \((1 - \mu ) s\) and \((1 + \lambda ) s\) respectively, with \(s > 0\). Then
- (i) he does not trade if and only if \(\overline{t}_{N - k}(s) \le \frac{y}{x} \le \underline{t}_{N - k}(s)\);
- (ii) he sells some amount of stocks if and only if \(\underline{t}_{N - k}(s) < \frac{y}{x}\);
- (iii) he buys some amount of stocks if and only if \(\frac{y}{x} < \overline{t}_{N - k}(s)\).
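The trading rule (i)–(iii) amounts to comparing the stock-to-cash ratio \(y/x\) with the two slopes. A small function makes this explicit; here `t_lower` stands for the slope \(\overline{t}_{N-k}(s)\) of the lower boundary and `t_upper` for the slope \(\underline{t}_{N-k}(s)\) of the upper boundary, and the slope values used in the quick check are hypothetical.

```python
import math

# Trading rule (i)-(iii): classify the post-consumption position (x, y)
# by its stock-to-cash ratio y/x against the slopes of the cone
# NT_{N-k}((1-mu)s, (1+lambda)s).  t_lower is the slope of the lower
# boundary (overline t in the text), t_upper that of the upper boundary
# (underline t); the numeric slopes below are hypothetical.

def trade_decision(x, y, t_lower, t_upper):
    """Return 'buy', 'sell' or 'hold' according to (i)-(iii)."""
    ratio = y / x if x > 0 else math.inf
    if ratio > t_upper:
        return "sell"    # (ii): ratio above the no-transaction cone
    if ratio < t_lower:
        return "buy"     # (iii): ratio below the cone
    return "hold"        # (i): inside the cone, no trade
```

For instance, with hypothetical slopes \(0.5\) and \(2\), the position \((1, 1)\) is held, \((1, 3)\) triggers a sale and \((1, 0.1)\) a purchase.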
Moreover, from (10.8) we obtain that for each \((x, y, s) \in \hat{\mathbb {D}}\) such that \(y > 0\) there exists a unique \(\hat{s}_{N - k}(x, y, s) \in [(1 - \mu ) s, (1 + \lambda ) s]\) for which
and
Consequently the family
where \(\hat{S}_{n}(x, y, (1 - \mu ) s, (1 + \lambda ) s) := \hat{s}_{n}(x, y, s)\), forms a weak shadow price and
where \(\tilde{S}_{n}\) is given by (10.9).
Another example concerns the case where \(S_n=\exp \left\{ \sigma X_n+f_n\right\} \), where \((X_n)\) is a fractional Brownian motion with Hurst parameter \(0<H<1\) sampled at integer times, \(f_n\) is a deterministic sequence, and the cost functional (10.3) is the same. By Proposition 4.2 of [11] the conditional full support condition is satisfied. Consequently the expected values in (10.4) and in (10.6) have to be replaced by conditional expectations, and the functions \(w_{N-k}\) and \({v}_{N-k}\) are random. The optimal strategies (10.5) are also random, as are the slopes of the lower and upper boundaries of \(\mathbf {NT}_{N - k}((1 - \mu ) s, (1 + \lambda ) s)\). Nevertheless (10.9) and (10.14) still define strong and weak shadow prices.
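A price path of this form can be simulated exactly via Cholesky factorization of the fBm covariance matrix; the values of \(\sigma \), \(H\) and the sequence \(f_n\) below are illustrative assumptions only:

```python
import numpy as np

def fbm_path(n: int, hurst: float, rng: np.random.Generator) -> np.ndarray:
    """Sample fractional Brownian motion X_1, ..., X_n exactly, via
    Cholesky factorization of the fBm covariance matrix."""
    t = np.arange(1, n + 1, dtype=float)
    # Cov(X_s, X_t) = (s^{2H} + t^{2H} - |t - s|^{2H}) / 2
    cov = 0.5 * (t[:, None] ** (2 * hurst) + t[None, :] ** (2 * hurst)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * hurst))
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

rng = np.random.default_rng(0)
n, sigma, hurst = 10, 0.2, 0.7            # hypothetical parameter values
f = 0.01 * np.arange(1, n + 1)            # hypothetical deterministic sequence f_n
S = np.exp(sigma * fbm_path(n, hurst, rng) + f)   # S_n = exp(sigma X_n + f_n)
assert S.shape == (n,) and np.all(S > 0)  # prices are strictly positive
```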
Similar results can also be obtained for the power utility function, both for Markov and fractional Brownian motion prices. The only difficulty is that we do not have an explicit form for \(\tilde{S}_{N-k}\), since we are not able to express \(\hat{s}\) as a function of \(\hat{c}_{N-k}\) from (6.6).
References
Bayer, C., Veliyev, B.: Utility maximization in a binomial model with transaction costs: a duality approach based on the shadow price process. Int. J. Theor. Appl. Financ. 17(4), 1–27 (2014)
Benedetti, G., Campi, L., Kallsen, J., Muhle-Karbe, J.: On the existence of shadow prices. Financ. Stoch. 17(4), 801–818 (2013)
Bobryk, R., Stettner, L.: Discrete time portfolio selection with proportional transaction costs. Prob. Math. Stat. 19, 135–248 (1999)
Castaing, C., Valadier, M.: Convex Analysis and Measurable Multifunctions. Lecture Notes in Mathematics, vol. 580. Springer, Berlin (1977)
Choi, J.H., Sirbu, M., Zitkovic, G.: Shadow prices and well-posedness in the problem of optimal investment and consumption with transaction costs. SIAM J. Control Optim. 51(6), 4414–4449 (2013)
Czichowsky, C., Muhle-Karbe, J., Schachermayer, W.: Transaction costs, shadow prices in discrete time. SIAM J. Financ. Math. 5(1), 258–277 (2014)
Dynkin, E.B., Yushkevich, A.A.: Controlled Markov Processes. Springer, Berlin (1979)
Evstigneev, I.V.: Measurable selection and dynamic programming. Math. Methods Oper. Res. 1(3), 52–55 (1976)
Gerhold, S., Muhle-Karbe, J., Schachermayer, W.: Asymptotics and duality for the Davis and Norman problem. Stochastics 84(5–6), 625–641 (2012)
Gerhold, S., Muhle-Karbe, J., Schachermayer, W.: The dual optimizer for the growth-optimal portfolio under transaction costs. Financ. Stoch. 17(2), 325–354 (2013)
Guasoni, P., Rasonyi, M., Schachermayer, W.: Consistent price systems and face-lifting pricing under transaction costs. Ann. Appl. Probab. 18(2), 491–520 (2008)
Ikeda, N., Watanabe, S.: Stochastic Differential Equations and Diffusion Processes. North-Holland Publishing Company, Amsterdam (1981)
Herczegh, A., Prokaj, V.: Shadow price in the power utility case. Ann. Appl. Probab. (to appear)
Kabanov, Y., Stricker, Ch.: A teachers' note on no-arbitrage criteria. In: Séminaire de Probabilités XXXV. Lecture Notes in Mathematics, vol. 1755, pp. 149–152. Springer, Berlin (2001)
Kallsen, J., Muhle-Karbe, J.: On using shadow prices in portfolio optimization with transaction costs. Ann. Appl. Probab. 20(4), 1341–1358 (2010)
Kallsen, J., Muhle-Karbe, J.: Existence of shadow prices in finite probability spaces. Math. Methods Oper. Res. 79, 251–262 (2011)
Molchanov, I.: Theory of Random Sets. Springer, London (2005)
Rasonyi, M., Stettner, L.: On utility maximization in discrete-time financial market models. Ann. Appl. Probab. 15(2), 1367–1395 (2005)
Rokhlin, D.: On the game interpretation of a shadow price process in utility maximization problems under transaction costs. Financ. Stoch. 17, 819–838 (2013)
Acknowledgments
Research of Lukasz Stettner was supported by NCN Grant DEC-2012/07/B/ST1/03298.
Appendix
Proof of Proposition 2.1
The only nontrivial property is compactness of the set \({\mathbb {A}}(x, y, \underline{s}, \overline{s})\). It is clear (see (2.2)) that \({\mathbb {A}}(x, y, \underline{s}, \overline{s})\) is closed. Since \(m\le y+l\) and \(x-c+\underline{s}(y+l)-\overline{s}l\ge 0\), we have \(l\le {x-c+\underline{s}y\over \overline{s}-\underline{s}}\), from which boundedness, and hence compactness, of \({\mathbb {A}}(x, y, \underline{s}, \overline{s})\) follows; this also gives \((vii)\). \(\square \)
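The bound on \(l\) derived above can be tested numerically. The sketch below encodes the admissibility constraints of \({\mathbb {A}}(x, y, \underline{s}, \overline{s})\) as restated in the proof (consumption \(c\), stocks bought \(l\), stocks sold \(m\)); the sampling scheme is illustrative only:

```python
import random

def admissible(c, l, m, x, y, s_bid, s_ask):
    """Constraints defining A(x, y, s_bid, s_ask): nonnegative c, l, m with
    c <= x + s_bid*y, post-trade cash nonnegative, and m <= y + l."""
    return (0 <= c <= x + s_bid * y
            and x - c + s_bid * m - s_ask * l >= 0
            and y - m + l >= 0
            and m >= 0 and l >= 0)

random.seed(0)
x, y, s_bid, s_ask = 10.0, 5.0, 1.0, 1.5   # hypothetical position and prices
for _ in range(10_000):
    c = random.uniform(0, x + s_bid * y)
    l = random.uniform(0, 100)
    m = random.uniform(0, 100)
    if admissible(c, l, m, x, y, s_bid, s_ask):
        # l <= (x - c + s_bid*y) / (s_ask - s_bid), as in the proof
        assert l <= (x - c + s_bid * y) / (s_ask - s_bid) + 1e-9
```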
Proof of Theorem 2.1
We show first that for every sequence \((c_{n}, l_{n}, m_{n})_{n=1}^{\infty }\) such that for \(n \in \mathbb {N}\) we have \((c_{n}, l_{n}, m_{n}) \in \mathbb {A}(x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})\) we get
Assume that (A.1) does not hold. By compactness of the set \(cl\big ({\mathbb {A}}(x, y, \underline{s}, \overline{s}) \cup \bigcup _{n=1}^{\infty }{\mathbb {A}}(x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})\big )\) there is a subsequence \((c_{n_{k}}, l_{n_{k}}, m_{n_{k}})_{k=1}^{\infty }\) converging to \((c^{*}, l^{*}, m^{*})\) and a strictly positive \(\varepsilon \) such that
Since \(0 \le c_{n_k} \le x_{n_{k}} + \underline{s}_{n_{k}} y_{n_{k}}\), \(0 \le x_{n_{k}} - c_{n_{k}} + \underline{s}_{n_{k}} m_{n_{k}} - \overline{s}_{n_{k}} l_{n_{k}}\), \(0 \le y_{n_{k}} - m_{n_{k}} + l_{n_{k}}\) and \(0 \le m_{n_{k}}, l_{n_{k}}\), letting \(k\rightarrow \infty \), we obtain \(0 \le c^{*} \le x + \underline{s} y\), \(0 \le x - c^{*} + \underline{s} m^{*} - \overline{s} l^{*}\), \(0 \le y - m^{*} + l^{*}\), \(0\le m^{*}, l^{*}\), which means that \((c^{*}, l^{*}, m^{*}) \in \mathbb {A}(x, y, \underline{s}, \overline{s})\). This contradicts (A.2), so (A.1) follows.
We are going now to show that for any sequence \((c_n,l_n,m_n)\) taking values in \(\mathbb {A}(x,y,\underline{s}, \overline{s})\) we have
Assume that (A.3) does not hold. By compactness of \(\mathbb {A}(x,y,\underline{s}, \overline{s})\) there exists a subsequence \((c_{n_{k}}, l_{n_{k}}, m_{n_{k}})\) from \(\mathbb {A}(x, y, \underline{s}, \overline{s})\) converging to some \((c^{*}, l^{*}, m^{*}) \in \mathbb {A}(x, y, \underline{s}, \overline{s})\) and a strictly positive \(\varepsilon \) such that for \(k \in \mathbb {N}\) we have
We shall show that there is a sequence \((c_{n_k}, l_{n_k}, m_{n_k})_{k=1}^{\infty }\) taking values in \({\mathbb {A}}(x_{n_k}, y_{n_k}, \underline{s}_{n_k}, \overline{s}_{n_k})\) and convergent to \((c^{*}, l^{*}, m^{*})\), which will contradict (A.4). The construction of such a sequence (or a subsequence, for simplicity denoted again by \((c_{n_k}, l_{n_k}, m_{n_k})\)) taking values in \({\mathbb {A}}(x_{n_k}, y_{n_k}, \underline{s}_{n_k}, \overline{s}_{n_k})\) consists of several steps:
step 1. If \(c^{*}\ge x_{n_k}+\underline{s}_{n_k}y_{n_k}\) for infinitely many \(k\), then letting \(k\rightarrow \infty \) along such a subsequence we obtain \(c^*\ge x+\underline{s}y\). Therefore, \(c^*= x+\underline{s}y\), \(l^*=0\) and \(m^*=y\), and we let \(c_{n_k}:=x_{n_k}+ \underline{s}_{n_k}y_{n_k}\), \(l_{n_k}:=0\), \(m_{n_k}:=y_{n_k}\).
step 2. If \(0<c^{*}< x_{n_k}+\underline{s}_{n_k}y_{n_k}\), for infinitely many \(k\in \mathbb {N}\), then
with \(d_{n_k} := (x - x_{n_k}) + (\underline{s} - \underline{s}_{n_k}) m^* + (\overline{s}_{n_k} - \overline{s}) l^*\) and \(b_{n_k} := y - y_{n_k}\). Let \(\tilde{l}_{n}:= l^* + b_{n}\) and \(e_{n} := \overline{s}_{n} (\tilde{l}_{n} - l^*)\). Clearly, \(d_{n}, b_{n}, e_{n} \xrightarrow {n \rightarrow \infty } 0\) and \(m^*-\tilde{l}_{n_k}\le y_{n_k}\). We have four cases:
case a) for infinitely many \(k \in \mathbb {N}\) we have \(d_{n_k} < 0\) and \(b_{n_k} \le 0\). We can choose a further subsequence (to simplify notation denoted again by \(n_k\)) such that \(d_{n_k} < 0\) and \(b_{n_k} \le 0\). As \(d_{n_{k}} < 0\), we have \(0 \le x_{n_{k}} - c^* + \underline{s}_{n_{k}} m^* - \overline{s}_{n_{k}}l^* + d_{n_{k}} < x_{n_{k}} - c^* + \underline{s}_{n_{k}} m^* - \overline{s}_{n_{k}} l^*\) and, since \(b_{n_{k}} \le 0\), we have \(m^* - l^* \le y_{n_{k}} + b_{n_{k}} \le y_{n_{k}}\). Therefore, taking into account that \(0\le c^*< x_{n_k}+ \underline{s}_{n_k}y_{n_k}\), we obtain that \((c_{n_k}, l_{n_k}, m_{n_k}):=(c^*,l^*,m^*)\) takes values in \(\mathbb {A}(x_{n_k}, y_{n_k}, \underline{s}_{n_k}, \overline{s}_{n_k})\), and it is the required sequence.
case b) for infinitely many \(k \in \mathbb {N}\) we have \(d_{n_k} < 0\) and \(b_{n_k} > 0\), and as above we consider a further subsequence, denoted again by \(n_k\), such that \(d_{n_k} < 0\) and \(b_{n_k} > 0\). Then \(0 \le x_{n_{k}} - c^* + \underline{s}_{n_{k}} m^* - \overline{s}_{n_{k}}l^* + d_{n_{k}}< x_{n_{k}} - (c^* - e_{n_{k}}) + \underline{s}_{n_{k}} m^* - \overline{s}_{n_{k}} \tilde{l}_{n_{k}}.\) Since \(b_{n_{k}} > 0\), we have \(\tilde{l}_{n_{k}} = l^* + b_{n_{k}} > l^*\) and consequently \(e_{n_{k}} > 0\). Moreover, \(e_{n_{k}} \xrightarrow {k \rightarrow \infty } 0\), and for \(k \in \mathbb {N}\) sufficiently large \(0 < c^* - e_{n_{k}} < x_{n_{k}} + \underline{s}_{n_{k}} y_{n_{k}}\). Since \(0\le y-m^*+l^*=y_{n_{k}}-m^* + \tilde{l}_{n_{k}}\), we have that \((c^* - e_{n_{k}}, \tilde{l}_{n_{k}}, m^*) \in \mathbb {A}(x_{n_{k}}, y_{n_{k}}, \underline{s}_{n_{k}}, \overline{s}_{n_{k}})\) for \(k \in \mathbb {N}\) sufficiently large, and clearly \((c_{n_k}, l_{n_k}, m_{n_k}):=(c^* - e_{n_{k}}, \tilde{l}_{n_{k}}, m^*)\xrightarrow {k \rightarrow \infty } (c^*, l^*, m^*)\).
case c) for infinitely many \(k\in \mathbb {N}\) we have \(d_{n_k} \ge 0\) and \(b_{n_k} \le 0\), and we consider a further subsequence \(n_k\) such that \(d_{n_k}\ge 0\) and \(b_{n_k}\le 0\). Since \((d_{n_{k}})_{k=1}^{\infty }\) converges to zero, for sufficiently large \(k \in \mathbb {N}\) we have \(0 < c^* - d_{n_{k}} \le c^* < x_{n_{k}} + \underline{s}_{n_{k}} y_{n_{k}}\). Moreover, \(0 \le x_{n_{k}} - (c^* - d_{n_{k}}) + \underline{s}_{n_{k}} m^* - \overline{s}_{n_{k}} l^*\) and \(0\le y+l^*-m^*= y_{n_k}+b_{n_k}+l^*-m^*\le y_{n_k}+l^*-m^* \), which together means that \((c^* - d_{n_{k}}, l^*, m^*) \in \mathbb {A}(x_{n_{k}}, y_{n_{k}}, \underline{s}_{n_{k}}, \overline{s}_{n_{k}})\) and \((c_{n_k}, l_{n_k}, m_{n_k}):=(c^* - d_{n_{k}}, l^*, m^*) \xrightarrow {k \rightarrow \infty } (c^*, l^*, m^*)\).
case d) for infinitely many \(k\in \mathbb {N}\) we have \(d_{n_k} \ge 0\) and \(b_{n_k} > 0\), and we consider a further subsequence, denoted again by \(n_k\), such that \(d_{n_k} \ge 0\) and \(b_{n_k} > 0\).
We have \(0 \le x - c^* + \underline{s} m^* - \overline{s} l^*=x_{n_{k}} - (c^* - d_{n_{k}} - e_{n_{k}}) + \underline{s}_{n_{k}} m^* - \overline{s}_{n_{k}} \tilde{l}_{n_{k}}\) and \(0\le y+l^*-m^*=y_{n_k}+\tilde{l}_{n_{k}}-m^*\). Since sequences \((d_{n_{k}})_{k=1}^{\infty }\) and \((e_{n_{k}})_{k=1}^{\infty }\) are nonnegative and converge to zero, then for \(k \in \mathbb {N}\) sufficiently large we have \(0 < c^* - d_{n_{k}} - e_{n_{k}} \le c^* < x_{n_{k}} + \underline{s}_{n_{k}} y_{n_{k}}\). This means that for \(k \in \mathbb {N}\) sufficiently large \((c^* - d_{n_{k}} - e_{n_{k}}, \tilde{l}_{n_{k}}, m^*) \in \mathbb {A}(x_{n_{k}}, y_{n_{k}}, \underline{s}_{n_{k}}, \overline{s}_{n_{k}})\). Therefore, we let \((c_{n_k}, l_{n_k}, m_{n_k}):=(c^* - d_{n_{k}} - e_{n_{k}}, \tilde{l}_{n_{k}}, m^*) \xrightarrow {k \rightarrow \infty } (c^*, l^*, m^*)\).
step 3. If \(0=c^{*}< x_{n_k}+\underline{s}_{n_k}y_{n_k}\), we have two cases: either \(\liminf _{k\rightarrow \infty }(x_{n_k}+\underline{s}_{n_k}y_{n_k})=0\), and then \(x+\underline{s}y=0\), so that \(x=y=c^*=m^*=l^*=0\) and we let \((c_{n_k}, l_{n_k}, m_{n_k}):=(0,0,0) \in \mathbb {A}(x_{n_{k}}, y_{n_{k}}, \underline{s}_{n_{k}}, \overline{s}_{n_{k}})\); or \(\liminf _{k\rightarrow \infty }(x_{n_k}+\underline{s}_{n_k}y_{n_k})>0=c^*\). In the latter case we have \(0< x+ \underline{s} y\), so there are \(c^*_n>0\) with \(c^*_n \rightarrow 0\) such that \((c^*_n, l^*, m^*)\in \mathbb {A}(x,y,\underline{s}, \overline{s})\), which, using step 2, can be approximated by elements of \({\mathbb {A}}(x_{n_k}, y_{n_k}, \underline{s}_{n_k}, \overline{s}_{n_k})\). Since \(c^*_n\rightarrow 0\), a diagonal argument then yields the required sequence \((c_{n_k}, l_{n_k}, m_{n_k})\).
Summarizing, we have (A.3) which together with (A.1) implies the convergence (2.8). \(\square \)
Proof of Proposition 3.1
The proof is by induction on \(k=1,2,\ldots ,N\). We show first the case \(k = 1\).
Fix \((x, y) \in \mathbb {R}_{+}^{2}\) and \((c, l, m) \in {\mathbb {A}}(x, y, \underline{s}, \overline{s})\). Assume the sequence \((x_{n}, y_{n}, c_{n}, l_{n}, m_{n})_{n=1}^{\infty }\) is such that for \(n \in \mathbb {N}\) we have \((x_{n}, y_{n}) \in \mathbb {R}_{+}^{2}\) and \((c_{n}, l_{n}, m_{n}) \in {\mathbb {A}}(x_{n}, y_{n}, \underline{s}, \overline{s})\) and it converges to \((x, y, c, l, m)\). Consider the following two cases.
\(1^{o}\). \(x - c + \underline{s} m - \overline{s} l \ge 0\) and \(y - m + l \ge 0\), and at least one of these inequalities is strict.
From the convergence of the sequence \((x_n, y_n, c_{n}, l_{n}, m_{n})_{n=1}^{\infty }\) we have that there exist \(\varepsilon _1 \ge 0\) and \(\varepsilon _2 \ge 0\) such that \(\varepsilon _1+\varepsilon _2>0\) and for \(n \in \mathbb {N}\) sufficiently large we have \(x_{n} - c_{n} + \underline{s} m_{n} - \overline{s} l_{n} \ge \varepsilon _1\) and \(y_{n} - m_{n} + l_{n} \ge \varepsilon _2\).
Let \(\delta _1 := \sup _{n \in \mathbb {N}} (x_{n} - c_{n} + \underline{s} m_{n} - \overline{s} l_{n})\) and \(\delta _2 := \sup _{n \in \mathbb {N}}(y_{n} - m_{n} + l_{n})\). Clearly, \(0 \le \delta _1, \delta _2 < \infty \) and \(\delta _1+\delta _2>0\).
Consequently, for \(n \in \mathbb {N}\) sufficiently large we have
because the random function \(w_{N}(\cdot , \cdot , \underline{S}_{N}, \overline{S}_{N})\) is strictly increasing with respect to both variables.
Let
From the integrability of \(w_{N}(\varepsilon ^{X}, \varepsilon ^{Y}, \underline{S}_{N}, \overline{S}_{N})\) and \(w_{N}(\delta ^{X}, \delta ^{Y}, \underline{S}_{N}, \overline{S}_{N})\) using Lebesgue’s dominated convergence theorem we obtain
From the continuity of the function \(g:[0, x + \underline{s} y] \longrightarrow \mathbb {R} \cup \{ - \infty \}\) we finally obtain
\(2^{o}\). \(x - c + \underline{s} m - \overline{s} l = 0\) and \(y - m + l = 0\).
In the case when \(g(0)\) is finite we use a monotonicity argument as in the previous case. Consider now the case \(g(0)=-\infty \). In such a case
Since the sequence \((c_{n})_{n=1}^{\infty }\) is bounded from above, we have that \(- \infty \le g(c) < \infty \). Therefore, \(V_{N-1}(x, y, \underline{s}, \overline{s}, c, l, m) = - \infty \) and using our convention we have
It remains to show that
It suffices to show that any such sequence contains a subsequence which diverges to \(-\infty \). If the sequences \((x_{n_{k}} - c_{n_{k}} + \underline{s} m_{n_{k}} - \overline{s} l_{n_{k}})_{k=1}^{\infty }\) and \((y_{n_{k}} - m_{n_{k}} + l_{n_{k}})_{k=1}^{\infty }\) are positive and converge to \(x - c + \underline{s} m - \overline{s} l = 0\) and \(y - m + l = 0\) respectively, then there exists a subsequence \((k_{j})_{j=1}^{\infty }\) such that the sequences \((x_{n_{k_{j}}} - c_{n_{k_{j}}} + \underline{s} m_{n_{k_{j}}} - \overline{s} l_{n_{k_{j}}})_{j=1}^{\infty }\), \((y_{n_{k_{j}}} - m_{n_{k_{j}}} + l_{n_{k_{j}}})_{j=1}^{\infty }\) and \((c_{n_{k_{j}}})_{j=1}^{\infty }\) are decreasing. Consequently the sequence \((V_{N-1}(x_{n_{k_{j}}}, y_{n_{k_{j}}}, \underline{s}, \overline{s}, c_{n_{k_{j}}}, l_{n_{k_{j}}}, m_{n_{k_{j}}}))_{j=1}^{\infty }\) decreases to \(-\infty \) and by the monotone convergence theorem we obtain (A.7).
From Theorems 2.1 and 10.1 we get that the random function \(w_{N-1}(\cdot , \cdot , \underline{s}, \overline{s})\) is continuous.
In the case \(k>1\) we assume continuity of \(w_{N-k+1}\) and then show, similarly to the case \(k=1\), the continuity of the mapping

for \((x, y) \in \mathbb {R}_{+}^{2}\) and \((c,l,m) \in \mathbb {A}(x, y, \underline{s}, \overline{s})\), taking into account the integrability of \(w_{N-k+1}(x,y,\underline{S}_{N-k+1}, \overline{S}_{N-k+1})\) required by (A1). The continuity of \(w_{N-k}\) then follows from Theorems 2.1 and 10.1.
This completes the proof. \(\square \)
Proof of Lemma 3.1
From the continuity of the random function
we get that for \(\omega \in \Omega \) we have
where \(\mathbb {Q}\) stands for the rationals. Let \((c_n, l_n, m_n)_{n=0}^\infty \) be an enumeration of all elements of \(\mathbb {A}(x, y, \underline{s}, \overline{s})\cap \mathbb {Q}^3\). Define the sequence \((\hat{c}_n, \hat{l}_n, \hat{m}_n)_{n=0}^\infty \) of \(\mathcal {F}_{N-k}\)-measurable random variables taking values in \(\mathbb {A}(x, y, \underline{s}, \overline{s})\cap \mathbb {Q}^3\) in the following way. Let \((\hat{c}_0, \hat{l}_0, \hat{m}_0):=(c_0, l_0, m_0)\),
and
Then we define inductively
and
Directly from the construction we have that
and
for \(n=0,1,\ldots \). Therefore, for \(\omega \in \Omega \) we have
By Lemma 2 of [14] there is a random \(\mathcal {F}_{N-k}\)-measurable subsequence \((n_k)_{k=1}^{\infty }\) of \(\mathbb {N}\) and an \(\mathcal {F}_{N-k}\)-measurable random variable \((\hat{c}, \hat{l}, \hat{m})\) taking values in the set \(\mathbb {A}(x, y, \underline{s}, \overline{s})\) such that for each \(\omega \in \Omega \)
Letting \(k\rightarrow \infty \) along the subsequence \((n_k)\) in (A.9), taking into account (A.8) and assumption (A1), by continuity of the mapping
we obtain (3.7), which completes the proof. \(\square \)
Theorem 10.1
Let \((X, d)\) be a metric space and let the mapping \(x\longmapsto \mathcal {A}(x)\) be compact-valued and continuous in the Hausdorff metric. Assume the function \(\beta : X \times X \longrightarrow \overline{\mathbb {R}}\), where \(\overline{\mathbb {R}}\) stands for \(\mathbb {R}\cup \left\{ -\infty ,+\infty \right\} \), is continuous. Then the function \(\varphi : X \longrightarrow \overline{\mathbb {R}}\) defined in the following way:
is continuous.
Proof
Fix \(x \in X\). Assume the sequence \((x_{n})_{n=1}^{\infty }\) from \(X\) converges to \(x\). We shall prove that \(\varphi (x_{n}) \xrightarrow {n \rightarrow \infty } \varphi (x)\).
For \(n \in \mathbb {N}\) let \(y_{n} \in \mathcal {A}(x_{n})\) be such that
Since the sequence \((x_{n})_{n=1}^{\infty }\) is convergent, the set \(cl(\cup _{n=1}^{\infty } \mathcal {A}(x_{n}) \cup \mathcal {A}(x))\) is compact. Consequently there exists a subsequence \((y_{n_{k}})_{k=1}^{\infty }\) of \((y_{n})_{n=1}^{\infty }\) which converges to some \(y^{*} \in cl(\cup _{n=1}^{\infty } \mathcal {A}(x_{n}) \cup \mathcal {A}(x))\). Clearly \(y^{*} \in \mathcal {A}(x)\), since \(\mathcal {A}(x_{n_{k}}) \xrightarrow {k\rightarrow \infty } \mathcal {A}(x)\) in the Hausdorff metric. Furthermore, every convergent subsequence \((y_{n_{k}})_{k=1}^{\infty }\) of \((y_{n})_{n=1}^{\infty }\) converges to an element of \(\mathcal {A}(x)\). Therefore,
Since this holds for any convergent subsequence \((y_{n_{k}})_{k=1}^{\infty }\) of \((y_{n})_{n=1}^{\infty }\), then we have
Let now \(\overline{y} \in \mathcal {A}(x)\) be such that \(\varphi (x) = \beta (x, \overline{y})\). From the convergence \(\mathcal {A}(x_{n}) \xrightarrow {n \rightarrow \infty } \mathcal {A}(x)\) in Hausdorff metric there exists a sequence \((\overline{y}_{n})_{n=1}^{\infty }\) convergent to \(\overline{y}\) and such that \(\overline{y}_{n} \in \mathcal {A}(x_{n})\) for \(n \in \mathbb {N}\). Thus,
This means that
From (A.11) and (A.12) we obtain that
i.e. \(\varphi (x) = \lim _{n \rightarrow \infty } \varphi (x_{n})\) and the function \(\varphi \) is continuous. \(\square \)
Theorem 10.2
Let \((X, d)\) be a linear metric space and for each \(x \in X\) let \(\mathcal {C}(x) \subseteq X\) be a compact and convex set. Assume the function \(\beta : X \times X \rightarrow \overline{\mathbb {R}}\) is continuous and such that for \(x \in X\) the mapping \(\beta (x, \cdot ) : \mathcal {C}(x) \rightarrow \overline{\mathbb {R}}\) is strictly concave. Let the function \(\varphi : X \rightarrow \overline{\mathbb {R}}\) be defined as follows:
for \(x \in X\). If the mapping \(x\longmapsto \mathcal {C}(x)\) is continuous in Hausdorff metric, then for every \(x \in X\) there exists a unique \(\hat{c}(x) \in \mathcal {C}(x)\) such that
Furthermore, the mapping \(x\longmapsto \hat{c}(x)\) satisfying (A.14) is continuous.
Proof
By Theorem 10.1, from the continuity of the mapping \(\beta \) we get the continuity of \(\varphi \). We will prove first that for every \(x \in X\) there exists only one \(\hat{c}(x) \in \mathcal {C}(x)\) satisfying (A.14). Assume this is not true. Then there exist \(x \in X\) and \(\hat{c}_{1}(x), \hat{c}_{2}(x) \in \mathcal {C}(x)\) such that \(\hat{c}_{1}(x) \not = \hat{c}_{2}(x)\) and
By strict concavity of \(\beta (x, \cdot ): \mathcal {C}(x)\longrightarrow \mathbb {R}\) and convexity of the set \(\mathcal {C}(x)\), for every \(t \in (0, 1)\) we have \(t \hat{c}_{1}(x) + (1 - t) \hat{c}_{2}(x) \in \mathcal {C}(x)\) and
which is a contradiction. Consequently for every \(x \in X\) there exists a unique \(\hat{c}(x) \in \mathcal {C}(x)\) satisfying (A.14).
We will show now that the mapping \(x \longmapsto \hat{c}(x)\) is continuous. Fix \(x \in X\) and assume the sequence \((x_{n})_{n=1}^{\infty }\) converges to \(x\). Since \(cl (\cup _{n=1}^{\infty } \mathcal {C}(x_{n})\cup \mathcal {C}(x))\) is compact there exists a subsequence \((n_{k})_{k=1}^{\infty }\) such that the sequence \((\hat{c}(x_{n_{k}}))_{k=1}^{\infty }\) converges to some \(\tilde{c} \in \mathcal {C}(x)\). By continuity of \(\varphi \) and \(\beta \), we have
As \(\varphi (x) = \beta (x, \hat{c}(x))\) and \(\hat{c}(x)\) is uniquely determined, we have \(\tilde{c} = \hat{c}(x)\). Since this holds for every convergent subsequence of \((\hat{c}(x_{n}))_{n=1}^{\infty }\), the mapping \(x\longmapsto \hat{c}(x)\) is continuous. \(\square \)
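The uniqueness and continuity assertions of Theorem 10.2 can be illustrated numerically. The choices of \(\beta \) and \(\mathcal {C}(x)\) below are hypothetical examples, not taken from the text: with \(\mathcal {C}(x)=[0,x]\) and the strictly concave \(\beta (x,c)=\sqrt{c}+\sqrt{x-c}\), the unique maximizer is \(\hat{c}(x)=x/2\), which varies continuously in \(x\).

```python
import math

def argmax_on_grid(f, lo, hi, steps=200_000):
    """Brute-force maximizer of c -> f(c) on [lo, hi] (a sketch only)."""
    best_c, best_v = lo, f(lo)
    for i in range(1, steps + 1):
        c = lo + (hi - lo) * i / steps
        v = f(c)
        if v > best_v:
            best_c, best_v = c, v
    return best_c

# Hypothetical strictly concave beta(x, .) on C(x) = [0, x]:
beta = lambda x: (lambda c: math.sqrt(c) + math.sqrt(x - c))
for x in (1.0, 2.0, 4.0):
    c_hat = argmax_on_grid(beta(x), 0.0, x)
    assert abs(c_hat - x / 2) < 1e-3   # unique maximizer x/2, continuous in x
```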
Below we present a sufficient condition for condition \((A1)\) to be satisfied.
Lemma 10.1
If for \(k=0,1,\ldots ,N\)
and
for any sequences \((\sigma (1), \ldots , \sigma (N))\) and \((\tau (1), \ldots , \tau (N))\) taking values in the set \(\left\{ 0,1\right\} \), then condition (A1) is satisfied.
Proof
First, note that for every \((x, y) \in \mathbb {R}_{+}^{2} \setminus \{(0, 0)\}\) and \(\underline{s}, \overline{s} \in \mathbb {R}_{+}\) such that \(\overline{s} > \underline{s} > 0\) we have
for \(k = 0, 1, 2,\ldots , N\). This is a consequence of the following fact: the strategy under which at time \(N-k\) we sell all our stocks, never buy stocks afterwards, and at each of the time moments \(N-k, N-k+1,\ldots , N\) consume the fixed amount \(\frac{x + \underline{s} y}{k+1}\) is an admissible strategy. Thus, indeed, (A.17) holds and consequently for \(k = 0, 1, 2,\ldots , N\)
Note that, since \(\ln u \le 1 + u\) and \(u^{\alpha } \le 1 + u\) for every \(u > 0\), for every random variable \(S\) taking values in the interval \([0, \infty )\) the integrability of \(1 + S\) implies the integrability of \(g(S)^{+}\).
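The two elementary inequalities invoked here, \(\ln u \le 1 + u\) and \(u^{\alpha } \le 1 + u\) for \(u > 0\) and \(\alpha \in (0,1)\), can be checked on a grid; this is only a sanity check of the stated facts:

```python
import math

# For u > 0:  ln(u) <= 1 + u  (since ln u <= u - 1 <= 1 + u),
# and u**alpha <= 1 + u for alpha in (0, 1)  (since u**alpha <= max(1, u)).
for i in range(1, 10_001):
    u = i / 10.0                      # u ranges over (0, 1000]
    assert math.log(u) <= 1 + u
    for alpha in (0.1, 0.5, 0.9):
        assert u ** alpha <= 1 + u
```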
Note also that for \((x, y) \in \mathbb {R}_{+}^{2} \setminus \{(0, 0)\}\) and \(\underline{s}, \overline{s} \in \mathbb {R}_{+}\) such that \(\overline{s} > \underline{s} > 0\) the following inequalities hold
for \((c,l,m) \in \mathbb {A}(x, y, \underline{s}, \overline{s})\).
Define the sequence \((F_{N-k}(\cdot , \cdot ))^{N}_{k=0}\) of random functions from \(\mathbb {R}_{+}^{2}\) to \(\mathbb {R}_{+}\) inductively in the following way. Let \(F_{N}(x, y) := 1 + x + \underline{S}_{N} y\) and for \(k =1, 2, 3,\ldots , N\) let
Clearly the following implication holds
From the fact that for \(k = 0, 1, 2,\ldots , N\) we have \(0 < \underline{S}_{N-k} < \overline{S}_{N-k}\) and from the construction of the sequence \((F_{N-k}(\cdot , \cdot ))_{k=0}^{N}\) of random functions taking into account (A.16) we obtain
which completes the proof. \(\square \)
Below we formulate two simple lemmas without proofs.
Lemma 10.2
Let \(x, a > 0\) and for \(c \in [0, x]\)
Then \(\sup _{c \in [0, x]} G (c) = G(\hat{c})\), where \(\hat{c} = \frac{x}{1 + a}\).
Lemma 10.3
Let \(x, a > 0\) and for \(c \in [0, x]\)
where \(\alpha \in (0,1)\). Then \(\sup _{c \in [0, x]} F(c) = F(\hat{c})\), where \(\hat{c} = \frac{x}{1 + a^{\frac{1}{1 - \alpha }}}\).
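As a sanity check of Lemmas 10.2 and 10.3, assume the forms \(G(c) = \ln c + a \ln (x - c)\) and \(F(c) = c^{\alpha } + a (x - c)^{\alpha }\) (hypothetical here, chosen to be consistent with the stated maximizers). The closed-form maximizers can then be verified by brute force:

```python
import math

def argmax_on_grid(f, x, steps=100_000):
    """Brute-force maximizer of f over the open interval (0, x) (sketch only)."""
    return max((x * i / steps for i in range(1, steps)), key=f)

x, a, alpha = 2.0, 3.0, 0.5
# Assumed form G(c) = ln c + a*ln(x - c); maximizer x/(1 + a)  (Lemma 10.2)
G = lambda c: math.log(c) + a * math.log(x - c)
assert abs(argmax_on_grid(G, x) - x / (1 + a)) < 1e-3
# Assumed form F(c) = c**alpha + a*(x - c)**alpha;
# maximizer x / (1 + a**(1 / (1 - alpha)))  (Lemma 10.3)
F = lambda c: c ** alpha + a * (x - c) ** alpha
assert abs(argmax_on_grid(F, x) - x / (1 + a ** (1 / (1 - alpha)))) < 1e-3
```

Differentiating the assumed \(G\) gives \(1/c - a/(x-c) = 0\), i.e. \(\hat{c} = x/(1+a)\), matching Lemma 10.2; the analogous first-order condition for \(F\) yields the maximizer of Lemma 10.3.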
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Cite this article
Rogala, T., Stettner, L. Construction of Discrete Time Shadow Price. Appl Math Optim 72, 391–433 (2015). https://doi.org/10.1007/s00245-014-9285-x