1 Introduction

In this paper we study the problem of maximization of expected utility in a discrete time market with finite horizon and with transaction costs. We introduce the so-called weak shadow price, i.e. a portfolio-state-dependent price process taking values between the bid and ask prices for which the optimal value of expected utility in the corresponding frictionless market is the same as in the market with transaction costs. Using the weak shadow price we then construct a shadow price, called in this paper the strong shadow price: a sequence of random variables playing the role of asset prices, taking values between the bid and ask prices and depending on the initial portfolio position, such that the optimal value of expected utility in the market with these asset prices is the same as in the market with transaction costs.

The problem of existence and construction of a shadow price was first studied for the Black–Scholes model with transaction costs and discounted logarithmic utility (see [9, 10, 15]). Existence of a shadow price was then shown for a finite discrete time market in [16]. It turns out that in some cases one cannot find a frictionless market with a price process taking values between the bid and ask prices which gives the same optimal strategy as the market with transaction costs (see [2] and [6]).

In this paper we study a general discrete time finite horizon problem with a strictly concave utility function. We consider the so-called weak shadow price, i.e. a price system in an illiquid frictionless market, depending on our portfolio, for which the optimal value of expected utility (and thus also the optimal strategies) coincides with the optimal value of expected utility in the market with transaction costs. This price system is not a shadow price in the sense considered in [15] or [16]. It is in fact a more general notion, which enables us to construct later the strong shadow price studied in [15] and [16]. Furthermore, under our assumptions, for power and logarithmic utilities the strong and weak shadow prices are uniquely defined.

The method used in this paper differs significantly from those considered in the papers mentioned above (see [1, 2, 5, 6, 9, 10, 13, 15, 16, 19]). Since time is discrete, we do not have the differential structure of the model exploited in [15]. We also do not use the Lagrange method studied in [16]. Our method is based on strict concavity of the utility function, which yields uniqueness and continuity of the optimal strategies. We also use a number of geometric properties of the selling, buying and no transaction zones. The main construction of the weak shadow price is based on the Merton proportion, i.e. the optimal proportion between the value of stocks and the wealth.

We assume that on a probability space \((\Omega , \mathcal {F}, \mathbb {P})\) with filtration \((\mathcal {F}_{n})^{N}_{n=0}\) we are given strictly positive adapted processes \(\underline{S} = (\underline{S}_{n})_{n=0}^{N}\) and \(\overline{S}=(\overline{S}_{n})_{n=0}^{N}\) such that \(\overline{S}_{n} > \underline{S}_{n}\) for \(n = 0, 1, 2, \ldots , N\), satisfying almost surely the following version of the conditional full support (CFS) condition

$$\begin{aligned} conv \left( supp \ \mathbb {E}[(\underline{S}_{N-k}, \ldots , \underline{S}_{N})|\mathcal {F}_{N-k}]\right) = \{ \underline{S}_{N-k}\} \times [0, \infty )^{k},\nonumber \\ conv \left( supp \ \mathbb {E}[(\overline{S}_{N-k}, \ldots , \overline{S}_{N})|\mathcal {F}_{N-k}]\right) = \{ \overline{S}_{N-k}\} \times [0, \infty )^{k} \end{aligned}$$
(1.1)

for \(k = 0, 1, 2, \ldots , N\), where conv stands for convex hull and supp is the support of the random vector. This condition is similar to the condition (CFS) considered in [11].

Assume we are given a market \(\mathcal {M}\) with a safe bank account and a risky stock account with infinitely divisible assets. For simplicity the interest rate on the bank account is equal to \(0\). At each time moment \(n = 0, 1, \ldots , N\) we can buy or sell stocks, paying \(\overline{S}_{n}\) per unit bought or receiving \(\underline{S}_{n}\) per unit sold, respectively. Throughout the paper we assume that every conditional expected value is taken in a regular version, the existence of which is guaranteed by Theorem 3.1 in [12]. We shall also use the convention that \(\mathbb {E}(- \infty | \mathcal {G}) := - \infty \) for any \(\sigma \)-field \(\mathcal {G} \subseteq \mathcal {F}\).

Our financial position will be denoted by the pair \((x,y)\), where \(x\) is the amount on the bank account and \(y\) is the number of assets in our portfolio. Given a position \((x, y)\) at a fixed time moment we may trade stocks only in such a way that we do not go bankrupt. Taking into account that the random variables \(\underline{S}_{n}\) and \(\overline{S}_{n}\), which represent the bid and ask prices of the stocks, are fully supported (see [11]), we may choose an investment policy only in such a way that at the next time moment the amounts on the bank and stock accounts are nonnegative almost surely. Consequently, we have short selling and short buying constraints.

Our aim is to maximize the value:

$$\begin{aligned} \mathbf {J}(u) := \mathbb {E}\left( \sum ^{N}_{n=0} \gamma ^{n} g(c_{n})\right) \end{aligned}$$
(1.2)

over all \(u\) from the set of admissible strategies \(\mathcal {U}_{(x, y)}(\underline{S}, \overline{S})\) defined in Sect. 2, with a constant discount factor \(\gamma \in (0,1]\). Here the initial position \((x_0, y_0)=(x,y) \in \mathbb {R}_{+}^{2}\) is such that \(x + y > 0\), \(c_{n}\) is our consumption at time moment \(n = 0, 1, \ldots , N\), \(\underline{S}_0=\underline{s}\), \(\overline{S}_0=\overline{s}\), and \(g\) is a utility function, i.e. a strictly increasing, strictly concave function defined on \((0,\infty )\) with \(g(0)\) finite or \(g(0)=-\infty \). We shall also assume that \(g(u) = - \infty \) for \(u < 0\). The class of such utility functions contains in particular \(g(c) = \ln c\), \(g(c) = c^{\alpha }\) with \(\alpha \in (0,1)\) and \(g(c)=1-e^{-c}\).
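For concreteness, the utility functions named above can be encoded as follows; this is a minimal sketch in Python (the function names are ours), with the convention \(g(u)=-\infty \) for \(u<0\) built in.

```python
import math

def g_log(c):
    """g(c) = ln c; here g(0) = -infinity, and g(c) = -infinity for c < 0."""
    return math.log(c) if c > 0 else -math.inf

def g_power(c, alpha=0.5):
    """g(c) = c**alpha with alpha in (0, 1); g(0) = 0 is finite."""
    return c ** alpha if c >= 0 else -math.inf

def g_exp(c):
    """g(c) = 1 - exp(-c); again g(0) = 0 is finite."""
    return 1.0 - math.exp(-c) if c >= 0 else -math.inf
```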

We assume that the processes \(\underline{S}\) and \(\overline{S}\) are such that assumption (A1) (see Sect. 3), which guarantees integrability of certain finite horizon value functions, is satisfied.

We will introduce a notion of weak shadow price, i.e. a price system

$$\begin{aligned} \hat{S}=\{\hat{S}_{n}(x, y, \underline{s}, \overline{s}): n=0, 1,\ldots , N, (x, y,\underline{s}, \overline{s}) \in \mathbb {D} \} \end{aligned}$$

where

$$\begin{aligned} \mathbb {D} := \{ (x, y, \underline{s}, \overline{s}) \in \mathbb {R}_{+}^{4}: \ \overline{s} > \underline{s} > 0 \}, \end{aligned}$$
(1.3)

such that for \(n=0, 1,\ldots , N\):

$$\begin{aligned} \underline{S}_{n} \le \hat{S}_{n}(x, y, \underline{S}_{n}, \overline{S}_{n}) \le \overline{S}_{n} \end{aligned}$$

the random variable \(\hat{S}_{n}(x, y, \underline{S}_{n}, \overline{S}_{n})\) is \(\mathcal {F}_{n}\)-measurable for \((x, y) \in \mathbb {R}_{+}^{2} \setminus \{(0, 0)\}\), and the optimal expected value of the discounted utility functional (1.2) in the market with price system \(\hat{S}\) is the same as in the market \(\mathcal {M}\). More precisely, in this shadow market the current price of a unit of the stock depends on our position at the beginning of the period. In other words, we translate the problem of maximization of (1.2) in the liquid market with transaction costs into the problem of maximization of (1.2) in a frictionless illiquid market with price system \(\hat{S}\).

Then we construct a shadow price (strong shadow price): a sequence of random variables, depending on the initial position and taking values between the bid and ask prices, such that the optimal value of the functional (1.2) for the market with the shadow price is the same as for the market with transaction costs.

The problem of construction of weak and then strong shadow prices for the functional (1.2) is solved for all price processes \(\underline{S}\) and \(\overline{S}\) satisfying (1.1) and (A1). What is important, we do not impose any additional conditions (besides (1.1) and (A1)) on the processes \(\underline{S}\) and \(\overline{S}\), and we study the case of a general strictly concave utility function.

2 Properties of the Set of Constraints

In this section we introduce the constraints on admissible strategies. Generally speaking, strategies are admissible if they are adapted to the filtration \((\mathcal {F}_{n})^{N}_{n=0}\) and do not lead to bankruptcy almost surely. Note that because of the conditional full support condition (1.1), after a possible transaction we must have nonnegative positions in the bank and stock accounts, since otherwise with positive probability our wealth at the next time moment would be strictly negative.

For \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\) with \(\mathbb {D}\) defined in (1.3) let

$$\begin{aligned} \mathbb {A}(&x, y, \underline{s}, \overline{s}):= \{ (c, l, m) \in [0, x + \underline{s} y]\times \mathbb {R}^{2}_{+} :\nonumber \\&\forall _{s \in [0, \infty )}\ x - c + \underline{s} m - \overline{s} l + s (y - m + l) \ge 0 \}. \end{aligned}$$
(2.1)

Equivalently we have

$$\begin{aligned} \mathbb {A}(&x, y, \underline{s}, \overline{s})=\{(c, l, m) \in [0, x + \underline{s} y]\times \mathbb {R}^{2}_{+} : \nonumber \\&x - c + \underline{s} m - \overline{s} l \ge 0, y - m + l \ge 0\}. \end{aligned}$$
(2.2)

The set \({\mathbb {A}}(x, y, \underline{s}, \overline{s})\) consists of the one step consumption, buying and selling strategies we are allowed to use starting from the position \((x, y)\). Its important properties are summarized in Proposition 2.1 below.
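As a quick illustration of (2.2), here is a minimal membership check in Python (the function and variable names are ours, not from the paper):

```python
def in_A(c, l, m, x, y, s_bid, s_ask):
    """Membership test for (c, l, m) in A(x, y, s_bid, s_ask), using (2.2).

    c is consumption, l the number of stocks bought, m the number sold;
    s_bid < s_ask are the bid and ask prices (we sell at s_bid, buy at s_ask).
    """
    if not (0.0 <= c <= x + s_bid * y):        # consumption capped by liquidation value
        return False
    if l < 0.0 or m < 0.0:
        return False
    bank_after = x - c + s_bid * m - s_ask * l  # must stay nonnegative
    stock_after = y - m + l                     # must stay nonnegative
    return bank_after >= 0.0 and stock_after >= 0.0
```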

Proposition 2.1

Let \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\). Then we have

  1. (i)

    \({\mathbb {A}}(\rho x, \rho y, \underline{s}, \overline{s}) = \rho {\mathbb {A}}(x, y, \underline{s}, \overline{s}) \), for \(\rho \ge 0\),

  2. (ii)

    the set \({\mathbb {A}}(x, y, \underline{s}, \overline{s})\) is convex,

  3. (iii)

    for \(\overline{s} > \underline{s} > 0\) the set \({\mathbb {A}}(x, y, \underline{s}, \overline{s})\) is compact,

  4. (iv)

    for \(\overline{s} > \underline{s} > 0\) the following implications hold

    $$\begin{aligned} (0, \hat{l}, 0) \in {\mathbb {A}}(x, y, \underline{s},\overline{s}) \Longrightarrow \forall _{l \in [0, \hat{l}]} \ (0, \hat{l} - l, 0) \in {\mathbb {A}}(x - \overline{s} l, y + l, \underline{s}, \overline{s}), \end{aligned}$$
    (2.3)
    $$\begin{aligned} (0, 0, \hat{m}) \in {\mathbb {A}}(x, y, \underline{s},\overline{s}) \Longrightarrow \forall _{m \in [0, \hat{m}]} \ (0, 0, \hat{m} - m) \in {\mathbb {A}}(x + \underline{s} m, y - m, \underline{s}, \overline{s}), \end{aligned}$$
    (2.4)
    $$\begin{aligned} (c, l, m) \in {\mathbb {A}}(x, y, \underline{s}, \overline{s}) \Longrightarrow \forall _{\rho \in [0, 1]} \ (\rho c, \rho l, \rho m) \in {\mathbb {A}}(x, y, \underline{s}, \overline{s}), \end{aligned}$$
    (2.5)

    and

    $$\begin{aligned} (c, l, m) \in \mathbb {R}_{+}^{3} \setminus {\mathbb {A}}(x, y, \underline{s}, \overline{s}) \Longrightarrow \forall _{\rho \ge 1} \ (\rho c, \rho l, \rho m) \not \in {\mathbb {A}}(x, y, \underline{s}, \overline{s}), \end{aligned}$$
    (2.6)
  5. (v)

    for \((x_{1}, y_{1}), (x_{2}, y_{2}) \in \mathbb {R}_{+}^{2}\), \(\underline{s}, \overline{s} \in \mathbb {R}_{+}\) such that \(\overline{s} > \underline{s} > 0\) and all \(t \in [0, 1]\) the following inclusion holds

    $$\begin{aligned} t {\mathbb {A}}(x_{1}, y_{1}, \underline{s}, \overline{s}) + (1 - t){\mathbb {A}}(x_{2}, y_{2}, \underline{s}, \overline{s}) \nonumber \\ \subseteq {\mathbb {A}}(t x_{1} + (1 - t) x_{2}, t y_{1} + (1 - t) y_{2}, \underline{s}, \overline{s}), \end{aligned}$$
    (2.7)
  6. (vi)

    for \((x_{1}, y_{1},\underline{s}_{1}, \overline{s}_{1}), (x_{2}, y_{2},\underline{s}_{2}, \overline{s}_{2}) \in \mathbb {D}\) if \(x_{1} \le x_{2}, y_{1} \le y_{2}, \underline{s}_{1} \le \underline{s}_{2}\) and \(\overline{s}_{1} \ge \overline{s}_{2}\), we have \({\mathbb {A}}(x_{1}, y_{1}, \underline{s}_{1}, \overline{s}_{1}) \subseteq {\mathbb {A}}(x_{2}, y_{2}, \underline{s}_{2}, \overline{s}_{2}),\)

  7. (vii)

    if a sequence \((x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})\in \mathbb {D}\) converges to \((x_{0}, y_{0}, \underline{s}_{0}, \overline{s}_{0}) \in \mathbb {D}\), then the set \(cl\left( {\mathbb {A}}(x_{0}, y_{0}, \underline{s}_{0}, \overline{s}_{0})\cup \bigcup _{n=1}^{\infty }{\mathbb {A}}(x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})\right) \) is compact, where cl stands for the closure.

The proof is given in the Appendix.

Denote by \(h\) the Hausdorff metric defined on the space \(\mathcal {H}(\mathbb {R}^{3}_{+})\) of compact subsets of \(\mathbb {R}^{3}_{+}\) as follows

$$\begin{aligned} h(A,B):= \max \{ d(A,B), d(B,A)\} \end{aligned}$$

with \(d(A,B) := \sup \{ dist(a, B): a \in A\}\) and \(dist (x, A):= \inf \{ d(x,a) : a \in A\}\). Clearly \((\mathcal {H}(\mathbb {R}^{3}_{+}),h)\) is a complete metric space (see e.g. [4]). We have

Theorem 2.1

Let \((x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})_{n=1}^{\infty }\) be a sequence from \(\mathbb {D}\), which converges to \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\). Then

$$\begin{aligned} h(\mathbb {A}(x, y, \underline{s}, \overline{s}), \mathbb {A}(x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})) \xrightarrow {n\rightarrow \infty } 0. \end{aligned}$$
(2.8)

The proof is given in the Appendix.
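For intuition, the Hausdorff distance can be computed explicitly for finite point sets; the finiteness is a simplifying assumption made only for this sketch (the compact sets \(\mathbb {A}\) are of course not finite), and the function name is ours.

```python
import math

def hausdorff(A, B):
    """Hausdorff distance h(A, B) = max{d(A, B), d(B, A)} between two finite
    point sets in R^3, with d(A, B) = max over a in A of dist(a, B)."""
    def euclid(p, q):
        return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))
    def directed(P, Q):
        return max(min(euclid(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))
```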

3 Bellman Equations

Following Theorem 1 of [8] we now introduce a system of Bellman equations. For \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\) let

$$\begin{aligned} w_{N}(x, y, \underline{s}, \overline{s}) := g(x + \underline{s} y). \end{aligned}$$
(3.1)

The function \(w_{N}\) is continuous and concave. For \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\) and \((c, l, m) \in {\mathbb {A}}(x, y, \underline{s}, \overline{s})\) let

$$\begin{aligned} V_{N-1}(&x, y, \underline{s}, \overline{s}, c, l, m) := \nonumber \\&g(c) + \gamma w_{N}(x - c + \underline{s} m - \overline{s} l, y - m + l, \underline{S}_{N}, \overline{S}_{N}). \end{aligned}$$
(3.2)

It is obvious that the random function \(V_{N-1}\) is continuous on its domain for each \(\omega \in \Omega \).

Tacitly assuming integrability of \(V_{N-1}\) (with respect to \(\omega \)), by Theorem I.3.1 of [12] (see also [18]) there exists a regular conditional probability

$$\begin{aligned} \{p_{N-1}(\omega , A) \}_{\omega \in \Omega , A \in \mathcal {F}_{N-1}} \end{aligned}$$

given \(\mathcal {F}_{N-1}\) defined for \(\omega \in \Omega \) and \(A \in \mathcal {F}_{N-1}\) such that the mapping

$$\begin{aligned} \omega \longmapsto \int _{\Omega }V_{N-1}(x, y, \underline{s}, \overline{s}, c, l, m) (\omega ') p_{N-1}(\omega , d \omega ') \end{aligned}$$
(3.3)

is well defined for \(\omega \in \Omega \) and

$$\begin{aligned} \int _{\Omega }V_{N-1}(&x, y, \underline{s}, \overline{s}, c, l, m) (\omega ') p_{N-1}(\omega , d \omega ') = \nonumber \\&\mathbb {E}[V_{N-1}(x, y, \underline{s}, \overline{s}, c, l, m)|\mathcal {F}_{N-1}](\omega ) \end{aligned}$$
(3.4)

for \(\mathbb {P}\)-almost all \(\omega \in \Omega \).

In other words, the mapping defined in (3.3) is a version of the conditional expected value of \(V_{N-1}(x, y, \underline{s}, \overline{s}, c, l, m)\) given \(\mathcal {F}_{N-1}\), and, as we mentioned in the Introduction, in what follows we shall consider only such versions of conditional expected values.

For \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\) let

$$\begin{aligned} w_{N-1}(x, y, \underline{s}, \overline{s}) := \sup _{(c, l, m) \in {\mathbb {A}}(x, y, \underline{s}, \overline{s})}\mathbb {E}[V_{N-1}(x, y, \underline{s}, \overline{s}, c, l, m)|\mathcal {F}_{N-1}] \end{aligned}$$
(3.5)

and define inductively

$$\begin{aligned} V_{N-k}(&x, y, \underline{s}, \overline{s}, c, l, m) :=g(c) + \\&\gamma w_{N-k+1}(x - c + \underline{s} m - \overline{s} l, y - m + l, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) \end{aligned}$$

and

$$\begin{aligned} w_{N-k}(x, y, \underline{s}, \overline{s}) :=\sup _{(c, l, m) \in \mathbb {A}(x, y, \underline{s}, \overline{s})}\mathbb {E}[V_{N-k}(x, y, \underline{s}, \overline{s}, c, l, m) |\mathcal {F}_{N-k}], \end{aligned}$$
(3.6)

for \(k = 1, 2, \ldots , N\). By Theorem 1 of [8] we know that the optimal control problem with gain functional (1.2) is solved using the sequence of Bellman equations (3.6) introduced above. In what follows we shall assume that

(A1) :

bid and ask prices \(\underline{S} = (\underline{S}_{n})_{n=0}^{N}\) and \(\overline{S}=(\overline{S}_{n})_{n=0}^{N}\) are such that:

\(\forall _{(x, y) \in \mathbb {R}_{+}^{2} \setminus \{ (0 , 0) \}}\) we have integrability of \(w_i(x,y,\underline{S}_{i},\overline{S}_{i})\) (with respect to \(\omega \)) and \( \mathbb {E}g(x + \underline{S}_{i} y)^{-} < \infty \) as well as \(\forall _{(x, y, \underline{s}, \overline{s}) \in \mathbb {D}} \ \mathbb {E}w_{i}(x, y, \underline{s}, \overline{s}) < \infty \) for \(i=1,2,\ldots ,N\).
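To fix ideas, the backward recursion (3.1), (3.5) and (3.6) can be sketched numerically. The Python fragment below is an illustration only and rests on assumptions that are not part of the paper: the bid/ask pair is taken i.i.d. with two values (so conditional expectations become weighted sums), the utility is logarithmic, and the suprema are replaced by a search over a coarse grid of one-step strategies; the names GAMMA, SCENARIOS and grid are ours.

```python
import itertools, math

# Illustrative assumptions (not from the paper): the bid/ask pair is i.i.d.
# with two values, the utility is logarithmic, gamma is fixed, and the suprema
# in (3.5)-(3.6) are replaced by a search over a coarse strategy grid.
GAMMA = 0.95
SCENARIOS = [((0.8, 1.0), 0.5), ((1.2, 1.5), 0.5)]   # ((bid, ask), probability)

def g(c):
    """Logarithmic utility with g(c) = -infinity for c <= 0."""
    return math.log(c) if c > 0 else -math.inf

def w(k, x, y, s_bid, s_ask, grid=20):
    """Approximate w_{N-k}(x, y, s_bid, s_ask); by (3.1), w_N = g(x + s_bid*y).
    The cost grows exponentially in k, so this is usable only for small k."""
    if k == 0:
        return g(x + s_bid * y)
    wealth = x + s_bid * y
    best = -math.inf
    for ci, li, mi in itertools.product(range(grid + 1), repeat=3):
        c = wealth * ci / grid              # candidate consumption in [0, x + s_bid*y]
        l = (wealth / s_ask) * li / grid    # candidate number of stocks bought
        m = y * mi / grid                   # candidate number of stocks sold
        x1 = x - c + s_bid * m - s_ask * l  # bank account after the transaction
        y1 = y - m + l                      # stock account after the transaction
        if x1 < 0 or y1 < 0:
            continue                        # (c, l, m) not in A(x, y, s_bid, s_ask)
        cont = sum(p * w(k - 1, x1, y1, lo, hi, grid) for (lo, hi), p in SCENARIOS)
        best = max(best, g(c) + GAMMA * cont)
    return best
```

For instance, w(1, 1.0, 2.0, 0.8, 1.0) approximates \(w_{N-1}(1, 2, 0.8, 1)\) in this toy model.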

We have

Proposition 3.1

Under (A1) for \(\overline{s}>\underline{s}>0\) and \(k=1,2,\ldots ,N\) the random mappings

$$\begin{aligned} (x, y) \longmapsto w_{N-k}(x, y, \underline{s}, \overline{s}) \end{aligned}$$

and

$$\begin{aligned} (x ,y, c, l ,m)\longmapsto \mathbb {E}[V_{N-k}(x, y, \underline{s}, \overline{s}, c, l, m)|\mathcal {F}_{N-k}] \end{aligned}$$

(considered as a regular conditional expected value) with \((x,y)\in \mathbb {R}_+^2\setminus \left\{ (0,0)\right\} \) and \((c,l,m) \in \mathbb {A}(x, y, \underline{s}, \overline{s})\) are well defined continuous \(\mathcal {F}_{N-k}\)-measurable random functions.

The proof by induction is postponed to the Appendix.

In Lemma 10.1 in the Appendix we give sufficient conditions on the processes \((\underline{S}_{n})_{n=0}^{N}\) and \((\overline{S}_{n})_{n=0}^{N}\) under which assumption (A1) is satisfied.

Based on the continuity results of Proposition 3.1 we obtain the existence of selectors in the Bellman equations (3.6).

Lemma 3.1

Let \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\). Then there exists an \(\mathcal {F}_{N-k}\)-measurable random variable \((\hat{c}, \hat{l}, \hat{m})\) which takes values in the set \({\mathbb {A}}(x, y, \underline{s}, \overline{s})\) such that for \(\omega \in \Omega \) we have

$$\begin{aligned} w_{N-k}( x, y, \underline{s}, \overline{s})(\omega )&= \mathbb {E}[g(\hat{c}(\omega )) + \gamma w_{N-k+1}(x - \hat{c}(\omega ) + \underline{s} \hat{m}(\omega ) - \overline{s} \hat{l}(\omega ), \nonumber \\&\quad y - \hat{m}(\omega ) + \hat{l}(\omega ), \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) | \mathcal {F}_{N-k}](\omega ). \end{aligned}$$
(3.7)

The proof is given in the Appendix.

Remark 3.1

Notice that in Lemma 3.1, thanks to the suitable continuity, we obtain the existence of measurable selectors without the necessity of using the more general results of [8] or Theorem B of Section 6 in Chapter 2 of [7]. See also [17].

For \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\) denote by \(\mathcal {A}_{N-k}(x, y, \) \(\underline{s}, \overline{s})\) the set of all \(\mathcal {F}_{N-k}\)-measurable random variables taking values in the set \(\mathbb {A}(x, y, \underline{s}, \overline{s})\).

Corollary 3.1

Let \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\). Then

$$\begin{aligned} w_{N-k}(&x, y, \underline{s}, \overline{s}) = \sup _{(c^{*}, l^{*}, m^{*}) \in \mathcal {A}_{N-k}(x, y, \underline{s}, \overline{s})} \mathbb {E}[V_{N-k}(x, y, \underline{s}, \overline{s}, c^{*}, l^{*}, m^{*}) | \mathcal {F}_{N-k}]. \end{aligned}$$
(3.8)

The meaning of (3.8) is very important, because it says that, dealing with the Bellman equation for \(w_{N-k}\), we may consider not only deterministic triples from \(\mathbb {A}(x, y, \underline{s}, \overline{s})\) but also \(\mathcal {F}_{N-k}\)-measurable random variables taking values in this set.

We also have

Corollary 3.2

Let \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\). Let \((\hat{c}, \hat{l}, \hat{m}) \in \mathcal {A}_{N-k}(x, y, \underline{s}, \overline{s})\) be such that

$$\begin{aligned} w_{N-k}(x, y, \underline{s}, \overline{s}) = \mathbb {E}[V_{N-k}(x, y, \underline{s}, \overline{s}, \hat{c}, \hat{l}, \hat{m}) | \mathcal {F}_{N-k}]. \end{aligned}$$

Then for every random variable \((\tilde{c}, \tilde{l}, \tilde{m})\) from \(\mathcal {A}_{N-k}(x, y, \underline{s}, \overline{s})\) we have

$$\begin{aligned} \mathbb {E}[V_{N-k}(x, y, \underline{s}, \overline{s}, \hat{c}, \hat{l}, \hat{m}) | \mathcal {F}_{N-k}] \ge \mathbb {E}[V_{N-k}(x, y, \underline{s}, \overline{s}, \tilde{c}, \tilde{l}, \tilde{m}) | \mathcal {F}_{N-k}]. \end{aligned}$$

Now we will define the set \(\mathcal {U}_{(x, y)}(\underline{S}, \overline{S})\) of all admissible strategies in the market with transaction costs and with the initial position \((x, y) \in \mathbb {R}_{+}^{2}\). A sequence \(u = (u_{n})_{n=0}^{N} = (c_{n}, l_{n}, m_{n})_{n=0}^{N}\) is called an admissible strategy if for \(n = 0, 1,\ldots , N\) the triple \((c_{n}, l_{n}, m_{n}) \in \mathcal {A}_{n}(x_{n}, y_{n}, \underline{S}_{n}, \overline{S}_{n})\), where the sequences \((x_{n})_{n=0}^{N}\) and \((y_{n})_{n=0}^{N}\) are defined inductively in the following way:

$$\begin{aligned} {\left\{ \begin{array}{ll} (x_{0}, y_{0}) := (x, y)\\ x_{n+1} := x_{n} - c_{n} + \underline{S}_{n} m_{n} - \overline{S}_{n} l_{n} &{}\text{ for } n = 0, 1, 2,\ldots , N-1\\ y_{n+1} := y_{n} - m_{n} + l_{n} &{}\text{ for } n = 0, 1, 2,\ldots , N-1 \end{array}\right. }. \end{aligned}$$
(3.9)

Note that any admissible strategy \(u \in \mathcal {U}_{(x, y)}(\underline{S}, \overline{S})\) defines by (3.9) a unique predictable sequence \((x_{n}, y_{n})_{n = 0}^{N}\). Thus writing \(u \in \mathcal {U}_{(x, y)}(\underline{S}, \overline{S})\) we may think that \(u = (c_{n}, l_{n}, m_{n}, x_{n}, y_{n})_{n = 0}^{N}\).
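As a complement, a minimal sketch of the forward dynamics (3.9), reusing the membership check in_A from the sketch in Sect. 2; here prices and strategy are plain lists, i.e. one fixed scenario, and measurability of the strategy cannot, of course, be expressed in code.

```python
def run_strategy(x0, y0, prices, strategy):
    """Forward dynamics (3.9): prices is a list of pairs (S_bid_n, S_ask_n) and
    strategy a list of triples (c_n, l_n, m_n), n = 0, ..., N-1, along one fixed
    scenario; returns the positions (x_n, y_n) for n = 0, ..., N."""
    x, y = x0, y0
    path = [(x, y)]
    for (s_bid, s_ask), (c, l, m) in zip(prices, strategy):
        # in_A is the membership check from the Sect. 2 sketch
        assert in_A(c, l, m, x, y, s_bid, s_ask), "one-step strategy not admissible"
        x = x - c + s_bid * m - s_ask * l   # new bank account position
        y = y - m + l                        # new stock account position
        path.append((x, y))
    return path
```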

We have

Proposition 3.2

Let \(\hat{u}=(\hat{c}_{n}, \hat{l}_{n}, \hat{m}_{n})_{n=0}^{N}\) be an admissible strategy such that for the corresponding sequence of market positions \((\hat{x}_n,\hat{y}_n)\) defined by (3.9) and for \(k = 1, 2, 3,\ldots , N\) the following equalities hold

$$\begin{aligned} w_{N-k}(&\hat{x}_{N-k}, \hat{y}_{N-k}, \underline{S}_{N-k}, \overline{S}_{N-k}) = \nonumber \\ \mathbb {E}[&V_{N-k}(\hat{x}_{N-k}, \hat{y}_{N-k}, \underline{S}_{N-k}, \overline{S}_{N-k}, \hat{c}_{N-k}, \hat{l}_{N-k}, \hat{m}_{N-k}) | \mathcal {F}_{N-k}]. \end{aligned}$$
(3.10)

Then we have

$$\begin{aligned} \mathbb {E}[w_{0}(x, y, \underline{s}, \overline{s})] = \sup _{u \in \mathcal {U}_{(x, y)}(\underline{S}, \overline{S})}\mathbf {J}(u)=\mathbf {J}(\hat{u}). \end{aligned}$$
(3.11)

with \(\underline{S}_{0}=\underline{s}\) and \(\overline{S}_{0}=\overline{s}\).

Proof

It is obvious that in (3.11) we have “\(\le \)”, because \(\hat{u}=(\hat{c}_{n}, \hat{l}_{n}, \hat{m}_{n})_{n=0}^{N}\) is an admissible strategy and hence

$$\begin{aligned} \mathbf {J}(\hat{u})=\mathbb {E}\left[ \sum _{n=0}^{N} \gamma ^{n} g(\hat{c}_{n})\right] \le \sup _{u \in \mathcal {U}_{(x, y)}(\underline{S}, \overline{S})}\mathbf {J}(u). \end{aligned}$$

Moreover, for any \(u = (c_{n}, l_{n}, m_{n})_{n=0}^{N}\) \(\in \mathcal {U}_{(x, y)}(\underline{S}, \overline{S})\), from (3.10) and Corollary 3.2 we have

$$\begin{aligned} w_{0}( x_{0}, y_{0}, \underline{S}_{0}, \overline{S}_{0})&= \mathbb {E}[V_{0}(x_{0}, y_{0}, \underline{S}_{0}, \overline{S}_{0}, \hat{c}_{0}, \hat{l}_{0}, \hat{m}_{0}) | \mathcal {F}_{0}] \ge \\&\quad \mathbb {E}[V_{0}(x_{0}, y_{0}, \underline{S}_{0}, \overline{S}_{0}, c_{0}, l_{0}, m_{0}) | \mathcal {F}_{0}]. \end{aligned}$$

Iterating this argument over the successive time moments, using (3.10) and Corollary 3.2 at each step, and taking expectations, we obtain \(\mathbb {E}[w_{0}(x, y, \underline{s}, \overline{s})] \ge \mathbf {J}(u)\) for every \(u \in \mathcal {U}_{(x, y)}(\underline{S}, \overline{S})\), while for \(u=\hat{u}\) all the above inequalities become equalities. Thus, indeed, we have “\(\ge \)” in (3.11).

In effect, we have the equality in (3.11). This ends the proof. \(\square \)

To simplify the notation, any element of \(\mathcal {A}_{N-k}(x, y, \underline{s}, \overline{s})\) will also be called an admissible strategy.

Almost immediately we obtain

Lemma 3.2

The random functions \(w_{N-k}(\cdot , \cdot , \underline{s}, \overline{s})\) are concave for \(k \!=\! 0, 1, 2 ,\ldots , N\).

Proof

It follows easily by induction from concavity of the utility function \(g\). \(\square \)

The next result plays an important role in establishing uniqueness of the optimal strategies.

Theorem 3.1

Under (A1) the random mapping

$$\begin{aligned}(x, y) \longmapsto \mathbb {E}[w_{N-k+1}(x, y, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) | \mathcal {F}_{N-k}]\end{aligned}$$

is strictly concave for \(k = 1, 2, \ldots , N\).

Proof

We use induction in \(k=1,2, \ldots , N\). The case \(k=1\) follows directly from strict concavity of \(g\). Assume inductively strict concavity of the random mapping

$$\begin{aligned} (x, y) \longmapsto \mathbb {E}[w_{N-k+2}(x, y, \underline{S}_{N-k+2}, \overline{S}_{N-k+2}) | \mathcal {F}_{N-k+1}]. \end{aligned}$$

Let \(F(x, y) :=\mathbb {E} [w_{N-k+1}(x, y, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) | \mathcal {F}_{N-k}]\). By Lemma 3.2 the mapping \((x,y)\mapsto F(x,y)\) is concave. Assume this function is not strictly concave. Then there exist two different financial positions \((x_{1}, y_{1}), (x_{2}, y_{2}) \in \mathbb {R}_{+}^{2}\) such that for any \(t \in (0, 1)\) we have

$$\begin{aligned} F(t (x_{1}, y_{1}) + (1 - t)(x_{2}, y_{2})) = t F(x_{1}, y_{1}) + (1 - t) F(x_{2}, y_{2}). \end{aligned}$$
(3.12)

Let \((x_{3}, y_{3}) := t (x_{1}, y_{1}) + (1 - t)(x_{2}, y_{2})\) and \((\hat{c}_{i}, \hat{l}_{i}, \hat{m}_{i})\) be optimal one step strategies in \(w_{N-k+1}\) (the existence of which is guaranteed by Corollary 3.1) for \((x_{i}, y_{i})\) with \(i=1,2\). By concavity of \(g\) and \(w_{N-k+2}\) taking into account (3.12) we clearly have that \((\hat{c}_{3}, \hat{l}_{3}, \hat{m}_{3}) := t (\hat{c}_{1}, \hat{l}_{1}, \hat{m}_{1}) + (1 - t) (\hat{c}_{2}, \hat{l}_{2}, \hat{m}_{2})\) is a.s. optimal for \((x_3,y_3)\). Furthermore, by strict concavity of \(g\) we have a.s. that \(\hat{c}_3=\hat{c}_1=\hat{c}_2\). Therefore by (3.12) and (3.7) we have

$$\begin{aligned} \mathbb {E}[&w_{N-k+2}(x_{3} - \hat{c}_{3} + \underline{S}_{N-k+1} \hat{m}_{3} - \overline{S}_{N-k+1} \hat{l}_{3}, y_{3} - \hat{m}_{3} + \hat{l}_{3}, \nonumber \\&\ \ \ \ \ \ \ \ \ \ \ \underline{S}_{N-k+2}, \overline{S}_{N-k+2}) | \mathcal {F}_{N-k}] = \nonumber \\&t \mathbb {E}[w_{N-k+2}(x_{1} - \hat{c}_{1} + \underline{S}_{N-k+1} \hat{m}_{1} - \overline{S}_{N-k+1} \hat{l}_{1}, y_{1} - \hat{m}_{1} + \hat{l}_{1}, \nonumber \\&\ \ \ \ \ \ \ \ \ \ \ \ \ \underline{S}_{N-k+2}, \overline{S}_{N-k+2}) | \mathcal {F}_{N-k}] + \nonumber \\&(1 - t) \mathbb {E}[w_{N-k+2}(x_{2} - \hat{c}_{2} + \underline{S}_{N-k+1} \hat{m}_{2} - \overline{S}_{N-k+1} \hat{l}_{2}, y_{2} - \hat{m}_{2} + \hat{l}_{2}, \nonumber \\&\ \ \ \ \ \ \ \ \ \ \ \ \ \underline{S}_{N-k+2}, \overline{S}_{N-k+2}) | \mathcal {F}_{N-k}]. \end{aligned}$$
(3.13)

Since by concavity

$$\begin{aligned} \mathbb {E}[&w_{N-k+2}(x_{3} - \hat{c}_{3} + \underline{S}_{N-k+1} \hat{m}_{3} - \overline{S}_{N-k+1} \hat{l}_{3}, y_{3} - \hat{m}_{3} + \hat{l}_{3}, \nonumber \\&\ \ \ \ \ \ \ \ \ \ \ \ \underline{S}_{N-k+2},\overline{S}_{N-k+2}) | \mathcal {F}_{N-k+1}] \ge \nonumber \\&t \mathbb {E}[w_{N-k+2}(x_{1} - \hat{c}_{1} + \underline{S}_{N-k+1} \hat{m}_{1} - \overline{S}_{N-k+1} \hat{l}_{1},y_{1} - \hat{m}_{1} + \hat{l}_{1}, \nonumber \\&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \underline{S}_{N-k+2}, \overline{S}_{N-k+2}) | \mathcal {F}_{N-k+1}] + \nonumber \\&(1 - t) \mathbb {E}[w_{N-k+2}(x_{2} - \hat{c}_{2} + \underline{S}_{N-k+1} \hat{m}_{2} - \overline{S}_{N-k+1} \hat{l}_{2}, y_{2} - \hat{m}_{2} + \hat{l}_{2}, \nonumber \\&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \underline{S}_{N-k+2}, \overline{S}_{N-k+2}) | \mathcal {F}_{N-k+1}] \end{aligned}$$
(3.14)

we have equality in (3.13) only when we have equality a.s. in (3.14). By induction hypothesis the random mapping

$$\begin{aligned} (x, y) \longmapsto \mathbb {E}[w_{N-k+2}(x, y, \underline{S}_{N-k+2}, \overline{S}_{N-k+2}) | \mathcal {F}_{N-k+1}] \end{aligned}$$

is strictly concave so that from a.s. equality in (3.14) taking into account that \(\hat{c}_3=\hat{c}_2=\hat{c}_1\) we should have a.s.

$$\begin{aligned} {\left\{ \begin{array}{ll} x_{1} + \underline{S}_{N-k+1} \hat{m}_{1} - \overline{S}_{N-k+1} \hat{l}_{1} = x_{2} + \underline{S}_{N-k+1} \hat{m}_{2} - \overline{S}_{N-k+1} \hat{l}_{2} \\ y_{1} - \hat{m}_{1} + \hat{l}_{1} = y_{2} - \hat{m}_{2} + \hat{l}_{2} \end{array}\right. }. \end{aligned}$$
(3.15)

Since the strategies are optimal we have that \(\hat{m}_{1}\hat{l}_{1}=0=\hat{m}_{2}\hat{l}_{2}\). Therefore, the cases \(x_{1}\ge x_{2}\) and \(y_{1} > y_{2}\) or \(x_{2} > x_{1}\) and \(y_{2} \ge y_{1}\) are not allowed. Assume \(x_{1} < x_{2}\) and \(y_{1} \ge y_{2}\). Then \(\hat{m}_2=0=\hat{l}_1\) and solving (3.15) we obtain a.s.

$$\begin{aligned} {y_1-y_2 \over x_2-x_1}={1 \over \overline{S}_{N-k+1}} + {\hat{m}_1 \over x_2-x_1} \left( 1-{\underline{S}_{N-k+1} \over \overline{S}_{N-k+1}}\right) =:a+b \end{aligned}$$
(3.16)

Notice that \({y_1-y_2 \over x_2-x_1}\) is fixed, while \(\overline{S}_{N-k+1}\) is random and, by the conditional full support assumption (1.1), \(a={1 \over \overline{S}_{N-k+1}}\) can be arbitrarily large with positive probability; moreover \(\hat{m}_1\) is bounded by \(y_1\), so that \(b \ge 0\) is bounded. This contradicts the requirement that (3.16) holds a.s. The case \(x_{1} \ge x_{2}\) and \(y_{1} < y_{2}\) can be rejected in a similar way. Consequently, (3.12) cannot hold and we have strict concavity of \(F\). \(\square \)

Immediately from Theorem 10.2 we obtain

Corollary 3.3

For each \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\) there exists a unique \(\mathcal {F}_{N-k}\)-measurable random variable

$$\begin{aligned} (\hat{c}(x, y, \underline{s}, \overline{s}), \hat{l}(x, y, \underline{s}, \overline{s}), \hat{m}(x, y, \underline{s}, \overline{s})) \end{aligned}$$

which takes values in the set \(\mathbb {A}(x, y, \underline{s}, \overline{s})\) and such that

$$\begin{aligned} w_{N-k}(&x, y, \underline{s}, \overline{s}) = \mathbb {E}[g(\hat{c}(x, y, \underline{s}, \overline{s})) + \nonumber \\&\ \ \ \ \gamma w_{N-k+1}(x - \hat{c}(x, y, \underline{s}, \overline{s}) + \underline{s} \hat{m}(x, y, \underline{s}, \overline{s}) - \overline{s} \hat{l}(x, y, \underline{s}, \overline{s}), \nonumber \\&\ \ \ \ \ \ \ \ y - \hat{m}(x, y, \underline{s}, \overline{s}) + \hat{l}(x, y, \underline{s}, \overline{s}), \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) | \mathcal {F}_{N-k}]. \end{aligned}$$
(3.17)

Moreover, the random mapping

$$\begin{aligned} (x, y, \underline{s}, \overline{s}) \mapsto (\hat{c}_{N-k}(x, y, \underline{s}, \overline{s}), \hat{l}_{N-k}(x, y, \underline{s}, \overline{s}), \hat{m}_{N-k}(x, y, \underline{s}, \overline{s})) \end{aligned}$$

is continuous on the set \(\mathbb {D}\).

By simple induction we obtain

Lemma 3.3

For \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\) we have

$$\begin{aligned} w_{N-k}(\rho x, \rho y, \underline{s}, \overline{s}) = (1 + \gamma + \ldots + \gamma ^{k}) \ln \rho + w_{N-k}(x, y, \underline{s}, \overline{s}), \end{aligned}$$
(3.18)

when \(g(u) = \ln u\), while

$$\begin{aligned} w_{N-k}(\rho x, \rho y, \underline{s}, \overline{s}) = \rho ^{\alpha } \cdot w_{N-k}(x, y, \underline{s}, \overline{s}), \end{aligned}$$
(3.19)

when \(g(u) = u^{\alpha }\).
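As a quick numerical sanity check of (3.18) for \(g=\ln \), one can reuse the brute-force w, GAMMA and math from the sketch in Sect. 3; under the grid parametrization used there the candidate strategies scale exactly with \(\rho \), so the identity is reproduced up to floating-point error.

```python
# Check of (3.18) for g = ln and k = 1, reusing w, GAMMA and math from the
# Sect. 3 sketch; the grid of candidate strategies used there scales exactly
# with rho, so the identity holds up to floating-point error.
rho, x, y, s_bid, s_ask = 3.0, 1.0, 2.0, 0.8, 1.0
lhs = w(1, rho * x, rho * y, s_bid, s_ask)
rhs = (1 + GAMMA) * math.log(rho) + w(1, x, y, s_bid, s_ask)
print(abs(lhs - rhs))   # close to zero
```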

4 Properties of the Optimal Strategies

In this section, based on the Bellman equations introduced in Sect. 3, we shall characterize classes of optimal one step strategies. Let \((\underline{s}, \overline{s}) \in \mathbb {R}_{+}^{2}\) be such that \(\overline{s} > \underline{s} > 0\).

For \(k = 1, 2,\ldots , N\) let us define the following random sets corresponding respectively to no transaction, selling and buying zones:

$$\begin{aligned}&\mathbf {NT}_{N-k}(\underline{s}, \overline{s}):=\{ (x,y)\in \mathbb {R}^{2}_{+} : w_{N-k}(x, y, \underline{s}, \overline{s}) = \sup _{c \in [0, x]} \mathbb {E}[g(c) + \\&\ \ \ \ \ \ \gamma w_{N-k+1}(x - c, y, \underline{S}_{N-k+1}, \overline{S}_{N-k+1})|\mathcal {F}_{N-k}] \}, \\&\mathbf {S}_{N-k}(\underline{s}, \overline{s}):= \{ (x,y)\in \mathbb {R}^{2}_{+} : w_{N-k}(x, y, \underline{s}, \overline{s}) =\sup _{(c, 0,m) \in {\mathbb {A}}(x, y, \underline{s}, \overline{s})} \mathbb {E}[g(c) + \\&\ \ \ \ \ \ \gamma w_{N-k+1}(x - c + \underline{s} m, y - m, \underline{S}_{N-k+1}, \overline{S}_{N-k+1})|\mathcal {F}_{N-k}]\}\backslash \mathbf {NT}_{N-k}(\underline{s}, \overline{s}) \end{aligned}$$

and

$$\begin{aligned}&\mathbf {B}_{N-k}(\underline{s}, \overline{s}):=\{ (x,y)\in \mathbb {R}^{2}_{+} : w_{N-k}(x, y, \underline{s}, \overline{s}) =\sup _{(c,l,0) \in {\mathbb {A}}(x, y, \underline{s}, \overline{s})} \mathbb {E} [g(c) + \\&\ \ \ \ \ \ \gamma w_{N-k+1}(x - c - \overline{s} l,y + l, \underline{S}_{N-k+1}, \overline{S}_{N-k+1})|\mathcal {F}_{N-k}] \}\backslash \mathbf {NT}_{N-k}(\underline{s}, \overline{s}). \end{aligned}$$

If the bid and ask prices of a unit of a stock are \(\underline{s}, \overline{s}\) respectively, then after optimal consumption we do not trade, sell or buy stocks if our position is in \(\mathbf {NT}_{N-k}(\underline{s}, \overline{s})\), \(\mathbf {S}_{N-k}(\underline{s}, \overline{s})\) or in \(\mathbf {B}_{N-k}(\underline{s}, \overline{s})\) respectively. Furthermore, by Lemma 3.3 for \(g(u) = \ln u\) or \(g(u) = u^{\alpha }\) these sets are cones.
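For intuition, the zones can be explored numerically in a toy example. The sketch below (reusing g, GAMMA and SCENARIOS from the Sect. 3 sketch) classifies a position \((x,y)\) at the last trading date by brute force; the tolerance-based comparison is only a crude substitute for the definitions above, and mixed buy-and-sell trades are omitted since they only pay the spread.

```python
def classify_zone(x, y, s_bid, s_ask, grid=60, tol=1e-9):
    """Classify (x, y) into NT / S / B at the last trading date N-1 by brute
    force, reusing g, GAMMA and SCENARIOS from the Sect. 3 sketch."""
    wealth = x + s_bid * y

    def value(c, l, m):
        x1, y1 = x - c + s_bid * m - s_ask * l, y - m + l
        if c < 0 or c > wealth or l < 0 or m < 0 or x1 < 0 or y1 < 0:
            return -math.inf
        cont = sum(p * g(x1 + lo * y1) for (lo, hi), p in SCENARIOS)
        return g(c) + GAMMA * cont

    def best(allow_buy, allow_sell):
        vals = [value(wealth * ci / grid, 0.0, 0.0) for ci in range(grid + 1)]
        for ci in range(grid + 1):
            c = wealth * ci / grid
            for ki in range(1, grid + 1):
                if allow_sell:
                    vals.append(value(c, 0.0, y * ki / grid))
                if allow_buy:
                    vals.append(value(c, (wealth / s_ask) * ki / grid, 0.0))
        return max(vals)

    nt = best(allow_buy=False, allow_sell=False)
    sell = best(allow_buy=False, allow_sell=True)
    buy = best(allow_buy=True, allow_sell=False)
    if max(sell, buy) - nt <= tol:     # trading does not improve the value: NT zone
        return "NT"
    return "S" if sell > buy else "B"
```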

Lemma 4.1

For \(\underline{s}, \overline{s} \in \mathbb {R}_{+}\) such that \(\overline{s} > \underline{s} > 0\), any two of the random sets \(\mathbf {NT}_{N-k}(\underline{s}, \overline{s})\), \(\mathbf {S}_{N-k}(\underline{s}, \overline{s})\), \(\mathbf {B}_{N-k}(\underline{s}, \overline{s})\) have no common points.

Proof

Fix \(\underline{s}, \overline{s} \in \mathbb {R}_{+}\) such that \(\overline{s} > \underline{s} > 0\). Clearly, by the definition the random sets \(\mathbf {S}_{N-k}(\underline{s}, \overline{s})\) and \(\mathbf {B}_{N-k}(\underline{s}, \overline{s})\) do not have common points with \(\mathbf {NT}_{N-k}(\underline{s}, \overline{s})\). From the uniqueness of the optimal strategy (see Corollary 3.3) we obtain that \(\mathbf {S}_{N-k}(\underline{s}, \overline{s}) \cap \mathbf {B}_{N-k}(\underline{s}, \overline{s}) = \emptyset \). \(\square \)

Proposition 4.1

For \(\underline{s}, \overline{s} \in \mathbb {R}_{+}\) such that \(\overline{s} > \underline{s} > 0\) the random sets \(\mathbf {NT}_{N-k}(\underline{s}, \overline{s})(\omega )\), \(\mathbf {S}_{N-k}(\underline{s}, \overline{s})(\omega )\), \(\mathbf {B}_{N-k}(\underline{s}, \overline{s})(\omega )\) are connected for each \(\omega \in \Omega \).

Proof

Assume that \(\mathbf {NT}_{N-k}(\underline{s}, \overline{s})(\omega )\) is not connected for some \(\omega \in \Omega \). Then in the convex envelope of \(\mathbf {NT}_{N-k}(\underline{s}, \overline{s})(\omega )\) we should have either elements of \(\mathbf {S}_{N-k}(\underline{s}, \overline{s})(\omega )\) or of \(\mathbf {B}_{N-k}(\underline{s}, \overline{s})(\omega )\). Assume that we have there elements of \(\mathbf {S}_{N-k}(\underline{s}, \overline{s})(\omega )\). To simplify notation we shall skip the dependence on \(\omega \). Since the sets \(\mathbf {NT}_{N-k}(\underline{s}, \overline{s})\) and \(\mathbf {S}_{N-k}(\underline{s}, \overline{s})\) are closed, we may assume that there exist \((x_1,y_1), (x_1+\underline{s}m_1,y_1-m_1) \in \mathbf {NT}_{N-k}(\underline{s}, \overline{s})\) with \(m_1\) positive such that for some positive \(m'<m_1\) we have \((x_2,y_2):=(x_1+\underline{s}m',y_1-m')\) and \((x_2+\underline{s}m,y_2-m)\in \mathbf {S}_{N-k}(\underline{s}, \overline{s})\) for any \(m\in [0,m_1-m')\). Let \(c^*\) be an optimal consumption for \((x_2,y_2)\). Clearly, \((c^*,0,m_1-m')\) is an optimal one step strategy for \((x_2,y_2)\) and

$$\begin{aligned} w_{N-k}(x_2,y_2, \underline{s}, \overline{s})\!=\!w_{N-k}(x_2+ \underline{s} m,y_2\!-\!m,\underline{s}, \overline{s})=w_{N-k}(x_1+\underline{s} m_1,y_1-m_1,\underline{s}, \overline{s}). \end{aligned}$$

Furthermore, \(w_{N-k}(x_1,y_1, \underline{s}, \overline{s})\ge w_{N-k}(x_2, y_2, \underline{s}, \overline{s})\) and by concavity of \(w_{N-k}(\cdot , \cdot , \underline{s}, \overline{s})\) we should have \(w_{N-k}(x_1,y_1, \underline{s}, \overline{s})= w_{N-k}(x_2, y_2, \underline{s}, \overline{s})\), since otherwise using concavity we obtain that for \(m\in (0,m_1-m')\) we have \(w_{N-k}(x_2+\underline{s}m,y_2-m, \underline{s}, \overline{s})> w_{N-k}(x_2, y_2, \underline{s}, \overline{s})\).

If \(w_{N-k}(x_1,y_1, \underline{s}, \overline{s})= w_{N-k}(x_2, y_2, \underline{s}, \overline{s})\), then the strategy \((c^*,0,m_1-m')\) cannot be optimal for \((x_2,y_2)\) (by uniqueness of optimal strategies, see Corollary 3.3, selling is not allowed), which is a contradiction. The case when the convex envelope of \(\mathbf {NT}_{N-k}(\underline{s}, \overline{s})\) contains elements of \(\mathbf {B}_{N-k}(\underline{s}, \overline{s})\) can be rejected in a similar way. Since the set \(\mathbf {NT}_{N-k}(\underline{s}, \overline{s})(\omega )\) is closed and the right boundary of \(\mathbf {S}_{N-k}(\underline{s}, \overline{s})\) and the left boundary of \(\mathbf {B}_{N-k}(\underline{s}, \overline{s})\) are contained in \(\mathbf {NT}_{N-k}(\underline{s}, \overline{s})\), all three sets are connected. \(\square \)

5 Local Weak Shadow Price

Consider now the case when at a given time moment \(N-k\), where \(k= 0, 1,\ldots , N\), instead of the bid and ask prices \(\underline{s}, \overline{s}\) we have a single price \(\hat{s}\) at which we are allowed to both sell and buy assets, while at the next time moments we again have bid and ask prices.

Define the set

$$\begin{aligned} \hat{\mathbb {D}} := \{ (x, y, \hat{s}) \in \mathbb {R}_{+}^{3} : \hat{s} > 0 \}. \end{aligned}$$
(5.1)

For \((x, y, \hat{s}) \in \hat{\mathbb {D}}\) define

$$\begin{aligned} v_{N-k}( x, y, \hat{s})&:= \sup _{(c,l,m)\in \mathbb {B}(x,y,\hat{s})} \mathbb {E}[g(c) + \gamma w_{N-k+1}(x - c + \hat{s} (m - l), y - m\nonumber \\&\qquad + l, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) |\mathcal {F}_{N-k}], \end{aligned}$$
(5.2)

where

$$\begin{aligned} \mathbb {B}(x,y,\hat{s}):= \{&(c,l,m) \in [0, x + \hat{s} y]\times \mathbb {R}_{+}^{2}: \\&\forall _{s\in [0, \infty )}\ x - c + (m - l) \hat{s} + s (y - m + l) \ge 0 \}. \end{aligned}$$

or equivalently

$$\begin{aligned} \mathbb {B}(x, y, \hat{s}) \!:=\! \{(c, l, m) \in [0, x \!+\! \hat{s} y] \times \mathbb {R}_{+}^{2}: x - c + \hat{s} (m - l) \ge 0, y - m + l \ge 0\}. \end{aligned}$$

In fact, this is the set of constraints we impose on admissible strategies at time moment \(N-k\) in the case when the asset price is equal to \(\hat{s}\) (we have no frictions) and we do not want to have negative position in bank or stock account at the next time moment.

Let

$$\begin{aligned} \overline{\mathbb {B}}(x, y, \hat{s}) := \{(c, K) \in [0, x + \hat{s} y] \times \mathbb {R}: x - c + \hat{s} K \ge 0, y - K \ge 0\} \end{aligned}$$
(5.3)

for \((x, y, \hat{s}) \in \hat{\mathbb {D}}\). Clearly,

$$\begin{aligned} v_{N-k}(&x, y, \hat{s})= \sup _{(c,K)\in \overline{\mathbb {B}}(x,y,\hat{s})} \mathbb {E}[g(c) + \gamma w_{N-k+1}(x - c + \hat{s} K, \nonumber \\&\ \ \ \ \ \ \ \ y - K, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) |\mathcal {F}_{N-k}]. \end{aligned}$$
(5.4)

Moreover we have

Lemma 5.1

Let \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\). Then

$$\begin{aligned} (c, 0, m) \in {\mathbb {A}}(x, y, \underline{s}, \overline{s}) \Leftrightarrow (c, m) \in \overline{\mathbb {B}}(x, y, \underline{s}) \end{aligned}$$
(5.5)

and

$$\begin{aligned} (c, l, 0) \in {\mathbb {A}}(x, y, \underline{s}, \overline{s}) \Leftrightarrow (c, -l) \in \overline{\mathbb {B}}(x, y, \overline{s}). \end{aligned}$$
(5.6)
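For illustration, (5.3) can be transcribed in the same style as the membership check for \(\mathbb {A}\) from Sect. 2 (the function names are ours); the comments record the equivalences (5.5) and (5.6):

```python
def in_B_bar(c, K, x, y, s_hat):
    """Membership in the set from (5.3); K is the net number of shares sold
    (K < 0 means buying at the single price s_hat)."""
    return (0.0 <= c <= x + s_hat * y
            and x - c + s_hat * K >= 0.0
            and y - K >= 0.0)

# For l, m >= 0 the equivalences of Lemma 5.1 read, with in_A from the Sect. 2 sketch:
#   (5.5)  in_A(c, 0, m, x, y, s_bid, s_ask) == in_B_bar(c,  m, x, y, s_bid)
#   (5.6)  in_A(c, l, 0, x, y, s_bid, s_ask) == in_B_bar(c, -l, x, y, s_ask)
```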

By analogy to Theorem 2.1 we obtain

Proposition 5.1

For \((x, y, \hat{s}) \in \hat{\mathbb {D}}\) the set \(\overline{\mathbb {B}}(x, y, \hat{s})\) is convex and compact. Furthermore the mapping

$$\begin{aligned} \hat{\mathbb {D}} \ni (x,y,\hat{s}) \longmapsto \overline{\mathbb {B}}(x,y,\hat{s}) \end{aligned}$$

is continuous in the Hausdorff metric.

From Theorem 3.1, using also Theorems 10.1 and 10.2, we obtain

Proposition 5.2

The random mapping

$$\begin{aligned} \overline{\mathbb {B}}(x&, y, \hat{s})\ni (c, K)\longmapsto \mathbb {E}[g(c) + \nonumber \\&\gamma w_{N-k+1}(x - c + \hat{s} K , y - K, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) |\mathcal {F}_{N-k}] \end{aligned}$$
(5.7)

is strictly concave for \((x, y, \hat{s}) \in \hat{\mathbb {D}}\). Moreover for each \((x, y, \hat{s}) \in \hat{\mathbb {D}}\) there exists a unique \(\mathcal {F}_{N-k}\)-measurable random variable \((\hat{c}(x,y,\hat{s}), \hat{K}(x,y,\hat{s}))\) taking values in the set \(\overline{\mathbb {B}}(x, y, \hat{s})\) which is an optimal one step strategy, i.e.

$$\begin{aligned} v_{N-k}(&x, y, \hat{s}) = \mathbb {E}[g(\hat{c}(x,y,\hat{s})) + \nonumber \\&\ \ \ \ \ \ \ \ \ \ \gamma w_{N-k+1}(x - \hat{c}(x,y,\hat{s}) + \hat{s} \hat{K}(x,y,\hat{s}) , \nonumber \\&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ y - \hat{K}(x,y,\hat{s}), \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) |\mathcal {F}_{N-k}]. \end{aligned}$$
(5.8)

Furthermore, the random mapping \((x, y, \hat{s}) \longmapsto (\hat{c}(x,y,\hat{s}), \hat{K}(x,y,\hat{s}))\) is continuous.

We now introduce the notion of weak shadow price, which we consider first locally.

Definition 5.1

A family \(\{ \hat{S}_{N - k}(x, y, \underline{s}, \overline{s}) : (x, y, \underline{s}, \overline{s}) \in \mathbb {D} \}\) of random variables is called a local weak shadow price at time \(N-k\) if

  1. (i)

    for every \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\) the random variable \(\hat{S}_{N-k}(x, y, \underline{s}, \overline{s})\) is \(\mathcal {F}_{N - k}\) -measurable,

  2. (ii)

    \(\forall _{(x, y, \underline{s}, \overline{s}) \in \mathbb {D}} \ \underline{s} \le \hat{S}_{N-k}(x, y, \underline{s}, \overline{s}) \le \overline{s}\),

  3. (iii)

    \(\forall _{(x, y, \underline{s}, \overline{s}) \in \mathbb {D}} \ v_{N - k}(x, y, \hat{S}_{N-k}(x, y, \underline{s}, \overline{s})) = w_{N - k}(x, y, \underline{s}, \overline{s})\).

The notion of the local weak shadow price is crucial for the construction of the global weak shadow price. At time moment \(N-k\) we look for a price \(\hat{S}_{N-k}(x, y, \underline{s}, \overline{s})\) between the bid and ask prices for which the value of our functional, corresponding to the case when at time \(N-k\) we have just the one price \(\hat{S}_{N-k}(x, y, \underline{s}, \overline{s})\) and at the next time moments we again have bid and ask prices, is the same as in the case in which we have bid and ask prices at all times. The local weak shadow price depends on the values of the bid and ask prices \(\underline{s}, \overline{s}\) at time moment \(N-k\) and on the \(initial\) portfolio position at the beginning of this time moment.
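To illustrate the definition, here is a minimal numerical sketch (reusing g, GAMMA and SCENARIOS from the Sect. 3 sketch) of how one could search for such a price at the last trading date of that toy i.i.d. model: by Lemma 7.1 below, \(v_{N-k}(x,y,\hat{s}) \ge w_{N-k}(x,y,\underline{s},\overline{s})\) for every \(\hat{s} \in [\underline{s}, \overline{s}]\), so the minimizer of \(v\) over the interval is the natural candidate. The paper's actual construction in the following sections is analytic (via the Merton proportion), not a numerical scan.

```python
def v_last(x, y, s_hat, grid=100):
    """Brute-force approximation of v_{N-1}(x, y, s_hat) in (5.4): a single
    frictionless price s_hat now, bid/ask prices from SCENARIOS at time N."""
    best = -math.inf
    for ci in range(grid + 1):
        c = (x + s_hat * y) * ci / grid
        k_lo = (c - x) / s_hat                 # smallest admissible K (buying)
        for ki in range(grid + 1):
            K = k_lo + (y - k_lo) * ki / grid  # K ranges over [k_lo, y]
            x1, y1 = x - c + s_hat * K, y - K
            cont = sum(p * g(x1 + lo * y1) for (lo, hi), p in SCENARIOS)
            best = max(best, g(c) + GAMMA * cont)
    return best

def local_weak_shadow_price_last(x, y, s_bid, s_ask, n_prices=50):
    """Scan [s_bid, s_ask] for the price minimizing v_{N-1}(x, y, .); since
    v >= w on the whole interval, the minimizer is the natural candidate for
    the local weak shadow price at the last trading date."""
    prices = [s_bid + (s_ask - s_bid) * i / n_prices for i in range(n_prices + 1)]
    return min(prices, key=lambda s: v_last(x, y, s))
```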

For \(\hat{s} > 0\) and for \(k = 1,\ldots , N\) let

$$\begin{aligned}&\hat{\mathbf {NT}}_{N-k}(\hat{s}):=\{ (x,y)\in \mathbb {R}^{2}_{+} : v_{N-k}(x,y,\hat{s}) =\\&\ \ \ \ \ \ \ = \sup _{c \in [0, x]} \mathbb {E}[g(c) + \gamma w_{N-k+1}(x - c, y, \underline{S}_{N-k+1}, \overline{S}_{N-k+1})|\mathcal {F}_{N-k}] \}, \\&\hat{\mathbf {S}}_{N-k}(\hat{s}):= \{ (x,y)\in \mathbb {R}^{2}_{+} : v_{N-k}(x,y,\hat{s}) =\sup _{(c,0,m) \in \mathbb {B}(x,y,\hat{s})} \mathbb {E}[g(c) +\\&\ \ \ \ \ \ \ \gamma w_{N-k+1}(x - c + \hat{s} m, y - m, \underline{S}_{N-k+1}, \overline{S}_{N-k+1})|\mathcal {F}_{N-k}] \}\setminus \hat{\mathbf {NT}}_{N-k}(\hat{s}) \end{aligned}$$

and

$$\begin{aligned}&\hat{\mathbf {B}}_{N-k}(\hat{s}):=\{ (x,y)\in \mathbb {R}^{2}_{+} : v_{N-k}(x,y,\hat{s}) =\sup _{(c,l,0)\in \mathbb {B}(x,y,\hat{s})} \mathbb {E}[g(c) + \\&\ \ \ \ \ \ \ \gamma w_{N-k+1}(x - c - \hat{s} l, y + l,\underline{S}_{N-k+1}, \overline{S}_{N-k+1})|\mathcal {F}_{N-k}]\}\setminus \hat{\mathbf {NT}}_{N-k}(\hat{s}). \end{aligned}$$

The sets \(\hat{\mathbf {NT}}_{N-k}(\hat{s})\), \(\hat{\mathbf {S}}_{N-k}(\hat{s})\) and \(\hat{\mathbf {B}}_{N-k}(\hat{s})\) correspond to the no transaction, selling and buying zones in the case of a single selling and buying price equal to \(\hat{s}\). For \(g(u) = \ln u\) or \(g(u) = u^{\alpha }\) these sets are clearly cones.

Proposition 5.3

For each \(\omega \in \Omega \) there exists a continuous function \(f^{\omega } : \mathbb {R}_{+}^2 \longrightarrow \mathbb {R}_{+}^2\) such that

$$\begin{aligned} \hat{\mathbf {NT}}_{N - k}(\hat{s})(\omega ) = \{ (f^{\omega }(t,{\hat{s}})) \in \mathbb {R}_{+}^{2} : t \in \mathbb {R}_{+} \}. \end{aligned}$$
(5.9)

Furthermore if the mapping \((x,y) \mapsto v_{N-k}(x,y,\hat{s})\) is differentiable for any \(\hat{s}>0\) then for \(\hat{s}\ne \hat{s}'\) we have

$$\begin{aligned} \hat{\mathbf {NT}}_{N - k}(\hat{s})(\omega ) \cap \hat{\mathbf {NT}}_{N - k}(\hat{s}')(\omega )\subset [0,\infty ) \times \left\{ 0\right\} . \end{aligned}$$
(5.10)

Proof

From Proposition 5.2 and Theorem 3.1 we get that there exists a unique \(\mathcal {F}_{N - k}\)-measurable continuous random function \((\hat{c}, \hat{K}) : \hat{\mathbb {D}} \longrightarrow \mathbb {R}^{2}\) such that for each \((x, y, \hat{s}) \in \hat{\mathbb {D}}\) the random variable \((\hat{c}(x, y, \hat{s}), \hat{K}(x, y, \hat{s}))\) takes values in the set \(\overline{\mathbb {B}}(x, y, \hat{s})\) and for each \((x, y, \hat{s}) \in \hat{\mathbb {D}}\) we have that

$$\begin{aligned} v_{N-k}(&x, y, \hat{s}) = \mathbb {E}[g(\hat{c}(x,y,\hat{s})) + \gamma w_{N-k+1}(x - \hat{c}(x,y,\hat{s}) + \hat{s} \hat{K}(x,y,\hat{s}) , \\&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ y - \hat{K}(x,y,\hat{s}), \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) |\mathcal {F}_{N-k}]. \end{aligned}$$

Since on the line \(x+y\hat{s}=t\) there is a unique point belonging to the no transaction zone we have that

$$\begin{aligned} f^{\omega }(t,{\hat{s}})=(t+\hat{s}\hat{K}(t, 0, \hat{s}), -\hat{K}(t, 0, \hat{s})) \end{aligned}$$
(5.11)

from which (5.9) and continuity of \(f^{\omega }\) follows.

Assume now that \((\bar{x},\bar{y})\in \hat{\mathbf {NT}}_{N - k}(\hat{s})(\omega ) \cap \hat{\mathbf {NT}}_{N - k}(\hat{s}')(\omega )\) for \(\hat{s}<\hat{s}'\) and \(\bar{y}>0\). Since for \(s=\hat{s}\) or \(s=\hat{s}'\)

$$\begin{aligned} v_{N-k}(\bar{x}, \bar{y}, s) = \sup _{c\in [0,\bar{x}]}\mathbb {E}[g(c) + \gamma w_{N-k+1}(\bar{x} - c, \bar{y}, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) |\mathcal {F}_{N-k}] \end{aligned}$$
(5.12)

we have that \(v_{N-k}(\bar{x}, \bar{y}, \hat{s})=v_{N-k}(\bar{x}, \bar{y}, \hat{s}')=\hat{v}_{N-k}(\bar{x},\bar{y})\). Moreover, for \((x,y)\in \mathbb {R}_+^{2} \) such that \(x+y\hat{s}=\bar{x}+\bar{y}\hat{s}\) or \(x+y\hat{s}'=\bar{x}+\bar{y}\hat{s}'\) we have \(v_{N-k}(x, y, \hat{s})=v_{N-k}(x, y, \hat{s}')=v_{N-k}(\bar{x}, \bar{y}, \hat{s})\). Furthermore, one can easily show that for any \(\tilde{s}\in [\hat{s},\hat{s}']\), whenever \(x+y \tilde{s}=\bar{x}+\bar{y}\tilde{s}\) we also have \( v_{N-k}(x, y, \tilde{s})=v_{N-k}(\bar{x}, \bar{y}, \tilde{s})=v_{N-k}(\bar{x}, \bar{y}, \hat{s})\). Therefore the directional derivative of \(v_{N-k}(\cdot , \cdot ,\hat{s})\) along the line \(x+y \tilde{s}=\bar{x}+\bar{y}\tilde{s}\) should be equal to \(0\), as the derivative of a constant function; in particular, at \((\bar{x},\bar{y})\) we have

$$\begin{aligned} v_{N-k,x}^{\prime }(\bar{x}, \bar{y},\hat{s})(-\tilde{s})+ v_{N-k,y}^{\prime }(\bar{x}, \bar{y},\hat{s})=0 \end{aligned}$$
(5.13)

for any \(\tilde{s}\in (\hat{s},\hat{s}')\), which means that \(v_{N-k,x}^{\prime }(\bar{x}, \bar{y},\hat{s})=0= v_{N-k,y}^{\prime }(\bar{x}, \bar{y},\hat{s})\), which contradicts the fact that \(v_{N-k}(x, y, \hat{s})\) is strictly increasing in \(x\) and \(y\). Consequently we obtain (5.10). \(\square \)

Taking into account that \(\hat{\mathbf {NT}}_{N-k}(\hat{s})\) is the image of a continuous function \(f^\omega \) and has exactly one intersection point with each line \(x+\hat{s}y=t\) for \(t\ge 0\), we easily obtain

Corollary 5.1

The sets \(\hat{\mathbf {B}}_{N-k}(\hat{s})\) and \(\hat{\mathbf {S}}_{N-k}(\hat{s})\) are connected for \(\hat{s}>0\).

Remark 5.1

In the case, when \(g(u) = \ln u\) or \(g(u) = u^{\alpha }\), the sets \(\hat{\mathbf {NT}}_{N-k}(\hat{s})\), \(\hat{\mathbf {B}}_{N-k}(\hat{s})\) and \(\hat{\mathbf {S}}_{N-k}(\hat{s})\) are cones, and therefore from Proposition 5.3 we get that the set \(\hat{\mathbf {NT}}_{N-k}(\hat{s})\) is a half line starting from the point \((0, 0)\).

6 Optimal Consumption in the Markets Locally Without Friction with Logarithmic and Power Utility Functions

In this section we derive formulas for the optimal consumption in the market in which at a given time moment we have a single selling and buying price (no frictions). Notice first that in equation (5.4) we can replace the control variable \(K\) by \(b\in [0,1]\), representing the proportion of our wealth (after consumption) invested in the stock market. Then for \((x, y, \hat{s}) \in \hat{\mathbb {D}}\) we have

$$\begin{aligned}&v_{N-k}( x, y, \hat{s}):= \sup _{(c, K)\in \overline{\mathbb {B}}(x,y,\hat{s})} \mathbb {E}[g(c) + \gamma w_{N-k+1}(x - c + \hat{s} K, \nonumber \\&y - K, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) |\mathcal {F}_{N-k}] = \sup _{(c, b) \in [0, x + \hat{s} y] \times [0, 1]}\mathbb {E}[g(c) + \nonumber \\&\gamma w_{N-k+1}((1 - b)(x + \hat{s} y - c), \frac{b (x + \hat{s} y - c)}{\hat{s}}, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) |\mathcal {F}_{N-k}] \end{aligned}$$
(6.1)

In the case when \(g(u) = \ln u\) using Lemma 3.3 we obtain

$$\begin{aligned} v_{N-k}(x, y, \hat{s})&=\sup _{c\in [0, x + \hat{s} y]}[ \ln c + \gamma (1 + \gamma + \ldots + \gamma ^{k-1}) \ln (x - c + \hat{s} y)] + \nonumber \\&\quad \gamma \sup _{b \in [0, 1]}\mathbb {E}[w_{N-k+1}(1 - b, \frac{b}{\hat{s}}, \underline{S}_{N-k+1}, \overline{S}_{N-k+1})|\mathcal {F}_{N-k}]. \end{aligned}$$
(6.2)

and then by Lemma 10.2 the supremum is attained for \(c = \hat{c}_{N-k}(x, y, \hat{s})\), where

$$\begin{aligned} \hat{c}_{N-k}(x, y, \hat{s}) = \frac{x + \hat{s} y}{1 + \gamma + \cdots + \gamma ^{k}}. \end{aligned}$$
(6.3)

In the case when \(g(u) = u^{\alpha }\) by Lemma 3.3 we have

$$\begin{aligned} v_{N-k}(&x, y, \hat{s}) = \sup _{c \in [0, x + \hat{s} y]}\Big [c^{\alpha } + \gamma (x - c+ \hat{s} y)^{\alpha } \cdot \nonumber \\&\ \sup _{b\in [0,1]}\mathbb {E}[w_{N-k+1}(1 - b, \frac{b}{\hat{s}}, \underline{S}_{N-k+1}, \overline{S}_{N-k+1})|\mathcal {F}_{N-k}]\Big ]= \nonumber \\&\ \ \ \sup _{c \in [0, x + \hat{s} y]}[c^{\alpha } + \tilde{D}_{N-k}(\hat{s})\cdot (x - c+ \hat{s} y)^{\alpha }], \end{aligned}$$
(6.4)

where \(\tilde{D}_{N-k}(\hat{s}):= \gamma \sup _{b\in [0,1]}\mathbb {E}[w_{N-k+1}(1 - b, \frac{b}{\hat{s}}, \underline{S}_{N-k+1}, \overline{S}_{N-k+1})|\mathcal {F}_{N-k}]\). The supremum in

$$\begin{aligned} \sup _{c \in [0, x + \hat{s} y]}[c^{\alpha } + \tilde{D}_{N-k}(\hat{s})\cdot (x - c+ \hat{s} y)^{\alpha }] \end{aligned}$$
(6.5)

by Lemma 10.3 is attained for \(c = \hat{c}_{N-k}(x, y, \hat{s})\), where

$$\begin{aligned} \hat{c}_{N-k}(x, y, \hat{s}) = \frac{x + \hat{s} y}{1 + [\tilde{D}_{N-k}(\hat{s})]^{\frac{1}{1 - \alpha }}}. \end{aligned}$$
(6.6)

Substituting (6.3) into (6.2) and (6.6) into (6.4) we immediately obtain

Corollary 6.1

If \(g(u) = \ln u\) or \(g(u) = u^{\alpha }\), then the function \((x,y)\mapsto v_{N-k}(x, y, \hat{s})\) is differentiable and consequently we have (5.10).
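For completeness, the consumption rules (6.3) and (6.6) in code form (a direct transcription; the coefficient \(\tilde{D}_{N-k}(\hat{s})\) is taken as a given input, e.g. approximated numerically as in the earlier sketches):

```python
def c_hat_log(x, y, s_hat, k, gamma):
    """Optimal consumption (6.3) for g = ln: wealth divided by 1 + gamma + ... + gamma**k."""
    return (x + s_hat * y) / sum(gamma ** j for j in range(k + 1))

def c_hat_power(x, y, s_hat, D_tilde, alpha):
    """Optimal consumption (6.6) for g(u) = u**alpha, with D_tilde standing for
    the coefficient defined after (6.4) (supplied here as an input)."""
    return (x + s_hat * y) / (1.0 + D_tilde ** (1.0 / (1.0 - alpha)))
```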

7 Properties of Selling and Buying Zones

The construction of the shadow price is based on relations between the random sets \(\hat{\mathbf {S}}_{N-k}\), \(\hat{\mathbf {B}}_{N-k}\) and \({\mathbf {S}}_{N-k}\), \({\mathbf {B}}_{N-k}\), respectively, which we shall establish in this section. We start with a useful simple

Lemma 7.1

For \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\) and \(\hat{s} \in [\underline{s}, \overline{s}]\) we have:

$$\begin{aligned} {\mathbb {A}}(x, y, \underline{s}, \overline{s}) \subseteq \mathbb {B}(x, y, \hat{s}), \end{aligned}$$
(7.1)

and consequently

$$\begin{aligned} v_{N-k}(x, y, \hat{s}) \ge w_{N-k}(x, y, \underline{s}, \overline{s}). \end{aligned}$$
(7.2)

First we consider relation between \(\hat{\mathbf {S}}_{N-k}\) and \({\mathbf {S}}_{N-k}\).

Proposition 7.1

For \(\underline{s}, \overline{s} \in \mathbb {R}_{+}\) such that \(\overline{s} > \underline{s} > 0\) and all \(\omega \in \Omega \) we have

$$\begin{aligned} \hat{\mathbf {S}}_{N-k}(\underline{s})(\omega ) = \mathbf {S}_{N-k}(\underline{s}, \overline{s})(\omega ). \end{aligned}$$
(7.3)

Proof

Assume that \((x,y) \in \hat{\mathbf {S}}_{N-k}(\underline{s})(\omega )\) for certain \(\omega \in \Omega \). Then there is an \(\mathcal {F}_{N-k}\)-measurable triple \((\tilde{c}(\omega ),0,\tilde{m}(\omega ))\) taking values in \(\mathbb {B}(x, y, \underline{s})(\omega )\) such that \(\tilde{m}(\omega )>0\) and

$$\begin{aligned} v_{N-k}( x, y, \underline{s}) = \mathbb {E}[g(\tilde{c}) + \gamma w_{N-k+1}(x - \tilde{c} + \underline{s} \tilde{m}, y - \tilde{m}, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) | \mathcal {F}_{N-k}], \end{aligned}$$
(7.4)

where to simplify notation we drop the dependence on \(\omega \). Since also \((\tilde{c},0,\tilde{m})\in \mathbb {A}(x, y, \underline{s}, \overline{s})\) then taking into account (7.2) we have

$$\begin{aligned} w_{N-k}( x, y, \underline{s}, \overline{s}) = \mathbb {E}[g(\tilde{c}) + \gamma w_{N-k+1}(x - \tilde{c} + \underline{s} \tilde{m}, y - \tilde{m}, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) | \mathcal {F}_{N-k}], \end{aligned}$$
(7.5)

which means that \((x,y) \in \mathbf {S}_{N-k}(\underline{s}, \overline{s})(\omega )\).

Assume now that \((x,y) \in {\mathbf {S}}_{N-k}(\underline{s},\overline{s})(\omega )\). Then there exists an \(\mathcal {F}_{N-k}\)-measurable random triple \((\hat{c}, 0, \hat{m})\) which takes values in \(\mathbb {A}(x, y, \underline{s}, \overline{s})\) such that

$$\begin{aligned} w_{N-k}( x, y, \underline{s}, \overline{s}) \!=\! \mathbb {E}[g(\hat{c}) \!+\! \gamma w_{N-k+1}(x \!-\! \hat{c} \!+\! \underline{s} \hat{m}, y \!-\! \hat{m}, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) | \mathcal {F}_{N-k}]. \end{aligned}$$
(7.6)

If \((x,y) \in \hat{\mathbf {NT}}_{N-k}(\underline{s})(\omega )\) then there is \((\tilde{c},0,0)\in \mathbb {B}(x, y, \underline{s})\) such that

$$\begin{aligned} v_{N-k}(x, y, \underline{s}) = \mathbb {E}[g(\tilde{c}) + \gamma w_{N-k+1}(x - \tilde{c}, y, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) | \mathcal {F}_{N-k}] \end{aligned}$$
(7.7)

and since also \((\tilde{c},0,0)\in \mathbb {A}(x, y, \underline{s}, \overline{s})\) then taking into account (7.2) we obtain that

$$\begin{aligned} w_{N-k}(x, y, \underline{s}, \overline{s}) \!=\! \mathbb {E}[g(\tilde{c}) + \gamma w_{N-k+1}(x - \tilde{c}, y, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) | \mathcal {F}_{N-k}],\qquad \end{aligned}$$
(7.8)

which means that \((x,y)\in {\mathbf {NT}}_{N-k}(\underline{s},\overline{s})(\omega )\), which is a contradiction. If \((x,y) \in \hat{\mathbf {B}}_{N-k}(\underline{s})(\omega )\), then there is \((\tilde{c},\tilde{l},0)\in \mathbb {B}(x, y, \underline{s})\) such that \(\tilde{l}>0\) and

$$\begin{aligned} v_{N-k}(x, y, \underline{s}) = \mathbb {E}[g(\tilde{c}) + \gamma w_{N-k+1}(x - \tilde{c}-\underline{s}\tilde{l}, y+\tilde{l}, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) | \mathcal {F}_{N-k}]. \end{aligned}$$
(7.9)

Consider now the triple \(\lambda (\hat{c}, 0, \hat{m})+ (1-\lambda ) (\tilde{c},\tilde{l},0)\) with \(\lambda ={\tilde{l} \over \hat{m}+\tilde{l}}\in [0,1]\). Note that \(\lambda (\hat{c}, 0, \hat{m})+ (1-\lambda ) (\tilde{c},\tilde{l},0)\in \mathbb {A}(x, y, \underline{s}, \overline{s})\). By concavity of the random function \(F: \mathbb {B}(x, y, \underline{s}) \longrightarrow \mathbb {R}\) defined in the following way

$$\begin{aligned}&F(c, l, m) := \nonumber \\ \mathbb {E}[&g(c) + \gamma w_{N-k+1}(x - c + \underline{s} (m - l), y - m + l, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) | \mathcal {F}_{N-k}] \end{aligned}$$
(7.10)

we have using again (7.2) that

$$\begin{aligned} F(&\lambda \hat{c}+(1-\lambda ) \tilde{c},(1-\lambda )\tilde{l},\lambda \hat{m}) \ge \mathbb {E}[g(\lambda \hat{c}+(1-\lambda )\tilde{c}) + \nonumber \\&\ \gamma w_{N-k+1}(x - \lambda \hat{c}-(1-\lambda )\tilde{c}, y, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) | \mathcal {F}_{N-k}]\ge \nonumber \\&\ \ \ \lambda w_{N-k}(x, y, \underline{s}, \overline{s})+(1-\lambda )v_{N-k}(x, y, \underline{s})\ge w_{N-k}(x, y, \underline{s}, \overline{s}). \end{aligned}$$
(7.11)

From (7.11) we have that

$$\begin{aligned} w_{N-k}(&x, y, \underline{s}, \overline{s})= \mathbb {E}[g(\lambda \hat{c}+(1-\lambda )\tilde{c}) + \\&\ \ \ \gamma w_{N-k+1}(x - \lambda \hat{c}-(1-\lambda )\tilde{c}, y, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) | \mathcal {F}_{N-k}], \end{aligned}$$

which means that \((x,y)\in {\mathbf {NT}}_{N-k}(\underline{s},\overline{s})(\omega )\), which is a contradiction. \(\square \)

The next relation, between the sets \(\hat{\mathbf {B}}_{N-k}(\overline{s})\) and \(\mathbf {B}_{N-k}(\underline{s}, \overline{s})\), requires two technical lemmas.

Lemma 7.2

Let \((x, y,\underline{s}, \overline{s})\in \mathbb {D}\) and \((\hat{c}, \hat{l}, 0) \in \mathbb {A}(x, y, \underline{s}, \overline{s})\), \((\tilde{c}, 0, \tilde{m}) \in \mathbb {B}(x, y, \overline{s})\) be such that \(\hat{l}, \tilde{m} > 0\). Then for \(\lambda \in (0, 1)\) such that \(\lambda >{\tilde{m} \over \hat{l}+ \tilde{m}}\) we have

$$\begin{aligned} (\lambda \hat{c} + (1 - \lambda ) \tilde{c}, \lambda \hat{l} - (1 - \lambda ) \tilde{m}, 0) \in \mathbb {A}(x, y, \underline{s}, \overline{s}). \end{aligned}$$
(7.12)

Proof

For any \(\lambda \in [0, 1]\) we have \(0\le \lambda \hat{c} + (1 - \lambda ) \tilde{c}\le x+\underline{s}y\) and

$$\begin{aligned} x - [\lambda \hat{c} + (1 - \lambda ) \tilde{c}] - \overline{s} [\lambda \hat{l} - (1 - \lambda ) \tilde{m}] = \lambda (x - \hat{c} - \overline{s} \hat{l}) + (1 - \lambda ) (x - \tilde{c} + \overline{s} \tilde{m}) \ge 0. \end{aligned}$$

Whenever \(1 >\lambda >{\tilde{m} \over \hat{l}+ \tilde{m}}\) we have \(\lambda \hat{l} - (1 - \lambda ) \tilde{m} > 0\) and (7.12) holds. \(\square \)

Lemma 7.3

Let \(\underline{s}, \overline{s} \in \mathbb {R}_{+}\) be such that \(\overline{s} > \underline{s} > 0\). Then for \(\omega \in \Omega \) we have

$$\begin{aligned} \hat{\mathbf {S}}_{N-k}(\overline{s})(\omega ) \cap \mathbf {B}_{N-k}(\underline{s}, \overline{s})(\omega ) = \emptyset . \end{aligned}$$
(7.13)

Proof

Assume this is not the case. Then there exists a pair \((x, y)\) of strictly positive numbers such that the event

$$\begin{aligned} A := \{ (x, y) \in \hat{\mathbf {S}}_{N-k}(\overline{s}) \cap \mathbf {B}_{N-k}(\underline{s}, \overline{s}) \} \ne \emptyset . \end{aligned}$$
(7.14)

Let \((\tilde{c}, \tilde{l}, \tilde{m})\) be an optimal one step strategy in the market locally without frictions with the price \(\overline{s}\), i.e. let \((\tilde{c}, \tilde{l}, \tilde{m})\) be an \(\mathcal {F}_{N-k}\)-measurable random variable which takes values in the set \(\mathbb {B}(x, y, \overline{s})\) such that

$$\begin{aligned} v_{N-k}(&x, y, \overline{s}) = \\ \mathbb {E}[g(&\tilde{c}) + \gamma w_{N-k+1}(x - \tilde{c} + \overline{s} (\tilde{m} - \tilde{l}), y - \tilde{m} + \tilde{l}, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) | \mathcal {F}_{N-k}]. \end{aligned}$$

Let \((\hat{c}, \hat{l}, \hat{m})\) be an optimal one step strategy in the primary market, i.e. let \((\hat{c}, \hat{l}, \hat{m})\) be an \(\mathcal {F}_{N-k}\)-measurable random variable taking values in the set \(\mathbb {A}(x, y, \underline{s}, \overline{s})\) such that

$$\begin{aligned} w_{N-k}(&x, y, \underline{s}, \overline{s}) = \\ \mathbb {E}[g(&\hat{c}) + \gamma w_{N-k+1}(x - \hat{c} + \underline{s} \hat{m} - \overline{s} \hat{l}, y - \hat{m} + \hat{l}, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) | \mathcal {F}_{N-k}]. \end{aligned}$$

On the event \(A\) we clearly have \((\tilde{c}, \tilde{l}, \tilde{m}) = (\tilde{c}, 0, \tilde{m})\), \((\hat{c}, \hat{l}, \hat{m}) = (\hat{c}, \hat{l}, 0)\) and \(\hat{l}, \tilde{m} > 0\).

Let \(\lambda \) be an \(\mathcal {F}_{N-k}\)-measurable random variable taking values in the interval \([0, 1]\) such that on \(A\) we have \(\lambda \hat{l} - (1 - \lambda ) \tilde{m} > 0\). From the property (7.12) we have that \((\lambda \hat{c} + (1 - \lambda ) \tilde{c}, \lambda \hat{l} - (1 - \lambda ) \tilde{m}, 0)\) is a well defined \(\mathcal {F}_{N-k}\)-measurable random variable, which on \(A\) takes values in the set \(\mathbb {A}(x, y, \underline{s}, \overline{s})\).

Since on \(A\) we have

$$\begin{aligned} (\lambda \hat{c} + (1 - \lambda ) \tilde{c}, \lambda \hat{l} - (1 - \lambda ) \tilde{m}, 0) \not = (\hat{c}, \hat{l}, 0), \end{aligned}$$
(7.15)

from the strict concavity of the function \(g\) and of the random function

$$\begin{aligned} (x, y) \longmapsto \mathbb {E}[w_{N-k+1}(x, y, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) | \mathcal {F}_{N-k}], \end{aligned}$$

taking into account the property (7.2) we get that on \(A\) we have

$$\begin{aligned} w_{N-k}(&x, y, \underline{s}, \overline{s}) \ge \\&\mathbb {E}[g(\lambda \hat{c} + (1 - \lambda ) \tilde{c}) + \gamma w_{N-k+1}(x - (\lambda \hat{c} + (1 - \lambda ) \tilde{c}) - \overline{s} (\lambda \hat{l} - (1 - \lambda ) \tilde{m}), \\&y + \lambda \hat{l} - (1 - \lambda ) \tilde{m}, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) | \mathcal {F}_{N-k}] > \\&\lambda \mathbb {E}[g(\hat{c}) + \gamma w_{N-k+1}(x - \hat{c} - \overline{s} \hat{l}, y - \hat{m} + \hat{l}, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) | \mathcal {F}_{N-k}] + \\&(1 - \lambda ) \mathbb {E}[g(\tilde{c}) + \gamma w_{N-k+1}(x - \tilde{c} + \overline{s} (\tilde{m} - \tilde{l}), \\&y - \tilde{m} + \tilde{l}, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) | \mathcal {F}_{N-k}] \ge \\&\lambda w_{N-k}(x, y, \underline{s}, \overline{s}) + (1 - \lambda ) v_{N-k}(x, y, \overline{s}) \ge \\&\lambda w_{N-k}(x, y, \underline{s}, \overline{s}) + (1 - \lambda ) w_{N-k}(x, y, \underline{s}, \overline{s}) = w_{N-k}(x, y, \underline{s}, \overline{s}), \end{aligned}$$

which is a contradiction and therefore we have (7.13). \(\square \)

We are now in a position to compare the sets \(\hat{\mathbf {B}}_{N-k}(\overline{s})(\omega )\) and \(\mathbf {B}_{N-k}(\underline{s}, \overline{s})(\omega )\).

Proposition 7.2

Let \(\underline{s}, \overline{s} \in \mathbb {R}_{+}\) be such that \(\overline{s} > \underline{s} > 0\). Then for \(\omega \in \Omega \)

$$\begin{aligned} \hat{\mathbf {B}}_{N-k}(\overline{s})(\omega ) = \mathbf {B}_{N-k}(\underline{s}, \overline{s})(\omega ). \end{aligned}$$
(7.16)

Proof

Notice first that by Lemma 7.3 we have \(\hat{\mathbf {S}}_{N-k}(\overline{s})(\omega ) \cap \mathbf {B}_{N-k}(\underline{s}, \overline{s})(\omega ) = \emptyset \) for \(\omega \in \Omega \). For \((x,y) \in \mathbb {R}_{+}^{2}\) let

$$\begin{aligned} (\tilde{c}, \tilde{l}, \tilde{m}) := (\tilde{c}(x, y, \overline{s}), \tilde{l}(x, y, \overline{s}), \tilde{m}(x, y, \overline{s})) \end{aligned}$$

be an optimal \(\mathcal {F}_{N-k}\)-measurable one step strategy in the market locally without frictions with the price \(\overline{s}\), i.e.

$$\begin{aligned} v_{N-k}( x, y, \overline{s})&= \mathbb {E}[g(\tilde{c}) + \gamma w_{N-k+1}(x - \tilde{c} + \overline{s} (\tilde{m} - \tilde{l}), y - \tilde{m}\\&\qquad + \tilde{l}, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) | \mathcal {F}_{N-k}]. \end{aligned}$$

Let \((x, y) \in \hat{\mathbf {B}}_{N-k}(\overline{s})(\omega )\). Without loss of generality we can assume that \(\tilde{l}(\omega )> \tilde{m}(\omega ) = 0\) and \((\tilde{c}(\omega ), \tilde{l}(\omega ), 0)\in \mathbb {B}(x, y, \overline{s})\) and by (5.6) we also have \((\tilde{c}(\omega ), \tilde{l}(\omega ), 0) \in \mathbb {A}(x, y, \underline{s}, \overline{s})\). Therefore,

$$\begin{aligned} w_{N-k}(&x, y, \underline{s},\overline{s})(\omega ) \ge \\&\mathbb {E}[g(\tilde{c}) \!+\!\gamma w_{N-k+1}(x \!-\! \tilde{c} \!+\! \overline{s} (\tilde{m} \!-\! \tilde{l}), y - \tilde{m} + \tilde{l}, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) | \mathcal {F}_{N-k}](\omega ) \end{aligned}$$

which means that \((x,y) \in \mathbf {B}_{N-k}(\underline{s}, \overline{s})(\omega )\). Let now \((x,y) \in \mathbf {B}_{N-k}(\underline{s}, \overline{s})(\omega )\) and assume that \((x,y) \notin \hat{\mathbf {B}}_{N-k}(\overline{s})(\omega )\). By (7.13) we have also that \((x,y) \notin \hat{\mathbf {S}}_{N-k}(\overline{s})(\omega )\). Therefore, \((x,y)\in \hat{\mathbf {NT}}_{N-k}(\overline{s})(\omega )\) and for \((\tilde{c},0,0)\in \mathbb {B}(x, y, \overline{s})\) we have

$$\begin{aligned} v_{N-k}(&x, y, \overline{s})(\omega ) = \mathbb {E}[g(\tilde{c}) + \\&\ \ \ \ \ \ \ \gamma w_{N-k+1}(x - \tilde{c}, y, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) | \mathcal {F}_{N-k}](\omega )\ge w_{N-k}(x, y, \underline{s}, \overline{s})(\omega ). \end{aligned}$$

Since also \((\tilde{c},0,0)\in \mathbb {A}(x, y, \underline{s}, \overline{s})\) we have that

$$\begin{aligned} w_{N-k}(&x, y, \underline{s}, \overline{s})(\omega ) = \mathbb {E}[g(\tilde{c}) + \\&\ \ \ \ \ \ \ \gamma w_{N-k+1}(x - \tilde{c}, y, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) | \mathcal {F}_{N-k}](\omega ) \end{aligned}$$

which means that \((x,y)\in \mathbf {NT}_{N-k}(\underline{s}, \overline{s})(\omega )\), which is a contradiction. \(\square \)

Remark 7.1

From Corollary 5.1 taking into account Propositions 7.1 and 7.2 we obtain an alternative proof of the fact that the sets \(\mathbf {B}_{N-k}(\underline{s}, \overline{s})(\omega )\) and \(\mathbf {S}_{N-k}(\underline{s}, \overline{s})(\omega )\) are connected for \(\omega \in \Omega \).

8 Construction of Local Weak Shadow Price

In this section we construct a local weak shadow price. For this purpose we shall need a number of properties of the selling and buying cones corresponding to different asset prices in the market locally without friction. We start with an obvious lemma.

Lemma 8.1

Let \((x, y) \in \mathbb {R}_{+}^{2}\) and let \(0 < s_{1} \le s_{2}\). Then the following implications hold

$$\begin{aligned} (c, 0, m) \in \mathbb {B}(x, y, s_{1}) \Longrightarrow (c, 0, m) \in \mathbb {B}(x, y, s_{2}) \end{aligned}$$
(8.1)

and

$$\begin{aligned} (c, l, 0) \in \mathbb {B}(x, y, s_{2}) \Longrightarrow (c, l, 0) \in \mathbb {B}(x, y, s_{1}). \end{aligned}$$
(8.2)

The next lemma shows relations between the selling and buying cones for different asset prices.

Lemma 8.2

Let \(s_{1}, s_{2} \in \mathbb {R}_{+}\) be such that \(0 < s_{1} \le s_{2}\). Then

$$\begin{aligned} \hat{\mathbf {S}}_{N-k}(s_{1}) \subseteq \hat{\mathbf {S}}_{N-k}(s_{2}) \end{aligned}$$
(8.3)

and

$$\begin{aligned} \hat{\mathbf {B}}_{N-k}(s_{2}) \subseteq \hat{\mathbf {B}}_{N-k}(s_{1}). \end{aligned}$$
(8.4)

Proof

We will prove only (8.4). The proof of (8.3) is similar. Fix \(\omega \in \Omega \). Let \((x,y)\in \hat{\mathbf {B}}_{N-k}(s_{2})(\omega )\). Then there is an optimal one step strategy \((\tilde{c},\tilde{l}, \tilde{m})\) such that

$$\begin{aligned} v_{N-k}( x, y, s_{2})&= \mathbb {E}[g(\tilde{c}) + \gamma w_{N-k+1}(x - \tilde{c} + s_{2} (\tilde{m} -\tilde{l}), y - \tilde{m}\\&\qquad + \tilde{l}, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) | \mathcal {F}_{N-k}]. \end{aligned}$$

Let \((c^{*}, 0, m^{*})\) be an \(\mathcal {F}_{N-k}\)-measurable triple taking values in the set \(\mathbb {B}(x, y, s_{1})\). Taking into account that by (8.1) the random variable \((c^{*}, 0, m^{*})\) takes values in \(\mathbb {B}(x,y,s_{2})\), we have

$$\begin{aligned}&\ \ \ \ \ v_{N-k}(x, y, s_{1})(\omega )\ge \\&\mathbb {E}[g(\tilde{c}) + \gamma w_{N-k+1}(x - \tilde{c} - s_{1} \tilde{l}, y + \tilde{l}, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) | \mathcal {F}_{N-k}] (\omega ) > \\&\mathbb {E}[g(\tilde{c}) + \gamma w_{N-k+1}(x - \tilde{c} - s_{2} \tilde{l}, y + \tilde{l}, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) | \mathcal {F}_{N-k}] (\omega ) = \\&v_{N-k}(x, y, s_{2}) (\omega ) \ge \\&\mathbb {E}[g(c^{*}) + \gamma w_{N-k+1}(x - c^{*} + s_{2} m^{*},y - m^{*}, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) | \mathcal {F}_{N-k}] (\omega ) \ge \\&\mathbb {E}[g(c^{*}) + \gamma w_{N-k+1}(x - c^{*} + s_{1} m^{*},y - m^{*}, \underline{S}_{N-k+1}, \overline{S}_{N-k+1}) | \mathcal {F}_{N-k}](\omega ). \end{aligned}$$

Consequently, taking into account that the strategy \((c^{*}, 0, m^{*})\) was arbitrary, we have \((x,y) \notin \hat{\mathbf {S}}_{N-k}(s_{1})(\omega )\cup \hat{\mathbf {NT}}_{N-k}(s_{1})(\omega )\), which means that \((x,y)\in \hat{\mathbf {B}}_{N-k}(s_{1})(\omega )\). This completes the proof. \(\square \)

The following two properties of the no transaction zone will be important later.

Lemma 8.3

Assume that \(\underline{s}, \overline{s} \in \mathbb {R}_{+}\) are such that \(\overline{s} > \underline{s} > 0\). Then

$$\begin{aligned} \mathbf {NT}_{N - k}(\underline{s}, \overline{s}) = \bigcup _{\hat{s} \in [\underline{s}, \overline{s}]} \hat{\mathbf {NT}}_{N - k}(\hat{s}). \end{aligned}$$
(8.5)

Proof

From (8.3) and (8.4) together with (7.3) and (7.16) we have

$$\begin{aligned}&\bigcup _{\hat{s} \in [\underline{s}, \overline{s}]} \hat{\mathbf {NT}}_{N - k}(\hat{s}) = \bigcup _{\hat{s} \in [\underline{s}, \overline{s}]} \mathbb {R}_{+}^{2} \setminus (\hat{\mathbf {S}}_{N - k}(\hat{s}) \cup \hat{\mathbf {B}}_{N - k}(\hat{s})) = \\&\mathbb {R}_{+}^{2} \setminus \bigcap _{\hat{s} \in [\underline{s}, \overline{s}]} (\hat{\mathbf {S}}_{N - k}(\hat{s}) \cup \hat{\mathbf {B}}_{N - k}(\hat{s})) = \mathbb {R}_{+}^{2} \setminus (\hat{\mathbf {S}}_{N - k}(\overline{s}) \cup \hat{\mathbf {B}}_{N - k}(\underline{s})) = \\&\mathbb {R}_{+}^{2} \setminus (\mathbf {S}_{N - k}(\underline{s}, \overline{s}) \cup \mathbf {B}_{N - k}(\underline{s}, \overline{s})) = \mathbf {NT}_{N - k}(\underline{s}, \overline{s}). \end{aligned}$$

\(\square \)

Lemma 8.4

If \(s_{1}, s_{2}, \hat{s} \in \mathbb {R}_{+}\) are such that \(0 < s_{1} \le \hat{s} \le s_{2}\) then

$$\begin{aligned} \hat{\mathbf {NT}}_{N - k}(s_{1}) \cap \hat{\mathbf {NT}}_{N - k}(s_{2}) \subseteq \hat{\mathbf {NT}}_{N - k}(\hat{s}). \end{aligned}$$
(8.6)

Proof

From (8.3) and (8.4) we have

$$\begin{aligned}&\hat{\mathbf {NT}}_{N - k}(s_{1}) \cap \hat{\mathbf {NT}}_{N - k}(s_{2}) = \\&[\mathbb {R}_{+}^{2} \setminus (\hat{\mathbf {S}}_{N - k}(s_{1}) \cup \hat{\mathbf {B}}_{N - k}(s_{1}))] \cap [\mathbb {R}_{+}^{2} \setminus (\hat{\mathbf {S}}_{N - k}(s_{2}) \cup \hat{\mathbf {B}}_{N - k}(s_{2}))] = \\&\mathbb {R}_{+}^{2} \setminus [(\hat{\mathbf {S}}_{N - k}(s_{1}) \cup \hat{\mathbf {B}}_{N - k}(s_{1})) \cup (\hat{\mathbf {S}}_{N - k}(s_{2}) \cup \hat{\mathbf {B}}_{N - k}(s_{2}))] = \\&\mathbb {R}_{+}^{2} \setminus (\hat{\mathbf {B}}_{N - k}(s_{1}) \cup \hat{\mathbf {S}}_{N - k}(s_{2})) \subseteq \mathbb {R}_{+}^{2} \setminus (\hat{\mathbf {B}}_{N - k}(\hat{s}) \cup \hat{\mathbf {S}}_{N - k}(\hat{s})) = \hat{\mathbf {NT}}_{N - k}(\hat{s}). \end{aligned}$$

\(\square \)

In what follows we shall try to characterize \(\mathcal {F}_{N - k}\)-measurable random variables \({s}_{N - k}^{*}(x, y, \underline{s}, \overline{s})\), taking values in \([\underline{s}, \overline{s}]\), such that \((x,y)\in \) \(\hat{\mathbf {NT}}_{N - k}({s}_{N - k}^{*}(x, y, \underline{s}, \overline{s}))\).

Proposition 8.1

For \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\) let

$$\begin{aligned} \overline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s}) := \left\{ \begin{array}{l@{\quad }l} \underline{s} &{} \text{ for } \{ (x, y) \in \mathbf {S}_{N - k}(\underline{s}, \overline{s}) \}\\ \inf \{ s \in [\underline{s}, \overline{s}] : (x, y) \in \hat{\mathbf {NT}}_{N - k}(s) \} &{} \text{ for } \{ (x, y) \in \mathbf {NT}_{N - k}(\underline{s}, \overline{s}) \}\\ \overline{s} &{}\text{ for } \{ (x, y) \in \mathbf {B}_{N - k}(\underline{s}, \overline{s}) \} \end{array} \right. \nonumber \\ \end{aligned}$$
(8.7)

and

$$\begin{aligned} \underline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s}) := \left\{ \begin{array}{l@{\quad }l} \underline{s}&{} \text{ for } \{ (x, y) \in \mathbf {S}_{N - k}(\underline{s}, \overline{s}) \}\\ \sup \{ s \in [\underline{s}, \overline{s}] : (x, y) \in \hat{\mathbf {NT}}_{N - k}(s) \} &{} \text{ for } \{ (x, y) \in \mathbf {NT}_{N - k}(\underline{s}, \overline{s}) \}\\ \overline{s} &{}\text{ for } \{ (x, y) \in \mathbf {B}_{N - k}(\underline{s}, \overline{s}) \} \end{array}\right. .\nonumber \\ \end{aligned}$$
(8.8)

Then \(\overline{s}_{N - k}^{*}\) and \(\underline{s}_{N - k}^{*}\) are well defined \(\mathcal {F}_{N - k}\)-measurable random functions from \(\mathbb {D}\) to \((0, \infty )\). Moreover \(\overline{s}_{N - k}^{*}\) and \(\underline{s}_{N - k}^{*}\) are, respectively, lower and upper semicontinuous on the event \(\{ (x, y) \in \mathbf {NT}_{N - k}(\underline{s}, \overline{s}) \}\). Furthermore, for each \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\) on the event \(\{ (x, y) \in \mathbf {NT}_{N - k}(\underline{s}, \overline{s}) \}\) we have

$$\begin{aligned} (x, y) \in \hat{\mathbf {NT}}_{N - k}(\overline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s})) \cap \hat{\mathbf {NT}}_{N - k}(\underline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s})). \end{aligned}$$
(8.9)

Proof

Fix \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\). We are going to show first that \(\overline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s})\) and \(\underline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s})\) are well defined \(\mathcal {F}_{N - k}\)-measurable random variables.

Notice first that from (8.5) for each \(\omega \in \Omega \) and each \((x, y) \in \mathbf {NT}_{N - k}(\underline{s}, \overline{s})(\omega )\) there is \(s_{N - k}(x, y, \omega ) \in [\underline{s}, \overline{s}]\) such that \((x, y) \in \hat{\mathbf {NT}}_{N - k}(s_{N - k}(x, y, \omega ))(\omega )\).

Furthermore

$$\begin{aligned} \{ \overline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s}) = \underline{s} \} = \{ (x, y) \in \mathbf {S}_{N - k}(\underline{s}, \overline{s}) \} \cup \{ (x, y) \in \hat{\mathbf {NT}}_{N - k}(\underline{s}) \} \in \mathcal {F}_{N - k} \end{aligned}$$

and

$$\begin{aligned} \{ \underline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s}) = \overline{s} \} = \{ (x, y) \in \mathbf {B}_{N - k}(\underline{s}, \overline{s}) \} \cup \{ (x, y) \in \hat{\mathbf {NT}}_{N - k}(\overline{s}) \} \in \mathcal {F}_{N - k}. \end{aligned}$$

Moreover using (8.3) and (8.4) we obtain

$$\begin{aligned}&\{ \overline{s}_{N - k}^{*} (x, y, \underline{s}, \overline{s}) = \overline{s} \} = \\&= \{ (x, y) \in \mathbf {B}_{N - k}(\underline{s}, \overline{s}) \} \cup \bigcup _{s \in [\underline{s}, \overline{s})} \{ (x, y) \in \hat{\mathbf {S}}_{N - k}(s) \} = \\&= \{ (x, y) \in \mathbf {B}_{N - k}(\underline{s}, \overline{s}) \} \cup \bigcup _{s \in [\underline{s}, \overline{s}) \cap \mathbb {Q}} \{ (x, y) \in \hat{\mathbf {S}}_{N - k}(s) \} \in \mathcal {F}_{N - k} \end{aligned}$$

and

$$\begin{aligned}&\{ \underline{s}_{N - k}^{*} (x, y, \underline{s}, \overline{s}) = \underline{s} \} = \\&= \{ (x, y) \in \mathbf {S}_{N - k}(\underline{s}, \overline{s}) \} \cup \bigcup _{s \in (\underline{s}, \overline{s}]} \{ (x, y) \in \hat{\mathbf {B}}_{N - k}(s) \} = \\&= \{ (x, y) \in \mathbf {S}_{N - k}(\underline{s}, \overline{s}) \} \cup \bigcup _{s \in (\underline{s}, \overline{s}] \cap \mathbb {Q}} \{ (x, y) \in \hat{\mathbf {B}}_{N - k}(s) \} \in \mathcal {F}_{N - k}. \end{aligned}$$

For any \(t \in (\underline{s}, \overline{s})\) we have

$$\begin{aligned}&\{ \overline{s} > \overline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s}) > t \} = \bigcup _{s \in (t, \overline{s}) \cap \mathbb {Q}} \ \{ \overline{s} > \overline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s}) > s \} = \\&= \bigcup _{s \in (t, \overline{s}) \cap \mathbb {Q}} \ \{ (x, y) \in \hat{\mathbf {B}}_{N - k}(s) \} \in \mathcal {F}_{N - k} \end{aligned}$$

and

$$\begin{aligned}&\{ t > \underline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s}) > \underline{s} \} = \bigcup _{s \in (\underline{s}, t) \cap \mathbb {Q}} \ \{ s > \underline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s}) > \underline{s} \} = \\&= \bigcup _{s \in (\underline{s}, t) \cap \mathbb {Q}} \ \{ (x, y) \in \hat{\mathbf {S}}_{N - k}(s) \} \in \mathcal {F}_{N - k}, \end{aligned}$$

which means that \(\overline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s})\) and \(\underline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s})\) are well defined \(\mathcal {F}_{N - k}\)-measurable random variables.

We now show that on the event \(\{ (x, y) \in \mathbf {NT}_{N - k}(\underline{s}, \overline{s}) \}\) we have (8.9). Fix \(\omega \in \{ (x, y) \in \mathbf {NT}_{N - k}(\underline{s}, \overline{s}) \}\) and let \((\overline{s}_{\omega }^{n})_{n = 1}^{\infty }\) and \((\underline{s}_{\omega }^{n})_{n = 1}^{\infty }\) be any two sequences from the interval \([\underline{s}, \overline{s}]\) convergent respectively to \(\overline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s})\) and \(\underline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s})\) such that for any \(n \in \mathbb {N}\) we have \((x, y) \in \hat{\mathbf {NT}}_{N - k}(\overline{s}_{\omega }^{n})(\omega ) \cap \hat{\mathbf {NT}}_{N - k}(\underline{s}_{\omega }^{n})(\omega )\). Then for \(n \in \mathbb {N}\) we have

$$\begin{aligned}&{v}_{N - k}( x, y, \overline{s}_{\omega }^{n})(\omega ) = {v}_{N - k}(x, y, \underline{s}_{\omega }^{n})(\omega ) = \\&= \sup _{c \in [0, x]} \mathbb {E}[g(c) + \gamma w_{N - k + 1}(x - c, y, \underline{S}_{N - k + 1}, \overline{S}_{N - k + 1}) | \mathcal {F}_{N - k}](\omega ). \end{aligned}$$

By continuity of \({v}_{N - k}(x, y, \cdot )\) letting \(n \longrightarrow \infty \) we obtain

$$\begin{aligned}&{v}_{N - k}( x, y, \overline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s})(\omega ))(\omega ) = {v}_{N - k}(x, y, \underline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s})(\omega ))(\omega ) = \\&= \sup _{c \in [0, x]} \mathbb {E}[g(c) + \gamma w_{N - k + 1}(x - c, y, \underline{S}_{N - k + 1}, \overline{S}_{N - k + 1}) | \mathcal {F}_{N - k}](\omega ). \end{aligned}$$

Therefore we have (8.9) on the event \(\{ (x, y) \in \mathbf {NT}_{N - k}(\underline{s}, \overline{s}) \}\).

It remains to show that \(\overline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s})\) and \(\underline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s})\) are also measurable functions (of their coordinates). For this purpose it suffices to prove their lower and upper semicontinuity, respectively, on the set \(\{ (x, y) \in \mathbf {NT}_{N - k}(\underline{s}, \overline{s}) \}\). Fix \(\omega \in \{ (x, y) \in \mathbf {NT}_{N - k}(\underline{s}, \overline{s}) \}\). Let \((x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})_{n = 1}^{\infty }\) be a sequence from \(\mathbb {D}\) convergent to \((x, y, \underline{s}, \overline{s})\). We have to show that

$$\begin{aligned} \liminf _{n \longrightarrow \infty } \overline{s}_{N - k}^{*}(x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})(\omega ) \ge \overline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s})(\omega ) \end{aligned}$$

and

$$\begin{aligned} \limsup _{n \longrightarrow \infty } \overline{s}_{N - k}^{*}(x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})(\omega ) \le \overline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s})(\omega ) \end{aligned}$$

We shall show only the first inequality since the other can be shown in a similar way. There are three cases:

\(1^{o}\) :

for infinitely many \(n \in \mathbb {N}\) we have \((x_{n}, y_{n}) \in \mathbf {S}_{N - k}(\underline{s}_{n}, \overline{s}_{n})(\omega )\). Choosing a suitable subsequence we can assume that \((x_{n}, y_{n}) \in \mathbf {S}_{N - k}(\underline{s}_{n}, \overline{s}_{n})(\omega )\) for \(n \in \mathbb {N}\) and then \(\overline{s}_{N - k}^{*}(x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})(\omega ) = \underline{s}_{n} \xrightarrow {n \rightarrow \infty } \underline{s}\). By (8.9) for \(n \in \mathbb {N}\) we have

$$\begin{aligned}&{v}_{N - k}(x_{n}, y_{n}, \overline{s}_{N - k}^{*}(x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})(\omega ))(\omega ) = \sup _{(c, l, m) \in {\mathbb {B}}(x_{n}, y_{n}, \overline{s}_{N - k}^{*}(x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})(\omega ))} \mathbb {E}[g(c) + \\&\gamma w_{N - k + 1}(x_{n} - c + \overline{s}_{N - k}^{*}(x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})(\omega ) \cdot (m - l), y_{n} - m + l, \underline{S}_{N - k + 1}, \\&\overline{S}_{N - k + 1}) | \mathcal {F}_{N - k}](\omega ) = \sup _{(c, l, m) \in {\mathbb {B}}(x_{n}, y_{n}, \overline{s}_{N - k}^{*}(x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})(\omega ))} \mathbb {E}[g(c) + \\&\gamma w_{N - k + 1}(x_{n} - c, y_{n}, \underline{S}_{N - k + 1}, \overline{S}_{N - k + 1}) | \mathcal {F}_{N - k}](\omega ). \end{aligned}$$

By continuity of \({v}_{N - k}\) using Theorem 10.1 and letting \(n \longrightarrow \infty \) we obtain

$$\begin{aligned}&{v}_{N - k}(x, y, \underline{s})(\omega ) = \\&\sup _{(c, l, m) \in {\mathbb {B}}(x, y, \underline{s})} \mathbb {E}[g(c) + \gamma w_{N - k + 1}(x - c, y, \underline{S}_{N - k + 1}, \overline{S}_{N - k + 1}) | \mathcal {F}_{N - k}](\omega ), \end{aligned}$$

which means that \((x, y) \in \hat{\mathbf {NT}}_{N - k}(\underline{s})(\omega )\) and \(\overline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s})(\omega ) = \underline{s}\).

\(2^{o}\) :

for infinitely many \(n \in \mathbb {N}\) we have \((x_{n}, y_{n}) \in \mathbf {B}_{N - k}(\underline{s}_{n}, \overline{s}_{n})(\omega )\). As above we may assume (choosing a suitable subsequence) that \((x_{n}, y_{n}) \in \mathbf {B}_{N - k}(\underline{s}_{n}, \overline{s}_{n})(\omega )\) for \(n \in \mathbb {N}\). Then \(\overline{s}_{N - k}^{*}(x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})(\omega ) = \overline{s}_{n} \xrightarrow {n \rightarrow \infty } \overline{s} \ge \overline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s})(\omega )\).

\(3^{o}\) :

for infinitely many \(n \in \mathbb {N}\) we have \((x_{n}, y_{n}) \in \mathbf {NT}_{N - k}(\underline{s}_{n}, \overline{s}_{n})(\omega )\). We may assume that \((x_{n}, y_{n}) \in \mathbf {NT}_{N - k}(\underline{s}_{n}, \overline{s}_{n})(\omega )\) for \(n \in \mathbb {N}\) and by (8.9) for \(n \in \mathbb {N}\) we have \((x_{n}, y_{n}) \in \hat{\mathbf {NT}}_{N - k}(\overline{s}_{N - k}^{*}(x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})(\omega ))(\omega )\). Let \((n_{l})_{l = 1}^{\infty }\) be a subsequence such that \(\overline{s}_{N - k}^{*}(x_{n_{l}}, y_{n_{l}}, \underline{s}_{n_{l}}, \overline{s}_{n_{l}})(\omega ) \xrightarrow {l \rightarrow \infty } \liminf _{n \longrightarrow \infty } \overline{s}_{N - k}^{*}(x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})(\omega ).\) By continuity of \({v}_{N - k}\) and Theorem 10.1 we obtain

$$\begin{aligned}&{v}_{N - k}(x_{n_{l}}, y_{n_{l}}, \overline{s}_{N - k}^{*}(x_{n_{l}}, y_{n_{l}}, \underline{s}_{n_{l}}, \overline{s}_{n_{l}})(\omega ))(\omega ) \\&\xrightarrow {l \rightarrow \infty } {v}_{N - k}(x, y, \liminf _{n \longrightarrow \infty } \overline{s}_{N - k}^{*}(x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})(\omega ))(\omega ) \end{aligned}$$

and

$$\begin{aligned}&{v}_{N - k}(x_{n_{l}}, y_{n_{l}}, \overline{s}_{N - k}^{*}(x_{n_{l}}, y_{n_{l}}, \underline{s}_{n_{l}}, \overline{s}_{n_{l}})(\omega ) )(\omega ) = \\&\sup _{(c, 0, 0) \in {\mathbb {B}}(x_{n_{l}}, y_{n_{l}}, \overline{s}_{N - k}^{*}(x_{n_{l}}, y_{n_{l}}, \underline{s}_{n_{l}}, \overline{s}_{n_{l}})(\omega ) )} \mathbb {E}[g(c)+\\&\gamma w_{N - k + 1}(x_{n_{l}} - c, y_{n_{l}}, \underline{S}_{N - k + 1}, \overline{S}_{N - k + 1}) | \mathcal {F}_{N - k}](\omega ) \xrightarrow {l \rightarrow \infty } \\&\sup _{(c, 0, 0) \in {\mathbb {B}}(x, y, \liminf _{n \longrightarrow \infty } \overline{s}_{N - k}^{*}(x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})(\omega ) )} \mathbb {E}[g(c) + \\&\gamma w_{N - k + 1}(x - c, y, \underline{S}_{N - k + 1}, \overline{S}_{N - k + 1}) | \mathcal {F}_{N - k}](\omega ) . \end{aligned}$$

Therefore \((x, y) \in \hat{\mathbf {NT}}_{N - k}(\liminf _{n \longrightarrow \infty }\overline{s}_{N - k}^{*}(x_{n}, y_{n}, \underline{s}_{n}, \overline{s}_{n})(\omega ))(\omega )\), which completes the proof of lower semicontinuity of \(\overline{s}_{N - k}^{*}\).

\(\square \)

Corollary 8.1

Let \(\hat{s}_{N - k} : \mathbb {D} \longrightarrow (0, \infty )\) be defined by the formula

$$\begin{aligned} \hat{s}_{N - k}(x, y, \underline{s}, \overline{s}) := \frac{1}{2} \overline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s}) + \frac{1}{2} \underline{s}_{N - k}^{*}(x, y, \underline{s}, \overline{s}) \end{aligned}$$
(8.10)

for \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\). Then on the event \(\{ (x, y) \in \mathbf {NT}_{N - k}(\underline{s}, \overline{s}) \}\) we have

$$\begin{aligned} (x, y) \in \hat{\mathbf {NT}}_{N - k}(\hat{s}_{N - k}(x, y, \underline{s}, \overline{s})). \end{aligned}$$
(8.11)

Proof

It follows directly from (8.6) and (8.9). \(\square \)
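
For illustration, the construction behind (8.7)–(8.10) can be carried out numerically once a one step classification of positions into buying, selling and no transaction zones at a fixed price is available: by Lemmas 8.2 and 8.4 the set of prices \(s\) with \((x,y) \in \hat{\mathbf {NT}}_{N-k}(s)\) is an interval whose endpoints can be located by bisection, and the midpoint of this interval corresponds to \(\hat{s}_{N-k}\) of (8.10). The following minimal Python sketch assumes such a classifier is given; the function zone_toy is only a hypothetical stand-in (a Merton-proportion rule with illustrative thresholds) used to make the sketch runnable and is not the classification of the model.

```python
from typing import Callable


def zone_toy(x: float, y: float, s: float) -> str:
    """Hypothetical stand-in for the one step classification of a position
    (x, y) in the locally frictionless market at price s: a Merton-proportion
    rule that buys when the stock proportion s*y/(x+s*y) is below 0.4 and
    sells when it is above 0.6 (illustrative thresholds only)."""
    prop = s * y / (x + s * y)
    if prop < 0.4:
        return "B"
    if prop > 0.6:
        return "S"
    return "NT"


def no_transaction_price_interval(x, y, s_bid, s_ask,
                                  zone: Callable[[float, float, float], str],
                                  tol: float = 1e-10):
    """Return the interval of prices s in [s_bid, s_ask] for which (x, y) lies
    in the no transaction zone of the locally frictionless market, together
    with its midpoint (the candidate local weak shadow price of (8.10)).
    Monotonicity of the buy and sell zones in s (Lemma 8.2) makes the set an
    interval and the bisection applicable."""
    if zone(x, y, s_bid) == "S":   # selling zone at the bid: the price is the bid, cf. (8.7)-(8.8)
        return s_bid, s_bid, s_bid
    if zone(x, y, s_ask) == "B":   # buying zone at the ask: the price is the ask, cf. (8.7)-(8.8)
        return s_ask, s_ask, s_ask
    lo, hi = s_bid, s_ask          # boundary between buying and no transaction
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if zone(x, y, mid) == "B" else (lo, mid)
    s_low = hi
    lo, hi = s_bid, s_ask          # boundary between no transaction and selling
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if zone(x, y, mid) == "S" else (mid, hi)
    s_high = lo
    return s_low, s_high, 0.5 * (s_low + s_high)


print(no_transaction_price_interval(1.0, 0.7, 0.9, 1.1, zone_toy))
```

The two endpoints returned correspond to \(\overline{s}^{*}_{N-k}\) and \(\underline{s}^{*}_{N-k}\) of Proposition 8.1, and the midpoint to \(\hat{s}_{N-k}\) of (8.10).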

Remark 8.1

Note that due to Proposition 5.3, under differentiability of \(v_{N-k}\), the random function \(\hat{s}_{N-k}\) defined by (8.10) is the unique random function for which (8.11) holds, since then \(\underline{s}^{*}_{N-k}(x, y,\underline{s}, \overline{s} ) = \overline{s}^{*}_{N-k}(x, y, \underline{s}, \overline{s})\). When \(\underline{s}^{*}_{N-k}(x, y,\underline{s}, \overline{s}) \ne \overline{s}^{*}_{N-k}(x, y, \underline{s}, \overline{s})\), a random function \(\hat{s}_{N-k}\) for which (8.11) holds is not defined in a unique way.

Having defined \(\hat{s}_{N-k}(x, y, \underline{s}, \overline{s})\) we can now formulate the main result of this section.

Theorem 8.1

For \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\) let \(\hat{S}_N(x,y, \underline{s}, \overline{s})= \underline{s}\) and for \(k=1,\ldots , N\)

$$\begin{aligned} \hat{S}_{N-k}(x, y, \underline{s}, \overline{s}) := \hat{s}_{N-k}(x, y,\underline{s}, \overline{s} ) \end{aligned}$$
(8.12)

where the random mapping \(\hat{s}_{N-k}\) is defined by (8.10). Then the family \(\{ \hat{S}_{N-k}(x, y, \underline{s}, \overline{s}) : (x, y, \underline{s}, \overline{s}) \in \mathbb {D}\}\) is a local weak shadow price at time moment \(N-k\), for \(k=1,2, \dots ,N\), i.e. it is \(\mathcal {F}_{N-k}\)-measurable and

$$\begin{aligned} v_{N - k}( x, y, \hat{S}_{N - k}(x, y, \underline{s}, \overline{s}))&= \sup _{(c, l, m) \in \mathbb {B}(x, y, \hat{S}_{N - k}(x, y, \underline{s}, \overline{s}))} \mathbb {E}[g(c)\nonumber \\&\quad + \gamma w_{N - k + 1}(x \!-\! c \!+\! \hat{S}_{N - k}(x, y, \underline{s}, \overline{s}) \cdot (m - l), y \!-\! m\nonumber \\&\qquad \!+\! l, \underline{S}_{N - k + 1}, \overline{S}_{N - k + 1}) | \mathcal {F}_{N - k}] \nonumber \\&= w_{N - k}(x, y, \underline{s}, \overline{s}), \end{aligned}$$
(8.13)

and the optimal strategies at time moment \(N-k\) in the market with price \(\hat{S}_{N-k}\) and in the market with bid and ask prices \(\underline{s}\), \(\overline{s}\) respectively, are the same.

Proof

It is a consequence of previous facts, namely Propositions 7.1, 7.2 and (8.11). We have to show equality (8.13). By Propositions 7.1 and 7.2 we have equality (8.13) for \((x, y)\) in \(\mathbf {S}_{N-k}(\underline{s}, \overline{s})\) or in \(\mathbf {B}_{N-k}(\underline{s}, \overline{s})\) respectively. For \((x,y)\in \mathbf {NT}_{N-k}(\underline{s}, \overline{s})\) we have \((x, y) \in \hat{\mathbf {NT}}_{N-k}(\hat{s}_{N-k}(x, y,\underline{s}, \overline{s}))\), which again implies equality (8.13). Equality of optimal strategies at time moment \(N-k\) in the markets with price \(\hat{S}_{N-k}\) and bid and ask prices \(\underline{s}\), \(\overline{s}\) follows directly from the equation (8.13). \(\square \)

9 Weak Shadow Price and Shadow Price (Strong Shadow Price)

In the previous four sections we considered a market which was locally, at a given time moment, without friction but with the asset price depending on our financial position, while at the other time moments we had transaction costs (bid and ask prices). Now we shall introduce a shadow price over the whole time horizon. The main result of the paper states that the expected values of discounted utilities and the optimal strategies are the same for the original market with bid and ask prices and for the market with a suitably defined shadow price. We start with the following

Definition 9.1

A family \(\hat{S} := \{ \hat{S}_{n}(x, y, \underline{s}, \overline{s}) :\ n \in \{ 0, 1, 2, \dots , N \}, (x, y, \underline{s}, \overline{s}) \in \mathbb {D} \}\) will be called a weak shadow price if

  1. (i)

    for each \(n \in \{ 0, 1, 2, \dots , N \}\) and for each \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\) the random variable \(\hat{S}_{n}(x, y, \underline{s}, \overline{s})\) is \(\mathcal {F}_{n}\)-measurable,

  2. (ii)

    \(\forall _{n \in \{ 0, 1, 2, \dots , N \}} \forall _{(x, y, \underline{s}, \overline{s}) \in \mathbb {D}} \ \ \underline{s} \le \hat{S}_{n}(x, y, \underline{s}, \overline{s}) \le \overline{s}\),

  3. (iii)

    the optimal value of the functional (1.2) of an investor in the frictionless market starting at time \(n \in \{ 0, 1, 2, \dots , N \}\) from a position \((x, y)\) and trading stocks with the price \(\hat{S}_{n}(x, y, \underline{S}_{n}, \overline{S}_{n})\), is the same as in the market with transaction costs.

In the case when the market is governed by the family \(\hat{S}\) of asset prices satisfying conditions (i)–(ii) of Definition 9.1 we will say that we have a market with price system \(\hat{S}\).

Proposition 9.1

Let \(\hat{u} \in \mathcal {U}_{(x, y)}(\underline{S}, \overline{S})\) be the optimal strategy in the market with transaction costs with initial position \((x, y)\). Assume there exists a weak shadow price \(\hat{S}\). Then the strategy \(\hat{u}\) is also optimal in the frictionless market with price system \(\hat{S}\).

Proof

Denote by \(\mathcal {U}_{(x, y)}(\hat{S})\) the set of all admissible strategies in the frictionless market with price system \(\hat{S}\) and with initial position \((x, y)\). From the condition \((iii)\) of Definition 9.1 we have that

$$\begin{aligned} \sup _{u \in \mathcal {U}_{(x, y)}(\hat{S})} \mathbf {J}(u) = \sup _{u \in \mathcal {U}_{(x, y)}(\underline{S}, \overline{S})} \mathbf {J}(u) = \mathbf {J}(\hat{u}). \end{aligned}$$
(9.1)

Therefore it remains to show that \(\hat{u}\) is admissible in the frictionless market with the price system \(\hat{S}\). For each \(n \in \{0, 1, 2, \dots , N \}\) we have

$$\begin{aligned} \underline{S}_{n} \le \hat{S}_{n}(\hat{x}_{n}, \hat{y}_{n}, \underline{S}_{n}, \overline{S}_{n}) \le \overline{S}_{n}. \end{aligned}$$

Taking into account that \(\hat{u} \in \mathcal {U}_{(x, y)}(\underline{S}, \overline{S})\) we have that

$$\begin{aligned} 0&\le \hat{c}_{n} \le \hat{x}_{n} + \underline{S}_{n} \hat{y}_{n} \le \hat{x}_{n} + \hat{S}_{n}(\hat{x}_{n}, \hat{y}_{n}, \underline{S}_{n}, \overline{S}_{n}) \hat{y}_{n}, \\ 0&\le \hat{x}_{n} - \hat{c}_{n} + \underline{S}_{n} \hat{m}_{n} - \overline{S}_{n} \hat{l}_{n} \le \\&\le \hat{x}_{n} - \hat{c}_{n} + \hat{S}_{n}(\hat{x}_{n}, \hat{y}_{n}, \underline{S}_{n}, \overline{S}_{n}) \hat{m}_{n} - \hat{S}_{n}(\hat{x}_{n}, \hat{y}_{n}, \underline{S}_{n}, \overline{S}_{n}) \hat{l}_{n} \le \\&\le \hat{x}_{n} - \hat{c}_{n} + \hat{S}_{n}(\hat{x}_{n}, \hat{y}_{n}, \underline{S}_{n}, \overline{S}_{n}) \cdot (\hat{m}_{n} - \hat{l}_{n}), \\ 0&\le \hat{y}_{n} - \hat{m}_{n} + \hat{l}_{n}, \end{aligned}$$

which means that, indeed, \(\hat{u} \in \mathcal {U}_{(x, y)}(\hat{S})\). This ends the proof. \(\square \)
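
The admissibility argument above amounts to the elementary observation that the one step budget constraints are monotone in the price. A minimal Python sketch, assuming the constraint sets have the form used in the display above (nonnegative consumption bounded by the liquidation value, nonnegative post-trade cash and stock holdings), checks this for concrete numbers:

```python
def feasible_frictionless(x, y, c, l, m, s):
    """One step constraints in a frictionless market with price s, as in the
    display above: consumption within the liquidation value and nonnegative
    post-trade cash and stock positions."""
    return 0 <= c <= x + s * y and x - c + s * (m - l) >= 0 and y - m + l >= 0


def feasible_with_costs(x, y, c, l, m, s_bid, s_ask):
    """One step constraints in the market with transaction costs: stock is
    sold at the bid price s_bid and bought at the ask price s_ask."""
    return 0 <= c <= x + s_bid * y and x - c + s_bid * m - s_ask * l >= 0 and y - m + l >= 0


x, y, s_bid, s_ask = 2.0, 1.0, 0.9, 1.1
c, l, m = 0.5, 0.0, 0.4                    # consume 0.5 and sell 0.4 shares
assert feasible_with_costs(x, y, c, l, m, s_bid, s_ask)

# the same triple remains admissible for every price between the bid and the ask
for i in range(11):
    s = s_bid + i * (s_ask - s_bid) / 10
    assert feasible_frictionless(x, y, c, l, m, s)
print("admissible for all prices in [bid, ask]")
```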

In the following definition we emphasize the dependence on the initial position in the market with transaction costs.

Definition 9.2

For a given initial position \((x, y) \in \mathbb {R}_{+}^{2} \setminus \{ (0, 0) \}\) a process \(\tilde{S} = (\tilde{S}_{n})_{n = 0}^{N}\) depending on this initial position will be called a shadow price (strong shadow price) if

  1. (i)

    it is adapted,

  2. (ii)

    \(\forall _{n \in \{ 0, 1, 2, \ldots , N \}} \ \ \underline{S}_{n} \le \tilde{S}_{n} \le \overline{S}_{n}\),

  3. (iii)

    the optimal value of the functional (1.2) in a frictionless market with price process \(\tilde{S}\) is the same as in the market with transaction costs with the initial position \((x, y)\).

In an analogous way to the proof of Proposition 9.1 we can show the following.

Proposition 9.2

Let \(\hat{u} \in \mathcal {U}_{(x, y)}(\underline{S}, \overline{S})\) be the optimal strategy in the market with transaction costs with initial position \((x, y)\). If there exists a shadow price (strong shadow price) \(\tilde{S}\), then the strategy \(\hat{u}\) is an optimal strategy in the frictionless market with price process \(\tilde{S}\).

One can notice that there is a clear difference between weak and strong shadow prices. A weak shadow price is in fact a random field satisfying (i)–(iii) of Definition 9.1, while a strong shadow price is just a sequence of random variables whose choice is adjusted to the initial position at time \(0\).

We now formulate the main result of the paper.

Theorem 9.1

Let the family \(\hat{S}\) be defined by (8.12). Then \(\hat{S}\) is a weak shadow price. Furthermore, the optimal strategies in the market with the price system \(\hat{S}\) are also optimal in the original market with bid and ask prices.

Proof

The proof is by backward induction. Our induction hypothesis \(I_k\) is the equality

$$\begin{aligned} v_{t}( x, y, \hat{S}_{t}(x, y, \underline{S}_{t}, \overline{S}_{t}))=w_{t}(x, y, \underline{S}_{t}, \overline{S}_{t}) \end{aligned}$$

for \(t\in \{N-k, N-k+1, \ldots , N \}\) and \((x,y)\in \mathbb {R}_{+}^{2}\setminus \left\{ (0,0)\right\} \) and the fact that optimal strategies in the markets with shadow price and bid and ask prices over time span \(\{N-k, N-k+1, \ldots , N \}\) coincide. First, consider the case \(k=0\). Let \((x, y) \in \mathbb {R}^{2}_{+}\backslash \{(0,0)\}\) be our position. Clearly, the shadow price \(\hat{S}_{N}(x, y, \underline{S}_{N}, \overline{S}_{N}) = \underline{S}_{N}\) because at time moment \(N\) it is optimal to sell all assets. For \((x, y) \in \mathbb {R}_{+}^{2}\setminus \{(0,0)\}\) we have

$$\begin{aligned} v_{N}(&x, y, \hat{S}_{N}(x, y, \underline{S}_{N}, \overline{S}_{N})) = g (x + \hat{S}_{N}(x, y, \underline{S}_{N}, \overline{S}_{N}) y) = \\&g(x + \underline{S}_{N} y) = w_{N}(x, y, \underline{S}_{N}, \overline{S}_{N}). \end{aligned}$$

Therefore for \(k=0\) the hypothesis \(I_0\) is satisfied. Assume it also holds for \(k\le n-1\). Then for \((x, y) \in \mathbb {R}_{+}^{2}\setminus \{(0,0)\}\) we have that

$$\begin{aligned} v_{N-n+1}(&x, y, \hat{S}_{N-n+1}(x, y, \underline{S}_{N-n+1}, \overline{S}_{N-n+1}))= \nonumber \\&w_{N-n+1}(x,y, \underline{S}_{N-n+1}, \overline{S}_{N-n+1}). \end{aligned}$$
(9.2)

Since by the Bellman equation

$$\begin{aligned} v_{N-n}(&x, y, \hat{S}_{N-n}(x, y, \underline{S}_{N-n}, \overline{S}_{N-n})) = \\&\sup _{(c,l,m) \in \mathbb {B}(x,y, \hat{S}_{N-n}(x, y, \underline{S}_{N-n}, \overline{S}_{N-n}))} \mathbb {E}[g(c) + \\&\ \ \ \ \ \ \ \gamma v_{N-n+1}(x - c + \hat{S}_{N-n}(x,y,\underline{S}_{N-n}, \overline{S}_{N-n})(m - l), y-m+l, \\&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \hat{S}_{N-n+1}(x-c+ \hat{S}_{N-n}(x,y,\underline{S}_{N-n},\overline{S}_{N-n}) (m-l), \\&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ y-m+l,\underline{S}_{N-n+1}, \overline{S}_{N-n+1}))|\mathcal {F}_{N-n}], \end{aligned}$$

using (9.2) we obtain

$$\begin{aligned} v_{N-n}(&x, y, \hat{S}_{N-n}(x, y, \underline{S}_{N-n}, \overline{S}_{N-n}))=\\&=\sup _{(c,l,m) \in \mathbb {B}(x,y, \hat{S}_{N-n}(x, y, \underline{S}_{N-n}, \overline{S}_{N-n}))} \mathbb {E}[g(c) + \\&\gamma w_{N-n+1}(x - c + \hat{S}_{N-n}(x,y,\underline{S}_{N-n}, \overline{S}_{N-n}) \cdot (m - l), \\&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ y-m+l,\underline{S}_{N-n+1}, \overline{S}_{N-n+1})|\mathcal {F}_{N-n}], \end{aligned}$$

which coincides with \(v_{N-n}\) defined in (5.2). By Theorem 8.1 we obtain \(I_n\). \(\square \)

Notice also the following consequence.

Corollary 9.1

Let \(\hat{S}\) be defined by (8.12). For each \((x, y, \underline{s}, \overline{s}) \in \mathbb {D}\) and for every \(k \in \{1, 2, 3, \dots , N \}\) we have that

$$\begin{aligned} v_{N - k}(&x, y, \hat{S}_{N - k}(x, y, \underline{s}, \overline{s})) = \nonumber \\&\sup _{(c, l, m) \in \mathbb {B}(x, y, \hat{S}_{N - k}(x, y, \underline{s}, \overline{s}))} \mathbb {E}[g(c) + \nonumber \\&\ \ \ \gamma v_{N - k + 1}(x - c + \hat{S}_{N - k}(x, y, \underline{s}, \overline{s}) \cdot (m - l), y -m + l, \nonumber \\&\ \ \ \ \ \ \hat{S}_{N - k +1}(x - c + \hat{S}_{N - k}(x, y, \underline{s}, \overline{s}) \cdot (m - l), \nonumber \\&\ \ \ \ \ \ y -m + l, \underline{S}_{N - k + 1}, \overline{S}_{N - k + 1})) | \mathcal {F}_{N - k}] = \nonumber \\&w_{N - k}(x, y, \underline{s}, \overline{s}). \end{aligned}$$
(9.3)

In the next theorem we prove the existence of a strong shadow price and we show the connection between strong and weak shadow prices.

Theorem 9.2

If \(\hat{u} \in \mathcal {U}_{(x, y)}(\underline{S}, \overline{S})\) is the optimal strategy in the market with transaction costs with initial position \((x, y)\) and \((\hat{x}_{n}, \hat{y}_{n})_{n = 0}^{N}\) are the corresponding market positions, then there exists a strong shadow price \(\tilde{S} = (\tilde{S}_{n})_{n = 0}^{N}\) which is of the form

$$\begin{aligned} \hat{S}_{n}(\hat{x}_{n}, \hat{y}_{n}, \underline{S}_{n}, \overline{S}_{n}) = \tilde{S}_{n}. \end{aligned}$$
(9.4)

for \(n \in \{ 0, 1, 2, \dots , N \}\), where \(\hat{S}\) is a weak shadow price.

Proof

Notice first that for \(n \in \{ 0, 1, 2, \dots , N-1 \}\) we have

$$\begin{aligned} w_n(\hat{x}_n,\hat{y}_n,\underline{S}_{n}, \overline{S}_{n})= \mathbb {E}[g(\hat{c}_n) + \gamma w_{n+1}(\hat{x}_{n+1},\hat{y}_{n+1},\underline{S}_{n+1}, \overline{S}_{n+1})| \mathcal {F}_{n} ], \end{aligned}$$
(9.5)

where \(\hat{c}_n\) is the optimal consumption corresponding to \(\hat{u}\).

For \((x, y, \tilde{s}) \in \hat{\mathbb {D}}\) let

$$\begin{aligned} \tilde{v}_{N}(x, y, \tilde{s}) := g(x + \tilde{s} y), \end{aligned}$$

and for \(k \in \{ 1, 2, 3, \dots , N \}\) let

$$\begin{aligned} \tilde{v}_{N - k}(&x, y, \tilde{s}) := \sup _{(c, l, m) \in \mathbb {B}(x, y, \tilde{s})} \mathbb {E}[g(c) + \nonumber \\&\ \ \gamma \tilde{v}_{N - k + 1}(x - c + \tilde{s} \cdot (m - l), y - m + l, \tilde{S}_{N - k +1}) | \mathcal {F}_{N - k}]. \end{aligned}$$
(9.6)

Clearly

$$\begin{aligned} w_N(\hat{x}_N,\hat{y}_N,\underline{S}_{N}, \overline{S}_{N})= \tilde{v}_{N}(\hat{x}_N,\hat{y}_N, \tilde{S}_N)=v_N(\hat{x}_N,\hat{y}_N,\hat{S}_{N}(\hat{x}_{N}, \hat{y}_{N}, \underline{S}_{N}, \overline{S}_{N})). \end{aligned}$$
(9.7)

By Theorem 8.1 and (8.13) we have that

$$\begin{aligned} v_{N-1}(\hat{x}_{N-1}&,\hat{y}_{N-1},\hat{S}_{N-1}(\hat{x}_{N-1}, \hat{y}_{N-1}, \underline{S}_{N-1}, \overline{S}_{N-1})) \nonumber \\&= w_{N-1}(\hat{x}_{N-1},\hat{y}_{N-1},\underline{S}_{N-1}, \overline{S}_{N-1}) \end{aligned}$$
(9.8)

and therefore by (9.6)

$$\begin{aligned} w_{N-1}(\hat{x}_{N-1},\hat{y}_{N-1},\underline{S}_{N-1}, \overline{S}_{N-1})=\tilde{v}_{N-1}(\hat{x}_{N-1},\hat{y}_{N-1}, \tilde{S}_{N-1}). \end{aligned}$$
(9.9)

Assume now that for \(k \in \{2, \dots , N \}\)

$$\begin{aligned} w_{N-k+1}(\hat{x}_{N-k+1},&\hat{y}_{N-k+1},\underline{S}_{N-k+1}, \overline{S}_{N-k+1})= \nonumber \\ \ \ \ \&\tilde{v}_{N-k+1}(\hat{x}_{N-k+1},\hat{y}_{N-k+1}, \tilde{S}_{N-k+1}). \end{aligned}$$
(9.10)

By (8.13) we have again that

$$\begin{aligned} v_{N-k}(\hat{x}_{N-k},&\hat{y}_{N-k},\hat{S}_{N-k}(\hat{x}_{N-k}, \hat{y}_{N-k}, \underline{S}_{N-k}, \overline{S}_{N-k}))= \nonumber \\ \qquad \qquad \qquad \qquad&w_{N-k}(\hat{x}_{N-k},\hat{y}_{N-k},\underline{S}_{N-k}, \overline{S}_{N-k}) \end{aligned}$$
(9.11)

and therefore by (9.10) and (9.6) we have that

$$\begin{aligned} w_{N-k}(\hat{x}_{N-k},\hat{y}_{N-k},\underline{S}_{N-k}, \overline{S}_{N-k})=\tilde{v}_{N-k}(\hat{x}_{N-k},\hat{y}_{N-k}, \tilde{S}_{N-k}). \end{aligned}$$
(9.12)

Consequently \(w_{0}({x},{y},\underline{S}_{0}, \overline{S}_{0})=\tilde{v}_{0}({x},{y}, \tilde{S}_{0})\), which means that \((\tilde{S}_n)_{n=0}^N\) is a strong shadow price. This ends the proof. \(\square \)
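
Theorem 9.2 says that a strong shadow price is obtained by evaluating a weak shadow price along the realized optimal trajectory, as in (9.4). The following minimal Python sketch only illustrates this read-off; the function weak_shadow_toy is a hypothetical stand-in (it merely projects a reference price onto the bid–ask interval) and not the price constructed in Sect. 8.

```python
def weak_shadow_toy(x, y, s_bid, s_ask):
    """Hypothetical stand-in for a weak shadow price S_hat_n(x, y, s_bid, s_ask):
    the projection of the 'reference' price x / y onto [s_bid, s_ask]."""
    reference = x / y if y > 0 else s_bid
    return min(max(reference, s_bid), s_ask)


def strong_from_weak(positions, bids, asks, weak_shadow=weak_shadow_toy):
    """Read off the strong shadow price along one realized optimal trajectory,
    following (9.4): S_tilde_n = S_hat_n(x_hat_n, y_hat_n, bid_n, ask_n)."""
    return [weak_shadow(x, y, b, a) for (x, y), b, a in zip(positions, bids, asks)]


positions = [(1.0, 1.0), (0.8, 1.1), (0.9, 0.9)]   # hypothetical optimal positions
bids = [0.95, 1.00, 1.05]
asks = [1.05, 1.10, 1.15]
print(strong_from_weak(positions, bids, asks))
```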

10 Examples

In this section we present two examples to which the results of the paper can be applied. The first example concerns the so called Markov price model considered in [3]. Let \(\xi _{1}, \xi _{2}, \xi _{3}, \dots , \xi _{N}\) be a sequence of independent and identically distributed random variables such that \(supp \ \xi _{1} = [-1, \infty )\). Moreover let \(\mathcal {F}_{n} := \sigma (\xi _{1}, \xi _{2}, \xi _{3}, \dots , \xi _{n})\) for \(n \in \{ 1, 2, 3, \dots , N \}\) and

$$\begin{aligned} S_{n} := S_{0} \cdot (1 + \xi _{1}) \cdot (1 + \xi _{2}) \cdot (1 + \xi _{3}) \cdot \dots \cdot (1 + \xi _{n}), \end{aligned}$$
(10.1)

where \(S_{0} > 0\) is a fixed constant. Define bid and ask prices

$$\begin{aligned} \underline{S}_{n} := (1 - \mu ) S_{n} \ \ \ \ \ \text{ and } \ \ \ \ \ \overline{S}_{n} := (1 + \lambda ) S_{n}. \end{aligned}$$
(10.2)

for \(n \in \{0, 1, 2, \dots , N \}\), with constants \(\mu \in [0,1)\) and \(\lambda \in [0, \infty )\) such that \(\lambda + \mu > 0\) (so that \(\underline{S}_{n} < \overline{S}_{n}\)).
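
Such a price model is easy to simulate. A minimal Python sketch follows; the lognormal choice for the shocks \(\xi_n\) and the parameter values are purely illustrative assumptions of the sketch, as any i.i.d. sequence satisfying the support condition above fits the model.

```python
import numpy as np


def simulate_bid_ask(S0, N, mu, lam, rng):
    """Simulate the price process (10.1) and the bid and ask prices (10.2).
    The shocks xi_n are lognormal minus one here (support (-1, infinity));
    this particular law is an illustrative choice only."""
    xi = rng.lognormal(mean=0.0, sigma=0.2, size=N) - 1.0
    S = S0 * np.concatenate(([1.0], np.cumprod(1.0 + xi)))   # S_0, S_1, ..., S_N
    return (1.0 - mu) * S, (1.0 + lam) * S                   # bid and ask processes


rng = np.random.default_rng(0)
bid, ask = simulate_bid_ask(S0=1.0, N=10, mu=0.01, lam=0.02, rng=rng)
print(bid[:3], ask[:3])
```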

We are going to maximize the functional

$$\begin{aligned} \mathbf {J}(u) := \sum _{n = 0}^{N} \gamma ^{n} \ln c_{n} \end{aligned}$$
(10.3)

in the market with the bid and ask price processes \(\underline{S}\) and \(\overline{S}\) given by (10.1) and (10.2).

By (3.1) and (3.6) the sequence of Bellman equations corresponding to the above problem is of the form

$$\begin{aligned} w_{N}(x, y, (1 - \mu ) s, (1 + \lambda ) s) = \ln (x + (1 - \mu ) s y) \end{aligned}$$

and

$$\begin{aligned}&w_{N - k}( x, y, (1 - \mu ) s, (1 + \lambda ) s) := \nonumber \\ =&\sup _{(c, l, m) \in \mathbb {A}(x, y, (1 - \mu ) s, (1 + \lambda ) s)} \mathbb {E}[\ln c + \gamma w_{N - k + 1}(x - c + (1 - \mu ) s m - (1 + \lambda ) s l, \nonumber \\&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ y - m + l, (1 - \mu ) s (1 + \xi _{N - k + 1}), (1 + \lambda ) s (1 + \xi _{N - k + 1})) ] \end{aligned}$$
(10.4)

for \((x, y, s) \in \hat{\mathbb {D}}\) and \(k \in \{ 1, 2, 3, \dots , N \}\).

Moreover, by Corollary 3.3 for all \((x, y, s) \in \hat{\mathbb {D}}\) and for all \(k \in \{ 1, 2, 3, \dots , N \}\) there exists a unique

$$\begin{aligned} (c_{N - k}(x, y, s), l_{N - k}(x, y, s), m_{N - k}(x, y, s)) \in \mathbb {R}_{+}^{3} \end{aligned}$$
(10.5)

for which the supremum in (10.4) is attained.

From Sect. 6 by (6.1) we have

$$\begin{aligned}&{v}_{N - k}( x, y, \hat{s}) = \nonumber \\&= \sup _{(c, b) \in [0, x + \hat{s} y] \times [0, 1]} \mathbb {E}[\ln c + \gamma w_{N - k + 1}((1 - b) (x + \hat{s} y - c), \nonumber \\&\frac{b (x + \hat{s} y - c)}{\hat{s}}, \underline{S}_{N - k + 1}, \overline{S}_{N - k + 1}) | \mathcal {F}_{N - k}], \end{aligned}$$
(10.6)

where \((x, y, \hat{s}) \in \hat{\mathbb {D}}\) and \(k \in \{ 1, 2, 3, \dots , N \}\). By the properties of the logarithm (see (6.3)) we get that the optimal consumption in (10.6) is given by the formula

$$\begin{aligned} \hat{c}_{N - k}(x, y, \hat{s}) := \frac{x + \hat{s} y}{1 + \gamma + \gamma ^{2} + \dots + \gamma ^{k}}. \end{aligned}$$
(10.7)

Let \(u \in \mathcal {U}_{(x, y, (1 - \mu ) S_{0}, (1 + \lambda ) S_{0})}\) be an optimal strategy in the market with proportional transaction costs. Clearly the sequence \((c_{n}(x_{n}, y_{n}, S_{n}))_{n = 0}^{N}\) is uniquely determined.

From (10.7) there exists only one \(\mathcal {F}_{N - k}\)-measurable random variable \(\tilde{S}_{N - k}\) such that

$$\begin{aligned} c_{N - k}(x_{N - k}, y_{N - k}, S_{N - k}) = \hat{c}_{N - k}(x_{N - k}, y_{N - k}, \tilde{S}_{N - k}). \end{aligned}$$
(10.8)

Namely, from (10.8) we obtain that

$$\begin{aligned} \tilde{S}_{N - k} = \frac{(1 + \gamma + \gamma ^{2} + \dots + \gamma ^{k})\cdot c_{N - k}(x_{N - k}, y_{N - k}, S_{N - k}) - x_{N - k}}{y_{N - k}} \end{aligned}$$
(10.9)

The sequence \((\tilde{S}_{n})_{n = 0}^{N}\) given by (10.9) is the strong shadow price.
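
For the logarithmic utility both (10.7) and its inversion (10.9) are explicit, so the shadow price can be read off directly from the optimal consumption of the market with transaction costs. A minimal Python sketch, in which the optimal consumption \(c_{N-k}\) is treated as a given input:

```python
def c_hat(x, y, s_hat, gamma, k):
    """Optimal frictionless log-utility consumption (10.7): a fixed fraction
    of the wealth x + s_hat * y, depending only on the remaining horizon k."""
    annuity = sum(gamma ** j for j in range(k + 1))     # 1 + gamma + ... + gamma^k
    return (x + s_hat * y) / annuity


def shadow_price(x, y, c_opt, gamma, k):
    """Invert (10.7) as in (10.9): the price for which the frictionless optimal
    consumption equals the given optimal consumption c_opt of the market with
    transaction costs (requires y > 0)."""
    annuity = sum(gamma ** j for j in range(k + 1))
    return (annuity * c_opt - x) / y


# consistency check: starting from any price, (10.9) recovers it from (10.7)
x, y, gamma, k, s = 1.0, 2.0, 0.95, 3, 1.03
c_opt = c_hat(x, y, s, gamma, k)
print(c_opt, shadow_price(x, y, c_opt, gamma, k))     # the second value equals 1.03
```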

Note that the optimal one step strategy given by (10.5) after consumption can be characterized from Lemma 3.3 by a cone \(\mathbf {NT}_{N - k}((1 - \mu ) s, (1 + \lambda ) s)\), which has the following representation

$$\begin{aligned}&\mathbf {NT}_{N - k}((1 - \mu ) s, (1 + \lambda ) s) = \nonumber \\&= \bigcup _{(x, y) \in \mathbb {R}_{+}^{2}} (x + (1 - \mu ) s m_{N - k}(x, y, s) - (1 + \lambda ) s l_{N - k}(x, y, s),\nonumber \\&\qquad y - m_{N - k}(x, y, s) + l_{N - k}(x, y, s)) = \nonumber \\&= conv\Big [ \bigcup _{x \in \mathbb {R}_{+}} (x - (1 + \lambda ) s l_{N - k}(x, 0, s), l_{N - k}(x, 0, s)) \cup \nonumber \\&\qquad \cup \bigcup _{y \in \mathbb {R}_{+}}((1 - \mu ) s m_{N - k}(0, y, s), y - m_{N - k}(0, y, s))\Big ]. \end{aligned}$$
(10.10)

Denote by

$$\begin{aligned} \overline{t}_{N - k}(s) := \frac{l_{N - k}(1, 0, s)}{1 - (1 + \lambda ) s l_{N - k}(1, 0, s)} \end{aligned}$$
(10.11)

and

$$\begin{aligned} \underline{t}_{N - k}(s) := \frac{1 - m_{N - k}(0, 1, s)}{(1 - \mu ) s m_{N - k}(0, 1, s)} \end{aligned}$$
(10.12)

the slopes of lower and upper boundaries of \(\mathbf {NT}_{N - k}((1 - \mu ) s, (1 + \lambda ) s)\). Note that \(0 \le \overline{t}_{N - k}(s) \le \underline{t}_{N - k}(s) \le \infty \).

Assume now that at time moment \(N - k\), after consumption, the investor has the position \((x, y) \in \mathbb {R}_{+}^{2}\) and the bid and ask prices are given by \((1 - \mu ) s\) and \((1 + \lambda ) s\) respectively, with \(s > 0\). Then (see the sketch following this list)

  1. (i)

    he does not trade if and only if \(\overline{t}_{N - k}(s) \le \frac{y}{x} \le \underline{t}_{N - k}(s)\);

  2. (ii)

    he sells some amount of stocks if and only if \(\underline{t}_{N - k}(s) < \frac{y}{x}\);

  3. (iii)

    he buys some amount of stocks if and only if \(\frac{y}{x} < \overline{t}_{N - k}(s)\).
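
A minimal Python sketch of this trichotomy; the slope values used below are illustrative, since in the model they are given by (10.11) and (10.12) through the one step optimizers \(l_{N-k}(1, 0, s)\) and \(m_{N-k}(0, 1, s)\), which are not computed here.

```python
def trade_decision(x, y, t_lower, t_upper):
    """Classify a post-consumption position (x, y) by the slope y / x relative
    to the boundaries of the no transaction cone: t_lower is the slope (10.11)
    of the lower (buying) boundary and t_upper the slope (10.12) of the upper
    (selling) boundary, with t_lower <= t_upper."""
    ratio = y / x if x > 0 else float("inf")
    if ratio < t_lower:
        return "buy"
    if ratio > t_upper:
        return "sell"
    return "no transaction"


t_lower, t_upper = 0.8, 1.4                      # illustrative slope values only
for position in [(1.0, 0.5), (1.0, 1.0), (1.0, 2.0)]:
    print(position, trade_decision(*position, t_lower, t_upper))
```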

Moreover, from (10.8) we obtain that for each \((x, y, s) \in \hat{\mathbb {D}}\) such that \(y > 0\) there exists a unique \(\hat{s}_{N - k}(x, y, s) \in [(1 - \mu ) s, (1 + \lambda ) s]\) for which

$$\begin{aligned} \hat{c}_{N - k}(x, y, \hat{s}_{N - k}(x, y, s)) = c_{N - k}(x, y, s) \end{aligned}$$
(10.13)

and

$$\begin{aligned} \hat{s}_{N - k}(x, y, s) = \frac{(1 + \gamma + \gamma ^{2} + \dots + \gamma ^{k}) \cdot c_{N - k}(x, y, s) - x}{y}. \end{aligned}$$
(10.14)

Consequently the family

$$\begin{aligned} \{ \hat{S}_{n}(x, y, (1 - \mu ) s, (1 + \lambda ) s) : (x, y, s) \in \hat{\mathbb {D}}, n \in \{0, 1, 2, \dots , N \} \} \end{aligned}$$
(10.15)

where \(\hat{S}_{n}(x, y, (1 - \mu ) s, (1 + \lambda ) s) := \hat{s}_{n}(x, y, s)\), forms a weak shadow price and

$$\begin{aligned} \hat{S}_{n}(x_{n}, y_{n}, (1 - \mu ) S_{n}, (1 + \lambda ) S_{n}) = \tilde{S}_{n}, \end{aligned}$$
(10.16)

where \(\tilde{S}_{n}\) is given by (10.9).

Another example concerns the case when \(S_n=\exp \left\{ \sigma X_n+f_n\right\} \), where \(X\) is a fractional Brownian motion with Hurst parameter \(0<H<1\) and \((f_n)\) is a deterministic sequence, with the same functional (10.3). By Proposition 4.2 of [11] the conditional full support condition is satisfied. Consequently the expected values in (10.4) and in (10.6) have to be replaced by conditional expectations and the functions \(w_{N-k}\) and \({v}_{N-k}\) are random. The optimal strategies (10.5) are also random, as are the slopes of the lower and upper boundaries of \(\mathbf {NT}_{N - k}((1 - \mu ) s, (1 + \lambda ) s)\), which is again random. Nevertheless (10.9) and (10.14) define strong and weak shadow prices.
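
The fractional Brownian motion price model can be simulated directly from the covariance of \(X\). A minimal Python sketch, in which the values of \(H\), \(\sigma\) and the deterministic sequence \(f_n\) are illustrative assumptions, and whose output can be fed into the bid and ask construction (10.2):

```python
import numpy as np


def fbm_path(n, H, rng):
    """Sample (X_1, ..., X_n) of a fractional Brownian motion with Hurst
    parameter H at integer times via the Cholesky factor of the covariance
    Cov(X_s, X_t) = (s^{2H} + t^{2H} - |t - s|^{2H}) / 2."""
    t = np.arange(1, n + 1, dtype=float)
    cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)


rng = np.random.default_rng(1)
N, H, sigma = 10, 0.7, 0.2
f = 0.01 * np.arange(N + 1)                        # illustrative deterministic sequence f_n
X = np.concatenate(([0.0], fbm_path(N, H, rng)))   # X_0 = 0
S = np.exp(sigma * X + f)                          # S_n = exp(sigma X_n + f_n)
mu, lam = 0.01, 0.02
bid, ask = (1.0 - mu) * S, (1.0 + lam) * S
print(bid[:3], ask[:3])
```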

Similar results can also be obtained for the power utility function, both for the Markov and the fractional Brownian motion prices. The only problem is that we do not have an explicit form for \(\tilde{S}_{N-k}\), since we are not able to express \(\hat{s}\) as a function of \(\hat{c}_{N-k}\) from (6.6).