Optimal posting price of limit orders: learning by trading


We model a trader interacting with a continuous market as an iterative algorithm that adjusts limit prices at a given rhythm and propose a procedure to minimize trading costs. We prove the \(a.s.\) convergence of the algorithm under assumptions on the cost function and give some practical criteria on model parameters to ensure that the conditions to use the algorithm are met (notably, using the co-monotony principle). We illustrate our results with numerical experiments on both simulated and market data.


[Figs. 1–16: numerical experiments on simulated and market data]


References

1. Abergel, F., Jedidi, A.: A mathematical approach to order book modelling. In: Abergel, F., Chakrabarti, B.K., Chakraborti, A., Mitra, M. (eds.) Econophysics of Order Driven Markets. Springer, New York (2011)
2. Alfonsi, A., Fruth, A., Schied, A.: Optimal execution strategies in limit order books with general shape functions. Quant. Financ. 10(2), 143–157 (2010)
3. Almgren, R.F., Chriss, N.: Optimal execution of portfolio transactions. J. Risk 3(2), 5–39 (2000)
4. Avellaneda, M., Stoikov, S.: High-frequency trading in a limit order book. Quant. Financ. 8(3), 217–224 (2008)
5. Bayraktar, E., Ludkovski, M.: Liquidation in limit order books with controlled intensity. CoRR (2011)
6. Beskos, A., Roberts, G.O.: Exact simulation of diffusions. Ann. Appl. Probab. 15(4), 2422–2444 (2005)
7. Bouchard, B., Dang, N.-M., Lehalle, C.-A.: Optimal control of trading algorithms: a general impulse control approach. SIAM J. Financ. Math. 2, 404–438 (2011)
8. Duflo, M.: Algorithmes Stochastiques. Mathématiques & Applications, vol. 23. Springer, Berlin (1996)
9. Foucault, T., Kadan, O., Kandel, E.: Limit order book as a market for liquidity. Discussion Paper Series dp321, Center for Rationality and Interactive Decision Theory, Hebrew University, Jerusalem (2003)
10. Guéant, O., Lehalle, C.-A., Razafinimanana, J.: High frequency simulations of an order book: a two-scales approach. In: Abergel, F., Chakrabarti, B.K., Chakraborti, A., Mitra, M. (eds.) Econophysics of Order-Driven Markets. New Economic Windows. Springer, Milan (2010)
11. Guéant, O., Fernandez-Tapia, J., Lehalle, C.-A.: Dealing with the inventory risk. Technical report (2011)
12. Guilbaud, F., Pham, H.: Optimal high-frequency trading with limit and market orders. Quant. Financ., to appear (2012)
13. Guilbaud, F., Mnif, M., Pham, H.: Numerical methods for an optimal order execution problem. J. Comput. Financ., to appear (2010)
14. Ho, T., Stoll, H.R.: Optimal dealer pricing under transactions and return uncertainty. J. Financ. Econ. 9(1), 47–73 (1981)
15. Jacod, J., Shiryaev, A.N.: Limit Theorems for Stochastic Processes. Grundlehren der Mathematischen Wissenschaften, vol. 288, 2nd edn. Springer, Berlin (2003)
16. Karatzas, I., Shreve, S.E.: Brownian Motion and Stochastic Calculus. Graduate Texts in Mathematics, vol. 113, 2nd edn. Springer, New York (1991)
17. Karlin, S., Taylor, H.M.: A Second Course in Stochastic Processes. Academic Press, New York (1981)
18. Kushner, H.J., Clark, D.S.: Stochastic Approximation Methods for Constrained and Unconstrained Systems. Applied Mathematical Sciences, vol. 26. Springer, New York (1978)
19. Kushner, H.J., Yin, G.G.: Stochastic Approximation and Recursive Algorithms and Applications. Applications of Mathematics (Stochastic Modelling and Applied Probability), vol. 35, 2nd edn. Springer, New York (2003)
20. Laruelle, S., Pagès, G.: Stochastic approximation with averaging innovation applied to finance. Monte Carlo Methods Appl. 18(1), 1–51 (2012)
21. Laruelle, S., Lehalle, C.-A., Pagès, G.: Optimal split of orders across liquidity pools: a stochastic algorithm approach. SIAM J. Financ. Math. 2(1), 1042–1076 (2011)
22. McCulloch, J.: A model of true spreads on limit order markets (2011). SSRN: http://www.ssrn.com/abstract=1815782
23. Pagès, G.: A functional co-monotony principle with an application to peacocks. Pré-pub. LPMA n\(^\circ \)1536. To appear in Sémin. Probab. XLV (2010)
24. Predoiu, S., Shaikhet, G., Shreve, S.: Optimal execution of a general one-sided limit-order book. Technical report, Carnegie Mellon University, Pittsburgh (2010)
25. Robert, C.Y., Rosenbaum, M.: A new approach for the dynamics of ultra high frequency data: the model with uncertainty zones. J. Financ. Econom. 9(2), 344–366 (2011)


Author information



Corresponding author

Correspondence to Sophie Laruelle.


Appendix 1: Convergence theorem for constrained algorithms

The aim is to determine an element of the set \(\{\theta \in \Theta \,:\,h(\theta )=\mathbb{E }\left[ H(\theta ,Y)\right] =0\}\) (zeros of \(h\) in \(\Theta \)) where \(\Theta \subset \mathbb{R }^d\) is a closed convex set, \(h:\mathbb{R }^d\rightarrow \mathbb{R }^d\) and \(H:\mathbb{R }^d\times \mathbb{R }^q\rightarrow \mathbb{R }^d\). For \(\theta _0\in \Theta \), we consider the \(\mathbb{R }^d\)-valued sequence \((\theta _n)_{n\ge 0}\) defined by

$$\begin{aligned} \theta _{n+1}=\mathrm{Proj}_{\Theta }\left( \theta _n-\gamma _{n+1}H(\theta _n,Y_{n+1})\right) , \end{aligned}$$

where \((Y_n)_{n\ge 1}\) is an i.i.d. sequence with the same law as \(Y, (\gamma _n)_{n\ge 1}\) is a positive sequence of real numbers and \(\mathrm{Proj}_{\Theta }\) denotes the Euclidean projection on \(\Theta \). The recursive procedure (7.1) can be rewritten as follows

$$\begin{aligned} \theta _{n+1}=\theta _n-\gamma _{n+1}h(\theta _n)-\gamma _{n+1}\Delta M_{n+1}+\gamma _{n+1}p_{n+1}, \end{aligned}$$

where \(\Delta M_{n+1}=H(\theta _n,Y_{n+1})-h(\theta _n)\) is a martingale increment and

$$\begin{aligned} p_{n+1}=\frac{1}{\gamma _{n+1}}\mathrm{Proj}_{\Theta }\left( \theta _n-\gamma _{n+1}H(\theta _n,Y_{n+1})\right) -\frac{\theta _n}{\gamma _{n+1}}+H(\theta _n,Y_{n+1}). \end{aligned}$$

Theorem 7.1

(see [18] and [19]) Let \((\theta _n)_{n\ge 0}\) be the sequence defined by (7.2). Assume that there exists a unique \(\theta ^*\in \Theta \) such that \(h(\theta ^*)=0\) and that the mean function satisfies on \(\Theta \) the following mean-reverting property, namely

$$\begin{aligned} \forall \theta \in \Theta \setminus \{\theta ^*\},\quad \left\langle h(\theta )\left. \right| \theta -\theta ^*\right\rangle >0. \end{aligned}$$

Assume that the gain parameter sequence \((\gamma _n)_{n\ge 1}\) satisfies

$$\begin{aligned} \sum _{n\ge 1}\gamma _n=+\infty \quad \text{ and }\quad \sum _{n\ge 1}\gamma ^2_n<+\infty . \end{aligned}$$

If the function \(H\) satisfies

$$\begin{aligned} \exists \, K>0 \; \text{ such } \text{ that }\;\forall \theta \in \Theta , \quad \mathbb{E }\left[ \left| H(\theta ,Y)\right| ^2\right] \le K(1+\left| \theta \right| ^2), \end{aligned}$$

then

$$\begin{aligned} \theta _n\;\overset{a.s.}{\underset{n\rightarrow +\infty }{\longrightarrow }}\theta ^*. \end{aligned}$$


If \(\Theta \) is bounded, then (7.5) reads \(\sup _{\theta \in \Theta }\mathbb{E }\left[ \left| H(\theta ,Y)\right| ^2\right] <+\infty \), which is always satisfied if \(\Theta \) is compact and \(\theta \mapsto \mathbb{E }\left[ \left| H(\theta ,Y)\right| ^2\right] \) is continuous.
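The projected recursion above can be sketched in a few lines. The following is a minimal illustration, not the paper's trading algorithm: the choices \(\Theta =[0,5]\), \(H(\theta ,y)=\theta -y\) with \(Y\sim {\fancyscript{N}}(2,1)\) (so that \(h(\theta )=\theta -2\) is mean-reverting with unique zero \(\theta ^*=2\)) and \(\gamma _n=1/n\) are ours, picked so that all hypotheses of Theorem 7.1 visibly hold.

```python
import random

def proj(theta, lo=0.0, hi=5.0):
    # Euclidean projection onto the closed convex set Theta = [lo, hi]
    return max(lo, min(hi, theta))

def robbins_monro(n_iter=200_000, theta0=4.5, seed=0):
    """Projected stochastic approximation
    theta_{n+1} = Proj_Theta(theta_n - gamma_{n+1} * H(theta_n, Y_{n+1}))
    with the toy choices H(theta, y) = theta - y and Y ~ N(2, 1), so that
    h(theta) = theta - 2 is mean-reverting with unique zero theta* = 2."""
    rng = random.Random(seed)
    theta = theta0
    for n in range(1, n_iter + 1):
        gamma = 1.0 / n  # sum gamma_n = +inf, sum gamma_n^2 < +inf
        y = rng.gauss(2.0, 1.0)
        theta = proj(theta - gamma * (theta - y))
    return theta
```

With these choices the iterate is essentially a projected running mean of the innovations, so it converges \(a.s.\) to \(\theta ^*=2\), as the theorem predicts.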

Appendix 2: Functional co-monotony principle for a class of one-dimensional diffusions

In this section, we present the principle of co-monotony, first for random vectors taking values in a nonempty interval \(I\), then for one-dimensional diffusions lying in \(I\).

Case of random variables and random vectors

First we recall a classical result for random variables.

Proposition 8.1

Let \(f,g:I\subset \mathbb{R }\rightarrow \mathbb{R }\) be two monotonic functions with same monotony. Let \(X:(\Omega ,{\fancyscript{A}},\mathbb{P })\rightarrow I\) be a real valued random variable such that \(f(X),g(X)\in L^2(\mathbb{P })\). Then

$$\begin{aligned} \mathrm{Cov}(f(X),g(X))\ge 0. \end{aligned}$$


Proof

Let \(X,\,Y\) be two independent random variables defined on the same probability space with the same distribution \(\mathbb{P }_X\). Since \(f\) and \(g\) have the same monotony,

$$\begin{aligned} (f(X)-f(Y))(g(X)-g(Y))\ge 0 \end{aligned}$$

hence its expectation is non-negative too. Consequently

$$\begin{aligned} \mathbb{E }\left[ f(X)g(X)\right] -\mathbb{E }\left[ f(X)g(Y)\right] -\mathbb{E }\left[ f(Y)g(X)\right] +\mathbb{E }\left[ f(Y)g(Y)\right] \ge 0 \end{aligned}$$

so, using that \(Y\overset{(d)}{=}X\) and that \(X\) and \(Y\) are independent, we obtain

$$\begin{aligned} 2\mathbb{E }\left[ f(X)g(X)\right] \ge \mathbb{E }\left[ f(X)\right] \mathbb{E }\left[ g(Y)\right] +\mathbb{E }\left[ f(Y)\right] \mathbb{E }\left[ g(X)\right] =2 \mathbb{E }\left[ f(X)\right] \mathbb{E }\left[ g(X)\right] \end{aligned}$$

that is \(\mathrm{Cov}(f(X),g(X))\ge 0\). \(\square \)
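Proposition 8.1 is easy to check by simulation. Below is a small Monte Carlo sanity check with arbitrary illustrative choices (\(f(x)=x^3\), \(g(x)=\max (x,0)\), both non-decreasing, and \(X\sim {\fancyscript{N}}(0,1)\)); the empirical covariance should be non-negative up to statistical noise.

```python
import random

def empirical_cov(n=100_000, seed=42):
    """Monte Carlo check of Proposition 8.1 for the (arbitrary) nondecreasing
    pair f(x) = x**3 and g(x) = max(x, 0) with X ~ N(0, 1):
    the empirical covariance of f(X) and g(X) should be nonnegative."""
    rng = random.Random(seed)
    f = lambda x: x ** 3
    g = lambda x: max(x, 0.0)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    fs = [f(x) for x in xs]
    gs = [g(x) for x in xs]
    mf, mg = sum(fs) / n, sum(gs) / n
    return sum((a - mf) * (b - mg) for a, b in zip(fs, gs)) / n
```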

Proposition 8.2

Let \(F,G:\mathbb{R }^d\rightarrow \mathbb{R }\) be two monotonic functions with same monotony in each of their variables, i.e. for every \(i\in \{1,\ldots ,d\}, x_i\longmapsto F(x_1,\ldots ,x_i,\ldots ,x_d)\) and \(x_i\longmapsto G(x_1,\ldots ,x_i,\ldots ,x_d)\) are monotonic with the same monotony which may depend on \(i\) (but does not depend on \((x_1,\ldots ,x_{i-1}, x_{i+1},\ldots ,x_d)\in \mathbb{R }^{d-1}\)). Let \(X_1,\ldots ,X_d\) be independent real valued random variables defined on a probability space \((\Omega ,{\fancyscript{A}},\mathbb{P })\) such that \(F(X_1,\ldots ,X_d),G(X_1,\ldots ,X_d)\in L^2(\mathbb{P })\). Then

$$\begin{aligned} \mathrm{Cov}\left( F(X_1,\ldots ,X_d),G(X_1,\ldots ,X_d)\right) \ge 0. \end{aligned}$$


Proof

The proof proceeds by induction on \(d\). The case \(d=1\) is given by Proposition 8.1. For notational convenience we give the proof for \(d=2\); the general case follows likewise. By the monotonicity assumption on \(F\) and \(G\), we have for every \(x_2\in \mathbb{R }\), if \(X^{\prime }_1\overset{d}{=}X_1\) with \(X^{\prime }_1, X_1\) independent, that

$$\begin{aligned} \left( F(X_1,x_2)-F(X^{\prime }_1,x_2)\right) \left( G(X_1,x_2)-G(X^{\prime }_1,x_2) \right) \ge 0. \end{aligned}$$

This implies that (see Proposition 8.1)

$$\begin{aligned} \mathrm{Cov}\left( F(X_1,x_2),G(X_1,x_2)\right) \ge 0. \end{aligned}$$

If \(X_1\) and \(X_2\) are independent, using Fubini’s Theorem and what precedes, we have

$$\begin{aligned} \mathbb{E }\left[ F(X_1,X_2)G(X_1,X_2)\right]&= \int \limits _{\mathbb{R }}\mathbb{P }_{X_2}(dx_2) \mathbb{E }\left[ F(X_1,x_2)G(X_1,x_2)\right] \\&\ge \int \limits _{\mathbb{R }}\mathbb{P }_{X_2}(dx_2)\mathbb{E }\left[ F(X_1,x_2)\right] \mathbb{E }\left[ G(X_1,x_2)\right] . \end{aligned}$$

By setting \(\varphi (x_2)=\mathbb{E }\left[ F(X_1,x_2)\right] \) and \(\psi (x_2)=\mathbb{E }\left[ G(X_1,x_2)\right] \) and using the monotonicity assumptions on \(F\) and \(G\), we see that \(\varphi \) and \(\psi \) are monotonic with the same monotony, so that

$$\begin{aligned} \int \limits _{\mathbb{R }}\mathbb{P }_{X_2}(dx_2)\mathbb{E }\left[ F(X_1,x_2)\right] \mathbb{E }\left[ G(X_1,x_2)\right] =\mathbb{E }\left[ \varphi (X_2)\psi (X_2)\right] \ge \mathbb{E }\left[ \varphi (X_2)\right] \mathbb{E }\left[ \psi (X_2)\right] . \end{aligned}$$

Combining the two inequalities above finally yields \(\mathrm{Cov}\left( F(X_1,X_2),G(X_1,X_2)\right) \ge 0\).

\(\square \)

Case of (one-dimensional) diffusions

This framework corresponds to the infinite-dimensional case, and we cannot directly apply the result of Proposition 8.1: indeed, if we define the following natural order relation on \(\mathbb{D }([0,T],\mathbb{R })\)

$$\begin{aligned} \forall \alpha _1,\alpha _2\in \mathbb{D }([0,T],\mathbb{R }), \quad \alpha _1\le \alpha _2\Longleftrightarrow \left( \forall t\in [0,T], \ \alpha _1(t)\le \alpha _2(t)\right) , \end{aligned}$$

this order is only partial, which makes the formal proof of Proposition 8.1 collapse. To establish a co-monotony principle for diffusions, we proceed in two steps: first, we use the Lamperti transform to “force” the diffusion coefficient to be equal to 1 and we establish the co-monotony principle for this class of diffusions; then, by the inverse Lamperti transform, we go back to the original process.

In this section, we first present our framework in more detail. Then we recall some weak convergence results for diffusions with diffusion coefficient equal to 1. Afterwards we present the Lamperti transform and conclude with the general co-monotony principle.

Let \(I\) be a nonempty open interval of \(\mathbb{R }\). One considers a real-valued Brownian diffusion process

$$\begin{aligned} dX_t=b(t,X_t)dt+\sigma (t,X_t)dW_t, \quad X_0=x_0\in I,\quad t\in [0, T], \end{aligned}$$

where \(b,\,\sigma : [0,T]\times I\rightarrow \mathbb{R }\) are Borel functions with at most linear growth such that the above Eq. (8.1) admits at least one (weak) solution over \([0,T]\) and \(W\) is a Brownian motion defined on a probability space \((\Omega , {\fancyscript{A}}, \mathbb{P })\). We assume that the diffusion \(X\; a.s.\) does not explode and lives in the interval \(I\). This implies assumptions on the functions \(b\) and \(\sigma \), especially in the neighborhood (in \(I\)) of the endpoints of \(I\), that we will not detail here. At a finite endpoint of \(I\), these assumptions are strongly connected with the Feller classification, for which we refer to [17] (with \(\sigma (t,\cdot )>0\) for every \(t\in [0,T]\)). We will simply make the classical linear growth assumption on \(b\) and \(\sigma \) (which prevents explosion in finite time) that will be used for different purposes in what follows.

To “remove” the diffusion coefficient of the diffusion \(X\), we will introduce the so-called Lamperti transform which requires additional assumptions on the drift \(b\) and the diffusion coefficient \(\sigma \), namely

$$\begin{aligned} ({\fancyscript{A}}_{b,\sigma })\equiv \left\{ \begin{array}{ll} (\mathrm{i}) &{} \sigma \in {\fancyscript{C}}^1([0,T]\times I,\mathbb{R }), \\ (\mathrm{ii})&{} \forall (t,x)\in [0,T]\times I, \quad \left| b(t,x)\right| \le C(1+\left| x\right| ) \\ &{}\quad \text{ and } \quad 0<\sigma (t,x)\le C(1+\left| x\right| ),\\ (\mathrm{iii})&{} \forall \, x \in I, \quad \displaystyle \int \limits _{(-\infty ,x]\cap I}\frac{d\xi }{\sigma (t,\xi )}=\displaystyle \int \limits _{[x,+\infty )\cap I}\frac{d\xi }{\sigma (t,\xi )}=+\infty \end{array}\right. \end{aligned}$$


Condition (iii) clearly does not depend on \(x\in I\). Furthermore, if \(I=\mathbb{R }\), (iii) follows from (ii) since \(\frac{1}{\sigma (t,\xi )} \ge \frac{1}{C} \frac{1}{1+|\xi |}\).

Before passing to a short background on the Lamperti transform, which will lead to the new diffusion deduced from (8.1) whose diffusion coefficient is equal to \(1\), we need to recall (and adapt) some background on solutions and discretization of such SDEs.

Background on diffusions with \(\sigma \equiv 1\) (weak solution, discretization).

The following proposition gives a condition on the drift for the existence and the uniqueness of a weak solution of a SDE when \(\sigma \equiv 1\) (see [16] Proposition 3.6, Chap. 5, p. 303 and Corollary 3.11, Chap. 5, p. 305).

Proposition 8.3

Consider the stochastic differential equation

$$\begin{aligned} dY_t=\beta (t,Y_t)dt + dW_t, \quad t\in [0, T], \end{aligned}$$

where \(T\) is a fixed positive number, \(W\) is a one-dimensional Brownian motion and \(\beta :[0,T]\times \mathbb{R }\rightarrow \mathbb{R }\) is a Borel-measurable function satisfying

$$\begin{aligned} \left| \beta (t,y)\right| \le K(1+\left| y\right| ), \quad t\in [0, T],\quad y\in \mathbb{R },\quad K>0. \end{aligned}$$

For any probability measure \(\nu \) on \((\mathbb{R },{\fancyscript{B}}(\mathbb{R }))\), equation (8.3) has a weak solution with initial distribution \(\nu \).

If, furthermore, the drift term \(\beta \) satisfies one of the following conditions:

  1. (i)

    \(\beta \) is bounded on \([0,T]\times \mathbb{R }\),

  2. (ii)

    \(\beta \) is continuous, locally Lipschitz in \(y\in \mathbb{R }\) uniformly in \(t\in [0,T]\),

then this weak solution is unique (in fact (ii) is a strong uniqueness assumption).

Now we introduce the stepwise constant (Brownian) Euler scheme \(\bar{Y}^m=\left( \bar{Y}_{\frac{kT}{m}}\right) _{0\le k\le m}\) with step \(\frac{T}{m}\) of the process \(Y=(Y_t)_{t\in [0,T]}\) defined by (8.3), namely

$$\begin{aligned} \bar{Y}_{t^m_{k+1}}=\bar{Y}_{t^m_k}+\beta (t^m_k,\bar{Y}_{t^m_k})\frac{T}{m}+\sqrt{\frac{T}{m}}U_{k+1}, \quad \bar{Y}_0=Y_0=y_0, \quad k=0,\ldots ,m-1, \end{aligned}$$

where \(t^m_k=\frac{kT}{m}, k=0,\ldots ,m\), and \((U_k)_{1\le k\le m}\) denotes a sequence of i.i.d. \({\fancyscript{N}}(0,1)\)-distributed random variables given by

$$\begin{aligned} U_k=\sqrt{\frac{m}{T}}\left( W_{t^m_k}-W_{t^m_{k-1}}\right) , \quad \quad k=1,\ldots ,m. \end{aligned}$$

The following theorem gives a weak convergence result for the stepwise constant Euler scheme (8.4). Its proof is a straightforward consequence of the functional limit theorems for semi-martingales (to be precise Theorem 3.39, Chap. IX, p. 551 in [15]).

Theorem 8.1

Let \(\beta :[0,T]\times \mathbb{R }\rightarrow \mathbb{R }\) be a continuous function satisfying

$$\begin{aligned} \exists \, K>0,\quad \left| \beta (t,y)\right| \le K(1+\left| y\right| ), \quad t\in [0, T],\quad y\in \mathbb{R }. \end{aligned}$$

Assume that the weak solution of equation (8.3) is unique. Then, the stepwise constant Euler scheme of (8.3) with step \(\frac{T}{m}\) satisfies

$$\begin{aligned} \bar{Y}^m\stackrel{{\fancyscript{L}}}{\longrightarrow }Y \quad \text{ for } \text{ the } \text{ Skorokhod } \text{ topology } \text{ as }\;m\rightarrow \infty . \end{aligned}$$

In particular, for every functional \(F:\mathbb{D }([0,T],\mathbb{R })\rightarrow \mathbb{R }\), \(\mathbb{P }_Y(d\alpha )\)-\(a.s.\) continuous at \(\alpha \in {\fancyscript{C}}([0,T],\mathbb{R })\), with polynomial growth, we have

$$\begin{aligned} \mathbb{E }F(\bar{Y}^m)\underset{m\rightarrow \infty }{\longrightarrow }\mathbb{E }F(Y) \end{aligned}$$

(by uniform integrability since \(\sup _{t\in [0,T]}\left| \bar{Y}^m_t\right| \in \bigcap _{p>0}L^p\)).
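The stepwise constant Euler scheme (8.4) is straightforward to implement. The sketch below assumes a user-supplied drift \(\beta \) with linear growth and simulates one path of \(\bar{Y}^m\); the Ornstein–Uhlenbeck drift used in the usage note is an illustrative choice, not taken from the paper.

```python
import math
import random

def euler_unit_diffusion(beta, y0, T=1.0, m=200, rng=None):
    """Stepwise constant Euler scheme for dY_t = beta(t, Y_t) dt + dW_t
    (scheme (8.4)): returns the m+1 values (Ybar_{kT/m})_{0<=k<=m}.
    `beta` is any user-supplied drift function of (t, y)."""
    rng = rng or random.Random(0)
    h = T / m
    path = [y0]
    for k in range(m):
        t_k = k * h
        dW = math.sqrt(h) * rng.gauss(0.0, 1.0)  # W_{t_{k+1}} - W_{t_k}
        path.append(path[-1] + beta(t_k, path[-1]) * h + dW)
    return path
```

For instance, with the (illustrative) Ornstein–Uhlenbeck drift \(\beta (t,y)=-y\) and \(y_0=1\), averaging the terminal values over many simulated paths approximates \(\mathbb{E }Y_T=y_0e^{-T}\), in line with the weak convergence stated in Theorem 8.1.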

Background on the Lamperti transform

We now introduce a new diffusion \(Y_t:=L(t,X_t)\) which satisfies a new SDE whose diffusion coefficient is constant, equal to \(1\). The function \(L\), defined on \([0,T]\times I\), is known in the literature as the Lamperti transform. It is defined for every \((t,x)\in [0,T]\times I\) by

$$\begin{aligned} L(t,x):=\int \limits _{x_1}^x\frac{d\xi }{\sigma (t,\xi )} \end{aligned}$$

where \(x_1\) is an arbitrary fixed value lying in \(I\). The Lamperti transform clearly depends on the choice of \(x_1\) in \(I\), but its properties of interest do not. First, under \(({\fancyscript{A}}_{b,\sigma })\)-(i)-(ii), \(L\in {\fancyscript{C}}^{1,2}([0,T]\times I)\) with

$$\begin{aligned} \frac{\partial L}{\partial t}(t,x)&= -\int \limits _{x_1}^x\frac{1}{\sigma ^2(t,\xi )}\frac{\partial \sigma }{\partial t}(t,\xi )d\xi , \qquad \frac{\partial L}{\partial x}(t,x)=\frac{1}{\sigma (t,x)}>0\\ \text{ and }\qquad \frac{\partial ^2 L}{\partial x^2}(t,x)&= -\,\frac{1}{\sigma ^2(t,x)}\frac{\partial \sigma }{\partial x}(t,x). \end{aligned}$$

For every \(t\in [0,T]\), \(L(t,\cdot )\) is an increasing \({\fancyscript{C}}^{2}\)-diffeomorphism from \(I\) onto \(\mathbb{R }= L(t,I)\) (the last claim follows from \(({\fancyscript{A}}_{b,\sigma })\)-(iii)). Its inverse will be denoted \(L^{-1}(t,\cdot )\).

Notice that \((t,y)\mapsto L^{-1}(t,y)\) is continuous on \([0,T]\times \mathbb{R }\) since both sets

$$\begin{aligned} \left\{ (t,y)\in [0,T]\times \mathbb{R }\,:\, L^{-1}(t,y)\le c\right\} =\{(t,y)\in [0,T]\times \mathbb{R }\,:\, L(t,c)\ge y\} \end{aligned}$$

and

$$\begin{aligned} \left\{ (t,y)\in [0,T]\times \mathbb{R }\,:\, L^{-1}(t,y)\ge c\right\} =\{(t,y)\in [0,T]\times \mathbb{R }\,:\, L(t,c)\le y\} \end{aligned}$$

are closed for every \(c\in I\). Therefore, if \(({\fancyscript{A}}_{b,\sigma })\) holds, the function \(\beta :[0,T]\times \mathbb{R }\rightarrow \mathbb{R }\) defined by

$$\begin{aligned} \beta (t,y):=\left( \frac{b}{\sigma }-\int \limits _{x_1}^{\cdot }\frac{1}{\sigma ^2(t,\xi )}\frac{\partial \sigma }{\partial t}(t,\xi )d\xi -\frac{1}{2}\frac{\partial \sigma }{\partial x}\right) (t,L^{-1}(t,y)) \end{aligned}$$

is a Borel function, continuous as soon as \(b\) is. Now, we set

$$\begin{aligned} \forall \, t\in [0,T],\quad Y_t:=L(t,X_t). \end{aligned}$$

Itô's formula straightforwardly yields

$$\begin{aligned} dY_t=\beta (t,Y_t)dt+dW_t, \quad Y_0=L(0,x_0)=:y_0\in \mathbb{R }. \end{aligned}$$


  • In the homogeneous case, which is the most important case for our applications,

    $$\begin{aligned} dX_t=b(X_t)dt+\sigma (X_t)dW_t, \quad X_0=x_0\in \mathbb{R },\quad t\in [0, T], \end{aligned}$$

    we have

    $$\begin{aligned} L(t,x)=L(x):=\int \limits _{x_1}^x\frac{d\xi }{\sigma (\xi )}. \end{aligned}$$

    Then by setting \(Y_t:=L(X_t)\), we obtain

    $$\begin{aligned} dY_t=\beta (Y_t)dt+dW_t, \quad Y_0=L(x_0)=:y_0\quad \text{ with }\quad \beta :=\Big (\frac{b}{\sigma }-\frac{\sigma ^{\prime }}{2}\Big )\circ L^{-1}. \end{aligned}$$

    Note that \(\beta \) is bounded as soon as \(\frac{b}{\sigma }-\frac{\sigma ^{\prime }}{2}\) is.

  • If the partial derivative \(b^{\prime }_x\) exists on \([0,T]\times I\), one easily checks, using \((L^{-1})^{\prime }_y(t,y)= \sigma (t,L^{-1}(t,y))\), that for every \((t,y)\in [0,T]\times \mathbb{R }\),

    $$\begin{aligned} \beta ^{\prime }_y(t,y)=\left( b^{\prime }_x-\frac{b\sigma ^{\prime }_x+\sigma ^{\prime }_t}{\sigma }-\frac{\sigma \sigma ^{\prime \prime }_{x^2}}{2}\right) (t,L^{-1}(t,y)). \end{aligned}$$

As a consequence, if the function

$$\begin{aligned} b^{\prime }_x-\frac{b\sigma ^{\prime }_x+\sigma ^{\prime }_t}{\sigma }-\frac{\sigma \sigma ^{\prime \prime }_{x^2}}{2} \end{aligned}$$

is bounded on \([0,T]\times I\), then \(\beta \) is Lipschitz in \(y\) uniformly in \(t\in [0,T]\) (hence satisfies the linear growth assumption); if it is non-negative, then \(\beta \) is non-decreasing in \(y\).
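As a concrete illustration of the homogeneous case (a standard log-transform computation, not taken from the paper), consider Black–Scholes dynamics on \(I=(0,+\infty )\) with \(b(x)=\mu x\) and \(\sigma (x)=\sigma x\), \(\sigma >0\). Taking \(x_1=1\),

$$\begin{aligned} L(x)=\int \limits _{1}^x\frac{d\xi }{\sigma \xi }=\frac{\log x}{\sigma }, \qquad L^{-1}(y)=e^{\sigma y}, \end{aligned}$$

so that \(Y_t=L(X_t)\) satisfies \(dY_t=\beta \,dt+dW_t\) with constant drift \(\beta =\frac{b}{\sigma }-\frac{\sigma ^{\prime }}{2}=\frac{\mu }{\sigma }-\frac{\sigma }{2}\). This drift is bounded, hence Lipschitz and of linear growth, and condition \(({\fancyscript{A}}_{b,\sigma })\)-(iii) holds since \(\int _{(0,x]}\frac{d\xi }{\sigma \xi }=\int _{[x,+\infty )}\frac{d\xi }{\sigma \xi }=+\infty \): this recovers the usual logarithmic change of variable for geometric Brownian motion.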

Definition 8.1

The functional Lamperti transform, denoted \(\Lambda \), is a functional from \({\fancyscript{C}}([0,T],I)\) to \({\fancyscript{C}}([0,T],\mathbb{R })\) defined by

$$\begin{aligned} \forall \, \alpha \in {\fancyscript{C}}([0,T],I),\quad \Lambda (\alpha ) = L(\cdot ,\alpha (\cdot )). \end{aligned}$$

Proposition 8.4

If the diffusion coefficient \(\sigma \) satisfies \(({\fancyscript{A}}_{b,\sigma })\), the functional Lamperti transform is a homeomorphism from \({\fancyscript{C}}([0,T],I)\) onto \({\fancyscript{C}}([0,T],\mathbb{R })\).


Proof

Let \(\alpha \in {\fancyscript{C}}([0,T],I)\). Since \(\sigma \) is bounded away from \(0\) on the compact set \([0,T]\times \alpha ([0,T])\), standard arguments based on the Lebesgue dominated convergence theorem imply that \(\Lambda (\alpha ) \in {\fancyscript{C}}([0,T],\mathbb{R })\).

Conversely, as \(L(t,\cdot ): I\rightarrow \mathbb{R }\) is a homeomorphism for every \(t\in [0,T]\), \(\Lambda \) admits an inverse defined by

$$\begin{aligned} \forall \, \xi \in {\fancyscript{C}}([0,T],\mathbb{R }),\quad \Lambda ^{-1}(\xi ) := \big (t\mapsto L^{-1}(t,\xi (t))\big ) \in {\fancyscript{C}}([0,T],I). \end{aligned}$$

Let \(U_K\) denote the topology of the convergence on compact sets of \(I\) on \({\fancyscript{C}}([0,T],I)\).

\(\rhd \, U_K\)-Continuity of \(\Lambda \): if \(\alpha _n\stackrel{U_K}{\longrightarrow }\alpha _{\infty }\), the set \(K= [0,T]\times \bigcup _{n\in \overline{\mathbb{N }}}\alpha _n([0,T])\) is a compact set included in \([0,T]\times I\). Hence \(\sigma \) is bounded away from \(0\) on \(K\), so that

$$\begin{aligned}&\quad \quad \forall \, t\in [0,T],\quad |L(t,\alpha _n(t))-L(t,\alpha _{\infty }(t))|\le \frac{1}{\inf _K \sigma } |\alpha _n(t)-\alpha _{\infty }(t)|\\&i.e. \quad \Vert \Lambda (\alpha _n)-\Lambda (\alpha _{\infty })\Vert _{\infty }\le \frac{1}{\inf _K \sigma } \Vert \alpha _n-\alpha _{\infty }\Vert _{\infty }. \end{aligned}$$

\(\rhd U_K\)-Continuity of \(\Lambda ^{-1}\) on \([0,T]\times I\): by using \(({\fancyscript{A}}_{b,\sigma })\)-(ii), we have for a fixed \(t\in [0,T]\),

$$\begin{aligned} \forall x,x^{\prime }\in I, \quad \left| L(t,x)-L(t,x^{\prime })\right| \ge \frac{1}{C}\int \limits _{x\wedge x^{\prime }}^{x\vee x^{\prime }}\frac{d\xi }{1+|\xi |}=\frac{1}{C}\left| \varPhi (x)-\varPhi (x^{\prime })\right| , \end{aligned}$$

where \(\varPhi (z)=\text{ sign }(z)\log (1+|z|)\). Thus,

$$\begin{aligned} \forall y,y^{\prime }\in \mathbb{R },\quad \left| \varPhi (L^{-1}(t,y))-\varPhi (L^{-1}(t,y^{\prime }))\right| \le C\left| y-y^{\prime }\right| . \end{aligned}$$

Let \((\xi _n)_{n\ge 1}\) be a sequence of functions of \(\mathbb{D }([0,T],\mathbb{R })\) such that \(\xi _n\overset{U}{\underset{n\rightarrow +\infty }{\longrightarrow }}\xi \in {\fancyscript{C}}([0,T],\mathbb{R })\). Then, for every \(t\in [0,T]\) and \(n\ge 1\),

$$\begin{aligned} \left| \varPhi (L^{-1}(t,\xi _n(t)))\right| \le C\left| \xi _n(t)\right| +\left| \varPhi (L^{-1}(t,0))\right| \le C\left( \Vert \xi _n-\xi \Vert _{\infty }+\Vert \xi \Vert _{\infty }\right) +\left| \varPhi (x_0)\right| \le C^{\prime }, \end{aligned}$$

since \(L^{-1}(t,0)=x_0\). Consequently, for every \(t\in [0,T]\) and every \(n\ge 1, L^{-1}(t,\xi _n(t))\in K^{\prime }:=\varPhi ^{-1}([-C^{\prime },C^{\prime }])\). The set \(K^{\prime }\) is compact (because the function \(\varPhi \) is continuous and proper (\(\lim _{|z|\rightarrow \infty }\left| \varPhi (z)\right| =+\infty \))). As \(\inf _{K^{\prime }} \varPhi ^{\prime }>0\), we deduce that there exists \(\eta _0>0\) such that

$$\begin{aligned} \forall x,y\in K^{\prime }, \quad \left| \varPhi (x)-\varPhi (y)\right| \ge \eta _0|x-y|, \end{aligned}$$

hence

$$\begin{aligned} \forall t\in [0,T], \ \forall u,v\in L(t,I), \quad \left| L^{-1}(t,u)-L^{-1}(t,v)\right| \le C^{\prime \prime }\left| u-v\right| , \quad C^{\prime \prime }>0. \end{aligned}$$

Hence, one concludes that

$$\begin{aligned} \Vert \Lambda ^{-1}(\xi _n)-\Lambda ^{-1}(\xi )\Vert _{\infty }\le C^{\prime \prime }\Vert \xi _n-\xi \Vert _{\infty }. \end{aligned}$$

\(\square \)
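For a concrete homogeneous example (our choice, not the paper's), take \(\sigma (x)=\sigma x\) on \(I=(0,+\infty )\) with base point \(x_1=1\): then \(L\) and \(L^{-1}\) are available in closed form and the bijection between \(I\) and \(\mathbb{R }\) can be checked directly.

```python
import math

def lamperti(x, sigma=0.3):
    """Lamperti transform for the (illustrative) homogeneous coefficient
    sigma(x) = sigma * x on I = (0, +inf), with base point x1 = 1:
    L(x) = integral_1^x dxi / (sigma * xi) = log(x) / sigma."""
    return math.log(x) / sigma

def lamperti_inv(y, sigma=0.3):
    # Closed-form inverse: L^{-1}(y) = exp(sigma * y), mapping R back onto I
    return math.exp(sigma * y)
```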

Functional co-monotony principle for diffusions

Definition 8.2

The diffusion process (8.1) is admissible if \(({\fancyscript{A}}_{b,\sigma })\) holds and

  1. (i)

    for every starting value \(x_0\in I\), (8.1) has a unique weak solution which lives in \(I\) up to \(t=+\infty \) (see Proposition 8.3 for a criterion),

  2. (ii)

    the function \(\beta \) defined by

    $$\begin{aligned} \beta (t,y):=\left( \frac{b}{\sigma }-\int \limits _{x_1}^{\cdot }\frac{1}{\sigma ^2(t,\xi )}\frac{\partial \sigma }{\partial t}(t,\xi )d\xi -\frac{1}{2}\frac{\partial \sigma }{\partial x}\right) (t,L^{-1}(t,y)), \end{aligned}$$

    is continuous on \([0,T]\times \mathbb{R }\), non-decreasing in \(y\) for every \(t\in [0,T]\) or Lipschitz in \(y\) uniformly in \(t\in [0,T]\), and satisfies

    $$\begin{aligned} \exists \, K > 0\;\text{ such } \text{ that }\;\left| \beta (t,y)\right| \le K(1+\left| y\right| ), \;t\in [0, T],\; y\in \mathbb{R }. \end{aligned}$$

Definition 8.3

Let \(F:\mathbb{D }([0,T],\mathbb{R })\rightarrow \mathbb{R }\) be a functional.

  1. (i)

    The functional \(F\) is non-decreasing (resp. non-increasing) on \(\mathbb{D }([0,T],\mathbb{R })\) if

    $$\begin{aligned}&\forall \alpha _1,\alpha _2\in \mathbb{D }([0,T],\mathbb{R }), \quad \left( \forall t\in [0,T], \ \alpha _1(t)\le \alpha _2(t)\right) \\&\quad \Rightarrow F(\alpha _1)\le F(\alpha _2) \ \text{(resp. } F(\alpha _1)\ge F(\alpha _2)\text{) }. \end{aligned}$$
  2. (ii)

    The functional \(F\) is continuous at \(\alpha \in {\fancyscript{C}}([0,T],\mathbb{R })\) if

    $$\begin{aligned} \forall \alpha _m\in \mathbb{D }([0,T],\mathbb{R }), \quad \alpha _m\overset{U}{\longrightarrow }\alpha \in {\fancyscript{C}}([0,T],\mathbb{R }),\quad F(\alpha _m)\rightarrow F(\alpha ). \end{aligned}$$

    where \(U\) denotes the uniform convergence of functions on \([0,T]\). The functional \(F\) is \(C\)-continuous if it is continuous at every \(\alpha \in {\fancyscript{C}}([0,T],\mathbb{R })\).

  3. (iii)

    The functional \(F\) has polynomial growth if there exist real numbers \(K>0\) and \(r>0\) such that

    $$\begin{aligned} \forall \alpha \in \mathbb{D }([0,T],\mathbb{R }), \quad \left| F(\alpha )\right| \le K\left( 1+\left\| \alpha \right\| ^r_{\infty }\right) . \end{aligned}$$


Any \(C\)-continuous functional in the above sense is in particular \(\mathbb{P }_Z\)-\(a.s.\) continuous for every process \(Z\) with continuous paths.

Definition 8.4

A process \((X_t)_{t\in [0,T]}\) with continuous (resp. càdlàg stepwise constant) paths defined on \((\Omega ,{\fancyscript{A}},\mathbb{P })\) satisfies a functional co-monotony principle if, for every pair of \(C\)-continuous functionals (resp. measurable functionals on \(\mathbb{D }([0,T],\mathbb{R })\)) \(F,G\), monotonic with the same monotony, satisfying (8.11) and such that \(F(X), G(X)\) and \(F(X)G(X)\in L^1\), we have

$$\begin{aligned} \mathrm{Cov}\left( F\big (\left( X_t\right) _{t\in [0,T]}\big ),G \big (\left( X_t\right) _{t\in [0,T]}\big )\right) \ge 0. \end{aligned}$$

The main result of this section is the following.

Theorem 8.2

Assume that the real-valued diffusion process (8.1) is admissible (see Definition 8.2). Then it satisfies a co-monotony principle.

Corollary 8.1

Assume that the real-valued diffusion process (8.1) is admissible (see Definition 8.2).

  1. (a)

    Let \(\left( \bar{X}_{t^m_k}\right) _{0\le k\le m}\) be its stepwise constant Euler scheme with step \(\frac{T}{m}\) (\(t^m_k=\frac{kT}{m}, 0\le k\le m\)). Then \(\left( \bar{X}_{t^m_k}\right) _{0\le k\le m}\) satisfies a co-monotony principle.

  2. (b)

    Let \(\left( \tilde{X}_{t_k}\right) _{0\le k\le m}\) be a sample of discrete time observations of \((X_t)_{t\in [0,T]}\) for a subdivision \((t_k)_{0\le k\le m}\) of \([0,T]\) (\(0=t_0<\cdots <t_m=T\)). Then \(\left( \tilde{X}_{t_k}\right) _{0\le k\le m}\) satisfies a co-monotony principle.


The proof of Corollary 8.1 is contained in the proof of Theorem 8.2. The only difference is that we do not need to transfer the co-monotony principle from the Euler scheme to the diffusion process.
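Corollary 8.1(a) can be illustrated numerically: on the Euler scheme of a toy diffusion, any two non-decreasing path functionals should exhibit a non-negative covariance. The drift (Ornstein–Uhlenbeck) and the functionals (running maximum and terminal value) below are illustrative choices of ours, not taken from the paper.

```python
import math
import random

def comonotony_cov(n_paths=20_000, m=50, T=1.0, seed=1):
    """Monte Carlo illustration of Corollary 8.1(a): on the Euler scheme of the
    (illustrative) Ornstein-Uhlenbeck diffusion dX_t = -X_t dt + dW_t, the two
    nondecreasing path functionals F = running maximum and G = terminal value
    should exhibit a nonnegative empirical covariance."""
    rng = random.Random(seed)
    h = T / m
    Fs, Gs = [], []
    for _ in range(n_paths):
        x, running_max = 0.0, 0.0
        for _ in range(m):
            x += -x * h + math.sqrt(h) * rng.gauss(0.0, 1.0)
            running_max = max(running_max, x)
        Fs.append(running_max)
        Gs.append(x)
    mF, mG = sum(Fs) / n_paths, sum(Gs) / n_paths
    return sum((f - mF) * (g - mG) for f, g in zip(Fs, Gs)) / n_paths
```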

Before passing to the proof of Theorem 8.2, we need two lemmas: one is a key step to transfer co-monotony from the Euler scheme to the diffusion process, the other aims at transferring the uniqueness property for weak solutions.

Lemma 8.1

For every \(\alpha \in \mathbb{D }([0,T],\mathbb{R })\), set

$$\begin{aligned} \alpha ^{(m)}=\sum _{k=0}^{m-1}\alpha (t_k^m)\mathbf{1}_{[t_k^m,t_{k+1}^m)}+\alpha (T)\mathbf{1}_{\{T\}}, \quad m\ge 1, \end{aligned}$$

with \(t^m_k:=\frac{kT}{m}, k=0,\ldots ,m\). Then \(\alpha ^{(m)}\overset{U}{\longrightarrow }\alpha \) as \(m\rightarrow \infty \).

If \(F:\mathbb{D }([0,T],\mathbb{R })\rightarrow \mathbb{R }\) is \(C\)-continuous and non-decreasing (resp. non-increasing), then the unique function \(F_m:\mathbb{R }^{m+1}\rightarrow \mathbb{R }\) satisfying \(F(\alpha ^{(m)})=F_m\big (\alpha (t^m_0),\ldots ,\alpha (t^m_m)\big )\) is continuous and non-decreasing (resp. non-increasing) in each of its variables. Furthermore, if \(F\) satisfies a polynomial growth assumption of the form

$$\begin{aligned} \forall \,\alpha \in \mathbb{D }([0,T],\mathbb{R }),\quad |F(\alpha )|\le C(1+\Vert \alpha \Vert ^r_{\infty }) \end{aligned}$$

then, for every \(m\ge 1\),

$$\begin{aligned} |F_m(x_0,\ldots ,x_m)|\le C(1+\max _{0\le k\le m}|x_k|^r) \end{aligned}$$

with the same real constant \(C>0\).

Lemma 8.2

Let \((S,d), (T,\delta )\) be two Polish spaces and let \(\varPhi :S\rightarrow T\) be a continuous injective function. Let \(\mu \) and \(\mu ^{\prime }\) be two probability measures on \((S,{\fancyscript{B}}or(S))\). If \(\mu \circ \varPhi ^{-1}=\mu ^{\prime }\circ \varPhi ^{-1}\), then \(\mu =\mu ^{\prime }\).

Proof of Lemma 8.2

For every Borel set \(A\) of \(S\), \(\mu (A)=\sup \left\{ \mu (K), \ K\subset A,\right. \) \(\left. K \text{ compact }\right\} \), and likewise for \(\mu ^{\prime }\). Let \(A\in {\fancyscript{B}}or(S)\) be such that \(\mu (A)\ne \mu ^{\prime }(A)\). Then there exists a compact subset \(K\) of \(A\) such that \(\mu (K)\ne \mu ^{\prime }(K)\). Now \(\varPhi (K)\) is a compact subset of \(T\) because \(\varPhi \) is continuous, hence closed, so \(\varPhi ^{-1}\left( \varPhi (K)\right) \) is a Borel set of \(S\) which contains \(K\). As \(\varPhi \) is injective, \(\varPhi ^{-1}\left( \varPhi (K)\right) =K\). Therefore \(\mu \left( \varPhi ^{-1}\left( \varPhi (K)\right) \right) \ne \mu ^{\prime }\left( \varPhi ^{-1}\left( \varPhi (K)\right) \right) \), and we deduce that \(\mu \circ \varPhi ^{-1}\ne \mu ^{\prime }\circ \varPhi ^{-1}\).

\(\square \)

Proof of Theorem 8.2

First we consider the Lamperti transform \((Y_t)_{t\ge 0}\) (see (8.5)) of the diffusion \(X\), solution to (8.3) with \(X_0=x_0\in I\). Using the homeomorphism property of \(\Lambda \) and calling upon Lemma 8.2 above with \(\Lambda ^{-1}\) and \(\Lambda \), we see that the existence and uniqueness assumptions on Eq. (8.3) can be transferred to (8.7), since \(\Lambda \) is a one-to-one mapping between the solutions of these two SDEs.
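To make the effect of the Lamperti transform concrete, here is a small self-contained check (our own sketch, with arbitrary parameter values) in the Black–Scholes case treated among the examples below: applying \(L(x)=\frac{1}{\vartheta}\log(x/x_1)\) to exact geometric-Brownian-motion steps yields increments with constant drift \(r/\vartheta-\vartheta/2\) and unit diffusion coefficient, pathwise up to rounding error.

```python
import math, random

random.seed(0)
r, theta, x1, h = 0.05, 0.3, 1.0, 0.01   # arbitrary illustration parameters

def L(x):
    """Lamperti transform of the Black-Scholes diffusion."""
    return math.log(x / x1) / theta

x = 2.0                                   # current value of X
for _ in range(1000):
    z = random.gauss(0.0, 1.0)
    # exact one-step solution of dX = r X dt + theta X dW
    x_next = x * math.exp((r - 0.5 * theta ** 2) * h + theta * math.sqrt(h) * z)
    dY = L(x_next) - L(x)
    # the increment of Y is exactly drift * h + a Brownian increment sqrt(h) * z
    assert abs(dY - ((r / theta - theta / 2) * h + math.sqrt(h) * z)) < 1e-12
    x = x_next
```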

To fulfill condition (ii) in Definition 8.2, we need to introduce the smallest integer, denoted \(m_{b,\sigma }\), such that \(y\mapsto y+\frac{T}{m_{b,\sigma }}\beta (t,y)\) is non-decreasing in \(y\) for every \(t\in [0,T]\). Its existence follows from \({\fancyscript{A}}_{b,\sigma }\)-(ii). Note that if \(\beta \) is non-decreasing in \(y\) for every \(t\in [0,T]\), then \(m_{b,\sigma }=1\). Then we introduce the stepwise constant (Brownian) Euler scheme \(\bar{Y}^m=\big (\bar{Y}_{\frac{kT}{m}}\big )_{0\le k\le m}\) with step \(\frac{T}{m}\) (defined by (8.4)) of \(Y=(Y_t)_{t\in [0,T]}\) with \(m\ge m_{b,\sigma }\). It is clear by induction on \(k\) that there exists for every \(k\in \{1,\ldots ,m\}\) a function \(\Theta _k:\mathbb{R }^{k+1}\rightarrow \mathbb{R }\) such that

$$\begin{aligned} \bar{Y}_{t^m_k}=\Theta _k(y_0,\Delta W_{t^m_1},\ldots ,\Delta W_{t^m_k}) \end{aligned}$$

where for \((y_0,z_1,\ldots ,z_k)\in \mathbb{R }^{k+1}\),

$$\begin{aligned} \Theta _k(y_0,z_1,\ldots ,z_k)&= \Theta _{k-1}(y_0,z_1,\ldots ,z_{k-1})+ \beta (t^m_{k-1},\Theta _{k-1}(y_0,z_1,\ldots ,z_{k-1}))\frac{T}{m}+z_k\\&= \left( \mathrm{id}+\beta (t^m_{k-1},\cdot )\frac{T}{m}\right) \circ \Theta _{k-1}(y_0,z_1,\ldots ,z_{k-1})+z_k. \end{aligned}$$

Thus, for every \(i\in \{1,\ldots ,k\}\), \(z_i\mapsto \Theta _k(y_0,z_1,\ldots ,z_i,\ldots ,z_k)\) is non-decreasing because \(y\mapsto y+\beta (t^m_{k-1},y)\frac{T}{m}\) is non-decreasing for \(m\) large enough, say \(m\ge m_{b,\sigma }\). We deduce that if \(F_m:\mathbb{R }^{m+1}\rightarrow \mathbb{R }\) is non-decreasing in each variable, then, for every \(i\in \{1,\ldots ,m\}\),

$$\begin{aligned} z_i\mapsto F_m\left( y_0,\Theta _1(y_0,z_1),\ldots ,\Theta _m(y_0,z_1,\ldots ,z_m)\right) \text{ is } \text{ non-decreasing }. \end{aligned}$$

By the same reasoning, we deduce that for \(G_m:\mathbb{R }^{m+1}\rightarrow \mathbb{R }\) non-increasing in each variable, we have, for every \(i\in \{1,\ldots ,m\}\),

$$\begin{aligned} z_i\mapsto G_m\left( y_0,\Theta _1(y_0,z_1),\ldots ,\Theta _m(y_0,z_1,\ldots ,z_m)\right) \text{ is } \text{ non-increasing }. \end{aligned}$$

Let \(F_m\) and \(G_m\) be the functions defined on \(\mathbb{R }^{m+1}\) associated to \(F\) and \(G\) respectively by Lemma 8.1. As \(\beta \) has linear growth, \(Y\) and its Euler scheme have polynomial moments at any order \(p>0\). Then we can apply Proposition 8.2 to deduce that

$$\begin{aligned} \mathbb{E }\left[ FG\left( \bar{Y}^m\right) \right]&= \mathbb{E }\left[ F_m \left( \left( \bar{Y}_{\frac{kT}{m}}\right) _{0\le k\le m}\right) G_m\left( \left( \bar{Y}_{\frac{kT}{m}}\right) _{0\le k\le m}\right) \right] \\&\ge \mathbb{E }\left[ F_m\left( \left( \bar{Y}_{\frac{kT}{m}}\right) _{0\le k\le m}\right) \right] \mathbb{E }\left[ G_m\left( \left( \bar{Y}_{\frac{kT}{m}}\right) _{0\le k\le m}\right) \right] \\&= \mathbb{E }\left[ F\left( \bar{Y}^m\right) \right] \mathbb{E }\left[ G \left( \bar{Y}^m\right) \right] . \end{aligned}$$

Note that if \(F\) and \(G\) are \(C\)-continuous with polynomial growth, so is \(FG\). We derive from Theorem 8.1 that

$$\begin{aligned} \mathbb{E }\left[ FG\left( \bar{Y}^m\right) \right] \underset{m\rightarrow \infty }{\longrightarrow }\mathbb{E }\left[ FG(Y)\right] , \quad \mathbb{E }\left[ F\left( \bar{Y}^m\right) \right] \underset{m\rightarrow \infty }{\longrightarrow }\mathbb{E }\left[ F(Y)\right] , \quad \mathbb{E }\left[ G\left( \bar{Y}^m\right) \right] \underset{m\rightarrow \infty }{\longrightarrow }\mathbb{E }\left[ G(Y)\right] , \end{aligned}$$

so that

$$\begin{aligned} \mathrm{Cov}\left( F(Y),G(Y)\right) \ge 0. \end{aligned}$$
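The chain of arguments above (monotone Euler recursion, then Proposition 8.2) can be observed by simulation. The sketch below is our own illustration, with an arbitrary drift \(\beta(t,y)=-y\) satisfying the monotone-step condition, and the co-monotone pair \(F=\) running maximum, \(G=\) terminal value; it estimates \(\mathrm{Cov}\big(F(\bar Y^m),G(\bar Y^m)\big)\) by Monte Carlo and finds it non-negative.

```python
import random

random.seed(42)

def euler_path(y0, beta, m, T=1.0):
    """Stepwise constant Euler scheme of dY_t = beta(t, Y_t) dt + dW_t."""
    h = T / m
    y, path = y0, [y0]
    for k in range(m):
        y = y + beta(k * h, y) * h + random.gauss(0.0, h ** 0.5)
        path.append(y)
    return path

beta = lambda t, y: -y        # y -> y + h * beta(t, y) = (1 - h) * y is
                              # non-decreasing since h = T/m <= 1
F = lambda p: max(p)          # running maximum: non-decreasing, C-continuous
G = lambda p: p[-1]           # terminal value:  non-decreasing, C-continuous

N, m = 20_000, 50
sf = sg = sfg = 0.0
for _ in range(N):
    p = euler_path(0.0, beta, m)
    f, g = F(p), G(p)
    sf += f; sg += g; sfg += f * g
cov = sfg / N - (sf / N) * (sg / N)   # empirical covariance, expected >= 0
```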

To conclude the proof, we need to go back to the process \(X\) by using the inverse Lamperti transform. Indeed, for every \(t\in [0,T]\), \(X_t=L^{-1}(t,Y_t)\), where \(Y\) satisfies (8.7). Let \(F:\mathbb{D }([0,T],\mathbb{R })\rightarrow \mathbb{R }\) be \(C\)-continuous. Set

$$\begin{aligned} \forall \alpha \in {\fancyscript{C}}([0,T],\mathbb{R }), \quad \widetilde{F}(\alpha ):=F\left( \left( L^{-1}(t,\alpha _t)\right) _{t\in [0,T]}\right) . \end{aligned}$$

Assume first that \(F\) and \(G\) are bounded, and define \(\widetilde{G}\) from \(G\) in the same way. The functional \(\widetilde{F}\) is \(C\)-continuous owing to Proposition 8.4, non-decreasing (resp. non-increasing) since \(L^{-1}(t,\cdot )\) is for every \(t\in [0,T]\), and bounded. Consequently,

$$\begin{aligned} \mathrm{Cov}\left( F(X),G(X)\right) =\mathrm{Cov}\left( \widetilde{F}(Y),\widetilde{G}(Y)\right) \ge 0. \end{aligned}$$

To conclude, we approximate \(F\) and \(G\), in a way that is robust with respect to these monotonicity constraints, by a canonical truncation procedure, say

$$\begin{aligned} F_M:=\max \Big ((-M), \min \big (F,M\big )\Big ), \quad M\in \mathbb{N }. \end{aligned}$$

If \(F\) and \(G\) have polynomial growth, it is clear that \(\mathrm{Cov}\left( F_M(X),G_M(X)\right) \rightarrow \mathrm{Cov}(F(X),G(X))\) as \(M\rightarrow \infty \). \(\square \)
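The truncation step can be seen on a toy example (our own sketch, not from the paper): with a standard Gaussian \(X\), \(F(x)=x^3\) and \(G(x)=x\), both non-decreasing with polynomial growth, the empirical covariance of the clamped versions \(F_M, G_M\) increases to the untruncated covariance (\(\approx \mathbb{E}[X^4]=3\)) as \(M\) grows:

```python
import random

random.seed(1)

def clamp(v, M):
    """F_M = max(-M, min(F, M)), applied to the value F(x)."""
    return max(-M, min(v, M))

def cov(us, vs):
    n = len(us)
    mu, mv = sum(us) / n, sum(vs) / n
    return sum(u * v for u, v in zip(us, vs)) / n - mu * mv

xs = [random.gauss(0.0, 1.0) for _ in range(50_000)]
F = lambda x: x ** 3          # polynomial growth, non-decreasing
G = lambda x: x

full = cov([F(x) for x in xs], [G(x) for x in xs])       # ~ E[X^4] = 3
covs = [cov([clamp(F(x), M) for x in xs],
            [clamp(G(x), M) for x in xs]) for M in (1, 5, 50)]
# covs increases towards `full` as the truncation level M grows
```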

Examples of admissible diffusions

  • The Bachelier model: This simply means that \(X_t = \mu t +\sigma W_t\) with \(\sigma >0\); such a process clearly fulfills the assumptions of Theorem 8.2.

  • The Black–Scholes model: The diffusion process \(X\) is a geometric Brownian motion, solution to the SDE

    $$\begin{aligned} dX_t=rX_tdt+\vartheta X_tdW_t, \quad X_0=x_0>0, \end{aligned}$$

    where \(r\in \mathbb{R }\) and \(\vartheta >0\) are real numbers. The geometric Brownian motion lives in the open interval \(I=(0,+\infty )\) and \(\beta (y)=\frac{r}{\vartheta }-\frac{\vartheta }{2}\) is constant. One checks that \(L(x)= \frac{1}{\vartheta } \log \Big (\frac{x}{x_1}\Big )\) where \(x_1\in (0,+\infty )\) is fixed.

  • The Hull–White model: It is an elementary improvement of the Black–Scholes model in which \(\vartheta :[0,T]\rightarrow (0,+\infty )\) is a deterministic positive function, i.e. the diffusion process \(X\) is a geometric Brownian motion, solution of the SDE

    $$\begin{aligned} dX_t=rX_tdt+\vartheta (t) X_tdW_t, \quad X_0=x_0>0. \end{aligned}$$

    Then, elementary stochastic calculus shows that

    $$\begin{aligned} X_t =x_0 e^{rt-\frac{1}{2}\int _0^t \vartheta ^2(s)ds +\int _0^t\vartheta (s)dW_s}= x_0e^{rt-\frac{1}{2}\int _0^t \vartheta ^2(s)ds +B_{\int _0^t \vartheta ^2(s)ds}} \end{aligned}$$

    where \((B_u)_{u\ge 0}\) is a standard Brownian motion (the second equality follows from the Dambis–Dubins–Schwarz theorem). Consequently \(X_t= \varphi \Big (t,B_{\int _0^t\vartheta ^2(s)ds}\Big )\) where the functional \(\xi \mapsto \Big (t\mapsto \varphi \Big (t,\xi \Big (\int _0^.\vartheta ^2(s)ds\Big )\Big )\Big )\), defined on \(\mathbb{D }([0,T_{\vartheta }],\mathbb{R })\) with \(T_{\vartheta }= \int _0^T\vartheta ^2(t)dt\), is \(C\)-continuous on \({\fancyscript{C}}([0,T_{\vartheta }],\mathbb{R })\). Hence, for any \(C\)-continuous \(\mathbb{R }\)-valued functional \(F\) on \(\mathbb{D }([0,T], \mathbb{R })\), the functional \(\widetilde{F}\) defined by \(\widetilde{F}(\xi )=F\Big (\varphi \Big (t,\xi \Big (\int _0^.\vartheta ^2(s)ds\Big )\Big )\Big )\) is \(C\)-continuous on \(\mathbb{D }([0,T_{\vartheta }],\mathbb{R })\). Then one can transfer the co-monotony property from \(B\) to \(X\).

  • Local volatility model (elliptic case): More generally, the approach still applies, with \(I=(0,+\infty )\), to some usual extensions such as models with a local volatility

    $$\begin{aligned} dX_t=rX_tdt+\vartheta (X_t) X_tdW_t, \quad X_0=x_0>0, \end{aligned}$$

    where \(\vartheta :\mathbb{R }\rightarrow (\vartheta _0,+\infty ), \vartheta _0>0\), is a bounded, twice differentiable function satisfying \(\left| \vartheta ^{\prime }(x)\right| \le \frac{C}{1+ |x|}\) and \(\left| \vartheta ^{\prime \prime }(x)\right| \le \frac{C}{1+|x|^2}, x\in (0,+\infty )\).

In this case \(I= (0,+\infty )\) and, \(x_1\in I\) being fixed, one has for every \(x\in I\),

$$\begin{aligned} L(x) =\int \limits _{x_1}^x \frac{d\xi }{\xi \vartheta (\xi )} \end{aligned}$$

which clearly defines an increasing homeomorphism from \(I\) onto \(\mathbb{R }\) since \(\vartheta \) is bounded. Furthermore, one easily derives from the explicit form (8.9) and the condition (8.10) that \(\beta \) is Lipschitz as soon as the function

$$\begin{aligned} x\mapsto rx\frac{\vartheta ^{\prime }}{\vartheta }(x)+\frac{x^2\vartheta \vartheta ^{\prime \prime }(x)}{2}+x\vartheta \vartheta ^{\prime }(x)\; \text{ is } \text{ bounded } \text{ on } (0,\infty ) \end{aligned}$$

which easily follows from the assumptions made on \(\vartheta \).
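As a sanity check, the boundedness condition can be verified numerically for a concrete volatility function satisfying the stated decay assumptions. The sketch below (ours; the choice \(\vartheta(x)=0.2+0.1/(1+x^2)\) and the grid are arbitrary) approximates \(\vartheta'\) and \(\vartheta''\) by central finite differences and evaluates the expression on a wide grid:

```python
r = 0.03

def th(x):
    """Example local volatility: bounded, bounded away from 0, with
    |th'(x)| <= C/(1+x) and |th''(x)| <= C/(1+x^2)."""
    return 0.2 + 0.1 / (1.0 + x * x)

def d1(x, h=1e-5):                      # central first difference ~ th'(x)
    return (th(x + h) - th(x - h)) / (2 * h)

def d2(x, h=1e-4):                      # central second difference ~ th''(x)
    return (th(x + h) - 2 * th(x) + th(x - h)) / (h * h)

def expr(x):
    """r x th'/th + x^2 th th''/2 + x th th', which must stay bounded."""
    return (r * x * d1(x) / th(x)
            + x * x * th(x) * d2(x) / 2
            + x * th(x) * d1(x))

grid = [0.01, 0.1, 0.5, 1.0, 2.0, 5.0, 10.0, 100.0]
vals = [abs(expr(x)) for x in grid]     # stays small on the whole grid
```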

Extension to other classes of diffusions and models. This general approach does not embody all situations: for instance, the true CEV model does not fulfill the above assumptions. The CEV model is a diffusion process \(X\) following the SDE

$$\begin{aligned} dX_t=rX_tdt+\vartheta X_t^{\alpha }dW_t, \quad X_0=x_0, \end{aligned}$$

where \(\vartheta >0\) and \(0<\alpha <1\) are real numbers.

So this CEV model, for which \(I=(0,+\infty )\), does not fulfill Assumption \({\fancyscript{A}}_{b,\sigma }\)-(iii). As a consequence, \(L(I)\ne \mathbb{R }\): it is an open interval (depending on the choice of \(x_1\)). To be precise, if \(x_1\in (0,+\infty )\) is fixed,

$$\begin{aligned} L(x)=\frac{1}{\vartheta (1-\alpha )}\big (x^{1-\alpha }-x_1^{1-\alpha }\big ),\quad x\in (0,+\infty ) \end{aligned}$$

so that, if we set

$$\begin{aligned} J_{x_1}:= L(I)=\Big (-\frac{x_1^{1-\alpha }}{\vartheta (1-\alpha )},+\infty \Big ), \end{aligned}$$

\(L\) defines a homeomorphism from \(I=(0,+\infty )\) onto \(J_{x_1}\). Finally, the function \(\beta \) defined by

$$\begin{aligned} \beta (y) =\frac{r}{\vartheta }\big (\vartheta (1-\alpha )y+x_1^{1-\alpha }\big )-\frac{\alpha \vartheta }{2}\frac{1}{(\vartheta (1-\alpha )y+x_1^{1-\alpha })},\quad y\in J_{x_1} \end{aligned}$$

is non-decreasing with linear growth at \(+\infty \). Now, following the lines of the above proof, in particular establishing weak existence and uniqueness of the solution of the SDE (8.3) in this setting, leads to the same positive conclusion concerning the covariance inequalities for co-monotonic or anti-monotonic functionals.
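The CEV formulas above are easy to check numerically. The sketch below (ours, with arbitrary parameter values \(\vartheta=0.4\), \(\alpha=1/2\), \(x_1=1\), \(r=0.03\)) verifies that \(L\) is increasing on \((0,+\infty)\) with values in \(J_{x_1}\), that its explicit inverse recovers \(x\), and that \(\beta\) is non-decreasing on the sampled range:

```python
theta, a, x1, r = 0.4, 0.5, 1.0, 0.03    # arbitrary CEV parameters, 0 < a < 1

def L(x):
    """L(x) = (x^{1-a} - x1^{1-a}) / (theta (1-a)), x > 0."""
    return (x ** (1 - a) - x1 ** (1 - a)) / (theta * (1 - a))

def L_inv(y):
    """Inverse of L, defined on J_{x1}."""
    return (theta * (1 - a) * y + x1 ** (1 - a)) ** (1.0 / (1 - a))

def beta(y):
    """Drift of the Lamperti-transformed CEV diffusion on J_{x1}."""
    u = theta * (1 - a) * y + x1 ** (1 - a)
    return (r / theta) * u - (a * theta / 2) / u

lower = -x1 ** (1 - a) / (theta * (1 - a))   # left endpoint of J_{x1}
xs = [0.01, 0.1, 1.0, 10.0, 100.0]
ys = [L(x) for x in xs]
# ys is increasing, stays above `lower`, L_inv inverts L,
# and beta is non-decreasing on J_{x1}
```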


Cite this article

Laruelle, S., Lehalle, CA. & Pagès, G. Optimal posting price of limit orders: learning by trading. Math Finan Econ 7, 359–403 (2013). https://doi.org/10.1007/s11579-013-0096-7



Keywords

  • Stochastic approximation
  • Order book
  • Limit order
  • Market impact
  • Statistical learning
  • High-frequency optimal liquidation
  • Poisson process
  • Co-monotony principle

Mathematics Subject Classification (2000)

  • 62L20
  • 62P05
  • 60G55
  • 65C05