1 Introduction and statement of the results

Although links between the fractional Laplacian and symmetric stable processes are often alluded to (cf. [3, 4] and Remark 2), one might argue that the connection between fractional calculus and stable processes can be strengthened further. The oldest references relating fractional calculus and stable random variables are the seminal work of Feller [5], which uses fractional calculus to compute a series representation of stable densities, and the articles of Gorenflo and Mainardi (cf. [6, 7]), which identify a correspondence between stable characteristic functions and the Fourier transforms of fractional derivatives. More recent references are the book of Meerschaert and Sikorskii [3] and the article of Kolokoltsov [8], where the infinitesimal generator of a stable process, and various transformations of it, are written in terms of different types of fractional derivatives.

One objective of this work is to present a natural application of fractional calculus to (one-dimensional and asymmetric) stable processes by inverting their infinitesimal generator. The inversion is valid on the so-called Lizorkin space. Among its consequences is a generalization, to the stable setting, of the celebrated Tanaka formula for Brownian motion, first obtained by Tsukada [1]. This will follow from constructing a function which the generator transforms into the \(\delta \) distribution. More generally, one can define a class of functions whose image under the generator is a signed measure. Applying functions of this class to stable processes yields semimartingales; the semimartingale decomposition gives us a version of the Meyer-Itô formula for discontinuous semimartingales (cf. Protter [9, IV.7]) which features a non-zero local time term. This yields concrete semimartingale decompositions for power functions applied to stable processes, which were recently obtained for symmetric stable processes by Engelbert and Kurenok [2].

This work is based on several results, from both probability theory and fractional calculus, so let us first state the basic elements we will need.

Definition 1

(Strictly stable process) A Lévy process \(\left( X_t\right) _{t\ge 0}\) is called a strictly stable process with index of stability \(\alpha \in (0,2){\setminus } \left\{ 1\right\} \) if, for any \(c>0\): \(X_{ct}{\mathop {=}\limits ^{d}} c^{1/\alpha }X_t\).

We will only consider strictly stable processes in this paper, excluding the cases when the index is 1 or 2, corresponding to the symmetric Cauchy process and Brownian motion, which have been studied with different techniques. Stable processes belong to the class of Lévy processes (cf. [10]); a review of their properties is deferred to the next section.

According to [11] (Chapter 3) there exist some constants \(c_-, c_+ \ge 0\), not both zero, such that the Lévy measure \(\nu \) of X, which describes the jumps of X and is given by

$$\begin{aligned} \nu (A)={\mathbb {E}}(\# \{t\in [0,1]: X_{t}-X_{t-}\in A\}), \end{aligned}$$

satisfies:

$$\begin{aligned} \nu (dh) = \left( c_{-} {{\,\mathrm{1{}l}\,}}_{\left\{ h<0 \right\} } + c_{+} {{\,\mathrm{1{}l}\,}}_{\left\{ h>0 \right\} } \right) \frac{dh}{\left| h\right| ^{\alpha +1}}. \end{aligned}$$

Stable processes can then be constructed using a Poisson random measure N with intensity \(ds\, \nu (dh)\) if \(\alpha \in (0,1)\) or the compensated Poisson random measure \({\tilde{N}}\) when \(\alpha \in (1,2)\) by means of the Lévy-Itô decomposition:

$$\begin{aligned} X_t =X_0 + {\left\{ \begin{array}{ll} \displaystyle \int _0^t \int _{{\mathbb {R}}_0} h N(ds,dh)&{} \text { if }\alpha \in (0,1) \\ \displaystyle \int _0^t \int _{{\mathbb {R}}_0} h {\tilde{N}}(ds,dh)&{}\text { if }\alpha \in (1,2) \end{array}\right. }. \end{aligned}$$

In fact, note that in the recurrent case when \(\alpha \in (1,2)\), \(X_t\) is integrable for any t and X is a martingale (whenever \(X_0\) is deterministic). In both cases, we will write that \(X \sim S_{\alpha }\left( c_-, c_+ \right) \) when we refer to a strictly stable process with such parameters. The infinitesimal generator \({\mathcal {L}}\) of X can be defined as the derivative at zero of the semigroup on an adequate class of functions. Indeed, recall that if \(\phi :{\mathbb {R}}\rightarrow {\mathbb {R}}\) belongs to the Schwartz space \({\mathcal {S}}({\mathbb {R}})\) of rapidly decreasing functions, we have

$$\begin{aligned} {\mathcal {L}}\phi (x)&:=\left. \frac{\partial }{\partial t}\right| _{t=0} {\mathbb {E}}(\phi (x+X_t)) \\ {}&={\left\{ \begin{array}{ll} \displaystyle \int _{{\mathbb {R}}_0} [\phi (x+y)-\phi (x)]\, \nu (dy)&{}\alpha \in (0,1)\\ \displaystyle \int _{{\mathbb {R}}_0} [\phi (x+y)-\phi (x)-y\phi '(x)]\, \nu (dy)&{}\alpha \in (1,2) \end{array}\right. }, \end{aligned}$$

as in [10, I.2].

The behavior and further properties of the process X differ substantially depending on whether \(\alpha \in (0,1)\) or \(\alpha \in (1,2)\), so the two cases will be studied separately (as above, there are many differences between them, such as the transient/recurrent dichotomy, the polar/non-polar character of zero, or the bounded vs unbounded variation of the sample paths). Nevertheless, in both cases we obtain a representation of the infinitesimal generator and its inverse in terms of fractional operators. The fractional operators we will use are the Riemann-Liouville operators; their definitions and further properties can be consulted in [12, Ch. 2].

Definition 2

(Riemann-Liouville fractional operators) Let \(\alpha \ge 0\) and \(\varphi \in {\mathcal {S}}({\mathbb {R}})\). Then, the left and right Riemann-Liouville fractional operators of order \(\alpha \) applied to \(\varphi \) are defined in three cases:

  • For \(\alpha = 0\) we get the identity operator

    $$\begin{aligned} W_-^\alpha \varphi \left( x\right) = W_+^{\alpha } \varphi \left( x\right) := \varphi \left( x\right) . \end{aligned}$$
  • For \(\alpha > 0\), the (left and right) Riemann-Liouville fractional integrals are given by

    $$\begin{aligned} W_{-}^\alpha \varphi (x):= & {} \frac{1}{\varGamma \left( \alpha \right) } \int _{-\infty }^{x}\left( x-t\right) ^{\alpha -1}\varphi \left( t\right) dt\quad \text {and}\\ W_+^\alpha \varphi (x):= & {} \frac{1}{\varGamma \left( \alpha \right) } \int _{x}^{\infty }\left( t-x\right) ^{\alpha -1}\varphi \left( t\right) dt. \end{aligned}$$
  • For \(n-1 < \alpha \le n\), with \(n \in {\mathbb {N}}\), the Riemann-Liouville fractional derivatives are given by

    $$\begin{aligned} W_-^{-\alpha }\varphi (x):= & {} \frac{d^n}{dx^n} W_-^{n-\alpha }\varphi (x) \quad \text {and}\\ W_+^{-\alpha }\varphi (x):= & {} (-1)^n \frac{d^n}{dx^n} W_+^{n-\alpha }\varphi (x). \end{aligned}$$
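These definitions can be probed numerically. For the test function \(\varphi (t)=e^{-t}\) (which decays fast enough on the right half-line, though it is not Schwartz), a direct computation gives \(I_+^\alpha \varphi = \varphi \) for every \(\alpha > 0\). A minimal numerical sketch in Python (the quadrature scheme, sample points and tolerances are our own choices):

```python
import math

def right_frac_integral(phi, x, alpha, n=200_000, upper=10.0):
    """Approximate the right Riemann-Liouville integral I_+^alpha phi(x).

    The substitution t = x + s**2 removes the endpoint singularity of
    (t - x)**(alpha - 1) at t = x; a midpoint rule is then used on
    s in [0, upper]."""
    h = upper / n
    total = 0.0
    for k in range(n):
        s = (k + 0.5) * h
        total += 2.0 * s ** (2.0 * alpha - 1.0) * phi(x + s * s)
    return total * h / math.gamma(alpha)

phi = lambda t: math.exp(-t)  # I_+^alpha phi = phi for every alpha > 0
x = 0.3
results = {a: right_frac_integral(phi, x, a) for a in (0.5, 0.75, 1.5)}
print(results, math.exp(-x))
```

The substitution \(t = x + s^2\) tames the singularity of the kernel at \(t = x\) when \(\alpha < 1\); for \(\alpha \ge 1\) there is no singularity and the same scheme applies.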

Remark 1

  1.

    In fact, both the fractional integral and derivative are defined under \(L^p\) assumptions depending on \(\alpha \), but we restrict to Schwartz space so that their Fourier transforms are well defined.

  2.

    If \(\alpha > 0\), we will use the following notation for the fractional integrals and derivatives:

    $$\begin{aligned} I^{\alpha }_\pm := W^{\alpha }_\pm \quad \text {and}\quad D^{\alpha }_\pm := W^{-\alpha }_\pm . \end{aligned}$$
  3.

    If \(\alpha \in {\mathbb {N}}\), then the left fractional operators \(I_-^\alpha \) and \(D_-^\alpha \) correspond to the iterated integral and classical differential operators of order \(\alpha \). Fractional operators can be regarded as “nice” interpolations between their integer-order neighbors.

  4.

    On an adequate domain (the so-called Lizorkin space, to be introduced), they satisfy the group property with respect to composition: for \(\alpha ,\beta \in {\mathbb {R}}\),

    $$\begin{aligned} W_-^\alpha \circ W_-^{\beta } = W_-^{\alpha +\beta } \quad \text {and}\quad W_+^{\alpha }\circ W_+^{\beta } = W_+^{\alpha +\beta }. \end{aligned}$$

    From the group property, we immediately obtain that the inverse of \(W^\alpha _{\pm }\) is \(W^{-\alpha }_{\pm }\).

  5.

    Again on the Lizorkin space, we have that \(W^\beta _{\pm }\phi \rightarrow W^\alpha _{\pm }\phi \) as \(\beta \rightarrow \alpha \), as follows from the expressions of the Fourier transforms in Proposition 6.

With some algebraic manipulations, the infinitesimal generator of a strictly \(\alpha \)-stable process can be written as a linear combination of left and right Riemann-Liouville fractional derivatives of order \(\alpha \). For a detailed proof see for example the article of Kolokoltsov [8] (Section 2), or the book of Meerschaert and Sikorskii [3] (Section 2.2).

Proposition 1

(Infinitesimal generator) Let \(\alpha \in (0,2)\setminus \left\{ 1\right\} \), \(c_-, c_+ \ge 0\), not both zero. If \(X\sim S_{\alpha }\left( c_-, c_+ \right) \), then the domain of the infinitesimal generator \({\mathcal {L}}\) of X contains \({\mathcal {S}}({\mathbb {R}})\). For \(\varphi \in {\mathcal {S}}({\mathbb {R}})\), we have:

$$\begin{aligned} {\mathcal {L}}\varphi \left( x\right) = M_- D_-^{\alpha }\varphi \left( x\right) + M_+ D_+^{\alpha }\varphi \left( x\right) , \end{aligned}$$

where \(M_\pm = c_\pm \varGamma (-\alpha )\).

Remark 2

This representation is consistent with the case \(\alpha = 2\) and \(c_- = c_+\), which corresponds to the Brownian motion, and its infinitesimal generator is the Laplacian \(\varDelta \). In the case \(\alpha \in (0,2){\setminus } \left\{ 1\right\} \) and \(c_- = c_+\), corresponding to a symmetric strictly \(\alpha \)-stable process, the infinitesimal generator is given by the fractional Laplacian \(-(-\varDelta )^{\alpha /2}\).
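In view of Propositions 1 and 6, \({\mathcal {L}}\) acts on the Fourier side as multiplication by \(M_-(-iu)^\alpha + M_+(iu)^\alpha \); in the symmetric case \(c_-=c_+=c\) this reduces to \(2c\varGamma (-\alpha )\cos (\alpha \pi /2)|u|^\alpha \), a negative real multiple of \(|u|^\alpha \), consistent with the symbol of the fractional Laplacian. A quick numerical sanity check of this sign (our own sketch; parameter choices are illustrative):

```python
import math

def generator_symbol(u, alpha, c_minus, c_plus):
    """Fourier multiplier of the generator: M_-(-iu)^alpha + M_+(iu)^alpha,
    with M_pm = c_pm * Gamma(-alpha) and principal-branch complex powers."""
    M_m = c_minus * math.gamma(-alpha)
    M_p = c_plus * math.gamma(-alpha)
    return M_m * (-1j * u) ** alpha + M_p * (1j * u) ** alpha

# Symmetric case: the symbol is a negative real multiple of |u|^alpha,
# i.e. a constant times the symbol of -(-Delta)^(alpha/2).
for alpha in (0.5, 1.5):
    s = generator_symbol(2.0, alpha, 1.0, 1.0)
    print(alpha, s)
```

Note that the sign works out in both regimes: for \(\alpha \in (0,1)\), \(\varGamma (-\alpha )<0\) and \(\cos (\alpha \pi /2)>0\), while for \(\alpha \in (1,2)\) both signs flip.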

The semigroup property of fractional operators is not enough by itself to invert the infinitesimal generator, since we lack an expression for the composition of left and right fractional operators. The result of this computation is stated in the forthcoming Proposition 2.

The main difficulty in working within the fractional calculus framework is the domain of definition of these operators: Schwartz space is not invariant under fractional operators (cf. [12], Section 8.2). Since we seek the inverse of the infinitesimal generator, it is useful to have a space which remains invariant under the action of the Riemann-Liouville fractional operators. Such spaces have been thoroughly studied by Lizorkin [13, 14], Samko, Kilbas and Marichev [12] and Rubin [15, 16].

Definition 3

(Lizorkin space) Consider the space of functions that vanish at zero together with all their derivatives:

$$\begin{aligned} \varPsi = \left\{ \psi \in {\mathcal {S}}({\mathbb {R}})\left| \psi ^{ (j)}(0)=0, j \in \{0,1,2,\ldots \} \right. \right\} . \end{aligned}$$

Then, the space of functions whose Fourier transforms are in \(\varPsi \) is called the Lizorkin space and is defined by

$$\begin{aligned} \varPhi = \left\{ \phi \in {\mathcal {S}}({\mathbb {R}}) \left| {\mathcal {F}}[\phi ] \in \varPsi \right. \right\} . \end{aligned}$$

In the Lizorkin space, compositions of fractional operators are well defined and, therefore, fractional integrals are the inverses of fractional derivatives. To invert the generator, however, we also need to know how crossed compositions are computed. The following result is stated, without proof, for fractional integrals in the article of Feller [5].

Proposition 2

Let \(\lambda , \mu \in {\mathbb {R}}\) with \((\lambda + \mu ) \notin {\mathbb {Z}}\) and \(\phi \in \varPhi \). Then, the crossed composition of Riemann-Liouville operators satisfies:

$$\begin{aligned} W_+^{\lambda } W_-^{\mu } \phi \left( x\right) = \frac{\sin \left( \mu \pi \right) }{\sin \left( \left( \lambda +\mu \right) \pi \right) } W_-^{\lambda + \mu } \phi \left( x\right) + \frac{\sin \left( \lambda \pi \right) }{\sin \left( \left( \lambda +\mu \right) \pi \right) } W_+^{\lambda + \mu } \phi \left( x\right) . \end{aligned}$$
(1.1)
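On the Fourier side (Proposition 6), \(W_+^{\lambda }\) acts as multiplication by \((iu)^{-\lambda }\) and \(W_-^{\mu }\) by \((-iu)^{-\mu }\), so (1.1) reduces to an identity between principal-branch complex powers that can be checked numerically. A sketch (the parameter choices are ours):

```python
import math

def W_plus_symbol(u, lam):
    """Fourier multiplier of W_+^lam: (iu)^(-lam), principal branch."""
    return (1j * u) ** (-lam)

def W_minus_symbol(u, mu):
    """Fourier multiplier of W_-^mu: (-iu)^(-mu), principal branch."""
    return (-1j * u) ** (-mu)

def crossed_identity_gap(u, lam, mu):
    """Absolute difference between the two sides of (1.1) on the Fourier side."""
    s = math.sin((lam + mu) * math.pi)
    lhs = W_plus_symbol(u, lam) * W_minus_symbol(u, mu)
    rhs = (math.sin(mu * math.pi) / s) * W_minus_symbol(u, lam + mu) \
        + (math.sin(lam * math.pi) / s) * W_plus_symbol(u, lam + mu)
    return abs(lhs - rhs)

for u in (-3.0, 0.7, 5.0):
    for lam, mu in ((0.3, 0.5), (-0.4, 1.1), (1.3, -0.6)):
        print(u, lam, mu, crossed_identity_gap(u, lam, mu))
```

The gap is zero (up to rounding) for every tested \(u\ne 0\) and every pair with \(\lambda +\mu \notin {\mathbb {Z}}\).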

Working in the Lizorkin space and using the last result we can compute the inverse of the infinitesimal generator of a stable process:

Theorem 1

(Inverse of the Infinitesimal Generator) Let \(\alpha \in (0,2)\setminus \left\{ 1\right\} \), \(c_-, c_+ \ge 0\), not both zero. Consider \(X\sim S_{\alpha }\left( c_-, c_+ \right) \) with infinitesimal generator \({\mathcal {L}}\). Then, \({\mathcal {L}}\) is invertible in \(\varPhi \) and for every \(\phi \in \varPhi \)

$$\begin{aligned} \displaystyle {\mathcal {L}}^{-1}\phi \left( x\right) = K_{-} I_-^{\alpha }\phi \left( x\right) + K_{+} I_+^{\alpha }\phi \left( x\right) , \end{aligned}$$
(1.2)

where

$$\begin{aligned} K_\pm = \frac{M_\pm }{M_-^2 + M_+^2 + 2M_-M_+\cos (\pi \alpha )}, \end{aligned}$$

and the constants \(M_\pm \) are defined in Proposition 1.
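The theorem can be sanity-checked on the Fourier side: the symbol of \({\mathcal {L}}\) is \(M_-(-iu)^\alpha + M_+(iu)^\alpha \) and that of the claimed inverse (1.2) is \(K_-(-iu)^{-\alpha } + K_+(iu)^{-\alpha }\), so their product should be identically 1. A numerical sketch (our own; parameter choices are illustrative):

```python
import math

def symbols_product(u, alpha, c_minus, c_plus):
    """Product of the Fourier symbols of L (Proposition 1) and of the claimed
    inverse (1.2); it should equal 1 for every u != 0."""
    M_m = c_minus * math.gamma(-alpha)
    M_p = c_plus * math.gamma(-alpha)
    denom = M_m ** 2 + M_p ** 2 + 2.0 * M_m * M_p * math.cos(math.pi * alpha)
    K_m, K_p = M_m / denom, M_p / denom
    L_sym = M_m * (-1j * u) ** alpha + M_p * (1j * u) ** alpha
    Linv_sym = K_m * (-1j * u) ** (-alpha) + K_p * (1j * u) ** (-alpha)
    return L_sym * Linv_sym

for alpha, c_m, c_p in ((0.5, 1.0, 2.0), (1.5, 0.3, 0.0), (1.7, 2.0, 2.0)):
    for u in (-4.0, 0.5, 3.0):
        print(alpha, u, symbols_product(u, alpha, c_m, c_p))
```

Expanding the product gives \(M_-K_- + M_+K_+ + M_-K_+e^{\mp i\alpha \pi } + M_+K_-e^{\pm i\alpha \pi }\), whose numerator is exactly the denominator of \(K_\pm \), which explains why the check succeeds for every admissible choice of parameters.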

Note that the Lizorkin space is known to be dense in the space of continuous functions vanishing at infinity and in \(L^p\) (cf. [14] and [17]), so the above inversion formula is quite general. As an application of equation (1.2), the following known results can be recovered:

  1.

    For the case \(\alpha \in (0,1)\), the Lévy process X is transient. Therefore, its potential corresponds to the inverse of the negative of the infinitesimal generator, \((-{\mathcal {L}})^{-1}\). The above theorem recovers the expression of Sato [18, Example 5.4].

  2.

    For the case \(\alpha \in (1,2)\), the Lévy process X is recurrent and its classical potential is infinite. Nevertheless, Port [19] defined the recurrent potential for stable processes (by an appropriate compensated kernel) and computed it explicitly. As Sato [18] notes, for a wide class of Lévy processes, the limit \(\lim _{\lambda \rightarrow 0}(\lambda -{\mathcal {L}})^{-1}\) corresponds to a potential (classical or recurrent). On Lizorkin space, where we can explicitly compute an inversion thanks to the above theorem, Port’s computation can be recovered.

  3.

    The function involved in the Tanaka formula for strictly stable processes obtained by Tsukada [1] admits the following heuristic explanation. The Itô formula for Lévy processes (see Proposition 4) tells us that for any Schwartz function f, writing \(g={\mathcal {L}}f\), we have

    $$\begin{aligned} f(x+X_t)=f(x)+M^f_t+\int _0^t g(x+X_s)\, ds, \end{aligned}$$

    where \(M^f\) is a martingale whose explicit expression is only needed later. Formally, if g equals the Dirac \(\delta \) distribution, the last summand equals the time that X spends at x on [0, t], which is one guiding principle behind the construction of the local time of X at x. Hence, if \({\mathcal {L}}F=\delta \) (which will be given a sense in Section 2 and a proof in Lemma 2), then the local time should equal \(F(x+X)-M^F\). Our formula for \({\mathcal {L}}^{-1}\) allows us to guess a solution to \({\mathcal {L}}F=\delta \) as a linear combination \(\kappa _-(x^{-})^{\alpha -1}+\kappa _+(x^{+})^{\alpha -1}\), which is exactly the formula of Tsukada. That \(\kappa _-\ne \kappa _+\) in general is a manifestation of the asymmetry in the jumps of X.

To state the Tanaka and Meyer-Itô formulae for stable processes, we need more preliminaries concerning the definition of local time and an important class of admissible functions. From now on we consider \(\alpha \in (1,2)\), so that states are recurrent and local time is non-trivial.

Definition 4

(Occupational local time) Consider a family of random variables with two indices \(\left\{ L_t^a(X): a\in {\mathbb {R}},t\ge 0 \right\} \). We will call it an occupational local time of a process X if the occupation time formula is satisfied for every nonnegative Borel measurable function \(f:{\mathbb {R}}\rightarrow [0,\infty )\):

$$\begin{aligned} \int _0^t f\left( X_s\right) \, ds = \int _{-\infty }^{\infty } f\left( a\right) L_t^a(X) \, da \quad \text {a.s.} \end{aligned}$$

The fact that this local time exists for recurrent stable processes, as well as being jointly continuous in time and space, was established by Boylan [20] and Barlow [21]. See the textbook account in [10, Ch. V].

The following definition appears in the Tanaka formula given by Tsukada in [1], but we will write it in our notation.

Definition 5

For every fixed \(\alpha \in (1,2)\), \(c_-, c_+ \ge 0\), not both zero, we define the function \(F=F^{\alpha ,c_-,c_+}\) by:

$$\begin{aligned} F(x)=\kappa _+(x^-)^{\alpha -1}+\kappa _-(x^+)^{\alpha -1} \end{aligned}$$
(1.3)

where

$$\begin{aligned} \kappa _\pm =\frac{c_\pm }{\varGamma (\alpha )\varGamma (-\alpha )[c_+^2+c_{-}^2+2c_+c_-\cos \alpha \pi ]}. \end{aligned}$$

It is intentional that \(\kappa _-\) accompanies \(x^+\) because of Lemma 2. As we have remarked and will prove after Proposition 2, F is a weak solution to the Poisson equation \({\mathcal {L}}F= \delta \). If we consider adequate measures \(\mu \) for which the convolution \(F*\mu \) is well defined, we could regard \(f(x)=(F*\mu )(x)\) as a solution to \({\mathcal {L}}f = \mu \).
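As an illustration, F and the constants \(\kappa _\pm \) are straightforward to evaluate; in the symmetric case \(c_-=c_+\) the two constants coincide, so F reduces to an even multiple of \(|x|^{\alpha -1}\), as in Remark 3 below. A minimal sketch (ours):

```python
import math

def kappas(alpha, c_minus, c_plus):
    """Constants kappa_pm of Definition 5."""
    denom = (math.gamma(alpha) * math.gamma(-alpha)
             * (c_plus ** 2 + c_minus ** 2
                + 2.0 * c_plus * c_minus * math.cos(alpha * math.pi)))
    return c_minus / denom, c_plus / denom

def F(x, alpha, c_minus, c_plus):
    """F(x) = kappa_+ (x^-)^(alpha-1) + kappa_- (x^+)^(alpha-1), as in (1.3)."""
    k_m, k_p = kappas(alpha, c_minus, c_plus)
    x_plus, x_minus = max(x, 0.0), max(-x, 0.0)
    return k_p * x_minus ** (alpha - 1.0) + k_m * x_plus ** (alpha - 1.0)

# Symmetric case: kappa_- = kappa_+, so F is an even function.
print(F(2.0, 1.5, 1.0, 1.0), F(-2.0, 1.5, 1.0, 1.0))
```

Since \(\alpha - 1 > 0\), F is continuous and vanishes at the origin.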

Definition 6

(The class \({\mathcal {C}}^{\alpha , c_-,c_+}\)) For every fixed \(\alpha \in (1,2)\) and \(c_-, c_+ \ge 0\), not both zero, the class \({\mathcal {C}}^{\alpha ,c_-,c_+}\) is defined as

$$\begin{aligned} \displaystyle \left\{ f=F*\mu \left| \mu \text { is a signed measure such that } \int |x|^{\alpha -1}\, |\mu |(dx)<\infty \right. \right\} . \end{aligned}$$

The integrability condition on \(\mu \) implies that the convolution is well defined and pointwise finite. The Meyer-Itô formula will feature functions \(f=F*\mu \in {\mathcal {C}}^{\alpha ,c_-,c_+}\) where \(\mu \) is finite and of compact support. Later, in Theorem 3, we will consider convolutions where \(\mu \) is a non-compactly supported measure. The class of functions \({\mathcal {C}}^{\alpha ,c_-,c_+}\) is quite large. Indeed, it contains the absolute value function and functions of the type \(|x|^\gamma \) for \(\gamma \in (\alpha -1,\alpha )\) (cf. Lemma 2). Therefore, differences of convex functions are contained in \({\mathcal {C}}^{\alpha , c_-,c_+}\). The case \(\gamma =\alpha -1\) is special in that we can only prove its membership in \({\mathcal {C}}^{\alpha ,c_-,c_+}\) in the symmetric case.

Recall that the Meyer-Itô theorem for semimartingales, for example from [9] (Theorem 70), gives a semimartingale decomposition for |X| which contains a semimartingale local time term. However, the latter is zero for a strictly stable process. For functions in the class \({\mathcal {C}}^{\alpha ,c_-,c_+}\) we prove the following occupational Meyer-Itô theorem, with a non-zero local time term.

Theorem 2

(Occupational Meyer-Itô formula) Let \(\alpha \in (1,2)\), \(c_-, c_+ \ge 0\), not both zero, and consider a strictly stable process \(X\sim S_{\alpha }\left( c_-, c_+ \right) \). Let \(f=F*\mu \in {\mathcal {C}}^{\alpha ,c_-,c_+}\) and furthermore assume that \(\mu \) is finite and compactly supported. Then,

$$\begin{aligned} f\left( X_t\right) = f\left( X_0\right) + M_t + \int _{-\infty }^{\infty } L_t^a\left( X\right) \mu \left( da\right) , \end{aligned}$$
(1.4)

where

$$\begin{aligned} M_t = \int _{0}^{t}\int _{{\mathbb {R}}_{0}}\left[ f\left( X_{s-}+h\right) -f\left( X_{s-}\right) \right] {\tilde{N}}\left( ds,dh\right) , \end{aligned}$$

is a martingale and \(L_t^a(X)\) is the occupational local time at a up to time t of X.

The novel part in this result is the representation of the semimartingale in terms of an occupational local time.

Remark 3

  • In the limiting case \(\alpha =2\), we have \(F_{\pm }(x)=x^{\pm }\) and the corresponding class \({\mathcal {C}}\) can be identified with that of differences of convex functions (cf. [22, Thm. 6.22]).

  • For recurrent symmetric stable process, that is \(\alpha \in (1,2)\) and \(c_-=c_+=c >0\), we have \(F(x)=\kappa _{\alpha ,c}|x|^{\alpha - 1}\) for some constant \(\kappa _{\alpha ,c}\) (cf. [23, Corollary 1]).

  • The Tanaka formula of Tsukada [1] corresponds to the case where \(f=F=F*\delta \).

  • The compact support hypothesis on \(\mu \) is sufficient to ensure the integrability of all the terms in (1.4). Since strictly stable processes have finite \(\kappa \)-moments only for \(\kappa \in (-1,\alpha )\), for non-compactly supported measures \(\mu \) we would at least need to verify (or assume) that \(f(X_t)\) belongs to \(L^1({\mathbb {P}})\).

In general, we cannot handle the case when \(\mu \) is not compactly supported, due to the integrability restrictions of strictly stable processes. Nevertheless, in the following particular case, we obtain a generalization of the works of Salminen and Yor in [23] and Engelbert and Kurenok [2] from the symmetric to the general case. Formally, the result would follow from applying Theorem 2 to the infinite measure \(\mu (dy)=|y|^{\gamma -\alpha }[k_-{{\,\mathrm{1{}l}\,}}_{y>0}+k_+{{\,\mathrm{1{}l}\,}}_{y<0}]\, dy\). Recall the definition of the constants \(M_\pm \) in Proposition 1.

Theorem 3

(Power decomposition) Let \(\alpha \in (1,2)\) and \(c_-,c_+\ge 0\) not both zero, and consider a strictly stable process \(X\sim S_{\alpha }\left( c_-, c_+ \right) \). Then for all \(x\in {\mathbb {R}}\) and \(\gamma \in (\alpha -1,\alpha )\) we have the decomposition

$$\begin{aligned} \left| X_t - x\right| ^{\gamma }= & {} \left| X_0 - x\right| ^{\gamma } + \int _0^t \int _{{\mathbb {R}}_0} \left[ \left| X_{s-} - x + h\right| ^{\gamma } - \left| X_{s-} - x\right| ^{\gamma }\right] {\tilde{N}}(ds,dh) \nonumber \\+ & {} \int _0^t \left| X_s - x\right| ^{\gamma -\alpha } \left[ k_-{{\,\mathrm{1{}l}\,}}_{\{X_s>x\}} + k_+{{\,\mathrm{1{}l}\,}}_{\{X_s<x\}} \right] ds, \end{aligned}$$
(1.5)

where \(k_{\pm }:=k_{\pm }\left( \alpha , \gamma , c_-, c_+ \right) \) are given by

$$\begin{aligned}&k_{-} \\&\quad = \frac{\varGamma (\gamma + 1)}{\varGamma (\gamma - \alpha +1)}\left[ M_+\frac{\sin \left( -\alpha \pi \right) }{\sin \left( (\gamma -\alpha +1)\pi \right) } + M_- \frac{\sin \left( (\gamma +1)\pi \right) }{\sin \left( (\gamma -\alpha +1)\pi \right) } + M_+\right] \end{aligned}$$

and

$$\begin{aligned}&k_+\\&\quad = \frac{\varGamma (\gamma + 1)}{\varGamma (\gamma - \alpha +1)}\left[ M_-\frac{\sin \left( -\alpha \pi \right) }{\sin \left( (\gamma -\alpha +1)\pi \right) } + M_+ \frac{\sin \left( (\gamma +1)\pi \right) }{\sin \left( (\gamma -\alpha +1)\pi \right) } + M_-\right] . \end{aligned}$$

Note that the last integral in (1.5) could be written in terms of the local time as

$$\begin{aligned} \int _{-\infty }^{\infty } \left| a - x\right| ^{\gamma -\alpha } \left[ k_-{{\,\mathrm{1{}l}\,}}_{\{a>x\}} + k_+{{\,\mathrm{1{}l}\,}}_{\{a<x\}} \right] L_t^a \, da. \end{aligned}$$

The main result of Engelbert and Kurenok [2], for symmetric stable processes, is that the finite variation part in the semimartingale decomposition of \(| X_t-x|^\gamma \) is increasing, so that this process is a submartingale; one thus obtains a Doob-Meyer decomposition. However, if asymmetry in the jumps of the stable process is allowed, this decomposition will not, in general, yield a submartingale. By direct inspection, the last term of the decomposition is an increasing process if and only if \(k_{\pm } \ge 0\).

The constants \(k_{\pm }\) have been found and used by Fournier [24], by other means and in a different context. Fournier proved pathwise uniqueness for SDEs driven by an asymmetric strictly stable process and, in order to use the Gronwall inequality, defined a constant \(\beta (a,c) \in (\alpha -1,1)\), where \(a=\cos (\pi \alpha )\) and \(c=c_-/c_+\), assuming \(0<c_- < c_+\). He then proved that \(k_+ = 0\) for \(\gamma =\beta (a,c)\). We will prove in Lemma 5 that, in fact, both \(k_{\pm }\) are nonnegative for all \(\gamma \ge \beta (a,c)\), and that otherwise one of them is negative.

Corollary 1

Under the hypotheses of Theorem 3, let \(a=\cos (\pi \alpha )\) and \(c=(c_-\wedge c_+)/(c_-\vee c_+)\). Then the process \(\left| X_t - x\right| ^{\gamma }\) is a submartingale if \(\gamma \in [\beta (a,c),\alpha )\). For \(\gamma \in (\alpha -1,\beta (a,c))\), \(\left| X_t - x\right| ^{\gamma }\) is a semimartingale, whose finite variation part is not monotone.

The organization of the paper is as follows. In Section 2 we state known and preliminary results regarding strictly stable processes and fractional calculus that we need for the main results. Section 3 contains proofs of the main results, examining the crossed composition Proposition 2, the Inversion Theorem 1, the Meyer-Itô Theorem 2 and finally the Power Decomposition Theorem 3.

2 Preliminaries: stable processes and fractional operators

Fractional calculus has been studied almost since the invention of calculus. One of its most famous applications is Abel's solution to the tautochrone problem (cf. [25]). Although many mathematicians have contributed to the formalization of the field, it was Marcel Riesz who systematized several results in terms of non-local operator theory (cf. [26]). The book of Samko, Kilbas and Marichev [12] will be our main reference for the theory of fractional calculus in what follows. As has been pointed out in the introduction, the connection between fractional calculus and stable processes will appear naturally by means of their infinitesimal generator.

In this section we state the preliminaries regarding stable processes and fractional operators that we will need in order to prove the results outlined in the previous section.

Following Applebaum [27] (Theorems 1.2.14 and 2.4.16), we state the Lévy-Khintchine formula and the Lévy-Itô decomposition for the special case of strictly stable processes.

Corollary 2

(Lévy-Khintchine formula) Let \(X \sim S_{\alpha }\left( c_-, c_+ \right) \) with \(c_-\), \(c_+ \ge 0\), not both zero. Then its characteristic exponent, determined by continuity from the equation \(e^{t\phi (u)}={\mathbb {E}}\left( e^{iuX_t }\right) \), can be written as

$$\begin{aligned} \phi (u) = {\left\{ \begin{array}{ll}\displaystyle \int _{{\mathbb {R}}_0} \left( e^{iuh} -1\right) \nu (dh) &{} \hbox { if}\ \alpha \in (0,1),\\ \displaystyle \int _{{\mathbb {R}}_0} \left( e^{iuh} -1- iuh \right) \nu (dh) &{} \hbox { if}\ \alpha \in (1,2). \end{array}\right. } \end{aligned}$$

Moreover, it can be proved (cf. Applebaum [27] Theorem 1.2.21) that in the case \(\alpha \in (0,2){\setminus } \left\{ 1\right\} \) the characteristic exponent of a stable process X of index \(\alpha \) is equal to:

$$\begin{aligned} \phi (u) = -\sigma |u|^{\alpha } \left( 1 - i\beta {{\,\textrm{sgn}\,}}(u) \tan \left( \frac{\pi \alpha }{2} \right) \right) . \end{aligned}$$
(2.1)

Here we have another parametrization of a stable process in terms of the skewness and scale parameters, denoted by \(X\sim S_{\alpha }\left( \beta , \sigma \right) \). We can recover the \((c_-,c_+)\) parametrization by solving:

$$\begin{aligned} \beta= & {} \frac{c_+ - c_-}{c_+ + c_-},\\ \sigma= & {} -(c_+ + c_-)\varGamma (-\alpha )\cos \left( \frac{\pi \alpha }{2} \right) . \end{aligned}$$
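The conversion between the two parametrizations is a direct computation; a minimal sketch (ours), checking that the symmetric case gives \(\beta = 0\) and that \(\sigma > 0\) in both regimes of \(\alpha \):

```python
import math

def to_skew_scale(alpha, c_minus, c_plus):
    """Map the (c_-, c_+) parametrization to the skewness/scale pair (beta, sigma)."""
    beta = (c_plus - c_minus) / (c_plus + c_minus)
    sigma = -(c_plus + c_minus) * math.gamma(-alpha) * math.cos(math.pi * alpha / 2.0)
    return beta, sigma

print(to_skew_scale(0.5, 1.0, 1.0))  # symmetric case: beta = 0
print(to_skew_scale(1.5, 0.0, 2.0))  # one-sided jumps: beta = 1
```

Positivity of \(\sigma \) follows from the sign pattern already noted: \(\varGamma (-\alpha )\cos (\pi \alpha /2) < 0\) on both \((0,1)\) and \((1,2)\).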

Remark 4

The characteristic exponent of a strictly stable process and the Fourier transform of the fractional operators are intrinsically related as we will see in Remark 5.

The following result concerns the finiteness of moments for stable processes. The proof of the first part can be consulted in [1], and that of the second in [10].

Proposition 3

Let \(\alpha \in (0,2)\setminus \left\{ 1\right\} \), \(c_-, c_+ \ge 0\), not both zero, and consider a strictly stable process \(X\sim S_{\alpha }\left( c_-, c_+ \right) \). Then, the following bounds are satisfied:

  1.

    For all \(t>0\), \(x\in {\mathbb {R}}\) and \(0<\gamma <1\),

    $$\begin{aligned} {\mathbb {E}}\left[ |X_t - x|^{-\gamma } \right] \le S(\alpha ,\gamma ) t^{-\gamma /\alpha }, \end{aligned}$$

    where \(S(\alpha ,\gamma )\) is a constant which depends on \(\alpha \) and \(\gamma \) and is independent of x.

  2.

    For all \(t>0\) and \(0 \le \gamma <\alpha \),

    $$\begin{aligned} {\mathbb {E}}\left[ |X_t|^{\gamma } \right] < \infty . \end{aligned}$$

    If \(\gamma \ge \alpha \) and \(t>0\), \({\mathbb {E}}\left[ |X_t|^{\gamma } \right] =\infty \).

The following proposition is a version of Itô's formula (termed predictable in [23]) for stable processes. The statement, and the useful notation \(C^2_{1+,b}\) for the space of twice continuously differentiable functions whose derivatives of order \(\ge 1\) are bounded, are taken from [1]. In contrast to Itô's formula, the semimartingale decomposition of f(X) in this version explicitly features the infinitesimal generator, and both big and small jumps are compensated; hence the need to restrict to the class \(C^2_{1+,b}\).

Proposition 4

(Itô’s formula) Let \(X\sim S_{\alpha }\left( c_-, c_+ \right) \) with \(c_-, c_+ \ge 0\), not both zero and \(f\in C^2_{1+,b}\). If \(\alpha \in (1,2)\), we have

$$\begin{aligned} f\left( X_t\right)= & {} f\left( X_0\right) + \int _0^t \int _{{\mathbb {R}}_0} \left[ f\left( X_{s-}+h\right) - f\left( X_{s-}\right) \right] {\tilde{N}}(ds,dh)\\{} & {} + \int _0^t \int _{{\mathbb {R}}_0} \left[ f\left( X_{s}+h\right) - f\left( X_{s}\right) - hf^{\prime }\left( X_{s}\right) \right] \, \nu (dh)\, ds. \end{aligned}$$

As pointed out by Engelbert and Kurenok [2] in their Remark 1.1, it is a common mistake to state the Itô formula in terms of the infinitesimal generator \({\mathcal {L}}\), since functions in \(C^2_{1+,b}\) are not in its domain. So, when we have a function \(f \in C^2_{1+,b}\), we define:

$$\begin{aligned} {\mathcal {L}}f(x):= \int _{{\mathbb {R}}_0} \left[ f\left( x+h\right) - f\left( x\right) - hf^{\prime }\left( x\right) \right] \nu (dh). \end{aligned}$$
(2.2)

If \(f \in {\mathcal {S}}\subset C^2_{1+,b}\), then the above expression coincides with that of the infinitesimal generator, so that \({\mathcal {L}}\) can be considered as an extension of the infinitesimal generator to the class \(C^2_{1+,b}\).

In the next proposition we rewrite the definition of the fractional derivative depending on the index \(\alpha \); this representation is called the generator form. The fact that the two definitions are equivalent can be found in the book of Meerschaert and Sikorskii [3] and the article of Kolokoltsov [8].

Proposition 5

(Generator form) Let \(f\in {\mathcal {S}}({\mathbb {R}})\) and \(\alpha \in (0,2){\setminus } \{1\}\). Then the generator forms of the left and right fractional derivatives are as follows:

$$\begin{aligned} D_-^{\alpha } f\left( x\right)= & {} {\left\{ \begin{array}{ll} \displaystyle \frac{1}{\varGamma \left( -\alpha \right) } \int _{0}^{\infty } \frac{f\left( x - h\right) - f\left( x\right) }{h^{1+\alpha }} \, dh, &{}\text {if }\alpha \in (0,1)\\ \displaystyle \frac{1}{\varGamma \left( -\alpha \right) } \int _{0}^{\infty } \frac{f\left( x - h\right) - f\left( x\right) + hf^{\prime }\left( x\right) }{h^{1+\alpha }} \, dh, &{} \text {if }\alpha \in (1,2) \end{array}\right. }\\ D_+^{\alpha } f\left( x\right)= & {} {\left\{ \begin{array}{ll} \displaystyle \frac{1}{\varGamma \left( -\alpha \right) } \int _{0}^{\infty } \frac{f\left( x + h\right) - f\left( x\right) }{h^{1+\alpha }} \, dh, &{} \text {if }\alpha \in (0,1)\\ \displaystyle \frac{1}{\varGamma \left( -\alpha \right) } \int _{0}^{\infty } \frac{f\left( x + h\right) - f\left( x\right) - hf^{\prime }\left( x\right) }{h^{1+\alpha }} \, dh, &{} \text {if }\alpha \in (1,2)\\ \end{array}\right. }. \end{aligned}$$
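For \(\alpha \in (0,1)\) the generator form can be tested numerically on \(f(x)=e^{-x}\): since \(f(x+h)-f(x)=e^{-x}(e^{-h}-1)\), the formula gives \(D_+^\alpha f = f\) exactly when \(\int _0^\infty (e^{-h}-1)h^{-1-\alpha }\,dh = \varGamma (-\alpha )\), which integration by parts confirms. A numerical sketch of that integral (substitution \(h=e^v\); grid and tolerances are our own choices):

```python
import math

def frac_deriv_integral(alpha, n=200_000, lo=-92.0, hi=51.0):
    """Approximate I(alpha) = int_0^inf (e^(-h) - 1) h^(-1-alpha) dh, alpha in (0,1),
    via the substitution h = e^v and a midpoint rule on v in [lo, hi].
    Integration by parts shows I(alpha) = Gamma(-alpha), so the generator form
    gives D_+^alpha f = f for f(x) = e^(-x)."""
    step = (hi - lo) / n
    total = 0.0
    for k in range(n):
        v = lo + (k + 0.5) * step
        total += math.expm1(-math.exp(v)) * math.exp(-alpha * v)
    return total * step

for alpha in (0.25, 0.5, 0.75):
    print(alpha, frac_deriv_integral(alpha), math.gamma(-alpha))
```

The logarithmic substitution makes the integrand smooth and rapidly decaying at both ends, so a plain midpoint rule suffices.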

From this generator form it follows that the infinitesimal generator of a strictly stable process is the weighted sum of fractional derivatives given in Proposition 1. We now focus on the properties of the fractional operators that will lead us to the proof of the Inversion Theorem 1.

The main reason to use the Lizorkin space \(\varPhi \), defined in the Introduction (Definition 3), is that the Fourier transform of fractional operators applied to functions in \(\varPhi \) behaves well. First, recall that for \(f\in {\mathcal {S}}({\mathbb {R}})\) the Fourier transform of f is defined by:

$$\begin{aligned} {\mathcal {F}}\left[ f\right] (u) = \int _{{\mathbb {R}}} f(x) e^{iux} \, dx. \end{aligned}$$

Proposition 6

(Fourier transform of fractional operators) Let \(f\in \varPhi \) and \(\alpha \ge 0\). Then, using the principal branch of the logarithm, the Fourier transforms of the Riemann-Liouville fractional operators of index \(\alpha \) satisfy the following identities:

$$\begin{aligned} {\mathcal {F}}\left[ D_{\pm }^{\alpha }f\right] \left( u\right) = \left( \pm iu \right) ^{ \alpha } {\mathcal {F}}\left[ f\right] (u) \quad \text { and }\quad {\mathcal {F}}\left[ I_{\pm }^{\alpha }f\right] \left( u\right) = \left( \pm iu \right) ^{-\alpha } {\mathcal {F}}\left[ f\right] (u). \end{aligned}$$

Remark 5

Using the principal branch of the logarithm, we have

$$\begin{aligned} \left( \pm iu \right) ^{\alpha }= & {} |u|^{\alpha } e^{\pm i{{\,\textrm{sgn}\,}}(u) \alpha \pi / 2} = |u|^{\alpha } \left( \cos \left( \frac{\alpha \pi }{2}\right) \pm i {{\,\textrm{sgn}\,}}(u) \sin \left( \frac{\alpha \pi }{2} \right) \right) , \end{aligned}$$

for all \(u, \alpha \in {\mathbb {R}}\). These are precisely the exponents appearing in the characteristic functions of the one-sided stable processes; see equation (2.1) with \(\sigma =1\) and \(\beta =\pm 1\).
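The principal-branch identity of Remark 5 can be checked directly with complex arithmetic (a small sketch, not part of the text):

```python
import cmath
import math

def sgn(u):
    return (u > 0) - (u < 0)

def principal_power(sign, u, alpha):
    """(+iu)^alpha or (-iu)^alpha via Python's principal-branch complex power."""
    return (sign * 1j * u) ** alpha

alpha, u = 1.4, -2.5
lhs = principal_power(+1, u, alpha)
rhs = abs(u) ** alpha * cmath.exp(1j * sgn(u) * alpha * math.pi / 2)
print(abs(lhs - rhs))   # agreement up to rounding error
```

Python's `**` on complex numbers uses the principal branch of the logarithm, matching the convention of the remark.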

The proof of this proposition can be found in the book of Samko, Kilbas and Marichev [12, Lemma 8.1]. Note that one of the main features of the Lizorkin space \(\varPhi \) is that the Fourier transforms of its elements are well behaved near zero, in such a way that the product \((\pm iu)^{-\alpha }{\mathcal {F}}[f](u)\) is defined.

Now we are ready to prove Proposition 2, regarding the crossed composition of Riemann-Liouville operators. First, note that from the definition of the Riemann-Liouville operators and their Fourier transforms it is easy to verify that compositions of operators of the same side, left or right, commute and satisfy the semigroup property. However, the composition of crossed operators, left with right or vice versa, is not as direct as in the previous case.

Proof of Proposition 2

Since the Fourier transform characterizes a function \(\phi \in \varPhi \), we will prove that the Fourier transforms of both sides of the statement coincide. First, using the Fourier transform of fractional operators in Proposition 6, we have:

$$\begin{aligned} {\mathcal {F}}\left[ W_-^{\lambda }f\right] \left( u\right)= & {} \left( -iu \right) ^{-\lambda } {\mathcal {F}}\left[ f\right] (u) = |u|^{-\lambda }e^{i\frac{\pi }{2}{{\,\textrm{sgn}\,}}(u)\lambda }{\mathcal {F}}\left[ f\right] (u), \\ {\mathcal {F}}\left[ W_+^{\mu }f\right] \left( u\right)= & {} \left( iu \right) ^{-\mu } {\mathcal {F}}\left[ f\right] (u) = |u|^{-\mu }e^{-i\frac{\pi }{2}{{\,\textrm{sgn}\,}}(u)\mu }{\mathcal {F}}\left[ f\right] (u). \end{aligned}$$

Then, for the LHS of equation (1.1) we have:

$$\begin{aligned} {\mathcal {F}}\left[ W_+^{\lambda } W_-^{\mu } \phi \right] \left( u\right)= & {} |u|^{-\lambda }e^{-i\frac{\pi }{2}{{\,\textrm{sgn}\,}}(u)\lambda } |u|^{-\mu }e^{i\frac{\pi }{2}{{\,\textrm{sgn}\,}}(u)\mu } {\mathcal {F}}\left[ \phi \right] \left( u \right) \\= & {} |u|^{-(\lambda + \mu )}e^{-i\frac{\pi }{2}{{\,\textrm{sgn}\,}}(u)\left( \lambda - \mu \right) } {\mathcal {F}}\left[ \phi \right] \left( u \right) . \end{aligned}$$

On the other hand, for the RHS of equation (1.1) we have:

$$\begin{aligned}{} & {} \frac{\sin \left( \mu \pi \right) }{\sin \left( \left( \lambda +\mu \right) \pi \right) } {\mathcal {F}}\left[ W_-^{\lambda + \mu } \phi \right] \left( u\right) + \frac{\sin \left( \lambda \pi \right) }{\sin \left( \left( \lambda +\mu \right) \pi \right) } {\mathcal {F}}\left[ W_+^{\lambda + \mu } \phi \right] \left( u\right) \\{} & {} \quad =\frac{\sin \left( \mu \pi \right) }{\sin \left( \left( \lambda +\mu \right) \pi \right) }|u|^{-(\lambda + \mu )} e^{i\frac{\pi }{2}{{\,\textrm{sgn}\,}}(u)\left( \lambda + \mu \right) } {\mathcal {F}}\left[ \phi \right] \left( u \right) \\{} & {} \qquad + \frac{\sin \left( \lambda \pi \right) }{\sin \left( \left( \lambda +\mu \right) \pi \right) }|u|^{-(\lambda + \mu )} e^{-i\frac{\pi }{2}{{\,\textrm{sgn}\,}}(u)\left( \lambda + \mu \right) } {\mathcal {F}}\left[ \phi \right] \left( u \right) . \end{aligned}$$

In order for the LHS and the RHS to be equal, it suffices to prove that:

$$\begin{aligned} e^{-i\frac{\pi }{2}{{\,\textrm{sgn}\,}}(u)\left( \lambda - \mu \right) } = \frac{\sin \left( \mu \pi \right) }{\sin \left( \left( \lambda +\mu \right) \pi \right) } e^{i\frac{\pi }{2}{{\,\textrm{sgn}\,}}(u)\left( \lambda + \mu \right) } +\frac{\sin \left( \lambda \pi \right) }{\sin \left( \left( \lambda +\mu \right) \pi \right) } e^{-i\frac{\pi }{2}{{\,\textrm{sgn}\,}}(u)\left( \lambda + \mu \right) }. \end{aligned}$$

This is equivalent to the real and imaginary parts agreeing; using the angle-addition formulas, we need to prove that

$$\begin{aligned} \cos \left( (\lambda - \mu )\frac{\pi }{2} \right)= & {} \frac{\cos \left( (\lambda + \mu ) \frac{\pi }{2} \right) \left[ \sin (\mu \pi ) + \sin (\lambda \pi ) \right] }{\sin \left( (\lambda + \mu ) \pi \right) }\\= & {} \frac{ \sin (\mu \pi ) + \sin (\lambda \pi ) }{2\sin \left( (\lambda + \mu ) \frac{\pi }{2}\right) }, \quad \text {and}\\ \sin \left( (\lambda - \mu ){{\,\textrm{sgn}\,}}(u)\frac{\pi }{2} \right)= & {} \frac{\sin \left( (\lambda + \mu ) {{\,\textrm{sgn}\,}}(u) \frac{\pi }{2} \right) \left[ \sin (\lambda \pi ) - \sin (\mu \pi ) \right] }{\sin \left( (\lambda + \mu ) \pi \right) }\\= & {} \frac{{{\,\textrm{sgn}\,}}(u)\left[ \sin (\lambda \pi ) - \sin (\mu \pi ) \right] }{2\cos \left( (\lambda + \mu ) \frac{\pi }{2} \right) }. \end{aligned}$$

These trigonometric relations are proved in Lemma 6, finishing the proof. \(\square \)
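The complex exponential identity at the heart of the proof (for both values of \({{\,\textrm{sgn}\,}}(u)\)) can also be sanity-checked numerically, e.g. as follows:

```python
import cmath
import math

def sides(lam, mu, s):
    """Both sides of the complex exponential identity used in the proof,
    for sgn(u) = s in {+1, -1}."""
    lhs = cmath.exp(1j * math.pi / 2 * s * (lam - mu))
    den = math.sin((lam + mu) * math.pi)
    rhs = (math.sin(mu * math.pi) / den * cmath.exp(-1j * math.pi / 2 * s * (lam + mu))
           + math.sin(lam * math.pi) / den * cmath.exp(1j * math.pi / 2 * s * (lam + mu)))
    return lhs, rhs

for lam, mu in ((0.3, 0.45), (0.7, -0.2), (1.2, 0.15)):
    for s in (1, -1):
        l, r = sides(lam, mu, s)
        print(abs(l - r))   # each difference is ~0
```

Since the check runs over \(s=\pm 1\), it covers both orientations of the identity at once.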

Finally, we will be interested in some distributions acting on the Lizorkin space of test functions. For the definition of the action of Riemann-Liouville operators on distributions we refer to [12] (Section 8.1) and Rubin [15] (Section 3).

Definition 7

Let \(f\in \varPhi ^{\prime }\) and \(\alpha \in {\mathbb {R}}\). The distributions \(W_-^{\alpha } f\) and \(W_+^{\alpha }f\) are defined by duality:

$$\begin{aligned} \left( W_\pm ^{\alpha }f, \phi \right) = \left( f,W_{\mp }^{\alpha } \phi \right) , \end{aligned}$$

for any \(\phi \in \varPhi \), where \((g,\phi )\) denotes the evaluation of the distribution g at the function \(\phi \).

Note that \(\delta \) belongs to \(\varPhi '\) (as does, indeed, any Schwartz distribution) since \(\varPhi \) is contained in \({\mathcal {S}}\). The infinitesimal generator \({\mathcal {L}}\) associated to \(X\sim S_{\alpha }\left( c_-, c_+ \right) \) is a linear combination of fractional derivatives as in Proposition 1. If we denote its dual operator by \({\tilde{{\mathcal {L}}}}\), then \({\tilde{{\mathcal {L}}}}\) equals the infinitesimal generator associated to \({\tilde{X}}\sim S_{\alpha }(c_+,c_-)\). Hence, if we take \(f\in \varPhi ^{\prime }\), then for any \(\phi \in \varPhi \) we have

$$\begin{aligned} \left( {\mathcal {L}}f, \phi \right) = \left( f, {\tilde{{\mathcal {L}}}} \phi \right) . \end{aligned}$$
(2.3)

In the next section we use these results to prove the theorems outlined in the Introduction.

3 Proof of the main results

The objective of this section is to prove the Inversion Theorem 1, the occupational Meyer-Itô formula stated as Theorem 2 and the Doob-Meyer or semimartingale decomposition of Theorem 3 together with Corollary 1.

The representation in Proposition 1 is crucial for working out the Inversion Theorem 1. While it is well understood that the left (right) fractional derivative is the inverse of the left (right) fractional integral, at least on the Lizorkin space, the action of the crossed compositions has, as far as we know, not been reported yet. For instance, in the symmetric case, where \(c_- = c_+\), the infinitesimal generator equals the fractional Laplacian \(-(-\varDelta )^{\alpha /2}\) and its inverse operator is known in the literature as the Riesz potential (cf. [12]); however, since in this case the left and right fractional derivatives merge into the fractional Laplacian, a crossed composition does not appear.

Before the proof of the Inversion Theorem 1, we will prove one more lemma regarding the composition of fractional derivatives and integrals. Since we are taking functions in the Lizorkin space, these compositions are well defined.

Lemma 1

(Fractional compositions) Let \(\phi \in \varPhi \) and \(\alpha > 0\) with \(\alpha \notin {\mathbb {N}}\). Then the compositions of fractional derivatives and integrals of order \(\alpha \) satisfy:

$$\begin{aligned} D_-^{\alpha } I_-^{\alpha } \phi \left( x\right)= & {} \phi \left( x\right) ,\\ D_+^{\alpha } I_+^{\alpha }\phi \left( x\right)= & {} \phi \left( x\right) ,\\ D_-^{\alpha } I_+^{\alpha } \phi \left( x\right) + D_+^{\alpha } I_-^{\alpha } \phi \left( x\right)= & {} 2\cos (\alpha \pi )\phi (x). \end{aligned}$$

Proof

The first two equations, as well as the fact that all the compositions of fractional operators commute, follow from Proposition 6. For the last equation, we use Proposition 2 with \(\lambda = \alpha \) and let \(\mu \rightarrow -\alpha \) to get the result. This last limit can be taken since the composition groups \(\mu \mapsto W^\mu _{\pm }\) are continuous on the Lizorkin space.

Using equation (1.1) twice to obtain both cross compositions, we have:

$$\begin{aligned} W_-^{\lambda } W_+^{\mu }\phi \left( x\right)+ & {} W_+^{\lambda } W_-^{\mu } \phi \left( x\right) \nonumber \\= & {} \frac{\sin \left( \lambda \pi \right) }{\sin \left( \left( \lambda +\mu \right) \pi \right) } W_-^{\lambda + \mu } \phi \left( x\right) + \frac{\sin \left( \mu \pi \right) }{\sin \left( \left( \lambda +\mu \right) \pi \right) } W_+^{\lambda + \mu } \phi \left( x\right) \nonumber \\{} & {} + \frac{\sin \left( \mu \pi \right) }{\sin \left( \left( \lambda +\mu \right) \pi \right) } W_-^{\lambda + \mu } \phi \left( x\right) + \frac{\sin \left( \lambda \pi \right) }{\sin \left( \left( \lambda +\mu \right) \pi \right) } W_+^{\lambda + \mu } \phi \left( x\right) \nonumber \\= & {} \left[ \frac{\sin \left( \lambda \pi \right) }{\sin \left( \left( \lambda +\mu \right) \pi \right) } + \frac{\sin \left( \mu \pi \right) }{\sin \left( \left( \lambda +\mu \right) \pi \right) } \right] W_-^{\lambda + \mu } \phi \left( x\right) \nonumber \\{} & {} + \left[ \frac{\sin \left( \mu \pi \right) }{\sin \left( \left( \lambda +\mu \right) \pi \right) } + \frac{\sin \left( \lambda \pi \right) }{\sin \left( \left( \lambda +\mu \right) \pi \right) } \right] W_+^{\lambda + \mu } \phi \left( x\right) . \end{aligned}$$
(3.1)

Moreover, by l’Hôpital’s rule we have:

$$\begin{aligned} \lim _{\mu \rightarrow -\alpha } \left( \frac{\sin \left( \mu \pi \right) +\sin \left( \alpha \pi \right) }{\sin \left( \left( \alpha +\mu \right) \pi \right) } \right) \ =\cos (\alpha \pi ). \end{aligned}$$

Finally, with \(\lambda = \alpha \) and taking the limit \(\mu \rightarrow -\alpha \) in equation (3.1) we have:

$$\begin{aligned} D_-^{\alpha } I_+^{\alpha } \phi \left( x\right)+ & {} D_+^{\alpha } I_-^{\alpha } \phi \left( x\right) \\{} & {} = \lim _{\mu \rightarrow -\alpha } \;_{-\infty }W_x^{\alpha } \;_xW_{\infty }^{\mu }\phi \left( x\right) + \;_xW_{\infty }^{\alpha }\;_{-\infty }W_x^{\mu } \phi \left( x\right) \\{} & {} = \lim _{\mu \rightarrow -\alpha } \left[ \frac{\sin \left( \alpha \pi \right) }{\sin \left( \left( \alpha +\mu \right) \pi \right) } + \frac{\sin \left( \mu \pi \right) }{\sin \left( \left( \alpha +\mu \right) \pi \right) } \right] W_-^{\alpha + \mu } \phi \left( x\right) \\{} & {} + \lim _{\mu \rightarrow -\alpha } \left[ \frac{\sin \left( \mu \pi \right) }{\sin \left( \left( \alpha +\mu \right) \pi \right) } + \frac{\sin \left( \alpha \pi \right) }{\sin \left( \left( \alpha +\mu \right) \pi \right) } \right] W_+^{\alpha + \mu } \phi \left( x\right) \\{} & {} = 2\cos (\alpha \pi )\phi \left( x\right) , \end{aligned}$$

where we used that \(W^0\) is the identity operator as in Definition 2. \(\square \)
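On the Fourier side, the third identity of Lemma 1 reduces to the multiplier computation \((-iu)^{\alpha }(iu)^{-\alpha } + (iu)^{\alpha }(-iu)^{-\alpha } = 2\cos (\alpha \pi )\). A small numerical sketch, using the multipliers of Proposition 6:

```python
import math

def cross_symbol(alpha, u):
    """Fourier multiplier of D_-^a I_+^a + D_+^a I_-^a at frequency u,
    using F[D_(+/-)^a f] = ((+/-)iu)^a F[f] and
    F[I_(+/-)^a f] = ((+/-)iu)^{-a} F[f] (principal branch)."""
    return ((-1j * u) ** alpha * (1j * u) ** (-alpha)
            + (1j * u) ** alpha * (-1j * u) ** (-alpha))

for alpha in (0.4, 1.3):
    for u in (0.7, -2.0):
        # the multiplier is constant in u and equals 2 cos(alpha pi)
        print(cross_symbol(alpha, u), 2 * math.cos(alpha * math.pi))
```

The multiplier is independent of the frequency u, which is why the crossed sum acts as the scalar \(2\cos (\alpha \pi )\) on \(\varPhi \).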

We are ready to prove the invertibility of the infinitesimal generator \({\mathcal {L}}\) in \(\varPhi \) and its expression as a weighted sum of fractional integrals of order \(\alpha \in (0,2){\setminus } \left\{ 1\right\} \).

Proof of Inversion Theorem 1

Define the operator \({\mathcal {G}}\) as

$$\begin{aligned} {\mathcal {G}}\phi \left( x\right) = K_- I_-^{\alpha }\phi \left( x\right) + K_+ I_+^{\alpha }\phi \left( x\right) , \end{aligned}$$

we will prove that \({\mathcal {G}}\left( {\mathcal {L}}\phi \right) = {\mathcal {L}}\left( {\mathcal {G}}\phi \right) = \phi \), so that \({\mathcal {L}}\) is invertible and \({\mathcal {L}}^{-1} = {\mathcal {G}}\). By our definition of \({\mathcal {G}}\), we have:

$$\begin{aligned} {\mathcal {G}}\left( {\mathcal {L}}\phi \left( x\right) \right)= & {} {\mathcal {G}}\left( M_- D_-^{\alpha }\phi \left( x\right) + M_+ D_+^{\alpha }\phi \left( x\right) \right) \\= & {} K_- M_- I_-^{\alpha } \left( D_-^{\alpha }\phi \left( x\right) \right) + K_- M_+ I_-^{\alpha } \left( D_+^{\alpha }\phi \left( x\right) \right) \\+ & {} K_+ M_- I_+^{\alpha } \left( D_-^{\alpha }\phi \left( x\right) \right) + K_+M_+ I_+^{\alpha } \left( D_+^{\alpha }\phi \left( x\right) \right) . \end{aligned}$$

Substituting the values of \(K_-\) and \(K_+\), defining \(M = M_-^2 + M_+^2 + 2M_- M_+\cos (\alpha \pi )\) to temporarily ease notation, and using Lemma 1, we get

$$\begin{aligned} {\mathcal {G}}\left( {\mathcal {L}}\phi \left( x\right) \right)= & {} \frac{M_-^2}{M} \phi \left( x\right) + \frac{M_+^2}{M} \phi \left( x\right) + \frac{M_- M_+}{M} I_-^{\alpha } D_+^{\alpha }\phi \left( x\right) + \frac{M_+ M_-}{M}I_+^{\alpha } D_-^{\alpha }\phi \left( x\right) \\= & {} \frac{M_-^2 + M_+^2 + 2M_- M_+\cos (\alpha \pi )}{M} \phi \left( x\right) \\= & {} \phi \left( x\right) . \end{aligned}$$

We conclude that \(({\mathcal {G}}\circ {\mathcal {L}})\phi =\phi \) and analogous computations prove that \(({\mathcal {L}}\circ {\mathcal {G}})\phi =\phi \). \(\square \)
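The cancellation in the proof can also be seen on the Fourier side: with the normalization \(K_\pm = M_\pm /M\) read off from the computation above (an assumption of this sketch, with illustrative numerical weights), the product of the symbols of \({\mathcal {L}}\) and \({\mathcal {G}}\) equals one:

```python
import math

alpha = 1.4
M_minus, M_plus = 0.8, 1.7                      # illustrative weights
M = M_minus ** 2 + M_plus ** 2 + 2 * M_minus * M_plus * math.cos(alpha * math.pi)
K_minus, K_plus = M_minus / M, M_plus / M       # normalization read off above

def symbol_product(u):
    """Product of the Fourier symbols of L and G at frequency u."""
    sym_L = M_minus * (-1j * u) ** alpha + M_plus * (1j * u) ** alpha
    sym_G = K_minus * (-1j * u) ** (-alpha) + K_plus * (1j * u) ** (-alpha)
    return sym_L * sym_G

for u in (0.3, -1.9, 7.0):
    print(symbol_product(u))   # each value is ~1
```

The cross terms contribute exactly \(2M_-M_+\cos (\alpha \pi )/M\), so the product is identically one for every frequency.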

Using Definition 7, we now consider certain generalized functions in the dual space \(\varPhi ^{\prime }\) and prove an important relationship between the Dirac \(\delta \) distribution and the power functions, which are strongly related to strictly stable processes. The following lemma can be regarded as the key result for obtaining Tanaka-type formulae.

Lemma 2

If \(\lambda >0\), then the (generalized) functions \(f_+^{\lambda }(x):= x^{\lambda } {{\,\mathrm{1{}l}\,}}_{\{ x>0 \}}\) and \(f_-^{\lambda }(x):=|x|^{\lambda } {{\,\mathrm{1{}l}\,}}_{\{ x<0 \}}\) belong to \(\varPhi ^{\prime }\) and

$$\begin{aligned} f_\pm ^{\lambda }(x)= & {} \varGamma (\lambda +1) I_{\mp }^{\lambda +1} \delta \left( x\right) . \end{aligned}$$

Therefore,

$$\begin{aligned} {\mathcal {L}}^{-1}(\delta ) = F^{\alpha , c_-,c_+}. \end{aligned}$$

Proof

The computation of \(I^\alpha _\pm \delta \) is found in [12, Ch. 2§8, p. 153]. It follows from Definition 7, the Inversion Theorem 1 and the first identity of this lemma that:

$$\begin{aligned} {\mathcal {L}}^{-1}(\delta )= & {} K_- I_-^{\alpha } \delta \left( x\right) + K_+ I_+^{\alpha } \delta \left( x\right) = \frac{K_-}{\varGamma (\alpha )}f^{\alpha -1}_+(x) + \frac{K_+}{\varGamma (\alpha )}f^{\alpha -1}_-(x). \end{aligned}$$

If we substitute the values of \(K_-\) and \(K_+\) in terms of \(\alpha ,c_-\) and \(c_+\) we will get that \({\mathcal {L}}^{-1}(\delta ) =F^{\alpha ,c_-,c_+}\) in the sense of \(\varPhi ^{\prime }\) distributions. \(\square \)
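Numerically, the identity \(I_-^{\alpha }\delta (x) = x^{\alpha -1}{{\,\mathrm{1{}l}\,}}_{\{x>0\}}/\varGamma (\alpha )\) can be sanity-checked by replacing \(\delta \) with a narrow Gaussian; the convention used below for the left-sided integral is an assumption of this sketch:

```python
import math

def I_minus_alpha(alpha, f, x, T=30.0, n=60000):
    """Left-sided fractional integral, under the convention (an assumption
    of this sketch) I_-^a f(x) = (1/Gamma(a)) * integral over h > 0 of
    h^{a-1} f(x - h) dh, computed by the midpoint rule."""
    dh = T / n
    acc = 0.0
    for i in range(n):
        h = (i + 0.5) * dh
        acc += h ** (alpha - 1) * f(x - h) * dh
    return acc / math.gamma(alpha)

# a narrow Gaussian standing in for the delta distribution
sig = 1e-2
delta_approx = lambda y: math.exp(-y * y / (2 * sig * sig)) / (sig * math.sqrt(2 * math.pi))

alpha, x = 1.5, 2.0
print(I_minus_alpha(alpha, delta_approx, x), x ** (alpha - 1) / math.gamma(alpha))
```

For \(x>0\) the two printed values agree closely, while for \(x<0\) the integral is essentially zero, matching the support of \(f_+^{\alpha -1}\).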

Thus, Theorem 1 provides insight into the function that satisfies the Tanaka formula.

The class of convolutions \(f=F^{\alpha ,c_-,c_+}*\mu \) in Definition 6 is defined in such a way that the distribution induced by the measure \(\mu \) coincides with \({\mathcal {L}}f\), in the sense of \(\varPhi ^{\prime }\) distributions. As a consequence, \(\mu \) can be considered as the extension of \({\mathcal {L}}f\) from the Lizorkin space to the class \(C_c\) of continuous functions with compact support. A precise version of this is contained in the following lemma. It is here that the completely balanced averages of Lizorkin play a fundamental rôle: they constitute a way to approximate \(\delta \) and other distributions from within the Lizorkin space.

Lemma 3

Let \(f \in {\mathcal {C}}^{\alpha ,c_-,c_+}\) be given by \(f=F*\mu \). Then, \({\mathcal {L}}f = \mu \) in the \(\varPhi ^{\prime }\) sense; that is, for every \(\phi \in \varPhi \):

$$\begin{aligned} \left( {\mathcal {L}}f, \phi \right) = \left( \mu , \phi \right) . \end{aligned}$$

Finally, if \(\mu \) is a finite measure with compact support, then \(\phi \mapsto ({\mathcal {L}}f,\phi )\) extends by continuity to \(\phi \mapsto (\mu ,\phi )\) from \(\varPhi \) to \(C_c\) with the topology of uniform convergence.

Proof

Let \(\phi \in \varPhi \). Since \(\alpha -1\in (0,1)\), the map \(x\mapsto x^{\alpha -1}\) is subadditive on \([0,\infty )\). Hence,

$$\begin{aligned} \int |f(x) \phi (x)|\, dx&\le \kappa \int \int [ |x|^{\alpha -1}+|a|^{\alpha -1}] |\phi (x)|\, |\mu |(da) \, dx \\ {}&\le \kappa | \mu |({\mathbb {R}})\int |x|^{\alpha -1}|\phi (x)|\, dx + \kappa \Vert \phi \Vert _1 \int |a|^{\alpha -1} |\mu |(da)<\infty . \end{aligned}$$

From equation (2.3) and Fubini’s theorem (justified from the previous display applied to \({{\tilde{{\mathcal {L}}}}}\phi \)):

$$\begin{aligned} \left( {\mathcal {L}}f, \phi \right)&=\int _{-\infty }^{\infty } f(x) {\tilde{{\mathcal {L}}}} \phi (x)\, dx = \int _{-\infty }^{\infty } \int _{-\infty }^{\infty } F^{\alpha ,c_-,c_+}(x-a) \mu (da) {\tilde{{\mathcal {L}}}}\phi (x)\, dx\\&= \int _{-\infty }^{\infty } \int _{-\infty }^{\infty } F^{\alpha ,c_-,c_+}(x-a) {\tilde{{\mathcal {L}}}}\phi (x)\, dx \, \mu (da) = \int _{-\infty }^{\infty } \left( {\mathcal {L}}^{-1}\delta _a , {\tilde{{\mathcal {L}}}}\phi \right) \, \mu (da)\\&= \int _{-\infty }^{\infty } \left( \delta _a , \tilde{{\mathcal {L}}^{-1}}{\tilde{{\mathcal {L}}}}\phi \right) \, \mu (da) = \int _{-\infty }^{\infty } \left( \delta _a , \phi \right) \, \mu (da) = \int _{-\infty }^{\infty } \phi (a) \, \mu (da), \end{aligned}$$

yielding that \({\mathcal {L}}f=\mu \) on \(\varPhi ^{\prime }\).

Lizorkin, in [14] (cf. after Definition 3), gives an approximation of \(\delta \) in \(\varPhi ^{\prime }\) by means of a collection of functions \(\kappa _\beta \in \varPhi \) with the following property: if \(\phi \in C_c\), then \(\phi _\beta :=\kappa _\beta *\phi \rightarrow \phi \) uniformly on compact sets; note that \(\phi _\beta \in \varPhi \). Indeed, Lizorkin writes \(\kappa _\beta =\kappa ^1_\beta -\kappa ^2_\beta \), where \(\kappa ^1_\beta \) is a centered Gaussian density of variance \(2\beta ^2\). Hence \(\kappa ^1_\beta *\phi \rightarrow \phi \) uniformly if \(\phi \in C_c\). On the other hand, the proof of Theorem 1 [14, Ch. II§4] tells us that \(\kappa ^2_\beta *\phi \rightarrow 0\) uniformly on compact sets since \(\phi \) is integrable. Hence, \(\varPhi \) is dense in \(C_c\). If \(\mu \) is finite and of compact support, then it also has a finite moment of order \(\alpha -1\) and so, by the previous paragraph, \({\mathcal {L}}(F*\mu )=\mu \) in \(\varPhi '\). The bounded linear functional \(\phi \mapsto (\mu ,\phi )\) on \(C_c\) coincides with \(\phi \mapsto ({\mathcal {L}}f, \phi )\) on \(\varPhi \), so that, by denseness, the latter extends uniquely by continuity to \(C_c\). \(\square \)

The result in Lemma 3 should be compared with the Brownian motion case, where \({\mathcal {L}}f(x) =\frac{1}{2}\varDelta f(x)\) and \({\mathcal {C}}^{2,c,c}\) is the class of differences of convex functions, whose second derivatives are signed measures.

The following results are inspired by the work of Tsukada [1], which we will generalize by means of a well-known procedure for approximating a function by smoothing it with mollifiers (cf. [22, Theorem 6.22]), allowing us to use Itô's formula (Proposition 4).

A positive real function \(\rho \in C^{\infty }_c\), with support in \([-1,1]\) and integral equal to one, is said to be a mollifier. The sequence of functions given by \(\rho _n(x) = n\rho (nx)\), \(n\in {\mathbb {N}}\), then converges weakly to the Dirac \(\delta \) distribution in the sense of Schwartz distributions; that is,

$$\begin{aligned} \left| \int _{-\infty }^{\infty } \rho _n(x)\phi (x)\, dx - \phi (0) \right| \longrightarrow 0, \quad \text {as } n\rightarrow \infty , \end{aligned}$$

for all \(\phi \in {\mathcal {S}}\).
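A minimal numerical illustration of this weak convergence, using the standard bump function as mollifier (a sketch, not part of the argument):

```python
import math

def bump(x):
    # C_c^infinity bump supported in [-1, 1] (not yet normalized)
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

# normalizing constant, so the mollifier has integral one
m0 = 40000
Z = sum(bump(-1.0 + (j + 0.5) * (2.0 / m0)) for j in range(m0)) * (2.0 / m0)

def pair_rho_n(n, phi, m=40000):
    """Pairing of rho_n(x) = n rho(n x) with phi, by the midpoint rule
    over the support [-1/n, 1/n]."""
    lo, dx = -1.0 / n, (2.0 / n) / m
    return sum(n * bump(n * (lo + (j + 0.5) * dx)) / Z * phi(lo + (j + 0.5) * dx)
               for j in range(m)) * dx

phi = lambda x: math.cos(x) * math.exp(-x * x)   # a Schwartz-type test function
for n in (1, 10, 100):
    print(abs(pair_rho_n(n, phi) - phi(0.0)))    # decreases toward 0
```

As n grows, the pairing approaches \(\phi (0)\), which is exactly the weak convergence \(\rho _n \rightarrow \delta \).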

Let \(C^{\infty }_{1+,b}\) be the family of infinitely differentiable functions with bounded derivatives of every order greater than or equal to one. We are going to use some bounds for the function \(F^{\alpha ,c_-,c_+}\), as well as for its increments; for a proof of the following results we refer to [1] (cf. equation (3.9) of the proof of Theorem 3.1 and the proof of Lemma 3.1 in that reference). For fixed \(\alpha , c_-\) and \(c_+\), to ease the notation, we write F instead of \(F^{\alpha ,c_-,c_+}\) when there is no confusion about the parameters. Also, recall the constants \(\kappa _\pm \) in the definition of F and write \(\kappa =\kappa _-\vee \kappa _+\), so that \(0\le F(x)\le \kappa |x|^{\alpha -1}\).

Lemma 4

Let \(\alpha \in (1,2)\) and \(c_-, c_+ \ge 0\), not both zero, and consider a strictly stable process \(X\sim S_{\alpha }\left( c_-, c_+ \right) \). Consider the function \(F^{\alpha ,c_-,c_+}\) in equation (1.3). Then the following hold:

  1.

    Let \((\rho _n)_{n \ge 1}\) be as above. Then \(F_n:= F^{\alpha ,c_-,c_+}*\rho _n \in C^{\infty }_{1+,b}\) for all \(n \in {\mathbb {N}}\) and \(F_n \rightarrow F\) uniformly on compact sets as \(n\rightarrow \infty \).

  2.

    Let \(|h|\le 1\), \(a \in {\mathbb {R}}\), \(s> 0\) and \(0 < \epsilon _0 \le (\alpha -1)\wedge (2-\alpha )\). Then we have:

    $$\begin{aligned}&{\mathbb {E}}\left[ \left| F(X_{s_-} -a +h) - F(X_{s_- }- a) \right| ^2 \right] \\&\quad \le c_1 S(\alpha ,2+\epsilon _0 - \alpha ) s^{(\alpha -2-\epsilon _0)/\alpha } |h|^{\alpha + \epsilon _0}, \end{aligned}$$

    where \(c_1 = 20\kappa ^2\) and the constant \(S(\cdot ,\cdot )\) as in Proposition 3, and the same bound holds if we replace F by \(F_n\). Moreover, this bound satisfies:

    $$\begin{aligned}&\int _0^t \int _{|h|\le 1} s^{(\alpha -2-\epsilon _0)/\alpha } |h|^{\alpha + \epsilon _0} \, \nu (dh)\, ds \\&\quad = \left( \frac{c_+ + c_-}{\epsilon _0} \right) \left( \frac{\alpha }{2\alpha - \epsilon _0 - 2} \right) t^{(2\alpha - \epsilon _0 - 2)/\alpha } < \infty . \end{aligned}$$
  3.

    Let \(|h|> 1\), \(a \in {\mathbb {R}}\) and \(s> 0\). Then we have:

    $$\begin{aligned} {\mathbb {E}}\left[ \left| F(X_s - a +h) - F(X_s -a) \right| \right]\le & {} c_2 |h|^{\alpha -1}, \end{aligned}$$

    where \(c_2 = 4\kappa \) and the same bound holds if we replace F by \(F_n\). Moreover, this bound satisfies:

    $$\begin{aligned} \int _0^t \int _{|h| > 1} |h|^{\alpha -1 } \, \nu (dh) \, ds = \left( c_+ + c_- \right) t < \infty . \end{aligned}$$
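The two \(\nu \)-integrals above are elementary; assuming the strictly stable Lévy measure \(\nu (dh) = \left( c_+{{\,\mathrm{1{}l}\,}}_{\{h>0\}} + c_-{{\,\mathrm{1{}l}\,}}_{\{h<0\}}\right) |h|^{-1-\alpha }\,dh\) (an assumption of this sketch), the first closed form can be checked numerically:

```python
import math

def midpoint(g, lo, hi, n):
    d = (hi - lo) / n
    return sum(g(lo + (i + 0.5) * d) for i in range(n)) * d

alpha, eps0, t, cp, cm = 1.6, 0.3, 2.0, 1.0, 0.5

# integral of |h|^{alpha+eps0} against nu(dh) over 0 < |h| <= 1, with
# nu(dh) = c_(+/-) |h|^{-1-alpha} dh; substitute h = e^v to tame the
# integrable singularity at 0
h_int = (cp + cm) * midpoint(lambda v: math.exp(eps0 * v), -60.0, 0.0, 20000)

# integral of s^{(alpha-2-eps0)/alpha} over (0, t), again via s = e^v
p = (alpha - 2 - eps0) / alpha
s_int = midpoint(lambda v: math.exp((p + 1) * v), -60.0, math.log(t), 20000)

lhs = h_int * s_int
rhs = ((cp + cm) / eps0) * (alpha / (2 * alpha - eps0 - 2)) * t ** ((2 * alpha - eps0 - 2) / alpha)
print(lhs, rhs)
```

The logarithmic substitution turns both singular integrands into smooth, rapidly decaying ones, so a plain midpoint rule reproduces the closed form accurately.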

The following result is a corollary of Lemma 4 and it will be useful in several steps of the Meyer-Itô theorem’s proof.

Corollary 3

Under the assumptions of Lemma 4, let \(f = F* \mu \in {\mathcal {C}}^{\alpha ,c_-,c_+}\) with \(\mu \) a finite Radon measure, and consider \(f_n = f* \rho _n\) for \(n\in {\mathbb {N}}\). Then we have:

$$\begin{aligned}&{\mathbb {E}}\left[ \left| f(X_{s_-} +h) - f(X_{s_-}) \right| ^2\right] \le (\mu ({\mathbb {R}}))^2 c_1 S(\alpha ,2+\epsilon _0 - \alpha ) s^{(\alpha -2-\epsilon _0)/\alpha } |h|^{\alpha + \epsilon _0}, \end{aligned}$$

for \(|h|\le 1\),

$$\begin{aligned}&{\mathbb {E}}\left[ \left| f(X_{s_-} +h) - f(X_{s_-})\right| \right] \le \mu ({\mathbb {R}})c_2 |h|^{\alpha - 1}, \end{aligned}$$

for \(|h|>1\), and the same bounds are satisfied if we replace f with \(f_n\). These bounds belong to \(L^1\left( (0,t)\times A, {\mathcal {B}}((0,t)\times A), {{\,\textrm{Leb}\,}}\otimes \nu \right) \), with \(A = [-1,1]{\setminus }\{0\}\) and \(A = [-1,1]^c\), respectively.

These results follow from Lemma 4 and an application of a Jensen-like inequality for finite measures.

Proof of the Occupational Meyer-Itô Formula (Theorem 2)

Without loss of generality, we assume that \(\mu \) is a positive measure, which was assumed to be finite with compact support and, therefore, with moments of order \(\alpha \) and \(2(\alpha -1)\). Then we have the representation:

$$\begin{aligned} f\left( x\right)= & {} \int _{-\infty }^{\infty } F^{\alpha ,c_-,c_+}\left( x - a\right) \mu \left( da\right) . \end{aligned}$$

Consider the sequences \(F_n = F*\rho _n\) and \(f_n = f*\rho _n = F*\rho _n*\mu \), \(n\in {\mathbb {N}}\), of infinitely differentiable approximations of F and f; then \(f_n \rightarrow f\) uniformly on compact sets ([28, Theorem 4.1: Properties of mollifiers]).

Since \(f_n \in C_{1+,b}^{\infty }\subset C_{1+,b}^2\), using Itô’s formula (Proposition 4) we have:

$$\begin{aligned} f_n(X_t) = f_n(X_0) + M_t^{n} + V_t^{n}, \end{aligned}$$
(3.2)

where the last two terms are

$$\begin{aligned} M_t^{n}&= \int _{0}^t \int _{{\mathbb {R}}_0} \left[ f_n \left( X_{s-} + h \right) - f_n \left( X_{s-} \right) \right] {\tilde{N}}(ds,dh) \end{aligned}$$

and

$$\begin{aligned} V_t^{n}&= \int _{0}^t {\mathcal {L}}f_n (X_s) ds. \end{aligned}$$

Moreover, since the behavior of \(M^n_t\) is different depending on the size of the jumps, we will consider \(M^n_t = M^{1,n}_t + M^{2,n}_t\), where

$$\begin{aligned} M_t^{1,n}= & {} \int _{0}^t \int _{|h| \le 1} \left[ f_n\left( X_{s-} + h \right) - f_n \left( X_{s-} \right) \right] {\tilde{N}}(ds,dh), \\ M_t^{2,n}= & {} \int _{0}^t \int _{|h| > 1} \left[ f_n\left( X_{s-} + h \right) - f_n \left( X_{s-} \right) \right] {\tilde{N}}(ds,dh). \end{aligned}$$

In a similar fashion, we define \(M_t = M_t^1 + M_t^2\), by replacing \(f_n\) with f.

The proof consists in establishing the following steps:

  1. Step 1

    \(f(X_t)\) and \(f_n(X_t)\) are in \(L^1({\mathbb {P}})\) and \(f_n(X_t)\rightarrow f(X_t)\) in \(L^1\).

  2. Step 2

    \(M^{1}\) and \(M^{1,n}\) are square integrable martingales and \(M^{1,n}_t\rightarrow M^1_t\) in \(L^2\).

  3. Step 3

    \(M^{2}\) and \(M^{2,n}\) are integrable martingales and \(M^{2,n}_t\rightarrow M^{2}_t\) in \(L^1\).

  4. Step 4

    \(V^n_t\rightarrow \int L^a_t\, \mu (da)\) in \(L^1\).

Let us begin with Step 1. First, we provide a bound for f(x) and \(f_n(x)\) in terms of x which does not depend on n. Since \(\alpha -1\in (0,1)\), the map \(x\mapsto x^{\alpha -1}\) is subadditive on \([0,\infty )\), so that

$$\begin{aligned} 0\le & {} f_n(x) = \int _{-\infty }^{\infty } f(x-y)\rho _n(y)\, dy\\= & {} \int _{-\infty }^{\infty } \int _{-\infty }^{\infty } F(x-a-y)\rho _n(y)\, dy \,\mu (da)\\\le & {} \int _{-\infty }^{\infty } \int _{-1/n}^{1/n} 2\kappa (|x|^{\alpha -1} + |a|^{\alpha -1} +|y|^{\alpha -1})\rho _n(y)\, dy \, \mu (da)\\\le & {} 2\kappa \int _{-\infty }^{\infty } (|x|^{\alpha -1} + |a|^{\alpha -1} + 1) \, \mu (da), \end{aligned}$$

which is finite for any \(x\in {\mathbb {R}}\) by the assumptions on \(\mu \) and does not depend on n.

By similar arguments we have that

$$\begin{aligned} 0\le f(x) \le 2\kappa \int _{-\infty }^{\infty } (|x|^{\alpha -1} + |a|^{\alpha -1} )\, \mu (da). \end{aligned}$$
(3.3)
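The subadditivity of \(x\mapsto x^{\alpha -1}\) used throughout these bounds can be spot-checked numerically (a trivial sketch, not part of the proof):

```python
import math
import random

# subadditivity of t -> t^(alpha - 1) on [0, infinity) for alpha in (1, 2)
alpha = 1.5
random.seed(7)
for _ in range(1000):
    x, y = random.uniform(0.0, 50.0), random.uniform(0.0, 50.0)
    assert (x + y) ** (alpha - 1) <= x ** (alpha - 1) + y ** (alpha - 1) + 1e-12
print("subadditivity holds on all sampled pairs")
```

The inequality holds because \(t\mapsto t^{\alpha -1}\) is concave with value zero at the origin.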

For the squared difference, using a Jensen-like inequality for finite measures, we have,

$$\begin{aligned}{} & {} |f_n(x)-f(x)|^2 \le 2|f_n(x)|^2 + 2|f(x)|^2 \nonumber \\{} & {} \quad \le 16 \kappa ^2 \left( \int _{-\infty }^{\infty } (|x|^{\alpha -1} + |a|^{\alpha -1} + 1) \, \mu (da)\right) ^2 \nonumber \\{} & {} \quad \le 16 \kappa ^2\mu ({\mathbb {R}}) \int _{-\infty }^{\infty } \left( (|x|^{\alpha -1} + |a|^{\alpha -1} + 1)\right) ^2 \, \mu (da) \nonumber \\{} & {} \quad \le 48 \kappa ^2\mu ({\mathbb {R}}) \int _{-\infty }^{\infty } (|x|^{2\alpha -2} + |a|^{2\alpha -2} + 1) \, \mu (da). \end{aligned}$$
(3.4)

Then, similar arguments give

$$\begin{aligned} |f_n(X_t)|^2\le & {} 12 \kappa ^2\mu ({\mathbb {R}}) \int _{-\infty }^{\infty } (|X_t|^{2\alpha -2} + |a|^{2\alpha -2} + 1) \mu (da)\quad \text {and}\\ |f(X_t)|^2\le & {} 8 \kappa ^2 \mu ({\mathbb {R}}) \int _{-\infty }^{\infty } (|X_t|^{2\alpha -2} + |a|^{2\alpha -2} ) \, \mu (da),\\ \end{aligned}$$

and these bounds are independent of n and belong to \(L^1({\mathbb {P}})\) since \(0<2\alpha -2<\alpha \) and \(\mu \) is a finite measure with a moment of order \(2\alpha -2\). We can conclude that \(f_n(X_t)\) and \(f(X_t)\) are elements of \(L^2({\mathbb {P}})\). Moreover, by dominated convergence, we get

$$\begin{aligned} \lim _{n\rightarrow \infty }{\mathbb {E}}\left[ |f_n(X_t)-f(X_t)|^2\right] = {\mathbb {E}}\left[ \lim _{n\rightarrow \infty }|f_n(X_t)-f(X_t)|^2\right] = 0, \end{aligned}$$
(3.5)

so that \(f_n(X_t) \rightarrow f(X_t)\) in \(L^2({\mathbb {P}})\), which implies Step 1’s assertions.

Let us move to Step 2. In this case we are considering the jumps smaller than one, i.e. \(|h| \le 1\). To prove that \(M^{1,n}\) is a square integrable martingale, according to Ikeda and Watanabe ([29], Section II.3), we need to show that:

$$\begin{aligned} m^{1,n}_t:= {\mathbb {E}}\left[ \int _{0}^t \int _{|h|\le 1} \left| f_n \left( X_{s-} + h \right) - f_n \left( X_{s-} \right) \right| ^2 \,\nu (dh)\, ds \right] < \infty . \end{aligned}$$

Since the integrand is positive and \(({\mathcal {X}},{\mathcal {B}}({\mathcal {X}}))\)-measurable with \({\mathcal {X}}=\varOmega \times ([-1,1]\setminus \{0\}) \times [0,t]\), by the Fubini theorem (cf. [30, Theorem 1.27]) it suffices to prove finiteness in any one order of integration. Then, using the bound in Corollary 3 for \(|h|\le 1\) and Lemma 4, we have:

$$\begin{aligned} m^{1,n}_t= & {} \int _{0}^t \int _{|h|\le 1} {\mathbb {E}}\left[ \left| f_n \left( X_{s-} + h \right) - f_n \left( X_{s-} \right) \right| ^2 \right] \, \nu (dh)\, ds \\\le & {} \int _{0}^t \int _{|h|\le 1} (\mu ({\mathbb {R}}))^2 c_1 S(\alpha ,2+\epsilon _0 - \alpha ) s^{(\alpha -2-\epsilon _0)/\alpha } |h|^{\alpha + \epsilon _0} \, \nu (dh)\, ds \\\le & {} (\mu ({\mathbb {R}}))^2 c_1 S(\alpha ,2+\epsilon _0 - \alpha ) \int _{0}^t \int _{|h|\le 1} s^{(\alpha -2-\epsilon _0)/\alpha } |h|^{\alpha + \epsilon _0} \, \nu (dh)\, ds \\< & {} \infty . \end{aligned}$$

The result for \(m^1_t\) follows from Corollary 3 in a similar fashion. Hence, \(M^1\) is also a square integrable martingale.

In order to prove the convergence of \(M^{1,n}_t \rightarrow M^1_t\) in \(L^2({\mathbb {P}})\), first note that according to Corollary 3 we have

$$\begin{aligned} {\mathcal {M}}^1_n:= & {} {\mathbb {E}}\left[ \left| f_n \left( X_{s-} + h \right) - f_n \left( X_{s-} \right) - \left( f \left( X_{s-} + h \right) - f \left( X_{s-} \right) \right) \right| ^2\right] \\\le & {} 2{\mathbb {E}}\left[ \left| f_n \left( X_{s-} + h \right) - f_n \left( X_{s-} \right) \right| ^2\right] + 2{\mathbb {E}}\left[ \left| f \left( X_{s-} + h \right) - f \left( X_{s-} \right) \right| ^2\right] \\\le & {} 4(\mu ({\mathbb {R}}))^2 c_1 S(\alpha ,2+\epsilon _0 - \alpha ) s^{(\alpha -2-\epsilon _0)/\alpha } |h|^{\alpha + \epsilon _0}. \end{aligned}$$

Thus, \(({\mathcal {M}}^1_n)_{n\ge 1}\) is dominated in

$$\begin{aligned} L^1\left( (0,t)\times [-1,1]\setminus \{0\}, {\mathcal {B}}((0,t)\times [-1,1]\setminus \{0\}), {{\,\textrm{Leb}\,}}\otimes \nu \right) . \end{aligned}$$

Let \(\varDelta _hg(x)=g(x+h)-g(x)\). We know that \(M^{1,n}_t-M^1_t\) is a square integrable martingale for any \(n\in {\mathbb {N}}\); then, using Itô's isometry ([27, p. 223]) and the dominated convergence theorem for the sequence \(({\mathcal {M}}^1_n)_{n\ge 1}\), we have:

$$\begin{aligned}{} & {} \lim _{n\rightarrow \infty }{\mathbb {E}}\left[ \left| M^{1,n}_t-M^1_t \right| ^2 \right] \\{} & {} \quad =\lim _{n\rightarrow \infty } \int _{0}^t \int _{|h|\le 1} {\mathbb {E}}\left[ \left| \varDelta _h f_n \left( X_{s-}\right) - \varDelta _h f (X_{s-}) \right| ^2\right] \, \nu (dh)\, ds \\{} & {} \quad =\int _{0}^t \int _{|h|\le 1} \lim _{n\rightarrow \infty } {\mathbb {E}}\left[ \left| \varDelta _h f_n \left( X_{s-} \right) - \varDelta _h f \left( X_{s-} \right) \right| ^2\right] \, \nu (dh)\, ds\\{} & {} \quad =0. \end{aligned}$$

The convergence to zero in the last equality is a consequence of equation (3.5) in Step 1. Therefore \(M^{1,n}_t \rightarrow M^1_t\) in \(L^2({\mathbb {P}})\), which completes Step 2.

For Step 3, we consider the jumps greater than one, i.e. \(|h| > 1\). To prove that \(M^{2,n}_t\) is a martingale, following Ikeda and Watanabe ([29], Section II.3) we must show:

$$\begin{aligned} m^{2,n}_t:= {\mathbb {E}}\left[ \int _{0}^t \int _{|h|> 1} \left| f_n \left( X_{s-} + h \right) - f_n \left( X_{s-} \right) \right| \, \nu (dh)\, ds \right] < \infty . \end{aligned}$$

Since the integrand is positive and \(({\mathcal {X}},{\mathcal {B}}({\mathcal {X}}))\)-measurable with \({\mathcal {X}}=\varOmega \times [-1,1]^c \times [0,t]\), by Fubini’s theorem it suffices to prove finiteness in any order of integration. Then, using the bound in Corollary 3 for \(|h|> 1\) and Lemma 4, we have:

$$\begin{aligned} m^{2,n}_t= & {} \int _{0}^t \int _{|h|> 1} {\mathbb {E}}\left[ \left| f_n \left( X_{s-} + h \right) - f_n \left( X_{s-} \right) \right| \right] \, \nu (dh)\, ds \\\le & {} \mu ({\mathbb {R}})\int _{0}^t \int _{|h|> 1} c_2 |h|^{\alpha - 1} \, \nu (dh)\, ds < \infty . \end{aligned}$$
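The finiteness of the last bound can be made concrete: dominating \(\nu (dh)\) by \({{\overline{c}}}|h|^{-\alpha -1}\,dh\) with \({{\overline{c}}}=c_- \vee c_+\), the integrand \(|h|^{\alpha -1}\cdot {{\overline{c}}}|h|^{-\alpha -1}={{\overline{c}}}|h|^{-2}\), so the tail integral over \(|h|>1\) equals \(2{{\overline{c}}}\) for every \(\alpha \). A minimal numerical sketch (parameter values are hypothetical; the log-spaced grid is just a quadrature choice):

```python
import math

def tail_half(alpha, cbar, hmax=1e8, steps=100000):
    """Midpoint rule on a geometric grid for
       int_1^hmax  h**(alpha-1) * cbar*h**(-alpha-1)  dh."""
    r = math.log(hmax) / steps           # log-spacing
    total = 0.0
    for k in range(steps):
        h = math.exp((k + 0.5) * r)      # midpoint in log scale
        total += h ** (alpha - 1) * cbar * h ** (-alpha - 1) * h * r  # dh = h*r
    return total

cbar = 1.0                                # hypothetical value of c_- v c_+
for alpha in (1.2, 1.5, 1.8):             # any alpha in (1,2)
    # both tails contribute the same amount, so the full integral is 2*cbar
    assert abs(2 * tail_half(alpha, cbar) - 2 * cbar) < 1e-3
```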

The result for \(m^2_t\) follows by the same bounds in Corollary 3, so that \(M^2_t\) is also a martingale. As in the previous step, to prove the convergence of \(M^{2,n} \rightarrow M^2\) in \(L^1({\mathbb {P}})\), first note that according to Corollary 3 we have

$$\begin{aligned} {\mathcal {M}}^2_n:= & {} {\mathbb {E}}\left[ \left| f_n \left( X_{s-} + h \right) - f_n \left( X_{s-} \right) - \left( f \left( X_{s-} + h \right) - f \left( X_{s-} \right) \right) \right| \right] \\\le & {} {\mathbb {E}}\left[ \left| f_n \left( X_{s-} + h \right) - f_n \left( X_{s-} \right) \right| \right] + {\mathbb {E}}\left[ \left| f \left( X_{s-} + h \right) - f \left( X_{s-} \right) \right| \right] \\\le & {} 2\mu ({\mathbb {R}}) c_2 |h|^{\alpha - 1}. \end{aligned}$$

Thus, \(({\mathcal {M}}^2_n)_{n\ge 1}\) is dominated in

$$\begin{aligned} L^1\left( (0,t)\times [-1,1]^c, {\mathcal {B}}((0,t)\times [-1,1]^c), {{\,\textrm{Leb}\,}}\otimes \nu \right) . \end{aligned}$$

We know that \(M^{2,n}_t-M^2_t\) is a stochastic integral with respect to a Poisson random measure for any \(n\in {\mathbb {N}}\); hence, using Campbell’s theorem ([31], Section 3.2) and the dominated convergence theorem for the sequence \(({\mathcal {M}}^2_n)_{n\ge 1}\), we have:

$$\begin{aligned}{} & {} \lim _{n\rightarrow \infty }{\mathbb {E}}\left[ \left| M^{2,n}_t-M^2_t \right| \right] \\{} & {} \quad \le \int _{0}^t \int _{|h| > 1} \lim _{n\rightarrow \infty } {\mathbb {E}}\left[ \left| \varDelta _h f_n(X_{s-})-\varDelta _h f(X_{s-}) \right| \right] \, \nu (dh)\, ds= 0. \end{aligned}$$

The convergence to zero in the last equality is a consequence of equation (3.5) in Step 1. Therefore \(M^{2,n}_t \rightarrow M^2_t\) in \(L^1({\mathbb {P}})\), which completes Step 3.

By Steps 2 and 3 we conclude that \(M^{n}\) and \(M\) in equation (3.2) are martingales and that \(M^n_t\rightarrow M_t\) in \(L^1({\mathbb {P}})\).

Finally, for Step 4, we have from equation (3.2) that:

$$\begin{aligned} V_t^{n} = f_n(X_t) - f_n(X_0) - M_t^{n} {\mathop {\rightarrow }\limits ^{L^1({\mathbb {P}})}} f(X_t) - f(X_0) - M_t, \end{aligned}$$

as \(n \rightarrow \infty \), so that the limit \(\lim _{n\rightarrow \infty } V_t^n\) exists in \(L^1({\mathbb {P}})\). We just need to verify that this limit coincides with the one stated in the theorem.

We know that \(f_n =F*(\rho _n*\mu )\in C_{1+,b}^{\infty } \cap C^{\alpha ,c_-,c_+}\) is positive and measurable and that \(\rho _n*\mu \) is a finite measure with compact support. Then \({\mathcal {L}}f_n\) is well defined, positive and measurable as well. So, by the occupation formula we have:

$$\begin{aligned} V_t^{n} = \int _{0}^t {\mathcal {L}}f_n (X_s) \, ds = \int _{-\infty }^{\infty } L_t^a {\mathcal {L}}f_n (a) \, da. \end{aligned}$$

Since \(L_t^a(\omega ) \in C_c\) for almost all \(\omega \in \varOmega \), Lemma 3 tells us that

$$\begin{aligned} V_t^{n} = \int _{-\infty }^{\infty } L_t^a \, (\mu * \rho _n)(da), \end{aligned}$$

and since \(\rho _n \rightarrow \delta \) weakly as \(n\rightarrow \infty \), \((\mu * \rho _n) \rightarrow \mu \) weakly as well. Hence,

$$\begin{aligned} \left| \int _{-\infty }^{\infty } L_t^a \, (\mu * \rho _n)(da) -\int _{-\infty }^{\infty } L_t^a \, \mu (da) \right| \rightarrow 0, \quad \hbox { as}\ n\rightarrow \infty . \end{aligned}$$

Steps 1–4 finish the proof of Theorem 2. \(\square \)
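The mollification argument of Step 4 (\(\rho _n\rightarrow \delta \) weakly, hence \(\mu *\rho _n\rightarrow \mu \) weakly) can be illustrated with a small numerical experiment. The sketch below is a stand-in, assuming a Gaussian approximate identity for \(\rho _n\) and the simplest choice \(\mu =\delta _0\); it checks that \(\int g\,d(\mu *\rho _n)\rightarrow g(0)\) for a bounded continuous \(g\):

```python
import math

def rho_n(y, n):
    # Gaussian approximate identity, a stand-in for the mollifier rho_n
    return n / math.sqrt(2 * math.pi) * math.exp(-((n * y) ** 2) / 2)

def smoothed(g, n, lo=-5.0, hi=5.0, steps=100001):
    # midpoint rule for int g(y) (delta_0 * rho_n)(y) dy = int g(y) rho_n(y) dy
    dy = (hi - lo) / steps
    total = 0.0
    for k in range(steps):
        y = lo + (k + 0.5) * dy
        total += g(y) * rho_n(y, n) * dy
    return total

g = math.cos                      # bounded continuous test function, g(0) = 1
errs = [abs(smoothed(g, n) - g(0)) for n in (1, 4, 16, 64)]
assert all(e2 < e1 for e1, e2 in zip(errs, errs[1:]))   # weak convergence to delta_0
assert errs[-1] < 1e-3
```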

For a first application, we have the Tanaka formula for asymmetric strictly stable processes.

Corollary 4

(Tanaka formula) Let \(\alpha \in (1,2)\), \(c_-, c_+ \ge 0\), not both zero, and consider a strictly stable process \(X\sim S_{\alpha }\left( c_-, c_+ \right) \). Then, the Tanaka formula is satisfied:

$$\begin{aligned} F^{\alpha ,c_-,c_+}\left( X_{t}-a \right) = F^{\alpha ,c_-,c_+}\left( X_{0}-a\right) + M_{t}^a(X) + L_t^a(X), \end{aligned}$$
(3.6)

where \(L_t^a(X)\) is the occupational local time at a up to time t of X and \(M_t^a(X)\) is a square integrable martingale given by

$$\begin{aligned} M_{t}^a(X) = \int _0^t \int _{{\mathbb {R}}_0} \left[ F^{\alpha ,c_-,c_+}\left( X_{s-}-a+h\right) - F^{\alpha ,c_-,c_+}\left( X_{s-}-a\right) \right] {\tilde{N}}(ds,dh). \end{aligned}$$

Proof

Consider the unit point mass at \(a\), that is, \(\delta _a(E)=1\) if \(a\in E\) and zero otherwise, and take \(f(x) = \left( F^{\alpha ,c_-,c_+} * \delta _a\right) (x) = F^{\alpha ,c_-,c_+}(x-a)\). Using the occupational Meyer-Itô theorem we have:

$$\begin{aligned}{} & {} F^{\alpha ,c_-,c_+}(X_t-a) = F^{\alpha ,c_-,c_+}\left( X_0-a\right) +\int _{-\infty }^{\infty } L_t^{x}\left( X\right) \delta _a \left( dx\right) \\{} & {} \qquad +\int _{0}^{t}\int _{{\mathbb {R}}_{0}}\left[ F^{\alpha ,c_-,c_+}\left( X_{s-}-a+h\right) -F^{\alpha ,c_-,c_+}\left( X_{s-}-a\right) \right] {\tilde{N}}\left( ds,dh\right) \\{} & {} \quad = F^{\alpha ,c_-,c_+}\left( X_0-a\right) + M^a_t + L_t^a\left( X\right) . \end{aligned}$$

\(\square \)

We turn our attention to the power decomposition of Theorem 3. Our first step will be to explicitly compute the infinitesimal generator of the power functions in Lemma 2. Let \(\alpha \in (1,2)\), \(c_-, c_+ \ge 0\) not both zero and \(\alpha -1< \gamma <\alpha \). From Lemma 2 we know that \(f_\pm ^{\gamma }\) belong to \(\varPhi ^{\prime }\) and can be identified with the following fractional integrals:

$$\begin{aligned} f_+^{\gamma }(x)= & {} \varGamma (\gamma + 1)I_-^{\gamma + 1} \delta \left( x\right) ,\\ f_-^{\gamma }(x)= & {} \varGamma (\gamma + 1)I_+^{\gamma + 1} \delta \left( x\right) . \end{aligned}$$

Consider the infinitesimal generator evaluated at \(f^\gamma _+(x)\): with the constants \(M_\pm \) as defined in Proposition 1, we have

$$\begin{aligned} {\mathcal {L}}f_+^{\gamma }\left( x\right)= & {} M_-D_-^{\alpha }f_+^{\gamma }\left( x\right) + M_+D_+^{\alpha }f_+^{\gamma }\left( x\right) \\= & {} \varGamma (\gamma + 1) M_-D_-^{\alpha } I_-^{\gamma + 1} \delta \left( x\right) + \varGamma (\gamma + 1) M_+D_+^{\alpha } I_-^{\gamma + 1} \delta \left( x\right) \end{aligned}$$

Using the fractional composition formulas in Lemma 1, we get

$$\begin{aligned} {\mathcal {L}}f_+^{\gamma }\left( x\right)= & {} \varGamma (\gamma + 1) M_- I_-^{\gamma - \alpha + 1} \delta \left( x\right) \\ {}+ & {} \varGamma (\gamma + 1) M_+ \frac{\sin \left( (\gamma +1)\pi \right) }{\sin \left( (\gamma -\alpha +1)\pi \right) } I_-^{\gamma - \alpha + 1} \delta \left( x\right) \\+ & {} \varGamma (\gamma + 1) M_+ \frac{\sin \left( -\alpha \pi \right) }{\sin \left( (\gamma -\alpha +1)\pi \right) } I_+^{\gamma - \alpha + 1}\delta \left( x\right) \\= & {} \frac{\varGamma (\gamma + 1) M_-}{\varGamma (\gamma - \alpha +1)}f_+^{\gamma - \alpha }(x) + \frac{\varGamma (\gamma + 1) M_+}{\varGamma (\gamma - \alpha +1)} \frac{\sin \left( (\gamma +1)\pi \right) }{\sin \left( (\gamma -\alpha +1)\pi \right) } f_+^{\gamma - \alpha }(x)\\+ & {} \frac{\varGamma (\gamma + 1) M_+ }{\varGamma (\gamma - \alpha +1)} \frac{\sin \left( -\alpha \pi \right) }{\sin \left( (\gamma -\alpha +1)\pi \right) } f_-^{\gamma - \alpha }(x). \end{aligned}$$

For the function \(f_-^{\gamma }(x)\), we can proceed similarly to get

$$\begin{aligned} {\mathcal {L}}f_-^{\gamma }\left( x\right)= & {} \varGamma (\gamma + 1) M_- \frac{\sin \left( -\alpha \pi \right) }{\sin \left( (\gamma -\alpha +1)\pi \right) } I_-^{\gamma - \alpha + 1} \delta \left( x\right) \\+ & {} \varGamma (\gamma + 1) M_- \frac{\sin \left( (\gamma +1)\pi \right) }{\sin \left( (\gamma -\alpha +1)\pi \right) } I_+^{\gamma - \alpha + 1}\delta \left( x\right) \\ {}+ & {} \varGamma (\gamma + 1) M_+ I_+^{\gamma - \alpha + 1} \delta \left( x\right) \\= & {} \frac{\varGamma (\gamma + 1) M_-}{\varGamma (\gamma - \alpha +1)} \frac{\sin \left( -\alpha \pi \right) }{\sin \left( (\gamma -\alpha +1)\pi \right) } f_+^{\gamma - \alpha }(x)\\+ & {} \frac{\varGamma (\gamma + 1) M_- }{\varGamma (\gamma - \alpha +1)} \frac{\sin \left( (\gamma +1)\pi \right) }{\sin \left( (\gamma -\alpha +1)\pi \right) } f_-^{\gamma - \alpha }(x) + \frac{\varGamma (\gamma + 1) M_+}{\varGamma (\gamma - \alpha +1)} f_-^{\gamma - \alpha }(x). \end{aligned}$$

Before we prove Theorem 3, we need to understand the constants \(k_{\pm }\) (which depend on \(\alpha , \gamma , c_-, c_+ \)) that are used there. They play an important role in the bounded variation part of the power decomposition (1.5): in order for it to be an increasing process, both need to be positive. The following lemma identifies the critical exponent \(\gamma \) from which both \(k_{\pm }\left( \alpha , \gamma , c_-, c_+ \right) \) are positive. Recall the definition of \(c\) in Corollary 1.

Lemma 5

Let \(\alpha \in (1,2)\), \(\gamma \in (\alpha -1,\alpha )\) and \(k_{\pm }\left( \alpha , \gamma , c_-, c_+ \right) \) as in Theorem 3. Define

$$\begin{aligned} \beta (a,c):= \frac{1}{\pi } \arccos \left( \frac{c^2(1-a^2)-(1+ac)^2}{c^2(1-a^2)+(1+ac)^2} \right) \in (\alpha -1,1), \end{aligned}$$

where \(a=\cos (\alpha \pi )\) and \(c=\frac{\min (c_-,c_+)}{\max (c_-,c_+)}\). Then, if \(c_-<c_+\) we have that \(k_-\left( \alpha , \gamma , c_-, c_+ \right) \) is positive for all \(\gamma \in (\alpha -1,\alpha )\) while \(k_+\left( \alpha , \gamma , c_-, c_+ \right) \) is negative if \(\gamma \in (\alpha -1, \beta (a,c))\) and positive if \(\gamma \in ( \beta (a,c),1)\). The same conclusion follows for \(c_+<c_-\) after switching the roles of the \(k_{\pm }\left( \alpha , \gamma , c_-, c_+ \right) \).

Proof

Assume that \(c_- < c_+\). First, we prove \(k_-\left( \alpha , \gamma , c_-, c_+ \right) >0\) for all \(\gamma \in (\alpha -1,\alpha )\). Note that:

$$\begin{aligned}{} & {} k_-\left( \alpha , \gamma , c_-, c_+ \right) \\{} & {} \quad = \frac{\varGamma (\gamma + 1)}{\varGamma (\gamma - \alpha +1)}\left[ M_+\frac{\sin \left( -\alpha \pi \right) }{\sin \left( (\gamma -\alpha +1)\pi \right) } + M_- \frac{\sin \left( (\gamma +1)\pi \right) }{\sin \left( (\gamma -\alpha +1)\pi \right) } + M_+\right] \\{} & {} \quad = \frac{\varGamma (\gamma + 1)M_+}{\varGamma (\gamma - \alpha +1)\sin \left( (\gamma -\alpha +1)\pi \right) } \\{} & {} \qquad \cdot \left[ \sin \left( -\alpha \pi \right) + c \sin \left( (\gamma +1)\pi \right) + \sin \left( (\gamma -\alpha +1)\pi \right) \right] \\{} & {} \quad = \frac{\varGamma (\gamma + 1)c_+\varGamma (-\alpha )}{\varGamma (\gamma - \alpha +1)\sin \left( (\gamma -\alpha +1)\pi \right) }\left[ \sin \left( -\alpha \pi \right) - c \sin \left( \gamma \pi \right) - \sin \left( (\gamma -\alpha )\pi \right) \right] . \end{aligned}$$

Since we have

$$\begin{aligned} \frac{\varGamma (\gamma + 1)c_+\varGamma (-\alpha )}{\varGamma (\gamma - \alpha +1)\sin \left( (\gamma -\alpha +1)\pi \right) } > 0, \end{aligned}$$

for all \(\alpha \in (1,2)\) and \(\gamma \in (\alpha -1,\alpha )\), then \(k_-\left( \alpha , \gamma , c_-, c_+ \right) >0\) is equivalent to:

$$\begin{aligned} h_-(\gamma ):=\sin \left( -\alpha \pi \right) - c \sin \left( \gamma \pi \right) - \sin \left( (\gamma -\alpha )\pi \right) > 0, \end{aligned}$$

for all \(\gamma \in (\alpha -1,\alpha )\). Lemma 7 tells us that \(h_\pm \) are 2-periodic. Moreover, we have that:

$$\begin{aligned} h_-(0)= & {} \sin \left( -\alpha \pi \right) - \sin \left( -\alpha \pi \right) = 0,\\ h_-(\alpha -1)= & {} \sin \left( -\alpha \pi \right) - c \sin \left( (\alpha -1)\pi \right) \\= & {} \sin \left( -\alpha \pi \right) (1-c)\\> & {} 0, \end{aligned}$$

because \(c<1\) and \(\alpha \in (1,2)\). This means that \(h_-(\gamma )\) has just one zero in (0, 2) and it occurs before \(\alpha -1\), so that \(h_-(\gamma )>0\) for all \(\gamma \in (\alpha -1,\alpha )\), and hence \(k_-\left( \alpha , \gamma , c_-, c_+ \right) >0\) on the same interval.

We prove the change of sign of \(k_+\left( \alpha , \gamma , c_-, c_+ \right) \) in a similar way. Note that, as in the previous case, we just need to analyze the sign changes of the function:

$$\begin{aligned} h_+(\gamma ):=c\sin \left( -\alpha \pi \right) - \sin \left( \gamma \pi \right) - c\sin \left( (\gamma -\alpha )\pi \right) . \end{aligned}$$

Since

$$\begin{aligned} h_+(0)=c\sin \left( -\alpha \pi \right) - c\sin \left( -\alpha \pi \right) = 0, \end{aligned}$$

there must be just one zero in \((0,2)\); this zero is precisely \(\gamma = \beta (a,c)\) ([24], Lemma 9). But, by definition, \(\beta (a,c) \in (\alpha -1,1)\), which means that:

$$\begin{aligned} h_+(\gamma )< & {} 0,\quad \text {if}\ \gamma \in (\alpha -1,\beta (a,c))\ \text {and}\\ h_+(\gamma )\ge & {} 0,\quad \hbox { if}\ \gamma \in [\beta (a,c),1). \end{aligned}$$

Finally, when \(c_+ < c_-\), just note that since \(k_+\left( \alpha , \gamma , c_-, c_+ \right) = k_-\left( \alpha , \gamma , c_+, c_- \right) \) we can use the same proof. \(\square \)
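The sign pattern of Lemma 5 is easy to probe numerically. The sketch below (the parameter values \(\alpha =1.5\), \(c_-=0.3\), \(c_+=1\) are an arbitrary example with \(c_-<c_+\)) evaluates \(\beta (a,c)\) and the auxiliary functions \(h_{\pm }\) from the proof, and checks that \(\beta \) is the root of \(h_+\) and that the signs match the statement:

```python
import math

def beta(alpha, c_minus, c_plus):
    # critical exponent of Lemma 5; a = cos(alpha*pi), c = min/max ratio
    a = math.cos(alpha * math.pi)
    c = min(c_minus, c_plus) / max(c_minus, c_plus)
    num = c * c * (1 - a * a) - (1 + a * c) ** 2
    den = c * c * (1 - a * a) + (1 + a * c) ** 2
    return math.acos(num / den) / math.pi

def h_plus(g, alpha, c):
    return (c * math.sin(-alpha * math.pi) - math.sin(g * math.pi)
            - c * math.sin((g - alpha) * math.pi))

def h_minus(g, alpha, c):
    return (math.sin(-alpha * math.pi) - c * math.sin(g * math.pi)
            - math.sin((g - alpha) * math.pi))

alpha, c_minus, c_plus = 1.5, 0.3, 1.0    # hypothetical example with c_- < c_+
c = c_minus / c_plus
b = beta(alpha, c_minus, c_plus)
assert alpha - 1 < b < 1                  # beta lies in (alpha-1, 1)
assert abs(h_plus(b, alpha, c)) < 1e-9    # beta is the root of h_+

# sign pattern of Lemma 5 on a grid over (alpha-1, alpha)
grid = [alpha - 1 + k / 200 for k in range(1, 200)]
assert all(h_minus(g, alpha, c) > 0 for g in grid)             # k_- > 0 throughout
assert all(h_plus(g, alpha, c) < 0 for g in grid if g < b - 1e-6)
assert all(h_plus(g, alpha, c) > 0 for g in grid if b + 1e-6 < g < 1)
```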

We are now ready to prove the power decomposition theorem. These results generalize the works of Salminen and Yor [23] and of Engelbert and Kurenok [2]. The proof of the decomposition uses the Tanaka formula for asymmetric stable processes (3.6) and relies on the representation of the infinitesimal generator of a power function given in Lemma 2. Note that in [23] it was easy to find the measure which recovers the power decomposition in the symmetric case; for the generalization we made direct use of fractional calculus to find the relevant measure needed for the asymmetric case.

Proof of Theorem 3

Recall from Lemma 2 that, for \(f(y)=|y|^{\gamma }\), the image of \(f\) under the infinitesimal generator can be identified with the measure:

$$\begin{aligned} \mu (dy) = \left( k_- \left| y\right| ^{\gamma -\alpha } {{\,\mathrm{1{}l}\,}}_{\{y>0\}} + k_+\left| y\right| ^{\gamma -\alpha } {{\,\mathrm{1{}l}\,}}_{\{y<0\}}\right) dy. \end{aligned}$$

Taking the Tanaka formula (3.6) at the level \(a\) and integrating both sides against \(\mu ^x(da)\) (the measure \(\mu \) translated by \(x\)) we have:

$$\begin{aligned}&\int _{-\infty }^{\infty }F\left( X_{t}-a \right) \mu ^x(da) \\&\quad = \int _{-\infty }^{\infty }F\left( X_{0}-a\right) \mu ^x(da) + \int _{-\infty }^{\infty }M_{t}^a(X) \mu ^x(da)+ \int _{-\infty }^{\infty }L_t^a(X)\mu ^x(da). \end{aligned}$$

Note that the representation of \(f\) as a member of the class \({\mathcal {C}}^{\alpha ,c_-,c_+}\) is precisely \(F*\mu \). We will now use a version of Fubini’s theorem for compensated Poisson random measures (see [32, Lemma A.1.2]) and apply it to the small jumps of \(M^a(X)\) above; the integrability assumptions needed to apply it are verified in (3.7) and (3.8) below. Applying Fubini’s theorem, we get

$$\begin{aligned} \left| X_t - x\right| ^{\gamma }= & {} \left| X_0 - x\right| ^{\gamma } + \int _0^t \int _{{\mathbb {R}}_0} \left[ \left| X_{s-} - x + h\right| ^{\gamma } - \left| X_{s-} - x\right| ^{\gamma }\right] {\tilde{N}}(ds,dh) \nonumber \\+ & {} \int _{-\infty }^{\infty } \left| a - x\right| ^{\gamma -\alpha } \left[ k_-{{\,\mathrm{1{}l}\,}}_{\{a>x\}} + k_+{{\,\mathrm{1{}l}\,}}_{\{a<x\}} \right] L_t^a \, da. \end{aligned}$$

Using the occupation formula for the local time, the last integral equals

$$\begin{aligned} \int _0^t \left| X_s - x\right| ^{\gamma -\alpha } \left[ k_-{{\,\mathrm{1{}l}\,}}_{\{X_s>x\}} + k_+{{\,\mathrm{1{}l}\,}}_{\{X_s<x\}} \right] ds. \end{aligned}$$

This finishes the proof, modulo showing that the first integral is a martingale and that Fubini’s theorem is applicable. The proof of the martingale property follows the ideas of [2, Section 3]; incidentally, the same argument justifies the application of Fubini’s theorem above. We can identify two cases depending on the size of the jump:

$$\begin{aligned} M_t^{\gamma }= & {} \int _0^{t} \int _{{\mathbb {R}}_0} \left[ \left| X_{s-} - x + h\right| ^{\gamma } - \left| X_{s-} - x\right| ^{\gamma }\right] {\tilde{N}}(ds,dh) \nonumber \\= & {} M^{1,\gamma }_t + M^{2,\gamma }_t\\:= & {} \int _0^{t} \int _{|h|\le |X_{s-} - x |} \left[ \left| X_{s-} - x + h\right| ^{\gamma } - \left| X_{s-} - x\right| ^{\gamma }\right] {\tilde{N}}(ds,dh)\\{} & {} +\int _0^{t} \int _{|h|> |X_{s-} - x |} \left[ \left| X_{s-} - x + h\right| ^{\gamma } - \left| X_{s-} - x\right| ^{\gamma }\right] {\tilde{N}}(ds,dh). \end{aligned}$$

In order to prove that \(M^{1,\gamma }\) is a square integrable martingale, according to Ikeda and Watanabe ([29], Section II.3) we need to show that:

$$\begin{aligned} m^{1,\gamma }_t:= {\mathbb {E}}\left[ \int _{0}^t \int _{|h|\le |X_{s-} - x |} \left| \left| X_{s-} - x + h\right| ^{\gamma } - \left| X_{s-} - x\right| ^{\gamma } \right| ^2 \, \nu (dh)\, ds \right] < \infty . \nonumber \\ \end{aligned}$$
(3.7)

Take \({{\overline{c}}}=c_- \vee c_+\); then the intensity measure \(\nu _{{{\overline{c}}}}(dh)={{\overline{c}}}|h|^{-\alpha -1}dh\) dominates the intensity measure \(\nu (dh)\) of \(X\), and with the change of variable \(h=(X_{s-} - x)u\) we have:

$$\begin{aligned} m^{1,\gamma }_t\le & {} {\mathbb {E}}\left[ \int _{0}^t \int _{|u|\le 1} |X_{s-}-x|^{2\gamma } \left( \left| 1+u\right| ^{\gamma } - 1 \right) ^2 \frac{{{\overline{c}}}}{|X_{s-}-x|^{\alpha }|u|^{\alpha +1}}\, du \, ds \right] \\= & {} {\mathbb {E}}\left[ \int _{0}^t |X_{s-}-x|^{2\gamma -\alpha }ds \right] \int _{|u|\le 1} \left( \left| 1+u\right| ^{\gamma } - 1 \right) ^2 \frac{{{\overline{c}}}}{|u|^{\alpha +1}}\, du. \end{aligned}$$

Since \(-1<\alpha -2<2\gamma -\alpha <\alpha \), the integral \(\displaystyle {\mathbb {E}}\left[ \int _{0}^t |X_{s-}-x|^{2\gamma -\alpha }\, ds \right] \) is finite for all \(t\ge 0\). It remains to check that the second integral is finite. Consider the auxiliary function \(g(u) = |1+u|^{\gamma }\), and note that for any \(u\in (-1,1)\) we have that \(g(u)=(1+u)^{\gamma }\), which is differentiable. By the mean value theorem we can choose \(u_*\in (-1,0)\) and \(u^*\in (0,1)\) such that:

$$\begin{aligned} g(u)-g(0) = {\left\{ \begin{array}{ll} g^{\prime }(u_*)u &{} -1<u<0,\\ g^{\prime }(u^*)u &{} 0<u<1, \end{array}\right. } \end{aligned}$$

This corresponds to:

$$\begin{aligned} (1+u)^{\gamma }-1 = {\left\{ \begin{array}{ll} \gamma (1+u_*)^{\gamma -1}u &{} -1<u<0,\\ \gamma (1+u^*)^{\gamma -1}u &{} 0<u<1. \end{array}\right. } \end{aligned}$$

We get the following bound for any \(u\in (-1,1)\):

$$\begin{aligned} |(1+u)^{\gamma }-1| \le \gamma c_1(\gamma )|u|, \end{aligned}$$

where \(c_1(\gamma ) = \max ((1+u_*)^{\gamma -1},(1+u^*)^{\gamma -1})\). Then, we have that

$$\begin{aligned} \int _{|u|\le 1} \left( \left| 1+u\right| ^{\gamma } - 1 \right) ^2 \frac{{{\overline{c}}}}{|u|^{\alpha +1}}du\le & {} \gamma ^2 c_1^2(\gamma ) \int _{|u|\le 1} {{\overline{c}}} |u|^{1-\alpha } \, du\\\le & {} \gamma ^2 c_1^2(\gamma ) \frac{2{{\overline{c}}}}{2-\alpha } < \infty . \end{aligned}$$

Thus \(m^{1,\gamma }_t\) is finite for any \(t\ge 0\) and \(M^{1,\gamma }\) is a square integrable martingale.
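The two facts used above, the asymptotic \(\left( \left| 1+u\right| ^{\gamma }-1\right) ^2 \sim \gamma ^2 u^2\) near \(u=0\) and the integrability of \(|u|^{1-\alpha }\) at the origin, can be verified numerically. A minimal sketch with an arbitrary admissible choice of \(\alpha \) and \(\gamma \):

```python
import math

alpha, gamma = 1.5, 1.2   # hypothetical choice: alpha in (1,2), gamma in (alpha-1, alpha)

def integrand(u):
    # ((|1+u|**gamma - 1)**2) / |u|**(alpha+1), computed stably for small |u|
    d = math.expm1(gamma * math.log1p(u))     # (1+u)**gamma - 1 without cancellation
    return d * d / abs(u) ** (alpha + 1)

# near u = 0 the integrand behaves like gamma**2 * |u|**(1 - alpha)
for u in (1e-2, -1e-2, 1e-4, -1e-4, 1e-6):
    ratio = integrand(u) / (gamma ** 2 * abs(u) ** (1 - alpha))
    assert abs(ratio - 1) < 0.05

# and |u|**(1-alpha) is integrable at 0:  int_{|u|<=1} |u|**(1-alpha) du = 2/(2-alpha)
r, n = 1e-4, 400000                           # geometric grid accumulating at 0
approx = 2 * sum(math.exp(-(k + 0.5) * r * (2 - alpha)) * r for k in range(n))
assert abs(approx - 2 / (2 - alpha)) < 1e-3
```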

Now, to prove that \(M^{2,\gamma }\) is a martingale, according to Ikeda and Watanabe ([29], Section II.3) we need to show that:

$$\begin{aligned} m^{2,\gamma }_t:= {\mathbb {E}}\left[ \int _{0}^t \int _{|h|> |X_{s-} - x |} \left| \left| X_{s-} - x + h\right| ^{\gamma } - \left| X_{s-} - x\right| ^{\gamma } \right| \, \nu (dh)\, ds \right] < \infty .\qquad \end{aligned}$$
(3.8)

Similarly, we have:

$$\begin{aligned} m^{2,\gamma }_t\le & {} {\mathbb {E}}\left[ \int _{0}^t \int _{|u|> 1} |X_{s-}-x|^{\gamma } \left| \left| 1+u\right| ^{\gamma } - 1 \right| \frac{{{\overline{c}}}}{|X_{s-}-x|^{\alpha }|u|^{\alpha +1}} \, du \, ds \right] \\= & {} {\mathbb {E}}\left[ \int _{0}^t |X_{s-}-x|^{\gamma -\alpha }ds \right] \int _{|u|> 1} \left| \left| 1+u\right| ^{\gamma } - 1 \right| \frac{{{\overline{c}}}}{|u|^{\alpha +1}}\, du\\< & {} \infty , \end{aligned}$$

since \(\gamma -\alpha \in (-1,0)\), this moment of \(X_t\) is finite for any \(t\ge 0\), so the expectation is finite. To see that the last integral is finite, just note that \(\left| \left| 1+u\right| ^{\gamma } - 1 \right| \) behaves like \(|u|^{\gamma }\) as \(|u|\rightarrow \infty \). Then \(m^{2,\gamma }_t\) is finite for any \(t\ge 0\) and we can conclude that \(M^{2,\gamma }\) is a martingale. This allows us to conclude that \(M^{\gamma } = M^{1,\gamma }+M^{2,\gamma }\) is a martingale. \(\square \)
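The tail behaviour invoked in the last step can likewise be checked numerically: \(\left| \left| 1+u\right| ^{\gamma }-1\right| \sim |u|^{\gamma }\) as \(|u|\rightarrow \infty \), and since \(\gamma <\alpha \) the dominating integral \(2\int _1^{\infty }u^{\gamma -\alpha -1}\,du=2/(\alpha -\gamma )\) is finite. A small sketch (hypothetical parameter values):

```python
import math

alpha, gamma = 1.5, 1.2   # hypothetical choice with gamma < alpha

def diff(u):
    # | |1+u|**gamma - 1 |
    return abs(abs(1 + u) ** gamma - 1)

# behaves like |u|**gamma at infinity, so against |u|**(-alpha-1) the tail
# integrand is ~ |u|**(gamma - alpha - 1), integrable because gamma < alpha
for u in (1e2, -1e2, 1e4, -1e4):
    assert abs(diff(u) / abs(u) ** gamma - 1) < 0.05

# the dominating tail integral: 2 * int_1^inf u**(gamma-alpha-1) du = 2/(alpha-gamma)
r, n = 1e-4, 400000                  # geometric grid toward infinity
tail = 2 * sum(math.exp((gamma - alpha) * (k + 0.5) * r) * r for k in range(n))
assert abs(tail - 2 / (alpha - gamma)) < 1e-2
```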

Finally, we analyze when \(|X-x|^\gamma \) is a submartingale or when its finite variation part can decrease.

Proof of Corollary 1

By Lemma 5 and Theorem 3, the last integral is a non-decreasing process if and only if \(\gamma \in [\beta (a,c),\alpha )\), so that we obtain a Doob-Meyer decomposition for \(|X_t -x|^{\gamma }\). In the other case, \(\gamma \in (\alpha -1,\beta (a,c))\), we only obtain a semimartingale instead of a submartingale. \(\square \)