1 Introduction

In this paper, we study a general class of evolution equations

$$\begin{aligned} u(t)&=u_0+\int _0^t k(t,s)Lu(s)ds,\qquad t>0, \end{aligned}$$
(1.1)

where k is a fairly general memory kernel and L is the infinitesimal generator of a strongly continuous semigroup \((T_t)_{t\geqslant 0}\) acting on some Banach space X. In particular, the operator L may be the generator \(L_0\) of a Markov process \(\xi \) on some state space Q, or \(L:=L_0+b\nabla +V\) for a suitable potential V and drift b. Moreover, L may be the generator of a subordinate semigroup or a Schrödinger type group. This class of evolution equations includes in particular time- and space-fractional heat and Schrödinger type equations as well as equations with generalized time-fractional derivatives of Caputo type (cf. Remarks 1 and 2 below). Such equations are widely discussed in the literature (see, e.g., [2, 10, 14, 18, 19, 24–27] and references therein), in particular in connection with models of anomalous diffusion. We refer to [4] for additional background information and for a detailed discussion of the memory kernels k considered here.

In this paper, we show that the solution operator of equation (1.1) can be written in the form

$$\begin{aligned} \mathrm{Dom}(L)\rightarrow X,\quad u_0\mapsto \int _0^\infty (T_au_0)\, \mathcal {P}_{A(t)}(da) \end{aligned}$$

for a family \((\mathcal {P}_{A(t)})_{t\geqslant 0}\) of probability measures on the positive real line, which depends on k only. We thus consider this representation as a subordination principle associated to the memory kernel k. We state the subordination principle in Section 2 and, in particular, discuss how to obtain stochastic representations of the solution if the operator L is (a Bernstein function of) the infinitesimal generator of a Markov process (plus a potential). The most natural stochastic representations arising from this approach are given in terms of time-changed Markov processes. In Section 3 we explain, however, how to arrive at representations in terms of non-Markovian processes such as generalized grey Brownian motion, or even in terms of solutions of stochastic differential equations driven by more general randomly scaled fractional Brownian motions. Such processes are attractive for modelling since they are self-similar and have stationary increments (cf. [26, 27]). Finally, the proofs are provided in Section 4. While the main results can be considered as generalizations of our previous results in [4] beyond the case of pseudo-differential operators L associated to Lévy processes, the proofs are completely different, relating (an approximate version of) the subordination principle to a family of Volterra equations via the Hille-Phillips functional calculus.

2 Main results

Assumption 1

Let X be a Banach space with a norm \(\Vert \cdot \Vert _X\). Let \((T_t)_{t\geqslant 0}\) be a strongly continuous semigroup on X with generator \((L,\mathrm{Dom}(L))\).

We consider the evolution equation (1.1) with operator L as in Assumption 1, with \(u_0\in \mathrm{Dom}(L)\), \(u\,:\,[0,\infty )\rightarrow X\) and k satisfying the following Assumptions 2 and 3.

Assumption 2

We consider a Borel-measurable kernel \(k\,:\,(0,\infty )\times (0,\infty )\rightarrow {{\mathbb {R}}}\) satisfying the condition: \(\exists \,\alpha ^*\in [0,1)\) and \(\exists \,\varepsilon >0\) such that for each \(T>0\)

$$\begin{aligned} K_T:=\sup \limits _{0<t\leqslant T}t^{\alpha ^*-\frac{1}{1+\varepsilon }}\Vert k(t,\cdot )\Vert _{L^{1+\varepsilon }((0,t))}<\infty . \end{aligned}$$

In order to identify the family of probability measures \((\mathcal {P}_{A(t)})_{t\geqslant 0}\) for the subordination, we specify their Laplace transform in terms of the memory kernel k. To this end we define the function \(\varPhi :[0,\infty )\times \mathbb {C}\rightarrow \mathbb {C}\) via

$$\begin{aligned}&\varPhi (t,\lambda ):=\sum _{n=0}^\infty c_n(t)\lambda ^n, \end{aligned}$$
(2.1)
$$\begin{aligned}&c_0(t):=1\qquad \forall \,\,t\geqslant 0\nonumber \qquad \text { and}\\&c_n(t):=\left\{ \begin{array}{ll} \int _0^t k(t,s)c_{n-1}(s)ds, &{} \quad \forall \,\,t>0,\\ 0, &{} \quad t=0, \end{array}\right. \qquad n\in \mathbb {N}. \end{aligned}$$
(2.2)

It has been shown in [4] that, under Assumption 2, the function \(\varPhi \) is well-defined and, for fixed t, entire in \(\lambda \).
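For concreteness, the recursion (2.1)–(2.2) can be evaluated numerically. The following Python sketch (not taken from the original text; the kernel, the grid and the truncation order are choices made only for illustration) approximates the coefficients \(c_n(t)\) by quadrature for the convolution kernel \(k(t,s)=(t-s)^{\beta -1}/\varGamma (\beta )\) of Remark 2 (i), for which \(c_n(t)=t^{n\beta }/\varGamma (n\beta +1)\) and \(\varPhi (t,\lambda )=E_\beta (\lambda t^{\beta })\) are known in closed form, and compares the truncated series (2.1) with the Mittag-Leffler value.

```python
import math
import numpy as np

# Approximate the coefficients c_n(t) of (2.1)-(2.2) by quadrature for the convolution kernel
# k(t,s) = (t-s)^(beta-1)/Gamma(beta), for which c_n(t) = t^(n*beta)/Gamma(n*beta+1) and
# Phi(t,lambda) = E_beta(lambda*t^beta).  Grid size and truncation order are illustrative choices.
beta = 0.6
k = lambda t, s: (t - s) ** (beta - 1) / math.gamma(beta)

def coefficients_at(t, n_max, n_grid=2000):
    """Values c_0(t), ..., c_{n_max}(t), computed recursively on a uniform grid of [0, t]."""
    s = np.linspace(0.0, t, n_grid + 1)
    ds = t / n_grid
    prev = np.ones_like(s)                     # c_0 = 1
    out = [1.0]
    for n in range(1, n_max + 1):
        cur = np.zeros_like(s)
        for i in range(1, len(s)):
            # c_n(s_i) = int_0^{s_i} k(s_i, r) c_{n-1}(r) dr; trapezoidal rule on [0, s_{i-1}],
            # and the weakly singular last subinterval integrated against a frozen c_{n-1}.
            y = k(s[i], s[:i]) * prev[:i]
            cur[i] = ds * (y.sum() - 0.5 * (y[0] + y[-1])) + prev[i] * ds ** beta / math.gamma(beta + 1)
        out.append(cur[-1])
        prev = cur
    return out

t, lam, n_max = 1.3, -0.8, 20
c = coefficients_at(t, n_max)
phi_series = sum(cn * lam ** n for n, cn in enumerate(c))          # truncated series (2.1)
phi_exact = sum((lam * t ** beta) ** n / math.gamma(n * beta + 1) for n in range(n_max + 1))
print(phi_series, phi_exact)                   # should agree to a few decimal places
```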

Assumption 3

Let the function \(\varPhi \) be constructed from the kernel k via formulas (2.1), (2.2). We assume that the restriction of the function \(\varPhi (t,-\cdot )\) to \((0,\infty )\) is completely monotone for all \(t\geqslant 0\), i.e., for each \(t\geqslant 0\), there exists a nonnegative random variable A(t) whose distribution \(\mathcal {P}_{A(t)}\) has the Laplace transform given by \(\varPhi (t,-\cdot )\):

$$\begin{aligned} \int _0^\infty e^{-\lambda a}\mathcal {P}_{A(t)}(da)=\varPhi (t,-\lambda ),\qquad \forall \,\lambda \in {{\mathbb {C}}},\quad \Re \lambda \geqslant 0. \end{aligned}$$
(2.3)

Note that \(\mathcal {P}_{A(0)}=\delta _0\) and \(A(0)=0\) a.s. since \(\varPhi (0,-\lambda )\equiv 1\).

Typical examples of kernels k satisfying Assumptions 2 and 3 are kernels of convolution type and homogeneous kernels related to operators of generalized fractional calculus (cf. [4]). Recall that a kernel k is homogeneous of degree \(\theta -1\) for some \(\theta >0\) if \(k(t,ts)=t^{\theta -1}k(1,s)\), \(t\in (0,\infty )\), \(s\in (0,1)\).

Theorem 1

Let Assumption 1 hold. Let k satisfy Assumption 2 and assume that the corresponding function \(\varPhi \) satisfies Assumption 3. Then:

(i) For each \(t\geqslant 0\), the operator \(\varPhi (t,L)\) given by the Bochner integral

$$\begin{aligned} \varPhi (t,L)\varphi :=\int _0^\infty T_a\varphi \,\mathcal {P}_{A(t)}(da),\qquad \varphi \in X, \end{aligned}$$
(2.4)

is well defined, and it is a bounded linear operator on X.

(ii) For each \(t>0\) and each \(u_0\in \mathrm{Dom}(L)\), the function

$$\begin{aligned} u(t):=\varPhi (t,L)u_0 \end{aligned}$$
(2.5)

solves equation (1.1), and \(\lim _{t\searrow 0} u(t)=u_0\) holds.

(iii) Suppose additionally that k is homogeneous of order \(\theta -1\) for some \(\theta >0\). Then one can choose \(A(t):=At^\theta \) in (2.4), where A is a nonnegative random variable such that

$$\begin{aligned} \int _0^\infty e^{-\lambda a} \mathcal {P}_{A}(da) = \varPhi (1,-\lambda ), \qquad \forall \lambda \in \mathbb {C},\quad \Re \lambda \geqslant 0. \end{aligned}$$
(2.6)
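For a bounded operator L the series (2.1) applied to L converges and coincides with (2.4), so Theorem 1 (ii) can be checked directly in finite dimensions. The sketch below (our own illustration; the matrix, the kernel \(k(t,s)=(t-s)^{\beta -1}/\varGamma (\beta )\) from Remark 2 (i) and all numerical parameters are assumptions made only for this example) verifies numerically that \(u(t)=\varPhi (t,L)u_0\) satisfies the Volterra equation (1.1).

```python
import math
import numpy as np

# Finite-dimensional check of Theorem 1 (ii): X = R^3, L a bounded Markov generator and
# k(t,s) = (t-s)^(beta-1)/Gamma(beta), so that Phi(t,lambda) = E_beta(lambda*t^beta) and
# Phi(t,L) is a matrix Mittag-Leffler function.  All concrete numbers are illustrative choices.
beta = 0.7
L = np.array([[-1.0, 0.6, 0.4],
              [0.3, -0.5, 0.2],
              [0.5, 0.5, -1.0]])              # rows sum to zero: generator of a 3-state chain
u0 = np.array([1.0, 0.0, 0.0])

def phi_of_L(t, n_terms=80):
    """Phi(t,L) = sum_n c_n(t) L^n with c_n(t) = t^(n*beta)/Gamma(n*beta+1)."""
    out, power = np.zeros_like(L), np.eye(3)
    for n in range(n_terms):
        out = out + power / math.gamma(n * beta + 1.0)
        power = power @ (t ** beta * L)
    return out

t = 2.0
u = lambda s: phi_of_L(s) @ u0
# right-hand side of (1.1) by midpoint quadrature (the kernel is weakly singular at s = t)
n_mid = 1000
mid = (np.arange(n_mid) + 0.5) * t / n_mid
rhs = u0 + sum((t - s) ** (beta - 1.0) / math.gamma(beta) * (L @ u(s)) * (t / n_mid) for s in mid)
print(np.max(np.abs(u(t) - rhs)))             # small, up to the quadrature error near s = t
```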

We next wish to apply the semigroup \((T_t)_{t\geqslant 0}\) associated to a generator L in order to represent the solution of the evolution equation with memory kernel k and the (space-)fractional operator \(-(-L)^\gamma \). We use subordination in the sense of Bochner [5, 30], which is a random time-change of a given process \((\xi _t)_{t\geqslant 0}\) by an independent subordinator, i.e. a 1-dimensional increasing Lévy process (with killing) \((\eta ^f_t)_{t\geqslant 0}\). Any subordinator can be characterized in terms of its Laplace exponent f: \(\mathbb {E}\left[ e^{-\lambda \eta ^f_t} \right] =e^{-tf(\lambda )}\); any such f is a Bernstein function and is determined uniquely by its Lévy-Khintchine representation \(f(\lambda )=a+b\lambda +\int _{(0,\infty )}\left( 1-e^{-\lambda s} \right) \nu (ds)\), where \(a,b\geqslant 0\) and \(\nu \) is a measure on \((0,\infty )\) satisfying \(\int _{(0,\infty )}\min (s,1)\nu (ds)<\infty \). Let \((T_t)_{t\geqslant 0}\) be as in Assumption 1 and additionally a contraction semigroup. The family of operators \((T^f_t)_{t\geqslant 0}\) defined by the Bochner integral

$$\begin{aligned} T^f_t\varphi :=\int _0^\infty T_s\varphi \,\mathcal {P}_{\eta ^f_t}(ds),\qquad \varphi \in X, \end{aligned}$$

is said to be subordinate to \((T_t)_{t\geqslant 0}\) with respect to the convolution semigroup of measures \(\left( \mathcal {P}_{\eta ^f_t} \right) _{t\geqslant 0}\), where \(\mathcal {P}_{\eta ^f_t}\) is the distribution of \(\eta ^f_t\). The family \((T^f_t)_{t\geqslant 0}\) is again a strongly continuous contraction semigroup on the space X whose generator \((L^f,\mathrm{Dom}(L^f))\) is the closure of the operator \((-f(-L),\mathrm{Dom}(L))\), where

$$\begin{aligned} -f(-L)\varphi :=-a\varphi +bL\varphi +\int _{(0,\infty )}\left( T_s\varphi -\varphi \right) \nu (ds),\qquad \varphi \in \mathrm{Dom}(L). \end{aligned}$$

If \((T_t)_{t\geqslant 0}\) is the transition semigroup of a Feller process \((\xi _t)_{t\geqslant 0}\) and \((\eta ^f_t)_{t\geqslant 0}\) is an independent subordinator, then \((T^f_t)_{t\geqslant 0}\) is the transition semigroup of the (again Feller) process \((\xi _{\eta ^f_t})_{t\geqslant 0}\). Further information on subordination in the sense of Bochner and all related objects can be found e.g. in [31].
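As a numerical illustration of subordination in the sense of Bochner (again our own sketch, not part of the text): for a matrix generator L and \(f(\lambda ):=\lambda ^\gamma \), the subordinate semigroup can be computed both from its generator \(-(-L)^\gamma \) and, via Monte Carlo, from the defining integral over the law of \(\eta ^f_t\). The Kanter/Chambers-Mallows-Stuck sampler and scipy's fractional_matrix_power used below are standard tools, not taken from this paper.

```python
import numpy as np
from scipy.linalg import expm, fractional_matrix_power

# Illustration of Bochner subordination with f(lambda) = lambda^gamma: compare the subordinate
# semigroup computed from its generator -(-L)^gamma with a Monte Carlo average of T_s over the
# law of eta^f_t.  L is a small killed Markov generator (so that -L is invertible); all concrete
# values are our own choices for this sketch.
rng = np.random.default_rng(0)
gamma = 0.6
Q = np.array([[-1.0, 0.7, 0.3],
              [0.4, -0.9, 0.5],
              [0.2, 0.8, -1.0]])
L = Q - 0.2 * np.eye(3)                       # killing rate 0.2
phi = np.array([1.0, -0.5, 2.0])

def stable_positive(g, size):
    """Kanter/Chambers-Mallows-Stuck sampler: S >= 0 with E[exp(-lam*S)] = exp(-lam**g)."""
    U = rng.uniform(0.0, np.pi, size)
    E = rng.exponential(1.0, size)
    return (np.sin(g * U) / np.sin(U) ** (1.0 / g)) * (np.sin((1.0 - g) * U) / E) ** ((1.0 - g) / g)

t = 1.5
direct = expm(-t * fractional_matrix_power(-L, gamma)) @ phi        # T^f_t phi via the generator
eta = t ** (1.0 / gamma) * stable_positive(gamma, 10000)            # eta^f_t = t^(1/gamma)*eta^f_1 in law
mc = np.mean([expm(s * L) @ phi for s in eta], axis=0)              # T^f_t phi via the defining integral
print(direct, mc)                                                   # agree up to Monte Carlo error
```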

Consider now the function \(\varPhi ^f(t,-\cdot ):=\varPhi (t,-f(\cdot ))\). If the function \(\varPhi (t,-\cdot ) \) is completely monotone, so is the function \(\varPhi ^f(t,-\cdot )\), being the composition of the completely monotone function \(\varPhi (t,-\cdot )\) with the Bernstein function f. Hence there exists a family of nonnegative random variables whose Laplace transforms are given by \(\varPhi ^f(t,-\cdot )\), \(t\geqslant 0\). Using the distributions of these random variables and a strongly continuous contraction semigroup \((T_t)_{t\geqslant 0}\) with generator \((L,\mathrm{Dom}(L))\), one can define the operator \(\varPhi ^f(t,L)\) analogously to (2.4).

Corollary 1

Let Assumption 1 hold and \((T_t)_{t\geqslant 0}\) be a contraction semigroup. Let k satisfy Assumption 2 and the corresponding function \(\varPhi \) satisfy Assumption 3. Let \((A(t))_{t\geqslant 0}\) be a family of nonnegative random variables satisfying (2.3). Let \((\eta ^f_t)_{t\geqslant 0}\) be a subordinator corresponding to a Bernstein function f which is independent from \((A(t))_{t\geqslant 0}\). Then

$$\begin{aligned} \varPhi ^f(t,L)\varphi =\int _0^\infty T_s\varphi \,\mathcal {P}_{\eta ^f_{A(t)}}(ds)=\varPhi (t,L^f)\varphi ,\qquad \varphi \in X. \end{aligned}$$
(2.7)

Moreover, for each \(t>0\) and each \(u_0\in \mathrm{Dom}(L^f)\), the function \(u(t):=\varPhi ^f(t,L)u_0\) solves the evolution equation

$$\begin{aligned} u(t)&=u_0+\int _0^t k(t,s)L^f u(s)ds,\qquad t>0,\nonumber \\ \lim _{t \searrow 0} u(t)&= u_0. \end{aligned}$$
(2.8)

If the semigroup \((T_t)_{t\geqslant 0}\) has a stochastic representation, then, due to (2.7), the family \((\varPhi ^f(t,L))_{t\geqslant 0}\) has a stochastic representation as well.

Example 1

Let Q be a separable completely metrizable topological space endowed with a Borel \(\sigma \)-field \(\mathcal {B}(Q)\). Let \((\varOmega ,{\mathcal {F}},\mathbb {P}^x,(\xi _t)_{t\geqslant 0})_{x\in Q}\) be a (universal) Markov process with state space \((Q,\mathcal {B}(Q))\). Assume that the corresponding transition semigroup \((T^0_t)_{t\geqslant 0}\), \( T^0_t u_0(x):=\mathbb {E}^x\left[ u_0(\xi _t) \right] , \) is a strongly continuous semigroup on some Banach space \(X\subset B_b(Q)\) (where \(B_b(Q)\) is the space of all bounded Borel measurable functions on Q). Let \((L_0,\mathrm{Dom}(L_0))\) be the generator of \((T^0_t)_{t\geqslant 0}\). Let \(V\,:\,Q\rightarrow (-\infty ,0]\) be a Borel measurable function such that the (closure of the) operator \((L_0+V,\mathrm{Dom}(L_0+V))\) generates a strongly continuous semigroup \((T_t)_{t\geqslant 0}\) on X with stochastic representation

$$\begin{aligned} T_tu_0(x):=\mathbb {E}^x\left[ u_0(\xi _t)\exp \left( \int _0^tV(\xi _s)ds \right) \right] ,\quad t\geqslant 0,\,\, x\in Q,\,\, u_0\in X. \end{aligned}$$
(2.9)

Note that (2.9) is the classical Feynman-Kac formula, which holds under very mild assumptions on processes and potentials, cf., e.g., [6, 7, 21]. Let the assumptions of Corollary 1 hold and let \((\xi _t)_{t\geqslant 0}\) be independent from \((A(t))_{t\geqslant 0}\) and \((\eta ^f_t)_{t\geqslant 0}\). Then for \(u_0\in \mathrm{Dom}((L_0+V)^f)\) the function

$$\begin{aligned} u(t,x):=\mathbb {E}^x\left[ u_0\left( \xi _{\eta ^f_{A(t)}} \right) e^{\int _0^{\eta ^f_{A(t)}} V(\xi _s)ds} \right] \end{aligned}$$
(2.10)

solves the evolution equation

$$\begin{aligned} u(t,x)=u_0(x)+\int _0^t k(t,s)\left( L_0+V\right) ^fu(s,x)ds. \end{aligned}$$
(2.11)

Suppose additionally that k is homogeneous of order \(\theta -1\) for some \(\theta >0\). Let A be a nonnegative random variable which satisfies (2.6) and is independent from \((\eta ^f_t)_{t\geqslant 0}\) and \((\xi _t)_{t\geqslant 0}\). Then we can take \(A(t):=At^\theta \) in (2.10).

Remark 1

Theorem 1 (and Corollary 1) can be applied also to generalized time-fractional Schrödinger type equations. Note that different types of fractional analogues of the standard Schrödinger equation have been discussed in the literature, see, e.g., [2, 10, 14]. Such equations seem to be physically relevant; in particular, some of them arise from the standard quantum dynamics under special geometric constraints [19, 29]. So, let \(X:=L^2({{\mathbb {R}}}^d)\) be the Hilbert space of complex-valued square integrable functions; X plays the role of the state space of a quantum system. Let \((\mathcal {H},\mathrm{Dom}(\mathcal {H}))\) be a (bounded from below) self-adjoint operator in X playing the role of the Hamiltonian (energy operator) of this quantum system. Then \((L,\mathrm{Dom}(L)):=(-i\mathcal {H},\mathrm{Dom}(\mathcal {H}))\) does generate a strongly continuous contraction semigroup \((T^\mathcal {H}_t)_{t\geqslant 0}\) on X by Stone's theorem. Let k, \(\varPhi \), \((A(t))_{t\geqslant 0}\) be as in Theorem 1. Then, by Theorem 1,

$$\begin{aligned} u(t,x):=\mathbb {E}\left[ T^\mathcal {H}_{A(t)}u_0(x) \right] \end{aligned}$$
(2.12)

solves the generalized time-fractional Schrödinger type equation

$$\begin{aligned} u(t,x)=u_0(x)-i\int _0^t k(t,s)\mathcal {H}u(s,x)ds, \end{aligned}$$
(2.13)

where the equality above is understood as the equality of two elements of the space X. For a few particular choices of the Hamiltonian, some stochastic representations of the corresponding semigroup \((T^\mathcal {H}_t)_{t\geqslant 0}\) are known in the literature (see, e.g., [8, 9, 18]). Inserting these stochastic representations into (2.12), one obtains Feynman-Kac formulae (which may be local in the space variables) for the corresponding generalized time-fractional Schrödinger type equation (2.13).
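A minimal numerical sketch of (2.12) for the free Hamiltonian \(\mathcal {H}=-\frac{1}{2}\varDelta \) on \(L^2({{\mathbb {R}}})\) is given below (our own illustration; the GGBM-type choice \(A(t)=A_\beta t^\alpha \) from Example 2 is an assumption, and the representation of the Mittag-Leffler law as a negative power of a positive stable variable is a standard fact not taken from this remark). The unitary group is applied spectrally via the FFT, and one Fourier mode is compared with the value \(E_\beta (-i\xi ^2 t^\alpha /2)\,\hat{u}_0(\xi )\) predicted by the subordination formula.

```python
import math
import numpy as np

# Sketch of (2.12) for the free Hamiltonian H = -(1/2)*Laplacian on L^2(R).  The unitary
# group is applied in Fourier space; the outer expectation over A(t) = A_beta * t^alpha
# (the GGBM-type choice of Example 2) is done by Monte Carlo.  Sampling A_beta as S**(-beta),
# with S a standard positive beta-stable variable, is a standard fact, not taken from the paper.
rng = np.random.default_rng(1)
alpha, beta, t = 1.0, 0.8, 0.7

def mittag_leffler_rv(b, size):
    """A >= 0 with E[exp(-lam*A)] = E_b(-lam), sampled as a negative power of a stable variable."""
    U, E = rng.uniform(0.0, np.pi, size), rng.exponential(1.0, size)
    S = (np.sin(b * U) / np.sin(U) ** (1 / b)) * (np.sin((1 - b) * U) / E) ** ((1 - b) / b)
    return S ** (-b)

n, x_max = 2048, 40.0
x = np.linspace(-x_max, x_max, n, endpoint=False)
xi = 2 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])
u0 = np.exp(-(x - 2.0) ** 2)                          # a Gaussian wave packet
u0_hat = np.fft.fft(u0)

def free_evolution(s):
    """T^H_s u0 = exp(-i*s*H) u0 for H = -(1/2) d^2/dx^2, computed spectrally."""
    return np.fft.ifft(np.exp(-1j * s * xi ** 2 / 2) * u0_hat)

A = mittag_leffler_rv(beta, 4000) * t ** alpha        # samples of A(t) = A_beta * t^alpha
u = np.mean([free_evolution(a) for a in A], axis=0)   # Monte Carlo version of (2.12)

# check one Fourier mode: subordination predicts u_hat(t,xi) = E_beta(-i*xi^2*t^alpha/2)*u0_hat(xi)
j = 37
z = -1j * xi[j] ** 2 * t ** alpha / 2
ml = sum(z ** m / math.gamma(beta * m + 1) for m in range(80))
print(np.fft.fft(u)[j], ml * u0_hat[j])               # close up to Monte Carlo error
```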

Remark 2

(i) Consider \(k(t,s):={\mathfrak {K}}(t-s)\), where \({\mathfrak {K}}\) is such that its Laplace transform satisfies \(\mathcal {L}[{\mathfrak {K}}]=1/h\) for some Bernstein function h. Then this k satisfies Assumptions 2 and 3. In this case one may take as \((A(t))_{t\geqslant 0}\) the inverse subordinator corresponding to h (cf. Sec. 2.3 of [4]). Moreover, evolution equation (1.1) is then equivalent (as can be shown by applying the Laplace transform with respect to the time variable to both equations) to the Cauchy problem

$$\begin{aligned} \mathcal {D}^h_t u(t,x)=Lu(t,x),\qquad u(0,x)=u_0(x),\qquad x\in {{\mathbb {R}}}^d,\quad t>0, \end{aligned}$$
(2.14)

where \(\mathcal {D}^h_t\) is a generalized time-fractional derivative of Caputo type, which is defined (for sufficiently good functions \(v\,:\,(0,\infty )\rightarrow {{\mathbb {R}}}\) of time variable t) via the Laplace transform (cf. [1]) by

$$\begin{aligned} \left( \mathcal {L}\left[ \mathcal {D}^h_t v\right] \right) (\sigma )=h(\sigma )(\mathcal {L}v)(\sigma )-\frac{h(\sigma )}{\sigma }v(+0). \end{aligned}$$

Therefore, the results of Theorem 1 and Corollary 1 provide solutions for evolution equations of the form (2.14) with generalized time-fractional derivatives of Caputo type \(\mathcal {D}^h_t\). In the case \(h(\sigma ):=\sigma ^\beta \), \(\beta \in (0,1)\), the generalized time-fractional derivative \(\mathcal {D}^h_t\) coincides with the Caputo derivative of order \(\beta \). The kernel \({\mathfrak {K}}_1\) below corresponds to a mixture of Caputo time-fractional derivatives of orders \(\beta \), \(\beta _1,\ldots ,\beta _m\). In the case of the Bernstein function \(h(\sigma ):=\int _0^1 \sigma ^\beta \mu (d\beta )\) with a finite Borel measure \(\mu \) concentrated on the interval \((0,1)\), the corresponding derivative \(\mathcal {D}^h_t\) is known as the distributed order fractional derivative.

(ii) Let us mention the following functions \({\mathfrak {K}}_1\) and \({\mathfrak {K}}_2\) providing kernels k as in part (i) (cf. [3]): for \(1\geqslant \beta>\beta _1>\ldots>\beta _m>0\), \(b_j>0\), \(j=1,\ldots ,m\)

$$\begin{aligned} {\mathfrak {K}}_1(t):=\frac{t^{\beta -1}}{\varGamma (\beta )}+\sum _{j=1} ^mb_j\frac{t^{\beta _j-1}}{\varGamma (\beta _j)} \end{aligned}$$

with the corresponding Bernstein function \(h_1(\sigma ):=\left( \sigma ^{-\beta }+\sum _{j=1}^m b_j\sigma ^{-\beta _j} \right) ^{-1}\) and

$$\begin{aligned} {\mathfrak {K}}_2(t):=t^{\beta -1}{E}_{(\beta -\beta _1,\ldots ,\beta -\beta _m),\beta }\left( -b_1 t^{\beta -\beta _1},\ldots ,-b_m t^{\beta -\beta _m} \right) \end{aligned}$$

with multinomial Mittag-Leffler function [13, 17] (for \(z_j\in {{\mathbb {C}}}\), \(\beta \in {{\mathbb {R}}}\), \(\alpha _j>0\), \(j=1,\ldots ,m\))

$$\begin{aligned}&{E}_{(\alpha _1,\ldots ,\alpha _m),\beta }(z_1,\ldots ,z_m):=\\&\qquad \sum _{n=0}^\infty \sum _{\begin{array}{c} n_1+\ldots +n_m=n\\ n_1\in \mathbb {N}_0,\ldots ,n_m\in \mathbb {N}_0 \end{array}} \frac{n!}{n_1!\cdots n_m!}\frac{\prod _{j=1}^m z_j^{n_j}}{\varGamma \left( \beta +\sum _{j=1}^m \alpha _jn_j \right) }. \end{aligned}$$

The kernel \({\mathfrak {K}}_2\) corresponds to the Bernstein function \(h_2(\sigma ):=\sigma ^{\beta }+\sum _{j=1}^mb_j\sigma ^{\beta _j}\). The corresponding functions \(\varPhi _1(t,-\lambda )\) and \(\varPhi _2(t,-\lambda )\) are found in [3] in terms of the multinomial Mittag-Leffler function:

$$\begin{aligned}&\varPhi _1(t,-\lambda ):= {E}_{(\beta ,\beta _1,\ldots ,\beta _m),1}\left( -\lambda t^\beta ,-\lambda t^{\beta _1},\ldots ,-\lambda t^{\beta _m} \right) ,\\&\varPhi _2(t,-\lambda ):=1-\lambda t^{\beta }E_{(\beta ,\beta -\beta _1,\ldots ,\beta -\beta _m),\beta +1}\left( -\lambda t^\beta ,-\lambda t^{\beta _1},\ldots ,-\lambda t^{\beta _m} \right) . \end{aligned}$$
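The multinomial Mittag-Leffler series can be summed directly after truncation. The sketch below (our own; the truncation order and the sample arguments are arbitrary) evaluates it and checks the single-index case against \(E_{(1),1}(z)=e^z\).

```python
import math
from itertools import product

def multinomial_ml(alphas, beta, zs, n_max=60):
    """Truncated multinomial Mittag-Leffler series E_{(alpha_1,...,alpha_m),beta}(z_1,...,z_m)."""
    total = 0.0
    for ns in product(range(n_max + 1), repeat=len(alphas)):
        n = sum(ns)
        if n > n_max:
            continue
        coeff = math.factorial(n)
        for nj in ns:
            coeff //= math.factorial(nj)       # multinomial coefficient n!/(n_1!...n_m!)
        term = float(coeff)
        for zj, nj in zip(zs, ns):
            term *= zj ** nj
        total += term / math.gamma(beta + sum(a * nj for a, nj in zip(alphas, ns)))
    return total

# single-index sanity check: E_{(1),1}(z) = exp(z)
print(multinomial_ml([1.0], 1.0, [0.5]), math.exp(0.5))
# a two-index value of the kind entering Phi_1 and Phi_2 above
print(multinomial_ml([0.9, 0.5], 1.0, [-0.7, -0.3]))
```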

3 Feynman-Kac formulae with randomly scaled Gaussian processes

Due to Corollary 1, the most natural stochastic representations for evolution equations of the form (1.1) with L being (a Bernstein function of) the generator of a Markov process (plus a potential term) are given in terms of time-changed Markov processes. In the special case when the memory kernel k is homogeneous, one may sometimes use randomly scaled Gaussian processes in the obtained stochastic representations. To this end, we first recall a class of memory kernels k from [4] which satisfy Assumptions 2 and 3 and are homogeneous.

Example 2

Let \(b>0\), \(a\geqslant b\), \(\mu \geqslant \frac{b}{a}-1\), \(\nu >\max \left\{ a-b,-a\mu \right\} \). Consider the Marichev-Saigo-Maeda kernel (cf. Sec. 4 in [4])

$$\begin{aligned} k(t,s):=\frac{at^{a-\nu }s^{\nu -1}(t^a-s^a)^{\frac{b}{a}-1}}{\varGamma \left( \frac{b}{a}\right) } F_3\left( \frac{\nu }{a}-1,\frac{b}{a},1,\mu , \frac{b}{a}, 1-\left( \frac{s}{t}\right) ^{a},1-\left( \frac{t}{s}\right) ^{\!a}\right) \!\!, \end{aligned}$$
(3.1)

where \(0<s<t\) and \(F_3\) is Appell’s third generalization of the Gauss hypergeometric function. The kernel k is homogeneous of degree \(b-1\) and satisfies Assumptions 23. The corresponding function \( \varPhi \) has the following form: \( \varPhi (t,\lambda )= \varGamma (q_2) E_{q_1,q_2}^{q_3}(\lambda t^b)\), where \(q_1:=\frac{b}{a}\), \( q_2:=\frac{\nu }{a}+\mu \), \(q_3:= 1+\frac{\nu -a}{b}\), and \(E_{q_1,q_2}^{q_3}\) is the three parameter Mittag-Leffler function \( E_{q_1,q_2}^{q_3}(\lambda ):=\sum _{n=0}^\infty \frac{\left( q_3\right) _n}{\varGamma \left( q_1 n+q_2\right) n!}\,\lambda ^n. \) As corresponding random variables \((A(t))_{t\geqslant 0}\) one may take \(A(t):=A_{b,a,\mu ,\nu }t^b\), where \(A_{b,a,\mu ,\nu }\) is a non-negative random variable whose Laplace transform is given by \(\varGamma (q_2) E_{q_1,q_2}^{q_3}(-\lambda )\). In the special case \(\mu :=0\), \(b:=\alpha \), \(a:=\frac{\alpha }{\beta }\), \(\nu :=a\) for some \(\alpha \in (0,2)\), \(\beta \in (0,1]\), the Marichev-Saigo-Maeda kernel (3.1) reduces to the kernel which appears in the governing equation for generalized grey Brownian motion:

$$\begin{aligned} k(t,s):=\frac{\alpha }{\beta \varGamma (\beta )}s^{\frac{\alpha }{\beta }-1}\left( t^{\frac{\alpha }{\beta }}-s^{\frac{\alpha }{\beta }} \right) ^{\beta -1},\qquad \beta \in (0,1],\quad \alpha \in (0,2). \end{aligned}$$
(3.2)

The corresponding function \(\varPhi \) reduces to the classical Mittag-Leffler function: \(\varPhi (t,\lambda )=E_\beta (\lambda t^\alpha )\). As the corresponding random variables \((A(t))_{t\geqslant 0}\) one may take \(A(t):=A_{\beta }t^\alpha \), where \(A_{\beta }\) is a non-negative random variable with Laplace transform \(E_\beta (-\cdot )\).
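The three-parameter Mittag-Leffler function is again a rapidly convergent power series; the sketch below (our own, with an arbitrary truncation order) evaluates it and checks the reduction to the classical Mittag-Leffler function in the special case \(\mu =0\), \(b=\alpha \), \(a=\alpha /\beta \), \(\nu =a\) mentioned above, where \(q_1=\beta \) and \(q_2=q_3=1\).

```python
import math

def ml_three_param(q1, q2, q3, z, n_max=80):
    """Truncated three-parameter Mittag-Leffler series: sum_n (q3)_n z^n / (Gamma(q1*n+q2)*n!)."""
    total, poch = 0.0, 1.0                     # poch = rising factorial (q3)_n
    for n in range(n_max + 1):
        total += poch * z ** n / (math.gamma(q1 * n + q2) * math.factorial(n))
        poch *= q3 + n
    return total

def ml_classical(b, z, n_max=80):
    return sum(z ** n / math.gamma(b * n + 1) for n in range(n_max + 1))

# special case mu = 0, b = alpha, a = alpha/beta, nu = a: q1 = beta, q2 = q3 = 1 and
# Gamma(q2) * E^{q3}_{q1,q2}(z) reduces to the classical Mittag-Leffler function E_beta(z)
beta, lam = 0.7, 0.9
print(math.gamma(1.0) * ml_three_param(beta, 1.0, 1.0, -lam), ml_classical(beta, -lam))
```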

Let us now present some Feynman-Kac formulae for evolution equations of type (1.1) with a homogeneous kernel k, based on randomly scaled Gaussian processes.

Example 3

(i) Under the assumptions of Corollary 1 consider the Bernstein function \(f(\lambda ):=\lambda ^\gamma \), \(\gamma \in (0,1]\). Then, in the case \(\gamma \in (0,1)\), the operator \(L^f\) is the fractional power of the operator L, i.e. \(L^f=-(-L)^\gamma \) (cf. [31]), and \((\eta ^f_t)_{t\geqslant 0}\) is a \(\gamma \)-stable subordinator. In the case \(\gamma =1\), we take \(\eta ^f_t:=t\), \(t\geqslant 0\). Let k be homogeneous of degree \(\theta -1\) for some \(\theta >0\) and take \(A(t)=At^\theta \) according to Corollary 1 and Theorem 1 (iii). Then the random variable \(\eta ^f_{A(t)}\) has the same distribution as \(A^{1/\gamma }\eta ^f_1 t^{\theta /\gamma }\). We may replace the “subordinator” \(\left( \eta ^f_{A(t)}\right) _{t\geqslant 0}\) in (2.10) by a new “subordinator” \(\left( \mathcal {A}t^{\theta /\gamma }\right) _{t\geqslant 0}\) with

$$\begin{aligned} \mathcal {A}:=A^{1/\gamma }\eta ^f_1. \end{aligned}$$
(3.3)

This allows us to split randomness and time dependence in the random time-change. Thus, we obtain the following Feynman-Kac formula

$$\begin{aligned} u(t,x):&=\mathbb {E}^x\left[ u_0\left( \xi _{\mathcal {A}t^{\theta /\gamma }} \right) \exp \left( \int _0^{\mathcal {A}t^{\theta /\gamma }} V(\xi _s)ds\right) \right] \nonumber \\&=\mathbb {E}^x\left[ u_0\left( \xi _{\mathcal {A}t^{\theta /\gamma }} \right) \exp \left( {\mathcal {A}\frac{\theta }{\gamma }\int _0^{t} s^{\frac{\theta }{\gamma }-1} V(\xi _{\mathcal {A}s^{\theta /\gamma }})ds}\right) \right] \end{aligned}$$
(3.4)

for the evolution equation

$$\begin{aligned} u(t,x)=u_0(x)-\int _0^t k(t,s) \left( -L_0-V\right) ^\gamma u(s,x)ds. \end{aligned}$$

(ii) Let k, A, \((\eta ^f_t)_{t\geqslant 0}\) and \(\mathcal {A}\) be as in part (i) of this example. Let \(V:=c\) for some \(c\leqslant 0\), \(\xi _t:=x+B_t+wt\) under \(\mathbb {P}^x\), where \((B_t)_{t\geqslant 0}\) is a standard d-dimensional Brownian motion, which is independent from A and \((\eta ^f_t)_{t\geqslant 0}\), and \(w\in {{\mathbb {R}}}^d\) is some fixed vector. Let \(X^{\mathcal {A},\gamma ,\theta }_t:=B_{\mathcal {A} t^{\theta /\gamma }}\) or \(X^{\mathcal {A},\gamma ,\theta }_t:=\sqrt{\mathcal {A}} B_{ t^{\theta /\gamma }}\), or, if \(H:=\frac{\theta }{2\gamma }\in (0,1)\), \(X^{\mathcal {A},\gamma ,\theta }_t:=\sqrt{\mathcal {A}} B^H_{ t}\), where \(\left( B^{H}_t \right) _{t\geqslant 0}\) is a d-dimensional fractional Brownian motion with Hurst parameter H which is independent from A and \((\eta ^f_t)_{t\geqslant 0}\). Note that all three options of the process \((X^{\mathcal {A},\gamma ,\theta }_t)_{t\geqslant 0}\) have the same one-dimensional marginal distributions. Then, due to Feynman-Kac formula (3.4),

$$\begin{aligned} u(t,x)=\mathbb {E}\left[ u_0\left( x+X^{\mathcal {A},\gamma ,\theta }_t+\mathcal {A}wt^{\theta /\gamma } \right) e^{c\mathcal {A}t^{\theta /\gamma }} \right] , \end{aligned}$$
(3.5)

solves the evolution equation

$$\begin{aligned} u(t,x)=u_0(x)-\int _0^t k(t,s) \left( -\frac{1}{2}\varDelta -w\nabla -c\right) ^\gamma u(s,x) ds. \end{aligned}$$
(3.6)

Therefore, we have obtained a Feynman-Kac formula (3.5) for the evolution equation (3.6) in terms of two different classes of randomly scaled Gaussian processes: randomly scaled slowed-down / speeded-up Brownian motion \(\left( \sqrt{\mathcal {A}} B_{ t^{\theta /\gamma }} \right) _{t\geqslant 0}\) and (if \(H:=\frac{\theta }{2\gamma }\in (0,1)\)) randomly scaled fractional Brownian motion \(\left( \sqrt{\mathcal {A}} B^H_{ t} \right) _{t\geqslant 0}\). If k is a Marichev-Saigo-Maeda kernel (3.1) then \(\theta =b\), \(A=A_{b,a,\mu ,\nu }\) in distribution. In the special case of the GGBM-kernel (3.2), we have \(\theta =\alpha \), \(A=A_\beta \) in distribution, and hence we may use generalized grey Brownian motion in formula (3.5) as the process \((X^{\mathcal {A},\gamma ,\theta }_t)_{t\geqslant 0}\).
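For the GGBM kernel (3.2) with \(w=0\) and \(c=0\), the mode \(u_0(x)=\cos (\kappa x)\) provides a closed-form benchmark: conditioning on \(\mathcal {A}\) and using \(\mathbb {E}\left[ e^{-s\mathcal {A}}\right] =E_\beta (-s^\gamma )\) gives \(u(t,x)=\cos (\kappa x)E_\beta \big (-(\kappa ^2/2)^\gamma t^\alpha \big )\). The sketch below (our own; the samplers and all numerical values are assumptions, and the representation of the Mittag-Leffler law as a negative power of a positive stable variable is a standard fact not taken from this paper) estimates (3.5) by Monte Carlo with \(X^{\mathcal {A},\gamma ,\theta }_t=\sqrt{\mathcal {A}}B_{t^{\theta /\gamma }}\) and compares it with this benchmark.

```python
import math
import numpy as np

# Monte Carlo sketch of (3.5) for the GGBM kernel (3.2) (theta = alpha) with w = 0, c = 0 and
# u_0(x) = cos(kappa*x).  Samplers and numerical values are illustrative choices; sampling the
# Mittag-Leffler law as a negative power of a positive stable variable is a standard fact.
rng = np.random.default_rng(2)
alpha, beta, gamma = 1.2, 0.8, 0.7
kappa, t, x = 1.5, 0.9, 0.3
n_samples = 200_000

def stable_positive(g, size):
    """S >= 0 with E[exp(-lam*S)] = exp(-lam**g) (Kanter / Chambers-Mallows-Stuck)."""
    U, E = rng.uniform(0.0, np.pi, size), rng.exponential(1.0, size)
    return (np.sin(g * U) / np.sin(U) ** (1 / g)) * (np.sin((1 - g) * U) / E) ** ((1 - g) / g)

A_beta = stable_positive(beta, n_samples) ** (-beta)        # Laplace transform E_beta(-.)
eta1 = stable_positive(gamma, n_samples)                    # eta^f_1 for f(lam) = lam^gamma
calA = A_beta ** (1 / gamma) * eta1                         # the random scale (3.3)
X = np.sqrt(calA) * rng.normal(0.0, np.sqrt(t ** (alpha / gamma)), n_samples)   # sqrt(A)*B_{t^(theta/gamma)}
u_mc = np.mean(np.cos(kappa * (x + X)))                     # Monte Carlo estimator of (3.5)

# benchmark: u(t,x) = cos(kappa*x) * E_beta(-(kappa^2/2)^gamma * t^alpha)
z = -((kappa ** 2 / 2) ** gamma) * t ** alpha
ml = sum(z ** n / math.gamma(beta * n + 1) for n in range(80))
print(u_mc, math.cos(kappa * x) * ml)                       # close up to Monte Carlo error
```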

The result of Example 3 (ii) can be generalized beyond the case of a constant diffusion coefficient, as detailed in the case of dimension \(d=1\) in space in the proposition below. As can be seen from the proof, this generalization requires moving from a Brownian motion to a stochastic differential equation driven by a Brownian motion in the Stratonovich sense in order to apply Corollary 1.

Proposition 1

Let \(\gamma \in (0,1]\) and suppose the kernel k is homogeneous of order \(\theta -1\) for some \(\theta >0\) and Assumptions 2 and 3 are satisfied. Let \(\mathcal {A}\) be a non-negative random variable constructed by (3.3) in Example 3 (i). Assume \(w\in {{\mathbb {R}}}\), \(c\leqslant 0\), and \(\sigma \in C^2({{\mathbb {R}}})\) is a bounded function with bounded first and second derivatives. Consider the linear operator \((L_{(\sigma ,w)},\mathrm{Dom}(L_{(\sigma ,w)}))\) in \(C_\infty ({{\mathbb {R}}})\) which is defined by

$$\begin{aligned}&L_{(\sigma ,w)}\varphi (x)\!:=\!\frac{\sigma ^2(x)}{2}\frac{d^2}{d x^2}\varphi (x)\!+\!\left( \!w\!+\!\frac{1}{2}\sigma '(x)\!\right) \!\sigma (x)\frac{d}{dx}\varphi (x),\quad \varphi \in \mathrm{Dom}(L_{(\sigma ,w)}),\\&\mathrm{Dom}(L_{(\sigma ,w)})\!:=\!\left\{ \varphi \in C^2({{\mathbb {R}}})\,\,:\,\, \varphi ,\, L_{(\sigma ,w)}\varphi \in C_\infty ({{\mathbb {R}}})\right\} . \end{aligned}$$

Let \(u_0\in \mathrm{Dom}(L_{(\sigma ,w)})\) and denote by \(g_\sigma \,:\,{{\mathbb {R}}}\times {{\mathbb {R}}}\rightarrow {{\mathbb {R}}}\) the solution to the parametrized family of ODEs

$$\begin{aligned} \frac{\partial }{\partial y}g_\sigma (y,x)=\sigma (g_\sigma (y,x)),\qquad g_\sigma (0,x)=x. \end{aligned}$$
(3.7)

Let \((B_t)_{t\geqslant 0}\) be a standard Brownian motion independent from \(\mathcal {A}\). Let \(X^{\mathcal {A},\gamma ,\theta }_t:=B_{\mathcal {A} t^{\theta /\gamma }}\) or \(X^{\mathcal {A},\gamma ,\theta }_t:=\sqrt{\mathcal {A}} B_{ t^{\theta /\gamma }}\), or, if \(H:=\frac{\theta }{2\gamma }\in (0,1)\), \(X^{\mathcal {A},\gamma ,\theta }_t:=\sqrt{\mathcal {A}} B^H_{ t}\), where \(\left( B^{H}_t \right) _{t\geqslant 0}\) is a 1-dimensional fractional Brownian motion with Hurst parameter H which is independent from \(\mathcal {A}\). Then

$$\begin{aligned} u(t,x)&=\mathbb {E}\left[ u_0\left( g_\sigma \left( X^{\mathcal {A},\gamma ,\theta }_t+w\mathcal {A}t^{\theta /\gamma },x \right) \right) e^{c\mathcal {A}t^{\theta /\gamma }} \right] \end{aligned}$$
(3.8)
$$\begin{aligned}&=\mathbb {E}\left[ u_0\left( g_\sigma \left( X^{\mathcal {A},\gamma ,\theta }_t,x \right) \right) e^{\mathcal {A}t^{\theta /\gamma }\left( c-\frac{w^2}{2} \right) +wX^{\mathcal {A},\gamma ,\theta }_t} \right] \end{aligned}$$
(3.9)

solves the evolution equation

$$\begin{aligned} u(t,x)=u_0(x)-\int _0^t k(t,s)\left( - L_{(\sigma ,w)}-c \right) ^\gamma u(s,x)ds. \end{aligned}$$
(3.10)

The proof of Proposition 1 will be given in Section 4.
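Before the proof, the following sketch illustrates Proposition 1 numerically (our own illustration; \(\sigma (x)=\sin x\) is chosen because its flow (3.7) is explicit, \(g_\sigma (y,x)=2\arctan (\tan (x/2)e^{y})\) for \(x\in (-\pi ,\pi )\), and all other values are assumptions). The two representations (3.8) and (3.9) are estimated by Monte Carlo for the GGBM kernel (\(\theta =\alpha \)); up to sampling error they coincide, as shown in Section 4 by a Gaussian change of variables.

```python
import numpy as np

# Monte Carlo sketch of Proposition 1 for the GGBM kernel (theta = alpha): with sigma(x) = sin(x)
# the flow (3.7) is explicit, g_sigma(y, x) = 2*arctan(tan(x/2)*exp(y)).  The two representations
# (3.8) and (3.9) are estimated separately and should agree up to sampling error; all concrete
# values are illustrative choices.
rng = np.random.default_rng(3)
alpha, beta, gamma = 1.0, 0.7, 0.8
w, c, t, x = 0.5, -0.3, 1.1, 1.0
u0 = lambda y: np.cos(y)
g_sigma = lambda y, x0: 2.0 * np.arctan(np.tan(x0 / 2.0) * np.exp(np.clip(y, -700.0, 700.0)))

def stable_positive(g, size):
    """S >= 0 with E[exp(-lam*S)] = exp(-lam**g) (Kanter / Chambers-Mallows-Stuck)."""
    U, E = rng.uniform(0.0, np.pi, size), rng.exponential(1.0, size)
    return (np.sin(g * U) / np.sin(U) ** (1 / g)) * (np.sin((1 - g) * U) / E) ** ((1 - g) / g)

n = 400_000
A_beta = stable_positive(beta, n) ** (-beta)                 # Laplace transform E_beta(-.)
calA = A_beta ** (1 / gamma) * stable_positive(gamma, n)     # the random scale (3.3)
scale = calA * t ** (alpha / gamma)                          # = A * t^(theta/gamma)
X = np.sqrt(calA) * rng.normal(0.0, np.sqrt(t ** (alpha / gamma)), n)   # one admissible X^{A,gamma,theta}_t

u_38 = np.mean(u0(g_sigma(X + w * scale, x)) * np.exp(c * scale))                 # formula (3.8)
u_39 = np.mean(u0(g_sigma(X, x)) * np.exp(scale * (c - w ** 2 / 2.0) + w * X))    # formula (3.9)
print(u_38, u_39)                                            # the two estimates should be close
```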

Remark 3

Let \(H:=\frac{\theta }{2\gamma }\in (0,1)\) and \(X^{\mathcal {A},\gamma ,\theta }_t:=\sqrt{\mathcal {A}} B^H_{ t}\), where \(\left( B^{H}_t \right) _{t\geqslant 0}\) is a 1-dimensional fractional Brownian motion with Hurst parameter H as in Proposition 1. We now interpret the Feynman-Kac formula (3.9) from the point of view of stochastic differential equations with respect to \(\left( X^{\mathcal {A},\gamma ,\theta }_t \right) _{t\geqslant 0}\) in the rough path sense. Note that almost every path of \(\left( X^{\mathcal {A},\gamma ,\theta }_t \right) _{t\geqslant 0}\) is Hölder continuous of every order less than H. Assume \(H>1/3\). Consider now \(\mathbb {X}_{t,s}:=\frac{1}{2}\left( X^{\mathcal {A},\gamma ,\theta }_t-X^{\mathcal {A},\gamma ,\theta }_s \right) ^2\). Then \(\mathcal {X}:=(X^{\mathcal {A},\gamma ,\theta }, \mathbb {X})\) is a lift to a geometric rough path (see [12]). Consider \(Z^x_t:=g_\sigma \left( X^{\mathcal {A},\gamma ,\theta }_t,x \right) \). Then, by the Itô formula for geometric rough paths, see again [12],

$$\begin{aligned} Z^x_t=x+\int _0^t\sigma (Z^x_s)dX^{\mathcal {A},\gamma ,\theta }_s, \end{aligned}$$
(3.11)

since \(g_\sigma \in C^3({{\mathbb {R}}})\). Hence, the stochastic representation for the solution of (3.10) in (3.9) can be rewritten in the form

$$\begin{aligned} u(t,x)=\mathbb {E}\left[ u_0\left( Z^x_t\right) e^{\mathcal {A}t^{\theta /\gamma }\left( c-\frac{w^2}{2} \right) +wX^{\mathcal {A},\gamma ,\theta }_t} \right] . \end{aligned}$$

This form resembles the classical Feynman-Kac formula for parabolic Cauchy problems in terms of stochastic differential equations driven by a Brownian motion. However, the stochastic differential equation (3.11) is driven by a randomly scaled fractional Brownian motion, which is neither a semimartingale nor a Markov process (unless \(H=1/2\)), to account for the memory kernel and the space fractionality in (3.10), while maintaining the stationary increment property of the driving process.

4 Proofs

Proof of Theorem 1

(i) Let Assumptions 1, 2 and 3 hold. Since the function \(\varPhi (t,\cdot )\), \(t\geqslant 0\), is entire by Theorem 1 in [4], the function \(\varPhi (t,i(\cdot ))\) is also entire and is the characteristic function of the distribution \(\mathcal {P}_{A(t)}\), which is concentrated on \([0,\infty )\). Therefore, we have by the Raikov theorem (cf. Theorem 3.2.1 in [22])

$$\begin{aligned} \int _{{\mathbb {R}}}e^{r|a|}\mathcal {P}_{A(t)}(da)=\int _0^\infty e^{ra}\mathcal {P}_{A(t)}(da)<\infty ,\qquad \forall \,\, r>0. \end{aligned}$$
(4.1)

Further, for any strongly continuous semigroup \((T_t)_{t\geqslant 0}\) there exist constants \(M\geqslant 1\), \(c\geqslant 0\) such that \(\Vert T_t\Vert \leqslant M e^{ct}\), \(\forall \,t\geqslant 0\), and the mapping \(t\mapsto T_t\varphi \) is continuous for any \(\varphi \in X\). Thus, we have \(\int _0^\infty \Vert T_a\varphi \Vert _X \mathcal {P}_{A(t)}(da)<\infty \) and the Bochner integral in the r.h.s. of (2.4) is well defined for any \(\varphi \in X\). Moreover, the operator \(\varPhi (t,L)\) defined by (2.4) is a bounded linear operator on X and \(\varPhi (0,L)=\mathop {\mathrm {Id}}\nolimits \).

(ii) Recall that the following statement was proved in [4] (cf. Corollary 1 of [4]):

Lemma 1

Let Assumption 2 hold. Then, for each \(\lambda \in {{\mathbb {C}}}\), there exists a unique solution \(\varPhi (\cdot ,-\lambda )\in B_b([0,T],{{\mathbb {C}}})\), \(\forall \,\, T>0\), of the following Volterra equation of the second kind

$$\begin{aligned} \varPhi (t,-\lambda )=1-\lambda \int _0^t k(t,s)\varPhi (s,-\lambda )ds,\qquad t>0. \end{aligned}$$
(4.2)

Moreover, \(\lim _{t\searrow 0}\varPhi (t,-\lambda )=1\) locally uniformly with respect to \(\lambda \in {{\mathbb {C}}}\), \(\varPhi (t,\cdot )\) is an entire function for all \(t\geqslant 0\) and equalities (2.1) and (2.2) hold.

Our aim is to lift the equality (4.2) to the level of operators \(\varPhi (t,L)\). To this end we use the so-called Hille-Phillips functional calculus. Let us recall the main facts about this functional calculus (cf. [15, 16]).

Let \((T_t)_{t\geqslant 0}\) be as in Assumption 1. Consider first the case when \((T_t)_{t\geqslant 0}\) is uniformly bounded (i.e. \(\Vert T_t\Vert \leqslant M\) for some \(M\geqslant 1\) and all \(t\geqslant 0\)). Denote by \(LS({{\mathbb {C}}}_+)\) the space of functions that are Laplace transforms of complex measures on \(([0,\infty ),\mathcal {B}([0,\infty )))\). Let \(g\in LS({{\mathbb {C}}}_+)\) and \(m_g\) be the (unique) complex measure whose Laplace transform \(\mathcal {L}[m_g]\) is given by g. One defines the operator \(g(-L)\) as follows:

$$\begin{aligned} g(-L)\varphi :=\int _0^\infty T_a\varphi \,m_g(da),\qquad \varphi \in X. \end{aligned}$$
(4.3)

The right hand side of (4.3) is a well-defined Bochner integral and \(g(-L)\) is a bounded linear operator on X, i.e. \(g(-L)\in \mathcal {L}(X)\). The mapping \(\mathcal {C}_T\,:\,LS({{\mathbb {C}}}_+)\rightarrow \mathcal {L}(X)\), \(g\mapsto g(-L)\), is called the Hille-Phillips calculus for \(-L\). Note that \(\mathcal {C}_T\) is an algebra homomorphism and hence \(\mathcal {C}_T(g_1g_2)= g_1(-L)\circ g_2(-L)=g_2(-L)\circ g_1(-L)\) and \( \mathcal {C}_T( a g_1+b g_2)= a g_1(-L)+b g_2(-L)\) for any \(g_1\), \(g_2\in LS({{\mathbb {C}}}_+)\), \(a,b\in {{\mathbb {R}}}\).

Consider now the case when \((T_t)_{t\geqslant 0}\) is of type \(c\geqslant 0\) (i.e., \(\Vert T_t\Vert \leqslant M e^{ct}\) for some \(M\geqslant 1\), \(c\geqslant 0\) and all \(t\geqslant 0\)). Then the rescaled semigroup \((T^c_t)_{t\geqslant 0}\), \(T^c_t:=T_te^{-ct}\), is uniformly bounded, strongly continuous and has generator \((L-c,\mathrm{Dom}(L))\). Then one may use the Hille-Phillips calculus \(\mathcal {C}_{T^c}\) for \(-(L-c)\). Consider now the space \(LS({{\mathbb {C}}}_+-c):=\left\{ g\,:\,g(\cdot -c)\in LS({{\mathbb {C}}}_+) \right\} \). Let \(g\in LS({{\mathbb {C}}}_+-c)\) and \(m^c_g\) be the (unique) complex measure with \(\mathcal {L}[m^c_g]=g(\cdot -c)\). One defines the operator \(g(-L)\) as follows:

$$\begin{aligned} g(-L)\varphi :=\mathcal {C}_{T^c}\big (g(\cdot -c) \big )\varphi \equiv \int _0^\infty T^c_a\varphi \,m^c_g(da),\qquad \varphi \in X. \end{aligned}$$

Let now m be a complex measure such that \(e^{ca}m(da)\) is again a complex measure. Let \(g^*:=\mathcal {L}[m]\). Then it holds \(\mathcal {L}[e^{ca}m(da)](\lambda )=\int _0^\infty e^{-\lambda a}e^{ca}m(da)=g^*(\lambda -c)\). Hence \(g^*\in LS({{\mathbb {C}}}_+-c)\) and \(m^c_{g^*}(da)=e^{ca}m(da)\). Therefore, \( g^*(-L)\varphi =\int _0^\infty T^c_a\varphi \,m^c_{g^*}(da)=\int _0^\infty T_a\varphi \, m(da)\), \(\varphi \in X. \) Thus, the operator \(\varPhi (t,L)\) defined in (2.4) can be interpreted as \(\mathcal {C}_{T^c}\big (\varPhi (t,-(\cdot -c)) \big )\) in terms of Hille-Phillips calculus due to (4.1).

Now we are ready to transfer equality (4.2) to the level of operators by means of the Hille-Phillips calculus. Let \((T_t)_{t\geqslant 0}\) be of type \(c\geqslant 0\) and \(\rho (L)\) be the resolvent set of the operator L, i.e. the resolvent operator \(R_\lambda (L):=(\lambda -L)^{-1}\) is a well defined bounded operator on X for each \(\lambda \in \rho (L)\). Let \(\gamma >c\); hence \(\gamma \in \rho (L)\). Equality (4.2) then implies that, for all \(t>0\) and all \(\lambda \in \mathbb {C}\) with \(\Re \lambda \geqslant -c\),

$$\begin{aligned} \gamma \cdot \frac{\varPhi (t,-\lambda ) -1}{\gamma + \lambda } = -\lambda \cdot \frac{\gamma }{\gamma +\lambda } \cdot \int _0^t k(t,s)\varPhi (s,-\lambda )ds. \end{aligned}$$
(4.4)

Let us present each component of (4.4) as the Laplace transform of some complex measure on \(([0,\infty ),\mathcal {B}([0,\infty )))\). As we have already discussed

$$\begin{aligned} \varPhi (t,-\lambda ) = \mathcal {L}(\mathcal {P}_{A(t)})(\lambda )\overset{\mathcal {C}_{T^c}}{ \longleftrightarrow } \varPhi (t,L)= \int _0^\infty T_a \,\mathcal {P}_{A(t)}(da). \end{aligned}$$

Furthermore, we have with Dirac delta-measure \(\delta _0\) and with exponential distribution \(Exp(\gamma )\):

$$\begin{aligned} 1&= \mathcal {L}(\delta _0)(\lambda ) \overset{\mathcal {C}_{T^c}}{\longleftrightarrow } \mathcal {L}(\delta _0)(-L):= \int _0^\infty T_a^c \delta _0(da) = Id, \\ \frac{\gamma }{\gamma + \lambda }&= \mathcal {L}(Exp(\gamma ))(\lambda ) \overset{\mathcal {C}_{T^c}}{\longleftrightarrow } \mathcal {L}(Exp(\gamma ))(-L):= \int _0^\infty T_a^c \gamma e^{-\gamma a}e^{c a} da\\&= \int _0^\infty T_a \gamma e^{-\gamma a} da = \gamma \cdot (\gamma -L)^{-1}=\gamma \cdot R_\gamma (L). \end{aligned}$$

Note that \(R_\gamma (L)\) is a bounded operator since \(\gamma \in (c,\infty )\subset \rho (L)\) (cf. [11, 28]) and \(\Vert \gamma R_\gamma (L)\varphi -\varphi \Vert _X\rightarrow 0\) as \(\gamma \rightarrow \infty \) for any \(\varphi \in X\). Further,

$$\begin{aligned} \frac{-\lambda \gamma }{\gamma +\lambda }&= -\gamma \cdot 1+ \gamma \cdot \frac{\gamma }{\gamma +\lambda } = -\gamma \cdot \mathcal {L}(\delta _0)(\lambda ) + \gamma \cdot \mathcal {L}(Exp(\gamma ))(\lambda ) \overset{\mathcal {C}_{T^c}}{\longleftrightarrow } \\&-\gamma \cdot \mathcal {L}(\delta _0)(-L) + \gamma \cdot \mathcal {L}(Exp(\gamma ))(-L)= -\gamma \cdot Id + \gamma ^2 R_\gamma (L) =: L_\gamma . \end{aligned}$$

Note that \(L_\gamma \) is the so-called Yosida approximation of L (cf. [11, 28]); \(L_\gamma \) is a bounded operator and \(\Vert L\varphi -L_\gamma \varphi \Vert _X\rightarrow 0\) as \(\gamma \rightarrow +\infty \) for each \(\varphi \in \mathrm{Dom}(L)\).
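Both correspondences just described can be checked numerically for a matrix semigroup. The sketch below (our own illustration, not part of the proof; the matrix, the vector and the quadrature grid are arbitrary choices) verifies that \(\int _0^\infty T_a\varphi \,\gamma e^{-\gamma a}da=\gamma R_\gamma (L)\varphi \) and that the Yosida approximation \(L_\gamma \varphi \) approaches \(L\varphi \) as \(\gamma \) grows.

```python
import numpy as np
from scipy.linalg import expm

# Numerical illustration (not part of the proof): for a matrix generator L, check the
# correspondence gamma/(gamma+lambda) <-> gamma*R_gamma(L) and the convergence of the Yosida
# approximation L_gamma = -gamma*Id + gamma^2*R_gamma(L).  Matrix, vector and grid are arbitrary.
L = np.array([[-2.0, 1.0, 1.0],
              [0.5, -1.5, 1.0],
              [1.0, 0.0, -1.0]])
phi = np.array([0.3, -1.2, 0.7])
gamma = 5.0

a = np.linspace(0.0, 8.0, 4001)               # exp(-gamma*a) is negligible beyond a = 8
da = a[1] - a[0]
vals = np.array([expm(ai * L) @ phi * gamma * np.exp(-gamma * ai) for ai in a])
quad = da * (vals.sum(axis=0) - 0.5 * (vals[0] + vals[-1]))          # trapezoidal Bochner integral
resolvent = gamma * np.linalg.solve(gamma * np.eye(3) - L, phi)      # gamma * R_gamma(L) phi
print(quad, resolvent)                                               # should agree closely

# Yosida approximation: || L_gamma phi - L phi || -> 0 as gamma -> infinity
for g in (10.0, 100.0, 1000.0):
    L_g_phi = -g * phi + g ** 2 * np.linalg.solve(g * np.eye(3) - L, phi)
    print(g, np.max(np.abs(L_g_phi - L @ phi)))
```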

Without loss of generality we now assume \(k(t,s)\ge 0\) (otherwise split k into its positive and negative parts) and define a family of measures on \(([0,\infty ),\mathcal {B}([0,\infty )))\) via

$$\begin{aligned} \nu _t(B):= \int _0^t k(t,s) \mathcal {P}_{A(s)}(B)ds, \qquad B\in \mathcal {B}([0,\infty )). \end{aligned}$$

The right hand side in the above formula is well-defined since the mapping \(s\mapsto \mathcal {P}_{A(s)}(B)\) is a bounded Borel-measurable function on \([0,\infty )\) for any \(B\in \mathcal {B}([0,\infty ))\). Indeed, the mapping \(s\mapsto \varPhi (s,-\lambda )\) is Borel measurable for any \(\lambda \in {{\mathbb {C}}}\) due to Assumption 2 and the representation formulas (2.1), (2.2). Moreover, for any \(s,x\geqslant 0\) it holds that (cf. Lemma 1.1 and the proof of Prop. 1.2 in [31])

$$\begin{aligned} \mathcal {P}_{A(s)}([0,x])=\lim \limits _{\lambda \rightarrow \infty }\sum _{k\leqslant \lambda x}(-1)^k\frac{\partial ^k\varPhi (s,-\lambda )}{\partial \lambda ^k}\frac{\lambda ^k}{k!}. \end{aligned}$$

Further, it holds for measurable \(g:[0,\infty )\rightarrow [0,\infty )\)

$$\begin{aligned} \int _0^\infty g(a) \nu _t(da) = \int _0^t k(t,s) \int _0^\infty g(a) \mathcal {P}_{A(s)}(da) ds, \end{aligned}$$
(4.5)

which can be seen via approximation of g by step functions from below and the use of Beppo Levi’s Theorem. By choosing \(g(a):=e^{-\lambda a}\) we see that

$$\begin{aligned} \int _0^\infty e^{-\lambda a} \nu _t(da)&= \int _0^t k(t,s) \int _0^\infty e^{-\lambda a} \mathcal {P}_{A(s)}(da) ds\\&= \int _0^t k(t,s) \varPhi (s,-\lambda ) ds =:\Psi (t,-\lambda ). \end{aligned}$$

Thereby \(\Psi (t,-\lambda )\) is the Laplace transform of an appropriate measure and we get the correspondence

$$\begin{aligned} \Psi (t,-\lambda ) \overset{\mathcal {C}_{T^c}}{\longleftrightarrow } \Psi (t,L):=\int _0^\infty T_a^c {\nu }^c_t(da) \end{aligned}$$

where \({\nu }^c_t(da):=e^{c a}\nu _t(da)\). Note that \({\nu }^c_t\) is a bounded measure on the measurable space \(([0,\infty ),\mathcal {B}([0,\infty )))\) due to (4.1). Furthermore, similar to property (4.5), it holds for any Bochner-integrable function \(g\,:\,[0,\infty )\rightarrow X\)

$$\begin{aligned} \int _0^\infty g(a) {\nu }^c_t(da) = \int _0^t k(t,s) \int _0^\infty g(a) e^{c a} \mathcal {P}_{A(s)}(da) ds, \end{aligned}$$

and therefore, for any \(\varphi \in X\),

$$\begin{aligned} \Psi (t,L)\varphi&= \int _0^\infty T_a^c \varphi \,{\nu }^c_t(da) = \int _0^t k(t,s) \int _0^\infty T_a^c\varphi \, e^{c a} \mathcal {P}_{A(s)}(da)ds\\&= \int _0^t k(t,s) \varPhi (s,L)\varphi ds. \end{aligned}$$

Thus, all components of (4.4) have been transferred. Combining everything and using that, for \(u_0\in \mathrm{Dom}(L)\), one has (cf. [16])

$$\begin{aligned} L_\gamma \Psi (t,L)u_0 = \gamma L R_\gamma (L) \Psi (t,L) u_0 = \Psi (t,L)\gamma L R_\gamma (L) u_0 = \Psi (t,L) L_\gamma u_0, \end{aligned}$$

we get

$$\begin{aligned} \gamma R_\gamma (L)\left( \varPhi (t,L) -Id\right) u_0 = \Psi (t,L) L_\gamma u_0 \qquad \forall \, u_0\in \mathrm{Dom}(L). \end{aligned}$$
(4.6)

Taking the limit \(\gamma \rightarrow +\infty \) we obtain (with \(\varPhi (s,L) L u_0=L \varPhi (s,L) u_0\) for all \(u_0\in \mathrm{Dom}(L)\))

$$\begin{aligned} (\varPhi (t,L)-Id)u_0&= \Psi (t,L) L u_0 =\!\! \int _0^t\!\!\! k(t,s) \varPhi (s,L) L u_0 ds =\! \!\int _0^t\!\!\! k(t,s) L \varPhi (s,L) u_0 ds\\ \Leftrightarrow \ \varPhi (t,L)u_0&= u_0 + \int _0^t k(t,s) L\varPhi (s,L)u_0 ds. \end{aligned}$$

Therefore, the function \(u(t):=\varPhi (t,L)u_0\) solves evolution equation (1.1) for any \(u_0\in \mathrm{Dom}(L)\).

For continuity at zero we evaluate equality (2.3) at \(\lambda =-c-i\rho \), \(\rho \in {{\mathbb {R}}}\) (which is possible in view of (4.1)), resulting in \( \int _0^\infty e^{i\rho a} e^{c a} \mathcal {P}_{A(t)}(da) = \varPhi (t,i\rho +c),\) \( \forall (t,\rho )\in [0,\infty )\times {{\mathbb {R}}}\). According to Lemma 1

$$\begin{aligned} \lim _{t\searrow 0}\int _0^\infty e^{i\rho a} e^{c a} \mathcal {P}_{A(t)}(da) = \lim _{t\searrow 0} \varPhi (t,i\rho +c) = 1 \qquad \forall \rho \in {{\mathbb {R}}}, \end{aligned}$$

and by Lévy’s Continuity Theorem it follows that \(e^{c a}\mathcal {P}_{A(t)}(da) \xrightarrow {\text {weakly}} \delta _0(da)\), \( t\searrow 0\). We now write

$$\begin{aligned} \Vert u(t)-u_0\Vert _X=&\left\| \int _0^\infty \left( T_au_0 -u_0 \right) \mathcal {P}_{A(t)}(da) \right\| _X \\&\le \int _0^\infty \Vert T_au_0-u_0\Vert _X e^{-c a}e^{c a} \mathcal {P}_{A(t)}(da) = \int _{{\mathbb {R}}}f(a)e^{c a} \mathcal {P}_{A(t)}(da), \end{aligned}$$

where \(f:{{\mathbb {R}}}\rightarrow {{\mathbb {R}}}\), \(f(a):=\Vert T_au_0-u_0\Vert _X e^{-c a}\) for \(a\geqslant 0\) and \(f(a):=0\) for \(a<0\), is a bounded and continuous function. Now

$$\begin{aligned} \lim _{t\searrow 0} \Vert u(t)-u_0\Vert _X \leqslant \lim _{t\searrow 0} \int _{{\mathbb {R}}}f(a)e^{c a} \mathcal {P}_{A(t)}(da) =f(0)= 0 \end{aligned}$$

by weak convergence and thus continuity at zero is shown.

(iii) Let k be homogeneous of order \(\theta -1\) for some \(\theta >0\). By the recursion formula (2.2), the homogeneity of k and induction on n (which gives \(c_{n-1}(ts)=t^{(n-1)\theta }c_{n-1}(s)\)), we have for all \(t>0\), \(n\in \mathbb {N}\)

$$\begin{aligned} c_n(t)=t^\theta \int _0^1k(1,s)c_{n-1}(ts)ds=t^{n\theta }\int _0^1k(1,s)c_{n-1} (s)ds=t^{n\theta }c_n(1). \end{aligned}$$

Thus, for all \( t\geqslant 0\) and \(\lambda \in \mathbb {C}\), we have (cf. Theorem 2 in [4]):

$$\begin{aligned} \varPhi (1,-t^\theta \lambda )&= \sum _{n=0}^\infty c_n(1)\left( -t^\theta \lambda \right) ^n = \sum _{n=0}^\infty t^{-n\theta }c_n(t)(-t^\theta \lambda )^n\\&= \sum _{n=0}^\infty c_n(t)(-\lambda )^n =\varPhi (t,-\lambda ). \end{aligned}$$

Let \(A(t):=At^\theta \), where A is a nonnegative random variable satisfying (2.6). Then \(\mathcal {L}(\mathcal {P}_{A(t)})(\lambda ) = \mathbb {E}\left[ e^{-\lambda At^\theta }\right] = \mathcal {L}(\mathcal {P}_A)(\lambda t^\theta ) = \varPhi (1,-\lambda t^\theta ) = \varPhi (t,-\lambda )\). Therefore, \(A(t):=A t^\theta \) has the required distribution. This proves Theorem 1. \(\square \)

Proof of Corollary 1

(i) \((T_t^f)_{t\geqslant 0}\) is a strongly continuous contraction semigroup on the Banach space X. Therefore, \((T_t^f)_{t\geqslant 0},\ k\) and \(\varPhi \) fulfill all assumptions of Theorem 1 and thus

$$\begin{aligned} \varPhi (t,L^f)\varphi :=\int _0^\infty T^f_s \varphi \mathcal {P}_{A(t)}(ds),\qquad \varphi \in X, \end{aligned}$$

is well-defined. Let now \((A(t))_{t\geqslant 0}\) and \((\eta ^f_t)_{t\geqslant 0}\) be as in the statement of Corollary 1. Consider the family of random variables \(\left( \eta ^f_{A(t)}\right) _{t\geqslant 0}\). Then

$$\begin{aligned} \mathbb {E}\left[ e^{-\lambda \eta ^f_{A(t)} } \right] =\!\int _0^\infty \!\mathbb {E}\left[ e^{-\lambda \eta ^f_a} \right] \mathcal {P}_{A(t)}(da)=\!\int _0^\infty \! e^{-af(\lambda )}\mathcal {P}_{A(t)}(da)=\varPhi (t,-f(\lambda )). \end{aligned}$$

Starting with the strongly continuous contraction semigroup \((T_t)_{t\geqslant 0}\) and the completely monotone function \(\varPhi ^f(t,-\cdot ):=\varPhi (t,-f(\cdot ))\), one may define

$$\begin{aligned} \varPhi ^f(t,L) \varphi :=\int _0^\infty T_s \varphi \, \mathcal {P}_{\eta ^f_{A(t)}}(ds),\qquad \varphi \in X. \end{aligned}$$

Due to Fubini’s theorem

$$\begin{aligned} \varPhi (t,L^f)\varphi =\int _0^\infty \! T^f_s \varphi \,\mathcal {P}_{A(t)}(ds)&= \int _0^\infty \! \int _0^\infty \! T_a \varphi \, \mathcal {P}_{\eta ^f_s} (da) \mathcal {P}_{A(t)}(ds) \\&= \int _0^\infty \! T_a \varphi \, \mathcal {P}_{\eta ^f_{A(t)}}(da) = \varPhi ^f(t,L)\varphi , \qquad \varphi \in X. \end{aligned}$$

Therefore, for any \(u_0\in \mathrm{Dom}(L^f)\), equation (2.8) is solved by \(\varPhi (t,L^f)u_0=\varPhi ^f(t,L)u_0\) according to Theorem 1 (ii).

(ii) Since \(V\le 0\), \((T_t)_{t\geqslant 0}\) is a strongly continuous contraction semigroup and so is \((T_t^f)_{t\geqslant 0}\). It follows from Theorem 1 (ii) that \(u(t,x):=\varPhi (t, (L_0+V)^{f})u_0\) solves evolution equation (2.11) and due to Fubini’s theorem

$$\begin{aligned} \varPhi (t, (L_0+V)^{f})u_0&= \int _0^\infty T_a^{f} u_0 \mathcal {P}_{A(t)}(da) \\&= \int _0^\infty \int _0^\infty \mathbb {E}^x\left[ u_0(\xi _s)\exp \left( \int _0^s V(\xi _v)dv\right) \right] \mathcal {P}_{\eta _a^f}(ds) \mathcal {P}_{A(t)}(da)\\&= \int _0^\infty \mathbb {E}^x\left[ u_0(\xi _{\eta _a^f})\exp \left( \int _0^{\eta _a^f} V(\xi _v)dv\right) \right] \mathcal {P}_{A(t)}(da) \\&= \mathbb {E}^x\left[ u_0(\xi _{\eta _{A(t)}^f})\exp \left( \int _0^{\eta _{A(t)}^f} V(\xi _s)ds\right) \right] . \end{aligned}$$

(iii) Follows immediately from Theorem 1 (iii). \(\square \)

Proof of Proposition 1

First, note that, under our assumptions on \(\sigma \), the operator \((L_{(\sigma ,w)},\mathrm{Dom}(L_{(\sigma ,w)}))\) does generate a strongly continuous semigroup on \(C_\infty ({{\mathbb {R}}})\) (cf. [23], Sec. 3.1.2). Second, consider the pair \(\left( (\xi _t)_{t\geqslant 0},(\mathbb {P}^x)_{x\in {{\mathbb {R}}}}\right) \) where \((\xi _t)_{t\geqslant 0}\) solves the Stratonovich SDE with respect to a standard 1-dimensional Brownian motion \((B_t)_{t\geqslant 0}\)

$$\begin{aligned} d\xi _t=\sigma (\xi _t)\circ dB_t+w\sigma (\xi _t)dt \end{aligned}$$

with \(\xi _0=x\) under \(\mathbb {P}^x\). By Remark 5.2.22 in [20], the pair \(\left( (\xi _t)_{t\geqslant 0},(\mathbb {P}^x)_{x\in {{\mathbb {R}}}}\right) \) is a Markov process with generator \(L_{(\sigma ,w)}\). We apply the Doss-Sussmann technique to find an explicit expression for \((\xi _t)_{t\geqslant 0}\). So, let \((B_t)_{t\geqslant 0}\) be a standard 1-dimensional Brownian motion with respect to some probability measure \(\mathbb {P}\). Let \(g_\sigma \) be as in the statement of Proposition 1. Then, by the Itô formula for the Stratonovich integral

$$\begin{aligned} g_\sigma (B_t+wt,x)=x+\!\int _0^t\! \sigma \big ( g_\sigma (B_s+ws,x) \big )\circ dB_s+\!\int _0^t\! w\sigma \big ( g_\sigma (B_s+ws,x) \big )ds. \end{aligned}$$

Hence \(\text {Law}\big ( (g_\sigma (B_t+wt,x))_{t\geqslant 0},\mathbb {P}\big )=\text {Law}\big ( (\xi _t)_{t\geqslant 0},\mathbb {P}^x \big )\) for every \(x\in {{\mathbb {R}}}\). In view of Corollary 1 and Example 3, there is a nonnegative random variable \(\mathcal {A}\) (constructed from k and \(\gamma \) as in (3.3)) which is independent of \((B_t)_{t\geqslant 0}\) and such that

$$\begin{aligned} u(t,x)&=\mathbb {E}\left[ u_0\left( g_\sigma \left( B_{\mathcal {A}t^{\theta /\gamma }}+w\mathcal {A}t^{\theta /\gamma },x \right) \right) e^{c\mathcal {A}t^{\theta /\gamma }} \right] \end{aligned}$$

solves the evolution equation (3.10). Note that \(\left( B_{\mathcal {A} t^{\theta /\gamma }}+w\mathcal {A} t^{\theta /\gamma } \right) _{t\geqslant 0}\), conditionally on \(\mathcal {A}\), is a Gaussian process with mean \(w\mathcal {A}t^{\theta /\gamma } \) and variance \(\mathcal {A} t^{\theta /\gamma }\). The process \(\left( \sqrt{\mathcal {A}} B_{t^{\theta /\gamma }}+w\mathcal {A}t^{\theta /\gamma } \right) _{t\geqslant 0}\) and, if \(H:=\frac{\theta }{2\gamma }\in (0,1)\), the process \(\left( \sqrt{\mathcal {A}} B^H_{t}+w\mathcal {A} t^{\theta /\gamma } \right) _{t\geqslant 0}\) have, conditionally on \(\mathcal {A}\), the same one-dimensional marginal distributions, where \((B^H_t)_{t\geqslant 0}\) is a 1-dimensional fractional Brownian motion independent of \(\mathcal {A}\). Hence Feynman-Kac formula (3.8) is shown. Further, we have

$$\begin{aligned}&u(t,x)=\mathbb {E}\left[ u_0\left( g_\sigma \left( \sqrt{\mathcal {A}}B_{t^{\theta /\gamma }}+w\mathcal {A}t^{\theta /\gamma },x \right) \right) e^{c\mathcal {A}t^{\theta /\gamma }} \right] \\&=\!\!\int _0^\infty \!\!\!\!\int _{{\mathbb {R}}}\!\! u_0\big (g_\sigma (\sqrt{a}z+wat^{\theta /\gamma },x) \big )e^{act^{\theta /\gamma }}(2\pi t^{\theta /\gamma })^{-\frac{1}{2}}\exp \left( -\frac{z^2}{2t^{\theta /\gamma }} \right) \,dz\,\mathcal {P}_{\!\mathcal {A}}(da)\\&=\!\!\int _0^\infty \!\!\!\!\int _{{\mathbb {R}}}\!\! u_0\big (g_\sigma (\sqrt{a}y,x) \big )e^{at^{\theta /\gamma }\left( c-\frac{w^2}{2} \right) +w\sqrt{a}y}(2\pi t^{\theta /\gamma })^{-\frac{1}{2}}\exp \!\left( \! -\frac{y^2}{2t^{\theta /\gamma }} \!\right) \!dy\mathcal {P}_{\!\mathcal {A}}(da)\\&=\mathbb {E}\left[ u_0\left( g_\sigma \left( \sqrt{\mathcal {A}}B_{t^{\theta /\gamma }},x \right) \right) e^{\mathcal {A}t^{\theta /\gamma }\left( c-\frac{w^2}{2} \right) +w\sqrt{\mathcal {A}}B_{t^{\theta /\gamma }}} \right] . \end{aligned}$$

Hence Feynman-Kac formula (3.9) is shown. \(\square \)