1 Introduction

We study the asymptotic behaviour of a linear kinetic equation on the real line \({\mathbb {R}}\):

$$\begin{aligned} {\left\{ \begin{array}{ll} \partial _tW(t,y,k)+ \bar{\omega }'(k) \partial _y W(t,y,k) \, = \, \gamma L_k W(t,y,k), &{} \text{ on } (0,+\infty )\times {\mathbb {R}}_*\times {\mathbb {T}}; \\ W(0,y,k)\, = \, W_0(y,k), &{} \text{ on } {\mathbb {R}}\times {\mathbb {T}}, \end{array}\right. } \end{aligned}$$
(1.1)

with the following boundary condition at \(y=0\) and \(t\ge 0\):

$$\begin{aligned} {\left\{ \begin{array}{ll} W(t,0^+, k)\, = \, p_+(k)W(t,0^-,k)+p_-(k)W(t,0^+, -k)+p_0(k)T_o, &{} \text{ on } {\mathbb {T}}_+;\\ W(t,0^-, k)\, = \, p_+(k)W(t,0^+, k)+p_-(k)W(t,0^-,-k) + p_0(k)T_o, &{} \text{ on } {\mathbb {T}}_-. \end{array}\right. }\qquad \end{aligned}$$
(1.2)

Here, \({\mathbb {T}}\) denotes the unit one-dimensional torus, understood as the interval \([-1/2,1/2]\) with identified endpoints, \({\mathbb {T}}_\pm :=[k\in {\mathbb {T}}:\pm k>0]\) and \({\mathbb {R}}_*:={\mathbb {R}}\setminus \{0\}\). We call the origin \(o:=[y=0]\) the interface. The parameters \(\gamma >0\), \(T_o\ge 0\) are given and the functions \(\bar{\omega }, p_\pm , p_0\), defined on \({\mathbb {T}}\), are assumed to be continuous and non-negative. The scattering operator \(L_k\), acting only on the variable k in \({\mathbb {T}}\), is given by

$$\begin{aligned} L_ku(k)\,:= \, \int _{{\mathbb {T}}}R(k,k') \left[ u(k') - u(k)\right] \, dk'. \end{aligned}$$
(1.3)

Here \(R:{\mathbb {T}}^2\rightarrow [0,+\infty )\) is \(C^2\) smooth and u belongs to \(L^1({\mathbb {T}})\).
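For the reader's orientation, the action of \(L_k\) can be approximated by a straightforward quadrature on a grid of the torus. The following minimal sketch (in Python) does this for a purely illustrative kernel and observable; neither choice is the one fixed later in the paper.

```python
import numpy as np

# Quadrature sketch for L_k u(k) = \int_T R(k,k') [u(k') - u(k)] dk' on the torus
# T = [-1/2, 1/2). The kernel R and the observable u below are illustrative
# assumptions, used only to display the structure of the scattering operator.
M = 2000
k = (np.arange(M) + 0.5) / M - 0.5          # midpoint grid on [-1/2, 1/2)
dk = 1.0 / M

def R(k, kp):
    # a smooth, non-negative kernel (hypothetical choice)
    return (1.0 + np.cos(2 * np.pi * k)) * (1.0 + np.cos(2 * np.pi * kp))

def u(k):
    return np.sin(2 * np.pi * k) ** 2       # an arbitrary smooth observable on T

# L u(k_i) ~ sum_j R(k_i, k_j) [u(k_j) - u(k_i)] dk
Lu = (R(k[:, None], k[None, :]) * (u(k)[None, :] - u(k)[:, None])).sum(axis=1) * dk
print(Lu[:5])
```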

This equation arises in the kinetic limit of the evolution of the energy density in a stochastically perturbed harmonic chain interacting with a point Langevin thermostat located at \(y=0\), see [22,23,24,25]. The energy density W(tyk) at time t is resolved in both the spatial variable \(y\in {\mathbb {R}}\) and the frequency variable \(k\in {\mathbb {T}}\). The function \(\bar{\omega }\) is the dispersion relation of the harmonic chain and \(\bar{\omega }'(k)\) is the group velocity of phonons of mode k – theoretical particles that carry the energy due to the chain vibrations of frequency k. The presence of the Langevin thermostat results in the boundary (interface) condition (1.2). The number \(T_o\) corresponds to the temperature of the thermostat, while \(p_+(k)\), \(p_-(k)\) and \(p_0(k)\) are the respective probabilities of transmission, reflection and killing of a mode k phonon at the interface; see [23]. They are continuous, even functions that satisfy

$$\begin{aligned} p_+(k)+p_-(k)+ p_0(k)\, = \, 1, \quad k\in {\mathbb {T}}. \end{aligned}$$
(1.4)

With a small risk of ambiguity, in what follows we also write

$$\begin{aligned} p_+:=p_+(0),\quad p_-:=p_-(0),\quad p_0:=p_0(0). \end{aligned}$$
(1.5)

Our main goal is to study the asymptotic behaviour of the solutions of (1.1) on macroscopic space-time scales. More precisely, we are interested in the limit of solutions, when \(\lambda \rightarrow +\infty \), for the family of rescaled equations

$$\begin{aligned} {\left\{ \begin{array}{ll} \dfrac{1}{\lambda }\partial _t W_\lambda (t,y,k)+\dfrac{1}{\lambda ^{1/\alpha }}\bar{\omega }'(k)\partial _y W_\lambda (t,y,k)\, = \, \gamma L_kW_\lambda (t,y,k), &{} \\ (t,y,k)\in (0,+\infty )\times {\mathbb {R}}_*\times {\mathbb {T}}; \\ W_\lambda (0,y,k)\, = \, W_0(y,k), &{}(y,k)\in {\mathbb {R}}\times {\mathbb {T}}, \end{array}\right. } \end{aligned}$$
(1.6)

subject to the boundary condition (1.2), where \(1<\alpha <2\) is a suitably chosen exponent.

It turns out, see [5, 22], that under appropriate assumptions on the total scattering kernel \(R(k):=\int _{{\mathbb {T}}}R(k,k')dk'\), group velocity \(\bar{\omega }'(k)\) and a suitable choice of \(\alpha \), the limit \(\bar{W}(t,y):=\lim _{\lambda \rightarrow +\infty }W_\lambda (t,y,k)\) exists in a weak (distributional) sense. In the case where \(\sup _{k\in {\mathbb {T}}}[\bar{\omega }'(k)]^2/R(k)<+\infty \) and \(\alpha =2\) (diffusive space-time scaling), \(\bar{W}(t,y)\) satisfies the heat equation with the Dirichlet boundary condition \(\bar{W}(t,0)=T_o\) and \(\bar{W}(0,y)=\int _{{\mathbb {T}}}W(0,y,k)dk\), see [5, Theorem 2.2]. On the other hand, if \(|\bar{\omega }'(k)|/R(k)\sim |k|^{-\beta }\), for \(|k|\ll 1\), with \(\beta >1\) and \(p_0=\lim _{k\rightarrow 0}p_0(k)>0\) (this is, e.g., the case of an unpinned nearest neighbor harmonic chain, see [25, formula (119)]), then letting \(\alpha =1+1/\beta \) one can show, see [22, Theorem 1.1], that \(\bar{W}(t,y)\) satisfies a fractional diffusion equation with a boundary condition at \(y=0\). In informal terms, the equation states that

$$\begin{aligned} \partial _t\bar{W}(t,y)={\mathcal L}\Big (\bar{W}(t,y)-T_o\Big ), \end{aligned}$$
(1.7)

where \({\mathcal L}\) is the generator of a modified symmetric \((1+1/\beta )\)-stable process. The process behaves like a “regular” symmetric \((1+1/\beta )\)-stable process outside the interface \(y=0\), but transmits, reflects, or dies at the interface with the respective probabilities \(p_+\), \(p_-\) and \(p_0\) of (1.5).

In the present paper, we address the situation where \(p_0=0\). In fact, we assume that the killing probability \(p_0(k)\) satisfies, for some \(\kappa >0\),

$$\begin{aligned} p_*:=\liminf _{k\rightarrow 0}\left( \log |k|^{-1}\right) ^{\kappa } p_0(k)\in (0,+\infty ). \end{aligned}$$
(1.8)

Furthermore, we suppose that the transmission probability does not vanish, that is,

$$\begin{aligned} \inf _{k\in {\mathbb {T}}} p_+(k) > 0. \end{aligned}$$
(1.9)

The dispersion relation \(\bar{\omega }:{\mathbb {T}}\rightarrow [0,+\infty )\) is assumed to be even and unimodal, i.e., it possesses exactly one local maximum, at 1/2, and one local minimum, at 0. In addition, it belongs to \(C^2({\mathbb {T}}_*)\), where \({\mathbb {T}}_*:={\mathbb {T}}\setminus \{0\}\), and both of its derivatives admit one-sided limits at \(k=0\). A typical dispersion relation we have in mind is \(\bar{\omega }(k)=|\sin (\pi k)|\), which corresponds to the harmonic chain with the nearest-neighbour interaction.

Concerning the scattering kernel, we assume that it is of the multiplicative form

$$\begin{aligned} R(k,k')\, = \, R_1(k)R_2(k'), \end{aligned}$$
(1.10)

where \(R_1\), \(R_2\) are non-negative, even functions belonging to \(C({\mathbb {T}})\). Without loss of generality, we also suppose that

$$\begin{aligned} \int _{{\mathbb {T}}}R_j(k)\, dk\, = \, 1, \quad j=1,2. \end{aligned}$$

We suppose that there exist exponents \(\beta _j>0\), \(j=1,2,3\), such that

$$\begin{aligned} R^*_{j} \,&:= \,\lim _{k\rightarrow 0}\frac{R_j(k)}{|k|^{\beta _j}}>0,\quad j=1,2, \end{aligned}$$
(1.11)
$$\begin{aligned} S_*\,&:=\,\lim _{k\rightarrow 0 }\frac{ |k|^{\beta _3}|\bar{\omega }'(k)|}{ \gamma R_1(k)}>0, \end{aligned}$$
(1.12)
$$\begin{aligned} \beta _1\,&< \, 1+\beta _2, \end{aligned}$$
(1.13)
$$\begin{aligned} \alpha \,&:= \, \frac{1+\beta _2}{\beta _3}\, \in \, (1,2). \end{aligned}$$
(1.14)

Consistent with the condition (1.13), we further require that

$$\begin{aligned} {\mathcal R}:=\int _{{\mathbb {T}}}\frac{R_2(k)}{ R_1(k)}dk\in (0,+\infty ). \end{aligned}$$
(1.15)

We define the probability measure \(\pi (dk):=R_2(k)/\big ({\mathcal R} R_1(k)\big )dk\) and we can informally state the main result of the present work; see Theorem 2.5 for a rigorous formulation.

Main Theorem. Let \(W_0(y,k)\) be a sufficiently regular function satisfying the interface conditions in (1.2) and such that the averaged function

$$\begin{aligned} \bar{W}_0(y)\,:= \, \int _{{\mathbb {T}}}W_0(y,k)\,\pi (dk) \end{aligned}$$
(1.16)

satisfies \(\bar{W}_0(0)=T_o\). Let \(W_\lambda (t,y,k)\) be the solution to (1.6) with interface conditions (1.2). Then, under the hypotheses (1.8)–(1.14) made above, the limit \( \lim _{\lambda \rightarrow +\infty }W_\lambda (t,y,k) \, = \, \bar{W}(t,y)\) exists in a distributional sense on \({\mathbb {R}}\times {\mathbb {T}}\) for any \(t>0\). Moreover, \(\bar{W}(t,y)\) is the unique weak solution, in the sense of Definition 2.4 below, of the equation:

$$\begin{aligned} \partial _t \bar{W}(t,y)&= \, \bar{\gamma }\,\mathrm{p.v.}\int _{yy'>0}q_\alpha (y'-y)[\bar{W}(t,y')-\bar{W}(t,y)] \, dy'\nonumber \\&\quad +\bar{\gamma }p_+\, \int _{yy'<0}q_\alpha (y'-y) [\bar{W}(t,y')-\bar{W}(t,y)] \, dy'\nonumber \\&\quad +\bar{\gamma }p_-\,\int _{yy'<0}q_\alpha (y'-y) [\bar{W}(t,-y')-\bar{W}(t,y)] \, dy', \end{aligned}$$
(1.17)

with the initial and boundary conditions \(\bar{W}(0,y) = \bar{W}_0(y)\) and \(\bar{W}(t,0)\, = \, T_o\), respectively. Here, \(\bar{\gamma }\) is a positive constant depending on the parameters of the model (see (2.9) below),

$$\begin{aligned} q_\alpha (y)\,:= \, \frac{c_\alpha }{|y|^{ 1+\alpha }}, \qquad c_\alpha \,:=\, \frac{2^\alpha \Gamma ((1+\alpha )/2)}{\sqrt{\pi }\,\left| \Gamma (-\alpha /2)\right| },\quad y\not =0, \end{aligned}$$
(1.18)

and \(\Gamma (\cdot )\) is the Euler gamma function.
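As a quick sanity check of the normalization in (1.18), one can verify numerically that, with this choice of \(c_\alpha \), the kernel \(q_\alpha \) is the Lévy measure density of a symmetric \(\alpha \)-stable process whose characteristic exponent is \(|\xi |^\alpha \). The sketch below assumes this standard convention; the truncation level and the tail estimate are rough, illustrative choices.

```python
import numpy as np
from math import gamma, sqrt, pi
from scipy.integrate import quad

# Check (illustrative): int_R (1 - cos(xi*y)) q_alpha(y) dy ~ |xi|^alpha, i.e. q_alpha
# is normalized so that the associated Levy exponent is |xi|^alpha.
alpha = 1.5
c_alpha = 2**alpha * gamma((1 + alpha) / 2) / (sqrt(pi) * abs(gamma(-alpha / 2)))

def levy_exponent(xi, A=200.0):
    f = lambda y: (1.0 - np.cos(xi * y)) * c_alpha / y**(1 + alpha)
    head, _ = quad(f, 0.0, A, limit=500)          # integrand ~ y^{1-alpha} near 0: integrable
    tail = c_alpha * A**(-alpha) / alpha          # rough tail estimate (1 - cos replaced by its mean)
    return 2.0 * (head + tail)                    # the kernel is even in y

for xi in (0.5, 1.0, 2.0):
    print(xi, levy_exponent(xi), abs(xi)**alpha)
```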

Remark 1.1

The above result should be compared with the result of [22], where the case \(p_0=\lim _{k\rightarrow 0}p_0(k)>0\) has been considered. As we have already mentioned, the informal formulation of the limiting fractional dynamics involves the generator \({\mathcal L}\) of the process that is symmetric stable and can be transmitted, reflected or killed when crossing the interface at \(y=0\). In the situation considered in the present paper we can also view the evolution of \(\bar{W}(t,y)\) as described by Eq. (1.7), with \({\mathcal L}\) corresponding to a symmetric \(\alpha \)-stable process that is transmitted or reflected when crossing the interface at \(y=0\), with the respective probabilities \(p_+\) and \(p_-\). Furthermore, the process is killed upon the first hitting of the interface.

The fractional diffusion limit of a kinetic equation has been the subject of intense investigation in recent years. We refer the interested reader to a review of the existing literature contained in [26]. However, there seem to be only a few results dealing with a fractional diffusion limit for kinetic equations with a boundary condition. In this context, we mention the papers [1, 3, 9,10,11,12, 22]. A case somewhat related to ours is considered in [11] and [12]. In the first paper, the convergence of scaled solutions to kinetic equations in spatial dimension one, with a diffusive reflection condition on the boundary, is investigated. However, this condition is different from ours. Furthermore, the results of [11] do not establish the uniqueness of the limit, stating only that it satisfies a certain fractional diffusion equation with a boundary condition and leaving the question of the uniqueness of solutions for the limiting equation open, see the remark after Theorem 1.2 in [11]. In the paper [12], the authors complete the results of [11] and establish an anomalous diffusive limit for a family of solutions of scaled linear kinetic equations in a one-dimensional bounded domain with diffusive boundary conditions. The scattering kernel, defined on \({\mathbb {R}}^2\), is also of multiplicative form, as in (1.10), and decays according to an appropriate power law.

Concerning our methods of proof, we rely on the probabilistic interpretation of solutions to kinetic equations. As shown in [22, Proposition 3.2], the solution can be expressed with the help of an underlying two-dimensional stochastic process \(\{\big (Y^o(t),K^o(t)\big )\}_{t\ge 0}\); see Proposition 3.1 below. The component \(K^o(t)\) shall be referred to as a frequency, while \(Y^o(t)\) can be thought of as the position of a particle, whose velocity is given by \(-\bar{\omega }'(K^o(t))\), see (3.7) below. Outside the interface \([y=0]\), i.e. when \(Y^o(t)\not =0\), the component \(K^o(t)\) is described by a pure jump, \({\mathbb {T}}\)-valued Markov process corresponding to the operator \(L_k\) in (1.3). As a result, the particle performs a uniform motion between the consecutive scattering events, with velocity \(-\bar{\omega }'(K^o(t))\). On the other hand, if the particle tries to cross the interface at time t, it will be transmitted (then \(K^o(t)=K^o(t-)\)), reflected (then \(K^o(t)=-K^o(t-)\)), or killed, with the respective probabilities \(p_+(K^o(t-))\), \(p_-(K^o(t-))\) and \(p_0(K^o(t-))\). Using this probabilistic interpretation, the asymptotic behaviour of the solutions of (1.6) can be reformulated into the problem of finding the limit of appropriately scaled processes \(\{\big (Y_\lambda ^o(t),K_\lambda ^o(t)\big )\}_{t\ge 0}\), with the parameter \(\lambda >0\) corresponding to the ratio between the macro- and microscopic time units. This approach has been successfully implemented in the case of solutions of scaled kinetic equations without interface in [4, 19, 20]. In the case when the killing probability \(p_0\) is strictly positive, as assumed in [22], one can expect that typically the particle crosses the interface only finitely many times before being killed. Its trajectory can then be obtained by a path transformation (consisting in performing suitable reflections) of the trajectory when no interface is present. This transformation turns out to be continuous in the Skorokhod \(J_1\) topology on the path space, and thus the convergence of the scaled processes in the presence of the interface is a consequence of the result for the process in the free space, established, e.g., in [20]. This approach cannot be applied in our present situation, since with small \(p_0\) it is not so obvious how to effectively control the number of interface crossings performed by the particle before it gets killed. For this reason, we use a different method to deal with the problem. We define a Lévy-type process \(\big \{Z(t)\big \}_{t\ge 0}\) whose rescaled versions \(\big \{Z_{\lambda }(t)\big \}_{t\ge 0}\) converge, as \(\lambda \rightarrow +\infty \), to a limit that is related to the corresponding limit of the particle position by a deterministic time change. To find the limit of \(\big \{Z_{\lambda }(t)\big \}_{t\ge 0}\), we prove that the associated Dirichlet forms converge, in the sense of Mosco (see [27, Definition 2.1.1] and also Definition 6.1 below), to the Dirichlet form corresponding to the modified \(\alpha \)-stable process described in Remark 1.1. The method used in Sect. 6 can also be applied to the case considered in [22]. However, one should keep in mind that [22] covers a broader range of scattering kernels than the multiplicative ones considered here, see (1.10). In fact, it is this particular form of the scattering kernel that allows us to use the machinery of Dirichlet forms in the present paper.

Apart from the limit theory of kinetic equations, we present here some new technical results, which may be of independent interest, for a construction of skew-stable processes (Theorem 3.4), for the characterization of the fractional Sobolev spaces (Proposition 6.3 and Corollary 6.4) and for the definition and uniqueness of solutions to non-local parabolic equations (Definition 2.4 and Proposition 3.7).

The paper is organized as follows: after presenting some preliminaries in Sect. 2, we reformulate the problem of finding the limit of solutions of (1.6), with the boundary condition (1.2), into the problem of finding the limit of the respective stochastic processes. This is done in Sect. 3. In Sect. 4, we show how to use the result concerning the limit of stochastic processes to conclude our main result, formulated rigorously in Theorem 2.5. Section 5 is devoted to the proof of the convergence of the scaled Lévy-type processes, as stated in Theorem 3.4, assuming the convergence of their respective Dirichlet forms in the sense of Mosco. The latter is established in Sect. 6. Some auxiliary facts are proved in Appendices A and B.

2 Preliminaries

2.1 Precise assumptions on the model

Given an arbitrary set A and two functions \(f,g:A\rightarrow [0,+\infty ]\), we write \(f\preceq g\) on A if there is a constant \(C>0\) such that

$$\begin{aligned} f(a)\, \le \, Cg(a),\quad a\in A. \end{aligned}$$

We also write \(f\approx g\) on A when \(f\preceq g\) and \(g\preceq f\). In what follows, we always assume the following conventions: \(\sum _A f(a) =0\) and \(\prod _A f(a)=1\), if \(A=\emptyset \), and \(\bar{\omega }'(0):=0\), even though as a rule, \(\bar{\omega }\) is not differentiable at \(k=0\).

Let us now denote for any k in \({\mathbb {T}}_*\),

$$\begin{aligned} \bar{t}(k)\,:= \, (\gamma R_1(k))^{-1}, \end{aligned}$$
(2.1)

which can be interpreted as the expected waiting time for the scattering of a particle at frequency k. The interplay between the scattering kernel and the drift will be captured by the function

$$\begin{aligned} S(k)\,:= \, \bar{\omega }'(k)\bar{t}(k), \quad k \in [-1/2,1/2]. \end{aligned}$$
(2.2)

Clearly, we can interpret |S(k)| as the expected distance travelled by the particle before scattering. According to our assumptions, S is an odd function and by (1.12), \(\lim _{k\rightarrow 0 } |k|^{\beta _3} |S(k)|=S_*\). We impose some further restrictions on the model coefficients by assuming that:

$$\begin{aligned} \begin{aligned}&R_j(k)\, \approx \, |k|^{\beta _j}, \quad j=1,2,\\&S\in C^1({\mathbb {T}}_*),\\&|S(k)|\, \approx \, \frac{\cos (\pi k)}{|k|^{\beta _3}},\quad |S'(k)|\, \approx \, \frac{1}{|k|^{1+\beta _3}}\,\, \text { on }{\mathbb {T}}_* \quad \text {and} \quad \\&S'_*:=\,\lim _{k\rightarrow 0}|k|^{1+\beta _3}|S'(k)| >0. \end{aligned} \end{aligned}$$
(2.3)

Note that the assumption made about \(S'\) in the third line of (2.3), together with the fact that \(\bar{\omega }'(k)\ge 0\) for \(k\in (0,1/2]\), implies that S restricted to that interval is strictly decreasing. From this point on, we shall assume that all the above hypotheses, together with those made in (1.8)–(1.15), are in force.
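To make the exponent bookkeeping in (1.11)–(1.14) and (2.3) concrete, the snippet below works out one admissible (and purely hypothetical) set of coefficients: the nearest-neighbour dispersion \(\bar{\omega }(k)=|\sin (\pi k)|\) together with \(R_j(k)\) proportional to \(|k|^{\beta _j}\). It evaluates \(S(k)\), checks the limit in (1.12) numerically (here \(\beta _3=\beta _1\)) and reports the resulting \(\alpha \).

```python
import numpy as np

# Illustrative parameter check; the concrete choices of R_1, R_2 and omega below are
# assumptions made only for this example, not the general hypotheses of the paper.
gamma_ = 1.0
beta1, beta2 = 1.5, 1.0                        # exponents of R_1, R_2 near k = 0
c1 = (beta1 + 1) * 2**beta1                    # normalizations so that int_T R_j dk = 1
c2 = (beta2 + 1) * 2**beta2

R1 = lambda k: c1 * np.abs(k)**beta1
R2 = lambda k: c2 * np.abs(k)**beta2
omega_prime = lambda k: np.pi * np.cos(np.pi * k) * np.sign(k)   # omega(k) = |sin(pi k)|

S = lambda k: omega_prime(k) / (gamma_ * R1(k))                  # S(k) = omega'(k) * tbar(k), cf. (2.2)

# (1.12): |k|^{beta3} |omega'(k)| / (gamma R_1(k)) -> S_* > 0, with beta3 = beta1 here
beta3 = beta1
for k in (1e-2, 1e-4, 1e-6):
    print(k, np.abs(k)**beta3 * np.abs(S(k)))   # approaches S_* = pi / (gamma_ * c1)

alpha = (1 + beta2) / beta3                     # (1.14)
print("alpha =", alpha, "in (1,2):", 1 < alpha < 2, "; (1.13) holds:", beta1 < 1 + beta2)
```

With these values \(\alpha =4/3\); any other pair \((\beta _1,\beta _2)\) with \((1+\beta _2)/2<\beta _1<1+\beta _2\) would serve equally well for the purposes of this illustration.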

Remark 2.1

It follows from our subsequent argument that the limiting behavior of \(W_\lambda (t,y,k)\) is determined by the behavior of S(k) and \(R_j(k)\), \(j=1,2\), in the vicinity of \(k=0\), provided that these functions, together with \(1/R_1(k)\), remain bounded for k away from 0. A generalization of the proof to a broader class of coefficients should therefore be possible, at the expense of additional complications in the argument.

2.2 Solution of the kinetic equation

Let us write \({\mathbb {R}}_+:=(0, +\infty )\), \(\bar{{\mathbb {R}}}_+:=[0,+\infty )\), \({\mathbb {R}}_-:=(-\infty ,0)\) and \(\bar{{\mathbb {R}}}_-:=(-\infty , 0]\). Given \(T \in {\mathbb {R}}\), we denote by \(\mathcal {C}_{T}\) the class of functions \(\phi \) in \(C_b({\mathbb {R}}_*\times {\mathbb {T}}_*)\) that can be continuously extended to \(\bar{{\mathbb {R}}}_\iota \times {\mathbb {T}}_*\), for \(\iota \in \{+,-\}\), and satisfy the following interface conditions:

$$\begin{aligned} {\left\{ \begin{array}{ll} \phi (0^+, k)\, = \, p_+(k)\phi (0^-,k)+p_-(k)\phi (0^+, -k)+p_0(k)T, &{} \text{ on } {\mathbb {T}}_+;\\ \phi (0^-, k)\, = \, p_+(k)\phi (0^+, k)+p_-(k)\phi (0^-,-k) + p_0(k)T, &{} \text{ on } {\mathbb {T}}_-. \end{array}\right. } \end{aligned}$$
(2.4)

Clearly, the constant function \(T_o\) belongs to \(\mathcal {C}_{T_o}\). Furthermore, \(F\in \mathcal { C}_{T_o}\) if and only if \(F-T_o\in \mathcal {C}_0\). This allows us to reduce the proofs of some results below to the case \(T_o=0\).

Remark 2.2

Note that if \(\phi \in \mathcal { C}_{T_o}\) and \( \phi (y,k)=\tilde{\phi }(y)\) for all \((y,k)\in {\mathbb {R}}_*\times {\mathbb {T}}_*\), then we necessarily have \(\tilde{\phi }(0^+)=\tilde{\phi }(0^-)=T_o\). In light of the observation preceding this remark, it suffices to verify the statement for \(T_o=0\). Taking into account (2.4) (with \(T_o=0\)) we conclude that

$$\begin{aligned}0\, = \, (1-p_-(k))\left( \tilde{\phi }^2(0^+)+\tilde{\phi }^2(0^-)\right) -2 p_-(k) p_+(k) \tilde{\phi }(0^+)\tilde{\phi }(0^-).\end{aligned}$$

The form \((x,y)\mapsto (1-p_-(k))\left( x^2+y^2\right) -2 p_-(k)p_+(k) xy\) is positive definite, since

$$\begin{aligned}&\textrm{det}\left[ \begin{array}{ll} 1-p_-(k)&{}-p_-(k)p_+(k)\\ &{}\\ -p_-(k)p_+(k)&{}1-p_-(k) \end{array} \right] \,= \,(1-p_-(k))^2-p^2_-(k)p^2_+(k)\\&{\ge } \, (1-p_-(k))^2-p^2_-(k) (1-p_-(k))^2\, = \, (1-p_-(k))^3 (1+p_-(k))>0, k\in {\mathbb {T}}, \end{aligned}$$

due to (1.9).

Following [22], we now recall the definition of a solution to equation (1.1) with the interface conditions (1.2).

Definition 2.3

A bounded, continuous function \(W:{\mathbb {R}}_+\times {\mathbb {R}}_*\times {\mathbb {T}}_*\rightarrow {\mathbb {R}}\) is called a solution to equation (1.1) with the interface conditions (1.2) if all the following conditions are satisfied:

  • for any \(\iota , \iota '\) in \(\{-,+\}\), the restriction of W to \({\mathbb {R}}_+\times {\mathbb {R}}_\iota \times {\mathbb {T}}_{\iota '}\) can be extended to a bounded, continuous function on \(\bar{\mathbb {R}}_+\times \bar{\mathbb {R}}_{\iota }\times \bar{\mathbb {T}}_{\iota '}\);

  • for any \((t,y,k)\in {\mathbb {R}}_+\times {\mathbb {R}}_*\times {\mathbb {T}}_*\) fixed, the function \(s\mapsto W(t+s,y+\bar{\omega }'(k) s,k)\) is continuously differentiable in a neighborhood of \(s=0\), the directional derivative

    $$\begin{aligned} D_tW(t,y,k)\,:= \, \frac{d}{ds}_{|s=0} W(t+s,y+\bar{\omega }'(k) s,k) \end{aligned}$$
    (2.5)

    is bounded in \({\mathbb {R}}_+\times {\mathbb {R}}_*\times {\mathbb {T}}_*\), and

    $$\begin{aligned} D_tW(t,y,k) \, = \, \gamma L_k W(t,y,k), \quad (t,y,k)\in {\mathbb {R}}_+\times {\mathbb {R}}_*\times {\mathbb {T}}_*; \end{aligned}$$
  • the interface conditions in (1.2) hold, together with the initial condition

    $$\begin{aligned} \lim _{t\rightarrow 0+} W(t,y,k)\, = \, W_0(y,k),\quad (y,k)\in {\mathbb {R}}_*\times {\mathbb {T}}_*. \end{aligned}$$

We highlight that, at least formally, the directional derivative satisfies

$$\begin{aligned} D_tW(t,y,k)\, = \, \left[ \partial _t+\bar{\omega }'(k)\partial _y\right] W(t,y,k),\end{aligned}$$

which justifies the notation in (2.5). It has been shown in [22] that there exists a unique (classical) solution, in the sense of Definition 2.3, to the Cauchy problem (1.1), if the initial distribution \(W_0\) belongs to \(\mathcal {C}_{T_o}\). In this regard, see also Proposition 3.1 below.

2.3 Weak solution of the limit equation

Recalling the definitions of \(p_\pm \) in (1.5), we introduce the bilinear form

$$\begin{aligned} \hat{\mathcal E}[u,v]\,:=&\, \frac{1}{2} \int _{{\mathbb {R}}^2}(u(y')-u(y))(v(y')-v(y)) q_\alpha (y'-y)\left( \mathbbm {1}_{yy'>0} +p_+\mathbbm {1}_{yy'<0}\right) dydy'\nonumber \\&+ \frac{1}{2}\int _{{\mathbb {R}}^2}(u(-y')-u(y))(v(-y')-v(y)) q_\alpha (y'-y)\,p_- \mathbbm {1}_{yy'<0}\,dydy', \end{aligned}$$
(2.6)

and the associated quadratic form \(\hat{\mathcal E}[u]:=\hat{\mathcal E}[u,u]\). We now define the semi-norm \(\Vert u\Vert _{\mathcal {H}_o}\,:= \, \hat{\mathcal E}^{1/2}[u]\) for any Borel function \(u:{\mathbb {R}}_*\rightarrow {\mathbb {R}}\) such that the expression is finite. One can check that \(\hat{\mathcal E}^{1/2}[u]<+\infty \), if u belongs to \(C_c^\infty ({\mathbb {R}}_*)\). We then denote by \(\mathcal {H}_o\) the completion of \(C_c^\infty ({\mathbb {R}}_*)\) under \(\Vert \cdot \Vert _{\mathcal {H}_o}\). We proceed to the definition of a weak solution to (1.17) with the interface conditions (1.2).
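For intuition, \(\hat{\mathcal E}[u]\) can be approximated by a crude double Riemann sum when u is smooth and vanishes near the interface. The following sketch does this on a truncated grid; the bump function, the truncation and the values of \(\alpha \), \(p_\pm \) are illustrative assumptions, and no attempt is made at an accurate quadrature of the singular kernel.

```python
import numpy as np
from math import gamma, sqrt, pi

# Crude grid approximation of the quadratic form \hat{E}[u] in (2.6) for a smooth bump
# supported away from the interface; all concrete choices below are illustrative.
alpha, p_plus, p_minus = 1.5, 0.6, 0.4
c_alpha = 2**alpha * gamma((1 + alpha) / 2) / (sqrt(pi) * abs(gamma(-alpha / 2)))
q = lambda z: c_alpha / np.abs(z)**(1 + alpha)

def u(y):                                   # smooth bump supported in (1, 3), so u = 0 near y = 0
    out = np.zeros_like(y)
    m = (y > 1.0) & (y < 3.0)
    out[m] = np.exp(-1.0 / ((y[m] - 1.0) * (3.0 - y[m])))
    return out

L, N = 12.0, 1500                           # truncation and resolution of the grid
y = (np.arange(N) + 0.5) * (2 * L / N) - L
h = 2 * L / N
Y, Yp = np.meshgrid(y, y, indexing="ij")    # Y plays the role of y, Yp of y'
D, Dref = u(Yp) - u(Y), u(-Yp) - u(Y)
off = np.abs(Yp - Y) > h / 2                # exclude the diagonal, where the integrand vanishes
K = np.where(off, q(np.where(off, Yp - Y, 1.0)), 0.0)
same, opposite = (Y * Yp > 0), (Y * Yp < 0)
E = 0.5 * np.sum(K * (D**2 * (same + p_plus * opposite)
                      + p_minus * opposite * Dref**2)) * h * h
print("grid estimate of E[u]:", E)
```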

Definition 2.4

A bounded function \(\bar{W}:\bar{{\mathbb {R}}}_+\times {\mathbb {R}}\rightarrow {\mathbb {R}}\) is a weak solution to (1.17) with interface conditions (1.2) if

  1. (i)

    \(\bar{W}(\cdot )-T_o\in L^2_{\text {loc}}([0,+\infty );\mathcal {H}_o)\) and \(\bar{W}(\cdot )-T_\infty \in C([0,+\infty );L^2({\mathbb {R}}))\) for some \(T_\infty \in {\mathbb {R}}\),

  2. (ii)

    for any \(F\in C^\infty _c([0,+\infty )\times {\mathbb {R}}_*)\) and any \(t\ge 0\), it holds that

$$\begin{aligned} \int _{{\mathbb {R}}}F(0,y)[ \bar{W}_0(y)-T_o]\, dy&=\int _{{\mathbb {R}}}F(t,y)[\bar{W}(t,y)-T_o]\, dy -\int _0^{t}\int _{{\mathbb {R}}}\partial _sF(s,y) [\bar{W}(s,y)-T_o]\, dy\, ds\nonumber \\&\quad +\bar{\gamma } \int _0^{t}\hat{\mathcal E} [F(s,\cdot ),\bar{W}(s,\cdot )-T_o] \, ds. \end{aligned}$$
    (2.7)

By Proposition 3.7 below, our definition guarantees the uniqueness of solutions.

2.4 Statement of the main result

Having formulated the definition of a weak solution, we are ready to state our main result rigorously. It reads as follows:

Theorem 2.5

Assume that the hypotheses about \(p_\pm \), \(p_0\), R and \(\bar{\omega }\) made in (1.8)–(1.15) hold. Furthermore, suppose that the assumptions in (2.3) are in force. Let \(T_\infty \in {\mathbb {R}}\) and \(W_0\in {\mathcal C}_{T_o}\) be such that \(\bar{W}_0-T_\infty \in L^2({\mathbb {R}})\) and \(\bar{W}_0-T_o\in \mathcal {H}_o\), with \(\bar{W}_0\) defined in (1.16). Let \(W_\lambda (t,y,k)\) be the solution, in the sense of Definition 2.3, to the Cauchy problem (1.6). Then,

$$\begin{aligned} \lim _{\lambda \rightarrow +\infty }\int _{{\mathbb {R}}\times {\mathbb {T}}}W_\lambda (t,y,k)F(y,k)\, dkdy \, = \, \int _{{\mathbb {R}}\times {\mathbb {T}}}\bar{W}(t,y)F(y,k)\, dkdy, \end{aligned}$$
(2.8)

for all \(t>0\) and test functions \(F\in C^\infty _c({\mathbb {R}}\times {\mathbb {T}})\). Moreover, the limit \(\bar{W}(t,y)\) is the weak solution, in the sense of Definition 2.4, to the Cauchy problem (1.17) with the initial distribution \(\bar{W}_0\) and the fractional diffusion coefficient

$$\begin{aligned} \bar{\gamma }\,:= \, \gamma R^*_2S^{1+\alpha }_* \Gamma (\alpha +1)/S'_*. \end{aligned}$$
(2.9)

3 Probabilistic representation of solutions

3.1 Construction of the position-frequency process

As usual, \({\mathbb {N}}:=\{1,2,\ldots \}\) and \({\mathbb {N}}_0:=\{0,1,2,\ldots \}\). Let \((\Omega ,\mathcal {F},\mathbb {P})\) be a probability space carrying the following random objects. We consider a Markov chain \(K_n(k)\) and the renewal process \({\mathfrak T}_n(k)\). More precisely, \(\{K_n(k)\}_{n\in {\mathbb {N}}_0}\) is such that \(K_0(k)=k\) and \(\{K_n(k)\}_{n\in {\mathbb {N}}}\) are i.i.d. random variables on \({\mathbb {T}}\), distributed according to the following probability measure:

$$\begin{aligned} \mu (dk)\,:= \, R_2(k)dk, \end{aligned}$$
(3.1)

and

$$\begin{aligned} {\mathfrak T}_n(k)\,:= \, \sum _{0\le j<n}\bar{t}(K_j(k))\tau _j, \quad n \in {\mathbb {N}}_0, \end{aligned}$$
(3.2)

where \(\{\tau _n\}_{n\in {\mathbb {N}}_0}\) is an independent sequence of i.i.d. exponentially distributed random variables with intensity 1 and, we recall, \(\bar{t}(k)\) was defined in (2.1). We introduce the process \(\{{\mathfrak T}(t,k)\}_{t\ge 0}\) as the linear interpolation between the values of \({\mathfrak T}_n(k)\):

$$\begin{aligned} {\mathfrak T}(t,k)={\mathfrak T}_n(k)+(t-n)({\mathfrak T}_{n+1}(k)-{\mathfrak T}_n(k))\quad \text{ if }\quad t\in [n, n+1),\;n\in {\mathbb {N}}_0. \end{aligned}$$

We then define the continuous-time frequency process as

$$\begin{aligned}K(t,k)\,:= \, K_{[{\mathfrak T}^{-1}(t,k)]}(k)=K_n(k) \quad \text{ if } \quad t\in [{\mathfrak T}_n(k), {\mathfrak T}_{n+1}(k)),\; n\in {\mathbb {N}}_0.\end{aligned}$$

Here \({\mathfrak T}^{-1}\) is the inverse function of \(t\mapsto {\mathfrak T}(t,k)\) and \([\cdot ]\) denotes the integer part.
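The skeleton \(\{(K_n,{\mathfrak T}_n)\}\) and the frequency process \(K(t,k)\) are straightforward to simulate. The sketch below does so for the illustrative multiplicative kernel already used above (sampling from \(\mu \) by rejection); it is only meant to make the construction concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulation sketch of the skeleton (K_n, T_n) of (3.1)-(3.2); the kernels R_1, R_2
# are the same hypothetical choices as in the earlier illustration.
gamma_ = 1.0
beta1, beta2 = 1.5, 1.0
c1, c2 = (beta1 + 1) * 2**beta1, (beta2 + 1) * 2**beta2
R1 = lambda k: c1 * np.abs(k)**beta1
R2 = lambda k: c2 * np.abs(k)**beta2
tbar = lambda k: 1.0 / (gamma_ * R1(k))                     # (2.1)

def sample_mu(n):
    # rejection sampling from mu(dk) = R_2(k) dk on [-1/2, 1/2]; R_2 <= c2 * (1/2)^beta2
    out, bound, i = np.empty(n), c2 * 0.5**beta2, 0
    while i < n:
        k = rng.uniform(-0.5, 0.5, size=2 * (n - i))
        acc = k[rng.uniform(0.0, bound, size=k.size) < R2(k)]
        m = min(acc.size, n - i)
        out[i:i + m] = acc[:m]
        i += m
    return out

def skeleton(k0, n):
    K = np.concatenate(([k0], sample_mu(n)))                # K_0 = k, then i.i.d. draws from mu
    tau = rng.exponential(1.0, size=n)
    T = np.concatenate(([0.0], np.cumsum(tbar(K[:n]) * tau)))   # renewal times, cf. (3.2)
    return K, T

K, T = skeleton(0.3, 10)
print(np.round(K, 3))
print(np.round(T, 3))
# the continuous-time frequency process K(t, k) equals K_n on [T_n, T_{n+1})
```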

According to (2.2), the particle position at the time of the n-th renewal of its frequency is

$$\begin{aligned}Z_n(y,k)\,:=\, y- \sum _{0\le j<n} S(K_j)\tau _j, \quad n \in {\mathbb {N}}_0.\end{aligned}$$

In particular, the law of \(Z_n(y,k)\), for each \(n\in {\mathbb {N}}\), is absolutely continuous with respect to the Lebesgue measure on \({\mathbb {R}}\). We then consider an auxiliary Poisson process \(\{N(t)\}_{t\ge 0}\) of intensity 1 that is independent of both \(\{K_n\}_{n\ge 0}\) and \(\{\tau _n\}_{n\ge 0}\). We define \(\{\tilde{N}(t)\}_{t\ge 0}\) as the linear interpolation between the nodal points of N. Namely, if \(n:=N(t)\) then

$$\begin{aligned} \tilde{N}(t):=n+(t-l)/(r-l), \end{aligned}$$

where \(l:=\inf \{s:N(s)=n\}\) and \(r:=\inf \{s: N(s)=n+1\}\). Our next process, \(\{\tilde{Z}(t,y,k)\}_{t\ge 0}\), is obtained by linearly interpolating between the nodal points of \(Z_{N(t)}(y,k)\): if \(n=N(t)\), then

$$\begin{aligned} \tilde{Z}(t,y,k):=Z_n(y,k)+(Z_{n+1}(y,k)-Z_n(y,k))(t-l)/(r-l). \end{aligned}$$

Next we define the transmission-reflection-killing mechanism at the interface \(o=[y=0]\). To this end, we fix \((y,k)\in {\mathbb {R}}_*\times {\mathbb {T}}_*\) and consider the times \(\left\{ n_{m}\left( \Big \{Z_n(y,k)\Big \}_{n\ge 0}\right) \right\} _{m\ge 0}\) when \(\Big \{Z_n(y,k)\Big \}_{n\ge 0}\) crosses the interface. Namely, we let \(n_0:=0\) and then define recursively

$$\begin{aligned} n_{m+1} \,:= \, \inf \left\{ n>n_{m}:(-1)^m y Z_n(y,k)<0\right\} , \quad m =0,1,\ldots . \end{aligned}$$
(3.3)

For the sake of brevity, when there is no danger of confusion, we skip the sequence \(\Big \{Z_n(y,k)\Big \}_{n\ge 0}\) from the notation. Similarly, we define the sequence \(\Big \{\tilde{\mathfrak {s}}_{m}\Big (\{\tilde{Z}(t,y,k)\}_{t\ge 0} \Big )\Big \}_{m\ge 0}\) of consecutive times when the process \(\tilde{Z}(t,y,k)\) crosses the interface o. Namely, we let \(\tilde{\mathfrak {s}}_{0}=0\) and

$$\begin{aligned} \tilde{\mathfrak {s}}_{m+1}\,:= \, \inf \left\{ t>\tilde{\mathfrak {s}}_{m}:(-1)^m y\tilde{Z}(t,y,k)<0\right\} , \quad m =0,1,\ldots . \end{aligned}$$
(3.4)

Again, we simplify the notation by omitting the path of the process when there is no danger of confusion. Sometimes, to highlight the dependence of the crossing times on the starting point (which will be relevant in the argument), we may write \(\tilde{\mathfrak {s}}_{y,k,m}\), or \(\tilde{\mathfrak {s}}_{y,m}\). Let \(\mathfrak {s}_{m}:=\inf \{t>0:\,n_{m}=N(t)\}\). Since the m-th passage of \(\tilde{Z}(t,y,k)\) across the interface occurs before \(\mathfrak {s}_{m}\), we have \(\tilde{\mathfrak {s}}_{m} < \mathfrak {s}_{m}\), \(\mathbb {P}\)-a.s. Likewise, the \((m+1)\)-st passage has to occur after \(\mathfrak {s}_{m}\), so \(\mathfrak {s}_{m} < \tilde{\mathfrak {s}}_{m+1}\), \(\mathbb {P}\)-a.s.

We now consider a sequence \( \{\sigma _{m}\}_{m\ge 0}\) of \(\{-1,0,1\}\)-valued random variables that are independent when conditioned on \(\{K_n(k)\}_{n\ge 0}\), such that \(\sigma _0:=1\) and

$$\begin{aligned} {\mathbb {P}}\left( \sigma _{m}=\iota |\{K_n(k)\}_{n\ge 0}\right) \,= \, p_\iota (K_{n_{m}-1}(k)), \quad \iota \in \{-1,0,1\},\,m\in {\mathbb {N}}. \end{aligned}$$
(3.5)

Here and below, \(p_\iota \) means \(p_\pm \) if \(\iota =\pm 1\), respectively. Of course, \(\{\sigma _m\}_{m\ge 1}\) can be defined by applying the quantile functions for (3.5) to an independent sequence of i.i.d. uniform random variables. We can finally add the random interface mechanism to the considered processes. Namely,

$$\begin{aligned} \tilde{Z}^o(t,y,k) := \,\left( \prod _{j=1}^{m}\sigma _{j}\right) \tilde{Z}(t,y,k), \quad \text {if }t \in [\tilde{\mathfrak {s}}_{m},\tilde{\mathfrak {s}}_{m+1}) \text{ for } \text{ some } m\in {\mathbb {N}}_0.\nonumber \\ \end{aligned}$$
(3.6)

We define \({\mathcal S}(t,k):=\tilde{N}^{-1}\left( {\mathfrak T}^{-1}(t,k)\right) \) and \({\mathfrak {u}}_{m}:=\mathcal {S}^{-1}(\tilde{\mathfrak {s}}_{m},k)\). Let

$$\begin{aligned} K^o(t,k) := \,\left( \prod _{j=1}^{m}\sigma _{j}\right) K(t,k), \quad \text {if } t \in [ {\mathfrak {u}}_{m},{\mathfrak {u}}_{m+1}) \text{ for } \text{ some } m\in {\mathbb {N}}_0.\end{aligned}$$

The “true” position process is then given by

$$\begin{aligned} Y^o(t,y,k)\,:= \, \tilde{Z}^o({\mathcal S}(t,k),y,k) \, = \, y-\int _0^{t}\bar{\omega }'(K^o(s,k))\, ds. \end{aligned}$$
(3.7)

We now denote by \({\mathfrak f}:= \min \left\{ m\in {\mathbb {N}}:\sigma _{m}=0\right\} \) the interface crossing at which the particle gets killed and let \(\mathfrak { u}_{{\mathfrak f}}:={\mathcal S}^{-1}(\tilde{\mathfrak s}_{{\mathfrak f}},k)\). To highlight the dependence of the crossing times on the starting point, sometimes we may write \(\mathfrak { u}_{y,k,{\mathfrak f}}\). The following probabilistic representation of the solution to (1.1), with interface conditions (1.2), holds.

Proposition 3.1

Let \(W_0\) be in \(\mathcal {C}_{T_o}\). Then, for any \((t,y,k)\in [0,+\infty )\times {\mathbb {R}}_*\times {\mathbb {T}}_*\),

$$\begin{aligned} W(t,y,k)\, = \, \mathbb {E}\left[ W_0\left( Y^o( t,y,k), K^o( t,k)\right) ,\, t< \mathfrak { u}_{y,k,{\mathfrak f}}\right] +T_o\mathbb {P}\left[ t\ge \mathfrak { u}_{y,k,{\mathfrak f}}\right] .\nonumber \\ \end{aligned}$$
(3.8)

In addition, if \(W_0(0,k)=T_o\) for all \(k\in {\mathbb {T}}\), then

$$\begin{aligned} W(t,y,k)\, = \, \mathbb {E}\left[ W_0\left( Y^o(t,y,k),K^o(t,k)\right) \right] , \qquad (t,y,k)\in [0,+\infty )\times {\mathbb {R}}_*\times {\mathbb {T}}_*,\nonumber \\ \end{aligned}$$
(3.9)

is the unique classical solution, in the sense of Definition 2.3, to the Cauchy problem (1.1) with initial distribution \(W_0\).

Proof

The existence and uniqueness of a classical solution and its representation by formula (3.8) have already been shown in [22, Section A and Proposition 3.2]. Recalling that \(Y^o( t,y,k)=0\) if \(t\ge {\mathfrak {u}}_{y,k,{\mathfrak f}}\), we can use the assumption \(W_0(0,k)=T_o\) to get that

$$\begin{aligned} T_o\mathbb {P}\left[ t\ge \mathfrak u_{y,k,{\mathfrak f}}\right] \,= & {} \,\mathbb {E}\left[ W_0\left( 0, K^o( t,k)\right) ,\, t\ge \mathfrak { u}_{y,k,{\mathfrak f}}\right] \, \\= & {} \, \mathbb {E}\left[ W_0\left( Y^o( t,y,k), K^o( t,k)\right) ,\, t\ge \mathfrak { u}_{y,k,{\mathfrak f}}\right] . \end{aligned}$$

Then, (3.9) follows immediately from (3.8). \(\square \)
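Formula (3.8) lends itself to a Monte Carlo evaluation: one simulates the flights of the particle, applies the transmission/reflection/killing mechanism whenever a flight crosses the interface, and averages \(W_0\) along the surviving trajectories, with killed trajectories contributing \(T_o\). The sketch below implements this for purely illustrative choices of \(\bar{\omega }\), \(R_1\), \(R_2\), \(p_0\), \(p_\pm \), \(T_o\) and \(W_0\in \mathcal {C}_{T_o}\); it is not the construction used in the proofs, only a numerical companion to it.

```python
import numpy as np
from math import log

rng = np.random.default_rng(1)

# Monte Carlo sketch of the representation (3.8). Every concrete coefficient below
# (R_1, R_2, omega, p_0, p_+, p_-, T_o, W_0) is an illustrative assumption only.
gamma_, T_o = 1.0, 1.0
beta1, beta2 = 1.5, 1.0
c1, c2 = (beta1 + 1) * 2**beta1, (beta2 + 1) * 2**beta2
R1 = lambda k: c1 * abs(k)**beta1
R2 = lambda k: c2 * abs(k)**beta2
tbar = lambda k: 1.0 / (gamma_ * R1(k))                     # (2.1)
omega_prime = lambda k: np.pi * np.cos(np.pi * k) * np.sign(k)

p0 = lambda k: 0.3 / (1.0 + log(1.0 / abs(k)))              # vanishes logarithmically, cf. (1.8)
p_plus = lambda k: 0.7 * (1.0 - p0(k))                      # transmission, bounded below as in (1.9)
p_minus = lambda k: 0.3 * (1.0 - p0(k))                     # reflection; p_+ + p_- + p_0 = 1

def sample_mu():
    while True:                                             # rejection sampling from mu = R_2 dk
        k = rng.uniform(-0.5, 0.5)
        if rng.uniform(0.0, c2 * 0.5**beta2) < R2(k):
            return k

def W0(y, k):                                               # initial data with W_0(0, k) = T_o
    return T_o + y * np.exp(-y**2)

def one_path(t_final, y, k):
    t, pos, freq = 0.0, y, k
    while t < t_final:
        flight = tbar(freq) * rng.exponential(1.0)          # duration of the current flight
        step = min(flight, t_final - t)
        new_pos = pos - omega_prime(freq) * step            # velocity is -omega'(K), cf. (3.7)
        if pos * new_pos < 0.0:                             # the flight crosses the interface
            u = rng.uniform()
            if u < p0(freq):                                # killing: the path is absorbed at o
                return T_o
            elif u < p0(freq) + p_minus(freq):              # reflection of the remaining path
                new_pos, freq = -new_pos, -freq
            # otherwise: transmission, nothing changes
        pos, t = new_pos, t + step
        if step == flight:                                  # a scattering event: resample the frequency
            freq = sample_mu()
    return W0(pos, freq)

t, y, k, n_samples = 2.0, 0.5, 0.3, 20000
W_est = np.mean([one_path(t, y, k) for _ in range(n_samples)])
print("Monte Carlo estimate of W(t, y, k):", W_est)
```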

3.2 Scaling of the process \((Y^o( t,y,k), K^o( t,k))\)

Equation (1.6) may be considered a special case of (1.1), so we first focus on establishing a flexible notation for later use. As before, we fix \((y,k)\in {\mathbb {R}}_*\times {\mathbb {T}}_*\). We let \(\lambda >0\) and rescale the clock processes, introduced in the previous section, as follows:

$$\begin{aligned} N_\lambda (t) \,:= \, N(\lambda t), \quad \tilde{N}_\lambda (t) \,:= \, \tilde{N}(\lambda t), \quad {\mathfrak T}_\lambda (t,k) \,:= \, {\frac{1}{\lambda } {\mathfrak T}(\lambda t,k)} \end{aligned}$$

and

$$\begin{aligned} {\mathcal S}_\lambda (t,k)\,:= \, \frac{1}{\lambda }{\mathcal S}(\lambda t,k) \, = \, \tilde{N}_{\lambda }^{-1}\left( {\mathfrak T}_\lambda ^{-1}(t,k)\right) . \end{aligned}$$
(3.10)

We then notice that for large \(\lambda \), the process \({\mathcal S}_\lambda (t,k)\) becomes almost deterministic. More precisely, by (1.15) we have that

$$\begin{aligned} \bar{\theta }\,:= \, \left( {\mathbb E}[\bar{t}(K_1)]\right) ^{-1}\, = \, \gamma {\mathcal R}^{-1} \, < \, +\infty . \end{aligned}$$
(3.11)

Except for the first deterministic term, the inverse \({\mathcal S}_\lambda ^{-1}(s,k) = {\mathfrak T}_\lambda (\tilde{N}_{\lambda }(s),k)\) can be represented as a Poisson sum of i.i.d. variables with finite first moment and so, by a standard argument using the strong law of large numbers, it can be shown that:

Proposition 3.2

For any \(t_*\in [0,+\infty )\), we have

$$\begin{aligned} \lim _{\lambda \rightarrow +\infty }\sup _{t\in [0,t_*]}\left| {\mathcal S}_\lambda (t,k)-\bar{\theta }t\right| =0,\quad {\mathbb {P}}\text{-a.s. } \end{aligned}$$
(3.12)
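The constant \(\bar{\theta }\) is easy to check numerically for the illustrative kernels used above: by (3.11) it is the reciprocal of the mean waiting time under \(\mu \) and equals \(\gamma {\mathcal R}^{-1}\). A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

# Check of theta_bar = (E[tbar(K_1)])^{-1} = gamma / R, cf. (3.11), for the
# illustrative kernels R_1, R_2 used in the earlier sketches.
gamma_ = 1.0
beta1, beta2 = 1.5, 1.0
c1, c2 = (beta1 + 1) * 2**beta1, (beta2 + 1) * 2**beta2
R1 = lambda k: c1 * np.abs(k)**beta1
R2 = lambda k: c2 * np.abs(k)**beta2

# Monte Carlo: sample K_1 ~ mu(dk) = R_2(k) dk by rejection and invert the mean of
# tbar(K_1); the summands have a heavy tail near k = 0, so convergence is slow.
n = 200000
k = rng.uniform(-0.5, 0.5, size=4 * n)
k = k[rng.uniform(0.0, c2 * 0.5**beta2, size=k.size) < R2(k)][:n]
theta_mc = 1.0 / np.mean(1.0 / (gamma_ * R1(k)))

# quadrature of R = int_T R_2 / R_1 dk, finite since beta_1 < 1 + beta_2, cf. (1.13), (1.15)
kk = np.linspace(1e-6, 0.5, 200000)
Rcal = 2.0 * np.sum(R2(kk) / R1(kk)) * (kk[1] - kk[0])
print("Monte Carlo:", theta_mc, "  gamma / R:", gamma_ / Rcal)
```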

We can now rescale the “position” process in the following way. Let us define \(\{\tilde{Z}_\lambda (t,y,k)\}_{t\ge 0}\) as the linear interpolation between the nodal points of \(Z_\lambda (t,y,k)\), where

$$\begin{aligned} Z_\lambda (t,y,k)\,&:= \, Z_{N_\lambda (t)}^{\lambda }(y,k), \end{aligned}$$
(3.13)
$$\begin{aligned} Z_{n}^{\lambda }(y,k)\,&:= \, y- \frac{1}{\lambda ^{1/\alpha }}\sum _{j=0}^{n-1} S(K_j)\tau _j, \quad n \in {\mathbb {N}}_0. \end{aligned}$$
(3.14)

We then construct the sequences \(\{n_{m}^\lambda \}_{m\ge 0}\), \(\{{{\mathfrak {s}}}_{m}^{\lambda }\}_{m\ge 0}\), \(\{{\tilde{\mathfrak {s}}}_{m}^{\lambda }\}_{m\ge 0}\) of indices and stopping times for the rescaled processes \(Z_\lambda (t,y,k)\), \(\tilde{Z}_\lambda (t,y,k)\), defined by analogues of (3.3) and (3.4), skipping \((y,k)\) from the notation when unambiguous. Since the law of \( Z_\lambda (t,y,k)\) is absolutely continuous with respect to the Lebesgue measure, for each \(y\in {\mathbb {R}}\) there exists a strictly increasing sequence \(n_{m}^\lambda :=N_\lambda ({{\mathfrak {s}}}_{m}^{\lambda })\) such that \( {{\mathfrak {s}}}_{m}^{\lambda } \, = \, \tilde{N}_\lambda ^{-1}\left( n_{m}^\lambda \right) . \) To set the notation for the processes subject to the random mechanism at the interface, let \(\{\sigma _{m}^\lambda \}_{m\in {\mathbb {N}}}\) be a sequence of \(\{-1,0,1\}\)-valued random variables that are independent when conditioned on \(\{K_n(k)\}_{n\ge 0}\) and such that

$$\begin{aligned} {\mathbb {P}}\left( \sigma _{m}^\lambda =\iota |\{K_n(k)\}_{n\ge 0}\right) \, = \, p_\iota (K_{n_{m}^\lambda -1}(k)),\qquad \iota \in \{-1,0,1\}.\end{aligned}$$

We then set

$$\begin{aligned} {\mathfrak f}^\lambda \,:= \, \min \{m \ge 1:\sigma _{m}^{\lambda }=0\}, \qquad \tilde{\mathfrak {s}}_{\mathfrak f}^\lambda \,:= \, \tilde{\mathfrak {s}}^{\lambda }_{{\mathfrak {f}^\lambda }}, \qquad {{\mathfrak {s}}}_{\mathfrak {f}}^\lambda \,:= \, {\mathfrak {s}}^{\lambda }_{{\mathfrak f}^\lambda }. \end{aligned}$$

The jump process \(\{Z_\lambda ^o(t,y,k)\}_{t\ge 0}\) is now defined, for any \(m\in {\mathbb {N}}_0\), as

$$\begin{aligned} Z_\lambda ^o(t,y,k)\,:= \, \left( \prod _{j=0}^{m}\sigma _{j}^{\lambda }\right) Z_\lambda (t,y,k),\quad t\in [{\mathfrak {s}}^\lambda _{m}, {\mathfrak {s}}^\lambda _{m+1}). \end{aligned}$$
(3.15)

Here \(\sigma _{0}^{\lambda }:=1\). Similarly, the continuous trajectory process \(\{\tilde{Z}_\lambda ^o(t,y,k)\}_{t\ge 0}\) can be obtained as in (3.15) but with respect to the stopping times \(\{\tilde{\mathfrak {s}}^{\lambda }_m\}_{m\in {\mathbb {N}}_0}\).

In what follows, we show that, when \(\lambda \) becomes large, it is unlikely that the scaled position process \(Z_\lambda (t,y,k)\) crosses the interface already at its first jump. Indeed, recall from (3.14) that \(Z_1^\lambda (y,k)=y-\lambda ^{-1/\alpha }S(k)\tau _0\), where \(\tau _0\) is \(\exp (1)\)-distributed. Let

$$\begin{aligned} A^{\lambda }(y,k)\,:= \, \left\{ yZ_1^\lambda (y,k)<0\right\} . \end{aligned}$$
(3.16)

The following estimate follows immediately.

Proposition 3.3

For all \(\lambda >0\), \(y\in {\mathbb {R}}_*\) and \(k\in {\mathbb {T}}_*\) we have \({\mathbb {P}}\left( A^{\lambda }(y,k)\right) \, \le \, \exp \left\{ -\frac{|y|\lambda ^{1/\alpha }}{|S(k)|}\right\} .\)

3.3 Auxiliary stable Lévy process

To describe the limit, as \(\lambda \) goes to \(+\infty \), of processes \(\{Z_\lambda ^o(t)\}_{t\ge 0}\), we consider the \(\alpha \)-stable Lévy process \(\{\zeta (t)\}_{t\ge 0}\) with the infinitesimal generator

$$\begin{aligned} \mathcal {L}_{\alpha }u(y)\,:= \, \bar{r}_*\,\mathrm{p.v.}\int _{{\mathbb {R}}}[u(y')-u(y)]q_\alpha (y'-y)\, dy',\quad u\in C_c^2({\mathbb {R}}), \end{aligned}$$
(3.17)

where \(\bar{r}_*\,:= \, R^*_2S_*^{1+\alpha } \Gamma (\alpha +1)/S'_*\). For any \(y\ne 0\), we denote \(\zeta (t,y):= y+\zeta (t)\). By a straightforward modification of (3.4), we let \(\mathfrak {t}_{0}:=0\) and denote by \(\Big \{\mathfrak {t}_{m}\Big (\Big \{\zeta (t,y)\Big \}_{t\ge 0}\Big )\Big \}_{m\ge 1}\) the consecutive times when the process \(\zeta (t,y)\) crosses the interface o. We often abbreviate

$$\begin{aligned} \mathfrak {t}_{y,m}:=\,\mathfrak {t}_{m}\left( \left\{ \zeta (t,y)\right\} _{t\ge 0}\right) \end{aligned}$$
(3.18)

and we also let

$$\begin{aligned} \mathfrak {t}_{y,\mathfrak {f}} :=\, \inf \left\{ t>0:\zeta (t,y)=0\right\} .\end{aligned}$$

It is known that \(\mathfrak {t}_{y,\mathfrak {f}}\) is finite \(\mathbb {P}\)-a.s. and its law is absolutely continuous with respect to the Lebesgue measure on \({\mathbb {R}}\) (cf. [30, Example 43.22]). We can finally introduce the tentative limit process \(\{\zeta ^o(t,y)\}_{t\ge 0}\) as follows. Recalling the definitions of \(p_{\pm }\) in (1.5) and that \(p_0=0\), which stems from (1.8), we take a sequence \(\{\sigma _m\}_{m\in {\mathbb {N}}}\) of i.i.d. \(\{-1,1\}\)-valued random variables such that for any \(m\in {\mathbb {N}}\), the random variable \(\sigma _m\) is independent of \(\{\zeta (t,y)\}_{t\ge 0}\) and \(\mathbb {P}(\sigma _m=\pm 1) = p_\pm \) (these \(\sigma _m\) should not be confused with the random variables in (3.5), whose notation we no longer use). Letting \(\sigma _0:=1\), we define for \(m\in {\mathbb {N}}_0\),

$$\begin{aligned} \zeta ^o(t,y)\,:= \,\left( \prod _{j=0}^{m}\sigma _j\right) \zeta (t,y), \quad \text{ if } \quad t \in [\mathfrak {t}_{y,m},\mathfrak {t}_{y,m+1}), \end{aligned}$$
(3.19)

and \(\zeta ^o(t,y):= 0\) if \(t\ge \mathfrak {t}_{y,\mathfrak {f}}\). The process, discussed in detail below, provides an interesting take on the question of constructing skew stable Lévy processes, a topic recently examined in [18]. In the case of a diffusive limit, when the transmission probability depends on whether the transmission occurs from the left half-line to the right one or vice versa, the limit is given by a skew Brownian motion, see [8, Theorem 3.1].
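For illustration, a trajectory of \(\zeta ^o\) can be simulated directly from (3.19): one generates symmetric \(\alpha \)-stable increments, flips the sign of the path independently with probability \(p_-\) at every interface crossing, and absorbs the path at (an approximation of) the hitting time \(\mathfrak {t}_{y,\mathfrak {f}}\). The Euler grid, the parameters and the \(\varepsilon \)-approximation of the hitting time in the sketch below are illustrative shortcuts, not part of the construction used in the proofs.

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch of the process zeta^o of (3.19): a symmetric alpha-stable path, with an
# independent transmission/reflection sign drawn at each interface crossing and an
# eps-approximation of the hitting time t_{y,f}; all concrete choices are illustrative.
alpha, p_plus, rbar = 1.5, 0.7, 1.0

def stable_increments(n, dt):
    # Chambers-Mallows-Stuck sampling of symmetric alpha-stable increments,
    # normalized so that E exp(i xi X) = exp(-dt * rbar * |xi|^alpha)
    U = rng.uniform(-np.pi / 2, np.pi / 2, size=n)
    E = rng.exponential(1.0, size=n)
    X = (np.sin(alpha * U) / np.cos(U)**(1 / alpha)
         * (np.cos((1 - alpha) * U) / E)**((1 - alpha) / alpha))
    return (rbar * dt)**(1 / alpha) * X

def zeta_o(y, t_final, dt=1e-4, eps=1e-3):
    n = int(t_final / dt)
    path = y + np.concatenate(([0.0], np.cumsum(stable_increments(n, dt))))
    out = np.empty_like(path)
    sign = 1.0
    for i, z in enumerate(path):
        if abs(z) < eps:                     # crude stand-in for the hitting time t_{y,f}
            out[i:] = 0.0
            return out
        if i > 0 and path[i - 1] * z < 0.0:  # interface crossing: transmit or reflect
            if rng.uniform() >= p_plus:      # reflection with probability p_- = 1 - p_+
                sign = -sign
        out[i] = sign * z
    return out

traj = zeta_o(y=1.0, t_final=1.0)
print(traj[:5], traj[-1])
```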

In view of limit theorems for kinetic equations, however, our main motivation is the following result.

Theorem 3.4

When \(\lambda \) tends to \(+\infty \), the processes \(\left\{ Z_{\lambda }^o(t,y,k)\right\} _{t\ge 0}\) converge, both in the sense of finite-dimensional distributions and weakly over \(\mathcal {D}[0,+\infty )\) with the \(J_1\)-topology, to \(\left\{ \zeta ^o(t,y)\right\} _{t\ge 0}\).

The proof of the theorem shall be presented in Sect. 5 below.

For simplicity, denote by \(\mathcal {X}\) the space \(\mathcal {D}[0,+\infty )\times \mathcal {C}[0,+\infty )\) equipped with the product of the \(M_1\)-topology and the topology of uniform convergence over compact intervals. The precise definition of the \(M_1\)-topology on \(\mathcal {D}[0,+\infty )\) can be found in [31, Section 12]. We only mention here that the topology is metrizable and that two paths are close when they admit parametrizations, in both the temporal and the spatial domain, that are close to each other in the supremum norm, see [31, formula (3.7), Section 12, p. 395].

It then follows from Theorem 3.4 and Proposition 3.2 that:

Corollary 3.5

As \(\lambda \rightarrow +\infty \), the processes \(\left\{ \left( \tilde{Z}_{\lambda }^o(t,y,k),\mathcal {S}_{\lambda }(t,k)\right) \right\} _{t\ge 0}\) converge, both in the sense of finite-dimensional distributions and weakly over \(\mathcal {X}\), to \(\left\{ \left( \zeta ^o(t,y),\bar{\theta }t\right) \right\} _{t\ge 0}\).

Let us denote \(K_\lambda (t,k):=K(\lambda t,k)\) and \( \mathfrak {u}_{m}^\lambda :={\mathcal S}_{\lambda }^{-1}(\tilde{\mathfrak {s}}_{m}^\lambda ,k) \). We finally define the position-frequency process \(\Big \{K_\lambda ^o(t,k), Y_\lambda ^o(t,k,y)\Big \}_{t\ge 0}\) as

$$\begin{aligned} K_\lambda ^o(t,k)\,&:= \, \left( \prod _{j=1}^{m}\sigma _{j}^{\lambda }\right) K_\lambda (t,k),\quad t\in [\mathfrak {u}_{m}^\lambda , \mathfrak {u}_{m+1}^\lambda ), \quad m \in {\mathbb {N}}_0 ; \end{aligned}$$
(3.20)
$$\begin{aligned} Y^o_\lambda (t,y,k) \,&:= \, \tilde{Z}^o_\lambda ({\mathcal S}_\lambda (t,k),y,k)\, = \, y-{\frac{1}{\lambda ^{1/\alpha -1}}}\int _0^{ t}\bar{\omega }'( K^o_\lambda (s,k)) \, ds. \end{aligned}$$
(3.21)

If we let \(\eta ^o(t,y):= \zeta ^o\left( \bar{\theta }t,y\right) \), where \(\bar{\theta }\) is defined in (3.11), then the following result is an immediate consequence of Corollary 3.5.

Corollary 3.6

As \(\lambda \) goes to \(+\infty \), the processes \(\left\{ Y^o_{\lambda }(t,y,k)\right\} _{t\ge 0}\) converge, both in the sense of finite-dimensional distributions and weakly over \(\mathcal {D}[0,+\infty )\) with the \(M_1\)-topology, to the process \(\left\{ \eta ^o(t,y)\right\} _{t\ge 0}\).

Proof

Invoking Theorem 7.2.3 of [32] and using formula (3.21), we conclude the weak convergence of the processes. Since, for any deterministic time \(t\ge 0\) we have

$$\begin{aligned} {\mathbb {P}}[\eta ^o(t,y)=\eta ^o(t-,y)]\, = \, 1, \end{aligned}$$

by virtue of Lemma 6.5.1 in [31], the set of discontinuities of the one-dimensional projection mapping \(\omega \mapsto \eta ^o(t,y,\omega )\) is of null probability. Using the continuous mapping theorem, see Theorem 2.7 of [7], we conclude the convergence of the one-dimensional distributions. The generalisation to finite-dimensional distributions is trivial. \(\square \)

We conclude this section by presenting a probabilistic characterisation of the weak solution of the limit Cauchy problem (1.17) in terms of the process \(\eta ^o(t,y)\). A proof of this result can be found in Appendix B below.

Proposition 3.7

Let \(\bar{W}_0-T_o\) belong to \(\mathcal {H}_o\) and let \(\bar{W}_0-T_\infty \) belong to \(L^2({\mathbb {R}})\) for some \(T_\infty \in {\mathbb {R}}\). Then, the function

$$\begin{aligned} \bar{W}(t,y) \, = \, {\mathbb E}\left[ \bar{W}_0\left( \eta ^o(t,y)\right) \right] , \qquad (t,y) \in \bar{\mathbb {R}}_+\times {\mathbb {R}}_*, \end{aligned}$$
(3.22)

is the unique weak solution, in the sense of Definition 2.4, to the Cauchy problem (1.17) with initial condition \(\bar{W}_0\).

4 Proof of Theorem 2.5

Since \(W_{\lambda }(t,y,k)-T_o\) is the solution of (1.6) with the interface condition (1.2) corresponding to the zero thermostat temperature, we assume without loss of generality that \(T_o=0\). Clearly, the solution \(W_\lambda \) is given by (3.8). In what follows, we often write \(W_\lambda (t)\) for \(W_\lambda (t,\cdot ,\cdot )\).

Fix a test function F in \(C^\infty _c({\mathbb {R}}\times {\mathbb {T}})\). Suppose that \(\textrm{supp}\,F\subset [-M_0,M_0]\times {\mathbb {T}}\). Thanks to Corollary 3.6, for any \(\varepsilon >0\) we can find a sufficiently large \(M>1\) such that

$$\begin{aligned} \limsup _{\lambda \rightarrow +\infty }{\mathbb {P}}[|Y_\lambda ^o(t,y,k)|\ge M]<\varepsilon ,\quad (y,k)\in [-M_0,M_0]\times {\mathbb {T}}. \end{aligned}$$

By a standard approximation argument, it is enough therefore to consider an initial condition \(W_0\in C^\infty _c({\mathbb {R}}\times {\mathbb {T}})\cap \mathcal {C}_{0}\). If the initial condition \(W_0\) is independent of k, i.e. \(W_0(y,k)=W_0(y)\), then, by Remark 2.2, we have \(W_0(0^+)=W_0(0^-)=0\) and we can use formula (3.9) to represent \(W_\lambda \). As a result, we can immediately conclude the proof of Theorem 2.5 using Proposition 3.1 (with respect to the rescaled processes), (3.22) and Corollary 3.6.

The next result enables us to replace an arbitrary initial condition \(W_0(y,k)\) by its average over \({\mathbb {T}}\) with respect to the measure \(\pi (dk)=R_2(k)/\big ({\mathcal R}R_1(k)\big )dk\). For notational simplicity, we denote by \(L^2_{\pi }({\mathbb {R}}\times {\mathbb {T}})\) the \(L^2\)-space on \({\mathbb {R}}\times {\mathbb {T}}\) with respect to the product measure \(dy\, \pi (dk)\).

Lemma 4.1

Suppose that \(W_0\in C_c({\mathbb {R}}\times {\mathbb {T}}) \). Let \(\bar{W}_0\) be the function defined by (1.16). Then, for any \(\varepsilon >0\), there exists \({\delta }_0>0\) such that

$$\begin{aligned} \limsup _{\lambda \rightarrow +\infty }\Vert W_\lambda (\delta )-\bar{W}_0\Vert _{L^2_{\pi }({\mathbb {R}}\times {\mathbb {T}})}\, < \, \varepsilon , \qquad \delta \in [0,{\delta }_0). \end{aligned}$$
(4.1)

The above lemma shall be proved at the end of this section.

Since \(W_\lambda (t)\) solves (1.6) in the sense of Definition 2.3, the existence of the directional derivative \(D_tW_{\lambda }(t,y,k)\), see (2.5), implies the differentiability of \(t\mapsto \Vert W_{\lambda }(t)\Vert ^2_{L^2_\pi ({\mathbb {R}}\times {\mathbb {T}})}\). Computing \(\frac{d}{dt} \Vert W_{\lambda }(t)\Vert ^2_{L^2_\pi ({\mathbb {R}}\times {\mathbb {T}})}\) and using (1.6) together with the interface conditions (1.2), see the calculations in [5, Section 3], we immediately conclude the following.

Proposition 4.2

The norm \( \Vert W_{\lambda }(t)\Vert _{L^2_\pi ({\mathbb {R}}\times {\mathbb {T}})}\) is a non-increasing function of \(t\in [0,+\infty )\).

We can now finish the proof of Theorem 2.5. We first show the following variant of (2.8),

$$\begin{aligned} \lim _{\lambda \rightarrow +\infty }\int _{{\mathbb {R}}\times {\mathbb {T}}}W_\lambda (t,y,k)F(y,k)\, dy\, \pi (dk) \, = \, \int _{{\mathbb {R}}\times {\mathbb {T}}}\bar{W}(t,y)F(y,k)\, dy\, \pi (dk).\nonumber \\ \end{aligned}$$
(4.2)

Choose an arbitrary \(\varepsilon >0\). Let \(\delta \in (0,\delta _0)\), with \(\delta _0\) as in Lemma 4.1, and let \(\tilde{W}_\lambda \) be the solution to the Cauchy problem (1.6) whose initial condition is given at time \(t=\delta \) by \(\tilde{W}_\lambda (\delta ,y,k)=\bar{W}_0(y)\). In particular, we can use Proposition 3.1 to represent \(\tilde{W}_\lambda \) as

$$\begin{aligned} \tilde{W}_\lambda (t,y,k)\, = \, {\mathbb E}\left[ \bar{W}_0\left( Y^o_\lambda ( t-\delta ,y,k)\right) \right] , \qquad t \ge \delta . \end{aligned}$$
(4.3)

By virtue of Proposition 4.2 and Lemma 4.1, we have that

$$\begin{aligned} \limsup _{\lambda \rightarrow +\infty }\Vert W_\lambda (t)-\tilde{W}_\lambda (t)\Vert _{L^2_{\pi }({\mathbb {R}}\times {\mathbb {T}})}\, \le \, \limsup _{\lambda \rightarrow +\infty }\Vert W_\lambda (\delta )-\bar{W}_0\Vert _{L^2_{\pi }({\mathbb {R}}\times {\mathbb {T}})}\, < \, \varepsilon . \end{aligned}$$

Therefore, it follows that

$$\begin{aligned} \limsup _{\lambda \rightarrow +\infty }\left| \int _{{\mathbb {R}}\times {\mathbb {T}}}F(y,k)\left[ W_\lambda (t,y,k)- \tilde{W}_\lambda (t,y,k)\right] \,\pi (dk) dy\right| \, \le \, \Vert F\Vert _{L^2_\pi ({\mathbb {R}}\times {\mathbb {T}})}\varepsilon . \end{aligned}$$

Recalling (3.22) and (4.3), it is not difficult to conclude from Corollary 3.6 that

$$\begin{aligned} \lim _{\lambda \rightarrow +\infty }\int _{{\mathbb {R}}\times {\mathbb {T}}}F(y,k) \tilde{W}_\lambda (t,y,k)\, dy\, \pi (dk)\, = \, \int _{{\mathbb {R}}\times {\mathbb {T}}}F(y,k) \bar{W}(t-\delta ,y)\, dy\, \pi (dk). \end{aligned}$$

Choosing \(\delta >0\) sufficiently small, we can then guarantee that

$$\begin{aligned} \left| \int _{{\mathbb {R}}\times {\mathbb {T}}}F(y,k)\left[ \bar{W}(t-\delta ,y)-\bar{W}(t,y)\right] \, dy\, \pi (dk)\right| \, < \, \varepsilon . \end{aligned}$$

As a result, we conclude that for any \(\varepsilon >0\),

$$\begin{aligned} \limsup _{\lambda \rightarrow +\infty }\left| \int _{{\mathbb {R}}\times {\mathbb {T}}}F(y,k)\left[ W_\lambda (t,y,k)-\bar{W}(t,y)\right] \, dy\, \pi (dk)\right| \, \le \,\varepsilon , \end{aligned}$$

and (4.2) immediately follows. The conclusion of Theorem 2.5 with respect to the Lebesgue measure on \({\mathbb {T}}\), as in (2.8), then follows from (4.2) and the fact that \(\Vert F \tilde{W}_\lambda (t )\Vert _\infty \le \Vert F W_0\Vert _\infty \). The claim that \(\bar{W}(t,y)\) is a weak solution of (1.17) follows from Proposition 3.7.\(\square \)

4.1 Proof of Lemma 4.1

Recall that we have assumed that \(W_0\in C_c({\mathbb {R}}\times {\mathbb {T}}) \). Fix an arbitrary \(\varepsilon \in (0,1)\). We shall show that there exists \(\delta _0\in (0,1)\) such that

$$\begin{aligned} \limsup _{\lambda \rightarrow +\infty }\Vert W_\lambda (\delta )-\hat{W}^o_\lambda (\delta )\Vert _{L^2_{\pi }({\mathbb {R}}\times {\mathbb {T}})}^2\, < \, \varepsilon ,\quad \text{ for } \text{ all } \delta \in (0,\delta _0], \end{aligned}$$
(4.4)

where \( \hat{W}^o_\lambda (t,y,k):= {\mathbb E}\left[ W_0\left( y, K_\lambda ^o(t,k)\right) \right] \). Indeed, let \(\rho >0\) be so small that

$$\begin{aligned} \limsup _{\lambda \rightarrow +\infty }\int _{\{|y|<\rho \}\times {\mathbb {T}}}| W_\lambda (\delta ,y,k)- \hat{W}^o_\lambda (\delta ,y,k)|^2\, dy\, \pi (dk) \, < \, \frac{\varepsilon }{10} \end{aligned}$$
(4.5)

for all \(\delta >0\). This is possible because the volume of integration can be made small and \(W_{\lambda }(\delta )\) is bounded in the \(L^\infty \)-norm (thanks to (3.8)).

Since \(W_0\) is compactly supported we claim that we can choose \(\rho \in (0,1)\) so small that

$$\begin{aligned} \limsup _{\lambda \rightarrow +\infty }\int _{\{ |y|\ge \rho ^{-1}\}\times {\mathbb {T}}}\Big (W_\lambda (\delta ,y,k)-\hat{W}^o_\lambda (\delta ,y,k)\Big )^2\, dy\, \pi (dk)<\frac{\varepsilon }{10} \end{aligned}$$
(4.6)

for any \(\delta >0\). Indeed, for a small \(\rho \) we have \(\hat{W}^o_\lambda (t,y,k)=0\) for \(|y|\ge \rho ^{-1}\) and all \((t,k)\in \bar{\mathbb {R}}_+\times {\mathbb {T}}_*\). To conclude (4.6) it suffices to show that

$$\begin{aligned} \lim _{\rho \rightarrow 0+}\sup _{\delta \in (0,1]}\limsup _{\lambda \rightarrow +\infty }\int _{\{ |y|\ge \rho ^{-1}\}\times {\mathbb {T}}} W_\lambda ^2(\delta ,y,k)\, dy\, \pi (dk) \, = \, 0. \end{aligned}$$
(4.7)

The function

$$\begin{aligned} W_0^*(y)\,:= \, \sup _{k\in {\mathbb {T}}}|W_0(y,k)|+\sup _{k\in {\mathbb {T}}}|W_0(-y,k)| \end{aligned}$$

is continuous, compactly supported and even. Since \(|Y^o_\lambda (\delta ,y,k)|=|Y_\lambda (\delta ,y,k)|\) for \(\delta \in [0, \mathfrak { u}_{y,k,{\mathfrak f}}^{\lambda })\) we have \(W_0^*\left( Y^o_\lambda (\delta ,y,k)\right) = W_0^*\left( Y_\lambda (\delta ,y,k)\right) \) for \(\delta \in [0, \mathfrak { u}_{y,k,{\mathfrak f}}^{\lambda })\). As a result, thanks to (3.8) with \(T_o=0\),

$$\begin{aligned} \begin{aligned} |W_\lambda (\delta ,y,k)|\le \mathbb {E}\left[ W_0^*\left( Y^o_{\lambda }( \delta ,y,k)\right) ,\, \delta< \mathfrak { u}_{y,k,{\mathfrak f}}\right]&= \mathbb {E}\left[ W_0^*\left( Y_{\lambda }( \delta ,y,k)\right) ,\, \delta < \mathfrak { u}_{y,k,{\mathfrak f}}\right] \\&\le \mathbb {E}\left[ W_0^*\left( Y_{\lambda }( \delta ,y,k)\right) \right] \\&= \mathbb {E}\left[ W_0^*\left( y+Y_{\lambda }( \delta ,0,k)\right) \right] . \end{aligned}\end{aligned}$$

Using the weak convergence of \(\Big \{Y_\lambda ( t,0,k)\Big \}_{t\ge 0}\) to \(\Big \{\eta (t)\Big \}_{t\ge 0}\) we can write

$$\begin{aligned}\begin{aligned} \limsup _{\lambda \rightarrow +\infty }&\int _{\{|y|\ge \rho ^{-1}\}\times {\mathbb {T}}} W_\lambda ^2(\delta ,y,k)\, dy\, \pi (dk) \, \\&\le \, \limsup _{\lambda \rightarrow +\infty }{\mathbb E}\left[ \int _{\{|y|\ge \rho ^{-1}\}}\left[ W_0^*(y+Y_\lambda (\delta ,0,k))\right] ^2\,dy\right] \\&= \, {\mathbb E}\left[ \int _{\{|y|\ge \rho ^{-1}\}}\left[ W_0^*(y+\eta (\delta ))\right] ^2 \, dy\right] . \end{aligned}\end{aligned}$$

Estimate (4.7) follows by first taking the supremum over \(\delta \in (0,1]\) and then the limit as \(\rho \) goes to zero.

To conclude the proof of (4.4), it remains to prove that, given \(\rho \in (0,1)\), there exists \(\delta _0\in (0,1)\) such that

$$\begin{aligned} \limsup _{\lambda \rightarrow +\infty }\int _{\{\rho \le |y|\le \rho ^{-1}\}\times {\mathbb {T}}}\left[ W_\lambda (\delta ,y,k)-\hat{W}^o_\lambda (\delta ,y,k)\right] ^2\, dy\, \pi (dk) \, < \, \frac{\varepsilon }{10} \end{aligned}$$
(4.8)

for \(\delta \in (0,\delta _0].\)

We start with the following observation. For any \(\rho '\in (0,\rho )\) and \(k\in {\mathbb {T}}_*\) we have

$$\begin{aligned} \left\{ \sup _{|y|\ge \rho }\sup _{t\in [0,\delta ]} |Y^o_\lambda ( t,y,k)-y|< \rho '\right\} \, = \, \left\{ \sup _{|y|\ge \rho }\sup _{t\in [0,\delta ]} |Y_\lambda ( t,y,k)-y|< \rho '\right\} ,\nonumber \\ \end{aligned}$$
(4.9)

where \(Y_\lambda (t,y,k)\) is the analogue of \(Y_\lambda ^o( t,y,k)\), but without the interface (or, equivalently, with \(p_+\equiv 1\) in our model). Taking the complements in (4.9) and using \(Y_\lambda (t,y,k)-y=Y_\lambda (t,0,k)\), we conclude that

$$\begin{aligned} \left\{ \sup _{|y|\ge \rho }\sup _{t\in [0,\delta ]} |Y^o_\lambda ( t,y,k)-y|\ge \rho '\right\} \, = \, \left\{ \sup _{t\in [0,\delta ]}|Y_\lambda ( t,0,k)|\ge \rho '\right\} . \end{aligned}$$
(4.10)

The processes \(\Big \{Y_\lambda ( t,0,k)\Big \}_{t\ge 0}\) converge, as \(\lambda \rightarrow +\infty \), both in the sense of finite-dimensional distributions and weakly over \(\mathcal {D}[0,+\infty )\) with the \(M_1\)-topology, to a symmetric \(\alpha \)-stable process \(\Big \{\eta (t)\Big \}_{t\ge 0}\), see [4, Theorem 3.2] or [20, Theorem 3.1]. Therefore, by (4.10) and [32, Theorem 7.4.1], we conclude that for any \(k\in {\mathbb {T}}\) and \(\rho '< \rho \), there exists a sufficiently small \(\delta _0>0\) such that

$$\begin{aligned} \limsup _{\lambda \rightarrow +\infty }\,{\mathbb {P}}\left( \sup _{|y|\ge \rho }\sup _{t\in [0,\delta ]}|Y^o_\lambda ( t,y,k)-y|\ge \rho '\right) \, < \, \frac{\varepsilon \rho }{100(\Vert W_0 \Vert ^2_\infty +1)}\nonumber \\ \end{aligned}$$
(4.11)

for all \(\delta \in (0,\delta _0]\). In particular, it follows that

$$\begin{aligned} \limsup _{\lambda \rightarrow +\infty }\sup _{|y|\ge \rho }{\mathbb {P}}\left( \delta \ge \mathfrak {u}^\lambda _{y,k,\mathfrak {f}}\right) \le \limsup _{\lambda \rightarrow +\infty } \sup _{|y|\ge \rho }{\mathbb {P}}\left( \delta \ge \mathfrak {u}^\lambda _{y,k,1}\right) <\frac{\varepsilon \rho }{100(\Vert W_0 \Vert ^2_\infty +1)},\nonumber \\ \end{aligned}$$
(4.12)

where \(\mathfrak {u}^\lambda _{y,k,1}\) and \(\mathfrak {u}^\lambda _{y,k,\mathfrak {f}}\) denote the first time the process \(Y_\lambda (t,y,k)\) crosses the interface and the time it gets killed, respectively. Coming back to the proof of (4.8), note that, by virtue of (3.8) (with \(T_o=0\)), the integrand there can be rewritten as

$$\begin{aligned}&\bigl \{{\mathbb E}\left[ W_0\left( Y_\lambda ^o( \delta ,y,k), K_\lambda ^o( \delta ,k)\right) -W_0\left( y, K_\lambda ^o( \delta ,k)\right) ,\, \delta < \mathfrak { u}_{y,k,{\mathfrak f}}^{\lambda }\right] \bigr \}^2 \\&+\bigl \{{\mathbb E}\left[ W_0\left( y, K_\lambda ^o( \delta ,k)\right) ,\, \delta \ge \mathfrak { u}_{y,k,{\mathfrak f}}^{\lambda }\right] \bigr \}^2\\&\le \, {\mathbb E}\left[ \sup _{k'\in {\mathbb {T}}}\Big (W_0\left( Y_\lambda ^o( \delta ,y,k), k'\right) -W_0\left( y, k'\right) \Big )^2\right] +3\Vert W_0\Vert _\infty ^2 {\mathbb {P}}^2\left[ \delta \ge \mathfrak { u}_{y,k,\mathfrak {f}}^{\lambda }\right] . \end{aligned}$$

Using uniform continuity of \(W_0\) and (4.9) we can select \(\rho '\) in such a way that for some \(\delta _0\in (0,1)\), we have

$$\begin{aligned}\lim _{\lambda \rightarrow +\infty }\sup _{|y|\ge \rho }{\mathbb E}\left[ \sup _{k'\in {\mathbb {T}}}\left( W_0\left( Y_\lambda ^o( \delta ,y,k), k'\right) -W_0\left( y,k'\right) \right) ^2\right] \,<\,\frac{\varepsilon \rho }{10},\end{aligned}$$

provided \(\delta \in (0,\delta _0]\). Combining this with (4.12), we conclude that for each \(k\in {\mathbb {T}}_*\), there exists \(\delta _0\in (0,1)\) such that

$$\begin{aligned} \limsup _{\lambda \rightarrow +\infty }\sup _{|y|\ge \rho }\left[ W_\lambda (\delta ,y,k)-\hat{W}^o_\lambda (\delta ,y,k)\right] ^2< \frac{\varepsilon \rho }{4} \end{aligned}$$

for \(\delta \in (0,\delta _0]\). Since the integrand in (4.8) is uniformly bounded in \(\lambda \), we can use the “limsup version” of Fatou's lemma to conclude that (4.8) holds. This ends the proof of (4.4).

4.1.1 The end of the proof of Lemma 4.1

Let \( \hat{W}_\lambda (t,y,k):={\mathbb E}\left[ W_0\left( y,K_\lambda (t,k) \right) \right] \). Using (4.12) and the fact that \(W_0\) is compactly supported, we conclude that for any \(\varepsilon >0\), there exists \(\delta _0>0\) such that

$$\begin{aligned} \limsup _{\lambda \rightarrow +\infty }\int _{{\mathbb {R}}\times {\mathbb {T}}}\left[ \hat{W}^o_\lambda (\delta ,y,k)- \hat{W}_\lambda (\delta ,y,k)\right] ^2\, dy\, \pi (dk)\, < \, \frac{\varepsilon }{10} \end{aligned}$$
(4.13)

for all \(\delta \in (0,\delta _0]\). The dynamics of the frequency process \(K_\lambda ( t,k)\) is reversible with respect to the measure \(\pi \) on the torus \({\mathbb {T}}\), and 0 is a simple eigenvalue of the generator \(L_k\), i.e. \(L_ku=0\) implies \(u\equiv \textrm{const}\), \(\pi \)-a.e. The latter follows easily from the fact that \(L_k1=0\) and from the identity

$$\begin{aligned}-\int _{{\mathbb {T}}}L_ku(k)u(k) \pi (dk)\, = \, \frac{1}{ 2 {\mathcal R}}\int _{{\mathbb {T}}^2}R_2(k) R_2(k') \left[ u(k') - u(k)\right] ^2\, dk dk', \end{aligned}$$

which holds for all \(u\in L^2(\pi )\). As a consequence, we have

$$\begin{aligned} \limsup _{\lambda \rightarrow +\infty }\int _{{\mathbb {R}}\times {\mathbb {T}}}\left[ \hat{W}_\lambda (\delta ,y,k)-\bar{W}_0(y)\right] ^2\, dy\, \pi (dk) =0 \end{aligned}$$
(4.14)

for all \(\delta \in (0,\delta _0]\). Combined with (4.4), this ends the proof of the lemma. \(\square \)

5 Proof of Theorem 3.4

To prove Theorem 3.4, we show the convergence of the Markov semigroups corresponding to processes \(\left\{ Z_{\lambda }^o(t,y,k)\right\} _{t\ge 0}\), see (3.15). In the first part of the present section, we focus on constructing a Lévy-type process \(\{\hat{Z}^o_\lambda (t,y)\}_{t\ge 0}\) whose increments, after the first jump, coincide with those of \(\left\{ Z_{\lambda }^o(t,y,k)\right\} _{t\ge 0}\).

5.1 Construction of the associated Markov process

Let us denote by r(y) the density of \(S(K_1)\), where S(k) is defined in (2.2) and \(K_1\) is distributed according to \(R_2(k)dk\) (see (3.1)). Recalling that the function S(k) is bijective, the law of \(S(K_1)\) is then given by

$$\begin{aligned} \mu _S(A)\,:= \, \int _A r(y) \, dy\, = \, \int _{S(k)\in A}R_2(k)\, dk,\quad A\in {\mathcal B}({\mathbb {R}}). \end{aligned}$$
(5.1)
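If, in addition, S is continuously differentiable with non-vanishing derivative (an assumption consistent with the constants \(S_*\), \(S'_*\) appearing in (5.4) below), the density can be written explicitly by a change of variables, namely

$$\begin{aligned} r(y) \, = \, \frac{R_2\big (S^{-1}(y)\big )}{\big |S'\big (S^{-1}(y)\big )\big |} \end{aligned}$$

for (Lebesgue-a.e.) y in the range of S.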

Since S is odd, the law \(\mu _S\) is symmetric and thus r(y) is even. Let us now define, for simplicity,

$$\begin{aligned} \tilde{p}_{\iota ,\lambda }(z)\,:= \, p_{\iota }\left( S^{-1}(\lambda ^{1/\alpha }z)\right) , \quad \iota \in \{-1,0,1\} \end{aligned}$$

for any \(\lambda >0\) and any z in \({\mathbb {R}}\), with the classical notation \(\tilde{p}_{\iota }(z):=\tilde{p}_{\iota ,1}(z)\). Clearly, all the above functions are even. Thanks to (1.8) we may assume, with no loss of generality, that

$$\begin{aligned} \tilde{p}_0\left( y\right) \succeq \frac{1}{(1+\log |y|)^{\kappa }},\quad |y|\ge 1. \end{aligned}$$
(5.2)

From the general assumptions of the model (cf. (1.8), (1.11), (1.12) and (2.3)), it is not difficult to verify, by direct calculations, the following properties:

Lemma 5.1

We have

$$\begin{aligned} r\left( y\right) \, \approx \, \frac{1}{(1+|y|)^{ 1+\alpha }}, \qquad y\in {\mathbb {R}}\end{aligned}$$
(5.3)

and

$$\begin{aligned} \lim _{y\rightarrow +\infty } r\left( y\right) y^{ 1+\alpha }\, = \, r_* \,:= \, \frac{R^*_2S_*^{1+\alpha }}{S'_*}. \end{aligned}$$
(5.4)

In addition,

$$\begin{aligned} \liminf _{y\rightarrow +\infty } \tilde{p}_0\left( y\right) \log ^{\kappa } y=:\tilde{p}_*>0. \end{aligned}$$
(5.5)

where \( \tilde{p}_* = p_*\beta _3^{\kappa }\) and \(p_*\) is given by (1.8).

Consider, for any y in \({\mathbb {R}}\),

$$\begin{aligned} \bar{r}(y) \,:= \, \int _0^{+\infty }r\left( \frac{y}{\tau }\right) \frac{e^{-\tau }d\tau }{\tau }, \end{aligned}$$
(5.6)

and its rescaled version \(\bar{r}_\lambda (y):=\lambda ^{1+1/\alpha } \bar{r}\left( \lambda ^{1/\alpha }y\right) \) for any \(\lambda >0\). Lemma 5.1 then implies that

$$\begin{aligned} \bar{r}(y) \,&\approx \, \frac{1}{(1+|y|)^{ 1+\alpha }} \quad \text { on } \,\, {\mathbb {R}}; \end{aligned}$$
(5.7)
$$\begin{aligned} \lim _{y\rightarrow +\infty } \bar{r}(y)y^{1+\alpha } \,&= \, \bar{r}_* \, = \, r_*\int _0^{+\infty }\tau ^{\alpha }e^{-\tau }\, d\tau ; \end{aligned}$$
(5.8)
$$\begin{aligned} \lim _{\lambda \rightarrow +\infty } \bar{r}_{\lambda }(y) \,&= \, \frac{\bar{r}_*}{|y|^{ 1+\alpha }}, \quad \text{ if } y \ne 0. \end{aligned}$$
(5.9)
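For instance, (5.8) can be obtained from (5.6), using (5.3)–(5.4) and dominated convergence; a short sketch:

$$\begin{aligned} \lim _{y\rightarrow +\infty }\bar{r}(y)\, y^{1+\alpha } \, = \, \lim _{y\rightarrow +\infty }\int _0^{+\infty }\Big [ r\Big (\frac{y}{\tau }\Big )\Big (\frac{y}{\tau }\Big )^{1+\alpha }\Big ]\, \tau ^{\alpha }e^{-\tau }\, d\tau \, = \, r_*\int _0^{+\infty }\tau ^{\alpha }e^{-\tau }\, d\tau \, = \, r_*\,\Gamma (1+\alpha ). \end{aligned}$$

Indeed, by (5.3) the integrand is bounded by a constant multiple of \(\tau ^{\alpha }e^{-\tau }\), while, by (5.4), it converges pointwise as \(y\rightarrow +\infty \); in particular \(\bar{r}_*=r_*\,\Gamma (1+\alpha )\).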

We now consider the Markov process \(\left\{ \hat{Z}_\lambda ^o(t,y)\right\} _{t\ge 0}\) starting at y, whose generator is defined on \(C_0({\mathbb {R}})\) - the space of continuous functions vanishing at \(\infty \) - by \({\hat{\mathcal {L}}}_{\lambda }^o u(0)=0\) and, for any \(y\in {\mathbb {R}}_*\), by

$$\begin{aligned} {\hat{\mathcal {L}}}_{\lambda }^ou(y) \, = \int _{{\mathbb {R}}}\hat{r}_{\lambda }(y,y')[u(y')-u(y)]\,dy' + k_{\lambda }(y)\left[ u(0)-u(y)\right] , \end{aligned}$$
(5.10)

where the jump and killing kernels are given by

$$\begin{aligned}\begin{aligned} \hat{r}_{\lambda }(y,y') \,&:= \, \mathbbm {1}_{yy'>0}\bar{r}_{\lambda }(y'-y)+\mathbbm {1}_{yy'<0}\left[ \tilde{p}_{+,\lambda }(y'-y)\bar{r}_{\lambda }(y'-y)\right. \\&\quad \left. +\tilde{p}_{-,\lambda }(y'+y)\bar{r}_{\lambda }(y'+y)\right] ;\\ k_{\lambda }(y)\,&:= \, \int _{y}^{+\infty }\tilde{p}_{0,\lambda }(z)\bar{r}_{\lambda }(z)\, dz\, = \, \int _{|y|}^{+\infty }\tilde{p}_{0,\lambda }(z)\bar{r}_{\lambda }(z)\, dz. \end{aligned} \end{aligned}$$

We will prove at the end of the present section that the Markov processes \(\hat{Z}^o_\lambda (t,y)\) converge as well to the limit process \(\zeta ^o(t,y)\), defined in (3.19).

Theorem 5.2

Let y be in \({\mathbb {R}}_*\). As \(\lambda \rightarrow +\infty \), the processes \(\{\hat{Z}_{\lambda }^o(t,y)\}_{t\ge 0}\) converge both in finite distributions and weakly, over \(\mathcal {D}[0,+\infty )\) with the \(J_1\)-topology, to \(\left\{ \zeta ^o(t,y)\right\} _{t\ge 0}\).

5.2 Properties of the Markov semigroup corresponding to \(\{\hat{Z}^o_\lambda (t,y)\}_{t\ge 0}\)

In order to prove Theorem 5.2, we will rely heavily on the convergence of the corresponding Markov semigroups. The process \(\hat{Z}_\lambda ^o(t,y)\) killed at the interface is Markovian and its transition semigroup is given by

$$\begin{aligned} P_t^{o,\lambda }u(y):= \,{\mathbb E}\left[ u\left( \hat{Z}_\lambda ^o(t,y)\right) ,\,t<\hat{\mathfrak s}_{y,{\mathfrak f}}^{\lambda }\right] , \qquad u\in C_0({\mathbb {R}}). \end{aligned}$$
(5.11)

Here \(\hat{\mathfrak s}^{\lambda }_{y,{\mathfrak f}}\,:= \, \inf \{t>0:\hat{Z}_\lambda ^o(t,y) =0\}\). According to [15, Corollary 4.2.8, p. 170], \(\Big (P_t^{o,\lambda }\Big )\) forms a strongly continuous semigroup of contractions on \(C_0({\mathbb {R}})\).

We shall also consider the process stopped at \(\hat{\mathfrak s}^{\lambda }_{y,{\mathfrak f}}\). Its transition semigroup equals

$$\begin{aligned} \hat{P}_{t}^{o,\lambda }u(y)\,:= \, {\mathbb E}\left[ u\left( \hat{Z}_{\lambda }^o(t,y)\right) \right] \, = \, P_{t}^{o,\lambda }u(y)+u(0){\mathbb {P}}\left( t\ge \hat{\mathfrak s}_{y,{\mathfrak f}}^{\lambda }\right) .\nonumber \\ \end{aligned}$$
(5.12)

According to Theorem 5.2 the process \(\zeta ^o(t,y)\) is the limit of the killed processes \(\{\hat{Z}^o_\lambda (t,y)\}_{t\ge 0}\). Its Markov semigroup satisfies the following.

Proposition 5.3

For any y in \({\mathbb {R}}_*\), the process \(\{\zeta ^o(t,y)\}_{t\ge 0}\) generates a symmetric, strongly continuous Markov semigroup \(\{P_t^o\}_{t\ge 0}\) on \(L^2({\mathbb {R}})\) given by

$$\begin{aligned} P_t^ou(y)\, = \,{\mathbb E}\left[ u(\zeta ^o(t,y)),\,t< \mathfrak {t}_{y,\mathfrak {f}}\right] ,\quad t\ge 0,\, u\in L^2({\mathbb {R}}). \end{aligned}$$
(5.13)

Proof

We first note that (5.13) is well defined on \(L^2({\mathbb {R}})\). Let \(u_*(y):=|u(y)|+|u(-y)|\). From (3.19), we see that \(|\zeta ^o(t,y)|=|\zeta (t,y)|\), for \(t< \mathfrak {t}_{y,\mathfrak {f}}\). Therefore, since \(u_*\) is even, we can write

$$\begin{aligned}{\mathbb E}\left[ |u(\zeta ^o(t,y))|,\, t<\mathfrak {t}_{y,\mathfrak {f}}\right] \,\le \, {\mathbb E}\left[ u_*(\zeta (t,y)),\,t<\mathfrak {t}_{y, \mathfrak {f}}\right] \le P_t u_*(y),\end{aligned}$$

where \(\{P_t\}\) is the transition semigroup corresponding to the stable process \(\{\zeta (t,y)\}_{t\ge 0}\) - well defined on any \(L^p({\mathbb {R}})\), see, e.g., [2, Section 3.4]. \(\square \)

5.2.1 Proof that \((P^o_t)_{t\ge 0}\) is a Markovian semigroup

Fix \(t,s>0\) and \(N\in {\mathbb {N}}\). Let us consider \(0<s_1<\dots <s_N\le s\), a finite family \(\{\phi _j, \, j=1,\ldots ,N \}\) of bounded Borel measurable functions \(\phi _j:{\mathbb {R}}_*\rightarrow {\mathbb {R}}_+\) and a bounded Borel measurable function \(u:{\mathbb {R}}_*\rightarrow {\mathbb {R}}_+\). It is then enough to show that

$$\begin{aligned} {\mathbb E}\left[ u(\zeta ^o(t+s,y))\Phi (y),\,t+s< \mathfrak {t}_{y,\mathfrak {f}}\right] \, = \, {\mathbb E}\left[ P_t^o u(\zeta ^o(s,y))\Phi (y),\,s< \mathfrak {t}_{y,\mathfrak {f}}\right] .\nonumber \\ \end{aligned}$$
(5.14)

where we denoted, for simplicity, \(\Phi (y)=\prod _{j=1}^N\phi _j(\zeta ^o(s_j,y))\). Recalling the definition of \(\zeta ^o(t,y)\) in (3.19), we rewrite the left-hand side of (5.14) as:

$$\begin{aligned}{} & {} \sum _{0\le n\le m}{\mathbb E}\left[ u(\zeta ^o(t+s,y))\Phi (y),\, {\mathfrak t}_{y,n} \le s< {\mathfrak t}_{y,n+1},\, {\mathfrak t}_{y,m} \le t+s< {\mathfrak t}_{y,m+1}\right] \nonumber \\{} & {} \quad =\, \sum _{0\le n\le m}\sum _{\varepsilon _1,\ldots ,\varepsilon _{m}\in \{\pm 1\}}{\mathbb E}\left[ u\left( \zeta (t+s,y)\prod _{j=1}^{m}\varepsilon _j\right) \Phi (y),\right. \nonumber \\{} & {} \quad \left. \sigma _1=\varepsilon _1,\ldots ,\sigma _m=\varepsilon _m,\, {\mathfrak t}_{y,n} \le s< {\mathfrak t}_{y,n+1},\, {\mathfrak t}_{y,m} \le t+s< {\mathfrak t}_{y,m+1}\right] . \end{aligned}$$
(5.15)

Let now \(\{ \tilde{\zeta }(t)\}_{t\ge 0}\) be an independent copy of the stable process \(\zeta (t)\). Similarly to (3.4), we can also consider, for any \(m\in \{0,1,\ldots \}\) and \(z\in {\mathbb {R}}_*\), the m-th consecutive time \(\tilde{\mathfrak t}_{z,m}\) that the process \(\{z+\tilde{\zeta }(t) \}_{t\ge 0}\) crosses the interface. Using the independence of increments for the stable Lévy process, we then rewrite the right-hand side of (5.15) as

$$\begin{aligned}{} & {} \sum _{0\le n\le m}\sum _{\varepsilon _1,\ldots ,\varepsilon _{m}\in \{\pm 1\}}{\mathbb E}\left[ u\left( \left( \zeta (s,y)+\tilde{\zeta }(t)\right) \prod _{j=1}^{m}\varepsilon _j\right) \Phi (y),\right. \nonumber \\{} & {} \quad \,\left. \sigma _1=\varepsilon _1,\ldots ,\sigma _m=\varepsilon _m,\, {\mathfrak t}_{y,n} \le s< {\mathfrak t}_{y,n+1},\, \tilde{\mathfrak t}_{\zeta (s,y),m-n} \le t< \tilde{\mathfrak t}_{\zeta (s,y),m-n+1}\right] .\nonumber \\ \end{aligned}$$
(5.16)

Then, using the symmetry of the law of a stable process, it follows that for any \(\varepsilon =\pm 1\) and any z in \({\mathbb {R}}_*\), we have that

$$\begin{aligned}\left\{ \tilde{\zeta }(t),\{\tilde{\mathfrak t}_{z,j}\}_{j\ge 0} \right\} _{t\ge 0}\, \overset{\text {(law)}}{=} \, \left\{ \varepsilon \tilde{\zeta }(t),\{\tilde{\mathfrak t}_{\varepsilon z,j}\}_{j\ge 0}\right\} _{t\ge 0}. \end{aligned}$$

Therefore, we can finally rewrite (5.16) as:

$$\begin{aligned} \begin{aligned}&\sum _{0\le n\le m}\sum _{\varepsilon _1,\ldots ,\varepsilon _{m}\in \{\pm 1\}}{\mathbb E}\Bigg [u\left( \left( \zeta ^o(s,y)+\tilde{\zeta }(t)\right) \prod _{j=n+1}^m\varepsilon _j\right) \Phi (y),\\&\sigma _1=\varepsilon _1,\ldots ,\sigma _m=\varepsilon _m,\, {\mathfrak t}_{y,n} \le s< {\mathfrak t}_{y,n+1},\, \tilde{\mathfrak t}_{\zeta ^o(s,y), m-n} \le t< \tilde{\mathfrak t}_{\zeta ^o(s,y), m-n+1}\Bigg ]\\&\, =\, \sum _{n\ge 0}\sum _{\varepsilon _1,\ldots ,\varepsilon _{n}\in \{\pm 1\}}{\mathbb E}\left[ P_t^o u\big (\zeta ^o(s,y)\big )\Phi (y), \,\sigma _1=\varepsilon _1,\ldots ,\sigma _n=\varepsilon _n,\, {\mathfrak t}_{y,n} \le s< {\mathfrak t}_{y,n+1}\right] \end{aligned} \end{aligned}$$

and (5.14) then follows immediately.

The semigroup \(P^o_t\) is symmetric. Given \(u,v\) in \(L^2({\mathbb {R}})\), our aim is to show the following:

$$\begin{aligned} \int _{{\mathbb {R}}}P_t^ou(y)v(y)\, dy\, = \, \int _{{\mathbb {R}}}u(y)P_t^ov(y)\, dy, \qquad t>0. \end{aligned}$$
(5.17)

We start by rewriting the left-hand side of (5.17) as

$$\begin{aligned}{} & {} \sum _{m\ge 0} \sum _{\varepsilon _1,\ldots ,\varepsilon _{m}\in \{\pm 1\}}\int _{{\mathbb {R}}}{\mathbb E}\left[ u\left( \zeta (t,y)\prod _{j=1}^m\varepsilon _j\right) v(y), \,\sigma _1=\varepsilon _1,\ldots ,\sigma _m=\varepsilon _m,\, {\mathfrak t}_{y,m} \right. \nonumber \\{} & {} \left. \quad \le t< {\mathfrak t}_{y,m+1}\right] \, dy \nonumber \\{} & {} \quad =\, \sum _{m\ge 0}\sum _{\varepsilon _1,\ldots ,\varepsilon _{m}\in \{\pm 1\}}\prod _{j=1}^mp_{\varepsilon _j}\int _{{\mathbb {R}}}{\mathbb E}\left[ u\left( \zeta (t,y)\prod _{j=1}^m\varepsilon _j\right) v(y),\, {\mathfrak t}_{y,m} \right. \nonumber \\{} & {} \quad \left. \quad \le t< {\mathfrak t}_{y,m+1}\right] \, dy. \end{aligned}$$
(5.18)

We have used the convention \(p_{\pm 1}:= p_{\pm }\). By a standard argument, using the Chapman-Kolmogorov equations, it is possible to show the following.

Lemma 5.4

Let \(F:\mathcal {D}[0,t]\rightarrow {\mathbb {R}}_+\) be bounded measurable. Then,

$$\begin{aligned}{} & {} \int _{\mathbb {R}}{\mathbb E}\left[ u(\zeta (t,y))v(y)F(\zeta (\cdot ,y))\right] \, dy = \, \int _{\mathbb {R}}{\mathbb E}\left[ u(y)v(\zeta (t,y))F(\zeta ^r(\cdot ,y))\right] \, dy,\nonumber \\ \end{aligned}$$
(5.19)

where the reversed time process is given by \(\zeta ^r(s,y):=\zeta ((t-s)_{-},y)\), \(s\in [0,t]\).

We can now exploit the above lemma with \(F(\zeta )=\mathbbm {1}_{\mathfrak {t}_{m}(\zeta ) \le t< \mathfrak { t}_{m+1}(\zeta )}\). Note that \(F(\zeta )=F(\zeta ^r)\). Therefore, we conclude that

$$\begin{aligned}{} & {} \int _{\mathbb {R}}{\mathbb E}\left[ u\left( \zeta (t,y)\prod _{j=1}^m\varepsilon _j\right) v(y),\, {\mathfrak t}_{m} \left( \zeta (\cdot ,y)\right) \le t< {\mathfrak t}_{m+1}\left( \zeta (\cdot ,y)\right) \right] \, dy \nonumber \\{} & {} \quad = \, \int _{\mathbb {R}}{\mathbb E}\left[ u\left( y\prod _{j=1}^m\varepsilon _j\right) v(\zeta (t,y)),\, {\mathfrak t}_{m}\left( \zeta (\cdot ,y)\right) \le t< {\mathfrak t}_{m+1}\left( \zeta (\cdot ,y)\right) \right] \, dy.\nonumber \\ \end{aligned}$$
(5.20)

If we assume for the moment that \(\prod _{j=1}^m\varepsilon _j=1\), it then obviously follows from (5.20) that

$$\begin{aligned}{} & {} \int _{{\mathbb {R}}}{\mathbb E}\left[ u\left( \zeta (t,y)\prod _{j=1}^m\varepsilon _j\right) v(y),\, {\mathfrak t}_{y,m} \le t< {\mathfrak t}_{y,m+1}\right] \, dy \nonumber \\{} & {} \quad = \, \int _{{\mathbb {R}}}{\mathbb E}\left[ u(y)\, v\left( \zeta (t,y)\prod _{j=1}^m\varepsilon _j\right) ,\, {\mathfrak t}_{y,m} \le t< {\mathfrak t}_{y,m+1} \right] \, dy, \end{aligned}$$
(5.21)

where, we recall, \({\mathfrak t}_{y,m}\) is defined in (3.18). Suppose now that \(\prod _{j=1}^m\varepsilon _j=-1\). Since the laws of \(\{\zeta (t,y)\}_{t\ge 0}\) and \(\{-\zeta (t,-y)\}_{t\ge 0}\) are identical, we can rewrite (5.20) as

$$\begin{aligned}{} & {} \int _{\mathbb {R}}{\mathbb E}\left[ u\left( \zeta (t,y)\prod _{j=1}^m\varepsilon _j\right) v(y),\, {\mathfrak t}_{y,m} \le \, t< {\mathfrak t}_{y,m+1}\right] \, dy\nonumber \\{} & {} \quad =\, \int _{\mathbb {R}}{\mathbb E}\left[ u(-y)v(-\zeta (t,-y)),\,{\mathfrak t}_{-y,m} \le t< {\mathfrak t}_{-y,m+1} \right] \, dy \nonumber \\{} & {} \quad = \, \int _{{\mathbb {R}}}{\mathbb E}\left[ u(y)v\left( \zeta (t,y)\prod _{j=1}^m\varepsilon _j\right) ,\, {\mathfrak t}_{y,m} \le t< {\mathfrak t}_{y,m+1} \right] \, dy. \end{aligned}$$
(5.22)

Applying (5.21)–(5.22) to (5.18) above, we can finally conclude the proof of (5.17). From the latter, together with the sub-Markov property of \(P_t^o\), we infer by duality and interpolation that each \(P_t^o\) is a contraction on any \(L^p({\mathbb {R}})\), \(p\in [1,+\infty )\). It is easy to check that \(\lim _{t\rightarrow 0+}P_t^of=f\) in the \(L^2\)-sense for any compactly supported continuous f. Since the set of such functions is \(L^2\)-dense, the strong continuity of the semigroup follows. \(\square \)

As in (5.12), it is now clear that for any \(u\in C_0({\mathbb {R}})\) we have

$$\begin{aligned} \hat{P}_{t}^{o}u(y)\,:= \, {\mathbb E}\left[ u\left( \zeta ^o(t,y)\right) \right] = \, P_{t}^{o}u(y)+u(0){\mathbb {P}}\left( t\ge {\mathfrak t}_{y,{\mathfrak f}}\right) , \end{aligned}$$
(5.23)

where \(\hat{P}_{t}^{o}\) is the semigroup associated with \(\{\zeta ^o(t,y)\}_{t\ge 0}\). We will show in Sect. 6 below that the following result holds:

Theorem 5.5

As \(\lambda \) tends to \(+\infty \), the semigroups \(\{ P_{t}^{o,\lambda }\}_{t\ge 0}\), defined in (5.11), strongly converge in \(L^2({\mathbb {R}})\), uniformly on compact intervals, to the semigroup \(\{P_{t}^o\}_{t\ge 0}\) given in (5.13).

Thanks to the above result, we can now prove that the Markov process \(\hat{Z}_{\lambda }^o(t,y)\) converges to \(\zeta ^o(t,y)\).

5.3 Proof of Theorem 5.2

5.3.1 Convergence of finite dimensional distributions

Since the generalisation to finite-dimensional marginals is immediate, we will show only the convergence of the one-dimensional distributions. Recalling the definition of the process \(\{\hat{Z}^o_\lambda (t,y)\}_{t\ge 0}\) in (5.10), we start by considering the symmetric Lévy process \(\{\hat{Z}_{\lambda }(t)\}_{t\ge 0}\) corresponding to the following characteristic function:

$$\begin{aligned}\mathbb {E}\left[ e^{i\xi \cdot \hat{Z}_{\lambda }(t)}\right] \, = \, e^{-t\Psi _\lambda (\xi )}, \qquad \xi \in {\mathbb {R}},\end{aligned}$$

where the Lévy symbol \( \Psi _\lambda (\xi ):=-\log \mathbb {E}\left[ e^{i\xi \cdot \hat{Z}_{\lambda }(1)}\right] \) can be written, thanks to (5.3), as

$$\begin{aligned} \Psi _\lambda (\xi )\, = \, 2\lambda ^{1+1/\alpha }\int _{0}^{+\infty }{\sin }^2\left( \pi \xi y\right) \bar{r}\left( \lambda ^{1/\alpha }y\right) \, dy \,\succeq \, |\xi |^{\alpha }\int _{|\xi | \lambda ^{-1/\alpha }}^{+\infty }\frac{\sin ^2\left( \pi y\right) }{y^{ 1+\alpha }}\, dy.\qquad \end{aligned}$$
(5.24)
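The lower bound in (5.24) can be seen, for instance, via the change of variables \(z=|\xi |y\) (for \(\xi \ne 0\)) combined with (5.7):

$$\begin{aligned} 2\lambda ^{1+1/\alpha }\int _{0}^{+\infty }\sin ^2(\pi \xi y)\,\bar{r}\big (\lambda ^{1/\alpha }y\big )\, dy \, = \, \frac{2\lambda ^{1+1/\alpha }}{|\xi |}\int _{0}^{+\infty }\sin ^2(\pi z)\,\bar{r}\Big (\frac{\lambda ^{1/\alpha }z}{|\xi |}\Big )\, dz \, \succeq \, |\xi |^{\alpha }\int _{|\xi |\lambda ^{-1/\alpha }}^{+\infty }\frac{\sin ^2(\pi z)}{z^{1+\alpha }}\, dz, \end{aligned}$$

since, by (5.7), \(\bar{r}(w)\succeq (1+w)^{-1-\alpha }\succeq w^{-1-\alpha }\) for \(w\ge 1\), which holds with \(w=\lambda ^{1/\alpha }z/|\xi |\) on the displayed domain of integration.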

As before, we then denote \(\hat{Z}_\lambda (t,y):=y+\hat{Z}_\lambda (t)\) for any \(y\in {\mathbb {R}}_*\). If we assume for the moment that \(|\xi | \ge \lambda ^{1/\alpha }\), we have that

$$\begin{aligned} \Psi _\lambda (\xi ) \, \succeq \, \lambda (|\xi |\lambda ^{-1/\alpha })^{\alpha }\int _{|\xi | \lambda ^{-1/\alpha }}^{+\infty }\frac{\sin ^2(\pi y)}{y^{ 1+\alpha }}\, dy \, \succeq \, \theta _*\lambda , \end{aligned}$$
(5.25)

where

$$\begin{aligned} \theta _*\,:= \, \inf _{z\ge 1}\left\{ z^{\alpha }\int _z^{+\infty }\frac{\sin ^2 (\pi y)}{y^{ 1+\alpha }}\, dy \right\} \, > \, 0. \end{aligned}$$
(5.26)
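That \(\theta _*\) is indeed positive can be checked by an elementary (and rather crude) bound: for every \(z\ge 1\),

$$\begin{aligned} \int _z^{+\infty }\frac{\sin ^2(\pi y)}{y^{1+\alpha }}\, dy \, \ge \, \sum _{n\ge \lceil z\rceil }\frac{1}{(n+1)^{1+\alpha }}\int _n^{n+1}\sin ^2(\pi y)\, dy \, = \, \frac{1}{2}\sum _{n\ge \lceil z\rceil }\frac{1}{(n+1)^{1+\alpha }} \, \ge \, \frac{1}{2\alpha (\lceil z\rceil +1)^{\alpha }} \, \ge \, \frac{1}{2\alpha \,(3z)^{\alpha }}, \end{aligned}$$

so that \(\theta _*\ge (2\alpha \, 3^{\alpha })^{-1}>0\).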

On the other hand, if \( |\xi | \le \lambda ^{1/\alpha }\), we can write from (5.24) that

$$\begin{aligned} \Psi _\lambda (\xi ) \, \succeq \, |\xi |^{\alpha }\int _{1}^{+\infty }\frac{\sin ^2( \pi y)}{y^{ 1+\alpha }} \, dy \, \succeq \, \theta _*|\xi |^{\alpha }. \end{aligned}$$
(5.27)

Fixing \(c>0\), we now notice from the above controls that \(\Psi _\lambda (\xi )\succeq \theta _*>0\) for all \(|\xi |\ge c\). It then follows that the random variable \(\hat{Z}_\lambda (t)\) admits a density \(f_\lambda (t,\cdot )\) belonging to \( L^1({\mathbb {R}})\cap L^\infty ({\mathbb {R}})\). Noticing that the process \(\{\hat{Z}_{\lambda }^o(t,y)\}_{t\ge 0}\) can be obtained from \(\{\hat{Z}_{\lambda }(t,y)\}_{t\ge 0}\) by a transformation of the trajectory (consisting of transmissions, reflections, or moving the particle to the interface), described in Sect. 3.2, we have in particular that

$$\begin{aligned} |\hat{Z}_{\lambda }^o(t,y)|\, \le \, |\hat{Z}_{\lambda }(t,y)|, \quad \mathbb {P}\text {-a.s.} \end{aligned}$$
(5.28)

Since we already know the weak convergence of the laws of \(\Big (\hat{Z}_{\lambda }(t,y)\Big )\), this in turn implies the tightness of the laws of the random variables \(\hat{Z}_{\lambda }^o(t,y)\), as \(\lambda \rightarrow +\infty \). To conclude the proof, it is enough to show that for any \(y\in {\mathbb {R}}_*\) and any \(\phi \in C^\infty _c({\mathbb {R}})\), we have

$$\begin{aligned} \lim _{\lambda \rightarrow +\infty }{\mathbb E}\left[ \phi \Big (\hat{Z}_{\lambda }^o(t,y)\Big )\right] \, = \,{\mathbb E}\left[ \phi \Big (\zeta ^o(t,y)\Big )\right] . \end{aligned}$$
(5.29)

Let \(\varepsilon >0\) be arbitrary. Choosing \(h\in (0,t]\) sufficiently small, we can guarantee that

$$\begin{aligned} {\mathbb {P}}\left( {\mathfrak t}_{y,1}\le h\right) +\limsup _{\lambda \rightarrow +\infty }{\mathbb {P}}\left( \hat{\mathfrak {s}}_{y,1}^{\lambda }\le h\right) \, < \, \varepsilon , \end{aligned}$$

where \({\mathfrak t}_{y,1}\) and \(\hat{\mathfrak {s}}^\lambda _{y,1}\) denote the first times the respective processes cross the interface o (cf. (3.4)). Recalling the definition of the semigroups \(\hat{P}^{o,\lambda }_t\), \(\hat{P}^{o}_t\) in (5.12) and (5.23), we then have that

$$\begin{aligned} \limsup _{\lambda \rightarrow +\infty }\left| {\mathbb E}\left[ \phi \left( \hat{Z}_{\lambda }^o(t,y)\right) \right] - {\mathbb E}\left[ \hat{P}^{o,\lambda }_{t-h}\phi \left( \hat{Z}_{\lambda } (h,y)\right) \right] \right| \,&\preceq \, \varepsilon \end{aligned}$$
(5.30)
$$\begin{aligned} \left| {\mathbb E}\left[ \phi \left( \zeta ^o(t,y)\right) \right] - {\mathbb E}\left[ \hat{P}^o_{t-h}\phi \left( \zeta (h,y)\right) \right] \right| \,&\preceq \, \varepsilon . \end{aligned}$$
(5.31)

By the above reasoning and (5.12), it now follows that

$$\begin{aligned} {\mathbb E}\left[ \hat{P}^{o,\lambda }_{t-h}\phi \left( \hat{Z}_{\lambda } (h,y)\right) \right]= & {} \int _{{\mathbb {R}}} P^{o,\lambda }_{t-h}\phi (z+y)f_{\lambda }(h,z) \, dz\nonumber \\{} & {} +\phi (0) \int _{{\mathbb {R}}}{\mathbb {P}}\left( t\ge \hat{\mathfrak s}^{\lambda }_{y+z, {\mathfrak f}}\right) f_{\lambda } (h,z) \, dz. \end{aligned}$$
(5.32)

Thanks to Theorem 5.5 and [17, Theorem 46.2], we know that \( P^{o,\lambda }_{t-h}\phi \) and \(f_{\lambda }(h,\cdot )\) strongly converge in \(L^2({\mathbb {R}})\), as \(\lambda \rightarrow +\infty \), to \( P^o_{t-h}\phi \) and \(f(h,\cdot )\), the density of \(\zeta (h)\), respectively. Therefore,

$$\begin{aligned} \lim _{\lambda \rightarrow +\infty }\int _{{\mathbb {R}}}P^{o,\lambda }_{t-h}\phi (z+y)f_{\lambda }(h,z)\, dz \, = \, \int _{{\mathbb {R}}} P^o_{t-h}\phi (z+y)f(h,z) \, dz. \end{aligned}$$
(5.33)

To deal with the remaining term in (5.32) we use the following lemma, whose proof is presented in Sect. 5.3.2.

Lemma 5.6

Let f be in \(L^1({\mathbb {R}})\). Then,

$$\begin{aligned} \lim _{\lambda \rightarrow +\infty }\int _{{\mathbb {R}}}f(y){\mathbb {P}}\left( t<\hat{\mathfrak s}^{\lambda }_{y,{\mathfrak f}}\right) \, dy \, = \, \int _{{\mathbb {R}}}f(y){\mathbb {P}}\left( t<{\mathfrak t}_{y,{\mathfrak f}}\right) \, dy. \end{aligned}$$
(5.34)

Recalling that \(f_{\lambda }(h,\cdot )\) converges to \(f(h,\cdot )\) in \(L^2({\mathbb {R}})\), Lemma 5.6 implies that

$$\begin{aligned} \lim _{\lambda \rightarrow +\infty }\int _{{\mathbb {R}}}{\mathbb {P}}\left( t\ge \hat{\mathfrak s}^{\lambda }_{y+z,{\mathfrak f}} \right) f_{\lambda }(h,z)\, dz \, = \, \int _{{\mathbb {R}}}{\mathbb {P}}\left( t\ge {\mathfrak t}_{y+z,{\mathfrak f}} \right) f(h,z) \, dz.\nonumber \\ \end{aligned}$$
(5.35)

Hence, from (5.32), (5.33), (5.35), and (5.23), we conclude that

$$\begin{aligned} \lim _{\lambda \rightarrow +\infty }{\mathbb E}\left[ \hat{P}^{o,\lambda }_{t-h}\phi \left( \hat{Z}_{\lambda }(h,y)\right) \right] \, = \, {\mathbb E}\left[ \hat{P}^o_{t-h}\phi \left( \zeta ^o(h,y)\right) \right] . \end{aligned}$$

Combining the above equation with (5.30) and (5.31), we can easily conclude the proof of the convergence of finite dimensional distributions.

5.3.2 Proof of Lemma 5.6

We are going to show (5.34) only for f in \(C^\infty _c({\mathbb {R}})\). The general statement of Lemma 5.6 then follows from a density argument. For fixed \(M>0\), let us consider an even, smooth function \(\phi _M:{\mathbb {R}}\rightarrow [0,1]\) such that

$$\begin{aligned} \phi _M(y) \, = \, {\left\{ \begin{array}{ll} 1, &{} \text{ if } |y| \le M; \\ 0, &{} \text{ if } |y| \ge 2M. \end{array}\right. } \end{aligned}$$
(5.36)

We can then define \(\psi _M:=1-\phi _M\). Using (5.28), we get that

$$\begin{aligned} {\mathbb E}\left[ \psi _M\left( \hat{Z}_{\lambda }^o(t,y)\right) \right] \, \le \, {\mathbb {P}}\left[ |\hat{Z}^o_{\lambda }(t,y)|\ge M\right] \, \le \, {\mathbb {P}}\left[ |\hat{Z}_{\lambda }(t)|\ge M/2\right] \end{aligned}$$

for any \(|y|\le M/2\). Clearly, similar reasoning applies to \(\zeta ^o(t,y)\) as well. Therefore, for any \(\varepsilon >0\), we can choose \(M>0\) and \(\lambda _0>1\) large enough, so that for \(\lambda \ge \lambda _0\) and \(|y|\le M/2\), we have

$$\begin{aligned} {\mathbb E}\left[ \psi _M\left( \hat{Z}^o_\lambda (t,y)\right) \right] + {\mathbb E}\left[ \psi _M\left( \zeta ^o(t,y)\right) \right] \, \le \, \varepsilon . \end{aligned}$$

Assuming, in addition, that M is large enough so that \(\text {supp}\, f\subseteq [-M/2,M/2]\), we then conclude that

$$\begin{aligned}{} & {} \limsup _{\lambda \rightarrow +\infty }\left| \int _{{\mathbb {R}}}f(y)\left[ {\mathbb {P}}\left( \hat{\mathfrak s}^{\lambda }_{y,{\mathfrak f}}>t\right) - P_{t}^{o,\lambda }\phi _M(y)\right] \, dy\right| \\{} & {} \quad + \left| \int _{{\mathbb {R}}}f(y)\left[ {\mathbb {P}}\left( {\mathfrak t}_{y,{\mathfrak f}}>t\right) - P_{t}^o\phi _M(y)\right] \,dy\right| \, \preceq \, \varepsilon . \end{aligned}$$

The above estimate and Theorem 5.5 finally imply that

$$\begin{aligned} \limsup _{\lambda \rightarrow +\infty }\left| \int _{{\mathbb {R}}}f(y)\left[ {\mathbb {P}}\left( \hat{\mathfrak s}^{\lambda }_{y,{\mathfrak f}}>t\right) -{\mathbb {P}}\left( {\mathfrak t}_{y,{\mathfrak f}}>t\right) \right] \, dy\right| \, \preceq \, \varepsilon , \end{aligned}$$

and (5.34) immediately follows. \(\square \)

5.3.3 Weak convergence in \(J_1\)-topology

By the previous argument, it is enough to prove tightness of the laws of \(\{\hat{Z}_{\lambda }^o(t,y)\}_{t\ge 0}\) over \(\mathcal {D}[0,+\infty )\) with the \(J_1\)-topology. We shall make use of the following notation. Given a function f in \(\mathcal {D}[0,t_*]\), \(t_*>0\) and \(a<b\) in \([0,t_*]\), we denote \( \omega \left( a,b,f\right) := \sup \{|f(t)-f(s)|:a \le s<t<b\}\). We can then define the D-modulus of f of step \(\delta >0\) as:

$$\begin{aligned} \omega '_{[0,t_*]}(\delta ,f) :=\, \inf _{\mathcal {P}\in \mathbb {I}_\delta }\max _{j=1,\dots ,N}\omega \left( t_{j-1},t_j,f\right) , \end{aligned}$$
(5.37)

where \(\mathbb {I}_\delta \) is composed of all the partitions \(\mathcal {P}=\{t_0=0<\dots <t_N=t_*\}\) of \([0,t_*]\) such that \(t_j-t_{j-1}\ge \delta \), for any \(j=1,\ldots ,N\). It is not difficult to check that the laws of \(\{\hat{Z}_\lambda (t,y)\}_{t\ge 0}\) are tight over \(\mathcal {D}[0,+\infty )\) with the \(J_1\)-topology. Theorem 13.2 in [7] then implies that for any \(t_*>0\) and any \(\varepsilon >0\), it holds that

$$\begin{aligned}{} & {} \lim _{a\rightarrow +\infty }\sup _{\lambda \ge 1}{\mathbb {P}}\left( \sup _{t\in [0,t_*]}|\hat{Z}_\lambda (t,y)|\ge a\right) \, = \, 0; \end{aligned}$$
(5.38)
$$\begin{aligned}{} & {} \quad \lim _{\delta \rightarrow 0^+}\sup _{\lambda \ge 1}{\mathbb {P}}\left( \omega '_{[0,t_*]}\left( \delta ,\hat{Z}_\lambda (\cdot ,y)\right) \ge \varepsilon \right) \, = \, 0. \end{aligned}$$
(5.39)

Moreover, using nested partitions of the time interval \([0,t_*]\), one can easily show that

$$\begin{aligned} \omega '_{[0,t_*]}\left( \delta , \hat{Z}_\lambda ^o(\cdot ,y)\right) \, \le \, 2\omega '_{[0,t_*]}\left( \delta ,\hat{Z}_\lambda (\cdot ,y)\right) . \end{aligned}$$

Thanks to the above control and (5.28), we therefore conclude that (5.38) and (5.39) hold for \(\hat{Z}_\lambda ^o(t,y)\) as well and the tightness of the latter, as \(\lambda \rightarrow +\infty \), immediately follows. \(\square \)

5.4 The end of the proof of Theorem 3.4

The conclusion of Theorem 3.4 follows from Theorem 5.2 and Proposition 3.3. Indeed, suppose that \(\{\hat{Z}_{\lambda }^o(t,y)\}_{t\ge 0}\), \(y\in {\mathbb {R}}_*\), is the process defined in the previous section. Let \(\tau _0\) be an exp(1)-distributed random variable, independent of the process. Define also a random variable \(\sigma \), independent of the process and of \(\tau _0\), such that \({\mathbb {P}}[\sigma =\iota ]=p_\iota (k)\), \(\iota \in \{-1,0,1\}\). Recall that \(A^\lambda (y,k)\) is defined by (3.16) and \(Z_1^\lambda (y,k)=y-\lambda ^{-1/\alpha }S(k)\tau _0\). Let \(A^\lambda _\iota (y,k):=\{yZ_1^\lambda (y,k)<0\text { and } \sigma =\iota \}.\) Note that

$$\begin{aligned} Z_{\lambda }^o(t,y,k) \, {\mathop {=}\limits ^\textrm{law}} \,\left\{ \begin{array}{ll} y&{} \text{ if } t<\tau _0/\lambda ,\\ 0&{}\text{ on } A^\lambda _0(y,k), \text{ if } \tau _0/\lambda \le t,\\ \hat{Z}_{\lambda }^o\Big (t-\tau _0/\lambda ,-Z_1^\lambda (y,k)\Big )&{}\text{ on } A^\lambda _{-1}(y,k), \text{ if } \tau _0/\lambda \le t,\\ \hat{Z}_{\lambda }^o\Big (t-\tau _0/\lambda , Z_1^\lambda (y,k)\Big )&{} \text{ otherwise }. \end{array} \right. \end{aligned}$$

Convergence of finite dimensional distributions, claimed in Theorem 3.4, is then a consequence of Proposition 3.3 and Theorem 5.5. Tightness can be argued from tightness of \(\Big (Z_{\lambda }(t,y,k)\Big )\) analogously to Sect. 5.3.3. \(\square \)

6 Proof of Theorem 5.5

We are going to show here the strong \(L^2\)-convergence of the semigroups \( P_{t}^{o,\lambda }\), defined in (5.11) and associated with the Markov processes \(\hat{Z}^o_\lambda (t,y)\), to the semigroup \(\{P_{t}^o\}_{t\ge 0}\), given in (5.13) and corresponding to the process \(\zeta ^o(t,y)\). The main tool used in the proof is the notion of convergence, in the sense of Mosco, of the Dirichlet forms corresponding to the semigroups. Before going into the actual proof, we recall some basic facts on the subject.

6.1 Basic notions on Sobolev spaces and Dirichlet forms

For any \(b\in (0,2)\), we define the following quadratic form:

$$\begin{aligned} {\mathcal E}[u]\,:= \, \frac{1}{2}\int _{{\mathbb {R}}^2}[u(y')-u(y)]^2q_b(y'-y)\, dydy' \end{aligned}$$
(6.1)

for any Borel function \(u:{\mathbb {R}}\rightarrow {\mathbb {R}}\). We admit the possibility that the right-hand side of (6.1) equals infinity. Then, \(H^{b/2}({\mathbb {R}}):= \left\{ u\in L^2({\mathbb {R}}):{\mathcal E}[u] < \infty \right\} \) is a Hilbert space when endowed with the following norm

$$\begin{aligned} \Vert u\Vert _{H^{b/2}({\mathbb {R}})}\,:= \, \left( \Vert u\Vert _{L^2({\mathbb {R}})}^2+ {\mathcal E}[u]\right) ^{1/2}. \end{aligned}$$
(6.2)

Recalling that \(\alpha \), defined in (1.14), belongs to (1, 2), we have that \(H^{\alpha /2}({\mathbb {R}})\subset C_b({\mathbb {R}})\) (see, e.g., [16, formula (1.4.33)]) and thus, u(0) is well defined. Moreover, \(C_c^\infty ({\mathbb {R}})\) is dense in \(H^{\alpha /2}({\mathbb {R}})\) and therefore (see [16, Example 1.4.1]) the form \({\mathcal E}\) is regular on \(H^{\alpha /2}({\mathbb {R}})\), i.e. there exists a set of compactly supported, smooth functions that is dense in \(H^{\alpha /2}({\mathbb {R}})\) (in the \(H^{\alpha /2}({\mathbb {R}})\)-norm), see [16, p. 6].

Recalling that \({\mathbb {R}}_*:={\mathbb {R}}\smallsetminus \{0\}\), we also consider the set \(H_0^{b/2}({\mathbb {R}}_*)\) obtained by the completion of \(C_c^\infty ({\mathbb {R}}_*)\) under the norm \(\Vert \cdot \Vert _{H^{b/2}({\mathbb {R}})}\). In particular, the space \(H_0^{\alpha /2}({\mathbb {R}}_*)\) can be equivalently characterised as the set of functions u in \(H^{\alpha /2}({\mathbb {R}})\) such that \(u(0)=0\). For a proof of this fact, see Corollary 6.4 below. It easily follows that the form \({\mathcal E}\) is regular on \(H_0^{\alpha /2}({\mathbb {R}}_*)\). Recall that \(\hat{\mathcal E}\) is the form defined in (2.6). Using (1.9), we easily conclude \(c{\mathcal E}\le \hat{\mathcal E}\), for some \(c\in (0,1)\). On the other hand, thanks to the Hardy inequality, see [14, Theorem 1.1, (T4)] and also Proposition 6.2 below, we can choose c so small that \(\hat{\mathcal E}\le c^{-1}{\mathcal E}\) on the space \(H^{\alpha /2}_0({\mathbb {R}}_*)\). Hence, the form \(\hat{\mathcal E}\) is comparable with \({\mathcal E}\) on the space \(H^{\alpha /2}_0({\mathbb {R}}_*)\). Thus, \(\hat{\mathcal E}\) is regular on \(H^{\alpha /2}_0({\mathbb {R}}_*)\).

To prove Theorem 5.5, we will show that the corresponding Dirichlet forms, defined below, converge in a suitable sense. More precisely (see [27, Definition 2.1.1]):

Definition 6.1

Let \(\{\mathcal {E}_{\lambda }\}_{\lambda >0}\) be a family of Dirichlet forms on \(L^2({\mathbb {R}})\) endowed with their natural domains:

$$\begin{aligned} {\mathcal D}\left( {\mathcal E}_\lambda \right) \,:=\, \left\{ u\in L^2({\mathbb {R}}):{\mathcal E}_\lambda [u]<+\infty \right\} . \end{aligned}$$

Then, \(\mathcal {E}_\lambda \) is called M-convergent to a Dirichlet form \(\mathcal {E}_\infty \), as \(\lambda \rightarrow +\infty \), if for any \(u\in L^2({\mathbb {R}})\) the following conditions are satisfied:

i)

    for any family \(\{u_{\lambda }\}_{\lambda >0}\) weakly convergent to u in \(L^2({\mathbb {R}})\), it holds that

    $$\begin{aligned} \liminf _{\lambda \rightarrow +\infty }{\mathcal E}_{\lambda }[u_{\lambda }]\, \ge \, {\mathcal E}_\infty [u] \end{aligned}$$
ii)

    there exists a family \(\{v_{\lambda }\}_{\lambda >0}\) strongly convergent to u in \(L^2({\mathbb {R}})\) such that

    $$\begin{aligned} \limsup _{\lambda \rightarrow +\infty }{\mathcal E}_{\lambda }[v_{\lambda }]\, \le \, {\mathcal E}_\infty [u]. \end{aligned}$$

The notion of M-convergence is particularly useful for our purposes since it naturally implies (cf. [27, Corollary 2.6.1]) the strong convergence of the corresponding semigroups on \(L^2({\mathbb {R}})\).

We conclude this subsection by presenting some properties of the Sobolev spaces we have just introduced. The proofs of the results formulated below can be found in “Appendix A”. Let us start with the following variant of the Hardy-type inequality of Dyda [14].

Proposition 6.2

For \(b\not =1\), there exists a positive constant \(C_b:=C(b)\) such that

$$\begin{aligned} \int _{{\mathbb {R}}}\frac{u^2(y)}{|y|^{b}}\, dy \, \le \, C_b \Vert u\Vert _{H^{b/2}_0 ({\mathbb {R}}_*)}^2,\quad \text{ for } \text{ all } u\in H_0^{b/2}({\mathbb {R}}_*). \end{aligned}$$
(6.3)

Moreover, if \(1<b_0<b_1 <2\), then \(\sup _{b\in [b_0,b_1]} C_b \, < \, +\infty \).

We then obtain a characterisation of \(H_0^{b/2}({\mathbb {R}}_*)\) as a subspace of \(H^{b/2}({\mathbb {R}})\).

Proposition 6.3

For \(b\not =1\), we have

$$\begin{aligned} H_0^{b/2}({\mathbb {R}}_*)\, = \, \left\{ u\in H^{b/2}({\mathbb {R}}):\int _{{\mathbb {R}}}\frac{u^2(y)}{|y|^{b}}dy<+\infty \right\} . \end{aligned}$$
(6.4)

Finally, two other characterisations of the space \(H^{b_0/2}_0({\mathbb {R}}_*)\) are provided for indices \(b_0\in (1,2)\).

Corollary 6.4

For every \(b_0\) in (1, 2), we have

$$\begin{aligned} H_0^{b_0/2}({\mathbb {R}}_*) \, = \, H^{b_0/2}({\mathbb {R}})\cap \bigcap _{1< b <b_0}H_0^{ b/2 }({\mathbb {R}}_*); \end{aligned}$$
(6.5)
$$\begin{aligned} H_0^{b_0/2}({\mathbb {R}}_*) \, = \, \left\{ u\in H^{b_0/2}({\mathbb {R}}):u(0)=0\right\} . \end{aligned}$$
(6.6)

6.2 Properties of the associated Dirichlet forms

Let us consider the Dirichlet form corresponding to \(P_t^{o,\lambda }\):

$$\begin{aligned} \hat{\mathcal E}_\lambda [u]\,:= \, \frac{1}{2}\int _{{\mathbb {R}}^2}\hat{r}_{\lambda }(y,y')[u(y')-u(y)]^2\, dy'dy + \int _{{\mathbb {R}}} k_{\lambda }(y)u^2(y)\, dy, \end{aligned}$$
(6.7)

where the two kernels \(\hat{r}_{\lambda }\), \(k_{\lambda }\) were defined in (5.10). Note that these forms are finite for all \(u\in L^2({\mathbb {R}})\).

It follows from Proposition 5.3 that the process \(\{\zeta ^o(t,y)\}_{t\ge 0}\) admits a Dirichlet form \(\mathcal {E}^{o}\) given by

$$\begin{aligned} \mathcal {E}^o[u]\,:= \, \lim _{t\rightarrow 0^+}\frac{1}{t}\int _{{\mathbb {R}}}\left[ u(y)-P_t^ou(y)\right] u(y)\, dy, \end{aligned}$$
(6.8)

for any function u belonging to its natural domain \(\mathcal {D}(\mathcal {E}^o):=\{u\in L^2({\mathbb {R}}):\mathcal {E}^o[u]<\infty \}\). We extend \(\mathcal {E}^o\) to the entire \(L^2({\mathbb {R}})\) by letting \(\mathcal {E}^o[u]=+\infty \) for \(u\not \in \mathcal {D}(\mathcal {E}^o)\). Thanks to (6.10) below, the form \(\mathcal {E}^o\) is regular in the sense of [16, p. 6], i.e., \(C_c^\infty ({\mathbb {R}}_*)\) is its core. The main result of the present section is the following:

Theorem 6.5

The Dirichlet forms \(\{\hat{\mathcal E}_{\lambda }\}_{\lambda >0}\) are M-convergent, as \(\lambda \) goes to \(+\infty \), to the Dirichlet form \({\mathcal E}^o\).

Before proving Theorem 6.5, we show that \(\mathcal {E}^o\) actually coincides with the Dirichlet form \( \bar{r}_*\hat{\mathcal {E}}\), defined by (2.6), and it is comparable to \(\mathcal {E}\) with \(b=\alpha \), which was given in (6.1).

Proposition 6.6

For any \(u\in H_0^{\alpha /2}({\mathbb {R}}_*)\), we have

$$\begin{aligned} \mathcal {E}^o[u]\, = \, \bar{r}_*\hat{\mathcal {E}}[u]. \end{aligned}$$
(6.9)

In addition

$$\begin{aligned} \mathcal {E}^o[u] \, \approx \, \mathcal {E}[u],\quad \text{ for } \text{ all } u\in H_0^{\alpha /2}({\mathbb {R}}_*). \end{aligned}$$
(6.10)

Proof

For notational simplicity, we start by denoting

$$\begin{aligned} \mathcal {E}^o_t[u]\,:= \, \frac{1}{t}\int _{{\mathbb {R}}}[u(y)- P_t^ou(y)]u(y)\, dy, \end{aligned}$$
(6.11)

so that, in particular, \(\mathcal {E}^o[u] = \lim _{t\rightarrow 0^+}\mathcal {E}^o_t[u]\). Using the symmetry of the semigroup \(P^o_t\), we write

$$\begin{aligned} \begin{aligned} \frac{1}{2t}\int _{{\mathbb {R}}} {\mathbb E}[(u(y)-u(\zeta ^o(t,y)))^2,\,t< \mathfrak {t}_{y,\mathfrak {f}}]\, dy\,&= \, \frac{1}{t}\int _{{\mathbb {R}}}P_t^ou^2(y)\, dy-\frac{1}{t}\int _{{\mathbb {R}}}u(y)P_t^ou(y)\, dy\\&=\mathcal {E}^o_t[u]-\frac{1}{t}\left( \int _{{\mathbb {R}}}u^2(y)\big (1-P_t^o1(y)\big )\, dy\right) . \end{aligned}\end{aligned}$$

We stress here that even though the constant function 1 is not in \(L^2({\mathbb {R}})\), it is still possible to use the symmetry of the operator \(P_t^o\), arguing by approximation. Therefore,

$$\begin{aligned} \mathcal {E}^o_t[u]\, = \, \frac{1}{t}\int _{{\mathbb {R}}}u^2(y)\mathbb {P}(t\ge \mathfrak {t}_{y,\mathfrak {f}})\, dy+\frac{1}{2t}\int _{{\mathbb {R}}} {\mathbb E}[(u(y)-u(\zeta ^o(t,y)))^2,\,t< \mathfrak {t}_{y,\mathfrak {f}}]\, dy.\nonumber \\ \end{aligned}$$
(6.12)

For any \(u\in C_c^\infty ({\mathbb {R}}_*)\), we have

$$\begin{aligned} \mathcal {E}^o[u] \, = \, \lim _{t\rightarrow 0^+}\mathcal {E}^o_t[u] \, = \, \bar{r}_*\hat{\mathcal {E}}[u]. \end{aligned}$$
(6.13)

Indeed, as \(u\in C_c^\infty ({\mathbb {R}}_*)\) we may assume that its support is contained in \([-\rho ^{-1},-\rho ]\cup [\rho ,\rho ^{-1}]\) for some \(\rho \in (0,1)\). As \(\mathfrak {t}_{y,\mathfrak {f}}\ge \mathfrak {t}_{y,1}\) and \(\zeta ^o(t,y)=\zeta (t,y)=y+\zeta (t)\) for \(t<\mathfrak {t}_{y,1}\), we have

$$\begin{aligned} \frac{1}{t}\mathbb {P}(t\ge \mathfrak {t}_{y,\mathfrak {f}})\le \frac{1}{t}\mathbb {P}(\sup _{s\in [0,t]}|\zeta (s)|\ge \rho ),\quad y\in \textrm{supp}\,u. \end{aligned}$$

From elementary properties of a stable process we conclude that the right hand side vanishes as \(t\rightarrow 0+\). The above implies that for any \(u\in C_c^\infty ({\mathbb {R}}_*)\)

$$\begin{aligned} \, \lim _{t\rightarrow 0^+}\mathcal {E}^o_t[u] \, =\lim _{t\rightarrow 0^+} \frac{1}{2t}\int _{{\mathbb {R}}} {\mathbb E}[(u(y)-u(y+\zeta (t)))^2 ]\, dy. \end{aligned}$$

The second equality in (6.13) follows from the formula for the Dirichlet form of a symmetric stable process. \(\square \)
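For completeness, we recall the formula invoked here; in this remark only, \(\nu \) denotes the Lévy measure of the stable process \(\{\zeta (t)\}_{t\ge 0}\). For any \(u\in C_c^\infty ({\mathbb {R}})\),

$$\begin{aligned} \lim _{t\rightarrow 0^+}\frac{1}{2t}\int _{{\mathbb {R}}}{\mathbb E}\left[ \big (u(y+\zeta (t))-u(y)\big )^2\right] \, dy \, = \, \frac{1}{2}\int _{{\mathbb {R}}}\int _{{\mathbb {R}}}\left[ u(y+z)-u(y)\right] ^2\, \nu (dz)\, dy, \end{aligned}$$

which is the usual expression of the Dirichlet form of a symmetric pure-jump Lévy process in terms of its Lévy measure.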

Fix y in \({\mathbb {R}}_*\) and let \(\{\tilde{\zeta }(t)\}_{t\ge 0}\) be the symmetric \(\alpha \)-stable Lévy process starting at y but killed upon hitting 0. Clearly, its transition semigroup \(\{\tilde{P}_t\}_{t\ge 0}\), given as in (5.13), consists of symmetric Markov contractions on \(L^2({\mathbb {R}})\). In particular, its corresponding Dirichlet form equals:

$$\begin{aligned} \tilde{\mathcal E}[u] \,:= \, \frac{1}{2}\int _{{\mathbb {R}}}\int _{{\mathbb {R}}}[u(y')-u(y)]^2 q_\alpha (y'-y)\, dydy' \, = \, \mathcal {E}[u], \end{aligned}$$

for any function u belonging to its natural domain \(\mathcal {D}(\tilde{\mathcal {E}})=H_0^{\alpha /2}({\mathbb {R}}_*)\). For a proof see, e.g., [13, Section 3.3.3]. We consider as well \(\tilde{\mathcal E}_t[u]\) defined as in (6.11) with respect to \(\tilde{P}_t\). The same calculations that lead to (6.12) can be performed again to show that

$$\begin{aligned} \tilde{\mathcal {E}}_t[u]\, = \, \frac{1}{t}\int _{{\mathbb {R}}}u^2(y)\mathbb {P}(t\ge \mathfrak {t}_{y,\mathfrak {f}})\, dy+\frac{1}{2t}\int _{{\mathbb {R}}} {\mathbb E}[(u(y)-u(\tilde{\zeta }(t,y)))^2,\,t< \mathfrak {t}_{y,\mathfrak {f}}]\, dy. \end{aligned}$$

Recalling the construction of the process \(\zeta ^o(t,y)\) in (3.19), we now write that

$$\begin{aligned} t\mathcal { E}^o_t[u] \,= & {} \,\int _{{\mathbb {R}}}u^2(y)\mathbb {P}(t\ge \mathfrak {t}_{y,\mathfrak {f}})\, dy \nonumber \\{} & {} +\frac{1}{2}\sum _{m\ge 0}\sum _{\varepsilon _1,\ldots ,\varepsilon _{m}\in \{\pm 1\}}\prod _{j=1}^m p_{\varepsilon _j}\int _{{\mathbb {R}}} {\mathbb E}\left[ \left( u(y)-u\left( \zeta (t,y)\prod _{i=1}^m\varepsilon _i\right) \right) ^2 ,\, \right. \nonumber \\{} & {} \quad \left. {\mathfrak t}_{y,m} \le t< {\mathfrak t}_{y,m+1}\right] \, dy. \end{aligned}$$
(6.14)

We then denote by \({\mathcal P}_m\) the family of sign sequences \(\{\varepsilon _1,\ldots ,\varepsilon _{m}\}\in \{-1,1\}^m\) such that \(\prod _{j=1}^m\varepsilon _j=1\) and by \({\mathcal P}_m^c\) its complement. Let

$$\begin{aligned} s_m\,:= \, \sum _{\{\varepsilon _1,\ldots ,\varepsilon _{m}\}\in {\mathcal P}_m}\prod _{j=1}^mp_{\varepsilon _j},\quad m=1,2,\ldots . \end{aligned}$$

By a direct calculation,

$$\begin{aligned}s_m \, = \, \frac{(p_++p_-)^m+(p_+-p_-)^m}{2}\, = \, \frac{1}{2}+\frac{1}{2}(2p_+-1)^m,\quad m=1,2,\ldots .\end{aligned}$$
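The first equality can be seen by expanding over sign sequences, while the second uses \(p_++p_-=1\): summing over all sequences, with and without the weight \(\prod _{j}\varepsilon _j\), gives

$$\begin{aligned} \sum _{\varepsilon _1,\ldots ,\varepsilon _{m}\in \{\pm 1\}}\prod _{j=1}^mp_{\varepsilon _j} \, = \, (p_++p_-)^m, \qquad \sum _{\varepsilon _1,\ldots ,\varepsilon _{m}\in \{\pm 1\}}\Big (\prod _{j=1}^m\varepsilon _j\Big )\prod _{j=1}^mp_{\varepsilon _j} \, = \, (p_+-p_-)^m, \end{aligned}$$

and the half-sum of the two identities retains exactly the sequences with \(\prod _{j=1}^m\varepsilon _j=1\), i.e. \(s_m\).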

It is then possible to show the following:

Lemma 6.7

The sequence \(\{s_m\}_{m\in {\mathbb {N}}}\) tends to 1/2. Moreover, if \(p_+>1/2\), then \(\{s_m\}_{m\in {\mathbb {N}}}\) is strictly decreasing; if \(p_+=1/2\), then \(s_m=1/2\) for any \(m\ge 1\); and if \(p_+<1/2\), then \(\{s_{2m-1}\}_{m\in {\mathbb {N}}}\) increases while \(\{s_{2m}\}_{m\in {\mathbb {N}}}\) decreases.

From (6.14), we now have that

$$\begin{aligned} \nonumber t\mathcal { E}^o_t[u] \,&\ge \, \int _{{\mathbb {R}}}u^2(y)\mathbb {P}(t\ge \mathfrak {t}_{y,\mathfrak {f}})\, dy+\frac{1}{2}\sum _{m\ge 0}s_m\int _{{\mathbb {R}}} {\mathbb E}\left[ \left( u(y)-u(\zeta (t,y))\right) ^2 ,\, {\mathfrak t}_{y,m}\le t< {\mathfrak t}_{y,m+1}\right] \, dy\nonumber \\&\ge \, \left( p_+\wedge \frac{1}{2}\right) t\tilde{\mathcal E}_t[u]. \end{aligned}$$
(6.15)
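Concerning the two inequalities in (6.15): the first keeps in (6.14) only the sign sequences in \(\mathcal P_m\) (for these, \(u\big (\zeta (t,y)\prod _{i=1}^m\varepsilon _i\big )=u(\zeta (t,y))\), and the discarded terms are nonnegative), while the second relies on Lemma 6.7, which gives

$$\begin{aligned} \inf _{m\ge 1}s_m \, \ge \, p_+\wedge \frac{1}{2}. \end{aligned}$$

Indeed, \(s_m\ge 1/2\) for every m when \(p_+\ge 1/2\), while for \(p_+<1/2\) the odd-indexed terms increase from \(s_1=p_+\) and the even-indexed ones stay above 1/2; the \(m=0\) term and the killing term in (6.14) enter with coefficient \(1\ge p_+\wedge \frac{1}{2}\).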

If we suppose now that u belongs to \({\mathcal D}(\mathcal {E}^o)\), the closure of \(C^\infty _c({\mathbb {R}}_*)\) with respect to the form \(\mathcal {E}^o\), then (6.15) implies that u belongs to \(\mathcal {D}(\tilde{\mathcal {E}})=H_0^{\alpha /2}({\mathbb {R}}_*)\), as well. On the other hand, if u is in \(C_c^\infty ({\mathbb {R}}_*)\) then we can use the Hardy inequality (6.3) in (6.13) to show that \(\mathcal {E}^o[u] \preceq \tilde{\mathcal {E}}[u]\). Such an estimate then extends to \(H_0^{\alpha /2}({\mathbb {R}}_*)\) and implies in particular that \(H_0^{\alpha /2}({\mathbb {R}}_*)\subseteq \mathcal {D}(\mathcal {E}^o)\). We have thus proven that \(H_0^{\alpha /2}({\mathbb {R}}_*)= \mathcal {D}(\mathcal {E}^o)\) and there exists a positive constant C such that

$$\begin{aligned} C^{-1}\tilde{\mathcal {E}}[u] \, \le \, \mathcal {E}^o[u] \, \le \, C \tilde{\mathcal {E}}[u],\quad u\in H_0^{\alpha /2}({\mathbb {R}}_*), \end{aligned}$$

which ends the proof of the proposition. \(\square \)

6.3 Proof of Theorem 6.5

In the present section, we are going to show that the Dirichlet forms \(\{\hat{\mathcal E}_{\lambda }\}_{\lambda >0}\), defined in (6.7), are M-convergent, when \(\lambda \) tends to \(+\infty \), to the Dirichlet form \(\bar{r}_*\hat{\mathcal E}\), where \(\hat{\mathcal E}\) is given in (2.6) and \(\bar{r}_*\) in (5.8). Thanks to Proposition 6.6, Theorem 6.5 will then follow immediately.

Recalling the meaning of M-convergence in Definition 6.1, condition ii) easily follows by choosing the trivial family \(\{v_\lambda \}_{\lambda > 0}\) of functions given by \(v_\lambda =u\) and using the Lebesgue dominated convergence theorem, together with (1.9) and (5.9). We now focus on showing condition i). Let \(\{u_{\lambda }\}_{\lambda >0}\) be a family of functions weakly convergent to u in \(L^2({\mathbb {R}})\). We start by noticing that if \(\liminf _{\lambda \rightarrow +\infty }\hat{\mathcal E}_{\lambda }[u_{\lambda }]=+\infty \), then condition i) clearly holds. We can then suppose without loss of generality that

$$\begin{aligned} \liminf _{\lambda \rightarrow +\infty }\hat{\mathcal E}_{\lambda }[u_{\lambda }] \, < \, +\infty . \end{aligned}$$
(6.16)

Let \(\{\lambda _n\}_{n\in {\mathbb {N}}}\) be a sequence in \((0,+\infty )\) such that \(\lambda _n\rightarrow +\infty \) and

$$\begin{aligned} \lim _{n\rightarrow +\infty }\hat{\mathcal E}_{n}[u_{n}]\, = \, \liminf _{\lambda \rightarrow +\infty }\hat{\mathcal E}_{\lambda }[u_{\lambda }], \end{aligned}$$

where we denoted \(u_n:=u_{\lambda _n}\), \(\hat{\mathcal E}_{n}:=\hat{\mathcal E}_{\lambda _n}\). We will use the following lemma, whose proof is presented in Sect. 6.4.

Lemma 6.8

Let \(\{f_n\}_{n\ge 1}\) be a bounded sequence in \(L^2({\mathbb {R}})\) such that \(\lim _{n\rightarrow \infty } \hat{\mathcal {E}}_n[f_n]<\infty \). Then, there exists a subsequence \(\{f_{n_k}\}_{k\ge 1}\) that converges a.e. on \({\mathbb {R}}\).

We claim that u is in \(H_0^{\alpha /2}({\mathbb {R}}_*)\). Indeed, by Lemma 6.8 above, there exists a subsequence of \(\{u_n\}_{n\ge 1}\), which for simplicity we denote by the same symbol, that converges a.e. on \({\mathbb {R}}\); by the weak convergence in \(L^2({\mathbb {R}})\), the a.e. limit is necessarily u. From (5.10) and (1.9), we now notice that

$$\begin{aligned}\hat{r}_{\lambda _n}(y,y') \, \ge \, \mathbbm {1}_{yy'>0}\bar{r}_{\lambda _n}(y'-y)+\mathbbm {1}_{yy'<0}\tilde{p}_{+,\lambda _n}(y'-y)\bar{r}_{\lambda _n}(y'-y)\, \, \succeq \, \bar{r}_{\lambda _n}(y'-y).\end{aligned}$$

Fatou’s lemma and (5.9) then imply that

$$\begin{aligned} \lim _{n\rightarrow +\infty }\hat{\mathcal E}_n[u_n]\, \ge \, \frac{1}{2} \int _{{\mathbb {R}}^2}\liminf _{n\rightarrow +\infty }\left[ \hat{r}_{\lambda _n}(y,y') \left( u_n(y')-u_n(y)\right) ^2\right] \, dy'dy\, \succeq \, \mathcal {E}[u]. \end{aligned}$$

From (6.16), we now have that \(u\in H^{\alpha /2}({\mathbb {R}})\). Using again (6.16), we notice that

$$\begin{aligned} \int _{{\mathbb {R}}}u_n^2(y) k_{\lambda _n}(y)\, dy \, \preceq \, 1, \quad n\ge N, \end{aligned}$$
(6.17)

for some \(N\in {\mathbb {N}}\). Assuming that \(\lambda ^{1/\alpha }_n|y|\ge 1\), we can use (5.2)–(5.7) to show that

$$\begin{aligned} k_{\lambda _n}(y) \, \succeq \, \lambda _n\int _{\lambda _n^{1/\alpha }|y|}^{+\infty } \frac{\left[ 1+\log (1+ z)\right] ^{-\kappa }}{\left( 1+z\right) ^{1+\alpha }}\, dz\, \succeq \,\frac{\left[ 1+\log (1+\lambda _n^{1/\alpha }|y|)\right] ^{-\kappa }}{ \left( \lambda _n^{-1/\alpha }+|y|\right) ^{\alpha }}.\nonumber \\ \end{aligned}$$
(6.18)
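The first step in (6.18) comes from the substitution \(w=\lambda _n^{1/\alpha }z\) in the definition of \(k_{\lambda _n}\); a sketch:

$$\begin{aligned} k_{\lambda _n}(y) \, = \, \int _{|y|}^{+\infty }\tilde{p}_{0,\lambda _n}(z)\,\bar{r}_{\lambda _n}(z)\, dz \, = \, \lambda _n\int _{\lambda _n^{1/\alpha }|y|}^{+\infty }\tilde{p}_{0}(w)\,\bar{r}(w)\, dw, \end{aligned}$$

since \(\tilde{p}_{0,\lambda _n}(z)=\tilde{p}_0(\lambda _n^{1/\alpha }z)\) and \(\bar{r}_{\lambda _n}(z)\, dz=\lambda _n\,\bar{r}(w)\, dw\) under this substitution; bounding \(\tilde{p}_0\) and \(\bar{r}\) from below on \([\lambda _n^{1/\alpha }|y|,+\infty )\) by means of (5.2) and (5.7) then yields (6.18).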

The second case, where \(\lambda ^{1/\alpha }_n|y|< 1\), is trivial, since then

$$\begin{aligned} k_{\lambda _n}(y) \, \succeq \, \lambda _n\int _{1}^{+\infty } \frac{\left[ 1+\log (1+ z)\right] ^{-\kappa }}{\left( 1+z\right) ^{1+\alpha }}\, dz\,\succeq \lambda _n. \end{aligned}$$

Thus, \(k_{\lambda _n}(y)\) can also be controlled from below by the right-most expression in (6.18). From (6.17) and (6.18), it follows that

$$\begin{aligned} \int _{{\mathbb {R}}}\frac{u_n^2(y)}{ \left( \lambda _n^{-1/\alpha }+|y|\right) ^{\alpha }}\cdot \frac{dy}{\left[ 1+\log (1+\lambda _n^{1/\alpha } |y|)\right] ^{\kappa }} \, \, \preceq \, 1, \end{aligned}$$
(6.19)

for any \(n\ge N\). We shall use the following.

Lemma 6.9

Let \(\{f_n\}_{n\ge 1}\) be a bounded sequence in \(L^2({\mathbb {R}})\) such that \(\lim _{n\rightarrow \infty } \hat{\mathcal {E}}_n[f_n]<\infty \) and the inequality (6.19) holds. Then, for any \(\rho \) in \((0,1-1/\alpha )\),

$$\begin{aligned} \sup _{n \ge 1}\int _{{\mathbb {R}}}\frac{f_n^2(y)}{ (\lambda _n^{-1/\alpha }+|y|)^{ \alpha (1-\rho )}} \, dy \, < \, +\infty . \end{aligned}$$
(6.20)

The lemma is shown in Sect. 6.5. We proceed with its application to the proof of Theorem 6.5. Using Fatou’s lemma in (6.20), we conclude that

$$\begin{aligned} \int _{{\mathbb {R}}}\frac{u^2(y)}{|y|^{\alpha (1-\rho )}}\, dy \, < \, +\infty , \end{aligned}$$

for any \(\rho \) in \((0,1-1/\alpha )\). Thanks to Proposition 6.3 and Corollary 6.4, we then infer that u is in \(H^{ \alpha /2}_0({\mathbb {R}}_*)\). Since \(\{u_n\}_{n\ge 1}\) converges a.e. to the function u in \(H^{ \alpha /2}_0({\mathbb {R}}_*)\), we can use Fatou's lemma and (5.9) to write that:

$$\begin{aligned} \liminf _{n\rightarrow +\infty }\hat{\mathcal E}_{n}[u_n]\, \ge \, \frac{1}{2}\int _{{\mathbb {R}}^2}\lim _{n\rightarrow +\infty }(u_n(y')-u_n(y))^2\hat{r}_{\lambda _n}(y,y')\, dydy'\, = \, \bar{r}_*\hat{\mathcal E}[u], \end{aligned}$$

and we have proven part i) of Definition 6.1, which ends the proof of Theorem 6.5.

It remains to prove the auxiliary results presented above. Before doing so, however, we need the following result:

Lemma 6.10

Let \(\{f_n\}_{n\ge 1}\) be bounded in \(L^2({\mathbb {R}})\) such that \(\lim _{n\rightarrow \infty } \hat{\mathcal {E}}_n[f_n]<\infty \). Then,

$$\begin{aligned} \lim _{K\rightarrow +\infty }\int _{|\xi |\ge K}|\hat{f}_n(\xi )|^2 \, d\xi \, = \, 0, \end{aligned}$$

uniformly in n. Here we have denoted by \(\hat{f}(\cdot )\) the Fourier transform of a function \(f\in L^2({\mathbb {R}})\).

Proof

Thanks to (1.9), we know that \(p_*:= \inf _{k\in {\mathbb {T}}} p_+(k) \, > \, 0\). Since \(\lim _{n\rightarrow \infty } \hat{\mathcal {E}}_n[f_n]<\infty \), we then conclude from (6.7) that

$$\begin{aligned} 1\, \succeq \, \hat{\mathcal E}_n[f_n] \, \ge \, \frac{p_*}{2}\int _{{\mathbb {R}}^2}\left[ f_n(y')-f_ n(y)\right] ^2 \bar{r}_{\lambda _n}(y'-y) \, dydy', \end{aligned}$$

where \(\bar{r}_{\lambda }\) is given by (5.6). The right-hand side is then of the same order of magnitude as

$$\begin{aligned} \int _{{\mathbb {R}}}|\hat{f}_n(\xi )|^2\Big (\int _{{\mathbb {R}}}\sin ^2\left( \pi \xi y\right) \bar{r}_{\lambda _n}\left( y\right) \, dy\Big )d\xi \, \preceq \, 1. \end{aligned}$$
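For the reader's convenience, here is the identity behind this claim (a sketch, with the Fourier convention \(\hat{f}(\xi )=\int _{{\mathbb {R}}}f(y)e^{-2\pi i\xi y}\, dy\), which matches the appearance of \(\sin ^2(\pi \xi y)\) in (5.24)): by Plancherel's theorem and Fubini's theorem,

$$\begin{aligned} \int _{{\mathbb {R}}^2}\left[ f_n(y')-f_n(y)\right] ^2\bar{r}_{\lambda _n}(y'-y)\, dy\, dy' \, = \, \int _{{\mathbb {R}}}\bar{r}_{\lambda _n}(z)\int _{{\mathbb {R}}}|\hat{f}_n(\xi )|^2\left| e^{2\pi i \xi z}-1\right| ^2 d\xi \, dz \, = \, 4\int _{{\mathbb {R}}}|\hat{f}_n(\xi )|^2\Big (\int _{{\mathbb {R}}}\sin ^2(\pi \xi z)\,\bar{r}_{\lambda _n}(z)\, dz\Big )\, d\xi . \end{aligned}$$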

Fixing \(K>0\), it now follows that

$$\begin{aligned} \int _{|\xi |\ge K}\Psi _n(\xi )|\hat{f}_n(\xi )|^2 \, d\xi \, \preceq \, 1, \end{aligned}$$
(6.21)

where \(\Psi _n:=\Psi _{\lambda _n}\) was defined in (5.24). Thus, for any n so large that \( K\le \lambda _n^{1/\alpha }\), the estimates (6.21), (5.25) and (5.27) imply that

$$\begin{aligned} K^{\alpha }\theta _*\int _{\lambda _n^{1/\alpha }\ge |\xi |\ge K}|\hat{f}_n(\xi )|^2\, d\xi \preceq \int _{|\xi |\ge K}\Psi _n(\xi )|\hat{f}_n(\xi )|^2 \, d\xi \preceq 1. \end{aligned}$$

where \(\theta _*\) is given by (5.26). Hence,

$$\begin{aligned}\limsup _{n\rightarrow +\infty }\int _{\lambda _n^{1/\alpha }\ge |\xi |\ge K}|\hat{f}_n(\xi )|^2\, d\xi \, \preceq \, K^{-\alpha },\end{aligned}$$

while, by (6.21) and (5.25), \(\int _{|\xi |\ge \lambda _n^{1/\alpha }}|\hat{f}_n(\xi )|^2\, d\xi \, \preceq \, (\theta _*\lambda _n)^{-1}\), and the conclusion of Lemma 6.10 follows. \(\square \)

6.4 Proof of Lemma 6.8

Fix \(\delta >0\) and let us consider the function \(\phi _\delta \) given in (5.36). We claim that the sequence \(\{f_n\phi _\delta \}_{n\ge 1}\) is strongly compact in \(L^2({\mathbb {R}})\). Using the Pego Criterion, see [29, Theorem 3], it is enough to show that

$$\begin{aligned}&\lim _{K\rightarrow +\infty }\int _{|y|>K}|f_n(y) \phi _\delta (y)|^2 \, dy\, = \, 0; \end{aligned}$$
(6.22)
$$\begin{aligned}&\quad \lim _{K\rightarrow +\infty }\int _{|\xi |>K}|\widehat{f_n\phi _\delta }(\xi )|^2\, d\xi \, = \, 0, \end{aligned}$$
(6.23)

uniformly in \(n \in {\mathbb {N}}\). Clearly, (6.22) holds since \(\phi _\delta \) vanishes outside \([-2\delta ,2\delta ]\), so that the integral there vanishes as soon as \(K>2\delta \). To show (6.23), we notice that

$$\begin{aligned} \int _{|\xi |>K}|\widehat{f_n\phi _\delta }(\xi )|^2\, d\xi \,{} & {} = \, \int _{|\xi |>K}\left| \int _{{\mathbb {R}}}\widehat{f}_n(\xi -\eta )\widehat{\phi }_\delta (\eta )\, d\eta \right| ^2\, d\xi \, \le \, I_1(K,L)\nonumber \\{} & {} \quad +I_2(K,L), \end{aligned}$$
(6.24)

where for any fixed \(L>0\), we have

$$\begin{aligned} \begin{aligned} I_1(K,L)\,&:= \, 2 \int _{|\xi |>K}\left| \int _{|\eta |\le L}\widehat{f}_n(\xi -\eta )\widehat{\phi }_\delta (\eta )\, d\eta \right| ^2\, d\xi ,\\ I_2(K,L)\,&:= \, 2 \int _{|\xi |>K}\left| \int _{|\eta |> L}\widehat{f}_n(\xi -\eta )\widehat{\phi }_\delta (\eta )\, d\eta \right| ^2\, d\xi . \end{aligned} \end{aligned}$$

The Cauchy-Schwarz inequality and Lemma 6.10 then imply that for any fixed \(L>0\),

$$\begin{aligned} \lim _{K\rightarrow +\infty }\sup _{n\in {\mathbb {N}}} I_1(K,L)\, \le \, 4L\Vert \phi _\delta \Vert _{L^2({\mathbb {R}})}^2\lim _{K\rightarrow +\infty }\sup _{n\in {\mathbb {N}}}\int _{|\xi |>K-L}\left| \widehat{f}_n(\xi )\right| ^2\,d\xi \, = \, 0.\nonumber \\ \end{aligned}$$
(6.25)
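A brief justification of the constant in (6.25): applying the Cauchy-Schwarz inequality in the \(\eta \)-variable, Plancherel's theorem and Fubini's theorem (with \(\xi '=\xi -\eta \) and \(K>L\)),

$$\begin{aligned} I_1(K,L) \, \le \, 2\,\Vert \hat{\phi }_\delta \Vert _{L^2({\mathbb {R}})}^2\int _{|\eta |\le L}\int _{|\xi |>K}|\hat{f}_n(\xi -\eta )|^2\, d\xi \, d\eta \, \le \, 4L\,\Vert \phi _\delta \Vert _{L^2({\mathbb {R}})}^2\int _{|\xi '|>K-L}|\hat{f}_n(\xi ')|^2\, d\xi ', \end{aligned}$$

and, by Lemma 6.10, the last integral tends to 0 as \(K\rightarrow +\infty \), uniformly in n.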

On the other hand, recalling that \(\{f_n\}_{n\ge 1}\) is uniformly bounded in \(L^2({\mathbb {R}})\) and \(\hat{\phi }_\delta \) belongs to the Schwartz class, we conclude that for any \(\varepsilon >0\) it is possible to choose \(L:=L(\varepsilon )\), sufficiently large, such that

$$\begin{aligned} I_2(K,L)\, \preceq \, 2\Vert f_n\Vert ^2_{L^2({\mathbb {R}})}\left( \int _{|\eta |>L}|\widehat{\phi }_\delta (\eta )|d\eta \right) ^2 \, \preceq \, \varepsilon , \end{aligned}$$
(6.26)

uniformly in n. From (6.24)–(6.26), we can then conclude that (6.23) follows. From the compactness of \(\left\{ f_n\phi _\delta \right\} _{n\ge 1}\), we can now choose a subsequence that is a.e. convergent on \([-\delta ,\delta ]\). Applying this for \(\delta =1,2,\ldots \) and using the Cantor diagonal argument, we then find a subsequence of \(\left\{ f_n\right\} _{n\ge 1}\) that converges a.e. on \({\mathbb {R}}\). \(\square \)

6.5 Proof of Lemma 6.9

First observe that since the sequence \(\{f_n\}_{n\ge 1}\) is bounded in \(L^2({\mathbb {R}})\), it is enough to show that for any \(\rho \) small enough, we have:

$$\begin{aligned} \limsup _{n \rightarrow +\infty }\int _{-1}^1\frac{f_n^2(y)}{ \left( \lambda _n^{-1/\alpha }+|y|\right) ^{ \alpha (1-\rho )}}\, dy \,<+\infty . \end{aligned}$$
(6.27)

We will actually prove an analogue of (6.27) with the integral over [0, 1], as the argument in the case of \([-1,0]\) is similar. The proof is divided into three steps.

Step 1. We start by claiming that for any \(\rho \) in (0, 1), \(\gamma >1\) and \(\lambda _n\) large enough,

$$\begin{aligned} \int _{0}^{c_n(\gamma )}\frac{f_n^2(y)}{(\lambda _n^{-1/\alpha }+y)^{\alpha (1-\rho )}}\, dy\, \preceq \, c_n(\gamma /2), \end{aligned}$$
(6.28)

where

$$\begin{aligned} c_n(\gamma ) \,:= \, \exp \left\{ -\log ^{\gamma }_2\lambda _n^{{1/\alpha }}\right\} \end{aligned}$$
(6.29)

and \(\log _2x:=\log \log x\), \(x>1\). Indeed, let us denote

$$\begin{aligned} I_m \,:= \, \left\{ y\in [0,1]:e^{m-1} \lambda _n^{-1/\alpha }\le y<e^m\lambda _n^{-1/\alpha }\right\} , \end{aligned}$$

for any \(m\in {\mathbb {N}}\) and \(I_{0}:=\left\{ y\in [0,1] :y<\lambda _n^{-1/\alpha }\right\} \). Estimate (6.19) now implies that

$$\begin{aligned} \sum _{m=0}^{N_1}a_m\int _{I_m}\frac{f_n^2(y)}{\left( \lambda _n^{-1/\alpha }+y \right) ^{ \alpha (1-\rho )}}\, dy\, \preceq \,\sum _{m=0}^{N_1}\frac{1}{(1+m)^{\kappa }}\int _{I_m}\frac{f_n^2(y)}{\left( \lambda _n^{-1/\alpha }+y\right) ^{\alpha }} \, dy \, \preceq \, 1, \end{aligned}$$
(6.30)

where the positive integer \(N_1\) is such that

$$\begin{aligned} e^{N_1-1}\lambda ^{-1/\alpha }_n \le 1 < e^{N_1}\lambda ^{-1/\alpha }_n \end{aligned}$$
(6.31)

and

$$\begin{aligned} a_m\,:= \, \frac{e^{\rho \alpha (N_1-m) }}{(1+m)^{\kappa }}. \end{aligned}$$

It is not difficult to check that the sequence \(\{ a_m\}_{m\ge 1}\) is decreasing (a short verification is given at the end of this step) and thus its minimum over \(1\le m\le m_*(\gamma ):=N_1-[\log ^{\gamma }N_1]\) equals \(a_{m_*(\gamma )}\). Hence, (6.30) implies that

$$\begin{aligned} a_{m_*(\gamma )}\int _{0}^{e^{m_*(\gamma )-N_1}}\frac{f_n^2(y)}{\left( \lambda _n^{-1/\alpha }+y\right) ^{ \alpha (1-\rho )}}\, dy \, \preceq \, 1, \end{aligned}$$

since \( e^{m_*(\gamma )}\lambda _n^{-1/\alpha } \ge e^{m_*(\gamma )-N_1}\) (see (6.31)). Estimate (6.28) follows from the observation that \(e^{m_*(\gamma )-N_1} \ge c_n(\gamma )\) and

$$\begin{aligned} \frac{1}{a_{m_*(\gamma )}} \, \preceq \, \frac{N_1^\kappa }{e^{\rho \alpha [\log ^\gamma N_1]}} \, \preceq \, \exp \left\{ \kappa \log N_1-\rho \alpha [\log ^{\gamma }N_1]\right\} \, \preceq \, c_n(\gamma /2).\end{aligned}$$

In order to prove the rightmost inequality, it is enough to show that

$$\begin{aligned}\rho \alpha [\log ^\gamma N_1]-\kappa \log N_1 \, \succeq \,(\log _2\lambda _n^{1/\alpha })^{\gamma /2}.\end{aligned}$$

To see the last estimate, recall that by (6.31), we have \(N_1-1\le \log \lambda _n^{1/\alpha }\le N_1.\) Hence it suffices to show \(\rho \alpha [\log ^\gamma N_1]-\kappa \log N_1 \, \succeq \, (\log N_1)^{\gamma /2}\). The latter holds, since \(\gamma >1\vee \gamma /2\).
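
For completeness, we record the two elementary facts used above. First, the first inequality in (6.30) follows from a pointwise bound on \(I_m\), \(0\le m\le N_1\): since \(y\le e^{m}\lambda _n^{-1/\alpha }\) there and \(\lambda _n^{-1/\alpha }\le e^{1-N_1}\) by (6.31), we have

$$\begin{aligned} a_m\left( \lambda _n^{-1/\alpha }+y\right) ^{\alpha \rho }\, \le \, a_m\left( 2e^{m}\lambda _n^{-1/\alpha }\right) ^{\alpha \rho }\, \le \, \frac{(2e)^{\alpha \rho }}{(1+m)^{\kappa }}, \quad y\in I_m, \end{aligned}$$

and therefore \(a_m\left( \lambda _n^{-1/\alpha }+y\right) ^{-\alpha (1-\rho )}\preceq (1+m)^{-\kappa }\left( \lambda _n^{-1/\alpha }+y\right) ^{-\alpha }\) on \(I_m\). Second, the sequence \(\{a_m\}_{m\ge 1}\) is indeed decreasing, since

$$\begin{aligned} \frac{a_{m+1}}{a_m}\, = \, e^{-\rho \alpha }\left( \frac{1+m}{2+m}\right) ^{\kappa }\, <\, 1, \quad m\ge 1. \end{aligned}$$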

Step 2. We then show that for any \(\lambda _n>1\), the function \(f_n\) can be decomposed as:

$$\begin{aligned} f_n(y ) \, = \, f^{(1)}_n(y)+f_n^{(2)}(y), \end{aligned}$$
(6.32)

for two functions \(f^{(1)}_n\), \(f^{(2)}_n\) such that

$$\begin{aligned} \sup _{y\in {\mathbb {R}}}|f^{(1)}_n(y+h)-f^{(1)}_n(y)| \,&\preceq \, h^{(\alpha -1)/2},\quad h>0, \end{aligned}$$
(6.33)
$$\begin{aligned} \Vert f^{(2)}_n\Vert _{L^2({\mathbb {R}})}\,&\preceq \, 1/\sqrt{\lambda _n}. \end{aligned}$$
(6.34)

We start by defining

$$\begin{aligned} f^{(1)}_n(y)\,&:= \, \int _{|\xi |\le \lambda _n^{1/\alpha }}\exp \left\{ 2\pi i \xi y\right\} \hat{f}_n(\xi )\, d\xi ,\\ f^{(2)}_n(y)\,&:= \, \int _{|\xi |> \lambda _n^{1/\alpha }}\exp \left\{ 2\pi i \xi y\right\} \hat{f}_n(\xi )\, d\xi . \end{aligned}$$

Clearly, (6.32) then holds. We again use the function \(\Psi _n\) defined in (5.24). For fixed \(h>0\), we apply (5.27) to write

$$\begin{aligned} |f^{(1)}_n(y+h)-f^{(1)}_n(y)|\, \preceq \, \frac{1}{\sqrt{\theta _*}}\int _{|\xi |\le \lambda _n^{1/\alpha }}\Psi _n^{1/2}(\xi )\left| \hat{f}_n(\xi )\right| \frac{\left| 1-\exp \left\{ 2\pi i \xi h\right\} \right| }{|\xi |^{ \alpha /2}}\, d\xi . \end{aligned}$$

Using the Cauchy–Schwarz inequality and then (6.21), we conclude that estimate (6.33) holds:

$$\begin{aligned} |f^{(1)}_n(y+h)-f^{(1)}_n(y)| \, \preceq \, \left( \int _{{\mathbb {R}}}\frac{\left| 1-\exp \left\{ 2\pi i \xi \right\} \right| ^2}{|\xi | ^{ \alpha }}\, d\xi \right) ^{1/2}h^{(\alpha -1)/2}\, \preceq \, h^{(\alpha -1)/2}.\end{aligned}$$
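
The last bound relies on the change of variables \(\xi \mapsto \xi /h\), together with the observation that the resulting integral is finite when \(1<\alpha <2\):

$$\begin{aligned} \int _{{\mathbb {R}}}\frac{\left| 1-\exp \left\{ 2\pi i \xi h\right\} \right| ^2}{|\xi |^{ \alpha }}\, d\xi \, = \, h^{\alpha -1}\int _{{\mathbb {R}}}\frac{\left| 1-\exp \left\{ 2\pi i \xi \right\} \right| ^2}{|\xi |^{ \alpha }}\, d\xi \, \preceq \, h^{\alpha -1}, \end{aligned}$$

since the integrand on the right-hand side is of order \(|\xi |^{2-\alpha }\) near the origin and of order \(|\xi |^{-\alpha }\) at infinity.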

On the other hand, estimate (6.34) follows from (5.25) and (6.21):

$$\begin{aligned} \Vert f^{(2)}_n\Vert _{L^2({\mathbb {R}})}^2\, \le \, \frac{1}{\theta _*\lambda _n}\int _{|\xi |> \lambda _n^{1/\alpha }}\Psi _n(\xi )\left| \hat{f}_n\left( \xi \right) \right| ^2\, d\xi \, \preceq \, \frac{1 }{\lambda _n}. \end{aligned}$$

Step 3. Fix \(\gamma >1\) and n large enough so that \(\lambda _n\ge e\), and consider

$$\begin{aligned} \tilde{I}_m \,:= \, \{y\in [0,1]:mc_n(\gamma ) \le y<(m+1)c_n(\gamma )\}, \end{aligned}$$

for any \(m\in {\mathbb {N}}_0\). We can then write

$$\begin{aligned} \int _{0}^1\frac{f_n^2(y)}{ (\lambda _n^{-1/\alpha }+y)^{ \alpha (1-\rho )}} \, dy \, \le \, \sum _{m=0}^{N_2} \int _{\tilde{I}_m}\frac{f_n^2(y)}{ (\lambda _n^{-1/\alpha }+y)^{ \alpha (1-\rho )}} \, dy \, \preceq \, I_n+I\!I_n, \end{aligned}$$
(6.35)

where \(N_2\) is the (unique) positive integer such that \(N_2 c_n(\gamma )\le 1< (N_2+1)c_n(\gamma )\) and

$$\begin{aligned} \begin{aligned} I_n \,:= \, \sum _{m=0}^{N_2} \int _{0}^{c_n(\gamma )}\frac{f_n^2(y)}{(\lambda _n^{-1/\alpha }+mc_n(\gamma )+y)^{\alpha (1-\rho )}}\, dy;\\ I\!I_n := \, \sum _{m=1}^{N_2} \int _{0}^{c_n(\gamma )}\frac{[f_n(y+mc_n(\gamma ))-f_n(y)]^2}{ (\lambda _n^{-1/\alpha }+mc_n(\gamma )+y)^{\alpha (1-\rho )}}\, dy. \end{aligned} \end{aligned}$$

To control the first term \(I_n\), we start by noticing that if \(|y|\le c_n(\gamma )\), then \( \lambda _n^{-1/\alpha }+y \le 2c_n (\gamma )\) for sufficiently large n. We can then write:

$$\begin{aligned} \begin{aligned} \sum _{m=0}^{N_2} \frac{1}{(\lambda _n^{-1/\alpha }+mc_n(\gamma )+y)^{\alpha (1-\rho )}}\,&\preceq \, \frac{1}{(\lambda _n^{-1/\alpha }+y)^{\alpha (1-\rho )}}+\frac{1}{c_n^{ \alpha (1-\rho )}(\gamma )} {\sum _{m=1}^{+\infty } \frac{1}{m^{\alpha (1-\rho )}}} \\&\preceq \, \frac{1}{(\lambda _n^{-1/\alpha }+y)^{ \alpha (1-\rho )}}, \end{aligned}\end{aligned}$$

provided that \(\rho \) is so small that \( \alpha (1-\rho )>1\). From (6.28) we thus have that \(I_n\preceq c_n(\gamma /2)\preceq 1\), uniformly in n. Concerning the second term \(I\!I_n\), we use (6.32) to estimate it by \(2\left( I\!I^{(1)}_n+I\!I^{(2)}_n\right) \), where, for \(j=1,2\), \(I\!I^{(j)}_n\) is obtained from \(I\!I_n\) by replacing \(f_n\) there with \(f^{(j)}_n\). Noticing that \( N_2= \left[ c^{-1}_n(\gamma )\right] \), estimate (6.33) now implies that

$$\begin{aligned} I\!I^{(1)}_n \,\preceq \, \sum _{m=1}^{N_2} \frac{c_n(\gamma )(mc_n(\gamma ))^{\alpha -1}}{(mc_n(\gamma ))^{\alpha (1-\rho )}} \, \preceq \, c_n^{\alpha \rho }(\gamma )\sum _{m=1}^{N_2}\frac{1}{m^{1-\rho \alpha }} \, \preceq \, (c_n(\gamma ) N_2)^{ \alpha \rho } \, \preceq \, 1. \end{aligned}$$
(6.36)
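
For the reader's convenience, the sum appearing in (6.36) can be bounded by comparison with an integral: since \(0<\rho \alpha <1\),

$$\begin{aligned} \sum _{m=1}^{N_2}\frac{1}{m^{1-\rho \alpha }}\, \le \, 1+\int _1^{N_2}x^{\rho \alpha -1}\, dx\, \le \, 1+\frac{N_2^{\rho \alpha }}{\rho \alpha }\, \preceq \, N_2^{\rho \alpha }, \end{aligned}$$

so that \(c_n^{\alpha \rho }(\gamma )\sum _{m=1}^{N_2}m^{\rho \alpha -1}\preceq \left( c_n(\gamma ) N_2\right) ^{\alpha \rho }\le 1\), by the choice of \(N_2\).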

To control \(I\!I^{(2)}_n\), we recall that \(\lambda _n \) dominates \(c_n^{-\alpha (1-\rho )}(\gamma ) = \exp \left\{ \alpha (1-\rho )\log _2^{\gamma }\lambda _n^{1/\alpha } \right\} \) for n sufficiently large (cf. (6.29); this holds because \(\left( \log _2 x\right) ^{\gamma }=o(\log x)\) as \(x\rightarrow +\infty \)), and then, thanks to (6.34), we write

$$\begin{aligned} \begin{aligned} I\!I^{(2)}_n \,&\preceq \, \Vert f^{(2)}_n\Vert ^2_{L^2({\mathbb {R}})}\sum _{m=1}^{N_2} \frac{1}{(mc_n(\gamma ))^{ \alpha (1-\rho )}} \, \preceq \, \frac{1}{\lambda _n c_n^{\alpha (1-\rho )}(\gamma )}\, \preceq \, 1. \end{aligned} \end{aligned}$$

This ends the proof of (6.27). \(\square \)