1 Introduction

To measure the fluctuations of a family of bounded linear operators \(A_t:L^p(X)\rightarrow L^p(X)\), where \(t > 0\) and X is a measure space, it may be useful to consider quantities involving many differences \(A_t f(\cdot )-A_sf(\cdot )\), with \(s,t>0\) and \(f\in L^p(X)\). Among these quantities, variation and oscillation seminorms are probably the best known. The corresponding variational inequalities, stating that the \(L^p\)-norm of the variation or the oscillation of \((A_t f)_{t > 0}\) is uniformly bounded by the \(L^p\)-norm of f, have attracted increasing interest in the last fifty years.

In fact, in 1976 D. Lépingle proved a first variational inequality for a family of bounded martingales [19], also providing a weak type (1, 1) variant (see also [22] for extensions and a different proof). Then Gaposhkin in [10, 11] considered oscillation inequalities for standard ergodic averages. Some years later, in 1989, J. Bourgain proved the pointwise convergence of ergodic averages along polynomial orbits by replacing the classical estimates for the Hardy–Littlewood maximal function by variational seminorm bounds [4]. Further results may be found in [13, 15].

After the seminal work by Bourgain, in light of the applications to pointwise convergence phenomena, the study of variational inequalities spread in many different contexts. For an updated survey, especially from the point of view of oscillation estimates, we refer to [24]. The recent paper [23] deals with jump inequalities, seen as endpoint results for variation inequalities. Focusing on the field of harmonic analysis, we recall here the cases of the Hilbert transform [5], Fejér and Poisson kernels [17], families of truncations of Gaussian Riesz transforms [12] and the heat and Poisson semigroups of the Laplacian and Hermite operator [2, 9]. Analogous results have been obtained for semigroups associated with Fourier-Bessel expansions [3], for spherical means or averages along curves \((t, t^a)\) in the plane [16] and for differential and singular integral operators in some weighted Lebesgue spaces [20, 21]. Some results are also known for the Ornstein–Uhlenbeck semigroup; see below.

To define the variation seminorm \(v(\rho )\), let \(\phi \) be a real- or complex-valued function defined in an interval I. Then for \(1 \le \rho < \infty \)

$$\begin{aligned} \Vert \phi \Vert _{v(\rho ), I} := \sup \left( \sum _{i=1}^{n} |\phi (t_i) - \phi (t_{i-1})|^\rho \right) ^{1/\rho }, \end{aligned}$$

where the supremum is taken over all finite, increasing sequences \(\left( t_i \right) _0^n\) of points in I. This is a seminorm which vanishes only for constant functions. We will often omit indicating the interval I. The space \(V(\rho ,I)\) consists of those functions \(\phi \) in I for which \( \Vert \phi \Vert _{v(\rho ),I} < \infty \). In this paper we will only consider the variation of continuous functions \(\phi (t)\).
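As a concrete illustration (not part of the original text), the seminorm can be approximated from below by evaluating the sum over one fixed fine partition; the following Python sketch, assuming NumPy, does this for \(\phi = \sin\) on \([0, 2\pi]\), whose total variation (the case \(\rho = 1\)) equals 4.

```python
import numpy as np

# Lower bound for ||phi||_{v(rho),I}: any fixed increasing sequence of sample
# points gives (sum of |increments|^rho)^{1/rho} <= ||phi||_{v(rho),I}.
def var_rho(samples, rho):
    return (np.abs(np.diff(samples)) ** rho).sum() ** (1.0 / rho)

t = np.linspace(0.0, 2.0 * np.pi, 10001)
phi = np.sin(t)

v1 = var_rho(phi, 1.0)   # total variation of sin on [0, 2*pi] is 4
assert abs(v1 - 4.0) < 1e-3

# the seminorm is nonincreasing in rho, as recalled in Section 2
assert var_rho(phi, 3.0) <= var_rho(phi, 2.0) <= v1
```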

We next introduce the one-dimensional Ornstein–Uhlenbeck semigroup. Let \(R(x) = x^2/2\) for \(x \in {\mathbb {R}}\), and define the measure \(d\gamma _\infty (u) = (2\pi )^{-1/2} \, \exp (-R(u))\,du\) in \(\mathbb R\). The semigroup is then given by

$$\begin{aligned} H_t f(x) = \int f(u)\,K_t(x,u)\,d\gamma _\infty (u), \qquad t>0, \end{aligned}$$

where \(f \in L^1(\gamma _\infty )\) and the kernel \(K_t\) is

$$\begin{aligned} K_t(x,u) = \frac{e^{R(x)}}{\sqrt{1-e^{-2t}}} \, \exp \left( -\frac{1}{2}\,\frac{(e^{-t}u - x)^2}{1-e^{-2t}}\right) , \qquad t>0, \quad (x,u)\in {\mathbb {R}}\times {\mathbb {R}}. \end{aligned}$$

The measure \(\gamma _\infty \) is the unique probability measure which is invariant under the semigroup.
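As a numerical sanity check (a sketch assuming NumPy, not part of the argument), one can verify that \(K_t(x,\cdot)\,d\gamma_\infty\) is a probability measure for each x and \(t>0\); a short computation shows that \(K_t(x,u)\,d\gamma_\infty(u)\) is the Gaussian density in u with mean \(e^{-t}x\) and variance \(1-e^{-2t}\).

```python
import numpy as np

def K(t, x, u):
    # Mehler kernel K_t(x,u) as in the display above
    w = -np.expm1(-2.0 * t)   # w = 1 - e^{-2t}
    return np.exp(x * x / 2.0) / np.sqrt(w) * np.exp(-0.5 * (np.exp(-t) * u - x) ** 2 / w)

u = np.linspace(-12.0, 12.0, 200001)
du = u[1] - u[0]
dgamma = np.exp(-u * u / 2.0) / np.sqrt(2.0 * np.pi)   # density of gamma_inf

for t in (0.1, 1.0, 3.0):
    for x in (-1.0, 0.0, 2.0):
        # Riemann sum approximating int K_t(x,u) dgamma_inf(u); should be 1
        mass = (K(t, x, u) * dgamma).sum() * du
        assert abs(mass - 1.0) < 1e-6
```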

We will consider the variation of the semigroup, i.e., the seminorm \(\Vert H_t f(x)\Vert _{v(\rho ),{\mathbb {R}}_+}\) taken with respect to t and considered as an operator defined for \(f \in L^1(\gamma _\infty )\).

When \(\rho > 2\) it is known that this operator is bounded from \(L^p(\gamma _\infty )\) to \(L^p(\gamma _\infty )\) for \(1<p<\infty \), even for the Ornstein–Uhlenbeck semigroup in any finite dimension. This follows from [14], where a general symmetric diffusion semigroup is considered. Another proof can be found in [18, Corollary 4.5]; it is verified in [1, page 31] that this corollary can be applied in our setting.

The inspiration for the present work came from a comment in [1, page 31] saying that no variational weak type (1, 1) bound is known for the Ornstein–Uhlenbeck semigroup. We will prove the following one-dimensional result.

Theorem 1.1

For each \(\rho > 2\) the operator that maps \(f \in L^1(\gamma _\infty )\) to the function

$$\begin{aligned} \Vert H_t f(x)\Vert _{v(\rho ),{\mathbb {R}}_+}, \quad x \in {\mathbb {R}}, \end{aligned}$$

where the \(v(\rho )\) seminorm is taken in the variable t, is of weak type (1, 1) with respect to the measure \(\gamma _\infty \).

In other words, the inequality

$$\begin{aligned} \gamma _\infty \{x\in {\mathbb {R}}: \Vert H_t f(x)\Vert _{v(\rho ),{\mathbb {R}}_+}> \alpha \} \le \frac{C}{\alpha }\,\Vert f\Vert _{L^1( \gamma _\infty )}, \qquad \alpha >0, \end{aligned}$$

holds for some \(C > 0\) and all functions \(f\in L^1 (\gamma _\infty )\).

The structure of this paper is as follows. Section 2 contains some preliminaries, mainly concerning the variation seminorm and the derivative of \(K_t\) with respect to t. In the following sections, Theorem 1.1 will be obtained as a direct consequence of Propositions 3.1, 4.1 and 5.1. Of these, Proposition 3.1 deals with the variation only for \(t \ge 1\). For these values of t, the estimate (1.2) is slightly strengthened. In Sect. 4, we split the operator given by the variation for \(0 < t \le 1\) into a local and a global part. This is done by means of a partition of the line into intervals where the density of \(\gamma _\infty \) is essentially constant. Then Proposition 4.1, dealing with the global part, is proved.

Proposition 5.1 is an estimate of the local part of the variation in \(0 < t \le 1\), and is stated and proved in the long Sect. 5. The proof goes via Proposition 5.2, which deals with one of the intervals of the partition at a time, and where \(\gamma _\infty \) is replaced by Lebesgue measure. Finally, Proposition 5.2 is seen to follow from estimates for the variation of integrals of an \(L^1\) function over certain intervals, obtained as a consequence of a known theorem about the variation of mean values.

Theorem 1.1 is proved in the one-dimensional case, and the handling of integrals over intervals in Sect. 5 just mentioned seems hard to extend to higher dimensions, because of geometrical obstructions. Only the results of Sects. 3 and 4 extend easily.

We point out that in Sects. 3 and 4, we use arguments similar to some from the authors’ papers [7] and [8]. Rather than invoking the results from there, we prefer to give the proofs explicitly.

2 Preliminaries

By \(C < \infty \) and \(c > 0\) we denote many different absolute constants, and \(X \lesssim Y\), or equivalently \(Y \gtrsim X\), means \(X \le C Y\). We write \(X \simeq Y\) if both \(X \lesssim Y\) and \(Y \lesssim X\).

Seminorms of type \(\Vert . \Vert _{v(\rho ),I}\) will always be taken in one of the variables t or \(\tau \).

We will let \(\dot{K}_t(x,u) = \partial K_t(x,u)/\partial t\).

It is not immediately obvious that the function \(x \mapsto \Vert H_t f(x)\Vert _{v(\rho ),{\mathbb {R}}_+}\) is measurable. But \(H_t f(x)\) is continuous in t for each x, as seen by dominated convergence. In the definition of the \({v(\rho )}\) seminorm, it is therefore enough to consider sequences of points \(t_i \in {\mathbb {Q}}\), thus only a countable family of sequences. The measurability follows.

We give some simple properties of the variation, and first observe that the seminorm \(\Vert .\Vert _{v(\rho )}\) is decreasing in \(\rho \) for \(1 \le \rho < \infty \). This seminorm is also subadditive in I, in the following sense. Take an inner point \(\tau \) of I and set \(I_+ = I \cap [\tau , +\infty )\) and \(I_- = I \cap (-\infty , \tau ]\). Then for \(1 \le \rho < \infty \) and any \(\phi \)

$$\begin{aligned} \Vert \phi \Vert _{v(\rho ), I} \le \Vert \phi \Vert _{v(\rho ), I_+} + \Vert \phi \Vert _{v(\rho ), I_-}. \end{aligned}$$

Lemma 2.1

Let \(1 \le \rho < \infty \).

  1. (a)

    If \(\phi \in C^1(I)\) and \(\phi ' \in L^1(I)\), then \(\phi \in V(\rho ,I)\) and

    $$\begin{aligned} \Vert \phi \Vert _{v(\rho ), I} \le \int _{I} |\phi '(t)|\,dt. \end{aligned}$$
  2. (b)

    If \(\phi \) is monotone and bounded in I, then \(\phi \in V(\rho ,I)\) and

    $$\begin{aligned} \Vert \phi \Vert _{v(\rho ), I} \le 2 \sup _I |\phi |. \end{aligned}$$

Both parts here are easy for \(\rho = 1\), and the general case follows since the seminorm is decreasing in \(\rho \).

The variation of products can be estimated as follows.

Lemma 2.2

Let \(\phi \) and \(\psi \) be bounded functions defined in the interval I. Then for any \(1 \le \rho < \infty \)

$$\begin{aligned} \Vert \phi \psi \Vert _{v(\rho )} \le \Vert \phi \Vert _\infty \Vert \psi \Vert _{v(\rho )} + \Vert \phi \Vert _{v(\rho )} \Vert \psi \Vert _\infty . \end{aligned}$$

To prove this, it is enough to write for an increasing sequence \((t_i)\) in I

$$\begin{aligned} \phi (t_i)\psi (t_i) - \phi (t_{i-1})\psi (t_{i-1}) = \phi (t_i)(\psi (t_i) -\psi (t_{i-1})) + (\phi (t_i)- \phi (t_{i-1}))\psi (t_{i-1}), \end{aligned}$$

and then take the \(\ell ^\rho \) norm.
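The partitionwise inequality behind this argument can be checked numerically; the sketch below (assuming NumPy; the random vectors play the role of the sample values \(\phi(t_i)\), \(\psi(t_i)\)) verifies it for random data.

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 3.0

def lrho(increments):
    # ell^rho norm of a vector of increments
    return (np.abs(increments) ** rho).sum() ** (1.0 / rho)

for _ in range(1000):
    phi = rng.standard_normal(20)   # values phi(t_0), ..., phi(t_19)
    psi = rng.standard_normal(20)
    lhs = lrho(np.diff(phi * psi))
    rhs = np.abs(phi).max() * lrho(np.diff(psi)) + lrho(np.diff(phi)) * np.abs(psi).max()
    assert lhs <= rhs + 1e-12      # Lemma 2.2, partition by partition
```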

We next make some preparations for the proof of Theorem 1.1.

The following estimate of the variation seminorm of \(H_t f(x)\) will be useful. Let the interval I be either (0, 1] or \([1,\infty )\). From Lemma 2.1(a), we conclude that

$$\begin{aligned} \Vert H_tf(x)\Vert _{v(\rho ), I}&\le \int _I \left| \frac{\partial }{\partial t} \int K_t(x,u) f(u)\,d\gamma _\infty (u) \right| \,dt\nonumber \\&=\int _I \left| \int \dot{K}_t(x,u) f(u) \,d\gamma _\infty (u) \right| \,dt \nonumber \\&\le \int \int _I \big | \dot{K}_t(x,u)\big | \,dt \, |f(u)|\, d\gamma _\infty (u). \end{aligned}$$

To justify moving the differentiation inside the integral in the second step here, we refer to [8, Lemma 5.3].

We compute and estimate \(\dot{K}_t(x,u)\).

Lemma 2.3

For all \((x,u) \in {\mathbb {R}}\times {\mathbb {R}}\) and \(t>0\), we have

$$\begin{aligned} \dot{K}_t(x,u) =&\, K_t(x,u)\, \left( -\frac{e^{-2t}}{1-e^{-2t}}+\frac{e^{-2t}(e^{-t}u - x)^2}{(1-e^{-2t})^2}+ \frac{e^{-t} u (e^{-t}u - x) }{1-e^{-2t}}\right) . \end{aligned}$$

Moreover, for \(t\ge 1\) one has

$$\begin{aligned} \big | \dot{K}_t(x,u) \big |\lesssim e^{R(x)} \exp \big ( -c \left( e^{-t}\,u- x\right) ^2 \big )\big ( e^{-t}\,|u| +e^{-2t}\big ). \end{aligned}$$


We omit the proof of (2.2). When \(t\ge 1\), (2.2) implies that

$$\begin{aligned} \vert \dot{K}_t(x,u) \vert \lesssim K_t(x,u) \left( |e^{-t}\,u- x|\,e^{-t}\,|u|+ e^{-2t}\, (e^{-t}\,u- x)^2 +e^{-2t}\right) . \end{aligned}$$

For \(t \ge 1\), (1.1) shows that \(K_t \lesssim e^{R(x)}\,\exp \left( -c\left( e^{-t}\,u- x\right) ^2\right) \). After decreasing the value of c here, the factors \(|e^{-t}\,u- x|\) and \((e^{-t}\,u- x)^2\) in (2.4) can be absorbed into the exponential, and (2.3) follows. \(\square \)
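The closed form (2.2) in Lemma 2.3 can be checked against a finite-difference derivative; the following sketch (assuming NumPy, with arbitrarily chosen sample points) does so.

```python
import numpy as np

def K(t, x, u):
    # Mehler kernel (1.1)
    w = -np.expm1(-2.0 * t)   # 1 - e^{-2t}
    return np.exp(x * x / 2.0) / np.sqrt(w) * np.exp(-0.5 * (np.exp(-t) * u - x) ** 2 / w)

def Kdot(t, x, u):
    # closed form for dK_t/dt from Lemma 2.3
    w = -np.expm1(-2.0 * t)
    e2 = np.exp(-2.0 * t)
    d = np.exp(-t) * u - x
    return K(t, x, u) * (-e2 / w + e2 * d * d / w ** 2 + np.exp(-t) * u * d / w)

h = 1e-6
for (t, x, u) in [(0.3, 1.0, -2.0), (1.5, -0.5, 0.7), (2.0, 2.0, 3.0)]:
    fd = (K(t + h, x, u) - K(t - h, x, u)) / (2.0 * h)   # central difference
    assert abs(fd - Kdot(t, x, u)) < 1e-4 * (1.0 + abs(fd))
```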

3 The case of large t

In this section we consider the variation of \(H_tf(x)\) only for \(1 \le t < \infty \).

Proposition 3.1

For each \(\rho > 2\) the operator that maps \(f \in L^1(\gamma _\infty )\) to the function

$$\begin{aligned} \Vert H_t f(x)\Vert _{v(\rho ), [1,+\infty )}, \quad x \in {\mathbb {R}}, \end{aligned}$$

is of weak type (1, 1) with respect to the measure \(\gamma _\infty \). In fact, one has the following stronger result: If \(\rho > 2\) and \(\Vert f\Vert _{L^1( \gamma _\infty )} = 1\), then

$$\begin{aligned} \gamma _\infty \left\{ x\in {\mathbb {R}}: \Vert H_t f(x)\Vert _{v(\rho ),[1,\infty )}> \alpha \right\} \lesssim \frac{1}{\alpha \sqrt{\log \alpha }}, \qquad \alpha > 2. \end{aligned}$$

Notice that, when t is large, the estimate (1.2) is enhanced by a logarithmic factor. In [6] and [7], an analogous phenomenon was already observed both for the Ornstein–Uhlenbeck maximal operator and for the Gaussian Riesz transform.


Let f be normalized in \(L^1( \gamma _\infty )\). We integrate (2.3), getting

$$\begin{aligned} \int _{1}^{\infty } \big | \dot{K}_t(x,u) \big | \, dt \lesssim e^{R(x)} \int _{1}^{\infty } \exp \big ( -c \left( e^{-t}\,u- x\right) ^2\big )\big (e^{-t}\,|u| +e^{-2t}\big ) \,dt. \end{aligned}$$

In the last parenthesis of the integrand on the right-hand side, we first consider only the term \(e^{-t}\,|u|\) and make the change of variable \(y = e^{-t}\,u- x\), separately for \(u>0\) and \(u<0\); then \(|dy| = e^{-t}\,|u|\,dt\). As a result,

$$\begin{aligned} \int _1^\infty \exp \big ({ -c ( e^{-t}\,u- x)^2 }\big )\, e^{-t}\,|u| \,dt \le \int _{{\mathbb {R}}}\exp (-cy^2)\, dy \simeq 1. \end{aligned}$$

Taking also the term \(e^{-2t}\) in the integral above into account, we conclude that

$$\begin{aligned} \int _{1}^{\infty } \big | \dot{K}_t(x,u) \big | \, dt \lesssim e^{R(x)}. \end{aligned}$$

Now (2.1) leads to

$$\begin{aligned} \Vert H_tf(x)\Vert _{v(\rho ), [1,\infty )} \lesssim e^{R(x)}. \end{aligned}$$

It is easily seen that

$$\begin{aligned} \gamma _\infty \left\{ x : e^{R(x)}> \beta \right\} \lesssim \frac{1}{\beta \sqrt{\log \beta }}, \qquad \beta > 2. \end{aligned}$$

From this (3.1) follows, and since (1.2) is trivial for \(\alpha \le 2\), Proposition 3.1 is proved. \(\square \)
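The tail estimate just used is the standard Gaussian bound \(\gamma_\infty\{|x|>a\} \simeq e^{-a^2/2}/a\) with \(a = \sqrt{2\log \beta }\); a quick numerical check (a sketch using only the standard library):

```python
import math

# gamma_inf{ e^{R(x)} > beta } = gamma_inf{ |x| > a } with a = sqrt(2 log beta),
# and this tail is comparable to 1 / (beta * sqrt(log beta)) for beta > 2.
for beta in (2.0, 10.0, 1e3, 1e6, 1e12):
    a = math.sqrt(2.0 * math.log(beta))
    tail = math.erfc(a / math.sqrt(2.0))   # two-sided standard Gaussian tail
    bound = 1.0 / (beta * math.sqrt(math.log(beta)))
    assert 0.1 < tail / bound < 1.0
```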

4 The global case with small t

We first split the operator \(H_t\) into a local and a global part, in a way adapted to \(\gamma _\infty \). Let \(\eta \ge 0\) be a smooth function in \({\mathbb {R}}_+\) which is 1 in (0, 1/2] and 0 in \([1,\infty )\). The local part of the semigroup is defined by

$$\begin{aligned} H_t^{\textrm{loc}} f(x) = \int f(u)\, K_t(x,u)\,\eta ((1+|x|)|x-u|)\,d\gamma _\infty (u). \end{aligned}$$

The global part \(H_t^{\textrm{glob}} = H_t - H_t^{\textrm{loc}}\) is given by a similar expression, with \(\eta (.)\) replaced by \(1 - \eta (.)\).

Proposition 4.1

For each \(\rho > 2\) the operator that maps \(f \in L^1(\gamma _\infty )\) to the function

$$\begin{aligned} \Vert H_t^{\textrm{glob}} f(x)\Vert _{v(\rho ), (0,1]}, \quad x \in \mathbb R, \end{aligned}$$

is of weak type (1, 1) with respect to the measure \(\gamma _\infty \).


We first give an estimate of the number of zeros of the function \(t \mapsto \dot{K}_t(x,u)\) for \(0< t < 1\). From (2.2) we see that we can write

$$\begin{aligned} \dot{K}_t(x,u) = K_t(x,u)\, \frac{P_{x,u}(e^{-t})}{(1-e^{-2t})^2}, \end{aligned}$$

where \(P_{x,u}\) is a polynomial of degree at most 4, with coefficients depending on x and u. Thus \(\dot{K}_t(x,u)\) can have at most four zeros in (0, 1). Denote these zeros by \(t_1,\dots , t_{N-1}\); the \(t_i\) and also N will depend on \((x,u)\), and \(N \le 5\). Set also \(t_0 = 0\) and \(t_N = 1\). Then

$$\begin{aligned} \int _0^1 |\dot{K}_t(x,u)|\,dt = \sum _1^N \left| \int _{t_{i-1}}^{t_i} \dot{K}_t(x,u)\,dt \right| \le 10\, \sup _{(0,1]}K_t(x,u). \end{aligned}$$
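This bound can be probed numerically for sample points \((x,u)\) with \(u \ne x\) (so that \(K_t(x,u)\) stays bounded as \(t \to 0\)); a sketch assuming NumPy:

```python
import numpy as np

def K(t, x, u):
    # Mehler kernel (1.1)
    w = -np.expm1(-2.0 * t)   # 1 - e^{-2t}
    return np.exp(x * x / 2.0) / np.sqrt(w) * np.exp(-0.5 * (np.exp(-t) * u - x) ** 2 / w)

def Kdot(t, x, u):
    # closed form (2.2)
    w = -np.expm1(-2.0 * t)
    e2 = np.exp(-2.0 * t)
    d = np.exp(-t) * u - x
    return K(t, x, u) * (-e2 / w + e2 * d * d / w ** 2 + np.exp(-t) * u * d / w)

t = np.linspace(1e-5, 1.0, 500000)
dt = t[1] - t[0]
for (x, u) in [(0.0, 1.0), (2.0, 1.0), (-1.0, 3.0)]:
    integral = np.abs(Kdot(t, x, u)).sum() * dt   # ~ int_0^1 |Kdot| dt
    assert integral <= 10.0 * K(t, x, u).max()
```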

Since the computation (2.1) remains valid with an extra factor \(1-\eta ((1+|x|)|x-u|)\), we conclude

$$\begin{aligned} \Vert H_t^{\text {glob}} f(x)\Vert _{v(\rho ), (0,1]} \lesssim \int |f(u)| \sup _{(0,1]}K_t(x,u)\,(1-\eta ((1+|x|)|x-u|))\,d\gamma _\infty (u). \end{aligned}$$

We claim that for all \((x,u)\)

$$\begin{aligned} \sup _{(0,1]}K_t(x,u)\,(1-\eta ((1+|x|)|x-u|)) \lesssim e^{R(x)}\,(1+|x|). \end{aligned}$$

If \(1-\eta ((1+|x|)|x-u|) \ne 0\), we have \(|x-u| > 1/(2(1+|x|))\) and thus for \(0<t\le 1\) also

$$\begin{aligned} (1+|x|)^{-1}&< 2 |x-u| \le 2 \vert x-e^{t}\,x \vert + 2 \vert e^{t}\,x - u \vert = 2 (e^{t}-1)|x| + 2e^{t} |x-e^{-t}u| \\&\le 2et (1+|x|) + 2e|x-e^{-t}u|. \end{aligned}$$

Now, if \(t (1+|x|)^2 < 1/(4e)\), the first term on the right-hand side is less than \((1+|x|)^{-1}/2\) and can be absorbed into the left-hand side, implying

$$\begin{aligned} (1+|x|)^{-1} < 4e|x-e^{-t}u|. \end{aligned}$$

Then we see from (1.1) that

$$\begin{aligned} e^{-R(x)}\,K_t(x,u)&\simeq t^{-1/2}\, \exp \left( -\frac{1}{2} \frac{(e^{-t}u-x)^2}{t}\right) \\ {}&\le t^{-1/2}\, \exp \left( -\frac{1}{2}\, \frac{1}{16e^2t(1+|x|)^2} \right) \lesssim 1+|x|, \end{aligned}$$

and (4.2) follows. On the other hand, if \(t (1+|x|)^2 \ge 1/(4e)\), (4.2) also follows, since then \(t^{-1/2} \lesssim 1+|x|\).

Combining now (4.1) and (4.2), we get

$$\begin{aligned} \Vert H_t^{\textrm{glob}} f(x)\Vert _{v(\rho ), (0,1]} \lesssim e^{R(x)}\,(1+|x|) \, \Vert f \Vert _{L^1(\gamma _\infty )}. \end{aligned}$$

This ends the proof of Proposition 4.1, because

$$\begin{aligned} \gamma _\infty \left\{ x: \, e^{R(x)}\,(1+|x|)> \beta \right\} \lesssim \frac{1}{\beta }, \qquad \beta > 0. \end{aligned}$$

\(\square \)

5 The local case with small t

This section consists of the proof of the following result.

Proposition 5.1

For each \(\rho > 2\) the operator that maps \(f \in L^1(\gamma _\infty )\) to the function

$$\begin{aligned} \Vert H_t^{\textrm{loc}} f(x) \Vert _{v(\rho ),(0,1]}, \quad x \in \mathbb R, \end{aligned}$$

is of weak type (1, 1) with respect to the measure \(\gamma _\infty \).

5.1 Splitting of the line into local intervals

The localization means that the value \(H_t^{\textrm{loc}} f(x)\) depends only on the restriction of f to the interval \(\{u:\,|u-x| \le 1/(1+|x|)\}\), and we will split the line into intervals of similar type. Choose an increasing sequence \((x_j)_0^\infty \) with \(x_0 =0\) such that for \(j = 0,1,\dots \)

$$\begin{aligned} x_{j+1} - \frac{1}{1+x_{j+1}} = x_{j} + \frac{1}{1+x_{j}}. \end{aligned}$$

This recursion formula determines the sequence uniquely. We have \(x_{j+1} -x_{j} < 2\) for all \(j \ge 0\), so that \(x_j \le 2j\). Thus \(x_{j+1} -x_{j} \gtrsim 1/j \) and \(x_j \rightarrow +\infty \) as \(j \rightarrow \infty \). (In fact, \(x_{j}\) is close to \(2\,\sqrt{j}-1\), as shown in the Appendix.) For \(j<0\) we let \(x_j = - x_{|j|}\).
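The recursion can be solved step by step: writing \(b = x_j + 1/(1+x_j)\), the equation \(x - 1/(1+x) = b\) is a quadratic in x with positive root \(x = \big(b - 1 + \sqrt{b^2 + 2b + 5}\,\big)/2\). A short numerical sketch (standard library only) confirming the growth claims:

```python
import math

x = [0.0]
for j in range(5000):
    b = x[-1] + 1.0 / (1.0 + x[-1])
    # positive solution of  y - 1/(1+y) = b
    x.append((b - 1.0 + math.sqrt(b * b + 2.0 * b + 5.0)) / 2.0)

# consecutive gaps stay below 2, and x_j is close to 2*sqrt(j) - 1
assert all(x[j + 1] - x[j] < 2.0 for j in range(5000))
err = abs(x[5000] - (2.0 * math.sqrt(5000.0) - 1.0))
assert err < 0.1
```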

The intervals

$$\begin{aligned} I_j = \left[ x_{j} - \frac{1}{1+|x_{j}|}, \, x_{j} + \frac{1}{1+|x_{j}|}\right] , \qquad j\in {\mathbb {Z}}, \end{aligned}$$

are pairwise disjoint except for endpoints, and they cover \({\mathbb {R}}\). If \(\textrm{supp}\, f \subset I_j\), we claim that the support of \(H_t^{\textrm{loc}} f \) is contained in the interval

$$\begin{aligned} {\widetilde{I}}_j = \left[ x_{j} - \frac{4}{1+|x_{j}|}, \, x_{j} + \frac{4}{1+|x_{j}|}\right] . \end{aligned}$$

To verify this, let \(x \in \textrm{supp}\,H_t^{\textrm{loc}} f\). Then x has distance at most \(1/(1+|x|)\) from some point in \(I_j\), so that

$$\begin{aligned} |x-x_j| \le \frac{1}{1+|x|} + \frac{1}{1+|x_j|}, \end{aligned}$$

which implies

$$\begin{aligned} 1+|x| \ge 1+|x_j| - \frac{1}{1+|x|} - \frac{1}{1+|x_j|} \ge |x_j| - 1 \ge \frac{1+|x_j|}{3}, \end{aligned}$$

the last inequality holding only if \(|x_j| \ge 2\). But if \(|x_j| < 2\), one has the same estimate, since then \(1+|x| \ge 1 \ge ( {1+|x_j|})/3\). The claim now follows from (5.1).

We also observe that if \(\textrm{supp} \,f \subset I_j\) and \(j>0\), then \(\textrm{supp} \,H_t^{\textrm{loc}}f \subset \{x \ge 0\} \). Indeed, \(I_j \subset [1,\infty )\) when \(j>0\), so if \(x \in \textrm{supp} \,H_t^{\textrm{loc}}f\) we must have \(x \ge 1 - 1/(1+|x|) \ge 0\).

The intervals \({\widetilde{I}}_j\) have bounded overlap. Therefore, it is enough to prove Proposition 5.1 for functions f supported in \(I_j\), with a bound that is uniform in \(j \in {\mathbb {Z}}\).

In each \({\widetilde{I}}_j\), the density of \(\gamma _\infty \) is essentially constant, since \(e^{-R(x)} \simeq e^{-R(x_j)}\) for \(x \in {\widetilde{I}}_j\), and this is uniform in j. Therefore, we can pass to Lebesgue measure in u and in x. We replace \(f \in L^1(\gamma _\infty )\), supported in \(I_j\), by \(g(u) = f(u)\,e^{-R(u)} \in L^1(du)\), with the same support. Instead of \(H_t^{\textrm{loc}}\), we can then consider the operator

$$\begin{aligned} {\mathcal {H}}_t^{\textrm{loc}} g(x) = \frac{1}{\sqrt{1-e^{-2t}}} \, \int g(u)\, \exp \left( -\frac{1}{2} \, \frac{(e^{-t}u-x)^2}{1-e^{-2t}}\right) \,\eta ((1+|x|)|x-u|)\,du, \end{aligned}$$

where we deleted the essentially constant factor \(e^{R(x)}\).

We conclude from the above that the following proposition implies Proposition 5.1.

Proposition 5.2

For \(\rho >2\) and each \(j\in {\mathbb {Z}}\), the operator that maps \(g \in L^1(I_j)\) to

$$\begin{aligned} \Vert {\mathcal {H}}_t^{\textrm{loc}} g(x)\Vert _{v(\rho ), (0,1]} \end{aligned}$$

is bounded from \(L^1(I_j)\) to \(L^{1,\infty }({\widetilde{I}}_j)\), where the intervals are endowed with the Lebesgue measure. This is uniform in j.

5.2 Proof of Proposition 5.2

For symmetry reasons, it is enough to consider only \(j \ge 0\) and only points \(x \in {\mathbb {R}}_+\). With \(j \ge 0\) fixed, we let \(g \in L^1(I_j)\). The expression for \({\mathcal {H}}_t^{\textrm{loc}} g(x)\) will be rewritten in terms of integrals of g over many intervals which depend on t. Here we follow [5, proof of Lemma 2.4], writing

$$\begin{aligned} \exp (-y^2/2) = - \int _{y}^{\infty } \frac{d e^{-s^2/2}}{ds}\,ds = - \int _{0}^{\infty } \chi _{y<s}\,\frac{d e^{-s^2/2}}{ds}\,ds \end{aligned}$$


$$\begin{aligned} \eta (y) = - \int _{y}^{\infty }\frac{d \eta (\sigma )}{d\sigma }\,d\sigma = - \int _{1/2}^{1} \chi _{y<\sigma }\, \frac{d \eta (\sigma )}{d\sigma }\,d\sigma . \end{aligned}$$

As a result,

$$\begin{aligned} {\mathcal {H}}_t^{\text {loc}} g(x) = \int _{0}^{\infty }\frac{d e^{-s^2/2}}{ds}\, \int _{1/2}^{1}\frac{d \eta (\sigma )}{d\sigma }\, R_t^{s,\sigma }g(x) \;d\sigma \,ds, \end{aligned}$$

where


$$\begin{aligned} R_t^{s,\sigma }g(x) = \frac{1}{\sqrt{1-e^{-2t}}} \, \int g(u)\, \chi _{|e^{-t}u-x|/\sqrt{1-e^{-2t}}<s}\, \chi _{(1+|x|)|x-u|<\sigma }\,\,du. \end{aligned}$$

Observe that it is not enough to prove that the \(v(\rho )\) seminorm of \(R_t^{s,\sigma }g(x)\), taken with respect to t, defines an operator of weak type (1,1) for each s and \(\sigma \). This is because \(L^{1,\infty }\) is not a normed space, so such estimates cannot simply be integrated in s and \(\sigma \). Instead we will estimate the variation of \(R_t^{s,\sigma }g(x)\) for all s and \(\sigma \) in terms of one operator of weak type (1,1) (actually a small number of such operators, and with a factor \(s+1\), which is integrable against \(de^{-s^2/2}/ds\)).

A few times below, we will use the simple inequalities

$$\begin{aligned} y \le e^y - 1 \le 4y \qquad \textrm{for} \qquad 0 \le y \le 2. \end{aligned}$$

The second inequality holds because the function \((e^y - 1)/y\) is increasing for these y, as seen from the power series, and takes the value \((e^2-1)/2 < 4\) at \(y = 2\).
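These two elementary inequalities are easy to confirm numerically (a sketch assuming NumPy):

```python
import numpy as np

y = np.linspace(1e-9, 2.0, 100001)
assert np.all(y <= np.expm1(y))           # y <= e^y - 1 for all y >= 0
assert np.all(np.expm1(y) <= 4.0 * y)     # e^y - 1 <= 4y on [0, 2]
```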

In the sequel, we fix an \(x \in \widetilde{I}_j \cap {\mathbb {R}}_+\) and let \(s>0\) and \(1/2<\sigma < 1\), but we temporarily allow all \(t>0\). We will soon introduce many quantities which will depend on \(x,\,s,\,t \) and sometimes \(\sigma \); in order not to make the notation too heavy, we will systematically omit indicating the dependence on x.

Since the inequality \(|e^{-t}u-x|/\sqrt{1-e^{-2t}}<s\) can be rewritten as \( |u-e^t x|< s\,\sqrt{e^{2t}-1},\) the integration in (5.3) is taken over the interval

$$\begin{aligned} J_t(s,\sigma ) = \left\{ u \in {\mathbb {R}}: |u-e^t x|< s\,\sqrt{e^{2t}-1}\qquad \textrm{and} \qquad |u-x| < \frac{\sigma }{1+x} \right\} . \end{aligned}$$

Observe that \(J_t(s,\sigma )\) is nonempty precisely when

$$\begin{aligned} s\,\sqrt{e^{2t}-1} + \frac{\sigma }{1+x} > e^t x -x, \end{aligned}$$

or equivalently \(Q_{s}(t) < \sigma /(1+x)\), where

$$\begin{aligned} Q_{s}(t) = x(e^t-1) - s\,\sqrt{e^{2t}-1}\,. \end{aligned}$$

An instance of this function is plotted in Fig. 1.

Fig. 1 Graph of \(Q_s(t)\) with \(x=2\), \(s=1\). Here \(\sigma = 3/4\)

Since \(Q'_{s}(t) = x e^t - se^{2t}/\sqrt{e^{2t}-1} \), one finds by squaring each of these two terms that \(Q'_{s}(t) > 0\) if and only if \(x>s\) and \(t > {\widetilde{t}}(s)\), where \({\widetilde{t}}(s) >0\) is determined by \(e^{2{\widetilde{t}}(s)} = x^2/(x^2 - s^2)\). If \(x \le s\), we set \({\widetilde{t}}(s) = +\infty \). It follows that \(Q_{s}(t)\) is strictly

$$\begin{aligned} \left\{ \begin{array}{ll} \text { decreasing } \text { in }\; \;&{}{}0< t< {\widetilde{t}}(s) \\ \text { increasing } \text { in }\;\; &{}{}{\widetilde{t}}(s)< t < +\infty . \end{array} \right. \end{aligned}$$

Further, \(Q_{s}(0) = 0\), and if \(x>s\), then \(Q_{s}(t) \rightarrow +\infty \) as \(t \rightarrow +\infty \). We conclude that there exists a \(t_1(s,\sigma ) \in (0, +\infty ]\) such that

$$\begin{aligned} J_t(s,\sigma ) \ne \emptyset \quad \Leftrightarrow \quad Q_{s}(t)< \sigma /(1+x) \quad \Leftrightarrow \quad 0< t < t_1(s,\sigma ). \end{aligned}$$

Moreover, \(t_1(s, \sigma ) < +\infty \) if and only if \(x > s\).

For later use, we make a similar observation regarding the inequality \(Q_{s}(t) < 0\). There exists a \(t_0(s) \in \left( {\widetilde{t}}(s), t_1(s,\sigma )\right) \cup \{+\infty \}\), finite if and only if \(x > s\), such that

$$\begin{aligned} Q_{s}(t)< 0 \quad \Leftrightarrow \quad 0< t < t_0(s). \end{aligned}$$

(Actually, \(t_0(s)\) is given by \(e^{t_0(s)} = \frac{x^2+s^2}{x^2-s^2}\) if \(x>s\).)
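The formulas for \({\widetilde{t}}(s)\) and \(t_0(s)\), and the monotonicity of \(Q_s\), can be cross-checked numerically (standard library only; the sample values of x and s are arbitrary with \(x > s\)):

```python
import math

def Q(t, x, s):
    # Q_s(t) = x(e^t - 1) - s sqrt(e^{2t} - 1)
    return x * math.expm1(t) - s * math.sqrt(math.expm1(2.0 * t))

for (x, s) in [(2.0, 1.0), (3.0, 0.5), (1.5, 1.0)]:
    t_tilde = 0.5 * math.log(x * x / (x * x - s * s))   # minimum point of Q_s
    t0 = math.log((x * x + s * s) / (x * x - s * s))    # positive zero of Q_s
    assert 0.0 < t_tilde < t0
    assert abs(Q(t0, x, s)) < 1e-12
    # Q_s decreases before t_tilde and increases after it
    assert Q(0.5 * t_tilde, x, s) > Q(t_tilde, x, s) < Q(t0, x, s)
```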

We let

$$\begin{aligned} T = T(s,\sigma ) := 1\wedge t_1(s,\sigma ) = \sup \{t \in (0, 1]:\, Q_{s}(t) < \sigma /(1+x) \}. \end{aligned}$$

Then \( T(s,\sigma ) \in (0, 1]\), and from now on we consider only \(0 < t \le 1\).

Notice that \( T(s,\sigma ) < 1\) if and only if \(Q_{s}(1) > \sigma /(1+x)\).

Moreover,


$$\begin{aligned} \{t \in (0, 1]:\, J_t(s,\sigma ) \ne \emptyset \} = \left\{ \begin{array}{ll} (0,T(s,\sigma )) &{} \hbox { if}\ Q_{s}(1) \ge \sigma /(1+x)\\ (0, T(s,\sigma )] = (0,1] &{} \text{ otherwise. } \end{array} \right. \end{aligned}$$

In the first case here, \(J_{T(s,\sigma )}(s,\sigma ) = \emptyset \) and \(R_{T(s,\sigma )}^{s,\sigma }g(x) = 0\). We observe that \(R_t^{s,\sigma }g(x)\) is in all cases defined and continuous as a function of t for \(0<t \le T(s,\sigma )\).

Next, we deduce a bound for \(T(s,\sigma )\). Since \(T(s,\sigma ) \le t_1(s,\sigma )\), any \(t < T(s,\sigma )\) satisfies (5.5). Using first (5.4) and then (5.5) multiplied by x together with (5.4), and finally the inequality between the geometric and arithmetic means, we get

$$\begin{aligned} x^2\,t \le x^2\,(e^{t}-1) < sx\,\sqrt{8t} + \sigma \,\frac{x}{1+x} \le \frac{x^2t}{2} + 4s^2 +1. \end{aligned}$$

Hence


$$\begin{aligned} x^2\, T(s,\sigma ) \le 8(s^2 +1). \end{aligned}$$
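This bound on \(T(s,\sigma)\) can be verified numerically by locating \(t_1(s,\sigma)\) with bisection (a sketch, standard library only; it relies on the fact established above that \(Q_s(t) < \sigma/(1+x)\) precisely for \(t < t_1(s,\sigma)\)):

```python
import math

def Q(t, x, s):
    return x * math.expm1(t) - s * math.sqrt(math.expm1(2.0 * t))

def T(x, s, sigma):
    target = sigma / (1.0 + x)
    if Q(1.0, x, s) < target:       # then t_1 > 1, so T = 1
        return 1.0
    lo, hi = 0.0, 1.0               # invariant: Q(lo) < target <= Q(hi)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if Q(mid, x, s) < target:
            lo = mid
        else:
            hi = mid
    return hi

for x in (0.5, 2.0, 5.0, 20.0):
    for s in (0.1, 1.0, 3.0):
        for sigma in (0.6, 0.9):
            assert x * x * T(x, s, sigma) <= 8.0 * (s * s + 1.0)
```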

When \(J_t(s,\sigma )\) is nonempty, we write its endpoints as

$$\begin{aligned} J_t(s,\sigma ) = (k_t^-(s,\sigma ), k_t^+(s,\sigma )), \end{aligned}$$

and they are

$$\begin{aligned} k_t^+(s,\sigma ) = \left( xe^t + s\,\sqrt{e^{2t}-1} \right) \wedge \left( x + \frac{\sigma }{1+x}\right) \end{aligned}$$

and


$$\begin{aligned} k_t^-(s,\sigma ) = \left( xe^t - s\,\sqrt{e^{2t}-1} \right) \vee \left( x - \frac{\sigma }{1+x}\right) . \end{aligned}$$

From the last expression, it follows that

$$\begin{aligned} k_t^-(s,\sigma )< x \quad \Leftrightarrow \quad Q_{s}(t)< 0 \quad \Leftrightarrow \quad t < t_0(s); \end{aligned}$$

see (5.7).

The next step in the proof of Proposition 5.2 will be to apply the following theorem, obtained in the discrete setting in [13]. It can easily be transferred to the setting of \({\mathbb {R}}\), see [5, proof of Lemma 2.1]. Define the one-sided mean values of a function \(\phi \in L^1({\mathbb {R}})\) by

$$\begin{aligned} M_\tau ^+\,\phi (x) = \frac{1}{\tau }\, \int _{x}^{x+\tau } \phi (u)\,du, \quad x \in {\mathbb {R}},\;\; \tau >0, \end{aligned}$$

and \(M_\tau ^-\,\phi (x)\) similarly using the interval \((x-\tau ,x)\).

Theorem 5.3

([13, Theorem 3.6]) For \(2<\rho <\infty \), the operator that maps \(\phi \in L^1({\mathbb {R}})\) to the function

$$\begin{aligned} \Vert M_\tau ^+\,\phi (x) \Vert _{v(\rho ),{\mathbb {R}}_+}, \quad x \in {\mathbb {R}}, \end{aligned}$$

is of weak type (1,1) with respect to Lebesgue measure in \({\mathbb {R}}\). Here the variation is taken in the variable \(\tau \).

This clearly holds also with \(M_\tau ^+\) replaced by \(M_\tau ^-\).

Thus we need to rewrite \( R_t^{s,\sigma }g(x) = (1-e^{-2t})^{-1/2} \, \int _{J_t(s,\sigma )} g(u)\, du\) in terms of mean values of g in intervals with one endpoint at x.

With \(0<t\le T(s,\sigma )\), we define \(J_t^+(s,\sigma ) = (x,k_t^+(s,\sigma ))\), which is an interval of length

$$\begin{aligned} |J_t^+(s,\sigma )| = k_t^+(s,\sigma ) - x =\left( (e^t-1)x + s\,\sqrt{e^{2t}-1} \right) \wedge \frac{\sigma }{1+x}\,. \end{aligned}$$

We further define \(J_t^-(s,\sigma ) = (k_t^-(s,\sigma ),x)\), considered as an oriented interval in the sense that if \(x < k_t^-(s,\sigma )\), an integral over \({J_t^-(s,\sigma )}\) is interpreted as minus the integral over \((x, {k_t^-(s,\sigma )})\). Its length is

$$\begin{aligned} |J_t^-(s,\sigma )| =&\,|k_t^-(s,\sigma ) - x|\nonumber \\ =&\, \left| x(e^t-1) - s\,\sqrt{e^{2t}-1} \right| \wedge \frac{\sigma }{1+x} = |Q_{s}(t)| \wedge \frac{\sigma }{1+x}\,, \end{aligned}$$

as follows from the expression for \(k_t^-(s,\sigma )\) if one separates the cases when the equivalent statements of (5.9) are true or false.

We now have for any \(0<t\le T(s,\sigma )\)

$$\begin{aligned} R_t^{s,\sigma }g(x) =&\, \frac{1}{\sqrt{1-e^{-2t}}} \, \int _{J_t^+(s,\sigma )}g(u)\, du + \frac{1}{\sqrt{1-e^{-2t}}} \, \int _{J_t^-(s,\sigma )} g(u)\, du \nonumber \\ =&\, \frac{|J_t^+(s,\sigma )|}{\sqrt{1-e^{-2t}}}\, M_{|J_t^+(s,\sigma )|}^+ \, g(x) \pm \frac{|J_t^-(s,\sigma )|}{\sqrt{1-e^{-2t}}}\, M_{|J_t^-(s,\sigma )|}^{\mp } \,g(x), \end{aligned}$$

where one should read the upper signs in ± and \({\mp }\) if \( k_t^-(s,\sigma ) < x\) and otherwise the lower signs. Notice that the two terms here cancel for \(t = T(s,\sigma )\) if \(J_{T(s,\sigma )}(s,\sigma ) = \emptyset \), since then \(Q_s(T(s,\sigma )) = \sigma /(1+x)\) and so \(k_{T(s,\sigma )}^+(s,\sigma ) = k_{T(s,\sigma )}^-(s,\sigma ) = x + \sigma /(1+x)\).

We will next consider the variation in \(0<t\le T(s,\sigma )\) of the two mean values in (5.12), and start with \(M_{|J_t^+(s,\sigma )|}^+ \,g(x)\). Since \(|J_t^+(s,\sigma )|\) is a nondecreasing, continuous function of t in this interval, we can reparametrize \(M_{|J_t^+(s,\sigma )|}^+ \,g(x),\;0<t\le T(s,\sigma ) \), as \(M_\tau ^+ \,g(x)\) with \(0 < \tau \le \tau _0\) for some \(\tau _0 = \tau _0(s,\sigma )\). This reparametrization does not change the variation, so that

$$\begin{aligned} \Vert M_{|J_t^+(s,\sigma )|}^+ \,g(x) \Vert _{v(\rho ), (0,T]} = \Vert M_\tau ^+\, g(x)\Vert _{v(\rho ), (0,\tau _0]} \end{aligned}$$

for each s and \(\sigma \), with the variations taken with respect to t and \(\tau \), respectively. Extending the range of \(\tau \) here, we conclude that

$$\begin{aligned} \Vert M_{|J_t^+(s,\sigma )|}^+ \,g(x)\Vert _{v(\rho ), (0, T]} \le \Vert M_\tau ^+\, g(x)\Vert _{v(\rho ), {\mathbb {R}}_+}. \end{aligned}$$

The right-hand side here is independent of s and \(\sigma \), and Theorem 5.3 applies to it.

To deal with \(M_{|J_t^-(s,\sigma )|}^{\mp } \,g(x)\), we first consider the case when \(t_0(s) < T(s,\sigma )\). At the point \(t = t_0(s)\), the difference \(k_t^-(s,\sigma ) - x\) changes sign, and \(|J_{t_0(s)}^-(s,\sigma )| = 0\). Observe that if x is a Lebesgue point for g, then \(M_{|J_t^-(s,\sigma )|}^+ \,g(x)\) tends to g(x) as \(t \downarrow t_0(s)\) and similarly for \(M_{|J_t^-(s,\sigma )|}^- \,g(x)\) as \(t \uparrow t_0(s)\). Then the second factor in the last term of (5.12) will be continuous in t also at \(t = t_0(s)\), if interpreted as g(x) at this point. This last term, with the ± sign, is also continuous, because the first factor is continuous and vanishes at \(t = t_0(s)\). We will consider the variation of \(M_{|J_t^-(s,\sigma )|}^{\mp } \,g(x)\) separately in the subintervals \(0 < t \le t_0(s)\) and \(t_0(s) \le t \le T(s,\sigma )\).

To obtain subintervals where the length \(|J_t^-(s,\sigma )|\) is monotone, we invoke (5.6) and split \((0, t_0(s)]\) further into \(\left( 0, {\widetilde{t}}(s)\right] \) and \(\left( {\widetilde{t}}(s), t_0(s)\right] \). We can now reparametrize as before in each of the three resulting subintervals of \((0, T(s,\sigma )]\). The only minor difference is that \(\tau \) may now run in an interval that stays away from 0, but we can still extend its range to \({\mathbb {R}}_+\). We conclude that for every Lebesgue point \(x \in \widetilde{I}_j \cap \mathbb R_+\), thus for a.a. \(x \in \widetilde{I}_j \cap {\mathbb {R}}_+\),

$$\begin{aligned} \Vert M_{|J_t^-(s,\sigma )|}^{\mp } \,g(x)\Vert _{v(\rho ), (0, T]} \le 2\, \Vert M_\tau ^-\, g(x)\Vert _{v(\rho ), {\mathbb {R}}_+} + \,\Vert M_\tau ^+\, g(x)\Vert _{v(\rho ), {\mathbb {R}}_+}; \end{aligned}$$

here and below \(T = T(s,\sigma )\). This ends the case \(t_0(s) < T(s,\sigma )\).

The remaining case \(t_0(s) \ge T(s,\sigma )\) is slightly easier. Then \(T(s,\sigma ) = 1\), and \(k_t^-(s,\sigma ) < x\) for \(t < 1\). If \({\widetilde{t}}(s) < 1\), we split (0, 1] into \(\left( 0, \widetilde{t}(s)\right] \) and \(\left[ {\widetilde{t}}(s), 1\right] \); otherwise no splitting is necessary. When \(t_0(s) = 1\), we again need to assume that x is a Lebesgue point. It follows that (5.14) holds also in this case.

Since we are going to apply Lemma 2.2 to the products in (5.12), we observe that the \(L^\infty \) norms of the means in (5.12) are controlled by standard maximal operators of g. More precisely,

$$\begin{aligned} \Vert M_{|J_t^+(s,\sigma )|}^+ \,g(x)\Vert _{L^\infty }&\le {\mathcal {M}}^+ g(x), \\ \Vert M_{|J_t^-(s,\sigma )|}^+ \,g(x)\Vert _{L^\infty }&\le {\mathcal {M}}^+ g(x), \\ \Vert M_{|J_t^-(s,\sigma )|}^- \,g(x)\Vert _{L^\infty }&\le {\mathcal {M}}^- g(x), \end{aligned}$$

where the \(L^\infty \) norms are taken with respect to t, and

$$\begin{aligned} {\mathcal {M}}^\pm g(x) = \sup _{\tau >0} M_\tau ^\pm |g|(x). \end{aligned}$$

It remains to deal with the two factors in front of the mean values in (5.12). They are

$$\begin{aligned} F_\pm := \frac{|J_t^\pm (s,\sigma )|}{\sqrt{1-e^{-2t}}}&= \frac{\left| x(e^t - 1)\pm s\sqrt{e^{2t}-1}\right| \wedge (\sigma /(1+x))}{\sqrt{1-e^{-2t}}} \\&= \left| \frac{x(e^t - 1)}{\sqrt{1-e^{-2t}}} \pm se^t\right| \wedge \frac{\sigma }{(1+x)\sqrt{1-e^{-2t}}}, \end{aligned}$$

where we used (5.10) and (5.11).
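In passing from the first line to the second, the key step (a side computation, implicit in the references to (5.10) and (5.11)) is the elementary identity

$$\begin{aligned} \sqrt{e^{2t}-1} = e^t\,\sqrt{1-e^{-2t}}, \qquad t > 0, \end{aligned}$$

so that \(s\sqrt{e^{2t}-1}\) divided by \(\sqrt{1-e^{-2t}}\) equals \(se^t\), while the truncation level \(\sigma /(1+x)\) is simply divided by \(\sqrt{1-e^{-2t}}\).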

Lemma 5.4

For \(\rho \ge 1\), \(s>0\) and \(\sigma \in (1/2,1)\),

$$\begin{aligned} \Vert F_\pm \Vert _{L^\infty (0,T]} \lesssim s+1 \qquad \textrm{and} \qquad \Vert F_\pm \Vert _{v(\rho ),(0,T]} \lesssim s+1, \end{aligned}$$

where the norm and the seminorm are taken in the variable t.


Proof

We have from (5.19)

$$\begin{aligned} |F_\pm | \le \frac{xe^t(e^t - 1)}{\sqrt{e^{2t}-1}} +se^t \le \frac{4xe^t\,t}{\sqrt{2t}}\, + es \lesssim x\sqrt{t} + s \lesssim s+1, \end{aligned}$$

where the second inequality comes from (5.4) and the last step uses (5.8). The first inequality of the lemma is verified.

For the second inequality, we will apply Lemma 2.1(b). The factors \(F_\pm \) are not always monotone in t, but we will split the interval \((0, T(s,\sigma )]\) into subintervals where they are monotone. The splitting may depend on \(s,\,\sigma \) and x, but the number of subintervals will be bounded by a constant C. This will be done in several steps.

To begin with, we consider only \(F_-\). We split \((0, T(s,\sigma )]\) at \(t = {\widetilde{t}}(s)\) if \({\widetilde{t}}(s) < T(s,\sigma )\), and also at \(t = t_0(s)\) if \(t_0(s) < T(s,\sigma )\).

The splitting then continues, and now we take both \(F_+\) and \(F_-\) into account. The next split depends on which of the quantities \(\left| x(e^t - 1)\pm s\sqrt{e^{2t}-1}\right| \) and \(\sigma /(1+x)\), occurring in the minimum in (5.18), is the smaller. Since \(\left| x(e^t - 1)\pm s\sqrt{e^{2t}-1}\right| \) is monotone in each subinterval obtained so far, this may give one split for \(F_+\) and another for \(F_-\). Next, observe that in the subintervals where \(\sigma /(1+x)\) is the smaller, we see from (5.19) that \(F_\pm = \sigma /\left( (1+x)\sqrt{1-e^{-2t}}\,\right) \), which is monotone. It only remains to consider the case when \(\left| x(e^t - 1)\pm s\sqrt{e^{2t}-1}\right| \) is the smaller quantity in (5.18). Then (5.19) shows that \(F_\pm = P_\pm \), where

$$\begin{aligned} P_\pm = \frac{x(e^t - 1)}{\sqrt{1-e^{-2t}}} \pm se^t. \end{aligned}$$

The derivative of \(P_\pm \) is seen to vanish precisely when

$$\begin{aligned} x\, \frac{e^t -2e^{-t} + e^{-2t}}{( 1 -e^{-2t} )^{3/2}} = {\mp } se^t. \end{aligned}$$

In this equation, we multiply by the denominator and square both sides. After multiplication by a suitable power of \(e^t\), the result will be a polynomial equation in \(e^t\), with only a bounded number of solutions. Thus we can split our subintervals further, into intervals where \(P_\pm \) is monotone.
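Spelling this out (a side computation, needed only to count roots): substituting \(u = e^t\), squaring and multiplying by \(e^{4t}\) turns the equation into

$$\begin{aligned} x^2\left( u^3 - 2u + 1\right) ^2 = s^2\left( u^2 - 1\right) ^3, \qquad u = e^t > 1. \end{aligned}$$

The two sides are polynomials of degree 6 in u which are not identical (for instance, the coefficients of \(u^4\) are \(-4x^2\) and \(-3s^2\), and \(s > 0\)), so the equation has at most six solutions \(u > 1\). Hence \(P_\pm \) has a bounded number of critical points.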

This ends the splitting, and Lemma 2.1(b) implies the second inequality of Lemma 5.4. \(\square \)

We can now finish the proof of Proposition 5.2. Applying Lemma 2.2 to the two products in (5.12), we get

$$\begin{aligned}&\Vert R_t^{s,\sigma }g(x)\Vert _{v(\rho ), (0, T]} \\&\le \Vert F_+ \Vert _{L^\infty (0,T]} \Vert M_{|J_t^+(s,\sigma )|}^+ \,g(x)\Vert _{v(\rho ), (0, T]} + \Vert F_+ \Vert _{v(\rho ),(0,T]}\, \Vert M_{|J_t^+(s,\sigma )|}^+ \,g(x)\Vert _{L^\infty (0,T]} \\&\quad + \Vert F_- \Vert _{L^\infty (0,T]}\, \left( \Vert M_{|J_t^-(s,\sigma )|}^+ \,g(x)\Vert _{v(\rho ), (0, T]}+ \Vert M_{|J_t^-(s,\sigma )|}^- \,g(x)\Vert _{v(\rho ), (0, T]}\right) \\&\quad + \Vert F_- \Vert _{v(\rho ),(0,T]}\, \left( \Vert M_{|J_t^-(s,\sigma )|}^+ \,g(x)\Vert _{L^\infty (0,T]} + \Vert M_{|J_t^-(s,\sigma )|}^- \,g(x)\Vert _{L^\infty (0,T]}\right) . \end{aligned}$$

Using Lemma 5.4 together with (5.13), (5.15), (5.14), and (5.17), we conclude that for a.a. x

$$\begin{aligned}&\Vert R_t^{s,\sigma }g(x)\Vert _{v(\rho ), (0, T]} \\&\quad \lesssim (1+s) \, \big ( \Vert M_\tau ^+\, g(x)\Vert _{v(\rho ), {\mathbb {R}}_+} + \Vert M_\tau ^-\, g(x)\Vert _{v(\rho ), {\mathbb {R}}_+} + {\mathcal {M}}^+ g(x) + {\mathcal {M}}^- g(x) \big ). \end{aligned}$$

The four terms on the right-hand side are independent of s and \(\sigma \), so we can insert this estimate in (5.2) and integrate with respect to s and \(\sigma \). The result is

$$\begin{aligned} \Vert {\mathcal {H}}_t^{\textrm{loc}} g(x)\Vert _{v(\rho ), (0,1]} \lesssim \Vert M_\tau ^+\, g(x)\Vert _{v(\rho ), {\mathbb {R}}_+} + \Vert M_\tau ^-\, g(x)\Vert _{v(\rho ), {\mathbb {R}}_+} + {\mathcal {M}}^+ g(x) + {\mathcal {M}}^- g(x) \end{aligned}$$

for a.a. \(x \in \widetilde{I}_j \cap {\mathbb {R}}_+\). In view of Theorem 5.3, this shows that the operator given by \(\Vert {\mathcal {H}}_t^{\textrm{loc}} g(x)\Vert _{v(\rho ), (0,1]}\) is of weak type (1,1), as stated in Proposition 5.2. This ends the proof of Proposition 5.2, and thus also that of Proposition 5.1.

6 Appendix. Asymptotics of \(x_j\)


We claim that

$$\begin{aligned} x_j = 2\,\sqrt{j} -1 + O\left( \frac{1}{\sqrt{j}}\right) , \qquad j \rightarrow +\infty . \end{aligned}$$

To prove this, let \(z_j = 1 + x_j\) for \(j = 0,1,2,\dots \). The recursion formula says that \(z_{j+1} - z_j = 1/z_j +1/z_{j+1} \). We have

$$\begin{aligned} z_{j+1}^2 - z_j^2 = (z_{j+1} + z_j)\left( \frac{1}{z_j} + \frac{1}{z_{j+1}}\right) = 2 + \frac{z_{j+1}}{z_{j}} + \frac{z_{j}}{z_{j+1}} > 2. \end{aligned}$$

Writing \(z_j^2\) as a telescoping sum, we obtain

$$\begin{aligned} z_j^2 = z_0^2 + \sum _{0}^{j-1} \left( z_{\nu +1}^2 - z_\nu ^2 \right) \ge 1 +2j, \end{aligned}$$

and so \(z_{j+1} - z_j = 1/z_j + 1/z_{j+1} \le 2/z_j \le 2/\sqrt{1+2j}\), since \(z_{j+1} > z_j\). We continue with the sum in (5.20), getting

$$\begin{aligned} z_{j+1}^2 - z_j^2&= 2 + \frac{z_{j+1}-z_{j}}{z_{j}} + 1 + \frac{z_{j}-z_{j+1}}{z_{j+1}} + 1 = 4 + (z_{j+1}-z_{j})\left( \frac{1}{z_j} - \frac{1}{z_{j+1}}\right) \\&= 4 +\frac{(z_{j+1}-z_{j})^2}{z_{j}z_{j+1}} = 4 +O\left( \frac{1}{(1+2j)^{2}} \right) . \end{aligned}$$

Summing as above, we find

$$\begin{aligned} z_j^2 = 4j + O(1), \end{aligned}$$

and so \(z_j = 2\,\sqrt{j} + O(1/\sqrt{j})\), whence \(x_j = z_j - 1 = 2\,\sqrt{j} - 1 + O(1/\sqrt{j})\). This proves the claim.
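As a numerical sanity check, outside the formal argument, the asymptotics can be verified by iterating the recursion directly. The sketch below assumes the initial value \(z_0 = 1\), i.e. \(x_0 = 0\); it uses the rearrangement \(z_{j+1} - 1/z_{j+1} = z_j + 1/z_j\), which determines \(z_{j+1}\) as the positive root of a quadratic.

```python
import math

def next_z(z):
    # The recursion z_{j+1} - z_j = 1/z_j + 1/z_{j+1} rearranges to
    # z_{j+1} - 1/z_{j+1} = z_j + 1/z_j =: c, i.e. z_{j+1}^2 - c*z_{j+1} - 1 = 0.
    c = z + 1.0 / z
    return (c + math.sqrt(c * c + 4.0)) / 2.0  # the positive root

j_max = 100_000
z = 1.0  # assumed initial value z_0 = 1, i.e. x_0 = 0
for _ in range(j_max):
    z = next_z(z)

x = z - 1.0
# Deviation from the claimed asymptotics 2*sqrt(j) - 1; should be O(1/sqrt(j)).
deviation = x - (2.0 * math.sqrt(j_max) - 1.0)
print(deviation)
```

For \(j = 10^5\) the deviation is small and positive, consistent with \(z_j^2 = 4j + O(1)\) and the error term \(O(1/\sqrt{j})\).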