1 Introduction

A function \(g: \mathbb{R }\rightarrow \mathbb{R }\) is \(\alpha \)-Lipschitz for some \(\alpha > 0\) if \(|g(s) - g(t)| \le \alpha |s - t|\) for all \(s,t \in \mathbb{R }\). If \(\varGamma \) is a set of \(\alpha \)-Lipschitz functions from \(\mathbb{R }\) to \(\mathbb{R }\) such that \(\sup \{g(t_0) : g \in \varGamma \} < \infty \) for some \(t_0 \in \mathbb{R }\), then the function \(g^*: \mathbb{R }\rightarrow \mathbb{R }\) defined by \(g^*(t) = \sup \{g(t) : g \in \varGamma \},\,t \in \mathbb{R }\), is \(\alpha \)-Lipschitz. Also, if \(f: \mathbb{R }\rightarrow \mathbb{R }\) is an arbitrary function, then the set of \(\alpha \)-Lipschitz functions dominated by \(f\) is non-empty if and only if \(f\) is bounded below on compact intervals and satisfies \(\liminf _{t \rightarrow -\infty } f(t) - \alpha t > - \infty \) and \(\liminf _{t \rightarrow +\infty } f(t) + \alpha t > - \infty \). Therefore, in this case there is a unique greatest \(\alpha \)-Lipschitz function dominated by \(f\), and we call this function the \(\alpha \) -Lipschitz minorant of \(f\). See Fig. 1 for an example of the minorant of a Brownian sample path.
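
For example, the \(\alpha \)-Lipschitz minorant of \(f(t) = t^2\) is \(m(t) = t^2\) for \(|t| \le \alpha /2\) and \(m(t) = \alpha |t| - \alpha ^2/4\) for \(|t| > \alpha /2\): the minorant agrees with \(f\) wherever the slope of \(f\) is at most \(\alpha \) in absolute value, and continues along the tangent lines of slope \(\pm \alpha \) elsewhere.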

Fig. 1 A typical Brownian motion sample path and its associated Lipschitz minorant

Denoting the \(\alpha \)-Lipschitz minorant of \(f\) by \(m\), we have the explicit formula

$$\begin{aligned} m(t)&= \sup \{ h \in \mathbb{R }: h - \alpha |t-s| \le f(s) \text{ for } \text{ all } s \in \mathbb{R }\}\nonumber \\&= \inf \{f(s) + \alpha |t-s| : s \in \mathbb{R }\}. \end{aligned}$$
(1.1)

For the sake of completeness, we present a proof of these equalities in Lemma 9.1. The first equality says that for each \(t \in \mathbb{R }\) we construct \(m(t)\) by considering the set of “tent” functions \(s \mapsto h - \alpha |t-s|\) that have a peak of height \(h\) at the position \(t\) and are dominated by \(f\), and then taking the supremum of those peak heights—see Fig. 2 in Sect. 9. The second equality is simply a rephrasing of the first.
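
To make (1.1) concrete computationally, here is a minimal numerical sketch (ours, not from the paper) that evaluates the second expression in (1.1) by brute force on a finite grid; it computes the minorant of the restriction of the function to the sampled window, so values near the boundary ignore the behavior of the function outside it, and all parameter choices below are arbitrary.

```python
import numpy as np

def lipschitz_minorant(t, f, alpha):
    """Evaluate m(t_i) = min_j { f(t_j) + alpha * |t_i - t_j| } on a grid:
    a naive O(n^2) version of the second expression in (1.1), applied to
    the restriction of f to the window [t[0], t[-1]]."""
    t = np.asarray(t, dtype=float)
    f = np.asarray(f, dtype=float)
    return np.min(f[None, :] + alpha * np.abs(t[:, None] - t[None, :]), axis=1)

# Example: a Brownian-like path sampled on [0, 1].
rng = np.random.default_rng(0)
n = 1000
t = np.linspace(0.0, 1.0, n)
x = np.concatenate([[0.0], np.cumsum(rng.normal(scale=(1.0 / n) ** 0.5, size=n - 1))])
m = lipschitz_minorant(t, x, alpha=5.0)

assert np.all(m <= x)  # the minorant is dominated by the path
assert np.all(np.abs(np.diff(m)) <= 5.0 * np.diff(t) + 1e-9)  # and is alpha-Lipschitz
```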

Fig. 2 Lemma 9.1 shows that the height of the \(\alpha \)-Lipschitz minorant of a function \(f\) at a fixed time \(t\) is given by \(\sup \{ h \in \mathbb{R }: h - \alpha |t-s| \le f(s) \text{ for } \text{ all } s \in \mathbb{R }\} \)

The property that the pointwise supremum of a suitable family of \(\alpha \)-Lipschitz functions is also \(\alpha \)-Lipschitz is reminiscent of the fact that the pointwise supremum of a suitable family of convex functions is also convex, and so the notion of the \(\alpha \)-Lipschitz minorant of a function is analogous to that of the convex minorant. Indeed, there is a well-developed theory of abstract or generalized convexity that subsumes both of these concepts and is used widely in nonlinear optimization, particularly in the theory of optimal mass transport—see [2, 12, 14, 26], Sect. 3.3 of [41] and Chapter 5 of [50]. Lipschitz minorants have also been studied in convex analysis for their Lipschitz regularization and Lipschitz extension properties, and in this area are known as Pasch-Hausdorff envelopes [22, 23, 27, 42].

Furthermore, the second expression in (1.1) can be thought of as producing a function analogous to the smoothing of the function \(f\) by an integral kernel (that is, a function of the form \(t \mapsto \int _\mathbb{R }K(|t-s|) f(s) \, ds\) for some suitable kernel \(K: \mathbb{R }\rightarrow \mathbb{R }\)) where one has taken the “min-plus” or “tropical” point of view and replaced the algebraic operations of + and \(\times \) by, respectively, \(\wedge \) and +, so that integrals are replaced by infima. Note that if \(f\) is a continuous function that possesses an \(\alpha _0\)-Lipschitz minorant for some \(\alpha _0\) (and hence an \(\alpha \)-Lipschitz minorant for all \(\alpha \ge \alpha _0\)), then the \(\alpha \)-Lipschitz minorants converge pointwise monotonically up to \(f\) as \(\alpha \rightarrow +\infty \). Standard methods in optimization theory involve approximating a general function by a Lipschitz function and then determining approximate optima of the original function by finding optima of its Lipschitz approximant [20, 21, 24, 36].

We investigate here the stochastic process \((M_t)_{t \in \mathbb{R }}\) obtained by taking the \(\alpha \)-Lipschitz minorant of the sample path of a real-valued Lévy process \(X=(X_t)_{t \in \mathbb{R }}\) for which the \(\alpha \)-Lipschitz minorant almost surely exists, a condition that turns out to be equivalent to \(| \mathbb{E }[X_1] | \!<\! \alpha \) when \(X_0 = 0\) (excluding the trivial case where \(X_t = \pm \alpha t\) for \(t \in \mathbb{R }\))—see Proposition 2.1. Our original motivation for this undertaking was the abovementioned analogy between \(\alpha \)-Lipschitz minorants and convex minorants and the rich (and growing) literature on convex minorants of Brownian motion and Lévy processes in general [1, 3, 7, 9, 11, 19, 38, 39, 46].

In particular, we study properties of the contact set \({\fancyscript{Z}}:=\{ t \in \mathbb{R }: M_t = X_t \wedge X_{t-} \}\). This random set is stationary and regenerative, as shown in Theorem 2.6. Consequently, its distribution is that of the closed range of a subordinator “made stationary” in a suitable manner. For a broad class of Lévy processes we are able to identify the associated subordinator in the sense that we can determine its Laplace exponent—see Theorem 4.10.

We then consider the Lebesgue measure of the random set \(\fancyscript{Z}\) in Theorem 4.1 and Remark 4.2. If the paths of the Lévy process have either unbounded variation or bounded variation with drift \(d\) satisfying \(|d |> \alpha \), then the associated subordinator has zero drift, and hence the random set \(\fancyscript{Z}\) has zero Lebesgue measure almost surely. On the other hand, if the paths of the Lévy process have bounded variation and drift \(d\) satisfying \(|d| < \alpha \), then the associated subordinator has positive drift, and hence the random set \(\fancyscript{Z}\) has infinite Lebesgue measure almost surely. In Theorem 4.7 we give conditions under which the Lévy measure of the subordinator associated to the set \(\fancyscript{Z}\) has finite total mass, which implies that \(\fancyscript{Z}\) is a discrete set in the case where it has zero Lebesgue measure. This relies on a result of independent interest concerning the local behavior of a Lévy process at its local extrema—see Theorem 3.1.

If for the moment we write \(\fancyscript{Z}_\alpha \) instead of \(\fancyscript{Z}\) to stress the dependence on \(\alpha \), then it is clear that \(\fancyscript{Z}_{\alpha ^{\prime }} \subseteq \fancyscript{Z}_{\alpha ^{\prime \prime }}\) for \(\alpha ^{\prime } \le \alpha ^{\prime \prime }\). We find in Theorem 5.5 that if the Lévy process is abrupt, that is, if its paths have unbounded variation and “sharp” local extrema in a suitable sense (see Definition 5.1), then the set \(\bigcup _\alpha \fancyscript{Z}_\alpha \) is almost surely the set of local infima of the Lévy process.

Lastly, when the Lévy process is a Brownian motion with drift, we can compute explicitly the distributions of a number of functionals of the \(\alpha \)-Lipschitz minorant process. In order to describe these results, we first note that it follows from Lemma 9.3 below that the graph of the \(\alpha \)-Lipschitz minorant \(M\) over one of the connected components of the complement of \(\fancyscript{Z}\) is almost surely a “sawtooth” that consists of a line of slope \(+\alpha \) followed by a line of slope \(-\alpha \). Set \(G := \sup \{ t < 0 : t \in \fancyscript{Z} \}\), \(D := \inf \{ t > 0 : t \in \fancyscript{Z} \}\), and put \(K := D-G\). Let \(T\) be the unique \(t \in [G,D]\) such that \(M(t) = \max \{M(s) : s \in [G,D]\}\). That is, \(T\) is the place where the peak of the sawtooth occurs. Further, let \(H := X_T - M_T\) be the distance between the Brownian path and the \(\alpha \)-Lipschitz minorant at the time where the peak occurs.

The following theorem summarizes a series of results that we establish in Sect. 8.

Theorem 1.1

Suppose that \(X\) is a Brownian motion with drift \(\beta \), where \(|\beta |<\alpha \). Then, the following hold.

  (a)

    The Lévy measure \(\varLambda \) of the subordinator associated to the contact set \(\fancyscript{Z}\) has finite mass and is characterized up to a scalar multiple by

    $$\begin{aligned} \frac{\displaystyle \int \nolimits _{\mathbb{R }_+} (1 - e^{-\theta x}) \, \varLambda (dx)}{\displaystyle \int \nolimits _{\mathbb{R }_+} x \, \varLambda (dx)} = \frac{4 (\alpha ^2 - \beta ^2) \theta }{ \left( \sqrt{2 \theta + (\alpha - \beta )^2} + \alpha - \beta \right) \left( \sqrt{2 \theta + (\alpha + \beta )^2} + \alpha + \beta \right) }. \end{aligned}$$
  (b)

    When \(\beta = 0\) the measure \(\varLambda \) is absolutely continuous with respect to Lebesgue measure with

    $$\begin{aligned} \frac{\varLambda (d x)}{\varLambda (\mathbb{R }_+)} = \left[ \sqrt{\frac{2}{\pi }}\, \alpha \, x^{-\frac{1}{2}} e^{- \frac{\alpha ^2 x}{2}} - 2 \alpha ^2 \varPhi (- \alpha x^{\frac{1}{2}}) \right] dx, \end{aligned}$$

    where \(\varPhi \) is the standard normal cumulative distribution function (that is, \(\varPhi (z) := \int _{-\infty }^{z} \frac{1}{\sqrt{2 \pi }} e^{-\frac{t^2}{2}} \, dt\)). A numerical sanity check of this density appears just after the theorem.

  (c)

    The distribution of \(T\) is characterized by

    $$\begin{aligned} \mathbb{E }[e^{-\theta T}] = \frac{8 \alpha (\alpha ^2-\beta ^2)}{\theta } \left( \frac{1}{ \sqrt{(\alpha +\beta )^2 - 2 \theta } + 3 \alpha - \beta } - \frac{1}{ \sqrt{(\alpha -\beta )^2 + 2 \theta } + 3 \alpha + \beta } \right) \end{aligned}$$

    for \(- \frac{(\alpha -\beta )^2}{2} \le \theta \le \frac{(\alpha +\beta )^2}{2} \). Also,

    $$\begin{aligned} \mathbb{P }\{T>0\} = \frac{1}{2} \left( 1 + \frac{\beta }{\alpha } \right) \!. \end{aligned}$$
  (d)

    The random variable \(H\) has a \({\mathrm{Gamma}}(2, 4 \alpha )\) distribution; that is, the distribution of \(H\) is absolutely continuous with respect to Lebesgue measure with density \(h \mapsto (4 \alpha )^2 h e^{-4 \alpha h},\,h \ge 0\).
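
As a numerical sanity check of parts (a) and (b) (our own sketch, not part of the proof): with \(\beta = 0\), the density in (b) should integrate to one, and its mean should equal \(1/(2\alpha ^2)\), the value forced by letting \(\theta \rightarrow \infty \) in (a). The choice of \(\alpha \) below is arbitrary.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

alpha = 1.7  # arbitrary choice of the Lipschitz constant

# Substituting x = u^2 in the density of part (b) removes the integrable
# x^{-1/2} singularity at the origin; dx becomes 2u du.
def integrand(u):
    return (2.0 * np.sqrt(2.0 / np.pi) * alpha * np.exp(-(alpha * u) ** 2 / 2.0)
            - 4.0 * alpha**2 * u * norm.cdf(-alpha * u))

total, _ = quad(integrand, 0.0, np.inf)
mean, _ = quad(lambda u: u * u * integrand(u), 0.0, np.inf)

print(total)                  # ~ 1.0: the normalized measure is a probability measure
print(mean * 2.0 * alpha**2)  # ~ 1.0: the mean is 1/(2 alpha^2), matching (a) as theta -> infinity
```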

The rest of this article is organized as follows. In Sect. 2 we provide precise definitions and give some preliminary results relating to the nature of the contact set. In Sect. 3 we introduce some results relating to the local behavior of a Lévy process at its local extrema. In Sect. 4 we describe the subordinator associated with the contact set, and in Sect. 5 we describe the limit of the contact set as \(\alpha \rightarrow \infty \). In order to prove Theorem 4.10 we need some preliminary results relating to the future infimum of a Lévy process, which we give in Sect. 6, and then we prove Theorem 4.10 in Sect. 7. In Sect. 8 we cover in detail the special case when \(X\) is a two-sided Brownian motion with drift. Finally, in Sect. 9 we give some basic facts about the \(\alpha \)-Lipschitz minorant of a function that are helpful throughout the paper.

2 Definitions and preliminary results

2.1 Basic definitions

Let \(X = (X_t)_{t \in \mathbb{R }}\) be a real-valued Lévy process. That is, \(X\) has càdlàg sample paths, \(X_0=0\), and \(X_t-X_s\) is independent of \(\{X_r : r \le s\}\) with the same distribution as \(X_{t-s}\) for \(s,t \in \mathbb{R }\) with \(s<t\).

The Lévy-Khintchine formula says that for \(t \in \mathbb R \) the characteristic function of \(X_t\) is given by \(\mathbb{E }[e^{i \theta X_t}] = e^{-|t| \varPsi (\text{ sgn }(t) \theta ) }\) for \(\theta \in \mathbb{R }\), where

$$\begin{aligned} \varPsi (\theta ) = - i a \theta + \frac{1}{2} \sigma ^2 \theta ^2 + \int _\mathbb{R }(1 - e^{i \theta x} + i \theta x 1_{ \{ |x| < 1 \} } ) \, \varPi (dx) \end{aligned}$$

with \(a \in \mathbb{R },\,\sigma \in \mathbb{R }_+\), and \(\varPi \) a \(\sigma \)-finite measure concentrated on \(\mathbb{R }{\setminus } \{0\}\) satisfying \(\int _\mathbb{R }(1 \wedge x^2) \, \varPi (dx) < \infty \). We call \(\sigma ^2\) the infinitesimal variance of the Brownian component of \(X\) and \(\varPi \) the Lévy measure of \(X\).

The sample paths of \(X\) have bounded variation almost surely if and only if \(\sigma = 0\) and \(\int _\mathbb{R }(1 \wedge |x| ) \, \varPi (dx) < \infty \). In this case \(\varPsi \) can be rewritten as

$$\begin{aligned} \varPsi (\theta ) = - i d \theta + \int _\mathbb{R }(1-e^{i \theta x} ) \, \varPi (dx). \end{aligned}$$

We call \(d \in \mathbb{R }\) the drift coefficient. For full details of these definitions see [4].

We will often need the result of Shtatland [45] that if \(X\) has paths of bounded variation with drift \(d\), then

$$\begin{aligned} \lim _{t \downarrow 0} t^{-1} X_t = d \quad \text{ a.s. } \end{aligned}$$
(2.1)

The counterpart of Shtatland’s result when \(X\) has paths of unbounded variation is Rogozin’s result

$$\begin{aligned} \liminf _{t \downarrow 0} t^{-1} X_t = - \infty \quad \text{ and } \quad \limsup _{t \downarrow 0} t^{-1} X_t = + \infty \quad \text{ a.s. } \end{aligned}$$
(2.2)

For the sake of reference, we also record here a regularity criterion due to Rogozin [44] (see also [4, Proposition VI.11]) that we use frequently:

$$\begin{aligned}&\text{ zero } \text{ is } \text{ regular } \text{ for } (-\infty ,0] \nonumber \\&\qquad \Longleftrightarrow \nonumber \\&\int _0^1 t^{-1} \mathbb{P }\{X_t \le 0\} \, dt = \infty . \end{aligned}$$
(2.3)

Of course, (2.3) has an obvious analogue that determines when zero is regular for \([0,\infty )\). Note that, except for the case \(\varPi (\mathbb{R })<\infty \) and \(d=0\), zero is regular for \((-\infty ,0]\) (resp. \([0,\infty )\)) if and only if zero is regular for \((-\infty ,0)\) (resp. \((0,\infty )\)) — see the remarks preceding [4, Proposition VI.11].

2.2 Existence of a minorant

Proposition 2.1

Let \(X\) be a Lévy process. The \(\alpha \)-Lipschitz minorant of \(X\) exists almost surely if and only if either \(\sigma = 0,\,\varPi =0\) and \(|d| = \alpha \) (equivalently, \(X_t = \alpha t\) for all \(t \in \mathbb{R }\) or \(X_t = -\alpha t\) for all \(t \in \mathbb{R }\)), or \(\mathbb{E }[|X_1|] < \infty \) and \(| \mathbb{E }[X_1]| < \alpha \).

Proof

As we remarked in the Introduction, the \(\alpha \)-Lipschitz minorant of a function \(f: \mathbb{R }\rightarrow \mathbb{R }\) exists if and only if \(f\) is bounded below on compact intervals and satisfies \(\liminf _{t \rightarrow -\infty } f(t) - \alpha t > - \infty \) and \(\liminf _{t \rightarrow +\infty } f(t) + \alpha t > - \infty \).

Since the sample paths of a Lévy process are almost surely bounded on compact intervals, we need necessary and sufficient conditions for \(\liminf _{t \rightarrow -\infty } X_t - \alpha t > - \infty \) and \(\liminf _{t \rightarrow +\infty } X_t + \alpha t > - \infty \) to hold almost surely. This is equivalent to requiring that

$$\begin{aligned} \limsup _{t \rightarrow +\infty } X_t - \alpha t < +\infty \quad \text{ a.s. } \qquad \text{ and } \qquad \liminf _{t \rightarrow +\infty } X_t + \alpha t > -\infty \quad \text{ a.s. } \end{aligned}$$
(2.4)

It is obvious that the two conditions in (2.4) hold if \(\sigma = 0,\,\varPi =0\) and \(|d| = \alpha \). It is clear from the strong law of large numbers that they also hold if \(\mathbb{E }[|X_1|] < \infty \) and \(| \mathbb{E }[X_1]| < \alpha \).

Consider the converse. Writing \(x^+ := x \vee 0\) and \(x^- := - (x \wedge 0)\) for \(x \in \mathbb{R }\), the strong law of large numbers precludes any case where either \(\mathbb{E }[X_1^+] = +\infty \) and \(\mathbb{E }[X_1^-] < + \infty \) or \(\mathbb{E }[X_1^+] < +\infty \) and \(\mathbb{E }[X_1^-] = +\infty \). A result of Erickson [13, Chapter 4, Theorem 15] rules out the possibility \(\mathbb{E }[X_1^+] = \mathbb{E }[X_1^-] = +\infty \), and so \(\mathbb{E }[|X_1|] < \infty \). It now follows from the strong law of large numbers that \(\lim _{t \rightarrow \infty } t^{-1} X_t = \mathbb{E }[X_1]\) and so \(|\mathbb{E }[X_1]| \le \alpha \). Suppose that \(X_t\) is non-degenerate for \(t \ne 0\) (that is, that \(\sigma \ne 0\) or \(\varPi \ne 0\)). Then, \(\limsup _{t \rightarrow \infty } X_t - \mathbb{E }[X_1] t = +\infty \) almost surely and \(\liminf _{t \rightarrow \infty } X_t - \mathbb{E }[X_1] t = -\infty \) almost surely (see, for example, [25, Corollary 9.14]), and so \(|\mathbb{E }[X_1]| < \alpha \) in this case. \(\square \)

Hypothesis 2.2

From now on we assume, unless we note otherwise, that the Lévy process \(X = (X_t)_{t \in \mathbb{R }}\) has the properties:

  • \(X_0 = 0\);

  • \(X_t\) is non-degenerate for \(t \ne 0\);

  • \(\mathbb{E }[|X_1|]<\infty \);

  • \(| \mathbb{E }[X_1]| < \alpha \).

Notation 2.3

As in the Introduction, let \(M = (M_t)_{t \in \mathbb{R }}\) be the \(\alpha \)-Lipschitz minorant of \(X\). Put \(\fancyscript{Z} = \{ t \in \mathbb{R }: M_t = X_t \wedge X_{t-} \}\).

2.3 The contact set is stationary and regenerative

It follows fairly directly from our standing assumptions Hypothesis 2.2 that the random set \(\fancyscript{Z}\) is almost surely unbounded above and below. (Alternatively, it follows even more easily from Hypothesis 2.2 that \(\fancyscript{Z}\) is non-empty almost surely. We show below that \(\fancyscript{Z}\) is stationary, and any non-empty stationary random set is necessarily almost surely unbounded above and below.)

We now show that the contact set \(\fancyscript{Z}\) is stationary and that it is also regenerative in the sense of Fitzsimmons and Taksar [16]. For simplicity, we specialize the definition in [16] somewhat as follows by only considering random sets defined on probability spaces (rather than general \(\sigma \)-finite measure spaces).

Let \(\varOmega ^0\) denote the class of closed subsets of \(\mathbb{R }\). For \(t \in \mathbb R \) and \(\omega ^0 \in \varOmega ^0\), define

$$\begin{aligned} d_t(\omega ^0) := \inf \{ s>t: s \in \omega ^0 \}, \quad r_t(\omega ^0) := d_t(\omega ^0) - t, \end{aligned}$$

and

$$\begin{aligned} \tau _t(\omega ^0) = \mathbf{cl} \{ s-t:s \in \omega ^0 \cap (t,\infty ) \} = \mathbf{cl} \left( (\omega ^0-t) \cap (0,\infty ) \right) \!. \end{aligned}$$

Here \(\mathbf{cl}\) denotes closure and we adopt the convention \(\inf \emptyset = + \infty \). Note that \(t \in \omega ^0\) if and only if \(\lim _{s \uparrow t} r_s(\omega ^0) = 0\), and so \(\omega ^0 \cap (-\infty ,t]\) can be reconstructed from \(r_s(\omega ^0),\,s \le t\), for any \(t \in \mathbb R \). Set \(\fancyscript{G}^0 = \sigma \{ r_s : s \in \mathbb{R }\}\) and \(\fancyscript{G}_t^0 = \sigma \{ r_s:s \le t \}\). Clearly, \((d_t)_{t \in \mathbb{R }}\) is an increasing càdlàg process adapted to the filtration \((\fancyscript{G}_t^0)_{t \in \mathbb{R }}\), and \(d_t \ge t\) for all \(t \in \mathbb{R }\).
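
To make the definitions concrete, the following minimal sketch (ours) computes \(d_t\) and \(r_t\) for a finite set of points standing in for \(\omega ^0\); representing a general closed set would require more care.

```python
import numpy as np

def d_t(omega0, t):
    """First point of omega0 strictly after time t, with inf of the empty set = +infinity."""
    pts = np.sort(np.asarray(omega0, dtype=float))
    later = pts[pts > t]
    return float(later[0]) if later.size else np.inf

def r_t(omega0, t):
    """Waiting time from t until omega0 is next visited."""
    return d_t(omega0, t) - t

omega0 = [-2.0, 0.5, 1.5, 3.0]
print(d_t(omega0, 0.0), r_t(omega0, 0.0))  # 0.5 0.5
print(d_t(omega0, 3.0))                    # inf: no point of omega0 lies strictly after 3.0
```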

A random set is a measurable mapping \(S\) from a measurable space \((\varOmega ,\fancyscript{F})\) into \((\varOmega ^0,\fancyscript{G}^0)\).

Definition 2.4

A probability measure \(\mathbb{Q }\) on \((\varOmega ^0,\fancyscript{G}^0)\) is regenerative with regeneration law \(\mathbb{Q }^0\) if

  (i)

    \(\mathbb{Q }\{d_t = +\infty \} = 0\), for all \(t \in \mathbb{R }\);

  (ii)

    for all \(t \in \mathbb{R }\) and for all \(\fancyscript{G}^0\)-measurable nonnegative functions \(F\),

    $$\begin{aligned} \mathbb{Q }\left[ F(\tau _{d_t}) \, | \, \fancyscript{G}_{t+}^0 \right] = \mathbb{Q }^0[F], \end{aligned}$$
    (2.5)

    where we write \(\mathbb{Q }[\cdot ]\) and \(\mathbb{Q }^0[\cdot ]\) for expectations with respect to \(\mathbb{Q }\) and \(\mathbb{Q }^0\).

A random set \(S\) defined on a probability space \((\varOmega ,\fancyscript{F}, \mathbb{P })\) is a regenerative set if the push-forward of \(\mathbb{P }\) by the map \(S\) (that is, the distribution of \(S\)) is a regenerative probability measure.

Remark 2.5

Suppose that the probability measure \(\mathbb{Q }\) on \((\varOmega ^0,\fancyscript{G}^0)\) is stationary; that is, if \(S^0\) is the identity map on \(\varOmega ^0\), then the random set \(S^0\) on \((\varOmega ^0,\fancyscript{G}^0, \mathbb{Q })\) has the same distribution as \(u+S^0\) for any \(u \in \mathbb{R }\) or, equivalently, that the process \((r_t)_{t \in \mathbb{R }}\) has the same distribution as \((r_{t-u})_{t \in \mathbb{R }}\) for any \(u \in \mathbb{R }\). Then, in order to check conditions (i) and (ii) of Definition 2.4 it suffices to check them for the case \(t=0\).

The probability measure \(\mathbb{Q }^0\) is itself regenerative. It assigns all of its mass to the collection of closed subsets of \(\mathbb{R }_+\). As remarked in [16], it is well known that any regenerative probability measure with this property arises as the distribution of a random set of the form \(\mathbf{cl}\{Y_t : Y_t > Y_0, \, t \ge 0\}\), where \((Y_t)_{t \ge 0}\) is a subordinator (that is, a non-decreasing, real-valued Lévy process) with \(Y_0 = 0\)—see [28, 29]. Note that \(\mathbf{cl}\{Y_t : Y_t > Y_0, \, t \ge 0\}\) has the same distribution as \(\mathbf{cl}\{Y_{ct} : Y_{ct} > Y_0, \, t \ge 0\}\) for any \(c > 0\), and so the distribution of the subordinator associated with a regeneration law can at most be determined up to a linear time change (equivalently, the corresponding drift and Lévy measure can at most be determined up to a common constant multiple). It turns out that the distribution of the subordinator is unique except for this ambiguity—again see [28, 29].

We refer the reader to [16, Theorem \(3.5\)] for a description of the sense in which a stationary regenerative probability measure \(\mathbb{Q }\) with regeneration law \(\mathbb{Q }^0\) can be regarded as \(\mathbb{Q }^0\) “made stationary”. Note that if \(\varLambda \) is the Lévy measure of the subordinator associated with \(\mathbb{Q }\) in this way, then, by stationarity, it must be the case that \(\int _{\mathbb{R }_+} y \, \varLambda (dy) < \infty \).

Theorem 2.6

Let \(X\) be a Lévy process that satisfies our standing assumptions Hypothesis 2.2. Then, the random (closed) set \(\fancyscript{Z}\) is stationary and regenerative.

Proof

It follows from Lemma 9.3 that \(\fancyscript{Z}\) is almost surely closed.

We next show that \(\fancyscript{Z}\) is stationary. Note for \(u \in \mathbb{R }\) that \(u + \fancyscript{Z} = \{t \in \mathbb{R }: X_{t-u} \wedge X_{(t-u)-} = M_{t-u}\}\). Define \((\breve{X}_t)_{t \in \mathbb{R }}\) by \(\breve{X}_t = X_{t-u} - X_{-u}\) for \(t \in \mathbb{R }\) and let \(\breve{M}\) be the \(\alpha \)-Lipschitz minorant of \(\breve{X}\). Note that \(\breve{M}_t = M_{t-u} - X_{-u}\) for \(t \in \mathbb{R }\). Therefore, \(u + \fancyscript{Z} = \{t \in \mathbb{R }: \breve{X}_t \wedge \breve{X}_{t-} = \breve{M}_t\}\) and hence \(u + \fancyscript{Z}\) has the same distribution as \(\fancyscript{Z}\) because \(\breve{X}\) has the same distribution as \(X\).

We now show that \(\fancyscript{Z}\) is regenerative. For \(t \in \mathbb{R }\) set

$$\begin{aligned} D_t&:= \inf \{ s > t : X_s \wedge X_{s-} = M_s \} = d_t \circ \fancyscript{Z},\\ R_t&:= D_t - t,\\ S_t&:= \inf \left\{ s > t : X_s \wedge X_{s-} - \alpha (s-t) \le \inf \{ X_u - \alpha (u-t) : u \le t\} \right\} \!, \end{aligned}$$

and \(\fancyscript{F}_t := \bigcap _{s > t} \sigma \{X_u : u \le s\}\).

We claim that \(X_{S_t} \le X_{S_t-}\) almost surely. Suppose to the contrary that the event \(A := \{X_{S_t} > X_{S_t-}\}\) has positive probability. On the event \(A,\,X_s > X_{S_t} - (X_{S_t} - X_{S_t-})/2\) for \(s \in (S_t, S_t + \delta )\) for some (random) \(\delta > 0\). Hence, \(X_{s-} \ge X_{S_t} - (X_{S_t} - X_{S_t-})/2\) for \(s \in (S_t, S_t + \delta )\), and so \(X_s \wedge X_{s-} - \alpha (s - t) > X_{S_t} \wedge X_{S_t-} - \alpha (S_t-t) = X_{S_t-} - \alpha (S_t-t)\) for \(s \in (S_t, S_t + \delta )\) on the event \(A\) provided \(\delta \) is sufficiently small. It follows that, on the event \(A,\,X_{S_t-} - \alpha (S_t-t) \le \inf \{ X_u - \alpha (u-t) : u \le t\}\) and \(X_s \wedge X_{s-} - \alpha (s-t) > \inf \{ X_u - \alpha (u-t) : u \le t\}\) for \(t \le s < S_t\). Define a non-decreasing sequence of stopping times \(\{U_n\}_{n \in \mathbb N }\) by

$$\begin{aligned} U_n := \inf \left\{ s > t : X_s \wedge X_{s-} - \alpha (s-t) \le \inf \{ X_u - \alpha (u-t): u \le t\} + \frac{1}{n}\right\} \!, \end{aligned}$$

and set \(U_\infty := \sup _{n \in \mathbb N } U_n\). We have shown that, on the event \(A,\,U_n < S_t\) for all \(n \in \mathbb N \) and \(U_\infty = S_t\). By the quasi-left-continuity of \(X,\,\lim _{n \rightarrow \infty } X_{U_n} = X_{U_\infty }\) almost surely. In particular, \(X_{S_t} = X_{S_t-}\) almost surely on the event \(A\), and so \(A\) cannot have positive probability.

Lemma 9.4 now gives that almost surely

$$\begin{aligned} D_t = \inf \left\{ s \ge S_t : X_s \wedge X_{s-} + \alpha (s-S_t) = \inf \{ X_u + \alpha (u-S_t) : u \ge S_t \} \right\} \!. \end{aligned}$$

We have already remarked that \(\fancyscript{Z}\) is almost surely unbounded above and below, and hence condition (i) of Definition 2.4 holds. By Remark 2.5, in order to check condition (ii) of Definition 2.4, it suffices to consider the case \(t=0\).

For notational simplicity, set \(S := S_0\) and \(D := D_0\)—see Fig. 3 for two illustrations of the construction of \(S\) and \(D\) from a sample path. For a random time \(U\), let \(\fancyscript{F}_{U}\) be the \(\sigma \)-field generated by random variables of the form \(\xi _U\), where \((\xi _t)_{t \in \mathbb{R }}\) is some optional process for the filtration \((\fancyscript{F}_t)_{t \in \mathbb{R }}\) (cf. Millar [32, 33]). It follows from Corollary 9.2 (where we are thinking intuitively of removing the process to the right of \(D\) rather than to the right of zero) that \(\bigcap _{\epsilon > 0} \sigma \{R_s : s \le \epsilon \} \subseteq \fancyscript{F}_{D}\).

Put

$$\begin{aligned} \tilde{X} = (\tilde{X}_s)_{s \ge 0} := \left( (X_{S+s} - X_S) + \alpha s \right) _{s \ge 0}\!. \end{aligned}$$

By the strong Markov property at the stopping time \(S\) and the spatial homogeneity of \(X\), the process \(\tilde{X}\) is independent of \(\fancyscript{F}_S\) with the same distribution as the Lévy process \((X_t + \alpha t)_{t \ge 0}\). Suppose for the Lévy process \((X_t + \alpha t)_{t \ge 0}\) that zero is regular for the interval \((0,\infty )\). A result of Millar [31, Proposition \(2.4\)] implies that almost surely there is a unique time \(\tilde{T}\) such that \(\tilde{X}_{\tilde{T}} = \inf \{\tilde{X}_s: s \ge 0\}\) and that if \(\bar{T}\) is such that \(\tilde{X}_{\bar{T} -} = \inf \{\tilde{X}_s: s \ge 0\}\), then \(\bar{T} = \tilde{T}\). Thus, \(\tilde{T} = \sup \{ t \ge 0: \tilde{X}_t \wedge \tilde{X}_{t-} = \inf \{\tilde{X}_s: s \ge 0\} \}\) and \(D = S + \tilde{T}\). Combining this observation with the main result of Millar [32] (see Remark 2.7 below) and the fact that \(\tilde{X}_{\tilde{T}} = \inf \{\tilde{X}_s: s \ge 0\}\) gives that \((\tilde{X}_{\tilde{T}+t})_{t \ge 0}\) is conditionally independent of \(\fancyscript{F}_D\) given \(\tilde{X}_{\tilde{T}}\). Thus, again by the spatial homogeneity of \(\tilde{X},\,(\tilde{X}_{\tilde{T}+t} - \tilde{X}_{\tilde{T}})_{t \ge 0}\) is independent of \(\fancyscript{F}_D\). This establishes condition (ii) of Definition 2.4 for \(t=0\).

If zero is not regular for the interval \((0,\infty )\) for the Lévy process \((X_t + \alpha t)_{t \ge 0}\), then zero is necessarily regular for the interval \((0,\infty )\) for the Lévy process \((X_{-t-} + \alpha t)_{t \ge 0}\) because this latter process has the same distribution as \((-(X_t + \alpha t) + 2 \alpha t)_{t \ge 0}\). The argument above then establishes that the random set \(-\fancyscript{Z}\) is regenerative. It follows from [16, Theorem 4.1] that \(\fancyscript{Z}\) is regenerative with the same distribution as \(-\fancyscript{Z}\).

\(\square \)

Remark 2.7

A key ingredient in the proof of Theorem 2.6 was the result of Millar from [32] which says that, under suitable conditions, the future evolution of a càdlàg strong Markov process after the time it attains its global minimum is conditionally independent of the past up to that time given the value of the process and its left limit at that time. That result follows in turn from results in [18] on last exit decompositions or results in [40] on analogues of the strong Markov property at general coterminal times. Note that in the case of Lévy processes the independence can also be proved using Pitman and Uribe Bravo’s description of the convex minorant of a Lévy process [38], since in this description the pre- and post-minimum processes are obtained by restricting a Poisson point process to disjoint sets. We did not apply Millar’s result directly; rather, we considered a random time \(D = D_0\) that was the last time after a stopping time that a strong Markov process attained its infimum over times greater than the stopping time and combined Millar’s result with the strong Markov property at the stopping time. An alternative route would have been to observe that the random time \(D\) is a randomized coterminal time in the sense of [33] for a suitable strong Markov process.

3 Behavior of Lévy processes at local extrema

To identify properties of the subordinator associated with the Lipschitz minorant, Theorem 4.7 will make use of the main result of this section, Theorem 3.1, concerning the behavior of Lévy processes at local extrema. We also include in this section related results that we do not use later, but which we believe are of some interest. Write

$$\begin{aligned} \fancyscript{M}^- := \bigcup _{\epsilon > 0} \{t \in \mathbb{R }: X_t \wedge X_{t-} = \inf \{X_s: s \in (t-\epsilon ,t+\epsilon )\}\} \end{aligned}$$
(3.1)

for the set of local infima of the path of \(X\) and

$$\begin{aligned} \fancyscript{M}^+ := \bigcup _{\epsilon > 0} \{t \in \mathbb{R }: X_t \vee X_{t-} = \sup \{X_s: s \in (t-\epsilon ,t+\epsilon )\}\} \end{aligned}$$
(3.2)

for the set of local suprema.

Theorem 3.1

Let \(X\) be a Lévy process such that \(\sigma \ne 0\) or \(\varPi (\mathbb{R }) = \infty \). For \(r \ge 0\) and \(t \in \mathbb{R }\), define

$$\begin{aligned} T_t^{(r)} := \inf \{ s > 0: X_{t + s} - X_t \wedge X_{t-} \le r s \}. \end{aligned}$$

Then, \(T_t^{(r)} > 0\) for all \(t \in \fancyscript{M}^-\) almost surely if and only if

$$\begin{aligned} \int _0^1 t^{-1} \mathbb{P }\{X_t \in [0, r t]\} \, dt < \infty . \end{aligned}$$
(3.3)

Similarly, for \(r < 0\) and \(t \in \mathbb{R }\), define

$$\begin{aligned} T_t^{(r)} := \inf \{ s > 0: X_{t + s} - X_t \vee X_{t-} \ge r s \}. \end{aligned}$$

Then, \(T_t^{(r)} > 0\) for all \(t \in \fancyscript{M}^+\) almost surely if and only if

$$\begin{aligned} \int _0^1 t^{-1} \mathbb{P }\{X_t \in [ r t, 0]\} \, dt < \infty . \end{aligned}$$
(3.4)

Proof

We need prove only the result relating to local infima, since the result for local suprema follows by a time reversal argument.

We note from Remark 6.1 below that for \(0 \le a < b < \infty \) there is almost surely a unique time \(U \in [a,b]\) such that \(X_U \wedge X_{U-} = \inf \{X_t : t \in [a,b]\} = \inf \{X_t \wedge X_{t-}: t \in [a,b]\}\). We use the notation \(\arg \inf _{t \in [a,b]} X_t\) for the random time \(U\).

Let \(\xi \) be an exponential random time with parameter \(q\), \(0 < q < \infty \), independent of \(X\). Define

$$\begin{aligned} \rho := \arg \inf _{0 \le t \le \xi } X_t. \end{aligned}$$

We claim that by a classical localization argument it is sufficient to show that the result holds at \(\rho \) almost surely rather than at every \(t \in \fancyscript{M}^-\). To be explicit, suppose we have shown that the result holds at \(\rho \) almost surely. It is then true with \(\rho \) replaced by \(\arg \inf _{0 \le s \le t} X_s\) for all \(t>0\) outside some Lebesgue-null set. Let \(\{ t_n \}_{n \in \mathbb N }\) be a sequence of such times with \(t_n \downarrow 0\). For \(n \in \mathbb N \) define

$$\begin{aligned} \fancyscript{I}_n := \left\{ \arg \inf _{mt_n \le t \le (m+1)t_n} X_t \, : \, m \in \mathbb Z \right\} \!, \end{aligned}$$

so that the result is true almost surely for every element of \(\fancyscript{I}_n\), and define

$$\begin{aligned} \fancyscript{M}^-_n := \{t \in \mathbb{R }: X_t \wedge X_{t-} = \inf \{X_s : s \in (t-t_n,t+t_n) \} \}, \end{aligned}$$

so that \(\fancyscript{M}^- = \bigcup _{n \ge 1} \fancyscript{M}^-_n\). The localization argument is completed by noting that \(\fancyscript{M}^-_n \subseteq \fancyscript{I}_n\) for every \(n \in \mathbb N \).

To show that the result is true at \(\rho \) we use a technique involving the convex minorant of the path of a Lévy process. By results of Pitman and Uribe Bravo [38, Corollary 2], the linear segments of the convex minorant of the process \((X_t)_{0 \le t \le \xi }\) define a Poisson point process on \(\mathbb{R }_+ \times \mathbb{R }\), where a point at \((t, x)\) represents a segment with length \(t\) and increment \(x\). The intensity measure of the Poisson point process is the measure on \(\mathbb{R }_+ \times \mathbb{R }\) given by

$$\begin{aligned} e^{-qt} t^{-1} \mathbb{P }\{ X_t \in dx \} dt. \end{aligned}$$
(3.5)

In order to recreate the convex minorant from the point process, the segments are pieced together in increasing order of slope—[38, Proposition 1] states that almost surely no two segments have the same slope. Note that the time \(\rho \) is the supremum of the right-hand endpoints of the ordered segments with negative slope and the infimum of the left-hand endpoints of the segments with positive slope. Observe also that, up to translations in time and space, the restriction of the convex minorant to the time interval \([\rho ,\xi ]\) can be recreated by piecing together only the segments of positive slope. Let \(\fancyscript{S}\) be the infimum of the slopes of all segments of the convex minorant of \((X_t)_{0 \le t \le \xi }\) that have positive slope. It follows from (3.5) and the formula for the void probabilities of a Poisson point process that for \(r > 0\),

$$\begin{aligned} \mathbb{P }\{ \fancyscript{S} \ge r \} = \exp \left( - \int _0^\infty e^{-qt} t^{-1} \mathbb{P }\{ X_t \in [0, r t) \} \, dt \right) \!. \end{aligned}$$

Since almost surely no two segments may have the same slope we must have \(\int _0^\infty e^{-qt} t^{-1} \mathbb{P }\{ X_t = rt \} \, dt = 0\) for all \(r \ge 0\), and hence for all \(r \ge 0\) we have

$$\begin{aligned} \mathbb{P }\{ \fancyscript{S} \ge r \} = \exp \left( - \int _0^\infty e^{-qt} t^{-1} \mathbb{P }\{ X_t \in [0, r t] \} \, dt \right) \!. \end{aligned}$$

Suppose first that \(\int _0^1 t^{-1} \mathbb{P }\{X_t \in [0, rt]\} \, dt < \infty \). Then, \(\mathbb{P }\{ \fancyscript{S} \ge r \} > 0\) and thus, with positive probability, there exists \(\varepsilon >0\) such that \(X_{\rho +t}-X_\rho \wedge X_{\rho -} \ge r t\) for all \(0 \le t \le \varepsilon \). Hence, by Millar’s zero-one law at the infimum of a Lévy process, such an \(\varepsilon \) exists almost surely. By the almost sure uniqueness of the value of the infimum of a Lévy process that is not a compound Poisson process with zero drift [4, Proposition VI.4], there exists almost surely \(\varepsilon > 0\) such that \(X_{\rho +t}-X_\rho \wedge X_{\rho -} > r t\) for all \(0 < t \le \varepsilon \). Hence, \(T_\rho ^{(r)} > 0\) almost surely.

Now suppose that \(\int _0^1 t^{-1} \mathbb{P }\{X_t \in [0, r t]\} \, dt = \infty \). Then, \(\mathbb{P }\{ \fancyscript{S} \ge r \} = 0\), and since the convex minorant of \((X_t)_{0 \le t \le \xi }\) then almost surely contains linear segments with positive slope less than \(r\), it follows that \(T_\rho ^{(r)} = 0\) almost surely. \(\square \)

The following easy corollary is essentially a result of Vigon [49, Proposition 3.6] in the unbounded variation case. In the bounded variation case it can be easily deduced from the forthcoming Proposition 3.4.

Corollary 3.2

Let \(X\) be a Lévy process such that \(\sigma \ne 0\) or \(\varPi (\mathbb{R }) = \infty \). Define

$$\begin{aligned} r^*_- := \sup \left\{ r \ge 0 : \int _0^1 t^{-1} \mathbb{P }\{X_t \in [0, rt]\} \, dt < \infty \right\} \!. \end{aligned}$$

Then,

$$\begin{aligned} \liminf _{\varepsilon \downarrow 0} \varepsilon ^{-1}(X_{t + \varepsilon } - X_t \wedge X_{t-}) = r^*_- \end{aligned}$$

for all \(t \in \fancyscript{M}^-\) almost surely. Similarly, define

$$\begin{aligned} r^*_+ := \inf \left\{ r \le 0 : \int _0^1 t^{-1} \mathbb{P }\{X_t \in [rt,0]\} \, dt < \infty \right\} \!. \end{aligned}$$

Then,

$$\begin{aligned} \limsup _{\varepsilon \downarrow 0} \varepsilon ^{-1}(X_{t + \varepsilon } - X_t \wedge X_{t-}) = r^*_+ \end{aligned}$$

for all \(t \in \fancyscript{M}^+\) almost surely.

Proof

We will use the same notation as in Theorem 3.1. As before we only need prove the local infima case, and again it is enough to show that the result holds at \(\rho \) almost surely rather than at every \(t \in \fancyscript{M}^-\).

For any \(0 \le r < r^*_-\) we have \(\int _0^1 t^{-1} \mathbb{P }\{X_t \in [0, rt]\} \, dt < \infty \). Theorem 3.1 implies that \(T^{(r)}_\rho > 0\) almost surely, and thus

$$\begin{aligned} \liminf _{\varepsilon \downarrow 0} \varepsilon ^{-1}(X_{t + \varepsilon } - X_t \wedge X_{t-}) \ge r \end{aligned}$$

for all \(t \in \fancyscript{M}^-\) almost surely, for all \(0 \le r < r^*_-\).

For any \(r > r^*_-\) we have \(\int _0^1 t^{-1} \mathbb{P }\{X_t \in [0, rt]\} \, dt = \infty \). Theorem 3.1 implies that \(T_\rho ^{(r)} = 0\) almost surely. Hence, for all \(r > r^*_-\),

$$\begin{aligned} \liminf _{\varepsilon \downarrow 0} \varepsilon ^{-1}(X_{t + \varepsilon } - X_t \wedge X_{t-}) \le r \end{aligned}$$

for all \(t \in \fancyscript{M}^-\) almost surely. \(\square \)

Remark 3.3

When \(X\) has paths of bounded variation and drift \(d > 0\), (2.1) implies that zero is regular for the interval \([0,\infty )\) for the process \((X_t - d^{\prime } t)_{t \ge 0}\) if \(d^{\prime } < d\), but not if \(d^{\prime } > d\), and hence (2.3) implies that \(r^*_- = d\). By (2.3), the integral in (3.3) in the interesting case \(r=r^*_- = d\), i.e. \(\int _0^1 t^{-1} \mathbb{P }\{X_t \in [0, d t]\} \, dt\), will be finite if and only if zero is not regular for the interval \((-\infty ,0]\) for the process \((X_t - d t)_{t \ge 0}\). An integral condition due to Bertoin involving the Lévy measure \(\varPi \) can be used to determine whether this is the case or not [6]. Note that the reasoning as to why \(r^*_- = d\) also shows that \(r^*_+ = -\infty \)—as will be shown in Proposition 3.4, this is because there is a downwards jump at every \(t \in \fancyscript{M}^+\). Comments similar to all the preceding can be made when \(d < 0\). If \(d=0\), then \(r^*_-\) and \(-r^*_+\) are either infinite or zero, and at least one of them must be zero since \(\varPi (\mathbb{R }) = \infty \).

In the unbounded variation case some special cases are worth mentioning. The values of \(r^*_-\) and \(-r^*_+\) are infinite when \(X\) has a non-zero Brownian component or is a stable process with stability parameter in the interval (1, 2]—see the discussion of abrupt processes in Sect. 5. Vigon provides in an unpublished work [47] a practical method for determining whether the integral in (3.3) is finite or not for processes with paths of unbounded variation:

$$\begin{aligned} \int _0^\infty t^{-1} e^{-qt}\, \mathbb{P }\{X_t \in [at, bt]\} \, dt = \frac{1}{\pi } \int _a^b \left( \int _0^\infty \mathfrak {R}\, \frac{1}{q + \varPsi (-u) - iur} \, du \right) \, dr, \end{aligned}$$

where \(\varPsi \) is as defined in Sect. 2.1.
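
For example, for standard Brownian motion \(\mathbb{P }\{X_t \in [0, rt]\} = \varPhi (r \sqrt{t}) - \frac{1}{2} \le \frac{r \sqrt{t}}{\sqrt{2 \pi }}\), so \(\int _0^1 t^{-1} \mathbb{P }\{X_t \in [0, rt]\} \, dt \le \frac{2r}{\sqrt{2 \pi }} < \infty \) for every \(r \ge 0\), in line with the fact that \(r^*_- = \infty \) in this case.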

The following proposition completes the picture of the behavior at local extrema. Recall from the comments following (2.3) that, except for the case \(\varPi (\mathbb{R })<\infty \) and \(d=0\), zero is regular for \((-\infty ,0]\) (resp. \([0,\infty )\)) if and only if zero is regular for \((-\infty ,0)\) (respectively \((0,\infty )\)).

Proposition 3.4

Suppose that \(X\) is a Lévy process with paths of unbounded variation. Then, \(X_t = X_{t-}\) for all \(t \in \fancyscript{M}^-\) and all \(t \in \fancyscript{M}^+\) almost surely.

Next, suppose that \(X\) is a Lévy process with paths of bounded variation and drift \(d \ne 0\). Then, \(X_t \ne X_{t-}\) for all \(t \in \fancyscript{M}^-\) and all \(t \in \fancyscript{M}^+\) almost surely. Moreover, \(\lim _{\varepsilon \downarrow 0} \varepsilon ^{-1}(X_{t + \varepsilon } - X_t) = d\) for all \(t \in \fancyscript{M}^-\) and all \(t \in \fancyscript{M}^+\) almost surely.

Suppose finally that \(X\) is a Lévy process with paths of bounded variation, drift \(d=0\), and \(\varPi (\mathbb{R }) = \infty \). Then,

  (i)

    If zero is not regular for \([0,\infty )\), then \(X_t > X_{t-}\) for all \(t \in \fancyscript{M}^-\) and \(X_t = X_{t-}\) for all \(t \in \fancyscript{M}^+\) almost surely.

  (ii)

    If zero is not regular for \((-\infty ,0]\), then \(X_t = X_{t-}\) for all \(t \in \fancyscript{M}^-\) and \(X_t < X_{t-}\) for all \(t \in \fancyscript{M}^+\) almost surely.

  (iii)

    If zero is regular for both \((-\infty ,0]\) and \([0,\infty )\), then \(X_t = X_{t-}\) for all \(t \in \fancyscript{M}^-\) and all \(t \in \fancyscript{M}^+\) almost surely.

Moreover, in all cases \(\lim _{\varepsilon \downarrow 0} \varepsilon ^{-1}(X_{t + \varepsilon } - X_t) = 0\) for all \(t \in \fancyscript{M}^-\) and all \(t \in \fancyscript{M}^+\) almost surely.

Proof

As before, we only need prove the local infima cases, and again it is enough to show that the result holds at \(\rho \) almost surely rather than at every \(t \in \fancyscript{M}^-\).

Suppose that \(X\) has paths of unbounded variation. Then, \(\liminf _{t \downarrow 0} t^{-1} X_t = -\infty \) and \(\limsup _{t \downarrow 0} t^{-1} X_t = +\infty \) almost surely by (2.2), and hence zero is regular for both \((-\infty ,0]\) and \([0,\infty )\). That \(X_\rho = X_{\rho -}\) then follows from [31, Theorem \(3.1\)].

Next, suppose that \(X\) is a Lévy process with paths of bounded variation and drift \(d \ne 0\). A result of Millar states that any Lévy process for which zero is not regular for \([0,\infty )\) must jump out of its global infimum and that any Lévy process for which zero is not regular for \((-\infty ,0]\) must jump into its global infimum—see [31, Theorem 3.1]. By (2.1), \(\lim _{t \downarrow 0} t^{-1} X_t = d\) almost surely, and so one of these alternatives must hold. Hence, in either case, \(X_\rho \ne X_{\rho -}\).

Moreover, the fact that \(\rho \) is a jump time of \(X\) implies \(\lim _{\varepsilon \downarrow 0} \varepsilon ^{-1}(X_{\rho + \varepsilon } - X_\rho ) = d\). To see this, we argue as in [38]. For \(\delta >0\), let \(0 < J_1^\delta < J_2^\delta < \ldots \) be the successive nonnegative times at which \(X\) has jumps of size greater than \(\delta \) in absolute value. The strong Markov property applied at the stopping time \(J_i^\delta \) and (2.1) gives that

$$\begin{aligned} \lim _{\varepsilon \downarrow 0} \varepsilon ^{-1} (X_{J_i^\delta +\varepsilon }-X_{J_i^\delta }) = d. \end{aligned}$$

Hence, at any random time \(V\) such that \(X_{V} \ne X_{V-}\) almost surely we have

$$\begin{aligned} \lim _{\varepsilon \downarrow 0} \varepsilon ^{-1} (X_{V+\varepsilon }-X_{V}) = d. \end{aligned}$$

Suppose finally that \(X\) has paths of bounded variation with drift \(d=0\) and with \(\varPi (\mathbb{R }) = \infty \). Since \(\varPi (\mathbb{R }) = \infty \), zero must be regular for at least one of \((-\infty ,0]\) and \([0,\infty )\). Results (i), (ii) and (iii) are then direct consequences of [31, Theorem 3.1]. In case (iii), Remark 3.3 explains why \(r^*_- = 0\) and thus Corollary 3.2 implies that \(\liminf _{\varepsilon \downarrow 0} \varepsilon ^{-1}(X_{\rho + \varepsilon } - X_\rho ) = 0\) almost surely. In cases (i) and (ii), since \(\rho \) is a jump time of \(X\), as before it must be the case that \(\lim _{\varepsilon \downarrow 0} \varepsilon ^{-1}(X_{\rho + \varepsilon } - X_\rho ) = d = 0\) almost surely. \(\square \)

Remark 3.5

The relationships between \(X_t\) and \(X_{t-}\) given in Proposition 3.4 are essentially results of Millar [31, Theorem \(3.1\)].

4 Identification of the associated subordinator

Let \(Y = (Y_t)_{t \ge 0}\) be “the” subordinator associated with the regenerative set \(\fancyscript{Z}\). Write \(\delta \) and \(\varLambda \) for the drift coefficient and Lévy measure of \(Y\). Recall that these quantities are unique up to a common scalar multiple. The closed range of \(Y\) either has zero Lebesgue measure almost surely or infinite Lebesgue measure almost surely according to whether \(\delta \) is zero or positive [13, Chapter 2, Theorem 3]. Consequently, the same dichotomy holds for the contact set \(\fancyscript{Z}\). It follows from Fubini’s theorem and the stationarity of \(\fancyscript{Z}\) that \(\fancyscript{Z}\) has infinite Lebesgue measure almost surely if and only if \(\mathbb{P }\{ 0 \in \fancyscript{Z} \} > 0\). The following result gives necessary and sufficient conditions for each alternative in this dichotomy.

Theorem 4.1

Let \(X\) be a Lévy process that satisfies our standing assumptions Hypothesis 2.2. If \(\sigma =0,\,\varPi (\mathbb{R }) < \infty \), and \(|d|=\alpha \), then the Lebesgue measure of \(\fancyscript{Z}\) is almost surely infinite. If \(X\) is not of this form, then the Lebesgue measure of \(\fancyscript{Z}\) is almost surely zero if and only if zero is regular for the interval \((-\infty ,0]\) for at least one of the Lévy processes \((X_t + \alpha t)_{t\ge 0}\) and \((-X_t+\alpha t)_{t \ge 0}\).

Proof

Suppose first that \(\sigma =0,\,\varPi (\mathbb{R }) < \infty \) and \(|d|=\alpha \). In this case, the paths of \(X\) are piecewise linear with slope \(d\). Our standing assumption \(|\mathbb{E }[X_1]| < \alpha \) and the strong law of large numbers give \(\lim _{t \rightarrow -\infty } t^{-1} X_t = \lim _{t \rightarrow +\infty } t^{-1} X_t = \mathbb{E }[X_1]\). It is now clear that \(\fancyscript{Z}\) has positive Lebesgue measure with positive probability and hence infinite Lebesgue measure almost surely.

Suppose now that \(X\) is not of this special form. It suffices by the remarks prior to the statement of the result to show that \(\mathbb{P }\{ 0 \in \fancyscript{Z} \} > 0\) if and only if zero is not regular for \((-\infty ,0]\) for either of the Lévy processes \((X_t + \alpha t)_{t \ge 0}\) and \((-X_t+\alpha t)_{t \ge 0}\).

Set \(I^- := \inf \{X_t - \alpha t : t \le 0\}\) and \(I^+ := \inf \{X_t + \alpha t : t \ge 0\}\). Recall from (1.1) that \(M_0 = I^{-} \wedge I^+\). Therefore,

$$\begin{aligned} \mathbb{P }\{0 \in \fancyscript{Z}\}&= \mathbb{P }\{I^{-} \wedge I^+ = X_0 \wedge X_{0-} = 0\} \\&= \mathbb{P }\{I^- = I^+ = 0\} \\&= \mathbb{P }\{I^- = 0\} \mathbb{P }\{I^+ = 0\}, \end{aligned}$$

where the final equality follows from the independence of \((X_t)_{t \le 0}\) and \((X_t)_{t \ge 0}\); thus, \(\mathbb{P }\{0 \in \fancyscript{Z}\} > 0\) if and only if \(\mathbb{P }\{I^- = 0\} > 0\) and \(\mathbb{P }\{I^+ = 0\} > 0\).

Note that \(I^-\) has the same distribution as \(\inf \{-X_t + \alpha t : t \ge 0\}\). From the formulas of Pecherskii and Rogozin [37] (or [4, Theorem VI.5]),

$$\begin{aligned} \mathbb{E }[e^{\theta I^-}] = \exp \left( \int _0^\infty \int _{(-\infty ,0]} (e^{\theta x} - 1)t^{-1} \mathbb{P }\{-X_{t} + \alpha t \in dx\} \, dt\right) \end{aligned}$$
(4.1)

and

$$\begin{aligned} \mathbb{E }[e^{\theta I^+}] = \exp \left( \int _0^\infty \int _{(-\infty ,0]} (e^{\theta x} - 1)t^{-1} \mathbb{P }\{X_t + \alpha t \in dx\} \, dt \right) . \end{aligned}$$
(4.2)

Taking the limit as \(\theta \rightarrow \infty \) and applying monotone convergence in (4.1) and in (4.2) gives

$$\begin{aligned} \mathbb{P }\{I^- = 0\} = \exp \left( - \int _0^\infty t^{-1} \mathbb{P }\{-X_{t} + \alpha t < 0\} \, dt \right) \end{aligned}$$
(4.3)

and

$$\begin{aligned} \mathbb{P }\{I^+ = 0\} = \exp \left( - \int _0^\infty t^{-1} \mathbb{P }\{X_t + \alpha t < 0\} \, dt \right) . \end{aligned}$$
(4.4)

Since we are assuming that it is not the case that \(\sigma = 0\), \(\varPi (\mathbb{R }) < \infty \) and \(|d|=\alpha \), we have \(\mathbb{P }\{X_t + \alpha t = 0\} = \mathbb{P }\{-X_{t} + \alpha t = 0\} = 0\) for all \(t>0\). Moreover, by our standing assumption \(|\mathbb{E }[X_1] | < \alpha \) it certainly follows that both \(X_t + \alpha t\) and \(-X_{t} + \alpha t\) drift to \(+\infty \). Hence, by a result of Rogozin [44] (or see [4, Theorem VI.12])

$$\begin{aligned} \int _1^\infty t^{-1} \mathbb{P }\{ X_t + \alpha t \le 0 \} \, dt < \infty \quad \text{ and } \quad \int _1^\infty t^{-1} \mathbb{P }\{ - X_{t} + \alpha t \le 0 \} \, dt < \infty . \end{aligned}$$
(4.5)

The result now follows from (2.3), which implies that zero is not regular for the interval \((-\infty ,0]\) for either of the processes \((-X_{t}+\alpha t)_{t \ge 0}\) and \((X_t + \alpha t)_{t \ge 0}\) if and only if

$$\begin{aligned} \int _0^1 t^{-1} \mathbb{P }\{ -X_{t} + \alpha t \le 0 \} \, dt < \infty \quad \text{ and } \quad \int _0^1 t^{-1} \mathbb{P }\{ X_t + \alpha t \le 0 \} \, dt < \infty . \end{aligned}$$
(4.6)

\(\square \)

Remark 4.2

  (i)

    Note that zero is regular for the interval \((-\infty ,0]\) for both \((X_t + \alpha t)_{t \ge 0}\) and \((-X_{t}+\alpha t)_{t \ge 0}\) when \(X\) has paths of unbounded variation, since then \(\liminf _{t \rightarrow 0} t^{-1} X_t = -\infty \) and \(\limsup _{t \rightarrow 0} t^{-1} X_t = +\infty \) by (2.2).

  (ii)

    If \(X\) has paths of bounded variation and drift coefficient \(d\), then \(\lim _{t \downarrow 0} t^{-1}(X_t+ \alpha t) = d+ \alpha \) and \(\lim _{t \downarrow 0} t^{-1}(-X_{t}+ \alpha t) = -d + \alpha \) by (2.1). Thus, if \(|d| < \alpha \), then zero is regular for \((-\infty ,0]\) for neither \((X_t + \alpha t)_{t \ge 0}\) nor \((-X_{t}+\alpha t)_{t \ge 0}\), whereas if \(|d| > \alpha \), then zero is regular for \((-\infty ,0]\) for exactly one of those two processes.

  (iii)

    If \(X\) has paths of bounded variation and \(|d| = \alpha \), then an integral condition due to Bertoin [6] involving the Lévy measure \(\varPi \) determines whether zero is regular for the interval \((-\infty ,0]\) for whichever of the processes \((X_t +\alpha t)_{t \ge 0}\) or \((-X_{t}+\alpha t)_{t \ge 0}\) has zero drift coefficient.

Remark 4.3

Recall the notation \(G = \sup \{ t < 0 : t \in \fancyscript{Z} \}\), \(D = \inf \{ t > 0 : t \in \fancyscript{Z} \}\) and \(K = D-G\) (note that \(D = d_0 \circ \fancyscript{Z}\)). Recall also that \(\varLambda \) and \(\delta \) are choices (unique up to a common scalar multiple) of the Lévy measure and drift of “the” subordinator associated with the stationary regenerative set \(\fancyscript{Z}\). If \(\delta = 0\) (a condition equivalent to the Lebesgue measure of \(\fancyscript{Z}\) being almost surely zero, or again to \(0 \notin \fancyscript{Z}\) almost surely), then \(G < 0 < D\) almost surely, and the distribution of \(K\) is obtained by size-biasing the Lévy measure \(\varLambda \); that is,

$$\begin{aligned} \mathbb{P }\{K \in dx\} = \frac{x \, \varLambda (dx)}{\int _{\mathbb{R }_+} y \, \varLambda (dy)} \end{aligned}$$
(4.7)

(recall that \(\int _{\mathbb{R }_+} y \, \varLambda (dy) < \infty \) since \(\fancyscript{Z}\) is stationary).

If the Lebesgue measure of \(\fancyscript{Z}\) is positive almost surely (and hence \(\delta > 0\)), then \(\mathbb{P }\{K=0\} > 0\) and we see by multiplying together (4.3) and (4.4) that

$$\begin{aligned} \mathbb{P }\{K=0\}&= \exp \left( - \int _0^\infty t^{-1} \left( \mathbb{P }\{X_t + \alpha t < 0\}+ \mathbb{P }\{-X_{t} + \alpha t < 0\} \right) \, dt \right) \nonumber \\&= \exp \left( - \int _0^\infty t^{-1} \mathbb{P }\{X_t \notin [-\alpha t, \alpha t]\} \, dt \right) . \end{aligned}$$
(4.8)

In this latter case, the conditional distribution of \(K\) given \(K > 0\) is the size-biasing of \(\varLambda \). The relationship between \(\delta \) and \(\varLambda \) is easily deduced since \(\mathbb{P }\{ K = 0 \}\) is the expected proportion of the real line that is covered by the range of the subordinator associated with \(\fancyscript{Z}\). Thus,

$$\begin{aligned} \mathbb{P }\{ K = 0 \} = \frac{ \delta }{ \delta + \int _{\mathbb{R }_+} y \varLambda (dy)}. \end{aligned}$$
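
As an illustration of the last formula, the following minimal simulation sketch (ours; all parameter values are arbitrary) takes the subordinator to have drift \(\delta \) and compound Poisson jumps with rate \(\lambda \) and mean jump size one, so that \(\int _{\mathbb{R }_+} y \, \varLambda (dy) = \lambda \), and compares the proportion of \([0, Y_T]\) covered by the closed range of \(Y\) with \(\delta /(\delta + \lambda )\).

```python
import numpy as np

rng = np.random.default_rng(1)
delta, lam, T = 0.5, 2.0, 1_000_000.0  # drift, jump rate, time horizon (arbitrary)

n_jumps = rng.poisson(lam * T)
jumps = rng.exponential(scale=1.0, size=n_jumps)  # Levy measure lam * Exp(1): mean jump size 1
y_T = delta * T + jumps.sum()

# The closed range of Y within [0, y_T] misses exactly the jump gaps,
# so its Lebesgue measure is delta * T.
print(delta * T / y_T)        # empirical proportion of [0, y_T] covered
print(delta / (delta + lam))  # P{K = 0} from the formula above
```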

Remark 4.4

Theorem 4.1 and Remark 4.2 provide conditions for deciding whether the Lebesgue measure of the contact set \(\fancyscript{Z}\) is zero almost surely or positive almost surely. Recall that the Lebesgue measure of \(\fancyscript{Z}\) is zero almost surely if and only if \(0 \notin \fancyscript{Z}\) almost surely or, equivalently, that \(G < 0 < D\) almost surely. This is in turn equivalent to \(\delta = 0\). When \(\delta = 0,\,\varLambda (\mathbb{R }_+) < \infty \) if and only if the contact set \(\fancyscript{Z}\) is a discrete set, and this is equivalent to \(D^{\prime } > D\) almost surely, where

$$\begin{aligned} D^{\prime } : = \inf \{ t > D : t \in \fancyscript{Z} \}. \end{aligned}$$

Clearly, if \(\sigma = 0\) and \(\varPi (\mathbb{R }) < \infty \), then \(\varLambda (\mathbb{R }_+) < \infty \). In the case \(\sigma =0,\,\varPi (\mathbb{R }) = \infty \), and \(\delta > 0\) we claim that \(\varLambda (\mathbb{R }_+) = \infty \). To see this, suppose to the contrary that \(\varLambda (\mathbb{R }_+)<\infty \). Then there almost surely exist \(t_1 < t_2\) such that \(X_t \wedge X_{t-} = M_t\) for all \(t_1 < t < t_2\). Because \(t \mapsto M_t\) is continuous, the paths of \(X\) cannot jump between times \(t_1\) and \(t_2\). However, when \(\varPi (\mathbb{R }) = \infty \) the jump times of \(X\) are almost surely dense in \(\mathbb{R }\).

In Theorem 4.7 we provide conditions for deciding whether \(\varLambda (\mathbb{R }_+) < \infty \) or \(\varLambda (\mathbb{R }_+) = \infty \) for the remaining case \(\delta = 0\) and \(\varPi (\mathbb{R }) = \infty \).

We first give some relevant preliminary results on the behavior of the path of the Lipschitz minorant \(M\) on the interval \([G,D]\). By Lemma 9.3, when \(G < 0 < D\) the path of \(M\) on \([G,D]\) consists of a line of slope \(+\alpha \) on the interval \([G,T]\) followed by a line of slope \(-\alpha \) on the interval \([T,D]\), where \(T\) is the unique time at which \(M\) attains its maximum over the interval \([G,D]\). Recall that

$$\begin{aligned} S := \inf \{ t > 0 : X_t \wedge X_{t-} - \alpha t \le \inf \{ X_{s} - \alpha s : s \le 0\} \}. \end{aligned}$$

As in the proof of Theorem 2.6, it follows from Lemma 9.4 that almost surely

$$\begin{aligned} D = \inf \{ t \ge S : X_t \wedge X_{t-} + \alpha (t-S) = \inf \{ X_u + \alpha (u-S) : u \ge S\} \}. \end{aligned}$$

Moreover, from Corollary 9.5, \(G \le T \le S \le D\) and \(T=S=D\) on the event \(\{T=S\}\).

Proposition 4.5

Let \(X\) be a Lévy process that satisfies our standing assumptions Hypothesis 2.2. Then, \(\mathbb{P }\{0 \notin \fancyscript{Z}, \, S=0\} = 0\). In addition,

  (a)

    If \(X\) has paths of unbounded variation, then \(G<T<S<D\) almost surely.

  (b)

    If \(X\) has paths of bounded variation and drift coefficient \(d\) satisfying \(d < - \alpha \), then \(G < T<S<D\) almost surely, and if \(X\) has paths of bounded variation and drift coefficient \(d\) satisfying \(d > \alpha \), then \(G < T<S \le D\) almost surely.

  (c)

    If \(X\) has paths of bounded variation and drift coefficient \(d\) satisfying \(|d| < \alpha \), then almost surely either \(0 \in \fancyscript{Z}\) and \(G = T = S = D = 0\), or \(0 \notin \fancyscript{Z}\) and \(G \le T \le S \le D\). Furthermore, \(T=S=D\) almost surely on the event \(\{T=S\}\).

Proof

Firstly, if \(0 \notin \fancyscript{Z}\), then \(\inf \{ X_u - \alpha u : u \le 0 \} < 0\), and thus \(S>0\) almost surely on the event \(\{0 \notin \fancyscript{Z}\}\).

  (a)

    Suppose that \(X\) has paths of unbounded variation. We have from Theorem 4.1 (see Remark 4.2(i)) that \(0 \notin \fancyscript{Z}\) almost surely. It follows from (2.2) that at the stopping time \(S\)

    $$\begin{aligned} - \liminf _{\varepsilon \downarrow 0} \varepsilon ^{-1}(X_{S+\varepsilon } - X_S) = \limsup _{\varepsilon \downarrow 0} \varepsilon ^{-1}(X_{S+\varepsilon } - X_S) = \infty , \end{aligned}$$

    and hence it is not possible for the \(\alpha \)-Lipschitz minorant to meet the path of \(X\) at time \(S\). Thus, \(T < S < D\) almost surely by Corollary 9.5. By time reversal, \(G < T\) almost surely.

  (b)

    Suppose that \(X\) has paths of bounded variation and drift coefficient \(d\) satisfying \(|d| > \alpha \). Then we have from Theorem 4.1 (see Remark 4.2(ii)) that \(0 \notin \fancyscript{Z}\) almost surely. Therefore, by Corollary 9.5, if \(T = S\) then \(T=S=D\). Suppose that \(d<-\alpha \). It follows from (2.1) that

    $$\begin{aligned} \lim _{\varepsilon \downarrow 0} \varepsilon ^{-1}(X_{S+\varepsilon } - X_S) = d, \quad \text{ a.s. } \end{aligned}$$

    Thus, \(S \notin \fancyscript{Z}\) and, in particular, \(S < D\), so that \(T < S < D\) almost surely. On the other hand, if \(d > \alpha \), then the Lévy process \((X_t - \alpha t)_{t \ge 0}\) has positive drift and so the associated descending ladder process has zero drift coefficient [13, p. 56]. In that case, for any \(x < 0\) we have \(X_V - \alpha V < x\) almost surely, where \(V := \inf \{t \ge 0 : X_t - \alpha t \le x\}\) [4, Theorem III.4]. Therefore,

    $$\begin{aligned} X_S - \alpha S < \inf \{ X_{u} - \alpha u : u \le 0 \} \quad \text{ a.s. } \end{aligned}$$

    If \(T=S\), then \(T=S=D\) by Corollary 9.5, and then

    $$\begin{aligned} X_S&= X_D \wedge X_{D-} \\&= X_G \wedge X_{G-} + \alpha (D-G) \\&= X_G \wedge X_{G-} + \alpha (S-G), \end{aligned}$$

    which results in the contradiction

    $$\begin{aligned} X_G \wedge X_{G-} - \alpha G = X_S - \alpha S < \inf \{ X_{u} - \alpha u : u \le 0 \}. \end{aligned}$$

    Thus, \(T<S \le D\) almost surely. The results for \(G\) now follow by a time reversal argument.

  (c)

    Suppose \(X\) has paths of bounded variation and drift coefficient \(d\) satisfying \(|d| < \alpha \). We know from Theorem 4.1 and Remark 4.2 that the subordinator associated with \(\fancyscript{Z}\) has non-zero drift and so \(\fancyscript{Z}\) has positive Lebesgue measure almost surely. The subset of points of \(\fancyscript{Z}\) that are isolated on either the left or the right is countable and hence has zero Lebesgue measure. It follows from the stationarity of \(\fancyscript{Z}\) that \(G = T = S = D = 0\) almost surely on the event \(\{0 \in \fancyscript{Z}\}\). The remaining statements can be read from Corollary 9.5. \(\square \)

Corollary 4.6

Let \(X\) be a Lévy process that satisfies our standing assumptions Hypothesis 2.2. Suppose further that \(\delta = 0\), so that \(G < 0 < D\) almost surely, and that when \(X\) has paths of bounded variation the drift coefficient \(d\) satisfies \(|d| \ne \alpha \). Then, \(G<T<D\) almost surely.

Proof

We know from parts (a) and (b) of Proposition 4.5 that the result holds when \(X\) has paths of unbounded variation or \(X\) has paths of bounded variation and drift coefficient \(d\) satisfying \(|d| > \alpha \). As remarked in the proof of part (c) of Proposition 4.5, it follows from Theorem 4.1 and Remark 4.2 that \(\delta >0\) when \(X\) has paths of bounded variation and drift coefficient \(d\) satisfying \(|d| < \alpha \). \(\square \)

Theorem 4.7

Let \(X\) be a Lévy process that satisfies our standing assumptions Hypothesis 2.2 and \(\varPi (\mathbb{R }) = \infty \). Then, \(\varLambda (\mathbb{R }_+) < \infty \) if and only if

$$\begin{aligned} \int _0^1 t^{-1} \mathbb{P }\{X_t \in [-\alpha t, \alpha t]\} \, dt < \infty . \end{aligned}$$
(4.9)

Proof

Suppose first that (4.9) holds. Note that one of the integrals in (4.6) must then be infinite, and hence, as noted in the proof of Theorem 4.1, \(\delta = 0\) and so \(G < 0 < D\) almost surely. Now, \(X_t \wedge X_{t-} \ge M_t = X_{D} \wedge X_{D-} - \alpha (t-D)\) for \(T \le t \le D\). Moreover, \(X_t \wedge X_{t-} \ge M_t \ge X_{D} \wedge X_{D-} - \alpha (t-D)\) for all \(t \ge D\) by the definition of the Lipschitz minorant. It follows that \(X_t \wedge X_{t-} + \alpha t \ge X_{D} \wedge X_{D-} + \alpha D\) for \(t \ge T\) and, in particular, \(D\) is the time of a local infimum of the process \((X_t + \alpha t)_{t \ge 0}\) on the event \(\{T < D\}\). Theorem 3.1 and (4.9) imply that almost surely on the event \(\{T < D\}\) there exists \(\varepsilon >0\) such that \((X_{D + s} + \alpha s)-X_D \wedge X_{D-} > 2 \alpha s\) for all \(0 < s \le \varepsilon \). Thus, if

$$\begin{aligned} D^{\prime }:= \inf \{ t > D : t \in \fancyscript{Z}\}, \end{aligned}$$

then \(D^{\prime }>D\) almost surely on the event \(\{T < D\}\). By the regenerative property of \(\fancyscript{Z}\), the event \(\{D^{\prime }>D\}\) has probability zero or one, and in the latter case \(\fancyscript{Z}\) is discrete almost surely.

Thus, \(\mathbb{P }\{T < D\} > 0\) implies that \(\fancyscript{Z}\) is discrete almost surely and hence \(\varLambda (\mathbb{R }_+) < \infty \). However, if \(\mathbb{P }\{T < D\} = 0\), then \(\mathbb{P }\{T > G\} = 1\), and the above argument combined with a time reversal shows that \(\varLambda (\mathbb{R }_+) < \infty \) in this case as well.

Turning to the converse, suppose that (4.9) fails. If \(\sigma = 0\) and \(\delta > 0\), then \(\varLambda (\mathbb{R }_+) = \infty \) as discussed in Remark 4.4, and if \(\sigma > 0\) then \(\delta = 0\) as discussed in Remark 4.2(ii). Thus we henceforth assume that \(\delta = 0\).

As above, \(D\) is the time of a local infimum of the process \((X_t + \alpha t)_{t \ge 0}\) on the event \(\{T < D\}\). Suppose for the moment that \(\mathbb{P }\{T < D\} > 0\). If it were the case that \(\varLambda (\mathbb{R }_+) < \infty \) and hence \(D^{\prime } > D\) almost surely, then it would follow that

$$\begin{aligned} T_D^{(\alpha )} := \inf \{ s > 0 : (X_{D + s} + \alpha s) - X_D \wedge X_{D-} \le 2 \alpha s \} > 0 \end{aligned}$$

almost surely. However, the failure of (4.9) and Theorem 3.1 imply that \(T_D^{(\alpha )} = 0\) on the event \(\{T<D\}\), and hence \(D^{\prime } = D\) almost surely. Since \(\delta = 0\), this is only possible if \(\varLambda (\mathbb{R }_+) = \infty \).

Lastly, if \(\mathbb{P }\{T < D\} = 0\), then \(\mathbb{P }\{T > G\} = 1\), and the above argument combined with a time reversal shows that \(\varLambda (\mathbb{R }_+) = \infty \) in this case as well.\(\square \)

Remark 4.8

An example of a process that satisfies our standing assumptions and for which (4.9) fails for all \(\alpha >0\) is given by truncating the Lévy measure of the symmetric Cauchy process to remove all jumps with magnitude greater than \(m\), so that the Lévy measure becomes \(1_{|x| \le m}x^{-2} \, dx\). To see this, first note that (4.9) fails for the symmetric Cauchy process because, by the self-similarity properties of this process, the probability that it lies in an interval of the form \((at, bt)\) at time \(t > 0\) does not depend on \(t\) and \(\int _0^1 t^{-1} \, dt = \infty \). Then observe that the difference between the probabilities that the truncated process and the symmetric Cauchy process lie in some interval at time \(t\) is at most the probability that the symmetric Cauchy process has at least one jump of size greater than \(m\) before time \(t\). The latter probability is \(1-e^{-\lambda t}\) with \(\lambda = 2 \int _m^\infty x^{-2} \, dx < \infty \), and \(\int _0^1 t^{-1}(1-e^{-\lambda t}) \, dt < \infty \).
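
For concreteness, the rate and the final integral in the remark can be evaluated and bounded explicitly: using \(1 - e^{-u} \le u\) for \(u \ge 0\),

$$\begin{aligned} \lambda = 2 \int _m^\infty x^{-2} \, dx = \frac{2}{m} \quad \text{ and } \quad \int _0^1 t^{-1}(1-e^{-\lambda t}) \, dt \le \int _0^1 t^{-1} \lambda t \, dt = \lambda = \frac{2}{m} < \infty . \end{aligned}$$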

Remark 4.9

If \(X\) is a symmetric \(\beta \)-stable process for \(1 < \beta \le 2\), then (4.9) holds for all \(\alpha >0\). To see this, first note that \(X_1\) has a bounded density. Hence, by scaling, \(\mathbb{P }\{X_t \in [-\alpha t, \alpha t]\} = \mathbb{P }\{t^{1/\beta } X_1 \in [-\alpha t, \alpha t]\} \le c t^{1-1/\beta }\) for some constant \(c\) depending on \(\alpha \), and then observe that \(\int _0^1 t^{-1} t^{1-1/\beta } \, dt < \infty \). Further cases where (4.9) holds for all \(\alpha >0\) are discussed in Remark 5.3.
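
The dichotomy in Remarks 4.8 and 4.9 can be checked numerically using the exact marginal laws available in scipy (a sketch of ours; the Brownian case stands in for \(\beta = 2\)):

```python
import numpy as np
from scipy.stats import cauchy, norm

alpha = 1.0

# Symmetric Cauchy: X_t / t is standard Cauchy, so P{X_t in [-alpha t, alpha t]}
# is constant in t; the integrand t^{-1} P{...} is not integrable at 0 and (4.9) fails.
print("Cauchy:", cauchy.cdf(alpha) - cauchy.cdf(-alpha), "(independent of t)")

# Brownian motion (beta = 2): P{X_t in [-alpha t, alpha t]} = P{|N(0,1)| <= alpha sqrt(t)}
# behaves like c * sqrt(t), so the integrand decays like t^{-1/2} and (4.9) holds.
for t in np.logspace(-6, 0, 4):
    p = norm.cdf(alpha * np.sqrt(t)) - norm.cdf(-alpha * np.sqrt(t))
    print(f"t={t:.0e}  t^(-1) P = {p / t:.3e}  scaled by sqrt(t): {p / np.sqrt(t):.3f}")
```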

In Sect. 7 we prove the following result, which characterizes the subordinator associated with \(\fancyscript{Z}\) when \(X\) has paths of unbounded variation and satisfies certain extra conditions. In Corollary 6.5 we show that these extra conditions hold when \(X\) has non-zero Brownian component. Note that the conclusion \(\delta = 0\) in the result follows from Remark 4.2(i).

Theorem 4.10

Let \(X\) be a Lévy process that satisfies our standing assumptions Hypothesis 2.2 and has paths of unbounded variation almost surely. Suppose further that \(X_t\) has absolutely continuous distribution for all \(t \ne 0\), and that the densities of the random variables \(\inf _{t \ge 0} \{ X_t + \alpha t \}\) and \(\inf _{t \ge 0} \{ X_{-t} + \alpha t \}\) are square integrable. Then, \(\delta = 0\) and \(\varLambda \) is characterized by

$$\begin{aligned} \frac{\int _{\mathbb{R }_+} (1-e^{-\theta x}) \, \varLambda ( dx )}{\int _{\mathbb{R }_+} x \, \varLambda ( dx )}&= 4 \pi \alpha \int _{-\infty }^\infty \left\{ \exp \left( \int _0^\infty t^{-1} \mathbb{E }\left[ \left( e^{i z X_t - i z \alpha t} - 1\right) \mathbf 1 \{X_t \ge + \alpha t\} \right. \right. \right. \\&\quad + \left. \left. \left( e^{i z X_t + i z \alpha t} - 1\right) \mathbf 1 \{X_t \le - \alpha t\} \right] \, dt \right) \\&\quad - \exp \left( \int _0^\infty t^{-1} \mathbb{E }\left[ \left( e^{- \theta t + i z X_t - i z \alpha t} - 1\right) \mathbf 1 \{X_t \ge + \alpha t\} \right. \right. \\&\quad + \left. \left. \left. \left( e^{- \theta t + i z X_t + i z \alpha t} - 1\right) \mathbf 1 \{X_t \le - \alpha t\} \right] \, dt \right) \right\} \, dz \end{aligned}$$

for \(\theta \ge 0\), and, moreover, \(\varLambda (\mathbb{R }_+) < \infty \).

Note that the existence of the densities of the infima appearing in the hypotheses of Theorem 4.10 comes from the assumption that \(X_t\) has absolutely continuous distribution for all \(t \ne 0\)—see Lemma 6.2.

When the conditions of Theorem 4.10 are not satisfied, we are able to give a characterization of \(\varLambda \) as a limit of integrals in the following way. Let \(X^\varepsilon = X + \varepsilon B\), with \(B\) a (two-sided) standard Brownian motion independent of \(X\), and let \(\varLambda ^\varepsilon \) be the Lévy measure of the subordinator associated with the contact set for \(X^\varepsilon \). Then, in the case \(\delta = 0\) we have the representation

$$\begin{aligned} \frac{\int _{\mathbb{R }_+} (1-e^{-\theta x}) \, \varLambda ( dx )}{\int _{\mathbb{R }_+} x \, \varLambda (dx)} = \lim _{\varepsilon \downarrow 0} \frac{\int _{\mathbb{R }_+} (1-e^{-\theta x}) \, \varLambda ^\varepsilon ( dx )}{\int _{\mathbb{R }_+} x \, \varLambda ^\varepsilon (dx)}. \end{aligned}$$

See Lemma 7.3 in Sect. 7 for details of this limit and (7.10) for a proof of the above equality.

Theorem 4.7 together with the conclusion \(\varLambda (\mathbb{R }_+) < \infty \) of Theorem 4.10 results in the following.

Corollary 4.11

If the conditions of Theorem 4.10 are satisfied, then

$$\begin{aligned} \int _0^1 t^{-1} \mathbb{P }\{X_t \in [-\alpha t, \alpha t]\} \, dt < \infty . \end{aligned}$$

5 The limit of the contact set for increasing slopes

We now investigate how \(\fancyscript{Z}\) changes as \(\alpha \) increases. For the sake of clarity, let \(X\) be a fixed Lévy process with \(X_0=0\) and \(\mathbb{E }[|X_1|] < \infty \). Write \(M^{(\alpha )} = (M^{(\alpha )}_t)_{t \in \mathbb{R }}\) for the \(\alpha \)-Lipschitz minorant of \(X\) for \(\alpha > |\mathbb{E }[X_1]|\), and put \(\fancyscript{Z}_\alpha := \{ t \in \mathbb{R }: X_t \wedge X_{t-} = M^{(\alpha )}_t \}\). For \(|\mathbb{E }[X_1]| < \alpha ^{\prime } \le \alpha ^{\prime \prime }\), we have \(M^{(\alpha ^{\prime })}_t \le M^{(\alpha ^{\prime \prime })}_t \le X_t\) for all \(t \in \mathbb{R }\) (because any \(\alpha ^{\prime }\)-Lipschitz function is also \(\alpha ^{\prime \prime }\)-Lipschitz), and so \(\fancyscript{Z}_{\alpha ^{\prime }} \subseteq \fancyscript{Z}_{\alpha ^{\prime \prime }}\). We note in passing that \(\fancyscript{Z}_{\alpha ^{\prime }}\) is regeneratively embedded in \(\fancyscript{Z}_{\alpha ^{\prime \prime }}\) in the sense of Bertoin [5].
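
The inclusion \(\fancyscript{Z}_{\alpha ^{\prime }} \subseteq \fancyscript{Z}_{\alpha ^{\prime \prime }}\) can be observed directly on simulated paths; reusing the brute-force `lipschitz_minorant` sketch given earlier (again our illustration only):

```python
# Nested contact sets: reuses lipschitz_minorant and the grid (ts, xs) above.
for a1, a2 in [(0.5, 1.0), (1.0, 2.0)]:
    z1 = xs - lipschitz_minorant(ts, xs, a1) < 1e-9
    z2 = xs - lipschitz_minorant(ts, xs, a2) < 1e-9
    assert not np.any(z1 & ~z2)   # every contact point for a1 is one for a2
```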

If \(X\) has paths of bounded variation and drift coefficient \(d\), then \(|d| < \alpha \) for all \(\alpha \) large enough. Since \(\lim _{t \downarrow 0} t^{-1} X_t = - \lim _{t \downarrow 0} t^{-1} X_{-t} = d\), the law of large numbers implies that

$$\begin{aligned} \lim _{\alpha \rightarrow \infty } \mathbb{P }\{ 0 \in \fancyscript{Z}_\alpha \} = \lim _{\alpha \rightarrow \infty } \mathbb{P }\{ \inf _{t \ge 0} (X_t + \alpha t) = \inf _{t \le 0} (X_t - \alpha t) = 0 \} = 1, \end{aligned}$$

and thus the set \(\bigcup _{\alpha > | \mathbb{E }[X_1] | } \fancyscript{Z}_\alpha \) has full Lebesgue measure almost surely: by stationarity and Fubini’s theorem, the expected Lebesgue measure of \([0,1] \setminus \fancyscript{Z}_\alpha \) is \(\mathbb{P }\{ 0 \notin \fancyscript{Z}_\alpha \}\), which tends to zero as \(\alpha \rightarrow \infty \).

We now consider the case where \(X\) has paths of unbounded variation. In order to state our result, we need to recall the definition of the so-called abrupt Lévy processes introduced by Vigon [49]. Recall from (3.1) that \(\fancyscript{M}^-\) is the set of local infima of the path of \(X\), and from Proposition 3.4 that if the paths of \(X\) have unbounded variation, then almost surely \(X_{t-} = X_t\) for all \(t \in \fancyscript{M}^-\).

Definition 5.1

A Lévy process \(X\) is abrupt if its paths have unbounded variation and almost surely for all \(t \in \fancyscript{M}^-\)

$$\begin{aligned} \limsup _{\varepsilon \uparrow 0} \frac{X_{t+\varepsilon }-X_{t-}}{\varepsilon } = -\infty \quad \text{ and } \quad \liminf _{\varepsilon \downarrow 0} \frac{X_{t+\varepsilon }-X_t}{\varepsilon } = +\infty . \end{aligned}$$

Remark 5.2

An equivalent definition may be made in terms of local suprema [49, Remark 1.2]: a Lévy process \(X\) with paths of unbounded variation is abrupt if almost surely for any \(t\) that is the time of a local supremum,

$$\begin{aligned} \liminf _{\varepsilon \uparrow 0} \frac{X_{t+\varepsilon }-X_{t-}}{\varepsilon } = +\infty \quad \text{ and } \quad \limsup _{\varepsilon \downarrow 0} \frac{X_{t+\varepsilon }-X_t}{\varepsilon } = -\infty . \end{aligned}$$

Remark 5.3

A Lévy process \(X\) with paths of unbounded variation is abrupt if and only if

$$\begin{aligned} \int _0^1 t^{-1} \mathbb{P }\{ X_t \in [at,bt]\} \, dt < \infty , \quad \forall a<b, \end{aligned}$$
(5.1)

(see [49, Theorem 1.3]). Examples of abrupt Lévy processes include stable processes with stability parameter in the interval \((1,2]\), processes with non-zero Brownian component, and any processes that creep upwards or downwards. An example of an unbounded variation process that is not abrupt is the symmetric Cauchy process.
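
Indeed, for the symmetric Cauchy process the probability appearing in (5.1) is explicit: by self-similarity, \(X_t/t\) has the standard Cauchy law for every \(t > 0\), so

$$\begin{aligned} \mathbb{P }\{ X_t \in [at,bt]\} = \frac{1}{\pi } (\arctan b - \arctan a), \end{aligned}$$

which does not depend on \(t\), and hence the integral in (5.1) diverges.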

Remark 5.4

The analytic condition (5.1) for a Lévy process \(X\) to be abrupt in Remark 5.3 has an interpretation in terms of the smoothness of the convex minorant of \(X\) over a finite interval. The results of Pitman and Uribe Bravo [38] imply that the number of segments of the convex minorant of \(X\) over a finite interval that have slope between \(a\) and \(b\) is finite for all \(a < b\) if and only if (5.1) holds.

We now return to the question of the limit of \(\fancyscript{Z}_\alpha \) as \(\alpha \rightarrow +\infty \).

Theorem 5.5

Let \(X\) be a Lévy process with \(X_0=0\) and \(\mathbb{E }[ |X_1| ] < \infty \). Then \( \bigcup _{\alpha > | \mathbb{E }[X_1] |} \fancyscript{Z}_\alpha \supseteq \fancyscript{M}^-\). Furthermore, if \(X\) is abrupt, then \( \bigcup _{\alpha > | \mathbb{E }[X_1] |} \fancyscript{Z}_\alpha = \fancyscript{M}^-\).

Proof

Suppose that \(t \in \fancyscript{M}^-\) so that there exists \(\epsilon > 0\) such that \(\inf \{X_s : t-\epsilon < s < t+\epsilon \} = X_t = X_{t-}\). Fix any \(\beta > | \mathbb{E }[X_1] |\). Then, by the strong law of large numbers, \(\inf \{ X_s + \beta s : s \ge 0\} > - \infty \) and \(\inf \{ X_s - \beta s : s \le 0 \} > - \infty \). It is clear that if \(\alpha \in \mathbb{R }\) is such that

$$\begin{aligned} \alpha > - \frac{ \inf \{ X_s + \beta s : s \ge 0 \} \, \vee \, \inf \{ X_s - \beta s : s \le 0 \} }{\epsilon }, \end{aligned}$$

then \(X_t = X_{t-} = M^{(\alpha )}_t\) and \(t \in \fancyscript{Z}_\alpha \). Hence \( \bigcup _{\alpha > | \mathbb{E }[X_1] |} \fancyscript{Z}_\alpha \supseteq \fancyscript{M}^-\).

Now suppose that \(X\) is abrupt, and let \(t \in \fancyscript{Z}_\alpha \) for some \(\alpha > | \mathbb{E }[X_1] |\). Then, one of the following three possibilities must occur:

  (a)

    \(X_t > X_{t-}\) and \(\limsup _{\varepsilon \uparrow 0} \varepsilon ^{-1} (X_{t+\varepsilon }-X_{t-}) \le \alpha \);

  (b)

    \(X_{t-} > X_t\) and \(\liminf _{\varepsilon \downarrow 0} \varepsilon ^{-1} (X_{t+\varepsilon }-X_t) \ge - \alpha \);

  (c)

    \(X_{t-} = X_t\) and \(\limsup _{\varepsilon \uparrow 0} \varepsilon ^{-1} (X_{t+\varepsilon }-X_{t-}) \le \alpha ,\,\liminf _{\varepsilon \downarrow 0} \varepsilon ^{-1} (X_{t+\varepsilon }-X_t) \ge -\alpha \).

We rule out possibilities (a) and (b) by assuming that \(t\) is a jump time of \(X\) and then showing that the \(\liminf \) or \(\limsup \) parts of the statements cannot occur. Our argument borrows heavily from the proof of Property 2 in [38, Proposition 1], which itself is based on the proof of [31, Proposition 2.4], but is more detailed.

Arguing as in the proof of Proposition 3.4, for \(\delta >0\), let \(0 < J_1^\delta < J_2^\delta < \ldots \) be the successive nonnegative times at which \(X\) has jumps of size greater than \(\delta \) in absolute value. The strong Markov property applied at the stopping time \(J_i^\delta \) and (2.2) gives that

$$\begin{aligned} \liminf _{\varepsilon \downarrow 0} \varepsilon ^{-1} (X_{J_i^\delta +\varepsilon }-X_{J_i^\delta }) =-\infty \quad \text{ and } \quad \limsup _{\varepsilon \downarrow 0} \varepsilon ^{-1} (X_{J_i^\delta +\varepsilon }-X_{J_i^\delta }) = +\infty . \end{aligned}$$

Hence, at any random time \(V\) such that \(X_{V} \ne X_{V-}\) almost surely we have

$$\begin{aligned} \liminf _{\varepsilon \downarrow 0} \varepsilon ^{-1} (X_{V+\varepsilon }-X_{V}) = -\infty , \end{aligned}$$

and, by a time reversal,

$$\begin{aligned} \limsup _{\varepsilon \uparrow 0} \varepsilon ^{-1} (X_{V+\varepsilon }-X_{V-}) = + \infty . \end{aligned}$$

Thus, neither of the possibilities (a) or (b) hold, and so (c) must hold. It then follows from Theorem 5.6 below that \(X\) must have a local infimum or supremum at \(t\). However, \(X\) cannot have a local supremum at \(t\) by Remark 5.2, and so \(X\) must have a local infimum at \(t\). \(\square \)

The key to proving Theorem 5.5 in the abrupt case was the following theorem that describes the local behavior of an abrupt Lévy process at arbitrary times. This result is an immediate corollary of [49, Theorem 2.6] once we use the fact that almost surely the path of a Lévy process cannot have both points of increase and points of decrease [17].

Theorem 5.6

Let \(X\) be an abrupt Lévy process. Then, almost surely for all \(t\) one of the following possibilities must hold:

  (i)

    \(\limsup _{\varepsilon \uparrow 0} \varepsilon ^{-1} (X_{t+\varepsilon }-X_{t-}) = +\infty \) and \(\liminf _{\varepsilon \downarrow 0} \varepsilon ^{-1} (X_{t+\varepsilon }-X_t) = -\infty \);

  (ii)

    \(\limsup _{\varepsilon \uparrow 0} \varepsilon ^{-1} (X_{t+\varepsilon }-X_{t-}) < +\infty \) and \(\liminf _{\varepsilon \downarrow 0} \varepsilon ^{-1} (X_{t+\varepsilon }-X_t) = -\infty \);

  (iii)

    \(\limsup _{\varepsilon \uparrow 0} \varepsilon ^{-1} (X_{t+\varepsilon }-X_{t-}) = +\infty \) and \(\liminf _{\varepsilon \downarrow 0} \varepsilon ^{-1} (X_{t+\varepsilon }-X_t) > -\infty \);

  (iv)

    \(X\) has a local infimum or supremum at \(t\).

Remark 5.7

Theorem 5.5 shows that the \(\alpha \)-Lipschitz minorant provides a method for “sieving out” a certain discrete set of times of local infima of an abrupt process. This method has the property that if we let \(\alpha \rightarrow \infty \), then eventually we collect all the times of local infima. Alternative methods for sieving out the local minima of Brownian motion are discussed in [34, 35]. One method is to take all local infima times \(t\) such that \(X_{t+s} - X_t > 0\) for all \(s \in ( -h, h)\) for some fixed \(h\), and then let \(h \rightarrow 0\). Another is to take all local infima times \(t\) such that \(X_{s_+} - X_t \ge h\) for some time \(s_+ \in (0, \inf \{ s > 0 : X_s - X_t = 0 \})\) and such that \(X_{s_-} - X_t \ge h\) for some time \(s_- \in (0, \inf \{ s < 0 : X_s - X_t = 0 \})\), and then again let \(h \rightarrow 0\). This work is extended to Brownian motion with drift in [15].

6 Future infimum of a Lévy process

For future use, we collect together in this section some preliminary results concerning the distribution of the infimum of a Lévy process \((Z_t)_{t \ge 0}\) and the time at which the infimum is attained. Further interesting results in this direction, proved using excursion theory, may be found in recent work by Chaumont [10].

Remark 6.1

Let \(Z = (Z_t)_{t \ge 0}\) be a Lévy process such that \(Z_0 = 0\). Set \(\underline{Z}_t := \inf \{Z_s : 0 \le s \le t\},\,t \ge 0\). If \(Z\) is not a compound Poisson process (that is, either \(Z\) has a non-zero Brownian component or the Lévy measure of \(Z\) has infinite total mass or the Lévy measure has finite total mass but there is a non-zero drift coefficient), then

$$\begin{aligned} \mathbb{P }\{\exists 0 \le s < t < u : \underline{Z}_s = \underline{Z}_t = Z_t \wedge Z_{t-} = \underline{Z}_u\} = 0 \end{aligned}$$
(6.1)

– see, for example, [4, Proposition VI.4]. Hence, almost surely for each \(t \ge 0\) there is a unique time \(U_t\) such that \(Z_{U_t} \wedge Z_{U_t-} = \underline{Z}_t\). If, in addition, \(\lim _{t \rightarrow \infty } Z_t = +\infty \), then almost surely there is a unique time \(U_\infty \) such that \(Z_{U_\infty } \wedge Z_{U_\infty -} = \underline{Z}_\infty := \inf \{Z_s : s \ge 0\}\).

Lemma 6.2

Let \(Z\) be a Lévy process such that \(Z_0 = 0,\,Z_t\) has an absolutely continuous distribution for each \(t>0\), and \(\lim _{t \rightarrow \infty } Z_t = +\infty \). Then, the distribution of \((U_\infty ,\underline{Z}_\infty )\) restricted to \((0,\infty ) \times (-\infty ,0]\) is absolutely continuous with respect to Lebesgue measure. Moreover, \(\mathbb{P }\{(U_\infty , \underline{Z}_\infty ) = (0,0)\} > 0\) if and only if zero is not regular for \((-\infty ,0)\).

Proof

Because the random variable \(Z_t\) has an absolutely continuous distribution for each \(t>0\), it follows from [38, Theorem 2] that for all \(t > 0\) the restriction of the distribution of the random vector \((U_t,\underline{Z}_t)\) to the set \((0,t] \times (-\infty ,0]\) is absolutely continuous with respect to Lebesgue measure. Observe that

$$\begin{aligned} \mathbb{P }\left\{ \exists s : (U_t, \underline{Z}_t) = (U_\infty , \underline{Z}_\infty ), \; \forall t \ge s \right\} = 1. \end{aligned}$$

Thus, if \(A \subseteq (0,\infty ) \times (-\infty ,0]\) is Borel with zero Lebesgue measure, then

$$\begin{aligned} \mathbb{P }\left\{ (U_\infty ,\underline{Z}_\infty ) \in A \right\} = \lim _{t \rightarrow \infty } \mathbb{P }\left\{ (U_t,\underline{Z}_t) \in A \right\} = 0. \end{aligned}$$

The proof of the claim concerning the atom at \((0,0)\) follows from the above formula, the fact that \(\mathbb{P }\{(U_t, \underline{Z}_t) = (0,0)\} > 0\) if and only if zero is not regular for the interval \((-\infty ,0)\) [38, Theorem 2], and the hypothesis that \(\lim _{t \rightarrow \infty } Z_t = +\infty \). \(\square \)

Remark 6.3

Note that if the process \(Z\) has a non-zero Brownian component, then the random variable \(Z_t\) has an absolutely continuous distribution for all \(t>0\). Moreover, in this case zero is regular for the interval \((-\infty ,0)\).

Let \(\tau = (\tau _t)_{t \ge 0}\) be the local time at zero for the process \(Z - \underline{Z}\). Write \(\tau ^{-1}\) for the inverse local time process. Set \(\underline{H}_t := \underline{Z}_{\tau ^{-1}(t)}\). The process \(\underline{H} := (\underline{H}_t)_{t \ge 0}\) is the descending ladder height process for \(Z\). If \(\lim _{t \rightarrow \infty } Z_t = +\infty \), then \(\hat{\underline{H}} := -\underline{H}\) is a subordinator killed at an independent exponential time (see, for example, [4, Lemma VI.2]).

For the sake of completeness, we include the following observation that combines well-known results and probably already exists in the literature—it can be easily concluded from Theorem 19 and the remarks at the top of page 172 of [4].

Lemma 6.4

Let \(Z\) be a Lévy process such that \(Z_0 = 0\) and \(\lim _{t \rightarrow \infty } Z_t = +\infty \). Then, the distribution of the random variable \(\underline{Z}_\infty \) is absolutely continuous with a bounded density if and only if the (killed) subordinator \(\hat{\underline{H}}\) has a positive drift coefficient.

Proof

Let \(S = (S_t)_{t \ge 0}\) be an (unkilled) subordinator with the same drift coefficient and Lévy measure as \(\hat{\underline{H}}\), so that \(-\underline{Z}_\infty \) has the same distribution as \(S_\zeta \), where \(\zeta \) is an independent, exponentially distributed random time. Therefore, for some \(q>0\),

$$\begin{aligned} \mathbb{P }\{-\underline{Z}_\infty \in A\} = \int _0^\infty q e^{-qt} \mathbb{P }\{S_t \in A\} \, dt \end{aligned}$$

for any Borel set \(A \subseteq \mathbb{R }\). By a result of Kesten for general Lévy processes (see, for example, [4, Theorem II.16]) the \(q\)-resolvent measure \(\int _0^\infty e^{-qt} \mathbb{P }\{S_t \in \cdot \} \, dt\) of \(S\) is absolutely continuous with a bounded density for all \(q>0\) (equivalently, for some \(q>0\)) if and only if points are not essentially polar for \(S\). Moreover, points are not essentially polar for a Lévy process with paths of bounded variation (and, in particular, for a subordinator) if and only if the process has a non-zero drift coefficient [4, Corollary II.20]. \(\square \)

Corollary 6.5

Let \(X\) be a Lévy process that satisfies our standing assumptions Hypothesis 2.2 and which has paths of unbounded variation almost surely. Then, the random variables \(\inf \{ X_t + \alpha t : t \ge 0 \}\) and \(\inf \{ X_t - \alpha t : t \le 0 \}\) both have absolutely continuous distributions with bounded densities if and only if \(X\) has a non-zero Brownian component.

Proof

By Lemma 6.4, the distributions in question are absolutely continuous with bounded densities if and only if the drift coefficients of the descending ladder processes for the two Lévy processes \((X_t + \alpha t)_{t \ge 0}\) and \((-X_{t} + \alpha t)_{t \ge 0}\) are non-zero. By the results of [30] (see also [4, Theorem VI.19]), this occurs if and only if both \((X_t + \alpha t)_{t \ge 0}\) and \((-X_{t} + \alpha t)_{t \ge 0}\) have positive probability of creeping down across \(x\) for some (equivalently, all) \(x<0\), where we recall that a Lévy process creeps down across \(x < 0\) if the first passage time in \((-\infty , x)\) is not a jump time for the path of the process. Equivalently, both densities exist and are bounded if and only if the Lévy process \((X_t + \alpha t)_{t \ge 0}\) creeps downwards and the Lévy process \((X_t - \alpha t)_{t \ge 0}\) creeps upwards, where the latter notion is defined in the obvious way.

A result of Vigon [48] (see also [13, Chapter 6, Corollary 9]) states that when the paths of \(X\) have unbounded variation, \((X_t + \alpha t)_{t \ge 0}\) creeps downward if and only if \(X\) creeps downward, and hence, in turn, if and only if \((X_t - \alpha t)_{t \ge 0}\) creeps downwards. A similar result applies to creeping upwards.

Thus, both densities exist and are bounded if and only if \(X\) creeps downwards and upwards. This occurs if and only if the ascending and descending ladder processes of \(X\) have positive drifts [4, Theorem VI.19], which happens if and only if \(X\) has a non-zero Brownian component [13, Chapter 4, Corollary 4(i)] (or see the remark after the proof of [4, Theorem VI.19]). \(\square \)

7 The complementary interval straddling zero

7.1 Distributions in the case of a non-zero Brownian component

Suppose that \(X = (X_t)_{t \in \mathbb{R }}\) is a Lévy process that satisfies our standing assumptions Hypothesis 2.2. Also, suppose until further notice that \(X\) has a non-zero Brownian component.

Recall that \(M = (M_t)_{t \in \mathbb R }\) is the \(\alpha \)-Lipschitz minorant of \(X\) and \(\fancyscript{Z}\) is the stationary regenerative set \(\{t \in \mathbb{R }: X_t \wedge X_{t-} = M_t\}\). Recall also that \(K = D - G\), where \( G = \sup \{ t < 0 : X_t \wedge X_{t-} = M_t \} = \sup \{t < 0 : t \in \fancyscript{Z}\} \) and \( D = \inf \{ t > 0 : X_t \wedge X_{t-} = M_t \} = \inf \{t > 0 : t \in \fancyscript{Z}\} \). Lastly, recall that \(T\) is the unique \(t \in [G,D]\) such that \(M_t = \max \{M_s : s \in [G, D]\}\).

Let \(f^+\) (respectively, \(f^-\)) be the joint density of the random variables we denoted by \((U_\infty ,\underline{Z}_\infty )\) in Lemma 6.2 in the case where the Lévy process \(Z\) is \((X_t + \alpha t)_{t \ge 0}\) (respectively, \((-X_t + \alpha t)_{t \ge 0}\)).

Proposition 7.1

Let \(X\) be a Lévy process that satisfies our standing assumptions Hypothesis 2.2. Suppose, moreover, that \(X\) has a non-zero Brownian component. Set \(L := T - G\) and \(R := D - T\). Then, the random vector \((T,L,R)\) has a distribution that is absolutely continuous with respect to Lebesgue measure with joint density

$$\begin{aligned} (\tau , \lambda , \rho ) \mapsto 2 \alpha \int _{-\infty }^0 f^-(\lambda ,h) f^+(\rho ,h) \, dh, \quad \lambda ,\rho > 0 \ \text{ and } \ \tau - \lambda < 0 < \tau + \rho . \end{aligned}$$

Therefore, \((T,G,D)\) also has an absolutely continuous distribution with joint density

$$\begin{aligned} (\tau , \gamma , \delta ) \mapsto 2 \alpha \int _{-\infty }^0 f^-(\tau -\gamma ,h) f^+(\delta -\tau ,h) \, dh, \quad \gamma < 0 < \delta \,\, \text{ and }\,\, \gamma < \tau < \delta , \end{aligned}$$

and \(K\) has an absolutely continuous distribution with density

$$\begin{aligned} \kappa \mapsto 2 \alpha \kappa \int _0^\kappa \int _{-\infty }^0 f^-(\xi ,h) f^+(\kappa -\xi ,h) \, dh \, d \xi , \quad \kappa > 0. \end{aligned}$$

Proof

Observe that \(X\) is abrupt and so, by Theorem 4.7 and Remark 5.3, \(\fancyscript{Z}\) is a stationary discrete random set with intensity

$$\begin{aligned} \left( \frac{\int _{\mathbb{R }_+} x \varLambda (dx)}{\varLambda (\mathbb{R }_+)} \right) ^{-1} = \frac{\varLambda (\mathbb{R }_+)}{\int _{\mathbb{R }_+} x \varLambda (dx)} < \infty . \end{aligned}$$

Hence, the set of times of peaks of the \(\alpha \)-Lipschitz minorant \(M\) is also a stationary discrete random set with the same finite intensity. The point process consisting of a single point at time \(T\) is included in the set of times of peaks of \(M\), and so for \(A\) a Borel set with Lebesgue measure \(|A|\) we have

$$\begin{aligned} \mathbb{P }\{T \in A\}&\le \mathbb{P }\left\{ \text{ at } \text{ least } \text{ one } \text{ peak } \text{ of }\,\, M \text{ at } \text{ a } \text{ time }\,\,t \in A \right\} \\&\le \mathbb{E }\left[ \text{ number } \text{ of } \text{ times } \text{ of } \text{ peaks } \text{ in }\,\, A \right] \\&= \frac{ \varLambda (\mathbb{R }_+) }{ \int _{\mathbb{R }_+} x \varLambda (dx)}\, |A|. \end{aligned}$$

Thus, the distribution of \(T\) is absolutely continuous with respect to Lebesgue measure with density bounded above by \(\varLambda (\mathbb{R }_+)/\int _{\mathbb{R }_+} x \varLambda (dx)\).

It follows from the observations made in the proof of Theorem 2.6 about the nature of the global infimum of the process \(\tilde{X}\) that under our hypotheses, almost surely \(X_G = X_{G-} = M_T - \alpha |G-T| = M_T - \alpha L\) and \(X_D = X_{D-} = M_T - \alpha |D-T| = M_T - \alpha R\), while \(X_t \wedge X_{t-} > M_T - \alpha |t-T|\) for \(t \notin \{ G, D \}\). Thus,

$$\begin{aligned} 0 = \inf \{ X_{T+t} - (M_T - \alpha t) : t \ge 0 \} = X_{T+R} - (M_T - \alpha R) \end{aligned}$$

and

$$\begin{aligned} 0&= \inf \{ X_{T+t} - (M_T + \alpha t) : t \le 0 \} \\&= \inf \{ X_{T-t} - (M_T - \alpha t) : t \ge 0 \} = X_{T-L} - (M_T - \alpha L). \end{aligned}$$

Consequently,

$$\begin{aligned} X_{T-L} - X_T + \alpha L&= \inf \{ X_{T-t} - X_T + \alpha t : t \ge 0 \}\nonumber \\&= \inf \{X_{T+t} - X_T + \alpha t : t \ge 0 \} \nonumber \\&= X_{T+R} - X_T + \alpha R. \end{aligned}$$
(7.1)

Conversely, \((T,L,R)\) is the unique triple with \(T-L < 0 < T+R\) such that (7.1) holds.

Fix \(\tau \in \mathbb R \) and \(\lambda ,\rho \in \mathbb R _+\) such that \(\tau - \lambda < 0 < \tau + \rho \). Set

$$\begin{aligned} Z_t^-&:= X_{\tau - t} - X_\tau + \alpha t, \quad t \ge 0, \\ \underline{Z}^-&:= \inf \{Z_t^- : t \ge 0\}, \\ U^-&:= \inf \{t \ge 0: Z_t^- = \underline{Z}^-\}. \end{aligned}$$

For \(0 < \varDelta \tau < \rho \) set

$$\begin{aligned} Z_t^+&:= X_{t + \tau + \varDelta \tau } - X_{\tau + \varDelta \tau } + \alpha t, \quad t \ge 0, \\ \underline{Z}^+&:= \inf \{Z_t^+ : t \ge 0\}, \\ U^+&:= \inf \{t \ge 0: Z_t^+ = \underline{Z}^+\}. \end{aligned}$$

From (7.1) we have that

$$\begin{aligned}&\mathbb P \{ T \in [\tau , \tau +\varDelta \tau ],\, L > \lambda , \, R >\rho \} \nonumber \\&\quad \le \mathbb P ( \{ U^- > \lambda - \varDelta \tau , \, U^+ > \rho - \varDelta \tau \} \nonumber \\&\quad \cap \, \{ \exists 0 \le s \le \varDelta \tau : X_\tau + \underline{Z}^- + \alpha s = X_{\tau + \varDelta \tau } + \underline{Z}^+ + \alpha (\varDelta \tau - s) \}). \end{aligned}$$
(7.2)

Similarly,

$$\begin{aligned}&\mathbb P \{ T \in [\tau , \tau +\varDelta \tau ],\, L > \lambda , \, R >\rho \} \nonumber \\&\quad \ge \mathbb P ( \{ U^- > \lambda , \, U^+ > \rho \} \nonumber \\&\qquad \cap \, \{ \exists 0 \le s \le \varDelta \tau : X_\tau + \underline{Z}^- + \alpha s = X_{\tau + \varDelta \tau } + \underline{Z}^+ + \alpha (\varDelta \tau - s)\} \nonumber \\&\qquad \cap \, \{ \inf \{ X_{\tau +s} - (X_\tau + \underline{Z}^- + \alpha s) : 0 \le s \le \varDelta \tau \}> 0\} \nonumber \\&\qquad \cap \, \{ \inf \{ X_{\tau +\varDelta \tau - s} - (X_{\tau +\varDelta \tau } + \underline{Z}^+ + \alpha s) : 0 \le s \le \varDelta \tau \}> 0\}) \nonumber \\&\quad \ge \mathbb P ( \{ U^- > \lambda , \, U^+ > \rho \} \nonumber \\&\qquad \cap \, \{ \exists 0 \le s \le \varDelta \tau : X_\tau + \underline{Z}^- + \alpha s = X_{\tau + \varDelta \tau } + \underline{Z}^+ + \alpha (\varDelta \tau - s) \}) \nonumber \\&\qquad -\, \mathbb P ( \{ \exists 0 \le s \le \varDelta \tau : X_\tau + \underline{Z}^- + \alpha s = X_{\tau + \varDelta \tau } + \underline{Z}^+ + \alpha (\varDelta \tau - s)\} \nonumber \\&\qquad \quad \cap \, \{ \inf \{ X_{\tau +s} - (X_\tau + \underline{Z}^- + \alpha s) : 0 \le s \le \varDelta \tau \} \le 0 \}) \nonumber \\&\qquad -\, \mathbb P ( \{ \exists 0 \le s \le \varDelta \tau : X_\tau + \underline{Z}^- + \alpha s = X_{\tau + \varDelta \tau } + \underline{Z}^+ + \alpha (\varDelta \tau - s)\} \nonumber \\&\qquad \quad \cap \, \{ \inf \{ X_{\tau +\varDelta \tau - s} - (X_{\tau +\varDelta \tau } + \underline{Z}^+ + \alpha s) : 0 \le s \le \varDelta \tau \} \le 0 \}). \end{aligned}$$
(7.3)

Observe that

$$\begin{aligned}&\mathbb P ( \{ \exists 0 \le s \le \varDelta \tau : X_\tau + \underline{Z}^- + \alpha s = X_{\tau + \varDelta \tau } + \underline{Z}^+ + \alpha (\varDelta \tau - s)\} \\&\qquad \cap \{ \inf \{ X_{\tau +s} - (X_\tau + \underline{Z}^- + \alpha s) : 0 \le s \le \varDelta \tau \} \le 0 \}) \\&\quad = \mathbb P ( \{ \exists 0 \le s \le \varDelta \tau : (\underline{Z}^+ - \underline{Z}^-) + (X_{\tau + \varDelta \tau } - X_\tau ) = 2 \alpha s - \alpha \varDelta \tau \} \\&\qquad \cap \{ \underline{Z}^- \ge \inf _{0 \le s \le \varDelta \tau } (X_{\tau +s} - X_\tau - \alpha s) \}) \\&\quad = \mathbb P ( \{ (\underline{Z}^+ - \underline{Z}^-) + (X_{\tau + \varDelta \tau } - X_\tau ) \in [-\alpha \varDelta \tau , \alpha \varDelta \tau ]\} \\&\qquad \cap \{ \underline{Z}^- \ge \inf _{0 \le s \le \varDelta \tau } (X_{\tau +s} - X_\tau - \alpha s) \}). \end{aligned}$$

By Corollary 6.5, the independent random variables \(\underline{Z}^-\) and \(\underline{Z}^+\) have densities bounded by some constant \(c\). Moreover, they are independent of \((X_{\tau +s})_{0 \le s \le \varDelta \tau }\). Conditioning on \(\underline{Z}^-\) and \((X_{\tau +s})_{0 \le s \le \varDelta \tau }\), we see that the last probability is, using \(| \cdot |\) to denote Lebesgue measure, at most

$$\begin{aligned}&\mathbb{E }[ c | [ {\underline{Z}^{-}} - (X_{\tau + \varDelta \tau } - X_\tau ) - \alpha \varDelta \tau , {\underline{Z}^{-}} - (X_{\tau + \varDelta \tau } - X_\tau ) + \alpha \varDelta \tau ]| \\&\qquad \times \mathbf 1 \{ {\underline{Z}^{-}} \ge \inf _{0 \le s \le \varDelta \tau } (X_{\tau +s} - X_\tau - \alpha s) \}] \\&\quad = 2 c \alpha \varDelta \tau \mathbb{P }\{ {\underline{Z}^{-}} \ge \inf _{0 \le s \le \varDelta \tau } (X_{\tau +s} - X_\tau - \alpha s)\}. \end{aligned}$$

Consequently,

$$\begin{aligned}&\mathbb P (\{ \exists 0 \le s \le \varDelta \tau : X_\tau + {\underline{Z}^{-}} + \alpha s = X_{\tau + \varDelta \tau } + {\underline{Z}^{+}} + \alpha (\varDelta \tau - s)\}\nonumber \\&\qquad \cap \, \{ \inf \{ X_{\tau +s} - (X_\tau + {\underline{Z}^{-}} + \alpha s) : 0 \le s \le \varDelta \tau \} \le 0 \})\nonumber \\&\quad =o(\varDelta \tau ) \end{aligned}$$
(7.4)

as \(\varDelta \tau \downarrow 0\).

The same argument shows that

$$\begin{aligned}&\mathbb P ( \{ \exists 0 \le s \le \varDelta \tau : X_\tau + {\underline{Z}^{-}} + \alpha s = X_{\tau + \varDelta \tau } + {\underline{Z}^{+}} + \alpha (\varDelta \tau - s)\} \nonumber \\&\qquad \cap \{ \inf \{ X_{\tau +\varDelta \tau - s} - (X_{\tau +\varDelta \tau } + {\underline{Z}^{+}} + \alpha s) : 0 \le s \le \varDelta \tau \} \le 0 \})\nonumber \\&\quad = o(\varDelta \tau ) \end{aligned}$$
(7.5)

as \(\varDelta \tau \downarrow 0\).

Now,

$$\begin{aligned}&\mathbb P ( \{ U^- > \lambda , \, U^+ > \rho \} \\&\qquad \cap \{ \exists 0 \le s \le \varDelta \tau : X_\tau + {\underline{Z}^{-}} + \alpha s = X_{\tau + \varDelta \tau } + {\underline{Z}^{+}} + \alpha (\varDelta \tau - s) \}) \\&\quad = \mathbb P ( \{ U^- > \lambda , \, U^+ > \rho \} \\&\qquad \cap \{ \exists 0 \le s \le \varDelta \tau : ({\underline{Z}^{+}} - {\underline{Z}^{-}}) + (X_{\tau + \varDelta \tau } - X_\tau ) = 2 \alpha s - \alpha \varDelta \tau \}) \\&\quad = \mathbb P ( \{ {U^{-}} > \lambda ,\, {U^{+}} > \rho \} \\&\qquad \cap \{ ({\underline{Z}^{+}} - {\underline{Z}^{-}}) + (X_{\tau + \varDelta \tau } - X_\tau ) \in [- \alpha \varDelta \tau , + \alpha \varDelta \tau ] \}). \end{aligned}$$

The random vectors \((U^-,\underline{Z}^-)\) and \((U^+, \underline{Z}^+)\) are independent with respective densities \(f^-\) and \(f^+\), and so the joint density of \((U^-, U^+, \underline{Z}^+ - \underline{Z}^- )\) is

$$\begin{aligned} (u,v,w) \mapsto \int _{-\infty }^\infty f^-(u,h-w) f^+(v,h) \, dh. \end{aligned}$$

Thus, using the facts that the random variable \(\underline{Z}^+ - \underline{Z}^-\) is independent of \(X_{\tau + \varDelta \tau } - X_\tau \) and the latter random variable has the same distribution as \(X_{\varDelta \tau }\),

$$\begin{aligned}&\mathbb P ( \{ U^- > \lambda ,\, U^+ > \rho \} \\&\qquad \cap \{ (\underline{Z}^+ - \underline{Z}^-) + (X_{\tau + \varDelta \tau } - X_\tau ) \in [- \alpha \varDelta \tau , + \alpha \varDelta \tau ] \}) \\&\quad = \int _{\lambda }^{\infty } du \, \int _{\rho }^{\infty } dv \, \int _{-\infty }^\infty dw \, \int _{-\infty }^\infty dh \\&\qquad \times \mathbb{P }\{-w- \alpha \varDelta \tau < X_{\varDelta \tau } < -w + \alpha \varDelta \tau \} \, f^-(u,h-w) f^+(v,h). \end{aligned}$$

By Fubini’s theorem,

$$\begin{aligned}&\int _{-\infty }^\infty dw \, \mathbb{P }\{-w- \alpha \varDelta \tau < X_{\varDelta \tau } < -w + \alpha \varDelta \tau \} \\&\quad = \mathbb{E }\left[ \int _{-\infty }^\infty dw \, \mathbf 1 \{-X_{\varDelta \tau } - \alpha \varDelta \tau < w < -X_{\varDelta \tau } + \alpha \varDelta \tau \}\right] \\&\quad = \mathbb{E }\left[ 2 \alpha \varDelta \tau \right] = 2 \alpha \varDelta \tau . \end{aligned}$$

Moreover, for any \(\epsilon > \varDelta \tau \),

$$\begin{aligned}&\int _{-\infty }^\infty dw \, \mathbb{P }\{-w- \alpha \varDelta \tau < X_{\varDelta \tau } < -w + \alpha \varDelta \tau \} \mathbf 1 \{|w| > \epsilon \} \\&\quad = \mathbb{E }\left[ \int _{-\infty }^\infty dw \, \mathbf 1 \{-X_{\varDelta \tau } - \alpha \varDelta \tau < w < -X_{\varDelta \tau } + \alpha \varDelta \tau , \, |w| > \epsilon \}\right] \\&\quad = \mathbb{E }\left[ (|X_{\varDelta \tau }|-(\epsilon -\varDelta \tau ))_+ \wedge (2 \varDelta \tau )\right] . \end{aligned}$$

Note that \((\varDelta \tau )^{-1} [(|X_{\varDelta \tau }|-(\epsilon -\varDelta \tau ))_+ \wedge (2 \varDelta \tau )] \le 2\) and that the random variable on the left of this inequality converges to \(0\) almost surely as \(\varDelta \tau \downarrow 0\). Hence, by bounded convergence,

$$\begin{aligned} \lim _{\varDelta \tau \downarrow 0} \int _{-\infty }^\infty dw \, (\varDelta \tau )^{-1} \mathbb{P }\{-w- \alpha \varDelta \tau < X_{\varDelta \tau } < -w + \alpha \varDelta \tau \} \mathbf 1 \{|w| > \epsilon \} = 0. \end{aligned}$$

Furthermore, the independent random variables \(\underline{Z}^-\) and \(\underline{Z}^+\) both have bounded densities by Corollary 6.5; that is, the functions \(h \mapsto \int _0^\infty du \, f^-(u,h)\) and \(h \mapsto \int _0^\infty dv \, f^+(v,h)\) both belong to \(L^1 \cap L^\infty \). Therefore, the functions \(h \mapsto \int _{\lambda }^\infty du \, f^-(u,h)\) and \(h \mapsto \int _{\rho }^\infty dv \, f^+(v,h)\) both certainly belong to \(L^1 \cap L^\infty \).

It now follows from the Lebesgue differentiation theorem that

$$\begin{aligned}&\lim _{\varDelta \tau \downarrow 0} (\varDelta \tau )^{-1} \int _{-\infty }^\infty dw \, \mathbb{P }\{-w- \alpha \varDelta \tau < X_{\varDelta \tau } < -w + \alpha \varDelta \tau \} \int _{\lambda }^\infty du \, f^-(u,h-w) \\&\quad = 2 \alpha \int _{\lambda }^\infty du \, f^-(u,h) \end{aligned}$$

for Lebesgue almost every \(h \in \mathbb{R }\). Moreover, the quantity on the left is bounded by \(\sup _{h \in \mathbb{R }} 2 \alpha \int _{\lambda }^\infty du \, f^-(u,h) < \infty \). Therefore, by (7.2), (7.3), (7.4), (7.5), and bounded convergence,

$$\begin{aligned}&\lim _{\varDelta \tau \downarrow 0} (\varDelta \tau )^{-1} \mathbb P \{ T \in [\tau , \tau + \varDelta \tau ], \, L > \lambda , \, R > \rho \} \nonumber \\&\quad = 2 \alpha \int _{\lambda }^\infty du \, \int _{\rho }^\infty dv\, \int _{-\infty }^\infty dh \, f^-(u,h) f^+(v,h). \end{aligned}$$
(7.6)

As we observed above, the measure \(\mathbb{P }\{T \in d\tau \}\) is absolutely continuous with density bounded above by \(\varLambda (\mathbb{R }_+) / \int _{\mathbb{R }_+} x \varLambda (dx) < \infty \), and so the same is certainly true of the measure \( \mathbb{P }\{ T \in d\tau , \, L > \lambda , \, R > \rho \} \) for fixed \(\lambda \) and \(\rho \). Therefore, by (7.6) and the Lebesgue differentiation theorem,

$$\begin{aligned}&\mathbb{P }\{ T \in A, \, L > \lambda , \, R > \rho \} \\&\quad = 2 \alpha \int _{-\infty }^{\infty } d \tau \, \int _{\lambda }^\infty du \, \int _{\rho }^\infty dv \, \int _{-\infty }^\infty dh \, f^-(u,h) f^+(v,h) \mathbf 1 \{\tau \in A\} \end{aligned}$$

for any Borel set \(A \subseteq (-\rho ,\lambda )\), and this establishes that \((T,L,R)\) has the claimed density.

The remaining two claims follow immediately. \(\square \)
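
To spell out the computation for \(K\): the density of \((T,L,R)\) depends only on \((\lambda ,\rho )\), and for fixed \((\lambda ,\rho )\) the admissible values of \(\tau \) form the interval \((-\rho ,\lambda )\) of length \(\lambda + \rho \). Writing \(\lambda = \xi \) and \(\rho = \kappa - \xi \) with \(\kappa = \lambda + \rho \), integrating out \(\tau \) gives

$$\begin{aligned} f_K(\kappa ) = \int _0^\kappa \kappa \, 2 \alpha \int _{-\infty }^0 f^-(\xi ,h) f^+(\kappa -\xi ,h) \, dh \, d \xi = 2 \alpha \kappa \int _0^\kappa \int _{-\infty }^0 f^-(\xi ,h) f^+(\kappa -\xi ,h) \, dh \, d \xi , \end{aligned}$$

which is the stated density of \(K\).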

Corollary 7.2

Under the assumptions of Proposition 7.1,

$$\begin{aligned} \mathbb{E }[e^{-\theta K}]&= - 4 \pi \alpha \frac{d}{d \theta } \int _{-\infty }^{\infty } \left( \exp \left\{ \int _0^\infty dt \int _0^\infty [ e^{- \theta t + i z x} - 1 ] t^{-1} \mathbb{P }\{X_t - \alpha t \in dx\}\right\} \right. \\&\quad \times \left. \exp \left\{ \int _0^\infty dt \int _0^\infty [ e^{- \theta t - i z x}- 1 ] t^{-1} \mathbb{P }\{- X_t - \alpha t \in dx\} \right\} \right) \, dz. \end{aligned}$$

Proof

From Proposition 7.1,

$$\begin{aligned}&\mathbb{E }[e^{-\theta K}] \nonumber \\&\quad = 2 \alpha \int _{-\infty }^0 \int _0^\infty \kappa \int _0^\kappa f^-(\kappa -\xi ,h) f^+(\xi ,h) e^{-\theta \kappa } \, d \xi \, d \kappa \, dh \nonumber \\&\quad = 2 \alpha \int _{-\infty }^0 \int _0^\infty \int _\xi ^\infty \kappa f^-(\kappa -\xi ,h) f^+(\xi ,h) e^{-\theta \kappa } \, d \kappa \, d \xi \, dh \nonumber \\&\quad = 2 \alpha \int _{-\infty }^0 \left( \int _0^\infty f^+(\xi ,h) e^{-\theta \xi } \int _\xi ^\infty ( \kappa - \xi ) f^-(\kappa -\xi ,h) e^{-\theta ( \kappa - \xi ) } \, d \kappa \, d \xi \right. \nonumber \\&\qquad \left. + \int _0^\infty \xi f^+(\xi ,h) e^{-\theta \xi } \int _\xi ^\infty f^-(\kappa -\xi ,h) e^{-\theta ( \kappa - \xi ) } \, d \kappa \, d \xi \right) \, dh \nonumber \\&\quad = 2 \alpha \int _{-\infty }^0 \left( \int _0^\infty f^+(\xi ,h) e^{-\theta \xi } \int _0^\infty \kappa f^-(\kappa ,h) e^{-\theta \kappa } \, d \kappa \, d \xi \right. \nonumber \\&\qquad + \left. \int _0^\infty \xi f^+(\xi ,h) e^{-\theta \xi } \int _0^\infty f^-(\kappa ,h) e^{-\theta \kappa } \, d \kappa \, d \xi \right) \, dh \nonumber \\&\quad = - 2 \alpha \frac{d}{d \theta } \left( \int _{-\infty }^0 \left( \int _0^\infty f^+(\xi ,h) e^{-\theta \xi } \, d \xi \right) \left( \int _0^\infty f^-(\kappa ,h) e^{-\theta \kappa } \, d \kappa \right) \, dh \right) . \end{aligned}$$
(7.7)

Viewing \(\int _0^\infty f^+(\xi ,h) e^{-\theta \xi } \, d \xi \) and \(\int _0^\infty f^-(\kappa ,h) e^{-\theta \kappa } \, d \kappa \) as functions of \(h\) that belong to \(L^1 \cap L^\infty \subset L^2\), we can use Plancherel’s Theorem and then the Pecherskii-Rogozin formulas [13, p. 28] to get that \(\mathbb{E }[e^{-\theta K}]\) is

$$\begin{aligned}&- 2 \alpha \frac{d}{d \theta } 2 \pi \int _{-\infty }^\infty \left( \int _0^{\infty } \int _0^\infty f^+(\xi ,-h) e^{izh - \theta \xi } \, d \xi \, dh \right. \\&\qquad \left. \times \overline{ \int _0^{\infty } \int _0^\infty f^-(\kappa ,-h) e^{izh - \theta \kappa } \, d \kappa \, dh } \, dz \right) \\&\quad = - 4 \pi \alpha \frac{d}{d \theta } \int _{-\infty }^\infty \left( \exp \left\{ \int _0^\infty dt \int _0^\infty [ e^{- \theta t + i z x} -1 ] t^{-1} \mathbb{P }\{X_t - \alpha t \in dx\} \right\} \right. \\&\qquad \left. \times \overline{ \exp \left\{ \int _0^\infty dt \int _0^\infty [ e^{- \theta t + i z x} -1 ] t^{-1} \mathbb{P }\{- X_t - \alpha t \in dx\} \right\} } \right) \, dz \\&\quad = - 4 \pi \alpha \frac{d}{d \theta } \int _{-\infty }^\infty \left( \exp \left\{ \int _0^\infty dt \int _0^\infty [ e^{- \theta t + i z x} - 1 ] t^{-1} \mathbb{P }\{X_t - \alpha t \in dx\} \right\} \right. \\&\qquad \times \left. \exp \left\{ \int _0^\infty dt \int _0^\infty [ e^{- \theta t - i z x}-1 ] t^{-1} \mathbb{P }\{- X_t - \alpha t \in dx\} \right\} \right) \,dz. \end{aligned}$$

\(\square \)

7.2 Extension to more general Lévy processes

Corollary 7.2 establishes Theorem 4.10 when \(X\) has a non-zero Brownian component. The next result allows us to establish Theorem 4.10 for the class of Lévy processes described in its statement.

Lemma 7.3

Let \(X\) be a Lévy process that satisfies our standing assumptions Hypothesis 2.2. Suppose, moreover, that if \(X\) has paths of bounded variation, then \(|d| \ne \alpha \). For \(\varepsilon > 0\) set \(X^\varepsilon = X + \varepsilon B\), where \(B\) is a standard Brownian motion on \(\mathbb{R }\), independent of \(X\). Define \(G^\varepsilon ,\,D^\varepsilon \) and \(K^\varepsilon = D^\varepsilon - G^\varepsilon \) to be the analogues of \(G,\,D\) and \(K\) with \(X\) replaced by \(X^\varepsilon \). Then, \((G^\varepsilon ,D^\varepsilon )\) converges almost surely to \((G,D)\) as \(\varepsilon \downarrow 0\), and so \(K^\varepsilon \) converges almost surely to \(K\) as \(\varepsilon \downarrow 0\).

Proof

By symmetry, it suffices to show that \(D^\varepsilon \) converges almost surely to \(D\) as \(\varepsilon \downarrow 0\). We first show the convergence on the event \(\{S>0\}\).

Let \(S^\varepsilon \) be the analogue of the stopping time \(S\) with \(X\) replaced by \(X^\varepsilon \). As we observed in the proof of Theorem 2.6, \(X_S - \alpha S = X_S \wedge X_{S-} - \alpha S \le \inf \{ X_u - \alpha u : u \le 0 \}\). If \(X\) has paths of unbounded variation or bounded variation with drift satisfying \(d < \alpha \), then, since \(S\) is a stopping time, \(\liminf _{u \downarrow S} (X_u - X_S - \alpha (u-S))/(u-S) < 0\). If \(X\) has paths of bounded variation with drift satisfying \(d > \alpha \), then by the remarks at the top of page 56 of [13], the downwards ladder height process of the process \((X_u - \alpha u)_{u \ge 0}\) (resp. \((-X_u + \alpha u)_{u \ge 0}\)) has zero drift (resp. non-zero drift). By Lemma 6.4, the distribution of \(\inf \{ X_u - \alpha u : u \le 0 \}\) is absolutely continuous with a bounded density, and hence

$$\begin{aligned} \mathbb{P }\left\{ X_S - \alpha S = \inf \{ X_u - \alpha u : u \le 0 \} \right\} = 0 \end{aligned}$$

by Fubini’s theorem and the fact that the range of a subordinator with zero drift has zero Lebesgue measure almost surely.

For all three of these cases, given any \(\delta > 0\) we can, with probability one, thus find a time \(t \in (S,S+\delta )\) such that

$$\begin{aligned} X_t \wedge X_{t-} - \alpha t < \inf \{ X_u - \alpha u : u \le 0 \}. \end{aligned}$$

By the strong law of large numbers for the Brownian motion \(B\),

$$\begin{aligned} \lim _{\varepsilon \downarrow 0} \inf \{X_{u}^\varepsilon - \alpha u : u \le 0\} = \inf \{ X_u - \alpha u : u \le 0 \}. \end{aligned}$$

Hence, \(X_t^\varepsilon \wedge X_{t-}^\varepsilon - \alpha t \le \inf \{ X_{u}^\varepsilon - \alpha u : u \le 0\}\) for \(\varepsilon \) sufficiently small, and so \(S^\varepsilon \le S + \delta \) for such an \(\varepsilon \). Therefore, \(\limsup _{\varepsilon \downarrow 0} S^\varepsilon \le S\).

On the other hand, for any \(\delta > 0\) we have

$$\begin{aligned} \inf \left\{ X_t \wedge X_{t-} - \alpha t - \inf \{ X_u - \alpha u : u \le 0 \} : t \in [0, (S - \delta )_+] \right\} > 0. \end{aligned}$$

Thus, \(X_t^\varepsilon \wedge X_{t-}^\varepsilon - \alpha t > \inf \{ X_{u}^\varepsilon - \alpha u : u \le 0\}\) for all \(t \in [0, (S- \delta )_+]\) for \(\varepsilon \) sufficiently small, so that \(S^\varepsilon \ge (S - \delta )_+\). Therefore, \(\liminf _{\varepsilon \downarrow 0} S^\varepsilon \ge S\). Consequently, \(\lim _{\varepsilon \downarrow 0} S^\varepsilon = S\).

Now, as a result of the uniqueness of the global infima of Lévy processes that are not compound Poisson processes with zero drift [4, Proposition VI.4], and the law of large numbers applied to \(B\), we have

$$\begin{aligned} \lim _{\varepsilon \downarrow 0} \arg \inf _{u \ge S^\varepsilon } \{ X_u^\varepsilon + \alpha (u-S^\varepsilon )\} = \arg \inf _{u \ge S} \{ X_u + \alpha (u-S)\}. \end{aligned}$$

It follows readily that \(D^\varepsilon \) converges to \(D\) almost surely as \(\varepsilon \downarrow 0\) on the event \(\{S>0\}\).

Suppose now that we are on the event \(\{S=0\}\). Then, by Proposition 4.5, \(0 \in \fancyscript{Z}\) almost surely, and we may suppose that \(X\) satisfies the conditions of part (c) of that result, so that \(G=T=S=D=0\) almost surely. Then, by the strong law of large numbers for the Brownian motion \(B\), almost surely

$$\begin{aligned} \lim _{\varepsilon \downarrow 0} \inf _{u \le 0} \{ X_u^\varepsilon - \alpha u \} = \lim _{\varepsilon \downarrow 0} \inf _{u \ge 0} \{ X_u^\varepsilon + \alpha u \} = 0. \end{aligned}$$

Therefore, \(D^\varepsilon \) also converges to \(D\) almost surely as \(\varepsilon \downarrow 0\) on the event \(\{S = 0\}\).

\(\square \)

We are finally in a position to give the proof of Theorem 4.10. Suppose for the moment that \(X\) has a non-zero Brownian component. It follows from Theorem 4.1 that \(\delta = 0\), and hence from (4.7) we have that

$$\begin{aligned} \frac{\int _{\mathbb{R }_+} (1-e^{-\theta x}) \, \varLambda ( dx )}{\int _{\mathbb{R }_+} x \, \varLambda (dx)}&= \frac{\int _{\mathbb{R }_+} \left( \int _0^\theta x e^{- \varphi x} \, d \varphi \right) \, \varLambda (dx)}{\int _{\mathbb{R }_+} x \, \varLambda (dx)}\nonumber \\&= \int _0^\theta \left( \frac{ \int _{\mathbb{R }_+} x e^{- \varphi x} \, \varLambda (dx)}{\int _{\mathbb{R }_+} x \, \varLambda (dx)} \right) \, d \varphi = \int _0^\theta \mathbb{E }[e^{-\varphi K}] \, d \varphi . \end{aligned}$$
(7.8)

By Corollary 7.2, this last integral is

$$\begin{aligned}&4 \pi \alpha \int \limits _{-\infty }^\infty \left\{ \exp \left( \int \limits _0^\infty t^{-1} \mathbb{E }\left[ \left( e^{i z X_t - i z \alpha t} - 1\right) \mathbf 1 \{X_t \ge + \alpha t\} \right. \right. \right. \nonumber \\&\quad \left. \left. + \left( e^{i z X_t + i z \alpha t} - 1\right) \mathbf 1 \{X_t \le - \alpha t\} \right] \, dt \right) \nonumber \\&\quad - \exp \left( \int \limits _0^\infty t^{-1} \mathbb{E }\left[ \left( e^{- \theta t + i z X_t - i z \alpha t} - 1\right) \mathbf 1 \{X_t \ge + \alpha t\} \right. \right. \nonumber \\&\quad + \left. \left. \left. \left( e^{- \theta t + i z X_t + i z \alpha t} - 1 \right) \mathbf 1 \{X_t \le - \alpha t\} \right] \, dt \right) \right\} \, dz, \end{aligned}$$
(7.9)

as claimed in the theorem.

Now suppose \(X\) has zero Brownian component, but satisfies the conditions of Theorem 4.10. Since \(X\) has paths of unbounded variation almost surely, it follows from Remark 4.2 that \(\delta = 0\). Let \(X^\varepsilon = X + \varepsilon B\) and \(K^\varepsilon \) be as in Lemma 7.3, and let \(\varLambda ^\varepsilon \) be the Lévy measure of the subordinator associated with the set of points where \(X^\varepsilon \) meets its \(\alpha \)-Lipschitz minorant. By Lemma 7.3 we know that \(K^\varepsilon \rightarrow K\) almost surely, and thus since \(\delta = 0\), arguing as in (7.8) we have

$$\begin{aligned} \frac{\int _{\mathbb{R }_+} (1-e^{-\theta x}) \, \varLambda ( dx )}{\int _{\mathbb{R }_+} x \, \varLambda (dx)}&= \int _0^\theta \mathbb{E }[e^{-\varphi K}] \, d \varphi = \lim _{\varepsilon \downarrow 0} \int _0^\theta \mathbb{E }[e^{-\varphi K^\varepsilon }] \, d \varphi \nonumber \\&= \lim _{\varepsilon \downarrow 0} \frac{\int _{\mathbb{R }_+} (1-e^{-\theta x}) \, \varLambda ^\varepsilon ( dx )}{\int _{\mathbb{R }_+} x \, \varLambda ^\varepsilon (dx)}. \end{aligned}$$
(7.10)

Now, in the notation of the proof of Corollary 7.2, it can be seen that the square integrability of the densities of \(\inf _{t \ge 0} \{ X_t + \alpha t \}\) and \(\inf _{t \ge 0} \{ -X_t + \alpha t \}\) implies that

$$\begin{aligned} \int _{-\infty }^0 \left( \int _0^\infty f^+(\xi ,h) e^{-\theta \xi } \, d \xi \right) \left( \int _0^\infty f^-(\kappa ,h) e^{-\theta \kappa } \, d \kappa \right) \, dh < \infty \end{aligned}$$

for all \(\theta \ge 0\). Thus, by the same methods used in the proof of Corollary 7.2 from the last line of (7.7) onwards, it follows that (7.9) is finite. Then, since for each fixed value of \(z\) the integrand in (7.9) is a product of characteristic functions of certain infima, and hence not equal to zero, we can apply Fubini’s theorem to swap the order of the integrals within the exponentials (here we are using the absolute continuity of the distribution of \(X_t\) for all \(t>0\)). For each fixed value of \(z\), the integrand with \(X_t\) replaced by \(X^\varepsilon _t\) converges to the integrand with \(X_t\) itself as \(\varepsilon \rightarrow 0\). Hence, by the finiteness of (7.9), the expression (7.9) with \(X_t\) replaced by \(X^\varepsilon _t\) converges to (7.9).

It remains to show that \(\varLambda (\mathbb{R }_+) < \infty \). We have from (7.7) that

$$\begin{aligned} \frac{\int _{\mathbb{R }_+} (1-e^{-\theta x}) \, \varLambda ( dx )}{\int _{\mathbb{R }_+} x \, \varLambda (dx)}&= 2 \alpha \left[ \left( \,\, \int _{-\infty }^0 \left( \int _0^\infty f^+(\xi ,h) \, d \xi \right) \left( \int _0^\infty f^-(\kappa ,h) \, d \kappa \right) \, dh \right) \right. \\&\quad - \left. \left( \,\,\int _{-\infty }^0 \left( \,\,\int _0^\infty f^+(\xi ,h) e^{-\theta \xi } \, d \xi \right) \left( \int _0^\infty f^-(\kappa ,h) e^{-\theta \kappa } \, d \kappa \right) \, dh \right) \!\right] \\&\quad \rightarrow 2 \alpha \left( \,\,\int _{-\infty }^0 \left( \int _0^\infty f^+(\xi ,h) \, d \xi \right) \left( \int _0^\infty f^-(\kappa ,h) \, d \kappa \right) \, dh \right) < \infty \end{aligned}$$

as \(\theta \rightarrow \infty \), and we conclude that \(\varLambda (\mathbb{R }_+) < \infty \). \(\square \)

8 Lipschitz minorants of Brownian motion

8.1 Williams’ path decomposition for Brownian motion with drift

We recall for later use a path decomposition due to David Williams that describes the distribution of a Brownian motion with positive drift in terms of the segment of the path up to the time the process achieves its global minimum and the segment of the path after that time—see [43, p. 436] or, for a concise description, [8, Sect. IV.5].

For \(\mu \in \mathbb{R }\), let \(Z^{(\mu )} = (Z_t^{(\mu )})_{t \ge 0}\) be a Brownian motion with drift \(\mu \) started at zero. Take \(\beta > 0\) and let \(E\) be a random variable that is independent of \(Z^{(-\beta )}\) and has an exponential distribution with mean \((2 \beta )^{-1}\). Set

$$\begin{aligned} T_E := \inf \{ t \ge 0 : Z_t^{(-\beta )} = - E \}. \end{aligned}$$

Then, there is a diffusion \(W = (W_t)_{t \ge 0}\) with the properties

  (i)

    \(W\) is independent of \(Z^{(-\beta )}\) and \(E\);

  (ii)

    \(W_0 = 0\);

  (iii)

    \(W_t > 0\) for all \(t>0\) almost surely;

such that if we define a process \((\tilde{Z}_t)_{t \ge 0}\) by

$$\begin{aligned} \tilde{Z}_t := {\left\{ \begin{array}{ll} Z_t^{(-\beta )}, &{} 0 \le t < T_E, \\ Z_{T_E}^{(-\beta )} + W_{t-T_E}, &{} t \ge T_E, \\ \end{array}\right. } \end{aligned}$$
(8.1)

then \(\tilde{Z}\) has the same distribution as \(Z^{(\beta )}\). Thus, in particular,

$$\begin{aligned} - \inf \{Z^{(\beta )}_t : t \ge 0 \} \sim \mathrm{Exp}(2 \beta ) \end{aligned}$$
(8.2)

and the unique time that \(Z^{(\beta )}\) achieves its global minimum is distributed as \(T_E\). Recall also that

$$\begin{aligned} \mathbb{E }[\inf \{ t \ge 0 : Z_t^{(-\beta )} = h \}] = -\frac{h}{\beta } \end{aligned}$$
(8.3)

for \(h \le 0\) (see, for example, [8, page 295, equation \(2.2.0.1\)]).
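
Both facts are convenient targets for a quick numerical sanity check. The following Python sketch is ours and purely illustrative: the grid step, horizon, and sample size are arbitrary choices, and hitting times are located only up to grid resolution. It compares empirical values with (8.2) and (8.3).

```python
import numpy as np

rng = np.random.default_rng(0)
beta, dt, horizon, n_paths = 1.0, 2e-3, 10.0, 1000
n = int(horizon / dt)

# Z^{(beta)}_t = B_t + beta*t, simulated as a Gaussian random walk.
up = np.cumsum(rng.normal(beta * dt, np.sqrt(dt), size=(n_paths, n)), axis=1)
inf_up = np.minimum(up.min(axis=1), 0.0)        # infimum over t >= 0 includes t = 0
print(-inf_up.mean(), "vs", 1 / (2 * beta))     # (8.2): mean of an Exp(2*beta) law

# (8.3): the first passage of Z^{(-beta)} to a level h < 0 has mean -h/beta.
h = -0.5
down = np.cumsum(rng.normal(-beta * dt, np.sqrt(dt), size=(n_paths, n)), axis=1)
hit_idx = (down <= h).argmax(axis=1)            # first grid index at or below h
reached = down.min(axis=1) <= h                 # drop the rare paths that never hit
print((hit_idx[reached] * dt).mean(), "vs", -h / beta)
```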

8.2 Random variables related to the Brownian Lipschitz minorant

Proposition 8.1

Let \(X\) be a Brownian motion with drift \(\beta \), where \(|\beta | < \alpha \). Then, the distribution of \(K\) is characterized by

$$\begin{aligned} \mathbb{E }[e^{-\theta K}] = \frac{8 \alpha ( \alpha ^2 -\beta ^2) \left( \frac{1}{\sqrt{2 \theta + ( \alpha +\beta )^2 }}+\frac{1}{\sqrt{2 \theta + ( \alpha -\beta )^2 }}\right) }{\left( \sqrt{2 \theta +( \alpha +\beta )^2 }+\sqrt{2 \theta +( \alpha -\beta )^2}+2 \alpha \right) ^2} \end{aligned}$$

for \(\theta \ge 0\), and hence \(\varLambda \) is characterized by

$$\begin{aligned} \frac{\int _{\mathbb{R }_+} (1-e^{-\theta x}) \, \varLambda (dx)}{\int _{\mathbb{R }_+} x \, \varLambda (dx)} = \frac{4 (\alpha ^2 - \beta ^2) \theta }{ \left( \sqrt{2 \theta + (\alpha - \beta )^2} + \alpha - \beta \right) \left( \sqrt{2 \theta + (\alpha + \beta )^2} + \alpha + \beta \right) } \end{aligned}$$

for \(\theta \ge 0\).

Proof

We have from [8, page 269, Eq. 1.14.3(1)] that

$$\begin{aligned} \int _0^\infty f^-(\xi ,h) e^{-\theta \xi } \, d \xi = 2(\alpha -\beta )e^{h(\sqrt{2 \theta + (\alpha - \beta )^2}+(\alpha -\beta ))} \end{aligned}$$

and

$$\begin{aligned} \int _0^\infty f^+(\xi ,h) e^{-\theta \xi } \, d \xi = 2(\alpha +\beta )e^{h(\sqrt{2 \theta + (\alpha + \beta )^2}+(\alpha +\beta ))}. \end{aligned}$$

Thus, from (7.7),

$$\begin{aligned} \mathbb{E }[e^{-\theta K}]&= - 2 \alpha \frac{d}{d \theta } \left( \,\, \int _{-\infty }^0 4(\alpha ^2-\beta ^2)e^{h(\sqrt{2 \theta + (\alpha + \beta )^2}+\sqrt{2 \theta + (\alpha - \beta )^2}+2\alpha )} \, dh \right) \\&= - 8 \alpha (\alpha ^2-\beta ^2) \frac{d}{d \theta } \left( \frac{1}{\sqrt{2 \theta + (\alpha + \beta )^2}+\sqrt{2 \theta + (\alpha - \beta )^2}+2\alpha } \right) \\&= \frac{8 \alpha ( \alpha ^2 -\beta ^2) \left( \frac{1}{\sqrt{2 \theta + ( \alpha +\beta )^2 }}+\frac{1}{\sqrt{2 \theta + ( \alpha -\beta )^2 }}\right) }{\left( \sqrt{2 \theta +( \alpha +\beta )^2 }+\sqrt{2 \theta +( \alpha -\beta )^2}+2 \alpha \right) ^2}, \end{aligned}$$

as required.

Now, by (7.8),

$$\begin{aligned} \frac{\int _{\mathbb{R }_+} (1-e^{-\theta x}) \, \varLambda (dx)}{\int _{\mathbb{R }_+} x \, \varLambda (dx)}&= \int _0^\theta \mathbb{E }[e^{-\varphi K}] \, d\varphi \\&= 8 \alpha (\alpha ^2-\beta ^2) \left[ \frac{1}{\sqrt{(\alpha + \beta )^2}+\sqrt{(\alpha - \beta )^2}+2\alpha }\right. \\&\quad \left. - \frac{1}{\sqrt{2 \theta + (\alpha + \beta )^2}+\sqrt{2 \theta + (\alpha - \beta )^2}+2\alpha } \right] \\ \!&= \! 8 \alpha (\alpha ^2\!-\!\beta ^2) \left[ \frac{1}{4 \alpha } - \frac{1}{\sqrt{2 \theta + (\alpha + \beta )^2}+\sqrt{2 \theta + (\alpha - \beta )^2}+2\alpha }\right] \\&= \frac{4 (\alpha ^2 - \beta ^2) \theta }{ \left( \sqrt{2 \theta + (\alpha - \beta )^2} + \alpha - \beta \right) \left( \sqrt{2 \theta + (\alpha + \beta )^2} + \alpha + \beta \right) } \end{aligned}$$

after a little algebra. \(\square \)
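
As a numerical sanity check (ours, not part of the argument), one can verify that integrating the Laplace transform of \(K\) from \(0\) to \(\theta \), as prescribed by (7.8), reproduces the closed form just derived. The sketch below assumes SciPy is available; any parameters with \(|\beta | < \alpha \) will do.

```python
import numpy as np
from scipy.integrate import quad

alpha, beta = 2.0, 0.5  # arbitrary choices with |beta| < alpha

def lt_K(theta):
    # Laplace transform of K from Proposition 8.1
    a = np.sqrt(2 * theta + (alpha + beta) ** 2)
    b = np.sqrt(2 * theta + (alpha - beta) ** 2)
    return 8 * alpha * (alpha**2 - beta**2) * (1 / a + 1 / b) / (a + b + 2 * alpha) ** 2

def lam_ratio(theta):
    # closed form for the normalized functional of Lambda
    a = np.sqrt(2 * theta + (alpha - beta) ** 2) + alpha - beta
    b = np.sqrt(2 * theta + (alpha + beta) ** 2) + alpha + beta
    return 4 * (alpha**2 - beta**2) * theta / (a * b)

for theta in (0.5, 1.0, 5.0):
    integral, _ = quad(lt_K, 0.0, theta)
    print(theta, integral, lam_ratio(theta))  # the last two columns should agree
```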

Remark 8.2

There is an alternative way to verify that the Laplace transform for \(K\) presented in Proposition 8.1 is correct. Recall from the proof of Theorem 2.6 that \(D = S + \tilde{T}\), where the independent random variables \(S\) and \(\tilde{T}\) are defined by

$$\begin{aligned} S = \inf \left\{ s > 0 : X_s - \alpha s = \inf \{ X_u - \alpha u : u \le 0\} \right\} \end{aligned}$$

and

$$\begin{aligned} \tilde{T} = \sup \{ t \ge 0 : \tilde{X}_t = \inf \{\tilde{X}_s : s \ge 0\} \} \end{aligned}$$

with

$$\begin{aligned} (\tilde{X}_s)_{s \ge 0} := \left( (X_{S+s} - X_S) + \alpha s \right) _{s \ge 0}. \end{aligned}$$

Set \(I^- := \inf \{ X_u - \alpha u : u \le 0\}\). Because \((X_{-t} + \alpha t)_{t \ge 0}\) is a Brownian motion with drift \(\alpha - \beta \), we know from Sect. 8.1 that \(-I^-\) has an exponential distribution with mean \((2(\alpha - \beta ))^{-1}\). Now \((X_t - \alpha t)_{t \ge 0}\) is a Brownian motion with drift \(\beta - \alpha \) that is independent of \(I^-\), and so, again from Sect. 8.1, \(S\) is distributed as the time at which a Brownian motion with drift \(\alpha - \beta \) achieves its global minimum. It follows that

$$\begin{aligned} \mathbb{E }[e^{-\theta S}] = \frac{2(\alpha -\beta )}{\sqrt{2 \theta + (\alpha - \beta )^2} + \alpha - \beta } \end{aligned}$$

and

$$\begin{aligned} \mathbb{E }[e^{-\theta \tilde{T}}] = \frac{2(\alpha +\beta )}{\sqrt{2 \theta + (\alpha + \beta )^2} + \alpha + \beta } \end{aligned}$$

—see, for example, [8, page 266, Eq. 1.12.3(2)].

By stationarity, \(D\) has the same distribution as \(U(D-G) = UK\), where \(U\) is an independent random variable that is uniformly distributed on \([0,1]\). Thus,

$$\begin{aligned} \mathbb{E }[e^{-\theta D}] = \int _0^1 \mathbb{E }[e^{- u \theta K}] \, du = \mathbb{E }\left[ \frac{1}{\theta K} \left( 1 - e^{- \theta K}\right) \right] = \frac{1}{\theta } \frac{\int _{\mathbb{R }_+} (1-e^{-\theta x} ) \, \varLambda (dx)}{\int _{\mathbb{R }_+} x \, \varLambda (dx)}, \end{aligned}$$

and

$$\begin{aligned} \frac{\int _{\mathbb{R }_+} (1-e^{-\theta x}) \, \varLambda (dx)}{\int _{\mathbb{R }_+} x \, \varLambda (dx)}&= \theta \mathbb{E }[e^{-\theta D}] \\&= \theta \mathbb{E }[e^{-\theta S}] \mathbb{E }[e^{-\theta \tilde{T}}] \\ \!&= \! \frac{4 (\alpha ^2 - \beta ^2) \theta }{ \left( \sqrt{2 \theta + (\alpha - \beta )^2} + \alpha \!-\! \beta \right) \left( \sqrt{2 \theta + (\alpha + \beta )^2} + \alpha + \beta \right) }. \end{aligned}$$

This equality agrees with the one found in Proposition 8.1. Differentiating the expression on the right with respect to \(\theta \) and recalling the observation (7.8), we arrive at the expression for the Laplace transform of \(K\) in Proposition 8.1.
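
The identification of \(S\) with the argmin time of a Brownian motion with drift \(\alpha - \beta \) can itself be checked by simulation. The sketch below is ours; the discretization parameters are arbitrary and the argmin is located only up to grid resolution.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 1.5, -0.4
dt, horizon, n_paths = 5e-3, 15.0, 2000
n = int(horizon / dt)

# S is distributed as the argmin time of Brownian motion with drift alpha - beta > 0.
drift = alpha - beta
paths = np.cumsum(rng.normal(drift * dt, np.sqrt(dt), size=(n_paths, n)), axis=1)
paths = np.hstack([np.zeros((n_paths, 1)), paths])  # include the value at t = 0
S = paths.argmin(axis=1) * dt

for theta in (0.5, 1.0, 2.0):
    exact = 2 * (alpha - beta) / (np.sqrt(2 * theta + (alpha - beta) ** 2) + alpha - beta)
    print(theta, np.exp(-theta * S).mean(), exact)  # Monte Carlo vs formula
```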

Proposition 8.3

Let \(X\) be a Brownian motion with zero drift. Then,

$$\begin{aligned} \mathbb{P }\{K \in d \kappa \} = \left( \frac{4 \alpha ^3 }{ \sqrt{2 \pi } } \kappa ^{1/2} e^{- \alpha ^2 \kappa /2} - 4 \alpha ^4 \kappa \varPhi (- \alpha \kappa ^{1/2})\right) \, d\kappa , \end{aligned}$$

where \(\varPhi \) is the standard normal cumulative distribution function. Thus,

$$\begin{aligned} \frac{\varLambda (dx)}{\varLambda (\mathbb{R }_+)} = \left( \frac{2 \alpha }{ \sqrt{2 \pi } } x^{-1/2} e^{- \alpha ^2 x /2} - 2 \alpha ^2 \varPhi (- \alpha x^{1/2}) \right) dx. \end{aligned}$$

Proof

We have from [8, page 269, Eq. 1.14.4(1)] that

$$\begin{aligned} f^-(\xi ,h) = f^+(\xi ,h) = \frac{- 2 \alpha h }{ \sqrt{2\pi } \xi ^{3/2} } \exp \left\{ - \frac{( \alpha \xi -h)^2}{ 2 \xi } \right\} . \end{aligned}$$

Thus, by Proposition 7.1,

$$\begin{aligned} \frac{\mathbb{P }\{ K \in d \kappa \} }{d\kappa }&= \frac{ 4 \alpha ^3 \kappa e^{- \alpha ^2 \kappa / 2 } }{ \pi } \int _0^\kappa \int _{-\infty }^0 \frac{ h^2 }{ \xi ^{3/2} (\kappa -\xi )^{3/2} } \exp \left\{ 2 \alpha h - \frac{ \kappa h^2}{ 2 \xi (\kappa - \xi ) } \right\} \, dh \, d \xi \\ \!&= \! \frac{ 4 \alpha ^3 e^{- \alpha ^2 \kappa / 2 } }{ \pi \kappa }\! \int _{-\infty }^0\! h^2 e^{ 2 \alpha h} \left( \int _0^1 \frac{ 1 }{ \xi ^{3/2} (1-\xi )^{3/2} } \exp \left\{ - \frac{ h^2/2 \kappa }{ \xi (1\! -\! \xi ) } \right\} \, d \xi \!\right) dh. \end{aligned}$$

The change of variable \(z = \frac{1}{\xi (1-\xi )} -4\) gives that

$$\begin{aligned} \int _0^{1/2} \frac{ 1 }{ \xi ^{3/2} (1-\xi )^{3/2} } \exp \left\{ - \frac{ c }{ \xi (1 - \xi ) } \right\} \, d \xi = e^{-4c} \int _0^\infty z^{-1/2} e^{- c z} dz = \frac{e^{-4c} \sqrt{\pi }}{ \sqrt{c}} \end{aligned}$$

for any \(c>0\), and hence

$$\begin{aligned} \frac{\mathbb{P }\{ K \in d \kappa \}}{d\kappa }&= \frac{ 4 \alpha ^3 e^{- \alpha ^2 \kappa / 2 } }{ \pi \kappa } \int _{-\infty }^0 h^2 e^{2 \alpha h } \frac{2 e^{-2h^2 / \kappa } \sqrt{\pi }}{ \sqrt{ h^2/2 \kappa }}\, dh \\&= - \frac{ 8 \sqrt{2} \alpha ^3 e^{- \alpha ^2 \kappa / 2 } }{ \sqrt{\pi \kappa } } \int _{-\infty }^0 h e^{2 \alpha h - 2h^2 / \kappa } \, dh. \end{aligned}$$

The further change of variable \(z = 2 \kappa ^{-1/2} h - \alpha \kappa ^{1/2} \) leads to

$$\begin{aligned} \frac{\mathbb{P }\{K \in d \kappa \}}{d\kappa }&= - 4 \alpha ^3 \int _{-\infty }^{- \alpha \kappa ^{1/2} } (\kappa ^{1/2} z + \alpha \kappa ) \frac{1}{ \sqrt{2 \pi } } e^{- z^2 / 2 } \, dz \\&= - 4 \alpha ^3 \left( - \frac{\kappa ^{1/2} }{ \sqrt{2 \pi } } e^{- \alpha ^2 \kappa /2} + \alpha \kappa \varPhi (- \alpha \kappa ^{1/2} ) \right) \\&= \frac{4 \alpha ^3 }{ \sqrt{2 \pi } } \kappa ^{1/2} e^{- \alpha ^2 \kappa /2} - 4 \alpha ^4 \kappa \varPhi (- \alpha \kappa ^{1/2}). \end{aligned}$$

Because \(\varLambda (dx)\) is proportional to \(x^{-1} \mathbb{P }\{K \in dx\}\), we need only find \(\int _{\mathbb{R }_+} x^{-1} \mathbb{P }\{K \in dx\}\) to establish the claim for \(\varLambda \), and this can be done using methods of integration similar to those used in Remark 8.4 below to check that the density of \(K\) integrates to one. \(\square \)
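
The change-of-variable identity above is also easy to confirm by quadrature; a minimal sketch (ours), assuming SciPy:

```python
import numpy as np
from scipy.integrate import quad

def lhs(c):
    # integral over (0, 1/2) of x^{-3/2} (1-x)^{-3/2} exp(-c / (x(1-x)))
    return quad(lambda x: x**-1.5 * (1 - x)**-1.5 * np.exp(-c / (x * (1 - x))),
                0.0, 0.5)[0]

for c in (0.25, 1.0, 4.0):
    print(c, lhs(c), np.exp(-4 * c) * np.sqrt(np.pi / c))  # columns should agree
```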

Remark 8.4

We can check directly that the density given for \(K\) integrates to one. For the first term, we use the substitution \(\eta = \alpha ^2 \kappa /2\), and for the second we use the substitution \(\eta = \alpha ^2 \kappa \) and then change the order of integration to get that the integral of the claimed density is

$$\begin{aligned} \frac{4 }{ \varGamma (3/2) } \int _0^{\infty } \eta ^{1/2} e^{- \eta } \, d \eta - 4 \int _0^\infty \eta \varPhi (- \eta ^{1/2}) \, d \eta&= 4 - \frac{4}{\sqrt{2 \pi } } \int _0^\infty \int _{\eta ^{1/2}}^\infty \eta e^{-y^2/2} \, d y \, d \eta \\&= 4 - \frac{4}{\sqrt{2 \pi } } \int _0^\infty \left( \int _0^{y^2} \eta \, d \eta \right) e^{-y^2/2} \, d y \\&= 4 - \frac{2}{\sqrt{2 \pi } } \int _0^\infty y^4 e^{- y^2/2} \, dy \\&= 4 - \frac{3}{ \varGamma (5/2) } \int _0^\infty x^{3/2} e^{- x} \, dx = 1. \end{aligned}$$
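
The same bookkeeping can be carried out numerically; the sketch below (ours, assuming SciPy) integrates the claimed densities of \(K\) and of \(\varLambda (dx)/\varLambda (\mathbb{R }_+)\) and checks that both are normalized.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

alpha = 1.3  # arbitrary

def dens_K(k):
    # density of K from Proposition 8.3
    return (4 * alpha**3 / np.sqrt(2 * np.pi) * np.sqrt(k) * np.exp(-alpha**2 * k / 2)
            - 4 * alpha**4 * k * norm.cdf(-alpha * np.sqrt(k)))

def dens_Lam(x):
    # claimed density of the normalized measure Lambda(dx) / Lambda(R_+)
    return (2 * alpha / np.sqrt(2 * np.pi) / np.sqrt(x) * np.exp(-alpha**2 * x / 2)
            - 2 * alpha**2 * norm.cdf(-alpha * np.sqrt(x)))

print(quad(dens_K, 0, np.inf)[0])    # should be close to 1
print(quad(dens_Lam, 0, np.inf)[0])  # should be close to 1
```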

Proposition 8.5

Let \(X\) be a Brownian motion with drift \(\beta \), where \(|\beta | < \alpha \). Recall that \(T := \arg \max \{M_t : G \le t \le D\}\) and \(H := X_T - M_T \). Then, \(H\) has a \(\mathrm{Gamma}(2, 4 \alpha )\) distribution; that is, the distribution of \(H\) is absolutely continuous with respect to Lebesgue measure with density \(h \mapsto (4 \alpha )^2 h e^{-4 \alpha h},\,h \ge 0\). Also,

$$\begin{aligned} \mathbb{P }\{T>0\} = \frac{1}{2} \left( 1 + \frac{\beta }{\alpha } \right) , \end{aligned}$$

and the distribution of \(T\) is characterized by

$$\begin{aligned} \mathbb{E }\left[ e^{-\theta T}\right]&= 8 \alpha (\alpha ^2-\beta ^2) \frac{1}{\theta } \left( \frac{1}{ \sqrt{(\alpha +\beta )^2 - 2 \theta } + 3 \alpha - \beta } \right. \\&\left. - \frac{1}{ \sqrt{(\alpha -\beta )^2 + 2 \theta } + 3 \alpha + \beta } \right) \end{aligned}$$

for \(- \frac{(\alpha -\beta )^2}{2} \le \theta \le \frac{(\alpha +\beta )^2}{2}\).

Proof

Consider the claim regarding the distribution of \(H\). A slight elaboration of the proof of Proposition 7.1 shows, in the notation of that result, that the random vector \((T,L,R,-H)\) has a distribution that is absolutely continuous with respect to Lebesgue measure with joint density \((\tau ,\lambda ,\rho ,\eta ) \mapsto 2 \alpha f^-(\lambda ,\eta ) f^+(\rho ,\eta ),\,\lambda ,\rho >0,\,\tau - \lambda < 0 < \tau + \rho ,\,\eta < 0\). Therefore,

$$\begin{aligned} \mathbb{P }\{H \in dh\} = 2 \alpha \left( \int _0^\infty \int _0^\infty (\lambda + \rho ) f^-(\lambda ,-h) f^+(\rho ,-h) \, d\lambda \, d\rho \right) dh. \end{aligned}$$

By (8.2),

$$\begin{aligned} \int _0^\infty f^-(\lambda , -h) \, d \lambda = 2(\alpha -\beta )e^{-2(\alpha -\beta ) h}. \end{aligned}$$
(8.4)

Combining this with (8.3) gives

$$\begin{aligned} \int _0^\infty \lambda f^-(\lambda , -h) \, d \lambda = \frac{h}{\alpha - \beta } \times 2(\alpha -\beta )e^{-2(\alpha -\beta ) h} = 2 h e^{-2(\alpha -\beta ) h}. \end{aligned}$$
(8.5)

Similarly,

$$\begin{aligned} \int _0^\infty f^+(\rho , -h) \, d \rho = 2(\alpha +\beta )e^{-2(\alpha +\beta ) h} \end{aligned}$$
(8.6)

and

$$\begin{aligned} \int _0^\infty \rho f^+(\rho , -h) \, d \rho = 2 h e^{-2(\alpha +\beta ) h}. \end{aligned}$$
(8.7)

Thus,

$$\begin{aligned} \mathbb{P }\{H \in dh\}&= 2 \alpha \left[ 2 h e^{-2(\alpha -\beta ) h} \times 2(\alpha +\beta )e^{-2(\alpha +\beta ) h}\right. \\&\quad \left. + 2(\alpha -\beta )e^{-2(\alpha -\beta ) h} \times 2 h e^{-2(\alpha +\beta ) h}\right] \, dh \\&= (4 \alpha )^2 h e^{-4 \alpha h} \, dh. \end{aligned}$$

Note that \(T>0\) if and only if \(I^+ > I^-\), where

$$\begin{aligned} I^+ := \inf \{X_t + \alpha t : t \ge 0\} \quad \text{ and } \quad I^- := \inf \{X_t - \alpha t : t \le 0\}. \end{aligned}$$

Recall from Sect. 8.1 that the independent random variables \(-I^+\) and \(-I^-\) are exponentially distributed with respective means \((2(\alpha + \beta ))^{-1}\) and \((2(\alpha - \beta ))^{-1}\). It follows that

$$\begin{aligned} \mathbb{P }\{T > 0\} = \frac{2(\alpha + \beta )}{2(\alpha + \beta ) + 2(\alpha - \beta )} = \frac{1}{2} \left( 1 + \frac{\beta }{\alpha } \right) \!. \end{aligned}$$

We can also derive this last result from Proposition 7.1 as follows.

$$\begin{aligned} \mathbb{P }\{T>0\}&= 2 \alpha \int _{-\infty }^0 \int _0^\infty \int _{-\infty }^0 \int _\tau ^\infty f^-(\tau -\gamma ,h) f^+(\delta -\tau ,h) \,d \delta \, d \gamma \, d\tau \, dh \\&= 2 \alpha \int _{-\infty }^0 \int _0^\infty \int _{-\infty }^0 f^-(\tau -\gamma ,h) \left( \int _0^\infty f^+(\eta ,h) \, d\eta \right) d \gamma \, d \tau \, dh \\&= 2 \alpha \int _{-\infty }^0 \left( \int _0^\infty \int _{-\infty }^0 f^-(\tau -\gamma ,h) \, d \gamma \, d \tau \right) \left( \int _0^\infty f^+(\eta ,h) \, d\eta \right) \, dh\\&= 2 \alpha \int _{-\infty }^0 \left( \int _0^\infty \eta f^-(\eta ,h) \, d\eta \right) \left( \int _0^\infty f^+(\eta ,h) \, d\eta \right) \, dh. \end{aligned}$$

Substituting in (8.5) and (8.6), and then evaluating the resulting straightforward integral establishes the result.

The Laplace transform of \(T\) may be calculated using very similar methods. \(\square \)
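
The formula for \(\mathbb{P }\{T>0\}\) amounts to an exponential race between \(-I^+\) and \(-I^-\), which is trivial to check by simulation; a sketch (ours):

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, beta, n = 1.0, 0.3, 10**6

# T > 0 iff I^+ > I^-, i.e. iff -I^+ < -I^-; the negated infima are
# independent exponentials with rates 2(alpha+beta) and 2(alpha-beta).
neg_Ip = rng.exponential(1 / (2 * (alpha + beta)), n)
neg_Im = rng.exponential(1 / (2 * (alpha - beta)), n)

print((neg_Ip < neg_Im).mean(), "vs", 0.5 * (1 + beta / alpha))
```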

9 Some facts about Lipschitz minorants

The following is a restatement of (1.1) accompanied by a proof.

Lemma 9.1

Suppose that the function \(f: \mathbb{R }\rightarrow \mathbb{R }\) has \(\alpha \)-Lipschitz minorant \(m: \mathbb{R }\rightarrow \mathbb{R }\). Then,

$$\begin{aligned} m(t)&= \sup \{ h \in \mathbb{R }: h - \alpha |t-s| \le f(s) \text{ for } \text{ all } s \in \mathbb{R }\} \\&= \inf \{f(s) + \alpha |t-s| : s \in \mathbb{R }\}. \end{aligned}$$

Proof

Consider the first equality. Fix \(t \in \mathbb{R }\). Because \(m\) is \(\alpha \)-Lipschitz, if \(h \le m(t)\), then \(h - \alpha |t-s| \le m(t) - \alpha |t-s| \le m(s) \le f(s)\) for all \(s \in \mathbb{R }\). On the other hand, if \(h > m(t)\), then \(s \mapsto (h - \alpha |t-s|) \vee m(s)\) is an \(\alpha \)-Lipschitz function that dominates \(m\) (strictly at \(t\)), and so \((h - \alpha |t-s|) \vee m(s) > f(s)\) for some \(s \in \mathbb{R }\). This implies that \(h - \alpha |t-s| > f(s)\), since \(m(s) \le f(s)\). The second equality is simply a rephrasing of the first. \(\square \)
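
On a discrete grid, the infimum formula of Lemma 9.1 can be evaluated in linear time by a forward and a backward sweep (a one-dimensional distance transform: the forward pass realizes the infimum over \(s \le t\), the backward pass the infimum over \(s \ge t\)). The sketch below is ours: the function name is our own, a Gaussian random walk stands in for \(f\), and on a finite window the result approximates the minorant only away from the endpoints.

```python
import numpy as np

def lipschitz_minorant(t, f, alpha):
    """Grid version of m(t) = inf_s { f(s) + alpha * |t - s| } (Lemma 9.1).

    Two sweeps realize the infimum in O(n): the forward pass accounts for
    s <= t, the backward pass for s >= t.
    """
    m = f.astype(float).copy()
    dt = np.diff(t)
    for i in range(1, len(m)):            # infimum over s <= t
        m[i] = min(m[i], m[i - 1] + alpha * dt[i - 1])
    for i in range(len(m) - 2, -1, -1):   # infimum over s >= t
        m[i] = min(m[i], m[i + 1] + alpha * dt[i])
    return m

# Example: a Gaussian random walk as a stand-in for a Brownian path.
rng = np.random.default_rng(3)
alpha = 2.0
t = np.linspace(0.0, 10.0, 10001)
f = np.cumsum(rng.normal(0.0, np.sqrt(t[1]), len(t)))
m = lipschitz_minorant(t, f, alpha)
assert np.all(m <= f + 1e-12)                                    # m is dominated by f
assert np.all(np.abs(np.diff(m)) <= alpha * np.diff(t) + 1e-12)  # m is alpha-Lipschitz
```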

We leave the proof of the following straightforward consequence of Lemma 9.1 to the reader.

Corollary 9.2

Suppose that the function \(f: \mathbb{R }\rightarrow \mathbb{R }\) has \(\alpha \)-Lipschitz minorant \(m: \mathbb{R }\rightarrow \mathbb{R }\). Define functions \(f^\leftarrow : \mathbb{R }\rightarrow \mathbb{R }\) and \(f^\rightarrow : \mathbb{R }\rightarrow \mathbb{R }\) by

$$\begin{aligned} f^\leftarrow (t) := {\left\{ \begin{array}{ll} f(t),&{} t < 0, \\ m(0) - \alpha t,&{} t \ge 0, \end{array}\right. } \end{aligned}$$

and

$$\begin{aligned} f^\rightarrow (t) := {\left\{ \begin{array}{ll} m(0) + \alpha t,&{} t \le 0, \\ f(t),&{} t > 0. \end{array}\right. } \end{aligned}$$

Denote the \(\alpha \)-Lipschitz minorants of \(f^\leftarrow \) and \(f^\rightarrow \) by \(m^\leftarrow \) and \(m^\rightarrow \), respectively. Then, \(m^\leftarrow (t) = m(t)\) for all \(t \le 0\) and \(m^\rightarrow (t) = m(t)\) for all \(t \ge 0\).

The next result says that if \(f\) is a càdlàg function with \(\alpha \)-Lipschitz minorant \(m\), then on an open interval in the complement of the closed set \(\{t \in \mathbb{R }: m(t) = f(t) \wedge f(t-)\}\) the graph of the function \(m\) is either a straight line or a “sawtooth”.

Lemma 9.3

Suppose that \(f: \mathbb{R }\rightarrow \mathbb{R }\) is a càdlàg function with \(\alpha \)-Lipschitz minorant \(m : \mathbb{R }\rightarrow \mathbb{R }\). The set \(\{t \in \mathbb{R }: m(t) = f(t) \wedge f(t-)\}\) is closed. If \(t^{\prime } < t^{\prime \prime }\) are such that \(f(t^{\prime }) \wedge f(t^{\prime }-) = m(t^{\prime }),\,f(t^{\prime \prime }) \wedge f(t^{\prime \prime }-) = m(t^{\prime \prime })\), and \(f(t) \wedge f(t-) > m(t)\) for \(t^{\prime } < t < t^{\prime \prime }\), then, setting \(t^* = (f(t^{\prime \prime }) \wedge f(t^{\prime \prime }-) - f(t^{\prime }) \wedge f(t^{\prime }-) + \alpha (t^{\prime \prime } + t^{\prime }))/(2 \alpha )\),

$$\begin{aligned} m(t) = {\left\{ \begin{array}{ll} f(t^{\prime }) \wedge f(t^{\prime }-) + \alpha (t - t^{\prime }), &{} t^{\prime } \le t \le t^*, \\ f(t^{\prime \prime }) \wedge f(t^{\prime \prime }-) + \alpha (t^{\prime \prime } - t), &{} t^* \le t \le t^{\prime \prime }. \\ \end{array}\right. } \end{aligned}$$

Proof

We first show that the set \(\{t \in \mathbb{R }: m(t) = f(t) \wedge f(t-)\}\) is closed by showing that its complement is open. Suppose \(t\) is in the complement, so that \(f(t) \wedge f(t-) - m(t) =: \epsilon > 0\). Because \(f\) is càdlàg and \(m\) is continuous, there exists \(\delta > 0\) such that if \(|s - t| < \delta \), then \(f(s) > f(t) \wedge f(t-) - \epsilon /3\) and \(m(s) < m(t) + \epsilon /3\). Hence, \(f(s-) \ge f(t) \wedge f(t-) - \epsilon /3\) and \(f(s) \wedge f(s-) - m(s) > \epsilon /3\) for \(|s - t| < \delta \), showing that a neighborhood of \(t\) is also in the complement.

Fig. 3
figure 3

Two instances of the construction of Lemma 9.4. The straight lines have slopes \(\pm \alpha \) (\(\alpha \) is different in each example). The dotted line shows the Lipschitz minorant, and the dashed line extends onwards from \(\inf \{ f(u) - \alpha u : u \le 0 \} \) so that the first time after zero that the process meets the dashed line is \(\mathbf s \)

Turning to the second claim, define a function \(\tilde{m} : \mathbb{R }\rightarrow \mathbb{R }\) by

$$\begin{aligned} \tilde{m}(t) := {\left\{ \begin{array}{ll} f(t^{\prime }) \wedge f(t^{\prime }-) + \alpha (t - t^{\prime }), &{} t \le t^*, \\ f(t^{\prime \prime }) \wedge f(t^{\prime \prime }-) + \alpha (t^{\prime \prime } - t), &{} t^* \le t. \\ \end{array}\right. } \end{aligned}$$

That is, \(\tilde{m}(t) = h^* - \alpha |t - t^*|\), where

$$\begin{aligned} h^* = (f(t^{\prime \prime }) \wedge f(t^{\prime \prime }-) + f(t^{\prime }) \wedge f(t^{\prime }-) + \alpha (t^{\prime \prime } - t^{\prime }))/2. \end{aligned}$$

Because \(m(t^{\prime }) = \tilde{m}(t^{\prime }),\,m(t^{\prime \prime }) = \tilde{m}(t^{\prime \prime })\), and \(m\) is \(\alpha \)-Lipschitz, we have \(m(t) \le \tilde{m}(t)\) for \(t \in [t^{\prime },t^{\prime \prime }]\) and \(m(t) \ge \tilde{m}(t)\) for \(t \notin [t^{\prime },t^{\prime \prime }]\). Suppose for some \(t_0 \in (t^{\prime },t^{\prime \prime })\) that \(m(t_0) < \tilde{m}(t_0)\). We must have that \(m(t_0) - \alpha |t^{\prime }-t_0| \le m(t^{\prime }) \le f(t^{\prime }) \wedge f(t^{\prime }-)\) and \(m(t_0) - \alpha |t^{\prime \prime }-t_0| \le m(t^{\prime \prime }) \le f(t^{\prime \prime }) \wedge f(t^{\prime \prime }-)\). Moreover, both of these inequalities must be strict, because otherwise we would conclude that \(m(t_0) \ge \tilde{m}(t_0)\).

We can therefore choose \(\epsilon > 0\) sufficiently small so that \(m(t_0) + \epsilon - \alpha |t - t_0| < f(t) \wedge f(t-)\) for \(t \in [t^{\prime },t^{\prime \prime }]\). This implies that \(m(t_0) + \epsilon - \alpha |t - t_0| < \tilde{m}(t) \le m(t) \le f(t) \wedge f(t-)\) for \(t \notin [t^{\prime },t^{\prime \prime }]\). Thus, \(t \mapsto (m(t_0) + \epsilon - \alpha |t - t_0|) \vee m(t)\) is an \(\alpha \)-Lipschitz function that is dominated everywhere by \(f\) and strictly dominates \(m\) at the point \(t_0\), contradicting the definition of \(m\). \(\square \)
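
The sawtooth description is easy to observe numerically: between two consecutive contact points, the grid minorant from the sketch following Lemma 9.1 coincides with the tent \(t \mapsto h^* - \alpha |t - t^*|\). The sketch below (ours) reuses `t`, `f`, `m`, and `alpha` from that block; for a continuous path \(f(t-) = f(t)\), so contact is simply \(m = f\).

```python
import numpy as np

# Reuses t, f, m, alpha from the sketch after Lemma 9.1.
idx = np.flatnonzero(np.isclose(m, f, atol=1e-9))   # grid contact points
gaps = np.flatnonzero(np.diff(idx) > 1)             # consecutive contacts with a gap
# assumes at least one gap exists (typical for these paths)
j0, j1 = idx[gaps[0]], idx[gaps[0] + 1]
tp, tpp = t[j0], t[j1]

tstar = (f[j1] - f[j0] + alpha * (tpp + tp)) / (2 * alpha)
hstar = (f[j1] + f[j0] + alpha * (tpp - tp)) / 2
tent = hstar - alpha * np.abs(t[j0:j1 + 1] - tstar)
print(np.abs(m[j0:j1 + 1] - tent).max())            # should be ~0 (grid error only)
```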

The next lemma provides a recipe for finding \(\inf \{ t>0 : f(t) \wedge f(t-) = m(t) \}\) when \(f\) is a càdlàg function with \(\alpha \)-Lipschitz minorant \(m\). Figure 3 gives two examples of how the recipe applies to different paths (note that the value of \(\alpha \) differs between the two examples).

Lemma 9.4

Let \(f: \mathbb{R }\rightarrow \mathbb{R }\) be a càdlàg function with \(\alpha \)-Lipschitz minorant \(m : \mathbb{R }\rightarrow \mathbb{R }\). Set

$$\begin{aligned} \mathbf d&:= \inf \{ t>0 : f(t) \wedge f(t-) = m(t) \},\\ \mathbf s&:= \inf \left\{ t > 0 : f(t) \wedge f(t-) - \alpha t \le \inf \{ f(u) - \alpha u : u \le 0 \} \right\} , \end{aligned}$$

and

$$\begin{aligned} \mathbf e := \inf \left\{ t \ge \mathbf s : f(t) \wedge f(t-) + \alpha (t-\mathbf s ) = \inf \{ f(u) + \alpha (u-\mathbf s ) : u \ge \mathbf s \} \right\} . \end{aligned}$$

Suppose that \(f(\mathbf s ) \le f(\mathbf s -)\). Then, \(\mathbf e =\mathbf d \).

Proof

It suffices to show the following:

$$\begin{aligned} f(t) \wedge f(t-)&> m(t) \quad \text{ for } \; 0 < t < \mathbf e ,\end{aligned}$$
(9.1)
$$\begin{aligned} f(\mathbf e ) \wedge f(\mathbf e -)&\le m(\mathbf e ),\end{aligned}$$
(9.2)
$$\begin{aligned} \mathbf d > 0 \Longrightarrow \mathbf e&> 0. \end{aligned}$$
(9.3)

For \(0 < t < \mathbf s \), it follows from the definition of \(\mathbf s \) that

$$\begin{aligned} f(t) \wedge f(t-)&> \inf \{ f(u) - \alpha u : u \le 0 \} + \alpha t\\&= \inf \{ f(u) + \alpha (t-u) : u \le 0 \} \\&\ge \inf \{ f(u) + \alpha |t-u| : u \in \mathbb{R }\} = m(t). \end{aligned}$$

For \(\mathbf s \le t < \mathbf e \), it follows from the definition of \(\mathbf e \) that

$$\begin{aligned} f(t) \wedge f(t-) + \alpha (t-\mathbf s ) > \inf \{ f(u) + \alpha (u-\mathbf s ) : u \ge \mathbf s \}, \end{aligned}$$

and hence

$$\begin{aligned} f(t) \wedge f(t-)&> \inf \{f(u) + \alpha (u-\mathbf s ) : u \ge \mathbf s \} -\alpha (t-\mathbf s ) \\&= \inf \{f(u) + \alpha (u-t) : u \ge \mathbf s \} \\&\ge \inf \{ f(u) + \alpha |t-u| : u \in \mathbb{R }\} = m(t). \end{aligned}$$

This completes the proof of (9.1).

Now \(f(\mathbf e ) \wedge f(\mathbf e -) + \alpha (\mathbf e - \mathbf s ) = \inf \{ f(u) + \alpha (u - \mathbf s ) : u \ge \mathbf s \} \), and so \(f(\mathbf e ) \wedge f(\mathbf e -) = \inf \{ f(u) + \alpha (u - \mathbf e ) : u \ge \mathbf s \} \). This certainly gives

$$\begin{aligned} f(\mathbf e ) \wedge f(\mathbf e -) \le \inf \{ f(u) + \alpha |\mathbf e - u| : u \ge \mathbf s \}. \end{aligned}$$
(9.4)

Combined with the definition of \(\mathbf s \), it also gives

$$\begin{aligned} f(\mathbf e ) \wedge f(\mathbf e -) + \alpha (\mathbf e - \mathbf s )&\le f(\mathbf s ) + \alpha (\mathbf s - \mathbf s ) \\&\le \inf \{ f(u) - \alpha u : u \le 0 \} + \alpha \mathbf s . \end{aligned}$$

Thus, \(f(\mathbf e ) \wedge f(\mathbf e -) + 2 \alpha (\mathbf e - \mathbf s ) \le \inf \{ f(u) + \alpha (\mathbf e - u) : u \le 0 \} \) and hence, a fortiori,

$$\begin{aligned} f(\mathbf e ) \wedge f(\mathbf e -) \le \inf \{ f(s) + \alpha |\mathbf e - s| : s \le 0 \}. \end{aligned}$$
(9.5)

For \(0 < s < \mathbf s \), \(f(s) - \alpha s > \inf \{ f(r) - \alpha r : r \le 0 \}\), and so

$$\begin{aligned} \inf \{f(s) + \alpha |\mathbf e - s| : 0 \le s < \mathbf s \}&= \inf \{f(s) + \alpha (\mathbf e - s) : 0 \le s < \mathbf s \} \nonumber \\&= \inf \{f(s) - \alpha s : 0 \le s < \mathbf s \} + \alpha \mathbf e \nonumber \\&\ge \inf \{ f(r) - \alpha r : r \le 0 \} + \alpha \mathbf e \nonumber \\&= \inf \{ f(r) + \alpha (\mathbf e - r) : r \le 0 \} \nonumber \\&= \inf \{ f(r) + \alpha |\mathbf e - r| : r \le 0 \}. \end{aligned}$$
(9.6)

Combining (9.4), (9.5) and (9.6) gives (9.2).

The proof of (9.3) is a straightforward consequence of Lemma 9.3 and we leave it to the reader. \(\square \)
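
The recipe translates directly into grid code: compute \(\mathbf s \) as the first time after zero that the drift-corrected path reaches the infimum over negative times, \(\mathbf e \) as the first argmin thereafter, and compare with the first contact time \(\mathbf d \) of the path with its minorant. The sketch below is ours and purely illustrative; for a continuous path the condition \(f(\mathbf s ) \le f(\mathbf s -)\) holds automatically, and for unlucky seeds the relevant times may fall outside the finite window.

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, n = 2.0, 20001
t = np.linspace(-10.0, 10.0, n)
dt = t[1] - t[0]
f = np.cumsum(rng.normal(0.0, np.sqrt(dt), n))
f -= f[n // 2]                      # recentre so that f(0) = 0

i0 = n // 2 + 1                     # first grid index with t > 0

# s: first t > 0 with f(t) - alpha*t <= inf{f(u) - alpha*u : u <= 0}
level = (f[: n // 2 + 1] - alpha * t[: n // 2 + 1]).min()
s_idx = i0 + np.flatnonzero(f[i0:] - alpha * t[i0:] <= level)[0]

# e: first argmin over t >= s of f(t) + alpha*(t - s)
e_idx = s_idx + np.argmin(f[s_idx:] + alpha * (t[s_idx:] - t[s_idx]))

# d: first t > 0 at which f meets its minorant (two-pass grid minorant)
m = f.copy()
for i in range(1, n):
    m[i] = min(m[i], m[i - 1] + alpha * dt)
for i in range(n - 2, -1, -1):
    m[i] = min(m[i], m[i + 1] + alpha * dt)
d_idx = i0 + np.flatnonzero(np.isclose(m[i0:], f[i0:], atol=1e-9))[0]

print(t[s_idx], t[e_idx], t[d_idx])  # expect t[e_idx] == t[d_idx] up to grid effects
```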

Corollary 9.5

Let \(f: \mathbb{R }\rightarrow \mathbb{R }\) be a càdlàg function with \(\alpha \)-Lipschitz minorant \(m : \mathbb{R }\rightarrow \mathbb{R }\). Define \(\mathbf d ,\,\mathbf s \), and \(\mathbf e \) as in Lemma 9.4. Assume that \(f(\mathbf s ) \le f(\mathbf s -)\), so that \(\mathbf e =\mathbf d \). Put \(\mathbf g := \sup \{ t<0 : f(t) \wedge f(t-) = m(t) \}\) and assume that \(f(0) \wedge f(0-) > m(0)\), so that \(f(t) \wedge f(t-) > m(t)\) for \(t \in (\mathbf g ,\mathbf d )\). Let \(\mathbf t := (f(\mathbf d ) \wedge f(\mathbf d -) - f(\mathbf g ) \wedge f(\mathbf g -) + \alpha (\mathbf d + \mathbf g ))/(2 \alpha )\) be the point in \([\mathbf g , \mathbf d ]\) at which the function \(m\) achieves its maximum. Then, \(\mathbf g \le \mathbf t \le \mathbf s \le \mathbf d \). Moreover, if \(\mathbf t =\mathbf s \), then \(\mathbf t =\mathbf s =\mathbf d \).

Proof

We first show that \(\mathbf g \le \mathbf t \le \mathbf s \le \mathbf d \). We certainly have \(\mathbf g \le \mathbf s \le \mathbf d \) and \(\mathbf g \le \mathbf t \le \mathbf d \), so it suffices to prove that \(\mathbf t \le \mathbf s \). Because \(\mathbf s \ge 0\), this is clear when \(\mathbf t < 0\), so it further suffices to consider the case where \(\mathbf t \ge 0\). Suppose, then, seeking a contradiction, that \(\mathbf g \le 0 \le \mathbf s < \mathbf t \le \mathbf d \).

From Lemma 9.3 we have \(m(u) = f(\mathbf g ) \wedge f(\mathbf g -) + \alpha (u - \mathbf g )\) for \(\mathbf g \le u \le \mathbf t \) and \(f(u) \wedge f(u-) \ge f(\mathbf g ) \wedge f(\mathbf g -) + \alpha (u - \mathbf g )\) for \(u \le \mathbf t \). Therefore, \(\inf \{f(u) \wedge f(u-) - \alpha u : u \le 0\} \ge f(\mathbf g ) \wedge f(\mathbf g -) - \alpha \mathbf g \), and hence \(\inf \{f(u) \wedge f(u-) - \alpha u : u \le 0\} = f(\mathbf g ) \wedge f(\mathbf g -) - \alpha \mathbf g \). Now, by definition of \(\mathbf s \), \(f(\mathbf s ) \wedge f(\mathbf s- ) - \alpha \mathbf s \le \inf \{f(u) \wedge f(u-) - \alpha u : u \le 0\}\), and so

$$\begin{aligned} f(\mathbf s ) \wedge f(\mathbf s- )&\le f(\mathbf g ) \wedge f(\mathbf g -) - \alpha \mathbf g + \alpha \mathbf s \\&= f(\mathbf g ) \wedge f(\mathbf g -) + \alpha (\mathbf s - \mathbf g ) \\&= m(\mathbf s ), \end{aligned}$$

which contradicts \(\mathbf d = \inf \{u > 0 : f(u) \wedge f(u-) = m(u)\} = \inf \{u > 0 : f(u) \wedge f(u-) \le m(u)\}\) unless \(\mathbf s =0\) and \(f(0) \wedge f(0-) = m(0)\), but we have assumed that this is not the case.

A similar argument shows that if \(\mathbf t =\mathbf s \), then \(\mathbf t =\mathbf s =\mathbf d \). \(\square \)
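
The ordering \(\mathbf g \le \mathbf t \le \mathbf s \le \mathbf d \) can be observed on the same discretized path; a short sketch (ours) reusing the objects from the sketch after Lemma 9.4:

```python
import numpy as np

# Reuses t, f, m, alpha, i0, s_idx, d_idx from the sketch after Lemma 9.4.
g_idx = np.flatnonzero(np.isclose(m[:i0], f[:i0], atol=1e-9))[-1]  # last contact at t <= 0
t_peak = (f[d_idx] - f[g_idx] + alpha * (t[d_idx] + t[g_idx])) / (2 * alpha)
print(t[g_idx], t_peak, t[s_idx], t[d_idx])  # expect g <= t <= s <= d
```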