# Solutions to complex smoothing equations

## Abstract

We consider smoothing equations of the form

\begin{aligned} X ~\mathop {=}\limits ^{\text {law}}~ \sum _{j \ge 1} T_j X_j + C \end{aligned}

where $$(C,T_1,T_2,\ldots )$$ is a given sequence of random variables and $$X_1,X_2,\ldots$$ are independent copies of X that are also independent of the sequence $$(C,T_1,T_2,\ldots )$$. The focus is on complex smoothing equations, i.e., the case where the random variables $$X, C,T_1,T_2,\ldots$$ are complex-valued; more general multivariate smoothing equations, in which the $$T_j$$ are similarity matrices, are considered as well. Under mild assumptions on $$(C,T_1,T_2,\ldots )$$, we describe the laws of all random variables X solving the above smoothing equation. These are the distributions of randomly shifted and stopped Lévy processes satisfying a certain invariance property called $$(U,\alpha )$$-stability, which is related to operator (semi)stability. The results are applied to various examples from applied probability and statistical physics.
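As a toy numerical illustration (not taken from the paper), the simplest real instance of the smoothing equation, with weights $$T_1 = T_2 = 1/\sqrt{2}$$ and $$C = 0$$, has the centered normal laws as fixed points. Iterating the smoothing transform on samples from an arbitrary mean-zero, variance-one law exhibits the attraction to the fixed point:

```python
import numpy as np

# Toy sketch (not from the paper): iterate the smoothing transform for
# X =(law) (X_1 + X_2)/sqrt(2), i.e. T_1 = T_2 = 1/sqrt(2), C = 0.
# The variance is preserved at every step, while the excess kurtosis is
# halved, so the empirical law drifts towards the normal fixed point.
rng = np.random.default_rng(0)

def smoothing_step(x, rng):
    """One application of the map: combine two (approximately) independent copies."""
    x1 = rng.permutation(x)          # stand-in for an independent copy X_1
    x2 = rng.permutation(x)          # stand-in for an independent copy X_2
    return (x1 + x2) / np.sqrt(2.0)  # weights T_1 = T_2 = 1/sqrt(2)

# start far from normal: uniform on (-sqrt(3), sqrt(3)) has variance 1
x = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=200_000)
for _ in range(12):
    x = smoothing_step(x, rng)

variance = x.var()
excess_kurtosis = ((x - x.mean()) ** 4).mean() / x.var() ** 2 - 3.0
```

After a dozen iterations the excess kurtosis of the starting uniform law (initially $$-1.2$$) is reduced by a factor of roughly $$2^{12}$$, consistent with the normal law being the fixed point.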


1. In slight abuse of language, we will sometimes call a random variable X a solution if the distribution of X is a solution to (1.1).

2. By a $$d \times d$$ similarity matrix we mean a $$d \times d$$ matrix that can be written as a scale multiple of an orthogonal matrix.
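The similarity property can be checked numerically: M is a scale multiple of an orthogonal matrix exactly when $$M^{\mathsf{T}} M$$ is a positive multiple of the identity. A small sketch (the helper `is_similarity` is hypothetical, not from the paper):

```python
import numpy as np

# Hypothetical helper (illustration only): M is a similarity matrix,
# i.e. M = c * O with c > 0 and O orthogonal, iff M^T M = c^2 * I.
def is_similarity(M, tol=1e-10):
    M = np.asarray(M, dtype=float)
    G = M.T @ M                        # Gram matrix; equals c^2 * I for a similarity
    c2 = np.trace(G) / M.shape[0]      # candidate squared scale factor c^2
    return c2 > tol and np.allclose(G, c2 * np.eye(M.shape[0]), atol=tol)

theta = 0.7
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
similar = 3.0 * rotation               # scale multiple of an orthogonal matrix
shear = np.array([[1.0, 1.0],
                  [0.0, 1.0]])         # volume-preserving but not a similarity
```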

## References

1. Aldous, D.J., Bandyopadhyay, A.: A survey of max-type recursive distributional equations. Ann. Appl. Probab. 15(2), 1047–1110 (2005)

2. Alsmeyer, G., Biggins, J.D., Meiners, M.: The functional equation of the smoothing transform. Ann. Probab. 40(5), 2069–2105 (2012)

3. Alsmeyer, G., Dyszewski, P.: Thin tails of fixed points of the nonhomogeneous smoothing transform. ArXiv e-prints (2015)

4. Alsmeyer, G., Kuhlbusch, D.: Double martingale structure and existence of $$\phi$$-moments for weighted branching processes. Münster J. Math. 3, 163–212 (2010)

5. Alsmeyer, G., Meiners, M.: Fixed points of inhomogeneous smoothing transforms. J. Difference Equ. Appl. 18(8), 1287–1304 (2012)

6. Alsmeyer, G., Meiners, M.: Fixed points of the smoothing transform: two-sided solutions. Probab. Theory Related Fields 155(1–2), 165–199 (2013)

7. Araman, V.F., Glynn, P.W.: Tail asymptotics for the maximum of perturbed random walk. Ann. Appl. Probab. 16(3), 1411–1431 (2006)

8. Asmussen, S.: Applied Probability and Queues. Applications of Mathematics (Stochastic Modelling and Applied Probability), vol. 51, 2nd edn. Springer, New York (2003)

9. Athreya, K.B., McDonald, D.R., Ney, P.E.: Limit theorems for semi-Markov processes and renewal theory for Markov chains. Ann. Probab. 6(5), 788–797 (1978)

10. Athreya, K.B., Ney, P.E.: A new approach to the limit theory of recurrent Markov chains. Trans. Am. Math. Soc. 245, 493–501 (1978)

11. Barral, J.: Generalized vector multiplicative cascades. Adv. Appl. Probab. 33(4), 874–895 (2001)

12. Bassetti, F., Ladelli, L.: Self-similar solutions in one-dimensional kinetic models: a probabilistic view. Ann. Appl. Probab. 22(5), 1928–1961 (2012)

13. Bassetti, F., Ladelli, L., Matthes, D.: Central limit theorem for a class of one-dimensional kinetic equations. Probab. Theory Related Fields 150(1–2), 77–109 (2011)

14. Bassetti, F., Ladelli, L., Matthes, D.: Infinite energy solutions to inelastic homogeneous Boltzmann equations. Electron. J. Probab. 20(89), 1–34 (2015)

15. Bassetti, F., Matthes, D.: Multi-dimensional smoothing transformations: existence, regularity and stability of fixed points. Stoch. Process. Appl. 124(1), 154–198 (2014)

16. Bertoin, J.: Random Fragmentation and Coagulation Processes. Cambridge Studies in Advanced Mathematics, vol. 102. Cambridge University Press, Cambridge (2006)

17. Bhattacharya, R.N.: Speed of convergence of the $$n$$-fold convolution of a probability measure on a compact group. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 25, 1–10 (1972/73)

18. Biggins, J.D.: Martingale convergence in the branching random walk. J. Appl. Probab. 14(1), 25–37 (1977)

19. Biggins, J.D.: Uniform convergence of martingales in the branching random walk. Ann. Probab. 20(1), 137–151 (1992)

20. Biggins, J.D.: Lindley-type equations in the branching random walk. Stoch. Process. Appl. 75(1), 105–133 (1998)

21. Biggins, J.D., Kyprianou, A.E.: Seneta–Heyde norming in the branching random walk. Ann. Probab. 25(1), 337–360 (1997)

22. Biggins, J.D., Kyprianou, A.E.: Fixed points of the smoothing transform: the boundary case. Electron. J. Probab. 10(17), 609–631 (2005) (electronic)

23. Bobylev, A.V., Cercignani, C., Gamba, I.M.: On the self-similar asymptotics for generalized nonlinear kinetic Maxwell models. Commun. Math. Phys. 291(3), 599–644 (2009)

24. Buraczewski, D., Damek, E., Guivarc’h, Y.: Convergence to stable laws for a class of multidimensional stochastic recursions. Probab. Theory Related Fields 148(3–4), 333–402 (2010)

25. Buraczewski, D., Damek, E., Guivarc’h, Y., Hulanicki, A., Urban, R.: Tail-homogeneity of stationary measures for some multidimensional stochastic recursions. Probab. Theory Related Fields 145(3–4), 385–420 (2009)

26. Buraczewski, D., Damek, E., Mentemeier, S., Mirek, M.: Heavy tailed solutions of multivariate smoothing transforms. Stoch. Process. Appl. 123(6), 1947–1986 (2013)

27. Caliebe, A.: Symmetric fixed points of a smoothing transformation. Adv. Appl. Probab. 35(2), 377–394 (2003)

28. Chakraborti, A., Toke, I.M., Patriarca, M., Abergel, F.: Econophysics review: II. Agent-based models. Quant. Finance 11(7), 1013–1041 (2011)

29. Chauvin, B., Gardy, D., Pouyanne, N., Ton-That, D.-H.: Burns. ArXiv e-prints (2014)

30. Chauvin, B., Liu, Q., Pouyanne, N.: Limit distributions for multitype branching processes of $$m$$-ary search trees. Ann. Inst. Henri Poincaré Probab. Stat. 50(2), 628–654 (2014)

31. Chauvin, B., Pouyanne, N.: $$m$$-ary search trees when $$m\ge 27$$: a strong asymptotics for the space requirements. Random Struct. Algorithms 24(2), 133–154 (2004)

32. Chow, Y.S., Teicher, H.: Probability Theory: Independence, Interchangeability, Martingales. Springer Texts in Statistics, 3rd edn. Springer, New York (1997)

33. Cordier, S., Pareschi, L., Toscani, G.: On a kinetic model for a simple market economy. J. Stat. Phys. 120(1–2), 253–277 (2005)

34. Deitmar, A., Echterhoff, S.: Principles of Harmonic Analysis. Universitext. Springer, New York (2009)

35. Dolera, E., Regazzini, E.: Proof of a McKean conjecture on the rate of convergence of Boltzmann-equation solutions. Probab. Theory Related Fields 160(1–2), 315–389 (2014)

36. Durrett, R., Liggett, T.M.: Fixed points of the smoothing transformation. Z. Wahrsch. Verw. Gebiete 64(3), 275–301 (1983)

37. Falconer, K.: Fractal Geometry: Mathematical Foundations and Applications. John Wiley and Sons Ltd, Chichester (1990)

38. Fill, J.A., Kapur, N.: The space requirement of $$m$$-ary search trees: distributional asymptotics for $$m \ge 27$$. In: Proc. 7th Iranian Statistical Conference (2004). Invited paper

39. Guivarc’h, Y.: Extension d’un théorème de Choquet-Deny à une classe de groupes non abéliens. In: Séminaire KGB sur les Marches Aléatoires (Rennes, 1971–1972), pp. 41–59. Astérisque 4. Soc. Math. France, Paris (1973)

40. Hazod, W., Siebert, E.: Stable Probability Measures on Euclidean Spaces and on Locally Compact Groups: Structural Properties and Limit Theorems. Mathematics and its Applications, vol. 531. Kluwer Academic Publishers, Dordrecht (2001)

41. Hewitt, E., Ross, K.A.: Abstract Harmonic Analysis. Vol. I: Structure of Topological Groups, Integration Theory, Group Representations. Die Grundlehren der mathematischen Wissenschaften, Bd. 115. Academic Press, New York; Springer, Berlin (1963)

42. Holley, R., Liggett, T.M.: Generalized potlatch and smoothing processes. Z. Wahrsch. Verw. Gebiete 55(2), 165–195 (1981)

43. Iksanov, A.: Elementary fixed points of the BRW smoothing transforms with infinite number of summands. Stoch. Process. Appl. 114(1), 27–50 (2004)

44. Iksanov, A., Meiners, M.: Fixed points of multivariate smoothing transforms with scalar weights. ALEA Lat. Am. J. Probab. Math. Stat. 12(1), 69–114 (2015)

45. Iksanov, A., Meiners, M.: Rate of convergence in the law of large numbers for supercritical general multi-type branching processes. Stoch. Process. Appl. 125(2), 708–738 (2015)

46. Jagers, P.: General branching processes as Markov fields. Stoch. Process. Appl. 32(2), 183–212 (1989)

47. Janson, S.: Functional limit theorems for multitype branching processes and generalized Pólya urns. Stoch. Process. Appl. 110(2), 177–245 (2004)

48. Janson, S., Neininger, R.: The size of random fragmentation trees. Probab. Theory Related Fields 142(3–4), 399–442 (2008)

49. Kac, M.: Foundations of kinetic theory. In: Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, 1954–1955, vol. III, pp. 171–197. University of California Press, Berkeley and Los Angeles (1956)

50. Kahane, J.-P.: Sur le chaos multiplicatif. Ann. Sci. Math. Québec 9(2), 105–150 (1985)

51. Kahane, J.-P., Peyrière, J.: Sur certaines martingales de Benoit Mandelbrot. Adv. Math. 22(2), 131–145 (1976)

52. Kallenberg, O.: Random Measures, 4th edn. Akademie-Verlag, Berlin; Academic Press, London (1986)

53. Kallenberg, O.: Foundations of Modern Probability. Probability and its Applications, 2nd edn. Springer, New York (2002)

54. Knape, M., Neininger, R.: Pólya urns via the contraction method. Comb. Probab. Comput. 23(6), 1148–1186 (2014)

55. Kolmogorov, A.N.: Über das logarithmisch normale Verteilungsgesetz der Dimensionen der Teilchen bei Zerstückelung. C. R. (Doklady) Acad. Sci. URSS (N. S.) 31, 99–101 (1941)

56. Kyprianou, A.E.: Martingale convergence and the stopped branching random walk. Probab. Theory Related Fields 116(3), 405–419 (2000)

57. Lacoin, H., Rhodes, R., Vargas, V.: Complex Gaussian multiplicative chaos. Commun. Math. Phys. 337(2), 569–632 (2015)

58. Lang, S.: Real and Functional Analysis. Graduate Texts in Mathematics, vol. 142, 3rd edn. Springer, New York (1993)

59. Lew, W., Mahmoud, H.M.: The joint distribution of elastic buckets in multiway search trees. SIAM J. Comput. 23(5), 1050–1074 (1994)

60. Luczak, A.: Centering problems for probability measures on finite-dimensional vector spaces. J. Theor. Probab. 23(3), 770–791 (2010)

61. Lyons, R.: A simple path to Biggins’ martingale convergence for branching random walk. In: Classical and Modern Branching Processes (Minneapolis, MN, 1994). IMA Vol. Math. Appl., vol. 84, pp. 217–221. Springer, New York (1997)

62. Madaule, T., Rhodes, R., Vargas, V.: Continuity estimates for the complex cascade model on the phase boundary. ArXiv e-prints (2015)

63. Mahmoud, H.M.: Evolution of Random Search Trees. Wiley-Interscience Series in Discrete Mathematics and Optimization. John Wiley & Sons, New York (1992)

64. Matthes, D., Toscani, G.: On steady distributions of kinetic models of conservative economies. J. Stat. Phys. 130(6), 1087–1117 (2008)

65. Mentemeier, S.: The fixed points of the multivariate smoothing transform. Probab. Theory Related Fields 164(1), 401–458 (2016)

66. Meyn, S.P., Tweedie, R.L.: Markov Chains and Stochastic Stability. Communications and Control Engineering Series. Springer, London (1993)

67. Nerman, O.: On the convergence of supercritical general (C–M–J) branching processes. Z. Wahrsch. Verw. Gebiete 57(3), 365–395 (1981)

68. Nummelin, E.: A splitting technique for Harris recurrent Markov chains. Z. Wahrsch. Verw. Gebiete 43(4), 309–318 (1978)

69. Nummelin, E., Tuominen, P.: The rate of convergence in Orey’s theorem for Harris recurrent Markov chains with applications to renewal theory. Stoch. Process. Appl. 15(3), 295–311 (1983)

70. Pouyanne, N.: Classification of large Pólya-Eggenberger urns with regard to their asymptotics. In: 2005 International Conference on Analysis of Algorithms, Discrete Math. Theor. Comput. Sci. Proc., AD, pp. 275–285 (electronic). Assoc. Discrete Math. Theor. Comput. Sci., Nancy (2005)

71. Rhodes, R., Vargas, V.: Gaussian multiplicative chaos and applications: a review. Probab. Surv. 11, 315–392 (2014)

72. Rösler, U., Topchiĭ, V.A., Vatutin, V.A.: Convergence conditions for branching processes with particles having weight. Diskret. Mat. 12(1), 7–23 (2000)

73. Sato, K.: Lévy Processes and Infinitely Divisible Distributions. Cambridge Studies in Advanced Mathematics, vol. 68. Cambridge University Press, Cambridge (1999). Translated from the 1990 Japanese original, revised by the author

74. Villani, C.: A review of mathematical topics in collisional kinetic theory. In: Friedlander, S., Serre, D. (eds.) Handbook of Mathematical Fluid Dynamics, vol. I, pp. 71–305. North-Holland, Amsterdam (2002)

## Acknowledgments

The research has partly been carried out during visits of the authors to the Institute of Mathematical Statistics in Münster. The authors would like to express their gratitude for the hospitality. We further thank Vincent Vargas for interesting discussions on the subject.

## Author information


### Corresponding author

Correspondence to Sebastian Mentemeier.

Research supported by short visit Grant 6172 from the European Science Foundation (ESF) for the activity entitled ‘Random Geometry of Large Interacting Systems and Statistical Physics’. Research of M. M. was partially supported by DFG Grant ME 3625/3-1.

## Appendices

### Appendix 1: The Choquet–Deny lemma

Given a probability measure $$\mu$$ on the similarity group $${\mathbb {S}}{} \textit{(d)}$$, let U be the closed subgroup generated by the support of $$\mu$$—if $$\mu$$ is the step distribution of the associated multiplicative random walk $$(L_n)_{n \in {\mathbb {N}}_0}$$, see Sect. 3.4, then $$U= {\mathbb {U}}$$.

### Lemma 5.1

Let $$\psi :U \rightarrow {\mathbb {R}}$$ be measurable and bounded. If

\begin{aligned} \int _{U} \psi (ug) \, \mu (\mathrm {d} u) = \psi (g) \end{aligned}
(5.1)

for all $$g \in U$$, then $$\psi$$ is constant $$\mu$$-a.e.

This is a consequence of [39, Theorem 3]. For the reader’s convenience, we state that theorem and show how the lemma can be derived from it.

In the following, let G be a locally compact, separable and unimodular group. A probability measure $$\mu$$ on G is called aperiodic if the closed subgroup generated by the support of $$\mu$$ equals G. Write $$[G,G]$$ for the commutator subgroup, i.e., the group generated by the commutators $$[a,b] :=(ba)^{-1} ab$$, $$a,b \in G$$, and $$\overline{[G,G]}$$ for its closure. Let $$H \subsetneq G$$ be a normal subgroup of G. Then G acts on H by conjugation (inner automorphisms), i.e.,

\begin{aligned} g.h ~:=~ g^{-1} h g, \quad g \in G, h \in H. \end{aligned}

For $$A \subseteq H$$ write

\begin{aligned} A^G ~:=~ \{g.a:\, g \in G, a \in A\}. \end{aligned}

The action of G on H is said to be compact if for each compact $$A \subseteq H {\setminus } \{1_G\}$$, where $$1_G$$ denotes the unit element of G, the set $$A^G$$ is relatively compact, i.e., has compact closure. With this terminology, [39, Theorem 3] reads as follows:

### Theorem 5.2

Let $$\mu$$ be an aperiodic probability measure on G. If $$\overline{[G,G]}$$ is Abelian or compact and if the action of G on $$\overline{[G,G]}$$ is compact, then the only bounded, measurable functions $$\psi$$ satisfying

\begin{aligned} \int \psi (u g) \, \mu (\mathrm {d}u) = \psi (g) \quad \text {for all } g \in G \end{aligned}

are the $$\mu$$-almost everywhere constant functions.

Following the proof of [25, Theorem A.1], we show how Theorem 5.2 applies to the situation here, i.e., $$G=U$$ is the closed subgroup generated by the support of $$\mu$$. Then $$\mu$$ is aperiodic on U by the very definition of U. Referring to Proposition 4.1, there is a closed subgroup $$\textit{A}_{U}$$ of U, which is isomorphic to a closed subgroup of the multiplicative group $${\mathbb {R}}_>$$, and a normal compact subgroup $$\textit{C}_{U}=U \cap {\mathbb {O}}{} \textit{(d)}$$, such that $$U/\textit{C}_{U}\simeq \textit{A}_{U}$$. The groups $$\textit{A}_{U}$$ and $$\textit{C}_{U}$$ (as a compact group, see [34, Theorem 1.4.1]) are unimodular and hence, by the Fubini formula for the Haar measure on $$U=\textit{A}_{U}\textit{C}_{U}$$, [34, Proposition 1.5.5], U is unimodular as well.

Clearly, the commutator subgroup of U is a subgroup of $${\mathbb {O}}{} \textit{(d)}$$, hence its closure is compact. Moreover, for any compact $$A \subseteq \overline{[U,U]} {\setminus } \{\textit{I}_{\textit{d}}\}$$, $$A^U$$ is again a subset of $${\mathbb {O}}{} \textit{(d)}$$ since

\begin{aligned} u.[a,b] = u^{-1} [a,b]\, u = o^{-1} [a,b]\, o, \end{aligned}

where $$u = \left\| u \right\| o$$ with $$o \in \textit{C}_{U}\subseteq {\mathbb {O}}{} \textit{(d)}$$. Hence, $$A^U$$ is relatively compact as a subset of a compact set.
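On the circle group, the Choquet–Deny conclusion can be illustrated numerically (a toy setup assumed here, not from the paper): take $$\mu = (\delta _{\theta _1} + \delta _{\theta _2})/2$$ with $$\theta _2 - \theta _1$$ an irrational multiple of $$\pi$$, so that the closed group generated by the support of $$\mu$$ is the whole circle. The averaging operator $$(P\psi )(g) = \int \psi (u + g)\, \mu (\mathrm{d}u)$$ then multiplies the k-th Fourier mode of $$\psi$$ by $$m_k = (e^{\mathrm{i}k\theta _1} + e^{\mathrm{i}k\theta _2})/2$$ with $$|m_k| < 1$$ for $$k \ne 0$$, so iterating P flattens any trigonometric polynomial to its mean, in line with the lemma:

```python
import numpy as np

# Toy illustration of Choquet-Deny on the circle R/2piZ (assumed setup, not
# from the paper). The averaging operator acts diagonally in Fourier space:
# mode k is multiplied by m_k = (e^{ik*theta1} + e^{ik*theta2})/2, and
# |m_k| = |cos(k*(theta2-theta1)/2)| < 1 for every integer k != 0.
theta1, theta2 = 1.0, 2.0
grid = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)

def iterate_averaging(modes, n_steps):
    """modes: {k: complex coefficient}; returns P^n psi evaluated on the grid."""
    psi = np.zeros_like(grid)
    for k, c in modes.items():
        m_k = 0.5 * (np.exp(1j * k * theta1) + np.exp(1j * k * theta2))
        psi = psi + np.real(c * m_k ** n_steps * np.exp(1j * k * grid))
    return psi

modes = {0: 2.0, 1: 1.0, 3: 0.5j}       # psi(g) = 2 + Re(e^{ig}) + Re(0.5i e^{3ig})
flat = iterate_averaging(modes, 400)    # many applications of the averaging operator
deviation = np.max(np.abs(flat - 2.0))  # sup-distance to the constant mean value
```

Only exactly harmonic bounded functions need to be constant in the lemma; the iteration above merely visualizes why no non-constant bounded function can satisfy (5.1).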

### Appendix 2: Evaluating the Lévy integrals

In this section, we compute

\begin{aligned} I(x) = \int _{{\mathbb {R}}^d {\setminus } \{0\}} \left( e^{\text {i}\left\langle x,y \right\rangle } - 1 - \text {i}\left\langle x,y \right\rangle \mathbbm {1}_{[0,1]}(\left| y \right| ) \right) \bar{\nu }(\text {d} \textit{y}), \quad x \in {\mathbb {R}}^d \end{aligned}
(6.1)

for a deterministic $$(U,\alpha )$$-invariant Lévy measure $$\bar{\nu }$$, i.e. satisfying (3.29).

### Lemma 6.1

Let $$\bar{\nu }$$ be a deterministic Lévy measure satisfying (3.29) for some $$1 \ne \alpha \in (0,2)$$, and define I(x) via (6.1). Then, for $$0 \not = x \in {\mathbb {R}}^d$$,

\begin{aligned} I(x) = - |x|^\alpha \eta _1^\alpha (x) + \text {i}|x|^\alpha \eta _2^\alpha (x) + \text {i}\left\langle x,\gamma ^\alpha \right\rangle , \end{aligned}
(6.2)

with functions $$\eta _1^\alpha ,\eta _2^\alpha$$ defined in (1.19) and (1.20), respectively. $$\eta _1^\alpha$$ and $$\eta _2^\alpha$$ are bounded real functions satisfying $$\eta _j^\alpha (u^\mathsf {T}x) = \eta _j^\alpha (x)$$ for all $$u \in {\mathbb {U}}$$, $$x \in {\mathbb {R}}^d{\setminus }\{0\}$$, $$j=1,2$$, and $$\eta _1^\alpha$$ is nonnegative. The vector $$\gamma ^\alpha$$ satisfies $$o \gamma ^\alpha = \gamma ^\alpha$$ for all $$o \in \textit{C}_{{\mathbb {U}}}$$.

### Proof

Fix $$0 \not = x \in {\mathbb {R}}^d$$ and notice that I(x) is finite since $$\bar{\nu }$$ is a Lévy measure. Further, according to Proposition 4.3, there is a $$\textit{C}_{{\mathbb {U}}}$$-invariant finite measure $$\rho$$ on $$\textit{S}_{{\mathbb {U}}}$$ such that (4.4) holds.

We set

\begin{aligned} \gamma ^\alpha :={\left\{ \begin{array}{ll} -\displaystyle \int y \mathbbm {1}_{[0,1]}(|y|) \, \bar{\nu }(\text {d} \textit{y}) = - \int _{\textit{A}_{{\mathbb {U}}}} \int _{\textit{S}_{{\mathbb {U}}}} ax \mathbbm {1}_{[0,1]}(|ax|) \left\| a \right\| ^{-\alpha } \rho (\text {d} \textit{x}) \, \textit{H}_{\textit{A}_{{\mathbb {U}}}}(\text {d} \textit{a}) &{} \text {if } \alpha < 1, \\ -\displaystyle \int _{\textit{A}_{{\mathbb {U}}}} \int _{\textit{S}_{{\mathbb {U}}}} ax \mathbbm {1}_{(1,\infty )}(|ax|) \left\| a \right\| ^{-\alpha } \rho (\text {d} \textit{x}) \, \textit{H}_{\textit{A}_{{\mathbb {U}}}}(\text {d} \textit{a}) &{} \text {if } \alpha \in (1,2). \end{array}\right. } \end{aligned}

The asserted $$\textit{C}_{{\mathbb {U}}}$$-invariance follows from the $$\textit{C}_{{\mathbb {U}}}$$-invariance of $$\rho$$.

Recalling the definitions

\begin{aligned} \eta _1^\alpha (x)&= \frac{1}{|x|^\alpha } \int \big (1 - \cos (\left\langle x,y \right\rangle ) \big ) \nu ^\alpha (\text {d} \textit{y}) \\ \eta _2^\alpha (x)&= \frac{1}{|x|^\alpha } \int \big (\sin (\left\langle x,y \right\rangle ) - \mathbbm {1}_{\{\alpha >1\}}\left\langle x,y \right\rangle \big ) \nu ^\alpha (\text {d} \textit{y}) , \end{aligned}

(6.2) holds, and it remains to prove boundedness and invariance properties of $$\eta _i^\alpha$$, $$i=1,2$$. Let $$u \in {\mathbb {U}}$$, then, using (3.29),

\begin{aligned} \eta _1^\alpha (u^\mathsf {T}x)&= \frac{1}{\left\| u \right\| ^\alpha |x|^\alpha } \int \big (1 - \cos (\left\langle x,uy \right\rangle ) \big ) \nu ^\alpha (\text {d} \textit{y}) \\&= \frac{\left\| u \right\| ^\alpha }{\left\| u \right\| ^\alpha |x|^\alpha } \int \big (1 - \cos (\left\langle x,y \right\rangle ) \big ) \nu ^\alpha (\text {d} \textit{y}) = \eta _1^\alpha (x), \end{aligned}

and the invariance of $$\eta _2^\alpha$$ is proved along the same lines. This implies in particular that the continuous functions $$\eta _i^\alpha$$ are determined by their respective values on the relatively compact set $$\textit{S}_{{\mathbb {U}}}$$, hence the asserted boundedness follows. $$\square$$

For $$\alpha =1$$, we can compute a meaningful expression for $$\eta ^1(x)$$ only for $$x \in E_1(Q^\mathsf {T})$$. Notice that this implies $$x \in E_1(t^Q)$$ for all $$t \ge 0$$. Hence, using formula (4.6) for $$\bar{\nu }$$, we obtain

\begin{aligned} \eta ^1(x)&= \int _{{\mathbb {S}}^{d-1}} \int _{{\mathbb {R}}_>} \bigg ( e^{\text {i}\left\langle (t^{Q})^\mathsf {T}x,s \right\rangle } - 1 - \text {i}\left\langle (t^Q)^\mathsf {T}x,s \right\rangle \mathbbm {1}_{\{|(t^Q)^\mathsf {T}s| \le 1\}} \bigg ) t^{-2}\, \text {d} \textit{t}\, \rho (\text {d} \textit{s}) \nonumber \\&= \int _{{\mathbb {S}}^{d-1}} \int _{{\mathbb {R}}_>} \bigg ( e^{\text {i}\left\langle tx,s \right\rangle } - 1 - \text {i}\left\langle tx,s \right\rangle \mathbbm {1}_{\{t \le 1\}} \bigg ) t^{-2}\, \text {d} \textit{t}\, \rho (\text {d} \textit{s}) \nonumber \\&= -\int _{{\mathbb {S}}^{d-1}} \bigg ( \left| \left\langle x,s \right\rangle \right| + \text {i}\frac{2}{\pi }\left\langle x,s \right\rangle \log |\left\langle x,s \right\rangle | \bigg ) \rho (\text {d} \textit{s}) + \text {i}\left\langle \gamma ,x \right\rangle \end{aligned}
(6.3)

for a suitable $$\gamma \in {\mathbb {R}}^d$$, see [73, Theorem 14.10] for details.

### Lemma 7.1

Let $$(S_n)_{n \in {\mathbb {N}}_0}$$ be a random walk with i.i.d. increments, $$S_0=0$$, $${\mathbb {E}}[S_1]>0$$ and $${\mathbb {E}}[(S_1^+)^2]<\infty$$. Let $$\tau (t) :=\inf \{n \in {\mathbb {N}}_0: S_n > t\}$$, $$t \ge 0$$. Then, for every $$0<a<1$$,

\begin{aligned} t {\mathbb {P}}(S_{\tau (t)-1} < at) ~\rightarrow ~ 0 \quad \text {as } t \rightarrow \infty . \end{aligned}
(7.1)

### Proof

$${\mathbb {E}}[(S_1^+)^2]<\infty$$ implies $$\lim _{t \rightarrow \infty } t^2{\mathbb {P}}(S_1 > t) = 0$$. Further, it is known from standard random walk theory that $$\lim _{t\rightarrow \infty } t^{-1} {\mathbb {E}}[\tau (t)] = {\mathbb {E}}[S_1]^{-1}$$. Consequently, setting, for $$n \in {\mathbb {N}}$$, $$A_n :=\{S_0 \le t, \ldots , S_{n-2} \le t, S_{n-1} < at\}$$, we have

\begin{aligned} t {\mathbb {P}}(S_{\tau (t)-1} < at) &= t \sum _{n \ge 1} {\mathbb {P}}(A_n \cap \{S_{n}>t\}) \le t \sum _{n \ge 1} {\mathbb {P}}(A_n) {\mathbb {P}}(S_{1}>(1-a)t) \\ &\le t^2 {\mathbb {P}}(S_{1}>(1-a)t) \cdot \frac{1}{t} \sum _{n \ge 1} {\mathbb {P}}(\tau (t) \ge n) \\ &= t^2 {\mathbb {P}}(S_1 > (1-a)t) \cdot \frac{{\mathbb {E}}[\tau (t)]}{t} ~\rightarrow ~ 0 \qquad \text {as } t \rightarrow \infty , \end{aligned}

where the first inequality holds since $$A_n$$ depends only on $$S_0, \ldots , S_{n-1}$$ while, on $$A_n \cap \{S_n > t\}$$, the independent increment $$S_n - S_{n-1}$$ must exceed $$(1-a)t$$, and the second inequality uses $$A_n \subseteq \{\tau (t) \ge n\}$$.

$$\square$$
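The statement (7.1) can be sanity-checked by simulation under an assumed increment law (Gaussian steps, not from the paper), for which $${\mathbb {E}}[S_1] = 1 > 0$$ and $${\mathbb {E}}[(S_1^+)^2] < \infty$$:

```python
import numpy as np

# Monte Carlo check of (7.1) for an assumed increment law N(1, 1) (toy
# illustration, not from the paper). Beating level t while the previous
# position was still below a*t requires a single increment larger than
# (1-a)*t, which is extremely unlikely for Gaussian steps, so the estimate
# of t * P(S_{tau(t)-1} < a*t) should be (near) zero for moderately large t.
rng = np.random.default_rng(1)

def last_point_below(t, a, n_paths=5_000, rng=rng):
    hits = 0
    for _ in range(n_paths):
        s = 0.0
        while True:
            s_prev, s = s, s + rng.normal(1.0, 1.0)
            if s > t:                   # first passage time tau(t) reached
                hits += s_prev < a * t  # walk still below a*t just before?
                break
    return t * hits / n_paths           # estimate of t * P(S_{tau(t)-1} < a*t)

estimate = last_point_below(t=30.0, a=0.5)
```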

### A rate-of-convergence result in Markov renewal theory

Throughout this section, let $$((O_n, S_n))_{n \in {\mathbb {N}}_0}$$ be a random walk on $${\mathbb {O}}{} \textit{(d)}\times {\mathbb {R}}$$ with increment law $$\mu$$, say. The components $$O_n$$ and $$S_n$$ may be dependent. We assume that $$\mu$$ satisfies the minorization condition (M) and that

\begin{aligned} {\mathbb {E}}[S_1] > 0 \text { and } {\mathbb {E}}\big [ |S_1|^{ \ell + 1 + \delta } \big ] < \infty \end{aligned}
(7.2)

for some $$\ell > 0$$ and $$\delta > 0$$. Let $${\mathbb {O}}$$ be the closed subgroup of $${\mathbb {O}}{} \textit{(d)}$$ generated by the support of $$O_1$$.

### Proposition 7.2

Let $$\ell >0$$ and $$g: {\mathbb {R}}\rightarrow [0,\infty )$$ be a measurable function that is decreasing on $$[0,\infty )$$, with $$g(t)=0$$ for all $$t<0$$ and $$\lim _{t \rightarrow \infty } t^{ \ell +1+\epsilon } g(t) =0$$ for some $$0<\epsilon < (\delta \wedge 1)$$. Then

\begin{aligned} \lim _{t \rightarrow \infty } \sup _{\left| f \right| \le g} ~t^{ \ell +\epsilon } \left| {\mathbb {E}}\bigg [ \sum _{n=0}^\infty f(O_n, t-S_n) \bigg ] ~-~ \frac{1}{{\mathbb {E}}[S_1]} \int _0^\infty \int _{\mathbb {O}}f(o, r) \, H_{\mathbb {O}}(\text {d} \textit{o}) \, \text {d} \textit{r} \right| = 0. \end{aligned}

Here and below, $$\sup _{|f| \le g}$$ means the supremum over all measurable functions $$f : {\mathbb {O}}\times {\mathbb {R}}\rightarrow {\mathbb {R}}$$ satisfying $$\sup _{o \in {\mathbb {O}}} |f(o,x)| \le g(x)$$ for all $$x \in {\mathbb {R}}$$.
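In its simplest form (no orthogonal component), the limit in Proposition 7.2 is the classical renewal theorem, which can be checked numerically under assumed toy data (spread-out increments, not from the paper):

```python
import numpy as np

# Toy numerical check (not from the paper) of the renewal limit
#   E[ sum_n f(t - S_n) ] -> (1/E[S_1]) * integral_0^inf f(r) dr,  t -> inf,
# for a random walk with increments Uniform(0.5, 1.5), so E[S_1] = 1.
# With f(r) = e^{-r} * 1_{r >= 0}, the limit equals 1.
rng = np.random.default_rng(2)

def renewal_sum(t, n_paths=10_000, rng=rng):
    total = 0.0
    for _ in range(n_paths):
        s = 0.0
        while s <= t:
            total += np.exp(-(t - s))   # f(t - S_n) for the renewal point S_n
            s += rng.uniform(0.5, 1.5)  # next i.i.d. increment
    return total / n_paths

estimate = renewal_sum(t=30.0)          # should be close to the limit 1
```

Proposition 7.2 refines this classical statement in two directions: it adds the averaged orthogonal component $$O_n$$ and quantifies the rate $$t^{\ell + \epsilon }$$ of the convergence.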

The main ingredient in the proof will be the use of regeneration techniques for general state space Markov chains as developed in [10, 68]. We sum up what is needed in the subsequent lemma.

### Lemma 7.3

There is a measurable space $$(\varOmega , {\mathcal {G}})$$ together with a family of probability measures $$({\mathbb {P}}_{o,r})_{o \in {\mathbb {O}}, r \in {\mathbb {R}}}$$ and sequences of random variables $$((M_n, R_n))_{n \ge 0}$$ and $$(\tau _n)_{n \ge 1}$$ with

\begin{aligned} {\mathbb {P}}_{o,r}\big (((M_n, R_n))_{n \ge 0} \in \cdot \big ) = {\mathbb {P}}\big ( ((oO_n, r + S_n))_{n \ge 0} \in \cdot \big ) \end{aligned}
(7.3)

for all $$o \in {\mathbb {O}}$$, $$r \in {\mathbb {R}}$$. Further, the following properties hold:

1. There is a filtration $$({\mathcal {G}}_n)_{n \in {\mathbb {N}}}$$ such that $$((M_n, R_n))_{n \ge 0}$$ is a Markov chain adapted to $$({\mathcal {G}}_n)_{n \in {\mathbb {N}}}$$, and $$(\tau _n)_{n \ge 1}$$ is a sequence of predictable $$({\mathcal {G}}_n)_{n \in {\mathbb {N}}}$$-stopping times, i.e., $$\{\tau _n=k\} \in {\mathcal {G}}_{k-1}$$ for all $$k$$.

2. There are probability measures $$\nu$$ on $${\mathbb {O}}$$ and $$\eta$$ on $${\mathbb {R}}$$, $$\eta$$ having a bounded Lebesgue density, such that for all $$n \ge 1$$ and $$o \in {\mathbb {O}}$$, under $${\mathbb {P}}_o$$, $$((M_{\tau _n+k}, R_{\tau _n+k}-R_{\tau _n-1}))_{0 \le k \le \tau _{n+1}-\tau _n-1}$$ is independent of $$(M_0, R_0, \ldots , M_{\tau _n-1}, R_{\tau _n -1})$$ and has law $${\mathbb {P}}_{\nu \otimes \eta }(((M_{k}, R_{k}))_{k=0}^{\tau _{1}-1} \in \cdot )$$.

3. For each bounded measurable function $$f:{\mathbb {O}}\rightarrow {\mathbb {R}}$$,

\begin{aligned} {\mathbb {E}}_{\nu \otimes \eta } \bigg [ \sum _{n=0}^{\tau _1-1} f(M_n) \bigg ] = {\mathbb {E}}_{\nu \otimes \eta }[\tau _1] \, \int _{\mathbb {O}}f(o) \, H_{\mathbb {O}}(\text {d} \textit{o}). \end{aligned}
(7.4)
4. There are $$C, \lambda >0$$ such that $${\mathbb {P}}_{o,r}(\tau _1 > n) \le C e^{-\lambda n}$$ for all $$o \in {\mathbb {O}}$$, $$r \in {\mathbb {R}}$$ and $$n \in {\mathbb {N}}$$.

Here and below, we use the shorthand $${\mathbb {P}}_{\nu \otimes \eta } = \int _{{\mathbb {O}}} \int _{\mathbb {R}}{\mathbb {P}}_{o,r} \, \nu (\text {d} \textit{o}) \, \eta (\text {d} \textit{r})$$, the same notation for expectations, and sometimes omit the initial value $$R_0$$ if it is irrelevant.

Now we turn to the proof of Proposition 7.2.

### Proof (Proof of Proposition 7.2)

In order to prove this result, we will combine methods from  and . Below, we describe the steps of the proof and defer the technicalities to several lemmata. By splitting f into its positive and negative part, it suffices to consider nonnegative functions which are bounded by g.

Let $$((M_n, R_n))_{n \ge 0}$$ and $$(\tau _n)_{n \ge 0}$$ be as in Lemma 7.3. Define $$V_0 :=0$$ and

\begin{aligned} V_n ~:=~ \sum _{k=1}^n (R_{\tau _{k+1}-1}-R_{\tau _k-1}) = R_{\tau _{n+1}-1} - R_{\tau _1-1}. \end{aligned}

Then, under each $${\mathbb {P}}_{o,r}$$, $$(V_n)_{n \ge 1}$$ is a random walk with i.i.d. increments and increment law $${\mathbb {P}}_{\nu \otimes \eta }(R_{\tau _1-1} \in \cdot )$$ which is absolutely continuous since the law $$\eta$$ of $$R_0$$ is absolutely continuous. Since $${\mathbb {E}}[S_1]>0$$ and

\begin{aligned} 1 = {\mathbb {P}}(S_n/n \rightarrow {\mathbb {E}}[S_1] \text { as } n \rightarrow \infty ) = {\mathbb {P}}_{\textit{I}_{\textit{d}},0}(R_n/n \rightarrow {\mathbb {E}}[S_1] \text { as } n \rightarrow \infty ), \end{aligned}

we deduce that $${\mathbb {E}}_{\nu \otimes \eta }[V_1] = {\mathbb {E}}_{\nu \otimes \eta }[\tau _1]{\mathbb {E}}[S_1]$$.

Let f be nonnegative and set

\begin{aligned} \hat{f}(t) ~:=~ {\mathbb {E}}_{\nu \otimes \eta } \bigg [ \sum _{k=0}^{\tau _1-1} f(M_k, t- R_k) \bigg ]. \end{aligned}

Then we can proceed as in [9, Section 4] to obtain

\begin{aligned} {\mathbb {E}}\bigg [&\sum _{n=0}^\infty f(O_n, t-S_n) \bigg ] = {\mathbb {E}}_{\textit{I}_{\textit{d}},0} \bigg [ \sum _{n=0}^\infty f(M_n, t-R_n) \bigg ] \nonumber \\&= {\mathbb {E}}_{\textit{I}_{\textit{d}},0} \bigg [ \sum _{n=0}^{\tau _1 -1} f(M_n, t-R_n) \bigg ] \nonumber \\&\quad + {\mathbb {E}}_{\textit{I}_{\textit{d}},0} \bigg [ \sum _{n=1}^\infty \, {\mathbb {E}}\bigg [\sum _{k=\tau _n}^{\tau _{n+1}-1} f(M_k, t-(R_k-R_{\tau _{n}-1}) - R_{\tau _{n}-1}) \, \Big | \, {\mathcal {G}}_{\tau _n-1} \bigg ] \bigg ] \nonumber \\&= {\mathbb {E}}_{\textit{I}_{\textit{d}},0} \bigg [ \sum _{n=0}^{\tau _1 -1} f(M_n, t-R_n) \bigg ] + {\mathbb {E}}_{\textit{I}_{\textit{d}},0} \bigg [ \sum _{n=0}^\infty \hat{f}(t-R_{\tau _1-1}-V_n) \bigg ]\nonumber \\&=:E_1(t) + E_2(t). \end{aligned}
(7.5)

We show in Lemma 7.4 that $$t^{ \ell +\epsilon } E_1(t)$$ tends to zero, uniformly over all f with $$\left| f \right| \le g$$. We rewrite

\begin{aligned} E_2(t) = \int _{\mathbb {R}}{\mathbb {E}}_{\nu \otimes \eta } \bigg [\sum _{n=0}^\infty \hat{f}(t-s-V_n)\bigg ] \, {\mathbb {P}}_{\textit{I}_{\textit{d}},0}(R_{\tau _1-1} \in \text {d} \textit{s}) \end{aligned}

and use (7.4) to infer that (recall that $$f \ge 0$$)

\begin{aligned} \frac{1}{{\mathbb {E}}_{\nu \otimes \eta } [V_1]} \int _{\mathbb {R}}\hat{f}(r) \, \text {d} \textit{r}&= \frac{1}{{\mathbb {E}}_{\nu \otimes \eta } [V_1]} {\mathbb {E}}_{\nu \otimes \eta } \bigg [ \sum _{k=0}^{\tau _1-1} \int _{\mathbb {R}}f(M_k, r-R_k) \, \text {d} \textit{r}\bigg ] \\&= \frac{1}{{\mathbb {E}}_{\nu \otimes \eta } [V_1]} {\mathbb {E}}_{\nu \otimes \eta } \bigg [ \sum _{k=0}^{\tau _1-1} \int _{\mathbb {R}}f(M_k, r) \, \text {d} \textit{r}\bigg ] \\&= \frac{1}{{\mathbb {E}}[S_1]} \int _0^\infty \int _{{\mathbb {O}}} f(o,r) \, H_{\mathbb {O}}(\text {d} \textit{o}) \, \text {d} \textit{r}. \end{aligned}

Then the claimed convergence rate holds, if

\begin{aligned} \int _{\mathbb {R}}\, \sup _{\left| f \right| \le g} t^{ \ell +\epsilon } \left| {\mathbb {E}}_{\nu \otimes \eta } \bigg [ \sum _{n=0}^\infty \hat{f}(t-s-V_n) \bigg ] - \frac{1}{{\mathbb {E}}_{\nu \otimes \eta }[V_1]}\int _0^\infty \hat{f}(r) \, \text {d} \textit{r} \right| \, {\mathbb {P}}_{\textit{I}_{\textit{d}},0}(R_{\tau _1-1} \in \text {d} \textit{s}) \end{aligned}

tends to 0 as $$t \rightarrow \infty$$. This result will be established in Lemma 7.5. $$\square$$

### Proof (Proof of Lemma 7.3)

Observe that $$((oO_n, r+S_n))_{n \in {\mathbb {N}}_0}$$ is indeed a Markov chain on $${\mathbb {O}}\times {\mathbb {R}}$$ and that the increments $$S_n-S_{n-1}$$ are independent of the past. Since $$\mu$$ satisfies the minorization condition (M), we have that for all $$o \in \text {SO}{} \textit{(d)}$$ and all measurable $$B \subseteq \text {SO}{} \textit{(d)}$$,

\begin{aligned} {\mathbb {P}}(oO_{1} \in B) = {\mathbb {P}}(O_{1} \in o^{-1}B) \ge \gamma H_{{\mathbb {O}}}((o^{-1}B) \cap \text {SO}{} \textit{(d)}) \ge \frac{\gamma }{2} H_{\text {SO}{} \textit{(d)}}(B). \end{aligned}
(7.6)

If $${\mathbb {O}}=\text {SO}{} \textit{(d)}$$, this shows that $$(oO_n)_{n \in {\mathbb {N}}}$$ is a Doeblin chain on $${\mathbb {O}}$$ (see [66, Section 16.2] for the definition).

If $${\mathbb {O}}$$ contains elements with determinant $$-1$$ as well, then necessarily $${\mathbb {O}}={\mathbb {O}}{} \textit{(d)}$$, since $$\text {SO}{} \textit{(d)}\subseteq {\mathbb {O}}$$ and the product of two matrices with negative determinant has positive determinant. Moreover, this necessitates $${\mathbb {P}}(\det (O_1)=-1) >0$$. Then, for all $$o \in {\mathbb {O}}{\setminus } \text {SO}{} \textit{(d)}$$ and all measurable $$B \subseteq \text {SO}{} \textit{(d)}$$,

\begin{aligned} {\mathbb {P}}(oO_{2} \in B) \ge&\int \mathbbm {1}_{\{\det (o')=-1 \}} {\mathbb {P}}(O_{1} \in (oo')^{-1}B) \, {\mathbb {P}}(O_1 \in \text {d} \textit{o}') \nonumber \\ \ge&{\mathbb {P}}(\det (O_1)=-1) \, \frac{\gamma }{2} H_{\text {SO}{} \textit{(d)}}(B). \end{aligned}
(7.7)

Thus, $$(oO_n)_{n \in {\mathbb {N}}}$$ is a Doeblin chain in the case $${\mathbb {O}}={\mathbb {O}}{} \textit{(d)}$$, too. Its unique invariant probability measure is given by the normalized Haar measure $$H_{\mathbb {O}}$$ on $${\mathbb {O}}{} \textit{(d)}$$.
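The case distinction above rests on the elementary fact that the product of two orthogonal matrices with determinant $$-1$$ has determinant $$+1$$ and hence lies in $$\text {SO}{} \textit{(d)}$$. A minimal numerical sanity check of this fact, using hypothetical $$2 \times 2$$ reflection matrices (not taken from the paper):

```python
# Sanity check: in O(d), the product of two matrices with determinant -1
# has determinant +1 and therefore lies in SO(d). Minimal 2x2 example
# with two reflections (hypothetical illustration matrices).

def matmul(a, b):
    # 2x2 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

reflect_x = [[1, 0], [0, -1]]    # reflection across the x-axis, det = -1
reflect_y = [[-1, 0], [0, 1]]    # reflection across the y-axis, det = -1

assert det2(reflect_x) == -1 and det2(reflect_y) == -1
product = matmul(reflect_x, reflect_y)
assert det2(product) == 1        # the product is a rotation, i.e. in SO(2)
```

This is why, once $${\mathbb {O}}$$ contains any matrix of determinant $$-1$$, two steps of the walk suffice to return to $$\text {SO}{} \textit{(d)}$$ with positive probability.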

Again by the minorization condition for $$\mu$$, it follows that there is an absolutely continuous measure $$\eta (\text {d} \textit{x}):=\mathbbm {1}_I(x) \, \text {d} \textit{x}$$ such that

\begin{aligned} {\mathbb {P}}(oO_1 \in A, S_1 \in B) ~\ge ~ \frac{\gamma }{2} H_{\text {SO}{} \textit{(d)}}(A) \eta (B) \end{aligned}
(7.8)

for all $$o \in \text {SO}{} \textit{(d)}$$ and all measurable sets A and B.

Inequalities (7.6) and (7.7) yield that there is some $$q <1$$ such that, for all $$o \in {\mathbb {O}}$$,

\begin{aligned} {\mathbb {P}}(oO_k \notin \text {SO}{} \textit{(d)}\text { for } k=1,2) \le q. \end{aligned}
(7.9)

It follows that $$(oO_n)_{n \in {\mathbb {N}}}$$ is $$(\text {SO}{} \textit{(d)},\gamma /2,\nu ,1)$$-recurrent in the sense of , with $$\nu :=H_{\text {SO}{} \textit{(d)}}$$. Then, $$((M_n, R_n))_{n \in {\mathbb {N}}_0}$$ can be constructed along the same lines as in [9, Section 3]: Under $${\mathbb {P}}_{o,r}$$, let $$((M_n, R_n))_{n \in {\mathbb {N}}}$$ have the same transitions as $$(oO_n,r+ S_n)_{n \in {\mathbb {N}}}$$, but whenever $$M_n$$ enters $$\text {SO}{} \textit{(d)}$$, an independent $$B(1,\gamma /2)$$-distributed coin is flipped. If 1 shows up, then $$(M_{n+1}, R_{n+1}-R_n)$$ is generated according to $$\nu \otimes \eta$$; we call this event a regeneration. If 0 shows up, then $$(M_{n+1}, R_{n+1}-R_n)$$ is generated according to the residual law $$(1-\frac{\gamma }{2})^{-1}\big ({\mathbb {P}}((M_nO_1, S_1) \in \cdot ) - \frac{\gamma }{2} \nu \otimes \eta \big )$$. Thus, the total transition probabilities are still equal to those of $$(oO_n, r+S_n)_{n \in {\mathbb {N}}}$$. Let $$\tau _0=0$$ and let $$\tau _n$$ denote the nth regeneration time; see [10, 68] for details. This gives Assertions 1 and 2, while Assertion 3 is proved in [10, Theorem 6.1]. The construction together with (7.9) shows that at least every third step there is a uniform positive chance of regeneration, which yields Assertion 4. $$\square$$
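The coin-flip construction above is an instance of the classical splitting technique: when the kernel dominates $$\gamma \, \nu \otimes \eta$$, each transition can be sampled as a mixture of a regeneration draw and a residual draw, without changing the law of the chain. A hedged simulation sketch on a toy two-state kernel (the kernel, minorization constant and stationary value $$3/7$$ are illustrative assumptions, not the chain of the paper):

```python
import random

# Splitting (regeneration) construction for a kernel satisfying a
# minorization P(x, .) >= gamma * nu(.): each step first flips a
# Bernoulli(gamma) coin; on success the next state is drawn from nu
# (a regeneration), otherwise from the residual kernel
# (P(x, .) - gamma * nu(.)) / (1 - gamma). The mixture reproduces P
# exactly. Toy two-state kernel, purely illustrative.

P = {0: [0.7, 0.3], 1: [0.4, 0.6]}   # P[x][y] = P(x -> y)
gamma, nu = 0.6, [0.5, 0.5]          # check: P[x][y] >= gamma * nu[y]

def step(x, rng):
    if rng.random() < gamma:                       # regeneration
        probs, regen = nu, True
    else:                                          # residual kernel
        probs = [(P[x][y] - gamma * nu[y]) / (1 - gamma) for y in (0, 1)]
        regen = False
    return (0 if rng.random() < probs[0] else 1), regen

rng = random.Random(1)
x, hits, n = 0, 0, 200_000
for _ in range(n):
    x, regen = step(x, rng)
    hits += x

# The split chain has the same transition law as the original chain,
# whose stationary distribution solves pi = pi P, giving pi(1) = 3/7.
print(hits / n)
```

Between consecutive regenerations, the trajectory segments are i.i.d., which is exactly what makes the cycle decomposition in the proof work.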

### Lemma 7.4

Let g be as in Proposition 7.2 and $$((M_n,R_n))_{n \ge 0}$$ as in Lemma 7.3. Then

\begin{aligned} \lim _{t \rightarrow \infty } \sup _{\left| f \right| \le g} t^{ \ell +\epsilon +1} \bigg |{\mathbb {E}}_{\textit{I}_{\textit{d}},0} \bigg [\sum _{n=0}^{\tau _1-1} f(M_n, t-R_n) \bigg ] \bigg | = 0. \end{aligned}
(7.10)

In particular, $$\lim _{t \rightarrow \infty } t^{ \ell +\epsilon +1} \hat{g}(t)=0$$. Moreover,

\begin{aligned} \lim _{t \rightarrow \infty } t^{ \ell +\epsilon +1} \, {\mathbb {P}}_{\textit{I}_{\textit{d}},0}(R_{\tau _1-1}>t/2) = 0. \end{aligned}
(7.11)

### Proof

In order to prove (7.10), we assume that $$g(0) \le 1$$ and fix some f with $$\left| f \right| \le g$$. Recall that, by Lemma 7.3(iv), $${\mathbb {P}}_{\textit{I}_{\textit{d}},0}(\tau _1 >n) \le C e^{-\lambda n}$$ for some $$C,\lambda > 0$$. Define $$n_t= (\log t) (\ell +2)/\lambda$$, $$t>0$$. Then

\begin{aligned}&t^{ \ell +\epsilon +1} \bigg | {\mathbb {E}}_{\textit{I}_{\textit{d}},0} \bigg [\sum _{n=0}^{\tau _1-1} f(M_n, t-R_n) \bigg ] \bigg | \le t^{ \ell +\epsilon +1} {\mathbb {E}}_{\textit{I}_{\textit{d}},0} \bigg [\sum _{n=0}^{\tau _1-1} g(t-R_n) \bigg ] \\&\quad \le t^{ \ell +\epsilon +1}{\mathbb {E}}_{\textit{I}_{\textit{d}},0} \bigg [\mathbbm {1}_{\{\tau _1 \le n_t\}}\sum _{n=0}^{\lfloor n_t \rfloor } g(t-R_n)\big (\mathbbm {1}_{\{R_n \le t/2\}} + \mathbbm {1}_{\{R_n > t/2\}} \big ) \bigg ] \\&\qquad + t^{ \ell +\epsilon +1}{\mathbb {E}}_{\textit{I}_{\textit{d}},0} \bigg [ \mathbbm {1}_{\{\tau _1 > n_t\}} \sum _{n=0}^{\tau _1-1} g(t-R_n) \bigg ] =:I_1(t) + I_2(t) + I_3(t). \end{aligned}

Recall that g is decreasing on $$[0, \infty )$$ and $$\lim _{t \rightarrow \infty } t^{\ell +1+\delta } g(t)=0$$ for some $$\delta >0$$. This gives

\begin{aligned} I_1(t)= & {} t^{ \ell +\epsilon +1}{\mathbb {E}}_{\textit{I}_{\textit{d}},0} \bigg [\mathbbm {1}_{\{\tau _1 \le n_t\}} \sum _{n=0}^{\lfloor n_t \rfloor } g(t-R_n) \mathbbm {1}_{\{R_n \le t/2\}} \bigg ]\\\le & {} t^{ \ell +\epsilon +1} (\lfloor n_t \rfloor + 1) g(t/2) ~\underset{t \rightarrow \infty }{\rightarrow }~0. \end{aligned}

By Jensen’s inequality (applied to the convex function $$x \mapsto |x|^\kappa$$ and the arithmetic mean of the increments), $${\mathbb {E}}[\left| S_n \right| ^\kappa ] \le n^{\kappa -1} \sum _{i=1}^{n} {\mathbb {E}}[\left| S_i - S_{i-1} \right| ^\kappa ] = n^{\kappa } \, {\mathbb {E}}[\left| S_1 \right| ^\kappa ]$$ for all $$\kappa \ge 1$$. Thus, applying Markov’s inequality,

\begin{aligned} I_2(t)&\le t^{ \ell +\epsilon +1} \sum _{n=0}^{\lfloor n_t \rfloor } {\mathbb {P}}(S_n > t/2) \le t^{ \ell +\epsilon +1} \sum _{n=0}^{\lfloor n_t \rfloor } \frac{{\mathbb {E}}[ \left| S_n \right| ^{\ell +1+\delta }]}{(t/2)^{\ell +1+\delta }} \\&\le \frac{2^{\ell +1+\delta }}{t^{ \delta -\epsilon }} \sum _{n=0}^{\lfloor n_t \rfloor } n^{\ell +1+\delta } {\mathbb {E}}[\left| S_1 \right| ^{\ell +1+\delta }] \le 2^{\ell +1+\delta } \frac{(n_t+1)^{\ell +2+\delta }}{t^{ \delta -\epsilon }} {\mathbb {E}}[\left| S_1 \right| ^{\ell +1+\delta }], \end{aligned}

which tends to zero as $$t \rightarrow \infty$$. Finally,

\begin{aligned} I_3(t)&\le t^{ \ell +\epsilon +1} {\mathbb {E}}_{\textit{I}_{\textit{d}},0}\big [ \tau _1 \mathbbm {1}_{\{\tau _1 > n_t\}} \big ] \\&\le t^{ \ell +\epsilon +1} \bigg ( n_t {\mathbb {P}}_{\textit{I}_{\textit{d}},0}(\tau _1 > n_t) + \int _{n_t}^\infty {\mathbb {P}}_{\textit{I}_{\textit{d}},0}(\tau _1 > r) \, \text {d} \textit{r}\bigg ) \\&\le t^{ \ell +\epsilon +1} \bigg ( n_t C e^{-\lambda n_t} + \frac{C}{\lambda } e^{-\lambda n_t} \bigg ) \\&\le C \frac{t^{ \ell +\epsilon +1} (n_t +1/\lambda )}{t^{\ell +2}} \rightarrow 0 \end{aligned}

as $$t \rightarrow \infty$$. Thus we have proved the first two assertions. Finally, (7.11) follows from (7.10) with $$g(s)=\mathbbm {1}_{[0,t/2)}(s)$$. $$\square$$
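The Jensen step used to bound $$I_2$$ reduces to the deterministic power-mean inequality $$|x_1+\cdots +x_n|^\kappa \le n^{\kappa -1}\sum _i |x_i|^\kappa$$ for $$\kappa \ge 1$$, which for i.i.d. increments yields $${\mathbb {E}}[|S_n|^\kappa ] \le n^\kappa {\mathbb {E}}[|S_1|^\kappa ]$$. A minimal numerical check on random toy data (an illustration only, not part of the proof):

```python
import random

# Power-mean inequality behind the Jensen step above:
# |x_1 + ... + x_n|^kappa <= n^(kappa - 1) * (|x_1|^kappa + ... + |x_n|^kappa)
# for kappa >= 1. Verified here on random inputs.

def power_mean_holds(xs, kappa):
    lhs = abs(sum(xs)) ** kappa
    rhs = len(xs) ** (kappa - 1) * sum(abs(x) ** kappa for x in xs)
    return lhs <= rhs + 1e-12   # small slack for floating-point error

rng = random.Random(0)
for _ in range(1000):
    n = rng.randint(1, 12)
    xs = [rng.uniform(-2.0, 3.0) for _ in range(n)]
    assert power_mean_holds(xs, kappa=2.5)
print("inequality verified on 1000 random samples")
```

The exponent $$n^{\kappa -1}$$ is sharp: for equal entries $$x_i = c > 0$$ both sides coincide.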

### Lemma 7.5

It holds that

\begin{aligned} \lim _{t \rightarrow \infty } \int _{\mathbb {R}}\, \sup _{\left| f \right| \le g} t^{ \ell + \epsilon } \left| {\mathbb {E}}_{\nu \otimes \eta } \bigg [ \sum _{n=0}^\infty \hat{f}(t-s-V_n) \bigg ] - \frac{\int _0^\infty \hat{f}(r) \, \text {d} \textit{r}}{{\mathbb {E}}_{\nu \otimes \eta }[V_1]} \right| \, {\mathbb {P}}_{\textit{I}_{\textit{d}},0}(R_{\tau _1-1} \in \text {d} \textit{s}) = 0. \end{aligned}
(7.12)

### Proof

We start by proving that the functions $$\hat{f}$$ are directly Riemann-integrable, uniformly over all f with $$\left| f \right| \le g$$. Write

\begin{aligned} \hat{f}(t) = \int {\mathbb {E}}_{\nu \otimes \delta _0} \bigg [\sum _{k=0}^{\tau _1-1} f(M_k,t-s-R_k) \bigg ] \, \eta (\text {d} \textit{s}), \end{aligned}

then observe that

\begin{aligned} \int \bigg | {\mathbb {E}}_{\nu \otimes \delta _0} \bigg [\sum _{k=0}^{\tau _1-1} f(M_k,t-R_k) \bigg ] \bigg | \, \text {d} \textit{t}&\le {\mathbb {E}}_{\nu \otimes \delta _0} \bigg [\sum _{k=0}^{\tau _1-1} \int g(t-R_k) \, \text {d} \textit{t}\bigg ] \\&= {\mathbb {E}}_{\nu \otimes \delta _0}[\tau _1] \int g(t) \, \text {d} \textit{t}\end{aligned}

and recall that $$\eta$$ has a bounded Lebesgue density. Consequently, $$\hat{f}$$ is the convolution of a Lebesgue-integrable function with a bounded function and hence continuous, see [8, Lemma VII.1.2]. Further, arguing as in [9, Eq. (4.4)],

\begin{aligned} \sum _{l\in {\mathbb {Z}}} \sup _{t \in [l h,(l+1)h]} |\hat{f}(t)|&\le {\mathbb {E}}_{\nu \otimes \eta }[\tau _1] \int _{\mathbb {O}}\sum _{l \in {\mathbb {Z}}} \sup _{t \in [2lh,2(l+1)h]} \left| f(o,t) \right| \, H_{\mathbb {O}}(\text {d} \textit{o}) \\&\le {\mathbb {E}}_{\nu \otimes \eta }[\tau _1] \sum _{l \in {\mathbb {Z}}} \sup _{t \in [2lh,2(l+1)h]} g(t) =:C_g. \end{aligned}

Hence, it suffices to show that g is directly Riemann-integrable. The latter is clear since g is monotone and Lebesgue-integrable (see [8, Proposition V.4.1(v)]). We have the uniform bound (cf. [8, Theorem V.2.4(iii)])

\begin{aligned} \sup _{t \in {\mathbb {R}}} \sup _{\left| f \right| \le g} \, {\mathbb {E}}_{\nu \otimes \eta } \bigg [ \sum _{n=0}^\infty |\hat{f}(t-V_n)| \bigg ] \le C_g {\mathbb {E}}_{\nu \otimes \eta } \bigg [ \sum _{n=0}^\infty \mathbbm {1}_{[-4h,4h]}(V_n) \bigg ] =:D_g < \infty . \end{aligned}

Now we decompose the integral inside the limit in (7.12) according to the set $$\{R_{\tau _1-1}> t/2\}$$ and its complement to obtain the upper bound

\begin{aligned}&\sup _{s \le t/2} \, \sup _{\left| f \right| \le g} t^{ \ell + \epsilon } \bigg |{\mathbb {E}}_{\nu \otimes \eta } \bigg [ \sum _{n=0}^\infty \hat{f}(t-s-V_n) \bigg ] - \frac{1}{{\mathbb {E}}_{\nu \otimes \eta } [V_1]}\int _0^\infty \hat{f}(r) \, \text {d} \textit{r}\bigg | \\&\quad +\, 2 D_g \, t^{ \ell + \epsilon } \, {\mathbb {P}}_{\textit{I}_{\textit{d}},0}(R_{\tau _1-1} > t/2). \end{aligned}

The second term tends to zero by (7.11). For the first term, we invoke [69, Theorem 4.2(ii)] (with $$G=\delta _0$$), which gives (note that $$\left| f \right| \le g$$ implies $$|\hat{f}| \le \hat{g}$$)

\begin{aligned} \lim _{t \rightarrow \infty } t^{ \ell + \epsilon } \sup _{\left| \hat{f} \right| \le \hat{g}} \left| {\mathbb {E}}_{\nu \otimes \eta } \bigg [ \sum _{n=0}^\infty \hat{f}(t-s-V_n) \bigg ] -\frac{1}{{\mathbb {E}}_{\nu \otimes \eta } [V_1]}\int _0^\infty \hat{f}(r) \, \text {d} \textit{r} \right| = 0 \end{aligned}

as soon as the following conditions hold: $$V_1$$ has positive drift (here, $${\mathbb {E}}_{\nu \otimes \eta }[V_1] = {\mathbb {E}}_{\nu \otimes \eta }[\tau _1]{\mathbb {E}}[S_1] > 0$$ by the proof of Proposition 7.2) and a spread-out law (here, the law of $$V_1$$ is even absolutely continuous); $${\mathbb {E}}_{\nu \otimes \eta }[|V_1|^{ \ell +\epsilon +1}]<\infty$$ (which is true by (7.2) and Lemma 7.3(iv)); and $$\hat{g}$$ is bounded, Lebesgue-integrable and satisfies

\begin{aligned} t^{ \ell + \epsilon } \int _t^{2t} \hat{g}(r) \text {d} \textit{r}\rightarrow 0 \quad \text { and } \quad t^{ \ell + \epsilon } \sup _{r \ge t} \hat{g}(r) \rightarrow 0 \quad \text { as } t\rightarrow \infty . \end{aligned}
(7.13)

Lemma 7.4 gives $$\lim _{t \rightarrow \infty } t^{ \ell +\epsilon +1} \hat{g}(t) = 0$$, which is sufficient for (7.13) to hold. $$\square$$
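The direct Riemann integrability criterion invoked in the proof (a monotone, Lebesgue-integrable function is directly Riemann-integrable) can be illustrated numerically: the upper Riemann sums $$h \sum _l \sup _{t \in [lh,(l+1)h]} g(t)$$ stay finite and approach the Lebesgue integral as the mesh h shrinks. A minimal sketch with the hypothetical choice $$g(t)=e^{-t}$$ (an illustration, not the g of the paper):

```python
import math

# Upper Riemann sums for a decreasing, Lebesgue-integrable function:
#   h * sum_l sup_{t in [l*h, (l+1)*h]} g(t)
# For decreasing g, the sup on each cell is attained at the left endpoint.
# Direct Riemann integrability means these sums are finite and converge
# to the integral as h -> 0. Example: g(t) = exp(-t), integral over [0, inf) = 1.

def upper_sum(g, h, t_max=60.0):
    total, l = 0.0, 0
    while l * h < t_max:          # tail beyond t_max is negligible here
        total += h * g(l * h)     # left endpoint = sup for decreasing g
        l += 1
    return total

g = lambda t: math.exp(-t)
for h in (1.0, 0.1, 0.01):
    print(h, upper_sum(g, h))
```

As h shrinks, the printed sums decrease toward the exact integral 1, matching the criterion from [8, Proposition V.4.1(v)] cited above.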
