Cartoon Approximation with \(\alpha \)-Curvelets

Abstract

It is well known that curvelets provide optimal approximations for so-called cartoon images, which are defined as piecewise \(C^2\)-functions separated by a \(C^2\) singularity curve. In this paper, we consider the more general case of piecewise \(C^\beta \)-functions, separated by a \(C^\beta \) singularity curve for \(\beta \in (1,2]\). We first prove a benchmark result for the best achievable \(N\)-term approximation rate for this more general signal model. Then we introduce what we call \(\alpha \)-curvelets, which are systems that interpolate between wavelet systems on the one hand (\(\alpha = 1\)) and curvelet systems on the other hand (\(\alpha = \frac{1}{2}\)). Our main result states that these frames achieve the optimal rate for \(\alpha = \frac{1}{\beta }\), up to \(\log \)-factors.


Fig. 1


Acknowledgments

PG was supported in part by Swiss National Fund (SNF) Grant 146356. SK acknowledges support from the Berlin Mathematical School and the DFG Collaborative Research Center TRR 109 “Discretization in Geometry and Dynamics”. GK was supported in part by the Einstein Foundation Berlin, by the Einstein Center for Mathematics Berlin (ECMath), by Deutsche Forschungsgemeinschaft (DFG) Grant KU 1446/14, by the DFG Collaborative Research Center TRR 109 “Discretization in Geometry and Dynamics”, and by the DFG Research Center Matheon “Mathematics for key technologies” in Berlin.


Corresponding author

Correspondence to Philipp Grohs.


Communicated by Albert Cohen.

Appendices

Appendix A: Proof of Lemma 6.4

Let us start with a simple result that shows how scaling affects the Hölder constant.

Lemma 6.10

Let \(f\in C^\alpha (\mathbb {R})\) and \(0<\alpha <1\). Then for \(t,s>0\)

$$\begin{aligned} H\ddot{o}l (s f(t\cdot ),\alpha )=st^\alpha \cdot H\ddot{o}l (f,\alpha ). \end{aligned}$$
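For completeness, the identity follows directly from the definition of the Hölder constant:

$$\begin{aligned} H\ddot{o}l (s f(t\cdot ),\alpha )=\sup _{x\ne y}\frac{s|f(tx)-f(ty)|}{|x-y|^{\alpha }} =s t^{\alpha }\sup _{x\ne y}\frac{|f(tx)-f(ty)|}{|tx-ty|^{\alpha }} =st^{\alpha }\cdot H\ddot{o}l (f,\alpha ). \end{aligned}$$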

We proceed with some technical estimates of the functions \(g_j^\eta \) and \(\omega ^\eta \), which occur as components of the functions \(G^{\eta }_{j}\) defined in (31). These estimates will provide the basis for the more complex estimates needed in the actual proof of Lemma 6.4.

Estimates for \(g_{{j}}^\eta \)

The functions \(g^\eta _j\) are given for \(j\in \mathbb {N}_0\) by \(g^\eta _j=g^\eta (2^{-\alpha j}\cdot )\), where \(g^\eta \) is a rotated version of the fixed function \(g\in C_0^{\beta }(\mathbb {R}^2)\) with \(\beta \in (1,2]\). Thus clearly, \(g^\eta \in C_0^{\beta }(\mathbb {R}^2)\) and also \(g^{{\eta }}_j\in C_0^{\beta }(\mathbb {R}^2)\). However, the regularity parameters change with the scale. Applying Lemma 6.10 yields the following result.

Lemma 6.11

Let \(\beta \in (1,2]\) and \(g^{{\eta }}\in C_0^{\beta }(\mathbb {R}^2)\). Then for \(g^{{\eta }}_j=g^{{\eta }}(2^{-\alpha j}\cdot )\)

$$\begin{aligned} {H\ddot{o}l}(\partial _1g^{{\eta }}_{j},\beta -1)=2^{-j}{H\ddot{o}l}(\partial _1g^{{\eta }},\beta -1). \end{aligned}$$

Proof

In view of Lemma 6.10 we have

$$\begin{aligned} {H\ddot{o}l}(\partial _1g^{{\eta }}_{j},\beta -1)={H\ddot{o}l}(2^{-\alpha j}\partial _1g^{{\eta }}(2^{-\alpha j}\cdot ),\beta -1)=2^{-j}{H\ddot{o}l}(\partial _1g^{{\eta }},\beta -1). \end{aligned}$$

\(\square \)
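Note that Lemma 6.10 in general produces the factor \(2^{-\alpha j}\cdot (2^{-\alpha j})^{\beta -1}=2^{-\alpha \beta j}\); the exponent \(2^{-j}\) in Lemma 6.11 thus reflects the relation \(\alpha =\frac{1}{\beta }\) underlying the main result:

$$\begin{aligned} 2^{-\alpha j}\cdot \big (2^{-\alpha j}\big )^{\beta -1}=2^{-\alpha \beta j}=2^{-j} \quad \text {for}\quad \alpha =\tfrac{1}{\beta }. \end{aligned}$$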

It is obvious that \(\Vert g^{{\eta }}_j\Vert _\infty \lesssim 1\). Further, the chain rule yields

$$\begin{aligned} \Vert \partial _1 g^{{\eta }}_j\Vert _\infty \lesssim 2^{-\alpha j} \quad \text {and}\quad \Vert \partial _2 g^{{\eta }}_j\Vert _\infty \lesssim 2^{-\alpha j}. \end{aligned}$$
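Indeed, by the chain rule (shown for the first coordinate; the second is analogous),

$$\begin{aligned} \partial _1 g^{{\eta }}_j=2^{-\alpha j}(\partial _1 g^{{\eta }})(2^{-\alpha j}\cdot ), \quad \text {so}\quad \Vert \partial _1 g^{{\eta }}_j\Vert _\infty =2^{-\alpha j}\Vert \partial _1 g^{{\eta }}\Vert _\infty \lesssim 2^{-\alpha j}. \end{aligned}$$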

Some more estimates for \(g^{{\eta }}_j\) are collected in the following two lemmas.

Lemma 6.12

The following estimates hold true for \(g^{{\eta }}_j\):

$$\begin{aligned} \Vert \Delta _{(h,0)} g^{{\eta }}_j\Vert _\infty&\lesssim 2^{-\alpha j}h , \\ \Vert \Delta _{(h,0)} \partial _1g^{{\eta }}_j \Vert _\infty ,\, \Vert \Delta _{(h,0)} \partial _2g^{{\eta }}_j \Vert _\infty&\lesssim 2^{-j} h^{\beta -1}= 2^{-\alpha j}h^{\beta } , \\ \Vert \Delta ^2_{(h,0)} g^{{\eta }}_j \Vert _\infty&\lesssim 2^{-j}h^{\beta } , \end{aligned}$$

with implicit constants that do not depend on \(j\in \mathbb {N}_0\) and \(h\ge 0\).

Proof

Applying the mean value theorem yields

$$\begin{aligned} \Vert \Delta _{(h,0)} g^{{\eta }}_j \Vert _\infty \le h \Vert \partial _1g^{{\eta }}_j \Vert _\infty \lesssim 2^{-\alpha j}h. \end{aligned}$$

Considering Lemma 6.11 we obtain

$$\begin{aligned} \Vert \Delta _{(h,0)} \partial _1g^{{\eta }}_j \Vert _\infty \lesssim 2^{-j} h^{\beta -1} = 2^{-\alpha j} h^{\beta }. \end{aligned}$$

Noting the commutativity \(\partial _1\Delta _{(h,0)}=\Delta _{(h,0)}\partial _1\), we obtain

$$\begin{aligned} \Vert \Delta ^2_{(h,0)} g^{{\eta }}_j\Vert _\infty \lesssim h \Vert \Delta _{(h,0)} \partial _1g^{{\eta }}_j \Vert _\infty \lesssim 2^{-j}h^{\beta }. \end{aligned}$$

\(\square \)

The next lemma gives estimates for \(g^{{\eta }}_j\) along the edge curve. Here the function \(a\in C^\beta (\mathbb {R})\) defined in (27) comes into play. The following estimates also depend on the properties of a, which are summarized in Lemma 6.2.

Lemma 6.13

Assume \(|\sin \eta |\ge 2\delta _j\). The following estimates hold true for \(g^{{\eta }}_j\):

$$\begin{aligned} \sup _{t\in \mathbb {R}}|\Delta _h g^{{\eta }}_j(t,a(t))|&\lesssim h |\sin \eta |^{-1} 2^{-\alpha j}, \\ \sup _{t\in \mathbb {R}}|\Delta _h \partial _1g^{{\eta }}_j(t,a(t))|,\, \sup _{t\in \mathbb {R}}|\Delta _h \partial _2g^{{\eta }}_j(t,a(t))|&\lesssim h^{\beta -1} |\sin \eta |^{1-\beta } 2^{-j},\\ \sup _{t\in \mathbb {R}}|\Delta ^2_h g^{{\eta }}_j(t,a(t))|&\lesssim h^{\beta } |\sin \eta |^{-1-\beta } 2^{-j}, \end{aligned}$$

where the implicit constants are independent of \(j\in \mathbb {N}_0\) and \(h\ge 0\).

Proof

In view of Lemma 6.2, we have

$$\begin{aligned} \sup _{t\in \mathbb {R}}|\Delta _h g^{{\eta }}_j(t,a(t))|&\lesssim h\cdot \sup _{t\in \mathbb {R}}|\frac{d}{dt} g^{{\eta }}_j(t,a(t))| \\&\lesssim h\cdot \Big ( \sup _{t\in \mathbb {R}}| \partial _1g^{{\eta }}_j(t,a(t))| + \sup _{t\in \mathbb {R}} |\partial _2g^{{\eta }}_j(t,a(t))a^\prime (t)| \Big ) \\&\lesssim h\cdot |\sin \eta |^{-1} 2^{-\alpha j}. \end{aligned}$$

Considering the transformation behavior of the Hölder constant we obtain with Lemma 6.2

$$\begin{aligned} \sup _{t\in \mathbb {R}}|\Delta _h \partial _1g^{{\eta }}_j(t,a(t))|&\lesssim 2^{-j} \sup _{t\in \mathbb {R}}|(h,a(t+h)-a(t))|_2^{\beta -1} \\&\lesssim 2^{-j} \big ( h^{\beta -1} + \sup _{t\in \mathbb {R}}|a(t+h)-a(t)|^{\beta -1} \big ) \\&\lesssim 2^{-j} h^{\beta -1} |\sin \eta |^{1-\beta }. \end{aligned}$$

Applying Lemma 6.2, the mean value theorem and \(\frac{d}{dt}\Delta _h=\Delta _h\frac{d}{dt}\) yields

$$\begin{aligned} \sup _{t\in \mathbb {R}}|\Delta ^2_h g^{{\eta }}_j(t,a(t))|&\lesssim h \cdot \sup _{t\in \mathbb {R}}|\Delta _h \frac{d}{dt}g^{{\eta }}_j(t,a(t)) | \\&= h\cdot \sup _{t\in \mathbb {R}}|\Delta _h \big (\partial _1g^{{\eta }}_j(t,a(t)) + \partial _2g^{{\eta }}_j(t,a(t))a^\prime (t) \big ) | \\&= h\cdot \sup _{t\in \mathbb {R}}| \Delta _h \partial _1g^{{\eta }}_j(t,a(t)) + \Delta _h \partial _2g^{{\eta }}_j(t,a(t))a^\prime (t+h)\\&\quad + \partial _2g^{{\eta }}_j(t,a(t))\Delta _ha^\prime (t) | \\&\lesssim h^{\beta } |\sin \eta |^{1-\beta }2^{-j} + h^{\beta } |\sin \eta |^{-\beta }2^{-j} + \delta _j h^{\beta } |\sin \eta |^{-1-\beta }2^{-\alpha j}. \end{aligned}$$

\(\square \)

Estimates for \(\omega ^\eta \)

Similarly, we obtain estimates for the window function \(\omega ^\eta \in C_0^\infty (\mathbb {R}^2)\), which, in contrast to the functions \(g^{{\eta }}_j\), remains fixed across all scales. This fact and the smoothness of \(\omega ^{{\eta }}\) result in different estimates.

First, we state the trivial estimates \(\Vert \omega ^{{\eta }}\Vert _\infty \lesssim 1\), \(\Vert \partial _1\omega ^{{\eta }}\Vert _\infty \lesssim 1\), and \(\Vert \partial _2\omega ^{{\eta }}\Vert _\infty \lesssim 1\). Next, we apply the forward difference operator \(\Delta _{(h,0)}\) to \(\omega ^{{\eta }}\).

Lemma 6.14

Let \(k\in \mathbb {N}_0\). Then, with implicit constants independent of \(h\ge 0\),

$$\begin{aligned} \Vert \Delta ^k_{(h,0)} \omega ^{{\eta }}\Vert _\infty \lesssim h^k \quad \text {and}\quad \Vert \Delta ^k_{(h,0)} \partial _1\omega ^{{\eta }}\Vert _\infty \lesssim h^k. \end{aligned}$$

In analogy to Lemma 6.13, we establish estimates along the edge curve.

Lemma 6.15

Assume \(|\sin \eta |\ge 2\delta _j\). Then

$$\begin{aligned} \sup _{t\in \mathbb {R}}|\Delta _h \omega ^{{\eta }}(t,a(t))|&\lesssim h |\sin \eta |^{-1}, \\ \sup _{t\in \mathbb {R}}|\Delta _h \partial _1\omega ^{{\eta }}(t,a(t))|,\, \sup _{t\in \mathbb {R}}|\Delta _h \partial _2\omega ^{{\eta }}(t,a(t))|&\lesssim h |\sin \eta |^{-1}, \\ \sup _{t\in \mathbb {R}}|\Delta ^2_h \omega ^{{\eta }}(t,a(t))|&\lesssim h^2 |\sin \eta |^{-2} + \delta _j h^{\beta } |\sin \eta |^{-1-\beta }. \end{aligned}$$

Proof

This proof is analogous to the proof of Lemma 6.13. \(\square \)

Now we are in a position to give the proof of Lemma 6.4.

Proof of Lemma 6.4

Proof

First we differentiate \(\mathcal {R}F_{{j}}(t,\eta )\) with respect to t and obtain from (30)

$$\begin{aligned} \partial _1(\mathcal {R}F_{{j}})(t,\eta )=a^\prime (t) G_{{j}}(t,a(t)) + \int _{-\infty }^{a(t)} \partial _1G_{{j}}(t,u) \,du=:T(t) \end{aligned}$$

where on the right-hand side the dependence on \(\eta \) is omitted in the notation. In the remainder of the proof, we will also suppress the index j as far as possible. Applying \(\Delta _h\) then yields for \(t\in \mathbb {R}\)

$$\begin{aligned} \Delta _hT(t)&=\Delta _h a^\prime (t) G(t+h,a(t+h))+ a^\prime (t) \Delta _h G(t,a(t))\\&\quad + \int _{a(t)}^{a(t+h)} \partial _1G(t+h,u) \,du + \int _{-\infty }^{a(t)} \Delta _{(h,0)} \partial _1G(t,u) \,du \\&=: T_1(t)+T_2(t)+T_3(t)+T_4(t). \end{aligned}$$

Next, we estimate the \(L^\infty \)-norms of the functions \(T_i\) for \(i\in \{1,2,3,4\}\). Let us begin with \(T_1\). Applying Lemma 6.2 we obtain

$$\begin{aligned} \Vert T_1\Vert _\infty \le \Vert \Delta _h a^\prime \Vert _\infty \Vert G \Vert _\infty \lesssim \Vert \Delta _h a^\prime \Vert _\infty \lesssim \delta _j h^{\beta -1} |\sin \eta |^{-1-\beta } \lesssim h^{\beta } |\sin \eta |^{-1-\beta }. \end{aligned}$$
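The last step uses that, in the setting of Lemma 6.4, \(h\asymp 2^{-j(1-\alpha )}\) while \(\delta _j\lesssim 2^{-j(1-\alpha )}\) by (24), whence

$$\begin{aligned} \delta _j h^{\beta -1}\lesssim h\cdot h^{\beta -1}=h^{\beta }. \end{aligned}$$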

The estimate of \(T_2\) takes some more effort. The product rule yields for \(t\in \mathbb {R}\)

$$\begin{aligned} T_2(t)&= a^\prime (t) \Delta _h G(t,a(t)) = a^\prime (t) \Delta _h g_j(t,a(t)) \omega (t+h,a(t+h))\\&\quad + a^\prime (t) g_j(t,a(t)) \Delta _h \omega (t,a(t)) =: T_{21}(t) + T_{22}(t). \end{aligned}$$

Using the mean value theorem and Lemmas 6.2 and 6.13 yields

$$\begin{aligned} \Vert T_{21} \Vert _\infty \le \Vert a^\prime \Vert _\infty \sup _{t\in \mathbb {R}}|\Delta _h g_j(t,a(t))| \Vert \omega \Vert _\infty \lesssim h |\sin \eta |^{-2} 2^{-\alpha j} \lesssim h^{\beta } |\sin \eta |^{-1-\beta }. \end{aligned}$$

We take another forward difference of the component \(T_{22}\) and obtain

$$\begin{aligned} \Delta _h T_{22}(t)= & {} \Delta _h a^\prime (t) g_j(t+h,a(t+h)) \Delta _h \omega (t+h,a(t+h)) \\&+ a^\prime (t) \Delta _h g_j(t,a(t)) \Delta _h \omega (t+h,a(t+h)) + a^\prime (t) g (t,a(t)) \Delta ^2_h \omega (t,a(t)) \\=: & {} T^1_{22}(t) + T^2_{22}(t) + T^3_{22}(t). \end{aligned}$$

These terms admit the following estimates, where we use Lemmas 6.2, 6.15 and 6.13; note also that \(h\lesssim |\sin \eta |\).

$$\begin{aligned} \Vert T^1_{22}\Vert _\infty&\lesssim h^{\beta +1} |\sin \eta |^{-2-\beta } \lesssim h^{\beta } |\sin \eta |^{-1-\beta } , \\ \Vert T^2_{22}\Vert _\infty&\lesssim h^2 |\sin \eta |^{-3} 2^{-\alpha j} \lesssim h^{\beta } |\sin \eta |^{-1-\beta } , \\ \Vert T^3_{22}\Vert _\infty&\lesssim h^2 |\sin \eta |^{-3} + h^{\beta +1} |\sin \eta |^{-2-\beta } \lesssim h^{\beta } |\sin \eta |^{-1-\beta } . \end{aligned}$$

By substitution the term \(T_3\) transforms to

$$\begin{aligned} T_3(t)= & {} \int _{t}^{t+h} \partial _1G(t+h,a(\tau )) a^\prime (\tau ) \,d\tau \\= & {} \int _{t}^{t+h} \partial _1g_j(t+h,a(\tau ))\omega (t+h,a(\tau )) a^\prime (\tau ) \,d\tau \\&+ \int _{t}^{t+h} g_j(t+h,a(\tau )) \partial _1\omega (t+h,a(\tau ))a^\prime (\tau ) \,d\tau \\=: & {} T_{31}(t) + T_{32}(t). \end{aligned}$$

We apply \(\Delta _h\) to \(T_{31}\). Here \(\Delta _h\) acts exclusively on t and \(\tau \). We obtain

$$\begin{aligned} \Delta _h T_{31}(t)= & {} \int _{t}^{t+h} \Delta _h\big [ \partial _1g_j(t+h,a(\tau ))\omega (t+h,a(\tau )) a^\prime (\tau ) \big ] \,d\tau \\= & {} \int _{t}^{t+h} \Delta _h\partial _1g_j(t+h,a(\tau ))\omega (t+2h,a(\tau +h)) a^\prime (\tau +h)\,d\tau \\&+ \int _{t}^{t+h} \partial _1g_j(t+h,a(\tau )) \Delta _h\omega (t+h,a(\tau )) a^\prime (\tau +h) \,d\tau \\&+ \int _{t}^{t+h} \partial _1g_j(t+h,a(\tau ))\omega (t+h,a(\tau )) \Delta _ha^\prime (\tau ) \,d\tau \\=: & {} T^1_{31}(t)+ T^2_{31}(t) + T^3_{31}(t). \end{aligned}$$

Analogously, we decompose

$$\begin{aligned} \Delta _h T_{32}(t)&= \int _{t}^{t+h} \Delta _h\big [ g_j(t+h,a(\tau )) \partial _1\omega (t+h,a(\tau ))a^\prime (\tau ) \big ] \,d\tau \\&=: T^1_{32}(t)+ T^2_{32}(t) + T^3_{32}(t). \end{aligned}$$

Then we estimate, using the lemmas established above,

$$\begin{aligned} \Vert T^1_{31}\Vert _\infty&\lesssim h |\sin \eta |^{-1} 2^{-j} \big ( h^{\beta -1} + h^{\beta -1} |\sin \eta |^{1-\beta } \big ) \lesssim h^{\beta } |\sin \eta |^{-1-\beta }, \\ \Vert T^2_{31}\Vert _\infty&\lesssim h |\sin \eta |^{-1} 2^{-\alpha j} \big ( h + h |\sin \eta |^{-1} \big ) \lesssim h^{\beta } |\sin \eta |^{-1-\beta }, \\ \Vert T^3_{31}\Vert _\infty&\lesssim h 2^{-\alpha j} \Vert \Delta _h a^\prime \Vert _\infty \lesssim h^{\beta } |\sin \eta |^{-1-\beta } , \end{aligned}$$

and

$$\begin{aligned} \Vert T^1_{32}\Vert _\infty&\lesssim h |\sin \eta |^{-1} 2^{-\alpha j} \big ( h + h|\sin \eta |^{-1} \big ) \lesssim h^{\beta } |\sin \eta |^{-1-\beta }, \\ \Vert T^2_{32}\Vert _\infty&\lesssim h |\sin \eta |^{-1} \big ( h + h |\sin \eta |^{-1} \big ) \lesssim h^{\beta } |\sin \eta |^{-1-\beta }, \\ \Vert T^3_{32}\Vert _\infty&\lesssim h \Vert \Delta _h a^\prime \Vert _\infty \lesssim h^{\beta } |\sin \eta |^{-1-\beta }. \end{aligned}$$

Finally, we treat the term \(T_4\),

$$\begin{aligned} T_4(t)= & {} \int _{-\infty }^{a(t)} \Delta _h \partial _1G(t,u) \,du =\int _{-\infty }^{a(t)} \Delta _h \big (\partial _1g_j(t,u)\omega (t,u) + g_j(t,u)\partial _1\omega (t,u)\big ) \,du \\= & {} \int _{-\infty }^{a(t)} \Delta _h \partial _1g_j(t,u) \omega (t+h,u) \,du + \int _{-\infty }^{a(t)} \big ( \partial _1g_j(t,u) \Delta _h \omega (t,u)\\&+ \Delta _h g_j(t,u) \partial _1\omega (t+h,u) \big )\,du \\&+ \int _{-\infty }^{a(t)} g_j(t,u) \Delta _h\partial _1\omega (t,u) \,du =: T_{41}(t) + T_{42}(t) + T_{43}(t). \end{aligned}$$

The terms \(T_{41}\) and \(T_{42}\) can be estimated directly,

$$\begin{aligned} \Vert T_{41} \Vert _\infty&\lesssim h^{\beta -1}\cdot 2^{-j} \le h^{\beta } , \\ \Vert T_{42} \Vert _\infty&\lesssim h\cdot 2^{-\alpha j}\asymp 2^{-j}\le 2^{-j(\beta -1)}\asymp h^{\beta }. \end{aligned}$$
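The chains of (in)equalities for \(T_{41}\) and \(T_{42}\) rest on \(h\asymp 2^{-j(1-\alpha )}\) together with the relation \(\alpha =\frac{1}{\beta }\) and \(\beta \le 2\). Indeed, \(2^{-j}\le 2^{-j(1-\alpha )}\asymp h\) gives \(h^{\beta -1}\cdot 2^{-j}\le h^{\beta }\) in the first line, and for the second line

$$\begin{aligned} h\cdot 2^{-\alpha j}\asymp 2^{-j(1-\alpha )}\cdot 2^{-\alpha j}=2^{-j}\le 2^{-j(\beta -1)} \asymp 2^{-j(1-\alpha )\beta }\asymp h^{\beta }, \end{aligned}$$

since \((1-\alpha )\beta =\beta -1\) for \(\alpha =\frac{1}{\beta }\) and \(\beta -1\le 1\).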

The term \(T_{43}\) again needs some further preparation,

$$\begin{aligned} \Delta _h T_{43}(t)&= \int _{a(t)}^{a(t+h)} g_j(t+h,u)\Delta _h\partial _1\omega (t+h,u) \,du \\&\quad + \int _{-\infty }^{a(t)} \Delta _h\big (g_j(t,u)\Delta _h\partial _1\omega (t,u)\big ) \,du =: T^1_{43}(t) + T^2_{43}(t). \end{aligned}$$

In the end we arrive at

$$\begin{aligned} \Vert T^1_{43}\Vert _\infty&\lesssim h^2 |\sin \eta |^{-1}\lesssim h^{\beta } |\sin \eta |^{-1}, \\ \Vert T^2_{43}\Vert _\infty&\lesssim h^2 \lesssim h^{\beta } . \end{aligned}$$

Now we collect the appropriate terms and add them up to obtain \(S_1\) and \(S_2\). In a last step, we use our \(L^\infty \)-estimates to obtain the desired \(L^2\)-estimates. Here we use that \(|{\mathrm{supp }}T_i|\lesssim |I(\eta )| \lesssim |\sin \eta |\) according to Lemma 6.3 for \(i\in \{1,2,3\}\) and \(|{\mathrm{supp }}T_4|\lesssim 1\). This finishes the proof. \(\square \)

Appendix B: Refinement of Theorem 6.6

In this final section we prove Theorem 6.7, which is a refinement of Theorem 6.6. For that we need to analyze the modified edge fragment \(\widetilde{F}_{{j}}\), given for fixed \(m\in \mathbb {N}_0\) by

$$\begin{aligned} \widetilde{F}_{{j}}(x)=r(x)^{m}F_{{j}}(x), \quad x\in \mathbb {R}^2, \end{aligned}$$
(44)

where \(F_{{j}}\) is the function (23) and \(r:\mathbb {R}^2\rightarrow \mathbb {R}\) maps a vector \(x=(x_1,x_2)\in \mathbb {R}^2\) to its first component \(x_1\in \mathbb {R}\). Alternatively, (44) can be written as the product \(\widetilde{F}_{{j}}(x)= \widetilde{G}_{{j}}(x)\chi _{\{x_1\ge E_j(x_2)\}}\) with the function

$$\begin{aligned} \widetilde{G}_{{j}}(x):=r(x)^m G_{{j}}(x) =r(x)^m \omega (x) g_j(x), \quad x\in \mathbb {R}^2, \end{aligned}$$
(45)

which is a modified version of \(G_j(x)=g_j(x)\omega (x)\) from (29).

Rotating by the angle \(\eta \) yields \(\widetilde{G}_j^\eta (x)= (r^\eta (x))^m G_j^\eta (x) = (r^\eta (x))^m g_j^\eta (x) \omega ^\eta (x)\), where \(G_j^\eta \) and \(r^\eta \) are the functions obtained by rotating \(G_j\) and r, respectively. The function \(r^\eta :\mathbb {R}^2\rightarrow \mathbb {R}\) has the form

$$\begin{aligned} r^\eta (t,a):=t\cos \eta - a \sin \eta , \quad (t,a)\in \mathbb {R}^2. \end{aligned}$$
(46)

Some important properties of \(r^\eta \) and \(\widetilde{G}_{{j}}^\eta \) are collected below.

Estimates for \(r^\eta \)

First we analyze the function \(r^\eta :\mathbb {R}^2\rightarrow \mathbb {R}\) given by (46). Clearly \(r^{{\eta }}\in C^\infty (\mathbb {R}^2)\). Also note that \(r^{{\eta }}\) is not compactly supported. This causes no problems, however, since \(r^{{\eta }}\) occurs only as a factor in products with the compactly supported window \(\omega ^{{\eta }}\).

Thanks to the smoothness of \(r^{{\eta }}\) we have the following result.

Lemma 6.16

Let \(k,m\in \mathbb {N}_0\) and let \(K\subset \mathbb {R}^2\) be a compact set. Then we have

$$\begin{aligned} \Vert \Delta ^{k}_{(h,0)} {(r^{\eta })^m} \Vert _{L^\infty (K)} \lesssim h^{k}. \end{aligned}$$
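A short argument, not spelled out in the text, could run as follows: for \(f\in C^k\) the iterated mean value estimate gives \(\Vert \Delta ^k_{(h,0)}f\Vert _{L^\infty (K)}\le h^k\Vert \partial _1^kf\Vert _{L^\infty (K_h)}\), where \(K_h\) denotes (in our notation) the compact set \(K\) enlarged by the occurring translates \(K+[0,kh]\times \{0\}\). Since \((r^{\eta })^m\) is a polynomial, all its derivatives are bounded on compact sets, so for bounded \(h\)

$$\begin{aligned} \Vert \Delta ^{k}_{(h,0)} (r^{\eta })^m \Vert _{L^\infty (K)} \le h^{k}\, \Vert \partial _1^{k} (r^{\eta })^m \Vert _{L^\infty (K_h)} \lesssim h^{k}. \end{aligned}$$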

Along the edge curve the following estimates hold. Here \(\widetilde{I}(\eta )\) denotes the interval defined in (28).

Lemma 6.17

Let \(|\sin \eta |\ge 2\delta _j\). Then we have \(\sup _{t\in \widetilde{I}(\eta )}|r^{{\eta }}(t,a(t))|\lesssim \delta _j\). Moreover, for \(h\ge 0\),

$$\begin{aligned} \sup _{t\in \mathbb {R}}|\Delta _hr^{{\eta }}(t,a(t))|\lesssim h \quad \text {and}\quad \sup _{t\in \mathbb {R}}|\Delta ^2_hr^{{\eta }}(t,a(t))|\lesssim h^{\beta }\delta _j |\sin \eta |^{-\beta }. \end{aligned}$$

Proof

For every \(t\in \mathbb {R}\) the point \((t,a(t))\in \mathbb {R}^2\) in rotated coordinates lies on the (extended) edge curve \(\Gamma \). We know that the function \(E_j\) deviates little from zero and obeys \(\sup _{|x_2|\le 1}|E_j(x_2)|\le \delta _j \lesssim 2^{-j(1-\alpha )}\) according to (24). Furthermore, the slope of \(E_j\) outside of \([-1,1]\) is constant and bounded by \(\delta _j\). This yields the estimate \(\sup _{t\in \widetilde{I}(\eta )} |r^{{\eta }}(t,a(t))|\lesssim \delta _j\).

The other estimates follow from Lemma 6.2. In view of this lemma we conclude

$$\begin{aligned} \sup _{t\in \mathbb {R}}|\Delta _hr^{{\eta }}(t,a(t))|\le h\cdot \sup _{t\in \mathbb {R}}|\cos \eta - a^\prime (t)\sin \eta | \lesssim h |\sin \eta |^{-1} |\sin \eta | =h, \end{aligned}$$

and

$$\begin{aligned} \sup _{t\in \mathbb {R}}|\Delta ^2_hr^{{\eta }}(t,a(t))|\le h |\sin \eta | \Vert \Delta _ha^\prime \Vert _{\infty } \lesssim h^{\beta }\delta _j |\sin \eta |^{-\beta }. \end{aligned}$$

\(\square \)

Estimates for \(\widetilde{G}_{{j}}^\eta \)

The function \(\widetilde{G}_{{j}}^\eta \) is the rotated version of the function \(\widetilde{G}_j\) given in (45) as the product of the ‘elementary functions’ \(g_j\), \(\omega \), and r. Hence we can apply the previous estimates to obtain estimates for \(\widetilde{G}_{{j}}^\eta \).

Lemma 6.18

Let \(|\sin \eta |\ge 2\delta _j\), and let \(\widetilde{G}_{j}^\eta (t,a)=(r^\eta (t,a))^m G_{j}^\eta (t,a)\) for \((t,a)\in \mathbb {R}^2\) and \(m\in \mathbb {N}\), \(m\ge 1\). Then the following estimates hold:

$$\begin{aligned} \sup _{t\in \mathbb {R}}|\widetilde{G}^{{\eta }}_{{j}}(t,a(t))|&\lesssim \delta _j^m,&\sup _{t\in \mathbb {R}}|\Delta _h\widetilde{G}^{{\eta }}_{{j}}(t,a(t))|&\lesssim \delta _j^{m-1}h, \\ \sup _{t\in \mathbb {R}}|\partial _1\widetilde{G}^{{\eta }}_{{j}}(t,a(t))|&\lesssim \delta _j^{m-1},&\sup _{t\in \mathbb {R}}|\partial _2\widetilde{G}^{{\eta }}_{{j}}(t,a(t))|&\lesssim \delta _j^{m-1}|\sin \eta |. \end{aligned}$$

Proof

We omit the dependence on j and \(\eta \) and calculate for \((t,a)\in \mathbb {R}^2\)

$$\begin{aligned}&\partial _1\widetilde{G}(t,a)=\partial _1 \big ( r(t,a)^mG(t,a) \big ) = (\cos \eta ) m r(t,a)^{m-1} G(t,a) + r(t,a)^m \partial _1G(t,a),&\\ \text {and}&\partial _2\widetilde{G}(t,a)=\partial _2 \big ( r(t,a)^mG(t,a) \big ) = -(\sin \eta ) m r(t,a)^{m-1} G(t,a) + r(t,a)^m \partial _2G(t,a).&\end{aligned}$$

The assertion is then a consequence of the following facts: it holds \(\Vert G\Vert _\infty \lesssim 1\) and \(|r(t,a(t))|\le \delta _j\) for all \(t\in I(\eta )\); further, for \(t\notin I(\eta )\) the expressions \(G(t,a(t))\), \(\partial _1G(t,a(t))\), and \(\partial _2G(t,a(t))\) vanish. \(\square \)

Refinement of Lemma 6.4

In this subsection we prove the following generalization of Lemma 6.4.

Lemma 6.19

For \(m\in \mathbb {N}_0\) let \(\widetilde{F}_{{j}}\) be the modified edge fragment (44). Further, assume that \(|\sin \eta |\ge 2\delta _j\) and \(h\asymp 2^{-(1-\alpha )j}\). Then the function \(S:=\Delta _h\partial _1\mathcal {R}\widetilde{F}_{{j}}(\cdot ,\eta )\) admits a decomposition

$$\begin{aligned} S=S^0_1+S^0_2, \qquad \Delta _hS^{k}_2=S^{k+1}_1+S^{k+1}_2 \quad \text {for } k=0,\ldots ,m, \qquad \qquad (*) \end{aligned}$$

with the estimates

$$\begin{aligned} \Vert S^{k}_1\Vert _{2}^2&\lesssim 2^{-2jm(1-\alpha )} h^{2\beta } |\sin \eta |^{-1-2\beta } + 2^{-j(1-\alpha )(2\beta +1)},\qquad k=0,1,\ldots ,m+1. \end{aligned}$$

For convenience we set \(S_2^{m+1}=0\).

We introduce the following language: we say that a function S admits a decomposition \((S^k_1,S^k_2)_k\) of the form \((*)\) of length \(m+1\) with the estimates

$$\begin{aligned} \Vert S^{k}_1\Vert _{2}^2&\lesssim 2^{-2jm(1-\alpha )} h^{2\beta } |\sin \eta |^{-1-2\beta } + 2^{-j(1-\alpha )(2\beta +1)},\qquad k=0,1,\ldots ,m+1. \end{aligned}$$

Before we come to the proof of Lemma 6.19 we need to establish three important technical results.

Lemma 6.20

Let \(\widetilde{G}^\eta _j(x)=(r^\eta (x))^m G^\eta _j(x)\) for \(x\in \mathbb {R}^2\) and \(m\in \mathbb {N}_0\). Further, let \(h\asymp 2^{-j(1-\alpha )}\). The function \(T:\mathbb {R}\rightarrow \mathbb {R}\) defined by \(T(t)=a^\prime (t)\Delta _h \widetilde{G}^\eta _j(t,a(t))\) then admits a decomposition \((T^k_1,T^k_2)_k\) of the form \((*)\) of length \((m+1)\) with the estimates

$$\begin{aligned} \Vert T^{k}_1\Vert _\infty&\lesssim h^{m}h^{\beta } |\sin \eta |^{-1-\beta }, \quad k=0,\ldots ,m+1, \\ \Vert T^{k}_2\Vert _\infty&\lesssim h^{m}|\sin \eta |^{-1}, \quad k=0,\ldots ,m, \end{aligned}$$

and subject to the condition \({\mathrm{supp }}T^{k}_i \subset \widetilde{I}(\eta )\), where \(\widetilde{I}(\eta )\) is the interval from (28).

Proof

We prove this by induction on m. If \(m=0\) we put \(T^0_1=T_{21}\), \(T^0_2=T_{22}\), \(T^1_1=\Delta _h T_{22}\), and \(T^1_2=0\), with \(T_{21}\) and \(T_{22}\) as defined in the proof of Lemma 6.4. The estimates for \(T_{21}\) and \(\Delta _h T_{22}\) have been carried out there. In view of \(h\lesssim |\sin \eta |\) we can further estimate

$$\begin{aligned} \Vert T^0_2\Vert _\infty =\Vert T_{22}\Vert _\infty \lesssim h|\sin \eta |^{-2} \lesssim h^0|\sin \eta |^{-1}. \end{aligned}$$

This proves the case \(m=0\).

We proceed with the induction and assume that the lemma is true for T, where \(m\in \mathbb {N}_0\) is fixed but arbitrary. The associated decomposition of length \(m+1\) is denoted by \((T^k_1,T^k_2)_k\). We will show that, under this hypothesis, the function \(\widetilde{T}(t):=a^\prime (t)\Delta _h \widetilde{G}_+(t,a(t))\), where \(\widetilde{G}_+(x)=(r^\eta (x))^{m+1}G^\eta _j(x)\) for \(x\in \mathbb {R}^2\), also admits a decomposition \((\tilde{T}^k_1,\tilde{T}^k_2)_k\) of the form \((*)\) of length \(m+2\) with the desired properties.

Subsequently, we simplify the notation by omitting the indices \(\eta \) and j. First we decompose as follows,

$$\begin{aligned} \widetilde{T}(t)&=a^\prime (t) \Delta _h\big ( r(t,a(t)) \widetilde{G}(t,a(t))\big ) \\&= a^\prime (t) \Delta _hr(t,a(t)) \widetilde{G}(t+h, a(t+h)) + r(t,a(t)) a^\prime (t)\Delta _h\widetilde{G}(t, a(t)) \\&= \big [ r(t,a(t)) T^0_1(t) \big ] + \big [ a^\prime (t) \Delta _hr(t,a(t)) \widetilde{G}(t+h, a(t+h)) + r(t,a(t)) T^0_2(t)\big ] \\&=: \tilde{T}^0_1(t)+ \tilde{T}^0_2(t). \end{aligned}$$

In view of the properties of \(T^0_1\) and Lemma 6.17 we see that the function \( \tilde{T}^0_1\) satisfies the assertion. The estimate

$$\begin{aligned} \Vert \tilde{T}^0_2 \Vert _\infty \lesssim \Vert a^\prime \Vert _\infty \sup _{t\in \mathbb {R}}|\Delta _h r(t,a(t))| \sup _{t\in \mathbb {R}}|\widetilde{G}(t,a(t))| \lesssim |\sin \eta |^{-1} \cdot h \cdot \delta _j^{m}, \end{aligned}$$

where Lemmas 6.2, 6.17 and 6.18 were used, shows the claim also for \(\tilde{T}^0_2 \).

We take another forward difference of the component \(\tilde{T}^0_2\) and obtain

$$\begin{aligned} \Delta _h \tilde{T}^0_2(t)= & {} \Delta _h a^\prime (t) \Delta _hr(t+h,a(t+h)) \widetilde{G}(t+2h, a(t+2h)) \\&+ a^\prime (t) \Delta ^2_hr(t,a(t)) \widetilde{G}(t+2h, a(t+2h)) \\&+ \Delta _hr(t,a(t)) a^\prime (t) \Delta _h\widetilde{G}(t+h, a(t+h)) + \Delta _hr(t,a(t)) T^0_2(t+h)\\&+ r(t,a(t)) \Delta _hT^0_2(t) \\= & {} \Delta _h a^\prime (t) \Delta _hr(t+h,a(t+h)) \widetilde{G}(t+2h, a(t+2h)) \\&+ a^\prime (t) \Delta ^2_hr(t,a(t)) \widetilde{G}(t+2h, a(t+2h)) \\&+ \Delta _hr(t,a(t))( T^0_1(t+h) + T^0_2(t+h) )\\&- \Delta _hr(t,a(t))\Delta _ha^\prime (t)\Delta _h\widetilde{G}(t+h, a(t+h)) \\&+ \Delta _hr(t,a(t)) T^0_2(t+h) + r(t,a(t)) T^1_1(t) + r(t,a(t)) T^1_2(t) \\= & {} \big [ \Delta _h a^\prime (t) \Delta _hr(t+h,a(t+h)) \widetilde{G}(t+2h, a(t+2h)) \\&+ a^\prime (t) \Delta ^2_hr(t,a(t)) \widetilde{G}(t+2h, a(t+2h))\\&- \Delta _hr(t,a(t))\Delta _ha^\prime (t)\Delta _h\widetilde{G}(t+h, a(t+h)) + r(t,a(t)) T^1_1(t)\\&+ \Delta _hr(t,a(t)) T^0_1(t+h) \big ] \\&+ \big [ 2\Delta _hr(t,a(t)) T^0_2(t+h) + r(t,a(t)) T^1_2(t) \big ]\\=: & {} \tilde{T}^1_1(t)+ \tilde{T}^1_2(t). \end{aligned}$$

For \(\tilde{T}^1_1\) we check directly

$$\begin{aligned}&\sup _{t\in \mathbb {R}}|\Delta _h a^\prime (t) \Delta _hr(t,a(t)) \widetilde{G}(t+h, a(t+h))| \lesssim h^{\beta -1}\delta _j |\sin \eta |^{-1-\beta } h h^{m} \\&\quad = h^{m+1} h^{\beta } |\sin \eta |^{-1-\beta }, \\&\sup _{t\in \mathbb {R}}|a^\prime (t) \Delta ^2_hr(t,a(t)) \widetilde{G}(t+2h, a(t+2h))| \lesssim |\sin \eta |^{-1} h^{\beta }\delta _j |\sin \eta |^{-\beta } h^{m} \\&\quad = h^{m+1} h^{\beta } |\sin \eta |^{-1-\beta }, \\&\sup _{t\in \mathbb {R}}| \Delta _hr(t,a(t))\Delta _ha^\prime (t)\Delta _h\widetilde{G}(t+h, a(t+h))| \lesssim \delta _j^m h^\beta h |\sin \eta |^{-1-\beta }\\&\quad \lesssim h^{m+1} h^{\beta } |\sin \eta |^{-1-\beta }. \end{aligned}$$

The estimates for the remaining two terms are obvious. Hence \(\tilde{T}^1_1\) fulfills the desired properties.

For \(\tilde{T}^1_2\) we use the induction hypothesis and Lemma 6.17 to obtain

$$\begin{aligned} \Vert \tilde{T}^1_2\Vert _\infty \lesssim \sup _{t\in \mathbb {R}} |r(t,a(t))T^{1}_2(t)| + \sup _{t\in \mathbb {R}}|\Delta _hr(t,a(t))T^{0}_2(t+h)| \lesssim h^{m+1}|\sin \eta |^{-1}. \end{aligned}$$

Iterating this procedure yields, for \(k=1,\ldots ,m+1\), the terms

$$\begin{aligned} \tilde{T}^{k+1}_1(t)= & {} r(t,a(t))T^{k+1}_1(t) + (k+1)\Delta _hr(t+h,a(t+h))T^{k}_1(t+h) \\&+ (k+1)\Delta ^2_hr(t,a(t))T^{k-1}_2(t+h) + (k+1)\Delta ^2_hr(t,a(t))T^{k}_2(t+h), \\ \tilde{T}^k_2(t)= & {} r(t,a(t))T^{k}_2(t) + (k+1)\Delta _hr(t,a(t))T^{k-1}_2(t+h), \end{aligned}$$

which satisfy the desired estimates. Here we put \(T_1^{m+2}=T_2^{m+2}=0\) for convenience. Indeed, using the induction assumptions, we obtain

$$\begin{aligned} \Vert \tilde{T}^{k+1}_1\Vert _\infty\lesssim & {} \sup _{t\in \mathbb {R}}|r(t,a(t))T^{k+1}_1(t)| + \sup _{t\in \mathbb {R}}|\Delta _hr(t+h,a(t+h))T^{k}_1(t+h)| \\&+ \sup _{t\in \mathbb {R}}|\Delta ^2_hr(t,a(t))T^{k-1}_2(t+h)| + \sup _{t\in \mathbb {R}}|\Delta ^2_hr(t,a(t))T^{k}_2(t+h)| \\&\lesssim h^{m+1}h^{\beta } |\sin \eta |^{-1-\beta } , \\ \Vert \tilde{T}^k_2\Vert _\infty\lesssim & {} \sup _{t\in \mathbb {R}}|r(t,a(t))T^{k}_2(t)| + \sup _{t\in \mathbb {R}}|\Delta _hr(t,a(t))T^{k-1}_2(t+h)| \lesssim h^{m+1}|\sin \eta |^{-1}. \end{aligned}$$

Note that \(T_2^{m+1}=T_1^{m+2}=T_2^{m+2}=0\). Hence, for \(k=m+1\) these expressions read

$$\begin{aligned} \tilde{T}^{m+2}_1(t)&= (m+2)\Delta _hr(t+h,a(t+h))T^{m+1}_1(t+h) + (m+2)\Delta ^2_hr(t,a(t))T^{m}_2(t+h), \\ \tilde{T}^{m+1}_2(t)&= (m+2)\Delta _hr(t,a(t))T^{m}_2(t+h). \end{aligned}$$

Since \(\Delta _h \tilde{T}^{m+1}_2=\tilde{T}^{m+2}_1\) we have \(\tilde{T}^{m+2}_2=0\) and the proof is finished. \(\square \)

The following Lemma 6.21 is in the same spirit as Lemma 6.20.

Lemma 6.21

Let \(\widetilde{G}^\eta _j(x)=(r^\eta (x))^m G^\eta _j(x)\) for \(x\in \mathbb {R}^2\), \(m\in \mathbb {N}_0\), and \(h\asymp 2^{-j(1-\alpha )}\). Then the function \(S:\mathbb {R}\rightarrow \mathbb {R}\) defined by \(S(t)=a^\prime (t) \Delta _h\partial _1\widetilde{G}^\eta _j(t,a(t)) \) admits a decomposition \((S^k_1,S^k_2)_k\) of the form \((*)\) of length \(m+1\) with estimates

$$\begin{aligned} \Vert S^{k}_1\Vert _\infty&\lesssim h^{m-1}h^{\beta } |\sin \eta |^{-1-\beta }, \quad k=0,\ldots ,m+1, \\ \Vert S^{k}_2\Vert _\infty&\lesssim h^{m-1}|\sin \eta |^{-1}, \quad k=0,\ldots ,m. \end{aligned}$$

Moreover, these functions can be chosen such that \({\mathrm{supp }}S^{k}_i \subset \widetilde{I}(\eta )\) with \(\widetilde{I}(\eta )\) from (28).

Proof

The proof is by induction on m. To enhance readability we again omit the indices \(\eta \) and j. The assertions are clearly true for \(m=0\).

For the induction step we fix \(m\in \mathbb {N}_0\) and let \(S\) be the function defined in the statement of the lemma. Further, let us assume that we have a decomposition \((S^k_1,S^k_2)_k\) of length \(m+1\) with the desired properties for \(S\). We put \(S_2^{m+1}=0\) and for convenience we also define \(S_1^{m+2}=S_2^{m+2}=0\). We will show that under these assumptions the function \(\widetilde{S}:\mathbb {R}\rightarrow \mathbb {R}\) given by \(\widetilde{S}(t):=a^\prime (t)\Delta _h\partial _1\widetilde{G}_+(t,a(t))\), where \(\widetilde{G}_+(x)=r(x)^{m+1} G(x)\) for \(x\in \mathbb {R}^2\), admits a decomposition \((\tilde{S}^k_1,\tilde{S}^k_2)_k\) of length \(m+2\) of the same form. First we calculate

$$\begin{aligned} \widetilde{S}(t)&=a^\prime (t) \Delta _h\partial _1\widetilde{G}_+(t,a(t)) =a^\prime (t)\Delta _h\big ( \cos \eta \widetilde{G}(t,a(t)) + r(t,a(t)) \partial _1\widetilde{G}(t,a(t)) \big ) \\&= a^\prime (t)\cos \eta \Delta _h \widetilde{G}(t,a(t)) + a^\prime (t) \Delta _hr(t,a(t)) \partial _1\widetilde{G}(t,a(t))\\&\quad + r(t+h,a(t+h))a^\prime (t)\Delta _h\partial _1\widetilde{G}(t,a(t)). \end{aligned}$$

Using the induction hypothesis we can proceed,

$$\begin{aligned} \widetilde{S}(t)= & {} \big [ r(t+h,a(t+h)) S^0_1(t) \big ] \\&+ \big [ r(t+h,a(t+h)) S^0_2(t) + a^\prime (t) \cos \eta \Delta _h \widetilde{G}(t,a(t))\\&+ a^\prime (t) \Delta _hr(t,a(t)) \partial _1\widetilde{G}(t,a(t)) \big ] \\=: & {} \tilde{S}^0_1(t) + \tilde{S}^0_2(t). \end{aligned}$$

The terms \(\tilde{S}^0_1\) and \(\tilde{S}^0_2\) have the desired properties, which follows from the estimates

$$\begin{aligned} \sup _{t\in \mathbb {R}}|r(t+h,a(t+h))S^0_1(t)|&\lesssim h h^{m-1} h^{\beta } |\sin \eta |^{-1-\beta }, \\ \sup _{t\in \mathbb {R}}|r(t+h,a(t+h))S^0_2(t)|&\lesssim h h^{m-1} |\sin \eta |^{-1}, \\ \sup _{t\in \mathbb {R}}|a^\prime (t) \cos \eta \Delta _h\widetilde{G}(t,a(t))|&\lesssim |\sin \eta |^{-1} \delta _j^{m-1} h, \\ \sup _{t\in \mathbb {R}}|a^\prime (t) \Delta _hr(t,a(t)) \partial _1\widetilde{G}(t,a(t))|&\lesssim |\sin \eta |^{-1} h \delta _j^{m-1}. \end{aligned}$$

Taking another forward difference of \(\tilde{S}^0_2\) yields

$$\begin{aligned} \Delta _h \tilde{S}^0_2(t)= & {} \Delta _hr(t+h,a(t+h))S^0_2(t) + r(t+2h,a(t+2h))\Delta _hS^0_2(t)\\&+\Delta _ha^\prime (t)\cos \eta \Delta _h \widetilde{G}(t,a(t)) \\&+ a^\prime (t+h) \cos \eta \Delta _h^2 \widetilde{G}(t,a(t)) + \Delta _ha^\prime (t) \Delta _h r(t,a(t)) \partial _1\widetilde{G}(t,a(t)) \\&+ a^\prime (t+h)\Delta _h^2r(t,a(t)) \partial _1\widetilde{G}(t,a(t)) \\&+ a^\prime (t+h)\Delta _hr(t+h,a(t+h))\Delta _h\partial _1\widetilde{G}(t,a(t)). \end{aligned}$$

Let T denote the function from Lemma 6.20. We observe,

$$\begin{aligned} a^\prime (t+h) \Delta _h^2 \widetilde{G}(t,a(t))&= a^\prime (t+h) \big ( \Delta _h \widetilde{G}(t+h,a(t+h)) - \Delta _h \widetilde{G}(t,a(t)) \big ) \\&= a^\prime (t+h) \Delta _h\widetilde{G}(t+h,a(t+h)) -a^\prime (t) \Delta _h\widetilde{G}(t,a(t)) \\&\quad + (a^\prime (t) - a^\prime (t+h)) \Delta _h\widetilde{G}(t,a(t)) \\&= a^\prime (t+h) \Delta _h\widetilde{G}(t+h,a(t+h)) -a^\prime (t) \Delta _h\widetilde{G}(t,a(t)) \\&\quad - \Delta _h a^\prime (t) \Delta _h\widetilde{G}(t,a(t)) \\&= T(t+h) - T(t) - \Delta _h a^\prime (t) \Delta _h\widetilde{G}(t,a(t)) = \Delta _hT(t)\\&\quad - \Delta _h a^\prime (t) \Delta _h\widetilde{G}(t,a(t)). \end{aligned}$$

By Lemma 6.20 there is a decomposition \((T^k_1,T^k_2)_k\) of T of length \(m+1\) with the specific properties given there. This allows us to decompose \(\Delta _hT=\Delta _hT^0_1+T^1_1+T^1_2\), and we obtain

$$\begin{aligned} a^\prime (t+h) \Delta _h^2 \widetilde{G}(t,a(t)) = \Delta _hT^0_1(t)+T^1_1(t)+T^1_2(t)- \Delta _h a^\prime (t) \Delta _h\widetilde{G}(t,a(t)). \end{aligned}$$
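Here we used the defining property of decompositions of the form \((*)\), namely \(T=T^0_1+T^0_2\) and \(\Delta _hT^k_2=T^{k+1}_1+T^{k+1}_2\), so that

$$\begin{aligned} \Delta _hT = \Delta _hT^0_1 + \Delta _hT^0_2 = \Delta _hT^0_1 + T^1_1 + T^1_2. \end{aligned}$$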

Using this observation we obtain

$$\begin{aligned} \Delta _h \tilde{S}^0_2(t)= & {} \Delta _hr(t+h,a(t+h))S^0_2(t)+ r(t+2h,a(t+2h))S^1_1(t)\\&+ r(t+2h,a(t+2h))S^1_2(t) \\&+ \Delta _ha^\prime (t)\cos \eta \Delta _h \widetilde{G}(t,a(t)) +\cos \eta \Delta _hT^0_1(t) + \cos \eta (T^1_1(t)+T^1_2(t)) \\&- \cos \eta \Delta _h a^\prime (t) \Delta _h\widetilde{G}(t,a(t)) \\&+ \Delta _ha^\prime (t) \Delta _h r(t,a(t)) \partial _1\widetilde{G}(t,a(t)) + a^\prime (t+h)\Delta _h^2r(t,a(t)) \partial _1\widetilde{G}(t,a(t)) \\&+ a^\prime (t)\Delta _hr(t+h,a(t+h))\Delta _h\partial _1\widetilde{G}(t,a(t)) + (a^\prime (t+h)\\&-a^\prime (t))\Delta _hr(t+h,a(t+h))\Delta _h\partial _1\widetilde{G}(t,a(t)) \end{aligned}$$

and further

$$\begin{aligned} \Delta _h \tilde{S}^0_2(t)= & {} \big [ r(t+2h,a(t+2h))S^1_1(t) + \Delta _ha^\prime (t)\cos \eta \Delta _h \widetilde{G}(t,a(t)) \\&+ \cos \eta \Delta _hT^0_1(t) - \cos \eta \Delta _h a^\prime (t) \Delta _h\widetilde{G}(t,a(t)) + \cos \eta T^1_1(t) \\&+ \Delta _ha^\prime (t) \Delta _h r(t,a(t)) \partial _1\widetilde{G}(t,a(t)) + a^\prime (t+h)\Delta _h^2r(t,a(t)) \partial _1\widetilde{G}(t,a(t)) \\&+ \Delta _ha^\prime (t)\Delta _hr(t+h,a(t+h))\Delta _h\partial _1\widetilde{G}(t,a(t)) + \Delta _hr(t+h,a(t+h))S^0_1(t) \big ] \\&+ \big [ r(t+2h,a(t+2h))S^1_2(t) + \cos \eta T^1_2(t) + 2\Delta _hr(t+h,a(t+h))S^0_2(t) \big ] \\=: & {} \tilde{S}^1_1(t)+\tilde{S}^1_2(t). \end{aligned}$$

Now we can split \(\Delta _h \tilde{S}^0_2=\tilde{S}^1_1+\tilde{S}^1_2\) with

$$\begin{aligned} \tilde{S}^1_1(t)= & {} r(t+2h,a(t+2h))S^1_1(t) +\cos \eta \Delta _ha^\prime (t)\Delta _h\widetilde{G}(t,a(t)) \\&+ \Delta _ha^\prime (t) \Delta _hr(t,a(t)) \partial _1\widetilde{G}(t,a(t)) \\&+ a^\prime (t+h)\Delta _h^2r(t,a(t)) \partial _1\widetilde{G}(t,a(t)) + \Delta _hr(t+h,a(t+h)) S^0_1(t) + \cos \eta \Delta _hT^0_1(t) \\&+ \Delta _ha^\prime (t)\Delta _hr(t+h,a(t+h))\Delta _h\partial _1\widetilde{G}(t,a(t)) - \cos \eta \Delta _h a^\prime (t) \Delta _h\widetilde{G}(t,a(t)) \\&+ \cos \eta T^1_1(t) , \\ \tilde{S}^1_2(t)= & {} 2\Delta _hr(t+h,a(t+h))S^0_2(t) + r(t+2h,a(t+2h))S^1_2(t) + \cos \eta T^1_2(t). \end{aligned}$$

These terms have the desired properties. To see this, we calculate

$$\begin{aligned} \sup _{t\in \mathbb {R}}|r(t+2h,a(t+2h))S^1_1(t)|&\lesssim h h^{m-1} h^{\beta } |\sin \eta |^{-1-\beta }, \\ \sup _{t\in \mathbb {R}}|\Delta _ha^\prime (t)\Delta _h\widetilde{G}(t,a(t))|&\lesssim \delta _j h^{\beta -1} |\sin \eta |^{-1-\beta } \cdot \delta _j^{m-1}h, \\ \sup _{t\in \mathbb {R}}|\Delta _ha^\prime (t) \Delta _h r(t,a(t)) \partial _1\widetilde{G}(t,a(t))|&\lesssim \delta _j h^{\beta -1}|\sin \eta |^{-1-\beta } \cdot h \cdot \delta _j^{m-1}, \\ \sup _{t\in \mathbb {R}}|a^\prime (t+h)\Delta _h^2r(t,a(t)) \partial _1\widetilde{G}(t,a(t))|&\lesssim |\sin \eta |^{-1} \cdot h^{\beta } \delta _j |\sin \eta |^{-\beta } \cdot \delta _j^{m-1}, \\ \sup _{t\in \mathbb {R}}|\Delta _hr(t+h,a(t+h)) S^0_1(t)|&\lesssim h h^{m-1} h^{\beta } |\sin \eta |^{-1-\beta }, \\ \sup _{t\in \mathbb {R}}|\Delta _hT^0_1(t)|&\lesssim h^{m} h^{\beta } |\sin \eta |^{-1-\beta }, \\ \sup _{t\in \mathbb {R}}|\Delta _h a^\prime (t) \Delta _h\widetilde{G}(t,a(t))|&\lesssim \delta _j h^{\beta -1} |\sin \eta |^{-1-\beta } \cdot \delta _j^{m-1}h, \\ \sup _{t\in \mathbb {R}}|\Delta _ha^\prime (t)\Delta _hr(t+h,a(t+h))\Delta _h\partial _1\widetilde{G}(t,a(t)) |&\lesssim \delta _j h^{\beta -1} |\sin \eta |^{-1-\beta } \cdot h \cdot \delta _j^{m-1}, \\ \sup _{t\in \mathbb {R}}|T^1_1(t)|&\lesssim \delta _j^m h^{\beta } |\sin \eta |^{-1-\beta }, \end{aligned}$$

and

$$\begin{aligned} \sup _{t\in \mathbb {R}}|2\Delta _hr(t+h,a(t+h))S^0_2(t)|&\lesssim h h^{m-1} |\sin \eta |^{-1}, \\ \sup _{t\in \mathbb {R}}|r(t+2h,a(t+2h))S^1_2(t)|&\lesssim h h^{m-1} |\sin \eta |^{-1}, \\ \sup _{t\in \mathbb {R}}|T^1_2(t)|&\lesssim h h^{m-1} |\sin \eta |^{-1}. \end{aligned}$$

We proceed with

$$\begin{aligned} \Delta _h\tilde{S}^1_2(t)= & {} \big [ \cos \eta T^2_1(t)+ r(t+3h,a(t+3h))S^2_1(t) + 2\Delta _hr(t+2h,a(t+2h))S^1_1(t) \\&+ 2\Delta _h^2r(t+h,a(t+h)) S^0_2(t) \big ] + \big [ \cos \eta T^2_2(t) + r(t+3h,a(t+3h))S^2_2(t) \\&+ 3\Delta _hr(t+2h,a(t+2h))S^1_2(t) \big ] =: \tilde{S}^2_1(t) + \tilde{S}^2_2(t). \end{aligned}$$

Inductively, we put for \(k=1,\ldots ,m+1\), where for convenience \(T^{m+2}_1=0\),

$$\begin{aligned} \tilde{S}^{k+1}_1(t):= & {} \cos \eta T^{k+1}_1(t) + r(t+(k+2)h,a(t+(k+2)h))S^{k+1}_1(t) \\&+ (k+1)\Delta _hr(t+(k+1)h,a(t+(k+1)h))S^k_1(t) \\&+ (k+1)\Delta _h^2r(t+kh,a(t+kh)) S^{k-1}_2(t), \\ \tilde{S}^k_2(t):= & {} \cos \eta T^{k}_2(t) + r(t+(k+1)h,a(t+(k+1)h))S^{k}_2(t) \\&+ (k+1)\Delta _hr(t+kh,a(t+kh))S^{k-1}_2(t). \end{aligned}$$

These terms clearly satisfy \(\Delta _h \tilde{S}^k_2 = \tilde{S}^{k+1}_1 + \tilde{S}^{k+1}_2\). They also have the desired properties since

$$\begin{aligned} \sup _{t\in \mathbb {R}}|T_1^{k+1}(t)|&\lesssim h^{m} h^{\beta } |\sin \eta |^{-1-\beta }, \\ \sup _{t\in \mathbb {R}}| r(t+(k+2)h,a(t+(k+2)h))S^{k+1}_1(t)|&\lesssim h\cdot h^{m-1} h^{\beta } |\sin \eta |^{-1-\beta }, \\ \sup _{t\in \mathbb {R}}|\Delta _hr(t+(k+1)h,a(t+(k+1)h))S^k_1(t)|&\lesssim h\cdot h^{m-1} h^{\beta } |\sin \eta |^{-1-\beta }, \\ \sup _{t\in \mathbb {R}}|\Delta _h^2r(t+kh,a(t+kh)) S^{k-1}_2(t)|&\lesssim h^{\beta } \delta _j |\sin \eta |^{-\beta } \cdot h^{m-1} |\sin \eta |^{-1}, \end{aligned}$$

and

$$\begin{aligned} \sup _{t\in \mathbb {R}}|T^{k}_2(t)|&\lesssim h^{m} |\sin \eta |^{-1},\\ \sup _{t\in \mathbb {R}}|r(t+(k+1)h,a(t+(k+1)h))S^{k}_2(t)|&\lesssim h \cdot h^{m-1} |\sin \eta |^{-1},\\ \sup _{t\in \mathbb {R}}|\Delta _hr(t+kh,a(t+kh))S^{k-1}_2(t)|&\lesssim h \cdot h^{m-1} |\sin \eta |^{-1}. \end{aligned}$$

Since \(S_2^{m+1}=S_1^{m+2}=S_2^{m+2}=T_2^{m+1}=T_1^{m+2}=0\), for \(k=m+1\) these expressions read

$$\begin{aligned} \tilde{S}^{m+2}_1(t)= & {} (m+2)\Delta _hr(t+(m+2)h,a(t+(m+2)h)) S^{m+1}_1(t) \\&+(m+2)\Delta ^2_hr(t+(m+1)h,a(t+(m+1)h)) S^{m}_2(t), \\ \tilde{S}^{m+1}_2(t)= & {} (m+2)\Delta _hr(t+(m+1)h,a(t+(m+1)h)) S^{m}_2(t). \end{aligned}$$

We see that \(\Delta _h\tilde{S}^{m+1}_2=\tilde{S}^{m+2}_1\). Therefore \(\tilde{S}^{m+2}_2=0\) and the proof is finished. \(\square \)

A slight modification of the previous proof leads to the following lemma.

Lemma 6.22

Let \(\widetilde{G}^\eta _j(x)=(r^\eta (x))^m G^\eta _j(x)\) for \(x\in \mathbb {R}^2\) and \(m\in \mathbb {N}_0\) and \(h\asymp 2^{-j(1-\alpha )}\). The function \(\widetilde{S}:\mathbb {R}^2\rightarrow \mathbb {R}\) given by \(\widetilde{S}(t,\tau )=a^\prime (\tau ) \Delta _h\partial _1\widetilde{G}^\eta _j(t,a(\tau )) \) for \((t,\tau )\in \mathbb {R}^2\) admits a decomposition \((\tilde{S}^k_1,\tilde{S}^k_2)_k\) of the form \((*)\) of length \(m+1\) with estimates

$$\begin{aligned} \sup _{t\in \mathbb {R}}\sup _{\tau \in [t-h,t+h]} |\tilde{S}^{k}_1(t,\tau )|&\lesssim h^{m-1}h^{\beta } |\sin \eta |^{-1-\beta }, \quad k=0,\ldots ,m+1, \\ \sup _{t\in \mathbb {R}}\sup _{\tau \in [t-h,t+h]} |\tilde{S}^{k}_2(t,\tau )|&\lesssim h^{m-1}|\sin \eta |^{-1}, \quad k=0,\ldots ,m. \end{aligned}$$

Proof

A small adaptation of the previous proof is required to account for the slight deviation of \(\tau \) from \(t\). We only make the following remark. For \(t,\,\tau \in \mathbb {R}\) we have \( r^{{\eta }}(t,a(\tau ))=r^{{\eta }}(t,a(t))+(a(t)-a(\tau ))\sin \eta . \) It follows that for \(h\in \mathbb {R}\)

$$\begin{aligned} \sup _{\tau \in [t-h,t+h]}|r^{{\eta }}(t,a(\tau ))| \le |r^{{\eta }}(t,a(t))| + |h\sin \eta |\Vert a^\prime \Vert _\infty \lesssim |r^{{\eta }}(t,a(t))| + |h|. \end{aligned}$$

Since \(h\asymp 2^{-j(1-\alpha )}\), this additional term poses no problem in the estimates. \(\square \)

Finally, we have all the tools available to give the proof of Lemma 6.19.

Proof of Lemma 6.19

We have \(\widetilde{G}_{{j}}^{\eta }(x)=(r^\eta (x))^{m}G_{{j}}^{\eta }(x)\) for \(x\in \mathbb {R}^2\) and, analogously to (30),

$$\begin{aligned} \mathcal {R}\widetilde{F}_{{j}}(t,\eta )= \int _{-\infty }^{a(t,\eta )} \widetilde{G}_{{j}}^{\eta }(t,u) \,du . \end{aligned}$$

For simplicity we subsequently omit the superscript \(\eta \), and also the index \(j\) wherever possible. As in the proof of Lemma 6.4, we obtain

$$\begin{aligned} S(t)&=\Delta _h a^\prime (t) \widetilde{G}(t+h,a(t+h))+ a^\prime (t) \Delta _h \widetilde{G}(t,a(t)) + \int _{a(t)}^{a(t+h)} \partial _1\widetilde{G}(t+h,u) \,du\\&\quad + \int _{-\infty }^{a(t)} \Delta _{(h,0)} \partial _1\widetilde{G}(t,u) \,du \\&=: \widetilde{T}_1(t)+\widetilde{T}_2(t)+\widetilde{T}_3(t)+\widetilde{T}_4(t). \end{aligned}$$

We will show the assertion for each of these terms separately. Moreover, it suffices to prove \(L^\infty \)-estimates, which can be transformed to the desired \(L^2\)-estimates via the corresponding support properties. Note that \(|{\mathrm{supp }}\widetilde{T}_i|\lesssim |\widetilde{I}(\eta )| \lesssim |\sin \eta |\) according to Lemma 6.3 for \(i\in \{1,2,3\}\) and that \(|{\mathrm{supp }}\widetilde{T}_4|\lesssim 1\).

For \(\widetilde{T}_1\) the estimate

$$\begin{aligned} \Vert \widetilde{T}_1 \Vert _\infty\le & {} \Vert \Delta _h a^\prime \Vert _\infty \sup _{t\in \mathbb {R}}| \widetilde{G}(t,a(t))| \lesssim \Vert \Delta _h a^\prime \Vert _\infty \sup _{t\in I(\eta )}|r(t,a(t))|^{m} \\\lesssim & {} \delta ^{m+1}_j h^{\beta -1} |\sin \eta |^{-1-\beta } \lesssim h^m h^{\beta } |\sin \eta |^{-1-\beta } \end{aligned}$$

is sufficient. Next, we show that \(\widetilde{T}_2\) and \(\widetilde{T}_3\) admit decompositions \((\widetilde{T}^k_1,\widetilde{T}^k_2)_k\) of the form \((*)\) of length \(m+1\) with \({\mathrm{supp }}\widetilde{T}^k_i\subset \widetilde{I}(\eta )\), \(i\in \{1,2\}\), and the estimates

$$\begin{aligned} \Vert \widetilde{T}^{k}_1\Vert _\infty&\lesssim h^{m}h^{\beta } |\sin \eta |^{-1-\beta }, \quad k=0,\ldots ,m+1, \\ \Vert \widetilde{T}^{k}_2\Vert _\infty&\lesssim h^{m}|\sin \eta |^{-1}, \quad k=0,\ldots ,m. \end{aligned}$$

The decomposition of the component \(\widetilde{T}_2\) is provided by Lemma 6.20. Let us turn to \(\widetilde{T}_3\). By substitution this term transforms to

$$\begin{aligned} \widetilde{T}_3(t)&=\int _{t}^{t+h} \partial _1\widetilde{G}(t+h,a(u)) a^\prime (u) \,du. \end{aligned}$$

We put \(\widetilde{T}^0_1=0\) and \(\widetilde{T}^0_2=\widetilde{T}_3\). These terms clearly satisfy the assertions. Next we take the forward difference of \(\widetilde{T}^0_2\). Here \(\Delta _h\) acts on both t and \(\tau \). We obtain

$$\begin{aligned} \Delta _h \widetilde{T}^0_2(t) = \Delta _h \widetilde{T}_3(t)= & {} \int _{t}^{t+h} \Delta _h\big (\partial _1\widetilde{G}(t+h,a(\tau )) a^\prime (\tau )\big ) \,d\tau \\= & {} \int _{t}^{t+h} \Delta _h\partial _1\widetilde{G}(t+h,a(\tau )) a^\prime (\tau ) \,d\tau \\&+ \int _{t}^{t+h} \partial _1\widetilde{G}(t+2h,a(\tau +h))\Delta _ha^\prime (\tau ) \,d\tau \\=: & {} \widetilde{T}_{31}(t) + \widetilde{T}_{32}(t). \end{aligned}$$

Lemma 6.22 then yields a decomposition \((\widetilde{S}^k_1,\widetilde{S}^k_2)_k\), such that we can write

$$\begin{aligned} \Delta _h\partial _1\widetilde{G}(t+h,a(\tau )) a^\prime (\tau )=\widetilde{S}^0_1(t,\tau )+\widetilde{S}^0_2(t,\tau ). \end{aligned}$$

This leads to

$$\begin{aligned} \widetilde{T}_{31}(t)=\int _{t}^{t+h} \Delta _h\partial _1\widetilde{G}(t+h,a(\tau )) a^\prime (\tau ) \,d\tau =\int _{t}^{t+h} \widetilde{S}^0_1(t,\tau ) \,d\tau + \int _{t}^{t+h} \widetilde{S}^0_2(t,\tau ) \,d\tau . \end{aligned}$$

We put \( \widetilde{T}^1_1(t):= \widetilde{T}_{32}(t) + \int _{t}^{t+h} \widetilde{S}^0_1(t,\tau ) \,d\tau \) and \(\widetilde{T}^1_2(t):= \int _{t}^{t+h} \widetilde{S}^0_2(t,\tau ) \,d\tau \). These terms \(\widetilde{T}^1_1\) and \(\widetilde{T}^1_2\) then satisfy the requirements, i.e.,

$$\begin{aligned} \Vert \widetilde{T}^1_1\Vert _\infty&\lesssim \Vert \widetilde{T}_{32}\Vert _\infty + h\cdot \sup _{t\in \mathbb {R}}\sup _{\tau \in [t,t+h]} |\widetilde{S}^0_1(t,\tau )| \lesssim h \cdot \delta _j^{m-1} \cdot h^{\beta -1}\delta _j|\sin \eta |^{-1-\beta }, \\ \Vert \widetilde{T}^1_2\Vert _\infty&\lesssim h \sup _{t\in \mathbb {R}}\sup _{\tau \in [t,t+h]}|\widetilde{S}^0_2(t,\tau )| \lesssim h^{m} |\sin \eta |^{-1}. \end{aligned}$$

Taking another forward difference of \(\widetilde{T}^1_2\) yields

$$\begin{aligned} \Delta _h \widetilde{T}^1_2(t) = \int _t^{t+h} \Delta _h \widetilde{S}^0_2(t,\tau ) \,d\tau . \end{aligned}$$

Proceeding inductively from here with Lemma 6.22 settles the claim for the component \(\widetilde{T}_3\).

Finally, we turn to the function \(\widetilde{T}_4(t)=\int _{-\infty }^{a(t)} \Delta _{(h,0)} \partial _1\widetilde{G}(t,u)\,du\). First, we calculate

$$\begin{aligned}&\Delta _h \widetilde{T}_4(t) = \int _{a(t)}^{a(t+h)} \Delta _{(h,0)} \partial _1\widetilde{G}(t+h,u)\,du + \int _{-\infty }^{a(t)} \Delta ^2_{(h,0)} \partial _1\widetilde{G}(t,u)\,du =: \widetilde{T}_{41}(t) + \widetilde{T}_{42}(t)&\\ \text {and}&\Delta _h \widetilde{T}_{42}(t) = \int _{a(t)}^{a(t+h)} \Delta ^2_{(h,0)} \partial _1\widetilde{G}(t+h,u)\,du + \int _{-\infty }^{a(t)} \Delta ^3_{(h,0)} \partial _1\widetilde{G}(t,u)\,du =: \widetilde{T}_{43}(t) + \widetilde{T}_{44}(t).&\end{aligned}$$
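Both splittings are instances of the elementary identity

$$\begin{aligned} \Delta _h \int _{-\infty }^{a(t)} H(t,u)\,du = \int _{a(t)}^{a(t+h)} H(t+h,u)\,du + \int _{-\infty }^{a(t)} \Delta _{(h,0)} H(t,u)\,du, \end{aligned}$$

applied with \(H=\Delta _{(h,0)}\partial _1\widetilde{G}\) and \(H=\Delta ^2_{(h,0)}\partial _1\widetilde{G}\), respectively.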

Next, we show that \(\Vert \widetilde{T}_{44}\Vert _\infty \lesssim h^{\beta +\frac{1}{2}}\); this suffices because, in view of \(|{\mathrm{supp }}\widetilde{T}_{44}|\asymp 1\),

$$\begin{aligned} \Vert \widetilde{T}_{44}\Vert ^2_{2}\lesssim h^{2\beta +1}\asymp 2^{-j(1-\alpha )(2\beta +1)}. \end{aligned}$$

The \(L^\infty \)-estimate of the term \(\widetilde{T}_{44}\) relies on the fact that, for \(h\asymp 2^{-j(1-\alpha )}\),

$$\begin{aligned} \Vert \Delta ^3_{(h,0)}\partial _1\widetilde{G}\Vert _\infty \lesssim h^{\beta }2^{-\alpha j} \lesssim h^{\beta +1}. \end{aligned}$$

This estimate is a consequence of Lemmas 6.12, 6.14, and 6.16 and is analogous to (18). The essential observation is that, since \(\alpha \ge \frac{1}{2}\), Lemma 6.12 yields

$$\begin{aligned} \Vert \Delta _{(h,0)}\partial _1g_j\Vert _\infty \lesssim 2^{-\alpha j} h^{\beta } \lesssim h^{\beta +1}. \end{aligned}$$

Finally, we take care of the remaining terms \(\widetilde{T}_{41}\) and \(\widetilde{T}_{43}\). First we note that \(|{\mathrm{supp }}\widetilde{T}_{41}|\lesssim |\widetilde{I}(\eta )| \lesssim |\sin \eta |\) and also \(|{\mathrm{supp }}\widetilde{T}_{43}|\lesssim |\widetilde{I}(\eta )| \lesssim |\sin \eta |\) according to Lemma 6.3. Hence, it suffices to prove \(\Vert \widetilde{T}_{41}\Vert _\infty \lesssim h^mh^\beta |\sin \eta |^{-1-\beta } \) and \(\Vert \widetilde{T}_{43}\Vert _\infty \lesssim h^mh^\beta |\sin \eta |^{-1-\beta }\). We have

$$\begin{aligned} \Vert \widetilde{T}_{41}\Vert _\infty\le & {} \sup _{t\in \mathbb {R}}\Big |\int _{a(t)}^{a(t+h)} \partial _1\widetilde{G}(t+2h,u)\,du\Big | + \sup _{t\in \mathbb {R}}\Big |\int _{a(t)}^{a(t+h)} \partial _1\widetilde{G}(t+h,u)\,du\Big |. \end{aligned}$$

Analogously, we have

$$\begin{aligned} \Vert \widetilde{T}_{43}\Vert _\infty\le & {} \sup _{t\in \mathbb {R}}\Big |\int _{a(t)}^{a(t+h)} \partial _1\widetilde{G}(t+3h,u)\,du\Big | + 2\sup _{t\in \mathbb {R}}\Big |\int _{a(t)}^{a(t+h)} \partial _1\widetilde{G}(t+2h,u)\,du\Big | \\&+ \sup _{t\in \mathbb {R}}\Big |\int _{a(t)}^{a(t+h)} \partial _1\widetilde{G}(t+h,u)\,du\Big |. \end{aligned}$$

All these terms on the right hand side can be estimated in the same way as

$$\begin{aligned} \widetilde{T}_{3}(t)=\int _{t}^{t+h} \partial _1\widetilde{G}(t+h,a(u)) a^\prime (u) \,du. \end{aligned}$$

This finishes the proof. \(\square \)

Proof of Theorem 6.7

Lemma 6.19 enables us to prove a generalization of Proposition 6.5.

Proposition 6.23

We have for \(m=(m_1,m_2)\in \mathbb {N}_0^2\) the estimate

$$\begin{aligned}&\int _{|\lambda |\sim 2^{j(1-\alpha )}} |\partial ^{m}\widehat{F_{{j}}}(\lambda \cos \eta , \lambda \sin \eta )|^2 \,d\lambda \\&\quad \lesssim 2^{-2(1-\alpha )jm_1} 2^{-(1-\alpha )j} \big ( 1+ 2^{(1-\alpha )j}\sin \eta \big )^{-1-2\beta } + 2^{3\alpha j}2^{(-1-2\beta )j}. \end{aligned}$$

Proof

Observe that \(\partial ^{m}\widehat{F_{{j}}}=\big (x^mF_{{j}}\big )^{\wedge }\). Putting \({\overline{F}}_{{j}}:=x_2^{m_2}F_{{j}}\), the function \(\widetilde{F}_{{j}}(x):=x^mF_{{j}}(x)\) takes the form of a modified edge fragment as defined in (44), i.e., \(\widetilde{F}_{{j}}=x_1^{m_1}\overline{F}_{{j}}\). As in Proposition 6.5, we distinguish between the cases \(|\sin \eta |< 2\delta _j\) and \(|\sin \eta |\ge 2\delta _j\).

In case \(|\sin \eta |< 2\delta _j\) we show

$$\begin{aligned} \int _{|\lambda |\sim 2^{j(1-\alpha )}} |\partial ^m\widehat{F_{{j}}}(\lambda \cos \eta ,\lambda \sin \eta )|^2 \,d\lambda \lesssim 2^{-2jm_1(1-\alpha )} 2^{-j(1-\alpha )}. \end{aligned}$$

For this let \(\overline{F}_{j}=F_j^0+F_j^1\) be a decomposition similar to (33), where \(F_j^0(x):= x_2^{m_2} g(2^{-j\alpha }x)\omega (x) \chi _{\{x_1\ge \delta _j\}}\). Further, we write \(\widetilde{F}_{{j}}=\widetilde{F}_{{j}}^{{0}}+{\widetilde{F}_j^1}\) with

$$\begin{aligned} \widetilde{F}_{{j}}^{{0}}(x):= x_1^{m_1} F_{{j}}^{{0}}(x) = x^{m} g(2^{-j\alpha }x)\omega (x) \chi _{\{x_1\ge \delta _j\}} \end{aligned}$$

and \(\widetilde{F}_j^1(x):=\widetilde{F}_{{j}}(x)-\widetilde{F}_{{j}}^{{0}}(x)\) the deviation. Note that \(\widetilde{F}_{{j}}^{{0}}\) is a fragment with a straight edge of height about \(\delta _j^{m_1}\) and that the function \(\widetilde{F}_j^1\) is supported in a vertical strip of width \(2\delta _j\).

For \(\eta \) satisfying \(|\sin \eta |<2\delta _j\) the Radon transform \(\mathcal {R}\widetilde{F}_j^1(\cdot ,\eta )\) is \(L^\infty \)-bounded with \(\Vert \mathcal {R}\widetilde{F}_j^1(\cdot ,\eta )\Vert _\infty \lesssim \delta _j^{m_1} \Vert \mathcal {R}F_j^1(\cdot ,\eta )\Vert _\infty {\lesssim \delta _j^{m_1}}\) and it is supported in an interval of length

$$\begin{aligned} 2(\delta _j\cos \eta + \sin \eta )\lesssim \delta _j. \end{aligned}$$

It follows \(\Vert \mathcal {R}{\widetilde{F}_j^1}(\cdot ,\eta )\Vert ^2_{2}\lesssim \delta _j^{2 m_1} \delta _j \lesssim \delta _j^{2 m_1} 2^{-j(1-\alpha )}\). Therefore

$$\begin{aligned} \int _{|\lambda |\sim 2^{j(1-\alpha )}} |{\mathcal {F}\widetilde{F}_j^1}(\lambda \cos \eta ,\lambda \sin \eta )|^2 \,d\lambda \le \int _{\mathbb {R}} |\widehat{\mathcal {R}{\widetilde{F}}_j^1(\cdot ,\eta )}(\lambda )|^2 \,d\lambda \lesssim \delta _j^{2 m_1} 2^{-j(1-\alpha )}. \end{aligned}$$
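The first inequality above rests on the Fourier slice theorem, which identifies the restriction of the two-dimensional Fourier transform to the line with direction \((\cos \eta ,\sin \eta )\) with the one-dimensional Fourier transform of the Radon transform,

$$\begin{aligned} \mathcal {F}\widetilde{F}_j^1(\lambda \cos \eta ,\lambda \sin \eta ) = \widehat{\mathcal {R}\widetilde{F}_j^1(\cdot ,\eta )}(\lambda ), \quad \lambda \in \mathbb {R}, \end{aligned}$$

while the second follows from Plancherel's identity (up to a constant depending on the normalization of the Fourier transform) together with the bound on \(\Vert \mathcal {R}\widetilde{F}_j^1(\cdot ,\eta )\Vert _{2}\) derived above.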

It remains to show

$$\begin{aligned} \int _{|\lambda |\sim 2^{j(1-\alpha )}} |{\mathcal {F}}{\widetilde{F}_{{j}}^{{0}}}(\lambda \cos \eta ,\lambda \sin \eta )|^2 \,d\lambda \lesssim \delta _j^{2 m_1} 2^{-j(1-\alpha )}. \end{aligned}$$

This follows from the fact that we have the decay \(|{\mathcal {F}}\widetilde{F}_{{j}}^{{0}}(\lambda ,0)|\lesssim \delta _j^{m_1} |\lambda |^{-1/2}\) normal to the straight singularity curve, since the height of the jump is \(\delta _j^{m_1} \). Further, the second argument \(\lambda \sin \eta \) remains bounded due to the condition \(|\sin \eta |< 2\delta _j\).

In case \(|\sin \eta |\ge 2\delta _j\) we conclude as follows. Let \(C_1,\,C_2>0\) be the constants specifying the integration domain \([C_1 2^{j(1-\alpha )},C_2 2^{j(1-\alpha )}]\). We choose \(C>0\) such that \(C_2C<2\pi \) and fix \(h:=C 2^{-j(1-\alpha )}\). Then there is \(c>0\) such that \( |e^{i\lambda h}-1|^{m_1}\ge c\) for \(|\lambda |\in [C_1 2^{j(1-\alpha )},C_2 2^{j(1-\alpha )}]\) at all scales: indeed, \(|\lambda |h\in [C_1C,C_2C]\subset (0,2\pi )\) is independent of \(j\), and on this range \(|e^{i\lambda h}-1|=2|\sin (\lambda h/2)|\) is bounded away from zero. We obtain

$$\begin{aligned}&\int _{|\lambda |\sim 2^{j(1-\alpha )}} |\lambda |^2 |\partial ^{m}\widehat{F_{{j}}}(\lambda \cos \eta ,\lambda \sin \eta )|^2 \,d\lambda \\&\quad \lesssim \int _{|\lambda |\sim 2^{j(1-\alpha )}} |e^{i\lambda h}-1|^2 |\lambda |^2|\widehat{x^{m}F_{{j}}}(\lambda \cos \eta ,\lambda \sin \eta )|^2 \,d\lambda \\&\quad \lesssim \int _{|\lambda |\sim 2^{j(1-\alpha )}} |e^{i\lambda h}-1|^2 |\lambda |^2 |\left( \mathcal {R} {\widetilde{F}_{{j}}}(\cdot ,\eta )\right) ^{\wedge }(\lambda )|^2 \,d\lambda \\&\quad \lesssim \int _{|\lambda |\sim 2^{j(1-\alpha )}} |\big [\Delta _h\partial _1\mathcal {R}\widetilde{F}_{{j}}(\cdot ,\eta )\big ]^{\wedge }(\lambda )|^2 \,d\lambda . \end{aligned}$$

From Lemma 6.19 we know that \(S=\Delta _h\partial _1\mathcal {R}\widetilde{F}_{{j}}(\cdot ,\eta )\) admits a decomposition \((S^k_1,S^k_2)_k\) of length \(m_1+1\) with estimates

$$\begin{aligned} \Vert S^{k}_1\Vert _{2}^2\lesssim 2^{-2jm_1(1-\alpha )} h^{2\beta } |\sin \eta |^{-1-2\beta } + 2^{-j(1-\alpha )(2\beta +1)},\qquad k=0,1,\ldots ,m_1+1. \end{aligned}$$

Using the same trick as in Proposition 6.5 we can then conclude

$$\begin{aligned} \int _{|\lambda |\sim 2^{j(1-\alpha )}} |\lambda |^2 |\partial ^m\widehat{F_{{j}}}(\lambda \cos \eta ,\lambda \sin \eta )|^2 \,d\lambda&\lesssim \int _{|\lambda |\sim 2^{j(1-\alpha )}} |\big [\Delta _h\partial _1\mathcal {R}\widetilde{F}_{{j}}(\cdot ,\eta )\big ]^{\wedge }(\lambda )|^2 \,d\lambda \\&\lesssim \sum _{k=0}^{m_1+1} \int _{|\lambda |\sim 2^{j(1-\alpha )}} |\widehat{S^k_1}|^2 \,d\lambda \\&\lesssim \sum _{k=0}^{m_1+1} \Vert S^k_1\Vert _2^2 \\&\lesssim 2^{-2jm_1(1-\alpha )} h^{2\beta } |\sin \eta |^{-1-2\beta }\\&\quad + 2^{-j(1-\alpha )(2\beta +1)}. \end{aligned}$$
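For completeness we sketch the 'trick': on the annulus \(|\lambda |\sim 2^{j(1-\alpha )}\) we have \(|e^{i\lambda h}-1|\gtrsim 1\), and since \([\Delta _hf]^{\wedge }(\lambda )=(e^{i\lambda h}-1)\widehat{f}(\lambda )\), the relation \(\Delta _hS^k_2=S^{k+1}_1+S^{k+1}_2\) from the form \((*)\) gives

$$\begin{aligned} \widehat{S^k_2}(\lambda ) = (e^{i\lambda h}-1)^{-1} \big ( \widehat{S^{k+1}_1}(\lambda ) + \widehat{S^{k+1}_2}(\lambda ) \big ). \end{aligned}$$

Iterating until the \(S_2\)-component vanishes expresses \(\widehat{S}\) on the annulus as a bounded combination of the \(\widehat{S^k_1}\), \(k=0,\ldots ,m_1+1\).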

It follows

$$\begin{aligned} \int _{|\lambda |\sim 2^{j(1-\alpha )}} |\partial ^{m}\widehat{F_{{j}}}(\lambda \cos \eta ,\lambda \sin \eta )|^2 \,d\lambda&\lesssim 2^{-2j(1-\alpha )(m_1+1)} h^{2\beta } |\sin \eta |^{-1-2\beta }\\&\quad + 2^{-j(1-\alpha )(2\beta +3)}\\&\lesssim 2^{-2(1-\alpha )jm_1} 2^{-(1-\alpha )j} \big ( 2^{(1-\alpha )j}\sin \eta \big )^{-1-2\beta }\\&\quad + 2^{3\alpha j}2^{(-1-2\beta )j}. \end{aligned}$$

This finishes the proof. \(\square \)

By rescaling \(F_{{j}}\) to the original edge fragment \(f_{{j}}\) we obtain Theorem 6.7, because of the relation \(\widehat{f_{{j}}}(\xi )=2^{-2\alpha j}\widehat{F_{{j}}}(2^{-\alpha j}\xi )\).
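The last relation is the standard scaling identity for the Fourier transform: assuming, consistently with the stated relation, that \(F_{{j}}(x)=f_{{j}}(2^{-\alpha j}x)\), the substitution \(y=2^{-\alpha j}x\) yields

$$\begin{aligned} \widehat{f_{{j}}}(\xi ) = \int _{\mathbb {R}^2} f_{{j}}(y)\, e^{-i\langle y,\xi \rangle }\,dy = 2^{-2\alpha j} \int _{\mathbb {R}^2} F_{{j}}(x)\, e^{-i\langle x,2^{-\alpha j}\xi \rangle }\,dx = 2^{-2\alpha j}\, \widehat{F_{{j}}}(2^{-\alpha j}\xi ). \end{aligned}$$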

About this article

Cite this article

Grohs, P., Keiper, S., Kutyniok, G. et al. Cartoon Approximation with \(\alpha \)-Curvelets. J Fourier Anal Appl 22, 1235–1293 (2016). https://doi.org/10.1007/s00041-015-9446-6

Keywords

  • Anisotropic scaling
  • Curvelets
  • Nonlinear approximation
  • Ridgelets
  • Shearlets
  • Sparsity equivalence
  • Wavelets