
Convergence and performance of the peeling wavelet denoising algorithm


Abstract

This note is devoted to an analysis of the so-called peeling algorithm in wavelet denoising. Assuming that the wavelet coefficients of the useful signal are modeled by generalized Gaussian random variables and those of the noise by independent Gaussian variables, we compute a critical thresholding constant for the algorithm, which depends on the shape parameter of the generalized Gaussian distribution. We also quantify the optimal number of steps which have to be performed, and analyze the convergence of the algorithm. Several implementations are tested against classical wavelet denoising procedures on benchmark and simulated biological signals.


References

  • Antoniadis A, Bigot J, Sapatinas T (2001) Wavelet estimators in non-parametric regression: a comparative simulation study. J Stat Softw 6(6):61–83

  • Buccigrossi R, Simoncelli E (1999) Image compression via joint statistical characterization in the wavelet domain. IEEE Trans Image Process 8(12):1688–1701

  • Cai T, Silverman B (2001) Incorporating information on neighbouring coefficients into wavelet estimation. Sankhyā: Indian J Stat, Special Issue on Wavelets 63(2):127–148

  • Cai T, Zhou H (2009) A data-driven block thresholding approach to wavelet estimation. Ann Stat 37(2):569–595

  • Chesneau C (2007) Wavelet block thresholding for samples with random design: a minimax approach under the \(L^p\) risk. Electron J Stat 1:331–346

  • Coifman R, Wickerhauser M (1995) Adapted waveform de-noising for medical signals and images. IEEE Eng Med Biol Mag 14(5):578–586

  • Daubechies I (1992) Ten lectures on wavelets. CBMS-NSF Regional Conference Series in Applied Mathematics, vol 61. SIAM, Philadelphia

  • Do M, Vetterli M (2002) Wavelet-based texture retrieval using generalized Gaussian density and Kullback–Leibler distance. IEEE Trans Image Process 11(2):146–158

  • Donoho D, Johnstone I (1994) Ideal spatial adaptation via wavelet shrinkage. Biometrika 81:425–455

  • Donoho D, Johnstone I, Kerkyacharian G, Picard D (1995) Wavelet shrinkage: asymptopia? With discussion and a reply by the authors. J R Stat Soc Ser B 57(2):301–369

  • Giné E, Nickl R (2009) Uniform limit theorems for wavelet density estimators. Ann Probab 37(4):1605–1646

  • Hadjileontiadis L, Panas S (1997) Separation of discontinuous adventitious sounds from vesicular sounds using a wavelet-based filter. IEEE Trans Biomed Eng 44(12):1269–1281

  • Mallat S (1989) A theory for multiresolution signal decomposition: the wavelet representation. IEEE Trans Pattern Anal Mach Intell 11(7):674–693

  • Mallat S (1997) A wavelet tour of signal processing. Academic Press, London

  • Moulin P, Liu J (1999) Analysis of multiresolution image denoising schemes using generalized Gaussian and complexity priors. IEEE Trans Inf Theory 45(3):909–919

  • Pižurica A, Philips W (2006) Estimating the probability of the presence of a signal of interest in multiresolution single- and multiband image denoising. IEEE Trans Image Process 15(3):654–665

  • Ranta R, Heinrich C, Louis-Dorr V, Wolf D (2003) Interpretation and improvement of an iterative wavelet-based denoising method. IEEE Signal Process Lett 10(8):239–241

  • Ranta R, Heinrich C, Louis-Dorr V, Wolf D (2005) Iterative wavelet-based denoising methods and robust outlier detection. IEEE Signal Process Lett 12(8):557–560

  • Ranta R, Louis-Dorr V, Heinrich C, Wolf D, Guillemin F (2010) Digestive activity evaluation by multi-channel abdominal sounds analysis. IEEE Trans Biomed Eng 57(6):1507–1519

  • Simoncelli E, Buccigrossi R (1997) Embedded wavelet image compression based on a joint probability model. In: 4th IEEE International Conference on Image Processing (ICIP), Santa Barbara, USA

  • van der Vaart A, Wellner J (1996) Weak convergence and empirical processes. Springer, Berlin


Author information

Corresponding author

Correspondence to Céline Lacaux.

Additional information

C. Lacaux, A. Muller-Gueudin and S. Tindel are members of the BIGS (BIology, Genetics and Statistics) team at INRIA.

Appendices

Appendix 1: Proof of Proposition 2.3

1.1 Preliminaries

This section gives the main tools needed to prove Proposition 2.3. The first lemma establishes some useful properties of \(g_{1,u}\), which is defined by (11), and of the associated deterministic dynamics.

Lemma 5.1

Assume \(F>F_c\). Let \(g_{1,u}\,:\mathbb{R }_+\rightarrow \mathbb{R }_+\) be defined by (11), where \(u\) is the shape parameter given in Hypothesis 1.1. Let \(\ell _1<t^*\) be the two positive fixed points of \(g_{1,u}\) as defined in Lemma 2.2.

(1):

There exists \(\ell _2\in (\ell _1,t^*)\) such that \(g_{1,u}(\ell _2)>\ell _2\), \(g_{1,u}^{\prime }(\ell _2)< 1\), and \(g_{1,u}\) is concave on \([\ell _2,\infty )\).

(2):

Define the deterministic sequence \(\{u_k;\, k\ge 0\}\) recursively by

$$\begin{aligned} \left\{ \begin{array}{ll} u_0 =+\infty \\ u_{k+1} =g_{1,u}(u_k), \quad k\ge 0. \end{array}\right. \end{aligned}$$
(20)

Then for \(k\ge 1\),

$$\begin{aligned} |u_k-t^*|\le M_{t^*}^{k-1} \left( F^2 -t^*\right) , \end{aligned}$$
(21)

where \(M_{t^*}=g_{1,u}^{\prime }(t^*)\in (0,1)\).

Proof

The first assertion is easily deduced from the variations of \(t\mapsto d_{1,u}(t)=g_{1,u}(t)-t\), and its proof is left to the reader. Let us now prove the second assertion. According to Lemma 2.2, \(g_{1,u}\) is an increasing function and has exactly three fixed points: \(0<\ell _1<t^*\). Then the sequence \(\{u_k; k\ge 0\}\), defined by (20), is decreasing and converges to \(t^*\) as \(k\rightarrow \infty \). Furthermore,

$$\begin{aligned} |u_{k+1}-t^*|=u_{k+1}-t^*=g_{1,u}(u_k)-g_{1,u}(t^*)\le M_{t^*} \left( u_{k}-t^* \right) \!, \end{aligned}$$

with \(M_{t^*}=\sup \{|g_{1,u}^{\prime }(t)|;\, t\ge t^*\}\). Since \(g_{1,u}\) is increasing and concave on \([\ell _2,+\infty )\) with \(\ell _2<t^*\) and \(g_{1,u}^{\prime }(\ell _2)<1\),

$$\begin{aligned} M_{t^*}=g_{1,u}^{\prime }(t^*)\in (0,1). \end{aligned}$$

Assertion (2) then follows by a straightforward induction, and the proof of Lemma 5.1 is complete. \(\square \)
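As an illustration of the deterministic dynamics (20) and of the geometric rate (21), here is a minimal numerical sketch. It assumes the explicit form \(g_{1,u}(t)=F^2\,\mathbb{E }[X^2\mathbf{1}_{\{X^2<t\}}]\) for a standard generalized Gaussian variable \(X\), which is consistent with the empirical map (8) recalled in Appendix 2 and with the derivative identity (24); the values of \(F\) and \(u\), as well as SciPy's unit-scale parameterization of the generalized Gaussian (which differs from the \(\alpha ,\beta \) normalization in (5) by a scale factor), are illustrative assumptions rather than choices made in the paper.

```python
# Sketch only: iterate u_{k+1} = g_{1,u}(u_k) as in (20) and watch the
# geometric convergence (21). The closed form of g_{1,u} below is an
# assumption consistent with the empirical map (8); F and u are illustrative.
import numpy as np
from scipy.integrate import quad
from scipy.stats import gennorm

F, u = 3.0, 2.0
X = gennorm(u)  # generalized Gaussian with shape u (SciPy's unit scale)

def g_1u(t):
    # g_{1,u}(t) = F^2 E[X^2 1_{X^2 < t}] = 2 F^2 \int_0^{\sqrt{t}} y^2 p_{1,u}(y) dy
    val, _ = quad(lambda y: y * y * X.pdf(y), 0.0, np.sqrt(t))
    return 2.0 * F ** 2 * val

u_k = F ** 2 * X.moment(2)  # u_1 = g_{1,u}(+infty), since u_0 = +infty in (20)
for k in range(1, 12):
    u_next = g_1u(u_k)
    print(k, u_k, u_k - u_next)  # increments shrink roughly by the factor M_{t*}
    u_k = u_next
```

In the supercritical regime the printed iterates decrease monotonically to the largest fixed point \(t^*\), and the successive increments contract by a factor close to \(M_{t^*}=g_{1,u}^{\prime }(t^*)\), as (21) predicts.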

The following lemma compares the functions \(g_{\sigma ,u,\sigma _w}\) and \(g_{\sigma ,u}\) defined by (10) and (11) respectively.

Lemma 5.2

Assume \(F>F_c\). For \(u>0\), \(\sigma >0\) and \(\sigma _w>0\), let \(g_{\sigma ,u}\) and \(g_{\sigma ,u,\sigma _w}\) be defined by (11) and (10), with \(p_{\sigma ,u}\) and \(p_{\sigma _w}\) introduced in Hypothesis 1.1. Let \(\ell _1<t^*\) be the two positive fixed points of \(g_{1,u}\) as defined in Lemma 2.2. Then there exists a constant \(C:=C(u)\in (0,\infty )\), which does not depend on \((\sigma ,\sigma _w,F)\), such that for any \(t\in \mathbb{R }_+\) we have

$$\begin{aligned} |g_{\sigma ,u,\sigma _w}^{\prime }(t)-g_{\sigma ,u}^{\prime }(t)|\le \frac{C F^2\sqrt{t}}{\sigma }\left( \frac{\sigma _w}{\sigma }\right) ^{\min (1,u)} \end{aligned}$$
(22)

and

$$\begin{aligned} | g_{\sigma ,u,\sigma _w}(t)-g_{\sigma ,u}(t)|\le \frac{C F^2 t^{3/2}}{\sigma }\left( \frac{\sigma _w}{\sigma }\right) ^{\min (1,u)}. \end{aligned}$$
(23)

In particular, \(g_{\sigma ,u,\sigma _w}\rightarrow g_{\sigma ,u}\) and \(g_{\sigma ,u,\sigma _w}^{\prime }\rightarrow g_{\sigma ,u}^{\prime }\) uniformly on every compact set of \(\mathbb{R }_+\), as \(\sigma _w\) goes to 0.

Proof

Since (23) is a direct consequence of (22), we only prove (22). By definition of \(g_{\sigma ,u,\sigma _w}\) and \(g_{\sigma ,u}\), for any \(t\in \mathbb{R }_+\),

$$\begin{aligned} |g_{\sigma ,u,\sigma _w}^{\prime }(t)-g_{\sigma ,u}^{\prime }(t)|=F^2\sqrt{t} |p_{\sigma ,u}*p_{\sigma _w}(\sqrt{t}) -p_{\sigma ,u}(\sqrt{t})|. \end{aligned}$$
(24)

Notice that for all \(y\in \mathbb{R }_+\)

$$\begin{aligned} p_{\sigma ,u}*p_{\sigma _w}(y)-p_{\sigma ,u}(y)= \int \limits _\mathbb{R }\left( p_{\sigma ,u}(r)-p_{\sigma ,u}(y)\right) p_{\sigma _w}(y-r)dr. \end{aligned}$$
(25)

It can be readily checked that

$$\begin{aligned} \forall t\in \mathbb{R }, \quad p_{\sigma ,u}(t)=\frac{1}{\sigma }p_{1,u}\left( \frac{t}{\sigma }\right) =\frac{\alpha }{\sigma }e ^{-\left| \frac{ \beta t}{\sigma }\right| ^u}. \end{aligned}$$

Let us first assume \(u\ge 1\). Then \(t\mapsto p_{1,u}(t)\) is \(\mathcal C ^1\) on \(\mathbb{R }\) and its derivative \(p_{1,u}^{\prime }\) is bounded on \(\mathbb{R }\). In this case,

$$\begin{aligned} \left| p_{\sigma ,u}(r)-p_{\sigma ,u}(y)\right| \le \frac{\left| r-y\right| \left\| p_{1,u}^{\prime } \right\| _{\infty } }{\sigma ^2}. \end{aligned}$$
(26)

Assume now that \(u\in (0,1]\). Then by the Mean Value Theorem applied to the exponential map,

$$\begin{aligned} \left| p_{\sigma ,u}(r)-p_{\sigma ,u}(y)\right| \le \frac{\alpha \beta ^u}{\sigma ^{1+u}}\Big | |r|^u-|y|^u\Big |. \end{aligned}$$

Since \(a^\gamma -b^\gamma \le (a-b)^\gamma \) for any \(\gamma \in (0,1)\) and \(0\le b\le a\), one checks that

$$\begin{aligned} \left| p_{\sigma ,u}(r)-p_{\sigma ,u}(y)\right| \le \frac{\alpha \beta ^u}{\sigma ^{1+u}}|r-y|^u. \end{aligned}$$
(27)

Plugging (26) or (27) in (25), we now get the existence of a finite positive constant \(c:=c(u,\alpha ,\beta )\) which only depends on \(u,\alpha ,\beta \) such that

$$\begin{aligned} \left| p_{\sigma ,u}*p_{\sigma _w}(y)-p_{\sigma ,u}(y) \right| \le \frac{c}{\sigma ^{1+\min (1,u)}} \int \limits _{\mathbb{R }} |v|^{\min (1,u)} p_{\sigma _w}(v) dv. \end{aligned}$$

Since \(p_{\sigma _w}\) is the density of a centered Gaussian variable of variance \(\sigma _w^2\),

$$\begin{aligned} \left| p_{\sigma ,u}*p_{\sigma _w}(y)-p_{\sigma ,u}(y) \right| \le \frac{c}{\sigma }\mathbb E \left( |W|^{\min (1,u)}\right) \left( \frac{\sigma _w}{\sigma }\right) ^{\min (1,u)}\!, \end{aligned}$$

with \(W\) a standard Gaussian variable. This inequality and Eq. (24) lead to (22) upon setting \(C=c\,\mathbb E (|W|^{\min (1,u)})\), which concludes the proof. \(\square \)
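Lemma 5.2 lends itself to a quick numerical sanity check. The sketch below approximates the density of \(x+w\) by a grid convolution and compares the truncated second moments \(g_{\sigma ,u,\sigma _w}\) and \(g_{\sigma ,u}\); the integral forms used for these maps, and all parameter values, are assumptions consistent with (8) and (24) rather than quantities taken verbatim from the paper.

```python
# Grid-based sanity check of (23), a sketch under the assumption that
# g_{sigma,u,sigma_w}(t) = F^2 E[(x+w)^2 1_{(x+w)^2 < t}]; parameters illustrative.
import numpy as np
from scipy.stats import gennorm, norm

F, u, sigma = 3.0, 1.0, 1.0
y = np.linspace(-20.0, 20.0, 8001)
dy = y[1] - y[0]
p_x = gennorm(u, scale=sigma).pdf(y)

def sup_gap(sigma_w):
    p_z = np.convolve(p_x, norm(0.0, sigma_w).pdf(y), mode="same") * dy  # density of x+w
    half = y >= 0.0
    g_noisy = 2.0 * F ** 2 * np.cumsum((y * y * p_z)[half]) * dy  # s |-> g(s^2), by symmetry
    g_clean = 2.0 * F ** 2 * np.cumsum((y * y * p_x)[half]) * dy
    return np.max(np.abs(g_noisy - g_clean))

for sw in (0.4, 0.2, 0.1, 0.05):
    print(sw, sup_gap(sw))  # for u >= 1 the gap shrinks roughly linearly in sigma_w
```

Halving \(\sigma _w\) roughly halves the reported gap when \(u\ge 1\), in agreement with the exponent \(\min (1,u)\) in (23).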

1.2 Proof of Proposition 2.3

This section is devoted to the proof of Proposition 2.3. In this proof, \(c\) and \(C\) denote two unspecified positive and finite constants which may not be the same in each occurrence and depend neither on the standard deviation \(\sigma \) of the signal \(x\) nor on the standard deviation \(\sigma _w\) of the noise \(w\). Let us recall that \(g_{\sigma ,u,\sigma _w}\) is defined by (10).

  (1)

    First observe that

    $$\begin{aligned} \forall t\in \mathbb{R }_+,\quad g_{\sigma ,u,\sigma _w}(t) \le 2 F^2 \int \limits _{0}^{+\infty } y^2 p_{\sigma ,u} * p_{\sigma _w} (y) dy= F^2 \mathbb E \left( z(1)^2\right) \!, \end{aligned}$$

    owing to the fact that \(p_{\sigma ,u} * p_{\sigma _w} \) is the density of the wavelet coefficient \(z(1)=x(1)+w(1)\). Since the centered random variables \(x(1)\) and \(w(1)\) are independent, this leads to

    $$\begin{aligned} \forall t\in \mathbb{R }_+,\quad g_{\sigma ,u,\sigma _w}(t) \le F^2\left( \sigma ^2+{\sigma _{w}^2}\right) \!. \end{aligned}$$

    Thanks to the relation \(M>F^2\), there exists a finite positive constant \(c_1:=c_1(M,F)\), depending only on \(M\) and \(F\), such that if \(\sigma _w/\sigma \le c_1\), then \(F^2\left( \sigma ^2+{\sigma _{w}^2}\right) <M\sigma ^2\) and hence,

    $$\begin{aligned} \sup \{ g_{\sigma ,u,\sigma _w}(t); t\in \mathbb{R }_+\} <M\sigma ^2. \end{aligned}$$

    Let \(\ell _2\in (\ell _1,t^*)\) be defined by Lemma 5.1. Note that \(\ell _2\) only depends on \(g_{1,u}\) and thus on both parameters \(F\) and \(u\). Since \(t^*\) is a fixed point of the increasing function \(g_{1,u}\), we get

    $$\begin{aligned} \ell _2< t^*\le F^2=\lim _{t\rightarrow +\infty } g_{1,u}(t)<M. \end{aligned}$$

    Hence, applying (23) and (13), we have:

    $$\begin{aligned} g_{\sigma ,u,\sigma _w}\left( \sigma ^2\ell _2\right) \ge \sigma ^2 \left( g_{1,u}\left( \ell _2\right) - C \left( \frac{\sigma _w}{\sigma }\right) ^{\min (1,u)}\right) \end{aligned}$$

    where \(C:=C(M,F,u)\in (0,+\infty )\) does not depend on \((\sigma ,\sigma _w)\). Since \(g_{1,u}\left( \ell _2\right) >\ell _2\) by Lemma 5.1, the previous equation leads to the existence of a constant \(c:=c(M,F,u)\), such that if \(\sigma _w/\sigma \le c\),

    $$\begin{aligned} g_{\sigma ,u,\sigma _w}\left( \sigma ^2\ell _2\right) >\sigma ^2\ell _2. \end{aligned}$$

    Then, the proof of Assertion (1) is complete.

  (2)

    According to (22) and (13),

    $$\begin{aligned} \forall t\in [\sigma ^2\ell _2,\sigma ^2 M],\quad g'_{\sigma ,u,\sigma _w}\left( t\right) \le g'_{1,u}\left( \frac{t}{\sigma ^2}\right) +C \left( \frac{\sigma _w}{\sigma }\right) ^{\min (1,u)} \end{aligned}$$

    where \(C:=C(M,F,u)\in (0,+\infty )\). Thanks to Lemma 5.1,

    $$\begin{aligned} \sup \left\{ g_{1,u}^{\prime }(y),y\ge \ell _2\right\} =g_{1,u}^{\prime }(\ell _2)<1. \end{aligned}$$
    (28)

    Then, choosing \(c:=c(M,F,u)\) small enough, if \(\sigma _w/\sigma \le c\), we obtain

    $$\begin{aligned} \forall t\in [\sigma ^2\ell _2,\sigma ^2 M],\quad g_{\sigma ,u,\sigma _w}^{\prime }\left( t\right) \le {g_{1,u}^{\prime }(\ell _2)}+C c^{\min (1,u)}=:\tilde{M}<1, \end{aligned}$$
    (29)

    which establishes Assertion (2).

  (3)

    Let us now prove Assertion (3). Assume that \(\sigma _w/ \sigma \le c\). By Assertions (1) and (2), \(d_{\sigma ,u,\sigma _w}:t\mapsto g_{\sigma ,u,\sigma _w}(t)-t\) is a decreasing function on \([\sigma ^2\ell _2,\sigma ^2 M]\) such that \(d_{\sigma ,u,\sigma _w}(\sigma ^2\ell _2)>0\) and \(d_{\sigma ,u,\sigma _w}(\sigma ^2 M)<0\). Hence there exists a unique number \(t_{\sigma ,w}^*\in (\sigma ^2\ell _2,\sigma ^2 M)\) such that \(g_{\sigma ,u,\sigma _w}(t_{\sigma ,w}^*)=t_{\sigma ,w}^*\). Moreover, since \(g_{\sigma ,u,\sigma _w}\) takes its values in \([0,\sigma ^2 M)\), \(t_{\sigma ,w}^*\) is the only fixed point of \(g_{\sigma ,u,\sigma _w}\) in \([\sigma ^2\ell _2,\infty )\). Consider now the sequence \(\{u_k^w;\, k\ge 0 \}\) defined by Eq. (15). Since \(g_{\sigma ,u,\sigma _w}\) is an increasing function whose unique fixed point in \([\sigma ^2\ell _2,\infty )\) is \(t_{\sigma ,w}^*\), it is easily seen that \(\{u_k^w;\, k\ge 0 \}\) is a decreasing sequence such that \(\lim _{k\rightarrow \infty } u_k^w=t^*_{\sigma ,w}\). Moreover, for any \(k\ge 1\), \(u_k^w\in [ t^*_{\sigma ,w}, F^2(\sigma ^2+\sigma _w^2)]\subset [\sigma ^2\ell _2,\sigma ^2M]\). Then, using that \(t^*_{\sigma ,w}\in (\sigma ^2\ell _2,\sigma ^2 M)\) is a fixed point and Eq. (29), we get

    $$\begin{aligned} |u_{k+1}^w-t^*_{\sigma ,w}| \le \tilde{M}^k |u_1^w-t^*_{\sigma ,w}| \end{aligned}$$

    for any \(k\ge 1\). We can now bound trivially \(|u_1^w-t^*_{\sigma ,w}|\) as follows:

    $$\begin{aligned} |u_1^w-t^*_{\sigma ,w}| = u_1^w-t^*_{\sigma ,w}\le M\sigma ^2, \end{aligned}$$

    so that we end up with

    $$\begin{aligned} |u_{k+1}^w-t^*_{\sigma ,w}|\le M \sigma ^2 \tilde{M}^k \end{aligned}$$

    for any \(k\ge 1\). This inequality, which is Eq. (16), also holds for \(k=0\). Consider now the sequence \(\{u_k;\, k\ge 0 \}\) defined by Eq. (20). Using (13), (23), (28) and the Mean Value Theorem, we get:

    $$\begin{aligned} |u_{k+1}^w-\sigma ^2 u_{k+1}| &= |g_{\sigma ,u,\sigma _w}(u_{k}^w)-\sigma ^2 g_{1,u}(u_{k})|\\ &\le |g_{\sigma ,u,\sigma _w}(u_{k}^w)-g_{\sigma ,u}(u_{k}^w)| +\sigma ^2 |g_{1,u}(u_{k}^w/\sigma ^2)-g_{1,u}(u_{k})|\\ &\le C\sigma ^2\left( \frac{\sigma _w}{\sigma }\right) ^{\min (1,u)}+g_{1,u}^{\prime }(\ell _2)\, |u_{k}^w- \sigma ^2u_{k}| \end{aligned}$$

    since \(u_k^w/\sigma ^2, u_k\in [\ell _2,\infty )\) and \(u_k^w\le M\sigma ^2\) for \(k\ge 1\). Iterating this procedure, with \(C:=C(M,F,u)\) a constant that may change from line to line, we get:

    $$\begin{aligned} |u_{k+1}^w-\sigma ^2 u_{k+1}|&\le C \sigma ^2\left( \frac{\sigma _w}{\sigma }\right) ^{\min (1,u)} \sum \limits _{n=0}^{k-1}{g_{1,u}^{\prime }(\ell _2)}^n+{g_{1,u}^{\prime }(\ell _2)}^k |u_{1}^w- \sigma ^2u_{1}|\\&\le {C\sigma ^2}\left( \frac{\sigma _w}{\sigma }\right) ^{\min (1,u)} +F^2\sigma _w^2\,{g'_{1,u}(\ell _2)}^k \end{aligned}$$

    since \(g_{1,u}^{\prime }(\ell _2)<1\), \(u_1^w=F^2(\sigma ^2+\sigma _w^2)\) and \(u_1=F^2\). Taking limits in the relation above as \(k\rightarrow \infty \), we get (17), which ends the proof. \(\square \)

Appendix 2: Probabilistic analysis of the algorithm

1.1 Preliminaries: comparison of the noisy and deterministic dynamics

As mentioned in Eq. (8), the exact dynamics governing the sequence \(\{ U_k;\, k\ge 0 \}\) is of the form \(U_{k+1}=g_{N,w}(U_k)\) with \(g_{N,w}\) defined by (8). In order to compare this with the deterministic dynamics (15), let us recast this relation as:

$$\begin{aligned} U_{k+1}=g_{\sigma ,u,\sigma _w}(U_k) +\varepsilon _{k,N}, \quad \hbox {where}\quad \varepsilon _{k,N}=g_{N,w}(U_k)-g_{\sigma ,u,\sigma _w}(U_k). \end{aligned}$$
(30)

Notice that the errors \(\varepsilon _{k,N}\) are far from being independent, which means that the relation above does not define a Markov chain. However, a fairly simple expression is available for \(U_k\):

Proposition 6.1

Let \(U_k\) be defined by (7), \(g_{\sigma ,u,\sigma _w} \) by (10) and \(\varepsilon _{k,N}\) by (30). For \(k\ge 0\), set \(g_{\sigma ,u,\sigma _w}^{\circ k}\) for the \(k\)th iteration of \(g_{\sigma ,u,\sigma _w}\). Then for \(k\ge 0\), we have:

$$\begin{aligned} U_{k}= g_{\sigma ,u,\sigma _w}^{\circ k}(U_0)+ R_{k}, \quad \hbox {with}\quad R_{k}=\sum _{p=0}^{k-1}\varepsilon _{p,N} \prod _{q=2}^{k-p} g_{\sigma ,u,\sigma _w}^{\prime }(C_{p+q}), \end{aligned}$$

where the random variable \(C_{j}\) (\(j\ge 2\)) lies in the interval \([g_{\sigma ,u,\sigma _w}^{\circ (j-1)}(U_0); \, U_{j-1}]\). In the definition of \(R_{k}\), we have also used the conventions \(\prod _{q=2}^{1}a_q=1\) and \(R_0=0\).

Proof

It is easily seen inductively that \(R_0=0\), \(R_{1}=\varepsilon _{0,N}\), and for \(k\ge 1\)

$$\begin{aligned} R_{k+1}=g_{\sigma ,u,\sigma _w}^{\prime }(C_{k+1})R_{k}+\varepsilon _{k,N}. \end{aligned}$$

Hence, by a backward induction, we obtain:

$$\begin{aligned} R_{k}=\sum _{j=1}^{k}\varepsilon _{k-j,N} \prod _{l=0}^{j-2} g_{\sigma ,u,\sigma _w}^{\prime }(C_{k-l}) =\sum _{p=0}^{k-1}\varepsilon _{p,N} \prod _{q=2}^{k-p} g_{\sigma ,u,\sigma _w}^{\prime }(C_{p+q}), \end{aligned}$$

which ends the proof. \(\square \)
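The noisy dynamics (30) is straightforward to simulate. The sketch below draws \(N\) coefficients according to Hypothesis 1.1 and iterates the empirical map \(g_{N,w}(t)=\frac{F^2}{N}\sum _{q} z(q)^2\mathbf{1}_{\{z(q)^2<t\}}\), whose expression is recalled in the proof of Lemma 6.2 below; the parameter values and SciPy's parameterization of the generalized Gaussian are illustrative assumptions.

```python
# Simulation sketch of the noisy dynamics (30): U_{k+1} = g_{N,w}(U_k).
# The empirical map g_{N,w} below is the one recalled in the proof of
# Lemma 6.2; F, u, sigma_w and N are illustrative values.
import numpy as np
from scipy.stats import gennorm, norm

rng = np.random.default_rng(0)
F, u, sigma_w, N = 3.0, 2.0, 0.1, 10_000
z = gennorm(u).rvs(N, random_state=rng) + norm(0.0, sigma_w).rvs(N, random_state=rng)
Y = z ** 2  # squared noisy wavelet coefficients

def g_Nw(t):
    return F ** 2 * np.mean(Y * (Y < t))

U = np.inf  # U_0 = +infty, so the first step keeps every coefficient
for k in range(1, 13):
    U = g_Nw(U)
    print(k, U)  # U_k stabilises near the fixed point t*_{sigma,w} (Theorem 3.1)
```

For large \(N\), the iterates settle after a few steps in a small neighbourhood of \(t^*_{\sigma ,w}\), in line with Theorem 3.1.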

A useful property of the errors \(\varepsilon _{p,N}\) is that they concentrate exponentially fast (in terms of \(N\)) around 0. This can be quantified in the following:

Lemma 6.2

Assume that our signal \(z=x+w\) satisfies Hypothesis 1.1, and recall that \(F\) is defined by Eq. (3). Set

$$\begin{aligned} \eta _u=\min \left( \frac{u}{2}, 1\right) \quad \mathrm{and}\quad \gamma _u= \frac{1}{2^{\max (2\eta _u-1,0)}F^{2\eta _u}} \min \left( \frac{\beta ^u}{\sigma ^ u}, \frac{1}{2\sigma _w^2}\right) \end{aligned}$$
(31)

where the parameters \(u\), \(\sigma \), \(\sigma _w\) and \(\beta \) are defined in Hypothesis 1.1. Then for every \(0<\gamma <\gamma _u\), there exists a finite constant \(K>0\) such that for all \(N\ge 1\), for all \(p\ge 0\) and for all \(\lambda \in [0,\gamma N^{\eta _u/2}]\),

$$\begin{aligned} \mathbb E \left[ e ^{\lambda | \varepsilon _{p,N}|^{\eta _u} }\right] \le K. \end{aligned}$$
(32)

Moreover, for all \(N\ge 1\), \(p\ge 0\) and \(l>0\),

$$\begin{aligned} \mathbb P \left( \left| \varepsilon _{p,N}\right| \ge l\right) \le K e ^{-\gamma l^{\eta _u} N^{\eta _u/2}}. \end{aligned}$$
(33)

Proof

Recall that \(\varepsilon _{p,N}\) is defined by:

$$\begin{aligned} \varepsilon _{p,N}=g_{N,w}(U_p)-g_{\sigma ,u,\sigma _w}(U_p)=\frac{1}{N} \sum _{q=1}^{N}\left( F^2Y(q)\, \mathbf{1}_{\{Y(q)< U_p\}}-g_{\sigma ,u,\sigma _w}(U_p)\right) \!, \end{aligned}$$

for a collection \(\{Y(q);\, q\le N\}\) of i.i.d. random variables, where \(Y(q)=z(q)^2\). Moreover, \(z(q)=x(q)+w(q)\), with \(x(q)\) a centered generalized Gaussian random variable with parameter \(u>0\) [whose density is given by (5)] and \(w(q)\sim \mathcal{N }(0,\sigma _w^2)\). For a fixed positive \(t\), the fluctuations \(g_{N,w}(t)-g_{\sigma ,u,\sigma _w}(t)\) are easily controlled thanks to the classical central limit theorem or a large deviations principle. The difficulty in our case arises from the fact that \(U_p\) is itself a random variable, which rules out a direct application of those classical results. However, uniform central limit theorems and deviation inequalities have been thoroughly studied, and our result will be obtained by translating our problem into the language of empirical processes, as in van der Vaart and Wellner (1996).

In order to express \(\varepsilon _{p,N}\) in terms of empirical processes, consider \(t\in [0,\infty ]\) and define \(h_t:\mathbb{R }_+\rightarrow \mathbb{R }_+\) by \(h_t(v)=F^2v\, \mathbf{1}_{\{ v < t\}}\). Next, for \(f:\mathbb{R }_+\rightarrow \mathbb{R }\), set

$$\begin{aligned} \mathbb{G }_N f=\frac{1}{\sqrt{N}} \sum _{q=1}^{N} \left[ f(Y(q)) - \mathbb{E }[f(Y(q))]\right] \!, \end{aligned}$$

and with these notations in mind, observe that

$$\begin{aligned} \mathbb{G }_N h_t=\frac{1}{\sqrt{N}} \sum _{q=1}^{N} \left[ h_t(Y(q)) - g_{\sigma ,u,\sigma _w}(t)\right] \!. \end{aligned}$$

It is now easily seen that

$$\begin{aligned} \varepsilon _{p,N}=\frac{\mathbb{G }_N h_{U_p}}{\sqrt{N}}, \end{aligned}$$

and the key to our result will be to get good control on \(\mathbb{G }_N h_t\) in terms of \(N\), uniformly in \(t\in [0,\infty ]\).

Let us consider the class of functions \(\mathcal{G }=\{h_t;\, t\in [0,+\infty ]\}\). In the terminology of van der Vaart and Wellner (1996), uniform central limit theorems are obtained when \(\mathcal{G }\) is a Donsker class of functions. A typical example of the Donsker setting is provided by some VC classes (see van der Vaart and Wellner 1996, Section 2.6.2). VC classes can be briefly described as sets of functions whose subgraphs can only shatter collections of points in \(\mathbb{R }^2\) of bounded cardinality. For instance, the collection of indicators

$$\begin{aligned} \mathcal{F }=\left\{ \mathbf{1}_{[0,t)}; \, t\in [0,+\infty ]\right\} \end{aligned}$$

is a VC class. Thanks to (van der Vaart and Wellner 1996, Lemma 2.6.18), \(\mathcal{G }\) is also a VC class since it can be written as

$$\begin{aligned} \mathcal{G }=\mathcal{F }\cdot h=\left\{ fh; f\in \mathcal{F }\right\} \!, \end{aligned}$$

where \(h:\mathbb{R }_+\rightarrow \mathbb{R }_+\) is defined by \(h(v)=h_{\infty }(v)=F^2v\).

In order to state our concentration result, we still need to introduce the envelope \( \overline{\mathcal{G }}\) of \(\mathcal{G }\), which is the function \(\overline{\mathcal{G }}:\mathbb{R }_+\rightarrow \mathbb{R }\) defined as

$$\begin{aligned} \overline{\mathcal{G }}(v)=\sup \{f(v);\, f\in \mathcal{G }\},\ v\in \mathbb R _+. \end{aligned}$$

Note that in our particular application, we simply have \( \overline{\mathcal{G }}=h\). Let us also introduce the following notation:

$$\begin{aligned} \mathcal{N }[\mathbb{G }_N;\mathcal{G },\lambda ,m]:= \mathbb{E }^*\left[ e^{\lambda \sup _{f\in \mathcal{G }}|\mathbb{G }_N f|^m} \right] , \quad \hbox {and}\quad \mathcal{N }[h;\lambda ,m]:= \mathbb{E }\left[ e^{\lambda |h(Y)|^m} \right] , \end{aligned}$$
(34)

where \(\mathbb{E }^*\) is the outer expectation (defined in van der Vaart and Wellner 1996 to handle measurability issues), and \(Y\) can be decomposed as \(Y=(X+W)^2\) for a centered generalized Gaussian random variable \(X\) with parameter \(u>0\) and an independent variable \(W\sim \mathcal{N }(0,\sigma _w^2)\). In (34), we also assume \(\lambda >0\) and \(m\ge 0\).

Then, since \(\mathcal{G }\) is a VC class with measurable envelope, \(\mathcal{G }\) is a Donsker class and (van der Vaart and Wellner 1996, Theorem 2.14.5, p. 244) leads to:

$$\begin{aligned} \mathcal{N }[\mathbb{G }_N;\mathcal{G },\lambda ,m] \le c \, \mathcal{N }[h; \lambda ,m], \end{aligned}$$

with \(c\) a finite positive constant which does not depend on \(N,\lambda \) and \(\mathcal{G }\). Furthermore, since \(Y\) can be decomposed as \(Y=(X+W)^2\) and invoking the elementary inequality \((a+b)^p\le 2^{\max (p-1,0)}(a^p+b^p)\), valid for \(a,b\ge 0\) and \(p>0\), it is readily checked that

$$\begin{aligned} \mathcal{N }[h; \lambda ,m]<\infty \end{aligned}$$

for \(\lambda <\gamma _u\) with \(\gamma _u\) defined in (31), and where \(m=\eta _u:=\min \left( \frac{u}{2}, 1\right) \). Recalling now that \(\varepsilon _{p,N}=N^{-1/2}\mathbb{G }_N h_{U_p}\), we have obtained:

$$\begin{aligned} \mathbb E \left[ e ^{\lambda | N^{1/2}\varepsilon _{p,N}|^{\eta _u} }\right] \le \mathcal{N }[\mathbb{G }_N;\mathcal{G },\lambda ,\eta _u]\le c\mathcal{N }[h; \gamma ,\eta _u]=:K<\infty \end{aligned}$$

for \(\lambda \le \gamma <\gamma _u\), which easily implies our claim (32).

Let \(l>0\). Then,

$$\begin{aligned} \mathbb P \left( |\varepsilon _{p,N}|\ge l\right) =\mathbb P \left( e ^{\gamma N^{\eta _u/2} |\varepsilon _{p,N}|^{\eta _u}}\ge e ^{\gamma l^{\eta _u}N^{\eta _u/2}}\right) \!. \end{aligned}$$

The concentration property (33) is thus an easy consequence of (32) and Markov’s inequality. \(\square \)
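Since \(U_0=+\infty \), the first error reduces to \(\varepsilon _{0,N}=F^2\big (\frac{1}{N}\sum _{q} Y(q)-\mathbb{E }[Y]\big )\), and the decay promised by (33) can be observed by brute force. A rough Monte Carlo sketch, with illustrative parameter values:

```python
# Monte Carlo sketch of the concentration bound (33) for eps_{0,N}:
# since U_0 = +infty, eps_{0,N} = F^2 (mean(Y) - E[Y]). Parameters illustrative.
import numpy as np
from scipy.stats import gennorm, norm

rng = np.random.default_rng(1)
F, u, sigma_w, reps, level = 3.0, 2.0, 0.1, 2000, 0.5
EY = gennorm(u).moment(2) + sigma_w ** 2  # E[Y] = E[x^2] + sigma_w^2
for N in (100, 400, 1600, 6400):
    Y = (gennorm(u).rvs((reps, N), random_state=rng)
         + norm(0.0, sigma_w).rvs((reps, N), random_state=rng)) ** 2
    eps = F ** 2 * (Y.mean(axis=1) - EY)
    print(N, np.mean(np.abs(eps) >= level))  # empirical tail; (33) bounds it by K exp(-gamma l^eta N^{eta/2})
```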

1.2 Proof of Theorem 3.1

Observe first that, owing to Proposition 6.1 and inequality (16), we have

$$\begin{aligned} \left| U_k-t_{\sigma ,w}^{*}\right| =\left| g_{\sigma ,u,\sigma _w}^{\circ k}\left( U_0\right) -t_{\sigma ,w}^{*}+R_k\right| \!=\! \left| u_k^w-t_{\sigma ,w}^{*}+R_k\right| \le M \tilde{M}^{k-1} \sigma ^2+\left| R_k \right| \!, \end{aligned}$$

for any \(k\ge 1\). Let then \(\hat{\delta }>0\) and let us fix \(k\ge 1\) such that

$$\begin{aligned} M \tilde{M}^{k-1}\sigma ^2 \le \frac{\hat{\delta }}{2}, \end{aligned}$$
(35)

i.e.

$$\begin{aligned} k\ge 1+ \log \left( \hat{\delta }/(2 M \sigma ^2)\right) /\log ( \tilde{M}). \end{aligned}$$
(36)

Then it is readily checked that:

$$\begin{aligned} \mathbb P \left( \left| U_k-t_{\sigma ,w}^{*}\right| \ge \hat{\delta } \right) \le \mathbb P \left( \left| R_k\right| \ge \frac{\hat{\delta }}{2} \right) , \end{aligned}$$
(37)

and we will now bound the probability on the right hand side of this inequality. For this purpose, let us introduce a little more notation: recall that \(\ell _{2}\) has been defined in Lemma 5.1, and for \(n\ge 1\), let \(\Omega _n\) be the set defined by

$$\begin{aligned} \Omega _n=\left\{ \omega \in \Omega ; \, \inf \left\{ j\ge 0:\, U_j(\omega )\le \sigma ^2 \ell _{2}\right\} =n\right\} \end{aligned}$$

and set also

$$\begin{aligned} \widetilde{\Omega }_k=\bigcup _{n=1}^k \Omega _n\cup \left\{ U_1>M\sigma ^2\right\} . \end{aligned}$$

Then we can decompose (37) into:

$$\begin{aligned} \mathbb P \left( \left| U_k-t_{\sigma ,w}^{*}\right| \ge \hat{\delta } \right) \le \mathbb P \left( \widetilde{\Omega }_k \right) +\mathbb P \left( \widetilde{\Omega }_k^c\cap \left\{ \left| R_k\right| \ge \frac{\hat{\delta }}{2}\right\} \right) . \end{aligned}$$
(38)

We will now control these two terms separately.

Step 1: Upper bound for \(\mathbb P ( \widetilde{\Omega }_k)\). Let us fix \(n\ge 1\) and first study \(\mathbb P \left( \Omega _n \right) \). To this purpose, observe first that

$$\begin{aligned} \Omega _n\subset \left\{ U_{n}\le \sigma ^2 \ell _{2}<U_{n-1} \right\} \!. \end{aligned}$$

Hence, since \(U_{n}=g_{N,w}(U_{n-1})\) and invoking that \(g_{\sigma ,u,\sigma _w}\) is an increasing function, the following relation holds true on \(\Omega _n\):

$$\begin{aligned} g_{N,w}(U_{n-1})=U_{n}\le \sigma ^2 \ell _{2} \quad \hbox {and}\quad g_{\sigma ,u,\sigma _w}(\sigma ^2\ell _{2} )<g_{\sigma ,u,\sigma _w}(U_{n-1}). \end{aligned}$$

We have thus proved that

$$\begin{aligned} \Omega _n\subset \left\{ g_{N,w}(U_{n-1})-g_{\sigma ,u,\sigma _w}(U_{n-1})\le \sigma ^2 \ell _{2}-g_{\sigma ,u,\sigma _w}(\sigma ^2\ell _{2})\right\} \!, \end{aligned}$$

where, by Assertion (1) of Proposition 2.3, \(\sigma ^2\ell _{2}-g_{\sigma ,u,\sigma _w}(\sigma ^2 \ell _{2})=:-L_1<0\). Since \(g_{N,w}(U_{n-1})-g_{\sigma ,u,\sigma _w}(U_{n-1})=\varepsilon _{n-1,N}\) by definition, we end up with:

$$\begin{aligned} \mathbb P (\Omega _n)\le \mathbb P \left( \left| \varepsilon _{n-1,N}\right| \ge L_1\right) \!. \end{aligned}$$

Moreover,

$$\begin{aligned} \mathbb P ( U_1>M\sigma ^2)\le \mathbb P \left( \left| \varepsilon _{0,N}\right| >L_2\right) \end{aligned}$$

with \(L_2=M\sigma ^2 -g_{\sigma ,u,\sigma _w}(+\infty )>0\) by Assertion (1) of Proposition 2.3.

A direct application of Lemma 6.2 now yields the existence of \(\gamma ,K\in (0,\infty )\) such that for all \(n\ge 1\) and all \(N\ge 1\)

$$\begin{aligned} \mathbb P (\Omega _n)\le K \mathrm{e}^{-\gamma L_1^{\eta _{u}} N^{\eta _{u}/2}}\quad \text{and}\quad \mathbb P ( U_1>M\sigma ^2) \le K \mathrm{e}^{-\gamma L_2^{\eta _{u}} N^{\eta _{u}/2}} \end{aligned}$$

with \(\eta _u=\min \left( \frac{u}{2}, 1\right) \). Hence

$$\begin{aligned} \mathbb P (\widetilde{\Omega }_k)\le \sum _{n=1}^k \mathbb P (\Omega _n)+ \mathbb P ( U_1>M\sigma ^2)\le (k+1) K \mathrm e ^{-\gamma L^{\eta _{u}} N^{\eta _{u}/2}} \end{aligned}$$
(39)

where \(L:=\min (L_1,L_2)>0\).

Step 2: Upper bound for \(\mathbb P ( \widetilde{\Omega }_k^c\cap \{ \left| R_k \right| \ge \frac{\hat{\delta }}{2}\} )\). We have constructed the set \(\widetilde{\Omega }_k\) so that, for all \(2\le p\le k+1\), the random variables \(C_p\) introduced in Proposition 6.1 satisfy \(0\le g_{\sigma ,u,\sigma _w}^{\prime }\left( C_p\right) \le \rho :=\tilde{M} <1\) on \(\widetilde{\Omega }_k^c\). Thus

$$\begin{aligned} \mathbb P \left( \widetilde{\Omega }_k^c\cap \left\{ \left| R_k \right| \ge \frac{\hat{\delta }}{2}\right\} \right)&\le \mathbb P \left( \sum _{p=0}^{k-1} \left| \varepsilon _{p,N}\right| \rho ^{k-1-p}\ge \frac{\hat{\delta }}{2}\right) \nonumber \\&\quad \le \mathbb P \left( \sum _{p=0}^{k-1} \left| \varepsilon _{p,N}\right| \nu _p\ge L_{k,\hat{\delta }}\right) \!, \end{aligned}$$
(40)

where we have set

$$\begin{aligned} \nu _p=\frac{\rho ^{k-1-p}(1-\rho )}{1-\rho ^k}, \quad \hbox {and}\quad L_{k,\hat{\delta }} =\frac{\hat{\delta }(1-\rho )}{2(1-\rho ^k)}, \end{aligned}$$

so that \(\{\nu _p;\, 0\le p\le k-1\}\) is a probability measure on \(\{0,\ldots ,k-1\}\).

We now introduce a convex non-decreasing function \(a_u\) which only depends on the shape parameter \(u\), and which behaves like \(\exp (t^{\eta _{u}})\) at infinity. Observe that, setting \(s_u=\left( {1/\eta _u}-1\right) ^{{1/\eta _u}}\), the function \(t\mapsto \exp (t^{\eta _u})\) is concave on \([0,s_u]\) and convex on \([s_u,+\infty )\); indeed, its second derivative \(\eta _u t^{\eta _u-2}\left( \eta _u t^{\eta _u}+\eta _u-1\right) e^{t^{\eta _u}}\) changes sign exactly at \(s_u\). Then, we consider the convex function \(a_u\) defined by

$$\begin{aligned} a_u(t)=e ^{t^{\eta _u}} \mathbf{1}_{[s_u,\infty )}(t) + e ^{s_u^{\eta _u}} \mathbf{1}_{[0,s_u)}(t). \end{aligned}$$
(41)

Observe that if \(u\ge 2\), then \(\eta _u=1\), \(s_u=0\) and \(a_u\) is the exponential map.

Since \(a_u\) is a non-decreasing function, for all \(\lambda > 0\), relation (40) implies that:

$$\begin{aligned} \mathbb P \left( \widetilde{\Omega }_k^c\cap \left\{ \left| R_k \right| \ge \frac{\hat{\delta }}{2}\right\} \right)&\le \mathbb P \left( a_u \left( \lambda \sum _{p=0}^{k-1} \left| \varepsilon _{p,N}\right| \nu _p \right) \ge a_u \left( \lambda L_{k,\hat{\delta }}\right) \right) \\&\le \frac{1}{a_u\left( \lambda L_{k,\hat{\delta }}\right) }\mathbb E \left[ a_u\left( \lambda \sum _{p=0}^{k-1}| \varepsilon _{p,N}| \nu _p \right) \right] \!, \end{aligned}$$

where we have invoked Markov’s inequality for the second step. Hence, applying Jensen’s inequality, for all \(\lambda >0\), we obtain:

$$\begin{aligned} \displaystyle \mathbb P \left( \widetilde{\Omega }_k^c\cap \left\{ \left| R_k \right| \ge \frac{\hat{\delta }}{2}\right\} \right) \le \frac{1}{a_u\left( \lambda L_{k,\hat{\delta }}\right) }\sum _{p=0}^{k-1}\nu _p\mathbb E \left( a_u\left( \lambda | \varepsilon _{p,N}|\right) \right) \!. \end{aligned}$$

Furthermore, owing to the definition (41) of \(a_u\),

$$\begin{aligned} \mathbb E \left( a_u\left( \lambda | \varepsilon _{p,N}|\right) \right) \le \mathbb E \left( e ^{\lambda ^{\eta _{u}}|\varepsilon _{p,N}|^{\eta _{u}}}\right) +e ^{1/\eta _u-1} \end{aligned}$$

for all \(p\ge 0\), all \(N\ge 1\) and all \(\lambda >0\).

Then, applying Lemma 6.2, we have:

$$\begin{aligned} \displaystyle \mathbb P \left( \widetilde{\Omega }_k^c\cap \left\{ \left| R_k \right| \ge \frac{\hat{\delta }}{2}\right\} \right) \le \frac{K+e ^{1/\eta _u-1}}{a_u\left( \lambda L_{k,\hat{\delta }}\right) }, \end{aligned}$$

for any \(\lambda \le \gamma ^{1/\eta _{u}} N^{1/2}\) with \(\gamma <\gamma _u\). Since \(L_{k,\hat{\delta }}\ge (1-\rho )\hat{\delta }/2\) and since \(a_u\) is a non-decreasing function, by choosing \(\lambda =\gamma ^{1/\eta _{u}} N^{1/2}\), we obtain:

$$\begin{aligned} \mathbb P \left( \widetilde{\Omega }_k^c\cap \left\{ |R_k|\ge \frac{\hat{\delta }}{2}\right\} \right) \le \frac{K_1}{a_u\left( \gamma _1\hat{\delta } N^{1/2}\right) }, \end{aligned}$$

with \(\gamma _1=(1-\rho )\gamma ^{1/\eta _{u}}/2>0\) and \(K_1=K+e ^{1/\eta _{u}-1}\).

Choose now \(\hat{\delta }=N^{-\alpha /2}\), with \(\alpha <1\). Observe that for \(N\) large enough, \(\gamma _1\hat{\delta }N^{1/2}>s_u\) and thus \(a_u\left( \gamma _1\hat{\delta } N^{1/2}\right) =e ^{\gamma _1^{\eta _u}N^{(1-\alpha )\eta _{u}/2}}\). Hence, there exists a finite positive constant \(K'\) such that for all \(N\ge 1\)

$$\begin{aligned} \mathbb P \left( \widetilde{\Omega }_k^c\cap \left\{ |R_k|\ge \frac{1}{2N^{\alpha /2}}\right\} \right) \le K'e ^{-\widetilde{\gamma }N^{(1-\alpha )\eta _{u}/2}} \end{aligned}$$
(42)

with \(\widetilde{\gamma }=\gamma _1^{\eta _u}.\)

Step 3: Conclusion. Putting together (38), (39) and (42), choosing \(\hat{\delta }=N^{-\alpha /2}\) with \(\alpha <1\), we end up with:

$$\begin{aligned} \mathbb P \left( \left| U_k-t_{\sigma ,w}^{*}\right| \ge N^{-\alpha /2} \right) \le (k+1)Ke ^{-\gamma L^{\eta _{u}} N^{\eta _{u}/2}}+K'e ^{-\widetilde{\gamma } \ N^{(1-\alpha )\eta _{u}/2}}, \end{aligned}$$
(43)

for any \(k\) satisfying (36). Choose now \(k=k(N):=[C\alpha \log (N)]+1\). If the following condition holds true:

$$\begin{aligned} \lim _{N\rightarrow +\infty } \left( k+\frac{\alpha }{2} \log (N)/\log ( \tilde{M})\right) =+\infty , \end{aligned}$$

i.e. if \(C>-{1}/(2\log ( \tilde{M}))\), then for \(N\ge N_0\) with \(N_0\) large enough, (36) holds. We thus choose \(C=-1/(2\log ( \tilde{M}))+\eta \) with \(\eta >0\). Then, for \(N\ge N_0\) and \(k=k(N):=[C\alpha \log (N)]+1\), we have (43). Therefore, since \((1-\alpha )\eta _{u}/2\le \eta _u/2\), we have proved that there exists a positive finite constant \(A\) such that for all \(N\in \mathbb{N }^*\),

$$\begin{aligned} \mathbb P \left( \left| U_k-t_{\sigma ,w}^*\right| \ge N^{-\alpha /2} \right) \le Ae ^{-\widetilde{\gamma } N^{(1-\alpha )\eta _{u}/2}}, \end{aligned}$$

which is the desired result. \(\square \)
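The step-count prescription used in the proof, \(k(N)=[C\alpha \log (N)]+1\) with \(C=-1/(2\log \tilde{M})+\eta \), shows that only \(O(\log N)\) peeling iterations are required. A small arithmetic sketch, where the values of \(\tilde{M}\), \(\alpha \) and \(\eta \) are placeholders rather than quantities computed in the paper:

```python
# Arithmetic sketch of the prescription k(N) = [C alpha log N] + 1 with
# C = -1/(2 log(M_tilde)) + eta; M_tilde, alpha, eta are placeholder values.
import math

M_tilde, alpha, eta = 0.8, 0.5, 0.05
C = -1.0 / (2.0 * math.log(M_tilde)) + eta
for N in (10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6):
    k = int(C * alpha * math.log(N)) + 1
    print(N, k)  # O(log N) steps suffice for accuracy N^{-alpha/2}
```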

1.3 Proof of Proposition 3.4

In the subcritical case, the following property holds true for the function \( g_{1,u}\) defined by (11): there exists a constant \(\kappa _1\in (0,1)\) such that \(0\le g_{1,u}(t)\le \kappa _1 t\) for all \(t\ge 0\).

Let us now fix \(\kappa _2\in (\kappa _1,1)\) and \(L\in (0,\infty )\). Then, by (13) and (23), for \(\sigma _w/\sigma \le c\) with \(c\) small enough,

$$\begin{aligned} \forall t\in [0,L],\, 0\le g_{\sigma ,u,\sigma _w}(t)\le \kappa _2 t. \end{aligned}$$

Since \(g_{\sigma ,u,\sigma _w}\) is upper bounded by \(2F^2\sigma ^2\) (for \(\sigma _w/\sigma \le c\) with \(c\) small enough), choosing \(L\) such that \(\kappa _2 L >2F^2\sigma ^2\), the previous equation holds on \([0,\infty )\). We thus have the following relation for the noisy dynamics of \(U_k\): for every \(k\ge 2\),

$$\begin{aligned} U_{k}=g_{\sigma ,u,\sigma _w}(U_{k-1})+\varepsilon _{k-1,N}\le \kappa _2 U_{k-1}+\varepsilon _{k-1,N}. \end{aligned}$$

Iterating this inequality, we have: for every \(k\ge 2\),

$$\begin{aligned} U_{k} \le \kappa _2^{k-1} U_{1}+ \sum _{j=1}^{k-1} \kappa _2^{j-1} \varepsilon _{k-j,N}. \end{aligned}$$

Since \(U_1=F^2(\sigma ^2+\sigma _w^2) +\varepsilon _{0,N}\), we end up with:

$$\begin{aligned} U_{k} \le \kappa _2^{k-1} F^2(\sigma ^2+\sigma _w^2) + \sum _{j=1}^{k} \kappa _2^{j-1} \varepsilon _{k-j,N}, \end{aligned}$$
(44)

a relation which is valid for any \(k\ge 1\).

Consider now \(\alpha <1\), assume that \(\sigma _w/\sigma \le c\) and choose \(C>-\alpha /(2\log (\kappa _2))\). Then there exists \(N_0\in \mathbb{N }^*\) such that for any integers \(N\ge N_0\) and \(k\ge C\log (N)\),

$$\begin{aligned} \kappa _2^{k-1} F^2(\sigma ^2+\sigma _w^2) \le \kappa _2^{k-1} F^2\sigma ^2(1+c^2)\le \frac{N^{-\alpha /2}}{2}. \end{aligned}$$

Hence, for any integers \(N\ge N_0\) and \(k\ge C\log (N)\), invoking (44), we have:

$$\begin{aligned} \mathbb P \left( U_k \ge N^{-\alpha /2} \right) \le \mathbb P \left( \sum _{j=1}^{k} \kappa _2^{j-1} \varepsilon _{k-j,N} \ge \frac{N^{-\alpha /2}}{2}\right) \!. \end{aligned}$$

We are thus back to the setting of Step 2 in the proof of Theorem 3.1. Along the same lines as in that proof (changing just the names of the constants), the reader can now easily check inequality (19). \(\square \)
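In the subcritical regime, the simulation sketched after Proposition 6.1 can be rerun with a smaller \(F\) (assumed below the critical constant \(F_c\) for the chosen \(u\); all values here are illustrative) to exhibit the geometric collapse of \(U_k\) to \(0\) quantified by Proposition 3.4.

```python
# Subcritical variant of the earlier dynamics sketch: with F assumed below
# F_c for this u, the iterates U_k collapse to 0 (Proposition 3.4).
import numpy as np
from scipy.stats import gennorm, norm

rng = np.random.default_rng(2)
F, u, sigma_w, N = 1.0, 2.0, 0.1, 10_000
Y = (gennorm(u).rvs(N, random_state=rng) + norm(0.0, sigma_w).rvs(N, random_state=rng)) ** 2

U = np.inf
for k in range(1, 13):
    U = F ** 2 * np.mean(Y * (Y < U))
    print(k, U)  # roughly geometric decay, consistent with the kappa_2 bound (44)
```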

Cite this article

Lacaux, C., Muller-Gueudin, A., Ranta, R. et al. Convergence and performance of the peeling wavelet denoising algorithm. Metrika 77, 509–537 (2014). https://doi.org/10.1007/s00184-013-0451-y
