Abstract
This note is devoted to an analysis of the so-called peeling algorithm in wavelet denoising. Assuming that the wavelet coefficients of the useful signal are modeled by generalized Gaussian random variables and its noisy part by independent Gaussian variables, we compute a critical thresholding constant for the algorithm, which depends on the shape parameter of the generalized Gaussian distribution. We also quantify the optimal number of steps which have to be performed, and analyze the convergence of the algorithm. Several implementations are tested against classical wavelet denoising procedures on benchmark and simulated biological signals.
References
Antoniadis A, Bigot J, Sapatinas T (2001) Wavelet estimators in non-parametric regression: a comparative simulation study. J Stat Softw 6(6):61–83
Buccigrossi R, Simoncelli E (1999) Image compression via joint statistical characterization in the wavelet domain. IEEE Trans Image Process 8(12):1688–1701
Cai T, Silverman B (2001) Incorporating information on neighbouring coefficients into wavelet estimation. Sankhyā: Indian J Stat Special Issue Wavelets 63(2):127–148
Cai T, Zhou H (2009) A data-driven block thresholding approach to wavelet estimation. Ann Stat 37(2):569–595
Chesneau C (2007) Wavelet block thresholding for samples with random design: a minimax approach under the \(L^p\) risk. Electron J Stat 1:331–346
Coifman R, Wickerhauser M (1995) Adapted waveform de-noising for medical signals and images. IEEE Eng Med Biol Mag 14(5):578–586
Daubechies I (1992) Ten lectures on wavelets. In: CBMS-NSF Regional Conference Series in Applied Mathematics, vol 61. SIAM, Philadelphia
Do M, Vetterli M (2002) Wavelet-based texture retrieval using generalized Gaussian density and Kullback–Leibler distance. IEEE Trans Image Process 11(2):146–158
Donoho D, Johnstone I (1994) Ideal spatial adaptation via wavelet shrinkage. Biometrika 81:425–455
Donoho D, Johnstone I, Kerkyacharian G, Picard D (1995) Wavelet shrinkage: asymptopia? With discussion and a reply by the authors. J R Stat Soc Ser B 57(2):301–369
Giné E, Nickl R (2009) Uniform limit theorems for wavelet density estimators. Ann Probab 37(4):1605–1646
Hadjileontiadis L, Panas S (1997) Separation of discontinuous adventitious sounds from vesicular sounds using a wavelet based filter. IEEE Trans Biomed Eng 44(12):1269–1281
Mallat S (1989) A theory for multiresolution signal decomposition: the wavelet representation. IEEE Trans Pattern Anal Mach Intell 11(7):674–693
Mallat S (1997) A wavelet tour of signal processing. Academic Press, London
Moulin P, Liu J (1999) Analysis of multiresolution image denoising schemes using generalized Gaussian and complexity priors. IEEE Trans Inf Theory 45(3):909–919
Pižurica A, Philips W (2006) Estimating the probability of the presence of a signal of interest in multiresolution single- and multiband image denoising. IEEE Trans Image Process 15(3):654–665
Ranta R, Heinrich C, Louis-Dorr V, Wolf D (2003) Interpretation and improvement of an iterative wavelet-based denoising method. IEEE Signal Process Lett 10(8):239–241
Ranta R, Heinrich C, Louis-Dorr V, Wolf D (2005) Iterative wavelet-based denoising methods and robust outlier detection. IEEE Signal Process Lett 12(8):557–560
Ranta R, Louis-Dorr V, Heinrich C, Wolf D, Guillemin F (2010) Digestive activity evaluation by multi-channel abdominal sounds analysis. IEEE Trans Biomed Eng 57(6):1507–1519
Simoncelli E, Buccigrossi R (1997) Embedded wavelet image compression based on a joint probability model. In: 4th IEEE international conference on image processing, ICIP. Santa Barbara, USA
van der Vaart A, Wellner J (1996) Weak convergence and empirical processes. Springer, Berlin
Additional information
C. Lacaux, A. Muller-Gueudin and S. Tindel are members of the BIGS (BIology, Genetics and Statistics) team at INRIA.
Appendices
Appendix 1: Proof of Proposition 2.3
1.1 Preliminaries
This section gives the main tools needed to prove Proposition 2.3. The first lemma establishes some useful properties of \(g_{1,u}\), which is defined by (11), and of the associated deterministic dynamics.
Lemma 5.1
Assume \(F>F_c\). Let \(g_{1,u}\,:\mathbb{R }_+\rightarrow \mathbb{R }_+\) be defined by (11), where \(u\) is the shape parameter given in Hypothesis 1.1. Let \(\ell _1<t^*\) be the two positive fixed points of \(g_{1,u}\) as defined in Lemma 2.2.
- (1) There exists \(\ell _2\in (\ell _1,t^*)\) such that \(g_{1,u}(\ell _2)>\ell _2\), \(g_{1,u}^{\prime }(\ell _2)< 1\), and \(g_{1,u}\) is concave on \([\ell _2,\infty )\).
- (2) Define the deterministic sequence \(\{u_k;\, k\ge 0\}\) recursively by
  $$\begin{aligned} \left\{ \begin{array}{ll} u_0 =+\infty \\ u_{k+1} =g_{1,u}(u_k), \quad k\ge 0. \end{array}\right. \end{aligned}$$ (20)
  Then for \(k\ge 1\),
  $$\begin{aligned} |u_k-t^*|\le M_{t^*}^{k-1} \left( F^2 -t^*\right) , \end{aligned}$$ (21)
  where \(M_{t^*}=g_{1,u}^{\prime }(t^*)\in (0,1)\).
Proof
The first assertion is easily deduced from the variations of \(t\mapsto d_{1,u}(t)=g_{1,u}(t)-t\), and its proof is left to the reader. Let us now prove the second assertion. According to Lemma 2.2, \(g_{1,u}\) is an increasing function and has exactly three fixed points: \(0<\ell _1<t^*\). Then the sequence \(\{u_k; k\ge 0\}\), defined by (20), is decreasing and converges to \(t^*\) as \(k\rightarrow \infty \). Furthermore,
with \(M_{t^*}=\sup \{|g_{1,u}^{\prime }(t)|;\, t\ge t^*\}\). Since \(g_{1,u}\) is increasing and concave on \([\ell _2,+\infty )\) with \(\ell _2<t^*\) and \(g_{1,u}^{\prime }(\ell _2)<1\),
Assertion (2) then follows by a straightforward induction, which completes the proof of Lemma 5.1. \(\square \)
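To fix ideas, the contraction mechanism behind (21) can be checked numerically. The map below is merely a stand-in with the qualitative features of \(g_{1,u}\) stated in Lemma 2.2 (increasing, saturating at \(F^2\), three fixed points \(0<\ell _1<t^*\)); it is not the actual function (11), and all parameter values are illustrative only.

```python
# Stand-in map with the qualitative shape of g_{1,u}: increasing, g(0) = 0,
# limit F^2 at infinity, and fixed points 0 < ell_1 = 1 < t_star = 3 when
# F^2 = 4, a = 3. (Illustrative choice, not the actual definition (11).)
F2, a = 4.0, 3.0
g = lambda t: F2 * t * t / (t * t + a)

t_star = 3.0
u = 1e6  # u_0 = +infinity is approximated by a large starting value
errs = []
for _ in range(40):
    u = g(u)
    errs.append(abs(u - t_star))

# Geometric convergence as in (21): successive error ratios approach
# g'(t_star) = 24 t / (t^2 + 3)^2 evaluated at t = 3, i.e. 0.5.
ratios = [errs[k + 1] / errs[k] for k in range(25, 35)]
print(u, ratios[-1])
```

The error ratio stabilizing at \(g^{\prime }(t^*)\) is exactly the statement of (21) with \(M_{t^*}=g_{1,u}^{\prime }(t^*)\).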
The following lemma compares the functions \(g_{\sigma ,u,\sigma _w}\) and \(g_{\sigma ,u}\) defined by (10) and (11) respectively.
Lemma 5.2
Assume \(F>F_c\). For \(u>0\), \(\sigma >0\) and \(\sigma _w>0\), let \(g_{\sigma ,u}\) and \(g_{\sigma ,u,\sigma _w}\) be defined by (11) and (10), with \(p_{\sigma ,u}\) and \(p_{\sigma _w}\) introduced in Hypothesis 1.1. Let \(\ell _1<t^*\) be the two positive fixed points of \(g_{1,u}\) as defined in Lemma 2.2. Then there exists a constant \(C:=C(u)\in (0,\infty )\), which does not depend on \((\sigma ,\sigma _w,F)\), such that for any \(t\in \mathbb{R }_+\) we have
and
In particular, \(g_{\sigma ,u,\sigma _w}\rightarrow g_{\sigma ,u}\) and \(g_{\sigma ,u,\sigma _w}^{\prime }\rightarrow g_{\sigma ,u}^{\prime }\) uniformly on every compact set of \(\mathbb{R }_+\), as \(\sigma _w\) goes to 0.
Proof
Since (23) is a direct consequence of (22), we only prove (22). By definition of \(g_{\sigma ,u,\sigma _w}\) and \(g_{\sigma ,u}\), for any \(t\in \mathbb{R }_+\),
Notice that for all \(y\in \mathbb{R }_+\)
It can be readily checked that
Let us first assume \(u\ge 1\). Then \(t\mapsto p_{1,u}(t)\) is \(\mathcal C ^1\) on \(\mathbb{R }\) and its derivative \(p_{1,u}^{\prime }\) is bounded on \(\mathbb{R }\). In this case,
Assume now that \(u\in (0,1]\). Then by the Mean Value Theorem applied to the exponential map,
Since \(a^\gamma -b^\gamma \le | a-b|^\gamma \) for any \(\gamma \in (0,1)\) and \(0\le b\le a\), one checks that
Plugging (26) or (27) in (25), we now get the existence of a finite positive constant \(c:=c(u,\alpha ,\beta )\) which only depends on \(u,\alpha ,\beta \) such that
Since \(p_{\sigma _w}\) is the density of a centered Gaussian variable of variance \(\sigma _w^2\),
with \(W\) a standard Gaussian variable. This inequality and Eq. (24) lead to (22) upon setting \(C=c\mathbb E (|W|^{\min (1,u)})\), which concludes the proof. \(\square \)
1.2 Proof of Proposition 2.3
This section is devoted to the proof of Proposition 2.3. Throughout the proof, \(c\) and \(C\) denote unspecified finite positive constants which may change from one occurrence to the next and depend neither on the standard deviation \(\sigma \) of the signal \(x\) nor on the standard deviation \(\sigma _w\) of the noise \(w\). Let us recall that \(g_{\sigma ,u,\sigma _w}\) is defined by (10).
- (1) First observe that
$$\begin{aligned} \forall t\in \mathbb{R }_+,\quad g_{\sigma ,u,\sigma _w}(t) \le 2 F^2 \int \limits _{0}^{+\infty } y^2 p_{\sigma ,u} * p_{\sigma _w} (y) dy= F^2 \mathbb E \left( z(1)^2\right) \!, \end{aligned}$$
owing to the fact that \(p_{\sigma ,u} * p_{\sigma _w} \) is the density of the wavelet coefficient \(z(1)=x(1)+w(1)\). Since the centered random variables \(x(1)\) and \(w(1)\) are independent, this leads to
$$\begin{aligned} \forall t\in \mathbb{R }_+,\quad g_{\sigma ,u,\sigma _w}(t) \le F^2\left( \sigma ^2+{\sigma _{w}^2}\right) \!. \end{aligned}$$
Thanks to the relation \(M>F^2\), there exists a finite positive constant \(c_1:=c_1(M,F)\), depending only on \(M\) and \(F\), such that if \(\sigma _w/\sigma \le c_1\), then \(F^2\left( \sigma ^2+{\sigma _{w}^2}\right) <M\sigma ^2\), and hence
$$\begin{aligned} \sup \{ g_{\sigma ,u,\sigma _w}(t); t\in \mathbb{R }_+\} <M\sigma ^2. \end{aligned}$$
Let \(\ell _2\in (\ell _1,t^*)\) be given by Lemma 5.1. Note that \(\ell _2\) only depends on \(g_{1,u}\), and thus on the parameters \(F\) and \(u\). Since \(t^*\) is a fixed point of the increasing function \(g_{1,u}\), we get
$$\begin{aligned} \ell _2< t^*\le F^2=\lim _{t\rightarrow +\infty } g_{1,u}(t)<M. \end{aligned}$$
Hence, applying (23) and (13), we have
$$\begin{aligned} g_{\sigma ,u,\sigma _w}\left( \sigma ^2\ell _2\right) \ge \sigma ^2 \left( g_{1,u}\left( \ell _2\right) - C \left( \frac{\sigma _w}{\sigma }\right) ^{\min (1,u)}\right) \end{aligned}$$
where \(C:=C(M,F,u)\in (0,+\infty )\) does not depend on \((\sigma ,\sigma _w)\). Since \(g_{1,u}\left( \ell _2\right) >\ell _2\) by Lemma 5.1, the previous inequality leads to the existence of a constant \(c:=c(M,F,u)\) such that, if \(\sigma _w/\sigma \le c\),
$$\begin{aligned} g_{\sigma ,u,\sigma _w}\left( \sigma ^2\ell _2\right) >\sigma ^2\ell _2. \end{aligned}$$
This completes the proof of Assertion (1).
- (2)
$$\begin{aligned} \forall t\in [\sigma ^2\ell _2,\sigma ^2 M],\quad g'_{\sigma ,u,\sigma _w}\left( t\right) \le g'_{1,u}\left( \frac{t}{\sigma ^2}\right) +C \left( \frac{\sigma _w}{\sigma }\right) ^{\min (1,u)} \end{aligned}$$
where \(C:=C(M,F,u)\in (0,+\infty )\). Thanks to Lemma 5.1,
$$\begin{aligned} \sup \left\{ g_{1,u}^{\prime }(y),y\ge \ell _2\right\} =g_{1,u}^{\prime }(\ell _2)<1. \end{aligned}$$ (28)
Then, choosing \(c:=c(M,F,u)\) small enough, if \(\sigma _w/\sigma \le c\), we obtain
$$\begin{aligned} \forall t\in [\sigma ^2\ell _2,\sigma ^2 M],\quad g_{\sigma ,u,\sigma _w}^{\prime }\left( t\right) \le {g_{1,u}^{\prime }(\ell _2)}+C c^{\min (1,u)}=:\tilde{M}<1, \end{aligned}$$ (29)
which establishes Assertion (2).
- (3)
Let us now prove Assertion (3). Assume that \(\sigma _w/ \sigma \le c\). By Assertions (1) and (2), \(d_{\sigma ,u,\sigma _w}:t\mapsto g_{\sigma ,u,\sigma _w}(t)-t\) is a decreasing function on \([\sigma ^2\ell _2,\sigma ^2 M]\) such that \(d_{\sigma ,u,\sigma _w}(\sigma ^2\ell _2)>0\) and \( d_{\sigma ,u,\sigma _w}(\sigma ^2 M)<0\). Hence there exists a unique number \(t_{\sigma ,w}^*\in (\sigma ^2\ell _2,\sigma ^2 M)\) such that \(g_{\sigma ,u,\sigma _w}(t_{\sigma ,w}^*)=t_{\sigma ,w}^*\). Moreover, since \(g_{\sigma ,u,\sigma _w}\) takes its values in \([0,\sigma ^2 M)\), \(t_{\sigma ,w}^*\) is the only fixed point of \(g_{\sigma ,u,\sigma _w}\) in \([\sigma ^2\ell _2,\infty )\). Consider now the sequence \(\{u_k^w;\, k\ge 0 \}\) defined by Eq. (15). Since \(g_{\sigma ,u,\sigma _w}\) is an increasing function which admits \(t_{\sigma ,w}^*\) as its unique fixed point in \([\sigma ^2\ell _2,\infty )\), it is easily seen that \(\{u_k^w;\, k\ge 0 \}\) is a decreasing sequence such that \(\lim _{k\rightarrow \infty } u_k^w=t^*_{\sigma ,w}\). Moreover, for any \(k\ge 1\), \(u_k^w\in [ t^*_{\sigma ,w}, F^2(\sigma ^2+\sigma _w^2)]\subset [\sigma ^2\ell _2,\sigma ^2M]\). Then, using that \(t^*_{\sigma ,w}\in (\sigma ^2\ell _2,\sigma ^2 M)\) is a fixed point together with Eq. (29), we get
$$\begin{aligned} |u_{k+1}^w-t^*_{\sigma ,w}| \le \tilde{M}^k |u_1^w-t^*_{\sigma ,w}| \end{aligned}$$
for any \(k\ge 1\). We can now bound \(|u_1^w-t^*_{\sigma ,w}|\) trivially as follows:
$$\begin{aligned} |u_1^w-t^*_{\sigma ,w}| = u_1^w-t^*_{\sigma ,w}\le M\sigma ^2, \end{aligned}$$
so that we end up with
$$\begin{aligned} |u_{k+1}^w-t^*_{\sigma ,w}|\le M \sigma ^2 \tilde{M}^k \end{aligned}$$
for any \(k\ge 1\). This equation, which is Eq. (16), also holds for \(k=0\). Consider now the sequence \(\{u_k;\, k\ge 0 \}\) defined by Eq. (20). Using (13), (23), (28) and the Mean Value Theorem, we get:
$$\begin{aligned} \begin{array}{ll} |u_{k+1}^w-\sigma ^2 u_{k+1}|&{}=|g_{\sigma ,u,\sigma _w}(u_{k}^w)-\sigma ^2 g_{1,u}(u_{k})|\\ &{}\le |g_{\sigma ,u,\sigma _w}(u_{k}^w)-g_{\sigma ,u}(u_{k}^w)| +\sigma ^2 |g_{1,u}(u_{k}^w/\sigma ^2)-g_{1,u}(u_{k})|\\ &{}\le C\sigma ^2\left( \dfrac{\sigma _w}{\sigma }\right) ^{\min (1,u)}+{g'_{1,u}(\ell _2)} |u_{k}^w- \sigma ^2u_{k}|\\ \end{array} \end{aligned}$$
since \(u_k^w/\sigma ^2,u_k\in [\ell _2,\infty )\) and \(u_k^w\le M\sigma ^2\) for \(k\ge 1\). Iterating this procedure, with \(C:=C(M,F,u)\) allowed to change from one occurrence to the next, we get:
$$\begin{aligned} |u_{k+1}^w-\sigma ^2 u_{k+1}|&\le C \sigma ^2\left( \frac{\sigma _w}{\sigma }\right) ^{\min (1,u)} \sum \limits _{n=0}^{k-1}{g_{1,u}^{\prime }(\ell _2)}^n+{g_{1,u}^{\prime }(\ell _2)}^k |u_{1}^w- \sigma ^2u_{1}|\\&\le {C\sigma ^2}\left( \frac{\sigma _w}{\sigma }\right) ^{\min (1,u)} +F^2\sigma _w^2\,{g'_{1,u}(\ell _2)}^k \end{aligned}$$
since \({g_{1,u}^{\prime }(\ell _2)}<1\), \(u_1^w=F^2(\sigma ^2+\sigma _w^2)\) and \(u_1=F^2\). Taking limits as \(k\rightarrow \infty \) in the relation above, we get (17), which ends the proof. \(\square \)
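Proposition 2.3 can also be checked numerically in the special case \(u=2\), where \(z\) is Gaussian with variance \(\sigma ^2+\sigma _w^2\) and \(g_{\sigma ,u,\sigma _w}(t)=F^2\,\mathbb E \left[ z^2\mathbf{1}_{\{z^2<t\}}\right] \) (the form \(Ph_t\) used in Appendix 2) has a closed form via the truncated second moment of a Gaussian. The value \(F^2=6\) is a hypothetical supercritical choice, used here purely for illustration:

```python
import math

# For u = 2 the coefficient z is Gaussian with variance s2 = sigma^2 + sigma_w^2,
# so g(t) = F^2 E[z^2 1{z^2 < t}] has a closed form. F^2 = 6 is a hypothetical
# supercritical choice for this sketch.
def g(t, F2, s2):
    a = math.sqrt(t / s2)
    phi = math.exp(-a * a / 2) / math.sqrt(2 * math.pi)
    return F2 * s2 * (math.erf(a / math.sqrt(2)) - 2 * a * phi)

def fixed_point(F2, s2, t=1e9, n_iter=200):
    # mimics the deterministic dynamics u_0 = +infinity, u_{k+1} = g(u_k)
    for _ in range(n_iter):
        t = g(t, F2, s2)
    return t

F2, sigma, sigma_w = 6.0, 1.0, 0.05
t_clean = fixed_point(F2, sigma**2)               # sigma_w = 0: fixed point of g_{sigma,u}
t_noisy = fixed_point(F2, sigma**2 + sigma_w**2)  # small noise: t*_{sigma,w}
print(t_clean, t_noisy)
```

The two fixed points are close when \(\sigma _w/\sigma \) is small, in accordance with (17).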
Appendix 2: Probabilistic analysis of the algorithm
1.1 Preliminaries: comparison of the noisy and deterministic dynamics
As mentioned in Eq. (8), the exact dynamics governing the sequence \(\{ U_k;\, k\ge 0 \}\) is of the form \(U_{k+1}=g_{N,w}(U_k)\), with \(g_{N,w}\) defined by (8). In order to compare this with the deterministic dynamics (15), let us recast this relation as:
Notice that the errors \(\varepsilon _{k,N}\) are far from being independent, which means that the relation above does not define a Markov chain. However, a fairly simple expression is available for \(U_k\):
Proposition 6.1
Let \(U_k\) be defined by (7), \(g_{\sigma ,u,\sigma _w} \) by (10) and \(\varepsilon _{k,N}\) by (30). For \(k\ge 0\), write \(g_{\sigma ,u,\sigma _w}^{\circ k}\) for the \(k\)th iterate of \(g_{\sigma ,u,\sigma _w}\). Then for \(k\ge 0\), we have:
where, for \(j\ge 2\), the random variable \(C_{j}\) is a point of the interval \([g_{\sigma ,u,\sigma _w}^{\circ (j-1)}(U_0); \, U_{j-1}]\). In the definition of \(R_{k}\), we have also used the conventions \(\prod _{q=2}^{1}a_q=1\) and \(R_0=0\).
Proof
It is easily seen inductively that \(R_0=0\), \(R_{1}=\varepsilon _{0,N}\) and, for \(k\ge 1\),
Hence, by a backward induction, we obtain:
which ends the proof. \(\square \)
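The noisy dynamics can also be simulated directly. The sketch below assumes the empirical form \(U_{k+1}=g_{N,w}(U_k)=\frac{F^2}{N}\sum _q z(q)^2\mathbf{1}_{\{z(q)^2<U_k\}}\) (consistent with the empirical-process expressions appearing in the proof of Lemma 6.2), specialized to \(u=2\) so that the signal coefficients are Gaussian; \(F^2=6\), \(\sigma =1\) and \(\sigma _w=0.1\) are illustrative values only:

```python
import math, random

# Monte Carlo sketch of the noisy peeling dynamics U_{k+1} = g_{N,w}(U_k),
# assuming the update averages the squared coefficients lying under the
# current threshold. u = 2 (Gaussian signal); F^2 = 6 is a hypothetical
# supercritical choice.
random.seed(0)
N, F2 = 100_000, 6.0
sigma, sigma_w = 1.0, 0.1
z2 = [(random.gauss(0, sigma) + random.gauss(0, sigma_w)) ** 2 for _ in range(N)]

U = math.inf  # U_0 = +infinity: the first step uses all coefficients
traj = []
for _ in range(30):
    U = F2 * sum(v for v in z2 if v < U) / N
    traj.append(U)

# U_1 is close to F^2 (sigma^2 + sigma_w^2); the thresholds then decrease
# toward the empirical counterpart of the noisy fixed point t*_{sigma,w}
print(traj[0], traj[-1])
```

Once the set of coefficients below the threshold stabilizes, the iteration becomes exactly constant, which is the empirical analogue of the convergence to \(t^*_{\sigma ,w}\) in Proposition 2.3.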
A useful property of the errors \(\varepsilon _{p,N}\) is that they concentrate exponentially fast (in terms of \(N\)) around 0. This can be quantified in the following:
Lemma 6.2
Assume that our signal \(z=x+w\) satisfies Hypothesis 1.1, and recall that \(F\) is defined by Eq. (3). Set
where the parameters \(u,\sigma ,\sigma _w\) and \(\beta \) are defined in Hypothesis 1.1. Then for every \(0<\gamma <\gamma _u\), there exists a finite constant \(K>0\) such that for all \(N\ge 1\), all \(p\ge 0\) and all \(\lambda \in [0,\gamma N^{\eta _u/2}]\),
Moreover, for all \(N\ge 1\), \(p\ge 0\) and \(l>0\),
Proof
Recall that \(\varepsilon _{p,N}\) is defined by:
for a collection \(\{Y(q);\, q\le N\}\) of i.i.d. random variables, where \(Y(q)=z(q)^2\). Moreover, \(z(q)=x(q)+w(q)\), with \(x(q)\) a centered generalized Gaussian random variable with parameter \(u>0\) [whose density is given by (5)] and \(w(q)\sim \mathcal{N }(0,\sigma _w^2)\). For a fixed positive \(t\), the fluctuations \(g_{N,w}(t)-g_{\sigma ,u,\sigma _w}(t)\) are easily controlled thanks to the classical central limit theorem or a large deviations principle. The difficulty in our case arises from the fact that \(U_p\) is itself a random variable, which rules out the possibility of applying those classical results directly. However, uniform central limit theorems and deviation inequalities have been thoroughly studied, and our result will be obtained by translating our problem into the framework of empirical processes, as in Vaart and Wellner (1996).
In order to express \(\varepsilon _{p,N}\) in terms of empirical processes, consider \(t\in [0,\infty ]\) and define \(h_t:\mathbb{R }_+\rightarrow \mathbb{R }_+\) by \(h_t(v)=F^2v\, \mathbf{1}_{\{ v < t\}}\). Next, for \(f:\mathbb{R }_+\rightarrow \mathbb{R }\), set
and with these notations in mind, observe that
It is now easily seen that
and the key to our result will be to get good control on \(\mathbb{G }_N h_t\) in terms of \(N\), uniformly in \(t\in [0,\infty ]\).
Let us consider the class of functions \(\mathcal{G }=\{h_t;\, t\in [0,+\infty ]\}\). According to the terminology of Vaart and Wellner (1996), uniform central limit theorems are obtained when \(\mathcal{G }\) is a Donsker class of functions. A typical example of the Donsker setting is provided by VC classes (see Vaart and Wellner 1996, Section 2.6.2). VC classes can be briefly described as sets of functions whose subgraphs can only shatter collections of points in \(\mathbb{R }^2\) of bounded cardinality. For instance, the collection of indicators
is a VC class. Thanks to (Vaart and Wellner 1996, Lemma 2.6.18), \(\mathcal{G }\) is also a VC class since it can be written as
where \(h:\mathbb{R }_+\rightarrow \mathbb{R }_+\) is defined by \(h(v)=h_{\infty }(v)=F^2v\).
In order to state our concentration result, we still need to introduce the envelope \( \overline{\mathcal{G }}\) of \(\mathcal{G }\), which is the function \(\overline{\mathcal{G }}:\mathbb{R }_+\rightarrow \mathbb{R }\) defined as
Note that in our particular application, we simply have \( \overline{\mathcal{G }}=h\). Let us also introduce the following notation:
where \(\mathbb{E }^*\) is the outer expectation (defined in Vaart and Wellner 1996 for measurability issues), and \(Y\) can be decomposed as \(Y=(X+W)^2\) for a centered generalized Gaussian random variable \(X\) with parameter \(u>0\) and an independent variable \(W\sim \mathcal{N }(0,\sigma _w^2)\). In (34), we also assume \(\lambda >0\) and \(m\ge 0\).
Then, since \(\mathcal{G }\) is a VC class with measurable envelope, \(\mathcal{G }\) is a Donsker class and (Vaart and Wellner 1996, Theorem 2.14.5 p. 244) leads to:
with \(c\) a finite positive constant which does not depend on \(N,\lambda \) and \(\mathcal{G }\). Furthermore, since \(Y\) can be decomposed as \(Y=(X+W)^2\) and invoking the elementary inequality \((a+b)^p\le 2^{\max (p-1,0)}(a^p+b^p)\), valid for \(a,b\ge 0\) and \(p>0\), it is readily checked that
for \(\lambda <\gamma _u\) with \(\gamma _u\) defined at (31), and where \(m=\eta _u:=\min \left( \frac{u}{2}, 1\right) \). Recalling now that \(\varepsilon _{p,N}=N^{-1/2}\mathbb{G }_N h_{U_p}\), we have obtained:
for \(\lambda \le \gamma <\gamma _u\), which easily implies our claim (32).
Let \(l>0\). Then,
The concentration property (33) is thus an easy consequence of (32) and Markov’s inequality. \(\square \)
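The quantity controlled in this proof, \(\sup _t|\mathbb{G }_N h_t|\), can be computed exactly from a sample: since \(t\mapsto \mathbb{P }_N h_t\) only jumps at the points \(z(q)^2\), the supremum over \(t\) is attained at those points or at \(t=\infty \). The sketch below specializes to \(u=2\) (Gaussian \(z\)), where \(Ph_t\) has a closed form; all parameter values are illustrative:

```python
import math, random

# sup over t of |G_N h_t| = sqrt(N) sup_t |P_N h_t - P h_t|, h_t(v) = F^2 v 1{v < t},
# computed exactly by scanning the jump points of the empirical measure.
random.seed(0)
N, F2, s2 = 10_000, 4.0, 1.01  # Var(z) = sigma^2 + sigma_w^2 (illustrative)

def Ph(t):  # E[F^2 z^2 1{z^2 < t}] for Gaussian z with variance s2
    a = math.sqrt(t / s2)
    phi = math.exp(-a * a / 2) / math.sqrt(2 * math.pi)
    return F2 * s2 * (math.erf(a / math.sqrt(2)) - 2 * a * phi)

z2 = sorted(s2 * random.gauss(0.0, 1.0) ** 2 for _ in range(N))
partial, sup = 0.0, 0.0
for v in z2:
    sup = max(sup, abs(F2 * partial / N - Ph(v)))  # t just below the jump at v
    partial += v
    sup = max(sup, abs(F2 * partial / N - Ph(v)))  # t just above the jump at v
sup = max(sup, abs(F2 * partial / N - F2 * s2))    # t = +infinity
print(math.sqrt(N) * sup)  # sup_t |G_N h_t|
```

Lemma 6.2 asserts that this supremum, rescaled by \(\sqrt{N}\), stays of order one with exponentially small exceedance probabilities; rerunning the sketch with larger \(N\) illustrates this stability.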
1.2 Proof of Theorem 3.1
Observe first that, owing to Proposition 6.1 and inequality (16), we have
for any \(k\ge 1\). Let then \(\hat{\delta }>0\) and let us fix \(k\ge 1\) such that
i.e.
Then it is readily checked that:
and we will now bound the probability in the right hand side of this inequality. To this purpose, let us introduce a little more notation: recall that \(\ell _{2}\) has been defined at Lemma 5.1 and for \(n\ge 1\), let \(\Omega _n\) be the set defined by
and set also
Then we can decompose (37) into:
We will now control these two terms separately.
Step 1: Upper bound for \(\mathbb P ( \widetilde{\Omega }_k)\). Let us fix \(n\ge 1\) and first study \(\mathbb P \left( \Omega _n \right) \). To this purpose, observe that
Hence, since \(U_{n}=g_{N,w}(U_{n-1})\) and invoking that \(g_{\sigma ,u,\sigma _w}\) is an increasing function, the following relation holds true on \(\Omega _n\):
We have thus proved that
where, by Assertion (1) of Proposition 2.3, \(\sigma ^2\ell _{2}-g_{\sigma ,u,\sigma _w}(\sigma ^2 \ell _{2})= :-L_1<0.\) Since \(g_{N,w}(U_{n-1})-g_{\sigma ,u,\sigma _w}(U_{n-1})=\varepsilon _{n-1,N}\) by definition, we end up with:
Moreover,
with \(L_2=M\sigma ^2 -g_{\sigma ,u,\sigma _w}(+\infty )>0\) by Assertion (1) of Proposition 2.3.
A direct application of Lemma 6.2 yields now the existence of \(\gamma ,K\in (0,\infty )\) such that for all \(n\ge 1\) and all \(N\ge 1\)
with \(\eta _u=\min \left( \frac{u}{2}, 1\right) \). Hence
where \(L:=\min (L_1,L_2)>0\).
Step 2: Upper bound for \(\mathbb P ( \widetilde{\Omega }_k^c\cap \{ \left| R_k \right| \ge \frac{\hat{\delta }}{2}\} ).\) We have constructed the set \(\widetilde{\Omega }_k\) so that, for all \(2\le p\le k+1\), the random variables \(C_p\) introduced at Proposition 6.1 satisfy \(0\le g_{\sigma ,u,\sigma _w}^{\prime }\left( C_p\right) \le \rho :=\tilde{M} <1\) on \(\widetilde{\Omega }_k^c\). Thus
where we have set
so that \(\{\nu _p;\, 0\le p\le k-1\}\) is a probability measure on \(\{0,\ldots ,k-1\}\).
We now introduce a convex non-decreasing function \(a_u\) which only depends on the shape parameter \(u\) and behaves like \(\exp (t^{\eta _{u}})\) at infinity. Observe that, setting \(s_u=\left( {1/\eta _u}-1\right) ^{{1/\eta _u}}\), the function \(t\mapsto \exp (t^{\eta _u})\) is concave on \([0,s_u]\) and convex on \([s_u,+\infty )\). We then consider the convex function \(a_u\) defined by
Observe that if \(u\ge 2\), then \(a_u\) is the exponential map.
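One convex non-decreasing choice consistent with the properties just listed (an assumption for illustration; the paper's exact definition (41) may differ) is to keep \(\exp (t^{\eta _u})\) on \([s_u,\infty )\) and freeze it at the constant \(\exp (s_u^{\eta _u})=e ^{1/\eta _u-1}\) on \([0,s_u]\); note that \(e ^{1/\eta _u-1}\) is precisely the constant appearing in \(K_1\) below.

```python
import math

# Assumed sketch of a_u: exp(t^eta) on [s_u, infinity), where it is convex,
# frozen at its value exp(s_u^eta) = e^{1/eta - 1} on [0, s_u], where exp(t^eta)
# is concave. (Illustrative choice, not necessarily the paper's definition (41).)
def a(t, eta):
    s = (1.0 / eta - 1.0) ** (1.0 / eta) if eta < 1 else 0.0
    return math.exp(max(t, s) ** eta)

eta = 0.5  # corresponds to shape parameter u = 1, since eta_u = min(u/2, 1)
ts = [k * 0.01 for k in range(1001)]
vals = [a(t, eta) for t in ts]
# non-decreasing, convex (discrete second differences >= 0), a(0) = e^{1/eta - 1}
assert all(y2 >= y1 for y1, y2 in zip(vals, vals[1:]))
assert all(vals[i - 1] + vals[i + 1] - 2 * vals[i] >= -1e-12 for i in range(1, 1000))
assert abs(vals[0] - math.exp(1.0 / eta - 1.0)) < 1e-12
print("ok")
```

For \(u\ge 2\) one has \(\eta _u=1\) and \(s_u=0\), so this construction reduces to the exponential map, as stated above.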
Since \(a_u\) is a non-decreasing function, for all \(\lambda > 0\), relation (40) implies that:
where we have invoked Markov’s inequality for the second step. Hence, applying Jensen’s inequality, for all \(\lambda >0\), we obtain:
Furthermore, owing to the definition (41) of \(a_u\),
for all \(p\ge 0\), all \(N\ge 1\) and all \(\lambda >0\).
Then, applying Lemma 6.2, we have:
for any \(\lambda \le \gamma ^{1/\eta _{u}} N^{1/2}\) with \(\gamma <\gamma _u\). Since \(L_{k,\hat{\delta }}\ge (1-\rho )\hat{\delta }/2\) and since \(a_u\) is a non-decreasing function, by choosing \(\lambda =\gamma ^{1/\eta _{u}} N^{1/2}\), we obtain:
with \(\gamma _1=(1-\rho )\gamma ^{1/\eta _{u}}/2>0\) and \(K_1=K+e ^{1/\eta _{u}-1}\).
Choose now \(\hat{\delta }=N^{-\alpha /2}\), with \(\alpha <1\). Observe that for \(N\) large enough, \(\gamma _1\hat{\delta }N^{1/2}>s_u\) and thus \(a_u\left( \gamma _1\hat{\delta } N^{1/2}\right) =e ^{\gamma _1^{\eta _u}N^{(1-\alpha )\eta _{u}/2}}\). Hence, there exists a finite positive constant \(K'\) such that for all \(N\ge 1\)
with \(\widetilde{\gamma }=\gamma _1^{\eta _u}.\)
Step 3: Conclusion. Putting together (38), (39) and (42), choosing \(\hat{\delta }=N^{-\alpha /2}\) with \(\alpha <1\), we end up with:
for any \(k\) satisfying (36). Choose now \(k=k(N):=[C\alpha \log (N)]+1\). If the following condition holds true:
i.e. if \(C>-{1}/(2\log ( \tilde{M}))\), then for \(N\ge N_0\) with \(N_0\) large enough, (36) holds. We thus choose \(C=-1/(2\log ( \tilde{M}))+\eta \) with \(\eta >0\). Then, for \(N\ge N_0\) and \(k=k(N):=[C\alpha \log (N)]+1\), we have (43). Therefore, since \((1-\alpha )\eta _{u}/2\le \eta _u/2\), we have proved that there exists a positive finite constant \(A\) such that for all \(N\in \mathbb{N }^*\),
which is the desired result. \(\square \)
1.3 Proof of Proposition 3.4
In the subcritical case, the following property holds for the function \( g_{1,u}\) defined by (13): there exists a constant \(\kappa _1\in (0,1)\) such that \(0\le g_{1,u}(t)\le \kappa _1 t\) for all \(t\ge 0\).
Let us now fix \(\kappa _2\in (\kappa _1,1)\) and \(L\in (0,\infty )\). Then, by (13) and (23), for \(\sigma _w/\sigma \le c\) with \(c\) small enough,
Since \(g_{\sigma ,u,\sigma _w}\) is upper bounded by \(2F^2\sigma ^2\) (for \(\sigma _w/\sigma \le c\) with \(c\) small enough), choosing \(L\) such that \(\kappa _2 L >2F^2\sigma ^2\), the previous equation holds on \([0,\infty )\). We thus have the following relation for the noisy dynamics of \(U_k\): for every \(k\ge 2\),
Iterating this inequality, we have: for every \(k\ge 2\),
Since \(U_1=F^2(\sigma ^2+\sigma _w^2) +\varepsilon _{0,N}\), we end up with:
a relation which is valid for any \(k\ge 1\).
Consider now \(\alpha <1\), assume that \(\sigma _w/\sigma \le c\) and choose \(C>-\alpha /(2\log (\kappa _2))\). Then there exists \(N_0\in \mathbb{N }^*\) such that for any integers \(N\ge N_0\) and \(k\ge C\log (N)\),
Hence, for any integers \(N\ge N_0\) and \(k\ge C\log (N)\), invoking (44), we have:
We are thus back to the setting of Step 2 in the proof of Theorem 3.1. Along the same lines (changing only the names of the constants), the reader can now easily check inequality (19). \(\square \)
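As a companion to the supercritical illustration of Appendix 2, the subcritical regime of Proposition 3.4 can be sketched with the same assumed empirical update \(U_{k+1}=\frac{F^2}{N}\sum _q z(q)^2\mathbf{1}_{\{z(q)^2<U_k\}}\) and a smaller hypothetical constant \(F\) (again with \(u=2\), so the signal is Gaussian):

```python
import math, random

# Subcritical sketch (Proposition 3.4): same assumed empirical peeling update
# as in the supercritical illustration, but with a smaller hypothetical
# constant F^2 = 2 (u = 2, Gaussian signal). The thresholds decay toward 0.
random.seed(1)
N, F2 = 100_000, 2.0
z2 = [(random.gauss(0, 1.0) + random.gauss(0, 0.1)) ** 2 for _ in range(N)]

U, traj = math.inf, []
for _ in range(25):
    U = F2 * sum(v for v in z2 if v < U) / N
    traj.append(U)
print(traj[0], traj[-1])  # U_1 near F^2 (sigma^2 + sigma_w^2), then U_k -> 0
```

The decay is at least geometric, mirroring the bound \(0\le g_{1,u}(t)\le \kappa _1 t\) used in the proof.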
Lacaux, C., Muller-Gueudin, A., Ranta, R. et al. Convergence and performance of the peeling wavelet denoising algorithm. Metrika 77, 509–537 (2014). https://doi.org/10.1007/s00184-013-0451-y