
Removing Mixture of Gaussian and Impulse Noise by Patch-Based Weighted Means

Journal of Scientific Computing

Abstract

We first establish a law of large numbers and a convergence theorem in distribution that give the rate of convergence of the non-local means filter for removing Gaussian noise. Based on these convergence theorems, we propose a patch-based weighted means filter for removing impulse noise and its mixture with Gaussian noise, combining the essential ideas of the trilateral filter and of the non-local means filter. Experiments show that our filter is competitive with recently proposed methods. We also introduce the notion of degree of similarity to measure the impact of the similarity among patches on the non-local means filter for removing Gaussian noise, as well as on our new filter for removing impulse or mixed noise. Using the convergence theorem in distribution together with the notion of degree of similarity, we obtain an estimate of the PSNR value of the image denoised by the non-local means filter or by the new proposed filter, which is close to the real PSNR value.


Notes

  1. In fact, [19] defines the joint impulse factor as \(J(i,j)=1-J_I(i,j)\). Following [24], we use \(J_I(i,j)\) rather than \(J(i,j)\), which seems more convenient.

  2. The code of our method and the images can be downloaded at

    https://www.dropbox.com/s/oylg9to8n6029hh/to_j_sci_comput_paper_code.zip.

  3. The images Lena, Peppers256 and Boats are originally downloaded from http://decsai.ugr.es/~javier/denoise/test_images/index.htm; the image Peppers512 is from http://perso.telecom-paristech.fr/~delon/Demos/Impulse and the image Bridge is from www.math.cuhk.edu.hk/~rchan/paper/dcx/.

Fig. 1 Original \(512\times 512\) images of Lena, Bridge, Peppers512, Boats. Since the original Peppers images have black boundaries of width one pixel on the left and top, which could be taken for impulse noise, to make an impartial comparison we compute the PSNR for the Peppers images after removing all four boundaries, that is, on images of size \(510\times 510\) for Peppers512 and \(254\times 254\) for Peppers256.

References

  1. Akkoul, S., Ledee, R., Leconge, R., Harba, R.: A new adaptive switching median filter. IEEE Signal Process. Lett. 17(6), 587–590 (2010)

  2. Bilcu, R.C., Vehvilainen, M.: Fast nonlocal means for image denoising. In: Proceedings of SPIE, Digital Photography III, vol. 6502, p. 65020R (2007)

  3. Bovik, A.: Handbook of Image and Video Processing. Academic Press, London (2005)

  4. Buades, A., Coll, B., Morel, J.: A review of image denoising algorithms, with a new one. Multiscale Model. Simul. 4(2), 490–530 (2005)

  5. Castleman, K.: Digital Image Processing. Prentice Hall Press, Upper Saddle River (1996)

  6. Chan, R., Hu, C., Nikolova, M.: An iterative procedure for removing random-valued impulse noise. IEEE Signal Process. Lett. 11(12), 921–924 (2004)

  7. Chang, S.G., Yu, B., Vetterli, M.: Adaptive wavelet thresholding for image denoising and compression. IEEE Trans. Image Process. 9, 1532–1546 (2000)

  8. Chen, T., Wu, H.: Adaptive impulse detection using center-weighted median filters. IEEE Signal Process. Lett. 8(1), 1–3 (2001)

  9. Chow, Y., Teicher, H.: Probability Theory: Independence, Interchangeability, Martingales. Springer, Berlin (2003)

  10. Coifman, R., Donoho, D.: Translation-invariant de-noising. In: Wavelets and Statistics, Lecture Notes in Statistics, vol. 103, pp. 125–150. Springer, New York (1995)

  11. Dabov, K., Foi, A., Katkovnik, V., Egiazarian, K.: Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007)

  12. Delon, J., Desolneux, A.: A patch-based approach for random-valued impulse noise removal. In: 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1093–1096. IEEE (2012)

  13. Delon, J., Desolneux, A.: A patch-based approach for removing impulse or mixed Gaussian-impulse noise. SIAM J. Imaging Sci. 6(2), 1140–1174 (2013)

  14. Dong, Y., Chan, R., Xu, S.: A detection statistic for random-valued impulse noise. IEEE Trans. Image Process. 16(4), 1112–1120 (2007)

  15. Donoho, D., Johnstone, I.M.: Ideal spatial adaptation by wavelet shrinkage. Biometrika 81(3), 425–455 (1994)

  16. Durand, S., Froment, J.: Reconstruction of wavelet coefficients using total variation minimization. SIAM J. Sci. Comput. 24(5), 1754–1767 (2003)

  17. Duval, V., Aujol, J., Gousseau, Y.: A bias-variance approach for the nonlocal means. SIAM J. Imaging Sci. 4, 760–788 (2011)

  18. Elad, M., Aharon, M.: Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process. 15(12), 3736–3745 (2006)

  19. Garnett, R., Huegerich, T., Chui, C., He, W.: A universal noise removal algorithm with an impulse detector. IEEE Trans. Image Process. 14(11), 1747–1754 (2005)

  20. Hu, H., Froment, J.: Nonlocal total variation for image denoising. In: Symposium on Photonics and Optoelectronics (SOPO), pp. 1–4. IEEE (2012)

  21. Hu, H., Li, B., Liu, Q.: Non-local filter for removing a mixture of Gaussian and impulse noises. In: Csurka, G., Braz, J. (eds.) VISAPP, vol. 1, pp. 145–150. SciTePress (2012)

  22. Johnstone, I.M., Silverman, B.W.: Wavelet threshold estimators for data with correlated noise. J. R. Stat. Soc. Ser. B 59(2), 319–351 (1997)

  23. Kervrann, C., Boulanger, J.: Local adaptivity to variable smoothness for exemplar-based image regularization and representation. Int. J. Comput. Vis. 79(1), 45–69 (2008)

  24. Li, B., Liu, Q., Xu, J., Luo, X.: A new method for removing mixed noises. Sci. China Inf. Sci. 54, 51–59 (2011)

  25. López-Rubio, E.: Restoration of images corrupted by Gaussian and uniform impulsive noise. Pattern Recognit. 43(5), 1835–1846 (2010)

  26. Mahmoudi, M., Sapiro, G.: Fast image and video denoising via nonlocal means of similar neighborhoods. IEEE Signal Process. Lett. 12(12), 839–842 (2005)

  27. Mairal, J., Sapiro, G., Elad, M.: Learning multiscale sparse representations for image and video restoration. Multiscale Model. Simul. 7, 214–241 (2008)

  28. Muresan, D., Parks, T.: Adaptive principal components and image denoising. In: IEEE International Conference on Image Processing (2003)

  29. Nikolova, M.: A variational approach to remove outliers and impulse noise. J. Math. Imaging Vis. 20(1), 99–120 (2004)

  30. Pratt, W.: Median filtering. Semiannual Report, Image Processing Institute, University of Southern California, Los Angeles, Tech. Rep., pp. 116–123 (1975)

  31. Pruitt, W.: Summability of independent random variables. J. Math. Mech. 15, 769–776 (1966)

  32. Rudin, L.I., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 60(1–4), 259–268 (1992)

  33. Smith, S., Brady, J.: SUSAN—a new approach to low level image processing. Int. J. Comput. Vis. 23(1), 45–78 (1997)

  34. Tomasi, C., Manduchi, R.: Bilateral filtering for gray and color images. In: Proceedings of the International Conference on Computer Vision, pp. 839–846. IEEE (1998)

  35. Van De Ville, D., Kocher, M.: SURE-based non-local means. IEEE Signal Process. Lett. 16(11), 973–976 (2009)

  36. Wen, Z.Y.: Sur quelques théorèmes de convergence du processus de naissance avec interaction des voisins. Bull. Soc. Math. Fr. 114, 403–429 (1986)

  37. Xiao, Y., Zeng, T., Yu, J., Ng, M.: Restoration of images corrupted by mixed Gaussian-impulse noise via \(l_1\)–\(l_0\) minimization. Pattern Recognit. 44(8), 1708–1720 (2011)

  38. Xiong, B., Yin, Z.: A universal denoising framework with a new impulse detector and nonlocal means. IEEE Trans. Image Process. 21(4), 1663–1675 (2012)

  39. Xu, H., Xu, J., Wu, F.: On the biased estimation of nonlocal means filter. In: International Conference on Multimedia Computing and Systems/International Conference on Multimedia and Expo, pp. 1149–1152 (2008)

  40. Yang, J.X., Wu, H.R.: Mixed Gaussian and uniform impulse noise analysis using robust estimation for digital images. In: Proceedings of the 16th International Conference on Digital Signal Processing, pp. 468–472 (2009)

  41. Yaroslavsky, L.P.: Digital Picture Processing. Springer, Secaucus (1985)

  42. Yuksel, M.: A hybrid neuro-fuzzy filter for edge preserving restoration of images corrupted by impulse noise. IEEE Trans. Image Process. 15(4), 928–936 (2006)

  43. Zhou, Y., Ye, Z., Xiao, Y.: A restoration algorithm for images contaminated by mixed Gaussian plus random-valued impulse noise. J. Vis. Commun. Image Represent. 24(3), 283–294 (2013)

Acknowledgments

The work has been supported by the Fundamental Research Funds for the Central Universities in China (No. N130323016), the Research Funds of Northeastern University at Qinhuangdao (No. XNB201312, China), the Scientific Research Foundation for the Returned Overseas Chinese Scholars, State Education Ministry (48-2), the National Natural Science Foundation of China (Grant Nos. 11171044 and 11401590), and the Science and Technology Research Program of Zhongshan (Grant No. 20123A351, China). The authors are very grateful to the reviewers for their valuable remarks and comments, which led to a significant improvement of the manuscript. They are also grateful to Prof. Raymond H. Chan and Dr. Yiqiu Dong for kindly providing the code of ROLD-EPR.

Author information

Correspondence to Quansheng Liu.

Appendix

In this appendix we will prove Theorems 1 and 2.

1.1 Convergence Theorems for Random Weighted Means

We first show a Marcinkiewicz law of large numbers (Theorem 3) and a convergence theorem in distribution (Theorem 4) for random weighted means of l-dependent random variables, which we will use to prove Theorems 1 and 2.

Theorem 3

Let \(\{(a_k, v_k)\}\) be a sequence of l-dependent, identically distributed random vectors, with \(\mathbb {E}|a_1|^p<\infty \) and \( \mathbb {E}|a_1v_1|^p<\infty \) for some \( p\in [1,2),\) and \(\mathbb {E} a_1\ne 0\). Then

$$\begin{aligned} \frac{\sum _{k=1}^na_kv_k}{\sum _{k=1}^na_k}-\frac{\mathbb {E}a_1v_1}{\mathbb {E}a_1}=o(n^{-(1-1/p)}) \quad \text{ almost } \text{ surely }. \end{aligned}$$

We need the following lemma to prove it.

Lemma 2

[24] If \(\{X_n\}\) are l-dependent and identically distributed random variables with \( \mathbb {E}X_1=0\) and \(\mathbb {E}|X_1|^p<\infty \) for some \(p\in [1,2)\), then

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{X_1+ \cdots +X_n}{n^{1/p} }=0 \quad \text{ almost } \text{ surely }. \end{aligned}$$

This lemma is a direct consequence of the Marcinkiewicz law of large numbers for independent random variables (see e.g. [9], p. 118), since for each \(k\in \{1,\ldots ,l+1\}\), \(\{X_{i(l+1)+k}:i\ge 0\}\) is a sequence of i.i.d. random variables, and for each positive integer n, we have

$$\begin{aligned} X_1+ \cdots +X_n=\sum _{k=1}^{l+1}\sum _{i=0}^{m-1}X_{i(l+1)+k}+\sum _{1\le k\le k_0}X_{m(l+1)+k}, \end{aligned}$$

where the integers \(m\ge 0\) and \(0\le k_0 \le l\) are determined by \(n=m(l+1)+k_0\).
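The blocking decomposition above can be checked numerically. Here is a minimal sketch (not part of the paper; the sequence and the values of n and l are arbitrary choices) verifying that the \(l+1\) interleaved subsequences plus the remainder recover the full sum exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
l, n = 3, 1002                  # n = m(l+1) + k0 with m = 250, k0 = 2
X = rng.normal(size=n)          # X[j] stands for X_{j+1} (0-based indexing)

m, k0 = divmod(n, l + 1)

total = X.sum()
blocks = sum(X[i * (l + 1) + k - 1]       # sum over k = 1..l+1, i = 0..m-1
             for k in range(1, l + 2)
             for i in range(m))
remainder = sum(X[m * (l + 1) + k - 1]    # sum over k = 1..k0
                for k in range(1, k0 + 1))

# the blocks and remainder partition the indices 1..n, so the sums agree
print(np.isclose(total, blocks + remainder))
```

Each subsequence \(\{X_{i(l+1)+k}\}_i\) skips l indices between consecutive terms, which is exactly why l-dependence makes it i.i.d.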

   Proof of Theorem 3

Notice that

$$\begin{aligned} n \bigg (\frac{\sum _{k=1}^na_kv_k}{\sum _{k=1}^na_k}-\frac{\mathbb {E}a_1v_1}{\mathbb {E}a_1}\bigg )= \frac{n}{(\sum _{k=1}^na_k)} \frac{1}{\mathbb {E}a_1} \sum _{k=1}^n a_k z_k, \end{aligned}$$

where

$$\begin{aligned} z_k=v_k \mathbb {E}a_1-\mathbb {E}a_1 v_1. \end{aligned}$$

Since \(\mathbb {E}|a_1|\le (\mathbb {E}|a_1|^p)^{1/p}<\infty \), by Lemma 2 with \(p=1\), we have

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{\sum _{k=1}^na_k}{n}=\mathbb {E}a_1 \quad \text{ almost } \text{ surely }. \end{aligned}$$

Since \(\mathbb {E}a_1z_1=0\), and \(\mathbb {E}|a_1z_1|^p<\infty \), again by Lemma 2, we get

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{\sum _{k=1}^na_kz_k}{n^{1/p}}=0 \quad \text{ almost } \text{ surely }. \end{aligned}$$

Thus the conclusion follows. \(\square \)
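The rate in Theorem 3 can be illustrated by a rough Monte Carlo sketch (the weights and distributions below are arbitrary assumptions, with \(l=0\) and \(a_k\) independent of \(v_k\), so the limit \(\mathbb{E}a_1v_1/\mathbb{E}a_1\) is simply \(\mathbb{E}v_1\)): for square-integrable data the mean absolute error of the random weighted mean should shrink roughly like \(n^{-1/2}\), i.e. by about a factor of 10 when n grows by a factor of 100.

```python
import numpy as np

rng = np.random.default_rng(1)

def weighted_mean_error(n, trials=400):
    """Mean absolute deviation of the random weighted mean from its limit
    E[a1 v1]/E[a1] = E[v1], for i.i.d. pairs with a_k independent of v_k."""
    errs = []
    for _ in range(trials):
        g = rng.normal(size=n)
        a = np.exp(-g**2 / 2.0)           # bounded positive weights, E a1 > 0
        v = rng.normal(loc=5.0, size=n)   # limit of the weighted mean is 5
        errs.append(abs(np.sum(a * v) / np.sum(a) - 5.0))
    return np.mean(errs)

e_small, e_large = weighted_mean_error(200), weighted_mean_error(20000)
print(e_small / e_large)   # roughly sqrt(20000/200) = 10
```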

Theorem 4

Let \(\{(a_k,v_k)\}\) be a stationary sequence of l-dependent, identically distributed random vectors with \(\mathbb {E}a_1\ne 0, \mathbb {E}a_1^2<\infty ,\) and \(\mathbb {E}(a_1v_1)^2<\infty \). Then

$$\begin{aligned} {\sqrt{n}} \bigg (\frac{\sum _{k=1}^na_kv_k}{\sum _{k=1}^na_k}-\frac{\mathbb {E}a_1v_1}{\mathbb {E}a_1}\bigg )\mathop {\rightarrow }\limits ^{d} N(0,c^2), \end{aligned}$$

that is,

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathbb {P}\bigg \{\frac{\sqrt{n}}{c} \bigg (\frac{\sum _{k=1}^na_kv_k}{\sum _{k=1}^na_k}-\frac{\mathbb {E}a_1v_1}{\mathbb {E}a_1}\bigg )\le z\bigg \}=\varPhi (z), \quad z\in \mathbb {R}, \end{aligned}$$

where \(\varPhi (z)\) is the cumulative distribution function of the standard normal distribution,

$$\begin{aligned} \varPhi (z)=\frac{1}{\sqrt{2\pi }}\int _{-\infty }^z e^{-\frac{t^2}{2}}dt, \end{aligned}$$

and

$$\begin{aligned} c= \frac{1}{(\mathbb {E}a_1)^2}\sqrt{\mathbb {E}(a_1z_1)^2+2\lambda } \end{aligned}$$
(35)

with \(\lambda =\sum _{k=1}^l\mathbb {E}a_1z_1a_{1+k}z_{1+k}, \; z_k=v_k \mathbb {E}a_1-\mathbb {E}a_1 v_1 \).
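In the independent case (\(l=0\), hence \(\lambda =0\)) with \(a_k\) independent of \(v_k\), the constant (35) reduces to \(c=\sigma _v\sqrt{\mathbb {E}a_1^2}/\mathbb {E}a_1\), where \(\sigma _v^2\) is the variance of \(v_1\). A Monte Carlo sketch of this special case (distributions chosen arbitrarily, not from the paper) compares the empirical standard deviation of \(\sqrt{n}\) times the centred weighted mean with the theoretical value:

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials = 4000, 4000

# i.i.d. case (l = 0, so lambda = 0); a_k independent of v_k.
# a = exp(-U), U ~ Uniform(0,1);  v ~ N(mu, sigma_v^2)
mu, sigma_v = 3.0, 1.0
Ea = 1.0 - np.exp(-1.0)            # E a1  = integral of e^{-u} over (0,1)
Ea2 = (1.0 - np.exp(-2.0)) / 2.0   # E a1^2
c_theory = sigma_v * np.sqrt(Ea2) / Ea

stats = []
for _ in range(trials):
    a = np.exp(-rng.uniform(size=n))
    v = rng.normal(loc=mu, scale=sigma_v, size=n)
    stats.append(np.sqrt(n) * (np.sum(a * v) / np.sum(a) - mu))

print(np.std(stats), c_theory)   # the two values should be close
```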

We need the following lemma to prove the theorem.

Lemma 3

[36] Let \(\{X_n\}\) be a stationary sequence of l-dependent and identically distributed random variables with \(\mathbb {E}X_1=0\) and \(\mathbb {E}X_1^2<\infty \). Set \(S_n=X_1+ \cdots +X_n (n\ge 1)\),

$$\begin{aligned} c_1= \mathbb {E}X_1^2 +2\sum _{k=1}^l\mathbb {E}X_1X_{1+k}, \quad \text{ and } \quad c_2=2\sum _{k=1}^lk\mathbb {E}X_1X_{1+k}. \end{aligned}$$

Then var\((S_n)=c_1n-c_2\) for \(n > l\), and as \(n\rightarrow \infty \),

$$\begin{aligned} \frac{S_n}{\sqrt{c_1 n}}\mathop {\rightarrow }\limits ^{d}N(0,1). \end{aligned}$$

   Proof of Theorem 4

As in the proof of Theorem 3, we have

$$\begin{aligned} {\sqrt{n}} \bigg (\frac{\sum _{k=1}^na_kv_k}{\sum _{k=1}^na_k}-\frac{\mathbb {E}a_1v_1}{\mathbb {E}a_1}\bigg )= \frac{n}{(\sum _{k=1}^na_k)} \frac{1}{\mathbb {E}a_1} \frac{\sum _{k=1}^n a_k z_k}{\sqrt{n}}, \end{aligned}$$

where

$$\begin{aligned} z_k=v_k \mathbb {E}a_1-\mathbb {E}a_1 v_1. \end{aligned}$$

Notice that the l-dependence of \(\{(a_k,v_k)\}\) and the stationarity imply those of \(\{(a_k,z_k)\}\). Therefore by Lemma 3, we get

$$\begin{aligned} \frac{\sum _{k=1}^n a_k z_k}{\sqrt{n}c_0}\mathop {\rightarrow }\limits ^{d} N(0,1), \end{aligned}$$

where

$$\begin{aligned} c_0=\sqrt{\mathbb {E}(a_1z_1)^2+2\lambda }, \quad \text{ with }\, \lambda =\sum _{k=1}^l\mathbb {E}a_1z_1a_{1+k}z_{1+k}. \end{aligned}$$

Since \(\mathbb {E}|a_1|\le (\mathbb {E}a_1^2)^{1/2}<\infty \), by Lemma 2 with \(p=1\), we obtain

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{\sum _{k=1}^na_k}{n}=\mathbb {E}a_1 \quad \text{ almost } \text{ surely }. \end{aligned}$$

Thus the conclusion follows with \(c=c_0 /(\mathbb {E}a_1)^2\). \(\square \)

1.2 Proofs of Theorems 1 and 2

We now come to the proofs of Theorems 1 and 2, using Theorems 3 and 4.

For Theorem 1, we need to prove that for any \(\epsilon \in (0,\frac{1}{2}]\), as \(n\rightarrow \infty \),

$$\begin{aligned} \frac{\sum _{k=1}^n{w^{0}}(i,j_k)v(j_k)}{\sum _{k=1}^n{w^{0}}(i,j_k)}-u(i)=o(n^{-(\frac{1}{2}-\epsilon )}) \quad \text{ almost } \text{ surely }, \end{aligned}$$

where

$$\begin{aligned} w^{0}(i,j_k)=e^{-\Vert v(\mathcal {N}^{0}_{i})-v(\mathcal {N}^{0}_{j_k})\Vert ^2/(2\sigma _r^2)}. \end{aligned}$$

We will apply Theorem 3 to prove this. Note that the sequence \(\{{w^{0}}(i,j_k),v(j_k)\}\) \({(k=1,2,\ldots ,n)}\) is usually not l-dependent, since the central random variable \(v(\mathcal {N}^{0}_{i})\) is contained in all the terms. To make use of Theorem 3, we first take a fixed vector to replace the central random variable.
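As a numerical illustration of the estimator above (an illustrative sketch, not the paper's code: the 1-D constant signal, the parameter values, and the helper `patch` are all assumptions), the patch-based weighted mean with Gaussian weights \(w^0\) indeed recovers \(u(i)\) for large n:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative 1-D setting: constant true signal u = u0 corrupted by Gaussian
# noise; patches of half-width d around each sample, with the centre pixel
# excluded (as j_k does not belong to N^0_{j_k}).
u0, sigma, sigma_r, d = 10.0, 1.0, 2.0, 2
n = 5000
v = u0 + sigma * rng.normal(size=n + 2 * d)

def patch(centre):
    """Neighbourhood values around `centre`, centre pixel excluded."""
    return v[np.r_[centre - d:centre, centre + 1:centre + d + 1]]

i = d                      # reference position
x = patch(i)               # the reference patch v(N^0_i)
centres = np.arange(d, n + d)
w = np.array([np.exp(-np.sum((x - patch(j)) ** 2) / (2 * sigma_r ** 2))
              for j in centres])

# patch-based weighted mean: converges to u(i) = u0 as n grows
estimate = np.sum(w * v[centres]) / np.sum(w)
print(estimate)
```

Note that the weights for centres j near i share noise samples with the reference patch, which is exactly the dependence the fixed-vector argument below works around.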

   Proof of Theorem 1

Fix \(x\in \mathbb {R}^{|\mathcal {N}_i^0|}\). Let

$$\begin{aligned} a_{k}=w^0(x,j_k)=e^{-\Vert x-v(\mathcal {N}^{0}_{j_k})\Vert ^2/(2\sigma _r^2)}. \end{aligned}$$

Then \(a_{k}\) and \(v(j_k)\) are independent since \(j_k\not \in \mathcal {N}^{0}_{j_k}\), so that

$$\begin{aligned} \frac{\mathbb {E}a_{k}v(j_k)}{\mathbb {E}a_{k}}=\mathbb {E}v(j_k)=u(j_k)=u(i). \end{aligned}$$

By Lemma 1, the sequence \( \{v(\mathcal {N}_{j_k})\} \) is l-dependent for \( l=(2d-1)^2 -1\); thus the sequence \(\{\big (a_{k},v(j_k)\big )\}\) is also l-dependent. Since \(v=u+\eta \), with the range of u bounded and \(\eta \) Gaussian, we have \(\mathbb {E}|v(j_k)|^p<\infty \) for \(p\in [1,2)\) (in fact, for all \(p\ge 1\)). Hence \(\mathbb {E}|a_{k}v(j_k)|^p<\infty \), since \(a_k\le 1\).

Applying Theorem 3, we have, for any fixed \(x\in \mathbb {R}^{|\mathcal {N}_i^0|}\) and any positive integer \(k_0\),

$$\begin{aligned} \frac{\sum _{k=k_0}^n{w^{0}}(x,j_k)v(j_k)}{\sum _{k=k_0}^n{w^{0}}(x,j_k)}-u(i)=o(n^{-(1-1/p)})\quad \text{ almost } \text{ surely }. \end{aligned}$$
(36)

Let \(k_0 > l\), so that \(v(\mathcal {N}_i^0)\) is independent of \(v(\mathcal {N}^{0}_{j_k})\) for all \(k\ge k_0\). By Fubini’s theorem, we can replace \(w^{0}(x,j_k)\) in (36) by

$$\begin{aligned} w^{0}(i,j_k)=e^{-\Vert v(\mathcal {N}^{0}_{i})-v(\mathcal {N}^{0}_{j_k})\Vert ^2/(2\sigma _r^2)}. \end{aligned}$$

That is,

$$\begin{aligned} \frac{\sum _{k=k_0}^n{w^{0}}(i,j_k)v(j_k)}{\sum _{k=k_0}^n{w^{0}}(i,j_k)}-u(i)=o(n^{-(1-1/p)})\quad \text{ almost } \text{ surely }. \end{aligned}$$
(37)

To prove the theorem, we need to estimate the difference between the left-hand sides of (8) and (37). Let

$$\begin{aligned} A_0= & {} \sum _{k=1}^{k_0-1}{w^{0}}(i,j_k)v(j_k), \quad A_n=\sum _{k=k_0}^n{w^{0}}(i,j_k)v(j_k),\\ B_0= & {} \sum _{k=1}^{k_0-1}{w^{0}}(i,j_k), \quad B_n=\sum _{k=k_0}^n{w^{0}}(i,j_k). \end{aligned}$$

Then as before, fixing \(x\in \mathbb {R}^{|\mathcal {N}_i^0|}\), applying Theorem 3 with \(p=1\) and Fubini’s theorem, and replacing x by \(v(\mathcal {N}_i^0)\), we obtain

$$\begin{aligned} \lim _{n\rightarrow \infty } \frac{A_n}{n}=\mathbb {E}{w^{0}}(i,j_k)v(j_k), \quad \text{ and } \lim _{n \rightarrow \infty } \frac{B_n}{n}=\mathbb {E}{w^{0}}(i,j_k). \end{aligned}$$

Using this and the fact that

$$\begin{aligned} \frac{A_0+A_n}{B_0+B_n}-\frac{A_n}{B_n}=\frac{A_0B_n-A_nB_0}{B_n(B_0+B_n)}, \end{aligned}$$

we see that

$$\begin{aligned} \bigg |\frac{\sum _{k=k_0}^n{w^{0}}(i,j_k)v(j_k)}{\sum _{k=k_0}^n{w^{0}}(i,j_k)} -\frac{\sum _{k=1}^n{w^{0}}(i,j_k)v(j_k)}{\sum _{k=1}^n{w^{0}}(i,j_k)}\bigg |=O\left( \frac{1}{n}\right) \quad \text{ almost } \text{ surely }. \end{aligned}$$
(38)

Therefore, (37) implies that

$$\begin{aligned} \frac{\sum _{k=1}^n{w^{0}}(i,j_k)v(j_k)}{\sum _{k=1}^n{w^{0}}(i,j_k)}-u(i)=o(n^{-(1-1/p)})\quad \text{ almost } \text{ surely }. \end{aligned}$$
(39)

As (39) holds for any \(p \in [1,2)\), we see that (8) holds for all \(\epsilon \in (0,\frac{1}{2}]\). \(\square \)

We now prove Theorem 2, which states that, as \( n\rightarrow \infty \),

$$\begin{aligned} \sqrt{n} \left( \frac{\sum _{k=1}^n{w^{0}}(i,j_k)v(j_k)}{\sum _{k=1}^n{w^{0}}(i,j_k)}-u(i)\right) \mathop {\rightarrow }\limits ^{d} \mathcal {L}, \end{aligned}$$

where

$$\begin{aligned} w^{0}(i,j_k)=e^{-\Vert v(\mathcal {N}^{0}_{i})-v(\mathcal {N}^{0}_{j_k})\Vert ^2/(2\sigma _r^2)}, \end{aligned}$$

and \(\mathcal {L}\) is a mixture of centered Gaussian laws in the sense that it has a density of the form (15).

   Proof of Theorem 2

The procedure of the proof is similar to that of the proof of Theorem 1. Fix \(x\in \mathbb {R}^{|\mathcal {N}_i^0|}\), and set

$$\begin{aligned} a_{k}=w^0(x,j_k)=e^{-\Vert x-v(\mathcal {N}^{0}_{j_k})\Vert ^2/(2\sigma _r^2)}, \quad k=1,2,\ldots ,n. \end{aligned}$$

Then \(a_{k}\) and \(v(j_k)\) are independent, \( {\mathbb {E}a_{k}v(j_k)}/{\mathbb {E}a_{k}}=\mathbb {E}v(j_k)=u(i)\), and \(\{(a_{k},v(j_k))\}\) is a sequence of l-dependent and identically distributed random vectors with \(l= (2d-1)^2 -1\), and \( \mathbb {E}|a_{k}v(j_k)|^2\le \mathbb {E}|v(j_k)|^2<\infty \). Hence applying Theorem 4, we get, for any fixed x and any positive integer \(k_0\),

$$\begin{aligned} Z_n(x):={\sqrt{n}} \bigg (\frac{\sum _{k=k_0}^n{w^{0}}(x,j_k)v(j_k)}{\sum _{k=k_0}^n{w^{0}}(x,j_k)}-u(i)\bigg )\mathop {\rightarrow }\limits ^{d} N(0,c_x^2), \end{aligned}$$

where \(c_x>0\) will be calculated at the end of the proof. This means that for any \(t\in \mathbb {R}\),

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathbb {P}(Z_n(x) \le t)=\int _{-\infty }^t\frac{1}{\sqrt{2\pi }c_x}e^{-\frac{z^2}{2c_x^2}}dz. \end{aligned}$$

Let \(k_0>l\) be a positive integer such that \(v(\mathcal {N}_i^0)\) is independent of \(v(\mathcal {N}^{0}_{j_k})\) for all \(k\ge k_0\). Then by Fubini’s theorem and Lebesgue’s dominated convergence theorem, we have

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathbb {P}\Big (Z_n\big (v(\mathcal {N}^0_i)\big ) \le t\Big )=\int _{-\infty }^t f(z) dz, \end{aligned}$$

where

$$\begin{aligned} f(z)=\int _{\mathbb {R}^{|\mathcal {N}_i^0|}}\frac{1}{\sqrt{2\pi }c_x}e^{-\frac{z^2}{2c_x^2}}\mu (dx), \end{aligned}$$

with \(\mu \) being the law of \(v(\mathcal {N}_i^0)\). In other words,

$$\begin{aligned} {\sqrt{n}} \bigg (\frac{\sum _{k=k_0}^n{w^{0}}(i,j_k)v(j_k)}{\sum _{k=k_0}^n{w^{0}}(i,j_k)}-u(i)\bigg )\mathop {\rightarrow }\limits ^{d} \mathcal {L}, \end{aligned}$$

where \(\mathcal {L}\) is the law with density f. This, together with (38), proves equation (15) of Theorem 2.

We now turn to the calculation of \(c_x\). Let \(v_k=v(j_k)\) and \( z_k=v_k \mathbb {E}a_1-\mathbb {E}(a_1 v_1)\). By the independence of \(a_1\) and \(v_1\), we get \(z_k=(v_k-\mathbb {E} v_1)\mathbb {E}a_1\). It then follows that

$$\begin{aligned} \mathbb {E} (a_1z_1)^2=\mathbb {E} (a_1^2 (v_1-\mathbb {E} v_1)^2)\mathbb {E} ^2a_1=\mathbb {E} (a_1^2) \mathbb {E} ^2( a_1) \mathbb {E} (v_1-\mathbb {E} v_1)^2, \end{aligned}$$

and

$$\begin{aligned} \mathbb {E} (a_1z_1a_kz_k)= & {} \mathbb {E}\big ( a_1a_k (v_1- \mathbb {E}v_1)(v_k-\mathbb {E} v_k)\big )\mathbb {E}^2a_1\\= & {} \mathbb {E} ^2(a_1) \mathbb {E} \big (a_1a_k(v_1-\mathbb {E} v_1)(v_k-\mathbb {E} v_k)\big ). \end{aligned}$$

Note that if \((a_1,a_k)\) is independent of \((v_1,v_k)\), it holds that

$$\begin{aligned} \mathbb {E} (a_1z_1a_kz_k)=\mathbb {E} ^2(a_1) \mathbb {E}(a_1a_k) \mathbb {E} (v_1-\mathbb {E} v_1) \mathbb {E}(v_k-\mathbb {E} v_k)=0, \end{aligned}$$
(40)

by the independence of \(v_1\) and \(v_k\). If \(v_1\) is not contained in \(v(\mathcal {N}^{0}_{j_k})\), then \((a_1,a_k)\) is independent of \((v_1,v_k)\). Notice that according to the order of \(I_i\) defined in Sect. 2.2, when \(k>d^2\), \(v_1\) is not contained in \(v(\mathcal {N}^{0}_{j_k})\), so that (40) holds; therefore by Theorem 4,

$$\begin{aligned} c_x= & {} \sqrt{\mathbb {E}(a_1z_1)^2+2\lambda }/(\mathbb {E}a_1)^2 \\= & {} \frac{1}{\mathbb {E}a_1}\sqrt{\mathbb {E} (a_1^2) \mathbb {E} (v_1-\mathbb {E} v_1)^2 +2\sum \nolimits _{k=2}^{d^2} \mathbb {E} (a_1a_k(v_1-\mathbb {E} v_1)(v_k-\mathbb {E} v_k))}. \end{aligned}$$

We finally give an approximation of \(c_x\). Recall that \(a_{k}=e^{-\Vert x-v(\mathcal {N}^{0}_{j_k})\Vert ^2/(2\sigma _r^2)}\), and \(v_k=v(j_k)\). Let \(\mathcal {T}(j)=j-j_1+j_k\) be the translation mapping \(j_1\) to \(j_k\) (thus mapping \(\mathcal {N}^{0}_{j_1}\) onto \(\mathcal {N}^{0}_{j_k}\)).

If \(v( j_1)\) is not contained in \(v(\mathcal {N}^{0}_{j_k})\), we have already seen that \(\mathbb {E} (a_1z_1a_kz_k)= 0\). If \(v(j_1)\) is contained in \(v(\mathcal {N}^{0}_{j_k})\), then to make \((a_1,a_k)\) independent of \((v_1,v_k)\), we can remove \(v(j_1)\) from \(v(\mathcal {N}^{0}_{j_k})\) and the corresponding term \(v(\mathcal {T}^{-1}(j_1))\) from \(v(\mathcal {N}^{0}_{j_1})\), and remove \(v(j_k)\) from \(v(\mathcal {N}^{0}_{j_1})\) and the corresponding term \(v(\mathcal {T}(j_k))\) from \(v(\mathcal {N}^{0}_{j_k})\). The values of \(a_1,a_k\) obtained in this way are very close to the original ones, so we may consider that \(\mathbb {E} (a_1z_1a_kz_k)\approx 0\). Therefore

$$\begin{aligned} c_x \approx \frac{\sqrt{\mathbb {E} (a_1z_1)^2}}{\mathbb {E} ^2a_1}=\frac{\sqrt{\mathbb {E} a_1^2}}{\mathbb {E} a_1}\sigma =\sigma \frac{\sqrt{\int _{R^m}e^{-\Vert x-v\Vert ^2/\sigma _r^2}\mu (dv)}}{\int _{R^m}e^{-\Vert x-v\Vert ^2/2\sigma _r^2}\mu (dv)}, \end{aligned}$$

where \(v=v(\mathcal {N}^{0}_{j_k})\) and \(m=|\mathcal {N}_{j_k}^0|\). Recall that \(\mu (dv)\) is the law of \(v(\mathcal {N}^{0}_{i})\), so it is also the law of \(v(\mathcal {N}^{0}_{j_k})\). Let

$$\begin{aligned} \nu =\mathbb {E}(v(\mathcal {N}^{0}_{i})), \end{aligned}$$

then \(v\sim N(\nu , \sigma ^2Id_m)\), where \(Id_m\) denotes the identity matrix of size \(m\times m\). Since

$$\begin{aligned} \int _{R^m}e^{-\Vert x-v\Vert ^2/a}\mu (dv) =\frac{1}{(\sqrt{2/a+1/\sigma ^2}\sigma )^m}\exp (-\Vert x-\nu \Vert ^2/(a+2\sigma ^2)), \end{aligned}$$

we get

$$\begin{aligned} c_x\approx \sigma ^{m/2+1}\left( \frac{(\sigma ^2+\sigma _r^2)^2}{\sigma ^2\sigma _r^2(2\sigma ^2+\sigma _r^2)}\right) ^{m/4} \exp \left( \frac{\sigma ^2}{2(\sigma _r^2+\sigma ^2)(\sigma _r^2+2\sigma ^2)}\Vert x-\nu \Vert ^2\right) . \end{aligned}$$

This ends the proof of Theorem 2. \(\square \)
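The Gaussian integral identity used in the last step can be sanity-checked by Monte Carlo. A minimal sketch (the values of m, \(\sigma \), \(\sigma _r\), x and \(\nu \) are arbitrary illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
m, sigma, N = 4, 1.5, 400000
nu = np.full(m, 2.0)          # mean of v ~ N(nu, sigma^2 Id_m)
x = nu + 0.7                  # any fixed point

def closed_form(a):
    # (sqrt(2/a + 1/sigma^2) * sigma)^(-m) * exp(-||x - nu||^2 / (a + 2 sigma^2))
    return (np.sqrt(2/a + 1/sigma**2) * sigma)**(-m) \
        * np.exp(-np.sum((x - nu)**2) / (a + 2 * sigma**2))

def monte_carlo(a):
    # empirical average of exp(-||x - v||^2 / a) over draws of v
    v = nu + sigma * rng.normal(size=(N, m))
    return np.mean(np.exp(-np.sum((x - v)**2, axis=1) / a))

a = 2 * 1.2**2                # a = 2 sigma_r^2 with sigma_r = 1.2
mc_val, cf_val = monte_carlo(a), closed_form(a)
print(mc_val, cf_val)         # the two values should agree closely
```

Evaluating the identity at \(a=\sigma _r^2\) and \(a=2\sigma _r^2\), as in the proof, then yields the displayed approximation of \(c_x\).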


Cite this article

Hu, H., Li, B. & Liu, Q. Removing Mixture of Gaussian and Impulse Noise by Patch-Based Weighted Means. J Sci Comput 67, 103–129 (2016). https://doi.org/10.1007/s10915-015-0073-9
