
Foveated Nonlocal Self-Similarity


Abstract

When we gaze at a scene, our visual acuity is maximal at the fixation point (imaged by the fovea, the central part of the retina) and decreases rapidly towards the periphery of the visual field. This phenomenon is known as foveation. We investigate the role of foveation in nonlocal image filtering, introducing a different form of self-similarity: the foveated self-similarity. We consider the image denoising problem as a simple means of assessing the effectiveness of descriptive models for natural images, and we show that, in nonlocal image filtering, the foveated self-similarity is far more effective than the conventional windowed self-similarity. To facilitate the use of foveation in nonlocal imaging algorithms, we develop a general framework for designing foveation operators for patches by means of spatially variant blur. Within this framework, we construct several parametrized families of operators, including anisotropic ones. Strikingly, the foveation operators enabling the best denoising performance are the radial ones, in complete agreement with the orientation preference of the human visual system.
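To make the framework concrete, the core idea — comparing patches after a spatially variant blur whose strength grows with the distance from the patch center — can be sketched as follows. This is a minimal isotropic sketch under stated assumptions: the `gain` and `radius` parameters and the linear eccentricity-to-blur law are illustrative choices, not the operator families actually constructed in the paper.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Sum-normalized 2-D Gaussian on a (2r+1)x(2r+1) grid; a delta when sigma ~ 0."""
    if sigma < 1e-8:
        k = np.zeros((2 * radius + 1, 2 * radius + 1))
        k[radius, radius] = 1.0
        return k
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    k = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return k / k.sum()

def foveate_patch(patch, gain=0.5, radius=2):
    """Spatially variant blur: the blur std grows with distance from the patch center."""
    n = patch.shape[0]
    c = (n - 1) / 2.0
    padded = np.pad(patch, radius, mode='symmetric')  # symmetric padding, as in note 1
    out = np.empty_like(patch, dtype=float)
    for i in range(n):
        for j in range(n):
            sigma = gain * np.hypot(i - c, j - c)   # eccentricity-driven blur (assumed law)
            k = gaussian_kernel(sigma, radius)
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            out[i, j] = np.sum(k * window)
    return out

def foveated_distance(p1, p2, **kw):
    """Euclidean distance between foveated patches, in place of the windowed distance."""
    d = foveate_patch(p1, **kw) - foveate_patch(p2, **kw)
    return np.sqrt(np.sum(d**2))
```

In filtering terms, plugging `foveated_distance` into a nonlocal-means weight of the form \(w=\exp(-d^{2}/h^{2})\) is what foveated self-similarity amounts to: patch centers are compared sharply, peripheries only coarsely.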


[Figures 1–22 appear in the full article.]

Notes

  1.

    In our experiments we resort to symmetric padding.

  2.

    Because our foveation operators are always designed upon a given windowing kernel \(\mathbf {k}\), a more precise symbol would be \(\mathcal {F}_{\mathbf {k}}\). However, to avoid overloading the notation, we omit the subscript \(\mathbf {k}\) and leave room for other decorations used in the later sections.

  3.

    These five requirements include the four requirements described in our preliminary conference publications (Foi and Boracchi 2012, 2013b, a), plus a non-negativity requirement, which had always been tacitly assumed but never mentioned explicitly.

  4.

    In particular, the recursive sequence \(\left\{ \varsigma _{n}\right\} _{n=0}^{+\infty }\) defined by

    $$\begin{aligned} \varsigma _{0}=\tfrac{1}{2\sqrt{\pi }}\sqrt{\tfrac{\mathbf {k}\left( 0\right) }{\mathbf {k}\left( u\right) }},\qquad \varsigma _{n+1}=\varsigma _{n}\left\| \bar{g}_{\varsigma _{n},u}^{\rho ,\vartheta }\right\| _{2} \sqrt{\tfrac{\mathbf {k}\left( 0\right) }{\mathbf {k}\left( u\right) }} \end{aligned}$$

    converges monotonically to the solution \(\overset{*}{\varsigma}_{u}^{\rho,\vartheta}\) of (49), with geometric rate for any \(\mathbf{k}(u)<\mathbf{k}(0)\) (contraction mapping).
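The recursion can be sketched numerically for the isotropic case, replacing \(\bar{g}_{\varsigma,u}^{\rho,\vartheta}\) with a sum-normalized discrete Gaussian (the isotropic restriction and the grid radius are simplifying assumptions). At the fixed point \(\varsigma\Vert\bar g_{\varsigma}\Vert_{2}\sqrt{\mathbf{k}(0)/\mathbf{k}(u)}=\varsigma\), so the converged kernel's \(\ell^{2}\) norm equals \(\sqrt{\mathbf{k}(u)/\mathbf{k}(0)}\), which gives a direct check of the iteration:

```python
import numpy as np

def gaussian_l2_norm(sigma, radius=8):
    """L2 norm of a sum-normalized 2-D Gaussian sampled on an integer grid."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    g /= g.sum()
    return np.sqrt(np.sum(g**2))

def solve_sigma(k0_over_ku, tol=1e-9, max_iter=200):
    """Fixed-point iteration sigma_{n+1} = sigma_n * ||g_{sigma_n}||_2 * sqrt(k(0)/k(u))."""
    c = np.sqrt(k0_over_ku)
    sigma = c / (2.0 * np.sqrt(np.pi))      # sigma_0 as defined in the note
    for _ in range(max_iter):
        new = sigma * gaussian_l2_norm(sigma) * c
        if abs(new - sigma) < tol:
            return new
        sigma = new
    return sigma
```

For the continuous isotropic Gaussian, \(\Vert g_{\varsigma}\Vert_{2}=1/(2\sqrt{\pi}\varsigma)\), so the map is nearly constant and the iteration converges in a handful of steps; discretization only perturbs this slightly.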

  5.

    Note that \(\bar{v}_{u_{i}}^{\rho ,\vartheta }\left( u_{j}\right) \) is nothing but the inner product between a Dirac patch at \(u_{i}\) and \(\mathcal { B}_{\rho ,\theta }\) applied to another Dirac patch at \(u_{j}\): \(\bar{v} _{u_{i}}^{\rho ,\vartheta }\left( u_{j}\right) =\left\langle \mathbf {\delta } _{u_{i}},\mathcal {B}_{\rho ,\theta }\left[ \mathbf {\delta }_{u_{j}}\right] \right\rangle \).

  6.

    http://www.cs.tut.fi/~foi/FoveatedNL

  7.

    The selected parameter range is general enough since on natural images the denoising performance of left-hand chiral operators (\(\rho \ge 1\), \(-\pi /2<\theta <0\)) is practically identical to that of right-hand chiral operators (\(\rho \ge 1\), \(0<\theta <\pi /2\)), as shown in Section Suppl.4.

  8.

    For Gaussian blur kernels, \(\mathcal {F}_{\rho ,0}=\mathcal {F}_{1/\rho ,\pi /2}\).

  9.

    Invertibility is feasible with self-map operators \(\mathcal {F}_{\rho ,\theta }^{\text {self}}\) provided that the inequality (50) is met with a strict lower bound. As a matter of fact, the greatest advantage of self-map operators is that they can be represented through \( \mathcal {B}_{\rho ,\theta }\) (46) as square matrices of size \( \left| U\right| \times \left| U\right| \).
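Since a self-map operator acts from \(U\) to \(U\), it can be stored as a single \(\left|U\right|\times\left|U\right|\) matrix and applied by one matrix–vector product on the vectorized patch. The sketch below is a hedged illustration only: the elliptical-Gaussian row weights, the `gain` parameter, and the use of `rho` as an axis ratio are assumptions standing in for the actual construction via \(\mathcal{B}_{\rho,\theta}\) in (46).

```python
import numpy as np

def selfmap_foveation_matrix(n, rho=2.0, theta=0.0, gain=0.5):
    """Build a self-map foveation operator as a |U|x|U| matrix. Row a holds the
    blur weights v_{u_a} over all input pixels u_b of an n-by-n patch."""
    c = (n - 1) / 2.0
    coords = [(i, j) for i in range(n) for j in range(n)]
    F = np.zeros((n * n, n * n))
    for a, (i, j) in enumerate(coords):
        ecc = np.hypot(i - c, j - c)
        sigma = max(gain * ecc, 1e-3)            # blur std grows with eccentricity
        phi = np.arctan2(i - c, j - c) + theta   # main axis at angle(u) + theta (note 10)
        s_major, s_minor = sigma * rho, sigma / rho
        ca, sa = np.cos(phi), np.sin(phi)
        for b, (p, q) in enumerate(coords):
            dy, dx = p - i, q - j
            r = dx * ca + dy * sa                # rotate into the ellipse frame
            t = -dx * sa + dy * ca
            F[a, b] = np.exp(-0.5 * (r**2 / s_major**2 + t**2 / s_minor**2))
        F[a] /= F[a].sum()                       # sum-normalize each row
    return F
```

A patch `z` of shape `(n, n)` is then foveated as `(F @ z.ravel()).reshape(n, n)`; row-wise normalization makes the operator non-negative and mean-preserving in this sketch.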

  10.

    In fact, \(\mathcal {F}_{\rho ,\theta }[z,x_{1}](\bar{u})\) is proportional to the value at \(\bar{u}\) of the convolution between \(z\) and the blur kernel \(v_{\bar{u}}\), and the latter has its main axis oriented as \(\angle \bar{u}+\theta \).

References

  1. Alahi, A., Ortiz, R., & Vandergheynst, P. (2012). FREAK: Fast retina keypoint. Proceedings of IEEE CVPR (pp. 510–517).

  2. Alexander, S. K., Vrscay, E. R., & Tsurumi, S. (2008). A simple, general model for the affine self-similarity of images. Image Analysis and Recognition (Vol. 5112, LNCS). Berlin: Springer.

  3. Arias, P., Facciolo, G., Caselles, V., & Sapiro, G. (2011). A variational framework for exemplar-based image inpainting. IJCV, 93(3), 319–347.

  4. Awate, S., & Whitaker, R. (2005). Higher-order image statistics for unsupervised, information-theoretic, adaptive, image filtering. Proceedings of IEEE CVPR, 2, 44–51.

  5. Baddeley, R. (1997). The correlational structure of natural images and the calibration of spatial representations. Cognitive Science, 21(3), 351–372.

  6. Balas, B., Nakano, L., & Rosenholtz, R. (2009). A summary-statistic representation in peripheral vision explains visual crowding. Journal of Vision, 9(12), 13.

  7. Barnsley, M. F. (1993). Fractals everywhere. Boston: Academic Press.

  8. Basu, A., & Wiebe, K. (1998). Enhancing videoconferencing using spatially varying sensing. IEEE TSMC-A, 28(2), 137–148.

  9. Bigun, J., & Granlund, G. H. (Sept. 1986). Central symmetry modelling. Proceedings of EUSIPCO (pp. 883–886).

  10. Brockmole, J. R., & Irwin, D. E. (2005). Eye movements and the integration of visual memory and visual perception. Perception and Psychophysics, 67(3), 495–512.

  11. Buades, A., Coll, B., & Morel, J.-M. (2005). A review of image denoising algorithms, with a new one. MMS, 4(2), 490.

  12. Buades, A., Coll, B., & Morel, J.-M. (2008). Nonlocal image and movie denoising. IJCV, 76(2), 123–139.

  13. Buades, A., Coll, B., Morel, J.-M., & Sbert, C. (2009). Self-similarity driven color demosaicking. IEEE TIP, 18(6), 1192–1202.

  14. Buades, A., Coll, B., & Morel, J.-M. (2010). Image denoising methods. A new nonlocal principle. SIAM Review, 52(1), 113–147.

  15. Buades, A., Coll, B., & Morel, J.-M. (2011). Non-local means denoising. IPOL2011.

  16. Chang, E.-C., Mallat, S., & Yap, C. (2000). Wavelet foveation. ACHA, 9(3), 312–335.

  17. Chatterjee, P., & Milanfar, P. (2008). A generalization of non-local means via kernel regression. Proceedings of SPIE CI VI, 6814, 68140P–68140P–9.

  18. Chatterjee, P., & Milanfar, P. (2010). Is denoising dead? IEEE TIP, 19(4), 895–911.

  19. Chierchia, G., Pustelnik, N., Pesquet, J.-C., & Pesquet-Popescu, B. (Mar. 2014). Epigraphical splitting for solving constrained convex formulations of inverse problems with proximal tools. arXiv:1210.5844v3.

  20. Chierchia, G., Pustelnik, N., Pesquet, J.-C., & Pesquet-Popescu, B. (2014a). Epigraphical projection and proximal tools for solving constrained convex optimization problems. SIVP (pp. 1–13).

  21. Chierchia, G., Pustelnik, N., Pesquet-Popescu, B., & Pesquet, J.-C. (2014). A nonlocal structure tensor-based approach for multicomponent image recovery problems. IEEE TIP, 23(12), 5531–5544.

  22. Ciresan, D., Giusti, A., Gambardella, L. M., & Schmidhuber, J. (2012). Deep neural networks segment neuronal membranes in electron microscopy images. Proceedings of NIPS (pp. 2843–2851).

  23. Conn, A. R., Scheinberg, K., & Vicente, L. (2009). Introduction to Derivative-Free Optimization. Philadelphia: SIAM.

  24. Curcio, C., Sloan, K., Kalina, R., & Hendrickson, A. (1990). Human photoreceptor topography. Journal of Comparative Neurology, 292, 497–523.

  25. Dabov, K., Foi, A., & Egiazarian, K. (2007). Video denoising by sparse 3D transform-domain collaborative filtering. Proceedings of EUSIPCO.

  26. Dabov, K., Foi, A., Katkovnik, V., & Egiazarian, K. (2007b). Image denoising by sparse 3D transform-domain collaborative filtering. IEEE TIP, 16(8), 2080–2095.

  27. Dabov, K., Foi, A., Katkovnik, V., & Egiazarian, K. (2008). Image restoration by sparse 3D transform-domain collaborative filtering. Proceedings of SPIE EI, 6812, 681207–1–681207–12.

  28. Dabov, K., Foi, A., Katkovnik, V., & Egiazarian, K. (2009). BM3D image denoising with shape-adaptive principal component analysis. Proceedings of SPARS.

  29. Daly, S., Ribas-Corbera, J., & Matthews, K. (2001). As plain as the noise on your face: Adaptive video compression using face detection and visual eccentricity models. JEI, 10(1), 30–46.

  30. Danielyan, A., Foi, A., Katkovnik, V., & Egiazarian, K., (2008). Image and video super-resolution via spatially adaptive block-matching filtering. Proceedings of LNLA.

  31. Danielyan, A., Katkovnik, V., & Egiazarian, K. (2012). BM3D frames and variational image deblurring. IEEE TIP, 21(4), 1715–1728.

  32. Darbon, J., Cunha, A., Chan, T., Osher, S., & Jensen, G. (May 2008). Fast nonlocal filtering applied to electron cryomicroscopy. IEEE ISBI (pp. 1331–1334).

  33. De Bonet, J. S. (1997). Noise reduction through detection of signal redundancy. MIT AI Lab: Rethinking artificial intelligence.

  34. Deledalle, C.-A., Denis, L., & Tupin, F. (2012). How to compare noisy patches? Patch similarity beyond Gaussian noise. IJCV, 99(1), 86–102.

  35. Demeyer, M., De Graef, P., Wagemans, J., & Verfaillie, K. (2009). Transsaccadic identification of highly similar artificial shapes. Journal of Vision, 9(4), 28.

  36. Desbordes, G. (2007). Vision in the Presence of Fixational Eye Movements: Insights from Psychophysics and Neural Modeling. PhD thesis, Boston University.

  37. Dong, B., Ye, J., Osher, S., & Dinov, I. (2008). Level set based nonlocal surface restoration. MMS, 7(2), 589–598.

  38. Donner, K., & Hemilä, S. (2007). Modelling the effect of microsaccades on retinal responses to stationary contrast patterns. Vision Research, 47(9), 1166–1177.

  39. Duncan, R. O., & Boynton, G. M. (2003). Cortical magnification within human primary visual cortex correlates with acuity thresholds. Neuron, 38(4), 659–671.

  40. Ebrahimi, M., & Vrscay, E. R. (2007). Solving the inverse problem of image zooming using “self-examples”. Image Analysis and Recognition (Vol. 4633, pp. 117–130, LNCS).

  41. Ebrahimi, M., & Vrscay, E. R. (2008a). Examining the role of scale in the context of the non-local-means filter. Proceedings ICIAR (pp. 170–181)

  42. Ebrahimi, M., & Vrscay, E. R. (2008b). Multi-frame super-resolution with no explicit motion estimation. Proceedings of IPCV.

  43. Eckstein, M. P. (2011) Visual search: A retrospective. JOV, 11(5).

  44. Efros, A., & Leung, T. (1999). Texture synthesis by non-parametric sampling. Proceedings of IEEE ICCV, 2, 1033–1038.

  45. Elad, M., & Datsenko, D. (2009). Example-based regularization deployed to super-resolution reconstruction of a single image. Computer Journal, 52(1), 15–30.

  46. Engbert, R., & Kliegl, R. (2004). Microsaccades keep the eyes’ balance during fixation. Psychological Science, 15(6), 431–431.

  47. Etienne-Cummings, R., Van der Spiegel, J., Mueller, P., & Zhang, M.-Z. (2000). A foveated silicon retina for two-dimensional tracking. IEEE TCS, 47(6), 504–517.

  48. Foi, A., & Boracchi, G. (2012). Foveated self-similarity in nonlocal image filtering. Proceedings of SPIE HVEI, 8291.

  49. Foi, A., & Boracchi, G. (Sept. 2013a). Anisotropically foveated nonlocal image denoising. Proceedings of IEEE ICIP (pp. 464–468).

  50. Foi, A., & Boracchi, G. (2013b). Anisotropic foveated self-similarity. Proceedings of SPARS (p. 1), July 8–11.

  51. Foi, A., & Boracchi, G. (2014). Nonlocal foveated principal components. Proceedings of IEEE Workshop SSP (pp. 145–148).

  52. Freeman, J., & Simoncelli, E. P. (2011). Metamers of the ventral stream. Nature Neuroscience, 14(9), 1195–1201.

  53. Freeman, J., Brouwer, G. J. J., Heeger, D. J., & Merriam, E. P. (2011). Orientation decoding depends on maps, not columns. Journal of Neuroscience, 31(13), 4792–4804.

  54. Freeman, W., Jones, T., & Pasztor, E. (2002). Example-based superresolution. IEEE CG & A, 22(2), 56–65.

  55. Froment, J. (2014). Parameter-free fast pixelwise non-local means denoising. IPOL, 4, 300–326.

  56. Garway-Heath, D. F., Caprioli, J., Fitzke, F. W., & Hitchings, R. A. (2000). Scaling the hill of vision: The physiological relationship between light sensitivity and ganglion cell numbers. IOVS, 41(7), 1774–1782.

  57. Geisler, W., Perry, J., Super, B., & Gallogly, D. (2001). Edge co-occurrence in natural images predicts contour grouping performance. Vision Research, 41(6), 711–724.

  58. Geisler, W. S., & Perry, J. S. (1998). A real-time foveated multiresolution system for low-bandwidth video communication. Proceedings of SPIE HVEI (pp. 294–305).

  59. Ghazel, M., Freeman, G. H., & Vrscay, E. R. (2003). Fractal image denoising. IEEE TIP, 12(12), 1560–1578.

  60. Gilboa, G., & Osher, S. (2009). Nonlocal operators with applications to image processing. MMS, 7(3), 1005–1028.

  61. Grewenig, S., Zimmer, S., & Weickert, J. (2011). Rotationally invariant similarity measures for nonlocal image denoising. Journal of Visual Communication and Image R, 22(2), 117–130.

  62. Haloi, M. (2015). Improved microaneurysm detection using deep neural networks. arXiv:1505.04424.

  63. Herwig, A., & Schneider, W. X. (2014). Predicting object features across saccades: Evidence from object recognition and visual search. Journal of Experimental Psychology: General, 143(5), 1903–1922.

  64. Jacquin, A. (1992). Image coding based on a fractal theory of iterated contractive image transformations. IEEE TIP, 1(1), 18–30.

  65. Jennings, J. A. M., & Charman, W. (1997). Analytic approximation of the off-axis modulation transfer function of the eye. Vision Research, 37(6), 697–704.

  66. Ji, Z., Chen, Q., Sun, Q.-S., & Xia, D.-S. (2009). A moment-based nonlocal-means algorithm for image denoising. IPL, 109, 1238–1244.

  67. Jin, Q., Grama, I., & Liu, Q. (2011). Removing Gaussian noise by optimization of weights in non-local means. arXiv:1109.5640.

  68. Joselevitch, C. (2008). Human retinal circuitry and physiology. Psychology and Neuroscience, 1(2), 141–165.

  69. Karlsson, S., & Bigun, J. (Mar. 2011). Synthesis and detection of log-spiral codes. SSBA Proceedings (p. 4).

  70. Katkovnik, V., Foi, A., Egiazarian, K., & Astola, J. (2010). From local kernel to nonlocal multiple-model image denoising. IJCV, 86(1), 1–32.

  71. Kervrann, C., & Boulanger, J. (2006). Optimal spatial adaptation for patch-based image denoising. IEEE TIP, 15(10), 2866–2878.

  72. Kervrann, C., Boulanger, J., & Coupé, P. (2007). Bayesian non-local means filter, image redundancy and adaptive dictionaries for noise removal. Proceedings of SSVM, SSVM’07 (pp. 520–532).

  73. Kindermann, S., Osher, S., & Jones, P. (2005). Deblurring and denoising of images by nonlocal functionals. MMS, 4(4), 1091–1115.

  74. Kortum, P., & Geisler, W. (1996). Implementation of a foveated image coding system for image bandwidth reduction. Proceedings of SPIE HVEI (pp. 350–360).

  75. Kowler, E. (2011). Eye movements: The past 25 years. Vision Research, 51(13), 1457–1483.

  76. Krishna, B. S., Ipata, A. E., Bisley, J. W., Gottlieb, J., & Goldberg, M. E. (2014). Extrafoveal preview benefit during free-viewing visual search in the monkey. JOV, 14(1), 6.

  77. La Torre, D., Vrscay, E. R., Ebrahimi, M., & Barnsley, M. F. (2009). Measure-valued images, associated fractal transforms, and the affine self-similarity of images. SSIMS, 2(2), 470–507.

  78. Lebrun, M., Buades, A., & Morel, J.-M. (2013). A nonlocal bayesian image denoising algorithm. SSIMS, 6(3), 1665–1688.

  79. Lee, S., Pattichis, M., & Bovik, A. (2001). Foveated video compression with optimal rate control. IEEE TIP, 10(7), 977–992.

  80. Levick, W., & Thibos, L. (1982). Analysis of orientation bias in cat retina. The Journal of Physiology, 329, 243.

  81. Levin, A., Nadler, B., Durand, F., & Freeman, W. T. (2012). Patch complexity, finite pixel correlations and optimal denoising. Proceedings of ECCV, 5, 73–86.

  82. Li, Y., & Huttenlocher, D. (2008). Sparse long-range random field and its application to image denoising. Proceedings of ECCV (Vol. 5304, pp. 344–357), LNCS.

  83. Lou, Y., Favaro, P., Soatto, S., & Bertozzi, A. (2009). Nonlocal similarity image filtering. Proceedings of ICIAP (pp. 62–71).

  84. Louchet, C., & Moisan, L. (2011). Total variation as a local filter. SSIMS, 4(2), 651–694.

  85. Lowe, D. G. (2004). Distinctive image features from scale-invariant keypoints. IJCV, 60, 91–110.

  86. Lyu, S., & Simoncelli, E. (2009). Modeling multiscale subbands of photographic images with fields of gaussian scale mixtures. IEEE TPAMI, 31(4), 693–706.

  87. Maggioni, M., Boracchi, G., Foi, A., & Egiazarian, K. (2012). Video denoising, deblocking and enhancement through separable 4-D nonlocal spatiotemporal transforms. IEEE TIP, 21(9), 3952–3966.

  88. Mahmoudi, M., & Sapiro, G. (2005). Fast image and video denoising via nonlocal means of similar neighborhoods. IEEE SPL, 12(12), 839–842.

  89. Maleki, A., Narayan, M., & Baraniuk, R. G. (2013). Anisotropic nonlocal means. ACHA, 35(3), 452–482.

  90. Manjon-Herrera, J. V., & Buades, A. (2008). Non-local means filter, Matlab code. Matlab Central File Exchange http://www.mathworks.com/matlabcentral/fileexchange/13176-non-local-means-filter.

  91. Martinez-Conde, S., Otero-Millan, J., & Macknik, S. L. (2013). The impact of microsaccades on vision: towards a unified theory of saccadic function. Nature Reviews Neuroscience, 14(2), 83–96.

  92. McCamy, M. B., Otero-Millan, J., Macknik, S. L., Yang, Y., Troncoso, X. G., Baer, S. M., et al. (2012). Microsaccadic efficacy and contribution to foveal and peripheral vision. The Journal of Neuroscience, 32(27), 9194–9204.

  93. Milanfar, P. (2013). A tour of modern image filtering: New insights and methods, both practical and theoretical. IEEE SPM, 30(1), 106–128.

  94. Monaco, J. P., Bovik, A. C., & Cormack, L. K. (2009). Active, foveated, uncalibrated stereovision. IJCV, 85(2), 192–207.

  95. Mosseri, I., Zontak, M., & Irani, M. (2013). Combining the power of internal and external denoising. Proceedings of IEEE ICCP (pp. 1–9).

  96. Olmedo-Payá, A., Martínez-Álvarez, A., Cuenca-Asensi, S., Ferrández-Vicente, J., & Fernández, E. (2013). Modeling the effect of fixational eye movements in natural scenes. Natural and Artificial Models in Computation and Biology (vol. 7930, pp. 332–341), LNCS.

  97. Orchard, J., Ebrahimi, M., & Wong, A. (Oct. 2008). Efficient nonlocal-means denoising using the SVD. Proceedings of ICIP (pp. 1732–1735).

  98. Osindero, S., Welling, M., & Hinton, G. E. (2006). Topographic product models applied to natural scene statistics. Neural Computation, 18(2), 381–414.

  99. Peyré, G. (2009a). Sparse modeling of textures. Journal of Mathematical Imaging and Vision, 34(1), 17–31.

  100. Peyré, G. (2009b). Manifold models for signals and images. CVIU, 113(2), 249–260.

  101. Poletti, M., Listorti, C., & Rucci, M. (2013). Microscopic eye movements compensate for nonhomogeneous vision within the fovea. Current Biology, 23(17), 1691–1695.

  102. Postec, S. (2012). Quelques remarques en débruitage des images liées à des propriétés de similarité, de régularité et de parcimonie. PhD thesis, Université de Bretagne-Sud.

  103. Protter, M., Elad, M., Takeda, H., & Milanfar, P. (2009). Generalizing the nonlocal-means to super-resolution reconstruction. IEEE TIP, 18(1), 36–51.

  104. Ranzato, M., Mnih, V., Susskind, J., & Hinton, G. (2013). Modeling natural images using gated MRFs. IEEE TPAMI, 35(9), 2206–2222.

  105. Rosenholtz, R., Huang, J., Raj, A., Balas, B. J., & Ilie, L. (2012). A summary statistic representation in peripheral vision explains visual search. JOV 12(4).

  106. Rosman, G., Dubrovina, A., & Kimmel, R. (2013). Patch-collaborative spectral point-cloud denoising. Computer Graphics Forum.

  107. Roth, S., & Black, M. (2005). Fields of experts: a framework for learning image priors. Proceedings of IEEE CVPR, 2, 860–867.

  108. Rucci, M., Iovin, R., Poletti, M., & Santini, F. (2007). Miniature eye movements enhance fine spatial detail. Nature, 447(7146), 852–855.

  109. Salmon, J. (2010). On two parameters for denoising with non-local means. IEEE SPL, 17(3), 269–272.

  110. Sasaki, Y., Rajimehr, R., Kim, B., Ekstrom, L., Vanduffel, W., & Tootell, R. (2006a). The radial bias: A different slant on visual orientation sensitivity in human and nonhuman primates. Neuron, 51(5), 661–670.

  111. Sasaki, Y., Rajimehr, R., Kim, B. W., Knutsen, T., Ekstrom, L., Dale, A., et al. (2006b). The radial orientation effect in human and non-human primates. JOV, 6(6), 916–916.

  112. Sigman, M., Cecchi, G. A., Gilbert, C. D., & Magnasco, M. O. (2001). On a common circle: Natural scenes and gestalt rules. PNAS, 98(4), 1935–1940.

  113. Strasburger, H., Rentschler, I., & Jüttner, M. (2011). Peripheral vision and pattern recognition: A review. JOV 11(5).

  114. Sutour, C., Deledalle, C.-A., & Aujol, J.-F. (2014). Adaptive regularization of the NL-means: Application to image and video denoising. IEEE TIP, 23(8), 3506–3521.

  115. Tasdizen, T. (2009). Principal neighborhood dictionaries for nonlocal means image denoising. IEEE TIP, 18(12), 2649–2660.

  116. Thaipanich, T., Oh, B., Wu, P.-H., Xu, D., & Kuo, C.-C. (2010). Improved image denoising with adaptive nonlocal means (ANL-means) algorithm. IEEE TCE, 56(4), 2623–2630.

  117. Toet, A., & Levi, D. M. (1992). The two-dimensional shape of spatial interaction zones in the parafovea. Vision Research, 32(7), 1349–1357.

  118. Tomasi, C., & Manduchi, R. (1998). Bilateral filtering for gray and color images. Proceedings of ICCV (pp. 839–846).

  119. Tong, F., & Li, Z.-N. (1995). Reciprocal-wedge transform for space-variant sensing. IEEE TPAMI, 17(5), 500–511.

  120. Torralba, A., & Oliva, A. (2003). Statistics of natural image categories. Network: Computation in Neural Systems, 14, 391–412.

  121. USC-SIPI. (2016). University of Southern California Signal and Image Processing Institute image database. http://sipi.usc.edu/database/.

  122. van der Schaaf, A., & van Hateren, J. (1996). Modelling the power spectra of natural images: Statistics and information. Vision Research, 36(17), 2759–2770.

  123. Wallace, R. S., Ong, P.-W., Bederson, B. B., & Schwartz, E. L. (1994). Space variant image processing. IJCV, 13(1), 71–90.

  124. Wandell, B. A. (1995). Foundations of Vision. Sunderland: Sinauer Assoc.

  125. Wang, J., Guo, Y., Ying, Y., Liu, Y., & Peng, Q. (Oct. 2006). Fast non-local algorithm for image denoising. Proceedings IEEE ICIP (pp. 1429–1432).

  126. Wang, Z., & Bovik, A. (2001). Embedded foveation image coding. IEEE TIP, 10(10), 1397–1410.

  127. Wang, Z., & Bovik, A. (2006). Foveated image and video coding. Digital Video, Image Quality and Perceptual Coding (pp. 431–457).

  128. Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: From error measurement to structural similarity. IEEE TIP, 13(4), 600–612.

  129. Wei, J., & Li, Z.-N. (1998). Efficient disparity-based gaze control with foveate wavelet transform. Proceedings IEEE/RSJ IROS, 2, 866–871.

  130. Wei, L.-Y., & Levoy, M. (2000). Fast texture synthesis using tree-structured vector quantization. Proceedings SIGGRAPH (pp. 479–488).

  131. Weiman, C. F., & Chaikin, G. (1979). Logarithmic spiral grids for image processing and display. CGIP, 11(3), 197–226.

  132. Weiss, Y., & Freeman, W. (2007). What makes a good model of natural images? Proceedings of IEEE CVPR (pp. 1–8).

  133. Wertheim, T. (1894). Über die indirekte sehschärfe. Zeitschrift für Psychologie und Physiologie der Sinnesorgane, 7, 172–187.

  134. Williams, D. R., Artal, P., Navarro, R., McMahon, M. J., & Brainard, D. H. (1996). Off-axis optical quality and retinal sampling in the human eye. Vision Research, 36(8), 1103–1114.

  135. Wodnicki, R., Roberts, G., & Levine, M. (1995) A foveated image sensor in standard CMOS technology. Proceedings of CCIC (pp. 357–360).

  136. Yaroslavsky, L. (1985). Digital Picture Processing. An Introduction. Berlin: Springer.

  137. Zhang, D., & Wang, Z. (2002). Image information restoration based on long-range correlation. IEEE TCSVT, 12(5), 331–341.


Acknowledgments

This work was supported by the Academy of Finland (Project No. 252547, Academy Research Fellow 2011-2016).

Author information

Correspondence to Alessandro Foi.

Additional information

Communicated by Stefan Roth.

Electronic supplementary material


Supplementary material 1 (pdf 7103 KB)


About this article


Cite this article

Foi, A., Boracchi, G. Foveated Nonlocal Self-Similarity. Int J Comput Vis 120, 78–110 (2016). https://doi.org/10.1007/s11263-016-0898-1
