Image Quality Assessment Based on Human Visual System Properties

Yong Ding
Chapter

Abstract

Utilizing the properties of the human visual system (HVS) is a major source of inspiration for the design of image quality assessment (IQA) methods. Given the current state of research in neuroscience and human visual perception, a rigorous simulation of the HVS remains far from possible, yet it can still inspire novel ideas. The basic structures of the HVS were discussed in Chap. 3, and the goal of this chapter is to connect IQA design with the aspects of HVS knowledge that can be put to use. More specifically, since we now know that the HVS operates as a hierarchical structure, it is feasible to study the characteristics of its individual processing stages; alternatively, if the inner structures are neglected and the HVS is regarded as a black box, studying its external responses offers another route to solutions. This chapter introduces methods that employ these two strategies.

Keywords

Human visual system · Hierarchy · Visual signal processing · System response · Information theory

  147. Wang, S., Gu, K., Ma, S., Lin, W., Liu, X., & Gao, W. (2016). Guided image contrast enhancement based on retrieved images in cloud. IEEE Transactions on Multimedia, 18(2), 219–232.CrossRefGoogle Scholar
  148. Wang, Z., & Bovik, A. C. (2006). Modern image quality assessment. New York: Morgan & Claypool.Google Scholar
  149. Wang, Z., & Bovik, A. C. (2009). Mean squared error: Love it or leave it? A new look at signal fidelity measures. IEEE Signal Processing Magazine, 26(1), 98–117.CrossRefGoogle Scholar
  150. Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing, 13(4), 600–612.CrossRefGoogle Scholar
  151. Watson, A. B. (1987a). Estimation of local spatial scale. Journal of the Optical Society of America A, 5(4), 2401–2417.CrossRefGoogle Scholar
  152. Watson, A. B. (1987b). The Cortex transform: Rapid computation of simulated neural images. Computer Vision Graphics and Image Processing, 39(3), 311–327.CrossRefGoogle Scholar
  153. Watson, A. B., & Ahumanda, A. (2005). A standard model for foveal detection of spatial contrast. Journal of Vision, 5(9), 717–740.CrossRefGoogle Scholar
  154. Watson, A. B., Yang, G. Y., Solomon, J. A., & Villasenor, J. (1997). Visibility of wavelet quantization noise. IEEE Transactions on Image Processing, 6(8), 1164–1175.
  155. Wei, Z., & Ngan, K. N. (2009). Spatio-temporal just noticeable distortion profile for grey scale image/video in DCT domain. IEEE Transactions on Circuits and Systems for Video Technology, 19(3), 337–346.
  156. Wen, Y., Li, Y., Zhang, X., Shi, W., Wang, L., & Chen, J. (2017). A weighted full-reference image quality assessment based on visual saliency. Journal of Visual Communication and Image Representation, 43, 119–126.
  157. Willmore, B. D. B., Prenger, R. J., & Gallant, J. L. (2010). Neural representation of natural images in visual area V2. Journal of Neuroscience, 30(6), 2102–2114.
  158. Wilson, H., & Bergen, J. (1979). A four-mechanism model for threshold spatial vision. Vision Research, 19(1), 19–32.
  159. Wu, H. R., & Rao, K. R. (2005). Digital image video quality and perceptual coding. Florida: CRC Press.
  160. Wu, Q., Li, H., Meng, F., Ngan, K. N., Luo, B., Huang, C., et al. (2016). Blind image quality assessment based on multichannel feature fusion and label transfer. IEEE Transactions on Circuits and Systems for Video Technology, 26(3), 425–440.
  161. Xue, W., Mou, X., Zhang, L., Bovik, A. C., & Feng, X. (2014). Blind image quality assessment using joint statistics of gradient magnitude and Laplacian features. IEEE Transactions on Image Processing, 23(11), 4850–4862.
  162. Yang, X., Lin, W., Lu, Z., Ong, E. P., & Yao, S. (2003a). Just-noticeable-distortion profile with nonlinear additivity model for perceptual masking in color images. Proceedings of IEEE International Conference on Acoustics, Speech, and Signal Processing, 3, 609–612.
  163. Yang, X., Lin, W., Lu, Z., Ong, E. P., & Yao, S. (2003b). Perceptually-adaptive hybrid video encoding based on just-noticeable-distortion profile. Proceedings of the Society of Photo-optical Instrumentation Engineers, 5150, 1448–1459.
  164. Ye, P., & Doermann, D. (2012). No-reference image quality assessment using visual codebooks. IEEE Transactions on Image Processing, 21(7), 3129–3138.
  165. Zhai, G., Zhang, W., Yang, X., Lin, W., & Xu, Y. (2008). No-reference noticeable blockiness estimation in images. Signal Processing: Image Communication, 23(6), 417–432.
  166. Zhang, L., & Li, H. (2012). SR-SIM: A fast and high performance IQA index based on spectral residual. In Proceedings of IEEE International Conference on Image Processing, (pp. 1473–1476).
  167. Zhang, L., Gu, Z., & Li, H. (2013a). SDSP: A novel saliency detection method by combining simple priors. In Proceedings of IEEE International Conference on Image Processing, (pp. 171–175).
  168. Zhang, L., Shen, Y., & Li, H. (2014a). VSI: A visual saliency-induced index for perceptual image quality assessment. IEEE Transactions on Image Processing, 23(10), 4270–4281.
  169. Zhang, L., Tong, M. H., Marks, T. M., Shan, H., & Cottrell, G. W. (2008a). SUN: A Bayesian framework for saliency using natural statistics. Journal of Vision, 8(7), 32.
  170. Zhang, L., Zhang, L., Mou, X., & Zhang, D. (2011). FSIM: A feature similarity index for image quality assessment. IEEE Transactions on Image Processing, 20(8), 2378–2386.
  171. Zhang, M., Mou, X., Fujita, H., Zhang, L., Zhang, X., & Xue, W. (2013b). Local binary pattern statistics feature for reduced reference image quality assessment. Proceedings of SPIE, 8660(86600L), 1–8.
  172. Zhang, W., Borji, A., Wang, Z., Le Callet, P., & Liu, H. (2016). The application of visual saliency models in objective image quality assessment: A statistical evaluation. IEEE Transactions on Neural Networks and Learning Systems, 27(6), 1266–1278.
  173. Zhang, X., Lin, W., & Xue, P. (2008b). Just-noticeable difference estimation with pixels in images. Journal of Visual Communication and Image Representation, 19(1), 30–41.
  174. Zhang, Y., & Chandler, D. M. (2013). No-reference image quality assessment based on log-derivative statistics of natural scenes. Journal of Electronic Imaging, 22(4), 043025.
  175. Zhang, Y., Moorthy, A. K., Chandler, D. M., & Bovik, A. C. (2014b). D-DIIVINE: No-reference image quality assessment based on local magnitude and phase statistics of natural scenes. Signal Processing: Image Communication, 29(7), 725–747.
  176. Zhao, Y., Ding, Y., & Zhao, X. (2016). Image quality assessment based on complementary local feature extraction and quantification. Electronics Letters, 52(22), 1849–1850.
  177. Zhu, W., Liang, S., Wei, Y., & Sun, J. (2014). Saliency optimization from robust background detection. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, (pp. 2814–2821).

Copyright information

© Zhejiang University Press, Hangzhou and Springer-Verlag GmbH Germany 2018

Authors and Affiliations

  1. Zhejiang University, Hangzhou, China