Frontiers of Computer Science, Volume 13, Issue 1, pp 4–15

Survey of visual just noticeable difference estimation

  • Jinjian Wu
  • Guangming Shi
  • Weisi Lin
Review Article

Abstract

The concept of just noticeable difference (JND), which accounts for the visibility threshold (visual redundancy) of the human visual system, is useful in perception-oriented signal processing systems. In this work, we present a comprehensive review of JND estimation techniques. First, the visual mechanisms and their corresponding computational modules are illustrated, including luminance adaptation, contrast masking, pattern masking, and the contrast sensitivity function. Next, the existing pixel-domain and subband-domain JND models are presented and analyzed. Finally, the challenges associated with JND estimation are discussed.
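
To make the pixel-domain modules named above concrete, the sketch below computes a pixel-domain JND map in the spirit of the classical formulation: a luminance-adaptation threshold and a spatial contrast-masking term are combined with a nonlinear additivity rule that discounts their overlap. This is a minimal illustration under stated assumptions, not the exact model of any specific work surveyed here; the parameter values (T0 = 17, gamma = 3/128, overlap factor C = 0.3, 5x5 averaging window) and the gradient-magnitude approximation of spatial masking are illustrative choices.

```python
import numpy as np

def luminance_adaptation(bg, t0=17.0, gamma=3.0 / 128.0):
    """Visibility threshold due to background luminance (Chou-Li style curve).
    bg: local mean luminance in [0, 255]. Parameter values are illustrative."""
    return np.where(
        bg <= 127,
        t0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0,  # higher threshold in dark regions
        gamma * (bg - 127.0) + 3.0,              # slowly rising threshold in bright regions
    )

def contrast_masking(img, beta=0.117):
    """Visibility threshold due to spatial activity, approximated here by the
    local luminance gradient magnitude scaled by an assumed factor beta."""
    gy, gx = np.gradient(img.astype(np.float64))
    return beta * np.sqrt(gx ** 2 + gy ** 2)

def pixel_jnd(img, c=0.3, win=5):
    """Pixel-domain JND map: combine luminance adaptation and contrast masking,
    subtracting c * min(...) so the two masking effects are not double-counted."""
    img = img.astype(np.float64)
    # Local mean luminance via a simple box filter over a win x win neighborhood.
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    bg = np.zeros_like(img)
    for dy in range(win):
        for dx in range(win):
            bg += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    bg /= win * win

    la = luminance_adaptation(bg)
    cm = contrast_masking(img)
    return la + cm - c * np.minimum(la, cm)

if __name__ == "__main__":
    gray = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in for a grayscale image
    jnd = pixel_jnd(gray)
    print(jnd.shape, float(jnd.min()), float(jnd.max()))
```

Higher JND values mark regions (very dark or bright flat areas, busy textures) where larger distortions remain invisible; a perception-oriented coder can, for example, suppress residuals whose magnitude falls below the local JND.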

Keywords

just noticeable difference; human visual system; luminance adaptation; contrast masking; pattern masking; contrast sensitivity function


Notes

Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant No. 61401325), the Research Fund for the Doctoral Program of Higher Education (20130203130001), and the Young Talent Fund of University Association for Science and Technology in Shaanxi (20150110).

Supplementary material

11704_2016_6213_MOESM1_ESM.ppt (approximately 381 KB)

Copyright information

© Higher Education Press and Springer-Verlag GmbH Germany, part of Springer Nature 2019

Authors and Affiliations

  1. School of Electronic Engineering, Xidian University, Xi’an, China
  2. Collaborative Innovation Center of Information Sensing and Understanding, Xidian University, Xi’an, China
  3. School of Computer Engineering, Nanyang Technological University, Singapore
