
Improving Just Noticeable Difference Model by Leveraging Temporal HVS Perception Characteristics

Conference paper
MultiMedia Modeling (MMM 2020)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 11961)


Abstract

Temporal characteristics of the human visual system (HVS) are not fully exploited in conventional just noticeable difference (JND) models. In this paper, we improve the spatio-temporal JND model by fully leveraging temporal HVS characteristics. From the viewpoint of visual attention, we investigate two related factors: positive stimulus saliency and negative uncertainty. Stimulus saliency is measured from two stimulus-driven parameters, relative motion and duration along the motion trajectory, while uncertainty is measured from two uncertainty-driven parameters, global motion and residue intensity fluctuation. These four parameters are quantified with self-information and information entropy and unified into a homogeneous form for fusion, yielding a novel temporal JND adjustment weight model. Finally, we fuse the spatial JND model with the temporal JND weight to form the spatio-temporal JND model. Experimental results verify that the proposed JND model yields significant performance improvement, with a much higher capability of distortion concealment than state-of-the-art JND profiles.
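The abstract does not give the fusion equations, so the following is a minimal Python sketch of the general idea only: self-information and entropy measures of motion-related quantities are combined into a multiplicative temporal adjustment weight applied to a spatial JND map. All function names, the exponential fusion form, and the parameters alpha, beta, and the clipping range are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

def self_information(p, eps=1e-12):
    """Self-information, -log2(p), of an event with probability p."""
    return -np.log2(np.clip(p, eps, 1.0))

def shannon_entropy(hist, eps=1e-12):
    """Shannon entropy (in bits) of a histogram of observed values."""
    p = hist / max(float(np.sum(hist)), eps)
    return float(-np.sum(p * np.log2(np.clip(p, eps, 1.0))))

def temporal_weight(saliency_bits, uncertainty_bits, alpha=0.1, beta=0.1):
    """Hypothetical fusion: higher saliency (self-information of relative
    motion and trajectory duration) makes distortion easier to notice, so it
    lowers the JND weight; higher uncertainty (entropy of global motion and
    residue intensity fluctuation) masks distortion, so it raises the weight."""
    w = np.exp(-alpha * np.asarray(saliency_bits) + beta * np.asarray(uncertainty_bits))
    return np.clip(w, 0.5, 2.0)  # keep the adjustment in a plausible range

def spatio_temporal_jnd(jnd_spatial, weight_temporal):
    """Elementwise fusion of a spatial JND map with the temporal weight."""
    return np.asarray(jnd_spatial) * np.asarray(weight_temporal)

# Toy usage: one 4x4 block with moderate saliency and high uncertainty.
if __name__ == "__main__":
    jnd_s = np.full((4, 4), 3.0)                     # spatial JND thresholds
    sal = self_information(0.2) * np.ones((4, 4))    # ~2.32 bits of saliency
    unc = shannon_entropy(np.array([1, 2, 4, 1])) * np.ones((4, 4))
    print(spatio_temporal_jnd(jnd_s, temporal_weight(sal, unc)))
```

The key design point the paper argues for is that saliency and uncertainty pull the threshold in opposite directions; the exponential form above is only one convenient way to express such a monotone, homogeneous combination.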



Acknowledgement

This work was supported by the Natural Science Foundation of China (NSFC) under Grants 61572449, 61931008, 61901150 and 61972123, by the Key R&D Project 2018YFC0830106, and by the Natural Science Foundation of Zhejiang Province under Grants Q19F010030 and Y19F020124.

Author information


Corresponding author

Correspondence to Haibing Yin.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Yin, H., Xing, Y., Xia, G., Huang, X., Yan, C. (2020). Improving Just Noticeable Difference Model by Leveraging Temporal HVS Perception Characteristics. In: Ro, Y., et al. (eds.) MultiMedia Modeling. MMM 2020. Lecture Notes in Computer Science, vol 11961. Springer, Cham. https://doi.org/10.1007/978-3-030-37731-1_8

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-37731-1_8


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-37730-4

  • Online ISBN: 978-3-030-37731-1

  • eBook Packages: Computer Science, Computer Science (R0)
