
A Just Noticeable Difference-Based Video Quality Assessment Method with Low Computational Complexity

  • Original Paper
  • Published in Sensing and Imaging

Abstract

A Just Noticeable Difference (JND)-based video quality assessment (VQA) method, termed JVQ, is proposed. JVQ applies the JND concept to the structural similarity (SSIM) index to measure spatial quality, incorporating three features: luminance adaptation, contrast masking, and texture masking. In JVQ, the JND concept is refined and additional features are considered. For the spatial part, minor distortions in the distorted frames are treated as imperceptible and ignored. For the temporal part, a simplified SSIM index is used to measure temporal video quality, and a similar JND concept, which comprises temporal masking, is applied in the temporal quality evaluation. Pixels with large variation over time are treated as undistorted, because distortions in these pixels are hardly perceivable. The final JVQ index is the arithmetic mean of the spatial and temporal quality indices. JVQ achieves good correlation with subjective scores and has low computational cost compared with existing state-of-the-art metrics.
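The pooling structure described in the abstract, suppress sub-JND spatial distortions, mask distortions at pixels with large temporal variation, then average the two indices, can be sketched in a few lines. This is a minimal toy illustration, not the paper's actual model: the quality formula `1 / (1 + mean error)`, the threshold values `jnd_s` and `jnd_t`, and the function names are all assumptions made for demonstration.

```python
import numpy as np

def jnd_gated_quality(ref, dist, jnd=4.0):
    """Toy spatial quality: absolute errors below the JND threshold are
    treated as imperceptible (zeroed) before pooling. 1.0 = identical."""
    err = np.abs(ref.astype(float) - dist.astype(float))
    err[err < jnd] = 0.0                       # sub-JND distortions ignored
    return 1.0 / (1.0 + err.mean())

def jvq_like_index(ref_frames, dist_frames, jnd_s=4.0, jnd_t=8.0):
    """Arithmetic mean of a spatial and a temporal quality index,
    mirroring the paper's final pooling step (details are assumptions)."""
    # Spatial: frame-by-frame JND-gated quality, averaged over the sequence.
    spatial = np.mean([jnd_gated_quality(r, d, jnd_s)
                       for r, d in zip(ref_frames, dist_frames)])
    # Temporal: errors in frame differences; pixels with large temporal
    # variation are masked out, as their distortions are hardly perceivable.
    temporal_scores = []
    for t in range(1, len(ref_frames)):
        dr = ref_frames[t].astype(float) - ref_frames[t - 1].astype(float)
        dd = dist_frames[t].astype(float) - dist_frames[t - 1].astype(float)
        err = np.abs(dr - dd)
        err[np.abs(dr) > jnd_t] = 0.0          # temporal masking
        temporal_scores.append(1.0 / (1.0 + err.mean()))
    temporal = np.mean(temporal_scores)
    return 0.5 * (spatial + temporal)
```

An identical reference/distorted pair scores 1.0, and any supra-threshold distortion lowers the index; the actual JVQ replaces these placeholder error terms with SSIM-based measurements.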



Acknowledgements

This work was supported by the Ministry of Education Malaysia through the Fundamental Research Grant Scheme (Grant Number F02/FRGS/1492/2016).

Author information

Corresponding author: David Boon Liang Bong.


Cite this article

Loh, W. T., & Bong, D. B. L. (2018). A Just Noticeable Difference-Based Video Quality Assessment Method with Low Computational Complexity. Sensing and Imaging, 19, 33. https://doi.org/10.1007/s11220-018-0216-9
