
Multimedia Tools and Applications, Volume 64, Issue 3, pp 535–547

Perceptual auto-regressive texture synthesis for video coding

  • Zhihua Bao
  • Chen Xu
  • Chong Wang

Abstract

Traditional video compression methods treat the statistical redundancy among pixels as the only redundancy to be removed, while perceptual redundancy is neglected entirely. However, it is well known that no criterion is as eloquent as the visual quality of an image. To reach higher compression ratios without perceptually degrading the reconstructed signal, the properties of the human visual system (HVS) need to be better exploited. Recent research indicates that the HVS has different sensitivities to different image content; building on this observation, this paper explores a novel perceptual video coding method that achieves better perceptual coding quality while spending fewer bits. A new texture segmentation method exploiting the just noticeable distortion (JND) profile is first devised to detect and classify texture regions in video scenes. To remove temporal redundancy effectively while preserving high visual quality, an auto-regressive (AR) model is then applied to synthesize the texture regions, and the synthesized regions are combined with the remaining regions, which are encoded by the traditional hybrid coding scheme. To demonstrate its performance, the proposed scheme is integrated into the H.264/AVC video coding system. Experimental results show that, on various sequences with different types of texture regions, the bit-rate can be reduced by 15% to 58% while maintaining good perceptual quality.
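
The auto-regressive texture synthesis step described above can be illustrated with a minimal sketch: AR coefficients are fitted by least squares to a decoded reference texture, and skipped texture pixels are then generated as a linear combination of already-synthesized causal neighbours plus a small innovation term. The sketch below uses NumPy; the function names, the causal neighbour offsets, the noise model, and the demo texture are illustrative assumptions, not the authors' actual implementation, which additionally exploits temporal neighbours and the JND profile.

import numpy as np

# Causal neighbourhood: pixels above and to the left of the current pixel
# (assumed for this sketch; the paper's model may use a different support).
CAUSAL_OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]

def fit_ar_coefficients(ref, offsets=CAUSAL_OFFSETS):
    """Least-squares fit of linear AR coefficients from a reference texture patch."""
    h, w = ref.shape
    rows, targets = [], []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            rows.append([ref[y + dy, x + dx] for dy, dx in offsets])
            targets.append(ref[y, x])
    coeffs, *_ = np.linalg.lstsq(np.asarray(rows, float),
                                 np.asarray(targets, float), rcond=None)
    return coeffs

def synthesize_texture(seed_rows, coeffs, out_shape,
                       offsets=CAUSAL_OFFSETS, noise_std=2.0, rng_seed=0):
    """Raster-scan synthesis: each new pixel is an AR combination of already
    generated causal neighbours plus a small random innovation term."""
    out = np.zeros(out_shape)
    out[:seed_rows.shape[0], :] = seed_rows        # seed region stays fixed
    rng = np.random.default_rng(rng_seed)
    for y in range(seed_rows.shape[0], out_shape[0]):
        for x in range(out_shape[1]):
            pred = 0.0
            for c, (dy, dx) in zip(coeffs, offsets):
                yy, xx = y + dy, x + dx
                if 0 <= yy < out_shape[0] and 0 <= xx < out_shape[1]:
                    pred += c * out[yy, xx]
            out[y, x] = pred + rng.normal(0.0, noise_std)
    return np.clip(out, 0, 255)

# Example: learn the AR model from a synthetic 64x64 texture block and
# extend it downwards by 32 rows, standing in for a skipped texture region.
if __name__ == "__main__":
    yy, xx = np.mgrid[0:64, 0:64]
    ref = 128 + 60 * np.sin(0.3 * xx) * np.cos(0.2 * yy)
    coeffs = fit_ar_coefficients(ref)
    synthesized = synthesize_texture(ref, coeffs, (96, 64))
    print(synthesized.shape)

In a coder following this idea, only the AR parameters (and the JND-driven segmentation map) need to be signalled for the texture regions, while the decoder regenerates the texture itself, which is the source of the bit-rate savings reported in the abstract.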

Keywords

Perceptual video coding · HVS · Texture synthesis · JND · Auto-regressive model

Notes

Acknowledgment

This work was supported by the National Hi-Tech Development 863 Program of China under grant No. 2007AA01Z330 and by the open project of the Jiangsu Provincial Key Lab of ASIC Design under grant No. JSICK0910.

Copyright information

© Springer Science+Business Media, LLC 2011

Authors and Affiliations

  1. School of Electronics and Information, Nantong University, Jiangsu, China