
Motion Perception Based Adaptive Quantization for Video Coding

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNISA,volume 3767))

Abstract

A visual measure for the purpose of video compression is proposed in this paper. The novelty of the proposed scheme lies in combining three human perception models: a motion attention model, an eye-movement-based spatiotemporal visual sensitivity function, and a visual masking model. With the aid of the spatiotemporal visual sensitivity function, the visual sensitivities to DCT coefficients in less attended macroblocks are evaluated. Spatiotemporal distortion masking measures at the macroblock level are then estimated from the visual masking thresholds of the DCT coefficients with low sensitivities. Accordingly, macroblocks that can hide more distortion are assigned larger quantization parameters. Experiments conducted on the basis of H.264 demonstrate that this scheme effectively improves coding efficiency without degrading picture quality.
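As a rough illustration of the adaptive quantization idea summarized above, the Python sketch below assigns a larger quantization parameter to macroblocks with a higher estimated distortion-masking measure and a lower attention weight. It is a minimal sketch under stated assumptions: the masking proxy (local AC energy), the attention map, and the QP mapping (masking_to_qp, base_qp, max_offset, scale) are hypothetical placeholders and are not the paper's DCT-threshold-based measures.

```python
import numpy as np


def macroblock_masking_measure(mb_luma: np.ndarray, attention: float) -> float:
    """Hypothetical spatiotemporal masking measure for one 16x16 macroblock.

    Local AC energy (variance) is used as a crude proxy for the masking
    capability that the paper estimates from low-sensitivity DCT coefficients;
    'attention' in [0, 1] down-weights masking on attended macroblocks so
    they keep a finer quantizer.
    """
    ac_energy = float(np.var(mb_luma))        # texture/activity proxy
    return ac_energy * (1.0 - attention)      # attended MBs are assumed to mask less


def masking_to_qp(masking: float, base_qp: int = 28,
                  max_offset: int = 6, scale: float = 200.0) -> int:
    """Map a masking measure to an H.264-style QP in [0, 51].

    Macroblocks that can hide more distortion get a larger QP (coarser
    quantization); this saturating mapping is an assumption, not the paper's.
    """
    offset = int(round(max_offset * masking / (masking + scale)))
    return int(np.clip(base_qp + offset, 0, 51))


def adaptive_qp_map(luma: np.ndarray, attention_map: np.ndarray,
                    base_qp: int = 28) -> np.ndarray:
    """Per-macroblock QP map for a frame whose dimensions are multiples of 16."""
    h, w = luma.shape
    qp_map = np.empty((h // 16, w // 16), dtype=np.int32)
    for i in range(h // 16):
        for j in range(w // 16):
            mb = luma[16 * i:16 * (i + 1), 16 * j:16 * (j + 1)]
            m = macroblock_masking_measure(mb, float(attention_map[i, j]))
            qp_map[i, j] = masking_to_qp(m, base_qp)
    return qp_map


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.integers(0, 256, size=(144, 176)).astype(np.float64)  # QCIF luma plane
    attention = rng.random((144 // 16, 176 // 16))                    # stand-in attention map
    print(adaptive_qp_map(frame, attention))
```

In an actual encoder this per-macroblock QP map would feed the rate-control or quantization stage; the paper itself derives the masking measure from DCT-domain visual masking thresholds combined with a motion attention model rather than from raw block variance.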





Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Tang, CW. (2005). Motion Perception Based Adaptive Quantization for Video Coding. In: Ho, YS., Kim, H.J. (eds) Advances in Multimedia Information Processing - PCM 2005. PCM 2005. Lecture Notes in Computer Science, vol 3767. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11581772_12


  • DOI: https://doi.org/10.1007/11581772_12

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-30027-4

  • Online ISBN: 978-3-540-32130-9

  • eBook Packages: Computer Science (R0)
