
Hierarchical Rendering System Based on Viewpoint Prediction in Virtual Reality

  • Conference paper

In: Advances in Computer Graphics (CGI 2020)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12221)


Abstract

Virtual reality (VR) systems use multi-modal interfaces to explore three-dimensional virtual worlds. During exploration, the user may look at different objects of interest or in different directions. The field of view of human vision spans roughly 135°×160°, but the region requiring the highest resolution covers only about 1.5°×2°. It is estimated that in modern VR, only 4% of the pixel resources of the head-mounted display are mapped to the visual center. Allocating more computing resources to the visual center and fewer to the periphery through viewpoint-prediction rendering techniques can therefore greatly speed up scene rendering, especially for VR devices equipped with eye trackers. However, eye trackers are relatively expensive additional equipment and can be harder to use; considerable work also remains in developing eye trackers and integrating them with commercial head-mounted displays. This article therefore uses an eye-head coordination model combined with the saliency of the scene to predict the gaze position, and then uses a hybrid method of Level of Detail (LOD) and mesh degeneration to reduce rendering time and the required computation as much as possible without losing perceived detail.
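The idea sketched in the abstract can be illustrated in a few lines of Python. This is not the authors' implementation: the gain constant, offset limit, and eccentricity thresholds below are hypothetical values chosen for illustration. The sketch predicts gaze from head motion in the spirit of an eye-head coordination model (the eyes tend to lead the head in proportion to head angular velocity), then maps angular distance from the predicted gaze point to an LOD level, reserving full detail for the ~2° foveal region.

```python
def predict_gaze_offset(head_yaw_velocity_deg, k=0.1, limit_deg=15.0):
    """Eye-head coordination proxy: during a head rotation the eyes lead
    the head roughly in proportion to head angular velocity (a linear
    term, as in SGaze-style models). k and limit_deg are illustrative
    constants, not values from the paper."""
    offset = k * head_yaw_velocity_deg
    return max(-limit_deg, min(limit_deg, offset))

def select_lod(eccentricity_deg, num_levels=4):
    """Map angular distance from the predicted gaze point to an LOD
    level: level 0 (full-detail mesh) inside the foveal region, coarser
    meshes farther into the periphery. Thresholds are illustrative."""
    thresholds = [2.0, 10.0, 30.0]  # foveal, parafoveal, near periphery
    for level, t in enumerate(thresholds):
        if eccentricity_deg <= t:
            return level
    return num_levels - 1  # coarsest mesh in the far periphery

# Example: head at yaw 0 turning right at 60 deg/s, so predicted gaze
# leads the head by +6 deg; an object at yaw 5 deg is then only ~1 deg
# from the predicted gaze and keeps its full-detail mesh.
gaze_yaw = 0.0 + predict_gaze_offset(60.0)
print(select_lod(abs(5.0 - gaze_yaw)))  # prints 0
```

A real system would refine this predicted gaze with a saliency map of the rendered scene and apply the chosen level via progressive-mesh or vertex-clustering simplification, but the resource-allocation logic follows this shape.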



Acknowledgement

This work was supported in part by the National Key Research and Development Program of China under Grant 2018YFF0300903, in part by the National Natural Science Foundation of China under Grant 61872241 and Grant 61572316, and in part by the Science and Technology Commission of Shanghai Municipality under Grant 15490503200, Grant 18410750700, Grant 17411952600, and Grant 16DZ0501100.

Author information

Correspondence to Bin Sheng or Lijuan Mao.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Lu, P., Zhu, F., Li, P., Kim, J., Sheng, B., Mao, L. (2020). Hierarchical Rendering System Based on Viewpoint Prediction in Virtual Reality. In: Magnenat-Thalmann, N., et al. (eds.) Advances in Computer Graphics. CGI 2020. Lecture Notes in Computer Science, vol. 12221. Springer, Cham. https://doi.org/10.1007/978-3-030-61864-3_3


  • DOI: https://doi.org/10.1007/978-3-030-61864-3_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-61863-6

  • Online ISBN: 978-3-030-61864-3

  • eBook Packages: Computer Science (R0)
