Abstract
Virtual reality (VR) systems use multi-modal interfaces to explore three-dimensional virtual worlds. During exploration, the user may look at different objects of interest or in different directions. The field of view of human vision spans roughly 135\(^{\circ }\times 160^{\circ }\), but the foveal region requiring the highest resolution covers only about 1.5\(^{\circ }\times 2^{\circ }\). It is estimated that in modern VR, only 4\(\%\) of the pixel resources of the head-mounted display map to the visual center. Therefore, allocating more computing resources to the visual center and fewer to the periphery via viewpoint-prediction rendering techniques can greatly speed up scene rendering, especially on VR devices equipped with eye trackers. However, eye trackers are relatively expensive additional hardware and can be harder to use; moreover, considerable work remains in developing eye trackers and integrating them with commercial head-mounted displays. This article therefore uses an eye-head coordination model combined with the saliency of the scene to predict the gaze position, and then applies a hybrid of Level of Detail (LOD) and mesh degeneration to reduce rendering time and computation as much as possible without losing perceptible detail.
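As a rough illustration of the resource-allocation idea described above (a sketch, not the authors' implementation), the snippet below picks a level of detail for an object from its angular distance to a predicted gaze direction; the eccentricity thresholds and the `select_lod` helper are hypothetical:

```python
import math

def angular_distance(gaze_dir, obj_dir):
    """Angle in degrees between two unit view-space direction vectors."""
    dot = sum(g * o for g, o in zip(gaze_dir, obj_dir))
    dot = max(-1.0, min(1.0, dot))  # clamp to guard against rounding error
    return math.degrees(math.acos(dot))

def select_lod(gaze_dir, obj_dir, thresholds=(2.0, 10.0, 30.0)):
    """Return an LOD index: 0 = full detail near the predicted gaze,
    higher indices = progressively simplified meshes toward the periphery.
    The threshold values (degrees of eccentricity) are illustrative only."""
    ecc = angular_distance(gaze_dir, obj_dir)
    for level, limit in enumerate(thresholds):
        if ecc <= limit:
            return level
    return len(thresholds)  # coarsest mesh outside all thresholds

# An object 1 degree off the predicted gaze stays at full detail (level 0);
# one at 90 degrees eccentricity falls back to the coarsest mesh (level 3).
near = (math.sin(math.radians(1.0)), 0.0, math.cos(math.radians(1.0)))
print(select_lod((0.0, 0.0, 1.0), near))          # -> 0
print(select_lod((0.0, 0.0, 1.0), (1.0, 0.0, 0.0)))  # -> 3
```

In a real renderer the predicted gaze would come from the eye-head coordination model each frame, and the returned index would choose among precomputed simplified meshes.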
Acknowledgement
This work was supported in part by the National Key Research and Development Program of China under Grant 2018YFF0300903, in part by the National Natural Science Foundation of China under Grant 61872241 and Grant 61572316, and in part by the Science and Technology Commission of Shanghai Municipality under Grant 15490503200, Grant 18410750700, Grant 17411952600, and Grant 16DZ0501100.
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Lu, P., Zhu, F., Li, P., Kim, J., Sheng, B., Mao, L. (2020). Hierarchical Rendering System Based on Viewpoint Prediction in Virtual Reality. In: Magnenat-Thalmann, N., et al. Advances in Computer Graphics. CGI 2020. Lecture Notes in Computer Science, vol 12221. Springer, Cham. https://doi.org/10.1007/978-3-030-61864-3_3
Print ISBN: 978-3-030-61863-6
Online ISBN: 978-3-030-61864-3