
Real-time ultra-high definition multiview glasses-free 3D display system

  • Original Article
  • The Visual Computer

Abstract

The design and implementation of a real-time ultra-high definition (UHD) multiview glasses-free 3D display system is hampered by high transmission bandwidth, high memory cost, and high computational complexity. This paper presents a glasses-free 3D display system based on the depth-image-based rendering (DIBR) technique that addresses these problems. The system converts a V+D (RGB plus depth, RGBD) video stream in real time into a multiview representation suitable for a multiview autostereoscopic display. Because the V+D video format is used, the transmission bandwidth is reduced. To lower the memory cost, we introduce an asymmetric shift-sensor camera setup that avoids external memory usage and reduces the storage required for the multiple views. To lower the computational complexity, this camera setup ensures that all virtual views can be generated with the same DIBR algorithm. In addition, we simplify the view fusion step so that the subpixels of the multiple views are rearranged with low complexity into a single glasses-free 3D image. Moreover, we propose a hardware architecture for the system and implement it on a field-programmable gate array (FPGA). Simulation results show that the system supports UHD V+D video for an 8-view glasses-free 3D display, and the performance evaluation shows that it provides reasonably good stereoscopic image quality when appropriate system parameters are applied.
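The data path described above has two stages that can be prototyped in software before committing to hardware: a shift-sensor DIBR warp that turns one V+D frame into several horizontally displaced virtual views, and a sub-pixel rearrangement that fuses those views into a single interleaved frame for the autostereoscopic panel. The NumPy sketch below illustrates both stages; it is not the authors' FPGA design, and the depth-to-disparity scaling, the 8-view count, the left-fill hole filling, and the cyclic sub-pixel mask are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch of (1) a shift-sensor DIBR warp and (2) sub-pixel view fusion.
# NOT the authors' FPGA design; parameters below are illustrative assumptions.
import numpy as np


def synthesize_view(rgb, depth, shift_scale):
    """Warp an RGB frame by a per-pixel horizontal disparity derived from depth.

    rgb:         (H, W, 3) uint8 reference view
    depth:       (H, W) uint8 depth map (255 = near, 0 = far)
    shift_scale: signed factor mapping normalized depth to pixel disparity
    """
    h, w, _ = rgb.shape
    cols = np.arange(w)
    disparity = np.round(shift_scale * (depth.astype(np.float32) / 255.0)).astype(np.int32)
    out = np.zeros_like(rgb)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):                       # row-wise forward warp
        x_dst = cols + disparity[y]
        valid = (x_dst >= 0) & (x_dst < w)
        out[y, x_dst[valid]] = rgb[y, cols[valid]]
        filled[y, x_dst[valid]] = True
    # crude hole filling: copy the nearest filled pixel to the left
    idx = np.where(filled, cols, 0)
    idx = np.maximum.accumulate(idx, axis=1)
    return out[np.arange(h)[:, None], idx]


def fuse_views(views):
    """Interleave N views into one frame: sub-pixel (x, y, channel c) is taken
    from view (3*x + c + y) mod N, a placeholder for the real lenticular
    sub-pixel mask of the target 8-view display."""
    n = len(views)
    h, w, _ = views[0].shape
    fused = np.empty_like(views[0])
    x_idx, y_idx = np.meshgrid(np.arange(w), np.arange(h))
    for c in range(3):                       # R, G, B sub-pixel channels
        view_id = (3 * x_idx + c + y_idx) % n
        for v in range(n):
            mask = view_id == v
            fused[..., c][mask] = views[v][..., c][mask]
    return fused


if __name__ == "__main__":
    h, w = 270, 480                          # small demo frame; the paper targets UHD (3840x2160)
    rgb = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)
    depth = np.random.randint(0, 256, (h, w), dtype=np.uint8)
    # 8 virtual views with disparities spread around the reference view
    views = [synthesize_view(rgb, depth, shift_scale=s) for s in range(-4, 4)]
    frame_3d = fuse_views(views)
    print(frame_3d.shape)                    # (270, 480, 3)
```

Because every output row of both stages depends only on the corresponding input row, a streaming, line-buffered realization is natural, which is consistent with the abstract's claim that external memory can be avoided in the hardware implementation.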





Acknowledgements

This work was jointly supported by the National Natural Science Foundation of China (No. 61201347), the Chongqing Foundation and Advanced Research Project (cstc2016jcyjA0103), and the Entrepreneurship and Innovation Program for Chongqing Overseas Returned Scholars (No. cx2017094).

Author information

Corresponding author

Correspondence to Mingming Liu.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (mp4 3119 KB)


About this article

Cite this article

Liu, R., Liu, M., Zhang, Y. et al. Real-time ultra-high definition multiview glasses-free 3D display system. Vis Comput 35, 303–321 (2019). https://doi.org/10.1007/s00371-018-1508-8
