3D Research, 3:1

On consistent inter-view synthesis for autostereoscopic displays

  • Lam C. Tran
  • Can Bal
  • Christopher J. Pal
  • Truong Q. Nguyen
3DR Express


In this paper we present a novel stereo view synthesis algorithm that enforces inter-view consistency, enabling stereo content to be viewed on autostereoscopic displays. The algorithm finds identical occluded regions within each virtual view and aligns them to extract a surrounding background layer. The background layer for each occluded region is then used with an exemplar-based inpainting method to synthesize all virtual views simultaneously.
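The occluded regions mentioned above arise when a reference view is warped by its disparity map: pixels hidden behind foreground objects have no source in the reference image and appear as holes. The following sketch illustrates that step; the function name, the fractional warp parameter, and the simple rounding-based forward warp are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def warp_to_virtual_view(image, disparity, alpha):
    """Forward-warp a reference view by a fraction `alpha` of its
    disparity map. Pixels that no reference pixel lands on form the
    disoccluded regions that must later be inpainted.
    (Illustrative sketch only, not the paper's exact method.)"""
    h, w = disparity.shape
    warped = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            # Shift each pixel horizontally by a fraction of its disparity.
            xv = int(round(x - alpha * disparity[y, x]))
            if 0 <= xv < w:
                warped[y, xv] = image[y, x]
                filled[y, xv] = True
    holes = ~filled  # one occlusion mask per virtual view
    return warped, holes
```

Running this for several values of `alpha` yields one hole mask per virtual view; the paper's contribution is to align these masks across views so a single background layer can fill all of them consistently.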

Our algorithm requires aligning and extracting a background layer for each occluded region; however, these two steps are performed efficiently, with lower computational complexity than previous approaches based on exemplar inpainting. As a result, it is more efficient than existing algorithms that synthesize one virtual view at a time.

This paper also describes a simplified, GPU-accelerated implementation of the approach in CUDA. The CUDA version has sublinear complexity in the number of views to be generated, which makes it especially useful for generating content for autostereoscopic displays that require many views to operate.

An objective of our work is to allow the user to change depth and viewing perspective on the fly. To further accelerate the CUDA variant of our approach, we present a modified version of the method that warps background pixels from the reference views to a middle view, where an exemplar-based inpainting method fills in the occluded regions. New virtual views are then synthesized on the fly by warping the foreground from the reference images and the background from the filled regions. Our experimental results indicate that the simplified CUDA implementation decreases running time by orders of magnitude with negligible loss in quality.
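The accelerated variant can be summarized as: inpaint the background once in a shared layer, then pay only a cheap warp-and-compose cost per view. The sketch below illustrates that structure; all names are illustrative, and the `inpaint` callback stands in for the exemplar-based method used in the paper.

```python
import numpy as np

def synthesize_views(ref, disparity, fg_mask, num_views, inpaint):
    """Recover a single background layer, fill its holes once with an
    inpainting routine, then compose every virtual view by warping the
    foreground over the shared filled background.
    (Illustrative sketch of the pipeline described above.)"""
    h, w = disparity.shape
    # 1) Background layer: reference pixels with the foreground removed.
    background = ref.copy()
    background[fg_mask] = 0.0
    # 2) Fill the foreground holes once; this cost is shared by all views.
    background = inpaint(background, fg_mask)
    # 3) Per-view cost is only a foreground warp over the shared layer.
    views = []
    for k in range(num_views):
        alpha = k / max(num_views - 1, 1)
        view = background.copy()
        ys, xs = np.nonzero(fg_mask)
        for y, x in zip(ys, xs):
            xv = int(round(x - alpha * disparity[y, x]))
            if 0 <= xv < w:
                view[y, xv] = ref[y, x]
        views.append(view)
    return views
```

Because the inpainting runs once rather than once per view, the per-view work is independent of the inpainting cost, which is consistent with the sublinear scaling in the number of views reported for the CUDA implementation.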


Keywords: Autostereoscopic Display · View Synthesis · CUDA · Disparity



Copyright information

© 3D Display Research Center and Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  1. Video Processing Lab, University of California, San Diego, La Jolla, USA
  2. Département de génie informatique et génie logiciel, École Polytechnique de Montréal, Montréal, Canada
