On consistent inter-view synthesis for autostereoscopic displays
In this paper we present a novel stereo view synthesis algorithm that is highly accurate with respect to inter-view consistency, thus enabling stereo content to be viewed on autostereoscopic displays. The algorithm finds identical occluded regions within each virtual view and aligns them to extract a surrounding background layer. The background layer for each occluded region is then used with an exemplar-based inpainting method to synthesize all virtual views simultaneously.
Our algorithm requires aligning and extracting a background layer for each occluded region; however, these two steps have lower computational complexity than previous approaches based on exemplar-based inpainting. It is therefore more efficient than existing algorithms that synthesize one virtual view at a time.
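The disocclusion holes that this background-layer machinery must fill arise whenever a reference view is warped along its per-pixel disparity to a new viewpoint. A minimal sketch of such a warp is given below; the function name `warp_view`, the hole marker value, and the z-buffer conflict resolution are our own illustrative choices, not the paper's implementation:

```python
import numpy as np

def warp_view(ref, disparity, alpha):
    """Forward-warp a single-channel reference view to a virtual viewpoint.

    alpha scales the per-pixel disparity (alpha=0 reproduces the reference
    view; alpha=1 shifts each pixel by its full disparity). Target pixels
    that receive no source pixel are left as -1, marking the disocclusion
    holes that a subsequent inpainting step must fill.
    """
    h, w = ref.shape
    out = np.full((h, w), -1.0)       # -1 marks disocclusion holes
    zbuf = np.full((h, w), -np.inf)   # keep the pixel with larger disparity
    for y in range(h):
        for x in range(w):
            xt = int(round(x + alpha * disparity[y, x]))
            # nearer pixels (larger disparity) win write conflicts
            if 0 <= xt < w and disparity[y, x] > zbuf[y, xt]:
                out[y, xt] = ref[y, x]
                zbuf[y, xt] = disparity[y, x]
    return out
```

Running the warp with a uniform disparity of one pixel shifts the whole view right by one column and leaves a one-column hole at the left border, which is exactly the kind of region the background layer is extracted for.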
This paper also describes a simplified, GPU-accelerated version of the approach implemented in CUDA. The CUDA implementation has sublinear complexity in the number of views to be generated, which makes it especially useful for generating content for autostereoscopic displays that require many views to operate.
An objective of our work is to let the user change depth and viewing perspective on the fly. To further accelerate the CUDA variant of our approach, we therefore present a modified method that warps background pixels from the reference views to a middle view, fills the occluded regions there with an exemplar-based inpainting method, and then synthesizes new virtual views on the fly by warping the foreground from the reference images and the background from the filled regions. Our experimental results indicate that the simplified CUDA implementation decreases running time by orders of magnitude with negligible loss in quality.
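The hole-filling step described above can be sketched with a deliberately simplified stand-in: instead of full exemplar-based inpainting, the sketch propagates, along each scanline, the neighboring pixel with the smaller disparity, i.e. the background side of the disocclusion. The function name `fill_holes_background` and this scanline heuristic are our illustrative simplifications, not the paper's method:

```python
import numpy as np

def fill_holes_background(view, disparity):
    """Fill disocclusion holes (marked -1) in a warped single-channel view.

    For each hole pixel, the nearest valid pixels to its left and right on
    the same scanline are located, and the one with the smaller disparity
    (the background candidate) is copied in. This captures the intuition
    that disocclusions should be filled from the background, not the
    foreground object that uncovered them.
    """
    out = view.copy()
    h, w = view.shape
    for y in range(h):
        for x in range(w):
            if out[y, x] != -1:
                continue
            l = x - 1
            while l >= 0 and out[y, l] == -1:
                l -= 1
            r = x + 1
            while r < w and out[y, r] == -1:
                r += 1
            candidates = []  # (disparity, value) pairs from both sides
            if l >= 0:
                candidates.append((disparity[y, l], out[y, l]))
            if r < w:
                candidates.append((disparity[y, r], out[y, r]))
            if candidates:
                # smaller disparity = farther away = background
                out[y, x] = min(candidates)[1]
    return out
```

In the real pipeline this filling is done once in the middle view, so the same recovered background can then be warped to every virtual view, which is what keeps the per-view cost low.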
Keywords: Autostereoscopic Display · View Synthesis · CUDA · Disparity