A Depth Map Generation Algorithm Based on Saliency Detection for 2D to 3D Conversion
In recent years, 3D movies have attracted increasing attention because of their immersive stereoscopic experience. However, 3D content remains scarce, so estimating depth information from 2D video for 2D-to-3D conversion is increasingly important. In this paper, we present a novel algorithm that estimates depth information from a video via scene classification. To obtain perceptually reliable depth information for viewers, the algorithm first classifies scenes into three categories: landscape, close-up, and linear perspective. For the landscape type, a specific algorithm divides the image into blocks and assigns depth values using the relative height cue. For the close-up type, a saliency-based method enhances the foreground, and the result is combined with a global depth gradient to generate the final depth map. For the linear perspective type, vanishing lines are detected; the computed vanishing point, regarded as the point farthest from the viewer, is assigned the deepest depth value, and every other pixel is assigned a depth value according to its distance from the vanishing point. Finally, after bilateral filtering, depth-image-based rendering (DIBR) is employed to generate stereoscopic virtual views. Experiments show that the proposed algorithm achieves realistic 3D effects and yields satisfactory results, with perception scores of the anaglyph images between 6.8 and 7.8.
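The close-up branch described above, which fuses a saliency map with a global depth gradient, can be illustrated with a minimal NumPy sketch. This is not the paper's exact formulation: the toy saliency map, the top-to-bottom linear gradient, and the blending weight `alpha` are all assumptions introduced here for illustration.

```python
import numpy as np

def global_depth_gradient(h, w):
    # A simple global depth hypothesis: depth grows linearly from the top
    # of the image (far, 0.0) to the bottom (near, 1.0).
    return np.tile(np.linspace(0.0, 1.0, h)[:, None], (1, w))

def fuse_depth(saliency, alpha=0.6):
    # Normalize the saliency map to [0, 1], then blend it with the global
    # gradient; alpha is an assumed weight, not a value from the paper.
    rng = saliency.max() - saliency.min()
    s = (saliency - saliency.min()) / (rng + 1e-8)
    g = global_depth_gradient(*saliency.shape)
    return alpha * s + (1.0 - alpha) * g

# Toy saliency map: a salient square in the middle of a 6x8 "image".
sal = np.zeros((6, 8))
sal[2:4, 3:5] = 1.0
depth = fuse_depth(sal)
```

In the fused map, the salient foreground region receives a larger (nearer) depth value than background pixels in the same row, while the global gradient still dominates the non-salient background.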
Keywords: Depth map · 2D to 3D · DIBR · Saliency
This work was supported by the National Natural Science Foundation of China under Grants 61401137 and 61404043, and the Fundamental Research Funds for the Central Universities under Grant No. J2014HGXJ0083.
Compliance with ethical standards
Conflict of interest
The authors declare that there is no conflict of interest regarding the publication of this paper.