
A new method to create depth information based on lighting analysis for 2D/3D conversion

Journal of Central South University

Abstract

A new method for creating depth information for 2D/3D conversion is proposed. The relative distances between objects are determined from their distances to the light source, whose position is estimated by analyzing the image. The estimated lighting values are used to normalize the image, and a threshold is obtained from a weighted combination of the original and normalized images. Applying this threshold to the original image removes the background, and depth information for the remaining region of interest is calculated from the lighting changes. 3D images converted with the proposed method are used to verify its effectiveness.
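
The abstract describes a pipeline: estimate the light source from the image, normalize the image by the estimated lighting, threshold a weighted combination of the original and normalized images to remove the background, and derive depth in the remaining region from the lighting changes. The following Python sketch illustrates one plausible reading of that pipeline; it is not the authors' implementation, and the Gaussian smoothing scale, the weighting factor alpha, and the depth scaling are assumptions made here for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def depth_from_lighting(gray, alpha=0.5, sigma=15):
    """Illustrative sketch of a lighting-based depth pipeline.

    gray  : 2D array, single-channel image with values in [0, 255]
    alpha : assumed weight between the original and lighting-normalized images
    sigma : assumed smoothing scale for the lighting estimate
    """
    img = gray.astype(np.float64) / 255.0

    # 1. Estimate the global lighting distribution with a heavy low-pass
    #    filter; its brightest region approximates the light source position.
    lighting = gaussian_filter(img, sigma=sigma)

    # 2. Normalize the image by the estimated lighting to suppress its influence.
    normalized = img / (lighting + 1e-6)
    normalized = np.clip(normalized / (normalized.max() + 1e-6), 0.0, 1.0)

    # 3. Derive a threshold from a weighted combination of the original
    #    and normalized images.
    combined = alpha * img + (1.0 - alpha) * normalized
    threshold = combined.mean()

    # 4. Remove the background: keep only pixels above the threshold.
    foreground = img > threshold

    # 5. Assign depth in the foreground from the lighting estimate, so that
    #    regions closer to the light source receive larger (nearer) values.
    depth = np.zeros_like(img)
    depth[foreground] = lighting[foreground]
    depth = (depth - depth.min()) / (depth.max() - depth.min() + 1e-6)
    return depth
```

A stereo pair could then be synthesized from the original image and such a depth map by shifting pixels horizontally in proportion to depth, which is the usual final step in 2D-to-3D conversion.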

Author information

Corresponding author

Correspondence to Sanghun Lee.

Cite this article
Cite this article

Han, H., Lee, G., Lee, J. et al. A new method to create depth information based on lighting analysis for 2D/3D conversion. J. Cent. South Univ. 20, 2715–2719 (2013). https://doi.org/10.1007/s11771-013-1788-0
