
Multidimensional Systems and Signal Processing, Volume 30, Issue 1, pp. 275–294

View synthesis for FTV systems based on a minimum spatial distance and correspondence field

  • Hesamodin Hosseinpour
  • Amir Mousavinia

Abstract

The main drawbacks of virtual view synthesis based on common Depth-Image-Based Rendering (DIBR) algorithms are the image rectification, depth-map estimation, and image de-rectification steps, which add computational load and introduce image distortion. In this paper, an efficient and reliable method based on the concept of a correspondence field and the minimum distance among the spatial positions of corresponding pixels is proposed to synthesize virtual-view images without the image rectification, depth-map estimation, and de-rectification steps. Simulated multi-view images are used to evaluate the proposed algorithm. Compared with DIBR algorithms, simulation results show that, on average, PSNR is 4.37 dB (14.8%) higher, SSIM is 0.057 (6.2%) higher, UNIQUE is 0.13 (20%) higher, running time is 47.34 s (24.5%) lower, and the number of wrong pixels is 4.35 (38.5%) lower.
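The evaluation above compares synthesized views against ground-truth views using full-reference metrics such as PSNR and a wrong-pixel count. A minimal sketch of how such metrics are typically computed (illustrative only: the error threshold and toy images below are assumptions, not the authors' experimental setup):

```python
import numpy as np

def psnr(ref, syn, peak=255.0):
    """Peak signal-to-noise ratio (dB) between a reference view and a synthesized view."""
    mse = np.mean((ref.astype(np.float64) - syn.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

def wrong_pixel_percentage(ref, syn, threshold=10):
    """Percentage of pixels whose absolute intensity error exceeds a threshold
    (the threshold of 10 grey levels is an assumed choice for illustration)."""
    err = np.abs(ref.astype(np.int32) - syn.astype(np.int32))
    return 100.0 * np.mean(err > threshold)

# Toy example: a flat reference view and a slightly noisy "synthesized" view.
rng = np.random.default_rng(0)
ref = np.full((64, 64), 128, dtype=np.uint8)
syn = np.clip(ref.astype(np.int32) + rng.integers(-5, 6, ref.shape), 0, 255).astype(np.uint8)

print(round(psnr(ref, syn), 2))        # high PSNR: the noise is small
print(wrong_pixel_percentage(ref, syn))  # 0.0: no error exceeds the threshold
```

In practice such metrics are averaged over all synthesized views of a test set; SSIM and UNIQUE, also reported in the abstract, are separate perceptual metrics with their own definitions.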

Keywords

Virtual view synthesis · Correspondence field · Image rectification · Epipolar line · Hole filling


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Electrical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
  2. Department of Computer Engineering, K.N. Toosi University, Tehran, Iran
