
Cluster Computing, Volume 22, Supplement 5, pp. 11741–11753

An evaluation framework for auto-conversion of 2D to 3D video streaming using depth profile and pipelining technique in handheld cellular devices

  • P. S. Ramesh
  • S. Letitia

Abstract

The main objective of this paper is to propose a Reliable and Speedy Communication for Upstream Emergency (RESCUE) framework for creating 3D remote sensing videos. The proposed framework occupies little memory on mobile devices by means of an efficient pipelining process. Many methods for converting 2D video to 3D have been proposed, but most assume high-quality 2D source material. We propose a pre-classification step that, based on the quality attributes of the stream, selects between a model trained on a dataset and heuristics. The evaluation framework automatically converts remote sensing videos from 2D to 3D using a depth profile together with filtering, segmentation, and sharpening transformations of the video. The traditional Hough transform algorithm is ill-suited to hardware implementation, and colour-histogram-based saliency supports only slowly moving objects across frames, which leads to inordinate delay in 2D-to-3D conversion. The RESCUE framework therefore applies its extended vanishing point and line algorithm to overcome these drawbacks of existing techniques. Compared with JAVRE, the framework is a lightweight process: it utilizes limited phone memory and consumes less processing time.
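To make the depth-profile idea concrete, the sketch below shows one common way a vanishing-point-driven conversion can be realised: each pixel is assigned a depth proportional to its distance from an estimated vanishing point, and a second view is synthesised by disparity shifting. This is a minimal illustrative sketch in NumPy, not the paper's RESCUE implementation; the function names, the linear depth model, and the `max_disp` parameter are assumptions introduced here for illustration.

```python
# Illustrative sketch only: a vanishing-point-based depth profile and a
# crude second-view synthesis. Not the RESCUE pipeline; the depth model
# (linear in distance from the vanishing point) is an assumption.
import numpy as np

def depth_profile_from_vanishing_point(shape, vp):
    """Assign each pixel a depth value in [0, 255] proportional to its
    distance from the vanishing point (vp_x, vp_y): pixels near the
    vanishing point are treated as far from the camera (low value),
    pixels near the image border as close to it (high value)."""
    h, w = shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - vp[0], ys - vp[1])
    return (dist / dist.max() * 255.0).astype(np.uint8)

def synthesize_right_view(frame, depth, max_disp=12):
    """Depth-image-based rendering: shift each pixel horizontally by a
    disparity derived from its depth to obtain a second (right-eye)
    view. Later writes overwrite earlier ones, which crudely handles
    occlusion; disocclusion holes stay black and would need inpainting
    in a real converter."""
    h, w = frame.shape[:2]
    disp = (depth.astype(np.float32) / 255.0 * max_disp).astype(np.int64)
    right = np.zeros_like(frame)
    xs = np.arange(w)
    for y in range(h):
        tx = np.clip(xs - disp[y], 0, w - 1)   # closer pixels shift more
        right[y, tx] = frame[y, xs]
    return right

# Usage on a stand-in frame (a real pipeline would stream video frames):
frame = np.random.randint(0, 256, (360, 640, 3), dtype=np.uint8)
depth = depth_profile_from_vanishing_point(frame.shape, vp=(320, 120))
right = synthesize_right_view(frame, depth)
```

Pairing the original frame with the synthesised view yields the stereo output; in a pipelined design the depth-profile and view-synthesis stages can run on consecutive frames concurrently.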

Keywords

2D video · 3D video · Sharpening · Depth map · Transformation · Filter · Pipelining

References

  1. Angot, L., Huang, W.-J., Liu, K.-C.: A 2D to 3D video and image conversion technique based on a bilateral filter. Proc. SPIE 7526, 75260D (2010)
  2. Baek, N., Kim, K.J.: An artifact detection scheme with CUDA-based image operations. Cluster Comput. 20(1), 749–755 (2017)
  3. Baek, N., Yoo, K.H.: Massively parallel acceleration methods for image handling operations. Cluster Comput. (2017). https://doi.org/10.1007/s10586-017-0788-5
  4. Brox, T., Bruhn, A., Papenberg, N., Weickert, J.: High accuracy optical flow estimation based on a theory for warping. In: Proceedings of the European Conference on Computer Vision, pp. 25–36 (2004)
  5. Dalal, N., Triggs, B.: Histograms of oriented gradients for human detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 886–893 (2005)
  6. Durand, F., Dorsey, J.: Fast bilateral filtering for the display of high-dynamic-range images. ACM Trans. Graph. 21, 257–266 (2002)
  7. Grundmann, M., Kwatra, V., Essa, I.: Auto-directed video stabilization with robust L1 optimal camera paths. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 225–232 (2011)
  8. Guttmann, M., Wolf, L., Cohen-Or, D.: Semi-automatic stereo extraction from video footage. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 136–142 (2009)
  9. Karsch, K., Liu, C., Kang, S.B.: Depth extraction from video using non-parametric sampling. In: Proceedings of the European Conference on Computer Vision, pp. 775–788 (2012)
  10. Khamis, H.S., Cheruiyot, K.W., Kimani, S.: Application of k-nearest neighbour classification in medical data mining. Int. J. Inf. Commun. Technol. Res. 4(4), 121–128 (2014)
  11. Konrad, J., Wang, M., Ishwar, P.: 2D-to-3D image conversion by learning depth from examples. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 16–22 (2012)
  12. Konrad, J., Brown, G., Wang, M., Ishwar, P., Wu, C., Mukherjee, D.: Automatic 2D-to-3D image conversion using 3D examples from the Internet. Proc. SPIE 8288, 82880F (2012)
  13. Liao, M., Gao, J., Yang, R., Gong, M.: Video stereolization: combining motion analysis with user interaction. IEEE Trans. Vis. Comput. Graph. 18(7), 1079–1088 (2012)
  14. Liu, B., Gould, S., Koller, D.: Single image depth estimation from predicted semantic labels. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1253–1260 (2010)
  15. Phan, R., Rzeszutek, R., Androutsos, D.: Semi-automatic 2D to 3D image conversion using scale-space random walks and a graph cuts based depth prior. In: Proceedings of the 18th IEEE International Conference on Image Processing, pp. 865–868 (2011)
  16. Priya, L., Anand, S.: Object recognition and 3D reconstruction of occluded objects using binocular stereo. Cluster Comput. (2017). https://doi.org/10.1007/s10586-017-0891-7
  17. Saxena, A., Chung, S.H., Ng, A.Y.: Learning depth from single monocular images. In: Advances in Neural Information Processing Systems. MIT Press, Cambridge, MA (2005)
  18. Saxena, A., Sun, M., Ng, A.: Make3D: learning 3D scene structure from a single still image. IEEE Trans. Pattern Anal. Mach. Intell. 31(5), 824–840 (2009)
  19. Silberman, N., Fergus, R.: Indoor scene segmentation using a structured light sensor. In: Proceedings of the International Conference on Computer Vision Workshops, pp. 601–608 (2011)
  20. Subbarao, M., Surya, G.: Depth from defocus: a spatial domain approach. Int. J. Comput. Vis. 13(3), 271–294 (1994)
  21. Suresh, A., Varatharajan, R.: Competent resource provisioning and distribution techniques for cloud computing environment. Cluster Comput. (2017). https://doi.org/10.1007/s10586-017-1293-6
  22. Szeliski, R., Torr, P.H.S.: Geometrically constrained structure from motion: points on planes. In: Proceedings of the European Workshop on 3D Structure from Multiple Images of Large-Scale Environments, pp. 171–186 (1998)
  23. Torralba, A., Fergus, R., Freeman, W.T.: 80 million tiny images: a large data set for nonparametric object and scene recognition. IEEE Trans. Pattern Anal. Mach. Intell. 30(11), 1958–1970 (2008)
  24. Tsai, S.F., Cheng, C.C., Li, C.T., Chen, L.G.: A real-time 1080p 2D-to-3D video conversion system. IEEE Trans. Consum. Electron. 57(2), 915–922 (2011)
  25. Wang, M., Konrad, J., Ishwar, P., Jing, K., Rowley, H.: Image saliency: from intrinsic to extrinsic context. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 417–424 (2011)
  26. Lu, Y., Dey, S.: JAVRE: a joint asymmetric video rendering and encoding approach to enable optimized cloud mobile 3D virtual immersive user experience. IEEE J. Emerg. Sel. Topics Circuits Syst. (2016)
  27. Yoo, T., Lee, G., Jung, K., Hong, Y.P., Lee, S.: A study of row-direction reconstruction algorithm in depth map. Cluster Comput. 18(2), 721–731 (2015)
  28. Zhang, R., Tsai, P.S., Cryer, J., Shah, M.: Shape-from-shading: a survey. IEEE Trans. Pattern Anal. Mach. Intell. 21(8), 690–706 (1999)

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Anna University, Chennai, India
  2. Department of Electronics and Communication Engineering, Project Associate-TEQIP, Directorate of Technical Education, Chennai, India
