Racking focus and tracking focus on live video streams: a stereo solution

  • Original Article
  • Published in The Visual Computer

Abstract

The ability to produce dynamic depth-of-field effects in live video streams was until recently unique to movie cameras. In this paper, we present a computational camera solution coupled with real-time GPU processing to produce dynamic depth-of-field effects at runtime. We first construct a hybrid-resolution stereo camera from a high-resolution/low-resolution camera pair. We recover a low-resolution disparity map of the scene using GPU-based belief propagation and then upsample it via fast cross/joint bilateral upsampling. With the recovered high-resolution disparity map, we warp the high-resolution video stream to nearby viewpoints to synthesize a light field of the scene. We exploit parallel processing and atomic operations on the GPU to resolve visibility when multiple pixels warp to the same image location. Finally, we generate racking-focus and tracking-focus effects from the synthesized light field. All processing stages are mapped onto NVIDIA's CUDA architecture. Our system produces racking- and tracking-focus effects at 640×480 resolution and 15 fps.
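The paper's kernels are not reproduced on this page, so as an illustration only, here is a minimal CUDA sketch of one plausible way to implement the atomic visibility resolution described above. Every name in it (warpToViewpoint, resolveColors, zbuf, baselineScale) is our own assumption rather than the authors' code; it further assumes rectified views, an 8-bit disparity map, and frames small enough (e.g., 640×480) that a pixel index fits in 24 bits.

```cuda
// Hypothetical sketch (not the authors' code): forward-warp a reference view
// to a nearby viewpoint and resolve visibility with one atomicMax per pixel.
// Disparity goes in the high 8 bits of a packed 32-bit key and the source
// pixel index in the low 24 bits, so the largest-disparity (front-most)
// source pixel wins any collision.
#include <cstdint>
#include <cuda_runtime.h>

__global__ void warpToViewpoint(const uint8_t* disp,     // high-res disparity map
                                unsigned int* zbuf,      // packed winner per target pixel, zeroed beforehand
                                int width, int height,
                                float baselineScale)     // horizontal shift per disparity unit
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int idx = y * width + x;
    uint8_t d = disp[idx];

    // Rectified-stereo assumption: a nearby view shifts each pixel
    // horizontally in proportion to its disparity.
    int tx = x + __float2int_rn(baselineScale * (float)d);
    if (tx < 0 || tx >= width) return;

    // Pack disparity above the source index; atomicMax keeps the
    // front-most contributor when several pixels land on (tx, y).
    unsigned int key = ((unsigned int)d << 24) | ((unsigned int)idx & 0x00FFFFFFu);
    atomicMax(&zbuf[y * width + tx], key);
}

__global__ void resolveColors(const uchar4* src,         // high-res RGBA source frame
                              const unsigned int* zbuf,
                              uchar4* dst, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    unsigned int key = zbuf[i];
    dst[i] = key ? src[key & 0x00FFFFFFu]     // copy the winning source pixel
                 : make_uchar4(0, 0, 0, 0);   // hole: left for inpainting/blending
}
```

Each target viewpoint of the light field would be rendered this way; a synthetic-aperture average of the warped views around a chosen focal plane then yields the racking-focus and tracking-focus effects. The paper's actual bit packing, hole handling, and kernel structure may differ.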

Acknowledgements

This project was partially supported by the National Science Foundation under Grants IIS-CAREER-0845268 and IIS-RI-1016395, and by the Air Force Office of Scientific Research under the YIP Award.

Author information

Correspondence to Zhan Yu.

About this article

Cite this article

Yu, Z., Yu, X., Thorpe, C. et al. Racking focus and tracking focus on live video streams: a stereo solution. Vis Comput 30, 45–58 (2014). https://doi.org/10.1007/s00371-013-0778-4

