
Cell-Based 3D Video Capture Method with Active Cameras

Chapter in the book Image and Geometry Processing for 3-D Cinematography

Abstract

This chapter proposes a 3D video capture method with active cameras that enables the production of 3D video of an object moving through a wide area. Most existing capture methods use fixed cameras and therefore place strong restrictions on allowable object motion: the object cannot move over a wide area. To overcome this limitation, our method partitions the studio space into a set of subspaces called "cells" and performs camera calibration and camera control for object tracking on a per-cell basis. We first formulate the method as an optimization problem and then propose an algorithm to solve it.
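To make the cell-based idea concrete, the following Python sketch is a minimal, hypothetical illustration rather than the chapter's algorithm: it partitions a rectangular studio floor into a regular grid of cells and looks up which cell the tracked object currently occupies, the step after which the active cameras calibrated for that cell would be directed at the object. The grid layout, cell size, and nearest-center rule are assumptions made only for this example.

```python
# Minimal sketch (assumption): partition a rectangular studio floor into a
# regular grid of "cells" and, as the tracked object moves, report which cell
# is active so that the cameras calibrated for that cell can be pointed at it.
# This is an illustrative stand-in, not the method described in the chapter.
from dataclasses import dataclass
import math


@dataclass(frozen=True)
class Cell:
    index: tuple[int, int]       # (row, column) position in the grid
    center: tuple[float, float]  # cell center on the studio floor (meters)


def partition_studio(width: float, depth: float, cell_size: float) -> list[Cell]:
    """Split a width x depth floor into square cells of side cell_size."""
    cols = math.ceil(width / cell_size)
    rows = math.ceil(depth / cell_size)
    return [
        Cell((r, c), ((c + 0.5) * cell_size, (r + 0.5) * cell_size))
        for r in range(rows)
        for c in range(cols)
    ]


def active_cell(cells: list[Cell], obj_xy: tuple[float, float]) -> Cell:
    """Return the cell whose center is closest to the object's floor position."""
    return min(cells, key=lambda cell: math.dist(cell.center, obj_xy))


if __name__ == "__main__":
    cells = partition_studio(width=8.0, depth=6.0, cell_size=2.0)
    # As the object moves, the active cell changes; the pan-tilt-zoom cameras
    # calibrated for that cell would then track the object.
    for position in [(1.0, 1.0), (3.5, 2.0), (7.0, 5.5)]:
        print(position, "->", active_cell(cells, position).index)
```

In the actual system each cell would additionally carry its own calibration data and camera assignment; the sketch only covers the geometric lookup of the active cell.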

Author information

Correspondence to Takashi Matsuyama.


Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Yamaguchi, T., Yoshimoto, H., Matsuyama, T. (2010). Cell-Based 3D Video Capture Method with Active Cameras. In: Ronfard, R., Taubin, G. (eds) Image and Geometry Processing for 3-D Cinematography. Geometry and Computing, vol 5. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-12392-4_8
