
Active Camera System for Object Tracking and Multi-view Observation

Chapter in: 3D Video and Its Applications

Abstract

Most 3D video studios developed so far employ a group of static cameras, and hence the space in which an object can move is strictly constrained in order to guarantee high-resolution, well-focused multi-view observation of the object. This chapter presents a multi-view video capture system with a group of active cameras that cooperatively track an object moving in a wide area to capture high-resolution, well-focused multi-view video data. The novelty of the system lies in its cell-based object tracking and multi-view observation: the scene space is partitioned into a set of disjoint cells, and both camera calibration and object tracking are conducted on a per-cell basis. To evaluate the practical utility of the cell-based object tracking and multi-view observation algorithm, the performance of the system implemented at Kyoto University is demonstrated. The last part of the chapter presents a practical process for designing a system for large-scale sport scenes such as figure skating, which will open up new applications of 3D video.
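To make the cell-based scheme concrete, the following is a minimal, illustrative sketch rather than the chapter's implementation: the scene space is partitioned into disjoint cells, each active camera holds a pan/tilt/zoom preset calibrated offline for every cell, and at capture time all cameras switch to the preset of the cell the tracker reports the object to occupy. All names (Cell, CameraPreset, select_cell, retarget_cameras) and the grid layout are assumptions introduced for illustration only.

```python
# Minimal sketch of cell-based tracking and camera retargeting (illustrative
# only; class and function names are hypothetical, not from the chapter).
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass(frozen=True)
class Cell:
    """A disjoint axis-aligned region of the scene floor (metres)."""
    x_min: float
    x_max: float
    y_min: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x < self.x_max and self.y_min <= y < self.y_max


@dataclass(frozen=True)
class CameraPreset:
    """Pan/tilt/zoom values calibrated offline for one camera and one cell."""
    pan: float
    tilt: float
    zoom: float


def select_cell(cells: List[Cell], position: Tuple[float, float]) -> Cell:
    """Return the unique cell containing the tracked object's ground position."""
    for cell in cells:
        if cell.contains(*position):
            return cell
    raise ValueError("object left the calibrated capture space")


def retarget_cameras(presets: Dict[str, Dict[Cell, CameraPreset]],
                     cell: Cell) -> Dict[str, CameraPreset]:
    """Look up, for every camera, the preset calibrated for the active cell."""
    return {cam_id: per_cell[cell] for cam_id, per_cell in presets.items()}


# Example: a 2x2 grid of 5 m cells and two cameras, each with one preset per cell.
cells = [Cell(x, x + 5.0, y, y + 5.0) for x in (0.0, 5.0) for y in (0.0, 5.0)]
presets = {
    "cam0": {c: CameraPreset(pan=i * 10.0, tilt=-5.0, zoom=1.2) for i, c in enumerate(cells)},
    "cam1": {c: CameraPreset(pan=-i * 10.0, tilt=-5.0, zoom=1.2) for i, c in enumerate(cells)},
}
active = select_cell(cells, position=(6.3, 2.1))   # tracker output (assumed)
commands = retarget_cameras(presets, active)       # one preset command per camera
```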


Notes

  1. We regard the object to be in a cell when the axis of its bounding cylinder is included in the cell (a sketch of this test is given after these notes).

  2. While we do not know whether this speed limit is reasonable, we had to bound the maximum object speed because of the control speed of the off-the-shelf active cameras employed.
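Note 1 defines cell membership via the object's bounding cylinder. The sketch below illustrates that test under the assumptions that cells are axis-aligned boxes and the bounding cylinder stands upright on the floor; Cell3D, BoundingCylinder, and object_in_cell are hypothetical names, not taken from the chapter.

```python
# Minimal sketch of the cell-membership test from Note 1 (illustrative only).
from dataclasses import dataclass


@dataclass(frozen=True)
class Cell3D:
    """An axis-aligned box covering one cell of the capture space (metres)."""
    x_min: float
    x_max: float
    y_min: float
    y_max: float
    z_min: float
    z_max: float


@dataclass(frozen=True)
class BoundingCylinder:
    """Upright bounding cylinder: its axis runs from (x, y, 0) up to (x, y, height)."""
    x: float
    y: float
    height: float


def object_in_cell(cyl: BoundingCylinder, cell: Cell3D) -> bool:
    """True when the cylinder's axis segment lies entirely inside the cell."""
    return (cell.x_min <= cyl.x < cell.x_max and
            cell.y_min <= cyl.y < cell.y_max and
            cell.z_min <= 0.0 and cyl.height <= cell.z_max)


# Example: a 5 m x 5 m x 3 m cell and a 1.8 m tall object standing at (2.0, 3.5).
cell = Cell3D(0.0, 5.0, 0.0, 5.0, 0.0, 3.0)
print(object_in_cell(BoundingCylinder(x=2.0, y=3.5, height=1.8), cell))  # prints True
```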



Copyright information

© 2012 Springer-Verlag London

About this chapter

Cite this chapter

Matsuyama, T., Nobuhara, S., Takai, T., Tung, T. (2012). Active Camera System for Object Tracking and Multi-view Observation. In: 3D Video and Its Applications. Springer, London. https://doi.org/10.1007/978-1-4471-4120-4_3


  • DOI: https://doi.org/10.1007/978-1-4471-4120-4_3

  • Publisher Name: Springer, London

  • Print ISBN: 978-1-4471-4119-8

  • Online ISBN: 978-1-4471-4120-4

  • eBook Packages: Computer Science, Computer Science (R0)
