
Active Camera System for Object Tracking and Multi-view Observation

  • Takashi Matsuyama
  • Shohei Nobuhara
  • Takeshi Takai
  • Tony Tung

Abstract

Most 3D video studios developed so far employ a group of static cameras, and hence the space in which the object can move is strictly constrained to guarantee high-resolution, well-focused multi-view observation. This chapter presents a multi-view video capture system with a group of active cameras that cooperatively track an object moving in a wide area to capture high-resolution, well-focused multi-view video data. The novelty of the system lies in cell-based object tracking and multi-view observation: the scene space is partitioned into a set of disjoint cells, and both camera calibration and object tracking are conducted on a per-cell basis. To evaluate the practical utility of the cell-based object tracking and multi-view observation algorithm, the performance of the system implemented at Kyoto University is demonstrated. The last part of the chapter presents a practical process for designing a system for large-scale sport scenes such as figure skating, which will open up new applications of 3D video.
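The cell-based idea summarized above can be illustrated with a minimal sketch. This is not the authors' implementation; the class names, cell sizes, and pan/tilt/zoom values below are hypothetical. It only shows the basic scheme: the floor is partitioned into disjoint square cells, each active camera stores a pan/tilt/zoom setting pre-calibrated per cell, and tracking reduces to finding the cell that contains the object and switching every camera to its stored parameters for that cell.

```python
# Minimal sketch of cell-based tracking with active cameras (hypothetical,
# not the authors' code): per-cell pre-calibrated PTZ settings are looked up
# from the cell index that currently contains the tracked object.
from dataclasses import dataclass


@dataclass(frozen=True)
class PTZ:
    pan: float   # degrees
    tilt: float  # degrees
    zoom: float  # focal-length factor


class CellGrid:
    """Partition of a rectangular floor area into disjoint square cells."""

    def __init__(self, width: float, depth: float, cell_size: float):
        self.cell_size = cell_size
        self.cols = int(width // cell_size)
        self.rows = int(depth // cell_size)

    def cell_of(self, x: float, y: float) -> tuple[int, int]:
        """Return the (column, row) index of the cell containing point (x, y)."""
        col = min(max(int(x // self.cell_size), 0), self.cols - 1)
        row = min(max(int(y // self.cell_size), 0), self.rows - 1)
        return col, row


class ActiveCamera:
    """Active camera with one pre-calibrated PTZ setting per cell."""

    def __init__(self, name: str, ptz_table: dict[tuple[int, int], PTZ]):
        self.name = name
        self.ptz_table = ptz_table

    def point_at_cell(self, cell: tuple[int, int]) -> PTZ:
        # A real system would drive the pan-tilt-zoom unit here; this sketch
        # just returns the stored calibration for the requested cell.
        return self.ptz_table[cell]


def track(cameras: list[ActiveCamera], grid: CellGrid,
          object_xy: tuple[float, float]) -> None:
    """Switch every camera to the calibration of the cell the object occupies."""
    cell = grid.cell_of(*object_xy)
    for cam in cameras:
        ptz = cam.point_at_cell(cell)
        print(f"{cam.name}: cell={cell} "
              f"pan={ptz.pan:.1f} tilt={ptz.tilt:.1f} zoom={ptz.zoom:.1f}")


if __name__ == "__main__":
    # Toy 8 m x 8 m floor split into 4 x 4 cells of 2 m; every cell gets a
    # placeholder calibration for illustration only.
    grid = CellGrid(width=8.0, depth=8.0, cell_size=2.0)
    table = {(c, r): PTZ(pan=10.0 * c, tilt=-5.0 * r, zoom=1.5)
             for c in range(4) for r in range(4)}
    cams = [ActiveCamera("cam0", table), ActiveCamera("cam1", table)]
    track(cams, grid, object_xy=(5.3, 1.7))
```

In the actual system, the per-cell calibration table would be built offline for each camera, which is what makes wide-area tracking with high-resolution, well-focused views tractable at run time.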

Keywords

Camera Calibration · Cell Radius · Camera Control · Camera Arrangement · Active Camera


Copyright information

© Springer-Verlag London 2012

Authors and Affiliations

  • Takashi Matsuyama 1
  • Shohei Nobuhara 1
  • Takeshi Takai 1
  • Tony Tung 1
  1. Graduate School of Informatics, Kyoto University, Sakyo, Japan
