Abstract
Most 3D video studios developed so far employ a group of static cameras, and hence the space in which the object can move is strictly constrained to guarantee high-resolution, well-focused multi-view observation. This chapter presents a multi-view video capture system with a group of active cameras that cooperatively track an object moving in a wide area to capture high-resolution, well-focused multi-view video data. The novelty of the system rests in cell-based object tracking and multi-view observation: the scene space is partitioned into a set of disjoint cells, and both camera calibration and object tracking are conducted on a per-cell basis. To evaluate the practical utility of the cell-based object tracking and multi-view observation algorithm, the performance of the system implemented at Kyoto University is demonstrated. The last part of the chapter presents a practical process for designing a system for large-scale sport scenes such as figure skating, which will open up new applications of 3D video.
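The cell-based idea above can be illustrated with a minimal sketch. This is not the authors' implementation: the grid layout, the `Cell`/`make_grid`/`locate` names, and the use of an axis-aligned floor partition are all assumptions made for illustration. It shows only the core data structure, a partition of the floor plane into disjoint cells, and the lookup that tells the system which cell the object currently occupies so that cameras calibrated for that cell can be assigned to it.

```python
# Illustrative sketch of cell-based scene partitioning (assumed layout, not
# the chapter's actual implementation): the floor is divided into disjoint
# square cells, and tracking reduces to finding the occupied cell.
from dataclasses import dataclass


@dataclass(frozen=True)
class Cell:
    # Axis-aligned footprint of one cell on the floor plane (metres).
    x_min: float
    x_max: float
    y_min: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        # Half-open intervals keep neighbouring cells disjoint.
        return self.x_min <= x < self.x_max and self.y_min <= y < self.y_max


def make_grid(width: float, depth: float, cell_size: float) -> list:
    """Partition a width x depth floor into disjoint square cells."""
    cells = []
    nx = int(width // cell_size)
    ny = int(depth // cell_size)
    for i in range(nx):
        for j in range(ny):
            cells.append(Cell(i * cell_size, (i + 1) * cell_size,
                              j * cell_size, (j + 1) * cell_size))
    return cells


def locate(cells, x: float, y: float):
    """Return the index of the cell containing the floor point, or None."""
    for k, c in enumerate(cells):
        if c.contains(x, y):
            return k
    return None
```

Because the cells are disjoint, at most one index is ever returned, which is what lets calibration data and camera assignments be stored per cell without ambiguity.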
Notes
1. We regard the object to be in a cell when the axis of its bounding cylinder is included in the cell.
2. While we do not know whether this speed limit is reasonable in general, we had to cap the maximum speed because of the control speed of the off-the-shelf active cameras employed.
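The membership rule in Note 1 can be sketched directly. Since the bounding cylinder is vertical, its axis projects to a single point on the floor, so the test reduces to a 2-D point-in-rectangle check; the function name and the half-open cell convention below are assumptions for illustration, not the chapter's code.

```python
# Sketch of the cell-membership rule from Note 1 (assumed formulation): the
# object is "in" a cell when the axis of its vertical bounding cylinder,
# i.e. a single point on the floor plane, lies inside that cell.
def object_in_cell(axis_x: float, axis_y: float,
                   x_min: float, x_max: float,
                   y_min: float, y_max: float) -> bool:
    """True when the cylinder-axis floor point lies in the half-open cell."""
    return x_min <= axis_x < x_max and y_min <= axis_y < y_max
```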
Copyright information
© 2012 Springer-Verlag London
Cite this chapter
Matsuyama, T., Nobuhara, S., Takai, T., Tung, T. (2012). Active Camera System for Object Tracking and Multi-view Observation. In: 3D Video and Its Applications. Springer, London. https://doi.org/10.1007/978-1-4471-4120-4_3
Publisher Name: Springer, London
Print ISBN: 978-1-4471-4119-8
Online ISBN: 978-1-4471-4120-4