Abstract
Diverse sensor technologies have advanced dramatically in recent years, making such sensors available in many application areas. The camera, which captures video data, is one of the most useful of these sensors, and cameras have been combined with other sensors, or with one another, to obtain more information. This paper deals with a multi-camera system, which uses several cameras as sensors. Previous multi-camera systems have mainly been used to track a moving object across a wide area. In this paper, we instead set the cameras to focus on the same place in an office, so that the system can provide diverse views of a single event. We model office events, and the modeled events can be recognized from annotated features. Finally, we conduct event recognition, view selection, and event retrieval experiments based on an office scenario to show the usefulness of the proposed system.
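The pipeline the abstract describes (modeling office events, recognizing them from annotated features, then selecting the best camera view) can be sketched as follows. This is a minimal illustrative sketch, not the authors' method: the event names, feature sets, and overlap-based scoring are all assumptions introduced here for clarity.

```python
# Hypothetical sketch: office events are modeled as sets of annotated
# features, an observed feature set is matched against each event model,
# and the camera whose annotations best cover the recognized event is
# selected as the optimal view. All names and scores are illustrative.

EVENT_MODELS = {
    "meeting":      {"multiple_people", "sitting", "table"},
    "presentation": {"standing", "screen_on", "multiple_people"},
    "phone_call":   {"single_person", "sitting", "handset"},
}

def recognize_event(observed):
    """Return the event model with the highest Jaccard overlap with the
    observed annotated features."""
    def score(model):
        return len(model & observed) / len(model | observed)
    return max(EVENT_MODELS, key=lambda name: score(EVENT_MODELS[name]))

def select_view(camera_annotations, event):
    """Pick the camera whose annotations cover the most features of the
    recognized event."""
    model = EVENT_MODELS[event]
    return max(camera_annotations,
               key=lambda cam: len(camera_annotations[cam] & model))

# Usage: features annotated from the current scene, per camera.
observed = {"multiple_people", "sitting", "table", "coffee"}
event = recognize_event(observed)    # -> "meeting"
cameras = {
    "cam1": {"multiple_people", "table"},
    "cam2": {"sitting"},
}
view = select_view(cameras, event)   # -> "cam1"
```

The set-overlap score is only a stand-in for whatever classifier the chapter actually uses; the point is the two-stage structure, recognition first and view selection second.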
© 2009 Springer-Verlag Berlin Heidelberg
Cite this chapter
Park, H.-S., Lim, S., Min, J.-K., Cho, S.-B. (2009). Optimal View Selection and Event Retrieval in Multi-Camera Office Environment. In: Hahn, H., Ko, H., Lee, S. (eds.) Multisensor Fusion and Integration for Intelligent Systems. Lecture Notes in Electrical Engineering, vol. 35. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-89859-7_4
Print ISBN: 978-3-540-89858-0
Online ISBN: 978-3-540-89859-7
eBook Packages: Engineering (R0)