Multimedia Event Detection Using Segment-Based Approach for Motion Feature
Multimedia event detection has become a popular research topic due to the explosive growth of video data. The motion features of a video are often used to detect events because an event may contain specific actions or moving patterns. Raw motion features are first extracted from the entire video and then aggregated to form the final video representation. However, this video-based representation is ineffective for realistic videos because video lengths vary widely and the clues for determining an event may occur in only a small segment of the entire video. In this paper, we propose a segment-based approach for video representation: original videos are divided into segments for feature extraction and classification, while the evaluation is still performed at the video level. Experimental results on recent TRECVID Multimedia Event Detection datasets demonstrate the effectiveness of our approach.
Keywords: Multimedia event detection · Segment-based · Video-based · Dense trajectories
1 Introduction

Multimedia Event Detection (MED) is a challenging task in the TREC Video Retrieval Evaluation (TRECVID). The task is defined as follows: given a collection of test videos and a list of test events, indicate whether each of the test events is present in each of the test videos. The aim of MED is to develop systems that can automatically find videos containing any event of interest, assuming only a limited number of training exemplars are given.
The need for such MED systems is rising because a massive number of videos are produced every day. For example, more than 3 million hours of video are uploaded and over 3 billion hours of video are watched each month on YouTube, the most popular video sharing website. What is needed are tools for automatically processing the video content and looking for the presence of a complex event in such unconstrained videos. Automatic detection of complex events has great potential for many applications in web video indexing and retrieval. In practice, a viewer may only want to watch goal scenes in a long football video, a housewife may need to search for videos that teach her how to make a cake, a handyman may look for how to repair an appliance, or a TV program manager may want to remove violent scenes from a film before it is aired.
However, detecting events in multimedia videos is a difficult task due to both the large content variation and uncontrolled capturing conditions. The video content is extremely diverse even within the same event class. The genres of video are also highly varied, including interviews, home videos, and tutorials. Moreover, the number of events is expected to be extensive for large-scale processing. Each event, in turn, can involve a number of objects and actions in a particular setting (indoors, outdoors, etc.). Furthermore, multimedia videos are typically recorded under uncontrolled conditions such as different lighting, viewpoints, occlusions, complicated camera motions, and cinematic effects. Therefore, it is very hard to model and detect multimedia events.
In this paper, we propose using a segment-based approach to overcome the limitations of both the keyframe-based and video-based approaches. The basic idea is to examine shorter segments instead of using the representative frames or entire video. We can reduce the amount of unrelated information in the final representation, while still benefiting from the temporal information by dividing a video into segments. In particular, we investigate two methods to cut a video into segments. The first method is called uniform sampling, where every segment has an equal length. We choose different segment lengths and use two types of sampling: non-overlapping and overlapping. The overlapped configuration is used to test the influence of dense segment sampling. The second method divides the video based on the shot boundary detection to take into account the boundary information of each segment. Once segments are extracted, we use dense trajectories, a state-of-the-art motion feature proposed by Wang , for the feature extraction. After that, a BoW model is employed for the feature representation. The experimental results on TRECVID MED 2010 and TRECVID MED 2011 showed the improvement of the segment-based approach over the video-based approach. Moreover, a better performance can be obtained by using the overlapping sampling strategy.
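As an illustration of the uniform (non-overlapping) sampling described above, the segment extraction step can be sketched as follows; the function name and the handling of the leftover tail frames are our own choices, not specified in the paper:

```python
def uniform_segments(num_frames, seg_len):
    """Split a video of num_frames frames into non-overlapping,
    fixed-length (start, end) frame ranges. The last segment keeps
    any leftover frames rather than being discarded."""
    segments = []
    start = 0
    while start < num_frames:
        end = min(start + seg_len, num_frames)
        segments.append((start, end))
        start = end
    return segments
```

Each returned range would then be fed independently to the feature extraction and bag-of-words steps, while evaluation remains at the video level.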
The rest of this paper is organized as follows. Section 2 introduces the related work. Section 3 gives an overview of the dense trajectory motion feature and our segment-based approach. The experimental setup including an introduction to the benchmark dataset and the evaluation method are presented in Section 4. Then, in Section 5, we present and analyze our experimental results. Detailed discussions of these results are presented in Section 6. Finally, Section 7 concludes the paper with discussions on our future work.
2 Related Work
Since the challenge debuted at TRECVID 2010, Multimedia Event Detection has drawn the attention of many researchers. Seven teams participated in the debut challenge and 19 teams participated in the following year (MED 2011). Many MED systems have been built, and different strategies have been used for event detection.
The Columbia University (CU) team achieved the best result in TRECVID MED 2010, and their success greatly influenced later MED systems. In their paper , they answered two important questions. The first was, “What kind of feature is more effective for multimedia event detection?”. The second was, “Are features from different feature modalities (e.g., audio and visual) complementary for event detection?”. To answer the first question, different kinds of features were studied, such as SIFT  as the image feature, STIP  as the motion feature, and MFCC (Mel-frequency cepstral coefficients)  as the audio feature. In general, the STIP motion feature is the best single feature for MED. However, a system should combine strong complementary features from multiple modalities (both visual and audio) in order to achieve better results.
The IBM team  built the runner-up MED system in TRECVID 2010. They incorporated information from a wide range of static and dynamic visual features to build their baseline detection system. For the static features, they used the local SIFT  and GIST  descriptors and various global features such as Color Histogram, Color Correlogram, Color Moments, and Wavelet Texture. For the dynamic feature, they used the STIP  feature with a combined HOG-HOF  descriptor.
The Nikon MED 2010 system  is also a remarkable system due to its simple but effective solution. They built a MED system based on the assumption that a small number of images in a given video contain enough information for event detection. Thus, they reduced the event detection task to the classification problem for a set of images, called keyframes. However, keyframe extraction is based on a scene cut detection technique  that is less reliable in realistic videos. Moreover, the scene length is not consistent, which may affect the detection performance.
The BBN Viser system  achieved the best performance at TRECVID MED 2011. Their success confirmed the effectiveness of the multiple modalities approach for multimedia event detection. In their work, they further investigated the performance of the appearance features (e.g., SIFT ), color feature (e.g. RGB-SIFT ), and motion (e.g., STIP ), and also MFCC  based audio features. Different kinds of fusion strategies have been explored, from which the novel non-parametric fusion strategy based on a video specific weighted average fusion has shown promising results.
In general, most systems used the multiple-modalities approach to exploit different visual cues to build their baseline detection systems. Static image characteristics are extracted from frames within the provided videos. Columbia University’s results  suggest that methods for exploiting semantic content from web images, such as  and , are not effective for multimedia event detection. For motion characteristics, most systems employed the popular STIP proposed by Laptev in  for detecting complex actions. Other systems also took into account the HOG3D  and MoSIFT  motion features. All these systems used a video-based approach for the motion features, i.e., the motion features are extracted from the entire video. IBM’s MED system  also applied the video-based approach, but the video was downsampled to five frames per second. One drawback of this video-based approach is that it may encode unrelated information in the final video representation. In a long video, the event information may occur during a small segment, and the information from the other segments tends to be noisy. That is why it is important to localize the event segment (i.e., where the event happens). This problem has been thoroughly investigated by Yuan et al. , who proposed using a spatio-temporal branch-and-bound search to quickly localize the volume where an action might happen. In , Xu proposed a method to find an optimal frame alignment in the temporal dimension to recognize events in broadcast news. In , a transfer learning method is proposed to recognize simple action events. However, these works are not applicable to complex actions in multimedia event videos.
Different from other approaches, we use a segment-based approach for event detection. We do not try to localize the event volume as Yuan did in . In a simpler way, we use uniform sampling with different segment lengths for our evaluation. We also investigate the benefit of using the shot boundary detection technique in  for dividing a video into segments. Moreover, an overlapped segment sampling strategy is also considered for denser sampling. To the best of our knowledge, no MED system has previously used this approach. We evaluate its performance using the dense trajectories motion feature that was recently proposed by Wang in . The dense trajectories feature has achieved state-of-the-art performance on various video datasets, including challenging ones like YouTube Action and UCF Sports. In TRECVID MED 2012, the dense trajectories feature was also widely used by top-performing systems such as AXES  and BBNVISER . We use the popular “bag-of-words” model in  as our feature representation technique. Finally, we use a Support Vector Machine (SVM) classifier for the training and testing steps.
3 Dense Trajectories and Segment-Based Approach
In this section, we introduce the dense trajectory motion feature proposed by Wang in . We briefly review the trajectory extraction and description method, and present a detailed calculation of the related feature descriptors, especially the Motion Boundary Histogram. Our segment-based approach for motion features is introduced at the end of this section.
3.1 Dense Trajectories
After extracting a trajectory, two kinds of feature descriptors are adopted: a trajectory shape descriptor and a trajectory-aligned descriptor.
Trajectory Shape Descriptor
More complex descriptors can be computed within a space-time volume around the trajectory. The size of the volume is N × N spatial pixels and L temporal frames. This volume is further divided into an nσ × nσ × nτ grid to encode the spatio-temporal information between the features. The default settings for these parameters are N = 32 pixels, L = 15 frames, nσ = 2, and nτ = 3. The features are calculated and aggregated separately in each region. Finally, the features in all regions are concatenated to form a single representation for the trajectory. Three kinds of descriptors have been employed to represent trajectories following this design: the Histogram of Oriented Gradients (HOG), proposed by Dalal et al. in  for object detection; the Histogram of Optical Flow (HOF), used by Laptev in  for human action recognition; and the Motion Boundary Histogram (MBH). The MBH descriptor was also proposed by Dalal et al.  for human detection. Here, the derivatives are computed separately for the horizontal and vertical components of the optical flow Iω = (Ix, Iy). The spatial derivatives are computed for each component of the optical flow field, Ix and Iy, independently. After that, the orientation information is quantized into a histogram, similarly to the HOG descriptor (an 8-bin histogram for each component). Finally, these two histograms are normalized separately with the L2 norm and concatenated to form the final representation. Since the MBH represents the gradient of the optical flow, constant motion information is suppressed and only the information concerning changes in the flow field (i.e., motion boundaries) is kept.
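As a minimal single-cell sketch of the MBH computation just described (spatial derivatives of each flow component, an 8-bin orientation histogram per component weighted by gradient magnitude, separate L2 normalisation, then concatenation), assuming NumPy and ignoring the nσ × nσ × nτ grid aggregation of the full descriptor:

```python
import numpy as np

def mbh_descriptor(flow, nbins=8):
    """MBH for a single cell. `flow` is an (H, W, 2) array holding
    the optical flow components (Ix, Iy); the result has 2 * nbins
    dimensions. This is an illustrative sketch, not the authors' code."""
    hists = []
    for c in range(2):                      # horizontal, then vertical component
        gy, gx = np.gradient(flow[..., c])  # spatial derivatives of the flow
        mag = np.sqrt(gx**2 + gy**2)
        ang = np.arctan2(gy, gx) % (2 * np.pi)
        bins = (ang / (2 * np.pi) * nbins).astype(int) % nbins
        hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=nbins)
        norm = np.linalg.norm(hist)         # separate L2 normalisation
        hists.append(hist / norm if norm > 0 else hist)
    return np.concatenate(hists)
```

Note how a constant flow field (pure translation, as produced by a smoothly panning camera) yields zero gradients and hence an empty histogram, which is exactly the camera-motion suppression property discussed below.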
According to the author , the MBH descriptor is the best feature descriptor for dense trajectories. One interesting property of the MBH is that it can cancel out camera motion, which is why it shows significant improvement on realistic action recognition datasets compared to other trajectory descriptors. In this study, we only use the MBH descriptor to test the performance of our proposed segment-based method.
3.2 Segment-Based Approach for Motion Feature
In the segment-based approach described so far, a video is divided into contiguous segments, so information about the semantic boundary of a segment is not taken into account. However, this information is important because it preserves the semantic meaning of each segment. The simplest way to mitigate this drawback is to use denser sampling, such as overlapped segments. We apply the overlapping strategy to the same segment lengths as in the non-overlapping experiments. In practice, we use uniform segment sampling with 50% overlap, which doubles the number of segments in each overlapping experiment.
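The 50% overlapping sampling can be sketched with a hypothetical helper like the following; with overlap = 0.5 the stride between segment starts is half the segment length, which roughly doubles the segment count relative to non-overlapping sampling:

```python
def overlapping_segments(num_frames, seg_len, overlap=0.5):
    """Uniform sampling with overlapping (start, end) frame ranges.
    Consecutive segments start `stride` frames apart; overlap=0
    reduces to the non-overlapping scheme. Illustrative sketch."""
    stride = max(1, int(seg_len * (1 - overlap)))
    segments = []
    start = 0
    while start < num_frames:
        end = min(start + seg_len, num_frames)
        segments.append((start, end))
        if end == num_frames:
            break
        start += stride
    return segments
```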
Another way to extract segments with boundary information is to employ a shot boundary detection technique. For a fast implementation, we use the algorithm proposed in . This technique is also used in the Nikon MED 2010 system . Basically, this method first constructs a space-time image from the input video: we can sample points or calculate the color histogram of each frame, which reduces the 2D frame image to the space dimension of the space-time image, while the time dimension is the number of frames in the video. After obtaining the space-time image, the Canny edge detection algorithm is used to detect vertical lines, and each detected vertical line is considered a scene cut. The method in  also proposed solutions for other kinds of scene transitions, such as fades and wipes. However, from our previous study, this method showed poor results in these cases, so we only adopted the scene cut detection algorithm. Each shot delimited by the detected scene cuts is considered a segment in our experiments.
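The space-time-image idea can be sketched as follows. This is not the cited algorithm: a simple temporal-gradient threshold stands in for the Canny step, and the grey-level histogram reduction and threshold value are our illustrative choices:

```python
import numpy as np

def detect_cuts(frames, nbins=16, threshold=0.5):
    """Sketch of shot-cut detection via a space-time image: each frame
    is reduced to a normalised grey-level histogram (one column of the
    space-time image), and a strong edge across the space dimension
    between consecutive columns is taken as a scene cut. Returns the
    frame indices where cuts occur."""
    # space-time image: one histogram column per frame, shape (nbins, T)
    sti = np.stack([np.histogram(f, bins=nbins, range=(0, 256))[0] / f.size
                    for f in frames], axis=1)
    # vertical-edge strength between columns t and t+1
    edge = np.abs(np.diff(sti, axis=1)).sum(axis=0)
    return [t + 1 for t in range(len(edge)) if edge[t] > threshold]
```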
Our proposed segment-based approach is compared with the video-based one. Note that when the segment length is long enough, a segment becomes the entire video; in that case, the video-based approach can be considered a special case of the segment-based approach.
4 Experimental Setup
List of events and their numbers of positive samples in the event collection set of the MED 2011 dataset:
Attempting a board trick
Feeding an animal
Landing a fish
Working on a woodworking project
Changing a vehicle tire
Getting a vehicle unstuck
Grooming an animal
Making a sandwich
Repairing an appliance
Working on a sewing project
4.2 Evaluation Method
Once all the features are extracted, we use the popular Support Vector Machine (SVM) for classification. In particular, we use the LibSVM library, which is available online, and adopt the one-vs.-rest scheme for multi-class classification. To prepare the data for the classifier, we annotate it in the following way: all the videos/segments from positive videos are considered positive samples, and the remaining videos/segments (in the development set) are chosen as negative samples. For testing, we also use LibSVM to predict the score of each video/segment in the test set. The score of a video is defined as the largest score among its segments. This score indicates how likely the video belongs to an event class.
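The score aggregation rule above (a video's score for an event is the largest score among its segments) can be sketched as follows; the input structure, a mapping from video id to its per-segment SVM scores, is illustrative:

```python
def video_scores(segment_scores):
    """Max-pooling of per-segment classifier scores into per-video
    scores: a video is scored by its best-matching segment."""
    return {vid: max(scores) for vid, scores in segment_scores.items()}
```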
5 Experimental Results
This section presents the experimental results of our proposed approach on the MED 2010 and MED 2011 datasets. We also present the results of combining various segment lengths using the late fusion technique, a simple fusion technique in which the predicted score of each video is the average of that video's scores over all combined runs. For comparison, we also report the performance of our baseline event detection system using the keyframe-based and video-based approaches.
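The late fusion rule described above can be sketched as follows; the input structure, a list of {video id: score} dictionaries, one per combined run, is our illustrative choice:

```python
def late_fusion(runs):
    """Average-score late fusion: the fused score of each video is
    the mean of its predicted scores over all combined runs."""
    fused = {}
    for run in runs:
        for vid, score in run.items():
            fused.setdefault(vid, []).append(score)
    return {vid: sum(ss) / len(ss) for vid, ss in fused.items()}
```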
All the experiments were performed on our grid computers, utilizing up to 252 cores for parallel processing with Matlab code. All the results are reported in terms of Mean Average Precision (MAP). We calculate MAP with the TRECVID evaluation tool from the final score of each video in the test set. The best performing feature is highlighted in bold for each event.
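For reference, a standard Average Precision / MAP computation of the kind the TRECVID evaluation tool reports can be sketched as follows; this is the textbook formulation, not the tool's exact code:

```python
def average_precision(scores, labels):
    """AP for one event: rank videos by decreasing score and average
    the precision at each positive (relevant) rank."""
    ranked = sorted(zip(scores, labels), key=lambda p: -p[0])
    hits, total = 0, 0.0
    for rank, (_, positive) in enumerate(ranked, start=1):
        if positive:
            hits += 1
            total += hits / rank
    return total / hits if hits else 0.0

def mean_average_precision(per_event):
    """MAP: mean of AP over all events. `per_event` is a list of
    (scores, labels) pairs, one per event."""
    aps = [average_precision(s, l) for s, l in per_event]
    return sum(aps) / len(aps)
```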
5.1 On TRECVID MED 2010
5.1.1 Non-Overlapping and Overlapping Sampling
Results on the MED 2010 dataset using non-overlapping sampling.
Assembling a shelter
Batting in a run
Making a cake
Results on the MED 2010 dataset using overlapping sampling.
Assembling a shelter
Batting in a run
Making a cake
5.1.2 Segment Sampling Based on Shot Boundary Detection
Comparison of different segment-based approaches with the video-based approach on the MED 2010 dataset.
Batting in a run
Making a cake
We also included the best results from the segment-based experiments using non-overlapping and overlapping sampling in Table 4 for comparison. In general, our segment-based approach outperforms the video-based approach by more than 3% in terms of MAP. We did not conduct a keyframe-based experiment because we had learned that it is ineffective compared to the video-based approach.
5.2 On TRECVID MED 2011
Results on the MED 2011 dataset using non-overlapping sampling.
Results on the MED 2011 dataset using overlapping sampling.
6 Discussion

6.1 Optimal Segment Length
It is true that the lengths of the event segments differ considerably, even within the same event. Therefore, fixed-length video segments are obviously not the optimal way to describe events. However, as shown in our experiments on the TRECVID MED 2010 and TRECVID MED 2011 datasets, the segment-based approach using the overlapping strategy for extracting segments consistently outperforms the video-based approach.
It would be ideal if the boundary of the event segment could be determined, but this localization problem is difficult. The straightforward way to tackle it is to extract segments based on shot boundary information. This solution is reasonable because the event might be localized in certain shots. However, we obtained unexpected results, due to the unreliability of shot boundary detection on uncontrolled video datasets and because an event segment may span several shots.
The method described in  suggests another way to divide a video into segments: instead of learning a randomized spatial partition for images, we can learn a randomized temporal partition for videos. However, this approach needs sufficient positive training samples, while MED datasets have a small number of positive samples with large variation. It is also not scalable, because learning and testing the best randomized pattern is time-consuming. Therefore, the fixed-length approach, though simple, is still effective.
Comparison of different segment-based approaches with the video-based approach on the MED 2011 dataset.
Regarding scalability, we discuss the storage and computation costs of our experiments. First, our system does not consume much disk storage because we only store the final representation of the videos or segments, not the raw features. We calculated the BoW features directly from the raw feature outputs using a pipeline reading technique. One drawback is that this technique requires a lot of memory. However, we handled this problem by encoding the raw features into smaller chunks and aggregating them to generate the final representation. In this way, we can manage the amount of memory used.
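The chunked encoding idea can be sketched as follows; the nearest-codeword assignment and all parameter names are our assumptions for illustration, since the paper does not give the implementation details:

```python
import numpy as np

def bow_histogram_chunked(features, codebook, chunk=10000):
    """Chunked bag-of-words encoding: raw descriptors are assigned to
    their nearest codeword in small blocks so the full (N, D) descriptor
    matrix never has to sit in memory at once, and the partial counts
    are aggregated into one L1-normalised K-bin histogram.
    Shapes: features (N, D), codebook (K, D)."""
    K = codebook.shape[0]
    hist = np.zeros(K)
    for start in range(0, len(features), chunk):
        block = features[start:start + chunk]
        # squared Euclidean distance from each descriptor to each codeword
        d2 = ((block[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        hist += np.bincount(d2.argmin(axis=1), minlength=K)
    return hist / hist.sum() if hist.sum() else hist
```

The same routine serves both video-level and segment-level representations; only the set of descriptors fed to it changes.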
In our framework, the most time-consuming steps are the feature extraction and representation (using the bag-of-words model). It is worth noting that the computation time for one video is independent of the segment length, which means our segment-based approach has the same feature computation cost as the video-based approach. On the other hand, experiments at the segment level involve more training and testing samples than the video-based approach, so training and testing take more time with the segment-based approach. However, this cost is relatively small compared with the feature extraction and representation cost. For example, when using a grid computer with 252 cores, it took us about 10 h to generate the feature representation for each segment-based experiment on the MED 2010 dataset. Meanwhile, we used a single-core processor for the training and testing, which took only about 4–8 h for training and 2–4 h for testing on each event. For the MED 2011 dataset, the computational cost was around 13 times that of MED 2010 (scaling linearly with the number of videos it contains).
7 Conclusion

In this work, we proposed a segment-based approach for multimedia event detection. We evaluated it using the state-of-the-art dense trajectories motion feature on the TRECVID MED 2010 and TRECVID MED 2011 datasets. Our proposed segment-based approach outperforms the video-based approach in most cases when using a simple non-overlapping sampling strategy. More interestingly, the results are significantly improved when using the segment-based approach with an overlapping sampling strategy. This confirms the effectiveness of our methods on realistic datasets like the MED datasets.
A segment-based approach with an overlapping sampling strategy shows promising results, which suggests the importance of segment localization for MED performance. Given a fixed segment length, we are interested in determining which segment best represents an event. In this study, we also observed that the detection performance is quite sensitive to the segment length, which in turn depends on the dataset. The results obtained with the late fusion strategy are quite stable and close to the peak performance, which suggests a methodical way to generalize the segment-based approach to other datasets. However, this method is not scalable because it incurs substantial computational cost. Therefore, learning an optimal segment length for each event could benefit an event detection system. This is an interesting direction for our future study.
This research is partially funded by Vietnam National University Ho Chi Minh City (VNU-HCM) under grant number B2013-26-01.
- 1. Chen, M., & Hauptmann, A. (2009). MoSIFT: recognizing human actions in surveillance videos. Computer Science Department, CMU-CS-09-161.
- 2. Csurka, G., Dance, C.R., Fan, L., Willamowski, J., Bray, C. (2004). Visual categorization with bags of keypoints. In Workshop on statistical learning in computer vision, ECCV (pp. 1–22).
- 3. Dalal, N., Triggs, B., Schmid, C. (2006). Human detection using oriented histograms of flow and appearance. In European conference on computer vision. Springer.
- 4. Dalal, N., & Triggs, B. (2005). Histograms of oriented gradients for human detection. In International conference on computer vision & pattern recognition (Vol. 2, pp. 886–893). http://lear.inrialpes.fr/pubs/2005/DT05.
- 5. Duan, L., Xu, D., Chang, S.F. (2012). Exploiting web images for event recognition in consumer videos: a multiple source domain adaptation approach. In IEEE conference on computer vision and pattern recognition (CVPR) (pp. 1338–1345). IEEE.
- 8. Hill, M., Hua, G., Natsev, A., Smith, J.R., Xie, L., Huang, B., Merler, M., Ouyang, H., Zhou, M. (2010). IBM Research TRECVID-2010 video copy detection and multimedia event detection system. In NIST TRECVID workshop, Gaithersburg.
- 9. Jiang, Y.G., Ngo, C.W., Yang, J. (2007). Towards optimal bag-of-features for object categorization and semantic video retrieval. In Proceedings of the 6th ACM international conference on image and video retrieval (pp. 494–501).
- 10. Jiang, Y.G., Zeng, X., Ye, G., Bhattacharya, S., Ellis, D., Shah, M., Chang, S.F. (2010). Columbia-UCF TRECVID2010 multimedia event detection: combining multiple modalities, contextual concepts, and temporal matching. In NIST TRECVID workshop, Gaithersburg.
- 11. Jiang, Y., Yuan, J., Yu, G. (2012). Randomized spatial partition for scene recognition. ECCV, 2, 730–743.
- 12. Kläser, A., Marszałek, M., Schmid, C. (2008). A spatio-temporal descriptor based on 3D-gradients. In British machine vision conference (pp. 995–1004). http://lear.inrialpes.fr/pubs/2008/KMS08.
- 13. Laptev, I., Marszałek, M., Schmid, C., Rozenfeld, B. (2008). Learning realistic human actions from movies. In Conference on computer vision & pattern recognition. http://lear.inrialpes.fr/pubs/2008/LMSR08.
- 15. Lee, C.H., Soong, F., Juang, B.H. (1988). A segment model based approach to speech recognition. In International conference on acoustics, speech, and signal processing (ICASSP-88) (Vol. 1, pp. 501–541). doi:10.1109/ICASSP.1988.196629.
- 17. Matsuo, T., & Nakajima, S. (2010). Nikon multimedia event detection system. In NIST TRECVID workshop, Gaithersburg.
- 18. Natarajan, P., Manohar, V., Wu, S., Tsakalidis, S., Vitaladevuni, S.N., Zhuang, X., Prasad, R., Ye, G., Liu, D. (2011). BBN VISER TRECVID 2011 multimedia event detection system. In NIST TRECVID workshop, Gaithersburg.
- 19. Natarajan, P., Natarajan, P., Wu, S., Zhuang, X., Vazquez-Reina, A., Vitaladevuni, S.N., Tsourides, K., Andersen, C., Prasad, R., Ye, G., Liu, D., Chang, S., Saleemi, I., Shah, M., Ng, Y., White, B., Gupta, A., Haritaoglu, I. (2012). BBN VISER TRECVID 2012 multimedia event detection and multimedia event recounting systems. In NIST TRECVID workshop, Gaithersburg, United States.
- 21. Oneata, D., Douze, M., Revaud, J., Jochen, S., Potapov, D., Wang, H., Harchaoui, Z., Verbeek, J., Schmid, C., Aly, R., Mcguiness, K., Chen, S., O'Connor, N., Chatfield, K., Parkhi, O., Arandjelovic, R., Zisserman, A., Basura, F., Tuytelaars, T. (2012). AXES at TRECVid 2012: KIS, INS, and MED. In TRECVID workshop, Gaithersburg, United States. http://hal.inria.fr/hal-00746874.
- 22. van de Sande, K.E.A., Gevers, T., Snoek, C.G.M. (2010). Evaluating color descriptors for object and scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 32, 1582–1596.
- 23. Wang, H., Kläser, A., Schmid, C., Liu, C.L. (2011). Action recognition by dense trajectories. In IEEE conference on computer vision & pattern recognition (pp. 3169–3176), Colorado Springs. http://hal.inria.fr/inria-00583818/en.
Open AccessThis article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.