Methods for Classifying Videos by Subject and Detecting Narrative Peak Points
In 2009, UAIC participated in the VideoCLEF evaluation campaign for the first time. Our group built two separate systems, one for the “Subject Classification” task and one for the “Affect Detection” task. For the first task we created two resources, starting from Wikipedia pages and from pages retrieved with Google, and used two tools for classification: Lucene and Weka. For the second task we extracted the audio component from a given video file using FFmpeg. We then computed the average amplitude for each word in the transcript by applying the Fast Fourier Transform to the audio signal. This paper gives a brief description of our systems’ components.
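The per-word amplitude step described above can be sketched as follows. This is a minimal illustration, not the authors’ exact implementation: it assumes the word’s start and end times are already known (e.g., from the transcript’s time alignment) and takes the average FFT magnitude over that segment as the word’s amplitude.

```python
import numpy as np

def average_amplitude(samples, rate, start, end):
    """Average spectral magnitude of the audio between start and end
    (in seconds), obtained from the FFT of that segment."""
    segment = samples[int(start * rate):int(end * rate)]
    spectrum = np.fft.rfft(segment)          # real-input FFT of the segment
    return float(np.mean(np.abs(spectrum)))  # mean magnitude across bins

# Synthetic example: a one-second 440 Hz tone sampled at 16 kHz,
# standing in for audio extracted from the video with FFmpeg.
rate = 16000
t = np.arange(rate) / rate
tone = np.sin(2 * np.pi * 440 * t)

loud = average_amplitude(2.0 * tone, rate, 0.0, 1.0)
quiet = average_amplitude(0.5 * tone, rate, 0.0, 1.0)
# A louder segment yields a proportionally larger average amplitude,
# since the FFT is linear.
```

Because the FFT is linear, scaling the signal by a factor scales the average magnitude by the same factor, so this measure directly reflects how loudly a word was spoken.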
Keywords: Mean Average Precision · Current Group · Video File · Previous Group · Fast Fourier Transform Algorithm