In a multimedia document, the semantics are embedded in multiple forms that usually complement each other. For example, a live TV report about a tsunami conveys information far beyond what we read in a newspaper. Therefore, it is necessary to analyze all types of data: image frames, sound tracks, text that can be extracted from image frames, and spoken words that can be deciphered from the audio track [Wang00]. For some applications, automated techniques that process a single medium, for example audio or images, may be error-prone, and multimodal processing is used to improve the overall system accuracy.
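One common way to realize the accuracy gain described above is late fusion: each single-medium analyzer produces a confidence score, and the scores are combined before a decision is made. The sketch below is a minimal illustration of this idea; the modality names, weights, and threshold are assumptions for the example, not values from this chapter.

```python
# Minimal sketch of late fusion across modalities.
# Modality names and weights here are illustrative assumptions.

def fuse_scores(scores, weights):
    """Combine per-modality confidence scores with a weighted average.

    scores  -- dict mapping modality name to a confidence in [0, 1]
    weights -- dict mapping modality name to a non-negative weight
    """
    total_weight = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total_weight

# Audio alone is uncertain (0.4), but image and text evidence push
# the combined confidence above a 0.5 decision threshold.
scores = {"audio": 0.4, "image": 0.7, "text": 0.8}
weights = {"audio": 1.0, "image": 1.0, "text": 1.0}
fused = fuse_scores(scores, weights)
```

With equal weights this reduces to a simple average; in practice the weights would be tuned to reflect how reliable each modality is for the task at hand.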
This chapter guides the reader through three multimedia processing modules: caption/transcript alignment, multimodal story segmentation, and major cast detection in video. Through these examples, the reader can appreciate the necessity and the advantages of multimodal content processing for real-world applications.
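To give a flavor of the first module, caption/transcript alignment can be viewed as matching a noisy word sequence (e.g., recognized speech or closed captions) against a clean transcript. The sketch below uses Python's standard `difflib` sequence matcher as a stand-in aligner; it is not the chapter's method, only an assumed minimal illustration of the alignment problem.

```python
# Hedged sketch of caption/transcript alignment: match a noisy word
# sequence against a clean transcript with a generic sequence aligner.
from difflib import SequenceMatcher

def align_words(noisy_words, transcript_words):
    """Return (noisy_index, transcript_index) pairs of matching words."""
    matcher = SequenceMatcher(a=noisy_words, b=transcript_words,
                              autojunk=False)
    pairs = []
    for block in matcher.get_matching_blocks():
        for k in range(block.size):
            pairs.append((block.a + k, block.b + k))
    return pairs

noisy = ["a", "tsunami", "hit", "the", "cost"]   # "cost": recognition error
ref = ["a", "tsunami", "hit", "the", "coast"]
pairs = align_words(noisy, ref)  # first four words align; "cost" does not
```

Real alignment systems additionally carry timestamps on the noisy side, so each matched transcript word inherits a time code, which is what makes the transcript searchable against the video.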
Copyright information
© 2008 Springer Berlin Heidelberg
Cite this chapter
(2008). Multimodal Processing. In: Introduction to Video Search Engines. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-79337-3_9
DOI: https://doi.org/10.1007/978-3-540-79337-3_9
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-79336-6
Online ISBN: 978-3-540-79337-3