A Framework for Review, Annotation, and Classification of Continuous Video in Context

  • Tobias Lensing
  • Lutz Dickmann
  • Stéphane Beauregard
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5531)

Abstract

We present a multi-modal video analysis framework for life-logging research. We reference domain-specific approaches and alternative software solutions, then briefly outline the concept and realization of our OS X-based software for experimental research on the segmentation of continuous video using sensor context. The framework facilitates visual inspection, basic data annotation, and the development of sensor fusion-based machine learning algorithms.
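
The page carries no code from the paper itself; purely as an illustrative sketch of what "segmentation of continuous video using sensor context" can look like in practice, the Python snippet below proposes cut points on a video timeline by detecting variance changes in a synthetic accelerometer-magnitude trace. The function name, window size, and threshold are hypothetical assumptions, not the authors' method.

```python
# Hypothetical sketch (not from the paper): propose video segment boundaries
# from a 1-D sensor context signal, e.g. accelerometer magnitude, by flagging
# points where the signal variance before and after differs sharply.
import numpy as np


def propose_boundaries(signal, window=100, threshold=2.0):
    """Return sample indices that are candidate segment boundaries."""
    eps = 1e-9  # guard against log(0) for perfectly flat windows
    scores = np.zeros(len(signal))
    for i in range(window, len(signal) - window):
        left = signal[i - window:i].var()   # activity before point i
        right = signal[i:i + window].var()  # activity after point i
        scores[i] = abs(np.log((right + eps) / (left + eps)))
    candidates = np.flatnonzero(scores > threshold)

    # Consecutive detections around one change point form a run;
    # keep only the strongest point of each run.
    boundaries, run = [], []
    for c in candidates:
        if run and c - run[-1] > window:
            boundaries.append(int(run[np.argmax(scores[run])]))
            run = []
        run.append(c)
    if run:
        boundaries.append(int(run[np.argmax(scores[run])]))
    return boundaries


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic trace: quiet, then high activity (e.g. walking), then quiet.
    trace = np.concatenate([
        rng.normal(0.0, 0.05, 1000),
        rng.normal(0.0, 0.50, 1000),
        rng.normal(0.0, 0.05, 1000),
    ])
    print(propose_boundaries(trace))  # expect cuts near 1000 and 2000
```

The log-ratio of adjacent-window variances is one simple change-point statistic; a sensor fusion pipeline of the kind the abstract mentions would combine several such signals before committing to a segment boundary.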

Keywords

Artificial Intelligence · Information Visualization · Human-Computer Interaction · Visual Analytics

References

  1. Bush, V.: As we may think. The Atlantic Monthly 176(1) (July 1945); NB: Illustrations of the mentioned head-mounted camera appear in Life Magazine 11 (1945)
  2. Mann, S.: Wearable computing: A first step toward personal imaging. IEEE Computer 30(2), 25–32 (1997)
  3. Billinghurst, M., Starner, T.: Wearable devices: New ways to manage information. Computer 32(1), 57–64 (1999)
  4. Maeda, M.: DARPA ASSIST. Electronic document (February 28, 2005), http://assist.mitre.org/ (retrieved: March 18, 2009)
  5. Microsoft Research Sensors and Devices Group: Microsoft Research SenseCam. Electronic document, http://research.microsoft.com/sendev/projects/sensecam/ (retrieved: March 18, 2009)
  6. Smeaton, A.F.: Content vs. context for multimedia semantics: The case of SenseCam image structuring. In: Proceedings of the First International Conference on Semantics and Digital Media Technology, pp. 1–10 (2006)
  7. Doherty, A., Byrne, D., Smeaton, A.F., Jones, G., Hughes, M.: Investigating keyframe selection methods in the novel domain of passively captured visual lifelogs. In: Proc. Intl. Conference on Image and Video Retrieval (2008)
  8. Lienhart, R., Pfeiffer, S., Effelsberg, W.: Video abstracting. Communications of the ACM 40(12), 54–62 (1997)
  9. Ferman, A.M., Tekalp, A.M.: Efficient filtering and clustering methods for temporal video segmentation and visual summarization. Journal of Visual Communication and Image Representation 9(4), 336–351 (1998)
  10. Kubat, R., DeCamp, P., Roy, B.: TotalRecall: Visualization and semi-automatic annotation of very large audio-visual corpora. In: ICMI 2007: Proceedings of the 9th International Conference on Multimodal Interfaces, pp. 208–215 (2007)
  11. MacNeil, R.: Generating multimedia presentations automatically using TYRO, the constraint, case-based designer's apprentice. In: Proc. VL, pp. 74–79 (1991)
  12. Dickmann, L., Fernan, M.J., Kanakis, A., Kessler, A., Sulak, O., von Maydell, P., Beauregard, S.: Context-aware classification of continuous video from wearables. In: Proc. Conference on Designing for User Experience (DUX 2007) (2007)
  13. Jaeger, H.: Discovering multiscale dynamical features with hierarchical echo state networks. Technical Report 10, Jacobs University/SES (2007)

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Tobias Lensing (1)
  • Lutz Dickmann (1)
  • Stéphane Beauregard (1)
  1. University of Bremen, Bremen, Germany
