
Content vs. Context for Multimedia Semantics: The Case of SenseCam Image Structuring

Invited Keynote Paper
  • Alan F. Smeaton
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4306)

Abstract

Much of the current work on determining multimedia semantics from multimedia artifacts is based on using either context or content. When exploited fully, each can independently provide the content descriptions used to build content-based applications. However, there are few cases where multimedia semantics are determined from an integrated analysis of content and context. In this keynote talk we present one such system, which uses an integrated combination of the two to automatically structure large collections of images taken by a SenseCam, a device from Microsoft Research that passively records a person’s daily activities. This paper describes the post-processing we perform on SenseCam images in order to present a structured, organised visualisation of the highlights of each of the wearer’s days.
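To make the content-plus-context idea concrete, the following is a minimal sketch (not the authors' implementation; the feature representation, thresholds, and boundary rule are illustrative assumptions) of how a day of SenseCam images might be segmented into events by combining visual dissimilarity between consecutive images (content) with the time gap between their capture timestamps (context).

```python
from datetime import timedelta
from math import sqrt


def euclidean(a, b):
    """Euclidean distance between two feature vectors (e.g. colour histograms)."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def segment_day(images, visual_threshold=0.5, time_gap=timedelta(minutes=10)):
    """Split a chronologically ordered, non-empty list of
    (timestamp, feature_vector) pairs into events.

    A new event starts when consecutive images are visually dissimilar
    (content cue) or widely separated in time (context cue).
    Both thresholds are illustrative assumptions, not values from the paper.
    """
    events, current = [], [images[0]]
    for prev, curr in zip(images, images[1:]):
        content_jump = euclidean(prev[1], curr[1]) > visual_threshold
        context_jump = (curr[0] - prev[0]) > time_gap
        if content_jump or context_jump:
            events.append(current)
            current = []
        current.append(curr)
    events.append(current)
    return events
```

A structured day view could then be built by picking one representative keyframe per event, though the selection strategy here is again only an assumption about how such a visualisation might be assembled.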

Keywords

Personal Photo · Multimedia Object · Landmark Detection · Multimedia Information Retrieval · Landmark Image



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Alan F. Smeaton
  1. Adaptive Information Cluster & Centre for Digital Video Processing, Dublin City University, Ireland
