A Continuum between Browsing and Query-Based Search for User-Centered Multimedia Information Access

  • Julien Ah-Pine
  • Jean-Michel Renders
  • Marie-Luce Viaud
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6535)

Abstract

Information seeking in a multimedia database very often implies a search process that is complex, dynamic and multi-faceted. Moreover, the information need with respect to a topic is likely to evolve during a single search session, going from a simple lookup search to a thorough discovery of connected subtopics. We propose a system that aims at addressing these challenges by smoothly coupling serendipitous browsing and query-based search. The proposed system offers two levels of visualizing the context of the information seeking task, one global and one local, and it also allows the user to view and search the data using either monomodal or cross-modal similarities. Furthermore, the system integrates a new relevance feedback model that takes into account the multimodal nature of the data in a flexible way, together with a combination of two parameters, the locality and forgetting factors, that allows the user to design adaptive metrics in the interactive search process. The paper also presents a preliminary user-centered evaluation of our system and concludes with an analysis of the evaluation results.
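To make the abstract's relevance feedback idea concrete, the following is a minimal sketch of a Rocchio-style update (the classical model the paper builds on) extended with hypothetical forgetting and locality terms. All names, signatures, and default values here are illustrative assumptions, not the authors' actual model: the forgetting factor down-weights older feedback exponentially by its age, and the locality factor pulls the query toward the user's current position in the collection.

```python
import numpy as np

def feedback_update(query, relevant, nonrelevant, current_pos,
                    alpha=1.0, beta=0.75, gamma=0.15,
                    forgetting=0.8, locality=0.5):
    """Hypothetical Rocchio-style update with forgetting and locality factors.

    query       -- current query vector for one modality
    relevant    -- list of (vector, age) pairs of positive feedback;
                   age 0 is the most recent interaction
    nonrelevant -- same structure for negative feedback
    current_pos -- vector of the user's current focus while browsing
    forgetting  -- in (0, 1]: older feedback contributes less
    locality    -- weight pulling the query toward the current focus
    """
    def weighted_mean(pairs):
        # Exponentially discount feedback by its age in the session.
        if not pairs:
            return np.zeros_like(query)
        weights = np.array([forgetting ** age for _, age in pairs])
        vecs = np.array([v for v, _ in pairs])
        return (weights[:, None] * vecs).sum(axis=0) / weights.sum()

    updated = (alpha * query
               + beta * weighted_mean(relevant)
               - gamma * weighted_mean(nonrelevant)
               + locality * current_pos)
    # Normalize so the query stays comparable across iterations.
    return updated / (np.linalg.norm(updated) + 1e-12)
```

With `forgetting=1.0` and `locality=0.0` this reduces to plain Rocchio feedback; lowering `forgetting` makes the metric adapt faster to the user's most recent judgments, which is the behavior the abstract attributes to these two parameters.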

Keywords

Digital Library · Relevance Feedback · Information Seeking · Multimedia Database · Multimedia Object
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Julien Ah-Pine (1)
  • Jean-Michel Renders (1)
  • Marie-Luce Viaud (2)

  1. Xerox Research Centre Europe, Meylan, France
  2. Institut National de l’Audiovisuel, Bry-sur-Marne Cedex, France