WHOSE – A Tool for Whole-Session Analysis in IIR

  • Daniel Hienert
  • Wilko van Hoek
  • Alina Weber
  • Dagmar Kern
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9022)

Abstract

One of the main challenges in Interactive Information Retrieval (IIR) evaluation is the development and application of reusable tools that allow researchers to analyze the search behavior of real users in different environments and domains, yet with comparable results. Furthermore, IIR has recently focused more on the analysis of whole sessions, covering all user interactions carried out within a single session as well as across several sessions by the same user. Some frameworks have already been proposed for the evaluation of controlled experiments in IIR, but no framework is yet available for the interactive evaluation of search behavior from real-world information retrieval (IR) systems with real users. In this paper we present a framework for whole-session evaluation that can also utilize such uncontrolled data sets. Its logging component can easily be integrated into real-world IR systems to generate and analyze new log data, and a supplementary mapping makes it possible to analyze existing log data as well. For every IR system, different actions and filters can be defined. This allows system operators and researchers to use the framework to analyze user search behavior in their IR systems and to compare it with that of others. Using a graphical user interface, they can interactively explore the data set from a broad overview down to individual sessions.
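
The following minimal sketch illustrates the ideas described above: a per-system action definition, a mapping from an existing (legacy) log record into a canonical session event, and grouping of events into whole sessions for analysis. It is a hypothetical illustration, not the actual WHOSE API; all names (SessionEvent, EXAMPLE_ACTIONS, map_legacy_record, and the raw field names sid, uid, time, type) are assumptions made for this example.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Dict, Iterable, List, Optional

    # Hypothetical canonical event schema; the real WHOSE schema may differ.
    @dataclass
    class SessionEvent:
        session_id: str
        user_id: str
        timestamp: datetime
        action: str      # canonical action name, e.g. "query" or "view_record"
        payload: dict    # action-specific details, e.g. the query string

    # Per-system action definition: raw log event types -> canonical actions.
    EXAMPLE_ACTIONS: Dict[str, str] = {
        "search_submit": "query",
        "doc_click": "view_record",
        "export_click": "export",
    }

    def map_legacy_record(raw: dict, actions: Dict[str, str]) -> Optional[SessionEvent]:
        """Map one record of an existing log into the canonical schema.

        Records whose type is not listed in the action definition are
        filtered out by returning None.
        """
        action = actions.get(raw["type"])
        if action is None:
            return None
        return SessionEvent(
            session_id=raw["sid"],
            user_id=raw["uid"],
            timestamp=datetime.fromisoformat(raw["time"]),
            action=action,
            payload={k: v for k, v in raw.items()
                     if k not in ("sid", "uid", "time", "type")},
        )

    def group_into_sessions(events: Iterable[SessionEvent]) -> Dict[str, List[SessionEvent]]:
        """Group canonical events by session id, ordered by time."""
        sessions: Dict[str, List[SessionEvent]] = {}
        for event in sorted(events, key=lambda e: e.timestamp):
            sessions.setdefault(event.session_id, []).append(event)
        return sessions

In a design along these lines, supporting a new IR system would amount to supplying its action dictionary and, for existing logs, a mapping function like the one above.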

Keywords

Interactive Information Retrieval · Sessions · Analysis · Evaluation · Logging

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Daniel Hienert (1)
  • Wilko van Hoek (1)
  • Alina Weber (1)
  • Dagmar Kern (1)

  1. GESIS – Leibniz Institute for the Social Sciences, Cologne, Germany
