
A Use Case Framework for Information Access Evaluation

  • Preben Hansen
  • Anni Järvelin
  • Gunnar Eriksson
  • Jussi Karlgren
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8830)

Abstract

Information access is no longer only a question of retrieving topical text documents in a work-task related context. Information search has become one of the most common uses of personal computers: a daily task for millions of individual users searching for information motivated by needs that may be momentary or continuous. Instead of professionally edited text documents, multilingual and multimedia content from a variety of sources of varying quality needs to be accessed. The scope of research efforts in the field must therefore be broadened to better capture the mechanisms behind systems' impact, take-up and success in the marketplace. Much work has been carried out in this direction: graded relevance and new evaluation metrics have been introduced, more varied document collections have been used in evaluation, and different search tasks have been studied. The research in the field is, however, fragmented. Although the need for a common evaluation framework is widely acknowledged, such a framework is still not in place. IR system evaluation results are not regularly validated in interactive IR or field studies, and the infrastructure for generalizing interactive IR results over tasks, users and collections is still missing. This chapter presents a use case-based framework for experimental design in the field of interactive information access. Use cases in general connect system design and evaluation to interaction and user goals, and help identify test cases for the different user groups of a system. We suggest that use cases can also provide a useful link between information access system usage and evaluation mechanisms, and thus bring together research from the different related fields. In this chapter we discuss how use cases can guide the development of rich models of users, domains, environments, and interaction, and make explicit how these models are connected to benchmarking mechanisms. We give examples of the central features of the different models. The framework is illustrated by examples that sketch out how it can be productively used in experimental design and reporting, with a minimal threshold for adoption.
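
To make the intended linkage concrete, the sketch below shows one possible way to encode a use case as a record that ties explicit models of the user, domain, environment, and interaction to the benchmarking mechanism applied to it. This is a minimal illustration under our own assumptions: the class and field names (UserModel, DomainModel, ndcg_at_k, and so on) and the choice of nDCG as the example metric are illustrative, not the authors' specification.

```python
# Minimal sketch: a use case bundles explicit models with an evaluation metric.
# All names and fields are illustrative assumptions, not a prescribed schema.
import math
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class UserModel:
    expertise: str          # e.g. "domain expert", "lay searcher"
    goal: str               # e.g. "find a known item", "explore a topic"


@dataclass
class DomainModel:
    collection: str         # e.g. "patent corpus", "web crawl"
    languages: List[str]


@dataclass
class EnvironmentModel:
    device: str             # e.g. "desktop", "mobile"
    setting: str            # e.g. "workplace", "leisure"


@dataclass
class InteractionModel:
    session_length: int     # expected number of queries per session
    relevance_feedback: bool


@dataclass
class UseCase:
    """Bundles the models and names the benchmarking mechanism they map to."""
    name: str
    user: UserModel
    domain: DomainModel
    environment: EnvironmentModel
    interaction: InteractionModel
    # Metric applied to a ranked list of graded relevance gains, e.g. nDCG.
    metric: Callable[[List[int]], float]


def ndcg_at_k(gains: List[int], k: int = 10) -> float:
    """nDCG@k over graded relevance gains with a log2 rank discount."""
    dcg = sum(g / math.log2(i + 2) for i, g in enumerate(gains[:k]))
    ideal = sorted(gains, reverse=True)
    idcg = sum(g / math.log2(i + 2) for i, g in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0


# Example instantiation for a hypothetical casual topical web search use case.
casual_search = UseCase(
    name="casual topical web search",
    user=UserModel(expertise="lay searcher", goal="explore a topic"),
    domain=DomainModel(collection="web crawl", languages=["en", "sv"]),
    environment=EnvironmentModel(device="mobile", setting="leisure"),
    interaction=InteractionModel(session_length=3, relevance_feedback=False),
    metric=lambda gains: ndcg_at_k(gains, k=10),
)
print(casual_search.metric([3, 2, 0, 1, 0]))  # score one ranking's graded gains
```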

Keywords

Evaluation · Benchmarking · Use cases · Interaction


Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Preben Hansen (1)
  • Anni Järvelin (2)
  • Gunnar Eriksson (1)
  • Jussi Karlgren (3)

  1. Department of Computer and Systems Sciences, Stockholm University, Sweden
  2. School of Information Studies, University of Tampere, Finland
  3. Gavagai, Stockholm & School of Computer Science and Communication, KTH, Sweden
