Relevance in Technicolor

  • Ulises Cerviño Beresi
  • Yunhyong Kim
  • Dawei Song
  • Ian Ruthven
  • Mark Baillie
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6273)

Abstract

In this article we propose the concept of relevance criteria profiles, which provide a global view of user behaviour when judging the relevance of retrieved information. We further propose a plotting technique that gives a session-based overview of the relevance judgement process interlaced with user interactions, allowing the researcher to visualise and quickly detect emerging patterns in both interactions and relevance criteria usage. Using data from a user study conducted between January and August 2008, we discuss by example how these tools support a better understanding of task-based user valuation of documents, which in turn can lead to recommendations for improving end-user services in digital libraries.
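The sketch below illustrates one plausible reading of these two tools, assuming that each relevance judgement in a search session is logged as a (time offset, criterion) pair; the criterion names, the session data, and the plotting choices are illustrative assumptions, not taken from the study itself.

    from collections import Counter
    import matplotlib.pyplot as plt

    # Hypothetical session log: each relevance judgement is recorded as a
    # (seconds into the session, criterion invoked) pair. Criterion names
    # here are illustrative only.
    session = [
        (12, "topicality"), (35, "recency"), (41, "topicality"),
        (78, "authority"), (90, "topicality"), (122, "recency"),
    ]

    # A relevance criteria profile read as a normalised frequency
    # distribution of criteria usage, i.e. a global view of how often each
    # criterion is invoked across the session.
    counts = Counter(criterion for _, criterion in session)
    total = sum(counts.values())
    profile = {criterion: n / total for criterion, n in counts.items()}
    print(profile)  # roughly {'topicality': 0.5, 'recency': 0.33, 'authority': 0.17}

    # A simple session timeline: criteria usage plotted against time,
    # colour-coded per criterion, giving a session-based overview of the
    # judgement process.
    for i, criterion in enumerate(sorted(counts)):
        times = [t for t, c in session if c == criterion]
        plt.scatter(times, [criterion] * len(times),
                    color=plt.cm.tab10(i), label=criterion)
    plt.xlabel("Time into session (s)")
    plt.legend()
    plt.show()

In practice the same profile could be computed per task or per user and compared across sessions; the plotted timeline is only a minimal stand-in for the colour-coded visualisation the paper describes.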

Keywords

Relevance Criterion · Relevance Judgement · Search Session · Metadata Element · Interactive Information Retrieval

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Ulises Cerviño Beresi (1)
  • Yunhyong Kim (1)
  • Dawei Song (1)
  • Ian Ruthven (2)
  • Mark Baillie (2)
  1. School of Computing, The Robert Gordon University
  2. Department of Computer and Information Sciences, University of Strathclyde