The Relationship Between User Perception and User Behaviour in Interactive Information Retrieval Evaluation

  • Mengdie Zhuang
  • Elaine G. Toms
  • Gianluca Demartini
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9626)


Measures of user behaviour and user perception have both been used to evaluate interactive information retrieval systems. However, few efforts have been made to understand the relationship between the two. In this paper, we investigated both, using user actions extracted from log files and responses to the User Engagement Scale; both data sets came from a study of people interacting with a novel interface to an image collection while performing a non-purposeful task. Our results suggest that selected behavioural actions are associated with selected user perceptions (i.e., focused attention, felt involvement, and novelty), while typical search and browse actions have no association with aesthetics or perceived usability. This novel finding can lead toward a more systematic user-centered evaluation.


Keywords: User-centered evaluation · User perception evaluation · User behaviour evaluation



Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Mengdie Zhuang (1)
  • Elaine G. Toms (1)
  • Gianluca Demartini (1)

  1. Information School, University of Sheffield, Sheffield, UK
