Selecting Success Criteria: Experiences with an Academic Library Catalogue

  • Paul Clough
  • Paula Goodale
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8138)


Multiple methods exist for evaluating search systems, ranging from user-oriented approaches to those focused on system performance. When preparing an evaluation, key questions include: (i) why conduct the evaluation, (ii) what should be evaluated, and (iii) how the evaluation should be conducted. In recent years there has been greater focus on the end users of search systems and on understanding what they view as ‘success’. In this paper we consider what to evaluate; in particular, which criteria users of search systems consider most important and whether this varies by user characteristics. Drawing on our experience of evaluating an academic library catalogue, we gathered input from end users on the perceived importance of different evaluation criteria prior to conducting an evaluation. We analyse the results to show which criteria users value most, together with the inter-relationships between them. Our results highlight the necessity of conducting multiple forms of evaluation to ensure that search systems are deemed successful by their users.


Keywords: Evaluation · success criteria · digital libraries





Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Paul Clough (1)
  • Paula Goodale (1)

  1. Information School, University of Sheffield, Sheffield, UK
