
Evaluating Real Patent Retrieval Effectiveness

  • Anthony Trippe
  • Ian Ruthven
Part of The Information Retrieval Series book series (INRE, volume 29)

Abstract

In this chapter we consider the nature of Information Retrieval evaluation for patent searching. We outline the challenges involved in conducting patent searches and the commercial risks inherent in them. We highlight the main difficulties in reconciling how retrieval systems are evaluated in the laboratory with the needs of patent searchers, and conclude with suggestions for developing more informative evaluation procedures for patent searching.

Keywords

Information Retrieval · Relevant Document · High Recall · Information Retrieval System · Relevant Material


Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  1. 3LP Advisors, Dublin, USA
  2. Department of Computer and Information Sciences, University of Strathclyde, Glasgow, UK
