Abstract
Evaluation of information retrieval systems and of specific retrieval technologies is becoming more important as organizations build more complex, fully functional retrieval systems over their corporate data. Interest in retrieval technologies, originally driven by the TREC conferences, has spurred the creation of many test databases and evaluation forums, and has led to considerably more thought and development on techniques for evaluating specific technologies. The availability of large test sets gives researchers and developers a basis for validating that their algorithms scale and will operate in real-world situations. One challenge in evaluation is the ambiguity of the definition of "relevant" and the resulting disagreement between assessors; statistics such as the Kappa coefficient can be used to adjust for these differences. Another challenge is how to evaluate an operational system that is being modified, as opposed to evaluating in a controlled conference setting. The major options for information retrieval system evaluation are described.
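The Kappa coefficient mentioned above is a chance-corrected measure of agreement between relevance assessors. As a minimal sketch, assuming two assessors giving binary relevant/not-relevant judgments over the same set of documents (the function name and sample data are illustrative, not taken from the chapter), Cohen's kappa can be computed as:

    from collections import Counter

    def cohens_kappa(judge_a, judge_b):
        # Chance-corrected agreement between two equal-length lists of labels:
        # kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
        # and p_e the agreement expected by chance from each judge's marginals.
        assert len(judge_a) == len(judge_b) and judge_a
        n = len(judge_a)
        # Observed agreement: fraction of documents both judges label identically.
        p_o = sum(x == y for x, y in zip(judge_a, judge_b)) / n
        # Expected chance agreement from each judge's label frequencies.
        freq_a, freq_b = Counter(judge_a), Counter(judge_b)
        labels = set(freq_a) | set(freq_b)
        p_e = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
        return (p_o - p_e) / (1 - p_e)  # undefined if p_e == 1 (judges never vary)

    # Hypothetical judgments from two assessors over six documents.
    a = ["rel", "rel", "not", "rel", "not", "not"]
    b = ["rel", "not", "not", "rel", "not", "rel"]
    print(round(cohens_kappa(a, b), 2))  # 0.33

For this sample, observed agreement is 4/6 and chance agreement is 0.5, giving a kappa of roughly 0.33: only moderate agreement beyond what chance alone would produce, which illustrates why raw percent agreement overstates assessor consistency.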
Copyright information
© 2011 Springer US
Cite this chapter
Kowalski, G. (2011). Information System Evaluation. In: Information Retrieval Architecture and Algorithms. Springer, Boston, MA. https://doi.org/10.1007/978-1-4419-7716-8_9
Print ISBN: 978-1-4419-7715-1
Online ISBN: 978-1-4419-7716-8
eBook Packages: Computer Science, Computer Science (R0)