Abstract
Comparative evaluation of methods and systems is of primary importance in the domain of visual indexing and retrieval. As in many other domains, it is generally organized by institutions like NIST or by research networks like Pascal or PetaMedia. Evaluations are carried out in the context of periodic campaigns, or benchmarks. In these, one or more visual indexing or retrieval tasks are defined, each with a data collection, relevance judgments, performance measures and an experimentation protocol. Participants submit results computed automatically and blindly, and the organizers return the measured performances. These evaluation campaigns generally conclude with a workshop in which the participants explain how they performed the tasks. The chapter gives an overview of the major evaluation campaigns in the domain and presents in detail the tasks, the data collections, the metrics and the protocols used. The state-of-the-art performance in recent campaigns and the lessons learned from them are also presented.
Copyright information
© 2012 The Author(s)
Cite this chapter
Quénot, G., Joly, P., Benois-Pineau, J. (2012). Evaluation of visual information indexing and retrieval. In: Visual Indexing and Retrieval. SpringerBriefs in Computer Science. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-3588-4_6
DOI: https://doi.org/10.1007/978-1-4614-3588-4_6
Publisher Name: Springer, New York, NY
Print ISBN: 978-1-4614-3587-7
Online ISBN: 978-1-4614-3588-4
eBook Packages: Computer Science (R0)