Encyclopedia of Database Systems

2009 Edition
| Editors: LING LIU, M. TAMER ÖZSU

Average Precision at n

  • Nick Craswell
  • Stephen Robertson
Reference work entry
DOI: https://doi.org/10.1007/978-0-387-39940-9_487

Synonyms

 AP@n

Definition

Average Precision at n is a variant of Average Precision (AP) in which only the top n ranked documents are considered (see the entry on Average Precision for its definition). AP is already a top-heavy measure, but it has a recall component because it is normalized by R, the number of relevant documents for a query. AP@n admits several normalization options: for example, normalizing by n or by min(n, R).
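The two normalization options above can be illustrated with a short sketch. The function below is a minimal illustrative implementation, not code from the entry; the function name, parameter names, and the `"n"`/`"min"` flag are assumptions for this example.

```python
def average_precision_at_n(ranked_relevance, n, R, normalize_by="min"):
    """Sketch of AP@n over the top n ranked documents.

    ranked_relevance: 0/1 relevance judgments in rank order.
    R: total number of relevant documents for the query.
    normalize_by: "n" or "min", the two options mentioned in the entry
                  (normalize by n, or by min(n, R)).
    """
    relevant_seen = 0
    precision_sum = 0.0
    for rank, rel in enumerate(ranked_relevance[:n], start=1):
        if rel:
            relevant_seen += 1
            precision_sum += relevant_seen / rank  # precision at this rank
    denom = n if normalize_by == "n" else min(n, R)
    return precision_sum / denom if denom else 0.0
```

For a ranking with relevance judgments [1, 0, 1, 0, 0], n = 5, and R = 3, the precision values at the two relevant documents are 1/1 and 2/3; normalizing by min(5, 3) = 3 gives roughly 0.556, while normalizing by n = 5 gives roughly 0.333.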

Key Points

The well-known measure Average Precision has a number of lesser-known variants, used in TREC [3] and elsewhere. Before and during TREC-1, it was usual to calculate an 11-point interpolated Precision-Recall curve, and take the average of these 11 precision values, giving an “interpolated AP.” In TREC-2 and beyond, the modern non-interpolated AP was introduced. It calculates precision at each relevant document.
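The 11-point interpolated AP described above can be sketched as follows. This is an illustrative reconstruction of the TREC-1-era convention, not code from the entry; the function and variable names are assumptions.

```python
def interpolated_ap_11pt(ranked_relevance, R):
    """Sketch of 11-point interpolated AP: the average of interpolated
    precision at recall levels 0.0, 0.1, ..., 1.0, where interpolated
    precision at recall r is the maximum precision achieved at any
    recall level >= r.
    """
    # (recall, precision) at each relevant document in the ranking
    points = []
    relevant_seen = 0
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            relevant_seen += 1
            points.append((relevant_seen / R, relevant_seen / rank))
    # interpolate precision at each of the 11 standard recall levels
    interpolated = []
    for level in (k / 10 for k in range(11)):
        candidates = [p for (recall, p) in points if recall >= level]
        interpolated.append(max(candidates) if candidates else 0.0)
    return sum(interpolated) / 11
```

By contrast, the modern non-interpolated AP simply averages the precision values observed at each relevant document, as the sketch under the Definition section computes before normalization.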

A number of other AP variants arise in a precision-oriented setting, where it is possible to calculate Average...

Recommended Reading

  1. Baeza-Yates R.A. and Ribeiro-Neto B. Modern Information Retrieval. Addison-Wesley, Reading, MA, 1999.
  2. Hawking D., Craswell N., Bailey P., and Griffiths K. Measuring search engine quality. Inf. Retr., 4(1):33–59, 2001.
  3. Voorhees E.M. and Harman D.K. TREC: Experiment and Evaluation in Information Retrieval. MIT Press, Cambridge, MA, 2005.

Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  • Nick Craswell, Microsoft Research Cambridge, Cambridge, UK
  • Stephen Robertson, Microsoft Research Cambridge, Cambridge, UK