Encyclopedia of Database Systems

2018 Edition
| Editors: Ling Liu, M. Tamer Özsu

Average Precision at n

  • Nick Craswell
  • Stephen Robertson
Reference work entry
DOI: https://doi.org/10.1007/978-1-4614-8265-9_487




Average Precision at n is a variant of Average Precision (AP) in which only the top n ranked documents are considered (see the entry on Average Precision for its definition). AP is already a top-heavy measure, but it has a recall component because it is normalized by R, the number of relevant documents for the query. In AP@n there are several options for normalization: for example, normalize by n, or normalize by min(n, R).
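The two normalization options above can be sketched as follows. This is a minimal illustration, not code from the entry; the function name and argument layout are assumptions.

```python
def average_precision_at_n(ranked_rels, n, R, normalize_by="min"):
    """AP@n: sum of precision values at relevant documents in the top n,
    divided by a chosen normalizer.

    ranked_rels: relevance (True/False) of each ranked document, best first.
    n: rank cutoff.
    R: total number of relevant documents for the query.
    normalize_by: "n" to divide by n, or "min" to divide by min(n, R).
    """
    hits = 0
    precision_sum = 0.0
    for i, rel in enumerate(ranked_rels[:n], start=1):
        if rel:
            hits += 1
            precision_sum += hits / i  # precision at this relevant document
    denom = n if normalize_by == "n" else min(n, R)
    return precision_sum / denom if denom else 0.0
```

Normalizing by min(n, R) rewards a system that retrieves all R relevant documents within the cutoff, while normalizing by n penalizes queries with fewer than n relevant documents even for a perfect ranking.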

Key Points

The well-known measure Average Precision has a number of lesser-known variants, used in TREC [3] and elsewhere. Before and during TREC-1, it was usual to calculate an 11-point interpolated precision-recall curve and take the average of those 11 precision values, giving an "interpolated AP." In TREC-2 and beyond, the modern non-interpolated AP was introduced, which averages the precision values computed at each relevant document.
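The older interpolated variant can be sketched as follows, using the standard definition of interpolated precision (the maximum precision at any recall level at or above r). The function name is illustrative, not from the entry.

```python
def interpolated_ap_11pt(ranked_rels, R):
    """11-point interpolated AP, as used before and during TREC-1:
    interpolated precision averaged over recall levels 0.0, 0.1, ..., 1.0.

    ranked_rels: relevance (True/False) of each ranked document, best first.
    R: total number of relevant documents for the query.
    """
    # Record (recall, precision) after each rank.
    hits = 0
    pr = []
    for i, rel in enumerate(ranked_rels, start=1):
        if rel:
            hits += 1
        pr.append((hits / R, hits / i))
    # Interpolated precision at recall r: max precision at any recall >= r.
    points = []
    for k in range(11):
        r = k / 10
        points.append(max((p for rec, p in pr if rec >= r), default=0.0))
    return sum(points) / 11
```

By contrast, the modern non-interpolated AP simply sums precision at each relevant document's rank and divides by R, with no interpolation step.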

A number of other AP variants arise in a precision-oriented setting, where it is possible to calculate Average...

Recommended Reading

  1. Baeza-Yates RA, Ribeiro-Neto B. Modern information retrieval. Reading: Addison-Wesley; 1999.
  2. Hawking D, Craswell N, Bailey P, Griffiths K. Measuring search engine quality. Inf Retr. 2001;4(1):33–59.
  3. Voorhees EM, Harman DK. TREC: experiment and evaluation in information retrieval. Cambridge, MA: MIT Press; 2005.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Microsoft Research Cambridge, Cambridge, UK

Section editors and affiliations

  • Weiyi Meng
  1. Dept. of Computer Science, State University of New York at Binghamton, Binghamton, USA