Abstract
Every information retrieval (IR) model embeds in its scoring function a form of term frequency (TF) quantification. The contribution of the term frequency is determined by the properties of the function of the chosen TF quantification, and by its TF normalization. The first defines how independent the occurrences of multiple terms are, while the second acts on mitigating the a priori probability of having a high term frequency in a document (estimation usually based on the document length). New test collections, coming from different domains (e.g. medical, legal), give evidence that not only document length, but in addition, verboseness of documents should be explicitly considered. Therefore we propose and investigate a systematic combination of document verboseness and length. To theoretically justify the combination, we show the duality between document verboseness and length. In addition, we investigate the duality between verboseness and other components of IR models. We test these new TF normalizations on four suitable test collections. We do this on a well defined spectrum of TF quantifications. Finally, based on the theoretical and experimental observations, we show how the two components of this new normalization, document verboseness and length, interact with each other. Our experiments demonstrate that the new models never underperform existing models, while sometimes introducing statistically significantly better results, at no additional computational cost.
1 Introduction
The development of retrieval models is one of the key aspects of research in information retrieval (IR). The IR models arise from experimental observations about the use of the language, predominantly on collections of documents primarily composed of news corpora. Today, with the almost total digitization of most text produced, it is clear that the textual documents are not just news and that different collections require different approaches (Hanbury and Lupu 2013). Consequently, the field has been driven to deal with different kinds of information types, demonstrated by the creation of new and more domain specific initiatives in the main IR evaluation campaigns: TREC, NTCIR, CLEF, and FIRE. Now, thanks to the observations made in the context of these evaluation campaigns, we are able to revisit some of the original assumptions and extend the models to integrate other collection statistics that reflect the different use of the language in different domains.
Every IR model boils down to a scoring function in which we can distinguish a component that increases with the number of occurrences of a term in a document (a term frequency component, \({\text {TF}}\)) and a component that decreases with the commonality of a term (an inverse document frequency component, \({\text {IDF}}\)). In this paper we focus on the \({\text {TF}}\) component. Its normalization, first introduced by Robertson et al. (1994) for BM25, and then generalized by Singhal et al. (1996) for a generic model, consists in adjusting the within-document term frequency (\(\textit{tf}_d\)) based on the ratio between the document length (\(l_d\)) and its expectation (\(\mathrm {E}_{\mathcal {D}}[l_d]\)), called pivoted document length normalization. The work of Singhal et al. is motivated by the experimental observation that the length pattern of the retrieved documents should match the pattern of the relevant documents. Robertson et al. justify this normalization, later described as ‘soft’ because of the mitigating effect of the division by the mean, by introducing two contrasting hypotheses (Robertson and Zaragoza 2009), named verboseness and multitopicality: (a) the verboseness hypothesis states that some authors need more words to explain something that could have been explained with fewer; (b) the multitopicality hypothesis states that the reason why more words are required is because the author has covered more ground. While the first hypothesis suggests a document should be normalized by its length, the second suggests the contrary.
Recently, Lipani et al. (2015) have brought this discussion back to the attention of the IR community, pointing out that another collection statistic could be embedded in the \({\text {TF}}\) normalization of BM25. This new statistic measures a kind of verboseness, the repetitiveness of terms in a document, and leads to better performance than the standard BM25.
In this paper we address this new observation from the perspective of the established models, and provide a new, general theory. Before doing that, a few general observations are in order.
Retrieval models combine various parameters into a score reflecting the degree to which a document implies a query. The common parameters and rationales are:
 \(\textit{tf}_d\): within-document term frequency; frequent is good
 \(P_D(t|c)\): document-based term probability (aka \({\text {IDF}}(t,c) = \log (1/P_D(t|c))\)); rare is good
 \(P(t|c)\): occurrence-based term probability (LM mixture)
 \(l_d\): document length; to promote short documents
where c is a collection of documents, d is a document, and t is a term. We claim that there are other properties of documents and terms that are important but underrepresented, namely verboseness and the previously introduced burstiness (Roelleke 2013). In this paper we will focus primarily on verboseness, but we will also make some observations on burstiness and its relation with \({\text {IDF}}\). However, before starting, we introduce the notation used.
1.1 Notation
The basic symbols and sets are given in the following table. The notation is based on the proposal made by Roelleke (2013). However, unlike Roelleke, given that here we will not theoretically analyze different collections, we will generally drop the collection c index where convenient and not ambiguous.
\(\mathcal {T}\)  set of terms in the collection 
\(\mathcal {D}\)  set of documents in the collection 
t  a term \(t\in \mathcal {T}\) 
d  a document \(d\in \mathcal {D}\) 
\(|\mathcal {T}|\)  number of terms 
\(|\mathcal {D}|\)  number of documents 
\(l_c\)  length of the collection (number of term occurrences) 
Based on the basic symbols, we define frequencies. Term frequencies, document frequencies, average term frequencies are ambiguous notions. It is important to clarify exactly what symbols mean.
\(l_t\)  number of occurrences of the term t in the collection, here also called term length (aka collection frequency) 
\(\mathcal {D}_t\)  set of documents where t occurs 
\(\mathcal {T}_d\)  set of terms in d 
\(|\mathcal {D}_t|\)  number of documents where t occurs (aka document frequency, \({\text {df}}(t)\)) 
\(|\mathcal {T}_d|\)  number of distinct terms in d 
\(l_d\)  length of document d (number of term occurrences; note \(l_d \ge |\mathcal {T}_d|\)) 
Next, we define the four averages important for this paper. The first two combine in a systematic way the symbols of the previous table.
\(\mathrm {E}_{\mathcal {D}_t}[\textit{tf}_d] = l_t/|\mathcal {D}_t|\)  average frequency of term t in the documents in which the term occurs 
\(\mathrm {E}_{\mathcal {T}_d}[\textit{tf}_d]=l_d/|\mathcal {T}_d|\)  average term frequency of the terms that occur in document d 
\(\bar{l}_d := \mathrm {E}_{\mathcal {D}}[l_d] = l_c/|\mathcal {D}|\)  average document length 
\(\bar{l}_t := \mathrm {E}_{\mathcal {T}}[l_t] = l_c/|\mathcal {T}|\)  average term length 
Note that there are two notions regarding “average term frequency”, \(\mathrm {E}_{\mathcal {D}_t}[\textit{tf}_d]\) and \(\mathrm {E}_{\mathcal {T}_d}[\textit{tf}_d]\). In the first case the average is performed fixing t and averaging across the documents \(\mathcal {D}_t\) containing t, and in the second case the average is performed fixing d and averaging across the terms \(\mathcal {T}_d\) contained therein.
Finally, we introduce the probabilities used in this paper.
\(P(t)=P_L(t)=l_t/l_c\)  location-based probability of \(t\in \mathcal {T}\) 
\(P(d)=P_L(d)=l_d/l_c\)  location-based probability of \(d\in \mathcal {D}\) 
\(P_D(t)=|\mathcal {D}_t|/|\mathcal {D}|\)  document-based probability of \(t\in \mathcal {T}\) 
\(P_T(d)=|\mathcal {T}_d|/|\mathcal {T}|\)  term-based probability of \(d\in \mathcal {D}\) 
As can be seen, in this paper, when mentioning probability (P) with no index we refer to the probability based on locations, i.e. the probability defined on the sample space of term occurrences.
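As a concrete illustration of these counts and probabilities, the following sketch computes them over a tiny invented corpus (the documents and term names are ours, chosen only for illustration):

```python
from collections import Counter

# Toy corpus (invented): each document is a list of term occurrences.
docs = {
    "d1": ["a", "a", "b", "c"],
    "d2": ["b", "b", "b", "d"],
    "d3": ["a", "c", "c", "c", "c", "d"],
}

tf = {d: Counter(ts) for d, ts in docs.items()}                 # tf_d, per term
l_d = {d: len(ts) for d, ts in docs.items()}                    # document lengths
l_c = sum(l_d.values())                                         # collection length
terms = sorted({t for ts in docs.values() for t in ts})         # the set T
l_t = {t: sum(tf[d][t] for d in docs) for t in terms}           # term lengths (collection frequency)
D_t = {t: sum(1 for d in docs if tf[d][t] > 0) for t in terms}  # document frequency |D_t|

P_L = {t: l_t[t] / l_c for t in terms}         # location-based P(t) = l_t / l_c
P_D = {t: D_t[t] / len(docs) for t in terms}   # document-based P_D(t) = |D_t| / |D|

print(l_c)       # 14
print(l_t["c"])  # 5
print(P_D["a"])
```

Note that the location-based probabilities sum to one over the term vocabulary, since every term occurrence is counted exactly once.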
1.2 Motivations
In this section we formally introduce the document verboseness and term burstiness. We then motivate their investigation in IR models.
Verboseness is reflected by the ratio \(l_d/|\mathcal {T}_d|\): the document length divided by the number of distinct terms in the document. The ratio corresponds to the average \(\textit{tf}_d\) (over all terms) in document d:

$$v_d := \frac{l_d}{|\mathcal {T}_d|} = \mathrm {E}_{\mathcal {T}_d}[\textit{tf}_d] \quad (1)$$

A document is verbose if few terms are repeated many times; its domain is \([1, l_d]\): 1 for a non-verbose document (no term occurs more than once), and \(l_d\) for a maximally verbose one (a single term is repeated \(l_d\) times).
Intuitively, the more verbose (repetitive) a document is, the higher is the chance to find a high \(\textit{tf}_d\). In other words, a document has a high score just because words are repeated (e.g. spamming), and therefore, one wants to demote verbose documents in the ranking.
Burstiness is reflected by the ratio \(l_t/|\mathcal {D}_t|\), that is, the length of the term in the collection c (the number of occurrences of the term in c) divided by the number of the collection’s documents where the term t occurs (aka document frequency). The ratio corresponds to the average \(\textit{tf}_d\) (over the documents where the term t occurs) in collection c:

$$b_t := \frac{l_t}{|\mathcal {D}_t|} = \mathrm {E}_{\mathcal {D}_t}[\textit{tf}_d] \quad (2)$$

A term is bursty if it occurs in few documents many times; its domain is \([1, l_t]\): 1 for a non-bursty term (it occurs only once in each document where it is present), and \(l_t\) for a maximally bursty one (all the occurrences are in a single document).
Intuitively, the more bursty a term is, the higher is the chance to find a high \(\textit{tf}_d\). In other words, a bursty term occurs in fewer documents than a nonbursty (a normal) term, and therefore, one wants to promote documents containing bursty terms.
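Both statistics are straightforward to compute; a minimal sketch over an invented toy corpus:

```python
from collections import Counter

# Toy corpus (invented for illustration).
docs = {
    "d1": ["a", "a", "b", "c"],            # l_d=4, 3 distinct terms
    "d2": ["b", "b", "b", "d"],            # l_d=4, 2 distinct terms
    "d3": ["a", "c", "c", "c", "c", "d"],  # l_d=6, 3 distinct terms
}
tf = {d: Counter(ts) for d, ts in docs.items()}

# Verboseness: v_d = l_d / |T_d| (average tf of the terms occurring in d).
v = {d: len(ts) / len(tf[d]) for d, ts in docs.items()}

# Burstiness: b_t = l_t / |D_t| (average tf of t over the documents containing t).
terms = {t for ts in docs.values() for t in ts}
b = {t: sum(tf[d][t] for d in docs) / sum(1 for d in docs if tf[d][t] > 0)
     for t in terms}

print(v["d2"])  # 2.0 : "b" repeated three times makes d2 verbose
print(b["c"])   # 2.5 : "c" occurs 5 times but in only 2 documents
```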
Instead of verboseness and burstiness, scoring functions most often use a normalization of the \(\textit{tf}_d\) based on the document length \(l_d\) (e.g. in the TF component of BM25 and in some versions of TF-IDF).
The contribution of the document length is smoothed by its average, which corresponds to the average \(l_d\) (over all the documents) in collection c:

$$\bar{l}_d = \mathrm {E}_{\mathcal {D}}[l_d] = \frac{l_c}{|\mathcal {D}|}$$

This is then used to calculate the pivoted document length (pivotization indicated in the paper with a hat) as follows:

$$\hat{l}_d = \frac{l_d}{\bar{l}_d}$$
The \(\hat{l}_d\) is greater than 1 for relatively long documents (greater than the average document length), and smaller than 1 for short documents (lower than the average document length).
It is surprising that IR models are keen to capture the \(\hat{l}_d\), but seem to hide away verboseness and burstiness, i.e. there is no parameter explicitly associated with these properties. However we observe that some IR models implicitly use these normalizations.
We investigate which IR models capture verboseness and burstiness, and how the parameters can be made explicit or added. Motivated by the work of Lipani et al. (2015), we formally justify verboseness from its duality with the document length normalization. As a supportive case we also present its duality with the concept of burstiness (Roelleke 2013), and term length (aka collection frequency).
1.3 Contributions and structure
The main contributions of this paper are: (1) The inclusion of document verboseness as an explicit parameter in TF quantifications, showing that verboseness is to be viewed in a similar way as the document length in the TF normalizations; (2) An extensive set of experiments capturing a well-defined spectrum of TF quantifications, whose results for log-based and BM25-based TF quantifications deliver a significant contribution to insights into the effect of TF quantifications, even beyond the TF normalization variants; (3) Theoretical justifications for the way document verboseness and length are combined, considering the dualities between verboseness and other parameters (including the burstiness of terms).
The remainder of the paper is structured as follows: in Sect. 2 we present the background. In Sect. 3, the main contribution of the paper, namely combining document verboseness and length into the normalization parameter \(K_d\) of the TF quantification, is presented. We next review in Sect. 4 the probabilistic foundations of IR models. This highlights the role of parameters such as verboseness, burstiness and document length, and the theoretical justification of \(\text {TF}_{\text {BM25}}\)IDF. In Sect. 5, we report the experimental setup and results, followed by Sect. 6 dedicated to the discussion of the results. Section 7 concludes the paper.
2 Background
The discussion about TF normalization goes back to the two contrasting hypotheses, verboseness and multitopicality, discussed by Robertson and Zaragoza (2009), and to the work of Singhal et al. (1996), where the document length pivotization is justified experimentally. Not much work has been done on the multitopicality hypothesis, but some on the verboseness hypothesis. The problem of how to weight terms dates back further, however, to the work of Salton and Buckley (1988). Na et al. (2008) introduce the concept of repetitiveness to derive a smoothing method for Language Modeling, showing an improvement with respect to other smoothing methods.
Following other work on TF normalization issues, He and Ounis (2005a) apply Dirichlet priors to the TF normalization, following the idea of Amati and Van Rijsbergen (2002), and test it on different test collections (He and Ounis 2003, 2005b). Lv and Zhai (2011b) pointed out that the TF quantification based on document length excessively penalizes very long documents, a problem mitigated by lower-bounding the TF normalization; they also pointed out that in the case of BM25 it can be mitigated by adding a constant to the TF normalization (Lv and Zhai 2011c). Rousseau and Vazirgiannis (2013) generalized the previously mentioned TF normalizations through functional composition. Lv and Zhai (2011a) estimate dynamically the parameter \(k_1\) of BM25, based on a proposed information gain measure.
Lipani et al. (2015) introduce a new variant of BM25, called BM25VA that explicitly incorporates verboseness. This is the main work that motivates this paper. The verboseness is defined as in Eq. (1), and pivoted as \(v_d/\mathrm {E}_{\mathcal {D}}[v_d]\). Verboseness is then added to the \(\text {TF}_{\text {BM25}}\), linearly combining the two contributions through the parameter b, as follows:
In this work, it is heuristically shown that the parameter b is inversely proportional to a statistic of the collection, the average collection verboseness \(\mathrm {E}_{\mathcal {D}}[v_d]\), and that it can be predicted without statistically damaging the performance of the trained BM25.
Another way of approaching the length normalization issue is to consider retrieval of the individual passages (Robertson and Walker 1999). However, this use of passages to address length normalization is theoretically unjustified and introduces a series of decision points (size and nature of passages) that are not the focus of the current study.
3 TF normalisations
Before getting into the details of the duality between document verboseness and length, it is necessary to formally define the current pivotization of document length and introduce the pivotization of verboseness. To do this we start from the foundation of every IR model: the document-term matrix \(A \in \mathbb {N}^{|\mathcal {D}| \times |\mathcal {T}|}\), in which each element is a \(\textit{tf}_d\), indicated here by \(a_{d,t}\) for convenience of notation. For any given matrix, we can define two ways to sum the elements of this matrix; one that fixes a column (a term t) and sums over the rows (the \(|\mathcal {D}|\) documents) and one that fixes a row (a document d) and sums over the columns (the \(|\mathcal {T}|\) terms). Doing this we calculate two lengths: the length of a term and the length of a document, as follows:

$$l_t = \sum _{d \in \mathcal {D}} a_{d,t} \qquad l_d = \sum _{t \in \mathcal {T}} a_{d,t}$$
Now, if we want to compute the average of the values on each row or column, we have to divide the sums obtained above by a value. For this value we actually have two options: the number of columns or rows, and the number of nonzero elements in the columns or rows. The first is what we would call the average, and the second the elite average. To give an intuition, think of the question “What is the average number of Ferraris owned by a person?”. This question has two answers: we can divide the total number of Ferraris (the sum of the elements on a row/column) by the total number of people on the planet (the number of columns/rows); or, we can consider only those people that have at least one Ferrari and then divide the number of Ferraris by the size of this set of people. The first one is the common average, while the second, obviously, is the elite average.
Returning to our document-term matrix, we will denote by a bar (\(\bar{a}\)) a common average and by a breve (\(\breve{a}\)) an elite average:

$$\bar{a}_t = \frac{l_t}{|\mathcal {D}|}, \quad \breve{a}_t = \frac{l_t}{|\mathcal {D}_t|}, \quad \bar{a}_d = \frac{l_d}{|\mathcal {T}|}, \quad \breve{a}_d = \frac{l_d}{|\mathcal {T}_d|}$$
in which we observe that the two elite averages just defined \(\breve{a}_t\) and \(\breve{a}_d\) correspond to the burstiness \(b_t\) as defined in Eq. (2) and the verboseness \(v_d\) as defined in Eq. (1).
Considering now these four averages, \(\bar{a}_t\), \(\breve{a}_t\), \(\bar{a}_d\) and \(\breve{a}_d\), we can think of them as defining an average document \(\bar{d} = [\bar{a}_{t_1}\,\ldots \,\bar{a}_{t_{\mathcal {T}}}]\), an elite average document \(\breve{d} = [\breve{a}_{t_1}\,\ldots \,\breve{a}_{t_{\mathcal {T}}}]\), an average term \(\bar{t} = [\bar{a}_{d_1}\,\ldots \,\bar{a}_{d_{\mathcal {D}}}]\), and an elite average term \(\breve{t} = [\breve{a}_{d_1}\,\ldots \,\breve{a}_{d_{\mathcal {D}}}]\). Moreover, we observe also that the elite average document is equal to \(\breve{d} = [b_{t_1}\,\ldots \,b_{t_{\mathcal {T}}}]\) and the elite average term is equal to \(\breve{t} = [v_{d_1}\,\ldots \,v_{d_{\mathcal {D}}}]\).
So, now, for each row d and for each column t we have a sum, an average, and an elite average. To obtain a collectionlevel statistic, we have to aggregate again, calculating sums and averages (common and elite averages are identical now, because all rows and all columns have a nonzero aggregated value).
Doing so, we observe that

$$\bar{l}_d = \mathrm {E}_{\mathcal {D}}[l_d] = \sum _{t \in \mathcal {T}} \bar{a}_t$$

i.e. the average document length \(\bar{l}_d\) is equal to the sum of the elements of the average document \(\bar{d}\).
However, the same observation is not valid for verboseness, because it is an elite average. Instead, we have two notions:

$$\bar{v}_d := \frac{l_c}{|\mathcal {T}|} = \bar{l}_t \qquad \breve{v}_d := \mathrm {E}_{\mathcal {D}}[v_d]$$
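The calculations of this section can be reproduced in a few lines over a small document-term matrix (the matrix values below are invented for illustration):

```python
import numpy as np

# Document-term matrix A (rows: documents, cols: terms); entries are a_{d,t} = tf_d.
A = np.array([
    [2, 1, 1, 0],
    [0, 3, 0, 1],
    [1, 0, 4, 1],
])

l_t = A.sum(axis=0)                    # term lengths (column sums)
l_d = A.sum(axis=1)                    # document lengths (row sums)

bar_a_t = l_t / A.shape[0]             # common average per term: l_t / |D|
breve_a_t = l_t / (A > 0).sum(axis=0)  # elite average per term:  burstiness b_t
bar_a_d = l_d / A.shape[1]             # common average per doc:  l_d / |T|
breve_a_d = l_d / (A > 0).sum(axis=1)  # elite average per doc:   verboseness v_d

# The average document length equals the sum of the average document's entries.
assert np.isclose(l_d.mean(), bar_a_t.sum())
print(breve_a_d)  # verboseness of each document
```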
A graphical representation of the calculations performed in this section is shown in Fig. 1.
3.1 Duality: document verboseness and length
Recalling the definition of verboseness from Eq. (1), it is the average number of times a document’s term occurs within the document. To observe the duality of document verboseness, Eq. (3), let us first define the notation to identify the singleton of a document \(d \in \mathcal {D}\) as \(\mathcal {D}_d=\{d\}\) and the singleton of a term \(t \in \mathcal {T}\) as \(\mathcal {T}_t=\{t\}\). Obviously \(|\mathcal {D}_d|=|\mathcal {T}_t|=1\), and therefore we can write \(l_d=l_d/|\mathcal {D}_d|\). Let us now consider the pivoted verboseness and pivoted document length, using the two sets of values defined above: \(\bar{l}_d=\breve{l}_d\), \(\bar{v}_d\) and \(\breve{v}_d\):

$$\ddot{v}_d = \frac{v_d}{\bar{v}_d} \quad \hat{v}_d = \frac{v_d}{\breve{v}_d} \qquad \ddot{l}_d = \hat{l}_d = \frac{l_d}{\bar{l}_d}$$

where we indicate the non-elite pivotization with double dots and the elite pivotization with a hat. The duality is obtained substituting \(\mathcal {D}\rightarrow \mathcal {T}\) to go from \(l_d\) to \(v_d\), or \(\mathcal {T}\rightarrow \mathcal {D}\) to go from \(v_d\) to \(l_d\).
The pivoted verboseness of a document is with respect to the space of terms (\(\mathcal {T}\)), whereas the pivoted document length of a document is with respect to the space of documents (\(\mathcal {D}\)). One can also show the duality between document verboseness and length based on probabilistic expressions:

$$\ddot{v}_d = \frac{P_L(d)}{P_T(d)} \qquad \hat{l}_d = \frac{P_L(d)}{P_D(d)}$$

\(P_L(d)\) is the location-based probability of a document. Dividing this by the term-based probability of d, \(P_T(d)=|\mathcal {T}_d|/|\mathcal {T}|\), yields the pivoted verboseness. Dividing by the document-based probability of d, \(P_D(d)=|\mathcal {D}_d|/|\mathcal {D}|=1/|\mathcal {D}|\), yields the pivoted document length.
The dualities between average document verboseness and average document length justify the combination of parameters as formalized in the definition capturing the normalization variants of \(K_d\):
Definition 1
(TF Normalisations \(K_d\) )

\(\ddot{K}_d\): the nonelite normalization comprises the nonelite pivots \(\ddot{l}_d\) and \(\ddot{v}_d\).

\(\hat{K}_d\): the elite normalization comprises the elite pivots \(\hat{l}_d\) and \(\hat{v}_d\).

The expression \({\text {pivdl}}\), pivoted document length, denotes one of the two:
Analogously for \(\text {pivdv}\), pivoted document verboseness.
Then, the pivotization components are defined for the disjunctive (linear) and conjunctive (product) combination of the pivots:

$$\text{piv}_{\vee ,d} = (1-b) + b \left( a \cdot {\text {pivdl}} + (1-a) \cdot \text {pivdv} \right)$$

$$\text{piv}_{\wedge ,d} = (1-b) + b \left( {\text {pivdl}}^{a} \cdot \text {pivdv}^{1-a} \right)$$

where the two parameters b and a are both defined in [0, 1]. The parameter b controls the degree of normalization between full normalization (when \(b=1\)) and no normalization (when \(b=0\)), and the parameter a controls the balance between the contributions of \({\text {pivdl}}\) and \(\text {pivdv}\). The combination of these pivots becomes part of the usual definition of the normalization parameter \(K_d\):

$$K_d = k_1 \cdot \text{piv}_{d}$$
where the parameter \(k_1\), which is defined in \(]0, \infty [\), controls the power of the normalization.
It is worth pointing out now that for \(b=0\), or for \(b=1\) and \(a \in \{0,1\}\), these two combinations are the same. In particular we should note that:

$$\ddot{K}_{\vee ,d} = \ddot{K}_{\wedge ,d} = \hat{K}_{\vee ,d} = \hat{K}_{\wedge ,d} = k_1$$

which is the “traditional” \(K_d\), obtained ignoring both document verboseness and length (\(b=0\)).
To summarize, there are four variants of the pivotization factor \(K_d\): nonelite disjunctive denoted as \(\ddot{K}_{\vee }\), nonelite conjunctive denoted as \(\ddot{K}_{\wedge }\), and the respective elite variants \(\hat{K}_{\vee }\) and \(\hat{K}_{\wedge }\). The experiments emphasize the analysis of the behavior of these four variants.
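The four variants can be sketched as follows; note that the exact parametric form of the two combinations (in particular the exponents a and 1−a in the conjunctive case) is our reading of the definition, not a verbatim transcription of it:

```python
def K_d(pivdl, pivdv, k1=1.2, b=0.75, a=0.5, conjunctive=False):
    """Normalization K_d combining pivoted document length and verboseness.

    pivdl / pivdv may be either the non-elite or the elite pivots, which gives
    the four variants of the text. The combination forms are assumptions.
    """
    if conjunctive:
        f = (pivdl ** a) * (pivdv ** (1 - a))  # product (conjunctive) combination
    else:
        f = a * pivdl + (1 - a) * pivdv        # linear (disjunctive) combination
    return k1 * ((1 - b) + b * f)

# b=0 switches normalization off: K_d reduces to k1 in all four variants.
assert K_d(2.0, 0.5, b=0.0) == K_d(2.0, 0.5, b=0.0, conjunctive=True) == 1.2
```

For `b=1` and `a` at either end of its range, the disjunctive and conjunctive combinations coincide, matching the observation in the text.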
3.2 Example of calculation of the pivotizations
The next example illustrates the arithmetic to compute the pivoted document verboseness and length.
Example 1
(Pivoted Document Verboseness and Length) Assume a document d with \(l_d=300\) word occurrences, and \(|\mathcal {T}_d|=150\) distinct words. The verboseness is:

$$v_d = \frac{l_d}{|\mathcal {T}_d|} = \frac{300}{150} = 2$$

Let the collection contain \(l_c=10^7\) word occurrences, and \(|\mathcal {T}|=10^5\) distinct words. The non-elite average document verboseness is \(\bar{v}_d = l_c/|\mathcal {T}| = 100\), that is, on average, a term occurs 100 times.
The elite average verboseness is the average over the verboseness values of the documents. For example, let \(\breve{v}_d=5/2\) be the elite average verboseness.
The pivoted verboseness is the verboseness divided by the non-elite average verboseness:

$$\ddot{v}_d = \frac{v_d}{\bar{v}_d} = \frac{2}{100} = 0.02$$

while the pivoted elite verboseness is the verboseness divided by the elite average verboseness:

$$\hat{v}_d = \frac{v_d}{\breve{v}_d} = \frac{2}{5/2} = 0.8$$

Regarding the document length, let \(\bar{l}_d=400\) be the average document length. Then, the pivoted document length is:

$$\hat{l}_d = \frac{l_d}{\bar{l}_d} = \frac{300}{400} = 0.75$$
Then we can combine the non-elite pivots, for example, in a disjunctive way:

$$\ddot{K}_{\vee ,d} = k_1 \left( (1-b) + b \left( a \cdot \hat{l}_d + (1-a) \cdot \ddot{v}_d \right) \right)$$

or, the elite pivots in a conjunctive way:

$$\hat{K}_{\wedge ,d} = k_1 \left( (1-b) + b \left( \hat{l}_d^{\,a} \cdot \hat{v}_d^{\,1-a} \right) \right)$$
The other two variants, elite pivots combined in a disjunctive way (\(\hat{K}_{\vee ,d}\)), and nonelite pivots combined in a conjunctive way (\(\ddot{K}_{\wedge ,d}\)) are left to the reader.
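The example’s arithmetic can be checked in a few lines; the parameter values k1, b and a in the combinations are our own illustrative choices, and the combination forms (linear and weighted product) are our assumption:

```python
l_d, distinct = 300, 150
v_d = l_d / distinct                 # verboseness: 2.0

l_c, n_terms = 10**7, 10**5
bar_v = l_c / n_terms                # non-elite average verboseness: 100.0
breve_v = 5 / 2                      # elite average verboseness (given)
bar_l = 400                          # average document length (given)

piv_v_nonelite = v_d / bar_v         # 0.02
piv_v_elite = v_d / breve_v          # 0.8
piv_l = l_d / bar_l                  # 0.75

# Illustrative combinations (k1, b, a chosen by us, not from the example).
k1, b, a = 1.2, 0.75, 0.5
K_disj_nonelite = k1 * ((1 - b) + b * (a * piv_l + (1 - a) * piv_v_nonelite))
K_conj_elite = k1 * ((1 - b) + b * (piv_l ** a) * (piv_v_elite ** (1 - a)))
print(K_disj_nonelite, K_conj_elite)
```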
3.3 Other dualities
To strengthen the theoretical justifications, we explore two other dualities, namely the duality between document verboseness and term burstiness, and later in the section the duality between term burstiness and term length. Here are the definitions of the first couple:

$$v_d = \frac{l_d}{|\mathcal {T}_d|} \qquad b_t = \frac{l_t}{|\mathcal {D}_t|}$$
The duality is obtained substituting \(\mathcal {T}\rightarrow \mathcal {D}\) and \(d \rightarrow t\) to go from \(v_d\) to \(b_t\) or \(\mathcal {D}\rightarrow \mathcal {T}\) and \(t \rightarrow d\) to go from \(b_t\) to \(v_d\). Verboseness is the average term frequency when considering the document length \(l_d\) over the set \(\mathcal {T}_d\) of terms that occur in the respective document. Burstiness is the average term frequency when considering the number of times the term occurs \(l_t\) over the set \(\mathcal {D}_t\) of documents in which the respective term occurs.
Furthermore, starting from burstiness and substituting \(\mathcal {D}\rightarrow \mathcal {T}\), we observe another duality, between term length and burstiness:

$$b_t = \frac{l_t}{|\mathcal {D}_t|} \quad \longrightarrow \quad \frac{l_t}{|\mathcal {T}_t|} = l_t$$
These dualities, based fundamentally on substitutions between the set of documents \(\mathcal {D}\) and the set of terms \(\mathcal {T}\), were briefly explored in the early 1990s, when Knaus et al. (1994) and Amati and Kerpedjiev (1992) talked about ITF (inverse term frequency) and IDF; the IDF was later generalized by Metzler (2008).
Whereas the IDF is applied for reasoning about the similarity between documents, the ITF is applied for reasoning about the similarity between terms. Viewing the ITF and IDF together, by looking at the denominator’s argument of the logarithms, shows that ITF is related to verboseness, and IDF is related to burstiness.
Overall, the discussion supports the case to consider verboseness as a documentspecific parameter, whereas traditional IR focuses on the pivoted document length only.
3.4 Summary
This section justified the systematic combination of pivoted document length and pivoted verboseness, while placing them in the context of other dualities, involving burstiness and term length. Table 1 shows the list of all the explored dualities.
4 Probabilistic derivation of IR models
To discuss the justification of \({\text {TF}}\) quantifications, we consider the probabilistic derivation of IR models. Most IR models can be derived from measuring the dependence between document and query. Let d denote a document, q a query, and c a collection. The document-query independence (DQI; Roelleke and Wang 2008) is the pointwise mutual information expressed as:

$$\text{DQI}(d,q) = \log \frac{P(d,q)}{P(d)\,P(q)} \quad (24)$$
Document and query are considered as sequences of term events. The decomposition of d leads to TF-IDF (and, for particular assumptions, to BM25), and the decomposition of q leads to LM. In this section we review the decomposition of d. When decomposing d using \(P(d,q) = P(d|q)P(q)\) and then \(P(d) = \prod _{t \in \mathcal {T}_d} P(t)^{\textit{tf}_d}\) and \(P(d|q) = \prod _{t \in \mathcal {T}_d} P(t|q)^{\textit{tf}_d}\), we obtain:

$$\text{DQI}(d,q) = \sum _{t \in \mathcal {T}_d} \textit{tf}_d \cdot \log \frac{P(t|q)}{P(t)} \quad (25)$$
Here, P(tq) is the query term probability, and P(t) is the background model (collectionwide) term probability. The equation makes two independence assumptions: different terms are independent, and also, the multiple occurrences of the same term are independent. The first assumption is reflected in applying the sum over different terms, and the second assumption is reflected by the total term frequency count, \(\textit{tf}_d\).
To provide a justification for TF-IDF, one is looking for bridges to close the gap between the probabilistic roots (assuming independence) and TF-IDF. Expressed as an equation, we are looking for justifications to transform the components of Eq. (25) into TF-IDF:

$$\sum _{t \in \mathcal {T}_d} \textit{tf}_d \cdot \log \frac{P(t|q)}{P(t)} \;\approx\; \sum _{t \in \mathcal {T}_d \cap \mathcal {T}_q} {\text {TF}}(t,d) \cdot {\text {IDF}}(t)$$
where \({\text {TF}}\) and \({\text {IDF}}\) are the two components, term frequency and inverse document frequency.
4.1 Observations about the \({\text {TF}}\) component
The within-document term frequency (\(\textit{tf}_d\)) in IR models is usually not used pure, due to its bias towards long documents, as motivated in Sect. 2. The step from \(\textit{tf}_d\) towards a quantification function involves a normalization component, referred to as \(K_d\). The widely known \(\text {TF}_{\text {BM25}}\) normalization factor is:

$$K_d = k_1 \left( (1-b) + b \cdot \frac{l_d}{\bar{l}_d} \right) \quad (26)$$
Given that \(k_1\) and b are parameters of \(K_d\), one should use the notation \(K_{k_1,b,d}\), but for readability, we simplify the notation to \(K_d\).
The following definition formalizes the well-defined spectrum of \({\text {TF}}\) quantifications (Roelleke et al. 2015).
Definition 2
(\({\text {TF}}\) Quantifications)

$$\text {TF}_{\text {total}}(t,d) = \textit{tf}_d \qquad \text {TF}_{\text {log}}(t,d) = \log (1+\textit{tf}_d) \qquad \text {TF}_{\text {BM25}}(t,d) = \frac{\textit{tf}_d}{\textit{tf}_d + K_d}$$
The shape of the different \({\text {TF}}\) quantifications is shown in Fig. 2.
This spectrum is well-defined because each of these \({\text {TF}}\)s corresponds to an assumption regarding term events (Roelleke et al. 2015). \(\text {TF}_{\text {total}}\) corresponds to assuming independence, while the \(\text {TF}_{\text {log}}\) and \(\text {TF}_{\text {BM25}}\) variants assume the occurrences of an event to be dependent.
With this understanding of what the TF stands for, namely a factor modeling a dependence assumption, the role of \(K_d\) is to tune the dependence assumption. For \(K_d>1\), that is for long documents, \({\text {TF}}(t,d)\) decreases, i.e. the dependence increases. This means that in long documents, the multiple term occurrences are more dependent than in short documents. This makes perfect sense when imagining a long document that repeats some terms many times.
This discussion makes evident that it is not just the length of the document that matters. To illustrate, consider two documents of equal length, for example, \(l_d=300\) words. The standard \(K_d\) will be equal for both documents. One document, however, contains many repetitions of some words (the document is verbose), whereas the other document contains many different words (the document is not verbose). Indeed, it is the verboseness and not simply the document length that leads to high term frequencies, and thus, to dependencies of multiple term occurrences. Therefore, this paper views \(K_d\) as a combination of
the pivoted document length (\({\text {pivdl}}\)) and
the pivoted document verboseness (\(\text {pivdv}\)).
The following equation indicates the difference between the standard \(K_d\) as known for BM25 [as shown in Eq. (26)], and the systematic extension proposed and investigated in this paper:

$$K_d = k_1 \left( (1-b) + b \cdot {\text {pivdl}} \right) \quad \longrightarrow \quad K_d = k_1 \left( (1-b) + b \cdot f({\text {pivdl}}, \text {pivdv}) \right)$$
Here, \(f({\text {pivdl}},\text {pivdv})\) is a function combining the two parameters, and this paper explores both a conjunctive and a disjunctive combination.
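A minimal sketch of the quantification spectrum together with the extended normalization; the log quantification is taken as log(1 + tf) and the combination function f as a linear mixture, both our assumptions:

```python
import math

def K_d(pivdl, pivdv, k1=1.2, b=0.75, a=0.5):
    # Extended normalization: f(pivdl, pivdv) here is a linear mixture (assumed).
    return k1 * ((1 - b) + b * (a * pivdl + (1 - a) * pivdv))

def tf_total(tf):   # independence: the raw count
    return tf

def tf_log(tf):     # dependence, log-based (form assumed as log(1 + tf))
    return math.log(1 + tf)

def tf_bm25(tf, K): # dependence, BM25-based saturation
    return tf / (tf + K)

# A larger K (a longer and/or more verbose document) lowers TF_BM25, i.e. the
# multiple occurrences of a term are treated as more dependent.
assert tf_bm25(3, K_d(2.0, 2.0)) < tf_bm25(3, K_d(1.0, 1.0))
```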
4.2 Observations about the \({\text {IDF}}\) component
Regarding \(\text {TF}_{\text {BM25}}\)-IDF, the question remains of how to close the gap between \(P(t|q)/P(t)\) and \({\text {IDF}}\), as commonly defined in the literature: \({\text {IDF}}(t)=\log (1/P_D(t))\). Mathematically, we are looking for a justification that leads to the following equation:

$$\log \frac{P(t|q,c)}{P(t|c)} = \log \frac{1}{P_D(t|c)} \quad (29)$$

where, in order to avoid confusion in the next derivation steps, the collection symbol c is made explicit. We note that \(P(t|c)\) and \(P_D(t|c)\) are both in the denominators of the functions. Let us consider what the relation between these two elements is, i.e. \(P(t|c)/P_D(t|c)\). Referring back to the notations introduced at the end of Sect. 1.1, we have:

$$\frac{P(t|c)}{P_D(t|c)} = \frac{l_t/l_c}{|\mathcal {D}_t|/|\mathcal {D}|} = \frac{l_t}{|\mathcal {D}_t|} \cdot \frac{|\mathcal {D}|}{l_c} = \frac{b_t}{\bar{l}_d}$$

that is,

$$P(t|c) = \frac{b_t}{\bar{l}_d} \cdot P_D(t|c) \quad (31)$$

and, substituting in the left side of (29), it becomes:

$$\log \frac{P(t|q,c)}{P(t|c)} = \log \frac{P(t|q,c) \cdot \bar{l}_d}{P_D(t|c) \cdot b_t} \quad (32)$$

This equation makes burstiness explicit, and in particular its otherwise implicit role in the relationship between IDF and the probabilistic model. If we were to return to Eq. (29), we are forced to consider:

$$P(t|q) = \frac{b_t}{\bar{l}_d}$$

Essentially, we have observed that the IDF, in its generic form of \(\log (1/P_D(t|c))\), implies that, when the term is not part of the query q, we estimate \(P(t|q)\) as the probability of the term in the collection (\(P(t|c)\)), and when the term is part of q we estimate it as \(P(t|q)=b_t/\bar{l}_d\).
This separation between the cases when \(t\in \mathcal {T}_q\) and \(t\not \in \mathcal {T}_q\) is reminiscent of smoothing in language modeling. We could for instance write

$$P(t|q,c) = \lambda _q \cdot \frac{b_t}{\bar{l}_d} + (1-\lambda _q) \cdot P(t|c)$$

with

$$\lambda _q = {\left\{ \begin{array}{ll} 1 &{} \text {if } t\in \mathcal {T}_q \\ 0 &{} \text {if } t\not \in \mathcal {T}_q \end{array}\right. }$$
We shall call this an extreme mixture.
If we were to continue this inspiration from language modeling, leaving the above for a moment aside, to compute \(P(t|q,c)\) we would estimate it through a linear mixture between \(P(t|c)\) and \(P(t|q)\), as follows:

$$P(t|q,c) = \lambda \cdot P(t|q) + (1-\lambda ) \cdot P(t|c) \quad (36)$$
This assumption is traditionally made because estimating the probability of a term given the query q, when q is short, is not reliable (even more so than when considering a document d).
Substituting Eq. (36) into Eq. (32), we have:
where \(P(t|q)\) is calculated in the traditional way with a maximum likelihood estimator. However, this would not solve our problem given by the shortness of q. Instead, we need to use the estimation of Eq. (34). Then, reintroducing the distinction between \(t\in \mathcal {T}_q\) and \(t\not \in \mathcal {T}_q\) (i.e. \(\lambda _q\)), we obtain
If we set \(\lambda _q=1\), then the collection probability \(P(t|c)\) cancels out of the linear mixture, and we end up with the standard \({\text {IDF}}\). We shall call this inverse document frequency \(\text {IDF}_{\text {L}}\), where L stands for linear mixture, in contrast to the standard IDF (or \(\text {IDF}_{\text {E}}\)), which is defined by an extreme mixture.
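To make the contrast between the two estimators concrete, here is a minimal sketch (the maximum likelihood estimates of \(P(t|q)\) and \(P(t|c)\) are assumed to be computed elsewhere):

```python
def p_extreme(term, query_terms, p_t_q, p_t_c):
    """Extreme mixture (IDF_E): use P(t|q) iff the term occurs in the
    query, otherwise back off entirely to the collection probability
    P(t|c)."""
    return p_t_q if term in query_terms else p_t_c

def p_linear(lambda_q, p_t_q, p_t_c):
    """Linear mixture (IDF_L): interpolate the two estimates.
    At lambda_q = 1 the collection probability drops out and the
    standard IDF is recovered."""
    return lambda_q * p_t_q + (1.0 - lambda_q) * p_t_c
```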
4.3 LM and TFIDF
Our analysis has already reached a point where the border between LM and TFIDF gets blurred. In this section we discuss the derivation of the LM model and highlight some commonality with the derivation of TFIDF done in the previous section. Recall that the discussion of IDF in \(\text {TF}_{\text {BM25}}\)IDF started from Eq. (24), where we decomposed \(P(d,q)=P(d|q)P(q)\). Here we can instead review the decomposition as \(P(d,q) = P(q|d)P(d)\). We will then have \(P(q|d)=\prod _{t \in \mathcal {T}_q}P(t|d)^{\textit{tf}_q}\), and:
Using again the observation formalized in Eq. (31), we observe the explicit presence of burstiness in the following equation, as it was in Eq. (32):
Analogously to the derivation of TFIDF for the estimation of \(P(t|q,c)\) in Eq. (36), and as commonly done in language modeling, we estimate \(P(t|d,c)\) as:
and, substituting into Eq. (40), we obtain:
We can now notice the symmetry with Eq. (37). In LM, when applying a Dirichlet-based mixture (DLM), the value of \(\lambda _{d}\) is (Zhai and Lafferty 2001):
where \(\mu\) is a parameter of the collection. This parameter could be set based on the average document length \(\bar{l}_d\). Zhai and Lafferty (2001) report values of \(\mu \approx 2000\), though they note that the range of optimal parameter values across different collections is quite large (500–10,000). Later, Fang et al. (2004) posited that \(\mu\) needs to be at least as large as the average document length (\(\bar{l}_d\)), so a reasonable value for \(\lambda _{d}\) is:
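As a concrete sketch of this mixture, the standard Dirichlet smoothing of Zhai and Lafferty (2001) can be written in two equivalent forms (the equivalence is a simple algebraic identity):

```python
def lambda_dirichlet(doc_len, mu):
    """Dirichlet mixture weight: lambda_d = l_d / (l_d + mu)."""
    return doc_len / (doc_len + mu)

def p_dirichlet(tf, doc_len, p_t_c, mu):
    """Dirichlet-smoothed P(t|d,c) = (tf + mu * P(t|c)) / (l_d + mu),
    i.e. lambda_d * P(t|d) + (1 - lambda_d) * P(t|c) with
    P(t|d) = tf / l_d estimated by maximum likelihood."""
    return (tf + mu * p_t_c) / (doc_len + mu)
```

With the suggestion of Fang et al. (2004), \(\mu = \bar{l}_d\), the weight becomes \(\lambda_d = l_d/(l_d + \bar{l}_d)\): exactly 1/2 for a document of average length.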
Now, just as we did for the normalization of \({\text {TF}}\) in the TFIDF derivation, we should consider here not only the presence of the document length but also that of verboseness:
In a symmetric way, we may define for TFIDF a parameter that does not depend strongly on the presence or absence of the term in q (as was the case for the extreme mixture observed in the previous section), but rather uses the Dirichlet-based smoothing approach and the maximum likelihood estimation \(P(t|q) = \textit{tf}_q/l_q\):
However, the components of this formulation of \(\lambda _{q}\) are generally not very informative (queries tend to be significantly shorter than documents, and therefore we cannot really talk about the verboseness of a query). Instead, we can exploit here the duality of document verboseness and length with term length and burstiness (see Sect. 3.3):
In summary, in this section we have explored the relationship between TFIDF and LM. Both models apply a mixture: TFIDF for estimating \(P(t|q,c)\), and LM for estimating \(P(t|d,c)\). Moreover, both models involve the component \(b_t/\bar{l}_d \cdot P_D(t)\), measuring the discriminativeness of the term, where burstiness is made explicit.
The mixture assumption for \(P(t|q,c)\) leads to IDF, and it becomes clear why IDF is seen as capturing burstiness in an “implicit” way (Church and Gale 1999). The Dirichlet-based mixture for \(P(t|d,c)\), usually associated only with the document length, is extended with the document verboseness. This extension is done analogously to the way the TF quantification has been extended for the TFIDF models.
5 Experiments
In this section, we first present the material, then the experimental setup. Finally we discuss the results.
5.1 Setup and materials
To test the \({\text {TF}}\) normalization variants on the different kinds of \({\text {TF}}\) quantifications, we used 4 test collections: TREC HARD 2005, TREC Ad Hoc 8, CLEF eHealth 2014, and TREC Web 2002. Details and corpora properties are shown in Table 2. The test collections have been purposefully chosen with a high degree of variability of \(\breve{v}_d\). In this way we can observe the different use of the language in different domains (e.g. we observe that in .GOV, on average, a term is repeated 218% more times than in the Aquaint collection). We implemented^{Footnote 2} the tested IR models on the IR platform Terrier^{Footnote 3} 4.2. All the documents have been preprocessed using the English tokenizer and the Porter stemmer of the Terrier search engine. All the topics, when multiple lengths are available in the test collections, are of the shortest kind.
We tested a total of 24 models:

- 16 models based on TFIDF variants: 4 \({\text {TF}}\) normalizations for each of the 4 \({\text {TF}}\) quantifications defined in Definition 2. Each model is identified by its \({\text {TF}}\) quantification (\(\text {TF}_{\text {total}}\), \(\text {TF}_{\text {log}}\), \(\text {TF}_{\text {BM25}}\), or \(\text {TF}_{\text {constant}}\)) and by the kind of \({\text {TF}}\) normalization applied: non-elite disjunctive \(\ddot{K}_{\vee ,d}\), non-elite conjunctive \(\ddot{K}_{\wedge ,d}\), elite disjunctive \(\hat{K}_{\vee ,d}\), and elite conjunctive \(\hat{K}_{\wedge ,d}\).
- 4 models based on DLM: each Dirichlet-based mixture is identified by the kind of \(\lambda _{d}\) normalization applied: non-elite disjunctive \(\ddot{\lambda }_{\vee ,d}\), non-elite conjunctive \(\ddot{\lambda }_{\wedge ,d}\), elite disjunctive \(\hat{\lambda }_{\vee ,d}\), and elite conjunctive \(\hat{\lambda }_{\wedge ,d}\).
- 4 models based on TF\(\text {IDF}_\text {L}\): each Dirichlet-based mixture is identified by the kind of \(\lambda _{q}\) normalization applied: non-elite disjunctive \(\ddot{\lambda }_{\vee ,q}\), non-elite conjunctive \(\ddot{\lambda }_{\wedge ,q}\), elite disjunctive \(\hat{\lambda }_{\vee ,q}\), and elite conjunctive \(\hat{\lambda }_{\wedge ,q}\). As \({\text {TF}}\) component, we select the non-normalized \(\text {TF}_{\text {total}}\).
The TF normalization of each model presents 3 parameters: \(k_1\), b, and the new parameter a introduced in this paper. The DLM and TF\(\text {IDF}_\text {L}\) based models present 2 parameters: b and a. Our experiments focus on the parameter a. For \(k_1\) and b, there are two ways of selecting their values: using the standard values from the literature, or identifying trained values. For the models based on the TFIDF variants, the standard parameters for \(\text {TF}_{\text {BM25}}\) are \(k_1=1.2\) and \(b=0.7\) (Robertson et al. 1994). The standard parameter for \(\text {TF}_{\text {total}}\) and \(\text {TF}_{\text {constant}}\) is \(b=0\), which simplifies \(K_d\) to a constant. In this case we set \(k_1=1\), because it is easy to demonstrate that changing the parameter \(k_1\), as long as \(k_1>0\), does not change the ranking of the retrieved documents for these two quantifications. The same parameter values are set for the standard \(\text {TF}_{\text {log}}\) (\(b=0\), \(k_1=1\)). For the models based on the DLM, the standard parameters are \(k_1=1\) and \(b=0\), which reduces to the standard definition of DLM (Zhai and Lafferty 2001). For the models based on the LM variant derived from TFIDF, the standard parameter is \(k_1=+\infty\), which reduces to the standard TFIDF model with the non-normalized \(\text {TF}_{\text {total}}\) quantification.
To identify trained values, the parameters of each model have been spanned as follows: \(a,b \in [0, 1]\) at steps of 0.1, and \(k_1 \in [0,5]\): from 0 to 1 at steps determined by the function 1/n with \(n \in \{1,\dots,50\}\), and from 1 to 5 at steps of 0.1. The trained values are obtained by maximizing the mean over the topics of the selected evaluation measure. For every model configuration that requires training we perform a five-fold cross validation.
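For reproducibility, the parameter grid just described can be generated as follows (a sketch; the paper does not specify the enumeration order):

```python
# a and b are spanned over [0, 1] at steps of 0.1.
a_grid = [i / 10 for i in range(11)]
b_grid = [i / 10 for i in range(11)]

# k1 is spanned over [0, 5]: from 0 to 1 at steps given by 1/n with
# n in {1, ..., 50} (i.e. 1/50, 1/49, ..., 1/2, 1), then from 1 to 5
# at steps of 0.1.
k1_low = sorted(1.0 / n for n in range(1, 51))   # 50 values in (0, 1]
k1_high = [1 + i / 10 for i in range(1, 41)]     # 1.1, 1.2, ..., 5.0
k1_grid = k1_low + k1_high                       # 90 values in total
```

The denser sampling of \(k_1\) below 1 reflects that small \(k_1\) values change the saturation behavior of \(\text {TF}_{\text {BM25}}\) much more rapidly than large ones.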
The IR evaluation measures employed are \(\text {AP}\), \(\text {NDCG}\) and \(\text {P@10}\).
5.2 Model candidates/structure
Each TFIDF model candidate is characterized by choosing one of the following options:

1. Pivotization: elite pivotization or non-elite pivotization for document verboseness and length;
2. Normalization: conjunctive (\(\wedge\)) or disjunctive (\(\vee\)) combination of pivoted document verboseness and length into \(K_d\);
3. Quantification: \(\text {TF}_{\text {total}}\), \(\text {TF}_{\text {log}}\), \(\text {TF}_{\text {BM25}}\), or \(\text {TF}_{\text {constant}}\);
4. Parameter Settings: standard (S) or trained (T) parameters.
Each DLM model candidate is characterized by choosing one of the following options:

1. Pivotization: elite pivotization or non-elite pivotization for document verboseness and length;
2. Normalization: conjunctive (\(\wedge\)) or disjunctive (\(\vee\)) combination of pivoted document verboseness and length into \(\lambda _d\);
3. Parameter Settings: standard (S) or trained (T) parameters.
Each TF\(\text {IDF}_{\text {L}}\) model candidate is characterized by choosing one of the following options:

1. Pivotization: elite pivotization or non-elite pivotization for term length and burstiness;
2. Normalization: conjunctive (\(\wedge\)) or disjunctive (\(\vee\)) combination of pivoted term length and burstiness into \(\lambda _q\);
3. Parameter Settings: standard (S) or trained (T) parameters.
5.3 Results
The main results observed are:

1. Document Verboseness versus Length: the two statistics show a certain independence, as shown by the shape of the distributions in Fig. 3;
2. Pivotization: for TFIDF models the elite pivotization is overall better than the non-elite one; for DLM models the non-elite pivotization performs better;
3. Normalization: for TFIDF models the combination of document verboseness and length achieves significantly better results, especially when combined in a conjunctive fashion; for DLM models the combination of document verboseness and length rarely achieves statistical significance;
4. TF Quantification: \(\text {TF}_{\text {BM25}}\) appears best, with \(\text {TF}_{\text {log}}\) close behind;
5. Standard versus Trained parameters: in both parameter configurations, standard and trained, the use of verboseness makes the model achieve better results. On the other hand, the use of term length has, most of the time, a negligible impact.
For each test collection (HARD 2005 in Table 3, Ad Hoc 8 in Table 4, eHealth 2014 in Table 5, and Web 2002 in Table 6), we present the results obtained with the TFIDF model variants and the two pivotizations. In these tables we observe each model with either its standard configuration (S) or its trained configuration (T), obtained by taking the configuration that maximizes the evaluation measure \(\text {AP}\). The standard parameters of the normalizations for the TF quantifications \(\text {TF}_{\text {total}}\), \(\text {TF}_{\text {log}}\), and \(\text {TF}_{\text {constant}}\) have the effect of disabling the normalization component (\(b=0\)). However, for \(\text {TF}_{\text {BM25}}\) this does not happen; thereby, we can study the effect of the parameter a in its standard parametrization. To do this we extract the best value obtained with the standard \(k_1\) and b by selecting the maximum value of the measure \(\text {AP}\) obtained by varying the parameter a. In the case of the trained parameter values instead, for all the \({\text {TF}}\) quantifications, we show in the first row the best result obtained by maximizing \(\text {AP}\) without the use of verboseness in the scoring function (\(a=1\)), and then we show the result obtained when verboseness is added to the scoring function. The tables distinguish between the conjunctive (\(\wedge\)) and disjunctive (\(\vee\)) combinations of document verboseness and length.
\(\text {TF}_{\text {BM25}}\) works generally better than the other \({\text {TF}}\) quantifications, but not for all test collections: for the eHealth 2014 test collection, \(\text {TF}_{\text {log}}\) is better.
We also observe that the best configuration is achieved using the elite pivotization. The conjunctive combination works generally better than the disjunctive one (24 of 32 experiments better than the disjunctive; all 7 unfavorable cases occur when using the Web 2002 test collection).
In Table 7, we present the results obtained for every test collection using DLM with \(\lambda _{d}\) extended with verboseness. For this model the standard parametrization is \(b=1\) and \(a=0\), which reduces the formula to the standard DLM without verboseness (Zhai and Lafferty 2001). This variant is shown in the first row for every test collection. The subsequent rows present the variants of \(\lambda _{d}\) when combined with verboseness in disjunction and conjunction with non-elite and elite pivots. For this model we observe that the presence of verboseness produces significant improvements for only one test collection. Overall we observe that the non-elite pivotization should be preferred (all the experiments produce better results than with the elite one). No difference is observed between using a disjunctive or a conjunctive combination of the pivots.
In Table 8, we present the results obtained for every test collection using the TF\(\text {IDF}_\text {L}\) model with \(\lambda _{q}\), which combines in an LM fashion the term length and burstiness. For this model the standard parametrization is \(\lambda _q = 1\), which reduces this IR model to a non-TF-normalized \(\text {TF}_{\text {total}}\)IDF model. This variant is shown in the first row for every test collection. The following rows present the variants of \(\lambda _{q}\) when combined in disjunction and conjunction with non-elite and elite pivots. We observe that this parametrization produces significantly better results than the standard case, and that the non-elite pivotization should be preferred. Also here, as for DLM, no difference is observed between using a disjunctive or a conjunctive combination of the pivots. We also observe that overall the value of the trained parameter a is often equal to 1, which suggests that, for these model variants, the term length does not play an important role in adjusting the document's score. This is a curious behavior, since it is dual to the DLM model, where the document verboseness does not play an important role either.
Finally, in Tables 9 and 10 we present the results of the five-fold cross validation for all the trained cases: the TFIDF models in the first table, and the DLM and TF\(\text {IDF}_\text {L}\) models in the second.
6 Analysis and discussion
Finally, we make some observations across the experimental results about the behavior of the parameter a. Before that, however, let us make an observation on the nature of the data at our disposal. Figure 3 shows the distribution of document verboseness versus document length for the elite and non-elite pivotizations. In both cases we see that verboseness brings additional information compared to document length: the plotted distributions are well spread, away from the first diagonal.
Comparing the two distributions, it is interesting to observe that the non-elite pivotization is significantly more skewed than the elite one: the x-axis of the left plot has a scale in the (0, 0.02) range, while that of the right plot has a scale that matches the y-scale: (0, 4). This supports and grounds our hypothesis that the elite pivotization should provide us with better means to balance verboseness and document length with the parameter a.
The a parameter controls the contribution of elite pivoted verboseness and elite pivoted document length. When \(a<0.5\), the contribution of the document verboseness is higher than the contribution of the document length, and vice versa when \(a>0.5\). Looking at the distribution for the elite pivotizations of the documents, and redefining the origin to the point (1, 1), we split the distributions into four quadrants.^{Footnote 4} We know that, whatever a we fix, the documents in the I quadrant will always be demoted to some degree, and the documents in the III quadrant will always be promoted to some degree. So here the question is what happens to the documents in the IV and II quadrants. When the contribution of document verboseness is preferred (\(a<0.5\)), more documents with low verboseness (\(\hat{v}_d<1\)) and high length (\(\hat{l}_d>1\)) will be promoted against the documents of the IV quadrant, and when the contribution of the document length is preferred (\(a>0.5\)) the contrary happens. Therefore, the a values should anticorrelate with the ratio of the number of relevant documents in the II quadrant to that in the IV quadrant. Here are the two lists of values, sorted by test collection: the values of a extracted from Tables 3, 4, 5, and 6 for the standard BM25 case with trained a are 0.8, 0.6, 0.4, and 0.0, and the ratios are 0.63, 0.86, 1.16, and 4.20; we observe that they indeed anticorrelate. Therefore, if we assume that all the documents of the collection could be relevant, we should find the a value that best balances the proportion of non-verbose but long documents with that of short but verbose documents. All the test collections but Disks 4&5 have been crawled from the Web. For all of them we can observe that the plots manifest visible noise. In particular, we observe the presence of black dots that are most probably caused by the existence of duplicated documents in the collections. For example, the existence of duplicated documents in the eHealth’14 test collection is a known issue in the eHealth IR community.
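The quadrant bookkeeping used in this analysis can be sketched as follows (quadrant I at the top right, proceeding counterclockwise; coordinates are the elite-pivoted verboseness and length of a document, with the origin moved to (1, 1)):

```python
def quadrant(pivdv, pivdl):
    """Quadrant of a document in the (verboseness, length) plane,
    relative to the pivot point (1, 1): x = pivoted verboseness,
    y = pivoted length; numbering starts at the top right (I) and
    proceeds counterclockwise (II, III, IV)."""
    if pivdv >= 1 and pivdl >= 1:
        return 'I'    # verbose and long: always demoted to some degree
    if pivdv < 1 and pivdl >= 1:
        return 'II'   # low verboseness, high length
    if pivdv < 1 and pivdl < 1:
        return 'III'  # short and non-verbose: always promoted
    return 'IV'       # high verboseness, low length

def quadrant_ratio(docs):
    """Ratio |II| / |IV| over (pivdv, pivdl) pairs; the trained a
    values are observed to anticorrelate with this ratio."""
    counts = {'I': 0, 'II': 0, 'III': 0, 'IV': 0}
    for v, l in docs:
        counts[quadrant(v, l)] += 1
    return counts['II'] / counts['IV'] if counts['IV'] else float('inf')
```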
In Tables 3, 4, 5, and 6 we observe that the best performing configuration, for both \(\text {TF}_{\text {log}}\) and \(\text {TF}_{\text {total}}\), uses the trained parameters combined in disjunction; in particular, in Table 4 these configurations also show statistical significance against both the standard configuration and the trained configuration without verboseness (\(a=1\)). The elite pivotization performs generally better than the non-elite pivotization. In particular, the best performing configurations are those with elite pivotization and trained parameters in conjunction. We also observe that in general the weighting role in the elite pivotization is taken over by the parameter a (\(b=1\) means that a full document verboseness and length normalization is applied).
In Fig. 4 we further analyze the best configuration on a per-topic basis. Here, we show the difference between the \(\text {AP}\) of the trained TF\(_\text {BM25}\)IDF with verboseness combined in conjunction with elite pivots and that of the trained classic TF\(_\text {BM25}\)IDF. If the difference is positive, the variant with verboseness is better than the classic version.
7 Conclusion
This paper presents an extensive study of \({\text {TF}}\) quantifications and normalizations. The quantifications span a well-defined spectrum comprising \(\text {TF}_{\text {total}}\), \(\text {TF}_{\text {log}}\), \(\text {TF}_{\text {BM25}}\), and \(\text {TF}_{\text {constant}}\). Each of these \({\text {TF}}\) quantifications reflects a dependence assumption. In particular, \(\text {TF}_{\text {total}}\) and \(\text {TF}_{\text {constant}}\) are the extremes of the quantification spectrum, assuming independence for the former and subsumption for the latter. \(\text {TF}_{\text {BM25}}\) makes a relatively strong dependence assumption, and \(\text {TF}_{\text {log}}\) lies between \(\text {TF}_{\text {total}}\) and \(\text {TF}_{\text {BM25}}\). Each of these quantifications incorporates a \({\text {TF}}\) normalization parameter, usually denoted \(K_d\).
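For concreteness, commonly used instantiations of the four quantifications are sketched below (the paper's Definition 2 fixes the exact variants; the forms here are the usual textbook ones, with the normalization \(K_d\) folded in as a divisor of tf):

```python
import math

# Commonly used forms of the four TF quantifications on the spectrum,
# from independence (total) to subsumption (constant). k_d is the
# normalization factor K_d.
def tf_total(tf, k_d=1.0):
    return tf / k_d                    # independence: occurrences add up fully

def tf_log(tf, k_d=1.0):
    return math.log(1.0 + tf / k_d)    # mild dependence between occurrences

def tf_bm25(tf, k_d=1.0):
    return tf / (tf + k_d)             # strong dependence: fast saturation

def tf_constant(tf, k_d=1.0):
    return 1.0 if tf > 0 else 0.0      # subsumption: only presence matters
```

The spectrum is visible in the marginal contribution of an additional occurrence: linear for tf_total, logarithmically damped for tf_log, rapidly saturating toward 1 for tf_bm25, and zero after the first occurrence for tf_constant.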
Whereas current approaches consider only the document length as a parameter of \(K_d\), this paper makes the case for \(K_d\) to be a combination of document verboseness and length. There are many heuristic options for how to combine the two, and this paper contributes the theoretical foundations leading to a systematic combination of document verboseness and length.
The paper reports results of an experimental study investigating the effect of various settings of \(K_d\) for the four main \({\text {TF}}\) quantifications. The overall finding is that combining document verboseness with document length (either in a conjunctive or disjunctive way) improves retrieval quality when compared to results considering document length only.
We expand this in two directions: first by exploring a similar normalization in the context of LM, and second a similar normalization in the context of TFIDF. For the former, we include document verboseness into the Dirichlet smoothing, where a non-significant effect is observed, which signifies that document verboseness can be neglected there. For the latter, in Sect. 4.3 we observed the duality between document verboseness and document length on one side, and term burstiness and term length on the other side, and we observed the effect of these normalizations on the query side with respect to LM. Here, significant improvements are observed; however, these improvements are obtained primarily through the use of term burstiness, while the term length can be neglected. In both directions the new parametrizations yield improvements, and their results show a dual behavior, given by the exclusion of document verboseness in the former, and by the exclusion of term length in the latter.
In summary, in this paper we have provided an exhaustive study of normalization factors in probabilistic IR models using 4 different test collections. Based on the observations made on these test collections, we have made the case that different domains, having different text statistics, can be directly factored into the existing probabilistic models. We have thus provided a quantification of the various document and term statistics in one factor that balances the different prior probabilities that all these models, more or less explicitly, rely on.
Notes
Although the “length of a term” is not intuitive, here it means the L1-length of a vector.
Software available on the website of the first author.
Here we enumerate the quadrants from the top right: I, and then, going counterclockwise, II, III, and IV.
References
Amati, G., & Kerpedjiev, S. (1992). An information retrieval logic model: Implementation and experiments. Tech. Rep. REL 5b04892, Fondazione Ugo Bordoni, Rome, Italy.
Amati, G., & Van Rijsbergen, C. J. (2002). Probabilistic models of information retrieval based on measuring the divergence from randomness. ACM Transactions on Information Systems, 20(4), 357–389. https://doi.org/10.1145/582415.582416.
Church, K., & Gale, W. (1999). Inverse document frequency (IDF): A measure of deviations from Poisson (pp. 283–295). Dordrecht: Springer. https://doi.org/10.1007/978-94-017-2390-9_18.
Fang, H., Tao, T., & Zhai, C. (2004). A formal study of information retrieval heuristics. In Proceedings of the 27th annual international ACM SIGIR conference on research and development in information retrieval, SIGIR ’04 (pp. 49–56). New York, NY, USA: ACM. https://doi.org/10.1145/1008992.1009004.
Hanbury, A., & Lupu, M. (2013). Toward a model of domain-specific search. In Proceedings of the 10th conference on open research areas in information retrieval, OAIR ’13 (pp. 33–36). Paris, France: CID.
He, B., & Ounis, I. (2003). A study of parameter tuning for term frequency normalization. In Proceedings of the twelfth international conference on information and knowledge management, CIKM ’03 (pp. 10–16). New York, NY, USA: ACM. https://doi.org/10.1145/956863.956867.
He, B., & Ounis, I. (2005a). A study of the Dirichlet priors for term frequency normalisation. In Proceedings of the 28th annual international ACM SIGIR conference on research and development in information retrieval, SIGIR ’05 (pp. 465–471). New York, NY, USA: ACM. https://doi.org/10.1145/1076034.1076114.
He, B., & Ounis, I. (2005b). Term frequency normalisation tuning for BM25 and DFR models (pp. 200–214). Berlin, Heidelberg: Springer. https://doi.org/10.1007/978-3-540-31865-1_15.
Knaus, D., Mittendorf, E., & Schauble, P. (1994). Improving a basic retrieval method by links and passage level evidence. In Proceedings of the 3rd text REtrieval conference (pp. 241–241).
Lipani, A., Lupu, M., Hanbury, A., & Aizawa, A. (2015). Verboseness fission for BM25 document length normalization. In Proceedings of the 2015 international conference on the theory of information retrieval, ICTIR ’15 (pp. 385–388). New York, NY, USA: ACM. https://doi.org/10.1145/2808194.2809486.
Lv, Y., & Zhai, C. (2011a). Adaptive term frequency normalization for BM25. In Proceedings of the 20th ACM international conference on information and knowledge management, CIKM ’11 (pp. 1985–1988). New York, NY, USA: ACM. https://doi.org/10.1145/2063576.2063871.
Lv, Y., & Zhai, C. (2011b). Lower-bounding term frequency normalization. In Proceedings of the 20th ACM international conference on information and knowledge management, CIKM ’11 (pp. 7–16). New York, NY, USA: ACM. https://doi.org/10.1145/2063576.2063584.
Lv, Y., & Zhai, C. (2011c). When documents are very long, BM25 fails! In Proceedings of the 34th international ACM SIGIR conference on research and development in information retrieval, SIGIR ’11 (pp. 1103–1104). New York, NY, USA: ACM. https://doi.org/10.1145/2009916.2010070.
Metzler, D. (2008). Generalized inverse document frequency. In Proceedings of the 17th ACM conference on information and knowledge management, CIKM ’08 (pp. 399–408). New York, NY, USA: ACM. https://doi.org/10.1145/1458082.1458137.
Na, S. H., Kang, I. S., & Lee, J. H. (2008). Improving term frequency normalization for multi-topical documents and application to language modeling approaches (pp. 382–393). Berlin, Heidelberg: Springer. https://doi.org/10.1007/978-3-540-78646-7_35.
Robertson, S. E., & Walker, S. (1999). Okapi/Keenbow at TREC-8. In Proceedings of the 8th text REtrieval conference (Vol. 8, pp. 151–162).
Robertson, S. E., Walker, S., Jones, S., Hancock-Beaulieu, M., & Gatford, M. (1994). Okapi at TREC-3. In Proceedings of the 3rd text REtrieval conference (Vol. 3, pp. 109–126).
Robertson, S., & Zaragoza, H. (2009). The probabilistic relevance framework: BM25 and beyond. Foundations and Trends® in Information Retrieval, 3(4), 333–389. https://doi.org/10.1561/1500000019.
Roelleke, T. (2013). Information retrieval models: Foundations and relationships. https://doi.org/10.2200/S00494ED1V01Y201304ICR027.
Roelleke, T., Kaltenbrunner, A., & Baeza-Yates, R. (2015). Harmony assumptions in information retrieval and social networks. The Computer Journal, 58(11), 2982. https://doi.org/10.1093/comjnl/bxv031.
Roelleke, T., & Wang, J. (2008). TF-IDF uncovered: A study of theories and probabilities. In Proceedings of the 31st annual international ACM SIGIR conference on research and development in information retrieval, SIGIR ’08 (pp. 435–442). New York, NY, USA: ACM. https://doi.org/10.1145/1390334.1390409.
Rousseau, F., & Vazirgiannis, M. (2013). Composition of TF normalizations: New insights on scoring functions for ad hoc IR. In Proceedings of the 36th international ACM SIGIR conference on research and development in information retrieval, SIGIR ’13 (pp. 917–920). New York, NY, USA: ACM. https://doi.org/10.1145/2484028.2484121.
Salton, G., & Buckley, C. (1988). Term-weighting approaches in automatic text retrieval. Information Processing & Management, 24(5), 513–523. https://doi.org/10.1016/0306-4573(88)90021-0.
Singhal, A., Buckley, C., & Mitra, M. (1996). Pivoted document length normalization. In Proceedings of the 19th annual international ACM SIGIR conference on research and development in information retrieval, SIGIR ’96 (pp. 21–29). New York, NY, USA: ACM. https://doi.org/10.1145/243199.243206.
Zhai, C., & Lafferty, J. (2001). A study of smoothing methods for language models applied to ad hoc information retrieval. In Proceedings of the 24th annual international ACM SIGIR conference on research and development in information retrieval, SIGIR ’01 (pp. 334–342). New York, NY, USA: ACM. https://doi.org/10.1145/383952.384019.
Acknowledgements
Open access funding provided by Austrian Science Fund (FWF). This research was partly supported by the Austrian Science Fund (FWF) Project Number P25905-N23 (ADmIRE). This work has been supported by the Self-Optimizer project (FFG 852624) in the EUROSTARS programme, funded by EUREKA, the BMWFW and the European Union.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Lipani, A., Roelleke, T., Lupu, M. et al. A systematic approach to normalization in probabilistic models. Inf Retrieval J 21, 565–596 (2018). https://doi.org/10.1007/s10791-018-9334-1