1 Introduction

The decomposition of generalization errors into bias and variance (Geman et al. 1992) is one of the most profound insights of learning theory. Bias is caused by low capacity of models when the training samples are assumed to be infinite, whereas variance is caused by overfitting to finite samples. In this article, we apply the analysis to a new set of problems in Compositional Distributional Semantics, which studies the calculation of meanings of natural language phrases by vector representations of their constituent words. We prove an upper bound for the bias of a widely used compositional framework, the additive composition (Foltz et al. 1998; Landauer and Dumais 1997; Mitchell and Lapata 2010).

The calculation of meaning is a fundamental problem in Natural Language Processing (NLP). In recent years, vector representations have seen great success at conveying meanings of individual words (Levy et al. 2015). These vectors are constructed from statistics of contexts surrounding the words, based on the Distributional Hypothesis that words occurring in similar contexts tend to have similar meanings (Harris 1954). For example, given a target word t, one can consider its context as the close neighbors of t in a corpus, and assess the probability \(p^t_{i}\) of the i-th word (in a fixed lexicon) occurring in the context of t. Then, the word t is represented by a vector \(\bigl (F(p^t_{i})\bigr )_i\) (where F is some function), and words with similar meanings to t will have similar vectors (Miller and Charles 1991).

Beyond the word level, a naturally following challenge is to represent meanings of phrases or even sentences. Based on the Distributional Hypothesis, it is generally believed that such vectors should also be constructed from surrounding contexts, at least for phrases observed in a corpus (Boleda et al. 2013). However, a main obstacle here is that phrases are far more sparse than individual words. For example, in the British National Corpus (BNC) (The BNC Consortium 2007), which consists of 100 M word tokens, a total of 16 K lemmatized words are observed more than 200 times, but there are only 46 K such bigrams, far fewer than the \(16,000^2\) possibilities for two-word combinations. Even with a larger corpus, one would mostly observe more rare words due to Zipf’s Law, so most of the two-word combinations will always be rare or unseen. Therefore, a direct estimation of the surrounding contexts of a phrase can have large sampling error. This partially fuels the motivation to construct phrase vectors by combining word vectors (Mitchell and Lapata 2010), which is also based on the linguistic intuition that meanings of phrases are “composed” from meanings of their constituent words. From a machine learning point of view, word vectors have smaller sampling errors, i.e. lower variance, because words are more abundant than phrases. A compositional framework which calculates phrase meanings from word vectors will therefore be favorable if its bias is also small.

Here, “bias” is the distance between two types of phrase vectors: one calculated by composing the vectors of the constituent words (the composed vector), and the other assessed from context statistics where the phrase itself is treated as a target (the natural vector). The statistics are assessed from an infinitely large ideal corpus, so that the natural vector of the phrase can be reliably estimated without sampling error, hence conveying the meaning of the phrase by the Distributional Hypothesis. If the distance between the two vectors is small, the composed vector can be viewed as a reasonable approximation of the natural vector, hence an approximation of meaning; moreover, the composed vector can be more reliably estimated from finite real corpora because words are more abundant than phrases. Therefore, an upper bound for the bias provides learning-theoretic support for the composition operation.

A number of compositional frameworks have been proposed in the literature (Baroni and Zamparelli 2010; Grefenstette and Sadrzadeh 2011; Socher et al. 2012; Paperno et al. 2014; Hashimoto et al. 2014). Some are complicated methods based on linguistic intuitions (Coecke et al. 2010), and others are compared to human judgments for evaluation (Mitchell and Lapata 2010). However, none of them has been previously analyzed regarding its bias.Footnote 1 The most widely used framework is additive composition (Foltz et al. 1998; Landauer and Dumais 1997), in which the composed vector is calculated by averaging word vectors. Yet, it was previously unknown whether this average is in any way related to the statistics of contexts surrounding the corresponding phrases.

In this article, we prove an upper bound for the bias of additive composition of two-word phrases, and demonstrate several applications of the theory. An overview is given in Fig. 1; we summarize as follows.

Fig. 1 An overview of this article. Arrows show dependencies between sections

In Sect. 2.1, we introduce notations and define the vectors we consider in this work. Roughly speaking, we use \(p^{\varUpsilon }_{i}\) to denote the probability of the i-th word in a fixed lexicon occurring within a context window of a target (i.e. a word or phrase) \({\varUpsilon }\), and define the natural vector as \(\mathbf {w}^{\varUpsilon }:=\bigl (c\cdot w^{\varUpsilon }_{i}\bigr )_{i}\), where

$$\begin{aligned} w^{\varUpsilon }_{i}:=F(p^{\varUpsilon }_{i}+1/n)-a^{\varUpsilon }-b_{i}. \end{aligned}$$

Here, n is the lexicon size, \(a^{\varUpsilon }\), \(b_{i}\) and c are real numbers and F is a function. We note that this formalization is general enough to be compatible with several previous lines of research.

In Sect. 2.2, we describe our bias bound for additive composition, sketch its proof, and emphasize its practical consequences that can be tested on a natural language corpus. Briefly, we show that the more exclusively two successive words tend to occur together, the more accurately one can guarantee their additive composition as an approximation to the natural phrase vector; but this guarantee comes with the condition that F should be a function that increases steeply around 0 and grows slowly at \(\infty \); and when this condition is satisfied, one can derive an additional property that all natural vectors have approximately the same norm. These consequences are all experimentally verified in Sect. 5.3.

In Sect. 2.3, we give a formalized version of the bias bound (Theorem 1), with our assumptions on natural language data clarified. These assumptions include the well-known Zipf’s Law, a similar law applied to word co-occurrences which we call the Generalized Zipf’s Law, and some intuitively acceptable conditions. The assumptions are experimentally tested in Sects. 5.1 and 5.2. Moreover, we show that the Generalized Zipf’s Law can be derived from a widely used generative model for natural language (Sect. 2.6).

In Sect. 2.4, we prove some key lemmas regarding the aforementioned condition on function F; in Sect. 2.5 we formally prove the bias bound (with some supporting lemmas proven in “Appendix 1”), and further give an intuitive explanation for the strength of additive composition: namely, with two words given, the vector of each can be decomposed into two parts, one encoding the contexts shared by both words, and the other encoding contexts not shared; when the two word vectors are added up, the non-shared parts tend to cancel out, because they have nearly independent distributions; as a result, the shared part gets reinforced, and it is precisely this shared part that the natural phrase vector encodes.

Empirically, we demonstrate three applications of our theory:

  1. The condition required to be satisfied by F provides a unified explanation of why some recently proposed word vectors are good at additive composition (Sect. 3.1). Our experiments also verify that the condition drastically affects additive compositionality and other properties of vector representations (Sects. 5.3, 6).

  2. Our intuitive explanation inspires a novel method for making vectors recognize word order, which has long been considered an issue for additive composition. Briefly speaking, since additive composition cancels out the non-shared parts of word vectors and reinforces the shared one, we show that one can use labels on context words to control what is shared. Specifically, we propose the Near–far Context, in which the contexts of ordered bigrams are shared (Sect. 3.2). Our experiments show that the resulting vectors can indeed assess meaning similarities between ordered bigrams (Sect. 5.4), and demonstrate strong performance on phrase similarity tasks (Sect. 6.1). Unlike previous approaches, Near–far Context still composes vectors by taking the average, retaining the merits of being parameter-free and having a bias bound.

  3. Our theory suggests that singular value decomposition (SVD) is suitable for preserving additive compositionality in dimension reduction of word vectors (Sect. 3.3). Experiments also show that SVD might perform better than other models in additive composition (Sects. 5.5, 6).

We discuss related works in Sect. 4 and conclude in Sect. 7.

2 Theory

In this section, we discuss vector representations constructed from an ideal natural language corpus, and establish a mathematical framework for analyzing additive composition. Our analysis makes several assumptions on the ideal corpus, which might be approximations or oversimplifications of real data. In Sect. 5, we will test these assumptions on a real corpus and verify that the theory still makes reasonable predictions.

2.1 Notation and vector representation

A natural language corpus is a sequence of words. Ideally, we assume that the sequence is infinitely long and contains an infinite number of distinct words.

Notation 1

We consider a finite sample of the infinite ideal corpus. In this sample, we denote the number of distinct words by n, and use the n words as a lexicon to construct vector representations. From the sample, we assess the count \(C_i\) of the i-th word in the lexicon, and assume that index \(1\le i\le n\) is taken such that \(C_{i}\ge C_{i+1}\). Let \(C:=\sum _{i=1}^{n}C_{i}\) be the total count, and denote \(p_{i,n}:=C_i/C\).

Table 1 Contexts are taken as the closest five words to each side for the targets “tax” and “rate”, and four for the target “tax_rate”
Table 2 List of target types

With a sample corpus given, we can construct vector representations for targets, which are either words or phrases. To define the vectors, one starts by specifying a context for each target, usually taken as the words surrounding the target in the corpus. As an example, Table 1 shows a word sequence, a phrase target and two word targets; contexts are taken as the closest four or five words to the targets.

Notation 2

We use s, t to denote word targets, and st a phrase target consisting of two consecutive words s and t. When the word order is ignored (i.e., either st or ts), we denote the target by \(\{st\}\). A general target is denoted by \({\varUpsilon }\). Later in this article, we will consider other types of targets as well, and a full list of target types is shown in Table 2.

Notation 3

Let \(C({\varUpsilon })\) be the count of target \({\varUpsilon }\), and \(C_i^{\varUpsilon }\) the count of the i-th word co-occurring in the context of \({\varUpsilon }\). Denote \(p^{\varUpsilon }_{i,n}:=C_i^{\varUpsilon }/C({\varUpsilon })\).

In order to approximate the ideal corpus, we will take larger and larger samples and consider the limit. Under this limit, it is obvious that \(n\rightarrow \infty \) and \(C\rightarrow \infty \). Further, we will assume some limit properties on \(p_{i,n}\) and \(p^{\varUpsilon }_{i,n}\) as specified in Sect. 2.3. These properties capture our idealization of an infinitely large natural language corpus. In Sect. 2.6, we will show that such properties can be derived from a Hierarchical Pitman–Yor Process, a widely used generative model for natural language data.

Definition 4

We construct a natural vector for \({\varUpsilon }\) from the statistics \(p^{\varUpsilon }_{i,n}\) as follows:

$$\begin{aligned} \mathbf {w}^{\varUpsilon }_{n}:=\bigl (c_n\cdot w^{\varUpsilon }_{i,n}\bigr )_{1\le i\le n} \quad \text {and}\quad w^{\varUpsilon }_{i,n}:=F(p^{\varUpsilon }_{i,n}+1/n)-a^{\varUpsilon }_n-b_{i,n}. \end{aligned}$$

Here, \(a^{\varUpsilon }_n\), \(b_{i,n}\) and \(c_n\) are real numbers and F is a smooth function on \((0,\infty )\). The subscript n emphasizes that the vector will change if n becomes larger (i.e. a larger sample corpus is taken). The scalar \(c_n\) is for normalizing scales of vectors. In Sect. 2.2, we will further specify some conditions on \(a^{\varUpsilon }_n\), \(b_{i,n}\), \(c_n\) and F, but without much loss of generality.

Table 3 Frequently used notations and general conventions in this article

Considering \(F(p^{\varUpsilon }_{i,n}+1/n)\) instead of \(F(p^{\varUpsilon }_{i,n})\) can be viewed as a smoothing scheme that guarantees F(x) is only applied to \(x>0\). We will consider F that is not continuous at 0, such as \(F(x):=\ln {x}\); yet, \(w^{\varUpsilon }_{i,n}\) has to be well-defined even if \(p^{\varUpsilon }_{i,n}=0\). In practice, the \(p^{\varUpsilon }_{i,n}\) estimated from a finite corpus can often be 0; theoretically, the smoothing scheme plays a role in our proof as well.

The definition of \(\mathbf {w}^{\varUpsilon }_{n}\) is general enough to cover a wide range of previously proposed distributional word vectors. For example, if \(F(x)=\ln {x}\), \(a^{\varUpsilon }_n=0\) and \(b_{i,n}=\ln {p_{i,n}}\), then \(w^{\varUpsilon }_{i,n}\) is the Point-wise Mutual Information (PMI) value that has been widely adopted in NLP (Church and Hanks 1990; Dagan et al. 1994; Turney 2001; Turney and Pantel 2010). More recently, the Skip-Gram with Negative Sampling (SGNS) model (Mikolov et al. 2013a) is shown to be a matrix factorization of the PMI matrix (Levy and Goldberg 2014b); and the more general form of \(a^{\varUpsilon }_n\) and \(b_{i,n}\) is explicitly introduced by the GloVe model (Pennington et al. 2014). Regarding other forms of F, it has been reported in Lebret and Collobert (2014) and Stratos et al. (2015) that empirically \(F(x):=\sqrt{x}\) outperforms \(F(x):=x\). We will discuss function F further in Sect. 3.1, and review some other distributional vectors in Sect. 4.
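
As a concrete illustration of this formalization (a minimal sketch with our own toy corpus, window size and smoothing scheme, assuming \(a^{\varUpsilon }=0\) and the unigram-based \(b_{i}\); with \(F=\ln \) the entries are smoothed PMI-like values), natural vectors for word targets can be assembled from co-occurrence counts as follows.

```python
import numpy as np
from collections import Counter

def natural_vectors(tokens, window=5, F=np.log):
    """Build natural vectors w^t_i = F(p^t_i + 1/n) - b_i for every word target t.

    A sketch of the formalization above, with a^t = 0 and b_i = F(p_i + 1/n);
    with F = log the entries are (smoothed) PMI-like values.
    """
    vocab = sorted(set(tokens))
    idx = {w: i for i, w in enumerate(vocab)}
    n = len(vocab)

    unigram = Counter(tokens)                   # word counts C_i
    cooc = np.zeros((n, n))                     # co-occurrence counts, rows = targets
    for pos, t in enumerate(tokens):
        lo, hi = max(0, pos - window), min(len(tokens), pos + window + 1)
        for c in tokens[lo:pos] + tokens[pos + 1:hi]:
            cooc[idx[t], idx[c]] += 1

    p_i = np.array([unigram[w] for w in vocab], float) / len(tokens)   # unigram p_i
    p_t = cooc / np.maximum(cooc.sum(axis=1, keepdims=True), 1)        # context distribution p^t_i

    b = F(p_i + 1.0 / n)                        # b_i, one value per context word
    return vocab, F(p_t + 1.0 / n) - b          # rows are the vectors (w^t_i)_i

tokens = "the tax rate is low but the tax burden is high".split()
vocab, W = natural_vectors(tokens, window=2)
print(vocab)
print(np.round(W[vocab.index("tax")], 2))
```

The factors \(a^{\varUpsilon }_n\) and \(c_n\) are left out here; they are normalized in Sect. 2.2.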

We finish this section by pointing to Table 3 for a list of frequently used notations.

2.2 Practical meaning of the bias bound

A compositional framework combines vectors \(\mathbf {w}^s_{n}\) and \(\mathbf {w}^t_{n}\) to represent the meaning of phrase “s t”. In this work, we study relations between this composed vector and the natural vector \(\mathbf {w}^{\{st\}}_{n}\) of the phrase target.Footnote 2 More precisely, we study the Euclidean distance

$$\begin{aligned} \lim _{n\rightarrow \infty } ||\mathbf {w}^{\{st\}}_{n}-{{\mathrm{COMP}}}(\mathbf {w}^s_{n}, \mathbf {w}^t_{n})||\end{aligned}$$

where \({{\mathrm{COMP}}}(\cdot ,\cdot )\) is the composition operation. If a sample corpus is taken larger and larger, we have limit \(n\rightarrow \infty \), and \(\mathbf {w}^{\{st\}}_{n}\) will be well estimated to represent the meaning of “s t or t s”. Therefore, the above distance can be viewed as the bias of approximating \(\mathbf {w}^{\{st\}}_{n}\) by the composed vector \({{\mathrm{COMP}}}(\mathbf {w}^s_{n}, \mathbf {w}^t_{n})\). In practice, especially when \({{\mathrm{COMP}}}\) is a complicated operation with parameters, it has been a widely adopted approach to learn the parameters by minimizing the same distances for phrases observed in corpus (Dinu et al. 2013; Baroni and Zamparelli 2010; Guevara 2010). These practices further motivate our study on the bias.

Definition 5

We consider additive composition, where \({{\mathrm{COMP}}}(\mathbf {w}^s_{n}, \mathbf {w}^t_{n}):=\frac{1}{2}(\mathbf {w}^s_{n}+\mathbf {w}^t_{n})\) is a parameter-free composition operator. We define

$$\begin{aligned} \mathscr {B}^{\{st\}}_{n}:=||\mathbf {w}^{\{st\}}_{n}-\frac{1}{2}(\mathbf {w}^s_{n}+\mathbf {w}^t_{n})||. \end{aligned}$$
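
As a minimal numeric sketch (the 4-dimensional vectors below are invented for illustration), the bias of Definition 5 is simply the Euclidean distance between the natural phrase vector and the average of the two word vectors:

```python
import numpy as np

def additive_bias(w_st, w_s, w_t):
    """B^{st}_n = || w^{st} - (w^s + w^t) / 2 ||, as in Definition 5."""
    composed = 0.5 * (np.asarray(w_s) + np.asarray(w_t))
    return np.linalg.norm(np.asarray(w_st) - composed)

# Toy 4-dimensional vectors, invented for illustration.
w_s  = np.array([0.6, 0.2, -0.1, 0.3])
w_t  = np.array([0.5, 0.1,  0.2, 0.4])
w_st = np.array([0.5, 0.2,  0.0, 0.3])
print(additive_bias(w_st, w_s, w_t))   # 0.1: the phrase vector is close to the average
```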

Our analysis starts from the observation that, every word in the context of \(\{st\}\) also occurs in the contexts of s and t: as illustrated in Table 1, if a word token t (e.g. “rate”) comes from a phrase \(\{st\}\) (e.g. “tax rate”), and if the context window size is not too small, the context for this token of t is almost the same as the context of \(\{st\}\). This motivates us to decompose the context of t into two parts, one coming from \(\{st\}\) and the other not.

Definition 6

Define target \(s/t\backslash s\) as the tokens of word t which do not occur next to word s in corpus. We use \(\pi _{s/t\backslash s}\) to denote the probability of t not occurring next to s, conditioned on a token of word t. Practically, \((1-\pi _{s/t\backslash s})\) can be estimated by the count ratio \(C(\{st\})/C(t)\). Then, we have the following equation

$$\begin{aligned} p^t_{i,n} =\pi _{s/t\backslash s}p^{s/t\backslash s}_{i,n} + (1 - \pi _{s/t\backslash s})p^{\{st\}}_{i,n} \quad \text {for all }i,n \end{aligned}$$
(1)

because a word in the context of t occurs in the context of either \(\{st\}\) or \(s/t\backslash s\).
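
The identity in Eq. (1), together with the count-based estimate \(1-\pi _{s/t\backslash s}\approx C(\{st\})/C(t)\), can be checked directly on counts; the following sketch uses invented counts for a five-word lexicon.

```python
import numpy as np

# Hypothetical counts for a tiny lexicon of 5 context words.
C_t      = 1000                                            # C(t): tokens of word t
C_st     = 400                                             # C({st}): tokens of t next to s
C_i_t    = np.array([300, 250, 200, 150, 100], float)      # C_i^t
C_i_st   = np.array([180, 120,  60,  30,  10], float)      # C_i^{st}
C_i_rest = C_i_t - C_i_st                                  # C_i^{s/t\s}

pi = 1.0 - C_st / C_t                                      # pi_{s/t\s}
p_t    = C_i_t    / C_t                                    # p^t_i       (Notation 3)
p_st   = C_i_st   / C_st                                   # p^{st}_i
p_rest = C_i_rest / (C_t - C_st)                           # p^{s/t\s}_i

# Eq. (1): p^t_i = pi * p^{s/t\s}_i + (1 - pi) * p^{st}_i
print(np.allclose(p_t, pi * p_rest + (1 - pi) * p_st))     # True
```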

We can view \(\pi _{s/t\backslash s}\) and \(\pi _{t/s\backslash t}\) as indicating how weak the “collocation” between s and t is. When \(\pi _{s/t\backslash s}\) and \(\pi _{t/s\backslash t}\) are small, s and t tend to occur next to each other exclusively, so \(\mathbf {w}^s_{n}\) and \(\mathbf {w}^t_{n}\) are likely to correlate with \(\mathbf {w}^{\{st\}}_{n}\), making \(\mathscr {B}^{\{st\}}_{n}\) small. This is the fundamental idea of our bias bound, which estimates \(\mathscr {B}^{\{st\}}_{n}\) in terms of \(\pi _{s/t\backslash s}\) and \(\pi _{t/s\backslash t}\). We give a detailed sketch below. First, by Triangle Inequality one immediately has

$$\begin{aligned} \mathscr {B}^{\{st\}}_{n}\le ||\mathbf {w}^{\{st\}}_{n}||+\frac{1}{2}\bigl (||\mathbf {w}^s_{n}||+||\mathbf {w}^t_{n}||\bigr ). \end{aligned}$$

Then, we note that both \(\mathscr {B}^{\{st\}}_{n}\) and \(||\mathbf {w}^{{\varUpsilon }}_{n}||\) scale with \(c_n\). Without loss of generality, we can assume that \(c_n\) is normalized such that the average norm of \(\mathbf {w}^{{\varUpsilon }}_{n}\) equals 1. Thus, if we can prove that \(||\mathbf {w}^{{\varUpsilon }}_{n}||=1\) for every target \({\varUpsilon }\), we will have an upper bound

$$\begin{aligned} \mathscr {B}^{\{st\}}_{n}\le 2. \end{aligned}$$
(2)

This is intuitively obvious because if all vectors lie on the unit sphere, distances between them will be less than the diameter 2. In this article, we will show that, roughly speaking, it is indeed the case that \(||\mathbf {w}^{{\varUpsilon }}_{n}||=1\) for “every” \({\varUpsilon }\), and the above “upper bound” can be further strengthened using Eq. (1).

More precisely, we will prove that if a target phrase ST is randomly chosen, then \(\lim \limits _{n\rightarrow \infty }||\mathbf {w}^{\{ST\}}_{n}||\) converges to 1 in probability. The argument is sketched as follows. First, when ST is random, \(p^{\{ST\}}_{i,n}\) and \(w^{\{ST\}}_{i,n}\) become random variables. We assume that for each \(i\ne j\), \(p^{\{ST\}}_{i,n}\) and \(p^{\{ST\}}_{j,n}\) are independent random variables. Note that this assumption is in contrast to the fact that \(p_{i,n}\ge p_{j,n}\) for \(i<j\); nonetheless, we assume that \(p^{\{ST\}}_{i,n}\) is random enough so that when i changes, no obvious relation exists between \(p^{\{ST\}}_{i,n}\) and \(p^{\{ST\}}_{j,n}\). Thus, \(\bigl (w^{\{ST\}}_{i,n}\bigr )^2\)’s (\(1\le i\le n\), n fixed) are independent and we can apply the Law of Large Numbers:

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{\sum _{i=1}^n \bigl (w^{\{ST\}}_{i,n}\bigr )^2}{\sum _{i=1}^n \mathbb {E}\bigl [\bigl (w^{\{ST\}}_{i,n}\bigr )^2\bigr ]}=1 \quad \text {in probability}. \end{aligned}$$
(3)

In words, the fluctuations of \(\bigl (w^{\{ST\}}_{i,n}\bigr )^2\)’s cancel out each other and their sum converges to its expectation. However, Eq. (3) requires a stronger statement than the ordinary Law of Large Numbers; namely, we do not assume \(p^{\{ST\}}_{i,n}\) and \(p^{\{ST\}}_{j,n}\) are identically distributed.Footnote 3 For this generalized Law of Large Numbers we need some technical conditions. One necessary condition is \(\lim \limits _{n\rightarrow \infty }\sum _{i=1}^n \mathbb {E}\bigl [\bigl (w^{\{ST\}}_{i,n}\bigr )^2\bigr ]=\infty \), which we prove by explicitly calculating \(\mathbb {E}\bigl [\bigl (w^{\{ST\}}_{i,n}\bigr )^2\bigr ]\); another requirement is that the fluctuations of \(\bigl (w^{\{ST\}}_{i,n}\bigr )^2\) must be at comparable scales so they indeed cancel out. This is formalized as a uniform integrability condition, and we will show it imposes a non-trivial constraint on the function F in the definition of word vectors. Finally, if Eq. (3) holds, by setting \(c_n:=\bigl (\sum _{i=1}^n \mathbb {E}\bigl [\bigl (w^{\{ST\}}_{i,n}\bigr )^2\bigr ]\bigr )^{-1/2}\) we are done.
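
The kind of convergence claimed in Eq. (3) can be simulated. The sketch below uses our own toy setting: independent but not identically distributed variables with a Pareto tail of index 1 (mimicking Assumption (B2) in Sect. 2.3), transformed by \(F=\ln \); the closed form for the second moment uses the fact that \(\ln (X_i/\beta _i)\) is exponentially distributed for such variables.

```python
import numpy as np

rng = np.random.default_rng(0)

def ratio(n, trials=5):
    """Empirical sum over expected sum for independent, non-identically distributed
    terms F(X_i)^2, with F = log and X_i Pareto of index 1 with scale beta_i.
    Since log(X_i / beta_i) is Exp(1), E[(log X_i)^2] = (log beta_i)^2 + 2 log beta_i + 2.
    """
    out = []
    for _ in range(trials):
        beta = 1.0 + rng.random(n)                       # scales differ across i
        X = beta / rng.random(n)                         # P(X_i > x) = beta_i / x
        emp = np.sum(np.log(X) ** 2)
        exp = np.sum(np.log(beta) ** 2 + 2 * np.log(beta) + 2)
        out.append(emp / exp)
    return np.array(out)

for n in (10**2, 10**4, 10**6):
    print(n, ratio(n).round(3))    # the ratio concentrates around 1 as n grows
```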

We further strengthen upper bound (2) as follows. First, Eq. (1) suggests:

$$\begin{aligned} F(p^t_{i,n}+1/n) \approx \pi _{s/t\backslash s}F(p^{s/t\backslash s}_{i,n}+1/n) + (1 - \pi _{s/t\backslash s})F(p^{\{st\}}_{i,n}+1/n). \end{aligned}$$

Since F is smooth, this approximation is justified as long as \(p^{s/t\backslash s}_{i,n}\) and \(p^{\{st\}}_{i,n}\) are small compared to \(1/n\). Then, we will rigorously prove that, when n is sufficiently large, the total error of the above approximation becomes infinitesimal:

$$\begin{aligned} \mathbf {w}^t_{n} \risingdotseq \pi _{s/t\backslash s}\mathbf {w}^{s/t\backslash s}_{n} + (1 - \pi _{s/t\backslash s})\mathbf {w}^{\{st\}}_{n}. \end{aligned}$$

So we can replace \(\mathbf {w}^t_{n}\) and \(\mathbf {w}^s_{n}\) in the definition of \(\mathscr {B}^{\{st\}}_{n}\):

$$\begin{aligned} \mathscr {B}^{\{st\}}_{n}\risingdotseq \frac{1}{2} ||(\pi _{s/t\backslash s}+\pi _{t/s\backslash t})\mathbf {w}^{\{st\}}_{n} -\pi _{s/t\backslash s}\mathbf {w}^{s/t\backslash s}_{n} -\pi _{t/s\backslash t}\mathbf {w}^{t/s\backslash t}_{n}||. \end{aligned}$$
(4)

With arguments similar to the previous paragraph, we have \(\lim \limits _{n\rightarrow \infty }||\mathbf {w}^{S/T\backslash S}_{n}||\) and \(\lim \limits _{n\rightarrow \infty }||\mathbf {w}^{T/S\backslash T}_{n}||\) converge to 1 in probability. Therefore, by Triangle Inequality we get

$$\begin{aligned} (4)\le \frac{1}{2}\Bigl ( (\pi _{s/t\backslash s}+\pi _{t/s\backslash t})||\mathbf {w}^{\{st\}}_{n}||+\pi _{s/t\backslash s}||\mathbf {w}^{s/t\backslash s}_{n}||+\pi _{t/s\backslash t}||\mathbf {w}^{t/s\backslash t}_{n}||\Bigr ) = \pi _{s/t\backslash s}+\pi _{t/s\backslash t}, \end{aligned}$$

a better bound than (2). However, our bias bound is even stronger than this. Our further argument relies on the intuition that \(\mathbf {w}^{s/t\backslash s}_{n}\) should have “positive correlation” with \(\mathbf {w}^{\{st\}}_{n}\), because as targets, both \(s/t\backslash s\) and \(\{st\}\) contain word t; on the other hand, \(\mathbf {w}^{s/t\backslash s}_{n}\) and \(\mathbf {w}^{t/s\backslash t}_{n}\) should be “independent” because targets \(s/t\backslash s\) and \(t/s\backslash t\) cover disjoint tokens of different words. With this intuition, we will derive the following bias bound:

$$\begin{aligned} (4)\le \frac{1}{2}\sqrt{(\pi _{s/t\backslash s} +\pi _{t/s\backslash t})^2+\pi _{s/t\backslash s}^2+\pi _{t/s\backslash t}^2}= \sqrt{\frac{1}{2}(\pi _{s/t\backslash s}^2+\pi _{t/s\backslash t}^2+\pi _{s/t\backslash s}\pi _{t/s\backslash t})}. \end{aligned}$$

A brief explanation can be found in Sect. 2.5, after the formal proof. Our experiments suggest that this bound is remarkably tight (Sect. 5.3). In addition, the intuitive explanation inspires a way to make additive composition aware of word order (Sect. 3.2).

In the rest of this section, we will formally normalize \(c_n\), \(a^{\varUpsilon }_n\), \(b_{i,n}\) and F for simplicity of discussion. These are mild conditions and do not affect the generality of our results. Then, we will summarize our claim of the bias bound, focusing on its practical verifiability.

Definition 7

Let \({\varLambda }_n\) be the set of two-word phrases observed in a finite corpus, word order ignored. We normalize \(c_n\) such that the average norm of natural phrase vectors becomes 1:

$$\begin{aligned} \frac{1}{|{\varLambda }_n |}\sum _{\{st\}\in {\varLambda }_n}||\mathbf {w}^{\{st\}}_{n} ||=1. \end{aligned}$$

Definition 8

Since \(b_{i,n}\) is canceled out in the definition of \(\mathscr {B}^{\{st\}}_{n}\), it does not affect the bias. It does affect \(\mathbf {w}^{\{st\}}_{n}\); we set \(b_{i,n}\) such that the centroid of natural phrase vectors becomes \(\mathbf {0}\):

$$\begin{aligned} b_{i,n}:=\frac{1}{|{\varLambda }_n |}\sum _{\{st\}\in {\varLambda }_n}\Bigl (F(p^{\{st\}}_{i,n}+1/n)-a^{\{st\}}_{n}\Bigr ), \end{aligned}$$
(5)

so that

$$\begin{aligned} \frac{c_n}{|{\varLambda }_n |}\sum _{\{st\}\in {\varLambda }_n}w^{\{st\}}_{i,n}=0\quad \text {for all }i. \end{aligned}$$

Note that, if the centroid of natural phrase vectors is far from \(\mathbf {0}\), the normalization in Definition 7 would cause all phrase vectors to cluster around one point on the unit sphere. Then, the phrase vectors would not be able to distinguish different meanings of phrases. The choice of \(b_{i,n}\) in Definition 8 prevents such degenerate cases.

Next, if \(c_n\) and F are fixed, \(\mathscr {B}^{\{st\}}_{n}\) attains its minimum when

$$\begin{aligned} a^{\{st\}}_{n}-\frac{a^s_n+a^t_n}{2}=\frac{1}{n}\sum _{i=1}^{n}\biggl (F(p^{\{st\}}_{i,n}+1/n)-\frac{F(p^s_{i,n}+1/n)+F(p^t_{i,n}+1/n)}{2}\biggr ). \end{aligned}$$

Hence, it is favorable to have the above equality. We can achieve it by adjusting \(a^{\varUpsilon }_n\) such that the entries of each vector average to 0.

Definition 9

We set

$$\begin{aligned} a^{\varUpsilon }_n:=\frac{1}{n}\sum _{i=1}^n \Bigl (F(p^{\varUpsilon }_{i,n}+1/n)-b_{i,n}\Bigr ), \end{aligned}$$
(6)

so that

$$\begin{aligned} \frac{c_n}{n}\sum _{i=1}^n w^{\varUpsilon }_{i,n}=0 \quad \text {for all }{\varUpsilon }. \end{aligned}$$

Practically, one can calculate \(a^{\varUpsilon }_n\) and \(b_{i,n}\) by first assuming \(b_{i,n}=0\) in (6) to obtain \(a^{\varUpsilon }_n\), and then substituting \(a^{\{st\}}_n\) into (5) to obtain the actual \(b_{i,n}\). The value of \(a^{\varUpsilon }_n\) will not change because if all vectors have average entry 0, so does their centroid. In Sect. 2.4, we will derive asymptotic values of \(a^{\varUpsilon }_n\), \(b_{i,n}\) and \(c_n\) theoretically.
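
The two-step procedure can be written down compactly; the following is a sketch in which the matrix of \(F(p^{\{st\}}_{i,n}+1/n)\) values is randomly generated purely for illustration.

```python
import numpy as np

def normalize(F_phrase):
    """Two-step normalization sketched in Definitions 7-9 (input format is ours).

    F_phrase : array of shape (num_phrases, n); row j holds F(p^{st}_{i,n} + 1/n)
    for one observed phrase {st} and all context words i.
    """
    a = F_phrase.mean(axis=1, keepdims=True)    # a^{st}_n from Eq. (6) with b = 0
    b = (F_phrase - a).mean(axis=0)             # b_{i,n} from Eq. (5)
    W = F_phrase - a - b                        # w^{st}_{i,n} before scaling
    c = 1.0 / np.linalg.norm(W, axis=1).mean()  # c_n from Definition 7
    return a.ravel(), b, c, c * W

rng = np.random.default_rng(0)
F_phrase = np.log(rng.random((50, 300)) + 1e-3)  # hypothetical F(p + 1/n) values
a, b, c, W = normalize(F_phrase)
print(np.abs(W.mean(axis=0)).max())              # centroid ~ 0    (Definition 8)
print(np.abs(W.mean(axis=1)).max())              # row means ~ 0   (Definition 9)
print(np.linalg.norm(W, axis=1).mean())          # average norm 1  (Definition 7)
```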

Definition 10

We assume there is a \(\lambda \) such that \(F'(x)=x^{-1+\lambda }\). So F(x) can be either \(x^\lambda /\lambda \) (if \(\lambda \ne 0\)) or \(\ln x\) (if \(\lambda =0\)). This assumption is mainly for simplicity; intuitively, the behavior of F(x) only matters near \(x\approx 0\), because F is applied to probability values which are close to 0. Indeed, our results can be generalized to any G(x) such that

$$\begin{aligned} \lim _{x\rightarrow 0}G'(x)x^{1-\lambda }=1\quad \text { and }\quad G'(x)x^{1-\lambda }\le M \text { for some constant }M. \end{aligned}$$

See Remark 6 in Sect. 2.3 for further discussion. The exponent \(\lambda \) turns out to be a crucial factor in our theory; we require \(\lambda <0.5\), which imposes a non-trivial constraint on F.

Our bias bound is summarized as follows.

Claim 1

Assume \(\lambda < 0.5\), the factors \(a^{\varUpsilon }_n\), \(b_{i,n}\) and \(c_n\) are normalized as above, and distributional vectors are constructed from an ideal natural language corpus. Then:

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathscr {B}^{\{st\}}_{n}\le \sqrt{\frac{1}{2}(\pi _{s/t\backslash s}^2+\pi _{t/s\backslash t}^2+\pi _{s/t\backslash s}\pi _{t/s\backslash t})}. \end{aligned}$$

As we expected, for more “collocational” phrases, since \(\pi _{s/t\backslash s}\) and \(\pi _{t/s\backslash t}\) are smaller, the bias bound becomes stronger. Claim 1 states a prediction that can be empirically tested on a real large corpus; namely, one can estimate \(p^{\varUpsilon }_{i,n}\) from the corpus and construct \(\mathbf {w}^{\varUpsilon }_{n}\) for a fixed n, then check if the inequality holds approximately while omitting the limit. In Sect. 5.3, we conduct the experiment and verify the prediction. Our theoretical assumptions on the “ideal natural language corpus” will be specified in Sect. 2.3.
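
A sketch of such a test is given below; the vectors and counts are placeholders that merely exercise the interface, while on real data they would be estimated from the corpus as described above.

```python
import numpy as np

def claim1_bound(pi_1, pi_2):
    """Right-hand side of Claim 1."""
    return np.sqrt(0.5 * (pi_1**2 + pi_2**2 + pi_1 * pi_2))

def check_claim1(w_st, w_s, w_t, C_st, C_s, C_t):
    """Compare the observed bias with the bound, estimating
    pi_1 = 1 - C({st})/C(t) and pi_2 = 1 - C({st})/C(s) from counts."""
    pi_1 = 1.0 - C_st / C_t
    pi_2 = 1.0 - C_st / C_s
    bias = np.linalg.norm(np.asarray(w_st) - 0.5 * (np.asarray(w_s) + np.asarray(w_t)))
    return bias, claim1_bound(pi_1, pi_2)

# Placeholder vectors and counts, just to exercise the interface.
rng = np.random.default_rng(0)
def unit(v):
    return v / np.linalg.norm(v)
w_st = unit(rng.normal(size=100))
w_s  = unit(0.9 * w_st + 0.1 * unit(rng.normal(size=100)))
w_t  = unit(0.8 * w_st + 0.2 * unit(rng.normal(size=100)))
print(check_claim1(w_st, w_s, w_t, C_st=500, C_s=800, C_t=700))   # (observed bias, bound)
```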

Besides being empirically verifiable for phrases observed in a real corpus, the true value of Claim 1 is that the upper bound holds for an arbitrarily large ideal corpus. We can assume any plausible two-word phrase to occur sufficiently many times in the ideal corpus, even when it is unseen in the real one. In that case, a natural vector for the phrase can only be reliably estimated from the ideal corpus, but Claim 1 suggests that additive composition of word vectors provides a reasonable approximation for that unseen natural vector. Meanwhile, since word vectors can be reliably estimated from the real corpus, Claim 1 endorses additive composition as a reasonable meaning representation for unseen or rare phrases. On the other hand, it endorses additive composition for frequent phrases as well, because such phrases usually have strong collocations and Claim 1 says that the bias in this case is small.

The condition \(\lambda < 0.5\) on function F is crucial; we discuss its empirical implications in Sect. 3.1.

Further, the following is a by-product of Theorem 2 in Sect. 2.4, which corresponds to the previous Eq. (3) in our sketch of proof.

Claim 2

Under the same conditions in Claim 1, we have \(\lim \limits _{n\rightarrow \infty }||\mathbf {w}^{\{st\}}_{n}||=1\) for all \(\{st\}\).

Thus, all natural phrase vectors approximately lie on the unit sphere. This claim is also empirically verified in Sect. 5.3. It enables a link between the Euclidean distance \(\mathscr {B}^{\{st\}}_{n}\) and the cosine similarity, which is the most widely used similarity measure in practice.
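
Concretely, for vectors of (approximately) unit norm the two measures determine each other through the standard identity

$$\begin{aligned} ||\mathbf {u}-\mathbf {v}||^2=||\mathbf {u}||^2+||\mathbf {v}||^2-2\,\mathbf {u}\cdot \mathbf {v}=2\bigl (1-\cos (\mathbf {u},\mathbf {v})\bigr )\quad \text {when }||\mathbf {u}||=||\mathbf {v}||=1, \end{aligned}$$

so a small Euclidean distance between such vectors directly translates into a high cosine similarity.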

2.3 Formalization and assumptions on natural language data

Claim 1 is formalized as Theorem 1 in the following.

Theorem 1

For an ideal natural language corpus, we assume that:

  (A) \(\lim \limits _{n\rightarrow \infty }p_{i,n}\cdot i\ln n=1\).

  (B) Let S, T be randomly chosen word targets. If \({\varUpsilon }:=\) \(\{ST\}\), \(S/T\backslash S\) or \(T/S\backslash T\), then:

    (B1) For n fixed, \(p^{{\varUpsilon }}_{i,n}\)’s \((1\le i\le n)\) can be viewed as independent random variables.

    (B2) Put \(X:=p^{{\varUpsilon }}_{i,n}/p_{i,n}\). There exist \(\xi ,\beta \) such that \(\mathbb {P}(x\le X)=\xi /x\) for \(x\ge \beta \).

  (C) For each i and n, the random variables \(p^{S/T\backslash S}_{i,n}\) and \(p^{T/S\backslash T}_{i,n}\) are independent, whereas \(F(p^{S/T\backslash S}_{i,n}+1/n)\) and \(F(p^{\{ST\}}_{i,n}+1/n)\) have positive correlation.

Then, if \(\mathbb {E}\bigl [F(X)^2\bigr ]<\infty \), we have

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathscr {B}^{\{ST\}}_{n}\le \sqrt{\frac{1}{2}(\pi _{S/T\backslash S}^2+\pi _{T/S\backslash T}^2+\pi _{S/T\backslash S}\pi _{T/S\backslash T})}\quad \text {in probability}. \end{aligned}$$

We explain the assumptions of Theorem 1 in detail below.

Remark 1

Assumption (A) is Zipf’s Law (Zipf 1935), which states that the frequency of the i-th word is inversely proportional to i. So \(p_{i,n}\) is proportional to \(i^{-1}\), and the factor \(\ln n\) comes from equations \(\sum _{i=1}^np_{i,n}=1\) and \(\sum _{i=1}^ni^{-1}\approx \ln n\). One immediate implication of Zipf’s Law is that one can make \(np_{i,n}\) arbitrarily small by choosing sufficiently large n and i. More precisely, for any \(\delta >0\), we have

$$\begin{aligned} np_{i,n}\le \delta \Leftrightarrow \frac{n}{\delta \ln n}\le i\le n, \end{aligned}$$
(7)

so as long as n is large enough that \(\ln n\ge 1/\delta \), there is an i in (7) such that \(np_{i,n}\le \delta \). The limit \(np_{i,n}\rightarrow 0\) will be extensively explored in our theory.

Empirically, Zipf’s Law has been thoroughly tested under several settings (Montemurro 2001; Ha et al. 2002; Clauset et al. 2009; Corral et al. 2015).
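
A quick way to eyeball Assumption (A) on one's own corpus is sketched below (the corpus file name is hypothetical; only the check \(p_{i,n}\cdot i\ln n\approx 1\) is taken from the assumption).

```python
import numpy as np
from collections import Counter

def zipf_check(tokens, ranks=(1, 10, 100, 1000)):
    """Print p_{i,n} * i * ln(n); Assumption (A) predicts values close to 1."""
    counts = np.array(sorted(Counter(tokens).values(), reverse=True), float)
    n = len(counts)
    p = counts / counts.sum()
    for i in ranks:
        if i <= n:
            print(i, round(p[i - 1] * i * np.log(n), 3))

# zipf_check(open("corpus.txt").read().lower().split())   # hypothetical corpus file
```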

Remark 2

When a target \({\varUpsilon }\) is randomly chosen, (B1) assumes that the probability value \(p^{\varUpsilon }_{i,n}\) is random enough that, when \(i\ne j\), there is no obvious relation between \(p^{\varUpsilon }_{i,n}\) and \(p^{\varUpsilon }_{j,n}\) (i.e. they are independent). We test this assumption in Sect. 5.1. Assumption (B2) suggests that \(p^{\varUpsilon }_{i,n}\) is at the same scale as \(p_{i,n}\), and the random variable \(X:=p^{{\varUpsilon }}_{i,n}/p_{i,n}\) has a power law tailFootnote 4 of index 1. We regard (B2) as the Generalized Zipf’s Law, analogous to Zipf’s Law because \(p_{i,n}\)’s (\(1\le i\le n\), n fixed) can also be viewed as i.i.d. samples drawn from a power law of index 1. In Sect. 2.6, we show that Assumption (B) is closely related to a Hierarchical Pitman–Yor Process; and in Sect. 5.2 we empirically verify this assumption.

Remark 3

Assumption (C) is based on an intuition that, since \(S/T\backslash S\) and \(T/S\backslash T\) are different word targets and \(p^{S/T\backslash S}_{i,n}\) and \(p^{T/S\backslash T}_{i,n}\) are assessed from disjoint parts of corpus, the two random variables should be independent. On the other hand, the targets \(S/T\backslash S\) and \(\{ST\}\) both contain a word T, so we expect \(F(p^{S/T\backslash S}_{i,n}+1/n)\) and \(F(p^{\{ST\}}_{i,n}+1/n)\) to have positive correlation. This assumption is also empirically tested in Sect. 5.1.

Remark 4

Since X has a power law tail of index 1, the probability density \(-{{\mathrm{d}}}\mathbb {P}(x\le X)\) is a multiple of \(x^{-2}{{\mathrm{d}}}x\) for sufficiently large x. Hence, \(\mathbb {E}\bigl [F(X)^2\bigr ]\) becomes an integral of \(F(x)^2x^{-2}{{\mathrm{d}}}x\), so \(\lambda <0.5\) is a necessary condition for \(\mathbb {E}\bigl [F(X)^2\bigr ]<\infty \).

Conversely, \(\lambda <0.5\) is usually a sufficient condition for \(\mathbb {E}\bigl [F(X)^2\bigr ]<\infty \), for instance, if X follows the Pareto Distribution (i.e. \(\xi =\beta \)) or Inverse-Gamma Distribution. Another example will be given in Sect. 2.6.
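
As a worked instance of this remark (our own calculation for the Pareto case \(\xi =\beta \) with \(F(x)=x^{\lambda }/\lambda \), \(\lambda \ne 0\)), the density of X is \(\xi x^{-2}\) on \([\beta ,\infty )\), so

$$\begin{aligned} \mathbb {E}\bigl [F(X)^2\bigr ]=\int _\beta ^\infty \frac{x^{2\lambda }}{\lambda ^2}\cdot \frac{\beta }{x^{2}}\,{{\mathrm{d}}}x=\frac{\beta }{\lambda ^2}\int _\beta ^\infty x^{2\lambda -2}\,{{\mathrm{d}}}x=\frac{\beta ^{2\lambda }}{\lambda ^2(1-2\lambda )}, \end{aligned}$$

which is finite exactly when \(2\lambda -2<-1\), i.e. \(\lambda <0.5\); for \(F(x)=\ln x\) (\(\lambda =0\)) the same integral evaluates to \((\ln \beta )^2+2\ln \beta +2<\infty \).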

Lemma 1

Define \(Y_{i,n}:=F(p^{\varUpsilon }_{i,n}\,+\,1/n)-F(p_{i,n}\beta \,+\,1/n)=F(p_{i,n}X\,+\,1/n)-F(p_{i,n}\beta +1/n)\). Put \(e_{i,n}:=\mathbb {E}\bigl [Y_{i,n}\bigr ]\), \(v_{i,n}:={{\mathrm{Var}}}\bigl [Y_{i,n}\bigr ]\), and \(\varphi _{i,n}:=(p_{i,n})^{2\lambda }\bigl (1+(\beta np_{i,n})^{-1}\bigr )^{-1+2\lambda }\). Then,

  (a) There exists \(\chi \) such that \(\dfrac{|e_{i,n}|}{\sqrt{\varphi _{i,n}}}\le \chi \) for all i, n.

  (b) \(\lim \limits _{np_{i,n}\rightarrow 0}\dfrac{e_{i,n}}{\sqrt{\varphi _{i,n}}}=0. \)

  (c) The set of random variables \(\bigl \{Y_{i,n}^2/\varphi _{i,n}\bigr \}\) is uniformly integrable; i.e., for any \(\varepsilon >0\), there exists N such that \(\mathbb {E}\bigl [Y_{i,n}^2I_{Y_{i,n}^2>N \varphi _{i,n}}\bigr ]<\varepsilon \varphi _{i,n}\) for all i, n.

  (d) \(\lim \limits _{np_{i,n}\rightarrow 0}\dfrac{v_{i,n}}{\varphi _{i,n}}=\eta \ne 0\), where \( \eta =\displaystyle \int _0^\infty \bigl (F(z+\beta )-F(\beta )\bigr )^2\cdot \frac{\xi {{\mathrm{d}}}z}{z^2} \).

Remark 5

As sketched in Sect. 2.2, our proof requires calculation of \(\mathbb {E}\bigl [\bigl (w^{{\varUpsilon }}_{i,n}\bigr )^2\bigr ]\); this is done by applying Lemma 1 above. The lemma calculates the first and second moments of \(Y_{i,n}\); note that \(Y_{i,n}\) differs from \(w^{\varUpsilon }_{i,n}\) only by some constant shift.Footnote 5 As the lemma shows, when i and n vary, the squared first moment \(e_{i,n}^2\) and the variance \(v_{i,n}\) scale with the constant \(\varphi _{i,n}\). At the limit \(np_{i,n}\rightarrow 0\), Lemma 1(b)(d) suggests that \(e_{i,n}/\sqrt{\varphi _{i,n}}\) and \(v_{i,n}/\varphi _{i,n}\) converge, which is where the power law tail of X mostly affects the behavior of \(Y_{i,n}\).
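
As a small numerical check of the constant \(\eta \) in Lemma 1(d) (a sketch under our own choice \(F=\ln \) and \(\xi =\beta =1\)):

```python
import numpy as np
from scipy.integrate import quad

# eta = int_0^inf (F(z + beta) - F(beta))^2 * xi / z^2 dz, as in Lemma 1(d),
# evaluated numerically for F = log and xi = beta = 1.
F, xi, beta = np.log, 1.0, 1.0
eta, err = quad(lambda z: (F(z + beta) - F(beta)) ** 2 * xi / z ** 2, 0, np.inf)
print(eta, err)   # about 3.29
```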

Remark 6

The function F(x) in Lemma 1 can be generalized to a function G(x) as mentioned in Definition 10. This is because, by Cauchy's Mean Value Theorem,

$$\begin{aligned} \frac{G(p_{i,n}x+1/n)-G(p_{i,n}\beta +1/n)}{F(p_{i,n}x+1/n)-F(p_{i,n}\beta +1/n)}=G'(\zeta )\zeta ^{1-\lambda } \quad \text {for some }p_{i,n}\beta +1/n\le \zeta \le p_{i,n}x+1/n, \end{aligned}$$

so the random variable \(G(p_{i,n}X\,+\,1/n)-G(p_{i,n}\beta \,+\,1/n)\) is dominated by \(MY_{i,n}\) and converges pointwise to \(Y_{i,n}\) as \(n\rightarrow \infty \). Then, by Lebesgue’s Dominated Convergence Theorem, we can generalize Lemma 1 to G(x), and in turn generalize our bias bound.

Lemma 2

Regarding the asymptotic behavior of \(\varphi _{i,n}\), we have

  (a) \(\lim \limits _{np_{i,n}\rightarrow 0}\dfrac{n^{2\lambda }\cdot \varphi _{i,n}}{np_{i,n}}=\beta ^{1-2\lambda }\).

  (b) \(n^{-1+2\lambda }\ln n\cdot \varphi _{i,n}\le \beta ^{1-2\lambda }/i\).

  (c) For any \(\delta >0\), there exists \(M_\delta \) such that \(n^{-1+2\lambda }\ln n\sum _{i=1}^{\frac{n}{\delta \ln n}}\varphi _{i,n} \le M_\delta \) for all n.

  (d) For any \(\delta >0\), we have \(\lim \limits _{n\rightarrow \infty }n^{-1+2\lambda }\ln n\sum _{\frac{n}{\delta \ln n}\le i}^n\varphi _{i,n}=\infty \).

Lemma 1 is derived from Assumption (B) and the condition \(\mathbb {E}\bigl [F(X)^2\bigr ]<\infty \). Lemma 2 is derived from Assumption (A). The proofs are found in “Appendix 1”.

2.4 Why is \(\lambda <0.5\) important?

As we note in Remark 4, the condition \(\lambda <0.5\) is necessary for the existence of \(\mathbb {E}\bigl [F(X)^2\bigr ]\). This existence is important because, briefly speaking, the Law of Large Numbers only holds when expected values exist. More precisely, we use the following lemma to prove convergence in probability in Theorem 1, and in particular Eq. (3) as discussed in Sect. 2.2. If \(\mathbb {E}\bigl [F(X)^2\bigr ]=\infty \), the required uniform integrability is not satisfied, which means the fluctuations of the random variables may be at too different scales to completely cancel out, so the weighted averages we consider will not converge.
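
The role of \(\lambda <0.5\) can be illustrated by simulation (a sketch under our own toy assumptions, with \(X_i\) drawn from a Pareto law of index 1 as in Assumption (B2)): for \(\lambda <0.5\) no single term dominates the sum of \(F(X_i)^2\), so fluctuations can cancel out, whereas for \(\lambda \ge 0.5\) a few huge draws keep a noticeable share of the sum.

```python
import numpy as np

rng = np.random.default_rng(1)

def max_share(lmbda, n=10**6, trials=5):
    """Share of the largest term in sum_i F(X_i)^2, with F(x) = x^lambda / lambda
    and X_i drawn from a Pareto law of index 1 (beta = 1).  For lambda < 0.5,
    E[F(X)^2] is finite and this share vanishes as n grows, so fluctuations can
    cancel out; for lambda >= 0.5, E[F(X)^2] is infinite and a few huge draws
    keep a noticeable share of the sum."""
    shares = []
    for _ in range(trials):
        X = 1.0 / rng.random(n)              # inverse-CDF sampling: P(X > x) = 1/x
        terms = X ** (2 * lmbda)             # F(X)^2 up to the constant 1/lambda^2
        shares.append(terms.max() / terms.sum())
    return np.array(shares)

for lmbda in (0.25, 0.5, 0.75):
    print(lmbda, max_share(lmbda).round(4))
```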

Lemma 3

Assume \(U_{i,n}\)’s (\(1\le i\le n\), n fixed) are independent random variables and \(\bigl \{U_{i,n}/\varphi _{i,n}\bigr \}\) is uniformly integrable. Assume \(\lim \limits _{np_{i,n}\rightarrow 0}\mathbb {E}[U_{i,n}]/\varphi _{i,n}=\ell \). Then,

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{\sum _{i=1}^n U_{i,n}}{\sum _{i=1}^n \varphi _{i,n}}=\ell \quad \text {in probability}. \end{aligned}$$

Proof

This lemma is a combination of the Law of Large Numbers and the Stolz-Cesàro Theorem. We prove it in two steps.

First step, we prove

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{\sum _{i=1}^n U_{i,n}-\mathbb {E}[U_{i,n}]}{\sum _{i=1}^n \varphi _{i,n}}=0 \quad \text {in probability}. \end{aligned}$$

This is a generalized version of the Law of Large Numbers, saying that the weighted average of \(U_{i,n}\) converges in probability to the weighted average of \(\mathbb {E}[U_{i,n}]\). Since \(\bigl \{U_{i,n}/\varphi _{i,n}\bigr \}\) is uniformly integrable, for any \(\varepsilon >0\) there exists N such that \(\mathbb {E}\bigl [|U_{i,n}|I_{|U_{i,n} |>N\varphi _{i,n}}\bigr ]<\varepsilon ^2 \varphi _{i,n}\) for all in. Our strategy is to divide the average of \(U_{i,n}\) into two parts, namely

$$\begin{aligned} \frac{\sum _{i=1}^n U_{i,n}}{\sum _{i=1}^n \varphi _{i,n}}= \frac{\sum _{i=1}^n U_{i,n}I_{|U_{i,n} |\le N\varphi _{i,n}}}{\sum _{i=1}^n \varphi _{i,n}} + \frac{\sum _{i=1}^n U_{i,n}I_{|U_{i,n} |>N\varphi _{i,n}}}{\sum _{i=1}^n \varphi _{i,n}}, \end{aligned}$$

and show that each part is close to its expectation. For the \(|U_{i,n} |>N\varphi _{i,n}\) part, we have

$$\begin{aligned} \left|\mathbb {E}\left[ \frac{\sum _{i=1}^n U_{i,n}I_{|U_{i,n} |>N\varphi _{i,n}}}{\sum _{i=1}^n \varphi _{i,n}} \right] \right|<\varepsilon ^2\quad \text {for all }n \end{aligned}$$

by definition, so it has negligible expectation and can be bounded by Markov’s Inequality:

$$\begin{aligned} \mathbb {P}\left( \left|\frac{\sum _{i=1}^n U_{i,n}I_{|U_{i,n} |>N\varphi _{i,n}}}{\sum _{i=1}^n \varphi _{i,n}} - \mathbb {E}\left[ \frac{\sum _{i=1}^n U_{i,n}I_{|U_{i,n} |>N\varphi _{i,n}}}{\sum _{i=1}^n \varphi _{i,n}} \right] \right|>\varepsilon \right) <2\varepsilon \quad \text {for all }n. \end{aligned}$$

On the other hand, for the \(|U_{i,n} |\le N\varphi _{i,n}\) part we have \({{\mathrm{Var}}}\bigl [U_{i,n}I_{|U_{i,n} |\le N\varphi _{i,n}}\bigr ]\le N\varphi _{i,n}\mathbb {E}\bigl [|U_{i,n}|\bigr ]\); and \(\bigl \{U_{i,n}/\varphi _{i,n}\bigr \}\) being uniformly integrable implies \(\mathbb {E}\bigl [|U_{i,n}|\bigr ]\le M\varphi _{i,n}\) for some M, so

$$\begin{aligned} {{\mathrm{Var}}}\left[ \frac{\sum _{i=1}^n U_{i,n}I_{|U_{i,n} |\le N\varphi _{i,n}}}{\sum _{i=1}^n \varphi _{i,n}}\right] \le \frac{\sum _{i=1}^n \varphi _{i,n}^2}{\bigl (\sum _{i=1}^n \varphi _{i,n}\bigr )^2}NM. \end{aligned}$$

By Lemma 2(b), we have

$$\begin{aligned} \bigl (n^{-1+2\lambda }\ln n\bigr )^2\sum _{i=1}^n \varphi _{i,n}^2\le \beta ^{2-4\lambda }\sum _{i=1}^{\infty }i^{-2}<\infty . \end{aligned}$$

In contrast, by Lemma 2(d) we have

$$\begin{aligned} \lim _{n\rightarrow \infty }\bigl (n^{-1+2\lambda }\ln n\bigr )^2 \Bigl (\sum _{i=1}^n \varphi _{i,n}\Bigr )^2=\infty . \end{aligned}$$

Therefore,

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{\sum _{i=1}^n \varphi _{i,n}^2}{\bigl (\sum _{i=1}^n \varphi _{i,n}\bigr )^2}=0\quad \text {and}\quad \lim _{n\rightarrow \infty }{{\mathrm{Var}}}\left[ \frac{\sum _{i=1}^n U_{i,n}I_{|U_{i,n} |\le N\varphi _{i,n}}}{\sum _{i=1}^n \varphi _{i,n}}\right] =0. \end{aligned}$$

Thus, the \(|U_{i,n} |\le N\varphi _{i,n}\) part concentrates to its expectation by Chebyshev’s Inequality. The first step is completed.

Second step, we prove

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{\sum _{i=1}^n \mathbb {E}[U_{i,n}]}{\sum _{i=1}^n \varphi _{i,n}}= \lim _{np_{i,n}\rightarrow 0}\frac{\mathbb {E}[U_{i,n}]}{\varphi _{i,n}}. \end{aligned}$$

This is a generalized version of the Stolz-Cesàro Theorem, saying that the limit ratio of two series equals the limit ratio of corresponding terms. By definition and Eq. (7), for any \(\varepsilon >0\) there exists \(\delta \) such that

$$\begin{aligned} \frac{n}{\delta \ln n}\le i\le n\Rightarrow \Bigl |\frac{\mathbb {E}[U_{i,n}]}{\varphi _{i,n}} - \ell \Bigr |< \varepsilon \quad \text {for all }n. \end{aligned}$$

In addition, we can bound

$$\begin{aligned} \Bigl |\frac{\mathbb {E}[U_{i,n}]}{\varphi _{i,n}} - \ell \Bigr |\le M\quad \text {for }1\le i\le \dfrac{n}{\delta \ln n} \end{aligned}$$

because \(\bigl \{U_{i,n}/\varphi _{i,n}\bigr \}\) is uniformly integrable. The ratio \(\bigl (\sum _{i=1}^n \mathbb {E}[U_{i,n}]\bigr )/\bigl (\sum _{i=1}^n \varphi _{i,n}\bigr )\) can be viewed as a weighted average of two parts, one from indices \(\frac{n}{\delta \ln n}\le i\le n\) and the other from \(1\le i<\frac{n}{\delta \ln n}\). By Lemma 2(d), the weight for the first part tends to infinity:

$$\begin{aligned} \lim _{n\rightarrow \infty }n^{-1+2\lambda }\ln n\sum _{\frac{n}{\delta \ln n}\le i}^n \varphi _{i,n}=\infty ; \end{aligned}$$

whereas by Lemma 2(c), the weight for the second part stays bounded:

$$\begin{aligned} n^{-1+2\lambda }\ln n\sum _{i=1}^{\frac{n}{\delta \ln n}} \varphi _{i,n}\le M_\delta \quad \text {for all }n. \end{aligned}$$

Therefore, the first part dominates, so \(\lim \limits _{n\rightarrow \infty }\bigl |\bigl (\sum _{i=1}^n\mathbb {E}[U_{i,n}]\bigr )/\bigl (\sum _{i=1}^n \varphi _{i,n}\bigr )-\ell \bigr |\le \varepsilon \); since \(\varepsilon \) is arbitrary, the second step follows. \(\square \)

Combining Lemma 1(a)(c)(d) and Lemma 3, we immediately obtain the following. This is almost Eq. (3) we wanted in Sect. 2.2.

Corollary 4

Let \({\varUpsilon }:=\) \(\{ST\}\), \(S/T\backslash S\) or \(T/S\backslash T\). Then we have

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{\sum _{i=1}^n \Bigl (F(p^{\varUpsilon }_{i,n}+1/n) - \mathbb {E}\bigl [F(p^{\varUpsilon }_{i,n}+1/n)\bigr ]\Bigr )^2}{\eta \sum _{i=1}^n \varphi _{i,n}}=1\quad \text {in probability}. \end{aligned}$$

Now, we can asymptotically derive the normalization of \(a^{\varUpsilon }_n\), \(b_{i,n}\) and \(c_n\), as defined in Sect. 2.2. A by-product is that the norms of natural phrase vectors converge to 1.

Theorem 2

If we put \(a^{\{st\}}_n:=0\) for all \(\{st\}\), and set \(b_{i,n}:=\mathbb {E}\bigl [F(p^{\{ST\}}_{i,n}+1/n)\bigr ]\), \(c_n:=\bigl (\eta \sum _{i=1}^n \varphi _{i,n}\bigr )^{-1/2}\), then

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{c_n}{|{\varLambda }_n|}\sum _{\{st\}\in {\varLambda }_n}w^{\{st\}}_{i,n}=0, \quad \lim _{n\rightarrow \infty }\frac{c_n}{n}\sum _{i=1}^n w^{\{ST\}}_{i,n}=0 \quad \text {and}\quad \lim _{n\rightarrow \infty }||\mathbf {w}^{\{ST\}}_{n}||=1 \end{aligned}$$

in probability.

Proof

By the assumptions on \(a^{\{st\}}_n\) and \(b_{i,n}\), we have

$$\begin{aligned} w^{\{st\}}_{i,n}=F(p^{\{st\}}_{i,n}+1/n) - \mathbb {E}\bigl [F(p^{\{ST\}}_{i,n}+1/n)\bigr ]. \end{aligned}$$

Then,

$$\begin{aligned} ||\mathbf {w}^{\{ST\}}_{n}||^2 =\frac{\sum _{i=1}^n \Bigl (F(p^{\varUpsilon }_{i,n}+1/n) - \mathbb {E}\bigl [F(p^{\varUpsilon }_{i,n}+1/n)\bigr ]\Bigr )^2}{\eta \sum _{i=1}^n \varphi _{i,n}}, \end{aligned}$$

so Corollary 4 implies \(\lim \limits _{n\rightarrow \infty }||\mathbf {w}^{\{ST\}}_{n}||=1\).

Next, Lemma 1(c) implies that there exists M such that

$$\begin{aligned} {{\mathrm{Var}}}\bigl [ F(p^{\{ST\}}_{i,n}+1/n)\bigr ]\le M\varphi _{i,n}\quad \text {for all }i,n, \end{aligned}$$

hence

$$\begin{aligned} \mathbb {E}\left[ \left( \frac{c_n}{n}\sum _{i=1}^n w^{\{ST\}}_{i,n} \right) ^2 \right] =\frac{1}{n^2}\frac{\sum _{i=1}^n {{\mathrm{Var}}}\bigl [ F(p^{\{ST\}}_{i,n}+1/n)\bigr ]}{\eta \sum _{i=1}^n \varphi _{i,n}}\le \frac{1}{n^2}\frac{M}{\eta }\rightarrow 0\quad \text {(when }n\rightarrow \infty \text {)}. \end{aligned}$$

Therefore, by Chebyshev’s Inequality we have \(\lim \limits _{n\rightarrow \infty }\dfrac{c_n}{n}\displaystyle \sum _{i=1}^n w^{\{ST\}}_{i,n}=0\) in probability.

Finally, the ordinary version of Law of Large Numbers implies that

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{|{\varLambda }_n|}\sum _{\{st\}\in {\varLambda }_n}\frac{F(p^{\{st\}}_{i,n}+1/n) - \mathbb {E}\bigl [ F(p^{\{ST\}}_{i,n}+1/n)\bigr ]}{\sqrt{{{\mathrm{Var}}}\bigl [ F(p^{\{ST\}}_{i,n}+1/n)\bigr ]}}=0 \quad \text {in probability}; \end{aligned}$$

so we immediately have

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{c_n}{|{\varLambda }_n|}\sum _{\{st\}\in {\varLambda }_n}w^{\{st\}}_{i,n} = 0\quad \text {for all }i. \end{aligned}$$

The theorem is proven. \(\square \)

Therefore, if we set \(a^{\{st\}}_n\), \(b_{i,n}\) and \(c_n\) as in Theorem 2, all conditions in Definition 7, Definition 8 and Definition 9 are asymptotically satisfied. In addition, we have obtained the result stated in Claim 2.

In view of Corollary 4, if \(\lambda <0.5\) is not satisfied, the norms of natural phrase vectors will not converge. This prediction is experimentally verified in Sect. 5.3.

2.5 Proof of Theorem 1 and an intuitive explanation

In this section, we start to use Eq. (1) and derive our bias bound. Recall that Eq. (1) decomposes \(p^{t}_{i,n}\) into a linear combination of \(p^{s/t\backslash s}_{i,n}\) and \(p^{\{st\}}_{i,n}\); we first note that \(F(p^{t}_{i,n}+1/n)\) can be decomposed similarly into a linear combination of \(F(p^{s/t\backslash s}_{i,n}+1/n)\) and \(F(p^{\{st\}}_{i,n}+1/n)\), as if the function F were linear. This is because F is smooth, and when \(np_{i,n}\) is sufficiently small, the probability value \(p_{i,n}\) is small compared to \(1/n\), so \(F(x+1/n)\) can be linearly approximated as \(F'(1/n)x+F(1/n)\) as long as x is at the same scale as \(p_{i,n}\). This is formalized as the following lemma.

Lemma 5

The set of random variables

$$\begin{aligned} \Bigl \{ \bigl ( F(p^{T}_{i,n}+1/n) - \pi _{S/T\backslash S}F(p^{S/T\backslash S}_{i,n}+1/n) - (1-\pi _{S/T\backslash S})F(p^{\{ST\}}_{i,n}+1/n) \bigr )^2/\varphi _{i,n}\Bigr \} \end{aligned}$$

is uniformly integrable, and

$$\begin{aligned} \lim _{np_{i,n}\rightarrow 0}\frac{\mathbb {E}\Bigl [ \bigl ( F(p^{T}_{i,n}+1/n) - \pi _{S/T\backslash S}F(p^{S/T\backslash S}_{i,n}+1/n) - (1-\pi _{S/T\backslash S})F(p^{\{ST\}}_{i,n}+1/n) \bigr )^2 \Bigr ]}{\varphi _{i,n}}=0. \end{aligned}$$

Proof

For brevity, we set

$$\begin{aligned} P_1 :=p^{S/T\backslash S}_{i,n},\quad P_2 :=p^{\{ST\}}_{i,n},\quad \pi := \pi _{S/T\backslash S},\\ \text {and}\quad \tilde{F}(x) := F(x+1/n) - F(p_{i,n}\beta +1/n). \end{aligned}$$

By Eq. (1) we have \(p^T_{i,n}=\pi P_1 + (1-\pi ) P_2\), so \(\tilde{F}(p^T_{i,n}) = \tilde{F}(\pi P_1 + (1-\pi ) P_2)\) lies in between \(\tilde{F}(P_1)\) and \(\tilde{F}(P_2)\). Therefore,

$$\begin{aligned} \begin{aligned} \bigl ( \tilde{F}(p^T_{i,n})-\pi \tilde{F}(P_1)-(1-\pi ) \tilde{F}(P_2) \bigr )^2&\le \bigl ( \tilde{F}(P_1) - \tilde{F}(P_2) \bigr )^2 \\&\le 4\tilde{F}(P_1)^2I_{\tilde{F}(P_1)^2\ge \tilde{F}(P_2)^2} + 4\tilde{F}(P_2)^2I_{\tilde{F}(P_2)^2\ge \tilde{F}(P_1)^2}. \end{aligned}\end{aligned}$$

By Lemma 1(c), \(\bigl \{ \tilde{F}(P_1)^2 / \varphi _{i,n} \bigr \}\) and \(\bigl \{ \tilde{F}(P_2)^2 / \varphi _{i,n} \bigr \}\) are uniformly integrable. So for any \(\varepsilon >0\), we have \(\mathbb {E}\bigl [ \tilde{F}(P_1)^2I_{\tilde{F}(P_1)^2>N\varphi _{i,n}} \bigr ]<\varepsilon \varphi _{i,n}\) and \(\mathbb {E}\bigl [ \tilde{F}(P_2)^2I_{\tilde{F}(P_2)^2>N\varphi _{i,n}} \bigr ]<\varepsilon \varphi _{i,n}\) for some N. Consider the condition

$$\begin{aligned} \mathscr {C}:=\hbox {``}{} \textit{Either } \tilde{F}(P_1)^2>N\varphi _{i,n} \textit{ or } \tilde{F}(P_2)^2>N\varphi _{i,n} \hbox {''}, \end{aligned}$$

which is weaker than “\(\bigl ( \tilde{F}(p^T_{i,n})-\pi \tilde{F}(P_1)-(1-\pi ) \tilde{F}(P_2) \bigr )^2>4N\varphi _{i,n}\)”, and we have

$$\begin{aligned}&\mathbb {E}\Bigl [ \bigl ( \tilde{F}(p^T_{i,n})-\pi \tilde{F}(P_1)-(1-\pi ) \tilde{F}(P_2) \bigr )^2I_{\mathscr {C}} \Bigr ] \nonumber \\&\begin{aligned}&\le \mathbb {E}\bigl [ 4\tilde{F}(P_1)^2I_{\tilde{F}(P_1)^2>N\varphi _{i,n}} + 4\tilde{F}(P_2)^2I_{\tilde{F}(P_2)^2>N\varphi _{i,n}} \bigr ] \\&< 8\varepsilon \varphi _{i,n}. \end{aligned} \end{aligned}$$
(8)

So \(\Bigl \{ \bigl ( \tilde{F}(p^T_{i,n})-\pi \tilde{F}(P_1)-(1-\pi ) \tilde{F}(P_2) \bigr )^2 / \varphi _{i,n} \Bigr \}\) is uniformly integrable.

The previous argument also suggests that the case where \(\mathscr {C}\) is satisfied is negligible, because (8) is arbitrarily small. Thus, we only have to consider the complement of \(\mathscr {C}\), namely

$$\begin{aligned} \lnot \mathscr {C}:=\hbox {``}{} \textit{Both } \tilde{F}(P_1)^2\le N\varphi _{i,n} \textit{ and } \tilde{F}(P_2)^2\le N\varphi _{i,n} \hbox {''}. \end{aligned}$$

Under this condition, intuitively \(\tilde{F}(P_1)\) and \(\tilde{F}(P_2)\) are restricted to a small range so a linear approximation of F becomes valid. More precisely, we show that

$$\begin{aligned} \lim _{np_{i,n}\rightarrow 0}\frac{\mathbb {E}\Bigl [ \bigl ( \tilde{F}(p^T_{i,n})-\pi \tilde{F}(P_1)-(1-\pi ) \tilde{F}(P_2) \bigr )^2I_{\lnot \mathscr {C}} \Bigr ]}{\varphi _{i,n}}=0, \end{aligned}$$
(9)

which will complete the proof. For brevity, we set

$$\begin{aligned} \hat{F}(x):=F(x+1)-F(1),\quad U_1:=\hat{F}(nP_1),\quad U_2:=\hat{F}(nP_2). \end{aligned}$$

Let H be the inverse function of \(\hat{F}\):

$$\begin{aligned} H(\hat{F}(x))=x, \end{aligned}$$

and put

$$\begin{aligned} J(u_1, u_2; \pi ):=\hat{F}(\pi H(u_1)+(1-\pi ) H(u_2))-\pi u_1-(1-\pi ) u_2. \end{aligned}$$

Note that the functions \(\hat{F}\), H and J do not depend on n, i, S or T. Now, we consider the limit \(np_{i,n}\rightarrow 0\). By Lemma 2(a), we can replace the \(\varphi _{i,n}\) in (9) with \(np_{i,n}\cdot n^{-2\lambda }\); and since

$$\begin{aligned} n^{\lambda } \tilde{F}(x)=F(nx+1) - F(np_{i,n}\beta +1)\rightarrow \hat{F}(nx) \quad \text {(when } np_{i,n}\rightarrow 0\text {)}, \end{aligned}$$

we can replace \(n^{\lambda } \tilde{F}(x)\) with \(\hat{F}(nx)\). Thus, (9) is equivalent to

$$\begin{aligned} \lim _{np_{i,n}\rightarrow 0}\frac{\mathbb {E}\Bigl [ J(U_1, U_2; \pi )^2I_{\mathscr {D}} \Bigr ]}{np_{i,n}}=0, \end{aligned}$$
(10)

where \(\mathscr {D}\) is the condition

$$\begin{aligned} \mathscr {D}:=\hbox {``}{} \textit{Both } U_1^2\le Nnp_{i,n} \textit{ and } U_2^2\le Nnp_{i,n} \hbox {''}. \end{aligned}$$

Now, since \(\frac{\partial }{\partial u_1}J(0,0; \pi )=\frac{\partial }{\partial u_2}J(0,0; \pi )=0\), we have

$$\begin{aligned} \lim _{u_1^2+u_2^2\rightarrow 0}\frac{J(u_1, u_2; \pi )^2}{u_1^2+u_2^2}=0\quad \text {uniformly on }0\le \pi \le 1. \end{aligned}$$

Therefore, when \(np_{i,n}\rightarrow 0\) we have

$$\begin{aligned} \mathbb {E}\left[ \frac{J(U_1, U_2; \pi )^2I_{\mathscr {D}}}{np_{i,n}}\right] = \mathbb {E}\left[ \frac{J(U_1, U_2; \pi )^2I_{\mathscr {D}}}{U_1^2 + U_2^2}\cdot \frac{U_1^2 + U_2^2}{np_{i,n}}\right] \le 2N\mathbb {E}\left[ \frac{J(U_1, U_2; \pi )^2I_{\mathscr {D}}}{U_1^2 + U_2^2}\right] \rightarrow 0. \end{aligned}$$

Equation (10) is proven and the proof is complete. \(\square \)

Now, we are ready to prove Theorem 1. An intuitive discussion is given after the proof.

Proof of Theorem 1

As in Theorem 2, we set \(a^{\{st\}}_n:=0\), \(b_{i,n}:=\mathbb {E}\bigl [F(p^{\{ST\}}_{i,n}+1/n)\bigr ]\) and \(c_n:=\bigl (\eta \sum _{i=1}^n \varphi _{i,n}\bigr )^{-1/2}\). Assuming \(a_n^{t}:=0\) for all t, one can show that \(\lim \limits _{n\rightarrow \infty }\frac{c_n}{n}\sum _{i=1}^n w^{T}_{i,n}=0\) in probability, by using Lemma 5 and an argument similar to the proof of Theorem 2. Thus, we set \(a^{t}_n:=0\). Then,

$$\begin{aligned} \bigl (\mathscr {B}^{\{ST\}}_{n}\bigr )^2=\frac{\sum _{i=1}^{n}\Bigl ( F(p^{\{ST\}}_{i,n}+1/n)-\frac{1}{2}\bigl (F(p^S_{i,n}+1/n)+F(p^T_{i,n}+1/n)\bigr ) \Bigr )^2}{\eta \sum _{i=1}^n \varphi _{i,n}}. \end{aligned}$$

Next, by Lemma 5, Lemma 3 and Triangle Inequality, we can replace \(F(p^T_{i,n}+1/n)\) with

$$\begin{aligned} \pi _{S/T\backslash S}F(p^{S/T\backslash S}_{i,n}+1/n) + (1-\pi _{S/T\backslash S})F(p^{\{ST\}}_{i,n}+1/n), \end{aligned}$$

and replace \(F(p^S_{i,n}+1/n)\) with

$$\begin{aligned} \pi _{T/S\backslash T}F(p^{T/S\backslash T}_{i,n}+1/n) + (1-\pi _{T/S\backslash T})F(p^{\{ST\}}_{i,n}+1/n). \end{aligned}$$

For brevity, we put \(\pi _1:=\pi _{S/T\backslash S}\), \(\pi _2:=\pi _{T/S\backslash T}\), \(\tilde{F}(x) := F(x+1/n) - F(p_{i,n}\beta +1/n)\) and

$$\begin{aligned} \tilde{w}^{\{ST\}}_{i,n}:=\tilde{F}(p^{\{ST\}}_{i,n}),\quad \tilde{w}^{S/T\backslash S}_{i,n}:=\tilde{F}(p^{S/T\backslash S}_{i,n}),\quad \tilde{w}^{T/S\backslash T}_{i,n}:=\tilde{F}(p^{T/S\backslash T}_{i,n}). \end{aligned}$$

We use “\(\risingdotseq \)” to denote asymptotic equality at the limit \(n\rightarrow \infty \). Then,

$$\begin{aligned} \bigl (\mathscr {B}^{\{ST\}}_{n}\bigr )^2\risingdotseq \frac{\sum _{i=1}^{n}\bigl ( (\pi _1+\pi _2)\tilde{w}^{\{ST\}}_{i,n}-\pi _1 \tilde{w}^{S/T\backslash S}_{i,n}-\pi _2 \tilde{w}^{T/S\backslash T}_{i,n} \bigr )^2}{4\eta \sum _{i=1}^n \varphi _{i,n}}. \end{aligned}$$

Again, by Lemma 1(a)(b), Lemma 3 and Triangle Inequality, we can replace \(\tilde{w}^{\varUpsilon }_{i,n}\) with \(\hat{w}^{\varUpsilon }_{i,n}:=\tilde{w}^{\varUpsilon }_{i,n}-\mathbb {E}[\tilde{w}^{\varUpsilon }_{i,n}]\) (where \({\varUpsilon }\) is either \(\{ST\}\), \(S/T\backslash S\) or \(T/S\backslash T\)). Hence,

$$\begin{aligned} \begin{aligned} \bigl (\mathscr {B}^{\{ST\}}_{n}\bigr )^2\risingdotseq&(\pi _1+\pi _2)^2\frac{\sum _{i=1}^n\bigl (\hat{w}^{\{ST\}}_{i,n}\bigr )^2}{4\eta \sum _{i=1}^n \varphi _{i,n}}+ \pi _1^2\frac{\sum _{i=1}^n\bigl (\hat{w}^{S/T\backslash S}_{i,n}\bigr )^2}{4\eta \sum _{i=1}^n \varphi _{i,n}}+ \pi _2^2\frac{\sum _{i=1}^n\bigl (\hat{w}^{T/S\backslash T}_{i,n}\bigr )^2}{4\eta \sum _{i=1}^n \varphi _{i,n}}\\&- 2\pi _1(\pi _1+\pi _2)\frac{\sum _{i=1}^n\hat{w}^{S/T\backslash S}_{i,n}\hat{w}^{\{ST\}}_{i,n}}{4\eta \sum _{i=1}^n \varphi _{i,n}} - 2\pi _2(\pi _1+\pi _2)\frac{\sum _{i=1}^n\hat{w}^{T/S\backslash T}_{i,n}\hat{w}^{\{ST\}}_{i,n}}{4\eta \sum _{i=1}^n \varphi _{i,n}} \\&+ 2\pi _1\pi _2\frac{\sum _{i=1}^n\hat{w}^{S/T\backslash S}_{i,n}\hat{w}^{T/S\backslash T}_{i,n}}{4\eta \sum _{i=1}^n \varphi _{i,n}}. \end{aligned} \end{aligned}$$

By Corollary 4, we have

$$\begin{aligned} \frac{\sum _{i=1}^n\bigl (\hat{w}^{\{ST\}}_{i,n}\bigr )^2}{4\eta \sum _{i=1}^n \varphi _{i,n}}\risingdotseq \frac{1}{4}, \quad \frac{\sum _{i=1}^n\bigl (\hat{w}^{S/T\backslash S}_{i,n}\bigr )^2}{4\eta \sum _{i=1}^n \varphi _{i,n}}\risingdotseq \frac{1}{4} \quad \text {and}\quad \frac{\sum _{i=1}^n\bigl (\hat{w}^{T/S\backslash T}_{i,n}\bigr )^2}{4\eta \sum _{i=1}^n \varphi _{i,n}}\risingdotseq \frac{1}{4}. \end{aligned}$$

By Assumption (C), we have \(\mathbb {E}\bigl [\hat{w}^{S/T\backslash S}_{i,n}\hat{w}^{T/S\backslash T}_{i,n}\bigr ]=0\), so applying Lemma 3 we get

$$\begin{aligned} \frac{\sum _{i=1}^n\hat{w}^{S/T\backslash S}_{i,n}\hat{w}^{T/S\backslash T}_{i,n}}{4\eta \sum _{i=1}^n \varphi _{i,n}}\risingdotseq \lim _{np_{i,n}\rightarrow 0}\frac{\mathbb {E}\bigl [\hat{w}^{S/T\backslash S}_{i,n}\hat{w}^{T/S\backslash T}_{i,n}\bigr ]}{4\eta \varphi _{i,n}} =0. \end{aligned}$$

Also by Assumption (C), we have \(\mathbb {E}\bigl [\hat{w}^{S/T\backslash S}_{i,n}\hat{w}^{\{ST\}}_{i,n}\bigr ]\ge 0\) and \(\mathbb {E}\bigl [\hat{w}^{T/S\backslash T}_{i,n}\hat{w}^{\{ST\}}_{i,n}\bigr ]\ge 0\), so

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{\sum _{i=1}^n\hat{w}^{S/T\backslash S}_{i,n}\hat{w}^{\{ST\}}_{i,n}}{4\eta \sum _{i=1}^n \varphi _{i,n}}\ge 0 \quad \text {and}\quad \lim _{n\rightarrow \infty }\frac{\sum _{i=1}^n\hat{w}^{T/S\backslash T}_{i,n}\hat{w}^{\{ST\}}_{i,n}}{4\eta \sum _{i=1}^n \varphi _{i,n}}\ge 0. \end{aligned}$$

Therefore, \(\lim \limits _{n\rightarrow \infty }\bigl (\mathscr {B}^{\{ST\}}_{n}\bigr )^2\le \frac{1}{4}\bigl ( (\pi _1+\pi _2)^2 + \pi _1^2 + \pi _2^2 \bigr ) =\frac{1}{2}(\pi _{1}^2+\pi _{2}^2+\pi _{1}\pi _{2})\). \(\square \)

Using the notation of the proof of Theorem 1, at a high level it is as if we had the following decomposition:

$$\begin{aligned} w^{t}_{i,n}=\pi _1\hat{w}^{s/t\backslash s}_{i,n} + (1-\pi _1)\hat{w}^{\{st\}}_{i,n}, \end{aligned}$$

which is in correspondence to the decomposition of \(p^{t}_{i,n}\) in Equation (1). Similarly,

$$\begin{aligned} w^{s}_{i,n}=\pi _2\hat{w}^{t/s\backslash t}_{i,n} + (1-\pi _2)\hat{w}^{\{st\}}_{i,n}, \end{aligned}$$

and by definition \(w^{\{st\}}_{i,n}=\hat{w}^{\{st\}}_{i,n}\). Thus,

$$\begin{aligned} \begin{aligned} \bigl (w^{\{st\}}_{i,n}-\frac{1}{2}(w^{s}_{i,n} + w^{t}_{i,n})\bigr )^2&=\frac{1}{4}\bigl ( (\pi _1 + \pi _2)\hat{w}^{\{st\}}_{i,n} - \pi _1\hat{w}^{s/t\backslash s}_{i,n} - \pi _2\hat{w}^{t/s\backslash t}_{i,n} \bigr )^2 \\&=\frac{1}{4}\Bigl ( (\pi _1 + \pi _2)^2\bigl (\hat{w}^{\{st\}}_{i,n}\bigr )^2 + \pi _1^2\bigl (\hat{w}^{s/t\backslash s}_{i,n}\bigr )^2 + \pi _2^2\bigl (\hat{w}^{t/s\backslash t}_{i,n}\bigr )^2 \\&\quad - 2\pi _1(\pi _1 + \pi _2)\hat{w}^{s/t\backslash s}_{i,n}\hat{w}^{\{st\}}_{i,n} - 2\pi _2(\pi _1 + \pi _2)\hat{w}^{t/s\backslash t}_{i,n}\hat{w}^{\{st\}}_{i,n}\\&\quad + 2\pi _1\pi _2\hat{w}^{s/t\backslash s}_{i,n}\hat{w}^{t/s\backslash t}_{i,n}\Bigr ). \end{aligned} \end{aligned}$$

By taking the summation \(c_n\sum _{i=1}^n\), the \(\hat{w}^{s/t\backslash s}_{i,n}\hat{w}^{t/s\backslash t}_{i,n}\) terms cancel out to 0 because \(\hat{w}^{S/T\backslash S}_{i,n}\) and \(\hat{w}^{T/S\backslash T}_{i,n}\) are independent; meanwhile, the \(\hat{w}^{s/t\backslash s}_{i,n}\hat{w}^{\{st\}}_{i,n}\) and \(\hat{w}^{t/s\backslash t}_{i,n}\hat{w}^{\{st\}}_{i,n}\) terms sum to a positive value because \(\hat{w}^{S/T\backslash S}_{i,n}\) and \(\hat{w}^{T/S\backslash T}_{i,n}\) are positively correlated with \(\hat{w}^{\{ST\}}_{i,n}\). Therefore, the sum of the above is bounded by \(\frac{1}{4}\bigl ( (\pi _1+\pi _2)^2 + \pi _1^2 + \pi _2^2 \bigr ) =\frac{1}{2}(\pi _{1}^2+\pi _{2}^2+\pi _{1}\pi _{2})\).
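As a numerical sanity check of this cancellation argument, the following sketch simulates the high-level decomposition with synthetic random variables; the construction of z, u, v and the values of \(\pi _1\), \(\pi _2\) are illustrative assumptions, not quantities estimated from a corpus.

```python
import numpy as np

# Monte Carlo sketch of the cancellation argument above (illustrative only).
# z plays the role of w-hat^{st}, u of w-hat^{s/t\s}, v of w-hat^{t/s\t}:
# u and v are independent of each other, but each is positively correlated
# with z, and all three have zero mean and unit variance.
rng = np.random.default_rng(0)
n = 1_000_000
u, v, e = rng.standard_normal((3, n))
z = (u + v + e) / np.sqrt(3.0)        # Corr(z, u) = Corr(z, v) = 1/sqrt(3) > 0

pi1, pi2 = 0.4, 0.7                   # illustrative values of pi_1, pi_2
bias_sq = np.mean(((pi1 + pi2) * z - pi1 * u - pi2 * v) ** 2) / 4.0
bound = 0.5 * (pi1 ** 2 + pi2 ** 2 + pi1 * pi2)
print(bias_sq, bound, bias_sq <= bound)
```

The cross terms involving the common component enter with a negative sign, so their positive correlation only decreases the squared bias; this is why the empirical value stays below the bound in this toy setting.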

In view of this explanation, the technical points of Theorem 1 are as follows. First, the decomposition of \(w^{t}_{i,n}\) into \(\hat{w}^{s/t\backslash s}_{i,n}\) and \(\hat{w}^{\{st\}}_{i,n}\) is not exact: there is a difference between \(\hat{w}^{s/t\backslash s}_{i,n}\) and \(\tilde{w}^{s/t\backslash s}_{i,n}\) due to the expected value, and there is a difference between \(F(p^T_{i,n}+1/n)\) and the linear combination of \(F(p^{S/T\backslash S}_{i,n}+1/n)\) and \(F(p^{\{ST\}}_{i,n}+1/n)\). However, by Lemma 1(b) the expected value converges to 0, and by Lemma 5 the linear approximation holds asymptotically; so this first issue is settled. Second, and most importantly, the \(\bigl (\hat{w}^{\{st\}}_{i,n}\bigr )^2\), \(\bigl (\hat{w}^{s/t\backslash s}_{i,n}\bigr )^2\) and \(\bigl (\hat{w}^{t/s\backslash t}_{i,n}\bigr )^2\) terms have to sum to constants independent of s and t, otherwise they could not be separated from \(\pi _1\) and \(\pi _2\) in the calculation of \(\mathscr {B}^{\{st\}}_{n}\). This requires Eq. (3) as we discussed in Sect. 2.2, which is a generalized version of the Law of Large Numbers. For this law to hold, one needs conditions guaranteeing that the fluctuations of the random variables are at comparable scales so that they can cancel out. This leads to the condition \(\lambda <0.5\), a non-trivial constraint on the function F. Formally, Eq. (3) is proven as Corollary 4.

Insights brought by our theory lead to several applications. First, since the power-law tail of natural language data requires \(\lambda <0.5\) for constructing additively compositional vectors, our theory provides important guidance for empirical research on Distributional Semantics (Sect. 3.1). Second, since \(w^{t}_{i,n}\) and \(w^{s}_{i,n}\) have decompositions in which \(\hat{w}^{\{st\}}_{i,n}\) is a common factor and survives averaging, whereas \(\hat{w}^{s/t\backslash s}_{i,n}\) and \(\hat{w}^{t/s\backslash t}_{i,n}\) cancel each other out, we come to the idea of harnessing additive composition by engineering what is common in the summands; for example, we can make additive composition aware of word order (Sect. 3.2). Third, as one can read from Lemma 2(c)(d) and the proof of Lemma 3, the behavior of vector representations is dominated by the entries at dimensions corresponding to low-frequency words, namely \(w^{\varUpsilon }_{i,n}\) with \(\frac{n}{\delta \ln n}\le i\le n\). This understanding has an impact on dimension reduction (Sect. 3.3).

2.6 Hierarchical Pitman–Yor process

In Assumptions (A)(B) of Theorem 1 we have required several properties to be satisfied by the probability values \(p_{i,n}\) and \(p^{\varUpsilon }_{i,n}\). Meanwhile, \(p_{i,n}\)’s and \(p^{\varUpsilon }_{i,n}\)’s (\(1\le i\le n\), n fixed) define distributions from which words can be generated. This setting is reminiscent of a Bayesian model where priors of word distributions are specified.

Conversely, by de Finetti’s Theorem, an exchangeable random sequence of words (i.e., one in which, given any sample sequence, all permutations of that sample occur with the same probability) can be viewed as if the words were drawn i.i.d. from a word distribution that is itself drawn from a prior. A widely studied example is the Pitman–Yor Process (Pitman and Yor 1997; Pitman 2006); in this section, we use this process to define a generative model from which Assumptions (A)(B) can be derived.

Definition 11

A Pitman–Yor Process \(PY(\alpha ,\theta )\) \((0<\alpha <1, \theta >-\alpha )\) defines a prior for word distributions, which is the prior corresponding to the exchangeable random sequence generated by the following Chinese Restaurant Process:

  1. First, generate a new word.

  2. At each step, let \(C(\varpi )\) be the count of word \(\varpi \), and \(C:=\sum _{\varpi }C(\varpi )\) the total count; let N be the number of distinct words. Then:

     (2.1) Generate a new word with probability \(\dfrac{\theta + \alpha N}{\theta + C}\).

     (2.2) Or, generate a new copy of an existing word \(\varpi \), with probability \(\dfrac{C(\varpi ) - \alpha }{\theta + C}\).

Definition 12

In the above process \(PY(\alpha ,\theta )\), we define \(p(\varpi ):=\lim \dfrac{C(\varpi )}{C}\), where the limit is taken as the number of steps tends to infinity. Index the words so that \(p(\varpi _i)\ge p(\varpi _{i+1})\) for all i, and put \(p_i:=p(\varpi _i)\).

Theorem 3

For a sequence generated by \(PY(\alpha , \theta )\), we have \(\lim C/N^{1/\alpha }=Z\) for some Z.

Proof

This is Theorem 3.8 in Pitman (2006). \(\square \)

Theorem 4

We have \(\lim \limits _{i\rightarrow \infty } p_i\cdot i^{1/\alpha }{\varGamma }(1-\alpha )^{1/\alpha }=Z\), where Z is the same as in Theorem 3.

Proof

This is Lemma 3.11 in Pitman (2006). \(\square \)

Theorem 4 shows that, if words are generated by a Pitman–Yor Process \(PY(\alpha , \theta )\), then \(p_i\) has a power law tail of index \(\alpha \). This is the same form as Assumption (A), and when \(\alpha \approx 1\), it approximates Zipf’s Law.
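For a computational illustration, the following sketch simulates the Chinese Restaurant Process of Definition 11 and checks the prediction of Theorem 4 that \(p_i\, i^{1/\alpha }\) levels off to a roughly constant value; the parameter values, the number of steps and the sampling trick (decomposing the weight \(C(\varpi )-\alpha \) as \((C(\varpi )-1)+(1-\alpha )\)) are our own choices for illustration.

```python
import random

# Simulation sketch of the Pitman-Yor Chinese Restaurant Process
# (Definition 11); alpha, theta and the number of steps are illustrative.
def pitman_yor_sample(alpha, theta, steps, seed=0):
    rng = random.Random(seed)
    distinct, repeats = [], []   # distinct word ids / one entry per repeated token
    counts = {}
    for step in range(steps):
        C, N = step, len(distinct)
        if C == 0 or rng.random() < (theta + alpha * N) / (theta + C):
            word = N                              # a brand-new word id
            distinct.append(word)
        else:
            # existing word, with probability proportional to C(word) - alpha,
            # decomposed as (C(word) - 1) + (1 - alpha):
            if rng.random() < (C - N) / (C - alpha * N):
                word = rng.choice(repeats)        # proportional to C(word) - 1
            else:
                word = rng.choice(distinct)       # uniform over distinct words
            repeats.append(word)
        counts[word] = counts.get(word, 0) + 1
    return counts

counts = pitman_yor_sample(alpha=0.9, theta=1.0, steps=500_000)
total = sum(counts.values())
p = sorted((c / total for c in counts.values()), reverse=True)
# Theorem 4 predicts that p_i * i**(1/alpha) approaches a constant for large i.
print([round(p[i - 1] * i ** (1 / 0.9), 3) for i in (100, 300, 1000, 3000)])
```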

For two sequences generated by \(PY(\alpha , \theta )\), their corresponding Z as in Theorem 3 may differ (since the sequences are random), even if they are generated with the same hyper-parameters \(\alpha \) and \(\theta \). Nevertheless, the limit always exists, and Z follows a statistical distribution. The probability density of \(Z^{-\alpha }\) is derived in Pitman (2006), Theorem 3.8:

$$\begin{aligned} -{{\mathrm{d}}}\mathbb {P}(x\le Z^{-\alpha })=\frac{{\varGamma }(\theta +1)}{{\varGamma }(\theta /\alpha +1)}x^{\theta /\alpha }g_\alpha (x){{\mathrm{d}}}x \quad (x>0), \end{aligned}$$
(11)

where \(g_\alpha (x)\) is the Mittag–Leffler density function:

$$\begin{aligned} g_\alpha (x):=\frac{1}{\pi \alpha }\sum _{k=0}^{\infty }\frac{(-1)^{k+1}}{k!}{\varGamma }(\alpha k + 1)\sin (\pi \alpha k)x^{k-1}. \end{aligned}$$

In this article, we only need the fact that \(\lim \limits _{x\rightarrow 0}xg_\alpha (x)\) is a nonzero constant.

Next, we consider the co-occurrence probability \(p^{\varUpsilon }(\varpi )\), conditioned on \(\varpi \) being in the context of a target \({\varUpsilon }\). One first notes that \(p^{\varUpsilon }(\varpi )\) is likely to be related to \(p(\varpi )\); i.e., frequent words are likely to occur in every context, regardless of the target. To model this intuition, the idea of the Hierarchical Pitman–Yor Process (Teh 2006) is to adapt \(PY(\alpha ,\theta )\) so that at each step, if a new word is to be generated, it is no longer generated brand new, but is drawn from another Pitman–Yor Process instead. This second Pitman–Yor Process serves as a “reference” which controls how frequently a word is likely to occur. More precisely, a Hierarchical Pitman–Yor Process \(H\!PY(\alpha _1,\theta _1;\alpha _2,\theta _2)\) generates sequences as follows.

Definition 13

In \(H\!PY(\alpha _1,\theta _1;\alpha _2,\theta _2)\), instead of generating words directly, one generates a “reference” at each step, where the reference can refer to new words or existing words. We use \(\rho \) to denote a reference and \(\varpi ^\rho \) the word referred to by the reference.

  1. First step: generate a new reference, which refers to a new word.

  2. At each step, let \(C(\rho )\) be the count of reference \(\rho \), and \(C(\varpi ):=\sum _{\varpi ^\rho =\varpi }C(\rho )\) the count of all references referring to word \(\varpi \); let \(C:=\sum _{\varpi }C(\varpi )\) be the total count, \(N_r(\varpi )\) the number of distinct references referring to \(\varpi \), and \(N_r:=\sum _{\varpi }N_r(\varpi )\) the total number of distinct references; finally, let \(N_w\) be the number of distinct words.

     (2.1) Generate a new reference referring to a new word, with probability

     $$\begin{aligned} \frac{1}{\theta _1 + C}\cdot \frac{\theta _1+\alpha _1N_r}{\theta _2+N_r}\cdot (\theta _2+\alpha _2N_w). \end{aligned}$$

     (2.2) Generate a new reference referring to an existing word \(\varpi \), with probability

     $$\begin{aligned} \frac{1}{\theta _1 + C}\cdot \frac{\theta _1+\alpha _1N_r}{\theta _2+N_r}\cdot (N_r(\varpi )-\alpha _2). \end{aligned}$$

     (2.3) Or, generate a new copy of an existing reference \(\rho \), with probability

     $$\begin{aligned} \frac{1}{\theta _1 + C}\cdot (C(\rho )-\alpha _1). \end{aligned}$$

It is easy to see from the definition that \(H\!PY(\alpha _1,\theta _1;\alpha _2,\theta _2)\) generates an exchangeable word sequence; and if we focus on distinct references (i.e., ignore (2.3) and regard \(N_r(\varpi )\) as “the count of word \(\varpi \)” in an ordinary Pitman–Yor Process), then the process becomes \(PY(\alpha _2,\theta _2)\). We assume this is the same process that defines the word probability \(p(\varpi )\), so

$$\begin{aligned} p(\varpi )=\lim \frac{N_r(\varpi )}{N_r}; \end{aligned}$$

and we define the conditional probability \(p^{\varUpsilon }(\varpi )\) as:

$$\begin{aligned} p^{\varUpsilon }(\varpi ):=\lim \frac{C(\varpi )}{C}. \end{aligned}$$

Thus, \(H\!PY(\alpha _1,\theta _1;\alpha _2,\theta _2)\) indeed connects \(p^{\varUpsilon }(\varpi )\) to \(p(\varpi )\). This connection between word probability and conditioned word probability has been explored in Teh (2006), where it is used in an n-gram language model to connect the bigram probability p(w|u) to the unigram probability p(w), deriving a smoothing method.

Unfortunately, a precise analysis of the above \(p^{\varUpsilon }(\varpi )\) is beyond the reach of the authors; instead, we consider a slightly modified process which is much simpler for our purpose.

Definition 14

A Modified Hierarchical Pitman–Yor Process \(M\!H\!PY(\alpha _1,\theta _1;\alpha _2,\theta _2)\) generates sequences as follows. Using the same notation as in Definition 13:

  1. First step: generate a new reference, which refers to a new word.

  2. At each step:

     (2.1) Generate a new reference referring to a new word, with probability

     $$\begin{aligned} \frac{1}{D}\cdot (\theta _2+\alpha _2N_w). \end{aligned}$$

     (2.2) Generate a new reference referring to an existing word \(\varpi \), with probability

     $$\begin{aligned} \frac{1}{D}\cdot (N_r(\varpi )-\alpha _2). \end{aligned}$$

     (2.3) Or, generate a new copy of an existing reference \(\rho \), with probability

     $$\begin{aligned} \frac{1}{D}\cdot \frac{N_r(\varpi ^\rho )-\alpha _2}{\theta _1 + \alpha _1N_r(\varpi ^\rho )}\cdot (C(\rho )-\alpha _1). \end{aligned}$$

     In the above, D is a normalization factor that makes the probabilities sum to 1:

     $$\begin{aligned} D:=\theta _2+ \alpha _2N_w + \sum _\varpi \frac{N_r(\varpi )-\alpha _2}{\theta _1 + \alpha _1N_r(\varpi )}\cdot (C(\varpi )+\theta _1). \end{aligned}$$

\(M\!H\!PY(\alpha _1,\theta _1;\alpha _2,\theta _2)\) modifies \(H\!PY(\alpha _1,\theta _1;\alpha _2,\theta _2)\) by canceling \((\theta _1+\alpha _1N_r)/(\theta _2+N_r)\) in (2.1) and (2.2), and scaling (2.3) by a \((N_r(\varpi ^\rho )-\alpha _2)/(\theta _1 + \alpha _1N_r(\varpi ^\rho ))\) factor instead. It is noteworthy that, since \(\lim N_r=\infty \) and \(\lim N_r(\varpi ^\rho )=\infty \), we have

$$\begin{aligned} \lim \frac{\theta _1+\alpha _1N_r}{\theta _2+N_r}=\alpha _1\quad \text {and}\quad \lim \frac{N_r(\varpi ^\rho )-\alpha _2}{\theta _1 + \alpha _1N_r(\varpi ^\rho )}=\frac{1}{\alpha _1}. \end{aligned}$$

So the asymptotic behaviors of \(M\!H\!PY(\alpha _1,\theta _1;\alpha _2,\theta _2)\) and \(H\!PY(\alpha _1,\theta _1;\alpha _2,\theta _2)\) are similar.

A favorable property of \(M\!H\!PY(\alpha _1,\theta _1;\alpha _2,\theta _2)\) is that, like \(H\!PY(\alpha _1,\theta _1;\alpha _2,\theta _2)\), it becomes \(PY(\alpha _2,\theta _2)\) when one focuses on distinct references, so we have

$$\begin{aligned} p(\varpi )=\lim \frac{N_r(\varpi )}{N_r} \end{aligned}$$
(12)

as before; besides, if we restrict attention to a specific word \(\varpi \) (i.e., ignore (2.1), consider only the references referring to \(\varpi \), and regard the references as “words”, \(C(\rho )\) as “the count of word \(\rho \)”, \(C(\varpi ):=\sum _{\varpi ^\rho =\varpi }C(\rho )\) as “the total count”, and \(N_r(\varpi )\) as “the number of distinct words” in an ordinary Pitman–Yor Process), then the process becomes \(PY(\alpha _1,\theta _1)\). Thus, by Theorem 3,

$$\begin{aligned} \lim \frac{C(\varpi )}{N_r(\varpi )^{1/\alpha _1}}=Z_\varpi \quad \text {for some }Z_\varpi . \end{aligned}$$
(13)

Therefore, combining (12) and (13) we have

$$\begin{aligned} p^{\varUpsilon }(\varpi ):=\lim \frac{C(\varpi )}{C}=p(\varpi )^{1/\alpha _1}Z_\varpi \lim \frac{N_r^{1/\alpha _1}}{C}. \end{aligned}$$

So \(p^{\varUpsilon }(\varpi )/p(\varpi )^{1/\alpha _1}\) is a constant multiple of \(Z_\varpi \), which follows a distribution specified in Eq. (11); and it is easy to see that \(Z_\varpi \)’s for different \(\varpi \) are mutually independent. Thus, we have obtained Assumption (B1).

As for Assumption (B2), we assume \(\theta _1=1\) and derive the distribution of \(Z_\varpi \) from (11):

$$\begin{aligned} -{{\mathrm{d}}}\mathbb {P}(z\le Z_\varpi )=\frac{\alpha _1}{{\varGamma }(1/\alpha _1+1)}\frac{z^{-\alpha _1}g_{\alpha _1}(z^{-\alpha _1})}{z^2}{{\mathrm{d}}}z \quad (z>0). \end{aligned}$$

Since \(\lim \limits _{x\rightarrow 0}xg_\alpha (x)\) is a nonzero constant, the above probability density is of order \(z^{-2}\) as \(z\rightarrow \infty \), so the random variable \(Z_\varpi \) has a power law tail of index 1. Thus, Assumption (B2) is approximately satisfied when \(\alpha _1\approx 1\) and \(\theta _1=1\).

3 Applications

In this section, we demonstrate three applications of our theory.

3.1 The choice of function F

The condition \(\lambda <0.5\) specifies a nontrivial constraint on the function F. In Sect. 2.4 we have shown that this is a necessary condition for the norms of natural phrase vectors to converge. The convergence of norms is a notable property that might affect not only additive composition but also the composition ability of vector representations in general. Specifically, we note that \(F(x)=\ln {x}\) corresponds to \(\lambda =0\), and \(F(x)=\sqrt{x}\) to \(\lambda =0.5\). It is straightforward to predict that these functions might perform better in composition tasks than functions with larger \(\lambda \), such as \(F(x):=x\) or \(F(x):=x\ln {x}\). In Sect. 5.3, we show experiments that verify the necessity of \(\lambda <0.5\) for our bias bound to hold, and in Sect. 6 we show that F indeed drastically affects additive compositionality as judged by human annotators; while \(F(x):=\ln {x}\) and \(F(x):=\sqrt{x}\) perform similarly well, \(F(x):=x\) and \(F(x):=x\ln {x}\) are much worse.

Different settings of the function F have been considered in previous research, and speculations have been made about the reasons for the semantic additivity of some vector representations. In Pennington et al. (2014), the authors noted that the logarithm is a homomorphism from multiplication to addition, and used this property to justify \(F(x):=\ln {x}\) for training semantically additive word vectors; however, this rests on the unverified hypothesis that multiplications of co-occurrence probabilities are semantically special. On the other hand, Lebret and Collobert (2014) proposed \(F(x):=\sqrt{x}\), motivated by the Hellinger distance between probability distributions, and reported it to be better than \(F(x):=x\). Stratos et al. (2015) proposed a similar but more general and better-motivated model, which attributes \(F(x):=\sqrt{x}\) to an optimal choice that stabilizes the variances of Poisson random variables: under the assumption that co-occurrence counts are generated by a Poisson Process, the authors pointed out that \(F(x):=\sqrt{x}\) may stabilize the variance in estimating word vectors. In contrast, our theory clearly shows that F affects the bias of additive composition, in addition to the variance. All in all, none of the previous research explains why \(F(x):=\ln {x}\) and \(F(x):=\sqrt{x}\) are both good choices whereas \(F(x):=x\) is not.

Intuitively, the condition \(\lambda <0.5\) requires F(x) to decrease steeply as x tends to 0. The steep slope has the effect of “amplifying” the fluctuations of lower co-occurrence probabilities, and thus “suppressing” higher ones in comparison. Formally, this can be read from Lemma 1, which shows that \({{\mathrm{Var}}}[F(p^{\varUpsilon }_{i,n}+1/n)]\) scales with \(\varphi _{i,n}=p_{i,n}\bigl (p_{i,n}+(\beta n)^{-1}\bigr )^{-1+2\lambda }\). When \(\lambda <0.5\), the \(\bigl (p_{i,n}+(\beta n)^{-1}\bigr )^{-1+2\lambda }\) factor decreases as \(p_{i,n}\) increases, and the decrease becomes faster as \(\lambda \) gets smaller. Thus, in the vector representations we consider, higher co-occurrence probabilities are “suppressed” more when \(\lambda \) is smaller.
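The suppression effect can be made concrete by evaluating the factor \(\bigl (p_{i,n}+(\beta n)^{-1}\bigr )^{-1+2\lambda }\) over a range of co-occurrence probabilities, as in the short sketch below; the values of \(\beta \) and n are illustrative only.

```python
import numpy as np

# The factor (p + 1/(beta*n))**(-1 + 2*lam) from Lemma 1, which governs how
# much high- vs low-probability context words contribute; beta and n are
# illustrative values.
beta, n = 1.0, 16000
p = np.logspace(-6, -1, 6)            # a range of co-occurrence probabilities

for lam in (0.0, 0.25, 0.5, 1.0):
    factor = (p + 1.0 / (beta * n)) ** (-1.0 + 2.0 * lam)
    print(f"lambda={lam:4.2f}:", np.round(factor / factor[0], 4))
# For lambda < 0.5 the factor shrinks as p grows (frequent context words are
# "suppressed"); at lambda = 0.5 it is constant; for lambda > 0.5 it grows.
```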

3.2 Handling word order in additive composition

By considering the vector representation \(\mathbf {w}^{\{st\}}_n\), we have so far ignored word order and conflated the phrases “s t” and “t s”. Though the meanings of the two might be related, treating a compositional framework as approximating \(\mathbf {w}^{\{st\}}_n\) instead of \(\mathbf {w}^{st}_n\) would certainly be troublesome, especially when one tries to extend our theory to longer phrases or even sentences. As the following example (Landauer et al. 1997) demonstrates, the meaning of a sentence may change greatly as word order changes.

  a. It was not the sales manager who hit the bottle that day, but the office worker with the serious drinking problem.

  b. That day the office manager, who was drinking, hit the problem sales worker with a bottle, but it was not serious.

Thus, it is necessary to handle the changes of meaning brought by different word order. Traditionally, additive composition is considered unsuitable for this purpose, because one always has \(\mathbf {w}^s_{n}+\mathbf {w}^t_{n}=\mathbf {w}^t_{n}+\mathbf {w}^s_{n}\). However, the commutativity can be broken by defining different contexts for “left-hand-side” words and “right-hand-side” words, denoted by \(t\bullet \) and \(\bullet t\), respectively. Then, the co-occurrence probabilities \(p^{t\bullet }_{i,n}\) and \(p^{\bullet t}_{i,n}\) will be different, so \(\frac{1}{2}(\mathbf {w}^{s\bullet }_n+\mathbf {w}^{\bullet t}_n)\) and \(\frac{1}{2}(\mathbf {w}^{t\bullet }_n+\mathbf {w}^{\bullet s}_n)\) are different vectors. In this section, we propose the Near–far Context, which specifies contexts for \(s\bullet \) and \(\bullet t\) such that the additive composition \(\frac{1}{2}(\mathbf {w}^{s\bullet }_n+\mathbf {w}^{\bullet t}_n)\) approximates the natural vector \(\mathbf {w}^{st}_n\) for ordered phrase “s t”.

Fig. 2 Surrounding the two-word phrase “s t”, the Near–far contexts assigned to \(s\bullet \), \(\bullet t\) and st are the same

Fig. 3 Surrounding the phrase “t s”, with word order reversed, the Near–far contexts assigned to \(s\bullet \) and \(\bullet t\) differ in their N-F labels

Definition 15

In the Near–far Context, context words are assigned labels, either N or F. For constructing vector representations, we use a lexicon of N-F labeled words and regard words with different labels as different entries in the lexicon. For any target, we label the nearer two words on each side by N, and the farther two words on each side by F; except that for the “left-hand-side” word \(s\bullet \) we skip the one word adjacent to its right, and similarly for the “right-hand-side” word \(\bullet t\) we skip the one word adjacent to its left (Fig. 2).

The idea behind the Near–far Context is that, in the context of the phrase “s t”, each word is assigned the same N-F label as in the contexts of \(s\bullet \) and \(\bullet t\) (Fig. 2). On the other hand, for targets s and t occurring in the order-reversed phrase “t s”, context words are labeled differently for \(s\bullet \) and \(\bullet t\) (Fig. 3). As we discussed in Sect. 2.2, the key fact about additive composition is that if a token of t comes from the phrase “s t” or “t s”, the context of this token is almost the same as the context of “s t” or “t s”. By introducing different labels for the context words of \(t\bullet \) and \(\bullet t\), we are able to distinguish “s t” from “t s”. More precisely, similarly to our discussion in Sect. 2.5, the common component of \(w^{s\bullet }_{i,n}\) and \(w^{\bullet t}_{i,n}\) survives in the average \(\frac{1}{2}(w^{s\bullet }_{i,n}+w^{\bullet t}_{i,n})\), whereas the independent components cancel each other out. Thus, the additive composition \(\frac{1}{2}(\mathbf {w}^{s\bullet }_n+\mathbf {w}^{\bullet t}_n)\) becomes closer to \(\mathbf {w}^{st}_n\) than to \(\mathbf {w}^{ts}_n\), because \(s\bullet \) and \(\bullet t\) share the context surrounding “s t” but not “t s”.
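To make the labeling rule concrete, here is a minimal sketch (not the authors' implementation) of Near–far Context labeling for a tokenized sentence; the function name and the treatment of sentence boundaries are our own illustrative choices.

```python
# Minimal sketch of Near-far Context labeling (Definition 15): on each side of
# a target, the nearer two context words get label N and the farther two get
# label F; a left-hand-side target s* skips the word adjacent to its right,
# and a right-hand-side target *t skips the word adjacent to its left.
def near_far_context(tokens, i, skip_left=False, skip_right=False):
    labeled = []
    right = tokens[i + 1 + (1 if skip_right else 0):]
    for k, w in enumerate(right[:4]):
        labeled.append((w, "N" if k < 2 else "F"))
    left_end = max(i - (1 if skip_left else 0), 0)
    for k, w in enumerate(tokens[:left_end][::-1][:4]):
        labeled.append((w, "N" if k < 2 else "F"))
    return labeled

sent = ["a", "b", "s", "t", "c", "d", "e"]
print(near_far_context(sent, 2, skip_right=True))   # context of the target s*
print(near_far_context(sent, 3, skip_left=True))    # context of the target *t
# Both calls return the same labeled context, matching the context assigned to
# the phrase "s t" (cf. Fig. 2).
```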

Definition 16

Formally, as an analogue of Definition 6, we define the target \({s\bullet }\backslash t\), which counts every occurrence of \(s\bullet \) that is not at the left of word t. We denote by \(\pi _{{s\bullet }\backslash t}\) the probability of s not being at the left of t, conditioned on its occurrence. Practically, one can estimate \((1-\pi _{{s\bullet }\backslash t})\) by C(st) / C(s). Similarly, we define \(s/{\bullet t}\) and \(\pi _{s/{\bullet t}}\). Then, we have equations

as parallel to (1).

The following claim is parallel to Claim 1.

Claim 3

Under conditions parallel to Claim 1, we have

In Sect. 5.4, we verify Claim 3 experimentally, and show that in contrast, the error \(||\mathbf {w}^{ts}_n-\frac{1}{2}(\mathbf {w}^{s\bullet }_n + \mathbf {w}^{\bullet t}_n) ||\) for approximating the order-reversed phrase “t s” can exceed this bias bound. Further, we demonstrate that by using the additive composition of Near–far Context vectors, one can indeed assess meaning similarities between ordered phrases.

3.3 Dimension reduction

So far we have only discussed vector representations whose dimension is as high as the lexicon size n. In practice, people mainly use low-dimensional “embeddings” of words to represent their meanings. Many of these embeddings, including SGNS and GloVe, can be formalized as linear dimension reduction, which amounts to finding a d-dimensional vector \(\mathbf {v}^t\) (where \(d\ll n\)) for each target word t, and an \(n\times d\) matrix A, such that \(\sum _t L(A\mathbf {v}^t, \mathbf {w}^t_n)\) is minimized for some loss function \(L(\cdot , \cdot )\). In other words, \(A\mathbf {v}^t\) is trained as a good approximation of \(\mathbf {w}^t_n\).

Naturally, we expect the loss function L to be a crucial factor in word embeddings. Although there are empirical investigations of other design details of embedding methods (e.g., how to count co-occurrences; see Levy et al. 2015), the loss functions have not been explicitly discussed before. In this section, we discuss how the loss function affects the additive compositionality of word embeddings, from the viewpoint of bounding the bias \(||\mathbf {v}^{\{st\}}-\frac{1}{2}(\mathbf {v}^{s}+\mathbf {v}^{t}) ||\).

SVD When L is the \(L^2\)-loss, the minimization has a closed-form solution given by the Singular Value Decomposition (SVD). More precisely, one considers the matrix whose j-th column is \(\mathbf {w}^t_n\), where t is the j-th target word. SVD factorizes this matrix into \(U{\varSigma }V^\top \), where U and V are orthonormal and \({\varSigma }\) is diagonal. Let \({\varSigma }_d\) denote \({\varSigma }\) truncated to the top d singular values. Then, A is given by \(U\sqrt{{\varSigma }_d}\) and \(\mathbf {v}^{t}\) by the j-th column of \(\sqrt{{\varSigma }_d}V^\top \). SVD has been used in Lebret and Collobert (2014), Stratos et al. (2015) and Levy et al. (2015). In this setting, we have

$$\begin{aligned} ||A\mathbf {v}^{s}-\mathbf {w}^s_{n} ||\le \varepsilon _1,\quad ||A\mathbf {v}^{t}-\mathbf {w}^t_{n} ||\le \varepsilon _2\quad \text { and }\quad ||A\mathbf {v}^{\{st\}}-\mathbf {w}^{\{st\}}_n ||\le \varepsilon _3, \end{aligned}$$

where \(\varepsilon _1\), \(\varepsilon _2\) and \(\varepsilon _3\) are minimized. Thus, by Triangle Inequality we have

$$\begin{aligned} ||A\cdot \bigl (\mathbf {v}^{\{st\}}-\frac{1}{2}(\mathbf {v}^{s}+\mathbf {v}^{t})\bigr ) ||\le \mathscr {B}^{\{st\}}_n + \frac{1}{2}(\varepsilon _1+\varepsilon _2) + \varepsilon _3. \end{aligned}$$

Further, by Claim 1 we can bound \(\mathscr {B}^{\{st\}}_n\) for sufficiently large n, so \(||\mathbf {v}^{\{st\}}-\frac{1}{2}(\mathbf {v}^{s}+\mathbf {v}^{t}) ||\) is bounded in turn, because A has full column rank and satisfies \(||A\mathbf {x} ||\ge \sqrt{\sigma _d}\,||\mathbf {x} ||\), where \(\sigma _d\) is the d-th largest singular value. This bound suggests that word embeddings trained by SVD preserve additive compositionality.
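For concreteness, the SVD reduction can be sketched in a few lines using numpy; the matrix W of stacked natural vectors, the function name and the toy usage are our own illustration.

```python
import numpy as np

# Sketch of the truncated-SVD dimension reduction described above.  W is the
# (n x T) matrix whose columns are the natural vectors w^t_n of T targets.
def svd_embeddings(W, d):
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :d] * np.sqrt(s[:d])               # A = U * sqrt(Sigma_d)
    V = np.sqrt(s[:d])[:, None] * Vt[:d]        # columns of V are the v^t
    return A, V

# toy usage with a random matrix standing in for the stacked natural vectors
W = np.random.default_rng(0).standard_normal((1000, 300))
A, V = svd_embeddings(W, d=200)
print(np.linalg.norm(W - A @ V) / np.linalg.norm(W))   # relative L2 error
# U Sigma_d V^T is the best rank-d approximation of W in Frobenius norm, which
# is what makes the epsilon terms above jointly minimal.
```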

However, the same argument does not directly apply to other loss functions, because a general loss may not satisfy a triangle inequality, and a bound on the Euclidean distance may not always translate into a bound on the loss, or vice versa. In the following, we describe two widely used alternative embeddings and discuss the effects of their loss functions.

GloVe The GloVe model (Pennington et al. 2014) trains a dimension reduction for vector representations with \(F(x):=\ln {x}\). Let \(v^t_i\) be the i-th entry of \(A\mathbf {v}^t\), and \(C^t_i\) the co-occurrence count. Then, the loss function of GloVe is given by

$$\begin{aligned} L(v^t_i, w^t_{i,n}):=f\bigl ( C^t_i \bigr )(v^t_i-w^t_{i,n})^2, \end{aligned}$$

where f is a function that is constant when \(C^t_i\) is larger than a threshold, and decreases to 0 as \(C^t_i\rightarrow 0\). In words, GloVe uses a weighted \(L^2\)-loss whose weight is a function of the co-occurrence count. To minimize the loss, GloVe uses stochastic gradient descent methods such as AdaGrad (Duchi et al. 2011).
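For concreteness, the weighted loss can be sketched as follows; the specific form of f shown is the one published with GloVe (with defaults \(x_{\max }=100\) and exponent 3/4), taken here as an assumption rather than re-derived.

```python
import numpy as np

# Sketch of the GloVe weighted L2 loss: f(C^t_i) * (v^t_i - w^t_{i,n})**2,
# with the published weighting f(x) = min((x / x_max)**alpha, 1).
def glove_weight(count, x_max=100.0, alpha=0.75):
    return np.minimum((np.asarray(count, dtype=float) / x_max) ** alpha, 1.0)

def glove_loss(v_ti, w_ti, count_ti):
    return glove_weight(count_ti) * (v_ti - w_ti) ** 2

print(glove_loss(1.2, 1.0, count_ti=50.0))   # rare pairs get down-weighted
```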

SGNS The Skip-Gram with Negative Sampling (SGNS) model (Mikolov et al. 2013a) also trains a dimension reduction with \(F(x):=\ln {x}\). The training is based on the Noise Contrastive Estimation (NCE) (Gutmann and Hyvärinen 2012), so its loss function has two parameters, the number k of noise samples per data point, and the noise distribution \(p^{\text {noise}}_{i,n}\).

Claim 4

Let \(v^t_i\) be the i-th entry of \(A\mathbf {v}^t\). The loss function of SGNS is given by

$$\begin{aligned} L(v^t_i, w^t_{i,n}) := C(t) D_{\phi _i}\bigl (v^t_i + \ln (kp^{\mathrm{noise}}_{i,n}),\, w^t_{i,n} + \ln (kp^{\mathrm{noise}}_{i,n})\bigr ), \end{aligned}$$

where \(D_{\phi }(\cdot ,\cdot )\) is the Bregman divergence associated to the convex function

$$\begin{aligned} \phi (x):=\bigl (p^t_{i,n}+kp^{\mathrm{noise}}_{i,n}\bigr )\ln \bigl (\exp (x)+kp^{\mathrm{noise}}_{i,n}\bigr ). \end{aligned}$$

When \(k\rightarrow +\infty \), \(D_{\phi }\) converges to the Bregman divergence \(D_\varphi \) associated to \(\varphi (x):=\exp (x)\).

The proof of Claim 4 can be found in “Appendix 2”. We draw a graph of the SGNS loss in Fig. 4, where \(D_{\phi }\bigl (v^t_i + \ln (kp^{\text {noise}}_{i,n}),\, w^t_{i,n} + \ln (kp^{\text {noise}}_{i,n})\bigr )\) is plotted on the y-axis against \(v^t_i-w^t_{i,n}\) on the x-axis. Note that the graph grows faster as \(x\rightarrow +\infty \) than as \(x\rightarrow -\infty \), suggesting that an overestimation of \(w^t_{i,n}\) is punished more than an underestimation. In addition, the loss function puts more weight on high co-occurrence probabilities, as indicated by the \(p^t_{i,n}\) coefficient in the equation of the limit curve (Fig. 4). Thus, the SGNS loss tends to enforce underestimation of \(w^t_{i,n}\) for frequent context words (as overestimation is costly), and to compensate \(w^t_{i,n}\) for rare ones (i.e., overestimation of rare context words is affordable and will occur if necessary). This is a special property of SGNS which might have some smoothing effect.
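The asymmetry can also be checked numerically from the formula in Claim 4, as in the sketch below; the probabilities p and q, the noise-sample number k and the value of \(w^t_{i,n}\) are illustrative assumptions.

```python
import numpy as np

# Numerical sketch of the SGNS loss of Claim 4: the Bregman divergence
# D_phi(a, b) = phi(a) - phi(b) - phi'(b) * (a - b), with
# phi(x) = (p + k*q) * log(exp(x) + k*q), p = p^t_{i,n}, q = p^noise_{i,n}.
def sgns_divergence(v, w, p, q, k):
    phi  = lambda x: (p + k * q) * np.log(np.exp(x) + k * q)
    dphi = lambda x: (p + k * q) * np.exp(x) / (np.exp(x) + k * q)
    a, b = v + np.log(k * q), w + np.log(k * q)
    return phi(a) - phi(b) - dphi(b) * (a - b)

p, q, k, w = 1e-4, 1e-5, 5, -2.0        # illustrative values
for delta in (-2.0, -1.0, 1.0, 2.0):    # delta = v^t_i - w^t_{i,n}
    print(delta, sgns_divergence(w + delta, w, p, q, k))
# With these values the loss is larger for positive delta (overestimation)
# than for negative delta of the same magnitude, as in Fig. 4.
```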

Fig. 4 A graph of the SGNS loss function with two asymptotes (red), and its limit curve at \(k\rightarrow +\infty \) with one asymptote (blue) (Color figure online)

Compared to SVD, the loss functions of both GloVe and SGNS put less weight on rare context words. As a result, the trained \(A\mathbf {v}^t\) may fail to precisely approximate the low co-occurrence part of \(\mathbf {w}^t_{n}\). As we discussed in Sect. 2.5, entries corresponding to low-frequency words dominate the behavior of vector representations; thus, failing to approximate this part precisely might hinder the inheritance of additive compositionality from the high-dimensional vector representations to the low-dimensional embeddings. Therefore, we conjecture that word vectors trained by GloVe or SGNS might exhibit less additive compositionality than those trained by SVD, and that the composition might be less respectful of our bias bound.

The previous discussion is only exploratory and does not fully reflect practice, because after \(\mathbf {v}^t\) is trained by dimension reduction, one usually re-scales the norm of every \(\mathbf {v}^t\) to 1 and uses the normalized vectors in additive composition. It is not clear why this normalization step usually results in better performance.

Nevertheless, in our experiments (Sect. 5.5), we find that word vectors trained by SVD preserve our bias bound well in additive composition, even after the normalization step is conducted. In contrast, vectors trained by GloVe or SGNS are less respectful of the bound. Further, in extrinsic evaluations (Sect. 6) we show that vectors trained by SVD can indeed be more additively compositional, as judged by human annotators.

4 Related work

Additive composition is a classical approach to approximating meanings of phrases and/or sentences (Foltz et al. 1998; Landauer and Dumais 1997). Compared to other composition operations, vector addition/average has either served as a strong baseline (Mitchell and Lapata 2008; Takase et al. 2016), or remained one of the most competitive methods until recently (Banea et al. 2014). Additive composition has also been successfully integrated into several NLP systems. For example, Tian et al. (2014) use vector additions for assessing semantic similarities between paraphrase candidates in a logic-based textual entailment recognition system (e.g. the similarity between “blamed for death” and “cause loss of life” is calculated by the cosine similarity between the sums of word vectors \(\mathbf {v}^{\text {blame}}+\mathbf {v}^{\text {death}}\) and \(\mathbf {v}^{\text {cause}}+\mathbf {v}^{\text {loss}}+\mathbf {v}^{\text {life}}\)); in Iyyer et al. (2015), the average of the vectors of all words in a sentence/document is fed into a deep neural network for sentiment analysis and question answering, which achieves near state-of-the-art performance with minimal training time. Other semantic relations are handled by vector additions as well, such as word analogy (e.g. the vector \(\mathbf {v}^{\text {king}}-\mathbf {v}^{\text {man}}+\mathbf {v}^{\text {woman}}\) is close to \(\mathbf {v}^{\text {queen}}\), suggesting “man is to king as woman is to queen”, see Mikolov et al. 2013b), and synonymy (i.e. a set of synonyms can be represented by the sum of the vectors of the words in the set, see Rothe and Schütze 2015). We expect all these uses to be related to our theory of additive composition; for example, a link between additive composition and word analogy is hypothesized in Sect. 6.2. Ultimately, our theory may provide new insights into these previous works, for instance about how to construct word vectors.

The lack of syntactic or word-order dependent effects on meaning is considered one of the most important issues with additive composition (Landauer 2002). Driven by this point of view, a number of advanced compositional frameworks have been proposed to cope with word order and/or syntactic information (Mitchell and Lapata 2008; Zanzotto et al. 2010; Baroni and Zamparelli 2010; Coecke et al. 2010; Grefenstette and Sadrzadeh 2011; Socher et al. 2012; Paperno et al. 2014; Hashimoto et al. 2014). The usual approach is to introduce new parameters that represent different word positions or syntactic roles. For example, given a two-word phrase, one can first transform the two word vectors by different matrices and then add the results, so the two matrices are parameters (Mitchell and Lapata 2008); or, regarding different syntactic roles, one can assign matrices to adjectives and use them to modify vectors of nouns (Baroni and Zamparelli 2010); further, one can insert neural network layers between parents and children in a syntactic tree (Socher et al. 2012). An empirical comparison of composition models can be found in Blacoe and Lapata (2012), with an accessible introduction to the literature. One theoretical issue with these methods, however, is the lack of a learning guarantee. In contrast, our proposal of the Near–far Context demonstrates that word order can be handled within an additive compositional framework that is parameter-free and comes with a proven bias bound. Recently, Tian et al. (2016) further extended additive composition to realize a formal semantics framework.

From a wider perspective, constructing and composing vector representations for linguistic sequences have become one of the central techniques in NLP, and a lot of approaches have been explored. Some of them, such as the vectors constructed from probability ratios and composed by multiplications (Mitchell and Lapata 2010), might still be related to additive composition because by taking logarithm, multiplications become additions and probability ratios become PMIs. Other composition methods range from circular convolution (Mitchell and Lapata 2010) to neural networks such as recursive autoencoder (Socher et al. 2011) and long short-term memory (Melamud et al. 2016). Word vectors can be trained jointly with composition parameters (Hashimoto et al. 2014; Pham et al. 2015), and training signals range from surrounding context words (Takase et al. 2016) to supervised labels (Collobert et al. 2011). We believe it is also important to investigate the theoretical aspects of these approaches, which remain largely unclear. As for word vectors, some theoretical works have been done on explaining the errors of dimension reductions of PMI vectors (Arora et al. 2016; Hashimoto et al. 2016).

Error bounds in approximation schemes have been extensively studied in statistical learning theory (Vapnik 1995; Gnecco and Sanguineti 2008), and especially for neural networks (Niyogi and Girosi 1999; Burger and Neubauer 2001). Since we have formalized compositional frameworks as approximation schemes, there is a good chance to apply the theories of approximation error bounds to this problem, especially for advanced compositional frameworks that have many parameters. Though the theories are usually established on general settings, we see a great potential in using properties that are specific to natural language data, as we demonstrate in this work.

There have been consistent efforts toward understanding stochastic behaviors of natural language. Zipf’s Law (Zipf 1935) and its applications (Kobayashi 2014), non-parametric Bayesian language models such as the Hierarchical Pitman–Yor Process (Teh 2006), and the topic model (Blei 2012) might further help refine our theory. For example, it can be fruitful to consider additive composition of topics.

5 Experimental verification

In this section, we conduct experiments on the British National Corpus (BNC) (The BNC Consortium 2007) to verify assumptions and predictions of our theory. The corpus contains about 100M word tokens, including written texts and utterances in British English. For constructing vector representations we use lemmatized words annotated in the corpus, and for counting co-occurrences we use context windows that do not cross sentence boundaries. The size of the context windows is 5 to each side for a target word, and 4 for a target phrase. We extract all unigrams, ordered and unordered bigrams occurring more than 200 times as targets. This results in 16,210 unigrams, 45,793 ordered bigrams and 45,398 unordered bigrams. For the lexicon of context words we use the same set of unigrams.

Fig. 5 Distributions of Spearman’s \(\rho \) between different pairs of random variables

5.1 Test of independence

In order to test the independence assumptions in our theory, we use Spearman’s \(\rho \) to measure the correlations between random variables. Spearman’s \(\rho \) is the Pearson correlation between rank values, and is invariant under any monotonic transformation. One has \(-1\le \rho \le 1\), and if two variables are independent, \(\rho \) should be close to 0.

In our theory, Assumption (B1) of Theorem 1 states that \(p^{\varUpsilon }_{i,n}\) and \(p^{\varUpsilon }_{j,n}\) are independent for each \(1\le i < j \le n\). To test this, we calculate the Spearman’s \(\rho \) between (i) \(p^T_{i,n}\) and \(p^T_{j,n}\), and (ii) \(p^{\{ST\}}_{i,n}\) and \(p^{\{ST\}}_{j,n}\), where T and \(\{ST\}\) vary over the 16,210 unigrams and the 45,398 unordered bigram samples, respectively. Further, Assumption (C) of Theorem 1 states that for each \(1\le i \le n\), the random variables \(p^{S/T\backslash S}_{i,n}\) and \(p^{T/S\backslash T}_{i,n}\) are independent, whereas \(F(p^{S/T\backslash S}_{i,n}+1/n)\) and \(F(p^{\{ST\}}_{i,n}+1/n)\) have positive correlation. Thus, we check the Spearman’s \(\rho \) between (iii) \(p^{S/T\backslash S}_{i,n}\) and \(p^{T/S\backslash T}_{i,n}\), and (iv) \(p^{S/T\backslash S}_{i,n}\) and \(p^{\{ST\}}_{i,n}\), where \(\{S, T\}\) vary over the 45,398 unordered bigrams. The results are summarized in Fig. 5.

For most i-j pairs, Fig. 5(i)(ii) suggests that the correlations between \(p^{\varUpsilon }_{i,n}\) and \(p^{\varUpsilon }_{j,n}\) are positive but quite weak (for 70% of the i-j pairs, the Spearman’s \(\rho \) between \(p^T_{i,n}\) and \(p^T_{j,n}\) is \(0.2\pm 0.05\); and for 90% of the pairs the Spearman’s \(\rho \) between \(p^{\{ST\}}_{i,n}\) and \(p^{\{ST\}}_{j,n}\) is \(0.1\pm 0.05\)). As a comparison, when i and j index a pair of semantically related context words such as black and white, the Spearman’s \(\rho \) between \(p^T_{i,n}\) and \(p^T_{j,n}\) is 0.40, and between \(p^{\{ST\}}_{i,n}\) and \(p^{\{ST\}}_{j,n}\) it is 0.31. Such examples contribute only a negligible portion of all i-j pairs, because semantically related pairs are rare.

On the other hand, Fig. 5(iii) shows that \(p^{S/T\backslash S}_{i,n}\) and \(p^{T/S\backslash T}_{i,n}\) have negative correlation for most i; the Spearman’s \(\rho \) for 66% of the \(p^{S/T\backslash S}_{i,n}\)-and-\(p^{T/S\backslash T}_{i,n}\) pairs is \(-0.2\pm 0.1\). In addition, Fig. 5(iv) confirms that \(p^{S/T\backslash S}_{i,n}\) and \(p^{\{ST\}}_{i,n}\) have positive correlation.

Now, can the observed weak correlations support our independence assumptions? To test this, one may calculate a p-value as the probability of a Spearman’s \(\rho \) being farther from 0 than the observed one, using the fact that \(\rho \sqrt{\frac{N-2}{1-\rho ^2}}\) approximately follows a Student’s t-distribution (where N is the sample size). If the p-value is small, the Spearman’s \(\rho \) should be considered too far from 0 to support independence. However, this test turns out to be overly strict: for unordered bigrams (i.e. \(N=45398\)), one needs \(|\rho |<0.012\) to make \(p>0.01\). In other words, since the sample size is huge, even weak correlations among samples manifest as evidence for rejecting independence as the null hypothesis.
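The threshold quoted above can be reproduced with a few lines (a sketch; scipy is assumed to be available):

```python
import numpy as np
from scipy import stats

# Two-sided p-value for an observed Spearman's rho with sample size N, using
# the approximation that rho * sqrt((N-2)/(1-rho**2)) follows a Student's
# t-distribution with N-2 degrees of freedom.
def spearman_pvalue(rho, N):
    t = rho * np.sqrt((N - 2) / (1.0 - rho ** 2))
    return 2.0 * stats.t.sf(abs(t), df=N - 2)

print(spearman_pvalue(0.012, 45398))   # roughly 0.01, the threshold above
```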

Nevertheless, our theoretical analysis is still valid, because the Law of Large Numbers holds even for weakly correlated random variables, and the fact that \(p^{S/T\backslash S}_{i,n}\) and \(p^{T/S\backslash T}_{i,n}\) are negatively correlated does not change the direction of our proven inequality. Therefore, although our independence assumptions are oversimplifications of the language data, the theoretical conclusions and the bias bound are still likely to hold.

5.2 Generalized Zipf’s law

Consider the probability ratio \(p^{\varUpsilon }_{i,n}/p_{i,n}\), where \({\varUpsilon }\) can be a unigram, an ordered bigram or an unordered bigram. Assumption (B) of Theorem 1 states that the \((p^{\varUpsilon }_{i,n}/p_{i,n})\)’s (\(1\le i\le n\), n fixed) can be viewed as independent sample points drawn from distributions that share the same power law tail of index 1. We verify this assumption in the following.

A power law distribution has two parameters, the index \(\alpha \) and the lower bound m of the power law behavior. If a random variable X obeys a power law, the probability of \(x\le X\) conditioned on \(m\le X\) is given by

$$\begin{aligned} \mathbb {P}(x\le X | m\le X)=\frac{m^\alpha }{x^\alpha }. \end{aligned}$$
(14)

For each fixed \({\varUpsilon }\), we estimate \(\alpha \) and m from the sample \(p^{\varUpsilon }_{i,n}/p_{i,n}\) \((1\le i\le n)\), using the method of Clauset et al. (2009). Namely, \(\alpha \) is estimated by maximizing the likelihood of the sample, and m is chosen to minimize the Kolmogorov–Smirnov statistic, which measures how well the theoretical distribution (14) fits the empirical distribution of the sample. After m is estimated, we plot all \(p^{\varUpsilon }_{i,n}/p_{i,n}\) greater than m in a log–log graph, against their rank. If the sample points are drawn from a power law, the graph will be a straight line. Since Assumption (B) states that the power law tail is the same for all \({\varUpsilon }\) and has index 1, we should obtain the same straight line for all \({\varUpsilon }\), and the slope of the line should be \(-1\).
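A simplified version of this fitting procedure for one fixed target might look as follows; it uses the parameterization \(\mathbb {P}(x\le X\mid m\le X)=m^\alpha /x^\alpha \) of Eq. (14), under which the maximum-likelihood estimate of the index is \(\hat{\alpha }=n_{\mathrm{tail}}/\sum \ln (x_i/m)\). This is a sketch of our reading of Clauset et al. (2009), not their reference implementation; the coarse grid of candidate lower bounds is our own simplification.

```python
import numpy as np

# For each candidate lower bound m: fit alpha by maximum likelihood, then keep
# the m minimizing the Kolmogorov-Smirnov distance between the fitted tail
# P(X >= x) = (m/x)**alpha and the empirical tail distribution.
def fit_power_law(ratios, n_candidates=200):
    x = np.sort(np.asarray(ratios, dtype=float))
    x = x[x > 0]
    step = max(1, len(np.unique(x)) // n_candidates)
    best = None
    for m in np.unique(x)[::step]:
        tail = x[x >= m]
        if len(tail) < 10:
            continue
        alpha = len(tail) / np.sum(np.log(tail / m))        # MLE of the index
        cdf_emp = np.arange(1, len(tail) + 1) / len(tail)   # empirical CDF
        cdf_fit = 1.0 - (m / tail) ** alpha                 # fitted CDF
        ks = np.max(np.abs(cdf_emp - cdf_fit))
        if best is None or ks < best[0]:
            best = (ks, m, alpha)
    return best   # (KS distance, estimated m, estimated alpha)

# toy check on synthetic data whose true tail index is 1
sample = np.random.default_rng(0).pareto(1.0, size=20000) + 1.0
print(fit_power_law(sample))
```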

In Fig. 6, we summarize the graphs described above for all \({\varUpsilon }\). More precisely, we plot the ranked probability ratios for each fixed \({\varUpsilon }\) into the same log-log graph, and then show the average and standard deviation of the x-th largest probability ratios across different \({\varUpsilon }\). The figure shows that for each target type, most data points lie within a narrow stripe of roughly the same shape, suggesting that the distribution of probability ratios for each fixed \({\varUpsilon }\) is approximately the same. In addition, the shape can be roughly approximated by a straight line with slope \(-1\), which suggests that the distribution is power law of index 1, verifying Assumption (B).

Fig. 6 For each x coordinate, the log–log graphs show the average value of the x-th largest probability ratios \(p^{\varUpsilon }_{i,n}/p_{i,n}\) on the y axis. The ranking is taken among \(1\le i\le n\) with \({\varUpsilon }\) fixed, and the average is taken across different \({\varUpsilon }\) that are unigrams, unordered bigrams, or ordered bigrams respectively. Standard deviation is shown as error bars

Fig. 7 Log–log plot of probability ratios against ranks, for two fixed targets

As a concrete example, in Fig. 7 we show a log-log graph of the x-th largest probability ratios \(p^s_{i,n}/p_{i,n}\) and \(p^t_{i,n}/p_{i,n}\) \((1\le i\le n)\), where s and t are two individual word targets. The red points are cut off because their y values are lower than the boundaries of power law behavior estimated from data. The blue and green points are the power law tails.

Further, to document this Zipfian behavior quantitatively, we conduct a \(\chi ^2\)-test on the distribution of \(p^T_{i,n}/p_{i,n}\). In this test, we fix each i and categorize the values of \(X:=p^T_{i,n}/p_{i,n}\), where T varies over the 16,210 unigram samples. According to Figs. 6 and 7, we assume that X has a power law tail starting from \(X \ge 2^4\). Thus, we divide the values of X into 5 categories: \(X<2^4\), \(2^{4+k}\le X < 2^{5+k}\) \((k=0,1,2)\), and \(X\ge 2^7\). We count the frequencies in each category and choose a parameter \(\frac{1}{2^4}\le m\le \frac{1}{2}\) by minimizing the \(\chi ^2\) statistic, with \(\alpha \) fixed to 1. The degrees of freedom are then \(5-1-1=3\), and the \(\chi ^2\)-test produces a p-value indicating how well the power law hypothesis fits the observed frequencies. We decide that the test is passed if \(p\ge 0.0001\).
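Our reading of this test can be sketched as follows; the grid over m, the reconstruction of the expected bin probabilities from \(\mathbb {P}(X\ge x)=m/x\), and the use of scipy's \(\chi ^2\) survival function are our own implementation choices.

```python
import numpy as np
from scipy import stats

# Chi-square test of an index-1 power-law tail for X = p^T_{i,n}/p_{i,n} over
# all unigram targets T (i fixed).  Bins: X < 2^4, [2^4,2^5), [2^5,2^6),
# [2^6,2^7), X >= 2^7; expected probabilities follow from P(X >= x) = m/x.
def zipf_chi2_test(X, m_grid=np.linspace(1 / 16, 0.5, 200)):
    X = np.asarray(X, dtype=float)
    edges = [2.0 ** 4, 2.0 ** 5, 2.0 ** 6, 2.0 ** 7]
    observed = np.array([np.sum(X < edges[0])]
                        + [np.sum((X >= a) & (X < b)) for a, b in zip(edges, edges[1:])]
                        + [np.sum(X >= edges[-1])])
    best = None
    for m in m_grid:
        tail = [m / e for e in edges]                        # P(X >= 2^k)
        probs = np.array([1 - tail[0]]
                         + [tail[j] - tail[j + 1] for j in range(3)] + [tail[-1]])
        chi2 = np.sum((observed - len(X) * probs) ** 2 / (len(X) * probs))
        if best is None or chi2 < best[0]:
            best = (chi2, m)
    chi2, m = best
    return m, chi2, stats.chi2.sf(chi2, df=5 - 1 - 1)        # p-value, 3 dof

# toy check: synthetic ratios with an index-1 tail of scale ~0.2 should pass
toy = np.random.default_rng(0).pareto(1.0, size=16210) * 0.2
print(zipf_chi2_test(toy))
```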

Table 4 \(\chi ^2\)-tests on distributions of \(X:=p^T_{i,n}/p_{i,n}\)

Among all indices \(1\le i \le n\), about 16% of the distributions pass the test. A selection of examples is shown in Table 4. It turns out that many function words, such as “the” and “be”, cannot pass the test (with all values of X less than \(2^4\)), because the occurrence probabilities of these words do not change much whether or not they are conditioned on a target. An exception is that several prepositions, such as “between” and “under”, do pass. On the other hand, as i becomes larger (i.e. \(p_{i,n}\) becomes smaller), more of the distributions of \(p^T_{i,n}/p_{i,n}\) become distorted, similar to the green dots in Fig. 7, and fail the test. As Table 4 suggests, no obvious linguistic factor seems able to explain which words pass. However, Fig. 6 still confirms that the averaged behavior of these distributions obeys a power law.

5.3 The choice of function F

In this section we experimentally verify the effects of different choices of the function F. Recall that F is parameterized by \(\lambda \) as defined in Definition 10. In Sect. 2.4, we have shown that \(\mathbb {E}[F(X)^2]<\infty \) is a sufficient and necessary condition for the norms of natural phrase vectors to converge to 1. If X has a power law tail of index \(\alpha \), then the condition for \(\mathbb {E}[F(X)^2]<\infty \) is \(\lambda <\alpha /2\). So if we construct vector representations with different \(\lambda \), only those vectors satisfying \(\lambda <\alpha /2\) will have convergent norms. We verify this prediction first.
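Before turning to the corpus, the role of the condition \(\lambda <\alpha /2\) can be previewed on synthetic data: the sketch below samples X from a Pareto distribution with tail index 1 and tracks the running estimate of \(\mathbb {E}[F(X)^2]\), taking \(F(X)\approx X^{\lambda }\) on the tail as a rough stand-in for the family of Definition 10 (an assumption made only for this illustration).

```python
import numpy as np

# Running estimate of E[F(X)^2] with F(X) ~ X**lam and X having an index-1
# power-law tail; the estimate should stabilize only when lam < 1/2.
rng = np.random.default_rng(0)
X = rng.pareto(1.0, size=2_000_000) + 1.0     # Pareto tail with index 1

for lam in (0.2, 0.5, 0.8):
    running = np.cumsum(X ** (2 * lam)) / np.arange(1, len(X) + 1)
    checkpoints = running[[10**4 - 1, 10**5 - 1, 10**6 - 1, -1]]
    print(f"lam={lam}:", np.round(checkpoints, 2))
# For lam = 0.2 the estimates settle around a constant; for lam >= 0.5 they
# keep growing with the sample size, i.e. E[F(X)^2] diverges.
```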

Fig. 8 Plot of the standard deviation of the norms of natural phrase vectors against different \(\lambda \)

Fig. 9 Plot of the standard deviation of the norms against the number of tokens in the corpus

In Fig. 8, we plot the standard deviation of the norms of natural phrase vectors on y-axis, against different \(\lambda \) values used for constructing the vectors. We tried \(\lambda =0,0.1,\ldots ,1\). As the graph shows, as long as \(\lambda < 0.5\), most of the norms lie within the range of \(1\pm 0.1\). In contrast, the observed standard deviation quickly explodes as \(\lambda \) gets larger. In addition, the transition point appears to be slightly larger than 0.5, which complies with the fact that the observed \(\alpha \) is slightly larger than 1 (i.e., the slope \(-1/\alpha \) of the power law tails in Figs. 6 and 7 appear to be slightly more gradual than \(-1\)).

To confirm that the above observation represents a general principle across different corpora, we also conduct experiments on English Wikipedia. We use WikiExtractor to extract texts from a 2015-12-01 dump, and Stanford CoreNLP for sentence splitting. The corpus has 1300M word tokens (about 13 times the size of the BNC), and we use words in their surface forms instead of lemmas. We extract words and unordered bigrams occurring more than 500 times, resulting in about 85 K words and 264 K bigrams. Then, we additionally make two smaller corpora by uniformly sampling 10% and 1% of the sentences in Wikipedia. For each corpus, we construct natural phrase vectors and calculate the standard deviation of their norms as before. The results are shown in Fig. 9. Again, we find that when one sets \(F(p):=\ln p\), the standard deviation is around 0.1, whereas with \(F(p):=p\) it is above 0.5. As the corpus grows, the standard deviation slightly decreases; at Wikipedia’s full size, the standard deviation for \(F(p):=\ln p\) descends below 0.095.

Next, we investigate how F affects the Euclidean distance \(\mathscr {B}^{\{st\}}_n\). In Fig. 10, we plot

$$\begin{aligned} \mathscr {B}^{\{st\}}_n \text { on }y\text {-axis,} \quad \text { against }\quad \sqrt{\frac{1}{2}(\pi _{s/t\backslash s}^2+\pi _{t/s\backslash t}^2+\pi _{s/t\backslash s}\pi _{t/s\backslash t})} \text { on } x\text {-axis}, \end{aligned}$$

for every unordered bigram \(\{st\}\). We tried four different choices of function F, as indicated above the graphs. For the choices (a) \(F(p):=\ln {p}\) and (b) \(F(p):=\sqrt{p}\), we verify the upper bound \(y\le x\) as suggested by Claim 1. In contrast, the approximation errors seem no longer bounded when (c) \(F(p):=p\) or (d) \(F(p):=p\ln {p}\).

In Sect. 6 we will extrinsically evaluate the additive compositionality of vector representations, and we will find F to be a crucial factor there; while \(F(p):=\ln {p}\) and \(F(p):=\sqrt{p}\) perform similarly well, \(F(p):=p\) and \(F(p):=p\ln {p}\) do much worse. This suggests that our bias bound indeed has the power to predict additive compositionality, demonstrating the usefulness of our theory. In contrast, the average level of approximation errors for observed bigrams (shown as green dashed lines in Fig. 10) seems less predictive, as the poor choices \(F(p):=p\) and \(F(p):=p\ln {p}\) actually have lower average error levels. This emphasizes a particular caveat: choosing composition operations by minimizing the observed average error may not always be justifiable. If we treated the function F as a parameter of additive composition and chose the one with the lowest observed average error, we would end up with the worst setting \(F(p):=p\). This shows how important a learning theory is for composition research.

Fig. 10 Approximation errors for unordered bigrams observed in BNC. The choice of F is shown above each graph. The theoretical upper bound \(y\le x\) is drawn as red solid lines in (a) and (b). The average error levels are drawn as green dashed lines (Color figure online)

5.4 Handling word order in additive composition

For vector representations constructed from the Near–far Contexts (Sect. 3.2), we have a similar bias bound given by Claim 3. In this section, we experimentally verify the bound and qualitatively show that the additive composition of Near–far Context vectors can be used for assessing semantic similarities between ordered bigrams.

Fig. 11 Near–far context, \(F(p):=\ln {p}\)

In Figs. 11 and 12, we plot

$$\begin{aligned} \quad \text {(a) } \mathscr {B}^{st}_n \quad \text {and}\quad \text {(b) } ||\mathbf {w}^{ts}_n-\frac{1}{2}(\mathbf {w}^{s\bullet }_n + \mathbf {w}^{\bullet t}_n) ||\quad \text {on }y\text {-axis}, \\ \text {against}\quad \sqrt{\frac{1}{2}(\pi _{{s\bullet }\backslash t}^2+\pi _{s/{\bullet t}}^2+\pi _{{s\bullet }\backslash t}\pi _{s/{\bullet t}})} \text { on }x\text {-axis}, \end{aligned}$$

for every ordered bigram st. We tried two settings of F, namely \(F(p):=\ln {p}\) (Fig. 11) and \(F(p):=\sqrt{p}\) (Fig. 12). In both cases, the approximation errors in (a) are bounded by \(y\le x\) (red solid lines) as suggested by Claim 3. In contrast, the approximation errors for order-reversed bigrams exceed this bound, showing that the additive composition of Near–far Context vectors actually recognizes word order.

In Table 5, we show the 8 nearest word pairs for each of 8 ordered bigrams, measured by cosine similarities between additive compositions of Near–far Context vectors. More precisely, for word pairs “\(s_1\) \(t_1\)” and “\(s_2\) \(t_2\)”, we calculate the cosine similarity between \(\frac{1}{2}(\mathbf {v}^{s_1\bullet } + \mathbf {v}^{\bullet t_1})\) and \(\frac{1}{2}(\mathbf {v}^{s_2\bullet } + \mathbf {v}^{\bullet t_2})\), where \(\mathbf {v}^{s\bullet }\) and \(\mathbf {v}^{\bullet t}\) are normalized 200-dimensional SVD reductions of \(\mathbf {w}^{s\bullet }_n\) and \(\mathbf {w}^{\bullet t}_n\), respectively, with \(F(p):=\sqrt{p}\). The table shows that additive composition of Near–far Context vectors can indeed represent meanings of ordered bigrams; for example, “pose problem” is near to “arise dilemma” but not to “dilemma arise”, and “problem pose” is near to “difficulty cause” but not to “cause difficulty”. It is also noteworthy that “not enough” is similar to “always want”, showing some degree of semantic compositionality beyond the word level. We believe this ability to compute meanings of arbitrary ordered bigrams is already highly useful, because only a few bigrams can be directly observed in real corpora.

Fig. 12 Near–far context, \(F(p):=\sqrt{p}\)

Table 5 Top 8 similar word pairs, assessed by cosine similarities between additive compositions of Near–far Context vectors

5.5 Dimension reduction

In this section, we verify our prediction in Sect. 3.3 that vectors trained by SVD preserve our bias bound more faithfully than GloVe and SGNS. In Fig. 13, we use normalized word vectors \(\mathbf {v}^t\) that are constructed from the distributional vectors \(\mathbf {w}^t_n\) by reducing to 200 dimensions using different reduction methods. We use SVD in (a) and (b), with \(F(p):=\ln {p}\) in (a) and \(F(p):=\sqrt{p}\) in (b). The GloVe model is shown in (c) and SGNS in (d), both of them using \(F(p):=\ln {p}\). For each unordered bigram \(\{st\}\) we plot

$$\begin{aligned} ||\mathbf {v}^{\{st\}}-\frac{1}{2}(\mathbf {v}^{s}+\mathbf {v}^{t}) ||\text { on }y\text {-axis,} \quad \text { against }\quad \sqrt{\frac{1}{2}(\pi _{s/t\backslash s}^2+\pi _{t/s\backslash t}^2+\pi _{s/t\backslash s}\pi _{t/s\backslash t})} \text { on }x\text {-axis}. \end{aligned}$$

The graphs show that vectors trained by SVD still largely conform to our bias bound \(y\le x\) (red solid lines), but vectors trained by GloVe or SGNS no longer do. Our extrinsic evaluations in Sect. 6 also show that SVD might perform better than GloVe and SGNS.

Fig. 13 Approximation errors for different dimension reduction methods

6 Extrinsic evaluation of additive compositionality

In this section, we test additive composition on human-annotated data sets to see whether our theoretical predictions correlate with human judgments. We conduct a phrase similarity task and a word analogy task.

6.1 Phrase similarity

In a data set created by Mitchell and Lapata (2010), phrase pairs are annotated with similarity scores. Each instance in the data is a (phrase1, phrase2, similarity) triplet, and each phrase consists of two words. The similarity score is annotated by humans, ranging from 1 to 7, and indicates how similar the meanings of the two phrases are. For example, one annotator assessed the similarity between “vast amount” and “large quantity” as 7 (the highest), and the similarity between “hear word” and “remember name” as 1 (the lowest). Phrases are divided into three categories: Verb-Object, Compound Noun, and Adjective-Noun. Each category has 108 phrase pairs, each annotated by 18 human participants (i.e., 1,944 instances per category). Using this data set, we can compare the human ranking of phrase similarities with the ranking calculated from cosine similarities between vector-based compositions. We use Spearman’s \(\rho \) to measure how correlated the two rankings are.
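
A minimal sketch of this evaluation protocol, assuming the word vectors from the previous sections and a hypothetical list of data instances, might look as follows (using SciPy's Spearman correlation).

```python
import numpy as np
from scipy.stats import spearmanr

def evaluate_phrase_similarity(instances, vectors):
    """instances : list of ((s1, t1), (s2, t2), human_score) triplets
    vectors   : dict mapping a word to its normalized reduced vector (assumed precomputed)"""
    human, model = [], []
    for (s1, t1), (s2, t2), score in instances:
        c1 = 0.5 * (vectors[s1] + vectors[t1])    # additive composition of phrase 1
        c2 = 0.5 * (vectors[s2] + vectors[t2])    # additive composition of phrase 2
        sim = np.dot(c1, c2) / (np.linalg.norm(c1) * np.linalg.norm(c2))
        human.append(score)
        model.append(sim)
    rho, _ = spearmanr(human, model)              # rank correlation with human scores
    return rho
```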

Vector representations are constructed from BNC, with the same settings described in Sect. 5. We plot in Fig. 14 the distributions of how many times the phrases in the data set occur as bigrams in BNC. The figure indicates that a large portion of the phrases are rare or unseen as bigrams, so their meanings cannot be directly assessed as natural vectors from the corpus. Therefore, the data is suitable for testing compositions of word vectors.

We reduce the high-dimensional distributional representations to 200 dimensions and normalize the resulting vectors. The dimension 200 is selected by inspecting the top 800 singular values calculated by SVD. As illustrated in Fig. 15, the decrease of singular values flattens to a roughly constant rate at a rank of about 200, suggesting that the most characteristic features of the vector representations are captured by the first 200 dimensions. In preliminary experiments, we confirmed that 200 dimensions perform better than 100 dimensions, 500 dimensions, or no dimension reduction.
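
The inspection of the singular value decay can be reproduced with a short routine like the following sketch, where the sparse matrix M holding the transformed co-occurrence statistics \(F(p)\) is an assumed input.

```python
import numpy as np
from scipy.sparse.linalg import svds

def singular_value_decay(M, k=800):
    """Top-k singular values of the distributional matrix M, in decreasing order,
    together with their rank-to-rank decrease (which flattens near rank 200 in Fig. 15)."""
    _, s, _ = svds(M, k=k)      # k largest singular values (not necessarily sorted)
    s = np.sort(s)[::-1]
    return s, -np.diff(s)
```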

Fig. 14 Distributions of how many times the phrases in the data set occur as bigrams in BNC. The y-axis shows percentage and the x-axis shows frequency range

Fig. 15 Top 800 singular values calculated by SVD. The y-axis shows singular value and the x-axis shows rank. Different y-scales are used for different settings

For training word embeddings, we use the random projection algorithm (Halko et al. 2011) for SVD, and Stochastic Gradient Descent (SGD) (Bottou 2012) for SGNS and GloVe. Since these are randomized algorithms, we run each test 20 times and report the mean performance with standard deviation. We tune SGD learning rates by checking convergence of the objectives, and obtain slightly better results than with the default training parameters set in the SGNS and GloVe software.
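
As an illustration of the SVD step, a randomized SVD in the spirit of Halko et al. (2011) is available off the shelf in scikit-learn; the following sketch reduces a distributional matrix M (targets × contexts, an assumed input) to 200 dimensions and normalizes each row. How the singular values are folded into the word vectors is a design choice; we show one common option.

```python
import numpy as np
from sklearn.utils.extmath import randomized_svd

def reduce_and_normalize(M, dim=200, seed=0):
    """200-dimensional reduction of the distributional matrix M via randomized SVD."""
    U, S, _ = randomized_svd(M, n_components=dim, random_state=seed)
    V = U * S                                        # scale left singular vectors (one common choice)
    V /= np.linalg.norm(V, axis=1, keepdims=True)    # normalize each word vector
    return V
```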

As pointed out by Levy et al. (2015), there are other detailed settings that can vary between SGNS and GloVe. We keep these settings as close as possible so that the two methods are comparable, emphasizing the difference in their loss functions. More precisely, we use no subsampling and set the number of negative samples to 2 in SGNS, and use the default loss function in GloVe with the cutoff threshold set to 10. In addition, the default implementations of both SGNS and GloVe weigh context words by a function of their distance to the target, which we disable (i.e., equal weights are used for all context words) to make the settings compatible with our problem setting.

Table 6 Spearman’s \(\rho \) in the phrase similarity task

The test results are shown in Table 6. We compare different settings of the function F, Ordinary and Near–far Contexts, and different dimension reductions. When using ordinary contexts and SVD reduction, we find that the functions ln (\(F(p):=\ln {p}\)) and sqrt (\(F(p):=\sqrt{p}\)) perform similarly well, whereas id (\(F(p):=p\)) and xlnx (\(F(p):=p\ln {p}\)) are much worse, confirming our predictions in Sect. 3.1. As for Near–far Context vectors (Sect. 3.2), we find that the Nearfar-sqrt-SVD setting performs well, demonstrating the improvement that Near–far Contexts bring to additive composition. However, we note that Nearfar-ln-SVD is worse. One reason could be that the function ln emphasizes lower co-occurrence probabilities, which, combined with Near–far labels, could make the vectors more prone to data sparseness; or, relatedly, some important syntactic markers might be obscured because they occur with high frequency. Finally, we note that SVD is consistently good and usually better than GloVe and SGNS, which supports our arguments in Sect. 3.3.

We report some additional test results for reference. In Table 6, the “Tensor Product” row shows the results of composing Ordinary-ln-SVD vectors by tensor product instead of average, which means that the similarity between two phrases “\(s_1\) \(t_1\)” and “\(s_2\) \(t_2\)” is assessed by the product of the word-level cosine similarities \(\cos (s_1,s_2)\cdot \cos (t_1,t_2)\). The numbers are worse than for additive composition, suggesting that a similar phrase might be something more than a sequence of individually similar words. In the “Upper Bound” row, we show the best possible Spearman’s \(\rho \) for this task; these values are less than 1 because there are disagreements between human annotators. Compared to these numbers, the performance of additive composition on compound nouns is remarkably high. Furthermore, in “Muraoka et al.” we cite the best results reported in Muraoka et al. (2014), which tested several compositional frameworks. In “Deep Neural”, we also test additive composition of word vectors trained by deep neural networks (normalized 200-dimensional vectors trained by Turian et al. 2010, using the model of Collobert et al. 2011). These results cannot be compared directly because the vector representations are constructed from different corpora; but we can fairly say that additive composition remains a powerful method for assessing phrase similarity, and that linear dimension reductions might be more suitable than deep neural networks for training additively compositional word vectors. Therefore, our theory of additive composition concerns a method that is close to the state of the art.
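
For clarity, the “Tensor Product” baseline reduces to a product of word-level cosine similarities (the cosine of two tensor products factorizes in this way); a minimal sketch, with a hypothetical dictionary of word vectors:

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def tensor_product_similarity(s1, t1, s2, t2, vectors):
    """Similarity of phrases "s1 t1" and "s2 t2" under the tensor-product baseline:
    cos(s1, s2) * cos(t1, t2), using the same word vectors as before."""
    return cosine(vectors[s1], vectors[s2]) * cosine(vectors[t1], vectors[t2])
```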

6.2 Word analogy

Word analogy is the task of solving questions of the form “a is to b as c is to __?”, and an elegant approach proposed by Mikolov et al. (2013b) is to find the word vector most similar to \(\mathbf {v}^b - \mathbf {v}^a + \mathbf {v}^c\). For example, in order to answer the question “man is to king as woman is to __?”, one calculates \(\mathbf {v}^{\text {king}} - \mathbf {v}^{\text {man}} + \mathbf {v}^{\text {woman}}\) and finds its most similar word vector, which will probably turn out to be \(\mathbf {v}^{\text {queen}}\), indicating the correct answer queen.
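
A minimal sketch of this procedure (hypothetical names; vectors is a matrix of L2-normalized word vectors and vocab maps words to row indices):

```python
import numpy as np

def solve_analogy(a, b, c, vocab, vectors):
    """Return the word d whose vector is most similar to v^b - v^a + v^c."""
    query = vectors[vocab[b]] - vectors[vocab[a]] + vectors[vocab[c]]
    query /= np.linalg.norm(query)
    scores = vectors @ query            # cosine similarities, since all rows are normalized
    for w in (a, b, c):                 # exclude the question words, as is common in this evaluation
        scores[vocab[w]] = -np.inf
    inv = {i: w for w, i in vocab.items()}
    return inv[int(np.argmax(scores))]
```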

As pointed out by Levy and Goldberg (2014a), the key to solving analogy questions is the ability to “add” (resp. “subtract”) some aspects to (resp. from) a concept. For example, king is a concept of human that has the aspects of being royal and male. If we can “subtract” the aspect male from king and “add” the aspect female to it, then we will probably get the concept queen. Thus, the vector-based solution proposed by Mikolov et al. (2013b) essentially assumes that “adding” and “subtracting” aspects can be realized by adding and subtracting word vectors. Why is this assumption reasonable?

We believe this assumption is closely related to additive compositionality: if an aspect is represented by an adjective (e.g. male) and a concept is represented by a noun (e.g. human), we can usually “add” the aspect to the concept by simply arranging the adjective and the noun into a phrase (e.g. male human). Then, since the meaning of the phrase can be calculated by additive composition (e.g. \(\mathbf {v}^{\text {male}}+\mathbf {v}^{\text {human}}\)), we have indeed realized the “addition” of aspects by addition of word vectors. Specifically, since \(\textit{man}\approx \textit{male human}\), \(\textit{king}\approx \textit{royal male human}\), \(\textit{woman}\approx \textit{female human}\) and \(\textit{queen}\approx \textit{royal female human}\), we expect the following by additive composition of phrases.

$$\begin{aligned} \begin{aligned} \mathbf {v}^{\text {man}}&\approx \mathbf {v}^{\text {male}} + \mathbf {v}^{\text {human}} \\ \mathbf {v}^{\text {king}}&\approx \mathbf {v}^{\text {royal}} + \mathbf {v}^{\text {male}} + \mathbf {v}^{\text {human}} \\ \mathbf {v}^{\text {woman}}&\approx \mathbf {v}^{\text {female}} + \mathbf {v}^{\text {human}} \\ \mathbf {v}^{\text {queen}}&\approx \mathbf {v}^{\text {royal}} + \mathbf {v}^{\text {female}} + \mathbf {v}^{\text {human}} \\ \end{aligned} \end{aligned}$$

Here, “\(\approx \)” denotes proximity between vectors in the sense of cosine similarity. From these approximate equations, it follows that \( \mathbf {v}^{\text {king}} - \mathbf {v}^{\text {man}} + \mathbf {v}^{\text {woman}} \approx \mathbf {v}^{\text {royal}} + \mathbf {v}^{\text {female}} + \mathbf {v}^{\text {human}} \approx \mathbf {v}^{\text {queen}} \), which solves the analogy question.

Therefore, we expect the word analogy task to serve as an extrinsic evaluation of additive compositionality as well. For this reason, we conduct the word analogy task on the standard Msr (Mikolov et al. 2013b) and Google (Mikolov et al. 2013a) data sets. Each instance in the data is a 4-tuple of words standing in the relation “a is to b as c is to d”, and the task is to find d given a, b and c. We train word vectors with the same settings described in Sect. 5, but using surface forms instead of lemmatized words in BNC. Tuples containing out-of-vocabulary words are removed from the data, which leaves 4382 tuples in Msr and 8906 in Google.
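
Removing the out-of-vocabulary tuples is a straightforward filtering step; a small sketch, assuming the vocabulary is a set of surface forms:

```python
def filter_tuples(tuples, vocab):
    """Keep only analogy 4-tuples (a, b, c, d) whose words are all in the vocabulary."""
    return [t for t in tuples if all(w in vocab for w in t)]
```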

Table 7 Accuracy (%) in the word analogy task

The test results are shown in Table 7. Again, we find that ln and sqrt perform similarly well but id and xlnx are worse, confirming that the choice of the function F can drastically affect performance on the word analogy task as well, which we believe is related to additive compositionality. In addition, we confirm that SVD can perform better than SGNS and GloVe, which gives more support to our conjecture that vectors trained by SVD might be more compatible with additive composition.

7 Conclusion

In this article, we have developed a theory of additive composition regarding its bias. The theory explains why and how additive composition works, and makes useful suggestions for improving additive compositionality, including the choice of the transformation function, the treatment of word order, and the choice of dimension reduction method. Predictions made by our theory have been verified experimentally and shown positive correlations with human judgments. In short, we have revealed the mechanism of additive composition.

However, we note that our theory is not a “proof” that additive composition is a “good” compositional framework. As is usual for generalization error bounds in machine learning theory, our bound on the bias does not show whether additive composition is “good”; rather, it specifies some factors that can affect the error. If generalization error bounds are obtained for other composition operations, a comparison between such bounds can bring useful insights into the choice of compositional frameworks in specific cases. We expect our bias bound to inspire more results in the research of semantic composition.

Moreover, we believe this line of theoretical research can be pursued further. In computational linguistics, the idea of treating semantics and semantic relations by algebraic operations on distributional context vectors is relatively new (Clarke 2012). Therefore, the relation between linguistic theories and our approximation theory of semantic composition is left largely unexplored. For example, the intuitive distinction between compositional (e.g. high price) and non-compositional (e.g. white lie) phrases is currently ignored in our theory. Our bias bound treats both cases by a single collocation measure. Can one improve the bound by taking account of this distinction, and/or other kinds of linguistic knowledge? This is an intriguing question for future work.