## Abstract

Additive composition (Foltz et al. in Discourse Process 15:285–307, 1998; Landauer and Dumais in Psychol Rev 104(2):211, 1997; Mitchell and Lapata in Cognit Sci 34(8):1388–1429, 2010) is a widely used method for computing meanings of phrases, which takes the average of vector representations of the constituent words. In this article, we prove an upper bound for the bias of additive composition, which is the first theoretical analysis of compositional frameworks from a machine learning point of view. The bound is written in terms of collocation strength; we prove that the more exclusively two successive words tend to occur together, the more accurately one can guarantee their additive composition as an approximation to the natural phrase vector. Our proof relies on properties of natural language data that are empirically verified, and can be theoretically derived from an assumption that the data is generated from a Hierarchical Pitman–Yor Process. The theory endorses additive composition as a reasonable operation for calculating meanings of phrases, and suggests ways to improve additive compositionality, including: transforming entries of distributional word vectors by a function that meets a specific condition, constructing a novel type of vector representations to make additive composition sensitive to word order, and utilizing singular value decomposition to train word vectors.

## Keywords

Compositional distributional semantics · Bias and variance · Approximation error bounds · Natural language data · Hierarchical Pitman–Yor process

## 1 Introduction

The decomposition of generalization errors into bias and variance (Geman et al. 1992) is one of the most profound insights of learning theory. Bias is caused by low capacity of models when the training samples are assumed to be infinite, whereas variance is caused by overfitting to finite samples. In this article, we apply the analysis to a new set of problems in Compositional Distributional Semantics, which studies the calculation of meanings of natural language phrases by vector representations of their constituent words. We prove an upper bound for the bias of a widely used compositional framework, the additive composition (Foltz et al. 1998; Landauer and Dumais 1997; Mitchell and Lapata 2010).

Calculations of meanings are fundamental problems in Natural Language Processing (NLP). In recent years, vector representations have seen great success at conveying meanings of individual words (Levy et al. 2015). These vectors are constructed from statistics of contexts surrounding the words, based on the Distributional Hypothesis that words occurring in similar contexts tend to have similar meanings (Harris 1954). For example, given a target word *t*, one can consider its context as close neighbors of *t* in a corpus, and assess the probability \(p^t_{i}\) of the *i*-th word (in a fixed lexicon) occurring in the context of *t*. Then, the word *t* is represented by a vector \(\bigl (F(p^t_{i})\bigr )_i\) (where *F* is some function), and words with similar meanings to *t* will have similar vectors (Miller and Charles 1991).
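As an illustration, the word-vector construction above can be sketched in a few lines of Python; the toy corpus, the window size, and the choice \(F(x)=\sqrt{x}\) below are hypothetical, for demonstration only.

```python
from collections import Counter

def context_probs(tokens, target, window=2):
    """Estimate p^t_i: the probability of each word occurring
    within `window` tokens of an occurrence of `target`."""
    counts = Counter()
    for k, tok in enumerate(tokens):
        if tok != target:
            continue
        for j in range(max(0, k - window), min(len(tokens), k + window + 1)):
            if j != k:
                counts[tokens[j]] += 1
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# Hypothetical toy corpus: "tax" and "levy" occur in similar contexts,
# so their context distributions (and hence their vectors) are similar.
corpus = "the tax rate rose , the levy rate rose , the tax rate fell".split()
p_tax = context_probs(corpus, "tax")
vec_tax = {w: p ** 0.5 for w, p in p_tax.items()}  # entries F(p) with F = sqrt
```

Words with similar context distributions then receive nearby vectors, matching the Distributional Hypothesis stated above.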

Beyond the word level, a naturally following challenge is to represent meanings of phrases or even sentences. Based on the Distributional Hypothesis, it is generally believed that vectors should be constructed from surrounding contexts, at least for phrases observed in a corpus (Boleda et al. 2013). However, a main obstacle here is that phrases are far more sparse than individual words. For example, in the British National Corpus (BNC) (The BNC Consortium 2007), which consists of 100 M word tokens, a total of 16 K lemmatized words are observed more than 200 times, but there are only 46 K such bigrams, far fewer than the \(16,000^2\) possibilities for two-word combinations. Even in a larger corpus, one would mostly observe more rare words due to Zipf’s Law, so most of the two-word combinations will always be rare or unseen. Therefore, a direct estimation of the surrounding contexts of a phrase can have a large sampling error. This partially fuels the motivation to construct phrase vectors by combining word vectors (Mitchell and Lapata 2010), which is also based on the linguistic intuition that meanings of phrases are “composed” from meanings of their constituent words. In view of machine learning, word vectors have smaller sampling errors, or lower variance, since words are more abundant than phrases. Then, a compositional framework which calculates meanings from word vectors will be favorable if its bias is also small.

Here, “bias” is the distance between two types of phrase vectors: one calculated by composing the vectors of constituent words (the composed vector), and the other assessed from context statistics where the phrase is treated as a target (the natural vector). The statistics are assessed from an infinitely large *ideal corpus*, so that the natural vector of the phrase can be reliably estimated without sampling error, hence conveying the meaning of the phrase by the Distributional Hypothesis. If the distance between the two vectors is small, the composed vector can be viewed as a reasonable approximation of the natural vector, hence an approximation of meaning; moreover, the composed vector can be more reliably estimated from finite *real corpora* because words are more abundant than phrases. Therefore, an upper bound for the bias will provide learning-theoretic support for the composition operation.

A number of compositional frameworks have been proposed in the literature (Baroni and Zamparelli 2010; Grefenstette and Sadrzadeh 2011; Socher et al. 2012; Paperno et al. 2014; Hashimoto et al. 2014). Some are complicated methods based on linguistic intuitions (Coecke et al. 2010), and others are compared to human judgments for evaluation (Mitchell and Lapata 2010). However, none of them has been previously analyzed regarding its bias.^{1} The most widely used framework is additive composition (Foltz et al. 1998; Landauer and Dumais 1997), in which the composed vector is calculated by averaging word vectors. Yet, it was unknown whether this average is in any way related to the statistics of contexts surrounding the corresponding phrases.

In this article, we answer this question. Denote by \(p^{\varUpsilon }_{i}\) the probability of the *i*-th word in a fixed lexicon occurring within a context window of a target (i.e. a word or phrase) \({\varUpsilon }\), and define the *i*-th entry of the *natural vector* \(\mathbf {w}^{\varUpsilon }:=\bigl (c\cdot w^{\varUpsilon }_{i}\bigr )_{1\le i\le n}\) by \(w^{\varUpsilon }_{i}:=F(p^{\varUpsilon }_{i})-a^{\varUpsilon }-b_{i}\), where *n* is the lexicon size, \(a^{\varUpsilon }\), \(b_{i}\) and *c* are real numbers, and *F* is a function. We note that this formalization is general enough to be compatible with several previous studies.

In Sect. 2.2, we describe our bias bound for additive composition, sketch its proof, and emphasize its practical consequences that can be tested on a natural language corpus. Briefly, we show that the more exclusively two successive words tend to occur together, the more accurately one can guarantee their additive composition as an approximation to the natural phrase vector; but this guarantee comes with the condition that *F* should be a function that decreases steeply around 0 and grows slowly at \(\infty \); and when this condition is satisfied, one can derive an additional property that all natural vectors have approximately the same norm. These consequences are all experimentally verified in Sect. 5.3.

In Sect. 2.3, we give a formalized version of the bias bound (Theorem 1), with our assumptions on natural language data clarified. These assumptions include the well-known Zipf’s Law, a similar law applied to word co-occurrences which we call *the Generalized Zipf’s Law*, and some intuitively acceptable conditions. The assumptions are experimentally tested in Sects. 5.1 and 5.2. Moreover, we show that the Generalized Zipf’s Law can be derived from a widely used generative model for natural language (Sect. 2.6).

In Sect. 2.4, we prove some key lemmas regarding the aforementioned condition on function *F*; in Sect. 2.5 we formally prove the bias bound (with some supporting lemmas proven in “Appendix 1”), and further give an intuitive explanation for the strength of additive composition: namely, with two words given, the vector of each can be decomposed into two parts, one encoding the contexts shared by both words, and the other encoding contexts not shared; when the two word vectors are added up, the non-shared parts tend to cancel out, because they have nearly independent distributions; as a result, the shared part gets reinforced, which is coincidentally encoded by the natural phrase vector.

1. The condition required to be satisfied by *F* provides a unified explanation on why some recently proposed word vectors are good at additive composition (Sect. 3.1). Our experiments also verify that the condition drastically affects additive compositionality and other properties of vector representations (Sects. 5.3, 6).
2. Our intuitive explanation inspires a novel method for making vectors recognize word order, which was long thought to be an issue for additive composition. Briefly speaking, since additive composition cancels out non-shared parts of word vectors and reinforces the shared part, we show that one can use labels on context words to control what is shared. On this basis, we propose the *Near–far Context*, in which the contexts of *ordered* bigrams are shared (Sect. 3.2). Our experiments show that the resulting vectors can indeed assess meaning similarities between ordered bigrams (Sect. 5.4), and demonstrate strong performance on phrase similarity tasks (Sect. 6.1). Unlike previous approaches, Near–far Context still composes vectors by taking average, retaining the merits of being parameter-free and having a bias bound.
3. Our theory suggests that singular value decomposition (SVD) is suitable for preserving additive compositionality in dimension reduction of word vectors (Sect. 3.3). Experiments also show that SVD might perform better than other models in additive composition (Sects. 5.5, 6).
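A minimal sketch of this use of SVD, assuming `numpy` is available; the word-context matrix here is random and purely illustrative. The key point is that truncated SVD maps each row linearly, so averaging reduced vectors equals reducing the averaged rows.

```python
import numpy as np

# Hypothetical word-by-context matrix of transformed co-occurrence
# statistics (rows = words); values are random, purely illustrative.
rng = np.random.default_rng(0)
M = rng.random((6, 8))

# Truncated SVD: keep the top-k singular directions; the rows of
# U_k @ diag(S_k) serve as k-dimensional word vectors.
U, S, Vt = np.linalg.svd(M, full_matrices=False)
k = 3
word_vecs = U[:, :k] * S[:k]        # shape (6, 3)

# Additive composition is still plain averaging in the reduced space,
# because the reduction acts linearly on rows.
composed = 0.5 * (word_vecs[0] + word_vecs[1])
```

This is only a sketch of the dimension-reduction step; the actual training setup is discussed in Sect. 3.3.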

## 2 Theory

In this section, we discuss vector representations constructed from an ideal natural language corpus, and establish a mathematical framework for analyzing additive composition. Our analysis makes several assumptions on the ideal corpus, which might be approximations or oversimplifications of real data. In Sect. 5, we will test these assumptions on a real corpus and verify that the theory still makes reasonable predictions.

### 2.1 Notation and vector representation

A natural language corpus is a sequence of words. Ideally, we assume that the sequence is infinitely long and contains an infinite number of distinct words.

### Notation 1

We consider a finite sample of the infinite ideal corpus. In this sample, we denote the number of distinct words by *n*, and use the *n* words as a lexicon to construct vector representations. From the sample, we assess the count \(C_i\) of the *i*-th word in the lexicon, and assume that index \(1\le i\le n\) is taken such that \(C_{i}\ge C_{i+1}\). Let \(C:=\sum _{i=1}^{n}C_{i}\) be the total count, and denote \(p_{i,n}:=C_i/C\).
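Notation 1 can be mirrored directly in code; the toy token list below is hypothetical.

```python
from collections import Counter

def unigram_stats(tokens):
    """Counts C_i sorted so that C_i >= C_{i+1}, and p_{i,n} = C_i / C."""
    counts = sorted(Counter(tokens).values(), reverse=True)
    total = sum(counts)
    return counts, [c / total for c in counts]

tokens = "a a a b b c".split()
C_i, p = unigram_stats(tokens)
# C_i == [3, 2, 1]; the index ordering satisfies C_i >= C_{i+1}.
```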

**Table 1** Contexts are taken as the closest five words to each side for the targets “*tax*” and “*rate*”, and four for the target “*tax_rate*” (columns: target, words in context)

**Table 2** List of target types

| Defined in | Symbol | Meaning |
|---|---|---|
| Notation 2 | \({\varUpsilon }\) | a general target; can denote any of the following |
| Notation 2 | \(s, t\) | word targets |
| Notation 2 | \(st\) | two-word phrase target |
| Notation 2 | \(\{st\}\) | two-word phrase target with word order ignored |
| Definition 6 | \(s/t\backslash s\) | a token of word *t* not occurring next to word *s* |
| Definition 15 | \(s\bullet ,\bullet t\) | words in the context of \(s\bullet \) (resp. \(\bullet t\)) are assigned the left-hand-side (resp. right-hand-side) Near–far labels |
| Definition 16 | \({s\bullet }\backslash t, s/{\bullet t}\) | a target \(s\bullet \) (resp. \(\bullet t\)) not at the left (resp. right) of word *t* (resp. *s*) |
| Theorem 1 | \(S, T\) | random word targets |
| General | | random word targets can form different types such as \(\{ST\}\) and \(S/T\backslash S\) |

With a sample corpus given, we can construct vector representations for *targets*, which are either words or phrases. To define the vectors one starts from specifying a *context* for each target, which is usually taken as words surrounding the target in corpus. As an example, Table 1 shows a word sequence, a phrase target and two word targets; contexts are taken as the closest four or five words to the targets.

### Notation 2

We use *s*, *t* to denote word targets, and *st* a phrase target consisting of two consecutive words *s* and *t*. When the word order is ignored (i.e., either *st* or *ts*), we denote the target by \(\{st\}\). A general target is denoted by \({\varUpsilon }\). Later in this article, we will consider other types of targets as well, and a full list of target types is shown in Table 2.

### Notation 3

Let \(C({\varUpsilon })\) be the count of target \({\varUpsilon }\), and \(C_i^{\varUpsilon }\) the count of *i*-th word co-occurring in the context of \({\varUpsilon }\). Denote \(p^{\varUpsilon }_{i,n}:=C_i^{\varUpsilon }/C({\varUpsilon })\).

In order to approximate the ideal corpus, we will take a sample larger and larger, then consider the limit. Under this limit, it is obvious that \(n\rightarrow \infty \) and \(C\rightarrow \infty \). Further, we will assume some limit properties on \(p_{i,n}\) and \(p^{\varUpsilon }_{i,n}\) as specified in Sect. 2.3. These properties capture our idealization of an infinitely large natural language corpus. In Sect. 2.6, we will show that such properties can be derived from a Hierarchical Pitman–Yor Process, a widely used generative model for natural language data.

### Definition 4

We define the *natural vector* for \({\varUpsilon }\) from the statistics \(p^{\varUpsilon }_{i,n}\) as follows:

\[
\mathbf {w}^{\varUpsilon }_{n}:=\bigl (c_n\cdot w^{\varUpsilon }_{i,n}\bigr )_{1\le i\le n},\quad \text {where}\quad w^{\varUpsilon }_{i,n}:=F(p^{\varUpsilon }_{i,n}+1/n)-a^{\varUpsilon }_n-b_{i,n},
\]

and *F* is a smooth function on \((0,\infty )\). The subscript *n* emphasizes that the vector will change if *n* becomes larger (i.e. a larger sample corpus is taken). The scalar \(c_n\) is for normalizing scales of vectors. In Sect. 2.2, we will further specify some conditions on \(a^{\varUpsilon }_n\), \(b_{i,n}\), \(c_n\) and *F*, but without much loss of generality.

**Table 3** Frequently used notations and general conventions in this article

| Defined in | Notation | Description |
|---|---|---|
| Notation 1 | \(C_i\) | count of the *i*-th word, with index \(1\le i \le n\) taken such that \(C_{i}\ge C_{i+1}\) |
| Notation 1 | \(p_{i,n}\) | empirical probability of the *i*-th word |
| Notation 3 | \(p^{\varUpsilon }_{i,n}\) | probability of the *i*-th word occurring in the context of target \({\varUpsilon }\) |
| Definition 4 | \(\mathbf {w}^{\varUpsilon }_{n}:=\bigl (c_n\cdot w^{\varUpsilon }_{i,n}\bigr )_{1\le i\le n}, \quad \text {where}\quad w^{\varUpsilon }_{i,n}=F(p^{\varUpsilon }_{i,n}+1/n)-a^{\varUpsilon }_n-b_{i,n}\) | |
| Definition 5 | \(\mathscr {B}^{\{st\}}_{n}:=\bigl \Vert \mathbf {w}^{\{st\}}_{n}-\frac{1}{2}(\mathbf {w}^s_{n}+\mathbf {w}^t_{n})\bigr \Vert \) | |
| Definition 6 | \(\pi _{s/t\backslash s}\) | probability for an occurrence of word *t* not to lie next to word *s* |
| Definition 7 | \({\varLambda }_n\) | set of observed two-word phrases, word order ignored |
| General | \(\mathbb {E}[\cdot ],{{\mathrm{Var}}}[\cdot ]\) | expected value and variance of a random variable |
| General | \(I_{\mathscr {H}}\) | indicator; \(I_{\mathscr {H}}=1\) if condition \(\mathscr {H}\) is true, 0 otherwise |
| General | \(\mathbb {P}(\mathscr {H})\) | probability of \(\mathscr {H}\) being true; \(\mathbb {P}(\mathscr {H})=\mathbb {E}[I_{\mathscr {H}}]\) |
| General | \(\lambda ,\beta ,\xi ,\ldots \) | lowercase Greek letters denote real constants |
| Theorem 1 | \(X:=p^{\varUpsilon }_{i,n}/p_{i,n}\), where \({\varUpsilon }:=\) \(\{ST\}\), \(S/T\backslash S\), or \(T/S\backslash T\) | |
| Lemma 1 | \(Y_{i,n}:=F(p_{i,n}X+1/n)-F(p_{i,n}\beta +1/n)\) | |
| Lemma 1 | \(\varphi _{i,n}:=(p_{i,n})^{2\lambda }\bigl (1+(\beta np_{i,n})^{-1}\bigr )^{-1+2\lambda }\) | |

Considering \(F(p^{\varUpsilon }_{i,n}+1/n)\) instead of \(F(p^{\varUpsilon }_{i,n})\) can be viewed as a smoothing scheme that guarantees that *F*(*x*) is only applied to \(x>0\). We will consider *F* that is not continuous at 0, such as \(F(x):=\ln {x}\); yet, \(w^{\varUpsilon }_{i,n}\) has to be well-defined even if \(p^{\varUpsilon }_{i,n}=0\). In practice, the \(p^{\varUpsilon }_{i,n}\) estimated from a finite corpus can often be 0; theoretically, the smoothing scheme plays a role in our proof as well.

The definition of \(\mathbf {w}^{\varUpsilon }_{n}\) is general enough to cover a wide range of previously proposed distributional word vectors. For example, if \(F(x)=\ln {x}\), \(a^{\varUpsilon }_n=0\) and \(b_{i,n}=\ln {p_{i,n}}\), then \(w^{\varUpsilon }_{i,n}\) is the Point-wise Mutual Information (PMI) value that has been widely adopted in NLP (Church and Hanks 1990; Dagan et al. 1994; Turney 2001; Turney and Pantel 2010). More recently, the Skip-Gram with Negative Sampling (SGNS) model (Mikolov et al. 2013a) is shown to be a matrix factorization of the PMI matrix (Levy and Goldberg 2014b); and the more general form of \(a^{\varUpsilon }_n\) and \(b_{i,n}\) is explicitly introduced by the GloVe model (Pennington et al. 2014). Regarding other forms of *F*, it has been reported in Lebret and Collobert (2014) and Stratos et al. (2015) that empirically \(F(x):=\sqrt{x}\) outperforms \(F(x):=x\). We will discuss function *F* further in Sect. 3.1, and review some other distributional vectors in Sect. 4.
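The PMI special case can be sketched as follows; the lexicon size and probability values are hypothetical, and the \(+1/n\) smoothing follows Definition 4.

```python
import math

def pmi_vector(p_ctx, p_unigram, n):
    """Entries w_i = F(p^Y_i + 1/n) - b_i with F = ln, a^Y_n = 0 and
    b_i = ln p_i: a smoothed pointwise mutual information."""
    return [math.log(p_ctx.get(i, 0.0) + 1.0 / n) - math.log(p_unigram[i])
            for i in range(n)]

# Hypothetical probabilities over a 4-word lexicon; word 3 never occurs
# in the context, yet its entry stays finite thanks to the 1/n term.
p_unigram = [0.4, 0.3, 0.2, 0.1]
p_ctx = {0: 0.5, 1: 0.25, 2: 0.25}
w = pmi_vector(p_ctx, p_unigram, n=4)
```

Without the smoothing term, the unseen entry would be \(\ln 0=-\infty \), which is exactly the degenerate case the scheme above avoids.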

We finish this section by pointing to Table 3 for a list of frequently used notations.

### 2.2 Practical meaning of the bias bound

Suppose one wants to calculate a vector representation for the meaning of a two-word phrase “*s t*” by composing the word vectors \(\mathbf {w}^s_{n}\) and \(\mathbf {w}^t_{n}\) with some operation \({{\mathrm{COMP}}}(\mathbf {w}^s_{n}, \mathbf {w}^t_{n})\). In this work, we study relations between this *composed vector* and the natural vector \(\mathbf {w}^{\{st\}}_{n}\) of the phrase target.^{2} More precisely, we study the Euclidean distance \(\bigl \Vert \mathbf {w}^{\{st\}}_{n}-{{\mathrm{COMP}}}(\mathbf {w}^s_{n}, \mathbf {w}^t_{n})\bigr \Vert \) between the two. By the Distributional Hypothesis, the natural vector \(\mathbf {w}^{\{st\}}_{n}\), assessed from an ideal corpus, conveys the meaning of the phrase “*s t*” or “*t s*”. Therefore, the above distance can be viewed as the bias of approximating \(\mathbf {w}^{\{st\}}_{n}\) by the composed vector \({{\mathrm{COMP}}}(\mathbf {w}^s_{n}, \mathbf {w}^t_{n})\). In practice, especially when \({{\mathrm{COMP}}}\) is a complicated operation with parameters, it has been a widely adopted approach to learn the parameters by minimizing the same distances for phrases observed in corpus (Dinu et al. 2013; Baroni and Zamparelli 2010; Guevara 2010). These practices further motivate our study on the bias.

### Definition 5

We consider the *additive composition*, where \({{\mathrm{COMP}}}(\mathbf {w}^s_{n}, \mathbf {w}^t_{n}):=\frac{1}{2}(\mathbf {w}^s_{n}+\mathbf {w}^t_{n})\) is a parameter-free composition operator. We define the bias of additive composition as

\[
\mathscr {B}^{\{st\}}_{n}:=\bigl \Vert \mathbf {w}^{\{st\}}_{n}-\tfrac{1}{2}(\mathbf {w}^s_{n}+\mathbf {w}^t_{n})\bigr \Vert .
\]

Our analysis starts from the observation that, every word in the context of \(\{st\}\) also occurs in the contexts of *s* and *t*: as illustrated in Table 1, if a word token *t* (e.g. “*rate*”) comes from a phrase \(\{st\}\) (e.g. “*tax rate*”), and if the context window size is not too small, the context for this token of *t* is almost the same as the context of \(\{st\}\). This motivates us to decompose the context of *t* into two parts, one coming from \(\{st\}\) and the other not.

### Definition 6

We define \(s/t\backslash s\) as the target consisting of the tokens of word *t* which do not occur next to word *s* in corpus. We use \(\pi _{s/t\backslash s}\) to denote the probability of *t* not occurring next to *s*, conditioned on a token of word *t*. Practically, \((1-\pi _{s/t\backslash s})\) can be estimated by the count ratio \(C(\{st\})/C(t)\). Then, we have the following equation:

\[
p^{t}_{i,n}=(1-\pi _{s/t\backslash s})\,p^{\{st\}}_{i,n}+\pi _{s/t\backslash s}\,p^{s/t\backslash s}_{i,n},
\]

because each context word of a token of *t* occurs in the context of either \(\{st\}\) or \(s/t\backslash s\).

The probability \(\pi _{s/t\backslash s}\) indicates how exclusive the collocation between *s* and *t* is. When \(\pi _{s/t\backslash s}\) and \(\pi _{t/s\backslash t}\) are small, *s* and *t* tend to occur next to each other exclusively, so \(\mathbf {w}^s_{n}\) and \(\mathbf {w}^t_{n}\) are likely to correlate with \(\mathbf {w}^{\{st\}}_{n}\), making \(\mathscr {B}^{\{st\}}_{n}\) small. This is the fundamental idea of our bias bound, which estimates \(\mathscr {B}^{\{st\}}_{n}\) in terms of \(\pi _{s/t\backslash s}\) and \(\pi _{t/s\backslash t}\). We give a detailed sketch below. First, by the Triangle Inequality one immediately has

\[
\mathscr {B}^{\{st\}}_{n}\le \tfrac{1}{2}\bigl \Vert \mathbf {w}^{\{st\}}_{n}-\mathbf {w}^s_{n}\bigr \Vert +\tfrac{1}{2}\bigl \Vert \mathbf {w}^{\{st\}}_{n}-\mathbf {w}^t_{n}\bigr \Vert .
\]
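The estimate \((1-\pi _{s/t\backslash s})\approx C(\{st\})/C(t)\) from Definition 6 can be sketched as follows; the toy corpus and the function name `exclusivity` are illustrative only.

```python
def exclusivity(tokens, s, t):
    """Estimate pi_{s/t\\s} and pi_{t/s\\t}: probabilities that a token
    of t (resp. s) does NOT occur next to s (resp. t), using
    (1 - pi_{s/t\\s}) ~= C({st}) / C(t) with word order ignored."""
    c_st = sum(1 for a, b in zip(tokens, tokens[1:])
               if (a, b) in ((s, t), (t, s)))
    return 1 - c_st / tokens.count(t), 1 - c_st / tokens.count(s)

# Hypothetical corpus: "tax" and "rate" co-occur in 2 of their 3 tokens each.
corpus = "the tax rate is a tax rate not a rate of tax".split()
pi_s_t, pi_t_s = exclusivity(corpus, "tax", "rate")
```

Smaller values of these two probabilities indicate a more exclusive collocation, which is the regime where the bias bound is strongest.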

To fix the scale of vectors, we choose \(c_n\) such that the *average* norm of \(\mathbf {w}^{{\varUpsilon }}_{n}\) equals 1. Thus, if we can prove that \(||\mathbf {w}^{{\varUpsilon }}_{n}||=1\) for *every* target \({\varUpsilon }\), we will have an upper bound for \(\mathscr {B}^{\{st\}}_{n}\).

Indeed, we will show that if a phrase target *ST* is *randomly* chosen, then \(\lim \limits _{n\rightarrow \infty }||\mathbf {w}^{\{ST\}}_{n}||\) converges to 1 in probability. The argument is sketched as follows. First, when *ST* is random, \(p^{\{ST\}}_{i,n}\) and \(w^{\{ST\}}_{i,n}\) become random variables. We assume that for each \(i\ne j\), \(p^{\{ST\}}_{i,n}\) and \(p^{\{ST\}}_{j,n}\) are *independent* random variables. Note this assumption in contrast to the fact that \(p_{i,n}\ge p_{j,n}\) for \(i<j\); nonetheless, we assume that \(p^{\{ST\}}_{i,n}\) is random enough so that when *i* changes, no obvious relation exists between \(p^{\{ST\}}_{i,n}\) and \(p^{\{ST\}}_{j,n}\). Thus, \(\bigl (w^{\{ST\}}_{i,n}\bigr )^2\)’s (\(1\le i\le n\), *n* fixed) are independent and we can apply the Law of Large Numbers:

\[
\lim _{n\rightarrow \infty }\frac{\sum _{i=1}^n \bigl (w^{\{ST\}}_{i,n}\bigr )^2}{\sum _{i=1}^n \mathbb {E}\bigl [\bigl (w^{\{ST\}}_{i,n}\bigr )^2\bigr ]}=1\quad \text {in probability.}\qquad (3)
\]

This is a generalized version of the Law of Large Numbers, because we do *not* assume \(p^{\{ST\}}_{i,n}\) and \(p^{\{ST\}}_{j,n}\) are identically distributed.^{3} For this generalized Law of Large Numbers we need some technical conditions. One necessary condition is \(\lim \limits _{n\rightarrow \infty }\sum _{i=1}^n \mathbb {E}\bigl [\bigl (w^{\{ST\}}_{i,n}\bigr )^2\bigr ]=\infty \), which we prove by explicitly calculating \(\mathbb {E}\bigl [\bigl (w^{\{ST\}}_{i,n}\bigr )^2\bigr ]\); another requirement is that the fluctuations of \(\bigl (w^{\{ST\}}_{i,n}\bigr )^2\) must be at comparable scales so they indeed cancel out. This is formalized as a uniform integrability condition, and we will show that it imposes a non-trivial constraint on the function *F* in the definition of word vectors. Finally, if Eq. (3) holds, by setting \(c_n:=\Bigl (\sum _{i=1}^n \mathbb {E}\bigl [\bigl (w^{\{ST\}}_{i,n}\bigr )^2\bigr ]\Bigr )^{-1/2}\) we are done.
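The kind of generalized Law of Large Numbers invoked here can be illustrated by a small seeded simulation; the Gaussian summands with index-dependent scales are an assumption made purely for illustration, not part of the theory.

```python
import random

random.seed(0)

def norm_ratio(n):
    """Ratio of a sum of squares of independent, NON-identically
    distributed variables to the sum of their expected squares."""
    total, expected = 0.0, 0.0
    for i in range(1, n + 1):
        sigma = i ** -0.25            # scales differ across i
        x = random.gauss(0.0, sigma)
        total += x * x
        expected += sigma * sigma     # E[x^2] = sigma^2
    return total / expected

# The ratio concentrates around 1 as n grows, even without identical
# distributions, mirroring Eq. (3).
r = norm_ratio(50000)
```

Note that the sum of expected squares diverges here (\(\sum i^{-1/2}\rightarrow \infty \)), mirroring the necessary condition stated above.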

Next, we decompose word vectors according to the equation in Definition 6. Since \(p^{t}_{i,n}\) is a weighted average of \(p^{\{st\}}_{i,n}\) and \(p^{s/t\backslash s}_{i,n}\), we approximate \(F(p^{t}_{i,n}+1/n)\) by the corresponding weighted average of \(F(p^{\{st\}}_{i,n}+1/n)\) and \(F(p^{s/t\backslash s}_{i,n}+1/n)\). Since *F* is smooth, this approximation can be justified as long as \(p^{s/t\backslash s}_{i,n}\) and \(p^{\{st\}}_{i,n}\) are small compared to 1/*n*. Then, we will rigorously prove that, when *n* is sufficiently large, the total error in the above approximation becomes infinitesimal: namely, \(\mathbf {w}^t_{n}\) approximately decomposes into a weighted sum of \(\mathbf {w}^{\{st\}}_{n}\) and \(\mathbf {w}^{s/t\backslash s}_{n}\), and similarly \(\mathbf {w}^s_{n}\) into a weighted sum of \(\mathbf {w}^{\{st\}}_{n}\) and \(\mathbf {w}^{t/s\backslash t}_{n}\). On one hand, both decompositions share the component \(\mathbf {w}^{\{st\}}_{n}\), which encodes the contexts shared by *s* and *t*; on the other hand, \(\mathbf {w}^{s/t\backslash s}_{n}\) and \(\mathbf {w}^{t/s\backslash t}_{n}\) should be “independent” because targets \(s/t\backslash s\) and \(t/s\backslash t\) cover disjoint tokens of different words. With this intuition, we will derive the bias bound in terms of \(\pi _{s/t\backslash s}\) and \(\pi _{t/s\backslash t}\).

In the rest of this section, we will formally normalize \(c_n\), \(a^{\varUpsilon }_n\), \(b_{i,n}\) and *F* for simplicity of discussion. These are mild conditions and do not affect the generality of our results. Then, we will summarize our claim of the bias bound, focusing on its practical verifiability.

### Definition 7

### Definition 8

Note that, if the centroid of natural phrase vectors is far from \(\mathbf {0}\), the normalization in Definition 7 would cause all phrase vectors to cluster around one point on the unit sphere. Then, the phrase vectors would not be able to distinguish different meanings of phrases. The choice of \(b_{i,n}\) in Definition 8 prevents such degenerate cases.

When \(b_{i,n}\), \(c_n\) and *F* are fixed, \(\mathscr {B}^{\{st\}}_{n}\) attains its minimum at the choice of \(a^{\varUpsilon }_n\) specified in the following definition.

### Definition 9

Practically, one can calculate \(a^{\varUpsilon }_n\) and \(b_{i,n}\) by first assuming \(b_{i,n}=0\) in (6) to obtain \(a^{\varUpsilon }_n\), and then substituting \(a^{\{st\}}_n\) into (5) to obtain the actual \(b_{i,n}\). The value of \(a^{\varUpsilon }_n\) will not change, because if all vectors have average entry 0, so does their centroid. In Sect. 2.4, we will derive asymptotic values of \(a^{\varUpsilon }_n\), \(b_{i,n}\) and \(c_n\) theoretically.
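The two-step calculation described above can be sketched as follows, with rows of *F*-transformed statistics as hypothetical input: first center each row (the \(a^{\varUpsilon }_n\) step, taking \(b_{i,n}=0\)), then subtract the centroid of the centered rows (the \(b_{i,n}\) step).

```python
def normalize(rows):
    """Two-step choice of a^Y_n and b_{i,n}: a^Y_n = mean entry of each
    row (with b = 0), then b_{i,n} = i-th entry of the centroid of the
    centered rows."""
    a = [sum(r) / len(r) for r in rows]
    centered = [[x - ai for x in r] for r, ai in zip(rows, a)]
    m = len(rows)
    b = [sum(r[i] for r in centered) / m for i in range(len(rows[0]))]
    return [[x - bi for x, bi in zip(r, b)] for r in centered]

rows = [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0]]  # hypothetical F-values
W = normalize(rows)
# After both steps, each row of W has mean 0 and the centroid of W is 0.
```

Since the centered rows already have mean entry 0, subtracting their centroid (whose mean entry is also 0) leaves the row means unchanged, matching the remark above.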

### Definition 10

We assume that *F*(*x*) is either \(x^\lambda /\lambda \) (if \(\lambda \ne 0\)) or \(\ln x\) (if \(\lambda =0\)).

This assumption is mainly for simplicity; intuitively, the behavior of *F*(*x*) only matters at \(x\approx 0\), because *F* is applied to probability values, which are close to 0. Indeed, our results can be generalized to any function *G*(*x*) such that \(G'(x)/F'(x)\) converges to a nonzero constant as \(x\rightarrow 0^{+}\) (see Remark 6), with the above choices of *F*.

Our bias bound is summarized as follows.

### Claim 1

As we expected, for more “collocational” phrases, since \(\pi _{s/t\backslash s}\) and \(\pi _{t/s\backslash t}\) are smaller, the bias bound becomes stronger. Claim 1 states a prediction that can be empirically tested on a real large corpus; namely, one can estimate \(p^{\varUpsilon }_{i,n}\) from the corpus and construct \(\mathbf {w}^{\varUpsilon }_{n}\) for a fixed *n*, then check if the inequality holds approximately while omitting the limit. In Sect. 5.3, we conduct the experiment and verify the prediction. Our theoretical assumptions on the “ideal natural language corpus” will be specified in Sect. 2.3.

Besides being empirically verifiable for phrases observed in a real corpus, the true value of Claim 1 is that the upper bound holds for an arbitrarily large ideal corpus. We can assume any plausible two-word phrase to occur sufficiently many times in the ideal corpus, even when it is unseen in the real one. In that case, a natural vector for the phrase can only be reliably estimated from the ideal corpus, but Claim 1 suggests that additive composition of word vectors provides a reasonable approximation for that unseen natural vector. Meanwhile, since word vectors *can* be reliably estimated from the real corpus, Claim 1 endorses additive composition as a reasonable meaning representation for unseen or rare phrases. On the other hand, it endorses additive composition for frequent phrases as well, because such phrases usually have strong collocations and Claim 1 says that the bias in this case is small.

The condition \(\lambda < 0.5\) on function *F* is crucial; we discuss its empirical implications in Sect. 3.1.

Further, the following is a by-product of Theorem 2 in Sect. 2.4, which corresponds to the previous Eq. (3) in our sketch of proof.

### Claim 2

Under the same conditions in Claim 1, we have \(\lim \limits _{n\rightarrow \infty }||\mathbf {w}^{\{st\}}_{n}||=1\) for all \(\{st\}\).

Thus, all natural phrase vectors approximately lie on the unit sphere. This claim is also empirically verified in Sect. 5.3. It enables a link between the Euclidean distance \(\mathscr {B}^{\{st\}}_{n}\) and the cosine similarity, which is the most widely used similarity measure in practice.
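This link can be made explicit: for vectors of unit norm, the squared Euclidean distance is an affine function of the cosine similarity.

```latex
% For unit-norm vectors, distance and cosine determine each other:
\|\mathbf{u}-\mathbf{v}\|^{2}
  = \|\mathbf{u}\|^{2}+\|\mathbf{v}\|^{2}-2\langle\mathbf{u},\mathbf{v}\rangle
  = 2-2\cos(\mathbf{u},\mathbf{v})
  \qquad\text{when } \|\mathbf{u}\|=\|\mathbf{v}\|=1 .
```

Hence, when natural phrase vectors approximately lie on the unit sphere, a small value of \(\mathscr {B}^{\{st\}}_{n}\) directly translates into a high cosine similarity between the natural and composed vectors.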

### 2.3 Formalization and assumptions on natural language data

Claim 1 is formalized as Theorem 1 in the following.

### Theorem 1

Assume the following conditions on natural language data:

- (A) \(\lim \limits _{n\rightarrow \infty }p_{i,n}\cdot i\ln n=1\).
- (B) Let *S*, *T* be randomly chosen word targets. If \({\varUpsilon }:=\) \(\{ST\}\), \(S/T\backslash S\) or \(T/S\backslash T\), then:
  - (B1) For *n* fixed, \(p^{{\varUpsilon }}_{i,n}\)’s \((1\le i\le n)\) can be viewed as independent random variables.
  - (B2) Put \(X:=p^{{\varUpsilon }}_{i,n}/p_{i,n}\). There exist \(\xi ,\beta \) such that \(\mathbb {P}(x\le X)=\xi /x\) for \(x\ge \beta \).
- (C) For each *i* and *n*, the random variables \(p^{S/T\backslash S}_{i,n}\) and \(p^{T/S\backslash T}_{i,n}\) are independent, whereas \(F(p^{S/T\backslash S}_{i,n}+1/n)\) and \(F(p^{\{ST\}}_{i,n}+1/n)\) have positive correlation.

We explain the assumptions of Theorem 1 in detail below.

### Remark 1

Assumption (A) is a formalization of Zipf’s Law, which states that the frequency of the *i*-th word is inversely proportional to *i*. So \(p_{i,n}\) is proportional to \(i^{-1}\), and the factor \(\ln n\) comes from the equations \(\sum _{i=1}^np_{i,n}=1\) and \(\sum _{i=1}^ni^{-1}\approx \ln n\). One immediate implication of Zipf’s Law is that one can make \(np_{i,n}\) arbitrarily small by choosing sufficiently large *n* and *i*. More precisely, for any \(\delta >0\), we have

\[
np_{i,n}\le \delta \quad \text {for}\quad \frac{n}{\delta \ln n}\le i\le n \qquad (7)
\]

asymptotically; so when *n* is large enough that \(\ln n\ge 1/\delta \), there is an *i* in (7) such that \(np_{i,n}\le \delta \). The limit \(np_{i,n}\rightarrow 0\) will be extensively explored in our theory.

Empirically, Zipf’s Law has been thoroughly tested under several settings (Montemurro 2001; Ha et al. 2002; Clauset et al. 2009; Corral et al. 2015).
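The implication of Zipf's Law discussed in Remark 1 can be checked numerically under an idealized distribution \(p_{i,n}\propto 1/i\) (an assumption of this sketch, not real corpus data):

```python
import math

def zipf_p(n):
    """p_{i,n} proportional to 1/i (Zipf's Law), normalized to sum to 1;
    the normalizer is approximately ln n, as in Assumption (A)."""
    raw = [1.0 / i for i in range(1, n + 1)]
    total = sum(raw)
    return [x / total for x in raw]

n, delta = 100000, 0.5
p = zipf_p(n)
# For every index i beyond n / (delta * ln n), the product n * p_{i,n}
# falls below delta, as claimed in Remark 1.
i0 = int(n / (delta * math.log(n))) + 1
tail_max = max(n * p[i - 1] for i in range(i0, n + 1))
```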

### Remark 2

When a target \({\varUpsilon }\) is randomly chosen, (B1) assumes that the probability value \(p^{\varUpsilon }_{i,n}\) is random enough that, when \(i\ne j\), there is no obvious relation between \(p^{\varUpsilon }_{i,n}\) and \(p^{\varUpsilon }_{j,n}\) (i.e. they are independent). We test this assumption in Sect. 5.1. Assumption (B2) suggests that \(p^{\varUpsilon }_{i,n}\) is at the same scale as \(p_{i,n}\), and the random variable \(X:=p^{{\varUpsilon }}_{i,n}/p_{i,n}\) has a power law tail^{4} of index 1. We regard (B2) as the *Generalized Zipf’s Law*, analogous to Zipf’s Law because \(p_{i,n}\)’s (\(1\le i\le n\), *n* fixed) can also be viewed as i.i.d. samples drawn from a power law of index 1. In Sect. 2.6, we show that Assumption (B) is closely related to a Hierarchical Pitman–Yor Process; and in Sect. 5.2 we empirically verify this assumption.
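Assumption (B2) with \(\xi =\beta \) (the Pareto case mentioned in Remark 4) can be simulated by inverse-transform sampling; the parameter values below are illustrative.

```python
import random

random.seed(1)
beta = 2.0

# Sample X with the tail P(x <= X) = beta / x for x >= beta,
# i.e. the case xi = beta of Assumption (B2).
xs = [beta / (1.0 - random.random()) for _ in range(200000)]

def tail(x):
    """Empirical P(x <= X)."""
    return sum(1 for v in xs if v >= x) / len(xs)

# tail(x) should track beta / x, e.g. tail(8.0) near 0.25.
```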

### Remark 3

Assumption (C) is based on an intuition that, since \(S/T\backslash S\) and \(T/S\backslash T\) are different word targets and \(p^{S/T\backslash S}_{i,n}\) and \(p^{T/S\backslash T}_{i,n}\) are assessed from disjoint parts of corpus, the two random variables should be independent. On the other hand, the targets \(S/T\backslash S\) and \(\{ST\}\) both contain a word *T*, so we expect \(F(p^{S/T\backslash S}_{i,n}+1/n)\) and \(F(p^{\{ST\}}_{i,n}+1/n)\) to have positive correlation. This assumption is also empirically tested in Sect. 5.1.

### Remark 4

Since *X* has a power law tail of index 1, the probability density \(-{{\mathrm{d}}}\mathbb {P}(x\le X)\) is a multiple of \(x^{-2}{{\mathrm{d}}}x\) for sufficiently large *x*. Hence, \(\mathbb {E}\bigl [F(X)^2\bigr ]\) becomes an integral of \(F(x)^2x^{-2}{{\mathrm{d}}}x\), so \(\lambda <0.5\) is a necessary condition for \(\mathbb {E}\bigl [F(X)^2\bigr ]<\infty \).

Conversely, \(\lambda <0.5\) is usually a sufficient condition for \(\mathbb {E}\bigl [F(X)^2\bigr ]<\infty \), for instance, if *X* follows the Pareto Distribution (i.e. \(\xi =\beta \)) or Inverse-Gamma Distribution. Another example will be given in Sect. 2.6.
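For \(F(x)=x^{\lambda }/\lambda \) with \(\lambda \ne 0\), the computation behind this condition is a one-line integral:

```latex
\mathbb{E}\bigl[F(X)^{2}\bigr]
  \;\propto\; \int_{\beta}^{\infty}\Bigl(\frac{x^{\lambda}}{\lambda}\Bigr)^{2}x^{-2}\,\mathrm{d}x
  \;=\; \frac{1}{\lambda^{2}}\int_{\beta}^{\infty}x^{2\lambda-2}\,\mathrm{d}x ,
\qquad\text{finite iff } 2\lambda-2<-1 \iff \lambda<\tfrac{1}{2}.
```

The case \(\lambda =0\) (\(F=\ln \)) likewise gives a convergent integral \(\int _\beta ^\infty (\ln x)^2x^{-2}{{\mathrm{d}}}x\), consistent with the condition \(\lambda <0.5\).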

### Lemma 1

Let \(e_{i,n}:=\mathbb {E}[Y_{i,n}]\) and \(v_{i,n}:={{\mathrm{Var}}}[Y_{i,n}]\), with \(Y_{i,n}\) and \(\varphi _{i,n}\) as in Table 3. Then:

- (a) There exists \(\chi \) such that \(\dfrac{|e_{i,n}|}{\sqrt{\varphi _{i,n}}}\le \chi \) for all *i*, *n*.
- (b) \(\lim \limits _{np_{i,n}\rightarrow 0}\dfrac{e_{i,n}}{\sqrt{\varphi _{i,n}}}=0.\)
- (c) The set of random variables \(\bigl \{Y_{i,n}^2/\varphi _{i,n}\bigr \}\) is uniformly integrable; i.e., for any \(\varepsilon >0\), there exists *N* such that \(\mathbb {E}\bigl [Y_{i,n}^2I_{Y_{i,n}^2>N \varphi _{i,n}}\bigr ]<\varepsilon \varphi _{i,n}\) for all *i*, *n*.
- (d) \(\lim \limits _{np_{i,n}\rightarrow 0}\dfrac{v_{i,n}}{\varphi _{i,n}}=\eta \ne 0\), where \( \eta =\displaystyle \int _0^\infty \bigl (F(z+\beta )-F(\beta )\bigr )^2\cdot \frac{\xi {{\mathrm{d}}}z}{z^2} \).

### Remark 5

As sketched in Sect. 2.2, our proof requires calculation of \(\mathbb {E}\bigl [\bigl (w^{{\varUpsilon }}_{i,n}\bigr )^2\bigr ]\); this is done by applying Lemma 1 above. The lemma calculates the first and second moments of \(Y_{i,n}\); note that \(Y_{i,n}\) differs from \(w^{\varUpsilon }_{i,n}\) only by some constant shift.^{5} As the lemma shows, when *i* and *n* vary, the squared first moment \(e_{i,n}^2\) and the variance \(v_{i,n}\) scale with the constant \(\varphi _{i,n}\). At the limit \(np_{i,n}\rightarrow 0\), Lemma 1(b)(d) suggests that \(e_{i,n}/\sqrt{\varphi _{i,n}}\) and \(v_{i,n}/\varphi _{i,n}\) converge, which is where the power law tail of *X* mostly affects the behavior of \(Y_{i,n}\).

### Remark 6

The function *F*(*x*) in Lemma 1 can be generalized to a function *G*(*x*) as mentioned in Definition 10. Indeed, by Cauchy’s Mean Value Theorem,

\[
\frac{G(y)-G(x)}{F(y)-F(x)}=\frac{G'(z)}{F'(z)}\quad \text {for some}\quad z\in (x,y),
\]

and the right-hand side converges to a nonzero constant as \(y\rightarrow 0^{+}\). Hence, estimates established for increments of *F* carry over to *G*(*x*), and in turn generalize our bias bound.

### Lemma 2

- (a) \(\lim \limits _{np_{i,n}\rightarrow 0}\dfrac{n^{2\lambda }\cdot \varphi _{i,n}}{np_{i,n}}=\beta ^{1-2\lambda }\).
- (b) \(n^{-1+2\lambda }\ln n\cdot \varphi _{i,n}\le \beta ^{1-2\lambda }/i\).
- (c) For any \(\delta >0\), there exists \(M_\delta \) such that \(n^{-1+2\lambda }\ln n\sum _{i=1}^{\frac{n}{\delta \ln n}}\varphi _{i,n} \le M_\delta \) for all *n*.
- (d) For any \(\delta >0\), we have \(\lim \limits _{n\rightarrow \infty }n^{-1+2\lambda }\ln n\sum _{i=\frac{n}{\delta \ln n}}^{n}\varphi _{i,n}=\infty \).

Lemma 1 is derived from Assumption (B) and the condition \(\mathbb {E}\bigl [F(X)^2\bigr ]<\infty \). Lemma 2 is derived from Assumption (A). The proofs are found in “Appendix 1”.

### 2.4 Why is \(\lambda <0.5\) important?

As we note in Remark 4, the condition \(\lambda <0.5\) is necessary for the existence of \(\mathbb {E}\bigl [F(X)^2\bigr ]\). This existence is important because, briefly speaking, the Law of Large Numbers only holds when expected values exist. More precisely, we use the following lemma to prove convergence in probability in Theorem 1, and in particular Eq. (3) as discussed in Sect. 2.2. If \(\mathbb {E}\bigl [F(X)^2\bigr ]=\infty \), the required uniform integrability is not satisfied, which means the fluctuations of random variables may have too different scales to completely cancel out, so their weighted averages as we consider will not converge.
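To make the role of \(\mathbb {E}\bigl [F(X)^2\bigr ]\) concrete, one can compute the truncated second moment in closed form for a toy power-law density \(\propto x^{-2}\) (a Zipf-like tail of index 1) with \(F(x)=x^\lambda \); this toy calculation is ours and not part of the proof:

```python
import math

def truncated_second_moment(lam, T):
    """E[F(X)^2] truncated at T, for F(x) = x**lam and an illustrative
    power-law density proportional to x**-2 on [1, infinity):
    integral from 1 to T of x**(2*lam) * x**-2 dx."""
    if abs(2 * lam - 1) < 1e-12:
        return math.log(T)          # lam = 0.5: grows like ln T
    return (T ** (2 * lam - 1) - 1) / (2 * lam - 1)

# lam < 0.5: the moment stabilizes as the truncation point T grows
print(truncated_second_moment(0.4, 1e3), truncated_second_moment(0.4, 1e6))
# lam > 0.5: the moment keeps growing, i.e. E[F(X)^2] is infinite
print(truncated_second_moment(0.6, 1e3), truncated_second_moment(0.6, 1e6))
```

With \(\lambda =0.4\) the truncated moment approaches a finite limit, while with \(\lambda =0.6\) it diverges as the truncation point grows, which is exactly the dichotomy at \(\lambda =0.5\) discussed above.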

### Lemma 3

Suppose \(U_{i,n}\) (\(1\le i\le n\), *n* fixed) are independent random variables and \(\bigl \{U_{i,n}/\varphi _{i,n}\bigr \}\) is uniformly integrable. Assume \(\lim \limits _{np_{i,n}\rightarrow 0}\mathbb {E}[U_{i,n}]/\varphi _{i,n}=\ell \). Then,

### Proof

This lemma is a combination of the Law of Large Numbers and the Stolz-Cesàro Theorem. We prove it in two steps.

For any \(\varepsilon >0\), by uniform integrability there exists *N* such that \(\mathbb {E}\bigl [|U_{i,n}|I_{|U_{i,n} |>N\varphi _{i,n}}\bigr ]<\varepsilon ^2 \varphi _{i,n}\) for all *i*, *n*. Our strategy is to divide the average of \(U_{i,n}\) into two parts, namely

*M*, so

Combining Lemma 1(a)(c)(d) and Lemma 3, we immediately obtain the following. This is almost Eq. (3) we wanted in Sect. 2.2.

### Corollary 4

Now, we can asymptotically derive the normalization of \(a^{\varUpsilon }_n\), \(b_{i,n}\) and \(c_n\), as defined in Sect. 2.2. A by-product is that the norms of natural phrase vectors converge to 1.

### Theorem 2

### Proof

*M* such that

Therefore, if we set \(a^{\{st\}}_n\), \(b_{i,n}\) and \(c_n\) as in Theorem 2, all conditions in Definition 7, Definition 8 and Definition 9 are asymptotically satisfied. In addition, we have obtained the result stated in Claim 2.

In view of Corollary 4, if \(\lambda <0.5\) is not satisfied, the norms of natural phrase vectors will not converge. This prediction is experimentally verified in Sect. 5.3.

### 2.5 Proof of Theorem 1 and an intuitive explanation

In this section, we start to use Eq. (1) and derive our bias bound. Recall that Eq. (1) decomposes \(p^{t}_{i,n}\) into a linear combination of \(p^{s/t\backslash s}_{i,n}\) and \(p^{\{st\}}_{i,n}\); our first observation is that \(F(p^{t}_{i,n}+1/n)\) can be decomposed similarly into a linear combination of \(F(p^{s/t\backslash s}_{i,n}+1/n)\) and \(F(p^{\{st\}}_{i,n}+1/n)\), as if the function *F* were linear. This is because *F* is smooth, and when \(np_{i,n}\) is sufficiently small, the probability value \(p_{i,n}\) is small compared to \(1/n\), so \(F(x+1/n)\) can be linearly approximated as \(F'(1/n)x+F(1/n)\), as long as *x* is at the same scale as \(p_{i,n}\). This is formalized as the following lemma.
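As a quick numerical check of this linearization (an illustrative sketch with \(F=\ln \); the values of *n* and *x* are arbitrary):

```python
import math

n = 10_000
h = 1.0 / n                         # the 1/n shift
F, dF = math.log, lambda x: 1.0 / x  # F and its derivative

def linearization_error(x):
    """Relative error of the approximation F(x + 1/n) ~ F'(1/n)*x + F(1/n)."""
    exact = F(x + h)
    linear = dF(h) * x + F(h)
    return abs(exact - linear) / abs(exact)

# x much smaller than 1/n: the linear approximation is very accurate
print(linearization_error(h / 100))
# x comparable to 1/n: the approximation visibly degrades
print(linearization_error(h))
```

The error is negligible when \(x\ll 1/n\) and grows once *x* reaches the scale of \(1/n\), matching the condition "\(np_{i,n}\) sufficiently small" in the text.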

### Lemma 5

### Proof

*N*. Consider the condition

under which the linear approximation of *F* becomes valid. More precisely, we show that

Let *H* be the inverse function of \(\hat{F}\):

Both *H* and *J* do not depend on *n*, *i*, *S* or *T*. Now, we consider the limit \(np_{i,n}\rightarrow 0\). By Lemma 2(a), we can replace the \(\varphi _{i,n}\) in (9) with \(np_{i,n}\cdot n^{-2\lambda }\); and since

Now, we are ready to prove Theorem 1. An intuitive discussion is given after the proof.

### Proof of Theorem 1

*t*, then one can calculate that \(\lim \limits _{n\rightarrow \infty }\frac{c_n}{n}\sum _{i=1}^n w^{T}_{i,n}=0\) in probability, by using Lemma 5 and arguing similarly to the proof of Theorem 2. Thus, we set \(a^{t}_n:=0\). Then,

In view of this explanation, the technical points of Theorem 1 are as follows. First, the decomposition of \(w^{t}_{i,n}\) into \(\hat{w}^{s/t\backslash s}_{i,n}\) and \(\hat{w}^{\{st\}}_{i,n}\) is not exact; there is a difference between \(\hat{w}^{s/t\backslash s}_{i,n}\) and \(\tilde{w}^{s/t\backslash s}_{i,n}\) due to the expected value, and there is a difference between \(F(p^T_{i,n}+1/n)\) and the linear combination of \(F(p^{S/T\backslash S}_{i,n}+1/n)\) and \(F(p^{\{ST\}}_{i,n}+1/n)\). However, by Lemma 1(b) the expected value converges to 0, and by Lemma 5 the linear approximation holds asymptotically. So this first issue is settled. Second, and most importantly, the terms \(\bigl (\hat{w}^{\{st\}}_{i,n}\bigr )^2\), \(\bigl (\hat{w}^{s/t\backslash s}_{i,n}\bigr )^2\) and \(\bigl (\hat{w}^{t/s\backslash t}_{i,n}\bigr )^2\) have to sum to constants independent of *s* and *t*; otherwise they cannot be separated from \(\pi _1\) and \(\pi _2\) in the calculation of \(\mathscr {B}^{\{st\}}_{n}\). This requires Eq. (3) as we discussed in Sect. 2.2, which is a generalized version of the Law of Large Numbers. For this law to hold, one needs conditions to guarantee that the fluctuations of the random variables are at comparable scales, so that they cancel out. This leads to the condition \(\lambda <0.5\), which is a non-trivial constraint on the function *F*. Formally, Eq. (3) is proven as Corollary 4.

Insights brought by our theory lead to several applications. First, as we found that the power law tail of natural language data requires \(\lambda <0.5\) for constructing additively compositional vectors, our theory provides important guidance for empirical research on Distributional Semantics (Sect. 3.1). Second, as we found that \(w^{t}_{i,n}\) and \(w^{s}_{i,n}\) have decompositions in which \(\hat{w}^{\{st\}}_{i,n}\) is a common factor and survives averaging, but \(\hat{w}^{s/t\backslash s}_{i,n}\) and \(\hat{w}^{t/s\backslash t}_{i,n}\) cancel out each other, we come to the idea of harnessing additive composition by engineering what is common in the summands. Then, for example, we can make additive composition aware of word order (Sect. 3.2). Third, as one can read from Lemma 2(c)(d) and the proof of Lemma 3, it is important to realize that the behavior of vector representations is dominated by entries at dimensions corresponding to low-frequency words, namely \(w^{\varUpsilon }_{i,n}\)’s where \(\frac{n}{\delta \ln n}\le i\le n\). This understanding has impact on dimension reduction (Sect. 3.3).

### 2.6 Hierarchical Pitman–Yor process

In Assumptions (A)(B) of Theorem 1 we have required several properties to be satisfied by the probability values \(p_{i,n}\) and \(p^{\varUpsilon }_{i,n}\). Meanwhile, \(p_{i,n}\)’s and \(p^{\varUpsilon }_{i,n}\)’s (\(1\le i\le n\), *n* fixed) define distributions from which words can be generated. This setting is reminiscent of a Bayesian model where priors of word distributions are specified.

Conversely, by the well-known de Finetti’s Theorem, an exchangeable random sequence of words (i.e., given any sequence sample, all permutations of that sample occur with the same probability) can be seen as if the words are drawn i.i.d. from a conditioned word distribution, where the distribution itself is drawn from a prior. A widely studied example is the Pitman–Yor Process (Pitman and Yor 1997; Pitman 2006); in this section, we use the process to define a generative model, from which Assumptions (A)(B) can be derived.

### Definition 11

- 1.
First, generate a new word.

- 2.
At each step, let \(C(\varpi )\) be the count of word \(\varpi \), and \(C:=\sum _{\varpi }C(\varpi )\) the total count; let *N* be the number of distinct words. Then:
  - (2.1)
  Generate a new word with probability \(\dfrac{\theta + \alpha N}{\theta + C}\).
  - (2.2)
  Or, generate a new copy of an existing word \(\varpi \), with probability \(\dfrac{C(\varpi ) - \alpha }{\theta + C}\).
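For intuition, the generative procedure of Definition 11 can be simulated directly (an illustrative sketch; the function name, seed and hyper-parameter values are our own choices, not from the paper):

```python
import random

def sample_py(alpha, theta, steps, seed=0):
    """Simulate PY(alpha, theta) as in Definition 11; returns word counts C(w)."""
    rng = random.Random(seed)
    counts = [1]                       # step 1: generate a new word
    total = 1                          # C, the total count
    for _ in range(steps - 1):
        distinct = len(counts)         # N, the number of distinct words
        # (2.1) a new word with probability (theta + alpha*N) / (theta + C)
        if rng.random() < (theta + alpha * distinct) / (theta + total):
            counts.append(1)
        else:
            # (2.2) a copy of word w, chosen proportional to C(w) - alpha
            r = rng.random() * (total - alpha * distinct)
            acc = 0.0
            for i, c in enumerate(counts):
                acc += c - alpha
                if r < acc:
                    counts[i] += 1
                    break
            else:                      # guard against float rounding
                counts[-1] += 1
        total += 1
    return counts

counts = sample_py(alpha=0.8, theta=1.0, steps=20_000)
# heavy tail: most words occur once, while a few words occur very often
print(len(counts), max(counts), sum(1 for c in counts if c == 1))
```

Sorting the resulting counts and plotting them on a log-log scale exhibits the power-law tail stated in Theorem 4 below.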

### Definition 12

In the above process \(PY(\alpha ,\theta )\), we define \(p(\varpi ):=\lim \dfrac{C(\varpi )}{C}\), where the limit is taken as the number of steps tends to \(\infty \). Index the words such that \(p(\varpi _i)\ge p(\varpi _{i+1})\), and put \(p_i:=p(\varpi _i)\).

### Theorem 3

For a sequence generated by \(PY(\alpha , \theta )\), we have \(\lim C/N^{1/\alpha }=Z\) for some random variable *Z*.

### Proof

This is Theorem 3.8 in Pitman (2006). \(\square \)

### Theorem 4

We have \(\lim \limits _{i\rightarrow \infty } p_i\cdot i^{1/\alpha }{\varGamma }(1-\alpha )^{1/\alpha }=Z\), where *Z* is the same as in Theorem 3.

### Proof

This is Lemma 3.11 in Pitman (2006). \(\square \)

Theorem 4 shows that, if words are generated by a Pitman–Yor Process \(PY(\alpha , \theta )\), then \(p_i\) has a power law tail of index \(\alpha \). This is of the same form as Assumption (A), and when \(\alpha \approx 1\), it approximates Zipf’s Law.

For different generated sequences, the limit *Z* as in Theorem 3 may differ (since the sequences are random), even if they are generated with the same hyper-parameters \(\alpha \) and \(\theta \). Nevertheless, the limit always exists, and *Z* follows a statistical distribution. The probability density of \(Z^{-\alpha }\) is derived in Pitman (2006), Theorem 3.8:

Next, we consider the co-occurrence probability \(p^{\varUpsilon }(\varpi )\), conditioned on \(\varpi \) being in the context of a target \({\varUpsilon }\). One first notes that \(p^{\varUpsilon }(\varpi )\) is likely to be related to \(p(\varpi )\); i.e., frequent words are likely to occur in every context, regardless of target. To model this intuition, the idea of Hierarchical Pitman–Yor Process (Teh 2006) is to adapt \(PY(\alpha ,\theta )\) such that in each step, if a new word is to be generated, it is no longer generated brand new, but drawn from another Pitman–Yor Process instead. This second Pitman–Yor Process serves as a “reference” which controls how frequently a word is likely to occur. More precisely, a Hierarchical Pitman–Yor Process \(H\!PY(\alpha _1,\theta _1;\alpha _2,\theta _2)\) generates sequences as follows.

### Definition 13

- 1.
First step, generate a new reference which refers to a new word.

- 2.
At each step, let \(C(\rho )\) be the count of reference \(\rho \), and \(C(\varpi ):=\sum _{\varpi ^\rho =\varpi }C(\rho )\) the count of all references referring to word \(\varpi \); let \(C:=\sum _{\varpi }C(\varpi )\) be the total count, \(N_r(\varpi )\) the number of distinct references referring to \(\varpi \), and \(N_r:=\sum _{\varpi }N_r(\varpi )\) the total number of distinct references; finally, let \(N_w\) be the number of distinct words.
  - (2.1)
  Generate a new reference referring to a new word, with probability$$\begin{aligned} \frac{1}{\theta _1 + C}\cdot \frac{\theta _1+\alpha _1N_r}{\theta _2+N_r}\cdot (\theta _2+\alpha _2N_w). \end{aligned}$$
  - (2.2)
  Generate a new reference referring to an existing word \(\varpi \), with probability$$\begin{aligned} \frac{1}{\theta _1 + C}\cdot \frac{\theta _1+\alpha _1N_r}{\theta _2+N_r}\cdot (N_r(\varpi )-\alpha _2). \end{aligned}$$
  - (2.3)
  Or, generate a new copy of an existing reference \(\rho \), with probability$$\begin{aligned} \frac{1}{\theta _1 + C}\cdot (C(\rho )-\alpha _1). \end{aligned}$$

The same hierarchy was used by Teh (2006) in an *n*-gram language model to connect the bigram probability *p*(*w*|*u*) to the unigram probability *p*(*w*), for deriving a smoothing method.

Unfortunately, a precise analysis of the above \(p^{\varUpsilon }(\varpi )\) is beyond the reach of the authors; instead, we consider a slightly modified process which is much simpler for our purpose.

### Definition 14

A *Modified Hierarchical Pitman–Yor Process* \(M\!H\!PY(\alpha _1,\theta _1;\alpha _2,\theta _2)\) generates sequences as follows, using the same notation as in Definition 13:

- 1.
First step, generate a new reference which refers to a new word.

- 2.
At each step:
  - (2.1)
  Generate a new reference referring to a new word, with probability$$\begin{aligned} \frac{1}{D}\cdot (\theta _2+\alpha _2N_w). \end{aligned}$$
  - (2.2)
  Generate a new reference referring to an existing word \(\varpi \), with probability$$\begin{aligned} \frac{1}{D}\cdot (N_r(\varpi )-\alpha _2). \end{aligned}$$
  - (2.3)
  Or, generate a new copy of an existing reference \(\rho \), with probability$$\begin{aligned} \frac{1}{D}\cdot \frac{N_r(\varpi ^\rho )-\alpha _2}{\theta _1 + \alpha _1N_r(\varpi ^\rho )}\cdot (C(\rho )-\alpha _1). \end{aligned}$$

In the above, *D* is a normalization factor that makes the probability values sum to 1:$$\begin{aligned} D:=\theta _2+ \alpha _2N_w + \sum _\varpi \frac{N_r(\varpi )-\alpha _2}{\theta _1 + \alpha _1N_r(\varpi )}\cdot (C(\varpi )+\theta _1). \end{aligned}$$

## 3 Applications

In this section, we demonstrate three applications of our theory.

### 3.1 The choice of function *F*

The condition \(\lambda <0.5\) specifies a nontrivial constraint on the function *F*. In Sect. 2.4 we have shown that this is a necessary condition for the norms of natural phrase vectors to converge. The convergence of norms is an outstanding property that might affect not only additive composition but also the composition ability of vector representations in general. Specifically, we note that \(F(x)=\ln {x}\) when \(\lambda =0\), and \(F(x)=\sqrt{x}\) when \(\lambda =0.5\). It is straightforward to predict that these functions might perform better in composition tasks than functions that have larger \(\lambda \), such as \(F(x):=x\) or \(F(x):=x\ln {x}\). In Sect. 5.3, we show experiments that verify the necessity of \(\lambda <0.5\) for our bias bound to hold, and in Sect. 6 we show that *F* indeed drastically affects additive compositionality as judged by human annotators; while \(F(x):=\ln {x}\) and \(F(x):=\sqrt{x}\) perform similarly well, \(F(x):=x\) and \(F(x):=x\ln {x}\) are much worse.

Different settings of the function *F* have been considered in previous research, and speculations have been made about the reasons for the semantic additivity of some of the vector representations. In Pennington et al. (2014), the authors noted that logarithm is a homomorphism from multiplication to addition, and used this property to justify \(F(x):=\ln {x}\) for training semantically additive word vectors, but based on the unverified hypothesis that multiplications of co-occurrence probabilities are semantically special. On the other hand, Lebret and Collobert (2014) proposed to use \(F(x):=\sqrt{x}\), motivated by the Hellinger distance between two probability distributions, and reported it to perform better than \(F(x):=x\). Stratos et al. (2015) proposed a similar but more general and better-motivated model: based on the assumption that co-occurrence counts are generated by a Poisson Process, the authors attributed \(F(x):=\sqrt{x}\) to an optimal choice that stabilizes the *variance* in estimating word vectors. In contrast, our theory shows clearly that *F* affects the *bias* of additive composition, besides the variance. All in all, none of the previous research can explain why \(F(x):=\ln {x}\) and \(F(x):=\sqrt{x}\) are *both* good choices but \(F(x):=x\) is not.

Intuitively, the condition \(\lambda <0.5\) requires *F*(*x*) to decrease steeply as *x* tends to 0. The steep slope has the effect of “amplifying” the fluctuations of lower co-occurrence probabilities, and hence “suppressing” higher ones. Formally, this can be read from Lemma 1, which shows that \({{\mathrm{Var}}}[F(p^{\varUpsilon }_{i,n}+1/n)]\) scales with \(\varphi _{i,n}=p_{i,n}\bigl (p_{i,n}+(\beta n)^{-1}\bigr )^{-1+2\lambda }\). When \(\lambda <0.5\), the \(\bigl (p_{i,n}+(\beta n)^{-1}\bigr )^{-1+2\lambda }\) factor decreases as \(p_{i,n}\) increases, and the decrease becomes faster when \(\lambda \) is smaller. Thus, in the vector representations we consider, higher co-occurrence probabilities are “suppressed” more when \(\lambda \) is smaller.
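The suppression factor can be tabulated numerically (an illustrative sketch; the values of \(\beta \), *n* and the probabilities are arbitrary):

```python
n, beta = 10_000, 1.0

def scale_factor(p, lam):
    """The factor (p + (beta*n)**-1)**(-1 + 2*lam) from Lemma 1."""
    return (p + 1.0 / (beta * n)) ** (-1.0 + 2.0 * lam)

for lam in (0.0, 0.25, 0.5):
    row = [scale_factor(p, lam) for p in (1e-5, 1e-4, 1e-3, 1e-2)]
    print(lam, [f"{v:.2f}" for v in row])
# lam = 0:   the factor drops steeply as p grows (strong suppression)
# lam = 0.5: the factor is constant 1 (no suppression)
```

At \(\lambda =0\) the factor shrinks by orders of magnitude between \(p=10^{-5}\) and \(p=10^{-2}\), while at \(\lambda =0.5\) it is identically 1.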

### 3.2 Handling word order in additive composition

An unordered bigram \(\{st\}\) does not distinguish between the ordered phrases “*s* *t*” and “*t* *s*”. Though the meanings of the two might be somehow related, to treat a compositional framework as approximating \(\mathbf {w}^{\{st\}}_n\) instead of \(\mathbf {w}^{st}_n\) would certainly be troublesome, especially when one tries to extend our theory to longer phrases or even sentences. As the following example (Landauer et al. 1997) demonstrates, meanings of sentences may differ greatly as word order changes.

- a.
*It was not the sales manager who hit the bottle that day, but the office worker with the serious drinking problem.*

- b.
*That day the office manager, who was drinking, hit the problem sales worker with a bottle, but it was not serious.*

To handle word order, we propose the *Near–far Context*, which specifies contexts for \(s\bullet \) and \(\bullet t\) such that the additive composition \(\frac{1}{2}(\mathbf {w}^{s\bullet }_n+\mathbf {w}^{\bullet t}_n)\) approximates the natural vector \(\mathbf {w}^{st}_n\) for the *ordered* phrase “*s* *t*”.

### Definition 15

In *Near–far Context*, context words are assigned labels, either *N* or *F*. For constructing vector representations, we use a lexicon of *N*- and *F*-labeled words, and regard words with different labels as different entries in the lexicon. For any target, we label the nearer two words to each side by *N*, and the farther two words to each side by *F*. Except that, for the “left-hand-side” word \(s\bullet \) we skip one word adjacent to the right; and similarly, for the “right-hand-side” word \(\bullet t\) we skip one word adjacent to the left (Fig. 2).

The idea behind Near–far Context is that, in the context of phrase “*s* *t*”, each word is assigned an *N*- or *F*-label the same as in the context of \(s\bullet \) and \(\bullet t\) (Fig. 2). On the other hand, for targets *s* and *t* occurring in the order-reversed phrase “*t* *s*”, context words are labeled differently for \(s\bullet \) and \(\bullet t\) (Fig. 3). As we discussed in Sect. 2.2, the key fact about additive composition is that if a word token *t* comes from phrase “*s* *t*” or “*t* *s*”, the context for this token of *t* is almost the same as the context of “*s* *t*” or “*t* *s*”, respectively. By introducing different labels for context words of \(t\bullet \) and \(\bullet t\), we are able to distinguish “*s* *t*” from “*t* *s*”. More precisely, similar to our discussion in Sect. 2.5, the common component of \(w^{s\bullet }_{i,n}\) and \(w^{\bullet t}_{i,n}\) will survive in the average \(\frac{1}{2}(w^{s\bullet }_{i,n}+w^{\bullet t}_{i,n})\), whereas independent ones will cancel out each other. Thus, the additive composition \(\frac{1}{2}(\mathbf {w}^{s\bullet }_n+\mathbf {w}^{\bullet t}_n)\) will become closer to \(\mathbf {w}^{st}_n\) than to \(\mathbf {w}^{ts}_n\), because \(s\bullet \) and \(\bullet t\) share context surrounding “*s* *t*” but not “*t* *s*”.
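The labeling scheme can be sketched in code (an illustrative implementation of our reading of Definition 15; the function names and example sentence are ours). It checks the key fact: the labeled contexts of \(s\bullet \) and \(\bullet t\), taken from an occurrence of “*s* *t*”, coincide with the labeled context of the phrase itself.

```python
def near_far_context(tokens, pos, skip_right=False, skip_left=False):
    """Label context words around the target at index `pos`: the nearer two
    words on each side get 'N', the farther two get 'F'.
    skip_right: target acts as s-bullet, skip the word adjacent to the right;
    skip_left:  target acts as bullet-t, skip the word adjacent to the left."""
    ctx = []
    left_start = pos - 1 - (1 if skip_left else 0)
    for k, j in enumerate(range(left_start, left_start - 4, -1)):
        if 0 <= j < len(tokens):
            ctx.append(('N' if k < 2 else 'F', tokens[j]))
    right_start = pos + 1 + (1 if skip_right else 0)
    for k, j in enumerate(range(right_start, right_start + 4)):
        if 0 <= j < len(tokens):
            ctx.append(('N' if k < 2 else 'F', tokens[j]))
    return sorted(ctx)

def phrase_context(tokens, pos):
    """Labeled context of the two-word phrase at positions (pos, pos + 1)."""
    ctx = []
    for k, j in enumerate(range(pos - 1, pos - 5, -1)):
        if 0 <= j < len(tokens):
            ctx.append(('N' if k < 2 else 'F', tokens[j]))
    for k, j in enumerate(range(pos + 2, pos + 6)):
        if 0 <= j < len(tokens):
            ctx.append(('N' if k < 2 else 'F', tokens[j]))
    return sorted(ctx)

sent = "a b c d s t e f g h".split()
# contexts of s-bullet (at 4) and bullet-t (at 5) match the phrase "s t"
assert near_far_context(sent, 4, skip_right=True) == phrase_context(sent, 4)
assert near_far_context(sent, 5, skip_left=True) == phrase_context(sent, 4)
```

With this labeling, the shared context of \(s\bullet \) and \(\bullet t\) is exactly the context of the ordered phrase, which is what makes their average approximate \(\mathbf {w}^{st}_n\).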

### Definition 16

We define \({s\bullet }\backslash t\) as the target \(s\bullet \) conditioned on *s* not occurring at the left of *t*. We denote by \(\pi _{{s\bullet }\backslash t}\) the probability of *s* being not at the left of *t*, conditioned on its occurrence. Practically, one can estimate \((1-\pi _{{s\bullet }\backslash t})\) by *C*(*st*) / *C*(*s*). Similarly, we define \(s/{\bullet t}\) and \(\pi _{s/{\bullet t}}\). Then, we have equations parallel to (1).

The following claim is parallel to Claim 1.

### Claim 3

In Sect. 5.4, we verify Claim 3 experimentally, and show that in contrast, the error \(||\mathbf {w}^{ts}_n-\frac{1}{2}(\mathbf {w}^{s\bullet }_n + \mathbf {w}^{\bullet t}_n) ||\) for approximating the order-reversed phrase “*t* *s*” can exceed this bias bound. Further, we demonstrate that by using the additive composition of Near–far Context vectors, one can indeed assess meaning similarities between ordered phrases.

### 3.3 Dimension reduction

So far we have only discussed vector representations that have a high dimension equal to the lexicon size *n*. In practice, people mainly use low-dimensional “embeddings” of words to represent their meaning. Many of the embeddings, including SGNS and GloVe, can be formalized as linear dimension reduction, which is equivalent to finding a *d*-dimensional vector \(\mathbf {v}^t\) (where \(d\ll n\)) for each target word *t*, and an (*n*, *d*)-matrix *A*, such that \(\sum _t L(A\mathbf {v}^t, \mathbf {w}^t_n)\) is minimized for some loss function \(L(\cdot , \cdot )\). In other words, \(A\mathbf {v}^t\) is trained as a good approximation of \(\mathbf {w}^t_n\).

Naturally, we expect the loss function *L* to be a crucial factor in word embeddings. Although there are empirical investigations of other design details of embedding methods (e.g. how to count co-occurrences, see Levy et al. 2015), the loss functions have not been explicitly discussed previously. In this section, we discuss *how the loss functions would affect additive compositionality of word embeddings*, from the viewpoint of bounding the bias \(||\mathbf {v}^{\{st\}}-\frac{1}{2}(\mathbf {v}^{s}+\mathbf {v}^{t}) ||\).

*SVD* When *L* is the \(L^2\)-loss, its minimization has a closed-form solution given by the Singular Value Decomposition (SVD). More precisely, one considers a matrix whose *j*-th column is \(\mathbf {w}^t_n\), where *t* is the *j*-th target word. Then, SVD factorizes the matrix into \(U{\varSigma }V^\top \), where *U*, *V* are orthonormal and \({\varSigma }\) is diagonal. Let \({\varSigma }_d\) denote \({\varSigma }\) truncated to the top *d* singular values. Then, *A* is solved as \(U\sqrt{{\varSigma }_d}\) and \(\mathbf {v}^{t}\) as the *j*-th column of \(\sqrt{{\varSigma }_d}V^\top \). SVD has been used in Lebret and Collobert (2014), Stratos et al. (2015) and Levy et al. (2015). In this setting, our bias bound on natural phrase vectors holds for every lexicon size *n*, so \(||\mathbf {v}^{\{st\}}-\frac{1}{2}(\mathbf {v}^{s}+\mathbf {v}^{t}) ||\) is bounded in turn because *A* is a bounded operator. This bound suggests that word embeddings trained by SVD preserve additive compositionality.
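The closed-form solution above can be reproduced in a few lines of NumPy (a sketch on random data; in practice the columns of the matrix would be the vectors \(\mathbf {w}^t_n\)):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 60, 40, 5                      # lexicon size, #targets, reduced dim
W = rng.standard_normal((n, m))          # column j stands in for w^t

U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :d] * np.sqrt(S[:d])            # A = U * sqrt(Sigma_d)
V = np.sqrt(S[:d])[:, None] * Vt[:d]     # column j = embedding v^t

# A @ V is the best rank-d approximation of W in L2 (Eckart-Young),
# i.e. the closed-form minimizer of the squared loss
assert np.allclose(A @ V, (U[:, :d] * S[:d]) @ Vt[:d])

# A is a bounded operator: its operator norm is the sqrt of the top singular value
assert np.isclose(np.linalg.norm(A, 2), np.sqrt(S[0]))
```

The second assertion makes the "bounded operator" remark concrete: any bound on \(||A\mathbf {x}||\) transfers between the two spaces up to the (finite) operator norm of *A*.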

However, the same argument does not directly apply to other loss functions because a general loss may not satisfy a triangle inequality, and a bound for Euclidean distance may not always transform to a bound for the loss, or vice versa. Specifically, we describe two widely used alternative embeddings in the following and discuss the effects of their loss.

*GloVe* The GloVe model (Pennington et al. 2014) trains a dimension reduction for vector representations with \(F(x):=\ln {x}\). Let \(v^t_i\) be the *i*-th entry of \(A\mathbf {v}^t\), and \(C^t_i\) the co-occurrence count. Then, the loss function of GloVe is a weighted \(L^2\)-loss, where the weight *f* is a function of \(C^t_i\), set to a constant when \(C^t_i\) is larger than a threshold and decreasing to 0 as \(C^t_i\rightarrow 0\). In words, GloVe weighs the squared error by the co-occurrence count. To minimize the loss, GloVe uses stochastic gradient descent methods such as AdaGrad (Duchi et al. 2011).

*SGNS* The Skip-Gram with Negative Sampling (SGNS) model (Mikolov et al. 2013a) also trains a dimension reduction with \(F(x):=\ln {x}\). The training is based on the Noise Contrastive Estimation (NCE) (Gutmann and Hyvärinen 2012), so its loss function has two parameters, the number *k* of noise samples per data point, and the noise distribution \(p^{\text {noise}}_{i,n}\).

### Claim 4

Let \(v^t_i\) be the *i*-th entry of \(A\mathbf {v}^t\). In Fig. 4, we plot the loss function of SGNS on the *y*-axis against \(v^t_i-w^t_{i,n}\) on the *x*-axis. Note that the graph grows faster at \(x\rightarrow +\infty \) than \(x\rightarrow -\infty \), suggesting that an overestimation of \(w^t_{i,n}\) will be punished more than an underestimation. In addition, the loss function weighs more on high co-occurrence probabilities, as indicated by the \(p^t_{i,n}\) coefficient in the equation of the limit curve (Fig. 4). Thus, the SGNS loss tends to enforce underestimation of \(w^t_{i,n}\) for frequent context words (as overestimation is costly), and to compensate \(w^t_{i,n}\) for rare ones (i.e., overestimation on rare context words is affordable and will be done if necessary). This is a special property of SGNS which might have some smoothing effect.

Compared to SVD, both the loss functions of GloVe and SGNS weigh less on rare context words. As a result, the trained \(A\mathbf {v}^t\) may fail to precisely approximate the low co-occurrence part of \(\mathbf {w}^t_{n}\). As we discussed in Sect. 2.5, entries corresponding to low-frequency words dominate the behavior of vector representations; thus, failing to precisely approximate this part might hinder the inheritance of additive compositionality from high-dimensional vector representations to low-dimensional embeddings. Therefore, we conjecture that word vectors trained by GloVe or SGNS might exhibit less additive compositionality compared to SVD, and the composition might be less respectful of our bias bound.

The previous discussion is only exploratory and does not fully reflect practice because, after \(\mathbf {v}^t\) is trained by dimension reduction, one usually re-scales the norms of all \(\mathbf {v}^t\) to 1, and then uses the normalized vectors in additive composition. It is not clear why this normalization step usually results in better performance.

Nevertheless, in our experiments (Sect. 5.5), we find that word vectors trained by SVD preserve our bias bound well in additive composition, even after the normalization step is conducted. In contrast, vectors trained by GloVe or SGNS are less respectful of the bound. Further, in extrinsic evaluations (Sect. 6) we show that vectors trained by SVD can indeed be more additively compositional, as judged by human annotators.

## 4 Related work

Additive composition is a classical approach to approximating meanings of phrases and/or sentences (Foltz et al. 1998; Landauer and Dumais 1997). Compared to other composition operations, vector addition/average has either served as a strong baseline (Mitchell and Lapata 2008; Takase et al. 2016), or remained one of the most competitive methods until recently (Banea et al. 2014). Additive composition has also been successfully integrated into several NLP systems. For example, Tian et al. (2014) use vector additions for assessing semantic similarities between paraphrase candidates in a logic-based textual entailment recognition system (e.g. the similarity between “*blamed for death*” and “*cause loss of life*” is calculated by the cosine similarity between the sums of word vectors \(\mathbf {v}^{\text {blame}}+\mathbf {v}^{\text {death}}\) and \(\mathbf {v}^{\text {cause}}+\mathbf {v}^{\text {loss}}+\mathbf {v}^{\text {life}}\)); in Iyyer et al. (2015), the average of the vectors of words in a whole sentence/document is fed into a deep neural network for sentiment analysis and question answering, which achieves near state-of-the-art performance with minimal training time. Other semantic relations are handled by vector additions as well, such as word analogy (e.g. the vector \(\mathbf {v}^{\text {king}}-\mathbf {v}^{\text {man}}+\mathbf {v}^{\text {woman}}\) is close to \(\mathbf {v}^{\text {queen}}\), suggesting “*man* is to *king* as *woman* is to *queen*”, see Mikolov et al. 2013b), and synonymy (i.e. a set of synonyms can be represented by the sum of the vectors of the words in the set, see Rothe and Schütze 2015). We expect all these uses to be related to our theory of additive composition; for example, a link between additive composition and word analogy is hypothesized in Sect. 6.2. Ultimately, our theory may provide new insights into previous works, for instance about how to construct word vectors.

Lack of syntactic or word-order dependent effects on meaning is considered one of the most important issues of additive composition (Landauer 2002). Driven by this point of view, a number of advanced compositional frameworks have been proposed to cope with word order and/or syntactic information (Mitchell and Lapata 2008; Zanzotto et al. 2010; Baroni and Zamparelli 2010; Coecke et al. 2010; Grefenstette and Sadrzadeh 2011; Socher et al. 2012; Paperno et al. 2014; Hashimoto et al. 2014). The usual approach is to introduce new parameters that represent different word positions or syntactic roles. For example, given a two-word phrase, one can first transform the two word vectors by different matrices and then add the results, so the two matrices are parameters (Mitchell and Lapata 2008); or, regarding different syntactic roles, one can assign matrices to adjectives and use them to modify vectors of nouns (Baroni and Zamparelli 2010); further, one can insert neural network layers between parents and children in a syntactic tree (Socher et al. 2012). An empirical comparison of composition models can be found in Blacoe and Lapata (2012), with an accessible introduction to the literature. One theoretical issue of these methods, however, is the lack of a learning guarantee. In contrast, our proposal of the Near–far Context demonstrates that word order can be handled within an additive compositional framework, parameter-free and with a proven bias bound. Recently, Tian et al. (2016) further extended additive composition to realize a formal semantics framework.

From a wider perspective, constructing and composing vector representations for linguistic sequences have become one of the central techniques in NLP, and a lot of approaches have been explored. Some of them, such as the vectors constructed from probability ratios and composed by multiplications (Mitchell and Lapata 2010), might still be related to additive composition because by taking logarithm, multiplications become additions and probability ratios become PMIs. Other composition methods range from circular convolution (Mitchell and Lapata 2010) to neural networks such as recursive autoencoder (Socher et al. 2011) and long short-term memory (Melamud et al. 2016). Word vectors can be trained jointly with composition parameters (Hashimoto et al. 2014; Pham et al. 2015), and training signals range from surrounding context words (Takase et al. 2016) to supervised labels (Collobert et al. 2011). We believe it is also important to investigate the theoretical aspects of these approaches, which remain largely unclear. As for word vectors, some theoretical works have been done on explaining the errors of dimension reductions of PMI vectors (Arora et al. 2016; Hashimoto et al. 2016).

Error bounds in approximation schemes have been extensively studied in statistical learning theory (Vapnik 1995; Gnecco and Sanguineti 2008), and especially for neural networks (Niyogi and Girosi 1999; Burger and Neubauer 2001). Since we have formalized compositional frameworks as approximation schemes, there is a good chance to apply the theories of approximation error bounds to this problem, especially for advanced compositional frameworks that have many parameters. Though the theories are usually established on general settings, we see a great potential in using properties that are specific to natural language data, as we demonstrate in this work.

There have been consistent efforts toward understanding stochastic behaviors of natural language. Zipf’s Law (Zipf 1935) and its applications (Kobayashi 2014), non-parametric Bayesian language models such as the Hierarchical Pitman–Yor Process (Teh 2006), and the topic model (Blei 2012) might further help refine our theory. For example, it can be fruitful to consider additive composition of topics.

## 5 Experimental verification

### 5.1 Test of independence

In order to test the independence assumptions in our theory, we use Spearman’s \(\rho \) to measure the correlations between random variables. Spearman’s \(\rho \) is the Pearson correlation between rank values, and is invariant under transformation by any monotonic function. One has \(-1\le \rho \le 1\), and if two variables are independent, \(\rho \) should be close to 0.

In our theory, Assumption (B1) of Theorem 1 states that \(p^{\varUpsilon }_{i,n}\) and \(p^{\varUpsilon }_{j,n}\) are independent for each \(1\le i < j \le n\). To test, we calculate the Spearman’s \(\rho \) between (i) \(p^T_{i,n}\) and \(p^T_{j,n}\), and (ii) \(p^{\{ST\}}_{i,n}\) and \(p^{\{ST\}}_{j,n}\), where *T* and \(\{ST\}\) vary over the 16,210 unigram and 45,398 unordered bigram samples respectively. Further, Assumption (C) of Theorem 1 states that for each \(1\le i \le n\), the random variables \(p^{S/T\backslash S}_{i,n}\) and \(p^{T/S\backslash T}_{i,n}\) are independent, whereas \(F(p^{S/T\backslash S}_{i,n}+1/n)\) and \(F(p^{\{ST\}}_{i,n}+1/n)\) have positive correlation. Thus, we check the Spearman’s \(\rho \) between (iii) \(p^{S/T\backslash S}_{i,n}\) and \(p^{T/S\backslash T}_{i,n}\), and (iv) \(p^{S/T\backslash S}_{i,n}\) and \(p^{\{ST\}}_{i,n}\), where \(\{S, T\}\) vary over the 45,398 unordered bigrams. The results are summarized in Fig. 5.

For most *i*-*j* pairs, Fig. 5(i)(ii) suggest that the correlations between \(p^{\varUpsilon }_{i,n}\) and \(p^{\varUpsilon }_{j,n}\) are positive but quite weak (for 70% of the *i*-*j* pairs, the Spearman’s \(\rho \) between \(p^T_{i,n}\) and \(p^T_{j,n}\) is \(0.2\pm 0.05\); and for 90% pairs the Spearman’s \(\rho \) between \(p^{\{ST\}}_{i,n}\) and \(p^{\{ST\}}_{j,n}\) is \(0.1\pm 0.05\)). As a comparison, when *i* and *j* indicate a pair of semantically related context words such as *black* and *white*, the Spearman’s \(\rho \) between \(p^T_{i,n}\) and \(p^T_{j,n}\) is 0.40 and between \(p^{\{ST\}}_{i,n}\) and \(p^{\{ST\}}_{j,n}\) is 0.31. Such examples only contribute to a negligible portion of the whole *i*-*j* pairs, because semantically related pairs are rare.

On the other hand, Fig. 5(iii) shows that \(p^{S/T\backslash S}_{i,n}\) and \(p^{T/S\backslash T}_{i,n}\) have *negative* correlation for most *i*; the Spearman’s \(\rho \) for 66% of the \(p^{S/T\backslash S}_{i,n}\)-and-\(p^{T/S\backslash T}_{i,n}\) pairs is \(-0.2\pm 0.1\). In addition, Fig. 5(iv) confirms that \(p^{S/T\backslash S}_{i,n}\) and \(p^{\{ST\}}_{i,n}\) have positive correlation.

Now, can the observed weak correlations support our assumptions on independence? To test this, one may calculate a *p*-value as the probability of a Spearman’s \(\rho \) being farther from 0 than the observation, making use of the fact that \(\rho \sqrt{\frac{N-2}{1-\rho ^2}}\) approximately follows a Student’s *t*-distribution (where *N* is the sample size). If the *p*-value is small, the Spearman’s \(\rho \) should be considered too far from 0 to support independence. However, this test turns out to be overly strict; for unordered bigrams (i.e. \(N=45398\)), one needs \(|\rho |<0.012\) to make \(p>0.01\). In other words, since the sample size is huge, even weak correlations among samples constitute evidence for rejecting independence as the null hypothesis.
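The computation behind these numbers can be sketched as follows; we use the normal approximation to the *t*-distribution, which is essentially exact at this degree of freedom:

```python
import math

def spearman_p_value(rho, n):
    """Two-sided p-value for H0: independence (rho = 0), using
    t = rho * sqrt((n - 2) / (1 - rho**2)); for large n the Student's
    t-distribution is approximated by a standard normal."""
    t = rho * math.sqrt((n - 2) / (1.0 - rho ** 2))
    return math.erfc(abs(t) / math.sqrt(2.0))  # = 2 * (1 - Phi(|t|))

# With N = 45398 unordered bigrams, even tiny correlations are "significant":
# |rho| must stay below roughly 0.012 for the p-value to exceed 0.01.
```

For instance, `spearman_p_value(0.012, 45398)` is just above 0.01, while at \(\rho =0.2\) (the level seen for many *i*-*j* pairs) the *p*-value is astronomically small.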

Nevertheless, our theoretical analysis is still valid, because the Law of Large Numbers holds even for weakly correlated random variables, and the fact that \(p^{S/T\backslash S}_{i,n}\) and \(p^{T/S\backslash T}_{i,n}\) are *negatively* correlated does not change the direction of our proven inequality. Therefore, our independence assumptions are oversimplifications for language modeling, but the theoretical conclusions and the bias bound are still likely true.

### 5.2 Generalized Zipf’s law

Consider the probability ratio \(p^{\varUpsilon }_{i,n}/p_{i,n}\), where \({\varUpsilon }\) can be a unigram, an ordered bigram or an unordered bigram. Assumption (B) of Theorem 1 states that the ratios \(p^{\varUpsilon }_{i,n}/p_{i,n}\) (\(1\le i\le n\), *n* fixed) can be viewed as independent sample points drawn from distributions that share the same power law tail of index 1. We verify this assumption in the following.

We first estimate the lower bound *m* of the power law behavior. If a random variable *X* obeys a power law, the probability of \(x\le X\) conditioned on \(m\le X\) is given by

\[ \mathbb {P}(x\le X \mid m\le X) = \Bigl (\frac{x}{m}\Bigr )^{-\alpha }, \quad x\ge m. \qquad (14) \]

We estimate \(\alpha \) and *m* from the sample \(p^{\varUpsilon }_{i,n}/p_{i,n}\) \((1\le i\le n)\), using the method of Clauset et al. (2009). Namely, \(\alpha \) is estimated by maximizing the likelihood of the sample, and *m* is sought to minimize the Kolmogorov–Smirnov statistic, which measures how well the theoretical distribution (14) fits the empirical distribution of the sample. After *m* is estimated, we plot all \(p^{\varUpsilon }_{i,n}/p_{i,n}\) greater than *m* in a log-log graph, against their ranking. If the sample points are drawn from a power law, the graph will be a straight line. Since Assumption (B) states that the power law tail is the same for all \({\varUpsilon }\) and has index 1, we should obtain the same straight line for all \({\varUpsilon }\), and the slope of the line should be \(-1\).
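This estimation procedure can be sketched as follows (a simplified version in the spirit of Clauset et al. 2009, not their full implementation; the tail model is \(\mathbb {P}(x\le X \mid m\le X)=(x/m)^{-\alpha }\)):

```python
import math
import random

def fit_alpha(sample, m):
    """Continuous-case MLE of the power-law index on the tail x >= m:
    alpha = k / sum(ln(x / m))."""
    tail = [x for x in sample if x >= m]
    return len(tail) / sum(math.log(x / m) for x in tail)

def ks_statistic(sample, m, alpha):
    """Kolmogorov-Smirnov distance between the empirical tail CDF and
    the model CDF 1 - (x / m) ** (-alpha)."""
    tail = sorted(x for x in sample if x >= m)
    k, d = len(tail), 0.0
    for i, x in enumerate(tail):
        cdf = 1.0 - (x / m) ** (-alpha)
        d = max(d, abs(cdf - i / k), abs((i + 1) / k - cdf))
    return d

def fit_power_law(sample):
    """Scan cutoff candidates m over the sample values; keep the (m, alpha)
    pair whose fitted tail minimizes the KS distance."""
    best = None
    for m in sorted(set(sample)):
        if sum(1 for x in sample if x >= m) < 50:  # tail too small to fit
            break
        alpha = fit_alpha(sample, m)
        d = ks_statistic(sample, m, alpha)
        if best is None or d < best[0]:
            best = (d, m, alpha)
    return best[1], best[2]

# Sanity check on synthetic data: a Pareto sample with true index alpha = 1.
random.seed(0)
pareto = [1.0 / (1.0 - random.random()) for _ in range(20000)]
alpha_mle = fit_alpha(pareto, 1.0)
m_hat, alpha_hat = fit_power_law(pareto[:2000])
```

On the synthetic Pareto sample, the MLE recovers an index close to the true value 1, matching the \(-1\) slope expected in the log-log plots.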

In Fig. 6, we show a log-log graph of the *x*-th largest probability ratios across different \({\varUpsilon }\). The figure shows that for each target type, most data points lie within a narrow stripe of roughly the same shape, suggesting that the distribution of probability ratios for each fixed \({\varUpsilon }\) is approximately the same. In addition, the shape can be roughly approximated by a straight line with slope \(-1\), which suggests that the distribution is a power law of index 1, verifying Assumption (B).

As a concrete example, in Fig. 7 we show a log-log graph of the *x*-th largest probability ratios \(p^s_{i,n}/p_{i,n}\) and \(p^t_{i,n}/p_{i,n}\) \((1\le i\le n)\), where *s* and *t* are two individual word targets. The red points are cut off because their *y* values are lower than the boundaries of power law behavior estimated from data. The blue and green points are the power law tails.

Next, we use \(\chi ^2\)-tests to check the power law hypothesis on individual distributions. We fix an index *i* and categorize the values of \(X:=p^T_{i,n}/p_{i,n}\), where *T* varies over the 16,210 unigram samples. According to Figs. 6 and 7, we assume that *X* has a power law tail starting from \(X \ge 2^4\). Thus, we divide the values of *X* into 5 categories, namely \(X<2^4\), \(2^{4+k}\le X < 2^{5+k}\) \((k=0,1,2)\), and \(X\ge 2^7\). We count frequencies in each category and choose a parameter \(\frac{1}{2^4}\le m\le \frac{1}{2}\) by minimizing the \(\chi ^2\) statistic. The parameter \(\alpha \) is fixed to 1. Then, the degree of freedom is calculated as \(5-1-1=3\), and the \(\chi ^2\)-test produces a *p*-value indicating how well the power law hypothesis fits the observed frequencies. We decide that the test is passed if \(p\ge 0.0001\).
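The test can be sketched as follows (a simplified re-implementation under one consistent reading of the procedure, with tail model \(\mathbb {P}(X\ge x)=m/x\) for \(\alpha =1\); the example frequencies are made up to match the model, not taken from Table 4):

```python
import math

def chi2_stat(observed, m):
    """Chi-squared statistic against the binned power-law model with
    alpha = 1, i.e. P(X >= x) = m / x on the tail x >= 2**4."""
    n = sum(observed)
    probs = [1.0 - m / 16, m / 32, m / 64, m / 128, m / 128]
    return sum((o - n * p) ** 2 / (n * p) for o, p in zip(observed, probs))

def chi2_sf_df3(x):
    """Survival function of the chi-squared distribution with 3 degrees
    of freedom, in closed form (no scipy needed)."""
    return math.erfc(math.sqrt(x / 2)) + math.sqrt(2 * x / math.pi) * math.exp(-x / 2)

def power_law_test(observed, grid=200):
    """Pick m in [1/16, 1/2] minimizing the statistic; return the minimal
    chi-squared value and its p-value at 5 - 1 - 1 = 3 degrees of freedom."""
    lo, hi = 1.0 / 16, 1.0 / 2
    stat = min(chi2_stat(observed, lo + (hi - lo) * k / grid) for k in range(grid + 1))
    return stat, chi2_sf_df3(stat)

# Frequencies drawn exactly from the model (m = 0.25) pass the test easily:
stat, p = power_law_test([15957, 127, 63, 32, 31])
```

In contrast, a row with all mass in the first bin, like the function words in Table 4, yields \(p<0.0001\) and fails the test.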

**Table 4** \(\chi ^2\)-tests on distributions of \(X:=p^T_{i,n}/p_{i,n}\)

| *i* | Word | \(X<2^4\) | \(2^4\le X<2^5\) | \(2^5\le X<2^6\) | \(2^6\le X<2^7\) | \(X\ge 2^7\) | *p*-value |
|---|---|---|---|---|---|---|---|
| 1 | | 16210 | 0 | 0 | 0 | 0 | \({<}0.0001\) |
| 2 | | 16210 | 0 | 0 | 0 | 0 | \({<}0.0001\) |
| 101 | | 16167 | 29 | 14 | 0 | 0 | 0.0010 |
| 121 | | 16173 | 29 | 6 | 2 | 0 | 0.0003 |
| 142 | | 16169 | 28 | 9 | 3 | 1 | 0.0059 |
| 3075 | | 15992 | 194 | 23 | 1 | 0 | \({<}0.0001\) |
| 3076 | | 15914 | 176 | 77 | 27 | 16 | 0.0002 |
| 3077 | | 15969 | 167 | 43 | 18 | 13 | \({<}0.0001\) |
| 3078 | | 15920 | 206 | 54 | 24 | 6 | \({<}0.0001\) |
| 3079 | | 15859 | 194 | 93 | 39 | 25 | 0.0126 |
| 3080 | | 16053 | 124 | 29 | 4 | 0 | \({<}0.0001\) |

Among all indices \(1\le i \le n\), 16% of the distributions pass the test. A selection of examples is shown in Table 4. It turns out that many function words, such as “*the*” and “*be*”, cannot pass the test (with all values of *X* less than \(2^4\)), because the occurring probabilities of these words do not change much, whether or not conditioned on a target. An exception is that several prepositions, such as “*between*” and “*under*”, do pass. On the other hand, as *i* becomes larger (i.e. \(p_{i,n}\) becomes smaller), more of the distributions of \(p^T_{i,n}/p_{i,n}\) become distorted, similar to the green dots in Fig. 7, which fail the test. As Table 4 suggests, no obvious linguistic factor seems able to explain which word would pass. However, Fig. 6 still confirms that the averaged behavior of these distributions obeys a power law.

### 5.3 The choice of function *F*

Next, we investigate the choice of function *F*. Recall that *F* is parameterized by \(\lambda \) as defined in Definition 10. In Sect. 2.4, we have shown that \(\mathbb {E}[F(X)^2]<\infty \) is a necessary and sufficient condition for the norms of natural phrase vectors to converge to 1. If *X* has a power law tail of index \(\alpha \), then the condition for \(\mathbb {E}[F(X)^2]<\infty \) is \(\lambda <\alpha /2\). So if we construct vector representations with different \(\lambda \), only those vectors satisfying \(\lambda <\alpha /2\) will have convergent norms. We verify this prediction first.
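To spell out the tail-integral reasoning behind the condition \(\lambda <\alpha /2\) (assuming that \(F(x)\) grows like \(x^{\lambda }\) for large \(x\), with \(\lambda =0\) corresponding to the logarithm):

```latex
% A power law tail of index alpha has density ~ c x^{-alpha-1} for large x, so
\mathbb{E}\bigl[F(X)^2\bigr]
  \;\sim\; \int^{\infty} x^{2\lambda}\, c\, x^{-\alpha-1}\, dx
  \;=\; c \int^{\infty} x^{2\lambda-\alpha-1}\, dx ,
% which is finite iff the exponent satisfies 2\lambda - \alpha - 1 < -1,
% that is, iff \lambda < \alpha/2.
```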

In Fig. 8, we plot on the *y*-axis the standard deviation of the norms of natural phrase vectors, against the \(\lambda \) values used for constructing the vectors. We tried \(\lambda =0,0.1,\ldots ,1\). As the graph shows, as long as \(\lambda < 0.5\), most of the norms lie within the range of \(1\pm 0.1\); in contrast, the observed standard deviation quickly explodes as \(\lambda \) grows larger. In addition, the transition point appears to be slightly larger than 0.5, which is consistent with the fact that the observed \(\alpha \) is slightly larger than 1 (i.e., the slopes \(-1/\alpha \) of the power law tails in Figs. 6 and 7 appear to be slightly more gradual than \(-1\)).
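This transition can be reproduced on synthetic data (a toy simulation, not the BNC experiment): draw tail values *X* from a Pareto distribution of index \(\alpha =1\) and compare the dispersion of \(X^{\lambda }\) for \(\lambda \) on either side of \(\alpha /2\).

```python
import random
import statistics

def dispersion_of_transformed_tail(lam, size=5000, trials=20, seed=0):
    """Average sample standard deviation of X**lam over repeated draws,
    where X = 1 / (1 - U) is Pareto-distributed with index alpha = 1."""
    rng = random.Random(seed)
    stds = []
    for _ in range(trials):
        xs = [(1.0 / (1.0 - rng.random())) ** lam for _ in range(size)]
        stds.append(statistics.pstdev(xs))
    return statistics.mean(stds)

# lam = 0.25 (< alpha/2): second moment finite, dispersion stays moderate.
# lam = 0.75 (> alpha/2): second moment infinite, dispersion blows up.
```

With `lam = 0.25` the dispersion stays moderate; with `lam = 0.75` the infinite second moment makes it explode by orders of magnitude, mirroring the transition in Fig. 8.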

To confirm that the above observation represents a general principle across different corpora, we also conduct experiments on English Wikipedia.^{6} We use WikiExtractor^{7} to extract texts from a 2015-12-01 dump, and Stanford CoreNLP^{8} for sentence splitting. The corpus has 1300M word tokens (about 13 times the size of BNC), and we use words in their surface forms instead of lemmas. We extract words and unordered bigrams which occur more than 500 times, resulting in about 85K words and 264K bigrams. Then, we additionally make two smaller corpora by uniformly sampling 10% and 1% of the sentences in Wikipedia. For each corpus, we construct natural phrase vectors and calculate the standard deviation of their norms as before. The results are shown in Fig. 9. Again, we find that when one sets \(F(p):=\ln p\), the standard deviation is around 0.1; in contrast, when \(F(p):=p\), the standard deviation is above 0.5. As the corpus size increases, the standard deviation slightly decreases; at Wikipedia’s full size, the standard deviation for \(F(p):=\ln p\) descends below 0.095.

Next, we examine how the choice of *F* affects the Euclidean distance \(\mathscr {B}^{\{st\}}_n\) between additively composed vectors and natural phrase vectors. In Fig. 10, we plot this approximation error for four choices of *F*, as indicated above the graphs. For the choices (a) \(F(p):=\ln {p}\) and (b) \(F(p):=\sqrt{p}\), we verify the upper bound \(y\le x\) as suggested by Claim 1. In contrast, the approximation errors seem no longer bounded when (c) \(F(p):=p\) or (d) \(F(p):=p\ln {p}\).

The extrinsic evaluations in Sect. 6 show the choice of *F* to be a crucial factor there as well: while \(F(p):=\ln {p}\) and \(F(p):=\sqrt{p}\) evaluate similarly well, \(F(p):=p\) and \(F(p):=p\ln {p}\) do much worse. This suggests that our bias bound indeed has the power of predicting additive compositionality, demonstrating the usefulness of our theory. In contrast, the average level of approximation errors for observed bigrams (shown as green dashed lines in Fig. 10) is less predictive, as the poor choices \(F(p):=p\) and \(F(p):=p\ln {p}\) actually have lower average error levels. This emphasizes a particular caveat: choosing composition operations by minimizing the observed average error may not always be justified. Here, if we considered the function *F* as a parameter in additive composition and chose the one with the lowest observed average error, we would end up with the worst setting \(F(p):=p\). This shows how important a learning theory is for research on composition.

### 5.4 Handling word order in additive composition

In this section, we verify that the additive composition of Near–far Context vectors (Sect. 3.2) approximates natural vectors of *ordered* bigrams. We plot the approximation errors for ordered bigrams *st*, under two settings of *F*, namely \(F(p):=\ln {p}\) (Fig. 11) and \(F(p):=\sqrt{p}\) (Fig. 12). In both cases, the approximation errors in (a) are bounded by \(y\le x\) (red solid lines) as suggested by Claim 3. In contrast, the approximation errors for order-reversed bigrams exceed this bound, showing that the additive composition of Near–far Context vectors indeed recognizes word order.

Table 5 lists the most similar word pairs assessed by additive compositions of Near–far Context vectors. For example, “*pose problem*” is near to “*arise dilemma*” but not to “*dilemma arise*”, and “*problem pose*” is near to “*difficulty cause*” but not to “*cause difficulty*”. It is also noteworthy that “*not enough*” is similar to “*always want*”, showing some degree of semantic compositionality beyond the word level. We believe this ability to compute meanings of arbitrary ordered bigrams is already highly useful, because only a few bigrams can be directly observed in real corpora.

**Table 5** Top 8 similar word pairs, assessed by cosine similarities between additive compositions of Near–far Context vectors

### 5.5 Dimension reduction

Finally, we test how dimension reduction affects additive compositionality. We consider *normalized* word vectors \(\mathbf {v}^t\) that are constructed from the distributional vectors \(\mathbf {w}^t_n\) by reducing to 200 dimensions using different reduction methods. We use SVD in (a) and (b) of Fig. 13, with \(F(p):=\ln {p}\) in (a) and \(F(p):=\sqrt{p}\) in (b). The GloVe model is shown in (c) and SGNS in (d), both of them using \(F(p):=\ln {p}\). For each unordered bigram \(\{st\}\), we plot the approximation error of the additive composition against the corresponding natural phrase vector.
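A minimal sketch of the SVD route (illustrative only: the input stands for the matrix of transformed co-occurrence values, here filled with random numbers, and scaling the left singular vectors by the singular values is one common design choice):

```python
import numpy as np

def svd_word_vectors(cooc, dim):
    """Reduce a (words x contexts) matrix of transformed co-occurrence
    values to dim-dimensional normalized word vectors via truncated SVD."""
    u, s, _vt = np.linalg.svd(cooc, full_matrices=False)
    vecs = u[:, :dim] * s[:dim]          # scale singular vectors by values
    norms = np.linalg.norm(vecs, axis=1, keepdims=True)
    return vecs / norms                  # normalize each word vector

# Example: a random stand-in for the transformed co-occurrence matrix.
rng = np.random.default_rng(0)
vecs = svd_word_vectors(rng.standard_normal((10, 30)), 5)
```

The resulting rows are unit-norm word vectors; additive composition then simply sums two rows and compares the sum to others by cosine similarity.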

## 6 Extrinsic evaluation of additive compositionality

In this section, we test additive composition on human annotated data sets to see if our theoretical predictions correlate with human judgments. We conduct a phrase similarity task and a word analogy task.

### 6.1 Phrase similarity

In a data set^{9} created by Mitchell and Lapata (2010), phrase pairs are annotated with similarity scores. Each instance in the data is a (*phrase1*, *phrase2*, *similarity*) triplet, and each phrase consists of two words. The similarity score is annotated by humans, ranging from 1 to 7, indicating how similar the meanings of the two phrases are. For example, one annotator assessed the similarity between “*vast amount*” and “*large quantity*” as 7 (the highest), and the similarity between “*hear word*” and “*remember name*” as 1 (the lowest). Phrases are divided into three categories: Verb-Object, Compound Noun, and Adjective-Noun. Each category has 108 phrase pairs, and they are annotated by 18 human participants (i.e., 1,944 instances in each category). Using this data set, we can compare the human ranking of phrase similarity with the one calculated from cosine similarities between vector-based compositions. We use Spearman’s \(\rho \) to measure how correlated the two rankings are.
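A toy sketch of the scoring step (the 2-dimensional vectors below are made up for illustration; the real experiments use vectors trained from BNC):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def phrase_similarity(vecs, phrase1, phrase2):
    """Additive composition: score a phrase pair by the cosine between
    the sums of the constituent word vectors."""
    (s1, t1), (s2, t2) = phrase1, phrase2
    comp1 = [a + b for a, b in zip(vecs[s1], vecs[t1])]
    comp2 = [a + b for a, b in zip(vecs[s2], vecs[t2])]
    return cosine(comp1, comp2)

# Hypothetical 2-dimensional word vectors, for illustration only.
toy = {
    "vast": [1.0, 0.1], "amount": [0.2, 1.0],
    "large": [0.9, 0.2], "quantity": [0.3, 0.9],
    "hear": [-1.0, 0.2], "word": [0.1, -1.0],
    "remember": [0.8, 0.3], "name": [0.2, 0.9],
}
```

Ranking all annotated pairs by this score and correlating the ranking with the human similarity ranking via Spearman’s \(\rho \) yields the numbers reported below.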

Vector representations are constructed from BNC, with the same settings described in Sect. 5. We plot in Fig. 14 the distributions of how many times the phrases in the data set occur as bigrams in BNC. The figure indicates that a large portion of the phrases are rare or unseen as bigrams, so their meanings cannot be directly assessed as natural vectors from the corpus. Therefore, the data is suitable for testing compositions of word vectors.

For training word embeddings, we use the random projection algorithm (Halko et al. 2011) for SVD, and Stochastic Gradient Descent (SGD) (Bottou 2012) for SGNS and GloVe. Since these are randomized algorithms, we run each test 20 times and report the mean performance with standard deviation. We tune SGD learning rates by checking convergence of the objectives, and get slightly better results than the default training parameters set in the software of SGNS^{10} and GloVe.^{11}

**Table 6** Spearman’s \(\rho \) in the phrase similarity task

| | Verb-object | Compound noun | Adjective-noun |
|---|---|---|---|
| Ordinary-id-SVD | \(.4029 \pm .0009\) | \(.4275 \pm .0009\) | \(.4160 \pm .0009\) |
| Ordinary-xlnx-SVD | \(.4204 \pm .0011\) | \(.4728 \pm .0013\) | \(.4511 \pm .0012\) |
| Ordinary-ln-SVD | \(\mathbf {.4369 \pm .0022}\) | \(\mathbf {.5187 \pm .0016}\) | \(.4604 \pm .0033\) |
| Ordinary-sqrt-SVD | \(.4318 \pm .0019\) | \(.5051 \pm .0020\) | \(.4790 \pm .0018\) |
| Nearfar-ln-SVD | \(.4204 \pm .0018\) | \(.5135 \pm .0020\) | \(.4491 \pm .0028\) |
| Nearfar-sqrt-SVD | \(\mathbf {.4359 \pm .0020}\) | \(\mathbf {.5193 \pm .0024}\) | \(.4873 \pm .0011\) |
| SGNS | \(.4273 \pm .0035\) | \(.4977 \pm .0025\) | \(\mathbf {.5125 \pm .0032}\) |
| GloVe | \(.4014 \pm .0046\) | \(.4986 \pm .0053\) | \(.4308 \pm .0062\) |
| Tensor Product | \(.4092 \pm .0033\) | \(.4801 \pm .0035\) | \(.4348 \pm .0048\) |
| Upper Bound | .691 | .693 | .715 |
| Muraoka et al. | .430 | .481 | .469 |
| Deep Neural | .305 | .385 | .207 |

The test results are shown in Table 6. We compare different settings of the function *F*, Ordinary and Near–far Contexts, and different dimension reductions. When using ordinary contexts and SVD reduction, we find that the functions ln (\(F(p):=\ln {p}\)) and sqrt (\(F(p):=\sqrt{p}\)) perform similarly well, whereas id (\(F(p):=p\)) and xlnx (\(F(p):=p\ln {p}\)) are much worse, confirming our predictions in Sect. 3.1. As for Near–far Context vectors (Sect. 3.2), we find that the Nearfar-sqrt-SVD setting performs strongly, demonstrating the improvements Near–far contexts bring to additive composition. However, we note that Nearfar-ln-SVD is worse. One reason could be that the function ln emphasizes lower co-occurrence probabilities, which combined with Near–far labels could make the vectors more prone to data sparseness; relatedly, some important syntactic markers might be obscured because they occur with high frequency. Finally, we note that SVD is consistently good and usually better than GloVe and SGNS, which supports our arguments in Sect. 3.3.

We report some additional test results for reference. In Table 6, the “Tensor Product” row shows the results of composing Ordinary-ln-SVD vectors by tensor product instead of average, which means that the similarity between two phrases “\(s_1\) \(t_1\)” and “\(s_2\) \(t_2\)” is assessed by the product of the word cosine similarities \(\cos (s_1,s_2)\cdot \cos (t_1,t_2)\). The numbers are worse than additive composition, suggesting that a similar phrase might be something more than a sequence of individually similar words. In the “Upper Bound” row, we show the best possible Spearman’s \(\rho \) for this task, which is less than 1 because there are disagreements between human annotators. Compared to these numbers, we find that the performance of additive composition on compound nouns is remarkably high. Furthermore, in “Muraoka et al.” we cite the best results reported in Muraoka et al. (2014), which tested several compositional frameworks. In “Deep Neural”, we also test additive composition of word vectors trained by deep neural networks (normalized 200-dimensional vectors trained by Turian et al. 2010, using the model of Collobert et al. 2011). These results cannot be directly compared to each other because they construct vector representations from different corpora; but we can fairly say that additive composition is still a powerful method for assessing phrase similarity, and linear dimension reductions might be more suitable than deep neural networks for training additively compositional word vectors. Therefore, our theory of additive composition addresses a state-of-the-art approach.

### 6.2 Word analogy

Word analogy is the task of solving questions of the form “*a* is to *b* as *c* is to __?”, and an elegant approach proposed by Mikolov et al. (2013b) is to find the word vector most similar to \(\mathbf {v}^b - \mathbf {v}^a + \mathbf {v}^c\) . For example, in order to answer the question “*man* is to *king* as *woman* is to __?”, one needs to calculate \(\mathbf {v}^{\text {king}} - \mathbf {v}^{\text {man}} + \mathbf {v}^{\text {woman}}\) and find out its most similar word vector, which will probably turn out to be \(\mathbf {v}^{\text {queen}}\), indicating the correct answer *queen*.
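A minimal sketch of this nearest-neighbor procedure (the toy vectors below are hypothetical, with coordinates standing for royal/female/human aspects; they are not trained vectors):

```python
import math

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv)

def solve_analogy(vecs, a, b, c):
    """Answer 'a is to b as c is to __?': find the word whose vector is
    most cosine-similar to v_b - v_a + v_c, excluding a, b and c."""
    target = [vb - va + vc for va, vb, vc in zip(vecs[a], vecs[b], vecs[c])]
    return max((w for w in vecs if w not in (a, b, c)),
               key=lambda w: cosine(target, vecs[w]))

# Toy vectors with coordinates (royal, female, human) -- hypothetical.
toy = {
    "man":   [0.0, 0.0, 1.0],
    "woman": [0.0, 1.0, 1.0],
    "king":  [1.0, 0.0, 1.0],
    "queen": [1.0, 1.0, 1.0],
    "apple": [0.0, 0.1, 0.1],
}
```

Here `solve_analogy(toy, "man", "king", "woman")` returns `"queen"`, since the target vector \([1,1,1]\) matches the royal+female+human decomposition exactly.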

As pointed out by Levy and Goldberg (2014a), the key to solving analogy questions is the ability to “add” (resp. “subtract”) some aspects to (resp. from) a concept. For example, *king* is a concept of *human* that has the aspects of being *royal* and *male*. If we can “subtract” the aspect *male* from *king* and “add” the aspect *female* to it, then we will probably get the concept *queen*. Thus, the vector-based solution proposed by Mikolov et al. (2013b) is essentially assuming that “adding” and “subtracting” aspects can be realized by adding and subtracting word vectors. Why is this assumption admissible?

When an aspect is represented by an adjective (e.g. *male*) and a concept is represented by a noun (e.g. *human*), we can usually “add” the aspect to the concept by simply arranging the adjective and the noun to form a phrase (e.g. *male human*). Therefore, as the meaning of the phrase can be calculated by additive composition (e.g. \(\mathbf {v}^{\text {male}}+\mathbf {v}^{\text {human}}\)), we have indeed realized the “addition” of aspects by addition of word vectors. Specifically, since \(\textit{man}\approx \textit{male human}\), \(\textit{king}\approx \textit{royal male human}\), \(\textit{woman}\approx \textit{female human}\) and \(\textit{queen}\approx \textit{royal female human}\), we expect the following by additive composition of phrases:

\[ \mathbf {v}^{\text {king}} - \mathbf {v}^{\text {man}} + \mathbf {v}^{\text {woman}} \approx (\mathbf {v}^{\text {royal}} + \mathbf {v}^{\text {male}} + \mathbf {v}^{\text {human}}) - (\mathbf {v}^{\text {male}} + \mathbf {v}^{\text {human}}) + (\mathbf {v}^{\text {female}} + \mathbf {v}^{\text {human}}) \approx \mathbf {v}^{\text {queen}}. \]

We conduct experiments on the Msr^{12} (Mikolov et al. 2013b) and Google^{13} (Mikolov et al. 2013a) data sets. Each instance in the data is a 4-tuple of words subject to “*a* is to *b* as *c* is to *d*”, and the task is to find out *d* from *a*, *b* and *c*. We train word vectors with the same settings described in Sect. 5, but using surface forms instead of lemmatized words in BNC. Tuples with out-of-vocabulary words are removed from the data, which results in 4382 tuples in Msr and 8906 tuples in Google.^{14}

**Table 7** Accuracy (%) in the word analogy task

| | id-SVD | xlnx-SVD | ln-SVD | sqrt-SVD | SGNS | GloVe |
|---|---|---|---|---|---|---|
| Google | \(19.43\pm .06\) | \(32.47\pm .10\) | \(\mathbf {52.04\pm .36}\) | \(51.28\pm .25\) | \(45.16\pm .44\) | \(48.39\pm .44\) |
| Msr | \(17.36\pm .06\) | \(33.85\pm .12\) | \(\mathbf {66.67\pm .26}\) | \(60.93\pm .25\) | \(55.56\pm .30\) | \(65.05\pm .55\) |

The test results are shown in Table 7. Again, we find that ln and sqrt perform similarly well whereas id and xlnx are worse, confirming that the choice of function *F* can drastically affect performance on the word analogy task as well, which we believe is related to additive compositionality. In addition, we confirm that SVD can perform better than SGNS and GloVe, which further supports our conjecture that vectors trained by SVD might be more compatible with additive composition.

## 7 Conclusion

In this article, we have developed a theory of additive composition regarding its bias. The theory has explained why and how additive composition works, making useful suggestions about improving additive compositionality, which include the choice of a transformation function, the awareness of word order, and the dimension reduction methods. Predictions made by our theory have been verified experimentally, and shown positive correlations with human judgments. In short, we have revealed the mechanism of additive composition.

However, we note that our theory is not a “proof” that additive composition is a “good” compositional framework. As is usual for generalization error bounds in machine learning theory, our bound on the bias does not show whether additive composition is “good”; rather, it specifies factors that can affect the errors. If we had generalization error bounds for other composition operations, a comparison between such bounds could bring useful insights into the choice of compositional frameworks in specific cases. We expect our bias bound to inspire more results in the research of semantic composition.

Moreover, we believe this line of theoretical research can be pursued further. In computational linguistics, the idea of treating semantics and semantic relations by algebraic operations on distributional context vectors is relatively new (Clarke 2012). Therefore, the relation between linguistic theories and our approximation theory of semantic composition is left largely unexplored. For example, the intuitive distinction between compositional (e.g. *high price*) and non-compositional (e.g. *white lie*) phrases is currently ignored in our theory. Our bias bound treats both cases by a single collocation measure. Can one improve the bound by taking account of this distinction, and/or other kinds of linguistic knowledge? This is an intriguing question for future work.

## Footnotes

- 1.
Unlike natural vectors which always lie in the same space as word vectors, some compositional frameworks construct meanings of phrases in different spaces. Nevertheless, we argue that even in such cases it is reasonable to require some mappings to a common space, because humans can usually compare meanings of a word and a phrase. Then, by considering distances between mapped images of composed vectors and natural vectors, we can define bias and call for theoretical analysis.

- 2.
Or it should be \(\mathbf {w}^{st}_{n}\) if one cares about word order, which we will discuss in Sect. 3.2.

- 3.
This is reasonable, because \(p^{{\varUpsilon }}_{i,n}\) is likely to be at the same scale as \(p_{i,n}\), whereas \(p_{i,n}\) varies for different *i*.

- 4.
The assumption can further be relaxed to \(\lim \limits _{x\rightarrow \infty }x\mathbb {P}(x\le X)=\xi \). We only consider (B2) for simplicity.

- 5.
Namely, the constant \(F(p_{i,n}\beta +1/n)-a^{\varUpsilon }_n -b_{i,n}\). As a further clue, in the upcoming Theorem 2 we will prove that \(a^{\varUpsilon }_n\) can be taken as 0, and \(b_{i,n}\) as \(\mathbb {E}\bigl [F(p^{\{ST\}}_{i,n}+1/n)\bigr ]\) which is in the same scale as \(F(p_{i,n}\beta +1/n)\).

- 6.
- 7.
- 8.
- 9.
- 10.
- 11.
- 12.
- 13.
- 14.
These are about half the size of the original data sets.

## Notes

### Acknowledgements

This work was supported by CREST, JST. We thank the anonymous reviewers for helpful comments; we thank Daichi Mochihashi, Gemma Boleda, and Percy Liang for kind advice; and we thank Naho Orita, Yuichiro Matsubayashi, and Koji Matsuda for discussions on early drafts of this work.

## References

- Arora, S., Li, Y., Liang, Y., & Ma, T. (2016). A latent variable model approach to PMI-based word embeddings. *Transactions of the Association for Computational Linguistics*, *4*, 385–399.
- Banea, C., Chen, D., Mihalcea, R., Cardie, C., & Wiebe, J. (2014). SimCompass: Using deep learning word embeddings to assess cross-level similarity. In *Proceedings of SemEval*.
- Baroni, M., & Zamparelli, R. (2010). Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space. In *Proceedings of EMNLP*.
- Blacoe, W., & Lapata, M. (2012). A comparison of vector-based representations for semantic composition. In *Proceedings of EMNLP*.
- Blei, D. M. (2012). Probabilistic topic models. *Communications of the ACM*, *55*(4), 77–84.
- Boleda, G., Baroni, M., Pham, T. N., & McNally, L. (2013). Intensionality was only alleged: On adjective-noun composition in distributional semantics. In *Proceedings of IWCS*.
- Bottou, L. (2012). Stochastic gradient descent tricks. In G. Montavon, G. B. Orr, & K. R. Müller (Eds.), *Neural networks: Tricks of the trade*. Berlin: Springer.
- Burger, M., & Neubauer, A. (2001). Error bounds for approximation with neural networks. *Journal of Approximation Theory*, *112*(2), 235–250.
- Church, K. W., & Hanks, P. (1990). Word association norms, mutual information, and lexicography. *Computational Linguistics*, *16*(1), 22–29.
- Clarke, D. (2012). A context-theoretic framework for compositionality in distributional semantics. *Computational Linguistics*, *38*(1), 41–47.
- Clauset, A., Shalizi, C. R., & Newman, M. E. J. (2009). Power-law distributions in empirical data. *SIAM Review*, *51*(4), 661–703.
- Coecke, B., Sadrzadeh, M., & Clark, S. (2010). Mathematical foundations for a compositional distributional model of meaning. *Linguistic Analysis*, *36*(1), 345–384.
- Collobert, R., Weston, J., Bottou, L., Karlen, M., Kavukcuoglu, K., & Kuksa, P. (2011). Natural language processing (almost) from scratch. *Journal of Machine Learning Research*, *12*, 2493–2537.
- Corral, A., Boleda, G., & Ferrer-i-Cancho, R. (2015). Zipf’s law for word frequencies: Word forms versus lemmas in long texts. *PLoS One*, *10*(7), 1–23.
- Dagan, I., Pereira, F., & Lee, L. (1994). Similarity-based estimation of word cooccurrence probabilities. In *Proceedings of ACL*.
- Dinu, G., Pham, N. T., & Baroni, M. (2013). General estimation and evaluation of compositional distributional semantic models. In *Proceedings of the Workshop on Continuous Vector Space Models and their Compositionality*.
- Duchi, J., Hazan, E., & Singer, Y. (2011). Adaptive subgradient methods for online learning and stochastic optimization. *Journal of Machine Learning Research*, *12*, 2121–2159.
- Foltz, P. W., Kintsch, W., & Landauer, T. K. (1998). The measurement of textual coherence with latent semantic analysis. *Discourse Processes*, *15*, 285–307.
- Geman, S., Bienenstock, E., & Doursat, R. (1992). Neural networks and the bias/variance dilemma. *Neural Computation*, *4*(1), 1–58.
- Gnecco, G., & Sanguineti, M. (2008). Approximation error bounds via Rademacher complexity. *Applied Mathematical Sciences*, *2*(4), 153–176.
- Grefenstette, E., & Sadrzadeh, M. (2011). Experimental support for a categorical compositional distributional model of meaning. In *Proceedings of EMNLP*.
- Guevara, E. (2010). A regression model of adjective-noun compositionality in distributional semantics. In *Proceedings of the Workshop on GEometrical Models of Natural Language Semantics*.
- Gutmann, M. U., & Hyvärinen, A. (2012). Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics. *Journal of Machine Learning Research*, *13*(1), 307–361.
- Ha, L. Q., Sicilia-Garcia, E. I., Ming, J., & Smith, F. J. (2002). Extension of Zipf’s law to words and phrases. In *Proceedings of COLING*.
- Halko, N., Martinsson, P. G., & Tropp, J. A. (2011). Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. *SIAM Review*, *53*(2), 217–288.
- Hashimoto, K., Stenetorp, P., Miwa, M., & Tsuruoka, Y. (2014). Jointly learning word representations and composition functions using predicate-argument structures. In *Proceedings of EMNLP*.
- Hashimoto, T., Alvarez-Melis, D., & Jaakkola, T. (2016). Word embeddings as metric recovery in semantic spaces. *Transactions of the Association for Computational Linguistics*, *4*, 273–286.
- Iyyer, M., Manjunatha, V., Boyd-Graber, J., & Daumé III, H. (2015). Deep unordered composition rivals syntactic methods for text classification. In *Proceedings of ACL*.
- Kobayashi, H. (2014). Perplexity on reduced corpora. In *Proceedings of ACL*.
- Landauer, T. K. (2002). On the computational basis of learning and cognition: Arguments from LSA. In N. Ross (Ed.), *The psychology of learning and motivation* (Vol. 41). Cambridge: Academic Press.
- Landauer, T. K., & Dumais, S. T. (1997). A solution to Plato’s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. *Psychological Review*, *104*(2), 211.
- Landauer, T. K., Laham, D., Rehder, B., & Schreiner, M. E. (1997). How well can passage meaning be derived without using word order? A comparison of latent semantic analysis and humans. In *Proceedings of the Annual Conference of the Cognitive Science Society*.
- Lebret, R., & Collobert, R. (2014). Word embeddings through Hellinger PCA. In *Proceedings of EACL*.
- Levy, O., & Goldberg, Y. (2014a). Linguistic regularities in sparse and explicit word representations. In *Proceedings of CoNLL*.
- Levy, O., & Goldberg, Y. (2014b). Neural word embedding as implicit matrix factorization. In *Advances in Neural Information Processing Systems (NIPS) 27* (pp. 2177–2185).
- Levy, O., Goldberg, Y., & Dagan, I. (2015). Improving distributional similarity with lessons learned from word embeddings. *Transactions of the Association for Computational Linguistics*, *3*, 211–225.
- Melamud, O., Goldberger, J., & Dagan, I. (2016). context2vec: Learning generic context embedding with bidirectional LSTM. In *Proceedings of CoNLL*.
- Mikolov, T., Sutskever, I., Chen, K., Corrado, G., & Dean, J. (2013a). Distributed representations of words and phrases and their compositionality. In *Advances in Neural Information Processing Systems (NIPS) 26* (pp. 3111–3119).
- Mikolov, T., Yih, W., & Zweig, G. (2013b). Linguistic regularities in continuous space word representations. In *Proceedings of NAACL-HLT*.
- Miller, G. A., & Charles, W. G. (1991). Contextual correlates of semantic similarity. *Language and Cognitive Processes*, *6*(1), 1–28.
- Mitchell, J., & Lapata, M. (2008). Vector-based models of semantic composition. In *Proceedings of ACL-HLT*.
- Mitchell, J., & Lapata, M. (2010). Composition in distributional models of semantics. *Cognitive Science*, *34*(8), 1388–1429.
- Montemurro, M. A. (2001). Beyond the Zipf–Mandelbrot law in quantitative linguistics. *Physica A: Statistical Mechanics and its Applications*, *300*(3), 567–578.
- Muraoka, M., Shimaoka, S., Yamamoto, K., Watanabe, Y., Okazaki, N., & Inui, K. (2014). Finding the best model among representative compositional models. In *Proceedings of PACLIC*.
- Niyogi, P., & Girosi, F. (1999). Generalization bounds for function approximation from scattered noisy data. *Advances in Computational Mathematics*, *10*, 51–80.
- Paperno, D., Pham, N. T., & Baroni, M. (2014). A practical and linguistically-motivated approach to compositional distributional semantics. In *Proceedings of ACL*.
- Pennington, J., Socher, R., & Manning, C. (2014). GloVe: Global vectors for word representation. In *Proceedings of EMNLP*.
- Pham, N. T., Kruszewski, G., Lazaridou, A., & Baroni, M. (2015). Jointly optimizing word representations for lexical and sentential tasks with the C-PHRASE model. In:
*Proceedings of ACL*.Google Scholar - Pitman, J. (2006).
*Combinatorial Stochastic Processes*. Berlin: Springer-Verlag.zbMATHGoogle Scholar - Pitman, J., & Yor, M. (1997). The two-parameter Pisson-Dirichlet distribution derived from a stable subordinator.
*Annals of Probability*,*25*, 855–900.MathSciNetCrossRefzbMATHGoogle Scholar - Rothe, S., & Schütze, H. (2015). Autoextend: Extending word embeddings to embeddings for synsets and lexemes. In:
*Proceedings of ACL-IJCNLP*.Google Scholar - Socher, R., Huang, E. H., Pennin, J., & Manning, C. D. (2011). Dynamic pooling and unfolding recursive autoencoders for paraphrase detection.
*Advances in NIPS*,*24*, 801–809.Google Scholar - Socher, R., Huval, B., Manning, C.D., & Ng, A.Y. (2012). Semantic compositionality through recursive matrix-vector spaces. In:
*Proceedings of EMNLP*.Google Scholar - Stratos, K., Collins, M., & Hsu, D. (2015). Model-based word embeddings from decompositions of count matrices. In:
*Proceedings of ACL-IJCNLP*.Google Scholar - Takase, S., Okazaki, N., & Inui, K. (2016). Composing distributed representations of relational patterns. In:
*Proceedings of ACL*.Google Scholar - Teh, Y.W. (2006). A hierarchical bayesian language model based on Pitman-Yor processes. In:
*Proceedings of ACL*.Google Scholar - The BNC Consortium (2007) The british national corpus, version 3 (bnc xml edition). Distributed by Oxford University Computing Services, http://www.natcorp.ox.ac.uk/
- Tian, R., Miyao, Y., & Matsuzaki, T. (2014). Logical inference on dependency-based compositional semantics. In:
*Proceedings of ACL*.Google Scholar - Tian, R., Okazaki, N., & Inui, K. (2016). Learning semantically and additively compositional distributional representations. In:
*Proceedings of ACL*.Google Scholar - Turian, J., Ratinov, L.A., & Bengio, Y. (2010). Word representations: A simple and general method for semi-supervised learning. In:
*Proceedings of ACL*.Google Scholar - Turney, P.D. (2001). Mining the web for synonyms: PMI-IR versus LSA on TOEFL. In:
*Proceedings of EMCL*.Google Scholar - Turney, P. D., & Pantel, P. (2010). From frequency to meaning: Vector space models of semantics.
*Journal of Artificial Intelligence Research*,*37*(1), 141–188.MathSciNetzbMATHGoogle Scholar - Vapnik, V. N. (1995).
*The Nature of Statistical Learning Theory*. Berlin: Springer-Verlag.CrossRefzbMATHGoogle Scholar - Zanzotto, F.M., Korkontzelos, I., Fallucchi, F., & Manandhar, S. (2010). Estimating linear models for compositional distributional semantics. In:
*Proceedings of Coling*.Google Scholar - Zipf, G. K. (1935).
*The Psychobiology of Language: An Introduction to Dynamic Philology*. Cambridge: M.I.T. Press.Google Scholar

## Copyright information

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.