The Mechanism of Additive Composition

Additive composition (Foltz et al., 1998; Landauer and Dumais, 1997; Mitchell and Lapata, 2010) is a widely used method for computing meanings of phrases, which takes the average of vector representations of the constituent words. In this article, we prove an upper bound for the bias of additive composition, which is the first theoretical analysis of compositional frameworks from a machine learning point of view. Our proof relies on properties of natural language data that are empirically verified, and can be theoretically derived from the assumption that the data is generated by a Hierarchical Pitman-Yor Process. The theory endorses additive composition as a reasonable operation for calculating meanings of phrases, and suggests ways to improve additive compositionality, including: transforming entries of distributional word vectors by a function that meets a specific condition, constructing a novel type of vector representation that makes additive composition sensitive to word order, and utilizing singular value decomposition to train word vectors.


Introduction
The decomposition of generalization errors into bias and variance (Geman et al., 1992) is one of the most profound insights of learning theory. Bias is caused by the low capacity of models when the training samples are assumed to be infinite, whereas variance is caused by overfitting to finite samples. In this article, we apply this analysis to a new set of problems in Compositional Distributional Semantics, which studies the calculation of meanings of natural language phrases from vector representations of their constituent words. We prove an upper bound for the bias of a widely used compositional framework, the additive composition (Foltz et al., 1998; Landauer and Dumais, 1997; Mitchell and Lapata, 2010).
Calculations of meanings are fundamental problems in Natural Language Processing (NLP). In recent years, vector representations have seen great success at conveying meanings of individual words (Levy et al., 2015). These vectors are constructed from statistics of contexts surrounding the words, based on the Distributional Hypothesis that words occurring in similar contexts tend to have similar meanings (Harris, 1954). For example, given a target word t, one can consider its context as the close neighbors of t in a corpus, and assess the probability $p^t_i$ of the i-th word (in a fixed lexicon) occurring in the context of t. Then, the word t is represented by a vector $\big(F(p^t_i)\big)_i$ (where F is some function), and words with similar meanings to t will have similar vectors (Miller and Charles, 1991).
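As a concrete sketch of assessing the probabilities $p^t_i$ from raw text, the snippet below counts the words within a fixed window around each occurrence of a target; the function name `context_probs`, the window parameter and the toy data are our own illustration, not the article's.

```python
from collections import Counter

def context_probs(tokens, target, window=5):
    """Estimate p^t_i: the probability of each word occurring
    within `window` positions of an occurrence of `target`."""
    ctx = Counter()
    for i, w in enumerate(tokens):
        if w == target:
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if j != i:
                    ctx[tokens[j]] += 1
    total = sum(ctx.values())
    return {w: c / total for w, c in ctx.items()}
```

With `tokens = "a b a c".split()` and `window=1`, the target "a" has context tokens b, b, c, so the estimated probability of "b" is 2/3.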
Beyond the word level, a naturally following challenge is to represent meanings of phrases or even sentences. Based on the Distributional Hypothesis, it is generally believed that such vectors should also be constructed from surrounding contexts, at least for phrases observed in a corpus (Boleda et al., 2013). However, a main obstacle here is that phrases are far more sparse than individual words. For example, in the British National Corpus (BNC) (The BNC Consortium, 2007), which consists of 100M word tokens, a total of 16K lemmatized words are observed more than 200 times, but there are only 46K such bigrams, far fewer than the $16000^2$ possibilities for two-word combinations. Even with a larger corpus, one would mostly observe more rare words due to Zipf's Law, so most of the two-word combinations will always be rare or unseen. Therefore, a direct estimation of the surrounding contexts of a phrase can have a large sampling error. This partially fuels the motivation to construct phrase vectors by combining word vectors (Mitchell and Lapata, 2010), which is also based on the linguistic intuition that meanings of phrases are "composed" from the meanings of their constituent words. From a machine learning point of view, word vectors have smaller sampling error, or lower variance, because words are more abundant than phrases. Hence, a compositional framework that calculates meanings from word vectors will be favorable if the bias of the composition operation is also small.
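The sparsity contrast between words and bigrams can be reproduced with plain counting; the helper `frequent_targets`, the threshold and the toy token list below are illustrative assumptions, not statistics from the BNC.

```python
from collections import Counter

def frequent_targets(tokens, min_count=2):
    """Return the unigram and bigram targets whose frequency
    reaches `min_count`; bigrams are typically far sparser."""
    words = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    freq_words = {w for w, c in words.items() if c >= min_count}
    freq_bigrams = {b for b, c in bigrams.items() if c >= min_count}
    return freq_words, freq_bigrams
```

Even on a tiny token list, fewer bigram types than word types pass the same frequency threshold, which is the effect the paragraph above describes at corpus scale.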
Here, "bias" is the distance between two types of phrase vectors: one calculated by composing the vectors of the constituent words (the composed vector), and the other assessed from context statistics in which the phrase itself is treated as a target (the natural vector). The statistics are assessed from an infinitely large ideal corpus, so that the natural vector of the phrase can be reliably estimated without sampling error, hence conveying the meaning of the phrase by the Distributional Hypothesis. If the distance between the two vectors is small, the composed vector can be viewed as a reasonable approximation of the natural vector, hence an approximation of meaning; moreover, the composed vector can be more reliably estimated from finite real corpora, because words are more abundant than phrases. Therefore, an upper bound for the bias provides learning-theoretic support for the composition operation.
A number of compositional frameworks have been proposed in the literature (Baroni and Zamparelli, 2010; Grefenstette and Sadrzadeh, 2011; Socher et al., 2012; Paperno et al., 2014; Hashimoto et al., 2014). Some are complicated methods based on linguistic intuitions (Coecke et al., 2010), and others are compared to human judgments for evaluation (Mitchell and Lapata, 2010). However, none of them has previously been analyzed regarding its bias.1 The most widely used framework is additive composition (Foltz et al., 1998; Landauer and Dumais, 1997), in which the composed vector is calculated by averaging word vectors. Yet, it was unknown whether this average is by any means related to the statistics of contexts surrounding the corresponding phrases.
In this article, we prove an upper bound for the bias of additive composition of two-word phrases, and demonstrate several applications of the theory. An overview is given below.
1. Unlike natural vectors which always lie in the same space as word vectors, some compositional frameworks construct meanings of phrases in different spaces. Nevertheless, we argue that even in such cases it is reasonable to require some mappings to a common space, because humans can usually compare meanings of a word and a phrase. Then, by considering distances between mapped images of composed vectors and natural vectors, we can define bias and call for theoretical analysis.
Figure 1: An overview of this article; arrows show dependencies between sections. The figure lists the main topic of each part: what kind of vectors we consider (Section 2.1); the practical meaning of the bias bound (Section 2.2); formalization and our assumptions on natural language data (Section 2.3); how a specific condition on F affects vectors (Section 2.4); the proof of the bias bound and an intuitive explanation (Section 2.5); a generative model satisfying the assumptions (Section 2.6); what kind of vectors are additive compositional (Section 3.1); how to handle word order in additive composition (Section 3.2); how dimension reduction affects additive compositionality (Section 3.3); experimental verification (Section 5); whether the theory correlates with human judgments (Section 6); and proofs of supporting lemmas (Appendix A).
In Section 2.1, we introduce notations and define the vectors we consider in this work. In Section 2.2, we describe our bias bound for additive composition, focusing on its practical consequences that can be tested on a natural language corpus. The consequences are experimentally verified in Section 5.2.
In Section 2.3, we give a formalized version of the bias bound (Theorem 1), with our assumptions on natural language data clarified. These assumptions include the well-known Zipf's Law, a generalized version of the law applied to word co-occurrences, and some intuitively acceptable conditions. The assumptions and their implications are experimentally verified in Section 5. Moreover, we show that the Generalized Zipf's Law holds if the data is generated by a Hierarchical Pitman-Yor Process (Section 2.6).
One of our theoretical findings is that the function F applied to entries of vector representations is required to meet a specific condition. In Section 2.4, we explain how this condition affects vector representations and why it is important to the bias bound.
In Section 2.5, we sketch the proof of our bias bound and give an intuitive explanation. Proofs of some supporting lemmas are left to Appendix A.
We demonstrate three applications of our theory: 1. The condition required to be satisfied by F provides a unified explanation on why some recently proposed word vectors are good at additive composition (Section 3.1). Our experiments also verify that the condition drastically affects additive compositionality and other properties of vector representations (Section 5.2, Section 6).
. . . as a percentage of your income, your tax rate is generally less than that . . .

target      words in context
tax         percentage, of, your, income, your, rate, is, generally, less, than
rate        of, your, income, your, tax, is, generally, less, than, that
tax rate    of, your, income, your, is, generally, less, than

Table 1: Contexts are taken as the closest five words to each side for the targets "tax" and "rate", and four for the target "tax rate".
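The context extraction of Table 1 can be sketched as follows; `contexts` and its parameters are hypothetical names, but the windows (five words per side for word targets, four for the two-word target) follow the table.

```python
def contexts(tokens, target_len, window):
    """Collect, for every target of `target_len` consecutive tokens,
    the words within `window` positions on each side of it."""
    ctx = {}
    for i in range(len(tokens) - target_len + 1):
        t = " ".join(tokens[i:i + target_len])
        left = tokens[max(0, i - window):i]
        right = tokens[i + target_len:i + target_len + window]
        ctx.setdefault(t, []).extend(left + right)
    return ctx
```

Running this on the example sentence reproduces the rows of Table 1 for the targets "tax" and "tax rate".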

Theory
In this section, we discuss vector representations constructed from an ideal natural language corpus, and establish a mathematical framework for analyzing additive composition.

Notation and Vector Representation
A natural language corpus is a sequence of words. Ideally, we assume that the sequence is infinitely long and contains an infinite number of distinct words.
Notation 1. In practice, only finite restrictions of the ideal corpus are observed. We denote the observed number of distinct words by n, and use these n words as a lexicon to construct vector representations. Then, we will consider the limit n → ∞. Let $p_{i,n}$ be the observed probability of the i-th word (1 ≤ i ≤ n), where i is indexed such that $p_{i,n} \ge p_{i+1,n}$.
With a corpus given, we consider vector representations for targets, which are either words or phrases. To define the vectors we start from specifying a context for each target, which is usually taken as words surrounding the target in the corpus. Table 1 shows an example of a sequence of words, and the contexts of a phrase target and two word targets. The contexts are taken as words close to the targets in the sequence.
Notation 2. We use s, t to denote word targets, and st a phrase target consisting of two consecutive words s and t. When the word order is ignored (i.e., either st or ts), we denote the target by {st}. A general target is denoted by Υ. Later in this article, we will consider other types of targets as well, and a full list of target types is shown in Table 2.
Notation 3. Let $p^\Upsilon_{i,n}$ denote the co-occurrence probability of the i-th word, conditioned on it being in the context of Υ and the lexicon size being n.
Definition 4. In this work, we consider vector representations of the form
$$w^\Upsilon_{i,n} := \frac{1}{c_n}\left(F\!\left(p^\Upsilon_{i,n} + \frac{1}{n}\right) - a^\Upsilon_n - b_{i,n}\right), \qquad w^\Upsilon_n := \left(w^\Upsilon_{i,n}\right)_{1\le i\le n}.$$
In the above, $a^\Upsilon_n$, $b_{i,n}$ and $c_n$ are real numbers and F is a smooth function on (0, ∞). In Section 2.2, we will further specify $a^\Upsilon_n$, $b_{i,n}$, $c_n$ and F without much loss of generality.
Considering $F(p^\Upsilon_{i,n} + 1/n)$ instead of $F(p^\Upsilon_{i,n})$ is a smoothing scheme that guarantees F is applied to arguments x > 0. We will consider F that is not continuous at 0, such as F(x) := ln x; yet, $w^\Upsilon_{i,n}$ has to be well-defined even if $p^\Upsilon_{i,n} = 0$. In practice, the $p^\Upsilon_{i,n}$ estimated from a finite corpus can often be 0; theoretically, the smoothing scheme plays a role in our proof as well. The definition of $w^\Upsilon_n$ is general enough to cover a wide range of previously proposed distributional word vectors. For example, if F(x) = ln x, $a^\Upsilon_n = 0$ and $b_{i,n} = \ln p_{i,n}$, then $w^\Upsilon_{i,n}$ is the Pointwise Mutual Information (PMI) value that has been widely adopted in NLP (Church and Hanks, 1990; Dagan et al., 1994; Turney, 2001; Turney and Pantel, 2010). More recently, the Skip-Gram with Negative Sampling (SGNS) model (Mikolov et al., 2013b) has been shown to be a matrix factorization of the PMI matrix (Levy and Goldberg, 2014b); and the more general form of $a^\Upsilon_n$ and $b_{i,n}$ is explicitly introduced by the GloVe model (Pennington et al., 2014). Regarding other forms of F, it has been reported in Lebret and Collobert (2014) and Stratos et al. (2015) that empirically F(x) := √x outperforms F(x) := x. We will discuss this topic further in Section 3.1.
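As a minimal instance of Definition 4, the sketch below uses F(x) = ln x, $a^\Upsilon_n = 0$ and $b_{i,n} = \ln p_{i,n}$, which yields smoothed PMI entries; the function `pmi_vector` is our own illustration, not the article's code.

```python
import math

def pmi_vector(p_ctx, p_uni, n, c=1.0):
    """Definition 4 with F = ln, a = 0, b_i = ln p_i:
    w_i = (ln(p^t_i + 1/n) - ln p_i) / c  (smoothed PMI)."""
    return [(math.log(p_ctx[i] + 1.0 / n) - math.log(p_uni[i])) / c
            for i in range(n)]
```

A word that is over-represented in the target's context (relative to its unigram probability) receives a larger entry than one that occurs at close to its base rate.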
Finally, the scalar c n is a normalization factor that controls the scales of vectors. We finish this section by pointing to Table 3 for a list of frequently used notations.

Practical Meaning of the Bias Bound
A compositional framework combines the vectors $w^s_n$ and $w^t_n$ to represent the meaning of the phrase "s t". In this work, we study relations between this composed vector and the natural vector $w^{\{st\}}_n$ of the phrase target.2 More precisely, we study the Euclidean distance
$$\left\|\,\mathrm{COMP}(w^s_n, w^t_n) - w^{\{st\}}_n\,\right\|,$$
where COMP(·, ·) is the composition operation. At the limit n → ∞, the corpus is infinitely large, so $w^{\{st\}}_n$ can be well estimated to represent the meaning of "s t or t s". Therefore, the above distance can be viewed as the bias of approximating $w^{\{st\}}_n$ by the composed vector $\mathrm{COMP}(w^s_n, w^t_n)$. In practice, especially when COMP is a complicated operation with parameters, it has been a widely adopted approach to learn the parameters by minimizing the same distances for phrases observed in a corpus (Dinu et al., 2013; Baroni and Zamparelli, 2010; Guevara, 2010). These practices further motivate our study of the bias.
2. Or it should be $w^{st}_n$ if one cares about word order, which we will discuss in Section 3.2.
Notation 1: $i, n$ — index $1 \le i \le n$, where n is the lexicon size.
Notation 1: $p_{i,n}$ — probability of the i-th word, $p_{i,n} \ge p_{i+1,n}$.
Notation 3: $p^\Upsilon_{i,n}$ — probability of the i-th word conditioned on it being in the context of Υ, with lexicon size n.
Definition 6: $\pi_{s/t\backslash s}$ — probability of an occurrence of t being a non-neighbor of s.

Table 3: Frequently used notations (excerpt).

Definition 5. Consider the additive composition, where $\mathrm{COMP}(w^s_n, w^t_n) := \frac{1}{2}(w^s_n + w^t_n)$ is a parameter-free operation. We define
$$B^{\{st\}}_n := \left\|\,\tfrac{1}{2}(w^s_n + w^t_n) - w^{\{st\}}_n\,\right\|.$$
Our analysis starts from the observation that every word in the context of {st} also occurs in the contexts of s and t: as illustrated in Table 1, if a word token t (e.g. "rate") comes from a phrase {st} (e.g. "tax rate"), and if the context window size is not too small, the context for this token of t is almost the same as the context of {st}. This motivates us to decompose the context of t into two parts, one coming from {st} and the other not.
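As a toy computation of the bias of Definition 5, the sketch below (assuming the vectors are already given as Python lists) takes the Euclidean distance between the average of the two word vectors and the natural phrase vector:

```python
import math

def additive_bias(ws, wt, wst):
    """B = || (ws + wt)/2 - wst ||  (Euclidean distance between the
    additively composed vector and the natural phrase vector)."""
    return math.sqrt(sum(((a + b) / 2 - c) ** 2
                         for a, b, c in zip(ws, wt, wst)))
```

When the natural phrase vector happens to equal the average of the word vectors, the bias is exactly 0.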
Definition 6. Define the target s/t\s, which counts every occurrence of word t not neighboring s. We use $\pi_{s/t\backslash s}$ to denote the probability of t not neighboring s, conditioned on an occurrence of t. Practically, $(1 - \pi_{s/t\backslash s})$ can be estimated by the ratio C({st})/C(t), where C(·) denotes frequency count. Then, we have the following equation, because a word in the context of t occurs in the context of either {st} or s/t\s:
$$p^t_{i,n} = (1 - \pi_{s/t\backslash s})\, p^{\{st\}}_{i,n} + \pi_{s/t\backslash s}\, p^{s/t\backslash s}_{i,n}. \qquad (1)$$
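The estimate $(1-\pi_{s/t\backslash s}) \approx C(\{st\})/C(t)$ from Definition 6 can be sketched as follows; the function `pi_estimates` and the toy tokens are our own illustration (word order ignored, as in the definition of {st}).

```python
def pi_estimates(tokens, s, t):
    """pi_{s/t\\s}: the fraction of occurrences of `t` that are
    NOT adjacent to `s`, estimated as 1 - C({st}) / C(t)."""
    c_t = tokens.count(t)
    c_st = sum(1 for a, b in zip(tokens, tokens[1:]) if {a, b} == {s, t})
    return 1.0 - c_st / c_t
```

For a strongly collocational pair, most occurrences of one word neighbor the other, so the estimate is close to 0.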
We can view $\pi_{s/t\backslash s}$ (and similarly $\pi_{t/s\backslash t}$) as indicating how weak the "collocation" between s and t is. When $\pi_{s/t\backslash s}$ and $\pi_{t/s\backslash t}$ are small, s tends to occur together with t, so we expect $w^s_n$ and $w^t_n$ to be correlated with $w^{\{st\}}_n$. To normalize $a^\Upsilon_n$, $b_{i,n}$, $c_n$ and F, we first note that $B^{\{st\}}_n$ does not depend on $b_{i,n}$, because it is canceled out in the following:
$$\tfrac{1}{2}\big(w^s_{i,n} + w^t_{i,n}\big) - w^{\{st\}}_{i,n} = \frac{1}{c_n}\left(\frac{F(p^s_{i,n}+\frac1n) + F(p^t_{i,n}+\frac1n)}{2} - F\!\left(p^{\{st\}}_{i,n}+\tfrac1n\right) - \frac{a^s_n + a^t_n}{2} + a^{\{st\}}_n\right).$$
Thus, $b_{i,n}$ can be arbitrarily fixed without loss of generality. We use $b_{i,n}$ to move the centroid of natural phrase vectors to 0, so that they will not concentrate in one direction.
Definition 7. Let $\Lambda_n$ be the set of two-word phrases observed in a finite corpus, word order ignored. We put
$$b_{i,n} := \frac{1}{|\Lambda_n|} \sum_{\{st\}\in\Lambda_n} \left( F\!\left(p^{\{st\}}_{i,n} + \tfrac1n\right) - a^{\{st\}}_n \right), \qquad (2)$$
so that the centroid of the natural phrase vectors $w^{\{st\}}_n$ ($\{st\}\in\Lambda_n$) is 0. In addition, it is favorable to have the entries of each vector averaging to 0; we achieve this by adjusting $a^\Upsilon_n$.
Definition 8. We set
$$a^\Upsilon_n := \frac{1}{n}\sum_{i=1}^n \left( F\!\left(p^\Upsilon_{i,n} + \tfrac1n\right) - b_{i,n} \right), \qquad (3)$$
so that $\frac{1}{n}\sum_{i=1}^n w^\Upsilon_{i,n} = 0$ for all Υ.
One can calculate $a^\Upsilon_n$ and $b_{i,n}$ by first assuming $b_{i,n} = 0$ in (3) to obtain $a^\Upsilon_n$, and then substituting $a^{\{st\}}_n$ into (2) to obtain the actual $b_{i,n}$. The value of $a^\Upsilon_n$ will not change, because if all vectors have average entry 0, so does their centroid.
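Under the simplifying assumption that the data consist only of the observed phrase targets, the two-step procedure of Definitions 7 and 8 amounts to a double centering of the matrix of F-values: first center each row (so entries of each vector average to 0), then center each column over phrases (so the centroid moves to 0); the row means remain 0 afterwards, which is why $a^\Upsilon_n$ does not change. The sketch below illustrates this reading; it is a simplification, not the article's exact procedure.

```python
def double_center(M):
    """M[r][i] = F(p^{phrase_r}_i + 1/n). Row-center (Definition 8),
    then column-center over phrases (Definition 7)."""
    rows, n = len(M), len(M[0])
    a = [sum(row) / n for row in M]                       # per-target a
    centered = [[M[r][i] - a[r] for i in range(n)] for r in range(rows)]
    b = [sum(centered[r][i] for r in range(rows)) / rows  # per-dimension b
         for i in range(n)]
    return [[centered[r][i] - b[i] for i in range(n)] for r in range(rows)]
```

After the double centering, every row sums to 0 and every column sums to 0 simultaneously.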
Definition 9. For $c_n$, we note that it changes the scales of both $B^{\{st\}}_n$ and the vectors $w^\Upsilon_n$; we choose $c_n$ to normalize the scale of the natural phrase vectors. Note that, if the centroid of natural phrase vectors had been far from 0, the normalization in Definition 9 would have caused all phrase vectors to cluster around one point on the unit sphere. Then, the phrase vectors would not have been able to distinguish different meanings of phrases. The choice of $b_{i,n}$ in Definition 7 prevents such degenerate cases.
In Section 2.4, we will calculate the asymptotic values of a Υ n , b i,n and c n theoretically.
Definition 10. For simplicity, we consider F such that $F'(x) = x^{-1+\lambda}$ in this article. So F(x) can be $x^\lambda/\lambda$ (where λ ≠ 0) or ln x (where λ = 0). Intuitively, only the behavior of F(x) at x ≈ 0 matters, because F is applied to probability values close to 0. In fact, our results can be generalized to any G(x) such that $\lim_{x\to 0} G'(x)\,x^{1-\lambda} = 1$ and $G'(x)\,x^{1-\lambda} \le M$ for some constant M.
Remark 6 in Section 2.3 gives a formal explanation. The exponent λ turns out to be a crucial factor in our theory. Now, we can state a summarized version of our bias bound.

Claim 1. Under our assumptions on natural language data, with $a^\Upsilon_n$, $b_{i,n}$, $c_n$ and F normalized as above and λ < 0.5, we have
$$\lim_{n\to\infty} B^{\{st\}}_n \;\le\; \frac{1}{2}\sqrt{\pi_{s/t\backslash s}^2 + \pi_{t/s\backslash t}^2 + \pi_{s/t\backslash s}\,\pi_{t/s\backslash t}} \quad \text{in probability.}$$
As expected, for more "collocational" phrases, $\pi_{s/t\backslash s}$ and $\pi_{t/s\backslash t}$ are smaller, so the upper bound for $B^{\{st\}}_n$ becomes stronger. Claim 1 states a prediction that can be empirically tested on a large real corpus; namely, one can estimate $p^\Upsilon_{i,n}$ from the corpus and construct $w^\Upsilon_n$ for a fixed n, then check whether the inequality holds approximately while omitting the limit. In Section 5.2, we conduct this experiment and verify the prediction. In Section 2.3, we specify the theoretical assumptions we put on an "ideal natural language corpus".
Besides being empirically verified for phrases observed in a real corpus, the true value of Claim 1 is that the upper bound holds for an arbitrarily large ideal corpus. We can assume that any plausible two-word phrase occurs sufficiently many times in the ideal corpus, even when it is unseen in the real one. In that case, a natural vector for the phrase can only be reliably estimated from the ideal corpus, but Claim 1 suggests that the additive composition of word vectors provides a reasonable approximation for that unseen natural vector. Meanwhile, since word vectors can be reliably estimated from the real corpus, Claim 1 endorses additive composition as a reasonable meaning representation for unseen or rare phrases. On the other hand, it endorses additive composition for frequent phrases as well, because such phrases usually have strong collocations, and Claim 1 says that the bias in this case is small.
The following is a noteworthy by-product of our theory (Theorem 2 in Section 2.4), and it is empirically verified in Section 5.2 as well.

Claim 2. With the same normalization as in Claim 1, $\lim_{n\to\infty} \|w^{\{st\}}_n\| = 1$ in probability.

So, approximately, all natural phrase vectors are distributed on the unit sphere. This makes it possible to convert the Euclidean distance $B^{\{st\}}_n$ into cosine similarity, which is the most widely used similarity measure in practice.
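The conversion mentioned here rests on a standard identity: for vectors on the unit sphere, Euclidean distance determines cosine similarity and vice versa:

```latex
\|u - v\|^2 \;=\; \|u\|^2 + \|v\|^2 - 2\,u \cdot v \;=\; 2 - 2\cos(u, v),
\qquad \text{when } \|u\| = \|v\| = 1 .
```

Hence an upper bound on the Euclidean distance between (approximately) unit-norm vectors translates directly into a lower bound on their cosine similarity.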

Formalization and Assumptions on Natural Language Data
Claim 1 is formalized as Theorem 1 in the following.
Theorem 1. For an ideal natural language corpus, we assume that:
(A) $\lim_{n\to\infty} p_{i,n}\cdot i\ln n = 1$.
We explain the assumptions of Theorem 1 in detail below.
Remark 1. Assumption (A) is Zipf's Law (Zipf, 1935), which states that the frequency of the i-th word is inversely proportional to i. So $p_{i,n}$ is proportional to $i^{-1}$, and the factor ln n comes from $\sum_{i=1}^n p_{i,n} = 1$ and $\sum_{i=1}^n i^{-1} \approx \ln n$. One immediate implication of Zipf's Law is that $np_{i,n}$ can be arbitrarily small: for any δ > 0, we asymptotically have $np_{i,n} \le \delta$ for all $i \ge \frac{n}{\delta\ln n}$, which covers all but a vanishing fraction of the indices. The limit $np_{i,n} \to 0$ will be extensively explored in our theory.
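A quick numerical check of this normalization (a sketch with an assumed lexicon size of our choosing): with $p_{i,n} = 1/(i H_n)$ and the harmonic number $H_n = \sum_{i\le n} 1/i \approx \ln n$, the probabilities sum to 1, $np_{i,n}$ is large for top-ranked words, and small for i near n.

```python
import math

# Zipf probabilities with the ln n normalization of Assumption (A):
# p_i = 1 / (i * H_n), where H_n is the n-th harmonic number (~ ln n).
n = 10**6
H = sum(1.0 / i for i in range(1, n + 1))   # harmonic number, ~ ln n + 0.577
p = lambda i: 1.0 / (i * H)
```

For this n, `n * p(1)` is on the order of n / ln n (very large), while `n * p(n)` is below 0.1, illustrating the regime $np_{i,n} \to 0$ for low-frequency indices.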
Remark 2. When a target Υ is randomly chosen, the probability value $p^\Upsilon_{i,n}$ shows randomness. Assumption (B1) formalizes this intuition. Further, (B2) suggests that $p^\Upsilon_{i,n}$ is on the same scale as $p_{i,n}$, and that $X := p^\Upsilon_{i,n}/p_{i,n}$ has a power law tail of index 1. We regard it as the Generalized Zipf's Law, because it is parallel to Zipf's Law in that $p_{i,n}$ also has a power law tail of index 1. In Section 2.6, we show that Assumption (B) is closely related to a Hierarchical Pitman-Yor Process; and in Section 5.1 we empirically verify this assumption.
Remark 3. The assumption (B2) on the power law tail can be relaxed to $\lim_{x\to\infty} x\,\mathbb{P}(x \le X) = \xi$. We only consider (B2) for simplicity.

Remark 4. The existence of the second moment $\mathbb{E}\,F(X)^2 < \infty$ requires λ < 0.5, because X has a power law tail of index 1 by Assumption (B2). Conversely, λ < 0.5 is usually a sufficient condition for $\mathbb{E}\,F(X)^2 < \infty$; for instance, if X follows the Pareto Distribution (i.e. ξ = β) or the Inverse-Gamma Distribution. Another example is given in Section 2.6.
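The role of λ < 0.5 can be checked numerically under a Pareto example (density $x^{-2}$ on [1, ∞), our choice of the simplest index-1 tail): the truncated second moment of F(X) converges as the truncation bound grows when λ < 0.5, but diverges logarithmically at λ = 0.5.

```python
import math

def truncated_second_moment(lam, B, steps=4000):
    """E[F(X)^2; X <= B] for F(x) = x**lam / lam (lam != 0) and X with
    Pareto density x**-2 on [1, inf): integrate x**(2*lam - 2) / lam**2
    from 1 to B, using the substitution x = e^u (log-grid trapezoid)."""
    L = math.log(B)
    h = L / steps
    # after substitution the integrand is exp(u * (2*lam - 1)) / lam**2
    vals = [math.exp(u * (2 * lam - 1)) / lam**2
            for u in (k * h for k in range(steps + 1))]
    return h * (vals[0] / 2 + sum(vals[1:-1]) + vals[-1] / 2)
```

For λ = 0.25 the truncated moment approaches a finite limit (32 in closed form), while for λ = 0.5 it grows like 4 ln B without bound.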
(c) The set of random variables $Y^2_{i,n}/\varphi_{i,n}$ is uniformly integrable; i.e., for any ε > 0, there exists N such that $\mathbb{E}\big[Y^2_{i,n}\,\mathbb{I}\{Y^2_{i,n} > N\varphi_{i,n}\}\big] < \varepsilon\varphi_{i,n}$ for all i, n.

(d) lim …

Remark 5. Lemma 1 is the basis of our analysis. It shows that, as i and n vary, $e^2_{i,n}$ and $v_{i,n}$ scale with $\varphi_{i,n}$. Note that for the random variable X we only assume a power law tail; the full distribution of X might depend on i, n and on whether Υ = {ST}, S/T\S, or T/S\T. Nevertheless, Lemma 1(b)(d) suggests that in the limit $np_{i,n} \to 0$, the behaviors of $e_{i,n}/\sqrt{\varphi_{i,n}}$ and $v_{i,n}/\varphi_{i,n}$ do not depend on i, n or Υ; the power law tail of X dominates.
Remark 6. The function F(x) in Lemma 1 can be generalized to a function G(x) as mentioned in Definition 10, because by Cauchy's Mean Value Theorem,
$$\frac{G(p_{i,n}x + \frac1n) - G(p_{i,n}\beta + \frac1n)}{F(p_{i,n}x + \frac1n) - F(p_{i,n}\beta + \frac1n)} = G'(\zeta)\,\zeta^{1-\lambda} \quad\text{for some } p_{i,n}\beta + \tfrac1n \le \zeta \le p_{i,n}x + \tfrac1n,$$
so the random variable $G(p_{i,n}X + \frac1n) - G(p_{i,n}\beta + \frac1n)$ is dominated by $M\,Y_{i,n}$ and converges pointwise to $Y_{i,n}$ as n → ∞. Then, by Lebesgue's Dominated Convergence Theorem, we can generalize Lemma 1 to G(x), and in turn generalize our bias bound.
Lemma 2. Regarding the asymptotic behavior of $\varphi_{i,n}$, we have …

Lemma 1 is derived from Assumption (B) and the condition $\mathbb{E}\,F(X)^2 < \infty$; Lemma 2 is derived from Assumption (A). The proofs are found in Appendix A.

Why is λ < 0.5 important?
As we noted in Remark 4, the condition λ < 0.5 is necessary for the existence of $\mathbb{E}\,F(X)^2$. This existence is important because, briefly speaking, the Law of Large Numbers only holds when expected values exist. More precisely, we use the following lemma to prove convergence in probability in Theorem 1; if $\mathbb{E}\,F(X)^2 = \infty$, the required uniform integrability is not satisfied, and the weighted averages of random variables we consider will not converge.
Lemma 3. Assume the set of random variables $U_{i,n}/\varphi_{i,n}$ is uniformly integrable, and that $U_{i,n}$ (1 ≤ i ≤ n) are independent for each fixed n. Assume $\lim$ … in probability.
Proof. This lemma is a combination of the Law of Large Numbers and the Stolz-Cesàro Theorem. We prove it in two steps.

First step: we prove … This is a generalized version of the Law of Large Numbers, saying that the weighted average of $U_{i,n}$ converges in probability to the weighted average of … Our strategy is to divide the following into two parts, and show that each part is close to its expectation: … For the $|U_{i,n}| > N\varphi_{i,n}$ part, we have … $< \frac{\varepsilon}{2}$ for all n by definition, so it has negligible expectation and can be bounded by Markov's Inequality. On the other hand, for the $|U_{i,n}| \le N\varphi_{i,n}$ part, by Lemma 2(d) we have … Thus, the $|U_{i,n}| \le N\varphi_{i,n}$ part concentrates to its expectation by Chebyshev's Inequality. The first step is completed.
Second step: we prove … This is a generalized version of the Stolz-Cesàro Theorem, saying that the limit ratio of two series equals the limit ratio of the corresponding terms. By definition and Equation (4), for any ε > 0 there exists δ such that … In addition, we can bound … The sum can be viewed as a weighted average of two parts, one from indices $\frac{n}{\delta\ln n} \le i \le n$ and the other from $1 \le i < \frac{n}{\delta\ln n}$. By Lemma 2(d), the weight for the first part satisfies $\lim_{n\to\infty} n^{-1+2\lambda}\ln n \sum_{\frac{n}{\delta\ln n} \le i \le n} \varphi_{i,n} = \infty$; whereas by Lemma 2(c), the weight for the second part … Therefore, the first part dominates, so $\lim$ …

Combining Lemma 1(a)(c)(d) and Lemma 3, we immediately obtain the following.
Now, we can asymptotically derive the normalization of a Υ n , b i,n and c n , as defined in Section 2.2. A by-product is that the norms of natural phrase vectors converge to 1.
Theorem 2. If we put $a^{\{st\}}_n := 0$ and $c_n := \eta\left(\sum_{i=1}^n \varphi_{i,n}\right)^{1/2}$ (with η an appropriate constant), then $\lim_{n\to\infty}\|w^{\{st\}}_n\| = 1$ in probability.
Proof. By definition, … Therefore, by Chebyshev's Inequality we have $\lim$ … Finally, the ordinary version of the Law of Large Numbers implies … so all entries of …

Therefore, if we set $a^{\{st\}}_n$, $b_{i,n}$ and $c_n$ as in Theorem 2, all conditions in Definition 7, Definition 8 and Definition 9 are asymptotically satisfied. In addition, we have obtained the result stated in Claim 2.
In view of Corollary 4, if λ < 0.5 is not satisfied, the norms of natural phrase vectors will not converge. This prediction is experimentally verified in Section 5.2.

Proof of Theorem 1 and an Intuitive Explanation
Recall Equation (1): $p^t_{i,n}$ is decomposed into a linear sum of $p^{s/t\backslash s}_{i,n}$ and $p^{\{st\}}_{i,n}$. The next lemma suggests that $F(p^t_{i,n} + 1/n)$ can similarly be treated like a linear sum of $F(p^{s/t\backslash s}_{i,n} + 1/n)$ and $F(p^{\{st\}}_{i,n} + 1/n)$, as if F had some linearity.
Lemma 5. The set of random variables … is uniformly integrable, and …

Proof. The intuition is that when $np_{i,n} \to 0$, the probability $p_{i,n}$ is small compared to 1/n; and since $p^\Upsilon_{i,n}$ (Υ := {ST} or Υ := S/T\S) is on the same scale as $p_{i,n}$, we can use a linear approximation of F …

The formal proof is as below. For brevity, we set $\pi := \pi_{S/T\backslash S}$, $P_1 := p^{S/T\backslash S}_{i,n}$ and $P_2 := p^{\{ST\}}_{i,n}$. By Equation (1) we have $p^T_{i,n} = \pi P_1 + (1-\pi)P_2$, so $\mathrm{FA}(p^T_{i,n}) = \mathrm{FA}(\pi P_1 + (1-\pi)P_2)$ lies between $\mathrm{FA}(P_1)$ and $\mathrm{FA}(P_2)$. Therefore, … By Lemma 1(c), $\mathrm{FA}(P_1)^2/\varphi_{i,n}$ and $\mathrm{FA}(P_2)^2/\varphi_{i,n}$ are uniformly integrable. So for any ε > 0, we have $\mathbb{E}\big[\mathrm{FA}(P_1)^2\,\mathbb{I}\{\mathrm{FA}(P_1)^2 > N\varphi_{i,n}\}\big] < \varepsilon\varphi_{i,n}$ and $\mathbb{E}\big[\mathrm{FA}(P_2)^2\,\mathbb{I}\{\mathrm{FA}(P_2)^2 > N\varphi_{i,n}\}\big] < \varepsilon\varphi_{i,n}$ for some N. Consider the condition … The previous arguments suggest that we can neglect the possibility of C being satisfied, because the result (5) is arbitrarily small. Next, we consider the complement of C, namely … Under this condition, intuitively $\mathrm{FA}(P_1)$ and $\mathrm{FA}(P_2)$ are restricted to a small range on which a linear approximation of F becomes valid. More precisely, we show that … which will complete the proof. For brevity, we use H to denote the inverse function of FB: … Note that FB, H and J do not depend on n, i, S or T. By Lemma 2(a), we can replace $\varphi_{i,n}$ with $np_{i,n}\cdot n^{-2\lambda}$; and note that … where D is the condition … Thus, when $np_{i,n} \to 0$, we have …

Now, we are ready to prove Theorem 1. An intuitive discussion is given after the proof.
Proof of Theorem 1. As in Theorem 2, we set $a^{\{st\}}_n := 0$ and $c_n := \eta\left(\sum_{i=1}^n \varphi_{i,n}\right)^{1/2}$. If $a^t_n := 0$ for all t, one can calculate that $\lim_{n\to\infty}\frac1n\sum_{i=1}^n w^T_{i,n} = 0$ in probability, as in the proof of Theorem 2 and with the help of Lemma 5. Thus, we can set $a^t_n := 0$. Then, by definition … Next, by Lemma 5, Lemma 3 and the Triangle Inequality, we can replace $F(p^T_{i,n} + 1/n)$ with … For brevity, we put $\pi_1 := \pi_{S/T\backslash S}$, $\pi_2 := \pi_{T/S\backslash T}$, … We use "≈" to denote asymptotic equality at the limit n → ∞. So … Again, by Lemma 1(a)(b), Lemma 3 and the Triangle Inequality, we can replace … By Corollary 4, we have …

Intuitively, it is as if the word vector $w^T_n$ can be decomposed into two components: … where "≈" is a heuristic symbol and the other notations are as in the Proof of Theorem 1. This decomposition corresponds to Equation (1), coming from the fact that occurrences of a word target T can be divided into two types, one neighboring word S and the other not. Similarly, it is as if $w^S_n$ can be decomposed into … Thus, by taking the average $\frac12(w^S_n + w^T_n)$, the $W^{S/T\backslash S}_{i,n}$ and $W^{T/S\backslash T}_{i,n}$ terms tend to cancel each other out because they are independent, which corresponds to the $\mathbb{E}\big[W^{S/T\backslash S}_{i,n} W^{T/S\backslash T}_{i,n}\big] = 0$ argument in the proof and results in a $\frac14(\pi_1^2 + \pi_2^2)$ term in the bias bound.

It is noteworthy that, using the same argument as in the Proof of Theorem 1, one can derive an upper bound for the Euclidean distance … This upper bound reflects how close one might expect $\frac12(w^S_n + w^T_n)$ and $w^{\{ST\}}_n$ to be; it is looser than our bias bound presented in Theorem 1, suggesting that additive composition can perform better than one might expect. The difference between $\frac{\sqrt2}{2}(\pi_1 + \pi_2)$ and $\frac12\sqrt{\pi_1^2 + \pi_2^2 + \pi_1\pi_2}$ is exactly because $\big(W^{S/T\backslash S}_{i,n}\big)_{1\le i\le n}$ and $\big(W^{T/S\backslash T}_{i,n}\big)_{1\le i\le n}$ are independent and cancel each other out.

In view of the above explanation of the Proof of Theorem 1, the technical points are as follows.
First, the heuristic "≈" is not strict: there is a difference between the two versions of $W^\Upsilon_{i,n}$ (with and without the expected value subtracted), and there is a difference between $F(p^T_{i,n} + 1/n)$ and a linear combination of $F(p^{S/T\backslash S}_{i,n} + 1/n)$ and $F(p^{\{ST\}}_{i,n} + 1/n)$. However, by Lemma 1(b) the expected values converge to 0, and by Lemma 5 the linear approximation holds in the limit; so the first issue is settled. Second, and most importantly, the terms in Equation (6) have to converge to constants independent of S and T, otherwise they cannot be separated from $\pi_1$ and $\pi_2$ in the calculation of $\big(B^{\{ST\}}_n\big)^2$. Equation (6) comes from Corollary 4, which comes from Lemma 3 and is deeply related to Assumptions (A)(B) and the condition λ < 0.5.
Insights brought by our theory lead to several applications. First, as we found that the power law tail of natural language data requires λ < 0.5 for constructing additive compositional vector representations, our theory provides important guidance for the empirical research of Distributional Semantics (Section 3.1). Second, as we found that the component shared by $w^T_n$ and $w^S_n$ survives the averaging operation (i.e., the $W^{S/T\backslash S}_{i,n}$ and $W^{T/S\backslash T}_{i,n}$ components cancel each other out), we come to the idea of harnessing additive composition by engineering what is shared by the summands. Then, for example, we can make additive composition aware of word order (Section 3.2). Third, as one can read from Lemma 2(c)(d) and the second step of the Proof of Lemma 3, it is important to realize that the behavior of vector representations is dominated by entries at dimensions corresponding to low-frequency words, namely $w^\Upsilon_{i,n}$ for $\frac{n}{\delta\ln n} \le i \le n$. This understanding has an impact on dimension reduction (Section 3.3).

Hierarchical Pitman-Yor Process
In Assumptions (A)(B) of Theorem 1, we required several properties to be satisfied by the probability values $p_{i,n}$ and $p^\Upsilon_{i,n}$. Meanwhile, $p_{i,n}$ and $p^\Upsilon_{i,n}$ (1 ≤ i ≤ n) define distributions from which words can be generated. This setting resembles a Bayesian model in which priors of word distributions are specified.
Conversely, by the well-known de Finetti's Theorem, an exchangeable random sequence of words (i.e., given any sample sequence, all permutations of that sample occur with the same probability) can be seen as if the words were drawn i.i.d. from a word distribution, where the distribution itself is drawn from a prior. A widely studied example is the Pitman-Yor Process (Pitman and Yor, 1997; Pitman, 2006); in this section, we use the process to define a generative model from which Assumptions (A)(B) can be derived.
Definition 11. A Pitman-Yor Process PY(α, θ) (0 < α < 1, θ > −α) defines a prior for word distributions; it is the prior corresponding to the exchangeable random sequence generated by the following Chinese Restaurant Process:
1. First, generate a new word.
2. At each step, let $C(\ell)$ be the count of word ℓ, and $C := \sum_\ell C(\ell)$ the total count; let N be the number of distinct words. Then:
(2.1) Generate a new word, with probability $\dfrac{\theta + \alpha N}{\theta + C}$.
(2.2) Or, generate a new copy of an existing word ℓ, with probability $\dfrac{C(\ell) - \alpha}{\theta + C}$.
Definition 12. In the above process PY(α, θ), we define $p(\ell) := \lim C(\ell)/C$, where the limit is taken as the number of steps → ∞. Index the words as $\ell_i$ such that $p(\ell_i) \ge p(\ell_{i+1})$, and put $p_i := p(\ell_i)$.
Theorem 3. For a sequence generated by PY(α, θ), we have $\lim C/N^{1/\alpha} = Z$ for some random variable Z, where the limit is taken as the number of steps → ∞.
where Z is the same as in Theorem 3.
Theorem 4 shows that, if words are generated by a Pitman-Yor Process PY(α, θ), then $p_i$ has a power law tail of index α. This is the same form as Assumption (A), and when α ≈ 1, it approximates Zipf's Law.
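The Chinese Restaurant Process of Definition 11 is easy to simulate; the sketch below follows steps (2.1)-(2.2) directly (the function name, hyper-parameters and step count are our own choices for illustration).

```python
import random

def crp_pitman_yor(alpha, theta, steps, rng):
    """Chinese Restaurant Process for PY(alpha, theta):
    a new word appears with probability (theta + alpha*N) / (theta + C);
    otherwise word k recurs with probability (C(k) - alpha) / (theta + C)."""
    counts = []                 # counts[k] = C(word_k)
    total = 0                   # C
    for _ in range(steps):
        if rng.random() < (theta + alpha * len(counts)) / (theta + total):
            counts.append(1)    # step (2.1): new word
        else:
            # step (2.2): existing word k, proportional to counts[k] - alpha
            r = rng.random() * (total - alpha * len(counts))
            acc, k = 0.0, 0
            while k < len(counts) - 1:
                acc += counts[k] - alpha
                if r < acc:
                    break
                k += 1
            counts[k] += 1
        total += 1
    return counts
```

The number of distinct words grows sublinearly in the number of steps (roughly like $C^\alpha$, as in Theorem 3), while a few early words accumulate large counts, giving a heavy-tailed frequency profile.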
For two sequences generated by PY(α, θ), the corresponding Z in Theorem 3 may differ, even though the two sequences are generated with the same hyper-parameters α and θ. Nevertheless, the limit always exists, and Z is known to follow a statistical distribution that can be viewed as characterizing the prior of the word distribution {p_i}. The distribution of Z^{−α} is explicitly given in Pitman (2006, Theorem 3.8) in terms of the Mittag-Leffler density function g_α(x); in this article, we only need the fact that lim_{x→0} x g_α(x) is a nonzero constant.
Next, we consider the co-occurrence probability p^Υ(w), conditioned on word w being in the context of a target Υ. One first notes that p^Υ(w) is likely to be related to p(w); i.e., frequent words are likely to occur in every context, regardless of the target. To model this intuition, the idea of the Hierarchical Pitman-Yor Process (Teh, 2006) is to adapt PY(α, θ) such that in each step, if a new word is to be generated, it is no longer generated brand new, but drawn from another Pitman-Yor Process instead. This second Pitman-Yor Process serves as a "reference" which controls how frequently a word is likely to occur. More precisely, a Hierarchical Pitman-Yor Process HPY(α_1, θ_1; α_2, θ_2) generates sequences as follows.
Definition 13. In HPY(α_1, θ_1; α_2, θ_2), instead of generating words directly, one generates a "reference" at each step, where a reference can refer to a new word or an existing word. We write r for a reference and w(r) for the word it refers to.
1. First step, generate a new reference which refers to a new word.
2. At each subsequent step, let C(r) be the count of reference r, and C(w) := Σ_{w(r)=w} C(r) the count of all references referring to word w; let C := Σ_r C(r) be the total count, N_r(w) the number of distinct references referring to w, and N_r := Σ_w N_r(w) the total number of distinct references; finally, let N_w be the number of distinct words. Then:
(2.1) Generate a new reference referring to a new word, with probability ((θ_1 + α_1 N_r)/(θ_1 + C)) · ((θ_2 + α_2 N_w)/(θ_2 + N_r)).
(2.2) Generate a new reference referring to an existing word w, with probability ((θ_1 + α_1 N_r)/(θ_1 + C)) · ((N_r(w) − α_2)/(θ_2 + N_r)).
(2.3) Or, generate a new copy of an existing reference r, with probability (C(r) − α_1)/(θ_1 + C).
It is easy to see from the definition that HPY(α_1, θ_1; α_2, θ_2) generates an exchangeable word sequence; and if we focus on distinct references (i.e., ignoring (2.3) and regarding N_r(w) as "the count of word w" in the ordinary Pitman-Yor Process), then the process becomes PY(α_2, θ_2). We assume this is the same process which defines the word probability p(w), so p(w) = lim N_r(w)/N_r; and we define the conditional probability p^Υ(w) as p^Υ(w) := lim C(w)/C. Thus, HPY(α_1, θ_1; α_2, θ_2) indeed connects p^Υ(w) to p(w). This connection between word probability and conditioned word probability has been explored in Teh (2006), where it is used in an n-gram language model to connect the bigram probability p(w|u) to the unigram probability p(w), for deriving a smoothing method.
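The two-level generative story of Definition 13 can be sketched as a Chinese Restaurant Franchise. The code below is our minimal reading of the definition (variable names and the sampling order are our own): the bottom level copies references as in PY(α_1, θ_1), and each newly opened reference draws its word from a top-level PY(α_2, θ_2) whose "customers" are the distinct references.

```python
import random

def sample_hpy(a1, t1, a2, t2, steps, rng):
    """Two-level Chinese Restaurant Franchise sketch of HPY(a1, t1; a2, t2)."""
    ref_word, ref_count = [], []   # word id and count C(r) per reference
    word_refs = []                 # N_r(w): number of distinct refs per word
    total = 0                      # C: total reference count
    seq = []
    for _ in range(steps):
        nr = len(ref_count)
        new_ref = (total == 0 or
                   rng.random() < (t1 + a1 * nr) / (t1 + total))
        if not new_ref:
            # duplicate an existing reference, prob proportional to C(r) - a1
            r_ = rng.random() * (total - a1 * nr)
            acc, ref = 0.0, nr - 1
            for i, c in enumerate(ref_count):
                acc += c - a1
                if r_ < acc:
                    ref = i
                    break
            ref_count[ref] += 1
            w = ref_word[ref]
        else:
            # open a new reference; choose its word at the top level
            nw = len(word_refs)
            if nr == 0 or rng.random() < (t2 + a2 * nw) / (t2 + nr):
                word_refs.append(0)      # new word
                w = nw
            else:
                # existing word, prob proportional to N_r(w) - a2
                r_ = rng.random() * (nr - a2 * nw)
                acc, w = 0.0, nw - 1
                for i, c in enumerate(word_refs):
                    acc += c - a2
                    if r_ < acc:
                        w = i
                        break
            word_refs[w] += 1
            ref_word.append(w)
            ref_count.append(1)
        total += 1
        seq.append(w)
    return seq, word_refs

rng = random.Random(1)
seq, word_refs = sample_hpy(0.9, 1.0, 0.8, 1.0, steps=5000, rng=rng)
```

Ignoring the duplication branch, the word choices above follow exactly the PY(α_2, θ_2) dynamics, which is the point of the hierarchical construction.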
Unfortunately, a precise analysis of the above p^Υ(w) is beyond the reach of the authors; instead, we consider a slightly modified process which is much simpler for our purposes.
Definition 14. A Modified Hierarchical Pitman-Yor Process MHPY(α_1, θ_1; α_2, θ_2) generates sequences as follows, using the same notation as in Definition 13: 1. First step, generate a new reference which refers to a new word.
As for Assumption (B2), we assume θ_1 = 1 and derive the distribution of Z from (7). Since lim_{x→0} x g_α(x) is a nonzero constant, the resulting probability density function is of order o(z^{−2}) as z → ∞, so the random variable Z_w has a power law tail of index 1. Thus, Assumption (B2) is approximately satisfied when α_1 ≈ 1 and θ_1 = 1.

Applications
In this section, we demonstrate three applications of our theory.

The Choice of Function F
The condition λ < 0.5 places a nontrivial constraint on the function F. In Section 2.4 we showed that this is a necessary condition for the norms of natural phrase vectors to converge. The convergence of norms is an outstanding property that might affect not only additive composition but the composition ability of vector representations in general. Specifically, we note that F(x) = ln x when λ = 0, and F(x) = √x when λ = 0.5. It is straightforward to predict that these functions might perform better in composition tasks than functions with larger λ, such as F(x) := x or F(x) := x ln x. In Section 5.2, we show experiments that verify the necessity of λ < 0.5 for our bias bound to hold, and in Section 6 we show that F indeed drastically affects additive compositionality as judged by human annotators: while F(x) := ln x and F(x) := √x perform similarly well, F(x) := x and F(x) := x ln x are much worse.
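The family of functions F can be illustrated with a Box-Cox-style parameterization, which recovers ln at λ = 0 and behaves like √x (up to an affine rescaling) at λ = 0.5; this particular parameterization is our assumption for illustration, and the exact form is fixed by Definition 10 in the paper:

```python
import math

def F(x, lam):
    """Box-Cox-style stand-in for the family of Definition 10:
    F_0(x) = ln x, and F_lam(x) = (x**lam - 1)/lam otherwise, so that
    lam = 0.5 is an affine image of sqrt(x).  The paper's exact
    parameterization may differ; this is an illustrative assumption."""
    if lam == 0.0:
        return math.log(x)
    return (x ** lam - 1.0) / lam

# toy co-occurrence probabilities for one target (hypothetical numbers)
p = [0.4, 0.3, 0.2, 0.1]
vec_ln   = [F(x, 0.0) for x in p]   # lam = 0   -> F = ln
vec_sqrt = [F(x, 0.5) for x in p]   # lam = 0.5 -> affine image of sqrt
vec_id   = [F(x, 1.0) for x in p]   # lam = 1   -> affine image of identity
```

As λ → 0 the expression (x^λ − 1)/λ converges to ln x, so the family interpolates continuously between the "good" choices ln and √ and the "bad" choices with larger λ.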
Different settings of the function F have been considered in previous research, and speculations have been made about the reasons for the semantic additivity of some of the vector representations. In Pennington et al. (2014), the authors noted that the logarithm is a homomorphism from multiplication to addition, and used this property to justify F(x) := ln x for training semantically additive word vectors, based on the unverified hypothesis that multiplications of co-occurrence probabilities are special in semantics. On the other hand, Lebret and Collobert (2014) proposed to use F(x) := √x, motivated by the Hellinger distance between two probability distributions, and reported that it performs better than F(x) := x. Stratos et al. (2015) proposed a similar but more general and better-motivated model, which attributed F(x) := √x to an optimal choice that stabilizes the variances of Poisson random variables; based on the assumption that co-occurrence counts are generated by a Poisson Process, the authors pointed out that F(x) := √x may have the effect of stabilizing the variance in estimating word vectors. In contrast, our theory shows clearly that F affects the bias of additive composition, besides the variance. All in all, none of the previous research can explain why F(x) := ln x and F(x) := √x are both good choices but F(x) := x is not.
Intuitively, the condition λ < 0.5 requires that F(x) decrease steeply as x tends to 0. The steep slope has the effect of "amplifying" the fluctuations of lower co-occurrence probabilities, and of "suppressing" higher ones as a result. Formally, this can be read from Lemma 1, where we show that Var[F(p^Υ_{i,n} + 1/n)] scales with ϕ_{i,n} = p_{i,n} (p_{i,n} + (βn)^{−1})^{−1+2λ}. When λ < 0.5, the factor (p_{i,n} + (βn)^{−1})^{−1+2λ} decreases as p_{i,n} increases, and the decrease is faster when λ is smaller. Thus, in the vector representations we consider, higher co-occurrence probabilities are "suppressed" more when λ is smaller. However, in practice higher co-occurrence probabilities can be estimated more precisely, and they often correspond to important words such as syntax markers. Therefore, though our bias bound only requires λ < 0.5, in practice one might obtain better semantic representations by adopting a λ that is not too small.
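The suppression effect can be checked numerically. The sketch below evaluates the variance scale ϕ = p (p + 1/(βn))^{−1+2λ} quoted above (the value of βn is a hypothetical corpus-scale constant chosen for illustration) and compares how much a frequent entry outweighs a rare one for different λ:

```python
def phi(p, lam, beta_n):
    """Variance scale from Lemma 1 as quoted in the text:
    phi = p * (p + 1/(beta*n)) ** (-1 + 2*lam)."""
    return p * (p + 1.0 / beta_n) ** (-1.0 + 2.0 * lam)

beta_n = 1e6                  # beta * n: a hypothetical corpus-scale constant
p_low, p_high = 1e-6, 1e-2    # a rare vs. a frequent context word

# relative variance weight of the frequent entry, for several lambda values
ratios = {lam: phi(p_high, lam, beta_n) / phi(p_low, lam, beta_n)
          for lam in (0.0, 0.25, 0.5)}
```

Smaller λ gives a smaller ratio, i.e., the frequent entry contributes relatively less, which is exactly the "suppression" described above; at λ = 0.5 the suppression factor disappears and the ratio equals p_high/p_low.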

Handling Word Order in Additive Composition
By considering the vector representation w^{{st}}_n we have ignored word order and conflated the phrases "s t" and "t s". Though the meanings of the two might be related somehow, to treat a compositional framework as approximating w^{{st}}_n instead of w^{st}_n would certainly be troublesome, especially when one tries to extend our theory to longer phrases or even sentences. As the following famous example shows, meanings of sentences may differ greatly as word order changes.

Figure 2: Surrounding the two-word phrase "s t", Near-far Contexts for s•, •t and st are the same.

Figure 3: Surrounding the order-reversed phrase "t s", Near-far Contexts for s• and •t differ in their N-F labels.

a. It was not the sales manager who hit the bottle that day, but the office worker with the serious drinking problem.
b. That day the office manager, who was drinking, hit the problem sales worker with a bottle, but it was not serious.
Thus, it is necessary to handle the change of meanings brought about by different word orders. Traditionally, additive composition is considered unsuitable for this purpose, because one always has w^s_n + w^t_n = w^t_n + w^s_n. However, the commutativity can be broken by defining different contexts for "left-hand-side" words and "right-hand-side" words, denoted by t• and •t, respectively. Then, the co-occurrence probabilities p^{t•}_{i,n} and p^{•t}_{i,n} will be different, so (1/2)(w^{s•}_n + w^{•t}_n) and (1/2)(w^{t•}_n + w^{•s}_n) are different vectors. In this section, we propose the Near-far Context, which specifies contexts for s• and •t such that the additive composition (1/2)(w^{s•}_n + w^{•t}_n) approximates the natural vector w^{st}_n of the ordered phrase "s t".
Definition 15. In Near-far Context, context words are assigned labels, either N or F. For constructing vector representations, we use a lexicon of N-F labeled words, and regard words with different labels as different entries in the lexicon. For any target, we label the nearer two words on each side by N, and the farther two words on each side by F. The exception is that for a "left-hand-side" word s• we skip the one word adjacent to its right; similarly, for a "right-hand-side" word •t we skip the one word adjacent to its left (Figure 2).
The idea behind Near-far Context is that, in the context of the phrase "s t", each word is assigned the same N-F label as in the contexts of s• and •t (Figure 2). On the other hand, for targets s and t occurring in the order-reversed phrase "t s", context words are labeled differently for s• and •t (Figure 3). As we discussed in Section 2.2, the key fact about additive composition is that if a word token t comes from the phrase "s t" or "t s", the context of this token of t is almost the same as the context of "s t" or "t s". By introducing different labels for context words of t• and •t, we are able to distinguish "s t" from "t s". More precisely, as we discussed in Section 2.5, the component shared by w^{s•}_n and w^{•t}_n will survive in the average (1/2)(w^{s•}_n + w^{•t}_n), whereas non-shared components tend to cancel each other out. Thus, the additive composition (1/2)(w^{s•}_n + w^{•t}_n) will be closer to w^{st}_n than to w^{ts}_n, because s• and •t share context surrounding "s t" but not "t s".
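A small labeling routine makes the mechanism of Definition 15 concrete. The sketch below is our reading of the definition (boundary handling near sentence edges is an assumption of this sketch): for the phrase "s t" the contexts of s• and •t receive identical N-F labels, while for the reversed phrase "t s" they differ.

```python
def nearfar_context(tokens, i, role):
    """Return [(word, 'N' or 'F')] context labels around tokens[i].

    role 'left'  : target is a left-hand-side word s*, skip one word to its right
    role 'right' : target is a right-hand-side word *t, skip one word to its left
    Two N words and two F words per side, per Definition 15; boundary
    handling is an assumption of this sketch."""
    skip_right = 1 if role == 'left' else 0
    skip_left = 1 if role == 'right' else 0
    out = []
    for d in range(1, 5):                  # left side of the target
        pos = i - skip_left - d
        if pos >= 0:
            out.append((tokens[pos], 'N' if d <= 2 else 'F'))
    for d in range(1, 5):                  # right side of the target
        pos = i + skip_right + d
        if pos < len(tokens):
            out.append((tokens[pos], 'N' if d <= 2 else 'F'))
    return out

toks = "a b c s t d e f g".split()            # the phrase "s t" in context
ctx_s = nearfar_context(toks, 3, 'left')      # context of s*
ctx_t = nearfar_context(toks, 4, 'right')     # context of *t

toks_rev = "a b c t s d e f g".split()        # the order-reversed phrase "t s"
ctx_s_rev = nearfar_context(toks_rev, 4, 'left')
ctx_t_rev = nearfar_context(toks_rev, 3, 'right')
```

For the phrase "s t" the two contexts coincide (as in Figure 2), so the shared component survives averaging; for "t s" the same physical word can carry different N-F labels in the two contexts (as in Figure 3).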
The following claim is parallel to Claim 1.
Claim 3. Under conditions parallel to Claim 1, a bias bound of the same form holds for the approximation error ||w^{st}_n − (1/2)(w^{s•}_n + w^{•t}_n)||. In Section 5.3, we verify Claim 3 experimentally, and show that, in contrast, the error ||w^{ts}_n − (1/2)(w^{s•}_n + w^{•t}_n)|| for approximating the order-reversed phrase "t s" can exceed this bias bound. Further, we demonstrate that by using the additive composition of Near-far Context vectors, one can indeed assess meaning similarities between ordered phrases.

Dimension Reduction
So far we have only discussed vector representations whose dimension is as high as the lexicon size n. In practice, people mainly use low-dimensional "embeddings" of words to represent their meanings. Many of the embeddings, including SGNS and GloVe, can be formalized as linear dimension reductions; that is, one finds a d-dimensional vector v^t (where d ≪ n) for each target word t, and an (n, d)-matrix A, such that Σ_t L(Av^t, w^t_n) is minimized for some loss function L(·, ·). In other words, Av^t is trained as a good approximation of w^t_n. Naturally, we expect the loss function L to be a crucial factor in word embeddings. Although there are empirical investigations of other detailed designs of embedding methods (e.g. how to count co-occurrences, see Levy et al. 2015), the loss functions have not been explicitly discussed previously. In this section, we discuss how the loss functions affect the additive compositionality of word embeddings, from the viewpoint of bounding the bias ||v^{{st}} − (1/2)(v^s + v^t)||.

SVD. When L is the L2-loss, its minimization has a closed-form solution given by the Singular Value Decomposition (SVD). More precisely, one considers a matrix whose j-th column is w^t_n, where t is the j-th target word. Then, SVD factorizes the matrix into UΣV^T, where U and V are orthonormal and Σ is diagonal. Let Σ_d denote Σ truncated to the top d singular values. Then, A is solved as U√Σ_d, and v^t is the j-th column of √Σ_d V^T. SVD has been used in Lebret and Collobert (2014), Stratos et al. (2015) and Levy et al. (2015).
For the L2-loss, we have Av^{{st}} = w^{{st}}_n + ε_1, Av^s = w^s_n + ε_2 and Av^t = w^t_n + ε_3, where the errors ε_1, ε_2 and ε_3 are minimized. Thus, by the Triangle Inequality we have ||Av^{{st}} − (1/2)(Av^s + Av^t)|| ≤ B^{{st}}_n + ||ε_1|| + (1/2)||ε_2|| + (1/2)||ε_3||, where B^{{st}}_n denotes the bias ||w^{{st}}_n − (1/2)(w^s_n + w^t_n)||. Further, by Claim 1 we can bound B^{{st}}_n for sufficiently large n, so ||v^{{st}} − (1/2)(v^s + v^t)|| is bounded in turn because A is a bounded operator. This bound suggests that word embeddings trained by SVD preserve additive compositionality.
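The closed-form SVD solution described above can be written in a few lines of NumPy; the matrix W below is a random stand-in for the matrix whose columns are the distributional vectors w^t_n (the real matrix would come from corpus counts):

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_targets, d = 50, 40, 5

# toy stand-in for the matrix whose j-th column is w^t_n
W = rng.standard_normal((n, n_targets))

U, s, Vt = np.linalg.svd(W, full_matrices=False)
sqrt_Sd = np.sqrt(s[:d])
A = U[:, :d] * sqrt_Sd               # A = U * sqrt(Sigma_d)
V_emb = sqrt_Sd[:, None] * Vt[:d]    # column j = v^t = sqrt(Sigma_d) V^T e_j

approx = A @ V_emb                   # the L2-optimal rank-d approximation of W
```

By the Eckart-Young theorem, A @ V_emb coincides with the truncated SVD and is the best rank-d approximation of W in Frobenius norm, which is exactly the sense in which the L2-loss is minimized.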
However, the same argument does not directly apply to other loss functions, because a general loss may not satisfy a triangle inequality, and a bound for Euclidean distance may not always transform into a bound for the loss, or vice versa. Specifically, we describe two widely used alternative embeddings in the following and discuss the effects of their loss functions.

GloVe. The GloVe model (Pennington et al., 2014) trains a dimension reduction for vector representations with F(x) := ln x. Let v^t_i be the i-th entry of Av^t, and let C^t_i denote how many times target t co-occurs with the i-th context word. Then, the loss function is Σ_{t,i} f(C^t_i)(v^t_i − ln C^t_i)², up to bias terms which we omit here. In words, GloVe uses a weighted L2-loss, where the weight is a function of the co-occurrence count C^t_i. The function f is constant when C^t_i is larger than a threshold, and decreases to 0 as C^t_i → 0. To minimize the loss, GloVe uses stochastic gradient descent methods such as AdaGrad (Duchi et al., 2011).
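A simplified sketch of this weighted loss is given below; the bias terms of the full GloVe model are omitted, and the weighting exponent 0.75 is the published default:

```python
def glove_weight(c, cutoff=10.0, power=0.75):
    """GloVe's weighting function f: (c/cutoff)**power below the cutoff,
    constant 1 above it (0.75 is the published default exponent)."""
    return (c / cutoff) ** power if c < cutoff else 1.0

def glove_loss(v, counts, targets):
    """Weighted L2-loss sum_i f(C^t_i) * (v_i - target_i)**2, where
    target_i stands for ln C^t_i; a simplification that drops the
    bias terms of the full model."""
    return sum(glove_weight(c) * (vi - ti) ** 2
               for vi, c, ti in zip(v, counts, targets))

counts  = [2.0, 10.0, 50.0]          # hypothetical co-occurrence counts
targets = [0.69, 2.30, 3.91]         # roughly ln of the counts
perfect = glove_loss(targets, counts, targets)
off     = glove_loss([t + 0.5 for t in targets], counts, targets)
```

Note how the rare entry (count 2) receives weight below 1 while frequent entries receive full weight; this is the down-weighting of rare context words discussed later in this section.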

SGNS
The Skip-Gram with Negative Sampling (SGNS) model (Mikolov et al., 2013b) also trains a dimension reduction for vector representations with F(x) := ln x. The training is based on Noise Contrastive Estimation (NCE) (Gutmann and Hyvärinen, 2012), so the loss function inherits two parameters, namely the number k of noise samples per data point, and the noise distribution p^{noise}_{i,n}. The function is given below.
Claim 4. Let v^t_i be the i-th entry of Av^t. Then SGNS uses a loss function of the form Σ_{t,i} C(t) D_φ(v^t_i + ln(k p^{noise}_{i,n}), w^t_{i,n} + ln(k p^{noise}_{i,n})), where C(t) is the count of target t, and D_φ(·, ·) is the Bregman divergence associated to a convex function φ depending on k. When k → +∞, D_φ converges to the Bregman divergence D_ϕ associated to ϕ(x) := exp(x).
The proof of Claim 4 is found in Appendix B. We draw a graph of the SGNS loss in Figure 4, where D_φ(v^t_i + ln(k p^{noise}_{i,n}), w^t_{i,n} + ln(k p^{noise}_{i,n})) is plotted on the y-axis against v^t_i − w^t_{i,n} on the x-axis. Note that the graph grows faster at x → +∞ than at x → −∞, suggesting that an overestimation of w^t_{i,n} is punished more than an underestimation. In addition, the loss function puts more weight on high co-occurrence probabilities, as indicated by the p^t_{i,n} coefficient in the equation of the limit curve (Figure 4). Thus, the SGNS loss tends to enforce underestimation of w^t_{i,n} for frequent context words (as overestimation is costly), and to compensate w^t_{i,n} for rare ones (i.e., overestimation on rare context words is affordable and will be done if necessary). This is a special property of SGNS which might have some smoothing effect.

Figure 4: A graph of the SGNS loss function with two asymptotes (red), and its limit curve at k → +∞ with one asymptote (blue).
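The asymmetry visible in Figure 4 can be reproduced with the k → +∞ limit of Claim 4, i.e., the Bregman divergence generated by exp; the snippet below is an illustration of that limit case only, not of the finite-k loss:

```python
import math

def bregman(phi, dphi, p, q):
    """Bregman divergence D_phi(p, q) = phi(p) - phi(q) - phi'(q) * (p - q)."""
    return phi(p) - phi(q) - dphi(q) * (p - q)

def d_exp(p, q):
    """The k -> infinity limit in Claim 4: divergence generated by exp
    (for which phi and its derivative coincide)."""
    return bregman(math.exp, math.exp, p, q)

w = 0.0                       # a "true" entry w^t_{i,n}, hypothetical value
over  = d_exp(w + 1.0, w)     # estimate one unit too high
under = d_exp(w - 1.0, w)     # estimate one unit too low
```

Here d_exp(q + x, q) = e^q (e^x − 1 − x), which grows exponentially for x > 0 but only linearly for x < 0, matching the statement that overestimation is punished more than underestimation.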
One common attribute shared by GloVe and SGNS but not by SVD is that their loss functions put less weight on rare context words. As a result, the trained Av^t may fail to precisely approximate the low co-occurrence part of w^t_n. As we discussed in Section 2.5, entries corresponding to low-frequency words dominate the behavior of vector representations; thus, failing to precisely approximate this part might hinder the inheritance of additive compositionality from high-dimensional vector representations to low-dimensional embeddings. Therefore, we conjecture that word vectors trained by GloVe or SGNS might exhibit less additive compositionality compared to SVD, and that the composition might be less respectful of our bias bound.
The above discussion is only exploratory and does not fully reflect practice, because after v^t is trained by dimension reduction, people usually re-scale the norms of all v^t to 1, and then use the normalized vectors in additive composition. It is not clear why this normalization step usually results in better performance.
Nevertheless, in our experiments (Section 5.4), we find that word vectors trained by SVD preserve our bias bound well in additive composition, even after the normalization step is conducted. In contrast, vectors trained by GloVe or SGNS are less respectful of the bound. Further, in extrinsic evaluations (Section 6) we show that vectors trained by SVD can indeed be more additively compositional, as judged by human annotators.

Related Work
Additive composition is a classical approach to approximating meanings of phrases and/or sentences (Foltz et al., 1998; Landauer and Dumais, 1997). Compared to other composition operations, vector addition/average has either served as a strong baseline (Mitchell and Lapata, 2008; Takase et al., 2016), or remained one of the most competitive methods until recently (Banea et al., 2014). Additive composition has also been successfully integrated into several NLP systems. For example, Tian et al. (2014) use vector additions for assessing semantic similarities between paraphrase candidates in a logic-based textual entailment recognition system (e.g. the similarity between "blamed for death" and "cause loss of life" is calculated by the cosine similarity between the sums of word vectors v_blame + v_death and v_cause + v_loss + v_life); in Iyyer et al. (2015), the average of the vectors of all words in a sentence/document is fed into a deep neural network for sentiment analysis and question answering, which achieves near state-of-the-art performance with minimal training time. There are other semantic relations handled by vector additions as well, such as word analogy (e.g. the vector v_king − v_man + v_woman is close to v_queen, suggesting "man is to king as woman is to queen", see Mikolov et al. 2013a), and synonymy (i.e. a set of synonyms can be represented by the sum of the vectors of the words in the set, see Rothe and Schütze 2015). We expect all these utilities to be related to our theory of additive composition somehow; for example, a link between additive composition and word analogy is hypothesized in Section 6.2. Ultimately, our theory may provide new insights into previous works, for instance insights about how to construct word vectors.
The lack of syntactic or word-order-dependent effects on meaning is considered one of the most important issues of additive composition (Landauer, 2002). Driven by this point of view, a number of advanced compositional frameworks have been proposed to cope with word order and/or syntactic information (Mitchell and Lapata, 2008; Zanzotto et al., 2010; Baroni and Zamparelli, 2010; Coecke et al., 2010; Grefenstette and Sadrzadeh, 2011; Socher et al., 2012; Paperno et al., 2014; Hashimoto et al., 2014). The usual approach is to introduce new parameters that represent different word positions or syntactic roles. For example, given a two-word phrase, one can first transform the two word vectors by different matrices and then add the results, so the two matrices are parameters (Mitchell and Lapata, 2008); or, regarding syntactic roles, one can assign matrices to adjectives and use them to modify vectors of nouns (Baroni and Zamparelli, 2010); further, one can insert neural network layers between parents and children in a syntactic tree (Socher et al., 2012). An empirical comparison of composition models can be found in Blacoe and Lapata (2012), with an accessible introduction to the literature. One theoretical issue with these methods, however, is the lack of a learning guarantee. In contrast, our proposal of the Near-far Context demonstrates that word order can be handled within an additive compositional framework which is parameter-free and comes with a proven bias bound. Recently, Tian et al. (2016) further extended additive composition to realizing a formal semantics.
Error bounds in approximation schemes have been extensively studied in statistical learning theory (Vapnik, 1995;Gnecco and Sanguineti, 2008), and especially for neural networks (Niyogi and Girosi, 1999;Burger and Neubauer, 2001). Since we have formalized compositional frameworks as approximation schemes, there is a good chance to apply the theories of approximation error bounds to this problem, especially for advanced compositional frameworks that have many parameters. Though the theories are usually established on general settings, we see a great potential in using properties that are specific to natural language data, as we demonstrate in this work.
There have been consistent efforts toward understanding stochastic behaviors of natural language. Zipf's Law (Zipf, 1935) and its applications (Kobayashi, 2014), non-parametric Bayesian language models such as the Hierarchical Pitman-Yor Process (Teh, 2006), and the topic model (Blei, 2012) might further help refine our theory. For example, it can be fruitful to consider additive composition of topics.

Experimental Verification
In this section, we conduct experiments on the British National Corpus (BNC) (The BNC Consortium, 2007) to verify assumptions and predictions of our theory. The corpus contains about 100M word tokens, including written texts and utterances in British English. For constructing vector representations we use lemmatized words annotated in the corpus, and for counting co-occurrences we use context windows that do not cross sentence boundaries. The size of the context window is 5 to each side for a target word, and 4 for a target phrase. We extract all unigrams, ordered and unordered bigrams occurring more than 200 times as targets. This results in 16,210 unigrams, 45,793 ordered bigrams and 45,398 unordered bigrams. For the lexicon of context words we use the same set of unigrams.

Figure 5: For each x coordinate, the log-log graphs show the average value of the x-th largest probability ratios p^Υ_{i,n}/p_{i,n} on the y-axis. The ranking is taken among 1 ≤ i ≤ n with Υ fixed, and the average is taken across different Υ that are unigrams, unordered bigrams, or ordered bigrams respectively. Standard deviation is shown as error bars.

Generalized Zipf 's Law
Consider the probability ratio p^Υ_{i,n}/p_{i,n}, where the target Υ can be a unigram, an ordered bigram or an unordered bigram. Assumption (B) of Theorem 1 states that p^Υ_{i,n}/p_{i,n} (1 ≤ i ≤ n) can be viewed as independent sample points drawn from distributions that have the same power law tail of index 1. We verify this assumption in the following.
A power law distribution has two parameters, the index α and the lower bound m of the power law behavior. If a random variable X obeys a power law, the probability of x ≤ X conditioned on m ≤ X is given by P(x ≤ X | m ≤ X) = (m/x)^α. (10) For each fixed Υ, we estimate α and m from the sample p^Υ_{i,n}/p_{i,n} (1 ≤ i ≤ n), using the method of Clauset et al. (2009). Namely, α is estimated by maximizing the likelihood of the sample, and m is chosen to minimize the Kolmogorov-Smirnov statistic, which measures how well the theoretical distribution (10) fits the empirical distribution of the sample. After m is estimated, we plot all p^Υ_{i,n}/p_{i,n} greater than m in a log-log graph, against their rankings. If the sample points are drawn from a power law, the graph will be a straight line. Since Assumption (B) states that the power law tail is the same for all Υ and has index 1, we should obtain the same straight line for all Υ, and the slope of the line should be −1.
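The estimation procedure just described can be sketched compactly. Below, fit_alpha is the continuous maximum-likelihood estimator of Clauset et al. (2009), stated for the survival-function convention P(x ≤ X | m ≤ X) = (m/x)^α used in the text, and ks_stat is the Kolmogorov-Smirnov distance minimized over m; the synthetic Pareto sample is our own check, not corpus data:

```python
import math, random

def fit_alpha(xs, m):
    """MLE of the tail index for P(X >= x | X >= m) = (m/x)**alpha:
    alpha_hat = n / sum(ln(x/m)) over the tail sample."""
    tail = [x for x in xs if x >= m]
    return len(tail) / sum(math.log(x / m) for x in tail)

def ks_stat(xs, m, alpha):
    """Kolmogorov-Smirnov distance between the empirical tail CDF and
    the model CDF F(x) = 1 - (m/x)**alpha."""
    tail = sorted(x for x in xs if x >= m)
    n = len(tail)
    return max(abs((i + 1) / n - (1.0 - (m / x) ** alpha))
               for i, x in enumerate(tail))

# synthetic check: Pareto sample with true index 1.0 and lower bound 1.0
rng = random.Random(0)
xs = [1.0 / rng.random() for _ in range(5000)]
alpha_hat = fit_alpha(xs, 1.0)
```

On this synthetic sample the recovered index is close to the true value 1, and the KS distance against the fitted model is small, which is the kind of agreement the experiment above looks for in the corpus data.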
In Figure 5, we summarize the graphs described above for all Υ. More precisely, we plot the ranked probability ratios for each fixed Υ into the same log-log graph, and then show the average and standard deviation of the x-th largest probability ratios across different Υ. The figure shows that for each target type, most data points lie within a narrow stripe of roughly the same shape, suggesting that the distribution of probability ratios for each fixed Υ is approximately the same. In addition, the shape can be roughly approximated by a straight line with slope −1, which suggests that the distribution is a power law of index 1, verifying Assumption (B). As a concrete example, in Figure 6 we show a log-log graph of the x-th largest probability ratios p^s_{i,n}/p_{i,n} and p^t_{i,n}/p_{i,n} (1 ≤ i ≤ n), where s and t are two individual word targets. The red points are cut off because their y values are lower than the boundaries of power law behavior estimated from data. The blue and green points are the power law tails.

The Choice of Function F
In this section we experimentally verify the effects brought by different functions F. Recall that F is parameterized by λ as defined in Definition 10. In Section 2.4, we showed that E[F(X)²] < ∞ is a necessary and sufficient condition for the norms of natural phrase vectors to converge to 1. If X has a power law tail of index α, then the condition for E[F(X)²] < ∞ is λ < α/2. So if we construct vector representations with different λ, only those vectors satisfying λ < α/2 will have convergent norms. We verify this prediction first.
In Figure 7, we plot the standard deviation of the norms of natural phrase vectors on the y-axis, against the different λ values used for constructing the vectors. We tried λ = 0, 0.1, ..., 1. As the graph shows, as long as λ < 0.5, most of the norms lie within the range 1 ± 0.1. In contrast, the observed standard deviation quickly explodes as λ gets larger. In addition, the transition point appears to be slightly larger than 0.5, which is consistent with the fact that the observed α is slightly larger than 1 (i.e., the slopes −1/α of the power law tails in Figure 5 and Figure 6 appear to be slightly more gradual than −1).
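The divergence of E[F(X)²] at λ ≥ α/2 can also be seen numerically. For a tail of index α = 1 (density proportional to x^{-2} on [1, ∞)) and F(x) = x^λ, the truncated second moment below stabilizes for λ < 0.5 but keeps growing with the truncation point for λ > 0.5; this is our own numerical illustration, not the paper's experiment:

```python
import math

def second_moment(lam, T, steps=100000):
    """Midpoint-rule value of E[F(X)^2] for X with Pareto density x**-2
    on [1, T] (tail index 1), with F(x) = x**lam (or ln x for lam = 0).
    Integration over u = ln x: integral of F(e^u)^2 * e^(-u) du."""
    U = math.log(T)
    du = U / steps
    total = 0.0
    for i in range(steps):
        u = (i + 0.5) * du
        f = u if lam == 0.0 else math.exp(lam * u)
        total += f * f * math.exp(-u) * du
    return total

converged = second_moment(0.25, 1e6) - second_moment(0.25, 1e3)  # small
diverging = second_moment(0.75, 1e6) / second_moment(0.75, 1e3)  # large
```

The λ = 0.25 value barely moves when the truncation point grows from 10³ to 10⁶, while the λ = 0.75 value grows by more than an order of magnitude, mirroring the explosion of norm deviations observed in Figure 7.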
Next, we investigate how F affects the Euclidean distance B^{{st}}_n. In Figure 8, we plot B^{{st}}_n on the y-axis, against (1/2)(π²_{s/t\s} + π²_{t/s\t} + π_{s/t\s} π_{t/s\t}) on the x-axis. In Section 6, we extrinsically evaluate the additive compositionality of vector representations, and find F a crucial factor there: while F(p) := ln p and F(p) := √p evaluate similarly well, F(p) := p and F(p) := p ln p do much worse. This suggests that our bias bound indeed has the power to predict additive compositionality, demonstrating the usefulness of our theory. In contrast, it seems that the average level of approximation errors for observed bigrams (shown as green dashed lines in Figure 8) is less predictive, since the poor choices F(p) := p and F(p) := p ln p actually have lower average error levels. This emphasizes a particular caveat: choosing composition operations by minimizing the observed average error may not always be justifiable. Here, if we regarded the function F as a parameter in additive composition and chose the one with the lowest observed average error, we would end up with the worst setting F(p) := p. This shows how important a learning theory is for research on composition.

Handling Word Order in Additive Composition
For vector representations constructed from the Near-far Contexts (Section 3.2), we have a similar bias bound given by Claim 3. In this section, we experimentally verify the bound and qualitatively show that the additive composition of Near-far Context vectors can be used for assessing semantic similarities between ordered bigrams. In Figure 9 and Figure 10, we plot (a) B^{st}_n and (b) ||w^{ts}_n − (1/2)(w^{s•}_n + w^{•t}_n)|| on the y-axis, against (1/2)(π²_{s•\t} + π²_{s/•t} + π_{s•\t} π_{s/•t}) on the x-axis, for every ordered bigram st. We tried two settings of F, namely F(p) := ln p (Figure 9) and F(p) := √p (Figure 10). In both cases, the approximation errors in (a) are bounded by y ≤ x (red solid lines), as suggested by Claim 3. In contrast, the approximation errors for order-reversed bigrams exceed this bound, showing that the additive composition of Near-far Context vectors actually recognizes word order.
In Table 4, we show the 8 nearest word pairs for each of 8 ordered bigrams, measured by cosine similarities between additive compositions of Near-far Context vectors. More precisely, for word pairs "s1 t1" and "s2 t2", we calculate the cosine similarity between (1/2)(v^{s1•} + v^{•t1}) and (1/2)(v^{s2•} + v^{•t2}), where v^{s•} and v^{•t} are normalized 200-dimensional SVD reductions of w^{s•}_n and w^{•t}_n, respectively, with F(p) := √p. The table shows that additive composition of Near-far Context vectors can indeed represent meanings of ordered bigrams; for example, "pose problem" is near to "arise dilemma" but not to "dilemma arise", and "problem pose" is near to "difficulty cause" but not to "cause difficulty". It is also noteworthy that "not enough" is similar to "always want", showing some degree of semantic compositionality beyond the word level. We believe this ability to compute meanings of ordered bigrams is already highly useful, because there are only a few bigrams whose meanings can be directly assessed from real corpora.

Dimension Reduction
In this section, we verify our prediction in Section 3.3 that vectors trained by SVD preserve our bias bound more faithfully than those trained by GloVe and SGNS. In Figure 11, we use normalized word vectors v^t that are constructed from the distributional vectors w^t_n by reducing them to 200 dimensions with different reduction methods. We use SVD in (a), and GloVe and SGNS in the other panels, plotting ||v^{{st}} − (1/2)(v^s + v^t)|| on the y-axis, against (1/2)(π²_{s/t\s} + π²_{t/s\t} + π_{s/t\s} π_{t/s\t}) on the x-axis.
The graphs show that vectors trained by SVD still largely conform to our bias bound y ≤ x (red solid lines), but vectors trained by GloVe or SGNS no longer do. Our extrinsic evaluations in Section 6 also show that SVD might perform better than GloVe and SGNS.

Extrinsic Evaluation of Additive Compositionality
In this section, we test additive composition on human annotated data sets to see if our theoretical predictions correlate with human judgments. We conduct a phrase similarity task and a word analogy task.

Phrase Similarity
In a data set created by Mitchell and Lapata (2010), phrase pairs are annotated with similarity scores. Each instance in the data is a (phrase1, phrase2, similarity) triplet, and each phrase consists of two words. The similarity score is annotated by humans, ranging from 1 to 7, indicating how similar the meanings of the two phrases are. For example, one annotator assessed the similarity between "vast amount" and "large quantity" as 7 (the highest), and the similarity between "hear word" and "remember name" as 1 (the lowest). Phrases are divided into three categories: Verb-Object, Compound Noun, and Adjective-Noun. Each category has 108 phrase pairs, annotated by 18 human participants (i.e., 1,944 instances in each category). Using this data set, we can compare the human ranking of phrase similarities with the ranking calculated from cosine similarities between vector-based compositions. We use Spearman's ρ to measure how correlated the two rankings are. Vector representations are constructed from the BNC, with the same settings described in Section 5. We plot in Figure 12 the distributions of how many times the phrases in the data set occur as bigrams in the BNC. The figure indicates that a large portion of the phrases are rare or unseen as bigrams, so their meanings cannot be directly assessed as natural vectors from the corpus. Therefore, the data is suitable for testing compositions of word vectors.
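The evaluation statistic itself is simple to state in code. The sketch below computes Spearman's ρ under the simplifying assumption of no tied values (the real annotated data contains ties, which would need the usual tie correction); the toy scores are hypothetical numbers:

```python
def spearman_rho(a, b):
    """Spearman's rank correlation, assuming no tied values."""
    def ranks(xs):
        order = sorted(range(len(xs)), key=lambda i: xs[i])
        r = [0.0] * len(xs)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    mean = (n - 1) / 2.0                    # mean of the ranks 0..n-1
    cov = sum((x - mean) * (y - mean) for x, y in zip(ra, rb))
    var = sum((x - mean) ** 2 for x in ra)  # identical for rb without ties
    return cov / var

# toy check: human scores vs. model cosine similarities (hypothetical numbers)
human = [7.0, 5.0, 2.0, 1.0]
model = [0.9, 0.6, 0.3, 0.4]
rho = spearman_rho(human, model)
```

Because the statistic depends only on rankings, any monotone transformation of the cosine similarities leaves ρ unchanged, which is why it is a natural choice for comparing model scores against ordinal human judgments.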
We reduce the high-dimensional distributional word representations to 200 dimensions and normalize the vectors. The dimension 200 is selected by observing the top 800 singular values calculated by SVD. As illustrated in Figure 13, the decrease of singular values flattens to a constant rate at a rank of about 200. This suggests that the most characteristic features in the vector representations are projected into 200 dimensions. In our preliminary experiments, we confirmed that a dimension of 200 performs better than 100, 500, or no dimension reduction.
For training word embeddings, we use the random projection algorithm (Halko et al., 2011) for SVD, and Stochastic Gradient Descent (SGD) (Bottou, 2012) for SGNS and GloVe. Since these are randomized algorithms, we run each test 20 times and report the mean performance with standard deviation. We tune the SGD learning rates by checking convergence of the objectives, and obtain slightly better results than with the default training parameters set in the software of SGNS 4 and GloVe 5 .
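The random projection algorithm for SVD can be sketched as below; this is a minimal implementation in the spirit of Halko et al. (2011), not the exact code used in our experiments, verified here on a toy matrix with known singular values:

```python
import numpy as np

def randomized_svd(X, k, oversample=10, n_iter=2, seed=0):
    """Approximate rank-k SVD via random projection (after Halko et al., 2011)."""
    rng = np.random.default_rng(seed)
    # Sample the range of X with a random Gaussian test matrix.
    Q, _ = np.linalg.qr(X @ rng.standard_normal((X.shape[1], k + oversample)))
    # Power iterations sharpen the approximation when singular values decay slowly.
    for _ in range(n_iter):
        Q, _ = np.linalg.qr(X @ (X.T @ Q))
    # SVD of the small projected matrix recovers the leading factors of X.
    Ub, s, Vt = np.linalg.svd(Q.T @ X, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

# A toy matrix with known singular values (geometrically decaying, rank 40).
rng = np.random.default_rng(1)
U0, _ = np.linalg.qr(rng.standard_normal((300, 40)))
V0, _ = np.linalg.qr(rng.standard_normal((400, 40)))
d = np.geomspace(100.0, 1.0, 40)
X = (U0 * d) @ V0.T

U, s, Vt = randomized_svd(X, k=30)
print(np.allclose(s, d[:30]))  # True: the leading singular values are recovered
```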
As pointed out by Levy et al. (2015), there are other detailed settings that can vary between SGNS and GloVe. We make these settings as close as possible so that the results are comparable, emphasizing the differences of the loss functions. More precisely, we use no subsampling and set the number of negative samples to 2 in SGNS, and use the default loss function in GloVe with the cutoff threshold set to 10. In addition, the default implementations of both SGNS and GloVe weigh context words by a function of their distance to the target, which we disable (i.e., equal weights are used for all context words), so as to make them compatible with the contexts considered in this article.
The test results are shown in Table 5. We compare different settings of the function F , Ordinary and Near-far Contexts, and different dimension reduction methods. When using ordinary contexts and SVD reduction, we find that the functions ln (F (p) := ln p) and sqrt (F (p) := √p) perform similarly well, whereas id (F (p) := p) and xlnx (F (p) := p ln p) are much worse, confirming our predictions in Section 3.1. As for Near-far Context vectors (Section 3.2), the Nearfar-sqrt-SVD setting performs strongly, demonstrating the improvements that Near-far contexts bring to additive composition. However, Nearfar-ln-SVD is worse. One reason could be that the function ln emphasizes low co-occurrence probabilities, which combined with Near-far labels could make the vectors more prone to data sparseness; conversely, some important syntactic markers might be obscured because they occur with high frequency. Finally, we note that SVD is consistently good and usually better than GloVe and SGNS, which supports our arguments in Section 3.3.

We report some additional test results for reference. In Table 5, the "Tensor Product" row shows the results of composing Ordinary-ln-SVD vectors by tensor product instead of average, which means that the similarity between two phrases "s1 t1" and "s2 t2" is assessed by the product of the word cosine similarities cos(s1, s2) · cos(t1, t2). The numbers are worse than additive composition, suggesting that a similar phrase might be something more than a sequence of individually similar words. The "Upper Bound" row shows the best possible Spearman's ρ for this task, which is less than 1 because human annotators disagree with each other. Compared to these numbers, the performance of additive composition on compound nouns is remarkably high. Furthermore, in "Muraoka et al." we cite the best results reported by Muraoka et al. (2014), who tested several compositional frameworks.
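The "Tensor Product" baseline rests on the fact that the cosine between tensor products factorizes exactly into the product of the word-level cosines, which the following sketch verifies on random toy vectors:

```python
import numpy as np

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
s1, t1, s2, t2 = (rng.standard_normal(5) for _ in range(4))

# Additive composition: cosine of the averaged phrase vectors.
additive = cos((s1 + t1) / 2, (s2 + t2) / 2)

# Tensor-product composition: the cosine between s1 (x) t1 and s2 (x) t2
# factorizes exactly into the product of the word-level cosines, because
# <a (x) b, c (x) d> = <a, c> <b, d> and ||a (x) b|| = ||a|| ||b||.
tensor = cos(np.outer(s1, t1).ravel(), np.outer(s2, t2).ravel())
factorized = cos(s1, s2) * cos(t1, t2)

print(np.isclose(tensor, factorized))  # True
```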
In "Deep Neural", we also test additive composition of word vectors trained by deep neural networks (the normalized 200-dimensional vectors of Turian et al. 2010, trained with the model of Collobert et al. 2011). These results are not directly comparable to each other because the vector representations are constructed from different corpora; but we can fairly say that additive composition remains a powerful method for assessing phrase similarity, and that linear dimension reduction might be more suitable than deep neural networks for training additively compositional word vectors. Therefore, our theory of additive composition concerns a method that is competitive with the state of the art.

Word Analogy
Word analogy is the task of solving questions of the form "a is to b as c is to ?", and an elegant approach proposed by Mikolov et al. (2013a) is to find the word vector most similar to v b − v a + v c . For example, to answer the question "man is to king as woman is to ?", one calculates v king − v man + v woman and finds its most similar word vector, which will probably turn out to be v queen , indicating the correct answer queen.
As pointed out by Levy and Goldberg (2014a), the key to solving analogy questions is the ability to "add" (resp. "subtract") some aspects to (resp. from) a concept. For example, king is a concept of human that has the aspects of being royal and male. If we can "subtract" the aspect male from king and "add" the aspect female to it, then we will probably get the concept queen. Thus, the vector-based solution proposed by Mikolov et al. (2013a) is essentially assuming that "adding" and "subtracting" aspects can be realized by adding and subtracting word vectors. Why is this assumption admissible?
We believe this assumption is closely related to additive compositionality. If an aspect is represented by an adjective (e.g. male) and a concept is represented by a noun (e.g. human), we can usually "add" the aspect to the concept simply by arranging the adjective and the noun into a phrase (e.g. male human). Therefore, as the meaning of the phrase can be calculated by additive composition (e.g. v male + v human ), we have indeed realized the "addition" of aspects by addition of word vectors. Specifically, since man ≈ male human, king ≈ royal male human, woman ≈ female human and queen ≈ royal female human, additive composition of these phrases suggests the following approximations:

v man ≈ v male + v human ,   v king ≈ v royal + v male + v human ,
v woman ≈ v female + v human ,   v queen ≈ v royal + v female + v human .

Here, "≈" denotes proximity between vectors in the sense of cosine similarity. From these approximate equations, it follows that v king − v man + v woman ≈ v royal + v female + v human ≈ v queen , which solves the analogy question.
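This derivation can be checked mechanically with toy "aspect" vectors; the one-hot construction below is a hypothetical illustration of the algebra, not the way real word vectors are trained:

```python
import numpy as np

def solve_analogy(a, b, c, vec):
    """Return the word whose vector is most cosine-similar to v_b - v_a + v_c,
    excluding the three question words themselves."""
    target = vec[b] - vec[a] + vec[c]
    target = target / np.linalg.norm(target)
    best, best_sim = None, -np.inf
    for w, v in vec.items():
        if w in (a, b, c):
            continue
        sim = float(target @ (v / np.linalg.norm(v)))
        if sim > best_sim:
            best, best_sim = w, sim
    return best

# Toy vectors assembled from one-hot "aspect" components (royal, male,
# female, human), mirroring the decomposition in the text.
royal, male, female, human = np.eye(4)
vec = {
    "man":   male + human,
    "king":  royal + male + human,
    "woman": female + human,
    "queen": royal + female + human,
}
print(solve_analogy("man", "king", "woman", vec))  # queen
```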
Therefore, we expect the word analogy task to serve as an extrinsic evaluation of additive compositionality as well. For this reason, we conduct the word analogy task on the standard Msr 6 (Mikolov et al., 2013a) and Google 7 (Mikolov et al., 2013b) data sets. Each instance in the data is a 4-tuple of words subject to "a is to b as c is to d ", and the task is to find d from a, b and c. We train word vectors with the same settings described in Section 5, but using surface forms instead of lemmatized words in BNC. Tuples with out-of-vocabulary words are removed from the data, which results in 4,382 tuples in Msr and 8,906 in Google 8 .

Table 6: Accuracy (%) in the word analogy task.

The test results are shown in Table 6. Again, we find that ln and sqrt perform similarly well but id and xlnx are worse, confirming that the choice of the function F can drastically affect performance on the word analogy task as well, which we believe is related to additive compositionality. In addition, we confirm that SVD can perform better than SGNS and GloVe, giving more support to our conjecture that vectors trained by SVD might be more compatible with additive composition.

Conclusion
In this article, we have developed a theory of additive composition regarding its bias. The theory explains why and how additive composition works, and makes useful suggestions for improving additive compositionality, including the choice of a transformation function, awareness of word order, and the choice of dimension reduction method. Predictions made by our theory have been verified experimentally and have shown positive correlations with human judgments. In short, we have revealed the mechanism of additive composition.
However, we note that our theory is not a "proof" that additive composition is a "good" compositional framework. As is usual for generalization error bounds in machine learning theory, our bound on the bias does not show whether additive composition is "good"; rather, it specifies factors that can affect the error. If generalization error bounds were available for other composition operations, a comparison between such bounds could bring useful insights into the choice of compositional framework in specific cases. We expect our bias bound to inspire further results in the research of semantic composition.
Moreover, we believe this line of theoretical research can be pursued further. In computational linguistics, the idea of treating semantics and semantic relations by algebraic operations on distributional context vectors is relatively new (Clarke, 2012), so the relation between linguistic theories and our approximation theory of semantic composition remains largely unexplored. For example, the intuitive distinction between compositional (e.g. high price) and non-compositional (e.g. white lie) phrases is currently ignored in our theory, which treats both cases by a single collocation measure. Can one improve the bound by taking this distinction, and/or other kinds of linguistic knowledge, into account? This is an intriguing question for future work.
8. These are about half the size of the original data sets.

Appendix A. Proof of Lemmas
In this appendix, we prove Lemma 1 and Lemma 2 in Section 2.3.
In order to prove Lemma 1, we first establish Equations (11) and (12). Equation (11) can be obtained by analyzing the derivative F′(x) = x^(−1+λ), and Equation (12) immediately follows from the identity z^λ (F(x) − F(y)) = F(zx) − F(zy).
The limit above is a consequence of Lebesgue's Dominated Convergence Theorem.
Proof of Lemma 2. Firstly, we note that n^(−1+2λ) · ϕ_{i,n} = (n p_{i,n} + 1/β)^(−1+2λ) · p_{i,n} by the definition of ϕ_{i,n} , so Lemma 2(a) immediately follows. To prove Lemma 2(b), we simply apply Assumption (A) to the above.

Appendix B. The Loss Function of SGNS
In this appendix, we discuss the loss function of SGNS. The model was originally proposed as an ad hoc objective function using the negative sampling technique (Mikolov et al., 2013b), without any explicit explanation of what is optimized and what the loss is. It was later shown that SGNS is a factorization of the shifted-PMI matrix (Levy and Goldberg, 2014b), but the loss function for this factorization remained unspecified. Here, we give a re-explanation of the SGNS model, with the loss function explicitly stated.

B.1 Noise Contrastive Estimation
The original objective function of SGNS is proposed as an adaptation of the Noise Contrastive Estimation (NCE) method, but in fact SGNS uses NCE without any adaptation. NCE (Gutmann and Hyvärinen, 2012) is a method for solving the classical problem: given a sample (x_i)_{i=1}^N (where x_i ∈ X) drawn from an unknown probability distribution P_data, and a function family f(·; θ) : X → R_{≥0} parameterized by θ, find the optimal θ* such that f(x; θ*) best approximates the distribution P_data(x). An alternative to NCE is Maximum Likelihood Estimation (MLE), in which θ* is chosen to maximize the log-likelihood of the sample (x_i)_{i=1}^N, subject to the constraint that f(·; θ*) is a probability distribution:

θ* := argmax_θ Σ_{i=1}^N ln f(x_i; θ)   subject to   Σ_{x∈X} f(x; θ) = 1.

For MLE, the constraint Σ_{x∈X} f(x; θ) = 1 is important, because f(x; θ) can become arbitrarily large if we maximize the log-likelihood without it. NCE finds θ* in a different way. It first mixes (x_i) with a noise sample drawn from a known distribution P_noise, each data point x_i being mixed with k noise points y_{i,1}, . . . , y_{i,k} ∼ P_noise. Hence,

P(x is data | x) = P_data(x) / (P_data(x) + k P_noise(x)),   (22)

which gives the probability of a given point x ∈ X being a data point. P_data is unknown in (22), so we approximate P(x is data | x) by g(x; θ) as below:

g(x; θ) := f(x; θ) / (f(x; θ) + k P_noise(x)).   (23)

Then, θ* is chosen to maximize the log-likelihood of the data/noise classification:

Σ_{i=1}^N [ ln g(x_i; θ) + Σ_{j=1}^k ln(1 − g(y_{i,j}; θ)) ].   (24)
The most important point of NCE is that f(x; θ) will not tend to infinity even if we maximize (24) without the constraint Σ_{x∈X} f(x; θ) = 1. This is because making f(x; θ) large will accordingly make 1 − g(y_{i,j}; θ) small, which decreases the likelihood of y_{i,1}, . . . , y_{i,k} being noise. Since it is no longer necessary to repeatedly calculate Σ_{x∈X} f(x; θ) during parameter updates, NCE usually yields efficient training algorithms.
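A minimal numerical sketch of NCE: fitting an unnormalized model f(x; θ) = exp(θ_x) to a small discrete distribution by gradient ascent on the empirical NCE objective. The distributions, sample sizes, and learning rate are toy values chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown data distribution over a small lexicon X = {0, 1, 2, 3},
# and a known noise distribution (all values here are toy choices).
p_data  = np.array([0.5, 0.25, 0.15, 0.10])
p_noise = np.full(4, 0.25)
k, N = 5, 20000                     # noise points per data point; sample size

x = rng.choice(4, size=N, p=p_data)        # data sample
y = rng.choice(4, size=(N, k), p=p_noise)  # k noise points per data point

# Unnormalized model f(x; theta) = exp(theta_x); no normalization constraint.
theta = np.zeros(4)

def g(z, theta):
    """g(z; theta) = f(z; theta) / (f(z; theta) + k * P_noise(z))."""
    f = np.exp(theta[z])
    return f / (f + k * p_noise[z])

# Gradient ascent on the NCE log-likelihood of the data/noise classification.
for _ in range(200):
    grad = np.zeros(4)
    np.add.at(grad, x, 1.0 - g(x, theta))  # data points: increase g
    np.add.at(grad, y, -g(y, theta))       # noise points: decrease g
    theta += 0.5 * grad / N

# Without ever normalizing, f(x; theta) converges to approximately P_data(x).
print(np.round(np.exp(theta), 2))
```

Note that f(x; θ) stays bounded even though no normalization step was performed, exactly as discussed above.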

B.2 The Skip-Gram with Negative Sampling Model
Let p^t_i be the co-occurrence probability of the i-th word, conditioned on being in the context of a target word t. SGNS approximates p^t_i by the function family f(i, t; u, v) := exp(u_i · v_t + ln(k p^noise_i)), using NCE to optimize the parameters. Here, u and v are the parameters of the function family; their columns are vectors u_i and v_t, corresponding to the i-th context word and the target word t, respectively. The training data C is a collection of co-occurring context-target word pairs. On the other hand, k and p^noise_i are constants in the definition of the function family, where k is the number of noise points drawn for each training instance, and p^noise_i is the probability of the i-th context word being drawn from the noise distribution P_noise.
Substituting g(i, t; u, v) = f(i, t; u, v) / (f(i, t; u, v) + k p^noise_i) = σ(u_i · v_t) into (24), where σ is the sigmoid function, we get

Σ_{(i,t)∈C} [ ln σ(u_i · v_t) + Σ_{j=1}^k ln σ(−u_{i_j} · v_t) ],

where i_1, . . . , i_k index the k noise words drawn for the pair (i, t). This is exactly the objective function of SGNS proposed in Mikolov et al. (2013b).
Since SGNS uses f(i, t; u, v) = exp(u_i · v_t + ln(k p^noise_i)) to approximate p^t_i, it uses u_i · v_t to approximate w^t_i := ln p^t_i − ln(k p^noise_i). This w^t_i has the form of the vector entries of the distributional representations given in Definition 4. Thus, SGNS can be viewed as a dimension reduction of the distributional representations we consider in this article.
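The claim that the optimum drives u_i · v_t toward the shifted log-ratio can be checked numerically for a single context-target pair: the expected per-pair objective is maximized exactly at ln p − ln(kq). The probabilities below are toy values:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neg_expected_objective(x, p, q, k):
    """Negated expected SGNS objective for one context-target pair, as a
    function of the score x = u_i . v_t: the context word co-occurs with
    probability p, and each of the k noise draws picks it with probability q."""
    return -(p * np.log(sigmoid(x)) + k * q * np.log(1.0 - sigmoid(x)))

p, q, k = 0.08, 0.01, 2   # toy co-occurrence and noise probabilities
res = minimize_scalar(neg_expected_objective, args=(p, q, k),
                      bounds=(-10.0, 10.0), method="bounded")

# The maximizer coincides with the shifted log-ratio w = ln p - ln(k q).
print(np.isclose(res.x, np.log(p) - np.log(k * q), atol=1e-4))  # True
```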
Proof of Claim 4. To calculate the loss function, we consider the objective of SGNS:

O(u, v) := Σ_{(i,t)∈C} [ ln g(i, t; u, v) + Σ_{j=1}^k ln(1 − g(i_j, t; u, v)) ].

The sum is taken across all context-target pairs in C. We regroup the summands by each distinct target, and note that, conditioned on an occurrence of target t, the probability of encountering the i-th context word co-occurring in the training data is p^t_i, whereas the expected number of times the i-th word is drawn from noise is k p^noise_i. So we have

O(u, v) = Σ_t C(t) Σ_i [ p^t_i ln g(i, t; u, v) + k p^noise_i ln(1 − g(i, t; u, v)) ],

where C(t) is the occurrence count of t. Now, we know that the optimal O(u, v) is achieved at u_i · v_t = w^t_i, so we define M as the value of O(u, v) at this optimum, namely

M := Σ_t C(t) Σ_i [ p^t_i ln g* + k p^noise_i ln(1 − g*) ],   where g* := p^t_i / (p^t_i + k p^noise_i).

Then, to maximize O(u, v) is to minimize M − O(u, v), and by some calculation we obtain

M − O(u, v) = Σ_t C(t) Σ_i D_φ( u_i · v_t + ln(k p^noise_i), ln p^t_i ),

where D_φ(p, q) := φ(p) − φ(q) − φ′(q)(p − q) is the Bregman divergence associated with the convex function φ(x) = (p^t_i + k p^noise_i) ln(exp(x) + k p^noise_i).
This is the loss function given in Claim 4. The limit of D_φ as k → +∞ is easily derived.