Antithetic and Monte Carlo kernel estimators for partial rankings

In the modern age, ranking data are ubiquitous and useful for a variety of applications such as recommender systems, multi-object tracking and preference learning. However, most ranking data encountered in the real world are incomplete, which prevents the direct application of existing modelling tools for complete rankings. Our contribution is a novel way to extend kernel methods for complete rankings to partial rankings, via consistent Monte Carlo estimators for Gram matrices: matrices of kernel values between pairs of observations. We also present a novel variance-reduction scheme based on an antithetic variate construction between permutations to obtain an improved estimator for the Mallows kernel. The corresponding antithetic kernel estimator has lower variance, and we demonstrate empirically that it performs better in a variety of machine learning tasks. Both kernel estimators are based on extending kernel mean embeddings to the embedding of a set of full rankings consistent with an observed partial ranking. They form a computationally tractable alternative to previous approaches for partial ranking data. An overview of the existing kernels and metrics for permutations is also provided.


Motivation
Permutations play a fundamental role in statistical modelling and machine learning applications involving rankings and preference data. A ranking over a set of objects can be encoded as a permutation; hence, kernels for permutations are useful in a variety of machine learning applications involving rankings. Applications include recommender systems, multi-object tracking and preference learning. It is of interest to construct a kernel in the space of the data in order to capture similarities between datapoints and thereby influence the pattern of generalisation. Kernels are used in many machine learning methods. For instance, a kernel input is required for the maximum mean discrepancy (MMD) two sample test [15], kernel principal component analysis (kPCA) [29], support vector machines [5,7], Gaussian processes (GPs) [27] and agglomerative clustering [10], among others.
Our main contributions are: (i) a novel and computationally tractable way to deal with incomplete or partial rankings, by first representing the marginalised kernel [17] as a kernel mean embedding of a set of full rankings consistent with an observed partial ranking. We then propose two estimators that can be represented as the corresponding empirical mean embeddings: (ii) a Monte Carlo kernel estimator based on sampling independent and identically distributed rankings from the set of consistent full rankings given an observed partial ranking; (iii) an antithetic variate construction for the marginalised Mallows kernel that gives a lower-variance estimator for the kernel Gram matrix. The Mallows kernel has been shown to be an expressive kernel; in particular, Mania et al. [26] show that the Mallows kernel is an example of a universal and characteristic kernel, and hence a useful tool to distinguish samples from two different distributions, and it achieves the Bayes risk when used in kernel-based classification/regression [32]. Jiao & Vert [20] have proposed a fast approach for computing the Kendall marginalised kernel; however, this kernel is not characteristic [26], and hence has limited expressive power.

arXiv:1807.00400v2 [stat.ML] 25 Jul 2018
The resulting estimators are used for a variety of kernel machine learning algorithms in the experiments. We present comparative simulation results demonstrating the efficacy of the proposed estimators for an agglomerative clustering task, a hypothesis test task using the maximum mean discrepancy (MMD) [15] and a Gaussian process classification task. For the latter, we extend some of the existing methods in the software library GPy [14].
Since the space of permutations is an example of a discrete space with a non-commutative group structure, the corresponding reproducing kernel Hilbert spaces (RKHS) have only recently been investigated; see Kondor et al. [24], Fukumizu et al. [13], Kondor & Barbosa [23], Jiao & Vert [20] and Mania et al. [26]. We provide an overview of the connection between kernels and certain semimetrics when working on the space of permutations. This connection allows us to obtain kernels from given semimetrics, or semimetrics from existing kernels. We can combine these semimetric-based kernels to obtain novel, more expressive kernels which can be used for the proposed Monte Carlo kernel estimator.

Definitions
We first briefly introduce the theory of permutation groups. A particular application of permutations is to use them to represent rankings; in fact, there is a natural one-to-one relationship between rankings of n items and permutations. For this reason, we sometimes use ranking and permutation interchangeably. In this section, we state some mathematical definitions to formalise the problem in terms of the space of permutations.
Let [n] = {1, 2, . . . , n} be a set of indices for n items, for some n ∈ N. Given a ranking of these n items, we use the notation ≻ to denote the ordering of the items induced by the ranking, so that for distinct i, j ∈ [n], if i is preferred to j, we write i ≻ j. Note that for a full ranking, the corresponding relation ≻ is a total order on {1, . . . , n}.
We now outline the correspondence between rankings on [n] and the permutation group S_n that we use throughout the paper. In words, given a full ranking of [n], we associate it with the permutation σ ∈ S_n that maps each ranking position 1, . . . , n to the correct object under the ranking. More mathematically, given a ranking a_1 ≻ · · · ≻ a_n of [n], we may associate it with the permutation σ ∈ S_n given by σ(j) = a_j for all j = 1, . . . , n. For example, the ranking on [3] given by 2 ≻ 3 ≻ 1 corresponds to the permutation σ ∈ S_3 given by σ(1) = 2, σ(2) = 3, σ(3) = 1. This correspondence allows the literature on kernels on permutations to be leveraged for problems involving the modelling of ranking data.
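The correspondence above is easy to make concrete in code. The following Python sketch (helper names are ours, chosen for illustration) encodes a ranking as the permutation σ that maps positions to items, and computes σ⁻¹, which maps items back to their positions.

```python
def ranking_to_permutation(ranking):
    """A ranking [a1, ..., an] (most-preferred first) corresponds to the
    permutation sigma with sigma(j) = a_j; we store sigma as a tuple, so
    the (1-based) value sigma(j) is sigma[j - 1]."""
    return tuple(ranking)

def inverse(sigma):
    """sigma^{-1}(item) = position of the item: inv[x - 1] holds the
    (1-based) position of item x."""
    inv = [0] * len(sigma)
    for position, item in enumerate(sigma, start=1):
        inv[item - 1] = position
    return tuple(inv)

# The ranking 2 > 3 > 1 from the text: sigma(1)=2, sigma(2)=3, sigma(3)=1.
sigma = ranking_to_permutation([2, 3, 1])
print(sigma)           # (2, 3, 1)
print(inverse(sigma))  # (3, 1, 2): item 1 sits in position 3, item 2 in position 1
```

The inverse is used later in the paper, e.g. in the definition of consistency of a full ranking with a partial ranking, which is stated in terms of σ⁻¹.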
In the next section, we will review some of the semimetrics on S n that can serve as building blocks for the construction of more expressive kernels.

Metrics for permutations and properties
Definition 1 Let X be any set and let d : X × X → R be a function, whose value at a pair x, y ∈ X we write d(x, y). Then d is a semimetric if it satisfies the following conditions, for every x, y ∈ X [11]: i) d(x, y) = d(y, x), that is, d is a symmetric function; ii) d(x, y) = 0 if and only if x = y.
A semimetric is a metric if it additionally satisfies: iii) d(x, z) ≤ d(x, y) + d(y, z) for every x, y, z ∈ X , that is, d satisfies the triangle inequality.
The following are some examples of semimetrics on the space of permutations S_n [9]; several of them have the additional property of being of negative type. Theorem 1, stated below, shows that negative type semimetrics are closely related to kernels.

The Hamming distance is given by d_H(σ, σ′) = Σ_{i=1}^n I{σ(i) ≠ σ′(i)}. It can also be defined as the minimum number of substitutions required to change one permutation into the other.

The Cayley distance is given by d_C(σ, σ′) = Σ_{j=1}^n X_j(σ ∘ (σ′)^{-1}), where the composition operation of the permutation group S_n is denoted by ∘ and X_j(σ ∘ (σ′)^{-1}) = 0 if j is the largest item in its cycle and is equal to 1 otherwise [18]. It is also equal to the minimum number of pairwise transpositions taking σ to σ′. Finally, it can also be shown to be equal to n − C(σ ∘ (σ′)^{-1}), where C(η) is the number of cycles in η.

The Kendall distance is given by d_K(σ, σ′) = n_d(σ, σ′), where n_d(σ, σ′) is the number of discordant pairs for the permutation pair (σ, σ′). It can also be defined as the minimum number of pairwise adjacent transpositions taking σ^{-1} to (σ′)^{-1}.
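As a concrete reference, here is a small Python sketch of the three distances above, using the discordant-pairs definition of the Kendall distance and the cycle-counting characterisation of the Cayley distance (function names are ours):

```python
from itertools import combinations

def hamming(sigma, tau):
    # number of positions at which the two permutations disagree
    return sum(s != t for s, t in zip(sigma, tau))

def kendall(sigma, tau):
    # number of item pairs placed in opposite relative order
    pos_s = {item: i for i, item in enumerate(sigma)}
    pos_t = {item: i for i, item in enumerate(tau)}
    return sum(1 for x, y in combinations(sigma, 2)
               if (pos_s[x] - pos_s[y]) * (pos_t[x] - pos_t[y]) < 0)

def cayley(sigma, tau):
    # n minus the number of cycles of eta = sigma o tau^{-1}
    n = len(sigma)
    eta = {tau[i]: sigma[i] for i in range(n)}  # eta(tau(i)) = sigma(i)
    seen, cycles = set(), 0
    for start in eta:
        if start not in seen:
            cycles += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = eta[j]
    return n - cycles

sigma, tau = (2, 3, 1), (1, 2, 3)
print(hamming(sigma, tau), kendall(sigma, tau), cayley(sigma, tau))  # 3 2 2
```

The example pair illustrates that the three distances generally disagree: the 3-cycle (2, 3, 1) differs from the identity in all three positions, has two discordant pairs, and requires two transpositions.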
Definition 2 A semimetric d is said to be of negative type if for all n ≥ 2, x_1, . . . , x_n ∈ X and α_1, . . . , α_n ∈ R with Σ_{i=1}^n α_i = 0, we have Σ_{i=1}^n Σ_{j=1}^n α_i α_j d(x_i, x_j) ≤ 0. In general, if we start with a Mercer kernel for permutations, that is, a symmetric and positive definite function k : S_n × S_n → R, then the expression d(x, y) = k(x, x) + k(y, y) − 2k(x, y) gives a semimetric d that is of negative type. A useful characterisation of semimetrics of negative type is given by the following theorem, which states a connection between negative type semimetrics and a Hilbert space feature representation, or feature map, φ.
Theorem 1 [3]. A semimetric d is of negative type if and only if there exists a Hilbert space H and an injective map φ : X → H such that d(x, y) = ‖φ(x) − φ(y)‖²_H for all x, y ∈ X.

Once the feature map from Theorem 1 is found, we can directly take its inner product to construct a kernel. For instance, Jiao & Vert [20] propose an explicit feature representation for the Kendall kernel, with one coordinate per pair of items i < j recording, up to a normalising constant, which of the two items is ranked higher. They show that the inner product between two such feature vectors is a positive definite kernel. The corresponding metric, the Kendall distance, can be shown to be the square of the norm of the difference of feature vectors; hence, by Theorem 1, it is of negative type.
Analogously, Mania et al. [26] propose an explicit feature representation for the Mallows kernel. In the following proposition, an explicit feature representation for the Hamming distance is introduced, and we show that it is a distance of negative type.

Proposition 1
The Hamming distance is of negative type, with a corresponding feature representation given by a matrix.

Proof The Hamming distance can be written as a squared difference of indicator functions in the following way:

d_H(σ, σ′) = Σ_{i=1}^n I{σ(i) ≠ σ′(i)} = (1/2) Σ_{i=1}^n Σ_{ℓ=1}^n ( I{σ(i) = ℓ} − I{σ′(i) = ℓ} )²,

where each indicator is one whenever the given entry of the permutation is equal to the corresponding item ℓ. Let the ℓ-th feature vector be φ_ℓ(σ) = ( I{σ(1) = ℓ}, . . . , I{σ(n) = ℓ} ), and let Φ(σ) be the matrix with columns φ_1(σ), . . . , φ_n(σ), i.e. the permutation matrix of σ. Then

d_H(σ, σ′) = (1/2) tr[ (Φ(σ) − Φ(σ′))ᵀ (Φ(σ) − Φ(σ′)) ] = (1/2) ‖Φ(σ) − Φ(σ′)‖²_F.

This is a positive multiple of the square of the usual Frobenius norm for matrices, so by Theorem 1, the Hamming distance is of negative type.
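The identity in the proof is easy to check numerically. The sketch below (a sketch of the permutation-matrix feature map described in Proposition 1, with our own helper names) verifies that the Hamming distance equals half the squared Frobenius norm of the difference of permutation matrices, exhaustively over S_3:

```python
from itertools import permutations

def perm_matrix(sigma):
    # Phi(sigma): entry (i, l) indicates sigma(i+1) == l+1
    n = len(sigma)
    return [[1 if sigma[i] == l + 1 else 0 for l in range(n)] for i in range(n)]

def hamming(sigma, tau):
    return sum(s != t for s, t in zip(sigma, tau))

def frobenius_sq_diff(A, B):
    # squared Frobenius norm of A - B
    return sum((a - b) ** 2 for ra, rb in zip(A, B) for a, b in zip(ra, rb))

# Check d_H(sigma, tau) = 0.5 * ||Phi(sigma) - Phi(tau)||_F^2 over all of S_3.
for sigma in permutations((1, 2, 3)):
    for tau in permutations((1, 2, 3)):
        assert hamming(sigma, tau) == frobenius_sq_diff(perm_matrix(sigma),
                                                        perm_matrix(tau)) / 2
print("identity verified on S_3")
```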
Another example is Spearman's rank correlation, which is a semimetric of negative type since it is the square of the usual Euclidean distance [3].
The two alternative definitions given for some of the distances in the previous examples are handy from different perspectives. One is an expression in terms of either an injective or non-injective feature representation, while the other is in terms of the minimum number of operations needed to change one permutation into the other. Other distances can be defined in terms of such a minimum number of operations; they are called editing metrics [8]. Editing metrics are useful from an algorithmic point of view, whereas metrics defined in terms of feature vectors are useful from a theoretical point of view. Ideally, having a particular metric in terms of both algorithmic and theoretical descriptions gives a better picture of which characteristics of the permutation the metric takes into account. For instance, the algorithmic descriptions of the Kendall and Cayley distances correspond to the bubble sort and quicksort algorithms, respectively [22]. Another property shared by most of the semimetrics in the examples is the following.

Definition 3 Let σ_1, σ_2 ∈ S_n, and let (S_n, ∘) denote the symmetric group of degree n with the composition operation. A right-invariant semimetric [9] satisfies

d(σ_1, σ_2) = d(σ_1 ∘ η, σ_2 ∘ η) for every η ∈ S_n. (4)

In particular, if we take η = σ_1^{-1}, then d(σ_1, σ_2) = d(e, σ_2 ∘ σ_1^{-1}), where e is the identity element of the permutation group. This property is inherited by the distance-induced kernel from Section 2.2, Example 7. This symmetry is analogous to translation invariance for kernels defined in Euclidean spaces.

Kernels for S n
If we specify a symmetric and positive definite function, or kernel, k, this corresponds to defining an implicit feature space representation of a ranking data point. The well-known kernel trick exploits the implicit nature of this representation by performing computations with the kernel function directly, rather than using inner products between feature vectors in a high or even infinite dimensional space. Any symmetric and positive definite function uniquely defines an underlying Reproducing Kernel Hilbert Space (RKHS); see Appendix A in the supplementary material for a brief overview of the RKHS. Some examples of kernels for permutations are the following:
1. The Kendall kernel [20], given by k(σ, σ′) = [n_c(σ, σ′) − n_d(σ, σ′)] / [n(n − 1)/2], where n_c(σ, σ′) and n_d(σ, σ′) denote the number of concordant and discordant pairs between σ and σ′, respectively.
2. The Mallows kernel [20], given by k_ν(σ, σ′) = exp(−ν n_d(σ, σ′)).
3. The polynomial kernel of degree m [26], denoted k^(m).
4. The Hamming kernel, given by k_H(σ, σ′) = Σ_{i=1}^n I{σ(i) = σ′(i)}, the inner product of the feature matrices from Proposition 1.
5. An exponential semimetric kernel, given by k(σ, σ′) = exp(−ν d(σ, σ′)), where d is a semimetric of negative type.
6. The diffusion kernel [23], defined via a parameter β ∈ R and a function q that must satisfy q(π) = q(π^{-1}) and Σ_π q(π) = 0. A particular case is q(σ, σ′) = 1 if σ and σ′ are connected by an edge in some Cayley graph representation of S_n, q(σ, σ′) = −deg(σ) if σ = σ′, and q(σ, σ′) = 0 otherwise.
7. The semimetric- or distance-induced kernel [30]: if the semimetric d is of negative type, then a family of kernels k, parameterised by a central permutation σ_0, is given by k(σ, σ′) = (1/2)[ d(σ, σ_0) + d(σ′, σ_0) − d(σ, σ′) ].
If we choose any of the above kernels by itself, it will generally not be complex enough to represent the ranking data's generating mechanism. However, we can benefit from the allowable operations for kernels to combine kernels and still obtain a valid kernel. Some of the operations which render a valid kernel are the following: sum, multiplication by a positive constant, product, polynomial and exponential [4].
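For concreteness, here is a minimal Python sketch of two of the kernels above, built from the number of discordant pairs; the normalisation n(n − 1)/2 and the parameter ν follow the definitions in the text, while the helper names are ours:

```python
import math
from itertools import combinations

def n_discordant(sigma, tau):
    pos_s = {item: i for i, item in enumerate(sigma)}
    pos_t = {item: i for i, item in enumerate(tau)}
    return sum(1 for x, y in combinations(sigma, 2)
               if (pos_s[x] - pos_s[y]) * (pos_t[x] - pos_t[y]) < 0)

def kendall_kernel(sigma, tau):
    # (n_c - n_d) / (n choose 2); for full rankings n_c + n_d = n(n-1)/2
    n = len(sigma)
    pairs = n * (n - 1) // 2
    return (pairs - 2 * n_discordant(sigma, tau)) / pairs

def mallows_kernel(sigma, tau, nu=0.5):
    return math.exp(-nu * n_discordant(sigma, tau))

sigma, tau = (1, 2, 3), (3, 2, 1)
print(kendall_kernel(sigma, tau))    # -1.0 for a full reversal
print(mallows_kernel(sigma, sigma))  # 1.0 on the diagonal
```

Sums and products of these functions remain valid kernels, which is how more expressive kernels can be assembled, as noted at the end of the list.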
In the case of the symmetric group of degree n, S n , there exist kernels that are right invariant, as defined in Equation (4). This invariance property is useful because it is possible to write down the kernel as a function of a single argument and then obtain a Fourier representation. The caveat is that this Fourier representation is given in terms of certain matrix unitary representations due to the non-Abelian structure of the group [19].
Even though the space is finite, and every irreducible representation is finite-dimensional [13], these Fourier representations do not have closed-form expressions. For this reason, it is difficult to work in the spectral domain, as opposed to the ℝ^n case. There is also no natural measure to sample from, such as the one provided by Bochner's theorem in Euclidean spaces [35]. In the next section, we present a novel Monte Carlo kernel estimator for the case of partial ranking data.

Partial rankings
Having provided an overview of kernels for permutations, and reviewed the link between permutations and rankings of objects, we now turn to the practical issue that in real datasets, we typically have access only to partial ranking information, such as pairwise preferences and top-k rankings. Following [20], we consider the following types of partial rankings.

Definition 4 (Exhaustive partial rankings, top-k rankings) Let n ∈ N. A partial ranking on the set [n] is specified by an ordered collection Ω_1 ≻ · · · ≻ Ω_l of disjoint non-empty subsets Ω_1, . . . , Ω_l ⊆ [n], for any 1 ≤ l ≤ n. The partial ranking Ω_1 ≻ · · · ≻ Ω_l encodes the fact that the items in Ω_i are preferred to those in Ω_{i+1}, for i = 1, . . . , l − 1, with no preference information specified about the items in [n] \ ∪_{i=1}^l Ω_i. A partial ranking Ω_1 ≻ · · · ≻ Ω_l with ∪_{i=1}^l Ω_i = [n] is termed exhaustive, as all items in [n] are included within the preference information. A top-k partial ranking is a particular type of exhaustive ranking Ω_1 ≻ · · · ≻ Ω_l, with |Ω_1| = · · · = |Ω_{l−1}| = 1 and Ω_l = [n] \ ∪_{i=1}^{l−1} Ω_i. We will frequently identify a partial ranking Ω_1 ≻ · · · ≻ Ω_l with the set R(Ω_1, . . . , Ω_l) ⊆ S_n of full rankings consistent with the partial ranking. Thus, σ ∈ R(Ω_1, . . . , Ω_l) iff for all 1 ≤ i < j ≤ l, and for all x ∈ Ω_i, y ∈ Ω_j, we have σ^{-1}(x) < σ^{-1}(y). When there is potential for confusion, we will use the term "subset partial ranking" when referring to a partial ranking as a subset of S_n, and "preference partial ranking" when referring to a partial ranking with the notation Ω_1 ≻ · · · ≻ Ω_l.
Thus, for many practical problems, we require definitions of kernels between partial rankings rather than between full rankings, to be able to deal with datasets containing only partial ranking information. A common approach [34] is to take a kernel K defined on S_n and use the marginalised kernel, defined on subsets of partial rankings by

K(R, R′) = Σ_{σ∈R} Σ_{σ′∈R′} K(σ, σ′) p(σ|R) p(σ′|R′), (5)

for all R, R′ ⊆ S_n, for some probability distribution p ∈ P(S_n). Here, p(·|R) denotes the conditioning of p to the set R ⊆ S_n. Jiao & Vert [20] use the convolution kernel [17] between partial rankings, given by

K*(R, R′) = (1 / (|R| |R′|)) Σ_{σ∈R} Σ_{σ′∈R′} K(σ, σ′). (6)

This is a particular case of the marginalised kernel of Equation (5), in which we take the probability mass function to be uniform over R and R′, respectively. In general, computation with a marginalised kernel quickly becomes intractable, with the number of terms on the right-hand side of Equation (5) growing super-exponentially with n for a fixed number of items in the partial rankings R and R′; see Appendix D for a numerical example of this growth. An exception is the Kendall kernel for two interleaving partial rankings of k and m items, or a top-k and a top-m ranking; in this case, the sum can be computed tractably, in O(k log k + m log m) time [20].
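When n is small, the convolution kernel can be computed exactly by enumerating the consistent full rankings, which also makes the combinatorial blow-up visible. A brute-force Python sketch for exhaustive partial rankings given as ordered blocks (names and block encoding are ours):

```python
import math
from itertools import combinations, permutations

def kendall(sigma, tau):
    pos_s = {item: i for i, item in enumerate(sigma)}
    pos_t = {item: i for i, item in enumerate(tau)}
    return sum(1 for x, y in combinations(sigma, 2)
               if (pos_s[x] - pos_s[y]) * (pos_t[x] - pos_t[y]) < 0)

def mallows(sigma, tau, nu=0.5):
    return math.exp(-nu * kendall(sigma, tau))

def consistent_rankings(blocks, n):
    """All full rankings consistent with the exhaustive partial ranking
    blocks[0] > blocks[1] > ..., each block an unordered list of items."""
    block_of = {x: b for b, block in enumerate(blocks) for x in block}
    return [p for p in permutations(range(1, n + 1))
            if all(block_of[p[i]] <= block_of[p[i + 1]] for i in range(n - 1))]

def convolution_kernel(blocks1, blocks2, n, base=mallows):
    # uniform double sum over consistent full rankings, as in the text
    R1 = consistent_rankings(blocks1, n)
    R2 = consistent_rankings(blocks2, n)
    return sum(base(s, t) for s in R1 for t in R2) / (len(R1) * len(R2))

R  = [[2], [1, 3]]   # the partial ranking 2 > {1, 3}
Rp = [[1], [2, 3]]   # the partial ranking 1 > {2, 3}
print(len(consistent_rankings(R, 3)))   # 2 consistent full rankings
print(convolution_kernel(R, Rp, 3))
```

For larger n this enumeration is exactly what becomes infeasible, motivating the Monte Carlo estimators below.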
We propose a variety of Monte Carlo methods to estimate the marginalised kernel of Equation (5) in the general case, where direct calculation is intractable.

Definition 5 The Monte Carlo estimator approximating the marginalised kernel of Equation (5) is defined, for a collection of partial rankings (R_i)_{i=1}^I, by

K̂(R_i, R_j) = Σ_{m=1}^{M_i} Σ_{l=1}^{N_j} w_m^(i) w_l^(j) K(σ_m^(i), σ_l^(j)), (7)

where (σ_m^(i))_{m=1}^{M_i} are samples associated with the partial ranking R_i and (w_m^(i))_{m=1}^{M_i} are random weights. Note that this general set-up allows for several possibilities: the samples may be drawn exactly, independent and identically distributed, from the distribution p(·|R_i), in which case the weights are simply w_m^(i) = 1/M_i; or the samples may be drawn from some proposal distribution q(·|R_i), with the weights given by the corresponding importance weights.

An alternative perspective on the estimator defined in Equation (7), more in line with the literature on random feature approximations of kernels, is to define a random feature embedding for each of the partial rankings (R_i)_{i=1}^I. More precisely, let H_K be the (finite-dimensional) Hilbert space associated with the kernel K on the space S_n, and let Φ be the associated feature map, so that Φ(σ) = K(σ, ·) ∈ H_K for each σ ∈ S_n. Then observe that we have K(σ, σ′) = ⟨Φ(σ), Φ(σ′)⟩ for all σ, σ′ ∈ S_n. We now extend this feature embedding to partial rankings as follows. Given a partial ranking R ⊆ S_n, we define the feature embedding of R by

Φ(R) = Σ_{σ∈R} p(σ|R) Φ(σ).

With this extension of Φ to partial rankings, we may directly express the marginalised kernel of Equation (5) as an inner product in the same Hilbert space H_K: K(R, R′) = ⟨Φ(R), Φ(R′)⟩_{H_K} for all partial rankings R, R′ ⊆ S_n. If we define a random feature embedding of the partial rankings (R_i)_{i=1}^I by Φ̂(R_i) = Σ_{m=1}^{M_i} w_m^(i) Φ(σ_m^(i)), then the Monte Carlo kernel estimator of Equation (7) can be expressed directly as

K̂(R_i, R_j) = ⟨Φ̂(R_i), Φ̂(R_j)⟩_{H_K}, (8)

for each i, j ∈ {1, . . . , I}. This expression of the estimator as an inner product between randomised embeddings will be useful in the sequel. We provide an illustration of the various RKHS embeddings at play in Figure 2, using the notation of the proof of Theorem 3. In this figure, η is a partial ranking with three consistent full rankings σ_1, σ_2, σ_3. The extended embedding Φ applied to η is the barycentre in the RKHS of the embeddings of the consistent full rankings, and a Monte Carlo approximation Φ̂ to this embedding is also displayed.
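A minimal Monte Carlo version of the estimator in Definition 5, for the i.i.d. uniform case with weights 1/M: sampling uniformly from the set of full rankings consistent with an exhaustive partial ranking amounts to shuffling each block independently. This is our own sketch; the seed, block structure and sample size are illustrative.

```python
import math
import random
from itertools import combinations

def kendall(sigma, tau):
    pos_s = {item: i for i, item in enumerate(sigma)}
    pos_t = {item: i for i, item in enumerate(tau)}
    return sum(1 for x, y in combinations(sigma, 2)
               if (pos_s[x] - pos_s[y]) * (pos_t[x] - pos_t[y]) < 0)

def mallows(sigma, tau, nu=0.5):
    return math.exp(-nu * kendall(sigma, tau))

def sample_consistent(blocks, rng):
    # uniform draw from p(. | R): shuffle within each block, then concatenate
    out = []
    for block in blocks:
        b = list(block)
        rng.shuffle(b)
        out.extend(b)
    return tuple(out)

def mc_kernel(blocks_i, blocks_j, M=400, nu=0.5, seed=0):
    # Equation-(7)-style double sum with equal weights 1/M
    rng = random.Random(seed)
    S_i = [sample_consistent(blocks_i, rng) for _ in range(M)]
    S_j = [sample_consistent(blocks_j, rng) for _ in range(M)]
    return sum(mallows(s, t, nu) for s in S_i for t in S_j) / (M * M)

R = [[2], [1, 3]]                  # the partial ranking 2 > {1, 3}
est = mc_kernel(R, R)
exact = (1 + math.exp(-0.5)) / 2   # exact marginalised Mallows kernel for this R
print(est, exact)                  # the estimate is close to the exact value
```

The exact value here follows from enumerating the two consistent full rankings (2, 1, 3) and (2, 3, 1), which are at Kendall distance 1 from each other.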
Figure 2. Visualisation of the various embeddings discussed in the proof of Theorem 3: σ_1, σ_2, σ_3 are permutations in S_n, which are mapped into the RKHS H_K by the embedding Φ; η is a partial ranking subset which contains σ_1, σ_2, σ_3, and its embedding Φ(η) is given as the average of the embeddings of its full rankings; the Monte Carlo embedding Φ̂(η) induced by Equation (7) is computed by taking the average of a randomly sampled collection of consistent full rankings from η.

Theorem 2 Let R_i ⊆ S_n be a partial ranking, and let (σ_m^(i))_{m=1}^{M_i} be i.i.d. samples from p(·|R_i). Then the empirical embedding Φ̂(R_i) = (1/M_i) Σ_{m=1}^{M_i} Φ(σ_m^(i)) is a consistent estimator of the marginalised kernel embedding Φ(R_i) = Σ_{σ∈R_i} p(σ|R_i) Φ(σ).

Proof Note that the RKHS in which these embeddings take values is finite-dimensional, and the Monte Carlo estimator is the average of i.i.d. terms, each of which is equal to the true embedding in expectation. Thus, we immediately obtain unbiasedness and consistency of the Monte Carlo embedding.
Theorem 3 The Monte Carlo kernel estimator from Equation (7) does define a positive-definite kernel; further, it yields consistent estimates of the true kernel function.
Proof We first deal with the positive-definiteness claim. Let R_1, . . . , R_I ⊆ S_n be a collection of partial rankings and, for each i = 1, . . . , I, let (σ_m^(i), w_m^(i))_{m=1}^{M_i} be an i.i.d. weighted collection of complete rankings distributed according to p(·|R_i). To show that the Monte Carlo kernel estimator K̂ is positive-definite, we observe that, by Equation (8), the I × I matrix with (i, j)-th element given by K̂(R_i, R_j) is the Gram matrix of the vectors (Φ̂(R_i))_{i=1}^I with respect to the inner product of the Hilbert space H_K. We therefore immediately deduce that the matrix is positive semi-definite, and therefore the kernel estimator itself is positive-definite. Furthermore, the Monte Carlo kernel estimator is consistent; see Appendix B in the supplementary material for the proof.
Having established that the Monte Carlo estimator K̂ is itself a kernel, we note that when it is evaluated at two partial rankings R, R′ ⊆ S_n, the resulting expression is not a sum of i.i.d. terms; the following result quantifies the quality of the estimator through its variance.
Theorem 4 The variance of the Monte Carlo kernel estimator evaluated at a pair of partial rankings R_i, R_j, with M_i and N_j Monte Carlo samples respectively, admits a closed-form expression; the proof, together with the expression itself, is given in the supplementary material, Appendix C. We have presented some theoretical properties of the embedding corresponding to the Monte Carlo kernel estimator, which confirm that it is a sensible embedding. In the next section, we present a lower-variance estimator based on a novel antithetic variate construction.

Antithetic random variates for permutations
A common, computationally cheap variance reduction technique in Monte Carlo estimation of expectations of a given function is to use antithetic variates [16], the purpose of which is to introduce negative correlation between samples without affecting their marginal distribution, resulting in a lower variance estimator. Antithetic samples have been used when sampling from Euclidean vector spaces, for which antithetic samples are straightforward to define. However, to the best of our knowledge, antithetic variate constructions have not been proposed for the space of permutations. We begin by introducing a definition for antithetic samples for permutations.
Definition 6 (Antithetic permutations) Let R ⊆ S n be a top-k partial ranking. The antithetic operator A R : R → R maps each permutation σ ∈ R to the permutation in R of maximal distance from σ.
It is not necessarily clear a priori that the antithetic operator of Definition 6 is well-defined, but for the Kendall distance and top-k partial rankings, it turns out that it is indeed well-defined.
Remark 1 For the Kendall distance and top-k partial rankings, the antithetic operators of Definition 6 are well-defined, in the sense that there exists a unique distance-maximising permutation in R from any given σ ∈ R. Indeed, the antithetic map A_R when R is a top-k partial ranking has a particularly neat expression: if the partial ranking corresponding to R is a_1 ≻ · · · ≻ a_k, and we have a full ranking σ ∈ R (so that σ(1) = a_1, . . . , σ(k) = a_k), then the antithetic permutation A_R(σ) keeps the top k positions fixed and reverses the order of the remaining n − k items, i.e. A_R(σ)(i) = σ(i) for i = 1, . . . , k and A_R(σ)(k + j) = σ(n + 1 − j) for j = 1, . . . , n − k. In this case, we have d(σ, A_R(σ)) = (n − k)(n − k − 1)/2. This definition of antithetic samples for permutations has parallels with the standard notion of antithetic samples in vector spaces, in which typically a sampled vector x ∈ R^d is negated to form its antithetic sample −x; −x is the vector maximising the Euclidean distance from x, under the restriction of fixed norm.
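The antithetic map of Remark 1 is a one-liner in code: keep the top-k prefix and reverse the tail. The sketch below (helper names are ours) checks the stated identity d(σ, A_R(σ)) = (n − k)(n − k − 1)/2 on an example.

```python
from itertools import combinations

def kendall(sigma, tau):
    pos_s = {item: i for i, item in enumerate(sigma)}
    pos_t = {item: i for i, item in enumerate(tau)}
    return sum(1 for x, y in combinations(sigma, 2)
               if (pos_s[x] - pos_s[y]) * (pos_t[x] - pos_t[y]) < 0)

def antithetic_top_k(sigma, k):
    # fix the k most-preferred items, reverse the order of the rest
    return tuple(sigma[:k]) + tuple(reversed(sigma[k:]))

sigma, k = (3, 1, 5, 2, 4), 2   # a full ranking in the top-2 ranking 3 > 1 > rest
a = antithetic_top_k(sigma, k)
print(a)                         # (3, 1, 4, 2, 5)
n = len(sigma)
print(kendall(sigma, a) == (n - k) * (n - k - 1) // 2)  # True: all 3 tail pairs flip
```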
Proposition 2 Let R be a partial ranking, and let {σ, A_R(σ)} be an antithetic pair from R, with σ distributed uniformly on R. Let d : S_n × S_n → R_+ be the Kendall distance and σ_0 ∈ R a fixed permutation, and set X = d(σ, σ_0) and Y = d(A_R(σ), σ_0). Then X and Y have negative covariance.
The proof of this proposition is presented after the relevant lemmas are proved. Since one of the main tasks in statistical inference is to compute expectations of a function of interest, denoted by h, once the antithetic variates are constructed, the functional form of h determines whether or not the antithetic variate construction produces a lower variance estimator for its expectation. If h is a monotone function, we have the following corollary.
Corollary 3 Let h be a monotone increasing (decreasing) function. Then, the random variables h (X) and h (Y ), have negative covariance.
Proof The random variable Y from Proposition 2 is equal in distribution to K − X, where K is a constant which changes depending on whether σ_0 is a full ranking or an exhaustive partial ranking; see the proof of Proposition 2 in the next section for the specific form of the constants. By Chebyshev's integral inequality [12], the covariance between a monotone increasing (decreasing) function and a monotone decreasing (increasing) function is negative.
The next theorem presents the antithetic empirical feature embedding and the corresponding antithetic kernel estimator. Indeed, taking the inner product between two such embeddings yields the antithetic kernel estimator, which is a function of a pair of partial ranking subsets. In this case, the function h from above is the kernel evaluated at each pair; this is an example of a U-statistic [31, Chapter 5].
Theorem 5 Let R_i ⊆ S_n be a partial ranking, where S_n denotes the space of permutations of n ∈ N, and let (σ_m^(i))_{m=1}^{M_i} be i.i.d. samples from p(·|R_i). The antithetic empirical feature embedding is given by

Φ̂_a(R_i) = (1/(2 M_i)) Σ_{m=1}^{M_i} [ Φ(σ_m^(i)) + Φ(A_{R_i}(σ_m^(i))) ].

It is a consistent estimator of the embedding Φ(R_i) that corresponds to the marginalised kernel.

Proof Since the estimator is a convex combination of Monte Carlo kernel estimators, consistency follows.

In the next section, we present the main result about the estimator from Theorem 5, namely that it has lower asymptotic variance than the Monte Carlo kernel estimator from Equation (7).
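Put together, the antithetic construction of Theorem 5 averages each sampled ranking with its antithetic partner. The following Python sketch (our own helper names; the exact value is computed by brute force for comparison) estimates the marginalised Mallows kernel between two top-1 partial rankings:

```python
import math
import random
from itertools import combinations, permutations

def kendall(sigma, tau):
    pos_s = {item: i for i, item in enumerate(sigma)}
    pos_t = {item: i for i, item in enumerate(tau)}
    return sum(1 for x, y in combinations(sigma, 2)
               if (pos_s[x] - pos_s[y]) * (pos_t[x] - pos_t[y]) < 0)

def mallows(sigma, tau, nu=0.5):
    return math.exp(-nu * kendall(sigma, tau))

def antithetic_top_k(sigma, k):
    return tuple(sigma[:k]) + tuple(reversed(sigma[k:]))

def sample_top_k(prefix, n, rng):
    # uniform draw from the full rankings consistent with a top-k ranking
    rest = [x for x in range(1, n + 1) if x not in prefix]
    rng.shuffle(rest)
    return tuple(prefix) + tuple(rest)

def antithetic_estimate(prefix_i, prefix_j, n, M=200, nu=0.5, seed=1):
    # each draw contributes itself and its antithetic partner, weight 1/(2M)
    rng = random.Random(seed)
    S_i, S_j = [], []
    for _ in range(M):
        s = sample_top_k(prefix_i, n, rng)
        S_i += [s, antithetic_top_k(s, len(prefix_i))]
        t = sample_top_k(prefix_j, n, rng)
        S_j += [t, antithetic_top_k(t, len(prefix_j))]
    return sum(mallows(s, t, nu) for s in S_i for t in S_j) / (len(S_i) * len(S_j))

n = 4
est = antithetic_estimate((2,), (1,), n)
# brute-force exact marginalised Mallows kernel for comparison
R_i = [p for p in permutations(range(1, n + 1)) if p[0] == 2]
R_j = [p for p in permutations(range(1, n + 1)) if p[0] == 1]
exact = sum(mallows(s, t) for s in R_i for t in R_j) / (len(R_i) * len(R_j))
print(abs(est - exact) < 0.05)
```

By Lemma 1 below, the antithetic partners have the correct uniform marginal distribution, so the estimator remains unbiased while the coupling reduces its variance.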

Variance of the antithetic kernel estimator
We now establish some basic theoretical properties of antithetic samples in the context of marginalised kernel estimation. In order to do so, we require a series of lemmas to derive the main result in Theorem 6 that guarantees that the antithetic kernel estimator has lower asymptotic variance than the Monte Carlo kernel estimator for the marginalised Mallows kernel.
The following result shows that antithetic permutations may be used to achieve coupled samples which are marginally distributed uniformly on the subset of S_n corresponding to a top-k partial ranking.

Lemma 1 Let R ⊆ S_n be a top-k (in particular, exhaustive) partial ranking, let A_R be the antithetic operator associated with the right-invariant Kendall distance, and let σ ∼ Unif(R). Then A_R(σ) ∼ Unif(R).

Proof The proof is immediate from Remark 1, since A_R is bijective on R.
Lemma 1 establishes a base requirement of an antithetic sample, namely that it has the correct marginal distribution. In the context of antithetic sampling in Euclidean spaces, this property is often trivial to establish, but the discrete geometry of S_n makes it less obvious. Indeed, we next demonstrate that the condition of exhaustiveness of the partial ranking in Lemma 1 is necessary.
Example 1 Let n = 3, and consider the partial ranking 2 ≻ 1. Note that this is not an exhaustive partial ranking, as the element 3 does not feature in the preference information. There are three full rankings consistent with this partial ranking, namely 3 ≻ 2 ≻ 1, 2 ≻ 3 ≻ 1, and 2 ≻ 1 ≻ 3. Encoding these full rankings as permutations, as described in the correspondence outlined in Section 2, we obtain three permutations, which we respectively denote by σ_A, σ_B, σ_C ∈ S_3. Under the right-invariant Kendall distance, the pairwise distances are d(σ_A, σ_B) = d(σ_B, σ_C) = 1 and d(σ_A, σ_C) = 2. Thus, the marginal distribution of an antithetic sample for the partial ranking 2 ≻ 1 places no mass on σ_B, and half of its mass on each of σ_A and σ_C, and is therefore not uniform over R.
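The distances quoted in Example 1 can be checked directly; the encoding of σ_A, σ_B, σ_C follows the position-to-item convention of Section 2 (function name is ours):

```python
from itertools import combinations

def kendall(sigma, tau):
    pos_s = {item: i for i, item in enumerate(sigma)}
    pos_t = {item: i for i, item in enumerate(tau)}
    return sum(1 for x, y in combinations(sigma, 2)
               if (pos_s[x] - pos_s[y]) * (pos_t[x] - pos_t[y]) < 0)

# rankings 3 > 2 > 1, 2 > 3 > 1, 2 > 1 > 3 as position -> item tuples
sigma_A, sigma_B, sigma_C = (3, 2, 1), (2, 3, 1), (2, 1, 3)
print(kendall(sigma_A, sigma_B),
      kendall(sigma_B, sigma_C),
      kendall(sigma_A, sigma_C))  # 1 1 2
# sigma_B's two competitors are tied at distance 1, while sigma_A and sigma_C
# are at distance 2 from each other, so an antithetic draw never lands on sigma_B.
```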
We further show that the condition of right-invariance of the metric d is necessary in the next example.
Example 2 Let n = 3, and suppose d is a (non-right-invariant) distance on S_3 such that, with the notation introduced in Example 1, the pairwise distances are distorted by the permutation τ ∈ S_3 given by τ(1) = 1, τ(2) = 3, τ(3) = 2. Then an antithetic sample for the kernel associated with this distance and the partial ranking 2 ≻ 1 is equal to σ_B with probability 2/3 and to each of the other two full rankings with probability 1/6, and therefore does not have a uniform distribution.
Examples 1 and 2 serve to illustrate the complexity of antithetic sampling constructions in discrete spaces.
The following two lemmas state some useful relationships between the distance between two permutations (σ, τ) and the distance for the corresponding pair (A_R(σ), τ), in both the unconstrained and constrained cases, which correspond to not having any partial ranking information and having partial ranking information, respectively.

Lemma 2 Let d be the Kendall distance and let σ, τ ∈ S_n. Then d(σ, τ) + d(A_{S_n}(σ), τ) = n(n − 1)/2.

Proof This is immediate from the interpretation of the Kendall distance as the number of discordant pairs between two permutations; a distinct pair i, j ∈ [n] is discordant for σ, τ iff it is concordant for A_{S_n}(σ), τ.

In fact, Lemma 2 generalises in the following manner.

Lemma 3 Let d be the Kendall distance, let R ⊆ S_n be the top-l partial ranking a_1 ≻ · · · ≻ a_l ≻ [n] \ {a_1, . . . , a_l}, and let σ, τ ∈ R. Then d(σ, τ) + d(A_R(σ), τ) = (n − l)(n − l − 1)/2.

Proof As in the proof of Lemma 2, we use the "discordant pairs" interpretation of the Kendall distance. Note that if a distinct pair {x, y} ⊆ [n] has at least one of x, y ∈ {a_1, . . . , a_l}, then, by virtue of the fact that σ, A_R(σ), τ ∈ R, any pair of these permutations is concordant for x, y. Now observe that any distinct pair x, y ∈ [n] \ {a_1, . . . , a_l} is discordant for σ, τ iff it is concordant for A_R(σ), τ, from the construction of A_R(σ) described in Remark 1. The total number of such pairs is (n − l)(n − l − 1)/2, so we have d(σ, τ) + d(A_R(σ), τ) = (n − l)(n − l − 1)/2, as required.
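The identity in Lemma 2 and its constrained generalisation can be verified exhaustively for small n. The sketch below checks both: the unconstrained case over all of S_4 (with the antithetic map being the full reversal, k = 0) and the constrained case for a top-1 partial ranking (helper names are ours):

```python
from itertools import combinations, permutations

def kendall(sigma, tau):
    pos_s = {item: i for i, item in enumerate(sigma)}
    pos_t = {item: i for i, item in enumerate(tau)}
    return sum(1 for x, y in combinations(sigma, 2)
               if (pos_s[x] - pos_s[y]) * (pos_t[x] - pos_t[y]) < 0)

def antithetic_top_k(sigma, k):
    return tuple(sigma[:k]) + tuple(reversed(sigma[k:]))

n = 4
S_n = list(permutations(range(1, n + 1)))

# Unconstrained case: d(sigma, tau) + d(A(sigma), tau) = n(n-1)/2
for sigma in S_n:
    for tau in S_n:
        assert (kendall(sigma, tau)
                + kendall(antithetic_top_k(sigma, 0), tau)) == n * (n - 1) // 2

# Constrained case: R the top-1 ranking 2 > [n] \ {2}, constant (n-1)(n-2)/2
R = [p for p in S_n if p[0] == 2]
for sigma in R:
    for tau in R:
        assert (kendall(sigma, tau)
                + kendall(antithetic_top_k(sigma, 1), tau)) == (n - 1) * (n - 2) // 2
print("identities verified for n = 4")
```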
Next, we show that it is possible to obtain a unique closest element, denoted by Π_R(τ), in a given partial ranking set R with respect to any given permutation τ ∈ S_n, τ ∉ R. This is based on the usual generalisation of a distance between a set and a point [11]. We then use this closest element in Lemmas 5 and 6 to obtain useful decompositions of distance identities. Finally, in Lemma 7 we verify that the closest element is also distributed uniformly on a subset of the original set R.

Lemma 4 Let R ⊆ S_n be a top-k partial ranking, and let τ ∈ S_n be arbitrary. There is a unique closest element in R to τ; in other words, arg min_{σ∈R} d(σ, τ) is a set of size 1.
Proof We use the interpretation of the Kendall distance as the number of discordant pairs between two permutations. Let R be the top-k partial ranking given by x_1 ≻ · · · ≻ x_k ≻ [n] \ {x_1, . . . , x_k}, and let X = {x_1, . . . , x_k}. We decompose the Kendall distance between σ ∈ R and τ as

d(σ, τ) = Σ_{{x,y}⊆X} 1{x, y discordant for σ, τ} + Σ_{x∈X, y∉X} 1{x, y discordant for σ, τ} + Σ_{{x,y}⊆[n]\X} 1{x, y discordant for σ, τ}.

As σ varies in R, only some of these terms vary. In particular, since every σ ∈ R induces the same relative order on pairs involving an element of X, it is only the third term that varies with σ, and it is minimised, at 0, by the unique permutation σ ∈ R which is in accordance with τ on the set [n] \ X.
Definition 7 Let R ⊆ S n be a top-k partial ranking. Let Π R : S n → R be the map that takes a permutation to the corresponding Kendall-closest permutation in R; by Lemma 4, this is well-defined.
Lemma 5 (Decomposition of distances) Let R ⊆ S_n be a top-k partial ranking, let σ ∈ R, and let τ ∈ S_n. We have the following decomposition of the distance d(σ, τ):

d(σ, τ) = d(σ, Π_R(τ)) + d(Π_R(τ), τ). (11)

Proof We compute directly with the discordant-pairs definition of the Kendall distance. Again, let R be the partial ranking x_1 ≻ · · · ≻ x_k ≻ [n] \ {x_1, . . . , x_k}, and let X = {x_1, . . . , x_k}. We decompose the Kendall distance between σ ∈ R and τ as before. Since Π_R(τ) ∈ R agrees with τ on all pairs contained in [n] \ X, the distance d(Π_R(τ), τ) counts exactly the discordant pairs for σ, τ that involve an element of X, i.e. the first two terms of the decomposition in Equation (11). Similarly, we have d(Π_R(τ), σ) = Σ_{x,y∉X, x≠y} 1{x, y discordant for σ, τ}, and so the result follows.
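The projection Π_R and the decomposition of Lemma 5 can likewise be checked by brute force; here project keeps the top-k prefix and orders the remaining items as τ does (helper names are ours):

```python
from itertools import combinations, permutations

def kendall(sigma, tau):
    pos_s = {item: i for i, item in enumerate(sigma)}
    pos_t = {item: i for i, item in enumerate(tau)}
    return sum(1 for x, y in combinations(sigma, 2)
               if (pos_s[x] - pos_s[y]) * (pos_t[x] - pos_t[y]) < 0)

def project(tau, prefix):
    # Pi_R(tau): impose the top-k prefix, keep tau's relative order on the rest
    rest = [x for x in tau if x not in prefix]
    return tuple(prefix) + tuple(rest)

n, prefix = 4, (2,)
R = [p for p in permutations(range(1, n + 1)) if p[:1] == prefix]
for sigma in R:
    for tau in permutations(range(1, n + 1)):
        pi = project(tau, prefix)
        assert kendall(sigma, tau) == kendall(sigma, pi) + kendall(pi, tau)
print("Lemma 5 verified for n = 4")
```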
Lemma 7 Let R, R′ ⊆ S_n be top-k partial rankings, given in preference notation by

R : a_1 ≻ ··· ≻ a_l ≻ [n] \ {a_1, ..., a_l},    R′ : b_1 ≻ ··· ≻ b_m ≻ [n] \ {b_1, ..., b_m}.

If τ ∼ Unif(R′), then Π_R(τ) is a full ranking with distribution Unif(R″), where R″ ⊆ R is the partial ranking given by

R″ : a_1 ≻ ··· ≻ a_l ≻ b_{j_1} ≻ ··· ≻ b_{j_p} ≻ [n] \ {a_1, ..., a_l, b_{j_1}, ..., b_{j_p}},

with b_{j_1}, ..., b_{j_p} the elements of {b_1, ..., b_m} \ {a_1, ..., a_l}, listed in the order in which they appear in R′.

Proof We first show that Π_R maps R′ into R″. Given τ ∈ R′, we first observe that Π_R(τ) ∈ R, and so the full ranking Π_R(τ) is consistent with the partial ranking a_1 ≻ ··· ≻ a_l ≻ [n] \ {a_1, ..., a_l}. Second, by construction Π_R(τ) orders [n] \ {a_1, ..., a_l} exactly as τ does, and τ ∈ R′ places b_{j_1}, ..., b_{j_p} above every other element of [n] \ {a_1, ..., a_l}, in that order.

Putting these two facts together shows that the full ranking Π_R(τ) must be consistent with the partial ranking R″ above. Thus, given τ ∼ Unif(R′), the distribution of Π_R(τ) is supported on R″. To show that it is uniform, we now argue that equally many rankings in R′ are mapped to each ranking in R″. To see this, observe that the pre-image of a ranking in R″ is the set of all rankings in R′ which are concordant with it on all pairs in [n] \ {a_1, ..., a_l, b_1, ..., b_m}. The number of such rankings is independent of the selected ranking in R″, and so the statement of the lemma follows.
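The uniformity claim of Lemma 7 can be checked by enumeration on a small example. The sketch below (plain Python; the closed-form projection and the particular choices of R and R′ are our own illustrative assumptions) counts how many τ ∈ R′ map to each element of the image:

```python
from itertools import permutations
from collections import Counter

def project(top, tau):
    """Pi_R(tau): keep the observed top block of R, order the rest as tau does."""
    return tuple(top) + tuple(x for x in tau if x not in top)

n = 5
R_top = (0, 1)                                              # R  : 0 > 1 > {2,3,4}
R_prime = [p for p in permutations(range(n)) if p[0] == 3]  # R' : 3 > {0,1,2,4}

counts = Counter(project(R_top, tau) for tau in R_prime)
# The image is R'' : 0 > 1 > 3 > {2,4}: R's top block, then the elements of
# R''s top block not already observed in R, then the unconstrained remainder
assert set(counts) == {(0, 1, 3, 2, 4), (0, 1, 3, 4, 2)}
# ...and every element of R'' receives equally many tau's: the image is uniform
assert len(set(counts.values())) == 1
```

Each of the two full rankings in R″ receives exactly half of the 24 elements of R′, as the lemma predicts.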
Having introduced the antithetic operator A_R : R → R for a top-k partial ranking R and the projection map Π_R : S_n → R, we next study how these operations interact with one another.
Lemma 8 Let R′ ⊆ R ⊆ S_n be top-k partial rankings. Then for σ ∈ R′, we have

A_{R′}(Π_{R′}(σ)) = Π_{R′}(A_R(σ)).

Proof We begin by introducing preference-style notation for R and R′. Let R be the top-k ranking given by a_1 ≻ ··· ≻ a_l ≻ [n] \ {a_1, ..., a_l}, and let R′ be the partial ranking given by a_1 ≻ ··· ≻ a_l ≻ a_{l+1} ≻ ··· ≻ a_m ≻ [n] \ {a_1, ..., a_m}. Let σ ∈ R′, and let the elements of [n] \ {a_1, ..., a_m} be given by b_1, ..., b_q, with indices chosen such that σ corresponds to the full ranking a_1 ≻ ··· ≻ a_m ≻ b_1 ≻ ··· ≻ b_q.

Then the ranking A_{R′}(Π_{R′}(σ)) is given by a_1 ≻ ··· ≻ a_m ≻ b_q ≻ ··· ≻ b_1, and a straightforward calculation shows that the same holds for Π_{R′}(A_R(σ)), as required.
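The commutation identity of Lemma 8 can also be verified exhaustively on a small instance. In the sketch below (plain Python; the tail-reversal form of the antithetic operator and the closed-form projection are our own illustrative encodings of A_R and Π_R), R′ ⊆ R are nested top-k partial rankings:

```python
from itertools import permutations

def project(top, tau):
    """Pi_R(tau): keep the observed top block, order the rest as tau does."""
    return tuple(top) + tuple(x for x in tau if x not in top)

def antithetic(top, sigma):
    """A_R(sigma): keep the observed top block, reverse the unobserved tail."""
    tail = [x for x in sigma if x not in top]
    return tuple(top) + tuple(reversed(tail))

n = 5
R_top = (0,)        # R  : 0 > {1,2,3,4}
Rp_top = (0, 1, 2)  # R' : 0 > 1 > 2 > {3,4}, so R' is a subset of R
R_prime = [p for p in permutations(range(n)) if p[:3] == Rp_top]
for sigma in R_prime:
    assert antithetic(Rp_top, project(Rp_top, sigma)) == \
           project(Rp_top, antithetic(R_top, sigma))
```

Both orders of applying the two operators produce the ranking a_1 ≻ ··· ≻ a_m ≻ b_q ≻ ··· ≻ b_1, as in the proof.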
Finally, the last lemma states the most general distance identity, which involves the antithetic operator, the closest-element map for a given partial ranking set R, and a subset of it, denoted by R′.
Proof of Proposition 2 Let σ_0 ∈ S_n be the fixed permutation; the stated identity then follows from the decomposition lemmas established above.

Having proved all the relevant lemmas, we now present our main result regarding antithetic samples: namely, that this scheme provides negatively correlated pairs of samples.

Theorem 6 Let the antithetic kernel estimator be evaluated at a pair of partial rankings R_1, R_2 ⊆ S_n. Then the asymptotic variance of the estimator from Equation (5) is lower in the antithetic case than in the i.i.d. case.
Proof It has been shown previously that the antithetic kernel estimator is unbiased (in the off-diagonal case), so showing that it has lower MSE in the antithetic case is equivalent to showing that its second moment is smaller in the antithetic case than in the i.i.d. case. The second moment is a sum over indices n, n′ = 1, ..., N and m, m′ = 1, ..., M of expectations of products of the bracketed kernel sums appearing in the estimator. We identify three types of terms in this sum: (i) those where n ≠ n′ and m ≠ m′; (ii) those where n = n′ but m ≠ m′, or m = m′ but n ≠ n′; (iii) those where n = n′ and m = m′.
We remark that in case (i), the 16 terms that appear in the summand all have the same distribution in the antithetic and i.i.d. cases, so terms of type (i) contribute no difference between antithetic and i.i.d. sampling. There are O(N²M + M²N) terms of type (ii), and O(NM) terms of type (iii); we thus refer to terms of type (ii) as cubic terms, and terms of type (iii) as quadratic terms. Since the ratio of cubic terms to quadratic terms diverges as N, M → ∞, it is sufficient to prove that each cubic term is smaller in the antithetic case than in the i.i.d. case to establish the claim of lower MSE.
Thus, we focus on cubic terms. Let us consider a term with n = n′ and m ≠ m′; writing σ̃ for the antithetic counterpart of σ, the term has the form

E[(K(σ_n, τ_m) + K(σ̃_n, τ_m) + K(σ_n, τ̃_m) + K(σ̃_n, τ̃_m)) × (K(σ_n, τ_{m′}) + K(σ̃_n, τ_{m′}) + K(σ_n, τ̃_{m′}) + K(σ̃_n, τ̃_{m′}))].

Of the sixteen terms appearing in the expectation above, there are only two distinct distributions they may have. The two types of terms are given below:

E[K(σ_n, τ_m) K(σ_n, τ_{m′})]   (15)

and

E[K(σ_n, τ_m) K(σ̃_n, τ_{m′})].   (16)

Terms of the form in Equation (15), in which σ_n appears in both factors, have the same distribution in the antithetic and i.i.d. cases, so we can ignore these. However, terms of the form in Equation (16), which involve both σ_n and σ̃_n, have differing distributions in these two cases, so we focus on these. We deal specifically with the case where K(σ, τ) = exp(−λ d(σ, τ)), so we may rewrite the expression in Equation (16) as

E[exp(−λ(d(σ_n, τ_m) + d(σ̃_n, τ_{m′})))].

We now decompose the distances d(σ_n, τ_m), d(σ̃_n, τ_{m′}) using the series of lemmas introduced before. First, we use Lemma 5 to write

d(σ_n, τ_m) = d(σ_n, Π_{R_1}(τ_m)) + d(Π_{R_1}(τ_m), τ_m),    d(σ̃_n, τ_{m′}) = d(σ̃_n, Π_{R_1}(τ_{m′})) + d(Π_{R_1}(τ_{m′}), τ_{m′}).   (18)

We give a small example illustrating some of the variables at play in this decomposition in Figure 3. Now, writing R_3 ⊆ R_1 for the partial ranking described by Lemma 7, we have that Π_{R_1}(τ_m), Π_{R_1}(τ_{m′}) are i.i.d. ∼ Unif(R_3). Therefore, the distances in Equation (18) may be decomposed further:

d(σ_n, Π_{R_1}(τ_m)) = d(σ_n, Π_{R_3}(σ_n)) + d(Π_{R_3}(σ_n), Π_{R_1}(τ_m)),    d(σ̃_n, Π_{R_1}(τ_{m′})) = d(σ̃_n, Π_{R_3}(σ̃_n)) + d(Π_{R_3}(σ̃_n), Π_{R_1}(τ_{m′})).

We now consider each term, and argue as to whether its distribution differs between the antithetic and i.i.d. cases, recalling that in the i.i.d. case, σ̃_n is drawn from R_1 independently of σ_n, whilst in the antithetic case, σ̃_n = A_{R_1}(σ_n).
- Each of the terms d(Π_{R_1}(τ_m), τ_m) and d(Π_{R_1}(τ_{m′}), τ_{m′}) has the same distribution under the i.i.d. case and the antithetic case. Further, in both cases these terms depend only on τ_m and τ_{m′}, so they are independent of all others appearing in the sum.
- Each of the terms d(Π_{R_3}(σ_n), Π_{R_1}(τ_m)) and d(Π_{R_3}(σ̃_n), Π_{R_1}(τ_{m′})) has the same distribution under the i.i.d. case and the antithetic case, and is independent of all other terms in both cases.
- We deal with the terms d(σ_n, Π_{R_3}(σ_n)) and d(σ̃_n, Π_{R_3}(σ̃_n)) using Lemma 9. More specifically, under the i.i.d. case, these two distances are clearly i.i.d. However, under the antithetic case, the lemma tells us that the sum of these two distances is almost surely equal to its mean under the i.i.d. distribution. Thus, in the antithetic case, this sum has the same mean as in the i.i.d. case, but is more concentrated (strictly so iff d(σ_n, Π_{R_3}(σ_n)) is not almost surely constant, which is the case iff R_1 ≠ R_3).

Thus, d(σ_n, τ_m) + d(σ̃_n, τ_{m′}) has the same mean under the i.i.d. and antithetic cases, but is strictly more concentrated when R_1 ≠ R_3, which holds iff the partial rankings R_1 and R_2 do not concern exactly the same set of objects. Thus, by a conditional version of Jensen's inequality, since exp(−λx) is strictly convex as a function of x, we obtain the variance result.
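The strict inequality for cubic terms can be checked numerically by brute-force enumeration. The sketch below (plain Python; the tuple encoding of permutations, the top-1 partial rankings and λ = 1 are our own illustrative choices) compares the antithetic and i.i.d. values of the expectation in Equation (16) for the Mallows kernel:

```python
import math
from itertools import permutations

def kendall(a, b):
    """Number of discordant pairs between two orderings (most-preferred first)."""
    pb = {x: i for i, x in enumerate(b)}
    return sum(1 for i in range(len(a)) for j in range(i + 1, len(a))
               if pb[a[i]] > pb[a[j]])

def K(s, t):
    return math.exp(-kendall(s, t))  # Mallows kernel, lambda = 1

n = 4
R1 = [p for p in permutations(range(n)) if p[0] == 0]  # R1 : 0 > {1,2,3}
R2 = [p for p in permutations(range(n)) if p[0] == 1]  # R2 : 1 > {0,2,3}

def anti(s):  # antithetic within R1: keep the observed item, reverse the tail
    return (s[0],) + tuple(reversed(s[1:]))

# E[K(sigma, tau) K(sigma~, tau')] with tau, tau' i.i.d. Unif(R2):
# antithetic coupling sigma~ = A_{R1}(sigma)...
E_anti = sum(K(s, t) * K(anti(s), tp) for s in R1 for t in R2 for tp in R2) / (
    len(R1) * len(R2) ** 2)
# ...versus an independent second draw sigma~ ~ Unif(R1)
E_iid = sum(K(s, t) * K(sp, tp) for s in R1 for sp in R1
            for t in R2 for tp in R2) / (len(R1) ** 2 * len(R2) ** 2)
assert E_anti < E_iid  # the cubic term is strictly smaller under antithetic sampling
```

Here R_1 and R_2 concern different objects, so the inequality is strict, in line with the proof.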

Antithetic kernel estimator and kernel herding
Having established the variance-reduction properties of antithetic samples in the context of Monte Carlo kernel estimation, in this section we explore connections to kernel herding [6].

Theorem 7
The antithetic variate construction of Theorem 5 is equivalent to the optimal solution for the first two steps of a kernel herding procedure in the space of permutations.
Proof Let R be a partial ranking of n elements. We calculate the sequence of herding samples from the uniform distribution p(·|R) over full rankings consistent with R, associated with the exponential semimetric kernel K(σ, σ′) = exp(−λd(σ, σ′)), for a metric d of negative definite type. Following [6], we note that the herding samples from p(·|R) associated with the kernel K, with RKHS embedding φ : S_n → H, are defined iteratively by

σ_{t+1} = argmax_{σ ∈ S_n} [ μ_p(σ) − (1/(t+1)) Σ_{j=1}^{t} K(σ, σ_j) ],

where μ_p is the RKHS mean embedding of the distribution p. Since p is uniform over its support, any ranking σ in the support of p(·|R) is a valid choice as the first sample in a herding sequence. Given such an initial sample, we then calculate the second herding sample by considering the herding objective

‖ 2μ_p − φ(σ_1) − φ(σ_2) ‖²_H,

which, as a function of σ_2, is equal to 2K(σ_1, σ_2) = 2 exp(−λd(σ_1, σ_2)), up to an additive constant. Thus, selecting σ_2 to minimize the herding objective is equivalent to maximizing d(σ_1, σ_2), which is exactly the definition of the antithetic sample to σ_1.
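The equivalence can be made concrete by running the first two greedy herding steps on a small partial ranking. In the sketch below (plain Python; the brute-force mean embedding, the top-1 ranking and λ = 0.5 are our own illustrative choices, and constant scalings of the herding objective do not affect the argmax), the second herding sample coincides with the antithetic tail reversal:

```python
import math
from itertools import permutations

def kendall(a, b):
    """Number of discordant pairs between two orderings (most-preferred first)."""
    pb = {x: i for i, x in enumerate(b)}
    return sum(1 for i in range(len(a)) for j in range(i + 1, len(a))
               if pb[a[i]] > pb[a[j]])

lam = 0.5
def K(s, t):
    return math.exp(-lam * kendall(s, t))

# R : 0 > {1,2,3}; p(.|R) is uniform over the 6 consistent full rankings
R = [p for p in permutations(range(4)) if p[0] == 0]
# Mean embedding of Unif(R), evaluated pointwise: mu_p(s) = mean_t K(s, t)
mu = {s: sum(K(s, t) for t in R) / len(R) for s in R}

sigma1 = R[0]  # any element of the support is a valid first herding sample
# Second greedy herding sample: argmax_s [ mu_p(s) - (1/2) K(s, sigma1) ]
sigma2 = max(R, key=lambda s: mu[s] - 0.5 * K(s, sigma1))
# It maximises d(sigma1, .), i.e. it is the antithetic sample (tail reversed)
assert sigma2 == (0, 3, 2, 1)
```

Since μ_p is constant over the support of p(·|R), the argmax reduces to minimising K(σ_1, ·), i.e. maximising the Kendall distance to σ_1.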
After this result, one would like to run a herding procedure for more than two steps. However, greedy selection is not the same as picking k herding samples simultaneously. The following counterexample, illustrated in Figure 4, shows why. The left plot shows the result of solving the herding objective for 2 samples: the result is an antithetic pair of samples for the region R. If a third sample is selected greedily, with these first two samples fixed, it will yield a different result than if the herding objective is solved for 3 samples simultaneously, as illustrated on the right of the figure.
Remark 4 Theorem 7 says that if we first pick a point uniformly at random from R and then select the second point deterministically to minimise the herding objective, this is equivalent to the antithetic variate construction of Definition 6. Alternatively, we could pick the second point uniformly at random from R, independently of the first point; this second scheme produces a higher value of the herding objective on average. Fig. 4 Samples from the region R, illustrating the difference between solving the herding objective greedily and solving for all samples simultaneously.
Having constructed two estimators for kernel matrices, we present experiments to assess their performance in the next section.

Experiments
In this section, we use the Monte Carlo and antithetic kernel estimators for a variety of unsupervised and supervised machine learning tasks: a nonparametric hypothesis test, an agglomerative clustering algorithm and a Gaussian process classifier.
Definition 6 states the antithetic permutation construction with respect to a given permutation for the Kendall distance. In order to handle partial rankings data, we must respect the observed preferences when obtaining the antithetic variate. The pseudocode of Algorithm 1 describes how to sample an antithetic permutation while respecting the constraints imposed by the observed partial ranking: the antithetic permutation keeps the observed preferences fixed in the same locations as the original permutation and only reverses the unobserved locations. This maximises the Kendall distance between the pair of permutations subject to the constraints, and ensures that both permutations have the correct marginals, as stated in Remark 1 and Lemma 1.
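A minimal sketch of this sampling step is given below (plain Python; the function name, the tuple encoding of permutations and the restriction to top-k partial rankings are our own illustrative choices, not the exact pseudocode of Algorithm 1):

```python
import random

def sample_antithetic_pair(top, n, seed=None):
    """Draw sigma ~ Unif(R) for the top-k partial ranking R given by `top`,
    together with its antithetic counterpart.

    Both permutations keep the observed block `top` fixed; the counterpart
    reverses the randomised unobserved tail, which maximises the Kendall
    distance between the pair subject to the observed preferences."""
    rng = random.Random(seed)
    tail = [x for x in range(n) if x not in top]
    rng.shuffle(tail)
    sigma = tuple(top) + tuple(tail)
    sigma_anti = tuple(top) + tuple(reversed(tail))
    return sigma, sigma_anti

sigma, sigma_anti = sample_antithetic_pair((2, 0), 6, seed=0)
assert sigma[:2] == sigma_anti[:2] == (2, 0)         # observed block fixed
assert sigma[2:] == tuple(reversed(sigma_anti[2:]))  # unobserved tail reversed
```

Because the tail is a uniform shuffle and reversal is a bijection, each permutation of the pair is marginally uniform on R.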

Datasets
Synthetic data set. The synthetic dataset for the nonparametric hypothesis test experiment, where the null hypothesis is H_0 : P = Q and the alternative is H_1 : P ≠ Q, is the following: the dataset from the P distribution is a mixture of Mallows distributions [9] with the Kendall and Hamming distances. The central permutations are given by the identity permutation and the reverse of the identity, respectively, with lengthscale equal to one. The dataset from the Q distribution is a sample from the uniform distribution over S_n, where n = 6.
Sushi dataset. This dataset contains rankings about sushi preferences given by 5000 users [21]. The users ranked 10 types of sushi and the labels correspond to the user's region (East Japan or West Japan for the Gaussian process classifier and ten different regions for the agglomerative clustering task).

Agglomerative clustering
In this experiment, we used both the full and a censored version of the sushi dataset from Section 5.1. We used various distances for permutations to compute the estimators for the semimetric matrix between pairs of partial ranking sets. In order to compute our estimators, we censored the dataset by storing the top-k = 4 partial rankings per user. The Monte Carlo and antithetic kernel estimators were used to obtain negative-type semimetric matrices via the relationship from Equation (2). These matrices were then used as input to the average-linkage agglomerative clustering algorithm [10]. We report the dendrogram purity measure, which provides a way to assess the tree produced by the agglomerative clustering algorithm. It can be computed in the following way: given a dendrogram and all correct labels, pick uniformly at random two leaves which share the same label c and find the smallest subtree containing both leaves. The dendrogram purity is the expected value of (#leaves with label c in the subtree) / (#leaves in the subtree), per class. If all leaves in a class are contained in a pure subtree, the dendrogram purity is one; hence, values close to one correspond to high-quality trees.

Table 1 Tree purities for the sushi dataset using a subsample of 100 users with the full Gram matrix K, a censored dataset of top-k = 4 partial rankings for the vanilla Monte Carlo estimator K̂ and the antithetic Monte Carlo estimator K̂_a, with n_mc = 20 Monte Carlo samples. Tree cut at k = 10 clusters. The median distance criterion was used to select the inverse of the lengthscale for the semimetric exponential kernels.
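The purity computation described above can be sketched in a few lines (plain Python; the nested-tuple dendrogram encoding and the averaging over all same-label leaf pairs are our own illustrative simplifications):

```python
from itertools import combinations

def leaves(t):
    """All leaf labels of a dendrogram encoded as nested 2-tuples."""
    return [t] if not isinstance(t, tuple) else leaves(t[0]) + leaves(t[1])

def smallest_subtree(t, a, b):
    """Smallest subtree of the dendrogram containing both leaves a and b."""
    for child in t:
        if isinstance(child, tuple) and a in leaves(child) and b in leaves(child):
            return smallest_subtree(child, a, b)
    return t

def dendrogram_purity(tree, labels):
    """Expected fraction of same-label leaves in the smallest subtree that
    spans a uniformly chosen pair of leaves sharing a label."""
    vals = []
    for a, b in combinations(leaves(tree), 2):
        if labels[a] != labels[b]:
            continue
        sub = leaves(smallest_subtree(tree, a, b))
        vals.append(sum(labels[x] == labels[a] for x in sub) / len(sub))
    return sum(vals) / len(vals)

tree = ((0, 1), (2, 3))
# Pure subtrees per class give purity one...
assert dendrogram_purity(tree, {0: "a", 1: "a", 2: "b", 3: "b"}) == 1.0
# ...and mixing a leaf into the wrong subtree lowers it
assert dendrogram_purity(tree, {0: "a", 1: "b", 2: "b", 3: "b"}) < 1.0
```

The recursive search for the smallest spanning subtree is quadratic and only suitable for small trees; it is meant to illustrate the measure, not to reproduce the experiment.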
In Table 1, the true and estimated purities using the full rankings and the partial rankings datasets are reported. We assumed that the true labels are given by the users' regions; there are ten possible regions. The true purity corresponds to an agglomerative clustering algorithm using the Gram matrix obtained from the full rankings, which we can compute because we have access to all of the users' rankings over the ten types of sushi. The antithetic Monte Carlo estimator outperforms the vanilla Monte Carlo estimator in terms of average purity, since it is closer to the true purity. It also has a lower standard deviation when estimating the marginalised Mallows kernel.

Nonparametric hypothesis test with MMD
Let P and Q be probability distributions over S_n. The null hypothesis is H_0 : P = Q versus the alternative H_1 : P ≠ Q, using samples σ_1, ..., σ_m i.i.d. ∼ P and τ_1, ..., τ_n i.i.d. ∼ Q. We can estimate a pseudometric between P and Q and reject H_0 if the observed value of the statistic is large. The following is an unbiased estimator of the squared MMD [15]:

MMD²(P, Q) = (1/(m(m−1))) Σ_{i≠j} K(σ_i, σ_j) + (1/(n(n−1))) Σ_{i≠j} K(τ_i, τ_j) − (2/(mn)) Σ_{i=1}^{m} Σ_{j=1}^{n} K(σ_i, τ_j).   (21)

This statistic depends on the chosen kernel, as can be seen in Equation (21). If the kernel is characteristic [32], then the MMD is a proper metric over probability distributions. Analogously, we can compute an MMD squared estimator for sets of partial rankings by replacing each kernel evaluation in Equation (21) with its Monte Carlo or antithetic estimator; this yields the statistic in Equation (22).

We used the synthetic datasets for P and Q described in Section 5.1 to assess the performance of the Monte Carlo and antithetic kernel estimators in a nonparametric hypothesis test. The datasets consist of rankings over n = 10 objects, which we censored to obtain top-k partial rankings with k = 3. We then computed the MMD squared statistic using the samples from the two populations. Since the non-asymptotic distribution of the statistic from Equation (22) is not known, we performed a permutation test [1] in order to estimate the null distribution consistently and compute the p-value. We did this repeatedly as we varied the number of observations for a fixed number of Monte Carlo samples, to see the effect of the sample size on the p-value computations. Specifically, Figure 5 and Table 2 show that the p-value computed with the antithetic kernel estimator has lower variance as we vary the number of observations in our dataset. Both p-values converge to zero, since the samples from the two populations come from different distributions. In Table 2 we report the standard deviations of the estimated p-values; the p-value obtained with the antithetic kernel estimator has lower variance across all sample sizes.
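The unbiased MMD² statistic and the permutation test can be sketched as follows (plain Python; the Mallows kernel on full rankings, the toy sampling distributions and all function names are our own illustrative choices, much smaller than the experiment in the text):

```python
import math
import random

def kendall(a, b):
    """Number of discordant pairs between two orderings (most-preferred first)."""
    pb = {x: i for i, x in enumerate(b)}
    return sum(1 for i in range(len(a)) for j in range(i + 1, len(a))
               if pb[a[i]] > pb[a[j]])

def mallows(s, t, lam=1.0):
    return math.exp(-lam * kendall(s, t))

def mmd2_unbiased(X, Y, k):
    """Unbiased estimator of MMD^2 between the samples X and Y [15]."""
    m, n = len(X), len(Y)
    xx = sum(k(X[i], X[j]) for i in range(m) for j in range(m) if i != j) / (m * (m - 1))
    yy = sum(k(Y[i], Y[j]) for i in range(n) for j in range(n) if i != j) / (n * (n - 1))
    xy = sum(k(x, y) for x in X for y in Y) / (m * n)
    return xx + yy - 2 * xy

def permutation_test_pvalue(X, Y, k, n_perm=200, seed=1):
    """Estimate the null distribution by re-splitting the pooled sample at random."""
    rng = random.Random(seed)
    obs = mmd2_unbiased(X, Y, k)
    pooled = list(X) + list(Y)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        hits += mmd2_unbiased(pooled[:len(X)], pooled[len(X):], k) >= obs
    return (hits + 1) / (n_perm + 1)

rng = random.Random(0)
def draw_P():  # stands in for a peaked model: item 0 is always ranked first
    rest = list(range(1, 4)); rng.shuffle(rest); return (0, *rest)
def draw_Q():  # uniform over S_4
    p = list(range(4)); rng.shuffle(p); return tuple(p)

X = [draw_P() for _ in range(15)]
Y = [draw_Q() for _ in range(15)]
p_value = permutation_test_pvalue(X, Y, mallows)
assert 0.0 < p_value <= 1.0
```

For partial rankings data, each kernel evaluation inside `mmd2_unbiased` would be replaced by the Monte Carlo or antithetic estimator over the consistent full rankings.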

Gaussian process classifier
In this experiment, two different kernels were used to compute the estimators for the Gram matrix between different pairs of partial ranking sets. The matrix was then provided as the input to a Gaussian process classifier [25]. The Python library GPy [14] was extended with custom kernel classes for partial rankings, which compute both the Monte Carlo and antithetic kernel estimators for partial ranking sets. Previously, only pointwise evaluations of kernels were possible, but our implementation allows the kernels to be computed over pairs of partial ranking sets by first storing the sets in a tensor.
For the Mallows kernel, we used the median distance heuristic [33,28] with the Kendall distance to compute the bandwidth parameter, and a scale parameter of 9.5. We performed a grid search over different values of the scale parameter and picked the one with the largest classification accuracy on the test set.

Table 3 Averaged over 10 runs with 4 Monte Carlo samples per run, using the inverse of the median distance as the lengthscale, n = 10, top-k = 6.
In Table 3, we report the results of running the Gaussian process classifier using the marginalised Mallows kernel, the marginalised Gaussian kernel and the marginalised Kendall kernel, as well as the corresponding estimators. Since the Mallows kernel is based on the Kendall distance, it is a kernel specifically tailored to permutations, and it is the best in terms of predictive performance. The Gaussian kernel is suited to Euclidean spaces and does not take the data type into account, but it still does well. The Kendall kernel does take the data type into account, yet it performs the worst. The full model corresponds to using the Gram matrix computed from the full rankings, while MC and Antithetic refer to our proposed estimators. Empirically, the predictive accuracy obtained with the antithetic kernel estimator has lower variance, as expected.

Conclusion
We addressed the problem of extending kernels to partial rankings by introducing a novel Monte Carlo kernel estimator, and explored variance reduction strategies via an antithetic variates construction. Our schemes lead to a computationally tractable alternative to previous approaches for partial rankings data. The Monte Carlo scheme can be used to obtain an estimator of the marginalised kernel with any of the kernels reviewed herein; the antithetic construction provides an improved version of the kernel estimator for the marginalised Mallows kernel. Our contribution is noteworthy because the exact computation of most marginalised kernels grows super-exponentially with the number of ranked items n, hence it quickly becomes intractable even for relatively small n. An exception is the fast approach for computing the convolution kernel proposed by Jiao & Vert [20], which is only valid for the Kendall kernel. Mania et al. [26] have shown, using non-commutative Fourier analysis, that the Kendall kernel has a degenerate spectrum and is therefore not characteristic. For this reason, other kernels for permutations may be preferable depending on the task at hand.
One possible direction for future work is the use of explicit feature representations, as in traditional random feature schemes, to further reduce the computational cost of the Gram matrix. Another possible application is to use our method with pairwise preference data in which users are not necessarily consistent about their preferences. For this type of data, we could still extract a partial ranking for a given user, then sample from the space of full rankings consistent with this observed partial ranking and obtain our Monte Carlo kernel estimator. Such data would benefit from our framework because a partial ranking is in general more informative than pairwise comparisons or star ratings.
Another natural direction for future work is to develop variance-reduction sampling techniques for a wider variety of kernels over permutations, and to extend the theoretical analysis of these constructions to discrete graphs more generally.

A Reproducing kernel Hilbert spaces
A reproducing kernel Hilbert space (RKHS) [4] over a set X is a Hilbert space H consisting of functions on X such that for each x ∈ X there is a function k_x ∈ H with the reproducing property

f(x) = ⟨f, k_x⟩_H for all f ∈ H.

The function k_x(·) = k(x, ·) is called the reproducing kernel of H [2]. The space H is endowed with an inner product ⟨·, ·⟩_H, and a norm can be defined from it via ‖f‖_H := √⟨f, f⟩_H. In order to be a Hilbert space, H must contain all limits of Cauchy sequences, i.e. it has to be complete. In the case of the symmetric group of degree n, X = S_n, the space is finite-dimensional, which guarantees that it is complete. Finally, any symmetric and positive definite function k : X × X → R uniquely determines an RKHS. Alternatively, a function k : X × X → R is called a kernel if there exists a Hilbert space H and a map φ : X → H such that for all x, y ∈ X, k(x, y) = ⟨φ(x), φ(y)⟩_H. The function φ is usually referred to as a feature representation of x. Even though the RKHS induced by a kernel is unique, there can be more than one feature representation defining the same kernel.

B Expectation of the Kernel Monte Carlo estimator
Proof For distinct i, j = 1, ..., I, let σ^(i)_1, ..., σ^(i)_M denote the Monte Carlo samples drawn for the partial ranking R_i. If the weights are uniform, then by linearity of expectation, since the samples are identically distributed, the expectation in the summand reduces to

Σ_{σ∈R_i} Σ_{σ′∈R_j} K(σ, σ′) p(σ | R_i) p(σ′ | R_j),

as required. The diagonal case i = j is handled analogously. If the weights are non-uniform and given by the importance sampling weights w^(i) = p(σ | R_i) / q(σ | R_i), and the expectation is taken with respect to the proposal q, then again by linearity of expectation, since the samples are identically distributed, the expectation in the summand reduces to

Σ_{σ∈R_i} Σ_{σ′∈R_j} K(σ, σ′) p(σ | R_i) p(σ′ | R_j),

as required.
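The unbiasedness of the uniform-weight estimator can be confirmed by exact enumeration on a small instance: averaging the estimator over every possible sample draw recovers the marginalised kernel. The sketch below (plain Python; the tuple encoding, the Mallows kernel with λ = 0.5 and the particular partial rankings are our own illustrative choices) performs this check:

```python
import math
from itertools import permutations, product

def kendall(a, b):
    """Number of discordant pairs between two orderings (most-preferred first)."""
    pb = {x: i for i, x in enumerate(b)}
    return sum(1 for i in range(len(a)) for j in range(i + 1, len(a))
               if pb[a[i]] > pb[a[j]])

def K(s, t):
    return math.exp(-0.5 * kendall(s, t))

n, M = 4, 2
Ri = [p for p in permutations(range(n)) if p[0] == 0]        # R_i : 0 > {1,2,3}
Rj = [p for p in permutations(range(n)) if p[:2] == (1, 2)]  # R_j : 1 > 2 > {0,3}

# Marginalised kernel: E[K(sigma, sigma')] under Unif(R_i) x Unif(R_j)
marginal = sum(K(s, t) for s in Ri for t in Rj) / (len(Ri) * len(Rj))

# Exact expectation of the uniform-weight Monte Carlo estimator with M samples
# per set, obtained by enumerating every possible draw
total = 0.0
for draws_i in product(Ri, repeat=M):
    for draws_j in product(Rj, repeat=M):
        total += sum(K(s, t) for s in draws_i for t in draws_j) / M ** 2
expectation = total / (len(Ri) ** M * len(Rj) ** M)
assert abs(expectation - marginal) < 1e-12  # the estimator is unbiased
```

Linearity of expectation makes the agreement exact (up to floating point), matching the off-diagonal case of the proof.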