Discrimination between Gaussian process models: active learning and static constructions

The paper covers the design and analysis of experiments to discriminate between two Gaussian process models with different covariance kernels, such as those widely used in computer experiments, kriging, sensor location and machine learning. Two frameworks are considered. First, we study sequential constructions, where successive design (observation) points are selected, either as additional points to an existing design or from the beginning of observation. The selection relies on the maximisation of the difference between the symmetric Kullback-Leibler divergences for the two models, which depends on the observations, or on the mean squared error of both models, which does not. Then, we consider static criteria, such as the familiar log-likelihood ratios and the Fréchet distance between the covariance functions of the two models. Other distance-based criteria, simpler to compute than the previous ones, are also introduced; for these, within the framework of approximate design, a necessary condition for the optimality of a design measure is provided. The paper includes a study of the mathematical links between the different criteria, and numerical illustrations are provided.


Introduction
The term 'active learning' [cf. Hino (2020) for a recent review] has replaced the traditional (sequential or adaptive) 'design of experiments' in the computer science literature, typically when the response is approximated by Gaussian process regression [GPR, cf. Sauer et al. (2022)]. It refers to selecting the most suitable inputs to extract the maximum information from the outputs, usually with the aim of improving prediction accuracy. A good overview is given in Chapter 6 of Gramacy (2020).
Frequently, the aim of an experiment (in the broad sense of any data acquisition exercise) may rather be the discrimination between two or more potential explanatory models. When data can be collected sequentially during the experimental process, the literature goes back to the classic procedure of Hunter and Reiner (1965) and has generated ongoing research [see e.g. Schwaab et al. (2008), Olofsson et al. (2018) and Heirung et al. (2019)]. When the design needs to be fixed before the experiment, so that no intermediate data are available, the literature is less developed. While in the classical (non)linear regression case the criterion of T-optimality [cf. Atkinson and Fedorov (1975)] and the numerous papers extending it were a major step, a similar breakthrough for Gaussian process regression is lacking.
With this paper we would like to investigate various sequential/adaptive and nonsequential design schemes for discriminating between the covariance structures of GPRs, and their relative properties. When the observations associated with the already collected points are available, one may base the criterion on the predictions and prediction errors (Sect. 3.1). On the one hand, a natural choice is to put the next design point where the symmetric Kullback-Leibler divergence between the two predictive (normal) distributions is largest. On the other hand, when the associated observations are not available, the incremental construction of the designs can be based on the mean squared error (MSE) for both models, assuming in turn that either of the two models is the true one (Sect. 3.2). We theoretically investigate the asymptotic differences of the criteria with respect to their discriminatory power.
The static construction of a set of optimal designs of given size for nominal model parameters is the last mode we have considered (Sect. 4). Our first choice is to use the difference between the expected values of the log-likelihood ratios, assuming in turn that either of the two models is the true one. This is actually a function of the symmetric Kullback-Leibler divergence, which also arises from Bayesian considerations. In a similar spirit, the Fréchet distance between two covariance matrices provides another natural criterion. Some further novel but simple approaches are considered in this paper as well. In particular, we are interested in whether complex likelihood-based criteria like the Kullback-Leibler divergence can be effectively replaced by simpler ones based directly on the respective covariance kernels. The construction of optimal design measures for model discrimination (approximate design theory) is considered in Sect. 5, where we investigate the geometric properties of some of the newly introduced criteria.
Eventually, to compare the discriminatory power of the resulting designs from different criteria, one can compute the correct classification (hit) rates after selecting the model with the higher likelihood value. In Sect. 6, a numerical illustration is provided for two Matérn kernels with different smoothness. Furthermore, we confirm the theoretical considerations about optimal design measures from Sect. 5 on a numerical example.
Except for adaptive designs, where the parameter estimates are continuously updated as new data arrive, we assume that the parameters of the models between which we want to discriminate are known. Therefore, our results are relevant in situations where there is strong prior knowledge about the possible models, for example through previously collected data.

Notation
One of the most popular design criteria for discriminating between rival models is T-optimality (Atkinson and Fedorov 1975). This criterion is only applicable when the observations are independent and normally distributed with a constant variance. López-Fidalgo et al. (2007) generalised the normality assumption and developed an optimal discriminating design criterion to choose among non-normal models. The criterion is based on the log-likelihood ratio test under the assumption of independent observations. We denote by ϕ_0(y, x, θ_0) and ϕ_1(y, x, θ_1) the two rival probability density functions for one observation y at point x. The following system of hypotheses might be considered: H_0: the density of y is ϕ_0(y, x, θ_0), versus H_1: the density of y is ϕ_1(y, x, θ_1), where ϕ_1(y, x, θ_1) is assumed to be the true model. A common test statistic is the log-likelihood ratio, given as L = log ϕ_1(y, x, θ_1) − log ϕ_0(y, x, θ_0), where the null hypothesis is rejected when ϕ_1(y, x, θ_1) > ϕ_0(y, x, θ_0), or equivalently when L > 0. The power of the test refers to the expected value of the log-likelihood ratio criterion under the alternative hypothesis H_1. We have E_1{L} = D_KL(ϕ_1 ‖ ϕ_0) (1), where D_KL(ϕ_1 ‖ ϕ_0) is the Kullback-Leibler distance between the true and the alternative model (Kullback and Leibler 1951).
Interchanging the two models in the null and the alternative hypothesis, the power of the test would be E_0{−L} = D_KL(ϕ_0 ‖ ϕ_1) (2). If it is not clear in advance which of the two models is the true model, one might consider searching for a design optimising a convex combination of (1) and (2), most commonly using weights 1/2 for each model. This would be equivalent to maximising the symmetric Kullback-Leibler distance D_SKL(ϕ_0, ϕ_1) = (1/2) [D_KL(ϕ_0 ‖ ϕ_1) + D_KL(ϕ_1 ‖ ϕ_0)]. In this paper we will consider random fields, i.e. we will allow for correlated observations. As we assume that the mean function is known and is the same for all models, without loss of generality we can set the mean function equal to 0 everywhere. We are solely concerned with discriminating with respect to the covariance structure of the random fields. When the random fields are Gaussian, we might still base the design strategy on the log-likelihood ratio criterion to choose among two rival models.
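The divergences above have closed forms for normal distributions, the case used throughout the paper. A minimal sketch for the univariate case (the function names are ours, not the paper's):

```python
import math

def kl_normal(m0, v0, m1, v1):
    """D_KL(N(m0, v0) || N(m1, v1)) for univariate normals (v0, v1 variances)."""
    return 0.5 * (math.log(v1 / v0) + (v0 + (m0 - m1) ** 2) / v1 - 1.0)

def symmetric_kl_normal(m0, v0, m1, v1):
    """Symmetrised Kullback-Leibler divergence (sum of the two directions);
    note that the log-variance terms cancel in the sum."""
    return kl_normal(m0, v0, m1, v1) + kl_normal(m1, v1, m0, v0)
```

The symmetrised version is invariant under exchanging the two models, which is what makes it attractive when neither model can be privileged a priori.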
For a positive definite kernel K(x, x') and an n-point design X_n = (x_1, ..., x_n), we denote by K_n the n × n matrix with elements K(x_i, x_j) and by k_n(x) the n-dimensional vector with components K(x, x_i). Although x is not bold, it may correspond to a point in a (compact) set X ⊂ R^d. Assume that Y(x) corresponds to the realisation of a random field Z_x, indexed by x in X, with zero mean E{Z_x} = 0 for all x and covariance E{Z_x Z_{x'}} = K(x, x') for all (x, x') ∈ X^2. Our prediction of a future observation Y(x) based on the observations Y_n = (Y(x_1), ..., Y(x_n)) corresponds to the best linear unbiased predictor (BLUP) η_n(x) = k_n^T(x) K_n^{-1} Y_n. The associated prediction error is e_n(x) = Y(x) − η_n(x), and we have E{e_n^2(x)} = K(x, x) − k_n^T(x) K_n^{-1} k_n(x). The index n will often be omitted when there is no ambiguity, and an index will in that case refer instead to model i, with i ∈ {0, 1}. We shall need to distinguish between the cases where the truth is model 0 or model 1, and following Stein (1999, p. 58) we denote by E_i the expectation computed with model i assumed to be true. We reserve the notation ρ_i^2(x) for the case where the expectation is computed with the true model; that is, ρ_i^2(x) = E_i{e_i^2(x)}. Hence we have ρ_0^2(x) = E_0{e_0^2(x)} ≤ E_0{e_1^2(x)}, with an obvious permutation of the indices 0 and 1 when assuming that model 1 is true to compute E_1{·}: if model 0 is correct, the prediction error is larger when we use model 1 for prediction than when we use the BLUP (i.e., model 0). Stein (1999, p. 58) shows that this relation remains valid more generally for models with linear trends. Also of interest is the assumed mean squared error (MSE) E_1{e_1^2(x)} when we use model 1 for assessing the prediction error (because we think it is correct) while the truth is model 0, and in particular the ratio E_1{e_1^2(x)}/E_0{e_1^2(x)}, which may be larger or smaller than one. Another important issue concerns the choice of the covariance parameters in K_0 and K_1. We write K_i(x, x') = σ_i^2 C(x, x'; θ_i), where the σ_i^2 define the variances, the θ_i may correspond to correlation lengths in a translation-invariant model and are thus scalar in the isotropic case, and C(x, x') defines a correlation.
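As a numerical illustration of this notation, a minimal sketch of the BLUP and its MSE (the exponential kernel and all names below are our own illustrative choices, not objects from the paper):

```python
import numpy as np

def kernel_exp(x, xp, alpha=1.0):
    # isotropic exponential kernel, used here purely for illustration
    return np.exp(-alpha * np.abs(x - xp))

def blup(x, X, Y, kernel=kernel_exp):
    """BLUP eta_n(x) = k_n(x)^T K_n^{-1} Y_n and its MSE
    rho_n^2(x) = K(x, x) - k_n(x)^T K_n^{-1} k_n(x)."""
    Kn = kernel(X[:, None], X[None, :])   # n x n kernel matrix K_n
    kn = kernel(X, x)                     # vector with components K(x, x_i)
    w = np.linalg.solve(Kn, kn)
    return w @ Y, kernel(x, x) - kn @ w
```

At a design point the BLUP interpolates the observation and the MSE vanishes, which is a convenient sanity check for any implementation.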

Prediction-based discrimination
For the incremental construction of a design for model discrimination, points are added conditionally on previous design points. We can distinguish the case where the observations associated with those previous points are available and can thus be used to construct a sequence of predictions (sequential, i.e., conditional, construction) from the unconditional case where observations are not used.

Sequential (conditional) design
Consider stage n, where n design points X_n and the n observations Y_n are available. Assuming that the random field is Gaussian, when model i is true we have Y(x) ~ N(η_{n,i}(x), ρ_{n,i}^2(x)). A rather natural choice is to place the next design point x_{n+1} where the symmetric Kullback-Leibler divergence between those two normal distributions is largest; that is, x_{n+1} ∈ Arg max_{x∈X} D_SKL(N(η_{n,0}(x), ρ_{n,0}^2(x)), N(η_{n,1}(x), ρ_{n,1}^2(x))) (4). Other variants could be considered as well.
They will not be considered in the rest of the paper. If necessary, one can use plug-in estimates of σ_i^2 and θ_i, for instance maximum likelihood (ML) or leave-one-out estimates based on X_n and Y_n, when choosing x_{n+1}. Note that the value of σ^2 does not affect the BLUP η_n(x) = k_n^T(x) K_n^{-1} Y_n. In this paper we do not address the issues related to the estimation of σ^2 or of the correlation length or smoothness parameters of the kernel; one may refer to Karvonen et al. (2020) and the recent papers Karvonen (2022) and Karvonen and Oates (2022) for a detailed investigation. The connection between the notion of microergodicity, related to the consistency of the maximum-likelihood estimator, and discrimination through a KL divergence criterion is nevertheless considered in Example 1 below.
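The sequential rule above can be sketched as follows; the symmetric KL divergence between the two scalar predictive normals is evaluated in closed form (the helper names and the exponential test kernels are our illustrative assumptions, not the paper's code):

```python
import numpy as np

def predictive(x, X, Y, kernel):
    """Predictive mean and variance of Y(x) under one model: BLUP and its MSE."""
    Kn = kernel(X[:, None], X[None, :])
    kn = kernel(X, x)
    w = np.linalg.solve(Kn, kn)
    return w @ Y, kernel(x, x) - kn @ w

def next_point_skl(candidates, X, Y, k0, k1):
    """Pick the candidate maximising the symmetrised KL divergence (sum of the
    two directions) between the two predictive normal distributions."""
    def skl(x):
        m0, v0 = predictive(x, X, Y, k0)
        m1, v1 = predictive(x, X, Y, k1)
        d2 = (m0 - m1) ** 2
        return 0.5 * ((v0 + d2) / v1 + (v1 + d2) / v0) - 1.0
    return max(candidates, key=skl)

# illustrative exponential kernels (our choice, not the paper's)
def k_exp(alpha):
    return lambda x, xp: np.exp(-alpha * np.abs(x - xp))
```

In practice the candidates would be a fine grid over X, and the kernel parameters would be replaced by their current plug-in estimates at each step.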

Incremental (unconditional) design
Consider stage n, where n design points X_n are available. We base the choice of the next point on the difference between the MSEs for both models, assuming that one or the other is true. For instance, assuming that model 0 is true, the difference between the MSEs at x is E_0{e_{n,1}^2(x)} − ρ_{n,0}^2(x); symmetrising over the two models gives φ_A(x) = E_0{e_{n,1}^2(x)} − ρ_{n,0}^2(x) + E_1{e_{n,0}^2(x)} − ρ_{n,1}^2(x) (5). A normalisation seems in order here too, such as φ_B(x) = E_0{e_{n,1}^2(x)}/ρ_{n,0}^2(x) + E_1{e_{n,0}^2(x)}/ρ_{n,1}^2(x) − 2 (6). A third criterion is based on the variation of the symmetric Kullback-Leibler divergence (10) of Sect. 4 when adding an (n + 1)-th point x to X_n. Direct calculation, using the expressions of e_{n,i}(x) and ρ_{n,i}^2(x), i = 0, 1, and the expression of the inverse of a block matrix, gives φ_KL(x) = (1/2) [E_0{e_{n,1}^2(x)}/ρ_{n,1}^2(x) + E_1{e_{n,0}^2(x)}/ρ_{n,0}^2(x) − 2] (7), to be maximised with respect to x ∈ X. Although the σ_i^2 do not affect predictions, E_i{e_j^2(x)} is proportional to σ_i^2. Unless specific information is available, it seems reasonable to assume that σ_0^2 = σ_1^2 = 1. The other parameters θ_i should be chosen to make the two kernels the most similar, which seems easier to consider in the approach presented in Sect. 4, see (11). In the rest of this section we suppose that the parameters of both kernels are fixed.
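The cross-model MSE E_i{e_j^2(x)} that enters these criteria can be computed directly from the two kernels: the BLUP weights are built with the kernel of model j, and the resulting error variance is evaluated under the kernel of model i. A sketch (our own helper, with illustrative exponential kernels in the test):

```python
import numpy as np

def mse_cross(x, X, k_true, k_model):
    """E_true{e_model^2(x)}: MSE at x of the BLUP built with kernel k_model,
    evaluated under the distribution induced by kernel k_true."""
    K_m = k_model(X[:, None], X[None, :])
    w = np.linalg.solve(K_m, k_model(X, x))      # BLUP weights of the fitted model
    K_t = k_true(X[:, None], X[None, :])
    k_t = k_true(X, x)
    # Var_true{Y(x) - w^T Y_n} for zero-mean fields:
    return k_true(x, x) - 2.0 * w @ k_t + w @ K_t @ w
```

With k_true = k_model this reduces to the usual kriging variance ρ^2(x), and using the wrong model can only increase the MSE, in line with the inequality recalled in Sect. 2.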
The un-normalised version φ_A(x) given by (5) could be used to derive a one-step (non-incremental) criterion, in the same spirit as those of Sect. 4, through integration with respect to x for a given measure μ on X. The integrated criterion can be expressed in terms of matrices A_i(μ) and A_{0,1}(μ), which can be calculated explicitly for some kernels and measures μ. This happens in particular when X = [0, 1]^d, the two kernels K_i are separable, i.e., products of one-dimensional kernels on [0, 1], and μ is uniform on X.

Example 1: exponential covariance, no microergodic parameters
We consider Example 6 in Stein (1999, p. 74) and take K_i(x, x') = (1/α_i) exp(−α_i |x − x'|), i = 0, 1, with α_0 ≠ α_1. The example focuses on two difficulties: first, the two kernels only differ by their parameter values; second, the particular relation between the variance and the correlation length makes the parameters α_i non-microergodic, so that they cannot be estimated consistently from observations on a bounded interval; see Stein (1999, Chap. 6). It is interesting to investigate the behaviour of the criteria (5), (6) and (7) in this particular situation.
We suppose that n observations are made at x_i = (i − 1)/(n − 1), i = 1, ..., n ≥ 2. We denote by δ = δ_n = 1/[2(n − 1)] the half-distance between two consecutive design points. The particular Markovian property of random processes with kernels K_i simplifies the analysis: the prediction and MSE at a given x ∈ (0, 1) only depend on the position of x relative to its two closest neighbouring design points, and all other points have no influence. Therefore, due to the regular repartition of the x_i, we only need to consider the behaviour in one (any) of the inter-point intervals I_i = [x_i, x_{i+1}], with centre C_i; the maximum is then attained at C_i for one of the n − 1 intervals. Similar results apply to the case where the design X_n contains the endpoints 0 and 1 and its covering radius CR(X_n) = max_{x∈[0,1]} min_{i=1,...,n} |x − x_i| tends to zero, the points x_i being not necessarily equally spaced: C_i is then the centre of the largest interval [x_i, x_{i+1}] and δ_n = CR(X_n). When δ_n is large compared to the correlation lengths 1/α_0 and 1/α_1, there exist two maxima, symmetric with respect to C_i, that get closer to the extremities of I_i as α_1 increases, and C_i then corresponds to a local minimum of φ_A(·). This happens for instance when α_0 δ_n = 1 and α_1 δ_n ≈ 2.600455.
A similar behaviour is observed for φ_B(x) and φ_KL(x): for small enough δ_n they both have a unique maximum in I_i, located at its centre C_i. For large values of δ_n compared to the correlation lengths 1/α_0 and 1/α_1, there exist two maxima in I_i, symmetric with respect to C_i. When α_0 δ_n = 1, this happens for instance when α_1 δ_n ≈ 2.020178 for φ_B(·) and when α_1 δ_n ≈ 7.251623 for φ_KL(·). However, in the second case the function is practically flat between the two maxima.
The left panel of Fig. 1 shows the behaviour of φ_KL(C_i) as a function of δ_n. This behaviour of φ_KL(C_i) for small δ_n sheds light on the fact that α is not estimable in this model. Indeed, consider a sequence of embedded n_k-point designs X_{n_k}, initialised with the design X_n = X_{n_0} considered above and with n_k = 2^k (n_0 − 1) + 1, all these designs being regular grids of the form x_i = (i − 1)/(n_k − 1), i = 1, ..., n_k. For k large enough, the increase in Kullback-Leibler divergence (10) from X_{n_k} to X_{n_{k+1}} is bounded by c/(n_k − 1)^2 for some c > 0, so that the expected log-likelihood ratio remains bounded along the sequence. A similar argument applies to arbitrary designs. Indeed, consider the following iterative modification of X_n, which cannot decrease min_{i=3,...,n} |x_{i−2} − x_i|: first move x_1 to zero, then move x_2 to x_1; leave x_3 unchanged, but move x_4 to x_3, etc. For n even, the design obtained is the duplication of an (n/2)-point design; for n odd, only the right-most point x_n remains single. In the first case, the minimum distance between distinct points is at most 1/(n/2 − 1); in the second case it is at most 1/(⌈n/2⌉ − 1). We then define X_{n−1} = X_n \ {x_{i*−1}}. For n large enough, the increase in Kullback-Leibler divergence (10) from X_{n−1} to X_n is thus bounded by c/(⌈n/2⌉ − 1)^3 for some c > 0 depending on α_0 and α_1. Starting from some design X_{n_0}, we thus have, for n_0 large enough, that E_1{L_n} remains bounded (when we assume that model 1 is correct), implying in particular that L_n does not tend to infinity a.s. and that the ML estimator of α is not strongly consistent.

Example 2: exponential covariance, microergodic parameters
Consider now two exponential covariance models with identical variances (which we take equal to one without any loss of generality): K_i(x, x') = exp(−α_i |x − x'|), i = 0, 1, with α_0 ≠ α_1. The criterion φ_A(·) again has a unique maximum at C_i for small enough δ_n, with now a different limiting value. There are two maxima for φ_A(·) in I_i, symmetric with respect to C_i, for large δ_n: when α_0 δ_n = 1, this happens for instance when α_1 δ_n ≈ 2.558545. Nothing is changed for φ_B(·) compared to Example 1, as the variances cancel in the ratios that define φ_B(·), see (3) and (6). The situation is quite different for φ_KL(·), indicating that it is indeed possible to distinguish between the two models much more efficiently with this criterion than with the two others. Interestingly enough, the best choice for the next design point is not at C_i but always as close as possible to one of the endpoints a_i or b_i of I_i, with however a criterion value similar to that at the centre C_i when δ_n is small enough. Here, the same sequence of embedded designs as in Example 1 now yields an unbounded increase of the divergence (10), consistent with the microergodicity of the parameters.

Example 3: Matérn kernels
Take K_0 and K_1 as the Matérn 3/2 and 5/2 kernels, respectively: K_{0,θ}(x, x') = (1 + √3 θ |x − x'|) exp(−√3 θ |x − x'|) (8) and K_{1,θ}(x, x') = (1 + √5 θ |x − x'| + (5/3) θ^2 |x − x'|^2) exp(−√5 θ |x − x'|) (9). We take θ = θ_0 = 1 in K_{0,θ} and adjust θ = θ_1 in K_{1,θ} to minimise φ_2[K_{0,θ_0}, K_{1,θ_1}](μ) defined by Eq. (13) in Sect. 4, with μ the uniform measure on [0, 1], which gives θ_1 ≈ 1.1275. The left panel of Fig. 3 shows K_{0,θ_0=1}(x, 0) and K_{1,θ}(x, 0) for θ = 1 and θ = θ_1 when x ∈ [0, 1]. The right panel presents φ_B(x) and φ_KL(x) for the same 11-point equally spaced design X_n as in Example 1 and x ∈ [0, 1], for K_{0,1} and K_{1,1.1275} (the value of φ_A(x) does not exceed 0.65 × 10^{-4} and is not shown). The behaviours of φ_B(x) and φ_KL(x) now differ between the intervals [x_i, x_{i+1}] (they remain symmetric with respect to 1/2, however); the maximum of φ_KL(x) is attained at the central design point. The behaviour of φ_KL(·) can be related to the fact that discriminating between K_0 and K_1 amounts to estimating the smoothness of the realisation, which requires some design points to be close to each other.
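Assuming the standard Matérn 3/2 and 5/2 parametrisation with inverse correlation length θ (an assumption on our part for kernels (8) and (9)), the two rival kernels can be sketched as:

```python
import numpy as np

def matern32(x, xp, theta=1.0):
    # Matern 3/2 kernel, standard parametrisation, theta = inverse correlation length
    t = np.sqrt(3.0) * theta * np.abs(x - xp)
    return (1.0 + t) * np.exp(-t)

def matern52(x, xp, theta=1.0):
    # Matern 5/2 kernel: smoother sample paths (twice differentiable in m.s.)
    t = np.sqrt(5.0) * theta * np.abs(x - xp)
    return (1.0 + t + t ** 2 / 3.0) * np.exp(-t)
```

Both kernels equal 1 at zero distance and decay monotonically; their difference is small everywhere, which is precisely what makes the discrimination problem hard.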

Distance-based discrimination
We will now consider criteria directly based on discrepancies between the covariance kernels. Ideally those should be simpler to compute while still exhibiting reasonable efficiencies and similar properties. The starting point is again the use of the log-likelihood ratio criterion to choose among the two models. Assuming that the random field is Gaussian, the probability densities of the observations Y_n for the two models are ϕ_{n,i}(Y_n) = (2π)^{-n/2} det^{-1/2}(K_{n,i}) exp(−Y_n^T K_{n,i}^{-1} Y_n / 2), i = 0, 1. The expected value of the log-likelihood ratio L_n = log ϕ_{n,0}(Y_n) − log ϕ_{n,1}(Y_n) is E_0{L_n} = D_KL(ϕ_{n,0} ‖ ϕ_{n,1}) when model 0 is true, and similarly E_1{L_n} = −D_KL(ϕ_{n,1} ‖ ϕ_{n,0}). A good discriminating design should make the difference E_0{L_n} − E_1{L_n} as large as possible; that is, we should choose X_n that maximises Δ_KL[K_0, K_1](X_n) = (1/2) [tr(K_{n,1}^{-1} K_{n,0}) + tr(K_{n,0}^{-1} K_{n,1})] − n (10), i.e. twice the symmetric Kullback-Leibler divergence between the normal distributions with densities ϕ_{n,0} and ϕ_{n,1}, see, e.g., Pronzato et al. (2019). We may enforce the normalisation σ_0^2 = σ_1^2 = 1 and choose the θ_i to make the two kernels most similar in the sense of the criterion considered; that is, maximise min_{θ_0 ∈ Θ_0, θ_1 ∈ Θ_1} Δ_KL[K_{0,θ_0}, K_{1,θ_1}](X_n) (11). The choice of Θ_0 and Θ_1 is important; in particular, unconstrained minimisation over the θ_i could make both kernels completely flat or, on the opposite, close to Dirac distributions. It may thus be preferable to fix θ_0 and minimise over θ_1 without constraints. Also, the Kullback-Leibler distance is sensitive to kernel matrices being close to singularity, which may happen when design points are very close to each other. Pronzato et al. (2019) suggest a family of criteria based on matrix distances derived from Bregman divergences between functions of covariance matrices from Kiefer's ϕ_p-class of functions (Kiefer 1974). For p ∈ (0, 1), these criteria are rather insensitive to eigenvalues close or equal to zero. Alternatively, they suggest criteria computed as Bregman divergences between squared volumes of random k-dimensional simplices for k ∈ {2, ..., d − 1}, which have similar properties. The index n is omitted in the following and we consider fixed parameters for both kernels.
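This criterion admits a direct implementation from the closed-form KL divergence between N(0, K_{n,0}) and N(0, K_{n,1}), in which the log-determinant terms cancel when the two directions are added (the function name below is ours):

```python
import numpy as np

def delta_kl(K0, K1):
    """Sum of the two KL divergences between N(0, K0) and N(0, K1):
    0.5 * [tr(K1^{-1} K0) + tr(K0^{-1} K1) - 2n]."""
    n = K0.shape[0]
    return 0.5 * (np.trace(np.linalg.solve(K1, K0))
                  + np.trace(np.linalg.solve(K0, K1)) - 2 * n)
```

The value is zero if and only if the two covariance matrices coincide, is symmetric in the two models, and grows without bound as the matrices become more dissimilar; its sensitivity to near-singular matrices is visible in the explicit matrix inversions.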
The Fréchet-distance criterion

The Fréchet distance between the covariance matrices of the two models, Δ_F[K_0, K_1](X_n) = tr[K_{n,0} + K_{n,1} − 2 (K_{n,0}^{1/2} K_{n,1} K_{n,0}^{1/2})^{1/2}] (12), related to the Kantorovich (Wasserstein) distance between the corresponding normal distributions, seems of particular interest due to the absence of matrix inversion. The expression is puzzling at first sight since the two matrices do not necessarily commute, but the paper of Dowson and Landau (1982) is illuminating.
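The Fréchet distance is cheap to evaluate with an eigendecomposition-based matrix square root; a sketch (helper names are ours):

```python
import numpy as np

def psd_sqrt(A):
    """Symmetric square root of a positive semi-definite matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def frechet_dist2(K0, K1):
    """Squared Frechet distance between covariance matrices:
    tr(K0 + K1 - 2 (K0^{1/2} K1 K0^{1/2})^{1/2})."""
    s0 = psd_sqrt(K0)
    return np.trace(K0) + np.trace(K1) - 2.0 * np.trace(psd_sqrt(s0 @ K1 @ s0))
```

For commuting matrices the expression reduces to the sum of squared differences of the matrix square roots' eigenvalues, which gives an easy correctness check.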
Other matrix "entry-wise" distances will be considered, in particular the one based on the (squared) Frobenius norm, which corresponds to substituting K_i^2 for K_i in (12) for i = 0, 1. Denote more generally Δ_p[K_0, K_1](X_n) = (1/n^2) 1_n^T |K_{n,1} − K_{n,0}|^p 1_n, p > 0, where 1_n is the n-dimensional vector with all components equal to 1, the absolute value is applied entry-wise and the power p is applied entry-wise. Figure 4 shows the values of the criteria Δ_i[K_{0,1}, K_{1,θ}], i = 1, 2, Δ_F[K_{0,1}, K_{1,θ}] and Δ_KL[K_{0,1}, K_{1,θ}] as functions of θ for the two kernels K_{0,θ} and K_{1,θ} given by (8) and (9) and the same regular design as in Example 1: x_i = (i − 1)/(n − 1), i = 1, ..., 11. The criteria are re-scaled so that their maximum equals one on the interval considered for θ. Note the similarity between Δ_2[K_{0,1}, K_{1,θ}](X_n) and Δ_F[K_{0,1}, K_{1,θ}](X_n) and the closeness of the distance-minimising θ for Δ_1, Δ_2 and Δ_F. Also note the good agreement with the value θ_1 ≈ 1.1275 that minimises φ_2[K_{0,1}, K_{1,θ_1}](μ) from Eq. (13), see Example 3. The optimal θ for Δ_KL[K_{0,1}, K_{1,θ}](X_n) is quite different, however, showing that the criteria do not necessarily agree with one another.
An interesting feature of the family of criteria Δ_p[K_0, K_1](·), p > 0, is that it extends straightforwardly to a design measure version. Indeed, defining ξ_n as the empirical measure on the points of X_n, ξ_n = (1/n) Σ_{i=1}^n δ_{x_i}, we can write Δ_p[K_0, K_1](X_n) = φ_p(ξ_n), where we define, for any design (probability) measure ξ on X, φ_p(ξ) = ∫_X ∫_X |K_1(x, x') − K_0(x, x')|^p ξ(dx) ξ(dx') (13). Direct calculation gives the directional derivative F_p[K_0, K_1](ξ; ν) = 2 [∫_X ∫_X |K_1(x, x') − K_0(x, x')|^p ν(dx) ξ(dx') − φ_p(ξ)], and thus in particular F_p[K_0, K_1](ξ; δ_x) = 2 [∫_X |K_1(x, x') − K_0(x, x')|^p ξ(dx') − φ_p(ξ)]. One can easily check that the criterion is neither concave nor convex in general (as the matrix |K_1 − K_0|^p can have both positive and negative eigenvalues), but we nevertheless have a necessary condition for optimality.
Theorem 1 If the design measure ξ* maximises φ_p, then ∫_X |K_1(x, x') − K_0(x, x')|^p ξ*(dx') ≤ φ_p(ξ*) for all x ∈ X, and the inequality necessarily becomes an equality on the support of ξ*.
This suggests the following simple incremental construction: at iteration n, with X_n the current design and ξ_n the associated empirical measure, choose x_{n+1} ∈ Arg max_{x∈X} F_p[K_0, K_1](ξ_n; δ_x) = Arg max_{x∈X} (1/n) 1_n^T |k_{n,0}(x) − k_{n,1}(x)|^p. It will be used in the numerical example of Sect. 6.2.
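This incremental rule is straightforward to implement; a sketch (our own naming; the exponential kernels in the test are illustrative):

```python
import numpy as np

def greedy_phi_p(candidates, X0, k0, k1, n_add, p=2.0):
    """Greedy incremental construction: repeatedly add the candidate maximising
    (1/n) * sum_i |k1(x_i, x) - k0(x_i, x)|^p over the current design points."""
    X = list(X0)
    for _ in range(n_add):
        def score(x):
            d = np.array([abs(k1(xi, x) - k0(xi, x)) for xi in X])
            return np.mean(d ** p)
        X.append(max(candidates, key=score))
    return X
```

Note that without further precautions a candidate can be selected more than once, which is consistent with the design measure point of view (repeated points carry more weight).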

Optimal design measures
In this section we explain why the determination of optimal design measures maximising φ p (ξ ) is generally difficult, even when limiting ourselves to the satisfaction of the necessary condition in Theorem 1. At the same time, we can characterise measures that are approximately optimal for large p.

A simplified problem with an explicit optimal solution
Consider the extreme case where ψ = ψ* defined by ψ*(t) = 1 if t = Δ and ψ*(t) = 0 otherwise (15), with Δ the distance at which the absolute difference between the two (isotropic) kernels is maximal. Note that ψ*^p(t) = ψ*(t) for any p > 0; we can thus restrict our attention to p = 1 for the maximisation of φ_p(ξ) defined by (13); that is, we consider φ_1(ξ) = ∫_X ∫_X ψ*(‖x − x'‖) ξ(dx) ξ(dx'). Theorem 2 When ψ = ψ* and X ⊂ R^d is large enough to contain a regular d-simplex with edge length Δ, any measure ξ* allocating weight 1/(d + 1) to each vertex of such a simplex maximises φ_1(ξ), and φ_1(ξ*) = d/(d + 1).
Proof Since φ 1 (ξ ) = 0 when ξ is continuous with respect to the Lebesgue measure on X , we can restrict our attention to measures without any continuous component.
Consider the graph G(ξ) having the x_i as vertices, with an edge (i, j) connecting x_i and x_j if and only if ‖x_i − x_j‖ = Δ. We have φ_1(ξ) = Σ_{i≠j} w_i w_j 1{(i, j) is an edge of G(ξ)}, and Theorem 1 of Motzkin and Straus (1965) implies that φ_1(ξ) is maximal when ξ is uniform on a maximal complete subgraph of G(ξ). The maximal achievable order is d + 1, obtained when the x_i are the vertices of a regular simplex in X with edge length Δ. Motzkin and Straus (1965) also indicate in their Theorem 1 that φ_1(ξ*) = 1 − 1/(d + 1). This is easily recovered knowing that G(ξ*) is fully connected with order d + 1: indeed, we then have φ_1(ξ) = Σ_{i≠j} w_i w_j = 1 − Σ_i w_i^2, which is maximal when all w_i equal 1/(d + 1).
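The value φ_1(ξ*) = d/(d + 1) is easy to check numerically; the sketch below (our own helper) evaluates φ_1 for ψ = ψ* on a finite support:

```python
import numpy as np

def phi1_star(points, weights, delta, tol=1e-9):
    """phi_1(xi) for psi = psi*: the sum of w_i * w_j over ordered pairs of
    support points lying exactly (up to tol) at distance delta from each other."""
    pts = np.asarray(points, dtype=float)
    w = np.asarray(weights, dtype=float)
    val = 0.0
    for i in range(len(w)):
        for j in range(len(w)):
            if i != j and abs(np.linalg.norm(pts[i] - pts[j]) - delta) < tol:
                val += w[i] * w[j]
    return val
```

For d = 2, the uniform measure on an equilateral triangle with edge length Δ attains 2/3, and any unbalanced weighting on the same support gives a strictly smaller value, as predicted by the Motzkin-Straus argument.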
When p → ∞, ψ^p(·) approaches the (discontinuous) function ψ*(·), suggesting that ξ* may become close to optimal for φ_p when p is large enough. However, when X is large, ξ* is never truly optimal, no matter how large p is. Indeed, suppose that X contains the point x* symmetric to a vertex x_k of the simplex supporting ξ* with respect to the opposite face of that simplex. Since ‖x* − x_i‖ = Δ for all i ≠ k, direct calculation gives ∫_X ψ^p(‖x* − x'‖) ξ*(dx') = φ_p(ξ*) + ψ^p(‖x* − x_k‖)/(d + 1) > φ_p(ξ*), so that the necessary condition of Theorem 1 is violated at x*. The right panel of Fig. 6 shows an illustration for K_0 and K_1 being the Matérn 3/2 and Matérn 5/2 kernels K_{0,1} and K_{1,1}, respectively. The measure ξ* is supported on the vertices of the equilateral triangle (0, 0), (Δ, 0), (Δ/2, √3 Δ/2), with now Δ ≈ 0.7; the point x*, symmetric to x_k, is indicated in red on the figure. Another, more direct, proof of the non-optimality of ξ* is to consider the measure ξ' that sets weights 1/(d + 1) at all x_i with i ≠ k and weights 1/[2(d + 1)] at x_k and at its symmetric point x*. Direct calculation of φ_p(ξ') involves two terms. The first term comes from the d vertices x_i, i ≠ k, each one having weight 1/(d + 1) and being at distance Δ from all other support points, those having total weight 1 − 1/(d + 1). The second term comes from the two symmetric points x_k and x*, each with weight 1/[2(d + 1)]: each of these two points is at distance Δ from the d vertices with weights 1/(d + 1) and at distance L = ‖x* − x_k‖ from the opposite point, with weight 1/[2(d + 1)]. We get after simplification φ_p(ξ') = d/(d + 1) + ψ^p(‖x* − x_k‖)/[2(d + 1)^2] > φ_p(ξ*) (16), showing that ξ* is not optimal. Note that, for symmetry reasons, the design ξ' is not optimal either for large enough X. The determination of a truly optimal design seems very difficult. In the simplified problem of Sect. 5.1, where the criterion is based on the function ψ* defined by (15), the measures ξ* and ξ' supported on d + 1 and d + 2 points, respectively, have the same criterion value. Although ξ* is not optimal, since ψ(‖x* − x_k‖) < 1 (as ψ(t) takes its maximum value 1 for t = Δ), (16) suggests that ξ* may be only marginally suboptimal when p is large enough. Moreover, as the right panel of Fig. 6 illustrates, a design ξ* supported on a regular simplex is optimal provided that X is small enough and p is large enough to make δ_ξ*(x) concave at each x_i (for symmetry reasons, we only need to check concavity at one vertex). In fact, p > 2 is sufficient: assuming that p > 2 and that ψ(·) is twice differentiable everywhere, with second-order derivative ψ''(·), except possibly at zero, direct calculation shows that the Hessian of δ_ξ*(·) at each x_i is negative-definite (since ψ''(Δ) < 0, ψ(·) being maximal at Δ). The right panel of Fig. 6 gives an illustration; note that p < 2 on the left panel, where the x_i correspond to local minima of δ_ξ*(·). Figure 7 shows a plot of δ_ξ*(x) for p = 2 and K_0 and K_1 being the Matérn 3/2 and Matérn 5/2 kernels K_{0,1} and K_{1,1.07}, respectively, suggesting that the form of optimal designs may in general be quite complicated.

Exact designs
In this section, we consider numerical evaluations of designs resulting from the prediction-based and distance-based criteria. The rival models are the isotropic versions of the covariance kernels used in Example 3 (Sect. 3.2), on the design space X = [0, 10]^2, discretised into 25 equally spaced points in each dimension. To agree on the setting of the correlation lengths in both kernels, we applied a minimisation procedure: we took θ = θ_0 = 1 in K_{0,θ}(x, x') and adjusted the parameter of the second kernel by minimising each of the distance-based criteria for the design X_625 corresponding to the full grid. This resulted in θ_1 = 1.0047, 1.0285, 1.0955 and 1.3403, respectively, for Δ_F, Δ_1, Δ_2 and Δ_KL. We finally chose θ_1 = 1.07, which is compatible with the above values. The left panel of Fig. 8 shows the two Matérn covariance functions at the assumed parameter values, illustrating the similarity of the kernels we aim to discriminate. The right panel shows the absolute difference between the covariance kernels; the red line corresponds to the distance at which this absolute difference is maximal, denoted by Δ, which equals Δ = 1.92 in this case.
The sequential approach is the only case where the observations Y_n corresponding to the previous design points X_n are used in the design construction. In our example, we simulate the data according to the assumed model and use this information to estimate the parameters at each step. Boxplots of the maximum likelihood (ML) estimates of the inverse correlation lengths θ_0 and θ_1 of K_{0,θ}(x, x') and K_{1,θ}(x, x'), respectively, are presented in Fig. 9, for the case where the first kernel, Matérn 3/2, is the data generator. The estimates of θ_0 converge to the nominal value θ_0 = 1, drawn as a red dashed line in the left panel of Fig. 9, as expected from the consistency of the ML estimator in this case. For the second kernel to resemble the first (i.e., less smooth) one, the estimates of θ_1 increase (see the right panel): the decrease of the correlation length makes the covariance kernel drop faster as a function of distance. We refrain from presenting the opposite case (where the Matérn 5/2 is the data generator), which is similar.
Apart from the methods applied in Sect. 4, we considered some other static approaches to discrimination. D_s-optimal design is a natural candidate that can be applied in the distance-based fashion. For D_s-optimality, we require the general form of the Matérn covariance kernel, based on the modified Bessel function of the second kind (denoted by C_ν): K_{θ,ν}(x, x') = σ^2 (2^{1−ν}/Γ(ν)) (√(2ν) θ ‖x − x'‖)^ν C_ν(√(2ν) θ ‖x − x'‖). The smoothness ν is considered as the parameter of interest, while the correlation length θ is treated as a nuisance parameter. The first off-diagonal element of the 2 × 2 information matrix associated with the estimation of the parameters θ = (θ, ν) is M(X_n, θ)_{12} = (1/2) tr[K_n^{-1} (∂K_n/∂θ) K_n^{-1} (∂K_n/∂ν)]; see, e.g., Eq. (6.19) in Müller (2007). The other elements of the information matrix are calculated similarly. We used the supplementary material of Lee et al. (2018) to compute the partial derivatives of the Matérn covariance kernel. Finally, the D_s-criterion is Φ_{D_s}(X_n) = det M(X_n, θ)/M(X_n, θ)_{11}, where M(X_n, θ)_{11} is the element of the information matrix corresponding to the nuisance parameter (i.e., in M(X_n, θ)_{11} both partial derivatives are calculated with respect to θ).
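A sketch of the information matrix and of the D_s-criterion; we use central finite differences for the kernel derivatives and a two-parameter powered-exponential kernel as a stand-in for the Matérn family (both are our simplifications, not the paper's implementation, which uses analytic derivatives):

```python
import numpy as np

def powexp(X, theta):
    # powered-exponential kernel exp(-theta[0] * |x - x'|**theta[1]); used here
    # only as a smooth two-parameter stand-in for the Matern family
    D = np.abs(X[:, None] - X[None, :])
    return np.exp(-theta[0] * D ** theta[1])

def fisher_info(X, kernel, theta, eps=1e-5):
    """Fisher information for the covariance parameters of a zero-mean Gaussian
    field: M_ij = 0.5 * tr(K^{-1} dK/dtheta_i K^{-1} dK/dtheta_j); the kernel
    derivatives are approximated by central finite differences."""
    theta = np.asarray(theta, dtype=float)
    K = kernel(X, theta)
    Kinv = np.linalg.inv(K)
    q = len(theta)
    dK = []
    for i in range(q):
        e = np.zeros(q)
        e[i] = eps
        dK.append((kernel(X, theta + e) - kernel(X, theta - e)) / (2 * eps))
    M = np.empty((q, q))
    for i in range(q):
        for j in range(q):
            M[i, j] = 0.5 * np.trace(Kinv @ dK[i] @ Kinv @ dK[j])
    return M

def ds_criterion(M):
    # D_s-criterion with the nuisance parameter in position 0:
    # det(M) / M_11, to be maximised over the design
    return np.linalg.det(M) / M[0, 0]
```

The finite-difference derivatives keep the sketch generic: any parametrised kernel builder with the same signature can be plugged in.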
In the examples that follow we consider locally D_s-optimal designs; that is, the parameters θ and ν are set to given values. From a Bayesian perspective, models can be discriminated optimally when the difference between the expected entropies of the prior and the posterior model probabilities is maximised. This criterion underlies the famous sequential procedure put forward by Box and Hill (1967) and Hill and Hunter (1969). Since such criteria typically cannot be computed analytically, several bounds have been derived. The upper bound proposed by Box and Hill (1967) is equivalent to the symmetric Kullback-Leibler divergence Δ_KL. Hoffmann (2017) derives a lower bound based on a lower bound for the Kullback-Leibler divergence between mixtures of two normals, which is given by Eq. (A3); here, we assume equal prior probabilities. A more detailed account of Bayesian design criteria and their bounds is given in Appendix A. Table 1 collects the simulation results for the given example. We include the sequential procedure (4) as a benchmark for orientation. For all other approaches the true parameter values are used in the covariance kernels. Concerning the static (distance-based) designs based on the maximisation of Δ_F, Δ_1, Δ_2, Δ_KL, the Bayesian lower bound and the D_s-criterion, for each design size considered we first built an incremental design and then used a classical exchange-type algorithm to improve it. These designs are thus not necessarily nested, i.e., we may have X_n ⊄ X_{n'} for n < n'.
Each design of size n was then evaluated by generating N = 100 independent sets of n observations under the assumed true model, evaluating the likelihood functions of both models for each set of observations, and deciding for each set which model attains the higher likelihood value. The hit rate is the fraction of observation sets for which the assumed true model has the higher likelihood value. The procedure was repeated with the other model assumed true. The two hit rates were then averaged and reported in Table 1, which contains the results for all the criteria and design sizes we considered (bold numbers indicate the highest average hit rate achieved for each design size). For the special case of the sequential construction (4), the design path depends on the observations generated at the previously selected design points; that is, unlike for the other criteria, each random run produces a different design for a given design size n. To compute the hit rates for a particular n we used N = 100 independent runs of the experiment. The hit rates reported in Table 1 reflect the discriminatory power of the corresponding designs. One can observe that Φ_F and, as expected, KL achieve the highest hit rates. The Bayesian lower bound criterion performs similarly to the symmetric KL. The sequential design strategy (4) does not perform as well as the best criteria. It is, however, the realistic scenario one might consider in applications, as it does not assume knowledge of the kernel parameters; the effect of this knowledge can thus be partially gauged by comparing the first line of the table against the other criteria.
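The hit-rate evaluation just described can be sketched as follows (zero-mean models; `kernel_true` and `kernel_alt` stand for the kernels of the assumed true and the competing model):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_lik(Y, K):
    """Zero-mean Gaussian log-likelihood of the observations Y under kernel matrix K."""
    _, logdet = np.linalg.slogdet(K)
    return -0.5 * (Y @ np.linalg.solve(K, Y) + logdet + len(Y) * np.log(2 * np.pi))

def hit_rate(X, kernel_true, kernel_alt, N=100):
    """Fraction of simulated data sets for which the assumed true model
    attains the higher likelihood at the design X."""
    D = np.abs(X[:, None] - X[None, :])
    K_true, K_alt = kernel_true(D), kernel_alt(D)
    hits = 0
    for _ in range(N):
        Y = rng.multivariate_normal(np.zeros(len(X)), K_true)
        hits += log_lik(Y, K_true) > log_lik(Y, K_alt)
    return hits / N

def average_hit_rate(X, k1, k2, N=100):
    """Average of the two hit rates, with each model assumed true in turn."""
    return 0.5 * (hit_rate(X, k1, k2, N) + hit_rate(X, k2, k1, N))
```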

Optimal design measure for φ_p
Theorem 1 also allows the use of approximate designs, as it provides a necessary condition for the optimality of the family of criteria φ_p, p > 0; this is discussed more extensively in the previous section. Here we present numerical results for the two specific cases p = 2 and p = 10. To reach a design that is numerically optimal (or at least nearly so), we have applied the Fedorov-Wynn algorithm (Fedorov 1971; Wynn 1970) on a dense regular grid of candidate points.
Numerical results show that for very small p (e.g., p = 1) explicit optimal measures are hard to derive. The left panel of Fig. 10 presents the measure ξ*_2 obtained for φ_2. To construct ξ*_2, we first calculated an optimal design on a dense grid by applying 1000 iterations of the Fedorov-Wynn algorithm (see the comment following Theorem 1); the design measure obtained is supported on 9 grid points. We then applied a continuous optimisation algorithm (the library NLopt (Johnson 2021) through its R interface nloptr) initialised at this 9-point design. The 9 support points of the resulting design measure ξ*_2 are independent of the grid size; they receive unequal weights, proportional to the disk areas in the left panel of Fig. 10. Any translation or rotation of ξ*_2 yields the same value of φ_2. As the order p increases, we eventually reach an optimal measure with only three support points and equal weights. The right panel of Fig. 10 corresponds to the optimal design measure computed for φ_10. As before, it results from a continuous optimisation initialised at an optimal 3-point design calculated with the Fedorov-Wynn algorithm on a grid. This optimal design measure ξ*_10 has three support points, drawn as blue dots, with equal weights 1/3 represented by the areas of the red disks. The blue line segments between every two support points have length 1.92, the ideal interpoint distance (see the right panel of Fig. 8), in agreement with the corresponding discussion in Sect. 5. Here too the optimal design is invariant under rotation and translation, so any design of this type is optimal as long as the design region is large enough to contain it.
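A bare-bones sketch of the vertex-direction (Fedorov-Wynn) iterations on a grid is given below; the energy-type criterion `delta` used here is only an illustrative stand-in for the paper's φ_p, not its exact definition:

```python
import numpy as np

def fedorov_wynn(grid, delta, iters=1000):
    """Vertex-direction iterations for maximising an energy-type criterion
    phi(w) = sum_ij w_i w_j delta(x_i, x_j) over design measures w on a grid."""
    m = len(grid)
    Dmat = np.array([[delta(x, y) for y in grid] for x in grid])
    w = np.full(m, 1.0 / m)          # start from the uniform measure
    for k in range(iters):
        g = Dmat @ w                 # directional derivatives (up to constants)
        j = np.argmax(g)             # most promising candidate point
        alpha = 1.0 / (k + 2)        # classical step-length sequence
        w = (1.0 - alpha) * w
        w[j] += alpha                # move mass towards the best candidate
    return w

# illustrative discrepancy: squared difference of two exponential kernels
delta = lambda x, y: (np.exp(-abs(x - y)) - np.exp(-2.0 * abs(x - y))) ** 2
grid = np.linspace(0.0, 4.0, 41)
w = fedorov_wynn(grid, delta, iters=500)
support = grid[w > 1e-3]             # approximate support of the design measure
```

In the paper, the grid solution found this way is then refined by continuous optimisation of the support-point locations.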

Conclusions
In this paper we have considered the design problem for discriminating between Gaussian process regression models. This problem differs considerably from the well-studied one for standard regression models and thus offers a multitude of challenges. While the KL-divergence is a natural criterion, it comes at the price of being computationally demanding and lacking convenient simplifications such as design measures. We have therefore introduced a family of criteria that allow such simplifications, at least in special cases, and have investigated its properties. We have also compared the performance of these and other potential criteria on several examples and find that the KL-divergence can be effectively replaced by simpler criteria without much loss in efficiency. In particular, designs based on the Fréchet distance between covariance kernels appear competitive. Results from the approximate design computations indicate that, for classical isotropic kernels and large enough p, designs with d + 1 support points placed at the vertices of a simplex of suitable size are optimal for the distance-based criteria φ_p when the design region is small enough, and are marginally suboptimal otherwise.
As a next step, it would be interesting to investigate the properties of the discrimination designs under parameter uncertainty, for example by considering minimax or Bayesian designs.
A referee has indicated that our techniques could be used for discriminating the intricately convoluted covariances stemming from deep Gaussian processes (as defined in Damianou and Lawrence (2013)) from more conventional ones. This is an interesting issue of high relevance for computer simulation experiments that certainly needs to be explored in the future.
where the data Y_k = (Y_1(x_1), . . . , Y_k(x_k))^⊤ are observed at the design X_k = (x_1, . . . , x_k), p(m_i) denotes the prior and p(m_i | Y_k) the posterior model probability of model m_i, and p(Y_k) is the marginal distribution of Y_k with respect to the models. Hence, this criterion is the (expected) difference between the model entropy and the conditional model entropy (conditional on the observations). The posterior model probability p(m_i | Y_k) is defined by
\[
p(m_i \mid Y_k) = \frac{p(Y_k \mid m_i)\, p(m_i)}{p(Y_k)},
\]
where p(Y_k | m_i) is the likelihood of model m_i (marginalised over the parameters), and p(Y_k) is given by
\[
p(Y_k) = \sum_{i} p(Y_k \mid m_i)\, p(m_i).
\]
The first term in (A1) does not depend on the design and can therefore be ignored.
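The two definitions above translate directly into code; a small sketch for zero-mean models with point priors, so that the marginal likelihoods are multivariate normal densities:

```python
import numpy as np
from scipy.stats import multivariate_normal

def posterior_model_probs(Y, K_list, priors):
    """Posterior model probabilities p(m_i | Y_k) via Bayes' rule,
    for zero-mean models with kernel matrices in K_list."""
    lik = np.array([multivariate_normal.pdf(Y, mean=np.zeros(len(Y)), cov=K)
                    for K in K_list])
    joint = lik * np.asarray(priors)     # p(Y_k | m_i) p(m_i)
    return joint / joint.sum()           # normalise by the marginal p(Y_k)
```

As a sanity check, when the two kernels coincide the posterior probabilities simply reproduce the priors.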
A common alternative formulation of criterion (A1) is the one adopted by Box and Hill (1967) and Hill and Hunter (1969), which will henceforth be called the Box-Hill-Hunter (BHH) criterion:
\[
\sum_{i} p(m_i) \int p(Y_k \mid m_i) \log \frac{p(Y_k \mid m_i)}{p(Y_k)} \, \mathrm{d}Y_k. \tag{A2}
\]
In our case, if we assume point priors for the kernel parameters, we have
\[
p(Y_k \mid m_i) = \varphi(Y_k \mid \eta_{k,i}, K_{k,i}),
\]
where η_{k,i} = (η_{1,i}(x_1), . . . , η_{k,i}(x_k))^⊤ is the mean vector of model i at the design X_k, K_{k,i} is the k × k kernel matrix of model i with elements given by {K_{k,i}}_{j,l} = K_i(x_j, x_l), and ϕ(· | η, K) is the normal pdf with mean vector η and variance-covariance matrix K.
For example, for a static design involving n design points, we set k = n and assume that η_{n,i} = 0 for each design X_n. The model probabilities p(m_i) are then simply the prior model probabilities before any observations have been collected.
In a sequential design setting, where n observations Y_n have already been collected at locations X_n and we want to find the optimal design point x at which to collect the next observation, we have k = 1 and set η_{k,i} to the conditional mean η_{n,i}(x) = k_{n,i}^⊤(x) K_{n,i}^{-1} Y_n and K_{k,i} to the conditional variance ρ²_{n,i}(x) = K_i(x, x) − k_{n,i}^⊤(x) K_{n,i}^{-1} k_{n,i}(x), where k_{n,i}(x) = (K_i(x, x_1), . . . , K_i(x, x_n))^⊤; see Sect. 3.1. The prior model probabilities then have to be set to the posterior model probabilities given the already observed data: p(m_i) = p(m_i | Y_n) ∝ ϕ(Y_n | 0, K_{n,i}) p(m_i).
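The conditional (kriging) moments entering this sequential step can be sketched as follows (zero-mean GP, one-dimensional inputs, with `kernel` a placeholder stationary kernel as a function of the distance):

```python
import numpy as np

def conditional_moments(x, X_n, Y_n, kernel):
    """Conditional mean eta_n(x) and variance rho2_n(x) of a zero-mean GP
    at x, given the observations Y_n at the design points X_n."""
    K_n = kernel(np.abs(X_n[:, None] - X_n[None, :]))    # n x n kernel matrix
    k_n = kernel(np.abs(X_n - x))                        # covariances with x
    mean = k_n @ np.linalg.solve(K_n, Y_n)               # eta_n(x)
    var = kernel(0.0) - k_n @ np.linalg.solve(K_n, k_n)  # rho^2_n(x)
    return mean, var
```

At an observed design point the conditional mean interpolates the data and the conditional variance vanishes, as it should.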
It follows that p(Y_k) is a mixture of normal distributions. The criterion representations (A1) and (A2) cannot be computed directly. However, several bounds have been developed for the criterion, the most famous being the classic upper bound derived by Box and Hill (1967).