1 Introduction

1.1 Motivation

The challenge of Bayesian Optimization (BO) in high-dimensional problems has been addressed by mapping it into low-dimensional problems defined on subsets of variables (Kandasamy et al. 2015; Moriconi et al. 2020) or by exploiting a lower intrinsic dimensionality. To tackle the issue of high dimensionality, a different approach is proposed in the present paper: mapping the original problem into a space of discrete probability distributions.

We consider the optimization of a black-box, expensive, multi-extremal function \(f(x)\):

$$\begin{array}{c}f\left(x\right):x\in \mathcal{X}\subset {\mathbb{R}}^{d}\to \mathbb{R}\end{array}$$
(1)

where \({\mathcal{X}}\) is the search space and neither gradient nor convexity information is available.

Consider the following composite function, with components \(h_i\) for \(i = 1, \ldots ,n\):

$$\begin{array}{c}f\left(x\right)=C\left({h}_{1}\left(x\right),\dots ,{h}_{n}\left(x\right)\right)\end{array}$$
(2)

where \(\left({h}_{1}\left(x\right),\dots ,{h}_{n}\left(x\right)\right)\) is the univariate point cloud associated with \(x\).

In the specific case of a linear scalarization of a multi-objective problem, \(f\left(x\right)=\sum_{i=1}^{n}{\lambda }_{i}{h}_{i}\left(x\right)\) and the vector \(\left({h}_{1}\left(x\right),\dots ,{h}_{n}\left(x\right)\right)\) is the point cloud associated with \(x\). Another class of problems which naturally yields a distributional representation of a candidate solution are simulation–optimization problems. This is the case in which the objective function is the average performance of a system, \(f\left(x\right)=\sum_{w}p\left(w\right)f\left(x,w\right)\), where \(f(x,w)\) is the value of \(f\left(x\right)\) under the environmental condition \(w\) and \(p(w)\) represents the “relevance” of condition \(w\) (the probability of its occurrence or the fraction of time that condition \(w\) occurs). Another setting is the hyperparameter optimization of a machine learning algorithm via k-fold cross validation, with \(f(x,w)\) a loss function (e.g., predictive accuracy, fairness, explainability, etc.) on fold \(w\) using hyperparameter configuration \(x\).

The point clouds lie in a metric space—namely, the space of discrete probability distributions—in which a metric defines the distance between two elements, with the properties of positivity, symmetry, and triangle inequality. Due to the nature of the elements belonging to this space, the most appropriate distance between them is a distance between probability distributions. In this paper we focus on the Wasserstein (WST) distance and embed the original optimization problem in the metric space whose elements are discrete probability distributions, which we call the Wasserstein space \(\mathcal{W}\). The Wasserstein distance, also known as the Optimal Transport (OT) distance, is a mathematically principled method to align probability distributions. Originating in a paper by Monge (1781), it received its linear programming formulation in Kantorovich (1942). A complete mathematical formulation is given in Villani (2009), while Peyré and Cuturi (2019) offer a comprehensive review of recent theoretical and computational advances. The Wasserstein distance has been widely applied in machine learning, from shape analysis (Gangbo and McCann 2000) to image interpolation, domain adaptation (Redko et al. 2019), parameter estimation in simulation models (Öcal et al. 2019), structured data on graphs (Vayer et al. 2018), active learning (Frogner et al. 2019), and adversarial networks (Arjovsky et al. 2017). The Wasserstein distance has many important properties: its representational capability has been shown by embedding in \(\mathcal{W}\) a variety of complex objects such as images, networks, and words. One explanation of the interest in the Wasserstein distance is that Euclidean embeddings of data are flawed, as they account for the correspondence of each feature independently of the other features. Bayesian Optimization (BO) algorithms have so far largely focused on problems where inputs are represented as numerical and categorical variables in Euclidean spaces. A significant advance is provided in Jaquier and Rozo (2020), which extends BO to Riemannian manifolds.
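As a concrete illustration, consider an additive objective such as the Alpine N.1 benchmark used later in the paper. A minimal sketch of how a candidate \(x\) is mapped to its univariate point cloud (assuming the standard Alpine N.1 component definition; the function names are ours) is:

```python
import numpy as np

def alpine01_components(x):
    """Components h_i(x) = |x_i*sin(x_i) + 0.1*x_i| of the additive
    Alpine N.1 function (standard definition, assumed here)."""
    return np.abs(x * np.sin(x) + 0.1 * x)

x = np.random.uniform(-10, 10, size=5)   # a candidate solution in X, with d = 5
H = alpine01_components(x)               # its univariate point cloud in W
f_x = H.sum()                            # the scalar objective f(x) = C(h_1(x), ..., h_n(x))
```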

In this paper we extend the distributional approach to BO by encoding the geometry of the data generated in the sequential optimization process and performing the search in \(\mathcal{W}\). The key advantage of BO is its well-known sample efficiency. The main question considered in our study is whether this sample efficiency can be further improved by embedding the optimization process in \(\mathcal{W}\). An important result is the development of a multi-layer perceptron (MLP) to map the results obtained by BO in \(\mathcal{W}\) back to the original search space \(\mathcal{X}\). The resulting algorithm, BOWS (Bayesian Optimization in the Wasserstein Space), at least for the test functions considered, outperforms “Euclidean” BO already in 10 dimensions, and its competitive edge increases substantially as the dimension of the search space increases. We have only considered the case in which the probability measures are univariate discrete probability distributions (aka point clouds).

1.2 Related works

The use of Gaussian Processes with probabilistic inputs has been proposed in Candelieri et al. (2022a), but the use of the WST distance in optimization problems is still a sparsely explored field. The issue of placing optimization in the space of probability distributions has been analysed in Zhang et al. (2018), where policy optimization in reinforcement learning is modelled using Wasserstein gradient flows, and in Zhang et al. (2019), where the problem of approximating the posterior distribution in Thompson sampling is solved via a Wasserstein gradient flow, also providing a theoretical guarantee of convergence. Since Thompson sampling (TS) is used both for sampling a Gaussian process and as an acquisition function in the Bayesian optimization framework, an efficient optimal-transport-based computational strategy for performing TS is directly relevant for optimization. TS is a sequential optimization process based on the following steps: updating a posterior depending on the set of observations, drawing a sample from the posterior as an approximation to the function to be minimized, minimizing this sample function to identify the next candidate point, and evaluating the objective function at that point. However, calculating exact posterior distributions is intractable for all but the simplest models. Therefore, the development of computationally efficient approximate methods for the posterior distributions is a crucial problem for scalable TS.

In Gong et al. (2019) and Liu and Wang (2016) it is shown how batch-BO makes it possible to transform the optimization of the acquisition function into finding the optimal distribution in the space of all distributions. The resulting quantile variational optimization is then solved using Stein variational gradient descent. The use of gradient flows in the Wasserstein space has been proposed in Salim et al. (2020) for the identification of OT maps. Wasserstein gradient flows have also been suggested in Rout et al. (2021) and Liutkus et al. (2019) for solving the optimization problems arising in generative modelling. Another problem which has been formulated as optimization over data-generating joint probability distributions is the transformation of datasets from unlabelled to labelled (Alvarez-Melis and Fusi 2021) using a particle-based method. The same approach has been proposed in Alvarez-Melis and Fusi (2020) for transfer learning via OT.

The theoretical framework of the previous papers has been reconsidered and focused on BO in Crovini et al. (2022), where the authors propose a batch sequential algorithm based on the Expected Improvement (EI) acquisition function, which is transformed into an acquisition functional defined over a space of probability measures. The key result is that this functional is concave under the strong factorization assumption that the probability measure of the batch points takes the form of a product measure. However, the concavity result is derived only for the batch-EI. The optimization of the acquisition functional is then based on its gradient flow over \(\mathcal{W}\). Two formulations of the gradient flow on the space of probability measures are considered: the Stein gradient flow and the particle-based Wasserstein gradient flow. The estimation of batch-EI and the computation of its gradient flow are quite complex, involving the solution of the non-linear Fokker–Planck equation.

Other results are related to BO over Riemannian manifolds: Jaquier and Rozo (2020), focused on robot learning, and Jaquier et al. (2020), focused on high-dimensional BO, which proposes an approach that builds, on the theory of Riemannian manifolds, a representation of the objective function in a low-dimensional latent space.

The issue of Distributionally Robust Optimization (DRO) is analysed in Lau and Liu (2022), who propose a Wasserstein barycentric ambiguity set, and in Liu et al. (2022). Closer to the focus of our paper is Kandasamy et al. (2018), which uses a kernel induced by the WST distance in a BO framework to search for the best neural network architecture.

Another possible approach for learning from distributions is to consider Reproducing Kernel Hilbert Spaces (RKHS). Kernels associated with probability distributions, in particular the Hilbertian kernel on probability measures, were first proposed in Hein and Bousquet (2005). A solution to the problem in the setting of Hilbert spaces has been provided in Peyré and Cuturi (2019). It must be remarked that, in the case of multivariate distributions, the construction of positive definite kernels on sets of probability measures is not straightforward.

1.3 Our contributions

A key contribution of this paper is to show that mapping candidate solutions from the search space into univariate discrete probability measures, specifically point clouds, associated with the components of the objective function can be used to obtain a BO algorithm in which the Gaussian process and the acquisition function are defined over \(\mathcal{W}\). The mapping back from \(\mathcal{W}\) into the original search space \(\mathcal{X}\) is accomplished by a neural network. An indication of convergence is obtained from a measure of concentration around the global optimum in \(\mathcal{W}\), namely an ambiguity set built upon the WST distance between point clouds. Preliminary computational results on additive benchmark functions show that the relative performance of the BOWS algorithm over plain BO, both in terms of function evaluations and wall-clock time, improves as the dimension of the search space \(\mathcal{X}\) increases.

1.4 Organization of the paper

The contents of the paper are organized as follows. Section 2 provides background knowledge about the Wasserstein distance and the optimal transport formulations. Section 3 presents the BOWS algorithm and proposes a neural network which maps probability distributions from \(\mathcal{W}\) back into \(\mathcal{X}\). Section 4 describes the experimental set-up, including the algorithms considered and the parameter values for benchmarking BOWS, and the computational results over the test functions. Section 5 provides conclusions and perspectives.

2 Methodological background

2.1 The Wasserstein distance between point clouds

Consider two univariate point clouds, denoted by \(\mathbf{H}=\left({h}^{\left(1\right)},\dots ,{h}^{\left(n\right)}\right)\) and \(\mathbf{G}=\left({g}^{\left(1\right)},\dots ,{g}^{\left(m\right)}\right)\), respectively. Since the WST distance is originally defined between two probability measures, the two point clouds are mapped into discrete probability distributions. Given a point cloud \(\mathbf{H}\), the associated probability measure is given by:

$$\begin{array}{c}\alpha =\frac{1}{n}\sum_{i=1}^{n}{\delta }_{{h}^{\left(i\right)}}\end{array}$$
(3)

Given two point clouds, their WST distance is \({W}_{2}\left(\mathbf{H},\mathbf{G}\right)=\underset{P\in U\left(n,m\right)}{\mathrm{min}}\sqrt{\langle P,{C}^{2}\rangle }\), where \({C}^{2}\in {\mathbb{R}}^{n\times m}\) is the cost matrix between the points of the two clouds, \(P\) is the coupling matrix whose entry \({P}_{i,j}\) denotes the weight of the assignment of \({h}^{\left(i\right)}\) to \({g}^{\left(j\right)}\), and \(U\left(n,m\right)\) is the set of all possible assignments, that is \(U\left(n,m\right)=\left\{P\in {\mathbb{R}}_{+}^{n\times m}:P{1}_{m}=\frac{1}{n}{1}_{n} , {P}^{T}{1}_{n}=\frac{1}{m}{1}_{m}\right\}.\)

The optimal coupling \({P}^{*}=\underset{P\in U\left(n,m\right)}{\mathrm{argmin}}\sqrt{\langle P,{C}^{2}\rangle }\) can be obtained through the simplex algorithm.
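For general discrete measures, the optimal coupling and the corresponding distance can be computed with an exact OT solver. A minimal sketch using the POT library (our illustrative choice, not a tool referenced in the paper) is:

```python
import numpy as np
import ot  # POT: Python Optimal Transport (illustrative choice)

H = np.random.rand(6)                # point cloud with n = 6 points
G = np.random.rand(4)                # point cloud with m = 4 points
a = np.full(len(H), 1.0 / len(H))    # uniform weights 1/n
b = np.full(len(G), 1.0 / len(G))    # uniform weights 1/m
C2 = (H[:, None] - G[None, :]) ** 2  # squared cost matrix C^2

P_star = ot.emd(a, b, C2)            # optimal coupling P* via the exact LP solver
W2 = np.sqrt(np.sum(P_star * C2))    # W_2(H, G) = sqrt(<P*, C^2>)
```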

Since our case study considers univariate point clouds with the same cardinality (i.e., the number \(n\) of objective function’s components), the computation of \({W}_{2}\left(\mathbf{H},\mathbf{G}\right)\) can be simplified as:

$$\begin{array}{c}{W}_{2}\left(\mathbf{H},\mathbf{G}\right)={\left(\frac{1}{n}\sum_{i=1}^{n}{\left|{\widetilde{h}}^{\left(i\right)}-{\widetilde{g}}^{\left(i\right)}\right|}^{2}\right)}^\frac{1}{2}\end{array}$$
(4)

where \({\widetilde{h}}^{\left(i\right)}\) and \({\widetilde{g}}^{\left(i\right)}\) denote the sorted samples of the two point clouds.
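In this equal-cardinality univariate case, (4) amounts to sorting both clouds and taking a root-mean-squared difference. A minimal sketch:

```python
import numpy as np

def w2_point_clouds(H, G):
    """W_2 distance (4) between two univariate point clouds of equal size:
    sort both clouds, then take the root-mean-squared difference."""
    h_sorted, g_sorted = np.sort(H), np.sort(G)
    return np.sqrt(np.mean((h_sorted - g_sorted) ** 2))
```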

2.2 The “vanilla” Bayesian optimization

A Gaussian Process (GP) is a probability distribution over functions, denoted as \(f\left(x\right)\sim GP(\mu (x), k\left(x, {x}^{\prime}\right))\), where \(\mu \left(x\right)\) is the mean function of the GP and \(k\left(x,{x}^{\prime}\right)\) is the covariance function (aka kernel). A GP is a collection of random variables, any finite number of which have a joint Gaussian distribution, and \(f\left(x\right)\) can be considered as a sample from a multivariate normal distribution (Archetti and Candelieri 2019; Frazier 2018).

Let us denote by \({\mathrm{X}}_{1:N}={\left\{{\mathbf{x}}^{\left(i\right)}\right\}}_{i=1,\dots ,N}\) a set of \(N\) points in \(\Omega \subset {\mathbb{R}}^{d}\) and by \({y}_{1:N}={\left\{f\left({\mathbf{x}}^{\left(i\right)}\right)+\varepsilon \right\}}_{i=1,\dots ,N}\) the associated function values, possibly noisy with \(\varepsilon\) a zero-mean Gaussian noise \(\varepsilon \sim \mathcal{N}\left(0,{\lambda }_{\varepsilon }^{2}\right)\). Then the posterior predictive mean \(\mu \left(\mathbf{x}\right)\) and variance \({\sigma }^{2}\left(\mathbf{x}\right)\), conditioned on \({\mathrm{X}}_{1:N}\) and \({y}_{1:N}\), are given by the following equations:

$$\begin{array}{c}\mu \left(\mathbf{x}\right)=k\left(\mathbf{x},{\mathrm{X}}_{1:N}\right) {\left[\mathrm{K}+{\lambda }_{\varepsilon }^{2}I\right]}^{-1} {y}_{1:N}\end{array}$$
(5)
$$\begin{array}{c}{\sigma }^{2}\left(\mathbf{x}\right)=k\left(\mathbf{x},\mathbf{x}\right)-k\left(\mathbf{x},{\mathrm{X}}_{1:N}\right) {\left[\mathrm{K}+{\lambda }_{\varepsilon }^{2}I\right]}^{-1} k\left({\mathrm{X}}_{1:N},\mathbf{x}\right)\end{array}$$
(6)

where \(\mathrm{k}\left(\mathbf{x},{\mathrm{X}}_{1:N}\right)={\left\{k\left(\mathbf{x},{\mathbf{x}}^{\left(i\right)}\right)\right\}}_{i=1:N}\) and \(\mathrm{K}\in {\mathbb{R}}^{N\times N}\) with entries \({\mathrm{K}}_{i,j}=k\left({\mathbf{x}}^{\left(i\right)},{\mathbf{x}}^{\left(j\right)}\right)\).
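A minimal numpy sketch of the predictive equations (5)–(6), assuming a unit-variance SE kernel and a fixed noise level (both illustrative choices):

```python
import numpy as np

def se_kernel(A, B, lengthscale=1.0):
    """Squared Exponential kernel matrix between the rows of A and B."""
    sq_dists = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * sq_dists / lengthscale ** 2)

def gp_posterior(x, X, y, noise_var=1e-4, lengthscale=1.0):
    """Predictive mean (5) and variance (6) at a single point x."""
    K = se_kernel(X, X, lengthscale) + noise_var * np.eye(len(X))
    k_x = se_kernel(x[None, :], X, lengthscale)       # k(x, X_{1:N})
    alpha = np.linalg.solve(K, y)                     # [K + lambda^2 I]^{-1} y_{1:N}
    mu = k_x @ alpha
    var = 1.0 - k_x @ np.linalg.solve(K, k_x.T)       # k(x, x) = 1 for the unit SE kernel
    return mu.item(), var.item()
```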

The acquisition function manages the balance between exploration and exploitation; it is the key driver of the sample efficiency of BO and is an important concept also outside machine learning (Candelieri et al. 2021). It drives the search for new evaluation points towards regions of the search space with potentially better values of the objective function, either because the value of \(\mu (\mathbf{x})\) is better or because the uncertainty represented by \({\sigma }^{2}\left(\mathbf{x}\right)\) is high (or both). A widely used acquisition function is the Confidence Bound (Upper and Lower, used for maximization and minimization problems, respectively):

$$\begin{array}{c}UCB\left(\mathbf{x}\right)=\mu \left(\mathbf{x}\right)+{\xi }^{1/2}\sigma \left(\mathbf{x}\right)\end{array}$$
(7)
$$\begin{array}{c}LCB\left(\mathbf{x}\right)=\mu \left(\mathbf{x}\right)-{\xi }^{1/2}\sigma \left(\mathbf{x}\right)\end{array}$$
(8)

where \(\xi\) is a parameter managing the exploration/exploitation trade-off.

3 The Bayesian optimization in Wasserstein space algorithm

3.1 Preliminaries

We first recall, and also introduce, some useful notation:

  • \(\mathcal{X}\subset {\mathbb{R}}^{d}\) is the original (Euclidean) search space.

  • \(y\in {\mathbb{R}}\) is a value in the co-domain of the objective function \(f\left(\mathbf{x}\right)\).

  • \(\mathcal{W}\subset {\mathbb{R}}^{n}\) is the (unknown) co-domain of the objective function’s observable components \({h}_{1}\left(\mathbf{x}\right),\dots ,{h}_{n}\left(\mathbf{x}\right)\)—or, compactly, of the point cloud \(\mathbf{H}\in \mathcal{W}\).

  • \(\mu (\mathbf{H})\) and \({\sigma }^{2}(\mathbf{H})\) are the predictive mean and variance of a GP defined over the space of point clouds, \(\mathcal{W}\), and computed according to (5) and (6) where \(\mathbf{x}\in \mathcal{X}\) is replaced by \(\mathbf{H}\in \mathcal{W}\).

  • \(\varphi :\mathcal{W}\to \mathcal{X}\) is a mapping from the space of univariate probability distributions back to the original search space.

3.2 The BOWS’s GP

For the GP model, we have decided to adopt the (Euclidean) Squared Exponential (SE) kernel operating on the space \(\mathcal{W}\), that is, the \(n\)-dimensional space to which the point clouds belong. Specifically, the (Euclidean) SE kernel is:

$$k\left(\mathbf{H},{\mathbf{H}}^{\prime}\right)={e}^{- \frac{{\Vert \mathbf{H}-{\mathbf{H}}^{\prime}\Vert }^{2}}{2{\ell }^{2}}}$$

with \(\ell\) the so-called length-scale hyperparameter, which is tuned via MLE. If \(\ell \in {\mathbb{R}}\) the kernel is said to be isotropic, while it is anisotropic if \(\ell \in {\mathbb{R}}^{n}\).

Although using a Euclidean-based kernel on \(\mathcal{W}\) can seem a contradiction, Candelieri et al. (2022b) prove that using a Euclidean SE kernel between univariate probability measures is equivalent to using a non-stationary anisotropic Wasserstein-based SE kernel, that is:

$$k\left(\mathbf{H},{\mathbf{H}}^{\prime}\right)={e}^{- \frac{{W}_{2}^{2}\left(\mathbf{H},{\mathbf{H}}^{\prime}\right)}{2{\ell }^{2}}}$$

with \({W}_{2}^{2}\left(\mathbf{H},{\mathbf{H}}^{\prime}\right)\) computed as in (4).
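A simplified, isotropic sketch of this relation: applying the Euclidean SE kernel to point clouds represented by their sorted samples coincides with the WST-based SE kernel built on (4), up to a rescaling of the length-scale (the full result in Candelieri et al. 2022b concerns the non-stationary anisotropic case):

```python
import numpy as np

def wst_se_kernel(H, G, lengthscale=1.0):
    """SE kernel built on the squared W_2 distance (4) between two
    univariate point clouds of equal size n."""
    w2_sq = np.mean((np.sort(H) - np.sort(G)) ** 2)
    return np.exp(-w2_sq / (2.0 * lengthscale ** 2))

def euclidean_se_kernel(H, G, lengthscale=1.0):
    """Euclidean SE kernel applied to the sorted point clouds."""
    sq_dist = np.sum((np.sort(H) - np.sort(G)) ** 2)
    return np.exp(-sq_dist / (2.0 * lengthscale ** 2))
```

For clouds of size \(n\), the Euclidean kernel with length-scale \(\ell \sqrt{n}\) returns the same value as the WST-based kernel with length-scale \(\ell\).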

3.3 The BOWS’s acquisition function

As the test problems considered in the paper are minimization problems, we use LCB as the acquisition function. The main difference with respect to vanilla BO is that here LCB—as well as the GP—is defined over \(\mathcal{W}\) instead of \(\mathcal{X}\). Thus, minimizing LCB leads to the next point cloud \({\widehat{\mathbf{H}}}^{\left(N+1\right)}\) giving the best exploration–exploitation trade-off, that is:

$$\begin{array}{c}{\widehat{\mathbf{H}}}^{\left(N+1\right)}=\underset{\mathbf{H }\in\Omega \subset \mathcal{W}}{\mathrm{argmin }}LCB\left(\mathbf{H}\right)\end{array}$$
(9)

It is important to remark that, contrary to the search space \(\mathcal{X}\), which is defined by the user and usually box-bounded, no preliminary information about \(\Omega \subset \mathcal{W}\) is available. The unknown-search-space problem is intractable to solve in practice. Therefore, we decided to dynamically set up the search space \(\Omega\) according to the point clouds observed so far. This kind of procedure—which is mandatory in our case—has in any case been proposed quite recently for vanilla BO under the name of “weakly specified” search space (Nguyen et al. 2017).
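A minimal sketch of one way to implement this step. The specific rule below (per-coordinate min/max bounds of the observed clouds, enlarged by a margin) and the random-search minimization of LCB are illustrative assumptions—the paper uses a gradient-based optimizer—and mu and sigma stand for the GP predictive mean and standard deviation over \(\mathcal{W}\):

```python
import numpy as np

def dynamic_bounds(H_observed, margin=0.1):
    """Box bounds for Omega, built from the point clouds observed so far
    and enlarged by a relative margin (illustrative choice)."""
    H = np.asarray(H_observed)             # shape (N, n)
    lo, hi = H.min(axis=0), H.max(axis=0)
    pad = margin * (hi - lo)
    return lo - pad, hi + pad

def minimize_lcb(mu, sigma, lo, hi, xi=4.0, n_candidates=10_000, rng=None):
    """Random-search minimization of LCB(H) = mu(H) - sqrt(xi) * sigma(H) over Omega."""
    if rng is None:
        rng = np.random.default_rng()
    cand = rng.uniform(lo, hi, size=(n_candidates, len(lo)))
    lcb = mu(cand) - np.sqrt(xi) * sigma(cand)
    return cand[np.argmin(lcb)]            # next point cloud H_hat^(N+1)
```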

3.4 Mapping from \(\mathcal{W}\) back to \(\mathcal{X}\)

We need to map \({\widehat{\mathbf{H}}}^{(N+1)}\) back to \(\mathcal{X}\) to obtain the associated value \({\mathbf{x}}^{(N+1)}\) and, consequently, \({y}^{(N+1)}\) and the actual \({\mathbf{H}}^{(N+1)}\). It is important to remark that any possible mapping is affected by some reconstruction error. The mapping \(\varphi :\mathcal{W}\to \mathcal{X}\) is performed by an MLP trained using the sets \({\mathcal{H}}_{1:N}={\left\{{\mathbf{H}}^{\left(i\right)}\right\}}_{i=1:N}\) and \({\mathrm{X}}_{1:N}={\left\{{\mathbf{x}}^{\left(i\right)}\right\}}_{i=1:N}\) as input and output, respectively. The number of layers of the MLP has been set to three and, in each layer, the number of neurons is \(\mathrm{max}(n,d)\). The MLP is retrained at each iteration or after a given number of iterations, according to the user’s preferences and available computational budget. In contrast, the GP model is always retrained at each iteration (as usual in BO). The MLP provides the new point \({\mathbf{x}}^{\left(N+1\right)}\), which yields \({y}^{\left(N+1\right)}=f\left({\mathbf{x}}^{\left(N+1\right)}\right)\) and, concurrently, the actual \({\mathbf{H}}^{(N+1)}\).
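A minimal PyTorch sketch of the mapping network \(\varphi\) described above, interpreting “three layers” as three hidden layers of \(\mathrm{max}(n,d)\) neurons each; the activation, optimizer, learning rate, and number of epochs are illustrative assumptions:

```python
import torch
import torch.nn as nn

def build_phi(n, d, width=None):
    """MLP mapping a point cloud H in W (R^n) back to a candidate x in X (R^d)."""
    width = width or max(n, d)
    return nn.Sequential(
        nn.Linear(n, width), nn.ReLU(),
        nn.Linear(width, width), nn.ReLU(),
        nn.Linear(width, width), nn.ReLU(),
        nn.Linear(width, d),
    )

def train_phi(phi, H_train, X_train, epochs=500, lr=1e-3):
    """Fit phi on the observed pairs (H^(i), x^(i)) with an MSE reconstruction loss."""
    opt = torch.optim.Adam(phi.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(phi(H_train), X_train)   # H_train: (N, n), X_train: (N, d)
        loss.backward()
        opt.step()
    return phi
```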

The additional computational complexity of BOWS with respect to vanilla BO is given by the training of the MLP mapping from \(\mathcal{W}\) back to \(\mathcal{X}\). A rough indication of this complexity is \(O({m}_{1}\times {m}_{2}\times {m}_{3}\times {m}_{4})\), where \({m}_{1}\) is the number of epochs, \({m}_{2}\) the number of training examples, \({m}_{3}\) the number of objective function’s components, and \({m}_{4}\) the number of neurons per layer. The computational overhead due to working in \(\mathcal{W}\), and the ensuing need to map back to \(\mathcal{X}\), is substantial and explains why the wall-clock time per iteration of BOWS is higher than that of vanilla BO. It is a reasonable cost to pay for the improvement in sampling efficiency, as shown by the computational results in the following section.

4 Computational results

4.1 Experimental setting

The algorithms have been implemented using BoTorch (Balandat et al. 2020), a Python library for Bayesian optimization built on the PyTorch framework. BoTorch provides an easy-to-use interface for defining, managing, and running sequential experiments and a modular interface for composing Bayesian optimization primitives such as probabilistic models, acquisition functions, and optimizers. The computational results reported in this section have been obtained using UCB (with \(\beta =4\)) and a gradient-based optimizer. Three test functions have been considered (Table 1) with dimensionality \(d=5, 10, 15, 20\). For each experiment, 10 independent runs have been executed with \(20d\) iterations and \(d\) initial points.
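A minimal BoTorch sketch of one surrogate-fitting and candidate-proposal step with UCB (\(\beta=4\)); the function names follow recent BoTorch versions, the optimizer settings (number of restarts, raw samples) are illustrative, and for minimization problems the objective values are typically negated before fitting:

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import UpperConfidenceBound
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

def bo_step(train_X, train_Y, bounds, beta=4.0):
    """Fit a GP on the observations and propose the next candidate by maximizing UCB.

    train_X: (N, d) inputs; train_Y: (N, 1) (negated) objective values;
    bounds: (2, d) tensor with lower and upper bounds of the search space."""
    gp = SingleTaskGP(train_X, train_Y)
    mll = ExactMarginalLogLikelihood(gp.likelihood, gp)
    fit_gpytorch_mll(mll)
    ucb = UpperConfidenceBound(gp, beta=beta)
    candidate, _ = optimize_acqf(
        ucb, bounds=bounds, q=1, num_restarts=10, raw_samples=256,
    )
    return candidate
```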

Table 1 Test functions

4.2 Experimental results

In this section, the computational results on the three test functions reported in Table 1 are presented, considering dimensionality \(d=5, 10, 15, 20\).

As shown in Table 2, on Alpine01 and Vincent BOWS has a better overall performance with respect to vanilla BO; the advantage increases at higher dimensionality. Figure 1 highlights that BOWS converges faster to an optimal solution, in terms of iterations, particularly at higher \(d\). In the case of Michalewicz, BOWS generally performs worse than standard BO; the performance gap decreases as the dimensionality increases.

Table 2 Best seen averaged over 10 trials, with its standard deviation
Fig. 1 Best seen over iterations for the two algorithms and the three test functions. The line represents the mean over 10 independent runs while the shaded area is the standard deviation

To explain the different behaviour of BOWS on the Michalewicz test function we have to look at the MLP error, defined in the Wasserstein space as \(\frac{1}{N}\sum_{i=1}^{N}W\left({\widehat{\mathbf{H}}}^{(i)},{\mathbf{H}}^{(i)}\right)\). In the case of Alpine01 and Vincent, the error slightly decreases as the iterations increase (Fig. 2). This is consistent with the fact that more iterations mean a larger number of training points for the neural network. In the case of Michalewicz, the error shows the opposite behaviour, meaning that the MLP cannot properly map the function’s components from \(\mathcal{W}\) back to the search space \(\mathcal{X}\). This behaviour is particularly marked at higher dimensionality of the search space.

Fig. 2 MLP’s error over iterations for the three test functions. The line represents the mean over 10 independent runs while the shaded area is the standard deviation. The error is computed in the Wasserstein space instead of the Euclidean space

The main difference between Michalewicz and the other two test functions is that Michalewicz’s components depend on the number of dimensions \(d\), and in particular they get more complex as \(d\) increases (Fig. 3). Specifically, the number of local minima is \(d!\). In the case of Alpine01 and Vincent, the complexity of the components does not depend on the dimensionality \(d\).

Fig. 3 Michalewicz’s components \(\mathrm{sin}\left(x\right){\mathrm{sin}}^{2k}\left(\frac{i{x}^{2}}{\pi }\right)\) with \(i=1, 5, 10, 20\)

The difference in complexity of the functions’ components can also be seen by looking at the correlation between \({\mathbf{H}}^{(i)}\) and \({\mathbf{x}}^{(i)}\) for \(i=1,\dots ,N\). As shown in Table 3, in the case of Michalewicz the Pearson correlation is much lower than for the other two test functions, indicating that finding a mapping function is harder.

Table 3 Pearson correlation between \({\mathbf{H}}^{(i)}\) and \({\mathbf{x}}^{(i)}\) for \(i=1,\dots ,N\)

Since mapping Michalewicz’s components back to the search space is more complex, a possible solution is to increase the number of hidden layers of the MLP. Figures 4 and 5 show that with 5 hidden layers the performance of BOWS on Michalewicz improves and the MLP error decreases.

Fig. 4 Best seen over iterations of Michalewicz considering the 5-layer MLP in BOWS

Fig. 5 MLP’s error over iterations of Michalewicz considering the 5-layer MLP in BOWS

5 Conclusions and future works

The main conclusion is that a distributional representation of points in the search space as point clouds can be effectively applied to Bayesian optimization. The Wasserstein distance has been chosen because it is a metric, captures complex relationships between inputs, neighbourhood sizes, and connectivity, and provides geometrically meaningful distances. Computational experiments show, both in terms of function evaluations and wall-clock time, that on two of the three benchmark functions the new method outperforms vanilla Bayesian optimization and that its advantage increases with the dimension of the search space.

Future works should address the following main issues:

  • Methodological advances to improve the optimization of the acquisition function considering also, from a theoretical standpoint, both the differentiability of the WST distance and the relation between the gradient flows of the objective function and the transport map.

  • A full analysis of the optimization problems which fit into the BOWS framework. The distributional approach is natural for simulation–optimization problems over discrete structures, sensor placement in physical and informational networks and stochastic vehicle routing. Also, the issue of high dimensionality and the underlying additive structure should be further analyzed.

Additional experiments are required for a more extensive numerical validation of the proposed approach.