Global optimization via inverse distance weighting and radial basis functions

Global optimization problems whose objective function is expensive to evaluate can be solved effectively by recursively fitting a surrogate function to function samples and minimizing an acquisition function to generate new samples. The acquisition step trades off between seeking a new optimization vector where the surrogate is minimum (exploitation of the surrogate) and looking for regions of the feasible space that have not yet been visited and that may potentially contain better values of the objective function (exploration of the feasible space). This paper proposes a new global optimization algorithm that uses inverse distance weighting (IDW) and radial basis functions (RBF) to construct the acquisition function. Rather arbitrary constraints that are simple to evaluate can easily be taken into account. Compared to Bayesian optimization, the proposed algorithm, which we call GLIS (GLobal minimum using Inverse distance weighting and Surrogate radial basis functions), is competitive and computationally lighter, as we show on a set of benchmark global optimization and hyperparameter tuning problems. MATLAB and Python implementations of GLIS are available at http://cse.lab.imtlucca.it/~bemporad/glis.


Introduction
Many problems in machine learning and statistics, engineering design, physics, medicine, management science, and in many other fields, require finding a global minimum of a function without derivative information; see, e.g., the excellent survey on derivative-free optimization [32]. Some of the most successful approaches for derivative-free global optimization include deterministic methods based on recursively splitting the feasible space in rectangles, such as the DIRECT (DIvide a hyper-RECTangle) [22] and Multilevel Coordinate Search (MCS) [18] algorithms, and stochastic methods such as Particle Swarm Optimization (PSO) [12], genetic algorithms [43], and evolutionary algorithms [16].
The aforementioned methods can be very successful in reaching a global minimum without any assumption on convexity and smoothness of the function, but may result in evaluating the function a large number of times during the execution of the algorithm. In many problems, however, the objective function is a black box that can be very time-consuming to evaluate. For example, in hyperparameter tuning of machine learning algorithms, one needs to run a large set of training tests per hyperparameter choice; in structural engineering design, testing the resulting mechanical property corresponding to a given choice of parameters may involve several hours for computing solutions to partial differential equations; in control systems design, testing a combination of controller parameters involves running a real closed-loop experiment, which is time-consuming and costly. For this reason, many researchers have been studying algorithms for black-box global optimization that aim at minimizing the number of function evaluations by replacing the function to minimize with a surrogate function [21]. The latter is obtained by sampling the objective function and interpolating the samples with a map that, compared to the original function, is very cheap to evaluate. The surrogate is then used to solve a (much cheaper) global optimization problem that decides the new point where the original function must be evaluated. A better-quality surrogate is then created by also exploiting the new sample and the procedure is iterated. For example, quadratic surrogate functions are used in the well-known global optimization method NEWUOA [30].
As underlined by several authors (see, e.g., [21]), purely minimizing the surrogate function may lead to converge to a point that is not the global minimum of the blackbox function. To take into account the fact that the surrogate and the true objective function differ from each other in an unknown way, the surrogate is typically augmented by an extra term that takes into account such an uncertainty. The resulting acquisition function is therefore minimized instead for generating a new sample of the optimization vector, trading off between seeking for a new vector where the surrogate is small and looking for regions of the feasible space that have not yet been visited.
Bayesian Optimization (BO) is a popular class of global optimization methods based on surrogates that, by modeling the black-box function as a Gaussian process, enables one to quantify in statistical terms the discrepancy between the two functions, information that is taken into account to drive the search. BO has been studied since the sixties in global optimization [25] and in geostatistics [26] under the name of Kriging methods; it became popular for solving problems of Design and Analysis of Computer Experiments (DACE) [34]; see for instance the popular Efficient Global Optimization (EGO) algorithm [23]. It is nowadays very popular in machine learning for tuning hyperparameters of different algorithms [9,13,35,37].
Motivated by learning control systems from data [29] and self-calibration of optimal control parameters [14], in this paper we propose an alternative approach to solve global optimization problems in which the objective function is expensive to evaluate, based on Inverse Distance Weighting (IDW) interpolation [24,36] and Radial Basis Functions (RBFs) [17,27]. The use of RBFs for solving global optimization problems was already adopted in [11,15], in which the acquisition function is constructed by introducing a "measure of bumpiness". The author of [15] shows that such a measure has a relation with the probability of hitting a lower value than a given threshold of the underlying function, as used in Bayesian optimization. RBFs were also adopted in [31], with additional constraints imposed to make sure that the feasible set is adequately explored. In this paper we use a different acquisition function based on two exploration components: an estimate of the confidence interval associated with the interpolant function, defined as in [24], and a new measure based on inverse distance weighting that is totally independent of the underlying black-box function and its surrogate. Both terms aim at exploring the domain of the optimization vector. Moreover, arbitrary constraints that are simple to evaluate are also taken into account, as they can be easily imposed during the minimization of the acquisition function.
Compared to Bayesian optimization, our non-probabilistic approach to global optimization is very competitive, as we show in a set of benchmark global optimization problems and on hyperparameter selection problems, and also computationally lighter than off-the-shelf implementations of BO.
A preliminary version of this manuscript was made available in [4] and later extended in [5] to solve preference-based optimization problems. MATLAB and Python implementations of the proposed approach and of the one of [5] are available for download at http://cse.lab.imtlucca.it/~bemporad/glis. For an application of the GLIS algorithm proposed in this paper to learning optimal calibration parameters in embedded model predictive control applications, the reader is referred to [14].
The paper is organized as follows. After stating the global optimization problem we want to solve in Sect. 2, Sects. 3 and 4 deal with the construction of the surrogate and acquisition functions, respectively. The proposed global optimization algorithm is detailed in Sect. 5 and several results are reported in Sect. 6. Finally, some conclusions are drawn in Sect. 7.

Problem formulation
Consider the following constrained global optimization problem

  min f(x)  subject to  ℓ ≤ x ≤ u,  x ∈ X,    (1)

where f ∶ ℝⁿ → ℝ is an arbitrary function of the optimization vector x ∈ ℝⁿ, ℓ, u ∈ ℝⁿ are vectors of lower and upper bounds, and X ⊆ ℝⁿ imposes further arbitrary constraints on x. Typically X = {x ∈ ℝⁿ ∶ g(x) ≤ 0}, where the vector function g ∶ ℝⁿ → ℝ^q defines inequality constraints, with q = 0 meaning that no inequality constraint is enforced; for example, linear inequality constraints are defined by setting g(x) = Ax − b, with A ∈ ℝ^{q×n}, b ∈ ℝ^q, q ≥ 0. We are particularly interested in problems as in (1) such that f(x) is expensive to evaluate and its gradient is not available, while the condition x ∈ X is easy to evaluate. Although not comprehensively addressed in this paper, we will show that our approach also tolerates noisy evaluations of f, that is, if we measure y = f(x) + ε instead of f(x), where ε is an unknown quantity. We will not make any assumption on f, g, and ε. In (1) we do not include possible linear equality constraints A_e x = b_e, as they can first be eliminated by reducing the number of optimization variables, which also makes the exploration easier in a lower-dimensional space.

Surrogate function
Assume that we have collected a vector F = [f₁ … f_N]′ of N samples f_i = f(x_i) of f at corresponding points X = {x₁, …, x_N}, x_i ∈ ℝⁿ. We consider next two types of surrogate functions, namely Inverse Distance Weighting (IDW) functions [24,36] and Radial Basis Functions (RBFs) [15,27].

Inverse distance weighting functions
Given a generic new point x ∈ ℝⁿ, consider the squared Euclidean distance function

  d²(x, x_i) = ‖x − x_i‖₂²,  i = 1, …, N.    (2)

In standard IDW functions [36] the weight functions w_i ∶ ℝⁿ \ {x_i} → ℝ are defined by the inverse squared distances

  w_i(x) = 1 / d²(x, x_i).    (3a)

The alternative weighting function

  w_i(x) = e^{−d²(x, x_i)} / d²(x, x_i)    (3b)

suggested in [24] has the advantage of being similar to the inverse squared distance in (3a) for small values of d², but makes the effect of points x_i located far from x fade out quickly due to the exponential term. By defining for i = 1, …, N the functions v_i ∶ ℝⁿ → ℝ as

  v_i(x) = w_i(x) / Σ_{j=1}^N w_j(x)  if x ∉ {x₁, …, x_N},  v_i(x_i) = 1,  v_i(x_j) = 0 for j ≠ i,    (4)

the function

  f̂(x) = Σ_{i=1}^N v_i(x) f_i    (5)

is an IDW interpolation of (X, F).
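As an illustration, a minimal NumPy sketch of the IDW interpolant (5) with the weights (3a) or (3b) might look as follows (function names are ours, chosen for illustration; this is not the paper's reference implementation):

```python
import numpy as np

def idw_weights(x, X, variant="exp"):
    """IDW weights w_i(x) at a query point x for samples X (N x n).

    variant="inv" is the classic 1/d^2 rule (3a); variant="exp" is the
    exponentially damped rule e^{-d^2}/d^2 of (3b). Assumes x is not a sample,
    so all squared distances are nonzero.
    """
    d2 = np.sum((X - x) ** 2, axis=1)  # squared distances d^2(x, x_i)
    if variant == "inv":
        return 1.0 / d2
    return np.exp(-d2) / d2

def idw_interpolate(x, X, F, variant="exp"):
    """IDW surrogate f_hat(x) = sum_i v_i(x) f_i, with v_i = w_i / sum_j w_j."""
    x, X, F = np.asarray(x, float), np.asarray(X, float), np.asarray(F, float)
    d2 = np.sum((X - x) ** 2, axis=1)
    hit = np.where(d2 == 0.0)[0]
    if hit.size:  # exact interpolation at sampled points (property P1)
        return float(F[hit[0]])
    w = idw_weights(x, X, variant)
    v = w / np.sum(w)  # convex-combination coefficients v_i(x)
    return float(v @ F)
```

By construction the returned value is a convex combination of the samples f_i, consistent with the range bound stated in Lemma 1.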

Lemma 1
The IDW interpolation function f̂ defined in (5) enjoys the following properties:

P1. f̂(x_j) = f_j for all j = 1, …, N (interpolation of the samples);
P2. min_i f_i ≤ f̂(x) ≤ max_i f_i for all x ∈ ℝⁿ;
P3. f̂ is differentiable everywhere on ℝⁿ and in particular ∇f̂(x_j) = 0 for all j = 1, …, N.
The proof of Lemma 1 is very simple and is reported in "Appendix A". Note that in [24] the authors suggest improving the surrogate function by adding a regression model to (5) to take global trends into account. In our numerical experiments we found, however, that adding such a term does not lead to significant improvements of the proposed global optimization algorithm.
A one-dimensional example of the IDW surrogate f̂, built from five samples of the scalar function defined in (6), is depicted in Fig. 1. The global optimizer is x* ≈ −0.9599, corresponding to the global minimum f(x*) ≈ 0.2795.

Radial basis functions
A possible drawback of the IDW function f̂ defined in (5) is due to property P3: as the number N of samples increases, the surrogate function tends to ripple, since its gradient is forced to vanish at all samples. An alternative is to use a radial basis function (RBF) [15,27] as a surrogate function. RBF surrogates are defined by setting

  f̂(x) = Σ_{i=1}^N β_i φ(ε d(x, x_i)),    (7)

where d ∶ ℝⁿ × ℝⁿ → ℝ is the function defining the Euclidean distance as in (2), ε > 0 is a scalar parameter, and φ ∶ ℝ → ℝ is a radial basis function chosen among the types listed in (8), such as the inverse quadratic, Gaussian, linear, and thin plate spline functions. The coefficient vector β = [β₁ … β_N]′ is obtained by imposing the interpolation condition

  f̂(x_i) = f_i,  i = 1, …, N.    (9)

Condition (9) leads to solving the linear system

  Mβ = F,    (10a)

where M is the N × N symmetric matrix whose (i, j)-entry is φ(ε d(x_i, x_j)), with M_ii = 1 for all the RBF types listed in (8) but the linear and thin plate spline, for which M_ii = lim_{d→0} φ(ε d) = 0. Note that if function f is evaluated at a new sample x_{N+1}, updating M only requires adding the last row/column obtained by computing φ(ε d(x_{N+1}, x_j)) for all j = 1, …, N + 1.
As highlighted in [15,21], matrix M might be singular, even if x_i ≠ x_j for all i ≠ j. To prevent issues due to a singular M, [15,21] suggest using a surrogate function given by the sum of an RBF and a polynomial function of a certain degree. To also take into account unavoidable numerical issues when distances between sampled points get close to zero, which will easily happen as new samples are added towards finding a global minimum, in this paper we suggest instead to use a singular value decomposition (SVD) M = UΣV′ of M and to neglect singular values below a certain positive threshold ε_SVD when computing the coefficient vector β. The threshold ε_SVD turns out to be useful when dealing with noisy measurements y = f(x) + ε of f. Figure 2 shows the approximation f̂ obtained from 50 samples, with ε normally distributed around zero with standard deviation 0.1, when ε_SVD = 10⁻².
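A minimal sketch of the RBF surrogate and the SVD-thresholded solve of the interpolation system described above could read as follows (function names and the small subset of φ choices are ours, for illustration only):

```python
import numpy as np

def rbf(d, kind="inverse_quadratic", eps=1.0):
    """A few common radial basis functions phi(eps * d)."""
    t = eps * d
    if kind == "inverse_quadratic":
        return 1.0 / (1.0 + t ** 2)
    if kind == "gaussian":
        return np.exp(-t ** 2)
    if kind == "thin_plate_spline":
        # t^2 log(t), with the limit value 0 at t = 0
        with np.errstate(divide="ignore", invalid="ignore"):
            v = t ** 2 * np.log(t)
        return np.where(d == 0.0, 0.0, v)
    raise ValueError(kind)

def fit_rbf(X, F, kind="inverse_quadratic", eps=1.0, eps_svd=1e-6):
    """Solve M beta = F via SVD, dropping singular values below eps_svd."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    M = rbf(D, kind, eps)                                      # symmetric N x N
    U, S, Vt = np.linalg.svd(M)
    S_inv = np.where(S > eps_svd, 1.0 / S, 0.0)  # truncated pseudoinverse
    return Vt.T @ (S_inv * (U.T @ F))

def eval_rbf(x, X, beta, kind="inverse_quadratic", eps=1.0):
    d = np.linalg.norm(X - np.asarray(x, float), axis=1)
    return float(rbf(d, kind, eps) @ beta)
```

When M is well-conditioned no singular value is truncated and the surrogate interpolates the samples exactly; with noisy samples, a larger `eps_svd` smooths the fit as in Fig. 2.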
A drawback of RBFs, compared to IDW functions, is that property P2 is no longer satisfied, with the consequence that the surrogate may extrapolate large values f̂(x) where f(x) is actually small, and vice versa; see the examples plotted in Fig. 1. On the other hand, while differentiable everywhere, RBFs do not necessarily have zero gradients at sample points as in P3, which helps them better approximate the underlying function from limited samples. For the above reasons, we will mostly focus on RBF surrogates in our numerical experiments.

Scaling
To take into account that different components x_j of x may have different ranges u_j − ℓ_j, we simply rescale the variables in optimization problem (1) so that they all range in [−1, 1]. To this end, we first possibly tighten the given box constraints ℓ ≤ x ≤ u. The bounding box B_{ℓg,ug} of X is obtained by solving the following 2n optimization problems

  ℓg_i = min_{x∈X} e_i′x,  ug_i = max_{x∈X} e_i′x,  i = 1, …, n,    (11a)

where e_i is the ith column of the identity matrix. In case of linear inequality constraints X = {x ∈ ℝⁿ ∶ Ax ≤ b}, the problems in (11a) can be solved by linear programming (LP), as shown later in (17). From now on, we assume that ℓ, u are replaced by

  ℓ ← max{ℓ, ℓg},  u ← min{u, ug},    (11b)

where "min" and "max" in (11b) operate component-wise. Next, we introduce scaled variables x̄ ∈ ℝⁿ whose relation with x is

  x̄_j = (x_j − (u_j + ℓ_j)/2) / ((u_j − ℓ_j)/2),  j = 1, …, n,    (12)

and finally formulate the corresponding scaled global optimization problem (13). In case X is a polyhedron we have X̄ = {x̄ ∶ Āx̄ ≤ b̄}, where Ā, b̄ are rescaled versions of A, b defined as

  Ā = A diag((u − ℓ)/2),  b̄ = b − A (u + ℓ)/2,

and diag((u − ℓ)/2) is the diagonal matrix whose diagonal elements are the components of (u − ℓ)/2.
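To make the rescaling described above concrete, the following sketch (helper names are ours, not from the paper) maps a box [ℓ, u] onto [−1, 1]ⁿ and rescales linear constraints Ax ≤ b accordingly:

```python
import numpy as np

def make_scaler(l, u):
    """Return maps between x in [l, u] and x_bar in [-1, 1]^n.

    x_bar_j = (x_j - c_j) / r_j with center c = (u + l)/2 and radius r = (u - l)/2.
    """
    l, u = np.asarray(l, float), np.asarray(u, float)
    c, r = (u + l) / 2.0, (u - l) / 2.0
    to_scaled = lambda x: (np.asarray(x, float) - c) / r
    to_orig = lambda xb: c + r * np.asarray(xb, float)
    return to_scaled, to_orig

def scale_polyhedron(A, b, l, u):
    """Rescale Ax <= b into A_bar x_bar <= b_bar for the scaled variables:
    substituting x = c + diag(r) x_bar gives A_bar = A diag(r), b_bar = b - A c."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    l, u = np.asarray(l, float), np.asarray(u, float)
    c, r = (u + l) / 2.0, (u - l) / 2.0
    return A * r, b - A @ c   # A * r scales each column j of A by r_j
```

The substitution x = c + diag(r) x̄ is exactly the change of variables used in the text, so feasibility is preserved between the original and scaled problems.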
Note that, when approximating f_s with f̂_s, we use squared Euclidean distances in the scaled variables, in which the scaling factor θ_h associated with each component h is the constant determined by the range (u_h − ℓ_h)/2 and the power p_h ≡ 2 is fixed. Therefore, finding a surrogate f̂_s of f_s on [−1, 1]ⁿ is equivalent to finding a surrogate f̂ of f under scaled distances. This is a much simpler scaling approach than computing the scaling factors θ_h and powers p_h as is common in stochastic process model approaches such as Kriging methods [23,34]. As highlighted in [21], Kriging methods use radial basis functions that generalize the Gaussian RBF, in which the scaling factors and powers are recomputed as the data set X changes.
Note also that the approach adopted in [5] for scaling the surrogate function automatically via cross-validation could also be used here, as well as other approaches specific for RBFs such as Rippa's method [33]. In our numerical experiments we have found that adjusting the RBF parameter ε via cross-validation, while increasing the computational effort, does not provide significant benefit. What is in fact most critical is the tradeoff between exploitation of the surrogate and exploration of the feasible set, which we discuss in the next section.

Acquisition function
As mentioned earlier, minimizing the surrogate function to get a new sample x_{N+1} = arg min f̂(x) subject to ℓ ≤ x ≤ u and x ∈ X, evaluating f(x_{N+1}), and iterating over N may easily miss the global minimum of f. This is particularly evident when f̂ is the IDW surrogate (5), which by Property P2 of Lemma 1 attains its global minimum at one of the existing samples x_i. Besides exploiting the surrogate function f̂, when looking for a new candidate optimizer x_{N+1} it is therefore necessary to add to f̂ a term for exploring areas of the feasible space that have not yet been probed.
In Bayesian optimization, such an exploration term is provided by the covariance associated with the Gaussian process. A function measuring the "bumpiness" of a surrogate RBF function was used in [15]. Here instead we propose two functions that provide exploration capabilities, which can be used as alternatives to each other or in a combined way. First, as suggested in [24] for IDW functions, we consider the confidence interval function s ∶ ℝⁿ → ℝ for f̂ defined by

  s(x) = √( Σ_{i=1}^N v_i(x) (f_i − f̂(x))² ).

We will refer to function s as the IDW variance function associated with (X, F); see Fig. 3 for a noise-free example and Fig. 4 for the case of noisy measurements of f.
Second, we consider the IDW distance function z ∶ ℝⁿ → ℝ defined as

  z(x) = 0 if x ∈ {x₁, …, x_N},  z(x) = (2/π) arctan( 1 / Σ_{i=1}^N w_i(x) ) otherwise,    (14)

where w_i(x) is given by either (3a) or (3b). The rationale behind (14) is that z(x) is zero at sampled points and grows in between. The arctangent function in (14) prevents z(x) from growing excessively when x is located far away from all sampled points. Given parameters α, β ≥ 0 and N samples (X, F), we define the following acquisition function a ∶ ℝⁿ → ℝ:

  a(x) = f̂(x) − α s(x) − β ΔF z(x),    (15)

where ΔF = max{max_i f_i − min_i f_i, ε_ΔF} is the range of the observed samples F and the threshold ε_ΔF > 0 is introduced to prevent the case in which f is not a constant function but, by chance, all sampled values f_i are equal. Scaling z by ΔF eases the selection of the hyperparameter β, as the amplitude of ΔF·z(x) is comparable to that of f̂(x).
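Under the definitions above, the acquisition built from the surrogate, the IDW variance s, and the IDW distance z of (14) can be sketched as follows (a minimal illustration assuming the exploration weights are named `alpha` and `beta`, and `f_hat` is any surrogate callable; names are ours):

```python
import numpy as np

def _weights(x, X, variant="exp"):
    d2 = np.sum((X - x) ** 2, axis=1)  # squared distances to all samples
    return np.exp(-d2) / d2 if variant == "exp" else 1.0 / d2

def idw_variance(x, X, F, f_hat, variant="exp"):
    """s(x) = sqrt(sum_i v_i(x) (f_i - f_hat(x))^2); zero at sampled points."""
    d2 = np.sum((X - x) ** 2, axis=1)
    if np.any(d2 == 0.0):
        return 0.0
    w = _weights(x, X, variant)
    v = w / np.sum(w)
    return float(np.sqrt(v @ (F - f_hat(x)) ** 2))

def idw_distance(x, X, variant="exp"):
    """z(x) = (2/pi) arctan(1 / sum_i w_i(x)); zero at sampled points."""
    d2 = np.sum((X - x) ** 2, axis=1)
    if np.any(d2 == 0.0):
        return 0.0
    return float(2.0 / np.pi * np.arctan(1.0 / np.sum(_weights(x, X, variant))))

def acquisition(x, X, F, f_hat, alpha, beta, eps_dF=1e-6):
    """a(x) = f_hat(x) - alpha * s(x) - beta * dF * z(x)."""
    dF = max(F.max() - F.min(), eps_dF)  # range of observed samples, floored
    return f_hat(x) - alpha * idw_variance(x, X, F, f_hat) \
                    - beta * dF * idw_distance(x, X)
```

Both exploration terms vanish at sampled points, so the acquisition coincides with the surrogate there and drops below it elsewhere, rewarding unexplored or uncertain regions.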
As we will detail next, given N samples (X, F), a global minimum of the acquisition function (15) is used to define the (N + 1)th sample x_{N+1} by solving the global optimization problem

  x_{N+1} = arg min_{ℓ ≤ x ≤ u, x ∈ X} a(x).    (16)

The rationale behind choosing (15) for acquisition is the following. The term f̂ directs the search towards a new sample x_{N+1} where the objective function f is expected to be optimal, assuming that f and its surrogate f̂ have a similar shape, and therefore allows a direct exploitation of the samples F already collected. The other two terms account instead for the exploration of the feasible set with the hope of finding better values of f, with s promoting areas in which f̂ is more uncertain and z areas that have not been explored yet. Both s and z provide exploration capabilities, but with an important difference: function z is totally independent of the samples F already collected and promotes a more uniform exploration, while s depends on F and the surrogate f̂. The coefficients α, β determine the desired exploitation/exploration tradeoff.
For the example of the scalar function f in (6) sampled at five random points, the acquisition function a obtained by setting α = 1, β = 1/2, using a thin plate spline RBF with ε_SVD = 10⁻⁶ and w_i(x) as in (3a), together with the corresponding minimum, is depicted in Fig. 6.
The following result, whose easy proof is reported in "Appendix A", highlights a nice property of the acquisition function a.

Lemma 2 Function a is differentiable everywhere on ℝⁿ.
Problem (16) is a global optimization problem whose objective function and constraints are very easy to evaluate. It can be solved very efficiently using various global optimization techniques, either derivative-free [32] or, if X = {x ∶ g(x) ≤ 0} and g is also differentiable, derivative-based. In case some components of vector x are restricted to be integer, (16) can be solved by mixed-integer programming.

Global optimization algorithm
Algorithm 1, that we will refer to as GLIS (GLobal minimum using Inverse distance weighting and Surrogate radial basis functions), summarizes the proposed approach to solve the global optimization problem (1) using surrogate functions (either IDW or RBF) and the IDW acquisition function (15).
Output: Approximation x * = x Nmax of the global optimizer.
As common in global optimization based on surrogate functions, in Step 4 Latin Hypercube Sampling (LHS) [28] is used to generate the initial set X of samples in the given range [ℓ, u]. Note that the generated initial points may not satisfy the inequality constraints x ∈ X. We distinguish between two cases: (i) the objective function f can be evaluated outside the feasible set F; (ii) f cannot be evaluated outside F.
In the first case, initial samples of f falling outside F are still useful to define the surrogate function and can therefore be kept. In the second case, since f cannot be evaluated at initial samples outside F, a possible approach is to generate more than N_init samples and discard the infeasible ones before evaluating f. For example, the author of [6] suggests the simple method reported in Algorithm 2. This requires the feasible set F to be full-dimensional. In case of linear inequality constraints X = {x ∶ Ax ≤ b}, full-dimensionality of the feasible set F can easily be checked by computing the Chebyshev radius r_F of F via the LP [8]

  r_F = max_{x, r} r  subject to  A_i x + ‖A_i‖₂ r ≤ b_i,  i = 1, …, q,    (17)

where in (17) the subscript i denotes the ith row (component) of a matrix (vector). The polyhedron F is full-dimensional if and only if r_F > 0. Clearly, the smaller the ratio between the volume of F and the volume of the bounding box B_{ℓg,ug}, the larger on average will be the number of samples generated by Algorithm 2.
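The oversample-and-discard idea behind Algorithm 2 can be sketched with a basic Latin hypercube generator plus rejection of infeasible points (a plain rejection scheme in the spirit of the method, not necessarily the exact procedure of [6]; function names are ours):

```python
import numpy as np

def lhs(n_pts, l, u, rng):
    """Basic Latin hypercube sample of n_pts points in the box [l, u]:
    one point per stratum in each coordinate, strata visited in random order."""
    n = len(l)
    cols = [rng.permutation(n_pts) for _ in range(n)]          # stratum order per dim
    S = (np.stack(cols, axis=1) + rng.random((n_pts, n))) / n_pts
    return np.asarray(l) + S * (np.asarray(u) - np.asarray(l))

def constrained_lhs(n_init, l, u, g, rng=None):
    """Generate batches of LHS points, keep those with g(x) <= 0, and repeat
    with larger batches until n_init feasible points are collected.
    Requires the feasible set to be full-dimensional, as noted in the text."""
    rng = rng if rng is not None else np.random.default_rng(0)
    feas, n_gen = [], n_init
    while len(feas) < n_init:
        for x in lhs(n_gen, l, u, rng):
            if np.all(np.asarray(g(x)) <= 0.0):
                feas.append(x)
        n_gen *= 2   # oversample more aggressively if too few survived
    return np.array(feas[:n_init])
```

The smaller the feasible fraction of the bounding box, the more batches the loop needs, matching the volume-ratio remark above.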
Note that, as an alternative to LHS, the IDW distance function (14) could also be used to generate N_init feasible points, by solving

  x_{N+1} = arg max_{x ∈ F} z(x)

for N = 1, …, N_init − 1, starting from any x₁ ∈ F.

Algorithm 2 Latin hypercube sampling with constraints
Input: Upper and lower bounds (ℓ, u) for x and inequality constraint function g.

The examples reported in this paper use the Particle Swarm Optimization (PSO) algorithm [41] to solve problem (16) at Step 6.2, although several other global optimization methods such as DIRECT [22] or others [18,32] could be used instead. Inequality constraints X = {x ∶ g(x) ≤ 0} can be handled via penalty functions, for example by replacing (16) with the penalized problem (18), in which a term proportional to Σ_i max{g_i(x), 0}², weighted by a penalty parameter ρ ≫ 1, is added to a(x). Note that, due to the heuristic involved in constructing function a, it is not crucial to find global solutions of very high optimality accuracy when solving problem (16). Regarding feasibility, in case x_{N+1} violates the constraints and f cannot be evaluated outside X, a remedy is to increase the penalty parameter ρ and/or to slightly tighten the constraints by penalizing max{g_i(x) + ε_g, 0} in (18) instead of max{g_i(x), 0}, for some small positive scalar ε_g.
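The penalty reformulation of (18) can be sketched as a wrapper around any acquisition callable; the quadratic penalty on max{g_i(x), 0} and the constant ρ follow the form discussed above (names are ours, for illustration):

```python
import numpy as np

def penalized(a, g, rho=1000.0, eps_g=0.0):
    """Wrap acquisition a(x) with a quadratic penalty on g(x) <= 0.

    rho >> 1 is the penalty weight; a positive eps_g slightly tightens the
    constraints, as suggested in the text when f cannot be evaluated outside X.
    """
    def a_pen(x):
        viol = np.maximum(np.asarray(g(x), float) + eps_g, 0.0)
        return a(x) + rho * np.sum(viol ** 2)  # zero penalty at feasible x
    return a_pen
```

At feasible points the wrapper returns a(x) unchanged, so the unconstrained global optimizer (e.g. PSO) can be applied directly to the penalized objective.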
The exploration parameter α promotes visiting points in [ℓ, u] where the surrogate has largest variance; β promotes instead pure exploration, independently of the surrogate approximation, as it is only based on the sampled points x₁, …, x_N and their mutual distances. For example, if α = 0 and β ≫ 1, Algorithm 1 will try to explore the entire feasible region, with consequent slower detection of points x with low cost f(x). On the other hand, setting β = 0 will make GLIS proceed only based on the function surrogate and its variance, which may lead to missing regions in [ℓ, u] where a global optimizer is located. For α = β = 0, GLIS proceeds by pure minimization of f̂ which, as observed earlier, can easily lead to converging away from a global optimizer. Figure 7 shows the first six iterations of the GLIS algorithm when applied to minimize the function f given in (6) on [−3, 3] with α = 1, β = 0.5.


Computational complexity
The complexity of Algorithm 1, as a function of the number N_max of iterations and dimension n of the optimization space, and not counting the complexity of evaluating f, depends on Steps 6.1 and 6.2. The complexity of the latter depends on the global optimizer used to solve Problem (16) and typically grows quickly with n.
Step 6.1 involves computing N_max(N_max − 1) RBF values φ(ε d(x_i, x_j)), i, j = 1, …, N_max, j ≠ i, computing the SVD decomposition of the N × N symmetric matrix M in (10a), whose complexity is O(N³), and solving the linear system in (10a) (O(N²)) at each step N = N_init, …, N_max.

Numerical tests
In this section we report different numerical tests performed to assess the performance of the proposed algorithm (GLIS) and how it compares to Bayesian optimization (BO). For the latter, we have used the off-the-shelf algorithm bayesopt implemented in the Statistics and Machine Learning Toolbox for MATLAB [39], based on the lower confidence bound acquisition function. All tests were run on an Intel i7-8550 CPU @1.8GHz machine. Algorithm 1 was run in MATLAB R2019b in interpreted code. The PSO solver [42] was used to solve problem (18). We focus our comparison on BO only, as it is one of the most efficient methods to deal with the optimization of expensive black-box functions.

GLIS optimizing its own parameters
We first use GLIS to optimize its own hyperparameters α and β when solving the minimization problem (19) with f(x) as in (6). In optimizing (19), the outer instance of Algorithm 1 operates on the two hyperparameters; the cost (19) is evaluated 100 times, with each evaluation requiring Algorithm 1 to be executed N_t = 20 times to minimize function f, and PSO is used as the global optimizer of the acquisition function. The inverse quadratic RBF is used in both the inner and outer instances of Algorithm 1. The resulting optimal selection is reported in (20). The figure also shows the results obtained by using BO on the same problem.
Clearly the results of the hyper-optimization depend on the function f which is minimized in the inner loop. For a more comprehensive and general optimization of GLIS hyperparameters, one could alternatively consider in f H the average performance with respect to a collection of problems instead of just one problem.

Benchmark global optimization problems
We test the proposed global optimization algorithm on standard benchmark problems, summarized in Table 1. For each function the table shows the corresponding number of variables, upper and lower bounds, and the name of the example in [19] reporting the definition of the function. For lack of space, we only consider the GLIS algorithm implemented using inverse quadratic RBFs for the surrogate, leaving IDW only for exploration, because compared to other RBFs it was experimentally found to be a robust choice. As a reference target for assessing the quality of the optimization results, for each benchmark problem the optimization algorithm DIRECT [22] was used to compute the global optimum of the function through the NLopt interface [20], except for the ackley and stepfunction2 benchmarks, in which PSO is used instead due to the slow convergence of DIRECT on those problems. The corresponding global minima were validated, when possible, against results reported in [19] or, in case of one- or two-dimensional problems, by inspection. Algorithm 1 is run using the inverse quadratic RBF with hyperparameters obtained by dividing the values in (20) by the number n of variables, with the rationale that exploration is more difficult in higher dimensions and it is therefore better to rely more on the surrogate function during acquisition. The threshold ε_SVD = 10⁻⁶ is adopted to compute the RBF coefficients in (10c). The number of initial samples is N_init = 2n.
For each benchmark, the problem is solved N_test = 100 times to collect statistically significant results. The last two columns of Table 1 report the average CPU time spent for solving the N_test = 100 instances of each benchmark using BO and GLIS. As the benchmark functions are very easy to compute, the CPU time spent on evaluating the N_max function values F is negligible, so the time values reported in the table are practically those due to the execution of the algorithms. Algorithm 1 (GLIS) is between 4.6 and 9.4 times faster than Bayesian optimization (about 7 times faster on average). The execution time of GLIS in Python 3.7 on the same machine, using the PSO package pyswarm (https://pythonhosted.org/pyswarm) to optimize the acquisition function, is similar to that of the BO package GPyOpt [38].

Table 1 Benchmark problems considered in the comparison. Last two columns: average CPU time spent on each benchmark for solving the N_test = 100 instances analyzed in Fig. 9 by Bayesian optimization (BO) and GLIS (Algorithm 1).
The results of the tests are reported in Fig. 9, where each plot shows the average function value obtained over the N_test = 100 runs as a function of the number of function evaluations, the band defined by the best-case and worst-case instances, and how the global optimum is approached. In order to test the algorithm in the presence of constraints, we consider the camelsixhumps problem and solve it under the following constraints. Algorithm 1 is run with hyperparameters set by dividing by n = 2 the values obtained in (20), with ε_SVD = 10⁻⁶ and N_init = 2n, for N_max = 20 iterations, with penalty ρ = 1000 in (18). The results are plotted in Fig. 10. The two global minima of the unconstrained function are located at x = (−0.0898, 0.7126) and x = (0.0898, −0.7126).

ADMM hyperparameter tuning for QP
The Alternating Direction Method of Multipliers (ADMM) [7] is a popular method for solving optimization problems such as the following convex Quadratic Program (QP)

  min_z ½ z′Qz + c′z  subject to  Az ≤ b + Sθ,

where z ∈ ℝⁿ is the optimization vector, θ ∈ ℝᵖ is a vector of parameters affecting the problem, A ∈ ℝ^{q×n}, b ∈ ℝ^q, S ∈ ℝ^{q×p}, and we assume Q = Q′ ≻ 0. We consider a randomly generated QP test problem with n = 5, q = 10, p = 3 that is feasible for all θ ∈ [−1, 1]³, whose matrices are reported in "Appendix B" for reference. We set N = 100 in Algorithm 3 and generate M = 2000 samples θ_j uniformly distributed in [−1, 1]³. The aim is to find the ADMM hyperparameters x = [ρ̄ ᾱ]′ (penalty and relaxation parameters) that provide the best QP solution quality after N ADMM iterations. Quality is expressed by the objective function (22), where φ*_j(x), z*_j(x) are the optimal value and optimizer found at run #j, respectively, and φ*(θ_j) is the solution of the QP problem obtained by running the very fast and accurate ODYS QP solver [10]. The first term in (22) measures relative optimality, the second term relative violation of the constraints, and we set λ̄ = 1 to equally weight relative optimality versus relative accuracy. The results of minimizing f in (22) by GLIS and BO are reported in Fig. 11. It is apparent that GLIS attains slightly better function values than BO for the same number of function evaluations, both on average and in the worst case. The resulting hyperparameter tuning that minimizes the selected ADMM performance index (22) is ρ̄ = 0.1566, ᾱ = 1.9498.
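For readers unfamiliar with the inner solver being tuned, a generic textbook ADMM splitting for an inequality-constrained QP (introducing w = Az with w ≤ b) can be sketched as follows; this is not the paper's exact implementation, and the matrices in the test below are hypothetical, chosen only for illustration. The two tuned hyperparameters appear as `rho` (penalty) and `alpha` (over-relaxation):

```python
import numpy as np

def admm_qp(Q, c, A, b, rho=1.0, alpha=1.6, iters=200):
    """ADMM sketch for min 0.5 z'Qz + c'z  s.t.  Az <= b (Q symmetric PD)."""
    n, q = Q.shape[0], A.shape[0]
    z, w, u = np.zeros(n), np.zeros(q), np.zeros(q)
    K = np.linalg.inv(Q + rho * A.T @ A)   # factor once, reuse every iteration
    for _ in range(iters):
        # z-update: minimize augmented Lagrangian in z
        z = -K @ (c + rho * A.T @ (u - w))
        # relaxation step mixing the new Az with the previous w
        Az_rel = alpha * (A @ z) + (1.0 - alpha) * w
        # w-update: projection onto {w <= b}
        w = np.minimum(b, Az_rel + u)
        # scaled dual update
        u = u + Az_rel - w
    return z
```

The speed at which the iterates approach the true solution after a fixed iteration budget depends strongly on `rho` and `alpha`, which is exactly what makes them natural targets for the GLIS tuning described above.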

Conclusions
This paper has proposed an approach based on surrogate functions to address global optimization problems whose objective function is expensive to evaluate, possibly under constraints that are inexpensive to evaluate. Contrary to Bayesian optimization methods, the approach is driven by deterministic arguments, based on radial basis functions (or inverse distance weighting) to create the surrogate, and on inverse distance weighting to characterize the uncertainty between the surrogate and the black-box function to optimize, as well as to promote the exploration of the feasible space. The computational burden associated with the algorithm is lighter than that of Bayesian optimization, while performance is comparable. Clearly, the main limitation of the algorithm is, as for many other black-box global optimization algorithms, the dimension of the optimization vector it can cope with. Current research is devoted to extending the approach to include constraints that are also expensive to evaluate, and to exploring whether performance can be improved by adapting the parameters α and β during the search. Future research should address theoretical convergence issues, by investigating assumptions on the black-box function f and on the parameters α, β, ε_SVD, ε_ΔF > 0 of the algorithm that allow guaranteeing convergence, for example using the arguments in [15] based on the results in [40].
Acknowledgements Open access funding provided by Scuola IMT Alti Studi Lucca within the CRUI-CARE Agreement.

A Proofs
Proof of Lemma 1 Property P1 follows easily from the definition of v_i in (4), and Property P2 from the fact that f̂(x) is a convex combination of the sample values f_i. Regarding P3, consider first the case in which the weights w_i(x) are given by (3b); differentiability of f̂ at the samples can be verified directly, and the case in which w_i(x) are given by (3a) follows similarly, with e^{−t²} replaced by 1. Therefore f̂ is differentiable everywhere and ∇f̂(x_j) = 0 for all j = 1, …, N. ◻

Proof of Lemma 2 Since by Lemma 1 the functions f̂ and v_i are differentiable, ∀i = 1, …, N, it follows immediately that s(x) is differentiable. Regarding differentiability of z, clearly z is differentiable for all x ∉ {x₁, …, x_N}. Let e_h be the hth column of the identity matrix of order n and consider first the case in which w_i(x) are given by (3b); the partial derivatives of z at each x_i exist and are zero. The case in which w_i(x) are given by (3a) follows similarly, with e^{−t²} replaced by 1. Therefore the acquisition function a is differentiable for all α, β ≥ 0. ◻

B Matrices of parametric QP considered in Sect. 6.3