Polarized consensus-based dynamics for optimization and sampling

In this paper we propose polarized consensus-based dynamics in order to make consensus-based optimization (CBO) and sampling (CBS) applicable to objective functions with several global minima or distributions with many modes, respectively. For this, we ``polarize'' the dynamics with a localizing kernel, and the resulting model can be viewed as a bounded confidence model for opinion formation in the presence of a common objective. Instead of being attracted to a common weighted mean as in the original consensus-based methods, which prevents the detection of more than one minimum or mode, in our method every particle is attracted to a weighted mean which gives more weight to nearby particles. We prove that in the mean-field regime the polarized CBS dynamics are unbiased for Gaussian targets. We also prove that in the zero-temperature limit and for sufficiently well-behaved strongly convex objectives the solution of the Fokker--Planck equation converges in the Wasserstein-2 distance to a Dirac measure at the minimizer. Finally, we propose a computationally more efficient generalization which works with a predefined number of clusters and improves upon our polarized baseline method for high-dimensional optimization.


Introduction
Partially motivated by the success of machine learning methods, which involve the minimization of high-dimensional and strongly non-convex objectives, in recent years the interest in consensus-based methods which do not rely on first-order gradient information has constantly increased. Typically, zero-order optimization methods either construct a surrogate of the gradient and then perform a gradient-descent-type update [8] or use a swarm model [18,22] where particles are attracted to the particle in the swarm which has the lowest objective value. Notably, the former approach can lead to problems for strongly non-convex objectives with many critical points. This issue is circumvented by swarm models; however, they typically do not allow for a mean-field description, which makes their mathematical understanding difficult. In contrast, consensus-based methods aim to achieve consensus by letting particles {x^(i)}_{i=1}^J explore the objective landscape while attracting them to the weighted average of their positions with respect to the Gibbs measure π ∝ exp(−βV) of the objective function V : R^d → R, where β > 0 is an inverse heat parameter. This method was first introduced in [20] as consensus-based optimization (CBO). As opposed to most other swarm methods, CBO has a mean-field formulation involving a nonlinear Fokker–Planck equation. In [4] convergence to consensus of this equation was first proved, and [16] showed consensus formation of the discrete particle method. The key property for showing that consensus is achieved close to the global minimum of the objective V is that exp(−βV) concentrates around the global minimizer of V as β → ∞. More recently, [11] presented an improved convergence analysis which weakens some of the assumptions in [4] and directly proves convergence to a point close to the global minimum in the Wasserstein-2 distance.

Following up on the original formulation of CBO, numerous extensions have been suggested. In [26] an additional drift term, modelling the time-average of the personal best of every particle, is proposed. In [7] an extension of the CBO method for constrained problems is suggested, and [6] adapted the method for high-dimensional problems from machine learning by introducing random batching and changing the noise model. In a different line of work, CBO methods on hypersurfaces were studied in [10] and applications to machine learning were investigated in [12]. In [23] CBO was enriched by ensemble-based gradient information, which comes at low computational cost and can improve upon the baseline method. For an overview of recent developments we also refer to the review [25]. Consensus-based methods have also been transferred to sampling. The work [5] proposes a consensus-based sampling (CBS) method by changing the noise term in CBO to include a weighted sample covariance. This prevents a collapse of the ensemble to full consensus, and under suitable assumptions the method was shown to converge to a Gaussian approximation of the distribution exp(−V).
While existing consensus-based methods have proven to work very well for non-convex objectives with many spurious local extreme points, they all suffer from the conceptual drawback that by definition they can approximate at most one minimum, respectively one mode in the context of sampling. Therefore, the goal of this paper is to design consensus-based particle dynamics which support multiple consensus points, or in other words, polarization.
The central idea of the method presented in this paper is based on the following thought experiment: assuming that two clusters of particles have formed, each one centered around a global minimum, we do not want to compute a shared weighted mean of their positions, since this would pull one of the clusters into the other one. Therefore, we replace the weighted mean, an integral component of current consensus-based methods, by a collection of localized means which additionally weight particle positions by their proximity to the considered particle. The localization is achieved by a suitable kernel function. This leads to polarized dynamics in which multiple consensus points can be reached, which opens up the possibility of finding multiple global minima or multiple modes, respectively. Standard CBO, on the other hand, is bound to converge to a single minimum.
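To make the idea of a localized mean concrete, the following minimal sketch computes a kernel-weighted Gibbs mean around a query point using a Gaussian kernel; the function name, signature, and parameter defaults are our own illustrative choices:

```python
import numpy as np

def polarized_mean(x, particles, V, beta=1.0, kappa=1.0):
    """Localized weighted mean around x: nearby particles with low objective
    value get more weight (illustrative sketch with a Gaussian kernel)."""
    d2 = np.sum((particles - x) ** 2, axis=1)            # squared distances to x
    Vvals = np.apply_along_axis(V, 1, particles)          # objective per particle
    logw = -d2 / (2 * kappa**2) - beta * Vvals            # kernel times Gibbs weight
    logw -= logw.max()                                    # stabilize the exponentials
    w = np.exp(logw)
    return (w[:, None] * particles).sum(axis=0) / w.sum()
```

With two well-separated particle clusters, querying near one cluster returns a mean inside that cluster instead of a shared global mean, which is exactly the polarization effect described above.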
As we shall discuss in more detail later, our approach carries strong similarities to bounded confidence models of opinion formation, introduced in [9]; see also [13,15,17]. There, agents only interact with each other if their opinions are sufficiently close, which can eventually lead to the formation of multiple consensus points and is referred to as polarization of opinions in [17].

Main contributions
The focus of this paper is the development of the novel polarized consensus-based dynamics and an extensive numerical evaluation of the method. The mathematical models we propose also raise many new theoretical questions, which will be the subject of future investigations. The main contributions and structure of this paper can be summarized as follows:
• Section 2.1: We propose a novel polarized computation of weighted means for CBO methods.
• Section 2.2: We propose an algorithmic variant which uses a predetermined number of cluster points to compute weighted means.
• Section 2.3: We propose a novel polarized computation of weighted covariances for the CBS method and prove that it is unbiased for Gaussian targets.
• Section 2.4: We prove convergence of the mean-field dynamics in the Wasserstein-2 distance for sufficiently well-behaved objective functions (Theorem 2.5).
• Section 3: We conduct extensive numerical and statistical evaluations of our polarized optimization method, showing that it can find multiple global minima in low- and high-dimensional optimization problems and can even improve upon standard CBO for the detection of one minimum. We also test our polarized CBS method for sampling from mixtures of Gaussians and from a non-Gaussian distribution, where it exhibits better performance than standard CBS.

Models
In this section we describe in detail how standard consensus-based models work and how we generalize them with our polarized approach. For this, we let V : R^d → [0, ∞) be a (possibly non-convex) objective function and β > 0 an inverse heat parameter.

Polarized consensus-based optimization
Given a measure ρ ∈ M(R^d), for standard CBO one defines a weighted mean as

m_β[ρ] := ∫ y exp(−βV(y)) dρ(y) / ∫ exp(−βV(y)) dρ(y). (2.1)

Here and in the rest of the paper all integrals are over R^d. The consensus-based optimization method from [20] then amounts to solving the following system of stochastic differential equations (SDEs) for particles {x^(i)}_{i=1,...,J}:

dx^(i) = −(x^(i) − m_β[ρ̂]) dt + σ |x^(i) − m_β[ρ̂]| dW^(i), (2.2)

where ρ̂ := (1/J) Σ_{j=1}^J δ_{x^(j)}, {W^(i)}_{i=1}^J denote independent Brownian motions, and σ ≥ 0 determines the strength of the randomness in the model. The Fokker–Planck equation associated to (2.2) is the following PDE:

∂_t ρ_t = div((x − m_β[ρ_t]) ρ_t) + (σ²/2) Δ(|x − m_β[ρ_t]|² ρ_t). (2.3)

As explained above, this dynamical system forces particles to collapse to consensus, meaning that under certain conditions on V the empirical measures ρ(t) converge to δ_x̃ as t → ∞, where for β → ∞ the consensus point x̃ converges to the global minimizer of V; see [4,11]. We now explain our polarized modification of CBO for optimizing objective functions with multiple global minima. Given a measure ρ ∈ M(R^d) and a kernel function k : R^d × R^d → [0, ∞), we define the polarized weighted mean

m_{β,k}[ρ](x) := ∫ y k(x, y) exp(−βV(y)) dρ(y) / ∫ k(x, y) exp(−βV(y)) dρ(y). (2.4)

The corresponding polarized optimization dynamics take the form

dx^(i) = −(x^(i) − m_{β,k}[ρ̂](x^(i))) dt + σ |x^(i) − m_{β,k}[ρ̂](x^(i))| dW^(i), (2.5)

where ρ̂ := (1/J) Σ_{j=1}^J δ_{x^(j)}. The Fokker–Planck equation associated to (2.5) is the following PDE:

∂_t ρ_t = div((x − m_{β,k}[ρ_t](x)) ρ_t) + (σ²/2) Δ(|x − m_{β,k}[ρ_t](x)|² ρ_t). (2.6)

Note that the idea of letting the dynamics of particle x^(i) mainly depend on spatially close particles, as modelled through the kernel k, has strong similarities to bounded confidence models of opinion dynamics introduced in [9]. In these models there typically is no objective function to be minimized and the kernel is of the form k(x, y) := 1_{|x−y|≤κ}, where κ > 0 is a so-called confidence level. The simplest such dynamics take the form

ẋ^(i) = −x^(i) + (1/N(x^(i))) Σ_{j : |x^(i)−x^(j)|≤κ} x^(j), (2.7)

where N(x^(i)) := #{1 ≤ j ≤ J : |x^(i) − x^(j)| ≤ κ} denotes the number of points which are not farther than κ away from x^(i). Notably, (2.7) coincides with (2.5) for the special case k(x, y) := 1_{|x−y|≤κ}, a constant objective V ≡ const, and σ = 0. More generally, dynamics of the form (2.5) and (2.7) can be viewed as processes on co-evolving networks or graphs where the weights between different particles x^(i) and x^(j)
depend on the kernel k and the loss function V. We refer to [2,3] for a unified description and the study of mean-field equations for general processes of this form. Furthermore, for V = const and σ = 0, a time discretization of Equation (2.5) via an Euler–Maruyama scheme with step size dt = 1 yields the update

x^(i) ← m_{β,k}[ρ̂](x^(i)),

which is known as the mean-shift algorithm [14,24]. In the following, we would like to discuss two important special cases of our model. If one chooses the kernel k(x, y) = 1 for all x, y ∈ R^d, then the weighted mean m_{β,k}[ρ](x) reduces to the standard CBO mean and our method recovers standard CBO. If instead one chooses a Gaussian kernel k(x, y) = exp(−|x − y|²/(2κ²)), then the weighted mean can be rewritten as

m_{β,k}[ρ](x) = ∫ y exp(−βV_x(y)) dρ(y) / ∫ exp(−βV_x(y)) dρ(y). (2.8)

In this case our method can be regarded as standard CBO applied to a spatially varying quadratic regularization of the objective function, y ↦ V_x(y), defined as

V_x(y) := V(y) + |x − y|²/(2βκ²).

Note that the central difference between the standard method and ours is that the weighted mean (2.4) depends on the particle position x and is not the same for all particles. Especially for high-dimensional problems it was shown in [6] that the performance of CBO can be significantly improved by using a coordinate-wise noise model. To be precise, they suggest the following replacement of the noise term in (2.5):

σ |x^(i) − m_{β,k}[ρ̂](x^(i))| dW^(i)  →  σ Σ_{n=1}^d (x^(i) − m_{β,k}[ρ̂](x^(i)))_n e⃗_n dW_n^(i), (2.9)

where e⃗_n denotes the n-th unit vector in R^d. This changes the Laplacian term in the corresponding Fokker–Planck equation (2.6) to

(σ²/2) Σ_{n=1}^d ∂_n² ((x − m_{β,k}[ρ_t](x))_n² ρ_t).

For computing an approximate solution of the stochastic CBO dynamics with either of the two discussed noise models, we employ a standard Euler–Maruyama scheme, which we summarize in Algorithm 1.
There the function ComputeMean determines the precise variant of CBO that is being used. In Algorithm 2 we specify this function for the proposed polarized CBO, which reduces to standard CBO for a constant kernel k ≡ 1. In the next section we will discuss an algorithmic variant (see Algorithm 3) of the mean computation (2.4) which is computationally more efficient and exhibits empirical advantages in high dimensions.
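A compact sketch of such an Euler–Maruyama discretization of the polarized dynamics with the isotropic noise model could look as follows; all names and default parameters are our own choices, not the authors' reference implementation:

```python
import numpy as np

def polarized_cbo(V, x0, beta=1.0, kappa=1.0, sigma=0.5, dt=0.01, steps=500, seed=0):
    """Euler-Maruyama time stepping for polarized CBO (illustrative sketch).
    Each particle is attracted to its own kernel-localized weighted mean."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    J, d = x.shape
    for _ in range(steps):
        Vx = np.array([V(xi) for xi in x])
        # pairwise Gaussian kernel weights combined with Gibbs weights
        d2 = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
        logw = -d2 / (2 * kappa**2) - beta * Vx[None, :]
        logw -= logw.max(axis=1, keepdims=True)           # numerical stabilization
        w = np.exp(logw)
        m = w @ x / w.sum(axis=1, keepdims=True)          # m[i] = mean seen by particle i
        noise = sigma * np.linalg.norm(x - m, axis=1, keepdims=True) \
                * rng.normal(size=(J, d)) * np.sqrt(dt)
        x = x + (m - x) * dt + noise
    return x
```

For a very wide kernel the per-particle means nearly coincide and the scheme behaves like standard CBO, collapsing the ensemble toward the global minimizer of a convex objective.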

Cluster-based model
In this section we propose an algorithmic alternative to the weighted mean defined in (2.4), with the motivation of making the computation of the weighted means more efficient and of encouraging polarizing effects between the different means. Given particles x^(i) for i = 1, . . ., J, standard CBO uses one weighted mean, whereas our polarized version uses J weighted means, one per particle.

Algorithm 1 General consensus-based optimization
Input: time step size dt > 0, initial particles {x^(j)}_{j=1}^J, diffusion parameter σ ≥ 0, noise model M; for i = 1, . . ., J each particle is updated with an Euler–Maruyama step around the mean returned by ComputeMean.

Algorithm 2 ComputeMean for polarized CBO

As an alternative model we consider cluster means c^(j) for j = 1, . . ., J_c, where J_c ≤ J. We encode the probability that the particle x^(i) belongs to the cluster mean c^(j) with p_ij > 0. Given initial probabilities p_ij > 0, we perform the following iterative update for each j = 1, . . ., J_c:

r_ij := (p_ij / max_{1≤l≤J_c} p_il)^α. (2.10)

The new values of the probabilities p_ij are then obtained by renormalization over j, i.e.,

p_ij := r_ij k(x^(i), c^(j)) / Σ_{l=1}^{J_c} r_il k(x^(i), c^(l)), (2.11)

and the cluster means are then updated via

c^(j) := Σ_{i=1}^J p_ij exp(−βV(x^(i))) x^(i) / Σ_{i=1}^J p_ij exp(−βV(x^(i))). (2.12)

Here, α ≥ 0 is a discounting coefficient. The interpretation of the above scheme is as follows: if the particle x^(i) identifies the cluster center c^(j) as "belonging to it the most", then r_ij = 1 and the probability that x^(i) indeed belongs to c^(j) is only determined by the spatial proximity, encoded through k(x^(i), c^(j)). If, however, it "feels more dedicated" to another cluster mean c^(l) (different from c^(j)), then p_ij < p_il and thus r_ij < 1. This results in a correction of the particle-cluster correspondence, weakening the bond of particle i to cluster j in favor of its preferred cluster. The exponent α ≥ 0 determines the strength of this additional polarization incentive, with larger α leading to stronger polarization. In order to obtain the individual mean for each particle (as in the polarized CBO scheme) we simply compute

m[x^(i)] := Σ_{j=1}^{J_c} p_ij c^(j). (2.13)

We summarize the cluster-based mean computations in Algorithm 3, which can be used in place of Algorithm 2 in the CBO scheme of Algorithm 1.
Algorithm 3 ComputeMean for the cluster-based method (discount the probabilities for the other clusters, then update the cluster centers)

Note that for α = ∞ we have r_ij = 1 if j = arg max_l p_il and r_ij = 0 otherwise. In this case the assignment becomes hard as κ → 0: for very small values of κ each point x^(i) is assigned to a single cluster c^(j) based purely on spatial proximity, which is reminiscent of the k-means algorithm.
It is important to initialize all quantities correctly in order to obtain a meaningful algorithm. If one naively initializes p_ij := 1/J_c for all i = 1, . . ., J and computes initial cluster centers via (2.12), then c^(j) equals the standard CBO weighted mean for all j. Correspondingly, the probability updates (2.10) and (2.11) will leave p_ij untouched, and the method reduces precisely to standard CBO. Therefore, we initialize the probabilities randomly by drawing p̃_ij ∼ Unif(0, 1) and normalizing with (2.11). Then we compute initial cluster means using (2.12).
Let us also mention that the complexity of CBO where means are computed with Algorithm 3 is of order O(J · J_c), which is significantly smaller than the O(J²) complexity of CBO with the baseline polarized mean computation from Algorithm 2. This is due to the fact that the cluster-based method models consensus and polarization of individuals with respect to existing opinionated groups, with the cluster centers as surrogates for these groups, whereas the polarized CBO approach tracks all particles' interactions with each other.
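Under our reading of the update rules above, one step of the cluster-based mean computation can be sketched as follows; the function name, vectorization, and defaults are our own:

```python
import numpy as np

def cluster_means_step(x, c, p, V, beta=1.0, kappa=1.0, alpha=5.0):
    """One update of cluster probabilities and centers with a Gaussian kernel
    (illustrative sketch of the discount / renormalize / recenter scheme)."""
    # Gaussian kernel between particles and cluster centers
    k = np.exp(-np.sum((x[:, None, :] - c[None, :, :]) ** 2, axis=-1) / (2 * kappa**2))
    r = (p / p.max(axis=1, keepdims=True)) ** alpha   # discount non-preferred clusters
    p_new = r * k
    p_new /= p_new.sum(axis=1, keepdims=True)         # renormalize over clusters
    gibbs = np.exp(-beta * np.array([V(xi) for xi in x]))
    wz = p_new * gibbs[:, None]
    c_new = (wz.T @ x) / wz.sum(axis=0)[:, None]      # Gibbs-weighted cluster centers
    m = p_new @ c_new                                 # individual mean per particle
    return p_new, c_new, m
```

With two well-separated particle groups and centers seeded near each group, one step already snaps each center onto its group, illustrating the O(J · J_c) cost per step.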
Last, we demonstrate how to obtain an SDE and mean-field interpretation of Algorithm 3, where for conciseness we restrict ourselves to the case α = 0. In this case (2.11) and (2.12) reduce to

p_ij = k(x^(i), c^(j)) / Σ_{l=1}^{J_c} k(x^(i), c^(l)),  c^(j) = Σ_{i=1}^J p_ij exp(−βV(x^(i))) x^(i) / Σ_{i=1}^J p_ij exp(−βV(x^(i))).
This allows us to express the cluster-based mean computation with α = 0 as a coupled system (2.14) consisting of the particle SDE, the probabilities p_ij, and a fixed point equation for the cluster means c^(j). Note that Algorithm 3 approximates the fixed point equation for c^(j) with one iteration of the fixed point map. The corresponding mean-field system is readily obtained. We expect that one can similarly derive an SDE and mean-field interpretation of the model with α > 0, but we do not pursue this here.

Polarized consensus-based sampling
The last model variant that we consider here is an application to sampling. In [5] a sampling version of CBO was proposed and termed consensus-based sampling (CBS). Defining a weighted covariance matrix as

C_β[ρ] := ∫ (y − m_β[ρ]) ⊗ (y − m_β[ρ]) exp(−βV(y)) dρ(y) / ∫ exp(−βV(y)) dρ(y),

CBS aims to sample from the measure exp(−V) by solving the following system of SDEs:

dx^(i) = −(x^(i) − m_β[ρ̂]) dt + sqrt(2 λ^{−1} C_β[ρ̂]) dW^(i). (2.17)

Here the parameter λ interpolates between an optimization method (λ = 1) and a sampling method (λ = (1 + β)^{−1}). For the latter scaling of λ a collapse of ρ is avoided and ρ samples from exp(−V) if this measure is Gaussian. The Fokker–Planck equation associated with (2.17) is given by

∂_t ρ_t = div((x − m_β[ρ_t]) ρ_t) + λ^{−1} Σ_{m,n=1}^d ∂_m ∂_n ((C_β[ρ_t])_{mn} ρ_t). (2.18)

We can polarize CBS by using the mean from (2.4) to define a weighted covariance, as follows:

C_{β,k}[ρ](x) := ∫ (y − m_{β,k}[ρ](x)) ⊗ (y − m_{β,k}[ρ](x)) k(x, y) exp(−βV(y)) dρ(y) / ∫ k(x, y) exp(−βV(y)) dρ(y) (2.19)

for x ∈ R^d. The corresponding polarized CBS dynamics are then given by

dx^(i) = −(x^(i) − m_{β,k}[ρ̂](x^(i))) dt + sqrt(2 λ^{−1} C_{β,k}[ρ̂](x^(i))) dW^(i), (2.20)

where ρ̂ := (1/J) Σ_{j=1}^J δ_{x^(j)}. The associated Fokker–Planck equation (2.21) is obtained from (2.18) by replacing m_β[ρ_t] and C_β[ρ_t] with their polarized counterparts m_{β,k}[ρ_t](x) and C_{β,k}[ρ_t](x).

Proposition 2.1 (Unbiasedness for Gaussian targets). Let k(x, y) := exp(−½ (x − y)ᵀ Σ₁^{−1} (x − y)) for a symmetric and positive definite covariance matrix Σ₁ ∈ R^{d×d}, and let V(y) := ½ (y − m)ᵀ Σ^{−1} (y − m) for some m ∈ R^d and a symmetric and positive definite covariance matrix Σ ∈ R^{d×d}. Then ρ⋆ := N(m, Σ) is a stationary solution of the Fokker–Planck equation (2.21) for any β > 0 and for λ = (1 + β)^{−1}.
Proof. We use the formula for the product of two Gaussian densities, which can be found, for instance, in [19]: for every x ∈ R^d the function y ↦ k(x, y) exp(−βV(y)) ρ⋆(y) is, up to a normalization constant c_x > 0, again a Gaussian density whose mean and covariance can be expressed explicitly in terms of x, m, Σ, Σ₁, and β. Using this, we can compute the weighted mean m_{β,k}[ρ⋆](x) in closed form, and similarly the weighted covariance C_{β,k}[ρ⋆](x). Combining the two expressions, inserting the definition of ρ⋆, and using λ = (1 + β)^{−1}, one checks that the drift and diffusion terms in the Fokker–Planck equation (2.21) cancel. This proves that ρ⋆ is a stationary solution of the Fokker–Planck equation (2.21).
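For reference, the Gaussian product identity invoked in the proof can be written as follows (a standard formula; the labels m₃, Σ₃ are ours):

```latex
\mathcal{N}(x; m_1, \Sigma_1)\,\mathcal{N}(x; m_2, \Sigma_2)
  = c\,\mathcal{N}(x; m_3, \Sigma_3),
\qquad
\Sigma_3 := \bigl(\Sigma_1^{-1} + \Sigma_2^{-1}\bigr)^{-1},
\quad
m_3 := \Sigma_3\bigl(\Sigma_1^{-1} m_1 + \Sigma_2^{-1} m_2\bigr),
```

where c > 0 is a normalization constant depending on m₁, m₂, Σ₁, Σ₂ but not on x.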

Towards mean-field analysis
In this section we will make some remarks on the analysis of the Fokker–Planck equation (2.6) for polarized CBO, which we repeat here for convenience:

∂_t ρ_t = div((x − m_{β,k}[ρ_t](x)) ρ_t) + (σ²/2) Δ(|x − m_{β,k}[ρ_t](x)|² ρ_t).

We shall first explain why current analytical frameworks for the mean-field analysis of consensus-based optimization do not carry over to this equation. Then, we shall prove convergence as t → ∞ of solutions of this equation to a singular measure located at a minimizer, restricting ourselves to the zero-temperature limit β → ∞ and to sufficiently nice objective functions V. Weak solutions of this equation are continuous curves of probability measures t ↦ ρ_t for which the weak formulation holds for all smooth and compactly supported test functions φ ∈ C_c^∞(R^d). Using the Leray–Schauder fixed point theorem, existence proofs for this equation without a kernel, i.e., k(x, y) = 1, were given in [4,11] under mild assumptions on the objective V and for initial distributions with finite fourth-order moment. We expect that under reasonable Lipschitz-like assumptions on the logarithm of the kernel these arguments translate to our case, but we leave this for future work. The biggest challenge is that for polarized CBO the weighted mean is no longer a single point in R^d; rather, it is a curve in a space of vector fields. This requires more sophisticated compactness arguments than the ones made in [4,11]. Besides existence, in the literature there exist two different approaches to proving formation of consensus around the global minimizer of V for the Fokker–Planck equation (2.3) associated to standard CBO. The first one was presented in [4] and constitutes a two-step approach. First, they prove that the non-weighted standard variance ∫ |y − E(ρ_t)|² dρ_t(y), where E(ρ_t) := ∫ y dρ_t(y), decreases to zero along solutions ρ_t of (2.3). This implies that ρ_t converges to a Dirac measure δ_x̃ concentrated at some point x̃ ∈ R^d. Second, the Laplace principle is invoked in order to conclude that x̃ lies close to the global minimizer of V if β is chosen sufficiently large. This analytical approach uses
heavily that the weighted mean m_β[ρ_t] for CBO does not depend on the spatial variable x, which is a linearity property that our weighted mean m_{β,k}[ρ_t](x) does not enjoy. Furthermore, by design our method does in general not converge to a single Dirac mass and hence its classical variance does not converge to zero. A different approach is presented in [11], where the authors propose a more unified strategy. For this they fix a point x̃ ∈ R^d (which later will be the global minimizer of V) and define the variance-type function

V[ρ_t] := ½ ∫ |y − x̃|² dρ_t(y).

The authors note that V[ρ_t] = ½ W₂²(ρ_t, δ_x̃), and so convergence of this variance to zero implies convergence of ρ_t to δ_x̃ in the Wasserstein-2 distance. They derive a differential inequality for V[ρ_t] which, in combination with the Laplace principle, allows them to show a kind of semi-convergence behavior: for every ε > 0 there exists β > 0 such that V[ρ_t] decreases exponentially fast until it hits the threshold V[ρ_t] ≤ ε.
When it comes to generalizing this approach to our setting there are two main obstacles. First, in the case of several global minimizers {x̃_i : 1 ≤ i ≤ N} one has to control the Wasserstein-2 distance between ρ_t and the empirical measure (1/N) Σ_{i=1}^N δ_{x̃_i}. A variance-type function centered at a single point is a very bad upper bound for this distance since it is bounded from below by a positive number.
In the following, we therefore present an alternative approach for analyzing convergence of the Fokker–Planck equation (2.6), which is based on choosing the Lyapunov function

L[ρ] := ½ ∫ |x − m_{β,k}[ρ](x)|² dρ(x);

the reasons for this choice will become clear in a moment. We consider the setting of a Gaussian kernel k(x, y) of variance κ²/β for κ > 0 and consider the limit as β → ∞. The case β < ∞ can then be treated using the quantitative Laplace principle as, for instance, in [11]. Let us assume that the support of ρ equals R^d. In this setting the Laplace principle implies that for β → ∞ one has

m_{β,k}[ρ](x) → arg min_y ( |x − y|²/(2κ²) + V(y) ),

where we assume that κ is sufficiently small such that y ↦ |x − y|²/(2κ²) + V(y) has a unique minimizer for every x ∈ R^d. Note that this is possible, for instance, if V is C² and the smallest eigenvalue of its Hessian matrix is bounded from below. Therefore, we consider the limiting dynamics as β → ∞, governed by the drift field x − p(x), where

p(x) := arg min_y ( |x − y|²/(2κ²) + V(y) )

is known as the proximal operator of κ²V. Correspondingly, the Lyapunov function becomes

L[ρ] = ½ ∫ |x − p(x)|² dρ(x),

and we would like to emphasize that, using properties of the proximal operator, one has x − p(x) = κ²∇V(p(x)). The following analysis treats the model case where the loss function V is strongly convex, sufficiently smooth, and additionally satisfies a certain derivative bound in case we consider a Fokker–Planck equation with non-vanishing diffusion term. While this is just a first step toward a comprehensive analysis of the polarized CBO method for much larger classes of functions, the following results introduce a set of important techniques which, we believe, will be a cornerstone for future analysis. The core ideas of the following analysis are based on discussions of the first author with Massimo Fornasier and Oliver Tse, and an extension to non-convex loss functions and β < ∞ is ongoing work. We start with the following decay property of L[ρ_t] for solutions of the associated Fokker–Planck equation:

Proposition 2.2 (Exponential decay of the Lyapunov function). Let t ↦ ρ_t be a weak solution of the
Fokker–Planck equation (2.24) associated with the limiting drift x − p(x). We pose the following assumptions on the loss function V: it is sufficiently smooth and there exists µ > 0 such that D²V ≽ µI; for σ > 0 we additionally assume a uniform bound on its third derivatives. Then there exist constants C₁, C₂ > 0 such that, if σ < C₁, it holds for all t > 0 that (d/dt) L[ρ_t] ≤ −C₂ L[ρ_t], and consequently L[ρ_t] ≤ exp(−C₂ t) L[ρ₀].

Proof. Differentiating L[ρ_t] in time and using the weak form of the PDE yields an expression (2.27) consisting of a drift term and a diffusion term. We continue by computing the derivatives that appear in this expression. First, it holds

∇ ½|x − p(x)|² = (I − Dp(x))ᵀ (x − p(x)). (2.28)

Next we observe that by definition of the proximal operator p(x) it holds

p(x) + κ²∇V(p(x)) = x. (2.29)

Differentiating this equation yields (I + κ²D²V(p(x))) Dp(x) = I and therefore

Dp(x) = (I + κ²D²V(p(x)))^{−1}, (2.31)

which is a symmetric matrix. Using (2.28) we can rewrite the drift term in (2.27). For estimating this expression from above it suffices to bound the eigenvalues of M := I − Dp(x) from below. By assumption we have D²V ≽ µI, which implies M ≽ (1 − 1/(1 + κ²µ)) I = (κ²µ/(1 + κ²µ)) I, and therefore we can bound the drift term in (2.27) by a negative multiple of L[ρ_t]. If σ = 0 we can already conclude the proof using Gronwall's inequality. For σ > 0 we have to bound the second term in (2.27), coming from the diffusion. Using (2.28) and the product rule, the Laplacian Δ ½|x − p(x)|² can be expressed in terms of Dp(x) and the second derivatives of p, and it remains to bound the term involving the latter. Writing the differentiated optimality condition in coordinates and taking another derivative with respect to the i-th variable (using the product rule) yields a linear system for the second derivatives of p, which can be solved explicitly in terms of the matrix A := I + κ²D²V(p(x)). Using this representation together with (2.31), we obtain an estimate in terms of matrix/tensor norms of the second and third derivatives of V, multiplied by a dimensional constant C_d. By assumption the right-hand side is uniformly bounded by some constant C > 0, and going back to (2.27) we obtain a differential inequality for L[ρ_t]. Since µ > 0, we can choose σ sufficiently small such that the brackets are strictly positive. Then we can conclude the proof using Gronwall's inequality.
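The proximal operator p(x) appearing in the limiting drift can also be evaluated numerically. The following sketch uses plain gradient descent with central finite-difference gradients; the function name and parameters are our own illustrative choices:

```python
import numpy as np

def prox(V, x, kappa, iters=300, lr=0.1):
    """Numerically evaluate p(x) = argmin_y |x - y|^2 / (2 kappa^2) + V(y)
    by gradient descent (illustrative sketch, not an optimized solver)."""
    x = np.asarray(x, dtype=float)
    y = x.copy()
    h = 1e-6
    for _ in range(iters):
        # central finite-difference gradient of V at y
        grad_V = np.array([(V(y + h * e) - V(y - h * e)) / (2 * h)
                           for e in np.eye(y.size)])
        grad = (y - x) / kappa**2 + grad_V   # gradient of the prox objective
        y = y - lr * grad
    return y

# For the strongly convex model case V(y) = (mu/2)|y|^2 the proximal point is
# available in closed form, p(x) = x / (1 + kappa^2 mu), so the limiting drift
# x - p(x) contracts with factor kappa^2 mu / (1 + kappa^2 mu), matching the
# eigenvalue bound used in the decay estimate above.
```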
Example 2.3 (The one-dimensional case). In the case of one spatial dimension d = 1, we can bound the Laplacian Δ ½|x − p(x)|², which in this case is just the second derivative, more accurately: the resulting estimate is expressed in terms of κ⁴ and the product V‴V′. We give three examples of loss functions V which satisfy all assumptions of Proposition 2.2. The first one is the quadratic loss V₁(x) = ax² + bx + c with a > 0 and b, c ∈ R, for which V₁‴ = 0 and hence the Laplacian is bounded by 1. The second one satisfies the (loose) bound |V₂‴ V₂′| ≤ 1, so a valid bound for the Laplacian is κ⁴ + 1. The third loss is V₃(x) = x² + x + 1 − (x + 1) ln(x + 1) for x ≥ 0, extended to an even function on R. Also here one has the (loose) bound |V₃‴ V₃′| ≤ 1 and again a valid bound for the Laplacian is κ⁴ + 1.
Next we prove a compactness property for measures for which L[ρ_t] is uniformly bounded in time. For this we also require that the loss V has a Lipschitz continuous gradient, which is a standard assumption in nonlinear optimization.
Proposition 2.4 (Compactness). Assume that L[ρ_t] is uniformly bounded in t and that ∇V is Lipschitz continuous. Then there exists a subsequence (ρ_{t_n})_{n∈N} with t_n → ∞ as n → ∞ which converges to a probability measure ρ∞ in the Wasserstein-2 distance.
Proof. Let x* := arg min V. By strong convexity it holds |∇V(y)| ≥ µ|y − x*| for all y, which implies |p(x) − x*| ≤ |∇V(p(x))|/µ. By definition of p(x) it holds κ²∇V(p(x)) = x − p(x) and therefore |p(x) − x*| ≤ |x − p(x)|/(κ²µ). Hence, we have shown that the distance of p(x) to x* is controlled by |x − p(x)|. Using this, the inequality |a + b|² ≤ 2|a|² + 2|b|² (2.36), and the assumed uniform bound on L[ρ_t], a change of variables implies that the second moments of the push-forward measures p♯ρ_t are uniformly bounded. Therefore, by compactness, a subsequence (p♯ρ_{t_n})_{n∈N} of the push-forward measures converges to some probability measure σ∞ in the Wasserstein-2 distance as n → ∞.
First, we claim that under our assumptions the map x ↦ p(x) is a bi-Lipschitz homeomorphism with Lipschitz continuous inverse y ↦ p⁻¹(y) := y + κ²∇V(y). The fact that these two maps are Lipschitz continuous is obvious: the proximal operator of a convex function is 1-Lipschitz, and the map p⁻¹ is Lipschitz because ∇V is Lipschitz. Furthermore, we observe that by the optimality condition κ²∇V(p(x)) = x − p(x) it holds p⁻¹(p(x)) = p(x) + κ²∇V(p(x)) = x, so p⁻¹ is a left inverse. To see that also p(p⁻¹(x)) = x we note that p(p⁻¹(x)) = arg min_y ( |p⁻¹(x) − y|²/(2κ²) + V(y) ). This is a strongly convex optimization problem and hence the optimality conditions are necessary and sufficient. They have the unique solution y = x and therefore p(p⁻¹(x)) = x, which shows that p⁻¹ is truly the inverse of p.
Next, we claim that the measures ρ_{t_n} converge to ρ∞ := p⁻¹♯σ∞ in the Wasserstein-2 distance as n → ∞. To see this, let π_n denote an optimal coupling of p♯ρ_{t_n} and σ∞. We define π̃_n := (p⁻¹ × p⁻¹)♯π_n and first argue that this is a coupling of ρ_{t_n} and ρ∞. To see this, we compute, for every Borel set A ⊂ R^d, that the first marginal of π̃_n applied to A equals p♯ρ_{t_n}(p(A)) = ρ_{t_n}(A), where we used that p is invertible and that the first marginal of π_n is p♯ρ_{t_n}. Hence, the first marginal of π̃_n is ρ_{t_n}. Similarly, the second marginal can be shown to be ρ∞, and hence π̃_n is a coupling. We therefore obtain W₂²(ρ_{t_n}, ρ∞) ≤ ∫ |p⁻¹(x) − p⁻¹(y)|² dπ_n(x, y) ≤ Lip(p⁻¹)² W₂²(p♯ρ_{t_n}, σ∞) → 0, where we used the Lipschitzness of ∇V, which implies Lipschitzness of p⁻¹.
Theorem 2.5. Under the conditions of Propositions 2.2 and 2.4 and assuming L[ρ₀] < ∞, it holds that ρ_t → δ_{x*} in the Wasserstein-2 distance as t → ∞, where x* := arg min V.
Proof. By Proposition 2.2 it holds L[ρ_t] → 0 and, in particular, t ↦ L[ρ_t] is uniformly bounded. Hence, one can apply Proposition 2.4 to obtain that a subsequence of ρ_t converges to some probability measure ρ∞ in the Wasserstein-2 distance and hence also weakly. Since ρ ↦ L[ρ] is the integral of the continuous and lower-bounded function x ↦ ½|x − p(x)|² against ρ, it is lower semicontinuous with respect to weak convergence of measures. It follows that L[ρ∞] = 0 and hence ρ∞ = δ_{x*}, where x* is the global minimizer of V. The uniqueness of the minimizer implies that the whole curve ρ_t converges to δ_{x*}, and this concludes the proof.

Numerical examples
In this section we evaluate the numerical performance of the proposed algorithms. In all our experiments we choose a time step parameter of dt = 0.01. The code to reproduce all numerical experiments is available on GitHub.

Unimodal Ackley function
In the first example we perform a consistency check of our method for finding the unique global minimum of the Ackley function [1], defined as

U(x) = −20 exp(−0.2 sqrt((1/d) Σ_{i=1}^d x_i²)) − exp((1/d) Σ_{i=1}^d cos(2π x_i)) + 20 + e, (3.1)

which attains its global minimum U(0) = 0 at x = 0. We use a Gaussian kernel with standard deviation κ = ∞ for standard CBO and κ = 1 for polarized CBO. Furthermore, we choose β = 1. In this simple situation with a unique global minimum we observe that both standard and polarized CBO find the global minimum and do not get stuck in local minima. Notably, the polarized variant converges slightly more slowly than standard CBO, which is due to the localization effect of the kernel with a relatively small standard deviation.
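For reference, a direct implementation of the Ackley function with the common constants 20, 0.2, and 2π (assumed here) is:

```python
import numpy as np

def ackley(x):
    """Standard Ackley function; with the usual constants its unique global
    minimum is U(0) = 0."""
    x = np.asarray(x, dtype=float)
    d = x.size
    s1 = np.sqrt(np.sum(x**2) / d)                 # root-mean-square term
    s2 = np.sum(np.cos(2.0 * np.pi * x)) / d       # oscillatory term
    return -20.0 * np.exp(-0.2 * s1) - np.exp(s2) + 20.0 + np.e
```

The many local minima created by the cosine term make this a standard stress test for gradient-free methods.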

Multimodal Rastrigin function
In this example we evaluate different choices of kernel functions for minimizing a Rastrigin-type function with three global minima. The original Rastrigin function [21] on R^d is defined as

U(x) = Σ_{i=1}^d (x_i² − 10 cos(2π x_i) + 10)

and has a global minimum at x = 0 with U(0) = 0. In this experiment we choose d = 2 and minimize a product of shifted Rastrigin functions possessing three global minima; the corresponding results of our method are depicted in Figure 5. Again, we choose β = 1. The kernel parameters κ were chosen sufficiently small for the methods to detect all three minima, which are marked with blue diamonds. While the Gaussian and the Laplace kernel work similarly well (note, however, that the Laplace kernel needs a much smaller value of κ than the Gaussian), the bounded confidence kernel (cf. the discussion in Section 2.1) works suboptimally for the task of minimization. While it manages to detect all three minima, a lot of particles get stuck in suboptimal consensus points, which can be explained by the fact that the kernel has compact support.
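A direct implementation of the Rastrigin function with the usual constant A = 10 (assumed here) reads:

```python
import numpy as np

def rastrigin(x):
    """Standard Rastrigin function with A = 10; global minimum U(0) = 0."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))
```

At integer points the cosine term contributes nothing, so the function value reduces to the sum of squared coordinates there, which makes the landscape of local minima easy to reason about.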

Multimodal Ackley function
We proceed with quantitative evaluations of our method for a multimodal version of the Ackley function (3.1), defined such that the points {z_i ∈ R^d : i = 1, . . ., N} constitute the global minimizers of V, where U is the standard Ackley function defined in (3.1). In Table 1 we report how many of the three minima in dimension d = 2 were detected by the proposed polarized CBO method with a Gaussian kernel and different values of the standard deviation κ; that is, we show the percentage of runs that detected at least 1, 2, or 3 minima. Here, we employed the standard noise model as specified in (2.5) with σ = 1.0 and β = 1.0. For completeness we also include results for standard CBO (i.e., κ = ∞), which by definition can detect at most one minimum. We say that a method detects a minimum if at convergence there exists a weighted mean m_{β,k}[ρ](x) which is closer than 0.25 in the infinity norm to the minimum. For standard CBO, where m_{β,k}[ρ](x) = m_β[ρ] for all x, this coincides with the definition of success in [5,20].

While polarized CBO works very well in the two-dimensional example, Table 2 shows that in dimension d = 10 it fails to detect more than one minimum. However, the cluster-based method from Algorithm 3 exhibits significantly improved behavior and manages to detect all three minima frequently. Here, we employed the coordinate-wise noise model from (2.9) with σ = 7.5. Additionally, we employed a simple scheduling for the parameter β: we start with β = 30 and increase it in each step via β ← 1.01 · β up to a limit of β_max = 10⁷. One could potentially employ more sophisticated approaches here, as proposed in [5]. For the cluster-based method we chose α = 5.0 in Algorithm 3. Even for κ = ∞, where the kernel has no influence anymore, the method works very well; note that this case does not correspond to standard CBO. Furthermore, Table 2 shows that our polarized method can outperform standard CBO at finding at least one minimum, albeit at the cost of higher complexity.
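The detection rule described above (a computed weighted mean within 0.25 of a minimizer in the infinity norm) can be sketched as follows; the helper name is our own:

```python
import numpy as np

def detected_minima(means, minimizers, tol=0.25):
    """Count how many minimizers have at least one weighted mean within `tol`
    in the infinity norm (the detection rule used in the experiments above)."""
    means = np.atleast_2d(means)
    hits = 0
    for z in np.atleast_2d(minimizers):
        dist = np.max(np.abs(means - z), axis=1)   # infinity-norm distances
        if np.min(dist) <= tol:
            hits += 1
    return hits
```

Running this over many random seeds and reporting the fraction of runs with at least 1, 2, or 3 hits reproduces the structure of the tables discussed above.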
We also test the cluster method in d = 30 dimensions; the results can be found in Table 3. We observe that it is harder to find multiple minima; however, for J = 1600 the method finds at least two minima in over 50% of the runs for κ ≥ 1. For smaller particle numbers the algorithm performs better for κ ≤ 1, although the percentage of runs where multiple minima are found is very low.


Multimodal sampling
In this section we consider the task of sampling from a bimodal mixture of Gaussians, where the tuples a = (a_1, a_2) and b = (b_1, b_2) determine the centers of the clusters. In Figure 6 we plot the result of standard CBS and our polarized variant using Gaussian kernels with three different standard deviations κ ∈ {0.4, 0.6, 0.8}. For both methods we choose β = 1. Note that, at least for convex potentials V with bounded Hessian, standard CBS is known to exhibit a Gaussian steady state. Being designed as a method for unimodal sampling, there is not much hope that CBS can work in this multimodal situation; still, we include CBS results for comparison. In contrast, our polarized modification of CBS manages to isolate the two modes. Note that in Proposition 2.1 we proved that polarized CBS with a Gaussian kernel is unbiased when the target measure is a Gaussian. We do not expect this to be true for target measures which are a mixture of Gaussians. Still, our results in the first row of Figure 6 show that, if the two Gaussians are sufficiently far apart, our method (second to fourth column) seems close to being unbiased. Standard CBS (left column) can, by design, find at most one mode, and here it successfully detects the lower Gaussian. Note that we use the same number of particles for both algorithms, namely J = 400, which is why the bottom mode for CBS looks more densely sampled. The situation is different when the two modes are closer together, as shown in the bottom row of Figure 6. Here, standard CBS fails to detect even one mode, whereas our polarized version does achieve that. However, as expected in this multimodal situation, the result is not a perfect sample but appears to be biased. Furthermore, if the clusters are close, the sensitivity with respect to the choice of the kernel width κ is larger and κ has to be chosen sufficiently small in order to generate enough samples for the lower cluster.
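One Euler–Maruyama step of a polarized CBS-type update can be sketched as follows. This is only an illustration under our assumptions: we take the drift toward the individual kernel-weighted means and scale the noise by the matching kernel-weighted covariance; the sampling-mode factor 1 + β⁻¹ and all names are our choices and need not match the exact scaling in the paper.

```python
import numpy as np

def polarized_cbs_step(X, V, beta, kappa, dt, rng):
    """One Euler-Maruyama step of a polarized CBS-type update (sketch).
    Each particle drifts toward its own kernel-localized weighted mean; the
    diffusion uses the corresponding weighted covariance (scaling assumed)."""
    J, d = X.shape
    logw = -beta * V(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    L = -d2 / (2 * kappa ** 2) + logw[None, :]
    L -= L.max(axis=1, keepdims=True)
    W = np.exp(L)
    W /= W.sum(axis=1, keepdims=True)
    M = W @ X                                    # (J, d) individual weighted means
    lam_inv = 1.0 + 1.0 / beta                   # sampling-mode noise scaling (assumption)
    Xn = np.empty_like(X)
    for i in range(J):
        diff = X - M[i]
        C = (W[i, :, None] * diff).T @ diff      # weighted covariance seen by particle i
        C += 1e-12 * np.eye(d)                   # regularize for the Cholesky factor
        A = np.linalg.cholesky(C)
        noise = A @ rng.standard_normal(d)
        Xn[i] = X[i] - dt * (X[i] - M[i]) + np.sqrt(2.0 * lam_inv * dt) * noise
    return Xn
```

Iterating this step with a bimodal potential V is the setting of Figure 6: with a sufficiently small κ, particles near each mode see essentially only that mode's mean and covariance.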

Non-Gaussian sampling
Our final experiment deals with sampling from a non-Gaussian distribution. Again, we use β = 1. According to the theoretical analysis in [5], standard CBS is not expected to correctly sample from such a distribution but rather from a Gaussian approximation. While we do not claim that our polarized method generates an exact sample, our results in Figure 7 show that polarized CBS approximates the non-Gaussian distribution much better than standard CBS. This is due to the fact that the weighted means, indicated as orange crosses, do not collapse to a single point but concentrate in the region of high probability mass.

Conclusion and outlook
In this article we presented a polarized version of consensus-based optimization and sampling dynamics for objectives with multiple global minima or modes. For this we localized the dynamics such that every particle is primarily influenced by close-by particles. We proved that in the case of sampling from a Gaussian distribution this does not introduce a bias. We also suggested a cluster-based version of our polarized dynamics which is computationally more efficient. Our extensive numerical experiments suggest that our method has large potential for detecting multiple global minima or modes, improving over standard consensus-based methods.
There is a lot of room for future work regarding well-posedness of the Fokker-Planck equation derived above, and the stability and convergence of both the mean-field and the particle system to consensus under less restrictive assumptions than the ones used in this paper. We are convinced that the Lyapunov function L[ρ_t] which we studied in Section 2.4 will be very helpful in this endeavour. Future work will also focus on further numerical improvements of our method, in particular incorporating a batching strategy similar to the one from [6] to further improve performance in high-dimensional optimization. Finally, a long-term goal will be to find multimodal sampling methods which are provably consistent for multimodal and non-Gaussian distributions. Given that even gradient-free sampling from unimodal non-Gaussian distributions is still relatively poorly understood, with [23] being a promising approach, this will be a challenging task for future work.

Figure 1 :
Figure 1: Dynamics of standard and the proposed polarized CBO for minimizing the Himmelblau function. The points mark particle locations, the arrows the drift field towards the weighted means.
, else, meaning that the only probabilities which survive are the ones for the likeliest cluster centers. Another interesting special case arises when choosing the Gaussian kernel k(x, y) := exp(−|x − y|^2 / (2κ^2)).
This function has a global minimum at 0 ∈ R^d with A(0) = 0, and in this experiment we choose d = 2 and minimize the shifted version V(x) := A(x − (3, 2)), which has its global minimum at (3, 2) ∈ R^2. We compare the dynamics of standard CBO with our proposed polarized variant at three different time points in Figures 2 and 3. We use the Gaussian kernel k(x, y) := exp(−|x − y|^2 / (2κ^2)).
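The kernels used in the experiments (Gaussian and, in Figure 5, also Laplace and bounded confidence) can be written as one-liners. The exact scalings here are our assumptions; in particular, we take the bounded confidence kernel to be the indicator of the ball of radius κ, consistent with its compact support noted in the Figure 5 caption.

```python
import numpy as np

# Kernel choices discussed in the text; scalings are assumptions for illustration.
def gaussian_kernel(x, y, kappa):
    return np.exp(-np.sum((x - y) ** 2) / (2 * kappa ** 2))

def laplace_kernel(x, y, kappa):
    return np.exp(-np.linalg.norm(x - y) / kappa)

def bounded_confidence_kernel(x, y, kappa):
    # Compact support: particles farther than kappa exert no influence at all.
    return 1.0 if np.linalg.norm(x - y) <= kappa else 0.0
```

The compact support of the last kernel explains the behavior seen in Figure 5: distant particles contribute exactly zero weight, so isolated groups can form their own consensus points.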

Figure 2 :
Figure 2: Dynamics of standard CBO for minimizing the Ackley function. The points mark particle locations, the arrows the drift field towards the shared weighted mean.

Figure 3 :
Figure 3: Dynamics of the proposed polarized CBO for minimizing the Ackley function. The points mark particle locations, the arrows the drift field towards the individual weighted means.

Figure 4 :
Figure 4: A variant of the Rastrigin function with three global minima.

Figure 5 :
Figure 5: Dynamics of polarized CBO with different kernels for minimizing a Rastrigin-type function with three global minima, marked by red diamonds. For all kernels all minima were detected. The Gaussian and Laplace kernels work especially well, whereas the bounded confidence kernel generates too many consensus points due to its compact support.

Figure 6 :
Figure 6: Dynamics of standard CBS and our polarized version, sampling from a mixture of Gaussians (top row: far apart, bottom row: closer together). The points mark particle locations, the arrows the drift field towards the weighted mean(s).

Figure 7 :
Figure 7: Standard and polarized CBS for sampling from a non-Gaussian distribution. The orange crosses mark the position of the weighted means.
Perhaps a little unexpectedly, we can prove that, just as the Fokker-Planck equation of CBS (2.18), our version (2.21) leaves Gaussian measures invariant for Gaussian kernels of arbitrary width, which is an important consistency property.
Proposition 2.1. The polarized CBS dynamics in sampling mode with a Gaussian kernel leave a Gaussian target measure invariant. More precisely, let k(x, y)

Table 1 :
Performance of polarized CBO for minimizing the multimodal Ackley function with d = 2 and three global minima: Averaging over 100 independent runs of 1,000 iterations, we report what percentage of the runs succeeded in detecting at least 1, 2, or 3 of the minima. Note that, by definition, standard CBO (κ = ∞) can detect at most one minimum.

Table 2 :
Performance of polarized and cluster CBO for minimizing the multimodal Ackley function with d = 10 and ceteris paribus. The polarized method is slightly better than CBO but fails to detect more than one minimum in most cases. The cluster variant works well for a wide range of parameters κ and manages to detect all minima in some runs, already for J = 200 particles. Note that κ = ∞ does not correspond to CBO for this method.

Table 3 :
Performance of cluster CBO for minimizing the multimodal Ackley function with d = 30 and ceteris paribus.