Bounds on polarization problems on compact sets via mixed integer programming

Finding point configurations that yield the maximum polarization (Chebyshev constant) is gaining interest in the field of geometric optimization. In the present article, we study the problem of unconstrained maximum polarization on compact sets. In particular, we discuss necessary conditions for local optimality, such as the fact that a locally optimal configuration is always contained in the convex hull of the respective darkest points. Building on this, we propose two sequences of mixed-integer linear programs to compute lower and upper bounds on the maximal polarization, where the lower bound is constructive. Moreover, we prove the convergence of these sequences towards the maximal polarization.


Introduction
Suppose you were given a set A and N lamps to place such that the darkest point in A is as bright as possible. In less descriptive terms, this max-min problem is known as the maximal polarization problem, which we now state in mathematical language.
Let A, D ⊂ R^n be nonempty sets and let K : A × D → R ∪ {+∞} be a function bounded from below. An N-point multiset C ⊆ D will be referred to as a point configuration (of N points) and the set of all N-point configurations supported on D will be denoted by C. We assign the discrete K-potential associated with C to every point p ∈ A as

U_{K,A}(p, C) = Σ_{c∈C} K(p, c).
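For a quick numerical illustration, the discrete K-potential can be sketched in a few lines of Python (the Gaussian kernel and the sample configuration are our own illustrative choices, not mandated by the text):

```python
import math

# Discrete K-potential U(p, C) = sum over c in C of K(p, c) with the
# radial kernel K(p, c) = f(||p - c||); f is an illustrative Gaussian.
def f(x, a=1.0):
    return math.exp(-a * x * x)

def potential(p, C):
    """U(p, C): total brightness at p from lamps placed at the points of C."""
    return sum(f(math.dist(p, c)) for c in C)

# two lamps on the x-axis; the midpoint receives 2 * exp(-0.25)
C = [(0.0, 0.0), (1.0, 0.0)]
print(potential((0.5, 0.0), C))  # ≈ 1.5576
```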
To any point configuration we associate its polarization P_{K,A}(C) = inf_{p∈A} U_{K,A}(p, C). It is then natural to consider the (maximal) polarization problem

P_K(A) = sup_C P_{K,A}(C), where the supremum ranges over all N-point configurations supported on D. (1)

For a broader context and overview of this formulation of the polarization problem we refer to the recent monograph [1, Ch. 14]. Problems of this kind have been extensively studied. In particular, the case of A = D = S^{n−1} being a unit sphere and K(x, y) = ‖x − y‖^{−s} being a Riesz potential is rich in results on explicit optimal configurations of few points (e.g. [2], [3], [4], [5], [6], [7], [8]), bounds on maximal polarization (e.g. [4], [7]) and asymptotic results (e.g. [9], [10], [11], [12]). Asymptotic results are also available for more general choices of A, such as rectifiable sets. Moreover, the polarization problem as stated in (1) is closely related to the well-studied covering problem, i.e. the question whether A can be covered by balls of radius r > 0.
In particular, let K(x, y) = 1_{[0,r]}(‖x − y‖); then a covering with N balls exists if and only if 1 ≤ P_K(A). General discussions of covering problems can be found, for example, in the seminal book by Conway and Sloane [13]. For covering problems on compact metric spaces we refer to [14] for an overview, whereas constructive methods have been developed, e.g., in [15] and [16].
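This equivalence can be checked numerically on a small example: with the indicator kernel, U(p, C) counts the balls containing p, so a sampled version of A is covered exactly when the sampled polarization of C is at least 1. A minimal sketch (the square, centers and radii are our own choices):

```python
import math

# Indicator kernel K(x, y) = 1_{[0, r]}(||x - y||): U(p, C) counts how many
# of the balls of radius r around the points of C contain p, so
# min_p U(p, C) >= 1 holds exactly when the sampled set is covered.
def covered(points, centers, r):
    U = lambda p: sum(1 for c in centers if math.dist(p, c) <= r)
    return min(U(p) for p in points) >= 1

# A = unit square, sampled on an 11 x 11 grid
grid = [(i / 10, j / 10) for i in range(11) for j in range(11)]
centers = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]

print(covered(grid, centers, 0.36))  # True: 0.36 exceeds the max distance ~0.3536
print(covered(grid, centers, 0.30))  # False: the corners are left uncovered
```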
In this paper we consider polarization problems of the following kind. The set A ⊂ R^n will be compact and we will impose no restrictions on the point configurations, i.e. D = R^n. Furthermore, we restrict to functions K(x, y) = f(‖x − y‖) for some continuous strictly monotonically decreasing function f : R_+ → R_+ and use the notation U_{f,A}(p, C), P_{f,A}(C), P_f(A). If the subscript parameters are clear from context we omit them.
Under the above assumptions, we therefore consider the optimization problem

P_f(A) = sup_{C ∈ (R^n)^N} inf_{p∈A} Σ_{c∈C} f(‖p − c‖). (2)

For explicit computations we choose Gaussians f(x) = e^{−ax²}. These functions appear rather naturally in the context of universal optimality (cf. [17]): Recall that a function g : (0, ∞) → R is completely monotonic if it is infinitely differentiable and the derivatives satisfy (−1)^k g^{(k)} ≥ 0 for all k. The functions g(x) = e^{−αx} are completely monotonic and we can write f(‖x − y‖) = g(‖x − y‖²). In this context, functions f(x) = g(x²) are called completely monotonic functions of squared distance.
From this one obtains that the set of completely monotonic functions of squared distance is the cone spanned by the Gaussians and the constant function x ↦ 1.
In particular, the commonly used Riesz potentials can be written in this way.
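This can be made concrete numerically: the classical identity r^{−s} = (1/Γ(s/2)) ∫_0^∞ t^{s/2−1} e^{−t r²} dt exhibits the Riesz kernel as a non-negative continuous combination of Gaussians of squared distance. A small sanity check of this identity (the quadrature scheme below is our own choice):

```python
import math

# Check that r^(-s) = (1 / Gamma(s/2)) * int_0^inf t^(s/2 - 1) e^(-t r^2) dt,
# i.e. the Riesz kernel lies in the cone spanned by Gaussians e^(-t r^2).
def riesz_from_gaussians(r, s, lo=-30.0, hi=30.0, n=4000):
    """Approximate the integral via the trapezoid rule in u = log t."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        u = lo + i * h
        t = math.exp(u)
        w = 0.5 if i in (0, n) else 1.0  # trapezoid weights
        total += w * t ** (s / 2) * math.exp(-t * r * r)  # dt = t du
    return total * h / math.gamma(s / 2)

print(riesz_from_gaussians(1.0, 1.0))  # ≈ 1.0 = 1^(-1)
```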
We fix some more notation for the case that the infimum defining P_{f,A}(C) is in fact a minimum, i.e. the minimizers of this function are points in A. In this case, any such minimizer will be called a darkest point of A. Moreover,

Dark_A(C) = {p ∈ A : U_{f,A}(p, C) = P_{f,A}(C)}

will be called the set of darkest points of C. To explain this wording we invite the reader to recall the interpretation of the problem we gave in the beginning: we center lamps at the points in C which now illuminate A. The polarization of A is then the lowest level of brightness any point in A can have, and any point realizing this is a "darkest point".
Note that requiring A to be compact is rather natural. Indeed, if A were unbounded, then the value of the polarization would always tend to N · inf f. If A were not closed, darkest points need not exist. Consider for example A the open disc and C containing only the origin. In this case, P_{f,A}(C) is not attained at any point in A.
In Section 2 we provide some results connecting a locally optimal configuration to the set of its respective darkest points. Theorem 2.1 states that the points of such a configuration are contained in the convex hull of the darkest points, while Theorem 2.5 states that the darkest points are located either on the boundary of A or in the interior of the convex hull of the configuration. These restrictions provide necessary conditions for optimality.
In Section 3 we investigate mixed-integer approximations of the polarization problem providing upper and lower bounds. These are collected in Theorem 3.5. We then prove that these bounds indeed converge to P_f(A) in Theorems 3.8 and 3.9.
In Section 4 we illustrate capabilities and limitations of the approach on some benchmark instances.

Darkest points and necessary conditions
In this section, we investigate structural properties an optimal configuration needs to satisfy; these can be used to falsify the optimality of a given configuration and to reduce the search space for optimal configurations.
In particular, we have the following necessary condition that relates local optimality of a configuration to the set of its darkest points:

Theorem 2.1. If C is a locally optimal solution of (2), then C ⊂ conv(Dark_A(C)).

Proof. Suppose C is a configuration for which we have c ∈ C such that c ∉ conv(Dark_A(C)). In the following, we discuss how to construct a new configuration C′ in an arbitrary neighbourhood of C such that P(C′) > P(C). Thus C cannot be locally optimal. Since f is continuous, the level set S = {p : U(p, C) = P(C)} containing the darkest points is closed and thus Dark_A(C) = A ∩ S is compact. Therefore, conv(Dark_A(C)) is a compact convex set and we can find a hyperplane H = {x : aᵀx = b} strictly separating this set from c such that aᵀc < b. For ε > 0 small enough, c′ = c + εa still satisfies aᵀc′ < b. We obtain a new configuration C′ = C ∪ {c′} \ {c}. Note that for every neighbourhood of C, there is a sufficiently small ε such that C′ is contained in said neighbourhood. Obviously ‖c′ − p‖ < ‖c − p‖ for all points p in the non-negative halfspace of H. In particular, c′ is closer to all of the darkest points than c, and since f is monotonically decreasing, f(‖c′ − p‖) > f(‖c − p‖) for all points p in the non-negative halfspace of H.
It remains to assert this also on the negative halfspace. Since all the darkest points are on the positive side of H, a point p ∈ A ∩ (H ∪ H⁻) satisfies U(p, C) ≥ P(C) + δ for some constant δ > 0. By continuity of f, for ε small enough, we can guarantee that U(p, C′) > P(C) for all such p as well, so that P(C′) > P(C).

The formulated condition is very "unstable" in the following sense (Proposition 2.2):
Proof of claim 2 of Proposition 2.2: Suppose C′ ⊂ conv Dark_A(C′). Then we can apply claim 1 to C′ with the roles of c, c′ reversed. But this would give P(C′) < P(C) < P(C′), which is a contradiction.
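The mechanism behind Theorem 2.1 can be observed in a one-dimensional toy example (A = [0, 1] with a single lamp; the choice of f and the sampling are ours): a lamp outside the convex hull of the darkest points is strictly improved by moving it towards that hull.

```python
import math

# Toy illustration of Theorem 2.1 on A = [0, 1]: a single lamp placed
# outside conv(Dark_A(C)) can be moved toward the darkest points to
# strictly increase the polarization.
def f(x):
    return math.exp(-x * x)

A = [i / 200 for i in range(201)]  # fine sample of A = [0, 1]

def polarization(c):
    """P(C) for the one-point configuration C = {c} (sampled infimum)."""
    return min(f(abs(p - c)) for p in A)

# For c = 1.5 the darkest point is p = 0, so conv(Dark) = {0} and c lies
# outside of it; moving c toward 0 strictly improves the polarization.
assert polarization(1.0) > polarization(1.5)
assert polarization(0.5) > polarization(1.0)  # the centre is best here
print(polarization(0.5))  # e^(-0.25) ≈ 0.7788
```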
Optimization methods which only consider single components (like pattern search) or move single configuration points may therefore converge to a configuration contained in the convex hull of the darkest points which is not locally optimal. Therefore it seems reasonable to only use optimization methods which are able to move several points at once. Another conclusion is the following, which suggests that the number of optimization variables can be reduced to only m − 1 vectors.

Corollary 2.3. For given points

We can use Theorem 2.1 to study the structure of the darkest points even more. First, we discuss a way to find certificates for p ∉ Dark_A(C).

Lemma 2.4. Let C be a configuration and p ∈ R^n be an arbitrary point. Let

Since f is strictly monotonically decreasing, we have U(q, C) < U(p, C). From this, the second claim follows immediately.
The above definition of N(p, C) for a point p contains only points at which the potential is strictly smaller than at p itself, as we just showed. We think this object will be useful beyond the scope of the previous lemma and the subsequent theorem, but in the present work we only need it here.
If we recall the visualization of the polarization problem as placing light sources C to illuminate A, the above definition of N(p, C) of a point p contains only points that are illuminated less than p itself. It is (by cone duality) somewhat related to the idea of a physical shadow (which would be resembled most closely by −cone{p − c : c ∈ C}). With this we prove the following result which further restricts the location of the darkest points:

Theorem 2.5. Let C be a feasible configuration for (2). Then the points of Dark_A(C) are either in the interior of conv(C) or on the boundary ∂A, i.e.

Dark_A(C) ⊆ int conv(C) ∪ ∂A.
Proof. Let p ∈ Dark_A(C) and assume p ∉ int conv(C). Furthermore, let N(p, C) be defined as in Lemma 2.4. We can find a hyperplane separating p from conv(C); if additionally p ∉ ∂A, then a small step from p in the direction of its outer normal stays in A and lies in N(p, C), yielding a point q ∈ A with U(q, C) < U(p, C) by Lemma 2.4, which contradicts p ∈ Dark_A(C).

In addition, if C is also locally optimal for (2), by Theorem 2.1 we immediately obtain that C ⊂ conv Dark_A(C). Now, assume Dark_A(C) ∩ ∂A = ∅; then, as seen above, Dark_A(C) ⊆ int conv C and we obtain

C ⊂ conv Dark_A(C) ⊆ int conv(C),

which is a contradiction since C is finite. To summarize, locally optimal configurations C of (2) and their corresponding darkest points Dark_A(C) share a similar containment property, as is illustrated in Figure 1.

An MIP approach to polarization
The current section is dedicated to the development of two hierarchies of mixed-integer linear programs (MIPs) that approximate the maximal polarization of a compact set A with respect to a monotonically decreasing and continuous function f : R_+ → R_+. The MIP that computes the lower bounds is constructive, i.e. solutions to this MIP are configurations whose polarization is bounded below by the optimal value of the MIP. The actual polarization of these configurations may well exceed this lower bound by a significant margin, cf. Figure 3 for some numerical evidence.
First we give an equivalent description of problem (2). For this we observe that by Theorem 2.1 any locally optimal point configuration is necessarily supported on conv(A). Furthermore, we can get rid of the infimum by adding new constraints. The resulting optimization problem is then

sup { x : x ≤ U(p, C) for all p ∈ A, C ∈ conv(A)^N, x ∈ R }, (3)

where X^N describes the set of all multisets of size N with elements in X. It is now clear that the sup is actually a max, since the feasible region can easily be made compact by bounding x from below (e.g. x ≥ 0) without changing the value of the program.

MIP Hierarchies
We observe that problem (3) is an optimization problem with finitely many variables (namely x, C), but infinitely many constraints; it is a semi-infinite program (SIP) and therefore not solvable using standard solvers. In the remainder of this section we introduce two hierarchies of (tractable) MIPs that approximate P(A) from above and below (see Theorem 3.5). For this we make use of the following concept of functions which "control" the difference of two values of f.

Definition 3.1. We call a family of functions g_{c,p} : R_+ → R_+ for c ∈ conv(A), p ∈ A a family of control functions (with respect to f, A) if for all c ∈ conv(A), p ∈ A: g_{c,p}(0) = 0, g_{c,p} is non-decreasing, and

|f(‖c − p‖) − f(‖c − p′‖)| ≤ g_{c,p}(‖p − p′‖) and |f(‖c − p‖) − f(‖c′ − p‖)| ≤ g_{c,p}(‖c − c′‖)

for all c′ ∈ conv(A), p′ ∈ A, where ‖·‖ denotes the standard Euclidean norm.
Note that f is related to a function K taking two points c, p as arguments: K(c, p) = f(‖c − p‖). A family of control functions allows us to control the way K changes as we vary either c or p. This control will be an important ingredient of the proof of Theorem 3.5. For continuous functions this is related to bounding the slope of K, as can be illustrated by the following example: Suppose the function f is Lipschitz continuous with constant L; then the family g_{c,p}(ε) = Lε is a family of control functions. However, applying global Lipschitz continuity is not a very precise approximation as it ignores local information around specific points c, p. Therefore we provide a more suitable family of control functions.

Proposition 3.2. For f monotonically decreasing and continuous the following is a family of control functions:

g_{c,p}(ε) = max{ f(‖c − p‖) − f(‖c − p‖ + ε), f(max(‖c − p‖ − ε, 0)) − f(‖c − p‖) }.

Proof. We fix c, p, write d = ‖c − p‖ and g = g_{c,p}, and set ĝ(ε) = f(d) − f(d + ε) and ǧ(ε) = f(max(d − ε, 0)) − f(d). Clearly g(0) = 0, and since f is continuous, so is g. Both ĝ and ǧ are non-negative and non-decreasing on R_+ since f is decreasing, so g = max(ĝ, ǧ) is a non-decreasing function on R_+. By the symmetry of the two defining inequalities, it is sufficient to prove that g provides an upper bound for Δ = |f(‖c − p‖) − f(‖c − p′‖)| whenever ‖p − p′‖ ≤ ε. To this end, we use the triangle inequalities d − ‖p − p′‖ ≤ ‖c − p′‖ ≤ d + ‖p − p′‖; since f is decreasing, f(‖c − p′‖) lies between f(d + ε) and f(max(d − ε, 0)), hence Δ ≤ g(ε).

For explicit computations we need to discretize two aspects of the problem. Firstly, we discretize the set of possible point configurations. For this we choose a finite sample Λ ⊂ conv(A) and only optimize over configurations supported on Λ, i.e. multisets

C ∈ Λ^N. (4)

Secondly, we replace the infinite number of constraints, parameterized by A, by a finite subcollection. For this we again choose a finite sample Γ ⊂ A, and only consider the inequalities x ≤ U(p, C) for all p ∈ Γ.
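The control-function property of Proposition 3.2 can be checked numerically; in the sketch below, the explicit formula for g is our reading of the proposition, and the Gaussian f and the random sampling are our own choices.

```python
import math, random

# Numerical check of the control-function property: for d = ||c - p||,
#   g_{c,p}(eps) = max(f(d) - f(d + eps), f(max(d - eps, 0)) - f(d))
# bounds |f(||c - p||) - f(||c - p'||)| whenever ||p - p'|| <= eps.
def f(x):
    return math.exp(-x * x)

def g(d, eps):
    return max(f(d) - f(d + eps), f(max(d - eps, 0.0)) - f(d))

random.seed(0)
eps = 0.1
for _ in range(1000):
    c = (random.uniform(-1, 1), random.uniform(-1, 1))
    p = (random.uniform(-1, 1), random.uniform(-1, 1))
    # perturb p by at most eps
    delta = random.uniform(0, eps)
    angle = random.uniform(0, 2 * math.pi)
    q = (p[0] + delta * math.cos(angle), p[1] + delta * math.sin(angle))
    d = math.dist(c, p)
    assert abs(f(d) - f(math.dist(c, q))) <= g(d, eps) + 1e-12
print("control-function bound holds on all samples")
```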
However, this naively sampled problem is not necessarily connected to the original problem, since we enforce only a subset of the infinitely many constraints and allow only a finite number of configurations. Either one of these changes alone would provide a valid bound, but unfortunately they work in different directions. We will now show how to overcome this problem by utilizing the above family of control functions to obtain lower and upper bounds on the original problem.
Let us first consider lower bounds on (3). It is clear that we can restrict the choice of configurations to be supported on a finite sample Λ of conv(A) as in (4) and obtain a program that computes a lower bound.
Discretizing the constraints is the harder part, since removing constraints lets the maximum grow. The following lemma shows how a slight variation of the discretized constraints for some finite sample Γ of A implies the validity of all of the infinitely many original constraints.

Lemma 3.3. Let g_{c,p} be a family of control functions. Let ε > 0, let Λ be an arbitrary finite sample of conv(A) and let Γ be an ε-net of A. Furthermore, suppose C ∈ Λ^N and x satisfy

x ≤ U(p′, C) − Σ_{c∈C} g_{c,p′}(ε) for all p′ ∈ Γ.

Then x ≤ U(p, C) for all p ∈ A.

Proof. Let p ∈ A be arbitrary and let n(p) = argmin_{p′∈Γ} ‖p − p′‖ denote the closest sample point to p ∈ A. Note that ‖p − n(p)‖ < ε since Γ is an ε-net. Then,

U(p, C) = Σ_{c∈C} f(‖c − p‖) ≥ Σ_{c∈C} ( f(‖c − n(p)‖) − g_{c,n(p)}(‖p − n(p)‖) ) ≥ U(n(p), C) − Σ_{c∈C} g_{c,n(p)}(ε),

which is at least x since g_{c,n(p)} is non-decreasing and ‖p − n(p)‖ < ε.
Conversely, if we consider upper bounds on (3), we cannot simply choose a finite sample Λ of conv(A) to approximate the above SIP. Indeed, this would restrict the set of feasible solutions of (3) and thereby lower the maximum instead. Again, the following lemma provides a way around this problem using a variation of the constraints.
Lemma 3.4. Let g_{c,p} be a family of control functions. Let ε > 0, let Λ be an ε-net of conv(A) and let Γ ⊂ A be finite. Furthermore, suppose C ∈ conv(A)^N and x satisfy x ≤ U(p, C) for all p ∈ Γ. Then there exists a configuration C′ ∈ Λ^N such that

x − Σ_{c∈C} g_{n(c),p}(ε) ≤ U(p, C′) for all p ∈ Γ.

Proof. Let n(c) = argmin_{c′∈Λ} ‖c − c′‖ and C′ = {{n(c) : c ∈ C}}. For p ∈ Γ we have

U(p, C′) = Σ_{c∈C} f(‖n(c) − p‖) ≥ Σ_{c∈C} ( f(‖c − p‖) − g_{n(c),p}(‖c − n(c)‖) ) ≥ U(p, C) − Σ_{c∈C} g_{n(c),p}(ε),

where the last inequality holds since g_{n(c),p} is non-decreasing and ‖c − n(c)‖ < ε as Λ is an ε-net of conv(A).
Now we can prove the main result of this section.

Theorem 3.5. Let ε_Λ, ε_Γ > 0, let Λ be an ε_Λ-net of conv(A) and Γ an ε_Γ-net of A. Furthermore, let g_{c,p} be a family of control functions. Then the optimal values of the following problems form a chain of inequalities (6a) ≤ (6b) ≤ P_f(A) ≤ (6d) ≤ (6e), where configurations supported on Λ are encoded by y ∈ {0, . . ., N}^Λ with 1ᵀy = N, and the constraints read

x ≤ Σ_{c∈Λ} y_c f(‖c − p‖) − Σ_{c∈Λ} y_c g_{c,p}(ε_Γ) for all p ∈ Γ in the lower MIP (6a),
x ≤ U(p, C) for all p ∈ A, with C supported on Λ, in (6b),
x ≤ U(p, C) for all p ∈ Γ, with C ∈ conv(A)^N, in the relaxation (6d),
x ≤ Σ_{c∈Λ} y_c f(‖c − p‖) + Σ_{c∈Λ} y_c g_{c,p}(ε_Λ) for all p ∈ Γ in the upper MIP (6e).

Proof. We show that feasible solutions of the left-hand sides are also feasible for the right-hand sides with the same objective value, justifying the asserted inequalities. First, observe that Lemma 3.3 implies that a feasible solution (x, y) of (6a) is also feasible for (6b) with the same objective value. Next, we consider a feasible solution (x, y) of (6b) and observe that y encodes a multiset C ∈ conv(A)^N such that (x, C) satisfies the constraints in (3) with the same objective value x. The next inequality follows rather immediately since (6d) is a relaxation of (3) obtained by dropping the constraints for p ∈ A \ Γ. Lastly, if (x, C) is a feasible solution of (6d), we apply Lemma 3.4.

Let us briefly comment on the computational complexity of the mixed-integer programs (6a) and (6e). It is worth noting that mixed-integer linear programming often refers to optimization problems with binary variables, which solvers typically handle significantly faster. The integral variables y ∈ {0, . . ., N}^Λ in both (6a) and (6e) can be replaced by |Λ| · log(N) binary variables.
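The sandwich structure of Theorem 3.5 can be emulated on a tiny instance by brute force instead of a MIP solver (A = [0, 1], N = 2, a Gaussian f, and the control family of Proposition 3.2 as we read it; all concrete choices here are ours):

```python
import math
from itertools import combinations_with_replacement

# Brute-force sketch of the bounds of Theorem 3.5 on A = [0, 1] with N = 2:
# the lower problem subtracts the control terms from the potential, the
# upper problem adds them, and the optimal values satisfy lower <= upper.
def f(x):
    return math.exp(-2.0 * x * x)

def g(d, eps):  # control function of Proposition 3.2
    return max(f(d) - f(d + eps), f(max(d - eps, 0.0)) - f(d))

eps = 0.05
Lam = [i * eps for i in range(int(round(1 / eps)) + 1)]  # eps-net of conv(A)
Gam = Lam                                                # eps-net of A

def bound(sign):
    best = -math.inf
    for C in combinations_with_replacement(Lam, 2):  # all 2-point multisets
        val = min(sum(f(abs(c - p)) + sign * g(abs(c - p), eps) for c in C)
                  for p in Gam)
        best = max(best, val)
    return best

lower, upper = bound(-1), bound(+1)
assert lower <= upper
print(lower, upper)
```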
Moreover, in the lower bound of Theorem 3.5 the vector y can be chosen as y ∈ {0, 1}^Λ, which still provides a (potentially worse) lower bound and reduces the number of binary variables significantly. Unfortunately, a similar simplification is not immediately possible for the upper bound. However, we introduce another concept which aims to reduce the computational complexity in a similar fashion in the upper bound case.

Definition 3.6. We call a finite set Λ ⊂ conv(A) an (ε, k)-net if for every p ∈ A there are at least k distinct points p_1, . . ., p_k ∈ Λ such that ‖p_i − p‖ < ε.
Using an (ε_Λ, N)-net we obtain a hierarchy similar to Theorem 3.5, restricting the possible entries of y to {0, 1}.

Proposition 3.7. Let ε_Λ, ε_Γ > 0, let Λ be an (ε_Λ, N)-net of conv(A) and Γ an ε_Γ-net of A. Furthermore, let g_{c,p} be a family of control functions. Then the chain of inequalities of Theorem 3.5 remains valid with the variables restricted to y ∈ {0, 1}^Λ in (6e).

Proof. The proof works like the proof of Theorem 3.5, replacing C′ = {{n(c) : c ∈ C}} in the proof of Lemma 3.4 by a set C′ of N distinct points of Λ. This is possible since Λ is an (ε_Λ, N)-net (see Definition 3.6).
A trivial example of an (ε, N)-net can be obtained from a multiset consisting of N copies of an ε-net. However, in practice there are usually solutions that need fewer points, albeit more than a classical ε-net.

Convergence Results
After establishing upper and lower bounds on P(A) through the hierarchies presented in Theorem 3.5, we study the quality of these bounds. To this end, we show in this section that solutions of the bounding problems (6a) and (6e) converge, as ε_Λ and ε_Γ both tend to 0, to a solution of the original problem (3). Both proofs rely in large parts on the proof of Lemma 6.1 in [19], which proves similar convergence for more general semi-infinite programs, but include minor necessary modifications. At first, we focus on the lower bounds, i.e., we show that (6a) converges to (6b) as ε_Γ → 0:

Theorem 3.8. Let (ε_k) be a non-negative sequence converging towards 0. Furthermore, for every k ∈ N choose an ε_k-net Γ_k of A. Then, any accumulation point of a sequence (x_k, y_k)_{k∈N} of optimal solutions of (6a) w.r.t. Γ_k and ε_k is an optimal solution of (6b).
Proof. Let (x̄, ȳ) be an accumulation point of (x_k, y_k). By passing to a subsequence we can assume that (x_k, y_k) → (x̄, ȳ) as k → ∞. We are now going to prove that (x̄, ȳ) is feasible and in fact optimal for (6b): Consider an arbitrary p ∈ A and observe that, since Γ_k is an ε_k-net of A, there exists a sequence (p_k) with p_k ∈ Γ_k such that p_k → p as k → ∞. We observe further that for all k we have

x_k ≤ Σ_{c∈Λ} y_{k,c} f(‖c − p_k‖) − Σ_{c∈Λ} y_{k,c} g_{c,p_k}(ε_k),

and by taking limits x̄ ≤ Σ_{c∈Λ} ȳ_c f(‖c − p‖). Hence, (x̄, ȳ) is feasible for (6b). Now, let (x, y) be an arbitrary solution to (6b). Since A is compact and ε_k > 0, we know that g_c = max_{p∈A} g_{c,p} is a continuous, monotonically non-decreasing function with g_c(0) = 0. We now observe that

( x − Σ_{c∈Λ} y_c · g_c(ε_k), y )

is feasible for (6a) with respect to Γ_k. Since (x_k, y_k) is an optimal solution to (6a), we have x_k ≥ x − Σ_{c∈Λ} y_c · g_c(ε_k). Consequently, as g_c(0) = 0, in the limit we obtain x̄ ≥ x. Since (x, y) was chosen arbitrarily, we conclude that (x̄, ȳ) is indeed optimal for (6b).
One difficulty of the following theorem is the different kinds of feasible solutions when altering the sample Λ. Feasible solutions of (6e) have the form y ∈ {0, . . ., N}^Λ with 1ᵀy = N, while feasible solutions of (6d) are N-point multisets supported on conv(A). Note that these objects do not permit an easy discussion of convergence. However, both notions can be translated into an element ω ∈ (conv(A))^N which is independent of Λ and allows a discussion of convergence. Note that ω can canonically be translated back into a multiset.

Theorem 3.9. Let (ε_k) be a non-negative sequence converging towards 0. Furthermore, for every k ∈ N choose an ε_k-net Λ_k of conv(A). Let (x_k, y_k) be a sequence of optimal solutions of (6e) w.r.t. Λ_k, ε_k. Identifying each y_k with ω_k ∈ (conv(A))^N, any accumulation point (x̄, ω̄) of this sequence corresponds to an optimal solution of (6d) by identification of ω̄ with a multiset.
Proof. The proof is similar to the proof of Theorem 3.8. Note that, since the order of elements is not important for the discussed problems, we can regard elements of (conv(A))^N either as tuples or as multisets depending on the context. Suppose (x_k, ω_k) has an accumulation point (x̄, ω̄). By passing to a subsequence we can assume that (x_k, ω_k) → (x̄, ω̄). Consider the continuous function g_p = max_{c∈conv(A)} g_{c,p} with g_p(0) = 0. Then we have for all k and p ∈ Γ:

x_k ≤ Σ_{i=1}^N f(‖ω_{k,i} − p‖) + N · g_p(ε_k).

By taking limits we obtain x̄ ≤ Σ_{i=1}^N f(‖ω̄_i − p‖) for all p ∈ Γ. Thus (x̄, ω̄) is feasible for (6d). Now suppose (x, ω) is an arbitrary solution of (6d). Then by Lemma 3.4 there exists ω_k such that (x, ω_k) is a feasible solution for (6e) w.r.t. Λ_k. Since (x_k, ω_k) is an optimal solution, we have x_k ≥ x and by taking limits x̄ ≥ x. Therefore (x̄, ω̄) is also optimal for (6d).
Note that the proofs of Theorems 3.8 and 3.9 still work if we restrict y to be binary, as was discussed at the end of Section 3.1.
Combining Theorems 3.8 and 3.9, we conclude that by choosing suitable sequences (ε_Γ)_k, (ε_Λ)_k we can in theory bound the value of P(A) as tightly as we need. However, solving the respective mixed-integer linear programs in practice poses a computational challenge.

Computational results
This section presents numerical experiments illustrating the capabilities and limits of the MIP approach presented in this paper. All computations have been performed using Gurobi on an HP DL380 Gen9 server with two Intel(R) Xeon(R) CPU E5-2660v@2.00GHz processors (each with 14 cores) and 256 GB RAM. We first focus on a simple illustrative example, where A is an equilateral triangle and the size of the configuration is N = 3. In addition, we chose f(x) = e^{−5x²} for our potential function and ε_Γ = 0.014, ε_Λ = ε_Γ/3 as the respective discretization widths of Γ ⊆ A and Λ ⊆ conv(A). Lastly, we restrict both (6a) and (6e) to binary variables y ∈ {0, 1}^Λ instead of integral y ∈ {0, . . ., N}^Λ, as discussed below Theorem 3.5. Since we expect the resulting configuration to consist of three separate points, this should not significantly impact the quality of the bounds.
We illustrate the configuration given by (6a) in Figure 2; it was obtained after approximately 10 hours. We continue by assessing the numerical evidence on convergence for the above example. To this end, we illustrate the quality of the binary versions of both (6a) and (6e) for decreasing values of ε_Λ and ε_Γ. Here, the binary variant of (6e) was derived from Proposition 3.7. To be precise, for every ε ∈ {0.04, 0.038, . . ., 0.014} we computed the lower bound using ε_Λ = ε/3, ε_Γ = ε and the upper bound using ε_Γ = ε_Λ = ε. We chose these scalings for better comparability, since the (ε_Λ, 3)-net in the upper bound case contains more sample points and therefore yields more variables than an (ε_Λ, 1)-net. Furthermore, we used scaled versions of the A_2 lattice complemented with additional sample points on the boundary to generate the samples Λ and Γ. This construction ensures that both Λ and Γ are indeed ε_Λ- and ε_Γ-nets, respectively. The obtained bounds are visualized in Figure 3.
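The sample construction described above can be sketched as follows (a simplified reconstruction: the lattice spacing, tolerances and the verification grid are our own choices):

```python
import math

# Sketch of an eps-net for an equilateral triangle built from a scaled
# A2 (hexagonal) lattice plus extra samples on the boundary, verified
# empirically on a probe grid of the triangle.
eps = 0.1
h = eps / 2  # lattice spacing, chosen conservatively
V = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]  # triangle vertices

def inside(p):
    """Test membership in the triangle (with a small tolerance)."""
    x, y = p
    s3 = math.sqrt(3)
    return y >= -1e-9 and s3 * x - y >= -1e-9 and s3 * (1 - x) - y >= -1e-9

# interior: rows of a hexagonal lattice, clipped to the triangle
samples = []
j = 0
while j * h * math.sqrt(3) / 2 <= math.sqrt(3) / 2 + 1e-9:
    y = j * h * math.sqrt(3) / 2
    off = (j % 2) * h / 2  # alternate rows are shifted by half a spacing
    i = 0
    while off + i * h <= 1.0:
        p = (off + i * h, y)
        if inside(p):
            samples.append(p)
        i += 1
    j += 1

# boundary: equidistant samples on each edge at spacing h / 2
m = int(math.ceil(2 / h))
for a, b in [(V[0], V[1]), (V[1], V[2]), (V[2], V[0])]:
    for k in range(m + 1):
        t = k / m
        samples.append((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))

# verify the eps-net property on a barycentric probe grid of the triangle
def probe_points(n=40):
    for j in range(n + 1):
        for i in range(n + 1 - j):
            k = n - i - j
            yield ((i * V[0][0] + j * V[1][0] + k * V[2][0]) / n,
                   (i * V[0][1] + j * V[1][1] + k * V[2][1]) / n)

cover = max(min(math.dist(p, s) for s in samples) for p in probe_points())
assert cover < eps  # every probe point is within eps of some sample
print(len(samples), cover)
```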
It is apparent that lower values of ε do not always yield better bounds, although there is a clearly visible trend towards closing the gap between the bounds, as can be expected from the convergence results established in Theorems 3.8 and 3.9. A drawback of this approach is the computational runtime of the respective MIPs, which vastly increases with the sample sizes of Γ and Λ, from a few seconds for ε = 0.04 to 10 hours for ε = 0.014. As an additional academic example, we use the same approach for different suitable choices of ε = ε_Λ = ε_Γ and different convex, non-convex or even non-connected A to showcase the wide applicability of our approach. We illustrate the polarizations derived by the binary approximation of our lower bound MIP (6a) in Figure 4.
Fig. 4 Optimal configurations of (6a) for different A in orange with a heatmap of the respective potential (from dark blue over green to yellow).The border of the respective shape A is highlighted in blue (from left to right: ball, triangles, non-convex shape).
Moreover, we briefly summarize the computational results on these additional shapes A in Table 4 below. The respective sample widths were chosen such that the corresponding MIPs could be solved in reasonable time. We note that the shape of A significantly impacts the runtime of our MIP approach. It seems that the large symmetry group of the ball may contribute to a longer runtime, as good solutions may be found everywhere in the branch-and-bound tree used by solvers such as Gurobi. If true, symmetry reduction techniques may lead to substantial improvements.

Outlook
We have seen in Section 2 that the location of the darkest points and the location of the points of a locally optimal configuration are intertwined. We suspect that these results can be extended, in particular by utilizing symmetries of A or requiring A to be convex or even a polytope. Furthermore, it would be interesting to extend these results to other choices of D. However, it is clear that there will be limitations to these kinds of results. Consider for example A = D = S^{n−1}, the unit sphere. In this case, no obvious variant of Theorem 2.1 holds.
In this paper, we have not dealt with explicit computations of locally or globally optimal point configurations, even on simple sets such as n-gons or the unit ball. However, numerical experiments suggest that such configurations show some structure, and we hope that extensions of the results in Section 2 can be utilized to obtain proofs of optimality for some configurations. Here, we would like to highlight one result in this direction we are aware of, namely that for certain Riesz potentials of modest decay and A equal to the closed d-dimensional unit ball, the optimal point configuration consists of N copies of the origin (see [1, Theorem 14.2.6]). We were able to observe similar effects in numerical experiments on regular polytopes.
The MIP hierarchies presented in Section 3 give provable upper and lower bounds converging to the optimal solution. However, unsurprisingly, computing these bounds for sufficiently fine samples is very time consuming, since MIP is NP-complete. A natural question is whether well-known techniques from mathematical programming that speed up the computations, such as convex relaxations, inner approximations, column generation or local refinement, can be utilized to achieve results for finer samples. However, most of these techniques only provide approximations of the discussed MIP hierarchies, which might limit the gain achieved through the finer samples.
Moreover, it might be helpful to carefully fit the choice of the samples to the specific instance of the problem. For example, if one has a conjecture for an optimal configuration and/or the correct location of the darkest points, this information can be fitted into the samples while retaining the ε-net property of the samples. Furthermore, these ideas might provide a way to use our bounds for analytic proofs of optimality in highly structured situations.

Data Availability. Data sharing not applicable to this article as no datasets were generated or analysed during the current study.

Proposition 2.2. Let C be a configuration such that C ⊂ conv Dark_A(C). Let c ∈ C, c′ ≠ c and C′ = C ∪ {c′} \ {c}. Then 1. P(C′) < P(C) and 2. C′ ⊄ conv Dark_A(C′).

Proof. Consider the hyperplane H with outer normal c − c′ through c, oriented such that c′ is on the negative side. Since c ∈ conv Dark_A(C), there has to be a darkest point d ∈ Dark_A(C) in the non-negative halfspace of H (it might be in H). Then ‖c − d‖ < ‖c′ − d‖ and by monotonicity f(‖c − d‖) > f(‖c′ − d‖). The potentials U(d, C′) and U(d, C) differ by f(‖c′ − d‖) − f(‖c − d‖) < 0, therefore the above implies P(C′) ≤ U(d, C′) < U(d, C) = P(C).

Fig. 1 Illustration of Theorems 2.1 and 2.5. A is depicted in red, Dark_A(C) in black and the configuration C in orange. The dashed lines depict the convex hulls conv C and conv Dark_A(C), whereas the black line depicts all points p ∈ R² such that U(p, C) = P(C).

3.4 to obtain a set C′ ∈ Λ^N satisfying the constraints of (6e). Then, by encoding C′ through y ∈ {0, . . ., N}^Λ with 1ᵀy = N, we obtain a feasible solution to (6e) with the same objective value x.

Fig. 2 Optimal configuration for (6a) with ε = 0.014 and a heatmap of the respective f-potential (from dark blue over green to yellow). The points of the configuration are represented by orange circles.

Fig. 3 Upper and lower bounds computed with decreasing values of ε and the respective running optimum (dashed lines), as well as an approximate polarization of the lower bound configuration.