Certified Efficient Global Roundness Evaluation

We propose and analyze a global search algorithm for the computation of the minimum zone sphericity (circularity) error of a given set. The formulation is valid in any dimension and covers both finite sets of data points as well as infinite sets like polygonal chains or triangulations of surfaces. We derive theoretical estimates for the cost to reach a desired accuracy, and validate the performance of the algorithm with various numerical experiments.


Introduction
In metrology applications (e.g., quality and wear assessment of industrial rollers), an object is considered round enough if all measurement points on its surface lie between two spheres (circles) around a common center such that the difference of their radii is smaller than some prescribed tolerance. This complies with the ANSI Y14.5 and ISO 1101 standards [1,2].
The roundness of any set S ⊂ R^d may be defined as the smallest possible difference of radii for a suitable center. A precise definition of the corresponding optimization problem will be given in the next section.
The problem of computing the roundness is also known as the problem of computing the minimum radial separation, the minimum zone sphericity (circularity) error, or the minimum-width shell (annulus).
Computational geometry algorithms that compute exact solutions for finite sets S with n points are mostly based on Voronoi diagrams [3–7]. In the plane, they have worst-case O(n^2)-time behavior, but there are also subquadratic O(n^{3/2+β})-time algorithms [8] for arbitrarily small β > 0. Under special assumptions on the set S, even (expected) linear time can be achieved with linear programming techniques [9]; see also [10].
In [11] it was noted that the general complexity to compute an exact solution in any dimension d is O(n^{⌊d/2⌋+1}). No estimates on the constants involved are given, but they generally depend exponentially on d. Indeed, as pointed out in [12], even in the plane, algorithms that compute exact global solutions are rather inefficient for large n. Therefore, it is of practical interest to develop algorithms that efficiently compute approximate global minimizers of the roundness problem for any prescribed accuracy ε > 0, especially if the roundness of many manufactured objects has to be evaluated. Since the roundness problem may have several local minima, standard local optimization methods cannot guarantee to find global minimizers. To our knowledge, the algorithm with the best complexity estimate for computing an approximate global minimizer is the one in [11] with O(n · ln(1/ε)/ε^d), which for d = 2 improves the O(n · ln(n) + n/ε^2) result of [12]. Again, no estimates on the constants involved are given. Furthermore, the case d = 2 is often treated in a special manner, and the algorithms cannot directly be used to deal with sets S that contain infinitely many points, such as line segments. But for the case that S is the boundary of a convex polygon with n vertices in the plane, an exact solution can also be computed with the help of Voronoi diagrams in O(n) time [13].
Here, we analyze a global search algorithm to solve the roundness problem, which adaptively divides the initial search space into smaller parts until an approximate global minimizer with the desired accuracy is found. The main idea is in the spirit of the general branch-and-bound methods for global optimization discussed in [14], but here we make use of the special structure of the roundness problem to reduce the search space. The algorithm is simple to implement, because it is based only on the comparison of objective function values, and neither derivatives nor Voronoi diagrams have to be computed, which also makes it less sensitive to perturbations in the data. The formulation is valid in any dimension d, and for both finite sets S of data points as well as infinite sets like polygonal chains or triangulations of surfaces. An approximate global minimizer is found in O(n · ln(1/ε)) time (with n being the number of points or vertices), which also improves the O(n · ln(1/ε)/ε^d) result of [11]. The constants involved depend on the number of objective function evaluations (the most costly operation), which in the worst case also grows exponentially with the dimension d. But, under mild assumptions on the roundness of S and the initial search space, we compute explicit upper bounds on the number of objective function evaluations that are needed to find an approximate global minimizer whose objective value is not larger than twice the minimal value. Similar ideas might be used to formulate and analyze global search algorithms for related problems like cylindricity or flatness problems under appropriate assumptions on the data.
In the next section we give a precise definition of the roundness problem and recall a result proven in [9], which shows that global minimizers of the roundness problem exist for almost round sets. We will need this result for the complexity estimate of our algorithm, which is analyzed in Sect. 3. The performance of the algorithm is tested in Sect. 4 with various numerical experiments.

Global Solutions of the Roundness Problem
We assume throughout the paper that S ⊂ R^d is compact. This implies that the maximum distance r_max(x) := max_{p∈S} ‖x − p‖ and the minimum distance r_min(x) := min_{q∈S} ‖x − q‖ of a given center x ∈ R^d to the set S are attained, and that the roundness function rd_S(x) = r_max(x) − r_min(x) is continuous. Even so, it was shown in [15,16] that the minimum in (RP) need not exist in general. As a simple counterexample, consider the case that S consists of 3 distinct points on a straight line: for any center x ∈ R^d we have rd_S(x) > 0, but, by moving the center perpendicularly to and further away from the line, in the limit we get inf_{x∈R^d} rd_S(x) = 0. But a positive result was obtained in [9] for sets which are there called almost round. Here we need this result for slightly more general parameter values. Similar to [9], in addition to compactness, we make the following two assumptions about the set S with respect to some given center x_0 ∈ R^d and angle α:
(A1) The set S contains sufficiently many points such that there is at least one point of S inside any cone with apex at x_0 and angle α; see Fig. 1.
(A2) The ratio η of the value of the roundness function to the mean radius r̄(x_0) := (r_max(x_0) + r_min(x_0))/2 is sufficiently small; the precise bound depends on α.
Small values of α in Assumption (A1) prevent "big holes" in the data. The ideal case α = 0 in particular corresponds to closed surfaces, for instance if S is the boundary of a polygon in the plane. Larger values of α force the ratio η in Assumption (A2) to be smaller. The next theorem shows that under these assumptions the global minimum exists and, moreover, that the search for a minimizer can be restricted to a ball around x_0.
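For a finite point set, the quantities r_max, r_min, and rd_S can be evaluated directly in O(n) time per center. A minimal sketch (the function name `roundness` and the (n, d) array layout are our own conventions, not from the paper):

```python
import numpy as np

def roundness(x, S):
    """Roundness function rd_S(x) = max_p ||x - p|| - min_q ||x - q|| for a
    finite point set S, stored as the rows of an (n, d) array.  One
    evaluation costs O(n) distance computations."""
    dist = np.linalg.norm(S - x, axis=1)
    return dist.max() - dist.min()
```

For a set of points lying exactly on a common sphere, the value at the sphere's center is zero, which is a convenient sanity check.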
Theorem 2.1 (cf. Lemma 1 in [9]) Under Assumptions (A1) and (A2), the roundness problem (RP) has a global minimum, which is attained at some center in the ball around x_0 with a radius ρ depending explicitly on α, η, and rd_S(x_0). If Assumption (A1) holds with α = 0, then Assumption (A2) is not necessary, and the assertion is valid with ρ := rd_S(x_0).
Proof At first, we note that Assumption (A2) implies 0 ≤ ρ < ∞ in case α > 0. Let x ∈ R^d be any center with ‖x − x_0‖ ≥ ρ. We will show that the inequality rd_S(x) ≥ rd_S(x_0) holds. Since the roundness function is continuous, its global minimum must then be attained on the ball around x_0 with radius ρ. By Assumption (A1) there exist p, q ∈ S with

⟨q − x_0, x − x_0⟩ / (‖q − x_0‖ · ‖x − x_0‖) ≥ cos(α/2)  and  ⟨p − x_0, −(x − x_0)⟩ / (‖p − x_0‖ · ‖x − x_0‖) ≥ cos(α/2).

We set t := ‖x − x_0‖ and estimate rd_S(x) from below by an increasing function f(t), built from r_max := r_max(x_0), r_min := r_min(x_0), and cos(α/2). (1) For t ≥ 0 the function f is increasing with f(ρ) = r_max − r_min = rd_S(x_0). For ‖x − x_0‖ = t ≥ ρ it follows that rd_S(x) ≥ f(t) ≥ f(ρ) = rd_S(x_0), which is what we wanted to show.

We will make use of this theorem in two ways. First, an appropriate initial center x_0 allows us to restrict the initial search space for global minimizers. For example, if α ≤ π/2 and η ≤ 0.4, then it suffices to search in a ball around x_0 with radius ρ < (3/2) · rd_S(x_0). This may be useful even if the roundness value rd_S(x_0) at the initial center is not very small. Second, by applying the theorem with x_0 being a global minimizer, we will derive complexity estimates for our search algorithm in the next section.
To ensure existence of the global minimum in general, it suffices to restrict the choice of possible centers to a compact set Q ⊂ R^d. Therefore, in the following, we consider the constrained roundness problem

rd(S|Q) := min_{x∈Q} rd_S(x).   (CRP)

According to Theorem 2.1, for almost round sets we can choose Q large enough so that solutions of (CRP) are also solutions of (RP). Note that the roundness function in general is not convex. Hence it may have several local minima, and the global minimum may be attained at different centers. Uniqueness of the optimal center cannot so easily be guaranteed, even for almost round sets. As a pathological example, consider the case that S consists of the origin together with a perfect sphere with radius r around the origin: then the roundness of S is rd(S) = r, and the global minimum is attained at every point in the ball with radius r/2 around the origin. Since we are mainly interested in the minimal objective value, we do not pursue the issue of uniqueness any further, but instead refer to [9,17] for a positive result in dimension d = 2 and a discussion of why, for almost round sets, it is unlikely in practice to encounter local minima which are not also global minima.

Roundness Evaluation Algorithm
We propose to solve the constrained roundness problem (CRP) with a global search algorithm, which adaptively divides the initial search space into smaller parts until an approximate global minimizer with the desired accuracy is found. It consists of the basic global search Algorithm 1 together with Algorithm 2 to speed up convergence. For reasons of simplicity and efficiency, we initially use a cube (square) Q in (CRP).

Basic Global Search
We denote a d-dimensional axis-parallel cube with center x ∈ R^d and edge length h > 0 by Q_h(x) := {y ∈ R^d : ‖y − x‖_∞ ≤ h/2}, and its set of 2^d vertices by V_h(x). The vertices in V_{h/2}(x) then coincide with the centers of the subcubes that we obtain by dividing Q_h(x) into 2^d subcubes with edge length h/2; see Fig. 2. We will prove the convergence of the algorithm with the help of the following lemma, which gives a sharp bound on the growth of the roundness function on a cube with respect to its center.

Lemma 3.1 For all y ∈ Q_h(x) it holds that rd_S(y) ≥ rd_S(x) − √d · h.

Proof To x we find p, q ∈ S such that ‖x − p‖ − ‖x − q‖ = rd_S(x). Using the triangle inequality, for y ∈ Q_h(x) we get rd_S(y) ≥ ‖y − p‖ − ‖y − q‖ ≥ (‖x − p‖ − ‖x − y‖) − (‖x − q‖ + ‖x − y‖) = rd_S(x) − 2‖x − y‖ ≥ rd_S(x) − √d · h, since ‖x − y‖ ≤ (√d/2) · h.

In each iteration, Algorithm 1 maintains a set C of current centers, whose cubes may still contain better minimizers. The center x̂ with the so-far minimal roundness value rd_S(x̂) is kept as the current approximate solution. In line 1 of Algorithm 1 we initialize with x̂ := x_0, h := h_0, and C := {x_0}. New candidate centers are obtained as follows: we start with C_new := ∅ (line 3) and run through all current centers x ∈ C (line 4). Based on Lemma 3.1, in line 5 we perform what we call a simple test to decide which of the current cubes can be discarded in the next iterations and which cubes have to be searched further. If the current center x ∈ C passes the test, then we add all centers x_new ∈ V_{h/2}(x) of the subcubes to C_new (lines 6, 7). Furthermore, if one of the new centers has a smaller roundness value than x̂, then we take it as the new approximate solution (lines 8, 9). In the next iteration, we replace C with the new centers (line 14) and halve the edge length h (line 15). The iterations are stopped as soon as the desired accuracy rd_S(x̂) ≤ rd(S|Q) + ε is reached, which by the next theorem is guaranteed to be the case if either the current edge length is small enough or no cubes remain to be searched (line 16).

Algorithm 1 Basic global search
Require: data S ⊂ R^d with roundness function rd_S, initial center x_0 ∈ R^d and edge length h_0 > 0, and accuracy ε > 0
Ensure: an approximate solution x̂ of (CRP) with rd_S(x̂) ≤ rd(S|Q) + ε
1: initialize x̂ := x_0, h := h_0, and C := {x_0}
2: repeat
3: C_new := ∅
4: for all x ∈ C do
5: if rd_S(x) < rd_S(x̂) + √d · h − ε then
6: for all x_new ∈ V_{h/2}(x) do
7: C_new := C_new ∪ {x_new}
8: if rd_S(x_new) < rd_S(x̂) then
9: x̂ := x_new
10: end if
11: end for
12: end if
13: end for
14: C := C_new
15: h := h/2
16: until √d · h ≤ ε or C = ∅

Theorem 3.1 Algorithm 1 stops after finitely many iterations, and its output x̂ is an approximate solution of (CRP) with rd_S(x̂) ≤ rd(S|Q) + ε.

Proof We denote the union of all cubes with current centers C and edge length h at the beginning of an iteration by R_{C,h} := ∪_{x∈C} Q_h(x). By Lemma 3.1, for all y ∈ R_{C,h} and corresponding center x ∈ C such that y ∈ Q_h(x), we have rd_S(x) ≤ rd_S(y) + √d · h; since the current approximate solution fulfills rd_S(x̂) ≤ rd_S(x), we infer that

rd_S(x̂) ≤ rd_S(y) + √d · h for all y ∈ R_{C,h}.   (2)

We prove inductively that on the complement of the region R_{C,h} we have

rd_S(y) ≥ rd_S(x̂) − ε for all y ∈ Q \ R_{C,h}.   (3)

(This also shows that outside of the region R_{C,h} the current approximate solution x̂ has already reached the desired accuracy, so that the search for new, potentially better, approximate minimizers can be restricted to the region R_{C,h}; see also Fig. 3.) Initially we have C = {x_0} and h = h_0 (cf. line 1 of Algorithm 1), so that R_{C,h} = Q_{h_0}(x_0) = Q. Hence, (3) holds trivially since Q \ R_{C,h} = ∅. Now we assume that (3) holds for the current centers C and edge length h at the beginning of the for-loop in line 4, and show that it also holds for R_{C_new,h/2} at the end (line 13) with the new centers C_new. For y ∉ R_{C,h} the inequality holds by the induction hypothesis. Otherwise, we have y ∈ R_{C,h}. According to the construction of the set of new centers C_new (lines 6, 7), this means that y ∈ Q_h(x) for some center x ∈ C which must have failed the test in line 5, i.e., rd_S(x) ≥ rd_S(x̂) + √d · h − ε. Hence, together with Lemma 3.1, in this case we also get rd_S(y) ≥ rd_S(x) − √d · h ≥ rd_S(x̂) − ε. Note that if C_new = ∅, then the new region R_{C_new,h/2} is empty too, so that by (3) we can stop iterating (line 16), because then x̂ already is an approximate minimizer with the desired accuracy.
By (2) and (3), we conclude that the current approximate solution fulfills

rd_S(x̂) ≤ rd(S|Q) + max{ε, √d · h},   (4)

so that we can also stop iterating as soon as the edge length h of the cubes is small enough, i.e., √d · h ≤ ε. Since at the end of the K-th iteration the edge length of the cubes is h = h_0/2^K, the desired accuracy is reached after at most K := ⌈log_2(√d · h_0/ε)⌉ iterations. The most costly operation is the evaluation of the roundness function for the encountered centers. In the worst case, no cubes can be discarded, i.e., after K iterations there are at most Σ_{k=0}^{K} (2^d)^k roundness function evaluations.

Lemma 3.2 We set c := cos(α/2), and for k = 0, …, K and integer lattice points z ∈ Z^d we define quantities g_k(z) as in (5). Let a_k be the number of lattice points z ∈ Z^d which satisfy the corresponding conditions, and let m_k denote the resulting bound. Then the number of roundness function evaluations in iteration k is at most m_k, i.e., after K iterations there are at most Σ_{k=0}^{K} m_k roundness function evaluations.

Proof Let x̂ be the current approximate solution and x ∈ C be any of the centers in line 4 of Algorithm 1 at the beginning of iteration k ≥ 1. The cube Q_h(x) is discarded if the test in line 5 fails, i.e., if

rd_S(x) ≥ rd_S(x̂) + √d · h − ε.   (6)

We will derive a sufficient condition for the test to fail.
Let us divide Q into equally sized subcubes, which have the current edge length h = h_0/2^k. The minimizer must lie in one of those cubes, say x_min ∈ Q_h(x̄) for some suitable center x̄ ∈ Q (not necessarily x̄ ∈ C). By Lemma 3.1, we get rd_S(x̄) ≤ rd_S(x_min) + √d · h. Furthermore, as was shown in (4) in the proof of Theorem 3.1, we have rd_S(x̂) ≤ rd(S|Q) + max{ε, √d · h}. By definition, K is the smallest integer with 2^K ≥ √d · h_0/ε, so that (8) and (7) yield the corresponding estimate. For the current center x ∈ C, we can write x − x̄ = h · z with some suitable z ∈ Z^d, and since x_min ∈ Q_h(x̄) we obtain a bound on ‖x − x_min‖ in terms of ‖z‖ and h. For ‖z‖ ≥ √d/2, we have t_z ≥ 0, so that we can argue as in the proof of Theorem 2.1 for x_min instead of x_0, and with the increasing function f(t) defined there in (1), we get a lower bound for rd_S(x). Hence, by (9) and (10), the inequality (6) is surely fulfilled whenever the condition expressed by the definition of g_k in (5) holds, and we conclude that the corresponding cube is discarded. Since the number of new centers is at most 2^d times the number of old centers, we infer that the number of roundness function evaluations in iteration k is at most m_k. Finally, because ε ≤ rd(S|Q) and at the end of iteration K we also have √d · h ≤ ε, we infer from the first inequality in (8) that the approximate solution at the end of iteration K fulfills rd_S(x̂) ≤ 2 · rd(S|Q). Some of the results that we computed with the help of Lemma 3.2 are given in Table 1, where the values in brackets correspond to the order of the general worst-case estimates if no cubes were discarded (note that the worst-case estimate depends on K and thus on η, but not on α). Lemma 3.2 explains well the good initial behavior of Algorithm 1, which only uses the simple test in line 5 to decide which cubes can be discarded. Moreover, it shows that it is not really important that the initial search space be small, because the algorithm itself rapidly decreases the size of the initial search space.
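The basic global search can be sketched compactly for finite point sets. The following is our own illustrative transcription (function names, the caching of roundness values, and the demo set are not from the paper), with the simple test of line 5 and the stopping rule of line 16 as described above:

```python
import itertools
import numpy as np

def basic_global_search(rd, x0, h0, eps, max_iter=60):
    """Sketch of Algorithm 1.  `rd` is the roundness function; the simple
    test keeps a cube only if rd(x) < rd(x_best) + sqrt(d)*h - eps, and the
    loop stops when sqrt(d)*h <= eps or no cubes remain."""
    x0 = np.asarray(x0, dtype=float)
    d, sqd = len(x0), np.sqrt(len(x0))
    x_best, v_best = x0, rd(x0)
    C, h = [(x0, v_best)], h0              # centers with cached values
    for _ in range(max_iter):
        C_new = []
        for x, v in C:
            if v < v_best + sqd * h - eps:          # simple test (line 5)
                for o in itertools.product((-h / 4, h / 4), repeat=d):
                    x_new = x + np.array(o)         # subcube center
                    v_new = rd(x_new)
                    C_new.append((x_new, v_new))
                    if v_new < v_best:              # better center found
                        x_best, v_best = x_new, v_new
        C, h = C_new, h / 2
        if not C or sqd * h <= eps:                 # stopping rule (line 16)
            break
    return x_best, v_best

# demo: four points on the unit circle, optimal center at the origin
S = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
rd = lambda x: np.linalg.norm(S - x, axis=1).max() - np.linalg.norm(S - x, axis=1).min()
x_best, v_best = basic_global_search(rd, [0.3, 0.3], 1.0, 1e-6)
```

Caching the roundness value alongside each center reflects that function evaluations are the most costly operation, so no center is evaluated twice.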

Acceleration of the Basic Global Search
To speed up the convergence by increasing the number of discarded cubes, we use Algorithm 1 together with Algorithm 2.
Algorithm 2 Acceleration of the basic global search: replace the if-block in lines 5–12 of Algorithm 1 by the following. Algorithm 2 augments the simple test by a second one, which exploits the special structure of the roundness function (cf. Fig. 4). If the simple test succeeded for the current center x, then, instead of definitely keeping all 2^d subcubes for the next iteration, the second test in line 5 of Algorithm 2 decides whether some of the subcubes can be discarded as well.

Lemma 3.3 If the additional test in line 5 of Algorithm 2 fails, then the subcube Q h/2 (x new ) can be discarded in subsequent iterations.
Proof If the test in line 5 fails, then we have m ≥ 0 and m ≥ rd_S(x̂) − ε. By the definition of m (line 4), all vertices v ∈ V_{h/2}(x_new) of the subcube Q_{h/2}(x_new) lie in the set H := {y ∈ R^d : ‖y − p‖ − ‖y − q‖ ≥ m}, which for m ≥ 0 is either a hyperboloid or a halfspace, and thus convex. This implies that the whole subcube Q_{h/2}(x_new) is contained in H. Hence, for all y ∈ Q_{h/2}(x_new) we get rd_S(y) ≥ ‖y − p‖ − ‖y − q‖ ≥ m ≥ rd_S(x̂) − ε, i.e., the subcube Q_{h/2}(x_new) can be discarded in subsequent iterations.
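The discard criterion of Lemma 3.3 can be sketched as follows, assuming p and q are the data points chosen for the parent center in line 2 of Algorithm 2 (the function name and argument layout are our own, hypothetical conventions):

```python
import itertools
import numpy as np

def can_discard_subcube(x_new, h, p, q, v_best, eps):
    """Second test of Algorithm 2 (cf. Lemma 3.3): compute m := min over the
    vertices v of Q_{h/2}(x_new) of ||v - p|| - ||v - q||.  If m >= 0 and
    m >= v_best - eps, the whole subcube lies in the convex set
    {y : ||y - p|| - ||y - q|| >= m} and can be discarded."""
    x_new = np.asarray(x_new, dtype=float)
    d = len(x_new)
    vertices = (x_new + np.array(o)
                for o in itertools.product((-h / 4, h / 4), repeat=d))
    m = min(np.linalg.norm(v - p) - np.linalg.norm(v - q) for v in vertices)
    return m >= 0 and m >= v_best - eps
```

Note that only the cheap values ‖v − p‖ − ‖v − q‖ are evaluated here, not the full roundness function.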
We point out that we do not evaluate the whole costly roundness function at the vertices, but only the values ‖v − p‖ − ‖v − q‖ for the single pair p, q ∈ S chosen in line 2 of Algorithm 2. Furthermore, since the subcubes have several vertices in common, and the value at the center x is already known, it actually suffices to compute the values ‖v − p‖ − ‖v − q‖ at only part of the 3^d grid points formed by the vertices of all 2^d subcubes. Obviously, the assertions of Theorem 3.1 and Lemma 3.2 remain valid if we use Algorithm 1 together with Algorithm 2. But since the second test uses the smaller value rd_S(x̂) − ε instead of rd_S(x̂) + √d · h − ε, potentially more cubes can be discarded, especially in the final iterations, where the simple functions y ↦ ‖y − p‖ − ‖y − q‖ with fixed p, q ∈ S are locally good approximations of the roundness function. This claim is supported by the next lemma and our numerical experiments.

Lemma 3.4 Let r̄ be the mean radius at a minimizer x_min of (CRP). We make the following assumptions about the ratio η := rd(S|Q)/r̄ and the desired accuracy ε: both are sufficiently small, as quantified in (11) and (12). If the current edge length h in line 1 of Algorithm 2 is small enough, and the distance of the current center x to its corresponding data point q ∈ S chosen in line 2 is large enough such that ‖x − q‖ ≥ r̄/2, then at least one subcube of Q_h(x) will be discarded by the test in line 5. In particular, the condition ‖x − q‖ ≥ r̄/2 is automatically fulfilled if x passed the test in line 1, and if Assumption (A1) holds with α ≤ π/2 for x_min in place of x_0.
Proof As shown in (4) in the proof of Theorem 3.1, the current approximate solution fulfills rd_S(x̂) ≤ rd(S|Q) + max{ε, √d · h}. Hence, (11) and (12) imply rd_S(x̂) ≤ 2 · rd(S|Q). If the current center x passed the first test in line 1 of Algorithm 2, then it follows that rd_S(x) < 3 · rd(S|Q). (13)

We show that the second test in line 5 fails for the center x_new of the subcube which lies in the halfspace H := {y ∈ R^d : ⟨(x − p)/‖x − p‖ − (x − q)/‖x − q‖, y − x⟩ ≥ 0} (note that at least one of the subcubes must lie in H, because the center x lies in H). Each vertex v ∈ V_{h/2}(x_new) can be written as v = x + t · u with some t ≥ 0 and direction u as in (14). We set h(t) := ‖x + t · u − p‖ − ‖x + t · u − q‖. To see that the test in line 5 fails, it is then sufficient to show that h(t) ≥ rd_S(x) − ε (recall that rd_S(x) − ε ≥ rd(S|Q) − ε ≥ 0). By Taylor expansion around t = 0, there exist some t̃ ∈ (0, t) and a function g(t) such that (15) holds. We will derive lower bounds for the summands in the Taylor expansion. From (11) and (12) we infer (16), which implies t/‖x − q‖ ≤ (r̄/10)/(r̄/2) = 1/5. It follows that the second summand in (15) is non-negative, because t ≥ 0 and by (14) we have 0 ≤ ⟨(x − p)/‖x − p‖ − (x − q)/‖x − q‖, u⟩ ≤ 2. Similarly, we see that in the third summand the whole expression in the big brackets is less than or equal to 1, and by (13) and (16) we can then estimate the third summand from below. Hence we arrive at (17). An elementary but lengthy calculation provides a corresponding estimate for g(t̃). From (16) and (12) we infer that ‖x − q + t · u‖ ≥ 2r̄/5. Since ‖x − p‖ ≥ ‖x − q‖, we likewise get ‖x − p + t · u‖ ≥ 2r̄/5. Together with (11) and (16) we finally get the desired lower bound for h(t) in (17), i.e., h(t) ≥ rd_S(x) − ε.

It remains to show that the condition ‖x − q‖ ≥ r̄/2 is fulfilled if Assumption (A1) holds with α ≤ π/2 for x_min in place of x_0. At first we note that by (11) we have (18). As we have seen above, (13) holds if x passed the first test in line 1 of Algorithm 2.
We will show that this implies (19), because together with (18) we can then indeed conclude that ‖x − q‖ ≥ r̄/2. Let us define ρ as in Theorem 2.1 with x_min in place of x_0. Condition (11) implies η ≤ 0.3, so that for α ≤ π/2 we have ρ ≤ 9r̄/20. Now assume that (19) does not hold, which implies ‖x − x_min‖ > 9r̄/20 ≥ ρ. But then a similar argument as in the proof of Theorem 2.1 leads to the estimate rd_S(x) ≥ 3 · rd_S(x_min) = 3 · rd(S|Q), which contradicts (13).

Initial Center
In many applications, an adequate initial center is at hand. But if none is given, we compute an initial center for finite datasets S = {p_1, …, p_n} by solving

min_{x ∈ R^d, r ≥ 0} Σ_{i=1}^n (‖p_i − x‖² − r²)².

In [18] it was shown that this problem is equivalent to the following linear least squares problem by setting t := r² − ‖x‖²:

min_{x ∈ R^d, t ∈ R} Σ_{i=1}^n (2⟨p_i, x⟩ + t − ‖p_i‖²)².   (LSQ)
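A minimal sketch of this least squares fit, assuming the substitution t := r² − ‖x‖² as stated (the function name and array conventions are our own):

```python
import numpy as np

def initial_center(S):
    """Fit a sphere to the rows of S in the least squares sense: with
    t := r^2 - ||x||^2, the fit min sum_i (||p_i - x||^2 - r^2)^2 becomes
    a linear least squares problem with rows [2 p_i, 1] and right-hand
    side ||p_i||^2."""
    S = np.asarray(S, dtype=float)
    A = np.hstack([2.0 * S, np.ones((len(S), 1))])   # rows [2 p_i, 1]
    b = (S ** 2).sum(axis=1)                          # ||p_i||^2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    x, t = sol[:-1], sol[-1]
    r = np.sqrt(max(t + x @ x, 0.0))                  # recover r from t
    return x, r
```

For points lying exactly on a common circle or sphere, the fit recovers the center and radius exactly, which makes it a natural choice of x_0 and mean radius r_0.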

Numerical Experiments
We have implemented Algorithm 1 together with Algorithm 2 in Matlab (R2017b), and test its performance with different kinds of sets S.

Random Sets with Predefined Roundness and Unique Optimal Center
At first we generate random sets of test points, but with predefined roundness and unique optimal center. The following lemma explains the construction of such sets.

Lemma 4.1 Let δ > 0 and let u, v be suitable unit vectors. For all i, j ∈ {1, …, d} with i ≠ j, and signs σ_u, σ_v ∈ {−1, +1}, construct the points p_{i,j,σ_u,σ_v,±1}. Then all points p_{i,j,σ_u,σ_v,−1} lie on the sphere with radius r_min := 1 + (d/4) · δ² − δ/2 around the origin, and all points p_{i,j,σ_u,σ_v,+1} lie on the sphere with radius r_max := 1 + (d/4) · δ² + δ/2 around the origin; cf. Fig. 5. The set S of all these points has roundness rd(S) = δ with the origin as unique optimal center. Obviously, any superset of S obtained by adding arbitrarily many points p ∈ R^d with r_min ≤ ‖p‖ ≤ r_max to S then also has roundness δ with the origin as unique optimal center.
Proof Since rd S (0) = r max − r min = δ, we have rd(S) ≤ δ, and the assertion follows by proving that there can be no center x ∈ R d with rd S (x) < δ.
We define the functions h, h_{i,j,σ_u,σ_v} : R^d → R as follows. The two points p_{i,j,σ_u,σ_v,−1} and p_{i,j,σ_u,σ_v,+1} are foci of the hyperboloid defined by the equation h_{i,j,σ_u,σ_v}(x) = 0, which contains the origin (this can easily be confirmed by elementary calculations). The function h is quadratic and strictly convex with gradient ∇h(x) = 8 · x and Hessian ∇²h(x) = 8 · I with the d×d identity matrix I; hence h has the origin as unique minimizer. Now assume that there exists a center x̃ ∈ R^d with rd_S(x̃) < δ. This implies h(x̃) < h(0), which is a contradiction to the origin being the unique minimizer of h.
We create examples according to Lemma 4.1 for different dimensions d and numbers n of data points (where random points are added to the points p_{i,j,σ_u,σ_v,σ_w}). In all cases, our algorithm is run with accuracy rd(S)/100, initial center x_0 = (0.3, …, 0.3), and initial edge length h_0 = 1. The initial roundness is rd_S(x_0) ≈ 1. For each value of d, n, and rd(S), we compute the average number of roundness function evaluations, taken over 100 runs with different random points.
At first, we examine the behavior for increasing dimension with a fixed number n = 1000 of data points. The results are displayed in Table 2. The values in brackets are obtained with the simple test of Algorithm 1 only (which only worked well for d ≤ 5), and the values without brackets are obtained together with Algorithm 2. This confirms that Algorithm 2 greatly accelerates the basic global search. On our PC (Intel(R) Core(TM) i7 CPU with 2.8 GHz and 12 GB RAM), the average computing times with Algorithm 2 are negligible for d = 2, 3, less than 0.1 s for d = 4, 5, about 3 s for d = 7, about 20 s for d = 8, and about 3 min for d = 9. In the following we always use Algorithm 1 together with Algorithm 2.
Next we examine the behavior for an increasing number of data points with fixed dimension and roundness rd(S) = 0.01. As can be seen in Fig. 6, the average number of roundness function evaluations is almost constant.

Publicly Available Test Data
We check the performance of our algorithm for the publicly available 2D datasets with CMM data (Cartesian coordinates obtained with a coordinate measuring machine) that were used in Table 5 of [19], where a comparative study of existing roundness evaluation algorithms was carried out. The number of data points is rather small, ranging from n = 16 to n = 32. The roundness values obtained in [19] range from 0.03 [mm] to 36 [mm], and are given with 4 decimal digits. Therefore we run our algorithm with accuracy ε = 10^{−6}. We compute an initial center x_0 by solving the linear least squares problem (LSQ), and choose the initial edge length h_0 in two ways: once as h_0 = r_0 with the mean radius r_0 at the initial center, and once as h_0 = 2ρ with the radius ρ according to Theorem 2.1. In all cases, we obtain the same minimal roundness values as given in [19], and the computing times are negligible. Table 3 lists the initial roundness value rd_S(x_0), the computed roundness value rd_S(x̂), the ratio η at the approximate minimizer, the initial edge length h_0, and the number of roundness function evaluations, where the values without brackets correspond to the choice h_0 = 2ρ, and the values in brackets correspond to the choice h_0 = r_0. With the smaller initial edge length fewer function evaluations are needed, but, as expected, the difference is not that great.

Polygonal Chains and Triangulations
Here we consider examples for the case that S is not a set of finitely many points, but a union of simple sets like line segments or triangles. The roundness function for such sets can be evaluated easily: at first, we recall that for any compact polyhedral set P the maximum distance max_{p∈P} ‖x − p‖ of a given center x to any point of P is attained at one of the vertices of P. Consequently, this also holds for line segments and triangles.
Let L := {v + t · u : t ∈ [0, 1]} be a line segment with v, u ∈ R^d. Then the minimum distance of x to any point of L can be computed as min_{t∈[0,1]} ‖x − (v + t · u)‖, and is attained at v + t* · u with t* = min{1, max{0, ⟨x − v, u⟩/‖u‖²}}.
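A direct transcription of this formula (function name and return convention are our own):

```python
import numpy as np

def dist_to_segment(x, v, u):
    """Minimum distance from x to L = {v + t*u : t in [0, 1]}, attained at
    t = min(1, max(0, <x - v, u> / ||u||^2)).  Returns the distance and t."""
    t = min(1.0, max(0.0, float(np.dot(x - v, u)) / float(np.dot(u, u))))
    return np.linalg.norm(x - (v + t * u)), t
```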
Now let D := {v + t_1 · u_1 + t_2 · u_2 : t_1, t_2 ≥ 0, t_1 + t_2 ≤ 1} be a triangle with v, u_1, u_2 ∈ R^d. Then the minimum distance of x to any point of D can be computed as follows. Let (t̂_1, t̂_2) be the solution of the unconstrained linear least squares problem min_{t_1,t_2} ‖x − (v + t_1 · u_1 + t_2 · u_2)‖. If it fulfills t̂_1, t̂_2 ∈ [0, 1] and t̂_1 + t̂_2 ≤ 1, then the minimum distance m is attained at p̂ := v + t̂_1 · u_1 + t̂_2 · u_2; otherwise, it is attained at one of the edges of D (which are line segments as above). This also shows that the roundness function can be evaluated in O(n) time, where n is the number of vertices of S.
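A sketch of this procedure, falling back to the three edges when the unconstrained solution violates the constraints (names and the triangle parameterization by a vertex and two edge vectors are our own conventions):

```python
import numpy as np

def dist_to_triangle(x, v, u1, u2):
    """Minimum distance from x to the triangle {v + t1*u1 + t2*u2 :
    t1, t2 >= 0, t1 + t2 <= 1}: solve the unconstrained least squares
    problem first; if its solution violates the constraints, the minimum
    is attained on one of the three edges."""
    A = np.column_stack([u1, u2])
    t, *_ = np.linalg.lstsq(A, x - v, rcond=None)
    if t[0] >= 0 and t[1] >= 0 and t[0] + t[1] <= 1:
        return np.linalg.norm(x - (v + A @ t))

    def dist_to_edge(a, u):           # fall back to the edges (segments)
        s = min(1.0, max(0.0, float(np.dot(x - a, u)) / float(np.dot(u, u))))
        return np.linalg.norm(x - (a + s * u))

    return min(dist_to_edge(v, u1), dist_to_edge(v, u2),
               dist_to_edge(v + u1, u2 - u1))
```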
In the second example, the 63 vertices v_{i,j} of the triangulation S shown in Fig. 9 lie exactly on the sphere with radius 1 around the origin, and have coordinates v_{i,j} = (cos(φ_i) · sin(θ_j), sin(φ_i) · sin(θ_j), cos(θ_j)) with φ_i = i · π/24 for i = 0, …, 8, and θ_j = π/4 + j · π/24 for j = 0, …, 6. We run our algorithm with initial center x_0 = (0.3, 0.3, 0.3) and initial edge length h_0 = r_0 = 0.6482. The initial roundness is rd_S(x_0) = 0.3406. With accuracy ε = 10^{−6} we obtain the origin as approximate optimal center. These examples further demonstrate that the algorithm also copes well in cases where data are given only on a part of a sphere (circle), so that Assumption (A1) is not fulfilled.

Conclusions
We have analyzed a global search algorithm for the solution of the roundness problem. In each iteration, it uses two tests to reduce the search region. For almost round sets, the first test leads to relatively small upper bounds on the number of roundness function evaluations that are needed to find an approximate global minimizer whose roundness function value is not larger than twice the minimal value. The second test is shown to improve the performance in the final iterations to reach a better accuracy. Since our numerical experiments indicate that the improvement in practice is even much better than predicted by the theoretical analysis, it would be interesting to know whether the theoretical result can be strengthened. Furthermore, similar strategies might be used to formulate and analyze global search algorithms for related problems like cylindricity or flatness problems under appropriate assumptions on the data.

Notation
For vectors x = (x_1, …, x_d), y = (y_1, …, y_d) ∈ R^d we denote their scalar product by ⟨x, y⟩ := Σ_{i=1}^d x_i · y_i, and the Euclidean norm by ‖x‖ := (Σ_{i=1}^d x_i²)^{1/2}. By introducing the roundness function for a given set S ⊂ R^d, rd_S(x) := max_{p∈S} ‖x − p‖ − min_{q∈S} ‖x − q‖, the problem of computing the roundness rd(S) of S may be formulated as the unconstrained optimization problem rd(S) := min_{x∈R^d} rd_S(x). (RP)

Fig. 1
Fig. 1 Left. Blue: data points of an almost round set S. Red: each cone with apex at x_0 and angle α contains at least one point of S. Right. Typical graph of the roundness function rd_S(x) of an almost round set S

Fig. 2
Fig. 2 Black: cube Q_h(x) with center x and edge length h. Red: vertices v_i ∈ V_{h/2}(x) are the centers of the subcubes with edge length h/2

Fig. 3
Fig. 3 Left. Blue: data points S, approximately on a circular arc. Red: currently best center x̂ after some iterations with Algorithm 1, and region R_{C,h} where centers x with rd_S(x) < rd_S(x̂) − ε may be found. Right. Typical graph of the roundness function rd_S(x) of an approximate circular arc S

Fig. 4
Fig. 4 Red: subcubes Q_{h/2}(x_new) of Q_h(x) and their vertices. Blue: data points p, q ∈ S and region with rd_S(y) ≥ ‖y − p‖ − ‖y − q‖ ≥ rd_S(x̂) − ε for all y ∈ Q_{h/2}(x_{new,2}), so that Q_{h/2}(x_{new,2}) can be discarded

Fig. 6
Fig. 6 Average number of roundness function evaluations for rd(S) = 0.01 and an increasing number of data points

Fig. 8
Fig. 8 Blue: the vertices of the polygonal chain S lie exactly on a circle. Red: computed optimal center x̂

Fig. 9
Fig. 9 The vertices of the triangulation S lie exactly on a sphere

Table 1
Maximum number of roundness function evaluations after K iterations, computed with the help of Lemma 3.2

Table 2
Average number of roundness function evaluations for examples created according to Lemma 4.1