# Approximating Tverberg Points in Linear Time for Any Fixed Dimension


## Abstract

Let \(P \subseteq \mathbb{R }^d\) be a \(d\)-dimensional \(n\)-point set. A *Tverberg partition* is a partition of \(P\) into \(r\) sets \(P_1, \dots , P_r\) such that the convex hulls \(\hbox {conv}(P_1), \dots , \hbox {conv}(P_r)\) have non-empty intersection. A point in \(\bigcap _{i=1}^{r} \hbox {conv}(P_i)\) is called a *Tverberg point* of depth \(r\) for \(P\). A classic result by Tverberg shows that there always exists a Tverberg partition of size \(\lceil n/(d+1) \rceil \), but it is not known how to find such a partition in polynomial time. Therefore, approximate solutions are of interest. We describe a deterministic algorithm that finds a Tverberg partition of size \(\lceil n/4(d+1)^3 \rceil \) in time \(d^{O(\log d)} n\). This means that for every fixed dimension we can compute an approximate Tverberg point (and hence also an approximate *centerpoint*) in *linear* time. Our algorithm is obtained by combining a novel lifting approach with a recent result by Miller and Sheehy (Comput Geom Theory Appl 43(8):647–654, 2010).

## Keywords

Discrete geometry · Tverberg theorem · Centerpoint · Approximation · High dimension

## 1 Introduction

In many applications (such as statistical analysis or finding sparse geometric separators in meshes) we would like to have a way to generalize the one-dimensional notion of a median to higher dimensions. A natural way to do this uses the notion of *halfspace depth* (or *Tukey depth*).

### **Definition 1.1**

Let \(P \subseteq \mathbb{R}^d\) be an \(n\)-point set and let \(c \in \mathbb{R}^d\). The *halfspace depth of* \(c\) *with respect to* \(P\) is the minimum number of points of \(P\) contained in any closed halfspace that contains \(c\), i.e., \(\min \{ |P \cap h| : h \text{ a closed halfspace with } c \in h \}\). The *halfspace depth of* \(P\) is the maximum halfspace depth that any point \(c \in \mathbb{R}^d\) can achieve.

A classic result in discrete geometry, the centerpoint theorem, states that for every \(d\)-dimensional point set \(P\) with \(n\) points, there exists a *centerpoint*, i.e., a point \(c \in \mathbb{R}^d\) with halfspace depth at least \(n/(d+1)\) [6, 15]. There are point sets for which this bound cannot be improved.

However, if we actually want to compute a centerpoint for a given point set efficiently, the situation becomes more involved. For \(d=2\), a centerpoint can be found deterministically in linear time [9]. For general \(d\), we can compute a centerpoint in \(O(n^d)\) time using linear programming, since Helly’s theorem implies that the set of all centerpoints can be described as the intersection of \(O(n^d)\) halfspaces [7]. Chan [2] shows how to improve this running time to \(O(n^{d-1})\) with the help of randomization. He actually solves the apparently harder problem of finding a point with maximum halfspace depth. If the dimension is not fixed, a result by Teng shows that it is coNP-hard to check whether a given point is a centerpoint [17].

However, as \(d\) grows, a running time of \(n^{\varOmega (d)}\) is not feasible. Hence, it makes sense to look for faster approximate solutions. A classic approach uses \(\varepsilon \)-approximations [3]: in order to obtain a point of halfspace depth \(n(1/(d+1) - \varepsilon )\), take a random sample \(A \subseteq P\) of size \(O((d/\varepsilon ^2) \log (d/\varepsilon ))\) and compute a centerpoint for \(A\) via linear programming. This gives the desired approximation with constant probability, and the running time after the sampling step is constant for fixed \(d\). What more could we possibly wish for? For one, the algorithm is Monte Carlo: with a certain probability, the reported point fails to be a centerpoint, and we know of no fast algorithm to check its validity. This problem can be solved by constructing the \(\varepsilon \)-approximation deterministically [3], at the expense of a more complicated algorithm. Nonetheless, in either case the resulting running time grows exponentially with \(d\), an undesirable feature for large dimensions.

This situation motivated Clarkson et al. [4] to look for more efficient randomized algorithms for approximate centerpoints. They give a simple probabilistic algorithm that computes a point of halfspace depth \(\varOmega (n/(d+1)^2)\) in time \(O(d^2(d \log n + \log (1/\delta ))^{\log (d+2)})\), where \(\delta \) is the error probability. They also describe a more sophisticated algorithm that finds such a point in time polynomial in \(n, d\), and \(\log (1/\delta )\). Both algorithms are based on a repeated algorithmic application of Radon’s theorem (see below). Unfortunately, there remains a probability of \(\delta \) that the result is not correct, and we do not know how to detect failure efficiently.

Thus, more than ten years later, Miller and Sheehy [13] launched a new attack on the problem. Their goal was to develop a deterministic algorithm for approximating centerpoints whose running time is subexponential in the dimension. For this, they use a different proof of the centerpoint theorem that is based on a result by Tverberg: any \(d\)-dimensional \(n\)-point set can be partitioned into \(r = \lceil n/(d+1) \rceil \) sets \(P_1, \dots , P_r\) such that the convex hulls \(\hbox {conv}(P_1), \dots , \hbox {conv}(P_r)\) have nonempty intersection. Such a partition is called a *Tverberg partition* of \(P\). By convexity, any point in \(\bigcap _{i=1}^{r} \hbox {conv}(P_i)\) must be a centerpoint.

We say that a point \(c \in \mathbb{R}^d\) has *Tverberg depth* \(r^{\prime }\) with respect to \(P\) if there is a partition of \(P\) into \(r^{\prime }\) sets such that \(c\) lies in the convex hull of each set. We also call \(c\) an *approximate Tverberg point* (of depth \(r^{\prime }\)); see Fig. 1.

Miller and Sheehy describe how to find \(\lceil n/2(d+1)^2\rceil \) disjoint subsets of \(P\) and a point \(c \in \mathbb{R }^d\) such that each subset contains \(d+1\) points and has \(c\) in its convex hull. Hence, \(c\) constitutes an approximate Tverberg point for \(P\) (and thus also an approximate centerpoint), and the subsets provide a certificate for this fact. The algorithm is deterministic and runs in time \(n^{O(\log d)}\). At the same time, it is the first algorithm that also finds an approximate Tverberg partition of \(P\). The running time is subexponential in \(d\), but it is still the case that \(n\) is raised to a power that depends on \(d\), so the parameters \(n\) and \(d\) are not separated in the running time.
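Such a certificate is straightforward to check. The following sketch is our own illustration (function name and tolerance are not from the paper); it verifies that the sets are pairwise disjoint and that each stored convex combination reproduces the claimed point:

```python
import numpy as np

def verify_certificate(P, c, parts, tol=1e-8):
    """Check a claimed Tverberg certificate for P: `parts` is a list of
    (index_list, weight_list) pairs. Each part must use disjoint indices,
    carry nonnegative weights summing to 1, and witness c in its hull."""
    P, c = np.asarray(P, dtype=float), np.asarray(c, dtype=float)
    seen = set()
    for idx, wts in parts:
        if seen & set(idx):          # sets must be pairwise disjoint
            return False
        seen |= set(idx)
        w = np.asarray(wts, dtype=float)
        if w.min() < -tol or abs(w.sum() - 1.0) > tol:
            return False             # not a convex combination
        if not np.allclose(w @ P[list(idx)], c, atol=tol):
            return False             # combination does not reproduce c
    return True
```

Checking a certificate with \(r\) sets of at most \(d+1\) points each takes only \(O(r d^2)\) arithmetic operations, which is what makes the certificate valuable: the reported point can be validated quickly, even though testing centerpoint depth directly is coNP-hard in high dimensions.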

In this paper, we show that the running time for finding approximate Tverberg partitions (and hence approximate centerpoints) can be improved. In particular, we show how to find a Tverberg partition with \(\lceil n/4(d+1)^3 \rceil \) sets in deterministic time \(d^{O(\log d)} n\). This is linear in \(n\) for any fixed dimension, and the dependence on \(d\) is only quasipolynomial.

### 1.1 Some Discrete Geometry

We begin by recalling some basic facts and definitions from discrete geometry [11]. A classic fact about convexity is Radon’s theorem.

### **Theorem 1.2**

(Radon’s theorem) For any \(P \subseteq \mathbb{R }^d\) with \(d+2\) points there exists a partition \((P_1, P_2)\) of \(P\) such that \(\mathrm{conv}(P_1) \cap \mathrm{conv}(P_2) \ne \emptyset \).
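Radon's theorem is constructive: an affine dependency among the \(d+2\) points, split by sign, yields the partition. A minimal numpy sketch (our own illustration, not code from the paper):

```python
import numpy as np

def radon_partition(P):
    """Radon partition of m = d+2 points in R^d. Returns (I1, I2, x)
    with x in conv(P[I1]) ∩ conv(P[I2])."""
    P = np.asarray(P, dtype=float)
    m, d = P.shape
    # affine dependency: beta != 0 with sum(beta) = 0 and sum beta_i p_i = 0
    A = np.vstack([P.T, np.ones(m)])    # (d+1) x m, rank <= d+1 < m
    _, _, vt = np.linalg.svd(A)
    beta = vt[-1]                       # a null vector of A
    pos = beta > 0
    s = beta[pos].sum()
    x = (beta[pos] / s) @ P[pos]        # the Radon point
    return np.flatnonzero(pos), np.flatnonzero(~pos), x
```

Since \(\sum_i \beta_i = 0\) and \(\sum_i \beta_i p_i = 0\), the positive and the negative parts of \(\beta\), each rescaled to sum to one, give convex combinations of two disjoint subsets that both equal \(x\).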

As mentioned above, Tverberg [18] generalized this theorem for larger point sets.

### **Theorem 1.3**

(Tverberg’s theorem) Any set \(P \subseteq \mathbb{R }^d\) with \(n = (r-1)(d+1) + 1\) points can be partitioned into \(r\) sets \(P_1, \dots , P_r\) such that \(\bigcap _{i=1}^r \mathrm{conv}(P_i) \ne \emptyset \).

Let \(P\) be a set of \(n\) points in \(\mathbb{R }^d\). We say that \(x \in \mathbb{R }^d\) has *Tverberg depth* \(r\) (with respect to \(P\)) if there is a partition of \(P\) into sets \(P_1, \dots , P_r\) such that \(x \in \bigcap _{i=1}^r \hbox {conv}(P_i)\). Tverberg’s theorem thus states that, for any set \(P\) in \(\mathbb{R }^d\), there is a point of Tverberg depth at least \(\lfloor (n-1)/(d+1) + 1 \rfloor = \lceil n/(d+1)\rceil \). Note that every point with Tverberg depth \(r\) also has halfspace depth \(r\). Thus, from now on we will use the term *depth* as a shorthand for Tverberg depth. As remarked above, Tverberg’s theorem immediately implies the famous centerpoint theorem [11]:

### **Theorem 1.4**

(Centerpoint theorem) For any set \(P\) of \(n\) points in \(\mathbb{R }^d\) there is a point \(c\) such that all halfspaces containing \(c\) contain at least \(\lceil n/(d+1) \rceil \) points from \(P\).

Finally, another classic theorem will be useful for us.

### **Theorem 1.5**

(Carathéodory’s theorem) Suppose that \(P\) is a set of \(n\) points in \(\mathbb{R }^d\) and \(x \in \mathrm{conv}(P)\). Then there is a set of \(d+1\) points \(P^{\prime } \subseteq P\) such that \(x \in \mathrm{conv}(P^{\prime })\).

This means that, in order to describe a Tverberg partition of depth \(r\), we need only \(r(d+1)\) points from \(P\). This observation is also used by Miller and Sheehy [13]. They further note that it takes \(O(d^3)\) time to replace \(d+2\) points by \(d+1\) points using Gaussian elimination. We refer to the process of replacing larger sets by sets of size \(d+1\) as *pruning* (see Lemma 2.2).

### 1.2 Our Contribution

We now describe our results in more detail. In Sect. 2, we present a simple lifting argument which leads to an easy Tverberg approximation algorithm.

### **Theorem 1.6**

Let \(P\) be a set of \(n\) points in \(\mathbb{R }^d\) in general position. One can compute a Tverberg point of depth \(\lceil n/2^d \rceil \) for \(P\) and the corresponding partition in time \(d^{O(1)} n\).

While this does not yet give a good approximation ratio (though constant for any fixed \(d\)), it is a natural approach to the problem: it computes a higher dimensional Tverberg point via successive median partitions—just as a Tverberg point is a higher dimensional generalization of the \(1\)-dimensional median.

By collecting several low-depth points and afterwards applying the brute-force algorithm on small point sets, we get an even higher depth in linear time for any *fixed* dimension:

### **Theorem 1.7**

Let \(P\) be a set of \(n\) points in \(\mathbb{R }^d\). Then one can find a Tverberg point of depth \(\lceil n/2(d+1)^2 \rceil \) and a corresponding partition in time \(f(2^{d+1}) + d^{O(1)} n\), where \(f(m)\) is the time to compute a Tverberg point of depth \(\lceil m/(d+1) \rceil \) for \(m\) points by brute force.

Finally, by combining our approach with that of Miller and Sheehy, we can improve the running time to be quasipolynomial in \(d\):

### **Theorem 1.8**

Let \(P\) be a set of \(n\) points in \(\mathbb{R }^d\). Then one can compute a Tverberg point of depth \(\lceil n/4(d+1)^3 \rceil \) and a corresponding pruned partition in time \(d^{O(\log d)} n\).

In Sect. 4, we compare these results to the Miller–Sheehy algorithm and its extensions.

## 2 A Simple Fixed-Parameter Algorithm

We now present a simple algorithm that runs in linear time for any fixed dimension and computes a point of depth \(\lceil n/2^d \rceil \). For this, we show how to compute a Tverberg point by recursion on the dimension. As a byproduct, we obtain a quick proof of a weaker version of Tverberg’s theorem. First, however, we give a few more details about the basic operations performed by our algorithm.

### 2.1 Basic Operations

Our algorithm builds a Tverberg partition for a \(d\)-dimensional point set \(P\) by recursion on the dimension. In each step, we store a Tverberg partition for some point set, together with an approximate Tverberg point \(c\). We have for each set \(P_i\) in the partition a convex combination that witnesses \(c \in \hbox {conv}(P_i)\). All the points that arise during our algorithms are obtained by repeatedly taking convex combinations of the input points, so the following simple observation lets us maintain this invariant.

### **Observation 2.1**

*A convex combination of convex combinations of points of \(P\) is itself a convex combination of points of \(P\).*

By Carathéodory's theorem (Theorem 1.5), a Tverberg partition of depth \(r\) can be described by \(r(d+1)\) points from \(P\). In order to achieve running time \(O(n)\), we need the following lemma, also used by Miller and Sheehy [13].

### **Lemma 2.2**

Let \(Q \subseteq \mathbb{R }^d\) be a set of \(m \ge d + 2\) points with \(c \in \mathrm{conv}(Q)\), and suppose we have a convex combination of \(Q\) for \(c\). Then we can find a subset \(Q^{\prime } \subset Q\) with \(d+1\) points such that \(c \in \mathrm{conv}(Q^{\prime })\), together with a corresponding convex combination, in time \(O(d^3 m)\).

### *Proof*

Miller and Sheehy observe that replacing \(d+2\) points by \(d+1\) points takes \(O(d^3)\) time by finding an affine dependency through Gaussian elimination, see Grötschel et al. [8, Chap. 1]. The choice of affine dependencies does not matter. Thus, in order to eliminate a point from \(Q\), we can take any subset of size \(d+2\), resolve one of the affine dependencies, and update the convex combination accordingly. Repeating this process, we can replace \(m\) points by \(d+1\) points in time \((m - (d+1))O(d^3) = O(d^3m)\).\(\square \)

The process in Lemma 2.2 is called *pruning*, and we call a partition of a \(d\)-dimensional point set in which all sets have size at most \(d + 1\) a *pruned partition*. This will enable us to bound the cost of many operations in terms of the dimension \(d\), instead of the number of points \(n\).
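A sketch of the pruning step of Lemma 2.2 (our own illustration; we resolve each affine dependency with an SVD null vector rather than explicit Gaussian elimination):

```python
import numpy as np

def prune(Q, alpha):
    """Reduce a convex combination c = alpha @ Q over m > d+1 points
    to one over at most d+1 points, without moving c (Lemma 2.2)."""
    Q = np.asarray(Q, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    d = Q.shape[1]
    while len(Q) > d + 1:
        m = len(Q)
        # affine dependency: A @ beta = 0 means sum(beta) = 0 and
        # sum beta_i q_i = 0, so alpha - t*beta still witnesses c
        A = np.vstack([Q.T, np.ones(m)])
        _, _, vt = np.linalg.svd(A)
        beta = vt[-1]
        pos = beta > 1e-12
        if not np.any(pos):                  # fix the null vector's sign
            beta, pos = -beta, -beta > 1e-12
        # largest step keeping all coefficients nonnegative: one hits zero
        t = np.min(alpha[pos] / beta[pos])
        alpha = alpha - t * beta
        keep = alpha > 1e-12
        Q, alpha = Q[keep], alpha[keep]
        alpha = alpha / alpha.sum()          # guard against numerical drift
    return Q, alpha
```

Each round removes at least one point, so \(m - (d+1)\) rounds suffice; with Gaussian elimination in place of the SVD, each round costs \(O(d^3)\), giving the \(O(d^3 m)\) bound of the lemma.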

### 2.2 The Lifting Argument and a Simple Algorithm

Let \(P\) be a \(d\)-dimensional point set. As a Tverberg point is a higher-dimensional version of the median, a natural way to compute a Tverberg point for \(P\) is to first project \(P\) to some lower-dimensional space, then to recursively compute a good Tverberg point for this projection, and to use this point to find a solution in the higher-dimensional space. Surprisingly, we are not aware of any such argument having appeared in the literature so far.

In what follows, we will describe how to *lift* a lower-dimensional Tverberg point into some higher dimension. Unfortunately, this process will come at the cost of a decreased depth for the lifted Tverberg point. For clarity of presentation, we first explain the lifting lemma in its simplest form. In Sect. 3.1, we then state the lemma in its full generality.

### **Lemma 2.3**

Let \(P\) be a set of \(n\) points in \(\mathbb{R }^d\), and let \(h\) be a hyperplane in \(\mathbb{R }^d\). Let \(c^{\prime } \in h\) be a Tverberg point of depth \(r\) for the projection of \(P\) onto \(h\), with pruned partition \(P_1, \dots , P_r\). Then we can find a Tverberg point \(c \in \mathbb{R }^d\) of depth \(\lceil r/2 \rceil \) for \(P\) and a corresponding Tverberg partition in time \(O(dn)\).

### *Proof*

For every point \(p \in P\), let \(\hbox {pr}(p)\) denote the projection of \(p\) onto \(h\), and for every \(Q \subseteq P\), let \(\hbox {pr}(Q)\) be the projections of all the points in \(Q\). Let \(P_1, \dots , P_r \subseteq P\) such that \(\hbox {pr}(P_1), \dots , \hbox {pr}(P_r)\) is a pruned partition for \(\hbox {pr}(P)\) with Tverberg point \(c^{\prime }\). Let \(\ell \) be the line through \(c^{\prime }\) orthogonal to \(h\).

Since our assumption implies \(c^{\prime } \in \hbox {conv}(\hbox {pr}(P_i))\) for \(i=1, \dots , r\), it follows that \(\ell \) intersects each \(\hbox {conv}(P_i)\) at some point \(x_i \in \mathbb{R }^d\). More precisely, as we have a convex combination \(c^{\prime } = \sum _{p \in P_i} \alpha _p \hbox {pr}(p)\) for each \(P_i\), we simply get \(x_i = \sum _{p \in P_i} \alpha _p p\). The points \(x_1, \dots , x_r\) all lie on \(\ell \). Now let \(c\) be the median of \(x_1, \dots , x_r\) along \(\ell \), and pair each \(x_i\) on one side of \(c\) with a distinct \(x_j\) on the other side, merging the corresponding sets \(P_i\) and \(P_j\). This yields \(\lceil r/2 \rceil \) sets whose convex hulls all contain \(c\). All projections and convex combinations can be computed in \(O(dn)\) total time.\(\square \)

Theorem 1.6 is now a direct consequence of Lemma 2.3.

### **Theorem 2.4**

(Theorem 1.6, restated) Let \(P\) be a set of \(n\) points in \(\mathbb{R }^d\) in general position. One can compute a Tverberg point of depth \(\lceil n/2^d \rceil \) for \(P\) and the corresponding partition in time \(d^{O(1)} n\).

### *Proof*

If \(d = 1\), we obtain a Tverberg point and a corresponding partition by finding the median \(c\) of \(P\) [5] and pairing each point to the left of \(c\) with exactly one point to the right of \(c\).

If \(d > 1\), we project \(P\) onto the hyperplane \(x_d = 0\). This gives an \(n\)-point set \(P^{\prime } \subseteq \mathbb{R }^{d-1}\). We recursively find a point of depth \(\lceil n/{2^{d-1}} \rceil \) and a corresponding pruned partition for \(P^{\prime }\). We then apply Lemma 2.3 to get a point \(c \in \mathbb{R }^d\) of depth \(r = \left\lceil \lceil n/{2^{d-1}} \rceil / 2 \right\rceil \ge \lceil n/2^d \rceil \) for \(P\), together with a partition. Each set has at most \(2d\) points, so by applying Lemma 2.2 to each set, it takes \(O(d^4r)\) time to prune all sets.

This yields a total running time of \(T_d(n) \le T_{d-1}(n) + d^{O(1)}n\), which implies the result.\(\square \)
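The whole recursion fits in a few lines. The sketch below is our own illustration (names and the (index list, weight list) representation are not from the paper); it returns a point together with witnessing convex combinations, but omits the pruning step, so sets may grow to \(2^d\) points instead of \(2d\):

```python
import numpy as np

def tverberg_1d_parts(vals):
    """1-D Tverberg partition: the (lower) median, witnessed by pairs of
    points that straddle it. Returns (c, parts); each part is a pair
    (index_list, weight_list) whose convex combination equals c."""
    order = np.argsort(vals)
    n = len(vals)
    c = vals[order[(n - 1) // 2]]            # lower median, a data point
    parts = []
    for i in range(n // 2):
        a, b = order[i], order[n - 1 - i]    # a left of c, b right of c
        t = (vals[b] - c) / (vals[b] - vals[a]) if vals[b] > vals[a] else 0.5
        parts.append(([a, b], [t, 1.0 - t]))  # t*vals[a]+(1-t)*vals[b] = c
    if n % 2:                                # the median is its own part
        parts.append(([order[(n - 1) // 2]], [1.0]))
    return c, parts

def tverberg_simple(P):
    """Recursion on the dimension (Theorem 1.6): project, solve, lift.
    Returns (c, parts) with depth roughly ceil(n / 2^d)."""
    P = np.asarray(P, dtype=float)
    n, d = P.shape
    if d == 1:
        c, parts = tverberg_1d_parts(P[:, 0])
        return np.array([c]), parts
    # recurse on the projection onto the first d-1 coordinates
    c_proj, parts = tverberg_simple(P[:, :-1])
    # lift each witness onto the line above c_proj (Lemma 2.3): the same
    # convex combination, evaluated in R^d, has this last coordinate
    heights = np.array([np.dot(wts, P[idx, -1]) for idx, wts in parts])
    # 1-D Tverberg step along that line: median of the lifted witnesses
    c_h, hparts = tverberg_1d_parts(heights)
    merged = []
    for pidx, pwts in hparts:                # merge the paired parts
        idx, wts = [], []
        for j, u in zip(pidx, pwts):
            sub_idx, sub_w = parts[j]
            idx += list(sub_idx)
            wts += [u * w for w in sub_w]
        merged.append((idx, wts))
    return np.append(c_proj, c_h), merged
```

A full implementation would prune each merged set back to \(d+1\) points via Lemma 2.2 before recursing further, which is what keeps the per-level cost at \(d^{O(1)}n\).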

In particular, Theorem 1.6 gives a weak version of Tverberg’s theorem with a simple proof.

### **Corollary 2.5**

*Every set \(P\) of \(n\) points in \(\mathbb{R}^d\) in general position admits a Tverberg point of depth \(\lceil n/2^d \rceil\).*

### 2.3 An Improved Approximation Factor

In order to improve the approximation factor, we will now use an easy bootstrapping approach. A Tverberg partition of depth \(r\) in \(\mathbb{R }^d\) needs only \((d+1)r\) points. This means that after finding a point of depth \(n/2^d\), we still have \(n\left( 1 - (d+1)/2^d\right) \) unused points at our disposal. The next lemma shows how to leverage these points to achieve an even higher Tverberg depth.

### **Lemma 2.6**

Let \(\rho \ge 2\) and let \(q(m,d)\) be a function such that for any \(m\)-point set \(Q \subseteq \mathbb{R}^d\) we can compute a point of depth \(\lceil m/\rho \rceil \) and a corresponding pruned partition in time \(q(m, d)\). Then, for any integer \(c \ge 1\) and any \(n\)-point set \(P \subseteq \mathbb{R}^d\), we can find \(\lceil n/(c \lceil n/(c\rho ) \rceil (d+1)) \rceil \) points of depth \(\lceil n/(c\rho ) \rceil \), with corresponding pruned partitions on pairwise disjoint subsets of \(P\).

### *Proof*

For example, by Theorem 1.6 we can find a point of depth \(\lceil n/2^d \rceil \) and a corresponding pruned partition in time \(d^{O(1)}n\). Thus, by applying Lemma 2.6 with \(c = 2, \rho = 2^d\), we can also find \(\lceil n/(2\lceil n/2^{d+1} \rceil (d+1)) \rceil \approx 2^d/(d+1) \) points of depth \(\lceil n/2^{d+1} \rceil \) in linear time.

In order to make use of Lemma 2.6, we will also need a lemma that describes how we can combine these points in order to increase the total depth. This generalizes a similar lemma by Miller and Sheehy [13, Lemma 4.1].

### **Lemma 2.7**

Let \(P\) be a set of \(n\) points in \(\mathbb{R }^d\), and let \(P = \biguplus _{i=1}^\alpha P_i\) be a partition of \(P\). Furthermore, suppose that for each \(P_i\) we have a Tverberg point \(c_i \in \mathbb{R }^d\) of depth \(r\), together with a corresponding pruned partition \(\mathcal P _i\). Let \(C :=\{ c_i \mid 1 \le i \le \alpha \}\) and \(c\) be a point of depth \(r^{\prime }\) for \(C\), with corresponding pruned partition \(\mathcal C \). Then \(c\) is a point of depth \(r r^{\prime }\) for \(P\). Furthermore, we can find a corresponding pruned partition in time \(d^{O(1)}n\).

### *Proof*

Since \(c\) has depth \(r^{\prime }\) with respect to \(C\) and each \(c_i\) has depth \(r\) with respect to \(P_i\), we obtain \(r r^{\prime }\) pairwise disjoint sets as follows: for each set \(C_b \in \mathcal C \) and each \(a \in \{1, \dots , r\}\), let \(Z_{ab} := \bigcup _{c_i \in C_b} P_i^{(a)}\), where \(P_i^{(a)}\) denotes the \(a\)th set of \(\mathcal P _i\). Writing \(c\) as a convex combination of the points \(c_i \in C_b\), and each such \(c_i\) as a convex combination of \(P_i^{(a)}\), shows that \(c \in \hbox {conv}(Z_{ab})\); let \(\mathcal Z \) denote the resulting collection. As the partitions \(\mathcal P _i\) and \(\mathcal C \) were pruned, each \(Z_{ab}\) consists of at most \((d+1)^2\) points. Thus, by Lemma 2.2, each \(Z_{ab}\) can be pruned in time \(O(d^5)\). Since \(|\mathcal Z | \le n\), the lemma follows.\(\square \)
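The merging step can be sketched as follows (our own illustration, using an (index list, weight list) representation for the witnessing convex combinations): each merged set gathers, for every center \(c_i\) in one set \(C_b\) of \(\mathcal C \), the \(a\)th set of \(\mathcal P _i\), and multiplies the weights through:

```python
def combine_partitions(center_parts, point_parts):
    """Merge per-group Tverberg partitions (Lemma 2.7 sketch).
    center_parts: partition of the centers c_i witnessing c, as
        (center_index_list, gamma_list) pairs.
    point_parts[i]: pruned partition witnessing c_i, as
        (point_index_list, weight_list) pairs.
    Returns r*r' merged sets, each an (index, weight) pair witnessing c."""
    merged = []
    r = min(len(pp) for pp in point_parts)   # usable depth per center
    for c_idx, gammas in center_parts:       # one set C_b of centers
        for a in range(r):                   # the a-th set from each P_i
            idx, wts = [], []
            for i, g in zip(c_idx, gammas):
                sub_idx, sub_w = point_parts[i][a]
                idx += list(sub_idx)
                wts += [g * w for w in sub_w]
            merged.append((idx, wts))
    return merged
```

Since \(c = \sum _{c_i \in C_b} \gamma _i c_i\) and each \(c_i\) is a convex combination of its \(a\)th set, the multiplied weights again sum to 1 and witness \(c\) in every merged set.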

Combining Lemmas 2.6 and 2.7, we can now prove Theorem 1.7.

### **Theorem 2.8**

(Theorem 1.7, restated) Let \(P\) be a set of \(n\) points in \(\mathbb{R }^d\). Then one can find a Tverberg point of depth \(\lceil n/2(d+1)^2 \rceil \) and a corresponding partition in time \(f(2^{d+1}) + d^{O(1)} n\), where \(f(m)\) is the time to compute a Tverberg point of depth \(\lceil m/(d+1) \rceil \) for \(m\) points by brute force, together with an associated Tverberg partition.

### *Proof*

Instead of brute force, we can also use the algorithm by Miller and Sheehy to find a Tverberg point for the set of deep points. This gives a worse depth, but it is slightly faster.

### **Theorem 2.9**

Let \(P\) be a set of \(n\) points in \(\mathbb{R }^d\). Then one can compute a Tverberg point of depth \(\lceil n/4(d+1)^3 \rceil \) and a corresponding partition in time \(2^{O(d \log d)} + d^{O(1)}n\).

### *Proof*

## 3 An Improved Running Time

The algorithm from the previous section runs in linear time for any fixed dimension, but the constants are huge. In this section, we show how to speed up our approach through an improved recursion, obtaining an algorithm with running time \(d^{O(\log d)} n\) at the cost of an additional factor of \(2(d+1)\) in the depth.

### 3.1 A More General Version of the Lifting Argument

We first present a more general version of the lifting argument in Lemma 2.3. For this, we need some more notation. Let \(P \subseteq \mathbb{R }^d\) be finite. A \(k\)-dimensional *flat* \(F \subseteq \mathbb{R }^d\) (a *\(k\)-flat* for short) is a \(k\)-dimensional affine subspace of \(\mathbb{R }^d\) (equivalently, the affine hull of \(k+1\) affinely independent points in \(\mathbb{R }^d\)). We call a \(k\)-flat \(F \subseteq \mathbb{R }^d\) a *Tverberg \(k\)-flat of depth* \(r\) *for* \(P\) if there is a partition of \(P\) into sets \(P_1, \dots , P_r\) such that \(\hbox {conv}(P_i) \cap F \ne \emptyset \) for all \(i = 1, \dots , r\). This generalizes the notion of a Tverberg point.

### **Lemma 3.1**

Let \(P\) be a set of \(n\) points in \(\mathbb{R }^d\), and let \(h \subseteq \mathbb{R }^d\) be a \(k\)-flat. Suppose we have a Tverberg point \(c \in h\) of depth \(r\) for \(\hbox {pr}(P) := \hbox {pr}_h(P)\), as well as a corresponding Tverberg partition. Let \(h^{\perp }_c\) be the \((d-k)\)-flat orthogonal to \(h\) that passes through \(c\). Then \(h^{\perp }_c\) is a Tverberg \((d-k)\)-flat for \(P\) of depth \(r\), with the same Tverberg partition.

### *Proof*

Lemma 3.1 lets us use a good algorithm for *any fixed dimension* to improve the general case.

### **Lemma 3.2**

Let \(\delta \ge 1\) be a fixed integer. Suppose we have an algorithm \(\mathcal A \) with the following property: for every point set \(Q \subseteq \mathbb{R }^\delta \), the algorithm \(\mathcal A \) constructs a Tverberg point of depth \(\lceil |Q|/\rho \rceil \) for \(Q\) as well as a corresponding pruned partition in time \(f(|Q|)\).

Then, for any \(n\)-point set \(P \subseteq \mathbb{R }^d\) and for any \(d \ge \delta \), we can find a Tverberg point of depth \(\lceil n/\rho ^{\lceil d/\delta \rceil } \rceil \) and a corresponding pruned partition in time \(\lceil d/\delta \rceil f(n) + d^{O(1)}n\).

### *Proof*

Set \(k :=\lceil d/\delta \rceil \). We use induction on \(k\) to show that such an algorithm exists with running time \(k(f(n) + d^{O(1)}n)\). If \(k = 1\), we can just use algorithm \(\mathcal A \), and there is nothing to show.

Now suppose \(k > 1\). Let \(h \subseteq \mathbb{R }^d\) be a \(\delta \)-flat in \(\mathbb{R }^d\), and let \(\hbox {pr}(P)\) be the projection of \(P\) onto \(h\). We use algorithm \(\mathcal A \) to find a Tverberg point \(c\) of depth \(\lceil n/\rho \rceil \) for \(\hbox {pr}(P)\) as well as a corresponding pruned partition \(\hbox {pr}(P_1), \dots , \hbox {pr}(P_{\lceil n/\rho \rceil })\). This takes time \(f(n)\). By Lemma 3.1, the \((d-\delta )\)-flat \(h^{\perp }_c\) is a Tverberg flat of depth \(\lceil n/\rho \rceil \) for \(P\), with corresponding pruned partition \(P_1, \dots , P_{\lceil n/\rho \rceil }\). For each \(i\), we can thus find a point \(q_i\) in \(\hbox {conv}(P_i) \cap h_c^{\perp }\) in time \(d^{O(1)}\). We then recurse on the points \(q_1, \dots , q_{\lceil n/\rho \rceil }\) inside the \((d-\delta )\)-flat \(h^{\perp }_c\); since \(\lceil (d-\delta )/\delta \rceil = k-1\), this yields a point of depth \(\lceil \lceil n/\rho \rceil /\rho ^{k-1} \rceil = \lceil n/\rho ^{k} \rceil \) in time \((k-1)(f(n) + d^{O(1)}n)\). Replacing each \(q_i\) in the resulting partition by the corresponding set \(P_i\) and pruning via Lemma 2.2 gives the desired pruned partition for \(P\) within the claimed time bound.\(\square \)

For example, the result of Agarwal et al. [1] gives a point of depth \(\lceil n/4 \rceil \) in \(3\) dimensions in time \(O(n \log n)\). Thus, we can find a point of depth \(\lceil n/4^{\lceil d/3 \rceil } \rceil \) in time \(O(\lceil d/3 \rceil \, n \log n + d^{O(1)}n)\).

### 3.2 An Improved Algorithm

Finally, we show how to combine the above techniques to obtain an algorithm with a better running time. The idea is as follows: using Lemma 3.2, we can reduce one \(d\)-dimensional instance to two instances of dimension \(d/2\). We would like to proceed recursively, but unfortunately, this reduces the depth of the partition. To fix this, we apply Lemmas 2.6, 2.7 and the Miller–Sheehy algorithm.

### **Theorem 3.3**

(Theorem 1.8, restated) Let \(P\) be a set of \(n\) points in \(\mathbb{R }^d\). Then one can compute a Tverberg point of depth \(\lceil n/4(d+1)^3 \rceil \) and a corresponding pruned partition in time \(d^{O(\log d)} n\).

### *Proof*

We prove the theorem by induction on \(d\). As usual, for \(d=1\) the problem reduces to median computation, and the result is immediate.

Now let \(d \ge 2\). By induction, for any at most \(\lceil d/2 \rceil \)-dimensional point set \(Q \subseteq \mathbb{R }^{\lceil d/2 \rceil }\) there exists an algorithm that returns a Tverberg point of depth \(\lceil |Q|/4(\lceil d/2 \rceil + 1)^3 \rceil \) and a corresponding pruned partition in time \(d^{\alpha \log \lceil d/2 \rceil } n\), for some sufficiently large constant \(\alpha > 0\).

Thus, we can compute a Tverberg point whose depth is within a factor polynomial in \(d\) of the optimum, in time quasipolynomial in \(d\) and linear in \(n\).

## 4 Comparison to Miller–Sheehy

In Table 1, we give a more detailed comparison of our results to the Miller–Sheehy algorithm and its extensions. In Sect. 5.2 of their paper, Miller and Sheehy describe a generalization of their approach that improves the running time for small \(d\) by computing higher order Tverberg points of depth \(r\) by brute force. The approximation quality deteriorates by a factor of \(r/2\). No exact bounds are given, but as far as we can tell, one can achieve a running time of \(O(f(d) n^2)\) for fixed \(d\) by setting the parameter \(r = d + 1\), while losing a factor of \((d+1)/2\) in the approximation.

**Table 1** Comparing our results to Miller–Sheehy and extensions

| Algorithm | Running time | Depth |
|---|---|---|
| Theorem 1.6 | \(O(n)\) | \(n/2^d\) |
| Miller–Sheehy | \(n^{O(\log d)}\) | \(n/2(d+1)^2\) |
| Theorem 1.7 | \(O\big (f(2^{d+1}) + d^{O(1)}n\big )\) | \(n/2(d+1)^2\) |
| Miller–Sheehy generalized (\(r = d + 1\)) | \(O\big (f(d)\, n^2\big )\) | \({\approx }n/(d+1)^3\) |
| Theorem 2.9 | \(2^{O(d \log d)} + d^{O(1)}n\) | \(n/4(d+1)^3\) |
| Miller–Sheehy bootstrapped | \(d^{O(\log d)}\, n^3\) | \({\approx }n/2(d+1)^4\) |
| Theorem 1.8 | \(d^{O(\log d)}\, n\) | \(n/4(d+1)^3\) |

We should emphasize that for all dimensions \(d\) with \(2^d \le 2(d+1)^2\), i.e., \(d \le 7\), our simplest algorithm outperforms every other approximation algorithm in both running time and approximation ratio. For example, it gives a \(1/2\)-approximate Tverberg point in \(3\) dimensions in linear time.

## 5 Conclusion and Outlook

We have presented a simple algorithm for finding an approximate Tverberg point. It runs in linear time for any fixed dimension. Using more sophisticated tools and combining our methods with known results, we managed to improve the running time to \(d^{O(\log d)} n\), while getting within a factor of \(1/4(d+1)^2\) of the bound from Tverberg’s theorem. Unfortunately, the resulting running time remains quasipolynomial in \(d\), and we still do not know whether there exists a polynomial algorithm (in \(n\) and \(d\)) for finding an approximate Tverberg point of linear depth.

However, we are hopeful that our techniques constitute a further step towards a truly polynomial time algorithm and that such an algorithm will eventually be discovered—maybe even by a more clever combination of our algorithm with that of Miller and Sheehy. An alternative promising approach, suggested to us by Don Sheehy, derives from a beautiful proof of Tverberg's theorem. It is due to Sarkaria and can be found in Matoušek's book [11, Chap. 8]. It uses the colorful Carathéodory theorem:

### **Theorem 5.1**

(Colorful Carathéodory) Let \(C_1 \uplus \cdots \uplus C_{d+1} \subseteq \mathbb{R }^d\) be such that \(0 \in \mathrm{conv}(C_i)\) for \(i = 1, \dots , d+1\). Then there is a set \(C\) of \(d+1\) points with \(0 \in \mathrm{conv}(C)\) and \(|C_i \cap C| = 1\) for all \(i\).

Sarkaria's proof transforms a \(d\)-dimensional \(n\)-point instance of the Tverberg point problem into a Colorful Carathéodory instance in approximately \(dn\) dimensions.

The question now is whether such a colorful simplex can be found in time polynomial in both \(d\) and \(n\). This would lead to a polynomial time algorithm for computing a Tverberg point. Observe that this would not contradict any complexity theoretic assumptions: an algorithm that *finds* such a point does not necessarily have to decide whether a given point indeed is a Tverberg point.

The simplest proof of Colorful Carathéodory leads directly to an algorithm for finding such a colorful simplex and works as follows: take an arbitrary colorful simplex. If the origin is not contained in it, delete the farthest color and take a point of that color that together with the other points induces a simplex that is closer to the origin. It is unknown whether this procedure runs in polynomial time for both \(d\) and \(n\). Settling this question would constitute major progress on the problem (see [12, 16] for work in this direction).

Yet another approach would be to relax Sarkaria’s proof and to try to formulate it as an approximation problem, which might be easier to solve. However, it is not clear how to state such an approximation to the Colorful Carathéodory problem in a way that leads to an approximate Tverberg point. Perhaps via such a method, our algorithms can be improved further.

It is known that the problem of deciding whether a given point has at least a certain depth is NP-complete [17]. It is possible to strengthen this result to show that in \(\mathbb{R }^{d+1}\), the problem is \(d\)-Sum hard, using the approach by Knauer et al. [10]. However, this does not tell us anything about the actual problem of computing a point of depth \(n/(d+1)\). Such a point is guaranteed to exist, so it is not clear how to prove the problem hard using “standard” NP-completeness theory. Rather, we think that a hardness proof along the lines of complexity classes such as PPAD or PLS [14] should be pursued.

Finally, a common issue with Tverberg point (and centerpoint) algorithms in high dimensions, also pointed out by Clarkson et al. [4], is that the coefficients arising during the algorithm might become exponentially large. While this is not a problem in our uniform cost model, for implementations of the algorithm it seems necessary to bound these. In particular, it would be interesting to investigate the bit complexity of the intermediate solutions arising during the pruning process. As an alternative approach, one might try to perturb the points in the process, thereby lowering the precision of the coefficients. Additionally, one might have to introduce a notion of *almost approximate Tverberg points*, where the point that is returned does not have to lie *inside* all sets, but only *close to* them.

## Acknowledgments

We would like to thank Nabil Mustafa for suggesting the problem to us. We also thank him and Don Sheehy for helpful discussions and insightful suggestions. We would further like to thank the anonymous referees for their helpful and detailed comments. Werner was funded by Deutsche Forschungsgemeinschaft within the Research Training Group (Graduiertenkolleg) “Methods for Discrete Structures.”

## References

- 1. Agarwal, P.K., Sharir, M., Welzl, E.: Algorithms for center and Tverberg points. ACM Trans. Algorithms 5(1), Art. 5 (2009)
- 2. Chan, T.M.: An optimal randomized algorithm for maximum Tukey depth. In: Proceedings of the 15th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 430–436 (2004)
- 3. Chazelle, B.: The Discrepancy Method: Randomness and Complexity. Cambridge University Press, Cambridge (2000)
- 4. Clarkson, K.L., Eppstein, D., Miller, G.L., Sturtivant, C., Teng, S.-H.: Approximating center points with iterated Radon points. Int. J. Comput. Geom. Appl. 6(3), 357–377 (1996)
- 5. Cormen, T.H., Leiserson, C.E., Rivest, R.L., Stein, C.: Introduction to Algorithms, 3rd edn. MIT Press, Cambridge (2009)
- 6. Danzer, L., Grünbaum, B., Klee, V.: Helly's theorem and its relatives. In: Proceedings of the Symposium on Pure Mathematics, vol. VII, pp. 101–180. American Mathematical Society, Providence (1963)
- 7. Edelsbrunner, H.: Algorithms in Combinatorial Geometry. Springer, Berlin (1987)
- 8. Grötschel, M., Lovász, L., Schrijver, A.: Geometric Algorithms and Combinatorial Optimization. Algorithms and Combinatorics, vol. 2, 2nd edn. Springer, Berlin (1993)
- 9. Jadhav, S., Mukhopadhyay, A.: Computing a centerpoint of a finite planar set of points in linear time. Discrete Comput. Geom. 12(3), 291–312 (1994)
- 10. Knauer, C., Tiwary, H.R., Werner, D.: On the computational complexity of Ham-Sandwich cuts, Helly sets, and related problems. In: 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011), pp. 649–660. Schloss Dagstuhl–Leibniz-Zentrum für Informatik, Wadern (2011)
- 11. Matoušek, J.: Lectures on Discrete Geometry. Springer, New York (2002)
- 12. Meunier, F., Deza, A.: A further generalization of the colourful Carathéodory theorem. arXiv:1107.3380 (2011)
- 13. Miller, G.L., Sheehy, D.R.: Approximate centerpoints with proofs. Comput. Geom. Theory Appl. 43(8), 647–654 (2010)
- 14. Papadimitriou, C.H.: On the complexity of the parity argument and other inefficient proofs of existence. J. Comput. Syst. Sci. 48(3), 498–532 (1994)
- 15. Rado, R.: A theorem on general measure. J. Lond. Math. Soc. 21, 291–300 (1946)
- 16. Rong, G.: On algorithms for the colourful linear programming feasibility problem. Master's thesis, McMaster University (2012)
- 17. Teng, S.-H.: Points, spheres, and separators: a unified geometric approach to graph partitioning. PhD thesis, School of Computer Science, Carnegie Mellon University (1992)
- 18. Tverberg, H.: A generalization of Radon's theorem. J. Lond. Math. Soc. 41, 123–128 (1966)