Optimal Partition Trees
Abstract

It is conceptually simpler than Matoušek’s O(n^{1−1/d})-time method. Our partition trees satisfy many ideal properties (e.g., constant degree, optimal crossing number at almost all layers, and disjointness of the children’s cells at each node).

It leads to more efficient multilevel partition trees, which are needed in many data structuring applications (each level adds at most one logarithmic factor to the space and query bounds, better than in all previous methods).

A similar improvement applies to a shallow version of partition trees, yielding O(n log n) preprocessing time, O(n) space, and O(n^{1−1/⌊d/2⌋}) query time for halfspace range emptiness in even dimensions d≥4.
Keywords
Simplex range searching · Halfspace range searching · Geometric data structures

1 Introduction
Data structures for range searching [4, 5, 43, 47] are among the most important and most often used results within the computational geometry literature—countless papers applied such results and techniques to obtain the best theoretical computational bounds for a wide variety of geometric problems. However, to this day, some questions remain even concerning the most basic version of nonorthogonal range searching, simplex range searching. The purpose of this paper is to provide a (hopefully) final resolution to some of these lingering questions, by nailing down the best precise upper bounds.
History
Formally, in simplex range searching, we want to preprocess n (weighted) points in ℝ^{d} so that we can quickly find (the sum of the weights of) all points inside a given query simplex. The dimension d is assumed to be a small constant. The problem not only is fundamental but has a remarkably rich history. We will confine the discussion of previous work to linear- or near-linear-space data structures (otherwise, the number of results would multiply). The first published method was by Willard [53] in 1982, who gave a simple O(n)-space partition tree achieving O(n^{0.792}) query time for d=2. This prompted researchers to look for data structures with the best exponent in the query time. Many subsequent results appeared; for example, for d=2, Willard himself improved the exponent to about 0.774, which was further reduced to 0.695 by Edelsbrunner and Welzl [33]; for d=3, F. Yao [55] obtained 0.936, Dobkin and Edelsbrunner [31] 0.916, Edelsbrunner and Huber [32] 0.909, and Yao et al. [56] 0.899; in higher dimensions, Yao and Yao [54] obtained 1−⌈lg(2^{d}−1)⌉/d; and so on. A significant advance was made in Haussler and Welzl’s seminal paper [35], which introduced probabilistic techniques to computational geometry; they presented a partition tree with O(n^{1−1/[d(d−1)+1]+ε}) query time for any fixed ε>0, greatly improving all results that came before, in all dimensions.
 1.
In 1988,^{1} Chazelle and Welzl [25, 52] devised the key technique of iterative reweighting^{2} to compute spanning trees with low crossing number. They obtained an elegant O(n)-space data structure where the answer to any query can be expressed using O(n^{1−1/d} log n) arithmetic (semigroup) operations on the weights (if subtractions on weights are allowed, the number of operations can be reduced to O(n^{1−1/d} α(n))). Unfortunately, this combinatorial bound on the number of arithmetic operations does not bound the actual query time. Chazelle and Welzl obtained true “algorithmic” results only for d=2, with O(n) space and \(O(\sqrt{n}\log n)\) query time, and for d=3, with O(n log n) space and O(n^{2/3} log^{2} n) query time. The preprocessing time of these data structures is also poor; for d=2, one can get O(n^{3/2} log^{1/2} n) preprocessing time by a randomized algorithm [52], but improvement to a near-linear bound requires further ideas from subsequent papers. Despite these issues, their approach was a remarkable breakthrough and set up the right direction for subsequent methods to follow.
 2.
Next, in 1990, Chazelle, Sharir, and Welzl [26] applied cuttings [23, 29] to get efficient data structures for simplex range searching. In the O(n)-space case, their method has query time O(n^{1−1/d+ε}) with O(n^{1+ε}) preprocessing time for any fixed ε>0. Although this brings us closer to the ideal bounds in any dimension, the method seems inherently suboptimal by at least a logarithmic factor (the data structure requires multiple separate cuttings, so as to guarantee that for every query, at least one of the cuttings is “good”).
 3.
In 1991, Matoušek [39] combined the iterative reweighting technique with the use of cuttings to prove a new simplicial partition theorem. Applying the theorem in a recursive fashion gives an O(n)-space partition tree with O(n^{1−1/d} log^{O(1)} n) query time and O(n log n) preprocessing time.
The same paper also gives an alternative O(n)-space structure with query time O(n^{1−1/d} (log log n)^{O(1)})—or better still \(O(n^{1-1/d}2^{O(\log^{*}n)})\) if subtractions of weights are allowed—but the preprocessing time increases to O(n^{1+ε}). This result is subsumed by the next result, so will be ignored here.
 4.
Finally, in 1992, Matoušek [42] combined iterative reweighting with the hierarchical cuttings of Chazelle [21] to obtain an O(n)-space partition tree with O(n^{1−1/d}) query time. The preprocessing time is O(n^{1+ε}). This method is considerably more complicated than Matoušek’s previous partition-theorem-based method.
Matoušek’s final method thus seems to achieve asymptotically optimal query time. However, the story is not as neatly resolved as one would like. The main issue is preprocessing time: Matoušek’s final method is suboptimal by an n^{ε} factor. This explains why, in actual algorithmic applications of range searching results, where we care about preprocessing and query times combined, researchers usually abandon his final method and resort to his earlier partition-theorem method. Unfortunately, this is the reason why, in algorithmic applications, we frequently see extra log^{O(1)} n factors in time bounds, with unspecified numbers of log factors.
Even for d=2, no ideal data structure with O(n log n) preprocessing time, O(n) space, and \(O(\sqrt{n})\) query time has been found.
New Results

Our method immediately yields improved results on multilevel versions of partition trees [5, 43], which are needed in applications that demand more sophisticated kinds of queries. With our method, each level costs at most one more logarithmic factor in space and at most one more logarithmic factor in the query time—this behavior is exactly what we want and, for example, is analogous to what we see in the classical multilevel data structure, the range tree [5, 48], which can be viewed as a d-level 1-dimensional partition tree.
In contrast, Matoušek’s final method is not applicable at all, precisely because of the disadvantages mentioned. Matoušek’s paper [42] did explore multilevel results in detail but had to switch to a different method based on Chazelle, Sharir, and Welzl’s cutting approach [26], now with hierarchical cuttings. This method costs two logarithmic factors in space and one logarithmic factor in the query time per level; see [42, Theorem 5.2]. Furthermore, the preprocessing time is huge. An alternative version [42, Corollary 5.2] has O(n^{1+ε}) preprocessing time, but space and query time increase by an unspecified number of extra logarithmic factors. Matoušek’s partition-theorem method in the multilevel setting is worse, giving extra O(n^{ε}) (or more precisely \(2^{O(\sqrt{\log n})}\)) factors in space and query time per level [39].
For concrete applications, consider the problems of preprocessing n simplices in ℝ^{d} so that we can (i) report all simplices containing a query point, or (ii) report all simplices contained inside a query simplex. These problems can be solved by a (d+1)-level partition tree. The previous method from [42] requires O(n log^{2d} n) space and O(n^{1−1/d} log^{d} n) query time. Our result reduces the space bound to O(n log^{d} n).
We get improved data structures for many other problems, even some traditional 2dimensional problems, e.g., counting the number of intersections of line segments with a query line segment, and performing ray shooting among a collection of line segments.
Our new partition tree also leads to a new combinatorial theorem: any npoint set in ℝ^{2} admits a Steiner triangulation of size O(n) so that the maximum number of edges crossed by any line is \(O(\sqrt{n})\). This crossing number bound is tight and improves previous upper bounds by a logarithmic factor [8].

That our method is simpler allows us to examine the preprocessing time more carefully. We show that with some extra effort, a variant of our method indeed accomplishes what we set out to find: a simplex range searching structure with O(n log n) preprocessing time, O(n) space, and O(n^{1−1/d}) query time with high probability (i.e., probability at least \(1-1/n^{c_{0}}\) for an arbitrarily large constant c_{0}). The approach works in the multilevel setting as well, with one more logarithmic factor in the preprocessing time per level.
The improved preprocessing time should immediately lead to improved time bounds for a whole array of algorithmic applications.

We can also obtain new bounds for the halfspace range emptiness/reporting problem. Here, we want to decide whether a query halfspace contains any data point, or report all points inside a query halfspace. Matoušek [40] described a shallow version of his partition theorem to obtain a data structure with O(n log n) preprocessing time, O(n log log n) space, and O(n^{1−1/⌊d/2⌋} log^{O(1)} n+k) query time for any d≥4, where k denotes the output size in the reporting case. The space was reduced to O(n) by Ramos [49] for even d; see also [1] for d=3. The same paper by Matoušek also gave an alternative data structure with O(n^{1+ε}) preprocessing time, O(n) space, and \(O(n^{1-1/\lfloor d/2\rfloor}2^{O(\log^{*}n)})\) query time for halfspace range emptiness for any d≥4. Although it is conceivable that Matoušek’s final method for simplex range searching with hierarchical cuttings could similarly be adapted to halfspace range emptiness for even d, to the best of the author’s knowledge, no one has attempted to find out, perhaps daunted by the amount of technical detail in the paper [42]. It is generally believed that the exponent 1−1/⌊d/2⌋ is tight, although a matching lower bound is currently not known.
With our simpler method, we can derive the following, likely optimal result for even dimensions without too much extra trouble: there is a data structure for halfspace range emptiness with O(n log n) preprocessing time, O(n) space, and O(n^{1−1/⌊d/2⌋}) query time with high probability for any even d≥4. The same bounds hold for halfspace range reporting when the output size k is small (o(n^{1−1/⌊d/2⌋}/log^{ω(1)} n)).
Halfspace range emptiness/reporting has its own long list of applications [17, 41, 45]. As a consequence, we get improved results for ray shooting and linear programming queries in intersections of halfspaces in even dimensions, exact nearest neighbor search in odd dimensions, and the computation of extreme points and convex hulls in even dimensions, to name just a few.
Organization
The highlight of the paper is to be found in Sect. 3—in particular, the proof of Theorem 3.1—which describes the basic version of our new partition tree with O(n) space and O(n^{1−1/d}) query time. This part is meant to be self-contained (except for the side remarks at the end), assuming only the background provided by Sect. 2. The rest of the paper—particularly Sect. 5 on getting O(n log n) preprocessing time, Sect. 6 on halfspace range searching, and Sect. 7 on applications—will get a bit more technical and incorporate additional known techniques, but mostly involves working out consequences of the basic new approach.
2 Background
Let P be the given set of n points in ℝ^{d}, which we assume to be in general position by standard perturbation techniques. A partition tree is a hierarchical data structure formed by recursively partitioning a point set into subsets. Formally, in this paper, we define a partition tree to be a tree T where each leaf stores at most a constant number of points, each point of P is stored in exactly one leaf, and each node v stores a cell Δ(v) which encloses the subset P(v) of all points stored at the leaves underneath v. Throughout the paper, we take “cell” to mean a polyhedron of O(1) size, or to be more specific, a simplex. In general, the degree of a node is allowed to be nonconstant and the cells of the children of a node are allowed to overlap (although in our new basic method, we can guarantee constant degree and no overlaps of the cells). At each node v, we do not store any auxiliary information about P(v), other than the sum of the weights in P(v). Therefore, a partition tree is by our definition always an O(n)-space data structure.
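To make the definition concrete, here is a minimal toy partition tree in the plane. The class name, the kd-style median splits, and the axis-aligned boxes standing in for simplex cells are our own illustrative choices, not the paper’s construction (which builds the tree from cuttings):

```python
import random

class Node:
    """A partition-tree node: a cell (an axis-aligned box here, standing in for
    a simplex), the total weight of the points stored below it, and children."""
    def __init__(self, points, box, depth=0, leaf_size=2):
        self.box = box                      # ((xlo, xhi), (ylo, yhi))
        self.weight = sum(w for _, _, w in points)
        self.children = []
        if len(points) > leaf_size:
            axis = depth % 2                # split on alternating axes at the median
            points.sort(key=lambda p: p[axis])
            mid = len(points) // 2
            m = points[mid][axis]
            lo, hi = [list(s) for s in box], [list(s) for s in box]
            lo[axis][1] = m
            hi[axis][0] = m
            self.children = [Node(points[:mid], tuple(map(tuple, lo)), depth + 1, leaf_size),
                             Node(points[mid:], tuple(map(tuple, hi)), depth + 1, leaf_size)]
        else:
            self.points = points            # a leaf holds O(1) points

def range_count(node, query):
    """Sum the weights of the points inside an axis-aligned query box."""
    (qx, qy), (bx, by) = query, node.box
    if bx[1] < qx[0] or bx[0] > qx[1] or by[1] < qy[0] or by[0] > qy[1]:
        return 0                            # cell misses the query entirely
    if qx[0] <= bx[0] and bx[1] <= qx[1] and qy[0] <= by[0] and by[1] <= qy[1]:
        return node.weight                  # cell fully inside: use the stored weight
    if not node.children:
        return sum(w for x, y, w in node.points
                   if qx[0] <= x <= qx[1] and qy[0] <= y <= qy[1])
    return sum(range_count(c, query) for c in node.children)

random.seed(1)
pts = [(random.random(), random.random(), 1) for _ in range(200)]
root = Node(list(pts), ((0.0, 1.0), (0.0, 1.0)))
```

Only the per-node weight is stored, so the structure matches the O(n)-space definition; the query cost is governed by the number of cells the query boundary crosses.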
It is known [25] that there are point sets where the query cost is Ω(n ^{1−1/d }) for any partition tree under the above definitions. We want to devise a worstcase optimal method to construct partition trees matching this lower bound.
We begin by mentioning Matoušek’s simplicial partition theorem [39], to set the context. We will not use this theorem anywhere, but we will prove something stronger in Sect. 3.
Theorem 2.1
(Matoušek’s Partition Theorem)
Let P be a set of n points in ℝ^{ d }. Then for any t, we can partition P into t subsets P _{ i } and find t cells Δ _{ i }, with Δ _{ i }⊃P _{ i }, such that each subset contains Θ(n/t) points and each hyperplane crosses at most O(t ^{1−1/d }) cells.
A recursive application of the above theorem immediately gives a partition tree where the query cost satisfies the recurrence Q(n)≤O(t ^{1−1/d })Q(n/t)+O(t). If we set t to an arbitrarily large constant, the recurrence has solution Q(n)≤O(n ^{1−1/d+ε }) for an arbitrarily small constant ε>0. If instead we set t=n ^{ δ } for a sufficiently small constant δ>0, the solution becomes Q(n)≤O(n ^{1−1/d }log^{ O(1)} n). To avoid the extra n ^{ ε } or log^{ O(1)} n factors, we need a more refined method, since each time we recurse using Theorem 2.1, the hidden constant in the crossing number bound “blows up”.
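To see the blow-up numerically, the recurrence can be unrolled directly; the parameters below (d=2, t=4, hidden constant 1.2) are arbitrary choices of ours:

```python
import math

def Q(n, t, d, c):
    """Unrolled query-cost recurrence Q(n) <= c * t^(1-1/d) * Q(n/t) + O(t)."""
    if n <= t:
        return n
    return c * t ** (1 - 1 / d) * Q(n / t, t, d, c) + t

# With hidden constant c = 1 the solution for d = 2, t = 4 is Theta(sqrt(n)),
# but any c > 1 inflates the exponent to 1 - 1/d + log_t(c).
n = 4 ** 12
exponent = math.log(Q(n, 4, 2, 1.2) / Q(n / 4, 4, 2, 1.2), 4)
```

The empirical exponent sits near 1−1/d+log_t(c) rather than 1−1/d, which is exactly the constant-factor blow-up described above.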
Our new method is described in the next section, and is self-contained except for the use of two tools: a minor observation and an important lemma, both well-known and stated here without proofs.
First, we observe that in proving statements like Theorem 2.1, instead of considering an infinite number of possible hyperplanes, it suffices to work with a finite set of “test” hyperplanes. This type of observation was made in many of the previous papers [25, 26]; the particular version we need is a special case of one from [39]:
Observation 2.2
(Test Set)
Given a set P of n points in ℝ^{ d }, we can construct a set H of n ^{ O(1)} hyperplanes with the following property: For any collection of disjoint cells each containing at least one point of P, if the maximum number of cells crossed by a hyperplane in H is ℓ, then the maximum number of cells crossed by an arbitrary hyperplane is O(ℓ).
For this version of the test-set observation, the proof is quite simple (briefly, we can just take H to be the O(n^{d}) hyperplanes passing through d-tuples of points of P).
For the next lemma, we need a definition first. For a given set H of m hyperplanes, a (1/r)-cutting [22, 44, 47] is a collection of disjoint cells such that each cell Δ is crossed by at most m/r hyperplanes. Let \(\mathcal{A}(H)\) denote the arrangement of H. We let H_{Δ} denote the subset of all hyperplanes of H that cross Δ.
Lemma 2.3
(Cutting Lemma)
Let H be a set of m hyperplanes and Δ be any simplex in ℝ^{ d }. For any r, we can find a (1/r)cutting of H into O(r ^{ d }) disjoint cells whose union is Δ.
If the number of vertices in \(\mathcal{A}(H)\) inside Δ is X, then the number of cells can be reduced to O(X(r/m)^{ d }+r ^{ d−1}).
The first part of the lemma is standard and was first shown by Chazelle and Friedman [23]. The X-sensitive variant is also known; e.g., see [21, 30]. The proof is by random sampling.^{3}
To understand the intuitive significance of the cutting lemma, let H be a set of m test hyperplanes, Δ=ℝ^{ d }, and r=t ^{1/d }. Then we get O(t) cells each crossed by O(m/t ^{1/d }) hyperplanes. The average number of points per cell is O(n/t), and the average number of cells crossed by a test hyperplane is O((t⋅m/t ^{1/d })/m)=O(t ^{1−1/d }). Thus, the cutting lemma “almost” implies the partition theorem. The challenge is to turn these average bounds into maximum bounds. For this purpose, Matoušek’s proof of his partition theorem adopts an iterative reweighting strategy by Welzl [25, 52] (roughly, in each iteration, we apply the cutting lemma to a multiset of hyperplanes in H to find one good cell, then increase the multiplicities of the hyperplanes crossing the cell, and repeat).
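The iterative reweighting strategy itself can be sketched generically. This is a simplified abstraction, not the paper’s algorithm: the hypothetical callbacks `cells_per_round` and `crosses` stand in for the cutting construction, and weights are doubled rather than multiplied by (1+1/b) as in Sect. 3:

```python
import math
import random

def reweighting(hyperplanes, cells_per_round, rounds, crosses):
    """Maintain a multiplicity per hyperplane; each round, pick the candidate
    cell whose crossing hyperplanes carry the least total weight, then double
    the weight of every hyperplane crossing the chosen cell."""
    weight = {h: 1.0 for h in hyperplanes}
    chosen = []
    for _ in range(rounds):
        cell = min(cells_per_round(),
                   key=lambda c: sum(weight[h] for h in crosses(c)))
        chosen.append(cell)
        for h in crosses(cell):
            weight[h] *= 2.0               # penalize future crossings of h
    return chosen, weight

# Toy instance: "hyperplanes" are integers, candidate cells are random subsets.
random.seed(0)
H = list(range(40))
chosen, w = reweighting(H, lambda: [frozenset(random.sample(H, 8)) for _ in range(10)],
                        30, lambda c: c)
```

Since a hyperplane crossed k times ends with weight 2^k, and no single weight can exceed the total, every crossing count is at most log_2 of the total final weight; picking minimum-weight cells keeps that total small.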
3 The Basic Method
We now present our new method for simplex range searching with O(n) space and O(n ^{1−1/d }) query time. To keep the presentation simple, we defer discussion of preprocessing time to Sect. 5.
The key lies in the following theorem, where instead of constructing one partition of the point set from scratch as in Matoušek’s partition theorem, we consider the problem of refining a given partition. As we will see, the main result will follow just by repeated applications of our theorem in a straightforward manner.
Theorem 3.1
(Partition Refinement Theorem)
Note that the first term matches the bound from Matoušek’s partition theorem for a partition into O(bt) parts. In the second term, it is crucial that the coefficient b^{1−1/(d−1)} is asymptotically smaller than b^{1−1/d}; this will ensure that repeated applications of the theorem do not cause a constant-factor blowup, if we pick b to be sufficiently large. The third term will not matter much in the end, and is purposely not written in the tightest manner. The proof of the theorem, like Matoušek’s, uses the cutting lemma repeatedly in conjunction with an iterative reweighting strategy, as detailed below. The basic algorithm is simple:
Proof
We maintain a multiset \(\widehat{H}\), initially containing C copies of each hyperplane in H. The value of C is not important (for conceptual purposes, we can set C to be a sufficiently large power of b, which will ensure that future multiplicities are always integers). The size \(|\widehat{H}|\) of a multiset \(\widehat{H}\) always refers to the sum of the multiplicities (the “weights”) of the elements. Let \(X_{\varDelta}(\widehat{H})\) denote the number of vertices inside Δ defined by the hyperplanes in \(\widehat{H}\); here, the multiplicity of a vertex is the product of the multiplicities of its defining hyperplanes.
The Algorithm
 1. Subdivide Δ_{i} into disjoint subcells by building a (1/r_{i})-cutting of \(\widehat{H}_{\varDelta_{i}}\) inside Δ_{i} with
$$r_i := c\min \biggl\{\bigl|\widehat{H}_{\varDelta_i}\bigr| \biggl(\frac{b}{X_{\varDelta_i}(\widehat{H})} \biggr)^{1/d},\ b^{1/(d-1)} \biggr\}$$
for some constant c. By Lemma 2.3, the number of subcells inside Δ_{i} is \(O(X_{\varDelta_i}(\widehat{H})(r_i/|\widehat{H}_{\varDelta_i}|)^{d}+r_i^{d-1})\), which can be made at most b/4 for c sufficiently small.
 2.
Further subdivide each subcell, e.g., by using vertical cuts, to ensure that each subcell contains at most 2n/(bt) points. The number of extra cuts required is O(b) per cell Δ _{ i }, and at most bt/2+o(bt) in total over all Δ _{ i }. So, the number of subcells is O(b) per cell, and at most 3bt/4+o(bt)≪bt in total.
 3.
For each hyperplane h, multiply the multiplicity of h in \(\widehat{H}\) by \((1+1/b)^{\lambda_{i}(h)}\) where λ _{ i }(h) denotes the number of subcells inside Δ _{ i } crossed by h.
Analysis
For a fixed hyperplane h∈H, let ℓ_{i}(h) be the number of cells in {Δ_{i},…,Δ_{1}} crossed by h. Since {Δ_{i},…,Δ_{1}} is a random subset of size i from a set of size t, we have ℓ_{i}(h)≤O(ℓi/t+log(mt)) w.h.p.(mt)^{4} by a Chernoff bound (a version for sampling without replacement, as noted in the Appendix). So, defining ℓ_{i}:=max_{h∈H} ℓ_{i}(h), we have ℓ_{i}≤O(ℓi/t+log(mt)) w.h.p.(mt).
Let λ(h) be the total number of subcells crossed by h. Since the final multiplicity of h in \(\widehat{H}\) is equal to \(C(1+1/b)^{\lambda(h)}\le|\widehat{H}|\), it follows that \(\max_{h\in H}\lambda(h)\le\log_{1+1/b}(|\widehat{H}|/C)\le O(b\log(|\widehat{H}|/C))\), which has expected value at most O(b log m+(bt)^{1−1/d}+b^{1−1/(d−1)} ℓ+b^{1−1/(d−1)} log(mt) log t), which is bounded from above by (2). □
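The multiplicity accounting in the final step of this proof can be checked numerically; the values b=8 and a crossing count of 25 below are arbitrary choices of ours:

```python
import math

def crossing_bound(total_weight, C, b):
    """If h's final multiplicity C*(1+1/b)^lambda(h) is at most the total
    weight, then lambda(h) <= log_{1+1/b}(total/C) = O(b log(total/C))."""
    return math.log(total_weight / C) / math.log(1 + 1 / b)

b, C, lam = 8, 1.0, 25
final = C * (1 + 1 / b) ** lam             # final multiplicity after 25 crossings
```

The O(b log(·)) form follows because ln(1+1/b) ≥ 1/(b+1).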
Theorem 3.2
Given n points in ℝ^{ d }, we can build a partition tree with query cost O(n ^{1−1/d }).
Proof
Note that the first term of (3) dominates when u exceeds log^{ c′} n for some constant c′, so we can write ℓ(u)≤O(u ^{1−1/d }+log^{ O(1)} n). We conclude that the query cost is O(∑_{ t≤n } bℓ(t))≤O(∑_{ t≤n }[t ^{1−1/d }+log^{ O(1)} n])=O(n ^{1−1/d }), where the sums are over all t that are powers of b. □
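The geometric sum in the last step can be sanity-checked numerically (b=16 and d=2 are arbitrary choices of ours):

```python
def query_cost_sum(n, b, d):
    """Sum t^(1-1/d) over powers t = b, b^2, ..., <= n; the series is geometric
    and dominated by its last term, so the total is O(n^(1-1/d))."""
    s, t = 0.0, b
    while t <= n:
        s += t ** (1 - 1 / d)
        t *= b
    return s
```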
Remarks
To readers familiar with the proof of Matoušek’s partition theorem [39], we point out some differences in our proof of the partition refinement theorem. First, to ensure \(\sum_{j}X_{\varDelta_{j}}(\widehat{H})\le|\widehat{H}|^{d}\), we need disjointness of cells, which forces us to choose multiple subcells from a cutting instead of one subcell per iteration. This in turn forces us to use a different multiplier (1+1/b) instead of doubling. Unlike Theorem 2.1, Theorem 3.1 guarantees an upper but not a lower bound on the size of each subset (certain versions of the test-set observation, e.g., Observation 5.1, require such a lower bound), though this is not an issue in our proof of Theorem 3.2.
To those familiar with Matoušek’s final method based on hierarchical cuttings [42], we note that our method shares certain common ideas with it. Both apply iterative reweighting more “globally” to refine multiple cells simultaneously, as a way to avoid the constant-factor blowup in the partition-theorem method; and hierarchical cuttings originate from the same X-sensitive version of the cutting lemma. However, Matoušek’s method uses a far more complicated weight/multiplicity function, because it tries to deal with different layers of the tree all at once; in contrast, our partition refinement theorem works with one layer at a time and leads to a simpler, more modular design. Matoušek’s method also requires a special root of O(n^{1/d} log n) degree (whose children cells may overlap), and does not guarantee optimality for ℓ(t) except at the bottommost layer, whereas our method ensures optimality for almost all layers (except near the very top for very small t, which is not important). This will make a difference in the next section and in subsequent applications to multilevel data structures.
Random ordering is convenient but is not the most crucial part of the proof of Theorem 3.1: We can easily deterministically find a good cell Δ _{ i } in each iteration if we do not care about the bound on ℓ _{ i }. The naive bound ℓ _{ i }≤ℓ can still lead to a weaker version of Theorem 3.1 with an extra logarithmic factor in the second term of (2), and Theorem 3.2 can still be derived but with some extra effort. (Alternatively, we can derandomize, by considering a potential function of the form \(\sum_{h\in\widehat{H}} 2^{c\ell_{i}(h)t/i}\).)
In the d=2 case, our partition tree can be made into a BSP tree, since the cuttings produced by taking canonical triangulations of the arrangements of samples are easily realizable as binary plane partitions (our algorithm computes such cuttings for r≤b ^{ O(1)} bounded by a constant).
4 Additional Properties
We point out some implications that follow from our method. Define the order-γ query cost of ∂q to be \(\sum_{v\in T:\,\varDelta(v)\cap\partial q\neq\emptyset}\sum_{w\text{ child of }v}|P(w)|^{\gamma}\). Define the order-γ query cost of T to be the maximum order-γ query cost over an arbitrary hyperplane. Our previous definition coincides with the case γ=0. This extended definition is relevant in multilevel data structures where secondary data structures for P(v) are stored at each node v; see Sect. 7 for more details. We have the following new result:
Theorem 4.1
 (i)
Height O(log n) and order-(1−1/d) query cost O(n^{1−1/d} log n).
 (ii)
Height O(log log n) and order-γ query cost O(n^{1−1/d}) for any fixed γ<1−1/d.
Proof
 (i) The same partition tree in the proof of Theorem 3.2 has order-(1−1/d) query cost bounded by the following, where the sums are over all t that are powers of b:
 (ii) To lower the height, we modify our partition tree T. Fix a sufficiently small constant ε>0. We examine each node v of T in a top-down order, and whenever a child w of a node v has \(|P(w)|>|P(v)|^{1-\varepsilon}\), we remove w by making w’s children v’s children. In the new tree T′, every child w of a node v has \(|P(w)|\le|P(v)|^{1-\varepsilon}\), so the height is O(log log n). For each child w of v in T′, w’s parent in T has size at least \(|P(v)|^{1-\varepsilon}\), so the degree of v in T′ is at most \(b|P(v)|^{\varepsilon}\le O((n/t)^{\varepsilon})\) if \(|P(v)|=O(n/t)\). Summing over all t that are powers of b, we obtain order-γ query cost
We have omitted the case γ>1−1/d, because a simple recursive application of Matoušek’s partition theorem already gives optimal order-γ query cost O(n^{γ}) with height O(log log n) in this case.
Next, define a B-partial partition tree for a point set P to be the same as in our earlier definition of a partition tree, except that a leaf now may contain up to B points. Define the query cost in the same manner, as in (1). Note that the query cost of a partial partition tree upper-bounds the number of internal and leaf cells crossed by a hyperplane, but does not account for the cost of any auxiliary data structures we plan to store at the leaves. The following theorem is useful in applications that need space/time tradeoffs, where we can switch to a larger-space data structure with small query time at the leaves; see Sect. 7 for more details. The theorem is also self-evidently useful in the external-memory model (hence the choice of the name “B”): we get improved external-memory (even cache-oblivious) range searching data structures [10] immediately as a byproduct.
Theorem 4.2
Given n points in ℝ^{d} and B<n/log^{ω(1)} n, we can build a B-partial partition tree with size O(n/B) and query cost O((n/B)^{1−1/d}).
Proof
Stop the construction in the proof of Theorem 3.2 when t reaches n/B. Summing over all t that are powers of b, we obtain query cost O(b∑_{t≤n/B} ℓ(t))=O((n/B)^{1−1/d}). □
5 Preprocessing Time
In this section, we examine the issue of preprocessing time. Obtaining polynomial time is not difficult with our method, and it might be possible to lower the bound to O(n^{1+ε}) by using recursion, as was done in Matoušek’s final method [42]. Instead of recursion, we show how to directly speed up the algorithm in Sect. 3. We first aim for O(n log^{O(1)} n) preprocessing time, and later improve it to O(n log n).
First, we need a better version of Observation 2.2 with fewer test hyperplanes. The following observation was shown by Matoušek [39] (see also [47]).
Observation 5.1
(Test Set)
Given a set P of n points and any t, we can construct a set H _{ t } of O(tlog^{ O(1)} N) test hyperplanes, in O(tlog^{ O(1)} N) time, satisfying the following property w.h.p.(N) for any N≥n: For any collection of disjoint cells each containing at least n/t points of P, if the maximum number of cells crossed by a hyperplane in H _{ t } is ℓ, then the maximum number of cells crossed by an arbitrary hyperplane is at most O(ℓ+t ^{1−1/d }).
(Roughly interpreted, the proof involves drawing a random sample of t^{1/d} log N points and taking the O(t log^{d} N) hyperplanes through d-tuples of points in the sample; the time bound is clearly as stated. The log N factors can be removed with more work, using cuttings, but will not matter.)
Regarding Lemma 2.3, we can use known cutting algorithms [21, 23, 39] to get running time O(mr ^{ O(1)}) (randomized or deterministic).

In step 1, we work with a sparser subset of \(\widehat{H}\), by considering a random sample of \(\widehat{H}\).

More crucially, in step 3, we update the multiplicities less frequently, by considering a random sample of the subcells.
To prepare for the proof of the theorem below, we make two definitions, the first of which is standard: Given a (multi)set S, let a p-sample be a subset R generated by taking each (occurrence of an) element of S and independently choosing to put the element in R with probability p. (Note that we can generate a p-sample in time linear in the output size rather than the size of S, by using a sequence of exponentially distributed random variables rather than 0–1 random variables.)
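The parenthetical remark about output-linear generation can be realized with geometric gaps between selected indices (the function name is ours; it assumes 0<p<1):

```python
import math
import random

def p_sample_indices(K, p, rng=random):
    """Return the indices of a p-sample of a size-K (multi)set in time linear
    in the output: jump from one selected index to the next with a
    Geometric(p)-distributed gap instead of flipping K coins."""
    out, i = [], -1
    while True:
        u = 1.0 - rng.random()             # uniform in (0, 1]
        i += 1 + int(math.log(u) / math.log(1.0 - p))  # gap ~ Geometric(p)
        if i >= K:
            return out
        out.append(i)

random.seed(42)
sizes = [len(p_sample_indices(1000, 0.3)) for _ in range(50)]
```

Each gap g satisfies P(g)=p(1−p)^{g−1}, which is exactly the waiting time between successes of independent probability-p coin flips, so the output has the same distribution as the naive K-flip procedure.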
For a list S=〈s_{1},…,s_{K}〉 and a sublist R of S, where S itself (the elements s_{i} and the size K) may be random, we say that R is a generalized p-sample of S if the events E_{i}={s_{i}∈R} are all independent, each occurring with probability p. Note the subtle difference from the earlier definition (where S was thought of as fixed before R is chosen): it is acceptable for s_{i} to depend on E_{1},…,E_{i−1}, as long as we decide whether to put s_{i} in R independently of E_{1},…,E_{i−1} and s_{i}.
Properties enjoyed by standard random samples may not necessarily hold for generalized p-samples. However, the Chernoff bound (see the Appendix) is still applicable to show that \(|R\cap\{s_{1},\ldots,s_{k}\}|\le O(pk+\log N)\) and \(k\le O(1/p)(|R\cap\{s_{1},\ldots,s_{k}\}|+\log N)\) for all k≤N w.h.p.(N). In particular, assuming K≤N, we have \(|R|\le O(p|S|+\log N)\) and \(|S|\le O(1/p)(|R|+\log N)\) w.h.p.(N).
We now present a near-linear-time algorithm for the partition refinement theorem:
Theorem 5.2
Proof
 0.
Let q be a power of 2 (with a negative integer exponent) closest to \(\min\{\frac{(bi)^{1/d}\log N}{|\widehat{H}|},\ \frac{b^{1/(d-1)}t\log N}{|\widehat{H}|\ell},\ \frac{b^{1/(d-1)}i}{|\widehat{H}|}\}\). If q is different from its value in the previous iteration, then create a new q-sample \(\widehat{R}\) of \(\widehat{H}\).
 1. Subdivide Δ_{i} into disjoint subcells by building a (1/r_{i})-cutting of \(\widehat{R}_{\varDelta_{i}}\) inside Δ_{i} with
$$r_i := c \min \biggl\{\bigl|\widehat{R}_{\varDelta_i}\bigr| \biggl(\frac{b}{X_{\varDelta_i}(\widehat{R})} \biggr)^{1/d},\ b^{1/(d-1)} \biggr\}.$$
By Lemma 2.3, the number of subcells inside Δ_{i} is at most b/4 for a sufficiently small constant c.
 2.
Further subdivide each subcell by using extra cuts to ensure that each subcell contains at most 2n/(bt) points. The number of subcells remains O(b) per cell Δ _{ i }, and at most bt in total.
 3. Take a p-sample ρ_{i} of the subcells of Δ_{i} with \(p:=\min\{\frac{b^{1/d}\log N}{t^{1-1/d}},\ \frac{b^{1/(d-1)}\log N}{\ell},\ \frac{b^{1/(d-1)}}{\log t},\ 1\}\). If ρ_{i}≠∅, then
 (a)
For each \(h\in H_{\varDelta_{i}}\), add new copies of h to \(\widehat{H}\) so that the multiplicity of h in \(\widehat{H}\) gets multiplied by \((1+1/b)^{|\rho_{i}(h)|}\), where ρ_{i}(h) denotes the set of subcells in ρ_{i} crossed by h.
 (b)
Insert a q-sample of the newly added hyperplanes of \(\widehat{H}\) to \(\widehat{R}\).
Let ρ(h)=⋃_{i} ρ_{i}(h). Since the multiplicity of h in \(\widehat{H}\) is \(C(1+1/b)^{|\rho(h)|}\le|\widehat{H}|\), it follows that \(\max_{h\in H}|\rho(h)|\le\log_{1+1/b}(|\widehat{H}|/C)\le O(b\log N)\) if the algorithm does not fail. Observe that ρ(h) is a generalized p-sample of the list λ(h) of all subcells crossed by h, where the subcells are listed in the order they are created. Thus, w.h.p., we have \(\max_{h\in H}|\lambda(h)|\le O((1/p)(|\rho(h)|+\log N))\le O((b/p)\log N)\le O((bt)^{1-1/d}+b^{1-1/(d-1)}\ell+b^{1-1/(d-1)}\log t\log N+b\log N)\) if the algorithm does not fail.
Analysis of Running Time. Conditioned on a fixed choice of Δ_{t},…,Δ_{i+1} and \(\widehat{R}\), we have \(E[|\widehat{R}_{\varDelta_{i}}|]\le|\widehat{R}|\ell_{i}/i\). Thus, \(E[|\widehat{R}_{\varDelta_{i}}|]\le(q|\widehat{H}|+\log N)\cdot O(\ell i/t+\log N)/i\le(b\log N)^{O(1)}\) for our choice of q. Assuming that \(\widehat{R}_{\varDelta_{i}}\) is available, step 1 takes time \(|\widehat{R}_{\varDelta_{i}}|r_{i}^{O(1)}\), which has expected value (b log N)^{O(1)}. We do not need to compute \(X_{\varDelta_{i}}(\widehat{R})\), but rather we try different values of r_{i} (powers of 2) until we find a cutting with the right number of subcells. Thus, the expected cost over all iterations is O(t)(b log N)^{O(1)}. By Markov’s inequality, with probability Ω(1), this bound holds; if not, we declare failure.
In step 2, we can check the number of points per subcell by assigning points to subcells in O(b) time per point, for a total cost of O(bn). We can actually make the cost sublinear in n—namely, O(t)(blogN)^{ O(1)}—by replacing P by a \(\frac{bt\log N}{n}\)-sample Q of P. W.h.p., |Q|=O(btlogN), and for any simplex σ, |Q∩σ|≤|Q|/(bt) implies |P∩σ|≤O(n/(bt)) by a Chernoff bound.
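The sampling shortcut in this step can be sketched as follows. This is a minimal, hypothetical illustration (the function name, the brute-force counting, and the constants are ours, not the paper's implementation): replace P by a Bernoulli sample Q with rate bt·log N/n, and certify that a range is sparse by counting only inside Q.

```python
import math
import random

def sparse_count_check(P, in_range, b, t, N, seed=0):
    """Certify that a range holds O(n/(b*t)) points of P by counting
    only inside a small random sample Q (a Chernoff-style check)."""
    rng = random.Random(seed)
    n = len(P)
    p = min(1.0, b * t * math.log(N) / n)   # sampling rate bt*log N / n
    Q = [q for q in P if rng.random() < p]  # the (bt log N / n)-sample
    inside = sum(1 for q in Q if in_range(q))
    # |Q ∩ sigma| <= |Q|/(b t) certifies |P ∩ sigma| = O(n/(b t)) w.h.p.
    return inside <= len(Q) / (b * t), len(Q)
```

The total work is proportional to |Q| = O(bt log N) rather than n, which is the source of the sublinear cost claimed above.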
For step 3(a), we first need to compute the set \(H_{\varDelta _{i}}\), i.e., report all hyperplanes of H intersecting a query cell Δ _{ i }. In the dual, H becomes an m-point set and Δ _{ i } becomes an O(1)-size polyhedron, and this subproblem reduces to simplex range searching. We can use existing data structures, e.g., [39], to get O(mlogm) preprocessing time and \(O(m^{1-1/d}\log^{O(1)}m + |H_{\varDelta _{i}}|)\) query time. Since \(\sum_{i} |H_{\varDelta _{i}}| \le m\ell\) and the probability that ρ _{ i }≠∅ is at most O(bp), the total expected query time is bp⋅O(tm ^{1−1/d }log^{ O(1)} m+mℓ)≤O(t ^{1/d } m ^{1−1/d }+m)(blogN)^{ O(1)}≤O(t+m)(blogN)^{ O(1)}. Again, by Markov's inequality, with probability Ω(1), this bound holds; if not, we declare failure. We can compute ρ _{ i }(h) for each \(h\in H_{\varDelta _{i}}\) naively in time O(b), which can be absorbed in the b ^{ O(1)} factor.
Finally, we account for the total cost of all operations done to \(\widehat{R}^{(q)}\): initialization in step 0, insertions in step 3(b), and computing \(\widehat{R}^{(q)}_{\varDelta _{i}}\) in step 1. Fix q. We continue inserting to \(\widehat{R}^{(q)}\) only if \(q\le2\frac{(bt)^{1/d}\log N}{|\widehat{H}|}\). Thus, w.h.p., at all times, \(|\widehat{R}^{(q)}|\le O(q|\widehat{H}|+\log N)\le O(t^{1/d})(b\log N)^{O(1)}\). Computing \(\widehat{R}^{(q)}_{\varDelta _{i}}\) reduces to simplex range searching again. This time, we use an existing data structure that has larger space but supports faster querying, with preprocessing time and total insertion time \(O(|\widehat{R}^{(q)}|^{d}\log^{O(1)}|\widehat{R}^{(q)}|)= O(t)(b\log N)^{O(1)}\) w.h.p., and query time \(O(\log^{O(1)}|\widehat{R}^{(q)}| + |\widehat{R}^{(q)}_{\varDelta _{i}}|)\). (Insertions are supported by standard amortization techniques [14].) Since \(|\widehat{H}|\le CN^{O(1)}\), the total number of different q's is O(logN), which can be absorbed in the log^{ O(1)} N factor.
To summarize, our algorithm succeeds with probability Ω(1), and w.h.p. the stated bound holds if the algorithm does not fail. We can rerun the algorithm until success, requiring O(logN) trials w.h.p. The running time remains O(m+t)(blogN)^{ O(1)}. At the end, we can assign all the input points to subcells in O(bn) additional time (this term can actually be reduced to O(nlogb) with point-location data structures). □
Theorem 5.3
We can build data structures in O(nlogn) time achieving the bounds stated in Theorems 3.2, 4.1, and 4.2 w.h.p.(n).
Proof
Let H be the union of the test sets H _{ t } from Observation 5.1 over all t that are powers of 2; the total size is m=O(nlog^{ O(1)} N). From now on, summations involving the variable t are over all powers of 2 (instead of b).
Let Π _{ t } be the set of all current leaf cells Δ such that the number of points inside Δ is between n/t and 2n/t; the number of such cells is at most t. Apply Theorem 3.1 to Π _{ t } to get a set \(\varPi'_{t}\) of subcells where the number of points inside each subcell is at most 2n/(bt). Make the O(b) subcells of each cell Δ∈Π _{ t } the children of Δ in the partition tree (Δ is no longer a leaf but its children now are).
Note that the above construction works just as well in the proofs of Theorems 4.1 and 4.2. In the proof of Theorem 4.2, the maximum number of internal node cells crossed by an arbitrary hyperplane is at most O(∑_{ t≤n/B } ℓ(t))=O(n/B)^{1−1/d }. The maximum number of leaf cells crossed by an arbitrary hyperplane is at most b times this number, so the query cost remains O(n/B)^{1−1/d }.
By Theorem 5.3, we immediately get O(nlog^{ O(1)} n) preprocessing time w.h.p.(n). We now describe how to reduce the preprocessing time of Theorem 3.2 to O(nlogn), by “bootstrapping”.
Build a B _{0}-partial partition tree with O(n/B _{0}) size and O(n/B _{0})^{1−1/d } query cost, by Theorem 4.2, with B _{0}=log^{ c } n for some constant c. Here, we can take H to be the union of H _{ t } for all t≤n/B _{0}, of total size m=O((n/B _{0})log^{ O(1)} n), and the time bound drops to \(O(\sum_{t\le n/B_{0}}[n + (n/B_{0})\log^{O(1)}n])=O(n\log n)\) for c sufficiently large. Using the preceding partition tree method with P(B _{0})=O(B _{0}log^{ O(1)} B _{0}) and \(Q(B_{0})=O(B_{0}^{1-1/d})\) to handle the subproblems at the leaf cells, we then get a new partition tree with preprocessing time O(n/B _{0})P(B _{0})+O(nlogn)=O(nlogn) and query time O(n/B _{0})^{1−1/d } Q(B _{0})=O(n ^{1−1/d }).
The query bound is expected but can be made to hold w.h.p.(n): For a fixed query hyperplane, we are bounding a sum Z of independent random variables lying between 0 and O(B _{0}) with E[Z]≤O(n ^{1−1/d }); by a Chernoff bound (see the Appendix), w.h.p.(n), we have Z≤O(n ^{1−1/d }+B _{0}logn)=O(n ^{1−1/d }). Observe that every cell in our construction is defined by a constant number of input points, so the number of possible cells, and hence the number of combinatorially different query hyperplanes, is polynomially bounded in n. Thus, the maximum query cost is indeed O(n ^{1−1/d }) w.h.p.(n).
The same idea also reduces the preprocessing time of Theorems 4.1 and 4.2 to O(nlogn). □
Note that the above result is Las Vegas, since the random choices made in the preprocessing algorithm only affect the query time, not the correctness of the query algorithm.
6 Shallow Version
In this section, we modify our method to solve the halfspace range reporting problem in an even dimension d≥4. It suffices to consider upper halfspaces, i.e., we want to report all points above a query hyperplane. We say that a hyperplane h is k-shallow if the number of points of P above h is at most k.
It can be checked that Observation 2.2 remains true for k-shallow hyperplanes, i.e., with the same test set H, for every k, if the maximum number of cells crossed by a k-shallow hyperplane in H is ℓ, then the maximum number of cells crossed by an arbitrary k-shallow hyperplane is at most O(ℓ).
Matoušek [40] gave a “shallow version” of the cutting lemma. Actually, a weaker version of the lemma suffices to derive a “shallow version” of the partition theorem, e.g., as noted in [47] (the original shallow cutting lemma wants to cover all points in the (≤k)-level, whereas it suffices to cover a large number of points at low levels). We use this weaker cutting lemma, which has a simpler proof, because it makes an X-sensitive variant of the lemma easier to verify. Our version also has a nice new feature: all cells are vertical, i.e., they contain (0,…,0,−∞).
Stating the right analog of “X” requires more care. Define X _{ Δ }(H,p) to be the expected number of vertices of the lower envelope of a p-sample of H inside Δ. Define \(\overline{X}_{\varDelta }(H,p)=\sum_{p'\le p}X_{\varDelta }(H,p')\), where the sum is over powers p′ of 2. Note that although X _{ Δ }(⋅,⋅) may not be monotone increasing in p, \(\overline{X}_{\varDelta }(\cdot,\cdot)\) is, so it is more convenient to work with the latter quantity.
In order to state how many points ought to be covered, define μ _{ Δ }(H) (the “measure” of H) to be the total number of pairs (p,h) with p∈P∩Δ and h∈H such that p is above h.
Lemma 6.1
(Shallow Cutting Lemma, weak version)
Let H be a set of m hyperplanes, P be a set of points, and Δ be a vertical cell in ℝ^{ d }. For any r, we can find an O(1/r)-cutting of H into O(r ^{⌊d/2⌋}) disjoint vertical cells inside Δ that cover all but O(μ _{ Δ }(H)r/m) points of P.
In terms of \(\overline{X}(\cdot,\cdot)\), the number of cells can be reduced to \(O(\overline{X}_{\varDelta }(H,r/m)+r^{\lfloor{(d-1)/2}\rfloor})\).
Proof
Draw an (r/m)-sample R of H. Take the canonical triangulation T of the lower envelope of R restricted inside Δ. The expected number of cells is \(O(\overline{X}_{\varDelta }(H,r/m) +r^{\lfloor{(d-1)/2}\rfloor})\), since there are O(r ^{⌊(d−1)/2⌋}) vertices in the lower envelope inside each (d−1)-dimensional boundary facet of Δ. For each cell σ∈T with \(|H_{\sigma}|=a_{\sigma}m/r\) (a _{ σ }>1), further subdivide σ as follows: pick a (1/(2a _{ σ }))-approximation A _{ σ } of H _{ σ } with \(|A_{\sigma}|=O(a_{\sigma}^{2}\log a_{\sigma})\) (ε-approximations [35, 44, 47] can be found by random sampling); return any triangulation of the \((\le|A_{\sigma}|/a_{\sigma})\)-level L _{ σ } of A _{ σ } into \(|A_{\sigma}|^{O(1)}\le a_{\sigma}^{O(1)}\) vertical subcells intersected with σ. The total number of subcells is \(\sum_{\sigma\in T}(r|H_{\sigma}|/m)^{O(1)}\), which has expected value \(O(\overline{X}_{\varDelta }(H,r/m)+r^{\lfloor {(d-1)/2}\rfloor})\) by Clarkson and Shor's or Chazelle and Friedman's technique [23, 29].
Each vertex of L _{ σ } has level at most \(O(|H_{\sigma}|/a_{\sigma})\le O(m/r)\) in H by the approximation property of A _{ σ }, so each subcell is indeed crossed by at most O(m/r) hyperplanes (since a hyperplane crossing the subcell must lie below one of the d vertices of the subcell). Inside a cell σ∈T, any uncovered point has level at least \(|A_{\sigma}|/a_{\sigma}\) in A _{ σ } and thus lies above at least \(\varOmega(|H_{\sigma}|/a_{\sigma})\ge\varOmega(m/r)\) hyperplanes in H, again by the approximation property. So, the number of uncovered points in T is at most O(μ _{ Δ }(H)r/m). On the other hand, the number of uncovered points outside T is at most μ _{ Δ }(R), which has expected value μ _{ Δ }(H)r/m. By Markov's inequality, the bounds on the number of cells and uncovered points hold simultaneously to within constant factors with probability Ω(1). □
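The first step of this proof, taking the lower envelope of a random sample of the hyperplanes, can be illustrated in the plane, where the lower envelope of lines is computable by a standard slope-sorted sweep. The sketch below is a minimal, hypothetical 2-d helper (our own naming and method, not the canonical-triangulation machinery used above):

```python
def lower_envelope(lines):
    """Lower envelope (pointwise minimum) of lines y = a*x + b.
    Returns the lines attaining the minimum, ordered from x = -inf
    (largest slope) to x = +inf (smallest slope)."""
    # Process slopes in decreasing order; ties keep the lower intercept.
    lines = sorted(lines, key=lambda ab: (-ab[0], ab[1]))
    env = []
    for a, b in lines:
        if env and env[-1][0] == a:      # same slope, larger intercept: skip
            continue
        while len(env) >= 2:
            (a1, b1), (a2, b2) = env[-2], env[-1]
            # Drop l2 = env[-1] if the new line overtakes l1 = env[-2] at or
            # before the point where l2 does:
            #   x(l1, new) <= x(l1, l2)
            #   <=> (b - b1)/(a1 - a) <= (b2 - b1)/(a1 - a2)
            # (both denominators are positive, so cross-multiplying is safe)
            if (b - b1) * (a1 - a2) <= (b2 - b1) * (a1 - a):
                env.pop()
            else:
                break
        env.append((a, b))
    return env
```

On a p-sample of m lines one would call `lower_envelope` on roughly pm sampled lines; in the plane the envelope has linear complexity, while in higher dimensions its expected complexity inside Δ is exactly the quantity \(X_{\varDelta }(H,p)\) defined above.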
Equipped with the shallow cutting lemma, we can adopt the same approach from Sect. 3 to solve the halfspace range reporting problem, with d replaced by ⌊d/2⌋ in the exponents. The assumption that d is even is critical, to ensure that ⌊(d−1)/2⌋ is strictly less than ⌊d/2⌋.
Theorem 6.2
(Shallow Partition Refinement Theorem)
Proof
Lastly, we bound the number of points not covered by the subcells. Note that \(\sum_{j=1}^{t} \mu_{\varDelta _{j}}(\widehat{H})\le O(|\widehat{H}|k)\). Conditioned on a fixed choice of Δ _{ t },…,Δ _{ i+1}, we have \(E[\mu_{\varDelta _{i}}(\widehat{H})]\le O(|\widehat{H}|k/i)\), so the expected number of uncovered points in Δ _{ i } is \(O(|\widehat{H}|k/i\cdot r_{i}/|\widehat {H}_{\varDelta _{i}}|)\le O(p_{0}|\widehat{H}|k/i)\le O(b^{1/{\lfloor{d/2}\rfloor}}k/i^{1-1/{\lfloor{d/2}\rfloor}})\). Summing over i=1,…,t gives the desired O((bt)^{1/⌊d/2⌋} k) bound. By Markov's inequality, the bounds on the crossing number and the number of uncovered points simultaneously hold to within constant factors with probability Ω(1). □
Theorem 6.3
Given n points in ℝ^{ d } for an even d≥4, we can build a partition tree that has query cost O(n ^{1−1/⌊d/2⌋}+klog^{ O(1)} n) for any k-shallow query hyperplane.
Proof
Let k _{ t }:=n/(ct ^{1/⌊d/2⌋}logn) for a constant c. We follow the partition tree construction from the proof of Theorem 5.3, where \(\varPi_{t}'\) is now obtained by applying Theorem 6.2 to Π _{ t } and the k _{ bt }-shallow hyperplanes in the test set H from Observation 2.2. We remove the points not covered. The total number of points removed is at most O(∑_{ t }(bt)^{1/⌊d/2⌋} k _{ bt })=O(n/c), which can be made at most n/2 for a sufficiently large c. We ignore the removed points for the time being.
In the analysis, we redefine ℓ(t) to be the maximum number of cells of Π _{ t } crossed by an arbitrary k _{ t }-shallow hyperplane. The same recurrence holds, with d and d−1 replaced by ⌊d/2⌋ and ⌊(d−1)/2⌋. Since d is even, ⌊(d−1)/2⌋=⌊d/2⌋−1. We get ℓ(u)≤O(u ^{1−1/⌊d/2⌋}+log^{ O(1)} n).
The at most n/2 removed points can be handled recursively. We can join the O(logn) partition trees into one by creating a new root of degree O(logn). Define \(\overline{\ell}(u,n,k)\) to be the maximum overall number of cells containing between n/u and 2n/u points that are crossed by a k-shallow hyperplane. Then \(\overline{\ell}(u,n,k)\le \ell(u,n,k) + \ell(u/2,n/2,k)+ \cdots \le O(u^{1-1/{\lfloor{d/2}\rfloor}} + (1+ku/n)\log^{O(1)}n)\).
We conclude that the query cost is \(O (\sum_{t\le n}b\overline{\ell}(t,n,k) ) \le O(n^{11/{\lfloor {d/2}\rfloor}} + k\log^{O(1)}n)\). □
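Spelling out the geometric sums behind this last bound (summing over powers of two t≤n, and using \(\overline{\ell}(t,n,k)\le O(t^{1-1/\lfloor d/2\rfloor}+(1+kt/n)\log^{O(1)}n)\) from above):

\[
\sum_{t\le n} t^{1-1/\lfloor d/2\rfloor} = O\bigl(n^{1-1/\lfloor d/2\rfloor}\bigr),
\qquad
\sum_{t\le n}\Bigl(1+\frac{kt}{n}\Bigr)\log^{O(1)}n = O\bigl((\log n + k)\log^{O(1)}n\bigr),
\]

since each sum is geometric and dominated by its largest term t=n, and the O(logn) count of terms is absorbed in the log^{ O(1)} n factor (as is the constant branching factor b).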
Theorem 6.4
Given n points in ℝ^{ d } for an even d≥4 and B<n/log^{ ω(1)} n, we can build a B-partial partition tree with size O(n/B) and query cost O((n/B)^{1−1/⌊d/2⌋}+(k/B)log^{ O(1)} n) for any k-shallow query hyperplane.
Proof
As in the proofs of Theorems 4.2 and 5.3, the query cost is \(O (\sum_{t\le n/B}\overline{\ell}(t,n,k) ) \le O((n/B)^{1-1/{\lfloor{d/2}\rfloor}} + (1+k/B)\log^{O(1)}n)\). □
Remarks
Because of the recursive handling of removed points, we lose the property that the children cells are disjoint, but only at the root.
The above method is simpler than many of the halfspace range reporting methods from Matoušek's paper [40], in that we do not need auxiliary data structures. We bound the overall crossing number of a k-shallow hyperplane directly as a function of n and k.
We have purposely been sloppy about the second O(klog^{ O(1)} n) term in Theorem 6.3, since our main interest is in the case when k is small. (For large k>n ^{1−1/⌊d/2⌋}log^{ ω(1)} n, we already know how to get O(k) time [40].) By bootstrapping as in the proof of Theorem 5.3 for j rounds, the polylogarithmic factor can be reduced to the jth iterated logarithm log^{(j)} n for any constant j.
Preprocessing Time
We now examine the preprocessing time of this method. First we need a more economical shallow version of the test set observation. This time, we cannot quote Matoušek's paper [40] directly, since he did not state a sufficiently general version of the observation (he only considered (n/t)-shallow hyperplanes rather than near-(n/t ^{1/⌊d/2⌋})-shallow hyperplanes, so the test set size became larger). We thus include a proof of the version we need (as it turns out, the dependence on t disappears in the case of vertical cells):
Observation 6.5
(Shallow Test Set)
Given a set P of n points in ℝ^{ d } and any k≥logN, we can construct a set H _{ k } of O((n/k)^{⌊d/2⌋}log^{ O(1)} N) O(k)-shallow hyperplanes, in O((n/k)^{⌊d/2⌋}log^{ O(1)} N) time, satisfying the following property w.h.p.(N) for any N≥n: For any collection of vertical cells, if the maximum number of cells crossed by a hyperplane in H _{ k } is ℓ, then the maximum number of cells crossed by an arbitrary k-shallow hyperplane is O(ℓ).
Proof
In what follows, the dual of an object q is denoted by q ^{∗}. Draw a random sample R of P of size (n/k)logN and return the set H _{ k } of all hyperplanes through d-tuples of points in R that are (clogN)-shallow with respect to R for a constant c. Note that in the dual, \(H_{k}^{*}\) corresponds to the vertices of the (≤clogN)-level L of R ^{∗} and thus has size \(O(|R|^{\lfloor{d/2}\rfloor}\log^{O(1)}N)=O((n/k)^{\lfloor{d/2}\rfloor}\log^{O(1)}N)\) [29].
To prove correctness, first note that every hyperplane in H _{ k } is O(k)-shallow with respect to P w.h.p.(N) by a Chernoff bound (the number of combinatorially different hyperplanes is polynomially bounded). Let h be a k-shallow hyperplane with respect to P. Then by a Chernoff bound, w.h.p.(N), h is (clogN)-shallow with respect to R for a sufficiently large c, i.e., the point h ^{∗} is in L. Thus, h ^{∗} lies below a facet Δ ^{∗} of L, defined by d vertices \(h_{1}^{*},\ldots,h_{d}^{*}\in H_{k}^{*}\). Suppose h crosses a vertical cell σ. Then some point q∈σ lies above h, i.e., the hyperplane q ^{∗} lies below h ^{∗}. Then q ^{∗} must lie below one of the points \(h_{1}^{*},\ldots,h_{d}^{*}\), i.e., q must lie above one of the hyperplanes h _{1},…,h _{ d }∈H _{ k }. So, σ is crossed by one of h _{1},…,h _{ d }. □
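The construction in this proof can be sketched in the plane (d=2), where the hyperplanes are lines through pairs of sample points. The helper below is a hypothetical, brute-force 2-d illustration of the idea, with our own names and constants; it is not an efficient implementation (the paper computes the shallow level in the dual instead of testing all pairs):

```python
import math
import random
from itertools import combinations

def shallow_test_set_2d(P, k, N, c=2, seed=0):
    """2-d sketch of the shallow test set: sample ~ (n/k) log N points of P
    and keep every line through a pair of sample points that has at most
    c log N sample points strictly above it, i.e., is (c log N)-shallow
    with respect to the sample R."""
    rng = random.Random(seed)
    n = len(P)
    size = min(n, int((n / k) * math.log(N)) + 1)
    R = rng.sample(P, size)
    thresh = c * math.log(N)
    H = []
    for (x1, y1), (x2, y2) in combinations(R, 2):
        if x1 == x2:
            continue                      # ignore vertical pairs in this sketch
        a = (y2 - y1) / (x2 - x1)         # line y = a*x + b through both points
        b = y1 - a * x1
        above = sum(1 for (x, y) in R if y > a * x + b + 1e-9)
        if above <= thresh:               # (c log N)-shallow w.r.t. R
            H.append((a, b))
    return H, R
```

By the Chernoff-bound argument above, each kept line is then O(k)-shallow with respect to the full set P w.h.p.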
In the proof of Theorem 6.3, we can take H to be the union of the test sets \(H_{k_{t}}\) over all t; the total size is m=O(∑_{ t }(n/k _{ t })^{⌊d/2⌋}log^{ O(1)} N)=O(∑_{ t } tlog^{ O(1)} N)=O(nlog^{ O(1)} N).
Lemma 6.1 requires O(mr ^{ O(1)}) time; we can repeat for O(logN) trials and return the best result found to ensure correctness w.h.p.(N).
We can then proceed as in the proof of Theorem 5.2 to get a nearlineartime algorithm for Theorem 6.2. Since cells are vertical, the computation of \(H_{\varDelta _{i}}\) and \(R_{\varDelta _{i}}\) now reduces to halfspace range reporting in the dual (reporting hyperplanes crossing a vertical cell reduces to reporting hyperplanes lying below each of the d vertices of the cell).
It can be checked that the bootstrapping step in the proof of Theorem 5.3 carries through as well, yielding the final result:
Theorem 6.6
We can build a partition tree in O(nlogn) time achieving the bound stated in Theorems 6.3 and 6.4 w.h.p.(n).
7 Some Applications
Although we dare not go through all papers in the literature that have used simplex or halfspace range searching data structures as subroutines, it is illustrative to at least briefly mention a few sample applications.
Spanning Trees (and Triangulations) with Low Crossing Number
A series of papers [2, 7, 34, 38, 52] addressed the construction of spanning trees such that the maximum number of edges crossed by a hyperplane is small. This problem has direct applications to a number of geometric problems [3, 52]. We obtain the first O(nlogn)-time algorithm that attains asymptotically optimal worst-case crossing number.
Corollary 7.1
Given n points in ℝ^{ d }, there is an O(nlogn)time Monte Carlo algorithm to compute a spanning tree with the property that any hyperplane crosses at most O(n ^{1−1/d }) tree edges w.h.p.(n). For d=2, we can ensure that the spanning tree is planar.
Proof
The result follows directly from Theorems 3.2 and 5.3: for each cell in the partition tree, we just select a constant (O(b)) number of edges to connect the subsets at the children.
In the d=2 case, we can ensure that the spanning tree has no self-intersections, because our partition tree guarantees the disjointness of the children's cells: At each node v, we select connecting edges that lie outside the convex hulls of the children's subsets. For example, this can be done naively by triangulating the region between the convex hull of P(v) and the convex hulls of the children's subsets, and then picking out appropriate edges; the required time is linear in |P(v)|. The convex hulls themselves can be computed by merging bottom-up. The total time is O(nlogn). □
A related combinatorial problem is to find a Steiner triangulation with low crossing number for a 2dimensional point set. The best upper bound known is \(O(\sqrt{n}\log n)\) [8], which is obtained by combining spanning trees with low crossing number and a result of Hershberger and Suri [36] for simple polygons. We obtain the following new theorem:
Corollary 7.2
Given a set P of n points in ℝ^{2}, there exists a triangulation of a set of O(n) points that includes P, with the property that any line crosses at most \(O(\sqrt{n})\) edges.
Proof
The result follows directly from Theorem 3.2, using again the property that our partition tree guarantees disjointness of the children’s cells: We introduce Steiner points at each vertex of every cell in the tree. At each node v, we shrink the children’s cells slightly and triangulate the “gap” between the cell at v and the children’s cells, which has constant (O(b)) complexity. □
Multilevel Data Structures
Many query problems are solved using multilevel partition trees, where we build secondary data structures for each canonical subset and answer a query by identifying relevant canonical subsets and querying their corresponding secondary structures. The corollary below encapsulates the main property of our partition tree for multilevel applications, and is a direct consequence of Theorems 4.1 and 5.3:
Corollary 7.3
 (i)
We can form O(n) canonical subsets of total size O(nlogn) in O(nlogn) time, such that the subset of all points inside any query simplex can be reported as a union of disjoint canonical subsets C _{ i } with \(\sum_{i}|C_{i}|^{1-1/d}\le O(n^{1-1/d}\log n)\) in time O(n ^{1−1/d }logn) w.h.p.(n).
 (ii)
For any fixed γ<1−1/d, we can form O(n) canonical subsets of total size O(nloglogn) in O(nlogn) time, such that the subset of all points inside any query simplex can be reported as a union of disjoint canonical subsets C _{ i } with \(\sum_{i}|C_{i}|^{\gamma}\le O(n^{1-1/d})\) in time O(n ^{1−1/d }) w.h.p.(n).
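A 1-dimensional analog may help intuition for canonical subsets: in a balanced binary tree over sorted points, any interval query decomposes into O(logn) disjoint canonical subsets (tree nodes), and secondary structures are built on exactly these subsets. The class below is a minimal, hypothetical illustration of this decomposition, not the simplex machinery of Corollary 7.3:

```python
class CanonicalTree:
    """Balanced binary tree over sorted points; any interval [lo, hi) is
    reported as a disjoint union of O(log n) canonical subsets."""
    def __init__(self, pts):
        self.pts = sorted(pts)

    def canonical(self, lo, hi):
        """Return disjoint canonical subsets covering pts in [lo, hi)."""
        out = []
        def rec(l, r):                      # node = pts[l:r]
            if r <= l:
                return
            if lo <= self.pts[l] and self.pts[r - 1] < hi:
                out.append(self.pts[l:r])   # node fully inside: one subset
                return
            if self.pts[r - 1] < lo or self.pts[l] >= hi:
                return                      # node disjoint from the query
            m = (l + r) // 2                # otherwise recurse on both halves
            rec(l, m)
            rec(m, r)
        rec(0, len(self.pts))
        return out
```

In a multilevel structure, each canonical subset would carry its own secondary data structure, which is why the number and total size of the subsets (as in Corollary 7.3) govern the space and query bounds.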
For example, we can get the following results:
Corollary 7.4
 (a)
O(nlog^{2} n) preprocessing time and O(nlogn) space such that we can report the k intersections with a query line in \(O(\sqrt{n}\log n + k)\) expected time.^{5}
 (b)
O(nlog^{3} n) preprocessing time and O(nlog^{2} n) space such that we can report the k intersections with a query line segment in \(O(\sqrt{n}\log^{2}n + k)\) expected time.
 (c)
O(nlog^{3} n) preprocessing time and O(nlog^{2} n) space such that we can find the first intersection by a query ray in \(O(\sqrt{n}\log^{2}n)\) expected time.
Specifically, (a) follows by applying Corollary 7.3(i) twice; (b) follows by applying Corollary 7.3(i) once more; and (c) follows from (b) by a simple randomized reduction (e.g., [16]). For (c), the previous result by Agarwal [3] had O(nα(n)log^{4} n) space and \(O(\sqrt{n\alpha(n)}\log^{2}n)\) query time (for nonintersecting segments, his bounds of O(nlog^{3} n) space and \(O(\sqrt{n}\log^{2}n)\) query time are still worse than ours).
For another example, we get the following result by applying Corollary 7.3(i) d+1 times:
Corollary 7.5
Given n simplices in ℝ^{ d }, there is a data structure with O(nlog^{ d+1} n) preprocessing time and O(nlog^{ d } n) space, such that we can find all simplices containing a query point in O(n ^{1−1/d }log^{ d } n) expected time; and we can find all simplices contained inside a query simplex in O(n ^{1−1/d }log^{ d } n) expected time.
In contrast, previous methods [39, 42] had n ^{ O(1)} preprocessing time, O(nlog^{2d } n) space, and O(n ^{1−1/d }log^{ d } n) query time; or O(n ^{1+ε }) preprocessing time, O(nlog^{ O(1)} n) space, and O(n ^{1−1/d }log^{ O(1)} n) query time; or \(O(n 2^{O(\sqrt{\log n})})\) preprocessing time and space, and \(O(n^{11/d}2^{O(\sqrt{\log n})})\) query time.
Other problems can be solved in a similar way, e.g., intersection searching among simplices in ℝ^{ d }.
Applications of the Shallow Case
Halfspace range emptiness dualizes to membership queries for an intersection \(\mathcal{P}\) of n halfspaces: decide whether a query point lies in \(\mathcal{P}\). The problem is a special case of ray shooting in a convex polytope: find the intersection of a query ray with \(\mathcal{P}\). In turn, ray shooting queries are special cases of j-dimensional linear programming queries: find a point in \(\mathcal{P}\) that is extreme along a query direction and lies inside a query j-flat. The author [16] (see also [50]) has given a randomized reduction of linear programming queries to halfspace range reporting queries in the case when the output size k is small (O(logn)). This reduction does not increase the asymptotic time and space bounds w.h.p. if the query time bound grows faster than n ^{ ε } for some fixed ε>0. Exact nearest neighbor search (finding the closest point in a given point set to a query point under the Euclidean metric) reduces to ray shooting in one higher dimension by the standard lifting map. Theorems 6.3 and 6.6 thus imply:
Corollary 7.6
For d≥4 even, there are data structures for halfspace range emptiness queries for n points in ℝ^{ d }, for ray shooting and linear programming queries inside the intersection of n halfspaces in ℝ^{ d }, and exact nearest neighbor queries of n points in ℝ^{ d−1}, with O(nlogn) preprocessing time, O(n) space, and O(n ^{1−1/⌊d/2⌋}) expected query time.
Tradeoffs
Thus far, we have focused exclusively on linearspace data structures. By combining with a largespace data structure with logarithmic query time, one can usually obtain a continuous tradeoff between space and query time. Specifically, such a tradeoff for halfspace range counting, for example, can be obtained through the following corollary:
Corollary 7.7
 (i)
O(n/B)P(B)+O(nlogn) preprocessing time, O(n/B)S(B)+O(n) space, and O(n/B)^{1−1/d } Q(B)+O(n/B)^{1−1/d } expected query time, assuming B<n/log^{ ω(1)} n.
 (ii)
O(n/B)^{ d } P(B)+O((n/B)^{ d } B) preprocessing time, O(n/B)^{ d } S(B)+O((n/B)^{ d } B) space, and Q(B)+O(log(n/B)) (expected) query time.
Proof
Part (i) is an immediate consequence of Theorems 4.2 and 5.3. Part (ii) is known and follows from a direct application of hierarchical cuttings [21] (in particular, see [42, Theorem 5.1]). □
As space/query-time tradeoffs are already known [42], we look instead at preprocessing-time/query-time tradeoffs, which are more important in algorithmic applications and where we get new results. We explain how such a tradeoff can be obtained through the above corollary. Interestingly, an iterated logarithm arises:
Corollary 7.8
There is a ddimensional halfspace range counting data structure with \(O(m2^{O(\log^{*}n)})\) preprocessing time and \(O((n/m^{1/d})2^{O(\log^{*}n)})\) expected query time for any given m∈[nlogn,n ^{ d }/log^{ d } n].
Proof
To obtain the full tradeoff, we use one final application of Corollary 7.7(ii), choosing B so that B ^{ d−1}/logB=n ^{ d }/m, to get preprocessing time \(O(2^{O(\log^{*}n)}(n/B)^{d} B\log B) =O(2^{O(\log^{*}n)}m)\) and query time \(O(2^{O(\log^{*}n)} B^{1-1/d}/\log^{1/d}B + \log n)= O(2^{O(\log^{*}n)}n/m^{1/d})\). □
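The balance point B ^{ d−1}/logB=n ^{ d }/m can be located numerically; the toy helper below (a hypothetical sketch with our own names, using a simple doubling search over powers of 2) illustrates the choice:

```python
import math

def choose_block_size(n, m, d):
    """Pick the smallest power of 2 with B^(d-1)/log B >= n^d/m,
    the balance point used in the tradeoff of Corollary 7.8."""
    target = n ** d / m
    B = 2
    while B ** (d - 1) / math.log(B) < target:
        B *= 2
    return B
```

Restricting B to powers of 2 only changes the constant factors in the resulting preprocessing and query bounds.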
The above result is reminiscent of Matoušek’s \(O(n^{4/3}2^{O(\log^{*}n)})\)time algorithm [42] for Hopcroft’s problem in 2d: given n points and n lines in the plane, find a pointline incidence (or similarly count the number of pointaboveline pairs). Matoušek’s solution required only cuttings and managed to avoid partition trees because the problem is offline, i.e., the queries are known in advance. Corollary 7.8 can be viewed as an extension to online queries.
(Incidentally, we pick the example of halfspace range counting in the above because, for simplex range counting, the current best large-space data structure needed in Corollary 7.7(ii) has extra logarithmic factors [42], so our new method can eliminate some but not all logarithmic factors.)
We can obtain similar results for halfspace range emptiness:
Corollary 7.9
 (i)
O(n/B)P(B)+O(nlogn) (expected) preprocessing time, O(n/B)S(B)+O(n) (expected) space, and O(n/B)^{1−1/⌊d/2⌋} Q(B)+O(n/B)^{1−1/⌊d/2⌋} expected query time, assuming B<n/log^{ ω(1)} n.
 (ii)
O(n/B)^{ d } P(B)+O((n/B)^{ d } B) expected preprocessing time, O(n/B)^{ d } S(B)+O((n/B)^{ d } B) expected space, and Q(B)+O(log(n/B)) expected query time.
Proof
Part (i) is a consequence of Theorems 6.4 and 6.6. However, one technical issue arises: for emptiness queries, we could report that the range is nonempty whenever the query cost exceeds the stated limit, but this would only yield a Monte Carlo algorithm. We describe a Las Vegas alternative. Store a random sample R of size r in any halfspace range emptiness data structure with O(rlogr) preprocessing time, O(r) space, and O(r ^{1−δ }) query time for some fixed δ>0. If the query halfspace h is not empty with respect to R, we are done. Otherwise, h is O((n/r)logn)-shallow w.h.p.(n) by a Chernoff bound (or the ε-net property of random samples [35, 44, 47]). Then w.h.p.(n), the query cost from Theorem 6.4 is O((n/B)^{1−1/⌊d/2⌋}+(n/(rB))log^{ O(1)} n). Setting r=(n/B)^{1/⌊d/2⌋}log^{ c } n for a sufficiently large constant c yields the desired bound. (Note that for d≥6, the auxiliary data structure for R is unnecessary as δ=0 still works.)
Part (ii) follows from a direct application of hierarchical cuttings in the shallow context, where we only need to cover a lower envelope of n halfspaces (although we are not aware of any references explicitly stating the result, see [11, 49] for the general idea, which involves just repeated applications of an X-sensitive shallow cutting lemma as in Lemma 6.1). □
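The Las Vegas filter described in part (i) can be sketched as follows. This is a hypothetical illustration with our own names; the final brute-force scan stands in for the shallow structure of Theorem 6.4, which would answer the shallow case within the stated bounds:

```python
import random

def las_vegas_emptiness(P, above, r, seed=0):
    """Sketch of the Las Vegas filter: test a random r-sample first.
    If some sample point is above the query hyperplane, the range is
    certainly nonempty; otherwise the query is O((n/r) log n)-shallow
    w.h.p., and a shallow structure finishes the job (here: a scan)."""
    rng = random.Random(seed)
    R = rng.sample(P, min(r, len(P)))
    if any(above(q) for q in R):
        return True                     # nonempty, certified by the sample
    return any(above(q) for q in P)     # shallow case: placeholder fallback
```

Note the answer is always correct regardless of the random choices; only the running time is random, which is exactly the Las Vegas guarantee claimed above.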
We can proceed as in the proof of Corollary 7.8 to get a preprocessing/querytime tradeoff for halfspace range emptiness. The author [18] has given another randomized reduction from linear programming queries to halfspace range emptiness. This reduction does not increase the asymptotic expected query time if it grows faster than n ^{ ε }, and does not increase the asymptotic preprocessing time if it grows faster than n ^{1+ε }. Therefore:
Corollary 7.10
For d≥4 even, there is a ddimensional halfspace range emptiness data structure with \(O(m2^{O(\log^{*}n)})\) preprocessing time and \(O((n/m^{1/{\lfloor{d/2}\rfloor}})2^{O(\log^{*}n)})\) expected query time for any given m∈[nlogn,n ^{⌊d/2⌋}/log^{⌊d/2⌋} n]. The same result holds for ray shooting and linear programming queries inside the intersection of n halfspaces in ℝ^{ d }, and exact nearest neighbor queries of n points in ℝ^{ d−1}, assuming m∈[n ^{1+δ },n ^{⌊d/2⌋−δ }] for some fixed δ>0.
One Algorithmic Application
Improved data structures often lead to improved algorithms for offline problems. We close with just one such example: Given n points, we can find the extreme points (the vertices of the convex hull) by answering n linear programming queries. By choosing m=n ^{2−2/(⌊d/2⌋+1)}, we get the following corollary, improving the best previous worstcase time bound of O(n ^{2−2/(⌊d/2⌋+1)}log^{ O(1)} n) [17, 41].
Corollary 7.11
For d≥4 even, we can compute the extreme points of a given set of n points in ℝ^{ d } in \(O(n^{22/({\lfloor {d/2}\rfloor}+1)}2^{O(\log^{*}n)})\) expected time.
By Seidel’s algorithm [51], we can construct the convex hull itself in additional O(flogn) time where f denotes the output size. (For f<n, we could also slightly speed up the convex hull algorithm in [17] for d≥6 even, but an algorithm in [19, Theorem 6.3] is faster in this case.)
8 Conclusions
The new results of this paper completely subsume most of the results from Matoušek's previous paper [42], at least if randomized algorithms are acceptable (as they are to most researchers nowadays). Although we have resolved some of the main open questions concerning upper bounds, there are still a few minor issues. For example, is it possible to eliminate the iterated logarithmic factor in the preprocessing-time/query-time tradeoffs (see Corollary 7.8)? For halfspace range reporting for even d, can one get O(n) space and O(n ^{1−1/⌊d/2⌋}+k) query time, without any extra iterated logarithmic factors in either term (see the remarks after Theorem 6.4)? More importantly, for halfspace emptiness for odd d, can one get O(n) space and O(n ^{1−1/⌊d/2⌋}) query time?
In the other direction, for simplex range searching, can one prove a tight Ω(n ^{1−1/d }) lower bound on the query time for linear-space data structures in, say, the semigroup model [20], without any extra logarithmic factor? Can one prove lower bounds showing in some sense that a logarithmic-factor increase is necessary for multilevel partition trees (see Corollary 7.3), or that extra logarithmic factors are necessary in the query time in the case of large near-O(n ^{⌊d/2⌋}) space [42]? More importantly, can one prove near-optimal lower bounds for halfspace range emptiness?
It seems plausible that our approach could be adapted to other kinds of partition trees for semialgebraic sets [6, 9] in some cases (where the complexity of arrangements or lower envelopes in d−1 dimensions is strictly less than in d dimensions). We do not yet know whether our approach could lead to new results for dynamic partition trees that support insertions and deletions.
Footnotes
 1.
For better historical accuracy, we use dates of the earlier conference proceedings versions of papers.
 2.
 3.
Rough Proof Sketch: We draw a sample R of size r and take the cells of a canonical triangulation T of \(\mathcal{A}(R)\) [27] restricted inside Δ. For a cell σ∈T that is crossed by a _{ σ } m/r hyperplanes with a _{ σ }>1, we further subdivide the cell by taking a (1/a _{ σ })cutting of H _{ σ } using any weaker method with, say, \(a_{\sigma}^{O(1)}\) subcells (e.g., we can again use random sampling). To see why this procedure works, note that the size of the canonical triangulation is proportional to the size of \(\mathcal{A}(R)\) restricted inside Δ, which is at most O(X(r/m)^{ d }+r ^{ d−1}), since each vertex in \(\mathcal{A}(H)\) shows up in \(\mathcal {A}(R)\) with probability O(r/m)^{ d }, and there are O(r ^{ d−1}) vertices in the intersection of \(\mathcal{A}(R)\) with each (d−1)dimensional boundary facet of Δ. On the other hand, analysis by Clarkson and Shor [29] or Chazelle and Friedman [23] tells us that the parameter a _{ σ } is O(1) “on average” over all cells σ∈T.
 4.
Throughout the paper, the notation "w.h.p.(n)" (or "w.h.p." for short, if n is understood) means "with probability at least \(1-1/n^{c_{0}}\)" for an arbitrarily large constant c_{0}.
 5.
For simplicity, we state expected time bounds rather than high-probability bounds in this section, although some can be made high-probability bounds with more care. In the expected bounds, we assume that the query objects are independent of the random choices made by the preprocessing algorithm.
Acknowledgements
I thank Pankaj Agarwal for pointing out the application to triangulations with low crossing number.
Work supported by NSERC. A preliminary version of this paper has appeared in Proc. 26th ACM Sympos. Comput. Geom., pages 1–10, 2010.
References
 1. Afshani, P., Chan, T.M.: Optimal halfspace range reporting in three dimensions. In: Proc. 20th ACM-SIAM Sympos. Discrete Algorithms, pp. 180–186 (2009)
 2. Agarwal, P.K.: Intersection and Decomposition Algorithms for Planar Arrangements. Cambridge University Press, Cambridge (1991)
 3. Agarwal, P.K.: Ray shooting and other applications of spanning trees with low stabbing number. SIAM J. Comput. 21, 540–570 (1992)
 4. Agarwal, P.K.: Range searching. In: Goodman, J., O'Rourke, J. (eds.) CRC Handbook of Discrete and Computational Geometry. CRC Press, New York (2004)
 5. Agarwal, P.K., Erickson, J.: Geometric range searching and its relatives. In: Chazelle, B., Goodman, J.E., Pollack, R. (eds.) Discrete and Computational Geometry: Ten Years Later, pp. 1–56. AMS, Providence (1999)
 6. Agarwal, P.K., Matoušek, J.: On range searching with semialgebraic sets. Discrete Comput. Geom. 11, 393–418 (1994)
 7. Agarwal, P.K., Sharir, M.: Applications of a new space-partitioning technique. Discrete Comput. Geom. 9, 11–38 (1993)
 8. Agarwal, P.K., Aronov, B., Suri, S.: Stabbing triangulations by lines in 3D. In: Proc. 10th ACM Sympos. Comput. Geom., pp. 267–276 (1995)
 9. Agarwal, P.K., Efrat, A., Sharir, M.: Vertical decomposition of shallow levels in 3-dimensional arrangements and its applications. SIAM J. Comput. 29, 912–953 (1999)
 10. Agarwal, P.K., Arge, L., Erickson, J., Franciosa, P.G., Vitter, J.S.: Efficient searching with linear constraints. J. Comput. Syst. Sci. 61, 194–216 (2000)
 11. Amato, N.M., Goodrich, M.T., Ramos, E.A.: Parallel algorithms for higher-dimensional convex hulls. In: Proc. 35th IEEE Sympos. Found. Comput. Sci., pp. 683–694 (1994)
 12. Arora, S., Hazan, E., Kale, S.: Multiplicative weights method: a meta-algorithm and its applications. Theory Comput. (2012, to appear)
 13. Arya, S., Mount, D.M., Xia, J.: Tight lower bounds for halfspace range searching. In: Proc. 26th ACM Sympos. Comput. Geom., pp. 29–37 (2010)
 14. Bentley, J., Saxe, J.: Decomposable searching problems I: static-to-dynamic transformation. J. Algorithms 1, 301–358 (1980)
 15. Brönnimann, H., Goodrich, M.T.: Almost optimal set covers in finite VC-dimension. Discrete Comput. Geom. 14, 463–479 (1995)
 16. Chan, T.M.: Fixed-dimensional linear programming queries made easy. In: Proc. 12th ACM Sympos. Comput. Geom., pp. 284–290 (1996)
 17. Chan, T.M.: Output-sensitive results on convex hulls, extreme points, and related problems. Discrete Comput. Geom. 16, 369–387 (1996)
 18. Chan, T.M.: An optimal randomized algorithm for maximum Tukey depth. In: Proc. 15th ACM-SIAM Sympos. Discrete Algorithms, pp. 423–429 (2004)
 19. Chan, T.M., Snoeyink, J., Yap, C.K.: Primal dividing and dual pruning: output-sensitive construction of four-dimensional polytopes and three-dimensional Voronoi diagrams. Discrete Comput. Geom. 18, 433–454 (1997)
 20. Chazelle, B.: Lower bounds on the complexity of polytope range searching. J. Am. Math. Soc. 2, 637–666 (1989)
 21. Chazelle, B.: Cutting hyperplanes for divide-and-conquer. Discrete Comput. Geom. 9, 145–158 (1993)
 22. Chazelle, B.: Cuttings. In: Mehta, D.P., Sahni, S. (eds.) Handbook of Data Structures and Applications, pp. 25.1–25.10. CRC Press, Boca Raton (2005)
 23. Chazelle, B., Friedman, J.: A deterministic view of random sampling and its use in geometry. Combinatorica 10, 229–249 (1990)
 24. Chazelle, B., Rosenberg, B.: Simplex range reporting on a pointer machine. Comput. Geom. Theory Appl. 5, 237–247 (1996)
 25. Chazelle, B., Welzl, E.: Quasi-optimal range searching in spaces of finite VC-dimension. Discrete Comput. Geom. 4, 467–489 (1989)
 26. Chazelle, B., Sharir, M., Welzl, E.: Quasi-optimal upper bounds for simplex range searching and new zone theorems. Algorithmica 8, 407–429 (1992)
 27. Clarkson, K.L.: A randomized algorithm for closest-point queries. SIAM J. Comput. 17, 830–847 (1988)
 28. Clarkson, K.L.: Las Vegas algorithms for linear and integer programming when the dimension is small. J. ACM 42, 488–499 (1995)
 29. Clarkson, K.L., Shor, P.W.: Applications of random sampling in computational geometry, II. Discrete Comput. Geom. 4, 387–421 (1989)
 30. de Berg, M., Schwarzkopf, O.: Cuttings and applications. Int. J. Comput. Geom. Appl. 5, 343–355 (1995)
 31. Dobkin, D., Edelsbrunner, H.: Organizing point sets in two and three dimensions. Report f130, Inst. Informationsverarb., Tech. Univ. Graz, Austria (1984)
 32. Edelsbrunner, H., Huber, F.: Dissecting sets of points in two and three dimensions. Report f138, Inst. Informationsverarb., Tech. Univ. Graz, Austria (1984)
 33. Edelsbrunner, H., Welzl, E.: Halfplanar range search in linear space and O(n^{0.695}) query time. Inf. Process. Lett. 23, 289–293 (1986)
 34. Edelsbrunner, H., Guibas, L., Hershberger, J., Seidel, R., Sharir, M., Snoeyink, J., Welzl, E.: Implicitly representing arrangements of lines or segments. Discrete Comput. Geom. 4, 433–466 (1989)
 35. Haussler, D., Welzl, E.: Epsilon-nets and simplex range queries. Discrete Comput. Geom. 2, 127–151 (1987)
 36. Hershberger, J., Suri, S.: A pedestrian approach to ray shooting: shoot a ray, take a walk. J. Algorithms 18, 403–431 (1995)
 37. Hoeffding, W.: Probability inequalities for sums of bounded random variables. J. Am. Stat. Assoc. 58, 13–30 (1963)
 38. Matoušek, J.: Spanning trees with low crossing number. RAIRO Theor. Inform. Appl. 25, 103–124 (1991)
 39. Matoušek, J.: Efficient partition trees. Discrete Comput. Geom. 8, 315–334 (1992)
 40. Matoušek, J.: Reporting points in halfspaces. Comput. Geom. Theory Appl. 2, 169–186 (1992)
 41. Matoušek, J.: Linear optimization queries. J. Algorithms 14, 432–448 (1993). Also with O. Schwarzkopf in Proc. 8th ACM Sympos. Comput. Geom., pp. 16–25 (1992)
 42. Matoušek, J.: Range searching with efficient hierarchical cuttings. Discrete Comput. Geom. 10, 157–182 (1993)
 43. Matoušek, J.: Geometric range searching. ACM Comput. Surv. 26, 421–461 (1994)
 44. Matoušek, J.: Lectures on Discrete Geometry. Springer, Berlin (2002)
 45. Matoušek, J., Schwarzkopf, O.: On ray shooting in convex polytopes. Discrete Comput. Geom. 10, 215–232 (1993)
 46. Motwani, R., Raghavan, P.: Randomized Algorithms. Cambridge University Press, Cambridge (1995)
 47. Mulmuley, K.: Computational Geometry: An Introduction Through Randomized Algorithms. Prentice-Hall, Englewood Cliffs (1994)
 48. Preparata, F.P., Shamos, M.I.: Computational Geometry: An Introduction. Springer, New York (1985)
 49. Ramos, E.: On range reporting, ray shooting, and k-level construction. In: Proc. 15th ACM Sympos. Comput. Geom., pp. 390–399 (1999)
 50. Ramos, E.: Linear programming queries revisited. In: Proc. 16th ACM Sympos. Comput. Geom., pp. 176–181 (2000)
 51. Seidel, R.: Constructing higher-dimensional convex hulls at logarithmic cost per face. In: Proc. 18th ACM Sympos. Theory Comput., pp. 404–413 (1986)
 52. Welzl, E.: On spanning trees with low crossing numbers. In: Monien, B., Ottmann, T. (eds.) Data Structures and Efficient Algorithms. Lect. Notes Comput. Sci., vol. 594, pp. 233–249. Springer, Berlin (1992)
 53. Willard, D.E.: Polygon retrieval. SIAM J. Comput. 11, 149–165 (1982)
 54. Yao, A.C., Yao, F.F.: A general approach to D-dimensional geometric queries. In: Proc. 17th ACM Sympos. Theory Comput., pp. 163–168 (1985)
 55. Yao, F.F.: A 3-space partition and its applications. In: Proc. 15th ACM Sympos. Theory Comput., pp. 258–263 (1983)
 56. Yao, F.F., Dobkin, D.P., Edelsbrunner, H., Paterson, M.S.: Partitioning space for range queries. SIAM J. Comput. 18, 371–384 (1989)