Abstract
We consider the problem of digitalizing Euclidean segments. Specifically, we look for a constructive method to connect any two points in \(\mathbb {Z}^d\). The construction must be consistent (that is, satisfy the natural extension of the Euclidean axioms) while resembling the Euclidean segments as much as possible. Previous work has shown asymptotically tight results in two dimensions with \(\varTheta (\log N)\) error, where resemblance between segments is measured with the Hausdorff distance, and N is the \(L_1\) distance between the two points. This construction was considered tight because of an \(\varOmega (\log N)\) lower bound that applies to any consistent construction in \(\mathbb {Z}^2\). In this paper we observe that the lower bound does not directly extend to higher dimensions. We give an alternative argument showing that any consistent construction in d dimensions must have \(\varOmega (\log ^{1/(d-1)}\!N)\) error. We tie the error of a consistent construction in high dimensions to the error of similar weak constructions in two dimensions (constructions for which some points need not satisfy all the axioms). This not only opens the possibility of having constructions with \(o(\log N)\) error in high dimensions, but also opens up an interesting line of research on the tradeoff between the number of axiom violations and the error of the construction. A side result, which we find of independent interest, is the introduction of the bichromatic discrepancy: a natural extension of the concept of discrepancy of a set of points. In this paper, we define this concept and extend known results to the chromatic setting.
1 Introduction
Euclidean line segments are one of the most fundamental objects of geometry. Although often loosely referred to as the shortest path connecting the endpoints, segments have a clear and unique axiomatic definition out of which many interesting properties follow. For example, it is well known that the intersection of two segments is always a segment (that could possibly degenerate to a point or even become empty). The definition of other mathematical concepts heavily depends on the definition of segments (e.g., we say that a certain region P of the space is convex if for any two points \(p,q\in P\), the line segment defined by p and q is in P).
The definition of segment works very well in Euclidean and similar spaces with infinite precision. Digital representation (such as pixels on a screen) introduces imprecision. The most common approach used in practice is to somehow round the Euclidean segment into the digital space. The digital segments will look very similar to their Euclidean counterparts (that is, the error is very small). However, we cannot guarantee the useful properties and concepts that follow from the axiomatic definition of Euclidean segments (see Fig. 1).
With the consistency of digital segments in mind, we look for a deterministic method to construct digital segments in a way that (i) the analogues of the Euclidean axioms are satisfied and (ii) the digital segments resemble the Euclidean ones as much as possible.
1.1 Preliminaries
Our aim is to construct a digital path \(dig (p,q)\) for any two points \(p,q \in \mathbb {Z}^d\). Ideally, we want dig to be defined for any pair of points in \(\mathbb {Z}^d\) (the full list of requirements is described below), but sometimes we consider the case in which dig is only defined for a subset of \(\mathbb {Z}^d \times \mathbb {Z}^d\).
Definition 1.1
For any \(S\subseteq \mathbb {Z}^d \times \mathbb {Z}^d\), let \(DS (S)\) be a set of digital segments such that \(dig (p,q)\in DS (S)\) for all \((p,q)\in S\). We say that \(DS (S)\) forms a partial set of consistent digital segments on S (partial CDS for short) if for every pair \((p,q)\in S\) it satisfies the following five axioms:

(S1)
Grid path property: \(dig (p,q)\) is a path between p and q under the 2d-neighbor topology.^{Footnote 1}

(S2)
Symmetry property: if \((q,p) \in S\), \(dig (p,q)=dig (q,p)\).

(S3)
Subsegment property: for any \(r\in dig (p,q)\), \(dig (p,r) \in DS (S)\) and \(dig (p,r) \subseteq dig (p,q)\).

(S4)
Prolongation property: \(\exists \, r \in \mathbb {Z}^d\) such that \(dig (p,r)\in DS (S)\) and \(dig (p,q) \subset dig (p,r)\).

(S5)
Monotonicity property: for all \(i\le d\) such that \(p_i = q_i\), it holds that every point \(r\in dig (p,q)\) satisfies \(r_i = p_i = q_i\).
These axioms give digital segments properties analogous to those of Euclidean line segments. For example, (S1) and (S3) imply that the intersection of two digital segments is another segment (that could degenerate to a single point or an empty set). (S5) implies that the intersection of a segment with an axis-aligned halfspace is a segment [and connected by (S1)], and so on.
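To make the axioms concrete, here is a small self-contained sketch (our own illustration, not a construction from the paper) that mechanically checks (S1), (S2) and (S5) for a toy segment rule in \(\mathbb{Z}^2\); (S3) and (S4) quantify over the whole system and are only implicitly touched here. The rule `dig` below is a deliberately simple hypothetical example.

```python
def dig(p, q):
    """Toy rule (ours, not the paper's construction): from the lexicographically
    smaller endpoint walk in x, then in y. Canonicalizing the endpoint order
    makes the symmetry axiom (S2) hold by construction."""
    if q < p:
        p, q = q, p
    path, (x, y) = [p], p
    while x != q[0]:            # after the swap q[0] >= x, so x only increases
        x += 1
        path.append((x, y))
    while y != q[1]:
        y += 1 if q[1] > y else -1
        path.append((x, y))
    return path

def is_grid_path(path, p, q):
    """(S1): a path between p and q in which every step changes one coordinate by 1."""
    ends_ok = path[0] == min(p, q) and path[-1] == max(p, q)
    steps_ok = all(abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1
                   for a, b in zip(path, path[1:]))
    return ends_ok and steps_ok

def is_monotone(path, p, q):
    """(S5): any coordinate in which p and q agree stays fixed along the path."""
    fixed = [i for i in range(2) if p[i] == q[i]]
    return all(r[i] == p[i] for r in path for i in fixed)
```

For instance, `dig((0, 5), (3, 0))` walks right along \(y=5\) and then down to (3, 0); the checks above accept it, and `set(dig(p, q)) == set(dig(q, p))` witnesses (S2) for this rule.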
A partial CDS for \(S = \mathbb {Z}^d\times \mathbb {Z}^d\) is called a set of consistent digital segments (CDS for short). Although our final goal is a construction for \(S=\mathbb {Z}^d\times \mathbb {Z}^d\), in this paper we consider subsets of the form \(S = \{o\}\times \mathbb {Z}^d\) (where o is the origin or any fixed point of \(\mathbb {Z}^d\)). We say that a partial CDS on such a set is a consistent digital ray system (CDR for short), as it contains all segments (or rays) from o to the points of \(\mathbb {Z}^d\). Note that the five axioms imply that a CDR is a tree connecting the fixed point o to every other point of \(\mathbb {Z}^d\) (see Fig. 2). If we have a consistent construction of such a tree rooted at every grid point, then we have a CDS (see Fig. 2).
Another property that we want from a partial CDS is that its segments visually resemble Euclidean segments. The resemblance between the digital segment \(dig (p,q)\) and its Euclidean counterpart \(\overline{pq}\) is measured using the Hausdorff distance. The Hausdorff distance H(A, B) of two objects A and B is defined by \( H(A, B) = \max { \{ h(A,B), h(B, A) \}}\), where \(h (A,B)= \max _{a \in A} \min _{b \in B} \delta (a,b)\), and \(\delta (a,b)\) is the distance under the \(L_\infty \) norm \(\Vert \,{ \cdot }\,\Vert _{\infty }\).^{Footnote 2}
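The Hausdorff distance just defined can be computed directly for finite digital objects. The following is a minimal self-contained sketch (ours, for illustration only).

```python
def linf(a, b):
    """L-infinity distance between two points of the grid."""
    return max(abs(ai - bi) for ai, bi in zip(a, b))

def directed_hausdorff(A, B):
    """h(A, B) = max over a in A of min over b in B of the L-infinity distance."""
    return max(min(linf(a, b) for b in B) for a in A)

def hausdorff(A, B):
    """H(A, B) = max(h(A, B), h(B, A))."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))
```

For example, `hausdorff([(0, 0), (1, 0), (1, 1)], [(0, 0), (0, 1), (1, 1)])` evaluates to 1: each path stays within distance one of the other.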
The resemblance of a partial CDS on S is defined as
$$ \max _{(p,q)\in S} H \bigl ( dig (p,q),\overline{pq}\, \bigr ) $$
(that is, the largest error created between a digital segment and its Euclidean counterpart). This value is simply referred to as the error of the partial CDS construction. We are interested in how the error grows as we enlarge our focus of interest. Thus, we limit the domain to the case in which both points are in the \(L_1\) ball of radius N centered at the origin [i.e., \(\mathcal {G}_N=\mathbb {Z}^{d} \cap B_1(o, N)\)]. Rather than looking for the exact function, we are interested in the asymptotic behavior of the error as a function of N. For simplicity, we will actually restrict ourselves to the positive orthant \(\mathcal {G}^+_N=\mathcal {G}_N\cap \bigcap _i H_i\), where \(H_i=\{p\in \mathbb {Z}^d:p_i \ge 0\}\) and \(p_i\) is the ith coordinate of p (the results extend to other orthants by symmetry).
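The error measure can be made concrete with a small sketch (ours, not from the paper): we take a deliberately naive "x-first" digital ray and compare it against a dense sampling of its Euclidean counterpart under the \(L_\infty\) norm. The helper names are hypothetical.

```python
def linf(a, b):
    return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

def hausdorff(A, B):
    h = lambda X, Y: max(min(linf(x, y) for y in Y) for x in X)
    return max(h(A, B), h(B, A))

def x_first_path(q):
    """Naive digital ray from the origin to q = (qx, qy): all x steps, then all y steps."""
    qx, qy = q
    return [(i, 0) for i in range(qx + 1)] + [(qx, j) for j in range(1, qy + 1)]

def segment_samples(q, k=1000):
    """Dense sampling of the Euclidean segment from the origin to q."""
    return [(q[0] * t / k, q[1] * t / k) for t in range(k + 1)]

# For q = (N, N) the corner (N, 0) sits at L-infinity distance N/2 from the
# diagonal, so this naive rule pays Theta(N) error: far from the O(log N) goal.
q = (20, 20)
err = hausdorff(x_first_path(q), segment_samples(q))
```

Here `err` is 10 for \(q=(20,20)\), illustrating why simple rules do not suffice and the error must be traded off carefully.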
1.2 Previous Work
The digital representation of line segments has been an active area of research for over half a century [13], and many different approaches have been considered. The most common techniques look for methods that implicitly encode the properties we desire. For example, a popular approach is to consider a dynamic method to digitize line segments. In this setting, the way we transform a Euclidean segment into a digital one depends on which other segments are present (and their specific coordinates). It is known that a grid of exponential size is needed if we want to preserve the combinatorial types [12]. Another workaround, known as snap rounding, represents line segments by polygonal chains: each segment is carefully rounded to avoid inconsistencies. Note that both of these ideas implicitly keep the error small while making sure that the intersection of two digital segments is a connected component. Although they work well in practice, they have the drawback that they cannot be used to define objects that are based on digital segments (such as digital star-shaped or convex regions).
The first paper to explicitly look for an axiomatic approach was by Luby [14] in 1987: he introduced the concept of CDS (under the name of smooth geometries) and gave a method to construct CDSs in \(\mathbb {Z}^2\) based on a characterization of CDRs in \(\mathbb {Z}^2\): any CDR can be uniquely identified by four total orders of the integers (and vice versa). By choosing a proper total order and using it for all points of \(\mathbb {Z}^2\) we obtain a CDS with \(O(\log N)\) error. Håstad^{Footnote 3} gave a matching lower bound for any such construction.
These results were rediscovered by Chun et al. [9] and Christ et al. [7]. They renewed interest in the topic and sparked other related research: Chowdhury and Gibson [5] gave necessary and sufficient conditions for a collection of CDRs to form a CDS. In a companion paper [6], the same authors provided an alternative characterization together with a constructive algorithm: given a collection of segments in an \(N \times N\) grid that satisfies the five axioms, it computes a CDS that contains those segments. The algorithm runs in time polynomial in N.
Unfortunately, most of these results only work on the digital plane. Out of the previously mentioned results, only the CDR construction of Chun et al. [9] extends to three and higher dimensions. The construction has \(O(\log N)\) error regardless of the dimension. Chun et al. [9] also considered the case in which the monotonicity property (S5) is not preserved. They showed that if we remove (S5), we can obtain a CDR with O(1) error in any dimension. Although the error is small, the resulting segments are far from what we would consider similar to the Euclidean segments (because they loop around many times). Recently, Chiu and Korman [4] showed that the problem in higher dimensions behaves very differently from the two dimensional case. Specifically, they studied how to extend the CDS construction of Christ et al. [7] and showed that it is very limiting in three (and higher) dimensions. Their method can construct arbitrarily many CDRs [with \(\varOmega (\log N)\) error] and some of them could admit a CDS. However, whenever the construction yields a CDS, it will have \(\varOmega (N)\) error.
Our interest in higher dimensions is motivated by an application in image segmentation. Image segmentation is the act of separating an object from its background in an image (that is, determining which pixels are part of the background and which ones are not). Chun et al. [9] showed how to combine their CDR construction with the framework of Asano et al. [1] to segment two dimensional images. This idea has been extended to consider other shapes (see [8] for a detailed list), but always in two dimensions. The hope is that a high dimensional CDR with low error will produce more accurate segmentation algorithms. Although traditional images taken with a camera are two dimensional, images from medical equipment, such as those taken with an MRI machine, can have three or even more dimensions (say, when we want to track changes of a particular object over time).
The concept of consistency has also been investigated in graphs more general than \(\mathbb {Z}^d\). A system of paths in a real-weighted graph is a collection of paths defined between any two vertices of the graph. Similar to our setting, a system of paths is said to be consistent when the intersection of any two paths in the system is also a connected path in the system. The characterization of such systems has been studied; see [3, 10] for more details.
1.3 Differences Between Two and Higher Dimensions
When approximating some geometric object, it often happens that higher dimensions create a larger error than in a lower dimension setting. Since the high dimensional setting contains a two dimensional subspace, it is common for lower bounds to extend to higher dimensions. However, this is not true for the case of CDRs: although a three dimensional CDR contains two dimensional subspaces, those subspaces need not exactly be CDRs [and thus the \(\varOmega (\log N)\) lower bound does not directly hold]. In this paper, we further explain the reason and investigate the lower bound for the higher dimensional case.
The main reason why a subspace is not a CDR is the prolongation property (S4): we require that every segment be extendable, but impose no constraint on the dimension in which it extends. In particular, a subspace of a high dimensional CDR need not be a CDR (see an example in Fig. 3). Subspaces of CDRs are what we call weak CDRs: constructions that almost always behave like a CDR, but in which some vertices may not satisfy the prolongation property (S4). Each vertex that does not extend is called an inner leaf. In this paper we study weak CDRs in two dimensions and the implications that they have for (proper) CDRs in higher dimensions.
The lower bound argument for \(\mathbb {Z}^2\) is heavily based on discrepancy theory [15]. Any split vertex (branch point) of the CDR tree is mapped to an x-coordinate in \([0,1)\subset \mathbb {R}\) (see Fig. 4). Since a CDR contains many split vertices, it is mapped to a sequence of real numbers in [0, 1). Håstad showed that the discrepancy of the sequence gives a lower bound for the Hausdorff error of the CDR in \(\mathbb {Z}^2\). Afterwards, Christ et al. showed that there is a bijection between the two objects: sequences in [0, 1) with discrepancy d can be transformed into CDRs whose error is \(\varTheta (d)\).
1.4 Need of Bichromatic Discrepancy
A key property for Håstad’s lower bound argument is that the intersection of a CDR in two dimensions with the line \(x+y=c\) (for any \(c\in \mathbb {Z}\)) contains a single split vertex.^{Footnote 4} Then, he maps the relative position of the split vertex into the [0, 1) interval and links the error of the CDR to the discrepancy of the transformed pointset. Unfortunately, the key property does not hold when we look at two dimensional subspaces of a CDR in three or more dimensions. Since subspaces can be weak, we can have inner leaves, which in turn allow more than one split vertex to exist in the same diagonal. Thus, we need to map both split vertices and inner leaves into the two dimensional unit square. Similar to Håstad’s argument, we can link the error of the CDR with the discrepancy of the generated pointset. However, because we have two types of points we must instead look at a two-colored pointset and study the difference in size of the two groups.
This naturally brings the idea of bichromatic discrepancy: let R and B be a set of red and a set of blue points in the unit square, respectively. Let \(m=|B|-|R|\) and assume \(m>0\). For any set P of points in the unit square and \(x,y\in [0,1]\) let P[x, y] be the number of points in \(P\cap [0,x]\times [0,y]\). For any two real numbers \(0 \le x,y\le 1\) we define the discrepancy of R and B at (x, y) as \(D_{R,B}(x,y)=mxy - (B[x,y]-R[x,y])\). The discrepancy of R and B is simply defined as \(D^*_{R,B}=\max _{(x,y)\in [0,1]^2} | D_{R,B}(x,y) |\) (i.e., the highest discrepancy we can achieve among all possible rectangles). The discrepancy \(D_{R,B}^*\) of a two-colored pointset is high if and only if there is an axis-aligned rectangle with the origin as a corner in which the difference of the cardinalities is far from the expected difference.
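A brute-force sketch (our own, for intuition only) of this definition: the supremum of \(|mxy - (B[x,y]-R[x,y])|\) over \([0,1]^2\) is approached at corners built from the points' own coordinates, or just below them, so it suffices to evaluate on that finite candidate set.

```python
EPS = 1e-9  # probe just below each coordinate, where counts drop

def count(P, x, y):
    """P[x, y]: number of points of P inside the rectangle [0, x] x [0, y]."""
    return sum(1 for (px, py) in P if px <= x and py <= y)

def discrepancy(R, B):
    """D*_{R,B} = max over corners (x, y) of |m*x*y - (B[x,y] - R[x,y])|."""
    m = len(B) - len(R)
    pts = R + B
    xs = {1.0} | {p[0] for p in pts} | {p[0] - EPS for p in pts}
    ys = {1.0} | {p[1] for p in pts} | {p[1] - EPS for p in pts}
    return max(abs(m * x * y - (count(B, x, y) - count(R, x, y)))
               for x in xs for y in ys)
```

With \(R=\emptyset\) this reduces to the classic monochromatic discrepancy; e.g., a single blue point at (0.5, 0.5) gives discrepancy 0.75, attained by the rectangle \([0,0.5]\times[0,0.5]\).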
This is a natural extension of the concept of discrepancy. Indeed, the classic definition of (monochromatic) discrepancy is the particular case in which \(R=\emptyset \) (see [15] for a detailed survey of this concept and its many applications). To the best of our knowledge, the extension of discrepancy to chromatic settings is largely unexplored. Beck [2] considered coloring an uncolored pointset so as to minimize the chromatic discrepancy [obtaining \(O(\log ^4\! n)\) upper and \(\varOmega (\log ^4\!n)\) lower bounds for the definition given above, or \(O(n^{1/2+\varepsilon })\) upper and \(\varOmega (n^{1/4-\varepsilon })\) lower bounds when rectangles can have arbitrary orientation]. Dobkin et al. [11] introduced algorithms to find the maximum discrepancy of a given pointset (under different definitions of bichromatic discrepancy).
1.5 Results and Paper Organization
As mentioned above, the lower bound argument for two dimensions maps a CDR to a pointset and studies the dependency between the two objects. Our lower bound argument needs an additional intermediate space: from (a) any CDR in \(\mathbb {Z}^d\) we consider (b) the weak CDR it generates in the \(x_1x_2\)-plane, and (c) the set of points in the unit square that this weak CDR is mapped to. Throughout the paper we analyze properties of each of the spaces, and see what implications they have for the other two. Specifically, we show the following:
(i) First, in Sect. 2, we detail how to map weak CDRs into a two-colored pointset in \([0,1)\times [0,1)\).
(ii) As mentioned above, we must also extend the discrepancy results [15] to a (bi)chromatic setting. For the purposes of this paper, the main result we need is the following one:
Theorem 1.2
(bichromatic discrepancy) For any sets R and B of points such that \(|B|>|R|,\) there exists a constant \(c > 0\) such that
$$ D^*_{R,B} \,\ge \, c\,\frac{m \log m}{|B|}, \qquad \text {where } m=|B|-|R|. $$
The chromatic extension can be proved by making minor changes to Schmidt’s original proof [16] for monochromatic discrepancy. As such, we defer the additional details to the appendix.
(iii) With this new discrepancy result we obtain a tradeoff between the error of any weak CDR and the number of inner leaves [i.e., vertices that do not satisfy (S4)]. When the weak CDR has zero inner leaves (and thus is a proper CDR) our bound matches the lower bound of Håstad. As the number of inner leaves increases, the lower bound decreases. In Sect. 3 we prove the following relationship.
Theorem 1.3
For any \(N \in \mathbb {N},\) any weak CDR defined on \(\mathcal {G}^+_N\subset \mathbb {Z}^2\) with \(\kappa _2\) inner leaves between lines \(x_1+x_2=\lceil N/2 \rceil \) and \(x_1+x_2 = N\) has \(\varOmega ({N \log N}/({N+\kappa _2}))\) error.
This shows an important relationship between spaces (b) and (c) mentioned above. In short, a CDR will have \(\varOmega (\log N)\) error unless there are \(\omega (N)\) many inner leaves.
(iv) Intuitively speaking, Theorem 1.3 says that a CDR with few inner leaves [say, \(o(N\log N)\)] in the \(x_1x_2\)-plane induces \(\omega (1)\) error in that plane. On the other hand, we also prove that many inner leaves [say, \(\omega (N)\)] in that plane force too many points to extend into one of the remaining dimensions, creating \(\omega (1)\) error as well.
Lemma 1.4
Any CDR defined on \(\mathcal {G}^+_N\subset \mathbb {Z}^d\) with \(\kappa _2\) inner leaves in the \(x_1x_2\)-plane between the lines \(x_1+x_2=\lceil N/2 \rceil \) and \(x_1+x_2 = N\) has \(\varOmega ((\kappa _2/N)^{{1}/({d-2})})\) error.
Balancing these two error sources leads to our main result, i.e., a lower bound of \(\varOmega (\log ^{1/(d-1)}\!N)\) for any CDR construction in d dimensions (see Sect. 4):
Theorem 1.5
Any CDR in \(\mathbb {Z}^d\) has \(\varOmega ( \log ^{1/(d-1)}\! N)\) error.
Recall that the CDR construction of Chun et al. [9] works in any dimension and has \(O(\log N)\) error, so the gap between the upper and lower bounds grows with d. Although we believe our analysis to be loose (especially in Theorem 1.5), we have no reason to believe that \(O(\log N)\) is the correct bound either.
In Sects. 5 and 6, we explore the possibility of having a CDR in high dimensions with \(o(\log N)\) error (rather than directly looking at CDRs in high dimensions, we see what properties such a CDR would imply in the other two spaces). Although we cannot explicitly find a construction with \(o(\log N)\) error, we provide insight into how further research can settle this question.
(v) Our lower bound argument shows that for a CDR in \(\mathbb {Z}^3\) with \(o(\log N)\) error to exist, we must be able to construct a weak CDR in \(\mathbb {Z}^2\) with \(O(N\log N)\) inner leaves and \(o(\log N)\) error. We show how to create a weak CDR with at most 5/2 error and \(\varTheta (N^2)\) inner leaves. Although the construction follows the spirit of the method of Chun et al. [9] (for the case in which the monotonicity property is not preserved), we believe the resulting weak CDR to be much more aesthetically pleasing, and it resembles Euclidean segments more closely.
Theorem 1.6
For any \(N>0\) we can create a weak CDR in \(\mathbb {Z}^2\) with at most 5/2 error and at most \(N^2/12\) inner leaves.
Although the number of inner leaves is large (roughly speaking, one in six vertices is an inner leaf), this construction has other interesting properties that make it suitable for practical purposes. Moreover, we extend this construction to obtain a tradeoff between O(c) error and \(O(N^2/c)\) inner leaves for any \(c \ge 1\).
Theorem 1.7
For any \(N>0\) and \(1 \le c \le N\) we can create a weak CDR in \(\mathbb {Z}^2\) with O(c) error and \(O(N^2/c)\) inner leaves.
Discussion of this construction and other variations is given in Sect. 5.
(vi) In order to reduce the number of inner leaves, we instead look at what the mapping of any weak CDR with constant error to a pointset must look like. Discrepancy results give us a condition on any such weak CDR with \(o(N^2)\) many inner leaves. Specifically, in Sect. 6 we show the following result.
Theorem 1.8
Let B and R be two sets of points whose discrepancy satisfies \(D_{R,B}^*<1\). Then \(|B|= \varOmega (m^2),\) where \(m =|B|-|R|\).
Note that the constant 1 is crucial for our proof: although we would like the statement to hold when \(D_{R,B}^*=O(1)\), it is unlikely that our approach can be extended. We believe that this result could be a first step towards proving that the construction in (v) is asymptotically tight. Further discussion of this result is given in Sect. 6 (after the statement’s proof).
Further discussion on the implication of these results is given in Sect. 7.
2 Mapping a Weak CDR into a Pointset
We start by showing how to transform a weak CDR in two dimensions into a two-colored pointset in \([0,1)^2\). Given any weak CDR, its restriction to \(\mathcal {G}^+_N\) forms a spanning tree T of \(\mathcal {G}^+_N\) because of axioms (S1) and (S3). Although the tree is undirected, we see it as a directed graph (rooted tree) whose edges are oriented away from the origin (root). Then, (S5) implies that the parent of each vertex (x, y) (except the root) is either \((x-1,y)\) or \((x,y-1)\). For any edge \(e=uv\) of T, where u is the parent of v, we define T(e) as the subtree of T rooted at the child node v of e. We slightly abuse notation and use T(v) to denote the subtree emanating from v towards the leaves [that is, \(T(v)=T(e)\)].
For any \(n\le N\) let \(L_n\) be the set of points of \(\mathcal {G}^+_N\) whose coordinates sum to n [i.e., \(L_n=\{(x,y)\in \mathcal {G}^+_N:x+y=n\}\)]. Following the usual terminology, we call a vertex of degree one a leaf. We further consider two subcategories: we say that a leaf v of T is an inner leaf if it is not in \(L_N\), whereas the vertices in \(L_N\) are called boundary leaves. Note that, by the properties of a CDR, all vertices of \(L_N\) are indeed leaves (since any child would be in \(L_{N+1}\), which is outside \(\mathcal {G}^+_N\)). Further note that a proper CDR has no inner leaves. A vertex v of T is a split vertex if it has degree three or is the origin. Let \(\mathcal {S}\) be the set of split vertices and \(\mathcal {D}\) the set of inner leaves.
2.1 Auxiliary Function
Before giving the transformation from a tree to a pointset we first define an auxiliary function \(M:\mathcal {G}^+_N\rightarrow [0,1]\). For any \(p \in L_N\) we set \(M(p)={p_x}/({p_x+p_y})\). For any subtree T(v) of T we define two more functions inductively for \(v \in L_{n}\), from \(n = N\) down to 0, as follows:
$$ \max T(v)=\max \,\{M(p): p \text { is a leaf of } T(v)\}, \qquad \min T(v)=\min \,\{M(p): p \text { is a leaf of } T(v)\}, $$
where M(p) for \(p \in \mathcal {D}\) is defined in the next paragraph.
For any inner leaf \(\ell \in \mathcal {D}\), we know that the edges \(e_1=(\ell _x-1,\ell _y+1)(\ell _x,\ell _y+1)\) and \(e_2=(\ell _x+1,\ell _y-1)(\ell _x+1,\ell _y)\) must be present in T. Thus, we define \(M(\ell )\) as \(M(\ell )=({\max T(e_1)+\min T(e_2)})/{2}\). Intuitively speaking, we look at the leaves above and to the right of \(\ell \), and assign a value that is in between the two of them (see Fig. 5, left). The following statement shows that these values are sorted along \(L_n\).
Lemma 2.1
Let \(T(u), T(v) \subset T\) be two subtrees of T rooted at the vertices \(u, v \in L_n\) (respectively) for some \(n\le N\) such that \(u_x < v_x\). Then, it holds that \(\max T(u)<\min T(v)\).
Proof
We prove this statement by induction on n from N to 1. If both \(u,v\in L_N\) then both T(u) and T(v) consist of a single vertex and the proof trivially follows. Now, assume that the statement is true for any two vertices \(u', v' \in L_i\) for \(i > n\). We need to show that the statement holds for any two vertices \(u, v \in L_n\) such that \(u_x < v_x\).
First observe that if we have two descendants \(u'\) and \(v'\) of u and v respectively such that \(u',v'\in L_{n'}\) for some \(n'>n\), then it holds that \(u'_x < v'_x\). Indeed, this follows from the fact that when we embed T in the natural way with edges drawn as straight segments, the result is a tree with no crossings. Thus, if \(v'_x < u'_x\) held for some descendants, then the two paths in T from u to \(u'\) and from v to \(v'\) would either cross or form a cycle. Either situation would contradict the fact that T is a weak CDR.
Back to our original proof, consider the case in which neither u nor v is an inner leaf. By the above argument, the x-coordinate of any child \(u'\in L_{n+1}\) of u must be smaller than that of any child \(v'\in L_{n+1}\) of v. By induction, this implies that \(\max T(u') < \min T(v')\) and thus \(\max T(u)< \min T(v)\).
The cases in which u or v is an inner leaf are similar: if u is an inner leaf, we have \(\max T(u)= M(u) =({\max T(u_1)+\min T(u_2)})/{2}\), where \(u_1=(u_x,u_y+1)\in L_{n+1}\) and \(u_2=(u_x+1,u_y)\in L_{n+1}\). By induction on \(u_1\) and \(u_2\) we have \(\max T(u_1)< \min T(u_2)\) and thus \(\max T(u)< \min T(u_2)\), so it suffices to compare \(\min T(u_2)\) with the children of v. If v is also an inner leaf, a similar argument gives \(\max T(v_1)< \min T(v)\), where \(v_1=(v_x,v_y+1)\).
In general, given u, let \(u'\in L_{n+1}\) be the child of u with the largest xcoordinate (or \(u'=u_2\) if u is an inner leaf). Similarly, we define \(v'\) as the child of v with the smallest xcoordinate (or \(v'=v_1\) if v is an inner leaf). Again, by planarity of the natural embedding, we have that \(u'_x \le v'_x\) if at least one of u, v is an inner leaf. In either case, we can use induction and get that \(\max T(u')\le \min T(v')\) which implies \(\max T(u)<\max T(u')\le \min T(v') \le \min T(v)\) (if u is an inner leaf) or \(\max T(u)\le \max T(u')\le \min T(v') < \min T(v)\) (if v is an inner leaf) completing the proof. \(\square \)
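Lemma 2.1 can be exercised on a toy proper CDR (our own example, not one from the paper): the tree in which every vertex (x, 0) branches to \((x+1,0)\) and (x, 1), and every other vertex continues straight up. Since this tree has no inner leaves, \(\max T\) and \(\min T\) reduce to extrema of M over boundary leaves.

```python
from functools import lru_cache

N = 6  # grid size of our toy example

def children(v):
    """Toy proper CDR on G^+_N: (x, 0) branches right and up; others go up."""
    x, y = v
    if x + y >= N:
        return []
    return [(x + 1, 0), (x, 1)] if y == 0 else [(x, y + 1)]

@lru_cache(maxsize=None)
def leaf_values(v):
    """M-values of the leaves of T(v); on L_N, M(p) = p_x / (p_x + p_y)."""
    ch = children(v)
    if not ch:
        return (v[0] / (v[0] + v[1]),)
    return tuple(x for c in ch for x in leaf_values(c))

def maxT(v): return max(leaf_values(v))
def minT(v): return min(leaf_values(v))

# Lemma 2.1: subtrees rooted on a diagonal L_n appear in sorted order.
for n in range(1, N):
    diag = [(x, n - x) for x in range(n + 1)]
    assert all(maxT(u) < minT(v) for u, v in zip(diag, diag[1:]))
```

The assertions pass for every diagonal: in this tree the subtree of \((x, n-x)\) with \(x < n\) reaches only the boundary leaf \((x, N-x)\), while the subtree of (n, 0) reaches all leaves to its right.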
For any subtree \(T'\) of T, its depth is the longest possible length of a path from its root to any of its leaves. Any split vertex \(s\in \mathcal {S}\) has two branching edges \(e_1\) and \(e_2\), each defining a subtree. The subtree of higher depth is the preferred subtree of s [in case of tie, we choose the tree emanating from \((s_x+1,s_y)\)]. For any point \(p\in \mathcal {G}^+_N\) we define a walk from p to some leaf of T. If \(p\in L_n\) has degree two, we follow the single edge to \(L_{n+1}\). If \(p \in \mathcal {S}\), we follow the edge to the preferred subtree. This process is continued until we reach a leaf \(\gamma (p)\).
With this virtual walk we can extend the function M to all points \(p\in \mathcal {G}^+_N\) (not only leaves) of the domain as follows. If p is neither a split vertex nor a leaf, we define M(p) as \(M(p)=M(\gamma (p))\). For a split vertex s, let \(s'\) be the child of s that is not on the preferred subtree of s. Then, we define M(s) as \(M(s)=M(\gamma (s'))\).
Intuitively speaking, from any vertex we always follow its only edge away from the root (if it has degree two) or the preferred edge (if it has degree three) until we reach a leaf. The only exception is if we start on a split vertex, in which case we do not follow the preferred edge in the first step. This exception is needed to make sure that the endpoints of the walks starting from different split vertices are distinct.
Lemma 2.2
For any split vertex \(s\in \mathcal {S},\) there exists a unique leaf \(\ell \in \mathcal {D}\cup L_N\) such that \(M(s)=M(\ell )\). Conversely, for any leaf \(\ell \in (\mathcal {D}\cup L_N) \setminus \{(N,0)\},\) there exists a unique split vertex \(s \in \mathcal {S}\) such that \(M(s)= M(\ell )\).
Proof
By definition of the auxiliary function, no two leaves have the same mapping. Thus, it remains to show that the walks of two different split vertices cannot end at the same leaf. Imagine doing the walk backwards: start at any leaf, walk towards the origin, and stop as soon as you reach a split vertex by traversing its non-preferred edge. Since each split vertex has exactly two children, it follows that the backward walk of exactly one leaf stops at each split vertex. The exceptional case is the leaf (N, 0), from which walking backwards to the origin is a horizontal path that does not contain any non-preferred edge. That is, in the inverse walk we follow preferred edges until we traverse a non-preferred edge. This is equivalent to starting at a split vertex, following the non-preferred edge once, and continuing with preferred edges, which is the exact definition of our auxiliary function. \(\square \)
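The bijection of Lemma 2.2 can be observed on the same toy proper CDR used above (our own example; all names are ours). In that tree both subtrees of a split (x, 0) have equal depth, so the tie-breaking rule makes \((x+1,0)\) preferred, and the walk from a split starts with the non-preferred child (x, 1).

```python
N = 6  # toy proper CDR: (x, 0) branches right and up, every other vertex goes up

def children(v):
    x, y = v
    if x + y >= N:
        return []
    return [(x + 1, 0), (x, 1)] if y == 0 else [(x, y + 1)]

def M(v):
    """Auxiliary function on the toy tree. children() lists the preferred
    child (x+1, 0) first, so ch[1] is the non-preferred child of a split."""
    x, y = v
    ch = children(v)
    if not ch:                 # boundary leaf on L_N
        return x / (x + y)
    if len(ch) == 2:           # split vertex: start the walk on the non-preferred child
        return M(ch[1])
    return M(ch[0])            # degree-2 vertex: follow the only outgoing edge

splits = [(x, 0) for x in range(N)]          # all split vertices, incl. the origin
leaves = [(x, N - x) for x in range(N + 1)]  # boundary leaves; no inner leaves here

# Lemma 2.2: M maps the splits bijectively onto the leaves other than (N, 0).
assert len({M(s) for s in splits}) == len(splits)
assert {M(s) for s in splits} == {M(l) for l in leaves} - {M((N, 0))}
```

Here the split (x, 0) is matched with the leaf \((x, N-x)\), and the leaf (N, 0), with \(M=1\), is the unmatched exception, exactly as the lemma predicts.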
2.2 Transforming the Tree into a Pointset
With the auxiliary function M we can define the mapping from a weak CDR to a bicolored pointset in the unit square. For any vertex \(v=(v_x,v_y) \in \mathcal {G}^+_N\) we define its transformation as \(\pi (v)=(M(v), ({v_x+v_y})/{N})\). Given any weak CDR, we look at the tree T it defines in \(\mathcal {G}^+_N\). Each vertex \(v \in \mathcal {D}\) creates a red point \(\pi (v)\) and each split vertex \(w\in \mathcal {S}\) creates a blue point \(\pi (w)\) (note that we do not transform the boundary leaves in \(L_N\) into points). We define the mapping of T as the union of the sets \(R=\{\pi (v):v\in \mathcal {D}\}\) and \(B=\{\pi (v):v\in \mathcal {S}\}\) (see Fig. 5, right). Note that the two sets depend on the tree T [and thus \(R=R(T)\) and \(B=B(T)\)]. From now on we assume that T is fixed, and simplify the notation for ease of reading. For any set P of points in the unit square and \(x,y \in [0,1]\), let P[x, y] be the number of points in \(P \cap [0,x] \times [0,y]\).
Lemma 2.3
For any weak CDR T in \(\mathcal {G}^+_N\subset \mathbb {Z}^2\) and \(n <N,\) the red and blue points on the line \(y=n/N\) alternate in color, starting and ending with a blue point. In particular, we have \(B[1,n/N]-R[1,n/N]=n+1\).
Proof
For the first statement we observe that only points that lie in \(L_n\) have y-coordinate equal to n/N. Moreover, since \(L_{n+1}\) has one more vertex than \(L_n\), each diagonal must have exactly one more split vertex than inner leaves. Indeed, Chun et al. showed that in proper CDRs each diagonal has exactly one split vertex (and, of course, zero inner leaves).
Now we need to show that split vertices and inner leaves alternate on the diagonal line. Consider two consecutive split vertices \(u, v \in L_n\) such that \(u_x < v_x\). By the definition of split vertex, the edges \(e_u =(u_x, u_y)(u_x+1,u_y)\) and \(e_v = (v_x, v_y)(v_x, v_y+1)\) are both in T. Observe that there are \(v_x - u_x - 1\) vertices in \(L_n\) and \(v_x - u_x - 2\) vertices in \(L_{n+1}\) between \(e_u\) and \(e_v\). Since two different vertices of \(L_n\) cannot connect to the same vertex of \(L_{n+1}\), one of them cannot reach \(L_{n+1}\). That vertex is an inner leaf lying between u and v, as claimed.
That is, the blue pointset has one more point than the red pointset on each horizontal line \(y = i/N\). Summing up the differences from \(i=0\) to n, we get that in total there are \(n+1\) additional blue points \(p=(x,y)\) with \(y\le n/N\). \(\square \)
With the above observations we can now state the main relationship between a weak CDR and its mapped pointset. For any vertex \(v\in L_n\), its path to the origin splits the tree into two portions. Consider the portion of the tree up to \(L_n\) that is above the path from v to the origin. In \(L_0\), this subtree contains a single vertex (the root), whereas at the diagonal \(L_n\) it contains \(v_x+1\) vertices. Since the number of leaves grows with split vertices and shrinks with inner leaves, this means that in the portion of the tree that we are looking at, the difference between split vertices and inner leaves must be \(v_x\), see Fig. 5. Note that if the two children of a split vertex [e.g., (5, 0) in Fig. 5] are not in the same portion, the number of leaves does not grow with that split vertex. However, these split vertices may still be contained in the rectangle that we consider in the mapped pointset. This is the reason why we do not have an equality in Theorem 2.4.
Theorem 2.4
For any vertex \(v \in \mathcal {G}^+_N\) it holds that
$$\begin{aligned} B[M(v), ({v_x+v_y})/{N}] - R[M(v), ({v_x+v_y})/{N}] - 2 \,\le \, v_x \,\le \, B[M(v), ({v_x+v_y})/{N}] - R[M(v), ({v_x+v_y})/{N}]. \end{aligned}$$
Proof
We split the proof into two auxiliary lemmas.
Lemma 2.5
Let \(v \in L_n\) be a split vertex. If \(v_x < n\) and \(M(v) < M(\gamma (v)),\) the rectangle \([M(v),M(\gamma (v))] \times [0,({n-1})/{N}]\) contains exactly one point, which is blue and has \(M(\gamma (v))\) as x-coordinate. If \(v_x < n\) and \(M(\gamma (v)) < M(v),\) the rectangle \([M(\gamma (v)),M(v)] \times [0,({n-1})/{N}]\) contains exactly one point, which is blue and has \(M(\gamma (v))\) as x-coordinate. When \(v=(n,0) \in L_n\) the rectangle \([M(v),M(\gamma (v))] \times [0,({n-1})/{N}]\) is empty.
Proof
We first consider the case \(M(v) < M(\gamma (v))\). When we keep following the preferred subtree from v, we end up in a leaf, which we call \(\ell \). By definition of M we have \(M(\ell ) = M(\gamma (v))\). Since \(v_x < n\) we have \(M(\gamma (v)) \ne 1\). By Lemma 2.2 there is a unique split vertex \(s \in \mathcal {S}\) such that \(M(s)= M(\ell )\). This split vertex is below layer \(L_n\) (indeed, we reach \(L_n\) from \(\ell \) by following only preferred edges, and the inverse walk has to stop when we traverse a non-preferred edge of s) and therefore s is transformed to a blue point in the rectangle. Now let \(s'\) be a split vertex which is mapped to a blue point in the rectangle. We will show that \(s' = s\). Let \(\ell '\) be the unique leaf such that \(M(\ell ')= M(s')\). Consider first the case in which \(\ell '\) is below layer \(L_n\) (that is, \(\ell '_x+\ell '_y < n\)). Then let \(v'\) be the vertex in both \(dig (o,v)\) and \(L_{\ell '_x+\ell '_y}\). If \(\ell '_x < v'_x\) (resp. \(v'_x < \ell '_x\)) then Lemma 2.1 implies that \(M(\ell ') < \min T_{v'}\le M(v)\) [resp. \(M(\gamma (v)) \le \max T_{v'} < M(\ell ')\)]. This would be a contradiction to \(s'\) being mapped to a blue point in the rectangle.
It remains to consider the case in which \(\ell '\) is above layer \(L_n\). Define \(\ell ''\) to be the vertex in both \(dig (o, \ell ')\) and \(L_n\). Lemma 2.1 implies that \(\ell '' = v\) [otherwise we have either \(M(\ell ') < M(v)\) or \(M(\gamma (v))< M(\ell ')\), which would again be a contradiction]. Recall that there is only one split vertex whose walk to its corresponding leaf through preferred subtrees passes through v. Hence \(s' = s\) and there is exactly one blue point in the rectangle.
We now show that there cannot be any red point either. Recall that for every red point there is a blue point with the same x-coordinate and smaller y-coordinate, because for each inner leaf \(\ell \) there is a unique split vertex s, defined by the walk from s to \(\ell \), such that \(M(\ell ) = M(s)\). From the previous argument, we know that the vertex s with \(M(s) = M(\gamma (v))\) is mapped to the unique blue point in the rectangle, and its corresponding leaf \(\ell \) defined by the walk is above \(L_n\). Hence, even if \(\ell \) is an inner leaf, the mapped red point is not in the rectangle. Moreover, there cannot be any other red point in the rectangle (since that would imply that the corresponding blue point is also in the rectangle, and we have already ruled out this case).
In the same way we can also prove that if \(M(\gamma (v)) < M(v)\) the rectangle \([M(\gamma (v)),M(v)] \times [0,({n-1})/{N}]\) contains exactly one point, which is blue and has \(M(\gamma (v))\) as x-coordinate. If \(v_x = n\) then \(\ell \) as defined above is the leaf (N, 0) and \(M(\ell ) = 1\). Lemma 2.2 implies that there is no split vertex s with \(M(s) = 1\). \(\square \)
Lemma 2.6
For any vertex \(v \in \mathcal {G}^+_N\) it holds that
Proof
We first prove by induction over n that for all \(n \in \{0,1,\ldots ,N\}\) the following statement holds:
$$\begin{aligned} \{M(\gamma (p)) :p \in L_n\} = \biggl \{x \in [0,1]: \biggl |B \cap \{x\} \times \biggl [0,\frac{n-1}{N}\biggr ]\biggr | - \biggl |R \cap \{x\} \times \biggl [0,\frac{n-1}{N}\biggr ]\biggr | = 1 \biggr \} \cup \{1\}. \end{aligned}$$(2)
The quantity \(|B \cap \{x\} \times [0,({n-1})/{N}]| - |R \cap \{x\} \times [0,({n-1})/{N}]|\) counts the difference between the number of blue points and red points on the vertical segment with x-coordinate x and length \(({n-1})/{N}\). By Lemma 2.2, each split vertex shares its value of the auxiliary function M with a unique leaf. If that leaf is an inner leaf, both the blue (split) and the red (inner) point lie on the same unit segment \(\{x\} \times [0,1]\). Otherwise, there is only one blue point on \(\{x\} \times [0,1]\), because the values M(p) for \(p \in L_N\) are all different. Hence the quantity \(|B \cap \{x\} \times [0,({n-1})/{N}]| - |R \cap \{x\} \times [0,({n-1})/{N}]|\) can either be 0 or 1.
The base case \(n=0\) trivially holds. We have \( \{M(\gamma (p)) : p \in L_0\} = \{1\}\), and the right-hand side of (2) also equals \(\{1\}\), since for \(n=0\) the vertical segments are empty and no x satisfies the condition.
We assume that (2) holds for layer \(L_n\) and we prove that it also holds for \(L_{n+1}\). We distinguish three cases for any vertex q in layer \(L_n\).

If q has degree 2 then q and its child \(r \in L_{n+1}\) are mapped by \(M \circ \gamma \) to the same value. Moreover, q does not create any point in B nor in R.

If q is an inner leaf, then the value \(M(\gamma (q))\) will not appear in \(\{M(\gamma (p)) :p \in L_{n+1}\}\) any more. The value \(M(\gamma (q))\) also disappears in
$$\begin{aligned} \biggl \{x \in [0,1]: \biggl |B \cap \{x\} \times \biggl [0,\frac{n}{N}\biggr ]\biggr | - \biggl |R \cap \{x\} \times \biggl [0,\frac{n}{N}\biggr ]\biggr | = 1 \biggr \} \cup \{1\}, \end{aligned}$$because q created a red point in R with the coordinates \((M(\gamma (q)),{n}/{N}) = (M(q),{n}/{N})\).

If q is a split vertex, then the value \(M(\gamma (q))\) will stay in \(\{M(\gamma (p)) :p \in L_{n+1}\}\). Moreover, \(\{M(\gamma (p)) :p \in L_{n+1}\}\) contains the additional value M(q). The value M(q) also appears in
$$\begin{aligned} \biggl \{x \in [0,1]: \biggl |B \cap \{x\} \times \biggl [0,\frac{n}{N}\biggr ]\biggr | - \biggl |R \cap \{x\} \times \biggl [0,\frac{n}{N}\biggr ]\biggr | = 1 \biggr \} \cup \{1\}, \end{aligned}$$because q creates a blue point in B with the coordinates (M(q), n/N).
Hence (2) holds.
Let v be a vertex in layer \(L_n\), i.e., \(n = v_x + v_y\). By Lemma 2.1 we know that a vertex \(u \in L_n\) with \(u_x < v_x\) satisfies \(M(\gamma (u)) < M(\gamma (v))\). By Lemma 2.1 we also know that a vertex \(w \in L_n\) with \(v_x < w_x\) satisfies \(M(\gamma (v)) < M(\gamma (w))\). Hence the number of vertices in layer \(L_n\) with smaller x-coordinate than that of v is exactly the number of vertices which are mapped by \(M \circ \gamma \) to a smaller value than that of v. If \(v_x < n\):
If \(v_x = n\) then
By Lemma 2.3, the red and blue points on the line \(y=({v_x+v_y})/{N}\) alternate in color starting and ending with a blue point. Hence, any interval [0, x] on the line \(y=({v_x+v_y})/{N}\) contains at most one more blue point than red points. Therefore,
is at most one. Lemmas 2.6 and 2.3 directly imply Theorem 2.4. \(\square \)
3 Lower Bound for Two-Dimensional Weak CDRs
Before giving the proof of Theorem 1.3, we recall that a proof for a proper CDR (i.e., one without inner leaves) was given in [9]. Our proof follows the same spirit, so we first give an overview of their proof and describe what changes when we introduce inner leaves.
Lemma 3.1
Given a CDR, a point \(p=(x,y)\in L_N,\) and an integer \(n<N,\) let \(p'=(x',y')\in L_n\) be the unique point of \(L_n\) that is in \(dig (o,p)\). The Hausdorff error of the CDR is at least \(|x'-x{n}/{N}|\).
Proof
This result was shown by Chun et al. [9, Lemma 3.5, in Cases 1 and 2]. We give the proof for completeness. Consider the \(L_\infty \) ball of radius \(|x'-x{n}/{N}|\) centered at \(p\,{n}/{N}\). By construction, this ball contains \(p'\) on its boundary. Because of the monotonicity axiom, no vertex of \(dig (o,p)\) can be in the interior of the ball. In particular, when measuring the Hausdorff distance at the point \(p\,{n}/{N}\in \overline{op}\) we get an error of at least \(|x'-x{n}/{N}|\). \(\square \)
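The bound of Lemma 3.1 is simple enough to evaluate directly. The following sketch (our own helper, with made-up example values that are not from the paper) computes the lower bound forced by a given digital crossing:

```python
def lemma31_bound(p, x_prime, n, N):
    """Lower bound on the Hausdorff error of a CDR, in the spirit of Lemma 3.1.

    p = (x, y) is a boundary vertex of L_N. The Euclidean segment op crosses
    the diagonal x + y = n at the point p*n/N, while the digital segment
    dig(o, p) crosses that diagonal at a grid point with x-coordinate x_prime.
    The error of the CDR is at least the horizontal gap between the two."""
    x, _y = p
    return abs(x_prime - x * n / N)


# Hypothetical example: for p = (64, 64) and N = 128 the segment op crosses
# x + y = 32 at (16, 16); a digital crossing at x' = 20 forces error >= 4.
print(lemma31_bound((64, 64), 20, 32, 128))
```

The helper only restates the arithmetic of the proof; it is useful as a sanity check when experimenting with concrete digital-segment constructions.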
Consider any point \(p\in L_N\) and virtually sweep a line of slope \(-1\) from the origin all the way to \(L_N\). At each instant of the sweep, the sweep line intersects each of the Euclidean segment \(\overline{op}\) and the digital one \(dig (o,p)\) in a point. Lemma 3.1 says that if we can find an instant of time for which the two intersection points are at distance \(\delta \) from each other, then the Hausdorff error of the whole CDR must be \(\varOmega (\delta )\) (see Fig. 6).
In order to find this instant of time we look at how much the subtrees grow. Consider a set I of consecutive vertices in some intermediate layer \(L_n\). Let \(\mathcal {L}(I)\) be the vertices of \(L_N\) whose digital path to the origin passes through some vertex of I. If the CDR has small error, \(\mathcal {L}(I)\) must have roughly \(N|I|/n\) many points. The difference between the expected number of vertices and \(|\mathcal {L}(I)|\), combined with Lemma 3.1, will give a lower bound on the Hausdorff error.
Our proof follows the same spirit (transform the tree into a pointset, use discrepancy to find a subset with too many/too few children and use Lemma 3.1 to find a large error). Although all three steps follow the same spirit, they need major changes to account for the possibility of inner leaves.
The biggest change is how we map the tree. In proper CDRs each line has a unique split vertex and always extends to \(L_N\). Thus, a region with a large number of split vertices directly implies a large error. In our setting, we could potentially have a region with many split vertices followed by a large number of inner leaves that cancel out the growth. This is why we need two major changes: first, we now color the points red and blue depending on whether they are inner leaves or split vertices. Second, we introduce a second dimension to track when the children of a split vertex stop extending. Intuitively speaking, the x-coordinate of our mapping is similar to the mapping done by Chun et al. [9], whereas the y-coordinate represents time. Thus, the difference in y-coordinates between red and blue points can be used to determine for how long the two children of a split vertex stay alive (the larger the difference in y-coordinates, the further the two children extend).
We now use the mapping of Sect. 2 together with the two-color discrepancy (Theorems 1.2 and A.4) to show a lower bound on the error of weak CDRs. The discrepancy result in Theorem 1.2 considers the points in the whole unit square. For technical reasons, in Sect. 4 we will need a discrepancy result for the points in the upper half of the unit square instead (Theorem A.4). The difference between the two theorems is just a constant factor and thus has little impact. Here we use Theorem A.4 and prove the result in terms of the number of inner leaves in the upper half. Specifically, we show the following result.
Theorem 1.3
For any \(N \in \mathbb {N},\) any weak CDR defined on \(\mathcal {G}^+_N\subset \mathbb {Z}^2\) with \(\kappa _2\) inner leaves between lines \(x_1+x_2=\lceil N/2 \rceil \) and \(x_1+x_2 = N\) has \(\varOmega ({N \log N}/({N+\kappa _2}))\) error.
Proof
Given a weak CDR and its associated tree T, consider its transformation into the sets R and B of red and blue points defined by \(\pi \). Let \(b_2\) and \(r_2\) be the numbers of blue and red points in the rectangle \([0,1] \times [1/2,1]\), respectively. By Lemma 2.3, we have \(b_2 - r_2 = \lfloor N/2 \rfloor \). We apply the discrepancy result (Theorem A.4) with \(b_2-r_2 = \lfloor N/2 \rfloor \) and \(r_2 = \kappa _2\), and obtain that there exist \(\alpha , \beta \in [0,1]\) and \(c' > 0\) such that \(|B[\alpha ,\beta ] - R[\alpha ,\beta ] - N \alpha \beta | > c' \cdot N \cdot \log N/({N+\kappa _2})\).
We want to use Theorem 2.4 on the vertex of T whose image is \((\alpha ,\beta )\). Naturally, such a vertex need not exist, but we will find one nearby whose associated discrepancy is also high. Let \(n= \lfloor N\beta \rfloor \) and observe that \(B[\alpha ,\beta ]=B[\alpha ,{n}/{N}]\); indeed, by the way we transform points, their y-coordinates are of the form i/N. However, by definition of n we know that \(\beta \) is between n/N and \((n+1)/N\) and thus no point can lie in the horizontal strip \(y\in (n/N, \beta ]\) (by the same argument we also have \(R[\alpha ,\beta ]=R[\alpha ,{n}/{N}]\)).
If we substitute \(\beta \) in the previous equation we get
for large enough N, assuming \(\kappa _2 \in O(N \log N)\), and for some \(c''>0\). We get the additional \(-1\) term because of the rounding in the definition of n.
Now we need to do a similar operation for \(\alpha \). Let \(q_i=(i,ni)\) be a vertex of \(L_n\). By Lemma 2.1 the image of the auxiliary function \(M(q_i)\) monotonically increases as i grows. Let \(Q=\{q_i: M(q_i) \le \alpha \}\) and \(\alpha '=\max _{q_i\in Q} M(q_i)\). Note that, by definition of the set Q, it trivially holds that \(\alpha '\le \alpha \).
Lemma 3.2

\(B[\alpha ,{n}/{N}] - R[\alpha ,{n}/{N}] = B[\alpha ',{n}/{N}] - R[\alpha ',{n}/{N}].\)
Proof
The difference between the two rectangles is the rectangle \(\varDelta \) whose opposite corners are \((\alpha ',0)\) and \((\alpha ,n/N)\), and whose boundary segment \(\overline{(\alpha ',0)(\alpha ',{n}/{N})}\) is open (not part of \(\varDelta \)). We claim that red and blue points are paired (sharing the same x-coordinate) in \(\varDelta \) (and thus, for each red point that we remove we are also removing a blue one). By Lemma 2.2, all the blue points have different x-coordinates, and so do the red points. Hence, if there are a red and a blue point on the same vertical line, they must be the only pair on that vertical line. First notice that if there is a red point in \(\varDelta \), there also exists a blue point in \(\varDelta \) with the same x-coordinate and below the red point. Indeed, by the virtual walk with which we defined the auxiliary function, every split vertex is closer to the origin than its corresponding leaf. Hence, after the transformation \(\pi \), if there is a red point, then there must exist a blue point with the same x-coordinate (by Lemma 2.2) and smaller y-coordinate. Next, we show that if there is a blue point in \(\varDelta \), there also exists a red point in \(\varDelta \) with the same x-coordinate.
Assume, for the sake of contradiction, that there exists a blue point p in \(\varDelta \) such that no red point q with the same x-coordinate as p lies in \(\varDelta \). Let s be the split vertex whose image is p. By definition of the transformation \(\pi \), the x-coordinate of p is M(s), which is between \(\alpha '\) and \(\alpha \). We apply Lemma 2.2 to find the unique leaf \(\ell \) such that \(M(s)=M(\ell )\). Since \(\pi (\ell )\notin \varDelta \), we have that \(\ell _x+\ell _y>n\). Let m be the unique vertex of \(L_n\) that is in the path from s to \(\ell \). It follows that \(\pi (m)=(M(\ell ), {n}/{N})\in \varDelta \). This gives a contradiction with the definition of \(\alpha '\), and thus implies that if there exists a blue point in \(\varDelta \), then there also exists a red point in \(\varDelta \) with the same x-coordinate. \(\square \)
Thus, given a pair \((\alpha ,\beta )\) whose associated rectangle has high discrepancy, we have snapped it to the pair \((\alpha ', {n}/{N})\) that defines another rectangle with high discrepancy. More importantly, by definition of Q, we know that \(\pi (q_{|Q|-1})=(\alpha ', {n}/{N})\). Note that \(q_{|Q|-1}\) need not be a split vertex or an inner leaf (and thus, \((\alpha ', {n}/{N})\) may not be a point of \(R \cup B\)).
Let \(b'=B[\alpha ',{n}/{N}]\) and \(r'=R[\alpha ',{n}/{N}]\). If we apply Theorem 2.4 to the point \(q_{|Q|-1}\) we get that \(b'-r'-2\le |Q|-1 \le b'-r'\). This set Q plays the role of I from the proof overview: the vertices of Q are the ones that extend to cover all the vertices of \(L_N\) whose image is \(\alpha '\) or less. As such, we would expect Q to contain roughly \(n\alpha '\) elements. However, the discrepancy result tells us that the size of Q is at least \(c''{N\log N}/({N+\kappa _2})\) units away from that value. We say that p is productive if some point of T(p) is in \(L_N\) (this is equivalent to saying that p can be extended to reach the boundary). Let \(k \le b'-r'-2\) be the biggest integer such that \(q_k\) is productive. Note that k is well defined because \(q_0\) is always productive [(0, n) always extends to (0, N)]. The proof now considers a few cases depending on whether k is small or large (specifically, we say that k is small if
large otherwise) and if Q contains too few or too many points.
\(\underline{{k}\,\mathrm{is}\,\mathrm{small}}\) Recall that we looked for the largest possible k (such that \(q_k\) is productive). Thus, if k is small, we have many points in layer \(L_n\) that are consecutive and not productive. In particular, none of the vertices in
are productive. Let
(note that this point is surrounded by non-productive points on both sides along \(L_n\)). Shoot a ray \(\gamma \) from o towards \(q_m\). Let p be the vertex on \(L_N\) that is closest to \(\gamma \). Observe that the \(\Vert \,{ \cdot }\,\Vert _{\infty }\) distance between \(\gamma \) and p is at most 1/2. Let \(\gamma '\) be the ray shooting from o towards p. Similarly, the \(\Vert \,{ \cdot }\,\Vert _{\infty }\) distance between \(\gamma '\) and \(q_m\) is at most 1/2 (see Fig. 7, left). We now apply Lemma 3.1 to \(dig (o,p)\). We know that the Euclidean segment \(\overline{op}\) is close to \(q_m\). The digital segment must cross \(L_n\) and is far from \(q_m\); the closest it can pass is either
That is, we know that the intersection of \(\overline{op}\) with the line \(x+y=n\) is at most half a unit away from \(q_m\). Similarly, the intersection with \(dig (o,p)\) is at least
from \(q_m\). Thus, by the triangle inequality the \(\Vert \,{ \cdot }\,\Vert _{\infty }\) distance between \(dig (o,p)\) and \(\overline{op}\) is at least
\(\underline{k\,\mathrm{is}\,\mathrm{large}\,\mathrm{and}\,b'-r'\ge n\alpha +c''\cdot {(N \log N)}/({N+\kappa _2})}\) Look at the x-coordinate of \(q_k\). We know that Q has at least
many elements, and \(q_k\) is among the productive vertices with the largest x-coordinate. In particular, the x-coordinate of \(q_k\) is at least
Let p be the unique leaf of \(L_N\) such that \(M(p)=M(q_k)\). We now apply Lemma 3.1 to \(dig (o,p)\) at the line \(x+y=n\). By definition of p, we have that \(dig (o,p)\) passes through \(q_k\). Now, by definition of Q, we know that \(M(q_k) \le \alpha \) and in particular the x-coordinate of p is at most \(\alpha N\) (see Fig. 7, right). Thus, the Euclidean segment \(\overline{op}\) must intersect the line \(x+y=n\) at a point whose x-coordinate is at most \(\alpha n\). That is, when we look at the Euclidean and the digital segments along the line \(x+y=n\), the Euclidean crossing happens at x-coordinate at most \(\alpha n\). However, the x-coordinate of the digital crossing is at least
By Lemma 3.1 we conclude that the error must be \(\varOmega ({N \log N}/({N+\kappa _2}))\) as claimed.
\(\underline{b'-r'< n\alpha - c''\cdot {(N \log N)}/({N+\kappa _2})}\) This proof is very similar to the previous case. Consider the vertex \(p=(\lfloor \alpha N \rfloor , N-\lfloor \alpha N \rfloor )\in L_N\) and apply Lemma 3.1 to \(dig (o,p)\) and \(\overline{op}\). At the line \(x+y=n\) the Euclidean segment \(\overline{op}\) passes through a point whose x-coordinate is \(\lfloor \alpha N \rfloor \cdot {n}/{N} \ge \lfloor \alpha n \rfloor -1\). By definition, \(M(p)\le \alpha \) and thus \(dig (o,p)\) must pass through some vertex q of Q. In particular, the x-coordinate of q is at most \(b'-r' < n\alpha - c''\cdot {N\log N}/({N+\kappa _2})\), giving the \(\varOmega ({N \log N}/({N+\kappa _2}))\) error and completing the proof of Theorem 1.3.
\(\square \)
Note that if we use Theorem 1.2 instead, the same argument goes through and we obtain the following result.
Theorem 3.3
For any \(N \in \mathbb {N},\) any weak CDR defined on \(\mathcal {G}^+_N\subset \mathbb {Z}^2\) with \(\kappa _1\) inner leaves has \(\varOmega (({N \log N})/({N+\kappa _1}))\) error.
4 Lower Bound for CDRs in High Dimensions
We now use the lower bound for weak CDRs to obtain a lower bound for CDRs in three or higher dimensions. Consider the restriction of any d-dimensional CDR T to the \(x_1x_2\)-plane (we call this restriction the \(x_1x_2\)-restriction of T and denote it by \(T_{x_1x_2}\)). Recall that the key observation is that \(T_{x_1x_2}\) is a (possibly weak) CDR and that any inner leaf in \(T_{x_1x_2}\) must extend in some \(x_i\)-direction in T for some \(i \in [3..d]\). We have seen that \(T_{x_1x_2}\) needs to have a large number of inner leaves to have \(o(\log N)\) error. In the following, we will show that a large number of inner leaves causes constraints in \(\mathbb {Z}^d\) and has an impact on the overall error of T.
We do a slight abuse of notation and use the same terms as in two dimensions. For simplicity of the notation, we assume that N is a positive even number. For any \(n \le N\), let \(L_n = \bigl \{(x_1,x_2,\ldots ,x_d) \in \mathcal {G}^+_N: \sum _{i=1}^{d} x_i = n\bigr \}\). Given any CDR in \(\mathcal {G}^+_N\), we consider the CDR as a tree rooted at the origin. Let T(v) be the subtree rooted at v.
From Theorem 1.3, we already know that in order for \(T_{x_1x_2}\) to have sublogarithmic error we must have \(\kappa _2\in \omega (N)\) inner leaves. However, in d dimensions each inner leaf still connects to a boundary leaf in \(L_N\). In other words, the subtrees rooted at the vertices in \(L_{N/2-1} \cap T_{x_1x_2}\) must cover all these boundary vertices. We now observe that a weak CDR with inner leaves in the \(x_1x_2\)-plane induces subtrees which are too big for the high dimensional proper CDR (see Fig. 8).
Lemma 4.1
Given any CDR in \(\mathcal {G}^+_N,\) let \(\kappa _2\) be the number of inner leaves in \(T_{x_1x_2}\) between the lines \(x_1+x_2=N/2\) and \(x_1+x_2 = N\). There exists a vertex \(v \in L_{N/2-1}\) such that \(v_i=0\) for \(i=3,\ldots ,d\) and some boundary leaf \(u \in T(v) \cap L_N\) has \(u_j \ge (\kappa _2/N)^{{1}/({d-2})}-1\) for some \(j \in [3..d]\).
Proof
The proof follows from a packing argument. Consider the set \(V=\{(0,N/2-1,0,\ldots ,0), (1,N/2-2,0,\ldots ,0), \ldots , (N/2-1,0,0,\ldots ,0)\}\). Note that these vertices lie in the \(x_1x_2\)-plane and thus are in \(T_{x_1x_2}\). Because they are the two-dimensional equivalent of \(L_{N/2-1}\), the union of their subtrees covers \(T_{x_1x_2}\) between \(L_{N/2}\) and \(L_{N}\). In this region we know that we have \(\kappa _2\) many inner leaves, which will extend to \(L_N\) with the first step in the \(x_i\)-direction for some \(i \in [3..d]\). Let \(Y_N\) be the set of vertices of \(L_N\) reached by extending these \(\kappa _2\) inner leaves; in particular, \(|Y_N| \ge \kappa _2\).
Let \(B_N =\bigl \{(x_1,x_2,\ldots ,x_d)\in \mathcal {G}^+_N: \sum _{i=1}^{d} x_i = N, \,x_1+x_2<N, \,\forall \,i \in [3..d]\; x_i < (\kappa _2/N)^{{1}/({d-2})}-1\bigr \}\), see Fig. 8. Since we have less than \((\kappa _2/N)^{{1}/({d-2})}\) choices for \(x_3,\ldots ,x_d\), at most N choices for \(x_1\), and the value of \(x_2\) is determined by the constraint \(\sum _{i=1}^{d} x_i = N\), the size of \(B_N\) is less than \(\kappa _2\). Hence, \(B_N\) cannot contain all vertices of \(Y_N\). Moreover, no vertex of \(Y_N\) lies in the \(x_1x_2\)-plane, so there exists some vertex \(u \in Y_N\) such that \(u_j \ge (\kappa _2/N)^{{1}/({d-2})}-1\) for some \(j \in [3..d]\), which is in \(T(v) \cap L_N\) for some \(v \in V\). \(\square \)
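The counting step in the packing argument can be checked by brute force. The following sketch (our own helper, with illustrative parameters that are not from the paper) enumerates \(B_N\) directly and confirms that its size stays below \(\kappa _2\):

```python
from itertools import product


def b_n_size(N, d, kappa2):
    """Brute-force count of B_N = {x on L_N : x1 + x2 < N and x_i < t for i >= 3},
    where t = (kappa2/N)^(1/(d-2)) - 1, to check that |B_N| < kappa2."""
    t = (kappa2 / N) ** (1.0 / (d - 2)) - 1
    count = 0
    for rest in product(range(N + 1), repeat=d - 2):  # candidate (x_3, ..., x_d)
        if any(x >= t for x in rest):
            continue
        for x1 in range(N + 1):
            x2 = N - x1 - sum(rest)  # x_2 is forced by the constraint sum = N
            if x2 >= 0 and x1 + x2 < N:
                count += 1
    return count


# Hypothetical parameters: d = 3, N = 16, kappa2 = 64 give t = 3, and the
# count stays strictly below kappa2 = 64 as the lemma requires.
print(b_n_size(16, 3, 64))
```

This is only a numeric illustration of the inequality \(|B_N| < \kappa _2\); the paper's argument is the closed-form count above.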
The existence of this vertex v is the root of the problem. We conclude with the following statement.
Lemma 1.4
Any CDR defined on \(\mathcal {G}^+_N\subset \mathbb {Z}^d\) with \(\kappa _2\) inner leaves in the \(x_1x_2\)-plane between the lines \(x_1+x_2=\lceil N/2 \rceil \) and \(x_1+x_2 = N\) has \(\varOmega ((\kappa _2/N)^{{1}/({d-2})})\) error.
Proof
Apply Lemma 4.1 to obtain a vertex \(v\in L_{N/2-1} \cap T_{x_1x_2}\) such that some \(u \in T(v) \cap L_N\) satisfies \(u_j \ge (\kappa _2/N)^{{1}/({d-2})}-1\) for some \(j \in [3..d]\). Let \(u'\) be the intersection of \(\overline{ou}\) and the affine plane containing \(L_{N/2-1}\), see Fig. 8. As \(L_N\) and \(L_{N/2-1}\) are parallel, \(({u_j'-o_j})/({u_j-o_j}) = (N/2-1)/N \ge {1}/{3}\) for \(N\ge 6\); this implies that \(u'_j= \varOmega ((\kappa _2/N)^{{1}/({d-2})})\). By construction, v is on \(dig (o,u)\) and \(v_j= 0\), hence the \(\Vert \,{ \cdot }\,\Vert _{\infty }\) distance between \(dig (o,u)\) and \(\overline{ou}\) is \(\varOmega ((\kappa _2/N)^{{1}/({d-2})})\). \(\square \)
Combining with Theorem 1.3 gives us a lower bound for CDRs in d dimensions.
Theorem 1.5
Any CDR in \(\mathbb {Z}^d\) has \(\varOmega ( \log ^{1/(d-1)}\! N)\) error.
Proof
By Theorem 1.3 and Lemma 1.4, the error is \(\varOmega (({N\log N})/({N+\kappa _2}))\) and \(\varOmega ((\kappa _2/N)^{{1}/({d-2})})\), where \(\kappa _2\) is the number of inner leaves in \(T_{x_1x_2}\) between \(L_{N/2}\) and \(L_{N}\). The balance between the two is obtained by choosing \(\kappa _2 = \varTheta (N \log ^{({d-2})/({d-1})}\! N)\), giving the \(\varOmega ( \log ^{1/(d-1)}\! N)\) lower bound. \(\square \)
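The balancing step can be sanity-checked numerically. The sketch below (our own helper; all hidden constants are set to 1, which is an assumption for illustration only) evaluates both lower bounds at the chosen \(\kappa _2\) and confirms that they match up to a constant factor:

```python
import math


def error_bounds(N, d):
    """The two lower bounds (Theorem 1.3 and Lemma 1.4) with constants set to 1,
    evaluated at the balancing choice kappa2 = N * log(N)^((d-2)/(d-1))."""
    kappa2 = N * math.log(N) ** ((d - 2) / (d - 1))
    weak_cdr_bound = N * math.log(N) / (N + kappa2)  # Theorem 1.3
    packing_bound = (kappa2 / N) ** (1 / (d - 2))    # Lemma 1.4
    return weak_cdr_bound, packing_bound


# For this choice the packing bound equals log(N)^(1/(d-1)) exactly, and the
# weak-CDR bound matches it up to a constant factor, as in the proof.
b1, b2 = error_bounds(2 ** 20, 3)
```

The design point is that increasing \(\kappa _2\) weakens the first bound and strengthens the second, so the best provable error sits where the two curves cross.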
5 A Construction of a Weak CDR with Constant Error
In this section we describe how to construct a weak CDR in two dimensions with constant error. Specifically, we show that the weak CDR has at most 5/2 error and at most \(N^2/12\) inner leaves, which is about 1/6 of the total number of grid points in \(\mathcal {G}^+_N\).
Assume that N is a power of 2. We partition \(\mathcal {G}^+_N\) into \(\log _2 N\) diagonal slices by the set of lines \(x+y=2^i\) for \(i=1,\ldots , \log _2 N-1\). We use \(S_i\) to denote the ith slice between \(x+y=2^{i-1}\) and \(x+y=2^i\) for \(i=2,\ldots , \log _2 N\). The first slice \(S_1\) is \(\mathcal {G}^+_2\), which only has six points. There are two proper CDRs for this small set and both have the same error, so we can use either of the two. For each other slice \(S_i\), we draw a greedy digital path from each point \(p=(p_x,p_y) \in L_{2^{i-1}}\) to \(2p=(2p_x, 2p_y)\). The greedy digital path simply tries to approximate the Euclidean segment as much as possible. More formally, the path between p and 2p is defined by picking the point in each \(L_j\) that has the smallest \(\Vert \,{ \cdot }\,\Vert _{\infty }\) distance to the line segment \(\overline{p,2p}\) for \(j=2^{i-1}, \ldots , 2^{i}\) (in case of a tie, we pick the point with smaller y-coordinate). Lemma 5.1 shows that the points picked in this way give a digital path that follows the 4-neighbor grid topology.
The last step of the construction is as follows: for those points \((p_x,p_y) \in \mathcal {G}^+_N{\setminus }\{(0,0)\}\) not having an edge to \((p_x-1,p_y)\) or \((p_x,p_y-1)\), we connect \((p_x,p_y)\) to \((p_x-1,p_y)\) if \(p_x \ge p_y\), and otherwise to \((p_x, p_y-1)\). We call this construction GREEDY, see Fig. 9.
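The whole construction fits in a few lines of code. The following is a minimal, unverified sketch of GREEDY (helper names are ours; the tie rule and the choice of the proper CDR on \(S_1\) follow the text, and we represent the tree as a map from each vertex to its parent):

```python
import math


def greedy_path(p):
    """Greedy digital path from p to 2p: on each diagonal x + y = j we pick the
    grid point nearest (in L-infinity) to the crossing of segment p-2p with the
    diagonal, breaking ties towards smaller y (i.e., larger x)."""
    px, py = p
    n0 = px + py
    path = []
    for j in range(n0, 2 * n0 + 1):
        x = math.floor(px * j / n0 + 0.5)  # nearest integer; ties round up
        path.append((x, j - x))
    return path


def build_greedy(N):
    """Parent map of the GREEDY weak CDR on G^+_N, for N a power of 2 (>= 4)."""
    assert N >= 4 and N & (N - 1) == 0
    parent = {(1, 0): (0, 0), (0, 1): (0, 0),                  # S_1: one of the
              (2, 0): (1, 0), (1, 1): (1, 0), (0, 2): (0, 1)}  # two proper CDRs
    i = 2
    while 2 ** i <= N:                                   # slice S_i
        for px in range(2 ** (i - 1) + 1):
            path = greedy_path((px, 2 ** (i - 1) - px))
            for u, v in zip(path, path[1:]):
                parent[v] = u                            # edge of a greedy path
        i += 1
    for s in range(1, N + 1):                            # last step: remaining
        for x in range(s + 1):                           # points go left if
            v = (x, s - x)                               # x >= y, down otherwise
            if v not in parent:
                parent[v] = (x - 1, s - x) if x >= s - x else (x, s - x - 1)
    return parent


def inner_leaves(parent, N):
    """Leaves of the tree that do not lie on the boundary diagonal L_N."""
    used = set(parent.values())
    return [v for v in parent if v not in used and sum(v) < N]
```

Since every parent is a 4-neighbor one diagonal closer to the origin, following parents from any vertex reaches (0, 0), so the structure is automatically a tree; this mirrors the argument of Lemma 5.5 below.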
The following lemma shows that every two consecutive points picked in \(L_j\) and \(L_{j+1}\) to form the greedy digital path are adjacent under the 4-neighbor topology.
Lemma 5.1
For \(i=2,\ldots , \log _2 N,\) any greedy digital path from \(p \in L_{2^{i-1}}\) to \(2p\in L_{2^i}\) is connected under the 4-neighbor topology and it is xy-monotone.
Proof
It is trivial for \(p=(0,2^{i-1})\) and \(p=(2^{i-1},0)\), so we ignore these two cases in the following. Suppose that there exist two consecutive picked points of the greedy path, \(u \in L_j\) and \(v \in L_{j+1}\) for some \(2^{i-1} \le j \le 2^i-1\), such that u and v are not connected in the 4-neighbor topology, i.e., \(u_x \ne v_x\) and \(u_y \ne v_y\). Since u and v are grid points, this implies that \(\Vert u-v\Vert _{\infty } \ge 2\). Let \(u'\) and \(v'\) be the points on the line segment \(\overline{p,2p}\) with the smallest \(\Vert \,{ \cdot }\,\Vert _{\infty }\) distance to u and v respectively, i.e., \(u'\) (resp. \(v'\)) is the intersection of \(\overline{p,2p}\) and \(x+y=j\) (resp. \(x+y=j+1\)). Since the slope of \(\overline{p,2p}\) is strictly between 0 and \(\infty \), we have \(\Vert u'-v'\Vert _{\infty } < 1\). Furthermore, the \(\Vert \,{ \cdot }\,\Vert _{\infty }\) distance between any two consecutive points on \(L_j\) is 1, so \(\Vert u-u'\Vert _{\infty } \le 0.5\), and the same holds for \(\Vert v-v' \Vert _{\infty }\). By the triangle inequality, we have \(\Vert u-v\Vert _{\infty } \le \Vert u-u'\Vert _{\infty } + \Vert u'-v'\Vert _{\infty } + \Vert v'-v\Vert _{\infty } <2\), which gives a contradiction. The greedy digital path is xy-monotone because we pick exactly one grid point per diagonal \(L_j\). \(\square \)
Next, we show that any two greedy digital paths in the same slice \(S_i\) are disjoint, so that when we concatenate all the greedy digital paths slice by slice they form a tree rooted at the origin (Lemma 5.5).
Lemma 5.2
For \(i=2,\ldots , \log _2 N,\) any two greedy digital paths in \(S_i\) are disjoint.
Proof
By the way we picked the grid points on the greedy digital paths, any grid point on a greedy digital path is at \(\Vert \,{ \cdot }\,\Vert _{\infty }\) distance at most 0.5 from the corresponding line segment. Given any two consecutive line segments \(\overline{p,2p}\) and \(\overline{q,2q}\) in \(S_i\) where \(q_x = p_x +1\), for any point \(v \in \overline{{p,2p}}{\setminus }\{p\}\) the \(\Vert \,{ \cdot }\,\Vert _{\infty }\) distance from v to \(\overline{q,2q}\) is larger than 1. Hence, one grid point cannot be assigned to more than one greedy digital path. \(\square \)
Then, we show how the greedy digital paths in \(S_i\) connect to the greedy digital paths in \(S_{i-1}\), which reveals through which intermediate points \(dig (o,p)\) passes.
Lemma 5.3
For \(i=3,\ldots , \log _2 N\) and any \(p = (p_x, p_y) \in L_{2^{i1}},\) \(dig (p, 2p) \subset dig (q, 2p),\) where \(q\, {=} \,(\lfloor p_x/2\rfloor , \lceil p_y/2\rceil )\) if \(p_x \ge p_y,\) \((\lceil p_x/2\rceil , \lfloor p_y/2\rfloor )\) otherwise.
Proof
When \(p_x\) is even, \(q = (p_x/2, p_y/2) \in L_{2^{i-2}}\), so \(dig (q,p)\) is a greedy digital path. The path \(dig (q,2p)\) is the concatenation of \(dig (q,p)\) and \(dig (p,2p)\). Hence, \(dig (p, 2p) \subset dig (q, 2p)\).
When \(p_x\) is odd, let \(p_x = 2k + 1\) for some \(k = 0,1,\ldots , 2^{i-2}-1\). Then, \(p_y = 2^{i-1} - 2k - 1\). Assume that \(p_x \ge p_y\) (the other case can be proved by the same approach). By the last step of the GREEDY construction, p connects horizontally to \((p_x-1, p_y)\) and so on, so we follow the digital path from p horizontally until we hit some greedy path. Since p is between \(p' = (2k,2^{i-1} - 2k)\) and \((2k+2,2^{i-1} - 2k - 2)\) on \(L_{2^{i-1}}\), the greedy path we hit is \(dig (p'/2,p')\); we then follow \(dig (p'/2,p')\) and reach \(p'/2 = (k, 2^{i-2} - k) = (\lfloor p_x/2\rfloor , \lceil p_y/2\rceil )\). Therefore, \(dig (p,2p) \subset dig (q,2p)\), where \(q = p'/2\). \(\square \)
By repeatedly applying Lemma 5.3 for all \(i=3,\ldots , \log _2 N\), we have the following corollary.
Corollary 5.4
For \(i=3,\ldots , \log _2 N\) and any \(p = (p_x, p_y) \in L_{2^{i-1}},\) \(dig (o,p)\) passes through all the points \((\lfloor p_x/2^j\rfloor , \lceil p_y/2^j\rceil )\) for \(j=1,\ldots , i-2\) if \(p_x \ge p_y,\) and otherwise through all the points \((\lceil p_x/2^j\rceil , \lfloor p_y/2^j\rfloor )\) for \(j=1,\ldots , i-2\).
Now we have all the tools to show that GREEDY is a tree rooted at the origin, and hence that GREEDY is a weak CDR.
Lemma 5.5
GREEDY is a tree rooted at the origin with xy-monotone paths to all the vertices.
Proof
If we can show that every grid point \(v \in \mathcal {G}^+_N\) except the origin has exactly one edge to either \((v_x-1,v_y)\) or \((v_x,v_y-1)\), then GREEDY is a tree with xy-monotone paths connecting all the grid points to the origin, because the graph is connected in \(\mathcal {G}^+_N\) and there are \(|\mathcal {G}^+_N|-1\) edges. The xy-monotonicity comes from the fact that all the grid points are connected towards the origin. Clearly, this holds in \(S_1\) because this part is a CDR with \(N=2\). Hence, we consider \(S_i\) for \(i=2,\ldots ,\log _2 N\).
By Lemmas 5.1 and 5.2, all the greedy digital paths in \(S_i\) are xy-monotone and disjoint. Hence, for any point \(v \in dig (p,2p)\setminus \{p\}\) with \(p \in L_{2^{i-1}}\), there is only one edge to either \((v_x-1,v_y)\) or \((v_x,v_y-1)\).
Furthermore, the last step of GREEDY only applies to the grid points \(p \in \mathcal {G}^+_N\setminus \{(0,0)\}\) not already having an edge to \((p_x-1, p_y)\) or \((p_x,p_y-1)\). Hence, only one edge to \((p_x-1, p_y)\) or \((p_x,p_y-1)\) is assigned to each of those points. \(\square \)
Next, we analyze the quality of GREEDY in terms of the number of inner leaves and the Hausdorff error in the following two lemmas.
Lemma 5.6
There are at most \(N^2/12\) inner leaves in GREEDY.
Proof
By Corollary 5.4, every grid point on the greedy digital paths can be extended to \(L_N\). Thus, in each \(S_i\), the inner leaves are created strictly between two greedy digital paths \(dig (p,2p)\) and \(dig (q,2q)\), where \(p_x+p_y = 2^{i-1}\), \(q_x = p_x + 1\) and \(q_y = p_y - 1\). We only consider the grid points below the line \(x=y\); the other case is symmetric. By the GREEDY construction, all the grid points strictly between \(dig (p,2p)\) and \(dig (q,2q)\) are connected to their left-hand-side neighbors. Hence, there is at most one inner leaf per horizontal line between \(dig (p,2p)\) and \(dig (q,2q)\), except the line \(y=2p_y-1\), because \((2p_x+1,2p_y-1)\) can be extended to \(S_{i+1}\). Therefore, there are at most \(\sum _{y=p_y}^{2p_y-2} 1 = p_y-1\) inner leaves between \(dig (p,2p)\) and \(dig (q,2q)\). By considering all \(dig (p,2p)\) for \(p_y = 1,\ldots , 2^{i-2}\) and the symmetric case, we have \(2\bigl (\sum _{p_y=1}^{2^{i-2}} (p_y-1) \bigr )= 2^{i-2}(2^{i-2}-1)\) inner leaves in \(S_i\). Summing over all the slices, we get
\(\sum _{i=2}^{\log _2 N} 2^{i-2}\bigl (2^{i-2}-1\bigr ) \le \sum _{i=2}^{\log _2 N} 4^{i-2} \le \frac{N^2}{12}\) inner leaves. \(\square \)
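As a quick sanity check of this count (our own, with hypothetical names), one can sum the per-slice bound \(2^{i-2}(2^{i-2}-1)\) numerically and compare it against \(N^2/12\):

```python
# Sanity check: the per-slice inner-leaf counts of Lemma 5.6 stay below N^2/12.
def inner_leaf_bound(N):
    """Sum of 2^(i-2) * (2^(i-2) - 1) over i = 2..log2(N), for N a power of two."""
    n = N.bit_length() - 1                   # n = log2(N)
    return sum((1 << (i - 2)) * ((1 << (i - 2)) - 1) for i in range(2, n + 1))

for N in (4, 16, 64, 256, 1024):
    assert inner_leaf_bound(N) <= N * N / 12
```

For instance, \(N=16\) gives \(0+2+12=14\) inner leaves, below \(256/12\approx 21.3\).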
Lemma 5.7
The Hausdorff error in GREEDY is at most 5/2.
Proof
Given any point \(v \in \mathcal {G}^+_N\), there exists some \(n \le \log _2 N\) such that \(2^{n-1} < v_x + v_y \le 2^n\). Based on our slice partition, we partition \(\overline{o,v}\) into n line segments such that each line segment lies in some slice \(S_i\). Then, we bound the Hausdorff error between each line segment and the corresponding piece of the digital segment within each slice, which implies the overall Hausdorff error. Recall that H(A, B) is the Hausdorff distance between A and B under the \(\Vert \,{ \cdot }\,\Vert _{\infty }\) metric.
We first show how to partition \(\overline{o,v}\) and \(dig (o,v)\). For simplicity, we assume that \(v_x \ge v_y\). Let \(k= v_x + v_y\) and let \(v_i\) be the intersection of \(\overline{o,v}\) and \(x+y= 2^i\) for \(i \le n-1\), i.e., \(v_{i,x} = 2^{i}v_x/k\) and \(v_{i,y} = 2^{i} v_y/k \). Let \(v' = (\lfloor 2^{n-1}v_x/k \rfloor , \lceil 2^{n-1} v_y/k \rceil )\) and \(v'' = (\lfloor 2^{n-1}v_x/k \rfloor + 1, \lceil 2^{n-1} v_y/k \rceil - 1)\), so that \(v_{n-1}\) lies between \(v'\) and \(v''\) on the line \(x+y=2^{n-1}\). Therefore, v is between \(\overline{v',2v'}\) and \(\overline{v'',2v''}\). Based on the GREEDY construction, v is also between \(dig (v',2v')\) and \(dig (v'', 2v'')\). Suppose that v is not in \(dig (v'', 2v'')\); then \(dig (o,v)\) is constructed by a horizontal path from v to \(dig (v',2v')\), followed by \(dig (o,2v')\) to the origin. Let \(v'_i = (\lfloor 2^{i}v_x/k\rfloor , \lceil 2^{i}v_y/k\rceil )\) for \(i = 2, \ldots , n-1\). Since \(dig (o,v)\) passes through \(v'=v'_{n-1}\), by Corollary 5.4, \(dig (o,v)\) also passes through all \(v'_i\). Then, we need to consider \(H(\overline{v_{n-1}, v},dig (v',v))\) and \(H(\overline{v_{i-1},v_i},dig (v'_{i-1},v'_i))\), whose maximum gives the bound on \(H(\overline{o,v},dig (o,v))\).
Now we bound \(H(\overline{v_{n-1}, v},dig (v',v))\). Let p (resp. \(p'\)) be the intersection of \(\overline{v',2v'}\) [resp. \(dig (v',2v')\)] and \(x+y=k\), and let q (resp. \(q'\)) be the intersection of \(\overline{v'',2v''}\) [resp. \(dig (v'',2v'')\)] and \(x+y=k\), so that \(dig (v',v)\) lies between \(dig (v',p')\) and \(dig (v'',q')\) (see Fig. 9). Then, \(H(\overline{v_{n-1}, v},dig (v',v))\) is bounded by the maximum of \(H(\overline{v_{n-1}, v},dig (v',p'))\) and \(H(\overline{v_{n-1}, v}, dig (v'',q'))\). Furthermore,
By the GREEDY construction, both \(H(\overline{v',p},dig (v',p'))\) and \(H(\overline{v'',q},dig (v'',q'))\) are at most 0.5. Since \(H(\overline{v', p}, \overline{v'', q}) < 2\), we have \(H(\overline{v_{n-1}, v},dig (v',v)) < 5/2\).
For the second part, \(H(\overline{v_{i-1},v_i},dig (v'_{i-1},v'_i))\) is bounded by \(H(\overline{v_{i-1},v_i},\overline{v'_{i-1},v'_i}) + H(\overline{v'_{i-1},v'_i},dig (v'_{i-1},v'_i))\). Since \(\Vert v_{i-1} - v'_{i-1}\Vert _\infty \) and \(\Vert v_{i} - v'_{i}\Vert _\infty \) are at most 1, \(H(\overline{v_{i-1},v_i}, \overline{v'_{i-1},v'_i})\) is also at most 1. Based on the GREEDY construction, we know that \(H(\overline{p,2p},dig (p,2p)) \le 0.5\) for any \(p\in L_{2^i}\). If \(v'_{i} = 2v'_{i-1}\), we are done. Otherwise, \(v'_i = (2v'_{i-1,x}+1, 2v'_{i-1,y}-1) \), and then \(dig (v'_{i-1},v'_i)\) is constructed by a horizontal path from \(v'_i\) to \(dig (v'_{i-1},2v'_{i-1})\), followed by \(dig (v'_{i-1},2v'_{i-1})\) to \(v'_{i-1}\). Using a similar argument, we can show that \(H(\overline{v'_{i-1},v'_i}, dig (v'_{i-1},v'_i)) \le 1.5\). Overall, this gives \(H(\overline{o,v},dig (o,v)) \le 5/2\) when \(dig (o,v)\) passes through \(v'\).
We now go back to the other case, when \(v \in dig (v'',2v'')\). Let \(v''_i = (\lfloor 2^{i}v_x/k + 1/2^{n-1-i}\rfloor , \lceil 2^iv_y/k - 1/2^{n-1-i} \rceil )\) for \(i=2,\ldots , n-1\). Since \(dig (o,v)\) passes through \(v''=v''_{n-1}\), by Corollary 5.4, \(dig (o,v)\) also passes through all \(v''_i\). Since \(\Vert v_{i} - v''_{i}\Vert _\infty \) is at most 1 for \(i=2,\ldots ,n-1\), we can apply the same argument as above to show that \(H(\overline{o,v},dig (o,v)) \le 5/2\) when \(dig (o,v)\) passes through \(v''\). \(\square \)
Combining all these lemmas, we have our main theorem.
Theorem 1.6
For any \(N>0\) we can create a weak CDR in \(\mathbb {Z}^2\) with at most 5/2 error and at most \(N^2/12\) inner leaves.
Proof
Lemma 5.5 guarantees that GREEDY satisfies axioms 1, 2, 3, and 5. Lemmas 5.6 and 5.7 give the two qualities of the weak CDR. \(\square \)
From the above theorem, we can also derive a tradeoff construction with O(c) error and \(O(N^2/c)\) inner leaves by scaling the tree.
Theorem 1.7
For any \(N>0\) and \(1 \le c \le N\) we can create a weak CDR in \(\mathbb {Z}^2\) with O(c) error and \(O(N^2/c)\) inner leaves.
Proof
We first apply the GREEDY construction in \(\mathcal {G}^+_{\lceil N/c \rceil }\), in which we have O(1) error and \(O(N^2/c^2)\) inner leaves. Then, we scale up the tree by a factor of c so that the original grid edges have length c. Therefore, the error of the tree in \(\mathcal {G}^+_{\lceil N/c \rceil \cdot c}\) becomes O(c), but the tree does not cover all the refined grid vertices. Then, we draw some vertical or horizontal line segments that branch from the tree as shown in Fig. 10. This will increase the number of inner leaves by a factor of c, i.e., \(O(N^2/c)\). Each new branch is a copy of some subpath of the original GREEDY tree and is shifted by at most c steps. Hence, their errors are still O(c). \(\square \)
Beyond providing a weak CDR, the GREEDY construction has some further nice properties. If we remove all the branches that end at inner leaves, we obtain an infinite tree that covers 2/3 of the grid points of \(\mathcal {G}^+_N\). In particular, for each grid point not in the tree, there exists a vertex of the tree within unit distance. Hence, given any point p in \(\mathcal {G}^+_N\), we can snap p to some vertex q of the tree with distance at most 1. Then, \(dig (o,p)\) can be approximated by \(dig (o,q)\) with a very small distortion at the endpoint, while the Hausdorff error \(H(\overline{o,p},dig (o,q))\) is O(1). Let \(T_G\) be the tree created by GREEDY after removing all the inner branches. We give a formal statement as follows:
Theorem 5.8
Let V be the vertices in \(T_G\). Then, \(DS (\{o\} \times V)\) realized by \(T_G\) is a partial CDS with O(1) error. Moreover, for any point \(p \in \mathbb {Z}^2,\) there exists a vertex \(q \in V\) such that \(\Vert p-q\Vert _\infty \le 1\).
Proof
By Lemma 5.5, every path from any vertex in \(T_G\) to the origin is xy-monotone, and by Corollary 5.4, all the greedy digital paths can be extended to infinity. Hence, \(DS (\{o\} \times V)\) realized by \(T_G\) satisfies all five axioms. Lemma 5.7 gives the O(1) error.
By the definition of greedy digital paths, in each \(L_j\) there is at most one point between two consecutive greedy digital paths that lies on neither path. Hence, the distance from those points to the greedy paths is at most one. \(\square \)
6 Point Sets with Constant Discrepancy
In this section we construct a red point set R and a blue point set B such that the absolute value of their discrepancy is 1. Let \(m > 0\) be the difference between the number of blue and red points, as defined in Appendix A. Our construction has \(\varTheta (m^2)\) many points. Afterwards we also prove that a discrepancy of 1 cannot be achieved with \(o(m^2)\) many points. We first describe a specific configuration of points, called a staircase.
Definition 6.1
A staircase is a sequence of alternating blue and red points \((p_1, \ldots , p_n)\) in the unit square. It starts and ends with a blue point. Moreover, for every red point \(p_i\), the blue point \(p_{i-1}\) has smaller x-coordinate and the same y-coordinate, and the blue point \(p_{i+1}\) has the same x-coordinate and smaller y-coordinate.
Given a staircase, we can define a curve by connecting consecutive points on the staircase. Additionally we add a vertical segment at the beginning and a horizontal segment at the end, in order to connect the curve to the boundary of the unit square, see Fig. 11. We will also use the term “staircase” for this curve.
Observation 6.2
The transformation in Sect. 2 maps a CDR to blue and red points in the unit square, which can be decomposed into a set of staircases.
Proof
Below every red point there is a corresponding blue point. Moreover, by Lemma 2.3, in each row the set of blue and red points is alternating, i.e., to the left of every red point there is a corresponding blue point.\(\square \)
Assume that a set of blue and red points forms one staircase. Then the curve induced by the staircase splits the unit square into two parts. The set of points (x, y) to the bottom-left of the staircase satisfies \(B[x,y]-R[x,y] = 0\), whereas the set of points to the top-right of the staircase satisfies \(B[x,y]-R[x,y] = 1\) (see Fig. 11). When the set of blue and red points can be decomposed into many staircases, we can easily compute the value \(B[x,y]-R[x,y]\) by counting how many staircases lie to the bottom-left of the point (x, y).
Recall the definition of the discrepancy of R and B at a point (x, y): \(D_{R,B}(x,y) = m\,xy - (B[x,y]-R[x,y]).\)
The first term, \(m\,xy\), represents the expected difference between the numbers of blue and red points in the axis-aligned rectangle with corner points (0, 0) and (x, y). Every point (x, y) along the curve \(C_i := \{(x,y) \in [0,1]^2:x\cdot y = {i}/{m}\}\), where \(i \in \{0,\ldots ,m\}\), describes a rectangle \([0,x] \times [0,y]\) in which we expect i more blue points than red points. Figure 12 illustrates the curves \(C_i\) in black for \(m=7\).
The idea of our construction is to approximate the level curves \(C_{i-0.5}\) by staircases, for \(i \in \{1,\ldots ,m\}\). We will construct m staircases such that the staircase approximating \(C_{i-0.5}\) lies between \(C_{i-1}\) and \(C_{i}\). This guarantees that the discrepancy \(D^*_{R,B}\) is at most 1.
We describe how we construct the staircase which approximates \(C_{i-0.5}\). We start with a blue point at the intersection of the curve \(C_{i-1}\) and the line \(x = y\). This is the blue point \(\bigl (\sqrt{(i-1)/m},\, \sqrt{(i-1)/m}\bigr )\).
Starting from there we move horizontally to the right until we hit the curve \(C_i\) at the point \(\bigl ({i}/{\sqrt{m(i-1)}},\, \sqrt{(i-1)/m}\bigr )\).
We add a red point here. Then we move vertically down until we hit \(C_{i-1}\) and put a blue point. We continue in this fashion, i.e., from a blue point on \(C_{i-1}\) we move horizontally to the right and put a red point on \(C_i\). From a red point on \(C_i\) we move vertically down and put a blue point on \(C_{i-1}\). The blue points will have the coordinates \(\Bigl (\bigl (\tfrac{i}{i-1}\bigr )^{k}\sqrt{\tfrac{i-1}{m}},\ \bigl (\tfrac{i-1}{i}\bigr )^{k}\sqrt{\tfrac{i-1}{m}}\Bigr )\)
and the red points have the coordinates \(\Bigl (\bigl (\tfrac{i}{i-1}\bigr )^{k}\tfrac{i}{\sqrt{m(i-1)}},\ \bigl (\tfrac{i-1}{i}\bigr )^{k}\sqrt{\tfrac{i-1}{m}}\Bigr ),\)
where \(k \in \{0,1,2,\ldots \}\). We stop this construction when we leave the unit square, i.e., we look for the largest k such that the blue point (3) is still contained in \([0,1]^2\). The maximum value for k is \(k^*(i) = \Bigl \lfloor \frac{\ln {(m/(i-1))}}{2\ln {(i/(i-1))}} \Bigr \rfloor \)
for \(i \ge 2\), and \(k=0\) for \(i=1\). So far we have described how to construct the staircases on the side \(y \le x\). We add red and blue points on the side \(y > x\) to make the construction symmetric about the line \(y = x\). Figure 12 illustrates our construction, which we call the symmetric greedy staircase construction.
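The walk described above is easy to simulate. The following sketch (our own reading of the construction; it replaces the closed-form coordinates by an iterative walk, and the function name is hypothetical) generates the one-sided staircase for a level \(i \ge 2\):

```python
# Sketch: alternating blue/red points between level curves x*y=(i-1)/m and x*y=i/m.
import math

def staircase(i, m):
    """One-sided (y <= x) greedy staircase for level i >= 2: start on the diagonal,
    step right to C_i (red point), step down to C_{i-1} (blue point), repeat
    while staying inside the unit square."""
    lo, hi = (i - 1) / m, i / m
    x = y = math.sqrt(lo)                    # first blue point, on C_{i-1} and x = y
    blue, red = [(x, y)], []
    while True:
        x = hi / y                           # move right until x * y = i/m
        if x > 1:
            break                            # would leave the unit square
        red.append((x, y))                   # red point on C_i
        y = lo / x                           # move down until x * y = (i-1)/m
        blue.append((x, y))                  # blue point on C_{i-1}
    return blue, red
```

By construction every red point lies on \(C_i\), every blue point on \(C_{i-1}\), and there is one more blue point than red points, matching Observation 6.3 for one side.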
Observation 6.3
The points of the symmetric greedy staircase construction with m stairs are
where \([1,m] = \{1,2,\ldots ,m\}\) and
There are \(2 k^*(i)+1\) [resp. \(2k^*(i)\)] many blue (resp. red) points in the ith staircase.
Theorem 6.4
The symmetric greedy staircase construction has discrepancy 1.
Proof
Consider any point (x, y) between the ith and \((i+1)\)st staircase. It holds that \(B[x,y]-R[x,y] = i\). Moreover, both staircases are bounded from below by the level curve \(C_{i-1}\) and from above by \(C_{i+1}\), which means that \(({i-1})/{m} \le x \cdot y \le ({i+1})/{m}\). Summarizing, we can bound the discrepancy \(|D_{R,B}(x,y)| = |m\,xy - (B[x,y]-R[x,y])| = |m\,xy - i| \le 1.\)
\(\square \)
Theorem 6.5
The symmetric greedy staircase construction with m staircases has \(O(m^2)\) many points.
Proof
The number of blue points, which are used in our construction, is
We now bound the denominator from below by
Putting the inequalities together, we get
The continuous function \(f(i) = i \log {({m}/{i})}\) has exactly one maximum in the interval [2, m] with a value bounded by \(m \log m\) and is monotone on both sides of it. Therefore we can replace the sum by an integral.
\(\square \)
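To see the quadratic growth concretely, the following self-contained sketch (ours; it re-derives the one-sided staircase walk from the construction above, with hypothetical names and deliberately loose constants) counts all staircase points for a given m:

```python
# Sketch: total number of points over all one-sided greedy staircases grows as m^2.
import math

def staircase_size(i, m):
    """Number of points (blue + red) of the one-sided greedy staircase for level i >= 2."""
    lo, hi = (i - 1) / m, i / m
    y = math.sqrt(lo)
    blue, red = 1, 0
    while hi / y <= 1:                       # step right to C_i, then down to C_{i-1}
        x = hi / y
        red += 1
        y = lo / x
        blue += 1
    return blue + red

def total_points(m):
    return sum(staircase_size(i, m) for i in range(2, m + 1))

m = 60
total = total_points(m)
assert m * m / 20 <= total <= m * m          # quadratic growth, loose constants
```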
Note that point sets R, B with constant discrepancy and \(O(m^2)\) many points can also be constructed by applying the transformation of Sect. 2 on the weak CDR, constructed in Sect. 5. We now show that our construction is tight.
Theorem 6.6
Let B and R be point sets that can be decomposed into m non-intersecting staircases and whose discrepancy is bounded by a constant \(\xi \). Then \(|B| = \varOmega (m^2)\).
Proof
The ith staircase is bounded from below by the level curve \(C_{i-\xi }\) and from above by \(C_{i+\xi -1}\) because of the discrepancy constraint. We count how many points are necessary to create the ith stair. The minimum number can be realized by constructing a stair in a greedy manner between \(C_{i-\xi }\) and \(C_{i+\xi -1}\), because both curves are convex.
where
The number of blue points can therefore be bounded by
Using the inequality
and comparing the sum with an integral, as done in the proof of Theorem 6.5,
we can conclude \(|B| = \varOmega (m^2)\). \(\square \)
Theorem 1.8
Let B and R be two sets of points whose discrepancy satisfies \(D_{R,B}^*<1\). Then \(|B| = \varOmega (m^2),\) where \(m = |B|-|R|\).
Proof
Consider the sets \(S_i := \{(x,y) \in [0,1]^2:B[x,y]R[x,y] = i\}\) for \(i \in \{0,1,\ldots ,m\}\). Because the discrepancy of the point set B and R is less than 1 we can conclude that

(1) the curves \(C_i\) are contained in \(S_i\), and

(2) the points between \(C_i\) and \(C_{i+1}\) are either contained in \(S_i\) or \(S_{i+1}\).
Therefore, for each \(i \in \{0,1,\ldots ,m-1\}\) there exists a curve between \(C_i\) and \(C_{i+1}\) which neighbors only \(S_i\) to its bottom-left and \(S_{i+1}\) to its top-right. This curve is a staircase. Hence there exists a staircase between \(C_i\) and \(C_{i+1}\) for each \(i \in \{0,1,\ldots ,m-1\}\). These staircases are non-intersecting because \(D_{R,B}^*<1\). Therefore they consist of \(\varOmega (m^2)\) many points in total, as shown in Theorem 6.6. \(\square \)
It is open whether Theorem 1.8 still holds if the upper bound \(D_{R,B}^*<1\) is replaced by O(1).
Discussion. As mentioned in the introduction, this result could be a first step towards showing that the GREEDY construction is tight (in the sense that the number of inner leaves cannot be drastically decreased without increasing the error): we want to create a weak CDR with O(1) error and few inner leaves. Theorem 1.3 states that \(\varOmega (n\log n)\) many inner leaves are necessary. Theorem 1.8 strengthens this by saying that \(\varOmega (n^2)\) many inner leaves are necessary. However, this claim only holds if we use a CDR whose mapping decomposes into staircases that do not cross.
This non-crossing requirement is key and follows from the fact that we require \(D_{R,B}^*<1\). In particular, it is unlikely that our proof extends to any larger constant \(D_{R,B}^*=O(1)\). Doing so would most likely bring an improvement on the bichromatic discrepancy bound (and thus on the non-chromatic one as well). However, we could go a different route to improve the result: in general, weak CDRs could map to points whose staircases intersect (and then Theorem 1.8 does not apply). Intersecting staircases can again be decomposed into non-intersecting (only touching) staircases, where there need not be a blue or red point at every turn. Thus, rather than looking at higher discrepancy, it would make sense to study whether we can extend the quadratic lower bound to colored point sets that form such touching staircases.
7 Final Remarks
Common intuition would say that the \(\varOmega (\log N)\) lower bound for the error of two-dimensional CDRs and CDSs automatically extends to higher dimensions. The observation that this is not true opens up new directions in which research can continue. At the time of writing, we know several ways to construct CDRs with \(O(\log N)\) Hausdorff error in any dimension, but only know an \(\varOmega (\log ^{1/(d-1)}\!N)\) lower bound. If we had to guess, the upper bound seems closer to the truth, but this is a hunch based on the fact that our analysis is a bit loose. Regardless of which bound is tight, further study of the relationship between the three settings (CDRs in high dimensions, 2D weak CDRs, and two-colored point sets) could lead to tighter lower bounds or possibly to better CDR constructions [or even better, a CDS construction with o(N) error in arbitrary dimensions].
We also find that weak CDRs are an interesting research topic on their own. In particular, we would like to find the relationship between the number of inner leaves and the error of the construction. That is, say that we want a weak CDR with O(e) error (for some \(e\le \log N\)). What is the minimum number of inner leaves \(\ell =\ell (e)\) that such a CDR must have? Can we find such a construction? Theorem 1.3 seems to indicate a linear relationship between the two, and it is not hard to obtain one (see, for example, Theorem 1.7 and Fig. 10). However, this construction is most likely not the best possible one. Indeed, even if we are interested in \(O(\log N)\) error, this construction creates a large number of inner leaves, whereas we know of CDRs with the same error and no inner leaves. Thus, the question becomes: can we significantly improve upon the greedy construction in Sect. 5? Or is there some exponential dependency between the number of inner leaves and the error of the weak CDR?
Notes
The 2d-neighbor topology is the natural one that connects each point to its predecessor and successor in each dimension. Formally speaking, two points are connected if and only if their \(\Vert \,{ \cdot }\,\Vert _{1}\) distance is exactly one.
We choose this norm because it simplifies calculations, but we note that the Hausdorff distance traditionally uses the Euclidean norm instead. Since we are interested in asymptotic behavior, the exact choice of metric is irrelevant (as long as the metrics are equivalent for any fixed dimension d, i.e., \(\Vert \,{\cdot }\,\Vert _{\infty } \le \Vert \,{\cdot }\,\Vert _{2} \le \sqrt{d}\,\Vert \,{\cdot }\,\Vert _{\infty }\)).
The lower bound was published by Luby, but credited to Håstad (see [14, Thm. 19]).
Technically speaking the intersection could contain more than one split vertex, but only one will lie in the \(x,y\ge 0\) quadrant.
References
Asano, T., Chen, D.Z., Katoh, N., Tokuyama, T.: Efficient algorithms for optimizationbased image segmentation. Int. J. Comput. Geom. Appl. 11(2), 145–166 (2001)
Beck, J.: Balanced twocolorings of finite sets in the square. I. Combinatorica 1(4), 327–335 (1981)
Bodwin, G.: On the structure of unique shortest paths in graphs. In: 30th Annual ACM–SIAM Symposium on Discrete Algorithms (San Diego 2019), pp. 2071–2089. SIAM, Philadelphia (2019)
Chiu, M.K., Korman, M.: High dimensional consistent digital segments. SIAM J. Discrete Math. 32(4), 2566–2590 (2018)
Chowdhury, I., Gibson, M.: A characterization of consistent digital line segments in \({\mathbb{Z}}^2\). In: 23rd Annual European Symposium on Algorithms (Patras 2015). Lecture Notes in Computer Science, vol. 9294, pp. 337–348. Springer, Heidelberg (2015)
Chowdhury, I., Gibson, M.: Constructing consistent digital line segments. In: 12th Latin American Symposium on Theoretical Informatics (Ensenada 2016). Lecture Notes in Computer Science, vol. 9644, pp. 263–274. Springer, Berlin (2016)
Christ, T., Pálvölgyi, D., Stojaković, M.: Consistent digital line segments. Discrete Comput. Geom. 47(4), 691–710 (2012)
Chun, J., Kaothanthong, N., Kasai, R., Korman, M., Nöllenburg, M., Tokuyama, T.: Algorithms for computing the maximum weight region decomposable into elementary shapes. Comput. Vis. Image Underst. 116(7), 803–814 (2012)
Chun, J., Korman, M., Nöllenburg, M., Tokuyama, T.: Consistent digital rays. Discrete Comput. Geom. 42(3), 359–378 (2009)
Cizma, D., Linial, N.: Geodesic geometry on graphs. Discrete Comput. Geom. (2022). https://link.springer.com/article/10.1007/s0045402100345w
Dobkin, D.P., Gunopulos, D., Maass, W.: Computing the maximum bichromatic discrepancy, with applications to computer graphics and machine learning. J. Comput. Syst. Sci. 52(3), 453–470 (1996)
Goodman, J.E., Pollack, R., Sturmfels, B.: Coordinate representation of order types requires exponential storage. In: 21st Annual ACM Symposium on Theory of Computing (Seattle 1989), pp. 405–410. ACM, New York (1989)
Klette, R., Rosenfeld, A.: Digital straightness—a review. Discrete Appl. Math. 139(1–3), 197–230 (2004)
Luby, M.G.: Grid geometries which preserve properties of Euclidean geometry: a study of graphics line drawing algorithms. In: Theoretical Foundations of Computer Graphics and CAD (Il Ciocco 1987). NATO Advanced Science Institutes Series, vol. F40, pp. 397–432. Springer, Berlin (1987)
Matoušek, J.: Geometric Discrepancy. Algorithms and Combinatorics, vol. 18. Springer, Berlin (1999)
Schmidt, W.M.: Irregularities of distribution, VII. Acta Arith. 21, 45–50 (1972)
Acknowledgements
The authors would like to thank Matthew Gibson, Evanthia Papadopoulou, André van Renssen, and Marcel Roeloffzen for their helpful discussions during the creation of this paper. The authors would also like to thank the anonymous reviewers for the many comments that helped improve the paper. We would especially like to thank a reviewer of SODA who showed us how to improve the lower bound from \(\varOmega (\log ^{1/d}\!N)\) to \(\varOmega (\log ^{1/(d1)}\!N)\). M.K. Chiu: partially supported by ERC StG 757609. T. Tokuyama: supported by MEXT Kakenhi 17K19954 and 18H05291.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Author information
Authors and Affiliations
Corresponding author
Additional information
Editor in Charge Kenneth Clarkson
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix A: Bichromatic Discrepancy
Let R and B be sets of red and blue points in the unit square, respectively. Let \(r=|R|\) and \(b=|B|\), and further assume that \(b>r\). For any set P of points in the unit square and \(x,y \in [0,1]\), let P[x, y] be the number of points in \(P \cap ([0,x] \times [0,y])\). For any two sets R and B and real numbers \(x,y\le 1\), we define the discrepancy of R and B at (x, y) as \(D_{R,B}(x,y) = (b-r)\,xy - (B[x,y]-R[x,y]).\)
The discrepancy of R and B is defined as \(D^*_{R,B}=\max _{(x,y)\in [0,1]^2} |D_{R,B}(x,y)|\) (i.e., the highest discrepancy we can achieve among all possible rectangles).
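For intuition, the discrepancy of a small point set can be estimated by brute force. The sketch below (our own; it assumes the definition \(D_{R,B}(x,y) = (|B|-|R|)\,xy - (B[x,y]-R[x,y])\) stated above and samples corners on a finite grid rather than taking a true supremum, and the function name is hypothetical) is one way to do it:

```python
# Sketch: brute-force estimate of D*_{R,B} by sampling corner points on a grid.
def discrepancy(R, B, grid=64):
    """Max of |(|B|-|R|) * x * y - (B[x,y] - R[x,y])| over a (grid+1)^2 sample."""
    m = len(B) - len(R)

    def count(P, x, y):
        # P[x, y]: number of points of P inside the rectangle [0,x] x [0,y]
        return sum(1 for (px, py) in P if px <= x and py <= y)

    best = 0.0
    for i in range(grid + 1):
        for j in range(grid + 1):
            x, y = i / grid, j / grid
            best = max(best, abs(m * x * y - (count(B, x, y) - count(R, x, y))))
    return best
```

For example, a single blue point at (0.5, 0.5) with no red points has discrepancy 0.75, attained at the rectangle \([0,0.5]\times [0,0.5]\).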
Theorem 1.2
(bichromatic discrepancy) For any sets R and B of points such that \(|B|>|R|,\) there exists a constant \(c > 0\) such that \(D^*_{R,B} \ge c\,\frac{b-r}{b+r}\,\log {(b+r)}.\)
Note that if we set \(R=\emptyset \) we get the classic two-dimensional discrepancy result, for which there are several proofs (see [15] for a detailed survey). In order to extend the bound to the case \(R\ne \emptyset \), we make minor changes to Schmidt's proof [16]. We start by using an auxiliary function G (defined below) and combining it with the trivial inequality
to obtain
Note that for simplicity of notation we removed the integration limits. Our definition of G is identical to the one used by Schmidt: let \({m}=\lceil { \log _2 (b+r) }\rceil +1\) and observe that, by definition of \({m}\), we have \(2(b+r) \le 2^{m} \le 4(b+r)\). Only in this section do we use the variable name "m" in a different sense, in order to keep the notation similar to [15]. For any \(j\in \{0, \ldots , {m}\}\) we define the function \(f_j:[0,1]^2 \rightarrow \{-1,0,1\}\) as follows: subdivide the unit square with \(2^j\) equally spaced vertical lines and \(2^{{m}-j}\) horizontal lines.
For any value of j, this subdivides the unit square into rectangles of area \(2^{-{m}}\) (larger values of j result in thinner but taller rectangles). Let A be a rectangle of the subdivision associated to \(f_j\). We define \(f_j\) within the rectangle to be 0 if A contains any point of \(R \cup B\). If A contains neither red nor blue points, we further subdivide it into four congruent quadrants. The value of \(f_j\) is equal to 1 in the upper-right and lower-left quadrants, and \(-1\) in the upper-left and lower-right quadrants (see a visual representation of \(f_j\) in [15, p. 173]).
Then, we define G as \(G=(1+cf_0) (1+cf_1)\cdots (1+cf_{m})-1\), where \(c \in (0,1)\) is a small constant (whose value will be chosen afterwards). Note that G can also be expressed as \(G=G_1 + \cdots +G_{m}\), where \(G_k=c^k\sum _{0 \le j_1< \cdots < j_k \le {m}} f_{j_1}\cdots f_{j_k}.\)
Schmidt showed that \(\int |G| \le 2\) (regardless of the value of \({m}\)). Thus, we now focus on giving a lower bound for \(\int D_{R,B}G\).
Lemma A.1
There exists a constant \(c_1\) such that \(\int D_{R,B}\,G_1 \ge c_1 c\,\frac{b-r}{b+r}\,\log {(b+r)}.\)
Proof
By the definition of \(G_1\) we have \(\int D_{R,B}G_1 = c\sum _{j=0}^{m} \int D_{R,B}f_j\). Thus, it suffices to show that for any value of j it holds that \(\int D_{R,B}f_j \ge c' ({b-r})/({b+r})\) (for some other constant \(c'>0\)).
Recall that, when defining \(f_j\), we subdivided the unit square into at least \(2(b+r)\) rectangles. For the rectangles that contain at least one point of \(R \cup B\), \(f_j\) is set to zero, and thus they do not contribute to the integral. Since we have \(b+r\) many points, there must exist at least \(b+r\) rectangles that do not contain any point of R or B. Let A be any such rectangle, and let \(A_{SW },A_{NW },A_{SE },A_{NE }\) be the four subquadrants of A (where the subindex refers to the cardinal position of the quadrant). Recall that \(f_j\) is equal to 1 for any point of \(A_{SW } \cup A_{NE }\) and \(-1\) for points of \(A_{SE }\cup A_{NW }\).
Let \(\mathsf {w}\) and \(\mathsf {h}\) be the vectors defined by the horizontal and vertical sides of \(A_{SW }\), respectively. Observe that their lengths are \(2^{-j-1}\) and \(2^{j-{m}-1}\), respectively. Then, we have
If we apply the definition of \(D_{R,B}\) [cf. (4)] to the four terms inside the integral, we get
Observe that we are integrating twice positively and twice negatively over almost identical functions. In fact, the terms of the first integral all cancel out except along the rectangle \([x,x+\mathsf {w})\times [y,y+\mathsf {h})\). Similarly, when we look at the second and third terms, the contribution of any point in \(R\cup B\) is cancelled out unless it is in the rectangle \([x,x+\mathsf {w})\times [y,y+\mathsf {h})\). However, by definition of A there are no such points. Thus, we obtain
That is, when we integrate \(f_j D_{R,B}\) over a rectangle A containing no point of \(R\cup B\), the result is \((b-r)2^{-2 {m}-4}\). We know that there are at least \(b+r\) rectangles not containing points of \(R\cup B\), thus their contribution is at least \((b+r)(b-r)\,2^{-2{m}-4} \ge \frac{b-r}{256\,(b+r)},\) where the inequality follows from \(2^{m} \le 4(b+r)\).
\(\square \)
Lemma A.2
There exists a constant \(c_{2}\) such that \(\Bigl |\int D_{R,B}(G_2+\cdots +G_{m})\Bigr | \le c_{2} c^2\,\frac{b-r}{b+r}\,\log {(b+r)}.\)
Proof
Recall that \(G_k=c^k\sum _{0 \le j_1< j_2< \ldots < j_k \le {m}} f_{j_1}\cdots f_{j_k}\). Fix any valid set of indices and consider the value of \(\int f_{j_1}\cdots f_{j_k} D_{R,B}\).
As shown in [15], the function \(f_{j_1}\cdots f_{j_k}\) is largely determined by \(f_{j_1}\) and \(f_{j_k}\). Indeed, if we overlay the rectangular partitions defined by the functions \(f_{j_1},\ldots , f_{j_k}\) we obtain a grid of rectangles of width \(2^{-j_k}\) and height \(2^{j_1-{m}}\). In each of these rectangles, the function is either zero (if any of the rectangles associated to the \(f_{j_i}\) functions contains a point of \(R\cup B\)), or is further subdivided into four equal-sized quadrants in which it alternates between \(+1\) and \(-1\).
Let A be one of the rectangles of the refined grid. As in Lemma A.1, we have that \(\int _A f_{j_1}\cdots f_{j_k}\, D_{R,B} = \tau \,(b-r)\,2^{-2({m}+g)-4},\)
where \(\tau \in \{-1,1\}\). This extra term appears because the product of the different functions involved can change the sign in each of the four quadrants. In any case, we have \(|\int _A f_{j_1}\cdots f_{j_k} D_{R,B}| \le (b-r)\, 2^{-2({m}+g)-4}\), where \(g=j_k-j_1\).
By the way the grid is constructed, there are \(2^{{m}-j_1}\times 2^{j_k}=2^{{m}+g}\) many rectangles, and thus we conclude that \(|\int f_{j_1}\cdots f_{j_k} D_{R,B}| \le (b-r)\, 2^{-{m}-g-4}\). In order to obtain a bound on \(\int D_{R,B}G_k\) we sum over all possible indices.
Note that in the sum, the indices \(j_2,\ldots , j_{k-1}\) do not matter. Thus, we group the terms by the gap between the indices \(j_1\) and \(j_k\) (say, if \(j_1=3\) and \(j_k=7\) the gap is 4). Note that the gap is at least \(k-1\) (since otherwise we do not have enough space to choose the \(k-2\) indices in between) and at most \({m}\). Once we have fixed a gap of g there are \({m}-g\) options for the index \(j_1\).
In order to bound the sum over all \(G_k\) from above, we first reorder the summation.
The sum contains the first terms of the geometric series \(\sum _{g=1}^\infty (({1+c})/{2})^{g-1} \le {2}/({1-c})\) (for any \(c<1\)). In particular, if we set \(c\le 1/2\) we can bound the partial sum by 4 from above. Recall that \({m} = \varTheta (\log (b+r))\) and \(2^{m} = \varTheta (b+r)\). Thus, the lemma is proven. \(\square \)
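The geometric-sum bound used here is easy to verify numerically (our own check, with a hypothetical function name):

```python
# Check: sum_{g>=1} ((1+c)/2)^(g-1) = 2/(1-c) for 0 < c < 1, so any partial sum
# is bounded by 2/(1-c); for c <= 1/2 this is at most 4.
def partial_geometric(c, terms=200):
    return sum(((1 + c) / 2) ** (g - 1) for g in range(1, terms + 1))

for c in (0.1, 0.25, 0.5):
    assert partial_geometric(c) <= 2 / (1 - c) + 1e-9
```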
Corollary A.3
There exists a constant \(\kappa >0\) such that \(\int D_{R,B}\,G \ge \kappa \,\frac{b-r}{b+r}\,\log {(b+r)}.\)
Proof
Apply the inequality \(\int (A+B) \ge \int A - |\int B|\) together with Lemmas A.1 and A.2 to obtain:
Note that Lemmas A.1 and A.2 hold for any value of c such that \(c\in (0, 1/2]\). By choosing a sufficiently small value of c [say, \(c=\min {\{{1}/{2}, {c_1}/({2c_{2}})\}}\)] we obtain
\(\square \)
This completes the proof of Theorem 1.2.
When \(R = \emptyset \), one would expect that the blue points must be distributed uniformly in the unit square to achieve low discrepancy. Indeed, the same holds for the red points. The following theorem shows that even when there are many red points, the discrepancy cannot be reduced if the red points are concentrated in the lower half of the unit square. For simplicity, we only show a special case of how the discrepancy depends on the points in \([0,1] \times [1/2, 1]\), which suffices for our purpose in Sect. 4. The same argument applies in more general settings.
Theorem A.4
Let R and B be two sets of points in the unit square such that \(|B|>|R|\). Let \(r_2\) and \(b_2\) be the number of red and blue points in \([0,1] \times [1/2, 1]\), respectively. If \(b_2 > r_2\), there exists a constant \(c > 0\) such that
Proof
Let \(R_2\) and \(B_2\) be the sets of red and blue points in \([0,1] \times [1/2, 1]\), respectively. We denote the sizes of R, B, \(R_2\), and \(B_2\) by r, b, \(r_2\), and \(b_2\), respectively. Consider the upper half of the unit square \([0,1] \times [1/2, 1]\) and rescale its vertical length to 1. By Theorem 1.2, there exists a point (x, 2y) such that
for some constant \(c'>0\).
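The rescaling step can be illustrated concretely. The sketch below (the query corner (x, y) and the sample size are arbitrary test choices, not from the paper) maps points of the upper half \([0,1]\times[1/2,1]\) to the unit square via \((p_x, p_y) \mapsto (p_x, 2(p_y - 1/2))\), and checks that the corner rectangle \([0,x]\times[0,2y]\) in the rescaled square corresponds exactly to \([0,x]\times[1/2,\,1/2+y]\) in the original square, which is what lets the theorem be applied to the rescaled point set.

```python
import random

random.seed(0)
# Sample points in the upper half [0,1] x [1/2, 1] of the unit square.
pts = [(random.random(), 0.5 + 0.5 * random.random()) for _ in range(200)]

def rescale(p):
    # Map the upper half to the full unit square.
    return (p[0], 2 * (p[1] - 0.5))

x, y = 0.7, 0.2  # arbitrary query corner, with y in [0, 1/2]

# A point lies in [0,x] x [1/2, 1/2+y] originally iff its rescaled copy
# lies in the corner rectangle [0,x] x [0, 2y].
in_original = {p for p in pts if p[0] <= x and p[1] <= 0.5 + y}
in_rescaled = {p for p in pts if rescale(p)[0] <= x and rescale(p)[1] <= 2 * y}
assert in_original == in_rescaled
print("corner rectangles correspond under the rescaling")
```

So a discrepancy witness (x, 2y) found in the rescaled square translates directly back to the point \((x, 1/2+y)\) used in the rest of the proof.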
Then, we map the point (x, 2y) back to the point \((x,1/2+y)\) in the original unit square. We will show that either \(D_{R,B}(x,1/2+y)\) or \(D_{R,B}(x,1/2-\epsilon )\) gives the desired lower bound, where \(\epsilon \) is an arbitrarily small constant such that the rectangle \([0,1] \times [0, 1/2-\epsilon ]\) contains only \(B \setminus B_2\) and \(R \setminus R_2\). If
we are done. If
we are also done, because
Now suppose that neither of the above two cases holds. Then we have
Let \(R_1 = R \setminus R_2\) and \(B_1 = B \setminus B_2\), which lie inside the rectangle \([0,1] \times [0, 1/2-\epsilon ]\). Consider
The first inequality is given by
Since
we can conclude that
\(\square \)
Chiu, M.-K., Korman, M., Suderland, M., et al.: Distance Bounds for High Dimensional Consistent Digital Rays and 2D Partially-Consistent Digital Rays. Discrete Comput. Geom. 68, 902–944 (2022). https://doi.org/10.1007/s00454-021-00349-6