An Approximation Algorithm for Computing Shortest Paths in Weighted 3d Domains
Abstract
We present an approximation algorithm for computing shortest paths in weighted three-dimensional domains. Given a polyhedral domain \(\mathcal D \), consisting of \(n\) tetrahedra with positive weights, and a real number \(\varepsilon \in (0,1)\), our algorithm constructs paths in \(\mathcal D \) from a fixed source vertex to all vertices of \(\mathcal D \), the costs of which are at most \(1+\varepsilon \) times the costs of (weighted) shortest paths, in \(O(\mathcal{C }(\mathcal D )\frac{n}{\varepsilon ^{2.5}}\log \frac{n}{\varepsilon }\log ^3\frac{1}{\varepsilon })\) time, where \(\mathcal{C }(\mathcal D )\) is a geometric parameter related to the aspect ratios of tetrahedra. The efficiency of the proposed algorithm is based on an in-depth study of the local behavior of geodesic paths and additive Voronoi diagrams in weighted three-dimensional domains, which are of independent interest. The paper extends the results of Aleksandrov et al. (J ACM 52(1):25–53, 2005) to three dimensions.
Keywords
Shortest path problems · Weighted paths · Weighted 3d domains · Approximation algorithms · Voronoi diagrams
1 Introduction
1.1 Motivation
The computation of shortest paths is a key problem arising in a number of diverse application areas including geographic information systems, robotics, computer graphics, computer-aided design, medical computing and others. This has motivated the study and subsequent design of efficient algorithms for solving shortest path problems in different settings based on the geometric nature of the problem domain [e.g., two-dimensional (2d), three-dimensional (3d), surfaces, the presence/absence of obstacles] and the cost function/metric (e.g., Euclidean, \(L_p\), link distance, weighted/unweighted, multicriteria). In addition to its driver—the applications—the field has provided, and still provides, exciting challenges from a theoretical perspective. As a result, shortest path problems have become fundamental problems in areas of Computer Science such as Computational Geometry and Algorithmic Graph Theory.
The standard 3d Euclidean shortest path problem of computing a shortest path between a pair of points avoiding a set of polyhedral obstacles, denoted as the ESP3D problem, is known to be NP-hard even when the obstacles are parallel triangles in space. It is not difficult to see that the number of combinatorially distinct shortest paths from a source point to a destination point may be exponential in the input size. Canny and Reif [8] used this to establish the NP-hardness of the ESP3D problem, for any \(L_p\) metric, \(p\ge 1\). In addition to this combinatorial hardness result, Bajaj [7] provided an algebraic hardness argument that an exponential number of bits may be required. More recently, Mitchell and Sharir [27] gave NP-completeness proofs for the problem of computing Euclidean shortest paths among sets of stacked axis-aligned rectangles, and computing \(L_1\)-shortest paths among disjoint balls. Given the NP-hardness of the ESP3D problem, work has focused on exploiting the geometric structure of the obstacles and/or on providing approximation algorithms. We will mention some of these approaches in Sect. 1.4.

In Geology, seismic refraction and reflection methods are employed based on measurements of the travel time of seismic waves refracted at the interfaces between subsurface layers of different densities. Some wave types propagate along shortest paths and weighted shortest path algorithms may be used to produce more accurate and more efficient estimation of subsurface layer characteristics, e.g., the amount of oil contained in the subsurface [16]. Another related application is the assessment of garbage dumps’ health. When a new garbage dump is built, sensors are placed at the bottom, and when the garbage dump starts to fill, waves from the top passing through the garbage to these sensors are used in order to determine the decomposition rate of the garbage [16].

Computation of 3d shortest paths has also been used to compute fastest routes for aircraft between designated origin and destination points, while avoiding hazardous, time-varying weather systems. Krozel et al. [21] investigate synthesizing weather avoidance routes in the transition airspace. Our weighted 3d region model can be used to generalize that approach: instead of totally avoiding undesirable regions, one can assign penalty weights to them and then search for routes that minimize travel through such regions, while also avoiding unnecessarily long detours.

In medical applications, simulation of sonic wavefront propagation is used when performing imaging methods such as photoacoustic tomography or ultrasound imaging through heterogeneous tissue [12, 33]. In radiation therapy, domain features include densities of tissue, bone, organs, cavities, or risk of radiation exposure, and optimal radiation treatment planning takes this non-homogeneity into consideration.

The problem of time-optimum movement planning in 2d and 3d for a point robot that has bounded control velocity through a set of \(n\) polygonal regions of given translational flow velocities has been studied by Reif and Sun [30]. They state that this intriguing geometric problem has immediate applications to macro-scale motion planning for ships, submarines, and airplanes in the presence of significant flows of water or air. It is also a central motion planning problem for many of the meso-scale and micro-scale robots that operate in environments with significant flows affecting their movement. They establish the computational hardness of the 3d version of this problem by showing it to be PSPACE-hard. They give a decision algorithm for the 2d flow path problem, which has very high computational complexity, and they also design an efficient approximation algorithm with bounded error. The determination of the exact computational complexity of the 3d flow path problem is posed as an open problem. Although our weighted 3d model does not apply directly to this setting, it may be useful for constructing initial approximations by assigning appropriate weights depending on the velocity and direction of the flows in different regions. In addition, the discretization scheme and the algorithmic techniques developed here may also be useful for solving the 3d flow path problem.
1.2 Problem Formulation
In this paper, we consider the following problem. Let \(\mathcal D \) be a connected 3d domain consisting of \(n\) tetrahedra with a positive real weight associated to each of them. The 3d weighted shortest path problem (WSP3D) is to compute minimum cost paths in \(\mathcal D \) from a fixed source vertex to all vertices of \(\mathcal D \). The cost of a path in \(\mathcal D \) is defined as the weighted sum of the Euclidean lengths of the subpaths within each crossed tetrahedron. We will describe and analyze an approximation algorithm for this problem that, for any real number \(\varepsilon \in (0,1)\), computes paths the costs of which are at most \(1+\varepsilon \) times greater than the costs of the minimum cost paths. In Sect. 2, we describe our model in detail.
Note that the WSP3D problem can be viewed as a generalization of the ESP3D problem. Namely, given an instance of the ESP3D problem, one can find a large enough cube containing all the obstacles, tetrahedralize the free space (i.e., the exterior of the obstacles that lies in the interior of the cube), and assign equal weights to the resulting tetrahedra, obtaining an instance of the WSP3D problem.
1.3 Challenges
A key difference between Euclidean shortest path computation in 2d and in the 3d weighted domain is the NP-hardness already mentioned. Underlying this is the fact that, unlike in 2d, the search space for a Euclidean 3d shortest path is not discrete. Specifically, in 2d, the edges of a shortest path (e.g., Euclidean shortest paths among obstacles in the plane) are edges of a graph, namely the visibility graph of the obstacles including the source and the destination points. In contrast, in polyhedral 3d domains, the bending points of shortest paths on obstacles may lie in the interior of the obstacles’ edges. Moreover, in weighted 3d settings, bending points may even belong to the interior of the faces.
Furthermore, even in the case of weighted triangulated planar domains, the (weighted) shortest path may be composed of \(\Theta (n^2)\) segments. Not only is the path complexity higher, but the computation of weighted shortest paths in 2d also turns out to be substantially more involved than in the Euclidean setting. In fact, no exact algorithm is even known, and the first \((1+\varepsilon )\)-approximation algorithm described in [26] had an \(O(n^8\log (\frac{n}{\varepsilon }))\) time bound, where \(n\) is the number of triangles in the subdivision. This problem has been actively researched since then, and currently the best known algorithm, in terms of \(n\) and \(\varepsilon \), for the weighted region problem on planar subdivisions (as well as on polyhedral surfaces) runs in \(O(\frac{n}{\sqrt{\varepsilon }}\log \frac{n}{\varepsilon }\log \frac{1}{\varepsilon })\) time [3]. The constants in the big-\(O\) bounds on the time complexity of all algorithms known to us depend in various ways on the geometry of the subdivision and/or on the weights. (See [3] for a detailed literature review and discussion of the planar case.)
One of the classical tools of computational geometry is the Voronoi diagram. This structure finds numerous applications (see e.g., [6]). It is also a key ingredient in several efficient shortest path algorithms. Researchers have studied these diagrams under several metrics (including Euclidean, Manhattan, weighted, additive, convex, abstract) and for different types of objects (including points, lines, curves, polygons), but somehow the computation of these diagrams in media with different densities (i.e., refractive media) has remained elusive. One of the main ingredients in solving the problem studied here is computing (partial) additive Voronoi diagrams of points in refractive media. The generic techniques of Klein [19], Klein et al. [20], and Lê [23] do not apply in this case, as the bisecting surfaces do not satisfy the required conditions. In this paper, we make an important step toward the understanding and computation of these diagrams.
1.4 Previous Related Work
By now, shortest path problems in 2d are fairly well understood. Efficient algorithms have been developed for many problem instances, and surveys are readily available describing the state of the art in the field. In 3d, virtually all the work has been devoted to the ESP3D problem. Papadimitriou [29] suggested the first polynomial time approximation scheme for that problem. It runs in \(O(\frac{n^4}{\varepsilon ^2}(L+\log (n/\varepsilon )))\) time, where \(L\) is the number of bits of precision in the model of computation. Clarkson [11] provided an algorithm running in \(O(n^2\lambda (n)\log (n/{\varepsilon })/{(\varepsilon ^4)}+n^2\log n\rho \log (n\log \rho ))\) time, where \(\rho \) is the ratio of the longest obstacle edge to the distance between the source and the target vertex, \(\lambda (n)={\alpha (n)}^{O(\alpha (n))^{O(1)}}\), and \(\alpha (n)\) is the inverse Ackermann function.
Papadimitriou’s algorithm was revised and its analysis was refined by Choi et al. [9] under the bit complexity framework. Their algorithm runs roughly in \(O(\frac{n^4L^2}{\varepsilon ^2}\mu (X))\) time, where \(\mu (X)\) represents the time (or bit) complexity of multiplying \(X\)-bit integers and \(X=O(\log (\frac{n}{\varepsilon })+L)\). In [10], the same authors further developed their ideas and proposed a precision-sensitive algorithm for the ESP3D problem. In [5], Asano et al. proposed and studied a technique for computing approximate solutions to optimization problems and obtained another precision-sensitive approximation algorithm for the ESP3D problem with improved running time in terms of \(L\).
Har-Peled [13] proposed an algorithm that invokes Clarkson’s algorithm as a subroutine \(O(\frac{n^2}{\varepsilon ^2}\log \frac{1}{\varepsilon })\) times to build a data structure for answering approximate shortest path queries from a fixed source in \(O(\log \frac{n}{\varepsilon })\) time. The data structure is constructed in roughly \(O(\frac{n^6}{\varepsilon ^4})\) time. Agarwal et al. [1] considered the ESP3D problem for the case of convex obstacles and proposed an approximation algorithm running in \(O(n+\frac{k^4}{\varepsilon ^7}\log ^2\frac{k}{\varepsilon }\log \log k)\) time, where \(k\) is the number of obstacles. In contrast to all other algorithms discussed here, the complexity of this algorithm does not depend on the geometric features of the obstacles. In the same paper, the authors describe a data structure for answering approximate shortest path queries from a fixed source in logarithmic time.
In the weighted (non-Euclidean) 3d case, no previous algorithms have been reported by other authors. In [2], we announced and sketched an approximation scheme for the WSP3D problem that runs in \(O(\frac{n}{\varepsilon ^{3.5}}\log \frac{1}{\varepsilon }(\frac{1}{\sqrt{\varepsilon }}+\log n))\) time (ignoring the geometric parameters). Furthermore, the runtime improves to \(O(\frac{n}{\varepsilon ^3}\log \frac{1}{\varepsilon }\log n)\) when all weights are equal. In this paper, we apply that approach, but develop the required details, apply new techniques, improve the complexity bounds, and provide a rigorous mathematical analysis.
1.5 Contributions of This Paper

We provide an approximation algorithm for solving the WSP3D problem in a polyhedral domain \(\mathcal D \) consisting of \(n\) weighted tetrahedra. The algorithm computes approximate weighted shortest paths from a source vertex to all other vertices of \(\mathcal D \) in \(O(\mathcal{C }(\mathcal D )\frac{n}{\varepsilon ^{2.5}}\log \frac{n}{\varepsilon }\log ^3\frac{1}{\varepsilon })\) time, where \(\varepsilon \in (0,1)\) is the user-specified approximation parameter and \(\mathcal{C }(\mathcal D )\) is a geometric parameter related to the aspect ratios of tetrahedra.^{1} The costs of the computed paths are within a (multiplicative) factor of \(1+\varepsilon \) of the cost of the corresponding shortest paths. As we stated above, the ESP3D problem, i.e., the unweighted version of this problem, is already NP-hard even when the obstacles are parallel triangles in space [8]. The time complexity of our algorithm, which is designed for the more general weighted setting, compares favorably to the existing approximation algorithms even when applied in the Euclidean setting.

Our detailed analysis, especially the results on additive Voronoi diagrams derived in Sect. 2, provides valuable insights into the understanding of Voronoi diagrams in heterogeneous media. This may open new avenues, for example, for designing an algorithm to compute discretized Voronoi diagrams in such settings.

Our approximation algorithms in 2d have proven to be easily implementable and be of practical value [22]. They have been applied in a variety of different settings such as multiresolution mesh morphing [24], interactive mesh fusion [17], topology matching [14], Cultural Heritage Toolbox [28], metamorphosis of arbitrary triangular meshes [18], and in the context of studying locally deepest regions of major sulci on the cortical surface [15]. Our algorithm for WSP3D presented here, in spite of being hard to analyze, essentially uses similar primitives, and thus also has the potential to be implementable, practical, and applicable in different areas.

Our work provides further evidence that discretization is a powerful tool when solving shortest pathrelated problems in both Euclidean and weighted settings. We conjecture that the discretization methodology used here generalizes to any fixed dimension. Furthermore, our discretization scheme is independent of the source vertex and can be used with no changes to approximate paths from other source vertices. This feature makes it applicable for solving the all pairs shortest paths problem and for designing data structures to answer shortest path queries in weighted 3d domains.

The complexity of our algorithm does not depend on the weights assigned to the tetrahedra composing \(\mathcal D \), but only on their geometry. We analyze and evaluate that dependence in detail. Geometric dependencies arise also in Euclidean settings and in most of the previous papers. For example, in Clarkson [11], the running time of the algorithm depends on the ratio of the longest edge to the distance between the source and the target vertex. Applying known techniques (see e.g., [1]), such dependency can often be removed. Here, this would be possible provided that an upper bound on the number of segments on weighted shortest paths in 3d were known. However, the increase in the dependency on \(n\) in the time complexity that these techniques suffer from, which is of order \(\Omega (n^2)\), appears not to justify such an approach here. Moreover, in this case, a dependency on the ratio of weights may be required, at the expense of removing the dependency on geometric features. In our approach, the dependency on the geometry is proportional to the average of quantities related to aspect ratios of the tetrahedra composing \(\mathcal D \). Thus, when \(n\) is large, many tetrahedra would have to be fairly degenerate for this average to play a major role. We therefore conjecture that in typical applications, the approach presented here would work well.
1.6 Organization of the Paper
In Sect. 2, we describe the model used throughout this paper, formulate the problem, present some properties of shortest paths in 3d, and derive a key result on additive Voronoi diagrams in refractive media. In Sect. 3, we describe our discretization scheme which is a generalization of the 2d scheme introduced in [3]. In Sect. 4, we construct a weighted graph, estimate the number of its nodes and edges, and prove that shortest paths in \(\mathcal D \) can be approximated by paths in the graph so constructed. In Sect. 5, we present our algorithm for the WSP3D problem. In Sect. 6, we conclude this paper.
2 Problem Formulation and Preliminaries
2.1 Model
Let \(\mathcal{D }\) be a connected polyhedral domain in 3d Euclidean space. We assume that \(\mathcal{D }\) is partitioned into \(n\) tetrahedra \(T_1, \ldots , T_n\), such that \(\mathcal{D }=\cup _{i=1}^n T_i\) and the intersection of any pair of different tetrahedra is either empty or a common element (a face, an edge, or a vertex) on their boundaries. We call these tetrahedra cells. A positive weight \(w_i\) is associated with each cell \(T_i\), representing the cost of traveling in it. The weight of a boundary element of a cell is the minimum of the weights of the cells incident to that boundary element. We consider paths (connected rectifiable curves) that stay in \(\mathcal{D }\). The cost of a path \(\pi \) in \(\mathcal{D }\) is defined by \(\Vert \pi \Vert =\sum _{i=1}^n w_i|\pi _i|\), where \(|\pi _i|\) denotes the Euclidean length of the intersection \(\pi _i=\pi \cap T_i\). Boundary edges and faces are assumed to be part of the cell from which they inherit their weight.
Given two distinct points \(u\) and \(v\) in \(\mathcal{D }\), we denote by \(\pi (u,v)\) any of the minimum cost paths between \(u\) and \(v\) that stays in \(\mathcal{D }\). We refer to the minimum cost paths as shortest paths. For a given approximation parameter \(\varepsilon >0\), a path \(\pi _\varepsilon =\pi _\varepsilon (u,v)\) is an \(\varepsilon \)approximation of the shortest path \(\pi =\pi (u,v)\), if \(\Vert \pi _\varepsilon \Vert \le (1+\varepsilon )\Vert \pi \Vert \). In this paper, we present an algorithm that, for a given source vertex \(u\) and an approximation parameter \(\varepsilon \in (0,1)\), computes \(\varepsilon \)approximate shortest paths from \(u\) to all other vertices of \(\mathcal D \).
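The cost model above is simple to evaluate for a concrete polygonal path. Below is a minimal Python sketch (ours, not part of the paper; the function names are illustrative) that computes the weighted cost of a path given its bending points and the weights of the crossed cells, and checks the \(\varepsilon \)-approximation condition:

```python
import math

def path_cost(points, weights):
    """Weighted cost of a polygonal path in 3d: points is the list of
    bending points, and weights[i] is the weight of the cell crossed by
    the segment (points[i], points[i+1])."""
    assert len(weights) == len(points) - 1
    return sum(w * math.dist(p, q)
               for w, p, q in zip(weights, points, points[1:]))

def is_eps_approximation(cost_approx, cost_opt, eps):
    """True if cost_approx is within a multiplicative factor (1 + eps)
    of the optimal cost, as in the definition of an eps-approximation."""
    return cost_approx <= (1 + eps) * cost_opt
```

For example, a path of two unit-length segments crossing cells of weights 2 and 3 has cost \(2+3=5\).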
A linear path is a sequence of segments, each of which is one of the following three types:
 (i)
Cell-crossing—a segment that crosses a cell, joining two points on its boundary;
 (ii)
Face-using—a segment lying along a face of a cell; or
 (iii)
Edge-using—a segment along an edge of a cell.
It is well known that if \(\pi \) is a shortest path, then it is a linear path, and the angles \((\varphi ^-, \theta ^-)\) and \((\varphi ^+, \theta ^+)\) are related by Snell’s law as follows:
Snell’s Law of Refraction
Let \(a\) be a bending point on a geodesic path \(\pi \) lying in the interior of a face \(f\) of \(\mathcal D \). Let the segments preceding and succeeding \(a\) in \(\pi \) be \(s^-\) and \(s^+\), respectively. Let the weights of the cells containing \(s^-\) and \(s^+\) be \(w^-\) and \(w^+\), respectively. Then \(s^+\) belongs to the plane containing \(s^-\) and a perpendicular to \(f\), and the angles \(\varphi ^-\) and \(\varphi ^+\) are related by \(w^-\sin \varphi ^- =w^+\sin \varphi ^+\).
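Since the weighted cost of a two-segment path is convex in the position of the bending point, the point dictated by Snell's law can be recovered numerically. The following sketch (our illustration, not the paper's algorithm) works in a planar cross-section with the face as the line \(z=0\); at the computed minimum, the relation \(w^-\sin \varphi ^- = w^+\sin \varphi ^+\) holds automatically:

```python
import math

def refraction_point(x0, z0, x1, z1, w_minus, w_plus, iters=100):
    """Bending point t on the interface z = 0 for the geodesic from
    (x0, -z0) in the lower halfspace (weight w_minus) to (x1, z1) in the
    upper halfspace (weight w_plus).  The weighted cost is convex in t,
    so bisection on its derivative finds the unique minimum."""
    def deriv(t):
        return (w_minus * (t - x0) / math.hypot(t - x0, z0)
                + w_plus * (t - x1) / math.hypot(t - x1, z1))
    # The bending point lies between the projections of the endpoints.
    lo, hi = min(x0, x1), max(x0, x1)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if deriv(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

Setting the derivative to zero is exactly Snell's law, so the returned point satisfies it up to the bisection tolerance.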
We refer to linear paths that are locally optimal (i.e., that satisfy Snell’s law) as geodesic paths. A shortest path between a pair of points \(u\) and \(v\) must be a geodesic path, but there might be multiple geodesic paths joining \(u\) and \(v\); hence, the shortest path between \(u\) and \(v\) is the geodesic path of smallest cost joining them. Below, we discuss some of the implications of Snell’s law on the local behavior of geodesic paths.
In the next subsection, we study the properties of simple geodesic paths between points in neighboring cells or in the same cell through a faceusing segment. We consider a simplified situation, where the two neighboring cells are halfspaces separated by a single plane. We define a function related to the cost of these geodesic paths and prove a number of properties that it possesses. These properties are essential to the design and the analysis of our algorithm.
2.2 Weighted Distance Function
Let \(F\) be a plane in the three-dimensional Euclidean space. We denote the two open halfspaces defined by \(F\) by \(F^-\) and \(F^+\) and assume that positive weights \(w^-\) and \(w^+\) have been associated with them, respectively. We refer to \(F^-\) as the lower halfspace and to \(F^+\) as the upper halfspace. Let \(v\) be a point in \(F^-\) and let \(\ell \) be a line parallel to \(F\). In this section, we study paths between \(v\) and points \(\mathbf{x}\) on \(\ell \) that are required to cross (to have a nonempty intersection with) the plane \(F\). We study the structure and the behavior of shortest paths of this type, as well as some properties of a function measuring the weighted distance defined by the cost of these paths.
In the case where \(\ell \) is in the upper halfspace, any path joining \(v\) and \(\mathbf{x}\in \ell \) crosses \(F\), and the crossing requirement is not a restriction. There is a single geodesic path joining \(v\) and \(\mathbf{x}\) in this case, which we denote by \(\bar{\pi }(v,\mathbf{x})\), and hence the shortest path coincides with it. The structure of the path \(\bar{\pi }(v,\mathbf{x})\) is governed by Snell’s law and it consists of two segments \((v,a)\) and \((a,\mathbf{x})\). The bending point \(a\) is in \(F\) and its position is uniquely determined by Snell’s law (Fig. 1). In the case where \(\ell \) is in the lower halfspace, the requirement on the paths to cross \(F\) is essential. In this case, the geodesic path \(\bar{\pi }(v,\mathbf{x})\) joining \(v\) and \(\mathbf{x}\) is unique and the shortest path coincides with it. The path \(\bar{\pi }(v,\mathbf{x})\) is of the form \(\{v,a,a_1,\mathbf{x}\}\), where the bending points \(a\) and \(a_1\) are in \(F\) and their positions are uniquely determined by Snell’s law. Namely, the path lies in the plane perpendicular to \(F\) that contains \(v\) and \(\mathbf{x}\), and the in-angle at \(a\) is equal to the out-angle at \(a_1\). In fact, in all cases except one, the bending points \(a\) and \(a_1\) coincide and lie at the intersection point of \(F\) and the line through \(v\) and the point \(\hat{\mathbf{x}}\) symmetric to \(\mathbf{x}\) with respect to \(F\). The points \(a\) and \(a_1\) are different only in the case where \(w^->w^+\) and the acute angle between the line through \(v\) and \(\hat{\mathbf{x}}\) and the direction normal to \(F\) is greater than the critical angle \(\varphi ^*\) (defined by \(\sin \varphi ^*=w^+/w^-\)). In this case the path \(\bar{\pi }(v,\mathbf{x})\) consists of three segments as shown in Fig. 2a.
We observe that the path \(\{v,a,a_1,\hat{\mathbf{x}}\}\) joining \(v\) to the point symmetric to \(\mathbf{x}\) with respect to \(F\) is simply the segment \((v,\hat{\mathbf{x}})\) in all cases except for the latter. So, it is possible and appropriate to model the case where \(\ell \) is in the lower halfspace as follows. We assume that the weights assigned to the lower and upper halfspaces are both equal to \(w^-\), but the plane \(F\) has its own weight \(w=\min (w^-,w^+)\). Then we consider geodesic paths joining \(v\) to points on the line in the upper halfspace that is symmetric to \(\ell \) with respect to \(F\). Using this trick, we avoid considering different cases and simplify our arguments in this section.
Thus, throughout this section we slightly extend our model by assigning an additional weight \(w\) to the plane \(F\), so that \(w=\min (w^-,w^+)\) if \(w^-\not =w^+\), and \(w\le w^-\) if \(w^-=w^+\). Then we do not need to consider the case where \(\ell \) is in \(F^-\) separately and can assume that \(\ell \) is always in \(F^+\) (Fig. 2b).
Recall that in the case where \(w^-\not =w^+\) and \(w=\min (w^-,w^+)\), the path \(\bar{\pi }(v,\mathbf{x})\) consists of two segments \((v,a)\) and \((a,\mathbf{x})\), where the bending point \(a\) is uniquely determined by Snell’s law (Fig. 2a). In the case where \(w\le w^-=w^+\), the path \(\bar{\pi }(v,\mathbf{x})\) is a single segment \((v,\mathbf{x})\), provided that the angle \(\varphi \) between \((v,\mathbf{x})\) and the direction normal to the plane \(F\) is smaller than or equal to the critical angle \(\varphi ^*\). Or, if \(\varphi >\varphi ^*\), it lies in the plane perpendicular to \(F\) containing \(v\) and \(\mathbf{x}\) and consists of three segments \((v,a), (a,a_1)\) and \((a_1,\mathbf{x})\), where the acute angles between the segments \((v,a)\) and \((a_1,\mathbf{x})\) and the direction normal to the plane \(F\) are equal to the critical angle \(\varphi ^*\), and the segment \((a,a_1)\) is in \(F\) (Fig. 2b).
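In the case \(w\le w^-=w^+\), the two subcases above admit a closed-form cost. The following sketch (ours, with illustrative names; \(d\) is the horizontal distance between the projections of \(v\) and \(\mathbf{x}\) on \(F\), and \(z_1, z_2\) are their distances to \(F\)) evaluates it:

```python
import math

def face_weighted_cost(d, z1, z2, W, w):
    """Cost of the geodesic between a point at distance z1 below F and a
    point at distance z2 above F, with horizontal separation d, when both
    halfspaces have weight W and the plane F has weight w <= W.
    If the straight segment meets F at an angle <= the critical angle
    phi* (sin phi* = w/W), it is optimal; otherwise the geodesic has two
    segments at the critical angle plus a face-using middle segment."""
    straight = math.hypot(d, z1 + z2)
    sin_phi = d / straight            # angle of the straight segment with the normal
    sin_crit = w / W
    if sin_phi <= sin_crit:
        return W * straight
    cos_crit = math.sqrt(1.0 - sin_crit * sin_crit)
    tan_crit = sin_crit / cos_crit
    # Two oblique segments at the critical angle, plus the segment in F.
    return W * (z1 + z2) / cos_crit + w * (d - (z1 + z2) * tan_crit)
```

The two branches agree when \(\varphi =\varphi ^*\), i.e., when \(d=(z_1+z_2)\tan \varphi ^*\), so the cost is continuous in \(d\).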
Lemma 2.1
The weighted distance function \(c(v,x)=\Vert \bar{\pi }(v,\mathbf{x})\Vert \) has the following properties:
 (a)
It is continuous and differentiable.
 (b)
It is symmetric, i.e., \(c(v,x)=c(v,-x)\).
 (c)
It is strictly increasing for \(x>0\).
 (d)
It is convex.
 (e)
It has asymptotes for \(x\rightarrow \pm \infty \) as follows:
 (e1)
if \(w^+ < w^-\) then the asymptotes are \(w^+(z^-\cot \varphi ^* \pm x)\),
 (e2)
if \(w^- < w^+\) then the asymptotes are \(w^-(z^+\cot \varphi ^* \pm x)\),
 (e3)
in the explicit case \(w^+=w^-\ge w\) the asymptotes are \(\pm wx\).
Proof
In the explicit case (\(w^-=w^+\)), all these properties follow straightforwardly from the explicit representation (3). So, we consider the case \(w^-\not =w^+\).
From (1) and \(a_1=a=(\tau x, (1-\tau ) y, 0)\), it follows that \(c(v,x)=w^-\sqrt{\tau ^2(x^2+y^2)+(z^-)^2} + w^+\sqrt{(1-\tau )^2(x^2+y^2)+(z^+)^2}\), where \(\tau \) is the root of Eq. (2). The root \(\tau \) can be viewed as a function of \(x\), which by the implicit function theorem is continuous and differentiable. Hence property (a) holds.
Property (b) follows from the observation that the value of the function \(c(v,x)\) is determined by the distance between the projections of the points \(v\) and \(\mathbf{x}\) on \(F\), which is \(\sqrt{y^2+x^2}\) where \(y\) is fixed.
In the case where \(w^-<w^+\), we use Snell’s law and observe that the bending point \(a(x)\) of the shortest path \(\bar{\pi }(v,\mathbf{x})\) converges to \(x-z^+\tan \varphi ^*\), that is, \(\lim _{x\rightarrow +\infty }(a(x)-x+z^+\tan \varphi ^*)=0\). Then (e2) is established analogously to (e1). \(\square \)
2.3 Refractive Additive Voronoi Diagram
Next we study Voronoi diagrams under the weighted distance metric defined above. Given a set \(S\) of \(k\) points \(v_1,\ldots ,v_k\) in \(F^-\), called sites, and nonnegative real numbers \(C_1, \ldots ,C_k\), called additive weights, the additive Voronoi diagram for \(S\) is a subdivision of the halfspace \(F^+\) into regions \(\mathcal{V }(v_i, F^+)=\{x\in F^+ \mid \mathrm{dist} (x,v_i)+C_i\le \mathrm{dist} (x,v_j) + C_j \text{ for } j\ne i\}\), where ties are resolved in favor of the site with the smaller additive weight or, if the additive weights are equal, in favor of the site with the smaller index. The regions \(\mathcal{V }(v_i,F^+)\) are called Voronoi cells. In the classic case, \(\mathrm{dist} (\cdot ,\cdot )\) is the Euclidean distance. Here, for \(\mathrm{dist} (\cdot ,\cdot )\), we use the weighted distance function \(c(v,x)\).
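As a small illustration of this definition (ours, not from the paper), the owner of a query point on \(\ell \) can be found by direct minimization, computing the two-media distance numerically by ternary search over the crossing point on \(F\); the tie-breaking rule follows the definition above:

```python
import math

def weighted_dist(site, x, height, w_minus, w_plus, iters=200):
    """Two-media distance from site = (sx, sz), sz < 0, to the point
    (x, height) on the line ell (height > 0), crossing the interface
    z = 0.  The crossing point is found by ternary search on the convex
    cost, so no closed form is needed."""
    sx, sz = site
    def cost(t):
        return w_minus * math.hypot(t - sx, sz) + w_plus * math.hypot(x - t, height)
    lo, hi = min(sx, x), max(sx, x)
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if cost(m1) <= cost(m2):
            hi = m2
        else:
            lo = m1
    return cost((lo + hi) / 2)

def voronoi_owner(sites, add_weights, x, height, w_minus, w_plus):
    """Index of the site owning x in the additive diagram restricted to
    ell; ties resolved by smaller additive weight, then smaller index."""
    return min(range(len(sites)),
               key=lambda i: (weighted_dist(sites[i], x, height, w_minus, w_plus)
                              + add_weights[i],
                              add_weights[i], i))
```

For equal halfspace weights the distance reduces to the Euclidean one, which gives a quick sanity check.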
Theorem 2.1
If the points \(v\) and \(v^{\prime }\) in \(F^\) are at the same distance from \(F\), then the Voronoi cell \(\mathcal{V }(v,v^{\prime },{\ell };C)\) is an interval on \({\ell }\)—possibly empty, finite or infinite.
Proof
First, consider the case when \(C=0\). We denote the set of points \(\mathbf{x}\) in \(F^+\) such that \(c(v,\mathbf{x})=c(v^{\prime },\mathbf{x})\) by \(B(v,v^{\prime })\) and observe that it is a halfplane perpendicular to \(F\). Therefore, the set of points \(\mathbf{x}\) on \({\ell }\) for which \(c(v,x)=c(v^{\prime },x)\) is either a single point, the whole line \({\ell }\), or empty. Correspondingly, the Voronoi cell \(\mathcal{V }(v,v^{\prime },{\ell };0)\) is either a halfline, empty, or the whole line \({\ell }\) and the theorem holds for \(C=0\).
So, we assume that \(C>0\). We consider the equation \(c(v^{\prime },x)-c(v,x)=C\) and claim that it cannot have more than two solutions. Before we prove that claim (Claim 2.1 below), we argue that it implies the theorem.
Assume that the equation \(c(v^{\prime },x)-c(v,x)=C\) has at most two solutions. If it does not have any, or has just one solution, then the theorem follows straightforwardly. In the case where it has exactly two solutions, the cell \(\mathcal{V }(v,v^{\prime },{\ell };C)\) has to be either a finite interval on \({\ell }\), or the complement of a finite interval on \({\ell }\). From the definition of the Voronoi cell \(\mathcal{V }(v,v^{\prime },{\ell };C)\) and \(C>0\), it follows that \(\mathcal{V }(v,v^{\prime },{\ell };C)\subset \mathcal{V }(v,v^{\prime },{\ell };0)\). We know that \(\mathcal{V }(v,v^{\prime },{\ell };0)\) is either empty, a halfline, or the whole line. If it is either empty or a halfline, then \(\mathcal{V }(v,v^{\prime },{\ell };C)\) must be either empty or a finite interval, and the theorem holds.
It remains to consider the case where \(\mathcal{V }(v,v^{\prime },{\ell };0)\) is the whole line \({\ell }\). We argue that \(\mathcal{V }(v,v^{\prime },{\ell };C)\) cannot be the complement of a finite interval. We have \(\mathcal{V }(v,v^{\prime },{\ell };0)={\ell }\), and therefore the line \({\ell }\) must be parallel to the halfplane \(B(v,v^{\prime })\). Furthermore, the plane containing \(B(v,v^{\prime })\) is a perpendicular bisector of the segment \((v,v^{\prime })\), and thus the point \(v^{\prime }\) must have coordinates \((0,y^{\prime },z^-)\) (Fig. 3). In this setting, using Lemma 2.1(e), we observe that \(c(v,x)\) and \(c(v^{\prime },x)\) have the same asymptotes at infinity, and thus \(\lim _{x\rightarrow \pm \infty }(c(v^{\prime },x)-c(v,x))=0\). Therefore, the cell \(\mathcal{V }(v,v^{\prime },{\ell };C)\) must be finite. The theorem follows.\(\square \)
Next we establish the validity of the claim used in the proof of Theorem 2.1.
Claim 2.1
The equation \(c(v^{\prime },x)-c(v,x)=C\) has at most two solutions.
We will prove the claim by showing that the function \(g(x)=c(v^{\prime },x)-c(v,x)\) is unimodal, i.e., it has at most one local extremum. We establish this property in two steps. First, we prove a characterization property of a local extremum of \(g\) (Proposition 2.1). Then, we show that at most one point can possess that property.
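Unimodality is also algorithmically useful: once \(g\) is known to have at most one local extremum, the solutions of \(g(x)=C\) on an interval can be found by locating the extremum with ternary search and then bisecting each monotone side. A generic sketch under these assumptions (ours, not from the paper; it assumes the solutions lie in the given interval):

```python
def bisect_monotone(g, C, a, b, iters=200):
    """Solve g(x) = C on [a, b], where g is monotone; returns None if C
    is not between g(a) and g(b)."""
    ga, gb = g(a), g(b)
    if not (min(ga, gb) <= C <= max(ga, gb)):
        return None
    for _ in range(iters):
        m = (a + b) / 2
        if (g(m) <= C) == (ga <= C):
            a = m
        else:
            b = m
    return (a + b) / 2

def solve_unimodal(g, C, lo, hi, iters=200):
    """Solutions of g(x) = C on [lo, hi] for a unimodal g (at most one
    local extremum), hence at most two solutions (cf. Claim 2.1).
    If C equals the extremal value, the two sides may return
    near-duplicate roots."""
    maximum = g((lo + hi) / 2) >= max(g(lo), g(hi))  # extremum type
    a, b = lo, hi
    for _ in range(iters):                           # ternary search
        m1, m2 = a + (b - a) / 3, b - (b - a) / 3
        if (g(m1) < g(m2)) == maximum:
            a = m1
        else:
            b = m2
    x_star = (a + b) / 2
    roots = [bisect_monotone(g, C, lo, x_star),
             bisect_monotone(g, C, x_star, hi)]
    return [r for r in roots if r is not None]
```

The returned list has length at most two, mirroring the claim.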
These angles are completely defined by the angles \(\varphi \) and \(\theta \) defining the corresponding shortest paths at the bending points \(a(x)\) and \(a^{\prime }(x)\). From Fig. 3, we have \(a\mathbf y ^+=va\cos \alpha \) and, on the other hand, \(a\mathbf y ^+=a\mathbf y \cos \theta =va\sin \varphi \cos \theta \); hence \(\cos \alpha = \sin \varphi \cos \theta \). Next, we prove that the angles \(\alpha (x_0)\) and \(\alpha ^{\prime }(x_0)\) must be equal at any local extremum \(x_0\).
Proposition 2.1
If \(x_0\) is a local extremum of the function \(g\), then \(\alpha (x_0)=\alpha ^{\prime }(x_0)\).
Proof
Our next step is to show that there cannot be two points on \({\ell }\) satisfying Proposition 2.1. To do that, we study in more detail the relationship between the position of the points \(v\) and \(\mathbf{x}\) and the angle \(\alpha (x)\). We observe that, for any fixed \(y\) (the \(y\)-coordinate of \(v\)), there is a one-to-one correspondence between the real numbers \(x\) and the angles \(\alpha \). That is, for a fixed \(v\), there is a one-to-one correspondence between the points \(\mathbf{x}\) on \(\ell \) and the angle between the shortest path \(\bar{\pi }(v,\mathbf{x})\) and the positive direction on \(\ell \). Hence, we may consider and study the well-defined function \(x=x(y,\alpha )\). We prove the following:
Proposition 2.2
The second mixed derivative of the function \(x=x(y,\alpha )\) exists and is negative, i.e., \(x_{y\alpha }<0\).
Let us first show that Proposition 2.2 implies Claim 2.1.
Proof of Claim 2.1
We assume that Proposition 2.2 holds and will show that the function \(g(x)\) has at most one local extremum. Recall that the point \(v\) has coordinates \((0,y,z)\) and let us denote the coordinates of the point \(v^{\prime }\) by \((x^{\prime },y^{\prime },z)\).
We first consider the case where \(y=y^{\prime }\). In this case, we observe that \(\alpha ^{\prime }(x+x^{\prime })=\alpha (x)\). In addition, the function \(\alpha (x)\) is strictly monotone and, therefore, for any \(x\), the angles \(\alpha (x)\) and \(\alpha ^{\prime }(x)\) are different. From Proposition 2.1 it follows that in this case the function \(g(x)\) has no local extremum.
The proof of Proposition 2.2 is rather long and uses elaborate mathematical techniques and manipulations. On the other hand, the rest of the paper is independent of the details in that proof. So, we present the full proof for the interested readers in Appendix A.1.
Corollary 2.1
Consider a plane \(F_1\) in \(F^+\) parallel to \(F\) and two points \(v\) and \(v^{\prime }\) in \(F^-\) lying at the same distance from \(F\). For any nonnegative constant \(C\), the Voronoi cell \(\mathcal{V }(v,v^{\prime },F_1;C)=\{x\in F_1: c(v,x)+C < c(v^{\prime },x)\}\) is convex.
What can we say about the Voronoi cell of \(v\) in \(F^+\)? The above corollary implies that the intersection between the Voronoi cell and any plane, parallel to \(F\), is convex. This, as such, is not sufficient to conclude that the cell is convex. We close this section with the following conjecture.
Conjecture 2.1
In the above setting, the Voronoi cell of \(v\) in \(F^+, \mathcal{V }(v,v^{\prime },F^+;C)=\{x\in F^+: c(v,x)+C < c(v^{\prime },x)\}\), is convex.
Remark 2.1
Examples showing that equal distance of the points \(v\) and \(v^{\prime }\) from the bending plane \(F\) is a necessary condition for \(c(v^{\prime },x)-c(v,x)\) to be unimodal in Theorem 2.1 are not difficult to construct. In fact, if we take arbitrary points \(v\) and \(v^{\prime }\) in \(F^-\) at different distances from \(F\), it is very likely that \(c(v^{\prime },x)-c(v,x)\) will have more than one local extremum and hence, for a proper choice of \(C\), the equation \(c(v^{\prime }, x) - c(v,x)= C\) will have more than two solutions. Such examples can be constructed by choosing arbitrary points \(v\) in \(F^-\) and \(x_1, x_2\) on \(\ell \) and then computing a point \(v^{\prime } \in F^-\) so that \(\alpha (v, x_i)=\alpha (v^{\prime }, x_i)\), for \(i= 1, 2\), where the \(\alpha \)'s are the angles between the line \(\ell \) and the shortest paths coming from \(v\) and \(v^{\prime }\), respectively. As a result, \(c(v^{\prime },x)-c(v,x)\) will have local extrema at \(x_1\) and \(x_2\).
3 Discretization of \(\mathcal D \)
In this section, we describe the construction of a carefully chosen set of additional points placed in \(\mathcal{D }\), called Steiner points. These Steiner points collectively form a discretization of \(\mathcal{D }\), which is later used to approximate geodesic paths in \(\mathcal{D }\). Steiner points are placed on the edges of \(\mathcal{D }\) and on the bisectors of the dihedral angles of the tetrahedra in \(\mathcal{D }\). While it may seem more natural to place the Steiner points on the faces of the tetrahedra, placing them on the bisectors proves to be more efficient, leading to a speed-up by a factor of roughly \(\varepsilon ^{-1}\) compared to the alternative placement. Recall that \(\varepsilon \) is an approximation parameter in \((0,1)\). We provide a precise estimate on the number of Steiner points, which depends on \(\varepsilon \) and on the aspect ratios of the tetrahedra of \(\mathcal{D }\).
3.1 Placement of Steiner Points
We use the following definitions:
Definition 3.1
 (a)
For a point \(x\in \mathcal{D }\), we define \(D(x)\) to be the union of the tetrahedra containing \(x\). We denote by \(\partial D(x)\) the set of faces on the boundary of \(D(x)\) that are not incident to \(x\).
 (b)
We define \(d(x)\) to be the minimum Euclidean distance from \(x\) to any point on \(\partial D(x)\).
 (c)
For each vertex \(v\in \mathcal{D }\), we define a radius \(r(v)=d(v)/14\).
 (d)
For any internal point \(x\) on an edge in \(\mathcal{D }\), we define a radius \(r(x)=d(x)/24\). The radius of an edge \(e\in \mathcal{D }\) is \(r(e)=\max _{x\in e} r(x)\).
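Computing \(d(x)\) reduces to point-to-triangle distance queries against the faces of \(\partial D(x)\). Below is a self-contained sketch: the list of boundary faces is assumed to be given (extracting \(\partial D(x)\) from the tetrahedralization is omitted), and the closest-point routine is the standard computational-geometry one:

```python
def sub(p, q): return (p[0]-q[0], p[1]-q[1], p[2]-q[2])
def add(p, q): return (p[0]+q[0], p[1]+q[1], p[2]+q[2])
def mul(p, s): return (p[0]*s, p[1]*s, p[2]*s)
def dot(p, q): return p[0]*q[0] + p[1]*q[1] + p[2]*q[2]

def closest_on_triangle(p, a, b, c):
    """Closest point to p on triangle abc (standard Voronoi-region case
    analysis over the triangle's vertices, edges, and interior)."""
    ab, ac, ap = sub(b, a), sub(c, a), sub(p, a)
    d1, d2 = dot(ab, ap), dot(ac, ap)
    if d1 <= 0 and d2 <= 0: return a                      # vertex region A
    bp = sub(p, b); d3, d4 = dot(ab, bp), dot(ac, bp)
    if d3 >= 0 and d4 <= d3: return b                     # vertex region B
    vc = d1*d4 - d3*d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:                   # edge AB region
        return add(a, mul(ab, d1 / (d1 - d3)))
    cp = sub(p, c); d5, d6 = dot(ab, cp), dot(ac, cp)
    if d6 >= 0 and d5 <= d6: return c                     # vertex region C
    vb = d5*d2 - d1*d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:                   # edge AC region
        return add(a, mul(ac, d2 / (d2 - d6)))
    va = d3*d6 - d5*d4
    if va <= 0 and d4 - d3 >= 0 and d5 - d6 >= 0:         # edge BC region
        w = (d4 - d3) / ((d4 - d3) + (d5 - d6))
        return add(b, mul(sub(c, b), w))
    denom = 1.0 / (va + vb + vc)                          # face interior
    return add(a, add(mul(ab, vb * denom), mul(ac, vc * denom)))

def d_of(x, boundary_faces):
    """d(x): min Euclidean distance from x to the given triangular faces
    of the boundary of D(x); r(v) = d(v)/14 then follows Definition 3.1."""
    return min(dot(sub(x, q), sub(x, q)) ** 0.5
               for f in boundary_faces
               for q in [closest_on_triangle(x, *f)])
```

For a vertex \(v\), one would call `d_of(v, faces)` with `faces` being \(\partial D(v)\) and then take \(r(v)=d(v)/14\).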
Using radii \(r(v)\) and \(r(x)\) and our approximation parameter \(\varepsilon \), we define “small” regions around vertices and edges of \(\mathcal{D }\), called vertex and edge vicinities, respectively.
Definition 3.2
 (a)
The convex hull of the intersection points of the ball \(B(v,\varepsilon r(v))\), having center \(v\) and radius \(\varepsilon r(v)\), with the edges incident to \(v\) is called the vertex vicinity of \(v\) and is denoted by \(D_\varepsilon (v)\).
 (b)
The convex hull of the intersections between the “spindle” \(\cup _{x\in e}B(x,\varepsilon r(x))\) and the faces incident to \(e\) is called the edge vicinity of \(e\) and is denoted by \(D_\varepsilon (e)\).
Definition 3.3
The set of Steiner points placed on a bisector \(ABP\) consists of:
 (a)
all points \(P_{i,j}\) outside the union \({D_\varepsilon }(AB)\cup {D_\varepsilon }(A)\cup {D_\varepsilon }(B)\),
 (b)
the intersection points of the lines \(L_i\) with the boundary of that union.
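The flavor of this placement can be sketched as follows: the lines \(L_i\) lie on the bisector at distances from \(AB\) that form a geometric progression along the altitude \(PH\), and the points on each line are spaced in proportion to the line's distance from \(AB\). The ratio and spacing below are illustrative assumptions; the exact constants are those quantified in Lemma 3.1.

```python
import math

def steiner_line_distances(r, h, eps, gamma):
    """Distances from the edge AB of the lines L_i on a bisector: a
    geometric progression starting at the edge radius r and stopping at
    the altitude length h. The ratio 1 + sqrt(eps)*sin(gamma/2) is an
    illustrative assumption, giving O((1/sqrt(eps)) * log(h/r)) lines."""
    ratio = 1.0 + math.sqrt(eps) * math.sin(gamma / 2.0)
    d, dists = r, []
    while d < h:
        dists.append(d)
        d *= ratio
    return dists

def num_points_on_line(length, d, eps):
    """Steiner points on a line of the given length at distance d from AB,
    spaced roughly sqrt(eps)*d apart (again an illustrative assumption)."""
    return max(2, int(length / (math.sqrt(eps) * d)) + 1)
```

These counts only illustrate the geometric-progression idea (fewer lines for larger \(\varepsilon\), denser points near \(AB\)); Lemma 3.1 gives the precise per-bisector bounds.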
Our discussion is summarized in the following lemma.
Lemma 3.1
 (a)The number of Steiner points placed on a bisector \(\mathbf{b}=ABP\) of a dihedral angle \(\gamma \) in a tetrahedron \(T\), is bounded by \(C_\mathbf{b}\frac{1}{\varepsilon ^2}\log \frac{2}{\varepsilon }\), where the constant \(C_\mathbf{b}\) depends on the geometric features of \(\mathcal{D }\) around the edge \(AB\) and is bounded by$$\begin{aligned} \frac{14AB}{r(e)\sin ^2(\gamma /2)}\log \frac{AB^2 h}{r(e)\bar{r}(A)\bar{r}(B)}. \end{aligned}$$
 (b)The number of segments that are parallel to \(AB\) on a bisector \(\mathbf{b}=ABP\), containing Steiner points, is bounded by \(C^1_{ABP}(T)\frac{1}{\sqrt{\varepsilon }}\log \frac{2}{\varepsilon }\), where$$\begin{aligned} C^1_{ABP}(T)<\frac{8}{\sin (\gamma /2)}\log \frac{AB^2 h}{8r(e)\bar{r}(A)\bar{r}(B)}. \end{aligned}$$
 (c)
The total number of Steiner points is bounded by \(C(\mathcal{D })\frac{n}{\varepsilon ^2}\log \frac{2}{\varepsilon }\), where \(n\) is the number of tetrahedra in \(\mathcal{D }\) and \(C(\mathcal{D })\) is bounded by an absolute constant times the average of the constants \(C_\mathbf{b}\) for all bisectors \(\mathbf{b}\) in \(\mathcal{D }\).
By placing Steiner points in this way, we show in the next lemma that it is possible to approximate the cell-crossing segments that have their endpoints outside the vertex and edge vicinities.
Lemma 3.2
Let \(ABP\) be the bisector of a dihedral angle \(\gamma \) formed by the faces \(ABC\) and \(ABD\) of a tetrahedron \(ABCD\). Let \(x_1\) and \(x_2\) be points on the faces \(ABC\) and \(ABD\), respectively, that lie outside of the union \({D_\varepsilon }(AB)\cup {D_\varepsilon }(A)\cup {D_\varepsilon }(B)\). Then there exists a Steiner point \(q\) on \(ABP\), such that \(\max (\angle x_2x_1q, \angle x_1x_2q) \le \sqrt{\frac{\varepsilon }{2}}\) and \(x_1q+qx_2\le (1+\varepsilon /2)x_1x_2\).
Proof
Clearly, the segment \(x_1x_2\) intersects the bisector triangle \(ABP\) in a point \(x_0\) lying outside the vertex vicinities \({D_\varepsilon }(A),{D_\varepsilon }(B)\), and the edge vicinity \({D_\varepsilon }(AB)\). Recall that Steiner points in \(ABP\) are placed on a set of lines \(L_i\) parallel to \(AB\) and passing through the sequence of points \(P_i\) on the altitude \(PH\) of \(ABP\). Let \(i_0\) be the maximum index such that the line \(L_{i_0}\) is farther away from \(AB\) than \(x_0\). We define \(q\) to be the closest Steiner point to \(x_0\) on the line \(L_{i_0}\).
4 Discrete Paths
In this section, we use the Steiner points introduced above for the construction of a weighted graph \(G_{\varepsilon }=(V(G_{\varepsilon }),E(G_{\varepsilon }))\). We estimate the number of its nodes and edges and then establish that shortest paths in \(\mathcal D \) can be approximated by paths in \(G_\varepsilon \). We follow the approach laid out in [3], but the details are substantially different, as we have to handle both the vertex and edge vicinities, as well as the bisectors in 3d space.
The set of nodes \(V(G_\varepsilon )\) consists of the vertices of \(\mathcal D \), the Steiner points placed on the edges of \(\mathcal D \) and the Steiner points placed on the bisectors. The edges of the graph \(G_\varepsilon \) join nodes lying on neighboring bisectors as defined below. A bisector is a neighbor to itself. Two different bisectors are neighbors if the dihedral angles they split share a common face. We say that a pair of bisectors sharing a face \(f\) are neighbors with respect to \(f\). (So, a single bisector \(\mathbf{b}\) is a neighbor to itself with respect to both faces forming the dihedral angle it splits.)
First, we define edges joining pairs of Steiner points on neighboring bisectors. Let \(p\) and \(q\) be nodes corresponding to Steiner points lying on neighboring bisectors \(\mathbf{b}\) and \({\mathbf{b}_\mathbf{1}}\), respectively, that share a common face \(f\). We consider the shortest weighted path between \(p\) and \(q\) of the type \(\{p,x,y,q\}\), where \(x\) and \(y\) belong to \(f\) (points \(x\) and \(y\) are not necessarily different). We refer to this shortest path as a local shortest path between \(p\) and \(q\) crossing \(f\) and denote it by \(\hat{\pi }(p,q;f)\). Nodes \(p\) and \(q\) are joined by an edge in \(G_\varepsilon \) if none of the points \(x\) or \(y\) are on an edge of \(f\). Such an edge is said to cross the face \(f\). In the case where \(p\) and \(q\) lie on the same bisector, say \(\mathbf{b}\), splitting an angle between faces \(f_1\) and \(f_2\), we define two parallel edges in \(G_\varepsilon \) joining \(p\) and \(q\)—one crossing \(f_1\) and another crossing \(f_2\).
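For intuition, the cost of such an edge, i.e. the cost of the local shortest path \(\hat{\pi }(p,q;f)\), can be approximated by brute force: sample the face \(f\) on a barycentric grid and minimize the three-segment cost over all sampled pairs \((x,y)\). This is only an illustration; the weights and grid resolution below are assumptions, and the paper computes these local paths exactly rather than by grid search.

```python
import itertools, math

def tri_grid(a, b, c, m):
    """Barycentric grid of (m+1)(m+2)/2 sample points on triangle abc."""
    pts = []
    for i in range(m + 1):
        for j in range(m + 1 - i):
            k = m - i - j
            pts.append(tuple((i*a[t] + j*b[t] + k*c[t]) / m for t in range(3)))
    return pts

def dist(p, q):
    return math.sqrt(sum((p[t] - q[t]) ** 2 for t in range(3)))

def local_path_cost(p, q, face, wp, wf, wq, m=6):
    """Approximate min over x, y in `face` of wp*|px| + wf*|xy| + wq*|yq|,
    the cost of a path of type {p, x, y, q} crossing the face (x = y is
    allowed, so purely cell-crossing paths are covered too)."""
    pts = tri_grid(*face, m)
    return min(wp * dist(p, x) + wf * dist(x, y) + wq * dist(y, q)
               for x, y in itertools.product(pts, repeat=2))
```

With equal weights the returned cost is sandwiched between the straight-line distance \(pq\) and the cost of routing through any single grid point, which makes the sketch easy to sanity-check.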
Lemma 4.1
We have \(V(G_\varepsilon )=O(\frac{n}{\varepsilon ^2}\log \frac{1}{\varepsilon })\) and \(E(G_\varepsilon )=O(\frac{n}{\varepsilon ^4}\log ^2\frac{1}{\varepsilon })\).
Proof
The estimate on the number of nodes follows directly from Lemma 3.1 and the fact that \(\mathcal D \) has \(O(n)\) vertices. The number of edges in \(G_\varepsilon \) can be estimated as follows. There are \(O(n)\) faces in \(\mathcal D \) and at most 21 pairs of neighbor bisectors with respect to a fixed face in \(\mathcal D \). By Lemma 3.1(a), there are \(O(\frac{1}{\varepsilon ^4}\log ^2\frac{1}{\varepsilon })\) pairs of nodes lying on two fixed neighboring bisectors. When combined, these three facts prove the estimate on the number of edges of \(G_\varepsilon \). \(\square \)
Paths in \(G_\varepsilon \) are called discrete paths. The cost, \(c(\pi )\), of a discrete path \(\pi \) is the sum of the costs of its edges. Note that if we replace each of the edges in a discrete path \(\pi \) by the corresponding (at most three) segments forming the shortest path used to compute its cost we obtain a path in \(\mathcal D \) with cost \(c(\pi )\). Next, we state the main theorem of this section.
Theorem 4.1
Let \(\tilde{\pi }(v_0,v)\) be a shortest path between two different vertices \(v_0\) and \(v\) in \(\mathcal D \). There exists a discrete path \(\pi (v_0,v)\), such that \( c(\pi (v_0,v)) \le (1+\varepsilon )\Vert \tilde{\pi }(v_0,v)\Vert \).
Proof
We prove the theorem by constructing a discrete path \(\pi (v_0,v)\) whose cost is as required. Recall that the shortest path \(\tilde{\pi }(v_0,v)\) is a linear path consisting of cell-crossing, face-using, and edge-using segments that satisfy Snell's law at each bending point. We construct the discrete path \(\pi \) by successive modifications of \(\tilde{\pi }\), described below as steps.
Next, in Step 4, using a similar approach as above, we further partition each subpath \(\tilde{\pi }_3(v_i,v_{i+1})\) into between-edge-vicinity portions and edge vicinity portions. Then, we replace each edge vicinity portion by an edge-using segment plus two additional segments and estimate the cost of the resulting path \(\tilde{\pi }_4\).
Step 4: Let \((v_i,a)\) be the first segment of the path \(\tilde{\pi }_3(v_i,v_{i+1})\). If \(a\) is not inside an edge vicinity, we define \(a_{i,0}=v_i\). Otherwise, if \(a\) is inside an edge vicinity, say \({D_{\varepsilon }}(e_0)\), we consider the union \(D(e_0)\) of tetrahedra incident to the edge \(e_0\) and denote by \(\partial D(e_0)\) the set of faces on the boundary of \(D(e_0)\) that are not incident to \(e_0\). Then, let \(a^{\prime }\) be the first bending point on the path \(\tilde{\pi }_3(v_i,v_{i+1})\) after \(v_i\) lying on \(\partial D(e_0)\). We define \(a_{i,0}\) to be the last bending point on \(\tilde{\pi }_3(v_i,a^{\prime })\) that is inside \({D_{\varepsilon }}(e_0)\). Next, let \(b_{i,1}\) be the first bending point on \(\tilde{\pi }_3(a_{i,0},v_{i+1})\) that is inside an edge vicinity, say \(D_\varepsilon (e_1)\), and let \(a^{\prime \prime }\) be the first bending point on \(\tilde{\pi }_3(b_{i,1},v_{i+1})\) that is on \(\partial D(e_1)\). We define \(a_{i,1}\) as the last bending point on \(\tilde{\pi }_3(b_{i,1},a^{\prime \prime })\) that is in the same edge vicinity as \(b_{i,1}\). Assume that, following this approach, the sequence of bending points \(a_{i,0}, b_{i,1}, a_{i,1},\ldots , a_{i,k_i-1}, b_{i,k_i}\) has been defined. They partition the portion \(\tilde{\pi }_3(v_i,v_{i+1})\) into subportions \(\tilde{\pi }_3(v_i,v_{i+1})=\big \{ \tilde{\pi }_3(v_i,a_{i,0}),\ldots ,\tilde{\pi }_3(a_{i,j-1},b_{i,j}), \tilde{\pi }_3(b_{i,j},a_{i,j}), \ldots , \tilde{\pi }_3(b_{i,k_i},v_{i+1})\big \}\). Portions between \(a_{i,j}\) and \(b_{i,j+1}\), for \(j=0, \ldots ,k_i-1\), are called the between-edge-vicinity portions. Portions between \(b_{i,j}\) and \(a_{i,j}\), for \(j=0,\ldots ,k_i\), are called the edge vicinity portions (\(b_{i,0}=v_i\) and \(a_{i,k_i}=v_{i+1}\)).
According to our construction, the bending points \(a_{i,0}, b_{i,1}, a_{i,1},\ldots , a_{i,k_i1}, b_{i,k_i}\), defining the above partition lie inside edge vicinities. Moreover, consecutive points \(b_{i,j}\) and \(a_{i,j}\), for \(j=0,\ldots ,k_i\), are in one and the same edge vicinity \(D_\varepsilon (e_j)\).
Next, we modify the between-edge-vicinity portions \(\tilde{\pi }_3(a_{i,j},b_{i,j+1})\) into paths \(\tilde{\pi }_4(p_{i,j},q_{i,j+1})\), joining Steiner points \(p_{i,j}\) and \(q_{i,j+1}\) and not intersecting any vertex or edge vicinities. We apply a construction analogous to the one used in Step 3 to define the paths \(\tilde{\pi }_3(v_i,v_{i+1})\).
The bending points defining \(\tilde{\pi }_4\) can be partitioned into two groups. The first group consists of bending points corresponding to nodes of the graph \(G_\varepsilon \), i.e., Steiner points on bisectors, Steiner points on edges, and vertices of \(\mathcal D \). The second group consists of the remaining bending points of \(\tilde{\pi }_4\), which are bending points inside the faces of \(\mathcal D \). We complete the proof of the theorem by showing that the sequence of the nodes in the first group defines a discrete path \(\pi (v_0,v)\) the cost of which \(c(\pi (v_0,v))\le \Vert \tilde{\pi }_4(v_0,v)\Vert \). It suffices to show that any two consecutive nodes (bending points in the first group) along the path \(\tilde{\pi }_4\) are adjacent in the approximation graph \(G_\varepsilon \).
 1.
A face-using segment with one of its endpoints being a vertex of \(\mathcal D \) (a node) or a Steiner point on an edge of \(\mathcal D \).
 2.
A segment joining two nodes, at least one of them being a Steiner point on an edge of \(\mathcal D \) or a vertex of \(\mathcal D \).
Now, let \(p\) and \(q\) be two consecutive nodes along the path \(\tilde{\pi }_4\). We show that \(p\) and \(q\) are adjacent in \(G_\varepsilon \). We consider, first, the case where at least one of the nodes, say \(p\), is a vertex of \(\mathcal D \). Let \(x\) be the bending point following \(p\) along the path \(\tilde{\pi }_4\). By the definition of bending points adjacent to the vertices (in Step 3), we know that \((p,x)\) is a face-using segment followed by a cell-crossing segment \((x,x_1)\), joining \(x\) to a Steiner point (a node) on a bisector lying in one of the tetrahedra incident to the face that contains \((p,x)\). So, \(q=x_1\) and \(q\) is inside a tetrahedron incident to \(p\). Thus, \(p\) and \(q\) are adjacent in \(G_\varepsilon \). The case where at least one of the nodes \(p\) or \(q\) is a Steiner point on an edge of \(\mathcal D \) can be treated analogously.
Assume now that both \(p\) and \(q\) are Steiner points on bisectors. Let \(x\) and \(x_1\) be the bending points following \(p\) along \(\tilde{\pi }_4\). The point \(x\) has to be a bending point on a face of the tetrahedron containing \(p\). The segment \((x,x_1)\) is either a cell-crossing or a face-using segment. In the first case, \(q\) must coincide with \(x_1\) and is adjacent to \(p\) in \(G_\varepsilon \), since it lies in a tetrahedron that is a neighbor to the one containing \(p\). In the second case, where \((x,x_1)\) is a face-using segment, we consider the bending point \(x_2\) that follows \(x_1\) along the path \(\tilde{\pi }_4\). The segment \((x_1,x_2)\) must be a cell-crossing segment. Thus, in this case, \(q=x_2\) is adjacent to \(p\), because the tetrahedra containing \(p\) and \(q\) are neighbors.
We have shown that any pair \(p\) and \(q\) of consecutive nodes on the path \(\tilde{\pi }_4\) are adjacent in \(G_\varepsilon \). Hence, we define a discrete path \(\pi (v_0,v)\) to be the path in \(G_\varepsilon \) following the sequence of nodes along \(\tilde{\pi }_4\). Finally, we observe that the subpaths of \(\tilde{\pi }_4(p,q)\) joining pairs of consecutive nodes stay in the union of the tetrahedra containing these nodes and cross faces shared by the bisectors containing them. Hence, by the definition of the cost of the edges in \(G_\varepsilon \), we have \(c(p,q)\le \Vert \tilde{\pi }_4(p,q)\Vert \). Summing these estimates, for all edges of \(\pi (v_0,v)\), and using (38), we obtain \( c(\pi (v_0,v))\le \Vert \tilde{\pi }_4(v_0,v)\Vert \le (1+\varepsilon )\Vert \tilde{\pi }(v_0,v)\Vert \). \(\square \)
5 An Algorithm for Computing SSSP in \(G_\varepsilon \)
In this section, we present our algorithm for solving the Single Source Shortest Paths (SSSP) problem in the approximation graph \(G_\varepsilon =(V(G_\varepsilon ),E(G_\varepsilon ))\). One can straightforwardly apply Dijkstra's algorithm, which runs in \(O(E(G_\varepsilon ) + V(G_\varepsilon )\log V(G_\varepsilon ))\) time. By Lemma 4.1, we have \(V(G_\varepsilon )=O(\frac{n}{\varepsilon ^2}\log \frac{1}{\varepsilon })\) and \(E(G_\varepsilon )=O(\frac{n}{\varepsilon ^4}\log ^2 \frac{1}{\varepsilon })\). Thus, the SSSP problem in \(G_\varepsilon \) can be solved in \(O(\frac{n}{\varepsilon ^4}\log \frac{n}{\varepsilon }\log \frac{1}{\varepsilon })\) time.
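For reference, the baseline looks as follows: a minimal Dijkstra sketch over an adjacency-list graph (constructing the adjacency lists of \(G_\varepsilon\) is elided here):

```python
import heapq

def dijkstra(adj, s):
    """SSSP by Dijkstra's algorithm; adj maps a node to a list of
    (neighbor, edge_cost) pairs. With a binary heap this runs in
    O((V + E) log V); the O(E + V log V) bound quoted above needs a
    Fibonacci-heap-style priority queue."""
    dist = {s: 0.0}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue  # stale queue entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```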
In the remainder of this section, we demonstrate how geometric properties of our model can be used to obtain a more efficient algorithm for solving the SSSP problem. More precisely, we present an algorithm that runs in \(O(V_\varepsilon (\log V_\varepsilon +\frac{1}{\sqrt{\varepsilon }}\log ^3\frac{1}{\varepsilon }))= O(\frac{n}{\varepsilon ^{2.5}}\log \frac{n}{\varepsilon }\log ^3 \frac{1}{\varepsilon })\) time.
Informally, the idea is to avoid consideration of large portions of the edges of the graph \(G_\varepsilon \) when searching for shortest paths. We achieve that by applying the strategy proposed first in [31, 32] and developed further in [3] and by using the properties of the weighted distance function and additive Voronoi diagrams studied in Sect. 2.2. We maintain a priority queue containing candidate shortest paths. At each iteration of the algorithm, a shortest path from the source \(s\) to some node \(u\) of \(G_\varepsilon \) is found. Then, the algorithm constructs edges adjacent to \(u\) that can be continuations of the shortest path from \(s\) to \(u\) and inserts them in the priority queue as new candidate shortest paths. In general, one needs to consider all edges adjacent to \(u\) as possible continuations. In our case, we divide the edges adjacent to \(u\) into \(O(\frac{1}{\sqrt{\varepsilon }}\log \frac{1}{\varepsilon })\) groups related to the segments containing Steiner points in the neighboring bisectors and demonstrate that we can consider just a constant number of edges in each group. The latter is possible due to the structure of the Voronoi cell \(\mathcal{V }(u)\) of the node \(u\) in the additive Voronoi diagram related to a fixed group (see Theorem 2.1).
This section consists of two subsections. In the first one we describe the general structure of the algorithm. Section 5.2 is divided into four parts. In the first part we show how the general strategy can be applied in our case and present an outline of the algorithm. In the next two parts we provide details of the implementation of the algorithm and analyze its complexity. Finally, in part four we establish the main result of the paper.
5.1 General Structure of the Algorithm
Different strategies for maintaining information about \(E(S)\) and finding an optimal edge \(e(S)\) during each iteration result in different algorithms for computing SSSP. For example, Dijkstra's algorithm maintains only a subset \(Q(S)\) of \(E(S)\), which, however, always contains an optimal edge. Alternatively, as in [3], one may maintain a subset of \(E(S)\) containing one edge per node \(u\) in \(S\). The target node of this edge is called the representative of \(u\) and is denoted by \(\rho (u)\). The node \(u\) itself is called the predecessor of its representative. The representative \(\rho (u)\) is defined to be the target of the minimum-cost edge in the propagation set \(I(u)\) of \(u\), where \(I(u)\subset E(S)\) consists of all edges \((u,v)\) such that \(\delta (u)+c(u,v) < \delta (u^{\prime }) + c(u^{\prime },v)\) for all nodes \(u^{\prime }\in S\) that have entered \(S\) before \(u\). The union of the propagation sets forms a subset \(Q(S)\) of \(E(S)\) that always contains an optimal edge. The propagation sets \(I(u)\), for \(u\in S\), form a partition of \(Q(S)\), which is called the propagation diagram and is denoted by \(\mathcal{I }(S)\).
The set of representatives \(R\subset S^a\) can be organized in a priority queue, where the key of the node \(\rho (u)\) in \(R\) is defined to be \(\delta (u) + c(u,\rho (u))\). Observe that the edge corresponding to the minimum in \(R\) is an optimal edge for \(S\). In each iteration, the minimum-key node \(v\) in \(R\) is selected and the following three steps are carried out:
Step 1. The node \(v\) is moved from \(R\) into \(S\). Then, the propagation set \(I(v)\) is computed and the propagation diagram \(\mathcal{I }(S)\) is updated accordingly.
Step 2. Representative \(\rho (v)\) of \(v\) and a new representative, \(\rho (u)\), for the predecessor \(u\) of \(v\) are computed.
Step 3. The new representatives, \(\rho (u)\) and \(\rho (v)\), are either inserted into \(R\) together with their corresponding keys, or (if they are already in \(R\)) their keys are updated.
Clearly, this leads to a correct algorithm for solving the SSSP problem in \(G\). The total time for the priority queue operations is \(O(V\log V)\). Therefore, the efficiency of this strategy depends on the maintenance of the propagation diagram, the complexity of the propagation sets, and the efficient updates of the new representatives. In the next subsection, we address these issues and provide the necessary details.
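As a proof of concept, the representative scheme can be simulated on an ordinary graph with the propagation sets \(I(u)\) computed naively. There is no geometric grouping and hence no speed-up in this sketch; it only illustrates that keeping one candidate edge per settled node in the queue still settles every node at its true distance. This is an illustrative sketch, not the paper's implementation:

```python
import heapq

def sssp_representatives(adj, s):
    """Representative-based SSSP: each settled node u keeps, among the
    edges of its propagation set I(u), one candidate edge (to its current
    'representative') in the priority queue R at a time."""
    delta, order = {s: 0.0}, [s]       # settled distances, settling order
    cand = {}                          # u -> remaining edges of I(u), sorted

    def propagation_set(u):
        # Naive I(u): edges (u,v) with delta(u)+c(u,v) strictly better than
        # delta(u')+c(u',v) for every node u' settled before u.
        before = order[:order.index(u)]
        out = []
        for v, w in adj[u]:
            if v in delta:
                continue
            key = delta[u] + w
            if all(key < delta[x] + wx
                   for x in before for y, wx in adj[x] if y == v):
                out.append((key, v))
        return sorted(out)

    R = []                             # heap of (key, u, v) with v = rho(u)

    def push_representative(u):
        while cand.get(u):
            key, v = cand[u][0]
            if v in delta:             # target settled meanwhile: skip it
                cand[u].pop(0)
            else:
                heapq.heappush(R, (key, u, v))
                return

    cand[s] = propagation_set(s)
    push_representative(s)
    while R:
        key, u, v = heapq.heappop(R)
        cand[u].pop(0)                 # this candidate edge is consumed
        if v not in delta:             # Step 1: settle v at distance `key`
            delta[v] = key
            order.append(v)
            cand[v] = propagation_set(v)
            push_representative(v)     # Step 2-3: representative of v ...
        push_representative(u)         # ... and a new one for u
    return delta
```

Stale queue entries (whose target was settled via another predecessor) are handled lazily: popping one simply advances the predecessor to its next candidate.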
5.2 Implementation Details and Analysis
5.2.1 Notation and Algorithm Outline
Our algorithm follows the general strategy described in the previous subsection. It takes as input a weighted tetrahedralization \(\mathcal D \), an approximation parameter \(\varepsilon \), and a fixed vertex \(s\) of \(\mathcal D \). The algorithm first computes the set of nodes \(V_\varepsilon \), consisting of the vertices of \(\mathcal D \) and all Steiner points, as described in Sect. 3.1. This computation takes time proportional to \(V_\varepsilon \). We view the graph \(G_\varepsilon \) as a directed graph, where each of its original edges is replaced by a pair of oppositely oriented edges with cost equal to the cost of the original edge. Following our main idea, the edges of the graph \(G_\varepsilon \) are not computed in advance. Only those edges of \(G_\varepsilon \) that eventually participate in a shortest path to some vertex will be explicitly computed during the course of the algorithm.
Let, as above, \(S\) be the set of the nodes to which a shortest path has already been found and \(E(S)\) be the set of the edges joining \(S\) with \(S^a\subset V\setminus S\). We partition the edges of \(G_\varepsilon \) (and, respectively, \(E(S)\)) into groups so that the propagation sets and the corresponding propagation diagrams, when restricted to a fixed group, have a simple structure and can be updated efficiently. Then, for each node \(u\) in \(S\), we keep multiple representatives in \(R\): for each group in which edges incident to \(u\) participate and in which the propagation set of \(u\) is nonempty, \(u\) has, on average, a constant number of representatives. A node in \(S^a\) will have multiple predecessors: at most as many as the number of groups in which edges incident to it participate. We will show that the number of groups in which edges incident to \(u\) can participate is bounded by \(O(\frac{1}{\sqrt{\varepsilon }}\log \frac{1}{\varepsilon })\) times the number of bisectors incident to \(u\). Within a fixed group, we will be able to compute new representatives in \(O(\log \frac{1}{\varepsilon })\) time and update propagation diagrams in \(O(\log ^2\frac{1}{\varepsilon })\) time.
A fixed bisector \(\mathbf{b}\) has either three or six neighboring bisectors (\(\mathbf{b}\) itself and two or five others, respectively) with respect to each of the two faces forming the dihedral angle bisected by \(\mathbf{b}\). Hence, the total number of ordered triples \((\mathbf{b},{\mathbf{b}_\mathbf{1}}, f)\) does not exceed \(36n\). By Lemma 3.1(b), the number of Steiner segments on any bisector is \(O(\frac{1}{\sqrt{\varepsilon }}\log \frac{1}{\varepsilon })\) and thus the number of subgroups of a group corresponding to a triple \((\mathbf{b},{\mathbf{b}_\mathbf{1}},f)\) is \(O(\frac{1}{\varepsilon }\log ^2\frac{1}{\varepsilon })\). In total, the number of groups \(E(\ell ,\ell _1,f)\) is \(O(\frac{n}{\varepsilon }\log ^2\frac{1}{\varepsilon })\).
A node \(u\) lying on a Steiner segment \(\ell \) will have a set of representatives for each group \(E(\ell ,\ell _1, f)\) corresponding to a triple, where \(\ell \) is the first element and where its propagation set is nonempty. We denote this set by \(\varvec{\rho }(u,\ell _1, f)\). The set of representatives \(\varvec{\rho }(u,\ell _1, f)\) will correspond to the structure of the propagation set \(I(u;\ell ,\ell _1, f)\), as we will detail in the next subsection.
Consider an iteration of our algorithm. Let \(v\) be the node extracted from the priority queue \(R\), containing all representatives. Let \(\mathcal{T }(v)\) be the set of triples \((\ell ,\ell _1, f)\) such that \(v\) lies on \(\ell \). First, we need to move \(v\) from \(S^a\) to \(S\), since the distance from \(v\) to the source \(s\) has been found. We store the nodes in \(S\) in dynamic doubly linked lists related to Steiner segments \({\ell }\), organized in a binary-search tree with respect to their positions on \({\ell }\). These lists are denoted by \(S({\ell })\). Instead of maintaining the set \(S^a\) explicitly, for each Steiner segment \({\ell }\) we maintain a complementary doubly linked list \(\bar{S}({\ell })\) containing the nodes on \({\ell }\) that are not in \(S({\ell })\). These lists are also organized to allow binary search with respect to the positions of the nodes on \({\ell }\). Combined with the data structures called Adjacency Diagrams, the lists \(\bar{S}({\ell })\) describe \(S^a\) completely. The computation and usage of adjacency diagrams are discussed in detail later in this section. We assume that, in a preprocessing step, the lists \(S({\ell })\) and \(\bar{S}({\ell })\) are initialized for all Steiner segments \({\ell }\), so that \(S({\ell })\) is empty and \(\bar{S}({\ell })\) contains all nodes on \({\ell }\). By Lemma 3.1(a), this preprocessing takes \(O(V_\varepsilon \log \frac{1}{\varepsilon })\) time.
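The bookkeeping for \(S(\ell )\) and \(\bar{S}(\ell )\) can be sketched with sorted arrays and binary search standing in for the balanced trees (an illustrative sketch; positions along \(\ell \) serve as keys):

```python
import bisect

class SegmentLists:
    """Settled list S(l) and complement list S-bar(l) of node positions
    on a Steiner segment, both kept sorted to allow binary search."""
    def __init__(self, positions):
        self.settled = []                    # S(l): initially empty
        self.unsettled = sorted(positions)   # S-bar(l): all nodes on l

    def settle(self, pos):
        """Move a node from S-bar(l) to S(l) once its distance is final."""
        i = bisect.bisect_left(self.unsettled, pos)
        assert i < len(self.unsettled) and self.unsettled[i] == pos
        self.unsettled.pop(i)
        bisect.insort(self.settled, pos)

    def unsettled_in(self, lo, hi):
        """Unsettled nodes with positions in [lo, hi], e.g. when scanning
        the part of a Voronoi cell restricted to the segment."""
        i = bisect.bisect_left(self.unsettled, lo)
        j = bisect.bisect_right(self.unsettled, hi)
        return self.unsettled[i:j]
```

Each operation costs \(O(\log K(\ell ))\) comparisons, matching the role the balanced search trees play in the algorithm (list insertion in a Python array is not truly logarithmic, but that is immaterial for the illustration).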
In Sect. 5.2.2, we describe the structure and maintenance of the data structures related to the propagation diagrams \(\mathcal{I }(\ell ,\ell _1, f)\). Computation and updates of the sets of representatives are described in Sect. 5.2.3. We conclude our discussion in Sect. 5.2.4 by summarizing the time complexity of the algorithm and by establishing our main result.
5.2.2 Implementation of Step 3.1
We consider a fixed triple \(({\ell },{\ell }_1, f)\), where \(\ell \) and \(\ell _1\) are Steiner segments on neighboring bisectors \(\mathbf{b}\) and \({\mathbf{b}_\mathbf{1}}\) sharing \(f\). The propagation diagram \(\mathcal{I }({\ell },{\ell }_1,f)\) was defined as the set consisting of the propagation sets of the active nodes on \(\ell \). Instead of explicitly computing the propagation diagram, we construct and maintain a number of data structures that allow efficient computation and updates of representatives.
 C1.
The nodes \(u\) and \(v_1\) are adjacent in \(G_\varepsilon \) by an edge that crosses \(f\);
 C2.
\(\delta (u)+c(u,v_1) < \delta (u_i)+c(u_i,v_1)\), for \(i=1,\ldots ,k\);
 C3.
The node \(v_1\) is in \(\bar{S}({\ell }_1)\).
Adjacency Diagram: The Adjacency Diagram \(\mathcal{A }(\ell ,\ell _1, f)\) consists of sets \(A(u,\ell _1)\), for all nodes \(u\) on \(\ell \). We assume that the nodes on \(\ell _1\) are stored in an ordered list \(V(\ell _1)\) according to their position on that segment. For any fixed node \(u\in \ell \), the adjacency set \(A(u,\ell _1)\) will be computed and stored as a sublist of the list \(V(\ell _1)\). We denote this sublist by \(\bar{A}(u,\ell _1)\).
We reduce the size of \(\bar{A}(u,\ell _1)\) by replacing each portion of consecutive nodes in it by a pair of pointers to the first and to the last node of that portion. (Isolated nodes are treated as portions of length one.) Hence, each sublist \(\bar{A}(u,\ell _1)\) is an ordered list of pairs of pointers identifying portions of consecutive nodes in the underlying list \(V(\ell _1)\). The size of a sublist implemented in this way is proportional to the number of consecutive portions it contains. Next, we discuss the structure of the lists \(\bar{A}(u,\ell _1)\) and show that their size is bounded by a small constant.
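The pointer-pair representation is ordinary run-length grouping of consecutive indices; a minimal sketch:

```python
def compress_to_intervals(indices):
    """Given the sorted indices (into V(l1)) of the nodes in A(u, l1),
    return the (first, last) index pairs of the maximal portions of
    consecutive nodes; isolated nodes become portions of length one."""
    intervals = []
    for i in indices:
        if intervals and i == intervals[-1][1] + 1:
            intervals[-1] = (intervals[-1][0], i)   # extend the current run
        else:
            intervals.append((i, i))                # start a new run
    return intervals
```

The compressed list has one pair per maximal run, which is what makes the constant bound of the next lemma useful.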
According to our definitions (Sect. 4), an edge \((u,u_1)\) is present in \(A(u,{\ell }_1)\) if the local shortest path \(\hat{\pi }(u,u_1;f)\) does not touch the boundary of \(f\), where the path \(\hat{\pi }(u,u_1;f)\) was defined in (24). We refer to intervals on \(\ell _1\) with both of their endpoints being Steiner points as Steiner intervals. Furthermore, we say that a Steiner interval is covered by the set \(A(u,\ell _1)\) if all Steiner points, including its endpoints, are in \(A(u,\ell _1)\). Clearly, each maximal interval covered by \(A(u,\ell _1)\) corresponds to and defines a portion of consecutive nodes on \(\ell _1\) that are adjacent to \(u\). Moreover, by our definition, the list \(\bar{A}(u,\ell _1)\) consists of the pairs of pointers to the endpoints of the maximal intervals covered by \(A(u,\ell _1)\). In the next lemma, we show that there are at most seven maximal Steiner intervals covered by \(A(u,\ell _1)\).
Lemma 5.1
The number of the maximal intervals covered by \(A(u,{\ell }_1)\) is at most seven. The corresponding ordered list \(\bar{A}(u,\ell _1)\) can be computed in \(O(\log K(\ell _1))\) time, where \(K(\ell _1)\) denotes the number of Steiner points on \(\ell _1\).
Proof
Presented in Appendix A.3. \(\square \)
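As an illustration of this pairs-of-pointers representation, the following sketch (with hypothetical names; in the algorithm the adjacency test comes from the local-path condition defining \(A(u,\ell _1)\), and the runs are found by binary search rather than by this linear scan) compresses a membership predicate over the ordered list \(V(\ell _1)\) into maximal runs of consecutive nodes:

```python
def maximal_runs(is_adjacent, n):
    """Compress a membership predicate over the ordered node list
    V(l1) = 0..n-1 into maximal runs of consecutive members, stored
    as (first, last) index pairs -- the pairs-of-pointers sublist."""
    runs = []
    start = None
    for i in range(n):
        if is_adjacent(i):
            if start is None:
                start = i                    # open a new run
        elif start is not None:
            runs.append((start, i - 1))      # close the current run
            start = None
    if start is not None:
        runs.append((start, n - 1))          # close a run ending at n-1
    return runs
```

For example, `maximal_runs(lambda i: i in {1, 2, 3, 7}, 9)` yields the two runs `[(1, 3), (7, 7)]`; Lemma 5.1 guarantees that for an adjacency set there are at most seven such runs.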
We assume that the nodes which are endpoints of the maximal Steiner intervals covered by the sets \(A(u,\ell _1)\), for all nodes \(u\) on \(\ell \), are precomputed in a preprocessing step and stored in the lists \(\bar{A}(u,\ell _1)\) as discussed above. Lemma 5.1 implies that this preprocessing related to the group \((\ell ,\ell _1,f)\) takes \(O(K(\ell )\log K(\ell _1))\) time, where \(K(\ell )\) and \(K(\ell _1)\) denote the number of nodes on \(\ell \) and \(\ell _1\), respectively. Next, we discuss the Voronoi diagram data structure related to condition C2.
We assume that \(\mathcal{V }(u_1, \ldots ,u_k)\) is known and stored. We further assume that a node \(v\) on \(\ell \) has been extracted by the extract-min operation in Step 1 of our algorithm. In Step 3.1, we need to add the new site \(v\) and to compute the Voronoi diagram \(\mathcal{V }(u_1, \ldots , u_k,v)\). Next we show how this can be achieved in \(O(\log ^2 \frac{1}{\varepsilon })\) time. First, the following lemma shows that the Voronoi cell of \(v\) has a simple structure.
Lemma 5.2
Let \(u_1,\ldots , u_k\) be the active nodes on \(\ell \) and let \(v\) be the last node inserted in \(S\). Then the Voronoi cell \(\mathcal{V }(v)\), in the Voronoi diagram \(\mathcal{V }(u_1, \ldots , u_k, v)\), is either empty or consists of a single interval on \(\ell _1\).
Proof
By our assumptions \(\delta (u_i)\le \delta (v)\), for \(i=1,\ldots ,k\). The Voronoi cell \(\mathcal{V }(v)\) can be represented as an intersection \(\mathcal{V }(v)=\cap _{i=1}^k\mathcal{V }_i(v)\), where the sets \(\mathcal{V }_i(v)\) are defined by \(\mathcal{V }_i(v)=\{x\in \ell _1\ :\ \delta (v)-\delta (u_i) +c(v,x)<c(u_i,x)\}\). By Theorem 2.1, each of \(\mathcal{V }_i(v)\) is either empty or is an interval on \(\ell _1\), and thus the same is true for their intersection. \(\square \)
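The intersection argument in the proof can be sketched directly. Here each set \(\mathcal{V }_i(v)\) is assumed to be already given as an interval (or `None` if empty), which the actual algorithm obtains by solving the corresponding bisector equations; the function name is ours:

```python
def cell_of_new_site(intervals):
    """Intersect the k sets V_i(v) on l1, each given as an interval
    (lo, hi) or None if empty.  The result is the Voronoi cell V(v)
    of the newly inserted site v: again an interval, or None if it
    is empty (cf. Lemma 5.2)."""
    lo, hi = float("-inf"), float("inf")
    for iv in intervals:
        if iv is None:
            return None                      # one factor empty => cell empty
        lo, hi = max(lo, iv[0]), min(hi, iv[1])
    return (lo, hi) if lo < hi else None     # empty when the bounds cross
```

For instance, intersecting `(0, 5)`, `(2, 8)` and `(1, 4)` gives the single interval `(2, 4)`, while disjoint inputs give `None`.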
Using the above lemma, we easily obtain a bound on the size of the Voronoi diagrams: each newly inserted site contributes one new interval and splits at most one existing interval into two.
Corollary 5.1
The number of the intervals comprising the diagram \(\mathcal{V }(u_1,\ldots ,u_k)\) does not exceed \(2k-1\).
Next, we present and analyze an efficient procedure which, given the Voronoi diagram \(\mathcal{V }(u_1,\ldots ,u_k)\) and a new node \(v\) inserted in \(S\), determines the Voronoi diagram \(\mathcal{V }(u_1,\ldots ,u_k, v)\). This includes computation of the Voronoi cell \(\mathcal{V }(v)\), update of the set of points \(x_1,\ldots ,x_m\) describing \(\mathcal{V }(u_1,\ldots ,u_k)\) to another set describing \(\mathcal{V }(u_1,\ldots ,u_k,v)\) and update of the assignment information between intervals and Voronoi cells.
According to Lemma 5.2, the Voronoi cell \(\mathcal{V }(v)\) is an interval, which we denote by \((x^-,x^+)\). Let \(M\) be any of the points \(x_1,\ldots ,x_m\) characterizing the diagram \(\mathcal{V }(u_1,\ldots ,u_k)\). The following claim shows that the relative position of \(M\) with respect to the interval \((x^-,x^+)\) can be determined in constant time.
Claim 5.1
The relative position of \(M\) with respect to the interval \((x^-,x^+)\) can be determined in \(O(1)\) time.
Proof
By the definition of point \(M\), it follows that there are two nodes \(u_{i_1}\) and \(u_{i_2}\) such that \(\delta (u_{i_1})+c(u_{i_1},M)=\delta (u_{i_2})+c(u_{i_2},M)\). We denote the latter value by \(d(M)\) and note that \(d(M)\le \delta (u_i)+c(u_i,M)\), for \(i=1,\ldots ,k\). Then we compute the value \(d(v,M)=\delta (v)+c(v,M)\) and compare it with \(d(M)\).
If \(d(v,M)<d(M)\), then we have \(M\in (x^-,x^+)\) and thus \(x^-<M<x^+\). In the case where \(d(v,M)\ge d(M)\), we compute the Voronoi cell \(\triangle (v)\) of \(v\) in the three-site diagram \(\mathcal{V }(u_{i_1},u_{i_2},v)\). By Lemma 5.2, the cell \(\triangle (v)\) is an interval on \(\ell _1\). Since \(M\) must be outside \(\triangle (v)\) and \((x^-,x^+)\subset \triangle (v)\), it follows that the relative position between \(M\) and \((x^-,x^+)\) is the same as the relative position between \(M\) and \(\triangle (v)\).
The claimed time bound follows from the described procedure, which, besides a constant number of simple computations, involves a constant number of evaluations of the function \(c(\cdot ,\cdot )\) and possibly solving the equations \(c(u_i,x)-c(v,x)=\delta (v)-\delta (u_i)\), for \(i=i_1,i_2\). \(\square \)
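The classification procedure of Claim 5.1 can be sketched as follows, assuming the three-site cell \(\triangle (v)\) has already been computed; the function name and argument conventions are ours:

```python
def locate_point(d_M, d_v_M, triangle_v, M):
    """Classify the position of a diagram point M relative to the cell
    (x^-, x^+) of the newly inserted site v (cf. Claim 5.1).
    d_M        -- common value delta(u_i1)+c(u_i1,M) = delta(u_i2)+c(u_i2,M)
    d_v_M      -- delta(v)+c(v,M)
    triangle_v -- cell of v in the three-site diagram V(u_i1, u_i2, v),
                  given as an interval (a, b) on l1
    M          -- position of the point M on l1
    Returns 'inside', 'left' or 'right'."""
    if d_v_M < d_M:
        return "inside"          # v beats both u_i1 and u_i2 at M
    a, b = triangle_v
    # M lies outside triangle_v, and (x^-, x^+) is contained in it,
    # so M is on the same side of (x^-, x^+) as of triangle_v.
    return "left" if M <= a else "right"
```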
To implement all of these updates efficiently, we maintain the sequence \(X\) of points characterizing the Voronoi diagram of the currently active nodes on \(\ell \) in an order-statistics tree, allowing us to report order statistics as well as to perform insertions and deletions in \(O(\log |X|)\) time. Based on this data structure, the computation of the interval \((x^-,x^+)\) takes \(O(\log ^2 |X|)\) time, since each of the \(O(\log |X|)\) iterations takes \(O(\log |X|)\) time. Updating the Voronoi diagram requires two insertions and \(j^+-j^-+1\) deletions in \(X\), where \(x_{j^-},\ldots ,x_{j^+}\) are the points of \(X\) falling inside \((x^-,x^+)\); insertions take \(O(\log |X|)\) time and deletions are carried out in amortized \(O(1)\) time.
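A minimal sketch of this update of \(X\), with a plain sorted Python list standing in for the order-statistics tree (so the stated logarithmic bounds do not apply to this toy version):

```python
import bisect

def insert_cell(X, x_minus, x_plus):
    """Update the sorted endpoint sequence X after inserting a new site v
    whose Voronoi cell is (x^-, x^+): the points x_{j^-}, ..., x_{j^+}
    lying inside the cell disappear, and x^- and x^+ take their place."""
    lo = bisect.bisect_right(X, x_minus)   # first endpoint > x^-
    hi = bisect.bisect_left(X, x_plus)     # first endpoint >= x^+
    X[lo:hi] = [x_minus, x_plus]           # delete interior points, insert both ends
    return X
```

For example, inserting the cell `(2, 6)` into `[1, 3, 5, 7]` removes the interior points `3` and `5` and yields `[1, 2, 6, 7]`.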
Let us estimate the time for the maintenance of the Voronoi diagram of the active nodes in the group \(E(\ell ,\ell _1,f)\). We denoted the total number of nodes on \(\ell \) by \(K(\ell )\). Each node on \(\ell \) becomes active once during the execution. Thus, each node on \(\ell \) becomes the subject of the procedure Voronoi cell exactly once. According to Corollary 5.1, the sizes of the sequences \(X\) characterizing Voronoi diagrams in the group \(E(\ell ,\ell _1,f)\) are bounded by \(2K(\ell )+1\). Therefore, the total time spent by the procedure Voronoi cell in the group \(E(\ell ,\ell _1,f)\) is \(O(K(\ell )\log ^2K(\ell ))\). In total, there are \(O(K(\ell ))\) insertions in the sequence \(X\), and, clearly, the total number of deletions is bounded by the number of insertions. Hence, the total time spent for insertions and deletions is \(O(K(\ell )\log K(\ell ))\). Thus, the time spent for the maintenance of the Voronoi diagram in a fixed group \(E(\ell , \ell _1,f)\) is \(O(K(\ell )\log ^2 K(\ell ))\). Next, we discuss the computation and maintenance of a data structure that combines the adjacency diagram and the Voronoi diagram.
Propagation Diagram: As discussed above, the propagation set \(I(u)\) of an active node \(u\) on \(\ell \) is described completely by the set of nodes on \(\ell _1\) satisfying conditions C1, C2, and C3. We denote the set of nodes on \(\ell _1\) satisfying C1 and C2 with respect to \(u\) by \(I^{\prime }(u)\). Slightly abusing our terminology, we refer to this set again as the propagation set of \(u\). Similarly, we refer to the collection of the sets \(I^{\prime }(u)\) for all currently active nodes as the propagation diagram and denote it by \(\mathcal{I }^{\prime }(u_1,\ldots ,u_k)\), where, as above, \(u_1,\ldots ,u_k\) are the currently active nodes on \(\ell \).
The difference between the originally defined propagation set \(I(u)\) and the set \(I^{\prime }(u)\) is that the elements of \(I(u)\) are the edges joining \(u\) to the nodes on \(\ell _1\) satisfying C1, C2, and C3, whereas the elements of \(I^{\prime }(u)\) are the nodes on \(\ell _1\) that satisfy C1 and C2, but not necessarily C3. Indeed, the set \(I^{\prime }(u)\) is closely related to \(I(u)\) and when combined with the list \(\bar{S}({\ell }_1)\) describes it completely. Based on this observation, we compute and maintain the propagation diagram \(\mathcal{I }^{\prime }(u_1,\ldots ,u_k)\) instead of the originally defined diagram.
We describe the sets \(I^{\prime }(u_i)\) by specifying the maximal Steiner intervals they cover. We implement these sets as ordered lists of pairs of pointers to the endpoints of these intervals in the underlying list \(V(\ell _1)\). The propagation sets of different active nodes do not intersect, and hence, the endpoints of the maximal Steiner intervals of the propagation sets \(I^{\prime }(u_1), \ldots , I^{\prime }(u_k)\) form a sequence, \(\mathcal{I }^{\prime }=\{A_1\le y_1 \le z_1\le \dots \le y_{m_1}\le z_{m_1} \le B_1\}\), where \(\ell _1=(A_1,B_1)\). The points \(y_j\) and \(z_j\), for \(j=1,\ldots ,m_1\), are Steiner points (nodes) on \(\ell _1\). Each of the Steiner intervals \((y_j,z_j)\) is a maximal Steiner interval covered by one of the sets \(I^{\prime }(u_1),\ldots ,I^{\prime }(u_k)\), whereas the Steiner points inside the intervals \((z_j,y_{j+1})\) do not belong to any of the sets \(I^{\prime }(u_i)\). Clearly, the sequence \(\mathcal{I }^{\prime }\), together with the assignment of the intervals \((y_j,z_j)\) to the sets \(I^{\prime }(u_i)\) covering them, determines the diagram \(\mathcal{I }^{\prime }(u_1,\ldots ,u_k)\). We implement the sequence \(\mathcal{I }^{\prime }\) as an ordered list of pointers to the underlying list \(V(\ell _1)\). In addition, we associate with it a binary-search tree based on the positions of the Steiner points on the segment \(\ell _1\). The diagram \(\mathcal{I }^{\prime }(u_1,\ldots ,u_k)\) is maintained in Step 3.1 and the details are as follows.
Let, as above, \(v\) be the node extracted by the extract-min operation in Step 1 of the current iteration of the algorithm. We assume that the diagram \(\mathcal{I }^{\prime }(u_1,\ldots ,u_k)\) is known, i.e., we know the sequence \(\mathcal{I }^{\prime }\) as well as the assignment of the intervals \((y_j,z_j)\) to the propagation sets \(I^{\prime }(u_i)\). Next, we describe the update of \(\mathcal{I }^{\prime }\) and of the assignment information specifying \(\mathcal{I }^{\prime }(u_1,\ldots ,u_k,v)\). By definition, \(I^{\prime }(v)\) consists of the nodes on \(\ell _1\) that lie in the Voronoi cell \(\mathcal{V }(v)\) and belong to the adjacency set \(A(v,\ell _1)\). By Lemma 5.2, \(\mathcal{V }(v)\) is either empty or a single interval, which we have denoted by \((x^-,x^+)\). We denote by \((v^-,v^+)\) the largest Steiner interval inside the interval \((x^-,x^+)\). The interval \((v^-,v^+)\) is easily found using binary search in \(O(\log K(\ell _1))\) time, where, as above, \(K(\ell _1)\) denotes the number of Steiner points on \(\ell _1\). On the other hand (see Lemma 5.1), the adjacency set \(A(v,\ell _1)\) consists of the nodes lying inside a constant number (at most seven) of Steiner intervals, which were computed and stored as the list \(\bar{A}(v,\ell _1)\). Hence, the maximal Steiner intervals specifying the propagation set \(I^{\prime }(v)\) can be obtained as the intersection of the intervals in \(\bar{A}(v,\ell _1)\) with \((v^-,v^+)\). This is done in constant time by identifying the positions of the points \(v^-\) and \(v^+\) with respect to the elements of the list \(\bar{A}(v,\ell _1)\). Clearly, the so-computed maximal Steiner intervals covered by \(I^{\prime }(v)\) are at most seven. We update the sequence \(\mathcal{I }^{\prime }\) by inserting each maximal Steiner interval covered by \(I^{\prime }(v)\) in the same way as we inserted the interval \((x^-,x^+)\) into the sequence \(X\) describing the Voronoi diagram.
More precisely, let \((y,z)\) be any of the maximal Steiner intervals covered by \(I^{\prime }(v)\). We insert the points \(y\) and \(z\) at their positions in the ordered sequence \(\mathcal{I }^{\prime }\), and then we delete the points of \(\mathcal{I }^{\prime }\) between \(y\) and \(z\). If the interval containing \(y\) is \((y_j,z_j)\), we set the new \(z_j\) to be the Steiner point preceding \(y\) on \(\ell _1\). Similarly, if the interval containing \(z\) is \((y_j,z_j)\), we set the new \(y_j\) to be the Steiner point following \(z\) on \(\ell _1\).
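The clipping of the adjacency intervals of \(\bar{A}(v,\ell _1)\) against the window \((v^-,v^+)\), described above, amounts to the following sketch (treating Steiner intervals as closed ranges of positions; the names are ours):

```python
def clip_to_window(adj_intervals, v_minus, v_plus):
    """Intersect the (at most seven) maximal Steiner intervals of the
    adjacency set A(v, l1) with the window (v^-, v^+) of the Voronoi
    cell; the result is the set of maximal intervals specifying the
    propagation set I'(v)."""
    out = []
    for y, z in adj_intervals:
        y, z = max(y, v_minus), min(z, v_plus)   # clip both ends
        if y <= z:                               # drop intervals clipped away
            out.append((y, z))
    return out
```

For example, clipping `[(0, 4), (6, 9), (12, 15)]` to the window `(3, 13)` yields `[(3, 4), (6, 9), (12, 13)]`.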
At each iteration of the algorithm, the endpoints of the (at most seven) intervals are inserted into the sequence \(\mathcal{I }^{\prime }\). Hence, the size of the sequence \(\mathcal{I }^{\prime }\) is bounded by \(14K(\ell )\) and insertions in \(\mathcal{I }^{\prime }\) are implemented in \(O(\log K(\ell ))\) time. Deletions are implemented in \(O(1)\) time. The total number of insertions is \(O(K(\ell ))\) and the total number of deletions is at most the number of insertions. Therefore, the total time spent for the maintenance of \(\mathcal{I }^{\prime }\) and the propagation diagram is \(O(K(\ell )(\log K(\ell )+\log K(\ell _1)))\).
Lemma 5.3
The total time spent by the algorithm implementing Step 3.1 is \(O(\frac{V_\varepsilon }{\sqrt{\varepsilon }}\log ^3\frac{1}{\varepsilon })\).
5.2.3 Computation and Updates of Set of Representatives
Next, we concentrate on the computation of representatives in Steps 3.2, 3.3 and 3.4. The set of representatives \(\varvec{\rho }(v;\ell ,\ell _1,f)\) of an active node \(v\) on \(\ell \) in a group \(E(\ell ,\ell _1,f)\) contains one representative for each interval \((y_j,z_j)\) in the propagation set \(I^{\prime }(v)\). Recall that \(I^{\prime }(v)\) consists of a set of intervals \((y_j,z_j)\) stored in the sequence \(\mathcal{I }^{\prime }\), characterizing the propagation diagram of the currently active nodes on \(\ell \). The representative in \(\varvec{\rho }(v;\ell ,\ell _1,f)\) corresponding to \((y_j,z_j)\in I^{\prime }(v)\) is the target of the minimum cost edge from \(v\) to a node in \(\bar{S}\cap (y_j,z_j)\). By Lemma 2.1, the function \(c(v,x)\) is convex and thus has a single minimum in any interval. Let \(x^*(v)\) be the point on \(\ell _1\) where \(c(v,x)\) achieves its minimum. To efficiently compute the representatives, we compute in a preprocessing step the points \(x^*(v)\), for all nodes on \(\ell \). From the definition of the function \(c(v,x)\) and Snell’s law, it follows that \(x^*(v)\) is the point on \(\ell _1\) that is closest to \(v\). So, each point \(x^*(v)\) can be computed in constant time, which leads to \(O(K(\ell ))\) preprocessing time for the group \(E(\ell ,\ell _1,f)\), where \(K(\ell )\) is the number of nodes on \(\ell \). Thus, the total time for preprocessing in all groups is \(O(\frac{V_\varepsilon }{\sqrt{\varepsilon }}\log \frac{1}{\varepsilon })\).
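The preprocessing above computes each \(x^*(v)\) as the point of \(\ell _1\) closest to \(v\), i.e., a clamped orthogonal projection onto a segment. A plain geometric sketch, assuming points are given as 3d coordinate tuples (names are ours):

```python
def closest_point_on_segment(v, a, b):
    """Orthogonal projection of the point v onto the segment (a, b),
    clamped to the segment: the point x*(v) at which the convex cost
    c(v, .) attains its minimum on l1.  All points are 3d tuples."""
    ab = tuple(b[i] - a[i] for i in range(3))      # direction of the segment
    av = tuple(v[i] - a[i] for i in range(3))
    denom = sum(c * c for c in ab)                 # squared length of (a, b)
    t = 0.0 if denom == 0 else sum(av[i] * ab[i] for i in range(3)) / denom
    t = min(1.0, max(0.0, t))                      # clamp to the segment
    return tuple(a[i] + t * ab[i] for i in range(3))
```

For a point above the middle of the segment the result is the foot of the perpendicular; for a point beyond an endpoint the result is that endpoint.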
Representatives are computed and updated in the following three situations:
 1.
New representatives \(\rho (v)\) are computed when \(v\) becomes active and its propagation set is nonempty. We need to compute one new representative for each maximal Steiner interval \((y,z)\) in the propagation set \(I^{\prime }(v)\). Recall that there are at most seven such intervals and that they were computed and stored in the sequence \(\mathcal{I }^{\prime }\). To compute \(\rho (v)\) in the interval \((y,z)\), we determine the leftmost and rightmost nodes from \(\bar{S}\) inside the interval \((y,z)\). This is done by finding the positions of the points \(y\) and \(z\) in the sequence of nodes currently in \(\bar{S}\). Let the leftmost and the rightmost nodes from \(\bar{S}\) in \((y,z)\) be \(y^a\) and \(z^a\), respectively. Then, we determine the position of the point \(x^*(v)\) with respect to \(y^a\) and \(z^a\). If it is to the left of \(y^a\), then \(\rho (v)=y^a\). If it is to the right of \(z^a\), then \(\rho (v)=z^a\). If \(x^*(v)\) is inside \((y^a,z^a)\), we determine the two nodes in \(\bar{S}\) immediately to the left and to the right of \(x^*(v)\), and \(\rho (v)\) is one of these two nodes. Using the binary-search tree on \(\bar{S}\), the nodes \(y^a\) and \(z^a\) and, if needed, the nodes neighboring \(x^*(v)\) are determined in \(O(\log \frac{1}{\varepsilon })\) time.
 2.
When some representative \(\rho (v)\) is removed from \(\bar{S}\), a new representative for \(v\) is one of the neighbors of \(\rho (v)\) in the doubly linked list \(\bar{S}\) that lies in the same interval \((y,z)\) as \(\rho (v)\). This is carried out in \(O(1)\) time.
 3.
When some interval of the propagation set \(I^{\prime }(v)\) shrinks and the current representative \(\rho (v)\) is no longer inside this interval, then \(\rho (v)\) is updated as follows. Let, as above, \(y^a\) and \(z^a\) be the leftmost and the rightmost nodes from \(\bar{S}\), respectively, in the updated interval. Then, if \(\rho (v)\) lies to the left of \(y^a\), we set \(\rho (v)=y^a\). If \(\rho (v)\) is to the right of \(z^a\), we set \(\rho (v)=z^a\). As above, determination of the nodes \(y^a\) and \(z^a\) is done in \(O(\log \frac{1}{\varepsilon })\) time.
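For a single interval, the cases above reduce to clamping the minimizer \(x^*(v)\) into the range of available nodes. A sketch with hypothetical names, in which ties between the two neighbours of \(x^*(v)\) are broken by distance, whereas the algorithm compares the actual edge costs:

```python
import bisect

def representative(nodes_in_S, y, z, x_star):
    """Pick the representative of v for the interval (y, z): among the
    nodes of bar-S lying in (y, z) (nodes_in_S is their sorted sequence
    of positions), the node closest to the minimizer x*(v) of the
    convex cost c(v, .).  Returns None if no node of bar-S is in (y, z)."""
    lo = bisect.bisect_left(nodes_in_S, y)        # leftmost node  y^a
    hi = bisect.bisect_right(nodes_in_S, z) - 1   # rightmost node z^a
    if lo > hi:
        return None
    y_a, z_a = nodes_in_S[lo], nodes_in_S[hi]
    if x_star <= y_a:
        return y_a                                # minimizer left of the range
    if x_star >= z_a:
        return z_a                                # minimizer right of the range
    j = bisect.bisect_left(nodes_in_S, x_star)    # neighbours of x*(v)
    left, right = nodes_in_S[j - 1], nodes_in_S[j]
    return left if x_star - left <= right - x_star else right
```

For example, with available nodes at positions `[1, 2, 4, 8]`, interval `(1.5, 9)` and minimizer `5`, the representative is the node at `4`.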
Finally, the number of priority queue operations executed in Step 3.4 is bounded by the number of computed representatives. Thus, the total time for Step 3.4 is \(O(\frac{V_\varepsilon }{\sqrt{\varepsilon }}\log \frac{n}{\varepsilon }\log \frac{1}{\varepsilon })\).
5.2.4 Complexity of the Algorithm and the Main Result
Here, we summarize our discussion from the previous three subsections and state our main result. Step 1 of our algorithm takes \(O(V_\varepsilon \log \frac{n}{\varepsilon })\) time. Step 2 requires \(O(V_\varepsilon \log \frac{1}{\varepsilon })\) time. By Lemma 5.3, Step 3.1 takes in total \(O(\frac{V_\varepsilon }{\sqrt{\varepsilon }}\log ^3\frac{1}{\varepsilon })\) time. The total time for implementation of Steps 3.2 and 3.3 is \(O(\frac{V_\varepsilon }{\sqrt{\varepsilon }}\log ^2\frac{1}{\varepsilon })\) and the total time for Step 3.4 is \(O(\frac{V_\varepsilon }{\sqrt{\varepsilon }}\log \frac{n}{\varepsilon }\log \frac{1}{\varepsilon })\). By Lemma 3.1, we obtain \(V_\varepsilon =O(\frac{n}{\varepsilon ^2}\log \frac{1}{\varepsilon })\). We have thus established the following:
Theorem 5.1
The SSSP problem in the approximation graph \(G_\varepsilon \) can be solved in \(O(\frac{n}{\varepsilon ^{2.5}}\log \frac{n}{\varepsilon }\log ^3\frac{1}{\varepsilon })\) time.
Consider the polyhedral domain \(\mathcal D \). Starting from a vertex \(v_0\) of \(\mathcal D \), our algorithm solves the SSSP problem in the corresponding graph \(G_\varepsilon \) and constructs a shortest paths tree rooted at \(v_0\). According to Theorem 4.1, the computed distances from \(v_0\) to all other vertices of \(\mathcal D \) (and to all Steiner points) are within a factor of \(1+\varepsilon \) of the costs of the corresponding shortest paths. Using the definition of the edges of \(G_\varepsilon \), an approximate shortest path can be output by simply replacing the edges in the discrete path with the corresponding local shortest paths used to define their costs. This can be done in time proportional to the number of segments in the path, because the computation of each local shortest path takes \(O(1)\) time. The approximate shortest paths tree rooted at \(v_0\) and containing all Steiner points and vertices of \(\mathcal D \) can be output in \(O(V_\varepsilon )\) time. Thus, the algorithm we described solves the WSP3D problem, and the following theorem states the result.
Theorem 5.2
Let \(\mathcal D \) be a weighted polyhedral domain consisting of \(n\) tetrahedra and \(\varepsilon \in (0,1)\). The weighted shortest path problem in three dimensions (WSP3D), requiring the computation of approximate weighted shortest paths from a source vertex to all other vertices of \(\mathcal D \), can be solved in \(O(\frac{n}{\varepsilon ^{2.5}}\log \frac{n}{\varepsilon }\log ^3\frac{1}{\varepsilon })\) time.
6 Conclusions
This paper generalizes the weighted region problem, originally studied in 1991 by Mitchell and Papadimitriou [26] for the planar setting, to 3d weighted domains. We present an approximation scheme for the WSP3D problem. The complexity of our algorithm is independent of the weights, but depends upon the geometric features of the given tetrahedra as stated in Lemma 3.1.
There are some fairly standard techniques which can be employed here to remove the dependence on geometry (cf. [1]), provided that an estimate is known on the maximum number of segments (i.e., the combinatorial complexity) in weighted shortest paths in three dimensions. It has been shown that the combinatorial complexity of weighted shortest paths in the planar case is \(\Theta (n^2)\) [26]. We conjecture that a similar bound holds in three dimensions, but the proof techniques in [26] do not seem to apply here, since they use planarity. If the combinatorial complexity of these paths in three dimensions is polynomial in \(n\), then we can remove the dependence on the geometry by increasing the runtime by a polynomial factor in \(n\). This, however, will introduce a dependence on the weights. It remains an open question, even in the planar weighted case, whether there is a polynomial-time shortest path algorithm independent of both geometry and weights.
This paper also investigated additive Voronoi diagrams in heterogeneous media. We studied a fairly simple scenario, and already its analysis was very technical and cumbersome. It is desirable to find simpler and more elegant ways to understand the combinatorics of these diagrams. Nevertheless, we believe that the discretization scheme and the algorithms presented here can be used successfully for efficient computation of approximate Voronoi diagrams in heterogeneous media.
Our algorithm does not require any complex data structures or primitives and as such should be implementable and even practical. Its structure allows Steiner points to be generated “on the fly” as the shortest path wavefront propagates through the weighted domain. This feature allows the design of more compact and adaptive implementation schemes that can be of high practical value. In the planar case (and on terrains), an experimental study [22] showed that a constant number of Steiner points suffices to produce high-quality paths. We believe that the same holds here, and this merits further investigation.
One of the classical problems that motivated this study is the unweighted version of this problem, namely the ESP3D problem. There, we need to find a shortest path between a source and a target point that lies completely in the free space, avoiding three-dimensional polyhedral obstacles. We can use our techniques to solve this problem, though this will require tetrahedralizing the free space. As outlined above, the complexity of our algorithm depends upon the geometry of these tetrahedra, so it is natural to ask whether the free space can be partitioned into nice tetrahedra. Unfortunately, there is no simple answer to this question, which has been an important topic of study in computational and combinatorial geometry for several decades. Nevertheless, our algorithm provides a much simpler and so far the fastest method for solving the ESP3D problem, provided the free space is already partitioned into nondegenerate tetrahedra.
Combining the techniques for answering weighted shortest path queries on polyhedral surfaces [4] with the existence of nice separators for well-shaped meshes [25], we believe that the construction presented in this paper can be used for answering (approximate) weighted shortest path queries in 3d.
Footnotes
 1.
See Lemma 3.1 for details on the value of \(\mathcal{C }(\mathcal D )\).
 2.
The 2d case was treated there, but the arguments readily apply to the 3d model considered in this paper.
 3.
Although the roots of a quartic can be expressed as a rational function of radicals over its coefficients, they are too complex to be analytically manipulated and used here.
 4.
Throughout the paper logarithms with unspecified base are assumed to be base \(2\).
 5.
Note that such a segment still can have an endpoint in a vertex or edge vicinity related to other vertices or edges incident to \(f_1\) and \(f_2\).
 6.
Note that we do not need a priority queue based on elaborated data structures such as Fibonacci heaps. Any priority queue with logarithmic time per operation suffices.
Acknowledgments
This research was supported by NSERC and the U.S. Department of Energy through the LANL/LDRD Program.
References
 1. Agarwal, P.K., Sharathkumar, R., Yu, H.: Approximate Euclidean shortest paths amid convex obstacles. In: Mathieu, C. (ed.) SODA, pp. 283–292. SIAM, New York (2009)
 2. Aleksandrov, L., Maheshwari, A., Sack, J.R.: Approximation algorithms for geometric shortest path problems. In: Yao, F.F., Luks, E.M. (eds.) STOC, pp. 286–295. ACM, New York (2000)
 3. Aleksandrov, L., Maheshwari, A., Sack, J.R.: Determining approximate shortest paths on weighted polyhedral surfaces. J. ACM 52(1), 25–53 (2005)
 4. Aleksandrov, L., Djidjev, H., Guo, H., Maheshwari, A., Nussbaum, D., Sack, J.R.: Algorithms for approximate shortest path queries on weighted polyhedral surfaces. Discrete Comput. Geom. 44(4), 762–801 (2010)
 5. Asano, T., Kirkpatrick, D., Yap, C.: Pseudo approximation algorithms with applications to optimal motion planning. Discrete Comput. Geom. 31(1), 139–171 (2004)
 6. Aurenhammer, F., Klein, R.: Voronoi diagrams. In: Handbook of Computational Geometry. North Holland, Amsterdam (2000)
 7. Bajaj, C.: The algebraic degree of geometric optimization problems. Discrete Comput. Geom. 3, 177–191 (1988)
 8. Canny, J.F., Reif, J.H.: New lower bound techniques for robot motion planning problems. In: FOCS, pp. 49–60. IEEE Computer Society, Los Alamitos (1987)
 9. Choi, J., Sellen, J., Yap, C.K.: Approximate Euclidean shortest paths in 3-space. Int. J. Comput. Geom. Appl. 7(4), 271–295 (1997)
 10. Choi, J., Sellen, J., Yap, C.K.: Precision-sensitive Euclidean shortest path in 3-space. SIAM J. Comput. 29(5), 1577–1595 (2000)
 11. Clarkson, K.L.: Approximation algorithms for shortest path motion planning (extended abstract). In: Aho, A.V. (ed.) STOC, pp. 56–65. ACM, New York (1987)
 12. Cox, B., Treeby, B.: Artifact trapping during time reversal photoacoustic imaging for acoustically heterogeneous media. IEEE Trans. Med. Imaging 29(2), 387–396 (2010)
 13. Har-Peled, S.: Constructing approximate shortest path maps in three dimensions. SIAM J. Comput. 28(4), 1182–1197 (1999)
 14. Hilaga, M., Shinagawa, Y., Komura, T., Kunii, T.L.: Topology matching for fully automatic similarity estimation of 3d shapes. In: SIGGRAPH, pp. 203–212 (2001)
 15. Im, K., Jo, H.J., Mangin, J.F., Evans, A.C., Kim, S.I., Lee, J.M.: Spatial distribution of deep sulcal landmarks and hemispherical asymmetry on the cortical surface. Cereb. Cortex 20(3), 602–611 (2010)
 16. Inderwiesen, P.L., Lo, T.W.: Fundamentals of Seismic Tomography. Geophysical Monograph Series, vol. 6. Society of Exploration Geophysicists, Tulsa (1994)
 17. Kanai, T., Suzuki, H., Mitani, J., Kimura, F.: Interactive mesh fusion based on local 3d metamorphosis. In: MacKenzie, I.S., Stewart, J. (eds.) Graphics Interface, pp. 148–156. Canadian Human–Computer Communications Society, St. John’s (1999)
 18. Kanai, T., Suzuki, H., Kimura, F.: Metamorphosis of arbitrary triangular meshes. IEEE Comput. Graph. Appl. 20(2), 62–75 (2000)
 19. Klein, R.: Concrete and Abstract Voronoi Diagrams. Lecture Notes in Computer Science, vol. 400. Springer, Berlin (1989)
 20. Klein, R., Langetepe, E., Nilforoushan, Z.: Abstract Voronoi diagrams revisited. Comput. Geom. 42(9), 885–902 (2009)
 21. Krozel, J., Penny, S., Prete, J., Mitchell, J.S.B.: Comparison of algorithms for synthesizing weather avoidance routes in transition airspace. In: Collection of Technical Papers, AIAA Guidance, Navigation, and Control Conference, vol. 1, pp. 446–461 (2004)
 22. Lanthier, M., Maheshwari, A., Sack, J.R.: Approximating shortest paths on weighted polyhedral surfaces. Algorithmica 30(4), 527–562 (2001)
 23. Lê, N.M.: Abstract Voronoi diagram in 3-space. J. Comput. Syst. Sci. 68(1), 41–79 (2004)
 24. Lee, A., Dobkin, D., Sweldens, W., Schröder, P.: Multiresolution mesh morphing. In: Proceedings of SIGGRAPH 99, pp. 343–350 (1999)
 25. Miller, G.L., Teng, S.H., Thurston, W., Vavasis, S.A.: Automatic mesh partitioning. In: George, A., Gilbert, J.R., Liu, J.W.H. (eds.) Graph Theory and Sparse Matrix Computation. The IMA Volumes in Mathematics and Its Applications, vol. 56, pp. 57–84. Springer, Berlin (1993)
 26. Mitchell, J.S.B., Papadimitriou, C.H.: The weighted region problem: finding shortest paths through a weighted planar subdivision. J. ACM 38(1), 18–73 (1991)
 27. Mitchell, J.S.B., Sharir, M.: New results on shortest paths in three dimensions. In: Snoeyink, J., Boissonnat, J.D. (eds.) Symposium on Computational Geometry, pp. 124–133. ACM, New York (2004)
 28. Ozmen, C., Balcisoy, S.: A framework for working with digitized cultural heritage artifacts. In: Levi, A., Savas, E., Yenigün, H., Balcisoy, S., Saygin, Y. (eds.) ISCIS. Lecture Notes in Computer Science, vol. 4263, pp. 394–400. Springer, Berlin (2006)
 29. Papadimitriou, C.H.: An algorithm for shortest-path motion in three dimensions. Inf. Process. Lett. 20(5), 259–263 (1985)
 30. Reif, J.H., Sun, Z.: Movement planning in the presence of flows. Algorithmica 39(2), 127–153 (2004)
 31. Sun, Z., Reif, J.: BUSHWHACK: an approximation algorithm for minimal paths through pseudo-Euclidean spaces. In: Eades, P., Takaoka, T. (eds.) Algorithms and Computation. Lecture Notes in Computer Science, vol. 2223, pp. 160–171. Springer, Berlin (2001). doi:10.1007/3-540-45678-3_15
 32. Sun, Z., Reif, J.H.: On finding approximate optimal paths in weighted regions. J. Algorithms 58(1), 1–32 (2006)
 33. Varslot, T., Taraldsen, G.: Computer simulation of forward wave propagation in soft tissue. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 52(9), 1473–1482 (2005)