1 Introduction

Implicit surfaces have a long tradition in computer graphics as they are a powerful abstraction of many geometric models [6, 23] and as they provide construction operations on the underlying shapes either directly, e.g., using metaballs [5], or indirectly, e.g., using skeleton-based representations [2]. Implicit surfaces are commonly defined as an iso-contour of a 3D scalar field \(s({\varvec{x}}),\; {\varvec{x}}\in {{\mathbb {R}}}^3\), where a point \({\varvec{x}}\) lies inside the object if \(s({\varvec{x}}) < 0\) and outside if \(s({\varvec{x}})>0\).

Direct rendering of implicit surfaces, i.e., without conversion into a mesh, is, however, a non-trivial task and is commonly handled with ray-casting [26, 33, 44]. This involves finding the first root of \(s({\varvec{r}}(\lambda ))\) along a ray \({\varvec{r}}(\lambda ) ={\varvec{o}}+\lambda {\vec {\varvec{d}}}\) cast from a viewpoint \({\varvec{o}}\) in direction \({\vec {\varvec{d}}}\). For polynomial scalar fields s of degree \(\le 4\), several accurate or even analytic approaches for root finding exist, e.g., interval arithmetic [45], which can also be utilized by approximating s by polynomials of degree \(\le 4\) [28, 43] or Taylor expansions [45]. This often induces an approximation error that is hardly controllable, as finding a sufficiently accurate approximation of \(s({\varvec{r}}(\lambda ))\) is a very challenging task, especially when restricted to low-degree approximations. Alternatively, if the global or local Lipschitz constant of the scalar field is known, more accurate root finding algorithms can be used, e.g., LG-surfaces [29] and Sphere Tracing [26]. However, while Lipschitz constants are known when rendering, e.g., SDFs, they are generally not known a priori when rendering surfaces based on arbitrary implicit functions.
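To make the per-ray view concrete, the following minimal C++ sketch (with a hypothetical Vec3 type and a unit-sphere field as a stand-in for an arbitrary s) shows how restricting the scalar field to a ray yields the unary function whose first root is the visible intersection:

```cpp
#include <functional>

struct Vec3 { double x, y, z; };

// Example scalar field: a unit sphere, s < 0 inside, s > 0 outside.
double sphereField(const Vec3& p) {
    return p.x * p.x + p.y * p.y + p.z * p.z - 1.0;
}

// Restrict a scalar field s to the ray r(lambda) = o + lambda * d; the first
// root of the returned unary function is the visible ray-surface intersection.
std::function<double(double)> restrictToRay(
    const std::function<double(const Vec3&)>& s, Vec3 o, Vec3 d) {
    return [=](double lambda) {
        Vec3 p{o.x + lambda * d.x, o.y + lambda * d.y, o.z + lambda * d.z};
        return s(p);
    };
}
```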

In this paper, we propose a rendering approach for implicit surfaces that approximates the scalar field along rays with controllable accuracy and does not require any knowledge of Lipschitz constants. The only technical requirement is that the scalar field is globally \(C^0\)-continuous. The specific contributions of this paper are:

  (a) An efficient and accurate approach to find the first root along a ray, which combines an efficient Chebyshev approximation method, several pre-conditioning and post-processing stages, and an optional recursive root finding step to achieve as-optimal-as-possible scalar field approximations along rays,

  (b) a performance analysis of our method using scalar fields of varying types, e.g., polynomial and non-algebraic, regarding both computational aspects and achievable numerical accuracy, compared to existing rendering approaches, and

  (c) an open source CPU-based C++ implementation of the Chebyshev proxy construction process, the rendering approach, and an implicit surface rendering framework that also includes implementations of other common rendering approaches, namely LG-Surfaces [29], Sphere Tracing [26], Adaptive Marching Points [44], a local quartic polynomial approximation process [43], Segment Tracing [21], and a simple ray marching approach.

Limitations. Our method is not well-suited for functions that are defined piecewise, e.g., discrete Signed Distance Fields, as the large number of discontinuities in some of their derivative terms requires many recursions and high polynomial degrees, yielding a significant drop in runtime performance.

2 Related work

Implicit surfaces are described based on a position in space \({\textbf{x}}\in \mathbb {R}^3\) and an arbitrary scalar field \(S({\textbf{x}})\) [6], where the surface is some iso-contour of S, commonly \(S({\textbf{x}}) = 0\). We distinguish three ways to visualize implicit surfaces: (i) polynomial approximations that locally approximate the surface using piecewise polynomials of various degrees, (ii) rendering approaches that require prior knowledge of the Lipschitz constant of the scalar field, either locally or globally, and (iii) interval refinement methods using root isolation techniques.

Polynomial approximations. For polynomial scalar functions along rays up to degree 4, closed-form solutions can be applied [24], while for higher-degree polynomials no general analytic solutions can exist. Instead, for higher-degree and/or piecewise polynomial surfaces, numerical root finding approaches can be utilized, e.g., Bézier clipping [30, 38]. For arbitrary scalar fields, approximating polynomials can be constructed along a ray, to which various root finding methods are applied [28, 43], involving, e.g., Taylor expansion [45] or Bernstein polynomials [30]. However, increasing the degree of a monomial polynomial arbitrarily does not necessarily yield a better approximation due to Runge’s phenomenon [41], which is a fundamental limitation when constructing polynomial approximations using interpolation methods in existing rendering approaches. Our method addresses these limitations by utilizing Chebyshev polynomials, which do not suffer from Runge’s phenomenon and which can scale to much greater numerical accuracy for arbitrary implicit surfaces.

Lipschitz constant-based approaches. Knowing the Lipschitz continuity of the scalar function and its Lipschitz constant k significantly simplifies the root finding procedure. Kalra and Barr [29] utilized the Lipschitz continuity of the surface function and of its gradient to construct an approach in which surfaces are guaranteed to be intersected. If only a global Lipschitz constant is known, sphere tracing [26] can be applied, which provides a robust rendering approach that iterates along a ray based on the current function magnitude. Several extensions to sphere tracing have been proposed, e.g., using overrelaxation [32], locally defined Lipschitz constants [22], or nonlinearly deformed signed distance fields [42]. Sphere tracing-like approaches can, however, be arbitrarily slow for grazing rays, which can be diminished using local Lipschitz constants, e.g., using Segment Tracing [21]. Our method circumvents these limitations by not relying on a known Lipschitz constant.

Interval refinement. For arbitrarily defined polynomial functions, a robust ray tracing approach is to utilize interval-based approaches [14, 37], where the main goal is to perform a root isolation, i.e., finding an interval along the ray that contains a single root, notably the first root for surface rendering. This can be done in a variety of ways, e.g., using Budan’s theorem [16], Descartes’ rule [25] or Sturm’s theorem [48], but these approaches generally rely on the surface function being polynomial, or on monomial polynomial approximations of the surface function that may not be accurate for challenging surface functions. Several improvements over these approaches have been proposed in the past, e.g., using affine arithmetics to obtain tighter interval bounds [19], Taylor tests for root bracketing [40] or GPU-based algorithms [34]. Recently, Keeter [31] proposed an interval refinement approach that realizes a hierarchical approach to render implicit surfaces involving thousands of arithmetic operations efficiently on GPUs. Our method is able to work with traditional arithmetic operations, while preserving numerical robustness, by directly incorporating numerical error approximations into the rendering procedure.

3 Chebyshev proxy rendering

Rendering an implicit surface function \(s({\textbf{x}})\) can be formulated as finding intersections along a ray \({\varvec{r}}(\lambda ) ={\varvec{o}}+\lambda {\vec {\varvec{d}}}\) cast from a viewpoint \({\varvec{o}}\) in direction \({\vec {\varvec{d}}}\), where the intersections are the roots of \(s({\varvec{r}}(\lambda ))\) along the ray. Consequently, we can reformulate the problem of intersection testing along rays as a root search problem for an arbitrary unary function \(f:{\mathbb {R}}\rightarrow {\mathbb {R}}\). More specifically, for rendering problems we are only interested in intersections along a compact interval, i.e., intersections in front of the camera up to some maximum depth \(b<\infty \) and with a minimum distance \(a\ge 0\), which means that we need to find roots of an arbitrary function \(f:[a,b]\rightarrow {\mathbb {R}}\). However, finding roots of arbitrary functions is a challenging problem and, accordingly, we aim to transform the problem into a form that can be readily handled.

Our approach comprises five separate steps:

  (a) Constructing a Chebyshev proxy \(f_p\), which allows for robust root finding, see Sect. 3.1.

  (b) Finding the roots of the Chebyshev proxy \(f_p\), see Sect. 3.2.

  (c) Improving the pre-conditions for the approximation accuracy of \(f_p\) to enhance the numerical accuracy of the root finding for f, see Sect. 3.3.

  (d) Post-processing steps to refine the estimated root, see Sect. 3.4.

  (e) A recursive process to improve numerical accuracy in cases where extreme numerical conditions lead to visual artifacts, see Sect. 3.5.

3.1 Chebyshev proxy construction

In \({\mathbb {R}}\) the Weierstrass Approximation Theorem [49] guarantees the existence of a proxy function \(f_p\) for every \(C^0\)-continuous function f over a compact interval \([a,b]\subset {\mathbb {R}}\) with arbitrary precision \(\epsilon \), i.e., \( \left\| f-f_p \right\| _\infty \le \epsilon \), using a polynomial function of finite degree n, depending on \(\epsilon \). To construct \(f_p\), we utilize Chebyshev polynomials as these do not suffer from Runge’s phenomenon [41], i.e., increasing the degree results in better approximations, and as they can be efficiently constructed using DFTs. In general, Chebyshev basis polynomials of the first kind \(T_n\), of degree n, are recursively defined via

$$\begin{aligned} T_0(x) = 1;\; T_1(x) = x;\; T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x). \end{aligned}$$

Based on this orthogonal basis over \([-1,1]\), a Chebyshev polynomial \(p(x) = \sum _{i=0}^{n}c_i T_i(x)\), with Chebyshev coefficients \(C=\{c_0,\dots ,c_{n}\}\), can be constructed, which can be used for approximating a \(C^0\)-function f. The corresponding coefficients C are calculated by sampling f on a Chebyshev-Lobatto grid and performing a discrete Fourier transform (DFT) with \({\mathcal {O}}(n\log n)\) time complexity [8]. Sampling at the Chebyshev-Lobatto grid, defined over \([-1,1]\) as \(x_{k}=-\cos ({\frac{k\pi }{n}}),\, k=0,1,\dots , n\), guarantees uniform convergence for analytic functions with an approximation error \({\mathcal {O}}\left( e^{-n\mu }\right) \) [4], with n being the polynomial degree and \(\mu \) being a positive constant. Note that arbitrary compact finite intervals can be transformed to the interval \([-1,1]\) via an affine transformation.
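As an illustration, the following sketch computes the coefficients by sampling f on the Chebyshev-Lobatto grid and applying the cosine-transform formula directly; for clarity it uses the slow \({\mathcal {O}}(n^2)\) summation instead of the FFT-based \({\mathcal {O}}(n\log n)\) transform cited above, and the node ordering \(x_k=\cos (k\pi /n)\), i.e., the grid above traversed in reverse:

```cpp
#include <cmath>
#include <functional>
#include <vector>

// Compute Chebyshev coefficients c_0..c_n of f on [a,b] by sampling on the
// Chebyshev-Lobatto grid (assumes n >= 1); direct O(n^2) cosine sums for
// clarity, whereas a production implementation would use an FFT-based DCT.
std::vector<double> chebyshevCoefficients(
    const std::function<double(double)>& f, double a, double b, int n) {
    const double pi = std::acos(-1.0);
    std::vector<double> samples(n + 1);
    for (int k = 0; k <= n; ++k) {
        double xk = std::cos(k * pi / n);               // Lobatto node in [-1,1]
        double x  = 0.5 * (a + b) + 0.5 * (b - a) * xk; // affine map to [a,b]
        samples[k] = f(x);
    }
    std::vector<double> c(n + 1, 0.0);
    for (int i = 0; i <= n; ++i) {
        double sum = 0.0;
        for (int k = 0; k <= n; ++k) {
            double w = (k == 0 || k == n) ? 0.5 : 1.0;  // halved end weights
            sum += w * samples[k] * std::cos(i * k * pi / n);
        }
        c[i] = (2.0 / n) * sum;
        if (i == 0 || i == n) c[i] *= 0.5;              // boundary coefficients
    }
    return c;
}
```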

While the Weierstrass Approximation Theorem guarantees the existence of a minimal polynomial degree \(n^\text {opt}_{\epsilon _u}\) with an \(L_\infty \) approximation error of \(\epsilon _u\) for \(f:[a,b]\rightarrow {\mathbb {R}}\), there is no general approach to find this minimal degree. Therefore, heuristic approaches, e.g., [1, 3, 7, 9, 15, 18, 27, 35, 36, 46], exist to determine \(n_{\epsilon _u} \approx n^\text {opt}_{\epsilon _u}\), e.g., using the interstitial error [12] as an approximation of the \(L_\infty \) error.

In our proposed method, we utilize the approach of Aurentz and Trefethen [1], which exploits the round-off plateau behavior of the Chebyshev coefficients, has been well tested for a large variety of functions, and is integrated into the widely utilized Chebfun library [17]. Using \(n_{\epsilon _u}\), a uniquely determined proxy function \(f_p\) exists as

$$\begin{aligned} f_p = \sum _{i=0}^{n_{\epsilon _u}} c_i T_i,\text { with } \left\| f - f_p \right\| _\infty \lessapprox \epsilon _u. \end{aligned}$$
(1)

Machine Precision & Discretization. At this point, it is important to note that in computer science we always operate on the finite set \({\mathbb {M}}\) of machine numbers and that we deal with the machine approximation \(f_m:{\mathbb {M}}\rightarrow {\mathbb {M}}\) of the “real” function f given by the numerical implementation. As we do not address this level of approximation, we assume \(f_m=f\), i.e., \(f_m\) represents the “real” function whose nearest root we are looking for. We, therefore, use f and \(f_m\) for analytic aspects with infinite precision and numerical aspects with finite precision. Moreover, there are limits in transferring “ infinitely precise” analytical concepts to finite precision machine operators and representations, which we will address in the following. Specifically, there is always an underlying limitation relating to the achievable computational precision, e.g., based on the number and magnitude of coefficients.

Several heuristics exist to estimate this approximation accuracy limit \(\epsilon _a\) for Chebyshev proxies [12, 39], where we utilize the estimate proposed by Boyd [12] given as

$$\begin{aligned} \epsilon _a = n^2 \epsilon _m\underbrace{\max _{i\in [0,\dots ,n)}|c_i|}_{{{\text {mmc}}(f_p)}}\; \le \; \left\| f_p-f_m \right\| _\infty , \end{aligned}$$
(2)

where \({{\text {mmc}}(f_p)}\) is the maximum magnitude coefficient.
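A direct transcription of this estimate, assuming double precision, might look as follows:

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

// Boyd's estimate (Eq. 2) of the attainable accuracy limit of a Chebyshev
// proxy: eps_a = n^2 * eps_machine * mmc(f_p), where mmc(f_p) is the maximum
// magnitude coefficient of the proxy.
double attainableAccuracy(const std::vector<double>& c) {
    double mmc = 0.0;
    for (double ci : c) mmc = std::max(mmc, std::abs(ci));
    const double n = static_cast<double>(c.size()) - 1.0;  // polynomial degree
    return n * n * std::numeric_limits<double>::epsilon() * mmc;
}
```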

For our proxy construction process, we utilize the adaptive approach of Boyd [12] and the heuristic chopping process of Aurentz and Trefethen [1], which tries to achieve \( \left\| f_m-f_p \right\| _\infty \le \epsilon _u\) by appropriately setting the threshold for the round-off plateau detection. This round-off plateau is the result of the difference in magnitude of coefficients being limited by the machine precision \(\epsilon _m\), as any coefficient smaller than \({{\text {mmc}}(f_p)}\cdot \epsilon _m\) will (most likely) not affect the result of machine precision operations. Therefore, \(\epsilon _u\) cannot be chosen below a value that depends on \({{\text {mmc}}(f_p)}\), and the approximation error is bounded on both sides

$$\begin{aligned} \epsilon _a \le \left\| f_m-f_p \right\| _\infty \le \epsilon _u, \end{aligned}$$
(3)

where both limits depend on \({{\text {mmc}}(f_p)}\) [1, 12]. \({\text {mmc}}(f_p)\) is considered to be proportional to the magnitude of the function [12]

$$\begin{aligned} {\text {mmc}}(f_p)\propto {\text {mag}}_{f_m}(a,b), \end{aligned}$$
(4)

where \({\text {mag}}\) denotes the absolute magnitude of the function over the given interval. Under the assumption of Eq. 4, \(\epsilon _a\) and \(\epsilon _u\) strictly relate to \(\epsilon _m {\text {mag}}_{f_m}(a,b)\), i.e., \(\epsilon _u<\epsilon _m\cdot {\text {mag}}_{f_m}(a,b)\) cannot, in general, be achieved [12].

Since this proportionality is only partially understood for many cases, we evaluated the proportionality hypothesis for all test functions to ensure that it also applies to rendering problems. The determination of the maximum magnitude \({\text {mag}}_{f_m}(a,b)= \left\| f_m\vert _{[a,b]} \right\| _\infty \) is challenging for an arbitrary function, as it requires root finding for derivatives. We, therefore, evaluate the assumption indirectly using the maximum magnitude \({\text {mag}}_{f_p}(a,b)\) of the proxy \(f_p\) by applying a QR-based root search on the Chebyshev proxy’s derivative (see Sect. 3.2). We evaluated \(\log _{10}({{\text {mmc}}(f_p)}/{{\text {mag}}_{f_p}(a,b)})\) for all 16 functions from Fig. 2 and found a maximum difference of 0.2, i.e., a factor of about 1.6, occurring for the Bohemian Star test function (Fig. 2b), as shown in Fig. 1. Consequently, we assume that the proportionality is valid for ray casting polynomial surfaces, rational surfaces, analytic algebraic surfaces, algebraic surfaces, and non-algebraic surfaces, which we use for testing (see Sect. 4).

Fig. 1 The logarithmic per-pixel ratio \(\log _{10}({{\text {mmc}}(f_p)}/{{\text {mag}}_{f_p}(a,b)})\) for the Bohemian Star surface (Fig. 2b), estimated using the proxy

3.2 Root finding for Chebyshev proxies

After constructing the proxy function \(f_p\), we now need to find the roots of \(f_p\) for the intersection tests. This can be achieved using the companion matrix M, which is the uniquely defined matrix that has \(f_p\) as its characteristic polynomial. Consequently, the real eigenvalues of M are the real roots of \(f_p\). Commonly, for Chebyshev polynomials the eigenvalues of the companion matrix are computed via a QR algorithm [47], which estimates all roots of the polynomial simultaneously with time complexity \({\mathcal {O}}(n^3)\) [11]. Due to this computational complexity and the limited numerical accuracy for large matrices, reducing the polynomial degree has clear benefits. However, simply excluding higher polynomial terms is not feasible, as it would negatively affect the approximation accuracy.
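For a Chebyshev series, the companion-matrix analogue is the colleague matrix; a sketch of its construction, using the Eigen library's general eigensolver as a stand-in for the specialized QR algorithm of [47], could look as follows (assuming \(c_n \ne 0\) and degree \(\ge 2\)):

```cpp
#include <Eigen/Dense>
#include <algorithm>
#include <cmath>
#include <complex>
#include <vector>

// Real roots in [-1,1] of p = sum_i c[i] T_i via the eigenvalues of the
// colleague matrix, i.e., the Chebyshev-basis analogue of the companion
// matrix: a tridiagonal "multiplication by x" operator plus a rank-one
// correction in the last row.
std::vector<double> chebyshevRoots(const std::vector<double>& c) {
    const int n = static_cast<int>(c.size()) - 1;    // polynomial degree >= 2
    Eigen::MatrixXd A = Eigen::MatrixXd::Zero(n, n);
    A(0, 1) = 1.0;                                   // x*T_0 = T_1
    for (int k = 1; k < n - 1; ++k) {                // x*T_k = (T_{k-1}+T_{k+1})/2
        A(k, k - 1) = 0.5;
        A(k, k + 1) = 0.5;
    }
    A(n - 1, n - 2) += 0.5;
    for (int j = 0; j < n; ++j)                      // rank-one correction row
        A(n - 1, j) -= c[j] / (2.0 * c[n]);
    Eigen::EigenSolver<Eigen::MatrixXd> es(A);
    std::vector<double> roots;
    for (int i = 0; i < n; ++i) {
        std::complex<double> ev = es.eigenvalues()[i];
        // keep numerically real eigenvalues inside the canonical interval
        if (std::abs(ev.imag()) < 1e-12 && std::abs(ev.real()) <= 1.0 + 1e-12)
            roots.push_back(ev.real());
    }
    std::sort(roots.begin(), roots.end());
    return roots;
}
```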

We utilize the subdivision approach proposed by Boyd [10, 11] that reduces the polynomial degree during the calculation. This approach relies on the concept that, in general, a function \(f_m\) on \([a,b]\) approximated by \(f_p\) with degree n can be alternatively approximated on the intervals \([a,d]\) and \([d,b]\), with \(d\in (a,b)\), with lower degree and similar approximation accuracy. This subdivision process is especially useful for functions f that are \(C^\infty \) everywhere on \([a,b]\) besides a few \(C^k\) points. In this case, the subdivision ultimately excludes the \(C^k\) points and yields intervals over which f is \(C^\infty \) and for which the polynomial degree for approximation is significantly reduced [12].

More precisely, \(f_m:[a,b]\rightarrow {\mathbb {M}}\) is transformed to the canonical interval \([-1,1]\) using an affine mapping [12, 17]. \([-1,1]\) is split at the heuristically chosen value \(d = -0.004849834917525\) [17], and two new approximation functions \(f_p^l\) and \(f_p^r\) are determined over \(\left. f_p\right| _{[-1,d]}\) and \(\left. f_p\right| _{[d,1]}\), respectively. The new proxies \(f_p^l\) and \(f_p^r\) are constructed by evaluating \(f_p\) on a Chebyshev-Lobatto grid of degree n, equal to the degree of \(f_p\), on each sub-interval, with time complexity \({\mathcal {O}}(n^2)\), and applying a DFT, with time complexity \({\mathcal {O}}(n\log n)\), to find the respective coefficients [17]. See Boyd and Gally [13] for further numerical properties of Chebyshev root searching.
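Resampling \(f_p\) on the sub-interval grids requires evaluating the Chebyshev series at arbitrary points, which is typically done with Clenshaw's recurrence; a minimal sketch:

```cpp
#include <vector>

// Evaluate a Chebyshev series sum_i c[i] T_i(x) for x in [-1,1] using
// Clenshaw's recurrence; O(n) per point, so resampling the proxy on an
// (n+1)-point Lobatto grid costs the O(n^2) stated above.
double clenshaw(const std::vector<double>& c, double x) {
    double b1 = 0.0, b2 = 0.0;  // recurrence values b_{k+1}, b_{k+2}
    for (int k = static_cast<int>(c.size()) - 1; k >= 1; --k) {
        double bk = c[k] + 2.0 * x * b1 - b2;
        b2 = b1;
        b1 = bk;
    }
    return c[0] + x * b1 - b2;
}
```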

3.3 Improving pre-conditions

As discussed in Sect. 3.1, the error bounds \(\epsilon _a\) and \(\epsilon _u\) both depend on the maximum magnitude coefficient of the polynomial proxy \({\text {mmc}}(f_p)\) and, as \({\text {mmc}}(f_p) \propto {\text {mag}}_{f_m} (a,b)\), they both depend on the maximum magnitude of \(f_m\). It is important to keep in mind that uniformly scaling the function, e.g., searching for roots of \(g:x\mapsto \frac{1}{1000}f(x)\), does not improve the root searching process, as the errors are still bound in the same relative ratio to the maximum magnitudes. Consequently, to achieve smaller \(\epsilon _a\) and \(\epsilon _u\), we have to exclude from the root search interval either regions of large relative magnitude, to reduce \({\text {mmc}}(f_p)\), or regions that are difficult to approximate using Chebyshev proxies, i.e., regions containing points where f is only \(C^0\), which require high Chebyshev proxy degrees.

Since \(f_m:[a,b]\rightarrow {\mathbb {M}}\) has an (unknown) machine Lipschitz constant, \({\text {mag}}_{f_m}(a,b)\) can be reduced by lowering \(f_m(a)\), \(f_m(b)\), and \(b-a\). Reducing \(f_m(a)\) and \(f_m(b)\) can be efficiently achieved by excluding segments on either end of the interval that do not contain the first root of the function. Moreover, reducing the root search to a sub-interval \([a_s,b_s] \subsetneq [a,b]\) always reduces the interval length \(b-a\).

As we aim to find the first root \(\xi _0\), we shift \(f_m\) by the approximation error \(\epsilon _a\) of \(f_p\) to guarantee that \(\xi _0\) will not be missed when using \(f_p\) for the root search. This also holds if \(\xi _0\) is a double root of \(f_m\). The shifted function \(f_p^\epsilon :[a,b]\rightarrow {\mathbb {M}}\) is constructed by directly modifying the Chebyshev coefficients \(c_i^p\) of the proxy function \(f_p\) of \(f_m\):

$$\begin{aligned} f_p^\epsilon (x) = \sum _{i=0}^n c^\epsilon _i T_i(x), \quad c_i^\epsilon = {\left\{ \begin{array}{ll} c_i^p, & i \ne 0,\\ c_i^p - s\,{\text {sgn}}(f_m(a))\,\epsilon , & \text {else}. \end{array}\right. } \end{aligned}$$
(5)

Note that if \(|f_p(a)|\le \epsilon \), then a is already the approximate first root and the process can be stopped.

Performing this shift requires estimating \(\epsilon _a= \left\| f_m-f_p \right\| _\infty \), which we achieve using Eq. 2. Subsequently, we shift \(f_p\) by \(s\cdot \epsilon _a\), where empirical experiments reveal that \(s = 100\) is sufficient in all our test cases, i.e., no roots of \(f_m\) are missed.
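Since only the constant coefficient changes, the shift is a one-line modification of the proxy; a sketch (assuming a non-empty coefficient vector):

```cpp
#include <vector>

// Pre-conditioning shift of Eq. 5: only the constant Chebyshev coefficient
// c_0 changes, moving the proxy by s*eps against the sign of f_m(a) so that
// the first root of f_m (even a double root) cannot be missed.
std::vector<double> shiftProxy(std::vector<double> c, double fmAtA,
                               double eps, double s = 100.0) {
    const double sgn = (fmAtA > 0.0) ? 1.0 : (fmAtA < 0.0 ? -1.0 : 0.0);
    c[0] -= s * sgn * eps;
    return c;
}
```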

However, this shift is not without drawbacks. The first root \({{\tilde{\xi }}}_0\) of \(f_p^\epsilon \) is an estimate for the actual first root \(\xi _0\) of \(f_m\) that is located left of \(\xi _0\). Depending on the shift value \(s\cdot \epsilon _a\), this offset may be large. For example, \(f(x) = e^{-x}-0.5, x\in [-100,100],\) yields an estimated approximation error of \(\epsilon _a\approx 5.9\cdot 10^{27}\). Moreover, the shift can generate a pseudo-root, i.e., \({{\tilde{\xi }}}_0\) does not correspond to \(\xi _0\). Consequently, we may have \({{\tilde{\xi }}}_0\ll \xi _0\) in specific cases. These problems are addressed in our post-processing (Sect. 3.4) and in our recursive approach (Sect. 3.5).

3.4 Post-processing

Given a root \({{\tilde{\xi }}}\) of the proxy \(f_p\), or \(f_p^\epsilon \), two main categories of errors exist that we try to diminish via post-processing: (i) errors on the proxy, i.e., \({{\tilde{\xi }}}\) may not be the exact first root of \(f_p\), and (ii) errors of the approximation, i.e., \({{\tilde{\xi }}}\) may not be the exact first root of \(f_m\). We utilize two separate approaches, namely iterative refinement using Newton’s method and incremental shifting.

Iterative Refinement. Newton’s method is commonly used to polish estimated roots of polynomial functions [12]. This approach improves the estimate of the first root \({{\tilde{\xi }}}_0={{\tilde{\xi }}}\) by iteratively estimating a new approximate root \({{\tilde{\xi }}}_{n+1}\) for \(f_m\), as

$$\begin{aligned} {{\tilde{\xi }}}_{n+1} = {{\tilde{\xi }}}_n - \frac{f_m({{\tilde{\xi }}}_n)}{f_m^\prime ({{\tilde{\xi }}}_n)}, \end{aligned}$$
(6)

until a convergence criterion has been reached, e.g., \(|f({{\tilde{\xi }}}_{n+1})|\le \epsilon \), where the iteration can be applied either to the proxy \(f_p\) or to \(f_m\) itself.

Applying Newton’s method to \(f_p\) is straightforward [12], as the derivative of \(f_p\) can be calculated directly from the coefficients of \(f_p\), but it does not necessarily improve the root’s accuracy relative to \(f_m\), as the approximation accuracy of \(f_p\) for \(f_m\) remains unchanged (see also Sect. 4). Applying Newton’s method to \(f_m\) directly requires a derivative estimate, e.g., using finite differences, which may not be sufficiently accurate to achieve convergence. Moreover, there is no guarantee that starting the Newton process at \({{\tilde{\xi }}}\) will converge toward the corresponding root of \(f_m\), i.e., it may diverge or converge to a different root, potentially outside the search interval.
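A sketch of the proxy-side variant, deriving the coefficient series of \(f_p'\) with the standard Chebyshev differentiation recurrence and reusing the clenshaw helper from Sect. 3.2; the iteration count, tolerance, and clamping are illustrative choices, not the paper's exact settings:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

double clenshaw(const std::vector<double>& c, double x);  // see Sect. 3.2

// Coefficients of the derivative of a Chebyshev series via the recurrence
// b_{k-1} = b_{k+1} + 2k c_k, with b_0 halved afterward.
std::vector<double> chebyshevDerivative(const std::vector<double>& c) {
    const int n = static_cast<int>(c.size()) - 1;
    if (n < 1) return {0.0};                         // constant: zero derivative
    std::vector<double> b(n, 0.0);
    for (int k = n; k >= 1; --k)
        b[k - 1] = ((k + 1 < n) ? b[k + 1] : 0.0) + 2.0 * k * c[k];
    b[0] *= 0.5;
    return b;
}

// Newton polishing of an estimated root on the proxy f_p, kept inside [-1,1].
double newtonPolish(const std::vector<double>& c, double x0,
                    int maxIter = 20, double tol = 1e-15) {
    const std::vector<double> d = chebyshevDerivative(c);
    double x = x0;
    for (int it = 0; it < maxIter; ++it) {
        double fx = clenshaw(c, x);
        if (std::abs(fx) <= tol) break;
        double dfx = clenshaw(d, x);
        if (dfx == 0.0) break;                       // avoid division by zero
        x = std::clamp(x - fx / dfx, -1.0, 1.0);     // stay in search interval
    }
    return x;
}
```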

Considering the shifted function \(f_p^\epsilon \), the approximate root \({{\tilde{\xi }}}^\epsilon \) is generally left of the first root \(\xi \) of \(f_m\), if \({{\tilde{\xi }}}^\epsilon \) is not a pseudo-root. Accordingly, Newton’s method can yield significant improvements, but it may, depending on the specific configuration, also reduce the accuracy considerably, see Sect. 4.

Incremental Shifting. This approach successively evaluates \({{\tilde{\xi }}}_+\) or \({{\tilde{\xi }}}_-\), i.e., the next or previous machine number relative to \({{\tilde{\xi }}}\), to improve the root estimate of \(f_m\), or f, i.e.,

$$\begin{aligned} {{\tilde{\xi }}}^{i+1} = {\left\{ \begin{array}{ll} {{\tilde{\xi }}}^i_+, & |f({{\tilde{\xi }}}^i_+)|< |f({{\tilde{\xi }}}^i)|,\\ {{\tilde{\xi }}}^i_-, & |f({{\tilde{\xi }}}^i_-)| < |f({{\tilde{\xi }}}^i)|,\\ {{\tilde{\xi }}}^i, & \text {else}, \end{array}\right. } \end{aligned}$$
(7)

which is repeated until \({{\tilde{\xi }}}^{i+1}={{\tilde{\xi }}}^i\). This process is guaranteed to improve the error of the root estimate, but it may get stuck in a local minimum. Applying this process to f may incur significant computational costs, as the offset \(\xi -{{\tilde{\xi }}}\) can be large and, thus, may require \(\gg 10^6\) iterations, while applying it to \(f^\epsilon _p\) requires only a few iterations. Therefore, our root finding process applies incremental shifting to the proxy only, see Sect. 3.5.
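In code, stepping to a neighboring machine number is exactly what std::nextafter provides; a sketch of Eq. 7 for an arbitrary callable:

```cpp
#include <cmath>
#include <functional>
#include <limits>

// Incremental shifting (Eq. 7): step to the neighboring machine number as
// long as |f| decreases, stopping at a local minimum of |f|.
double incrementalShift(const std::function<double(double)>& f, double xi) {
    double fx = std::abs(f(xi));
    for (;;) {
        const double up   = std::nextafter(xi,  std::numeric_limits<double>::infinity());
        const double down = std::nextafter(xi, -std::numeric_limits<double>::infinity());
        const double fUp = std::abs(f(up)), fDown = std::abs(f(down));
        if      (fUp   < fx) { xi = up;   fx = fUp;   }
        else if (fDown < fx) { xi = down; fx = fDown; }
        else break;
    }
    return xi;
}
```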

3.5 Recursive proxy root finding

The main objective in rendering implicit surfaces is visual quality and correctness; see Sect. 4.2.2. It is, however, hardly possible to robustly detect qualitative failure cases by algorithmic or numerical means. Specifically, locally nearly flat regions or multiple roots of the per-ray scalar function can cause strong visual artifacts; see Sect. 4.3.

We, therefore, propose a manually activatable root searching algorithm that recursively combines the pre-conditioning via shifting (Sect. 3.3), the proxy root finding using Chebyshev approximation (Sect. 3.1), and post-processing refinement procedures (Sect. 3.4), as depicted in Algorithm 1. This recursive procedure works by splitting the initial search interval into more granular intervals that can be processed at higher approximation accuracies. As we are only interested in finding the first root along a ray, we perform this recursion in a depth-first manner, i.e., we evaluate the left-most intervals first, as we can stop the entire process once the first root of \(f_m\) has been found.

We search for sub-intervals where \(f_p\) provides a more accurate approximation of \(f_m\) and that potentially contain a root. For example, given a function \(f_m\) over [a, b] with \(f_m(a) = 100\) and an approximation accuracy of \(\epsilon = 1\), then, instead of searching for roots of \(f_m\) directly, searching for intervals where \(|f_m(x)| \le \epsilon \) and subsequently re-approximating \(f_m\) over these intervals yields a more accurate and efficient root search in most cases.

To perform this interval division, we shift \(f_m\) by the approximation error of \(f_p\) over the entire interval by

$$\begin{aligned} \sigma = s \max \left\{ \epsilon _u, n^2 \epsilon _m {\text {mmc}}(f_p)\right\} , \end{aligned}$$
(8)

where \(\epsilon _a=n^2 \epsilon _m {\text {mmc}}(f_p)\) (see Eq. 2) is a lower bound for \( \left\| f_p-f_m \right\| _\infty \) and s is set to 100; see also Sect. 3.3. A root of \(f_p^\sigma \) is not a root of f, but indicates a location at which \(|f_p|\) drops below the threshold \(|\sigma |\). The roots of \(f_p^\sigma \) therefore delimit sub-intervals of lower magnitude, which, in turn, increase the achievable approximation accuracy. For now, we assume that \(f_m(a)>\sigma \).

Algorithm 1 Recursive root finding algorithm for \(f:[a,b]\rightarrow \mathbb {M}\) using a user-defined precision threshold parameter \(\epsilon _u\) and a global scaling parameter s.

If no root was found for \(f_p^\sigma \), then no root can exist for \(f_m\), as the function remains above the threshold \(\sigma \) in absolute magnitude everywhere and has no sign changes. If a single root \({\bar{\xi }}_0\) is found for \(f_p^\sigma \), this marks the transition from \( \left| f_p \right| > \sigma \) to \( \left| f_p \right| <\sigma \). In such cases, a new root search can be performed on the interval \([{\bar{\xi }}_0,b]\), as \([a,{\bar{\xi }}_0)\) cannot contain a root of \(f_m\). If two roots \({\bar{\xi }}_0\) and \({\bar{\xi }}_1\) are found for \(f_p^\sigma \), \( \left| f_p \right| <\sigma \) holds only between \({\bar{\xi }}_0\) and \({\bar{\xi }}_1\). Consequently, a root of \(f_m\) can only occur in this interval, and a new root search is performed on \([{\bar{\xi }}_0,{\bar{\xi }}_1]\). Finally, if there exist three or more roots \(\{{\bar{\xi }}_0,{\bar{\xi }}_1,{\bar{\xi }}_2,\dots \}\) for \(f_p^\sigma \), then, analogously, roots can occur on the intervals \([{\bar{\xi }}_0,{\bar{\xi }}_1]\) and \([{\bar{\xi }}_2,b]\), where a root search is performed on each sub-interval.

Overall, this process recursively narrows down onto the first root, as given in Algorithm 1. As the approximation gets more accurate with each subdivision, it would be possible to reduce the scaling parameter s in each iteration. However, there was no practical benefit in our experiments and, thus, we use \(s=100\) in all cases.

In the case of \(|f_m(a)|\le \sigma \), the argumentation above is inverted, e.g., if a single root \({\bar{\xi }}_0\) was found, then a root exists in the interval \([a,{\bar{\xi }}_0]\) instead of \([{\bar{\xi }}_0,b]\). However, if no roots were found and \(|f(a)|\le \sigma \), then we assume that a already was a root of \(f_m\).
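The following condensed sketch mirrors this case distinction; Field, ShiftedProxy, buildShiftedProxy, and sortedRoots are hypothetical stand-ins for the machinery of Sects. 3.1-3.3, not the names used in our implementation:

```cpp
#include <cmath>
#include <functional>
#include <optional>
#include <vector>

using Field = std::function<double(double)>;

// Hypothetical helpers standing in for the machinery of Sects. 3.1-3.3:
struct ShiftedProxy { double sigma; std::vector<double> coeffs; };
ShiftedProxy buildShiftedProxy(const Field& fm, double a, double b,
                               double epsU, double s);       // f_p^sigma, Eq. 8
std::vector<double> sortedRoots(const ShiftedProxy& p);      // Sect. 3.2, in [a,b]

// Condensed sketch of the recursion in Algorithm 1; real code additionally
// applies the post-processing of Sect. 3.4 and limits the recursion depth.
std::optional<double> recursiveFirstRoot(const Field& fm, double a, double b,
                                         double epsU, double s) {
    ShiftedProxy p = buildShiftedProxy(fm, a, b, epsU, s);
    std::vector<double> xi = sortedRoots(p);                 // roots of f_p^sigma
    if (std::abs(fm(a)) > p.sigma) {
        if (xi.empty()) return std::nullopt;   // |f_p| > sigma on all of [a,b]
        if (xi.size() == 1)                    // transition into |f_p| < sigma
            return recursiveFirstRoot(fm, xi[0], b, epsU, s);
        // |f_p| < sigma only on [xi0, xi1]: search there first (depth-first),
        // then continue behind the next downward transition xi2, if any.
        if (auto r = recursiveFirstRoot(fm, xi[0], xi[1], epsU, s)) return r;
        if (xi.size() >= 3) return recursiveFirstRoot(fm, xi[2], b, epsU, s);
        return std::nullopt;
    }
    // |f_m(a)| <= sigma: the argumentation is inverted (see text).
    if (xi.empty()) return a;                  // a itself is reported as a root
    return recursiveFirstRoot(fm, a, xi[0], epsU, s);
}
```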

Fig. 2 Evaluated surfaces, with normals color coded

4 Evaluation

Implementation and system. We implemented our proposed approach, including the adaptive Chebyshev approximation, in an open source C++ program included in the supplementary material. We perform our evaluations on a system with an AMD Ryzen 3950X CPU, an Nvidia RTX 3090 GPU, and 64 GiB of host memory. The actual rendering is performed on the CPU using double precision arithmetic, while the GPU is utilized only to visualize the results. Note that there are no technical limitations that would prevent implementing our procedure on a GPU; however, due to the limited double precision arithmetic performance of consumer GPUs, such an implementation would not be widely useful. All images are ray traced at a \(1920\times 1080\) pixel resolution with no antialiasing or subsampling applied.

Test functions. The 17 implicit surfaces used for the evaluation are shown in Figs. 2 and 7. Their definitions are given in Appendix B, and meta information for the functions is provided in Table 1. The test surface functions fall into five classes: (a) polynomial surfaces (\(C^\infty \) everywhere) that can be represented exactly using Chebyshev proxies, (b) rational surfaces (\(C^\infty \) outside sparse singularities), e.g., using division operations, (c) analytic algebraic surfaces (\(C^\infty \)), (d) algebraic surfaces (\(C^\infty \) outside sparse singularities), utilizing, e.g., the square root function, and (e) non-algebraic (transcendental) surfaces that may be only \(C^0\) in large regions of the image, e.g., using \({\text {max}}\) and \({\text {abs}}\).

Evaluation procedure. The evaluation consists of three distinct parts: (1) Evaluation of the proposed rendering approach on its own. This comprises the evaluation of (1.a) the required polynomial degrees and the resulting coefficient distributions (Sect. 4.1.1), and (1.b) the computational performance and effects of post-processing (Sect. 4.1.4). (2) Comparisons of the proposed rendering approach to existing rendering techniques, including (2.a) a discussion of the various visible failure modes occurring in the rendering of implicit surfaces in Sect. 4.2.1, and (2.b) a quantitative comparison against baseline methods separated by the five categories of implicit surfaces (Sect. 4.2.2). (3) An evaluation of our recursive root searching approach applied to the Bohemian Star function in Sect. 4.3.

Precision control. Our proposed rendering technique depends on the user-desired precision parameter \(\epsilon _u^*\), which transforms to \(\epsilon _u=10^{\epsilon _u^*}\epsilon _m\) (see Sect. 3.1). While \(\epsilon _u^*=0\) would be ideal in all cases, the process is bound by the approximation accuracy \(\epsilon _a\) (see Eq. 3). Thus, a too small \(\epsilon _u^*\) enforces infeasibly high polynomial degrees. We, therefore, apply a heuristic by choosing \(\epsilon _u^*\) based on the portion of the image that is not \(C^\infty \), as these regions consume the majority of computational resources. For surface functions that are \(C^\infty \) (besides a few points; Fig. 2j), we set \(\epsilon _u^*= 1\), for functions with discontinuities in small regions, we set \(\epsilon _u^*= 4\), and for functions with discontinuities in most of the image, i.e., the cube function (see Fig. 2p), we set \(\epsilon _u^*= 6\). Note that for polynomial functions of low degree, e.g., the sphere function (Fig. 2a), the underlying function can be represented exactly using a Chebyshev proxy \(f_p\), and thus the choice of \(\epsilon _u^*\) does not matter.

Table 1 Meta information for the test surfaces: User desired precision \(\epsilon _u\) in terms of machine precision (\(\epsilon _u=10^{\epsilon _u^*}\epsilon _m\)), the estimates of the dynamic range \({\text {dr}}\) and the Lipschitz constant k using a polynomial proxy. Coefficients: The number of polynomial coefficients of the proxy function as median (med), lower (\(Q_1\)) and upper (\(Q_3\)) quartile, and maximum (\(\max \)) over all pixels

4.1 Evaluation of the proposed method

We evaluate our method regarding different properties specific to our method, focusing on both numerical aspects, e.g., the polynomial degrees required for different functions, as well as computational aspects, e.g., the computational cost of constructing Chebyshev proxies.

4.1.1 Polynomial degree distributions

Regarding the polynomial degree of the proxy \(f_p\), we need to consider both the maximum degree over all pixels, which reflects the “highest approximation effort” for a pixel-ray, and the distribution of the degrees, which relates more to the average approximation effort for a surface. As mentioned before, using a Chebyshev-Lobatto grid guarantees uniform convergence for analytic functions with an approximation error \({\mathcal {O}}\left( e^{-n\mu }\right) \) [4], with n being the polynomial degree and \(\mu \) being a positive scaling coefficient. Approximating non-analytic functions, i.e., functions that are not \(C^\infty \) everywhere, is more challenging [12]. Consequently, functions that are polynomial, e.g., a sphere, require only few coefficients to be approximated, whereas \(C^0\)-continuous non-algebraic functions require significantly more coefficients.

Table 1 describes the distribution of polynomial degrees across the different surface functions, revealing the strong variation in the required polynomial degree. In general, we observe that the polynomial surfaces require only very low polynomial degrees, as they can be represented exactly using a Chebyshev proxy, which is in line with the expected behavior. Analytic surfaces require moderate polynomial degrees everywhere, with a relatively low ratio of the maximum to the median polynomial degree, which is in line with the expected behavior, as the function is only difficult to approximate in some regions. Rational surfaces require higher polynomial degrees everywhere and exhibit a much higher ratio of maximum to median polynomial degree. This behavior is exacerbated for algebraic surfaces, e.g., the Morph surface (Fig. 2j) exhibits a relatively low median degree of 25, but requires at worst a degree 5755 polynomial, where the high-degree polynomials occur for rays close to the four points at which the function is only \(C^0\)-continuous, see Fig. 2j. Finally, for non-algebraic surfaces the overall required polynomial degree increases significantly, with similar outliers in the maximum polynomial degree, e.g., the Cut (min) surface (Fig. 2n) requires high-degree approximations along the seam between the two input surfaces, shifting the overall median degree up to 508, while yielding a significantly lower approximation accuracy than the Morph surface (Fig. 2j).

Table 2 Accuracy evaluation of some example functions. \({{\tilde{\xi }}}_N\) is the result of applying Newton’s method to \({{\tilde{\xi }}}\) on \(f_m\). \(Q_1\) and \(Q_3\) denote the lower and upper quartile values and med the median for the signed distribution of errors. The \(\max \) value denotes the absolute maximum magnitude value and its sign
Fig. 3 Computational cost of the proxy construction in milliseconds per ray for our proposed rendering technique

Fig. 4 Computational cost of the root search, excluding proxy construction, in milliseconds per ray for our proposed rendering technique

4.1.2 Proxy construction

The computational costs of the proxy construction are mostly determined by the sampling of the surface function on a Chebyshev-Lobatto grid and the subsequent DFT. In our proposed rendering approach, we utilize the adaptive Chebyshev proxy construction of Boyd [12], where the function is evaluated on Chebyshev-Lobatto grids of size \(2^k+1\), starting with \(k=4\), with a subsequent chopping of the coefficients using the approach of Aurentz and Trefethen [1]. Performing the DFT on the Chebyshev-Lobatto samples has time complexity \({\mathcal {O}}(n\log n)\), independent of the computational complexity of f. Accordingly, the computational requirements of constructing the proxy function are tightly coupled to the polynomial degree of \(f_p\). Furthermore, as our test surface functions are computationally efficient to evaluate, the DFT dominates the overall computational requirements of the proxy construction in our experiments. For an overview of the computational requirements of each method see Fig. 3; for more detailed results for each individual implicit function, please see the supplementary material.
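A condensed sketch of this adaptive loop, reusing chebyshevCoefficients from Sect. 3.1 and with chopIndex as a hypothetical stand-in for the plateau detector of Aurentz and Trefethen [1]:

```cpp
#include <functional>
#include <vector>

std::vector<double> chebyshevCoefficients(
    const std::function<double(double)>& f, double a, double b, int n);
int chopIndex(const std::vector<double>& c);  // hypothetical plateau detector;
                                              // returns -1 if no plateau found

// Adaptive proxy construction: sample on grids of size 2^k + 1, starting with
// k = 4, and stop once the coefficient tail exhibits a round-off plateau.
std::vector<double> adaptiveProxy(const std::function<double(double)>& f,
                                  double a, double b, int maxK = 13) {
    for (int k = 4; k <= maxK; ++k) {
        std::vector<double> c = chebyshevCoefficients(f, a, b, 1 << k);
        int cut = chopIndex(c);
        if (cut >= 0) {                 // plateau found: chop trailing terms
            c.resize(cut + 1);
            return c;
        }
    }
    return chebyshevCoefficients(f, a, b, 1 << maxK);  // degree cap reached
}
```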

4.1.3 Root finding

Regarding the computational performance of the root search, we observe that the overall computational cost mainly depends on the polynomial degree of \(f_p\) and not on the underlying surface function. However, the overall computational costs are significantly higher due to the \({\mathcal {O}}(n^3)\) complexity of the QR decomposition, which dominates the \({\mathcal {O}}(n\log n)\) of the DFT in the proxy construction. For an overview of the computational requirements of each method see Fig. 4; for more detailed results for each individual implicit function, please see the supplementary material.

4.1.4 Post-processing

To assess the impact of the post-processing, we investigate three exemplary functions, i.e., the polynomial surface Sphere, the rational surface Blend II, and the algebraic surface Union (soft), by evaluating the implicit surface function f at the estimated roots \({{\tilde{\xi }}}\) and \({{\tilde{\xi }}}^\epsilon \) found for the proxy \(f_p\) and the shifted proxy \(f_p^\epsilon \) function, respectively, see Table 2. Note that the quartiles \(Q_1\) and \(Q_3\) indicate errors away from and toward the ray origin, respectively, as the functions are positive outside the surface. For the Sphere surface, the error without any shifting is very low, i.e., in the order of \(\epsilon _m\), where applying the post-processing yields a median error of exactly zero. Finding the root for the shifted function \(f_p^\epsilon \) yields a positive error, as the root is shifted toward the ray origin. Applying the post-processing yields errors identical to the non-shifted and post-processed function.

For the Blend II surface, the behavior is similar for both the non-shifted and the shifted variant, i.e., the error for the non-shifted variant is close to machine precision, whereas shifting the function yields a similar increase in error magnitude of about 6 orders of magnitude. We see that applying a Newton iteration-based post-processing to the shifted function, yielding the root estimate \({{\tilde{\xi }}}_N\), does not reach the same accuracy as for the non-shifted variant of the Blend II surface; in fact, the median error magnitude increases by about an order of magnitude.

For the Union (soft) surface, the error of the non-shifted variant is significantly higher than for the previous two functions and can reach \(f({{\tilde{\xi }}})=-8.76 \times 10^{-4}\) in outlier cases. The shifted variant yields a smaller relative absolute error than the non-shifted variant, i.e., the error only increases by 3 orders of magnitude regarding the median, instead of 4 or 8. Similar to the other examples, the estimated root is also pushed toward the ray origin, and applying the post-processing yields a significant improvement in the error magnitudes for both variants regarding \(Q_1\), median, and \(Q_3\), but the maximum error magnitude increases significantly. This behavior indicates that, for this specific function, the estimated root \({{\tilde{\xi }}}\) of \(f_p\) is not a good starting point for a Newton-based post-processing and generates outliers with significantly higher error.

Overall, applying the shift to the proxy function yields the expected behavior, i.e., the estimated intersection is pushed toward the ray origin, and applying a Newton iteration-based post-processing significantly increases the accuracy in most cases. However, the post-processing, also in combination with shifting, does not robustly improve the accuracy in all cases. Accordingly, the post-processing should be combined with a sanity check, i.e., verifying that the absolute magnitude of f actually decreased after applying the post-processing.
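Such a sanity check is a one-liner; a sketch:

```cpp
#include <cmath>
#include <functional>

// Accept the polished root only if it actually reduced |f_m|; otherwise keep
// the original estimate.
double acceptIfImproved(const std::function<double(double)>& fm,
                        double xi, double xiPolished) {
    return std::abs(fm(xiPolished)) < std::abs(fm(xi)) ? xiPolished : xi;
}
```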

4.2 Comparison against baseline methods

Table 3 Comparison of the requirements of the different rendering techniques regarding Lipschitz constants, derivatives, and the number of user-supplied hyperparameters that need to be adjusted on a per function basis for optimal results
Fig. 5 Examples of the different qualitative errors observed in ray-surface intersections for the Bohemian Star function (see Fig. 2b). The left image (using Ray Marching) displays a jagged outline of the surface and a discontinuous surface. The middle image (using Ray Marching) displays large regions of missed surface. The right image (using AMP-Taylor) displays significant amounts of false surfaces

Fig. 6 The Bohemian Star surface rendered using different approaches, with surface normals colorized

To compare our method against existing approaches, we distinguish existing methods based on their requirements regarding prior knowledge of Lipschitz constants and derivatives, and the number of tuning parameters to be specified per surface function; see Table 3. We compare against two methods that do not require any prior knowledge, i.e., ray marching (Ray March) and Adaptive Marching Points (AMP-Sign) [44], which uses a sign test to isolate roots. All other methods require either Lipschitz constants (Sphere Tracing (Sphere) [26] and Segment Tracing (Segment) [21]), derivatives (Adaptive Marching Points (AMP-Taylor) [44], using a Taylor test to isolate roots, and a piecewise polynomial approach (Sherstyuk) [43]), or both (the LG Surfaces approach (LG) [29]).

In general, there is no direct way to estimate the Lipschitz constant of a given black-box function \(f:[a,b]\rightarrow {\mathbb {M}}\), i.e., to find the maximum derivative of f, besides finding all roots of \(f^{\prime \prime }\) on [a, b]. In other words, computing the Lipschitz constant of f is as expensive as computing all roots of f. Consequently, it is not possible to apply methods that require Lipschitz constants, such as Sphere Tracing, without a second method that does not require Lipschitz constants to estimate them. Note that while approximations could be used for the Lipschitz constant, significantly underestimating it would lead to missed surface intersections, while overestimating it would increase the computational costs. Accordingly, using an accurate value for the Lipschitz constant is important for a fair comparison with baseline methods that rely on this constant.

Exclusively for the purpose of comparing with methods that require Lipschitz constants, e.g., Sphere Tracing, we construct a Chebyshev proxy \(f_p\) of degree \(n_{\epsilon _u}\) for all surface functions f and estimate their Lipschitz constants using our method for root finding. Our results indicate that the high root finding accuracy of our approach yields Lipschitz constants with comparably marginal errors.

Table 4 Visual evaluation of the results of the different rendering techniques for the different surface functions. Each cell indicates whether there are visual deformations of the surface, e.g., a differing surface boundary, holes in the surface, or spurious surface elements displayed

4.2.1 Qualitative evaluation

When evaluating the different surface functions, we identified three distinct categories of qualitative errors in the visualized surface (see Fig. 5): (i) Deformations, e.g., the surface is visibly dilated or shows a jagged instead of a smooth boundary/silhouette, (ii) Holes in the surface, e.g., a rendering method missed the ray-surface intersection, and (iii) False Surfaces, e.g., the rendering method found false surface intersections. A good example of the various kinds of error modes is the Bohemian Star function, due to the large number of multiple roots that comprise the surface of the function (see Fig. 6). When using the shifted function \(f_p^\sigma \) as described in Sect. 4.1.4, our method does not exhibit any noticeable qualitative errors in the surface rendering; however, if no shifting is applied, then our method also misses some surface intersections (see Fig. 9a); see also the detailed discussion in Sect. 4.3.

As depicted in Fig. 6, the only other methods that do not display any noticeable errors for the Bohemian Star are the Sphere Tracing and Segment Tracing approaches, which require additional knowledge about the Lipschitz constant. Using a naïve ray marching approach with sign tests yields significant gaps in the surface, with similar behavior for AMP-Sign. While the AMP-Taylor approach yields no holes in the surface, it generates a large amount of false surfaces close to the multiple roots of the surface. The approach by Sherstyuk [43] yields both false surfaces and holes in the surface, especially close to the coordinate axes. Finally, LG surfaces exhibited some false surfaces, but this error is mainly due to difficulties in estimating the Lipschitz constant and the gradient of the function, and is not necessarily a conceptual problem of the method itself. The overall qualitative results are summarized in Table 4, which highlights that the only two methods not exhibiting any visually noticeable problems are our proposed method and Sphere Tracing, which requires a globally known Lipschitz constant and can be arbitrarily slow for grazing rays.

Fig. 7 The Kiss surface (\(f(x,y,z) = (1 - z) z^4 - y^2 - x^2\)) rendered using our method

4.2.2 Quantitative evaluation

We evaluate the quantitative accuracy of our method and of the baseline methods by evaluating an intersection point \({{\tilde{\xi }}}\) using the absolute value of the actual surface function f, i.e., \(|f({{\tilde{\xi }}})|\). The resulting error distributions are visualized as box plots in Fig. 8. It is important to note that the actual magnitude of the error \(|f({{\tilde{\xi }}})|\) depends on the Lipschitz constant of the underlying surface function. The Bohemian Star (see Fig. 6) and Kiss (see Fig. 7) surfaces, for instance, have large Lipschitz constants (see Table 1), which means that a small error in the estimation of \({{\tilde{\xi }}}\) may yield a large error in the actual function magnitude. Furthermore, to evaluate the computational performance, we only measure the actual root searching process, i.e., for methods that require evaluating Lipschitz constants (either locally or globally), the execution time of these steps is excluded, as no fair comparison is possible: finding the Lipschitz constant of a black-box function is a broader problem that is beyond the scope of our paper. Moreover, as we evaluate the Lipschitz constants, and derivatives along rays, using a Chebyshev proxy (of high numerical accuracy), including this in the evaluation of the baseline methods would lead to an unfair comparison regarding computational performance, as our method does not require this additional root search and would thus necessarily be faster.

Fig. 8 The absolute errors |f| evaluated at the computed ray-surface intersection (top) and the overall timing of the ray-surface intersection test (bottom) as statistics over the entire frame for all surface functions, grouped by the respective rendering approaches

General Observations. Considering the results in Fig. 8, we observe several effects across all surface categories, which we summarize here. Firstly, Sphere and Segment Tracing exhibit a very tight spread of the achieved error, which is primarily based on the stopping criterion of these methods, i.e., they use an absolute threshold value \(\epsilon \) and make no further attempts at improving the results after reaching this threshold. Conversely, these methods tend to show large variations in computational performance, especially Sphere Tracing, as their computational performance is directly coupled to the Lipschitz constant of the underlying surface; especially for grazing rays, Sphere Tracing required up to \(10^{11}\) iterations. Secondly, while the LG surface approach also shows a similar thresholding behavior for the error, the observed accuracy can be much greater, as the threshold is an upper bound for the method and not a stopping criterion, i.e., the error can be much lower than the threshold value. Finally, the ray marching approaches (Ray-March, AMP-Sign, AMP-Taylor, and Sherstyuk) show relatively consistent computational performance across the different surfaces, as their performance only depends directly on the computational cost of evaluating f.

Fig. 9 The Bohemian Star surface visualized using our standard rendering process (left), using the high precision mode (middle), and with a dilated surface using \(f_p^\epsilon \) and setting \(s=10^8\) (right), with detail views of the bulk surface, axis-aligned roots, and diagonally aligned roots

Polynomial Surfaces. Our proposed method achieves a numerical accuracy comparable to the best baseline methods, e.g., Sphere Tracing. Regarding computational performance, our method is better than most methods, i.e., only Sphere Tracing and the adaptive marching points approaches occasionally outperform it. For this surface category, the Lipschitz constants can be fairly large and, consequently, the computational performance of Sphere Tracing can vary greatly across different parts of the rendered image due to grazing rays. Finally, LG-Surfaces demonstrates the worst overall computational performance in this category.

Rational Surfaces. For this category, our proposed method outperforms all baseline methods by several orders of magnitude with respect to the achieved numerical accuracy. However, our method has the second highest computational cost, i.e., only LG-Surfaces exhibits higher computational efforts.

Analytic Surfaces. In this category, the performance of all methods is comparable to the rational surfaces, with a few notable differences. Our proposed method performs notably worse for the exponential cube function regarding numerical accuracy, as the function is approximated with a much lower polynomial degree than the other analytic surfaces, see Table 1. Moreover, as this function also has a Lipschitz constant that is noticeably smaller than those of the other surfaces in this category, Sphere Tracing also performs notably worse regarding computational performance. Finally, the computational performance of our proposed method and of LG-Surfaces is comparable across the category, with all other methods performing noticeably better.

Algebraic Surfaces. This category shows behavior analogous to the analytic surfaces in quantitative aspects and requires no further specialized discussion.

Non-Algebraic Surfaces. In this category, our method still yields a better numerical accuracy than the other non-Lipschitz methods, i.e., Ray Marching and adaptive marching points, while performing worse than all other baselines by relatively small margins, with the single exception of the Sherstyuk approach, which performs exceptionally well for the cube surface. Regarding computational complexity, our proposed rendering approach performs worse than most baseline methods besides Segment Tracing and LG-Surfaces. It can also be observed that there is a wide spread in computational performance for this surface category, specifically for Segment Tracing and LG.

4.3 Recursive root searching

The Bohemian Star is a challenging implicit surface function, as it comprises axis-aligned lines and diagonally oriented curves of roots with even multiplicity, see Sect. 3.5 and Fig. 9. Visualizing the bulk surface is straightforward, as these regions have single roots, i.e., there is a change of sign with large enough magnitudes on either side of the surface. On \(y=0\), \(f^\text {star}(x,y,z)\) reduces to

$$\begin{aligned} f^\text {star}(x,0,z) = 16x^2z^2, \end{aligned}$$
(9)

which is 0 if x and/or z are 0, and all of these roots have an even multiplicity. Evaluating this function close to these axes yields values that round to 0 in machine precision, leading to root intervals for f. For \(x=\pm z\), \(f^\text {star}(x,y,z)\) reduces to

$$\begin{aligned} f^\text {star}(x,y,x) = \left( 2x^2\left( y^2-2\right) +y^4\right) ^2, \end{aligned}$$
(10)

which is zero for \(y=\pm \sqrt{\sqrt{x^2\left( x^2+4\right) }-x^2}\), i.e., the positive solution of the quadratic \(y^4+2x^2y^2-4x^2=0\) in \(y^2\); see the dilated surface in Fig. 9 for a visualization of these curves of roots with even multiplicity. Function values evaluated in proximity to these curves are rarely rounded to 0 in \({\mathbb {M}}\). Using our recursive root search approach, a few single pixels containing these surface segments can be identified. Consequently, very few, if any, rays hit these points, making them difficult to visualize even with knowledge of their existence. Applying a shift to the function ensures that even-multiplicity roots are found, and the surface is closed in the main region (Fig. 9, bottom detail view). In this case, the lines of roots along the main axes are clearly visible, whereas the diagonal curves are not, as no rays intersect these curves. Note that applying Newton’s method to pseudo-roots as post-processing in the case of roots with even multiplicity may diverge significantly, as, for instance, there are no roots along the ray to converge to.

5 Conclusions

In this paper we presented a novel rendering approach for black-box implicit surface functions based on Chebyshev proxy functions. Our approach can handle a wide variety of surfaces, ranging from polynomial to non-algebraic surfaces, without requiring any analytic knowledge of the underlying function, such as Lipschitz constants. Furthermore, by utilizing numerically accurate and fast-to-evaluate Chebyshev proxy functions, our method outperforms methods with similar requirements on the implicit function being rendered regarding both visual results and numerical accuracy. Overall, our proposed method can serve as a useful baseline for evaluating the numerical accuracy of other methods and can be used to accurately visualize complex implicit surfaces, e.g., polynomial surfaces with large regions of very thin surfaces, without requiring any derivation of properties of the function, such as local Lipschitz constants. In the future, we intend to expand this approach, e.g., using beam tracing, to reliably find small features that no individual ray is likely to intersect.