1 Introduction

The generation of curves and surfaces with certain properties, such as convergence, regularity, stability, order of approximation, and adaptation to discontinuities, is required in many real-world and industrial applications. Subdivision schemes are well-known and powerful mathematical tools for this purpose. In the literature, many articles have been published about linear subdivision schemes, see e.g. [11, 23, 40, 41, 43, 47, 57, 59, 61, 62], and non-linear ones, see e.g. [4,5,6, 15, 29, 37, 38, 53]. Most subdivision methods and their analysis apply to smooth data. In this paper we present a way to extend the application of subdivision to non-smooth data. By non-smooth data we mean data with possible discontinuities in the function (jumps) or in the first-order derivative (corners).

We have in mind the ENO (Essentially Non-Oscillatory) reconstruction, an approximation method for non-smooth data [39]. It uses a local linear approximation on a selection of stencils that aims to avoid zones affected by discontinuities. Thus, the precision is only reduced in the intervals that contain a corner or a jump discontinuity. The ENO-SR (Essentially Non-Oscillatory Subcell Resolution) technique improves the approximation properties of the reconstruction at intervals that contain a corner. In [7], the authors proposed a rigorous analysis of the ENO-SR procedure both for point-value data and for cell-average sampling. In both cases, the algorithm in [7] is considered for univariate data. If we assume that h is the grid spacing, the algorithm uses second-order differences, \(\Delta _h^2 f(x)=f(x)-2f(x+h)+f(x+2h),\) as smoothness indicators to point out the presence of corners. The detection algorithm defines the intervals \(I_k:=[kh,(k+1)h]\), with \(k\in \mathbb {Z}\), as potential candidates for containing corners. Then, for a given approximation of order m of the location of the corner, the algorithm uses the following rules, which label a cell as B if it is suspected of containing a discontinuity and as G otherwise. This means that the algorithm in [7] can be employed to identify intervals that might contain discontinuities in the first derivative, if the data is given through a point-value discretization, or in the function, if the data is given through a cell-average discretization. We reproduce the algorithm in what follows:

  1.

    If

    $$\begin{aligned} |\Delta ^2_h f((k-1)h)|>|\Delta ^2_h f((k-1\pm n)h)|, \quad n=1,\ldots , m, \end{aligned}$$

    then the intervals \(I_{k-1}\) and \(I_k\) are labeled as B. A priori, it is not possible to know which of the intervals \(I_{k-1}\) and \(I_k\) contains the corner, so both are labeled as suspicious.

  2.

    If

    $$\begin{aligned} |\Delta ^2_h f((k-1)h)|>|\Delta ^2_h f((k+ n)h)|, \quad n=1,\ldots , m-1, \end{aligned}$$

    and

    $$\begin{aligned} |\Delta ^2_h f((k-1)h)|>|\Delta ^2_h f((k-1-n)h)|, \quad n=1,\ldots , m-1, \end{aligned}$$

    then the interval \(I_k\) is labeled as B. In this case, the two largest second-order differences involve the interval \(I_k\), which is a candidate to contain the discontinuity.

All other intervals are labeled as G and it is assumed that they do not contain a corner. The design of the previous algorithm ensures that, for a small enough h, all the intervals that contain a corner are labeled as B, and all the intervals labeled as G lie in smooth regions. Nevertheless, some intervals may be labeled as B while lying inside a smooth region of the function.
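A minimal Python sketch of these labeling rules (the function name, the default m, and the array handling are our own illustrative choices, not code from [7]):

```python
import numpy as np

def label_intervals(f, m=2):
    """Label intervals I_k = [k h, (k+1) h] as 'B' (suspected of containing a
    corner) or 'G', using |Delta_h^2 f| as smoothness indicator (rules 1-2).
    Illustrative sketch; f[k] holds the point value f(k h)."""
    D = np.abs(f[:-2] - 2.0 * f[1:-1] + f[2:])   # D[k] = |Delta_h^2 f(k h)|
    n_int = len(f) - 1                           # intervals I_0 .. I_{n_int-1}
    labels = ['G'] * n_int
    for j in range(len(D)):                      # j plays the role of k-1
        left, right = D[max(j - m, 0):j], D[j + 1:j + 1 + m]
        # Rule 1: D[j] strictly dominates its m neighbours on each side
        if len(left) == m and len(right) == m and \
           all(D[j] > v for v in left) and all(D[j] > v for v in right):
            labels[j] = 'B'                      # interval I_{k-1}
            if j + 1 < n_int:
                labels[j + 1] = 'B'              # interval I_k
            continue
        # Rule 2: D[j] dominates m-1 neighbours on the left and the values
        # D[j+2], ..., D[j+m] on the right (skipping D[j+1])
        l2, r2 = D[max(j - m + 1, 0):j], D[j + 2:j + 1 + m]
        if len(l2) == m - 1 and len(r2) == m - 1 and \
           all(D[j] > v for v in l2) and all(D[j] > v for v in r2):
            if j + 1 < n_int:
                labels[j + 1] = 'B'              # interval I_k
    return labels
```

For instance, sampling \(f(x)=|x-0.47|\) on a grid with \(h=0.1\) marks the interval \(I_4=[0.4,0.5]\) containing the corner as B.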

The described algorithm is capable of detecting jumps in the function and in the first derivative. Even so, a strategy is needed to locate the discontinuities. We can only hope to locate jump discontinuities in the function if the data is given in the cell-average discretization; if the data is given in point values, we can only locate corners. In order to do so, a very simple approximation is chosen: interpolating polynomials are built to the left and to the right of the suspicious intervals, i.e. those labeled as B or BB (two contiguous intervals labeled as B). Next, a function H, defined as the difference of the two polynomials, is constructed. Supposing that the grid spacing is fine enough, we can assume that there is only one root of H inside the suspicious intervals (Lemma 3, statement 2 of [7]). Thus, the location of the corners can be approximated by finding the roots of H, and the accuracy of the result depends on the degree of the polynomials used to construct H. In the cell-average setting, it is well known that jumps in the function transform into corners of the primitive, so their location can be approximated using the point values of the primitive function [7]. Note also that the stencil used to construct the polynomials imposes a minimum admissible distance between detectable discontinuities: if polynomials of degree \(m-1\) are used to construct H, then there must be at least m data points between discontinuities.
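The localization step just described can be sketched as follows, under the assumption (Lemma 3 of [7]) that H has a single root in the suspect interval; the bisection solver and the stencil choice are our own simplifications:

```python
import numpy as np

def locate_corner(x, f, j, m=4):
    """Approximate the corner location inside the suspect interval
    [x[j], x[j+1]] as the root of H = p_left - p_right, where p_left and
    p_right are the degree-(m-1) polynomials interpolating the m data
    points to the left and to the right of the interval."""
    cl = np.polyfit(x[j - m + 1:j + 1], f[j - m + 1:j + 1], m - 1)
    cr = np.polyfit(x[j + 1:j + 1 + m], f[j + 1:j + 1 + m], m - 1)
    H = cl - cr                          # coefficients of p_left - p_right
    a, b = x[j], x[j + 1]
    for _ in range(60):                  # bisection: H is assumed to change sign
        mid = 0.5 * (a + b)
        if np.polyval(H, a) * np.polyval(H, mid) > 0:
            a = mid
        else:
            b = mid
    return 0.5 * (a + b)
```

For piecewise-linear or piecewise-cubic data the location is recovered exactly (up to rounding); in general the accuracy is driven by the degree \(m-1\) of the polynomials.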

For a piecewise \(C^2\) function f exhibiting a jump \([f']\) in its first derivative at \(x^*\), the authors establish in [7] that the corner is consistently identified when the grid spacing h is smaller than the critical scale \(h_c\), defined as

$$\begin{aligned} h_c :=\frac{|[f^{'}]|}{4\sup _{t\in \mathbb {R}\backslash \{x^*\}} |f^{''}(t)|}. \end{aligned}$$
(1)

This critical scale represents the minimum level of resolution required to differentiate between the corner and a smooth region. The algorithms for detection presented in [7] are capable of identifying both corner and jump discontinuities. It is worth noting that only corner discontinuities can be pinpointed using point-values discretization, as the location of jumps in the function is lost during the discretization process. To locate jumps in the function, different discretization methods that accumulate information over the entire discretization interval, rather than sampling it at isolated points, must be employed. One such method is the cell-average discretization, which will be introduced later in Sect. 2. In their work [7], the authors also conducted an analysis of jump discontinuities and data discretized using the cell-average method, leading to the determination of the critical scale,

$$\begin{aligned} h_c :=\frac{|[f]|}{4\sup _{t\in \mathbb {R}\backslash \{x^*\}} |f^{'}(t)|}, \end{aligned}$$
(2)

where [f] is the size of the jump in the function f.
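As a toy illustration of (1) (our own example, not taken from [7]), consider a function whose second derivative is bounded by 1 away from the corner:

```latex
f(x) = \left|x - \tfrac{1}{2}\right| + \tfrac{1}{2}x^2, \qquad x\in[0,1],
\qquad [f'] = \tfrac{3}{2} - \left(-\tfrac{1}{2}\right) = 2, \qquad
\sup_{t \neq x^*} |f''(t)| = 1,
```

so (1) gives \(h_c = |[f']|/(4\sup |f''|) = 2/4 = 1/2\); any grid spacing \(h<1/2\) is fine enough to distinguish this corner from the smooth quadratic background.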

Although we use the edge detector proposed in [7], other techniques can be found in the literature. For example, in [8, 9] the authors propose new edge detectors to locate jumps in the function and the derivatives of piecewise smooth functions sampled using sparse data.

On the other hand, in [10], a specific prediction operator in the interpolatory framework was proposed and analyzed. It is oriented towards the representation of piecewise discontinuous functions. Given a family of discontinuity points, a 4-point linear position-dependent prediction operator is defined. The convergence of this non-uniform subdivision process is established using matrix formalism. The algorithm leads to the successful control of the classical Gibbs phenomenon, associated with the approximation of locally discontinuous functions. This position-dependent algorithm is equivalent to the ENO-SR subdivision scheme, assuming the previous knowledge of the discontinuity positions, in particular improving the accuracy of the linear approach.

Despite their good accuracy properties, the theoretical regularity of the ENO and ENO-SR schemes in [10] is \(C^{1-}\) in the point-value sampling, which is smaller than the regularity of the 4-point linear subdivision scheme, which is \(C^{2-}\), as we can see in [15]. We emphasize that \(C^2\) regularity is important in order to improve the aerodynamic or hydrodynamic properties of a design. In fact, as we can see in many of the works related to Computer Aided Design, \(C^2\) regularity is a crucial property. This motivates, for instance, the widespread use of cubic splines. See the monograph [22] for more information. Other adaptive interpolation schemes, like the PPH algorithm [3], manage to eliminate the Gibbs phenomenon close to the discontinuities, but at the cost of introducing smearing, thus reducing both the order of accuracy and the regularity (numerically \(C^{1-}\)) near the discontinuities.

Also, new methods to avoid Gibbs phenomenon have been developed, see for example [18, 19, 21, 42, 55]. The idea is based on building an arbitrary set of interpolation nodes with certain properties.

The goal of this paper is to develop a regularization–correction approach (R–C) to the problem. In the first stage, the data is smoothed by subtracting an appropriate non-smooth data sequence. Then a uniform linear 4-point subdivision approximation operator is applied to the smoothed data. Finally, an approximation with the appropriate discontinuity structure is restored by correcting the smooth approximation with the non-smooth element used in the first stage. Indeed, we prove that the suggested R–C procedure produces approximations for functions with corner and jump discontinuities, which have the following five important properties:

  1.

    Interpolation.

  2.

    High precision.

  3.

    High piecewise regularity.

  4.

    No smearing of discontinuities.

  5.

    No oscillations.

As far as we know, this is the first time that a procedure possessing all these properties at the same time appears in the literature. We use the particular case of the 4-point Dubuc–Deslauriers interpolatory subdivision scheme [20], through which we obtain a \(C^{2-}\) piecewise regular limit function that is capable of reproducing piecewise cubic polynomials. Indeed, it is straightforward to obtain higher regularity and reproduction of piecewise polynomials of higher degree, just using the same technique with larger stencils.

We do not want to lose the opportunity to mention here some works related to the elimination of Gibbs oscillations in global expansions. Given a piecewise smooth function, it is possible to construct a global expansion, for example, using the Fourier basis. The global expansions are contaminated by the presence of a local discontinuity, and the result is that the partial sums are oscillatory and feature non-uniform convergence. This characteristic behavior is called the Gibbs phenomenon. David Gottlieb and Chi-Wang Shu showed that these slowly and non-uniformly convergent global approximations retain within them high order information which can be recovered with suitable postprocessing [31,32,33,34,35]. More information can be found in the review paper [30] and in the related works [44, 58].

By splitting a given singular function into a relatively smooth part and a specially structured singular part, Eckhoff showed how the traditional Fourier method can be modified to give numerical methods of high order for calculating derivatives and integrals [26, 27].

Tadmor and Tanner discussed the reconstruction of piecewise smooth data from its (pseudo-) spectral information. Spectral projections enjoy superior resolution provided the data is globally smooth, while the presence of jump discontinuities is responsible for spurious Gibbs oscillations in the neighbourhood of edges and an overall deterioration of the convergence rate to an unacceptable first order. The purpose was to regain superior accuracy in the piecewise smooth case, and this is achieved by mollification [60].

De Marchi et al. in [21] introduce a new polynomial interpolation method using mapped bases. Assuming the locations of the jumps are known, this algorithm avoids the Gibbs phenomenon through the choice of new nodes that they call “fake nodes”.

Our approach is different in nature. We do not consider any expansion but rather a direct approximation method, essentially a subdivision scheme. Moreover, we do not apply any postprocessing but rather a preprocessing, after which a linear approximation is applied to the smoothed data, directly inheriting all the good properties of the linear scheme, such as the order of convergence, stability, and regularity. Other interesting references, also related to sparse data cases and higher dimensions, are [1, 12,13,14, 16, 28, 36, 52, 56] and the references therein.

The paper is organized as follows: in Sect. 2 we introduce the discretizations that are used in the paper; Sect. 3 is dedicated to introducing our R–C approximation approach and to analyzing the approximation order of the resulting approximants. In Sect. 4 we present some experiments for univariate and bivariate functions discretized by point-value sampling, in Sect. 5 we show some experiments for univariate and bivariate functions discretized by cell-average sampling and, finally, in Sect. 6 we present the conclusions.

2 The discretizations: point-value and cell-average data

We start from the premise that we are given a set of discretized data. In the case of a point-value discretization, we can just consider that we are given a vector of values on a grid. This means that our data is interpreted as the sampling of a function at grid-points \(\{x_j=jh\}\). In this case, as has been mentioned above, the position of the discontinuities in the function is lost during the discretization process and there is no hope to recover their exact position. Other kinds of discontinuities, such as corners, can be located using the point-value discretization. Another option is to consider the original data as averages of a certain piecewise continuous function over the intervals of a grid. This is the cell-average setting, which also allows locating jump discontinuities in the function. In both cases, we consider the grid points in [0, 1]:

$$\begin{aligned} X=\{ x_{j} \}_{j=0}^{N},\quad x_{j}=jh, \quad h=\frac{1}{N}, \end{aligned}$$

where N is a positive integer. For the point-value case we use the discretization \(\{f_j=f(x_j)\}_{j=0}^{N}\) at the data points \(\{x_j= jh\}_{j=0}^N\).

On the other hand, for the cell-average sampling we are given the local average values,

$$\begin{aligned} \bar{f}_j=\frac{1}{h}\int ^{x_j}_{x_{j-1}}f(x)dx,\quad j=1, \ldots , N. \end{aligned}$$
(3)

Also in this case we aim at approximating the underlying function f.

Let us define the sequence \(\{F_j\}\) as,

$$\begin{aligned} F_j=h\sum _{i=1}^{j}\bar{f}_i=\int _{0}^{x_j}f(y)dy,\quad j=1, \ldots , N, \end{aligned}$$
(4)

taking \(F_0=0\). Denoting by F the primitive function of f, i.e. \(F(x)=\int _0^xf(y)dy\), the values \(\{F_j\}\) are the point-value discretization of F. Now we are back in the case of point-value data, for F, and after finding an approximation G(x) to F(x), \(g(x)=G'(x)\) is the approximation to \(f(x)=F'(x)\), whose cell averages satisfy

$$\begin{aligned} \bar{g}_j=\frac{F_j-F_{j-1}}{h}. \end{aligned}$$
(5)
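In code, the passage to the primitive (4) and back (5) is a cumulative sum followed by a divided difference; a minimal sketch (function names are ours):

```python
import numpy as np

def primitive_values(fbar, h):
    """Point values F_j of the primitive, eq. (4): F_0 = 0 and
    F_j = h * (fbar_1 + ... + fbar_j)."""
    return np.concatenate(([0.0], h * np.cumsum(fbar)))

def averages_from_primitive(F, h):
    """Cell averages recovered from primitive point values, eq. (5)."""
    return np.diff(F) / h
```

The two maps are exact inverses of each other, so no information is lost in passing to the primitive.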

3 The Regularization–Correction (R–C) algorithm for non-smooth data

We present our approach for the approximation of a function with one singular point. Later on, we explain how to use it for the case of several singular points.

Let f be a piecewise \(C^4\)-smooth function on [0, 1], with a singular point at \(x^*\), and assume we are given the vector of values \(\{f(x_i)\}_{i=0}^{N}\) at the data points \(\{x_i = ih\}_{i=0}^{N}\), \(N=1/h\). We denote by \(f^-(x)\) and \(f^+(x)\) the functions to the left and to the right of the discontinuity, respectively.

In our framework we are going to use the 4-point Dubuc–Deslauriers subdivision scheme:

$$\begin{aligned} \left\{ \begin{array}{l} (S f^{k})_{2j}= f_j^k,\\ (S f^{k})_{2j+1}= -\frac{1}{16} f_{j-1}^k + \frac{9}{16} f_j^k + \frac{9}{16} f_{j+1}^k- \frac{1}{16} f_{j+2}^k. \end{array}\right. \end{aligned}$$
(6)

In the literature about subdivision schemes, S represents the subdivision operator and \(f_j^k\) represents the data at the spatial position \(x_j\) and the subdivision scale k. Thus, the scheme in (6) constructs the data sequence at the scale \(k+1\) by taking the values at even positions from the lower scale k, and approximating the values at odd positions using central polynomial interpolation. This scheme has fourth-order accuracy for \(C^4\) functions, and it has \(C^{2-}\) regularity [20, 24]. We aim at retrieving these properties for functions with corners and jumps for data discretized by point values or by cell averages.
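A single refinement step of (6) can be sketched as follows; the replication of end values is our own simple boundary rule (a more careful boundary treatment is discussed in Remark 3):

```python
import numpy as np

def dd4_step(f):
    """One step of the 4-point Dubuc-Deslauriers scheme (6): even points
    keep the values of the coarse level, odd points use the centered cubic
    weights (-1/16, 9/16, 9/16, -1/16). End values are replicated once to
    supply the missing neighbours (illustrative boundary rule)."""
    f = np.asarray(f, dtype=float)
    p = np.concatenate((f[:1], f, f[-1:]))       # padded data, p[i] = f[i-1]
    n = len(f)
    out = np.empty(2 * n - 1)
    out[0::2] = f                                 # (S f)_{2j} = f_j
    out[1::2] = (-p[:n-1] + 9*p[1:n] + 9*p[2:n+1] - p[3:]) / 16.0
    return out
```

Since the scheme reproduces cubic polynomials, interior inserted values of data sampled from \(x^3\) are exact, which is easy to check numerically.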

In the following sections, we present the framework of a regularization–correction algorithm that is adapted to the discontinuities, avoids the Gibbs phenomenon, attains the same regularity in smooth zones as the equivalent linear algorithm, does not smear discontinuities, and reproduces polynomials of the degree associated with the linear scheme used.

3.1 The R–C approximation algorithm for point-value data

In this subsection, we introduce the R–C approximation algorithm. The main idea is to use the given discrete data of f in order to find an explicit function q(x) such that \(g(x)=f(x)-q(x)\) is a smooth function. Then we can apply to the data \(\{g_j=g(x_j)\}_{j=0}^{N}\) any standard approximation procedure with high smoothness and high approximation order. For example, we can use the above 4-point subdivision algorithm and we denote the resulting approximation by \(g^\infty \). The last step is the correction step in which we define the approximation to f as \(g^\infty +q\).

Let \(T_3^-(x)\) and \(T_3^+(x)\) be the third order Taylor approximations of \(f^-\) and of \(f^+\) at \(x^*\) respectively. Consider the one sided cubic polynomial

$$\begin{aligned} \left\{ \begin{array}{l} T_+(x)=0,\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ x<x^*,\\ T_+(x)=T_3^+(x)-T_3^-(x),\ \ \ x\ge x^*. \end{array}\right. \end{aligned}$$
(7)

For \(x \ge x^*\)

$$\begin{aligned} T_+(x) = [f]+[f^{'}] (x-x^*)+\frac{1}{2}[f^{''}](x-x^*)^2+\frac{1}{6}[f^{'''}](x-x^*)^3, \end{aligned}$$
(8)

where \([f],\ [f^{'}],\ [f^{''}],\ [f^{'''}]\) denote the size of the jumps in the function and the derivatives of f at \(x^*\): \([f]=f^+(x^*)-f^-(x^*), [f^{'}]=(f^+)'(x^*)-(f^-)'(x^*), [f^{''}]=(f^+)''(x^*)-(f^-)''(x^*)\) and \([f^{'''}]=(f^+)'''(x^*)-(f^-)'''(x^*).\)

It follows that

$$\begin{aligned} g\equiv f-T_+\in C^3[0,1], \end{aligned}$$

and this observation is the basis for our proposed algorithm. Since we do not know the exact left and right derivatives of f at \(x^*\), we will instead use a fourth-order approximation of \(T_+(x)\): For \(x \ge x^*\),

$$\begin{aligned} \widetilde{T}_+(x) = \widetilde{[f]}+\widetilde{[f^{'}]}(x-x^*)+\frac{1}{2}\widetilde{[f^{''}]}(x-x^*)^2+\frac{1}{6}\widetilde{[f^{'''}]}(x-x^*)^3, \end{aligned}$$
(9)

where

$$\begin{aligned} \left( \begin{array}{l} \widetilde{[f]} \\ \widetilde{[f']}\\ \widetilde{[f'']} \\ \widetilde{[f''']} \end{array}\right) = \left( \begin{array}{l} [f] \\ \left[ f'\right] \\ \left[ f''\right] \\ \left[ f'''\right] \end{array}\right) + \left( \begin{array}{l} O(h^4) \\ O(h^3) \\ O(h^2) \\ O(h) \end{array}\right) . \end{aligned}$$
(10)

In Sect. 3.3, we will propose a procedure to approximate the size of these jumps.

Comparing Eqs. (8) and (9) and using (10) it follows that near \(x^*\), i.e., if \(|x-x^*|=O(h)\),

$$\begin{aligned} |T_+(x)-\tilde{T}_+(x)|=O(h^4),\ \ \ as\ h\rightarrow 0. \end{aligned}$$
(11)

Our algorithm has three steps:

  • Step 1: Smoothing the data. We compute the new data using \(\tilde{T}_+(x)\) in (9),

    $$\begin{aligned} \tilde{g}(x_i) = f(x_i) -\tilde{T}_+(x_i),\ \ i=0,...,N. \end{aligned}$$
    (12)

    It is clear that g(x) has no discontinuities up to the third derivative. As we show below, using the fourth-order approximation \(\tilde{g}\) to g implies a truncation error that is \(O(h^4)\).

  • Step 2: Subdivision. We apply the 4-point interpolatory subdivision scheme to the data \(\tilde{g}(x_j)\) and we denote the limit function by \(\tilde{g}^\infty \).

  • Step 3: Correcting the approximation.

    $$\begin{aligned} \tilde{f}^\infty (x) = \tilde{g}^\infty (x) + \widetilde{T}_+(x). \end{aligned}$$
    (13)
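The three steps can be combined into a short sketch for point-value data with a single singularity; all names are ours, the jump sizes are assumed to be supplied (e.g. by the procedure of Sect. 3.3), and end values are replicated as a crude boundary rule:

```python
import numpy as np

def rc_point_values(x, f, xstar, jumps, levels=4):
    """Regularization-correction sketch (Steps 1-3) for point-value data.
    `jumps` = (J0, J1, J2, J3) approximate [f], [f'], [f''], [f'''] at
    `xstar`. Returns the refined grid and the corrected approximation (13)."""
    def T_plus(t):                                # one-sided correction (9)
        d = t - xstar
        p = jumps[0] + jumps[1]*d + jumps[2]*d**2/2.0 + jumps[3]*d**3/6.0
        return np.where(t >= xstar, p, 0.0)

    def dd4_step(v):                              # one step of scheme (6)
        p = np.concatenate((v[:1], v, v[-1:]))    # replicated end values
        n = len(v)
        out = np.empty(2*n - 1)
        out[0::2] = v
        out[1::2] = (-p[:n-1] + 9*p[1:n] + 9*p[2:n+1] - p[3:]) / 16.0
        return out

    g = f - T_plus(x)                             # Step 1: regularization (12)
    for _ in range(levels):                       # Step 2: linear subdivision
        g = dd4_step(g)
    xs = np.linspace(x[0], x[-1], len(g))         # refined uniform grid
    return xs, g + T_plus(xs)                     # Step 3: correction (13)
```

With exact jump values the result interpolates the data and, away from the boundaries, follows the underlying piecewise smooth function with the accuracy of the linear scheme.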

Remark 1

Instead of correcting the data to the right of \(x^*\) by subtracting the values of \(\widetilde{T}_+\), we could have corrected the data to the left of \(x^*\) by adding to f the values of \(\widetilde{T}_-\), which is zero for \(x> x^*\), while for \(x\le x^*\), \(\widetilde{T}_-(x)=p_3(x)\), where

$$\begin{aligned} p_3(x)=\widetilde{[f]}+\widetilde{[f^{'}]}(x-x^*)+\frac{1}{2}\widetilde{[f^{''}]}(x-x^*)^2+\frac{1}{6}\widetilde{[f^{'''}]}(x-x^*)^3. \end{aligned}$$
(14)

Let us denote this corrected data by \(\{\tilde{r}(x_i)\}_{i=0}^N\). Then, in Step 2, we would apply the subdivision to the corrected data and in Step 3 we would compute the final approximation by subtracting \(\widetilde{T}_-(x)\).

We claim that in both versions of the correction, to the right and to the left, the final approximation will be the same. This is due to the observation that the difference between the two data sets, \(\{\tilde{r}(x_i)\}_{i=0}^N\) and \(\{\tilde{g}(x_i)\}_{i=0}^N\), is just the values \(\{p_3(x_i)\}_{i=0}^N\). Since \(p_3\) is a cubic polynomial and the subdivision scheme is linear and reproduces cubic polynomials, the difference between the two final approximations is zero.

In Fig. 1 we present an example with one of the functions that we use in the numerical experiments (more specifically, the one in (24)). The circles represent a function discretized by point values with a discontinuity in the first derivative. The dots represent the smoothed data in (12), and we can see that it exhibits better regularity properties than the original data. Using the strategy presented in this section, we apply a subdivision scheme to the smoothed data and then compute the corrected approximation in (13).

Fig. 1

The circles in these graphs correspond to the original data obtained from the discretization of the function in (24) with \(a=0\) (left) and with \(a=10\) (right). The dots correspond to the corrected data in (12)

Remark 2

The algorithm can be applied to functions with \(m>1\) jump or corner discontinuities. In this case the correction can be applied iteratively from left to right as the discontinuities are found. The total correction term would be,

$$\tilde{T}_+^{\text {total}}(x)=\sum _{n=1}^{m}\tilde{T}_+^{[n]}(x),$$

being

$$\begin{aligned} \widetilde{T}_+^{[n]}(x) = \widetilde{[f]}_n+\widetilde{[f^{'}]}_n(x-x^*_n)+\frac{1}{2}\widetilde{[f^{''}]}_n(x-x^*_n)^2+\frac{1}{6}\widetilde{[f^{'''}]}_n(x-x^*_n)^3, \end{aligned}$$
(15)

the correction term (9) at each of the m discontinuities placed at \(x^*_n\), \(n=1, \ldots , m\), found in the data. In this case, \(\tilde{T}_+^{\text {total}}(x)\) should be used instead of \(\tilde{T}_+(x)\) in Steps 1 and 3 of the algorithm.

3.2 The R–C approximation algorithm for cell-average data

In order to work with data discretized by cell averages, \(\{\bar{f}_j\}\), we first obtain the point-value data \(\{F_j\}\) of the primitive function F (4), as in Harten's original framework [39]. Then we apply our strategy for the point-value discretization described above to obtain an approximation \(G\approx F\), and we approximate f by \(G'\). The main difference from the case of point-value data is that the function F is continuous, with a jump in its first derivative. Thus, an additional step in the algorithm for cell-average data is to define \(x^*\) as the intersection point of the two local polynomials derived from the data \(\{F_j\}\). In Fig. 2 we sketch the overall process.

Fig. 2

Left, cell-average data of the function presented in (28). Center, primitive of the data in the plot to the left, and the corrected primitive. Right, result of the R–C approximation to the function discretized by cell-averages

In the next section, we present the construction of a fourth-order accurate approximation of the size of the jumps in the function and its derivatives for data discretized by point-values.

3.3 Fourth-order approximation of the size of the jumps in the function and its derivatives in the point-value framework

The approximation of the size of the jumps with the desired accuracy can be obtained using the strategy derived in [2]. In order to obtain the approximate size of the jumps in the derivatives of f, we need to know the position of the discontinuity \(x^*\) up to a certain accuracy. In the introduction, we mentioned how to obtain an approximation of the position of the discontinuity \(x^*\) with the desired accuracy following [7]. If we work with stencils of four points, we need \(O(h^4)\) accuracy in the detection. Let us suppose that we know that the discontinuity is placed at a distance \(\alpha \) from \(x_j\) in the interval \([x_j, x_{j+1}]\). If we want \(O(h^4)\) accuracy for the approximation of the jump relations, we need to use four points on each side of the discontinuity. Let us suppose that we use the data \(\{f_{j-3}, f_{j-2}, f_{j-1}, f_{j}, f_{j+1}, f_{j+2}, f_{j+3}, f_{j+4}\}\) placed at the positions \(\{x_{j-3}, x_{j-2}, x_{j-1}, x_{j}, x_{j+1}, x_{j+2}, x_{j+3}, x_{j+4}\}\). Then, we can approximate the values of the derivatives of f from both sides of the discontinuity using third-order Taylor expansions around \(x^*\) and setting up the following systems of equations. For the left-hand side,

$$\begin{aligned} \begin{aligned} f_j&=f^{-}(x^*)-f^{-}_x(x^*) \alpha +\frac{1}{2} f^{-}_{xx}(x^*) \alpha ^2-\frac{1}{3!} f^{-}_{xxx}(x^*) \alpha ^3,\\ f_{j-1}&=f^{-}(x^*)-f^{-}_x(x^*) (h+\alpha ) +\frac{1}{2} f^{-}_{xx}(x^*) (h+\alpha )^2-\frac{1}{3!} f^{-}_{xxx}(x^*) (h+\alpha )^3,\\ f_{j-2}&=f^{-}(x^*)-f^{-}_x(x^*) (2h+\alpha ) +\frac{1}{2} f^{-}_{xx}(x^*) (2h+\alpha )^2-\frac{1}{3!} f^{-}_{xxx}(x^*) (2h+\alpha )^3,\\ f_{j-3}&=f^{-}(x^*)-f^{-}_x(x^*) (3h+\alpha ) +\frac{1}{2} f^{-}_{xx}(x^*) (3h+\alpha )^2-\frac{1}{3!} f^{-}_{xxx}(x^*) (3h+\alpha )^3, \end{aligned} \end{aligned}$$
(16)

and for the right-hand side,

$$\begin{aligned} \begin{aligned} f_{j+1}&=f^{+}(x^*)+f^{+}_x(x^*) (h-\alpha ) +\frac{1}{2} f^{+}_{xx}(x^*) (h-\alpha )^2+\frac{1}{3!} f^{+}_{xxx}(x^*) (h-\alpha )^3,\\ f_{j+2}&=f^{+}(x^*)+f^{+}_x(x^*) (2h-\alpha ) +\frac{1}{2} f^{+}_{xx}(x^*) (2h-\alpha )^2+\frac{1}{3!} f^{+}_{xxx}(x^*) (2h-\alpha )^3,\\ f_{j+3}&=f^{+}(x^*)+f^{+}_x(x^*) (3h-\alpha ) +\frac{1}{2} f^{+}_{xx}(x^*) (3h-\alpha )^2+\frac{1}{3!} f^{+}_{xxx}(x^*) (3h-\alpha )^3,\\ f_{j+4}&=f^{+}(x^*)+f^{+}_x(x^*) (4h-\alpha ) +\frac{1}{2} f^{+}_{xx}(x^*) (4h-\alpha )^2+\frac{1}{3!} f^{+}_{xxx}(x^*) (4h-\alpha )^3, \end{aligned} \end{aligned}$$
(17)

where \(\alpha =x^*-x_{j}\). It is clear that the previous systems can be written in matrix form, where the system matrix is a Vandermonde matrix, which is invertible since the nodes are distinct.

Solving the two systems (16) and (17), where \(f^{-}(x^*)\), \(f^{-}_{x}(x^*)\), \(f^{-}_{xx}(x^*)\), \(f^{-}_{xxx}(x^*)\) and \(f^{+}(x^*)\), \(f^{+}_{x}(x^*)\), \(f^{+}_{xx}(x^*)\), \(f^{+}_{xxx}(x^*)\) are the unknowns, we can obtain approximations for the size of the jumps in the derivatives of f.

It is proved in [2] that

$$\begin{aligned} \left( \begin{array}{l} \widetilde{[f]}\\ \widetilde{[f']}\\ \widetilde{[f'']}\\ \widetilde{[f''']} \end{array}\right) = \left( \begin{array}{l} f^{+}(x^*)-f^{-}(x^*)\\ f^{+}_x(x^*)-f^{-}_x(x^*)\\ f^{+}_{xx}(x^*)-f^{-}_{xx}(x^*)\\ f^{+}_{xxx}(x^*)-f^{-}_{xxx}(x^*) \end{array}\right) + \left( \begin{array}{l} O(h^4)\\ O(h^3)\\ O(h^2)\\ O(h) \end{array}\right) . \end{aligned}$$
(18)
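A sketch of this computation, assuming the four-point stencils on each side are free of further singularities (function names are ours):

```python
import numpy as np

def one_sided_taylor(xs, fs, xstar):
    """Solve a 4x4 system of the form (16)/(17): find f, f', f'', f''' at
    xstar from a third-order Taylor expansion through four one-sided points."""
    d = xs - xstar                                   # signed offsets from x*
    A = np.vander(d, 4, increasing=True) / np.array([1.0, 1.0, 2.0, 6.0])
    return np.linalg.solve(A, fs)

def jump_estimates(x, f, j, xstar):
    """Approximate ([f], [f'], [f''], [f''']) at xstar in (x_j, x_{j+1})
    using the stencils {x_{j-3},...,x_j} and {x_{j+1},...,x_{j+4}}."""
    left = one_sided_taylor(x[j - 3:j + 1], f[j - 3:j + 1], xstar)
    right = one_sided_taylor(x[j + 1:j + 5], f[j + 1:j + 5], xstar)
    return right - left                              # ordered as in (18)
```

For piecewise-cubic data the jumps are recovered exactly up to rounding, consistently with (18).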

3.4 The approximation properties

Let us analyze the approximation properties of the R–C scheme for the case of point-value data and cell-average data.

For the point-value case we consider a discontinuity placed in the interval \((x_j, x_{j+1})\). We consider two cases: jumps in the function and corners. Let us start with the first one.

When dealing with jump discontinuities using point values of a function at a certain resolution, there is no way to find a high-order approximation to the exact location of the discontinuity. We can only hope to locate the interval containing the jump, and we may assume that the discontinuity is at any point \(x^*\) within this interval.

Remark 3

The technique described in this and the previous sections can also be applied to treat the approximation near the boundary points 0 and 1. Outside the interval [0, 1], we use a zero-padding strategy, and we treat each boundary point as a discontinuity whose position is known. Thus, treating the boundaries amounts to considering more than one discontinuity in the data, as explained in Remark 2. Using this strategy enables us to extend the approximation results discussed below to the entire interval [0, 1]. In the following, we implicitly assume that the boundaries are treated as above.

Theorem 3.1

The point-values’ case: jump discontinuities. Let us consider a function f that is piecewise \(C^4\)-smooth over the interval [0, 1] and contains a jump discontinuity. Let us assume that we have a vector of values \(\{f_j\}_{j=0}^{N}\) at the data points \(\{x_j = jh\}_{j=0}^{N}\), where \(N=1/h\). We assume the discontinuity to be at \(x^*\in (x_j, x_{j+1})\). The R–C algorithm provides an approximation that interpolates these data points. This approximation is \(C^{2-}\{[0,x^*)\cup (x^*,1]\}\), and it exhibits the property \(||f-\tilde{f}^\infty ||_{\infty ,[0,1]\setminus \{x^*\}} = O(h^4)\) as h approaches zero. This holds true for any piecewise \(C^4\)-smooth function f that has a singular point at \(x^*\) and point values \(\{f(x_j)\}_{j=0}^{N}=\{f_j\}_{j=0}^{N}\).

Proof

The 4-point subdivision scheme is an interpolatory scheme and the limit function it defines is \(C^{2-}\) [20]. Hence,

$$\begin{aligned} \tilde{g}^\infty (x_j)=f(x_j)-\tilde{T}_+(x_j), \end{aligned}$$

and \(\tilde{g}^\infty \in C^{2-}[0,1]\). Adding the correction term in (13) yields the interpolation to f as

$$\begin{aligned} {\tilde{f}}^\infty ={\tilde{g}}^\infty +{\tilde{T}}_+, \end{aligned}$$

which has the jump discontinuity at \(x^*\).

To prove the approximation order, we use the fact that the subdivision scheme is a local procedure and that it reproduces cubic polynomials. Furthermore, by [20, 24], for \(f\in C^4(\mathbb {R})\) the 4-point subdivision scheme gives \(O(h^4)\) approximation order to f. Therefore, away from the discontinuity, for \(x<x_{j-2}\) and for \(x>x_{j+3}\),

$$\begin{aligned} |\tilde{g}^\infty -(f-\tilde{T}_+)|=O(h^4), \end{aligned}$$

as \(h\rightarrow 0\). Correcting the approximation by \(\tilde{T}_+\) yields the approximation result away from \(x^*\).

To show the approximation order near \(x^*\), we rewrite the data for the subdivision process, \(\tilde{g}(x_j)=f(x_j)-\tilde{T}_+(x_j)\), as

$$\begin{aligned} \tilde{g}(x_j)=f(x_j)-T_+(x_j)-T_3^-(x_j)+(T_+(x_j)-\tilde{T}_+(x_j))+T_3^-(x_j). \end{aligned}$$

Denoting \(q=f-T_+-T_3^-\), we observe that \(q=f-T_3^-\) for \(x<x^*\) and \(q=f-T_3^+\) for \(x>x^*\). In both cases \(q(x)=O(h^4)\) near \(x^*\). Next we note that by (10)

$$\begin{aligned} (T_+(x_i)-\tilde{T}_+(x_i))=O(h^4), \end{aligned}$$

for \(j-4\le i\le j+5\). Using the above, together with the locality of the subdivision scheme and the reproduction of cubic polynomials, it follows that

$$\begin{aligned} \tilde{g}^\infty (x)=T_3^-(x)+O(h^4), \ \ x_{j-2}\le x\le x_{j+3}. \end{aligned}$$

Finally, in view of (7) and (11) we obtain

$$\begin{aligned} \tilde{f}^\infty (x)=\tilde{g}^\infty (x)+\tilde{T}_+(x)= \left\{ \begin{array}{l} T_3^-(x)+O(h^4),\ \ \ \ x_{j-2}\le x< x^*,\\ T_3^+(x)+O(h^4),\ \ \ \ x^*< x\le x_{j+3}, \end{array}\right. \end{aligned}$$
(19)

implying that

$$\begin{aligned} \tilde{f}^\infty (x)=f(x)+O(h^4), \ \ \ x_{j-2}\le x\le x_{j+3},\ x\ne x^*, \end{aligned}$$

which completes the proof.

Remark 4

In order to include the point \(x^*\) in Theorem 3.1, we need to suppose that the initial data comes from a function that is \(C^4\)-smooth from the left and from the right of \(x^*\), as this information, as well as the exact position of the discontinuity, is lost in the discretization process.

Now we can continue with the case of corner discontinuities, i.e. discontinuities in the first derivative.

Theorem 3.2

The point-values’ case: corners. Let us consider a function f that is piecewise \(C^4\)-smooth over the interval [0, 1] and contains a corner discontinuity at \(s^*\in (x_j, x_{j+1})\). Let us assume that we are given the values \(\{f_j\}_{j=0}^{N}\) at the data points \(\{x_j = jh\}_{j=0}^{N}\), where \(N=1/h\), and let \(x^*\) denote the approximated location of the corner. The R–C algorithm provides an approximation \(\tilde{f}^\infty \) that interpolates these data points, is \(C^{2-}\) on \([0,x^*)\cup (x^*,1]\), and satisfies \(||f-\tilde{f}^\infty ||_{\infty } = O(h^4)\) as \(h\rightarrow 0\).

Proof

Outside the interval \((x^*, s^*)\) (if the approximated location \(x^*\) lies to the right of \(s^*\), the interval is \((s^*, x^*)\) and the analysis is analogous), the proof is similar to the one of Theorem 3.1.

Let’s now analyze the case when we subdivide in the interval \((x^*, s^*)\). We can follow the proof of Theorem 1 in [7]. We suppose that the discontinuity lies in the interval \([x_{j}, x_{j+1}]\), where \(h=x_{j+1}-x_{j}\) is the uniform grid spacing. This case is sketched in Fig. 3. As we are using cubic polynomials to locate the discontinuity, the distance between \(x^*\) and \(s^*\) is \(O(h^4)\), as proved in statement 3 of Lemma 3 of [7]. Let us analyze the accuracy attained by the subdivision scheme in the point-value discretization in the interval \((x^*, s^*)\). The left side of the discontinuity placed at \(s^*\) is labeled as the − side and the right side of \(s^*\) as the \(+\) side; the approximated location of the discontinuity is labeled \(x^*\). Following the process described in Sect. 3.3, we obtain the sizes of the jumps in the function and its derivatives at \(x^*\). Clearly, if the piecewise function is not composed of polynomials of degree at most 3, the location of the discontinuity is only approximated. In this case, obtaining the size of the jump in the function and its derivatives at \(x^*\) amounts to extending \(f^+\) from \(s^*\) up to \(x^*\), as shown in Fig. 3. Let’s write the limit function obtained by the R–C algorithm as,

$$\begin{aligned} \tilde{f}^\infty (x)=\left\{ \begin{array}{ll} (\tilde{f}^\infty )^-(x)&{}x<x^*,\\ (\tilde{f}^\infty )^+(x)&{}x>x^*. \end{array}\right. \end{aligned}$$
(20)

The error obtained at any point \(x\in (x^*, s^*)\) will be,

$$\begin{aligned} |f(x)-\tilde{f}^\infty (x)|=|f^-(x)-(\tilde{f}^\infty )^+(x)|\le |f^-(x)-f^+(x)|+ |f^+(x)-(\tilde{f}^\infty )^+(x)|. \end{aligned}$$
(21)

Following arguments similar to those used in the first part of the proof, the second term is bounded and is \(O(h^4)\). For the first term, we can use a second-order Taylor expansion around \(s^*\) to write,

$$\begin{aligned} \begin{aligned} |f^-(x)-f^+(x)| \le |[f']|(s^*-x)+\sup _{\mathbb {R}\backslash s^*}|f''|(s^*-x)^2\\ \le (s^*-x^*)\left( |[f']|+\sup _{\mathbb {R}\backslash s^*}|f''|h\right) . \end{aligned} \end{aligned}$$
(22)

Since \(h<h_c\) (see (1)), we have

$$\begin{aligned} \begin{aligned} |f^-(x)-f^+(x)|&\le \frac{5}{4}|[f']|(s^*-x^*). \end{aligned} \end{aligned}$$
(23)

Now, we can use point 3 of Lemma 2 of [7], which establishes that \((s^*-x^*)\le C\frac{h^m\sup _{x\in \mathbb {R}\backslash \{s^*\}} |f^{(m)}(x)|}{|[f^{'}]|}\) (with \(m=4\) in our case), to conclude that the first term of the right-hand side of (21) is also bounded and is \(O(h^4)\). This concludes the proof.\(\square \)

Fig. 3

In this figure we can see an example of a corner discontinuity placed in the interval \((x_{j}, x_{j+1})\). The left side of the discontinuity placed at \(s^*\) is labeled as the − side and the right side of \(s^*\) as the \(+\) side. The approximated location of the discontinuity is labeled as \(x^*\)

Due to the order of accuracy attained by the R–C algorithm close to the discontinuity and to the regularity and convergence of the linear scheme, we can state the following corollary:

Corollary 3.3

The R–C scheme does not introduce the Gibbs phenomenon nor smearing close to jumps or corner discontinuities.

Proof

Given that the correction introduced in step 1 of the algorithm eliminates the presence of the discontinuity up to the order of accuracy of the linear scheme, we can consider that the linear scheme in step 2 is applied to smooth data after the correction, so no Gibbs effect can appear. Step 3 consists of correcting back the subdivided data, so no smearing of discontinuities can appear.

Corollary 3.4

The R–C scheme reproduces piecewise cubic polynomials.

Proof

For piecewise cubic polynomials, the location of the discontinuity is exact: recall that we are using cubic interpolating polynomials in the location algorithm. The proof then follows from Theorem 3.1. Note that if the discontinuity is in the first derivative, the argument given for Theorem 3.1 applies unchanged, as in this case we know that the exact position of the corner is \(x^*\).

In what follows, we enumerate the main properties of the R–C algorithm:

  • The R–C scheme has the same (piecewise) regularity as the linear subdivision used.

  • The R–C scheme does not produce any oscillations in the presence of discontinuities.

  • The algorithm has high accuracy without the smearing of singularities.

  • The algorithm is interpolatory.

  • The R–C scheme is exact for piecewise polynomial functions of the same degree as the one used in the construction of the scheme.

  • The R–C scheme is stable (see for example [46], for the definition of stability of a subdivision scheme in this context) due to the stability of the linear scheme used (see Corollary 6 of [46] for \(w=\frac{1}{16}\)).

Let us consider now the cell-average discretization. In this case, we will only analyze the detection of jumps in the function.

Theorem 3.5

The cell-averages’ case Let f be a piecewise \(C^3\)-smooth function on [0, 1], with a jump discontinuity at \(s^*\), and assume we are given the cell-average data

$$\begin{aligned} \bar{f}_j=\frac{1}{h}\int ^{x_j}_{x_{j-1}}f(x)dx,\quad j=1, \ldots , N. \end{aligned}$$

Applying the algorithm in Sect. 3.2 to the data sequence \(\{F_j\}\) defined in (4) to approximate the primitive function F, the following results hold:

1. The approximate singular point \(x^*\) satisfies \(|s^*-x^*|\le Ch^4.\)

2. The approximation G(x) to the primitive function F(x) interpolates the data \(\{F_j\}_{j=1}^N\), and \(|F(x)-G(x)|\le Ch^4\), \(x\in [0,1]\).

3. The approximation \(g(x)=G'(x)\) to f(x) satisfies \(|f(x)-g(x)|=O(h^3)\) as \(h\rightarrow 0\) for \(x\in [0,1]\), x not in the closed interval between \(s^*\) and \(x^*\).

4. Denoting \(\delta =x^*-s^*\),

$$\begin{aligned} |f(x)-g(x+\delta )|\le Ch^3\ \ \forall x\in [0,1]\setminus \{s^*\}. \end{aligned}$$

Proof

Figure 4 represents a possible scenario for this theorem. Let us prove every point of the theorem:

  1.

    First we note that the primitive function F is piecewise \(C^4\). After locating the interval \([x_j,x_{j+1}]\) containing the corner, the algorithm finds the two cubic polynomials, \(\tilde{T}_-\) and \(\tilde{T}_+\), respectively interpolating the data at four points to the left and the right of the interval that contains the corner. As we are using cubic polynomials for the location of the corner, the intersection point \(x^*\) of the two polynomials in \([x_j,x_{j+1}]\) satisfies \(|s^*-x^*|\le Ch^4\), as proved in point 3 of Lemma 3 of [7].

  2.

    Now we can follow the steps of Theorem 3.1 to deduce that \(|F(x)-G(x)|\le Ch^4\), \(x\in [0,1]\).

  3.

    We also use here the result in [54] that for \(F\in C^4(\mathbb {R})\) the 4-point subdivision scheme gives \(O(h^3)\) approximation order to the derivative of F. This, together with the observation that both F and G are differentiable outside the interval between \(s^*\) and \(x^*\), yields the result in item 3.

  4.

    To prove item 4 let us assume, w.l.o.g., that \(x^*>s^*\). Both f and \(g(\cdot +\delta )\) have their jump discontinuity at \(s^*\). For \(x<s^*\) we have \(x+\delta <x^*\). Since \(|\delta |=O(h^4)\), and g is continuous for \(x<x^*\), it follows that

    $$\begin{aligned} g(x+\delta )=g(x)+O(h^4). \end{aligned}$$

    Hence,

    $$\begin{aligned} |f(x)-g(x+\delta )|=|f(x)-g(x)|+O(h^4), \end{aligned}$$

    and by item 3, \(|f(x)-g(x)|=O(h^3)\). Similarly, for \(x>s^*\), \(x+\delta >x^*\) and \(|f(x+\delta )-f(x)|=O(h^4)\). Hence,

    $$\begin{aligned} |f(x)-g(x+\delta )|=|f(x+\delta )-g(x+\delta )|+O(h^4)=O(h^3). \end{aligned}$$
Fig. 4

In this figure we represent a jump discontinuity in the interval \((x_{j}, x_{j+1})\), that transforms into a discontinuity in the first derivative for the primitive. The approximated location of the discontinuity is labeled as \(x^*\) and the exact location is labeled as \(s^*\). The exact function is represented by f(x) and the approximation is represented by g(x). The exact primitive is represented by F(x) and the approximation is represented by G(x)

Remark 5

Inside the interval between \(s^*\) and \(x^*\), as mentioned in Section 7 of [7], we cannot hope to obtain better accuracy than O(1) in the infinity norm. Even so, we can provide a result in the \(L^1\) norm (in fact, this result is provided for the \(L^p\) norm in Section 7 of [7]). We use the inequality

$$\begin{aligned} ||f^{\infty }- f||_{L^1}\le ||f^{\infty } - g||_{L^1}+||g- f||_{L^1}, \end{aligned}$$

where \(f^{\infty }\) is the limit function in the cell-averages and g is the smooth function obtained by our algorithm plus the final translation, which has the same cell-averages as the function f and has the discontinuity at \(x^*\). The first term of the right-hand side of the inequality is controlled by the order of the linear subdivision scheme (as the correction does not perturb the order) and the second term is controlled using the results in Section 7 of [7].

3.5 The case of univariate non-uniform data

The R–C procedure includes five steps:

  1.

    Find the interval containing the jump in the function, defining \(x^*\).

  2.

    Estimate the derivatives of the function from the right and from the left at \(x^*\).

  3.

    Regularization: Subtract from the data the values of a one-sided polynomial defined by the jumps of the (estimated) derivatives at \(x^*\).

  4.

    Fit a smooth approximation to the regularized data (using subdivision or splines).

  5.

    Obtain the final approximation by adding the one-sided polynomial.
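For uniform point-value data, steps 3–5 can be sketched in a few lines of Python. This is a minimal illustration of the R–C idea, not the paper's implementation: the location \(x^*\) and the jumps \([f^{(m)}](x^*)\) are assumed to be already available from steps 1–2, and the 4-point scheme uses a simple linear fallback at the two boundary odd points.

```python
import math
import numpy as np

def dd4_step(f):
    # One refinement step of the linear 4-point (Deslauriers-Dubuc,
    # w = 1/16) scheme on uniform data: even-indexed new points keep
    # the old values, interior odd points use the cubic rule
    # (-f0 + 9 f1 + 9 f2 - f3) / 16, and the two boundary odd points
    # fall back to linear interpolation (a simplification of ours).
    n = len(f)
    out = np.empty(2 * n - 1)
    out[::2] = f
    out[3:-3:2] = (-f[:-3] + 9 * f[1:-2] + 9 * f[2:-1] - f[3:]) / 16
    out[1] = (f[0] + f[1]) / 2
    out[-2] = (f[-2] + f[-1]) / 2
    return out

def rc_subdivide(x, f, xstar, jumps, levels):
    # Regularization-correction on a uniform grid x: subtract the
    # one-sided polynomial T built from the jumps [f^(m)](x*)
    # (step 3), subdivide linearly (step 4), and add T back at the
    # refined points (step 5).
    def T(t):
        s = sum(j * (t - xstar) ** m / math.factorial(m)
                for m, j in enumerate(jumps))
        return np.where(t > xstar, s, 0.0)
    g = f - T(x)
    for _ in range(levels):
        g = dd4_step(g)
        x = np.linspace(x[0], x[-1], 2 * len(x) - 1)
    return x, g + T(x)
```

For data sampled from a piecewise cubic with a corner at \(x^*\) and exactly known jumps, the regularized data is a single cubic, so away from the boundary the scheme returns the function exactly at the dyadic points, in line with Corollary 3.4.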

For a non-uniform distribution of data points steps 1, 2 and 4 should be revised as follows:

In the uniform data case, the detection of the interval containing the jump is done by computing the second-order differences of the data. For the non-uniform case, we can use second-order divided differences instead. Assuming the data-points distribution is quasi-uniform, the location of the jump is recovered using the locations of maximal second-order divided differences.
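As a sketch of this detection step (our own illustration, not code from the paper), one can compute the second-order divided differences on a quasi-uniform grid and flag the interval where their magnitude is maximal:

```python
import numpy as np

def detect_corner_interval(x, f):
    # Second-order divided differences f[x_i, x_{i+1}, x_{i+2}];
    # the triple with maximal magnitude flags [x_i, x_{i+2}] as the
    # zone suspected of containing the corner.
    d1 = np.diff(f) / np.diff(x)          # first divided differences
    d2 = np.diff(d1) / (x[2:] - x[:-2])   # second divided differences
    i = int(np.argmax(np.abs(d2)))
    return i, i + 2
```

For data that is piecewise linear, the second-order divided differences vanish away from the corner, so only the triples straddling the corner are flagged.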

If we aim at an algorithm that reconstructs cubic polynomials, we need to estimate the size of the jumps in the derivatives at \(x^*\) within 4th-order accuracy. This can be done by computing two interpolating cubic polynomials, one using 4 data points to the right of \(x^*\) and the other to the left.
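A sketch of this estimation step with NumPy (our own illustration, assuming \(x^*\) has already been located): fit one cubic through the four points on each side and difference the derivatives of the two interpolants at \(x^*\). For piecewise-cubic data the recovered jumps are exact; in general they carry the stated 4th-order accuracy.

```python
import numpy as np

def estimate_jumps(xl, fl, xr, fr, xstar):
    # Estimate [f^(m)](x*), m = 0..3, as p_+^(m)(x*) - p_-^(m)(x*),
    # where p_- and p_+ are the interpolating cubics through the four
    # data points to the left and to the right of x*.
    p_minus = np.poly1d(np.polyfit(xl, fl, 3))
    p_plus = np.poly1d(np.polyfit(xr, fr, 3))
    jumps = [p_plus(xstar) - p_minus(xstar)]
    jumps += [p_plus.deriv(m)(xstar) - p_minus.deriv(m)(xstar)
              for m in (1, 2, 3)]
    return jumps
```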

In the uniform case, we suggested applying interpolatory subdivision to the regularized data. Another option would be using interpolation by splines or shape preserving subdivision schemes [48, 50, 51]. A uniform subdivision is attractive due to its simplicity. Non-uniform subdivision is much more involved, and we suggest using cubic spline interpolation for the regularized data.

4 Numerical results for the case of point-value sampling

In the experiments presented in this section, for corner discontinuities, first we have calculated a high accuracy approximation of the position of the discontinuity using the technique described in the introduction. Then, we compute high order approximations of the size of the jumps in the function and its derivatives using the process described in Sect. 3.3.

Let’s start with the function,

$$\begin{aligned} f(x)=\left\{ \begin{array}{ll} a+\left( x-\frac{\pi }{6}\right) \left( x-\frac{\pi }{6}-10\right) +x^2+\sin (10x), &{} \text {if } x< \frac{\pi }{6}, \\ x^2+\sin (10x),&{} \text {if } x\ge \frac{\pi }{6}, \end{array}\right. \end{aligned}$$
(24)

\(x\in [0, 1]\).

It is easy to check that, for \(a=0\), the size of the jump in the function at \(x=\frac{\pi }{6}\) is \([f]=0\), the size of the jump in the first derivative is \([f']=10\) and in the second derivative is \([f'']=-2\).

We compare the performance of three algorithms: the first is the linear algorithm, which is just the application of the 4-point subdivision to the data. The second is the quasi-linear ENO-SR scheme discussed in [7], which uses special subdivision rules near the corner. The third is the R–C algorithm suggested here. In Fig. 5 we present the result obtained by the linear algorithm (top), the quasi-linear ENO-SR scheme (bottom left), and the R–C algorithm (bottom right). In Fig. 6 we present a zoom around the corner. Figure 7 shows the error in the limit function obtained by each of the three algorithms. In order to obtain these graphs, we have started from 16 data points of the original function in (24) and we have performed 5 levels of subdivision. In Figs. 5 and 6, we represent the original function with a dotted line, the discretized data at the lower resolution (16 data points) with blank circles, and the limit function with a dashed line. In Fig. 7 we represent the linear algorithm in black, the R–C algorithm in red, and the quasi-linear algorithm in blue. We can see that the results obtained by the quasi-linear algorithm and the R–C algorithm are very similar. However, as we will see in Sect. 4.2, the regularity of the limit function obtained by the quasi-linear algorithm is lower than that obtained by the R–C approach.

Now we apply the algorithm to a function with more than one discontinuity. For example, the function

$$\begin{aligned} f(x)=\left\{ \begin{array}{ll} \left( x-\frac{\pi }{12}\right) \left( x-\frac{\pi }{12}-10\right) +x^2+\sin (10x), &{} \text {if } x< \frac{\pi }{12}\\ x^2+\sin (10x),&{} \text {if } \frac{\pi }{12}\le x< \frac{3\pi }{12},\\ (x-\frac{3\pi }{12})(x-\frac{3\pi }{12}-5)+x^2+\sin (10x),&{} \text {if } x\ge \frac{3\pi }{12}, \end{array}\right. \end{aligned}$$
(25)

that presents two discontinuities. In Fig. 8 we present the result obtained when applying the R–C algorithm to the function in (25). We can see that the algorithm can deal with more than one discontinuity with no problem, just applying steps 1 and 3 of Sect. 3 as many times as discontinuities are detected.

Fig. 5

Limit function obtained by the linear algorithm (top), the quasi-linear algorithm (bottom left) and the R–C algorithm (bottom right). In order to obtain these graphs we have started from 16 initial data points of the function in (24)

Fig. 6

Zoom of the limit functions shown in Fig. 5 obtained by the linear algorithm (top), the quasi-linear algorithm (bottom left) and the R–C algorithm (bottom right)

Fig. 7

Error obtained when subdividing the sampling of the function in (24) with 16 initial points. The limit function has 512 points and has been obtained after \(l=5\) levels of subdivision. We can see that the errors for the quasi-linear and the R–C algorithms are similar

Fig. 8

Limit function obtained by the R–C algorithm after five levels of subdivision using as initial data the piecewise smooth function in (25) that presents two corners. At the bottom of the figure we present two zooms around the corners

4.1 Jump discontinuities in the point-value sampling case

We have previously mentioned that it is not possible to locate the position of a discontinuity in the function using a discretization by point-values. Although this is true, it is indeed possible to locate the interval of length h that contains the discontinuity. Our objective is to obtain subdivided data that keeps the regularity of the linear subdivision scheme applied, that does not smear the discontinuities nor present the Gibbs phenomenon, and that has optimal accuracy close to the discontinuities. Thus, as argued in Theorem 3.1, we just assume that the discontinuity is placed at the middle of the interval. Then, as before, we can compute high order approximations of the sizes of the jumps in the function and its derivatives using the process described in Sect. 3.3. Let’s apply the R–C algorithm to data obtained from the sampling of the function,

$$\begin{aligned} f(x)=\left\{ \begin{array}{ll} \left( x-\frac{\pi }{12}\right) \left( x-\frac{\pi }{12}-10\right) +x^2+\sin (10x)+1, &{} \text {if } x< \frac{\pi }{12},\\ x^2+\sin (10x),&{} \text {if } \frac{\pi }{12}\le x< \frac{3\pi }{12},\\ (x-\frac{3\pi }{12})(x-\frac{3\pi }{12}-5)+x^2+\sin (10x)+2,&{} \text {if } x\ge \frac{3\pi }{12}. \end{array}\right. \end{aligned}$$
(26)

Figure 9 presents the result obtained after five levels of subdivision using 100 initial points. As expected, we do not obtain Gibbs phenomenon nor smearing of discontinuities. Observe that in Fig. 9 we have plotted the limit function and the original data with continuous lines to point out the position of the jump in the function, but no data or subdivided data is placed in the middle of the jump.

In the following subsections, we will check the regularity and the accuracy obtained using the R–C algorithm.

Fig. 9

Limit function obtained after 5 levels of subdivision using the R–C algorithm for a piecewise continuous function. The initial data has 100 points. At the bottom of the figure we present two zooms around the discontinuities

4.2 Numerical regularity

Following the notation used by Dyn and Levin in [23], a univariate stationary subdivision scheme with finitely supported mask \(\textbf{a}=\{a_j\}_{j\in \mathbb {Z}}\) starts from an initial sequence of data \(f^0=\{f^0_i\}_{i\in J_0}\). The new values at level \(k+1\), denoted by \(f^{k+1}=\{f^{k+1}_i\}_{i\in J_{k+1}}\), are obtained from the values at level k by applying the rule,

$$\begin{aligned} (S_\textbf{a} f^k)_{i}=f^{k+1}_{i}:=\sum _{j\in J_k} a_{i-2j}f^k_j. \end{aligned}$$
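As an illustration (our own sketch, not code from the paper), the rule \((S_\textbf{a} f)_i=\sum _j a_{i-2j}f_j\) can be implemented as upsampling followed by convolution with the mask; here we use the mask of the interpolatory 4-point scheme with \(w=\frac{1}{16}\).

```python
import numpy as np

def subdivision_step(f, mask):
    # One step of S_a: insert a zero between consecutive data values
    # ("upsampling") and convolve with the odd-length, centered mask,
    # which realizes (S_a f)_i = sum_j a_{i-2j} f_j. Zero padding
    # makes the first and last few output entries meaningless.
    up = np.zeros(2 * len(f) - 1)
    up[::2] = f
    return np.convolve(up, mask, mode="same")

# Mask of the interpolatory 4-point scheme (w = 1/16).
dd4_mask = np.array([-1.0, 0.0, 9.0, 16.0, 9.0, 0.0, -1.0]) / 16.0
```

Since the 4-point scheme reproduces cubics, applying one step to samples of a cubic at the integers returns the cubic at the half-integers in the interior of the output.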

Generally, the data \(f^{k+n}\) is said to be obtained from the data \(f^{k}\) after n levels of subdivision. Following [45], the regularity of a limit function of a subdivision process can be estimated numerically from the values \(\{f^L_n\}\) obtained at subdivision level L as

$$\begin{aligned} \beta _k=-\log _2\left( 2^k\frac{||\Delta ^{k+1}f^{L+1}_n ||_\infty }{||\Delta ^{k+1}f^{L}_n||_\infty } \right) , k=1,2,\end{aligned}$$

where \(\Delta ^2f^{L}_n=f^{L}_{n-1}-2f^{L}_n+f^{L}_{n+1}\) and \(\Delta ^3f^{L}_n=\Delta ^2f^{L}_n-\Delta ^2f^{L}_{n-1}\). This expression provides estimates \(\beta _1\) and \(\beta _2\) such that the limit functions are \(C^{1+\beta _1-}\) and \(C^{2+\beta _2-}\) smooth (see footnote 1 on page 4).
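Given the data at two consecutive subdivision levels, the estimate can be computed directly. The sketch below is ours; the check uses samples of \(x^2\) and \(x^3\), for which the ratios of the difference norms, and hence \(\beta _1=\beta _2=1\), are known in closed form (\(\Delta ^2\) of samples of \(x^2\) equals \(2h^2\), and \(\Delta ^3\) of samples of \(x^3\) equals \(6h^3\)).

```python
import numpy as np

def numerical_regularity(fL, fL1, k):
    # beta_k = -log2( 2^k * ||D^{k+1} f^{L+1}||_inf / ||D^{k+1} f^L||_inf ),
    # with D the forward difference operator; shifting the stencil does
    # not change the norms, so forward differences can stand in for the
    # centered ones used in the text.
    num = np.max(np.abs(np.diff(fL1, k + 1)))
    den = np.max(np.abs(np.diff(fL, k + 1)))
    return -np.log2(2.0 ** k * num / den)
```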

Let’s consider the regularity of the limit function obtained when subdividing data acquired through a point-value discretization of the function (24). In Table 1 we present numerical estimations of the regularity constants of the linear algorithm, the quasi-linear algorithm, and the R–C algorithm. To obtain this table, we start from 100 initial data points and we subdivide from \(L=5\) to \(L=10\) levels of subdivision in order to obtain an approximation of the limit function. We measure the numerical regularity for \(x<\frac{\pi }{6}\), ensuring that the corner is not contained in the data. From this table, we can see that the numerical estimate of the regularity for the R–C algorithm is very close to the one obtained by the linear scheme. The quasi-linear scheme is clearly less regular.

4.3 Grid refinement analysis in the point-value sampling case

In this subsection, we present an experiment oriented to check the order of accuracy of the schemes presented. In order to do this, we check the error of interpolation in the infinity norm obtained in the whole domain and then we perform a grid refinement analysis. We define the order of accuracy of the reconstruction as,

$$\begin{aligned} order_{k+1}=\log _2\left( \frac{E^k_\infty }{E^{k+1}_\infty }\right) , \end{aligned}$$

\(E_\infty ^k\) being the \(\ell _\infty \) error obtained using data with a grid spacing \(h_k=N_k^{-1}\) and \(E_\infty ^{k+1}\) the error obtained with a grid spacing \(h_k/2\).
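In code, the estimate is a one-liner; for errors behaving exactly like \(Ch^4\) the formula returns 4 (a sketch of ours, with illustrative numbers):

```python
import numpy as np

def estimated_order(E_coarse, E_fine):
    # order_{k+1} = log2(E^k / E^{k+1}) for errors measured on grids
    # with spacings h_k and h_k / 2.
    return np.log2(E_coarse / E_fine)
```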

For point-value data at the points \(\{x_j=jN_k^{-1}\}_{j=0, \ldots , N_k}\), we estimate \(E^k_\infty \) using the values of the test function and its approximation on a mesh refined by a factor of \(2^{-10}\). Table 2 presents the results obtained by the three algorithms for the test function in (24) with \(a=0\). We can see how the linear algorithm loses accuracy due to the presence of the discontinuity, while the quasi-linear algorithm and the R–C approach keep high order of accuracy in the whole domain.

Let us remark that for any initial data obtained from a piecewise polynomial function of degree smaller or equal than three, the R–C scheme and the quasi-linear method achieve exact approximations within machine precision.

We also check the accuracy attained by the R–C scheme when working with piecewise continuous functions such as the ones analyzed in Sect. 4.1. For this experiment, the data at resolution k has been obtained through the sampling of the function in (24) with \(a=10\), which is a piecewise continuous function with a jump discontinuity of size a at \(x=\frac{\pi }{6}\). The philosophy of this experiment is the one explained in Sect. 4.1: as the exact position of the discontinuity is lost when discretizing a function by point-values, we consider that the discontinuity is placed at the middle of the suspicious interval. Then, in order to obtain the error and the order of accuracy, we compare with the original function but with the discontinuity placed in the middle of the suspicious interval. The results are presented in Table 3, where we can see that the R–C algorithm attains the maximum possible accuracy in the infinity norm.

Table 1 Numerical estimation of the limit functions regularity \(C^{1+\beta _1-}\) and \(C^{2+\beta _2-}\) for the different schemes presented and the function in (24)
Table 2 Grid refinement analysis in the \(l^{\infty }\) norm for the function in (24) for the three algorithms
Table 3 Grid refinement analysis in the \(l^{\infty }\) norm for the R–C scheme for the function in (24), which in this case is a piecewise continuous function with a jump discontinuity of size \(a=10\). As the exact position of the discontinuity is lost when discretizing a function through the point-values, we consider that the discontinuity of the function in (24) is placed not at \(\pi /6\) but at the middle of the suspicious interval

4.4 R–C approximation of bivariate point-value data

Taking into account what has been explained in the previous subsections, we can try to approximate piecewise smooth two-dimensional functions. Let’s consider, for example, the following bivariate function,

$$\begin{aligned} f(x,y)=\left\{ \begin{array}{ll} \cos (\pi x)\cos (\pi y), &{} \text {if } (x+\frac{1}{2})^2+(y-\frac{1}{2})^2< 1,\\ 1-\cos (\pi x)\sin (\pi y),&{} \text {if } (x+\frac{1}{2})^2+(y-\frac{1}{2})^2\ge 1, \end{array}\right. \end{aligned}$$
(27)

that is displayed in Fig. 10. The additional challenge here is locating and approximating the discontinuity curve. Assuming that the discontinuity curve is nowhere parallel to the x-axis, we can apply here the level set function approach presented in [49].

We can proceed as follows:

  • Detect and locate the possible discontinuities using the row data. Use this information to build approximate signed-distance data from the unknown discontinuity curve. Then fit a spline surface S(x, y) to this data, and construct the zero level set of this surface. This approach from [49] gives an O(h) approximation to the discontinuity curve.

  • Obtain and store the one-dimensional correction terms for all the rows.

  • Add the correction term to the rows and apply the tensor product linear 4-point subdivision algorithm to the corrected data to generate the first stage approximation g(x, y).

  • Subdivide the correction terms along the columns to obtain a smooth function T(x, y) representing the correction term over the whole domain.

  • Use the level set function S(x, y) to set T(x, y) to zero to the right of the approximate discontinuity curve.

  • Subtract the masked correction term from g(x, y).

Of course, in order to apply this technique successfully, the discontinuities must be far enough from each other and from the boundaries. The top left panel of Fig. 10 presents the original bivariate data sampled from the function in (27). The top right panel presents the resulting subdivided data obtained following the process described above. The bottom left panel shows the subdivided correction term, which is clearly smooth. The bottom right panel shows the subdivided and masked correction term.

Fig. 10

Top left, plot of the function in (27). Top right, subdivided data. Bottom left, subdivided correction term. Bottom right, masked subdivided correction term

5 Numerical results for the case of cell-average data

In this section, we work with piecewise continuous functions supposing that the data is discretized by cell-averages so that we can localize the position of the jump discontinuity up to the accuracy needed using the algorithm in [7]. Then, as before, we can compute high order approximations of the size of the jumps in the function and its derivatives using the process described in Sect. 3.3.

In all the experiments presented in this section we will use the following function discretized by cell-averages,

$$\begin{aligned} f(x)=\left\{ \begin{array}{ll} 10+\left( x-\frac{\pi }{6}\right) \left( x-\frac{\pi }{6}-10\right) +x^2+\sin (10x), &{} \text {if } x< \frac{\pi }{6},\\ x^2+\sin (10x),&{} \text {if } x\ge \frac{\pi }{6}, \end{array}\right. \end{aligned}$$
(28)

with \(x\in [0, 1]\). It is easy to check that for the primitive F of f, the size of the jump in the function at \(x=\frac{\pi }{6}\) is \([F]=0\), the size of the jump in the first derivative is \([F']=-[f]=-10\) and in the second derivative is \([F'']=[f']=10\).

Figure 11 shows the limit function obtained by the linear algorithm (top), the quasi-linear algorithm (bottom left), and the R–C algorithm (bottom right). In order to obtain these graphs, we have started from 20 initial cell-averages of the function in (28). When discretizing the function in (28) through the cell-averages, we have represented the data at the lowest resolution \(\bar{f}^{0}_j\) at the positions \(x_{j-1}^{0}+\frac{h_{0}}{2}\). Figure 11 shows that the linear algorithm produces oscillations close to the discontinuity. The R–C and the quasi-linear algorithms do not produce oscillations and attain a very good approximation close to the discontinuities. Figure 12 shows a zoom around the discontinuity. Let us mention here that the point attained in the middle of the jump by the quasi-linear method and the R–C algorithm is not due to diffusion introduced by the algorithms; it is due to the kind of discretization used. Note that, if the function presents a discontinuity in the interval \(({x_{j-1}^k}, x^k_{j})\), then the discretization in (3) will always return a cell value at some point in the middle of the jump, as can be observed in the graphs of Figs. 11 and 12. This value simply corresponds to the mean of the function in the interval that contains the discontinuity, i.e. the cell-average value in (3).
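This effect is easy to reproduce. The sketch below (ours, with hypothetical numbers) checks that the cell average over the cell containing the jump is the convex combination of the two one-sided values, weighted by where \(s^*\) falls inside the cell, and hence lies strictly between them.

```python
import numpy as np

# Hypothetical data: a step function with one-sided values 1 and 3,
# jumping at sstar inside the cell [a, b].
left_val, right_val = 1.0, 3.0
a, b, sstar = 0.4, 0.5, 0.43

# Cell average computed with a midpoint rule on a fine subgrid.
n = 200000
tm = a + (np.arange(n) + 0.5) * (b - a) / n
avg = np.mean(np.where(tm < sstar, left_val, right_val))

# Exact cell average: convex combination of the one-sided values,
# weighted by the fraction of the cell to the left of the jump.
theta = (sstar - a) / (b - a)
expected = theta * left_val + (1.0 - theta) * right_val
```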

Fig. 11

Limit function obtained by the linear algorithm (top), the quasi-linear algorithm (bottom left), and the R–C algorithm (bottom right). In order to obtain these graphs, we have started from 20 initial cell-averages of the function in (28)

Fig. 12

Zoom of the limit functions shown in Fig. 11 obtained by the linear algorithm (top), the quasi-linear algorithm (bottom left) and the R–C algorithm (bottom right)

5.1 Numerical regularity in the cell-average case

In this subsection, we analyze the numerical regularity attained by each of the algorithms for data discretized by cell-averages. In Table 4 we present some numerical estimations of the regularity constants of the different algorithms analyzed. The data has been obtained from the discretization through cell-averages of the function in (28). As in the point-value case, in order to obtain this table we start from 100 initial data cells and subdivide from \(L=5\) to \(L=10\) levels of subdivision to obtain an approximation of the limit function. We measure the numerical regularity for \(x<\frac{\pi }{6}\), ensuring that the discontinuity is not contained in the data. From this table, we can see that the numerical estimate of the regularity for the R–C algorithm is close to the one obtained by the linear scheme. The quasi-linear scheme seems to be less regular.

5.2 Grid refinement analysis for the cell-average sampling

In this section we reproduce the grid refinement analysis that we performed for the point-values’ case, using the infinity norm, but we also use the \(L^1\) norm. For data at the points \(\{x_j=jN_k^{-1}\}_{j=0, \ldots , N_k}\), we estimate \(E^k_1\) using the cell-average data of the exact test function and its approximation on a mesh refined by a factor of \(2^{-10}\).

In this case, we have used the function presented in (28) discretized by cell-averages. Table 5 presents the errors and orders of approximation obtained by the three algorithms in the infinity norm. For our approach we show the error outside the interval \([x^*, s^*]\), in order to check the results established in Theorem 3.5. For the linear and quasi-linear subdivision schemes, we present the infinity norm in the whole domain. We can see how the linear algorithm loses accuracy close to the discontinuity, while the quasi-linear algorithm and the R–C approach keep high order of accuracy. In particular, the R–C algorithm attains \(O(h^3)\) accuracy, which is in accordance with the results of Theorem 3.5. We can also perform the same grid refinement analysis using the \(l^1\) norm instead; the results are presented in Table 6. Note that in both tables (Tables 5, 6) the numerical order of accuracy is at least 3, and the errors are smaller in both norms for the R–C approach. Note also that the accuracy has been reduced by one for all the algorithms, as we are using a subdivision algorithm with a stencil of three cells. The R–C approach attains an order of accuracy similar to that of the quasi-linear algorithm.

Table 4 Numerical estimation of the limit functions regularity \(C^{\beta _1-}\) and \(C^{1+\beta _2-}\) for the different schemes presented and the function in (28) discretized by cell-averages

5.3 Approximation of bivariate cell-averages’ data

In this case, we have applied the algorithms analyzed in the previous sections to two-dimensional data using a tensor product approach, i.e. we process one-dimensional data first by rows and then by columns. The treatment of the discontinuity curve for data discretized using cell-averages in multiple dimensions is much more difficult. In fact, the algorithm in [7] is not designed to work in multiple dimensions except for very simple cases, i.e. when the discontinuities are aligned with the axes. The use of an accurate algorithm to locate the discontinuity in several dimensions would allow tackling the problem using the method proposed in this article. In this section we present a very simple example to show that the algorithm can be extended to cell-averages using a tensor product approach.
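A minimal sketch of such a tensor product step (our own simplification, with a linear fallback at the two boundary odd points): the 1D 4-point rule is applied to every row and then to every column, and interior values reproduce data that is polynomial of degree at most 3 in each variable.

```python
import numpy as np

def dd4_step(f):
    # 1D 4-point step (w = 1/16); the linear fallback at the two
    # boundary odd points is a simplification for this sketch.
    n = len(f)
    out = np.empty(2 * n - 1)
    out[::2] = f
    out[3:-3:2] = (-f[:-3] + 9 * f[1:-2] + 9 * f[2:-1] - f[3:]) / 16
    out[1] = (f[0] + f[1]) / 2
    out[-2] = (f[-2] + f[-1]) / 2
    return out

def tensor_step(F):
    # Tensor product refinement of a 2D array: apply the 1D step to
    # every row, then to every column of the result.
    F = np.apply_along_axis(dd4_step, 1, F)
    return np.apply_along_axis(dd4_step, 0, F)
```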

In this section we will work with the bivariate function discretized by cell-averages,

$$\begin{aligned} f(x,y)=\left\{ \begin{array}{ll} \cos (\pi x) \cos (\pi y), &{} \text {if } 0\le x<0.5, 0\le y<0.5,\\ -\cos (\pi x) \cos (\pi y)+2,&{} \text {if } 0.5< x\le 1, 0\le y<0.5,\\ -\cos (\pi x) \cos (\pi y)+2,&{} \text {if } 0\le x<0.5, 0.5\le y<1,\\ -\cos (\pi x) \cos (\pi y)+4,&{} \text {if } 0.5< x\le 1, 0.5\le y<1. \end{array}\right. \end{aligned}$$
(29)
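Since (29) is separable within each quadrant, its cell averages can be computed exactly from one-dimensional integrals of \(\cos(\pi t)\). The sketch below assumes an even number of cells per direction, so that the discontinuity lines \(x=0.5\) and \(y=0.5\) fall on cell edges and no cell straddles a jump (the function name and grid conventions are ours, for illustration only):

```python
import numpy as np

def cell_averages_29(n):
    """Exact cell averages of the function in (29) on an n x n uniform
    grid over [0,1]^2; n must be even so the jumps lie on cell edges."""
    assert n % 2 == 0
    h = 1.0 / n
    e = np.linspace(0.0, 1.0, n + 1)          # cell edges
    # exact 1D cell averages of cos(pi t): (sin(pi b) - sin(pi a)) / (pi h)
    a = (np.sin(np.pi * e[1:]) - np.sin(np.pi * e[:-1])) / (np.pi * h)
    c = e[:-1] + h / 2                         # cell centers
    right = c > 0.5                            # cells past the jump line
    # sign is +1 only when both coordinates are below 0.5, else -1
    sign = np.where(np.outer(~right, ~right), 1.0, -1.0)
    # constant shift: +2 for each coordinate past 0.5
    shift = 2.0 * (right[:, None].astype(float) + right[None, :])
    return sign * np.outer(a, a) + shift
```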

Figure 13 shows the result of one step of subdivision using the tensor product approach for the data presented in the top-left panel of Fig. 13; that is, we apply the one-dimensional subdivision scheme by rows and then by columns. The result in the top-right panel corresponds to the linear algorithm. We can observe the effect of the discontinuity in the subdivided data, in the form of smearing of the data and the Gibbs effect. The results of the quasi-linear algorithm and the R–C approach are presented in the bottom-left and bottom-right panels, respectively. We can see that both results are very similar. As shown in the previous sections, the main difference is the regularity of the data close to the discontinuity, which is higher for the R–C approach. In terms of accuracy, both algorithms perform similarly.

Table 5 Grid refinement analysis after ten levels of subdivision in the \(l^{\infty }\) norm for the function in (28) discretized by cell-averages and for the three subdivision schemes presented
Table 6 Grid refinement analysis in the \(l^1\) norm for the function in (28) discretized by cell-averages and for the three approximation schemes

6 Conclusions

In this paper, we have introduced a regularization–correction approach to the problem of approximating piecewise smooth functions. It can be used in the framework of subdivision schemes in order to design curves and surfaces with a desired regularity while avoiding the Gibbs phenomenon. In the first stage, the data is smoothed by subtracting an appropriate non-smooth data sequence. Then, a uniform linear 4-point subdivision approximation operator is applied to the smoothed data. Finally, an approximation with the proper discontinuity structure is restored by correcting the smooth approximation with the non-smooth element used in the first stage. We deal with both cases of point-value data and cell-average data. The resulting approximations for functions with discontinuities have the following five important properties:

  1. Interpolation.

  2. High precision.

  3. High piecewise regularity.

  4. No smearing of discontinuities.

  5. No oscillations.

We have used the 4-point Dubuc–Deslauriers interpolatory subdivision scheme [20] through which we obtain a \(C^{2-}\) piecewise regular limit function that is capable of reproducing piecewise cubic polynomials.
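The cubic reproduction property of the 4-point Dubuc–Deslauriers scheme can be verified directly: the inserted value between two consecutive samples is the cubic interpolant of the four surrounding samples evaluated at the midpoint, so the rule is exact for any cubic polynomial. A quick check (the particular cubic is arbitrary):

```python
import numpy as np

# 4-point Dubuc-Deslauriers midpoint weights: the value inserted between
# f(x_1) and f(x_2) is the cubic interpolating f(x_0),...,f(x_3)
# evaluated at the midpoint (x_1 + x_2) / 2.
w = np.array([-1.0, 9.0, 9.0, -1.0]) / 16.0

p = lambda t: 2*t**3 - t**2 + 5*t - 3        # an arbitrary cubic
nodes = np.array([0.0, 1.0, 2.0, 3.0])        # four consecutive samples
midpoint_value = w @ p(nodes)                 # DD insertion rule
assert abs(midpoint_value - p(1.5)) < 1e-12   # exact for cubics
```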

Fig. 13

Top left: plot of the function in (29) discretized through cell-average values at a low resolution. Top right: subdivided data using the linear algorithm. Bottom left: subdivided data using the quasi-linear algorithm. Bottom right: subdivided data using the R–C algorithm

The first advantage of our approach is that the resulting scheme has the same regularity as the linear subdivision scheme used in the second stage of the algorithm. The accuracy of the regularization–correction approach derives from the accuracy attained in locating the discontinuities and in approximating the sizes of the jumps in the function and its derivatives, which is done through Taylor expansions. Thus, the R–C algorithm has, in each smoothness zone, the same regularity as the linear subdivision scheme [17, 24, 25], together with the same accuracy as the quasi-linear scheme [7]. We present the results for the 4-point linear subdivision scheme, but the approach is applicable to any other subdivision scheme aimed at approximating functions with corner or jump discontinuities. By construction, the R–C subdivision algorithm does not present the Gibbs phenomenon and does not smear the discontinuities. To the best of our knowledge, this is the first time that an algorithm possessing all these properties simultaneously has appeared in the literature. The numerical results confirm our theoretical analysis.