1 Introduction

Over the years, many techniques have been designed to construct curves or surfaces with properties such as polynomial reproduction or monotonicity preservation; for example, splines, non-uniform rational B-splines (NURBS) and others (see, e.g., [2, 9]). In this context, linear subdivision schemes are useful and efficient tools due to their simple computation (see, e.g., [8, 19, 24]). They consist of obtaining new points from given data using refinement operators and can be classified according to such operators: if a single operator is used for all iterations, the subdivision scheme is called stationary or level-independent (see, e.g., [8, 16]); otherwise, it is called non-stationary or level-dependent (see, e.g., [10, 11, 18]). They are also classified by the linearity of the operators (see, e.g., [13, 14]).

There is a vast literature on the generation of subdivision schemes and the study of their properties. An essential property is convergence, which means that the process tends uniformly to a continuous function for any initial values. Deslauriers and Dubuc [12] proved, using Fourier transform techniques, that the scheme based on centered Lagrange interpolation is convergent.

One of the most commonly studied properties is the reproduction of polynomials, i.e., if the given data are point-values of a polynomial, then the subdivision scheme generates more point-values of that polynomial. This is studied in detail in [17]. Its study is interesting since reproduction is linked to the convergence properties and the approximation capability of the scheme.

In some real applications, the given data come from measurements that are contaminated by noise and, as a consequence, a suitable subdivision scheme should be used to converge to an appropriate limit function. To this purpose, Dyn et al. in [16] propose a new linear scheme based on least-squares methods, where the noise is reduced by applying the scheme several times. These schemes are determined by two parameters m and d with \(d<m\): for each m consecutive data values, \((y_1,\ldots ,y_{m})\), attached to some equidistant knots \((x_1,\ldots ,x_{m})\), a polynomial regression is performed. The search is constrained to polynomials of degree d and leads to a unique solution, \({\hat{p}}\), that minimizes the regression error with respect to the \(\ell ^2\)-norm (least squares):

$$\begin{aligned} \hat{p}=\mathop {\mathrm {arg\,min}}\limits _{p \in \Pi _{d}(\mathbb {R})} \sum _{l=1}^m (y_{l}-p(x_l))^2. \end{aligned}$$
(1)

The subdivision refinement rules can be obtained by evaluating \({\hat{p}}\) at a certain point, which, in this work, is assumed to be 0 without loss of generality. The resulting schemes are linear, which implies some benefits and drawbacks. In [16], convergence is proved for \(d=0,1\), as well as some properties such as polynomial reproduction. These schemes significantly reduce noise when the data is contaminated, as illustrated in Fig. 1c, but exhibit poor approximation in the absence of noise, as seen in Fig. 1a. In contrast, we propose new subdivision schemes that offer enhanced approximation capabilities while effectively eliminating noise, as depicted in Fig. 1b, d.
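To make the construction concrete, the rule derived from (1) can be sketched in a few lines of Python. This is a minimal illustration under our own naming (ls_rule is not from [16]): a degree-d polynomial is fitted to m consecutive values in the least-squares sense and evaluated at 0.

```python
import numpy as np

def ls_rule(values, knots, d):
    """Least-squares rule from (1): fit a degree-d polynomial to the data
    (knots, values) and evaluate it at 0."""
    coeffs = np.polyfit(knots, values, d)   # least-squares polynomial fit
    return np.polyval(coeffs, 0.0)

# With noise-free samples of a degree-2 polynomial, the rule reproduces the
# exact value p(0); with noisy samples, it averages the noise out.
p = lambda x: 1.0 + 2.0 * x - x**2
knots = np.array([-3.0, -1.0, 1.0, 3.0, 5.0])
print(ls_rule(p(knots), knots, d=2))        # -> 1.0 = p(0)
```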

Fig. 1

Limit curves (black lines) are derived from subdivision schemes applied to star-shaped data points (blue circles). In a and b, the schemes are applied to the original data, while in c and d, the data is corrupted by normal noise with \(\sigma =0.5\). The subdivision scheme proposed in [16] is employed in a and c, while a novel subdivision scheme is utilized in b and d

In this example, and in many applied situations, the location of the data is relevant to obtain the approximation; hence, a weight function is considered to assign significance depending on the distance from the knots \(x_l\) to 0. These methods, such as Shepard's algorithm (see [26]), are called moving least squares (see [22]). In [4, 5], weighted local polynomial regression (WLPR) was used to design a prediction operator for a multiresolution algorithm, leading to good results in image processing when the data were contaminated with noise. Prediction operators can be considered subdivision operators and their properties can be studied [10]. In this paper, we study the family of subdivision schemes based on the prediction operators in [4] and develop a new technique to study their convergence based on their asymptotic behaviour. We also analyse properties such as polynomial reproduction, the Gibbs phenomenon for discontinuous data, monotonicity preservation, and denoising and approximation capabilities. We present some theoretical conditions to construct subdivision schemes, depending on the weight function, which combine an accurate approximation to the data if it is noise-free and significant noise removal if it is contaminated, as we show in Fig. 1b, d. We provide some examples to check the theoretical results.

The paper is organized as follows: firstly, we briefly review the classical components of linear subdivision schemes with the aim of making this work self-contained. In Sect. 3, we explain the WLPR and define a general form, leading to new subdivision scheme definitions. Afterwards, we study different properties in some particular cases: starting with \(d=0,1\), we analyse the convergence, the smoothness of the limit functions, the monotonicity preservation and the Gibbs phenomenon when the initial data present large gradients. In Sect. 5, we develop a new technique to study the convergence of a family of schemes, which is then applied to the cases \(d=2,3\). We analyse the approximation and noise reduction capabilities of the new schemes in Sects. 7 and 8. Finally, some numerical experiments are performed to confirm the theoretical properties in Sect. 9, and some conclusions and future work are proposed in Sect. 10.

2 Preliminaries: A Brief Review of Linear Subdivision Schemes

Let us denote by \(\ell _\infty (\mathbb {Z})\) the set of bounded real sequences with indices in \(\mathbb {Z}\). A linear binary univariate subdivision operator \(S_\textbf{a}:\ell _\infty (\mathbb {Z})\rightarrow \ell _\infty (\mathbb {Z})\) with finitely supported mask \(\textbf{a}=\{a_l\}_{l\in \mathbb {Z}}\subset \mathbb {R}\) is defined to refine the data on the level k, \(\textbf{f}^k=\{f^k_j\}_{j\in \mathbb {Z}} \in \ell _\infty (\mathbb {Z})\), as:

$$\begin{aligned} f^{k+1}_{2j+i}:=\left( S_\textbf{a}\textbf{f}^k\right) _{2j+i}:=\sum _{l\in \mathbb {Z}} a_{2l-i}f^k_{j+l}, \quad j\in \mathbb {Z}, \quad i=0,1. \end{aligned}$$
(2)

In this work, we only consider level-independent subdivision schemes, meaning that the successive application of a unique operator \(S_\textbf{a}\) constitutes the subdivision scheme. Hence, we will refer to \(S_\textbf{a}\) as the subdivision scheme as well. The adjective binary refers to the two formulas/rules of (2) (corresponding to \(i=0\) and \(i=1\)), which are characterized by the even mask, \(\textbf{a}^0=\{a_{2l}\}_{l\in \mathbb {Z}}\), and by the odd mask, \(\textbf{a}^1=\{a_{2l-1}\}_{l\in \mathbb {Z}}\). The length of a mask is the number of elements between the first and the last non-zero elements, both included.
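As a quick illustration (a sketch of ours, not code from the paper), one refinement step (2) can be computed by upsampling the data with zeros and convolving with the mask; for the symmetric masks considered in this paper, this convolution coincides with the rule in (2) up to an index shift by the mask center.

```python
import numpy as np

def subdivision_step(f, mask):
    """One binary refinement step of (2): upsample the data with zeros and
    convolve with the mask (exact up to an index shift for symmetric masks).
    Boundary entries are affected by truncating f to a finite vector."""
    up = np.zeros(2 * len(f))
    up[::2] = f                      # f_j goes to position 2j
    return np.convolve(up, mask)     # (S_a f)_m = sum_j a_{m-2j} f_j

# Refining delta data with the mask [1, 2, 1]/2 produces the hat function.
f0 = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
print(subdivision_step(f0, np.array([1.0, 2.0, 1.0]) / 2))
```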

Remark 1

If a linear subdivision scheme is applied to some data \(\widetilde{\textbf{g}} = \{G(j) + \epsilon _j\}_{j\in \mathbb {Z}}\), where G is a smooth function and \(\varvec{\epsilon }= \{\epsilon _j\}_{j\in \mathbb {Z}}\) is random data, also referred to as "noise", the result is:

$$\begin{aligned} S_\textbf{a}\widetilde{\textbf{g}} = S_\textbf{a}\textbf{g} + S_\textbf{a}\varvec{\epsilon }, \end{aligned}$$

where \(\textbf{g}=\{G(j)\}_{j\in \mathbb {Z}}\), which implies that we can study the smooth and the purely noisy cases separately.

If we apply these rules recursively to some initial data \(\textbf{f}^0\), it is desirable that the process converges to a continuous function, in the following sense.

Definition 1

A subdivision scheme \(S_\textbf{a}\) is uniformly convergent if for any initial data \(\textbf{f}^0\in \ell _\infty (\mathbb {Z})\), there exists a continuous function \(F:\mathbb {R}\rightarrow \mathbb {R}\) such that

$$\begin{aligned} \lim _{k\rightarrow \infty } \sup _{j \in \mathbb {Z}}|(S_\textbf{a}^k \textbf{f}^0)_j - F(2^{-k}j)|=0. \end{aligned}$$

Then, we denote by \(S^\infty _{\textbf{a}}\textbf{f}^0=F\) the limit function generated from \(\textbf{f}^0\). We write \(S_\textbf{a}\in \mathcal {C}^d\) if all the limit functions have such smoothness, i.e. \(S^\infty _{\textbf{a}}\textbf{f}^0 \in \mathcal {C}^d\), \(\forall \textbf{f}^0 \in \ell _\infty (\mathbb {Z})\).

A usual tool for the analysis of linear schemes is the symbol (see [19]), which we define as follows.

Definition 2

The symbol of a subdivision scheme \(S_\textbf{a}\) is the Laurent polynomial \( a(z) = \sum _{j\in \mathbb {Z}} a_j z^{-j}. \)

According to Definition 5.1 of [17], a subdivision scheme \(S_\textbf{a}\) is odd-symmetric if \( a_j = a_{-j}, \ \forall j\in \mathbb {Z}, \) or even-symmetric if \( a_j = a_{1-j}, \ \forall j\in \mathbb {Z}. \) In terms of the symbol, these conditions translate into \(a(z)=a(1/z)\) and \(a(z)=z a(1/z)\), respectively. The schemes that we construct in this paper are odd-symmetric but, to simplify some equations, we consider a more relaxed definition of odd-symmetry and even-symmetry.

Definition 3

A subdivision scheme \(S_\textbf{a}\) is symmetric if \( a_j = a_{j_0-j}, \ \forall j\in \mathbb {Z}, \) for some \(j_0 \in \mathbb {Z}\). It is even(odd)-symmetric if \(j_0\) is odd (even).

A useful property for a subdivision scheme is the reproduction of polynomials.

Definition 4

A subdivision scheme \(S_\textbf{a}\) reproduces \(\Pi _d\) (polynomials up to degree d) if

$$\begin{aligned} S_\textbf{a}\{p(2 j)\}_{j\in \mathbb {Z}} = \{p(j)\}_{j\in \mathbb {Z}}, \qquad \forall p\in \Pi _d. \end{aligned}$$

A necessary condition for convergence is the reproduction of constants. The following lemma determines the relation between the mask, the symbol, and the reproduction of the constants.

Lemma 2.1

The following facts are equivalent:

(a) \(S_\textbf{a}\) reproduces \(\Pi _0\) (constant functions).

(b) \(\sum _{j\in \mathbb {Z}} a^0_j = \sum _{j\in \mathbb {Z}} a^1_j = 1\).

(c) \(a(z) = (1+z)q(z)\) for some Laurent polynomial q such that \(q(1)=1\).

In that case, the scheme \(S_\textbf{q}\) is well defined and is called the difference scheme. If \(\Vert S_{\textbf{q}}\Vert _\infty < 1\), then \(S_\textbf{a}\) is convergent.

According to the last lemma, a subdivision scheme is convergent if the norm of its difference scheme (as a linear endomorphism on the space of bounded sequences) is less than 1. The following classical result allows us to readily compute this norm.

Lemma 2.2

The norm of a (binary) scheme \(S_\textbf{q}:\ell _\infty (\mathbb {Z})\rightarrow \ell _\infty (\mathbb {Z})\) can be computed as

$$\begin{aligned} \Vert S_\textbf{q}\Vert _\infty = \max \left\{ \sum _{j\in \mathbb {Z}} |q_{2j}|,\sum _{j\in \mathbb {Z}} |q_{2j+1}|\right\} . \end{aligned}$$
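In practice, Lemma 2.2 amounts to two absolute sums; a minimal sketch (our own helper, with the mask passed as a plain coefficient vector, so only the even/odd parity of the indices matters):

```python
import numpy as np

def scheme_norm(q):
    """||S_q||_inf from Lemma 2.2: the larger of the absolute sums over the
    even-indexed and odd-indexed coefficients (invariant under index shifts)."""
    q = np.asarray(q, dtype=float)
    return max(np.abs(q[0::2]).sum(), np.abs(q[1::2]).sum())

print(scheme_norm([0.5, 0.5]))   # difference mask of [1, 2, 1]/2 -> 0.5 < 1
```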

There exists a direct relationship between the symmetry of \(S_\textbf{a}\) and the symmetry of its difference scheme, \(S_\textbf{q}\). We introduce it in the following result.

Lemma 2.3

If a scheme is odd-symmetric, then its difference scheme is even-symmetric.

Proof

It can be easily checked using the symbols. \(\square \)

The next theorem, by Dyn and Levin [19], links the smoothness of \(S_\textbf{a}\) with that of \(S_{2\textbf{q}}\).

Theorem 2.4

If the scheme \(S_{2\textbf{q}}\) is convergent and \(\mathcal {C}^{m-1}\), then \(S_{\textbf{a}}\) is convergent and \(\mathcal {C}^{m}\).

Remark 2

We now give a more explicit formula to compute \(\textbf{q}\) for the kind of schemes we consider in this paper. We will analyse odd-symmetric subdivision schemes, which implies that the length of the mask is always odd, and two possible situations may occur, depending on whether the even or the odd mask has the larger support.

Since the even and odd masks \(\textbf{a}^0\) and \(\textbf{a}^1\) are finitely supported, from now on we will treat them as vectors containing only their support, which will be important for the theoretical results in Sect. 5. The first situation is that, for some \(n\in \mathbb {N}\), the masks are \(\textbf{a}^{0} = \{a^0_{l}\}_{l=1-n}^{n-1}\) and \(\textbf{a}^{1} = \{a^1_{l}\}_{l=1-n}^{n}\), while the second one corresponds to \(\textbf{a}^{0} = \{a^0_{l}\}_{l=-n}^{n}\) and \(\textbf{a}^{1} = \{a^1_{l}\}_{l=1-n}^{n}\) (pay attention to the supports).

To compute \(\textbf{q}\) with a unique formula for both cases, we redefine the mask for the second case, setting \({\bar{a}}^0_l:= a^1_{l}\), \(l=1-n,\ldots ,n\), and \({\bar{a}}^1_l:= a^0_{l-1}\), \(l=1-n,\ldots ,n+1\). Now the first indices of the supports are \(1-n\) in both situations, and the last indices are \(n-1\) and n (for the first and second mask, respectively) in the first situation, and n and \(n+1\) in the second one. In both cases, the second mask is now the longest, and we can assert that there exists some \(n\in \mathbb {N}\) such that

$$\begin{aligned} (S_\textbf{a}\textbf{f})_{2j} = \sum _{l=1-n}^{L_n} a^{0}_{l} f_{j+l}, \quad (S_\textbf{a}\textbf{f})_{2j + 1} = \sum _{l=1-n}^{L_n+1} a^{1}_{l} f_{j+l}, \quad j\in \mathbb {Z}, \end{aligned}$$
(3)

with \(L_n = n-1\) or \(L_n=n\), so that \(\textbf{a}^{0} = \{a^0_l\}_{l=1-n}^{L_n}\) and \(\textbf{a}^{1} = \{a^1_{l}\}_{l=1-n}^{L_n+1}\). In any case, now the odd-symmetry is written as

$$\begin{aligned} a^0_{l}&= a^0_{L_n+1-n - l},&l = 1-n,\ldots ,L_n,\\ a^1_{l}&= a^1_{L_n+2-n - l},&l = 1-n,\ldots ,L_n+1. \end{aligned}$$

In terms of the coefficients of \(\textbf{a}\), this is

$$\begin{aligned} a_{2l} = a^0_{2(L_n+1-n - l)} = a^0_{2(L_n+1-n) - 2l}, \quad a_{2l-1} = a_{2(L_n+2-n - l)-1} = a_{2(L_n+1-n) - (2l-1)}. \end{aligned}$$

Observe that \(a_j = a_{j_0-j}\) is fulfilled for \(j=2-2n,\ldots ,2L_n+1\), with

$$\begin{aligned} j_0 = 2(L_n+1-n) = {\left\{ \begin{array}{ll} 0, &{} \text {if } L_n = n-1,\\ 2, &{} \text {if } L_n = n. \end{array}\right. } \end{aligned}$$

Thus, the scheme is odd-symmetric according to Definition 3.

Finally, for a subdivision operator written as (3), the difference mask \(\textbf{q}\) can be computed as follows:

$$\begin{aligned} \begin{aligned} q^0_{j}&:= q_{2j} = \sum _{l=-n+1}^{j} \left( a^{0}_{l} - a^{1}_{l}\right) , \qquad j = 1-n, \ldots , L_n,\\ q^1_{j}&:= q_{2j+1} = \sum _{l=j}^{L_n} \left( a^{0}_{l} - a^{1}_{l+1}\right) , \qquad j = 1-n, \ldots , L_n. \end{aligned} \end{aligned}$$
(4)

According to Lemma 2.3, \(S_\textbf{q}\) is an even-symmetric scheme. In particular,

$$\begin{aligned} q^0_{j} = q^1_{L_n+1-n - j}, \qquad j = 1-n, \ldots , L_n. \end{aligned}$$
(5)
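Alternatively to (4), since \(a(z)=(1+z)q(z)\) by Lemma 2.1, the difference mask can be obtained by an exact division of coefficient sequences. A minimal sketch (our own helper names; the index offset of the result follows that of the input):

```python
import numpy as np

def difference_mask(a):
    """Difference mask q with a(z) = (1+z) q(z) (Lemma 2.1(c)), via exact
    division of the coefficient sequence of a by that of (1+z)."""
    q, r = np.polydiv(np.asarray(a, dtype=float), [1.0, 1.0])
    assert np.allclose(r, 0.0), "a(z) does not contain the factor (1+z)"
    return q

print(difference_mask(np.array([1.0, 2.0, 1.0]) / 2))   # -> [0.5, 0.5]
```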

3 Weighted Local Polynomial Regression (WLPR)

The schemes analysed in the present work have been applied to image processing in a multiresolution context as prediction operators, for both point-value and cell-average discretizations (see, e.g., [4, 5]). They are based on weighted local polynomial regression (WLPR) and can be defined by inserting a weight function in the minimization problem (1), which emphasizes the points closer to the location where the new data is attached. In this section, we briefly introduce WLPR and describe some of its properties. For a more detailed description, see [21, 23].

Firstly, we fix the space of functions where the regression is performed: \(\Pi _d\), the space of polynomials of degree at most d. Other function spaces could be used as well (see [21]). We can parametrize the polynomials in \(\Pi _d\) as

$$\begin{aligned} p(x) = \beta _0 + \beta _1 x + \dots + \beta _d x^d = A(x)^T \varvec{\beta }, \end{aligned}$$

where the superscript T denotes matrix transposition, \(A(x)^T=(1,x,\ldots ,x^d)\) and \(\varvec{\beta }\in \mathbb {R}^{d+1}\). Vectors are considered column vectors in order to perform the matrix multiplications. With this notation, the regression problem (1) can be expressed as

$$\begin{aligned} \begin{aligned} {\hat{\varvec{\beta }}} =\mathop {\mathrm {arg\,min}}\limits _{\varvec{\beta }\in \mathbb {R}^{d+1}} \sum _{i=1}^m L_2(y_i, A(x_i)^T \varvec{\beta }),\quad {\hat{p}}(x) = A(x)^T {\hat{\varvec{\beta }}}, \quad L_2(s,t):= (s-t)^2. \end{aligned} \end{aligned}$$

The second ingredient is the weight function, \(\omega : \mathbb {R}\rightarrow [0,1]\), which assigns a value to the distance between \(x_i\) and 0, the location where \({\hat{p}}\) is evaluated in this work. We define \(\omega \) as

$$\begin{aligned} \omega (x)={\left\{ \begin{array}{ll} \phi (|x|), &{} |x|\le 1,\\ 0, &{} \text {in other case}, \end{array}\right. } \end{aligned}$$

and we impose that \(\phi :[0,1] \rightarrow [0,1]\) is a decreasing function such that \(\phi (0)=1\). With these assumptions, it is clear that \(\omega \) has compact support, \([-1,1]\), is even, increasing in \([-1,0]\) and decreasing in \([0,1]\), and attains its maximum at \(x=0\). The choice \(\omega (0)=1\) assigns the highest weight to the point where \({\hat{p}}\) is evaluated. In [23], some functions are proposed, which we compile in Table 1. Observe that many of them have the form \(\phi (x)=(1-x^p)^q\) with \(p,q>0\).

Table 1 Weight functions, see [23]

The third component is the bandwidth, \(\lambda \in (1,+\infty )\backslash \mathbb {N}\). We define

$$\begin{aligned} \textbf{w}^\lambda = \{w_l^\lambda \}_{l\in \mathbb {Z}}, \quad w_l^\lambda :=\omega \left( \frac{l}{\lambda }\right) =\phi \left( \frac{|l|}{\lambda }\right) , \,\,l\in \mathbb {Z}. \end{aligned}$$

The parameter \(\lambda \) determines how many data values are used in the regression and distributes the weights of the points used in the range \([-\lambda ,\lambda ]\). By the properties of the function \(\omega \), if \(\lambda _1\le \lambda _2\), then \(w^{\lambda _1}_{l}\le w^{\lambda _2}_{l}\) for any \(l\in \mathbb {Z}\).

Finally, we choose a vector norm; typically, \(\ell ^2\) is taken for its simplicity, but any \(\ell ^p\)-norm can be used depending on the characteristics of the problem. The loss function is defined accordingly: \(L_p(s,t) = |s-t|^p\). A discussion on the well-posedness of the regression problem and the uniqueness of the solution is presented in [6]. In summary, there exists a unique solution for \(p>1\), and for \(p=1\), there is either a unique solution or an infinite number of them.

With the above elements, these two problems were proposed in [4, 5] to design the two subdivision rules

$$\begin{aligned} \begin{aligned} {\hat{\varvec{\beta }}}^i=\mathop {\mathrm {arg\,min}}\limits _{\varvec{\beta }\in \mathbb {R}^{d+1}} \sum \{ w^\lambda _{2l-i}L_p\left( f^k_{j+l},A(2l-i)^T\varvec{\beta }\right) \,: \, l\in \mathbb {Z}, |2l-i| < \lambda \},\quad i=0,1. \end{aligned} \end{aligned}$$
(6)

Once the fitted polynomial is obtained, it is evaluated at 0 to define the new data:

$$\begin{aligned} \left( S_{d,\mathbf {w^\lambda }} f^k\right) _{2j+i}=A(0)^T {\hat{\varvec{\beta }}}^i = (1,0,\ldots ,0){\hat{\varvec{\beta }}}^i = {\hat{\beta }}^i_0,\quad i=0,1, \end{aligned}$$
(7)

so that only the first coordinate of \({\hat{\varvec{\beta }}}^i\) is needed.

Proposition 3.1

For any \(p>1\), the scheme is well-defined if \(d +1 \le 2\left\lfloor \frac{\lambda +1}{2} \right\rfloor \). Moreover, if \(d = 2n-1\) and \(\lambda \in (2n-1,2n)\), with \(n\in \mathbb {N}\), the resulting subdivision scheme is the interpolatory 2n-point Deslauriers-Dubuc subdivision scheme.

Proof

Let us discuss when this scheme is well defined. Two situations may occur, depending on whether or not d (the polynomial degree) is smaller than the amount of data \(f^{k}_{j+l}\) in the minimization problem (6).

For \(i=0\), if \(d < 2\left\lfloor \frac{\lambda }{2} \right\rfloor \), then (6) is a least squares problem and there is a unique solution [7]; otherwise, a polynomial that interpolates the data can be found. Even if the interpolating polynomial is not unique, its evaluation at 0 is exactly \(f^k_j\). Hence, the even rule is well defined for any \(\lambda \in (1,+\infty )\backslash \mathbb {N}\), coinciding with the even rule of the Deslauriers-Dubuc subdivision scheme for \(d \ge 2\left\lfloor \frac{\lambda }{2} \right\rfloor \), i.e. \(f^{k+1}_{2j}=f^k_j\).

For \(i=1\), a least squares problem is solved if \(d+1 < 2\left\lfloor \frac{\lambda +1}{2} \right\rfloor \), and an interpolation problem with a unique solution is solved when equality is reached, coinciding with the Deslauriers-Dubuc odd rule in the latter case. However, neither the polynomial nor its value at zero is unique when \(d +1 > 2\left\lfloor \frac{\lambda +1}{2} \right\rfloor \), so the scheme is not well defined in this case.

In conclusion, only if the polynomial degree is \(d = -1 + 2\left\lfloor \frac{\lambda +1}{2} \right\rfloor \) and \(d \ge 2\left\lfloor \frac{\lambda }{2} \right\rfloor \) is the resulting scheme the Deslauriers-Dubuc interpolatory subdivision scheme, independently of the choice of \(\omega \) and the loss function \(L_p\). For \(\lambda \in (1,+\infty )\backslash \mathbb {N}\), the condition \(-1 + 2\left\lfloor \frac{\lambda +1}{2} \right\rfloor \ge 2\left\lfloor \frac{\lambda }{2} \right\rfloor \) is equivalent to \(2n-1<\lambda <2n\) for some \(n\in \mathbb {N}\). In this range, \(d = -1 + 2\left\lfloor \frac{\lambda +1}{2} \right\rfloor = 2n-1\). \(\square \)

The scheme (7) coincides with the one proposed by Dyn et al. in [16] if \(p=2\) and \(\phi (x)=1\) are used (corresponding to rect in Table 1). Also, the non-linear subdivision scheme presented by Mustafa et al. in [25] can be obtained with the same choice of \(\phi (x)=1\) but with \(p=1\).

We will analyse the properties of our schemes, specifically for the polynomial degrees \(d=0,1,2,3\), the loss function \(L_2\) and several choices of \(\phi \).

We will study how the choice of \(\phi \) affects the approximation and noise reduction capabilities. We will show that it is not possible to define a function \(\phi \) that simultaneously gives the best approximation and the greatest denoising. In fact, one may decide how much importance to assign to each property and find an equilibrium. This decision may be based on the magnitude of the noise and the smoothness of the underlying function.

Observe that, when \(2n-1< \lambda < 2n\), for some \(n\in \mathbb {N}\), the support of the even rule (\(i=0\)) is shorter than that of the odd one (\(i=1\)), and just the opposite occurs when \(2n< \lambda < 2n+1\). To simplify, we will discuss in detail the first case, where the even and odd masks have lengths \(2n-1\) and 2n, respectively, since the second one is analogous and Remark 2 can be taken into account for the subsequent analysis. Nevertheless, we address both situations throughout the paper when it can be done without additional effort.

To give a more explicit definition of the schemes, we solve the quadratic problem posed in (6) with \(p=2\). In this case, it is a weighted least squares problem and its solution is well known. Let us start with the derivation of the odd mask, \(\textbf{a}^1\), for \(2n-1<\lambda <2n+1\). For the sake of simplicity, we omit the dependence on \(d,\omega ,\lambda \) in the following vectors and matrices. If we denote by \(\textbf{W}^1\) the diagonal matrix whose diagonal is the vector

$$\begin{aligned} \textbf{w}^1=\left( w^\lambda _{2n-1},\ldots ,w^\lambda _{1},w^\lambda _{1},\ldots ,w^\lambda _{2n-1}\right) , \end{aligned}$$
(8)

we call

$$\begin{aligned} \textbf{x}^1= \left( \begin{array}{c} -2n+1\\ \vdots \\ -1\\ 1\\ \vdots \\ 2n-1\\ \end{array}\right) , \quad \textbf{X}^1=\left( (\textbf{x}^1)^0, (\textbf{x}^1)^1, \ldots ,(\textbf{x}^1)^d \right) = \left( \begin{array}{c} A(-2n+1)^T\\ \vdots \\ A(-1)^T\\ A(1)^T\\ \vdots \\ A(2n-1)^T\\ \end{array}\right) , \end{aligned}$$
(9)

where the powers \((\textbf{x}^1)^t\), \(t=0,\ldots ,d\), are computed component-wise, so that \(\textbf{X}^1\) is a \(2n\times (d+1)\) matrix, and we denote \(\textbf{f}^{1,j,k}=(f^k_{j-n+1},\ldots ,f^k_{j},f^k_{j+1},\ldots ,f^k_{j+n})^T\), then the problem of (6) can be written as:

$$\begin{aligned} {\hat{\varvec{\beta }}}^1=\mathop {\mathrm {arg\,min}}\limits _{\varvec{\beta }\in \mathbb {R}^{d+1}}||(\textbf{W}^1)^\frac{1}{2}\textbf{f}^{1,j,k}-(\textbf{W}^1)^\frac{1}{2}\textbf{X}^1 \varvec{\beta }||_2^2, \end{aligned}$$

Since \((\textbf{X}^1)^T\textbf{W}^1\textbf{X}^1\) is a positive definite matrix (which follows from the facts that \(\textbf{X}^1\) has full rank and \(\textbf{W}^1\) is a diagonal matrix with positive entries), it is invertible and the solution is

$$\begin{aligned} \hat{\varvec{\beta }}^1=\left( \left( \textbf{X}^1\right) ^T\textbf{W}^1\textbf{X}^1\right) ^{-1}\left( \textbf{X}^1\right) ^T\textbf{W}^1\textbf{f}^{1,j,k}. \end{aligned}$$
(10)

For the sake of clarity, we write down the above terms:

$$\begin{aligned}{} & {} (\textbf{X}^1)^T\textbf{W}^1\textbf{X}^1 = \left( \displaystyle \sum _{i=-n+1}^{n} w^\lambda _{2i-1} (2i-1)^{j+l-2} \right) _{j,l=1}^{d+1},\nonumber \\{} & {} (\textbf{X}^1)^T\textbf{W}^1\textbf{f}^{1,j,k} = \left( \displaystyle \sum _{i=-n+1}^{n} w^\lambda _{2i-1} (2i-1)^{l-1} f^k_{j+i} \right) _{l=1}^{d+1}. \end{aligned}$$
(11)

Since we only need the first coordinate \({\hat{\beta }}^1_0\), we can use Cramer's rule instead of solving the full system. Thus, we have for \(\left( S_{d,\mathbf {w^\lambda }} \textbf{f}^k\right) _{2j+1}={\hat{\beta }}^1_0\) the following formula:

$$\begin{aligned} \left( S_{d,\mathbf {w^\lambda }} \textbf{f}^k\right) _{2j+1}= \frac{ \left| \begin{array}{cccccc} \displaystyle \sum _{i=-n+1}^{n} w^\lambda _{2i-1}f^k_{j+i} &{} \displaystyle \sum _{i=-n+1}^{n} w^\lambda _{2i-1} (2i-1) &{} \cdots &{} \displaystyle \sum _{i=-n+1}^{n} w^\lambda _{2i-1} (2i-1)^d \\ \displaystyle \sum _{i=-n+1}^{n} w^\lambda _{2i-1} (2i-1)f^k_{j+i} &{} \displaystyle \sum _{i=-n+1}^{n} w^\lambda _{2i-1} (2i-1)^2 &{} \cdots &{} \displaystyle \sum _{i=-n+1}^{n} w^\lambda _{2i-1} (2i-1)^{d+1} \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ \displaystyle \sum _{i=-n+1}^{n} w^\lambda _{2i-1} (2i-1)^d f^k_{j+i} &{} \displaystyle \sum _{i=-n+1}^{n} w^\lambda _{2i-1} (2i-1)^{d+1} &{} \cdots &{} \displaystyle \sum _{i=-n+1}^{n} w^\lambda _{2i-1} (2i-1)^{2d} \end{array}\right| }{ |(\textbf{X}^1)^T\textbf{W}^1\textbf{X}^1|}. \end{aligned}$$

Using the linearity of the determinant with respect to the first column, we conclude that \((S_{d,\mathbf {w^\lambda }} \textbf{f}^k)_{2j+1} = \sum _{l=-n+1}^{n} a^1_l f^k_{j+l}\), where the odd mask coefficients are

$$\begin{aligned} \begin{aligned}&a^1_l = \frac{w^\lambda _{2l-1}}{|(\textbf{X}^1)^T\textbf{W}^1\textbf{X}^1|} \left| \begin{array}{llllll} 1 &{} \displaystyle \sum _{i=-n+1}^{n} w^\lambda _{2i-1} (2i-1) &{} \cdots &{} \displaystyle \sum _{i=-n+1}^{n} w^\lambda _{2i-1} (2i-1)^d \\ 2l-1 &{} \displaystyle \sum _{i=-n+1}^{n} w^\lambda _{2i-1} (2i-1)^2 &{} \cdots &{} \displaystyle \sum _{i=-n+1}^{n} w^\lambda _{2i-1} (2i-1)^{d+1} \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ (2l-1)^d &{} \displaystyle \sum _{i=-n+1}^{n} w^\lambda _{2i-1} (2i-1)^{d+1} &{} \cdots &{} \displaystyle \sum _{i=-n+1}^{n} w^\lambda _{2i-1} (2i-1)^{2d} \end{array}\right| . \end{aligned} \end{aligned}$$

Observe that, since the vector \(\textbf{w}^1\) is symmetric, \(w^\lambda _{2i-1} = w^\lambda _{1-2i}\), we have \(\sum _{i=-n+1}^{n} w^\lambda _{2i-1} (2i-1)^t = 0\) for any odd value of t, and \(\sum _{i=-n+1}^{n} w^\lambda _{2i-1} (2i-1)^t = 2\sum _{i=1}^{n} w^\lambda _{2i-1} (2i-1)^t\) for the even values. Thus, the above expressions can be simplified by setting many entries to zero and by shortening the range of the remaining sums.

By (10), since \({\hat{\beta }}^1_0 = \textbf{e}_1^T {\hat{\varvec{\beta }}}^1\), the odd mask can also be expressed as \(\textbf{a}^1 = \textbf{W}^1\textbf{X}^1((\textbf{X}^1)^T\textbf{W}^1\textbf{X}^1)^{-1} \textbf{e}_1\), where \(\textbf{e}_1\) is the first element of the canonical basis of \(\mathbb {R}^{d+1}\).

Analogously, for \(2n-2<\lambda <2n\), we can prove that \(\textbf{a}^0 = \textbf{W}^0\textbf{X}^0((\textbf{X}^0)^T\textbf{W}^0\textbf{X}^0)^{-1} \textbf{e}_1\), so that

$$\begin{aligned} a^0_l = |(\textbf{X}^0)^T\textbf{W}^0\textbf{X}^0|^{-1} w^\lambda _{2l} \left| \begin{array}{llllll} 1 &{} \sum _{i=-n+1}^{n-1} w^\lambda _{2i} (2i) &{} \cdots &{} \sum _{i=-n+1}^{n-1} w^\lambda _{2i} (2i)^d \\ 2l &{} \sum _{i=-n+1}^{n-1} w^\lambda _{2i} (2i)^2 &{} \cdots &{} \sum _{i=-n+1}^{n-1} w^\lambda _{2i} (2i)^{d+1} \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ (2l)^d &{} \sum _{i=-n+1}^{n-1} w^\lambda _{2i} (2i)^{d+1} &{} \cdots &{} \sum _{i=-n+1}^{n-1} w^\lambda _{2i} (2i)^{2d} \end{array}\right| , \end{aligned}$$

where \(\textbf{W}^0\) is the diagonal matrix with diagonal

$$\begin{aligned} \textbf{w}^0=(w^\lambda _{2n-2},\ldots ,w^\lambda _2,1,w^\lambda _2,\ldots ,w^\lambda _{2n-2}), \end{aligned}$$
(12)

and

$$\begin{aligned} \textbf{x}^0=\left( -2(n-1), \ldots , 2,0,2, \ldots , 2(n-1) \right) ^T, \quad \textbf{X}^0=\left( (\textbf{x}^0)^0, (\textbf{x}^0)^1, \ldots ,(\textbf{x}^0)^d \right) . \end{aligned}$$

Note that the matrix \((\textbf{X}^0)^T\textbf{W}^0\textbf{X}^0\) is also positive definite. Collecting these developments, for \(2n-1<\lambda <2n\), we define our weighted local polynomial regression-based subdivision scheme as:

$$\begin{aligned} \left( S_{d,\mathbf {w^\lambda }} f^k\right) _{2j+i}= \sum _{l=1-n}^{n-1+i} a^i_l f^k_{j+l}, \quad i=0,1. \end{aligned}$$
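These masks are straightforward to compute numerically. The following sketch (a hypothetical helper of ours, mirroring (9)-(10) for the odd rule; the even rule is analogous) builds \(\textbf{a}^1 = \textbf{W}^1\textbf{X}^1((\textbf{X}^1)^T\textbf{W}^1\textbf{X}^1)^{-1}\textbf{e}_1\):

```python
import numpy as np

def wlpr_odd_mask(n, d, lam, phi):
    """Odd mask a^1 of S_{d,w^lam} for 2n-1 < lam < 2n+1, built as
    W^1 X^1 ((X^1)^T W^1 X^1)^{-1} e_1 from (9)-(10); phi is the weight
    profile on [0, 1]."""
    x = np.arange(1 - 2 * n, 2 * n, 2)          # odd knots -2n+1, ..., 2n-1
    w = phi(np.abs(x) / lam)                    # weights w^lam_{2l-1}
    X = np.vander(x, d + 1, increasing=True)    # rows A(2l-1)^T = (1, x, ..., x^d)
    G = X.T @ (w[:, None] * X)                  # (X^1)^T W^1 X^1
    e1 = np.zeros(d + 1)
    e1[0] = 1.0
    return w * (X @ np.linalg.solve(G, e1))     # = W^1 X^1 alpha^1

tria = lambda t: 1.0 - t                        # phi(x) = 1 - x
print(wlpr_odd_mask(n=2, d=1, lam=3.5, phi=tria))   # [1/12, 5/12, 5/12, 1/12]
```

For \(d=1\) and \(\lambda =3.5\), this recovers the odd entries of the mask \(\textbf{a}_{1,\mathbf {tria^{3.5}}}\) listed later in (19).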

A direct consequence, by construction, is that the scheme reproduces polynomials up to degree d.

Proposition 3.2

The scheme \(S_{d,\mathbf {w^\lambda }}\) reproduces \(\Pi _d\).

Remark 3

Observe that we have considered \(\{1,x,\ldots ,x^d\}\) as a basis of \(\Pi _d\), which has led to the linear system with matrix (11). It is possible to consider an orthonormal basis of \(\Pi _d\) so that the matrix is diagonal, leading to a cleaner mathematical description. However, we prefer the basis \(\{1,x,\ldots ,x^d\}\) because the resulting expression of the subdivision operator is more explicit. A possible benefit of considering an orthonormal basis is that the next results might be more intuitive.

Now we prove that \((\textbf{W}^i)^{-1}\textbf{a}^i\), \(i=0,1\), are exactly the evaluations of some polynomial at the grid points \(\textbf{x}^i\).

Lemma 3.3

For \(i=0,1\), the even and odd masks are

$$\begin{aligned} \textbf{a}^i = \textbf{W}^i \textbf{X}^i \varvec{\alpha }^i = \left\{ w^\lambda _{2j-i}\sum _{t=0}^d \alpha ^i_t x^t_j\right\} _{j=1-n}^{L_n+i}. \end{aligned}$$
(13)

That is, the vector \( (\textbf{W}^i)^{-1} \textbf{a}^i \) coincides with the evaluation of the polynomial \(A(x)^T \varvec{\alpha }^i\) at the points \(\textbf{x}^i\), where \(\varvec{\alpha }^i=((\textbf{X}^i)^T\textbf{W}^i\textbf{X}^i)^{-1}\textbf{e}_1\), whose expression depends on \(n,\lambda ,\omega ,d\).

Proof

By the previous computations,

$$\begin{aligned} \textbf{a}^i = \textbf{W}^i \textbf{X}^i ((\textbf{X}^i)^T\textbf{W}^i\textbf{X}^i)^{-1}\textbf{e}_1 = \textbf{W}^i \textbf{X}^i \varvec{\alpha }^i. \end{aligned}$$

For \(\textbf{a}^1\) (the case of \(\textbf{a}^0\) is analogous), using (9) we obtain

$$\begin{aligned} (\textbf{W}^1)^{-1} \textbf{a}^1 = \textbf{X}^1 \varvec{\alpha }^1 = \left( \begin{array}{c} A(-2n+1)^T\\ \vdots \\ A(-1)^T\\ A(1)^T\\ \vdots \\ A(2n-1)^T\\ \end{array}\right) \varvec{\alpha }^1 = \left( \begin{array}{c} A(-2n+1)^T \varvec{\alpha }^1\\ \vdots \\ A(-1)^T \varvec{\alpha }^1\\ A(1)^T \varvec{\alpha }^1\\ \vdots \\ A(2n-1)^T \varvec{\alpha }^1\\ \end{array}\right) . \end{aligned}$$

That is, the coordinates of \((\textbf{W}^1)^{-1} \textbf{a}^1\) are the evaluations of the polynomial \(A(x)^T \varvec{\alpha }^1 \) at the \(\textbf{x}^1\) grid points. \(\square \)
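Lemma 3.3 can be confirmed numerically by reusing the hypothetical wlpr_odd_mask sketch above: dividing the mask by the weights and fitting a degree-d polynomial should leave no residual.

```python
# Reuses numpy (np) and wlpr_odd_mask from the previous sketch.
n, d, lam = 3, 2, 5.5
tria = lambda t: 1.0 - t
x = np.arange(1 - 2 * n, 2 * n, 2)                           # odd knots
v = wlpr_odd_mask(n, d, lam, tria) / tria(np.abs(x) / lam)   # (W^1)^{-1} a^1
coeffs = np.polyfit(x, v, d)
print(np.allclose(np.polyval(coeffs, x), v))                 # True: v is in Pi_d
```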

Moreover, these even and odd masks are the only ones that lead to polynomial reproduction and for which \((\textbf{W}^i)^{-1}\textbf{a}^i\) are polynomial evaluations. This property can be used in practice to determine the masks more easily.

Theorem 3.4

The scheme \(S_{d,\mathbf {w^\lambda }}\) is the unique scheme that reproduces \(\Pi _d\) polynomials and whose even and odd masks have the form \(\textbf{a}^i = \textbf{W}^i \textbf{X}^i \varvec{\alpha }^i\), for some \(\varvec{\alpha }^i\in \mathbb {R}^{d+1}\), \(i=0,1\).

Proof

It is a consequence of Lemma 3.3 together with Proposition 3.2. Suppose that some rule \({\hat{\textbf{a}}} = \textbf{W}^i \textbf{X}^i \hat{\varvec{\alpha }}\), for some \({\hat{\varvec{\alpha }}}\in \mathbb {R}^{d+1}\), fulfils the reproduction conditions for \(\Pi _d\). Then

$$\begin{aligned} \sum _j {\hat{a}}_j (x^i_j)^t = \delta _{0,t}, \qquad t=0,1,\ldots ,d, \quad i=0,1, \end{aligned}$$

or, written with matrix multiplications, \( (\textbf{X}^i)^T {\hat{\textbf{a}}} = \textbf{e}_1. \) Then,

$$\begin{aligned} (\textbf{X}^i)^T {\hat{\textbf{a}}} = (\textbf{X}^i)^T \textbf{W}^i \textbf{X}^i {\hat{\varvec{\alpha }}} = \textbf{e}_1 \Rightarrow {\hat{\varvec{\alpha }}} = ((\textbf{X}^i)^T \textbf{W}^i \textbf{X}^i)^{-1}\textbf{e}_1 = \varvec{\alpha }^i. \end{aligned}$$

\(\square \)

The symmetry of the scheme is another consequence of being based on a polynomial regression problem.

Lemma 3.5

The scheme \(S_{d,\mathbf {w^\lambda }}\) is odd-symmetric.

Proof

We prove that \(a^1_{j} = a^1_{1-j}\) for \(2n-1<\lambda <2n+1\) (it can be analogously proven that \(a^0_{j} = a^0_{-j}\) for \(2n-2<\lambda <2n\)). Let us consider \(\textbf{f}^j = \{\delta _{j,l} \,:\, l\in \{-n+1,\ldots ,n\} \}\). The coordinates of the odd mask \(\textbf{a}^1\) can be obtained by applying the rule to \(\textbf{f}^j\), for \(j=-n+1,\ldots ,n\), and taking the first coordinate,

$$\begin{aligned} a^1_j = \sum _{l=-n+1}^n a^1_{l}\delta _{j,l} = \sum _{l=-n+1}^n a^1_{l}f^j_{l} = \left( S_{d,\mathbf {w^\lambda }}\textbf{f}^j\right) _1 = \hat{p}^j(0), \end{aligned}$$

where, by (6) and (7),

$$\begin{aligned} \hat{p}^j=\mathop {\mathrm {arg\,min}}\limits _{p \in \Pi _{d}(\mathbb {R})} \sum _{l=-n+1}^n w^\lambda _{2l-1}\left( \delta _{j,l}-p(2l-1)\right) ^2. \end{aligned}$$
(14)

Then, \(a^1_{j} = a^1_{1-j}\) provided that \((S_{d,\mathbf {w^\lambda }}\textbf{f}^j)_1 = \left( S_{d,\mathbf {w^\lambda }}\textbf{f}^{1-j}\right) _1 \), or in other words, \({\hat{p}}^j(0) = {\hat{p}}^{1-j}(0)\). Observe that,

$$\begin{aligned} \begin{aligned} \hat{p}^{1-j}=\mathop {\mathrm {arg\,min}}\limits _{q \in \Pi _{d}(\mathbb {R})} \sum _{l=-n+1}^n w^\lambda _{2l-1}\left( \delta _{1-j,l}-q(2l-1)\right) ^2, \end{aligned} \end{aligned}$$

and, performing the change of summation index \(l \mapsto 1-l\) and using \(w^\lambda _{2l-1} = w^\lambda _{1-2l}\) and \(\delta _{j,l} = \delta _{1-j,1-l}\),

$$\begin{aligned} \hat{p}^{1-j}&=\mathop {\mathrm {arg\,min}}\limits _{q \in \Pi _{d}(\mathbb {R})} \sum _{l=-n+1}^n w^\lambda _{1-2l}(\delta _{1-j,1-l}-q(1-2l))^2\nonumber \\ {}&=\mathop {\mathrm {arg\,min}}\limits _{q \in \Pi _{d}(\mathbb {R})} \sum _{l=-n+1}^n w^\lambda _{2l-1}(\delta _{j,l}-q(1-2l))^2. \end{aligned}$$
(15)

Observe the similarity between (14) and (15). Since the minimizer is unique, the minimum in (15) is attained by \(q(t)={\hat{p}}^{j}(-t)\). Thus, \({\hat{p}}^{1-j}(t) = {\hat{p}}^{j}(-t)\) and, then, \(a^1_j = {\hat{p}}^j(0) = {\hat{p}}^{1-j}(0) = a^1_{1-j}\). \(\square \)

By Lemma 3.3, we know that \( (\textbf{W}^i)^{-1} \textbf{a}^i \) are the evaluations of a polynomial at \(\textbf{x}^i\). To take advantage of the symmetry, let us write, as in (13):

$$\begin{aligned} (\textbf{W}^0)^{-1} \textbf{a}^0 = \left\{ \sum _{t=0}^d \alpha ^0_t (2l)^t \right\} _{l=-n+1}^{n-1}, \quad (\textbf{W}^1)^{-1} \textbf{a}^1 = \left\{ \sum _{t=0}^d \alpha ^1_t (2l-1)^t \right\} _{l=-n+1}^n. \end{aligned}$$
(16)

Since \(a^0_{i} = a^0_{-i}\), \(\forall i=-n+1,\ldots ,n-1\), and \(a^1_{i} = a^1_{1-i}\), \(\forall i=-n+1,\ldots ,n\), and \(\omega \) is even, it can be deduced that the polynomials only have even powers. That is

$$\begin{aligned} \alpha ^i_{2t-1} = 0, \quad \forall 1 \le t \le (d+1)/2. \end{aligned}$$

A direct consequence is that the subdivision scheme obtained for any weight function and an even degree d coincides with the one for \(d+1\), as proven in the following result.

Proposition 3.6

Let \(\omega \) be a weight function, \(d\in 2\mathbb {Z}_{+}\) and \(\lambda \in (1,+\infty )\backslash \mathbb {N}\) be such that \(d \le -2 + 2\left\lfloor \frac{\lambda +1}{2} \right\rfloor \), then

$$\begin{aligned} S_{d,\mathbf {w^\lambda }} = S_{d+1,\mathbf {w^\lambda }}. \end{aligned}$$

Proof

The even and odd masks of \(S_{d+1,\mathbf {w^\lambda }}\) can be written in terms of the evaluation of a \((d+1)\)-degree polynomial, according to Lemma 3.3. Since the odd-indexed coefficients are zero and \(d+1\) is odd, the leading coefficient is zero for both rules \(i=0,1\). Then, both \(S_{d,\mathbf {w^\lambda }}\) and \(S_{d+1,\mathbf {w^\lambda }}\) fulfil the conditions of Theorem 3.4 for the same polynomial degree d; hence they must coincide. \(\square \)
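This coincidence can be observed numerically by reusing the hypothetical wlpr_odd_mask sketch above: for the triangular weight and \(\lambda =3.5\), the degrees \(d=2\) and \(d=3\) yield the same odd mask, which by Proposition 3.1 is the Deslauriers-Dubuc 4-point rule.

```python
# Reuses numpy (np) and wlpr_odd_mask from the sketch in Sect. 3.
m2 = wlpr_odd_mask(n=2, d=2, lam=3.5, phi=lambda t: 1.0 - t)
m3 = wlpr_odd_mask(n=2, d=3, lam=3.5, phi=lambda t: 1.0 - t)
print(np.allclose(m2, m3), m2)   # True [-0.0625  0.5625  0.5625 -0.0625]
```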

Therefore, we can just study the properties of the subdivision schemes based on the space of polynomials \(\Pi _d(\mathbb {R})\) with d an even number.

4 WLPR-Subdivision Schemes for \(d=0,1\)

In this section we present the WLPR-subdivision schemes for \(d=0,1\) and their properties; by Proposition 3.6, we may consider just \(d=0\). To simplify the notation, in this section we omit d, \(\textbf{w}\) and \(\lambda \) in some variables, such as \(S:= S_{0,\textbf{w}^\lambda }= S_{1,\textbf{w}^\lambda }\). In this case, the coefficients of the subdivision schemes are easily obtained from \(\omega \) thanks to Lemma 3.3: if we denote by \(||\textbf{w}^i||_1\) the sum of the components of the vector \(\textbf{w}^i\), \(i=0,1\), defined in (12) and (8),

$$\begin{aligned} ||\textbf{w}^0||_1=1+2\sum _{l=1}^{n-1}w_{2l}^\lambda ,\quad ||\textbf{w}^1||_1=2\sum _{l=0}^{n-1}w_{2l+1}^\lambda , \end{aligned}$$
$$\begin{aligned} \textbf{a}^i = \textbf{W}^i \textbf{X}^i \varvec{\alpha }^i = \textbf{w}^i \varvec{\alpha }^i, \quad \varvec{\alpha }^i = \left( (\textbf{X}^i)^T \textbf{W}^i \textbf{X}^i\right) ^{-1}\textbf{e}_1 = ||\textbf{w}^i||_1^{-1}, \end{aligned}$$

thus \(\textbf{a}^i = \textbf{w}^i/||\textbf{w}^i||_1\). Another way to obtain \(\varvec{\alpha }^i\) is based on Theorem 3.4: Since \(\textbf{a}^i = \textbf{w}^i \varvec{\alpha }^i\) and the scheme must reproduce \(\Pi _0\) (constant functions), then \(1 = \sum _j a^i_j = \varvec{\alpha }^i \Vert \textbf{w}^i\Vert _1\) by Lemma 2.1, thus \(\varvec{\alpha }^i= ||\textbf{w}^i||_1^{-1}\).

The explicit form of the resulting WLPR-subdivision scheme is, if \(2n-1<\lambda <2n\),

$$\begin{aligned} (S f^k)_{2j}=\sum _{l=1-n}^{n-1} \left( \frac{w^\lambda _{2l}}{||\textbf{w}^0||_1}\right) f^k_{j+l},\quad (S f^k)_{2j+1}= \sum _{l=1-n}^{n}\left( \frac{w_{2l-1}^\lambda }{||\textbf{w}^1||_1}\right) f^k_{j+l}, \end{aligned}$$
(17)

and, for \(2n<\lambda <2n+1\), it can be written in the following way, in agreement with Remark 2,

$$\begin{aligned} (S f^k)_{2j}= \sum _{l=1-n}^{n}\left( \frac{w_{2l-1}^\lambda }{||\textbf{w}^1||_1}\right) f^k_{j+l}, \quad (S f^k)_{2j+1}=\sum _{l=1-n}^{n+1} \left( \frac{w^\lambda _{2l-2}}{||\textbf{w}^0||_1}\right) f^k_{j+l}. \end{aligned}$$
(18)

Note that if \(\lambda \in (1,2)\), then \(\textbf{w}^0=1\) and \(\textbf{w}^1/||\textbf{w}^1||_1=(\frac{1}{2},\frac{1}{2})\), so that, for any function \(\omega \), the mask of the subdivision scheme is \(\textbf{a}=[1,2,1]/2\); in other words, it is the interpolatory Deslauriers-Dubuc scheme [12] (as stated in Proposition 3.1).

Similar to other subdivision schemes based on approximation techniques, like the Deslauriers-Dubuc scheme, which iteratively applies local polynomial interpolation, a WLPR scheme can be viewed as the reiterated application of a generalized Shepard method [26] on an equispaced grid. In each iteration, the bandwidth of the Shepard method is halved, and the evaluation points are the midpoints of the previous ones.

The properties of the scheme may differ from those of its underlying approximation technique. For instance, if \(\omega (x)=1\) for \(|x|\le 1\), then the schemes presented by Dyn et al. in [16] are recovered, as mentioned above. For \(2n-1<\lambda <2n\), these schemes are:

$$\begin{aligned} \left( S_{\texttt {rect}} f^k\right) _{2j}=\sum _{l=-n+1}^{n-1}\frac{f^k_{j+l}}{2n-1},\quad \left( S_{\texttt {rect}} f^k\right) _{2j+1}= \sum _{l=1-n}^{n}\frac{f^k_{j+l}}{2n}. \end{aligned}$$

This scheme was proved to be \(\mathcal {C}^1\). However, the corresponding Shepard method for this weight function is not continuous.

We list some masks for the weight function \(\omega (x)=1-|x|\), \(|x|\le 1\), and for several values of \(\lambda \):

$$\begin{aligned} \begin{aligned} \textbf{a}_{1,\mathbf {tria^{1.5}}}&=[1,2,1]/2,\\ \textbf{a}_{1,\mathbf {tria^{2.5}}}&=[1/7, 1/2, 5/7, 1/2, 1/7],\\ \textbf{a}_{1,\mathbf {tria^{3.5}}}&=[1/12, 3/13, 5/12, 7/13, 5/12, 3/13, 1/12],\\ \textbf{a}_{1,\mathbf {tria^{4.5}}}&=[1/21, 3/20, 5/21, 7/20, 3/7, 7/20, 5/21, 3/20, 1/21],\\ \textbf{a}_{1,\mathbf {tria^{5.5}}}&=[ 1/30, 3/31, 1/6, 7/31, 3/10, 11/31, 3/10, 7/31, 1/6, 3/31, 1/30]. \end{aligned} \end{aligned}$$
(19)
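For \(d=0,1\), the masks reduce to normalized weights, so the list (19) can be reproduced with a few lines of code (a sketch with our own naming, assuming \(\phi \) as in Table 1):

```python
import numpy as np

def wlpr01_masks(lam, phi):
    """Even and odd masks of (17)/(18): a^i = w^i / ||w^i||_1, where the
    integer knots j with |j| < lam receive the weight phi(|j| / lam)."""
    j = np.arange(-int(lam), int(lam) + 1)
    w = phi(np.abs(j) / lam)
    even, odd = (j % 2 == 0), (j % 2 != 0)
    return w[even] / w[even].sum(), w[odd] / w[odd].sum()

tria = lambda t: 1.0 - t
print(wlpr01_masks(2.5, tria))   # ([1/7, 5/7, 1/7], [1/2, 1/2]) = a_{1,tria^{2.5}}
```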

As we can see, all the subdivision schemes in this section have a positive mask, since \(\omega \) is a positive function. Hence, the following result on convergence, proved in [20] (see also [24, 28]), can be applied.

Proposition 4.1

([20]) Let \(\textbf{a}=\{a_l\}_{l\in \mathbb {Z}}\) be a mask with support \([q,q+k]\), being q and k fixed integers, \(k\ge 3\). Suppose that \(a_q,a_{q+1},\ldots ,a_{q+k-1},a_{q+k}>0\) and \(\sum _{l\in \mathbb {Z}}a_{2l}=\sum _{l\in \mathbb {Z}}a_{2l+1}=1,\) then the subdivision scheme converges.

As a direct consequence, the schemes in this section, (17) and (18), are convergent because the masks are positive. Observe that the condition \(k\ge 3\) in Proposition 4.1 requires considering \(\lambda >2\).

Corollary 4.2

The subdivision scheme \(S_{1,\textbf{w}^\lambda }\), defined in (17) or (18), is convergent for any \(\lambda \in (1,+\infty )\backslash \mathbb {N}\) and any weight function \(\omega \) with \(\omega (x)>0\) for \(|x|<1\).

In Fig. 2 we show some examples of the limit functions for some weight functions, \(\lambda \in \{3.2,3.4,3.6,3.8\}\) and \(\textbf{f}^0 = \{\delta _{0,l}\}_{l\in \mathbb {Z}}\). The support of all these limit functions is \([-3,3]\) because the mask support length does not vary.

Fig. 2

Limit functions of the subdivision schemes \(S_{1,\textbf{w}^\lambda }\) for some weight functions (see Table 1) and \(\lambda =3.2\) (blue), \(\lambda =3.4\) (orange), \(\lambda =3.6\) (yellow) and \(\lambda =3.8\) (purple)

To analyse the smoothness of the limit functions generated by S, we use Theorem 2.4. In particular, we will prove that the mask of the difference scheme \(S_{\textbf{q}}\) is positive and apply Proposition 4.1 again. Thanks to the odd-symmetry of the scheme, the study can be reduced to half of its coefficients.

Lemma 4.3

Let n be a natural number, \(n\ge 2\), \(\lambda \in (2n-1,2n)\) and \(\omega \) a weight function. The coefficients of the difference scheme \(S_\textbf{q}\) are positive if

$$\begin{aligned} \frac{\sum _{l=j_0}^{n-1}w^\lambda _{2l+1}}{\sum _{l=j_0}^{n-1} w^\lambda _{2l}}< \frac{||\textbf{w}^1||_1}{||\textbf{w}^0||_1}<\frac{\sum _{l=j_1}^{n-1}w^\lambda _{2l+1}}{\sum _{l=j_1+1}^{n-1} w^\lambda _{2l}}, \qquad j_0=1,\ldots ,n-1, \quad j_1=1,\ldots ,n-2. \end{aligned}$$

Proof

By Lemma 2.3, \(S_\textbf{q}\) is even-symmetric. Since \(2n-1<\lambda <2n\), then \(L_n = n-1\) in (5) and we have

$$\begin{aligned} q^0_j=q^1_{-j}, \qquad j = 1-n, \ldots , n-1. \end{aligned}$$

Then, if \(q^0_{j}>0\), for \(j=1-n,\ldots ,0\), and \(q^1_{j}>0\), for \(j=1-n,\ldots ,-1\), the result is proved.

First, we check that the coefficients \(q^0_0\) and \(q^1_{1-n}\) are always positive.

$$\begin{aligned} q^0_0 =\sum _{l=-n+1}^{0} \frac{w^\lambda _{2l}}{||\textbf{w}^0||_1} - \frac{w^\lambda _{2l-1}}{||\textbf{w}^1||_1}=\sum _{l=-n+1}^{0} \frac{w^\lambda _{2l}}{||\textbf{w}^0||_1} - \frac{1}{2}=\frac{1 + \sum _{l=-n+1}^{-1} w^\lambda _{2l}}{1+2\sum _{l=-n+1}^{-1}w^\lambda _{2l}} - \frac{1}{2}>0, \end{aligned}$$

since \(w^\lambda _0=1\) and \(||\textbf{w}^0||_1=1+2\sum _{l=-n+1}^{-1}w^\lambda _{2l}.\) Analogously, by (4), we have that

$$\begin{aligned} q^1_{1-n}=\sum _{l=1-n}^{n-1} \frac{w^\lambda _{2l}}{||\textbf{w}^0||_1} - \frac{w^\lambda _{2l+1}}{||\textbf{w}^1||_1}=1-\sum _{l=1-n}^{n-1} \frac{w^\lambda _{2l+1}}{||\textbf{w}^1||_1}=\frac{w^\lambda _{1-2n}}{||\textbf{w}^1||_1}>0. \end{aligned}$$

Now we check \(q^0_{j}>0\), for \(j=1-n,\ldots ,-1\). From (4), we have that

$$\begin{aligned} q^0_{j} = \sum _{l=-n+1}^{j}\left( a^{0}_l-a^{1}_l\right) =\sum _{l=-n+1}^{j} \frac{w^\lambda _{2l}}{||\textbf{w}^0||_1} - \frac{w^\lambda _{2l-1}}{||\textbf{w}^1||_1}, \qquad j = 1-n, \ldots , -1, \end{aligned}$$

then

$$\begin{aligned} \begin{aligned}&0<q^0_{j}=\sum _{l=-n+1}^{j} \frac{w^\lambda _{2l}}{||\textbf{w}^0||_1} - \frac{w^\lambda _{2l-1}}{||\textbf{w}^1||_1} \Leftrightarrow \sum _{l=-n+1}^{j}\frac{w^\lambda _{2l-1}}{||\textbf{w}^1||_1}< \sum _{l=-n+1}^{j} \frac{w^\lambda _{2l}}{||\textbf{w}^0||_1} \Leftrightarrow \\&\Leftrightarrow \frac{\sum _{l=-n+1}^{j}w^\lambda _{2l-1}}{\sum _{l=-n+1}^{j} w^\lambda _{2l}} < \frac{||\textbf{w}^1||_1}{||\textbf{w}^0||_1}. \end{aligned} \end{aligned}$$
(20)

As \(w^\lambda _l=w^\lambda _{-l}\) for all \(l\in \mathbb {Z}\), we have that, if \(j=1-n,\ldots , -1\),

$$\begin{aligned} \begin{aligned}&\sum _{l=-n+1}^{j}w^\lambda _{2l-1}=\sum _{l=-n+1}^{j}w^\lambda _{1-2l}=\sum _{l=-j}^{n-1}w^\lambda _{2l+1},\\&\sum _{l=-n+1}^{j}w^\lambda _{2l}=\sum _{l=-n+1}^{j}w^\lambda _{-2l}=\sum _{l=-j}^{n-1}w^\lambda _{2l}. \end{aligned} \end{aligned}$$
(21)

Therefore, by (20) we obtain:

$$\begin{aligned} 0<q^0_{j} \Leftrightarrow \frac{\sum _{l=j}^{n-1}w^\lambda _{2l+1}}{\sum _{l=j}^{n-1} w^\lambda _{2l}} < \frac{||\textbf{w}^1||_1}{||\textbf{w}^0||_1}, \quad j=1,\ldots ,n-1. \end{aligned}$$
(22)

Now we check \(q^1_{j}>0\), for \(j=2-n,\ldots ,-1\). By (4):

$$\begin{aligned} \begin{aligned} q^1_{j}&= \sum _{l=j}^{n-1} \left( a^{0}_{l} - a^{1}_{l+1}\right) = \sum _{l=j}^{n-1} \frac{w^\lambda _{2l}}{||\textbf{w}^0||_1} - \sum _{l=j}^{n-1}\frac{w^\lambda _{2l+1}}{||\textbf{w}^1||_1} \\&=1-\sum _{l=1-n}^{j-1} \frac{w^\lambda _{2l}}{||\textbf{w}^0||_1}-\left( 1-\sum _{l=-n}^{j-1}\frac{w^\lambda _{2l+1}}{||\textbf{w}^1||_1}\right) \\&=\sum _{l=-n}^{j-1}\frac{w^\lambda _{2l+1}}{||\textbf{w}^1||_1} -\sum _{l=1-n}^{j-1} \frac{w^\lambda _{2l}}{||\textbf{w}^0||_1}. \end{aligned} \end{aligned}$$

Following the same reasoning:

$$\begin{aligned} \begin{aligned}&0<q^1_{j}=\sum _{l=-n}^{j-1}\frac{w^\lambda _{2l+1}}{||\textbf{w}^1||_1} -\sum _{l=1-n}^{j-1} \frac{w^\lambda _{2l}}{||\textbf{w}^0||_1} \Leftrightarrow \sum _{l=1-n}^{j-1} \frac{w^\lambda _{2l}}{||\textbf{w}^0||_1}< \sum _{l=-n}^{j-1}\frac{w^\lambda _{2l+1}}{||\textbf{w}^1||_1}\Leftrightarrow \\&\Leftrightarrow \frac{||\textbf{w}^1||_1}{||\textbf{w}^0||_1}< \frac{\sum _{l=-n}^{j-1}w^\lambda _{2l+1}}{ \sum _{l=1-n}^{j-1} w^\lambda _{2l}}. \end{aligned} \end{aligned}$$

Again, by (21), we have

$$\begin{aligned} 0<q^1_{j}\Leftrightarrow \frac{||\textbf{w}^1||_1}{||\textbf{w}^0||_1}< \frac{\sum _{l=1-n}^{j}w^\lambda _{2l-1}}{ \sum _{l=1-n}^{j-1} w^\lambda _{2l}}=\frac{\sum _{l=-j}^{n-1}w^\lambda _{2l+1}}{\sum _{l=1-j}^{n-1} w^\lambda _{2l}}, \quad j=2-n, \ldots , -1. \end{aligned}$$
(23)

Collecting conditions (22) and (23), we get the result:

$$\begin{aligned} \frac{\sum _{l=j_0}^{n-1}w^\lambda _{2l+1}}{\sum _{l=j_0}^{n-1} w^\lambda _{2l}}< \frac{||\textbf{w}^1||_1}{||\textbf{w}^0||_1}<\frac{\sum _{l=j_1}^{n-1}w^\lambda _{2l+1}}{\sum _{l=j_1+1}^{n-1} w^\lambda _{2l}}, \end{aligned}$$

with \(j_0=1,\ldots ,n-1\) and \(j_1=1,\ldots ,n-2\). \(\square \)

Lemma 4.4

Let \(n\in \mathbb {N}\), \(n\ge 2\), \(\lambda \in (2n-1,2n)\), and let \(\omega \) be a weight function. Let us consider

$$\begin{aligned} p^{\omega ^\lambda }_0:[0,n-1]\rightarrow \mathbb {R}, \quad p^{\omega ^\lambda }_0(l):=\frac{\phi \left( \frac{2l+1}{\lambda }\right) }{\phi \left( \frac{2l}{\lambda }\right) }, \end{aligned}$$

so that \(p^{\omega ^\lambda }_0(l)=\frac{w^\lambda _{2l+1}}{w^\lambda _{2l}}\) for \(l=0,1,\ldots ,n-1\). If \(p^{\omega ^\lambda }_0\) is a decreasing function, then the coefficients of the difference scheme are positive.

Proof

Note that:

$$\begin{aligned} \frac{||\textbf{w}^1||_1}{||\textbf{w}^0||_1}=\frac{2\sum _{l=0}^{n-1}w^\lambda _{2l+1}}{1+2\sum _{l=0}^{n-1} w^\lambda _{2l}}=\frac{\sum _{l=0}^{n-1}w^\lambda _{2l+1}}{\frac{1}{2}+\sum _{l=0}^{n-1} w^\lambda _{2l}}. \end{aligned}$$

Consider this basic property: For any \(a,b,c,d>0\),

$$\begin{aligned} \frac{a}{b}\le \frac{c}{d} \Rightarrow \frac{a}{b}\le \frac{a+c}{b+d}\le \frac{c}{d}. \end{aligned}$$
(24)

Firstly, since \(p^{\omega ^\lambda }_0\) is decreasing, we get by (24):

$$\begin{aligned} \frac{w^\lambda _{2n-1}}{w^\lambda _{2n-2}}=p^{\omega ^\lambda }_0(n-1)\le p^{\omega ^\lambda }_0(n-2)=\frac{w^\lambda _{2n-3}}{w^\lambda _{2n-4}}\Rightarrow \frac{w^\lambda _{2n-1}}{w^\lambda _{2n-2}}\le \frac{w^\lambda _{2n-1}+w^\lambda _{2n-3}}{w^\lambda _{2n-2}+w^\lambda _{2n-4}}\le \frac{w^\lambda _{2n-3}}{w^\lambda _{2n-4}}. \end{aligned}$$

And, again using the monotonicity of the function \(p^{\omega ^\lambda }_0\) and (24):

$$\begin{aligned} \begin{aligned}&\frac{w^\lambda _{2n-1}+w^\lambda _{2n-3}}{w^\lambda _{2n-2}+w^\lambda _{2n-4}}\le \frac{w^\lambda _{2n-3}}{w^\lambda _{2n-4}}\le \frac{w^\lambda _{2n-5}}{w^\lambda _{2n-6}}\Rightarrow \\&\Rightarrow \frac{w^\lambda _{2n-1}+w^\lambda _{2n-3}}{w^\lambda _{2n-2}+w^\lambda _{2n-4}}\le \frac{w^\lambda _{2n-1}+w^\lambda _{2n-3}+w^\lambda _{2n-5}}{w^\lambda _{2n-2}+w^\lambda _{2n-4}+w^\lambda _{2n-6}} \le \frac{w^\lambda _{2n-5}}{w^\lambda _{2n-6}} \end{aligned} \end{aligned}$$

Repeating this process, we get by (24):

$$\begin{aligned} \begin{aligned}&\frac{w^\lambda _{2n-1}}{w^\lambda _{2n-2}}\le \frac{w^\lambda _{2n-1}+w^\lambda _{2n-3}}{w^\lambda _{2n-2}+w^\lambda _{2n-4}}\le \ldots \le \frac{\sum _{l=1}^{n-1}w^\lambda _{2l+1}}{\sum _{l=1}^{n-1}w^\lambda _{2l}}\le \frac{w^\lambda _{1}}{w^\lambda _{0}}\Rightarrow \\&\frac{w^\lambda _{2n-1}}{w^\lambda _{2n-2}}\le \frac{w^\lambda _{2n-1}+w^\lambda _{2n-3}}{w^\lambda _{2n-2}+w^\lambda _{2n-4}}\le \ldots \le \frac{\sum _{l=0}^{n-1}w^\lambda _{2l+1}}{w^\lambda _0+\sum _{l=1}^{n-1}w^\lambda _{2l}} < \frac{\sum _{l=0}^{n-1}w^\lambda _{2l+1}}{\frac{1}{2}+\sum _{l=1}^{n-1}w^\lambda _{2l}}=\frac{||\textbf{w}^1||_1}{||\textbf{w}^0||_1}. \end{aligned} \end{aligned}$$

Secondly, we define \(p^{\omega ^\lambda }_1:[0,n-2]\rightarrow \mathbb {R}\) as

$$\begin{aligned} p_1^{\omega ^\lambda }(l)= 1/p_0^{\omega ^\lambda }(l+1/2), \quad \forall l\in [0,n-2], \end{aligned}$$

which is an increasing function since \(p^{\omega ^\lambda }_0\) is decreasing. We have that

$$\begin{aligned} \begin{aligned}&\frac{w^\lambda _{2n-5}}{w^\lambda _{2n-4}}=p^{\omega ^\lambda }_1(n-3)\le p^{\omega ^\lambda }_1(n-2)=\frac{w^\lambda _{2n-3}}{w^\lambda _{2n-2}}<\frac{w^\lambda _{2n-3}+w^\lambda _{2n-1}}{w^\lambda _{2n-2}}\Rightarrow \\&\Rightarrow \frac{w^\lambda _{2n-5}}{w^\lambda _{2n-4}}<\frac{w^\lambda _{2n-1}+w^\lambda _{2n-3}+w^\lambda _{2n-5}}{w^\lambda _{2n-2}+w^\lambda _{2n-4}}<\frac{w^\lambda _{2n-3}+w^\lambda _{2n-1}}{w^\lambda _{2n-2}}. \end{aligned} \end{aligned}$$

Again, using the same strategy, we get:

$$\begin{aligned} \begin{aligned}&\frac{w^\lambda _{1}}{w^\lambda _{2}}<\frac{\sum _{l=1}^{n-1}w^\lambda _{2l+1}}{\sum _{l=2}^{n-1}w^\lambda _{2l}}<\ldots<\frac{w^\lambda _{2n-3}+w^\lambda _{2n-1}}{w^\lambda _{2n-2}}\Rightarrow \\&\Rightarrow \frac{\sum _{l=0}^{n-1}w^\lambda _{2l+1}}{\sum _{l=1}^{n-1}w^\lambda _{2l}}<\frac{\sum _{l=1}^{n-1}w^\lambda _{2l+1}}{\sum _{l=2}^{n-1}w^\lambda _{2l}}<\ldots<\frac{w^\lambda _{2n-3}+w^\lambda _{2n-1}}{w^\lambda _{2n-2}}\Rightarrow \\&\frac{||\textbf{w}^1||_1}{||\textbf{w}^0||_1}=\frac{\sum _{l=0}^{n-1}w^\lambda _{2l+1}}{\frac{1}{2}+\sum _{l=1}^{n-1}w^\lambda _{2l}}<\frac{\sum _{l=0}^{n-1}w^\lambda _{2l+1}}{\sum _{l=1}^{n-1}w^\lambda _{2l}}<\frac{\sum _{l=1}^{n-1}w^\lambda _{2l+1}}{\sum _{l=2}^{n-1}w^\lambda _{2l}}<\ldots <\frac{w^\lambda _{2n-3}+w^\lambda _{2n-1}}{w^\lambda _{2n-2}}.\\ \end{aligned} \end{aligned}$$

Then, by Lemma 4.3, we conclude that the coefficients of the difference scheme, (4), are positive. \(\square \)

The next lemma allows us to easily check the monotonicity of \(p^{\omega ^\lambda }_0\).

Lemma 4.5

Let \(n\ge 2\) and \(\lambda \in (2n-1,2n)\). If \(\phi :[0,1]\rightarrow [0,1]\) is continuous, differentiable in (0, 1), and the quotient function \(\phi '/\phi \) is decreasing, then \(p_0^{\omega ^\lambda }\) is decreasing.

Proof

By hypothesis, the function \(p_0^{\omega ^\lambda }(l) = \frac{\phi ((2l+1)/\lambda )}{\phi (2l/\lambda )}\) is continuous on \([0,n-1]\) and differentiable in \((0,n-1)\) (observe that we are considering l as a real number here). Hence, it is decreasing provided that its derivative is negative. In addition,

$$\begin{aligned}&p_0^{\omega ^\lambda \prime }(l) = \frac{2}{\lambda }\frac{\phi '((2l+1)/\lambda )\phi (2l/\lambda )-\phi ((2l+1)/\lambda )\phi '(2l/\lambda )}{\phi (2l/\lambda )^2}<0 \\ \Leftrightarrow \quad&\phi '((2l+1)/\lambda )\phi (2l/\lambda )-\phi ((2l+1)/\lambda )\phi '(2l/\lambda )<0 \\ \Leftrightarrow \quad&\frac{\phi '((2l+1)/\lambda )}{\phi ((2l+1)/\lambda )} < \frac{\phi '(2l/\lambda )}{\phi (2l/\lambda )}, \end{aligned}$$

and \(\phi '/\phi \) is decreasing by hypothesis. \(\square \)

Table 2 Functions \(p_0^{\omega ^\lambda }\), \(\phi '/ \phi \) and its derivative, being \(\phi \) the functions presented in Table 1 and \(2n-1<\lambda <2n\)
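When \(\phi '/\phi \) is awkward to study analytically, the hypothesis of Lemma 4.4 can also be checked numerically; a small sketch (our own helper, which simply samples \(p_0^{\omega ^\lambda }\) on a fine grid):

```python
import numpy as np

def p0_is_decreasing(phi, lam, n, samples=1000):
    """Sample p_0(l) = phi((2l+1)/lam) / phi(2l/lam) on [0, n-1] and test
    the monotonicity hypothesis of Lemma 4.4 numerically."""
    l = np.linspace(0.0, n - 1.0, samples)
    p0 = phi((2.0 * l + 1.0) / lam) / phi(2.0 * l / lam)
    return bool(np.all(np.diff(p0) <= 0.0))

phi = lambda t: 1.0 - t**2     # a weight of the form (1 - x^p)^q with p=2, q=1
print(p0_is_decreasing(phi, lam=3.5, n=2))   # True, consistent with Corollary 4.6
```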

Therefore, we can prove the following corollary.

Corollary 4.6

(\(\mathcal {C}^1\) limit functions) Let \(n\in \mathbb {N}\), \(n\ge 2\) and \(\lambda \in (2n-1,2n)\). The scheme \(S_{1,\textbf{w}^\lambda }\) is \(\mathcal {C}^1\) for \(\phi (x) = 1\) and for any weight function \(\phi (x) = (1-x^p)^q\) with \(p\ge 1\) and \(q>0\).

Proof

From Table 2, the function \(p_0^{\omega ^\lambda }\) is decreasing for any \(\phi (x) = (1-x^p)^q\) with \(p\ge 1\) and \(q>0\). Then, by Lemma 4.4, the coefficients of the difference scheme are positive, and by Proposition 4.1 the subdivision scheme \(S_{2\textbf{q}}\) is convergent. Hence, by Theorem 2.4, \(S_{1,\textbf{w}^\lambda }\) is \(\mathcal {C}^1\). The case \(\phi (x)=1\) is studied in [16]. \(\square \)

In general, these schemes are not \(\mathcal {C}^2\), as mentioned in [16]. It is easy to check that, if \(a_{1,\textbf{w}^\lambda }(z)=(z+1)^2q(z)\), then \(q(-1)\) is not necessarily equal to 0 and, therefore, \((1+z)^3\) is not necessarily a factor of \(a_{1,\textbf{w}^\lambda }(z)\). Thus, these schemes do not necessarily generate \(\mathcal {C}^2\) functions, according to [15]. For example, the subdivision schemes shown in (19), \(\textbf{a}_{1,\mathbf {tria^{j+\frac{1}{2}}}}\), \(j=2,3,4,5\), are not \(\mathcal {C}^2\).

To finish this section, we study two more properties. Firstly, we analyse whether the new family of schemes preserves monotonicity. In our case, the result presented by Yad-Shalom in [27] can be used:

Proposition 4.7

Let \(S_\textbf{a}\) be a convergent subdivision scheme and \(S_{\textbf{q}}\) its corresponding difference scheme with a positive mask. If the initial data, \(\textbf{f}^0\), is non-decreasing, then the limit function \(S^\infty _{\textbf{a}}\textbf{f}^0\) is non-decreasing.

With this proposition, we can state the following corollary.

Corollary 4.8

(Monotonicity preservation) For \(\lambda \in (1,+\infty )\backslash \mathbb {N}\) and any weight function introduced in Table 1, the scheme \(S_{1,\textbf{w}^\lambda }\) preserves monotonicity.

Finally, when the initial data present a large variation, representing a discontinuity, and a linear subdivision scheme is applied, certain undesirable effects may appear near the large variation. This is similar to the Gibbs phenomenon that Fourier series exhibit near discontinuities. In [1], it is proved that, if the mask of the scheme is non-negative, then the Gibbs phenomenon does not appear in the limit function.

Corollary 4.9

(Avoiding Gibbs phenomenon) For \(\lambda \in (1,+\infty )\backslash \mathbb {N}\), the scheme \(S_{1,\textbf{w}^\lambda }\) avoids the Gibbs phenomenon.

In Sect. 9, we present some examples checking these theoretical results. For \(d=0,1\), the resulting masks are positive and we have used classical tools to study their properties. However, for \(d\ge 2\), the masks are no longer positive. In the next section, we will develop a novel technique based on numerical integration for this goal, and we will apply it to prove the convergence of the schemes based on weighted least squares.

5 A Tool for the Convergence Analysis

The purpose of this section is to provide new theoretical results to analyse the convergence. In Sect. 4, the convergence was easily proven thanks to the positivity of the mask. However, in Sect. 6 we will prove the convergence of the schemes based on regression with polynomials of degrees \(d=2,3\), whose masks are no longer positive, so we cannot follow the same strategy. Nevertheless, as a consequence of Lemma 3.3, the masks can be seen as the evaluation of a second-degree polynomial, and we will take advantage of this fact in this section.

For any particular value of n, a fixed \(\omega \) and some \(\lambda _n\) such that \(2n-1<\lambda _n < 2n+1\), \(\lambda _n\ne 2n\), the difference scheme can be easily computed using formula (4), and one can check whether its norm is less than 1, which would imply convergence. Let us call this method direct inspection. But it proves convergence only for the chosen n, and we wish to prove it for all \(n\in \mathbb {N}\). Our strategy consists in proving convergence asymptotically, that is, proving it for all \(n> n_0\), for some \(n_0\in \mathbb {N}\), and then checking the convergence for each \(n \le n_0\) by direct inspection.

First, we would like to give a general idea of this asymptotic convergence. Thanks to the properties of the space of polynomials \(\Pi _d\), the problem (6) can be formulated using equidistant knots in the interval \([-1,1]\), as

$$\begin{aligned} \begin{aligned} {\hat{\varvec{\beta }}}^i=\mathop {\mathrm {arg\,min}}\limits _{\varvec{\beta }\in \mathbb {R}^{d+1}} \sum _{l=1-n}^{n-1+i} \omega \left( (2l-i)/{\lambda _n}\right) L_p\left( f^k_{j+l},A\left( \frac{2l-i}{2n}\right) ^T\varvec{\beta }\right) ,\quad i=0,1. \end{aligned} \end{aligned}$$

The last sum is, in fact, a composite integration rule. Hence, if \(n\rightarrow \infty \), then \(2n/\lambda _n\rightarrow 1\) and the problem seems (this is not a rigorous argument, but it serves to understand the situation) to converge to

$$\begin{aligned} \begin{aligned} \mathop {\mathrm {arg\,min}}\limits _{\varvec{\beta }\in \mathbb {R}^{d+1}} \int _{-1}^1 L_p\left( f(x),A(x)^T\varvec{\beta }\right) \ \omega (x) dx, \end{aligned} \end{aligned}$$

for both \(i=0,1\). On the one hand, the given data is now a function f(x), which is approximated by a polynomial \(A(x)^T\varvec{\beta }\in \Pi _d\) in the \(L_p\) norm with a weight function \(\omega \). On the other hand, by Lemma 3.3, the corresponding even and odd subdivision masks, say \(\textbf{a}^{n,i}\), fulfil \(a^{n,i}_l = \omega ((2l-i)/{\lambda _n})A\left( \frac{2l-i}{2n}\right) ^T\varvec{\alpha }^{n,i}\), for some coefficients \(\varvec{\alpha }^{n,i}\in \mathbb {R}^{d+1}\). Then, the masks also seem to converge to some continuous function, if some normalization is performed, since the mask supports increase with n (see later Remark 4 and Sect. 6 for more details). The results presented in this section exploit this kind of situation.

From now on, we consider a family of subdivision schemes \(\{S_{\textbf{a}^n}\}_{n=1}^\infty \) as in (3). The results in this section allow us to prove convergence for \(n > n_0\), for some \(n_0\in \mathbb {N}\), and also provide the value of \(n_0\), so that convergence for \(n\le n_0\) can be checked by direct inspection. Combining both proofs, we obtain convergence for all \(n\in \mathbb {N}\). In particular, \(\lim _{n\rightarrow \infty }\Vert S_{\textbf{q}^n}\Vert _\infty \) will be computed, which ensures the asymptotic convergence when that limit is less than 1. Here we denote by \(\textbf{a}^{n,0},\textbf{a}^{n,1},\textbf{q}^{n,0},\textbf{q}^{n,1}\) the even and odd masks of \(\textbf{a}^n,\textbf{q}^n\).

Theorem 5.1

Let \(\{S_{\textbf{a}^n}\}_{n=1}^\infty \) be a sequence of subdivision schemes that reproduce \(\Pi _0\), whose odd rules are longer than the even rules, as in (3). Let \(r:[-1,1]\rightarrow \mathbb {R}\) be a \(\mathcal {C}^1\) function and let \(R(t):= \int _{-1}^{t} r(s) ds\). If

$$\begin{aligned} a^{n,0}_{j} - a^{n,1}_{j}&= r(j/n)n^{-2} + \kappa ^n_j,&j = 1-n,\ldots ,L_n, \end{aligned}$$
(25)
$$\begin{aligned} |\kappa ^n_j|&\le \mu n^{-3},&j = 1-n, \ldots , L_n, \end{aligned}$$
(26)
$$\begin{aligned} \Vert R\Vert _1&= \int _{-1}^{1} |R(t)| dt <1,&\end{aligned}$$
(27)

for some \(\mu > 0\), then the even masks of the difference schemes fulfil

$$\begin{aligned} \lim _{n\rightarrow \infty } \Vert \textbf{q}^{n,0}\Vert _1 = \lim _{n\rightarrow \infty } \sum _{l=1-n}^{L_n} |q^{n,0}_{l} | \le \Vert R\Vert _1.\end{aligned}$$

In particular, \(\Vert \textbf{q}^{n,0}\Vert _1 < 1\) for all \(n > n_{0}\), where

$$\begin{aligned} n_0&= {\left\{ \begin{array}{ll} \displaystyle \frac{\sqrt{(\Vert r\Vert _\infty +2 (\mu +\Vert r'\Vert _\infty ))^2-4 (1-\Vert R\Vert _1) (\mu +\Vert r'\Vert _\infty )}+\Vert r\Vert _\infty +2 (\mu +\Vert r'\Vert _\infty )}{2 (1-\Vert R\Vert _1)}, &{} \text {if }L_n = n-1,\\ \displaystyle \frac{\sqrt{(\Vert r\Vert _\infty +2 (\mu +\Vert r'\Vert _\infty ))^2+4 (1-\Vert R\Vert _1) \mu }+\Vert r\Vert _\infty +2 (\mu +\Vert r'\Vert _\infty )}{2 (1-\Vert R\Vert _1)}, &{} \text {if }L_n = n, \end{array}\right. } \end{aligned}$$
(28)

where

$$\begin{aligned} \Vert r\Vert _\infty = \max _{t\in [-1,1]} |r(t)|, \quad \Vert r'\Vert _\infty = \max _{t\in [-1,1]} |r'(t)|. \end{aligned}$$

Proof

First, we may write \(q^{n,0}_{j}\) in terms of r:

$$\begin{aligned} q^{n,0}_{j} = \sum _{l=1-n}^{j}\{ a^{n,0}_{l} - a^{n,1}_{l} \} = \sum _{l=1-n}^{j} \{ r(l/n)n^{-2} + \kappa ^n_l \}. \end{aligned}$$

Using the composite (backward) rectangle rule, we obtain

$$\begin{aligned} n^{-1} \sum _{l=1-n}^{j} r(l/n) = \int _{-1}^{j/n} r(t) dt + \theta ^n_j= R(j/n) + \theta ^n_j, \end{aligned}$$

where \(\theta ^n_j\) is the integration error, which fulfils \( |\theta ^n_j| \le n^{-1}\Vert r'\Vert _\infty . \) Then,

$$\begin{aligned} q^{n,0}_{j} = n^{-1}R(j/n) + n^{-1}\theta ^n_j + \sum _{l=1-n}^{j} \kappa ^n_l. \end{aligned}$$

With this computation, we will prove first that \(\Vert \textbf{q}^{n,0}\Vert _1 < 1\), \(\forall n > n_{0}\):

$$\begin{aligned} \Vert \textbf{q}^{n,0}\Vert _1&= \sum _{j=1-n}^{L_n} |n^{-1}R(j/n) + n^{-1}\theta ^n_j + \sum _{l=1-n}^{j} \kappa ^n_l| \\&\le n^{-1} \sum _{j=1-n}^{L_n} |R(j/n)| + n^{-1}\sum _{j=1-n}^{L_n}|\theta ^n_j| + \sum _{j=1-n}^{L_n}\sum _{l=1-n}^{j} | \kappa ^n_l|. \end{aligned}$$

Now, if \(L_n=n-1\), we use that \(R(-1) = 0\) and the composite (forward) rectangle rule, thus obtaining that

$$\begin{aligned} n^{-1}\sum _{j=1-n}^{L_n} |R(j/n)| = n^{-1}\sum _{j=-n}^{n-1} |R(j/n)| = \int _{-1}^1 |R(t)| dt + \rho ^n = \Vert R\Vert _1 + \rho ^n, \end{aligned}$$

where \(\rho ^n\) is the integration error of R(t),

$$\begin{aligned} |\rho ^n| \le n^{-1} \max _{t\in [-1,1]} |R'(t)| = n^{-1} \Vert r\Vert _\infty . \end{aligned}$$

If \(L_n = n\), we use the composite (backward) rectangle rule and we obtain a similar result:

$$\begin{aligned} n^{-1}\sum _{j=1-n}^{L_n} |R(j/n)| = n^{-1}\sum _{j=1-n}^{n} |R(j/n)| = \Vert R\Vert _1 + {\tilde{\rho }}^n, \quad |{\tilde{\rho }}^n| \le n^{-1} \Vert r\Vert _\infty . \end{aligned}$$

Using all the upper bounds we found, we obtain:

$$\begin{aligned} \begin{aligned} \Vert \textbf{q}^{n,0}\Vert _1&\le \Vert R\Vert _1 + \frac{\Vert r\Vert _\infty }{n} + \frac{(L_n+n)\Vert r'\Vert _\infty }{n^{2}} + \frac{1}{2} (L_n+n) (L_n+n+1) \mu n^{-3}. \end{aligned} \end{aligned}$$
(29)

From here we deduce that the limit as \(n\rightarrow \infty \) of the right-hand side of (29) is \(\Vert R\Vert _1\), which is less than 1. Hence, there exists \(n_0\ge 1\) such that \(\Vert \textbf{q}^{n,0}\Vert _1<1\) for all \(n> n_0\). In particular, we can find the value of \(n_0\) for which the right-hand side of (29) is equal to 1 by solving a second-degree equation, arriving at (28). \(\square \)

Remark 4

In practice, if the expressions of \(a^{n,0}_j,a^{n,1}_j\) are well defined for any \(j\in \mathbb {R}\) (this is the case of \(S_{3,\mathbf {w^\lambda }}\), see (36)), then a practical way to compute r(t) is

$$\begin{aligned} r(t):= \lim _{n\rightarrow \infty } (a^{n,0}_{t n} - a^{n,1}_{t n})n^2. \end{aligned}$$

In Sect. 6, a complete example of the application of the results of this section will be performed.

For odd-symmetric subdivision operators, due to (5), satisfying the hypotheses of Theorem 5.1 is sufficient to ensure convergence.

Theorem 5.2

Let \(\{S_{\textbf{a}^n}\}_{n=1}^\infty \) be a set of odd-symmetric subdivision schemes fulfilling the hypothesis of Theorem 5.1. Then, the subdivision scheme \(S_{\textbf{a}^n}\) is convergent if \(n> n_0\) with \(n_0\) as in (28).

For subdivision operators that are not odd-symmetric, we need to inspect when \(\Vert \textbf{q}^{n,1}\Vert _1 < 1\).

Theorem 5.3

Let \(\{S_{\textbf{a}^n}\}_{n=1}^\infty \) be as in (3) and consider a flipped version of them, \(\{S_{{\bar{\textbf{a}}}^n}\}_{n=1}^\infty \), defined as

$$\begin{aligned} {\bar{a}}^{n,0}_{j}&:= a^{n,0}_{L_n+1-n - j},&j=1-n,\ldots ,L_n, \\ {\bar{a}}^{n,1}_{j}&:= a^{n,1}_{L_n+2-n - j},&j=1-n,\ldots ,L_n+1 . \end{aligned}$$

Then

$$\begin{aligned} q^{n,0}_{j}&= {\bar{q}}^{n,1}_{L_n+1-n - j}, \qquad \Vert \textbf{q}^{n,0}\Vert _1 = \Vert {\bar{\textbf{q}}}^{n,1}\Vert _1. \end{aligned}$$

Moreover, if \(\{S_{\textbf{a}^n}\}_{n=1}^\infty \) fulfil the conditions of Theorem 5.1, then \(\{S_{\bar{\textbf{a}}^n}\}_{n=1}^\infty \) fulfil

$$\begin{aligned} {\bar{a}}^{n,0}_{j} - {\bar{a}}^{n,1}_{j+1}&= {\bar{r}}(j/n)n^{-2} + {\bar{\kappa }}^n_j,&j = 1-n,\ldots ,L_n, \end{aligned}$$
(30)
$$\begin{aligned} |{\bar{\kappa }}^n_j|&\le \bar{\mu }n^{-3},&j = 1-n, \ldots , L_n, \end{aligned}$$
(31)
$$\begin{aligned} \Vert {\bar{R}}\Vert _1&<1,&\end{aligned}$$
(32)

where \({\bar{r}}(t):= r(-t)\), \({\bar{\kappa }}^n_j:= \kappa ^n_{-j}\), \({\bar{R}}(t):= \int _{t}^{1} {\bar{r}}(s) ds = R(-t)\),

$$\begin{aligned} \bar{\mu }&= {\left\{ \begin{array}{ll} \mu , &{} L_n = n-1, \\ \mu + \Vert r'\Vert _\infty , &{} L_n = n. \end{array}\right. } \end{aligned}$$

Conversely, if \(\{S_{{\bar{\textbf{a}}}^n}\}_{n=1}^\infty \) fulfils (30), (31) and (32), then \(\{S_{\textbf{a}^n}\}_{n=1}^\infty \) fulfils the hypotheses of Theorem 5.1 with

$$\begin{aligned} \mu&= {\left\{ \begin{array}{ll} \bar{\mu }, &{} L_n = n-1, \\ \bar{\mu }+ \Vert {r}'\Vert _\infty , &{} L_n = n. \end{array}\right. } \end{aligned}$$

Proof

Observe that for \(j = 1-n,\ldots ,L_n\) we have:

$$\begin{aligned} {\bar{a}}^{n,0}_{j} - {\bar{a}}^{n,1}_{j+1}&= a^{n,0}_{L_n+1-n - j} - a^{n,1}_{L_n+2-n - (j+1)}= a^{n,0}_{L_n+1-n - j} - a^{n,1}_{L_n+1-n - j}, \end{aligned}$$

If (25) is fulfilled, then

$$\begin{aligned} a^{n,0}_{L_n+1-n - j} - a^{n,1}_{L_n+1-n - j} = r((L_n+1-n - j)/n)n^{-2} + \kappa ^n_{L_n+1-n - j}. \end{aligned}$$

If \(L_n = n-1\), then \(r((L_n+1-n - j)/n) = r(-j/n)\), so that, defining \({\bar{r}}(t):= r(-t)\), \({\bar{\kappa }}^n_{j}:= \kappa ^n_{-j}\), the relation between (25)-(26) and (30)-(31) is clear. If \(L_n = n\), then \(r((L_n+1-n - j)/n) = r(1/n -j/n)\). To arrive at the same conclusion, we define

$$\begin{aligned} {\bar{\kappa }}^n_{L_n+1-n - j}:= (r(1/n -j/n) - r(-j/n))n^{-2} + \kappa ^n_{L_n+1-n - j} \end{aligned}$$

and use the Lipschitz condition of r (whose constant is \(\Vert r'\Vert _\infty \)), as follows:

$$\begin{aligned} a^{n,0}_{L_n+1-n - j} - a^{n,1}_{L_n+1-n - j}&= r(-j/n)n^{-2} + (r(1/n -j/n) - r(-j/n))n^{-2} + \kappa ^n_{L_n+1-n - j} \\&= r(-j/n)n^{-2} + {\bar{\kappa }}^n_{L_n+1-n - j}, \\ |{\bar{\kappa }}^n_{L_n+1-n - j}|&\le \Vert r'\Vert _\infty n^{-1} n^{-2} + \mu n^{-3} = (\mu + \Vert r'\Vert _\infty ) n^{-3}. \end{aligned}$$

On the one hand,

$$\begin{aligned} {\bar{R}}(t)&= \int _{t}^{1} {\bar{r}}(s) ds = \int _{t}^{1} r(-s) ds \overset{[u = -s]}{=}\ \int _{-t}^{-1} - r(u) du = \int _{-1}^{-t} r(u) du = R(-t), \end{aligned}$$

and

$$\begin{aligned} \int _{-1}^{1} |{\bar{R}}(t)| dt = \int _{-1}^{1} \left| R(-t) \right| dt = \int _{-1}^{1} \left| R(t) \right| dt, \end{aligned}$$

thus, the equivalence between (27) and (32) also holds true.

On the other hand, \(S_{\textbf{a}^{n}}\) reproduces \(\Pi _0\) if, and only if, \(S_{{\bar{\textbf{a}}}^{n}}\) does. Hence, the finite difference scheme exists and can be computed with formula (4):

$$\begin{aligned} \begin{aligned}&{\bar{q}}^{n,0}_{j} = \sum _{l=1-n}^{j} a^{n,0}_{L_n+1-n - l} - a^{n,1}_{L_n+2-n - l}\\&\overset{[k = L_n+1-n - l]}{=}\ \sum _{k=L_n+1-n - j}^{L_n} a^{n,0}_{k} - a^{n,1}_{k+1} = q^{n,1}_{L_n+1-n - j}. \end{aligned} \end{aligned}$$

Hence,

$$\begin{aligned} \sum _{j=1-n}^{L_n} |q^{n,1}_{j} | = \sum _{j=1-n}^{L_n} |q^{n,1}_{L_n+1-n - j} | = \sum _{j=1-n}^{L_n} |{\bar{q}}^{n,0}_{j}|. \end{aligned}$$

\(\square \)

The next result is a direct consequence of the previous one, which in combination with Theorem 5.1 provides a sufficient condition for the convergence of the schemes.

Corollary 5.4

Let \(\{S_{\textbf{a}^n}\}_{n=1}^\infty \) be a sequence of subdivision schemes that reproduce \(\Pi _0\), whose odd rules are longer than the even rules, as in (3). Let \(r:[-1,1]\rightarrow \mathbb {R}\) be a \(\mathcal {C}^1\) function and let \(R(t):= \int _{t}^{1} r(s) ds\). If

$$\begin{aligned} a^{n,0}_{j} - a^{n,1}_{j+1}&= r(j/n)n^{-2} + \kappa ^n_j,&j = 1-n,\ldots ,L_n,\\ |\kappa ^n_j|&\le \mu n^{-3},&j = 1-n, \ldots , L_n,\\ \Vert R\Vert _1&<1,&\end{aligned}$$

for some \(\mu >0\), then \( \Vert \textbf{q}^{n,1}\Vert _1 < 1, \; \forall n \ge n_{0}, \) where

$$\begin{aligned} n_0&= {\left\{ \begin{array}{ll} \displaystyle \frac{\sqrt{(\Vert r\Vert _\infty +2 (\mu +\Vert r'\Vert _\infty ))^2-4 (1-\Vert R\Vert _1) (\mu +\Vert r'\Vert _\infty )}+\Vert r\Vert _\infty +2 (\mu +\Vert r'\Vert _\infty )}{2 (1-\Vert R\Vert _1)}, &{} \text {if }L_n = n-1,\\ \displaystyle \frac{\sqrt{(\Vert r\Vert _\infty +2 (\mu +2\Vert r'\Vert _\infty ))^2+4 (1-\Vert R\Vert _1) (\mu +\Vert r'\Vert _\infty ) }+\Vert r\Vert _\infty +2 (\mu +2\Vert r'\Vert _\infty )}{2 (1-\Vert R\Vert _1)}, &{} \text {if }L_n = n, \end{array}\right. } \end{aligned}$$
(33)

Proof

By Theorem 5.3, the flipped version of this scheme fulfils Theorem 5.1. To compute \(n_0\), we replace \(\mu \) by \({\bar{\mu }}\) in (28), obtaining (33). \(\square \)

6 WLPR-Subdivision Schemes for \(d=2,3\)

We consider \(\{\lambda _n\}_{n\ge 2}\) such that \(2n-1<\lambda _n<2n\); then \(L_n = n-1\). The following computations could be done for \(2n<\lambda _n<2n+1\) as well. First, we compute the coefficients of \(S_{2,\textbf{w}^\lambda }\) (denoted by \(S^n\) from now on). According to Lemma 3.3, the even and odd masks are \(\textbf{a}^{n,i} = \textbf{W}^i \textbf{X}^i \varvec{\alpha }^i\), \(i=0,1\), where \(\varvec{\alpha }^i=((\textbf{X}^i)^T\textbf{W}^i\textbf{X}^i)^{-1} \textbf{e}_1\). Then, to compute \(\varvec{\alpha }^i\), we may solve the system

$$\begin{aligned} (\textbf{X}^i)^T\textbf{W}^i\textbf{X}^i \varvec{\alpha }^i = \textbf{e}_1. \end{aligned}$$

We start with \(i=1\). Using (11) and the symmetry of \(\textbf{w}^1\) and \(\textbf{x}^1\),

$$\begin{aligned} (\textbf{X}^1)^T\textbf{W}^1\textbf{X}^1 = \left( \begin{array}{llllll} \Vert \textbf{w}^1\Vert _1 &{} 0 &{} 2\sum _{i=1}^{n} w^\lambda _{2i-1} (2i-1)^2 \\ 0 &{} 2\sum _{i=1}^{n} w^\lambda _{2i-1} (2i-1)^2 &{} 0 \\ 2\sum _{i=1}^{n} w^\lambda _{2i-1} (2i-1)^2 &{} 0 &{} 2\sum _{i=1}^{n} w^\lambda _{2i-1} (2i-1)^{4} \end{array}\right) , \end{aligned}$$
$$\begin{aligned} \begin{aligned} \Delta ^1&:= \left| (\textbf{X}^1)^T\textbf{W}^1\textbf{X}^1\right| \\ {}&= 4 \left( \Vert \textbf{w}^1\Vert _1\sum _{i=1}^{n} w^\lambda _{2i-1} (2i-1)^{4} - 2\left( \sum _{i=1}^{n} w^\lambda _{2i-1} (2i-1)^2\right) ^2\right) \sum _{i=1}^{n} w^\lambda _{2i-1} (2i-1)^2. \end{aligned} \end{aligned}$$

Hence, using Cramer's rule, the three coefficients of \(\varvec{\alpha }^1\) are:

$$\begin{aligned} \alpha ^1_0&= (\Delta ^1)^{-1} \left| \begin{array}{llllll} 1 &{} 0 &{} 2\sum _{l=1}^{n} w^\lambda _{2l-1} (2l-1)^2 \\ 0 &{} 2\sum _{l=1}^{n} w^\lambda _{2l-1} (2l-1)^2 &{} 0 \\ 0 &{} 0 &{} 2\sum _{l=1}^{n} w^\lambda _{2l-1} (2l-1)^{4} \end{array}\right| \\&= 4 (\Delta ^1)^{-1} \left( \sum _{l=1}^{n} w^\lambda _{2l-1} (2l-1)^2\right) \left( \sum _{l=1}^{n} w^\lambda _{2l-1} (2l-1)^{4}\right) \\&= \frac{\sum _{l=1}^{n} w^\lambda _{2l-1} (2l-1)^{4}}{\Vert \textbf{w}^1\Vert _1\sum _{l=1}^{n} w^\lambda _{2l-1} (2l-1)^{4} - 2\left( \sum _{l=1}^{n} w^\lambda _{2l-1} (2l-1)^2\right) ^2}\\&= \frac{\sum _{l=1}^{n} w^\lambda _{2l-1} \left( l-\frac{1}{2}\right) ^{4}}{\Vert \textbf{w}^1\Vert _1\sum _{l=1}^{n} w^\lambda _{2l-1} \left( l-\frac{1}{2}\right) ^{4} - 2\left( \sum _{l=1}^{n} w^\lambda _{2l-1} \left( l-\frac{1}{2}\right) ^2\right) ^2}, \end{aligned}$$
$$\begin{aligned} \alpha ^1_1&= 0, \\ \alpha ^1_2&= (\Delta ^1)^{-1} \left| \begin{array}{llllll} \Vert \textbf{w}^1\Vert _1 &{} 0 &{} 1 \\ 0 &{} 2\sum _{l=1}^{n} w^\lambda _{2l-1} (2l-1)^2 &{} 0 \\ 2\sum _{l=1}^{n} w^\lambda _{2l-1} (2l-1)^2 &{} 0 &{} 0 \end{array}\right| \\&= -4(\Delta ^1)^{-1} \left( \sum _{l=1}^{n} w^\lambda _{2l-1} (2l-1)^2\right) ^2\\&= -\frac{\sum _{l=1}^{n} w^\lambda _{2l-1} (2l-1)^2}{\Vert \textbf{w}^1\Vert _1\sum _{l=1}^{n} w^\lambda _{2l-1} (2l-1)^{4} - 2\left( \sum _{l=1}^{n} w^\lambda _{2l-1} (2l-1)^2\right) ^2}\\&= -\frac{1}{4}\frac{\sum _{l=1}^{n} w^\lambda _{2l-1} \left( l-\frac{1}{2}\right) ^2}{\Vert \textbf{w}^1\Vert _1\sum _{l=1}^{n} w^\lambda _{2l-1} \left( l-\frac{1}{2}\right) ^{4} - 2\left( \sum _{l=1}^{n} w^\lambda _{2l-1} \left( l-\frac{1}{2}\right) ^2\right) ^2}. \end{aligned}$$

Then, by (16), the odd mask coefficients are

$$\begin{aligned} a^{n,1}_j&= w^\lambda _{2j-1} \left( \alpha ^1_0 + \alpha ^1_2 (2j-1)^2\right) = w^\lambda _{2j-1} \left( \alpha ^1_0 + 4\alpha ^1_2 \left( j-\frac{1}{2}\right) ^2\right) \nonumber \\&= w^\lambda _{2j-1} \frac{\sum _{l=1}^{n} w^\lambda _{2l-1} \left( l-\frac{1}{2}\right) ^{4} - \left( j-\frac{1}{2}\right) ^2\sum _{l=1}^{n} w^\lambda _{2l-1} \left( l-\frac{1}{2}\right) ^2}{\Vert \textbf{w}^1\Vert _1\sum _{l=1}^{n} w^\lambda _{2l-1} \left( l-\frac{1}{2}\right) ^{4} - 2\left( \sum _{l=1}^{n} w^\lambda _{2l-1} \left( l-\frac{1}{2}\right) ^2\right) ^2},\nonumber \\&\quad j=1-n,\ldots ,n. \end{aligned}$$
(34)

Similarly, the even mask coefficients are, for \(j=1-n,\ldots ,n-1\),

$$\begin{aligned} a^{n,0}_j&= w^\lambda _{2j} (\alpha ^0_0 + 4\alpha ^0_2 j^2) =w^\lambda _{2j} \frac{\sum _{l=1}^{n-1} w^\lambda _{2l} l^{4} - j^2\sum _{l=1}^{n-1} w^\lambda _{2l} l^2}{\Vert \textbf{w}^0\Vert _1\sum _{l=1}^{n-1} w^\lambda _{2l} l^{4} - 2\left( \sum _{l=1}^{n-1} w^\lambda _{2l} l^2\right) ^2},&\end{aligned}$$
(35)

where

$$\begin{aligned} \alpha ^0_0&= \frac{\sum _{l=1}^{n-1} w^\lambda _{2l} l^{4}}{\Vert \textbf{w}^0\Vert _1\sum _{l=1}^{n-1} w^\lambda _{2l} l^{4} - 2\left( \sum _{l=1}^{n-1} w^\lambda _{2l} l^2\right) ^2},\\ \alpha ^0_2&= -\frac{1}{4}\frac{\sum _{l=1}^{n-1} w^\lambda _{2l} l^2}{\Vert \textbf{w}^0\Vert _1\sum _{l=1}^{n-1} w^\lambda _{2l} l^{4} - 2\left( \sum _{l=1}^{n-1} w^\lambda _{2l} l^2\right) ^2}. \end{aligned}$$
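The masks can also be cross-checked numerically by assembling the normal equations of Lemma 3.3 directly. The following MATLAB sketch is our own illustration, not the code distributed with the paper, and the variable names are hypothetical; since the mask is invariant under an affine rescaling of the nodes, using the integer nodes \(2l-1\) is equivalent to the half-integer nodes appearing in (34):

% Cross-check of (34): solve (X^1)' W^1 X^1 alpha^1 = e_1 and form the odd mask.
phi = @(x) (1 - x.^2).^3;          % trwt weight, as an example
n = 3; lambda = 5.5;               % 2n-1 < lambda < 2n
x1 = (2*((1-n):n) - 1)';           % odd-rule nodes 2l-1, l = 1-n,...,n
w1 = phi(abs(x1)/lambda);          % weights w^lambda_{2l-1}
X1 = [ones(size(x1)), x1, x1.^2];  % columns 1, x, x^2 (d = 2)
alpha1 = (X1'*diag(w1)*X1) \ [1; 0; 0];
a1 = w1 .* (X1*alpha1);            % odd mask; sum(a1) returns 1

The identity sum(a1) = 1, up to round-off, follows from the first row of the normal equations, which encodes the reproduction of constants.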

We first prove convergence for all \(n\ge 2\) in the simplest case, \(\phi (x)=1\), using the new convergence analysis tools. Later, we will discuss the general case.

6.1 Convergence of the Subdivision Schemes Based on Weighted Least Squares with \(d=2,3\) and \(\phi (x)=1\)

In this case, \(w_l=1\), \(1-2n\le l \le 2n-1\), so that the mask coefficients can be simplified to

$$\begin{aligned} \begin{aligned} a^{n,0}_{j}&= -\frac{3 \left( 5 j^2-3 n^2+3 n+1\right) }{8 n^3-12 n^2-2 n+3}, \qquad j = -n+1, \ldots , n-1, \\ a^{n,1}_{j}&= \frac{15 (j-1) j-9 n^2+9}{8 n-8 n^3}, \qquad j = -n+1, \ldots , n. \end{aligned} \end{aligned}$$
(36)

It can be easily checked that these operators are odd-symmetric, as we already knew from Lemma 3.5. Hence, to prove convergence we can apply Theorem 5.2.

Observe that the algebraic expressions of \(a^{n,0}_{j}\) and \(a^{n,1}_{j}\) are well defined even for \(j\in \mathbb {R}\). Then, for any \(t\in [-1,1]\), we define

$$\begin{aligned} r(t):= \lim _{n\rightarrow \infty } (a^{n,0}_{t n} - a^{n,1}_{t n})n^2 = -\frac{45 t^2}{16}-\frac{15 t}{8}+\frac{9}{16}, \end{aligned}$$

that we obtained with the aid of a symbolic computation program. We also computed that

$$\begin{aligned} \begin{aligned} n^3 \kappa ^n_j =&n^3(a^{n,0}_{j} - a^{n,1}_{j} - r(j/n)n^{-2}) \\ =&3 \rho (n)^{-1}(-120 j^2 n^4-120 j^2 n^3+225 j^2 n^2+30 j^2 n-45 j^2-80 j n^4\\&+120 j n^3+20 j n^2-30 j n+32 n^6-12 n^5-41 n^4+12 n^3+9 n^2), \end{aligned} \end{aligned}$$
(37)

where

$$\begin{aligned} \rho (n):= 16 (n-1) n (n+1) (2 n-3) (2 n-1) (2 n+1). \end{aligned}$$
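Both r(t) and the residual (37) can be reproduced with a few lines of symbolic computation; here is a sketch of how this can be done, assuming MATLAB's Symbolic Math Toolbox:

syms n t real                      % work with j = t*n
a0 = -3*(5*(t*n)^2 - 3*n^2 + 3*n + 1)/(8*n^3 - 12*n^2 - 2*n + 3);
a1 = (15*(t*n - 1)*(t*n) - 9*n^2 + 9)/(8*n - 8*n^3);
r  = limit((a0 - a1)*n^2, n, inf)          % -45*t^2/16 - 15*t/8 + 9/16
kappa3 = simplify((a0 - a1 - r/n^2)*n^3);  % n^3*kappa^n_j with j = t*n, cf. (37)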

Now we should find \(\mu \) such that \(|\kappa ^n_j n^{3}|\le \mu \) for \(1-n\le j \le n-1\). On the one hand,

$$\begin{aligned} \rho (n) > 16 (n-2)^3 (2n-4) (2n-4) (2n-4) = 128 (n-2)^6 \ge 0, \qquad \forall n\ge 2. \end{aligned}$$

On the other hand, the numerator of (37) can be easily bounded using that \(|j|\le n\) and raising the degree of every monomial to 6:

$$\begin{aligned} |\rho (n) n^3 \kappa ^n_j/3|&\le 120 n^6 + 120 n^6 +225 n^6+30 n^6 + 45 n^6 + 80 n^6+ 120 n^6\\&\quad +20 n^6 +30 n^6+32 n^6 + 12 n^6 + 41 n^6+12 n^6+9 n^6\\&= 896 n^6. \end{aligned}$$

In conclusion,

$$\begin{aligned} |n^3 \kappa ^n_j| \le 3 \frac{896 n^6}{128 (n-2)^6} = \frac{21 n^6}{(n-2)^6}, \quad \forall n>2. \end{aligned}$$

Then, for any \(n_1 \ge 3\),

$$\begin{aligned} |\kappa ^n_j| \le n^{-3} \mu _1, \quad \mu _1 = \frac{21 n_1^6}{(n_1-2)^6}, \qquad \forall n \ge n_1. \end{aligned}$$
(38)

To compute \(n_0\), it is also necessary to compute:

$$\begin{aligned} \Vert R\Vert _1&= \int _{-1}^1 \left| \int _{-1}^t r(s) ds \right| dt = \frac{1}{10} \left( 3 \sqrt{15}-5\right) \simeq 0.661895, \\ \Vert r\Vert _\infty&= \max _{t\in [-1,1]} |r(t)| = 33/8, \quad \Vert r'\Vert _\infty = \max _{t\in [-1,1]} |r'(t)| = 15/2. \end{aligned}$$

Now, using formula (28) (case \(L_n = n-1\)) for \(\mu = \mu _1\),

$$\begin{aligned} n_0 = \frac{\left( \sqrt{15}+5\right) }{6} \bigg ( \frac{42 n_1^6}{(n_1-2)^6}+\sqrt{\left( \frac{42 n_1^6}{(n_1-2)^6}+\frac{153}{8}\right) ^2+\frac{6}{5} \left( \sqrt{15}-5\right) \left( \frac{21 n_1^6}{(n_1-2)^6}+\frac{15}{2}\right) }+\frac{153}{8}\bigg ). \end{aligned}$$

It is desirable to prove convergence for as many values of n as possible, so \(n_1\) should be chosen such that \(n_0\) is as small as possible, but greater than or equal to \(n_1\), due to (38). We computationally found that the best compromise is achieved for \(n_1 = 188\), leading to \(n_0 \simeq 188.506\). Hence, according to Theorem 5.2, the subdivision schemes are convergent for \(n \ge 189\). For smaller values of n, we have computationally checked by direct inspection that

$$\begin{aligned} \Vert \textbf{q}^{n,0}\Vert _1 = \Vert \textbf{q}^{n,1}\Vert _1 \le 29/42 \simeq 0.690476, \qquad \forall 2 \le n \le 189. \end{aligned}$$

This symbolic computation is quick and free of rounding errors, so it can be considered a rigorous proof of convergence.
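The two computational steps just described can be sketched as follows (our MATLAB reconstruction, in floating point for brevity; as mentioned above, the actual check was symbolic):

% Search for n_1 minimizing the convergence threshold max(n_1, n_0(mu_1)):
R1 = (3*sqrt(15) - 5)/10;  rinf = 33/8;  drinf = 15/2;
n0 = @(mu) (sqrt((rinf + 2*(mu + drinf))^2 - 4*(1 - R1)*(mu + drinf)) ...
            + rinf + 2*(mu + drinf))/(2*(1 - R1));     % formula (28), L_n = n-1
thr = arrayfun(@(n1) max(n1, n0(21*n1^6/(n1 - 2)^6)), 3:400);
[best, k] = min(thr);              % best is about 188.5, attained at n1 = k + 2 = 188
% Direct inspection of ||q^{n,0}||_1 for 2 <= n <= 189 with the masks (36):
for n = 2:189
    j  = (1-n):(n-1);
    a0 = -3*(5*j.^2 - 3*n^2 + 3*n + 1)/(8*n^3 - 12*n^2 - 2*n + 3);
    a1 = (15*(j - 1).*j - 9*n^2 + 9)/(8*n - 8*n^3);
    q0 = cumsum(a0 - a1);          % q^{n,0}_j = sum_{l<=j} (a0_l - a1_l)
    assert(sum(abs(q0)) <= 29/42 + 1e-12)
end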

We can perform some additional computations in order to provide an upper bound of \(\Vert \textbf{q}^{n,0}\Vert _1 = \Vert \textbf{q}^{n,1}\Vert _1\) valid for any \(n\ge 2\). According to (29),

$$\begin{aligned}{} & {} \Vert \textbf{q}^{n,0}\Vert _1 \le \frac{1}{10}(3 \sqrt{15}-5) + n^{-1} 33/8 + (2n-1)n^{-2} 15/2+ (2n^{-1}-n^{-2}) 21 \left( \frac{94}{93}\right) ^6, \\{} & {} \quad \forall n \ge 189. \end{aligned}$$

We checked that the right-hand side is less than 29/42 for any \(n \ge 2236\), and we explicitly computed that \(\Vert \textbf{q}^{n,0}\Vert _1 \le 29/42\) for \(n \le 2236\). In conclusion,

$$\begin{aligned} \Vert \textbf{q}^{n,0}\Vert _1 = \Vert \textbf{q}^{n,1}\Vert _1 \le 29/42, \qquad \forall n\ge 2, \end{aligned}$$

and the equality is reached only for \(n=4\).

We tried to prove \(\mathcal {C}^1\) regularity with this technique by applying the results to the divided difference schemes, \(S_{2 \textbf{q}^n}\), but they do not satisfy (25).

6.2 Convergence of the Subdivision Schemes Based on Weighted Least Squares with \(d=2,3\) and a General Function \(\phi (x)\)

In this situation, we will study the convergence only for large values of n, so we will not calculate \(n_0\), because we have not been able to perform the direct inspection without specifying \(\phi \). In order to compute \(r(t):= \lim _{n\rightarrow \infty } (a^{n,0}_{t n} - a^{n,1}_{t n})n^2,\) we will define a \(\mathcal {C}^1\) function \(U^{n}_j\) such that \(a^{n,i}_{j} = U^{n}_j(1-i/2)\), \(i=0,1\), which allows us to write

$$\begin{aligned} a^{n,0}_{tn} - a^{n,1}_{tn} = U^{n}_{tn}(1) - U^{n}_{tn}(1/2) = \frac{1}{2}(U^{n}_{tn})'(\xi _{t,n}), \quad \xi _{t,n}\in (1/2,1). \end{aligned}$$

For that purpose, we define \(\sigma _{\lambda _n}(\phi ,x,k):= \sum _{l=1}^{n} \phi (\frac{l-x}{{\lambda _n}/2}) (l-x)^{k}\), \(k\in \mathbb {N}\), \(x\in [1/2,1]\). Recall that \(w^{\lambda _n}_0 = 1\), \(w_l^{\lambda _n} =\omega \left( \frac{l}{\lambda _n}\right) \) and \(\omega (x) = \phi (|x|)\). Observe that the even and odd masks (34) and (35) can be expressed as

$$\begin{aligned} a^{n,1}_j&= \phi \left( \frac{j-1/2}{{\lambda _n}/2}\right) \frac{\sigma _{\lambda _n}(\phi ,1/2,4) - \left( j-\frac{1}{2}\right) ^2\sigma _{\lambda _n}(\phi ,1/2,2)}{\Vert \textbf{w}^1\Vert _1\sigma _{\lambda _n}(\phi ,1/2,4) - 2\sigma _{\lambda _n}(\phi ,1/2,2)^2} = U^{n}_j(1/2),\\ a^{n,0}_j&=w^{\lambda _n}_{2j} \frac{\sum _{l=1}^{n} w^{\lambda _n}_{2(l-1)} (l-1)^{4} - j^2\sum _{l=1}^{n} w^{\lambda _n}_{2(l-1)} (l-1)^2}{\Vert \textbf{w}^0\Vert _1\sum _{l=1}^{n} w^{\lambda _n}_{2(l-1)} (l-1)^{4} - 2\left( \sum _{l=1}^{n} w^{\lambda _n}_{2(l-1)} (l-1)^2\right) ^2}\\&= \phi \left( \frac{j-0}{{\lambda _n}/2}\right) \frac{\sigma _{\lambda _n}(\phi ,1,4) - (j-0)^2\sigma _{\lambda _n}(\phi ,1,2)}{\Vert \textbf{w}^0\Vert _1\sigma _{\lambda _n}(\phi ,1,4) - 2\sigma _{\lambda _n}(\phi ,1,2)^2} = U^{n}_j(1). \end{aligned}$$

Thus, we may define the link function as

$$\begin{aligned} U^{n}_j(x):= \phi \left( \frac{j+x-1}{{\lambda _n}/2}\right) \frac{\sigma _{\lambda _n}(\phi ,x,4) - (j+x-1)^2\sigma _{\lambda _n}(\phi ,x,2)}{(\Vert \textbf{w}^1\Vert _1 + (2x-1)(\Vert \textbf{w}^0\Vert _1-\Vert \textbf{w}^1\Vert _1))\sigma _{\lambda _n}(\phi ,x,4) - 2\sigma _{\lambda _n}(\phi ,x,2)^2}. \end{aligned}$$

Observe that \(U^{n}_j\in \mathcal {C}([1/2,1])\cap \mathcal {C}^1((1/2,1))\) provided that \(\phi \in \mathcal {C}([0,1])\cap \mathcal {C}^1((0,1))\) (\(\phi '\) may not exist at 0 or 1). To follow the next computations more easily, we write \(U^{n}_j(x) = \phi (\frac{j+x-1}{{\lambda _n}/2})U_{\text {num}}(x)/U_{\text {den}}(x)\), where \(U_{\text {num}}(x),U_{\text {den}}(x)\) are the numerator and denominator appearing in the last formula. Taking into account that

$$\begin{aligned} \frac{\partial }{\partial x} \sigma _{\lambda _n}(\phi ,x,k) = -\frac{2}{\lambda _n} \sigma _{\lambda _n}(\phi ',x,k) - k\sigma _{\lambda _n}(\phi ,x,k-1), \qquad k>1, \end{aligned}$$

we proceed to compute the derivative.

$$\begin{aligned} (U^{n}_j)^\prime (x)= & {} \frac{2}{\lambda _n}\phi '\left( \frac{j+x-1}{{\lambda _n}/2}\right) \frac{U_{\text {num}}(x)}{U_{\text {den}}(x)} + \phi \left( \frac{j+x-1}{{\lambda _n}/2}\right) \frac{U_{\text {num}}'(x)}{U_{\text {den}}(x)}\\{} & {} - \phi \left( \frac{j+x-1}{{\lambda _n}/2}\right) \frac{U_{\text {num}}(x)U_{\text {den}}'(x)}{U_{\text {den}}^2(x)}, \end{aligned}$$

where

$$\begin{aligned} U_{\text {num}}'(x) =&-\frac{2}{\lambda _n} \sigma _{\lambda _n}(\phi ',x,4) - 4\sigma _{\lambda _n}(\phi ,x,3) - 2(j+x-1)\sigma _{\lambda _n}(\phi ,x,2) \\&- (j+x-1)^2\left( -\frac{2}{\lambda _n} \sigma _{\lambda _n}(\phi ',x,2) - 2\sigma _{\lambda _n}(\phi ,x,1)\right) , \\ U_{\text {den}}'(x) =&2(\Vert \textbf{w}^0\Vert _1-\Vert \textbf{w}^1\Vert _1)\sigma _{\lambda _n}(\phi ,x,4)+(\Vert \textbf{w}^1\Vert _1 +\\&+ (2x-1)(\Vert \textbf{w}^0\Vert _1-\Vert \textbf{w}^1\Vert _1))\left( -\frac{2}{\lambda _n} \sigma _{\lambda _n}(\phi ',x,4) - 4\sigma _{\lambda _n}(\phi ,x,3)\right) \\&- 4\sigma _{\lambda _n}(\phi ,x,2)\left( -\frac{2}{\lambda _n} \sigma _{\lambda _n}(\phi ',x,2) - 2\sigma _{\lambda _n}(\phi ,x,1)\right) . \end{aligned}$$

Finally, we proceed to compute \(r(t) = \lim _{n\rightarrow \infty } \frac{n^2}{2} (U^{n}_{tn})'(\xi _{t,n}).\) For this purpose, we define

$$\begin{aligned} I_k(\phi ):= \int _0^1 \phi (x) x^k dx, \quad k\in \mathbb {N}, \end{aligned}$$

we observe that \(\lim _{n\rightarrow \infty }{2n}/{\lambda _n} = 1\) and we use the following composite integration rule

$$\begin{aligned} n^{-k-1} \sigma _{\lambda _n}(\phi ,x,k)&=n^{-1}\sum _{l=1}^n \phi \left( \frac{l-x}{n}\frac{2n}{{\lambda _n}}\right) \left( \frac{l-x}{n}\right) ^k = I_k(\phi ) + \mathcal {O}(n^{-1}), \, \end{aligned}$$

\(\forall k\in \mathbb {N}\cup \{0\}, \,\, \forall x\in \left[ \frac{1}{2},1\right] .\) Defining \(\sigma _{\lambda _n}(\phi ,x,0):= \sum _{l=1}^{n} \phi (\frac{l-x}{{\lambda _n}/2})\), so that \(\frac{\partial \sigma _{\lambda _n}}{\partial x}(\phi ,x,0) = -\frac{2}{\lambda _n}\sigma _{\lambda _n}(\phi ',x,0)\), we note that

$$\begin{aligned} n^{-1}\Vert \textbf{w}^1\Vert _1 = 2n^{-1}\sum _{l=1}^{n} \phi \left( \frac{l-1/2}{{\lambda _n}/2}\right) = 2n^{-1}\sigma _{\lambda _n}(\phi ,1/2,0) = 2I_0(\phi )+ \mathcal {O}(n^{-1}), \end{aligned}$$

and

$$\begin{aligned} \sigma _{\lambda _n}(\phi ,1,0)-\sigma _{\lambda _n}(\phi ,1/2,0) =&\frac{1}{2}\frac{\partial \sigma _{\lambda _n}}{\partial x}(\phi ,\xi _n,0) = - \frac{1}{2} \frac{2}{\lambda _n} \sigma _{\lambda _n}(\phi ',\xi _n,0)\\ =&-\frac{1}{2} \int _0^1 \phi '(x)dx + \mathcal {O}(n^{-1}) = \frac{1}{2}(\phi (0) - \phi (1))+ \mathcal {O}(n^{-1}), \end{aligned}$$

so that

$$\begin{aligned} \Vert \textbf{w}^0\Vert _1-\Vert \textbf{w}^1\Vert _1 = 2\sigma _{\lambda _n}(\phi ,1,0)+1-2\phi (0)-2\sigma _{\lambda _n}(\phi ,1/2,0) = -\phi (1) + \mathcal {O}(n^{-1}), \end{aligned}$$

since \(w^{\lambda _n}_0 = 1 = \phi (0)\) for the weight functions considered.

Taking these computations into account and setting \(j = tn\), we find that

$$\begin{aligned}&\lim _{n\rightarrow \infty } \phi \left( \frac{t n+\xi _{t,n}-1}{{\lambda _n}/2}\right) = \phi (t), \quad \lim _{n\rightarrow \infty } \phi '\left( \frac{t n+\xi _{t,n}-1}{{\lambda _n}/2}\right) = \phi '(t),\\&\Vert \textbf{w}^1\Vert _1 + (2x-1)(\Vert \textbf{w}^0\Vert _1-\Vert \textbf{w}^1\Vert _1) = 2nI_0(\phi )+ \mathcal {O}(n^{0})+\\ {}&+ (2x-1)(-\phi (1) + \mathcal {O}(n^{-1})) = 2nI_0(\phi )+ \mathcal {O}(n^{0}) \end{aligned}$$

and

$$\begin{aligned} U_{\text {num}}(x) =&n^5 I_4(\phi ) - t^2 n^5 I_2(\phi ) + \mathcal {O}(n^{4}),\\ U_{\text {den}}(x) =&2n^6 I_0(\phi )I_4(\phi ) - 2n^6 I_2(\phi )^2 + \mathcal {O}(n^4),\\ U_{\text {num}}'(x) =&-n^4 I_4(\phi ') - 4n^4 I_3(\phi ) - 2tn^4 I_2(\phi )\\ {}&- t^2 n^2(-n^2 I_2(\phi ') - 2n^2 I_1(\phi )) + \mathcal {O}(n^3),\\ U_{\text {den}}'(x) =&2 n^5 (-\phi (1)) I_4(\phi )+2nI_0(\phi )(-n^4 I_4(\phi ')\\ {}&- 4n^4 I_3(\phi )) - 4n^3 I_2(\phi )(-n^2 I_2(\phi ') - 2n^2 I_1(\phi ))\\ {}&+ \mathcal {O}(n^{4}). \end{aligned}$$

Hence,

$$\begin{aligned} r(t)&= \lim _{n\rightarrow \infty } \frac{1}{2}n^2(U^{n}_{tn})^\prime (\xi _{t,n}) = \frac{1}{2}\phi '(t)\lim _{n\rightarrow \infty }n \frac{n^5}{n^6} \frac{I_4(\phi ) - t^2 I_2(\phi )}{2I_0(\phi )I_4(\phi ) - 2I_2(\phi )^2} \\&+ \frac{1}{2}\phi (t)\lim _{n\rightarrow \infty }n^2\frac{n^4}{n^6} \frac{-I_4(\phi ') - 4I_3(\phi ) - 2tI_2(\phi ) - t^2 (- I_2(\phi ') - 2 I_1(\phi ))}{2I_0(\phi ) I_4(\phi ) - 2 I_2(\phi )^2} \\&- \frac{1}{2}\phi (t)\lim _{n\rightarrow \infty }n^2 n^5 (I_4(\phi ) - t^2 I_2(\phi )) \\&\quad \cdot \frac{n^5}{n^{12}} \frac{ 2(-\phi (1)) I_4(\phi ) + 2I_0(\phi )(-I_4(\phi ') - 4I_3(\phi )) - 4 I_2(\phi )(- I_2(\phi ') - 2 I_1(\phi )) }{(2I_0(\phi ) I_4(\phi ) - 2I_2(\phi )^2)^2}\\&= \frac{1}{4}\phi '(t)\frac{I_4(\phi ) - t^2 I_2(\phi )}{I_0(\phi )I_4(\phi ) - I_2(\phi )^2} - \frac{1}{4}\phi (t)\frac{I_4(\phi ') + 4I_3(\phi ) + 2tI_2(\phi ) - t^2 ( I_2(\phi ') + 2 I_1(\phi ))}{I_0(\phi ) I_4(\phi ) - I_2(\phi )^2} \\&- \frac{1}{4}\phi (t) (I_4(\phi ) - t^2 I_2(\phi )) \frac{ (-\phi (1))I_4(\phi ) - I_0(\phi )(I_4(\phi ') + 4I_3(\phi )) + 2 I_2(\phi )( I_2(\phi ') + 2 I_1(\phi )) }{(I_0(\phi ) I_4(\phi ) - I_2(\phi )^2)^2}. \end{aligned}$$

Clearly, the former expression is valid provided that \(I_0(\phi ) I_4(\phi ) - I_2(\phi )^2 \ne 0\). Fortunately, we can use the Cauchy-Schwarz inequality for the inner product \(\langle f,g \rangle := \int _0^1 f(x) g(x) \phi (x)dx\) to deduce that

$$\begin{aligned} I_0(\phi ) I_4(\phi ) - I_2(\phi )^2 = \langle 1,1\rangle \langle x^2,x^2\rangle - \langle 1,x^2\rangle ^2 > 0. \end{aligned}$$

We gather in Table 3 the computation of r(t) and \(\Vert R\Vert _1\) for several choices of \(\phi \). Since \(\Vert R\Vert _1<1\) for all of them, we conclude that, for n large enough, all the corresponding subdivision schemes converge. We remark that the value of \(\Vert R\Vert _1\) can be greater than one for some extreme choices of \(\phi \); an example is \(\phi (x)=1+1000x^2\), but functions of this kind were dismissed in Sect. 3 due to their lack of practical significance.

Table 3 The function r(t) and the value \(\Vert R\Vert _1\) of Theorem 5.1 for \(S_{3,\textbf{w}^\lambda }\), several choices of \(\phi \), and \(2n-1<\lambda _n<2n\)

The results provided in Sect. 5 allow us to analyse the convergence for \(n\ge n_{0}\), for a certain \(n_{0}\in \mathbb {N}\). For smaller values of n, we suggest fixing the function \(\phi \) and the value n and using the algorithm presented in [19], based on the results proved in [15] (Section 3), to study the convergence and regularity of the limit functions. The development of new theoretical tools for the smoothness analysis with arbitrary \(\phi \) and \(n\ge n_{0}\) would be desirable, but it is not pursued in this paper.

We present a simple example of a scheme with \(d=2,3\) that produces \(\mathcal {C}^2\) limit functions, in contrast with the 4-point Deslauriers-Dubuc scheme, which is not so regular. For \(\phi (x)=(1-x^2)^3\) (trwt) and \(\lambda =5.5\), the subdivision scheme is the following:

$$\begin{aligned} \left( S_{3,\texttt {trwt}^{5.5}}\textbf{f}^k\right) _{2j}&=-\frac{505}{8762}\left( f^k_{j-2}+f^k_{j+2}\right) +\frac{311}{1349}\left( f^k_{j-1}+f^k_{j+1}\right) +\frac{367}{561}f^k_j, \\ \left( S_{3,\texttt{trwt}^{5.5}}\textbf{f}^k\right) _{2j+1}&=-\frac{29}{3601}\left( f^k_{j-2}+f^k_{j+3}\right) -\frac{352}{9181}\left( f^k_{j-1}+f^k_{j+2}\right) \\&\quad +\frac{3492}{6391}\left( f^k_{j}+f^k_{j+1}\right) , \end{aligned}$$

whose symbol (Definition 2) is equal to \(a_{3,\texttt{trwt}^{5.5}}(z)=\frac{(z+1)^3}{4}b(z)\) with

$$\begin{aligned} b(z)=-\frac{116}{3601}\left( z^{2}+z^{-5}\right) -\frac{393}{2935}(z+z^{-4})+\frac{306}{887}(1+z^{-3})+\frac{193}{601}\left( z^{-1}+z^{-2}\right) . \end{aligned}$$

Since \(||S_\textbf{b}||_\infty =\frac{377}{453}<1,\) the scheme is \(\mathcal {C}^2\) by Theorem 2.4.
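For readers who wish to experiment with this scheme, one refinement step of \(S_{3,\texttt{trwt}^{5.5}}\) on periodic data can be implemented directly from the rules above. The following MATLAB sketch is ours; the function name and the convention of storing the even-rule value at position \(2j-1\) are hypothetical choices:

function g = trwt55_step(f)
% One step of S_{3,trwt^{5.5}} applied to periodic data; f is a column vector.
% Save as trwt55_step.m.
N  = numel(f);  g = zeros(2*N, 1);
ix = @(j) mod(j - 1, N) + 1;       % 1-based circular indexing
for j = 1:N
    g(2*j-1) = -505/8762*(f(ix(j-2)) + f(ix(j+2))) ...
             +  311/1349*(f(ix(j-1)) + f(ix(j+1))) + 367/561*f(ix(j));
    g(2*j)   = -29/3601*(f(ix(j-2)) + f(ix(j+3))) ...
             -  352/9181*(f(ix(j-1)) + f(ix(j+2))) ...
             + 3492/6391*(f(ix(j)) + f(ix(j+1)));
end
end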

The next two sections are devoted to studying the approximation and noise-suppression capabilities depending on the chosen weight function.

7 Approximation Capability

To study the approximation capability, we consider the subdivision scheme \(S_{d,\mathbf {w^\lambda }}\) defined in (7) with \(d\ge 0\) and \(\lambda \) satisfying the conditions requested in Proposition 3.1. We know that it has approximation order \(d+1\) since it reproduces polynomials of degree d, regardless of the choice of the weight function \(\omega \). In this section, we will study the influence of the weight function on the approximation capability of the scheme.

Let \(F \in \mathcal {C}^{d+2}\) and consider the initial data \(\textbf{f}^h=\{f^{h}_j\}_{j\in \mathbb {Z}}\) with \(h>0\) and

$$\begin{aligned} f^{h}_{j}=F\left( j h\right) ,\quad j\in \mathbb {Z}. \end{aligned}$$

Let \(j_0\in \mathbb {Z}\) be any integer. We calculate the approximation error between \((S_{d,\mathbf {w^\lambda }}\textbf{f}^h)_{2j_0+i}\) and \(F((j_0+i/2)h)\), with \(i=0,1\), and analyse the largest contribution term. By Taylor's theorem, for \(i=0,1\) there exists \(p_{i}\in \Pi _d\) such that:

$$\begin{aligned} f^{h}_{j}=F(jh)=p_i(jh)+\frac{F^{(d+1)}((j_0+i/2)h)}{(d+1)!}(j-(j_0+i/2))^{d+1}h^{d+1}+\mathcal {O}(h^{d+2}). \end{aligned}$$

Applying the subdivision operator and considering its polynomial reproduction capability,

$$\begin{aligned}&(S_{d,\mathbf {w^\lambda }}\textbf{f}^h)_{2j_0+i}\\&\quad =\sum _{l=1-n}^{L_n+i} a^{i}_{l} f^h_{j_0+l}\\&\quad =\sum _{l=1-n}^{L_n+i} a^{i}_{l}\left( p_i((j_0+l)h)+\frac{F^{(d+1)}((j_0+i/2)h)}{(d+1)!}(l-i/2)^{d+1}h^{d+1}+\mathcal {O}(h^{d+2})\right) \\&\quad =\sum _{l=1-n}^{L_n+i} a^{i}_{l}p_i((j_0+l)h)+\frac{F^{(d+1)}((j_0+i/2)h)}{(d+1)!}h^{d+1}\sum _{l=1-n}^{L_n+i} a^{i}_{l}(l-i/2)^{d+1}+\mathcal {O}(n h^{d+2})\\&\quad =p_i((j_0+i/2)h)+\frac{F^{(d+1)}((j_0+i/2)h)}{(d+1)!}h^{d+1}\sum _{l=1-n}^{L_n+i} a^{i}_{l}(l-i/2)^{d+1}+\mathcal {O}(n h^{d+2})\\&\quad =F((j_0+i/2)h)+\frac{F^{(d+1)}((j_0+i/2)h)}{(d+1)!}h^{d+1}\sum _{l=1-n}^{L_n+i} a^{i}_{l}(l-i/2)^{d+1}+\mathcal {O}(n h^{d+2}). \end{aligned}$$

Therefore, the largest contribution to the approximation error is given by

$$\begin{aligned} \frac{F^{(d+1)}((j_0+i/2)h)}{(d+1)!}h^{d+1}\sum _{l=1-n}^{L_n+i} a^{i}_{l} (l-i/2)^{d+1}. \end{aligned}$$

We conclude that if two linear schemes with the same approximation order are given, then the scheme with a smaller value of

$$\begin{aligned} \eta = \max \left\{ \sum _{l=1-n}^{L_n} a^0_l l^{d+1},\sum _{l=1-n}^{L_n+1} a^1_l \left( l - \frac{1}{2}\right) ^{d+1}\right\} \end{aligned}$$

provides better approximators, in general. We observe that, if \(a^i_l = n^{-1}H(l/n) + \mathcal {O}(n^{-2})\approx n^{-1}H(l/n)\), for some function H, \(i=0,1\), (in that case, \(H(t):= \lim _{n\rightarrow \infty } n a^i_{t n}\)), then

$$\begin{aligned} \begin{aligned} \sum _{l=1-n}^{L_n+i} a^i_l (l - i/2)^{d+1} =&n^{-1} \sum _{l=1-n}^{L_n+i} H(l/n) (l - i/2)^{d+1}= n^{d} \sum _{l=1-n}^{L_n+i} H(l/n) \left( l/n - \frac{i}{2n}\right) ^{d+1}\\ =&n^{d+1} \int _{-1}^1 t^{d+1} H(t) dt+ \mathcal {O}(n^{d}). \end{aligned} \end{aligned}$$

Since the proposed schemes are odd-symmetric, \(H(t)=H(-t)\), so \(\int _{-1}^1 t^{d+1} H(t) dt = 2I_{d+1}(H)\), and the approximation error is given by

$$\begin{aligned} 2h^{d+1} n^{d+1}I_{d+1}(H)\frac{F^{(d+1)}\left( (j_0+i/2)h\right) }{(d+1)!} + \mathcal {O}(n h^{d+2})+ \mathcal {O}(n^{d}h^{d+1}), \end{aligned}$$
(39)

which increases with hn and \(I_{d+1}(H)\). We will test this formula in Sect. 9.2.

Now, we explore how the selection of \(\phi \) influences H, with the aim of determining which \(\phi \) is the best from an approximation point of view.

Theorem 7.1

The approximation error produced by \(S_{d,\textbf{w}^\lambda }\) is, asymptotically (\(\lambda \propto n \rightarrow \infty , h\rightarrow 0\)), proportional to \(2 I_{d+1}(H)\). In particular,

$$\begin{aligned} 2 I_{d+1}(H) = {\left\{ \begin{array}{ll} \frac{I_2(\phi )}{I_0(\phi )}, &{} d=0,1, \\ -\frac{I_2(\phi )I_6(\phi ) - I_4(\phi )^2}{I_0(\phi )I_4(\phi )-I_2(\phi )^2}, &{} d=2,3. \end{array}\right. } \end{aligned}$$

Proof

According to (39), we know that the approximation error is proportional to \(2 I_{d+1}(H)\). For \(d=0,1\), we can compute H from the expression of \(\textbf{a}^0,\textbf{a}^1\) in (17) and (18). For instance, for \(2n-1<\lambda <2n\) (the other case is analogous),

$$\begin{aligned} H(t)= & {} \lim _{n\rightarrow \infty } n a^i_{t n} = \lim _{n\rightarrow \infty } n \frac{\phi (|2 t n+i|/\lambda )}{\sum _{j=1-n}^{L_n}\phi (|2j+i|/\lambda )} \\= & {} \lim _{n\rightarrow \infty } n \frac{\phi (|2 t n+i|/\lambda )}{2 n \int _0^1 \phi (t) dt + \mathcal {O}(1)} = \frac{\phi (|t|)}{2I_0(\phi )}. \end{aligned}$$

Hence, \(2I_2(H)=I_2(\phi )/I_0(\phi )\). For \(d=2,3\), using the results in Sect. 6.2:

$$\begin{aligned} H(t) = \phi (|t|)\frac{1}{2}\frac{I_4(\phi )-t^2 I_2(\phi )}{I_0(\phi )I_4(\phi )-I_2(\phi )^2}. \end{aligned}$$

Then, \(2I_4(H)=-(I_2(\phi )I_6(\phi ) - I_4(\phi )^2)/(I_0(\phi )I_4(\phi )-I_2(\phi )^2)\). \(\square \)

From the last result, we conclude that, although for a given polynomial degree d the approximation order is \(d+1\) for any choice of \(\lambda \) and \(\omega \), it is convenient to minimize \(\left| \frac{I_2(\phi )}{I_0(\phi )}\right| \) for \(d=0,1\) and \(\left| \frac{I_2(\phi )I_6(\phi ) - I_4(\phi )^2}{I_0(\phi )I_4(\phi )-I_2(\phi )^2}\right| \) for \(d=2,3\) in order to reduce the approximation error, at least for large values of \(\lambda \).

For \(d=0,1\), we see in Table 4 that the smallest values are reached for \(\phi (x) = e^{-\xi x}\) with large \(\xi \) and for \(\phi (x) = (1-x^p)^q\) with large q or small p. We add for comparison \(\Vert H\Vert _2^2\), which, according to Sect. 8, measures the noise reduction capability: the smaller it is, the greater the noise reduction. For any scheme, we can see that the greater the approximation capability, the smaller the noise reduction capability. In conclusion, approximation and noise reduction are incompatible, in this sense, and some equilibrium must be found. This is further discussed in Sect. 8.1.

The same conclusion is obtained for the case \(d=2,3\) from Table 5: the smallest values are reached for \(\phi (x) = e^{-\xi x}\) with large \(\xi \) and for \(\phi (x) = (1-x^p)^q\) with large q or small p. A great approximation power implies a low noise reduction capability, which will be studied in Sect. 8.1.

8 Noise Reduction

In this section, we study the application of a subdivision operator to purely noisy data, \(S_\textbf{a}\varvec{\epsilon }\), where all the values \(\epsilon _j\) follow a random distribution E and are mutually uncorrelated. The results of this study can be applied to any data contaminated with noise, due to Remark 1.

A direct result is that

$$\begin{aligned} \Vert S_\textbf{a}\varvec{\epsilon }\Vert _\infty \le \Vert S_\textbf{a}\Vert _\infty \Vert \varvec{\epsilon }\Vert _\infty . \end{aligned}$$

Since \(\Vert S_\textbf{a}\Vert _\infty \ge 1\) for any convergent scheme, the best condition is reached for \(d=0,1\), for which \(\Vert S_{d,\textbf{w}^\lambda }\Vert _\infty = 1\), since the mask is positive. Hence, it cannot be concluded from this formula that the noise is reduced.

Table 4 Computations of H, \(2|I_2(H)| = |\int _{-1}^1 t^2\,H(t) dt|\) and \(\Vert H\Vert _2^2\) for several choices of \(\phi \), \(d=0,1\)
Table 5 Computations of H, \(2|I_4(H) | = |\int _{-1}^1 t^4\,H(t) dt|\) and \(\Vert H\Vert _2^2\) for several choices of \(\phi \), \(d=2,3\)

To reveal the denoising capabilities, a basic statistical analysis can be carried out. If the variance of the refined data is smaller than the variance of the given data, \(\mathop {\textrm{var}}\limits (E)\), it indicates a reduction of randomness. Using that

$$\begin{aligned} \mathop {\textrm{var}}\limits (\alpha X + \beta Y) = \alpha ^2 \mathop {\textrm{var}}\limits (X) + \beta ^2 \mathop {\textrm{var}}\limits (Y), \qquad \alpha ,\beta \in \mathbb {R}, \end{aligned}$$

provided that X, Y are two uncorrelated random variables, the variance after one subdivision step is

$$\begin{aligned} \mathop {\textrm{var}}\limits \left( \sum _{l\in \mathbb {Z}} a_{2l+i}\epsilon _{l}\right) = \sum _{l\in \mathbb {Z}} a_{2l+i}^2\mathop {\textrm{var}}\limits \left( E\right) = \Vert \textbf{a}^i\Vert _ 2^2\mathop {\textrm{var}}\limits \left( E\right) , \quad i=0,1. \end{aligned}$$

Hence, the variance reduction is given by

$$\begin{aligned} \textsc {VR}_\textbf{a}:= \max \{\Vert \textbf{a}^0\Vert _2^2,\Vert \textbf{a}^1\Vert _2^2\}. \end{aligned}$$

For some schemes studied in this work, this quantity can be computed explicitly. For \(d=0,1\), if \(2n-1<\lambda <2n\),

$$\begin{aligned} \textsc {VR}_{1,\mathbf {w^\lambda }}= \max \left\{ \sum _{l=-n+1}^{n-1} \left( \frac{w^\lambda _{2l}}{||\mathbf {w^\lambda _0}||_1}\right) ^2, \ \sum _{l=-n+1}^{n}\left( \frac{w_{2l-1}^\lambda }{||\mathbf {w^\lambda _1}||_1}\right) ^2 \right\} < 1. \end{aligned}$$

The last quantity is less than one due to the reproduction of constants and the positivity of the coefficients. If \(\phi (x) = 1\), then \(\textsc {VR}_{1,\texttt {rect}^\lambda } = \lfloor \lambda \rfloor ^{-1} = (2n-1)^{-1}\), which is the lowest value that can be obtained with a rule of this length. For \(d=2,3\), \(\phi (x)=1\) and \(2n-1<\lambda <2n\),

$$\begin{aligned} \textsc {VR}_{3,\texttt {rect}^\lambda } = \frac{9 n^2-9 n-3}{8 n^3-12 n^2-2 n+3} > (2n-1)^{-1}, \qquad \forall n\ge 2, \end{aligned}$$
(40)

whose maximum, which is 1, is achieved for \(n=2\) (i.e. \(3<\lambda <4\), corresponding to the interpolatory DD4 scheme).

Observe that \(\textsc {VR}_{3,\texttt {rect}^\lambda } = \mathcal {O}(n^{-1})\). This means that the noise tends to be completely removed when the mask support tends to \(\infty \).
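As a sanity check, (40) can be compared with the direct computation of \(\max \{\Vert \textbf{a}^0\Vert _2^2,\Vert \textbf{a}^1\Vert _2^2\}\) from the masks (36); a minimal MATLAB sketch (ours):

% Compare the explicit formula (40) with the definition of VR.
for n = 2:10
    j0 = (1-n):(n-1);  j1 = (1-n):n;
    a0 = -3*(5*j0.^2 - 3*n^2 + 3*n + 1)/(8*n^3 - 12*n^2 - 2*n + 3);
    a1 = (15*(j1 - 1).*j1 - 9*n^2 + 9)/(8*n - 8*n^3);
    VR = max(sum(a0.^2), sum(a1.^2));
    fprintf('n = %2d  VR = %.6f  (40): %.6f\n', n, VR, ...
            (9*n^2 - 9*n - 3)/(8*n^3 - 12*n^2 - 2*n + 3))
end

Both columns agree; for n = 2 they return 1, the DD4 value mentioned above.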

For any choice of \(\phi (x)\), an asymptotic result can be given for the noise reduction using an argument similar to that of Sect. 7: if \(a^i_l = n^{-1}H(l/n) + \mathcal {O}(n^{-2})\), for some function H, \(i=0,1\), then

$$\begin{aligned} \Vert \textbf{a}^i\Vert _2^2 = n^{-2}\sum _{l=1-n}^{L_n+i} H(l/n)^2 + \mathcal {O}(n^{-2}) = n^{-1}\int _{-1}^1 H(t)^2 dt + \mathcal {O}(n^{-2}), \end{aligned}$$

so that the noise reduction factor behaves asymptotically as

$$\begin{aligned} \textsc {VR}_\textbf{a}= n^{-1} \Vert H\Vert _2^2 + \mathcal {O}(n^{-2}). \end{aligned}$$

Under these assumptions, we observe that the noise tends to be removed after one iteration when \(n\rightarrow \infty \). In Tables 4 and 5, we compute \(H(t):= \lim _{n\rightarrow \infty } n a^i_{t n}\) and the factor \(\Vert H\Vert _2^2\) for several \(\phi \) functions and \(d=0,1,2,3\).

8.1 An Equilibrium Between Approximating and Denoising

We have seen that, in order to maximize the approximation and denoising capabilities, the values \(I_4(H)\) and \(\Vert H\Vert _2\) should be minimized. This is a multi-objective minimization problem, whose solutions form a Pareto front that we have estimated using the MATLAB optimization toolbox. Here we only consider the case \(d=2,3\), but a similar analysis can be performed for \(d=0,1\).

First, observe Fig.  3-left. We find that \(\phi (x) = (1-x^p)^q\) is always more convenient than \(\phi (x) = e^{-\xi x}\), meaning that for each value of \(\xi \) there exists some pair (p, q) for which \(\phi (x) = (1-x^p)^q\) approximates and denoises better than \(\phi (x) = e^{-\xi x}\). It can also be affirmed that \(\phi (x)=1\) is on the Pareto front and is the best for noise reduction and the worst for approximating. In the other extreme would be an interpolatory scheme, with the best approximation capability but the worst denoising power.

The Pareto-optimal values (p, q) for \(\phi (x) = (1-x^p)^q\) form a curve (see Fig.  3-right) which seems to pass through the integer pairs (2, 1) and (4, 5).
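The two objectives can be evaluated for any \(\phi \) by numerical quadrature, using the expression of H from Sect. 6.2 for \(d=2,3\); the following MATLAB sketch (ours) reproduces, for instance, the pair \((3/35, 9/8)\) for rect:

phi = @(x) ones(size(x));          % rect; try @(x) 1 - x.^2 (epan), etc.
Ik  = @(k) integral(@(x) phi(x).*x.^k, 0, 1);
I0 = Ik(0);  I2 = Ik(2);  I4 = Ik(4);
H  = @(t) phi(abs(t)).*(I4 - t.^2*I2)/(2*(I0*I4 - I2^2));
approxObj = abs(integral(@(t) t.^4.*H(t), -1, 1))   % 2|I_4(H)|, 3/35 for rect
noiseObj  = integral(@(t) H(t).^2, -1, 1)           % ||H||_2^2, 9/8 for rect

Sweeping \(\phi \) over a grid of (p, q) values and keeping the non-dominated pairs yields an estimate of the Pareto front of Fig.  3.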

Fig. 3
figure 3

Left, the pairs of values \((2I_4(H),\Vert H\Vert _2^2)\) for several choices of \(\phi \); thus, lower x- and y-axis values indicate better approximation and denoising capabilities, respectively. Blue, \(\phi (x) = (1-x^p)^q\) for several (p, q) pairs such that \(1\le p \le 20\), \(\frac{1}{2}\le q \le 20\); red, the Pareto front of the previous pairs; green, \(\phi (x) = \exp (-\xi x)\) for \(\frac{1}{2}\le \xi \le 10\). Right, the red line represents the pairs of values (p, q) for which \(\phi (x) = (1-x^p)^q\) is Pareto-optimal

In conclusion, we recommend the use of rect to obtain the best denoising. However, with epan the noise increases by \(11.11 \%\) while the approximation error is reduced by \(44.44 \%\) compared to rect. If the approximation is to be prioritized, \(\phi (x) = (1-x^4)^5\) is a good choice, since the noise increases by \(31.58 \%\) while the approximation error is reduced by \(71.43 \%\), compared to rect. The rest of the (p, q) values related to Table 1 are nearly optimal and can be used as well for other approximation-denoising balances. We recommend never using exp(\(\xi \)).

We mention that similar conclusions can be obtained for \(d=0,1\). For those polynomial degrees, the weight functions \(\phi (x) = \exp (-\xi x)\) are also worse than \(\phi (x) = (1-x^p)^q\). The weight function epan is still Pareto-optimal, but the pair \((p,q)=(4,5)\) is not.

9 Numerical Experiments

In this section, we provide several numerical examples to illustrate the functionality of the new schemes in generating curves. All experiments were conducted using MATLAB R2022a, and the code for the subdivision schemes is accessible on open repositories (refer to Section 11).

We check that the subdivision schemes are convergent for \(d=0,1,2,3\) and that the curves present \(\mathcal {C}^1\) smoothness (but not \(\mathcal {G}^1\), meaning that kinks can be produced). We analyse the approximating and denoising capabilities to numerically validate the results in Sects. 7 and 8. Only for \(d=0,1\), we test the preservation of monotonicity by applying the schemes to non-decreasing initial data. Finally, we perform a numerical test using the discretization of a discontinuous function and observe that the proposed methods avoid the Gibbs phenomenon in the neighbourhood of an isolated discontinuity for \(d=0,1\).

Fig. 4
figure 4

Several subdivision schemes (by columns) applied to the star-shaped data in (41). In the first row, they are applied to the original data. In the second and third row, the data is contaminated by normal noise with \(\sigma =0.5\) and \(\sigma =1\), respectively

9.1 Application to Noisy Geometric Data

We start with one of the experiments presented in [16], which consists of a star-shaped curve given by:

$$\begin{aligned} F(t) = (4\cos (t)+\cos (4t), 4\sin (t)-\sin (4t)), \end{aligned}$$
(41)

with samples taken at \(t^0_j=j\pi /25\), \(j\in \mathbb {Z}\). That is, we consider \(\textbf{f}^0:= F|_{{{\textbf{t}}}^0}\), \(\textbf{t}^0 =\{t^0_j\}_{j\in \mathbb {Z}}\), i.e. \(f^0_j = F(t^0_j)\). Because of the periodicity of the function, we can focus on \(j=0,\ldots , 49\). We add Gaussian noise in each component, defining \(\tilde{\textbf{f}}^0=\textbf{f}^0+\varvec{\epsilon }^\sigma \) with \(\varvec{\epsilon }^\sigma =\{(\epsilon ^{\sigma ,1}_j,\epsilon ^{\sigma ,2}_j)\}_{j=0}^{49}\), being \(\epsilon ^{\sigma ,1}_j,\epsilon ^{\sigma ,2}_j\sim \mathcal {N}(0,\sigma )\), \(j=0,\ldots ,49\), and \(\sigma \in \{0.5,1\}\). In Fig.  4, we illustrate the results only for two interesting choices of \(\phi \), according to the conclusions in Sect. 8.1. Nevertheless, the results obtained with the rest of the weight functions are graphically similar, and they are shown in detail in Table 6.
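As an illustration of how such an experiment can be set up, the following MATLAB sketch (ours) applies five iterations of the \(S_{3,\texttt{trwt}^{5.5}}\) step sketched in Sect. 6 (the hypothetical helper trwt55_step) to the noisy star data:

t  = (0:49)'*pi/25;
P  = [4*cos(t) + cos(4*t), 4*sin(t) - sin(4*t)];   % samples of (41)
Pn = P + 0.5*randn(size(P));                       % Gaussian noise, sigma = 0.5
for k = 1:5
    Pn = [trwt55_step(Pn(:,1)), trwt55_step(Pn(:,2))];
end
plot(P(:,1), P(:,2), 'bo', Pn(:,1), Pn(:,2), 'k-'), axis equal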

Without noise, the smaller \(\lambda \) is, the more accurate the results are, for any \(\phi \) and d. We measure the approximation error as

$$\begin{aligned} \Vert S^5 \textbf{f}^0 - F|_{{{\textbf{t}}}^5}\Vert _\infty =\max _{j\in \mathbb {Z}}\{\Vert (S^5 \textbf{f}^0)_j - F(t^5_j)\Vert _2\}, \end{aligned}$$

where \(t^5_j:= 2^{-5}t^0_j\), \(j\in \mathbb {Z}\). As expected, we can see in Table 6 that the approximation error is always smaller for \(d=2,3\) than for \(d=0,1\). Moreover, if we sort the weight functions by the approximation error, for any \(0\le d \le 3\), they appear in exactly the same order as if we sort them by the theoretical approximation power in Tables 4 and 5. Nevertheless, it has to be taken into account that the results in Tables 4 and 5 are of an asymptotic nature, for \(h\rightarrow 0\). These behaviours are also visible in the first row of Fig.  4.

We measure the noise reduction capability of the schemes with the quantities \(\Vert S^5\varvec{\epsilon }^{0.5}\Vert _\infty \) and \(\Vert S^5\varvec{\epsilon }^{1}\Vert _\infty \) in Table 6. The schemes showing results closer to zero are those with higher denoising capacity. In general, the noise is reduced more when \(\lambda \) is larger or d is smaller. Comparing the ordering of the weight functions by their theoretical denoising capability, according to Tables 4 and 5, with the ordering by the numbers in Table 6, we see that both orderings are, in general, the same. Only in some particular cases is this ordering slightly changed. The reason may be that the results in Tables 4 and 5 are asymptotic, for \(\lambda \rightarrow \infty \). Moreover, the reduction is in terms of the variance of the statistical distribution; hence, the same experiment should be repeated many times and the results averaged in order to obtain a more consistent comparison.

In Fig.  4, we can see how important the choice of the weight function is to increase the approximation capability (while losing only a bit of denoising capability). In turn, taking \(d=2,3\) gives better approximations, and \(\lambda \) can also be increased to reduce noise.

Of course, during our study we generated many more graphics than the ones presented here. In some of them, especially in the presence of noise, artifacts such as self-intersections or kinks may appear, showing that the schemes do not provide \(\mathcal {G}^1\) curves, even if they are \(\mathcal {C}^1\). By taking \(\lambda \) larger, the artifacts usually disappear and the curves become smoother.

Table 6 Analysis of the approximation and denoising capabilities for the different subdivision schemes with \(d=0,1,2,3\) and \(\lambda =3.7, 5.8, 9.5\) and 15.5

9.2 Approximation Error When \(\lambda \) is Being Increased

In this section we challenge formula (39) with a suitable experiment. Let us consider \(G(x) = \cos (\pi x)\) and the initial data \(\textbf{g}^{0,h}=\{g^{0,h}_j\}_{j\in \mathbb {Z}}\) and \(\widetilde{\textbf{g}}^{0,h}=\{\widetilde{g}^{0,h}_j\}_{j\in \mathbb {Z}}\) with \( g^{0,h}_j = G(j h), \) \( \widetilde{g}^{0,h}_j = g^{0,h}_j + \epsilon _j, \) \( \epsilon _j\sim U\left( \left[ -\frac{1}{4},\frac{1}{4}\right] \right) , \) where U(I) is the uniform distribution in the interval I. We consider the spacings \(h_k=10^{-k}\) and the support parameters \(\lambda _k = 3.5 + 10^{k-1} = 3.5 + 0.1/h_k\), \(k=1,2,3,4\). The value \(\lambda _k\) is modified according to \(h_k\) to keep almost constant the support of the basic limit function, which determines the influence of each data point on the limit function. The results of applying 5 iterations of the scheme \(S_{3,\texttt {rect}^{\lambda _k}}\) to \(\widetilde{\textbf{g}}^{0,h_k}\), for \(k=1,2,3,4\), are shown in Fig.  5. On the one hand, it shows how the noise after five iterations tends to 0 when the stencil length tends to infinity, \(\lambda _k\rightarrow \infty \), but slowly, since the variance decay speed is \(\mathcal {O}(\lambda _k^{-1})\) according to (40). On the other hand, the approximation error does not decay to zero, as can be observed in Table 7, where the numbers are never smaller than (and seem to tend to) the asymptotic error estimation in (39), which is (for \(j=0\), \(i=0\))

$$\begin{aligned}{} & {} \left| 2I_4(H)\frac{G^{(4)}(0)}{4!} h_k^4 n_k^{4} \right| \\{} & {} \quad = \frac{3}{35}\frac{|G^{(4)}(0)|}{24} h_k^{4} (3+0.1/h_k)^{4} \overset{k\rightarrow +\infty }{\longrightarrow }\ \frac{\pi ^{4}}{24} \cdot 0.1^{4}\cdot \frac{3}{35} \simeq \text {3.4789e-05}. \end{aligned}$$

This threshold is not a real constraint in practice, since the noise is usually greater than the approximation error (see the first row of Table 7). If an approximation error tending to zero is needed, \(\lambda _k \propto h_k^{-\frac{1}{2}}\) can be chosen, for instance.

Table 7 The approximation error at \(x=0\) after five iterations of \(S_{3,\texttt {rect}^{\lambda _k}}\) applied to data with and without noise
Fig. 5
figure 5

Five iterations of \(S_{3,\texttt {rect}^{\lambda _k}}\) applied to \(\widetilde{\textbf{g}}^{0,h_k}\), for \(k=1,2,3,4\). The blue circles are the initial data, the red line is the smooth function G and the black line represents the limit function. The parameters for each graphic are (by rows): \(h_1=10^{-1}\), \(\lambda _1 = 4.5\); \(h_2=10^{-2}\), \(\lambda _2 = 13.5\); \(h_3=10^{-3}\), \(\lambda _3 = 103.5\); \(h_4=10^{-4}\), \(\lambda _4 = 1003.5\)

9.3 Avoiding Gibbs Phenomenon

In this section we confirm that the subdivision schemes based on weighted least squares with \(d=0,1\) avoid the Gibbs phenomenon, as stated in Corollary 4.9. To study this property, we propose the following experiment. We discretize the function:

$$\begin{aligned} f(x)=\left\{ \begin{array}{ll} \sin (\pi x), &{} x\in [0,0.5]; \\ -\sin (\pi x), &{} x\in (0.5,1], \end{array} \right. \end{aligned}$$

in the interval [0, 1] with 33 equidistant points, \(x_i=i\cdot h\), \(i=0,\ldots ,32\), \(h=\frac{1}{32}\), and apply the subdivision schemes. We show the results in Fig.  6. The absence of the Gibbs phenomenon around the discontinuity is evident; it is replaced instead by a diffusion influenced by the support length (a larger support leads to more diffusion) and by the weight function. Specifically, we observe that the rect function introduces more diffusion than trwt. This aligns with expectations, as the rect function disperses the influence of the data across the whole stencil when computing each new point.
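A sketch of this experiment for \(S_{1,\texttt{rect}^{5.5}}\) follows (our own MATLAB illustration; the clamping of the stencil indices to [1, N] at the boundaries is a hypothetical choice, since boundary rules are not specified here):

lambda = 5.5;
x = (0:32)'/32;
g = sin(pi*x).*(x <= 0.5) - sin(pi*x).*(x > 0.5);         % samples of f
for k = 1:5
    N = numel(g);  gn = zeros(2*N, 1);
    for i = 0:1                                           % i = 0: even rule; i = 1: odd rule
        l = ceil((-lambda - i)/2):floor((lambda - i)/2);  % offsets with |2l+i| < lambda
        a = ones(size(l))/numel(l);                       % rect: uniform weights (Sect. 8)
        for j = 1:N
            gn(2*j - 1 + i) = a * g(min(max(j + l, 1), N));
        end
    end
    g = gn;
end
plot(linspace(0, 1, numel(g)), g, 'k-')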

Fig. 6
figure 6

Limit functions for discontinuous data using subdivision schemes with rect (red line) and trwt (black line) weight functions, \(d=0,1\)

9.4 Monotonicity

Finally, we introduce the last example in order to see numerically that the new family of schemes preserves the monotonicity of the data for \(d=0,1\), as proved in Corollary 4.8. We apply \(S_{1,\texttt {rect}}\) and \(S_{1,\texttt {trwt}}\) to the data collected in Table 8 (see [3]) and obtain Fig.  7. Similar observations to those in Sect. 9.3 are drawn from the results: an increase in the value of \(\lambda \) leads to more noticeable diffusion, and the weight function also impacts the width of the diffusion.

Table 8 Staircase data
Fig. 7
figure 7

Limit functions for monotone data using subdivision schemes with rect (red line) and trwt (black line) weight functions, \(d=0,1\)

10 Conclusions and Future Work

In this work, a family of subdivision schemes based on weighted local polynomial regression has been analysed. We introduced the general form of this type of schemes and proved that the schemes corresponding to the polynomial degrees \(d=2k\) and \(d=2k+1\) coincide, for \(k=0,1,2,\ldots \) In particular, we analysed in detail the cases \(d=0,1,2,3\) with positive weight functions, \(\omega \), with compact support.

In the first part of the paper, for \(d=0,1\), we took advantage of the positivity of the mask to prove the convergence of the schemes. Also, under some conditions on the \(\omega \) functions, the \(\mathcal {C}^1\) regularity of the limit function was demonstrated. Afterwards, some properties were proved, such as monotonicity preservation and the absence of the Gibbs phenomenon. In the second part, we developed a general technique to analyse the convergence of a family of linear schemes and used it in the case \(d=2,3\).

The last sections have been dedicated to discussing noise removal and approximation capabilities. We showed how the weight function \(\phi \) determines these properties and that it is not possible to find a \(\phi \) that maximizes both approximation and noise reduction capabilities. This led to a multi-objective optimization problem in which optimal solutions were found along a Pareto front. Some numerical tests were presented to confirm the theoretical results.

For future work, we can consider the following ideas: the \(\mathcal {C}^1\) regularity of the cases \(d=2,3\) was not proven; new theoretical tools such as those presented in Sect. 5 could be developed and applied to these schemes.

We considered several weight functions \(\phi \) from the literature. Now that we understand the impact of \(\phi \) on both approximation and denoising capabilities, we can design \(\phi \) with the aim of improving them. Taking into account that the noise contribution is usually greater than the approximation error on the final curve, the use of an optimized weight function can be even more interesting than increasing the polynomial degree, since some properties related to monotonicity and the Gibbs phenomenon are only available for \(d=0,1\).

If the data present some outliers, a different loss function can provide better results. Mustafa et al. [25] proposed a variation of Dyn's schemes, replacing the \(\ell ^2\)-norm with the \(\ell ^1\)-norm in the polynomial regression, but they did not prove its properties. The theoretical study of this scheme, as well as the use of different weight functions, can be considered in the future.