
High-Order Approximation of Set-Valued Functions


We introduce the notion of metric divided differences of set-valued functions. With this notion we obtain bounds on the error in set-valued metric polynomial interpolation. These error bounds lead to high-order approximations of set-valued functions by metric piecewise-polynomial interpolants of high degree. Moreover, we derive high-order approximation of set-valued functions by local metric approximation operators reproducing high-degree polynomials.






Author information

Authors and Affiliations


Corresponding author

Correspondence to Alona Mokhov.

Additional information

Communicated by Wolfgang Dahmen.



Appendix A: Proof of Lemma 5.12

For the readers’ convenience we recall here the notation and Lemma 5.12. Let

$$\begin{aligned} W=\{(x,y):\; a'\le x \le b',\, c \le y \le d \, \}, \quad \; a\le a' < b' \le b, \quad x^*\in (a',b'),\end{aligned}$$

and let \({g_i(x): [a', x^*]\rightarrow (c,d)}\), \(i=1,2\) be such that \(g_1(x^*)=g_2(x^*)\) and \(g_1(x)<g_2(x)\) for \(x \in [a', x^*)\). Assume that

$$\begin{aligned} \mathrm {Graph}(F) \bigcap W = W \setminus \left\{ (x,y) : \; x \in [a',x^*]\, , \, y \in \big ( g_1(x),g_2(x) \big ) \right\} . \end{aligned}$$

Note that by (13) the only boundaries of \(\mathrm {Graph}(F)\) in the interior of W are \(g_1(x)\) and \(g_2(x)\). In the left neighborhood of \(x^*\), \(F(x)\bigcap [c,d]\) consists of two disjoint intervals, while in the right neighborhood of \(x^*\), \(F(x)\bigcap [c,d] = [c,d]\) (see Fig. 3).

Lemma 5.12.

Under the above notation and condition (13), assume that F has bounded second divided differences in a neighborhood of \(x^*\).

(i) Then the function \({g(x)=g_2(x)-g_1(x)}\), \(x \in [a', x^*]\) has a vanishing left derivative at \(x=x^*\).

(ii) If in addition \(g_i(x)\), \(i=1,2\) are continuous, then for \(\tilde{x} \in (a',x^* -\varepsilon )\), with \(\varepsilon >0\) small enough, the second divided differences of \(g_i(x)\), \(i=1,2\) in a neighborhood of \(\tilde{x}\) are bounded.


See Fig. 5 for an illustration of the two claims.

Proof (i) Let \(x_1=x^*-h\), \(x_2=x^*+h\), \(\tilde{x}=x^*\) and \(\chi _2=\{x_1, x_2\}\), \({{\widetilde{\chi }}_2=\{\tilde{x}, x_1, x_2\}}\). Denote \(y^*=\frac{g_1(x_1)+g_2(x_1)}{2}\). In accordance with Definition 5.1 we define \(f \in \Upsilon ({\widetilde{\chi }}_2,F)\) by \({f(x_1)=g_2(x_1)}\), \({f(x_2)=y^*}\), \(f(x^*)=y^*\). It is easy to see that

$$\begin{aligned} \big ( f(x_1),f(x_2) \big ) \in \Pi ({F(x_1)},{F(x_2)}). \end{aligned}$$

Denote by \(p_1(\chi _2,f)(x)\) the linear polynomial interpolating the data \((x_1, f(x_1))\), \((x_2, f(x_2))\), then

$$\begin{aligned} (f(x^*), p_1(\chi _2,f)(x^*)) \in \Pi ({F(x^*)},{P_1(\chi _2,F)(x^*)}), \end{aligned}$$

since \(p_1(\chi _2,f)(x^*)\) is a projection of \(f(x^*)=y^*\) on \(P_1(\chi _2,F)(x^*)\). Thus \(f[x^*,x_1,x_2] \in F[x^*;x_1,x_2].\)

Next we compute \(f[x^*,x_1,x_2]\),

$$\begin{aligned} \begin{aligned} f[x^*,x_1,x_2]&=\left( \frac{f(x_2)-f(x^*)}{h} - \frac{f(x^*)-f(x_1)}{h} \right) \frac{1}{2h}\\&=\left( \frac{y^*-y^*}{h} - \frac{y^*-g_2(x_1)}{h} \right) \frac{1}{2h} \\&= \frac{g_2(x_1)-g_1(x_1)}{4h^2} = \frac{g(x_1)}{4h^2} = \frac{g(x_1)-g(x^*)}{h} \frac{1}{4h} \in F[x^*;x_1,x_2]. \end{aligned} \end{aligned}$$

By the assumptions and by (14) we have \(\left| \frac{g(x_1)}{4h^2} \right| =\left| \frac{g(x^*-h)}{4h^2} \right| \le |F[x^*;x_1,x_2]|\le M\) for h small enough and some \(M \in {\mathbb {R}}\).

Therefore \({ \lim _{h \rightarrow 0} \left| h\right| \left| \frac{g(x^*-h)}{h^2} \right| =0 }\). Since \(g(x^*)=0\) we get

$$\begin{aligned} \lim _{h \rightarrow 0} \left| h\right| \left| \frac{g(x^*)-g(x^*-h)}{h^2}\right| = \lim _{h \rightarrow 0} \left| \frac{g(x^*)-g(x^*-h)}{h} \right| =g'_{-}(x^*)=0. \end{aligned}$$
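The computation above can be checked numerically. The sketch below uses a hypothetical pair of boundary functions \(g_1, g_2\) meeting at \(x^*\) (not the paper's functions; any pair satisfying the assumptions would serve), and verifies that the second divided difference of \(f\) on the symmetric grid has absolute value \(g(x_1)/(4h^2)\):

```python
# Numeric check of the second divided difference of f on {x1, x*, x2},
# with hypothetical boundary functions g1, g2 meeting at x*.
x_star = 1.0
g1 = lambda x: (x - x_star) ** 2        # g1(x*) = g2(x*), g1 < g2 left of x*
g2 = lambda x: 3 * (x - x_star) ** 2
g = lambda x: g2(x) - g1(x)

h = 1e-3
x1, x2 = x_star - h, x_star + h
y_star = (g1(x1) + g2(x1)) / 2          # midpoint of the gap at x1

# f as in the proof: f(x1) = g2(x1), f(x*) = f(x2) = y*
dd = ((y_star - y_star) / h - (y_star - g2(x1)) / h) / (2 * h)

# |f[x*, x1, x2]| = g(x1) / (4 h^2), as used in the estimate that follows
assert abs(abs(dd) - g(x1) / (4 * h ** 2)) < 1e-9
```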
Fig. 5: Graph of F in violet, graph of \(P_1(\chi _2,F)\) restricted to \([x_1,x_2]\) in brown (Color figure online)

(ii) For \(\tilde{x} \in (a',x^*)\) let \(g(\tilde{x})=L>0\) and \(x_1=\tilde{x}-h_1\), \(x_2=\tilde{x}+h_2\), with \(h_i>0\), \(x_i \in (a', x^*)\), \(i=1,2\). Let \(h=\max \{h_1, h_2 \}\) and \(\chi _2=\{x_1, x_2\}\). It is enough to show that \(g_i[\tilde{x}, x_1, x_2] \in F[\tilde{x}; x_1, x_2]\), \(i=1,2\). We prove it for \(i=1\). The proof for \(i=2\) is similar.

According to Definition 5.1, we have to show that for h small enough

$$\begin{aligned} \big (g_1(x_1), g_1(x_2)\big ) \in \Pi ({F(x_1)},{F(x_2)}), \end{aligned}$$

and
$$\begin{aligned} \big ( g_1(\tilde{x}), p_1(\chi _2, g_1)(\tilde{x}) \big ) \in \Pi ({F(\tilde{x})},{ P_1(\chi _2, F)(\tilde{x})}). \end{aligned}$$

First we prove (16). Denote \(I=[a',x^*-\varepsilon ]\). By the continuity of \(g_i(x)\), \(i=1,2\) we have for h small enough

$$\begin{aligned} \left| g_i(x_j)-g_i(\tilde{x})\right| \le \omega _{I}\big ( {g_i},{h} \big ) < \frac{L}{4},\; i,j=1,2. \end{aligned}$$

Now, using the last inequality and the reverse triangle inequality we get

$$\begin{aligned} \left| g_2(x_2)-g_1(x_1)\right|&= \left| g_2(x_2)-g_2(\tilde{x}) + g_2(\tilde{x})-g_1(\tilde{x}) + g_1(\tilde{x}) -g_1(x_1) \right| \\&\ge -\left| g_2(x_2)-g_2(\tilde{x})\right| + \left| g_2(\tilde{x})-g_1(\tilde{x})\right| - \left| g_1(\tilde{x}) -g_1(x_1)\right| \\&\ge L-\omega _{I}\big ( {g_1},{h} \big )-\omega _{I}\big ( {g_2},{h} \big ) > \frac{L}{2}. \end{aligned}$$
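The modulus of continuity \(\omega _{I}(g,h)\) used in these estimates can be approximated brute-force on a grid; the following sketch, with a hypothetical continuous \(g\) (not one of the paper's boundary functions), illustrates the bound \(|g(x)-g(y)|\le \omega _I(g,h)\) for \(|x-y|\le h\) and the monotonicity of \(\omega\) in \(h\):

```python
# Brute-force grid approximation of the modulus of continuity
# omega_I(g, h) = sup { |g(x) - g(y)| : x, y in I, |x - y| <= h }.
def omega(g, lo, hi, h, n=401):
    pts = [lo + i * (hi - lo) / (n - 1) for i in range(n)]
    return max(abs(g(x) - g(y)) for x in pts for y in pts if abs(x - y) <= h)

g = lambda x: x ** 2                  # hypothetical continuous g on I = [0, 1]
w = omega(g, 0.0, 1.0, 0.1)

# |g(x) - g(y)| <= omega_I(g, h) whenever |x - y| <= h
assert abs(g(0.45) - g(0.5)) <= w + 1e-12
# omega_I(g, h) is non-decreasing in h
assert w <= omega(g, 0.0, 1.0, 0.2) + 1e-12
```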

Denote \(\lambda =\frac{h_2}{h_1+h_2}\), \(y_i=p_1(\chi _2,g_i)(\tilde{x})\), \(i=1,2\). By Lemma 3.2 and by (17) we get

$$\begin{aligned} \left| g_1(\tilde{x})-y_1\right| \le \omega _{I}\big ( {g_1},{h} \big ) < \frac{L}{4}. \end{aligned}$$

On the other hand

$$\begin{aligned} \begin{aligned} \left| g_1(\tilde{x})-y_2\right|&= \left| g_1(\tilde{x}) - g_2(\tilde{x}) +g_2(\tilde{x}) - y_2\right| \\&\ge \left| g_1(\tilde{x})-g_2(\tilde{x})\right| - \lambda \left| g_2(\tilde{x})- g_2(x_1)\right| - (1-\lambda )\left| g_2(\tilde{x})-g_2(x_2)\right| \\&\ge L - \omega _{I}\big ( {g_2},{h} \big ) > \frac{3L}{4}, \end{aligned} \end{aligned}$$

and similarly

$$\begin{aligned} \left| g_2(\tilde{x})-y_1\right| \ge L - \omega _{I}\big ( {g_1},{h} \big ) > \frac{3L}{4}. \end{aligned}$$

Now, consider four cases:  (i) \(y_1 < g_1(\tilde{x})\), (ii) \(y_1=g_1(\tilde{x})\), (iii) \(g_1(\tilde{x})< y_1 < g_2(\tilde{x})\) and (iv) \( y_1 \ge g_2(\tilde{x})\).

The second case gives (16) immediately, while the fourth case \( y_1 \ge g_2(\tilde{x})\) is impossible for h small enough by (18) and (20).

In the first case \(g_1(\tilde{x}) \notin P_1(\chi _2,F)(\tilde{x})\). Therefore \(\Pi _{P_1(\chi _2,F)(\tilde{x})}{(g_1(\tilde{x}))} \in \partial P_1(\chi _2,F)(\tilde{x}) = \{y_1,y_2\}\). By (18) and (19), (16) follows for h small enough.

In the third case \(y_1 \notin F(\tilde{x})\). Therefore \(\Pi _{F(\tilde{x})}{(y_1)} \in \partial F(\tilde{x}) =\{g_1(\tilde{x}),g_2(\tilde{x})\}\). Again in view of (18) and (19) we conclude (16) for h small enough.

The proof of (15) is based on similar arguments as the proof of (16). \(\square \)

Appendix B: Verification of Example 5.13

We recall the definition of F. Let \(g_1(x), g_2(x): [a,b] \longrightarrow [0, \infty )\) be such that \(-g_1<g_2\), with \(g_i''\) continuous and non-positive and \(g_i' \ge 0\) in \([a,b]\), \(i=1,2\). Also let \({\tau (x): [a,c] \longrightarrow [0, \infty )}\), \(a<c<b\), with \(\tau ' <0\) and \(\tau ''\) positive and continuous in \([a,c]\). Moreover \({\tau ^{(i)}(c)=0}\), \(i=0,1,2\), and \({0< \tau (a) < \min \{ g_2(a), g_1(a) \} }\). The multifunction F is defined by

$$\begin{aligned} F(x) = \left\{ \begin{array}{ll} { [-g_1(x)\, ,-\tau (x)]\, \bigcup \, [\, \tau (x), g_2(x) \,] }\, , &{} x \in [a,c], \\ {[-g_1(x)\, , g_2(x) ]} \, , &{} x \in [c,b]. \\ \end{array} \right. \end{aligned}$$
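For concreteness, here is a minimal Python sketch of one admissible instance of \(F\); the particular \(g_1\), \(g_2\), \(\tau\) below (with \(a=0\), \(c=1\), \(b=2\)) are hypothetical choices satisfying the stated assumptions, not functions from the paper:

```python
# One admissible instance of the multifunction F (hypothetical data).
a, c, b = 0.0, 1.0, 2.0

g1 = lambda x: 1.0 + 0.5 * x                 # g1'' = 0 <= 0, g1' >= 0
g2 = lambda x: 1.5 + 0.5 * x                 # g2'' = 0 <= 0, g2' >= 0
tau = lambda x: 0.5 * (c - x) ** 3 if x <= c else 0.0
# tau' < 0 and tau'' > 0 on [a, c), tau(c) = tau'(c) = tau''(c) = 0,
# and 0 < tau(a) = 0.5 < min{g1(a), g2(a)} = 1.0

def F(x):
    """F(x) as a list of closed intervals (pairs of endpoints)."""
    if x < c:
        return [(-g1(x), -tau(x)), (tau(x), g2(x))]   # a hole around 0
    return [(-g1(x), g2(x))]                          # a single interval

assert F(0.0) == [(-1.0, -0.5), (0.5, 1.5)]
assert F(2.0) == [(-2.0, 2.5)]
```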

To analyze F we first consider a simpler SVF with a convex graph

$$\begin{aligned} G(x) = [ -g_1(x)\, , g_2(x) ] \; , \; \, x \in [a,b]. \end{aligned}$$

The graphs of F and G are presented in Fig. 6.

Fig. 6: Graph of G (left) and graph of F (right)

For \(\chi =\{x_1, x_3 \}\) such that \(a \le x_1 < x_3 \le b\) and \(x_2 \in (x_1, x_3)\) we show that

$$\begin{aligned} \big | \, G[x_2; x_1, x_3] \, \big | \le M = \frac{1}{2} \max \left\{ \, |g''_i(x)|\; , \; x\in (x_1,x_3) , i=1,2 \, \right\} . \end{aligned}$$
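This bound rests on the classical fact that a second divided difference of a \(C^2\) function equals half its second derivative at an intermediate point. A quick numerical check, with a hypothetical concave function in place of the paper's \(g_i\):

```python
import math

g = math.sin                       # g'' = -sin <= 0 on [0, pi], |g''| <= 1

x1, x2, x3 = 0.5, 1.0, 2.5         # nodes inside [0, pi]

# second divided difference in the three-term (Lagrange) form
dd = (g(x1) / ((x1 - x2) * (x1 - x3))
      + g(x2) / ((x2 - x1) * (x2 - x3))
      + g(x3) / ((x3 - x1) * (x3 - x2)))

M = 0.5 * 1.0                      # (1/2) * max |g''| on [x1, x3]
assert dd <= 0.0 and abs(dd) <= M  # concavity and the bound g''(xi)/2
```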

First, we consider all metric pairs of the sets \(G(x_1), G(x_3)\),

$$\begin{aligned} \Pi ({G(x_1)},{G(x_3)}) =&\{(y,y)\, : \, y \in [ -g_1(x_1)\, , g_2(x_1) ] \} \\&\bigcup _{j=1}^2 \left\{ \, (\, (-1)^j g_j(x_1), y\, )\,: \; y \in \mathrm {co}\{(-1)^j g_j(x_1), (-1)^j g_j(x_3)\} \right\} . \end{aligned}$$

Thus \(P_1(\chi , G)\) consists of lines \(l(x)\), restricted to \([x_1,x_3]\),

$$\begin{aligned} \begin{aligned}&P_1(\chi , G)(x) = \{ \, l(x)\equiv y \, : \,y \in [-g_1(x_1), g_2(x_1)] \, \} \\&\bigcup _{j=1}^2 \left\{ \, l(x)=(-1)^j g_j(x_1) +\frac{y-(-1)^j g_j(x_1)}{x_3-x_1}(x-x_1)\,: \right. \\&\quad \left. \; y \in \mathrm {co}\{(-1)^j g_j(x_1), (-1)^j g_j(x_3)\} \right\} . \end{aligned} \end{aligned}$$

Since \(g''_1\) and \(g''_2\) are non-positive, \(\mathrm {Graph}\left( G|_{[x_1,x_3]}\right) \) is convex, and therefore

$$\begin{aligned} \mathrm {Graph}\left( P_1(\chi , G)|_{[x_1,x_3]} \right) \subset \mathrm {Graph}\left( G|_{[x_1,x_3]} \right) . \end{aligned}$$

Thus it follows that \(\left( l(x_2),l(x_2) \right) \in \Pi ({P_1(\chi , G)(x_2)},{G(x_2)})\). By this and the definition of the metric divided difference all the triplets \(\left( l(x_1), l(x_2), l(x_3) \right) \) of the lines in (21) contribute zero to \(G[x_2; x_1,x_3]\).

It remains to consider the rest of the metric pairs in \(\Pi ({P_1(\chi , G)(x_2)},{G(x_2)})\). Let \(l_j\) denote the line through \(\big (x_1, (-1)^jg_j(x_1) \big )\) and \(\big (x_3, (-1)^jg_j(x_3) \big )\), \(j=1,2\), restricted to \([x_1, x_3]\). The lines \(l_1\) and \(l_2\) are the lower and the upper boundaries of \(\mathrm {Graph}\left( P_1(\chi , G)|_{[x_1,x_3]} \right) \), respectively. It is clear that

$$\begin{aligned} \big \{ (l_j(x_2), y )\, : \; y \in \mathrm {co}\{ l_j(x_2), (-1)^j g_j(x_2) \} \big \} \subset \Pi ({P_1(\chi , G)(x_2)},{G(x_2)}), \; j=1,2 \ , \end{aligned}$$

and the corresponding second-order divided differences are

$$\begin{aligned} \begin{aligned}&\bigcup _{j=1}^2 \left\{ \, f[x_2, x_1, x_3]\, : \, f(x_i)=(-1)^j g_j(x_i), i=1,3 \, , \,\right. \\&\quad \left. f(x_2)=y\in \mathrm {co}\{ l_j(x_2) , (-1)^j g_j(x_2) \} \right\} . \end{aligned} \end{aligned}$$

First, note that the divided differences in (22) for \(j=2\) have absolute values bounded by the constant \(M_2\), where \({M_j = \frac{1}{2} \max \{ |g_j''(x)|\, : \, x\in [a,b]\} }\), \(j=1,2\). This is obvious for \(y=g_2(x_2)\), since the divided difference then equals \(g_2[x_1, x_2, x_3]\), which, by the assumption on \(g_2\), is non-positive with absolute value bounded by \(M_2\). All the other divided differences in (22) for \(j=2\) are in the interval \(\big [ g_2[x_1, x_2, x_3] , 0 \big ]\). Indeed, in view of the relation (obtained from (7) for \(r=3\)) we have

$$\begin{aligned} f[x_2, x_1, x_3] = \frac{f(x_1)}{(x_1-x_2)(x_1-x_3)} +\frac{f(x_2)}{(x_2-x_1)(x_2-x_3)} + \frac{f(x_3)}{(x_3-x_1)(x_3-x_2)} \, , \end{aligned}$$

and since \({f(x_2)=y \in [l_2(x_2),g_2(x_2) ]}\) and \({f(x_i)=g_2(x_i)=l_2(x_i)}\), \(i=1,3\), while the denominator of the \(f(x_2)\) term is negative, we get \(f[x_2, x_1, x_3] \in \big [g_2[x_1, x_2, x_3] , 0 \big ]\).

For the case \(j=1\) in (22), by similar arguments one can show that all the divided differences are in \({ \big [0, -g_1[x_1, x_2, x_3] \big ] \subset [0, M_1]}\).
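The three-term form of the second divided difference used above is equivalent to the recursive definition; a quick check in Python with arbitrary sample data:

```python
def dd_recursive(xs, fs):
    # f[x1, x2, x3] built from first-order divided differences
    d12 = (fs[1] - fs[0]) / (xs[1] - xs[0])
    d23 = (fs[2] - fs[1]) / (xs[2] - xs[1])
    return (d23 - d12) / (xs[2] - xs[0])

def dd_three_term(xs, fs):
    # the form obtained from (7) for r = 3
    x1, x2, x3 = xs
    return (fs[0] / ((x1 - x2) * (x1 - x3))
            + fs[1] / ((x2 - x1) * (x2 - x3))
            + fs[2] / ((x3 - x1) * (x3 - x2)))

xs, fs = (0.3, 1.1, 2.0), (2.0, -1.0, 0.5)   # arbitrary sample data
assert abs(dd_recursive(xs, fs) - dd_three_term(xs, fs)) < 1e-12
```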

In the next step we prove that \(\{ |F[x_2; x_1, x_3]|\, : \, a\le x_1<x_2<x_3\le b \}\) is a bounded set, with a bound independent of \(x_1, x_2, x_3\). Let for \(j=1,2\)

$$\begin{aligned} h_j(x)= \left\{ \begin{array}{ll} (-1)^j\tau (x)\, , &{} x \in [a,c], \\ 0 \, , &{} x \in [c,b]. \\ \end{array} \right. \end{aligned}$$

By the assumptions on \(\tau (x)\), we have \({h_1''(x) \le 0}\) and \({h_2''(x) \ge 0}\) for \({x\in [a,b]}\), and \({h_1'(x)>0}\), \({h_2'(x)<0}\) for \(x \in [a,c]\). Note that we can decompose F(x) as \(F(x)=F_1(x)\bigcup F_2(x)\) with \({F_1(x)=[-g_1(x), h_1(x)]}\) and \({F_2(x)=[h_2(x), g_2(x)]}\). The multifunctions \(F_1, F_2\) satisfy \(F_1(x)\bigcap F_2(x)=\emptyset \) for \(x \in [a,c)\) and \({F_1(x)\bigcap F_2(x)=\{0\}}\) for \(x \in [c,b]\). Also the boundaries of \(F_j\), \(j=1,2\) satisfy all the requirements that the boundaries of G satisfy. Then from the result on the second-order divided differences of G we conclude that \(|F_j[x_2; x_1, x_3]| \le {\widetilde{M}}_j\) for any \(a\le x_1<x_2<x_3\le b\), with \({ {\widetilde{M}}_j = \frac{1}{2} \max \left\{ \, |g''_j(x)| , |h_j''(x)|\, : \; x\in [a,b] \right\} }\), \(j=1,2\).
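The decomposition can be checked on a concrete instance. Below, \(\tau\) is a hypothetical choice (with \(a=0\), \(c=1\), \(b=2\)) satisfying the stated assumptions, not the paper's function:

```python
# Check that the pieces F1, F2 of the decomposition are disjoint on [a, c)
# and meet only at 0 on [c, b], for a hypothetical tau.
a, c, b = 0.0, 1.0, 2.0
tau = lambda x: 0.5 * (c - x) ** 3 if x <= c else 0.0
h1 = lambda x: -tau(x)      # upper boundary of F1 = [-g1(x), h1(x)]
h2 = lambda x: tau(x)       # lower boundary of F2 = [h2(x), g2(x)]

for x in (0.0, 0.5, 0.99):
    assert h1(x) < h2(x)            # a gap (h1(x), h2(x)) separates F1, F2
for x in (1.0, 1.5, 2.0):
    assert h1(x) == h2(x) == 0.0    # the two pieces touch only at 0
```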

To obtain the boundedness of divided differences of F it remains to show that for any \({a\le x_1<x_2<x_3\le b}\)

$$\begin{aligned} F[x_2; x_1, x_3] \subseteq F_1[x_2; x_1, x_3] \bigcup F_2[x_2; x_1, x_3]\, . \end{aligned}$$

Indeed, it is easy to conclude (23) from the following observation

$$\begin{aligned} \Pi ({F(x_1)},{F(x_2)}) \subseteq \Pi ({F_1(x_1)},{F_1(x_2)}) \bigcup \Pi ({F_2(x_1)},{F_2(x_2)}) \end{aligned}$$

and from the convexity of the graphs of \(F_1\) and \(F_2\), the monotonicity of their boundaries, as well as the fact that \(h_2(x)=-h_1(x)\), \(x \in [a,b]\).


About this article


Cite this article

Dyn, N., Farkhi, E. & Mokhov, A. High-Order Approximation of Set-Valued Functions. Constr Approx (2022).



Keywords

  • Set-valued functions
  • Metric linear combinations
  • Set-valued metric divided differences
  • Set-valued metric polynomial interpolation
  • Metric local linear operators
  • High-order approximation

Mathematics Subject Classification

  • 26E25
  • 41A10
  • 41A25
  • 41A35
  • 41A36
  • 41H04