1 Introduction

Set-valued functions (SVFs, multifunctions) find applications in fields such as economics, optimization, dynamical systems, control theory, game theory, differential inclusions and geometric modeling. The analysis of set-valued functions has been a rapidly developing field in recent decades. The book [4] may be regarded as establishing the field of set-valued analysis. Approximation of SVFs has been developing in parallel.

Older approaches to approximation, related mainly to control theory, deal almost exclusively with SVFs with convex images (values). Research on approximation and numerical integration of set-valued functions with convex images can be found, e.g., in [6, 7, 8, 9, 10, 11, 14, 16, 17, 18, 20, 27, 28, 32, 36, 37]. The standard tools are Minkowski linear combinations and the Aumann integral. It is well known that the Aumann integral of a multifunction with compact values in \({{{\mathbb {R}}}}^d\) is convex even if the values of the integrand are not convex [5]. This property is called convexification, see e.g. [20]. Minkowski convex combinations with a growing number of summands also suffer from convexification [20].

Some newer applications, such as geometric modeling, motivate the study of approximation of SVFs with general, not necessarily convex values. Trying to apply the known methods for the convex-valued case to set-valued functions with general values, R. A. Vitale considered in [37] the polynomial Bernstein operators adapted to SVFs by replacing linear combinations of numbers by Minkowski linear combinations of sets. While this construction works perfectly for SVFs with convex images, in the general case the sequence of Bernstein approximants generated in this way does not approximate the given SVF, but rather the multifunction whose values are the convex hulls of the values of the original SVF. Clearly, such methods are useless for approximating set-valued functions with general, not necessarily convex images.

A pioneering work on the approximation of SVFs with general images is that of Z. Artstein [3], who constructed piecewise-linear interpolants of multifunctions. He replaced the Minkowski averages of two sets by the set of averages of special pairs of elements, termed in later works “metric pairs”. Using the concepts of metric pairs and metric linear combinations, N. Dyn, E. Farkhi and A. Mokhov developed in a series of works techniques that are free of convexification and are suitable for approximating set-valued functions with general compact images. The tools used in these techniques include repeated binary metric averages [19, 22, 24], metric linear combinations [21, 22], metric selections [22, 23] and the metric integral [23], which is extended here to a weighted metric integral. In [13, 21, 22, 23] the authors studied approximation of set-valued functions by means of metric adaptations of classical approximation operators such as the Bernstein polynomial operator, the Schoenberg spline operator and the polynomial interpolation operator. While in older papers the approximated SVFs are mainly continuous, the later works [13, 23] are concerned with multifunctions of bounded variation.

The main topic of this paper is an adaptation of the trigonometric Fourier series to set-valued functions of bounded variation with general compact images. We aim to obtain error bounds under minimal regularity requirements on the multifunctions to be approximated, and we focus our investigation on SVFs of bounded variation. In our analysis we use some properties of maps of bounded variation with values in metric spaces proved in [15].

We are familiar with only a few works on trigonometric approximation of multifunctions. Some results on this topic for convex-valued SVFs, obtained by methods based on the Aumann integral, can be found in [6]. For the related topic of trigonometric approximation of fuzzy-valued functions see, e.g., [2, 12, 25, 38]. Note that in this context the level sets determine multifunctions with convex values (intervals in \({{\mathbb {R}}}\)).

In this paper we define the metric analogue of the partial sums of the Fourier series of a multifunction via convolutions with the Dirichlet kernel of order n, for \(n\ge 0\), the convolutions being defined as weighted metric integrals. To study error bounds of these approximants and to prove convergence as \(n\rightarrow \infty \), we introduce new one-sided local moduli of continuity in Sect. 3 and quasi-moduli of continuity in Sect. 6. The main result of the paper is analogous to the classical Dirichlet-Jordan Theorem for real functions [39]. It states the pointwise convergence in the Hausdorff metric of the metric Fourier approximants of a multifunction of bounded variation to a compact set. In particular, if the multifunction F is of bounded variation and continuous at a point x, then the metric Fourier approximants of it at x converge to F(x). The convergence is uniform in closed finite intervals where F is continuous. At a point of discontinuity the limit set is determined by the values of the metric selections of F there.

The paper is organized as follows. In the next section some basic notions and notation are recalled. One-sided local moduli of continuity of univariate functions with values in a metric space are introduced and studied in Sect. 3. The theory developed in Sect. 3 is specified in Sect. 4 to set-valued functions of bounded variation, to their chain functions and metric selections. In Sect. 5 the weighted metric integral is introduced and some of its properties are derived. The main results of the paper are presented in Sect. 6. To make the reading easier, the section is divided into three subsections. The first subsection contains the definition of the metric Fourier approximants of multifunctions. The second subsection contains a refinement of the classical Dirichlet-Jordan Theorem [39]. There we obtain error bounds for the Fourier approximants for special classes of real functions of bounded variation. This refinement is used in the third subsection for the main results on the metric Fourier approximation of set-valued functions. In Sect. 7 we discuss properties of a set-valued function and of its metric selections at a point of discontinuity and study the structure of the limit set of the metric Fourier approximants.

There are two appendices: Appendix A contains the proof of Theorem 4.13 which is stated without a proof in Section 4 of [23]. Appendix B contains the proof of the refined Dirichlet-Jordan Theorem from Sect. 6.2.

2 Preliminaries

In this section we introduce some notation and basic notions related to sets and set-valued functions.

All sets considered from now on are sets in \({{{\mathbb {R}}}}^d\). We denote by \(\mathrm {K}({{{\mathbb {R}}}}^d)\) the collection of all compact non-empty subsets of \({{{\mathbb {R}}}}^d\). By \(\mathrm {Co}({{{\mathbb {R}}}}^d)\) we denote the collection of all convex sets in \(\mathrm {K}({{{\mathbb {R}}}}^d)\). The convex hull of a set A is denoted by \(\mathrm {co}(A)\). The metric in \({{{\mathbb {R}}}}^d\) is of the form \(\rho (u,v)=|u-v|\), where \(|\cdot |\) is a norm on \({{{\mathbb {R}}}}^d\). Note that all norms on \({{{\mathbb {R}}}}^d\) are equivalent. In the following we fix one norm in \({{{\mathbb {R}}}}^d\). Recall that \({{{\mathbb {R}}}}^d\) is a complete metric space.

Let A and B be non-empty subsets of \({{{\mathbb {R}}}}^d\). To measure the distance between A and B, we use the Hausdorff metric based on \(\rho \)

$$\begin{aligned} \mathrm {haus}(A,B)_{\rho }= \max \left\{ \sup _{a \in A}\mathrm {dist}(a,B)_{\rho },\; \sup _{b \in B}\mathrm {dist}(b,A)_{\rho } \right\} , \end{aligned}$$
(1)

where the distance from a point c to a set D is \(\mathrm {dist}(c,D)_{\rho }=\inf _{d \in D}\rho (c,d)\).

It is well known that \(\mathrm {K}({{{\mathbb {R}}}}^d)\) and \(\mathrm {Co}({{{\mathbb {R}}}}^d)\) are complete metric spaces with respect to the Hausdorff metric [33, 35]. For an arbitrary metric space \((X,\rho )\), the same formula (1) defines a metric on the set \({\mathcal {C}}(X)\) of all non-empty closed subsets of X. It is known that the metric space \(({\mathcal {C}}(X), \mathrm {haus})\) is complete if \((X,\rho )\) is complete. Moreover, \(({\mathcal {C}}(X), \mathrm {haus})\) is compact if X is compact (e.g. [1, Section 4.4]).

We denote by \(|A|=\mathrm {haus}(A,\{0\})\) the “norm” of the set \(A \in \mathrm {K}({{{\mathbb {R}}}}^d)\).

The set of projections of \(a \in {{{\mathbb {R}}}}^d\) on a set \(B \in \mathrm {K}({{{\mathbb {R}}}}^d)\) is

$$\begin{aligned} \Pi _{B}{(a)} =\{b \in B \ : \ |a-b|=\mathrm {dist}(a,B)\}, \end{aligned}$$

and the set of metric pairs of two sets \(A,B \in \mathrm {K}({{{\mathbb {R}}}}^d)\) is

$$\begin{aligned} \Pi \big ( {A},{B} \big ) = \{(a,b) \in A \times B \ : \ a \in \Pi _{A}{(b)} \;\, \text{ or }\;\, b\in \Pi _{B}{(a)} \}.\end{aligned}$$

Using metric pairs, we can rewrite

$$\begin{aligned} \mathrm {haus}(A,B)= \max \{|a-b| \ :\ (a,b)\in \Pi \big ( {A},{B} \big )\}. \end{aligned}$$
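
For finite point sets the above notions can be computed directly. The following Python sketch is an illustration only (the helper names dist, projections, metric_pairs and hausdorff are ours, and the norm is the Euclidean one); it enumerates the metric pairs of two small planar sets and checks that the Hausdorff distance is attained on a metric pair.

```python
import numpy as np

def dist(a, B):
    """dist(a, B): distance from the point a to the finite set B (rows of B), Euclidean norm."""
    return np.linalg.norm(B - a, axis=1).min()

def projections(a, B):
    """Pi_B(a): the points of B nearest to a."""
    d = np.linalg.norm(B - a, axis=1)
    return B[np.isclose(d, d.min())]

def metric_pairs(A, B):
    """Pi(A, B): pairs (a, b) with b in Pi_B(a) or a in Pi_A(b)."""
    P = {(tuple(a), tuple(b)) for a in A for b in projections(a, B)}
    P |= {(tuple(a), tuple(b)) for b in B for a in projections(b, A)}
    return P

def hausdorff(A, B):
    """haus(A, B) computed directly from (1)."""
    return max(max(dist(a, B) for a in A), max(dist(b, A) for b in B))

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 1.0], [2.0, 0.0]])
via_pairs = max(np.linalg.norm(np.subtract(a, b)) for a, b in metric_pairs(A, B))
print(hausdorff(A, B), via_pairs)   # both equal 1.0 for these two sets
```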

In [23], the three last-named authors introduced the notions of a metric chain and of a metric linear combination as follows.

Definition 2.1

[23] Given a finite sequence of sets \(A_0, \ldots , A_n \in \mathrm {K}({{{\mathbb {R}}}}^d)\), \(n \ge 1\), a metric chain of \(A_0, \ldots , A_n\) is an \((n+1)\)-tuple \((a_0,\ldots ,a_n)\) such that \((a_i,a_{i+1}) \in \Pi \big ( {A_i},{A_{i+1}} \big )\), \(i=0,1,\ldots ,n-1\). We denote the collection of all metric chains of \(A_0, \ldots , A_n\) by

$$\begin{aligned} {\mathrm {CH}}(A_0,\ldots ,A_n)= \left\{ (a_0,\ldots ,a_n) \ : \ (a_i,a_{i+1}) \in \Pi \big ( {A_i},{A_{i+1}} \big ), \ i=0,1,\ldots ,n-1 \right\} . \end{aligned}$$

The metric linear combination of the sets \(A_0, \ldots , A_n \in \mathrm {K}({{{\mathbb {R}}}}^d)\), \(n \ge 1\), is

$$\begin{aligned} \bigoplus _{i=0}^n \lambda _i A_i = \left\{ \sum _{i=0}^n \lambda _i a_i \ : \ (a_0,\ldots ,a_n) \in {\mathrm {CH}}(A_0,\ldots ,A_n) \right\} , \quad \lambda _0,\ldots ,\lambda _n \in {{\mathbb {R}}}. \end{aligned}$$

Note that the metric linear combination depends on the order of the sets, in contrast to the Minkowski linear combination of sets which is defined by

$$\begin{aligned} \sum _{i=0}^n \lambda _i A_i = \left\{ \sum _{i=0}^n \lambda _i a_i \ : \ a_i \in A_i \right\} ,\quad n \ge 1. \end{aligned}$$
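
For finite sets the metric chains, and hence both kinds of linear combinations, can be enumerated directly. The sketch below is again only an illustration (the names metric_chains, metric_comb and minkowski_comb are ours); for the set \(A=\{-1,1\}\subset {{\mathbb {R}}}\) the metric average of A with itself is A, whereas the Minkowski average adds the midpoint 0, which illustrates the convexification effect discussed above.

```python
import numpy as np
from itertools import product

def _proj(a, B):
    d = np.linalg.norm(B - np.asarray(a, float), axis=1)
    return [tuple(b) for b in B[np.isclose(d, d.min())]]

def _metric_pairs(A, B):
    P = {(tuple(a), b) for a in A for b in _proj(a, B)}
    P |= {(a, tuple(b)) for b in B for a in _proj(b, A)}
    return P

def metric_chains(sets):
    """All metric chains of the finite sets A_0,...,A_n: consecutive entries form metric pairs."""
    chains = [[tuple(a)] for a in sets[0]]
    for A, B in zip(sets, sets[1:]):
        P = _metric_pairs(A, B)
        chains = [c + [b] for c in chains for (a, b) in P if a == c[-1]]
    return chains

def metric_comb(lams, sets):
    """Metric linear combination (depends on the order of the sets)."""
    return {tuple(float(v) for v in sum(l * np.array(a) for l, a in zip(lams, ch)))
            for ch in metric_chains(sets)}

def minkowski_comb(lams, sets):
    """Minkowski linear combination (order-independent)."""
    return {tuple(float(v) for v in sum(l * np.array(p) for l, p in zip(lams, pts)))
            for pts in product(*(list(map(tuple, S)) for S in sets))}

A = np.array([[-1.0], [1.0]])
print(metric_comb([0.5, 0.5], [A, A]))     # {(-1.0,), (1.0,)} -- the metric average of A with itself is A
print(minkowski_comb([0.5, 0.5], [A, A]))  # {(-1.0,), (0.0,), (1.0,)} -- the Minkowski average adds the midpoint
```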

For a sequence of sets \(\{A_n\}_{n=1}^{\infty }\) the lower Kuratowski limit is the set of all limit points of converging sequences \(\{a_{n}\}_{n=1}^{\infty }\), where \(a_{n} \in A_{n} \), namely,

$$\begin{aligned} \liminf _{n \rightarrow \infty } A_n = \left\{ a \ : \ \exists \, a_{n} \in A_{n} \text { such that } \lim _{n \rightarrow \infty }a_{n} = a \right\} . \end{aligned}$$

Analogously, for a set-valued function \(F:[a,b]\rightarrow \mathrm {K}({{{\mathbb {R}}}}^d)\) and \({\widetilde{x}} \in [a,b]\) we define

$$\begin{aligned}&\liminf _{x \rightarrow {\widetilde{x}}} F(x) = \left\{ y \ : \ \forall \, \{x_k\}_{k=1}^{\infty } \subset [a,b] \right. \\&\left. \text {with} \ x_k\rightarrow {\widetilde{x}} \ \exists \, \{y_k\}_{k=1}^{\infty } \ \text {with} \ y_k\in F(x_k), k\in {{\mathbb {N}}}, \ \text {and} \ y_k \rightarrow y \right\} . \end{aligned}$$

The upper Kuratowski limit is the set of all limit points of converging subsequences \(\{a_{n_k}\}_{k=1}^{\infty }\), where \({a_{n_k} \in A_{n_k} }\), \(k\in {{\mathbb {N}}}\), namely

$$\begin{aligned}&\limsup _{n \rightarrow \infty } A_n \\&\quad =\left\{ a \ : \ \exists \, \{n_k\}_{k=1}^{\infty },\, n_{k+1}>n_k,\, k\in {{\mathbb {N}}},\ \exists \, a_{n_k} \in A_{n_k} \text { such that } \lim _{k \rightarrow \infty }a_{n_k} = a \right\} . \end{aligned}$$

Correspondingly, for a set-valued function \(F:[a,b]\rightarrow \mathrm {K}({{{\mathbb {R}}}}^d)\) and \({\widetilde{x}} \in [a,b]\)

$$\begin{aligned}&\limsup _{x \rightarrow {\widetilde{x}}} F(x) = \left\{ y \ : \ \exists \, \{x_k\}_{k=1}^{\infty } \subset [a,b] \right. \\&\left. \text {with} \ x_k\rightarrow {\widetilde{x}} \ \exists \, \{y_k\}_{k=1}^{\infty } \ \text {with} \ y_k\in F(x_k), k\in {{\mathbb {N}}}, \ \text {and} \ y_k \rightarrow y \right\} . \end{aligned}$$

A sequence \(\{A_n\}_{n=1}^{\infty }\) converges in the sense of Kuratowski to A if \({\displaystyle A = \liminf _{n \rightarrow \infty } A_n =} {\limsup _{n \rightarrow \infty } A_n }\). Similarly, a set A is a Kuratowski limit of F(x) as \(x \rightarrow {\widetilde{x}}\) if \({\displaystyle A = \liminf _{x \rightarrow {\widetilde{x}}} F(x) = \limsup _{x \rightarrow {\widetilde{x}}} F(x) }\).

Remark 2.2

There is a connection between convergence in the sense of Kuratowski and convergence in the Hausdorff metric, the latter meaning that \(\displaystyle \lim _{n \rightarrow \infty } { \mathrm {haus}(A_n,A)} = 0\) or \(\displaystyle \lim _{x \rightarrow {\widetilde{x}}} { \mathrm {haus}\big ( F(x), A \big )} = 0\), respectively. If the underlying space X is compact, then convergence in the Hausdorff metric and in the sense of Kuratowski are equivalent (see, e.g., [1, Section 4.4]).

3 Local Regularity Measures of Functions with Values in a Metric Space

Here we focus our investigation on local regularity measures of functions defined on a fixed interval \([a,b] \subset {{\mathbb {R}}}\) with values in a complete metric space \((X,\rho )\).

A basic notion in this paper is that of a modulus-bounding function \(\omega (\delta )\), namely a non-decreasing function \(\omega : [0,\infty ) \rightarrow [0,\infty )\). Frequently we encounter the situation where, in addition, \(\lim \limits _{\delta \rightarrow 0^+} \omega (\delta )=0\), but we do not require this property in the definition.

In the analysis of continuity of a function at a point, the notion of the local modulus of continuity is instrumental [34]

$$\begin{aligned} \omega \big ( {f},{x^*},{\delta } \big )= & {} \sup \left\{ \, \rho (f(x_1),f(x_2)): \quad x_1,x_2 \in \left[ x^*-\frac{\delta }{2},x^*+\frac{\delta }{2} \right] \cap [a,b] \,\right\} ,\nonumber \\&\quad \delta >0. \end{aligned}$$
(2)

To characterize left and right continuity of functions, we introduce the left and the right local moduli of continuity, respectively.

Definition 3.1

The left local modulus of continuity of f at \(x^*\in [a,b]\) is

$$\begin{aligned} \omega ^{-}\big ( {f},{x^*},{\delta } \big )=\sup \left\{ \rho (f(x),f(x^*)) \ :\ x\in [x^*-\delta , x^*] \cap [a,b] \right\} ,\quad \delta >0. \end{aligned}$$
(3)

Similarly, the right local modulus of continuity of f at \(x^* \in [a,b]\) is

$$\begin{aligned} \omega ^{+}(f,x^*,\delta )=\sup \left\{ \rho (f(x),f(x^*)) \ : \ x\in [x^*,x^*+\delta ] \cap [a,b] \right\} ,\quad \delta >0. \end{aligned}$$
(4)
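
For real-valued functions the one-sided local moduli are easily approximated on a grid. The following sketch (ours; the grid-based approximation and the test function are purely illustrative) shows a function that is left continuous but not right continuous at \(x^*=0.5\): the left modulus vanishes while the right one equals the jump for every \(\delta >0\).

```python
import numpy as np

def local_moduli(f, x_star, delta, a=0.0, b=1.0, m=2001):
    """Grid approximations of the left and right local moduli (3), (4) of a real-valued f."""
    x = np.linspace(a, b, m)
    left = x[(x >= x_star - delta) & (x <= x_star)]
    right = x[(x >= x_star) & (x <= x_star + delta)]
    om_minus = max(abs(f(t) - f(x_star)) for t in left) if left.size else 0.0
    om_plus = max(abs(f(t) - f(x_star)) for t in right) if right.size else 0.0
    return om_minus, om_plus

# g is left continuous but not right continuous at 0.5:
# the left modulus is 0, the right modulus equals the jump for every delta > 0.
g = lambda x: 0.0 if x <= 0.5 else 1.0
for delta in (0.2, 0.05, 0.01):
    print(delta, local_moduli(g, 0.5, delta))
```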

Remark 3.2

(i)

    One can define the one-sided local moduli of continuity analogously to (2), for example, the left local modulus as

    $$\begin{aligned} \nu ^{-}(f,x^*,\delta )=\sup \left\{ \rho (f(x_1),f(x_2))\ : \ x_1,x_2\in [x^*-\delta , x^*] \cap [a,b] \right\} ,\quad \delta >0. \end{aligned}$$

    Yet it is easily seen that this quantity is equivalent to (3), namely

    $$\begin{aligned} \omega ^{-}(f,x^*,\delta ) \le \nu ^{-}(f,x^*,\delta ) \le 2\omega ^{-}(f,x^*,\delta ). \end{aligned}$$
(ii)

    Note that the classical global modulus of continuity \(\displaystyle \omega \big ( {f},{\delta } \big ) = \sup _{x\in [a,b]} \omega \big ( {f},{x},{\delta } \big )\) is subadditive in \(\delta \), while this property is not satisfied by the local moduli.

The following relations hold for \(x^* \in [a,b]\):

$$\begin{aligned} \max \{ \omega ^{-}(f,x^*,\delta ), \omega ^{+}(f,x^*,\delta ) \} \le \omega \big ( {f},{x^*},{2\delta } \big ) , \end{aligned}$$
(5)
$$\begin{aligned} \omega \big ( {f},{x^*},{\delta } \big ) \le 2\max \left\{ \omega ^{-}\left( f,x^*,\delta /2 \right) , \omega ^{+}\left( f,x^*,\delta /2\right) \right\} , \quad \delta > 0. \end{aligned}$$

In the next proposition we extend some properties known for the local modulus of continuity \(\omega (f,x^*,\delta )\) to the one-sided local moduli. The proof is standard and we omit it.

Proposition 3.3

A function \(f:[a,b]\rightarrow X\) is left continuous at \(x^* \in (a,b]\) if and only if \(\lim \limits _{\delta \rightarrow 0+}\omega ^{-}\big ( {f},{x^*},{\delta } \big )= 0\). The function f is right continuous at \(x^* \in [a,b)\) if and only if \(\lim \limits _{\delta \rightarrow 0+}\omega ^{+}(f,x^*,\delta )=0\).

We recall the notion of the variation of a function \({f:[a,b]\rightarrow X}\). Let \(\chi =\{x_0,\ldots , x_n\} \), \(a=x_0< \cdots <x_n=b\), be a partition of the interval \([a,b]\) with the norm

$$\begin{aligned} |\chi |=\max _{0\le i\le n-1} (x_{i+1}-x_i). \end{aligned}$$

The variation of f on the partition \(\chi \) is defined as

$$\begin{aligned} V(f,\chi ) = \sum _{i=1}^{n} \rho (f(x_i),f(x_{i-1})). \end{aligned}$$

The total variation of f on \([a,b]\) is

$$\begin{aligned} V_{a}^{b}(f) = \sup _{\chi } V(f,\chi ), \end{aligned}$$

where the supremum is taken over all partitions \(\chi \) of \([a,b]\).

A function f is said to be of bounded variation if \({ V_{a}^{b}(f) < \infty }\). We call functions of bounded variation BV functions and write \(f \in \mathrm {BV}[a,b]\). If f is also continuous, we write \(f\in \mathrm {CBV}[a,b]\).

For \(f \in \mathrm {BV}[a,b]\) the function \(v_f:[a,b]\rightarrow {{\mathbb {R}}}\), \(v_f(x)=V_{a}^{x}(f)\), is called the variation function of f. Note that

$$\begin{aligned} V_{z}^{x}(f)=v_f(x)-v_f(z) \quad \text{ for } \quad a\le z<x \le b, \end{aligned}$$

and that \(v_f\) is monotone non-decreasing.
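
The variation over a partition and the variation function \(v_f\) can be approximated numerically; any fixed partition gives a lower bound for the total variation. A minimal sketch (ours; the helper names are hypothetical):

```python
import numpy as np

def variation_on_partition(f, chi, rho=lambda u, v: abs(u - v)):
    """V(f, chi): the variation of f on the partition chi."""
    vals = [f(x) for x in chi]
    return sum(rho(u, v) for u, v in zip(vals[1:], vals[:-1]))

def variation_function(f, a, b, m=10000):
    """Grid approximation of v_f(x) = V_a^x(f) on a uniform partition
    (a lower bound for the true variation function)."""
    x = np.linspace(a, b, m + 1)
    incr = np.abs(np.diff([f(t) for t in x]))
    return x, np.concatenate([[0.0], np.cumsum(incr)])

print(variation_on_partition(np.cos, [0.0, np.pi / 2, np.pi]))  # 2.0
x, vf = variation_function(np.cos, 0.0, np.pi)
print(vf[-1])   # approximately 2 = V_0^pi(cos); vf is non-decreasing
```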

Proposition 3.4

For a function \({f:[a,b]\rightarrow X}\), \({f \in \mathrm {BV}[a,b]}\) we have

$$\begin{aligned} \omega ^{-}\big ( {f},{x^*},{\delta } \big )\le & {} \omega ^{-}\big ( {v_f},{x^*},{\delta } \big ) \quad \text {and} \quad \omega ^{+}(f,x^*,\delta ) \le \omega ^{+}(v_f,x^*,\delta ),\\&\quad x^* \in [a,b], \quad \delta > 0. \end{aligned}$$

The proof is straightforward.

The following claim is a slight refinement of Proposition 1.1.1 in [22] and of [26, Chapter 9, Sec. 32, Theorem 3]. Its proof is a minor modification of the proofs in the above references.

Proposition 3.5

A function \(f:[a,b]\rightarrow X\), \(f \in \mathrm {BV}[a,b]\) is left continuous at \(x^*\in (a,b]\) if and only if \(v_f\) is left continuous at \(x^*\). The function f is right continuous at \(x^* \in [a,b)\) if and only if \(v_f\) is right continuous at \(x^*\).

Analogs of Propositions 3.4 and 3.5 for the two-sided local modulus of continuity are well-known:

Proposition 3.6

For a function \({f:[a,b]\rightarrow X}\), \({f \in \mathrm {BV}[a,b]}\) we have

$$\begin{aligned} \omega (f,x^*,\delta ) \le \omega (v_f,x^*,\delta ), \quad x^* \in [a,b], \quad \delta > 0. \end{aligned}$$

Moreover, f is continuous at \(x^* \in [a,b]\) if and only if \(v_f\) is continuous at \(x^*\).

The first statement can be proved along the same lines, and the second statement follows immediately from Proposition 3.5.

Remark 3.7

Note that, in general, \(\omega (f,x^*,\delta )\) and \(\omega (v_f, x^*,\delta )\) are not equivalent for \(f \in \mathrm {BV}[a,b]\). As an example, consider \(f(x) = x^2 \sin {\frac{1}{x}} \in \mathrm {BV}[0,1]\) (where we define \(f(0) = 0\) by continuity). It is easy to see that

$$\begin{aligned} \omega (f,0,\delta ) = \sup { \left\{ |f(x_1) - f(x_2)| : x_1,x_2 \in \left[ 0, \delta /2 \right] \right\} } \le 2 \left( \frac{\delta }{2} \right) ^2 = \frac{\delta ^2}{2}, \quad \delta > 0. \end{aligned}$$

To estimate the local variation of f, consider the points \(\frac{1}{x_k} = \frac{\pi }{2} + \pi k\), \(k \in {{\mathbb {N}}}\), so that \(\sin { \frac{1}{x_k} } = (-1)^k\). Then

$$\begin{aligned} \omega (v_f, 0, \delta ) = V_0^{\delta /2}(f) \ge 2 \sum _{ k> \frac{2}{\delta \pi } - \frac{1}{2}} \left( \frac{1}{\frac{\pi }{2} + \pi k} \right) ^2 \ge \frac{2}{\pi ^2} \sum _{ k > \frac{2}{\delta \pi } + \frac{1}{2}} \frac{1}{k^2} \sim \delta . \end{aligned}$$

Helly’s Selection Principle (see, e.g. [26, Chapter 6]) will be heavily used in our analysis. We cite a version of it which is relevant to our paper.

Helly’s Selection Principle. Let \(\{f_n\}_{n \in {{\mathbb {N}}}}\) be a sequence of functions \(f_n : [a,b] \rightarrow {{\mathbb {R}}}\), and assume that there are constants \(A,B > 0\) such that \(|f_n(x)| \le A\), \(n \in {{\mathbb {N}}}\), \(x \in [a,b]\) and \(V_a^b(f_n) \le B\), \(n \in {{\mathbb {N}}}\). Then \(\{f_n\}_{n \in {{\mathbb {N}}}}\) contains a subsequence \(\{f_{n_k}\}_{k \in {{\mathbb {N}}}}\) that converges pointwisely to a function \(f^{\infty } : [a,b] \rightarrow {{\mathbb {R}}}\), i.e., \(f^{\infty }(x) = \lim _{k \rightarrow \infty } f_{n_k}(x)\), \(x \in [a,b]\).

In the following statements we consider pointwise limits of sequences of BV functions. We show that the limit function inherits local properties which are shared by the members of the sequence. The first result is known, see e.g. [15, Section 2], and the second one follows from it immediately.

Theorem 3.8

Let \(\{f_n\}_{n=1}^\infty \) be a sequence of functions \(f_n : [a,b]\rightarrow X\) that converges pointwisely to a function \(f^{\infty } : [a,b]\rightarrow X\). Then

$$\begin{aligned} V_{a}^{b}(f^\infty ) \le \liminf _{n \rightarrow \infty } V_a^b(f_n). \end{aligned}$$

In particular, if \(V_{a}^{b}(f_n) \le A\) for all \(n\in {{\mathbb {N}}}\) with some \(A \in {{\mathbb {R}}}\), then

$$\begin{aligned} V_{a}^{b}(f^\infty ) \le A. \end{aligned}$$

In the next theorem we study sequences of functions which are equicontinuous from the left or from the right at a point.

Theorem 3.9

Let \(x^*\in (a,b]\), and \(\{f_n\}_{n=1}^\infty \) be a sequence of functions \(f_n: [a,b]\rightarrow X\) satisfying \({\omega ^-(f_n, x^*, \delta ) \le \omega (\delta )}\), \(0< \delta \le \delta _0\), \(n\in {{\mathbb {N}}}\), where \(\omega (\delta )\) is a modulus-bounding function. If \(f^\infty = \lim \limits _{n\rightarrow \infty } f_n \) pointwisely on \([x^*-\delta _0, x^*]\cap [a,b]\), then

$$\begin{aligned} \omega ^-(f^\infty ,x^*,\delta )\le \omega (\delta ),\quad 0< \delta \le \delta _0. \end{aligned}$$

In particular, if \(\lim \limits _{\delta \rightarrow 0^+} \omega (\delta ) =0\) then \(f^\infty \) is left continuous at \(x^*\).

Proof

Let \(\delta \in (0,\delta _0]\). Fix \(z\in [x^*-\delta , x^*]\cap [a,b]\). By the assumption,

$$\begin{aligned} \rho (f_n(z),f_n(x^*)) \le \omega ^-(f_n,x^*,\delta ) \le \omega (\delta ), \quad n\in {{\mathbb {N}}}. \end{aligned}$$

Let \(\varepsilon >0\) be arbitrarily small. There exists \(N(\varepsilon ,z)\) such that

$$\begin{aligned} \rho (f^\infty (z),f_{n}(z)) \le \frac{\varepsilon }{2}\quad \text{ and } \quad \rho (f^\infty (x^*),f_{n}(x^*)) \le \frac{\varepsilon }{2} \end{aligned}$$

for all \(n \ge N(\varepsilon ,z)\). For such n we have

$$\begin{aligned}&\rho (f^\infty (z),f^\infty (x^*)) \le \rho (f^\infty (z),f_n(z)) + \rho (f_n(z),f_n(x^*)) + \rho (f_n(x^*),f^\infty (x^*)) \\&\quad \le \frac{\varepsilon }{2} + \omega (\delta ) + \frac{\varepsilon }{2} = \varepsilon + \omega (\delta ). \end{aligned}$$

Since \(\varepsilon > 0\) was taken arbitrarily, it follows that \(\rho (f^\infty (z),f^\infty (x^*)) \le \omega (\delta )\). Thus,

$$\begin{aligned} \omega ^-(f^\infty ,x^*,\delta ) = \sup \left\{ \rho (f^\infty (z),f^\infty (x^*)) \ : \ z \in [x^*-\delta , x^*]\cap [a,b] \right\} \le \omega (\delta ). \end{aligned}$$

In particular, it follows from Proposition 3.3 that \(f^\infty \) is left continuous at \(x^*\). \(\square \)

An analogous result holds for the right continuity at \(x^*\).

Arguing along the same lines, one can also prove an analogous statement for the two-sided local modulus of continuity.

Theorem 3.10

Let \(x^*\in [a,b]\) and let \(\, \{f_n\}_{n=1}^{\infty }\) be a sequence of functions \(\, f_n : [a,b] \rightarrow X\) satisfying \({\omega (f_n, x^*, \delta ) \le \omega (\delta )}\)\(0< \delta \le \delta _0\), \(n \in {{\mathbb {N}}}\), where \(\omega (\delta )\) is a modulus-bounding function. If \(f^\infty = \lim \limits _{n \rightarrow \infty } f_n \) pointwisely on \([x^*-\frac{\delta _0}{2}, x^*+\frac{\delta _0}{2}]\cap [a,b]\), then

$$\begin{aligned} \omega (f^\infty ,x^*,\delta ) \le \omega (\delta ), \quad 0< \delta \le \delta _0. \end{aligned}$$

In particular, if \(\lim \limits _{\delta \rightarrow 0^+} \omega (\delta ) =0\) then \(f^\infty \) is continuous at \(x^*\).

As the last statement in this section, we formulate a property similar to Theorem 3.9 for the local moduli of the function \(v_f\).

Proposition 3.11

Let \(x^*\in (a,b]\), and let \(\{f_n\}_{n=1}^\infty \) be a sequence of functions \(f_n: [a,b]\rightarrow X\), \(f_n \in \mathrm {BV}[a,b]\), satisfying \({\omega ^-(v_{f_n}, x^*, \delta ) \le \omega (\delta )}\), \(0< \delta \le \delta _0\), \(n\in {{\mathbb {N}}}\), where \(\omega (\delta )\) is a modulus-bounding function. If \(f^\infty = \lim \limits _{n\rightarrow \infty } f_n \) pointwisely on [ab], then

$$\begin{aligned} \omega ^-(v_{f^\infty },x^*,\delta )\le \omega (\delta ),\quad 0< \delta \le \delta _0. \end{aligned}$$

In particular, if \(\lim \limits _{\delta \rightarrow 0^+} \omega (\delta ) =0\) then \(v_{f^\infty }\) is left continuous at \(x^*\).

Proof

Let \(x\in [x^*-\delta , x^*] \cap [a,b]\). By Theorem 3.8 and by the monotonicity of the variation function we have

$$\begin{aligned} v_{f^\infty }(x^*)-v_{f^\infty }(x)&= V_x^{x^*} (f^\infty )\le \liminf _{n \rightarrow \infty } V_x^{x^*} (f_n) \le v_{f_n}(x^*)-v_{f_n}(x) \\&\quad \le \omega ^{-}\big ( {v_{f_n}},{x^*},{\delta } \big ) \le \omega (\delta ). \end{aligned}$$

Taking supremum over \(x \in [x^*-\delta , x^*]\cap [a,b]\) we get the first claim. The second claim follows from Proposition 3.3. \(\square \)

Analogous statements hold for the right local modulus of continuity and for the two-sided local modulus of continuity.

4 Multifunctions, Their Chain Functions and Metric Selections

The main objects of this paper are set-valued functions (SVFs, multifunctions) mapping \([a,b]\) to \(\mathrm {K}({{{\mathbb {R}}}}^d)\). First we recall some basic notions concerning such SVFs.

The graph of a multifunction F is the set of points in \({{\mathbb {R}}}^{d+1}\) defined as

$$\begin{aligned} {\mathrm {Graph}}(F)= \left\{ (x,y) \ : \ y\in F(x),\; x \in [a,b] \right\} . \end{aligned}$$

It is easy to see that if \(F \in \mathrm {BV}[a,b]\) then \({\mathrm {Graph}}(F)\) is a bounded set and F has a bounded range, namely \(\Vert F\Vert _\infty = \left| \bigcup _{x\in [a,b]} F(x) \right| < \infty \). We denote the class of SVFs of bounded variation with compact graphs by \({\mathcal {F}}[a,b]\).

For a set-valued function \(F : [a,b] \rightarrow \mathrm {K}({{{\mathbb {R}}}}^d)\), a single-valued function \({s:[a,b] \rightarrow {{{\mathbb {R}}}}^d}\) such that \(s(x) \in F(x)\) for all \(x \in [a,b]\) is called a selection of F.

Below we present some definitions and results from [23] that will be used in this paper. In particular, we recall the definitions of chain functions and metric selections.

Given a multifunction \(F: [a,b] \rightarrow \mathrm {K}({{{\mathbb {R}}}}^d)\), a partition \(\chi =\{x_0,\ldots ,x_n\} \subset [a,b]\), \(a=x_0< \cdots < x_n=b\), and a corresponding metric chain \(\phi =(y_0,\ldots ,y_n) \in {\mathrm {CH}}\left( F(x_0),\ldots ,F(x_n) \right) \) (see Definition 2.1), the chain function based on \(\chi \) and \(\phi \) is

$$\begin{aligned} c_{\chi , \phi }(x)= \left\{ \begin{array}{ll} y_i, &{} x \in [x_i,x_{i+1}), \quad i=0,\ldots ,n-1,\, \\ y_n, &{} x=x_n. \end{array} \right. \end{aligned}$$
(6)
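
The chain function (6) is a piecewise constant function determined by the partition \(\chi \) and the metric chain \(\phi \). A minimal sketch (ours; the toy data are chosen so that \((0,1,1)\) is indeed a metric chain of \(F(x_0)=\{0,1\}\), \(F(x_1)=\{1\}\), \(F(x_2)=\{1\}\)):

```python
import numpy as np

def chain_function(chi, phi):
    """The piecewise constant chain function c_{chi,phi} of (6)."""
    chi = np.asarray(chi, float)
    def c(x):
        i = np.searchsorted(chi, x, side='right') - 1
        return np.asarray(phi[min(max(i, 0), len(phi) - 1)], float)   # value y_n at x = x_n
    return c

# Toy data: a metric chain (0, 1, 1) for the partition {0, 0.5, 1}.
c = chain_function([0.0, 0.5, 1.0], [(0.0,), (1.0,), (1.0,)])
print(c(0.25), c(0.75), c(1.0))   # [0.] [1.] [1.]
```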

Result 4.1

[23]  For \(F \in {\mathcal {F}}[a,b]\), all chain functions satisfy \(V_a^b(c_{\chi , \phi }) \le V_a^b(F)\) and \(\Vert c_{\chi , \phi }\Vert _\infty \le \Vert F\Vert _\infty \).

A selection s of F is called a metric selection, if there is a sequence of chain functions \(\{ c_{\chi _k, \phi _k} \}_{k \in {{\mathbb {N}}}}\) of F with \({\lim _{k \rightarrow \infty } |\chi _k| =0}\) such that

$$\begin{aligned} s(x)=\lim _{k\rightarrow \infty } c_{\chi _k, \phi _k}(x) \quad \text{ pointwisely } \text{ on } \ [a,b]. \end{aligned}$$

We denote the set of all metric selections of F by \({\mathcal {S}}(F)\).

Note that the definitions of chain functions and metric selections imply that a metric selection s of a multifunction F is constant in any open interval where the graph of s stays in the interior of \({\mathrm {Graph}}(F)\).
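
One simple way to approximate a particular metric selection numerically is to fix \(y_0\in F(a)\) and, on each refined partition, to take the metric chain in which every element is a projection of the previous one onto the next value; such a tuple is a metric chain, since consecutive entries form metric pairs. The sketch below is our illustration only and is not the construction of [23]; for the toy two-branch multifunction used in it the resulting chain functions converge pointwisely to the selection following the upper branch.

```python
import numpy as np

def F(x):
    """Toy multifunction with two separated branches; values are finite subsets of R."""
    return np.array([[-1.0 - x], [1.0 + x]])

def greedy_chain_function(F, a, b, n, y0):
    """Chain function on the uniform partition with n subintervals, for the metric chain
    in which y_{i+1} is a projection of y_i onto F(x_{i+1})."""
    xs = np.linspace(a, b, n + 1)
    ys = [np.asarray(y0, float)]
    for x in xs[1:]:
        vals = np.asarray(F(x), float)
        d = np.linalg.norm(vals - ys[-1], axis=1)
        ys.append(vals[np.argmin(d)])            # an element of Pi_{F(x)}(previous element)
    def c(t):
        i = np.searchsorted(xs, t, side='right') - 1
        return ys[min(max(i, 0), n)]
    return c

# Refining the partition: the chain functions converge pointwise to the metric
# selection s(x) = 1 + x following the upper branch (for this toy F).
for n in (4, 16, 64):
    print(n, greedy_chain_function(F, 0.0, 1.0, n, y0=[1.0])(0.5))   # [1.5]
```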

Result 4.2

[23]   Let \(F \in {\mathcal {F}}[a,b]\). Through any point \(\alpha \in {\mathrm {Graph}}(F)\) there exists a metric selection which we denote by \({\,s_\alpha }\). Moreover, F has a representation by metric selections, namely

$$\begin{aligned} F(x) = \{ s_\alpha (x) \ :\ \alpha \in {\mathrm {Graph}}(F)\}. \end{aligned}$$

Result 4.3

[23]  Let s be a metric selection of \(F \in {\mathcal {F}}[a,b]\). Then \(V_a^b(s) \le V_a^b(F)\) and \(\Vert s\Vert _\infty \le \Vert F\Vert _\infty \).

The next statements focus on local regularity properties of chain functions and metric selections. They refine results in [22] and [23].

Lemma 4.4

Let \(F\in {\mathcal {F}}[a,b]\) and let \(c_{\chi , \phi }\) be a chain function corresponding to a partition \(\chi \) and a metric chain \(\phi \) as in (6). Then for any \(x^*\in [a,b]\) we have

$$\begin{aligned} \omega ^{-}(c_{\chi , \phi },x^*,\delta ) \le \omega ^{-}(v_F,x^*,\delta +|\chi |),\quad \delta >0. \end{aligned}$$

Proof

The claim holds trivially for \(x^* = a\). So we assume that \(x^* \in (a,b]\). Let \(\chi =\{x_0,\ldots ,x_n\}\), \(a=x_0< \cdots < x_n=b\). We have \(x^*\in [x_k,x_{k+1})\) for some \(0 \le k \le n-1\) or \(x^*=x_n=b\). Take \(z \in [a,b]\) such that \(x^*-\delta \le z \le x^*\). If \(x_k \le z \le x^*\), then \(c_{\chi , \phi }(z) = c_{\chi , \phi }(x^*) = c_{\chi , \phi }(x_k)\), and thus \(|c_{\chi , \phi }(x^*)-c_{\chi , \phi }(z)| = 0\). Otherwise there is \(i< k\) such that \(x_i \le z <x_{i+1}\). By the definitions of the chain function and of the metric chain we get

$$\begin{aligned} |c_{\chi , \phi }(x^*)-c_{\chi , \phi }(z)| = |c_{\chi , \phi }(x_k)-c_{\chi , \phi }(x_i)|&\le \sum _{j=i}^{k-1} |c_{\chi , \phi }(x_{j+1})-c_{\chi , \phi }(x_j)|\\&\le \sum _{j=i}^{k-1} \mathrm {haus}\big ( F(x_{j+1}),F(x_j) \big ). \end{aligned}$$

Using the definitions of the variation of F, of \(v_F\) and of \(\omega ^-\), we continue the estimate:

$$\begin{aligned} |c_{\chi , \phi }(x^*)-c_{\chi , \phi }(z)|&\le V_{x_i}^{x_k}(F) \le V_{x_i}^{x^*}(F) = v_F(x^*)-v_F(x_i)\\&\le \omega ^{-}(v_F,x^*,x^*-x_i) \le \omega ^{-}(v_F,x^*,\delta +|\chi |). \end{aligned}$$

Taking the supremum over \(z \in [x^*-\delta , x^*]\cap [a,b]\) we obtain the claim of the lemma. \(\square \)

Lemma 4.5

Let \(F\in {\mathcal {F}}[a,b]\) and let \(c_{\chi , \phi }\) be a chain function corresponding to a partition \(\chi \) and a metric chain \(\phi \). Then for any \(x^*\in [a,b]\) we have

$$\begin{aligned} \omega ^{+}(c_{\chi , \phi },x^*,\delta ) \le 2\omega \left( v_F,x^*,2(\delta +|\chi |) \right) ,\quad \delta >0. \end{aligned}$$

Proof

If \(x^* = b\), then the claim holds trivially. So we assume that \(x^* \in [a,b)\). Let \(\chi =\{x_0,\ldots ,x_n\}\), \(a=x_0< \cdots < x_n=b\), and let \(x^*\in [x_k,x_{k+1})\) for some \(0 \le k \le n-1\). Take \(z \in [a,b]\) such that \(x^* \le z \le x^*+\delta \). There is \(i\ge k\) such that \(x_i \le z <x_{i+1}\) (if \(z=x_n\), we take \(i=n\)). By the definition of the chain function we get

$$\begin{aligned}&|c_{\chi , \phi }(x^*)-c_{\chi , \phi }(z)| = |c_{\chi , \phi }(x_k)-c_{\chi , \phi }(x_i)| \le \sum _{j=k}^{i-1} |c_{\chi , \phi }(x_{j+1})-c_{\chi , \phi }(x_j)| \\&\quad \le \sum _{j=k}^{i-1} \mathrm {haus}(F(x_{j+1}),F(x_j)) \le V_{x_k}^{x_i}(F). \end{aligned}$$

Using the definitions of the variation of F, of the variation function \(v_F\) and (2), (3), (4), (5), we obtain

$$\begin{aligned} |c_{\chi , \phi }(x^*)-c_{\chi , \phi }(z)|&\le V_{x_k}^{x_i}(F) \le V_{x_k}^{x^*}(F) + V_{x^*}^{z}(F) \le \omega ^{-}(v_F,x^*,|\chi |) + \omega ^{+}(v_F,x^*,\delta ) \\ {}&\le \omega (v_F,x^*,2|\chi |) + \omega (v_F,x^*,2\delta ) \le 2\omega \left( v_F,x^*,2(|\chi |+\delta ) \right) . \end{aligned}$$

The claim of the lemma follows by taking the supremum over \(z \in [x^*, x^*+\delta ]\cap [a,b]\). \(\square \)

Lemma 4.6

Let \(F \in {\mathcal {F}}[a,b]\) and let \(c_{\chi , \phi }\) be a chain function corresponding to a partition \(\chi \) and a metric chain \(\phi \). Then for any \(x^*\in [a,b]\) we have

$$\begin{aligned} \omega \big ( { c_{\chi , \phi } },{x^*},{\delta } \big ) \le \omega \big ( {v_F},{x^*},{\delta +2|\chi |} \big ), \quad \delta > 0. \end{aligned}$$

Proof

Let \(\chi =\{x_0,\ldots ,x_n\}\), \(a=x_0< \cdots < x_n=b\). Let \(x,z \in [x^*-\delta /2, x^*+\delta /2] \cap [a,b]\), \(x < z\). First assume that \(z \ne x_n\). In this case there exist \(k\) and \(i\) with \(0 \le k \le i \le n-1\) such that \(x\in [x_k,x_{k+1})\) and \(z \in [x_i,x_{i+1})\). We get

$$\begin{aligned} |c_{\chi , \phi }(x)-c_{\chi , \phi }(z)|&= |c_{\chi , \phi }(x_k)-c_{\chi , \phi }(x_i)| \le \sum _{j=k}^{i-1} |c_{\chi , \phi }(x_{j+1})-c_{\chi , \phi }(x_j)| \\&\le \sum _{j=k}^{i-1} \mathrm {haus}(F(x_{j+1}),F(x_j)) \\&\le V_{x_k}^{x_i}(F) \le V_{x_k}^{z}(F) = v_F(z)-v_F(x_k) \le \omega \big ( {v_F},{x^*},{\delta + 2|\chi | } \big ). \end{aligned}$$

The above inequalities hold also for \(x<z=x_n\). In the case when \(x=z\) this estimate is trivial. Taking the supremum over \(x,z \in [x^*-\delta /2, x^*+\delta /2] \cap [a,b]\) we obtain \( \omega \big ( { c_{\chi , \phi } },{x^*},{\delta } \big )\le \omega \big ( {v_F},{x^*},{\delta + 2|\chi |} \big )\). \(\square \)

Theorem 4.7

Let \(F\in {\mathcal {F}}[a,b]\), s be a metric selection of F and \(x^*\in [a,b]\). Then

$$\begin{aligned} \omega ^{-}(s,x^*,\delta ) \le \omega ^{-}(v_F,x^*,2\delta ),\quad \delta >0. \end{aligned}$$

In particular, if F is left continuous at \(x^*\), then s is left continuous at \(x^*\).

Proof

Let s be a metric selection of F. Then there exists a sequence of partitions \(\{\chi _n\}_{n \in {{\mathbb {N}}}}\) with \(|\chi _n| \rightarrow 0\), \(n \rightarrow \infty \), and a corresponding sequence of chain functions \(\{c_n\}_{n \in {{\mathbb {N}}}}\) such that \(s(x)=\lim \limits _{n\rightarrow \infty }c_n(x)\) pointwisely for all \(x\in [a,b]\). For n so large that \(|\chi _n|\le \delta \), we get by Lemma 4.4

$$\begin{aligned} \omega ^{-}(c_n,x^*,\delta ) \le \omega ^{-}(v_F,x^*,\delta +|\chi _n|) \le \omega ^{-}(v_F,x^*,2\delta ). \end{aligned}$$

Theorem 3.9 implies

$$\begin{aligned} \omega ^{-}(s,x^*,\delta ) \le \omega ^{-}(v_F,x^*,2\delta ). \end{aligned}$$

Moreover, if F is left continuous at \(x^*\) then by Propositions 3.5 and 3.3 we have \(\omega ^{-}(v_F,x^*,2\delta ) \rightarrow 0\) as \(\delta \rightarrow 0\). The latter implies that s is left continuous at \(x^*\). \(\square \)

Using Lemma 4.5 instead of Lemma 4.4 and arguing as above, we obtain

Theorem 4.8

Let \(F\in {\mathcal {F}}[a,b]\), s be a metric selection of F and \(x^*\in [a,b]\). Then

$$\begin{aligned} \omega ^{+}(s,x^*,\delta ) \le 2\omega (v_F,x^*,4\delta ), \quad \delta >0. \end{aligned}$$

Similarly, Lemma 4.6 and Theorem 3.10 lead to

Theorem 4.9

Let \(F \in {\mathcal {F}}[a,b]\), s be a metric selection of F and \(x^*\in [a,b]\). Then

$$\begin{aligned} \omega \big ( {s},{x^*},{\delta } \big ) \le \omega \big ( {v_F},{x^*},{ 2\delta } \big ), \quad \delta >0. \end{aligned}$$

In particular, if F is continuous at \(x^*\), then s is continuous at \(x^*\).

Remark 4.10

Analysing the proofs, it is not difficult to see that the estimates in Theorems 4.7-4.9 can be improved in the following way

$$\begin{aligned}&\omega ^{-}(s,x^*,\delta ) \le \omega ^{-}(v_F,x^*,\delta +\varepsilon ), \quad \omega ^{+}(s,x^*,\delta ) \le 2\omega (v_F,x^*,2\delta + \varepsilon ), \\&\omega \big ( {s},{x^*},{\delta } \big ) \le \omega \big ( {v_F},{x^*},{ \delta + \varepsilon } \big ), \quad \delta > 0, \end{aligned}$$

with an arbitrarily small \(\varepsilon >0\). Taking the supremum of both sides of the last inequality over \(x^* \in [a,b]\) we obtain

$$\begin{aligned} \omega \big ( {s},{\delta } \big ) \le \omega \big ( {v_F},{\delta +\varepsilon } \big ). \end{aligned}$$

If \(F\in \mathrm {CBV}[a,b]\), then \(v_F \in \mathrm {CBV}[a,b]\) and \(\omega (v_F, \delta )\) is continuous in \(\delta \). Taking the limit as \(\varepsilon \rightarrow 0+\) we get

$$\begin{aligned} \omega \big ( {s},{\delta } \big ) \le \omega \big ( {v_F},{\delta } \big ). \end{aligned}$$

Therefore also \(s\in \mathrm {CBV}[a,b]\).

Lemma 4.11

Let \(F\in {\mathcal {F}}[a,b]\) and let \(c_{\chi , \phi }\) be a chain function corresponding to a partition \(\chi \) and a metric chain \(\phi \). Let \(\delta > 0\) be such that \([a+\delta +|\chi |, b-\delta ] \ne \emptyset \). Then for any \(x\in [a+\delta +|\chi |, b-\delta ]\) we have

$$\begin{aligned} V_{x-\delta }^{x+\delta }(c_{\chi , \phi }) \le V_{x-\delta -|\chi |}^{x+\delta }(F) \le \omega \left( v_F,x,2(\delta +|\chi |) \right) . \end{aligned}$$

Proof

Let \(\chi =\{x_0,\ldots ,x_n\}\), \(a=x_0< \cdots < x_n=b\). By definition, \(\left( c_{\chi , \phi }(x_j),c_{\chi , \phi }(x_{j+1}) \right) \in \Pi \big ( {F(x_j)},{F(x_{j+1})} \big )\), \(j=0,\ldots ,n-1\). Thus, \(V_{x_i}^{x_k}(c_{\chi , \phi })\le V_{x_i}^{x_k}(F)\) for all \(0\le i < k \le n\). If \(x_k \le x-\delta< x+\delta < x_{k+1}\), then \(c_{\chi , \phi }(t) = c_{\chi , \phi }(x_k)\) for all \(t \in [x-\delta , x+ \delta ]\), and therefore \( V_{x-\delta }^{x+\delta }(c_{\chi , \phi })=0 \). In the case when \(x_{i-1} \le x-\delta< x_i< \cdots< x_k \le x+\delta < x_{k+1}\) we have \({c_{\chi , \phi }(x-\delta )=c_{\chi , \phi }(x_{i-1})}\), \({c_{\chi , \phi }(x+\delta )=c_{\chi , \phi }(x_{k})}\). Thus,

$$\begin{aligned} V_{x-\delta }^{x+\delta }(c_{\chi , \phi }) = V_{x_{i-1}}^{x_k}(c_{\chi , \phi })\le V_{x_{i-1}}^{x_k}(F) \le V_{x-\delta -|\chi |}^{x+\delta }(F). \end{aligned}$$

For the second inequality, we continue the estimate as follows:

$$\begin{aligned} V_{x-\delta }^{x+\delta }(c_{\chi , \phi }) \le V_{x-\delta -|\chi |}^{x+\delta }(F) = v_F(x + \delta ) - v_F(x - \delta - |\chi |) \le \omega \left( v_F,x,2(\delta +|\chi |) \right) . \end{aligned}$$

\(\square \)

Theorem 4.12

Let \(F\in {\mathcal {F}}[a,b]\) and let s be a metric selection of F. Then for all sufficiently small \(\delta > 0\) and all \(x\in [a+2\delta , b-\delta ]\) we have

$$\begin{aligned} V_{x-\delta }^{x+\delta }(s) \le V_{x-2\delta }^{x+\delta }(F) \le \omega \left( v_F,x,4\delta \right) . \end{aligned}$$

Proof

Since s is a metric selection, there exists a sequence of partitions \(\{\chi _n \}_{n \in {{\mathbb {N}}}}\) with \(|\chi _n| \rightarrow 0\), \(n \rightarrow \infty \), and a corresponding sequence of chain functions \(\{c_n\}_{n \in {{\mathbb {N}}}}\) such that \(s(x)=\lim \limits _{n\rightarrow \infty }c_n(x)\) pointwisely. Take n so large that \(|\chi _n| < \delta \), then by Lemma 4.11 we have \( { V_{x-\delta }^{x+\delta }(c_n) \le V_{x-\delta -|\chi _n|}^{x+\delta }(F) \le V_{x-2\delta }^{x+\delta }(F) } \). In view of Theorem 3.8 we get \(V_{x-\delta }^{x+\delta }(s) \le V_{x-2\delta }^{x+\delta }(F) \le \omega \left( v_F,x,4\delta \right) \). \(\square \)

The statement of Theorem 4.12 can be improved in the same manner as in Remark 4.10. Namely, the estimate

$$\begin{aligned} V_{x-\delta }^{x+\delta }(s) \le V_{x-\delta -\varepsilon }^{x+\delta }(F) \le \omega \left( v_F,x,2\delta + \varepsilon \right) \end{aligned}$$

holds with an arbitrarily small \(\varepsilon > 0\).

The next result was announced in [23, Lemma 3.9] without a detailed proof. Although the result is intuitively clear, its proof is rather complicated. We present the full proof in Appendix A.

Theorem 4.13

For \(F\in {\mathcal {F}}[a,b]\), the pointwise limit of a sequence of metric selections of F is a metric selection of F.

5 Weighted Metric Integral

The well-known Aumann integral [5] of a multifunction F is defined as

$$\begin{aligned} \int _a^b F(x)dx = \left\{ \int _a^b s(x)dx \ : \ s\ \text{ is } \text{ an } \text{ integrable } \text{ selection } \text{ of }\ F \right\} . \end{aligned}$$
(7)

Everywhere in this context the integral of a function \(f : [a,b] \rightarrow {{{\mathbb {R}}}}^d\) is understood componentwise.

It is known that the Aumann integral is convex for each function \(F \in {\mathcal {F}}[a,b]\), even if the values of F are not convex. Moreover,

$$\begin{aligned} \int _a^b F(x)dx = \int _a^b \mathrm {co}\big ( F(x) \big ) dx, \quad \int _a^b w(x)Adx =\left( \int _a^b w(x)dx \right) \; \mathrm {co}(A), \end{aligned}$$
(8)

where \(A \in \mathrm {K}({{{\mathbb {R}}}}^d)\) and \(w(x) \ge 0\), \(x \in [a,b]\).

The metric integral of SVFs was introduced in [23]. In contrast to the Aumann integral, the metric integral is free of the undesired effect of convexification. We recall its definition. First we define the metric Riemann sums. For a multifunction \(F:[a,b] \rightarrow \mathrm {K}({{{\mathbb {R}}}}^d)\) and for a partition \(\chi =\{x_0,\ldots ,x_n\}\), \({a=x_0<x_1<\cdots < x_n=b}\), the metric Riemann sum of F is defined by

$$\begin{aligned} {\scriptstyle ({\mathcal {M}})} S_{\chi } F = \bigoplus _{i=0}^{n-1}(x_{i+1}-x_i)F(x_i). \end{aligned}$$

Definition 5.1

[23] The metric integral of F is defined as the Kuratowski upper limit of metric Riemann sums corresponding to partitions with norms tending to zero, namely,

$$\begin{aligned} {\scriptstyle ({\mathcal {M}})}\int _{a}^{b}F(x)dx= \limsup _{|\chi | \rightarrow 0} {\scriptstyle ({\mathcal {M}})} S_{\chi } F. \end{aligned}$$

The upper limit here is understood in the following sense: \(y \in \limsup _{|\chi | \rightarrow 0} {\scriptstyle ({\mathcal {M}})} S_{\chi } F\) if there is a sequence of partitions \(\{ \chi _n\}_{n \in {{\mathbb {N}}}}\) with \(|\chi _n| \rightarrow 0\), \(n \rightarrow \infty \), and a sequence \(\{y_n\}_{n \in {{\mathbb {N}}}}\) such that \(y_n \in {\scriptstyle ({\mathcal {M}})} S_{\chi _n} F\) and \(y_n \rightarrow y\), \(n \rightarrow \infty \).

It is easy to see that the set \({\scriptstyle ({\mathcal {M}})}\int _{a}^{b}F(x)dx\) is non-empty if F has a bounded range.

The following result from [23] relates the metric integral of \(F\in {\mathcal {F}}[a,b]\) to its metric selections.

Result 5.2

[23] Let \(F \in {\mathcal {F}}[a,b]\). Then \( {\scriptstyle ({\mathcal {M}})}\int _{a}^{b}F(x)dx=\left\{ \int _{a}^{b} {s} (x)dx \ :\ s \in {\mathcal {S}}(F) \right\} . \)

In this section we define an extension of the metric integral, namely, the weighted metric integral.

For a set-valued function \(F:[a,b] \rightarrow \mathrm {K}({{{\mathbb {R}}}}^d)\), a weight function \(k:[a,b] \rightarrow {{\mathbb {R}}}\) and for a partition \(\chi =\{x_0,\ldots ,x_n\}\), \({a=x_0<x_1<\cdots < x_n=b}\), we define the weighted metric Riemann sum of F by

$$\begin{aligned} {\scriptstyle ({\mathcal {M}}_k)} S_{\chi } F&= \left\{ \sum _{i=0}^{n-1} (x_{i+1}-x_i)k(x_i) y_i \ : \ (y_0,\ldots ,y_{n-1}) \in {\mathrm {CH}}(F(x_0),\ldots ,F(x_{n-1})) \right\} \\&= \bigoplus _{i=0}^{n-1}(x_{i+1}-x_i)k(x_i) F(x_i). \end{aligned}$$

Remark 5.3

The elements of \( {\scriptstyle ({\mathcal {M}}_k)} S_{\chi } F \) are of the form \( \int _{a}^{b} k_\chi (x) c_{\chi ,\phi }(x) dx \), where \(c_{\chi , \phi }\) is a chain function based on the partition \(\chi \) and a metric chain \(\phi = (y_0, \ldots , y_n) \in {\mathrm {CH}}\left( F(x_0), \ldots ,F(x_{n})\right) \), and \(k_\chi \) the piecewise constant function defined by

$$\begin{aligned} k_\chi (x)= \left\{ \begin{array}{ll} k(x_i), &{} x \in [x_i,x_{i+1}), \quad i=0,\ldots ,n-1,\, \\ k(x_n), &{} x=x_n. \end{array} \right. \end{aligned}$$
(9)
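
For a fixed partition and finite values of F the weighted metric Riemann sum can be enumerated directly from the metric chains. The following sketch is purely illustrative (the names weighted_metric_riemann_sum, _proj, _metric_pairs and the toy data are ours); taking \(k\equiv 1\) recovers the unweighted metric Riemann sum preceding Definition 5.1.

```python
import numpy as np

def _proj(y, B):
    B = np.asarray(B, float)
    d = np.linalg.norm(B - np.asarray(y, float), axis=1)
    return [tuple(b) for b in B[np.isclose(d, d.min())]]

def _metric_pairs(A, B):
    P = {(tuple(map(float, a)), b) for a in A for b in _proj(a, B)}
    P |= {(a, tuple(map(float, b))) for b in B for a in _proj(b, A)}
    return P

def weighted_metric_riemann_sum(F, k, chi):
    """(M_k)S_chi F = { sum_i (x_{i+1}-x_i) k(x_i) y_i : (y_0,...,y_{n-1}) a metric chain }."""
    sets = [F(x) for x in chi[:-1]]
    chains = [[tuple(map(float, y))] for y in sets[0]]
    for A, B in zip(sets, sets[1:]):
        P = _metric_pairs(A, B)
        chains = [c + [b] for c in chains for (a, b) in P if a == c[-1]]
    dx = np.diff(chi)
    return {tuple(float(v) for v in sum(d * k(x) * np.array(y)
                                        for d, x, y in zip(dx, chi[:-1], ch)))
            for ch in chains}

# Toy data: F(x) = {-1-x, 1+x} (two branches in R), weight k(x) = cos(x).
F = lambda x: [[-1.0 - x], [1.0 + x]]
chi = np.linspace(0.0, 1.0, 5)
print(weighted_metric_riemann_sum(F, np.cos, chi))         # two points, one per branch
print(weighted_metric_riemann_sum(F, lambda x: 1.0, chi))  # k = 1: the (unweighted) metric Riemann sum
```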

We define the weighted metric integral of F as the Kuratowski upper limit of weighted metric Riemann sums.

Definition 5.4

The weighted metric integral of F with the weight function k is defined by

$$\begin{aligned} {\scriptstyle ({\mathcal {M}}_{k})}\int _{a}^{b}k(x)F(x)dx= \limsup _{|\chi | \rightarrow 0} {\scriptstyle ({\mathcal {M}}_k)} S_{\chi } F. \end{aligned}$$

The set \({\scriptstyle ({\mathcal {M}}_{k})}\int _{a}^{b}k(x)F(x)dx\) is non-empty whenever the SVF kF has a bounded range.

Observe that the weighted metric integral of F with the weight k is not the metric integral of the multifunction kF. The difference is that the metric chains in Definition 5.4 are constructed from the values of F, and not from those of kF, as would be the case for the latter.

In the remaining part of this section we extend results obtained for the metric integral in [23] to the weighted metric integral.

Remark 5.5

It is possible to define a “right” weighted metric Riemann sum as

$$\begin{aligned} {\scriptstyle ({\mathcal {M}}_k)} {\widetilde{S}}_{\chi } F = \bigoplus _{i=0}^{n-1}(x_{i+1}-x_i)k(x_{i+1}) F(x_{i+1}), \end{aligned}$$

and a corresponding weighted metric integral. For BV functions F and k, this integral is identical with \({\scriptstyle ({\mathcal {M}}_{k})}\int _{a}^{b}k(x)F(x)dx\). This can be concluded from the following lemma.

Lemma 5.6

Let \(F, k \in \mathrm {BV}[a,b]\). Then

$$\begin{aligned} \mathrm {haus}\left( {\scriptstyle ({\mathcal {M}}_k)} {\widetilde{S}}_{\chi } F,{\scriptstyle ({\mathcal {M}}_k)} S_{\chi } F \right) \le |\chi | \left( \Vert k\Vert _\infty \, V_a^b(F) + \Vert F\Vert _\infty \, V_a^b(k) \right) . \end{aligned}$$

Proof

Fix a partition \(\chi \) and consider a corresponding chain \(\phi =(y_0,\ldots ,y_n)\in {\mathrm {CH}}(F(x_0), \ldots ,F(x_n) ) \). We have

$$\begin{aligned}&\mathrm {haus}\left( {\scriptstyle ({\mathcal {M}}_k)} {\widetilde{S}}_{\chi } F, {\scriptstyle ({\mathcal {M}}_k)} S_{\chi } F \right) \\ {}&\quad \le \sup \left\{ \left| \sum _{i=0}^{n-1} k(x_{i+1})y_{i+1}(x_{i+1}-x_i)-\sum _{i=0}^{n-1} k(x_i)y_i(x_{i+1}-x_i)\right| \right. \\&\qquad \qquad \left. \ : \ \phi \in {\mathrm {CH}}(F(x_0), \ldots ,F(x_n)) \right\} \\ {}&\quad \le \sup \left\{ \sum _{i=0}^{n-1} \left| k(x_{i+1})y_{i+1}-k(x_i)y_i \right| (x_{i+1}-x_i) \ : \ \phi \in {\mathrm {CH}}(F(x_0), \ldots ,F(x_n)) \right\} . \end{aligned}$$

Since

$$\begin{aligned} |k(x_{i+1})y_{i+1}-k(x_i)y_i|&\le |k(x_{i+1})y_{i+1}-k(x_{i+1})y_i|+|k(x_{i+1})y_i-k(x_i)y_i| \\ {}&\le \Vert k\Vert _\infty \, \mathrm {haus}(F(x_{i+1}),F(x_i)) + \Vert F\Vert _\infty \, |k(x_{i+1})-k(x_i)|, \end{aligned}$$

the desired estimate follows. \(\square \)

The next theorem is an extension of Result 5.2 to the weighted metric integral.

Theorem 5.7

Let \(F \in {{\mathcal {F}}}[a,b]\) and \(k \in \mathrm {BV}[a,b]\). Then

$$\begin{aligned} {\scriptstyle ({\mathcal {M}}_{k})}\int _{a}^{b}k(x)F(x)dx=\left\{ \int _{a}^{b}k(x)s(x)dx \ : \ s \in {\mathcal {S}}(F) \right\} . \end{aligned}$$

Proof

By Result 4.3, every metric selection s of \(F \in {{\mathcal {F}}}[a,b]\) is BV, and thus ks is Riemann integrable. Denote \(I=\left\{ \int _{a}^{b}k(x)s(x)dx \ : s \in {\mathcal {S}}(F) \right\} \).

We first show that \(I \subseteq {\scriptstyle ({\mathcal {M}}_{k})}\int _{a}^{b}k(x)F(x)dx\). Let s be a metric selection of F. Then s is the pointwise limit of a sequence of chain functions \(\{c_n\}_{n\in {{\mathbb {N}}}}\) corresponding to partitions \(\{\chi _n\}_{n\in {{\mathbb {N}}}}\) with \(\lim _{n \rightarrow \infty } |\chi _n| =0\). Denote \(k_n=k_{\chi _n}\) (see (9)) and \(\sigma _n = \int _a^b k_n(x)c_n(x)dx\). By Remark 5.3, \(\sigma _n \in {\scriptstyle ({\mathcal {M}}_k)} S_{\chi _n} F\).

Clearly, \(\Vert k_n\Vert _\infty \le \Vert k\Vert _\infty \) and \(V_a^b(k_n) \le V_a^b(k)\). By Helly’s Selection Principle there exists a subsequence \(\{k_{n_\ell }\}_{\ell \in {{\mathbb {N}}}}\) that converges pointwisely to a certain function \(k^*\). For simplicity we denote this subsequence by \(\{k_n\}_{n \in {{\mathbb {N}}}}\) again. It is easy to see that \(k^*(x)=k(x)\) at all points of continuity of k. Indeed, for a partition \(\chi _n\) there is an index \(i_n\) such that \(x\in [x_{i_n},x_{i_n+1})\), where \(x_{i_n}\) and \(x_{i_n+1}\) are consecutive points in \(\chi _n\). By (9) we get

$$\begin{aligned} |k_n(x)-k(x)|=|k_n(x_{i_n})-k(x)| = |k(x_{i_n})-k(x)| \le \omega \big ( {k},{x},{|\chi _n|} \big ). \end{aligned}$$

Thus \(\lim _{n \rightarrow \infty } k_n(x)c_n(x) = k(x)s(x)\) at all points of continuity of k. Note that since k is BV, it has at most countably many points of discontinuity in \([a,b]\). By Result 4.1 and the Lebesgue Dominated Convergence Theorem we obtain

$$\begin{aligned} \int _{a}^{b}k(x)s(x)dx = \lim _{n \rightarrow \infty } \int _{a}^{b}k_{n}(x)c_{n}(x)dx = \lim _{n \rightarrow \infty } \sigma _{n} \in {\scriptstyle ({\mathcal {M}}_{k})}\int _{a}^{b}k(x)F(x)dx. \end{aligned}$$

It remains to show the converse inclusion \({\scriptstyle ({\mathcal {M}}_{k})}\int _{a}^{b}k(x)F(x)dx\subseteq I\). Let \({\sigma \in {\scriptstyle ({\mathcal {M}}_{k})}\int _{a}^{b}k(x)F(x)dx}\). There exists a sequence \(\{\sigma _n\}_{n\in {{\mathbb {N}}}}\), \({ \sigma _n \in {\scriptstyle ({\mathcal {M}}_k)} S_{\chi _n} F }\), such that \({\displaystyle \sigma = \lim _{n\rightarrow \infty } \sigma _n }\). By Remark 5.3 we have \(\sigma _n = \int _{a}^{b}k_n(x)c_n(x)dx\). Applying Helly’s Selection Principle twice in succession, we conclude that there is a subsequence \(\{ k_{n_\ell } \}_{\ell \in {{\mathbb {N}}}}\) that converges pointwisely to a certain function \(k^*\), and then there is a subsequence \(\left\{ c_{n_{\ell _m}} \right\} _{m \in {{\mathbb {N}}}}\) that converges pointwisely to a certain function s. By definition, \(s\in {\mathcal {S}}(F)\). It follows from Result 4.1 and the Lebesgue Dominated Convergence Theorem that

$$\begin{aligned} \sigma = \lim _{ m \rightarrow \infty } \sigma _{n_{\ell _m}} = \lim _{ m \rightarrow \infty } \int _{a}^{b}k_{n_{\ell _m}}(x)c_{n_{\ell _m}}(x)dx= \int _{a}^{b}k^*(x)s(x)dx=\int _{a}^{b}k(x)s(x)dx, \end{aligned}$$

which completes the proof. \(\square \)

Theorem 5.7, (7) and (8) yield the following statement.

Corollary 5.8

Under the assumptions of Theorem 5.7 we have

$$\begin{aligned} {\scriptstyle ({\mathcal {M}}_{k})}\int _{a}^{b}k(x)F(x)dx\subseteq \int _{a}^{b}k(x)F(x)dx. \end{aligned}$$
(10)

Moreover,

$$\begin{aligned} \mathrm {co}\left( {\scriptstyle ({\mathcal {M}}_{k})}\int _{a}^{b}k(x)F(x)dx\right) \subseteq \int _{a}^{b}k(x)F(x)dx. \end{aligned}$$

Corollary 5.8 implies the following “inclusion property” of the weighted metric integral.

Proposition 5.9

For \(F \in {{\mathcal {F}}}[a,b]\) and \(k \in \mathrm {BV}[a,b]\) we have

$$\begin{aligned} \int _{a}^{b}k(x)dx \left( \bigcap _{x\in [a,b]}F(x) \right) \subseteq {\scriptstyle ({\mathcal {M}}_{k})}\int _{a}^{b}k(x)F(x)dx\subseteq (b-a) \, \mathrm {co}\left( \bigcup _{x\in [a,b]} k(x)F(x) \right) . \end{aligned}$$
(11)

Moreover, if \(k(x) \ge 0\) , \(x\in [a,b]\) and \(\int _{a}^{b}k(x)dx \ne 0\) then

$$\begin{aligned} \bigcap _{x\in [a,b]}F(x) \subseteq \frac{{\scriptstyle ({\mathcal {M}}_{k})}\int _{a}^{b}k(x)F(x)dx}{\int _{a}^{b}k(x)dx} \subseteq \mathrm {co}\left( \bigcup _{x\in [a,b]} F(x) \right) . \end{aligned}$$
(12)

Proof

First we prove the left inclusion in (11). If \(\bigcap _{x\in [a,b]}F(x) = \emptyset \) then there is nothing to prove. Suppose \(\bigcap _{x\in [a,b]}F(x) \ne \emptyset \). Let \( p \in \bigcap _{x\in [a,b]}F(x)\). Then \(s(x)\equiv p\), \(x\in [a,b]\), is a metric selection of F, since for any partition \(\chi \) the function \(c_{\chi ,\phi }(x) \equiv p\) is a chain function corresponding to the chain \(\phi =(p,\ldots ,p)\). Therefore,

$$\begin{aligned} p\int _{a}^{b}k(x)dx = \int _{a}^{b}k(x)s(x)dx \in {\scriptstyle ({\mathcal {M}}_{k})}\int _{a}^{b}k(x)F(x)dx. \end{aligned}$$

To show the right inclusion in (11), we use (10) and (8) and write

$$\begin{aligned} {\scriptstyle ({\mathcal {M}}_{k})}\int _{a}^{b}k(x)F(x)dx\subseteq & {} \int _{a}^{b}k(x)F(x)dx \subseteq \int _{a}^{b}\left( \bigcup _{x\in [a,b]} k(x)F(x) \right) dx \\= & {} (b-a)\, \mathrm {co}\left( \bigcup _{x\in [a,b]} k(x)F(x) \right) . \end{aligned}$$

In the case when \(k(x) \ge 0\) and \(\int _{a}^{b}k(x)dx \ne 0\), the left inclusion in (12) follows directly from (11). To prove the right inclusion in (12), we start with (10). Denoting \(R=\bigcup _{x\in [a,b]} F(x) \in \mathrm {K}({{{\mathbb {R}}}}^d)\) we get in view of the second property in (8)

$$\begin{aligned} {\scriptstyle ({\mathcal {M}}_{k})}\int _{a}^{b}k(x)F(x)dx\subseteq \int _{a}^{b}k(x)F(x)dx \subseteq \int _{a}^{b}k(x)Rdx = \left( \int _{a}^{b} k(x) dx \right) \; \mathrm {co}(R), \end{aligned}$$

and the right inclusion follows. \(\square \)

Note that the middle set in (12) is a weighted average of F(x) on \([a,b]\). Proposition 5.9 says that it contains the intersection of the sets \(\{F(x)\}_{x\in [a,b]}\) and is contained in the convex hull of their union.

Proposition 5.10

Let \(F \in {{\mathcal {F}}}[a,b]\) and \(k \in \mathrm {BV}[a,b]\). The set \({\scriptstyle ({\mathcal {M}}_{k})}\int _{a}^{b}k(x)F(x)dx\) is compact.

Proof

Since F and k are both bounded, the set \({\scriptstyle ({\mathcal {M}}_{k})}\int _{a}^{b}k(x)F(x)dx\) is bounded. To prove the proposition, it suffices to show that it is closed. Consider a convergent sequence \(\{v_n\}_{n \in {{\mathbb {N}}}} \subset {\scriptstyle ({\mathcal {M}}_{k})}\int _{a}^{b}k(x)F(x)dx\). Let \(\displaystyle \ v=\lim _{ n \rightarrow \infty }v_n \). By Theorem 5.7 we have \(v_n=\int _{a}^{b}k(x)s_n(x)dx\) for some \(s_n \in {\mathcal {S}}(F)\). The sequence \(\{s_n\}_{n\in {{\mathbb {N}}}}\) is uniformly bounded and of uniformly bounded variation. By Helly’s Selection Principle there exists a subsequence \(\{s_{n_\ell }\}_{\ell \in {{\mathbb {N}}}}\) which converges pointwisely to a certain function \(s^\infty \) as \(\ell \rightarrow \infty \). By Theorem 4.13, \(s^\infty \) is a metric selection. Clearly, \(\lim _{\ell \rightarrow \infty } k(x)s_{n_\ell }(x) = k(x)s^\infty (x)\) pointwisely. Applying the Lebesgue Dominated Convergence Theorem we get

$$\begin{aligned} \int _{a}^{b}k(x)s^\infty (x)dx = \lim _{\ell \rightarrow \infty } \int _{a}^{b}k(x)s_{n_\ell }(x)dx = \lim _{\ell \rightarrow \infty }v_{n_\ell }=v, \end{aligned}$$

and thus \(v \in {\scriptstyle ({\mathcal {M}}_{k})}\int _{a}^{b}k(x)F(x)dx\). \(\square \)

6 The Metric Fourier Approximation of SVFs of Bounded Variation

6.1 On Fourier Approximation of Real-Valued Functions of Bounded Variation

First we present the classical material relevant to our study of SVFs.

For a \(2\pi \)-periodic real-valued function \(f : {{\mathbb {R}}}\rightarrow {{\mathbb {R}}}\) which is integrable over the period, its Fourier series is

$$\begin{aligned} f(x) \sim \frac{1}{2} a_0 + \sum _{k=1}^\infty (a_k \cos {kx} + b_k \sin {kx}), \end{aligned}$$

where

$$\begin{aligned} a_k= & {} a_k(f) = \frac{1}{\pi } \int _{-\pi }^{\pi } f(t) \cos {kt} dt, \quad k = 0,1,\ldots , \quad \text{ and } \nonumber \\ b_k= & {} b_k(f) = \frac{1}{\pi } \int _{-\pi }^{\pi } f(t) \sin {kt} dt, \quad k = 1,2,\ldots . \end{aligned}$$
(13)

Following the classical theory of Fourier series, we introduce the Dirichlet kernel (see e.g. [39, Chapter II])

$$\begin{aligned} D_n(x) = \frac{1}{2} + \sum _{k=1}^n \cos {kx} = \frac{\sin {\left( n + \frac{1}{2} \right) x } }{ 2 \sin { \left( \frac{1}{2}x \right) }} , \quad x \in {{\mathbb {R}}}. \end{aligned}$$

For the partial sums of the Fourier series one has the well-known representation

$$\begin{aligned} {\mathscr {S}}_n f (x)= & {} \frac{1}{2} a_0 + \sum _{k=1}^n (a_k \cos {kx} + b_k \sin {kx}) = \frac{1}{\pi } \int _{-\pi }^{\pi } D_n(x-t) f(t) dt\nonumber \\= & {} \frac{1}{\pi } \int _{-\pi }^{\pi } \partial _{n,x}(t) f(t) dt, \end{aligned}$$
(14)

where \(\partial _{n,x}(t) = D_n(x-t)\).
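
For numerical experiments the partial sums can be computed either from the coefficients (13) or from the convolution representation (14); up to quadrature error the two routes agree. The sketch below (ours; the quadrature is a plain Riemann sum on a uniform grid) uses the BV function \(\mathrm {sign}\,x\), for which the behaviour predicted by the Dirichlet-Jordan Theorem stated next can be observed.

```python
import numpy as np

def fourier_coeffs(f, n, m=4096):
    """Approximations of the Fourier coefficients (13) of a 2*pi-periodic f (Riemann sums)."""
    t = np.linspace(-np.pi, np.pi, m, endpoint=False)
    ft, dt = f(t), 2 * np.pi / m
    a = [np.sum(ft * np.cos(k * t)) * dt / np.pi for k in range(n + 1)]
    b = [np.sum(ft * np.sin(k * t)) * dt / np.pi for k in range(n + 1)]
    return a, b

def partial_sum(f, x, n):
    """S_n f(x) computed from the coefficients."""
    a, b = fourier_coeffs(f, n)
    return a[0] / 2 + sum(a[k] * np.cos(k * x) + b[k] * np.sin(k * x) for k in range(1, n + 1))

def partial_sum_kernel(f, x, n, m=4096):
    """The same partial sum via convolution with the Dirichlet kernel, cf. (14)."""
    t = np.linspace(-np.pi, np.pi, m, endpoint=False)
    Dn = 0.5 + sum(np.cos(k * (x - t)) for k in range(1, n + 1))
    return np.sum(Dn * f(t)) * (2 * np.pi / m) / np.pi

# sign(x) is of bounded variation with a jump at 0; S_n f(0) stays close to
# (f(0-) + f(0+))/2 = 0, while S_n f(1) approaches f(1) = 1 as n grows.
for n in (5, 20, 80):
    print(n, partial_sum(np.sign, 0.0, n), partial_sum(np.sign, 1.0, n), partial_sum_kernel(np.sign, 1.0, n))
```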

A basic result on the convergence of Fourier series of real-valued functions of bounded variation is the Dirichlet-Jordan Theorem (e.g., [39, Chapter II, (8.1) Theorem]).

Dirichlet-Jordan Theorem. Let \(f : {{\mathbb {R}}}\rightarrow {{\mathbb {R}}}\) be a \(2\pi \)-periodic function of bounded variation on \([-\pi ,\pi ]\). Then at every point x

$$\begin{aligned} \lim _{n \rightarrow \infty } {{\mathscr {S}}_nf(x)} = \frac{1}{2}( f(x-0) + f(x+0)). \end{aligned}$$

In particular, \({\mathscr {S}}_nf\) converges to f at every point of continuity of f. If f is continuous at every point of a closed interval I, then the convergence is uniform in I.

Following [39, Chapter II], we introduce the so-called modified Dirichlet kernel

$$\begin{aligned} D^*_n(x) = \frac{1}{2} + \sum _{k=1}^{n-1} \cos {kx} + \frac{1}{2} \cos {nx} = \frac{1}{2} \sin {nx} \cot { \left( \frac{1}{2}x \right) } , \quad x \in {{\mathbb {R}}}, \end{aligned}$$
(15)

and the modified Fourier sum

$$\begin{aligned} {\mathscr {S}}^*_n f (x) = \frac{1}{\pi } \int _{-\pi }^{\pi } D^*_n(x-t) f(t) dt. \end{aligned}$$

Clearly,

$$\begin{aligned} D_n(x) - D^*_n(x) = \frac{1}{2} \cos {nx}, \end{aligned}$$
(16)

and

$$\begin{aligned} \frac{1}{\pi } \int _{-\pi }^{\pi } D_n(x) dx = \frac{1}{\pi } \int _{-\pi }^{\pi } D^*_n(x) dx = 1. \end{aligned}$$
(17)

We also need the next result that follows immediately from [39, Chapter II, (4.12) Theorem].

Lemma 6.1

Let \(f \in \mathrm {BV}[-\pi ,\pi ]\). Then its Fourier coefficients (13) satisfy the estimate

$$\begin{aligned} |a_n(f)| \le \frac{2V_{-\pi }^{\pi }(f)}{\pi n}, \quad |b_n(f)| \le \frac{2V_{-\pi }^{\pi }(f)}{\pi n}, \quad n \in {{\mathbb {N}}}. \end{aligned}$$
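As a sanity check of Lemma 6.1, one may compute the Fourier coefficients of a concrete BV function numerically and compare them with the bound \(2V_{-\pi }^{\pi }(f)/(\pi n)\). In the sketch below the step function, its variation \(V = 3\) and the grid are illustrative assumptions.

```python
import numpy as np

# Numerical check of Lemma 6.1 for the BV step f = -1 on [-pi, 0.5), f = 2 on [0.5, pi],
# whose variation on [-pi, pi] is V = 3
t = np.linspace(-np.pi, np.pi, 200001)
f = np.where(t < 0.5, -1.0, 2.0)
V = 3.0

def trapz(y, t):
    # composite trapezoidal rule
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

for n in (1, 5, 20, 100):
    a_n = trapz(f * np.cos(n * t), t) / np.pi
    b_n = trapz(f * np.sin(n * t), t) / np.pi
    print(n, abs(a_n), abs(b_n), 2 * V / (np.pi * n))   # |a_n|, |b_n| stay below the bound
```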

A further property of the kernel \(D^*_n\) which can be found in [39, Chapter II, (8.2) Lemma] is

Lemma 6.2

There is a constant \(C > 0\) such that for all \(\xi \in [0,\pi ]\) and all \(n \in {{\mathbb {N}}}\)

$$\begin{aligned} \left| \frac{2}{\pi } \int _0^\xi D^*_n(x) dx \right| \le C. \end{aligned}$$
(18)

Remark 6.3

Analyzing the proof of this statement in [39, Chapter II, (8.2) Lemma], one sees that one can take \(C = 2\), i.e.

$$\begin{aligned} \left| \frac{2}{\pi } \int _0^\xi D^*_n(x) dx \right| \le 2, \quad \xi \in [0,\pi ], \quad n \in {{\mathbb {N}}}. \end{aligned}$$
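Both the closed form in (15) and the uniform bound (18) with \(C = 2\) are easy to probe numerically. The sketch below (the grid size and the values of n are arbitrary choices) evaluates \(D^*_n\) by its sum form, compares it with \(\frac{1}{2} \sin {nx} \cot { \left( \frac{1}{2}x \right) }\), and tabulates the partial integrals \(\frac{2}{\pi } \int _0^\xi D^*_n(x) dx\), which indeed stay below 2.

```python
import numpy as np

def D_star(n, x):
    # Modified Dirichlet kernel (15): 1/2 + sum_{k=1}^{n-1} cos(kx) + (1/2) cos(nx)
    k = np.arange(1, n)
    return 0.5 + np.cos(np.outer(x, k)).sum(axis=1) + 0.5 * np.cos(n * x)

dx = np.pi / 20000
x = np.linspace(dx, np.pi, 20000)
for n in (3, 10, 50):
    Dst = D_star(n, x)
    closed_form = 0.5 * np.sin(n * x) / np.tan(0.5 * x)     # right-hand side of (15)
    partials = (2.0 / np.pi) * np.cumsum(Dst) * dx          # ~ (2/pi) * int_0^xi D*_n(t) dt
    print(n,
          np.max(np.abs(Dst - closed_form)),                # identity (15): essentially zero
          np.max(np.abs(partials)))                         # stays below 2, cf. (18), Remark 6.3
```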

6.2 Local Quasi-Moduli and Error Bounds for Fourier Approximation of Real-Valued BV Functions

It is known that functions of bounded variation with values in an arbitrary complete metric space \((X,\rho )\) are not necessarily continuous, but have right and left limits at every point [15]. To study such functions, we introduce left and right local quasi-moduli suited to discontinuous functions of bounded variation.

Definition 6.4

For a function \(f : [a,b] \rightarrow X\) of bounded variation and \(x^* \in (a,b]\) we define the left local quasi-modulus

$$\begin{aligned} \varpi ^{-}\big ( {f},{x^*},{\delta } \big ) = \sup { \big \{ \rho (f(x^*-0),f(x)) \ : \ x \in [x^*-\delta ,x^*) \cap [a,b] \big \} }, \quad \delta >0, \end{aligned}$$

and for \(x^* \in [a,b)\) the right local quasi-modulus

$$\begin{aligned} \varpi ^{+}\big ( {f},{x^*},{\delta } \big ) = \sup { \{ \rho (f(x^*+0),f(x)) \ : \ x \in (x^*,x^* + \delta ] \cap [a,b] \} }, \quad \delta >0, \end{aligned}$$

where \(\displaystyle f(x-0) = \lim _{ t \rightarrow x-0 }f(t)\), \(\displaystyle f(x+0) = \lim _{ t \rightarrow x+0 }f(t)\).
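To make the definition concrete, the following discretized sketch evaluates \(\varpi ^{-}\) and \(\varpi ^{+}\) on a grid for a scalar function with a single jump; the example function, the grids, and the form of the ordinary left local modulus used for comparison (taken here relative to \(f(x^*)\), which we assume matches (3)) are assumptions of the sketch. Both quasi-moduli tend to zero with \(\delta \), while the ordinary left modulus does not; cf. Remark 6.5(ii) below.

```python
import numpy as np

# Discretized sketch of Definition 6.4 for a scalar function with a jump at x* = 0:
# f(x) = x for x < 0 and f(x) = 1 + x for x >= 0, so f(0-0) = 0 and f(0+0) = 1.
def f(x):
    return np.where(x < 0.0, x, 1.0 + x)

f_left, f_right = 0.0, 1.0                                # f(0-0), f(0+0)
for delta in (0.5, 0.1, 0.01):
    xl = np.linspace(-delta, 0.0, 2000, endpoint=False)   # grid in [0 - delta, 0)
    xr = np.linspace(delta, 0.0, 2000, endpoint=False)    # grid in (0, 0 + delta]
    quasi_left  = np.max(np.abs(f(xl) - f_left))          # ~ varpi^-(f, 0, delta) = delta
    quasi_right = np.max(np.abs(f(xr) - f_right))         # ~ varpi^+(f, 0, delta) = delta
    plain_left  = np.max(np.abs(f(xl) - f(0.0)))          # left modulus w.r.t. f(0); stays near 1
    print(delta, quasi_left, quasi_right, plain_left)
```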

The facts given in the following remark are direct consequences of the above definitions.

Remark 6.5

Let \(f : [a,b] \rightarrow X\) be a BV function, with \(x^* \in (a,b]\) in the statements concerning the left quasi-modulus and \(x^* \in [a,b)\) in those concerning the right one.

  (i)

    If f is monotone then

    $$\begin{aligned}&\varpi ^{-}\big ( {f},{x^*},{\delta } \big ) = \rho (f(x^*-0), f(x^* - \delta )),\\&\varpi ^{+}\big ( {f},{x^*},{\delta } \big ) = \rho (f(x^*+\delta ),f(x^* + 0)).\end{aligned}$$
  (ii)

    Although at a point of discontinuity  \(x^*\) at least one of the local moduli \(\omega ^{-}\big ( {f},{x^*},{\delta } \big )\), \(\omega ^{+}\big ( {f},{x^*},{\delta } \big )\) does not tend to zero as \(\delta \) tends to zero, for the local quasi-moduli we always have

    $$\begin{aligned} \lim _{ \delta \rightarrow 0^+ } \varpi ^{-}\big ( {f},{x^*},{\delta } \big )=0, \quad \lim _{ \delta \rightarrow 0^+ } \varpi ^{+}\big ( {f},{x^*},{\delta } \big ) =0. \end{aligned}$$
  (iii)

    The left local quasi-modulus of f at a point \(x^* \in (a,b]\) coincides with the left local modulus (3) of the function

    $$\begin{aligned} {\widetilde{f}}(x) = {\left\{ \begin{array}{ll} f(x), &{} x \ne x^*,\\ f(x^*-0), &{} x = x^*. \end{array}\right. } \end{aligned}$$

    An analogous relation holds for the right local quasi-modulus. Clearly, at a point of continuity of f the one sided local quasi-moduli and the one-sided local moduli of Sect. 3 coincide.

In the next two lemmas we derive results similar to those in Sect. 4 for the local one-sided moduli.

Lemma 6.6

Let \(F \in {\mathcal {F}}[a,b]\), \(x^* \in (a,b]\) and \(c_{\chi ,\phi }\) be a chain function corresponding to a partition \(\chi \) and a metric chain \(\phi \). Then

$$\begin{aligned} \varpi ^{-}\big ( {v_{c_{\chi ,\phi }}},{x^*},{\delta } \big ) \le \varpi ^{-}\big ( {v_F},{x^*},{\delta + |\chi |} \big ), \quad \delta >0. \end{aligned}$$

Proof

We estimate \(\, \varpi ^{-}\big ( {v_{c_{\chi ,\phi }}},{x^*},{\delta } \big ) = v_{c_{\chi ,\phi }}(x^*-0) - v_{c_{\chi ,\phi }}(x^* - \delta )\). Let \({\chi = \{ a=x_0< x_1< \cdots < x_m=b\}}\). If \({x^* \not \in \chi }\), then \(x^* \in (x_{k-1},x_{k})\) for some \(1 \le k \le m\). If \(x^* \in \chi \), then \(x^* = x_k\) for some \(1 \le k \le m\). In both cases \(c_{\chi ,\phi }(x) = c_{\chi ,\phi }(x_{k-1})\) for \(x_{k-1} \le x < x^*\), so that \(c_{\chi ,\phi }(x^*-0) = c_{\chi ,\phi }(x_{k-1})\). If \(x_{k-1} \le x^* - \delta < x^*\), then \(c_{\chi ,\phi }(x^* - \delta ) = c_{\chi ,\phi }(x_{k-1})\) and \(v_{c_{\chi ,\phi }}(x^*-0) - v_{c_{\chi ,\phi }}(x^* - \delta ) = 0\). Otherwise there is \(0 \le i < k-1\) such that \(x_i \le x^* - \delta < x_{i+1}\) and \(c_{\chi ,\phi }(x^* - \delta ) = c_{\chi ,\phi }(x_{i})\). By the definitions of the metric chain and of the chain function we have

$$\begin{aligned} v_{c_{\chi ,\phi }}(x^*-0) - v_{c_{\chi ,\phi }}(x^* - \delta )&= \sum _{j=i}^{k-2} |c_{\chi ,\phi }(x_{j+1}) - c_{\chi ,\phi }(x_j)| \\&\le \sum _{j=i}^{k-2} \mathrm {haus}(F(x_{j+1}), F(x_j)) \\&\le V_{x_i}^{x_{k-1}}(F) = v_F(x_{k-1}) - v_F(x_i)\\&\le v_F(x^*-0) - v_F(x^*-\delta -|\chi |) \\&= \varpi ^{-}\big ( {v_F},{x^*},{\delta + |\chi |} \big ) \end{aligned}$$

and we obtain the claim. \(\square \)

Lemma 6.7

Let \(F \in {\mathcal {F}}[a,b]\), \(x^* \in (a,b]\) and \(s \in {\mathcal {S}}(F)\). Then

$$\begin{aligned} \varpi ^{-}\big ( {v_s},{x^*},{\delta } \big ) \le \varpi ^{-}\big ( {v_F},{x^*},{2\delta } \big ), \quad \delta >0. \end{aligned}$$

Proof

Let \(s \in {\mathcal {S}}(F)\) and \(\delta > 0\). There exists a sequence of chain functions \(\{c_n\}_{n \in {{\mathbb {N}}}}\) that corresponds to a sequence of partitions \(\{\chi _n\}_{n \in {{\mathbb {N}}}}\) with \(|\chi _n| \rightarrow 0\) as \(n \rightarrow \infty \) such that \(s(x) = \lim _{n \rightarrow \infty }{c_n(x)}\), \(x \in [a,b]\). Take \(N \in {{\mathbb {N}}}\) so large that \(|\chi _n| < \delta \) for all \(n \ge N\).

We estimate \(\varpi ^{-}\big ( {v_s},{x^*},{\delta } \big ) = v_s(x^* - 0) - v_s(x^* - \delta )\). Take \(0< t < \delta \). For each \(n \ge N\) we have by Lemma 6.6

$$\begin{aligned} V_{x^*-\delta }^{x^*-t}(c_n)=v_{c_n}(x^* - t) - v_{c_n}(x^* - \delta )\le & {} \varpi ^{-}\big ( {v_{c_n}},{x^*},{\delta } \big ) \le \varpi ^{-}\big ( {v_F},{x^*},{\delta + |\chi _n|} \big )\\\le & {} \varpi ^{-}\big ( {v_F},{x^*},{2\delta } \big ). \end{aligned}$$

By Theorem 3.8 we have

$$\begin{aligned} v_{s}(x^* - t) - v_{s}(x^* - \delta )=V_{x^*-\delta }^{x^*-t}(s) \le \liminf \limits _{n \rightarrow \infty } V_{x^*-\delta }^{x^*-t}(c_n) \le \varpi ^{-}\big ( {v_F},{x^*},{2\delta } \big ). \end{aligned}$$

Taking the limit as \(t \rightarrow 0+\) we obtain the claim. \(\square \)

Note that we cannot expect a bound for \(\varpi ^{+}\big ( {v_{c_{\chi ,\phi }}},{x^*},{\delta } \big )\) in terms of \(\varpi ^{+}\big ( {v_F},{x^*},{\delta + \varepsilon } \big )\). The reason is that the definition of the chain function uses values to the left of the point \(x^*\), which cannot be controlled by \(\varpi ^{+}\big ( {v_F},{x^*},{\delta } \big )\). However, the following estimate holds true for a metric selection s.

Lemma 6.8

Let \(F \in {\mathcal {F}}[a,b]\), \(x^* \in [a,b)\) and \(s \in {\mathcal {S}}(F)\). Then

$$\begin{aligned} \varpi ^{+}\big ( {v_s},{x^*},{\delta } \big ) \le \varpi ^{+}\big ( {v_F},{x^*},{\delta } \big ), \quad \delta >0. \end{aligned}$$

Proof

Let \(s \in {\mathcal {S}}(F)\) and \(\delta > 0\). Let \(\{c_n\}_{n \in {{\mathbb {N}}}}\) be a sequence of chain functions as in the proof of Lemma 6.7. We estimate \( \varpi ^{+}\big ( {v_s},{x^*},{\delta } \big ) = v_{s}(x^* + \delta ) - v_{s}(x^* + 0)\). Take \(0< t <\delta \). There is \(N \in {{\mathbb {N}}}\) such that \(|\chi _n| < t\) for all \(n \ge N\). Then the interval \((x^*, x^* + t)\) contains at least one point of the partition \(\chi _n\), \(n \ge N\). Let \(\chi _n = \{ a=x_0^n< x_1^n< \cdots < x_{m(n)}^n=b\}\). There is \(0 \le k(n) \le m(n)-1\) such that \(x^* + t \in [x_{k(n)}^n,x_{k(n)+1}^n)\). Note that \(x_{k(n)}^n > x^*\).

If \(x^* + \delta \in [x_{k(n)}^n,x_{k(n)+1}^n)\), then \(c_n(x^* + t) = c_n(x^* + \delta ) = c_n(x_{k(n)}^n)\), so that \(v_{c_n}(x^* + \delta ) - v_{c_n}(x^* + t) = 0\). Otherwise there is \(k(n) < i(n) \le m(n)-1\) such that \(x^* + \delta \in [x_{i(n)}^n, x_{i(n)+1}^n)\), or \(x^* + \delta = b = x_{m(n)}^n\) so that \(i(n) = m(n)\). In both cases \(c_n(x^* + \delta ) = c_n(x_{i(n)}^n)\). Therefore,

$$\begin{aligned} V_{x^*+t}^{x^*+\delta }(c_n)&= v_{c_n}(x^* + \delta ) - v_{c_n}(x^* + t) = \sum _{j=k(n)}^{i(n)-1} |c_n(x_{j+1}^n) - c_n(x_j^n)|\\&\le \sum _{j=k(n)}^{i(n)-1} \mathrm {haus}(F(x_{j+1}^n), F(x_j^n)) \\&\le V_{x_{k(n)}}^{x_{i(n)}}(F) = v_F(x_{i(n)}) - v_F(x_{k(n)}) \\&\le v_F(x^* + \delta ) - v_F(x^*+0) = \varpi ^{+}\big ( {v_F},{x^*},{\delta } \big ) \end{aligned}$$

for each \(n \ge N\). By Theorem 3.8 we have

$$\begin{aligned} v_{s}(x^* + \delta ) - v_{s}(x^* + t)=V_{x^*+t}^{x^*+\delta }(s) \le \liminf \limits _{n \rightarrow \infty } V_{x^*+t}^{x^*+\delta }(c_n) \le \varpi ^{+}\big ( {v_F},{x^*},{\delta } \big ). \end{aligned}$$

Taking the limit as \(t \rightarrow 0+\) we obtain the claim. \(\square \)

In the next definition we introduce several classes of periodic vector-valued functions.

Definition 6.9

Given \(B > 0\), a point \(x \in {{\mathbb {R}}}\), a closed interval \(I \subset {{\mathbb {R}}}\) and a modulus-bounding function \(\omega \), we define the following classes of functions.

  (i)

    \({\mathscr {B}}{\mathscr {V}}_{d}\big ( {B},{x},{\omega } \big )\) is the class of all \(2\pi \)-periodic functions \(f : {{\mathbb {R}}}\rightarrow {{\mathbb {R}}}^d\) satisfying

    $$\begin{aligned} V_{-\pi }^{\pi }(f) \le B \quad \text{ and } \quad \varpi ^{-}\big ( {v_f},{x},{\delta } \big ) \le \omega (\delta ), \quad \varpi ^{+}\big ( {v_f},{x},{\delta } \big ) \le \omega (\delta ) \end{aligned}$$

    for all \(0< \delta \le \pi \).

  (ii)

    \({\mathscr {B}}{\mathscr {V}}_{d}\big ( {B},{I},{\omega } \big ) = {\displaystyle \bigcap _{z\in I}{\mathscr {B}}{\mathscr {V}}_{d}\big ( {B},{z},{\omega } \big )} \).

  (iii)

    \({\mathscr {C}}{\mathscr {B}}{\mathscr {V}}_{d}\big ( {B},{I},{\omega } \big ) = {\mathscr {B}}{\mathscr {V}}_{d}\big ( {B},{I},{\omega } \big ) \cap {{\mathcal {C}}}_d(I) \), where \({{\mathcal {C}}}_d(I)\) is the class of functions \(f : {{\mathbb {R}}}\rightarrow {{\mathbb {R}}}^d\) which are continuous on I.

Remark 6.10

It is easy to conclude from the equivalence of norms on \({{{\mathbb {R}}}}^d\) that if \(f : {{\mathbb {R}}}\rightarrow {{\mathbb {R}}}^d\), \(f = \begin{pmatrix} f_1 \\ \vdots \\ f_d \end{pmatrix}\), and \(f \in {\mathscr {B}}{\mathscr {V}}_{d}\big ( {B},{x},{\omega } \big )\), then \(f_j \in {\mathscr {B}}{\mathscr {V}}_{1}\big ( {KB},{x},{K\omega } \big )\), \(j = 1, \ldots , d\), with a constant \(K > 0\) depending only on the underlying norm on \({{{\mathbb {R}}}}^d\).

In view of Remark 6.10 we formulate the subsequent results only for functions \(f : {{\mathbb {R}}}\rightarrow {{\mathbb {R}}}\).

The theorem below is an extension of the Dirichlet-Jordan Theorem for the class \({\mathscr {B}}{\mathscr {V}}_{1}\big ( {B},{x},{\omega } \big )\). To establish the result, we carefully go through the proof of the Dirichlet-Jordan Theorem in [39, Chapter II] and examine the estimates. The proof is given in Appendix B.

Theorem 6.11

Let \(B > 0\), \(x \in {{\mathbb {R}}}\) and \(\omega \) be a modulus-bounding function. Then for all \(f \in {\mathscr {B}}{\mathscr {V}}_{1}\big ( {B},{x},{\omega } \big )\) and each \(\delta \in (0,\pi ]\) we have

$$\begin{aligned} \left| {\mathscr {S}}_n f(x) - \frac{1}{2}\big ( f(x+0) + f(x-0) \big ) \right| \le \frac{2B}{\pi n} \left( 1+6\cot \left( \frac{\delta }{2} \right) \right) +8C\omega (\delta ), \quad n\in {{\mathbb {N}}}, \end{aligned}$$
(19)

where C is the constant from Lemma 6.2.

In view of Remark 6.3 one can take \(C=2\) in (19).

The next corollary follows from the above theorem.

Corollary 6.12

Let \(B > 0\), \(x \in {{\mathbb {R}}}\) and \(\omega \) be a modulus-bounding function satisfying \(\lim _{ \delta \rightarrow 0^+ }\omega (\delta )=0\). Then

$$\begin{aligned} \lim _{ n \rightarrow \infty }\, \sup \left\{ \left| {\mathscr {S}}_n f(x) - \frac{1}{2}\big (f(x+0) + f(x-0)\big ) \right| \ : \ f\in {\mathscr {B}}{\mathscr {V}}_{1}\big ( {B},{x},{\omega } \big ) \right\} =0. \end{aligned}$$

Proof

Take an arbitrary \(\varepsilon >0\). Fix \(\delta >0\) such that \(\omega (\delta )<\frac{\varepsilon }{16C}\). Choose n large enough such that \({\frac{2B}{\pi n} \left( 1+6\cot \left( \frac{\delta }{2} \right) \right) < \frac{\varepsilon }{2} }\). Then by (19) we have

$$\begin{aligned} \left| {\mathscr {S}}_n f(x) - \frac{1}{2}\big (f(x+0) + f(x-0)\big ) \right| < \frac{\varepsilon }{2} + \frac{\varepsilon }{2} = \varepsilon \end{aligned}$$

for all \(f \in {\mathscr {B}}{\mathscr {V}}_{1}\big ( {B},{x},{\omega } \big )\), and the statement follows. \(\square \)

For \( f \in {\mathscr {B}}{\mathscr {V}}_{1}\big ( {B},{I},{\omega } \big )\) the estimate on the right-hand side of (19) does not depend on \(x \in I\). We arrive at the following statement.

Corollary 6.13

Let \(B > 0\), \(I \subset {{\mathbb {R}}}\) be a closed interval and \(\omega \) be a modulus-bounding function satisfying \({\lim _{\delta \rightarrow 0+}{\omega (\delta )} = 0}\). Then

$$\begin{aligned} \lim _{ n \rightarrow \infty }\, \sup \left\{ \left| {\mathscr {S}}_n f(x) - \frac{1}{2}\big (f(x+0) + f(x-0)\big ) \right| \ : \ x\in I, \ f\in {\mathscr {B}}{\mathscr {V}}_{1}\big ( {B},{I},{\omega } \big ) \right\} =0. \end{aligned}$$

Finally, if f is in addition continuous in I then the Fourier series of f converges to f on I, and the statement above takes the following form.

Corollary 6.14

Under the assumptions of Corollary 6.13 we have

$$\begin{aligned} \lim _{ n \rightarrow \infty }\, \sup \left\{ \left| {\mathscr {S}}_n f(x) - f(x) \right| \ : \ x \in I, \; f\in {\mathscr {C}}{\mathscr {B}}{\mathscr {V}}_{1}\big ( {B},{I},{\omega } \big ) \right\} =0. \end{aligned}$$
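Corollary 6.14 can be illustrated numerically: for a continuous \(2\pi \)-periodic BV function the sup-norm error of \({\mathscr {S}}_n f\) on a closed interval \(I \subset (-\pi ,\pi )\) decreases with n. In the sketch below the function \(|\sin t|\), the interval I and the grids are arbitrary illustrative choices.

```python
import numpy as np

def S_n(f, n, xs, m=20001):
    # Partial Fourier sums at all points xs, via the Dirichlet kernel and a trapezoidal rule
    t = np.linspace(-np.pi, np.pi, m)
    k = np.arange(1, n + 1)
    vals = []
    for x in xs:
        Dn = 0.5 + np.cos(np.outer(x - t, k)).sum(axis=1)
        y = Dn * f(t)
        vals.append(float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / (2.0 * np.pi)))
    return np.array(vals)

f = lambda t: np.abs(np.sin(t))            # continuous, 2*pi-periodic, of bounded variation
I = np.linspace(-1.0, 1.0, 41)             # a closed interval I inside (-pi, pi)
for n in (4, 16, 64):
    print(n, np.max(np.abs(S_n(f, n, I) - f(I))))   # sup-norm error on I decreases with n
```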

6.3 Extension to SVFs

We define the Fourier series of set-valued functions via the integral representation (14) using the weighted metric integral.

Definition 6.15

Let \(F:[-\pi ,\pi ] \rightarrow \mathrm {K}({{{\mathbb {R}}}}^d)\). The metric Fourier series of F is the sequence of set-valued functions \(\{{\mathscr {S}}_nF\}_{n \in {{\mathbb {N}}}}\), where \({\mathscr {S}}_nF \) is the SVF defined by

$$\begin{aligned} {\mathscr {S}}_nF (x) = \frac{1}{\pi } {\scriptstyle ({\mathcal {M}}_{\partial _{n,x}})} \int _{-\pi }^{\pi } \partial _{n,x}(t) F(t) dt, \quad x \in [-\pi ,\pi ], \quad n \in {{\mathbb {N}}}, \end{aligned}$$

whenever the integrals above exist.

For \(F \in {\mathcal {F}}[-\pi ,\pi ]\) the integrals in Definition 6.15 exist. Moreover, for fixed \(n \in {{\mathbb {N}}}\) and \({x \in {{\mathbb {R}}}}\), each \(\partial _{n,x} = D_n(x - \cdot )\) is of bounded variation on every finite interval. Hence, if \(F \in {\mathcal {F}}[-\pi ,\pi ]\), then the set-valued functions \({{\mathscr {S}}_nF}\) have compact images by Proposition 5.10. By Theorem 5.7 we have

$$\begin{aligned}&{\mathscr {S}}_nF (x) = \left\{ {\mathscr {S}}_n s (x) \ : \ s \in {\mathcal {S}}(F) \right\} = \left\{ \frac{1}{\pi } \int _{-\pi }^{\pi } D_n(x-t) s(t) dt \ : \ s \in {\mathcal {S}}(F) \right\} , \nonumber \\&x \in [-\pi ,\pi ]. \end{aligned}$$
(20)

Note that we do not expect metric selections s in this definition to be periodic. In fact, even if the set-valued function F itself is periodic, it can have metric selections that are not periodic (see Fig. 1).

Fig. 1 An example of a non-periodic metric selection of a periodic SVF

For \(F \in {\mathcal {F}}[-\pi ,\pi ]\) and \(x \in (-\pi ,\pi )\) we define

$$\begin{aligned} A_F(x) = \left\{ \frac{1}{2} \left( s(x+0) + s(x-0) \right) \ : \ s \in {\mathcal {S}}(F) \right\} . \end{aligned}$$
(21)

We show that this is the limit set of the Fourier approximants.

Proposition 6.16

Let \(F \in {\mathcal {F}}[-\pi ,\pi ]\) and \(x \in (-\pi ,\pi )\). Then there exists \(\delta _0=\delta _0(x) > 0\) such that for all \(\delta \in (0,\delta _0] \) and \(n \in {{\mathbb {N}}}\) the following estimate holds

$$\begin{aligned} \mathrm {haus}\left( {\mathscr {S}}_nF(x), A_F(x) \right) \le K \left[ \frac{ V_{-\pi }^{\pi } (F)}{n} \left( 1+6\cot \left( \frac{\delta }{2} \right) \right) + \omega (\delta ) \right] , \end{aligned}$$
(22)

where \(\omega (\delta )=\max \big \{\varpi ^{-}\big ( {v_F},{x},{2\delta } \big ) , \varpi ^{+}\big ( {v_F},{x},{\delta } \big ) \big \}\) and \(K > 0\) is a constant that depends only on the underlying norm in the space \({{\mathbb {R}}}^d\).

Proof

First we observe that \(\omega (\delta )\) is a modulus-bounding function by its definition.

Next, by (20), (21) we have

$$\begin{aligned} \mathrm {haus}\left( {\mathscr {S}}_nF(x),A_ F(x) \right) \le \sup { \left\{ \left| {\mathscr {S}}_ns(x) - \frac{1}{2} \left( s(x+0) + s(x-0) \right) \right| \ : \ s \in {\mathcal {S}}(F) \right\} }. \end{aligned}$$
(23)

Indeed, for any \(y \in {\mathscr {S}}_nF(x)\) and for any selection \(s \in {\mathcal {S}}(F)\) with \(y={\mathscr {S}}_ns(x)\), the following holds: \( \left| y - \frac{1}{2} \left( s(x+0) + s(x-0) \right) \right| \ge \mathrm {dist}(y, A_ F(x)). \) Similarly, for any \(z \in A_ F(x)\) and for any \(s \in {\mathcal {S}}(F)\) such that \(z= \frac{1}{2} \left( s(x+0) + s(x-0) \right) \) we have \(\left| z - {\mathscr {S}}_ns(x) \right| \ge \mathrm {dist}(z, {\mathscr {S}}_nF(x) )\). The last two inequalities, in view of (1), imply (23).

Let \(s \in {\mathcal {S}}(F)\), and let \({\tilde{s}}\) be the \(2\pi \)-periodic function that coincides with s on \([-\pi ,\pi )\). Clearly, \({\mathscr {S}}_n s = {\mathscr {S}}_n {\tilde{s}}\). By Result 4.3 we have \(V_{-\pi }^{\pi }( {\tilde{s}} ) \le 2 V_{-\pi }^{\pi } (F)\); the factor 2 arises from a possible jump at the point \(\pi \). Since x lies in the open interval \((-\pi ,\pi )\), there exists \(\delta _0 > 0\) such that \([x- \delta _0, x + \delta _0] \subset (-\pi ,\pi )\) and therefore \({\tilde{s}}\) coincides with s in the interval \([x- \delta _0, x + \delta _0]\). Thus by Lemmas 6.7 and 6.8

$$\begin{aligned}&\varpi ^{-}\big ( {v_{{\tilde{s}}}},{x},{\delta } \big ) \le \varpi ^{-}\big ( {v_F},{x},{2\delta } \big ) \le \omega (\delta ), \\&\varpi ^{+}\big ( {v_{{\tilde{s}}}},{x},{\delta } \big ) \le \varpi ^{+}\big ( {v_F},{x},{\delta } \big ) \le \omega (\delta ), \quad \delta \in (0,\delta _0]. \end{aligned}$$

For \(\delta > \delta _0\), we redefine \(\omega (\delta )\) in a non-decreasing way so that the estimates \(\varpi ^{-}\big ( {v_{{\tilde{s}}}},{x},{\delta } \big ) \le \omega (\delta )\), \(\varpi ^{+}\big ( {v_{{\tilde{s}}}},{x},{\delta } \big ) \le \omega (\delta )\) hold for all \(\delta \in (0,\pi ]\). We achieve this by setting \(\omega (\delta ) = 2 V_{-\pi }^{\pi }(F)\) for \(\delta _0 < \delta \le \pi \).

By Remark 6.10, there exists a constant \(K_1 > 0\) such that for each metric selection \(s \in {\mathcal {S}}(F)\), each coordinate \({\tilde{s}}_j\), \(j = 1, \ldots , d\), of its \(2\pi \)-periodization \({\tilde{s}}\) lies in the class \({\mathscr {B}}{\mathscr {V}}_{1}\big ( {2 K_1 V_{-\pi }^{\pi } (F)},{x},{ K_1\omega } \big )\). Applying Theorem 6.11 to all \({\tilde{s}}_j\), \(j = 1, \ldots , d\), we obtain for each \(s\in {\mathcal {S}}(F)\)

$$\begin{aligned}&\left| {\mathscr {S}}_ns(x) - \frac{1}{2}(s(x+0) + s(x-0)) \right| \\&\quad \le K_2 \max _{j = 1, \ldots ,d} { \left| {\mathscr {S}}_ns_j(x) - \frac{1}{2}(s_j(x+0) + s_j(x-0)) \right| } \\&\quad \le K_2 \left[ \frac{2K_1 V_{-\pi }^{\pi }(F)}{\pi n} \left( 1+6\cot \left( \frac{\delta }{2} \right) \right) + 8C K_1 \omega (\delta ) \right] , \end{aligned}$$

where the constant \(K_2 > 0\) depends only on the underlying norm in \({{\mathbb {R}}}^d\). In view of (23) the claim follows with \(K= 2K_1 K_2 \max \{\frac{1}{\pi }, 4C\}\), where C is defined in (18).

\(\square \)

The next two theorems are the main results of the paper.

Theorem 6.17

Let \(F \in {\mathcal {F}}[-\pi ,\pi ]\) and \(x \in (-\pi ,\pi )\). Then

$$\begin{aligned} \lim _{n \rightarrow \infty }{ \mathrm {haus}\left( {\mathscr {S}}_nF(x), A_F(x) \right) } = 0. \end{aligned}$$
(24)

Proof

Let \(\omega (\delta )=\max \big \{\varpi ^{-}\big ( {v_F},{x},{2\delta } \big ) , \varpi ^{+}\big ( {v_F},{x},{\delta } \big ) \big \}\). By Remark 6.5(ii), \(\omega (\delta ) \rightarrow 0\) as \(\delta \rightarrow 0+\). To prove (24), take an arbitrary \(\varepsilon > 0\) and choose in (22) first \(\delta \in (0, \delta _0(x)]\) so small that \(K \omega (\delta ) < \frac{\varepsilon }{2}\). Then by choosing n so large that \(K V_{-\pi }^{\pi } (F) \frac{1}{n} \left( 1+6\cot \left( \frac{\delta }{2} \right) \right) < \frac{\varepsilon }{2}\) we complete the proof. \(\square \)

If F is continuous, its metric Fourier series converges to F in the Hausdorff metric. Namely, the following holds.

Theorem 6.18

Let \(F \in {\mathcal {F}}[-\pi ,\pi ]\) and let F be continuous at \(x \in (-\pi ,\pi )\). Then

$$\begin{aligned} \lim _{n \rightarrow \infty }{ \mathrm {haus}\left( {\mathscr {S}}_nF(x), F(x) \right) } = 0. \end{aligned}$$

If F is continuous in a closed interval \(I \subset (-\pi ,\pi )\), then the convergence is uniform in I.

Proof

The first statement of the above theorem is an immediate consequence of Theorem 6.17. For the second statement note that there exists \(\delta _0 > 0\) such that \([x-\delta _0,x+\delta _0] \subset (-\pi ,\pi )\) for all \(x \in I\). Defining \(\omega (\delta )\) as in the proof of Proposition 6.16 and applying Corollary 6.13, we obtain the result. \(\square \)
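The following sketch illustrates Theorem 6.18 for a particularly simple SVF with two continuous, well-separated branches, \(F(t) = \{g_1(t), g_2(t)\}\). For such an F it seems plausible that the metric selections are exactly the two branches; this is an assumption of the sketch, not a statement proved here. Under this assumption, (20) gives \({\mathscr {S}}_nF(x) = \{{\mathscr {S}}_ng_1(x), {\mathscr {S}}_ng_2(x)\}\), and the Hausdorff distance to \(F(x)\) at a point of continuity can be monitored directly.

```python
import numpy as np

def S_n(g, n, x, m=20001):
    # Partial Fourier sum of a real-valued selection via the Dirichlet kernel
    t = np.linspace(-np.pi, np.pi, m)
    k = np.arange(1, n + 1)
    Dn = 0.5 + np.cos(np.outer(x - t, k)).sum(axis=1)
    y = Dn * g(t)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / (2.0 * np.pi))

g1 = lambda t: np.abs(np.sin(t))     # lower branch
g2 = lambda t: 10.0 + np.cos(t)      # upper branch, far away from the lower one
x = 0.7
exact = [float(g1(x)), float(g2(x))]                 # F(x) = {g1(x), g2(x)}
for n in (4, 16, 64):
    approx = [S_n(g1, n, x), S_n(g2, n, x)]          # ~ S_n F(x) by (20), under the assumption above
    haus = max(max(min(abs(a - b) for b in exact) for a in approx),
               max(min(abs(a - b) for a in approx) for b in exact))
    print(n, approx, haus)                           # Hausdorff distance to F(x) shrinks as n grows
```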

7 On the Limit Set of the Fourier Approximants

In the previous section we proved that, at a point x where F is discontinuous, the sequence \(\{ {\mathscr {S}}_nF(x)\}_{n \in {{\mathbb {N}}}} \) converges to the set \(A_F(x) = \left\{ \frac{1}{2} \left( s(x+0) + s(x-0) \right) \ : \ s \in {\mathcal {S}}(F) \right\} \). An interesting question is to describe the set \(A_F(x)\) in terms of the values of F. At the moment we do not have a satisfactory answer to this question.

The two statements below give some idea about the structure of a set-valued function F and its metric selections at a point x where F is discontinuous.

Proposition 7.1

For \(F\in {\mathcal {F}}[a,b]\) and \(x \in (a,b)\) we have \(F(x-0) \cup F(x+0) \subseteq F(x)\).

Proof

We show that \(F(x-0) \subseteq F(x)\), the proof for \(F(x+0)\) is similar.

Since F is bounded, we can restrict our consideration to a bounded region of \({{\mathbb {R}}}^d\), so that the convergence in the Hausdorff metric is equivalent to the convergence in the sense of Kuratowski (see Remark 2.2).

Consider \(y \in F(x-0)\). Take an arbitrary sequence \(\{x_n\}_{n \in {{\mathbb {N}}}}\) with \(x_n < x\), \(n \in {{\mathbb {N}}}\), and \(x_n \rightarrow x\), \(n \rightarrow \infty \). Since \(F(x-0)\) coincides with the lower Kuratowski limit \(\liminf _{t \rightarrow x-0}{F(t)}\), for each n there exists \(y_n \in F(x_n)\) such that \(y_n \rightarrow y\), \(n \rightarrow \infty \). We have \((x_n,y_n) \in {\mathrm {Graph}}{(F)}\) for each \(n \in {{\mathbb {N}}}\) and \((x_n,y_n) \rightarrow (x,y)\), \(n \rightarrow \infty \). Since \({\mathrm {Graph}}{(F)}\) is closed, it follows that \((x,y) \in {\mathrm {Graph}}{(F)}\), and thus \(y \in F(x)\). This implies that \(F(x-0) \subseteq F(x)\). \(\square \)

Proposition 7.2

For \(F \in {\mathcal {F}}[a,b]\)

$$\begin{aligned}&F(x-0) = \{ s(x-0) \ : \ s \in {\mathcal {S}}(F) \}, \quad x \in (a,b], \quad \text {and} \\&F(x+0) = \{ s(x+0) \ : \ s \in {\mathcal {S}}(F) \}, \quad x \in [a,b). \end{aligned}$$

Proof

We prove the first claim, the proof of the second one is similar.

Fix \(x\in (a,b]\). The inclusion \(\{ s(x-0) \ :\ s\in {\mathcal {S}}(F)\} \subseteq F(x-0)\) follows from the fact that \(F(x-0)\) coincides with the Kuratowski upper limit \(\limsup _{t \rightarrow x-0}{F(t)}\) (see Remark 2.2). It remains to show \(F(x-0) \subseteq \{s(x-0) \ :\ s\in {\mathcal {S}}(F)\} \). Define a multifunction \({{\widetilde{F}}}: [a,b] \rightarrow \mathrm {K}({{{\mathbb {R}}}}^d)\) by

$$\begin{aligned} {{\widetilde{F}}}(t)= \left\{ \begin{array}{ll} F(t), &{} t \ne x, \\ F(x-0), &{} t=x. \end{array} \right. \end{aligned}$$

Clearly, \({{\widetilde{F}}}\) is left continuous at x and \({{\widetilde{F}}} \in {\mathcal {F}}[a,b]\). By Result 4.2, \({{\widetilde{F}}}\) has a representation by its metric selections. By Proposition 7.1, \({{\widetilde{F}}}(x) \subseteq F(x)\), and thus \({\mathcal {S}}({{\widetilde{F}}}) \subseteq {\mathcal {S}}(F)\).

Now, let \(y \in F(x-0) = {{\widetilde{F}}}(x) \subseteq F(x)\). There exists a selection \(s\in {\mathcal {S}}({{\widetilde{F}}}) \subseteq {\mathcal {S}}( F)\) such that \(y=s(x)\). Since \({{\widetilde{F}}}\) is left continuous at x, by Theorem 4.7 the selection s is also left continuous at x. Thus, \({y = s(x) = s(x-0) \in \{s(x-0) \ :\ s\in {\mathcal {S}}(F)\} }\). \(\square \)

In view of the last proposition and by the definition of \(A_F(x)\) (see (21)), we conclude

$$\begin{aligned} A_F(x) \subseteq \frac{1}{2} F(x-0) + \frac{1}{2} F(x+0), \end{aligned}$$

where the right-hand side is the Minkowski average, which may be much larger than \(A_F(x)\).

One could conjecture that \(A_F(x)\) coincides with the metric average of \(F(x-0)\) and \(F(x+0)\), namely

$$\begin{aligned} A_F(x)=\frac{1}{2} F(x-0) \oplus \frac{1}{2} F(x+0), \end{aligned}$$

where

$$\begin{aligned} \frac{1}{2} F(x-0) \oplus \frac{1}{2} F(x+0)=\left\{ \frac{1}{2} y^- + \frac{1}{2} y^+ \, : \, (y^-,y^+) \in \Pi \big ( {F(x-0)},{F(x+0)} \big ) \right\} . \end{aligned}$$

It is easy to see that a sufficient condition for the inclusion

$$\begin{aligned} A_F(x) \subseteq \frac{1}{2} F(x-0) \oplus \frac{1}{2} F(x+0) \end{aligned}$$
(25)

is the property

$$\begin{aligned} \big ( s(x-0),s(x+0) \big ) \in \Pi \big ( {F(x-0)},{F(x+0)} \big ) \end{aligned}$$
(26)

for any \(s\in {\mathcal {S}}(F)\). However, (26) is not always true. The next example provides a counterexample to both (26) and (25).

Example 7.3

Let \(B(x_1,x_2)\) denote the closed disc of radius 1 with center at the point \((x_1,x_2)\), and let \(x\in (-\pi ,\pi )\). Consider the function \(F: [-\pi ,\pi ] \rightarrow {\mathrm {K}({{\mathbb {R}}}^2)}\), \(F \in {\mathcal {F}}[-\pi ,\pi ]\), defined by

$$\begin{aligned} F(t) = {\left\{ \begin{array}{ll} B(-2,2), &{} t \in [-\pi ,x), \\ B(-2,2) \cup \{(0,0)\} \cup B(2,2), &{} t=x, \\ B(2,2), &{} t \in (x,\pi ], \end{array}\right. } \end{aligned}$$

and its metric selection

$$\begin{aligned} s(t) = {\left\{ \begin{array}{ll} (-2 + \frac{\sqrt{2}}{2}, 2 - \frac{\sqrt{2}}{2}), &{} t \in [-\pi ,x), \\ (0,0), &{} t=x,\\ (2 - \frac{\sqrt{2}}{2}, 2 - \frac{\sqrt{2}}{2}), &{} t \in (x,\pi ]. \end{array}\right. } \end{aligned}$$

First we show that (26) does not hold. It is easy to see that \(s(x-0) = (-2 + \frac{\sqrt{2}}{2}, 2 - \frac{\sqrt{2}}{2}) = \Pi _{F(x-0)}((0,0))\) is the projection of \({(0,0) \in F(x)}\) on \({F(x-0)}\), and \({s(x+0) = (2 - \frac{\sqrt{2}}{2}, 2 - \frac{\sqrt{2}}{2}) = \Pi _{F(x+0)}((0,0))}\) is the projection of \((0,0)\) on \({F(x+0)}\). On the other hand, the pair \(\big ( s(x-0) , s(x+0) \big )\) is not a metric pair of \((F(x-0),F(x+0))\) since the line connecting the points \(s(x-0)\) and \(s(x+0)\) does not pass through any of the centers of the two discs. By similar geometric arguments one can show that \({\frac{1}{2} ( s(x-0) + s(x + 0)) = (0, 2 - \frac{\sqrt{2}}{2})}\) belongs to \(A_F(x)\) but does not belong to \({\frac{1}{2} F(x-0) \oplus \frac{1}{2} F(x+0)}\).

Note that in this example \(F(x-0)\cup F(x+0) \ne F(x)\), and that the selection s for which (26) does not hold satisfies \(s(x) \notin F(x-0)\cup F(x+0)\).

Also the reverse inclusion to (25), \(A_F(x) \supseteq \frac{1}{2} F(x-0) \oplus \frac{1}{2} F(x+0)\), does not hold in general. The next example demonstrates this.

Example 7.4

Consider the set-valued function \(F: [-\pi ,\pi ] \rightarrow \mathrm {K}({{\mathbb {R}}})\) defined by

$$\begin{aligned} F(t) = {\left\{ \begin{array}{ll} \left\{ -\frac{1}{4}, 0, \frac{1}{4} \right\} , &{} t \in [-\pi ,x),\\ \left\{ -1, -\frac{1}{4}, 0, \frac{1}{4}, 1 \right\} , &{} t = x, \\ \left\{ -1 + t - x, 1 + t - x \right\} , &{} t \in (x,\pi ], \end{array}\right. } \end{aligned}$$

where \(x \in (-\pi ,\pi )\). We have \(F(x-0) = \left\{ -\frac{1}{4}, 0, \frac{1}{4} \right\} \), \(F(x+0) = \{-1, 1 \}\), and their metric average is \(\frac{1}{2} F(x-0) \oplus \frac{1}{2} F(x+0) = \left\{ -\frac{5}{8}, -\frac{1}{2}, \frac{1}{2}, \frac{5}{8} \right\} \). We show that the point \(\frac{1}{2} \in \frac{1}{2} F(x-0) \oplus \frac{1}{2} F(x+0)\) does not belong to \(A_F(x)\), i.e., there is no metric selection s of F such that

$$\begin{aligned} \frac{1}{2} = \frac{1}{2}(s(x-0) + s(x+0)). \end{aligned}$$
(27)

Indeed, if (27) is fulfilled for a selection \({\hat{s}}\) of F, then for this selection we necessarily have \({\hat{s}}(t) = 0\) for \(t \in [-\pi ,x)\) and \({\hat{s}}(t) = 1 + t - x\) for \(t \in (x,\pi ]\) (with an arbitrary choice of the value \({\hat{s}}(x) \in F(x)\)). But such \({\hat{s}}\) cannot be a metric selection, because there are no chain functions that would lead to such a selection. The only chain functions which might converge to \({\hat{s}}\) are constant with the value 0 on the left of x and piecewise constant functions with values sampled from \(1 + t -x\) on the right of x, possibly except for the interval between two neighboring points of the partition that contains the point x.

But no chain function can take the value 0 on the left of x and the value \({1 + t -x}\) on the right of x. Indeed, if x is not a point of the partition, then this is impossible because the closest point to 0 in the set \({F(t) = \left\{ -1 + t - x\, ,\, 1 + t - x \right\} }\), \(t > x\), is \( -1 + t - x\) and not \( 1 + t - x\), and the closest point to \(1 + t -x\) in the set \(F(t) = \left\{ -\frac{1}{4}, 0, \frac{1}{4} \right\} \), \(t < x\), is \(\frac{1}{4}\) and not 0. If x is a point of the partition, then the value of a chain function at x is one of the five values from \({F(x) = \left\{ -1, -\frac{1}{4}, 0, \frac{1}{4}, 1 \right\} }\). The choices 0 and 1 are impossible because of the reasons explained above. But also the other three choices are impossible, since the pointwise limit of the chain functions would not be equal to 0 on the left of x.
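The metric average appearing in this section can be computed exactly for finite sets. The sketch below, assuming the usual description of metric pairs via nearest-point projections in both directions and using exact rational arithmetic, reproduces the value \(\frac{1}{2} F(x-0) \oplus \frac{1}{2} F(x+0) = \left\{ -\frac{5}{8}, -\frac{1}{2}, \frac{1}{2}, \frac{5}{8} \right\} \) stated in Example 7.4.

```python
from fractions import Fraction as Fr

def projections(point, S):
    # set of closest points of the finite set S to `point`
    d = min(abs(s - point) for s in S)
    return {s for s in S if abs(s - point) == d}

def metric_pairs(A, B):
    # Pi(A, B): pairs (a, b) with b a closest point to a in B, or a a closest point to b in A
    return ({(a, b) for a in A for b in projections(a, B)}
            | {(a, b) for b in B for a in projections(b, A)})

def metric_average(A, B):
    return {Fr(1, 2) * (a + b) for (a, b) in metric_pairs(A, B)}

A = {Fr(-1, 4), Fr(0), Fr(1, 4)}     # F(x-0) in Example 7.4
B = {Fr(-1), Fr(1)}                  # F(x+0) in Example 7.4
print(sorted(metric_average(A, B)))  # [-5/8, -1/2, 1/2, 5/8]
```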

Yet, the conjecture \(A_F(x)=\frac{1}{2} F(x-0) \oplus \frac{1}{2} F(x+0)\) or a weaker form of it might be true for functions F from a certain subclass of \({\mathcal {F}}[a,b]\).