Variation diminishing-type properties for multivariate sampling Kantorovich operators

In this paper we establish a variation-diminishing-type estimate for the multivariate sampling Kantorovich operators with respect to the concept of multidimensional variation introduced by Tonelli. A sharper estimate can be achieved when step functions with compact support (digital images) are considered. Several examples of kernels are presented.

He managed to obtain, together with his co-authors, the existence of integrable selections in the variational sense, which had been an open problem in measure theory for several years. It therefore seemed natural to us to dedicate the results of this paper to Mimmo, considering that these issues formed part of his scientific training in his early years and beyond. Each of us, in a different way, owes much to Mimmo; but surely together we can express our gratitude for what he has left us, both from a scientific and a human point of view, and this contribution wants to be one of the many ways to tell him ... Thanks!

When working with families of operators in BV-spaces, a classical and important result that is usually investigated is an estimate of the variation of the operators in terms of the variation of the function to which they are applied. This property, known as a "variation diminishing type estimate", can be obtained for several families of operators that are well known and widely used in approximation theory, such as the Bernstein polynomials, the convolution operators, the Mellin operators, the sampling operators and others (see, e.g., [3,4,5,9,14,32]). In this paper we study the case of the multidimensional sampling Kantorovich operators, using the variation in the sense of Tonelli. We also consider separately the one-dimensional case (Theorem 2), which can be derived easily from the analogous result for the generalized sampling series [5]: in this case, for non-negative kernels, it is possible to obtain a classical variation diminishing result, i.e., the variation of the operators is smaller than the variation of the function itself.
The multidimensional case (Theorem 1) is much more delicate: this is due both to the "structure" of the Tonelli variation, which is responsible, in particular, for the dependence of the constant in the estimate on the dimension of the space (this phenomenon is known as the curse of dimensionality and occurs in several approximation problems), and to the particular form of the sampling Kantorovich operators. Nevertheless, the approximation properties of such a family of operators, in their multidimensional form, are interesting and have been widely studied in recent years in view of their connections with Sampling Theory and Digital Image Processing, see e.g., [16,25,26,28,29,30,33,38]. These operators were introduced in 2007 in the univariate form [15] and subsequently extended to the multidimensional setting in [27]. The latter generalization was given in order to have a class of approximation operators of sampling type suitable for reconstructing not necessarily continuous signals, which is exactly the context in which the problem of processing (reconstruction, enhancement, smoothing, etc.) digital images falls [7,27].
In this direction, variation diminishing type estimates may have an applicative interpretation. Indeed, in the case of step-type functions, i.e., the mathematical model of digital images, it is possible to obtain a sharper estimate (Corollary 1), proving that the $L^1$-norm of the variation of the sections of the sampling Kantorovich operators, in the case of non-negative kernels, is smaller than the $L^1$-norm of the variation of the sections of the function itself. From the applicative point of view, this produces a smoothing effect on the image reconstructed by means of the multidimensional sampling Kantorovich operators, with respect to the original one.

Notations and preliminaries
In the present paper, we denote by $BV(\mathbb{R})$ the space of functions of bounded variation on $\mathbb{R}$, where
$$V_{[a,b]}[f] := \sup \sum_{i=1}^{n} |f(x_i) - f(x_{i-1})|,$$
the supremum being taken over all the possible partitions $a = x_0 < x_1 < \ldots < x_n = b$ of the interval $[a,b]$, is the Jordan variation of $f$ over $[a,b]$ [3,11,12]. For what concerns the multivariate extension of the above concepts, several possible approaches can be found in the literature (see, e.g., [13]).
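In discrete form, the Jordan variation over a fixed fine partition reduces to a sum of absolute increments. The following minimal numerical sketch (the function name and the test grid are our own illustration, not from the paper) computes this lower bound for $V_{[a,b]}[f]$:

```python
import numpy as np

def jordan_variation(samples):
    # Discrete Jordan variation: sum of |f(x_i) - f(x_{i-1})| over a fixed
    # partition; a lower bound for V_[a,b][f], exact for piecewise-monotone
    # f once the grid resolves every monotone piece.
    samples = np.asarray(samples, dtype=float)
    return float(np.sum(np.abs(np.diff(samples))))

x = np.linspace(0.0, 2.0 * np.pi, 10001)
print(jordan_variation(np.sin(x)))   # close to V[sin] = 4 over [0, 2*pi]
```

For piecewise-monotone functions the supremum over partitions is attained in the limit of finer grids, which is why a single fine partition already gives an accurate value here.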
In particular, here we consider the concept of variation introduced by Tonelli [35] for two variables, extended to the general case of R N by Radó and Vinti [34,37]. In order to recall it, we first introduce the following notation.
For a function $f : \mathbb{R}^N \to \mathbb{R}$ and an $N$-dimensional interval $I = \prod_{i=1}^{N} [a_i, b_i]$, we will denote by $I'_j := \prod_{i \ne j} [a_i, b_i]$ the $(N-1)$-dimensional interval obtained by deleting from $I$ the $j$-th coordinate, and by $x'_j \in I'_j$ the corresponding $(N-1)$-dimensional vector. Given a vector $x \in \mathbb{R}^N$ and $\alpha \in \mathbb{R}$, we will use the usual notation for products and quotients, i.e., $\alpha x = (\alpha x_1, \ldots, \alpha x_N)$ and, for $\alpha \ne 0$, $x/\alpha = (x_1/\alpha, \ldots, x_N/\alpha)$. For more details and results about $BV$-spaces, see, e.g., [2,3,4,8,9,10,11,13,31].

Now, we are able to recall the definition of the multivariate Tonelli variation. First of all, for any $j = 1, \ldots, N$, we consider the Jordan variation $V_{[a_j,b_j]}[f(x'_j, \cdot)]$ of the $j$-th sections of $f$ and we set
$$\Phi_j(f, I) := \int_{I'_j} V_{[a_j,b_j]}[f(x'_j, \cdot)]\, dx'_j, \qquad \Phi(f, I) := \Big\{ \sum_{j=1}^{N} \big[\Phi_j(f, I)\big]^2 \Big\}^{1/2}.$$
Then the variation of $f$ on $I \subset \mathbb{R}^N$ can be defined as
$$V(f, I) := \sup \sum_{k=1}^{m} \Phi(f, J_k),$$
where the supremum is taken over all the finite families of $N$-dimensional intervals $\{J_1, \ldots, J_m\}$ which form partitions of $I$. Passing to the supremum over all the intervals $I \subset \mathbb{R}^N$, we obtain the variation of $f$ over the whole $\mathbb{R}^N$, i.e., $V(f) := \sup_{I \subset \mathbb{R}^N} V(f, I)$.
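For a function sampled on a uniform grid, each Tonelli term $\Phi_j$ can be approximated by taking the discrete Jordan variation of the one-dimensional sections along the $j$-th coordinate and integrating over the remaining coordinates. A rough sketch (our own discretization, not the paper's; it approximates the $\Phi_j$ only, not the full supremum over partitions):

```python
import numpy as np

def section_variation(F, axis, h):
    # Phi_j(f, I) ~ integral over the (N-1)-dimensional sections of the
    # Jordan variation of the one-dimensional sections along `axis`, for f
    # sampled on a uniform grid with mesh size h.
    F = np.asarray(F, dtype=float)
    var_along_axis = np.sum(np.abs(np.diff(F, axis=axis)), axis=axis)
    # integrate over the remaining N - 1 coordinates (rectangle rule)
    return float(np.sum(var_along_axis) * h ** (F.ndim - 1))

# f(x, y) = x on [0, 1]^2: every x-section has variation 1, so
# Phi_1 ~ 1, while the y-sections are constant, so Phi_2 = 0.
n = 201
x = np.linspace(0.0, 1.0, n)
F = np.tile(x[:, None], (1, n))   # F[i, j] = x_i
h = 1.0 / (n - 1)
print(section_variation(F, axis=0, h=h), section_variation(F, axis=1, h=h))
```

The rectangle rule slightly overcounts the outer integral (it uses $n$ rather than $n-1$ cells), so the first value is close to, but not exactly, 1.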

Definition 1 A measurable and bounded function $f : \mathbb{R}^N \to \mathbb{R}$ is said to be of bounded variation, and we write $f \in BV(\mathbb{R}^N)$, if $V(f) < +\infty$.
By the definition it immediately follows that, for every $f \in BV(\mathbb{R}^N)$ and every $j = 1, \ldots, N$, the $j$-th sections of $f$ are of bounded (Jordan) variation for almost every $x'_j \in \mathbb{R}^{N-1}$ (see [34,37]).
In order to provide some estimates with respect to the multivariate Tonelli variation for a family of multivariate sampling-type operators, we now introduce the following definition.
A function $\chi : \mathbb{R}^N \to \mathbb{R}$ will be called a kernel if it satisfies the following assumptions:

$(\chi_1)$ $\chi \in L^1(\mathbb{R}^N)$ and it is bounded in a neighborhood of the origin;

$(\chi_2)$ $\sum_{k \in \mathbb{Z}^N} \chi(u - k) = 1$, for every $u \in \mathbb{R}^N$,

where the convergence of the series is uniform on the compact sets of $\mathbb{R}^N$. In particular, the above conditions are usually satisfied by the multivariate discrete approximate identities (see, e.g., [19]).

This paper deals with the family of multivariate sampling Kantorovich operators [27], defined as
$$(K_w f)(t) := \sum_{k \in \mathbb{Z}^N} \chi(wt - k) \left[ w^N \int_{R_k^w} f(u)\, du \right], \qquad t \in \mathbb{R}^N,\ w > 0,$$
where $R_k^w := \left[\frac{k_1}{w}, \frac{k_1+1}{w}\right] \times \cdots \times \left[\frac{k_N}{w}, \frac{k_N+1}{w}\right]$. The sampling Kantorovich operators represent the $L^1$-version of the generalized sampling series
$$(S_w f)(t) := \sum_{k \in \mathbb{Z}^N} \chi(wt - k)\, f\!\left(\frac{k}{w}\right), \qquad t \in \mathbb{R}^N,\ w > 0$$
(see, e.g., [17,18]). Note that the operators $(K_w f)_{w>0}$ and $(S_w f)_{w>0}$ are well-defined, for instance, for any bounded $f$.

In the present paper, in order to establish some estimates with respect to the Tonelli variation for the operators $K_w$, we consider kernels which are given by the product of one-dimensional kernels of averaged form (see, e.g., [5,7,32]). More precisely, we define
$$\bar{\chi}_m(x) := \prod_{i=1}^{N} \bar{\chi}_{i,m}(x_i), \qquad \bar{\chi}_{i,m}(x) := \frac{1}{m} \int_{x - m/2}^{x + m/2} \chi_i(u)\, du, \quad x \in \mathbb{R},$$
for some $m \in \mathbb{N}$, where $\chi_i : \mathbb{R} \to \mathbb{R}$ is a (one-dimensional) kernel for every $i = 1, \ldots, N$ (i.e., satisfying $(\chi_1)$ and $(\chi_2)$ with $N = 1$). It is easy to see that $\bar{\chi}_m$ is a kernel itself and that, for every $i = 1, \ldots, N$, $\|\bar{\chi}_{i,m}\|_1 \le \|\chi_i\|_1$. Moreover, $\bar{\chi}_{i,m}$ is everywhere differentiable and
$$\bar{\chi}'_{i,m}(x) = \frac{1}{m}\left[ \chi_i\!\left(x + \frac{m}{2}\right) - \chi_i\!\left(x - \frac{m}{2}\right) \right], \quad x \in \mathbb{R}.$$
From now on, for the sake of simplicity, we will denote by $\bar{K}^m_w$ and $\bar{S}^m_w$ the multivariate Kantorovich and generalized sampling series, respectively, both based upon the averaged product kernel $\bar{\chi}_m$.
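To make the definition of $K_w$ concrete, here is a one-dimensional numerical sketch using the Fejér kernel (a kernel discussed later in the paper); the truncation range `K` and the midpoint quadrature for the cell averages are our own implementation choices, not part of the operator's definition:

```python
import numpy as np

def fejer(x):
    # Fejér kernel F(x) = (1/2) sinc^2(x/2), with sinc(x) = sin(pi x)/(pi x):
    # non-negative, ||F||_1 = 1, and sum_k F(u - k) = 1 for every u.
    return 0.5 * np.sinc(x / 2.0) ** 2

def kantorovich_1d(f, t, w, kernel=fejer, K=200, quad=64):
    # (K_w f)(t) = sum_k kernel(w t - k) * w * int_{k/w}^{(k+1)/w} f(u) du,
    # truncated to |k - floor(w t)| <= K (the kernel decays at infinity).
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    for i, ti in enumerate(t):
        k0 = int(np.floor(w * ti))
        ks = np.arange(k0 - K, k0 + K + 1)
        # w * integral of f over [k/w, (k+1)/w] = mean value of f on the cell,
        # computed here by midpoint quadrature with `quad` nodes
        u = (ks[:, None] + (np.arange(quad) + 0.5)[None, :] / quad) / w
        means = f(u).mean(axis=1)
        out[i] = np.sum(kernel(w * ti - ks) * means)
    return out

# by (chi_2) the operator nearly reproduces constants (up to truncation),
# and for smooth f, K_w f approaches f as w grows
t = np.array([0.3, 1.7])
print(np.max(np.abs(kantorovich_1d(np.cos, t, w=50) - np.cos(t))))  # small
```

The inner averaging $w\int_{k/w}^{(k+1)/w} f(u)\,du$ is exactly what distinguishes the Kantorovich operator from the generalized sampling series $S_w$, which would use the point value $f(k/w)$ instead.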

Theorem 1 For any $f \in BV(\mathbb{R}^N)$, $w > 0$ and $m \in \mathbb{N}$, it holds
$$V[\bar{K}^m_w f] \le C\, V[f],$$
for a constant $C > 0$ depending on $N$, $m$ and the kernels $\chi_1, \ldots, \chi_N$, and hence $\bar{K}^m_w f \in AC(\mathbb{R}^N)$.

Proof First of all we can observe that, for $f \in BV(\mathbb{R}^N)$, using $(\chi_2)$ for each one-dimensional kernel $\bar{\chi}_{i,m}$, $i \ne j$, and $\chi_j$, we obtain, for every $t \in \mathbb{R}^N$, the estimate (3), which proves that $\bar{K}^m_w f \in AC_{loc}(\mathbb{R}^N)$. Now, using (2) and the change of variables $k_j \mapsto k_j + m$, $k_i \mapsto k_i$ if $i \ne j$, followed by the Fubini-Tonelli theorem, the change of variables $y_j = w t_j - k_j + m/2$, $y_i = w t_i - k_i$, $i \ne j$, and the inequality (1), we obtain the desired bound for each $j = 1, \ldots, N$; passing to the supremum over $I \subset \mathbb{R}^N$, it follows that $\bar{K}^m_w f \in BV(\mathbb{R}^N)$ and therefore $\bar{K}^m_w f \in AC(\mathbb{R}^N)$.
Theorem 2 In the one-dimensional case, namely for $f \in BV(\mathbb{R})$, $w > 0$ and $m \in \mathbb{N}$, it is possible to obtain the sharper estimate
$$V[\bar{K}^m_w f] \le \|\chi\|_1\, V[f].$$
Notice that, in the case of a non-negative kernel, this gives a classical variation-diminishing result, since $\|\chi\|_1 = 1$. Therefore $\bar{K}^m_w f \in BV(\mathbb{R})$ and so $\bar{K}^m_w f \in AC(\mathbb{R})$.
Proof It is easy to see that $\bar{K}^m_w f$ can be written in convolution form (see [6]). Therefore, taking into account Proposition 5.1 of [14] (with $D = \|w\varphi(w\cdot)\|_1 = 1$) and Proposition 1 of [5], the estimate follows, and so $\bar{K}^m_w f \in BV(\mathbb{R})$. Therefore $\bar{K}^m_w f \in AC(\mathbb{R})$ since, as in Theorem 1, it can be proved that $\bar{K}^m_w f \in AC_{loc}(\mathbb{R})$.
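The one-dimensional variation-diminishing estimate can be checked numerically. The sketch below (our own experiment, not from the paper) applies $K_w$ with the non-negative central B-spline kernel $M_3$ (so $\|\chi\|_1 = 1$) to a square wave, and compares discrete Jordan variations on a fixed grid:

```python
import numpy as np

def M3(x):
    # central B-spline of order 3: non-negative kernel with ||M3||_1 = 1,
    # support [-3/2, 3/2], and sum_k M3(u - k) = 1 for every u
    x = np.abs(np.asarray(x, dtype=float))
    return np.where(x <= 0.5, 0.75 - x**2,
                    np.where(x <= 1.5, 0.5 * (1.5 - x)**2, 0.0))

def tv(v):
    # discrete Jordan variation of sampled values
    return float(np.sum(np.abs(np.diff(v))))

def kantorovich(f, t, w, K=5, quad=64):
    # (K_w f)(t) = sum_k M3(w t - k) * w * int_{k/w}^{(k+1)/w} f(u) du;
    # M3 has compact support, so the small truncation range K is exact enough
    out = np.zeros_like(t)
    for i, ti in enumerate(t):
        k0 = int(np.floor(w * ti))
        ks = np.arange(k0 - K, k0 + K + 1)
        u = (ks[:, None] + (np.arange(quad) + 0.5)[None, :] / quad) / w
        out[i] = np.sum(M3(w * ti - ks) * f(u).mean(axis=1))
    return out

f = lambda u: np.sign(np.sin(2.0 * np.pi * u))   # square wave, jumps of size 2
t = np.linspace(-0.2, 1.2, 1401)
vf, vK = tv(f(t)), tv(kantorovich(f, t, w=2))
print(vK <= vf, vf, vK)   # variation of K_w f does not exceed that of f
```

With $w = 2$ the averaging cells are as wide as the half-period of the square wave, so the smoothed signal has noticeably smaller variation than the original.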

Examples and applications
In the present section, we discuss some applicative aspects of the theory of sampling Kantorovich operators, in particular in the present setting of BV-spaces. For this purpose, we first recall that some concrete applications of the above operators to the reconstruction and the enhancement of digital images have already been obtained in [25,26]. To any image (i.e., matrix of gray levels) $M = (m_{i,j})$ we can associate the step function
$$I_M(x, y) := \sum_{i,j} m_{i,j}\, \mathbf{1}_{i,j}(x, y),$$
where $\mathbf{1}_{i,j}$ denotes the characteristic function of the set $[i, i+1) \times [j, j+1)$. Obviously, any step function with compact support (such as $I_M$) is in fact a function belonging to $BV(\mathbb{R}^2)$. In this particular case, besides the estimate established in Theorem 1, which obviously holds, a sharper estimate for the variation of the two-dimensional sampling Kantorovich operators can be proved, resulting in fact in a variation diminishing type property.

Corollary 1 Let $f : \mathbb{R}^N \to \mathbb{R}$ be a step function with compact support, i.e., $f$ is constant on each interval of a grid of multi-dimensional intervals of the form $\prod_{\nu=1}^{N} [t^{\nu}_{i_\nu}, t^{\nu}_{i_\nu + 1})$, whose nodes $a_\nu = t^{\nu}_0 < t^{\nu}_1 < \ldots < t^{\nu}_{n_\nu} = b_\nu$, $\nu = 1, \ldots, N$, form a partition of $[a, b[$. Then, for every $w \in \mathbb{N}$ and $m \in \mathbb{N}$,
$$\sum_{j=1}^{N} \int_{\mathbb{R}^{N-1}} V\big[(\bar{K}^m_w f)(x'_j, \cdot)\big]\, dx'_j \le \left( \prod_{i=1}^{N} \|\chi_i\|_1 \right) \sum_{j=1}^{N} \int_{\mathbb{R}^{N-1}} V\big[f(x'_j, \cdot)\big]\, dx'_j. \tag{4}$$
In particular, if $\chi_i$ is non-negative, $i = 1, \ldots, N$, it turns out that the $L^1$-norms of the variations of the sections of $\bar{K}^m_w f$ do not exceed the corresponding $L^1$-norms of the variations of the sections of $f$.

Proof Observing that, for any step function $f$ as above and $w \in \mathbb{N}$, it turns out that $\bar{K}^m_w f = \bar{S}^m_w f$, the proof immediately follows from Proposition 5 of [7].
As a consequence of (4), it turns out that for any image (matrix) $M$, the two-dimensional sampling Kantorovich operator $\bar{K}^m_w I_M$, $w, m \in \mathbb{N}$, based on $\bar{\chi}_m$ and generated by a non-negative kernel $\chi$, allows one to obtain a filtered image in which, globally, the variations of the sections of the image are reduced: this may be interpreted as a reduction of the "jumps of gray levels" with respect to the original image $M$, producing a smoothing effect on the reconstructed image (see Fig. 1c). As is well known, smoothing procedures have a wide range of applications in digital image processing: among them, image enhancement, noise reduction, and automatic selection procedures. Figure 1 illustrates an example of application of our operators to the latter problem. The aim is to produce an automatic procedure to select "big" objects from an image (the original, Fig. 1a, is a 250 × 250 Hubble space image): this can be done by filtering the image with the sampling Kantorovich operator, producing a smoothing effect (Fig. 1c), and then applying thresholding to the filtered image (Fig. 1d). This produces an image in which just the largest objects are detected. The importance of applying the smoothing filter in this process can be easily deduced by comparing Fig. 1d with the result of thresholding applied directly to the original image (Fig. 1b).

Now, we can furnish some examples of kernels for which the above results hold. As a first example, we can consider the multivariate product kernel of averaged type generated by the well-known Fejér kernel, defined by
$$F(x) := \frac{1}{2} \operatorname{sinc}^2\!\left(\frac{x}{2}\right), \quad x \in \mathbb{R},$$
where the sinc-function (see, e.g., [6,24]) is of the form
$$\operatorname{sinc}(x) := \begin{cases} \dfrac{\sin \pi x}{\pi x}, & x \ne 0, \\[1ex] 1, & x = 0, \end{cases}$$
and its corresponding multivariate version is $\mathcal{F}_m(t) := \prod_{i=1}^{N} \bar{F}_m(t_i)$, $t \in \mathbb{R}^N$. In practice, the averaged Fejér kernel represents the $L^1$-version of the well-known Lanczos kernel [32], which is defined as an average of a sinc-type function.
Note that the Fejér kernel is non-negative, and hence the variation diminishing type properties established in both Theorem 1 and Corollary 1 hold for the multivariate sampling Kantorovich series based upon $\mathcal{F}_m$.
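The smoothing-plus-thresholding selection procedure described above can be sketched on a small synthetic image. For a step image and $w = 1$, the operator evaluated at pixel centres reduces to a separable convolution with the Fejér weights $F(n + 1/2)$; the truncation radius `R`, the renormalization of the truncated weights, and the toy image are our own choices:

```python
import numpy as np

def fejer(x):
    # Fejér kernel F(x) = (1/2) sinc^2(x/2)
    return 0.5 * np.sinc(x / 2.0) ** 2

# For the step image I_M the cell averages equal the pixel values, so the
# w = 1 Kantorovich smoothing at pixel centres t = k + 1/2 is a separable
# convolution with the weights F(n + 1/2), truncated to |n| <= R.
R = 6
wts = fejer(np.arange(-R, R + 1) + 0.5)
wts /= wts.sum()  # renormalize the truncated weights

def smooth(img):
    pad = np.pad(img.astype(float), R, mode="edge")
    tmp = sum(w * np.roll(pad, n, axis=0) for n, w in zip(range(-R, R + 1), wts))
    out = sum(w * np.roll(tmp, n, axis=1) for n, w in zip(range(-R, R + 1), wts))
    return out[R:-R, R:-R]

# synthetic "sky": one big bright object and one isolated bright pixel
img = np.zeros((32, 32))
img[8:16, 8:16] = 1.0           # big object (to be kept)
img[25, 25] = 1.0               # small object (to be discarded)

mask_direct = img > 0.5         # thresholding the original keeps both objects
mask_smooth = smooth(img) > 0.5 # smoothing first suppresses the small one
print(mask_direct[25, 25], mask_smooth[25, 25], mask_smooth[12, 12])
```

The isolated pixel's smoothed value falls below the threshold while the interior of the large object stays above it, mirroring the behaviour of Fig. 1b versus Fig. 1d.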
Other examples of one-dimensional non-negative kernels with unbounded support that can be used to define product averaged type kernels are, e.g., the Jackson-type kernels, defined by
$$J_n(x) := c_n \operatorname{sinc}^{2n}\!\left(\frac{x}{2 n \pi \alpha}\right), \quad x \in \mathbb{R},$$
with $n \in \mathbb{N}$, $\alpha \ge 1$, and $c_n$ a non-zero normalization coefficient, given by $c_n := \left[ \int_{\mathbb{R}} \operatorname{sinc}^{2n}\!\left(\frac{u}{2 n \pi \alpha}\right) du \right]^{-1}$, [33]. The averaged kernels generated by the functions $J_n$ and their multivariate products can be constructed as in the case of the Fejér kernel.

Now, in order to give examples of duration-limited kernels, i.e., one-dimensional kernels with compact support, we recall the definition of the well-known central B-spline of order $n \in \mathbb{N}$ (see, e.g., [1,36]), defined by
$$M_n(x) := \frac{1}{(n-1)!} \sum_{i=0}^{n} (-1)^i \binom{n}{i} \left( \frac{n}{2} + x - i \right)_+^{n-1},$$
where $(x)_+ := \max\{x, 0\}$ denotes "the positive part" of $x \in \mathbb{R}$. Now, let us denote by $\bar{M}_{n,m}$ the averaged B-spline kernel of order $n \in \mathbb{N}$. Recalling that $M'_n(t) = M_{n-1}(t + 1/2) - M_{n-1}(t - 1/2)$, $t \in \mathbb{R}$ ($n \ge 2$), for $m = 1$ we have $\bar{M}'_{n,1}(t) = M_n(t + 1/2) - M_n(t - 1/2) = M'_{n+1}(t)$, $t \in \mathbb{R}$ ($n \ge 1$), from which we can obtain that $\bar{M}_{n,1}(t) = M_{n+1}(t)$, $t \in \mathbb{R}$, for every $n \in \mathbb{N}$; namely, the averaged kernel with $m = 1$ generated by a central B-spline of order $n$ is a B-spline itself, of order $n + 1$. Thus, the multivariate averaged type product kernel with $m = 1$ generated by $M_n$ is $\mathcal{M}^n_1(t) := \prod_{i=1}^{N} \bar{M}_{n,1}(t_i) = \prod_{i=1}^{N} M_{n+1}(t_i)$, $t \in \mathbb{R}^N$. In practice, in the latter case the multivariate sampling Kantorovich series based upon the product of $N$ averaged B-splines $\bar{M}_{n,1}$ coincide with the sampling Kantorovich operators based upon the multidimensional central B-spline of order $n + 1$.
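The identity $\bar{M}_{n,1} = M_{n+1}$ can be verified numerically: averaging $M_n$ over a window of width 1 is exactly convolution with $M_1 = \mathbf{1}_{[-1/2,1/2]}$, which raises the spline order by one. A short check (the quadrature resolution is our own choice):

```python
import numpy as np
from math import comb, factorial

def bspline(n, x):
    # central B-spline of order n:
    # M_n(x) = (1/(n-1)!) * sum_{i=0}^n (-1)^i C(n,i) (n/2 + x - i)_+^{n-1}
    x = np.asarray(x, dtype=float)
    s = np.zeros_like(x)
    for i in range(n + 1):
        s += (-1) ** i * comb(n, i) * np.clip(n / 2.0 + x - i, 0.0, None) ** (n - 1)
    return s / factorial(n - 1)

def averaged(n, x, quad=400):
    # m = 1 averaged kernel: int_{x-1/2}^{x+1/2} M_n(u) du (midpoint rule)
    x = np.asarray(x, dtype=float)
    u = x[:, None] - 0.5 + (np.arange(quad) + 0.5)[None, :] / quad
    return bspline(n, u).mean(axis=1)

x = np.linspace(-3.0, 3.0, 121)
err = np.max(np.abs(averaged(2, x) - bspline(3, x)))
print(err < 1e-3)   # averaged M_2 coincides with M_3
```

The small residual comes only from the midpoint quadrature near the kinks of $M_2$; the identity itself is exact.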
Funding Open access funding provided by Universitá degli Studi di Perugia within the CRUI-CARE Agreement.

Compliance with ethical standards
Conflict of interest On behalf of all authors, the corresponding author states that there is no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.