1 Introduction and preliminaries

The term ‘statistical convergence’ was first introduced by Fast [1]; it is a generalization of the concept of ordinary convergence. In fact, the root of the notion of statistical convergence can be traced back to Zygmund [2] (see also [3]), who used the term ‘almost convergence’, which turned out to be equivalent to the concept of statistical convergence. Statistical convergence was further investigated by Schoenberg [4], Šalát [5], Fridy [6], and Connor [7].

Recall the definition of natural (or asymptotic) density as follows: Suppose that \(E\subseteq\mathbb{N}:=\{1,2,\ldots\}\) and \(E_{n}=\{k\leq n:k\in E\}\). Then

$$\delta(E)=\lim_{n}\frac{1}{n}|E_{n}| $$

is called the natural density of E provided that the limit exists, where \(|\cdot|\) denotes the cardinality of the enclosed set. A sequence \(s=(s_{k})\) is said to be statistically convergent (shortly, \({\mathcal{S}}\)-convergent) [1] to L, in symbols \({\mathcal{S}}\mbox{-}\!\lim s=L\), if \(\delta(K_{\epsilon })=0\) for every \(\epsilon>0\), where

$$K_{\epsilon}:=\bigl\{ k\in\mathbb{N}:|s_{k}-L|\geq\epsilon\bigr\} , $$

equivalently,

$$\lim_{n}n^{-1}\bigl|\bigl\{ k\leq n:|s_{k}-L|\geq \epsilon\bigr\} \bigr|=0. $$

We remark that every convergent sequence is statistically convergent, but the converse is not true in general.
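To make the definition concrete, the following minimal sketch (our illustration, not part of the original development) estimates \(\delta(K_{\epsilon})\) by the relative frequency of \(K_{\epsilon}\) among the first n indices, for a hypothetical sequence that is unbounded (hence divergent) yet statistically convergent to 0.

```python
# Minimal sketch (our illustration): estimate the natural density of
# K_eps = {k : |s_k - L| >= eps} by its relative frequency among k <= n.
# The sample sequence equals k at perfect squares and 1/k elsewhere, so
# it is unbounded but statistically convergent to L = 0.
import math

def s(k):
    return float(k) if math.isqrt(k) ** 2 == k else 1.0 / k

def freq_K_eps(n, L=0.0, eps=0.1):
    """(1/n) * |{k <= n : |s_k - L| >= eps}|."""
    return sum(1 for k in range(1, n + 1) if abs(s(k) - L) >= eps) / n

for n in (10**3, 10**4, 10**5, 10**6):
    print(n, freq_K_eps(n))  # tends to 0, consistent with S-lim s = 0
```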

In 2009, the concept of weighted statistical convergence was defined and studied by Karakaya and Chishti [8] and further modified by Mursaleen et al. [9] in 2012.

Let \(p=(p_{k})\) be a sequence of nonnegative numbers such that \(p_{0}>0\) and

$$P_{n}=\sum_{k=0}^{n}p_{k} \rightarrow\infty \quad\mbox{as } n\rightarrow \infty. $$

The lower and upper weighted densities of \(E\subseteq\mathbb{N}\) are defined by

$${\underline{\delta}}_{\bar{N}}(E)=\liminf_{n} \frac{1}{P_{n}}\bigl|\{k\leq P_{n}:k\in E\}\bigr| $$

and

$${\overline{\delta}}_{\bar{N}}(E)=\limsup_{n} \frac{1}{P_{n}}\bigl|\{k\leq P_{n}:k\in E\}\bigr|, $$

respectively. We say that E has weighted density, denoted by \(\delta_{\bar{N}}(E)\), if the two limits above exist and are equal; in that case one writes

$${\delta}_{\bar{N}}(E)=\lim_{n}\frac{1}{P_{n}}\bigl|\{k\leq P_{n}:k\in E\}\bigr|. $$

The sequence \(s=(s_{k})\) is said to be weighted statistically convergent (or \(S_{\bar{N}}\)-convergent) to L if for every \(\epsilon>0\), the set \(\{k\in\mathbb {N}:p_{k}|s_{k}-L|\geq\epsilon\}\) has weighted density zero, i.e.

$$\lim_{n}\frac{1}{P_{n}}\bigl|\bigl\{ k\leq P_{n}:p_{k}|s_{k}-L| \geq\epsilon\bigr\} \bigr|=0. $$

In this case we write \(L=S_{\bar{N}}\mbox{-}\!\lim s\).

Remark 1.1

If \(p_{k}=1\) for all k, then weighted statistical convergence reduces to statistical convergence.
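The following small sketch (ours) evaluates the weighted density \(\frac{1}{P_{n}}|\{k\leq P_{n}:k\in E\}|\) for the hypothetical weights \(p_{k}=k+1\) and E the set of perfect squares, whose weighted density, like its natural density, is 0.

```python
# Sketch (ours): the weighted density (1/P_n) * |{k <= P_n : k in E}| with
# the hypothetical weights p_k = k + 1 (so p_0 > 0 and P_n -> infinity).
import math

def weighted_density(in_E, p, n):
    """Evaluate the density quotient at level n for weights p(0), ..., p(n)."""
    P_n = sum(p(j) for j in range(n + 1))
    count = sum(1 for k in range(1, math.floor(P_n) + 1) if in_E(k))
    return count / P_n

is_square = lambda k: math.isqrt(k) ** 2 == k
for n in (10, 100, 1000):
    print(n, weighted_density(is_square, lambda j: j + 1.0, n))  # tends to 0
```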

In 2013, Belen and Mohiuddine [10] presented a generalization of this notion through the de la Vallée-Poussin mean and called it weighted λ-statistical convergence (in short, \(S_{\lambda}^{\bar {N}}\)-convergence). Recently, the notion was modified by Ghosal [11] by adding the condition \(\liminf_{k} p_{k}>0\).

Let X and Y be two sequence spaces and let \(A=(a_{n,k})\) be an infinite matrix. If for each \(s=(s_{k})\) in X the series

$$A_{n}s=\sum_{k}a_{n,k}s_{k}= \sum_{k=1}^{\infty}a_{n,k}s_{k} $$

converges for each \(n\in\mathbb {N}\) and the sequence \(As=(A_{n}s)\) belongs to Y, then we say that the matrix A maps X into Y. By the symbol \((X,Y)\) we denote the set of all matrices which map X into Y.

A matrix A (or a matrix map A) is called regular if \(A\in(c,c)\), where the symbol c denotes the space of all convergent sequences, and

$$\lim_{n}A_{n}s=\lim_{k}s_{k} $$

for all \(s\in c\). The well-known Silverman-Toeplitz theorem (see [12]) asserts that \(A=(a_{n,k})\) is regular if and only if

  (i) \(\sup_{n}\sum_{k}|a_{n,k}|<\infty\);

  (ii) \(\lim_{n}a_{n,k}=0\) for each k;

  (iii) \(\lim_{n}\sum_{k}a_{n,k}=1\).

Kolk [13] extended the definition of statistical convergence with the help of a nonnegative regular matrix \(A=(a_{n,k})\), calling it A-statistical convergence. For a nonnegative regular matrix A, we say that a sequence \(s=(s_{k})\) is A-statistically convergent (shortly, \(S_{A}\)-convergent) to L provided that for every \(\epsilon>0\) we have

$$\lim_{n}\sum_{k:|s_{k}-L|\geq\epsilon}a_{n,k}=0. $$
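For instance (a sketch of ours, with the Cesàro matrix as a hypothetical choice of A), the defining sum reduces to the relative frequency \(n^{-1}|\{k\leq n:|s_{k}-L|\geq\epsilon\}|\), so that \(S_{A}\)-convergence recovers ordinary statistical convergence:

```python
# Sketch (ours): sum_{k : |s_k - L| >= eps} a_{n,k} from the definition of
# A-statistical convergence, for the row-finite Cesaro matrix a_{n,k} = 1/n
# (k <= n); for this A the sum is exactly a relative frequency.
def cesaro(n, k):
    return 1.0 / n if 1 <= k <= n else 0.0

def SA_sum(s, L, eps, n, a=cesaro):
    """Sum of a(n, k) over {k <= n : |s_k - L| >= eps}."""
    return sum(a(n, k) for k in range(1, n + 1) if abs(s(k) - L) >= eps)

s = lambda k: 1.0 / k              # an ordinary null sequence
print(SA_sum(s, 0.0, 0.1, 1000))   # 10/1000 = 0.01, tending to 0 as n grows
```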

Edely and Mursaleen [14] introduced statistical A-summability (or \(A_{S}\)-summability) and showed that \(S_{A}\)-convergence implies \(A_{S}\)-summability for bounded sequences, but that the converse is not always valid.

2 Statistical weighted A-summability

In the present section we introduce the notion of statistical weighted A-summability and prove that this summability method is stronger than weighted A-statistical convergence. We also define and characterize weighted regular matrices.

Let \(s=(s_{k})_{k\in\mathbb {N}}\) be a sequence of real or complex numbers. The weighted means \(\sigma_{n}\) here are of the form

$$\sigma_{n}=\frac{1}{Q_{n}}\sum_{k=1}^{n}q_{k}s_{k}\quad (n\geq1), $$

where \(q=(q_{k})_{k\in\mathbb {N}}\) is a given sequence of nonnegative numbers such that \(\liminf_{k} q_{k}>0\), and \(Q=(Q_{n})\) is the sequence defined by \(Q_{n}=\sum_{k=1}^{n}q_{k}\neq0\) for all \(n\geq1\). If

$$\lim_{n} \biggl|\frac{1}{Q_{n}}\sum_{k=1}^{n}q_{k}s_{k}-L \biggr|=0, $$

then we say that \(s=(s_{k})_{k\in\mathbb {N}}\) is weighted summable (shortly, \(c^{\bar{N}}\)-summable) to the number L. In symbols, we shall write \(c^{\bar{N}}\mbox{-}\!\lim s=L\), and \(c^{\bar{N}}\) denotes the space of all weighted summable sequences. In particular, if we take \(q_{k}=1\) for all k, then \(c^{\bar{N}}\)-summability reduces to \((C,1)\)-summability; recall that a sequence \((s_{k})_{k\in\mathbb {N}}\) is Cesàro summable (shortly, \((C,1)\)-summable) to L if \(\lim_{n\to \infty}\sigma_{n}'=L\), where

$$\sigma_{n}'=\frac{1}{n}\sum _{k=1}^{n}s_{k}. $$
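As a small numerical sketch (ours), the weighted mean with the hypothetical choice \(q_{k}=1\) reproduces the Cesàro means \(\sigma_{n}'\); the divergent sequence \(s_{k}=(-1)^{k}\) is then seen to be \((C,1)\)-summable to 0.

```python
# Sketch (ours): the weighted mean sigma_n = (1/Q_n) * sum_{k<=n} q_k s_k.
def weighted_mean(s, q, n):
    Q_n = sum(q(k) for k in range(1, n + 1))
    return sum(q(k) * s(k) for k in range(1, n + 1)) / Q_n

s = lambda k: (-1.0) ** k
for n in (10, 101, 1000):
    print(n, weighted_mean(s, lambda k: 1.0, n))  # Cesaro means; tend to 0
```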

Definition 2.1

A sequence \(s=(s_{k})_{k\in\mathbb {N}}\) is said to be weighted A-summable if the A-transform of s is weighted summable. It is said to be weighted A-summable to L if the A-transform of s is weighted summable to L, that is,

$$\lim_{m} \Biggl|\frac{1}{Q_{m}}\sum_{n=1}^{m} \sum_{k=1}^{\infty }q_{n}a_{n,k}s_{k}-L \Biggr|=0. $$

For convenience, we shall use the convention that

$$A_{m}^{\bar{N}}(s)=\frac{1}{Q_{m}}\sum _{n=1}^{m}\sum_{k=1}^{\infty }q_{n}a_{n,k}s_{k}. $$
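Numerically (a sketch of ours), \(A_{m}^{\bar{N}}(s)\) can be evaluated by truncating the inner series at a finite cutoff, which introduces no error for row-finite matrices such as the Cesàro matrix used below:

```python
# Sketch (ours): A_m^N(s) = (1/Q_m) sum_{n=1}^m sum_k q_n a(n,k) s(k), with
# the series over k truncated at `cutoff` (exact when a(n,k) = 0 for
# k > cutoff on rows n = 1, ..., m).
def weighted_A_transform(s, q, a, m, cutoff):
    Q_m = sum(q(n) for n in range(1, m + 1))
    return sum(q(n) * a(n, k) * s(k)
               for n in range(1, m + 1)
               for k in range(1, cutoff + 1)) / Q_m
```

For the Cesàro matrix one may take the cutoff equal to m, since \(a_{n,k}=0\) for \(k>n\).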

Definition 2.2

The matrix A (or a matrix map A) is said to be a weighted regular matrix, or a weighted regular method, if \(As\in c^{\bar{N}}\) for all \(s=(s_{k})\in c\) with \(c^{\bar{N}} \mbox{-}\!\lim As=\lim s\); one denotes this by \(A\in(c,c^{\bar{N}})\). Clearly, \(A\in(c,c^{\bar{N}})\) means that \(A_{m}^{\bar{N}}(s)\) exists for each \(m\in\mathbb {N}\) and each \(s\in c\) and that \(A_{m}^{\bar {N}}(s)\to L\) (\(m\to\infty\)) whenever \(s_{k}\to L\) (\(k\to\infty\)).

We prove the following characterization of a weighted regular matrix.

Theorem 2.3

The matrix \(A=(a_{n,k})\) is weighted regular, that is, \(A\in(c,c^{\bar{N}})\), if and only if

$$\begin{aligned}& \sup_{m}\sum_{k=1}^{\infty} \frac{1}{Q_{m}} \Biggl|\sum_{n=1}^{m}q_{n}a_{n,k} \Biggr|< \infty; \end{aligned}$$
(1)
$$\begin{aligned}& \lim_{m}\frac{1}{Q_{m}}\sum_{n=1}^{m}q_{n}a_{n,k}=0 \quad\textit{for each } k; \end{aligned}$$
(2)
$$\begin{aligned}& \lim_{m}\frac{1}{Q_{m}}\sum_{n=1}^{m} \sum_{k=1}^{\infty }q_{n}a_{n,k}=1. \end{aligned}$$
(3)

Proof

Sufficiency. Let the conditions (1)-(3) hold, and let \(s=(s_{k})\in c\) with \(s_{k}\to L\) as \(k\to\infty\). Then for each \(\epsilon>0\) there exists \(N\in\mathbb {N}\) such that \(|s_{k}|<|L|+\epsilon\) for \(k>N\). One writes

$$\begin{aligned} \frac{1}{Q_{m}}\sum_{n=1}^{m}\sum _{k=1}^{\infty }q_{n}a_{n,k}s_{k} =& \frac{1}{Q_{m}}\sum_{k=1}^{\infty}\sum _{n=1}^{m}q_{n}a_{n,k}s_{k} \\ =&\frac{1}{Q_{m}}\sum_{k=1}^{N}\sum _{n=1}^{m}q_{n}a_{n,k}s_{k}+ \frac {1}{Q_{m}}\sum_{k=N+1}^{\infty}\sum _{n=1}^{m}q_{n}a_{n,k}s_{k}. \end{aligned}$$

Therefore

$$\Biggl|\frac{1}{Q_{m}}\sum_{n=1}^{m}\sum _{k=1}^{\infty }q_{n}a_{n,k}s_{k} \Biggr|\leq\|s\|\sum_{k=1}^{N}\frac{1}{Q_{m}} \Biggl|\sum_{n=1}^{m}q_{n}a_{n,k} \Biggr|+\bigl(|L|+ \epsilon\bigr)\frac{1}{Q_{m}}\sum_{n=1}^{m}\sum _{k=1}^{\infty}q_{n}a_{n,k}. $$

Letting \(m\to\infty\) and using (2) and (3), we obtain

$$\limsup_{m} \Biggl|\frac{1}{Q_{m}}\sum_{n=1}^{m}\sum _{k=1}^{\infty }q_{n}a_{n,k}s_{k} \Biggr|\leq|L|+\epsilon. $$

Applying this estimate to the sequence \(s-Le\in c\), where \(e=(1,1,1,\ldots)\), which converges to 0, and using the linearity of \(A_{m}^{\bar{N}}\) together with \(A_{m}^{\bar{N}}(e)\to1\) (by (3)), we get \(\limsup_{m}|A_{m}^{\bar{N}}(s)-L|\leq\epsilon\), and consequently

$$\lim_{m}\frac{1}{Q_{m}}\sum_{n=1}^{m} \sum_{k=1}^{\infty }q_{n}a_{n,k}s_{k}=L= \lim_{k} s_{k}\quad (\mbox{since } \epsilon \mbox{ was arbitrary}). $$

This shows that A is weighted regular.

Necessity. Let \(A\in(c,c^{\bar{N}})\). Consider \(e_{k},e\in c\), where \(e_{k}\) is the sequence with 1 in place k and 0 elsewhere and \(e=(1,1,1,\dots)\). Then the A-transforms of the sequences \(e_{k}\) and e belong to \(c^{\bar{N}}\); hence \(e_{k}\in c\) gives condition (2), and \(e\in c\) proves the validity of (3).

Let us write

$$A_{m}^{\bar{N}}(s)=\frac{1}{Q_{m}}\sum _{n=1}^{m}q_{n}\beta _{n}(s),\qquad \beta_{n}(s)=\sum_{k=1}^{\infty}a_{n,k}s_{k}. $$

Clearly, \(\beta_{n}\in c'\) (the space of all continuous linear functionals on c) and so \(A_{m}^{\bar{N}}\in c'\). Since A is weighted regular, \(A_{m}^{\bar{N}}(s)\) exists for each \(m\in\mathbb {N}\) and each \(s\in c\), and \(A_{m}^{\bar{N}}(s)\to L=\lim s_{k}\). It follows that \((A_{m}^{\bar{N}}(s))\) is bounded for \(s\in c\). Hence \((\| A_{m}^{\bar{N}}\|)\) is bounded by the uniform boundedness principle.

For each positive integer b, we define \(x=(x_{k})\) by

$$x_{k}=\left \{ \textstyle\begin{array}{@{}l@{\quad}l} \operatorname{sgn}\sum_{n=1}^{m}q_{n}a_{n,k} &\mbox{for } 1\leq k\leq b,\\ 0 &\mbox{for } k>b. \end{array}\displaystyle \right . $$

Then \(x\in c\), \(\|x\|=1\), and also

$$\bigl|A_{m}^{\bar{N}}(x)\bigr|=\frac{1}{Q_{m}}\sum _{k=1}^{b} \Biggl|\sum_{n=1}^{m}q_{n}a_{n,k} \Biggr|. $$

Hence

$$\bigl|A_{m}^{\bar{N}}(x)\bigr|\leq\bigl\| A_{m}^{\bar{N}}\bigr\| \|x\|= \bigl\| A_{m}^{\bar{N}}\bigr\| . $$

Since b was arbitrary, it follows that

$$\frac{1}{Q_{m}}\sum_{k=1}^{\infty} \Biggl|\sum _{n=1}^{m}q_{n}a_{n,k} \Biggr| \leq\bigl\| A_{m}^{\bar{N}}\bigr\| , $$

so (1) is valid. □
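As a numerical sanity check of Theorem 2.3 (ours, not part of the proof), the fragment below evaluates the quantities in conditions (2) and (3) for the Cesàro matrix with the hypothetical weights \(q_{n}=1\); since this matrix is nonnegative, the quantity in (1) coincides with the one in (3) and is therefore bounded as well.

```python
# Sanity check (ours) of conditions (2) and (3) for the Cesaro matrix
# a_{n,k} = 1/n (k <= n) with weights q_n = 1, so Q_m = m.
def cesaro(n, k):
    return 1.0 / n if 1 <= k <= n else 0.0

def cond2(m, k):
    """(1/Q_m) * sum_{n<=m} q_n a_{n,k}; tends to 0 for each fixed k."""
    return sum(cesaro(n, k) for n in range(1, m + 1)) / m

def cond3(m):
    """(1/Q_m) * sum_{n<=m} sum_k q_n a_{n,k}; tends to 1."""
    return sum(cesaro(n, k) for n in range(1, m + 1)
               for k in range(1, n + 1)) / m

for m in (10, 100, 1000):
    print(m, cond2(m, 3), cond3(m))  # column means -> 0, row means -> 1
```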

The authors of [15] defined a notion of weighted A-statistical convergence which is incorrect because they did not consider the factor ‘\(\frac{1}{Q_{n}}\)’ appearing in the definition of the weighted mean above, so here we present a slightly modified version as follows.

Definition 2.4

Let \(A=(a_{n,k})\) be a nonnegative weighted regular matrix and let \(E\subseteq\mathbb {N}\). Then the weighted A-density of E is given by

$$\delta_{\bar{N}}^{A}(E)=\lim_{m} \frac{1}{Q_{m}}\sum_{n=1}^{m}\sum _{k\in E}q_{n}a_{n,k} $$

provided that the limit exists. A sequence \(x=(x_{k})\) of real or complex numbers is said to be weighted A-statistically convergent (shortly, \(S_{A}^{\bar{N}}\)-convergent) to L if for every \(\epsilon>0\)

$$\delta_{\bar{N}}^{A} \bigl(E(\epsilon) \bigr)=0, $$

where

$$ E(\epsilon)=\bigl\{ k\in\mathbb{N}:|x_{k}-L|\geq\epsilon\bigr\} . $$

In symbols, we shall write \(S_{A}^{\bar{N}}\mbox{-}\!\lim x=L\).

Definition 2.5

Let \(A=(a_{n,k})\) be a nonnegative weighted regular matrix. A sequence \(s=(s_{k})\) of real or complex numbers is said to be statistically weighted A-summable (shortly, \({A}_{S}^{\bar{N}}\)-summable) to L, in symbols \({A}_{S}^{\bar{N}}\mbox{-}\!\lim s=L\), if the following holds for each \(\epsilon>0\):

$$\delta(E_{\epsilon})=0, $$

where \(E_{\epsilon}=\{m\in\mathbb {N}:|T_{m}-L|\geq\epsilon\}\) and

$$T_{m}=A_{m}^{\bar{N}}(s)=\frac{1}{Q_{m}}\sum _{j=1}^{m}\sum_{k=1}^{\infty }q_{j}a_{j,k}s_{k}, $$

equivalently, we can write

$$\lim_{n}\frac{1}{n}\bigl|\bigl\{ m\leq n:|T_{m}-L|\geq \epsilon\bigr\} \bigr|=0. $$

Thus, a sequence \(s=(s_{k})\) is \({A}_{S}^{\bar{N}}\)-summable to L if and only if \(A_{m}^{\bar{N}}(s)\) is \({\mathcal{S}}\)-convergent to L.
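Computationally (a sketch of ours), Definition 2.5 amounts to forming the transforms \(T_{m}\), for example with the routine sketched after Definition 2.1, and then estimating the natural density of \(E_{\epsilon}\):

```python
# Sketch (ours): relative frequency of E_eps = {m <= M : |T_m - L| >= eps}
# for a callable T returning the transforms T_m = A_m^N(s); the sequence s
# is statistically weighted A-summable to L precisely when this quantity
# tends to 0 as M grows, for every eps > 0.
def density_E_eps(T, L, eps, M):
    return sum(1 for m in range(1, M + 1) if abs(T(m) - L) >= eps) / M
```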

Theorem 2.6

If a sequence \(s=(s_{k})\) is bounded and weighted A-statistically convergent to L, then it is weighted A-summable to L and hence statistically weighted A-summable to L, but not conversely.

Proof

Let \((s_{k})\) be bounded and weighted A-statistically convergent to L, and let \(E(\epsilon)=\{k\in\mathbb {N}:|s_{k}-L|\geq \epsilon\}\). Then

$$\begin{aligned} &\Biggl|\frac{1}{Q_{m}}\sum_{n=1}^{m}\sum _{k=1}^{\infty }q_{n}a_{n,k}s_{k}-L \Biggr|\\ &\quad= \Biggl|\frac{1}{Q_{m}}\sum_{n=1}^{m}\sum _{k=1}^{\infty}q_{n}a_{n,k}(s_{k}-L)+L \Biggl(\frac{1}{Q_{m}}\sum_{n=1}^{m}\sum _{k=1}^{\infty}q_{n}a_{n,k}-1 \Biggr) \Biggr| \\ &\quad\leq \Biggl|\frac{1}{Q_{m}}\sum_{n=1}^{m}\sum _{k=1}^{\infty }q_{n}a_{n,k}(s_{k}-L) \Biggr|+|L| \Biggl|\frac{1}{Q_{m}}\sum_{n=1}^{m}\sum _{k=1}^{\infty}q_{n}a_{n,k}-1 \Biggr| \\ &\quad\leq \Biggl|\frac{1}{Q_{m}}\sum_{n=1}^{m}\sum _{k\in E(\epsilon )}q_{n}a_{n,k}(s_{k}-L) \Biggr|+ \Biggl|\frac{1}{Q_{m}}\sum_{n=1}^{m}\sum _{k\notin E(\epsilon)}q_{n}a_{n,k}(s_{k}-L) \Biggr| \\ &\qquad{}+|L| \Biggl|\frac{1}{Q_{m}}\sum_{n=1}^{m} \sum_{k=1}^{\infty }q_{n}a_{n,k}-1 \Biggr| \\ &\quad\leq\sup_{k}|s_{k}-L|\frac{1}{Q_{m}}\sum _{n=1}^{m}\sum_{k\in E(\epsilon)}q_{n}a_{n,k}+ \epsilon\frac{1}{Q_{m}}\sum_{n=1}^{m}\sum _{k\notin E(\epsilon)}q_{n}a_{n,k} \\ &\qquad{}+|L| \Biggl|\frac{1}{Q_{m}}\sum_{n=1}^{m} \sum_{k=1}^{\infty }q_{n}a_{n,k}-1 \Biggr|. \end{aligned}$$

Using the definition of weighted A-statistical convergence and the weighted regularity conditions of the matrix A, we have

$$\lim_{m} \Biggl|\frac{1}{Q_{m}}\sum_{n=1}^{m} \sum_{k=1}^{\infty }q_{n}a_{n,k}s_{k}-L \Biggr|=0 \quad(\mbox{since } \epsilon \mbox{ was arbitrary}), $$

that is, s is weighted A-summable to L. Hence

$${\mathcal{S}}\mbox{-}\!\lim_{m} \Biggl|\frac{1}{Q_{m}}\sum _{n=1}^{m}\sum_{k=1}^{\infty}q_{n}a_{n,k}s_{k}-L \Biggr|=0. $$

This shows that the sequence s is \({A}_{S}^{\bar{N}}\)-summable to L. □

Example 2.7

Let A be the Cesàro matrix (i.e., the \((C,1)\)-matrix), defined as follows:

$$ a_{n,k}=\left \{ \textstyle\begin{array}{@{}l@{\quad}l} \frac{1}{n} & \mbox{if } 1\leq k\leq n, \\ 0 & \mbox{if } k>n. \end{array}\displaystyle \right . $$

Consider a bounded sequence \(s=(s_{k})\) which is defined by

$$ s_{k}=\left \{ \textstyle\begin{array}{@{}l@{\quad}l} 1 &\mbox{if } k \mbox{ is odd},\\ 0 &\mbox{if } k \mbox{ is even}, \end{array}\displaystyle \right . $$

and suppose also that \(q_{n}=1\) for all n, so that \(Q_{m}=m\). Then we see that s is weighted A-summable to 1/2 and hence statistically weighted A-summable to the same limit, but not weighted A-statistically convergent: indeed, for \(0<\epsilon\leq1/2\) we have \(|s_{k}-1/2|=1/2\geq\epsilon\) for every k, so \(E(\epsilon)=\mathbb{N}\) and \(\delta_{\bar{N}}^{A}(E(\epsilon))=1\neq0\).
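These claims are easily confirmed numerically (a sketch of ours): since \(\sum_{k\leq n}s_{k}=\lceil n/2\rceil\), the transform reduces to \(T_{m}=\frac{1}{m}\sum_{n=1}^{m}\frac{\lceil n/2\rceil}{n}\).

```python
# Numerical check (ours) of Example 2.7: with the (C,1)-matrix and q_n = 1,
# T_m = (1/m) * sum_{n<=m} ceil(n/2)/n, which tends to 1/2; the sequence is
# therefore weighted A-summable (hence statistically weighted A-summable)
# to 1/2, while E(eps) = N for eps <= 1/2 shows it is not weighted
# A-statistically convergent.
import math

def T(m):
    return sum(math.ceil(n / 2) / n for n in range(1, m + 1)) / m

for m in (10, 100, 1000, 10000):
    print(m, T(m))  # tends to 1/2
```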

3 Application

In this final section, we apply the summability notion introduced above, i.e. \(A_{S}^{\bar{N}}\)-summability, to obtain a Korovkin-type approximation theorem.

The approximation theorem established by Korovkin [16] is nowadays called the Korovkin-type approximation theorem; it states that the convergence of a sequence of positive linear operators to a real-valued function h depends only on the convergence at the test functions 1, x, \(x^{2}\) (or other equivalent sets of test functions). Many mathematicians have extended Korovkin-type approximation theorems by using various test functions in several settings, including Banach spaces, abstract Banach lattices, function spaces, and Banach algebras. Gadjiev and Orhan [17] were the first to establish the classical Korovkin theorem through statistical convergence, and they displayed an interesting example in support of the result. Recently, Korovkin-type theorems have been obtained by Mohiuddine [18] and Edely et al. [19] for almost convergence and λ-statistical convergence, respectively. The authors of [20] established these types of approximation theorems in weighted \(L_{p}\) spaces, where \(1\leq p<\infty\), through A-summability, which is stronger than ordinary convergence. For these types of approximation theorems and related concepts, one may refer to [21–28] and the references therein.

We use the notation \(C_{B}(D)\) to denote the space of all continuous and bounded real-valued functions on \(D=I\times I\) equipped with the following norm:

$$\Vert h\Vert_{C_{B}(D)}:=\sup_{(x,y)\in D}\bigl|h(x,y)\bigr|,\quad h\in C_{B}(D), $$

where \(I=[0,\infty)\). Suppose h is a real-valued function on D such that

$$\bigl|h(g,r)-h(x,y)\bigr|\leq\omega^{\ast} \biggl(h;\sqrt{ \biggl( \frac {g}{1+g}-\frac{x}{1+x} \biggr)^{2}+ \biggl( \frac{r}{1+r}-\frac {y}{1+y} \biggr)^{2}} \biggr), $$

and the space of such functions is denoted by \(H_{\omega^{\ast}}(D)\). Here \(\omega^{\ast}\) denotes the modulus of continuity, defined by

$$\omega^{\ast}(h;\delta)=\sup_{(g,r),(x,y)\in D} \bigl\{ \bigl|h(g,r)-h(x,y)\bigr|:\sqrt{(g-x)^{2}+(r-y)^{2}}\leq\delta \bigr\},\quad \delta>0. $$

We remark that any function \(h\in H_{\omega^{\ast}}(D)\) is continuous and bounded on D, and a necessary and sufficient condition for \(h\in H_{\omega^{\ast}}(D)\) is that

$$\lim_{\delta\rightarrow0}\omega^{\ast}(h;\delta)=0. $$

We now state the Korovkin-type approximation theorem of Çakar and Gadjiev [29] for ordinary convergence with the following test functions:

$$h_{0}(g,r)=1,\qquad h_{1}(g,r)=\frac{g}{1+g},\qquad h_{2}(g,r)=\frac{r}{1+r}, $$

and

$$h_{3}(g,r)= \biggl(\frac{g}{1+g} \biggr)^{2}+ \biggl( \frac{r}{1+r} \biggr)^{2}. $$

Theorem 3.1

Let \((J_{k})\) be a sequence of positive linear operators (PLO) from \(H_{\omega^{\ast}}(D)\) into \(C_{B}(D)\). Then

$$\lim_{k\rightarrow\infty} \bigl\| J_{k}(h;x,y)-h(x,y) \bigr\| _{C_{B}(D)}=0 \quad\bigl(\forall h\in H_{\omega^{\ast}}(D) \bigr) $$

if and only if

$$ \lim_{k\rightarrow\infty} \bigl\| J_{k}(h_{i};x,y)-h_{i} \bigr\| _{C_{B}(D)}=0, $$

where \(i=0,1,2,3\).

For \(S_{A}\)-convergence and \(A_{S}\)-summability, the corresponding proofs can be found in [30] and [31], respectively. We prove the following Çakar and Gadjiev [29] type theorem through the statistically weighted A-summability method.

Theorem 3.2

Let \(A=(a_{j,k})\) be a nonnegative weighted regular matrix and \((J_{k})\) be a sequence of PLO from \(H_{\omega^{\ast}}(D)\) into \(C_{B}(D)\). Then

$$ {\mathcal{S}}\mbox{-}\!\lim \Biggl\| \frac{1}{Q_{m}}\sum _{j=1}^{m} \sum_{k=1}^{\infty}q_{j}a_{j,k}J_{k}(h;x,y)-h(x,y) \Biggr\| _{C_{B}(D)}=0\quad \bigl(\forall h\in H_{\omega^{\ast}}(D) \bigr) $$
(4)

if and only if

$$\begin{aligned}& {\mathcal{S}}\mbox{-}\!\lim \Biggl\| \frac{1}{Q_{m}}\sum _{j=1}^{m} \sum_{k=1}^{\infty}q_{j}a_{j,k}J_{k}(1;x,y)-1 \Biggr\| _{C_{B}(D)}=0, \end{aligned}$$
(5)
$$\begin{aligned}& {\mathcal{S}}\mbox{-}\!\lim \Biggl\| \frac{1}{Q_{m}}\sum _{j=1}^{m} \sum_{k=1}^{\infty}q_{j}a_{j,k}J_{k} \biggl(\frac{g}{1+g};x,y \biggr)-\frac {x}{1+x} \Biggr\| _{C_{B}(D)}=0, \end{aligned}$$
(6)
$$\begin{aligned}& {\mathcal{S}}\mbox{-}\!\lim \Biggl\| \frac{1}{Q_{m}}\sum _{j=1}^{m} \sum_{k=1}^{\infty}q_{j}a_{j,k}J_{k} \biggl(\frac{r}{1+r};x,y \biggr)-\frac {y}{1+y} \Biggr\| _{C_{B}(D)}=0, \end{aligned}$$
(7)
$$\begin{aligned}& \begin{aligned}[b] &{\mathcal{S}}\mbox{-}\!\lim \Biggl\| \frac{1}{Q_{m}}\sum _{j=1}^{m} \sum_{k=1}^{\infty}q_{j}a_{j,k}J_{k} \biggl( \biggl(\frac{g}{1+g} \biggr)^{2}+ \biggl( \frac{r}{1+r} \biggr)^{2};x,y \biggr)\\ &\quad{}- \biggl( \biggl( \frac{x}{1+x} \biggr)^{2}+ \biggl(\frac{y}{1+y} \biggr)^{2} \biggr) \Biggr\| _{C_{B}(D)}=0. \end{aligned} \end{aligned}$$
(8)

Proof

The conditions (5)-(8) follow immediately from (4) by taking into account that each of the functions \(h_{i}\), \(i=0,1,2,3\), belongs to \(H_{\omega^{\ast}}(D)\). Conversely, let \(h\in H_{\omega^{\ast}}(D)\) and \((x,y)\in D\) be fixed, and let \(\epsilon>0\) be given. Then there exist \(\delta_{1},\delta_{2}>0\) such that

$$\bigl|h(g,r)-h(x,y)\bigr|< \epsilon $$

holds for all \((g,r)\) in D satisfying the conditions:

$$\biggl|\frac{g}{1+g}-\frac{x}{1+x} \biggr|< \delta_{1} \quad\mbox{and}\quad \biggl|\frac{r}{1+r}-\frac{y}{1+y} \biggr|< \delta_{2}. $$

Consider \(D(\delta)\) of the form

$$ D(\delta):= \biggl\{ (g,r)\in D:\sqrt{ \biggl(\frac{g}{1+g}- \frac {x}{1+x} \biggr)^{2}+ \biggl(\frac{r}{1+r}- \frac{y}{1+y} \biggr)^{2}}< \delta \biggr\} , $$

where \(\delta=\min\{\delta_{1},\delta_{2}\}\). Therefore, we obtain

$$\begin{aligned} \bigl|h(g,r)-h(x,y)\bigr|&=\bigl|h(g,r)-h(x,y)\bigr|\chi_{D(\delta )}(g,r)+\bigl|h(g,r)-h(x,y)\bigr| \chi_{D\setminus D(\delta)}(g,r) \\ &\leq\epsilon+2A\chi_{D\setminus D(\delta )}(g,r), \end{aligned}$$
(9)

where \(\chi_{S}\) stands for the characteristic function of a set S and \(A=\| h\|_{C_{B}(D)}\). Also, we obtain

$$ \chi_{D\setminus D(\delta)}(g,r)\leq\frac{1}{\delta_{1}^{2}} \biggl(\frac{g}{1+g}- \frac{x}{1+x} \biggr)^{2}+\frac{1}{\delta_{2}^{2}} \biggl( \frac{r}{1+r}-\frac{y}{1+y} \biggr)^{2}. $$
(10)

The inequalities (9) and (10) give

$$ \bigl|h(g,r)-h(x,y)\bigr|\leq\epsilon+\frac{2A}{\delta^{2}} \biggl\{ \biggl( \frac {g}{1+g}-\frac{x}{1+x} \biggr)^{2}+ \biggl( \frac{r}{1+r}-\frac{y}{1+y} \biggr)^{2} \biggr\} . $$
(11)

By a direct computation, we see that

$$\begin{aligned} \bigl|J_{k}(h;x,y)-h(x,y)\bigr|\leq{}& \epsilon+N \bigl\{ \bigl|J_{k}(h_{0};x,y)-h_{0}(x,y)\bigr| \\ &{}+\bigl|J_{k}(h_{1};x,y)-h_{1}(x,y)\bigr|+\bigl|J_{k}(h_{2};x,y)-h_{2}(x,y)\bigr| \\ &{}+\bigl|J_{k}(h_{3};x,y)-h_{3}(x,y)\bigr| \bigr\} , \end{aligned}$$
(12)

where

$$ N:=\epsilon+A+\frac{4A}{\delta^{2}}. $$

Consequently, by writing

$$\frac{1}{Q_{m}}\sum_{j=1}^{m}\sum _{k=1}^{\infty}q_{j}a_{j,k}J_{k}(h_{i};x,y) $$

instead of \(J_{k}(h_{i};x,y)\) (\(i=0,1,2,3\)) and considering the supremum over \((x,y)\in D\), one obtains

$$\begin{aligned} &\Biggl\| \frac{1}{Q_{m}}\sum_{j=1}^{m}\sum _{k=1}^{\infty }q_{j}a_{j,k}J_{k}(h;x,y)-h(x,y) \Biggr\| _{C_{B}(D)} \\ &\quad\leq\epsilon+N \Biggl( \Biggl\| \frac{1}{Q_{m}}\sum_{j=1}^{m} \sum_{k=1}^{\infty}q_{j}a_{j,k}J_{k}(h_{0};x,y)-h_{0}(x,y) \Biggr\| _{C_{B}(D)}\\ &\qquad{}+ \Biggl\| \frac{1}{Q_{m}}\sum_{j=1}^{m}\sum _{k=1}^{\infty }q_{j}a_{j,k}J_{k}(h_{1};x,y)-h_{1}(x,y) \Biggr\| _{C_{B}(D)}\\ &\qquad{}+ \Biggl\| \frac{1}{Q_{m}}\sum_{j=1}^{m}\sum _{k=1}^{\infty }q_{j}a_{j,k}J_{k}(h_{2};x,y)-h_{2}(x,y) \Biggr\| _{C_{B}(D)}\\ &\qquad{}+ \Biggl\| \frac{1}{Q_{m}}\sum_{j=1}^{m}\sum _{k=1}^{\infty }q_{j}a_{j,k}J_{k}(h_{3};x,y)-h_{3}(x,y) \Biggr\| _{C_{B}(D)} \Biggr). \end{aligned}$$

For a given \(t>0\), choose \(\epsilon>0\) such that \(\epsilon<t\), and define the following sets:

$$V= \Biggl\{ m\in\mathbb {N}: \Biggl\| \frac{1}{Q_{m}}\sum_{j=1}^{m} \sum_{k=1}^{\infty}q_{j}a_{j,k}J_{k}(h;x,y)-h(x,y) \Biggr\| _{C_{B}(D)}\geq t \Biggr\} , $$

and

$$V_{i}= \Biggl\{ m\in\mathbb {N}: \Biggl\| \frac{1}{Q_{m}}\sum _{j=1}^{m}\sum_{k=1}^{\infty}q_{j}a_{j,k}J_{k}(h_{i};x,y)-h_{i}(x,y) \Biggr\| _{C_{B}(D)}\geq\frac{t-\epsilon}{4N} \Biggr\} , $$

where \(i=0,1,2,3\). This shows that \(V\subset\bigcup_{i=0}^{3}V_{i}\) and so

$$\delta(V)\leq\delta(V_{0})+\delta(V_{1})+ \delta(V_{2})+\delta(V_{3}). $$

Hence, using assumptions (5)-(8), we get

$$ {\mathcal{S}}\mbox{-}\!\lim \Biggl\| \frac{1}{Q_{m}}\sum _{j=1}^{m} \sum_{k=1}^{\infty}q_{j}a_{j,k}J_{k}(h;x,y)-h(x,y) \Biggr\| _{C_{B}(D)}=0. $$

 □

Finally, we conclude our work with the following illustration, Example 3.3, which shows that Theorem 3.2 is stronger than Theorem 3.1.

Example 3.3

For any given \(k\in\mathbb {N}\), let us write the Bleimann, Butzer, and Hahn [32] (in short, BBH) operators of two variables as follows:

$$ \Phi_{k}(h;x,y):=\frac{1}{(1+x)^{k}(1+y)^{k}}\sum_{i=0}^{k} \sum_{u=0}^{k}h \biggl(\frac{i}{k-i+1}, \frac{u}{k-u+1} \biggr)\binom {k}{i}\binom{k}{u}x^{i}y^{u}, $$
(13)

where \(h\in H_{\omega^{\ast}}(D)\) and \(D=[0,\infty)\times[0,\infty)\). We know that

$$ (1+x)^{k}=\sum_{i=0}^{k} \binom{k}{i}x^{i}. $$
(14)

By considering the test function \(h_{0}(x,y)=1\) and using (13) and (14), one obtains

$$ \Phi_{k}(h_{0};x,y)=1=h_{0}(x,y). $$

Again using (13) and (14) and taking \(h_{1}(x,y)=\frac{x}{1+x}\), we get

$$\begin{aligned} \Phi_{k}(h_{1};x,y) =&\frac{1}{(1+x)^{k}}\sum _{i=1}^{k}\frac {i}{k+1}\binom{k}{i}x^{i} \\ =&\frac{x}{(1+x)^{k}}\frac{k}{k+1}\sum_{i=0}^{k-1} \binom{k-1}{i}x^{i} \\ =&\frac{x}{(1+x)}\frac{k}{k+1}, \end{aligned}$$

and consequently

$$ \Phi_{k}(h_{1};x,y)\rightarrow\frac{x}{1+x}=h_{1}(x,y). $$

Similarly, for the test functions \(h_{2}\) and \(h_{3}\), we obtain

$$ \Phi_{k}(h_{2};x,y)=\frac{y}{(1+y)}\frac{k}{k+1} \rightarrow\frac {y}{1+y}=h_{2}(x,y) $$

and

$$\begin{aligned} \Phi_{k}(h_{3};x,y)&= \biggl(\frac{x}{1+x} \biggr)^{2}\frac {k(k-1)}{(k+1)^{2}}+\frac{x}{1+x}\frac{k}{(k+1)^{2}}+ \biggl(\frac {y}{1+y} \biggr)^{2}\frac{k(k-1)}{(k+1)^{2}}+ \frac{y}{1+y}\frac{k}{(k+1)^{2}}\\ &\rightarrow \biggl(\frac{x}{1+x} \biggr)^{2}+ \biggl( \frac{y}{1+y} \biggr)^{2}=h_{3}(x,y). \end{aligned}$$
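These closed forms can be checked numerically; the sketch below (ours) evaluates the BBH operators (13) directly and compares \(\Phi_{k}(h_{1};x,y)\) with \(\frac{x}{1+x}\frac{k}{k+1}\).

```python
# Sketch (ours): direct evaluation of the BBH operators (13); math.comb
# supplies the binomial coefficients in (13) and (14).
from math import comb

def bbh(h, k, x, y):
    c = 1.0 / ((1.0 + x) ** k * (1.0 + y) ** k)
    return c * sum(h(i / (k - i + 1), u / (k - u + 1))
                   * comb(k, i) * comb(k, u) * x ** i * y ** u
                   for i in range(k + 1) for u in range(k + 1))

h1 = lambda g, r: g / (1 + g)
k, x, y = 50, 2.0, 3.0
print(bbh(h1, k, x, y))             # direct evaluation of Phi_k(h_1; x, y)
print((x / (1 + x)) * k / (k + 1))  # closed form; the two values agree
```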

We now suppose that A is the \((C,1)\)-matrix, \(q_{n}=1\) for all n, and \(s=(s_{k})\) is the bounded divergent sequence given by \(s_{k}=(-1)^{k}\). Then \({\mathcal{S}}\mbox{-}\!\lim A_{m}^{\bar{N}}(s)=0\) but s is not convergent. Consider the sequence of operators \(\Phi ^{*}_{k}:H_{\omega^{\ast}}(D)\rightarrow C_{B}(D)\) defined by

$$ \Phi^{*}_{k}(h;x,y)=(1+s_{k}) \Phi_{k}(h;x,y). $$

We see that the sequence \((\Phi^{*}_{k})\) satisfies the conditions (5)-(8) of Theorem 3.2, and consequently we obtain

$${\mathcal{S}}\mbox{-}\!\lim \Biggl\| \frac{1}{Q_{m}}\sum _{j=1}^{m} \sum_{k=1}^{\infty}q_{j}a_{j,k} \Phi^{*}_{k}(h;x,y)-h(x,y) \Biggr\| _{C_{B}(D)}=0. $$

But, on the other hand, Theorem 3.1 does not hold for \((\Phi^{*}_{k})\), since the sequence \(s=(s_{k})\) (and so the sequence of operators \(\Phi ^{*}_{k}\)) is not convergent. Therefore, we conclude that Theorem 3.2 is stronger than Theorem 3.1.
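A final numerical sketch (ours) illustrates Example 3.3 at the test function \(h_{1}\), using the closed form \(\Phi_{k}(h_{1};x,y)=\frac{x}{1+x}\frac{k}{k+1}\) obtained above: the averaged transforms of \(\Phi^{*}_{k}\) approach \(h_{1}(x,y)\) although \(\Phi^{*}_{k}(h_{1};x,y)\) itself oscillates.

```python
# Sketch (ours): Example 3.3 with s_k = (-1)^k at the test function h_1.
# The doubly averaged transform tends to h_1(x, y) = x/(1+x) although the
# operators Phi*_k = (1 + s_k) Phi_k do not converge (Phi*_k(h_0) = 1 + s_k).
def averaged(m, x):
    phi1 = lambda k: (x / (1 + x)) * k / (k + 1)   # closed form Phi_k(h_1)
    star = lambda k: (1 + (-1.0) ** k) * phi1(k)   # Phi*_k(h_1; x, y)
    return sum(sum(star(k) for k in range(1, j + 1)) / j
               for j in range(1, m + 1)) / m

x = 2.0
for m in (10, 100, 1000):
    print(m, averaged(m, x), "target:", x / (1 + x))
```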