Abstract
The goal of this paper is to establish complete convergence and complete integral convergence for sequences of END random variables in a sub-linear expectation space. Using the Markov inequality, we extend several complete convergence and complete integral convergence theorems to END sequences of random variables under sub-linear expectations, and we provide an approach for further study of this subject.
1 Introduction
Classical probability theorems have been widely used in many fields, but they hold only under model certainty. Uncertainty, however, arises in finance, for instance in risk measures, non-linear stochastic calculus and statistics. In such settings sub-linear expectations and capacities are not additive, and the limit theorems of classical probability spaces no longer apply, so the study of limit theorems under sub-linear expectations becomes more involved. Peng Shige [1,2,3] of Shandong University constructed the basic concepts of sub-linear expectation and gave a complete axiomatic system for the theory of sub-linear expectations. This axiom system makes up for the deficiency of the limit theorems of classical probability spaces. Since the general framework of sub-linear expectation was introduced by Peng Shige, many scholars have paid close attention to it, and many excellent results have been established. For example, Zhang [4,5,6] studied the sub-linear expectation space in depth and proved several important inequalities under sub-linear expectations; Xu and Zhang [7] proved a three-series theorem for independent random variables under sub-linear expectations, with applications; Wu and Jiang [8] proved the strong law of large numbers and the law of the iterated logarithm in the sub-linear expectation space; and Chen [9] obtained the strong law of large numbers for independent and identically distributed sequences in the sub-linear expectation space, as well as a central limit theorem for weighted sums. Complete convergence and complete integral convergence are relatively well developed in the limit theory of classical probability. The notion of complete convergence was introduced by Hsu and Robbins [10], and complete moment convergence was established by Chow [11]. However, to the best of our knowledge, apart from Liu [12], Chen [13] and Wu and Guan [14], few authors have discussed these properties for END random variables.
Sung et al. [15] introduced a condition of uniform integrability for arrays; under this condition we obtain complete convergence and complete integral convergence for arrays of END random variables, results that were not considered in [15]. This notion of uniform integrability for sequences of random variables is a more general condition than uniform integrability in the Cesàro sense [16, 17].
Complete integral convergence is a stronger version of complete convergence, and both are central problems of classical probability theory, where many related results have already been obtained; see, for instance, Gut and Stadtmüller [18], Qiu and Chen [19], Wu and Jiang [20] and Feng and Wang [21]. Nevertheless, complete convergence and complete integral convergence under sub-linear expectations still need further development. In this paper we establish complete convergence and complete integral convergence for END random variables under sub-linear expectations, generalizing the results of [22] to the sub-linear expectation space.
2 Preliminaries
We use the framework and notions of Peng [1]. Let \((\varOmega , \mathcal{F})\) be a given measurable space and let \(\mathcal{H}\) be a linear space of real functions defined on \((\varOmega , \mathcal{F})\) such that if \(X_{1}, X_{2},\ldots ,X_{n} \in \mathcal{H}\) then \(\varphi (X_{1},\ldots ,X_{n})\in \mathcal{H}\) for each \(\varphi \in \mathrm{C}_{l,\operatorname{Lip}}(\mathbb{R}^{n})\), where \(\mathrm{C}_{l,\operatorname{Lip}}(\mathbb{R}^{n})\) denotes the linear space of (local Lipschitz) functions φ satisfying
$$\bigl\vert \varphi (\mathbf{x})-\varphi (\mathbf{y})\bigr\vert \leq c\bigl(1+ \vert \mathbf{x}\vert ^{m}+\vert \mathbf{y}\vert ^{m}\bigr)\vert \mathbf{x}-\mathbf{y}\vert , \quad \forall \mathbf{x},\mathbf{y}\in \mathbb{R}^{n},$$
for some \(c>0\), \(m\in \mathbb{N}\) depending on φ. \(\mathcal{H}\) is considered as a space of random variables. In this case we denote \(X \in \mathcal{H}\).
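For a concrete picture of the class \(C_{l,\operatorname{Lip}}\), the following sketch (an illustration only, not part of the paper's framework) checks numerically that \(\varphi (x)=x^{2}\) satisfies the local Lipschitz growth bound with \(c=1\), \(m=1\), since \(|x^{2}-y^{2}|=|x+y||x-y|\leq (1+|x|+|y|)|x-y|\).

```python
# Numerical sketch: verify that phi(x) = x**2 satisfies the local Lipschitz
# growth condition |phi(x) - phi(y)| <= c * (1 + |x|**m + |y|**m) * |x - y|
# on a grid of points, with c = 1 and m = 1.

def in_C_l_Lip(phi, c, m, grid):
    """Check the local Lipschitz bound for phi on all pairs of grid points."""
    return all(
        abs(phi(x) - phi(y)) <= c * (1 + abs(x) ** m + abs(y) ** m) * abs(x - y)
        for x in grid
        for y in grid
    )

grid = [i / 10.0 for i in range(-50, 51)]  # points in [-5, 5]
ok = in_C_l_Lip(lambda x: x * x, c=1.0, m=1, grid=grid)
```

The same helper can be reused to test other candidate functions against the condition.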
Definition 2.1
([1])
A sub-linear expectation \(\mathbb{\hat{E}}\) on \(\mathcal{H}\) is a function \(\mathbb{\hat{E}} : \mathcal{H} \rightarrow \bar{\mathbb{R}}\) satisfying the following properties: for all \(X,Y \in \mathcal{H}\), we have
(a) monotonicity: if \(X \geq Y\) then \(\mathbb{\hat{E}} X \geq \mathbb{\hat{E}} Y\);
(b) the constant preserving property: \(\mathbb{\hat{E}}c = c\);
(c) sub-additivity: \(\mathbb{\hat{E}}(X+Y) \leq \mathbb{\hat{E}}X + \mathbb{\hat{E}}Y\) whenever \(\mathbb{\hat{E}}X + \mathbb{\hat{E}}Y \) is not of the form \(+\infty - \infty \) or \(-\infty + \infty \);
(d) positive homogeneity: \(\mathbb{\hat{E}}(\lambda X) = \lambda \mathbb{\hat{E}}X\), \(\lambda \geq 0\).
Here \(\bar{\mathbb{R}} = [-\infty ,\infty ]\). The triple \((\varOmega , \mathcal{H}, \mathbb{\hat{E}})\) is called a sub-linear expectation space. Given a sub-linear expectation \(\mathbb{\hat{E}}\), let us denote the conjugate expectation ε̂ of \(\mathbb{\hat{E}}\) by
$$\hat{\varepsilon }X := -\mathbb{\hat{E}}(-X), \quad \forall X \in \mathcal{H}.$$
From the definition, it is easily shown that, for all \(X,Y \in \mathcal{H}\),
$$\hat{\varepsilon }X \leq \mathbb{\hat{E}}X, \qquad \mathbb{\hat{E}}(X+c) = \mathbb{\hat{E}}X + c, \qquad \bigl\vert \mathbb{\hat{E}}(X-Y)\bigr\vert \leq \mathbb{\hat{E}}\vert X-Y\vert , \qquad \mathbb{\hat{E}}(X-Y) \geq \mathbb{\hat{E}}X - \mathbb{\hat{E}}Y.$$
If \(\mathbb{\hat{E}}Y = \hat{\varepsilon }Y \), then \(\mathbb{\hat{E}}(X + aY) = \mathbb{\hat{E}}X + a\mathbb{\hat{E}}Y\) for any \(a \in \mathbb{R}\). Next, we consider the capacities corresponding to the sub-linear expectations. Let \(\mathcal{G} \subset \mathcal{F}\). A function \(V : \mathcal{G} \rightarrow [0,1]\) is called a capacity if
$$V(\emptyset )=0, \qquad V(\varOmega )=1, \qquad V(A)\leq V(B) \quad \text{for } A\subseteq B,\ A,B\in \mathcal{G}.$$
It is called sub-additive if \(V(A \cup B) \leq V(A) + V(B)\) for all \(A,B \in \mathcal{G}\) with \(A \cup B \in \mathcal{G}\). In the sub-linear space \((\varOmega , \mathcal{H}, \mathbb{\hat{E}})\), we denote a pair \((\mathbb{V},\mathcal{V})\) of capacities by
$$\mathbb{V}(A) := \mathbb{\hat{E}}[I_{A}], \qquad \mathcal{V}(A) := 1-\mathbb{V}\bigl(A^{c}\bigr), \quad \forall A\in \mathcal{F},$$
where \(A^{c}\) is the complement set of A. By the definition of \(\mathbb{V}\) and \(\mathcal{V}\), it is obvious that \(\mathbb{V}\) is sub-additive, and
$$\mathcal{V}(A) \leq \mathbb{V}(A), \quad \forall A \in \mathcal{F}.$$
This implies the Markov inequality: \(\forall X \in \mathcal{H}\),
$$\mathbb{V}\bigl(\vert X\vert \geq x\bigr) \leq \frac{\mathbb{\hat{E}}[\vert X\vert ^{p}]}{x^{p}}, \quad \forall x>0, p>0,$$
which follows from \(I(|X|\geq x) \leq |X|^{p}/x^{p} \in \mathcal{H}\).
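As a sanity check on the Markov inequality under capacities, the following sketch builds a toy sub-linear expectation as the maximum of two linear expectations on a four-point sample space (an assumption made purely for illustration, not the paper's construction) and verifies \(\mathbb{V}(|X|\geq x)\leq \mathbb{\hat{E}}[|X|^{p}]/x^{p}\) for several values of x and p.

```python
# Toy sub-linear expectation generated by two probability measures:
# E_hat[xi] = max_P E_P[xi], with upper capacity V(A) = E_hat[indicator of A].
# We verify the Markov inequality V(|X| >= x) <= E_hat[|X|**p] / x**p.

omega = [0, 1, 2, 3]
measures = [
    {0: 0.4, 1: 0.3, 2: 0.2, 3: 0.1},
    {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4},
]

def E_hat(xi):
    """Sub-linear expectation: supremum of linear expectations."""
    return max(sum(P[w] * xi(w) for w in omega) for P in measures)

def V(event):
    """Upper capacity of an event (a predicate on omega)."""
    return E_hat(lambda w: 1.0 if event(w) else 0.0)

X = {0: -1.0, 1: 0.5, 2: 2.0, 3: 3.0}

def markov_holds(x, p):
    return V(lambda w: abs(X[w]) >= x) <= E_hat(lambda w: abs(X[w]) ** p) / x ** p

checks = [markov_holds(x, p) for x in (0.5, 1.0, 2.0) for p in (1, 2)]
```

The inequality holds here because it holds for each linear expectation separately and the maximum preserves it.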
Definition 2.2
([1])
A sequence of random variables \(\{X_{n}; n \geq 1\}\) is said to be upper (resp. lower) extended negatively dependent if there is some dominating constant \(K \geq 1\) such that
$$\mathbb{\hat{E}}\Biggl[\prod_{i=1}^{n}\varphi _{i}(X_{i})\Biggr] \leq K\prod_{i=1}^{n}\mathbb{\hat{E}}\bigl[\varphi _{i}(X_{i})\bigr], \quad \forall n \geq 1,$$
whenever the non-negative functions \(\varphi _{i}(x) \in C_{l,\operatorname{Lip}}( \mathbb{R})\), \(i = 1,2,\ldots \) are all non-decreasing (resp. all non-increasing). They are called extended negatively dependent (END) if they are both upper extended negatively dependent and lower extended negatively dependent.
It is obvious that, if \(\{X_{n}; n \geq 1\}\) is a sequence of extended negatively dependent random variables and \(f_{1}(x), f_{2}(x),\ldots \in C_{l,\operatorname{Lip}}(\mathbb{R})\) are non-decreasing (resp. non-increasing) functions, then \(\{f_{n}(X_{n});n\geq 1\}\) is also a sequence of END random variables.
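The END inequality of Definition 2.2 holds in particular, with \(K=1\), for independent random variables under a single linear expectation. The following sketch checks this classical special case exactly over the four outcomes of two independent symmetric ±1 variables; the functions `phi1` and `phi2` are illustrative non-negative, non-decreasing choices.

```python
# Classical special case of the END inequality (K = 1): for independent
# variables under one linear expectation, E[phi1(X1) * phi2(X2)] equals the
# product of expectations, so E[prod phi_i(X_i)] <= K * prod E[phi_i(X_i)]
# holds with K = 1.  Computed exactly over the four equally likely
# outcomes of (X1, X2) with X1, X2 independent uniform on {-1, 1}.

import itertools

phi1 = lambda x: max(x, 0.0)        # non-negative, non-decreasing
phi2 = lambda x: max(x + 1.0, 0.0)  # non-negative, non-decreasing

outcomes = list(itertools.product([-1.0, 1.0], repeat=2))  # each has prob 1/4

lhs = sum(phi1(x1) * phi2(x2) for x1, x2 in outcomes) / 4.0
rhs = (sum(phi1(x) for x in (-1.0, 1.0)) / 2.0) * (
    sum(phi2(x) for x in (-1.0, 1.0)) / 2.0
)
```

Here `lhs` equals `rhs`, so the END bound is met with equality, as independence predicts.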
Definition 2.3
([4])
The Choquet integrals/expectations \((C_{\mathbb{V}},C_{\mathcal{V}})\) are defined by
$$C_{V}[X] := \int _{0}^{\infty }V(X \geq t)\,\mathrm{d}t + \int _{-\infty }^{0}\bigl(V(X \geq t)-1\bigr)\,\mathrm{d}t,$$
with V being replaced by \(\mathbb{V}\) and \(\mathcal{V}\), respectively.
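For a non-negative random variable on a finite sample space, \(t\mapsto V(X> t)\) is a step function, so the Choquet integral is a finite sum. The sketch below is illustrative: the capacity chosen here is an ordinary additive probability, under which the Choquet integral reduces to the usual expectation, which the test checks.

```python
# Exact Choquet integral C_V[X] = int_0^inf V(X > t) dt for a non-negative
# random variable X on a finite sample space.  Between consecutive values of
# X the map t -> V(X > t) is constant, so the integral is a finite sum.

def choquet(X, V, omega):
    """Exact Choquet integral of the non-negative map X w.r.t. capacity V."""
    levels = sorted(set(X[w] for w in omega))  # jump points of t -> V(X > t)
    total, prev = 0.0, 0.0
    for t in levels:
        total += (t - prev) * V(lambda w, s=prev: X[w] > s)
        prev = t
    return total

omega = [0, 1, 2]
X = {0: 0.0, 1: 1.0, 2: 3.0}
P = {0: 0.5, 1: 0.3, 2: 0.2}
V_P = lambda event: sum(P[w] for w in omega if event(w))  # additive capacity

cv = choquet(X, V_P, omega)  # equals E_P[X] = 0.3 * 1 + 0.2 * 3 = 0.9
```

Replacing `V_P` by a genuinely non-additive capacity (such as a maximum over several measures) gives a strictly sub-linear Choquet expectation with the same code.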
Throughout this paper, C denotes a positive constant whose value may vary from place to place.
Lemma 2.1
([4], Theorem 3.1)
Assume that \(\{X_{i};u_{n} \leq i\leq m_{n}\}\) is an array of row-wise END random variables in \((\varOmega ,\mathcal{H},\hat{\mathbb{E}})\) with \(\hat{\mathbb{E}}X_{i} \leq 0\) for \(u_{n}\leq i\leq m_{n}\). Let \(B_{n}=\sum_{i = u_{n}}^{m_{n}}\hat{\mathbb{E}}[X_{i}^{2}]\). Then, for any given n and for all \(x>0\), \(y>0\) and \(K \geq 1\),
Here K is the dominating constant in Definition 2.2.
For \(0<\mu <1\), let \(g(x)\in C_{l,\operatorname{Lip}}(\mathbb{R})\) be a non-increasing function such that \(0\leq g(x)\leq 1\) for all x and \(g(x)=1\) if \(x\leq \mu \), \(g(x)=0\) if \(x>1\). Then
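One admissible choice of g (an assumption made for illustration; the lemma only requires the stated properties) is the piecewise-linear function below. The code checks the indicator sandwich \(I(x\leq \mu )\leq g(x)\leq I(x\leq 1)\), which is the way g is used in the proofs.

```python
# A concrete non-increasing C_{l,Lip} truncation function g with
# 0 <= g <= 1, g(x) = 1 for x <= mu and g(x) = 0 for x > 1:
# linear interpolation on (mu, 1].  We check the indicator sandwich
# I(x <= mu) <= g(x) <= I(x <= 1) on a grid.

mu = 0.5

def g(x):
    if x <= mu:
        return 1.0
    if x > 1.0:
        return 0.0
    return (1.0 - x) / (1.0 - mu)  # linear on (mu, 1]

def indicator(cond):
    return 1.0 if cond else 0.0

grid = [i / 100.0 for i in range(-100, 301)]  # points in [-1, 3]
sandwich = all(indicator(x <= mu) <= g(x) <= indicator(x <= 1.0) for x in grid)
```

The sandwich is what lets expectations of \(g(|X|^{r}/k_{n})\) be squeezed between tail capacities in the proofs below.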
Lemma 2.2
Assume that \(\{X_{ni};u_{n}\leq i\leq m_{n},n\geq 1\}\) is an array of row-wise END random variables. Let \(\{h_{n};n\geq 1\}\) and \(\{k_{n};n \geq 1\}\) be increasing sequences of positive constants with \(h_{n}\rightarrow \infty \), \(k_{n}\rightarrow \infty \) as \(n\rightarrow \infty \) and \(\frac{h_{n}}{k_{n}}\rightarrow 0\). Suppose that, for some \(r>0\),
and
Then the following statements hold:
Proof
Since \(\frac{h_{n}}{k_{n}}\rightarrow 0\) as \(n\rightarrow \infty \), there exists N such that \(h_{n}\leq k_{n}\) for \(n>N\). Since \(g(x)\in C_{l,\operatorname{Lip}}(\mathbb{R})\) is non-increasing and \(h_{n}\leq k_{n}\), by (2.2) we obtain
and \(1-g(\frac{|X_{ni}|^{r}}{k_{n}})\leq I (|X_{ni}|^{r}>\mu k_{n} )\). For \(0<\alpha \leq r\), \(0<\mu <1\) and \(n>N\), combining this with (2.4), we get
Therefore, (2.5) has been proven.
Now we prove (2.6). Set \(A_{n}=k_{n}^{-\frac{\beta }{r}}\sum_{i=u_{n}}^{m_{n}}\hat{\mathbb{E}}|X_{ni}|^{\beta } g (\frac{|X_{ni}|^{r}}{k_{n}} ) \). By (2.1), we conclude that
For \(A_{n1}\), note that \(\frac{\beta }{r}>1\), \(0<\mu <1\), \(g (\mu |X_{ni}|^{r} )\leq I (\mu |X_{ni}|^{r}\leq 1 )\) and (2.3) hold; since \(k_{n}\rightarrow \infty \) as \(n\rightarrow \infty \), we obtain
For \(A_{n2}\), because of (2.2), we obtain \(I(|X_{ni}|^{r}>j) \leq 1-g (\frac{|X_{ni}|^{r}}{j} ) \). So we have
Since \(\frac{h_{n}}{k_{n}}\rightarrow 0\) as \(n\rightarrow \infty \), \(\frac{\beta }{r}>1\), and \(g(x)\in C_{l,\operatorname{Lip}}(\mathbb{R})\) is non-increasing, by (2.3) and (2.4) we get
Now (2.6) has been proven. The proof is completed. □
3 Main results
Theorem 3.1
Assume that \(\{X_{ni};u_{n}\leq i\leq m_{n},n\geq 1\}\) is an array of row-wise END random variables; \(\{h_{n};n\geq 1\}\) and \(\{k_{n};n \geq 1\}\) are two increasing sequences of positive constants with \(h_{n}\rightarrow \infty \), \(k_{n}\rightarrow \infty \) as \(n\rightarrow \infty \). For some \(1\leq r <2\), satisfying (2.3),
Then, for all \(\varepsilon >0\), we have
and
In particular, if \(\hat{\mathbb{E}}X_{ni}=\hat{\mathcal{E}}X_{ni}\), then
Theorem 3.2
Assume that \(\{X_{ni};u_{n}\leq i\leq m_{n},n\geq 1\}\) is an array of row-wise END random variables; \(\{h_{n};n\geq 1\}\) and \(\{k_{n};n \geq 1\}\) are two increasing sequences of positive constants with \(h_{n}\rightarrow \infty \), \(k_{n}\rightarrow \infty \) as \(n\rightarrow \infty \). For some \(1\leq r <2\), satisfying (2.3), (3.1), (3.2),
Then, for all \(\varepsilon >0 \) and \(\hat{\mathbb{E}}X_{ni}= \hat{\mathcal{E}}X_{ni}\), we have
Proof of Theorem 3.1
For the array of row-wise END random variables \(\{X_{ni};u_{n}\leq i\leq m_{n},n\geq 1\}\), to ensure that the truncated random variables are also END, we require the truncation functions to belong to \(C_{l,\operatorname{Lip}}\). For all \(u_{n}\leq i\leq m_{n}\), \(n \geq 1\), \(\lambda \geq 0\) and \(\varepsilon >0\), we define
Through this, it is easy to see that \(Y_{ni}\leq \vert Y_{ni} \vert \leq \lambda k_{n}^{\frac{1}{r}}\), \(|Y_{ni}|\leq |X_{ni}|\) and
Now we prove (3.4). For all \(\varepsilon >0\), it suffices to verify that
By the Markov inequality, (3.1) and (3.3), we may draw the conclusion that
Next we consider \(I_{2}\). Let \(B_{n}=\sum_{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} (Y_{ni}-\hat{\mathbb{E}}Y_{ni} )^{2}\), \(x= \varepsilon k_{n}^{\frac{1}{r}}\), \(y=2\lambda k_{n}^{\frac{1}{r}}\). Assume \(\lambda =\frac{\varepsilon }{2}\), so \(y=2\lambda k_{n}^{\frac{1}{r}}= \varepsilon k_{n}^{\frac{1}{r}}\) in Lemma 2.1. For all \(u_{n}\leq i\leq m_{n}\), \(n\geq 1\), \(\varepsilon >0\), we have
and
Because of \(Y_{ni}-\hat{\mathbb{E}}Y_{ni}\leq \lambda k_{n}^{ \frac{1}{r}}+\hat{\mathbb{E}}|Y_{ni}|\leq \varepsilon k_{n}^{ \frac{1}{r}} \), we have \(\mathbb{V} (\max_{u_{n}\leq i\leq m_{n}} (Y_{ni}-\hat{\mathbb{E}}Y_{ni} )> \varepsilon k_{n}^{\frac{1}{r}} )=0 \). So we can get
From the proof of \(I_{1}\), it is easy to prove that
Noting that \(g (\frac{\mu |X_{ni}|}{\lambda k_{n}^{\frac{1}{r}}} ) \leq 1\) and \(g (\frac{\mu ^{r}|X_{ni}|^{r}}{\lambda ^{r}k_{n}} )-g (\frac{|X_{ni}|^{r}}{h_{n}} )\leq I \{\mu h_{n}< |X _{ni}|^{r}\leq \frac{\lambda ^{r}k_{n}}{\mu ^{r}} \}\), combining (2.1), (2.3), (3.2) and (3.3) we get
So we have \(I_{2}<\infty \).
Finally, we prove \(I_{3}\rightarrow 0 \). It suffices to show that \(\lim_{n\rightarrow \infty } k_{n}^{-\frac{1}{r}}\sum_{i=u_{n}}^{m_{n}} \vert \hat{\mathbb{E}}Y_{ni}- \hat{\mathbb{E}}X_{ni} \vert =0\). For all sufficiently large n we have \(h_{n} \leq \lambda ^{r}k_{n} \), thus \(1-g (\frac{|X_{ni}|}{\lambda k_{n}^{\frac{1}{r}}} )\leq 1-g (\frac{|X_{ni}|^{r}}{\lambda ^{r} k_{n}} )\leq 1-g (\frac{|X_{ni}|^{r}}{h_{n}} ) \). Combining (2.4) with \(\vert \hat{\mathbb{E}}Y_{ni}- \hat{\mathbb{E}}X_{ni} \vert \leq \hat{\mathbb{E}} \vert Y_{ni}-X_{ni} \vert \),
So we obtain \(I_{3}\rightarrow 0 \). Thus (3.4) is established.
Now we prove (3.5). Since \(\{-X_{ni}; u_{n}\leq i \leq m_{n}, n\geq 1 \} \) is also an array of row-wise END random variables, we may apply (3.4) with \(\{X_{ni}; u_{n}\leq i \leq m_{n}, n\geq 1 \} \) replaced by \(\{-X_{ni}; u_{n}\leq i \leq m_{n}, n\geq 1 \} \); together with \(\hat{\mathbb{E}}X_{ni}=-\hat{\mathcal{E}}[-X_{ni}] \), this yields (3.5). Finally, we prove (3.6). Since \(\hat{\mathbb{E}}X_{ni}=\hat{\mathcal{E}}X_{ni} \), we obtain
The proof is completed. □
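The role of the exponential inequality of Lemma 2.1 in the proof above is to make the tail capacities summable in n, which is exactly what complete convergence requires. As a purely classical illustration (bounded i.i.d. summands and Hoeffding's inequality, not the paper's sub-linear setting), the following sketch sums such a geometric tail bound and compares the partial sum with its closed form.

```python
# Illustration of the summability behind complete convergence: for bounded
# i.i.d. summands taking values in [0, 1], Hoeffding's inequality gives
# P(|S_n / n - E X| > eps) <= 2 * exp(-2 * n * eps**2), a geometric series
# in n, so the series of tail probabilities is finite.

import math

def tail_bound(n, eps):
    return 2.0 * math.exp(-2.0 * n * eps ** 2)

eps = 0.5
partial = sum(tail_bound(n, eps) for n in range(1, 10001))

# Geometric series: sum_{n>=1} 2 * q**n = 2 * q / (1 - q), q = exp(-2 eps**2).
q = math.exp(-2.0 * eps ** 2)
closed_form = 2.0 * q / (1.0 - q)
```

The finite value of `closed_form` is the analogue, in this toy setting, of the finiteness asserted in (3.4) and (3.5).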
Proof of Theorem 3.2
We know \(\hat{\mathbb{E}}|X_{ni}|^{r} (1-g (\frac{|X_{ni}|^{r}}{h_{n}} ) )\leq C_{ \mathbb{V}}|X_{ni}|^{r} (1-g (\frac{|X_{ni}|^{r}}{h_{n}} ) ) \), hence (3.8) implies (3.3). We have
To prove (3.9), it suffices to show that \(I_{4}<\infty \) and \(I_{5}<\infty \). By Theorem 3.1, we obtain \(I_{4}<\infty \). For all \(u_{n}\leq i \leq m_{n}\), \(n\geq 1\), \(t\geq k_{n}\), \(\delta >0 \), we define
Through this, we can get
Next we need to prove \(I_{5}<\infty \). Let \(I_{6}= \sum_{n=1} ^{\infty }k_{n}^{-1}\int _{k_{n}}^{\infty }\mathbb{V} (\sum_{i=u_{n}}^{m_{n}} (X_{ni}-\hat{\mathbb{E}}X_{ni} )>t ^{\frac{1}{r}} )\,\mathrm{d}t \), noting that
Because of (2.7) and (3.8), we get
So we have \(I_{61}<\infty \).
Next we consider \(I_{62}\). Let \(B_{n}=\sum_{i=u_{n}}^{m_{n}} \hat{\mathbb{E}} (Y_{ni}-\hat{\mathbb{E}}Y_{ni} )^{2}\), \(x=\frac{t^{\frac{1}{r}}}{3}\), \(y=\frac{t^{\frac{1}{r}}}{6} \) in Lemma 2.1. For all \(u_{n}\leq i \leq m_{n}\), \(n\geq 1\) and \(t\geq k_{n}\), suppose \(\delta =\frac{1}{12} \); then \(Y_{ni}-\hat{\mathbb{E}}Y_{ni}\leq 2\delta t^{\frac{1}{r}} \leq \frac{t^{\frac{1}{r}}}{6} \) and \(\hat{\mathbb{E}} (Y_{ni}- \hat{\mathbb{E}}Y_{ni} )^{2}\leq 4\hat{\mathbb{E}}Y_{ni}^{2} \). By Lemma 2.1, we get
By an argument similar to that for \(I_{21}\), using (2.3), (3.2) and (3.3), we get
Because of (2.7) and (3.3), it is obvious that
Similar to the proof of \(I_{22}\), then
That is to say \(I_{62}<\infty \).
For \(I_{63}\), we argue similarly to the proof of (3.10). Since \(t> k_{n}\), we have \(t^{-\frac{1}{r}}< k_{n}^{-\frac{1}{r}} \); for all sufficiently large n we have \(h_{n}\leq \delta ^{r}k_{n}< \delta ^{r}t \), thus \(1-g (\frac{|X_{ni}|}{\delta t^{\frac{1}{r}}} )\leq 1-g (\frac{|X_{ni}|}{\delta k_{n}^{\frac{1}{r}}} )\leq 1-g (\frac{|X_{ni}|^{r}}{\delta ^{r}k_{n}} )\leq 1-g (\frac{|X_{ni}|^{r}}{h_{n}} ) \). Then we can get
Note that \(I_{63}=\sum_{n=1}^{\infty }k_{n}^{-1} \int _{k_{n}}^{\infty }\mathbb{V} (t^{-\frac{1}{r}} \sum_{i=u_{n}}^{m_{n}} (\hat{\mathbb{E}}Y_{ni}- \hat{\mathbb{E}}X_{ni} ) >\frac{1}{3} )\,\mathrm{d}t \). Since \(\sup_{t\geq k_{n}}t^{-\frac{1}{r}} \vert \sum_{i=u_{n}}^{m_{n}} (\hat{\mathbb{E}}Y_{ni}- \hat{\mathbb{E}} X_{ni} ) \vert \rightarrow 0 \) as \(n\rightarrow \infty \), for all sufficiently large n we have \(\mathbb{V} (t^{-\frac{1}{r}}\sum_{i=u_{n}}^{m_{n}} (\hat{\mathbb{E}}Y_{ni} -\hat{\mathbb{E}}X_{ni} ) >\frac{1}{3} )= 0 \), and hence \(I_{63}<\infty \). Combining \(I_{61}<\infty \), \(I_{62}<\infty \) and \(I_{63}<\infty \), we have \(I_{6}<\infty \). Replacing \(\{X_{ni}; u_{n}\leq i \leq m_{n}, n\geq 1 \} \) by \(\{-X_{ni}; u_{n}\leq i \leq m_{n}, n\geq 1 \} \) in \(I_{6}\) and using \(\hat{\mathbb{E}}X_{ni}=\hat{\mathcal{E}}X_{ni} \), we get
By \(I_{6}<\infty \), it is obvious that
So we obtain \(I_{5}<\infty \); in other words, (3.9) is proved and the proof is completed. □
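The integrals over t in the treatment of \(I_{5}\) and \(I_{6}\) are tail integrals of the type \(\int _{0}^{\infty }V(|Z|> t^{1/r})\,\mathrm{d}t\), which in the classical additive case equals \(E|Z|^{r}\) (substitute \(t=s^{r}\)). The sketch below (illustrative only, under a single probability measure rather than a capacity) checks this identity numerically for a discrete Z.

```python
# Classical tail-integral identity behind the integrals in I_5 and I_6:
# E|Z|**r = int_0^inf P(|Z| > t**(1/r)) dt  (substitute t = s**r).
# Checked for a discrete Z via a fine midpoint Riemann sum.

values = [0.5, 1.0, 2.0]
probs = [0.2, 0.5, 0.3]
r = 2.0

moment = sum(p * abs(z) ** r for z, p in zip(values, probs))  # E|Z|**r

def tail(u):
    """P(|Z| > u) for the discrete distribution above."""
    return sum(p for z, p in zip(values, probs) if abs(z) > u)

# The integrand vanishes for t > max|Z|**r = 4, so T = 4 captures
# the whole integral; the midpoint rule is exact up to the jumps.
T, steps = 4.0, 40000
h = T / steps
integral = sum(tail(((i + 0.5) * h) ** (1.0 / r)) for i in range(steps)) * h
```

Under a sub-additive capacity the same substitution yields the Choquet-integral bounds used in the proof.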
References
Peng, S.: Multi-dimensional g-Brownian motion and related stochastic calculus under g-expectation. Stoch. Process. Appl. 118(12), 2223–2253 (2008)
Peng, S.: A new central limit theorem under sublinear expectations. J. Math. 53(8), 1989–1994 (2008)
Peng, S.: G-Brownian motion and related stochastic calculus of Itô type. Stoch. Anal. Appl. 34(2), 139–161 (2007)
Zhang, L.X.: Exponential inequalities under the sub-linear expectations with applications to laws of the iterated logarithm. Sci. China Math. 59(12), 2503–2526 (2016)
Zhang, L.X.: Rosenthal's inequalities for independent and negatively dependent random variables under sub-linear expectations with applications. Sci. China Math. 59(4), 751–768 (2016)
Zhang, L.X., Lin, J.H.: Marcinkiewicz’s strong law of large numbers for nonlinear expectations. Stat. Probab. Lett. 137, 269–276 (2018)
Xu, J.P., Zhang, L.X.: Three series theorem for independent random variables under sub-linear expectations with applications. Acta Math. Sin. Engl. Ser. 35, 172–184 (2018)
Wu, Q.Y., Jiang, Y.Y.: Strong law of large numbers and Chover’s law of the iterated logarithm under sub-linear expectations. J. Math. Anal. Appl. 460(1), 252–270 (2018)
Chen, Z.: Strong laws of large numbers for sub-linear expectations. Sci. China Math. 59(5), 945–954 (2016)
Hsu, P.L., Robbins, H.: Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. USA 33(2), 25–31 (1947)
Chow, Y.S.: On the rate of moment complete convergence of sample sums and extremes. Bull. Inst. Math. Acad. Sin. 16, 177–201 (1988)
Liu, L.: Precise large deviations for dependent random variables with heavy tails. Stat. Probab. Lett. 79, 1209–1298 (2009)
Chen, Y.Q., Chen, A.Y., Ng, K.W.: The strong law of large numbers for extended negatively dependent random variables. J. Appl. Probab. 47, 908–922 (2010)
Wu, Y.F., Guan, M.: Convergence properties of the partial sums for sequences of END random variables. J. Korean Math. Soc. 49, 1097–1110 (2012)
Sung, H.S., Lisawadi, S., Volodin, A.: Weak laws of large numbers for arrays under a condition of uniform integrability. J. Korean Math. Soc. 45, 289–300 (2008)
Chandra, T.K.: Uniform integrability in the Cesàro sense and the weak law of large numbers. Sankhya, Ser. A 51, 309–317 (1989)
Chandra, T.K., Goswami, A.: Cesàro α-integrability and laws of large numbers. J. Theor. Probab. 16, 655–669 (2003)
Gut, A., Stadtmüller, U.: An intermediate Baum–Katz theorem. Stat. Probab. Lett. 81, 1486–1492 (2011)
Qiu, D.H., Chen, P.Y.: Complete moment convergence for i.i.d. random variables. Stat. Probab. Lett. 91, 76–82 (2014)
Wu, Q.Y., Jiang, Y.Y.: Complete convergence and complete moment convergence for negatively associated sequences of random variables. J. Inequal. Appl. 2016, 157 (2016)
Feng, F.X.: Complete convergence for weighted sums of negatively dependent random variables under the sub-linear expectations. Commun. Stat., Theory Methods 1(1), 1–16 (2018)
Wu, Y.F., Peng, J.Y., Hu, T.C.: Limiting behaviour for arrays of row-wise END random variables under conditions of h-integrability. Stoch. Int. J. Probab. Stoch. Process. 87(3), 409–423 (2015)
Acknowledgements
The authors are grateful to the editors and anonymous referees for their helpful comments and suggestions that improved the clarity and readability of this article.
Availability of data and materials
Not applicable.
Authors’ information
Qunying Wu, professor, doctor, working in the field of probability and statistics.
Funding
This paper was supported by the National Natural Science Foundation of China (11661029), the Support Program of the Guangxi China Science Foundation (2018GXNSFAA281011) and Support Program of the Guangxi China Science Foundation (2018GXNSFAA294131).
Contributions
All authors contributed equally to the writing of this paper. All the authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Liang, Z., Wu, Q. Theorems of complete convergence and complete integral convergence for END random variables under sub-linear expectations. J Inequal Appl 2019, 114 (2019). https://doi.org/10.1186/s13660-019-2064-0