## 1 Introduction

The notion of a fuzzy set was introduced by L.A. Zadeh in 1965. It has since become a powerful tool for studying different branches of science (in particular mathematics) and has been applied to introduce fuzzy analogues of many notions existing for crisp sets. Some of these applications are due to Atanassov (1999, 2012), Couso et al. (2014), Dubois et al. (2005), Grattan-Guiness (1975), Tripathy and Ray (2012) and many others. Problems concerning convergence of series and sequences of fuzzy numbers have been considered, inter alia, by Tripathy and Das (2012), Tripathy and Sarma (2012), and Tripathy et al. (2012).

The classical Kolmogorov theory of probability is a commonly used mathematical model of randomness. It has been described in nearly all textbooks on probability and is well known to both scientists and practitioners dealing with random phenomena. However, there exist other approaches to modelling such phenomena, proposed by mathematicians, physicists and philosophers (see, e.g. the book of Fine 1973). One of them, the Boolean algebraic probability theory, was proposed for the case of quantum systems. This approach is based on the fundamental works of great mathematicians: C. Carathéodory, G. Birkhoff, and J. von Neumann. In their works (see Birkhoff and von Neumann 1936 and Carathéodory 1956), they considered states and observables of a quantum system as counterparts of probability and random variables in the Kolmogorov theory of probability. The concepts of Carathéodory, Birkhoff, and von Neumann were further developed by Gudder (1979), Pták and Pulmannová (1989), Riečan and Neubrunn (1997), and Varadarajan (1968), where quantum logics were considered as orthomodular posets.

In many practical applications, randomness is not the only source of uncertainty. The second such source is imprecision, nowadays usually modelled by Zadeh’s fuzzy sets. When uncertain phenomena of interest are both random and imprecise, the concept of a fuzzy random variable can be applied. There exist many definitions of the fuzzy random variable, with different interpretations (see, e.g. Kruse and Meyer 1987; Kwakernaak 1978; Liu and Liu 2003; Puri and Ralescu 1986; Couso et al. 2014). According to the first, introduced by Kwakernaak (1978), the fuzzy random variable can be interpreted as a fuzzy perception of an original crisp random variable. Under this interpretation, the fuzzy random variable is a (disjunctive) fuzzy set of classical random variables and is described by a fuzzy set of classical probability distributions. This interpretation is nowadays called epistemic and allows the classical concepts of probability and mathematical statistics to be generalized in a relatively easy way. Another definition was proposed by Puri and Ralescu (1986). According to their definition, $$\sigma$$-algebras of fuzzy sets are used to describe the fuzzy random variable. Thus, it is a classical random variable with values belonging to a set of functions. This interpretation of the fuzzy random variable, understood in the sense of Puri and Ralescu, is nowadays called ontic, and its theoretical foundations can be regarded as generalizations of the theory of random sets. For more information about the different definitions and interpretations of fuzzy random variables, the reader is referred to an excellent textbook by Couso, Dubois and Sánchez (2014) or the paper by Dubois and Prade (2012).

Fuzzy sets, according to Zadeh himself, were introduced in order to provide a precise mathematical description (i.e. to ‘precisiate’) of imprecise notions expressed, e.g. in plain human language. However, from a practical point of view, non-random imprecision may have different, sometimes subtle, interpretations. Therefore, many different generalizations of fuzzy sets have been proposed. One such generalization, which nowadays has become quite popular, is the theory of IF-sets introduced by Atanassov (see Atanassov 1999, 2012 and references therein) in the 1980s. Another one is the theory of interval-valued fuzzy sets (IVF-sets), introduced independently (in the same year!) by four authors: Grattan-Guiness (1975), Jahn (1975), Sambuc (1975), and Zadeh (1975). In this case, the generalization consists in considering the values of the membership function as uncertain intervals (see, e.g. Zadeh 1975 and Türkşen 1986). The mutual relationship between these two models has been studied by many authors, who have shown their formal equivalence (see Deschrijver and Kerre 2003 and Dubois et al. 2005). It is worth noticing that IVF-sets are also equivalent to grey sets, introduced by Deng (1989) and very popular in East Asia (see Deschrijver and Kerre 2003).

The majority of results about fuzzy generalizations of probability have been related to the classical Kolmogorov definition of probability. However, fuzzy models of quantum mechanics have also been studied recently. One should mention the theories of F-quantum spaces and fuzzy quantum logics (for details see Riečan and Neubrunn 1997 and references therein). The fuzzy quantum logic of all measurable functions with values in the interval [0, 1] is an example of an MV-algebra, a structure introduced by Chang (1958). Fundamentals and the most important theorems of the MV-algebraic probability theory, including the central limit theorem, can be found in Nowak and Hryniewicz (2015), Riečan (1999, 2000), Riečan and Mundici (2002), Riečan and Neubrunn (1997). In the area of fuzzy sets, the MV-algebraic probability theory was also applied in the setting of Atanassov’s IF-sets. This application allowed Riečan (2007b) to develop the probability theory for IF-events. Riečan (2007b) proved the central limit theorem (CLT) for independent, identically distributed IF-observables and M-observables. Other results of probability theories concerning IF-events can be found, e.g. in Ciungu and Riečan (2010), Lendelová (2006), Lendelová and Petrovičová (2006), Renčová (2010), Riečan (2004, 2006a, b, 2007a). Generalized versions of central limit theorems within the IF-probability theory and M-probability theory for non-identically distributed observables were proved in the recent paper by Nowak and Hryniewicz (2016).

In contrast to the probability theory of IF-sets, its counterpart for IVF-sets is much less developed. Some interesting results concerning the probability theory of IVF-sets have been published by Riečan and Král’ (2010), Kuková (2011), and Samuelčík and Hollá (2013). The aim of this paper is to fill the gap between the well-developed theory of probability for IF-sets and the theory of probability for IVF-sets. It is devoted to central limit theorems within the IV-probability theory, i.e. the probability theory for IV-events, which involves the Łukasiewicz connectives between IVF-sets. Analysing the limit behaviour of the row sums of triangular arrays of independent IV-observables, we prove the Lindeberg CLT and the Lyapunov CLT as well as the Feller theorem, which is a converse of the Lindeberg CLT. We use a proving technique based on the MV-algebraic probability theory from Nowak and Hryniewicz (2015). We additionally present three examples of applications of our theorems for sequences and arrays of IV-observables with convergent scaled sums or row sums. The first example is general and concerns a sequence of independent IV-observables with the same distributions. We prove an appropriate theorem for such a sequence and apply it to the case corresponding to the classical de Moivre–Laplace theorem. In the second and third examples, we apply the Lindeberg CLT and the Lyapunov CLT, respectively. In these last two examples, we use a special form of the IV-probability, based on a modified notion of the probability of IF-events, considered by Szmidt and Kacprzyk (1999a, b) and Grzegorzewski and Mrówka (2002) and generalized by Nowak (2003, 2004a, b).

Despite the formal similarity between IF-sets and IVF-sets, the semantics used for their interpretation is different. Therefore, the results obtained in this paper may be useful for those practitioners for whom the semantics of IVF-sets is easier to understand than the semantics of IF-sets. These results may be used in the development of statistical methods for IVF-sets which, as for now, are practically non-existent. In the development of such methods, one can use approaches already developed for random fuzzy sets and described in an overview paper by Gil and Hryniewicz (2009) or in a recent paper by Blanco-Fernández et al. (2014).

The paper is organized as follows. In Sect. 2, we introduce some elements of the theories of MV-algebras and MV-probability. Section 3 contains our main results, including the Lindeberg CLT, Lyapunov’s CLT, and the Feller theorem for IV-observables. In Sect. 4, we analyse examples of applications of the limit theorems proved in Sect. 3. The last section is dedicated to conclusions.

## 2 Basic notions and facts concerning the MV-algebraic probability theory

Let $$n\in \mathbb {N}$$, where $$\mathbb {N}$$ is the set of all positive integers. We denote by $$\mathbb {N}[n]$$ and $$\mathcal {B}(\mathbb {R}^{n})$$ the set $$\{1,2,\ldots ,n\}$$ and the $$\sigma$$-algebra of Borel subsets of $$\mathbb {R} ^{n}$$, respectively.

We will use the following theorem (see Billingsley 1986, Theorem 16.12) concerning the change of variable for integrals.

Let $$(X,\mathcal {X})$$ and $$(X^{\prime },\mathcal {X} ^{\prime })$$ be measurable spaces. Let $$T:X\rightarrow X^{\prime }$$ be an $$\mathcal {X}/\mathcal {X}^{\prime }$$ measurable function, i.e. $$T^{-1}\left( A^{\prime }\right) \in \mathcal {X}$$ for each $$A^{\prime }\in \mathcal {X} ^{\prime }$$. For a measure $$\mu$$ on $$\mathcal {X}$$ we define a measure $$\mu T^{-1}$$ on $$\mathcal {X}^{\prime }$$ given by

\begin{aligned} \mu T^{-1}(A^{\prime }) =\mu (T^{-1}( A^{\prime }))\text {,}\quad A^{\prime }\in \mathcal {X}^{\prime }. \end{aligned}

### Theorem 1

Let $$f:X^{\prime }\rightarrow \mathbb {R}$$ be an $$\mathcal {X}^{\prime }$$-measurable function. If f is non-negative, then

\begin{aligned} \int _{X}f(Tx)\mu (\mathrm{d}x) =\int _{X^{\prime }}f(x^{\prime })\mu T^{-1}(\mathrm{d}x^{\prime }). \end{aligned}
(1)

A function f (not necessarily non-negative) is integrable with respect to $$\mu T^{-1}$$ if and only if fT is integrable with respect to $$\mu$$, in which case (1) and

\begin{aligned} \int _{T^{-1}(A^{\prime })}f(Tx)\mu (\mathrm{d}x)=\int _{A^{\prime }}f(x^{\prime })\mu T^{-1}(\mathrm{d}x^{\prime }), \end{aligned}
(2)

where $$A^{\prime }\in \mathcal {X}^{\prime }$$, hold. Moreover, for any non-negative f, the identity (2) always holds.
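For a finite discrete measure, both identities reduce to reindexed sums and can be checked directly. The following Python sketch (the measure, map and integrand are illustrative choices, not taken from the text) verifies (1) and (2):

```python
# Change of variable for a finite discrete measure (illustrative data).
# X = {0, 1, 2, 3} with weights mu, T maps X into X' = {0, 1},
# and f is a function on X'.

mu = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}      # measure on X
T = {0: 0, 1: 0, 2: 1, 3: 1}               # measurable map T: X -> X'
f = {0: 5.0, 1: 7.0}                       # integrand on X'

# Image measure (mu T^{-1})(A') = mu(T^{-1}(A')), here per point of X'.
image = {}
for x, w in mu.items():
    image[T[x]] = image.get(T[x], 0.0) + w

lhs = sum(f[T[x]] * w for x, w in mu.items())      # integral of f(Tx) d(mu)
rhs = sum(f[xp] * w for xp, w in image.items())    # integral of f d(mu T^{-1})
assert abs(lhs - rhs) < 1e-12                      # identity (1)

# Identity (2) restricted to A' = {1}:
Ap = {1}
lhs2 = sum(f[T[x]] * w for x, w in mu.items() if T[x] in Ap)
rhs2 = sum(f[xp] * w for xp, w in image.items() if xp in Ap)
assert abs(lhs2 - rhs2) < 1e-12
```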

MV-algebras generalize Boolean algebras: the operation $$\oplus$$ remains associative and commutative, but need not be idempotent. The fundamentals of the theory of MV-algebras were discussed, e.g. by Cignoli et al. (2000) and Mundici (1986). We present only selected elements of the theory of MV-algebras and the MV-algebraic probability theory from Riečan and Mundici (2002) and Nowak and Hryniewicz (2015) with minor modifications.

### Definition 1

An algebra $$(M,0,1,\lnot ,\oplus ,\odot )$$, where M is a non-empty set, the operation $$\oplus$$ is associative and commutative with 0 as the neutral element,

\begin{aligned} \lnot 0=1,\lnot 1=0, \end{aligned}

and for arbitrary $$x,y\in M$$

\begin{aligned} x\oplus 1&=1,\\ y\oplus \lnot (y\oplus \lnot x)&=x\oplus \lnot (x\oplus \lnot y),\\ x\odot y&=\lnot (\lnot x\oplus \lnot y), \end{aligned}

is called an MV-algebra.

In an MV-algebra $$(M,0,1,\lnot ,\oplus ,\odot )$$ the relation $$\le$$ given by the condition

\begin{aligned} x\le y\Leftrightarrow x\odot \lnot y=0\text {,}\quad x,y\in M, \end{aligned}

defines a partial order.

The distributive lattice $$(M,\vee ,\wedge )$$ with least element 0 and greatest element 1, where

\begin{aligned} x\vee y=\lnot (\lnot x\oplus y)\oplus y \end{aligned}

and

\begin{aligned} x\wedge y=\lnot (\lnot x\vee \lnot y), \end{aligned}

for $$x,y\in M$$, is called the underlying lattice of M.
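The prototypical example is the Łukasiewicz structure on [0, 1] with $$\lnot x=1-x$$, $$x\oplus y=\min (x+y,1)$$ and $$x\odot y=\max (x+y-1,0)$$, the same connectives used later for IV-events. A short Python sketch (an illustration, not part of the original development) checks the axioms of Definition 1 and that the derived lattice operations reduce to max and min:

```python
# Łukasiewicz operations on [0, 1]: the standard example of an MV-algebra.
def neg(x):        return 1.0 - x
def oplus(x, y):   return min(x + y, 1.0)
def odot(x, y):    return max(x + y - 1.0, 0.0)
def vee(x, y):     return oplus(neg(oplus(neg(x), y)), y)   # x v y
def wedge(x, y):   return neg(vee(neg(x), neg(y)))          # x ^ y

grid = [i / 10 for i in range(11)]
for x in grid:
    assert oplus(x, 1.0) == 1.0                  # axiom: x + 1 = 1
    for y in grid:
        # Łukasiewicz axiom: y + ~(y + ~x) = x + ~(x + ~y)
        lhs = oplus(y, neg(oplus(y, neg(x))))
        rhs = oplus(x, neg(oplus(x, neg(y))))
        assert abs(lhs - rhs) < 1e-12
        # odot is the De Morgan dual of oplus, and v / ^ recover max / min
        assert abs(odot(x, y) - neg(oplus(neg(x), neg(y)))) < 1e-12
        assert abs(vee(x, y) - max(x, y)) < 1e-12
        assert abs(wedge(x, y) - min(x, y)) < 1e-12
```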

### Definition 2

We call an MV-algebra M $$\sigma$$-complete (complete) if every sequence (non-empty family, respectively) of elements of M has a supremum in M.

We will use the following notations.

Let $$\{ A_{n}\} _{n=1}^{\infty }$$ be a sequence of subsets of a set X. Then

\begin{aligned} A_{n}\nearrow A\text {\,iff }A_{1}\subseteq A_{2}\subseteq \ldots \quad \text {and}\quad {\textstyle \bigcup _{n=1}^{\infty }} A_{n}=A. \end{aligned}

For a sequence $$\{x_{n}\} _{n=1}^{\infty }$$ of real numbers,

\begin{aligned} x_{n}\nearrow x\text {\,iff}\ x_{1}\le x_{2}\le \ldots \quad \text {and}\quad x=\sup _{i}x_{i}. \end{aligned}

Additionally, for a sequence $$\{b_{n}\}_{n=1}^{\infty }$$ of elements of an MV-algebra M

\begin{aligned} b_{n}\nearrow b\text {\,iff }b_{1}\le b_{2}\le \ldots \quad \text {and}\quad b=\sup _{i}b_{i} \end{aligned}

with respect to the underlying order of M.

Within the MV-algebraic probability theory, the notions of state and observable were introduced by abstracting the properties of the probability measure and the classical random variable.

### Definition 3

Let M be a $$\sigma$$-complete MV-algebra. A state on M is a function $$m:M\rightarrow [0,1]$$ fulfilling the following conditions for arbitrary $$a,b,c\in M$$ and $$\{a_{n}\} _{n=1}^{\infty }\subset M$$:

(1):

$$m(1)=1;$$

(2):

if $$b\odot c=0$$,   then $$m(b\oplus c)=m(b)+m(c);$$

(3):

if $$a_{n}\nearrow a$$,   then $$m(a_{n})\nearrow m(a).$$

We call a state m faithful if $$m(x)\ne 0$$ for each nonzero element x of M.

Apart from the notion of state defined above, its additive counterpart, for which $$\sigma$$-additivity is not assumed, is also considered in the literature (see Nowak and Hryniewicz 2015 for more details).
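On the MV-algebra of all [0, 1]-valued functions on a finite $$\varOmega$$ (with the Łukasiewicz operations), integration against a probability vector yields a state in the sense of Definition 3. A minimal Python sketch, where the weights and functions are hypothetical choices:

```python
# A state on the MV-algebra M = [0,1]^Omega for a finite Omega:
# m(f) = sum_w f(w) * P(w), i.e. integration against a probability vector.
P = [0.2, 0.3, 0.5]

def m(f):
    return sum(fi * pi for fi, pi in zip(f, P))

def odot(f, g):  return [max(a + b - 1.0, 0.0) for a, b in zip(f, g)]
def oplus(f, g): return [min(a + b, 1.0) for a, b in zip(f, g)]

one = [1.0, 1.0, 1.0]
assert abs(m(one) - 1.0) < 1e-12                      # condition (1)

f, g = [0.3, 0.4, 0.6], [0.5, 0.2, 0.4]               # here f odot g = 0
assert all(v == 0.0 for v in odot(f, g))
assert abs(m(oplus(f, g)) - (m(f) + m(g))) < 1e-12    # condition (2)
```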

### Definition 4

A pair (M, m) consisting of a $$\sigma$$-complete MV-algebra M and a faithful state m on M is called a probability MV-algebra.

It was proved that every probability MV-algebra is complete (see Mundici 2011, Theorem 13.8).

### Definition 5

Let M be a $$\sigma$$-complete MV-algebra. An n-dimensional observable of M is a function $$x:\mathcal {B}(\mathbb {R} ^{n})\rightarrow M$$ fulfilling the following conditions:

(1):

$$x(\mathbb {R} ^{n})=1;$$

(2):

$$x(A)\odot x(B)=0$$ and $$x(A\cup B)=x(A)\oplus x(B)$$ for arbitrary $$A,B\in \mathcal {B}(\mathbb {R} ^{n})$$ such that $$A\cap B=\varnothing ;$$

(3):

for arbitrary $$A,A_{1},A_{2},\ldots \in \mathcal {B}(\mathbb {R} ^{n})$$

if $$A_{n}\nearrow A$$, then $$x(A_{n})\nearrow x(A).$$

### Theorem 2

Let M be a $$\sigma$$-complete MV-algebra, $$x:\mathcal {B}(\mathbb {R}^{n})\rightarrow M$$ be an n-dimensional observable, and m be a state on M. Then the function $$m_{x}:\mathcal {B}(\mathbb {R}^{n})\rightarrow [0,1]$$ given by

\begin{aligned} m_{x}(A)=(m\circ x)(A)=m(x(A))\text {,}\quad A\in \mathcal {B}(\mathbb {R} ^{n})\text {,} \end{aligned}

is a probability measure on $$\mathcal {B}(\mathbb {R} ^{n})$$.

For the proof of Theorem 2 we refer the reader to Nowak and Hryniewicz (2015).

### Definition 6

Let (M, m) be a probability MV-algebra. An observable $$x:\mathcal {B}(\mathbb {R})\rightarrow M$$ of M is said to be integrable in (M, m) if the expectation $$\mathbb {E}(x)=\int _{\mathbb {R} }tm_{x}(\mathrm{d}t)$$ exists. We say that x is square-integrable in (M, m) if $$\int _{\mathbb {R} }t^{2}m_{x}(\mathrm{d}t)$$ exists. If x is square-integrable in (M, m), then its variance exists and is given by

\begin{aligned} \mathbb {D}^{2}(x)&=\int _{ \mathbb {R}}t^{2}m_{x}(\mathrm{d}t)-(\mathbb {E}(x)) ^{2}\\&=\int _{ \mathbb {R} }(t-\mathbb {E}(x))^{2}m_{x}(\mathrm{d}t). \end{aligned}

We denote by $$L_{m}^{1}$$ ($$L_{m}^{2}$$) the space of observables $$x:\mathcal {B} (\mathbb {R})\rightarrow M$$ integrable (square-integrable, respectively) in (M, m). More generally, we write $$x\in L_{m}^{p}$$ for $$p\ge 1$$ if $$\int _{ \mathbb {R} }|t|^{p}m_{x}(\mathrm{d}t)<\infty .$$
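When the distribution $$m_{x}$$ of an observable is concentrated on finitely many points, the integrals in Definition 6 are finite sums. A minimal Python sketch with an illustrative discrete distribution, checking that the two formulas for the variance agree:

```python
# E(x) and D^2(x) for an observable whose distribution m_x is discrete.
# m_x assigns probability to finitely many points t (illustrative values).
m_x = {-1.0: 0.25, 0.0: 0.5, 2.0: 0.25}

assert abs(sum(m_x.values()) - 1.0) < 1e-12          # m_x is a probability

E = sum(t * p for t, p in m_x.items())               # integral of t m_x(dt)
second = sum(t * t * p for t, p in m_x.items())      # integral of t^2 m_x(dt)
var = second - E * E

# The two formulas for the variance in Definition 6 agree:
var2 = sum((t - E) ** 2 * p for t, p in m_x.items())
assert abs(var - var2) < 1e-12
```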

### Definition 7

Let (M, m) be a probability MV-algebra. Observables $$x_{1},x_{2},\ldots ,x_{n}$$ of M are said to be independent (with respect to m) if there exists an n-dimensional observable $$h:\mathcal {B}(\mathbb {R} ^{n})\rightarrow M$$ such that for all $$C_{1},C_{2},\ldots ,C_{n}\in \mathcal {B}(\mathbb {R})$$

\begin{aligned}&m(h(C_{1}\times C_{2}\times \cdots \times C_{n}))\\&\quad =m(x_{1}(C_{1}))\cdot m(x_{2}(C_{2}))\cdot \ldots \cdot m(x_{n}(C_{n})) \\&\quad =m_{x_{1}}(C_{1})\cdot m_{x_{2}}(C_{2})\cdot \ldots \cdot m_{x_{n}}(C_{n}). \end{aligned}

### Remark 1

Assume that $$x_{1},x_{2},\ldots ,x_{n}:\mathcal {B} (\mathbb {R}) \rightarrow M$$ are independent observables in a probability MV-algebra (M, m) and $$h:\mathcal {B}(\mathbb {R} ^{n})\rightarrow M$$ is their joint observable. Then for any Borel measurable function $$g: \mathbb {R} ^{n}\rightarrow \mathbb {R}$$

\begin{aligned} g(x_{1},x_{2},\ldots ,x_{n})=h\circ g^{-1} \end{aligned}

is an observable.

We fix a sequence $$\{k_{n}\} _{n\in \mathbb {N} }$$ of positive integers such that $$\lim _{n\rightarrow \infty }k_{n}=\infty$$ and a sequence of probability MV-algebras $$\{(M_{(n)},m_{(n)})\} _{n\in \mathbb {N} }$$. For each $$n\in \mathbb {N}$$ and an arbitrary observable $$x:\mathcal {B}(\mathbb {R})\rightarrow M_{(n)}$$ belonging to $$L_{m_{(n)}}^{2}$$, we denote by $$\mathbb {E}_{(n)}(x)$$ the expected value of x and by $$\mathbb {D}_{(n)}^{2}(x)$$ the variance of x with respect to $$m_{(n)}$$. Furthermore, for each $$x:\mathcal {B}(\mathbb {R})\rightarrow M_{(n)}$$ belonging to $$L_{m_{(n)}}^{2}$$ and $$\varepsilon ,s>0$$, we use the symbol $$l_{n}^{x}(\varepsilon ,s)$$ to denote the real number

\begin{aligned} l_{n}^{x}(\varepsilon ,s)=\mathbb {E}_{\left( n\right) }\left( \left( x-\mathbb {E}_{(n)}(x)\right) ^{2}I_{|x-\mathbb {E}_{(n)}(x)|>\varepsilon s}\right) . \end{aligned}

### Definition 8

Let us assume that for each $$n\in \mathbb {N}$$, $$\{x_{n1},x_{n2},\ldots ,x_{nk_{n}}\}$$ is a sequence of independent (with respect to $$m_{(n)}$$) observables of the MV-algebra $$M_{(n)}$$. We call $$\{ x_{nj}\} _{n\in \mathbb {N},j\in \mathbb {N}[k_{n}]}$$ a triangular array of independent observables (TA for short).

### Definition 9

Let $$\{x_{nj}\}_{n\in \mathbb {N},j\in \mathbb {N}[k_{n}]}$$ be a TA such that for each $$n\in \mathbb {N}$$

\begin{aligned} x_{nj}\in L_{m_{(n)}}^{2},\quad j\in \mathbb {N}[k_{n}],\quad s_{n}^{2}=\sum _{j=1}^{k_{n}}\mathbb {D}_{(n)}^{2}(x_{nj})\in (0,\infty )\text {.} \end{aligned}
(3)

Then $$\{x_{nj}\} _{n\in \mathbb {N},j\in \mathbb {N}[k_{n}]}$$ is said to satisfy the Lindeberg condition if for arbitrary $$\varepsilon >0$$

\begin{aligned} L_{n}(\varepsilon )=\frac{1}{s_{n}^{2}}\sum _{j=1}^{k_{n}}l_{n}^{x_{nj}}(\varepsilon ,s_{n})\rightarrow 0\text {,}\quad \text {as } n\rightarrow \infty . \end{aligned}
(4)

### Definition 10

A TA $$\{x_{nj}\}_{n\in \mathbb {N},j\in \mathbb {N}[k_{n}]}$$ fulfilling (3) is said to satisfy the Lyapunov condition if there exists $$\delta >0$$ such that

\begin{aligned} \frac{1}{s_{n}^{2+\delta }}\sum _{j=1}^{k_{n}}\mathbb {E}_{(n)}\left( |x_{nj}-\mathbb {E}_{(n)}(x_{nj})|^{2+\delta }\right) \rightarrow 0\text {,}\quad \text {as } n\rightarrow \infty . \end{aligned}
(5)
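Both conditions can be evaluated in closed form for classical arrays. The Python sketch below uses the illustrative two-point array $$x_{nj}=\pm 1$$ with probability 1/2 each and $$k_{n}=n$$ (so $$s_{n}^{2}=n$$), computing $$L_{n}(\varepsilon )$$ and the Lyapunov ratio for $$\delta =1$$ directly from the discrete distributions:

```python
import math

def lindeberg_Ln(n, eps):
    """L_n(eps) for the array x_nj = +/-1 w.p. 1/2, j = 1..n."""
    s2 = float(n)                  # s_n^2 = sum of n unit variances
    s_n = math.sqrt(s2)
    # E[(x_nj)^2 * 1_{|x_nj| > eps*s_n}] for the two-point distribution:
    tail = 1.0 if 1.0 > eps * s_n else 0.0
    return n * tail / s2           # n identical summands

def lyapunov_ratio(n, delta=1.0):
    """(1/s_n^{2+delta}) * sum_j E|x_nj|^{2+delta} for the same array."""
    s_n = math.sqrt(n)
    return n * 1.0 / s_n ** (2.0 + delta)   # E|x_nj|^{2+delta} = 1

# For fixed eps, L_n(eps) vanishes once eps * sqrt(n) >= 1 ...
assert lindeberg_Ln(3, eps=0.5) == 1.0
assert lindeberg_Ln(10, eps=0.5) == 0.0
# ... and the Lyapunov ratio decays like n^{-delta/2}:
assert abs(lyapunov_ratio(100) - 0.1) < 1e-12
```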

### Definition 11

Let $$\left\{ \left( M_{(n)},m_{(n)}\right) \right\} _{n\in \mathbb {N} }$$ be a sequence of probability MV-algebras. A sequence of observables $$\big \{ x_{n}:\mathcal {B}(\mathbb {R})\rightarrow M_{(n)}\big \} _{n\in \mathbb {N} }$$ is convergent in distribution to a function $$F: \mathbb {R} \rightarrow [0,1]$$ if

\begin{aligned} \lim _{n\rightarrow \infty }m_{(n)}(x_{n}((-\infty ,t)))=F(t) \end{aligned}

for each $$t\in \mathbb {R}$$. If $$\left\{ x_{n}:\mathcal {B}(\mathbb {R})\rightarrow M_{(n)}\right\} _{n\in \mathbb {N} }$$ is convergent in distribution to the cumulative distribution function $$\varPhi$$ of the standard normal distribution, then we write

$$x_{n}\rightarrow N(0,1)$$ in distribution, as $$n\rightarrow \infty$$.

We recall generalized versions of MV-algebraic central limit theorems and the Feller theorem proved in Nowak and Hryniewicz (2015).

### Theorem 3

(Lindeberg CLT) Let us assume that a TA $$\{x_{nj}\}_{n\in \mathbb {N} ,j\in \mathbb {N}[k_{n}]}$$ satisfies (3) and the Lindeberg condition (4). Then

\begin{aligned} \frac{1}{s_{n}}\left( \sum _{j=1}^{k_{n}}x_{nj}-\sum _{j=1}^{k_{n}} \mathbb {E}_{(n)}(x_{nj})\right) \rightarrow N(0,1) \end{aligned}

in distribution, as $$n\rightarrow \infty$$.

### Theorem 4

(Lyapunov CLT) Let us assume that a TA $$\{x_{nj}\}_{n\in \mathbb {N} ,j\in \mathbb {N}[k_{n}]}$$ satisfies (3) and Lyapunov’s condition (5). Then

\begin{aligned} \frac{1}{s_{n}}\left( \sum _{j=1}^{k_{n}}x_{nj}-\sum _{j=1}^{k_{n}} \mathbb {E}_{(n)}(x_{nj})\right) \rightarrow N(0,1) \end{aligned}

in distribution, as $$n\rightarrow \infty$$.

### Theorem 5

(Feller) Let $$\{x_{nj}\}_{n\in \mathbb {N},j\in \mathbb {N}[k_{n}]}$$ be a TA satisfying (3) and such that for each $$\varepsilon >0$$

\begin{aligned} \max _{1\le j\le k_{n}}(m_{(n)}) _{x_{nj}}((-\infty ,-\varepsilon s_{n})\cup (\varepsilon s_{n},\infty ))\rightarrow 0, \end{aligned}

as $$n\rightarrow \infty$$. If

\begin{aligned} \frac{1}{s_{n}}\left( \sum _{j=1}^{k_{n}}x_{nj}-\sum _{j=1}^{k_{n}} \mathbb {E}_{(n)}(x_{nj})\right) \rightarrow N(0,1) \end{aligned}

in distribution, as $$n\rightarrow \infty$$, then the Lindeberg condition (4) holds.

## 3 IV-probability

### Definition 12

Let $$(\varOmega ,\mathcal {S})$$ be a measurable space. By an interval-valued event (for short IV-event) we mean any pair $$A=(\mu _{A},\nu _{A})$$ of $$\mathcal {S}$$-measurable, [0, 1]-valued functions such that $$\mu _{A}\le \nu _{A}$$. We denote by $$\mathcal {V}(\varOmega ,\mathcal {S})$$ the set of all IV-events and we introduce the following operations on $$\mathcal {V}(\varOmega ,\mathcal {S})$$. For $$A=(\mu _{A},\nu _{A})$$, $$B=(\mu _{B},\nu _{B}) \in \mathcal {V}(\varOmega ,\mathcal {S})$$, $$\{A_{n}\} _{n\in \mathbb {N}}=\left\{ \left( \mu _{A_{n}} ,\nu _{A_{n}}\right) \right\} _{n\in \mathbb {N}}\subset \mathcal {V}(\varOmega ,\mathcal {S})$$:

\begin{aligned} A\oplus B&=(\mu _{A}\oplus \mu _{B},\nu _{A}\oplus \nu _{B})\\&=((\mu _{A}+\mu _{B})\wedge 1,(\nu _{A}+\nu _{B})\wedge 1) ;\\ A\odot B&=(\mu _{A}\odot \mu _{B},\nu _{A}\odot \nu _{B})\\ {}&=((\mu _{A}+\mu _{B}-1)\vee 0,(\nu _{A}+\nu _{B}-1)\vee 0) \end{aligned}

and we write $$A_{n}\nearrow A\Leftrightarrow \mu _{A_{n}}\nearrow \mu _{A} ,\ \nu _{A_{n}}\nearrow \nu _{A}\text {.}$$

Moreover, we consider the product

\begin{aligned} A\cdot B=(\mu _{A}\mu _{B},\nu _{A}\nu _{B}). \end{aligned}

$$\mathcal {V}(\varOmega ,\mathcal {S})$$ is ordered as follows:

\begin{aligned} A\le B\Leftrightarrow \mu _{A}\le \mu _{B},\quad \ \nu _{A}\le \nu _{B}. \end{aligned}
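For a finite $$\varOmega$$, the operations of Definition 12 act coordinatewise on the pairs $$(\mu ,\nu )$$. The sketch below (with illustrative membership values) checks that $$\oplus$$, $$\odot$$ and the product map IV-events to IV-events, i.e. preserve $$\mu \le \nu$$:

```python
# Pointwise IV-event operations over a finite Omega (illustrative values).
# An IV-event is a pair (mu, nu) of [0,1]-valued lists with mu <= nu.
A = ([0.2, 0.5, 0.9], [0.4, 0.7, 1.0])
B = ([0.3, 0.6, 0.1], [0.5, 0.8, 0.2])

def oplus(A, B):
    return ([min(a + b, 1.0) for a, b in zip(A[0], B[0])],
            [min(a + b, 1.0) for a, b in zip(A[1], B[1])])

def odot(A, B):
    return ([max(a + b - 1.0, 0.0) for a, b in zip(A[0], B[0])],
            [max(a + b - 1.0, 0.0) for a, b in zip(A[1], B[1])])

def product(A, B):
    return ([a * b for a, b in zip(A[0], B[0])],
            [a * b for a, b in zip(A[1], B[1])])

def is_iv_event(E):
    return all(0.0 <= m <= n <= 1.0 for m, n in zip(E[0], E[1]))

# All three operations stay inside V(Omega, S):
for op in (oplus, odot, product):
    assert is_iv_event(op(A, B))
```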

The following theorem from Kuková (2011) states that $$\mathcal {V}(\varOmega ,\mathcal {S})$$ can be embedded into an MV-algebra.

### Theorem 6

Let

\begin{aligned} \mathcal {G}=\{A=(\mu _{A},\nu _{A}); \mu _{A},\nu _{A}:\varOmega \rightarrow \mathbb {R}\} \end{aligned}

and the summation be defined by the formula

\begin{aligned} A+B=(\mu _{A}+\mu _{B},\nu _{A}+\nu _{B})\quad \text { for } A,B\in \mathcal {G}. \end{aligned}

Let the partial order on $$\mathcal {G}$$ be defined by

\begin{aligned} A\le B\Leftrightarrow \mu _{A}\le \mu _{B}\wedge \nu _{A}\le \nu _{B}\quad \text { for }A,B\in \mathcal {G}. \end{aligned}

Let − denote the inverse operation to $$+$$, $$\mathbf {0}_{\varvec{{\Omega }}}=(0_{\varOmega },0_{\varOmega })$$ be the neutral element of $$+$$, and $$\mathbf {1}_{{\varvec{\Omega }}}=(1_{\varOmega },1_{\varOmega })$$. Let $$\mathcal {M}(\varOmega ,\mathcal {S})$$ be an interval in $$\mathcal {G}$$, $$\mathcal {M}(\varOmega ,\mathcal {S})=[\mathbf {0}_{\varvec{{\Omega }}},\mathbf{1}_{\varvec{{\Omega }}}]$$, with the operations

\begin{aligned} A\oplus B&=((\mu _{A}+\mu _{B})\wedge 1,(\nu _{A}+\nu _{B})\wedge 1) ;\\ A\odot B&=((\mu _{A}+\mu _{B}-1)\vee 0,(\nu _{A}+\nu _{B}-1)\vee 0) . \end{aligned}

Then the system

\begin{aligned} (\mathcal {M}(\varOmega ,\mathcal {S}),\oplus ,\odot ,\le ,\mathbf {0}_{{\varvec{\Omega }}},\mathbf{1}_{\varvec{{\Omega }}}) \end{aligned}

is an MV-algebra and $$\mathcal {V}(\varOmega ,\mathcal {S})\subset \mathcal {M}(\varOmega ,\mathcal {S})$$.

We recall the notions of state, probability and observable for IV-events from Kuková (2011), Samuelčík and Hollá (2013), which we call IV-state, IV-probability and IV-observable, respectively, in this paper.

### Definition 13

An IV-state on $$\mathcal {V}(\varOmega ,\mathcal {S})$$ is a map $$m:\mathcal {V}(\varOmega ,\mathcal {S}) \rightarrow [0,1]$$ satisfying the following conditions for all $$A,B\in \mathcal {V}(\varOmega ,\mathcal {S})$$ and $$\{ A_{n}\} _{n=1}^{\infty }\subset \mathcal {V}(\varOmega ,\mathcal {S})$$:

(1):

$$m(\mathbf {1}_{{\varvec{\Omega }}})=1,\ m(\mathbf {0}_{{\varvec{\Omega }}})=0;$$

(2):

$$A\odot B=\mathbf {0}_{\varvec{{\Omega }}}\Rightarrow m(A\oplus B)=m(A)+m(B);$$

(3):

if $$A_{n}\nearrow A$$, then $$m(A_{n})\nearrow m(A).$$

Let $$\mathcal {J}$$ be the family of all closed subintervals of [0, 1].

### Definition 14

An IV-probability is a mapping $$\mathfrak {P}:\mathcal {V}(\varOmega ,\mathcal {S})\rightarrow \mathcal {J}$$ satisfying the following conditions for all $$A,B\in \mathcal {V}(\varOmega ,\mathcal {S})$$ and $$\{ A_{n}\} _{n=1}^{\infty }\subset \mathcal {V}(\varOmega ,\mathcal {S})$$:

(1):

$$\mathfrak {P}(\mathbf {1}_{{\varvec{\Omega }}})=[1,1],\ \mathfrak {P}(\mathbf {0}_{{\varvec{\Omega }}})=[0,0];$$

(2):

$$A\odot B=\mathbf {0}_{{\varvec{\Omega }}}\Rightarrow \mathfrak {P}(A\oplus B)=\mathfrak {P}(A) +\mathfrak {P}(B);$$

(3):

if $$A_{n}\nearrow A$$, then $$\mathfrak {P}( A_{n})\nearrow \mathfrak {P}(A).$$

An IV-probability space is a pair $$(\mathcal {V}(\varOmega ,\mathcal {S}) ,\mathfrak {P})$$, where $$\mathfrak {P}$$ is an IV-probability on $$\mathcal {V}(\varOmega ,\mathcal {S})$$. We will use the notation $$\mathfrak {P}(A)=[\mathfrak {P}^{\flat }(A),\mathfrak {P}^{\natural }(A)]$$ for each $$A\in \mathcal {V}(\varOmega ,\mathcal {S}).$$

It is easy to verify that if $$\mathfrak {P}:\mathcal {V}(\varOmega ,\mathcal {S})\rightarrow \mathcal {J}$$ is an IV-probability, then the mappings $$\mathfrak {P}^{\flat }$$ and $$\mathfrak {P}^{\natural }$$ are IV-states on $$\mathcal {V}(\varOmega ,\mathcal {S})$$.

In Sect. 4, we will use the following lemma.

### Lemma 1

Let $$\hat{P}$$ be a probability measure defined on a measurable space $$(\varOmega ,\mathcal {S})$$. Then the mapping $$\mathfrak {P}_{\hat{P}}:\mathcal {V}(\varOmega ,\mathcal {S})\rightarrow \mathcal {J}$$ of the form

\begin{aligned} \mathfrak {P}_{\hat{P}}((\mu ,\nu )) =\left[ \int _{\varOmega }\mu \mathrm{d}\hat{P},\int _{\varOmega }\nu \mathrm{d}\hat{P}\right] ,\quad (\mu ,\nu )\in \mathcal {V}(\varOmega ,\mathcal {S}), \end{aligned}
(6)

is an IV-probability.

### Proof

It is obvious that $$\mathfrak {P}_{\hat{P}}$$ satisfies condition (1). Let $$A\odot B=\mathbf {0}_{{\varvec{\Omega }}}$$ for $$A,B\in \mathcal {V}(\varOmega ,\mathcal {S})$$. Then for arbitrary $$\omega \in \varOmega$$

\begin{aligned} \mu _{A}(\omega )+\mu _{B}(\omega )\le 1\quad \text {and}\quad \nu _{A}(\omega )+\nu _{B}(\omega )\le 1. \end{aligned}

Therefore,

\begin{aligned}&\mathfrak {P}_{\hat{P}}(A\oplus B)\\&\quad =\left[ \int _{\varOmega }\left[ (\mu _{A}+\mu _{B})\wedge 1\right] \mathrm{d}\hat{P},\int _{\varOmega }\left[ (\nu _{A}+\nu _{B})\wedge 1\right] \mathrm{d}\hat{P}\right] \\&\quad =\left[ \int _{\varOmega }(\mu _{A}+\mu _{B})\mathrm{d}\hat{P},\int _{\varOmega }(\nu _{A}+\nu _{B})\mathrm{d}\hat{P}\right] \\&\quad =\left[ \int _{\varOmega }\mu _{A}\mathrm{d}\hat{P},\int _{\varOmega }\nu _{A}\mathrm{d}\hat{P}\right] +\left[ \int _{\varOmega }\mu _{B}\mathrm{d}\hat{P},\int _{\varOmega }\nu _{B}\mathrm{d}\hat{P}\right] \\&\quad =\mathfrak {P}_{\hat{P}}(A) +\mathfrak {P}_{\hat{P}}(B) . \end{aligned}

Thus, condition (2) is satisfied.

Let $$A\in \mathcal {V}(\varOmega ,\mathcal {S})$$, $$\{A_{n}\} _{n=1}^{\infty } \subset \mathcal {V}(\varOmega ,\mathcal {S})$$ and $$A_{n}\nearrow A$$. Then

\begin{aligned} \mathfrak {P}_{\hat{P}}(A_{n})&=\left[ \int _{\varOmega }\mu _{A_{n} }\mathrm{d}\hat{P},\int _{\varOmega }\nu _{A_{n}}\mathrm{d}\hat{P}\right] \\&\nearrow \left[ \int _{\varOmega }\mu _{A}\mathrm{d}\hat{P},\int _{\varOmega }\nu _{A}\mathrm{d}\hat{P}\right] =\mathfrak {P}_{\hat{P}}(A) \end{aligned}

by the Dominated Convergence Theorem. Therefore, condition (3) is also fulfilled. $$\square$$

The IV-probability $$\mathfrak {P}_{\hat{P}}$$ is a modification of the probability of IF-events, which was considered by Szmidt and Kacprzyk (1999a, b), Grzegorzewski and Mrówka (2002) and generalized by Nowak (2003, 2004a, b).
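For a finite $$\varOmega$$, the integrals in (6) become weighted sums. A minimal Python sketch of $$\mathfrak {P}_{\hat{P}}$$ (with hypothetical weights and membership values), checking the additivity required by Definition 14 on a pair of $$\odot$$-disjoint IV-events:

```python
# IV-probability induced by a discrete probability measure P_hat (Lemma 1).
# Omega = {0, 1, 2}; IV-events given as pairs of value lists (illustrative).
P_hat = [0.5, 0.3, 0.2]

def iv_prob(E):
    mu, nu = E
    return (sum(m * p for m, p in zip(mu, P_hat)),
            sum(n * p for n, p in zip(nu, P_hat)))

A = ([0.1, 0.2, 0.3], [0.2, 0.4, 0.5])
B = ([0.4, 0.3, 0.2], [0.5, 0.4, 0.3])   # here nu_A + nu_B <= 1, so A odot B = 0

def oplus(E, F):
    return ([min(a + b, 1.0) for a, b in zip(E[0], F[0])],
            [min(a + b, 1.0) for a, b in zip(E[1], F[1])])

lo, hi = iv_prob(oplus(A, B))
loA, hiA = iv_prob(A)
loB, hiB = iv_prob(B)
# Additivity of Definition 14(2): P(A + B) = P(A) + P(B) endpointwise.
assert abs(lo - (loA + loB)) < 1e-12 and abs(hi - (hiA + hiB)) < 1e-12
```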

### Definition 15

An IV-observable is a mapping $$x:\mathcal {B}(\mathbb {R})\rightarrow \mathcal {V}(\varOmega ,\mathcal {S})$$ satisfying the following conditions:

(1):

$$x(\mathbb {R})=\mathbf {1}_{{\varvec{\Omega }}},\ x(\emptyset )=\mathbf {0}_{{\varvec{\Omega }}};$$

(2):

whenever $$A,B\in \mathcal {B}(\mathbb {R})$$ and $$A\cap B=\emptyset$$, then

\begin{aligned} x(A)\odot x(B) =\mathbf {0}_{{\varvec{\Omega }}}\ \text {and}\quad x(A\cup B)=x(A)\oplus x(B); \end{aligned}
(3):

for all $$A,A_{1},A_{2},\ldots \in \mathcal {B}(\mathbb {R})$$

if $$A_{n}\nearrow A$$, then $$x(A_{n})\nearrow x(A).$$

### Definition 16

If $$x_{1},x_{2},\ldots ,x_{n}:\mathcal {B}(\mathbb {R})\rightarrow \mathcal {V}(\varOmega ,\mathcal {S})$$ are IV-observables, then their joint IV-observable is the map $$h:\mathcal {B}(\mathbb {R} ^{n}) \rightarrow \mathcal {V}(\varOmega ,\mathcal {S})$$ satisfying the following conditions:

(1):

$$h(\mathbb {R} ^{n})=\mathbf {1}_{{\varvec{\Omega }}},h(\emptyset )=\mathbf {0}_{{\varvec{\Omega }}};$$

(2):

whenever $$A,B\in \mathcal {B}(\mathbb {R} ^{n})$$ and $$A\cap B=\emptyset$$, then

\begin{aligned} h(A) \odot h(B)=\mathbf {0}_{\varvec{{\Omega }}}\quad \text {and}\quad h(A\cup B)=h(A)\oplus h(B); \end{aligned}
(3):

for all $$A,A_{1},A_{2},\ldots \in \mathcal {B}(\mathbb {R} ^{n})$$

if $$A_{n}\nearrow A$$, then $$h(A_{n})\nearrow h(A);$$

(4):

for all $$C_{1},C_{2},\ldots ,C_{n}\in \mathcal {B} (\mathbb {R})$$

\begin{aligned}&h(C_{1}\times C_{2}\times \cdots \times C_{n})\\&\quad =x_{1}(C_{1})\cdot x_{2}(C_{2})\cdot \ldots \cdot x_{n}(C_{n}). \end{aligned}

The following theorems and proposition from Kuková (2011) (see Theorem 2) and Samuelčík and Hollá (2013) (see Proposition 1 and Theorem 1) concern properties of the notions defined above.

### Theorem 7

Let $$\bar{p}:\mathcal {M}(\varOmega ,\mathcal {S}) \rightarrow [0,1]$$ be defined by the formula

\begin{aligned} \bar{p}(A)=\bar{p}(\mu _{A},\nu _{A})=p(\mu _{A},1)-p(0,1-\nu _{A}), \end{aligned}

where $$p:\mathcal {V}(\varOmega ,\mathcal {S}) \rightarrow [0,1]$$ is an IV-state on $$\mathcal {V}(\varOmega ,\mathcal {S})$$. Then

1. for arbitrary $$A\in \mathcal {V}(\varOmega ,\mathcal {S})$$, $$\bar{p}(A)=p(A);$$

2. $$\bar{p}$$ is a state on $$\mathcal {M}(\varOmega ,\mathcal {S})$$.

### Proposition 1

If $$x:\mathcal {B}(\mathbb {R}) \rightarrow \mathcal {V}(\varOmega ,\mathcal {S})$$ is an IV-observable and $$p:\mathcal {V}(\varOmega ,\mathcal {S})\rightarrow [0,1]$$ is an IV-state, then the mapping $$p_{x}=p\circ x:\mathcal {B}(\mathbb {R})\rightarrow [0,1]$$ defined by the formula

\begin{aligned} p_{x}(A)=p(x(A)) \end{aligned}

is a probability measure.

### Theorem 8

For any IV-observables

\begin{aligned} x_{1},x_{2},\ldots ,x_{n}:\mathcal {B}(\mathbb {R}) \rightarrow \mathcal {V}(\varOmega ,\mathcal {S}) \end{aligned}

there exists their joint IV-observable

\begin{aligned} h:\mathcal {B}(\mathbb {R} ^{n}) \rightarrow \mathcal {V}(\varOmega ,\mathcal {S}). \end{aligned}

The following remarks will be very useful in the remainder of the paper.

### Remark 2

Since $$\mathcal {V}(\varOmega ,\mathcal {S})\subset \mathcal {M}(\varOmega ,\mathcal {S})$$, any IV-observable $$x:\mathcal {B}(\mathbb {R} ) \rightarrow \mathcal {V}(\varOmega ,\mathcal {S})$$ is an observable in the sense of the MV-algebraic probability theory. Furthermore,

\begin{aligned} \mathfrak {P}_{x}^{\flat }=\mathfrak {P}^{\flat }\circ x,\quad \mathfrak {P}_{x}^{\natural }=\mathfrak {P}^{\natural }\circ x:\mathcal {B}(\mathbb {R}) \rightarrow [0,1] \end{aligned}

are probability measures.

### Remark 3

Let $$x_{1},x_{2},\ldots ,x_{n}:\mathcal {B}(\mathbb {R}) \rightarrow \mathcal {V}(\varOmega ,\mathcal {S})$$ be IV-observables. Let $$g: \mathbb {R} ^{n}\rightarrow \mathbb {R}$$ be a Borel measurable function and let $$h:\mathcal {B}(\mathbb {R} ^{n})\rightarrow \mathcal {V}(\varOmega ,\mathcal {S})$$ be the joint observable of $$x_{1},x_{2},\ldots ,x_{n}$$. Then

\begin{aligned} g(x_{1},x_{2},\ldots ,x_{n})=h\circ g^{-1} \end{aligned}

is an IV-observable.

### Definition 17

IV-observables

\begin{aligned} x_{1},x_{2},\ldots ,x_{n}:\mathcal {B}(\mathbb {R}) \rightarrow \mathcal {V}(\varOmega ,\mathcal {S}) \end{aligned}

are independent (with respect to $$\mathfrak {P}$$) if there exists an n-dimensional IV-observable $$h:\mathcal {B}(\mathbb {R} ^{n})\rightarrow \mathcal {V}(\varOmega ,\mathcal {S})$$ such that for all $$C_{1},C_{2},\ldots ,C_{n}\in \mathcal {B}(\mathbb {R})$$

\begin{aligned} \mathfrak {P}^{\flat }(h(C_{1}\times C_{2}\times \cdots \times C_{n}))&=\mathfrak {P}_{x_{1}}^{\flat }(C_{1})\cdot \mathfrak {P}_{x_{2}}^{\flat }(C_{2})\\&\cdot \ldots \cdot \mathfrak {P}_{x_{n}}^{\flat }(C_{n}),\\ \mathfrak {P}^{\natural }(h(C_{1}\times C_{2}\times \cdots \times C_{n}))&=\mathfrak {P}_{x_{1}}^{\natural }(C_{1})\cdot \mathfrak {P}_{x_{2}}^{\natural }(C_{2})\\&\cdot \ldots \cdot \mathfrak {P}_{x_{n}}^{\natural }(C_{n}). \end{aligned}

### Definition 18

Let $$\mathfrak {P}:\mathcal {V}(\varOmega ,\mathcal {S}) \rightarrow \mathcal {J}$$ be an IV-probability and $$x:\mathcal {B}(\mathbb {R})\rightarrow \mathcal {V}(\varOmega ,\mathcal {S})$$ be an IV-observable. Then x is said to be integrable if the expectations

\begin{aligned} \mathbb {E}^{\flat }(x)=\int _{ \mathbb {R} }t\mathfrak {P}_{x}^{\flat }(\mathrm{d}t),\quad \mathbb {E}^{\natural }(x)=\int _{\mathbb {R} }t\mathfrak {P}_{x}^{\natural }(\mathrm{d}t) \end{aligned}

exist. We say that x is square-integrable if $$\int _{ \mathbb {R} }t^{2}\mathfrak {P}_{x}^{\flat }(\mathrm{d}t)$$ and $$\int _{ \mathbb {R} }t^{2}\mathfrak {P}_{x}^{\natural }(\mathrm{d}t)$$ exist. If x is square-integrable, then the variances of x also exist and are described by the equalities

\begin{aligned} \mathbb {D}^{\flat ,2}(x)&=\int _{ \mathbb {R} }t^{2}\mathfrak {P}_{x}^{\flat }(\mathrm{d}t)-(\mathbb {E}^{\flat }(x)) ^{2}\\&=\int _{ \mathbb {R} }(t-\mathbb {E}^{\flat }(x))^{2}\mathfrak {P} _{x}^{\flat }(\mathrm{d}t),\\ \mathbb {D}^{\natural ,2}(x)&=\int _{ \mathbb {R} }t^{2}\mathfrak {P}_{x}^{\natural }(\mathrm{d}t)-(\mathbb {E} ^{\natural }(x))^{2}\\&=\int _{ \mathbb {R} }(t-\mathbb {E}^{\natural }(x))^{2}\mathfrak {P} _{x}^{\natural }(\mathrm{d}t).\end{aligned}

We write $$x\in L_{\mathfrak {P}}^{p_{1},p_{2}}$$ for $$p_{1},p_{2}\ge 1$$ if $$\int _{ \mathbb {R} }|t|^{p_{1}}\mathfrak {P}_{x}^{\flat }(\mathrm{d}t) <\infty$$ and $$\int _{ \mathbb {R} }|t|^{p_{2}}\mathfrak {P}_{x}^{\natural }(\mathrm{d}t) <\infty .$$ Finally, we use the notation $$x\in L_{\mathfrak {P}}^{p}$$ instead of $$x\in L_{\mathfrak {P}}^{p,p}$$ for $$p\ge 1$$.
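When $$\mathfrak {P}_{x}^{\flat }$$ (or $$\mathfrak {P}_{x}^{\natural }$$) is a discrete distribution, the integrals defining the expectation and variance above reduce to finite sums. The following minimal Python sketch illustrates this; the function name `moments` and the sample distribution are illustrative assumptions, not part of the paper's formalism.

```python
# Illustrative sketch: for an IV-observable whose distribution P_x^flat is
# discrete, the integrals of Definition 18 reduce to finite sums over the
# support.  The same routine applies verbatim to P_x^natural.

def moments(dist):
    """Return (mean, variance) of a discrete distribution {value: probability}."""
    mean = sum(t * p for t, p in dist.items())
    var = sum(t * t * p for t, p in dist.items()) - mean ** 2
    return mean, var

# Example: the two-point distribution P(0) = 0.7, P(1) = 0.3.
e_flat, d2_flat = moments({0: 0.7, 1: 0.3})  # mean 0.3, variance 0.21
```

Both formulas for the variance in the display above (the raw-moment form and the centred form) give the same value, which the raw-moment form used here exploits.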

The following lemma concerns the form of the expected value of a Borel function of an IV-observable.

### Lemma 2

Let $$\mathfrak {P}:\mathcal {V}(\varOmega ,\mathcal {S}) \rightarrow \mathcal {J}$$ be an IV-probability, $$\varphi$$ be an $$\mathbb {R}$$-valued Borel function whose domain is the whole real line $$\mathbb {R}$$, $$x:\mathcal {B}(\mathbb {R})\rightarrow \mathcal {V}(\varOmega ,\mathcal {S})$$ be an IV-observable and $$y=\varphi (x)=x\circ \varphi ^{-1}$$. Then $$\mathbb {E}^{\flat }(y)$$ exists if and only if $$\int _{ \mathbb {R} }|\varphi (t) |\mathfrak {P}_{x}^{\flat }(\mathrm{d}t) <\infty$$, and in this case $$\mathbb {E}^{\flat }(y)=\int _{ \mathbb {R} }\varphi (t) \mathfrak {P}_{x}^{\flat }(\mathrm{d}t)$$. The analogous assertion holds for $$\mathbb {E}^{\natural }(y)$$ and the corresponding probability measure $$\mathfrak {P}_{x}^{\natural }$$.

### Proof

We use Theorem 1 for

\begin{aligned} (X,\mathcal {X})=(X^{\prime },\mathcal {X}^{\prime })=(\mathbb {R} ,\mathcal {B}(\mathbb {R})),\quad T=\varphi \quad \text {and}\quad f(t)=t. \end{aligned}

Then $$\mu =\mathfrak {P}_{x}^{\flat }$$ is a probability measure. Moreover, by straightforward computations one can verify the equality $$\mu T^{-1}=\mathfrak {P}_{\varphi (x)}^{\flat }=\mathfrak {P}_{y}^{\flat },$$ which proves the first assertion. The assertion for $$\mathbb {E}^{\natural }(y)$$ is proved analogously. $$\square$$

### 3.2 Central limit theorems

We denote by $$\{ k_{n}\} _{n\in \mathbb {N} }$$ a fixed sequence of positive integers and assume that $$\lim _{n\rightarrow \infty }k_{n}=\infty$$.

Let $$\left\{ \left( \mathcal {V}\left( \varOmega _{(n)},\mathcal {S}_{(n)}\right) ,\mathfrak {P}_{(n)}\right) \right\} _{n\in \mathbb {N} }$$ be a sequence of IV-probability spaces. For each $$n\in \mathbb {N}$$ and an observable $$x:\mathcal {B}(\mathbb {R}) \rightarrow \mathcal {V}(\varOmega _{(n) } ,\mathcal {S}_{(n)})$$, we use the symbols $$\mathbb {E} _{(n)}^{\flat }(x)$$, $$\mathbb {E}_{(n)}^{\natural }(x)$$ to denote the expected values of x and the symbols $$\mathbb {D}_{(n) }^{2,\flat }(x)$$, $$\mathbb {D}_{(n) }^{2,\natural }(x)$$ to denote the variances of x with respect to $$\mathfrak {P}_{(n)}^{\flat }$$ and $$\mathfrak {P}_{(n)}^{\natural }$$, respectively.

For $$n\in \mathbb {N}$$, constants $$\varepsilon ,s>0$$, and an arbitrary IV-observable $$x:\mathcal {B}(\mathbb {R}) \rightarrow \mathcal {V}(\varOmega _{(n) } ,\mathcal {S}_{(n)})$$ belonging to $$L_{\mathfrak {P} _{(n)}}^{2}$$, we consider the following $$\mathbb {R}$$-valued functions:

\begin{aligned} \mathfrak {l}_{n,\flat }^{x}(\varepsilon ,s)=\mathbb {E}_{(n)}^{\flat }\left( \left( x-\mathbb {E}_{(n)}^{\flat }(x)\right) ^{2}I_{|x-\mathbb {E}_{(n)}^{\flat }(x)|>\varepsilon s}\right) ,\\ \mathfrak {l}_{n,\natural }^{x}(\varepsilon ,s)=\mathbb {E} _{(n)}^{\natural }\left( \left( x-\mathbb {E}_{(n)}^{\natural }(x) \right) ^{2}I_{|x-\mathbb {E}_{(n)}^{\natural }(x)|>\varepsilon s}\right) . \end{aligned}

Lemma 2 implies that $$\mathfrak {l}_{n,\flat }^{x}$$ and $$\mathfrak {l} _{n,\natural }^{x}$$ are well-defined.

### Definition 19

For each $$n\in \mathbb {N}$$, let $$\{ x_{nj}\} _{j\in \mathbb {N}[k_{n}]}$$ be a sequence of independent (with respect to $$\mathfrak {P}_{(n)}$$) IV-observables of $$\mathcal {V}(\varOmega _{(n)},\mathcal {S}_{(n)})$$. Then $$\{x_{nj}\}_{n\in \mathbb {N},j\in \mathbb {N}[k_{n}]}$$ is called a triangular array of independent IV-observables (TVI for short).

### Definition 20

Let $$\{x_{nj}\}_{n\in \mathbb {N},j\in \mathbb {N}[k_{n}]}$$ be a TVI satisfying the following conditions for each $$n\in \mathbb {N}$$:

\begin{aligned}&x_{nj}\in L_{\mathfrak {P}_{(n)}}^{2},\quad j\in \mathbb {N}[k_{n}], \end{aligned}
(7)
\begin{aligned}&s_{n}^{2,\flat }=\sum \limits _{j=1}^{k_{n}}\mathbb {D}_{(n)}^{2,\flat }(x_{nj}),\quad s_{n}^{2,\natural }=\sum \limits _{j=1}^{k_{n}}\mathbb {D}_{(n)}^{2,\natural }(x_{nj})\in (0,\infty ).\nonumber \\ \end{aligned}
(8)

Then $$\{x_{nj}\}_{n\in \mathbb {N},j\in \mathbb {N}[k_{n}]}$$ is said to satisfy the Lindeberg condition if for each $$\varepsilon >0$$

\begin{aligned} \mathfrak {L}_{n}(\varepsilon )=\mathfrak {L}_{n}^{\flat }(\varepsilon ) +\mathfrak {L}_{n}^{\natural }(\varepsilon )\rightarrow 0\text {,\quad as }n\rightarrow \infty , \end{aligned}
(9)

where

\begin{aligned}&\mathfrak {L}_{n}^{\flat }(\varepsilon )=\frac{1}{s_{n}^{2,\flat }}\sum \limits _{j=1}^{k_{n}}\mathfrak {l}_{n,\flat }^{x_{nj}}\left( \varepsilon ,s_{n}^{\flat }\right) ,\\&\mathfrak {L}_{n}^{\natural }(\varepsilon )=\frac{1}{s_{n}^{2,\natural }}\sum \limits _{j=1}^{k_{n}}\mathfrak {l}_{n,\natural }^{x_{nj}}\left( \varepsilon ,s_{n}^{\natural }\right) ,\\&s_{n}^{\flat }=\sqrt{s_{n}^{2,\flat }}\quad \hbox {and}\quad s_{n}^{\natural }=\sqrt{s_{n}^{2,\natural }}. \end{aligned}
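When the distribution $$\mathfrak {P}_{x}^{\flat }$$ is discrete, each Lindeberg term $$\mathfrak {l}_{n,\flat }^{x}(\varepsilon ,s)$$ above is a finite sum. The following minimal Python sketch computes it; the helper name `lindeberg_term` and the sample distribution are illustrative assumptions, not part of the paper's formalism.

```python
# Hypothetical helper: E((X - EX)^2 * 1{|X - EX| > eps*s}) for a discrete
# distribution given as {value: probability}.  For a discrete P_x this is
# exactly the Lindeberg term l_{n}^{x}(eps, s) from the text.

def lindeberg_term(dist, eps, s):
    mean = sum(t * p for t, p in dist.items())
    return sum((t - mean) ** 2 * p
               for t, p in dist.items()
               if abs(t - mean) > eps * s)

# For the distribution P(0) = P(10) = 0.5 the mean is 5, so the term equals
# the full variance 25 when eps*s < 5 and vanishes when eps*s > 5.
```

This mirrors how the truncated second moments are evaluated in the discrete examples of Sect. 4.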

### Definition 21

Let $$\{x_{nj}\}_{n\in \mathbb {N},j\in \mathbb {N}[k_{n}]}$$ be a TVI fulfilling (7)–(8). Then the array $$\{x_{nj} \}_{n\in \mathbb {N},j\in \mathbb {N}[k_{n}]}$$ is said to satisfy the Lyapunov condition if there exist positive constants $$\delta _{1}$$ and $$\delta _{2}$$ such that

\begin{aligned}&\frac{1}{s_{n}^{2+\delta _{1},\flat }}\sum \limits _{j=1}^{k_{n}}\mathbb {E}_{(n)}^{\flat }\left( |x_{nj}-\mathbb {E}_{(n) }^{\flat }(x_{nj}) |^{2+\delta _{1}}\right) \nonumber \\&\quad +\frac{1}{s_{n}^{2+\delta _{2},\natural }}\sum \limits _{j=1}^{k_{n}}\mathbb {E}_{(n) }^{\natural }\left( |x_{nj}-\mathbb {E}_{(n)}^{\natural }(x_{nj})|^{2+\delta _{2}}\right) \rightarrow 0,\nonumber \\&\quad \text {as }n\rightarrow \infty , \end{aligned}
(10)

where $$s_{n}^{2+\delta _{1},\flat }=\left( s_{n}^{\flat }\right) ^{2+\delta _{1}}$$ and $$s_{n}^{2+\delta _{2},\natural }=\left( s_{n}^{\natural }\right) ^{2+\delta _{2}}$$.

The next two theorems are IV-probabilistic versions of central limit theorems.

### Theorem 9

(Lindeberg CLT) Let $$\{x_{nj}\}_{n\in \mathbb {N},j\in \mathbb {N}[k_{n}]}$$ be a TVI satisfying (7)–(8) as well as the Lindeberg condition (9). Then for $$t\in \mathbb {R}$$

\begin{aligned}&\mathfrak {P}_{(n)}^{\flat }\left( \frac{\sum _{j=1}^{k_{n} }x_{nj}-\sum _{j=1}^{k_{n}}\mathbb {E}_{(n)}^{\flat }(x_{nj})}{s_{n}^{\flat }}((-\infty ,t))\right) \nonumber \\&\quad \rightarrow \varPhi (t)\text {,}\quad as\,n\rightarrow \infty , \end{aligned}
(11)
\begin{aligned}&\mathfrak {P}_{(n)}^{\natural }\left( \frac{\sum _{j=1}^{k_{n} }x_{nj}-\sum _{j=1}^{k_{n}}\mathbb {E}_{(n)}^{\natural }(x_{nj})}{s_{n}^{\natural }}((-\infty ,t)) \right) \nonumber \\&\quad \rightarrow \varPhi (t)\text {,}\quad as\,n\rightarrow \infty . \end{aligned}
(12)

### Proof

We consider the sequence of MV-algebras $$\mathcal {M}(\varOmega _{(n)},\mathcal {S}_{(n)})$$ with states $$\mathfrak {\bar{P}}_{(n) }^{\flat }$$, $$n\in \mathbb {N}$$. From Theorem 7, it follows that for arbitrary $$n\in \mathbb {N}$$ one can find a state $$\mathfrak {\bar{P}}_{(n) }^{\flat }:\mathcal {M}(\varOmega _{(n)},\mathcal {S}_{(n)}) \rightarrow [0,1]$$ such that $$\mathfrak {\bar{P}}_{(n)}^{\flat }|_{\mathcal {V}(\varOmega _{(n)},\mathcal {S}_{(n)})}=\mathfrak {P}_{(n)}^{\flat }$$. Since

\begin{aligned}\mathcal {V}(\varOmega _{(n)},\mathcal {S}_{(n)})\subset \mathcal {M}(\varOmega _{(n) },\mathcal {S}_{(n)}),\end{aligned}

the array $$\{x_{nj}\}_{n\in \mathbb {N},j\in \mathbb {N}[k_{n}]}$$ is a TA of the MV-algebras $$\left\{ \mathcal {M}(\varOmega _{(n)},\mathcal {S}_{(n)}) \right\} _{n\in \mathbb {N} }$$. For arbitrary $$n\in \mathbb {N}$$ and $$j\in \mathbb {N}[k_{n}]$$, the probability measures $$(\mathfrak {\bar{P}}_{(n) }^{\flat }) _{x_{nj}}$$ and $$(\mathfrak {P}_{( n)}^{\flat }) _{x_{nj}}$$ coincide. Consequently, for arbitrary $$n\in \mathbb {N}$$ the expected values $$\mathbb {E}_{(n) }^{\flat }(x_{nj})$$, $$j\in \mathbb {N}[k_{n}]$$, the variances $$\mathbb {D}_{(n) }^{2,\flat }( x_{nj})$$, $$j\in \mathbb {N}[k_{n}]$$, as well as $$\mathfrak {L}_{n}^{\flat }(\varepsilon )$$, $$\mathfrak {L}_{n}^{\natural }(\varepsilon )$$ coincide with their MV-algebraic counterparts for the state $$\mathfrak {\bar{P}}_{(n) }^{\flat }$$. Thus, the Lindeberg condition (9) implies its MV-algebraic counterpart (4) for $$\{ x_{nj}\} _{n\in \mathbb {N},j\in \mathbb {N}[k_{n}]}$$, since the array fulfils (3).

Let $$\{ Z_{n}^{\flat }\} _{n\in \mathbb {N}}$$ be the sequence of $$\mathcal {V}(\varOmega _{(n)},\mathcal {S}_{(n)})$$-valued observables given by the formula:

\begin{aligned} Z_{n}^{\flat }=\frac{\sum _{j=1}^{k_{n}}x_{nj}-\sum _{j=1}^{k_{n}}\mathbb {E} _{(n) }^{\flat }(x_{nj}) }{s_{n}^{\flat }} ,\;n\in \mathbb {N}. \end{aligned}
(13)

Clearly, for each $$n\in \mathbb {N}$$ and $$t\in \mathbb {R}$$

\begin{aligned} \mathfrak {\bar{P}}_{(n) }^{\flat }\left( Z_{n}^{\flat }((-\infty ,t)) \right) =\mathfrak {P}_{(n) }^{\flat }\left( Z_{n}^{\flat }((-\infty ,t))\right) . \end{aligned}

By Theorem 3,

\begin{aligned} \mathfrak {\bar{P}}_{(n) }^{\flat }\left( Z_{n}^{\flat }((-\infty ,t))\right) \rightarrow \varPhi (t)\text {,}\quad as\,n\rightarrow \infty \end{aligned}

for $$t\in \mathbb {R}$$, and therefore the convergence (11) holds. The convergence (12) is obtained analogously. $$\square$$

### Theorem 10

(Lyapunov CLT) Let $$\{x_{nj}\} _{n\in \mathbb {N},j\in \mathbb {N}[k_{n}]}$$ be a TVI satisfying (7)–(8) and Lyapunov’s condition (10). Then for each $$t\in \mathbb {R}$$ (11)–(12) hold.

### Proof

As noticed previously, $$\{x_{nj}\} _{n\in \mathbb {N},j\in \mathbb {N}[k_{n}]}$$ is a TA of the MV-algebras $$\{\mathcal {M}(\varOmega _{(n)},\mathcal {S}_{(n)})\} _{n\in \mathbb {N}}$$ with states $$\mathfrak {\bar{P}}_{(n)}^{\flat }$$, $$n=1,2,\ldots$$ For any positive integer n, the expected values $$\mathbb {E}_{(n) }^{\flat }(x_{nj})$$, $$j\in \mathbb {N}[k_{n}]$$, and the variances $$\mathbb {D}_{(n)}^{2,\flat }(x_{nj})$$, $$j\in \mathbb {N}[k_{n}]$$, coincide with their MV-algebraic counterparts for the state $$\mathfrak {\bar{P}}_{(n)}^{\flat }$$. Thus, the TA $$\{x_{nj}\} _{n\in \mathbb {N},j\in \mathbb {N}[k_{n}]}$$ of MV-algebras $$\mathcal {M}(\varOmega _{(n)},\mathcal {S}_{(n)})$$ fulfils (3), and the Lyapunov condition (10) implies its counterpart (5) for this array, considered with the states $$\mathfrak {\bar{P}}_{(n)}^{\flat }$$. Therefore, Theorem 4 implies

\begin{aligned} \mathfrak {\bar{P}}_{(n) }^{\flat }\left( Z_{n}^{\flat }((-\infty ,t))\right) \rightarrow \varPhi (t)\text {,}\quad as\,n\rightarrow \infty \end{aligned}

for arbitrary $$t\in \mathbb {R}$$, where $$\{Z_{n}^{\flat }\} _{n\in \mathbb {N} }$$ is the sequence of $$\mathcal {V}(\varOmega _{(n)},\mathcal {S}_{(n)})$$-valued observables given by formula (13). The same reasoning as in the last part of the previous proof justifies (11). Analogously, we obtain (12). $$\square$$

The following theorem is an IV-probabilistic version of the Feller theorem.

### Theorem 11

(Feller) Let $$\{x_{nj}\}_{n\in \mathbb {N},j\in \mathbb {N}[k_{n}]}$$ be a TVI satisfying (7)–(8) and such that for each $$\varepsilon >0$$

\begin{aligned} \lim _{n\rightarrow \infty }\max _{1\le j\le k_{n}}\left( \mathfrak {P}_{\left( n\right) }^{\flat }\right) _{x_{nj}}\left( E\left( \varepsilon s_{n}^{\flat }\right) \right)&=0, \end{aligned}
(14)
\begin{aligned} \lim _{n\rightarrow \infty }\max _{1\le j\le k_{n}}\left( \mathfrak {P}_{\left( n\right) }^{\natural }\right) _{x_{nj}}\left( E\left( \varepsilon s_{n}^{\natural }\right) \right)&=0, \end{aligned}
(15)

where

\begin{aligned} E(a)=(-\infty ,-a)\cup (a,\infty ) \end{aligned}

for $$a>0$$. Let us additionally assume that for $$t\in \mathbb {R}$$ the convergences (11) and (12) hold. Then the Lindeberg condition (9) is fulfilled.

### Proof

Since for any $$n\in \mathbb {N}$$ and $$j\in \mathbb {N}[k_{n}]$$ the measures $$(\mathfrak {\bar{P}}_{(n) }^{\flat }) _{x_{nj}}$$ and $$(\mathfrak {P}_{(n) }^{\flat }) _{x_{nj}}$$ coincide, the equalities

\begin{aligned} \lim _{n\rightarrow \infty }\max _{1\le j\le k_{n}}\left( \mathfrak {\bar{P} }_{\left( n\right) }^{\flat }\right) _{x_{nj}}\left( E\left( \varepsilon s_{n}^{\flat }\right) \right)&=0, \end{aligned}
(16)
\begin{aligned} \lim _{n\rightarrow \infty }\max _{1\le j\le k_{n}}\left( \mathfrak {\bar{P} }_{\left( n\right) }^{\natural }\right) _{x_{nj}}\left( E\left( \varepsilon s_{n}^{\natural }\right) \right)&=0 \end{aligned}
(17)

follow from (14) and (15). The TA $$\{x_{nj}\}_{n\in \mathbb {N},j\in \mathbb {N}[k_{n}]}$$ of MV-algebras $$\mathcal {M}(\varOmega _{(n)},\mathcal {S}_{(n)})$$, considered with the states $$\mathfrak {\bar{P}}_{(n)}^{\flat }$$ and $$\mathfrak {\bar{P}}_{(n)}^{\natural }$$, satisfies (3), and therefore Theorem 5 implies that for each $$\varepsilon >0$$ $$\mathfrak {L}_{n}^{\flat }(\varepsilon ) \rightarrow 0$$ and $$\mathfrak {L}_{n}^{\natural }(\varepsilon ) \rightarrow 0$$ as $$n\rightarrow \infty$$. Thus, $$\mathfrak {L}_{n}(\varepsilon ) =\mathfrak {L}_{n}^{\flat }(\varepsilon ) +\mathfrak {L}_{n}^{\natural }(\varepsilon ) \rightarrow 0$$ as $$n\rightarrow \infty$$, which finishes the proof. $$\square$$

## 4 Applications

In this section, we present and analyse three examples of arrays and sequences of IV-observables with convergent scaled sums or row sums. The first example is preceded by the central limit theorem for independent, identically distributed IV-observables, together with its proof. In the second and third examples, the considered observables are not identically distributed, and therefore Theorem 12 cannot be applied to them. To prove the convergence in distribution of the considered scaled row sums to the standard normal distribution, we use Theorems 9 and 10.

### 4.1 Convergence of independent IV-observables with the same distributions

Let us consider a sequence of independent IV-observables with the same distribution.

### Theorem 12

Let $$(\mathcal {V}(\varOmega ,\mathcal {S}) ,\mathfrak {P})$$ be an IV-probability space. Let us assume that $$\{ x_{j}:\mathcal {B}(\mathbb {R}) \rightarrow \mathcal {V}(\varOmega ,\mathcal {S})\} _{j\in \mathbb {N} }$$ is a sequence of independent IV-observables with the same distribution and variances

\begin{aligned} (\mathbb {\sigma }^{\flat }) ^{2}=\mathbb {D}^{2,\flat }(x) ,\quad (\mathbb {\sigma }^{\natural }) ^{2}=\mathbb {D}^{2,\natural }(x) \end{aligned}

with respect to $$\mathfrak {P}^{\flat }$$ and $$\mathfrak {P}^{\natural }$$, respectively, where $$0<\mathbb {\sigma }^{\flat },\mathbb {\sigma }^{\natural }<\infty$$. Let $$e^{\flat }=\mathbb {E}^{\flat }(x_{1})$$ and $$e^{\natural }=\mathbb {E}^{\natural }(x_{1})$$. Then for $$t\in \mathbb {R}$$

\begin{aligned} \mathfrak {P}^{\flat }\left( \frac{\sum _{j=1}^{n}x_{j}-ne^{\flat } }{\mathbb {\sigma }^{\flat }\sqrt{n}}((-\infty ,t))\right) \rightarrow \varPhi (t)\text {,}\quad as\,n\rightarrow \infty ,\nonumber \\ \end{aligned}
(18)
\begin{aligned} \mathfrak {P}^{\natural }\left( \frac{\sum _{j=1}^{n}x_{j}-ne^{\natural } }{\mathbb {\sigma }^{\natural }\sqrt{n}}((-\infty ,t)) \right) \rightarrow \varPhi (t)\text {,}\quad as\,n\rightarrow \infty .\nonumber \\ \end{aligned}
(19)

### Proof

Let $$\{(\mathcal {V}(\varOmega _{(n)},\mathcal {S}_{(n)}), \mathfrak {P}_{(n) })\} _{n\in \mathbb {N} }$$ be the constant sequence of IV-probability spaces, where $$\varOmega _{(n)}=\varOmega$$ and $$\mathcal {S}_{(n)}=\mathcal {S}$$ for arbitrary $$n\in \mathbb {N}$$. For each $$n\in \mathbb {N}$$, let $$k_{n}=n$$ and $$x_{nj}=x_{j}$$, $$j\in \mathbb {N}[n]$$. Then $$\{x_{nj} \}_{n\in \mathbb {N},j\in \mathbb {N}[k_{n}]}$$ is a TVI with respect to the aforementioned constant sequence of IV-probability spaces. Moreover, $$\{x_{nj}\}_{n\in \mathbb {N},j\in \mathbb {N}[k_{n}]}$$ satisfies conditions (7) and (8), where

\begin{aligned} s_{n}^{2,\flat }=n(\mathbb {\sigma }^{\flat })^{2},\quad s_{n}^{2,\natural }=n(\mathbb {\sigma }^{\natural })^{2}. \end{aligned}

Furthermore, for each $$n\in \mathbb {N}$$ and $$j\in \mathbb {N}[n]$$

\begin{aligned} \mathfrak {l}_{n,\flat }^{x_{nj}}\left( \varepsilon ,s_{n}^{\flat }\right) =\mathbb {E}^{\flat }\left( (x_{1}-e^{\flat }) ^{2}I_{|x_{1} -e^{\flat }|>\varepsilon \mathbb {\sigma }^{\flat }\sqrt{n}}\right) ,\\ \mathfrak {l}_{n,\natural }^{x_{nj}}\left( \varepsilon ,s_{n}^{\natural }\right) =\mathbb {E}^{\natural }\left( (x_{1}-e^{\natural }) ^{2} I_{|x_{1}-e^{\natural }|>\varepsilon \mathbb {\sigma }^{\natural }\sqrt{n}}\right) . \end{aligned}

Clearly,

\begin{aligned}&\int _{ \mathbb {R} }( t-e^{\flat })^{2}I_{|t-e^{\flat }|>\varepsilon \mathbb {\sigma }^{\flat }\sqrt{n}}\mathfrak {P}_{x}^{\flat }(\mathrm{d}t) \\&\quad \le \int _{ \mathbb {R} }(t-e^{\flat })^{2}\mathfrak {P}_{x}^{\flat }(\mathrm{d}t) =(\mathbb {\sigma }^{\flat }) ^{2}<\infty . \end{aligned}

Therefore,

\begin{aligned}&\mathbb {E}^{\flat }\left( ( x_{1}-e^{\flat })^{2}I_{|x_{1} -e^{\flat }|>\varepsilon \mathbb {\sigma }^{\flat }\sqrt{n}}\right) \\&\quad =\int _{ \mathbb {R} }(t-e^{\flat })^{2}I_{|t-e^{\flat }|>\varepsilon \mathbb {\sigma }^{\flat }\sqrt{n}}\mathfrak {P}_{x}^{\flat }(\mathrm{d}t) \end{aligned}

by Lemma 2. Applying the Dominated Convergence Theorem, we obtain the convergence

\begin{aligned}&\mathbb {E}^{\flat }\left( (x_{1}-e^{\flat })^{2}I_{|x_{1} -e^{\flat }|>\varepsilon \mathbb {\sigma }^{\flat }\sqrt{n}}\right) \\&\quad =\int _{ \mathbb {R} }(t-e^{\flat })^{2}I_{|t-e^{\flat }|>\varepsilon \mathbb {\sigma }^{\flat }\sqrt{n}}\mathfrak {P}_{x}^{\flat }(\mathrm{d}t) \rightarrow 0\text {,}\quad as\,n\rightarrow \infty \text {.} \end{aligned}

Thus,

\begin{aligned} \mathfrak {L}_{n}^{\flat }(\varepsilon )&=\frac{1}{s_{n}^{2,\flat }}\sum \limits _{j=1}^{k_{n}}\mathfrak {l}_{n,\flat }^{x_{nj}}\left( \varepsilon ,s_{n}^{\flat }\right) \\&=\frac{1}{n(\mathbb {\sigma }^{\flat }) ^{2}}n\mathbb {E}^{\flat }\left( \left( x_{1}-e^{\flat }\right) ^{2}I_{|x_{1}-e^{\flat } |>\varepsilon \mathbb {\sigma }^{\flat }\sqrt{n}}\right) \\&=\frac{1}{(\mathbb {\sigma }^{\flat })^{2}}\mathbb {E}^{\flat }\left( \left( x_{1}-e^{\flat }\right) ^{2}I_{|x_{1}-e^{\flat } |>\varepsilon \mathbb {\sigma }^{\flat }\sqrt{n}}\right) \rightarrow 0, \end{aligned}

as $$n\rightarrow \infty$$ and similarly

\begin{aligned} \mathfrak {L}_{n}^{\natural }(\varepsilon ) =\frac{1}{(\mathbb {\sigma }^{\natural })^{2}}\mathbb {E}^{\natural }\left( \left( x_{1}-e^{\natural }\right) ^{2}I_{|x_{1}-e^{\natural }|>\varepsilon \mathbb {\sigma }^{\natural }\sqrt{n}}\right) \rightarrow 0, \end{aligned}

as $$n\rightarrow \infty$$. Consequently, $$\{x_{nj}\}_{n\in \mathbb {N},j\in \mathbb {N}[k_{n}]}$$ satisfies the Lindeberg condition (9). Therefore, Theorem 9 implies the convergences (18) and (19). $$\square$$

Let $$\mathcal {V}(\varOmega ,\mathcal {S})$$ be defined as follows:

\begin{aligned} \varOmega =\{\omega _{1},\omega _{2},\ldots ,\omega _{K}\} ,\quad \mathcal {S}=2^{\varOmega },\quad K\in \mathbb {N} . \end{aligned}

For each $$k\in \mathbb {N} \left[ K\right]$$, let $${\pi }_{k}=(\mathbf {\mu }_{k},\mathbf {\nu }_{k})\in \mathcal {V}(\varOmega ,\mathcal {S})$$ have the form:

\begin{aligned} \mathbf {\mu }_{k}(\omega _{i}) =\mathbf {\nu }_{k}(\omega _{i})=\left\{ \begin{array} [c]{lll} 1 &{}\quad \text {if} &{}\quad i=k\text {,}\\ 0 &{}\quad &{}\text {otherwise.} \end{array} \right. \end{aligned}

We assume that the IV-probability $$\mathfrak {P}:\mathcal {V}(\varOmega ,\mathcal {S})\rightarrow \mathcal {J}$$ has the form $$\mathfrak {P}=\mathfrak {P}_{\hat{P}}$$, where $$\hat{P}$$ is the probability defined on $$(\varOmega ,\mathcal {S})$$ by the conditions: $$\hat{P}( \{\omega _{k}\}) =p_{k}>0$$, $$k\in \mathbb {N} \left[ K\right]$$, $$\sum _{k=1}^{K}p_{k}=1$$.

The following example corresponds to the case considered in the de Moivre–Laplace CLT.

Let $$K=2$$ and let the observables $$x_{j}:\mathcal {B}(\mathbb {R})\rightarrow \mathcal {V}(\varOmega ,\mathcal {S})$$, $$j\in \mathbb {N}$$, be uniquely defined for arbitrary $$A\in \mathcal {B}(\mathbb {R})$$ by the conditions

\begin{aligned} x_{j}(A) =\left\{ \begin{array} [c]{lll} \mathbf {0}_{\varvec{{\Omega }}} &{}\quad \text {if} &{}\quad A\cap \{0,1\} =\emptyset ,\\ {\pi }_{1} &{}\quad \text {if} &{}\quad A\cap \{ 0,1\} =\{ 0\} ,\\ {\pi }_{2} &{}\quad \text {if} &{}\quad A\cap \{0,1\} =\{1\} . \end{array} \right. \end{aligned}

We assume that the IV-observables $$\{x_{j}\} _{j\in \mathbb {N} }$$ are independent. Then for $$j\in \mathbb {N}$$

\begin{aligned} e^{\flat }=\mathbb {E}^{\flat }(x_{j})=e^{\natural }=\mathbb {E} ^{\natural }(x_{j})&=p_{2} \end{aligned}

and

\begin{aligned} \mathbb {D} ^{2,\flat }( x_{j})=\mathbb {D}^{2,\natural }( x_{j})=p_{1}p_{2}>0,\;\mathbb {\sigma }^{\flat }&=\mathbb {\sigma }^{\natural }=\sqrt{p_{1}p_{2}}. \end{aligned}

Thus, by Theorem 12, for $$t\in \mathbb {R}$$

\begin{aligned} \mathfrak {P}^{\flat }\left( \frac{\sum _{j=1}^{n}x_{j}-np_{2}}{\sqrt{np_{1}p_{2}}}((-\infty ,t) ) \right) \rightarrow \varPhi (t)\text {,}\quad as\,n\rightarrow \infty ,\\ \mathfrak {P}^{\natural }\left( \frac{\sum _{j=1}^{n}x_{j}-np_{2}}{\sqrt{np_{1}p_{2}}}((-\infty ,t)) \right) \rightarrow \varPhi (t)\text {,}\quad as\,n\rightarrow \infty . \end{aligned}
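Since the distribution of each $$x_{j}$$ here is a two-point distribution on $$\{0,1\}$$, the scaled row sum above is a standardized binomial variable, and the de Moivre–Laplace convergence can be observed numerically. The following minimal Python sketch (stdlib only; the function names are illustrative assumptions) compares the standardized binomial CDF with the normal CDF $$\varPhi$$, computed via `math.erf`:

```python
import math

def phi(t):
    # Standard normal CDF, expressed through the error function.
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def standardized_binom_cdf(n, p2, t):
    # P((S_n - n*p2) / sqrt(n*p1*p2) < t) for S_n ~ Binomial(n, p2),
    # i.e. the left-hand side of the convergence above.
    p1 = 1.0 - p2
    s = math.sqrt(n * p1 * p2)
    total, prob = 0.0, p1 ** n              # prob = P(S_n = 0)
    for k in range(n + 1):
        if (k - n * p2) / s < t:
            total += prob
        prob *= (n - k) / (k + 1) * (p2 / p1)   # advance to P(S_n = k+1)
    return total

# The gap |CDF - Phi(t)| shrinks as n grows.
gap = abs(standardized_binom_cdf(1000, 0.3, 0.5) - phi(0.5))
```

For $$n=1000$$ and $$p_{2}=0.3$$ the gap is already below 0.01, in line with the theorem.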

### 4.2 Application of the Lindeberg CLT

Let $$(\mathcal {V}(\varOmega ,\mathcal {S}),\mathfrak {P})$$ be the IV-probability space defined in the previous subsection for $$K=3$$ and $$p_{1}=p_{3}$$. For each $$n\in \mathbb {N}$$, let $$x_{nj}=x_{j}$$ and $$k_{n}=n$$, where for arbitrary $$A\in \mathcal {B}(\mathbb {R})$$ the observable $$x_{j}:\mathcal {B}(\mathbb {R}) \rightarrow \mathcal {V}(\varOmega ,\mathcal {S})$$, $$j=1,2,3,\ldots$$, is uniquely defined by the following conditions:

\begin{aligned} x_{j}(A) =\left\{ \begin{array} [c]{lll} \mathbf {0}_{\varvec{{\Omega }}} &{}\quad \text {if} &{}\quad A\cap \varGamma _j =\emptyset ,\\ {\pi }_{1} &{}\quad \text {if} &{}\quad A\cap \varGamma _j =\left\{ -\sqrt{1+\frac{1}{2^{j}}}\right\} ,\\ {\pi }_{2} &{}\quad \text {if} &{}\quad A\cap \varGamma _j =\left\{ 0\right\} ,\\ {\pi }_{3} &{}\quad \text {if} &{}\quad A\cap \varGamma _j =\left\{ \sqrt{1+\frac{1}{2^{j}}}\right\} \end{array} \right. , \end{aligned}

where

\begin{aligned} \varGamma _j=\left\{ -\sqrt{1+\frac{1}{2^{j}}} ,0,\sqrt{1+\frac{1}{2^{j}}}\right\} .\end{aligned}

We assume that $$\{ x_{j}\} _{j\in \mathbb {N}}$$ are independent. Then for each $$j\in \mathbb {N}$$

\begin{aligned} \mathbb {E}_{(n) }^{\flat }(x_{j}) =\mathbb {E} _{(n) }^{\natural }(x_{j})&=0 \end{aligned}

and

\begin{aligned} \mathbb {D}_{(n) }^{2,\flat }(x_{j})&=\mathbb {D}_{(n) }^{2,\natural }(x_{j}) =2p_{1}\left( 1+\frac{1}{2^{j}}\right) . \end{aligned}

Thus,

\begin{aligned} s_{n}^{2,\flat }&=s_{n}^{2,\natural }=2p_{1}\left( n+1-\frac{1}{2^{n} }\right) \rightarrow \infty \text {,}\quad as\,n\rightarrow \infty . \end{aligned}
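The closed form of $$s_{n}^{2,\flat }$$ follows from the geometric sum $$\sum _{j=1}^{n}2^{-j}=1-2^{-n}$$. The following small Python sketch (function names are illustrative) checks the closed form against the direct sum of the variances:

```python
# Check that the variances 2*p1*(1 + 2**(-j)), j = 1..n, sum to the closed
# form 2*p1*(n + 1 - 2**(-n)) used in the text.

def s2_direct(n, p1):
    return sum(2 * p1 * (1 + 2.0 ** -j) for j in range(1, n + 1))

def s2_closed(n, p1):
    return 2 * p1 * (n + 1 - 2.0 ** -n)
```

For instance, `s2_direct(20, 0.25)` and `s2_closed(20, 0.25)` agree up to floating-point rounding.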

For a fixed $$\varepsilon >0$$ and sufficiently large n

\begin{aligned} \mathfrak {l}_{n,\flat }^{x_{nj}}\left( \varepsilon ,s_{n}^{\flat }\right)&=\mathfrak {l}_{n,\natural }^{x_{nj}}\left( \varepsilon ,s_{n}^{\natural }\right) \\&=\mathbb {E}_{(n)}^{\flat }\left( \left( x_{j}-\mathbb {E}_{(n) }^{\flat }(x_{j}) \right) ^{2}I_{|x_{j}-\mathbb {E}_{(n) }^{\flat }(x_{j}) |>\varepsilon s_{n}^{\flat }}\right) =0 \end{aligned}

as well as

\begin{aligned} \mathfrak {L}_{n}^{\flat }(\varepsilon )&=\frac{1}{s_{n}^{2,\flat }}\sum \limits _{j=1}^{k_{n}}\mathfrak {l}_{n,\flat }^{x_{nj}}\left( \varepsilon ,s_{n}^{\flat }\right) \\&=\mathfrak {L}_{n}^{\natural }(\varepsilon ) =\frac{1}{s_{n}^{2,\natural }}\sum \limits _{j=1}^{k_{n}}\mathfrak {l}_{n,\natural }^{x_{nj}}\left( \varepsilon ,s_{n}^{\natural }\right) =0, \end{aligned}

since the supports of the probability measures $$(\mathfrak {P}_{(n)}^{\flat }) _{x_{j}}$$ and $$(\mathfrak {P}_{(n) }^{\natural })_{x_{j}}$$ are bounded. Consequently,

\begin{aligned} \mathfrak {L}_{n}(\varepsilon ) =2\mathfrak {L}_{n}^{\flat }(\varepsilon ) \rightarrow 0\text {,\quad as }n\rightarrow \infty . \end{aligned}

Therefore, the convergences

\begin{aligned}&\mathfrak {P}_{\left( n\right) }^{\flat }\left( \frac{\sum _{j=1}^{k_{n} }x_{nj}}{s_{n}^{\flat }}( (-\infty ,t)) \right) \\&\quad =\mathfrak {P}^{\flat }\left( \frac{\sum _{j=1}^{n}x_{j}}{s_{n}^{\flat }}((-\infty ,t)) \right) \rightarrow \varPhi (t)\text {,}\quad as\,n\rightarrow \infty ,\\&\mathfrak {P}_{(n) }^{\natural }\left( \frac{\sum _{j=1}^{k_{n} }x_{nj}}{s_{n}^{\natural }}(( -\infty ,t)) \right) \\&\quad =\mathfrak {P}^{\natural }\left( \frac{\sum _{j=1}^{n}x_{j}}{s_{n}^{\natural } }((-\infty ,t)) \right) \rightarrow \varPhi (t)\text {,}\quad as\,n\rightarrow \infty \end{aligned}

for each $$t\in \mathbb {R}$$ follow from Theorem 9.

### 4.3 Application of the Lyapunov CLT

We consider the space $$\mathcal {V}\left( \varOmega ,\mathcal {S}\right)$$ specified in the previous example, with $$K=3$$.

For arbitrary $$n\in \mathbb {N}$$, we denote by $$\hat{P}_{n}$$ the probability defined on $$(\varOmega ,\mathcal {S})$$ by the equalities:

\begin{aligned} \hat{P}_{n}(\{ \omega _{1}\}) =\hat{P}_{n}(\{ \omega _{3}\} ) =\frac{1-4^{-n}}{2},\quad \hat{P}_{n}(\{ \omega _{2}\}) = 4^{-n} \end{aligned}

and by $$\mathfrak {P}_{(n)}$$ the IV-probability $$\mathfrak {P}_{(n) }:\mathcal {V}(\varOmega ,\mathcal {S}) \rightarrow \mathcal {J}$$ of the form $$\mathfrak {P}_{(n) }=\mathfrak {P}_{\hat{P}_{n}}$$.

For each $$n\in \mathbb {N}$$, we assume that $$k_{n}=n$$ and for $$A\in \mathcal {B}(\mathbb {R})$$ the IV-observable $$x_{nj}:\mathcal {B}(\mathbb {R})\rightarrow \mathcal {V}(\varOmega ,\mathcal {S})$$, $$j\in \mathbb {N} [n]$$, is defined by the formula

\begin{aligned} x_{nj}(A) =\left\{ \begin{array} [c]{lll} \mathbf {0}_{\varvec{{\Omega }}} &{}\quad \text {if} &{}\quad A\cap \{ -1,0,1\} =\emptyset ,\\ {\pi }_{1} &{}\quad \text {if} &{}\quad A\cap \{ -1,0,1\}=\{-1\} ,\\ {\pi }_{2} &{}\quad \text {if} &{}\quad A\cap \{-1,0,1\}=\{0\} ,\\ {\pi }_{3} &{}\quad \text {if} &{}\quad A\cap \{-1,0,1\}=\{1\} . \end{array} \right. \end{aligned}

We additionally assume that $$\{ x_{nj}\} _{j\in \mathbb {N}[n]}$$ are independent for each positive integer n.

Fix $$n\in \mathbb {N}$$ and $$\delta >0$$. For arbitrary $$j\in \mathbb {N}[n]$$

\begin{aligned} \mathbb {E}_{(n) }^{\flat }( x_{nj})&=\mathbb {E}_{(n) }^{\natural }(x_{nj}) =0,\\ \mathbb {E}_{(n) }^{\flat }\left( |x_{nj}|^{2+\delta }\right)&=\mathbb {E}_{(n)}^{\natural }\left( |x_{nj}|^{2+\delta }\right) =1-4^{-n},\\ \mathbb {D}_{(n)}^{2,\flat }(x_{nj})&=\mathbb {D}_{(n) }^{2,\natural }(x_{nj}) =\mathbb {E}_{(n) }^{\flat }((x_{nj})^{2})\\&=\mathbb {E}_{(n) }^{\natural }((x_{nj}) ^{2}) =1-4^{-n}. \end{aligned}

Therefore,

\begin{aligned} s_{n}^{\flat }=s_{n}^{\natural }=\sqrt{n( 1-4^{-n}) } \end{aligned}

and

\begin{aligned}&\frac{\sum \nolimits _{j=1}^{k_{n}}\mathbb {E}_{(n) }^{\flat }\left( |x_{nj}-\mathbb {E}_{(n) }^{\flat }(x_{nj})|^{2+\delta }\right) }{s_{n}^{2+\delta ,\flat }}\\&\quad +\frac{\sum \nolimits _{j=1}^{k_{n}}\mathbb {E}_{(n) }^{\natural }\left( |x_{nj}-\mathbb {E}_{(n) }^{\natural }(x_{nj}) |^{2+\delta }\right) }{s_{n}^{2+\delta ,\natural }}\\&\quad =\frac{2n\left( 1-4^{-n}\right) }{n^{1+\frac{\delta }{2}}( 1-4^{-n}) ^{1+\frac{\delta }{2}}}=\frac{2}{n^{\frac{\delta }{2}}(1-4^{-n}) ^{\frac{\delta }{2}}}\rightarrow 0,\\&\quad \text {as }n\rightarrow \infty . \end{aligned}
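The final ratio $$2/\bigl (n^{\delta /2}(1-4^{-n})^{\delta /2}\bigr )$$ indeed tends to 0 as $$n\rightarrow \infty$$. A small numerical sketch of this quantity (the function name is a hypothetical label, not notation from the paper):

```python
def lyapunov_ratio(n, delta):
    # 2 / (n**(delta/2) * (1 - 4**(-n))**(delta/2)), the bound computed above.
    return 2.0 / (n ** (delta / 2) * (1.0 - 4.0 ** -n) ** (delta / 2))

# The ratio decreases toward 0 as n grows; for delta = 1 it behaves like
# 2 / sqrt(n), since 1 - 4**(-n) is essentially 1 for large n.
```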

Therefore, by Theorem 10,

\begin{aligned} \mathfrak {P}_{(n) }^{\flat }\left( \frac{\sum _{j=1}^{k_{n}}x_{nj}}{s_{n}^{\flat }}((-\infty ,t)) \right)&\rightarrow \varPhi (t)\text {,}\quad as\,n\rightarrow \infty ,\\ \mathfrak {P}_{(n) }^{\natural }\left( \frac{\sum _{j=1}^{k_{n}}x_{nj}}{s_{n}^{\natural }}((-\infty ,t)) \right)&\rightarrow \varPhi (t)\text {,}\quad as\,n\rightarrow \infty \end{aligned}

for $$t\in \mathbb {R}$$.

## 5 Conclusions

In this paper, we proved the Lindeberg CLT, the Lyapunov CLT, and the Feller theorem for IV-events. The results obtained in the IV-probabilistic case correspond to the classical limit theorems for independent but not necessarily identically distributed random variables. We also presented examples of applications of the aforementioned limit theorems to scaled sums of IV-observables. Our future work will concern further development of the probability theory for IV-events. In our opinion, the theorems proved in this paper can become important tools for statistical applications.