1 Preliminaries and background

Since the beginning of the twenty-first century, the study of linearity within nonlinear settings (nowadays known as lineability theory) has become a trend within many areas of mathematical research, such as real and complex analysis [4, 8, 9, 24, 37], linear dynamics [13, 22, 23], set theory [27], operator theory [14, 37], measure theory [9], and algebraic geometry [12].

Let us recall that if V is a vector space and \(\kappa \) is a (finite or infinite) cardinal number, a subset \(A\subset V\) is said to be \(\kappa \)-lineable if there exists a vector subspace \(M\subset V\) such that the dimension of M is \(\kappa \) and \(M\backslash \{0\}\subset A.\) In addition, if A is contained in some commutative algebra, then A is called strongly \(\kappa \)-algebrable if there exists a \(\kappa \)-generated free algebra M with \(M\backslash \{0\}\subset A;\) that is, there exists a subset B of cardinality \(\kappa \) with the following property: for any positive integer m,  any nonzero polynomial P in m variables without constant term and any distinct elements \(x_1,\ldots ,x_m\in B,\) we have \(P(x_1,\ldots ,x_m)\ne 0\) and \(P(x_1,\ldots ,x_m)\in A.\)

Recently, the notion of convex lineability, which will also be explored in this work, was introduced in [11]. As usual, conv(B) will denote the convex hull of a subset B of a vector space V,  that is,

$$\begin{aligned} {\textrm{conv}}(B) = \left\{ \sum _{i=1}^{m}a_ix_i : m\in {\mathbb {N}}, x_i\in B, a_i\in [0,1], \sum _{i=1}^{m}a_i=1\right\} . \end{aligned}$$

We say that a subset A of a vector space V is \(\kappa \)-convex lineable if there exists a linearly independent subset \(B\subset V\) such that B has cardinality \(\kappa \) and conv\((B)\subset A.\)

The notion of lineability dates back to 2004 (see [4]). The term was originally coined by V. I. Gurariy, and it was only recently adopted by the American Mathematical Society in the 2020 Mathematics Subject Classification under the references 15A03 and 46B87. The original motivation for this concept probably came from the famous example of Weierstrass (also known as Weierstrass’ monster, a continuous nowhere differentiable function on \({\mathbb {R}},\) see [39]). In 1966, V. I. Gurariy showed that the set of Weierstrass’ monsters contains (except for the 0 function) an infinite-dimensional vector space (see [28, 29]). The current state of the art of this area of research can be consulted, for instance, in [1, 3, 5, 10, 12, 13, 18, 19, 20, 25, 33].

In this work, we link the topic of lineability with probability theory. This paper’s goal is to continue the ongoing research started in [21, 26]. On this occasion, we shall analyze the many different types of convergence of sequences of random variables from the perspective of lineability theory. A recent article that also studies this topic is [7].

Throughout the paper, \(\left( \Omega ,{\mathcal {A}},\mu \right) \) will always denote a probability space. Any measurable function defined on it will be called a random variable. Two random variables X and Y on \(\left( \Omega ,{\mathcal {A}},\mu \right) \) will be said to be equal if \(X=Y\) except on a set of probability zero. Given a random variable X and a real number \(\varepsilon ,\) we will frequently write \(\mu (X>\varepsilon )\) instead of \(\mu \left( \left\{ \omega \in \Omega :X(\omega )>\varepsilon \right\} \right) \) (and similarly for other inequalities). We define below some basic kinds of convergence considered in probability theory that will be studied in the article.

  (1)

    A sequence \(\left\{ X_n\right\} _{n=1}^{\infty }\) of random variables converges almost surely to another random variable X if there exists a subset \(B\in {\mathcal {A}}\) such that \(\mu (B)=1\) and \(\lim _{n\rightarrow \infty } X_n(\omega )=X(\omega )\) for every \(\omega \in B.\)

  (2)

    The sequence \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges almost surely uniformly to X if there exists a set \(B\in {\mathcal {A}}\) such that \(\mu (B)=1\) and \(\lim _{n\rightarrow \infty }X_n(\omega )=X(\omega )\) uniformly on B.

  (3)

    It is said that \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges in probability to X if

    $$\begin{aligned} \lim _{n\rightarrow \infty }\mu \left( |X_n-X|>\varepsilon \right) =0 \end{aligned}$$

    for every \(\varepsilon >0.\)

  (4)

    The sequence \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges completely to X if

    $$\begin{aligned} \sum _{n=1}^{\infty }\mu \left( |X_n-X|>\varepsilon \right) <\infty \end{aligned}$$

    for every \(\varepsilon >0.\)

  (5)

    Let us suppose that X and \(X_n,\) for all \(n\in {\mathbb {N}},\) belong to the space \(L^p\left( \Omega ,{\mathcal {A}},\mu \right) \) for some \(p>0.\) The sequence \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges to X in \(L^p\)-sense or in p-mean if

    $$\begin{aligned} \lim _{n\rightarrow \infty } E\left( |X_n-X|^p\right) =0, \end{aligned}$$

    where E(X) denotes the expectation of the random variable X.

  (6)

    Let F and \(F_n\) be the distribution functions of X and \(X_n,\) respectively. The sequence \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges in distribution to X if \(\lim _{n\rightarrow \infty }F_n(x)=F(x)\) for all \(x\in {\mathbb {R}}\) at which F is continuous. When this property holds, it is also said that \(\left\{ F_n\right\} _{n=1}^{\infty }\) converges weakly to F.

  (7)

    Let us suppose that F and \(F_n,\) for all \(n\in {\mathbb {N}},\) are absolutely continuous distribution functions with densities f and \(f_n,\) respectively. The sequence of distribution functions \(\left\{ F_n\right\} _{n=1}^{\infty }\) converges in variation to F if

    $$\begin{aligned} \lim _{n\rightarrow \infty }\int _{-\infty }^{+\infty }|f_n(x)-f(x)|dx = 0. \end{aligned}$$

The following diagram contains the different implications held by the modes of convergence defined above.

[Figure: diagram of the implications among the modes of convergence defined above.]
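To make these definitions concrete, the following short Python sketch (ours; it is not taken from the cited references) works on [0, 1] with Lebesgue measure and the sequence \(X_n=n\cdot {\textbf{1}}_{(0,1/n)}\): this sequence converges to 0 almost surely and in probability, yet \(E(|X_n|)=1\) for every n,  so it does not converge in \(L^1\)-sense.

```python
import numpy as np

rng = np.random.default_rng(0)
omega = rng.uniform(0.0, 1.0, size=200_000)  # sample points of ([0,1], Lebesgue)

def X(n, w):
    """X_n = n * indicator of (0, 1/n): a spike that narrows but grows."""
    return n * ((w > 0.0) & (w < 1.0 / n))

eps = 0.5
for n in (10, 100, 1000):
    Xn = X(n, omega)
    print(n,
          np.mean(np.abs(Xn) > eps),  # ~ lambda(|X_n| > eps) = 1/n -> 0
          np.mean(np.abs(Xn)))        # ~ E|X_n| = n * (1/n) = 1 for all n

# Almost sure convergence also holds: X_n(w) = 0 as soon as n > 1/w.
# Complete convergence fails here, since sum(1/n) diverges.
```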

2 Algebrability of sets of sequences of random variables

2.1 Almost sure uniform convergence and complete convergence

Let us suppose that \(\left\{ X_n\right\} _{n=1}^{\infty }\) is a sequence of random variables on a probability space \(\left( \Omega ,{\mathcal {A}},\mu \right) \) and that \(X_n\rightarrow 0\) uniformly on a subset B with \(\mu (B)=1.\) Given \(\varepsilon >0,\) there exists \(n_0\in {\mathbb {N}}\) such that \(|X_n(\omega )|\le \varepsilon \) for every \(n>n_0\) and every \(\omega \in B.\) As a consequence,

$$\begin{aligned} \sum _{n=1}^{\infty }\mu \left( |X_n|>\varepsilon \right) \le \sum _{n=1}^{n_0}\mu \left( |X_n|>\varepsilon \right) +\sum _{n=n_0+1}^{\infty }\mu \left( \Omega \backslash B\right) < \infty . \end{aligned}$$

That is, \(\left\{ X_n\right\} _{n=1}^{\infty }\) also converges completely to zero. However, the converse is not true. Based on the example given in [38, Section 14.15], we will prove the strong \({\mathfrak {c}}\)-algebrability of the collection of sequences that converge completely but do not converge almost surely uniformly. We will consider the following product of sequences of random variables:

$$\begin{aligned} \left\{ X_n\right\} _{n=1}^{\infty } \cdot \left\{ Y_n\right\} _{n=1}^{\infty } = \left\{ X_n\cdot Y_n\right\} _{n=1}^{\infty }. \end{aligned}$$

The symbols \({\mathcal {B}}\) and \(\lambda \) will denote the Borel \(\sigma \)-algebra in [0, 1] and the Lebesgue measure on [0, 1],  respectively, and \({\mathfrak {c}}\) will denote the cardinality of the continuum.

Theorem 1

Let \({\mathcal {S}}\) be the set of all sequences \(\left\{ X_n\right\} _{n=1}^{\infty }\) of random variables on \(\left( [0,1],{\mathcal {B}},\lambda \right) \) such that \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges to 0 completely but does not converge almost surely uniformly. Then \({\mathcal {S}}\) is strongly \({\mathfrak {c}}\)-algebrable.

Proof

The proof is based on a general algebrability criterion that appears in [6, Proposition 7]. For each \(b>0\) and each \(n\in {\mathbb {N}},\) the random variable \(X_{b,n}\) is defined as follows:

$$\begin{aligned} X_{b,n}(x)={\left\{ \begin{array}{ll} e^{b/x} &{} \text {if } x\in \left( 0,\frac{1}{n^2}\right] , \\ 0 &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$

Let \({\mathcal {H}}\subset (0,+\infty )\) be a Hamel basis for \({\mathbb {R}}\) over the field \({\mathbb {Q}}.\) It is known that the cardinality of such a basis is \({\mathfrak {c}}\) (see [35, Theorem 4.2.3]). We will prove that if \(h_1,\ldots ,h_m\) are distinct elements of \({\mathcal {H}}\) and P is a non-zero polynomial in m variables without constant term, then the sequence

$$\begin{aligned} \left\{ P\left( X_{h_1,n},\ldots ,X_{h_m,n}\right) \right\} _{n=1}^{\infty } \end{aligned}$$

is not zero and belongs to \({\mathcal {S}}.\) The polynomial P can be written as

$$\begin{aligned} P(x_1,\ldots ,x_m) = \sum _{i=1}^{k}a_i x_1^{s_{i1}}\cdots x_m^{s_{im}}, \end{aligned}$$

where \(k\in {\mathbb {N}},\) \(a_i\in {\mathbb {R}}\backslash \{0\},\) \(s_{ij}\in {\mathbb {N}}\cup \{0\}\) for \(1 \le i \le k,\) \(1 \le j \le m,\) and \(s_{i1}+\cdots +s_{im}>0.\) We can assume that all the vectors of exponents \((s_{i1},\ldots ,s_{im})\) are different. Let \(h_1,\ldots ,h_m\) be distinct elements of \({\mathcal {H}}\) and note that

$$\begin{aligned} X_{h_1,n}(x)^{s_{i1}}\cdots X_{h_m,n}(x)^{s_{im}} = {\left\{ \begin{array}{ll} e^\frac{s_{i1}h_1+\cdots +s_{im}h_m}{x} &{} \text {if } x\in \left( 0,\frac{1}{n^2}\right] , \\ 0 &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$

Hence,

$$\begin{aligned} X_{h_1,n}(x)^{s_{i1}}\cdots X_{h_m,n}(x)^{s_{im}} = X_{s_{i1}h_1+\cdots +s_{im}h_m,\,n}(x). \end{aligned}$$

Setting \(b_i=s_{i1}h_1+\cdots +s_{im}h_m\) for each \(i\in \{1,\ldots ,k\},\) we have

$$\begin{aligned} P\left( X_{h_1,n},\ldots ,X_{h_m,n}\right) = \sum _{i=1}^{k}a_iX_{b_i,n}. \end{aligned}$$

Since \({\mathcal {H}}\) is a basis for \({\mathbb {R}}\) over \({\mathbb {Q}},\) it follows that \(b_1,\ldots ,b_k\) are distinct. Moreover, each \(b_i\) is strictly positive because \(h_1,\ldots ,h_m\) are also positive and \(s_{i1}+\cdots +s_{im}>0.\) Without loss of generality, we can suppose that \(0<b_1<\cdots <b_k.\)

If \(P\left( X_{h_1,n},\ldots ,X_{h_m,n}\right) \) were zero at every point of [0, 1],  then \(\sum _{i=1}^{k}a_{i}e^{b_i/x}\) would be zero for every \(x\in \left( 0,1/n^2\right] ,\) which is not possible because

$$\begin{aligned} \lim _{x\rightarrow 0^+}\frac{\sum _{i=1}^{k}a_ie^{b_i/x}}{e^{b_k/x}} = a_k + \lim _{x\rightarrow 0^+} \sum _{i=1}^{k-1}\frac{a_i}{e^{(b_k-b_i)/x}} = a_k \ne 0. \end{aligned}$$

Furthermore, if \(\varepsilon >0,\) then

$$\begin{aligned} \sum _{n=1}^{\infty }\lambda \left( \left| \sum _{i=1}^{k}a_iX_{b_i,n}\right| >\varepsilon \right) \le \sum _{n=1}^{\infty }\lambda \left( \left( 0,1/n^2\right] \right) = \sum _{n=1}^{\infty }\frac{1}{n^2} < \infty . \end{aligned}$$

That is, the sequence \(\left\{ \sum _{i=1}^{k}a_iX_{b_i,n}\right\} _{n=1}^{\infty }\) converges completely to 0.

Let us suppose that \(\left\{ \sum _{i=1}^{k}a_iX_{b_i,n}\right\} _{n=1}^{\infty }\) also converges almost surely uniformly to 0. Then there is some measurable set B with \(\lambda (B)=1\) and there is \(n_0\in {\mathbb {N}}\) such that \(\left| \sum _{i=1}^{k}a_iX_{b_i,n}(x)\right| <1\) for all \(n\ge n_0\) and all \(x\in B.\) Moreover, the fact that \(\lambda (B)=1\) implies that 0 is an accumulation point of B. Hence, fixing \(n\ge n_0\) and setting \(A= B\cap \left( 0,1/n^2\right] ,\) we have

$$\begin{aligned} 0&= \lim _{x\in A, \, x\rightarrow 0^+}\frac{1}{e^{b_k/x}} \ge \lim _{x\in A, \, x\rightarrow 0^+}\frac{\left| \sum _{i=1}^{k}a_iX_{b_i,n}(x)\right| }{e^{b_k/x}} \\&= \lim _{x\in A, \, x\rightarrow 0^+}\frac{\left| \sum _{i=1}^{k}a_ie^{b_i/x}\right| }{e^{b_k/x}} = |a_k| > 0. \end{aligned}$$

This contradiction shows that \(\left\{ \sum _{i=1}^{k}a_iX_{b_i,n}\right\} _{n=1}^{\infty }\) does not converge almost surely uniformly to 0. That is, the sequence \(\left\{ P\left( X_{h_1,n},\ldots ,X_{h_m,n}\right) \right\} _{n=1}^{\infty } =\left\{ \sum _{i=1}^{k}a_iX_{b_i,n}\right\} _{n=1}^{\infty }\) belongs to \({\mathcal {S}}.\) \(\square \)
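For readers who wish to experiment, here is a small numerical sanity check of this construction (a sketch of ours, not part of the proof), for a single generator \(X_{b,n}\) and a fixed \(\varepsilon >1\): since \(\{X_{b,n}>\varepsilon \}=(0,\min (1/n^2,\,b/\ln \varepsilon )],\) the series of measures is dominated by \(\sum 1/n^2,\) while \(e^{b/x}\) blows up near 0 on every set of full measure.

```python
import math

b, eps = 1.0, math.e  # any b > 0 and eps > 1 behave the same way

def measure_exceed(n):
    """lambda(X_{b,n} > eps): X_{b,n}(x) = exp(b/x) on (0, 1/n^2], 0 elsewhere,
    and exp(b/x) > eps exactly when x < b / ln(eps)."""
    return min(1.0 / n**2, b / math.log(eps))

print(sum(measure_exceed(n) for n in range(1, 100_000)))  # bounded partial sums:
                                                          # complete convergence

# No almost sure uniform convergence: near x = 0 the values blow up like
# exp(b/x), and 0 is an accumulation point of every set of full measure.
for x in (0.5, 0.1, 0.02):
    print(x, math.exp(b / x))
```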

2.2 Complete convergence and almost sure convergence

As a consequence of the Borel–Cantelli lemma, if a sequence of random variables converges completely to 0, then it also converges almost surely (see [34, Theorem 1.27 and Proposition 5.7]). However, the converse does not hold in general (see [38, Section 14.4]). Let us mention that both convergence modes are equivalent if the random variables \(\left\{ X_n\right\} _{n=1}^{\infty }\) are independent (see [31]). Based on these facts, we will study the set of sequences of random variables that converge to zero almost surely but not completely.

Theorem 2

Let \({\mathcal {S}}\) be the set of all sequences \(\left\{ X_n\right\} _{n=1}^{\infty }\) of random variables on \(\left( [0,1],{\mathcal {B}},\lambda \right) \) such that \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges to 0 almost surely but does not converge completely. Then \({\mathcal {S}}\) is strongly \({\mathfrak {c}}\)-algebrable.

Proof

For each \(b\in {\mathbb {R}}\) and \(n\in {\mathbb {N}},\) the random variable \(X_{b,n}\) is defined as follows:

$$\begin{aligned} X_{b,n}(x)={\left\{ \begin{array}{ll} e^{b/x} &{} \text {if } x\in \left( 0,\frac{1}{n}\right] , \\ 0 &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$

Let \({\mathcal {H}}\subset (0,+\infty )\) be a Hamel basis for \({\mathbb {R}}\) over the field \({\mathbb {Q}},\) whose cardinality is \({\mathfrak {c}}\) (see [35, Theorem 4.2.3]). Let \(h_1,\ldots ,h_m\) be distinct elements of \({\mathcal {H}}\) and let P be a non-zero polynomial in m variables without constant term. As was shown in the proof of Theorem 1, there are positive numbers \(b_1<\cdots <b_k\) and non-zero real numbers \(a_1,\ldots ,a_k\) such that

$$\begin{aligned} P\left( X_{h_1,n},\ldots ,X_{h_m,n}\right) = \sum _{i=1}^{k}a_iX_{b_i,n}. \end{aligned}$$

Following the same technique employed in the proof of Theorem 1, it can be shown that \(\left\{ \sum _{i=1}^{k}a_iX_{b_i,n}\right\} _{n=1}^{\infty }\) is not the zero sequence. Furthermore, note that \(\left\{ \sum _{i=1}^{k}a_{i}X_{b_i,n}\right\} _{n=1}^{\infty }\) converges to 0 almost surely (in fact, this happens at every point of [0, 1]).

Finally, we will prove that the sequence \(\left\{ \sum _{i=1}^{k}a_{i}X_{b_i,n}\right\} _{n=1}^{\infty }\) does not converge completely to zero. Let \(\varepsilon >0.\) Since

$$\begin{aligned} \lim _{x\rightarrow 0^+}\left| \sum _{i=1}^{k}a_ie^{b_i/x}\right| = \lim _{x\rightarrow 0^+} \left| a_ke^{b_k/x}\right| \cdot \left| 1+\sum _{i=1}^{k-1}\frac{a_i}{a_ke^{(b_k-b_i)/x}}\right| = +\infty , \end{aligned}$$

there exists \(n_0\in {\mathbb {N}}\) such that \(\left| \sum _{i=1}^{k}a_ie^{b_i/x}\right| >\varepsilon \) for every \(x\in (0,1/n_0].\) Hence, if \(n\ge n_0\) and \(x\in (0,1/n],\) then

$$\begin{aligned} \left| \sum _{i=1}^{k}a_i X_{b_i,n}(x)\right| =\left| \sum _{i=1}^{k}a_i e^{b_i/x}\right| >\varepsilon \end{aligned}$$

and so

$$\begin{aligned} \sum _{n=1}^{\infty }\lambda \left( \left| \sum _{i=1}^{k}a_i X_{b_i,n}\right| >\varepsilon \right) \ge \sum _{n=n_0}^{\infty }\lambda \left( \left( 0, \frac{1}{n}\right] \right) = \sum _{n=n_0}^{\infty }\frac{1}{n} = +\infty . \end{aligned}$$

This proves that the sequence

$$\begin{aligned} \left\{ P\left( X_{h_1,n},\ldots ,X_{h_m,n}\right) \right\} _{n=1}^{\infty } = \left\{ \sum _{i=1}^{k}a_{i}X_{b_i,n}\right\} _{n=1}^{\infty } \end{aligned}$$

belongs to \({\mathcal {S}}\) and concludes the proof of the strong \({\mathfrak {c}}\)-algebrability of this set. \(\square \)

Remark 3

Let us recall that a probability space \((\Omega ,{\mathcal {A}},\mu )\) is said to be non-atomic if for every set \(B\in {\mathcal {A}}\) with \(\mu (B)>0\) there exists another set \(A\in {\mathcal {A}}\) such that \(A\subset B\) and \(0<\mu (A)<\mu (B).\) By [16, Theorem 9.2.2], Theorems 1 and 2 remain valid if the space \(\left( [0,1],{\mathcal {B}},\lambda \right) \) is replaced by \(\left( \Omega ,{\mathcal {A}},\mu \right) ,\) where \(\Omega \) is a complete separable metric space, \({\mathcal {A}}\) is the Borel \(\sigma \)-algebra in \(\Omega ,\) and \(\mu \) is a non-atomic probability measure.

Remark 4

It is well known that every sequence of random variables that converges almost surely also converges in probability (see [34, Proposition 5.10]). The \({\mathfrak {c}}\)-lineability of the set of sequences of Lebesgue measurable functions on [0, 1] that converge in measure (that is, in probability) but not almost surely was proved in [2, Theorem 7.1], while the strong \({\mathfrak {c}}\)-algebrability of that family of functions was obtained in [17, Theorem 2.2]. In the case of a non-atomic probability space, the strong \({\mathfrak {c}}\)-algebrability of the collection of sequences of random variables that converge in probability but not almost surely was proved in [7, Theorem 11].

3 Lineability of sets of sequences of random variables

Before presenting the next theorems about sequences of random variables, we need to state the following result, whose proof can be seen in [32, Lemma 9.21] and which will be applied in Theorems 6 and 15:

Theorem 5

There exists a family \(\Delta \) of subsets of \({\mathbb {N}}\) with the following properties:

  • Each \(\sigma \in \Delta \) is infinite.

  • The elements of \(\Delta \) are almost disjoint. That is,  if \(\sigma \) and \(\sigma '\) are two different elements of \(\Delta ,\) then \(\sigma \cap \sigma '\) is finite or empty.

  • The cardinality of \(\Delta \) is \({\mathfrak {c}}.\)
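A classical way to build such a family (a sketch of a standard construction; the proof in [32, Lemma 9.21] may proceed differently) attaches to each real \(x\in (0,1)\) the set of integer codes of the finite prefixes of its binary expansion: distinct reals share only finitely many prefixes, and there are \({\mathfrak {c}}\) many reals. In Python:

```python
from fractions import Fraction

def sigma(x, depth):
    """First `depth` elements of the almost disjoint set attached to x in (0,1).

    The k-th element codes the first k binary digits of x as the integer
    whose binary representation is '1' followed by those digits; distinct
    reals agree on only finitely many prefixes, so the sets are almost disjoint.
    """
    out, code = [], 1
    for _ in range(depth):
        x *= 2
        bit = int(x)              # next binary digit of x
        x -= bit
        code = 2 * code + bit     # append the digit to the prefix code
        out.append(code)
    return out

s1, s2 = sigma(Fraction(1, 3), 30), sigma(Fraction(2, 5), 30)
print(len(set(s1) & set(s2)))  # finite overlap: only the shared prefixes
```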

Next we prove a general result about lineability of sets of sequences.

Theorem 6

Let V be a vector space over \({\mathbb {R}}\) and let \(c_1\) and \(c_2\) be two types of convergence defined on V such that both \(c_1\) and \(c_2\) have the following properties:

  (a)

    If \(\left\{ X_n\right\} _{n=1}^{\infty }\subset V\) and \(X_n\rightarrow 0,\) then also \(\lambda X_n\rightarrow 0\) for all \(\lambda \in {\mathbb {R}}.\)

  (b)

    The sequence in V constantly equal to 0 converges to 0.

  (c)

    If \(\left\{ X_n\right\} _{n=1}^{\infty }\) is a sequence in V that can be decomposed into a finite number of subsequences converging to 0,  then \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges to 0.

  (d)

    If \(\left\{ X_n\right\} _{n=1}^{\infty }\) is a sequence that converges to 0,  then all subsequences of \(\left\{ X_n\right\} _{n=1}^{\infty }\) also converge to 0.

  (e)

    If a finite number of terms from a sequence is deleted,  the convergence or lack of convergence does not change.

Let

$$\begin{aligned} {\mathcal {S}} = \left\{ \left\{ X_n\right\} _{n=1}^{\infty }\subset V : X_n \overset{c_1}{\longrightarrow }0 \text { and } X_n \overset{c_2}{\not \longrightarrow }0 \right\} . \end{aligned}$$

If \({\mathcal {S}}\ne \emptyset ,\) then \({\mathcal {S}}\) is \({\mathfrak {c}}\)-lineable.

Proof

Let \(\Delta \) be a family of \({\mathfrak {c}}\) almost disjoint infinite subsets of \({\mathbb {N}}\) (see Theorem 5). If \(\sigma \in \Delta \) and \(j\in {\mathbb {N}},\) \(\sigma (j)\) denotes the jth element of the set \(\sigma .\) Let us suppose that a sequence \(\left\{ X_n\right\} _{n=1}^{\infty }\) belongs to \({\mathcal {S}}.\) Given \(\sigma \in \Delta ,\) we define \(\left\{ X_{\sigma ,n}\right\} _{n=1}^{\infty }\) as follows:

$$\begin{aligned} X_{\sigma ,n} = {\left\{ \begin{array}{ll} X_j &{} \text {if } n\in \sigma \text { and } \sigma (j)=n, \\ 0 &{} \text {if } n\notin \sigma . \end{array}\right. } \end{aligned}$$

The set \({\mathcal {M}}=\left\{ \left\{ X_{\sigma ,n}\right\} _{n=1}^{\infty }:\sigma \in \Delta \right\} \) has cardinality \({\mathfrak {c}}\) and every linear combination of its elements belongs to \({\mathcal {S}}.\) Indeed, let us suppose that \(k\in {\mathbb {N}},\) \(\sigma _1,\ldots ,\sigma _k\) are distinct elements of \(\Delta ,\) and \(a_1,\ldots ,a_k\in {\mathbb {R}}\backslash \{0\}.\) Since \(\sigma _1,\ldots ,\sigma _k\) are almost disjoint, there exists \(n_0\in {\mathbb {N}}\) such that each \(n\ge n_0\) belongs to at most one set \(\sigma _i,\) \(i \in \{1, \ldots , k\}.\) Therefore, \(\left\{ \sum _{i=1}^{k}a_iX_{\sigma _i,n}\right\} _{n=n_0}^{\infty }\) can be decomposed into \(k+1\) subsequences: the zero sequence and \(\left\{ a_iX_j:j\in {\mathbb {N}},\sigma _i(j)\ge n_0\right\} \) with \(i\in \{1,\ldots ,k\}.\) By the properties (a) and (e) we have that

$$\begin{aligned} \left\{ a_iX_j : j\in {\mathbb {N}}, \sigma _i(j)\ge n_0\right\} \overset{c_1}{\longrightarrow }0 \end{aligned}$$

for every \(i\in \{1,\ldots ,k\}.\) Moreover, the zero sequence converges to 0 by the property (b). Then (c) implies that \(\left\{ \sum _{i=1}^{k}a_iX_{\sigma _i,n}\right\} _{n=n_0}^{\infty }\) converges to zero with respect to \(c_1.\) Using again (e), it follows that

$$\begin{aligned} \sum _{i=1}^{k}a_iX_{\sigma _i,n} \overset{c_1}{\longrightarrow }0. \end{aligned}$$

Let us see that this linear combination does not converge to zero with respect to \(c_2.\) Since \(X_n \overset{c_2}{\not \longrightarrow }0,\) by (a) we know that \(a_1X_n \overset{c_2}{\not \longrightarrow }0\) (otherwise, multiplying by \(1/a_1\) and applying (a) again would give \(X_n \overset{c_2}{\longrightarrow }0\)). Then (e) implies

$$\begin{aligned} \left\{ a_1X_j : j\in {\mathbb {N}}, \sigma _1(j)\ge n_0\right\} \overset{c_2}{\not \longrightarrow }0. \end{aligned}$$

Since this is a subsequence of \(\left\{ \sum _{i=1}^{k}a_iX_{\sigma _i,n}\right\} _{n=1}^{\infty },\) by (d) we deduce that

$$\begin{aligned} \sum _{i=1}^{k}a_iX_{\sigma _i,n} \overset{c_2}{\not \longrightarrow }0. \end{aligned}$$

Therefore, \(\left\{ \sum _{i=1}^{k}a_iX_{\sigma _i,n}\right\} _{n=1}^{\infty }\) belongs to \({\mathcal {S}}.\)

By (b), the fact that \(\sum _{i=1}^{k}a_iX_{\sigma _i,n} \overset{c_2}{\not \longrightarrow }0\) implies that \(\left\{ \sum _{i=1}^{k}a_iX_{\sigma _i,n}\right\} _{n=1}^{\infty }\) cannot be the zero sequence. Hence, we deduce that the elements of \({\mathcal {M}}\) are linearly independent and the proof of the \({\mathfrak {c}}\)-lineability of \({\mathcal {S}}\) is complete. \(\square \)
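The spreading construction \(\{X_{\sigma ,n}\}_{n=1}^{\infty }\) used in this proof is easy to simulate. In the following sketch (ours, with two disjoint arithmetic progressions standing in for members of \(\Delta \)), each index carries at most one nonzero summand, which is exactly why a linear combination splits into the zero subsequence plus a scaled copy of \(\{X_j\}\) along each \(\sigma _i\):

```python
def spread(sigma_sorted, X, N):
    """First N terms of {X_{sigma,n}}: X_j sits at index sigma(j), 0 elsewhere."""
    pos = {n: j for j, n in enumerate(sigma_sorted, start=1)}
    return [X(pos[n]) if n in pos else 0.0 for n in range(1, N + 1)]

sigma1 = [3 * k for k in range(1, 12)]      # {3, 6, 9, ...}
sigma2 = [3 * k + 1 for k in range(1, 12)]  # {4, 7, 10, ...}, disjoint from sigma1
base = lambda j: 1.0 / j                    # some base sequence {X_j}

combo = [a + 2.0 * b for a, b in zip(spread(sigma1, base, 34),
                                     spread(sigma2, base, 34))]
print(combo)  # at each index at most one summand is nonzero: the combination
              # decomposes into the zero subsequence and copies a_i * X_j
```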

Theorem 7

The following modes of convergence of sequences of random variables satisfy the properties (a), (b), (c), (d), and (e) given in Theorem 6: almost sure convergence,  almost sure uniform convergence,  complete convergence,  convergence in probability,  convergence in distribution,  and convergence in \(L^p\)-sense.

Proof

The only property that might not be obvious is (a) for convergence in distribution. However, it can be deduced from Slutsky’s theorem (see [34, Theorems 5.22 and 5.23]). \(\square \)

3.1 Almost sure uniform convergence, complete convergence and almost sure convergence

The following theorem complements the result given in Theorem 1.

Theorem 8

Let \(\left( \Omega ,{\mathcal {A}},\mu \right) \) be a non-atomic probability space and let \({\mathcal {S}}\) be the set of all sequences \(\left\{ X_n\right\} _{n=1}^{\infty }\) of random variables on \(\left( \Omega ,{\mathcal {A}},\mu \right) \) such that \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges to 0 completely but does not converge almost surely uniformly. Then \({\mathcal {S}}\) is \({\mathfrak {c}}\)-lineable.

Proof

Since \(\left( \Omega ,{\mathcal {A}},\mu \right) \) is non-atomic, for every \(n\in {\mathbb {N}}\) there exists a subset \(\Omega _n\in {\mathcal {A}}\) such that \(\mu (\Omega _n)=1/n^2\) (see [16, Corollary 1.12.10]). The random variable \(X_n\) is defined as the characteristic function of \(\Omega _n\):

$$\begin{aligned} X_n={\textbf{1}}_{\Omega _n}. \end{aligned}$$

Note that if \(\varepsilon >0,\) then

$$\begin{aligned} \sum _{n=1}^{\infty }\mu (|X_n|>\varepsilon ) \le \sum _{n=1}^{\infty }\mu (\Omega _n) = \sum _{n=1}^{\infty }\frac{1}{n^2} < \infty . \end{aligned}$$

That is, the sequence \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges completely to 0.

Let us suppose that \(\left\{ X_n\right\} _{n=1}^{\infty }\) also converges almost surely uniformly to 0. Then there is some measurable set B with \(\mu (B)=1\) and there is \(n_0\in {\mathbb {N}}\) such that \(|X_n(\omega )|<1/2\) for all \(n\ge n_0\) and all \(\omega \in B.\) Since \(X_{n_0}(\omega )=1\) if \(\omega \in \Omega _{n_0},\) it follows that \(B\cap \Omega _{n_0}=\emptyset \) and then

$$\begin{aligned} 0 = \mu (\Omega \backslash B) \ge \mu \left( \Omega _{n_0}\right) = \frac{1}{n_0^2} > 0. \end{aligned}$$

This contradiction shows that the sequence \(\left\{ X_n\right\} _{n=1}^{\infty }\) does not converge almost surely uniformly to 0. That is, \(\left\{ X_n\right\} _{n=1}^{\infty }\) belongs to \({\mathcal {S}}.\) Now we just need to apply Theorem 6 to get the result. \(\square \)

The following theorem complements the result given in Theorem 2.

Theorem 9

Let \(\left( \Omega ,{\mathcal {A}},\mu \right) \) be a non-atomic probability space and let \({\mathcal {S}}\) be the set of all sequences \(\left\{ X_n\right\} _{n=1}^{\infty }\) of random variables on \(\left( \Omega ,{\mathcal {A}},\mu \right) \) such that \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges to 0 almost surely but does not converge completely. Then \({\mathcal {S}}\) is \({\mathfrak {c}}\)-lineable.

Proof

By [16, Corollary 1.12.10], there exists a sequence \(\left\{ \Omega _n:n\in {\mathbb {N}}\right\} \) of measurable subsets of \(\Omega \) such that \(\Omega _{n+1}\subset \Omega _n\) and \(\mu \left( \Omega _n\right) =1/n\) for each \(n\in {\mathbb {N}}.\) Let \(X_n={\textbf{1}}_{\Omega _n}\) and note that if \(\omega \notin \Omega _{n_0}\) for some \(n_0\in {\mathbb {N}},\) then \(X_n(\omega )=0\) for all \(n\ge n_0.\) Therefore,

$$\begin{aligned} \lim _{n\rightarrow \infty }X_n(\omega )=0 \quad \text {for every } \omega \in \Omega \backslash \bigcap _{n=1}^{\infty }\Omega _n. \end{aligned}$$

Since \(\mu \left( \bigcap _{n=1}^{\infty }\Omega _n\right) =0,\) it follows that \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges to 0 almost surely. Moreover,

$$\begin{aligned} \sum _{n=1}^{\infty }\mu \left( \left| X_n\right| >\frac{1}{2}\right) = \sum _{n=1}^{\infty } \mu \left( \Omega _n\right) = \sum _{n=1}^{\infty }\frac{1}{n} = +\infty . \end{aligned}$$

Consequently, the sequence \(\left\{ X_n\right\} _{n=1}^{\infty }\) does not converge completely to 0. We have proved that \(\left\{ X_n\right\} _{n=1}^{\infty }\) belongs to \({\mathcal {S}},\) so this set is \({\mathfrak {c}}\)-lineable by Theorem 6. \(\square \)

3.2 Convergence in \(L^p\)-sense and almost sure convergence

Let us recall that E(X) denotes the expectation of a random variable X. By the Lyapunov inequality, it is known that \(E(|X|^q)^{1/q}\le E(|X|^p)^{1/p}\) if \(0<q<p\) (see [15, page 277]). As a consequence, it follows that the convergence in \(L^p\)-sense implies the convergence in \(L^q\)-sense whenever \(0<q<p,\) but the converse is not true (see [38, Section 14.5]). Moreover, it is also known that almost sure convergence and convergence in \(L^p\)-sense are independent, that is, none of them implies the other one (see [38, Sections 14.7 and 14.8]). More generally, considering non-atomic probability spaces, Theorems 10, 11, and 12 below bring interesting information related to these modes of convergence.

Theorem 10

Let \(\left( \Omega ,{\mathcal {A}},\mu \right) \) be a non-atomic probability space and let \(p>0.\) Let \({\mathcal {S}}\) be the set of all sequences \(\left\{ X_n\right\} _{n=1}^{\infty }\) of random variables on \(\left( \Omega ,{\mathcal {A}},\mu \right) \) such that \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges to 0 in \(L^q\)-sense for all \(q\in (0,p)\) but does not converge in \(L^p\)-sense. Then \({\mathcal {S}}\) is \({\mathfrak {c}}\)-lineable.

Proof

Let \(n_0\in {\mathbb {N}}\) be such that \(n^{-p+1/n}\le 1\) for all \(n\ge n_0.\) For each \(n\ge n_0\) there exists \(\Omega _n\in {\mathcal {A}}\) such that \(\mu (\Omega _n)=n^{-p+1/n}\) (see [16, Corollary 1.12.10]). Set \(X_n=0\) for \(n<n_0\) and, for \(n\ge n_0,\) define

$$\begin{aligned} X_n(\omega )={\left\{ \begin{array}{ll} n &{} \text {if } \omega \in \Omega _n, \\ 0 &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$

On the one hand, if \(0<q<p,\) then

$$\begin{aligned} \lim _{n\rightarrow \infty } E\left( \left| X_n\right| ^q\right) = \lim _{n\rightarrow \infty } \int _{\Omega }\left| X_n\right| ^q d\mu = \lim _{n\rightarrow \infty } n^{q-p+\frac{1}{n}} = 0 \end{aligned}$$

and so \(\left\{ X_n\right\} _{n=n_0}^{\infty }\) converges to 0 in \(L^q\)-sense. On the other hand, we have that

$$\begin{aligned} \lim _{n\rightarrow \infty }E\left( \left| X_n\right| ^p\right) = \lim _{n\rightarrow \infty } n^{1/n} = 1. \end{aligned}$$

Therefore, \(\left\{ X_n\right\} _{n=n_0}^{\infty }\) does not converge to zero in \(L^p\)-sense. This proves that \(\left\{ X_n\right\} _{n=1}^{\infty }\) belongs to \({\mathcal {S}}.\) By Theorem 6, we conclude that \({\mathcal {S}}\) is \({\mathfrak {c}}\)-lineable. \(\square \)
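The two limits used in the proof can be checked numerically; by construction, \(E(|X_n|^q)=n^q\,\mu (\Omega _n)=n^{q-p+1/n}.\) A minimal sketch (ours):

```python
def moment(n, p, q):
    """E|X_n|^q for the proof's X_n: n^q * mu(Omega_n) = n^(q - p + 1/n)."""
    return n ** (q - p + 1.0 / n)

p = 2.0
for n in (10, 1000, 100_000):
    print(n, moment(n, p, q=1.0), moment(n, p, q=p))
# the q-moments (here q = 1 < p) tend to 0, while the p-moments tend to 1
```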

Theorem 11

Let \(\left( \Omega ,{\mathcal {A}},\mu \right) \) be a non-atomic probability space and let \({\mathcal {S}}\) be the set of all sequences \(\left\{ X_n\right\} _{n=1}^{\infty }\) of random variables on \(\left( \Omega ,{\mathcal {A}},\mu \right) \) such that \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges to 0 almost surely but does not converge in \(L^p\)-sense for any \(p>0.\) Then \({\mathcal {S}}\) is \({\mathfrak {c}}\)-lineable.

Proof

Since the probability space is non-atomic, there exists a collection \(\left\{ \Omega _n:n\in {\mathbb {N}}\right\} \) of measurable subsets of \(\Omega \) such that \(\mu (\Omega _n)=1/n\) and \(\Omega _{n+1}\subset \Omega _n\) for every \(n\in {\mathbb {N}}.\) Set

$$\begin{aligned} X_n(\omega )={\left\{ \begin{array}{ll} n^n &{} \text {if } \omega \in \Omega _n, \\ 0 &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$

On the one hand, if \(\omega \notin \bigcap _{n\in {\mathbb {N}}}\Omega _n,\) then \(X_n(\omega )=0\) for n large enough (dependent on \(\omega \)). That is, the set of points \(\omega \) for which there is no convergence of \(\left\{ X_n(\omega )\right\} _{n=1}^{\infty }\) is a subset of \(\bigcap _{n\in {\mathbb {N}}}\Omega _n.\) Since \(\mu \left( \Omega _n\right) =1/n\) for all n,  we have that \(\mu \left( \bigcap _{n\in {\mathbb {N}}}\Omega _n\right) =0.\) Thus,

$$\begin{aligned} \mu \left( \left\{ \omega \in \Omega : \lim _{n\rightarrow \infty }X_n(\omega )=0\right\} \right) = 1. \end{aligned}$$

On the other hand, if \(p>0,\) then

$$\begin{aligned} \lim _{n\rightarrow \infty }E\left( \left| X_n\right| ^p\right) = \lim _{n\rightarrow \infty }n^{np-1} = +\infty \end{aligned}$$

and so \(\left\{ X_{n}\right\} _{n=1}^{\infty }\) does not converge to 0 in \(L^p\)-sense. This proves that \(\left\{ X_{n}\right\} _{n=1}^{\infty }\) belongs to \({\mathcal {S}}.\) By Theorem 6 we conclude that \({\mathcal {S}}\) is \({\mathfrak {c}}\)-lineable. \(\square \)

Theorem 12

Let \(\left( \Omega ,{\mathcal {A}},\mu \right) \) be a non-atomic probability space and let \({\mathcal {S}}\) be the set of all sequences \(\left\{ X_n\right\} _{n=1}^{\infty }\) of random variables on \(\left( \Omega ,{\mathcal {A}},\mu \right) \) such that \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges to 0 in \(L^p\)-sense for all \(p>0\) but does not converge almost surely. Then \({\mathcal {S}}\) is \({\mathfrak {c}}\)-lineable.

Proof

Since the probability space is non-atomic, there exists a measurable set \(\Omega _0\) such that \(\mu (\Omega _0)=1/2\) (see [16, Corollary 1.12.10]). Defining \(\Omega _1=\Omega \backslash \Omega _0,\) we find two measurable sets \(\Omega _0\) and \(\Omega _1\) such that

$$\begin{aligned} \Omega =\Omega _0\cup \Omega _1, \quad \Omega _0\cap \Omega _1=\emptyset , \quad \mu (\Omega _0)=\mu (\Omega _1)=\frac{1}{2}. \end{aligned}$$

With the same argument, it is possible to find two measurable sets \(\Omega _{00}\) and \(\Omega _{01}\) such that

$$\begin{aligned} \Omega _0=\Omega _{00}\cup \Omega _{01}, \quad \Omega _{00}\cap \Omega _{01}=\emptyset , \quad \mu (\Omega _{00})=\mu (\Omega _{01})=\frac{1}{2^2}. \end{aligned}$$

Similarly, there exist two measurable sets \(\Omega _{10}\) and \(\Omega _{11}\) such that

$$\begin{aligned} \Omega _1=\Omega _{10}\cup \Omega _{11}, \quad \Omega _{10}\cap \Omega _{11}=\emptyset , \quad \mu (\Omega _{10})=\mu (\Omega _{11})=\frac{1}{2^2}. \end{aligned}$$

If the previous argument is repeated, we obtain a family

$$\begin{aligned} \left\{ \Omega _{j_1\cdots j_m}:m\in {\mathbb {N}}, \ j_1,\ldots ,j_m\in \{0,1\}\right\} \end{aligned}$$

of measurable subsets of \(\Omega \) with the following properties:

  • \(\Omega _{j_1\cdots j_m}=\Omega _{j_1\cdots j_{m}0}\cup \Omega _{j_1\cdots j_{m}1}.\)

  • \(\Omega _{j_1\cdots j_{m}0}\cap \Omega _{j_1\cdots j_{m}1}=\emptyset .\)

  • \(\mu \left( \Omega _{j_1\cdots j_{m}0}\right) =\mu \left( \Omega _{j_1\cdots j_{m}1}\right) =1/2^{m+1}.\)

  • For each \(m\in {\mathbb {N}},\) \(\Omega =\bigcup \left\{ \Omega _{j_1\cdots j_m}: j_1,\ldots ,j_m\in \{0,1\}\right\} .\)

Let \(X_1={\textbf{1}}_{\Omega }.\) Moreover, given a natural number \(n\ge 2,\) we consider its binary expansion:

$$\begin{aligned} n = 2^m + 2^{m-1}j_1 + 2^{m-2}j_2 +\cdots + 2j_{m-1} + j_m, \end{aligned}$$

where \(m\ge 1\) and \(j_1,\ldots ,j_m\in \{0,1\}\) are unique for each \(n\ge 2.\) Hence, we can define

$$\begin{aligned} X_n={\textbf{1}}_{\Omega _{j_1\cdots j_m}}. \end{aligned}$$

Let this m be denoted by \(n^*.\) The following table clarifies how each \(X_n\) is defined.

\(n=1\): \(X_1={\textbf{1}}_{\Omega }\)

\(n=2=2^1+0,\) \(n^*=1\): \(X_2={\textbf{1}}_{\Omega _0}\)

\(n=3=2^1+1,\) \(n^*=1\): \(X_3={\textbf{1}}_{\Omega _1}\)

\(n=4=2^2+2\cdot 0+0,\) \(n^*=2\): \(X_4={\textbf{1}}_{\Omega _{00}}\)

\(n=5=2^2+2\cdot 0+1,\) \(n^*=2\): \(X_5={\textbf{1}}_{\Omega _{01}}\)

\(n=6=2^2+2\cdot 1+0,\) \(n^*=2\): \(X_6={\textbf{1}}_{\Omega _{10}}\)

\(n=7=2^2+2\cdot 1+1,\) \(n^*=2\): \(X_7={\textbf{1}}_{\Omega _{11}}\)

\(n=8=2^3+2^2\cdot 0+2\cdot 0+0,\) \(n^*=3\): \(X_8={\textbf{1}}_{\Omega _{000}}\)

\(n=9=2^3+2^2\cdot 0+2\cdot 0+1,\) \(n^*=3\): \(X_9={\textbf{1}}_{\Omega _{001}}\)

\(\vdots \)

Let us see that \(\left\{ X_n\right\} _{n=1}^{\infty }\) belongs to \({\mathcal {S}}.\) On the one hand,

$$\begin{aligned} \lim _{n\rightarrow \infty } E\left( \left| X_n\right| ^p\right) = \lim _{n\rightarrow \infty } \int _{\Omega }\left| X_n\right| ^p d\mu = \lim _{n\rightarrow \infty } \mu \left( \Omega _{j_1\cdots j_{n^*}}\right) = \lim _{n\rightarrow \infty }\frac{1}{2^{n^*}} = 0, \end{aligned}$$

which means that \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges to 0 in \(L^p\) for all \(p>0.\) On the other hand, for each \(\omega \in \Omega \) and each \(m\in {\mathbb {N}},\) there exist unique \(j_1,\ldots ,j_m\in \{0,1\}\) such that \(\omega \in \Omega _{j_1\cdots j_m}.\) If \(0\le \ell <2^m,\) then

$$\begin{aligned} X_{2^m+\ell }(\omega )= {\left\{ \begin{array}{ll} 1 &{} \text {if } \ell =2^{m-1}j_1+2^{m-2}j_2+\cdots +j_m, \\ 0 &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$

As a consequence, the sequence \(\left\{ X_n(\omega )\right\} _{n=1}^{\infty }\) contains infinitely many elements equal to 0 and infinitely many elements equal to 1. We thus conclude that \(\left\{ X_n(\omega )\right\} _{n=1}^{\infty }\) does not converge for any \(\omega \in \Omega .\) This proves that \(\left\{ X_n\right\} _{n=1}^{\infty }\) belongs to \({\mathcal {S}},\) so this set is \({\mathfrak {c}}\)-lineable by Theorem 6. \(\square \)
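This is the classical “typewriter” construction, and its bookkeeping can be coded directly. The following sketch (ours) uses the dyadic model \(\Omega =[0,1)\) with \(\Omega _{j_1\cdots j_m}\) realized as the interval \([\,l/2^m,(l+1)/2^m)\) for \(n=2^m+l\): each block \(2^m\le n<2^{m+1}\) sweeps the whole space once, so every point is hit infinitely often, while \(E(|X_n|^p)=2^{-n^*}\rightarrow 0.\)

```python
def dyadic_interval(n):
    """Model interval of Omega_{j_1...j_m} for X_n, n >= 2.

    Writing n = 2^m + l with 0 <= l < 2^m (so m = n* and the binary digits
    of l, padded to length m, are j_1...j_m), the interval is
    [l / 2^m, (l + 1) / 2^m) inside Omega = [0, 1)."""
    m = n.bit_length() - 1
    l = n - (1 << m)
    return l / 2**m, (l + 1) / 2**m

omega = 0.3
hits = [n for n in range(2, 2**10)
        if dyadic_interval(n)[0] <= omega < dyadic_interval(n)[1]]
print(hits)  # one hit per block 2^m <= n < 2^(m+1): X_n(omega) oscillates forever
print([2.0 ** -(n.bit_length() - 1) for n in (2, 17, 999)])  # E(|X_n|^p) -> 0
```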

3.3 Convergence in distribution and convergence in probability

Although every sequence of random variables that converges in probability also converges in distribution, the converse does not hold in general (see [34, Proposition 5.13] and [38, Section 14.2]). Our next result will show the lineability of the collection of sequences converging in distribution but not in probability.

Theorem 13

Let \(\left( \Omega ,{\mathcal {A}},\mu \right) \) be a probability space on which it is possible to define infinitely many independent, identically distributed, pairwise distinct random variables. Let \({\mathcal {S}}\) be the set of all sequences of random variables on \(\left( \Omega ,{\mathcal {A}},\mu \right) \) that converge in distribution but do not converge in probability. Then \({\mathcal {S}}\) is \({\mathfrak {c}}\)-lineable.

Proof

Let \(\left\{ X_n\right\} _{n=1}^{\infty }\) be a sequence of independent, identically distributed, pairwise distinct random variables on \(\left( \Omega ,{\mathcal {A}},\mu \right) .\) Since all the random variables \(X_n\) have the same distribution function, which we will call F,  it follows that the sequence \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges in distribution to \(X_1.\)

If F only took the values 0 and 1,  every random variable \(X_n\) would be constant almost surely and equal to the same value (\(\min \left\{ x\in {\mathbb {R}}:F(x)=1\right\} \)). Therefore, we would have that \(X_n=X_m\) for all \(n,m\in {\mathbb {N}},\) which contradicts the hypothesis of the theorem. Then there must exist \(x_0\in {\mathbb {R}}\) such that \(0<F(x_0)<1.\) Since F is right-continuous, there exists \(\delta >0\) such that if \(x_0\le x<x_0+\delta ,\) then \(F(x)<1.\) Choose any a and b so that

$$\begin{aligned} x_0< a< b < x_0+\delta . \end{aligned}$$

Since F is non-decreasing, for all m and n we have

$$\begin{aligned} \mu (X_m<a) \cdot \mu (X_n>b) \ge F(x_0)\cdot (1-F(b)) > 0. \end{aligned}$$
(3.1)

These inequalities will be applied later on.

It is known that convergence in probability is metrizable (see [15, Exercise 21.15]). In fact, convergence in probability is equivalent to convergence with respect to the following metric in the space of all random variables on \(\left( \Omega ,{\mathcal {A}},\mu \right) \):

$$\begin{aligned} d(X,Y) = E\left( \frac{|X-Y|}{1+|X-Y|}\right) = \int _{\Omega }\frac{|X-Y|}{1+|X-Y|}d\mu . \end{aligned}$$

Let us estimate the distance between \(X_m\) and \(X_n\) for two natural numbers \(m\ne n.\) To do that, we will use that the function \(f(t)=\frac{t}{1+t}\) is increasing on \([0,+\infty )\) and that the variables \(X_m\) and \(X_n\) are independent:

$$\begin{aligned} d(X_m,X_n)&= \int _{\Omega }f\left( \left| X_m-X_n\right| \right) d\mu \ge \int _{\left( X_m<a\right) \cap \left( X_n>b\right) } f\left( \left| X_m-X_n\right| \right) d\mu \\&\ge \mu \left( \left( X_m<a\right) \cap \left( X_n>b\right) \right) \cdot f(b-a) \\&= \mu \left( X_m<a\right) \cdot \mu \left( X_n>b\right) \cdot f(b-a). \end{aligned}$$

By (3.1), if \(m\ne n,\) then

$$\begin{aligned} d(X_m,X_n) \ge F(x_0)\cdot (1-F(b))\cdot f(b-a) > 0. \end{aligned}$$

We conclude that \(\left\{ X_n\right\} _{n=1}^{\infty }\) is not a Cauchy sequence, which implies that \(\left\{ X_n\right\} _{n=1}^{\infty }\) cannot converge in probability. Therefore, \(\left\{ X_n\right\} _{n=1}^{\infty }\) belongs to \({\mathcal {S}}.\) By Theorem 6, we obtain the \({\mathfrak {c}}\)-lineability of the set \({\mathcal {S}}.\) \(\square \)
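The uniform lower bound on the pairwise distances can also be observed by simulation. The following sketch (ours) takes independent Uniform(0, 1) variables as one concrete choice of identically distributed, pairwise distinct \(X_n\):

```python
import numpy as np

rng = np.random.default_rng(1)

def d(x, y):
    """Monte Carlo estimate of d(X, Y) = E(|X - Y| / (1 + |X - Y|))."""
    t = np.abs(x - y)
    return float(np.mean(t / (1.0 + t)))

N = 200_000
X = rng.uniform(0.0, 1.0, size=(5, N))  # samples of five i.i.d. Uniform(0,1)
for m in range(4):
    print(m + 1, m + 2, round(d(X[m], X[m + 1]), 4))
# all estimates cluster near one positive constant, so {X_n} is nowhere
# Cauchy for the metric of convergence in probability
```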

Remark 14

The existence of a probability space and a collection of independent random variables on it with prescribed distributions, as required in Theorems 13 and 15, is guaranteed by [15, Theorem 20.4].

4 Convergence of sums of sequences of random variables

Let X, Y, \(X_n,\) and \(Y_n\) (with \(n\in {\mathbb {N}}\)) be random variables and suppose that \(\left\{ X_n\right\} _{n=1}^{\infty }\) converges in distribution to X and \(\left\{ Y_n\right\} _{n=1}^{\infty }\) converges in distribution to Y,  which will be denoted by \(X_n\overset{\text {d}}{\longrightarrow }X\) and \(Y_n\overset{\text {d}}{\longrightarrow }Y.\) In general, it is not possible to guarantee the convergence of \(\left\{ X_n+Y_n\right\} _{n=1}^{\infty }\) to \(X+Y,\) as can be seen in [38, Section 14.10]. Inspired by the cited counterexample, we will prove a stronger result:

Theorem 15

Let us suppose that \(\left( \Omega ,{\mathcal {A}},\mu \right) \) is a probability space such that there are two subsets \(\Omega _1\in {\mathcal {A}}\) and \(\Omega _2\in {\mathcal {A}}\) with \(\Omega =\Omega _1\cup \Omega _2,\) \(\Omega _1\cap \Omega _2=\emptyset ,\) and \(\mu (\Omega _1)=\mu (\Omega _2)=1/2.\) For each \(i\in \{1,2\},\) let \({\mathcal {A}}_i\) be the family of elements of \({\mathcal {A}}\) contained in \(\Omega _i\) and let \(\mu _i(B)=2\mu (B)\) for all \(B\in {\mathcal {A}}_i.\) In addition, suppose that there is a sequence \(\left\{ X_n\right\} _{n=1}^{\infty }\) of independent random variables defined on \(\left( \Omega _1,{\mathcal {A}}_1,\mu _1\right) \) and there is another sequence \(\left\{ Y_n\right\} _{n=1}^{\infty }\) of independent random variables defined on \(\left( \Omega _2,{\mathcal {A}}_2,\mu _2\right) \) such that the distribution function of every \(X_n\) and every \(Y_n\) is

$$\begin{aligned} F(x)={\left\{ \begin{array}{ll} 0 &{} \text {if } x<-1, \\ \frac{1+x}{2} &{} \text {if } -1\le x\le 1, \\ 1 &{} \text {if } x\ge 1. \end{array}\right. } \end{aligned}$$

Under these conditions, there exist two vector spaces, \({\mathcal {V}}_1\) and \({\mathcal {V}}_2,\) of sequences of random variables on \(\left( \Omega ,{\mathcal {A}},\mu \right) \) with the following properties:

  (1)

    The dimension of both spaces \({\mathcal {V}}_1\) and \({\mathcal {V}}_2\) is \({\mathfrak {c}}.\)

  (2)

    If \(\left\{ Z_n\right\} _{n=1}^{\infty }\in {\mathcal {V}}_1\backslash \{0\}\) and \(\left\{ T_n\right\} _{n=1}^{\infty }\in {\mathcal {V}}_2\backslash \{0\},\) then there are random variables Z and T on \(\left( \Omega ,{\mathcal {A}},\mu \right) \) satisfying \(Z_n\overset{\textrm{d}}{\longrightarrow }Z,\) \(T_n\overset{\textrm{d}}{\longrightarrow }T,\) and \(Z_n+T_n\overset{\textrm{d}}{\not \longrightarrow }Z+T.\)

Proof

Given \(a\in {\mathbb {R}}\backslash \{0\}\) and \(n\in {\mathbb {N}},\) let \(F_a\) denote the distribution function of \(aX_n.\) On the one hand, if \(a>0,\) then

$$\begin{aligned} F_a(x)&= \mu _1\left( \left\{ \omega \in \Omega _1 : aX_n(\omega )\le x\right\} \right) = F\left( \frac{x}{a}\right) = {\left\{ \begin{array}{ll} 0 &{} \text {if } x<-a, \\ \frac{1+\frac{x}{a}}{2} &{} \text { if } -a\le x<a, \\ 1 &{} \text {if } x\ge a. \end{array}\right. } \end{aligned}$$

On the other hand, if \(a<0,\) then

$$\begin{aligned} F_a(x)&= \mu _1\left( \left\{ \omega \in \Omega _1 : aX_n(\omega )\le x\right\} \right) = \mu _1\left( \left\{ \omega \in \Omega _1 : X_n(\omega )\ge \frac{x}{a}\right\} \right) \\&= 1-\mu _1\left( \left\{ \omega \in \Omega _1 : X_n(\omega )<\frac{x}{a}\right\} \right) = {\left\{ \begin{array}{ll} 1 &{} \text {if } x/a\le -1, \\ 1-\frac{1+\frac{x}{a}}{2} &{} \text { if } -1<x/a\le 1, \\ 0 &{} \text {if } x/a>1, \end{array}\right. } \\&= {\left\{ \begin{array}{ll} 1 &{} \text {if } x\ge -a, \\ \frac{1-\frac{x}{a}}{2} &{} \text {if } a\le x<-a, \\ 0 &{} \text {if } x<a. \end{array}\right. } \end{aligned}$$

That is, for every \(a\in {\mathbb {R}}\backslash \{0\}\) and every \(n\in {\mathbb {N}},\) the distribution function of \(aX_n\) is

$$\begin{aligned} F_a(x)={\left\{ \begin{array}{ll} 0 &{} \text {if } x<-|a|, \\ \frac{1+\frac{x}{|a|}}{2} &{} \text {if } -|a|\le x<|a|, \\ 1 &{} \text {if } x\ge |a|. \end{array}\right. } \end{aligned}$$

As a consequence, the density function of \(aX_n\) is

$$\begin{aligned} f_a(x)={\left\{ \begin{array}{ll} \frac{1}{2|a|} &{} \text {if } -|a|<x<|a|, \\ 0 &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$

For each \(n\in {\mathbb {N}},\) we define the following extension of \(X_n\) to the whole space \(\left( \Omega ,{\mathcal {A}},\mu \right) \):

$$\begin{aligned} X_n^*(\omega )={\left\{ \begin{array}{ll} X_n(\omega ) &{} \text {if } \omega \in \Omega _1, \\ 0 &{} \text {if } \omega \in \Omega _2. \end{array}\right. } \end{aligned}$$

If \(n_1,\ldots ,n_k\) are distinct positive integers and \(a_1,\ldots ,a_k\) are non-zero real numbers, then the variables \(a_1X_{n_1},\ldots ,a_kX_{n_k}\) are independent, so the density function of \(\sum _{i=1}^{k}a_{i}X_{n_i}\) is the following convolution:

$$\begin{aligned} g_{a_1,\ldots ,a_k} = f_{a_1} *\cdots *f_{a_k} \end{aligned}$$
(4.1)

(see [15, page 267]). Therefore, the distribution function of \(\sum _{i=1}^{k}a_{i}X_{n_i}\) is

$$\begin{aligned} G_{a_1,\ldots ,a_k}(x) = \int _{-\infty }^{x}g_{a_1,\ldots ,a_k}(t)dt. \end{aligned}$$
(4.2)

If S is defined as

$$\begin{aligned} S(x)={\left\{ \begin{array}{ll} 0 &{} \text {if } x<0, \\ 1 &{} \text {if } x\ge 0, \end{array}\right. } \end{aligned}$$

then the distribution function of the extension \(\sum _{i=1}^{k}a_{i}X_{n_i}^*\) is \(\left( G_{a_1,\ldots ,a_k}+S\right) /2.\) Indeed, since \(\sum _{i=1}^{k}a_{i}X_{n_i}^*=0\) on \(\Omega _2,\) if \(x<0,\) then

$$\begin{aligned} \mu \left( \sum _{i=1}^{k}a_{i}X_{n_i}^* \le x\right)&= \mu \left( \left\{ \omega \in \Omega _1: \sum _{i=1}^{k}a_{i}X_{n_i}^*(\omega )\le x\right\} \right) \\&= \frac{1}{2}\mu _1\left( \left\{ \omega \in \Omega _1 : \sum _{i=1}^{k}a_{i}X_{n_i}(\omega )\le x\right\} \right) \\&= \frac{1}{2}G_{a_1,\ldots ,a_k}(x) = \frac{G_{a_1,\ldots ,a_k}(x)+S(x)}{2}. \end{aligned}$$

On the other hand, if \(x\ge 0,\) then

$$\begin{aligned} \mu \left( \sum _{i=1}^{k}a_{i}X_{n_i}^* \le x\right)&= \mu (\Omega _2) + \mu \left( \left\{ \omega \in \Omega _1 : \sum _{i=1}^{k}a_{i}X_{n_i}^*(\omega )\le x\right\} \right) \\&= \frac{1}{2} + \frac{1}{2}\mu _1\left( \left\{ \omega \in \Omega _1 : \sum _{i=1}^{k}a_{i}X_{n_i}(\omega )\le x\right\} \right) \\&= \frac{1}{2} + \frac{1}{2}G_{a_1,\ldots ,a_k}(x) = \frac{G_{a_1,\ldots ,a_k}(x)+S(x)}{2}. \end{aligned}$$

This proves that

$$\begin{aligned} H_{a_1,\ldots ,a_k} = \frac{G_{a_1,\ldots ,a_k}+S}{2} \end{aligned}$$
(4.3)

is the distribution function of \(\sum _{i=1}^{k}a_{i}X_{n_i}^*\) whenever \(n_1,\ldots ,n_k\) are distinct positive integers and \(a_1,\ldots ,a_k\) are non-zero real numbers.

Let \(\Delta \) be a family of \({\mathfrak {c}}\) almost disjoint subsets of \({\mathbb {N}}\) (see Theorem 5). Given \(n\in {\mathbb {N}}\) and \(\sigma \in \Delta ,\) the symbol \(\sigma (n)\) denotes the nth element of \(\sigma .\) The space \({\mathcal {V}}_1\) is defined as the linear span of the set

$$\begin{aligned} \left\{ \left\{ X_{\sigma (n)}^*\right\} _{n=1}^{\infty }:\sigma \in \Delta \right\} . \end{aligned}$$

Let us see that any non-zero element of \({\mathcal {V}}_1\) is convergent in distribution. Let \(\left\{ \sum _{i=1}^{k}a_iX_{\sigma _i(n)}^*\right\} _{n=1}^{\infty }\in {\mathcal {V}}_1,\) where \(\sigma _1,\ldots ,\sigma _k\) are distinct elements of \(\Delta \) and \(a_1,\ldots ,a_k\in {\mathbb {R}}\backslash \{0\}.\) Since \(\sigma _1,\ldots ,\sigma _k\) are almost disjoint sets, there exists \(n_0\) such that \(\sigma _i(n)\ne \sigma _j(n)\) for all \(n\ge n_0\) and \(i\ne j.\) By Eq. (4.3), if \(n\ge n_0,\) the distribution function of \(\sum _{i=1}^{k}a_iX_{\sigma _i(n)}^*\) is \(H_{a_1,\ldots ,a_k}.\) Moreover, since the distribution function of \(\sum _{i=1}^{k}a_iX_i^*\) is also \(H_{a_1,\ldots ,a_k},\) we can conclude that

$$\begin{aligned} \sum _{i=1}^{k}a_iX_{\sigma _i(n)}^* \overset{\textrm{d}}{\longrightarrow }\sum _{i=1}^{k}a_iX_i^*. \end{aligned}$$

Note that if \(i\in \{1,\ldots ,k\},\) then

$$\begin{aligned} \sup \left\{ x\in {\mathbb {R}}: f_{a_i}(t)=0 \text { for all } t\in (-\infty ,x)\right\} =-|a_i|. \end{aligned}$$

Using that \(g_{a_1,\ldots ,a_k}=f_{a_1}*\cdots *f_{a_k}\) and the Titchmarsh convolution theorem (see [36]), we obtain

$$\begin{aligned} \sup \left\{ x\in {\mathbb {R}}: g_{a_1,\ldots ,a_k}(t)=0 \text { for all } t\in (-\infty ,x)\right\} = \sum _{i=1}^{k} (-|a_i|) < 0. \end{aligned}$$

Since \(G_{a_1,\ldots ,a_k}(x)=\int _{-\infty }^{x}g_{a_1,\ldots ,a_k}(t)dt,\) it follows that

$$\begin{aligned} \sup \left\{ x\in {\mathbb {R}}: G_{a_1,\ldots ,a_k}=0 \text { on } (-\infty ,x)\right\} < 0. \end{aligned}$$

Hence,

$$\begin{aligned} \sup \left\{ x : H_{a_1,\ldots ,a_k}=0 \text { on } (-\infty ,x)\right\}&= \sup \left\{ x : G_{a_1,\ldots ,a_k}=0 \text { on } (-\infty ,x)\right\} \\&< 0 = \sup \left\{ x : S=0 \text { on } (-\infty ,x)\right\} . \end{aligned}$$

Therefore, \(H_{a_1,\ldots ,a_k}\ne S.\) Since S is the distribution function of the random variable which is equal to zero everywhere, we deduce that \(\sum _{i=1}^{k}a_iX_{\sigma _i(n)}^*\ne 0\) if \(n\ge n_0,\) so \(\left\{ \sum _{i=1}^{k}a_iX_{\sigma _i(n)}^*\right\} _{n=1}^{\infty }\) is not the zero sequence. That is, the set

$$\begin{aligned} \left\{ \left\{ X_{\sigma (n)}^*\right\} _{n=1}^{\infty }:\sigma \in \Delta \right\} \end{aligned}$$

is linearly independent and the dimension of \({\mathcal {V}}_1\) is \({\mathfrak {c}}.\)

For each \(n\in {\mathbb {N}},\) the random variable \(Y_n\) defined on \(\Omega _2\) is extended to the whole space \(\Omega \) as follows:

$$\begin{aligned} Y_n^*(\omega )={\left\{ \begin{array}{ll} Y_n(\omega ) &{} \text {if } \omega \in \Omega _2, \\ 0 &{} \text {if } \omega \in \Omega _1. \end{array}\right. } \end{aligned}$$

The space \({\mathcal {V}}_2\) is defined as the linear span of

$$\begin{aligned} \left\{ \left\{ Y_{\sigma (n)}^*\right\} _{n=1}^{\infty }:\sigma \in \Delta \right\} . \end{aligned}$$

Similar arguments to those used previously imply that the dimension of \({\mathcal {V}}_2\) is \({\mathfrak {c}}\) and that any non-zero element of \({\mathcal {V}}_2\) is convergent in distribution. In particular, if \(\varphi _1,\ldots ,\varphi _{\ell }\) are distinct elements of \(\Delta \) and \(b_1,\ldots ,b_{\ell }\in {\mathbb {R}}\backslash \{0\},\) then there is \(n_1\in {\mathbb {N}}\) such that if \(n\ge n_1,\) the distribution functions of \(\sum _{i=1}^{\ell }b_iY_{\varphi _i(n)}\) and \(\sum _{i=1}^{\ell }b_iY_{\varphi _i(n)}^*\) are \(G_{b_1,\ldots ,b_{\ell }}\) and \(H_{b_1,\ldots ,b_{\ell }},\) respectively. By Eq. (4.3), the distribution function of \(\sum _{i=1}^{\ell }b_iX_{k+i}^*\) is also \(H_{b_1,\ldots ,b_{\ell }},\) so we deduce that

$$\begin{aligned} \sum _{i=1}^{\ell }b_iY_{\varphi _i(n)}^* \overset{\textrm{d}}{\longrightarrow }\sum _{i=1}^{\ell }b_iX_{k+i}^*. \end{aligned}$$

To conclude the proof, we have to prove that the sum of both sequences does not converge in distribution to the sum of the limits. Setting

$$\begin{aligned} Z_n = \sum _{i=1}^{k}a_iX_{\sigma _i(n)}^*, \quad T_n = \sum _{i=1}^{\ell }b_iY_{\varphi _i(n)}^*, \quad L = \sum _{i=1}^{k}a_iX_i^* + \sum _{i=1}^{\ell }b_iX_{k+i}^*, \end{aligned}$$

we have to prove that \(Z_n+T_n\overset{\textrm{d}}{\not \longrightarrow }L.\) Let \(n\ge \max \{n_0,n_1\}.\) Since \(Z_n=0\) on \(\Omega _2\) and \(T_n=0\) on \(\Omega _1,\) if \(x\in {\mathbb {R}},\) then

$$\begin{aligned} \mu \left( Z_n+T_n\le x\right)&= \mu \left( \left\{ \omega \in \Omega _1 : Z_n(\omega )\le x\right\} \right) + \mu \left( \left\{ \omega \in \Omega _2 : T_n(\omega )\le x\right\} \right) \\&= \frac{1}{2}\mu _1\left( \left\{ \omega \in \Omega _1 : \sum _{i=1}^{k}a_iX_{\sigma _i(n)}(\omega )\le x\right\} \right) \\&\quad + \frac{1}{2}\mu _2\left( \left\{ \omega \in \Omega _2 : \sum _{i=1}^{\ell }b_iY_{\varphi _i(n)}(\omega )\le x\right\} \right) \\&= \frac{1}{2}G_{a_1,\ldots ,a_k}(x) + \frac{1}{2}G_{b_1,\ldots ,b_{\ell }}(x), \end{aligned}$$

where we have applied Eq. (4.2) (and a similar one for \(\sum _{i=1}^{\ell }b_iY_{\varphi _i(n)}\)). This proves that if \(n\ge \max \{n_0,n_1\},\) then the distribution function of \(Z_n+T_n\) is

$$\begin{aligned} \frac{G_{a_1,\ldots ,a_k} + G_{b_1,\ldots ,b_{\ell }}}{2}. \end{aligned}$$

Moreover, by Eq. (4.3), the distribution function of L is

$$\begin{aligned} H_{a_1,\ldots ,a_k,b_1,\ldots ,b_{\ell }} = \frac{G_{a_1,\ldots ,a_k,b_1,\ldots ,b_{\ell }}+S}{2}. \end{aligned}$$

Let us see that these two distribution functions are not equal. On the one hand, by the Titchmarsh convolution theorem (see [36]), we have

$$\begin{aligned} \sup \left\{ x\in {\mathbb {R}}: g_{a_1,\ldots ,a_k}=0 \text { on } (-\infty ,x)\right\} = \sum _{i=1}^{k} (-|a_i|). \end{aligned}$$

Since \(G_{a_1,\ldots ,a_k}(x)=\int _{-\infty }^{x}g_{a_1,\ldots ,a_k}(t)dt,\) we deduce that

$$\begin{aligned} \sup \left\{ x\in {\mathbb {R}}: G_{a_1,\ldots ,a_k}=0 \text { on } (-\infty ,x)\right\} = \sum _{i=1}^{k} (-|a_i|). \end{aligned}$$

Similarly,

$$\begin{aligned} \sup \left\{ x\in {\mathbb {R}}: G_{b_1,\ldots ,b_{\ell }}=0 \text { on } (-\infty ,x)\right\} = \sum _{i=1}^{\ell } (-|b_i|). \end{aligned}$$

Since \(G_{a_1,\ldots ,a_k}\) and \(G_{b_1,\ldots ,b_{\ell }}\) are non-negative at every point, it follows that

$$\begin{aligned} \sup \left\{ x: \frac{G_{a_1,\ldots ,a_k}+G_{b_1,\ldots ,b_{\ell }}}{2}=0 \text { on } (-\infty ,x)\right\} = \min \left\{ \sum _{i=1}^{k} (-|a_i|), \sum _{i=1}^{\ell } (-|b_i|)\right\} . \end{aligned}$$

On the other hand, applying again the Titchmarsh convolution theorem, we obtain

$$\begin{aligned} \sup \left\{ x\in {\mathbb {R}}: g_{a_1,\ldots ,a_k,b_1,\ldots ,b_{\ell }}=0 \text { on } (-\infty ,x)\right\} = -\sum _{i=1}^{k}|a_i| - \sum _{i=1}^{\ell }|b_i|, \end{aligned}$$

which implies

$$\begin{aligned} \sup \left\{ x\in {\mathbb {R}}: G_{a_1,\ldots ,a_k,b_1,\ldots ,b_{\ell }}=0 \text { on } (-\infty ,x)\right\} = -\sum _{i=1}^{k}|a_i| - \sum _{i=1}^{\ell }|b_i| \end{aligned}$$

and then

$$\begin{aligned}&\sup \left\{ x\in {\mathbb {R}}: H_{a_1,\ldots ,a_k,b_1,\ldots ,b_{\ell }}=0 \text { on } (-\infty ,x)\right\} \\&\quad = \sup \left\{ x\in {\mathbb {R}}: G_{a_1,\ldots ,a_k,b_1,\ldots ,b_{\ell }}=0 \text { on } (-\infty ,x)\right\} \\&\quad = -\sum _{i=1}^{k}|a_i| - \sum _{i=1}^{\ell }|b_i| < \min \left\{ \sum _{i=1}^{k} (-|a_i|), \ \sum _{i=1}^{\ell } (-|b_i|)\right\} . \end{aligned}$$

Therefore,

$$\begin{aligned} \frac{G_{a_1,\ldots ,a_k}+G_{b_1,\ldots ,b_{\ell }}}{2} \ne H_{a_1,\ldots ,a_k,b_1,\ldots ,b_{\ell }}, \end{aligned}$$

which proves that \(\left\{ Z_n+T_n\right\} _{n=1}^{\infty }\) does not converge in distribution to L. \(\square \)
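The quantitative steps of the proof, the convolution (4.1) and the support edges supplied by the Titchmarsh convolution theorem, can be checked on a grid. A sketch (ours), for the uniform densities \(f_a\) on \((-|a|,|a|)\):

```python
import numpy as np

h, L = 1e-3, 4.0
x = np.arange(-L, L, h)

def f(a):
    """Density f_a of a*X_n: uniform on (-|a|, |a|)."""
    return np.where(np.abs(x) < abs(a), 1.0 / (2.0 * abs(a)), 0.0)

# g_{a_1,a_2} = f_{a_1} * f_{a_2}, approximated by a discrete convolution
g = np.convolve(f(1.0), f(0.5), mode="same") * h
print(g.sum() * h)     # ~ 1: the convolution is again a probability density
print(x[g > 1e-9][0])  # ~ -(|a_1| + |a_2|) = -1.5: the Titchmarsh support edge
```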

5 Convex lineability of sets of sequences of random variables

5.1 Convergence of distribution functions and convergence of densities

Let us suppose that F and \(F_n\) (with \(n\in {\mathbb {N}}\)) are absolutely continuous distribution functions with densities f and \(f_n,\) respectively. By the Scheffé theorem, if \(\lim _{n\rightarrow \infty }f_n(x)=f(x)\) for almost all \(x\in {\mathbb {R}},\) then \(\left\{ F_n\right\} _{n=1}^{\infty }\) converges to F at every point of \({\mathbb {R}}\) (see [15, Theorem 16.12]). However, the converse result does not hold in general (see [38, Section 14.9]). In this sense, we have the following result that relates both modes of convergence from the point of view of lineability:

Theorem 16

Let \({\mathcal {S}}\) be the set of all sequences \(\left\{ F_n\right\} _{n=1}^{\infty }\) of absolutely continuous distribution functions such that \(\left\{ F_n\right\} _{n=1}^{\infty }\) converges at every point of \({\mathbb {R}},\) but the sequence of their densities \(\left\{ f_n\right\} _{n=1}^{\infty }\) does not converge at almost any point of [0, 1]. Then \({\mathcal {S}}\) is \({\mathfrak {c}}\)-convex lineable.

Proof

For each \(b\in (0,+\infty )\) and \(n\in {\mathbb {N}},\) define

$$\begin{aligned} F_{b,n}(x) = {\left\{ \begin{array}{ll} 0 &{} \text {if } x<0, \\ \left( x-\frac{\sin (2\pi nx)}{2\pi n}\right) e^{b(x-1)} &{} \text {if } x\in [0,1], \\ 1 &{} \text {if } x>1. \end{array}\right. } \end{aligned}$$

In order to prove that \(F_{b,n}\) is a distribution function, we only need to show that \(F_{b,n}\) is nondecreasing. To do that, we will study its derivative on (0, 1),  which will be denoted by \(f_{b,n}.\) Thus, if \(0<x<1,\) then

$$\begin{aligned} f_{b,n}(x)=\left( 1-\cos (2\pi nx) + b\left( x-\frac{\sin 2\pi nx}{2\pi n}\right) \right) e^{b(x-1)}. \end{aligned}$$

Let \(g_n(x)=x-\frac{\sin (2\pi nx)}{2\pi n}\) and observe that \(g_n'(x)=1-\cos (2\pi nx)>0\) for all \(x\in (0,1),\) which means that \(g_n\) is strictly increasing on (0, 1). Since \(g_n(0)=0,\) it follows that \(g_n(x)>0\) for all \(x\in (0,1).\) Therefore, \(f_{b,n}(x)>0\) for all \(x\in (0,1),\) which implies that \(F_{b,n}\) is increasing on (0, 1) and, consequently, it is a distribution function. We also consider another distribution function:

$$\begin{aligned} G_b(x)= {\left\{ \begin{array}{ll} 0 &{} \text {if } x<0, \\ xe^{b(x-1)} &{} \text {if } x\in [0,1], \\ 1 &{} \text {if } x>1. \end{array}\right. } \end{aligned}$$

The set \({\mathcal {M}}=\left\{ \left\{ F_{b,n}\right\} _{n=1}^{\infty }:b\in (0,+\infty )\right\} \) is linearly independent. Indeed, let us suppose that \(0<b_1<\cdots <b_k\) and \(a_1,\ldots ,a_k\in {\mathbb {R}}\backslash \{0\}.\) If for some \(n\in {\mathbb {N}}\) we had

$$\begin{aligned} \sum _{i=1}^{k}a_iF_{b_i,n}(x) = \sum _{i=1}^{k}a_i\left( x-\frac{\sin (2\pi nx)}{2\pi n}\right) e^{b_i(x-1)} = 0 \end{aligned}$$

for every \(x\in [0,1],\) the Identity Theorem for holomorphic functions would imply that

$$\begin{aligned} \sum _{i=1}^{k}a_i\left( x-\frac{\sin (2\pi nx)}{2\pi n}\right) e^{b_i(x-1)} = 0 \end{aligned}$$

at every point x of the complex plane. As a consequence, it would follow that

$$\begin{aligned} 0&= \lim _{x\rightarrow +\infty } \frac{\sum _{i=1}^{k}a_i\left( x-\frac{\sin (2\pi nx)}{2\pi n}\right) e^{b_i(x-1)}}{a_ke^{b_k(x-1)}}\\&= \lim _{x\rightarrow +\infty }\left( x-\frac{\sin (2\pi nx)}{2\pi n}\right) + \lim _{x\rightarrow +\infty } \sum _{i=1}^{k-1}\frac{a_i\left( x-\frac{\sin (2\pi nx)}{2\pi n}\right) }{a_ke^{(b_k-b_i)(x-1)}} = +\infty . \end{aligned}$$

This contradiction shows that, for every \(n\in {\mathbb {N}},\) \(\sum _{i=1}^{k}a_iF_{b_i,n}\) is not identically zero on [0, 1]. Therefore, \(\left\{ \sum _{i=1}^{k}a_iF_{b_i,n}\right\} _{n=1}^{\infty }\) is not the zero sequence and then the elements of \({\mathcal {M}}\) are linearly independent.

Next we will prove that every convex combination of elements of \({\mathcal {M}}\) belongs to \({\mathcal {S}}.\) Let us suppose again that \(0<b_1<\cdots <b_k,\) \(a_1,\ldots ,a_k\in [0,1],\) and \(\sum _{i=1}^{k}a_i=1.\) The sum \(\sum _{i=1}^{k}a_iF_{b_i,n}\) is again a distribution function, and it satisfies

$$\begin{aligned} \lim _{n\rightarrow \infty }\sum _{i=1}^{k}a_{i}F_{b_i,n}(x) = \sum _{i=1}^{k}a_{i}G_{b_i}(x) \end{aligned}$$

for all \(x\in {\mathbb {R}},\) since on [0, 1] we have \(|F_{b_i,n}(x)-G_{b_i}(x)|=\frac{|\sin (2\pi nx)|}{2\pi n}e^{b_i(x-1)}\le \frac{1}{2\pi n}\rightarrow 0,\) while both functions coincide outside [0, 1].

Now let us see that the sequence of densities \(\left\{ \sum _{i=1}^{k}a_{i}f_{b_i,n}\right\} _{n=1}^{\infty }\) does not converge at any point of (0, 1). If \(x\in (0,1),\) then

$$\begin{aligned} \sum _{i=1}^{k} a_{i}f_{b_i,n}(x) = \sum _{i=1}^{k}a_i\left( 1+b_i\left( x-\frac{\sin (2\pi nx)}{2\pi n}\right) \right) e^{b_i(x-1)} - \cos (2\pi nx)\sum _{i=1}^{k}a_ie^{b_i(x-1)}. \end{aligned}$$

Note that

$$\begin{aligned} \lim _{n\rightarrow \infty } \sum _{i=1}^{k}a_i\left( 1+b_i\left( x-\frac{\sin (2\pi nx)}{2\pi n}\right) \right) e^{b_i(x-1)} = \sum _{i=1}^{k}a_i\left( 1+b_ix\right) e^{b_i(x-1)}. \end{aligned}$$

Moreover, \(\sum _{i=1}^{k}a_{i}e^{b_i(x-1)}\) is strictly positive and does not depend on n. Hence, if \(\left\{ \sum _{i=1}^{k}a_{i}f_{b_i,n}(x)\right\} _{n=1}^{\infty }\) were convergent, then \(\left\{ \cos (2\pi nx)\right\} _{n=1}^{\infty }\) would converge as well, which is not possible. On the one hand, if \(x\in {\mathbb {Q}},\) say \(x=p/q,\) then that sequence is periodic and non-constant (for \(x\in (0,1)\) we have \(\cos (2\pi x)<1=\cos (2\pi qx)\)), hence divergent. On the other hand, if \(x\notin {\mathbb {Q}},\) then \(\left\{ (\cos (2\pi nx),\sin (2\pi nx))\right\} _{n=1}^{\infty }\) is dense in the unit circle and so \(\left\{ \cos (2\pi nx)\right\} _{n=1}^{\infty }\) is dense in \([-1,1]\) (see [30, Proposition 4.1.1]). Therefore, \(\left\{ \sum _{i=1}^{k}a_{i}f_{b_i,n}\right\} _{n=1}^{\infty }\) does not converge at any point of (0, 1). Since densities are unique up to almost-everywhere equality, any other choice of the densities also fails to converge at almost every point of (0, 1). \(\square \)
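The dichotomy used at the end of the proof is easy to visualize numerically (an informal sketch; the two sample points are our own choice):

```python
# The divergence of {cos(2*pi*n*x)}: periodic and non-constant for rational x,
# dense in [-1, 1] for irrational x. (Illustrative sketch.)
import numpy as np

n = np.arange(1, 100_001)
for x, label in ((1 / 3, "rational x = 1/3"), (np.sqrt(2) - 1, "irrational x = sqrt(2)-1")):
    c = np.cos(2 * np.pi * n * x)
    # A convergent sequence would have a shrinking tail oscillation; here the
    # tail keeps attaining values far apart.
    print(label, "tail min ~", c[-1000:].min(), "tail max ~", c[-1000:].max())
```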

5.2 Convergence in distribution and convergence in variation

Let us suppose again that F and \(F_n\) (with \(n\in {\mathbb {N}}\)) are absolutely continuous distribution functions with density functions f and \(f_n,\) respectively. If \(\left\{ F_n\right\} _{n=1}^{\infty }\) converges in variation to F,  then for each \(x\in {\mathbb {R}}\) we have

$$\begin{aligned} \lim _{n\rightarrow \infty }|F_n(x)-F(x)|&= \lim _{n\rightarrow \infty } \left| \int _{-\infty }^{x}f_n(t)dt-\int _{-\infty }^{x}f(t)dt\right| \\&\le \lim _{n\rightarrow \infty } \int _{-\infty }^{+\infty }|f_n(t)-f(t)|dt = 0. \end{aligned}$$

Therefore, \(\left\{ F_n\right\} _{n=1}^{\infty }\) converges to F at every point of \({\mathbb {R}}\) and, in particular, weakly. However, the converse is not true in general (see [38, Section 14.12]). The next result shows that the set of sequences of absolutely continuous distribution functions converging pointwise but not in variation contains the convex hull of a linearly independent family of cardinality \({\mathfrak {c}}.\)

Theorem 17

Let F denote the function on \({\mathbb {R}}\) defined as follows:

$$\begin{aligned} F(x)={\left\{ \begin{array}{ll} 0 &{} \text {if } x\le 0, \\ x &{} \text {if } 0<x<1, \\ 1 &{} \text {if } x\ge 1. \end{array}\right. } \end{aligned}$$

Let \({\mathcal {S}}\) denote the set of all sequences of absolutely continuous distribution functions that converge to F at every point of \({\mathbb {R}}\) but do not converge to F in variation. Then \({\mathcal {S}}\) is \({\mathfrak {c}}\)-convex lineable.

Proof

We will work restricted to [0, 1],  since all the distribution functions considered in this proof take the value 0 on \((-\infty ,0]\) and the value 1 on \([1,+\infty ).\) Given \(b\in (0,1)\) and \(n\in {\mathbb {N}},\) we define the function \(g_{b,n}\) as follows:

$$\begin{aligned} g_{b,n}(t)={\left\{ \begin{array}{ll} \frac{1}{b} &{} \text {if } t\in \left[ \frac{j}{n},\frac{j+b}{n}\right] \text { for some } j\in \left\{ 0,\ldots ,n-1\right\} , \\ 0 &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$

Note that \(\int _{0}^{1}g_{b,n}(t)dt=1,\) since each of the n intervals of the support contributes \(\frac{1}{b}\cdot \frac{b}{n}=\frac{1}{n}.\) Let \(G_{b,n}\) be the distribution function associated with the density \(g_{b,n}.\) For each \(x\in [0,1]\) there exists \(j\in \{0,\ldots ,n-1\}\) such that either \(x\in \left[ \frac{j}{n},\frac{j+b}{n}\right] \) or \(x\in \left[ \frac{j+b}{n},\frac{j+1}{n}\right] .\) On the one hand, if \(x\in \left[ \frac{j}{n},\frac{j+b}{n}\right] \) for some \(j\in \{0,\ldots ,n-1\},\) then

$$\begin{aligned} G_{b,n}(x) = \int _{0}^{x}g_{b,n}(t)dt = \sum _{i=0}^{j-1}\frac{1}{b}\cdot \frac{b}{n} + \int _{j/n}^{x}\frac{1}{b}dt = \frac{j}{n}-\frac{j}{nb} + \frac{x}{b}. \end{aligned}$$
(5.1)

Hence,

$$\begin{aligned} G_{b,n}(x)-x = \frac{j}{n} - \frac{j}{nb} + \frac{1-b}{b}x \le \frac{j}{n} - \frac{j}{nb} + \frac{1-b}{b}\cdot \frac{j+b}{n} = \frac{1-b}{n} \end{aligned}$$

and

$$\begin{aligned} G_{b,n}(x)-x = \frac{j}{n} - \frac{j}{nb} + \frac{1-b}{b}x \ge \frac{j}{n} - \frac{j}{nb} + \frac{1-b}{b}\cdot \frac{j}{n} = 0. \end{aligned}$$

On the other hand, if \(x\in \left[ \frac{j+b}{n},\frac{j+1}{n}\right] \) for some \(j\in \{0,\ldots ,n-1\},\) then

$$\begin{aligned} G_{b,n}(x) = \int _{0}^{x}g_{b,n}(t)dt = \int _{0}^{\frac{j+b}{n}}g_{b,n}(t)dt = \sum _{i=0}^{j}\frac{1}{b}\cdot \frac{b}{n} = \frac{j+1}{n}. \end{aligned}$$
(5.2)

Hence,

$$\begin{aligned} G_{b,n}(x)-x = \frac{j+1}{n}-x \le \frac{j+1}{n}-\frac{j+b}{n} = \frac{1-b}{n} \end{aligned}$$

and

$$\begin{aligned} G_{b,n}(x)-x = \frac{j+1}{n}-x \ge \frac{j+1}{n}-\frac{j+1}{n} = 0. \end{aligned}$$

Therefore, we obtain that

$$\begin{aligned} 0 \le G_{b,n}(x)-x \le \frac{1-b}{n} < \frac{1}{n} \end{aligned}$$
(5.3)

for every \(x\in [0,1].\)
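The bound (5.3) can be checked numerically using a closed form for \(G_{b,n}\) that combines (5.1) and (5.2) (an informal sketch; the helper names and parameter values are ours):

```python
# Numerical check of (5.3): 0 <= G_{b,n}(x) - x <= (1-b)/n on [0, 1].
# (Illustrative sketch; helper names are ours.)
import numpy as np

def G_bn(x, b, n):
    """Closed form of G_{b,n} on [0, 1], combining (5.1) and (5.2)."""
    j = np.floor(n * x)
    frac = n * x - j                    # position inside [j/n, (j+1)/n], scaled to [0, 1]
    return (j + np.minimum(frac, b) / b) / n

xs = np.linspace(0.0, 1.0, 10_001)
b, n = 0.25, 7
d = G_bn(xs, b, n) - xs
print(d.min() >= -1e-12, d.max() <= (1 - b) / n + 1e-12)   # both True
```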

Let us prove that the set \(\left\{ G_{b,1}: b\in (0,1)\right\} \) is linearly independent. Let \(k\in {\mathbb {N}},\) \(0<b_1<\cdots<b_k<1,\) and \(a_1,\ldots ,a_k\in {\mathbb {R}}\backslash \{0\}\) (with the convention \(b_0=0,\) which is only needed when \(k=1\)). If \(n=1\) and \(x\in \left[ b_{k-1},b_k\right] ,\) then the equalities in (5.2) imply that \(G_{b_i,1}(x)=1\) for every \(i\in \{1,\ldots ,k-1\},\) while \(G_{b_k,1}(x)=\frac{x}{b_k}\) by (5.1). Therefore, for \(x\in [b_{k-1},b_k],\) we have

$$\begin{aligned} \sum _{i=1}^{k}a_iG_{b_i,1}(x) = a_1+\cdots +a_{k-1}+a_k\frac{x}{b_k}. \end{aligned}$$

Since \(a_k\ne 0,\) the right-hand side is a non-constant affine function of x,  so it cannot happen that \(\sum _{i=1}^{k}a_iG_{b_i,1}(x)=0\) for all \(x\in [b_{k-1},b_k].\) This proves that the set of functions \(\left\{ G_{b,1}: b\in (0,1)\right\} \) is linearly independent, which in turn implies that the set of sequences

$$\begin{aligned} {\mathcal {M}} = \left\{ \left\{ G_{b,n}\right\} _{n=1}^{\infty } : b\in (0,1)\right\} \end{aligned}$$

is linearly independent as well.
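Since \(G_{b,1}(x)=\min \{x/b,1\}\) for \(x\in [0,1],\) the independence of finitely many of these functions can also be observed numerically (an informal sketch):

```python
# Sampled G_{b,1}(x) = min(x/b, 1) for several b: the resulting matrix has
# full rank, mirroring the independence just proved. (Illustrative sketch.)
import numpy as np

xs = np.linspace(0.0, 1.0, 1001)
bs = np.linspace(0.1, 0.9, 9)
M = np.array([np.minimum(xs / b, 1.0) for b in bs])   # one row per value of b
print(np.linalg.matrix_rank(M))                       # expected: 9
```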

Next we will prove that every convex combination of elements of the set \({\mathcal {M}}\) belongs to \({\mathcal {S}}.\) Let \(k\in {\mathbb {N}},\) \(0<b_1<\cdots<b_k<1,\) and \(a_1,\ldots ,a_k\in [0,1]\) such that \(\sum _{i=1}^{k}a_i=1.\) By the inequalities in (5.3), if \(x\in [0,1],\) then

$$\begin{aligned} \left| \sum _{i=1}^{k}a_iG_{b_i,n}(x)-x\right|&= \left| \sum _{i=1}^{k}a_iG_{b_i,n}(x)-\sum _{i=1}^{k}a_ix\right| \\&\le \sum _{i=1}^{k}a_i|G_{b_i,n}(x)-x| < \frac{1}{n}. \end{aligned}$$

Therefore, \(\left\{ \sum _{i=1}^{k}a_iG_{b_i,n}\right\} _{n=1}^{\infty }\) converges to F at every point of \({\mathbb {R}}.\)

Finally, since the density of \(\sum _{i=1}^{k}a_iG_{b_i,n}\) is \(\sum _{i=1}^{k}a_{i}g_{b_i,n}\) and the density of F is 1 on (0, 1),  the variation between F and \(\sum _{i=1}^{k}a_iG_{b_i,n}\) is equal to

$$\begin{aligned} \int _{0}^{1}\left| 1-\sum _{i=1}^{k}a_{i}g_{b_i,n}(x)\right| dx. \end{aligned}$$

Since \(b_1<\cdots <b_k,\) we have that

$$\begin{aligned} \left[ \frac{j+b_k}{n},\frac{j+1}{n}\right] \subset \left[ \frac{j+b_i}{n},\frac{j+1}{n}\right] \end{aligned}$$

for every \(i\in \{1,\ldots ,k\}\) and every \(j\in \{0,\ldots ,n-1\}.\) Therefore, if \(i\in \{1,\ldots ,k\}\) and \(j\in \{0,\ldots ,n-1\},\) then \(g_{b_i,n}(x)=0\) for almost every \(x\in \left[ \frac{j+b_k}{n},\frac{j+1}{n}\right] \) (the only possible exception being the endpoint \(\frac{j+1}{n}\)). Hence,

$$\begin{aligned} \int _{0}^{1}\left| 1-\sum _{i=1}^{k}a_{i}g_{b_i,n}(x)\right| dx&\ge \sum _{j=0}^{n-1}\int _{\frac{j+b_k}{n}}^{\frac{j+1}{n}}\left| 1-\sum _{i=1}^{k}a_ig_{b_i,n}(x)\right| dx \\&= \sum _{j=0}^{n-1}\int _{\frac{j+b_k}{n}}^{\frac{j+1}{n}}dx = 1-b_k > 0. \end{aligned}$$

Consequently, \(\left\{ \sum _{i=1}^{k}a_iG_{b_i,n}\right\} _{n=1}^{\infty }\) does not converge to F in variation and the proof is complete. \(\square \)
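The two conclusions of the proof can be observed side by side in a small numerical experiment (an informal sketch; all names and parameter values are ours): the sup-distance between the convex combination and F drops below \(1/n,\) while the variation stays above \(1-b_k\):

```python
# Numerical illustration of the proof's two conclusions: the convex combination
# converges to F uniformly, yet its variation distance to F stays >= 1 - b_k.
# (Illustrative sketch; all names are ours.)
import numpy as np

def G_bn(x, b, n):
    """Closed form of G_{b,n} on [0, 1], as in (5.1) and (5.2)."""
    j = np.floor(n * x)
    return (j + np.minimum(n * x - j, b) / b) / n

def g_bn(x, b, n):
    """Density g_{b,n}: 1/b on each [j/n, (j+b)/n], 0 elsewhere."""
    return np.where(n * x - np.floor(n * x) <= b, 1.0 / b, 0.0)

xs = np.linspace(0.0, 1.0, 200_001)
bs, weights = (0.2, 0.5, 0.8), (0.3, 0.3, 0.4)        # convex weights, sum = 1
for n in (1, 10, 100):
    mix_G = sum(a * G_bn(xs, b, n) for a, b in zip(weights, bs))
    mix_g = sum(a * g_bn(xs, b, n) for a, b in zip(weights, bs))
    sup_dist = np.max(np.abs(mix_G - xs))     # < 1/n, by the proof
    variation = np.mean(np.abs(1.0 - mix_g))  # Riemann estimate; stays >= 1 - 0.8
    print(n, sup_dist, variation)
```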