1 Introduction and Notations

In the classical framework of probability theory, the expectation is a linear functional of random variables defined on a measurable space, and the probability is an additive function of events on this space. The sub-linear expectation extends the expectation by relaxing its linearity to sub-linearity, and its related probability, called capacity, is no longer additive. Under the sub-linear expectation, Peng [7,8,9, 11] gave the notions of G-normal distributions, G-Brownian motions, G-martingales, independence of random variables, identical distribution of random variables and so on. Peng [9, 11] and Krylov [5] established the central limit theorem for independent and identically distributed (i.i.d.) random variables. Zhang [16] obtained the Lindeberg central limit theorem for independent but not necessarily identically distributed one-dimensional random variables as well as for martingale-like sequences. In this paper, we consider multi-dimensional martingale-like random vectors. In the classical probability space, the convergence in distribution of a sequence of random vectors \({\varvec{X}}_n=(X_{n,1},\ldots ,X_{n,d})\) is equivalent, by the Cramér–Wold device, to the convergence in distribution of every linear function \(\sum _k\alpha _kX_{n,k}\) of \({\varvec{X}}_n\), so the central limit theorem for random vectors follows directly from the one-dimensional central limit theorem. Under the sub-linear expectation, due to the nonlinearity, the Cramér–Wold device is no longer valid for showing the convergence of random vectors. In this paper, we derive the functional central limit theorem for martingale-like random vectors under the Lindeberg condition. 
As applications, we establish the Lindeberg central limit theorem for independent random vectors, give sufficient and necessary conditions for the central limit theorem for independent and identically distributed random vectors, and obtain a Lévy characterization of the multi-dimensional G-Brownian motion.

We use the framework and notations of Peng [8, 9, 11]. If the reader is familiar with these notations, the remainder of this section can be skipped. Let \((\Omega ,\mathcal F)\) be a given measurable space, and let \(\mathscr {H}\) be a linear space of real functions defined on \((\Omega ,\mathcal F)\) such that if \(X_1,\ldots , X_n \in \mathscr {H}\) then \(\varphi (X_1,\ldots ,X_n)\in \mathscr {H}\) for each \(\varphi \in C_{l,Lip}(\mathbb R^n)\), where \(C_{l,Lip}(\mathbb R^n)\) denotes the linear space of (locally Lipschitz) functions \(\varphi \) satisfying

$$\begin{aligned} |\varphi ({\varvec{x}}) - \varphi ({\varvec{y}})| \le C(1 + |{\varvec{x}}|^m + |{\varvec{y}}|^m)|{\varvec{x}}- {\varvec{y}}|, \;\; \forall {\varvec{x}}, {\varvec{y}} \in \mathbb R^n, \end{aligned}$$

for some \(C > 0\) and \(m \in \mathbb N\) depending on \(\varphi \).

\(\mathscr {H}\) is considered as a space of “random variables”; for such a random variable we write \(X\in \mathscr {H}\). We also denote the space of bounded Lipschitz functions and the space of bounded continuous functions on \(\mathbb R^n\) by \(C_{b,Lip}(\mathbb R^n)\) and \(C_b(\mathbb R^n)\), respectively. A sub-linear expectation \(\widehat{\mathbb E}\) on \(\mathscr {H}\) is a function \(\widehat{\mathbb E}: \mathscr {H}\rightarrow \overline{\mathbb R}\) satisfying the following properties: for all \(X, Y \in \mathscr {H}\),

  1. (1)

    Monotonicity: If \(X \ge Y\) then \(\widehat{\mathbb E}[X]\ge \widehat{\mathbb E}[Y]\);

  2. (2)

    Constant preserving: \(\widehat{\mathbb E}[c] = c\);

  3. (3)

    Sub-additivity: \(\widehat{\mathbb E}[X+Y]\le \widehat{\mathbb E}[X] +\widehat{\mathbb E}[Y ]\) whenever \(\widehat{\mathbb E}[X] +\widehat{\mathbb E}[Y ]\) is not of the form \(+\infty -\infty \) or \(-\infty +\infty \);

  4. (4)

    Positive homogeneity: \(\widehat{\mathbb E}[\lambda X] = \lambda \widehat{\mathbb E}[X]\), \(\lambda \ge 0\).

Here, \(\overline{\mathbb R}=[-\infty , \infty ]\). The triple \((\Omega , \mathscr {H}, \widehat{\mathbb E})\) is called a sub-linear expectation space. Given a sub-linear expectation \(\widehat{\mathbb E}\), let us denote the conjugate expectation \(\widehat{\mathcal E}\) of \(\widehat{\mathbb E}\) by \( \widehat{\mathcal E}[X]:=-\widehat{\mathbb E}[-X]\), \( \forall X\in \mathscr {H}\). If X is not in \(\mathscr {H}\), we define its sub-linear expectation by \(\widehat{\mathbb E}^{*}[X]=\inf \{\widehat{\mathbb E}[Y]: X\le Y\in \mathscr {H}\}\). When there is no ambiguity, we also denote it by \(\widehat{\mathbb E}\).
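A sub-linear expectation can be realized concretely as a supremum of linear expectations over a family of probability measures, as in (1.3) below. The following minimal sketch (the state space and the family of measures are illustrative, not from the paper) checks properties (3) and (4) and the conjugate relation numerically:

```python
import numpy as np

# Omega = {w0, w1, w2}; each row of `probs` is one probability measure P.
probs = np.array([
    [0.2, 0.5, 0.3],
    [0.4, 0.4, 0.2],
    [0.1, 0.3, 0.6],
])

def E_hat(X):
    """Sub-linear expectation sup_P E_P[X], for X given pointwise on Omega."""
    return float(np.max(probs @ X))

def E_conj(X):
    """Conjugate expectation: E_conj[X] = -E_hat[-X]."""
    return -E_hat(-X)

X = np.array([1.0, -2.0, 3.0])
Y = np.array([0.5, 1.0, -1.0])

# (3) Sub-additivity and (4) positive homogeneity hold for any such sup:
assert E_hat(X + Y) <= E_hat(X) + E_hat(Y) + 1e-12
assert abs(E_hat(2.5 * X) - 2.5 * E_hat(X)) < 1e-12
# The conjugate (lower) expectation never exceeds the sub-linear one:
assert E_conj(X) <= E_hat(X)
```

Monotonicity and constant preservation can be checked the same way; they are immediate since each row of `probs` is a probability vector.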

After having the sub-linear expectation, we consider the capacities. Let \(\mathcal G\subset \mathcal F\). A function \(V:\mathcal G\rightarrow [0,1]\) is called a capacity if

$$\begin{aligned} V(\emptyset )=0, \;V(\Omega )=1 \; \text { and } V(A)\le V(B)\;\; \forall \; A\subset B, \; A,B\in \mathcal G. \end{aligned}$$

It is said to be sub-additive if \(V(A\bigcup B)\le V(A)+V(B)\) for all \(A,B\in \mathcal G\) with \(A\bigcup B\in \mathcal G\). Let \((\Omega , \mathscr {H}, \widehat{\mathbb E})\) be a sub-linear expectation space. In this paper, we let \((\mathbb V,\mathcal V)\) denote a pair of capacities with the properties that

$$\begin{aligned} \widehat{\mathbb E}[f]\le \mathbb V(A)\le \widehat{\mathbb E}[g]\;\; \text { if } f\le I_A\le g,\; f,g\in \mathscr {H} \text { and } A\in \mathcal F, \text { and } \mathbb V\text { is sub-additive}, \end{aligned}$$
(1.1)

and \(\mathcal V(A):= 1-\mathbb V(A^c)\), \(A\in \mathcal F\). It is obvious that \( \mathcal V(A\bigcup B)\le \mathcal V(A)+\mathbb V(B). \) We call \(\mathbb V\) and \(\mathcal V\) the upper and the lower capacity, respectively. In general, we can choose \(\mathbb V\) as

$$\begin{aligned} \mathbb V(A):=\inf \{\widehat{\mathbb E}[\xi ]: I_A\le \xi , \xi \in \mathscr {H}\},\;\; \forall A\in \mathcal F. \end{aligned}$$
(1.2)

To distinguish this capacity from others, we denote it by \(\widehat{\mathbb V}\), and set \(\widehat{\mathcal V}(A)=1-\widehat{\mathbb V}(A^c)\). \(\widehat{\mathbb V}\) is the largest capacity satisfying (1.1).

When there exists a family \(\mathscr {P}\) of probability measures on \((\Omega ,\mathcal F)\) such that

$$\begin{aligned} \widehat{\mathbb E}[X]=\sup _{P\in \mathscr {P}}P[X]=:\sup _{P\in \mathscr {P}}\int X\hbox {d}P , \end{aligned}$$
(1.3)

\(\mathbb V\) can be defined as

$$\begin{aligned} \mathbb V(A)=\sup _{P\in \mathscr {P}}P(A). \end{aligned}$$
(1.4)

We denote this capacity by \(\mathbb V^{\mathscr {P}}\), and \(\mathcal V^{\mathscr {P}}(A)=1-\mathbb V^{\mathscr {P}}(A)\).
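When the sub-linear expectation comes from a finite family of measures, the upper capacity (1.4) and its conjugate are straightforward to compute. A sketch with an illustrative two-measure family on a three-point space:

```python
import numpy as np

# Each row of `probs` is one measure P; events are 0/1 indicator vectors.
probs = np.array([
    [0.2, 0.5, 0.3],
    [0.4, 0.4, 0.2],
])

def V_upper(A):
    """Upper capacity V(A) = sup_P P(A) as in (1.4)."""
    return float(np.max(probs @ A))

def V_lower(A):
    """Lower capacity: 1 - V(A^c)."""
    return 1.0 - V_upper(1 - A)

A = np.array([1.0, 0.0, 1.0])   # the event {w0, w2}
B = np.array([0.0, 1.0, 0.0])   # the event {w1}

assert V_lower(A) <= V_upper(A)          # lower never exceeds upper
# Sub-additivity of the upper capacity:
assert V_upper(np.maximum(A, B)) <= V_upper(A) + V_upper(B) + 1e-12
```

Note that `V_lower` is in general strictly smaller than `V_upper`, reflecting the non-additivity of capacities.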

If \(\mathbb V_1\) and \(\mathbb V_2\) are two capacities having the property (1.1), then for any random variable \(X\in \mathscr {H}\),

$$\begin{aligned} \mathbb V_1(X\ge x+\epsilon )\le \mathbb V_2(X\ge x)\le \mathbb V_1(X\ge x-\epsilon )\;\; \text { for all } \epsilon >0 \text { and } x. \end{aligned}$$
(1.5)

In fact, let \(f, g\in C_{b,Lip}(\mathbb R)\) be such that \(I\{y\ge x+\epsilon \}\le f(y)\le I\{y\ge x\}\le g(y)\le I\{y\ge x-\epsilon \}\). Then,

$$\begin{aligned} \mathbb V_1(X\ge x+\epsilon )\le \widehat{\mathbb E}[f(X)]\le \mathbb V_2(X\ge x)\le \widehat{\mathbb E}[g(X)]\le \mathbb V_1(X\ge x-\epsilon ). \end{aligned}$$

It follows from (1.5) that

$$\begin{aligned} \mathbb V_1(X\ge x)=\mathbb V_2(X\ge x)\;\; \text { and } \;\; \mathbb V_1(X> x)=\mathbb V_2(X> x) \end{aligned}$$

for all but countably many x. In this paper, the events we consider are mostly of the type \(\{X\ge x\}\) or \(\{X> x\}\), and so the results do not depend on the capacity chosen.

Next, we recall the notions of identical distribution and independence.

Definition 1.1

(Peng [9, 11])

  1. (i)

    (Identical distribution) Let \({\varvec{X}}_1\) and \({\varvec{X}}_2\) be two n-dimensional random vectors defined, respectively, in sub-linear expectation spaces \((\Omega _1, \mathscr {H}_1, \widehat{\mathbb E}_1)\) and \((\Omega _2, \mathscr {H}_2, \widehat{\mathbb E}_2)\). They are called identically distributed, denoted by \({\varvec{X}}_1\overset{d}{=} {\varvec{X}}_2\), if

    $$\begin{aligned} \widehat{\mathbb E}_1[\varphi ({\varvec{X}}_1)]=\widehat{\mathbb E}_2[\varphi ({\varvec{X}}_2)], \;\; \forall \varphi \in C_{l,Lip}(\mathbb R^n), \end{aligned}$$

    whenever the sub-linear expectations are finite. A sequence \(\{X_n;n\ge 1\}\) of random variables (or random vectors) is said to be identically distributed if \(X_i\overset{d}{=} X_1\) for each \(i\ge 1\).

  2. (ii)

    (Independence) In a sub-linear expectation space \((\Omega , \mathscr {H}, \widehat{\mathbb E})\), a random vector \({\varvec{Y}} = (Y_1, \ldots , Y_n)\), \(Y_i \in \mathscr {H}\), is said to be independent of another random vector \({\varvec{X}} = (X_1, \ldots , X_m)\), \(X_i \in \mathscr {H}\), under \(\widehat{\mathbb E}\) if for each test function \(\varphi \in C_{l,Lip}(\mathbb R^m \times \mathbb R^n)\), we have \( \widehat{\mathbb E}[\varphi ({\varvec{X}}, {\varvec{Y}} )] = \widehat{\mathbb E}\big [\widehat{\mathbb E}[\varphi ({\varvec{x}}, {\varvec{Y}} )]\big |_{{\varvec{x}}={\varvec{X}}}\big ],\) whenever \(\overline{\varphi }({\varvec{x}}):=\widehat{\mathbb E}\left[ |\varphi ({\varvec{x}}, {\varvec{Y}} )|\right] <\infty \) for all \({\varvec{x}}\) and \(\widehat{\mathbb E}\left[ |\overline{\varphi }({\varvec{X}})|\right] <\infty \). Random variables (or random vectors) \(X_1,\ldots , X_n\) are said to be independent if for each \(2\le k\le n\), \(X_k\) is independent of \((X_1,\ldots , X_{k-1})\). A sequence of random variables (or random vectors) is said to be independent if for each n, \(X_1,\ldots , X_n\) are independent.
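The defining identity \(\widehat{\mathbb E}[\varphi ({\varvec{X}}, {\varvec{Y}} )] = \widehat{\mathbb E}\big [\widehat{\mathbb E}[\varphi ({\varvec{x}}, {\varvec{Y}} )]\big |_{{\varvec{x}}={\varvec{X}}}\big ]\) is constructive: it can be evaluated directly by iterating the two marginal suprema. A sketch with illustrative two-point marginals, each carrying its own finite family of measures:

```python
import numpy as np

# Marginal laws: values and a finite family of measures for X and for Y.
x_vals = np.array([-1.0, 1.0]); x_probs = np.array([[0.5, 0.5], [0.3, 0.7]])
y_vals = np.array([-1.0, 1.0]); y_probs = np.array([[0.5, 0.5], [0.6, 0.4]])

def E_iterated(phi):
    """E_hat[phi(X, Y)] via Peng's iterated definition of independence."""
    # Inner step: x -> sup_Q E_Q[phi(x, Y)], Y's expectation with x frozen.
    inner = np.array([np.max(y_probs @ phi(x, y_vals)) for x in x_vals])
    # Outer step: sup_P E_P[inner(X)].
    return float(np.max(x_probs @ inner))

# Under sub-linearity the order of iteration matters: "Y independent of X"
# and "X independent of Y" are genuinely different statements.
value = E_iterated(lambda x, y: x * y)
```

This nested sup-of-averages structure is exactly why the Cramér–Wold device fails here: the outer supremum does not commute with linear combinations of the coordinates.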

Finally, we recall the notions of the G-normal distribution and G-Brownian motion, which were introduced by Peng [8, 9]. We denote by \(\mathbb S(d)\) the collection of all \(d\times d\) symmetric matrices. A function \(G:\mathbb S(d) \rightarrow \mathbb R\) is called a sub-linear function monotonic in \(A \in \mathbb S(d)\) if for each \(A, \overline{A}\in \mathbb S(d)\),

$$\begin{aligned} {\left\{ \begin{array}{ll} &{}G(A+\overline{A})\le G(A)+G(\overline{A}), \\ &{} G(\lambda A)=\lambda G(A), \;\; \forall \lambda >0,\\ &{} G( A)\ge G(\overline{A}), \;\; \text { if } A\ge \overline{A}. \end{array}\right. } \end{aligned}$$

Here, \(A\ge \overline{A}\) means that \(A- \overline{A}\) is positive semi-definite. G is continuous if \(|G(A)-G(\overline{A})|\rightarrow 0\) when \(\Vert A-\overline{A}\Vert _{\infty }\rightarrow 0\), where \(\Vert A-\overline{A}\Vert _{\infty }=\max _{i,j}|a_{ij}-\overline{a}_{ij}|\) for \(A=(a_{ij};i,j=1,\ldots ,d)\) and \(\overline{A}=(\overline{a}_{ij};i,j=1,\ldots ,d)\).

Definition 1.2

(G-normal random vector) Let \(G:\mathbb S(d) \rightarrow \mathbb R\) be a continuous sub-linear function monotonic in \(A \in \mathbb S(d)\). A d-dimensional random vector \({\varvec{\xi }}=(\xi _1,\ldots ,\xi _d)\) in a sub-linear expectation space \((\widetilde{\Omega }, \widetilde{\mathscr {H}}, \widetilde{\mathbb E})\) is called a G-normal distributed random vector (written as \({\varvec{\xi }}\sim N\big (0, G\big )\) under \(\widetilde{\mathbb E}\)) if for any \(\varphi \in C_{l,Lip}(\mathbb R^d)\), the function \(u({\varvec{x}},t)=\widetilde{\mathbb E}\left[ \varphi \left( {\varvec{x}}+\sqrt{t} {\varvec{\xi }}\right) \right] \) (\({\varvec{x}}\in \mathbb R^d, t\ge 0\)) is the unique viscosity solution of the following heat equation:

$$\begin{aligned} \partial _t u -\frac{1}{2} G\left( D^2 u\right) =0, \;\; u({\varvec{x}},0)=\varphi ({\varvec{x}}), \end{aligned}$$

where \(Du=\big (\partial _{x_i} u, i=1,\ldots ,d\big )\) and \(D^2u=D(Du)=\big (\partial _{x_i,x_j} u\big )_{i,j=1}^d\).

That \({\varvec{\xi }}\) is a G-normal distributed random vector is equivalent to the following: if \({\varvec{\xi }}^{\prime }\) is an independent copy of \({\varvec{\xi }}\), then

$$\begin{aligned} \widetilde{\mathbb E}\left[ \varphi (\alpha {\varvec{\xi }}+\beta {\varvec{\xi }}^{\prime })\right] =\widetilde{\mathbb E}\left[ \varphi \big (\sqrt{\alpha ^2+\beta ^2}{\varvec{\xi }}\big )\right] , \;\; \forall \varphi \in C_{l,Lip}(\mathbb R^d) \text { and } \forall \alpha ,\beta \ge 0, \end{aligned}$$

and \(G(A)=\widetilde{\mathbb E}\left[ \langle {\varvec{\xi }}A,{\varvec{\xi }}\rangle \right] \) (cf. Definition II.1.4 and Example II.1.13 of Peng [10]), where \(\langle {\varvec{x}},{\varvec{y}}\rangle \) is the scalar product of \({\varvec{x}}\) and \({\varvec{y}}\). When \(d=1\), G can be written as \(G(\alpha )=\alpha ^+\overline{\sigma }^2-\alpha ^-\underline{\sigma }^2\), and we write \(\xi \sim N(0,[\underline{\sigma }^2,\overline{\sigma }^2])\) if \(\xi \) is a G-normal distributed random variable (cf. Peng [11]).
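In the one-dimensional case \(\xi \sim N(0,[\underline{\sigma }^2,\overline{\sigma }^2])\), a standard consequence of the comparison principle for the heat equation is that for convex \(\varphi \) the sub-linear expectation \(\widetilde{\mathbb E}[\varphi (\xi )]\) coincides with the classical expectation under the largest variance, and for concave \(\varphi \) with the smallest. A minimal numerical sketch (variance bounds are illustrative), using Gauss-Hermite quadrature for the classical expectations:

```python
import numpy as np

# Gauss-Hermite nodes/weights: integral of exp(-x^2) f(x) ~ sum w_i f(x_i).
nodes, weights = np.polynomial.hermite.hermgauss(40)

def normal_E(phi, sigma):
    """Classical E[phi(sigma * Z)], Z standard normal, by quadrature."""
    return float(np.sum(weights * phi(sigma * np.sqrt(2.0) * nodes))
                 / np.sqrt(np.pi))

s_lo, s_hi = 0.5, 1.0   # illustrative sigma-underline, sigma-bar

# For the convex function x^2, the upper expectation is the upper
# variance s_hi^2, and the lower (conjugate) expectation is s_lo^2.
upper = normal_E(lambda x: x**2, s_hi)   # ~ s_hi^2 = 1.0
lower = normal_E(lambda x: x**2, s_lo)   # ~ s_lo^2 = 0.25
```

For test functions that are neither convex nor concave, the G-expectation is no longer attained at a constant volatility, which is what makes the G-normal genuinely non-classical.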

Definition 1.3

(G-Brownian motion) A d-dimensional random process \(({\varvec{W}}_t)_{t\ge 0}\) in the sub-linear expectation space \((\widetilde{\Omega }, \widetilde{\mathscr {H}}, \widetilde{\mathbb E})\) is called a G-Brownian motion if

  1. (i)

    \({\varvec{W}}_0={\varvec{0}}\);

  2. (ii)

    For each \(0\le t_1\le \ldots \le t_p\le t\le s\),

    $$\begin{aligned}&\widetilde{\mathbb E}\left[ \varphi \big ({\varvec{W}}_{t_1},\ldots , {\varvec{W}}_{t_p}, {\varvec{W}}_s-{\varvec{W}}_t\big )\right] \nonumber \\&= \widetilde{\mathbb E}\left[ \widetilde{\mathbb E}\left[ \varphi \big ({\varvec{x}}_1,\ldots , {\varvec{x}}_p, \sqrt{s-t}\,{\varvec{\xi }}\big )\right] \big |_{{\varvec{x}}_1={\varvec{W}}_{t_1},\ldots , {\varvec{x}}_p={\varvec{W}}_{t_p}}\right] \\&\;\; \forall \varphi \in C_{l,Lip}(\mathbb R^{d\times (p+1)}), \nonumber \end{aligned}$$
    (1.6)

    where \({\varvec{\xi }}\sim N(0,G)\).

Let \(C_{[0,\infty )}=C_{[0,\infty )}(\mathbb R^d)\) be the space of continuous \(\mathbb R^d\)-valued functions on \([0,\infty )\) equipped with the norm \(\Vert {\varvec{x}}\Vert =\sum \limits _{i=1}^{\infty }\sup \nolimits _{0\le t\le 2^i}(|{\varvec{x}}(t)|\wedge 1)/2^i\), where \(|{\varvec{y}}|\) is the Euclidean norm of \({\varvec{y}}\). Denote by \(C_b\big (C_{[0,\infty )}\big )\) the set of bounded continuous functions \(h:C_{[0,\infty )}\rightarrow \mathbb R\). As shown in Peng [8, 10] and Denis et al. [2], there is a sub-linear expectation space \(\big (\widetilde{\Omega }, \widetilde{\mathscr {H}},\widetilde{\mathbb E}\big )\) with \(\widetilde{\Omega }= C_{[0,\infty )}\) and \(C_b\big (\widetilde{\Omega }\big )\subset \widetilde{\mathscr {H}}\) such that \(\widetilde{\mathbb E}\) is countably sub-additive, \((\widetilde{\mathscr {H}}, \widetilde{\mathbb E}[\Vert \cdot \Vert ])\) is a Banach space, and the canonical process \(W(t)(\omega ) = \omega _t\) (\(\omega \in \widetilde{\Omega }\)) satisfies (i) and (ii). Further, there exists a weakly compact family of probability measures \(\mathscr {P}\) on \((\widetilde{\Omega }, \mathscr {B}_{\widetilde{\Omega }})\) such that

$$\begin{aligned} \widetilde{\mathbb E}[X]=\max _{P\in \mathscr {P}}\textsf {E}_P[X] \text { for } X\in \widetilde{\mathscr {H}}, \end{aligned}$$

where \(\mathscr {B}_{\widetilde{\Omega }}\) is the Borel \(\sigma \)-algebra on \(\widetilde{\Omega }\) (cf. Theorem 6.2.5 and Proposition 6.3.2 of Peng [10]). In the remainder of this paper, G-normal random vectors and G-Brownian motions are considered in \((\widetilde{\Omega }, \widetilde{\mathscr {H}}, \widetilde{\mathbb E})\).

2 Functional Central Limit Theorem for Martingale Vectors

On the sub-linear expectation space \((\Omega , \mathscr {H}, \widehat{\mathbb E})\), we write \(\eta _n\overset{d}{\rightarrow }\eta \) if \(\widehat{\mathbb E}\left[ \varphi (\eta _n)\right] \rightarrow \widehat{\mathbb E}\left[ \varphi (\eta )\right] \) for all bounded continuous functions \(\varphi \), \(\eta _n\overset{\mathbb V}{\rightarrow }\eta \) if \(\mathbb V\left( |\eta _n-\eta |\ge \epsilon \right) \rightarrow 0\) for any \(\epsilon >0\), \( \eta _n\le \eta +o(1)\) in capacity \(\mathbb V\) if \( (\eta _n-\eta )^+\overset{\mathbb V}{\rightarrow }0\), \(\eta _n\rightarrow \eta \) in \(L_p\) if \(\lim _n \widehat{\mathbb E}[|\eta _n-\eta |^p]=0\), and \( \eta _n\le \eta +o(1)\) in \(L_p\) if \( (\eta _n-\eta )^+\rightarrow 0\) in \(L_p\). We also write \(\xi \le \eta \) in \(L_p\) if \(\widehat{\mathbb E}[((\xi -\eta )^+)^p]=0\), \(\xi = \eta \) in \(L_p\) if \(\widehat{\mathbb E}[|\xi -\eta |^p]=0\), \(X\le Y\) in \(\mathbb V\) if \(\mathbb V\left( X-Y\ge \epsilon \right) =0\) for all \(\epsilon >0\), and \(X= Y\) in \(\mathbb V\) if both \(X\le Y\) and \(Y\le X\) hold in \(\mathbb V\).

We recall the definition of the conditional expectation under the sub-linear expectation. Let \((\Omega , \mathscr {H}, \widehat{\mathbb E})\) be a sub-linear expectation space. Let \(\mathscr {H}_{n,0}\subset \ldots \subset \mathscr {H}_{n,k_n}\) be subspaces of \(\mathscr {H}\) such that

  1. (i)

    each constant \(c\in \mathscr {H}_{n,k}\), and

  2. (ii)

    if \(X_1,\ldots ,X_d\in \mathscr {H}_{n,k}\), then \(\varphi (X_1,\ldots ,X_d)\in \mathscr {H}_{n,k}\) for any \(\varphi \in C_{l,Lip}(\mathbb R^d)\), \(k=0,\ldots , k_n\).

Denote \(\mathscr {L}(\mathscr {H})=\{X:\widehat{\mathbb E}[|X|]<\infty , X\in \mathscr {H}\}\). We consider a system of operators in \(\mathscr {L}(\mathscr {H})\),

$$\begin{aligned} \widehat{\mathbb E}_{n,k}: \mathscr {L}(\mathscr {H})\rightarrow \mathscr {L}(\mathscr {H}_{n,k}). \end{aligned}$$

Suppose that the operators \(\widehat{\mathbb E}_{n,k}\) satisfy the following properties: for all \(X, Y \in \mathscr {L}({\mathscr {H}})\),

  1. (a)

    \( \widehat{\mathbb E}_{n,k} [ X+Y]=X+\widehat{\mathbb E}_{n,k}[Y]\) in \(L_1\) if \(X\in \mathscr {H}_{n,k}\), and \( \widehat{\mathbb E}_{n,k} [ XY]=X^+\widehat{\mathbb E}_{n,k}[Y]+X^-\widehat{\mathbb E}_{n,k}[-Y]\) in \(L_1\) if \(X\in \mathscr {H}_{n,k}\) and \(XY\in \mathscr {L}({\mathscr {H}})\);

  2. (b)

    \(\widehat{\mathbb E}\left[ \widehat{\mathbb E}_{n,k} [ X]\right] =\widehat{\mathbb E}[X]\).

Denote \(\widehat{\mathbb E}[X|\mathscr {H}_{n,k}]=\widehat{\mathbb E}_{n,k}[X]\), \(\widehat{\mathcal E}[X|\mathscr {H}_{n,k}]=-\widehat{\mathbb E}_{n,k}[-X]\). \(\widehat{\mathbb E}[X|\mathscr {H}_{n,k}]\) is called the conditional sub-linear expectation of X given \(\mathscr {H}_{n,k}\), and \(\widehat{\mathbb E}_{n,k}\) is called the conditional expectation operator.

For a random vector \({\varvec{X}}=(X_1,\ldots , X_d)\), we denote \(\widehat{\mathbb E}[{\varvec{X}}]=(\widehat{\mathbb E}[X_1],\ldots , \widehat{\mathbb E}[X_d])\) and \(\widehat{\mathbb E}[{\varvec{X}}|\mathscr {H}_{n,k}]=(\widehat{\mathbb E}[X_1|\mathscr {H}_{n,k}],\ldots , \widehat{\mathbb E}[X_d|\mathscr {H}_{n,k}])\). Now, we assume that \(\{{\varvec{Z}}_{n,k}; k=1,\ldots , k_n\}\) is an array of d-dimensional random vectors such that \({\varvec{Z}}_{n,k}\in \mathscr {H}_{n,k}\) and \(\widehat{\mathbb E}[|{\varvec{Z}}_{n,k}|^2]<\infty \), \(k=1,\ldots , k_n\). Let \(D_{[0,1]}=D_{[0,1]}(\mathbb R^d)\) be the space of right-continuous d-dimensional functions with finite left limits, endowed with the Skorohod topology (cf. Billingsley [1]), and let \(\tau _n(t)\) be a non-decreasing, integer-valued function in \(D_{[0,1]}(\mathbb R^1)\) with \(\tau _n(0)=0\), \(\tau _n(1)=k_n\). Define \({\varvec{S}}_{n,i}=\sum _{k=1}^i {\varvec{Z}}_{n,k}\),

$$\begin{aligned} {\varvec{W}}_n(t)= {\varvec{S}}_{n, \tau _n(t)}. \end{aligned}$$
(2.1)

Then, \({\varvec{W}}_n\) is an element in \(D_{[0,1]}(\mathbb R^d)\). The following is the functional central limit theorem.

Theorem 2.1

Suppose that the operators \(\widehat{\mathbb E}_{n,k}\) satisfy (a) and (b). Assume that the following Lindeberg condition is satisfied:

$$\begin{aligned} \sum _{k=1}^{k_n}\widehat{\mathbb E}\left[ \left( \left| {\varvec{Z}}_{n,k}\right| ^2-\epsilon \right) ^+|\mathscr {H}_{n,k-1}\right] \overset{\mathbb V}{\rightarrow }0\;\; \forall \epsilon >0 \end{aligned}$$
(2.2)

and

$$\begin{aligned} \sum _{k=1}^{k_n}\left\{ |\widehat{\mathbb E}[{\varvec{Z}}_{n,k} |\mathscr {H}_{n,k-1}]|+|\widehat{\mathcal E}[{\varvec{Z}}_{n,k} |\mathscr {H}_{n,k-1}]|\right\} \overset{\mathbb V}{\rightarrow }0. \end{aligned}$$
(2.3)

Further, assume that there is a continuous non-decreasing non-random function \(\rho (t)\) and a non-random function \(G:\mathbb S(d)\rightarrow \mathbb R\) for which

$$\begin{aligned} \sum _{k\le \tau _n(t)} \widehat{\mathbb E}\left[ \langle {\varvec{Z}}_{n,k} A,{\varvec{Z}}_{n,k} \rangle \big |\mathscr {H}_{n,k-1}\right] \overset{\mathbb V}{\rightarrow }G(A)\rho (t), \;\; A\in \mathbb S(d). \end{aligned}$$
(2.4)

Then for any \(0=t_0<\ldots < t_d\le 1\),

$$\begin{aligned} \Big ( {\varvec{W}}_n(t_1),\ldots , {\varvec{W}}_n(t_d)\Big )\overset{d}{\rightarrow }\Big ( {\varvec{W}}(\rho (t_1)),\ldots , {\varvec{W}}(\rho (t_d))\Big ), \end{aligned}$$
(2.5)

and for any bounded continuous function \(\varphi :D_{[0,1]}(\mathbb R^d)\rightarrow \mathbb R\),

$$\begin{aligned} \lim _{n\rightarrow \infty }\widehat{\mathbb E}\left[ \varphi \left( {\varvec{W}}_n\right) \right] =\widetilde{\mathbb E}[\varphi ({\varvec{W}}\circ \rho )], \end{aligned}$$
(2.6)

where \({\varvec{W}}\) is a G-Brownian motion with \({\varvec{W}}(1) \sim N(0,G)\) under \(\widetilde{\mathbb E}\), and \({\varvec{W}}\circ \rho (t)={\varvec{W}}(\rho (t))\).
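The Lindeberg condition (2.2) is easy to inspect in the simplest classical special case: a normalized array \(Z_{n,k}=X_k/\sqrt{n}\) with uniformly bounded increments, where the conditional sub-linear expectation is replaced by a plain empirical sum. A sketch with illustrative data (not the paper's general setting):

```python
import numpy as np

rng = np.random.default_rng(0)

def lindeberg_sum(n, eps):
    """Sum of (|Z_{n,k}|^2 - eps)^+ for Z_{n,k} = X_k / sqrt(n), |X_k| <= 1."""
    X = rng.uniform(-1.0, 1.0, size=n)   # bounded increments
    Z = X / np.sqrt(n)
    return float(np.sum(np.maximum(Z**2 - eps, 0.0)))

# Since Z_{n,k}^2 <= 1/n, every term vanishes identically once n > 1/eps,
# so the sum is exactly 0 for n = 100 and n = 1000 below.
for n in (10, 100, 1000):
    print(n, lindeberg_sum(n, eps=0.05))
```

This is only the degenerate single-measure case; conditions (2.3) and (2.4) have no content here, but the computation shows what (2.2) is asking of the squared increments.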

The proof of this theorem is given in the last section.

Remark 2.2

Let \(G_n(A,t)=\sum _{k\le \tau _n(t)} \widehat{\mathbb E}\left[ \langle {\varvec{Z}}_{n,k} A,{\varvec{Z}}_{n,k} \rangle \big |\mathscr {H}_{n,k-1}\right] \). It is easily seen that \(G_n(A,t):\mathbb S(d) \rightarrow \mathbb R\) is a continuous sub-linear function monotonic in \(A \in \mathbb S(d)\). So, G is a continuous sub-linear function monotonic in \(A \in \mathbb S(d)\). Without loss of generality, we assume \(G(I_{d\times d})=1\) for otherwise we can replace \(\rho (t)\) by \(G(I_{d\times d})\rho (t)\). It is obvious that

$$\begin{aligned} |G_n(A,t)-G_n(\overline{A},t)|&\le d\Vert A-\overline{A}\Vert _{\infty } \sum _{k\le \tau _n(t)} \widehat{\mathbb E}\left[ \langle {\varvec{Z}}_{n,k},{\varvec{Z}}_{n,k} \rangle \big |\mathscr {H}_{n,k-1}\right] \\&= d\Vert A-\overline{A}\Vert _{\infty } G_n(I,t). \end{aligned}$$

It follows that \(|G(A)-G(\overline{A})|\le d\Vert A-\overline{A}\Vert _{\infty }\). Then, it can be verified that (2.4) holds uniformly in A on bounded sets, and that G(A) is continuous in \(A\in \mathbb S(d)\).

Remark 2.3

When \(d=1\), writing \(r=-G(-1)\), (2.4) is equivalent to

$$\begin{aligned} \sum _{k\le \tau _n(t)} \widehat{\mathbb E}[Z_{n,k}^2|\mathscr {H}_{n,k-1}] \overset{\mathbb V}{\rightarrow }\rho (t), \;\; t\in [0,1] \end{aligned}$$
(2.7)
$$\begin{aligned} \text {and}\; \sum _{k\le \tau _n(t)} \widehat{\mathcal E}[Z_{n,k}^2|\mathscr {H}_{n,k-1}] \overset{\mathbb V}{\rightarrow }r\rho (t), \;\; t\in [0,1]. \end{aligned}$$
(2.8)

The condition (2.7) is assumed in Zhang [16], but (2.8) is there replaced by the following more stringent condition:

$$\begin{aligned} \sum _{k=1}^{k_n} \left| r\widehat{\mathbb E}[Z_{n,k}^2|\mathscr {H}_{n,k-1}] - \widehat{\mathcal E}[Z_{n,k}^2|\mathscr {H}_{n,k-1}]\right| \overset{\mathbb V}{\rightarrow }0. \end{aligned}$$

As shown in Remark 3.2, (2.7) and (2.8) cannot be weakened further.

3 Applications

3.1 Lindeberg’s CLT for Independent Random Vectors

From Theorem 2.1, we have the following functional central limit theorem for independent random vectors.

Theorem 3.1

Let \(\{{\varvec{Z}}_{n,k};k=1,\ldots , k_n\}\) be an array of independent d-dimensional random vectors, \(n=1,2,\ldots \), and \(\tau _n(t)\) be a non-decreasing function in \(D_{[0,1]}(\mathbb R^1)\) which takes integer values with \(\tau _n(0)=0\), \(\tau _n(1)=k_n\). Denote \( {\varvec{W}}_n(t)=\sum _{k\le \tau _n(t)}{\varvec{Z}}_{n,k}. \) Assume that

$$\begin{aligned} \sum _{k=1}^{k_n}\widehat{\mathbb E}\left[ \left( |{\varvec{Z}}_{n,k}|^2-\epsilon \right) ^+ \right] \rightarrow 0\;\; \forall \epsilon >0 \end{aligned}$$
(3.1)

and

$$\begin{aligned} \sum _{k=1}^{k_n}\left\{ |\widehat{\mathbb E}[{\varvec{Z}}_{n,k} ]|+|\widehat{\mathcal E}[{\varvec{Z}}_{n,k} ]|\right\} \rightarrow 0. \end{aligned}$$
(3.2)

Further, assume that there is a continuous non-decreasing non-random function \(\rho (t)\) and a non-random function \(G:\mathbb S(d)\rightarrow \mathbb R\) for which

$$\begin{aligned} \sum _{k\le \tau _n(t)} \widehat{\mathbb E}\left[ \langle {\varvec{Z}}_{n,k} A,{\varvec{Z}}_{n,k} \rangle \right] \rightarrow G(A)\rho (t), \;\; A\in \mathbb S(d). \end{aligned}$$
(3.3)

Then for any \(0=t_0<\ldots < t_d\le 1\),

$$\begin{aligned} \Big ( {\varvec{W}}_n(t_1),\ldots , {\varvec{W}}_n(t_d)\Big )\overset{d}{\rightarrow }\Big ( {\varvec{W}}\big (\rho (t_1)\big ),\ldots , {\varvec{W}}\big (\rho (t_d)\big )\Big ), \end{aligned}$$
(3.4)

and for any continuous function \(\varphi :D_{[0,1]}(\mathbb R^d)\rightarrow \mathbb R\) with \( |\varphi ({\varvec{x}})|\le C \sup _{t\in [0,1]}|{\varvec{x}}(t)|^2\),

$$\begin{aligned} \lim _{n\rightarrow \infty }\widehat{\mathbb E}\left[ \varphi \left( {\varvec{W}}_n\right) \right] =\widetilde{\mathbb E}[\varphi ({\varvec{W}}\circ \rho )], \end{aligned}$$
(3.5)

where \({\varvec{W}}\) is a G-Brownian motion on \([0,\infty )\) with \({\varvec{W}}(1) \sim N(0,G)\) under \(\widetilde{\mathbb E}\). Further, when \(p>2\), (3.5) holds for any continuous function \(\varphi :D_{[0,1]}(\mathbb R^d)\rightarrow \mathbb R\) with \( |\varphi ({\varvec{x}})|\le C \sup _{t\in [0,1]}|{\varvec{x}}(t)|^p\) if (3.1) is replaced by the condition that

$$\begin{aligned} \sum _{k=1}^{k_n}\widehat{\mathbb E}\left[ \left| {\varvec{Z}}_{n,k} \right| ^p\right] \rightarrow 0. \end{aligned}$$
(3.6)

Proof

For a bounded continuous function \(\varphi \), (3.5) follows from Theorem 2.1, the functional central limit theorem for martingale vectors. For a continuous function \(\varphi :D_{[0,1]}(\mathbb R^d)\rightarrow \mathbb R\) with \( |\varphi ({\varvec{x}})|\le C \sup _{t\in [0,1]}|{\varvec{x}}(t)|^p\), we first note that (3.1) is implied by (3.6) when \(p>2\). Since (3.5) holds for bounded continuous functions \(\varphi \) and

$$\begin{aligned} \big |\varphi ({\varvec{x}})-(-N)\vee \varphi ({\varvec{x}})\wedge N\big |\le \left( C\sup _{t\in [0,1]}|{\varvec{x}}(t)|^p-N\right) ^+, \end{aligned}$$

it is sufficient to show that \(\{\max \limits _{i\le k_n}|\sum \nolimits _{k\le i}{\varvec{Z}}_{n,k}|^p, n\ge 1\}\) is uniformly integrable, i.e.,

$$\begin{aligned} \lim _{N\rightarrow \infty }\limsup _{n\rightarrow \infty }\widehat{\mathbb E}\left[ \left( \max _{i\le k_n}\Big |\sum _{k=1}^i {\varvec{Z}}_{n,k}\Big |^p -N\right) ^+\right] =0 \end{aligned}$$
(3.7)

under conditions (3.2), (3.3), and (3.1) and/or (3.6). To show (3.7), it is sufficient to consider the one-dimensional case. Let \(Y_{n,k}=(- 1)\vee Z_{n,k}\wedge 1\) and \(\widehat{Y}_{n,k}=Z_{n,k}-Y_{n,k}\). Then, the Lindeberg condition (3.1) implies that

$$\begin{aligned} \sum _{k=1}^{k_n} \widehat{\mathbb E}[|\widehat{Y}_{n,k} |] = \sum _{k=1}^{k_n}\widehat{\mathbb E}\Big [\big (|Z_{n,k}|- 1\big )^+\Big ] \le 2 \sum _{k=1}^{k_n}\widehat{\mathbb E}\Big [\big (|Z_{n,k}|^2- 1/2\big )^+\Big ] \rightarrow 0. \end{aligned}$$
(3.8)

It follows that

$$\begin{aligned} \sum _{k=1}^{k_n}\left\{ |\widehat{\mathbb E}[Y_{n,k}]|+|\widehat{\mathcal E}[Y_{n,k}]|\right\} \rightarrow 0, \end{aligned}$$
(3.9)

by (3.2). Also, it is obvious that

$$\begin{aligned} \sum _{k=1}^{k_n} \widehat{\mathbb E}[|Y_{n,k}|^q] \le \sum _{k=1}^{k_n} \widehat{\mathbb E}[Y_{n,k}^2]\le \sum _{k=1}^{k_n} \widehat{\mathbb E}[Z_{n,k}^2] =O(1), \forall q\ge 2. \end{aligned}$$
(3.10)

By the Rosenthal-type inequality for independent random variables (cf. Theorem 2.1 of Zhang [14]),

$$\begin{aligned} \widehat{\mathbb E}\left[ \max _{i\le k_n}\Big |\sum _{k=1}^i Y_{n,k}\Big |^q\right]&\le C_q\left\{ \sum _{k=1}^{k_n}\widehat{\mathbb E}\left[ |Y_{n,k}|^q\right] +\left( \sum _{k=1}^{k_n}\widehat{\mathbb E}\left[ Y_{n,k}^2\right] \right) ^{q/2}\right. \nonumber \\&\quad + \left. \left( \sum _{k=1}^{k_n}\Big (\big |\widehat{\mathbb E}\left[ Y_{n,k} \right] \big |+\big |\widehat{\mathcal E}\left[ Y_{n,k} \right] \big |\Big )\right) ^q\right\} \le C_q, \end{aligned}$$
(3.11)

by (3.9) and (3.10). It follows that

$$\begin{aligned}&\lim _{N\rightarrow \infty }\limsup _{n\rightarrow \infty }\widehat{\mathbb E}\left[ \left( \max _{i\le k_n}\Big | \sum _{k=1}^i Y_{n,k} \Big |^p -N\right) ^+\right] \\&\quad \le \lim _{N\rightarrow \infty }\limsup _{n\rightarrow \infty }N^{-1}\widehat{\mathbb E}\left[ \max _{i\le k_n}\Big | \sum _{k=1}^i Y_{n,k} \Big |^{2p} \right] = 0. \end{aligned}$$

For \(\widehat{Y}_{n,k}\), by the Rosenthal-type inequality for independent random variables again, we have

$$\begin{aligned}&\widehat{\mathbb E}\left[ \max _{i\le k_n}\Big |\sum _{k=1}^i \widehat{Y}_{n,k}\Big |^{p}\right] \\&\quad \le C_p\left\{ \sum _{k=1}^{k_n}\widehat{\mathbb E}[|\widehat{Y}_{n,k}|^p] +\left( \sum _{k=1}^{k_n}\widehat{\mathbb E}[|\widehat{Y}_{n,k}|^2] \right) ^{p/2} + \left( \sum _{k=1}^{k_n}\big ((\widehat{\mathbb E}[\widehat{Y}_{n,k}])^+ +(\widehat{\mathcal E}[\widehat{Y}_{n,k}])^-\big )\right) ^{p }\right\} \\&\quad \le C_p\left\{ \sum _{k=1}^{k_n}\widehat{\mathbb E}[(|Z_{n,k}|^p- 1)^+] +\left( \sum _{k=1}^{k_n}\widehat{\mathbb E}[(Z_{n,k}^2-1)^+] \right) ^{p/2} + \left( \sum _{k=1}^{k_n}\widehat{\mathbb E}[(|Z_{n,k}|- 1)^+] \right) ^{p }\right\} \\&\quad \rightarrow 0 \end{aligned}$$

by (3.8) and the condition (3.1) (and (3.6) when \(p>2\)). Hence, (3.7) is proved.\(\square \)

Remark 3.2

When \(d=1\), the condition (3.3) is equivalent to

$$\begin{aligned} \sum _{k\le \tau _n(t)} \widehat{\mathbb E}[Z_{n,k}^2 ] \rightarrow \rho (t), \;\; t\in [0,1], \end{aligned}$$
(3.12)
$$\begin{aligned} \sum _{k\le \tau _n(t)} \widehat{\mathcal E}[Z_{n,k}^2 ] \rightarrow r\rho (t), \;\; t\in [0,1]. \end{aligned}$$
(3.13)

Suppose that \(\{Z_{n,k};k=1,\ldots , k_n\}\) is an array of independent random variables with \(\widehat{\mathbb E}[Z_{n,k}]=\widehat{\mathcal E}[Z_{n,k}]=0\), \(k=1,\ldots , k_n\), and that the Lindeberg condition (3.1) is satisfied. If (3.4) or (3.5) holds, then as shown in the proof of Theorem 3.1,

$$\begin{aligned} \sum _{k\le \tau _n(t)}\widehat{\mathbb E}[Z_{n,k}^2]=\widehat{\mathbb E}[W_n^2(t)] \rightarrow \widehat{\mathbb E}[W^2(\rho (t))]=\rho (t), \\ \sum _{k\le \tau _n(t)}\widehat{\mathcal E}[Z_{n,k}^2]=\widehat{\mathcal E}[W_n^2(t)] \rightarrow \widehat{\mathcal E}[W^2(\rho (t))]=r\rho (t). \end{aligned}$$

So, the conditions (3.12) and (3.13) cannot be weakened further.

Corollary 3.3

Let \(\{X_{n,k};k=1,\ldots , k_n\}\) be an array of independent random variables, \(n=1,2,\ldots \). Denote \(\overline{\sigma }_{n,k}^2=\widehat{\mathbb E}[X_{n,k}^2]\), \(\underline{\sigma }_{n,k}^2=\widehat{\mathcal E}[X_{n,k}^2]\) and \(B_n^2=\sum _{k=1}^{k_n} \overline{\sigma }_{n,k}^2\). Suppose that the Lindeberg condition is satisfied:

$$\begin{aligned} \frac{1}{B_n^2}\sum _{k=1}^{k_n}\widehat{\mathbb E}\left[ \left( X_{n,k}^2-\epsilon B_n^2 \right) ^+\right] \rightarrow 0\;\; \forall \epsilon >0, \end{aligned}$$
(3.14)

and further, there is a constant \(r\in [0,1]\) such that

$$\begin{aligned} \frac{\sum _{k=1}^{m} \underline{\sigma }_{n,k}^2 }{\sum _{k=1}^{m} \overline{\sigma }_{n,k}^2 }\rightarrow r, \text { as long as } k_n\ge m\rightarrow \infty \;\text { with } \liminf \frac{\sum _{k=1}^{m} \overline{\sigma }_{n,k}^2 }{B_n^2}>0, \end{aligned}$$
(3.15)
$$\begin{aligned} \frac{\sum _{k=1}^{k_n}\left\{ |\widehat{\mathbb E}[X_{n,k}]|+|\widehat{\mathcal E}[X_{n,k}]|\right\} }{B_n}\rightarrow 0. \end{aligned}$$
(3.16)

Then for any continuous function \(\varphi \) with \(\ |\varphi (x)|\le C x^2\),

$$\begin{aligned} \lim _{n\rightarrow \infty }\widehat{\mathbb E}\left[ \varphi \left( \frac{\sum _{k=1}^{k_n}X_{n,k}}{B_n}\right) \right] =\widetilde{\mathbb E}[\varphi (\xi )], \end{aligned}$$
(3.17)

where \(\xi \sim N(0,[r, 1])\) under \(\widetilde{\mathbb E}\).

Proof

Let \(Z_{n,k}=X_{n,k}/B_n\), \(k=1,\ldots , k_n\). It is easily seen that the array \(\{Z_{n,k}; k=1,\ldots , k_n\}\) satisfies (3.1) and (3.2). Denote \(B_{n,0}^2=0\), \(B_{n,k}^2=\sum _{i=1}^k \overline{\sigma }_{n,i}^2\). Define the function \(\tau _n(t)\) by

$$\begin{aligned} \tau _n(t)=k \text { if } B_{n,k}^2/B_n^2\le t<B_{n,k+1}^2/B_n^2, \;\; \text { and } \tau _n(1)=k_n. \end{aligned}$$
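Computationally, \(\tau _n(t)\) is the index at which the accumulated upper variances first cover the fraction \(t\) of \(B_n^2\). A minimal sketch of this time change (the variances are made up for illustration):

```python
import numpy as np

def tau_n(t, sigma_bar_sq):
    """tau_n(t) = k when B_{n,k}^2 / B_n^2 <= t < B_{n,k+1}^2 / B_n^2,
    with tau_n(1) = k_n; sigma_bar_sq holds the upper variances."""
    B_sq = np.concatenate(([0.0], np.cumsum(sigma_bar_sq)))  # B_{n,0}^2..B_{n,k_n}^2
    frac = B_sq / B_sq[-1]
    if t >= 1.0:
        return len(sigma_bar_sq)                             # tau_n(1) = k_n
    return int(np.searchsorted(frac, t, side="right") - 1)

sigma_bar_sq = np.array([1.0, 2.0, 1.0, 4.0])  # hypothetical, B_n^2 = 8
print([tau_n(t, sigma_bar_sq) for t in (0.0, 0.125, 0.4, 1.0)])  # [0, 1, 2, 4]
```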

From the Lindeberg condition (3.14), it is easily verified that

$$\begin{aligned} \frac{\max _k \underline{\sigma }_{n,k}^2}{B_n^2}\le \frac{\max _k \overline{\sigma }_{n,k}^2}{B_n^2}\rightarrow 0. \end{aligned}$$

It follows that

$$\begin{aligned} \left| \sum _{k\le \tau _n(t)}\widehat{\mathbb E}[Z_{n,k}^2]-t\right| =\left| \frac{B^2_{n,\tau _n(t)}}{B_n^2}- t\right| \le \frac{\max _k \overline{\sigma }_{n,k}^2}{B_n^2}\rightarrow 0, \end{aligned}$$

and \(\tau _n(t)\rightarrow \infty \) if \(t>0\). By the condition (3.15), we have

$$\begin{aligned} \sum _{k\le \tau _n(t)}\widehat{\mathcal E}[Z_{n,k}^2]=\frac{\sum _{k\le \tau _n(t)} \underline{\sigma }_{n,k}^2}{B_n^2} = \frac{\sum _{k\le \tau _n(t)} \underline{\sigma }_{n,k}^2}{\sum _{k\le \tau _n(t)} \overline{\sigma }_{n,k}^2}\frac{B^2_{n,\tau _n(t)}}{B_n^2} \rightarrow rt. \end{aligned}$$

So, (3.12) and (3.13) are satisfied with \(\rho (t)=t\). Hence, (3.17) follows from (3.5). \(\square \)

It is easily seen that (3.15) is implied by the following condition of Zhang [16],

$$\begin{aligned} \frac{\sum _{k=1}^{k_n} \left| r\overline{\sigma }_{n,k}^2 - \underline{\sigma }_{n,k}^2\right| }{B_n^2}\rightarrow 0. \end{aligned}$$
(3.18)

Zhang [16] also showed that the condition (3.18) cannot be weakened to

$$\begin{aligned} \frac{\;\;\sum _{k=1}^{k_n} \underline{\sigma }_{n,k}^2\;\;}{\sum _{k=1}^{k_n} \overline{\sigma }_{n,k}^2} \rightarrow r. \end{aligned}$$
(3.19)

However, the following theorem shows that if we consider a sequence of independent random variables instead of arrays of independent random variables, then the condition (3.15) can be weakened to (3.19).

Theorem 3.4

Let \(\{X_k;k=1,2,\ldots \}\) be a sequence of independent random variables. Denote \(\overline{\sigma }_{k}^2=\widehat{\mathbb E}[X_{k}^2]\), \(\underline{\sigma }_{k}^2=\widehat{\mathcal E}[X_{k}^2]\), \(B_n^2=\sum _{k=1}^{n} \overline{\sigma }_{k}^2\). Suppose that the Lindeberg condition is satisfied:

$$\begin{aligned} \frac{1}{B_n^2}\sum _{k=1}^n\widehat{\mathbb E}\left[ \left( X_{k}^2-\epsilon B_n^2 \right) ^+\right] \rightarrow 0\;\; \forall \epsilon >0, \end{aligned}$$
(3.20)

and further, there is a constant \(r\in [0,1]\) such that

$$\begin{aligned}&\frac{\;\;\sum _{k=1}^n \underline{\sigma }_k^2\;\;}{\sum _{k=1}^{n} \overline{\sigma }_k^2} \rightarrow r, \;\; \text {and} \end{aligned}$$
(3.21)
$$\begin{aligned}&\frac{\sum _{k=1}^{n}\left\{ |\widehat{\mathbb E}[X_k]|+|\widehat{\mathcal E}[X_k]|\right\} }{B_n}\rightarrow 0. \end{aligned}$$
(3.22)

Then for any continuous function \(\varphi \) with \(\ |\varphi (x)|\le C x^2\),

$$\begin{aligned} \lim _{n\rightarrow \infty }\widehat{\mathbb E}\left[ \varphi \left( \frac{\sum _{k=1}^n X_k}{B_n}\right) \right] =\widetilde{\mathbb E}[\varphi (\xi )], \end{aligned}$$
(3.23)

where \(\xi \sim N(0,[r, 1])\) under \(\widetilde{\mathbb E}\).

Proof

The conclusion follows from Corollary 3.3: for a sequence of independent random variables, the ratio in (3.15) depends only on \(m\), so (3.21) implies (3.15). \(\square \)

3.2 CLT for i.i.d. Random Vectors

Now, we consider a sequence \(\{{\varvec{X}}_k;k=1,2,\ldots \}\) of independent and identically distributed d-dimensional random vectors, and let \({\varvec{S}}_n=\sum _{k=1}^n {\varvec{X}}_k\).

If we let \({\varvec{Z}}_{n,k}={\varvec{X}}_k/\sqrt{n}\), \(k=1,\ldots , n\), then (3.1) is equivalent to \(\widehat{\mathbb E}[(|{\varvec{X}}_1|^2-c)^+]\rightarrow 0\) as \(c\rightarrow \infty \), (3.2) is equivalent to \(\widehat{\mathbb E}[{\varvec{X}}_1]=\widehat{\mathbb E}[-{\varvec{X}}_1]={\varvec{0}}\), and (3.3) is automatically satisfied with \(G(A)=\widehat{\mathbb E}[\langle {\varvec{X}}_1A,{\varvec{X}}_1\rangle ]\), \(\rho (t)\equiv t\) and \(\tau _n(t)=[nt]/n\). From Theorem 3.1, we obtain Peng’s central limit theorem (cf. Theorem 2.4.4 of Peng (2019)).
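In the special case where \(\widehat{\mathbb E}\) is a classical (linear) expectation, \(G(A)=\widehat{\mathbb E}[\langle {\varvec{X}}_1A,{\varvec{X}}_1\rangle ]\) reduces to \(\mathrm{tr}(A\Sigma )\), with \(\Sigma \) the covariance matrix of \({\varvec{X}}_1\). A Monte Carlo sketch of this reduction (the matrices below are illustrative choices, not from the text):

```python
import numpy as np

# Classical special case: G(A) = E[<X A, X>] = tr(A * Sigma).
rng = np.random.default_rng(1)
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])   # assumed covariance of X_1
A = np.array([[1.0, 0.3], [0.3, 0.7]])       # a symmetric matrix in S(2)

X = rng.multivariate_normal(np.zeros(2), Sigma, size=200000)
G_mc = np.mean(np.einsum('ni,ij,nj->n', X, A, X))  # sample mean of <X A, X>
G_exact = np.trace(A @ Sigma)                      # = 3.0 for these matrices
print(G_mc, G_exact)                               # close for large samples
```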

Corollary 3.5

Suppose \(\widehat{\mathbb E}[(|{\varvec{X}}_1|^2-c)^+]\rightarrow 0\) as \(c\rightarrow \infty \), \(\widehat{\mathbb E}[{\varvec{X}}_1]=\widehat{\mathbb E}[-{\varvec{X}}_1]={\varvec{0}}\). Let \(G(A)=\widehat{\mathbb E}[\langle {\varvec{X}}_1A,{\varvec{X}}_1\rangle ]\). Then,

$$\begin{aligned} \lim _{n\rightarrow \infty } \widehat{\mathbb E}\left[ \varphi \left( \frac{{\varvec{S}}_n}{\sqrt{n}}\right) \right] =\widetilde{\mathbb E}\left[ \varphi ({\varvec{\xi }})\right] ,\;\; \forall \varphi \in C_b(\mathbb R^d), \end{aligned}$$
(3.24)

where \({\varvec{\xi }}\sim N\left( 0,G\right) \).

The next theorem gives sufficient and necessary conditions for the central limit theorem for independent and identically distributed random vectors. For a random vector \({\varvec{X}}=(X_1,\ldots , X_d)\), we write \({\varvec{X}}^{(c)}=(X_1^{(c)},\ldots , X_d^{(c)})\), where \(X_i^{(c)}=(-c)\vee (X_i\wedge c)\), \(i=1,\ldots ,d\).
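The componentwise truncation \({\varvec{X}}^{(c)}\) is simply coordinatewise clipping to \([-c,c]\); a one-line sketch:

```python
import numpy as np

def truncate(X, c):
    """Componentwise truncation X^{(c)}: X_i^{(c)} = (-c) v (X_i ^ c)."""
    return np.clip(X, -c, c)

X = np.array([3.0, -5.0, 0.4])
print(truncate(X, 2.0))   # each coordinate clipped to [-2, 2]
```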

Theorem 3.6

Suppose that

  1. (i)

    \(\lim \limits _{c\rightarrow \infty } \widehat{\mathbb E}[|{\varvec{X}}_1|^2\wedge c]\) is finite;

  2. (ii)

    \(x^2\mathbb V\left( |{\varvec{X}}_1|\ge x\right) \rightarrow 0\) as \(x\rightarrow \infty \);

  3. (iii)

    \(\lim \limits _{c\rightarrow \infty }\widehat{\mathbb E}\left[ {\varvec{X}}_1^{(c)}\right] =\lim \limits _{c\rightarrow \infty }\widehat{\mathbb E}\left[ -{\varvec{X}}_1^{(c)}\right] ={\varvec{0}}\);

  4. (iv)

    The limit

    $$\begin{aligned} G(A)=\lim _{c\rightarrow \infty }\widehat{\mathbb E}\left[ \langle {\varvec{X}}_1^{(c)} A,{\varvec{X}}_1^{(c)}\rangle \right] \end{aligned}$$
    (3.25)

    exists for each \( A\in \mathbb S(d)\).

Then for any bounded continuous function \(\varphi : D_{[0,1]}(\mathbb R^d)\rightarrow \mathbb R\),

$$\begin{aligned} \lim _{n\rightarrow \infty } \widehat{\mathbb E}\left[ \varphi \left( \frac{{\varvec{S}}_{[n\cdot ]}}{\sqrt{n}}\right) \right] =\widetilde{\mathbb E}\left[ \varphi ({\varvec{W}} )\right] , \end{aligned}$$
(3.26)

where \({\varvec{W}}\) is a G-Brownian motion with \({\varvec{W}}_1\sim N(0,G)\). In particular, (3.24) holds with \({\varvec{\xi }}\sim N\left( 0,G\right) \).

Conversely, if (3.24) holds for any \(\varphi \in C_{b,Lip}(\mathbb R^d)\) and a random vector \({\varvec{\xi }}\) with \(x^2\widetilde{\mathbb V}\left( |{\varvec{\xi }}|\ge x\right) \rightarrow 0\) as \(x\rightarrow \infty \), then (i)-(iv) hold.

Remark 3.7

If \(\widehat{\mathbb E}[(|{\varvec{X}}_1|^2-c)^+]\rightarrow 0\) as \(c\rightarrow \infty \), then (i), (ii) and (iv) are satisfied, \(G(A)=\widehat{\mathbb E}\left[ \langle {\varvec{X}}_1 A,{\varvec{X}}_1\rangle \right] \), and (iii) is equivalent to \(\widehat{\mathbb E}[{\varvec{X}}_1]=\widehat{\mathbb E}[-{\varvec{X}}_1]={\varvec{0}}\). Also, if \(C_{\mathbb V}(|{\varvec{X}}_1|^2)<\infty \), then (i), (ii) and (iv) are satisfied.

For the one-dimensional case \(d=1\), (iv) is equivalent to the finiteness of \(\lim \limits _{c\rightarrow \infty }\widehat{\mathbb E}[X_1^2\wedge c]\) and \(\lim \limits _{c\rightarrow \infty }\widehat{\mathcal E}[X_1^2\wedge c]\), which is implied by (i). In general, we do not know whether (iv) can be derived from (i)-(iii).

Proof

When \(d=1\), this theorem was proved by Zhang [15] (cf. Theorem 4.2), where it is shown that \(\lim _{c\rightarrow \infty }\widehat{\mathbb E}\left[ {\varvec{X}}_1^{(c)}\right] \) and \(\lim _{c\rightarrow \infty }\widehat{\mathbb E}\left[ -{\varvec{X}}_1^{(c)}\right] \) exist and are finite under condition (i). Note that

$$\begin{aligned} \left| \widehat{\mathbb E}\left[ \langle {\varvec{X}}_1^{(c)} A,{\varvec{X}}_1^{(c)}\rangle \right] -\widehat{\mathbb E}\left[ \langle {\varvec{X}}_1^{(c)} \overline{A},{\varvec{X}}_1^{(c)}\rangle \right] \right| \le |A-\overline{A}| \widehat{\mathbb E}[|{\varvec{X}}_1|^2\wedge (d c^2)]. \end{aligned}$$

It is easily seen that if the limit in (3.25) exists, then it is finite and G(A) is a continuous sub-linear function monotonic in \(A \in \mathbb S(d)\). We first prove the direct part. Let \({\varvec{Y}}_{n,k}= \frac{1}{\sqrt{n}}{\varvec{X}}_k^{(\sqrt{n})}\). By (i)-(iii) we have that

$$\begin{aligned}{} & {} \sum _{i=1}^n\widehat{\mathbb E}\left[ |{\varvec{Y}}_{n,i}|^2\right] =\widehat{\mathbb E}\left[ \big |{\varvec{X}}_1^{(\sqrt{n})}\big |^2\right] \le \widehat{\mathbb E}\left[ |{\varvec{X}}_1|^2\wedge (dn)\right] \le C_0, \end{aligned}$$
(3.27)
$$\begin{aligned}{} & {} \sum _{i=1}^n \widehat{\mathbb E}[|{\varvec{Y}}_{n,i}|^p]\le \epsilon ^{p-2}\widehat{\mathbb E}\left[ |{\varvec{X}}_1|^2\wedge (dn)\right] + d n\mathbb V(|{\varvec{X}}_1|\ge \epsilon \sqrt{n})\rightarrow 0, \;\; \forall p>2\nonumber \\ \end{aligned}$$
(3.28)

as \(n\rightarrow \infty \) and then \(\epsilon \rightarrow 0\), and

$$\begin{aligned}&\sum _{i=1}^n \left( \left| \widehat{\mathbb E}[{\varvec{Y}}_{n,i}]\right| +\left| \widehat{\mathbb E}[-{\varvec{Y}}_{n,i}]\right| \right) \nonumber \\&\quad = \lim _{c\rightarrow \infty }\sqrt{n} \left( \left| \widehat{\mathbb E}[{\varvec{X}}_1^{(\sqrt{n})}]-\widehat{\mathbb E}[{\varvec{X}}_1^{(c\sqrt{n})}]\right| +\left| \widehat{\mathbb E}[-{\varvec{X}}_1^{(\sqrt{n})}] -\widehat{\mathbb E}[-{\varvec{X}}_1^{(c\sqrt{n})}]\right| \right) \nonumber \\&\quad \le 2 \sum _{i=1}^d \lim _{c\rightarrow \infty }\sqrt{n}\widehat{\mathbb E}\left[ \big (|X_{1,i}|\wedge (c\sqrt{n})-\sqrt{n}\big )^+\right] \nonumber \\&\quad \le 2\sum _{i=1}^d \sqrt{n} \left( \lim _{c\rightarrow \infty } \widehat{\mathbb E}\left[ \big (|X_{1,i}|\wedge (c\sqrt{n})-x\sqrt{n}\big )^+\right] + \widehat{\mathbb E}\left[ \big (|X_{1,i}|\wedge (x\sqrt{n})- \sqrt{n}\big )^+\right] \right) \nonumber \\&\quad \le 2\sum _{i=1}^d \left( \frac{\lim \limits _{c\rightarrow \infty }\widehat{\mathbb E}[X_{1,i}^2\wedge (c^2n)]}{x}+xn\mathbb V\left( |X_{1,i}|\ge \sqrt{n}\right) \right) \rightarrow 0 \end{aligned}$$
(3.29)

as \(n\rightarrow \infty \) and then \(x\rightarrow \infty \). Further, by (iv),

$$\begin{aligned} \sum _{k=1}^{[nt]}\widehat{\mathbb E}\left[ \langle {\varvec{Y}}_{n,k} A,{\varvec{Y}}_{n,k}\rangle \right] =\frac{[nt]}{n} \widehat{\mathbb E}\left[ \langle {\varvec{X}}_1^{(\sqrt{n})} A,{\varvec{X}}_1^{(\sqrt{n})}\rangle \right] \rightarrow G(A) t. \end{aligned}$$

Denote \({\varvec{W}}_n(t)= \sum _{k=1}^{[nt]} {\varvec{Y}}_{n,k}\). By Theorem 3.1, for any bounded continuous function \(\varphi : D_{[0,1]}(\mathbb R^d)\rightarrow \mathbb R\),

$$\begin{aligned} \lim _{n\rightarrow \infty } \widehat{\mathbb E}\left[ \varphi \left( {\varvec{W}}_n\right) \right] =\widetilde{\mathbb E}\left[ \varphi ({\varvec{W}} )\right] . \end{aligned}$$
(3.30)

Note

$$\begin{aligned} \left| \widehat{\mathbb E}\left[ \varphi \left( \frac{{\varvec{S}}_{[n\cdot ]}}{\sqrt{n}}\right) \right] -\widehat{\mathbb E}\left[ \varphi \left( {\varvec{W}}_n\right) \right] \right|&\le \sup _{{\varvec{x}}}|\varphi ({\varvec{x}})|\sum _{k=1}^n \mathbb V\left( \frac{{\varvec{X}}_k}{\sqrt{n}}\ne {\varvec{Y}}_{n,k}\right) \nonumber \\&\le \sup _{{\varvec{x}}}|\varphi ({\varvec{x}})| n \mathbb V\left( |{\varvec{X}}_1|\ge \sqrt{n}\right) \rightarrow 0. \end{aligned}$$
(3.31)

(3.26) is proved.

Now, suppose that (3.24) holds. By (3.24), for each element \(X_{1,i}\) of \({\varvec{X}}_1=(X_{1,1},\ldots , X_{1,d})\), \(i=1,\ldots , d\), we have

$$\begin{aligned} \lim _{n\rightarrow \infty } \widehat{\mathbb E}\left[ \varphi \left( \frac{\sum _{k=1}^n X_{k,i}}{\sqrt{n}}\right) \right] =\widetilde{\mathbb E}\left[ \varphi ( \xi _i )\right] ,\;\; \forall \varphi \in C_b^1(\mathbb R). \end{aligned}$$

By Theorem 4.2 of Zhang [15], \(\lim \limits _{c\rightarrow \infty } \widehat{\mathbb E}[X_{1,i}^2\wedge c]\) is finite, \(x^2\mathbb V\left( |X_{1,i}|\ge x\right) \rightarrow 0\) as \(x\rightarrow \infty \), and \(\lim \limits _{c\rightarrow \infty }\widehat{\mathbb E}\big [X_{1,i}^{(c)}\big ]=\lim \limits _{c\rightarrow \infty }\widehat{\mathbb E}\big [-X_{1,i}^{(c)}\big ]=0\). So, (i)-(iii) are proved.

Finally, we show (iv). Let \({\varvec{Y}}_{n,k}\) be defined as above. Then, (3.27)–(3.29) remain true. Let \({\varvec{T}}_{n,m}=\sum _{k=1}^m {\varvec{Y}}_{n,k}\), \(1\le m\le n\) and \({\varvec{T}}_n={\varvec{T}}_{n,n}\). Then, similar to (3.11), we have

$$ \max _n\widehat{\mathbb E}\Big [|{\varvec{T}}_n|^p\Big ]\le \max _n\widehat{\mathbb E}\Big [\max _{m\le n}|{\varvec{T}}_{n,m}|^p\Big ]\le C_p, \;\; \forall p\ge 2. $$

Hence,

$$\begin{aligned} \left\{ |{\varvec{T}}_n|^p; n\ge 1 \right\} \text { is uniformly integrable for any } p\ge 2. \end{aligned}$$
(3.32)

On the other hand, by (3.24) and (3.31),

$$\begin{aligned} \lim _{n\rightarrow \infty } \widehat{\mathbb E}\left[ \varphi \left( {\varvec{T}}_n\right) \right] =\widetilde{\mathbb E}\left[ \varphi ({\varvec{\xi }})\right] ,\;\; \forall \varphi \in C_b^1(\mathbb R^d). \end{aligned}$$
(3.33)

Choosing \(\varphi ({\varvec{x}})=|{\varvec{x}}|^p\wedge c\) yields

$$\begin{aligned} \widetilde{\mathbb E}[|{\varvec{\xi }}|^p\wedge c]=\lim _{n\rightarrow \infty } \widehat{\mathbb E}\left[ |{\varvec{T}}_n |^p\wedge c\right] \le \max _n\widehat{\mathbb E}\Big [|{\varvec{T}}_n|^p\Big ]\le C_p. \end{aligned}$$

Hence,

$$\begin{aligned} \lim _{c\rightarrow \infty } \widetilde{\mathbb E}[|{\varvec{\xi }}|^p\wedge c] \le C_p\text { is finite for any } p\ge 2. \end{aligned}$$
(3.34)

Let \(G_{\xi }^{(c)}(A)=\widetilde{\mathbb E}\left[ \langle {\varvec{\xi }}^{(c)}A,{\varvec{\xi }}^{(c)}\rangle \right] \). Note, for \(a>b\),

$$\begin{aligned} \left| \langle {\varvec{\xi }}^{(a)}A,{\varvec{\xi }}^{(a)}\rangle - \langle {\varvec{\xi }}^{(b)}A,{\varvec{\xi }}^{(b)}\rangle \right| \le |A|(|{\varvec{\xi }}^{(a)}|+|{\varvec{\xi }}^{(b)}|)|{\varvec{\xi }}^{(a)}-{\varvec{\xi }}^{(b)}|. \end{aligned}$$

It follows that

$$\begin{aligned}&\left| G_{\xi }^{(a)}(A)-G_{\xi }^{(b)}(A)\right| \nonumber \\&\quad \le |A|\Big (\widetilde{\mathbb E}\big [(|{\varvec{\xi }}^{(a)}|+|{\varvec{\xi }}^{(b)}|)^2\big ]\Big )^{1/2} \Big (\widetilde{\mathbb E}\big [\sum _{k=1}^d (\xi _k^2\wedge a^2-b^2)^+\big ]\Big )^{1/2}\\&\quad \le C|A|\left( \widetilde{\mathbb E}\big [|{\varvec{\xi }}|^2\wedge (d a^2)\big ]\right) ^{1/2} \left( \frac{d \widetilde{\mathbb E}[|{\varvec{\xi }}|^3\wedge a^3]}{b}\right) ^{1/2} \rightarrow 0\text { as } a>b\rightarrow \infty ,\nonumber \end{aligned}$$
(3.35)

by (3.34). It follows that

$$\begin{aligned} G_{\xi }(A)=\lim _{c\rightarrow \infty } G_{\xi }^{(c)}(A) \text { exists and is finite}. \end{aligned}$$

Now, choosing \(\varphi ({\varvec{x}})=\langle {\varvec{x}}^{(c)} A,{\varvec{x}}^{(c)}\rangle \) in (3.33) yields

$$\begin{aligned} \lim _{n\rightarrow \infty } \widehat{\mathbb E}\Big [\langle {\varvec{T}}_n^{(c)} A,{\varvec{T}}_n^{(c)}\rangle \Big ]=G_{\xi }^{(c)}(A). \end{aligned}$$

Note that \(|\langle {\varvec{T}}_n A,{\varvec{T}}_n\rangle - \langle {\varvec{T}}_n^{(c)} A,{\varvec{T}}_n^{(c)}\rangle |\le 2|A|\cdot |{\varvec{T}}_n|^2I\{|{\varvec{T}}_n|>c\}\), and \(\{|{\varvec{T}}_n|^2, n\ge 1\}\) is uniformly integrable by (3.32). Letting \(c\rightarrow \infty \) in the above equation yields

$$\begin{aligned} \lim _{n\rightarrow \infty } \widehat{\mathbb E}\big [\langle {\varvec{T}}_n A,{\varvec{T}}_n\rangle \big ]=G_{\xi }(A). \end{aligned}$$

On the other hand, note

$$\begin{aligned} \langle {\varvec{T}}_n A,{\varvec{T}}_n\rangle =\sum _{k=1}^n\langle {\varvec{Y}}_{n,k} A,{\varvec{Y}}_{n,k}\rangle +2\sum _{k=1}^n\langle {\varvec{T}}_{n,k-1} A,{\varvec{Y}}_{n,k}\rangle . \end{aligned}$$

Since

$$\begin{aligned} \widehat{\mathbb E}\left[ \langle {\varvec{x}},{\varvec{X}}\rangle \right] \le \sum _{i=1}^d( x_i^+ \widehat{\mathbb E}[X_i]+x_i^-\widehat{\mathbb E}[-X_i])\le 2 |{\varvec{x}}| \big (|\widehat{\mathbb E}[{\varvec{X}}]|+|\widehat{\mathbb E}[-{\varvec{X}}]|\big ), \end{aligned}$$

we have

$$\begin{aligned} \widehat{\mathbb E}\left[ \pm \sum _{k=1}^n\langle {\varvec{T}}_{n,k-1} A,{\varvec{Y}}_{n,k}\rangle \right]&\le 2 \sum _{k=1}^n \widehat{\mathbb E}[|{\varvec{T}}_{n,k-1} A|]\big (|\widehat{\mathbb E}[{\varvec{Y}}_{n,k}]|+|\widehat{\mathbb E}[-{\varvec{Y}}_{n,k}]|\big )\\&\le C\sum _{k=1}^n \big (|\widehat{\mathbb E}[{\varvec{Y}}_{n,k}]|+|\widehat{\mathbb E}[-{\varvec{Y}}_{n,k}]|\big )\rightarrow 0. \end{aligned}$$

It follows that

$$\begin{aligned}&\widehat{\mathbb E}\big [\langle {\varvec{T}}_n A,{\varvec{T}}_n\rangle \big ]-\sum _{k=1}^n \widehat{\mathbb E}\big [\langle {\varvec{Y}}_{n,k} A,{\varvec{Y}}_{n,k}\rangle \big ]\\&\quad = \widehat{\mathbb E}\big [\langle {\varvec{T}}_n A,{\varvec{T}}_n\rangle \big ]- \widehat{\mathbb E}\big [\sum _{k=1}^n\langle {\varvec{Y}}_{n,k} A,{\varvec{Y}}_{n,k}\rangle \big ]\rightarrow 0, \end{aligned}$$

where the equality is due to the independence of \(\langle {\varvec{Y}}_{n,k} A,{\varvec{Y}}_{n,k}\rangle \), \(k=1,\ldots , n\). We conclude that

$$\begin{aligned} \widehat{\mathbb E}\big [\langle {\varvec{X}}_1^{(\sqrt{n})} A,{\varvec{X}}_1^{(\sqrt{n})}\rangle \big ]=\sum _{k=1}^n \widehat{\mathbb E}\big [\langle {\varvec{Y}}_{n,k} A,{\varvec{Y}}_{n,k}\rangle \big ] \rightarrow G_{\xi }(A). \end{aligned}$$

Similar to (3.35), for \(\sqrt{n}\le b\le a\le \sqrt{n+1}\) we have

$$\begin{aligned}&|\widehat{\mathbb E}\big [\langle {\varvec{X}}_1^{(a)} A,{\varvec{X}}_1^{(a)}\rangle \big ]-\widehat{\mathbb E}\big [\langle {\varvec{X}}_1^{(b)} A,{\varvec{X}}_1^{(b)}\rangle \big ]| \\&\quad \le |A|\Big (\widehat{\mathbb E}\big [(|{\varvec{X}}_1^{(a)}|+|{\varvec{X}}_1^{(b)}|)^2\big ]\Big )^{1/2} \Big (\widehat{\mathbb E}\big [\sum _{k=1}^d (X_{1,k}^2\wedge a^2-b^2)^+\big ]\Big )^{1/2}\\&\quad \le C |A| \Big (\sum _{k=1}^d (n+1) \mathbb V\big (|X_{1,k}|\ge \sqrt{n}\big )\Big )^{1/2}\rightarrow 0, \end{aligned}$$

by (i) and (iii). Hence,

$$\begin{aligned} \lim _{c\rightarrow \infty }\widehat{\mathbb E}\big [\langle {\varvec{X}}_1^{(c)} A,{\varvec{X}}_1^{(c)}\rangle \big ]=\lim _{n\rightarrow \infty } \widehat{\mathbb E}\big [\langle {\varvec{X}}_1^{(\sqrt{n})} A,{\varvec{X}}_1^{(\sqrt{n})}\rangle \big ]=G_{\xi }(A), \;\; A\in \mathbb S(d). \end{aligned}$$

(iv) is now proved. \(\square \)

3.3 Lévy’s Characterization of G-Brownian Motion

Finally, we give a Lévy characterization of a multi-dimensional G-Brownian motion as an application of Theorem 2.1. Let \(\{\mathscr {H}_t; t\ge 0\}\) be a non-decreasing family of subspaces of \(\mathscr {H}\) such that (1) each constant \(c\in \mathscr {H}_t\), and (2) \(\varphi (X_1,\ldots ,X_d)\in \mathscr {H}_t\) whenever \(X_1,\ldots ,X_d\in \mathscr {H}_t\) and \(\varphi \in C_{l,lip}\). We consider a system of operators on \(\mathscr {L}(\mathscr {H})=\{X\in \mathscr {H}; \widehat{\mathbb E}[|X|]<\infty \}\),

$$\begin{aligned} \widehat{\mathbb E}_t: \mathscr {L}(\mathscr {H})\rightarrow \mathscr {L}(\mathscr {H}_t) \end{aligned}$$

and denote \(\widehat{\mathbb E}[X|\mathscr {H}_t]=\widehat{\mathbb E}_t[X]\), \(\widehat{\mathcal E}[X|\mathscr {H}_t]=-\widehat{\mathbb E}_t[-X]\). Suppose that the operators \(\widehat{\mathbb E}_t\) satisfy the following properties: for all \(X, Y \in \mathscr {L}({\mathscr {H}})\),

  1. (i)

    \( \widehat{\mathbb E}_t [ X+Y]=X+\widehat{\mathbb E}_t[Y]\) in \(L_1\) if \(X\in \mathscr {H}_t\), and \( \widehat{\mathbb E}_t [ XY]=X^+\widehat{\mathbb E}_t[Y]+X^-\widehat{\mathbb E}_t[-Y]\) in \(L_1\) if \(X\in \mathscr {H}_t\) and \(XY\in \mathscr {L}({\mathscr {H}})\);

  2. (ii)

    \(\widehat{\mathbb E}\left[ \widehat{\mathbb E}_t [ X]\right] =\widehat{\mathbb E}[X]\).

For a random vector \({\varvec{X}}=(X_1,\ldots , X_d)\), we denote \(\widehat{\mathbb E}_t[{\varvec{X}}]=\big (\widehat{\mathbb E}_t[X_1],\ldots , \widehat{\mathbb E}_t[X_d]\big )\).

Definition 3.8

A d-dimensional process \({\varvec{M}}_t\) is called a martingale if \({\varvec{M}}_t\in \mathscr {L}(\mathscr {H}_t)\) and

$$\begin{aligned} \widehat{\mathbb E}[{\varvec{M}}_t|\mathscr {H}_s]={\varvec{M}}_s, \;\; s\le t. \end{aligned}$$

Denote

$$\begin{aligned} W_T({\varvec{M}},\delta )&= \sup _{t_i}\widehat{\mathbb E}\big [\max _{1\le i\le n} |{\varvec{M}}(t_i)-{\varvec{M}}(t_{i-1})|\wedge 1\big ], \\&\quad \text { where the supremum } \sup _{t_i} \text { is taken over all partitions } \\&0=t_0<t_1<\cdots<t_n=T,\;\; \delta /2<t_i-t_{i-1}<\delta , \; i=1,\ldots , n. \end{aligned}$$
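For a single admissible partition (mesh in \((\delta /2,\delta )\)) and classical Brownian sample paths, the quantity inside the supremum can be estimated by simulation; the estimate shrinks with \(\delta \), consistent with condition (III) of Theorem 3.9 below, where \(W_T({\varvec{M}},\delta )\rightarrow 0\) is required. All simulation parameters here are illustrative, and the classical mean stands in for \(\widehat{\mathbb E}\):

```python
import numpy as np

def modulus_one_partition(paths, t_grid, T, delta):
    """E[max_i |M(t_i) - M(t_{i-1})| ^ 1] for an even partition whose
    mesh lies in (delta/2, delta); paths are sampled on t_grid."""
    n = int(np.ceil(T / (0.75 * delta)))           # mesh ~ 0.75*delta
    t = np.linspace(0.0, T, n + 1)
    idx = np.searchsorted(t_grid, t)               # nearest grid indices
    incr = np.abs(np.diff(paths[:, idx], axis=1))  # |M(t_i) - M(t_{i-1})|
    return np.minimum(incr.max(axis=1), 1.0).mean()

rng = np.random.default_rng(2)
T, m, n_paths = 1.0, 4096, 2000
t_grid = np.linspace(0.0, T, m + 1)
dW = rng.standard_normal((n_paths, m)) * np.sqrt(T / m)
paths = np.concatenate([np.zeros((n_paths, 1)), dW.cumsum(axis=1)], axis=1)

# The estimate decreases as delta decreases.
print(modulus_one_partition(paths, t_grid, T, 0.2),
      modulus_one_partition(paths, t_grid, T, 0.02))
```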

The Lévy characterization of a one-dimensional G-Brownian motion under the G-expectation in a Wiener space was established by Xu and Zhang [12, 13], Gao et al. [3], Lin [6] and Hu and Li [4] via stochastic calculus. The following theorem gives a Lévy characterization of a d-dimensional G-Brownian motion.

Theorem 3.9

Let \({\varvec{M}}_t\) be a d-dimensional random process in \((\Omega ,\mathscr {H},\mathscr {H}_t, \widehat{\mathbb E})\) with \({\varvec{M}}_0={\varvec{0}}\),

$$\begin{aligned} \text { for all } p>0 \text { and } t\ge 0, \;\; C_{\mathbb V}(|{\varvec{M}}_t|^p)<\infty \text { and hence } \widehat{\mathbb E}[|{\varvec{M}}_t|^p]<\infty . \end{aligned}$$
(3.36)

Suppose that \({\varvec{M}}_t\) satisfies

  1. (I)

    Both \({\varvec{M}}_t\) and \(-{\varvec{M}}_t\) are martingales;

  2. (II)

    There is a function \(G:\mathbb S(d) \rightarrow \mathbb R\) such that \(\langle {\varvec{M}}_t A, {\varvec{M}}_t\rangle -G(A)t\) is a real martingale for each \(A \in \mathbb S(d)\);

  3. (III)

    For any \(T>0\), \(\lim _{\delta \rightarrow 0}W_T({\varvec{M}},\delta )=0\).

Then, G(A) is continuous and monotonic in \(A \in \mathbb S(d)\), and \({\varvec{M}}_t\) satisfies Property (ii) as in Definition 1.3 with \({\varvec{M}}_1\sim N(0,G)\).

Proof

By (II), \(G(A)t =\widehat{\mathbb E}[\langle {\varvec{M}}_t A, {\varvec{M}}_t\rangle ]\). So, \(G(A)\) is monotonic in \(A \in \mathbb S(d)\). With the same argument as in 2.2, \(|G(A)-G(\overline{A})|\le d\Vert A-\overline{A}\Vert _{\infty } G(I)\), so \(G(A)\) is continuous in \(A \in \mathbb S(d)\). Note that \(\widehat{\mathbb E}[\langle ({\varvec{M}}_t-{\varvec{M}}_s) A, {\varvec{M}}_t-{\varvec{M}}_s\rangle |\mathscr {H}_s]=G(A)(t-s)\) (\(0<s<t\)) by (I) and (II). In particular, \(\widehat{\mathbb E}[(M_{t,k}-M_{s,k})^2|\mathscr {H}_s]=\sigma _{k}^2(t-s)\) (\(0<s<t\)) for some \(\sigma _k\ge 0\). By Lemma 5.7 of Zhang [16], we have for each \(k=1,\ldots , d\),

$$\begin{aligned} \widehat{\mathbb E}[|M_{t,k}-M_{s,k}|^p]\le C_{\mathbb V}\big (|M_{t,k}-M_{s,k}|^p\big )\le C_p(t-s)^{p/2}, \;\;t\ge s\ge 0, \; p\ge 2. \end{aligned}$$

For Property (ii) in Definition 1.3, it is sufficient to show that for any \(0<t_1<\cdots <t_p\) and \(\varphi \in C_{b,Lip}(\mathbb R^{d\times p})\),

$$\begin{aligned} \widehat{\mathbb E}\left[ \varphi ({\varvec{M}}_{t_1},\cdots , {\varvec{M}}_{t_p})\right] =\widetilde{\mathbb {E}}\left[ \varphi ({\varvec{W}}_{t_1},\ldots , {\varvec{W}}_{t_p})\right] . \end{aligned}$$
(3.37)

Without loss of generality, we assume \(0<t_1<\cdots <t_p\le 1\). Let

$$\begin{aligned} k_n=2^n, \;\; {\varvec{Z}}_{n,k}={\varvec{M}}_{k/2^n}-{\varvec{M}}_{(k-1)/2^n}, \;\; \mathscr {H}_{n,k}=\mathscr {H}_{k/2^n}, \;\; k=1,\ldots , k_n, \end{aligned}$$

and \(\tau _n(t)=[t2^n]\). Then, \(\widehat{\mathbb E}[{\varvec{Z}}_{n,k}|\mathscr {H}_{n,k-1}]=\widehat{\mathbb E}[-{\varvec{Z}}_{n,k}|\mathscr {H}_{n,k-1}]=0\),

$$\begin{aligned}{} & {} \widehat{\mathbb E}[\langle {\varvec{Z}}_{n,k}A, {\varvec{Z}}_{n,k}\rangle |\mathscr {H}_{n,k-1}]=G(A)\frac{1}{2^n}, \\{} & {} \sum _{k=1}^{k_n}\widehat{\mathbb E}[|{\varvec{Z}}_{n,k}|^3]\le C\sum _{k=1}^{2^n}\big (\frac{1}{2^n}\big )^{3/2}=\frac{C}{2^{n/2}}\rightarrow 0. \end{aligned}$$
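The third-moment bound above is elementary arithmetic: \(k_n=2^n\) increments, each contributing \((2^{-n})^{3/2}\), sum to exactly \(2^{-n/2}\). A quick numerical confirmation:

```python
# Each of the 2^n dyadic increments contributes (1/2^n)^(3/2);
# the total is 2^n * 2^(-3n/2) = 2^(-n/2), which vanishes as n grows.
for n in (4, 8, 16):
    total = sum((1.0 / 2**n) ** 1.5 for _ in range(2**n))
    print(n, total, 2.0 ** (-n / 2))
```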

Hence, the sequence \(\{{\varvec{Z}}_{n,k}, \mathscr {H}_{n,k}\}\) satisfies the conditions (2.2)-(2.4) with \(\rho (t)=t\). Let \({\varvec{W}}_n(\cdot )\) be defined as in (2.1). By Theorem 2.1, \( ({\varvec{W}}_n(t_1),\cdots ,{\varvec{W}}_n(t_p))\overset{d}{\rightarrow }({\varvec{W}}_{t_1},\ldots , {\varvec{W}}_{t_p}). \) On the other hand,

$$\begin{aligned} |{\varvec{W}}_n(t)-{\varvec{M}}_t|= \Big |{\varvec{M}}_t-{\varvec{M}}_{[2^nt]/2^n}\Big |\overset{\mathbb V}{\rightarrow }0. \end{aligned}$$

So, (3.37) holds for all \(\varphi \in C_{b,Lip}(\mathbb R^{d\times p})\). The proof is now complete. \(\square \)

4 Proofs

For the capacity and sub-linear expectation, we have the following lemma.

Lemma 4.1

We have

  1. (1)

    if \(X\le Y\) in \(L_p\), then \(X\le Y\) in \(\mathbb V\);

  2. (2)

    if \(X\le Y\) in \(\mathbb V\) and \(\widehat{\mathbb E}[((X-Y)^+)^p]<\infty \), then \(X\le Y\) in \(L_q\) for \(0<q<p\);

  3. (3)

if \(X\le Y\) in \(\mathbb V\), \(f(x)\) is a non-decreasing continuous function and \(\mathbb V(|Y|\ge M)\rightarrow 0\) as \(M\rightarrow \infty \), then \(f(X)\le f(Y)\) in \(\mathbb V\);

  4. (4)

    if \(p\ge 1\), \(X,Y\ge 0\) in \(L_p\), \(X\le Y\) in \(L_p\), then \(\widehat{\mathbb E}[X^p]\le \widehat{\mathbb E}[Y^p]\);

  5. (5)

    if \(\widehat{\mathbb E}\) is countably additive, then \(X\le Y\) in \(\mathbb V\) is equivalent to \(X\le Y\) in \(L_p\) for any \(p>0\);

  6. (6)

    if \(X_n\rightarrow 0\) in \(L_p\), then \(X_n\rightarrow 0\) in \(\mathbb V\) and in \(L_q\) for \(0<q<p\);

  7. (7)

    if \(X_n\rightarrow 0\) in \(\mathbb V\) and \(\widehat{\mathbb E}[|X_n|^p]\le C<\infty \), then \(X_n\rightarrow 0\) in \(L_q\) for \(0<q<p\).

Properties (1)–(5) are proved in Zhang [16]. By noting

$$\begin{aligned} \mathbb V(|X_n|\ge \epsilon )\le \frac{\widehat{\mathbb E}[|X_n|^p]}{\epsilon ^p}, \;\; \epsilon >0, \end{aligned}$$

\( |X_n|^q\le \epsilon ^q+\epsilon ^{q-p}|X_n|^p \), and \(\widehat{\mathbb E}[|X_n|^q]\le \epsilon ^q+\epsilon ^{q-p}\widehat{\mathbb E}[|X_n|^p]\), (6) follows. For (7), note that

$$\begin{aligned} \widehat{\mathbb E}[|X_n|^q]\le \epsilon ^q+c^q\mathbb V(|X_n|\ge \epsilon )+ \frac{\widehat{\mathbb E}[|X_n|^p]}{c^{p-q}}, \end{aligned}$$

the result follows.
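The elementary pointwise bound behind (6), \(|x|^q\le \epsilon ^q+\epsilon ^{q-p}|x|^p\) for \(0<q<p\) and \(\epsilon >0\), can be checked numerically (the exponents and sample range below are arbitrary illustrative choices):

```python
import numpy as np

# Check |x|^q <= eps^q + eps^(q-p) * |x|^p pointwise: for |x| <= eps the
# first term dominates, for |x| > eps the second does.
rng = np.random.default_rng(3)
x = rng.uniform(-10, 10, size=10000)
p, q, eps = 3.0, 1.5, 0.5
lhs = np.abs(x) ** q
rhs = eps ** q + eps ** (q - p) * np.abs(x) ** p
print(bool(np.all(lhs <= rhs)))   # True
```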

The following lemma gives the properties of the conditional expectation operators \(\widehat{\mathbb E}_{n,k}\).

Lemma 4.2

[16] For any \(X,Y\in \mathscr {L}(\mathscr {H})\), we have

  1. (a)

    \(\widehat{\mathbb E}_{n,k} [c] = c\) in \(L_1\), \(\widehat{\mathbb E}_{n,k} [\lambda X] = \lambda \widehat{\mathbb E}_{n,k} [X]\) in \(L_1\) if \(\lambda \ge 0\);

  2. (b)

    \(\widehat{\mathbb E}_{n,k}[X]\le \widehat{\mathbb E}_{n,k}[Y]\) in \(L_1\) if \(X\le Y\) in \(L_1\);

  3. (c)

    \(\widehat{\mathbb E}_{n,k}[X]-\widehat{\mathbb E}_{n,k}[Y]\le \widehat{\mathbb E}_{n,k}[X-Y]\) in \(L_1\);

  4. (d)

\(\widehat{\mathbb E}_{n,k}\left[ \widehat{\mathbb E}_{n,l} [ X]\right] =\widehat{\mathbb E}_{n,l\wedge k} [ X]\) in \(L_1\);

  5. (e)

    if \(|X|\le M\) in \(L_p\) for all \(p\ge 1\), then \( \big |\widehat{\mathbb E}_{n,k}[X]\big | \le M\) in \(L_p\) for all \(p\ge 1\).

To prove functional central limit theorems, we need the following Rosenthal-type inequalities which can be proved by the same argument as in Theorem 4.1 of Zhang [16].

Lemma 4.3

Suppose that \(\{X_{n,i}\}\) is an array of bounded random variables with \(X_{n,k}\in \mathscr {H}_{n,k}\). Set \(S_0=0\), \(S_k=\sum _{i=1}^k X_{n,i}\). Then,

$$\begin{aligned} \widehat{\mathbb E}\left[ \left( \max _{k\le {k_n}} (S_{k_n}-S_k)\right) ^2\big |\mathscr {H}_{n,0}\right] \le \widehat{\mathbb E}\left[ \sum _{k=1}^{k_n} \widehat{\mathbb E}[X_{n,k}^2|\mathscr {H}_{n,k-1}]\Big |\mathscr {H}_{n,0}\right] \text { in } L_1, \end{aligned}$$
(4.1)

when \(\widehat{\mathbb E}[X_{n,k}|\mathscr {H}_{n,k-1}]\le 0\) in \(L_1\), \(k=1,\ldots , k_n\). In general, for \(p\ge 2\), there is a constant \(C_p\) such that

$$\begin{aligned}&\widehat{\mathbb E}\Big [\max _{k\le {k_n}} |S_k|^p\big |\mathscr {H}_{n,0}\Big ] \nonumber \\&\le C_p\left\{ \widehat{\mathbb E}\left[ \sum _{k=1}^{k_n} \widehat{\mathbb E}[|X_{n,k}|^p|\mathscr {H}_{n,k-1}]\Big |\mathscr {H}_{n,0}\right] +\widehat{\mathbb E}\left[ \Big (\sum _{k=1}^{k_n} \widehat{\mathbb E}[X_{n,k}^2|\mathscr {H}_{n,k-1}]\Big )^{p/2}\Big |\mathscr {H}_{n,0}\right] \right. \nonumber \\&\qquad \left. +\widehat{\mathbb E}\left[ \Big \{\sum _{k=1}^{k_n} \Big (\big ( \widehat{\mathbb E}[X_{n,k} |\mathscr {H}_{n,k-1}]\big )^++\big ( \widehat{\mathcal E}[X_{n,k} |\mathscr {H}_{n,k-1}]\big )^-\Big )\Big \}^p\Big |\mathscr {H}_{n,0}\right] \right\} \text { in } L_1. \end{aligned}$$
(4.2)

The following lemma will be used in the proof of the convergence of finite-dimensional distribution (2.5).

Lemma 4.4

[16] Suppose that the operators \(\widehat{\mathbb E}_{n,k}\) satisfy (a) and (b), \({\varvec{X}}_n\in \mathscr {H}_{n,k_n^{\prime }}\subset \mathscr {H}\) is a \(d_1\)-dimensional random vector, and \({\varvec{Y}}_n\in \mathscr {H}\) is a \(d_2\)-dimensional random vector. Write \(\mathscr {H}_{n}=\mathscr {H}_{n,k_n^{\prime }}\). Assume that \({\varvec{X}}_n\overset{d}{\rightarrow }{\varvec{X}}\), and for any bounded Lipschitz function \(\varphi ({\varvec{x}},{\varvec{y}}):\mathbb R^{d_1}\times \mathbb R^{d_2}\rightarrow \mathbb R\),

$$\begin{aligned} \widehat{\mathbb E}\left[ \Big |\widehat{\mathbb E}[\varphi ({\varvec{x}},{\varvec{Y}}_n)|\mathscr {H}_n]-\widetilde{\mathbb E}[\varphi ({\varvec{x}},{\varvec{Y}})]\Big |\right] \rightarrow 0,\;\; \forall {\varvec{x}}, \end{aligned}$$

where \({\varvec{X}}\), \({\varvec{Y}}\) are two random vectors in a sub-linear expectation space \((\Omega , \mathscr {H}, \widetilde{\mathbb E})\) with \(\widetilde{\mathbb V}(\Vert {\varvec{X}}\Vert >\lambda )\rightarrow 0\) and \(\widetilde{\mathbb V}(\Vert {\varvec{Y}}\Vert >\lambda )\rightarrow 0\) as \(\lambda \rightarrow \infty \). Then

$$\begin{aligned} ({\varvec{X}}_n,{\varvec{Y}}_n)\overset{d }{\rightarrow }(\widetilde{{\varvec{X}}},\widetilde{{\varvec{Y}}}), \end{aligned}$$

where \(\widetilde{{\varvec{Y}}}\) is independent of \(\widetilde{{\varvec{X}}}\), \(\widetilde{{\varvec{X}}}\overset{d}{=} {\varvec{X}}\) and \(\widetilde{{\varvec{Y}}}\overset{d}{=} {\varvec{Y}}\).

Proof of Theorem 2.1

Without loss of generality, we assume that \(|{\varvec{Z}}_{n,k}|\le \epsilon _n\), \(k=1,\ldots ,k_n\), with a sequence \(0<\epsilon _n\rightarrow 0\), \(\delta _{k_n}=\sum _{k=1}^{k_n}\widehat{\mathbb E}[|{\varvec{Z}}_{n,k}|^2|\mathscr {H}_{n,k-1}]\le 2\rho (1) \) in \(L_1\), and \(\chi _{k_n}:=\sum _{k=1}^{k_n}\left\{ |\widehat{\mathbb E}[ {\varvec{Z}}_{n,k} |\mathscr {H}_{n,k-1}]|+|\widehat{\mathcal E}[{\varvec{Z}}_{n,k} |\mathscr {H}_{n,k-1}]|\right\} <1\) in \(L_1\) (cf. the same arguments at the beginning of the proofs of Theorems 3.1 and 3.2 of Zhang [16]). Under these assumptions, property (g) of the conditional expectation implies that all random variables considered above are bounded in \(L_p\) for all \(p>0\), and then the convergences in (2.3) and (2.4) all hold in \(L_p\) for any \(p>0\), by Lemma 4.1.

We first show that for any \(r\ge 2\), there is a positive constant \(C_r>0\) such that

$$\begin{aligned}{} & {} \widehat{\mathbb E}\left[ \max \limits _{\tau _n(s)\le k \le \tau _n(t)} \left| {\varvec{S}}_{n,k}-{\varvec{S}}_{n,\tau _n(s)}\right| ^r\big |\mathscr {H}_{n,\tau _n(s)}\right] \le C_r \; \text { in } L_p, \end{aligned}$$
(4.3)
$$\begin{aligned}{} & {} \widehat{\mathbb E}\left[ \max \limits _{\tau _n(s)\le k \le \tau _n(t)} \left| {\varvec{S}}_{n,k}-{\varvec{S}}_{n,\tau _n(s)}\right| ^r\big |\mathscr {H}_{n,\tau _n(s)}\right] \le C_r\left( \rho (t)-\rho (s)\right) ^{r/2}+o(1) \; \text { in } L_p, \end{aligned}$$
(4.4)
$$\begin{aligned}{} & {} \widehat{\mathbb E}\left[ {\varvec{S}}_{n,\tau _n(t)}-{\varvec{S}}_{n,\tau _n(s)} \big |\mathscr {H}_{n,\tau _n(s)}\right] \rightarrow {\varvec{0}} \; \text { in } L_p, \end{aligned}$$
(4.5)
$$\begin{aligned}{} & {} \widehat{\mathcal E}\left[ {\varvec{S}}_{n,\tau _n(t)}-{\varvec{S}}_{n,\tau _n(s)} \big |\mathscr {H}_{n,\tau _n(s)}\right] \rightarrow {\varvec{0}} \; \text { in } L_p, \end{aligned}$$
(4.6)
$$\begin{aligned}{} & {} \widehat{\mathbb E}\left[ \Big \langle ({\varvec{S}}_{n,\tau _n(t)}-{\varvec{S}}_{n,\tau _n(s)})A, {\varvec{S}}_{n,\tau _n(t)}-{\varvec{S}}_{n,\tau _n(s)}\Big \rangle \Big |\mathscr {H}_{n,\tau _n(s)}\right] \nonumber \\{} & {} \quad \rightarrow G(A) \big (\rho (t)-\rho (s)\big ) \; \text { in } L_p, \;\; \forall A \in \mathbb S(d), \end{aligned}$$
(4.7)

for any \(0\le s<t\le 1\) and \(p>0\). Further, (4.7) holds uniformly in \(A \in \mathbb S(d)\) with \(|A|\le c\).

For (4.3)–(4.6), it is sufficient to verify the one-dimensional case. For (4.3), by Lemma 4.3,

$$\begin{aligned}&\widehat{\mathbb E}\left[ \max _{\tau _n(s)\le k \le \tau _n(t)} \left| S_{n,k}-S_{n,\tau _n(s)}\right| ^r\big |\mathscr {H}_{n,\tau _n(s)}\right] \nonumber \\&\quad \le C_r \left\{ \widehat{\mathbb E}\left[ \sum _{k=\tau _n(s)+1}^{\tau _n(t)} \widehat{\mathbb E}[|Z_{n,k}|^r|\mathscr {H}_{n,k-1}]\Big |\mathscr {H}_{n,\tau _n(s)}\right] \right. \nonumber \\&\qquad + \widehat{\mathbb E}\left[ \Big (\sum _{k=\tau _n(s)+1}^{\tau _n(t)} \widehat{\mathbb E}[|Z_{n,k}|^2|\mathscr {H}_{n,k-1}]\Big )^{r/2}\Big |\mathscr {H}_{n,\tau _n(s)}\right] \nonumber \\&\qquad +\left. \widehat{\mathbb E}\left[ \Big \{\sum _{k=\tau _n(s)+1}^{\tau _n(t)} \Big (\big | \widehat{\mathbb E}[Z_{n,k} |\mathscr {H}_{n,k-1}]\big |+\big | \widehat{\mathcal E}[Z_{n,k} |\mathscr {H}_{n,k-1}]\big |\Big )\Big \}^r\Big |\mathscr {H}_{n,\tau _n(s)}\right] \right\} \nonumber \\&\quad \le C_r \Biggl \{\epsilon _n^{r-2} \widehat{\mathbb E}\left[ \delta _{k_n} \Big |\mathscr {H}_{n,\tau _n(s)}\right] + \widehat{\mathbb E}\left[ \left( \sum _{k=\tau _n(s)+1}^{\tau _n(t)} \widehat{\mathbb E}[ |Z_{n,k}|^2|\mathscr {H}_{n,k-1}]\right) ^{r/2}\Big |\mathscr {H}_{n,\tau _n(s)}\right] \nonumber \\&\qquad + \widehat{\mathbb E}\left[ \chi _{k_n}^r\Big |\mathscr {H}_{n,\tau _n(s)}\right] \Biggr \} \\&\quad \le C_r\left\{ 2\rho (1)+(2\rho (1))^{r/2}+1\right\} \text { in } L_1. \nonumber \end{aligned}$$
(4.8)

Note that the random variable \(\max \limits _{\tau _n(s)\le k \le \tau _n(t)} |S_{n,k}-S_{n,\tau _n(s)}|\) is bounded (by \((\tau _n(t)-\tau _n(s))\epsilon _n\)). By the property (g) of \(\widehat{\mathbb E}_{n,k}\), \(\widehat{\mathbb E}\Big [\max \limits _{\tau _n(s)\le k \le \tau _n(t)} |S_{n,k}-S_{n,\tau _n(s)}|^r\big |\mathscr {H}_{n,\tau _n(s)}\Big ]\) is bounded in \(L_p\) for any \(p>0\). Hence, by (1) and (2) of Lemma 4.1, (4.3) is proved. By this inequality and Lemma 4.1, it is sufficient to consider the case \(p=1\) for (4.4)–(4.7).
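As a side remark on the reduction to one dimension asserted above: writing \(S_{n,k}^{(j)}\) for the \(j\)-th coordinate of \({\varvec{S}}_{n,k}\) (a notation used only for this sketch), the \(d\)-dimensional bound follows from its one-dimensional counterpart via

```latex
\begin{aligned}
  % |x|^r = (\sum_j x_j^2)^{r/2} \le (d\max_j x_j^2)^{r/2} \le d^{r/2}\sum_j |x_j|^r for r \ge 2
  \widehat{\mathbb E}\Big[\max_{\tau_n(s)\le k \le \tau_n(t)}
      \big|{\varvec{S}}_{n,k}-{\varvec{S}}_{n,\tau_n(s)}\big|^r \,\Big|\, \mathscr H_{n,\tau_n(s)}\Big]
  \le d^{r/2}\sum_{j=1}^{d}\widehat{\mathbb E}\Big[\max_{\tau_n(s)\le k \le \tau_n(t)}
      \big|S_{n,k}^{(j)}-S_{n,\tau_n(s)}^{(j)}\big|^r \,\Big|\, \mathscr H_{n,\tau_n(s)}\Big],
\end{aligned}
```

using \(|{\varvec{x}}|^r\le d^{r/2}\sum _{j=1}^d |x_j|^r\) for \(r\ge 2\) and the sub-additivity of the conditional sub-linear expectation; (4.5) and (4.6) are likewise obtained coordinatewise.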

It is easily shown that

$$\begin{aligned} \widehat{\mathbb E}[\pm Z_{n,k} |\mathscr {H}_{n,k-1}]\le \big | \widehat{\mathbb E}[Z_{n,k} |\mathscr {H}_{n,k-1}]\big |+\big | \widehat{\mathcal E}[Z_{n,k} |\mathscr {H}_{n,k-1}]\big | \text { in } L_1. \end{aligned}$$
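This inequality is immediate from the relation \(\widehat{\mathcal E}[X|\mathscr H]=-\widehat{\mathbb E}[-X|\mathscr H]\); indeed,

```latex
\begin{aligned}
  \widehat{\mathbb E}[Z_{n,k}\,|\mathscr H_{n,k-1}]
    &\le \big|\widehat{\mathbb E}[Z_{n,k}\,|\mathscr H_{n,k-1}]\big|, \\
  % the lower conditional expectation handles the minus sign
  \widehat{\mathbb E}[-Z_{n,k}\,|\mathscr H_{n,k-1}]
    &= -\widehat{\mathcal E}[Z_{n,k}\,|\mathscr H_{n,k-1}]
     \le \big|\widehat{\mathcal E}[Z_{n,k}\,|\mathscr H_{n,k-1}]\big|,
\end{aligned}
```

so both choices of sign are dominated by the sum of the two absolute values.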

Then,

$$\begin{aligned}&\widehat{\mathbb E}\left[ \pm \left( S_{n,\tau _n(t)}-S_{n,\tau _n(s)}\right) -\sum _{k=\tau _n(s)+1}^{\tau _n(t)}\Big \{\big | \widehat{\mathbb E}[Z_{n,k} |\mathscr {H}_{n,k-1}]\big |+\big | \widehat{\mathcal E}[Z_{n,k} |\mathscr {H}_{n,k-1}]\big |\Big \} \Big |\mathscr {H}_{n,\tau _n(s)}\right] \\&\quad =\widehat{\mathbb E}\left[ \widehat{\mathbb E}\Big [\sum _{k=\tau _n(s)+1}^{\tau _n(t)}\big \{\pm Z_{n,k}-\big | \widehat{\mathbb E}[Z_{n,k} |\mathscr {H}_{n,k-1}]\big |-\big | \widehat{\mathcal E}[Z_{n,k} |\mathscr {H}_{n,k-1}]\big |\big \}\Big |\mathscr {H}_{n,\tau _n(t)-1} \Big ]\Big |\mathscr {H}_{n,\tau _n(s)}\right] \\&\quad \le \widehat{\mathbb E}\left[ \widehat{\mathbb E}\Big [\sum _{k=\tau _n(s)+1}^{\tau _n(t)-1}\big \{\pm Z_{n,k}-\big | \widehat{\mathbb E}[Z_{n,k} |\mathscr {H}_{n,k-1}]\big |-\big | \widehat{\mathcal E}[Z_{n,k} |\mathscr {H}_{n,k-1}]\big |\big \}\Big |\mathscr {H}_{n,\tau _n(t)-1} \Big ]\Big |\mathscr {H}_{n,\tau _n(s)}\right] \\&\quad = \widehat{\mathbb E}\left[ \sum _{k=\tau _n(s)+1}^{\tau _n(t)-1}\big \{\pm Z_{n,k}-\big | \widehat{\mathbb E}[Z_{n,k} |\mathscr {H}_{n,k-1}]\big |-\big | \widehat{\mathcal E}[Z_{n,k} |\mathscr {H}_{n,k-1}]\big |\big \} \Big |\mathscr {H}_{n,\tau _n(s)}\right] \\&\quad \le \cdots \le 0 \; \text { in } L_1. \end{aligned}$$

It follows that

$$\begin{aligned}&\widehat{\mathbb E}\left[ \pm \left( S_{n,\tau _n(t)}-S_{n,\tau _n(s)}\right) \big |\mathscr {H}_{n,\tau _n(s)}\right] \nonumber \\&\quad \le \widehat{\mathbb E}\left[ \sum _{k=\tau _n(s)+1}^{\tau _n(t)}\Big \{\big | \widehat{\mathbb E}[Z_{n,k} |\mathscr {H}_{n,k-1}]\big |+\big | \widehat{\mathcal E}[Z_{n,k} |\mathscr {H}_{n,k-1}]\big |\Big \} \Big |\mathscr {H}_{n,\tau _n(s)}\right] \\&\quad \le \widehat{\mathbb E}\left[ \chi _{k_n}\big |\mathscr {H}_{n,\tau _n(s)}\right] \rightarrow 0 \; \text { in } L_1,\nonumber \end{aligned}$$
(4.9)

which implies (4.5) and (4.6).

For (4.7), we first note that

$$\begin{aligned} \sum _{k=\tau _n(s)+1}^{\tau _n(t)}\widehat{\mathbb E}\left[ \big \langle {\varvec{Z}}_{n, k}A,{\varvec{Z}}_{n, k}\big \rangle \Big |\mathscr {H}_{n,k-1}\right] \rightarrow G(A)\big (\rho (t)-\rho (s)\big ) \text { in } L_p, \end{aligned}$$
(4.10)

for any \(p>0\), by condition (2.4). Without loss of generality, we assume \(s=0\), \(t=1\). Note

$$\begin{aligned}&\langle {\varvec{S}}_{k_n}A, {\varvec{S}}_{k_n}\rangle -\sum _{k=1}^{k_n}\widehat{\mathbb E}\left[ \langle {\varvec{Z}}_{n, k}A, {\varvec{Z}}_{n,k}\rangle \big |\mathscr {H}_{n,k-1}\right] \\&\quad = \sum _{k=1}^{k_n}\left( \langle {\varvec{Z}}_{n, k}A, {\varvec{Z}}_{n,k}\rangle -\widehat{\mathbb E}\left[ \langle {\varvec{Z}}_{n, k}A, {\varvec{Z}}_{n,k}\rangle \Big |\mathscr {H}_{n,k-1}\right] \right) +2\sum _{k=1}^{k_n}\langle {\varvec{S}}_{n, k-1}A, {\varvec{Z}}_{n,k}\rangle ,\\&\quad \widehat{\mathbb E}\left[ \pm \langle {\varvec{S}}_{n, k-1}A, {\varvec{Z}}_{n,k}\rangle \big |\mathscr {H}_{n,k-1}\right] \\&\quad \le 2|{\varvec{S}}_{n,k-1}A| \left\{ \left| \widehat{\mathbb E}\left[ {\varvec{Z}}_{n,k}\big |\mathscr {H}_{n,k-1}\right] \right| +\left| \widehat{\mathcal E}\left[ {\varvec{Z}}_{n,k}\big |\mathscr {H}_{n,k-1}\right] \right| \right\} \text { in }L_1. \end{aligned}$$

Then

$$\begin{aligned}&\widehat{\mathbb E}\left[ \pm \left( \sum _{k=1}^{k_n} \langle {\varvec{S}}_{n, k-1}A, {\varvec{Z}}_{n,k}\rangle \right) \big |\mathscr {H}_{n,0}\right] \\&\quad \le 2\widehat{\mathbb E}\left[ \sum _{k=1}^{k_n}|{\varvec{S}}_{n,k-1}A|\left\{ \left| \widehat{\mathbb E}\left[ {\varvec{Z}}_{n,k}\big |\mathscr {H}_{n,k-1}\right] \right| +\left| \widehat{\mathcal E}\left[ {\varvec{Z}}_{n,k}\big |\mathscr {H}_{n,k-1}\right] \right| \right\} \big |\mathscr {H}_{n,0}\right] \text { in } L_1, \end{aligned}$$

similarly to (4.9). It follows that

$$\begin{aligned}&\left| \widehat{\mathbb E}\Big [\langle {\varvec{S}}_{k_n}A, {\varvec{S}}_{k_n}\rangle -\sum _{k=1}^{k_n}\widehat{\mathbb E}\left[ \langle {\varvec{Z}}_{n, k}A, {\varvec{Z}}_{n,k}\rangle \big |\mathscr {H}_{n,k-1}\right] \Big |\mathscr {H}_{n,0}\Big ]\right| \\&\qquad \le 2\widehat{\mathbb E}\left[ \chi _{k_n}\max _{k\le k_n}|{\varvec{S}}_{n,k}A| \Big |\mathscr {H}_{n,0}\right] \text { in } L_1. \end{aligned}$$

Taking the sub-linear expectation yields

$$\begin{aligned}&\widehat{\mathbb E}\left[ \left| \widehat{\mathbb E}\Big [\langle {\varvec{S}}_{k_n}A, {\varvec{S}}_{k_n}\rangle -\sum _{k=1}^{k_n}\widehat{\mathbb E}\left[ \langle {\varvec{Z}}_{n, k}A, {\varvec{Z}}_{n,k}\rangle \big |\mathscr {H}_{n,k-1}\right] \Big |\mathscr {H}_{n,0}\Big ]\right| \right] \\&\quad \le 2\widehat{\mathbb E}\Big [ \chi _{k_n}\max _{k\le k_n}|{\varvec{S}}_{n,k}A|\Big ] \le 2\left( \widehat{\mathbb E}[\chi _{k_n}^2]\widehat{\mathbb E}[\max _{k\le k_n}|{\varvec{S}}_{n,k}A|^2]\right) ^{1/2} \le C \left( \widehat{\mathbb E}[\chi _{k_n}^2]\right) ^{1/2}\rightarrow 0, \end{aligned}$$

by (4.3) and the fact that \(\chi _{k_n}\rightarrow 0\) in \(L_p\). By noting (4.10), we have

$$\begin{aligned} \widehat{\mathbb E}\left[ \left| \widehat{\mathbb E}\Big [\langle {\varvec{S}}_{k_n}A, {\varvec{S}}_{k_n}\rangle -G(A)\rho (1) \Big |\mathscr {H}_{n,0}\Big ]\right| \right] \rightarrow 0. \end{aligned}$$

Thus (4.7) is proved. By the same argument as in Remark 2.2, (4.7) holds uniformly in \(A\in \mathbb S(d)\) with \(|A|\le c\).

For (4.4), it is easily seen that the first and the third terms in (4.8) converge to 0 in \(L_1\), and the second term converges to \(\big (\rho (t)-\rho (s)\big )^{r/2}\) by (4.10). Hence, (4.4) is proved, and the proof of (4.3)–(4.7) is completed.

Now, let \(w_{\delta }({\varvec{x}})=\sup _{|t-s|<\delta ,\, t,s\in [0,1]}|{\varvec{x}}(t)-{\varvec{x}}(s)|\). Assume \(0<\delta <1/10\). Let \(0=t_0<t_1<\cdots <t_K=1\) be such that \(t_k-t_{k-1}=\delta \), and let \(t_{K+1}=t_{K+2}=1\). For any \(\epsilon >0\), it is easily seen that

$$\begin{aligned}&\limsup _{n\rightarrow \infty } \mathbb V\left( w_{\delta }\left( {\varvec{W}}_n\right) \ge 3\epsilon \right) \\&\quad \le \limsup _{n\rightarrow \infty }2\sum _{k=0}^{K-1} \mathbb V\left( \max _{s\in [t_k,t_{k+2}]} |{\varvec{S}}_{n,\tau _n(s)}-{\varvec{S}}_{n,\tau _n(t_k)}|\ge \epsilon \right) \\&\quad \le \frac{2}{\epsilon ^4} \sum _{k=0}^{K-1}\limsup _{n\rightarrow \infty }\widehat{\mathbb E}\left[ \max _{s\in [t_k,t_{k+2}]} |{\varvec{S}}_{n,\tau _n(s)}-{\varvec{S}}_{n,\tau _n(t_k)}|^4\right] \\&\quad \le \frac{C}{\epsilon ^4} \sum _{k=0}^{K-1} \big (\rho (t_{k+2})-\rho (t_k)\big )^2\le \frac{C\rho (1)}{ \epsilon ^4} \sup _{|t-s|\le 2\delta } \big |\rho (t)-\rho (s)\big | \end{aligned}$$

by (4.4). It follows that for any \(\epsilon >0\),

$$\begin{aligned} \lim _{\delta \rightarrow 0} \limsup _{n\rightarrow \infty } \mathbb V\left( w_{\delta }\left( {\varvec{W}}_n\right) \ge \epsilon \right) =0. \end{aligned}$$
(4.11)

Hence, the sequence \(\{{\varvec{W}}_n(\cdot ); n\ge 1\}\) is tight, and so, for (2.6) it is sufficient to show (2.5) (cf. [1, 16]). Note that (2.5) is equivalent to

$$\begin{aligned}&\Big ( {\varvec{S}}_{n,\tau _n(t_1)}-{\varvec{S}}_{n,\tau _n(t_0)},\ldots , {\varvec{S}}_{n,\tau _n(t_d)}-{\varvec{S}}_{n,\tau _n(t_{d-1})}\Big )\\&\quad \overset{d}{\rightarrow }\Big ( {\varvec{W}}(\rho (t_1))-{\varvec{W}}(\rho (t_0)),\ldots ,{\varvec{W}}(\rho (t_d))-{\varvec{W}}(\rho (t_{d-1}))\Big ). \end{aligned}$$

By Lemma 4.4 and induction, it is sufficient to show that for any \(0\le s<t\le 1\) and any bounded Lipschitz function \(\varphi ({\varvec{u}}, {\varvec{x}})\),

$$\begin{aligned} \widehat{\mathbb E}\left[ \left| \widehat{\mathbb E}\left[ \varphi \big ({\varvec{u}}, {\varvec{S}}_{n, \tau _n(t)}-{\varvec{S}}_{n, \tau _n(s)}\big )\big |\mathscr {H}_{n,\tau _n(s)}\right] -\widetilde{\mathbb E}\left[ \varphi \big ({\varvec{u}}, {\varvec{W}}(\rho (t))-{\varvec{W}}(\rho (s)) \big )\right] \right| \right] \rightarrow 0. \end{aligned}$$
(4.12)

To show (4.12), we assume without loss of generality that \(s=0\), \(t=1\), \(|\varphi ({\varvec{u}},{\varvec{x}})-\varphi ({\varvec{u}},{\varvec{y}})|\le |{\varvec{x}}-{\varvec{y}}|\) and \(|\varphi ({\varvec{u}},{\varvec{x}})|\le 1\). Let \(V(t, {\varvec{x}})=V^{{\varvec{u}}} (t, {\varvec{x}})\) be the unique viscosity solution of the following equation,

$$\begin{aligned} \partial _t V^{{\varvec{u}}} + \frac{1}{2}G( D^2 V^{{\varvec{u}}})=0,\;\; (t, {\varvec{x}}) \in [0,\varrho + h] \times \mathbb R^d, \; V^{{\varvec{u}}}|_{t=\varrho +h} = \varphi ({\varvec{u}}, {\varvec{x}}), \end{aligned}$$

where \(\varrho =\rho (1)-\rho (0)\). Without loss of generality, we assume that there is a constant \(\epsilon >0\) such that

$$\begin{aligned} G(A)-G(\overline{A})\ge \epsilon \, \mathrm{tr}(A-\overline{A}) \text { for all } \; A,\overline{A}\in \mathbb S(d) \text { with } A\ge \overline{A}, \end{aligned}$$
(4.13)

for otherwise we can add a random vector \(\epsilon \cdot \widehat{\mathbb E}[|{\varvec{Z}}_{n,k}|^2\big |\mathscr {H}_{n,k-1}]{\varvec{\xi }}_{n,k}\) to \({\varvec{Z}}_{n,k}\), where \({\varvec{\xi }}_{n,k}\) has a d-dimensional standard normal \(N(0, I_{d\times d})\) distribution and is independent of \({\varvec{Z}}_{n,1},\ldots , {\varvec{Z}}_{n,k}\), \({\varvec{\xi }}_{n,1},\ldots ,{\varvec{\xi }}_{n,k-1}\). Under (4.13), by the interior regularity of \(V^{{\varvec{u}}}\) (cf. Theorem C.4.5 of Peng [10]),

$$\begin{aligned} \Vert V^{{\varvec{u}}}\Vert _{C^{1+\alpha /2,2+\alpha }([0,\varrho +h/2]\times \mathbb R^d)} < \infty , \text { for some } \alpha \in (0, 1). \end{aligned}$$
(4.14)

According to the definition of G-normal distribution, we have \(V(t,{\varvec{x}})=V^{{\varvec{u}}}(t,{\varvec{x}})=\widetilde{\mathbb E}\big [\varphi ({\varvec{u}}, {\varvec{x}}+\sqrt{\varrho +h-t}{\varvec{\xi }})\big ]\), where \({\varvec{\xi }}\sim N(0,G)\) under \(\widetilde{\mathbb E}\). In particular,

$$\begin{aligned}{} & {} V(h,{\varvec{0}})=\widetilde{\mathbb E}\big [\varphi ( {\varvec{u}}, \sqrt{\varrho }{\varvec{\xi }})\big ], \;\; V(\varrho +h, {\varvec{x}})=\varphi ({\varvec{u}},{\varvec{x}}).\nonumber \\{} & {} |V(t,{\varvec{x}})-V(t,{\varvec{y}})|\le |{\varvec{x}}-{\varvec{y}}|,\;\; |V(t,{\varvec{x}})-V(s,{\varvec{x}})|\le \frac{|s-t|\widetilde{\mathbb E}[|{\varvec{\xi }}|]}{\sqrt{\varrho +h-t}+\sqrt{\varrho +h-s}}.\nonumber \\ \end{aligned}$$
(4.15)

By (4.15), \(|V(\varrho +h,{\varvec{x}})-V(\varrho ,{\varvec{x}})|\le \sqrt{h}\,\widetilde{\mathbb E}[|{\varvec{\xi }}|]\) and \(|V(h,{\varvec{0}})-V(0,{\varvec{0}})|\le \sqrt{h}\,\widetilde{\mathbb E}[|{\varvec{\xi }}|]\). So, for (4.12) it is sufficient to show that

$$\begin{aligned} \widehat{\mathbb E}\left[ \Big |\widehat{\mathbb E}[ V(\varrho , {\varvec{S}}_{k_n}) |\mathscr {H}_{n,0}]-V(0,{\varvec{0}})\Big |\right] \rightarrow 0. \end{aligned}$$
(4.16)
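The two \(\sqrt{h}\)-estimates used just before (4.16) are worth spelling out; both come from the modulus bound in (4.15):

```latex
\begin{aligned}
  % take t = \varrho+h, s = \varrho in (4.15)
  |V(\varrho+h,{\varvec{x}})-V(\varrho,{\varvec{x}})|
    &\le \frac{h\,\widetilde{\mathbb E}[|{\varvec{\xi}}|]}{0+\sqrt{h}}
     = \sqrt{h}\,\widetilde{\mathbb E}[|{\varvec{\xi}}|], \\
  % take t = h, s = 0, and note \sqrt{\varrho}+\sqrt{\varrho+h} \ge \sqrt{h}
  |V(h,{\varvec{0}})-V(0,{\varvec{0}})|
    &\le \frac{h\,\widetilde{\mathbb E}[|{\varvec{\xi}}|]}{\sqrt{\varrho}+\sqrt{\varrho+h}}
     \le \sqrt{h}\,\widetilde{\mathbb E}[|{\varvec{\xi}}|].
\end{aligned}
```

Hence the left-hand side of (4.12) is bounded by the left-hand side of (4.16) plus \(2\sqrt{h}\,\widetilde{\mathbb E}[|{\varvec{\xi }}|]\), with \(h>0\) arbitrary.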

By (4.15) again and (4.14), for all \((t,{\varvec{x}})\in [0, \varrho +h/2] \times \mathbb R^d\),

$$\begin{aligned} |D V(t,{\varvec{x}})|\le C,\;\; |\partial _t V(t,{\varvec{x}})|\le C, \;\; |D^2 V(t, {\varvec{x}})|\le |D^2 V(0, {\varvec{0}})|+C|{\varvec{x}}|^{\alpha }\le C+C|{\varvec{x}}|^{\alpha }. \end{aligned}$$

For an integer m large enough, we define \(t_i=i/m\), \({\varvec{Y}}_{n,i}={\varvec{S}}_{n,\tau _n(t_i)}-{\varvec{S}}_{n,\tau _n(t_{i-1})}\), \(\widetilde{\delta }_i=\rho (t_i)\), \({\varvec{T}}_i=\sum _{j=1}^i {\varvec{Y}}_{n,j}\), \(i=1,\ldots , m\). Applying Taylor's expansion yields

$$\begin{aligned}&V(\varrho , {\varvec{S}}_{k_n})-V(0,{\varvec{0}})\\&\quad =\sum _{i=0}^{m-1} \left\{ [ V( \widetilde{\delta }_{i+1}, {\varvec{T}}_{i+1})-V( \widetilde{\delta }_i, {\varvec{T}}_{i+1})] +[ V( \widetilde{\delta }_i, {\varvec{T}}_{i+1})-V( \widetilde{\delta }_i, {\varvec{T}}_i)]\right\} \\&\quad =: \sum _{i=0}^{m-1} \left\{ I_{n}^i+J_{n}^i\right\} , \;\;\text {with }\\&J_{n}^i = \partial _t V( \widetilde{\delta }_i, {\varvec{T}}_i)\big (\widetilde{\delta }_{i+1}-\widetilde{\delta }_i\big )+ \big \langle D V( \widetilde{\delta }_i, {\varvec{T}}_i), {\varvec{Y}}_{n,i+1}\big \rangle +\frac{1}{2}\big \langle {\varvec{Y}}_{n,i+1} D^2 V(\widetilde{\delta }_i, {\varvec{T}}_i), {\varvec{Y}}_{n,i+1}\big \rangle \\&\quad = \left\{ \partial _t V( \widetilde{\delta }_i, {\varvec{T}}_i)+\frac{1}{2}G\Big (D^2 V(\widetilde{\delta }_i, {\varvec{T}}_i)\Big )\right\} \big (\widetilde{\delta }_{i+1}-\widetilde{\delta }_i\big ) \\&\qquad + \frac{1}{2} \Big \{ \big \langle {\varvec{Y}}_{n,i+1} D^2 V(\widetilde{\delta }_i, {\varvec{T}}_i), {\varvec{Y}}_{n,i+1}\big \rangle -\widehat{\mathbb E}\left[ \big \langle {\varvec{Y}}_{n,i+1} D^2 V(\widetilde{\delta }_i, {\varvec{T}}_i), {\varvec{Y}}_{n,i+1}\big \rangle \Big |\mathscr {H}_{n,\tau _n(t_i)}\right] \Big \} \\&\qquad + \Big \{\big \langle D V( \widetilde{\delta }_i, {\varvec{T}}_i), {\varvec{Y}}_{n,i+1}\big \rangle \Big \}\\&\qquad +\frac{1}{2} \Big \{ \widehat{\mathbb E}\left[ \big \langle {\varvec{Y}}_{n,i+1} D^2 V(\widetilde{\delta }_i, {\varvec{T}}_i), {\varvec{Y}}_{n,i+1}\big \rangle \Big |\mathscr {H}_{n,\tau _n(t_i)}\right] -G\Big (D^2 V(\widetilde{\delta }_i, {\varvec{T}}_i)\Big )\big (\widetilde{\delta }_{i+1}-\widetilde{\delta }_i\big )\Big \}\\&\quad =: 0+ J_{n,1}^i+J_{n,2}^i+J_{n,3}^i \end{aligned}$$

and

$$\begin{aligned} I_{n}^i&=\big (\widetilde{\delta }_{i+1}-\widetilde{\delta }_i\big )\left[ \big ( \partial _t V( \widetilde{\delta }_i+\gamma \big (\widetilde{\delta }_{i+1}-\widetilde{\delta }_i\big ), {\varvec{T}}_{i+1})-\partial _t V( \widetilde{\delta }_i , {\varvec{T}}_{i+1})\big )\right. \\&\quad \left. +\big ( \partial _t V( \widetilde{\delta }_i, {\varvec{T}}_{i+1})-\partial _t V( \widetilde{\delta }_i , {\varvec{T}}_i)\big ) \right] \\&\quad + \frac{1}{2}\left\langle {\varvec{Y}}_{n,i+1}\left[ D^2 V(\widetilde{\delta }_i, {\varvec{T}}_i+\beta {\varvec{Y}}_{n,i+1})-D^2 V(\widetilde{\delta }_i, {\varvec{T}}_i)\right] ,{\varvec{Y}}_{n,i+1}\right\rangle , \end{aligned}$$

where \(\gamma \) and \(\beta \) are between 0 and 1.
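As a consistency check on the regrouping of \(J_n^i\) above (an expansion of the decomposition just given, not an additional estimate): the terms \(\pm \frac{1}{2}G\big (D^2V(\widetilde{\delta }_i, {\varvec{T}}_i)\big )\big (\widetilde{\delta }_{i+1}-\widetilde{\delta }_i\big )\) and \(\pm \frac{1}{2}\widehat{\mathbb E}\big [\langle {\varvec{Y}}_{n,i+1} D^2 V(\widetilde{\delta }_i, {\varvec{T}}_i), {\varvec{Y}}_{n,i+1}\rangle \big |\mathscr {H}_{n,\tau _n(t_i)}\big ]\) cancel in pairs, so that

```latex
\begin{aligned}
  % all derivatives of V are evaluated at (\widetilde{\delta}_i, T_i)
  \Big\{\partial_t V+\tfrac{1}{2}G\big(D^2 V\big)\Big\}\big(\widetilde{\delta}_{i+1}-\widetilde{\delta}_i\big)
  + J_{n,1}^i + J_{n,2}^i + J_{n,3}^i
  = \partial_t V\,\big(\widetilde{\delta}_{i+1}-\widetilde{\delta}_i\big)
  + \big\langle DV,\,{\varvec{Y}}_{n,i+1}\big\rangle
  + \tfrac{1}{2}\big\langle {\varvec{Y}}_{n,i+1} D^2 V,\,{\varvec{Y}}_{n,i+1}\big\rangle,
\end{aligned}
```

which is exactly the second-order Taylor polynomial defining \(J_n^i\); the first bracket vanishes because \(V\) solves the G-heat equation.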

By (4.14), it is easily seen that

$$\begin{aligned} |I_n^i|&\le C\big | \widetilde{\delta }_{i+1}-\widetilde{\delta }_i \big |^{1+\alpha /2 } +C (\widetilde{\delta }_{i+1}-\widetilde{\delta }_i) |{\varvec{Y}}_{n,i+1}|^{\alpha } +C|{\varvec{Y}}_{n,i+1}|^{2+\alpha }\\&\le C \left( \rho (t_{i+1})-\rho (t_i)\right) ^{1+\alpha /2} +o(1)\; \text { in } L_1, \end{aligned}$$

by (4.4), where C is a positive constant that does not depend on the \(t_i\)'s.

For \(J_{n,1}^i\), note

$$\begin{aligned} \widehat{\mathbb E}\left[ J_{n,1}^i\big |\mathscr {H}_{n,\tau _n(t_i)}\right] =0\;\text { in } L_1. \end{aligned}$$

It follows that

$$\begin{aligned} \widehat{\mathbb E}\left[ \sum _{i=0}^{m-1}J_{n,1}^i\Big |\mathscr {H}_{n,0}\right]&= \widehat{\mathbb E}\left[ \sum _{i=0}^{m-2}J_{n,1}^i+\widehat{\mathbb E}\left[ J_{n,1}^{m-1}\big |\mathscr {H}_{n,\tau _n(t_{m-1})}\right] \Big |\mathscr {H}_{n,0}\right] \\&= \widehat{\mathbb E}\left[ \sum _{i=0}^{m-2}J_{n,1}^i\Big |\mathscr {H}_{n,0}\right] =\cdots =0 \;\text { in } L_1, \end{aligned}$$

since \(\sum _{i=0}^{m-2}J_{n,1}^i\) is \(\mathscr {H}_{n,\tau _n(t_{m-1})}\)-measurable.

For \(J_{n,2}^i\), we have

$$\begin{aligned}&\widehat{\mathbb E}[J_{n,2}^i|\mathscr {H}_{n,0}]=\widehat{\mathbb E}\Big [\widehat{\mathbb E}[J_{n,2}^i|\mathscr {H}_{n,\tau _n(t_i)}]\big |\mathscr {H}_{n,0}\Big ]\\&\quad \le \widehat{\mathbb E}\left[ |D V( \widetilde{\delta }_i, {\varvec{T}}_i)|\left\{ \left| \widehat{\mathbb E}[{\varvec{Y}}_{n,i+1}|\mathscr {H}_{n,\tau _n(t_i)}]\right| + \left| \widehat{\mathcal E}[{\varvec{Y}}_{n,i+1}|\mathscr {H}_{n,\tau _n(t_i)}]\right| \right\} \Big | \mathscr {H}_{n,0}\right] \\&\quad \le C \widehat{\mathbb E}\left[ \left| \widehat{\mathbb E}[{\varvec{Y}}_{n,i+1}|\mathscr {H}_{n,\tau _n(t_i)}]\right| + \left| \widehat{\mathcal E}[{\varvec{Y}}_{n,i+1}|\mathscr {H}_{n,\tau _n(t_i)}]\right| \Big | \mathscr {H}_{n,0}\right] \rightarrow 0 \text { in } L_1, \end{aligned}$$

by (4.5) and (4.6). Similarly, \(\widehat{\mathbb E}[-J_{n,2}^i|\mathscr {H}_{n,0}]\le o(1)\) in \(L_1\).

For \(J_{n,3}^i\), we have

$$\begin{aligned} |J_{n,3}^i|&\le \frac{1}{2} |D^2 V(\widetilde{\delta }_i, {\varvec{T}}_i)|\sup _{|A|\le 1}\Big | \widehat{\mathbb E}\left[ \big \langle {\varvec{Y}}_{n,i+1} A, {\varvec{Y}}_{n,i+1}\big \rangle \Big |\mathscr {H}_{n,\tau _n(t_i)}\right] -G(A)\big (\widetilde{\delta }_{i+1}-\widetilde{\delta }_i\big )\Big |\\&\le (C+C|{\varvec{T}}_i|^{\alpha })\sup _{|A|\le 1}\Big | \widehat{\mathbb E}\left[ \big \langle {\varvec{Y}}_{n,i+1} A, {\varvec{Y}}_{n,i+1}\big \rangle \Big |\mathscr {H}_{n,\tau _n(t_i)}\right] -G(A)\big (\widetilde{\delta }_{i+1}-\widetilde{\delta }_i\big )\Big | \\&\le C \sup _{|A|\le 1}\Big | \widehat{\mathbb E}\left[ \big \langle {\varvec{Y}}_{n,i+1} A, {\varvec{Y}}_{n,i+1}\big \rangle \Big |\mathscr {H}_{n,\tau _n(t_i)}\right] -G(A)\big (\widetilde{\delta }_{i+1}-\widetilde{\delta }_i\big )\Big |=o(1) \text { in } L_1 \end{aligned}$$

by (4.3) and (4.7), where C is a positive constant that does not depend on the \(t_i\)'s.

Combining the above arguments yields

$$\begin{aligned}&\Big |\widehat{\mathbb E}[ V(\varrho , {\varvec{S}}_{k_n}) |\mathscr {H}_{n,0}]-V(0,{\varvec{0}})\Big | \\&\quad \le \sum _{i=0}^{m-1} \left\{ \widehat{\mathbb E}[J_{n,2}^i|\mathscr {H}_{n,0}]+\widehat{\mathbb E}[-J_{n,2}^i|\mathscr {H}_{n,0}]+\widehat{\mathbb E}[|J_{n,3}^i|\big |\mathscr {H}_{n,0}] +\widehat{\mathbb E}[|I_n^i|\big |\mathscr {H}_{n,0}]\right\} \\&\quad \le C \sum _{i=0}^{m-1}\left( \rho (t_{i+1})-\rho (t_i)\right) ^{1+\alpha /2} +o(1) \\&\quad \le C \max _i\Big (\rho ((i+1)/m)-\rho (i/m)\Big )^{\alpha /2}\varrho +o(1)\; \text { in } L_1. \end{aligned}$$

The proof of (4.16) is completed by letting \(n\rightarrow \infty \) and then \(m\rightarrow \infty \).