Appendix: Proof of Theorem 1
In this section we follow the same line of reasoning as in the proof of Theorem 1.1 of Pang and Zheng (2017). We begin by decomposing the diffusion-scaled process into three separate processes in Lemma 1. The convergence of each of these processes is established in Lemma 4, Lemma 5 and Lemma 6; these lemmas are the multivariate counterparts of Lemmas 2.6, 2.7 and 2.8 of Pang and Zheng (2017), respectively. We conclude by establishing the joint convergence of the three processes at the end of this section.
Lemma 1
The diffusion-scaled process
\({\hat {\boldsymbol Y}}^{n}\)
can be decomposed into the following three processes:
$$\hat{Y}_{i}^{n}(t)=\hat{U}_{i}^{n}(t)+\hat{V}_{i}^{n}(t)+\hat{W}_{i}^{n}(t)$$
where
$$\hat{U}_{i}^{n}(t):=\frac{1}{n^{\delta}}\sum\limits_{k=1}^{{A_{i}^{n}}(t)}\left( Z_{i,k}^{n}(J^{n}(\tau_{i,k}^{n}))-\mu^{n}_{i,J^{n}(\tau_{i,k}^{n})}\right),$$
$$\hat{V}_{i}^{n}(t):=\frac{1}{n^{\delta}}\left( \sum\limits_{k=1}^{{A_{i}^{n}}(t)}\mu^{n}_{i,J^{n}(\tau_{i,k}^{n})}-{{\int}_{0}^{t}} \mu^{n}_{i,J^{n}(s)}\lambda^{n}_{i,J^{n}(s)}\,\mathrm{d}s\right),$$
$$\hat{W}_{i}^{n}(t):=\frac{1}{n^{\delta}}\left( {{\int}_{0}^{t}} \mu^{n}_{i,J^{n}(s)}\lambda^{n}_{i,J^{n}(s)}\,\mathrm{d}s-\sum\limits_{j=1}^{I}\lambda_{i,j}^{n}\mu_{i,j}^{n}\pi_{j} t\right).$$
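To verify the decomposition, note that the intermediate terms cancel pairwise, so that
$$\hat{U}_{i}^{n}(t)+\hat{V}_{i}^{n}(t)+\hat{W}_{i}^{n}(t)=\frac{1}{n^{\delta}}\left( \sum\limits_{k=1}^{{A_{i}^{n}}(t)} Z_{i,k}^{n}(J^{n}(\tau_{i,k}^{n}))-\sum\limits_{j=1}^{I}\lambda_{i,j}^{n}\mu_{i,j}^{n}\pi_{j} t\right),$$
which is precisely the centered, diffusion-scaled process \(\hat{Y}_{i}^{n}(t)\).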
For each \(n\in \mathbb {N}\), define \(\mu ^{n}_{i,*}:=\max \limits _{j\in S}\mu _{i,j}^{n}\), \(\lambda ^{n}_{i,*}:=\max \limits _{j\in S}\lambda _{i,j}^{n}\) and \(\sigma ^{n}_{i,*}:=\max \limits _{j\in S}\sigma _{i,j}^{n}\). By the scaling of the parameters of \({\boldsymbol Y}^{n}\), we obtain that, for all i ∈{1,...,m},
$$ \frac{1}{n}\lambda^{n}_{i,*}\rightarrow \lambda_{i,*}, \ \mu^{n}_{i,*}\rightarrow \mu_{i,*} \ \text{and} \ \sigma^{n}_{i,*}\rightarrow \sigma_{i,*}, $$
(10)
in \(\mathbb {R}\) as \(n\rightarrow \infty \). Then we can find n1 > 0 and Δ > 0 such that, for any n > n1 and all i ∈{1,...,m},
$$ \max\Big\{\frac{1}{n}\lambda^{n}_{i,*},\mu^{n}_{i,*},\sigma^{n}_{i,*}\Big\}<{\Delta}. $$
(11)
We fix n1 and Δ throughout the proof. We start by proving the convergence of \({\hat {\boldsymbol U}}^{n}\). For this we require the following auxiliary result, which is a direct extension of Lemma 2.2 of Pang and Zheng (2017).
Lemma 2
Let \(z_{1,1}, z_{1,2},...,z_{m,n-1}, z_{m,n}\) and \(w_{1,1}, w_{1,2},...,w_{m,n-1}, w_{m,n}\) be complex numbers of modulus at most b. Then
$$\Bigg|\prod\limits_{i=1}^{m}\prod\limits_{j=1}^{n} z_{i,j}-\prod\limits_{i=1}^{m}\prod\limits_{j=1}^{n} w_{i,j}\Bigg|\leq b^{m-1}\sum\limits_{i=1}^{m}\Bigg|\prod\limits_{j=1}^{n} z_{i,j}-\prod\limits_{j=1}^{n} w_{i,j}\Bigg|\leq b^{m+n-2}\sum\limits_{i=1}^{m}\sum\limits_{j=1}^{n}|z_{i,j}-w_{i,j}|.$$
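A sketch of the argument: both bounds rest on the telescoping identity behind Lemma 2.2 of Pang and Zheng (2017), which for a single product reads
$$\prod\limits_{j=1}^{n} z_{j}-\prod\limits_{j=1}^{n} w_{j}=\sum\limits_{k=1}^{n}\Bigg(\prod\limits_{j=1}^{k-1}z_{j}\Bigg)(z_{k}-w_{k})\Bigg(\prod\limits_{j=k+1}^{n}w_{j}\Bigg),$$
so that each summand has modulus at most \(b^{n-1}|z_{k}-w_{k}|\). The first inequality follows by telescoping over i, and the second by additionally telescoping each inner product over j; in our application below the factors are characteristic-function terms of modulus at most one.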
Lemma 3
The finite-dimensional distributions of \(\hat {\boldsymbol U}^{n}=(\hat {U}_{1}^{n},...,\hat {U}_{m}^{n})\) converge to those of \(\hat {\boldsymbol U}:=(\hat {U}_{1},...,\hat {U}_{m})\), given by
$$ \hat{\boldsymbol U}:=\left\{\begin{array}{llll}\mathbf{B}^{1}, & \text{if \ } \delta=\frac{1}{2}, \ \alpha\geq 1,\\ \boldsymbol{0}, & \text{if \ } \delta=1-\frac{\alpha}{2}, \ \alpha\in(0,1); \end{array}\right. $$
(12)
here \(\mathbf {B}^{1}=({B^{1}_{1}},...,{B^{1}_{m}})\) is an m-dimensional Brownian motion with \(\mathbb {E}\left [(\mathbf {B}^{1}(t))(\mathbf {B}^{1}(t))^{\top }\right ]=\bar {\Sigma }^{1}t\), where \(\bar {\Sigma }^{1}\) has been defined in Section 3.1.
Proof
We need to prove
$$ (\hat{\boldsymbol U}^{n}(t_{1}),...,\hat{\boldsymbol U}^{n}(t_{k}))\Rightarrow(\hat{\boldsymbol U}(t_{1}),...,\hat{\boldsymbol U}(t_{k})) \ \text{in} \ \mathbb{R}^{m\times k} \ \text{as} \ n\rightarrow\infty, $$
(13)
for any 0 ≤ t1 ≤⋯ ≤ tk ≤ T and k ≥ 1. We first consider the case of a single point in time: we aim at proving that, for each t ≥ 0,
$$\hat{\boldsymbol U}^{n}(t)\Rightarrow\hat{\boldsymbol U}(t) \ \text{in} \ \mathbb{R}^{m} \ \text{as} \ n\rightarrow\infty.$$
By Lévy’s continuity theorem on \(\mathbb {R}^{m}\) (Kallenberg 1997, Thm. 4.3), it is sufficient to show convergence of the characteristic functions: we have to prove that, as \(n\to \infty \),
$$ {\psi_{t}^{n}}(\boldsymbol{\theta}):=\mathbb{E}\left[e^{i\boldsymbol{\theta}^{T}\hat{\boldsymbol U}^{n}(t)}\right]\to \psi_{t}(\boldsymbol{\theta}):=\mathbb{E}\left[e^{i\boldsymbol{\theta}^{T}\hat{\boldsymbol U}(t)}\right]$$
for every \(\boldsymbol {\theta }\in \mathbb {R}^{m}\). By the definition of \(\hat {\boldsymbol U}\) in Eq. 12,
$$ \psi_{t}(\boldsymbol{\theta})=\mathbb{E}\left[e^{i\boldsymbol{\theta}^{T}\hat{\boldsymbol U}(t)}\right]=\left\{\begin{array}{llll}\exp\left( -\frac{1}{2}\boldsymbol{\theta}^{T}\bar{\Sigma}^{1}\boldsymbol{\theta} t\right), & \delta=\frac{1}{2}, \ \alpha\geq 1\\ 1, & \delta=1-\frac{\alpha}{2}, \ \alpha\in(0,1). \end{array}\right. $$
(14)
Let \(\mathcal {A}^{n}_{t}:=\sigma \{\mathbf {A}^{n}(s):0\leq s\leq t\}\vee \sigma \{J^{n}(s):0\leq s\leq t\}\vee \mathcal {N}\), where \(\mathcal {N}\) is the collection of P-null sets. Then, by conditioning, we obtain
$$ \begin{array}{@{}rcl@{}} {\psi_{t}^{n}}(\boldsymbol{\theta})&=&\mathbb{E}\left[\exp\left( i\boldsymbol{\theta}^{T}\hat{\boldsymbol U}^{n}(t)\right)\right]=\mathbb{E}\left[\mathbb{E}\left[\exp\left( i\boldsymbol{\theta}^{T}\hat{\boldsymbol U}^{n}(t)\right)\big|\mathcal{A}^{n}_{t}\right]\right]\\ &=&\mathbb{E}\left[\prod\limits_{i=1}^{m}\prod\limits_{k=1}^{{A_{i}^{n}}(t)}\mathbb{E}\left[\exp\left( i\theta_{i}\frac{1}{n^{\delta}}\left( Z_{i,k}^{n}(J^{n}(\tau_{i,k}^{n}))-\mu^{n}_{i,J^{n}(\tau_{i,k}^{n})}\right)\right)\big|\mathcal{A}^{n}_{t}\right]\right]\\ &=&\mathbb{E}\left[\prod\limits_{i=1}^{m}\prod\limits_{k=1}^{{A_{i}^{n}}(t)}\left( 1-\frac{{\theta_{i}^{2}}}{2n^{2\delta}}(\sigma_{i,J^{n}(\tau_{i,k}^{n})}^{n})^{2}+o(n^{-2\delta})\right)\right] \end{array} $$
(15)
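The last equality in Eq. 15 uses the second-order expansion of the characteristic function of the centered random variables, valid since \((\sigma_{i,j}^{n})^{2}\), the variance of \(Z_{i,k}^{n}(j)\), is finite:
$$\mathbb{E}\left[\exp\left( iu\left( Z_{i,k}^{n}(j)-\mu^{n}_{i,j}\right)\right)\right]=1-\frac{u^{2}}{2}(\sigma_{i,j}^{n})^{2}+o(u^{2}) \ \text{as} \ u\rightarrow 0,$$
applied with \(u=\theta_{i}/n^{\delta}\).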
By Eq. 10, we can find n2 such that for any n > n2 and all i ∈{1,...,m},
$$0<\max\limits_{1\leq k\leq {A^{n}_{i}}(t)}\left\{\frac{{\theta_{i}^{2}}}{2n^{2\delta}}(\sigma_{i,J^{n}(\tau_{i,k}^{n})}^{n})^{2}-o\left( n^{-2\delta}\right)\right\}<1.$$
Furthermore, recall the definition of n1 in Eq. 11. Then, for \(\delta =\frac {1}{2}\), α ≥ 1 and for any
$$ n>n_{3}:=\max\{n_{1},n_{2}\}, $$
(16)
we have
$$ \begin{array}{@{}rcl@{}} \Big|{\psi_{t}^{n}}(\boldsymbol{\theta})-\psi_{t}(\boldsymbol{\theta})\Big|&\leq& \mathbb{E}\Bigg[\Bigg|\prod\limits_{i=1}^{m}\prod\limits_{k=1}^{{A_{i}^{n}}(t)}\left( 1-\frac{{\theta_{i}^{2}}}{2n}(\sigma_{i,J^{n}(\tau_{i,k}^{n})}^{n})^{2}+o(n^{-1})\right) \\ &&\qquad-\prod\limits_{i=1}^{m}\prod\limits_{k=1}^{{A_{i}^{n}}(t)}\exp\left( -\frac{{\theta_{i}^{2}}}{2n}(\sigma_{i,J^{n}(\tau_{i,k}^{n})}^{n})^{2}\right)\Bigg|\Bigg]\\ &&+\Bigg|\mathbb{E}\left[\exp\left( -\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{{A_{i}^{n}}(t)}\frac{{\theta_{i}^{2}}}{2n}(\sigma_{i,J^{n}(\tau_{i,k}^{n})}^{n})^{2}\right)\right]-\exp\left( -\sum\limits_{i=1}^{m}\frac{{\theta_{i}^{2}}}{2}\bar{\sigma}_{i}^{2}t\right)\Bigg|\\ &\leq & \mathbb{E}\left[\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{{A_{i}^{n}}(t)}\frac{{\theta_{i}^{4}}}{4n^{2}}(\sigma_{i,J^{n}(\tau_{i,k}^{n})}^{n})^{4}\right]+o(1)\\ &&+\Bigg|\mathbb{E}\left[\exp\left( -\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{{A_{i}^{n}}(t)}\frac{{\theta_{i}^{2}}}{2n}(\sigma_{i,J^{n}(\tau_{i,k}^{n})}^{n})^{2}\right)\right]-\exp\left( -\sum\limits_{i=1}^{m}\frac{{\theta_{i}^{2}}}{2}\bar{\sigma}_{i}^{2}t\right)\Bigg|\\ &\rightarrow& \ 0 \ \text{as} \ n\rightarrow\infty; \end{array} $$
(17)
here, the first inequality is due to the triangle inequality, and the second follows from Lemma 2 above in combination with Lemma 2.3 of Pang and Zheng (2017). By Eq. 11, for n > n3 as defined in Eq. 16, we have
$$\mathbb{E}\left[\frac{1}{n}\sum\limits_{k=1}^{{A_{i}^{n}}(t)}(\sigma_{i,J^{n}(\tau_{i,k}^{n})}^{n})^{4}\right]\leq{\Delta}^{5} t, \ \forall i, \ t\geq 0.$$
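To verify this bound, note that each summand is at most \((\sigma^{n}_{i,*})^{4}\leq{\Delta}^{4}\) and that \({A_{i}^{n}}(t)\) is, conditionally on \(J^{n}\), a Poisson random variable with mean at most \(\lambda^{n}_{i,*}t\leq n{\Delta} t\), so that
$$\mathbb{E}\left[\frac{1}{n}\sum\limits_{k=1}^{{A_{i}^{n}}(t)}(\sigma_{i,J^{n}(\tau_{i,k}^{n})}^{n})^{4}\right]\leq\frac{{\Delta}^{4}}{n}\,\mathbb{E}\left[{A_{i}^{n}}(t)\right]\leq\frac{{\Delta}^{4}}{n}\cdot n{\Delta} t={\Delta}^{5}t.$$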
As a result, the first two terms on the right-hand side of Eq. 17 converge to 0 as \(n\rightarrow \infty \). For the convergence of the last term, since the sequence
$$\left\{\exp\left( -\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{{A_{i}^{n}}(t)}\frac{{\theta_{i}^{2}}}{2n}(\sigma_{i,J^{n}(\tau_{i,k}^{n})}^{n})^{2}\right):n\geq 1\right\}$$
is bounded for each t ≥ 0, it suffices to show that, for all i ∈{1,...,m},
$$ \frac{1}{n}\sum\limits_{k=1}^{{A_{i}^{n}}(t)}(\sigma_{i,J^{n}(\tau_{i,k}^{n})}^{n})^{2}\Rightarrow\bar{\sigma}_{i}^{2} t, \ \ \text{in} \ \mathbb{R} \ \text{as} \ n\rightarrow\infty. $$
(18)
This follows from the convergences
$$\sum\limits_{j=1}^{I} \frac{\lambda_{i,j}^{n}}{n}{{\int}_{0}^{t}} \mathbb{1}(J^{n}(s)=j) \mathrm{d}s\rightarrow\sum\limits_{j=1}^{I}\lambda_{i,j}\pi_{j}t \ \text{a.s.},$$
and
$$\sum\limits_{j=1}^{I} \frac{\lambda_{i,j}^{n}}{n}(\sigma_{i,j}^{n})^{2}{{\int}_{0}^{t}} \mathbb{1}(J^{n}(s)=j) \mathrm{d}s\rightarrow\sum\limits_{j=1}^{I}\lambda_{i,j}\sigma_{i,j}^{2}\pi_{j}t \ \text{a.s.}$$
by claim (4) in Anderson et al. (2016), the weak law of large numbers for Poisson processes, and the ‘random change of time lemma’ (Billingsley 1999, p. 151).
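In more detail, the random sum in Eq. 18 can be written as an integral against the arrival process,
$$\frac{1}{n}\sum\limits_{k=1}^{{A_{i}^{n}}(t)}(\sigma_{i,J^{n}(\tau_{i,k}^{n})}^{n})^{2}=\frac{1}{n}{{\int}_{0}^{t}}(\sigma_{i,J^{n}(s)}^{n})^{2}\,\mathrm{d}{A_{i}^{n}}(s),$$
whose compensator \(\frac{1}{n}{{\int}_{0}^{t}}(\sigma_{i,J^{n}(s)}^{n})^{2}\lambda^{n}_{i,J^{n}(s)}\,\mathrm{d}s\) equals the left-hand side of the second display above; the cited results then transfer the almost sure convergence of the compensator to the random sum.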
For \(\delta =1-\frac {\alpha }{2}\) and α ∈ (0, 1), we follow the same line of reasoning and prove
$$\Bigg|\mathbb{E}\left[\exp\left( -\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{{A_{i}^{n}}(t)}\frac{{\theta_{i}^{2}}}{2n^{2\delta}}(\sigma_{i,J^{n}(\tau_{i,k}^{n})}^{n})^{2}\right)\right]-1\Bigg|\rightarrow 0 \ \text{as} \ n\rightarrow\infty.$$
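This holds because the random exponent vanishes in expectation: by Eq. 11 and \(\mathbb{E}[{A_{i}^{n}}(t)]\leq\lambda^{n}_{i,*}t\leq n{\Delta} t\),
$$\mathbb{E}\left[\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{{A_{i}^{n}}(t)}\frac{{\theta_{i}^{2}}}{2n^{2\delta}}(\sigma_{i,J^{n}(\tau_{i,k}^{n})}^{n})^{2}\right]\leq\sum\limits_{i=1}^{m}\frac{{\theta_{i}^{2}}}{2}{\Delta}^{3}t\,n^{\alpha-1}\rightarrow 0 \ \text{as} \ n\rightarrow\infty,$$
since \(2\delta=2-\alpha\) and α ∈ (0,1).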
Thereby, we have shown the convergence in Eq. 13 for a single point in time.
To show the convergence of the finite-dimensional distributions, it is sufficient to prove that, for any \((\boldsymbol {\theta }^{1},...,\boldsymbol {\theta }^{l})\in \mathbb {R}^{m\times l}\) and 0 ≤ t1 < ⋯ < tl ≤ T,
$$\mathbb{E}\left[\exp\left( i\sum\limits_{k=1}^{l}(\boldsymbol{\theta}^{k})^{\top}\hat{\boldsymbol U}^{n}(t_{k})\right)\right]\rightarrow\mathbb{E}\left[\exp\left( i\sum\limits_{k=1}^{l}(\boldsymbol{\theta}^{k})^{\top}\hat{\boldsymbol U}(t_{k})\right)\right] \ \text{as} \ n\rightarrow\infty.$$
By the definition of \(\hat {\boldsymbol U}\), we have
$$ \begin{array}{@{}rcl@{}} &&\mathbb{E}\left[\exp\left( i\sum\limits_{k=1}^{l}(\boldsymbol{\theta}^{k})^{\top}\hat{\boldsymbol U}(t_{k})\right)\right]\\ &=& \left\{\begin{array}{llll}{\displaystyle \exp\left( -\frac{1}{2}\sum\limits_{k_{1}=1}^{l}\sum\limits_{k_{2}=1}^{l}(\boldsymbol{\theta}^{k_{1}})^{\top}\bar{\Sigma}^{1}\boldsymbol{\theta}^{k_{2}} (t_{k_{1}}\wedge t_{k_{2}})\right)}, & \delta=\frac{1}{2}, \ \alpha\geq 1\\ 1, & \delta=1-\frac{\alpha}{2}, \ \alpha\in(0,1). \end{array}\right. \end{array} $$
By conditioning and direct calculation as in Eq. 15, we have, with t0 := 0,
$$ \begin{array}{@{}rcl@{}} &&\mathbb{E}\bigg[\exp\bigg(i\sum\limits_{k=1}^{l}(\boldsymbol{\theta}^{k})^{\top}\hat{\boldsymbol U}^{n}(t_{k})\bigg)\bigg]\\&=& \mathbb{E}\left[\prod\limits_{j=1}^{l}\prod\limits_{i=1}^{m}\exp\left( i\frac{1}{n^{\delta}}\sum\limits_{k=j}^{l}{\theta^{k}_{i}}\sum\limits_{h={A_{i}^{n}}(t_{j-1})+1}^{{A_{i}^{n}}(t_{j})}\left( Z_{i,h}^{n}(J^{n}(\tau_{i,h}^{n}))-\mu^{n}_{i,J^{n}(\tau_{i,h}^{n})}\right)\right)\right]\\ &&\rightarrow\left\{\begin{array}{llll}{\prod}_{j=1}^{l}{\prod}_{i=1}^{m}\exp\left( -\frac{1}{2}\left( {\sum}_{k=j}^{l} {\theta^{k}_{i}}\right)^{2}\bar{\sigma}_{i}^{2}(t_{j}-t_{j-1})\right), & \delta=\frac{1}{2}, \ \alpha\geq 1\\ 1, & \delta=1-\frac{\alpha}{2}, \ \alpha\in(0,1), \end{array}\right. \end{array} $$
as \(n\rightarrow \infty \), and
$$ \begin{array}{@{}rcl@{}} &&\prod\limits_{j=1}^{l}\prod\limits_{i=1}^{m}\exp\left( -\frac{1}{2}\left( \sum\limits_{k=j}^{l} {\theta^{k}_{i}}\right)^{2}\bar{\sigma}_{i}^{2}(t_{j}-t_{j-1})\right)\\ &=&\exp\left( -\frac{1}{2}\sum\limits_{k_{1}=1}^{l}\sum\limits_{k_{2}=1}^{l}(\boldsymbol{\theta}^{k_{1}})^{\top}\bar{\Sigma}^{1}\boldsymbol{\theta}^{k_{2}} (t_{k_{1}}\wedge t_{k_{2}})\right). \end{array} $$
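The last equality uses that \(\bar{\Sigma}^{1}\) is diagonal with entries \(\bar{\sigma}_{i}^{2}\) (as is implicit in Eq. 17), together with the rearrangement, valid for each i with t0 := 0,
$$\sum\limits_{k_{1}=1}^{l}\sum\limits_{k_{2}=1}^{l}{\theta^{k_{1}}_{i}}{\theta^{k_{2}}_{i}}(t_{k_{1}}\wedge t_{k_{2}})=\sum\limits_{j=1}^{l}\left( \sum\limits_{k=j}^{l} {\theta^{k}_{i}}\right)^{2}(t_{j}-t_{j-1}),$$
which follows by writing \(t_{k_{1}}\wedge t_{k_{2}}=\sum_{j=1}^{l}\mathbb{1}(j\leq k_{1}\wedge k_{2})(t_{j}-t_{j-1})\) and interchanging the order of summation.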
Applying Lévy’s continuity theorem (now on \(\mathbb {R}^{m\times l}\)), the convergence can be shown in a similar way as in Eqs. 15 and 17. Therefore, we have proven the weak convergence of the finite-dimensional distributions. □
Lemma 4
\(\hat {\boldsymbol U}^{n}\Rightarrow \hat {\boldsymbol U}\) in \(\mathbb {D}^{m}\) as \(n\rightarrow \infty \), where \(\hat {\boldsymbol U}\) is given in Eq. 12.
Proof
Marginal tightness of the components \(\hat {U}_{i}^{n}\) has been proven by Pang and Zheng (2017, Lemma 2.5), which implies joint tightness of \(\hat {\boldsymbol U}^{n}\) (Kosorok 2008, Lemma 7.14(i)). Together with the continuity of the limit process \(\hat {\boldsymbol U}\) and the finite-dimensional convergence of Lemma 3, we can apply Thm. 13.1 of Billingsley (1999) to conclude the convergence of \(\hat {\boldsymbol U}^{n}\). □
Lemma 5
\(\hat {\boldsymbol V}^{n}\Rightarrow \hat {\boldsymbol V}\) in \(\mathbb {D}^{m}\) as \(n\rightarrow \infty \), where \(\hat {\boldsymbol V}\) is given by
$$ \hat{\boldsymbol V}:=\left\{\begin{array}{llll} \mathbf{B}^{2}, & \text{if \ } \delta=\frac{1}{2}, \ \alpha\geq 1,\\ \mathbf{0}, & \text{if \ } \delta=1-\frac{\alpha}{2}, \ \alpha\in(0,1); \end{array}\right. $$
(19)
here \(\mathbf {B}^{2}:=({B_{1}^{2}},...,{B_{m}^{2}})\) is an m-dimensional zero-mean Brownian motion with \(\mathbb {E}\left [(\mathbf {B}^{2}(t))(\mathbf {B}^{2}(t))^{T}\right ]=\bar {\Sigma }^{2}t\), where \(\bar {\Sigma }^{2}\) has been defined in Section 3.1.
Proof
Centered Poisson processes are \(\mathbb {R}\)-valued martingales; that is, for each \(n\in \mathbb {N}\) and every i ∈{1,...,m} the process
$$\left\{{A_{i}^{n}}(t)-{{\int}_{0}^{t}}\lambda_{i,J^{n}(u)}^{n} \mathrm{d}u:t\geq 0\right\}$$
is a martingale. Consequently, \(\hat {\boldsymbol V}^{n}\) is an \(\mathbb {R}^{m}\)-valued martingale. The maximum jump size of \(\hat {V}_{i}^{n}\) is \(\mu _{i,*}^{n}/n^{\delta }\). By Eq. 1, we obtain that the expected value of the maximum jump is asymptotically negligible, i.e., for all i ∈{1,...,m},
$$\frac{1}{n^{\delta}}\mathbb{E}\left[\mu_{i,*}^{n}\right]\rightarrow 0, \ \text{as} \ n\rightarrow\infty.$$
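A way to make the martingale property of \(\hat{V}_{i}^{n}\) explicit is the representation
$$\hat{V}_{i}^{n}(t)=\frac{1}{n^{\delta}}{{\int}_{0}^{t}}\mu^{n}_{i,J^{n}(s)}\,\mathrm{d}\left( {A_{i}^{n}}(s)-{{\int}_{0}^{s}}\lambda_{i,J^{n}(u)}^{n}\,\mathrm{d}u\right),$$
i.e., an integral of a bounded process against the compensated arrival process, which is again a martingale.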
For \(n\in \mathbb {N}\), let \(\{[\hat {V}_{i}^{n},\hat {V}_{j}^{n}](t):t\geq 0\}\) denote the quadratic covariation process of \(\hat {V}_{i}^{n}\) and \(\hat {V}_{j}^{n}\). Then, for each t, we have by the quadratic variation of a compound Poisson process, as \(n\to \infty \),
$$ \begin{array}{@{}rcl@{}} [\hat{V}_{i}^{n},\hat{V}_{j}^{n}](t)&=&\frac{1}{n^{2\delta}}\left\{\begin{array}{llll} {\sum}_{k=1}^{{A_{i}^{n}}(t)}(\mu^{n}_{i,J^{n}(\tau_{i,k}^{n})})^{2}, & \text{for} \ i=j,\\ 0, & \text{for} \ i\neq j, \end{array}\right. \end{array} $$
(20)
$$ \begin{array}{@{}rcl@{}}&\Rightarrow&\left\{\begin{array}{llll}\bar{\Sigma}_{i,j}^{2} t, & \text{if \ } \delta=\frac{1}{2}, \ \alpha\geq 1,\\ 0, & \text{if \ } \delta=1-\frac{\alpha}{2}, \ \alpha\in(0,1), \end{array}\right. \ \end{array} $$
(21)
in \(\mathbb {R}\), where the convergence is proven in the same way as Eq. 18; the off-diagonal terms vanish because, almost surely, the components of \(\mathbf {A}^{n}\) have no common jump epochs. Applying Thm. 2.1 of Whitt (2007), we have shown the convergence of \(\hat {\boldsymbol V}^{n}\). □
Lemma 6
\(\hat {\boldsymbol W}^{n}\Rightarrow \hat {\boldsymbol W}\) in \(\mathbb {D}^{m}\) as \(n\rightarrow \infty \), where the limit process \(\hat {\boldsymbol W}:=\{\hat {\boldsymbol W}(t):t\geq 0\}\) is given by
$$ \hat{\boldsymbol W}:=\left\{\begin{array}{llll}\mathbf{0}, & \text{if \ } \delta=\frac{1}{2}, \ \alpha> 1,\\ \mathbf{B}^{3}, & \text{if \ } \delta=1-\frac{\alpha}{2}, \ \alpha\in(0,1]; \end{array}\right. $$
(22)
here \(\mathbf {B}^{3}=({B_{1}^{3}},...,{B_{m}^{3}})\) is an m-dimensional Brownian motion with \(\mathbb {E}\left [(\mathbf {B}^{3}(t))(\mathbf {B}^{3}(t))^{T}\right ]=\bar {\Sigma }^{3}t\), where \(\bar {\Sigma }^{3}\) has been defined in Section 3.1.
Proof
Let \(\bar {{\boldsymbol W}}^{n}:=(\bar {W}_{1}^{n},...,\bar {W}_{m}^{n})\) with, for i = 1,…,m,
$$\bar{W}_{i}^{n}(t):=\frac{1}{n^{\delta}}\left( \sum\limits_{k=1}^{I}\mu_{i,k}^{n}\lambda_{i,k}^{n}{{\int}_{0}^{t}} \mathbb{1}(J^{n}(s)=k) \,\mathrm{d}s-\sum\limits_{k=1}^{I}\mu_{i,k}^{n}\lambda_{i,k}^{n}\pi_{k} t \right).$$
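Note that \(\bar{W}_{i}^{n}\) coincides with \(\hat{W}_{i}^{n}\): splitting the integral in the definition of \(\hat{W}_{i}^{n}\) according to the state of \(J^{n}\) gives
$${{\int}_{0}^{t}} \mu^{n}_{i,J^{n}(s)}\lambda^{n}_{i,J^{n}(s)}\,\mathrm{d}s=\sum\limits_{k=1}^{I}\mu_{i,k}^{n}\lambda_{i,k}^{n}{{\int}_{0}^{t}} \mathbb{1}(J^{n}(s)=k)\,\mathrm{d}s.$$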
By Prop. 3.2 of Anderson et al. (2016), we have, as \(n\rightarrow \infty \),
$$\bar{{\boldsymbol W}}^{n}\Rightarrow\left\{\begin{array}{llll}\mathbf{0}, & \text{if \ } \delta=\frac{1}{2}, \ \alpha> 1,\\ {\boldsymbol B}^{3}, & \text{if \ } \delta=1-\frac{\alpha}{2}, \ \alpha\in(0,1], \end{array}\right.$$
with B3 as defined above. Since \(\hat {\boldsymbol W}^{n}=\bar {{\boldsymbol W}}^{n}\), this concludes the proof. □
Proof of Theorem 1
By Lemmas 1, 4, 5 and 6, we have obtained the marginal convergences \(\hat {\boldsymbol U}^{n}\Rightarrow \hat {\boldsymbol U}\), \(\hat {\boldsymbol V}^{n}\Rightarrow \hat {\boldsymbol V}\) and \(\hat {\boldsymbol W}^{n}\Rightarrow \hat {\boldsymbol W}\). It remains to prove the joint convergence
$$\left( \hat{\boldsymbol U}^{n},\hat{\boldsymbol V}^{n},\hat{\boldsymbol W}^{n}\right)\Rightarrow \left( \hat{\boldsymbol U},\hat{\boldsymbol V},\hat{\boldsymbol W}\right),$$
where \(\hat {\boldsymbol U}\), \(\hat {\boldsymbol V}\) and \(\hat {\boldsymbol W}\) are mutually independent. To this end, we first note that \(\hat {\boldsymbol U}^{n}\) and \(\hat {\boldsymbol V}^{n}\) are compensated compound Poisson processes which are martingales. Furthermore, by Anderson et al. (2016, Lemma 3.1) the process \(\hat {\boldsymbol W}^{n}\) is a martingale.
By Jacod and Shiryaev (2002, Thm. 3.12, Ch. VIII), of which a slightly less general version is given in Aldous et al. (1985, Cor. 2.17, p. 264), it suffices to show that, for \({\boldsymbol M}^{n}:=(\hat {\boldsymbol U}^{n},\hat {\boldsymbol V}^{n},\hat {\boldsymbol W}^{n})\) and \({\boldsymbol M}:=(\hat {\boldsymbol U},\hat {\boldsymbol V},\hat {\boldsymbol W})\),
$$ \left[{\boldsymbol M}^{n},{\boldsymbol M}^{n}\right](t)\Rightarrow \left[\begin{array}{llll} \hat{\Sigma}^{1} & \textbf{0} & \textbf{0} \\ \textbf{0} & \hat{\Sigma}^{2} & \textbf{0} \\ \textbf{0} & \textbf{0} & \hat{\Sigma}^{3} \end{array}\right] t, $$
(23)
where
$$(\hat{\Sigma}^{1},\hat{\Sigma}^{2}) = \left\{\begin{array}{llll}(\bar{\Sigma}^{1},\bar{\Sigma}^{2}), & \text{if \ } \delta=\frac{1}{2}, \ \alpha\geq 1,\\ (\mathbf{0},\mathbf{0}), & \text{if \ } \delta=1-\frac{\alpha}{2}, \ \alpha\in(0,1), \end{array}\right. \hat{\Sigma}^{3}=\left\{\begin{array}{llll}\mathbf{0}, & \text{if \ } \delta=\frac{1}{2}, \ \alpha> 1,\\ \bar{\Sigma}^{3}, & \text{if \ } \delta=1 - \frac{\alpha}{2}, \ \alpha\!\in\!(0,1]. \end{array}\right.$$
For \(\alpha \in (0,\infty )\) and \(\delta \in [\frac {1}{2},1)\),
$$ \begin{array}{@{}rcl@{}} \left[\hat{U}_{i}^{n},\hat{V}_{j}^{n}\right](t)&=&\frac{1}{n^{2\delta}}\left\{\begin{array}{llll} {\sum}_{k=1}^{{A_{i}^{n}}(t)}\left( Z_{i,k}^{n}(J^{n}(\tau_{i,k}^{n}))-\mu^{n}_{i,J^{n}(\tau_{i,k}^{n})}\right)\mu^{n}_{i,J^{n}(\tau_{i,k}^{n})}, & \text{for} \ i=j,\\0, & \text{for} \ i\neq j, \end{array}\right. \\ &\Rightarrow &0 \ \text{in} \ \mathbb{R}, \ \text{as} \ n\rightarrow\infty. \end{array} $$
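The convergence to zero follows from a second-moment bound: conditionally on \(\mathcal{A}^{n}_{t}\), the summands are independent with mean zero, so that, by Eq. 11,
$$\mathbb{E}\left[\left( \left[\hat{U}_{i}^{n},\hat{V}_{i}^{n}\right](t)\right)^{2}\right]=\frac{1}{n^{4\delta}}\mathbb{E}\left[\sum\limits_{k=1}^{{A_{i}^{n}}(t)}(\sigma_{i,J^{n}(\tau_{i,k}^{n})}^{n})^{2}(\mu^{n}_{i,J^{n}(\tau_{i,k}^{n})})^{2}\right]\leq\frac{{\Delta}^{5}t}{n^{4\delta-1}}\rightarrow 0,$$
since \(4\delta-1\geq 1\) for \(\delta\in[\frac{1}{2},1)\).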
Together with \(\big [\hat {U}_{i}^{n},\hat {W}_{j}^{n}\big ](t)=0\) and \(\big [\hat {V}_{i}^{n},\hat {W}_{j}^{n}\big ](t)=0\), which hold since \(\hat {\boldsymbol W}^{n}\) has continuous paths of finite variation, this proves Eq. 23. The proof of Theorem 1 is completed by applying the continuous mapping theorem. □