Abstract
This paper develops the asymptotic theory for parametric and nonparametric regression models when the errors have a fractional local to unity root (FLUR) model structure. FLUR models are stationary time series with a semi-long range dependence property, in the sense that their covariance function resembles that of a long memory model at moderate lags but eventually decays exponentially fast owing to a decay factor governed by an exponential tempering parameter. When this parameter is sample size dependent, the asymptotic theory for these regression models admits a wide range of stochastic processes whose behavior includes long, semi-long, and short memory processes.
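The semi-long range dependence described in the abstract can be seen directly in the tempered moving-average weights. The following sketch is our own numerical illustration, not part of the paper: it takes the fractional-integration coefficients \(b_d(k)=\varGamma (k+d)/(\varGamma (d)\varGamma (k+1))\) with illustrative values \(d=0.3\) and tempering parameter \(\lambda =0.005\), and shows that the tempered weights \(b_d(k)e^{-\lambda k}\) are close to the untempered (long memory) weights at moderate lags but negligible at distant lags.

```python
import math

def b_d(d, n):
    """Fractional-integration weights b_d(k) = Gamma(k+d)/(Gamma(d)Gamma(k+1)),
    computed via the stable recursion b_d(k) = b_d(k-1)*(k-1+d)/k."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (k - 1 + d) / k)
    return w

d, lam = 0.3, 0.005          # illustrative memory and tempering parameters
w = b_d(d, 3000)
tempered = [wk * math.exp(-lam * k) for k, wk in enumerate(w)]

# Moderate lags: tempering barely bites, so the weights resemble long memory.
print(tempered[20] / w[20])      # e^{-0.1} ~ 0.905
# Distant lags: the exponential factor dominates and the weights are negligible.
print(tempered[3000] / w[3000])  # e^{-15} ~ 3e-7
```

With \(\lambda \) proportional to \(1/N\), as in the paper's sample-size-dependent setting, the crossover lag at which tempering takes over grows with the sample size.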
References
Abramowitz M, Stegun I (1965) Handbook of mathematical functions, 9th edn. Dover, New York
Baillie R (1996) Long-memory processes and fractional integration in econometrics. J Econ 73:5–59
Barndorff-Nielsen O (1998) Processes of normal inverse Gaussian type. Finance Stoch 2:41–68
Beran J, Feng Y (2013) Optimal convergence rates in non-parametric regression with fractional time series errors. J Time Ser Anal 34:30–39
Beran J, Weiershäuser A, Galizia C, Rein J, Smith B, Strauch M (2014) On piecewise polynomial regression under general dependence conditions, with an application to calcium-imaging data. Sankhya B 76:49–81
Billingsley P (1968) Convergence of probability measures. Wiley, Hoboken
Bobkoski M (1983) Hypothesis testing in nonstationary time series. Dissertation, University of Wisconsin
de Boor C (2001) A practical guide to splines, revised edn. Springer, New York
Brockwell P, Davis R (2012) Time series: theory and methods, 2nd edn. Springer, Berlin
Chan N, Wei C (1987) Asymptotic inference for nearly nonstationary AR(1) processes. Ann Statist 15:1050–1063
Csörgő M, Horváth L (1997) Limit theorems in change-point analysis. Wiley, Hoboken
Csörgő S, Mielniczuk J (1995) Close short-range dependent sums and regression estimation. Acta Sci Math (Szeged) 60:177–196
Csörgő S, Mielniczuk J (1995) Nonparametric regression under long-range dependent normal errors. Ann Statist 23:1000–1014
Csörgő S, Mielniczuk J (1995) Distant long-range dependent sums and regression estimation. Stochast Process Appl 59:143–155
Dacorogna M, Müller U, Nagler R, Olsen R, Pictet O (1993) A geographical model for the daily and weekly seasonal volatility in the foreign exchange market. J Int Money Finance 12:413–443
De Brabanter K, Cao F, Gijbels I, Opsomer J (2018) Local polynomial regression with correlated errors in random design and unknown correlation structure. Biometrika 105:681–690
Deo R (1997) Nonparametric regression with long-memory errors. Stat Probab Lett 33:89–94
Feller W (1971) An introduction to probability theory and its applications, 2nd edn. Wiley, Hoboken
Giraitis L, Kokoszka P, Leipus R (2000) Stationary ARCH models: dependence structure and central limit theorem. Econ Theory 16:3–22
Giraitis L, Kokoszka P, Leipus R, Teyssière G (2003) On the power of R/S-type tests under contiguous and semi-long memory alternatives. Acta Appl Math 78:285–299
Giraitis L, Koul H, Surgailis D (2012) Large sample inference for long memory processes. Imperial College Press, London
Granger C, Joyeux R (1980) An introduction to long-memory time series models and fractional differencing. J Time Ser Anal 1:15–29
Granger C, Ding Z (1996) Varieties of long memory models. J Econ 73:61–77
Guo H, Koul H (2007) Nonparametric regression with heteroscedastic long memory errors. J Statist Plann Inference 137:379–404
Hall P, Hart J (1990) Nonparametric regression with long-range dependence. Stochast Process Appl 36:339–351
Hosking J (1981) Fractional differencing. Biometrika 68:165–176
Hosking J (1984) Modeling persistence in hydrological time series using fractional differencing. Water Resour Res 20:1898–1908
McLeod AI, Meerschaert MM, Sabzikar F (2016) Artfima v1.5. https://cran.r-project.org/web/packages/artfima/artfima.pdf
Meerschaert M, Sabzikar F (2014) Stochastic integration with respect to tempered fractional Brownian motion. Stochast Process Appl 124:2363–2387
Müller U, Watson M (2016) Measuring uncertainty about long-run predictions. Rev Econ Stud 83:1711–1740
Phillips P (1987) Time series regression with a unit root. Econometrica 55:277–301
Phillips P, Yu J (2011) Dating the timeline of financial bubbles during the subprime crisis. Quantitat Econ 2:455–491
Phillips P, Shi S, Yu J (2015) Testing for multiple bubbles: historical episodes of exuberance and collapse in the S&P 500. Int Econ Rev 56:1042–1076
Phillips P, Shi S, Yu J (2015) Testing for multiple bubbles: limit theory of real time detectors. Int Econ Rev 56:1077–1131
Pipiras V, Taqqu M (1997) Asymptotic theory for certain regression models with long memory errors. J Time Ser Anal 18:385–393
Pipiras V, Taqqu M (2000) Convergence of weighted sums of random variables with long range dependence. Stochast Process Appl 90:157–174
Priestley M, Chao M (1972) Nonparametric function fitting. J R Stat Soc Ser B Stat Methodol 34:385–392
Ray B, Tsay R (1997) Bandwidth selection for kernel regression with long-range dependent errors. Biometrika 84:791–802
Robinson P (1997) Large sample inference for nonparametric regression with dependent errors. Ann Stat 25:2054–2083
Sabzikar F, Surgailis D (2018a) Tempered fractional Brownian and stable motions of second kind. Stat Probab Lett 132:17–27
Sabzikar F, Surgailis D (2018b) Invariance principles for tempered fractionally integrated processes. Stochast Process Appl 128:3419–3438
Samorodnitsky G, Taqqu M (1994) Stable non-Gaussian random processes: stochastic models with infinite variance. Chapman and Hall, Boca Raton
Seber G, Wild C (1989) Nonlinear regression. Wiley, Hoboken
Wahba G (1990) Spline models for observational data. Society for Industrial and Applied Mathematics (SIAM), Philadelphia
Weiershäuser A (2012) Piecewise polynomial regression with fractional residuals for the analysis of calcium imaging data. Dissertation. http://kops.uni-konstanz.de/handle/123456789/18867, University of Konstanz
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Proofs
Before proving the main results of the paper, we first state two technical lemmas on which our results are based. Lemma 1 and Lemma 2 play an important role in establishing the asymptotic results in Sect. 4. Next, we introduce some notation that will be used in Lemma 2.
For the function f and \(m\in \mathbb {N}\cup \{\infty \}\), we define the approximation
Lemma 1
Let \(b_d(k)\) be defined as in (4). Then for any \(y \in \mathbb {R}\)
as \(N\rightarrow \infty \).
Proof of Lemma 1
For \(d>0\), we have \(\sum _{k=0}^Nb_d(k) \sim 1/(d\varGamma (d)) N^d\) as \(N\rightarrow \infty \) according to (4), and \(|e^{-(\lambda _N+\frac{ix}{N})}| \le 1\); hence, by the Tauberian theorem for power series (Feller 1971, p. 447, Theorem 5), we have
and consequently \(\bigl (\lambda _N + i\frac{y}{N}\bigr )^{d}/(1-e^{- (\lambda _N + i\frac{y}{N})k})^{d}\sim 1\) as \(N\rightarrow \infty \) (since \(\lambda _N \rightarrow 0\)), proving (43). For \(-1<d<0\), define \(\widetilde{b}_d(k) = \sum _{i=k}^\infty b_d(i) \sim -1/(d\varGamma (d))k^{\widetilde{d}-1}\) with \(\widetilde{d}=d+1 \in (0,1)\). Next, we have that
and therefore \(\sum _{i=0}^{k-1} b_d(i) = -\sum _{i=k}^\infty b_d(i)\). Using summation by parts (Giraitis et al., 2012, p. 32, Eq. 2.5.8) yields
By setting \(j=k-1\) and using (44) we have
Application of the Tauberian theorem for power series (Feller, 1971, p. 447 Theorem 5) yields
as \(N\rightarrow \infty \), proving (43). In the general case \(-j< d < -j +1\), \(j = 1,2,\ldots \), (43) follows similarly using summation by parts \(j\) times. For \(d=0\), the same result can be shown to hold under an additional assumption on the sum of the \(b_d(k)\)’s; see Sabzikar and Surgailis (2018b).
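The partial-sum asymptotic \(\sum _{k=0}^N b_d(k) \sim N^d/(d\varGamma (d))\) invoked at the start of the proof is easy to check numerically. The sketch below is our own illustration (the value \(d=0.3\) is arbitrary) and uses the recursion \(b_d(k+1)=b_d(k)(k+d)/(k+1)\):

```python
import math

def partial_sum_b(d, n):
    """Sum of the fractional weights b_d(k) = Gamma(k+d)/(Gamma(d)Gamma(k+1))
    for k = 0..n, accumulated via the recursion b_d(k+1) = b_d(k)(k+d)/(k+1)."""
    s, w = 0.0, 1.0
    for k in range(n + 1):
        s += w
        w *= (k + d) / (k + 1)
    return s

d, N = 0.3, 10**5
exact = partial_sum_b(d, N)
asymptotic = N**d / (d * math.gamma(d))
print(exact / asymptotic)   # close to 1 for large N
```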
Lemma 2
Let \(\{ X_{d,\lambda _N}(j)\}_{j\in \mathbb {Z}}\) be a tempered linear process given by (3), with \(N\lambda _N\rightarrow \lambda _*\in (0,\infty )\) and \(d>-1/2\). Let \({\mathcal {A}}_{d,\lambda _*}\) be the class of functions defined by (26) and let
be satisfied. Then
as \(N\rightarrow \infty \).
Proof of Lemma 2
We first note that \(\{X_{d,\lambda _N}(j)\}_{j\in \mathbb {Z}}\) can be written as
where \(\hat{B}(d \omega ) \) is a complex-valued Gaussian random measure with \(\mathbb {E}|\hat{B}(d \omega )|^2 = d \omega \); see Brockwell and Davis (2012, Sects. 4.6–4.7). Define
The Wiener integral \(U\) is well-defined since \(f\in {\mathcal {A}}_{d,\lambda }\). To show that the series \(U_{N}\) is well-defined in \(L^{2}(\varOmega )\), first apply the spectral representation of \(\{X_{d,\lambda _N}(j)\}_{j\in \mathbb {Z}}\) given by (46):
where \(\widehat{f_{N,m}}(y)=\sum _{j=0}^{m} f\Big (\frac{j}{N}\Big ) \frac{1}{\sqrt{2\pi }} \int _{\mathbb {R}}e^{i\omega y}\mathbf {1}_{(\frac{j}{N},\frac{j+1}{N})}(\omega )\ d\omega \) is the Fourier transform of \({f_{N,m}}\). We note
for \(d>-\frac{1}{2}\) and a constant C by Lemma 1. Using (48) and (49), we have
where \(C'\) is another constant. Now, for \(m_2>m_1\ge 1\), we have
as \(m_1,m_2\rightarrow \infty \), and this shows that the series is well-defined. The following remark explains the inequality in (50).
Remark 5
In (50) we used Lemma 1 and \(\Big |\frac{\frac{iy}{N}}{e^{\frac{iy}{N}}-1} \Big |^{2} \le \frac{\pi ^2}{4}\) for \(y \in [-N\pi ,N\pi ]\). This can be seen as follows
Then taking the limit yields
Next, we show that \(U_{N}\) converges in distribution to \(U\) as \(N\rightarrow \infty \). As in Meerschaert and Sabzikar (2014, Theorem 3.15), the set of elementary functions is dense in \({\mathcal {A}}_{d,\lambda }\), and hence there exists a sequence of elementary functions \(f^{l}\) such that \(\Vert f-f^{l}\Vert _{{\mathcal {A}}_{3}}\rightarrow 0\) as \(l\rightarrow \infty \). Now, assume
Observe that \(U^{l}_{N}\) is well-defined, since \(U^{l}_{N}\) has a finite number of terms and the elementary function \(f^{l}\) is in \({\mathcal {A}}_{3}\). According to Billingsley (1968, Theorem 4.2), the series \(U_{N}\) converges in distribution to \(U\) if
- Step 1:
-
\(U^{l}{\mathop {\longrightarrow }\limits ^{d}}U\), as \(l\rightarrow \infty \),
- Step 2:
-
for all \(l\in \mathbb {N}\), \(U^{l}_{N}{\mathop {\longrightarrow }\limits ^{d}}U^{l}\), as \(N\rightarrow \infty \),
- Step 3:
-
\(\limsup _{l\rightarrow \infty }\limsup _{N\rightarrow \infty }\mathbb {E}\Big |U^{l}_{N}-U_{N}\Big |^{2}=0\).
Step 1: The random variables \(U^{l}\) and \(U\) are normally distributed with mean zero and variances \(\Vert f^{l}\Vert _{{\mathcal {A}}_{3,\lambda _N}}\) and \(\Vert f\Vert _{{\mathcal {A}}_{3,\lambda _N}}\), respectively, since \(f\) and \(f^{l}\) belong to \({{\mathcal {A}}_{3,\lambda _N}}\). Therefore \(\mathbb {E}\Big |U^{l}-U\Big |^{2}=\Vert f^{l}-f\Vert _{{\mathcal {A}}_{3,\lambda _N}}\rightarrow 0\) as \(l\rightarrow \infty \).
Step 2: Note that \(f^{l}\) is an elementary function and hence \(U^{l}_{N}\), given by (51), can be written as \(U^{l}_{N}=\frac{1}{ N^{d+\frac{1}{2} } }\int _{\mathbb {R}} f^{l}(u) dS_{d,\lambda _N}(u)\). Now, apply part (iii) of Theorem 4.3 in Sabzikar and Surgailis (2018b) to see that \(\frac{S_{d,\lambda _N}(u)}{ N^{d+\frac{1}{2} } }{\mathop {\longrightarrow }\limits ^{\mathrm{f.d.d.}}}\varGamma ^{-1}(d+1) B^{II}_{d,\lambda _*}(u)\), as \(N\rightarrow \infty \), and this implies that \(U^{l}_{N}{\mathop {\longrightarrow }\limits ^{\mathrm{f.d.d.}}}U^{l}\), as \(N\rightarrow \infty \).
Step 3: By arguments similar to those leading to (49) and (50), we have
where \(\widehat{f^{l}_{N}}(y)\) and \(\widehat{f_N}(y)\) are the Fourier transforms of
and \(f_{N}:=\sum _{j=0}^{\infty }f\Big (\frac{j}{N}\Big ) \mathbf{1}_{ \big (\frac{j}{N},\frac{Nx-j+1}{N}\big ) }(u)\) respectively. Note that \(f^{l}\) is an elementary function and therefore \(\widehat{f^{l}_{N}}\) converges to \(\widehat{f^{l}}\) at every point and \(\Big |\widehat{f^{l}_{N}}(\omega )-\widehat{f^{l}}(\omega )\Big |\le \widehat{g^{l}}(\omega )\) uniformly in N, for some function \(\widehat{g^{l}}(\omega )\) which is bounded by the minimum of \(C_1\) and \(C_2|\omega |^{-1}\) for all \(\omega \in \mathbb {R}\) (see Theorem 3.2. in Pipiras and Taqqu (2000) for more details). Let \(\mu _{d,\lambda }(d\omega )= (\lambda ^2+\omega ^2)^{-d}\ d\omega \) be the measure on the real line for \(d >-\frac{1}{2}\), then \(\widehat{g^{l}}(\omega )\in L^{2}(\mathbb {R},\mu _{d,\lambda })\). Now apply the Dominated Convergence Theorem to see that
as \(N\rightarrow \infty \). From (48) and (53), we have
The first two terms tend to zero as \(N\rightarrow \infty \) because of (53) and Condition A, respectively, and the last term tends to zero as \(l\rightarrow \infty \) (see Step 1); this completes the proof of Step 3. \(\square \)
Proof of Theorem 1
We prove only part (c) and omit the proofs of parts (a) and (b), which are similar. We first show that
where \(B^{II}_{d,\lambda _*}(t)\) is TFBMII. Starting from the left-hand side of (54), approximating the Riemann sums by integrals and using integration by parts,
where we used
and the assumptions on the kernel function K to see that
Next, by stationarity of \(X_{d,\lambda _N}(j)\) and a change of variable we have
where \(l_{x}(u)= \lfloor N(x-hu)\rfloor - \lfloor N(x-h)\rfloor +1\). Consequently, from (55) and (56), we get
According to Sabzikar and Surgailis (2018b, Theorem 4.3) we have
in D[0, 2] provided \(Nh\rightarrow \infty \). Now, the desired result (54) follows from (57), (58), and the continuous mapping theorem. \(\square \)
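To make the kernel-weighted sums behind (54) concrete, the following simulation sketch (our own illustration, not the paper's code) computes a Priestley–Chao-type kernel estimator \(\hat{m}(x)=(Nh)^{-1}\sum _{j} K((x-j/N)/h)Y_j\) on the fixed design \(j/N\), with an Epanechnikov kernel, which is compactly supported on \([-1,1]\) as the proofs assume for \(K\). For simplicity it uses i.i.d. Gaussian errors rather than the tempered linear process \(X_{d,\lambda _N}\).

```python
import math, random

def epanechnikov(u):
    """Kernel supported on [-1, 1], integrating to one."""
    return 0.75 * (1.0 - u * u) if abs(u) <= 1.0 else 0.0

def priestley_chao(x, ys, h):
    """Priestley-Chao estimator on the fixed design j/N, j = 1..N."""
    N = len(ys)
    return sum(epanechnikov((x - j / N) / h) * ys[j - 1]
               for j in range(1, N + 1)) / (N * h)

random.seed(0)
N, h = 2000, 0.08                       # Nh -> infinity regime: Nh = 160 here
m = lambda x: math.sin(2.0 * math.pi * x)   # hypothetical smooth regression function
ys = [m(j / N) + random.gauss(0.0, 0.1) for j in range(1, N + 1)]

print(priestley_chao(0.25, ys, h), m(0.25))  # estimate vs. true value
```

Under the dependent-error setting of the theorem, only the normalization and the limit distribution change; the estimator itself is computed in the same way.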
Proof of Theorem 2
The proof of this theorem follows from Proposition 1 and Theorem 1, and hence we omit the details. \(\square \)
Proof of Theorem 3
For brevity, we restrict the proof of the theorem to \(k=2\). Moreover, we prove only part (c) of the theorem, since the proofs of the other two cases are similar. We first note that for each \(0<x_i<1\),
Let \(j_i\) be integers such that \(|Nx_i-j_i|\le 1\) for \(i=1,2\) and (similar to Deo (1997)) define
for \(i=1, 2\). Since the kernel function \(K\) vanishes on \(\mathbb {R}\setminus [-1,1]\) and \(|K^{'}(x)|\le C\) for all \(x\in [-1,1]\), it follows that
for \(i=1,2\). By a change of variable \(s=\nu +j_1 -Nh\) and \(s=\nu +j_2 -Nh-\lfloor N\delta \rfloor \), with \(\delta = x_2-x_1\), for \(\widetilde{A}_{N,1}\) and \(\widetilde{A}_{N,2}\) respectively, use the fact that \(X_{d,\lambda _N}\) is stationary, and \(|K^{'}(x)|\le C\) to see that
where
and
We use the partial sums \(A^{*}_{N,i}\), \(i=1,2\), to establish the functional limit theorems. Let \(\{ K_m\}\) be a sequence of elementary functions such that \(K_m\rightarrow K\) in \(L^{2}\) as \(m\rightarrow \infty \). Define \(A^{*}_{m,N,i}\) as in (63) and (64) with \(K_m(x)=\sum _{i=1}^{m} a_i \mathbf{1}_{( t_{i-1},t_{i} )}(x)\), where the \(a_i\) are constants and \(-1\le t_i \le 1\) for \(i=0, \ldots , m\). We can rewrite \(A^{*}_{m,N,i}\) as
where
and
Using Sabzikar and Surgailis (2018b, Theorem 4.3) and the continuous mapping theorem yields
as \(N\rightarrow \infty \) and hence
Next, we need to show that \(A^{*}_{m,N,1}\) and \(A^{*}_{m,N,2}\) are asymptotically independent (i.e. \(\sigma ^{2}_{12} = \sigma ^{2}_{21} =0\) ). Observe that
and
where
Since \(b_{d}(j)e^{-\lambda _N j}\sim C j^{d-1} e^{-\lambda _N j}\) for large lag \(j\) (see (4)), for \(p> {\lfloor \lfloor Nh\rfloor t_j\rfloor }\) we have
where C is a constant. Therefore, we get
since \(h\log (Nh) \rightarrow 0\) and \(M=Nh\log (Nh)\). Consequently,
and by a similar argument
From (75), (76), and \( \lfloor N\delta \rfloor - 2Nh\rightarrow \infty \), we conclude that \( S^{*}_{N,1}(t_j) - S^{*}_{N,1}(t_{j-1})\) and \( S^{*}_{N,2}(t_{j^{'}}) - S^{*}_{N,2}(t_{{ j^{'} } -1})\) are asymptotically independent for all \(j, j^{'}\), which implies that \(A^{*}_{m,N,1}\) and \(A^{*}_{m,N,2}\) are asymptotically independent. Thus
where \(\sigma ^{2}_{ii}\) is given by (69) and \(\sigma ^{2}_{12}=\sigma ^{2}_{21}=0\). Since
and by Pipiras and Taqqu (1997, Theorem 2), one obtains
the desired result follows from (61), (62), (78), and (79). \(\square \)
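The asymptotic-independence argument above hinges on the tempered tail bound \(b_{d}(j)e^{-\lambda _N j}\le C j^{d-1} e^{-\lambda _N j}\): contributions from distant lags are exponentially damped. The sketch below is our own illustration, with arbitrary values \(d=0.3\) and \(\lambda =0.01\); it compares tail sums of the untempered and tempered coefficients.

```python
import math

def b_d(d, n):
    # b_d(k) = Gamma(k+d)/(Gamma(d)Gamma(k+1)) via the recursion
    # b_d(k) = b_d(k-1)*(k-1+d)/k
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (k - 1 + d) / k)
    return w

d, lam, J = 0.3, 0.01, 2000
w = b_d(d, J)

# Untempered tail sum over distant lags: still non-negligible (long memory).
tail_plain = sum(w[j] for j in range(1000, J + 1))
# Tempered tail sum: killed by the exponential factor, which is what makes
# far-apart blocks of the process asymptotically independent.
tail_tempered = sum(w[j] * math.exp(-lam * j) for j in range(1000, J + 1))
print(tail_plain, tail_tempered)
```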
Proof of Theorem 4
The proofs of parts (a) and (b) are similar, so we prove only part (b), for \(\lambda _*\in (0,\infty )\). The triangle inequality yields
For the first term, by Weiershäuser (2012, p. 95, Theorem 5.1.9), (3), and Markov’s inequality, we have
Since \(\varDelta \) is arbitrary, \(\lim _{N\rightarrow \infty }\mathbb {P}\big ( N^{\frac{1}{2} - d}\big \Vert \hat{\theta }-\theta \big \Vert >\frac{\varDelta }{2} \big ) = 0\) for \(d < 1/2\). For the second term, again by Markov’s inequality,
Let \(\varOmega = ({{\,\mathrm{\mathbf {M}}\,}}_{N+}^T{{\,\mathrm{\mathbf {M}}\,}}_{N+})^{-1} {{\,\mathrm{\mathbf {M}}\,}}_{N+}^T\). Then \(\mathbb {E}\Vert \varOmega {{\,\mathrm{\mathbf {e}}\,}}_N\Vert ^2 = {{\,\mathrm{\text {tr}}\,}}(\varOmega ^T\varOmega \varSigma _{{{\,\mathrm{\mathbf {e}}\,}}_N{{\,\mathrm{\mathbf {e}}\,}}_N}) + \{\mathbb {E}({{\,\mathrm{\mathbf {e}}\,}}_N)\}^T\varOmega ^T\varOmega \,\mathbb {E}({{\,\mathrm{\mathbf {e}}\,}}_N)\), where \({{\,\mathrm{\text {tr}}\,}}(A)\) and \(\varSigma _{{{\,\mathrm{\mathbf {e}}\,}}_N{{\,\mathrm{\mathbf {e}}\,}}_N}\) denote the trace of the matrix \(A\) and the variance-covariance matrix of \({{\,\mathrm{\mathbf {e}}\,}}_N\), respectively. Since \(\{ X_{d,\lambda }(t) \}_{t\in \mathbb {Z}}\) is a mean-zero tempered linear process, we have \(\mathbb {E}\Vert \varOmega {{\,\mathrm{\mathbf {e}}\,}}_N\Vert ^2 = {{\,\mathrm{\text {tr}}\,}}(\varOmega ^T\varOmega \varSigma _{{{\,\mathrm{\mathbf {e}}\,}}_N{{\,\mathrm{\mathbf {e}}\,}}_N})\). Further, the variance-covariance matrix of \({{\,\mathrm{\mathbf {e}}\,}}_N\) is finite; see (17). Consequently, the numerator of (80) is finite, since \({{\,\mathrm{\mathbf {M}}\,}}_{N+}^T{{\,\mathrm{\mathbf {M}}\,}}_{N+}\) has full rank for all sufficiently large \(N\), and hence the second term goes to zero for \(d<1/2\). \(\square \)
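The variance computation in this proof rests on the standard quadratic-form identity \(\mathbb {E}\Vert \varOmega \mathbf {e}\Vert ^2={{\,\mathrm{\text {tr}}\,}}(\varOmega ^T\varOmega \varSigma )={{\,\mathrm{\text {tr}}\,}}(\varOmega \varSigma \varOmega ^T)\) for a mean-zero vector \(\mathbf {e}\) with covariance matrix \(\varSigma \). The following check is a generic linear-algebra illustration with arbitrary matrices (not the paper's \(\mathbf {M}_{N+}\)):

```python
import numpy as np

rng = np.random.default_rng(0)
Omega = rng.standard_normal((3, 6))   # stand-in for (M^T M)^{-1} M^T
A = rng.standard_normal((6, 6))
Sigma = A @ A.T                       # a valid covariance matrix

# Exact identity via cyclicity of the trace:
# E||Omega e||^2 = tr(Omega^T Omega Sigma) = tr(Omega Sigma Omega^T).
lhs = np.trace(Omega.T @ Omega @ Sigma)
rhs = np.trace(Omega @ Sigma @ Omega.T)
print(lhs, rhs)

# Monte-Carlo confirmation of the expectation itself.
e = rng.multivariate_normal(np.zeros(6), Sigma, size=200_000)
mc = np.mean(np.sum((e @ Omega.T) ** 2, axis=1))
print(mc)   # close to lhs
```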
Proof of Theorem 5
The proofs of parts (a) and (b) are similar, and hence we give the proof only for part (b). We first note that \(\int _\mathbb {R}\mu _{(i+)}(u) dB^{II}_{d,\lambda _*}(u) = \int _{\mathbb {R}} {\mathbb {I}}^{d,\lambda _*}_{-}\mu _{(i+)}(u) dB(u) \). Observe that \(\mu _{(i+)}\in L^{p}(\mathbb {R})\) for \(p\ge 1\) and hence \({\mathbb {I}}^{d,\lambda _*}_{-}\mu _{(i+)}\in L^{p}\). In particular, take \(p=2\) and apply the Itô isometry to conclude that \(\int _{\mathbb {R}} {\mathbb {I}}^{d,\lambda _*}_{-}\mu _{(i+)}(u) dB(u)\) is well-defined. By Theorem 4, it suffices to show that
as \(N\rightarrow \infty \). Observe that \(N ({{\,\mathrm{\mathbf {M}}\,}}_{N+}^T{{\,\mathrm{\mathbf {M}}\,}}_{N+})^{-1} \rightarrow \varLambda \) as \(N\rightarrow \infty \). Therefore (81) follows if we show
This is equivalent to showing that
Note that
and by Lemma 2
where \(m_{\alpha }(u) := \sum _{i=1}^{p+1} \alpha _i \mu _{(i+)}(u) \) and this completes the proof. \(\square \)
Proof of Theorem 6
We prove only part (c), since the other parts follow a similar procedure. Let \(\varXi \) be the random vector
Then we can write
for \(i=1,\ldots , p+1\). We observe that \(\int _\mathbb {R}\big ( {\mathbb {I}}^{d,\lambda _*}_{-}\mu _{(i+)} \big )(s) dB(s)\) is a mean-zero Gaussian random variable with finite variance \(\int _\mathbb {R}\big | {\mathbb {I}}^{d,\lambda _*}_{-}\mu _{(i+)}(s) \big |^2 \ ds\). Using the Itô isometry for Wiener integrals, one sees that \(\varXi \) has the covariance matrix
and consequently \(\varLambda \varXi \) is normally distributed with covariance matrix \(\varLambda \varSigma _0 \varLambda \), which completes the proof of the first part. Next, we have
where \(C = \frac{2}{\varGamma (d) \sqrt{\pi } (2\lambda )^{d-\frac{1}{2}}}\) and we used
for \(\nu > -\frac{1}{2}\) and \(\lambda >0\) in (87). This completes the proof of the second part and of the theorem. \(\square \)
Cite this article
De Brabanter, K., Sabzikar, F. Asymptotic theory for regression models with fractional local to unity root errors. Metrika 84, 997–1024 (2021). https://doi.org/10.1007/s00184-021-00812-7
Keywords
- Tempered linear processes
- Semi-long range dependence
- Non-parametric regression
- Piecewise polynomial regression
- Tempered fractional calculus