Abstract
In many applications, the available data come from a sampling scheme that causes loss of information in terms of left truncation. In some cases, in addition to left truncation, the data are weakly dependent. In this paper we are interested in deriving the asymptotic normality as well as a Berry-Esseen type bound for the kernel density estimator of left truncated and weakly dependent data.
1 Introduction
Let \(\mathcal{P}\) be a population of large, deterministic, and finite size N with elements \(\{ ( Y_{i}, T_{i}); i=1,\ldots,N\}\). In sampling from this population we observe only those pairs for which \(Y_{i} \geq T_{i}\); suppose that at least one pair satisfies this condition. The sample is denoted by \(\{ (Y_{i}, T_{i}); i=1,\ldots,n\}\). This model is called the random left-truncation model (RLTM). We assume that \(\{ Y_{i}; i \geq1 \}\) is a stationary α-mixing sequence of random variables and \(\{ T_{i}; i=1, \ldots,N\}\) is an independent and identically distributed (i.i.d.) sequence of random variables. The definition of a strong mixing sequence is given in Definition 1.
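To make the sampling scheme concrete, here is a minimal simulation sketch. All distributional choices (Y a shifted exponential so that \(a_{G} < a_{F}\), T uniform) are illustrative assumptions, not taken from the paper:

```python
import random

def sample_rltm(n_target, rng):
    """Simulate the random left-truncation model: draw (Y, T) pairs from a
    hypothetical population and keep only those with Y >= T."""
    observed, n_drawn = [], 0
    while len(observed) < n_target:
        y = 1.0 + rng.expovariate(1.0)  # lifetime Y_i, so a_F = 1
        t = rng.uniform(0.0, 2.0)       # truncation time T_i, so a_G = 0 < a_F
        n_drawn += 1
        if y >= t:                      # left truncation: pair is observed
            observed.append((y, t))
    # the observed fraction estimates alpha = P(Y >= T);
    # for these choices alpha = (2 - e^{-1})/2, roughly 0.82
    return observed, len(observed) / n_drawn

sample, alpha_hat = sample_rltm(10000, random.Random(0))
```

The truncation probability α governs how much of the population is ever visible; everything that follows works conditionally on the observed sample.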
Definition 1
Let \({ \{ {{Y_{i}; i \ge1}} \}}\) be a sequence of random variables. The mixing coefficient of this sequence is
where \({\mathcal{F}}_{l}^{m}\) denotes the σ-algebra generated by \(\{ {{Y_{j}}} \}\) for \(l \le j \le m\). This sequence is said to be strong mixing or α-mixing if the mixing coefficient converges to zero as \(m\rightarrow\infty\).
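The displayed formula for the mixing coefficient did not survive extraction; the standard definition, consistent with the surrounding text, is:

```latex
\alpha(m) = \sup_{k \ge 1}\, \sup \bigl\{ \lvert \mathrm{P}(A \cap B) - \mathrm{P}(A)\mathrm{P}(B) \rvert
  : A \in \mathcal{F}_{1}^{k},\ B \in \mathcal{F}_{k+m}^{\infty} \bigr\},
```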
Studying the various aspects of left-truncated data is of high interest due to their applicability in many areas of research. One of these applications is survival analysis: it is well known that in medical research on some diseases, such as AIDS and dementia, the sampling scheme yields samples that are left truncated. This model also arises in astronomy [1].
Strong mixing sequences of random variables occur widely in practice, for instance in the analysis of time series and in renewal theory. A stationary ARMA sequence fulfils the strong mixing condition with an exponentially decaying mixing coefficient. The concept of strong mixing was first introduced by Rosenblatt [2], who presented a central limit theorem for sequences of random variables satisfying this mixing condition.
The Berry-Esseen theorem was stated independently by Berry [3] and Esseen [4]. It specifies the rate at which the distribution of the scaled mean of a random sample converges to the normal distribution, uniformly in the argument of the distribution function. Parzen [5] derived a Berry-Esseen inequality for the kernel density estimator of an i.i.d. sequence of random variables. Several works treat left-truncated observations: we refer to [6], where the distribution of left-truncated data was estimated and asymptotic properties of the estimator were derived, and to further work by Stute [7]. Prakasa Rao [8] presented a Berry-Esseen theorem for the density estimator of a sample that forms a stationary Markov process. Liang and de Uña-Álvarez [9] derived a Berry-Esseen inequality for mixing data that are right censored. Yang and Hu [10] presented Berry-Esseen type bounds for the kernel density estimator based on a φ-mixing sequence of random variables. Asghari et al. [11, 12] presented Berry-Esseen type inequalities for the kernel density estimator, for a left-truncated model and for length-biased data, respectively.
This paper is organized as follows. In Section 2, the needed notation is introduced and some preliminaries are listed. In Section 3, the Berry-Esseen type theorems for the estimator of the density function are presented. In Section 4, the theorems and corollaries of Section 3 are proved.
2 Preliminaries and notation
Suppose that \(Y_{i}\)’s and \(T_{i}\)’s for \(i=1, \ldots,N\) are positive random variables with distributions F and G, respectively. Let the joint distribution function of \(( Y_{1}, T_{1} )\) be
in which \(\alpha=\mathrm{P} ( Y_{1}\geq T_{1} )\).
If the marginal distribution function of \(Y_{i}\) is denoted by \(F^{*}\), we have
so the marginal density function of Y is
A kernel estimator for f is given by
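The displayed estimator is missing here; a form consistent with the weights \(W_{ni}\) used in Section 3 (with α, G, and the bandwidth \(h_{n}\) as in the text) would be:

```latex
f_{n}(y) = \frac{\alpha}{n h_{n}} \sum_{i=1}^{n} \frac{1}{G(Y_{i})}\,
           K\!\left( \frac{y - Y_{i}}{h_{n}} \right).
```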
In many applications the distribution function G of the truncation random variable is unknown, so \(f_{n} ( y )\) is not applicable in these cases and we need an estimator of G. Before stating the estimation details, for any distribution function L on \([0,\infty)\), let \(a_{L}:=\inf\{x>0: L(x)>0\}\) and \(b_{L}:=\sup\{x>0: L(x)<1\}\).
Woodroofe [6] pointed out that F and G can be estimated only if \(a_{G} \leq a_{F}\), \(b_{G} \leq b_{F}\) and \(\int_{{a_{F}}}^{\infty}{\frac {{dF}}{G}} < \infty\). This integrability condition can be replaced by the stronger condition \(a_{G} < a_{F}\). Under this assumption, we use the non-parametric maximum likelihood estimator of G presented by Lynden-Bell [13], denoted by \(G_{n}\),
in which \({S}(y) = \sum_{i = 1}^{n} {I_{ \{ Y_{i} = y \}}} \) and \(C_{n}(s) = \frac{1}{n}\sum_{i = 1}^{n} {{I_{ \{ {{T_{i}} \le s \le{Y_{i}}} \}}}} \).
Using the definition of \(C_{n}\) that is mentioned in the estimation procedure of G and also using the empirical estimators of \(F^{*}\) and \(G^{*}\), which are denoted by \(F_{n}^{*}\) and \(G_{n}^{*}\), we have
It can be seen that \({C_{n}} ( s )\) is the empirical estimator of \(C ( y ) = {G^{*}} ( y ) - {F^{*}} ( y ) = {\alpha^{ - 1}}G ( y ) [ {1 - F ( y )} ]\), \(y \in[{a_{F}}, + \infty)\). This fact gives the following estimator of α:
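The displayed estimator of α is missing; the He-Yang estimator [14], in the form consistent with the identity above (for any y with \(C_{n}(y) > 0\), \(F_{n}\) denoting the Lynden-Bell estimator of F), would read:

```latex
\alpha_{n} = \frac{G_{n}(y)\,[\,1 - F_{n}(y)\,]}{C_{n}(y)}.
```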
For details regarding \(\alpha_{n}\), see [14]. Using \(\alpha_{n}\), we present a more applicable estimator of f, denoted by \(\hat{f}_{n}\) and defined as
Note that in (2.2) the sum is taken over i’s for which \({G_{n}} ( {{Y_{i}}} ) \ne0\).
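As an illustration, here is a minimal sketch of the Lynden-Bell estimator \(G_{n}\) in its usual no-ties product-limit form; this exact form is an assumption (the displayed formula did not survive extraction), and the toy data are hypothetical:

```python
def C_n(ys, ts, s):
    """C_n(s): fraction of observed pairs with T_i <= s <= Y_i."""
    return sum(1 for yi, ti in zip(ys, ts) if ti <= s <= yi) / len(ys)

def lynden_bell_G(ys, ts, y):
    """Product-limit estimator of G, assuming no ties among the T_i:
    G_n(y) = prod over {i: T_i > y} of [1 - 1/(n * C_n(T_i))]."""
    n = len(ys)
    g = 1.0
    for ti in ts:
        if ti > y:
            g *= 1.0 - 1.0 / (n * C_n(ys, ts, ti))
    return g

# toy truncated sample: each Y_i >= T_i by construction
ys = [1.2, 0.9, 2.3, 1.7, 3.1, 1.1, 2.8, 0.8]
ts = [0.5, 0.2, 1.0, 0.3, 0.9, 0.7, 0.1, 0.4]
# evaluate only at y >= a > a_G, the range where G_n converges uniformly [6]
vals = [lynden_bell_G(ys, ts, y) for y in (0.2, 0.5, 1.0, 2.0)]
```

\(G_{n}\) is a nondecreasing step function with values in \((0, 1]\); plugging \(G_{n}\) and \(\alpha_{n}\) into the kernel sum yields \(\hat{f}_{n}\) of (2.2).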
3 Results
Before presenting the main theorems, we need to state some assumptions. Suppose that \(a_{G} < a_{F}\) and \(b_{G} \leq b_{F}\). Woodroofe [6] showed that the uniform convergence of \(G_{n}\) to G holds on \([ {a,{b_{G}}} ]\) for any \(a>a_{G}\); this is why we assume \(a_{G} < a_{F}\). Let \(\mathcal{C}= [ {a,b} ]\) be a compact set such that \(\mathcal{C} \subset [ {{a_{F}},{b_{F}}} )\). As mentioned in the Introduction, \(\{ Y_{i}; i \geq1 \}\) is a stationary α-mixing sequence of random variables with mixing coefficient \(\beta(n)\), and \(\{ T_{i}; i \geq1\}\) is an i.i.d. sequence of random variables.
Definition 2
The kernel function K is a second order kernel function if \(\int _{-\infty}^{\infty}{K ( t )\,dt} = 1\), \(\int_{-\infty}^{\infty} {tK ( t )\,dt} = 0\) and \(\int_{-\infty}^{\infty} {{t^{2}}K ( t )\,dt} > 0\).
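For instance, the Epanechnikov kernel \(K(t)=\tfrac{3}{4}(1-t^{2})\) on \([-1,1]\) satisfies all three conditions; a quick numerical check (the midpoint integration is just an illustrative device):

```python
def K(t):
    """Epanechnikov kernel, supported on [-1, 1]."""
    return 0.75 * (1.0 - t * t) if abs(t) <= 1.0 else 0.0

def integrate(g, a=-1.0, b=1.0, n=100_000):
    """Midpoint rule; accurate enough for these smooth integrands."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

m0 = integrate(K)                       # total mass: should equal 1
m1 = integrate(lambda t: t * K(t))      # first moment: 0 by symmetry
m2 = integrate(lambda t: t * t * K(t))  # second moment: 1/5 > 0
```

This kernel also satisfies A3(i) below (positive, bounded, vanishing for \(\vert t\vert > 1\)).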
Assumptions
- A1:
-
\(\beta(n)=O ( n^{-\lambda} )\) for some \(\lambda> \frac{2+\delta}{\delta}\) in which \(0 < \delta\leq1\).
- A2:
-
For the conditional density of \(Y_{j+1}\) given \(Y_{1}=y_{1}\) (denoted by \(f_{j} ( \cdot|y_{1} )\)), we have \(f_{j} ( y_{2} | y_{1} ) \leq M \) for \(y_{1}\) and \(y_{2}\) in a neighborhood of \(y\in\mathbb {R}\) in which M is a positive constant.
- A3:
-
-
(i)
K is a positive bounded kernel function such that \(K ( t )=0\) for \(\vert t\vert >1\) and \(\int_{ - 1}^{1} {K ( t )\,dt} = 1\).
-
(ii)
K is a second order kernel function.
-
(iii)
f is twice continuously differentiable.
- A4:
-
Let \(p=p_{n}\) and \(q=q_{n}\) be positive integers such that \(p+q \leq n\), and suppose there exists a constant C such that \(\frac{q}{p} \le C\) for n large enough. Also \(p{h_{n}} \to0\) and \(q{h_{n}} \to0\) as \(n \to \infty\).
- A5:
-
\(\{ {{T_{i}};i \ge1} \}\) is a sequence of i.i.d. random variables with common continuous distribution function G, independent of \(\{ {{Y_{i}};i \ge1} \}\).
- H1:
-
The kernel function \(K( \cdot)\) is differentiable and Hölder continuous with exponent \(\beta> 0\).
- H2:
-
\(\beta ( n )=O ( n^{-\lambda} )\) for \(\lambda > \frac{{1 + 5\beta}}{\beta}\) in which \(\beta> \frac{1}{7}\).
- H3:
-
The joint density of \(( Y_{i} , Y_{j} )\), \(f_{ij}^{*}\), exists and we have \(\sup_{u,v} \vert {f_{ij}^{*} ( {u,v} ) - {f^{*}} ( u ){f^{*}} ( v )} \vert \le C < \infty\) for some constant C.
- H4:
-
There exists \(\lambda>5+\frac{1}{\beta}\) such that for the bandwidth \(h_{n}\) we have \(\frac{\log\log n}{nh_{n}^{2}} \to0\) and \(C{n^{\frac{{ ( {3 - \lambda} )\beta}}{{\beta ( {\lambda + 1} ) + 2\beta + 1}} + \eta}} < {h_{n}} < C'{n^{\frac{1}{{1 - \lambda}}}}\), where η is such that \(\frac{1}{{\beta ( {\lambda + 1} ) + 2\beta + 1}} < \eta < \frac{{ ( {\lambda - 3} )\beta}}{{\beta ( {\lambda + 1} ) + 2\beta + 1}} + \frac{1}{{1 - \lambda}}\).
Discussion of the assumptions. A1, A2, and A4 are common in the literature; for example, Zhou and Liang [15] used A2 for a deconvolution estimator of the multivariate density of an α-mixing process. A3(i)-(ii) are commonly used in non-parametric estimation. A3(iii) is needed specifically for a Taylor expansion. H1-H4 are needed in order to apply Theorem 4.1 of [16] in the proof of Theorem 4.
Let \(\sigma_{n}^{2} ( y ) := n{h_{n}}\mathit {Var}[ {{f_{n}} ( y )} ]\), so by letting \(\frac{1}{{\sqrt{n{h_{n}}} }}K ( {\frac{{{Y_{i}} - y}}{{{h_{n}}}}} )\frac{\alpha}{{G ( {{Y_{i}}} )}} =: {W_{ni}}\), we can write
Let \(k = [ {\frac{n}{{p + q}}} ]\), \({k_{m}} = ( {m - 1} ) ( {p + q} ) + 1\) and \({l_{m}} = ( {m - 1} ) ( {p + q} ) + p + 1\), in which \(m=1,2,\ldots,k\). Now we have the following decomposition:
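The displayed decomposition (3.2) did not survive extraction. Consistent with the block indices \(k_{m}\), \(l_{m}\) above and the notation \(S'_{n}\), \(S''_{n}\), \(S'''_{n}\), \(Y'_{nm}\) used in Section 4, the standard big-block/small-block (Bernstein) decomposition of the normalized sum \(S_{n}=\sum_{i=1}^{n} Z_{ni}\) (with \(Z_{ni}\) the centered, standardized summands introduced in Section 4) would read:

```latex
S_{n} = S'_{n} + S''_{n} + S'''_{n}, \qquad
S'_{n} = \sum_{m=1}^{k} Y'_{nm}, \quad
S''_{n} = \sum_{m=1}^{k} Y''_{nm}, \quad
S'''_{n} = Y'''_{nk},
\quad\text{where}\quad
Y'_{nm} = \sum_{i = k_{m}}^{k_{m} + p - 1} Z_{ni}, \quad
Y''_{nm} = \sum_{i = l_{m}}^{l_{m} + q - 1} Z_{ni}, \quad
Y'''_{nk} = \sum_{i = k(p + q) + 1}^{n} Z_{ni}.
```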
in which
From now on, we let \({\sigma^{2}}(y):=\frac{\alpha f (y)}{G(y)}\int_{-1}^{1} {{K^{2}}(t)\,dt} \), \(u(n):=\sum_{j= n}^{\infty}{(\alpha (j))^{\frac{\delta}{\delta + 2}}}\).
Theorem 1
If Assumptions A1-A3(i) and A4 are satisfied and f and G are continuous in a neighborhood of y for \(y \geq a_{F}\), then for large enough n we have
in which
and
Theorem 2
If the assumptions of Theorem 1 and A5 are satisfied, then for \(y \geq a_{F}\) and for large enough n we have
in which \(a_{n}\) is defined in (3.3).
Theorem 3
If the assumptions of Theorem 2 are satisfied, G has bounded first derivative in a neighborhood of y and f has bounded derivative of order 2 in a neighborhood of y for \(y \geq a_{F}\), then for large enough n we have
in which
and \(\lambda''\) and \(\lambda'''\) are defined in (3.4).
Remark 1
In many applications, f and G are unknown and should be estimated, so \({\sigma^{2}} ( y )\) is not applicable in these cases. Here we present an estimator for it that is denoted by \(\hat{\sigma}_{n}^{2} ( y )\) and is defined as follows:
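The displayed estimator is missing; the natural plug-in version of \(\sigma^{2}(y)\), with \(\alpha_{n}\), \(G_{n}\), and \(\hat{f}_{n}\) replacing α, G, and f, would be:

```latex
\hat{\sigma}_{n}^{2}(y) = \frac{\alpha_{n}\,\hat{f}_{n}(y)}{G_{n}(y)}
  \int_{-1}^{1} K^{2}(t)\,dt.
```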
Using this estimator instead of \({\sigma^{2}} ( y )\) in Theorem 3 costs a change in the rate of convergence, which is discussed in the following corollaries.
Corollary 1
Let Assumptions A3, A5 and H1-H4 be satisfied, then for \(y \in\mathcal{C}\)
in which
Theorem 4
Let Assumptions A1-A5 and H1-H4 be satisfied. For \(y \in\mathcal{C}\) and for large enough n we have
in which \(a'_{n}\) is defined in (3.5) and \(c_{n}\) is defined in (3.6).
4 Proofs
Before proving the main theorems, we state some lemmas that are used in the proofs. For the sake of simplicity, let C, \(C'\), and \(C''\) be appropriate positive constants which may take different values at different places.
Lemma 1
[17]
Let X and Y be random variables such that \(E{\vert X \vert ^{r}} < \infty\) and \(E{\vert Y \vert ^{s}} < \infty\) in which r and s are constants such that \(r,s>1\) and \({r^{ - 1}} + {s^{ - 1}} < 1\). Then we have
Lemma 2
Suppose that Assumptions A1-A3(i) and A4 are satisfied. If f and G are continuous in a neighborhood of y for \(y \geq a_{F}\) then \(\sigma_{n}^{2} ( y ) \to{\sigma^{2}} ( y ) \) as \(n \to\infty\). Furthermore, if f and G have bounded first derivatives in a neighborhood of y for \(y\geq a_{F}\), for such y’s we have
in which
Proof
Using the decomposition that is defined in (3.2) we can write
As assumed in the lemma, f and G are continuous in a neighborhood of y so they are bounded in this neighborhood. Now under Assumption A3(i) we have
so it can be concluded that
Lemma 1 for arbitrary \(\delta> 0\), together with the continuity of f in a neighborhood of y, gives
Now, using the notation \(u ( n ) := \sum_{j = n}^{\infty}{ ( {\alpha ( {j} )} )^{\frac{\delta }{{\delta + 2}}}}\) defined above, and A1, we get the following result:
Under Assumption A2 we can write
Now, using (4.4), (4.5), (4.6), and (4.2), we have
By the same argument as used for \(\vert {\mathrm{I}'} \vert \), \(\vert {\mathrm{II}'} \vert \), and \(\vert {\mathrm{III}'} \vert \), it can be concluded that
Now, using (4.8) and (4.9), we have
Similarly
and
So we can write
Gathering all that is obtained above,
and by letting
we have
On the other hand using (4.7), (4.10), and (4.13), we have
So for \(A_{n}\) we can write
On the other hand from (4.3), it can easily be concluded that \(\sum_{i = 1}^{n} {\mathit {Var}( {{W_{ni}}} )} \to{\sigma ^{2}} ( y )\) as \(n \to\infty\). Now under Assumptions A1 and A4 \(\vert {{A_{n}}} \vert \to0\), so \(\sigma_{n}^{2} ( y ) \to {\sigma^{2}} ( y )\). If f and G have bounded first derivatives in a neighborhood of y, we can write
From (4.14) we get the following result:
and the proof is completed. □
Before starting the next lemma, we note that
If we let \(\sum_{i = 1}^{n} {{Z_{ni}}} =: {S_{n}}\), it can be observed that
in which
Lemma 3
Suppose that Assumptions A1-A3(i) and A4 are satisfied and f and G are continuous in a neighborhood of y for \(y\geq a_{F}\). Then for such y’s we have
Proof
With the aid of Lemma 2 we can write
The same argument shows that \(E{ ( {{S'''_{n}}} )^{2}} = O ( {{\lambda'''_{n}}} )\), so we have
and
So the proof is completed. □
In the following let \({H_{n}}: = \sum_{m = 1}^{k} {{X_{nm}}} \) in which \({{X_{nm}}}\), \(m = 1,\ldots,k\), are independent random variables with the same distribution as \({Y'_{nm}}\), \(m = 1,\ldots,k\). φ and \({\varphi'}\) are, respectively, the characteristic functions of \(S'_{n}\) and \(H_{n}\). Also let \(s^{\prime 2}_{n}: = \sum_{m = 1}^{k} {\mathit {Var}( {{X_{nm}}} )}\) and \(s^{2}_{n}: = \sum_{m = 1}^{k} {\mathit {Var}( {{Y'_{nm}}} )} \).
Lemma 4
Under the assumptions of Lemma 3, for \(y\geq a_{F}\) we have the following:
Proof
It can easily be seen that \(s_{n}^{2} = E{ ( {{S'_{n}}} )^{2}} - 2\sum_{1 \le i < j \le k} {\mathit {Cov}( {{Y'_{ni}},{Y'_{nj}}} )} \), \(E ( {S_{n}^{2}} ) = 1\) and
Using (4.25) and Lemma 2, we can write
On the other hand, from Lemma 2 we know that \(\sum_{1 \le i < j \le k} {\mathit {Cov}( {{Y'_{ni}},{Y'_{nj}}} )} = O ( h_{n}^{ - \delta/ ( 2 + \delta )} u ( q ) )\), so substituting this in (4.26) gives the result,
□
Lemma 5
[18]
Let \(\{ {{X_{j}},j \ge1} \}\) be a stationary sequence with mixing coefficient \(\alpha ( k )\) and suppose that \(E (X_{n} )=0\), \(r>2\), and there exist \(\tau> 0\) and \(\lambda> \frac{{r ( {r + \tau} )}}{{2\tau}}\) such that \(\alpha ( n ) = O ( {{n^{ - \lambda}}} )\) and also \(E{\vert {{X_{i}}} \vert ^{r + \tau}} < \infty\). In this case, for any \(\epsilon> 0\), there exists a constant C, for which we have
Lemma 6
Under the assumptions of Lemma 3 for \(y\geq a_{F}\) we have
Proof
Using [19], Theorem 5.7, for \(r>2\) we can write
On the other hand, using Lemma 5 there exists \(\tau> 0\) such that for any \(\epsilon> 0\)
Let \(\epsilon=\delta' \), \(r=2+2\delta'\) for \(0<2\delta'<\delta\) and \(\tau=\delta-2\delta'\) and \(\lambda> \frac{{ ( {1 + \delta'} ) ( {2 + \delta} )}}{{\delta- 2\delta'}}\), so we have
From Lemma 4, \(s_{n}^{2} \to1\), so the proof is completed. □
Lemma 7
[20]
Let \(\{ {{X_{j}},j \ge1} \}\) be a stationary sequence with mixing coefficient \(\alpha ( k )\). Suppose that p and q are positive integers. Let \({T _{l}} = \sum_{j = ( {l - 1} ) ( {p + q} ) + 1}^{ ( {l - 1} ) ( {p + q} ) + p} {{X_{j}}} \) in which \(1\leq l \leq k\). If \(s,r>0\) such that \({s^{ - 1}} + {r^{ - 1}} = 1\), there exists a constant \(C>0\) such that
Lemma 8
Under the assumptions of Lemma 3 for \(y\geq a_{F}\) we have
Proof
By letting \(b=1\) in [19], Theorem 5.3, p.147, for any \(T>0\) we have
Now by letting \(s=r=2\) in Lemma 7, there exists a constant \(C>0\) for which we have
Now using (4.32) and (4.33) we have
so
On the other hand applying Lemma 6 gives
so
By choosing \(T = { ( {\alpha ( q ){\lambda'''_{n}}} )^{-1/4}}\) we get the following result:
and the lemma is proved. □
Lemma 9
[21]
Let X and Y be random variables. For any \(a>0\) we have
Proof of Theorem 1
Using (4.21) and Lemma 9, for any \(a_{1}>0\) and \(a_{2}>0\) we can write
By choosing \({a_{1}} = {{\lambda''_{n}}^{1/3}}\) and \({a_{2}} = {\lambda _{n}^{\prime\prime\prime1/3}}\) and using Lemma 3, we have
On the other hand using Lemmas 8, 4, and 6 we have
So the proof is completed. □
Proof of Theorem 2
According to Lemma 9 for any \(a>0\) we can write
and
From Lemma 5.2 of [16] we have
and from [22] we have
So we can write
Now by choosing \(a = { ( {{h_{n}}\log\log n} )^{1/4}}\) and using Theorem 1 we get the result
□
Proof of Theorem 3
By the triangle inequality and using Lemma 1 for
we have
Here we used the fact that the event \(\frac{{\sqrt{n{h_{n}}} }}{{\sigma ( y )}}\vert {E{f_{n}} ( y ) - f ( y )} \vert > a\) does not occur for the chosen a.
From the inequality \(\sup_{y} \vert {\Phi ( {\eta y} ) - \Phi ( y )} \vert \leq\frac{1}{{e\sqrt{2\pi } }} ( {\vert {\eta - 1} \vert + \vert {{\eta^{ - 1}} - 1} \vert } )\), it can be concluded that
Under Assumptions A3(ii) and A3(iii), use of the Taylor expansion yields
So from (4.48), (4.49), (4.50), Theorem 2, and Lemma 2, we have
□
Proof of Corollary 1
Using the triangle inequality, it can be seen that
Under Assumptions A3, A5, and H1-H4, Theorem 4.1 of [16] yields
From (4.45) and (4.52) we have
Using (4.53) and (4.52) in (4.51) proves the corollary. □
Proof of Theorem 4
Using the triangle inequality we can write
By Assumptions A1-A3(i), A4 and A5, Theorem 3 results in the following:
in which \(a'_{n}\) is defined in Theorem 3.
Under Assumptions A3, A5, and H1-H4, Corollary 1 results in the following:
Substituting (4.55) and (4.56) in (4.54) proves the theorem. □
5 Conclusions
In this paper we obtained Berry-Esseen type bounds for the kernel density estimator based on left-truncated, strongly mixing data. We conclude that in the RLTM with weak dependence, asymptotic normality still holds, but compared with the results of [11] the convergence rates become considerably more complicated and slower.
References
Segal, IE: Observational validation of the chronometric cosmology: I. Preliminaries and the redshift-magnitude relation. Proc. Natl. Acad. Sci. USA 72(7), 2473-2477 (1975)
Rosenblatt, M: A central limit theorem and a strong mixing condition. Proc. Natl. Acad. Sci. USA 42, 43-47 (1956)
Berry, AE: The accuracy of the Gaussian approximation to the sum of independent variates. Trans. Am. Math. Soc. 49(1), 122-136 (1941)
Esseen, CG: On the Liapounoff limit of error in the theory of probability. Ark. Mat. Astron. Fys. 28A(9), 1-19 (1942)
Parzen, E: On estimation of a probability density function and mode. Ann. Math. Stat. 33, 1065-1076 (1962)
Woodroofe, M: Estimating a distribution function with truncated data. Ann. Stat. 13, 163-177 (1985)
Stute, W: Almost sure representation of the product limit estimator of truncated data. Ann. Stat. 21, 146-156 (1993)
Prakasa Rao, BLS: Berry-Esseen type bound for density estimation of stationary Markov processes. Bull. Math. Stat. 17, 15-21 (1977)
Liang, HY, de Uña-Álvarez, J: A Berry-Esseen type bound in kernel density estimation for strong mixing censored samples. J. Multivar. Anal. 100, 1219-1231 (2009)
Yang, W, Hu, S: The Berry-Esseen bounds for kernel density estimator under dependent sample. J. Inequal. Appl. 2012, 287 (2012)
Asghari, P, Fakoor, V, Sarmad, M: A Berry-Esseen type bound in kernel density estimation for a random left truncation model. Commun. Stat. Appl. Methods 21(1), 115-124 (2014)
Asghari, P, Fakoor, V, Sarmad, M: A Berry-Esseen type bound for the kernel density estimator of length-biased data. J. Sci. Islam. Repub. Iran 26(3), 256-272 (2015)
Lynden-Bell, D: A method of allowing for known observational selection in small samples applied to 3CR quasars. Mon. Not. R. Astron. Soc. 155, 95-118 (1971)
He, S, Yang, GL: Estimation of the truncation probability in the random truncation model. Ann. Stat. 26, 1011-1028 (1998)
Zhou, Y, Liang, H: Asymptotic normality for \(L_{1}\) norm kernel estimator of conditional median under α-mixing dependence. J. Multivar. Anal. 73, 136-154 (2000)
Ould-Saïd, E, Tatachak, A: Strong consistency rate for the kernel mode estimator under strong mixing hypothesis and left truncation. Commun. Stat., Theory Methods 38, 1154-1169 (2009)
Hall, P, Heyde, CC: Martingale Limit Theory and Its Application. Academic Press, New York (1980)
Yang, SC: Maximal moment inequality for partial sums of strong mixing sequences and application. Acta Math. Sin. Engl. Ser. 23, 1013-1024 (2007)
Petrov, VV: Limit Theorems of Probability Theory. Oxford University Press, New York (1995)
Yang, SC, Li, YM: Uniformly asymptotic normality of the regression weighted estimator for strong mixing samples. Acta Math. Sin. 49(5), 1163-1170 (2006)
Chang, MN, Rao, PV: Berry-Esseen bound for the Kaplan-Meier estimator. Commun. Stat., Theory Methods 18(12), 4647-4664 (1989)
Gu, MG, Lai, TL: Functional laws of the iterated logarithm for the product-limit estimator of a distribution function under random censorship or truncation. Ann. Probab. 18(1), 160-189 (1990)
Acknowledgements
The authors would like to sincerely thank the anonymous referees for their careful reading of the manuscript.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Asghari, P., Fakoor, V. A Berry-Esseen type bound for the kernel density estimator based on a weakly dependent and randomly left truncated data. J Inequal Appl 2017, 1 (2017). https://doi.org/10.1186/s13660-016-1272-0