
On mean derivative estimation of longitudinal and functional data: from sparse to dense

Regular Article, published in Statistical Papers.

Abstract

Derivative estimation of the mean of longitudinal and functional data is useful because it provides a quantitative measure of changes in the mean function that can be used for modeling the data. We propose a general method for estimation of the derivative of the mean function that allows us to make inference about both longitudinal and functional data regardless of the sparsity of the data. The \(L^2\) and uniform convergence rates of the local linear estimator for the true derivative of the mean function are derived. Then the optimal weighting scheme under the \(L^2\) rate of convergence is obtained. The performance of the proposed method is evaluated by a simulation study and compared with an existing method. The method is used to analyse a real data set on weight growth failure in children.
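The weighted local linear estimator underlying this approach can be sketched numerically. The following is a minimal illustration only, not the authors' implementation: it assumes an Epanechnikov kernel, and the function and variable names (`local_linear_derivative`, `ts`, `ys`) are ours. The subject weights \(\omega _i\) are assumed to satisfy \(\sum _{i=1}^n N_i \omega _i = 1\), as in the paper.

```python
import numpy as np

def local_linear_derivative(ts, ys, weights, t0, h):
    """Weighted local linear estimate of the mean derivative mu'(t0).

    ts, ys  : lists of per-subject arrays of time points t_ij and responses Y_ij
    weights : per-subject weights omega_i with sum_i N_i * omega_i == 1
    t0, h   : evaluation point and bandwidth
    Returns the local slope, which estimates mu'(t0).
    """
    # Epanechnikov kernel, supported on [-1, 1]
    K = lambda u: 0.75 * np.maximum(1.0 - u**2, 0.0)

    S = np.zeros((2, 2))   # weighted cross-products of the local design
    R = np.zeros(2)        # weighted responses
    for t_i, y_i, w_i in zip(ts, ys, weights):
        u = (t_i - t0) / h
        k = w_i * K(u) / h
        for r in range(2):
            R[r] += np.sum(k * u**r * y_i)
            for s in range(2):
                S[r, s] += np.sum(k * u**(r + s))
    beta = np.linalg.solve(S, R)   # beta = (mu(t0), h * mu'(t0))
    return beta[1] / h
```

With the equal-per-observation scheme \(\omega _i = 1/\sum _{i=1}^n N_i\), this reduces to pooling all observations into a single local regression.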


References

  • Benko M, Härdle W, Kneip A (2009) Common functional principal components. Ann Stat 37:1–34

  • Cao G, Wang J, Wang L, Todem D (2012) Spline confidence bands for functional derivatives. J Stat Plan Inference 142(6):1557–1570

  • Cao H, Liu W, Zhou Z (2018) Simultaneous nonparametric regression analysis of sparse longitudinal data. Bernoulli 24(4A):3013–3038

  • Chen Y, Yao W (2017) Unified inference for sparse and dense longitudinal data in time-varying coefficient models. Scand J Stat 44:268–284

  • Dai W, Tong T, Genton M (2016) Optimal estimation of derivatives in nonparametric regression. J Mach Learn Res 17:1–25

  • Dai X, Müller H-G, Tao W (2018) Derivative principal component analysis for representing the time dynamics of longitudinal and functional data. Stat Sin 28:1583–1609

  • Ebrahimzadeh F, Hajizadeh E, Baghestani AR, Nazer MR (2018) Effective factors on the rate of growth failure in children below two years of age: a recurrent events model. Iran J Public Health 47(3):418–426

  • Fan J, Gijbels I (1996) Local polynomial modelling and its applications. CRC Press, Boca Raton

  • Fan J, Zhang WY (2000) Simultaneous confidence bands and hypothesis testing in varying-coefficient models. Scand J Stat 27:715–731

  • Grith M, Wagner H, Härdle WK, Kneip A (2018) Functional principal component analysis for derivatives of multivariate curves. Stat Sin 28:2469–2496

  • Hall P, Müller H-G, Yao F (2009) Estimation of functional derivatives. Ann Stat 37(6A):3307–3329

  • Horváth L, Kokoszka P (2012) Inference for functional data with applications. Springer, New York

  • Hosseinioun N, Doosti H, Nirumand HA (2012) Nonparametric estimation of the derivatives of a density by the method of wavelet for mixing sequences. Stat Pap 53(1):195–203

  • Hsing T, Eubank R (2015) Theoretical foundations of functional data analysis, with an introduction to linear operators. Wiley, New York

  • Kara-Zaitri L, Laksaci A, Rachdi M, Vieu P (2017) Data-driven kNN estimation in nonparametric functional data analysis. J Multivar Anal 153:176–188

  • Kokoszka P, Reimherr M (2017) Introduction to functional data analysis. Chapman & Hall/CRC Press, Boca Raton

  • Li Y, Hsing T (2010) Uniform convergence rates for nonparametric regression and principal component analysis in functional/longitudinal data. Ann Stat 38:3321–3351

  • Lima IR, Cao G, Billor N (2018) M-based simultaneous inference for the mean function of functional data. Ann Inst Stat Math. https://doi.org/10.1007/s10463-018-0656-y

  • Liu B, Müller H-G (2009) Estimating derivatives for samples of sparsely observed functions, with application to on-line auction dynamics. J Am Stat Assoc 104:704–714

  • Liu W, Lu X (2011) Empirical likelihood for density-weighted average derivatives. Stat Pap 52(2):391–412

  • Pini A, Spreafico L, Vantini S, Vietti A (2019) Multi-aspect local inference for functional data: analysis of ultrasound tongue profiles. J Multivar Anal 170:162–185

  • Poyton AA, Varziri MS, McAuley KB, McLellan PJ, Ramsay JO (2006) Parameter estimation in continuous-time dynamic models using principal differential analysis. Comput Chem Eng 30(4):698–708

  • Ramsay JO, Silverman B (2002) Applied functional data analysis: methods and case studies. Springer, New York

  • Ramsay JO, Silverman B (2005) Functional data analysis. Springer, New York

  • Rossi F, Villa-Vialaneix N (2011) Consistency of functional learning methods based on derivatives. Pattern Recogn Lett 32(8):1197–1209

  • Simpkin AJ, Durban M, Lawlor DA, MacDonald-Wallis C, May MT, Metcalfe C, Tilling K (2018) Derivative estimation for longitudinal data analysis: examining features of blood pressure measured repeatedly during pregnancy. Stat Med 37:2836–2854

  • Srivastava A, Klassen E, Joshi S, Jermyn I (2011) Shape analysis of elastic curves in Euclidean spaces. IEEE Trans Pattern Anal Mach Intell 33(7):1415–1428

  • Wang H, Zhong P-S, Cui Y, Li Y (2018) Unified empirical likelihood ratio tests for functional concurrent linear models and the phase transition from sparse to dense functional data. J R Stat Soc Ser B 80(2):343–364

  • Xiao J, Li X, Shi J (2019) Local linear smoothers using inverse Gaussian regression. Stat Pap 60:1225–1253

  • Zhang J-T, Chen J (2007) Statistical inferences for functional data. Ann Stat 35:1052–1079

  • Zhang X, Wang J-L (2016) From sparse to dense functional data and beyond. Ann Stat 44:2281–2321

  • Zhang X, Wang J-L (2018) Optimal weighting schemes for longitudinal and functional data. Stat Probab Lett 138:165–170

  • Zheng S, Yang L, Härdle W (2014) A smooth simultaneous confidence corridor for the mean of sparse functional data. J Am Stat Assoc 109:661–673

  • Zhou L, Lin H, Liang H (2018) Efficient estimation of the nonparametric mean and covariance functions for longitudinal and sparse functional data. J Am Stat Assoc 113:1550–1564


Acknowledgements

We thank Dr. Farzad Ebrahimzadeh for providing the children weight growth failure data set. Moreover, the support and resources from the Center for High Performance Computing at Shahid Beheshti University, Iran, are gratefully acknowledged. We also thank the Associate Editor and two referees for providing constructive comments.


Corresponding author

Correspondence to S. Mohammad E. Hosseini-Nasab.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

1.1 Proof of Lemma 1

Proof

Denote by \(\chi (\eta _n)\) the equidistant partition of [0, 1] with grid length \(\eta _n \equiv \left( \sum _{i=1}^n N_i \omega _i^2 \log (1/\sum _{i=1}^n N_i \omega _i^2 )\right) ^{2+r}\). Then

$$\begin{aligned} \sup _{t \in [0,1]} \vert S_{r}(t) - ES_{r}(t) \vert \le \sup _{t \in \chi (\eta _n)} \vert S_{r}(t) - ES_{r}(t) \vert +D_1 +D_2,\; \; r=0,1,\ldots ,4, \end{aligned}$$
(13)

where

$$\begin{aligned} D_1 = \sup _{\vert s-t \vert< \eta _n} \vert S_{r}(t) - S_{r}(s) \vert ,\;\; D_2 = \sup _{\vert s-t \vert < \eta _n } \vert ES_{r}(t) - ES_{r}(s) \vert . \end{aligned}$$

For the second term on the right-hand side of Eq. (13), we deduce that

$$\begin{aligned} D_1&\le \sup _{\vert s-t \vert< \eta _n} \sum _{i=1}^n \omega _i \sum _{j=1}^{N_i} \frac{1}{h}\vert K\left( \frac{t_{ij}-t}{h}\right) \left( \frac{t_{ij} - t}{h}\right) ^r - K\left( \frac{t_{ij}-s}{h}\right) \left( \frac{t_{ij} - s}{h}\right) ^r \vert \\&\le \sup _{\vert s-t \vert< \eta _n} \sum _{i=1}^n \omega _i \sum _{j=1}^{N_i} \frac{1}{h} \vert \sum _{l=1}^{r}\displaystyle {r\atopwithdelims ()l}K\left( \frac{t_{ij}-t}{h}\right) \left( \frac{t_{ij} - s}{h}\right) ^{r-l}\left( \frac{s-t}{h}\right) ^{l}\\&\quad -K\left( \frac{t_{ij}-s}{h}\right) \left( \frac{t_{ij} - s}{h}\right) ^r \vert \\&\quad +\sup _{\vert s-t \vert < \eta _n} \sum _{i=1}^n \omega _i \sum _{j=1}^{N_i} \frac{1}{h}\vert K\left( \frac{t_{ij}-t}{h}\right) \left( \frac{t_{ij} -s}{h}\right) ^r - K\left( \frac{t_{ij}-s}{h}\right) \left( \frac{t_{ij} - s}{h}\right) ^r \vert \\&= B_{1} +B_{2}, \end{aligned}$$

where

$$\begin{aligned} B_{1}= & {} \sup \limits _{\vert s-t \vert< \eta _n} \sum _{i=1}^n \omega _i \sum _{j=1}^{N_i} \frac{1}{h}\vert K\left( \frac{t_{ij}-t}{h}\right) \left( \frac{t_{ij} -s}{h}\right) ^r - K\left( \frac{t_{ij}-s}{h}\right) \left( \frac{t_{ij} - s}{h}\right) ^r \vert , \\ B_{2}= & {} \sup _{\vert s-t \vert < \eta _n} \sum _{i=1}^n \omega _i \sum _{j=1}^{N_i} \frac{1}{h} \vert \sum _{l=1}^{r}\displaystyle {r\atopwithdelims ()l}K\left( \frac{t_{ij}-t}{h}\right) \left( \frac{t_{ij} - s}{h}\right) ^{r-l}\left( \frac{s-t}{h}\right) ^{l}\\&-K\left( \frac{t_{ij}-s}{h}\right) \left( \frac{t_{ij} - s}{h}\right) ^r \vert . \end{aligned}$$

Note that Assumption (A1) implies that \(K(.) \le M_K\) for some constant \(M_K\); therefore

$$\begin{aligned} B_{1}&\le \sup _{\vert s-t \vert < \eta _n} \sum _{i=1}^n \omega _i \sum _{j=1}^{N_i} \frac{1}{h}\vert \frac{t_{ij} -s}{h}\vert ^{r}M_K\vert \frac{s -t}{h}\vert \le \frac{ \eta _n}{h^{r+2}}M_K\nonumber \\&=o(a_{n}). \end{aligned}$$
(14)

Using the fact that if for some i and j, \(\vert \frac{t_{ij}-s}{h}\vert >1 \) then \(K\big (\frac{t_{ij}-s}{h}\big )=0 \), we have

$$\begin{aligned} B_{21}&= \sup _{\vert s-t \vert< \eta _n} \sum _{i=1}^n \omega _i \sum _{j=1}^{N_i} \frac{1}{h} \left| \sum _{l=1}^{r} \left( \displaystyle {r \atopwithdelims ()l}K\left( \frac{t_{ij}-t}{h}\right) \left( \frac{t_{ij} - s}{h}\right) ^{r-l}\left( \frac{s-t}{h}\right) ^{l}-\right. \right. \nonumber \\&\quad \left. \left. K\left( \frac{t_{ij}-s}{h}\right) \left( \frac{t_{ij} - s}{h}\right) ^r \right) I_{\lbrace \vert \frac{t_{ij}-s}{h}\vert >1 \rbrace } \right| \nonumber \\&\le \sup _{\vert s-t \vert< \eta _n} \sum _{i=1}^n \omega _i \sum _{j=1}^{N_i} \frac{1}{h} \left| \sum _{l=1}^{r}\displaystyle {r\atopwithdelims ()l} K\left( \frac{t_{ij}-t}{h}\right) \left( \frac{t_{ij} - s}{h}\right) ^{r-l}\left( \frac{s-t}{h}\right) ^{l} \right| \nonumber \\&\le \sup _{\vert s-t \vert < \eta _n} \sum _{i=1}^n \omega _i \sum _{j=1}^{N_i}\sum _{l=1}^{r} \frac{1}{h} \displaystyle {r\atopwithdelims ()l}\vert s-t\vert ^{l} \nonumber \\&\le \sum _{l=1}^{r}\displaystyle {r\atopwithdelims ()l}\frac{\eta _n^{l}}{h^{r+1}}\nonumber \\&=o(a_{n}), \end{aligned}$$
(15)

where \(I_{\lbrace .\rbrace }\) is the indicator function. Otherwise, if for some i and j, \( \vert \frac{t_{ij}-s}{h} \vert \le 1\), we can show that

$$\begin{aligned} B_{22}&= \sup _{\vert s-t \vert< \eta _n} \sum _{i=1}^n \omega _i \sum _{j=1}^{N_i} \frac{1}{h} \left| \sum _{l=1}^{r} \left( \displaystyle {r \atopwithdelims ()l}K\left( \frac{t_{ij}-t}{h}\right) \left( \frac{t_{ij} - s}{h}\right) ^{r-l}\left( \frac{s-t}{h}\right) ^{l} \right. \right. \nonumber \\&\quad \left. \left. -K\left( \frac{t_{ij}-s}{h}\right) \left( \frac{t_{ij} - s}{h}\right) ^r \right) I_{\lbrace \vert \frac{t_{ij}-s}{h}\vert \le 1 \rbrace } \right| \nonumber \\&\le \sup _{\vert s-t \vert< \eta _n} \sum _{i=1}^n \omega _i \sum _{j=1}^{N_i} \frac{1}{h} \vert \sum _{l=1}^{r}\displaystyle {r\atopwithdelims ()l}K\left( \frac{t_{ij}-t}{h}\right) \left( \frac{t_{ij} - s}{h}\right) ^{r-l}\left( \frac{s-t}{h}\right) ^{l}\nonumber \\&\quad -K\left( \frac{t_{ij}-s}{h}\right) \vert \nonumber \\&\le \frac{M_K}{h}\sup _{\vert s-t \vert < \eta _n}\bigg \vert \frac{s-t}{h}\bigg \vert \nonumber \\&=o(a_{n}). \end{aligned}$$
(16)

Hence \(B_{2}\le B_{21}+B_{22}=o(a_{n})\) follows directly from (15) and (16). For \( D_2 \) in (13), Jensen’s inequality implies that \( D_2=o(a_{n}) \). For the first term on the right-hand side of Eq. (13), define

$$\begin{aligned} v_{ij}=\left( h\frac{\omega _{i}^{2} \log \left( \frac{1}{\sum _{i=1}^{n}N_{i}\omega _{i}^{2}}\right) }{\sum _{i=1}^{n}N_{i}\omega _{i}^{2}}\right) ^{\frac{1}{2}}\frac{1}{h}K\left( \frac{t_{ij}-t}{h}\right) \left( \frac{t_{ij}-t}{h}\right) ^{r} , \end{aligned}$$

then we can write for any \(M >0\),

$$\begin{aligned}&P(S_{r}(t)-E(S_{r}(t))>Ma_{n})\\&\quad =P\left( \sum _{i=1}^n \omega _i \sum _{j=1}^{N_i} \frac{1}{h}\left( K\left( \frac{t_{ij}-t}{h}\right) \left( \frac{t_{ij} - t}{h}\right) ^{r}\right. \right. \\&\qquad \left. \left. - E\left( K\left( \frac{t_{ij}-t}{h}\right) \left( \frac{t_{ij} - t}{h}\right) ^{r}\right) \right)>Ma_{n}\right) \\&\quad \le P\left( \sum _{i=1}^n \sum _{j=1}^{N_i}(v_{ij}-E(v_{ij}))>M \log \left( \frac{1}{\sum _{i=1}^{n}N_{i}\omega _{i}^{2}}\right) \right) . \end{aligned}$$

By Markov’s inequality and the independence of the \( v_{ij} \), we deduce that

$$\begin{aligned} P(S_{r}(t)-E(S_{r}(t))>Ma_{n})&\le \frac{\prod _{i=1}^{n}\prod _{j=1}^{N_{i}}E(e^{v_{ij}-E(v_{ij})})}{\left( \frac{1}{\sum _{i=1}^{n}N_{i}\omega _{i}^{2}}\right) ^M}\\&\le \left( \sum _{i=1}^{n}N_{i}\omega _{i}^{2}\right) ^{M-C} , \end{aligned}$$

for some constant C. Therefore, for a large M we have

$$\begin{aligned} P\left( \sup _{t \in \chi (\eta _n)} \vert S_{r}(t) - ES_{r}(t) \vert >Ma_{n}\right)&\le 2 \left( \sum _{i=1}^{n}N_{i}\omega _{i}^{2}\right) ^{M-C-r-2}\log \left( \frac{1}{\sum _{i=1}^{n}N_{i}\omega _{i}^{2}}\right) ^{-(r+2)}\nonumber \\&=o(a_{n}) ,\; \; r=0,1,\ldots ,4. \end{aligned}$$
(17)

The proof is complete by combining (13)–(17). \(\square \)

1.2 Proof of Theorem 3.1

Proof

Combining Eq. (9) and Assumptions (A1)–(A3), we conclude that

$$\begin{aligned} \widehat{\mu ^{\prime }}(t)-\mu ^{\prime }(t) =\frac{\frac{f(t)}{h\mu ^2_k}R_1 - f^{\prime } (t) R_0}{f^2 (t)-h^2 f^{\prime 2}(t)\mu _k^2}+ O_p({h^2} + h a_n) . \end{aligned}$$
(18)

From Assumptions (A1) and (A2), we have \(f^2 (t)>0\) and \(h^2 f^{\prime 2}(t)\mu _k^2=O(h^2)\). Therefore, under Assumption (A5), \(f^2 (t)-h^2 f^{\prime 2}(t)\mu _k^2\) tends to a positive constant that is bounded away from zero with probability approaching one as \(h \rightarrow 0\), so we have

$$\begin{aligned} E&\int { \left( \frac{\frac{f(t)}{h\mu ^2_k} R_1 - f^{\prime } (t) R_0}{f^2 (t)-h^2 f^{\prime 2}(t)\mu _k^2} \right) ^2 }dt \\&= E \int \left[ \left( \frac{ f(t) }{h^2 \mu ^2_k( f^2 (t) - h^2 f^{\prime 2} (t) \mu _k^2 )} \right) \sum _{i=1}^{n} \omega _i \sum _{j=1}^{N_i} K\left( \frac{t_{ij}-t}{h}\right) \left( \frac{t_{ij}-t}{h} \right) \delta _{ij} \right] ^2 dt \\&\quad +E \int \left[ \left( \frac{ f^{\prime } (t) }{h( f^2 (t) - h^2 f^{\prime 2} (t) \mu _k^2 )} \right) \sum _{i=1}^{n} \omega _i \sum _{j=1}^{N_i} K\left( \frac{t_{ij}-t}{h}\right) \delta _{ij} \right] ^2 dt \\&\le \left( \frac{M_1}{h^4} + \frac{M_2}{h^2}\right) \sum _{i=1}^{n} \omega _i^2 N_i \int E \left[ K^2 \left( \frac{s-t}{h}\right) \left( \gamma (s,s)+\sigma ^2\right) \right] dt\\&\quad + \left( \frac{M_1}{h^4} + \frac{M_2}{h^2}\right) \sum _{i=1}^{n} \omega _i^2 N_i(N_i-1) \int E \left[ K\left( \frac{s_1-t}{h}\right) K \left( \frac{s_2-t}{h}\right) \gamma (s_1,s_2)\right] dt \\&\le \left( \frac{M_3}{h^4} + \frac{M_4}{h^2}\right) \left( \sum _{i=1}^{n} \omega _i^2 N_i h + \sum _{i=1}^{n} \omega _i^2 N_i(N_i-1)h^2 \right) , \end{aligned}$$

where \(M_{1}\), \(M_2\), \(M_{3}\), and \(M_4\) are some constants. For any \(M>0\), by Markov’s inequality,

$$\begin{aligned}&P\left( \Vert \frac{\frac{f(t)}{h\mu ^2_k} R_1 - f^{\prime } (t) R_0}{f^2 (t)-h^2 f^{\prime 2}(t)\mu _k^2} \Vert _2 > M b_n \right) \nonumber \\&\quad \le \frac{ \left( \frac{M_3}{h^4} + \frac{M_4}{h^2}\right) \left( \sum _{i=1}^{n} \omega _i^2 N_i h + \sum _{i=1}^{n} \omega _i^2 N_i(N_i-1)h^2 \right) }{M^2 b_n^2} . \end{aligned}$$
(19)

Therefore, by Assumption (A5), the right-hand side of (19) tends to zero as \(M \rightarrow \infty \). Combining (18) and (19) completes the proof. \(\square \)

1.3 Proof of Theorem 3.2

Proof

We can write

$$\begin{aligned}&\frac{f(t)}{h \mu _k^2} R_1 -f^{\prime } (t) R_0 \\&\quad = \sum _{i=1}^n \omega _i \sum _{j=1}^{N_i} K_h (t_{ij} -t) \delta _{ij} \left( -f^{\prime }(t) + \frac{f(t)}{h\mu _k^2}\left( \frac{t_{ij} -t}{h}\right) \right) . \end{aligned}$$

By Lemma 5 in Zhang and Wang (2016), we deduce that

$$\begin{aligned}&\sup _{t\in [0,1] }\vert \sum _{i=1}^n \omega _i \sum _{j=1}^{N_i} K_h (t_{ij} -t) \delta _{ij} \vert \\&\quad =O_p \left( \frac{ \log {n } \left[ \sum _{i=1}^n N_i \omega _i^2 h + \sum _{i=1}^n N_i (N_i -1)\omega _i^2 h^2 \right] }{h} \right) . \end{aligned}$$

Using the fact that \(f^{\prime }(t)\), \(f(t)\) and \(\mu _k^2\) are bounded, and that \(K_h(t_{ij}-t) =0\) if \(\vert t_{ij} -t \vert > h\), we have

$$\begin{aligned} \sup _{t \in [0,1]} \vert \widehat{\mu ^{\prime }} (t) - \mu ^{\prime }(t) \vert = O_p \left( \frac{ \log {n } \left[ \sum _{i=1}^n N_i \omega _i^2 h + \sum _{i=1}^n N_i (N_i -1)\omega _i^2 h^2 \right] }{h^2} + h^2\right) . \end{aligned}$$

\(\square \)
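The variance component \(\sum _{i=1}^n N_i \omega _i^2 \left( 1/h^3 + (N_i-1)/h^2 \right) \) driving these rates can be compared numerically across weighting schemes. The sketch below is our own illustration, not the authors' code; the names `w_obs` and `w_subj` are ours, for the two standard schemes of equal weight per observation and equal weight per subject.

```python
import numpy as np

def l2_rate_sq(N, h):
    """Evaluate the squared variance-rate expression
    sum_i N_i * omega_i^2 * (1/h^3 + (N_i - 1)/h^2)
    under the two standard weighting schemes (the h^4 bias term is omitted)."""
    N = np.asarray(N, dtype=float)
    n = len(N)
    w_obs = np.full(n, 1.0 / N.sum())   # equal weight per observation
    w_subj = 1.0 / (n * N)              # equal weight per subject
    term = lambda w: np.sum(N * w**2 * (1.0 / h**3 + (N - 1.0) / h**2))
    return term(w_obs), term(w_subj)
```

When all subjects have the same number of observations the two schemes coincide; with unbalanced \(N_i\) they generally differ, which is what motivates the optimal weights of Theorem 3.3.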

1.4 Proof of Theorem 3.3

Proof

For a fixed bandwidth \(h\), minimizing the \(L^2\) rate of convergence is equivalent to minimizing \(b_n\) defined in Theorem 3.1. Applying the method of Lagrange multipliers under the constraint \(\sum _{i=1}^n N_i \omega _i =1\), we obtain the unique minimizer

$$\begin{aligned} \omega _i^{opt} = \frac{1/h^3 + (N_i -1)/h^2}{\sum _{i=1}^n N_i( 1/h^3 + (N_i -1)/h^2)} . \end{aligned}$$

\(\square \)
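The optimal weights above are straightforward to compute. A minimal sketch (our code, not from the paper):

```python
import numpy as np

def optimal_weights(N, h):
    """Optimal subject weights of Theorem 3.3:
    omega_i proportional to 1/h^3 + (N_i - 1)/h^2,
    normalised so that sum_i N_i * omega_i = 1."""
    N = np.asarray(N, dtype=float)
    g = 1.0 / h**3 + (N - 1.0) / h**2
    return g / np.sum(N * g)
```

As \(h \rightarrow 0\) the \(1/h^3\) term dominates and \(\omega _i^{opt}\) approaches the equal-per-observation weights \(1/\sum _{i=1}^n N_i\); for larger \(N_i h\) the \((N_i-1)/h^2\) term shifts relative weight toward subjects with more observations.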


Cite this article

Sharghi Ghale-Joogh, H., Hosseini-Nasab, S.M.E. On mean derivative estimation of longitudinal and functional data: from sparse to dense. Stat Papers 62, 2047–2066 (2021). https://doi.org/10.1007/s00362-020-01173-5

