
An adaptive estimation for covariate-adjusted nonparametric regression model

Regular Article, Statistical Papers

Abstract

For the covariate-adjusted nonparametric regression model, an adaptive method is proposed for estimating the nonparametric regression function. Compared with the procedures introduced in the existing literature, the new method requires less strict conditions and adapts to covariate-adjusted nonparametric regression with asymmetric variables. More specifically, when the distributions of the variables are asymmetric, the new procedure yields more efficient estimators and recovers the data more accurately by carefully choosing proper weights; in the symmetric case, the new estimators attain the same asymptotic properties as those obtained by the existing method by taking equal bandwidths and weights. Simulation studies are carried out to examine the performance of the new method in finite-sample situations, and the Boston Housing data are analyzed as an illustration.



Acknowledgements

The research was supported by NNSF Project (U1404104, 11501522, 11601283 and 11571204) of China, China Social Science Fund 18BTJ021 and a Natural Science Project of Zhengzhou University.

Author information


Corresponding author

Correspondence to Lu Lin.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

In this Appendix, we first present some necessary lemmas, then give the technical proofs of the theorems.

Lemma 1

Assume that conditions C1–C6 hold. Then, for \(\ell =0,1,2,\) we have

$$\begin{aligned} \sup _U\left| \hat{\psi }_0^{+(\ell )}(U)-\psi _0^{+(\ell )}(U)\right| =O_p\{\delta _{\ell }(h_1)\}, \end{aligned}$$
(13)
$$\begin{aligned} \sup _U\left| \hat{\psi }_0^{-(\ell )}(U)-\psi _0^{-(\ell )}(U)\right| =O_p\{\delta _{\ell }(h_2)\}, \end{aligned}$$
(14)

where \(\delta _{\ell }(h_i)=h_i^2+(nh_i^{2\ell +1})^{-1/2}(\log n)^{1/2}.\)

Lemma 1 can be obtained from Eq. (7.6) of Delaigle et al. (2016); for details, see Masry (1996) and Hansen (2008). Similar results also hold for \(\hat{\phi }_0^{+(\ell )}\) and \(\hat{\phi }_0^{-(\ell )}.\)

For the recovered data \( \hat{Y}_i=\widetilde{Y}_i/{\hat{\psi }}(U_i), \, {\hat{X}}_i=\widetilde{X}_i/{\hat{\phi }}(U_i), i=1,2,\ldots ,n, \) let \({\hat{w}}_Y(U_i)=\psi (U_i)/{\hat{\psi }}(U_i)\) and \({\hat{w}}_X(U_i)=\phi (U_i)/{\hat{\phi }}(U_i);\) then the recovered data can be written as \(\hat{Y}_i={\hat{w}}_Y(U_i)Y_i\) and \(\hat{X}_i={\hat{w}}_X(U_i)X_i.\)
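To make the recovery step concrete, the following is a minimal Python sketch (not the authors' code), assuming the distortion model \(\widetilde{Y}_i=\psi (U_i)Y_i\) with \(E\psi (U)=1\) and using a Nadaraya–Watson smoother in place of the local linear fits used in the paper; the function names, the Gaussian kernel, and the default weight \(a_1=0.5\) are illustrative choices. The recovery of \({\hat{X}}_i\) is analogous, with \(\phi ,\) weights \(b_1,b_2\) and bandwidths \(h_3,h_4.\)

```python
import numpy as np

def nw_smooth(u_eval, U, Z, h):
    """Nadaraya-Watson smoother of Z on U, evaluated at u_eval (Gaussian kernel)."""
    K = np.exp(-0.5 * ((u_eval[:, None] - U[None, :]) / h) ** 2)
    return (K @ Z) / K.sum(axis=1)

def recover_y(Y_tilde, U, h1, h2, a1=0.5):
    """Sketch of the weighted recovery Y_hat_i = Y_tilde_i / psi_hat(U_i)."""
    a2 = 1.0 - a1
    m0_plus = np.mean(Y_tilde * (Y_tilde > 0))   # stand-in for m0+ = E[Y I(Y>0)]
    m0_minus = np.mean(Y_tilde * (Y_tilde < 0))  # stand-in for m0- = E[Y I(Y<0)]
    psi0_plus = nw_smooth(U, U, Y_tilde * (Y_tilde > 0), h1)   # psi_hat_0^+(U_i)
    psi0_minus = nw_smooth(U, U, Y_tilde * (Y_tilde < 0), h2)  # psi_hat_0^-(U_i)
    psi_hat = a1 * psi0_plus / m0_plus + a2 * psi0_minus / m0_minus
    return Y_tilde / psi_hat
```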

Lemma 2

\(\Vert {\hat{w}}_Y(U)-1\Vert _\infty =O_p(\delta _0(h_1)+\delta _0(h_2)).\)

Proof

Note that \({\hat{w}}_Y(U)-1=\frac{\psi (U)-{\hat{\psi }}(U)}{{\hat{\psi }}(U)}\) and

$$\begin{aligned} \psi -{\hat{\psi }}= & {} a_1\left( \frac{\psi _0^+}{m_0^+}-\frac{\hat{\psi }_0^+}{{\hat{m}}_0^+}\right) +a_2\left( \frac{\psi _0^-}{m_0^-}-\frac{\hat{\psi }_0^-}{{\hat{m}}_0^-}\right) \\= & {} \left[ a_1\left( \frac{\psi _0^+-\hat{\psi }_0^+}{m_0^+}\right) +a_2\left( \frac{\psi _0^--\hat{\psi }_0^-}{m_0^-}\right) \right] \left[ 1+O_p(n^{-1/2})\right] \\= & {} O_p\big (\delta _0(h_1)+\delta _0(h_2)\big ). \end{aligned}$$

The last equality follows from Lemma 1. \(\square \)

Lemma 3

Assume that conditions C1–C6 hold. Then

$$\begin{aligned} \max _x|{\hat{f}}_{{\hat{X}}}(x)-{\hat{f}}_{X}(x)|=O_p\big (h^{-1}(\delta _0(h_3)+\delta _0(h_4))\big )=o_p(1), \end{aligned}$$

where \({\hat{f}}_{{\hat{X}}}(x)=n^{-1}\sum \nolimits _{i=1}^nK_h({\hat{X}}_i-x)\) and \({\hat{f}}_X(x)=n^{-1}\sum \nolimits _{i=1}^nK_h(X_i-x).\)

Lemma 3 can be proved by methods similar to those used in the proof of Eq. (7.11) in Delaigle et al. (2016).
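For concreteness, the two density estimators in Lemma 3 can be coded directly. Below is an illustrative sketch with a Gaussian kernel; the multiplicative perturbation generating X_hat is a hypothetical stand-in for the recovery error, not the estimator studied in the paper.

```python
import numpy as np

def kde(x_grid, X, h):
    """f_hat(x) = n^{-1} sum_i K_h(X_i - x) with a Gaussian kernel K."""
    z = (x_grid[:, None] - X[None, :]) / h
    return np.exp(-0.5 * z ** 2).mean(axis=1) / (h * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(0)
X = rng.normal(2.0, 1.0, 500)
X_hat = X * (1.0 + 0.02 * rng.standard_normal(500))  # hypothetical recovery error
grid = np.linspace(-1.0, 5.0, 200)
# Lemma 3: the two estimates should be uniformly close over the grid
print(np.max(np.abs(kde(grid, X_hat, 0.3) - kde(grid, X, 0.3))))
```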

Lemma 4

Suppose that conditions C1–C6 hold. Then we have

$$\begin{aligned}&\hat{\psi }_0^+-\psi _0^+=\frac{1}{2}h_1^2{\psi _0^{''+}}u_2+\Gamma _{11}+o_p(h_1^2),\quad \hat{\psi }_0^--\psi _0^-=\frac{1}{2}h_2^2{\psi _0^{''-}}u_2+\Gamma _{12}+o_p(h_2^2),\\&\hat{\phi }_0^+-\phi _0^+=\frac{1}{2}h_3^2{\phi _0^{''+}}u_2+\Gamma _{21}+o_p(h_3^2),\quad \hat{\phi }_0^--\phi _0^-=\frac{1}{2}h_4^2{\phi _0^{''-}}u_2+\Gamma _{22}+o_p(h_4^2), \end{aligned}$$

where

$$\begin{aligned}&\Gamma _{11}(u)=f_U^{-1}(u)\frac{1}{n}\sum _{i=1}^nK_{h_1}(U_i-u)\psi (U_i)[Y_iI(Y_i>0)-m_0^+],\\&\Gamma _{12}(u)=f_U^{-1}(u)\frac{1}{n}\sum _{i=1}^nK_{h_2}(U_i-u)\psi (U_i)[Y_iI(Y_i<0)-m_0^-],\\&\Gamma _{21}(u)=f_U^{-1}(u)\frac{1}{n}\sum _{i=1}^nK_{h_3}(U_i-u)\phi (U_i)[X_iI(X_i>0)-\mu _0^+],\\&\Gamma _{22}(u)=f_U^{-1}(u)\frac{1}{n}\sum _{i=1}^nK_{h_4}(U_i-u)\phi (U_i)[X_iI(X_i<0)-\mu _0^-]. \end{aligned}$$

All these expansions can be computed directly in the case of local linear estimation, and they are similar to Eq. (F.27) in the supplement of Delaigle et al. (2016).
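Since the \(\Gamma \) terms are plain kernel averages, they are easy to evaluate in an oracle-style experiment in which \(\psi \) and \(f_U\) are treated as known. A sketch for \(\Gamma _{11}\) follows (the other three terms are analogous); the Gaussian kernel and the Monte Carlo estimate of \(m_0^+\) are illustrative assumptions.

```python
import numpy as np

def gamma11(u, U, Y, psi, f_U, h1):
    """Oracle evaluation of Gamma_11(u); psi and f_U are treated as known."""
    m0_plus = np.mean(Y * (Y > 0))  # Monte Carlo stand-in for m0+ = E[Y I(Y>0)]
    Kh = np.exp(-0.5 * ((U - u) / h1) ** 2) / (h1 * np.sqrt(2.0 * np.pi))
    return np.mean(Kh * psi(U) * (Y * (Y > 0) - m0_plus)) / f_U(u)
```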

Combining Lemma 2 with Lemma 4, we have

$$\begin{aligned} {\hat{w}}_Y(U)-1= & {} -\frac{1}{2}u_2(a_1h_1^2+a_2h_2^2)\psi ^{''}(U)/\psi (U)(1+o_p(1))\\&\quad -[a_1\Gamma _{11}(U)/m_0^++a_2\Gamma _{12}(U)/m_0^-]/\psi (U)(1+o_p(1)),\\ {\hat{w}}_X(U)-1= & {} -\frac{1}{2}u_2(b_1h_3^2+b_2h_4^2)\phi ^{''}(U)/\phi (U)(1+o_p(1))\\&\quad -[b_1\Gamma _{21}(U)/\mu _0^++b_2\Gamma _{22}(U)/\mu _0^-]/\phi (U)(1+o_p(1)). \end{aligned}$$

Proof of Theorem 1

Following Eq. (9) and Lemma 4, we obtain the expectation of \({\hat{\phi }}(u),\)

$$\begin{aligned} E(\hat{\phi }(u))= & {} b_1\left[ \phi _0^+(u)+1/2h_3^2u_2\phi _0^{+''}(u)+o(h_3^2)\right] /\mu _0^+(1+O(n^{-1/2}))\\&\quad +b_2\left[ \phi _0^-(u)+1/2h_4^2u_2\phi _0^{-''}(u)+o(h_4^2)\right] /\mu _0^-\left( 1+O(n^{-1/2})\right) \\= & {} \left\{ b_1\left[ \phi (u)+1/2h_3^2u_2\phi ^{''}(u)+o(h_3^2)\right] \right. \\&\left. \quad +b_2\left[ \phi (u)+1/2h_4^2u_2\phi ^{''}(u)+o(h_4^2)\right] \right\} \left( 1+O(n^{-1/2})\right) \\= & {} \phi (u)+1/2u_2\phi ^{''}(u)\left[ b_1h_3^2+b_2h_4^2\right] (1+o(1)), \end{aligned}$$

and the variance of \(\hat{\phi }(u),\)

$$\begin{aligned} Var(\hat{\phi }(u))= & {} Var(b_1\Gamma _{21}/\mu _0^++b_2\Gamma _{22}/\mu _0^-)(1+o(1))\\= & {} [Var(b_1\Gamma _{21}/\mu _0^+)+Var(b_2\Gamma _{22}/\mu _0^-)\\&\quad +2Cov(b_1\Gamma _{21}/\mu _0^+,b_2\Gamma _{22}/\mu _0^-)](1+o(1))\\= & {} f_U^{-1}(u)\phi ^2(u)[(nh_3)^{-1}b_1^2(\mu _0^{+})^{-2}Var(XI(X>0))\\&\quad +(nh_4)^{-1}b_2^2(\mu _0^{-})^{-2}Var(XI(X<0))](1+o(1)). \end{aligned}$$

The proof of Theorem 1 is completed. \(\square \)

Proof of Theorem 2

$$\begin{aligned} {\hat{m}}_{NW}(x)-m(x)= & {} {\hat{f}}^{-1}_{{\hat{X}}}(x)n^{-1}\sum _{i=1}^nK_h(x-{\hat{X}}_i)[{\hat{w}}_Y(U_i)-1]Y_i \\&\quad +{\hat{f}}^{-1}_{{\hat{X}}}(x)n^{-1}\sum _{i=1}^nK_h(x-{\hat{X}}_i)[m(X_i)-m(x)]\\&\quad +{\hat{f}}^{-1}_{{\hat{X}}}(x)n^{-1}\sum _{i=1}^nK_h(x-{\hat{X}}_i)\sigma (X_i)\epsilon _i\\\equiv & {} \hat{\Pi }_1(x)+\hat{\Pi }_2(x)+\hat{\Pi }_3(x). \end{aligned}$$
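The estimator being decomposed here is the usual Nadaraya–Watson ratio applied to the recovered pairs \(({\hat{X}}_i,{\hat{Y}}_i).\) A minimal, illustrative sketch (Gaussian kernel, vectorized over an evaluation grid):

```python
import numpy as np

def m_nw(x_grid, X_hat, Y_hat, h):
    """m_hat_NW(x): kernel-weighted average of Y_hat at each grid point."""
    K = np.exp(-0.5 * ((x_grid[:, None] - X_hat[None, :]) / h) ** 2)
    return (K @ Y_hat) / K.sum(axis=1)
```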

Firstly, we consider the term \(\hat{\Pi }_1(x)\) multiplied by \({\hat{f}}_{{\hat{X}}}(x)\) and obtain,

$$\begin{aligned} {\hat{f}}_{{\hat{X}}}(x)\hat{\Pi }_1(x)= & {} n^{-1}\sum _{i=1}^nK_h(x-X_i)[{\hat{w}}_Y(U_i)-1]Y_i\\&\quad +n^{-1}\sum _{i=1}^n[K_h(x-{\hat{X}}_i)-K_h(x-X_i)][{\hat{w}}_Y(U_i)-1]Y_i\\\equiv & {} J_1(x)+J_2(x). \end{aligned}$$

Similar to the computation of Eq. (7.25) in Delaigle et al. (2016), we have \(\max \nolimits _x |J_2(x)|=O_p(h^{-1}(\delta _0(h_1)+\delta _0(h_2))(\delta _0(h_3)+\delta _0(h_4))\log n).\) We also note

$$\begin{aligned} J_1(x)= & {} -\frac{1}{2}u_2(a_1h_1^2+a_2h_2^2)n^{-1}\sum _{i=1}^nK_h(x-X_i)\psi ^{''}(U_i)/\psi (U_i)Y_i(1+o_p(1))\\&\quad -\frac{1}{n}\sum _{i=1}^n[a_1\Gamma _{11}(U_i)/m_0^++a_2\Gamma _{12}(U_i)/m_0^-]/\psi (U_i)Y_iK_h(x-X_i)\\= & {} -\frac{1}{2}m(x)f_X(x)E[\psi ^{''}(U)/\psi (U)]u_2(a_1h_1^2+a_2h_2^2)(1+o_p(1))\\= & {} O_p(a_1h_1^2+a_2h_2^2). \end{aligned}$$

Therefore, \(J_1(x)\) is the dominating term and \(J_2(x)\) is negligible. Together with Lemma 3, we have

$$\begin{aligned} \hat{\Pi }_1(x)=-\frac{1}{2}m(x)E[\psi ^{''}(U)/\psi (U)]u_2\left( a_1h_1^2+a_2h_2^2\right) (1+o_p(1)). \end{aligned}$$
(15)

Next, we consider the term \(\hat{\Pi }_2(x).\) By a Taylor expansion, we obtain

$$\begin{aligned} {\hat{f}}_{{\hat{X}}}(x)\hat{\Pi }_2(x)= & {} n^{-1}\sum _{i=1}^nK_h(x-X_i)[m(X_i)-m(x)]\\&\quad +(nh^2)^{-1}\sum _{i=1}^nK'\left( \frac{x-X_i}{h}\right) (1-{\hat{w}}_X(U_i))X_i[m(X_i)-m(x)]\\&\quad +(2nh^3)^{-1}\sum _{i=1}^nK''(\xi _i)(1-{\hat{w}}_X(U_i))^2X_i^2[m(X_i)-m(x)]I(|{\hat{X}}_i-x|\le h)\\\equiv & {} I_1(x)+I_2(x)+I_3(x), \end{aligned}$$

where \(\xi _i\) lies between \((x-X_i)/h\) and \((x-{\hat{X}}_i)/h.\) By the law of large numbers and after some calculations, we have

$$\begin{aligned} E[I_1(x)]= & {} \frac{1}{2}h^2u_2f_X(x)\{m''(x)+2m'(x)f_X'(x)/f_X(x)\}(1+o(1)),\\ E[I_2(x)]= & {} \frac{1}{2}u_2\left( b_1h_3^2+b_2h_4^2\right) E[\phi ''(U)/\phi (U)]xm'(x)f_X(x)\int tK'(t)dt(1+o(1)), \\ E[I_3(x)]= & {} O(h^{-1}(\delta _0(h_3)+\delta _0(h_4))^2), \end{aligned}$$

and \(I_3(x)\) is negligible compared with \(I_1(x)\) and \(I_2(x).\)

Finally, for \(\hat{\Pi }_3(x)\) we have,

$$\begin{aligned} {\hat{f}}_{{\hat{X}}}(x)\hat{\Pi }_3(x)= & {} n^{-1}\sum _{i=1}^nK_h(x-X_i)\sigma (X_i)\epsilon _i+n^{-1}\sum _{i=1}^n[K_h(x-{\hat{X}}_i)\\&\quad -K_h(x-X_i)]\sigma (X_i)\epsilon _i\\= & {} Q_1(x)+Q_2(x). \end{aligned}$$

Similar to the proof of Lemma F.1 in Delaigle et al. (2016), we can show that \(Q_2(x)\) is negligible compared with \(Q_1(x).\) Then, we have

$$\begin{aligned} E(Q_1(x))= & {} E\left[ n^{-1}\sum _{i=1}^nK_h(x-X_i)\sigma (X_i)\epsilon _i\right] =0,\\ Var(Q_1(x))= & {} \frac{1}{n}Var\big [K_h(x-X)\sigma (X)\epsilon \big ]\\= & {} \frac{1}{nh}\sigma ^2(x)f_X(x)v_0+o\big ((nh)^{-1}\big ). \end{aligned}$$

By the central limit theorem and Slutsky's theorem, we obtain

$$\begin{aligned} \hat{\Pi }_3(x)\xrightarrow {L} N\left( 0, \frac{1}{nhf_X(x)}\sigma ^2(x)v_0\right) . \end{aligned}$$
(16)
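The variance formula for \(Q_1(x)\) above is easy to check by simulation. The sketch below takes \(\sigma (x)\equiv 1,\) standard normal \(X\) and a Gaussian kernel, for which \(v_0=\int K^2(t)dt=1/(2\sqrt{\pi });\) all numerical choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, h, x0, B = 2000, 0.25, 0.0, 500
q1 = np.empty(B)
for b in range(B):
    X = rng.standard_normal(n)      # so f_X is the standard normal density
    eps = rng.standard_normal(n)    # regression errors, sigma(x) = 1
    Kh = np.exp(-0.5 * ((x0 - X) / h) ** 2) / (h * np.sqrt(2.0 * np.pi))
    q1[b] = np.mean(Kh * eps)       # one draw of Q_1(x0)
v0 = 1.0 / (2.0 * np.sqrt(np.pi))   # v0 = int K^2 for the Gaussian kernel
fX0 = 1.0 / np.sqrt(2.0 * np.pi)    # f_X(x0) at x0 = 0
print(q1.var(), fX0 * v0 / (n * h)) # empirical vs. asymptotic variance
```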

Based on the expansions of \(\hat{\Pi }_1(x), \hat{\Pi }_2(x)\) and \(\hat{\Pi }_3(x)\) above, the proof of Theorem 2 is completed. \(\square \)

Proof of Theorem 3

Following Eq. (12), we have \(\hat{m}_{LL}(x)=\mathbf{e}_1^\tau \mathbf{S}_{{\hat{X}}}^{-1}(x,K,h)\mathbf{T}_{{\hat{X}},{\hat{Y}}}(x,K,h).\)
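The matrix form of Eq. (12) translates directly into code. A minimal sketch of \({\hat{m}}_{LL}(x)\) at a single point follows (Gaussian kernel; the common \(1/h\) factor of \(K_h\) cancels between \(\mathbf{S}_{{\hat{X}}}\) and \(\mathbf{T}_{{\hat{X}},{\hat{Y}}},\) so it is omitted):

```python
import numpy as np

def m_ll(x, X_hat, Y_hat, h):
    """m_hat_LL(x) = e_1' S_Xhat(x,K,h)^{-1} T_Xhat,Yhat(x,K,h), cf. Eq. (12)."""
    t = (X_hat - x) / h
    K = np.exp(-0.5 * t ** 2)                    # Gaussian kernel weights
    D = np.column_stack([np.ones_like(t), t])    # rows (1, (X_hat_i - x)/h)
    S = (D * K[:, None]).T @ D / len(X_hat)      # 2x2 matrix S_Xhat(x, K, h)
    T = (D * K[:, None]).T @ Y_hat / len(X_hat)  # vector T_Xhat,Yhat(x, K, h)
    return np.linalg.solve(S, T)[0]              # first entry = e_1' S^{-1} T
```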

Firstly, consider the term \(\mathbf{T}_{{\hat{X}},{\hat{Y}}}(x,K,h),\) which can be decomposed as

$$\begin{aligned} \mathbf{T}_{{\hat{X}},{\hat{Y}}}(x,K,h)= & {} 1/n\sum _{i=1}^nK_h({\hat{X}}_i-x)\left( 1,\frac{{\hat{X}}_i-x}{h}\right) ^\tau Y_i\\&\quad +1/n\sum _{i=1}^nK_h({\hat{X}}_i-x)\left( 1,\frac{{\hat{X}}_i-x}{h}\right) ^\tau ({\hat{Y}}_i-Y_i). \end{aligned}$$

After some computation we have

$$\begin{aligned}&\left| \frac{1}{n}\sum _{i=1}^n\bigg \{K_h({\hat{X}}_i-x)\left( 1,\frac{{\hat{X}}_i-x}{h}\right) ^\tau -K_h(X_i-x)\left( 1,\frac{X_i-x}{h}\right) ^\tau \bigg \}({\hat{Y}}_i-Y_i)\right| \nonumber \\&\quad =o_p\left( \delta _0(h_1)+\delta _0(h_2)\right) . \end{aligned}$$
(17)

So \(\mathbf{T}_{{\hat{X}},{\hat{Y}}}(x,K,h)\) can be expressed as

$$\begin{aligned} \mathbf{T}_{{\hat{X}},{\hat{Y}}}(x,K,h)= & {} \frac{1}{n}\sum _{i=1}^nK_h({\hat{X}}_i-x)\left( 1,\frac{{\hat{X}}_i-x}{h}\right) ^\tau Y_i\\&\quad +\frac{1}{n}\sum _{i=1}^nK_h(X_i-x)\left( 1,\frac{X_i-x}{h}\right) ^\tau ({\hat{Y}}_i-Y_i)+o_p\left( \delta _0(h_1)+\delta _0(h_2)\right) . \end{aligned}$$

Then \({\hat{m}}_{LL}(x)\) can be written as

$$\begin{aligned} {\hat{m}}_{LL}(x)= & {} \mathbf{e}_1^\tau \mathbf{S}_{{\hat{X}}}^{-1}(x,K,h)1/n\sum _{i=1}^nK_h({\hat{X}}_i-x)\left( 1,\frac{{\hat{X}}_i-x}{h}\right) ^\tau Y_i\\&\quad +\mathbf{e}_1^\tau \mathbf{S}_{{\hat{X}}}^{-1}(x,K,h)1/n\sum _{i=1}^nK_h(X_i-x)\left( 1,\frac{X_i-x}{h}\right) ^\tau ({\hat{Y}}_i-Y_i)\\&\quad +o_p\left( \delta _0(h_1)+\delta _0(h_2)\right) \\\equiv & {} {\hat{m}}_{LL,1}(x)+{\hat{m}}_{LL,2}(x)+o_p\left( \delta _0(h_1)+\delta _0(h_2)\right) . \end{aligned}$$


Secondly, we deal with \({\hat{m}}_{LL,1}(x);\) a standard decomposition shows

$$\begin{aligned} {\hat{m}}_{LL,1}(x)= & {} m(x)+m'(x)\mathbf{e}_1^\tau \mathbf{S}_{{\hat{X}}}^{-1}(x,K,h)\frac{1}{n}\sum _{i=1}^nK_h({\hat{X}}_i-x)\left( 1,\frac{{\hat{X}}_i-x}{h}\right) ^\tau (1-{\hat{w}}_X(U_i))X_i\\&\quad +\frac{1}{2}{} \mathbf{e}_1^\tau \mathbf{S}_{{\hat{X}}}^{-1}(x,K,h)\frac{1}{n}\sum _{i=1}^nK_h({\hat{X}}_i-x)\left( 1,\frac{{\hat{X}}_i-x}{h}\right) ^\tau m''(\xi _i)(X_i-x)^2\\&\quad +\mathbf{e}_1^\tau \mathbf{S}_{{\hat{X}}}^{-1}(x,K,h)\frac{1}{n}\sum _{i=1}^nK_h({\hat{X}}_i-x)\left( 1,\frac{{\hat{X}}_i-x}{h}\right) ^\tau \sigma (X_i)\epsilon _i,\\\equiv & {} m(x)+G_1(x)+G_2(x)+G_3(x). \end{aligned}$$

We can also show that \(\Vert \mathbf{S}_{{\hat{X}}}^{-1}(x,K,h)-f_X^{-1}(x)\mathbf{S}^{-1}\Vert _\infty =o_p(1),\) where \(\mathbf{S}=\mathrm{diag}(1,u_2).\) Combining this with Eq. (17), we have

$$\begin{aligned} G_1(x)= & {} m'(x)\mathbf{e}_1^\tau f_X^{-1}(x)\mathbf{S}^{-1}\frac{1}{n}\sum _{i=1}^nK_h(X_i-x)\\&\quad \times \left( 1,\frac{X_i-x}{h}\right) ^\tau (1-{\hat{w}}_X(U_i))X_i(1+o_p(1))\\= & {} \frac{m'(x)}{f_X(x)}\frac{1}{n}\sum _{i=1}^nK_h(X_i-x)(1-{\hat{w}}_X(U_i))X_i(1+o_p(1))\\= & {} \frac{1}{2}m'(x)xu_2(b_1h_3^2+b_2h_4^2)E(\phi ''(U)/\phi (U))+o_p(b_1h_3^2+b_2h_4^2+h^2)\\= & {} {\tilde{B}}_\phi (x)+o_p(b_1h_3^2+b_2h_4^2+h^2),\\ G_2(x)= & {} \frac{1}{2}\mathbf{e}_1^\tau f_X^{-1}(x)\mathbf{S}^{-1}n^{-1}\sum _{i=1}^nK_h({\hat{X}}_i-x)\left( 1,\frac{{\hat{X}}_i-x}{h}\right) ^\tau m''(\xi _i)(X_i-x)^2(1+o_p(1))\\= & {} \frac{1}{2}f_X^{-1}(x)n^{-1}\sum _{i=1}^nK_h({\hat{X}}_i-x)m''(\xi _i)(X_i-x)^2(1+o_p(1)). \end{aligned}$$

Then we obtain \(E(G_2(x))=\frac{1}{2}h^2m''(x)u_2(1+o(1))=B_1(x)(1+o(1)).\) Moreover,

$$\begin{aligned} G_3(x)= & {} \mathbf{e}_1^\tau \mathbf{S}_{{\hat{X}}}^{-1}(x,K,h)n^{-1}\sum _{i=1}^nK_h({\hat{X}}_i-x)\left( 1,\frac{{\hat{X}}_i-x}{h}\right) ^\tau \sigma (X_i)\epsilon _i\\= & {} f_X^{-1}(x)n^{-1}\sum _{i=1}^nK_h({\hat{X}}_i-x)\sigma (X_i)\epsilon _i(1+o_p(1)). \end{aligned}$$

By Eq. (16), it follows that \(G_3(x)\xrightarrow {L} N\left( 0, \frac{1}{nhf_X(x)}\sigma ^2(x)v_0\right) .\)

Thirdly, we consider \({\hat{m}}_{LL,2}(x),\)

$$\begin{aligned} {\hat{m}}_{LL,2}(x)= & {} \mathbf{e}_1^\tau \mathbf{S}_{{\hat{X}}}^{-1}(x,K,h)1/n\sum _{i=1}^nK_h(X_i-x)\left( 1,\frac{X_i-x}{h}\right) ^\tau ({\hat{Y}}_i-Y_i)\\= & {} f_X^{-1}(x)n^{-1}\sum _{i=1}^nK_h({\hat{X}}_i-x)({\hat{Y}}_i-Y_i)(1+o_p(1))\\= & {} -\frac{1}{2}m(x)E[\psi ^{''}(U)/\psi (U)]u_2(a_1h_1^2+a_2h_2^2)(1+o_p(1)), \end{aligned}$$

where the last equality follows from Eq. (15).

Combining the decomposition components of \({\hat{m}}_{LL}(x)\) above, the proof of Theorem 3 is completed. \(\square \)

About this article

Cite this article

Li, F., Lin, L., Lu, Y. et al. An adaptive estimation for covariate-adjusted nonparametric regression model. Stat Papers 62, 93–115 (2021). https://doi.org/10.1007/s00362-019-01084-0

