
L1 Properties of the Nadaraya Quantile Estimator

Published in Sankhya A.

Abstract

Let X be a real random variable with density function f, cumulative distribution function F and quantile function Q. For h > 0, let Fh and Qh denote the Nadaraya kernel estimators of F and Q, respectively. In the first part of this paper the almost sure convergence of the conventional L1 distance between Qh and Q is established. In the second part, the L1 right inversion distance is introduced and a representation of it in terms of Fh and F is given. This representation suggests ways to choose a global bandwidth for the estimator Qh.
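As a concrete illustration, the two estimators can be sketched in a few lines of Python. This is our own sketch, not code from the paper: it assumes a Gaussian kernel K (the paper allows a general kernel), takes Fh(x) as the average of the integrated kernel over the sample, and computes Qh by numerically inverting Fh.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def F_h(x, sample, h):
    # Nadaraya kernel CDF estimator: mean over the sample of the
    # integrated Gaussian kernel centred at each observation X_i.
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return norm.cdf((x[None, :] - sample[:, None]) / h).mean(axis=0)

def Q_h(alpha, sample, h):
    # Kernel quantile estimator: numerical inversion of F_h.
    lo, hi = sample.min() - 10 * h, sample.max() + 10 * h
    return brentq(lambda t: F_h(t, sample, h)[0] - alpha, lo, hi)

rng = np.random.default_rng(0)
X = rng.normal(size=500)
print(Q_h(0.5, X, h=0.3))  # close to the true median 0
```

Here `brentq` finds the unique root of Fh(t) − α, which exists because Fh is a continuous, strictly increasing cdf for the Gaussian kernel.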


References

  • Adamowski, K. (1985). Nonparametric kernel estimation of flood frequencies. Water Resour. Res. 21, 11, 1585–1590.

  • Altman, N. and Léger, C. (1995). Bandwidth selection for kernel distribution function estimation. J. Statist. Plann. Inference 46, 2, 195–214.

  • Apostol, T. M. (1974). Mathematical analysis, 2nd edn. Addison-Wesley, Reading, Mass.-London-Don Mills, Ont.

  • Azzalini, A. (1981). A note on the estimation of a distribution function and quantiles by a kernel method. Biometrika 68, 1, 326–328.

  • Bickel, P. J. and Freedman, D. A. (1981). Some asymptotic theory for the bootstrap. Ann. Statist. 9, 6, 1196–1217.

  • Bowman, A., Hall, P. and Prvan, T. (1998). Bandwidth selection for the smoothing of distribution functions. Biometrika 85, 4, 799–808.

  • Cai, Z. and Roussas, G. G. (1997). Smooth estimate of quantiles under association. Stat. Probab. Lett. 36, 3, 275–287.

  • Chacón, J. E. and Rodríguez-Casal, A. (2010). A note on the universal consistency of the kernel distribution function estimator. Stat. Probab. Lett. 80, 17-18, 1414–1419.

  • Cheng, M.-Y. and Sun, S. (2003). Bandwidth selection for kernel quantile estimation. Unpublished manuscript.

  • Falk, M. (1985). Asymptotic normality of the kernel quantile estimator. Ann. Statist. 13, 1, 428–433.

  • Faucher, D., Rasmussen, P. F. and Bobée, B. (2001). A distribution function based bandwidth selection method for kernel quantile estimation. J. Hydrol. 250, 1, 1–11.

  • Lejeune, M. and Sarda, P. (1992). Smooth estimators of distribution and density functions. Comput. Stat. Data Anal. 14, 4, 457–471.

  • Quintela-del Río, A. (2011). On bandwidth selection for nonparametric estimation in flood frequency analysis. Hydrol. Process. 25, 5, 671–678.

  • Sarda, P. (1993). Smoothing parameter selection for smooth distribution functions. J. Statist. Plann. Inference 35, 1, 65–75.

  • Schmid, F. (1993). Measuring interdistributional inequality. Technical report, Springer.

  • Shankar, B. (1998). An optimal choice of bandwidth for perturbed sample quantiles. Master’s thesis.

  • Shorack, G. R. and Wellner, J. A. (1986). Empirical processes with applications to statistics. Wiley, New York.

  • Wang, L. (2008). Kernel type smoothed quantile estimation under long memory. Stat. Pap. 51, 1, 57–67.

  • Yamato, H. (1973). Some statistical properties of estimators of density and distribution functions. Bull. Math. Statist. 15, 1-2, 113–131.

  • Youndjé, E. (2018). Equivalence between Mallows and Girone–Cifarelli tests. Stat. Probab. Lett. 141, 125–128.


Acknowledgments

My heartfelt gratitude goes to Professor William Strawderman of Rutgers University, who helped me improve the English of this paper. I also greatly appreciated the referees' helpful comments.

Author information

Correspondence to É. Youndjé.


Appendices

Appendix A: Proofs

A.1.1 Proof of Theorem 1

Proof of Theorem 1 (i)

Let us assume that all the random variables X,X1,…,Xn are defined on the probability space \(({\Omega },\ {\mathscr{A}},\ \mathrm {P}).\) For ω ∈Ω, set

$$ \lVert F_{h}(\cdot,\omega)-F(\cdot)\rVert = \sup_{x\in{\mathbb{R}}} | F_{h}(x,\omega)-F(x)|. $$

By Theorem 2 of Chacón and Rodríguez-Casal (2010) the set

$$ {\Omega}_{1}=\{\omega\in {\Omega}:\quad \lVert F_{h}(\cdot,\omega)-F(\cdot)\rVert\xrightarrow{\quad \quad} 0 \} $$

is of probability 1. For ω ∈Ω, set

$$ T_{n}(\omega)=\frac{| X_{1}(\omega)|+\cdots+| X_{n}(\omega)|}{n} $$

and

$$ {\Omega}_{2}=\{\omega\in {\Omega}:\quad T_{n}(\omega) \xrightarrow{\quad \quad} \mathrm{E}| X| \}. $$

By the strong law of large numbers P(Ω2) = 1. Let ω ∈Ω1 ∩Ω2. By Lemma 8.3 of Bickel and Freedman (1981), to prove Theorem 1 (i), it is enough to show that

$$ \begin{array}{@{}rcl@{}} (a)\quad &&F_{h}(x,\omega)\xrightarrow{\quad \quad} F(x),\quad \forall x\in {\mathbb{R}} \\ (b)\quad && \int | x| \mathrm{d}F_{h}(x,\omega)\xrightarrow{\quad \quad} \int | x| \mathrm{d}F(x)=\mathrm{E}| X|. \end{array} $$

(a) is true because ω ∈Ω1. Let us prove (b). Using the substitution u = (xXi)/h we get

$$ \int | x| K\left( \frac{x-X_{i}}{h}\right) \mathrm{d}x=h \int | X_{i}+uh| K(u)\mathrm{d}u. $$

We have

$$ \int | x| \mathrm{d}F_{h}(x,\omega)=T_{n}(\omega)+\int | x| \mathrm{d}F_{h}(x,\omega) -T_{n}(\omega). $$

Setting

$$ {\Gamma}_{1}=\int | x| \mathrm{d}F_{h}(x,\omega) -T_{n}(\omega), $$

we have

$$ {\Gamma}_{1}=\frac{1}{n}\sum\limits_{i=1}^{n}\int\left( | X_{i}+uh| -| X_{i}|\right) K(u)\mathrm{d}u, $$

it follows that

$$ \begin{array}{@{}rcl@{}} | {\Gamma}_{1}| &\leq & \frac{1}{n}\sum\limits_{i=1}^{n}\int | uh| K(u)\mathrm{d}u \\ &= & h\int | u| K(u)\mathrm{d}u. \end{array} $$

From this last inequality we see that \({\Gamma }_{1} \xrightarrow {\quad \quad } 0.\) Hence, since ω ∈Ω2 gives \(T_{n}(\omega)\xrightarrow {\quad \quad } \mathrm{E}|X|,\) (b) is true.
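The bound on Γ1 can be checked numerically. In the sketch below (our own, assuming a Gaussian kernel K), ∫|x| dFh(x,ω) is computed by quadrature as (1/n)Σi ∫|Xi + uh| K(u) du and compared with Tn(ω); the discrepancy must not exceed h∫|u|K(u)du = h√(2/π).

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

rng = np.random.default_rng(1)
X = rng.normal(size=50)
h = 0.4

# integral |x| dF_h(x) = (1/n) sum_i integral |X_i + u h| K(u) du
per_obs = [quad(lambda u, xi=xi: abs(xi + u * h) * norm.pdf(u),
                -np.inf, np.inf)[0] for xi in X]
abs_moment_Fh = float(np.mean(per_obs))

T_n = float(np.mean(np.abs(X)))          # empirical absolute moment
Gamma1 = abs_moment_Fh - T_n
bound = h * quad(lambda u: abs(u) * norm.pdf(u), -np.inf, np.inf)[0]

print(Gamma1, bound)  # |Gamma1| sits below h * sqrt(2/pi)
```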

Proof of Theorem 1 (ii)

We are going to use the following lemmas in the proof.

Lemma 1.

If (A.2) is true then

$$ \int | x | f_{h}(x)\mathrm{d} x<+ \infty. $$

Lemma 2.

Let Y be a random variable with cumulative distribution function G. If G is continuous, then \(\mathrm {E}| Y|=+\infty \) if and only if

$$ {\int}_{-\infty }^{0} G(x)\mathrm{d} x=+\infty\quad \text{or}\quad {\int}_{0}^{+\infty }(1- G(x))\mathrm{d} x=+\infty. $$

Returning to the proof of Theorem 1 (ii): it follows from Lemma 1 and Lemma 2 that

$$ {\int}_{-\infty }^{0} F_{h}(x)\mathrm{d} x<+\infty\quad \text{and}\quad {\int}_{0}^{+\infty }(1- F_{h}(x))\mathrm{d} x<+\infty. $$

Since \(\mathrm {E}| X|=+\infty ,\) by Lemma 2 we have

$$ {\int}_{-\infty }^{0} F(x)\mathrm{d} x=+\infty\quad \text{or}\quad {\int}_{0}^{+\infty }(1- F(x))\mathrm{d} x=+\infty. $$

We have:

$$ \begin{array}{@{}rcl@{}} \int | F_{h}(x)-F(x)| \mathrm{d}x& = &{\int}_{-\infty }^{0} | F(x)-F_{h}(x)| \mathrm{d}x+ {\int}_{0}^{+\infty } | (1-F(x))-(1- F_{h}(x))| \mathrm{d}x \\ &\ge & {\int}_{-\infty }^{0} \left( F(x)-F_{h}(x) \right)\mathrm{d}x+ {\int}_{0}^{+\infty } \left( (1-F(x))-(1- F_{h}(x)) \right) \mathrm{d}x \\ &= & +\infty. \end{array} $$

A.1.1.1 Proof of Lemma 1

We have

$$ \int | x| K\left( \frac{x-X_{i}}{h}\right) \mathrm{d}x=h \int | X_{i}+uh| K(u)\mathrm{d}u. $$

It follows that

$$ \int | x| K\left( \frac{x-X_{i}}{h}\right) \mathrm{d}x\leq h | X_{i}|\int K(u)\mathrm{d}u+ h^{2} \int | u| K(u)\mathrm{d}u. $$

This inequality gives

$$ \int | x| f_{h}(x) \mathrm{d}x\leq \frac{1}{n} \sum\limits_{i=1}^{n}| X_{i}| +h\int | u| K(u)\mathrm{d}u<+\infty. $$

A.1.1.2 Proof of Lemma 2

It is well-known that

$$ \begin{array}{@{}rcl@{}} \mathrm{E} | Y| &= &{\int}_{0}^{+\infty} \mathrm{P}\left( | Y|>t\right)\mathrm{d}t \\ &= & {\int}_{0}^{+\infty} \mathrm{P}\left( Y<-t\right)\mathrm{d}t + {\int}_{0}^{+\infty} \mathrm{P}\left( Y>t\right)\mathrm{d}t\\ &= &{\int}_{-\infty}^{0}G(t) \mathrm{d}t+ {\int}_{0}^{+\infty} (1-G(t))\mathrm{d}t. \end{array} $$
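For a concrete continuous G, the identity above is easy to verify by quadrature. The sketch below (ours) takes Y ~ N(1, 2²), for which E|Y| has the closed form σ√(2/π) e^{−μ²/(2σ²)} + μ(1 − 2Φ(−μ/σ)).

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

mu, sigma = 1.0, 2.0
G = lambda t: norm.cdf(t, loc=mu, scale=sigma)  # cdf of Y ~ N(mu, sigma^2)

left  = quad(G, -np.inf, 0)[0]                  # integral_{-inf}^{0} G(t) dt
right = quad(lambda t: 1 - G(t), 0, np.inf)[0]  # integral_{0}^{inf} (1 - G(t)) dt

# closed form for E|Y| when Y is Gaussian
exact = (sigma * np.sqrt(2 / np.pi) * np.exp(-mu**2 / (2 * sigma**2))
         + mu * (1 - 2 * norm.cdf(-mu / sigma)))
print(left + right, exact)  # the two values agree
```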

A.2.1 Proof of Proposition 1

First we prove some useful equalities. Let (xn) be a bounded sequence. Then we have:

$$ \begin{array}{@{}rcl@{}} F(\underset{n}{\sup} x_{n}) &= & \underset{n}{\sup} F(x_{n}) \end{array} $$
(A.3)
$$ \begin{array}{@{}rcl@{}} F(\underset{n}{\inf} x_{n})&= & \underset{n}{\inf} F(x_{n}). \end{array} $$
(A.4)

Proof of Eq. A.3. We have

$$ \begin{array}{@{}rcl@{}} x_{n}\leq \sup_{n} x_{n} &\implies & F(x_{n})\leq F(\underset{n}{\sup} x_{n}) \\ &\implies & \underset{n}{\sup} F(x_{n})\leq F(\underset{n}{\sup} x_{n}). \end{array} $$

Set \(u_{0}=\sup _{n} x_{n}.\) For \(\varepsilon =\frac {1}{k}\ (k\in {\mathbb {N}}^{*}),\ \exists n_{k}\in {\mathbb {N}}\) such that

$$ \begin{array}{@{}rcl@{}} && u_{0}-\frac{1}{k} < x_{n_{k}}\leq u_{0}\\ &\implies & F(u_{0}-\frac{1}{k})\leq F(x_{n_{k}})\\ &\implies & F(u_{0}-\frac{1}{k})\leq \sup_{n} F(x_{n}),\ \forall k\in{\mathbb{N}}^{*}\\ &\implies & F(u_{0})\leq \underset{n}{\sup} F(x_{n}). \end{array} $$

This last inequality completes the proof of Eq. A.3. The proof of Eq. A.4 is similar. Let α0 ∈ (0, 1) and assume that Q(α0) is the unique solution of the equation F(y) = α0. To prove Proposition 1, it is enough to show that, for any sequence (sn),

$$ F(s_{n})\xrightarrow{\quad \quad} \alpha_{0} \implies s_{n}\xrightarrow{\quad \quad} Q(\alpha_{0}). $$
(A.5)

Proof of the implication (A.5)

First, (sn) is necessarily bounded: if, for instance, a subsequence satisfied \(s_{n_{k}}\xrightarrow {\quad \quad } -\infty ,\) then \(F(s_{n_{k}})\xrightarrow {\quad \quad } 0 \neq \alpha _{0},\) contradicting the fact that \(F(s_{n_{k}})\xrightarrow {\quad \quad } \alpha _{0}.\) Next, set

$$ l_{n}=\inf \{s_{k},\quad k\geq n \},\quad u_{n}=\sup \{s_{k},\quad k\geq n \}. $$

Using Eq. A.4 we have

$$ \begin{array}{@{}rcl@{}} && F(l_{n})=\inf_{k\geq n} F(s_{k})\\ &\implies & \lim F(l_{n})=\underline{\lim} F(s_{n})\\ &\implies & F(\lim l_{n}) =\alpha_{0} \\ &\implies & F(\underline{\lim} s_{n}) =\alpha_{0}. \end{array} $$

Using Eq. A.3 we can prove similarly that

$$ F(\overline{\lim} s_{n}) =\alpha_{0}. $$

The uniqueness of Q(α0) implies that

$$ \underline{\lim} s_{n}=Q(\alpha_{0})=\overline{\lim} s_{n} $$

and these equalities show that

$$ s_{n}\xrightarrow{\quad \quad} Q(\alpha_{0}). $$

A.3.1 Proof of Theorem 2

Theorem 2 follows from certain properties of cumulative distribution functions. Lemma 3 below is its restatement in terms of general cumulative distribution functions.

Lemma 3.

Let G and H be continuous cumulative distribution functions. Then we have:

$$ \begin{array}{@{}rcl@{}} \text{(i)}\ \ \ \ && {{\int}_{0}^{1}} \left| H\left( G^{-1}(\alpha)\right) -\alpha\right|\mathrm{d}\alpha = \int| G(x)-H(x)| \mathrm{d}H(x)\\ \text{(ii-a)}\ \ \ && {{\int}_{0}^{1}} \left| G\left( H^{-1}(\alpha)\right) -\alpha\right|\mathrm{d}\alpha = \int| G(x)-H(x)| \mathrm{d}H(x)\\ \text{(ii-b)}\ \ \ && \int| G(x)-H(x)| \mathrm{d}G(x)=\int| G(x)-H(x)| \mathrm{d}H(x)\\ \text{(iii)}\ \ \ \ && \int| H\left( G^{-1}(\alpha)\right) -G\left( H^{-1}(\alpha)\right) | \mathrm{d}\alpha = 2\int| G(x)-H(x)| \mathrm{d}H(x).\\ \end{array} $$

Note that (ii-b) is a restatement of Corollary 1. All the identities in Lemma 3 are presented in Schmid (1993). However, the author states and proves his results assuming that G and H are continuous and bijective. We are going to use the following lemma in our proof.

Lemma 4.

(Lemma 1 in Youndjé (2018)) Let G, H and L be three continuous cumulative distribution functions. Then we have:

$$ \int| G(x)-H(x)| \mathrm{d} L(x)= {{\int}_{0}^{1}} \left| L\left( G^{-1}(\alpha)\right) - L\left( H^{-1}(\alpha)\right)\right|\mathrm{d}\alpha. $$

Proof of Lemma 3

By Lemma 4 we have:

$$ \begin{array}{@{}rcl@{}} \int| G(x)-H(x)| \mathrm{d} H(x) & = & {{\int}_{0}^{1}} \left| H\left( G^{-1}(\alpha)\right) - H\left( H^{-1}(\alpha)\right)\right|\mathrm{d}\alpha \end{array} $$
(A.6)
$$ \begin{array}{@{}rcl@{}} & = & {{\int}_{0}^{1}} \left| H\left( G^{-1}(\alpha)\right) -\alpha\right|\mathrm{d}\alpha \end{array} $$
(A.7)

therefore (i) is established. Since H is assumed continuous, using the substitution α = H(x) (see Apostol (1974)) we have:

$$ \begin{array}{@{}rcl@{}} \int| G(x)-H(x)| \mathrm{d} H(x)&= & {{\int}_{0}^{1}} \left| G\left( H^{-1}(\alpha)\right) - H\left( H^{-1}(\alpha)\right)\right|\mathrm{d}\alpha\\ &= & {{\int}_{0}^{1}} \left| G\left( H^{-1}(\alpha)\right) -\alpha\right|\mathrm{d}\alpha. \end{array} $$

These equalities show that (ii-a) is true. Using the substitution α = G(x) we have:

$$ \begin{array}{@{}rcl@{}} \int| G(x)-H(x)| \mathrm{d} G(x)&= & {{\int}_{0}^{1}} \left| G\left( G^{-1}(\alpha)\right) - H\left( G^{-1}(\alpha)\right)\right|\mathrm{d}\alpha\\ &= & {{\int}_{0}^{1}} \left| H\left( G^{-1}(\alpha)\right) -\alpha\right|\mathrm{d}\alpha\\ &= & \int| G(x)-H(x)| \mathrm{d} H(x). \end{array} $$

This last equality follows from Eqs. A.6 and A.7; the proof of (ii-b) is obtained from this chain of equalities. Now set L = (G + H)/2. On the one hand we have

$$ \begin{array}{@{}rcl@{}} \int| G(x)-H(x)| \mathrm{d} L(x)&= &\frac{1}{2}\int| G(x)-H(x)| \mathrm{d} G(x)+ \frac{1}{2}\int| G(x)-H(x)| \mathrm{d} H(x)\\ &=&\int| G(x)-H(x)| \mathrm{d} H(x) \end{array} $$

by Lemma 3 (ii-b). On the other hand, using Lemma 4, we have

$$ \begin{array}{@{}rcl@{}} \int| G(x)-H(x)| \mathrm{d} L(x)&= & {{\int}_{0}^{1}} \left| L\left( G^{-1}(\alpha)\right) - L\left( H^{-1}(\alpha)\right)\right|\mathrm{d}\alpha\\ &= & {{\int}_{0}^{1}} \left|\left( \frac{G+H}{2}\right) \left( G^{-1}(\alpha)\right) - \left( \frac{G+H}{2}\right)\left( H^{-1}(\alpha)\right)\right|\mathrm{d}\alpha\\ &= & {{\int}_{0}^{1}} \left| \left( \frac{\alpha}{2}+ \frac{H\left( G^{-1}(\alpha)\right)}{2}\right) -\left( \frac{G\left( H^{-1}(\alpha)\right)}{2}+ \frac{\alpha}{2} \right)\right|\mathrm{d}\alpha\\ &= & \frac{1}{2}\int| H\left( G^{-1}(\alpha)\right) -G\left( H^{-1}(\alpha)\right) |\mathrm{d}\alpha. \end{array} $$

It follows from the two chains of equalities above that

$$ \int| G(x)-H(x)| \mathrm{d} H(x)= \frac{1}{2}\int| H\left( G^{-1}(\alpha)\right) -G\left( H^{-1}(\alpha)\right) |\mathrm{d}\alpha $$

and this identity is exactly Lemma 3 (iii).
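The identities of Lemma 3 are easy to sanity-check numerically. The sketch below (our own choice of distributions) takes G = N(0,1) and H = N(1, 1.5²), and uses the substitution α = H(x) to express ∫|G−H| dH as an integral over (0,1) before comparing it with (i) and (iii) by quadrature.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

G,  H  = (lambda x: norm.cdf(x)), (lambda x: norm.cdf(x, 1.0, 1.5))
Gi, Hi = (lambda a: norm.ppf(a)), (lambda a: norm.ppf(a, 1.0, 1.5))

# integral |G - H| dH, via the substitution alpha = H(x)
dH = quad(lambda a: abs(G(Hi(a)) - a), 0, 1)[0]

lhs_i   = quad(lambda a: abs(H(Gi(a)) - a), 0, 1)[0]         # Lemma 3 (i)
lhs_iii = quad(lambda a: abs(H(Gi(a)) - G(Hi(a))), 0, 1)[0]  # Lemma 3 (iii)

print(lhs_i, dH, lhs_iii)  # lhs_i == dH and lhs_iii == 2 * dH
```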

A.4.1 Proof of Proposition 2

Proof of Proposition 2 (i)

We give the proof for the Cauchy cumulative distribution function. The one for the Pareto cdf follows the same steps. We have

$$ \int \left( F_{h}(x)-F(x)\right)^{2} \mathrm{d}x= {\int}_{-\infty }^{0} \left( F(x)-F_{h}(x)\right)^{2} \mathrm{d}x+ {\int}_{0}^{+\infty } \left( (1-F(x))-(1- F_{h}(x))\right)^{2} \mathrm{d}x, $$

thus, to prove Proposition 2 (i) it is enough to show that

$$ (i)\ {\int}_{-\infty }^{0} \left( F(x)-F_{h}(x)\right)^{2} \mathrm{d}x<+\infty\quad \text{and} \quad (ii)\ {\int}_{0}^{+\infty } \left( (1-F(x))-(1- F_{h}(x))\right)^{2} \mathrm{d}x<+\infty. $$

Proof of (ii): To prove (ii) it is enough to show that

$$ (a)\ {\int}_{0}^{+\infty } \left( 1-F(x)\right)^{2} \mathrm{d}x<+\infty\quad \text{and} \quad (b)\ {\int}_{0}^{+\infty } \left( 1-F_{h}(x)\right)^{2} \mathrm{d}x<+\infty. $$

We have

$$ {\int}_{0}^{+\infty } \left( 1-F_{h}(x)\right)^{2} \mathrm{d}x\leq {\int}_{0}^{+\infty } \left( 1-F_{h}(x)\right) \mathrm{d}x<+\infty $$
(A.8)

because of Lemma 1 and Lemma 2. Hence (b) is true. To prove (a), it is enough to show that if F is the Cauchy cdf, we have:

$$ 1-F(x)=\frac{1}{\pi x} +o\left( \frac{1}{x}\right)\quad \text{as}\quad x\xrightarrow{\quad \quad} +\infty. $$
(A.9)

We have

$$ F(x)=\frac{1}{\pi} \arctan(x)+\frac{1}{2}. $$

For x > 0, it is well known that

$$ \arctan(x) +\arctan\left( \frac{1}{x}\right)=\frac{\pi}{2}. $$

It follows that

$$ \begin{array}{@{}rcl@{}} && \arctan(x)=\frac{\pi}{2}-\arctan\left( \frac{1}{x}\right)\\ &\implies & F(x)= \frac{1}{2} -\frac{1}{\pi}\arctan\left( \frac{1}{x}\right) +\frac{1}{2}\\ &\implies & 1- F(x)=\frac{1}{\pi}\arctan\left( \frac{1}{x}\right) \end{array} $$

hence, Eq. A.9 follows from a first-order Taylor expansion of arctan(1/x). Using the fact that

$$ \arctan(x) +\arctan\left( \frac{1}{x}\right)=-\frac{\pi}{2}\quad\text{for}\quad x<0, $$

the proof of (i) follows the same steps as that of (ii).
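Expansion (A.9) can be confirmed numerically: for the Cauchy cdf, x(1 − F(x)) → 1/π, i.e., π x (1 − F(x)) → 1 as x → +∞.

```python
import numpy as np

F = lambda x: np.arctan(x) / np.pi + 0.5  # Cauchy cdf

for x in [10.0, 100.0, 1000.0]:
    print(x, (1 - F(x)) * np.pi * x)  # ratio tends to 1
```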

Proof of Proposition 2 (ii)

We have:

$$ \begin{array}{@{}rcl@{}} ISE(F_{h},F)&= &\int (F_{h}(x)-F(x))^{2} \mathrm{d}x\\ &\ge & {\int}^{+\infty }_{1} (F_{h}(x)-F(x))^{2} \mathrm{d}x \\ &= & {\int}^{+\infty }_{1} \left( (1-F(x))-(1- F_{h}(x)) \right)^{2} \mathrm{d}x. \end{array} $$

Thus the proof of Proposition 2 (ii) will be complete if we prove that:

$$ {\int}^{+\infty }_{1} \left( (1-F(x))-(1- F_{h}(x)) \right)^{2} \mathrm{d}x=\ +\infty. $$
(A.10)

Next, set \(J=[1,\ +\infty )\) and

$$ L^{2}(J)=\{ g:J\longrightarrow {\mathbb{R}} \quad \mid \quad {\int}^{+\infty }_{1} g(x)^{2}\mathrm{d}x<+\infty \}. $$

It is well-known that L2(J) is a vector space, hence to prove (A.10) it is enough to show that

$$ 1-F_{h}\in L^{2}(J)\quad \text{and} \quad 1-F \notin L^{2}(J). $$

Equation (A.8) shows that 1 − FhL2(J). When \(F(x)=1 -\frac {1}{ax^{a}}\) for x > 1 (the Pareto cdf) we have:

$$ {\int}^{+\infty }_{1} \left( 1-F(x) \right)^{2} \mathrm{d}x=\frac{1}{a^{2}} {\int}^{+\infty }_{1} \frac{1}{x^{2a}} \mathrm{d}x=+\infty\quad \text{if}\quad 0<a\leq \frac{1}{2} $$

and this ends the proof of Proposition 2 (ii).
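The dichotomy behind Proposition 2 (ii) can be illustrated numerically. With F(x) = 1 − 1/(a x^a) for x > 1, the truncated integral ∫₁ᵀ (1 − F(x))² dx equals 4 log T when a = 1/2 (unbounded in T) and 1 − 1/T when a = 1 (bounded by 1). The helper name below is ours.

```python
import numpy as np
from scipy.integrate import quad

def truncated_tail_sq(a, T):
    # integral_1^T (1 - F(x))^2 dx with the Pareto cdf F(x) = 1 - 1/(a * x**a)
    return quad(lambda x: (1.0 / (a * x**a)) ** 2, 1.0, T, limit=200)[0]

for T in [1e2, 1e4, 1e6]:
    print(T, truncated_tail_sq(0.5, T), truncated_tail_sq(1.0, T))
# a = 0.5 grows like 4 * log(T); a = 1 stays below 1
```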


Cite this article

Youndjé, É. L1 Properties of the Nadaraya Quantile Estimator. Sankhya A 84, 867–884 (2022). https://doi.org/10.1007/s13171-020-00225-0

