Abstract
Arriaza et al. (Metrika 82:99–124, 2019) introduced the right and left shape functions, which enjoy interesting properties in terms of describing the global form of a distribution. This paper proposes and studies nonparametric estimators of those functions. The estimators involve nonparametric estimation of the quantile and density functions. Pointwise and uniform consistency, as well as the limit in law, are established under general regularity assumptions. Simulations are included to study the practical performance of the proposed estimators. The analysis of a real data set illustrates the methodology.
1 Introduction
Arriaza et al. (2019) have introduced two functions, the left shape function and the right shape function, which, in some stochastic sense, synthesize the form of the distribution and can be employed to study the behavior of the tails and the symmetry of a random variable. Specifically, let X be an absolutely continuous random variable with probability density function f and distribution function F. For each \(u \in (0,1)\), let
The left shape function and the right shape function of X are defined as
and
respectively, provided that the expectations exist, where \(x^-=\max \{0,-x\}\) and \(x^+=\max \{0,x\}\), \(\forall x \in {\mathbb {R}}\). Since \(L_X(u)=R_{-X}(1-u)\), \(\forall u \in (0,1)\) (see Lemma 2.2 in Arriaza et al. 2019), from now on we will restrict our attention to the right shape function, \(R_X\). To simplify notation, the subscript X will be suppressed from the right shape function, so, from now on, we write R(u) instead of \(R_X(u)\) when there is no possibility of confusion.
Notice that if \({\mathbb {E}}|X|<\infty \) and f is bounded, then R(u) is a well-defined quantity for each \(u\in (0,1)\) since
where M is a positive constant. Moreover, R is a positive and strictly decreasing function with \(\displaystyle \lim _{u \rightarrow 1^-}R(u)=0\) (see Remark 2.4 in Arriaza et al. 2019). So we define \(R(1)=0\) and, if the limit exists, \(R(0)=\displaystyle \lim _{u \rightarrow 0^+}R(u)\).
The right shape function has several remarkable properties. For example, the limit when u approaches 1 of the quotient of the right shape functions of two random variables provides useful information on the relative behavior of their residual Rényi entropies of order 2, a measure of interest in reliability and other fields; if \({\mathcal {F}}\) is a location-scale family of distribution functions, that is,
for some fixed distribution function \(F_0\), then for any random variables X and Y with distribution function in \({\mathcal {F}}\), we have that \(R_X(u)=R_Y(u)\), \(\forall u \in (0,1)\); in other words, the right shape function characterizes location-scale families; among many other properties (see Arriaza et al. 2019). Moreover, if
then \(S_X(u)=S_Y(u)\), \(\forall u \in (0,1)\), for all X and Y with distribution function in \({\mathcal {F}}\), and \(S_X(u)=0\), \(\forall u \in (0,1)\), if and only if the distribution of X is symmetric. These properties can be used to make inferences. For example, since R characterizes location-scale families, it may be used to build goodness-of-fit tests of these families. A key step towards the development of statistical procedures based on the right shape function is the study of an estimator of such function. This is precisely the objective of this paper.
Remark 1
The definition of the function S (as before, the subscript X will be skipped when there is no possibility of confusion) is slightly different from that given in Arriaza et al. (2019), which is \({S^{\textrm{Arr}}(u)}=R_{-X}(1-u)-R_{X}(1-u)\). Both definitions are closely related: \(S(u)=0\), \(\forall u \in (0,1)\), if and only if \(S^{\textrm{Arr}}(u)=0\), \(\forall u \in (0,1)\), and \(S(u) \geqslant 0\), \(\forall u \in (0,1)\), if and only if \(S^{\textrm{Arr}}(u) \leqslant 0\), \(\forall u \in (0,1)\).
Let \(X_1, \ldots , X_n\) be a random sample from X, that is, \(X_1, \ldots , X_n\) are independent with the same distribution as X. Since
where \(\int \) stands for the integral on the whole real line, to estimate R(u) we propose to replace F with the empirical distribution function,
where \(\textbf{1}\{ \cdot \}\) denotes the indicator function (that is, \(\textbf{1}\{ X_i \leqslant x \}=1\) if \(X_i \leqslant x \) and \(\textbf{1}\{ X_i \leqslant x \}=0\) if \(X_i > x \)), and f with a kernel estimator
where \({\hat{h}}\) is the bandwidth and K is a kernel. We take \({\hat{h}}={\hat{\sigma }}\times g(n)\), where \({\hat{\sigma }}={\hat{\sigma }}(X_1, \ldots , X_n)\) is an estimator of \(\sigma =\sigma (X)\), a spread measure of X, both of them satisfying \(\sigma (aX+b)=|a|\sigma (X)\) and \({\hat{\sigma }}(aX_1+b, \ldots , aX_n+b)=|a|{\hat{\sigma }}(X_1, \ldots , X_n)\), \(\forall a,\, b \in {\mathbb {R}}\), and g is a decreasing function. Further assumptions on \({\hat{h}}\) and K will be specified later. For \(u\in (0,1)\), the empirical quantile function, \(F_n^{-1}(u)\), is defined as follows
where \(X_{1:n} \leqslant \cdots \leqslant X_{n:n}\) denote the order statistics.
Therefore, we consider the following plug-in estimator of R(u),
Notice that
and thus \(R_n\) is a piecewise constant function. Observe that \(R_n(1)=0\), \(\forall n\). The behavior of \(R_n\) at \(u=1\) is consistent since, as seen before, \(\displaystyle \lim _{u \rightarrow 1^-}R(u)=0\). Observe also that, by construction, \(R_n(u)=R_n(u; X_1, \ldots , X_n)= R_n(u; aX_1+b, \ldots , aX_n+b)\), \(\forall a>0\), \(b \in {\mathbb {R}}\); thus \(R_n(u)\) is location-scale invariant, the same as R(u).
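Assembling the pieces above — the empirical quantile \(F_n^{-1}(u)=X_{\lceil nu \rceil :n}\) and a kernel density estimate with bandwidth \({\hat{h}}=sd\times n^{-\tau }\) — a minimal sketch of the plug-in estimator might look as follows. This is an illustration, not the paper's code (the paper's computations are in R); the sample-average form \(R_n(u)=\frac{1}{n}\sum _i \{X_i-F_n^{-1}(u)\}^+\,{\hat{f}}_n(X_i)\) is inferred from the plug-in construction and the expression of \(W_i(u)\) in Sect. 3, and the Epanechnikov kernel rescaled to unit variance matches the choice made in Sect. 4.

```python
import numpy as np

SQRT5 = np.sqrt(5.0)

def epanechnikov(t):
    # Epanechnikov kernel rescaled to unit variance; support [-sqrt(5), sqrt(5)]
    return np.where(np.abs(t) <= SQRT5, 0.75 / SQRT5 * (1.0 - (t / SQRT5) ** 2), 0.0)

def f_hat(x, sample, h):
    # kernel density estimator \hat f_n evaluated at the points x
    return epanechnikov((np.asarray(x)[:, None] - sample[None, :]) / h).mean(axis=1) / h

def R_n(u, sample, tau=0.45):
    # plug-in estimator: empirical quantile F_n^{-1}(u) = X_{ceil(nu):n},
    # kernel density with data-driven bandwidth \hat h = sd * n^{-tau}
    n = len(sample)
    h = sample.std(ddof=1) * n ** (-tau)
    q = np.sort(sample)[int(np.ceil(n * u)) - 1]   # F_n^{-1}(u)
    return np.mean(np.maximum(sample - q, 0.0) * f_hat(sample, sample, h))

rng = np.random.default_rng(1)
x = rng.uniform(size=500)
# for U(0,1), the population value is R(0.5) = 0.5 * (1 - 0.5)^2 = 0.125 (Sect. 4.1)
print(R_n(0.5, x))
# location-scale invariance (positive scale): R_n unchanged under x -> a x + b
print(np.isclose(R_n(0.5, x), R_n(0.5, 3.0 * x + 7.0)))
```

The invariance holds exactly here because the order statistics, the sample standard deviation and hence \({\hat{h}}\) all transform equivariantly under \(x\mapsto ax+b\) with \(a>0\).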
Analogously, we consider the following plug-in estimator of S(u),
where \(R_{X,n}(u)\) stands for \(R_n(u)\), the estimator of \(R_X(u)\) calculated from the sample \(X_1, \ldots , X_n\), and \(R_{-X,n}(u)\) stands for the estimator of \(R_{-X}(u)\) calculated from the sample \(-X_1, \ldots , -X_n\).
Section 5 of Arriaza et al. (2019) proposes another estimator of R(u) that consists of replacing both f(x) and F(x) with kernel estimators \(f_n(x)\) and \({\hat{F}}_n(x)=\int _{-\infty }^x f_n(y)dy\), respectively. No properties of the resulting estimators of R and S were studied there. A main drawback of such estimators is that they do not have an easily computable expression, and must be approximated numerically.
The paper unfolds as follows. Section 2 derives asymptotic properties related to the pointwise and uniform consistency of the proposed estimator of the right shape function. Several results are given under different regularity assumptions. In Sect. 3 results related to the pointwise asymptotic normality and global weak convergence are detailed. In Sect. 4, a simulation study and an application to a real data set illustrate the practical performance of the estimator. This section also contains an application to a goodnessoffit testing problem for a locationscale family that can be solved by employing the proposed estimator. All computations have been programmed and run in R (R Core Team 2020). Some conclusions and further research possibilities are discussed in Sect. 5. Finally, all proofs are deferred to Sect. 6.
Throughout the paper it will be tacitly assumed that X is an absolutely continuous random variable with cumulative distribution function F and bounded probability density function f; all limits are taken when \(n \rightarrow \infty \), where n denotes the sample size; \({\mathop {\longrightarrow }\limits ^{{\mathcal {L}}}}\) stands for the convergence in law; \({\mathop {\longrightarrow }\limits ^{P}}\) stands for the convergence in probability; \({\mathop {\longrightarrow }\limits ^{a.s.}}\) stands for the almost sure convergence; for a function \(w: (a,b) \subseteq {\mathbb {R}} \mapsto {\mathbb {R}}\) and \(x \in (a,b]\), \(w(x^-)\) denotes the one-sided limit \(\displaystyle \lim _{y \rightarrow x^-}w(y)\); \(O_P(1)\) refers to a stochastic sequence bounded in probability and \(o_P(1)\) refers to a stochastic sequence that converges to zero in probability; the kernel function \(K:{\mathbb {R}} \mapsto {\mathbb {R}}\) is a probability density function satisfying some of the following assumptions:
Assumption 1

(i)
K has compact support and is Lipschitz continuous.

(ii)
K has bounded variation.

(iii)
K is symmetric, \(K(x)=K(-x)\), \(\forall x \in {\mathbb {R}}\).

(iv)
The support of K is [c, d], for some \(-\infty<c<d<\infty \), \(K(c)=K(d)=0\), and K is twice differentiable on (c, d) with bounded derivatives \(K'\) and \(K''\).
The bandwidth \(h=\sigma \times g(n)\) will be assumed to satisfy some of the following assumptions:
Assumption 2

(i)
\(h \rightarrow 0\) and \(\sum _{n\geqslant 1}\exp \{-\varepsilon n h^2\}<\infty \), \(\forall \varepsilon >0\).

(ii)
\(h \rightarrow 0\), \(nh \rightarrow \infty \), \(nh^4 \rightarrow 0\).

(iii)
\(h \rightarrow 0\), \(nh^2 \rightarrow \infty \), \(nh^4 \rightarrow 0\).
In Assumption 2, notice that (iii) is stronger than (ii). On the other hand, the condition \(\sum _{n\geqslant 1}\exp \{-\varepsilon n h^2\}<\infty \) in (i) implies \(nh^2 \rightarrow \infty \) in (iii), but does not entail \(nh^4 \rightarrow 0\).
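As a worked example (added here for illustration), the power-law bandwidths used later in Sect. 4, \(h=\sigma n^{-\tau }\) with \(1/4<\tau <1/2\), satisfy all of the above conditions:

```latex
% For h = \sigma n^{-\tau} with 1/4 < \tau < 1/2:
nh = \sigma\,n^{1-\tau} \rightarrow \infty, \qquad
nh^{2} = \sigma^{2}\,n^{1-2\tau} \rightarrow \infty, \qquad
nh^{4} = \sigma^{4}\,n^{1-4\tau} \rightarrow 0,
% and, since 1 - 2\tau > 0, the exponential terms decay faster than any power of n:
\sum_{n \geqslant 1} \exp\{-\varepsilon n h^{2}\}
  = \sum_{n \geqslant 1} \exp\{-\varepsilon \sigma^{2}\, n^{1-2\tau}\} < \infty,
\quad \forall \varepsilon > 0.
```

Hence Assumption 2 (i), (ii) and (iii) all hold, consistently with the choices \(\tau =0.35,\ldots ,0.49\) used in the simulations of Sect. 4.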
2 Almost sure limit
The next theorem gives the almost sure limit of \(R_n(u)\), for each \(u \in (0,1)\).
Theorem 1
Suppose that \({\mathbb {E}}|X|<\infty \), that \({\hat{\sigma }} {\mathop {\longrightarrow }\limits ^{a.s.}} \sigma >0\), that f is uniformly continuous, that K satisfies Assumption 1 (i) and that h satisfies Assumption 2 (i). Let \(u \in (0,1)\). Suppose that there is a unique solution in x of \(F(x^-) \leqslant u \leqslant F(x)\). Then, \(R_n(u) {\mathop {\longrightarrow }\limits ^{a.s.}} R(u)\).
Let (a, b) denote the support of F, that is, \(a=\sup \{x\,:\, F(x)=0\}\) and \(b=\inf \{x\,:\, F(x)=1\}\). A key assumption in Theorem 1 to get the a.s. convergence is the uniform continuity of f, necessary to get the uniform convergence of \({\hat{f}}_n\) to f. This assumption may not hold, especially if either \(a>-\infty \) or \(b<\infty \). Nevertheless, if such assumption fails, we can still get the a.s. convergence by using other assumptions, as stated in the next theorem.
Theorem 2
Suppose that f is twice continuously differentiable on (a, b), that \({\mathbb {E}}(X^2)<\infty \), \({\mathbb {E}}\{f'(X)^2\}<\infty \), \({\mathbb {E}}\{f''(X)^2\}<\infty \) and \({\mathbb {E}}\{X^2f''(X)^2\}<\infty \), that \({\hat{\sigma }} {\mathop {\longrightarrow }\limits ^{a.s.}} \sigma >0\), that K satisfies Assumption 1 (i) and (iii) and that h satisfies Assumption 2 (i) and (ii). Let \(u \in (0,1)\). Suppose that there is a unique solution in x of \(F(x^-) \leqslant u \leqslant F(x)\). Then, \(R_n(u) {\mathop {\longrightarrow }\limits ^{a.s.}} R(u)\).
In general, it is not possible to get the uniform a.s. convergence of \(R_n\) to R because the convergence of the empirical quantile function to the population quantile function is not uniform, unless F has finite support. In such a case, the next theorem shows that we also have the uniform convergence of \(R_n\) to R.
Theorem 3
Suppose that \(-\infty<a<b<\infty \), that f is continuous on (a, b) and that \(\displaystyle \inf _{0 \leqslant u \leqslant 1} f\left( F^{-1}(u)\right) >0\). Suppose also that K satisfies Assumption 1 (i), that h satisfies Assumption 2 (i), and that \({\hat{\sigma }} {\mathop {\longrightarrow }\limits ^{a.s.}} \sigma >0\). Then,
Let \(w:[0,1] \mapsto {\mathbb {R}}\) be a measurable positive function and let \(L^2(w)\) denote the separable Hilbert space of (equivalence classes of) measurable functions \(f:[0,1] \mapsto {\mathbb {R}}\) satisfying \(\int _0^1 f(u)^2 w(u)du<\infty \). The scalar product and the resulting norm in \(L^2(w)\) will be denoted by \( \langle f, g \rangle _{w}=\int _0^1 f(u)g(u) w(u)du\) and \(\Vert f \Vert _{w}=\sqrt{ \langle f, f \rangle _{w}}\), respectively. If \(w(u)=1\), \(0 \leqslant u \leqslant 1\), then we simply write \(L^2\), \(\langle \cdot , \cdot \rangle \) and \(\Vert \cdot \Vert \) for \(L^2(w)\), \(\langle \cdot , \cdot \rangle _{w}\) and \(\Vert \cdot \Vert _{w}\), respectively.
As said before, in general, it is not possible to obtain the uniform convergence of \(R_n\) to R. However, under quite general assumptions, it can be shown that \(R_n\) converges to R in \(L^2(w)\). A first result in this sense is given in the following theorem.
Theorem 4
Suppose that \({\mathbb {E}}(X^2)<\infty \), that \({\hat{\sigma }} {\mathop {\longrightarrow }\limits ^{a.s.}} \sigma >0\), that f is uniformly continuous, that K satisfies Assumption 1 (i) and that h satisfies Assumption 2 (i). Then \(R \in L^2\) and \(\Vert R_n-R\Vert {\mathop {\longrightarrow }\limits ^{a.s.}} 0.\)
It readily follows that if \(w:[0,1] \mapsto {\mathbb {R}}\) is a measurable bounded positive function, then the statement in the previous theorem also holds in \(L^2(w)\).
Corollary 1
Let \(w:[0,1] \mapsto {\mathbb {R}}\) be a measurable bounded positive function. Suppose that the assumptions in Theorem 4 hold. Then \(R \in L^2(w)\) and \(\Vert R_n-R\Vert _w {\mathop {\longrightarrow }\limits ^{a.s.}} 0.\)
The uniform continuity of f can be replaced with other assumptions.
Theorem 5
Let \(w:[0,1] \mapsto {\mathbb {R}}\) be a measurable bounded positive function. Suppose that f is twice continuously differentiable on (a, b), that \({\mathbb {E}}(X^2)<\infty \), \({\mathbb {E}}\{f'(X)^2\}<\infty \), \({\mathbb {E}}\{f''(X)^2\}<\infty \) and \({\mathbb {E}}\{X^2f''(X)^2\}<\infty \), that \({\hat{\sigma }} {\mathop {\longrightarrow }\limits ^{a.s.}} \sigma >0\), that K satisfies Assumption 1 (i) and (iii) and that h satisfies Assumption 2 (i) and (ii). Then \(R \in L^2(w)\) and \(\Vert R_n-R\Vert _w {\mathop {\longrightarrow }\limits ^{a.s.}} 0.\)
Remark 2
The assumptions in the statements of the previous asymptotic results exclude the optimal rate of the bandwidth for density estimation, to wit, \(n^{-1/5}\). Notice, however, that the objective here differs from the mere estimation of the density. This situation is not uncommon in the nonparametric literature. For instance, when the target is to estimate a distribution function using the kernel method, Azzalini (1981) showed that the optimal bandwidth is of order \(n^{-1/3}\). Other examples can be found in Pardo-Fernández et al. (2015) and Pardo-Fernández and Jiménez-Gamero (2019).
Remark 3
All properties studied so far were stated for \(R_n\) as an estimator of R. Clearly, these properties carry over to \(S_n\) as an estimator of S; the corresponding statements are not given to save space. The finite sample performance of \(R_n(u)\) and \(S_n(u)\) as estimators of R(u) and S(u), respectively, will be numerically studied in Sect. 4 for data coming from a uniform distribution.
3 Weak limit
We first study the weak limit of \(\sqrt{n}\left\{ R_{n}(u)-R(u)\right\} \) at each \(u\in (0,1)\).
Theorem 6
Suppose that f is twice continuously differentiable on (a, b), that \({\mathbb {E}}(X^2)<\infty \), \({\mathbb {E}}\{f'(X)^2\}<\infty \), \({\mathbb {E}}\{f''(X)^2\}<\infty \) and \({\mathbb {E}}\{X^2f''(X)^2\}<\infty \), that \(\sqrt{n}({\hat{\sigma }}-\sigma )=O_P(1)\), that K satisfies Assumption 1 (i), (iii) and (iv), and that h satisfies Assumption 2 (i) and (iii). Let \(u \in (0,1)\) be such that \(f(F^{-1}(u))>0\), then
where
\(1\leqslant i \leqslant n\), with
and therefore
where \(\varrho ^2(u)={\mathbb {E}}\{Y_1(u)^2\}\).
Recall that to estimate R(u) we replaced the population quantile function \(F^{-1}\) with the empirical quantile function \(F_n^{-1}\) and the population density function f with the kernel estimator \({\hat{f}}_n\). Each of these two replacements has an effect on the asymptotic behavior of \(\sqrt{n}\left\{ R_{n}(u)-R(u)\right\} \): (a) the first replacement is responsible for the term \(\frac{\textbf{1}\{X_i \leqslant F^{-1}(u)\}-u }{f(F^{-1}(u))} \mu (u)\) in the expression of \(Y_i(u)\); and (b) the second replacement is responsible for the coefficient 2 in the first part of \(Y_i(u)\). Notice that, under the assumptions made, taking the bandwidth data dependent, \({\hat{h}}={\hat{\sigma }}g(n)\), has no effect on the asymptotic distribution of \(\sqrt{n}\left\{ R_{n}(u)-R(u)\right\} \).
The result in Theorem 6 can be used to construct asymptotic confidence intervals for R(u). Let \({\hat{\varrho }}(u)\) denote any consistent estimator of \({\varrho (u)}\) (see the explanation below for a candidate). If \(z_{v}\) is such that \(\Phi (z_{v})=v\), where \(\Phi \) stands for the cumulative distribution function of the standard normal distribution, then, for a given \(\alpha \in (0,1)\),
is a random confidence interval for R(u) with asymptotic confidence level \(1-\alpha \). If \(Y_1(u), \ldots , Y_n(u)\) were observed, since \(\varrho ^2(u)={\mathbb {E}}\{Y_1(u)^2\}\), which also coincides with the variance of \(Y_1(u)\), \({\mathbb {V}}\{Y_1(u)\}\), one could consistently estimate \(\varrho ^2(u)\) by means of the sample variance of \(Y_1(u), \ldots , Y_n(u)\). The point is that \(Y_1(u), \ldots , Y_n(u)\) depend on unknown quantities. Taking into account that \({\mathbb {V}}\{Y_1(u)\}={\mathbb {V}}\{W_1(u)\}\), where \( W_i(u)=2\{X_i-F^{-1}(u)\}^+ f(X_i)+\textbf{1}\{X_i \leqslant F^{-1}(u)\}\mu (u)/f(F^{-1}(u))\), \(1\leqslant i \leqslant n\), we propose to replace f by \({\hat{f}}_n\) and F by \(F_n\) in the expression of \(W_1(u), \ldots , W_n(u)\), giving rise to
where
and then estimate \(\varrho ^2(u)\) by means of the sample variance of \({\hat{W}}_1(u), \ldots , {\hat{W}}_n(u)\), which we denote by \({\hat{\varrho }}^2(u)\). The finite sample performance of the confidence interval in (2), as well as the goodness of \({\hat{\varrho }}^2(u)/n\) as an approximation to the variance of \(R_n(u)\), will be examined in Sect. 4 for data coming from a uniform distribution.
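The construction just described can be sketched as follows (Python rather than the paper's R). The plug-in \({\hat{\mu }}(u)=\frac{1}{n}\sum _j \textbf{1}\{X_j>F_n^{-1}(u)\}{\hat{f}}_n(X_j)\) used below is an assumption of this sketch: the exact definition of \(\mu (u)\) appears in the displayed equations of Theorem 6, not reproduced here.

```python
import numpy as np

SQRT5 = np.sqrt(5.0)

def epanechnikov(t):
    # Epanechnikov kernel rescaled to unit variance
    return np.where(np.abs(t) <= SQRT5, 0.75 / SQRT5 * (1.0 - (t / SQRT5) ** 2), 0.0)

def f_hat(x, sample, h):
    return epanechnikov((np.asarray(x)[:, None] - sample[None, :]) / h).mean(axis=1) / h

def ci_R(u, sample, tau=0.45):
    """Asymptotic 95% CI for R(u): R_n(u) +/- z_{0.975} * rho_hat(u) / sqrt(n)."""
    n = len(sample)
    h = sample.std(ddof=1) * n ** (-tau)
    q = np.sort(sample)[int(np.ceil(n * u)) - 1]   # F_n^{-1}(u)
    fi = f_hat(sample, sample, h)                  # \hat f_n(X_i)
    Rn = np.mean(np.maximum(sample - q, 0.0) * fi)
    # plug-in W_i: f replaced by \hat f_n, F by F_n; \hat mu(u) is an assumed plug-in
    mu = np.mean((sample > q) * fi)
    fq = f_hat([q], sample, h)[0]                  # \hat f_n(F_n^{-1}(u))
    W = 2.0 * np.maximum(sample - q, 0.0) * fi + (sample <= q) * mu / fq
    rho = W.std(ddof=1)                            # \hat rho(u): sample sd of \hat W_i(u)
    z = 1.959963984540054                          # Phi^{-1}(0.975)
    half = z * rho / np.sqrt(n)
    return Rn - half, Rn + half

rng = np.random.default_rng(2)
x = rng.uniform(size=1000)
lo, hi = ci_R(0.5, x)
print(lo, hi)   # should bracket R(0.5) = 0.125 roughly 95% of the time
```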
Finally, we study the convergence in law of \(n\Vert R_n-R\Vert _w^2\), for some adequate w. In general, \(n\Vert R_n-R\Vert ^2\) does not possess a weak limit unless rather strong assumptions on F are imposed. This is because the derivations needed to study such a limit involve those of \(\sqrt{n}\{F_n^{-1}-F^{-1}\}\) in \(L^2\), and therefore it inherits the same limitations (see, for example, del Barrio et al. 2000, 2005). A convenient way to overcome these difficulties is to consider, instead of \(\Vert \cdot \Vert ^2\), the norm in \(L^2(w)\), with \(w(u)=f^2(F^{-1}(u))\). This weight function is taken for analytical convenience. It may seem a bit odd, because f (and hence F) is unknown in practical applications. Nevertheless, as stated in the Introduction, \(n\Vert R_n-R\Vert _w^2\) could be used as a test statistic for testing goodness-of-fit to a location-scale family (1), and in such a case, under the null hypothesis, \(f(F^{-1}(u))=\frac{1}{\varsigma }f_0(F_0^{-1}(u))\), so we can take \(w(u)=f_0^2(F_0^{-1}(u))\), which is known in that testing framework. Other weight functions will be discussed later on.
We first show that, under some conditions, the linear approximation for \(\sqrt{n}\{R_{n}(u)-R(u)\} \) given in Theorem 6 for a fixed \(u\in (0,1)\) is valid for all u in certain intervals, in the \(L^2(w)\) sense.
Theorem 7
Suppose that f is twice continuously differentiable on (a, b), that \({\mathbb {E}}(X^2)<\infty \), \({\mathbb {E}}\{f'(X)^2\}<\infty \), \({\mathbb {E}}\{f''(X)^2\}<\infty \) and \({\mathbb {E}}\{X^2f''(X)^2\}<\infty \), that \(\sqrt{n}({\hat{\sigma }}-\sigma )=O_P(1)\), that K satisfies Assumption 1 (i), (iii) and (iv), and that h satisfies Assumption 2 (i) and (iii). Suppose also that \(f\left( F^{-1}(u)\right) >0\), \(u \in (0,1)\), and
for some finite \(\gamma >0\). Let
Suppose also that if \(A = 0\) then f is nondecreasing on an interval to the right of a. Let \(w(u)=f^2(F^{-1}(u))\), then
Corollary 2
Under the assumptions in Theorem 7, \(n\Vert R_n-R\Vert _w^2 {\mathop {\longrightarrow }\limits ^{{\mathcal {L}}}} \Vert Z\Vert ^2_w\), where \(\{Z(u),\, 0 \leqslant u \leqslant 1\}\) is a zero-mean Gaussian process on \(L^2(w)\) with \(Z(1)=0\) and covariance function \(cov \{Z(u),\, Z(s)\}={\mathbb {E}}\{ Y_1(u) \, Y_1(s) \}\), \(u, s \in (0,1)\).
From the proof of Theorem 7 and Theorem 4.6 (i) of del Barrio et al. (2005), the results in Theorem 7 and Corollary 2 remain true for any bounded weight function w satisfying
Although this result may seem more general than those stated in Theorem 7 and Corollary 2, notice that the choice of an adequate weight function requires a strong knowledge of f.
As observed after Theorem 6, the replacement of the population quantile function \(F^{-1}\) with the empirical quantile function \(F_n^{-1}\) in the expression of R to build the estimator \(R_n\) is responsible for the term \(\frac{\textbf{1}\{X_i \leqslant F^{-1}(u)\}-u }{f(F^{-1}(u))} \mu (u)\) in the expression of \(Y_i(u)\), which makes \(Y_i(u)\) inherit the properties and difficulties of estimating the quantile function. So, one may wonder if there is any advantage in using the right (left) shape function instead of the quantile function. From a theoretical point of view the answer is yes: in order to use the quantile function one must assume certain conditions on both the right tail and the left tail of the distribution, while if one uses the right (left) shape function then only assumptions on the left (right) tail are necessary. This is because both \(R_n(u)\) and R(u) (respectively, \(L_n(u)\) and L(u)) go to 0 as \(u\uparrow 1\) (respectively, \(u\downarrow 0\)).
Remark 4
The properties studied in this section for \(R_n-R\) are inherited by \(S_n-S\). Specifically, let \(u \in (0,1)\), then
is an asymptotic confidence interval for S(u), where \({\hat{\varrho }}_S^2(u)\) is the sample variance of \(V_1(u), \ldots , V_n(u)\), with \(V_i(u)={\hat{W}}_{X,i}(u)-{\hat{W}}_{-X,i}(u)\), where \({\hat{W}}_{X,i}(u)\) are the quantities defined in (3) calculated on the sample \(X_1, \ldots , X_n\), and \({\hat{W}}_{-X,i}(u)\) are the quantities defined in (3) calculated on \(-X_1, \ldots , -X_n\), \(i=1,\ldots ,n\). Notice that if 0 does not belong to the confidence interval (4), one may conclude that the law of X is not symmetric.
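To illustrate, the following sketch computes \(S_n(u)=R_{X,n}(u)-R_{-X,n}(u)\) by applying the estimator \(R_n\) to \(X_1,\ldots ,X_n\) and to \(-X_1,\ldots ,-X_n\). The comparison between a symmetric (uniform) and an asymmetric (exponential) sample is an illustration added here, not an experiment from the paper; Python is used instead of R.

```python
import numpy as np

SQRT5 = np.sqrt(5.0)

def epanechnikov(t):
    # Epanechnikov kernel rescaled to unit variance
    return np.where(np.abs(t) <= SQRT5, 0.75 / SQRT5 * (1.0 - (t / SQRT5) ** 2), 0.0)

def R_n(u, sample, tau=0.45):
    # plug-in estimator of the right shape function
    n = len(sample)
    h = sample.std(ddof=1) * n ** (-tau)
    q = np.sort(sample)[int(np.ceil(n * u)) - 1]   # F_n^{-1}(u)
    fi = epanechnikov((sample[:, None] - sample[None, :]) / h).mean(axis=1) / h
    return np.mean(np.maximum(sample - q, 0.0) * fi)

def S_n(u, sample):
    # S_n(u) = R_{X,n}(u) - R_{-X,n}(u)
    return R_n(u, sample) - R_n(u, -sample)

rng = np.random.default_rng(3)
sym = rng.uniform(size=2000)        # symmetric law: S(u) = 0 for all u
skew = rng.exponential(size=2000)   # asymmetric law: S(u) != 0
print(S_n(0.3, sym), S_n(0.3, skew))
```

For the symmetric sample \(S_n(0.3)\) should be close to 0, while for the exponential sample it should be clearly away from 0, in line with the symmetry characterization of S stated in the Introduction.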
4 Some numerical illustrations
4.1 Estimation of R and S
If X has a uniform distribution on the interval (a, b), \(X\sim U(a,b)\), then \(R(u)=0.5(1-u)^2\) (see Table 1 in Arriaza et al. 2019). For several values of n, we have generated 10,000 random samples with size n from a distribution U(0, 1). For each sample, we have estimated R using \(R_n\), taking as K the Epanechnikov kernel (scaled so that it has variance 1) and \(h=sd\times n^{-\tau }\), with \(\tau =0.35, 0.40, 0.45, 0.49\), and \(sd^2\) denoting the sample variance. Notice that for these choices of h Assumption 2 (i), (ii) and (iii) are met. Tables 1 and 2 show the value of R(u), the bias and the standard deviation of the values of \(R_n(u)\), the mean of the standard deviation estimator \(\hat{\varrho }(u)/\sqrt{n}\) (recall that it is based on asymptotic arguments), and the coverage of the confidence interval (2) calculated at the nominal level 95%, for \(u=0.1, \ldots , 0.9\) and \(n=100, \,250, \,500, \,1000\). Figure 1 displays the graph of 1000 estimations for \(\tau =0.45\) in grey, together with the population shape function in black. Looking at these tables and figure, we see that the bias and the variance become smaller as u approaches 1; the standard deviation estimator is, on average, a bit larger than the true standard deviation, especially for smaller sample sizes; and the bias depends on the values of \(\tau \) and u, being negative for smaller values of \(\tau \) and for larger values of u. The coverage of the confidence interval also depends on the values of \(\tau \) and u: it is rather poor for smaller values of \(\tau \) and larger values of u, because in such cases the bias is non-negligible in relation to the standard deviation estimator. As expected from Theorem 3, the differences between \(R_n(u)\) and R(u) become smaller as n increases, uniformly in u.
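A scaled-down version of this experiment (far fewer replications, a coarse u grid, Python instead of R) can be sketched as follows; it reproduces the qualitative finding that both bias and standard deviation shrink as u approaches 1.

```python
import numpy as np

SQRT5 = np.sqrt(5.0)

def epanechnikov(t):
    # Epanechnikov kernel rescaled to unit variance
    return np.where(np.abs(t) <= SQRT5, 0.75 / SQRT5 * (1.0 - (t / SQRT5) ** 2), 0.0)

def R_n(u, sample, tau=0.45):
    n = len(sample)
    h = sample.std(ddof=1) * n ** (-tau)
    q = np.sort(sample)[int(np.ceil(n * u)) - 1]
    fi = epanechnikov((sample[:, None] - sample[None, :]) / h).mean(axis=1) / h
    return np.mean(np.maximum(sample - q, 0.0) * fi)

def R_true(u):
    return 0.5 * (1.0 - u) ** 2      # right shape function of U(a, b)

rng = np.random.default_rng(4)
reps, n = 200, 100
grid = [0.2, 0.5, 0.8]
est = np.empty((reps, len(grid)))
for r in range(reps):
    x = rng.uniform(size=n)          # one sample per replication
    est[r] = [R_n(u, x) for u in grid]

bias = est.mean(axis=0) - np.array([R_true(u) for u in grid])
sd = est.std(axis=0, ddof=1)
print("bias:", bias)                 # expected to be small at every u
print("sd:  ", sd)                   # expected to decrease as u grows
```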
A similar experiment was carried out for the estimation of S, whose results are summarized in Tables 3 and 4 and Fig. 2. Looking at these tables and figure, we see that the bias is quite small in all cases; the variance becomes smaller as u approaches 1 and also decreases with n; and the standard deviation estimator is, on average, a bit larger than the true standard deviation, with differences becoming smaller as n increases. As a consequence, the coverage of the confidence intervals is larger than the nominal value, especially for small sample sizes.
4.2 Glass fibre breaking strengths
As an illustration, we analyse a real data set already considered in Arriaza et al. (2019), which had been previously introduced by Smith and Naylor (1987). The set consists of 63 observations of the breaking strength of glass fibres of length 1.5 cm, collected by the National Physical Laboratory in England (for more details about the data set, see Smith and Naylor 1987). The left panel of Fig. 3 depicts the histogram and the kernel density estimator obtained from the data. As discussed in Arriaza et al. (2019), this exploratory analysis suggests a certain negative skewness in the distribution. The right panel of Fig. 3 displays the estimator \(S_n(u)\), \(u \in (0,1)\), with \({\hat{h}}=sd\times n^{-0.45}\) and taking as K the Epanechnikov kernel (scaled so that it has variance 1). Other values of \({\hat{h}}\) have been investigated, with similar results. The graph also displays the confidence intervals in (4) for S(u) calculated at the nominal level 90% for \(u=0.1, \ldots , 0.9\). The confidence intervals are also detailed in Table 5. Notice that the estimator of the function S tends to lie above the horizontal axis, which indicates asymmetry in the distribution. Moreover, the confidence intervals for S(0.1), S(0.2) and S(0.4) do not contain zero. This conclusion is in agreement with Arriaza et al. (2019).
4.3 An application to testing goodness-of-fit
Now we consider the problem of testing goodness-of-fit to a uniform distribution, that is, we want to test
on the basis of a sample of size n from X. Let \(f_0\) and \(F_0\) denote the probability density function and the cumulative distribution function of the U(0, 1) law, respectively. Then \(w(u)=f_0^2(F_0^{-1}(u))=\textbf{1}\{0 \leqslant u \leqslant 1\}\), and thus \(\Vert \cdot \Vert =\Vert \cdot \Vert _w\). Let R denote the right shape function of X and let \(R_0\) denote the right shape function of a uniform distribution, \(R_0(u)=0.5(1-u)^2\). From Theorem 3, it follows that \(\Vert R_n-R_0\Vert \) converges to \(\Vert R-R_0\Vert \), which is equal to 0 under the null and to a positive quantity under alternatives. Thus, it seems reasonable to consider the test that rejects \(H_0\) for large values of \(T_n=n\Vert R_n-R_0\Vert ^2\). The critical region is \(T_n \geqslant t_{n,\alpha }\), where \( t_{n,\alpha }\) is the upper \(\alpha \) percentile of the null distribution of \(T_n\), whose value can be calculated by simulation by generating data from a U(0, 1) law, since the null distribution of \(T_n\) does not depend on the values of a and b, but only on \(F_0\). Notice that \(T_n\) has the readily computable expression
where
There are many tests in the statistical literature for testing \(H_0\) against \(H_1\), and the objective of this section is not to provide an exhaustive list of such tests, but only to suggest a possible application of the results stated in the previous sections. In our view, this as well as other possible applications deserve further separate research, beyond the scope of this manuscript. Nevertheless, since \(T_n\) is closely related to the Wasserstein distance between \(F_n\) and \(F_0\), which equals the \(L^2\) norm of the difference between the empirical quantile function and the quantile function of \(F_0\), we carried out a small simulation experiment in order to compare the powers of the newly proposed tests and the one based on the Wasserstein distance. To make the Wasserstein distance invariant with respect to location and scale changes, we consider as test statistic \(W_n=n\Vert F_n^{-1}-F_0^{-1}\Vert ^2/{\hat{\sigma }}^2\), with \({\hat{\sigma }}^2\) denoting the sample variance, and reject the null hypothesis for large values of \(W_n\) (see del Barrio et al. 2000). The critical region is \(W_n \geqslant w_{n,\alpha }\), where \( w_{n,\alpha }\) is the upper \(\alpha \) percentile of the null distribution of \(W_n\), whose value can be calculated by simulation by generating data from a U(0, 1) law, since the null distribution of \(W_n\) does not depend on the values of a and b, but only on \(F_0\).
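As a sketch of the resulting procedure (Python instead of the paper's R), the statistic \(T_n\) can be approximated numerically on a grid of u values; the midpoint-rule approximation below is an assumption of this sketch, standing in for the closed-form expression displayed above. The Beta(5, 1) alternative is chosen here purely for illustration, since it lies far from every uniform law.

```python
import numpy as np

SQRT5 = np.sqrt(5.0)

def epanechnikov(t):
    # Epanechnikov kernel rescaled to unit variance
    return np.where(np.abs(t) <= SQRT5, 0.75 / SQRT5 * (1.0 - (t / SQRT5) ** 2), 0.0)

def R_n_curve(us, sample, tau=0.45):
    # evaluate the plug-in estimator R_n at all points of the grid us
    n = len(sample)
    h = sample.std(ddof=1) * n ** (-tau)
    xs = np.sort(sample)
    fi = epanechnikov((sample[:, None] - sample[None, :]) / h).mean(axis=1) / h
    qs = xs[np.minimum(np.ceil(n * us).astype(int) - 1, n - 1)]  # F_n^{-1}(u)
    return np.array([np.mean(np.maximum(sample - q, 0.0) * fi) for q in qs])

def T_n(sample, m=2000):
    # T_n = n * int_0^1 (R_n(u) - R_0(u))^2 du, with R_0(u) = 0.5 (1 - u)^2,
    # approximated by a midpoint rule on m subintervals
    n = len(sample)
    us = (np.arange(m) + 0.5) / m
    R0 = 0.5 * (1.0 - us) ** 2
    return n * np.mean((R_n_curve(us, sample) - R0) ** 2)

rng = np.random.default_rng(5)
t_null = T_n(rng.uniform(size=500))         # under H_0: T_n stays O_P(1)
t_alt = T_n(rng.beta(5.0, 1.0, size=500))   # under the alternative: T_n grows like n
print(t_null, t_alt)
```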
Table 6 displays the values \(t_{n,\alpha }\) and \(w_{n,\alpha }\) for \(n=30, \, 50\) and \(\alpha =0.05\), calculated by generating 100,000 samples. To calculate \(T_n\) we used the Epanechnikov kernel (scaled so that it has variance 1) and \({\hat{h}}=sd \times n^{-\tau }\), \(\tau =0.26,\, 0.30, \, 0.35, \, 0.40, \, 0.45, \, 0.49\). As alternatives, we considered several members of the log-Lindley distribution, a family with support on (0,1) with a large variety of shapes (see Gómez-Déniz et al. 2014; Jodrá and Jiménez-Gamero 2016) and probability density function
for some \(\kappa >0\) and \(\lambda \geqslant 0\). Figure 4 represents the empirical power of the two tests for \(\kappa =1.5, \, 2, \, 2.5\) and \(0.6 \leqslant \lambda \leqslant 5\), calculated by generating 10,000 samples for each combination of the parameter values. Looking at Fig. 4 we see that, although according to Corollary 2 the asymptotic null distribution of \(T_n\) does not depend on the value of \({\hat{h}}\), for finite sample sizes it has an effect on the power of the test, being higher for larger values of \({\hat{h}}\). As expected from the results in Janssen (2000), no test has the largest power against all alternatives. The power increases with the sample size.
5 Conclusions and further research
As seen in the Introduction, shape functions have been shown to have interesting properties. From a practical point of view, estimators of these functions need to be proposed and studied. This paper has focused on nonparametric estimators of the shape functions. The proposed estimators have been studied both theoretically and numerically. They exhibit nice asymptotic properties, and the numerical experiments show a reasonable practical behavior. The optimal choice of the smoothing parameter involved in the construction of the estimators has not been dealt with in this piece of research. This issue deserves more investigation and will be considered in future studies.
6 Proofs
This section sketches the proofs of the results stated in the previous sections. Throughout this section, M is a generic positive constant that may take different values across the proofs, and \(f_n(x)\) is defined as \({\hat{f}}_n(x)\) with \({\hat{h}}={\hat{\sigma }} \times g(n)\) replaced with \(h=\sigma \times g(n)\).
Lemma 1
Suppose that \({\hat{\sigma }} {\mathop {\longrightarrow }\limits ^{a.s.}} \sigma >0\), that K satisfies Assumption 1 (i) and that h satisfies Assumption 2 (i). Then \(\displaystyle \sup _x |{\hat{f}}_n(x)-f_n(x)| {\mathop {\longrightarrow }\limits ^{a.s.}} 0\).
Proof
We have that
with
Since
we can write
Recall that if K is Lipschitz continuous and has compact support, then it has bounded variation. Under the assumptions made on K and h we have that (see, for example, the proof of Theorem 2.1.3 of Prakasa Rao 1983)
Since f is bounded,
Using (8), (9), (10) and that \({\hat{\sigma }} {\mathop {\longrightarrow }\limits ^{a.s.}} \sigma >0\), it follows that
Now we study \(\delta _2(x)\). Since K is Lipschitz continuous and has compact support, \(\textrm{Sup}(K)\), we have that
Since \({\hat{\sigma }} {\mathop {\longrightarrow }\limits ^{a.s.}} \sigma >0\), it follows that for n large enough, \(\textbf{1}\{ (X_j-x)/h \in \textrm{Sup}(K) \text{ or } (X_j-x)/{\hat{h}} \in \textrm{Sup}(K) \} \leqslant \textbf{1}\{ |X_j-x|/h \leqslant C \}\), for a certain positive constant C. Therefore
where \(f_{U,n}(x)\) is the kernel estimator of f built by using as kernel the probability density function of the uniform law on the interval \([-C,C]\). Proceeding as before, we have that
Using that \( |\delta _2(x)|\leqslant D(x)\), (12), (13) and that \({\hat{\sigma }} {\mathop {\longrightarrow }\limits ^{a.s.}} \sigma >0\), it follows that
The result follows from (5), (11) and (14). \(\square \)
Remark 5
From the previous proof, notice that if in the statement of Lemma 1 the assumption \({\hat{\sigma }} {\mathop {\longrightarrow }\limits ^{a.s.}} \sigma \) is replaced with \({\hat{\sigma }} {\mathop {\longrightarrow }\limits ^{P}} \sigma \), then it is concluded that \(\displaystyle \sup _x |{\hat{f}}_n(x)-f_n(x)| {\mathop {\longrightarrow }\limits ^{P}} 0\).
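Lemma 1 and Remark 5 can be visualized numerically. The following sketch is illustrative only: it assumes the Epanechnikov kernel (Lipschitz with compact support, as in Assumption 1) and the deterministic rate \(g(n)=n^{-1/5}\), and compares \({\hat{f}}_n\), built with the data-driven bandwidth \({\hat{h}}={\hat{\sigma }}\,g(n)\), against \(f_n\), built with \(h=\sigma \,g(n)\), for standard normal data.

```python
import numpy as np

def epanechnikov(t):
    # Lipschitz kernel with compact support [-1, 1]
    return np.where(np.abs(t) <= 1.0, 0.75 * (1.0 - t**2), 0.0)

def kde(grid, sample, bandwidth):
    # kernel density estimator evaluated on a grid of points
    t = (grid[:, None] - sample[None, :]) / bandwidth
    return epanechnikov(t).mean(axis=1) / bandwidth

rng = np.random.default_rng(0)
grid = np.linspace(-4.0, 4.0, 401)
sup_diffs = []
for n in (200, 2000, 20000):
    x = rng.normal(size=n)                       # true sigma = 1
    g_n = n ** (-1 / 5)                          # assumed rate g(n)
    f_hat = kde(grid, x, x.std(ddof=1) * g_n)    # bandwidth with hat sigma
    f_n = kde(grid, x, 1.0 * g_n)                # bandwidth with true sigma
    sup_diffs.append(np.max(np.abs(f_hat - f_n)))
print(sup_diffs)
```

As \({\hat{\sigma }}\) concentrates around \(\sigma \), the grid approximation of \(\sup _x|{\hat{f}}_n(x)-f_n(x)|\) shrinks, in line with the lemma.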
Proof of Theorem 1
We have that
where
From the SLLN, it follows that
As for \(R_{2n}(u)\), we have that
Under the assumptions made (see, for example, Theorem 2.1.3 in Prakasa Rao 1983)
Taking into account that \(\frac{1}{n}\sum _{i=1}^nX_i {\mathop {\longrightarrow }\limits ^{a.s.}} {\mathbb {E}}X<\infty \), it follows that
In order to study \(R_{3n}(u)\) we first observe that
which implies
and therefore
Under the assumptions made, \(F_n^{-1}(u)-F^{-1}(u) {\mathop {\longrightarrow }\limits ^{a.s.}} 0\) (see, for example, display (1.4.9) in Csörgő 1983). We also have that
From (17), Lemma 1 and taking into account that f is bounded, it follows that the right-hand side of inequality (20) is bounded a.s. Therefore,
We have that
where
and
where \(\delta _1(x)\) and \(\delta _2(x)\) are as defined in (6) and (7), respectively. Since
and \({\hat{\sigma }} {\mathop {\longrightarrow }\limits ^{a.s.}} \sigma \), from (16) and (18), it follows that
Using (12), we get that
where
Proceeding as before, it can be seen that \( R_{U,n}(u) {\mathop {\longrightarrow }\limits ^{a.s.}} R(u)\), and hence
The result follows from (15), (16), (18), (21), (22), (24) and (26). \(\square \)
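As a numerical sanity check on Theorem 1, the following sketch evaluates the estimator at a fixed u for Exp(1) data. It is illustrative only: it reads the right shape function as \(R(u)={\mathbb {E}}[\{X-F^{-1}(u)\}^+f(X)]\), in line with the limit obtained for \({\mathbb {E}}\{U_n(u)\}\) in the proof of Theorem 2, assumes the Epanechnikov kernel, and plugs in the sample quantile and the kernel density estimator. For Exp(1), \(R(u)=(1-u)^2/4\).

```python
import numpy as np

def epanechnikov(t):
    return np.where(np.abs(t) <= 1.0, 0.75 * (1.0 - t**2), 0.0)

rng = np.random.default_rng(1)
n, u = 4000, 0.5
x = rng.exponential(size=n)            # Exp(1): R(u) = (1 - u)^2 / 4
h = x.std(ddof=1) * n ** (-1 / 5)      # hat h = hat sigma * g(n)

# hat f_n evaluated at the sample points
f_hat = epanechnikov((x[:, None] - x[None, :]) / h).mean(axis=1) / h

q_n = np.quantile(x, u)                # sample quantile F_n^{-1}(u)
R_hat = np.mean(np.maximum(x - q_n, 0.0) * f_hat)
R_true = (1.0 - u) ** 2 / 4.0
print(R_hat, R_true)
```

The estimate falls close to the target value, consistently with the strong consistency asserted by the theorem.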
Proof of Theorem 2
Let us consider decomposition (15). We have that \(R_{1n}(u)+R_{2n}(u)=T_{1n}(u)+T_{2n}(u)\) with
and \(T_{2n}(u)=\frac{n-1}{n}U_n(u)\), where \(U_n(u)=\frac{1}{n(n-1)}\sum _{i \ne j}H_n(X_i,X_j;u)\) is a degree-two U-statistic with symmetric kernel
We first see that
Notice that under the assumptions made,
therefore, by the SLLN,
Finally, since K is bounded and \(nh \rightarrow \infty \), (27) follows.
Next we will see that
With this aim, we first calculate \({\mathbb {E}}\{U_n(u)\}\).
Since \(\int K(y)f(x-hy)dy \rightarrow f(x)\), \(\{x-F^{-1}(u)\}^+ \left| \int K(y)f(x-hy)dy-f(x)\right| \leqslant M \{x-F^{-1}(u)\}^+\) and (28), by the dominated convergence theorem we have that \({\mathbb {E}}\{U_n(u)\} \rightarrow R(u)\).
Now, let \(A_n(x; u)={\mathbb {E}}\left\{ H_n(x,X;u) \right\} \) and \(\tau =\int x^2K(x)dx\). Routine calculations show that
Let \(\varepsilon >0\). From the stated assumptions, \({\mathbb {E}}\{a_n(X, u)^2\} = O(h^4)\); therefore
Since \(nh^4 \rightarrow 0\), it follows that
which implies that
Let \(B_n(u)=U_n(u)+ {\mathbb {E}}\{H_n(X_1,X_2; u)\}-\frac{2}{n}\sum _{i=1}^nA_n(X_i; u)\). It can be seen that \( {\mathbb {E}} \left\{ B_n(u)^2 \right\} =O(1/(n^2\,h)). \) Reasoning as before, we get that
Summarizing,
with \(t_{2n}(u) {\mathop {\longrightarrow }\limits ^{a.s.}} 0\). By the SLLN we have that \(\frac{1}{n}\sum _{i=1}^nA(X_i; u) {\mathop {\longrightarrow }\limits ^{a.s.}} R(u)\), and thus (29) is proven.
Finally, proceeding as in the proof of Theorem 1, one gets that \(R_{in}(u) {\mathop {\longrightarrow }\limits ^{a.s.}} 0\), \(i=3,4\). This completes the proof. \(\square \)
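The split of the diagonal term \(T_{1n}(u)\) from the degree-two U-statistic \(U_n(u)\) used above can be checked numerically. The sketch below is illustrative: it holds the population quantile \(F^{-1}(u)\) fixed, uses the Epanechnikov kernel, and takes for \(H_n\) the symmetrisation \(H_n(x,y;u)=\frac{1}{2h}[\{x-F^{-1}(u)\}^+K((y-x)/h)+\{y-F^{-1}(u)\}^+K((x-y)/h)]\), an assumed form consistent with the text. It verifies that the V-statistic equals \(T_{1n}(u)+\frac{n-1}{n}U_n(u)\) exactly, with the diagonal term of order \(1/(nh)\).

```python
import numpy as np

def K(t):
    return np.where(np.abs(t) <= 1.0, 0.75 * (1.0 - t**2), 0.0)

rng = np.random.default_rng(2)
n, u = 1500, 0.5
x = rng.exponential(size=n)
h = n ** (-1 / 5)
q = -np.log(1.0 - u)                 # F^{-1}(u) for Exp(1), held fixed
pos = np.maximum(x - q, 0.0)

# V-statistic: (1/n) sum_i (X_i - q)^+ f_n(X_i), diagonal terms included
Kmat = K((x[None, :] - x[:, None]) / h)
V = np.mean(pos * Kmat.mean(axis=1) / h)

# diagonal term T_1n, of order 1/(nh) since K is bounded
T1 = K(0.0) * pos.sum() / (n**2 * h)

# degree-two U-statistic U_n(u); the symmetrised double sum collapses to this
np.fill_diagonal(Kmat, 0.0)
U = (pos[:, None] * Kmat).sum() / (n * (n - 1) * h)

print(V, T1 + (n - 1) / n * U)
```

The exact algebraic identity holds to machine precision, while \(U_n(u)\) lands near \(R(u)=(1-u)^2/4=0.0625\) for Exp(1) data, mirroring \({\mathbb {E}}\{U_n(u)\} \rightarrow R(u)\).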
Lemma 2
Let f be a probability density function with finite support (a, b), continuous on (a, b). Suppose also that K satisfies Assumption 1 (ii) and that h satisfies Assumption 2 (ii). Then \( \displaystyle \sup _{x \in (a,b)}|f_n(x)-f(x)|{\mathop {\longrightarrow }\limits ^{a.s.}} 0\).
Proof
From the proof of Theorem 2.1.3 in Prakasa Rao (1983), we have that
So, it suffices to see that
We have that
From the continuity of f, for each fixed \(y \in {\mathbb {R}}\)
and
Thus, (30) holds true by applying the dominated convergence theorem. \(\square \)
Proof of Theorem 3
First of all, we see that, under the assumptions made, R is a continuous function on [0, 1]. Since the function
is continuous, it follows that \(R(u)=\int G(x,u)dx\) is continuous on (0, 1). Recall that \(\displaystyle \lim _{u \rightarrow 1^-}R(u)=0=R(1)\) and, if the limit exists, \(R(0)=\displaystyle \lim _{u \rightarrow 0^+}R(u)\). Thus, R is continuous at \(u=1\). To see that it is also continuous at \(u=0\), it suffices to check that the limit \(\displaystyle \lim _{u \rightarrow 0^+}R(u)\) exists, which is true since
and \(\int c(x)dx<\infty \), so, by the dominated convergence theorem,
Next, we consider decomposition (15) and study each term on its right-hand side. Since R is a continuous function on [0, 1], the pointwise convergence of \(R_{1n}(u) \) to R(u) implies the uniform convergence on [0, 1], that is
As for \(R_{2n}(u)\), taking into account that \(\frac{1}{n}\sum _{i=1}^n \left\{ X_i-F^{-1}(u)\right\} ^+ \leqslant {\bar{X}}-a\), with \({\bar{X}}=(1/n)\sum _{i=1}^n X_i\), we have that
From Lemma 2, taking into account that Assumption 1 (i) implies Assumption 1 (ii) and that \({\bar{X}}-a {\mathop {\longrightarrow }\limits ^{a.s.}} {\mathbb {E}}(X)-a < \infty \), we conclude that
From (31) and (32), we conclude that
For \(R_{3n}(u)\) we have that
In the proof of Theorem 1 we saw that \((1/n)\sum _{i=1}^n {\hat{f}}_{n}(X_i)\) is bounded a.s. Under the assumptions made (see, for example, p. 6 of Csörgő 1983),
Therefore
Finally, taking into account decomposition (22), it suffices to show that \(\sup _{0 \leqslant u \leqslant 1} \left| R_{4in}(u) \right| {\mathop {\longrightarrow }\limits ^{a.s.}} 0,\) \(i=1,2\). Using (23), (33) and that \({\hat{\sigma }} {\mathop {\longrightarrow }\limits ^{a.s.}} \sigma >0\), one gets that
Using (25), (33) and that \({\hat{\sigma }} {\mathop {\longrightarrow }\limits ^{a.s.}} \sigma >0\), one similarly gets that
This concludes the proof. \(\square \)
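The uniform convergence in Theorem 3 (and Lemma 2, which requires a density with finite support) can be illustrated numerically. The sketch below is illustrative only: the plug-in form of the estimator and the Epanechnikov kernel are assumptions, and the reference curve R is obtained by numerical quadrature of \(\int \{x-F^{-1}(u)\}^+ f(x)^2 dx\) for Beta(2,2) data on (0, 1).

```python
import numpy as np

def K(t):
    return np.where(np.abs(t) <= 1.0, 0.75 * (1.0 - t**2), 0.0)

rng = np.random.default_rng(3)
n = 3000
x = rng.beta(2.0, 2.0, size=n)          # support (0, 1), f(t) = 6 t (1 - t)
h = x.std(ddof=1) * n ** (-1 / 5)
f_hat = K((x[:, None] - x[None, :]) / h).mean(axis=1) / h

# reference R(u) by quadrature: cdf F(t) = 3 t^2 - 2 t^3 inverted on a fine grid
t = np.linspace(0.0, 1.0, 4001)
dt = t[1] - t[0]
f2 = (6.0 * t * (1.0 - t)) ** 2
u_grid = np.linspace(0.0, 1.0, 201)
q_pop = np.interp(u_grid, 3.0 * t**2 - 2.0 * t**3, t)
R_true = np.array([(np.maximum(t - q, 0.0) * f2).sum() * dt for q in q_pop])

# plug-in estimator over the same grid of u values
q_n = np.quantile(x, u_grid)            # sample quantile function
R_hat = np.array([np.mean(np.maximum(x - q, 0.0) * f_hat) for q in q_n])
sup_err = np.max(np.abs(R_hat - R_true))
print(sup_err)
```

Besides the small uniform error, the estimated curve inherits two structural properties of R: it is non-increasing in u and vanishes at \(u=1\).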
Proof of Theorem 4
First of all, we see that, under the assumptions made, \(R \in L^2\). We have that
with (recall that f is a bounded function and that \(\int _0^1 F^{-1}(u)^2 du={\mathbb {E}}(X^2)\))
and thus \(R \in L^2\).
Next, we consider decomposition (15) and study each term on its right-hand side. Since \(R_{1n}\) is an average of integrable i.i.d. random elements whose expectation is R(u), applying the SLLN in Hilbert spaces we obtain that (see, for example, Theorem 2.4 of Bosq 2000)
in \(L^2\). As for \(R_{2n}(u)\), we have that
Let \(V(u)={\mathbb {E}}[\{X-F^{-1}(u)\}^+]\). A parallel reasoning to that used to prove that \(R \in L^2\) shows that \(V \in L^2\). Now, from the SLLN in Hilbert spaces it follows that \(\frac{1}{n}\sum _{i=1}^n \{X_i-F^{-1}(u)\}^+ {\mathop {\longrightarrow }\limits ^{a.s.}} V\), in \(L^2\). This fact and (17) give
in \(L^2\). From (15), (31) and (32), we conclude that
For \(R_{3n}(u)\) we have that
As shown in the proof of Theorem 1, the second factor on the right-hand side of the above inequality is a.s. bounded. Since \({\mathbb {E}}(X^2)<\infty \), from Lemma 8.3 in Bickel and Freedman (1981), it follows that \(\Vert F_n^{-1}-F^{-1}\Vert {\mathop {\longrightarrow }\limits ^{a.s.}} 0\), and thus \(R_{3n} {\mathop {\longrightarrow }\limits ^{a.s.}} 0\), in \(L^2\). Finally, taking into account decomposition (22), using (23), (25), (34) and that \({\hat{\sigma }} {\mathop {\longrightarrow }\limits ^{a.s.}} \sigma >0\), one also gets that \(R_{4n} {\mathop {\longrightarrow }\limits ^{a.s.}} 0\), in \(L^2\). This concludes the proof. \(\square \)
Proof of Theorem 5
Similar developments to those made in the proof of Theorem 2, and sharing the notation used there, show that
with \(t_{2n} {\mathop {\longrightarrow }\limits ^{a.s.}} 0\) in \(L^2(w)\). By the SLLN in Hilbert spaces we have that \(\frac{1}{n}\sum _{i=1}^nA(X_i; \cdot ) {\mathop {\longrightarrow }\limits ^{a.s.}} R\), in \(L^2(w)\). Finally, proceeding as in the proof of Theorem 4 we get that \(\Vert R_{in} \Vert _w {\mathop {\longrightarrow }\limits ^{a.s.}} 0\), \(i=3,4\), which completes the proof. \(\square \)
Proof of Theorem 6
Similar developments to those made in the proof of Theorem 2, and sharing the notation used there, show that
Now, taking into account (19) we can write
where
We have that
Under the assumptions made, \(\sqrt{n} \left\{ F^{-1}_n(u)-F^{-1}(u) \right\} =O_P(1)\) and \(F^{-1}_n(u) {\mathop {\longrightarrow }\limits ^{a.s.}} F^{-1}(u)\), which implies that for each \(\varepsilon >0\) there exists \(n_0\) such that \(\textbf{1}\{F_n^{-1}(u)<X_i\leqslant F^{-1}(u)\} \leqslant \textbf{1}\{F^{-1}(u)-\varepsilon <X_i\leqslant F^{-1}(u)\}\), \(\forall n \geqslant n_0\), and hence \({\mathbb {E}}[ \textbf{1}\{F_n^{-1}(u)<X_i\leqslant F^{-1}(u)\} ]=O(\varepsilon )\). Proceeding as in the proof of Theorem 1, it can be seen that \((1/n)\sum _{i=1}^n {\hat{f}}_n(X_i)=O_P(1)\). Therefore \(T_{1n}(u) {\mathop {\longrightarrow }\limits ^{P}} 0\). Analogously, it can be seen that \(T_{2n}(u) {\mathop {\longrightarrow }\limits ^{P}} 0\) and that \(\frac{1}{n}\sum _{i=1}^n f_n(X_i)\textbf{1}\{F^{-1}(u)<X_i \leqslant F_n^{-1}(u)\}=o_P(1)\). Now, proceeding as in the proof of Theorem 2, it can be seen that
From Lemma 1,
We also have that (see, e.g. Theorem 2.5.2 in Serfling 1980)
Thus, by Slutsky's theorem,
Therefore, it has been shown that
To prove the result, it remains to see that \(\sqrt{n} R_{4n}(u)=o_P(1)\). Recall decomposition (22). From (23) and (35), it follows that
Now we study \(R_{42n}(u)\). A Taylor expansion of \(K((X_j-X_i)/{{\hat{h}}})\) around \(K\big ((X_j-X_i)/{{h}}\big )\) gives
where
where \(\tilde{h}=\alpha h + (1-\alpha ) {\hat{h}}\), for some \(\alpha \in (0,1)\). The assumptions made imply that \(Q_n(u)=o_P(1)\). Now, proceeding as in the proof of Theorem 2, and taking into account that \(\int u K'(u)f(x+hu)du\rightarrow f(x)\int u K'(u)du=-f(x)\), we obtain that
Finally, the result follows from (38), (39) and (40). \(\square \)
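The last step above rests on the value of \(\int u K'(u)\,du\): for a kernel with compact support integrating to one, integration by parts gives \(\int u K'(u)du=[uK(u)]-\int K(u)du=-1\). A quick numerical check of this identity (illustrative; the biweight kernel \(K(u)=\frac{15}{16}(1-u^2)^2\) is an assumed choice, convenient because it is continuously differentiable with \(K(\pm 1)=K'(\pm 1)=0\)):

```python
import numpy as np

# biweight kernel K(u) = (15/16)(1 - u^2)^2 on [-1, 1]; K'(u) = -(15/4) u (1 - u^2)
u = np.linspace(-1.0, 1.0, 200001)
du = u[1] - u[0]
vals = u * (-(15.0 / 4.0) * u * (1.0 - u**2))          # integrand u K'(u)
integral = ((vals[:-1] + vals[1:]) / 2.0).sum() * du   # composite trapezoidal rule
print(integral)
```

The computed value agrees with \(-1\) to high precision, which is the constant carried into the limit of \(\int u K'(u)f(x+hu)du\).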
Proof of Theorem 7
Similar developments to those made in the proof of Theorem 2, and sharing the notation used there, show that
with \(\int _0^1r_{1n}(u)^2w(u)du {\mathop {\longrightarrow }\limits ^{P}} 0\).
As for \(R_{3n}(u)\), we consider decomposition (36). In the proof of Theorem 6 it was shown that \(T_{1n}(u) {\mathop {\longrightarrow }\limits ^{P}} 0\), for each \(u \in (0,1)\). We have that
From Theorems 2.1, 3.1.1 and 3.2.1 in Csörgő (1983), it follows that
Under the assumptions made,
Thus, by the dominated convergence theorem, it follows that
Analogously it can be seen that
From Theorems 2.1 and 3.2.1 in Csörgő (1983), it follows that
Notice that the convergence in (37) holds for each \(u \in [0,1]\). Since \(\mu (u)\) is a continuous function, it follows that such convergence is uniform on the interval [0, 1]. Therefore,
Summarizing, it has been shown that
The same steps given in the proof of Theorem 6 to show that \(\sqrt{n}R_{4n}(u)=o_P(1)\) can be used to see that \(\int _0^{(n-1)/n}n R_{4n}(u)^2w(u)du {\mathop {\longrightarrow }\limits ^{P}} 0\). This completes the proof. \(\square \)
Proof of Corollary 2
Define
\(1 \leqslant i \leqslant n\), and \(W_n(u)=\frac{1}{\sqrt{n}} \sum _{i=1}^n {\widetilde{Y}}_i(u)\), \(u \in [0,1]\).
From Theorem 7, we have that
We first see that \({\widetilde{Y}}_1 \in L^2(w)\). With this aim we write \({\widetilde{Y}}_1(u)={\widetilde{Y}}_{11}(u){\widetilde{Y}}_{12}(u)+{\widetilde{Y}}_{13}(u)\), with
and
In the proof of Theorem 4 we saw that R, \(\{X_1-F^{-1}(u)\}^+ \in L^2\); since f is bounded, we also have that \({\widetilde{Y}}_{11}, \, {\widetilde{Y}}_{12} \in L^2(w)\). As for \({\widetilde{Y}}_{13}\),
because \(\mu (u) \leqslant M\), \(\forall u \in [0,1]\), since f is bounded. Thus, \({\widetilde{Y}}_1 \in L^2(w)\).
From the central limit theorem in Hilbert spaces and the continuous mapping theorem,
Since R is a decreasing function with \(\displaystyle \lim _{u\uparrow 1} R(u)=0\), and w is bounded
References
Arriaza A, Di Crescenzo A, Sordo MA, Suárez-Llorens A (2019) Shape measures based on the convex transform order. Metrika 82:99–124
Azzalini A (1981) A note on the estimation of a distribution function and quantiles by a kernel method. Biometrika 68(1):326–328
Bickel PJ, Freedman DA (1981) Some asymptotic theory for the bootstrap. Ann Stat 9(6):1196–1217
Bosq D (2000) Linear processes in function spaces. Theory and applications. Lecture Notes in Statistics Vol 149. Springer, New York
Csörgő M (1983) Quantile processes with statistical applications. Society for Industrial and Applied Mathematics, Philadelphia
del Barrio E, Cuesta-Albertos JA, Matrán C (2000) Contributions of empirical and quantile processes to the asymptotic theory of goodness-of-fit tests. TEST 9(1):1–96
del Barrio E, Giné E, Utzet F (2005) Asymptotics for \({L}_2\) functionals of the empirical quantile process, with applications to tests of fit based on weighted Wasserstein distances. Bernoulli 11(1):131–189
Gómez-Déniz E, Sordo MA, Calderín-Ojeda E (2014) The Log-Lindley distribution as an alternative to the beta regression model with applications in insurance. Insur Math Econ 54:49–57
Janssen A (2000) Global power functions of goodness of fit tests. Ann Stat 28(1):239–253
Jodrá P, Jiménez-Gamero MD (2016) A note on the Log-Lindley distribution. Insur Math Econ 71:189–194
Pardo-Fernández JC, Jiménez-Gamero MD (2019) A model specification test for the variance function in nonparametric regression. AStA Adv Stat Anal 103:387–410
Pardo-Fernández JC, Jiménez-Gamero MD, El Ghouch A (2015) A nonparametric ANOVA-type test for regression curves based on characteristic functions. Scand J Stat 42(1):197–213
Prakasa Rao BLS (1983) Nonparametric functional estimation. Academic Press Inc, New York
R Core Team (2020) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna
Serfling RJ (1980) Approximation theorems of mathematical statistics. Wiley, New York
Smith RL, Naylor JC (1987) A comparison of maximum likelihood and Bayesian estimators for the three-parameter Weibull distribution. J R Stat Soc Ser C 36:358–369
Acknowledgements
The authors thank the two anonymous referees for their constructive comments. This work was supported by Grant PID2020-118101GB-I00, Ministerio de Ciencia e Innovación (MCIN/AEI/10.13039/501100011033).
Funding
Open Access funding provided thanks to the CRUE-CSIC agreement with Springer Nature.
Jiménez-Gamero, M.D., Pardo-Fernández, J.C. Nonparametric estimation of the shape functions and related asymptotic results. Stat Papers 65, 1575–1611 (2024). https://doi.org/10.1007/s00362-023-01456-7