Goodness-of-fit tests for the Weibull distribution based on the Laplace transform and Stein’s method

Published in: Annals of the Institute of Statistical Mathematics

Abstract

We propose novel goodness-of-fit tests for the Weibull distribution with unknown parameters. These tests are based on an alternative characterizing representation of the Laplace transform related to the density approach in the context of Stein’s method. Asymptotic theory of the tests is derived, including the limit null distribution, the behaviour under contiguous alternatives, the validity of the parametric bootstrap procedure, and consistency of the tests against a large class of alternatives. A Monte Carlo simulation study shows the competitiveness of the new procedure. Finally, the procedure is applied to real data examples taken from materials science.

References

  • Allison, J. S., Betsch, S., Ebner, B., Visagie, J. (2022). On testing the adequacy of the inverse Gaussian distribution. Mathematics, 10(3), 350.

  • Anastasiou, A., Barp, A., Briol, F. X., Ebner, B., Gaunt, R. E., Ghaderinezhad, F., Gorham, J., Gretton, A., Ley, C., Liu, Q., Mackey, L., Oates, C. J., Reinert, G., Swan, Y. (2023). Stein’s method meets computational statistics: A review of some recent developments. Statistical Science, 38(1), 120–139.

  • Betsch, S., Ebner, B. (2019). A new characterization of the gamma distribution and associated goodness-of-fit tests. Metrika, 82(7), 779–806.

  • Betsch, S., Ebner, B. (2021). Fixed point characterizations of continuous univariate probability distributions and their applications. Annals of the Institute of Statistical Mathematics, 73, 31–59.

  • Betsch, S., Ebner, B., Klar, B. (2021). Minimum \(L^q\)-distance estimators for non-normalized parametric models. Canadian Journal of Statistics, 49(2), 514–548.

  • Betsch, S., Ebner, B., Nestmann, F. (2022). Characterizations of non-normalized discrete probability distributions and their application in statistics. Electronic Journal of Statistics, 16(1), 1303–1329.

  • Bickel, P. J., Doksum, K. A. (2015). Mathematical statistics: Basic ideas and selected topics (2nd ed., Vol. 1). New York, NY: CRC Press.

  • Bothma, E., Allison, J. S., Visagie, I. J. H. (2022). New classes of tests for the Weibull distribution using Stein’s method in the presence of random right censoring. Computational Statistics, 37, 1751–1770.

  • Bowman, A. W., Foster, P. J. (1993). Adaptive smoothing and a density-based test of multivariate normality. Journal of the American Statistical Association, 88(422), 529–537.

  • Cabaña, A., Quiroz, A. J. (2005). Using the empirical moment generating function in testing for the Weibull and the type I extreme value distributions. TEST, 14(2), 417–431.

  • Chandra, M., Singpurwalla, N. D., Stephens, M. A. (1981). Kolmogorov statistics for tests of fit for the extreme value and Weibull distributions. Journal of the American Statistical Association, 76(375), 729–731.

  • Chen, X., White, H. (1998). Central limit and functional central limit theorems for Hilbert-valued dependent heterogeneous arrays with applications. Econometric Theory, 14(2), 260–284.

  • Ferguson, T. S. (1996). A course in large sample theory. Texts in Statistical Science Series. London: Chapman & Hall.

  • Grobler, G. L., Bothma, E., Allison, J. S. (2022). Testing for the Rayleigh distribution: A new test with comparisons to tests for exponentiality based on transformed data. Mathematics, 10(8), 1316.

  • Henze, N. (1993). A new flexible class of omnibus tests for exponentiality. Communications in Statistics - Theory and Methods, 22(1), 115–133.

  • Henze, N. (1996). Empirical-distribution-function goodness-of-fit tests for discrete models. Canadian Journal of Statistics, 24(1), 81–93.

  • Henze, N. (2002). Invariant tests for multivariate normality: A critical review. Statistical Papers, 43(4), 467–506.

  • Henze, N., Meintanis, S. G. (2002). Tests of fit for exponentiality based on the empirical Laplace transform. Statistics, 36(2), 147–161.

  • Henze, N., Zirkler, B. (1990). A class of invariant and consistent tests for multivariate normality. Communications in Statistics - Theory and Methods, 19(10), 3595–3617.

  • Janssen, A. (2000). Global power functions of goodness of fit tests. The Annals of Statistics, 28(1), 239–253.

  • Krit, M. (2014). Goodness-of-fit tests for the Weibull distribution based on the Laplace transform. Journal de la Société Française de Statistique, 155(3), 135–151.

  • Krit, M. (2019). EWGoF: Goodness-of-fit tests for the exponential and two-parameter Weibull distributions. R package version 2.2.2. https://CRAN.R-project.org/package=EWGoF

  • Krit, M., Gaudoin, O., Xie, M., Remy, E. (2016). Simplified likelihood based goodness-of-fit tests for the Weibull distribution. Communications in Statistics - Simulation and Computation, 45(3), 920–951.

  • Krit, M., Gaudoin, O., Remy, E. (2021). Goodness-of-fit tests for the Weibull and extreme value distributions: A review and comparative study. Communications in Statistics - Simulation and Computation, 50(7), 1888–1911.

  • Ley, C., Swan, Y. (2013). Stein’s density approach and information inequalities. Electronic Communications in Probability, 18(7), 1–14.

  • Mackisack, M., Stillman, R. (1996). A cautionary tale about Weibull analysis [reliability estimation]. IEEE Transactions on Reliability, 45(2), 244–248. 

  • Mann, N. R., Fertig, K. W. (1975). A goodness-of-fit test for the two parameter vs. three parameter Weibull; confidence bounds for threshold. Technometrics, 17(2), 237–245.

  • Mann, N. R., Scheuer, E. M., Fertig, K. W. (1973). A new goodness-of-fit test for the two-parameter Weibull or extreme-value distribution with unknown parameters. Communications in Statistics, 2(5), 383–400.

  • McCool, J. I. (1970). Inference on Weibull percentiles and shape parameter from maximum likelihood estimates. IEEE Transactions on Reliability, 19(1), 2–9.

  • Nikitin, Y. Y. (2017). Tests based on characterizations, and their efficiencies: A survey. Acta et Commentationes Universitatis Tartuensis de Mathematica, 21(1), 3–24.

  • Pérez-Rodríguez, P., Vaquera-Huerta, H., Villaseñor-Alva, J. A. (2009). A goodness-of-fit test for the Gumbel distribution based on Kullback-Leibler information. Communications in Statistics - Theory and Methods, 38(6), 842–855.

  • R Core Team (2021). R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing. https://www.R-project.org/

  • Rinne, H. (2009). The Weibull distribution: A handbook. Boca Raton, FL: CRC Press.

  • Shapiro, S. S., Brain, C. W. (1987). W-test for the Weibull distribution. Communications in Statistics - Simulation and Computation, 16(1), 209–219.

  • Smith, R. L. (1991). Weibull regression models for reliability data. Reliability Engineering & System Safety, 34(1), 55–76.

  • Smith, R. M., Bain, L. J. (1976). Correlation type goodness-of-fit statistics with censored sampling. Communications in Statistics - Theory and Methods, 5(2), 119–132.

  • Tenreiro, C. (2019). On the automatic selection of the tuning parameter appearing in certain families of goodness-of-fit tests. Journal of Statistical Computation and Simulation, 89(10), 1780–1797.

  • Van der Vaart, A. W. (1998). Asymptotic statistics. Cambridge Series in Statistical and Probabilistic Mathematics, Vol. 3. Cambridge: Cambridge University Press.

  • Watson, A. S., Smith, R. L. (1985). An examination of statistical theories for fibrous materials in the light of experimental data. Journal of Materials Science, 20(9), 3260–3270.

  • Weibull, W. (1951). A statistical distribution function of wide applicability. Journal of Applied Mechanics, 18, 293–297.

Acknowledgements

The authors thank an anonymous referee for helpful comments.

Author information

Corresponding author

Correspondence to Bruno Ebner.

A Proofs

A.1 Proof of Theorem 1

We first assume that X has the Weibull distribution \(W(\lambda ,k)\), and we write \(F(\cdot ,\lambda ,k)\) and \(f(\cdot ,\lambda ,k)\) for the distribution function and the density function of X, respectively. Furthermore, we put

$$\begin{aligned} \kappa _{f}(x)=\biggl \vert \frac{\frac{\textrm{d}}{\textrm{d}x}f(x,\lambda ,k) \min \big (F(x,\lambda ,k),1-F(x,\lambda ,k)\big )}{f^2(x,\lambda ,k)} \biggl \vert , \qquad 0< x < \infty . \end{aligned}$$

Letting \(\tau =(-\lambda ^k\log (1/2))^{1/k}=\lambda (\log 2)^{1/k}\), the median of \(W(\lambda ,k)\), we have

$$\begin{aligned} \min \big (F(x,\lambda ,k),1-F(x,\lambda ,k)\big )=\left\{ \begin{array}{ll} F(x,\lambda ,k), &{} x \le \tau \\ 1-F(x,\lambda ,k), &{} x > \tau \end{array}\right. . \end{aligned}$$

Using L’Hôpital’s rule, we deduce \(\lim _{x \rightarrow 0} \big (x^{-k} (1-\exp (-(x/\lambda )^k))\big )=\lambda ^{-k}\), and it follows that \(\lim _{x \rightarrow 0} \kappa _{f}(x)=\vert \frac{k-1}{k} \vert .\) It is easily seen that \(\lim _{x \rightarrow \infty } \kappa _{f}(x)=1.\) The continuity of \(\kappa _{f}(\cdot )\) then yields

$$\begin{aligned} \sup _{x\in (0,\infty )} \kappa _{f}(x) < \infty . \end{aligned}$$
(20)
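
To make the limit at zero explicit, note that for the Weibull density

$$\begin{aligned} \frac{\frac{\textrm{d}}{\textrm{d}x}f(x,\lambda ,k)}{f(x,\lambda ,k)}=\frac{k-1-k(x/\lambda )^k}{x} \qquad \text{ and } \qquad \frac{F(x,\lambda ,k)}{f(x,\lambda ,k)}=\frac{1-\textrm{e}^{-(x/\lambda )^k}}{\frac{k}{\lambda }(x/\lambda )^{k-1}\textrm{e}^{-(x/\lambda )^k}} \sim \frac{x}{k} \quad (x \rightarrow 0), \end{aligned}$$

so that, for \(x \le \tau\), \(\kappa _{f}(x)=\big \vert k-1-k(x/\lambda )^k \big \vert \, F(x,\lambda ,k)/\big (xf(x,\lambda ,k)\big ) \rightarrow \vert (k-1)/k \vert\) as \(x \rightarrow 0\), in accordance with the limit stated above.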

A further application of L’Hôpital’s rule gives

$$\begin{aligned} \lim _{x \rightarrow 0} \frac{F(x,\lambda ,k)}{f(x,\lambda ,k)}= \lim _{x \rightarrow 0} \frac{f(x,\lambda ,k)}{\frac{\textrm{d}}{\textrm{d}x}f(x,\lambda ,k)} =0. \end{aligned}$$
(21)

In view of (20), (21) and

$$\begin{aligned} \int _0^{\infty } x \Big \vert \frac{\textrm{d}}{\textrm{d}x}f(x,\lambda ,k) \Big \vert \textrm{d}x \le \vert k-1 \vert +\frac{k}{\lambda ^k}\mathbb {E}[X^k] < \infty , \end{aligned}$$

we can apply Corollary 2 of Betsch and Ebner (2021). Hence \(X\) follows a \(W(\lambda ,k)\)-distribution if and only if its density is given by

$$\begin{aligned} f_{X}(s)=\mathbb {E}\biggl [ -\frac{\frac{\textrm{d}}{\textrm{d}x}f(x,\lambda ,k)\vert _{X}}{f(X,\lambda ,k)}1\{X>s\}\biggl ] =\mathbb {E}\biggl [ -\frac{k-1-\frac{kX^k}{\lambda ^k}}{X}1\{X>s\}\biggl ] \end{aligned}$$

for almost every \(s > 0\). Next, we apply Tonelli’s theorem to conclude that

$$\begin{aligned} \int _0^{\infty } \textrm{e}^{-ts} \mathbb {E}\biggl [\biggl \vert -\frac{\frac{\textrm{d}}{\textrm{d}x}f(x,\lambda ,k)\vert _{X}}{f(X,\lambda ,k)}\biggl \vert 1\{X>s\}\biggl ]\textrm{d}s= & {} \int _0^{\infty } \textrm{e}^{-ts} \int _0^{\infty } \Big \vert \frac{\textrm{d}}{\textrm{d}x}f(x,\lambda ,k)\Big \vert 1\{x>s\}\textrm{d}x\textrm{d}s \\\le & {} \int _0^{\infty } x\Big \vert \frac{\textrm{d}}{\textrm{d}x}f(x,\lambda ,k)\Big \vert \textrm{d}x \le \vert k-1 \vert +k \end{aligned}$$

holds for \(t > 0\). Using Fubini’s theorem and the identity \(\int _0^{\infty } \textrm{e}^{-ts}1\{X>s\}\,\textrm{d}s=(1-\textrm{e}^{-tX})/t\), the Laplace transform of \(X\) takes the form

$$\begin{aligned} \mathcal {L}_{X}(t)=\int _0^{\infty } \textrm{e}^{-ts} \mathbb {E}\biggl [ -\frac{k-1-\frac{kX^k}{\lambda ^k}}{X} 1\{X>s\}\biggl ]\textrm{d}s = \mathbb {E}\biggl [\frac{1}{X}\biggl (k \biggl (\frac{X}{\lambda }\biggl )^k -k+1 \biggl ) \biggl (\frac{1}{t}-\frac{1}{t}\textrm{e}^{-tX} \biggl ) \biggl ] \end{aligned}$$

for \(t > 0\). The converse assertion follows since the Laplace transform determines the distribution.\(\square\)
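
As a numerical plausibility check of the characterizing identity, both sides can be compared by simulation. The following minimal sketch uses arbitrary illustrative values for the parameters, the arguments t and the sample size; the two printed values coincide only up to Monte Carlo error.

import numpy as np

rng = np.random.default_rng(1)
lam, k, n = 2.0, 1.5, 10**6
x = lam * rng.weibull(k, size=n)  # X ~ W(lam, k); numpy's weibull sampler has unit scale

for t in (0.5, 1.0, 2.0):
    lhs = np.exp(-t * x).mean()  # Monte Carlo estimate of the Laplace transform L_X(t)
    rhs = ((k * (x / lam) ** k - k + 1) / x * (1 - np.exp(-t * x)) / t).mean()
    print(f"t = {t}: lhs = {lhs:.4f}, rhs = {rhs:.4f}")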

A.2 Proof of Theorem 5

Recall (15) and the definition of \(V_n\) given in (14). The proof consists of two steps. We first write \(V_n\) as a sum of i.i.d. random elements of \(\mathscr {L}_w^2\) plus a term that is \(o_{\mathbb {P}}(1)\). Then, a Hilbert space central limit theorem completes the proof. As for step 1, we apply two Taylor expansions in order to approximate the estimator \({\widehat{k}}_n\) in the exponent by \(k_n\) and \({\widehat{\lambda }}_n\) in the denominator by \(\lambda _n\). Starting with \({\widehat{k}}_n\), a second-order Taylor expansion yields

$$\begin{aligned} \left( \frac{X_{n,j}}{{\widehat{\lambda }}_{n}}\right) ^{{\widehat{k}}_{n}}&=\left( \frac{X_{n,j}}{{\widehat{\lambda }}_{n}}\right) ^{k_n}+\log \left( \frac{X_{n,j}}{{\widehat{\lambda }}_{n}}\right) \left( \frac{X_{n,j}}{{\widehat{\lambda }}_{n}}\right) ^{k_n}({\widehat{k}}_{n}-k_n)+R_{n,j}({\widehat{k}}_{n}-k_n)^2, \end{aligned}$$

where

$$\begin{aligned} R_{n,j}= \frac{1}{2} \biggl (\log \biggl (\frac{X_{n,j}}{{\widehat{\lambda }}_{n}}\biggl )\biggl )^2\biggl (\frac{X_{n,j}}{{\widehat{\lambda }}_{n}}\biggl )^{k_n^*} \end{aligned}$$

and \(|k_n^* - k_n| \le |{\widehat{k}}_{n} -k_n|\); the form of this remainder is checked symbolically below.
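
The remainder is the Lagrange form of the second-order Taylor remainder of the map \(\kappa \mapsto u^{\kappa }\); the underlying second derivative can be verified symbolically, as in the following sketch (sympy, with illustrative symbol names):

import sympy as sp

u, kappa = sp.symbols('u kappa', positive=True)
# second derivative of u^kappa with respect to kappa is (log u)^2 * u^kappa,
# which is exactly the factor appearing in R_{n,j}
print(sp.simplify(sp.diff(u**kappa, kappa, 2) - sp.log(u)**2 * u**kappa))  # prints 0

We now define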

$$\begin{aligned} V_{n}^{(1)}(t)=&\frac{1}{\sqrt{n}} \sum _{j=1}^{n} \bigg [ \frac{1}{X_{n,j}}\bigg (\left( \frac{X_{n,j}}{{\widehat{\lambda }}_{n}}\right) ^{k_n}\left( {\widehat{k}}_{n}+\log \left( \frac{X_{n,j}}{{\widehat{\lambda }}_{n}}\right) {\widehat{k}}_{n}({\widehat{k}}_{n}-k_n)\right) -{\widehat{k}}_{n}+1\bigg ) \\ {}&\times \big (1-\textrm{e}^{-tX_{n,j}}\big )-t \textrm{e}^{-tX_{n,j}}\bigg ] \end{aligned}$$

and show that

$$\begin{aligned} \Vert V_{n}-V_{n}^{(1)}\Vert ^2 = o_{\mathbb {P}}(1). \end{aligned}$$
(22)

To this end, notice that

$$\begin{aligned} \Vert V_{n}-V_{n}^{(1)}\Vert ^2&= \int _0^{\infty } \big \vert V_{n}(t)-V_{n}^{(1)}(t)\big \vert ^2w(t) \textrm{d}t \nonumber \\&=\int _{0}^{\infty }\bigg \vert \frac{{\widehat{k}}_n}{\sqrt{n}} \sum _{j=1}^{n} \frac{1-\exp (-tX_{n,j})}{X_{n,j}} ({\widehat{k}}_{n}-k_n)^2R_{n,j} \bigg \vert ^2w(t)\textrm{d}t\nonumber \\&\le {\widehat{k}}_n^2\big (\sqrt{n}({\widehat{k}}_{n}-k_n)({\widehat{k}}_{n}-k_n)\big )^2 \cdot \bigg (\frac{1}{n} \sum _{j=1}^n R_{n,j}\bigg )^2 \cdot \int _{0}^{\infty } t^2w(t)\textrm{d}t, \end{aligned}$$
(23)

since \(1-\textrm{e}^{-t} \le t\) for \(t \ge 0\). The factor \(\big (\sqrt{n}({\widehat{k}}_{n}-k_n)({\widehat{k}}_{n}-k_n)\big )^2\) in (23) converges to zero in probability in view of the tightness of \(\sqrt{n}({\widehat{k}}_{n}-k_n)\), and assumption (5) ensures the existence of the integral. It thus remains to show that \(n^{-1}\sum _{j=1}^n R_{n,j}\) is a tight sequence. Since \((a-b)^2 \le 2a^2+2b^2\) for \(a,b \in \mathbb {R}\), the definition of \(R_{n,j}\) yields

$$\begin{aligned} 0 \le \frac{1}{n}\sum _{j=1}^n R_{n,j} \le \frac{1}{{\widehat{\lambda }}_n^{k_n^*}} \cdot \frac{1}{n}\sum _{j=1}^n \big (\log X_{n,j}\big )^2 X_{n,j}^{k_n^*} + \frac{\big (\log {\widehat{\lambda }}_n\big )^2}{{\widehat{\lambda }}_n^{k_n^*}} \cdot \frac{1}{n}\sum _{j=1}^n X_{n,j}^{k_n^*}. \end{aligned}$$

The factors that precede the arithmetic means converge almost surely and are thus tight sequences. Hence, it remains to show that \(Z_{n,1}= n^{-1}\sum _{j=1}^n X_{n,j}^{k_n^*}\) and \(Z_{n,2}= n^{-1}\sum _{j=1}^n \big (\log X_{n,j}\big )^2 X_{n,j}^{k_n^*}\) are tight sequences. We tackle \(Z_{n,1}\) since the reasoning for \(Z_{n,2}\) is the same. Given \(\varepsilon >0\), we have to find \(K >0\) such that \(\mathbb {P}(Z_{n,1} > K) \le \varepsilon\) for each n. Since \(k_n^*\) converges almost surely, there is some positive \(k^+\) such that \(\mathbb {P}(k_n^* \le k^+) \ge 1- \varepsilon /2\), \(n \ge 1\), whence \(\mathbb {P}\big (Z_{n,1} \le 1 + n^{-1}\sum _{j=1}^n X_{n,j}^{k^+}\big ) \ge 1-\varepsilon /2\) for each n. In view of the almost sure convergence of \(n^{-1}\sum _{j=1}^n X_{n,j}^{k^+}\), there is some \(L>0\) such that \(\mathbb {P}\big (n^{-1}\sum _{j=1}^n X_{n,j}^{k^+} \le L\big ) \ge 1-\varepsilon /2\) for each n. Taking \(K= 1+L\), it follows that \(\mathbb {P}(Z_{n,1} \le K) \ge 1-\varepsilon\) for each n, as was to be shown.

In a similar way, a Taylor expansion yields

$$\begin{aligned} \left( \frac{X_{n,j}}{{\widehat{\lambda }}_{n}}\right) ^{k_n}&=\left( \frac{X_{n,j}}{\lambda _n}\right) ^{k_n}-k_n\frac{X_{n,j}^{k_n}}{\lambda _n^{k_n+1}}({\widehat{\lambda }}_{n}-\lambda _n)+{\widetilde{R}}_{n,j}({\widehat{\lambda }}_{n}-\lambda _n)^2, \end{aligned}$$

where

$$\begin{aligned} {\widetilde{R}}_{n,j}=\frac{1}{2}\, (k_n+1)k_n\frac{X_{n,j}^{k_n}}{(\lambda _n^*)^{k_n+2}} \end{aligned}$$

and \(|\lambda _n^*-\lambda _n| \le |{\widehat{\lambda }}_n-\lambda _n|\). Putting

$$\begin{aligned} V_{n}^{(2)}(t)=&\frac{1}{\sqrt{n}} \sum _{j=1}^{n} \Bigg [ \frac{1}{X_{n,j}}\Biggl (\left( \left( \frac{X_{n,j}}{\lambda _n}\right) ^{k_n}-k_n\frac{X_{n,j}^{k_n}}{\lambda _n^{k_n+1}}({\widehat{\lambda }}_{n}-\lambda _n)\right) \\&\times \left( {\widehat{k}}_{n}+\log \left( \frac{X_{n,j}}{{\widehat{\lambda }}_{n}}\right) {\widehat{k}}_{n}({\widehat{k}}_{n}-k_n)\right) -{\widehat{k}}_{n}+1\Biggr )\big (1-\textrm{e}^{-tX_{n,j}}\big )-t \textrm{e}^{-tX_{n,j}}\Bigg ], \end{aligned}$$

and

$$\begin{aligned} A_{n,j}&:={\widehat{k}}_{n}+\log \left( \frac{X_{n,j}}{{\widehat{\lambda }}_{n}}\right) {\widehat{k}}_{n}({\widehat{k}}_{n}-k_n),\nonumber \\ B_{n,j}&:=1+\log \left( \frac{X_{n,j}}{{\widehat{\lambda }}_{n}}\right) ({\widehat{k}}_{n}-k_n), \end{aligned}$$
(24)

it follows by complete analogy with the first expansion that

$$\begin{aligned} \Vert V_{n}^{(1)}-V_{n}^{(2)}\Vert ^2 =&\int _0^{\infty } |V_{n}^{(1)}(t)\! -\! V_{n}^{(2)}(t)|^2w(t)\textrm{d}t \\ =&\int _{0}^{\infty }\biggl | \frac{1}{\sqrt{n}} \sum _{j=1}^{n} \frac{1\! -\! \exp (-tX_{n,j})}{X_{n,j}} A_{n,j} {\widetilde{R}}_{n,j}({\widehat{\lambda }}_{n}\! -\! \lambda _n)^2 \biggl |^2w(t)\textrm{d}t \\ \le&\big (\sqrt{n}({\widehat{\lambda }}_{n}\! -\! \lambda _n)^2{\widehat{k}}_{n}\big )^2 \bigg (\frac{1}{n} \sum _{j=1}^n B_{n,j} {\widetilde{R}}_{n,j}\bigg )^2 \! \int _{0}^{\infty } t^2w(t)\textrm{d}t =o_{\mathbb {P}}(1). \end{aligned}$$

To finish the first step, we show

$$\begin{aligned} \Biggl \Vert V_{n}^{(2)}(\cdot ) - \frac{1}{\sqrt{n}}\sum _{j=1}^n W_{n,j}(\cdot ) \Biggl \Vert ^{2}=o_{\mathbb {P}}(1), \end{aligned}$$
(25)

where \(W_{n,j}(\cdot )\) is defined by

$$\begin{aligned} W_{n,j}(t)=&\frac{1}{X_{n,j}}\biggl (\left( \frac{X_{n,j}}{\lambda _n}\right) ^{k_n} k_n-k_n+1\biggr ) \big (1- \textrm{e}^{-tX_{n,j}}\big )-t \textrm{e}^{-tX_{n,j}} \\&-\psi _{1}(X_{n,j},\lambda _n,k_n)\frac{k_n^2}{\lambda _n^{k_n+1}} \mathbb {E}\left[ X^{k_n-1} \big (1- \textrm{e}^{-tX}\big )\right] \\&+\psi _{2}(X_{n,j},\lambda _n,k_n)\biggl (\frac{k_n}{\lambda _n^{k_n}} \mathbb {E}\left[ X^{k_n-1}\log (X/\lambda _n) \big (1-\textrm{e}^{-tX}\big )\right] \\&\quad -\mathbb {E}\left[ X^{-1}\big (1- \textrm{e}^{-tX}\big )\right] +\frac{1}{\lambda _n^{k_n}} \mathbb {E}\left[ X^{k_n-1}\big (1-\textrm{e}^{-tX}\big )\right] \biggr ). \end{aligned}$$

Here, X has the Weibull distribution \(W(\lambda _0,k_0)\), and \(\psi _1,\psi _2\) satisfy (7)–(11). To verify (25), we successively eliminate the remaining estimators in \(V_n^{(2)}\). Note that, with \(A_{n,j}\) given in (24),

$$\begin{aligned} V_{n}^{(2)}(t)=&\frac{1}{\sqrt{n}} \sum _{j=1}^{n} \Bigg \{ \frac{1}{X_{n,j}}\Biggl (\left( \frac{X_{n,j}}{\lambda _n}\right) ^{k_n} A_{n,j} -{\widehat{k}}_{n}+1\Biggl ) \big (1-\textrm{e}^{-tX_{n,j}}\big )-t\textrm{e}^{-tX_{n,j}} \Bigg \} \\&-\sqrt{n} ({\widehat{\lambda }}_{n}-\lambda _n) \bigg ( \frac{k_n^2}{\lambda _n^{k_n+1}} \mathbb {E}\left[ X^{k_n-1} \big (1-\textrm{e}^{-tX}\big )\right] +K_{n}^{(1)}(t) \bigg ), \end{aligned}$$

where

$$\begin{aligned} K_{n}^{(1)}(t)=&\frac{1}{n}\sum _{j=1}^n \frac{1}{X_{n,j}}k_n\frac{X_{n,j}^{k_n}}{\lambda _n^{k_n+1}}\left( {\widehat{k}}_{n}+\log \left( \frac{X_{n,j}}{{\widehat{\lambda }}_{n}}\right) {\widehat{k}}_{n}({\widehat{k}}_{n}-k_n)\right) \big (1-\textrm{e}^{-tX_{n,j}}\big )\\ {}&-\frac{k_n^2}{\lambda _n^{k_n+1}}\mathbb {E}\left[ X^{k_n-1}\big (1-\textrm{e}^{-tX}\big )\right] . \end{aligned}$$

We have

$$\begin{aligned} \Vert K_{n}^{(1)}\Vert ^2 =&\int _{0}^{\infty } \! \biggl ( \frac{1}{n}\sum _{j=1}^n\frac{k_n X_{n,j}^{k_n-1}}{\lambda _n^{k_n+1}}{\widehat{k}}_{n} \big (1\! -\! \textrm{e}^{-tX_{n,j}}\big ) -\frac{k_n^2}{\lambda _n^{k_n+1}}\mathbb {E} \! \left[ \! X^{k_n-1}\big (1\! -\! \textrm{e}^{-tX}\big )\right] \! \biggl )^2 \! w(t)\textrm{d}t \\&+2\int _{0}^{\infty } \biggl ( \frac{1}{n}\sum _{j=1}^nk_n\frac{X_{n,j}^{k_n-1}}{\lambda _n^{k_n+1}}{\widehat{k}}_{n}\big (1-\textrm{e}^{-tX_{n,j}}\big ) -\frac{k_n^2}{\lambda _n^{k_n+1}}\mathbb {E}\left[ X^{k_n-1}\big (1-\textrm{e}^{-tX}\big )\right] \biggl ) \\&\quad \times \biggl (\frac{1}{n}\sum _{j=1}^nk_n\frac{X_{n,j}^{k_n-1}}{\lambda _n^{k_n+1}}\log \left( \frac{X_{n,j}}{{\widehat{\lambda }}_{n}}\right) {\widehat{k}}_{n}({\widehat{k}}_{n}-k_n)\big (1-\textrm{e}^{-tX_{n,j}}\big )\biggl )w(t)\textrm{d}t \\&+\int _{0}^{\infty } \biggl (\frac{1}{n}\sum _{j=1}^nk_n\frac{X_{n,j}^{k_n-1}}{\lambda _n^{k_n+1}}\log \left( \frac{X_{n,j}}{{\widehat{\lambda }}_{n}}\right) {\widehat{k}}_{n}({\widehat{k}}_{n}-k_n)\big (1-\textrm{e}^{-tX_{n,j}}\big )\biggl )^2w(t)\textrm{d}t\\ =&: I_{n,1} + 2I_{n,2} + I_{n,3}, \end{aligned}$$

say. Regarding \(I_{n,1}\), we have

$$\begin{aligned} I_{n,1}=&\int _{0}^{\infty } \biggl ( \frac{1}{n}\sum _{j=1}^nk_n\frac{X_{n,j}^{k_n-1}}{\lambda _n^{k_n+1}}k_{n} \big (1-\textrm{e}^{-tX_{n,j}}\big ) -\frac{k_n^2}{\lambda _n^{k_n+1}}\mathbb {E}\left[ X^{k_n-1}\big (1-\textrm{e}^{-tX}\big )\right] \biggl )^2w(t)\textrm{d}t \\&+\int _{0}^{\infty } \biggl ( \frac{1}{n}\sum _{j=1}^nk_n\frac{X_{n,j}^{k_n-1}}{\lambda _n^{k_n+1}}({\widehat{k}}_{n}-k_{n}) \big (1-\textrm{e}^{-tX_{n,j}}\big ) \biggl )^2w(t)\textrm{d}t \\&+2\int _{0}^{\infty } \biggl ( \frac{1}{n}\sum _{j=1}^nk_n\frac{X_{n,j}^{k_n-1}}{\lambda _n^{k_n+1}}({\widehat{k}}_{n}-k_{n}) \big (1-\textrm{e}^{-tX_{n,j}}\big ) \biggl ) \\&\quad \times \biggl ( \frac{1}{n}\sum _{j=1}^nk_n\frac{X_{n,j}^{k_n-1}}{\lambda _n^{k_n+1}}k_{n} \big (1-\textrm{e}^{-tX_{n,j}}\big ) -\frac{k_n^2}{\lambda _n^{k_n+1}}\mathbb {E}\left[ X^{k_n-1}\big (1-\textrm{e}^{-tX}\big )\right] \biggl ) w(t)\textrm{d}t \\ =&: I_{n,1}^{(1)} + I_{n,1}^{(2)} + 2I_{n,1}^{(3)}, \end{aligned}$$

say. To tackle \(I_{n,1}^{(2)}\), we use Fubini’s theorem and the convergence in distribution of \((X_{n,i},X_{n,j},k_n)\) to \((X^{(1)},X^{(2)},k_0)\) as \(n \rightarrow \infty\) for \(i\ne j\), where \(X^{(1)},X^{(2)}\) are i.i.d. random variables having the Weibull distribution \(W(\lambda _0,k_0)\). Invoking the continuous mapping theorem, the inequality \(1-\textrm{e}^{-t}\le t\) for \(t \ge 0\) and assumption (5), it follows that

$$\begin{aligned}&\sup _{n \in \mathbb {N}} \mathbb {E} \bigg [ \int _{0}^{\infty } \biggl ( \frac{1}{n}\sum _{j=1}^nk_n\frac{X_{n,j}^{k_n-1}}{\lambda _n^{k_n+1}} \big (1-\textrm{e}^{-tX_{n,j}}\big ) \biggl )^2w(t)\textrm{d}t \bigg ] \nonumber \\ \le&\sup _{n \in \mathbb {N}} \frac{1}{n^2} \sum _{i,j=1}^n \mathbb {E} \bigg [ \bigg (k_n\frac{X_{n,i}^{k_n}}{\lambda _n^{k_n+1}} \biggl )\bigg (k_n\frac{X_{n,j}^{k_n}}{\lambda _n^{k_n+1}} \biggl ) \bigg ] \int _{0}^{\infty } t^2 w(t)\textrm{d}t \nonumber \\ =&\sup _{n \in \mathbb {N}} \frac{k_n^2}{\lambda _n^{2(k_n-1)}} \frac{1}{n^2} \Big (n(n-1) \mathbb {E} \big [ X_{n,1}^{k_n} X_{n,2}^{k_n} \big ] +n \mathbb {E} \big [ X_{n,1}^{2k_n} \big ] \Big ) \int _{0}^{\infty } t^2 w(t)\textrm{d}t < \infty . \end{aligned}$$
(26)

By Markov’s inequality, the expression inside the expectation in (26) is a tight sequence. Since \({\widehat{k}}_n - k_n \rightarrow 0\) almost surely as \(n \rightarrow \infty\), we have \(I_{n,1}^{(2)} = o_{\mathbb {P}}(1)\). We now show that \(I_{n,1}^{(1)}\) converges to \(0\) in \(\mathscr {L}^1(\Omega ,\mathscr {A},\mathbb {P})\). With the same arguments as above, it follows that

$$\begin{aligned} \mathbb {E}\big [ I_{n,1}^{(1)} \big ]=&\int _{0}^{\infty } \Biggl \{ \mathbb {E}\Biggl [ \frac{1}{n^2}\sum _{i,j=1}^n \left( \frac{k_n^2}{\lambda _n^{k_n+1}}\right) ^2 X_{n,i}^{k_n-1}X_{n,j}^{k_n-1}\big (1- \textrm{e}^{-tX_{n,i}}\big )\big (1-\textrm{e}^{-tX_{n,j}}\big ) \Biggl ] \\&-2\frac{k_n^2}{\lambda _n^{k_n+1}}\mathbb {E}\left[ X^{k_n-1}\big (1-\textrm{e}^{-tX}\big )\right] \mathbb {E}\Biggl [ \frac{1}{n}\sum _{j=1}^n\frac{k_n^2}{\lambda _n^{k_n+1}}X_{n,j}^{k_n-1}\big (1-\textrm{e}^{-tX_{n,j}}\big ) \Biggl ] \\&+\left( \frac{k_n^2}{\lambda _n^{k_n+1}}\right) ^2\mathbb {E}\left[ X^{k_n-1}\big (1-\textrm{e}^{-tX}\big )\right] ^2 \Biggl \}w(t)\textrm{d}t \\ =&\left( \frac{k_n^2}{\lambda _n^{k_n+1}}\right) ^2 \Biggl \{\frac{1}{n}\int _{0}^{\infty }\text{ Var }\left[ X_{n,1}^{k_n-1}\big (1-\textrm{e}^{-tX_{n,1}}\big )\right] w(t)\textrm{d}t \\&+\int _{0}^{\infty } \biggl ( \mathbb {E}\left[ X^{k_n-1}\big (1-\textrm{e}^{-tX}\big )\right] -\mathbb {E}\left[ X_{n,1}^{k_n-1}\big (1-\textrm{e}^{-tX_{n,1}}\big )\right] \biggl )^2 w(t)\textrm{d}t \Biggl \}. \end{aligned}$$

Using again \(1-\textrm{e}^{-t} \le t\) for \(t \ge 0\), the variance is bounded from above by \(t^2 \mathbb {E}[X_{n,1}^{2k_n}]\), and the last integral converges to zero as \(n \rightarrow \infty\) by dominated convergence. Hence, \(\mathbb {E}\big [ I_{n,1}^{(1)} \big ] \rightarrow 0\) and thus \(I_{n,1}^{(1)} = o_{\mathbb {P}}(1)\). Likewise, the Cauchy-Schwarz inequality implies \(I_{n,1}^{(3)} = o_{\mathbb {P}}(1)\). Moreover, with a similar reasoning, one obtains \(I_{n,2} = o_{\mathbb {P}}(1)\) and \(I_{n,3} = o_{\mathbb {P}}(1)\) and thus \(\Vert K_{n}^{(1)}\Vert ^2 = o_{\mathbb {P}}(1)\). Using the tightness of the sequence \(\sqrt{n} ({\widehat{\lambda }}_{n}-\lambda _n)\) and display (7) we conclude \(\Vert V_{n}^{(2)}(\cdot )-V_{n}^{(3)}(\cdot )\Vert ^2=o_{\mathbb {P}}(1)\), where

$$\begin{aligned} V_{n}^{(3)}(t)=&\frac{1}{\sqrt{n}} \sum _{j=1}^{n} \Bigg \{\frac{1}{X_{n,j}}\Biggl (\left( \frac{X_{n,j}}{\lambda _n}\right) ^{k_n} \left( {\widehat{k}}_{n}+\log \left( \frac{X_{n,j}}{{\widehat{\lambda }}_{n}}\right) {\widehat{k}}_{n}({\widehat{k}}_{n}-k_n)\right) -{\widehat{k}}_{n}+1\Biggl )\\&\times \big (1-\textrm{e}^{-tX_{n,j}}\big )-t \textrm{e}^{-tX_{n,j}} -\psi _{1}(X_{n,j},\lambda _n,k_n)\frac{k_n^2}{\lambda _n^{k_n+1}} \mathbb {E}\left[ X^{k_n-1}\big (1-\textrm{e}^{-tX}\big )\right] \Bigg \}. \end{aligned}$$

We can write

$$\begin{aligned} V_{n}^{(3)}(t)=&\frac{1}{\sqrt{n}} \sum _{j=1}^{n} \Bigg \{ \frac{1}{X_{n,j}}\Biggl (\left( \frac{X_{n,j}}{\lambda _n}\right) ^{k_n} {\widehat{k}}_{n}-{\widehat{k}}_{n}+1\Biggl ) \big (1-\textrm{e}^{-tX_{n,j}}\big )-t\textrm{e}^{-tX_{n,j}} \\&-\psi _{1}(X_{n,j},\lambda _n,k_n)\frac{k_n^2}{\lambda _n^{k_n+1}} \mathbb {E}\left[ X^{k_n-1}\big (1-\textrm{e}^{-tX}\big )\right] \Bigg \} \\&+\sqrt{n} ({\widehat{k}}_{n}-k_n) \Bigg (\frac{k_n}{\lambda _n^{k_n}}\mathbb {E}\left[ \log \left( \frac{X}{\lambda _n}\right) X^{k_n-1}\big (1-\textrm{e}^{-tX}\big )\right] + K_{n}^{(2)}(t) \Bigg ), \end{aligned}$$

where

$$\begin{aligned} K_{n}^{(2)}(t) =&\frac{1}{n}\sum _{j=1}^n \log \! \left( \frac{X_{n,j}}{{\widehat{\lambda }}_{n}}\right) \frac{{\widehat{k}}_{n} X_{n,j}^{k_n-1}}{\lambda _n^{k_n}} \big (1\! -\! \textrm{e}^{-t X_{n,j}}\big ) - \frac{k_n}{\lambda _n^{k_n}}\mathbb {E} \! \left[ \log \! \left( \frac{X}{\lambda _n}\right) X^{k_n-1}\big (1\! - \! \textrm{e}^{-t X}\big )\right] . \end{aligned}$$

In a similar way as for \(K_{n}^{(1)}\), one can show that \(\Vert K_{n}^{(2)}\Vert ^2 = o_{\mathbb {P}}(1)\). Using the tightness of the sequence \(\sqrt{n} ({\widehat{k}}_{n}-k_n)\) and display (8) we conclude \(\Vert V_{n}^{(3)}(\cdot )-V_{n}^{(4)}(\cdot )\Vert ^2=o_{\mathbb {P}}(1)\), where

$$\begin{aligned} V_{n}^{(4)}(t)=&\frac{1}{\sqrt{n}} \sum _{j=1}^{n} \Bigg \{ \frac{1}{X_{n,j}}\Biggl (\left( \frac{X_{n,j}}{\lambda _n}\right) ^{k_n} {\widehat{k}}_{n}-{\widehat{k}}_{n}+1\Biggl ) \big (1-\textrm{e}^{-tX_{n,j}}\big )-t\textrm{e}^{-tX_{n,j}} \\&-\psi _{1}(X_{n,j},\lambda _n,k_n)\frac{k_n^2}{\lambda _n^{k_n+1}} \mathbb {E}\left[ X^{k_n-1}\big (1-\textrm{e}^{-tX}\big )\right] \\&+\psi _{2}(X_{n,j},\lambda _n,k_n)\frac{k_n}{\lambda _n^{k_n}} \mathbb {E}\left[ X^{k_n-1}\log \left( \frac{X}{\lambda _n}\right) \big (1-\textrm{e}^{-tX}\big )\right] \Bigg \}. \end{aligned}$$

Next, we rewrite

$$\begin{aligned} V_{n}^{(4)}(t)=&\frac{1}{\sqrt{n}} \sum _{j=1}^{n} \Bigg \{ \frac{1}{X_{n,j}}\Biggl ( \Biggl (\left( \frac{X_{n,j}}{\lambda _n}\right) ^{k_n}-1 \Biggl )k_n+1\Biggl ) \big (1- \textrm{e}^{-tX_{n,j}}\big )-t\textrm{e}^{-tX_{n,j}} \\&-\psi _{1}(X_{n,j},\lambda _n,k_n)\frac{k_n^2}{\lambda _n^{k_n+1}} \mathbb {E}\left[ X^{k_n-1}\big (1-\textrm{e}^{-tX}\big )\right] \\&+\psi _{2}(X_{n,j},\lambda _n,k_n)\frac{k_n}{\lambda _n^{k_n}} \mathbb {E}\left[ X^{k_n-1}\log \left( \frac{X}{\lambda _n}\right) \big (1-\textrm{e}^{-tX}\big )\right] \Bigg \} \\&+\sqrt{n} ({\widehat{k}}_{n}\! - \! k_n) \bigg (\! -\mathbb {E} \! \left[ X^{-1}\big (1\! -\! \textrm{e}^{-tX}\big )\right] +\frac{1}{\lambda _n^{k_n}} \mathbb {E} \! \left[ X^{k_n-1} \big (1\! -\! \textrm{e}^{-tX}\big )\right] + K_{n}^{(3)}(t) \! \bigg ), \end{aligned}$$

where

$$\begin{aligned} K_{n}^{(3)}(t) =&\frac{1}{n}\sum _{j=1}^n \frac{1}{X_{n,j}} \Biggl (\left( \frac{X_{n,j}}{\lambda _n}\right) ^{k_n}-1 \Biggl ) \big (1-\textrm{e}^{-tX_{n,j}}\big ) +\mathbb {E}\left[ X^{-1}\big (1-\textrm{e}^{-tX}\big )\right] \\ {}&-\frac{1}{\lambda _n^{k_n}} \mathbb {E}\left[ X^{k_n-1}\big (1-\textrm{e}^{-tX}\big )\right] . \end{aligned}$$

It is straightforward to show that \(\Vert K_{n}^{(3)}\Vert ^2 = o_{\mathbb {P}}(1)\), arguing as for \(K_{n}^{(1)}\). Due to the tightness of \(\sqrt{n} ({\widehat{k}}_{n}-k_n)\) and (8), we obtain (25).

Note that \(W_{n,j}, j=1,\ldots ,n\), are centered and row-wise i.i.d. random elements of \(\mathscr {L}_w^2\) with finite second moments, i.e., we have \(\mathbb {E} \Vert W_{n,1} \Vert ^2 < \infty\) for all \(n\). Furthermore, by dominated convergence we conclude that \(\lim _{n \rightarrow \infty } \mathbb {E}[W_{n,1}(s)W_{n,1}(t)] = \mathbb {E}[W(s)W(t)]\), where \(W\) is defined in the claim of the theorem.

Step 2: By assumptions (9) and (10), there is a function \({\widetilde{c}}\) such that \(\vert \mathbb {E}[W_{n,1}(s)W_{n,1}(t)] \vert \le {\widetilde{c}}(s,t)\) for each \(n\) and all \((s,t) \in [0, \infty ) \times [0, \infty )\). Moreover, by assumption (5),

$$\begin{aligned} \int _0^{\infty } \int _0^{\infty } {\widetilde{c}}(s,t)^i w(s)w(t)\, \textrm{d}s\, \textrm{d}t < \infty , \qquad i=1,2. \end{aligned}$$
(27)

Therefore, the Lindeberg–Feller central limit theorem and Slutsky’s lemma imply

$$\begin{aligned} \frac{1}{\sqrt{n}} \sum _{j=1}^n \langle W_{n,j},g\rangle {\mathop {\longrightarrow }\limits ^{D}} N(0,\sigma _{(\lambda _0,k_0)}^2(g)), \qquad g \in \mathscr {L}_w^2 \setminus \{0\}, \end{aligned}$$

where \(\sigma _{(\lambda _0,k_0)}^2(g) = \lim _{n\rightarrow \infty } \mathbb {E} \big [ \langle W_{n,1},g\rangle ^2 \big ] = \mathbb {E} \big [ \langle W,g\rangle ^2 \big ].\) The last equality follows from (27). Note that Lindeberg’s condition is easily verified since \(W_{n,j}\) are i.i.d. for \(j=1,\ldots ,n\). Thus, an application of Lemma 3.1 of Chen and White (1998) yields \(V_n {\mathop {\longrightarrow }\limits ^{D}} \mathcal {W}\) for some centered Gaussian random element \(\mathcal {W}\) of \(\mathscr {L}_w^2\) with covariance operator \({\widetilde{\Sigma }}_{(\lambda _0,k_0)}\) satisfying \(\sigma _{(\lambda _0,k_0)}^2(g)=\langle {\widetilde{\Sigma }}_{(\lambda _0,k_0)}g,g \rangle\) for each \(g \in \mathscr {L}_w^2 {\setminus } \{0\}\). By Fubini’s theorem and dominated convergence, we obtain

$$\begin{aligned} \sigma _{(\lambda _0,k_0)}^2(g)&= \lim _{n\rightarrow \infty } \int _0^{\infty } \int _0^{\infty } \mathbb {E} \big [ W_{n,1}(t)W_{n,1}(s) \big ] g(t)g(s)w(t)w(s)\textrm{d}t\textrm{d}s\\ {}&= \int _0^{\infty } (\Sigma _{(\lambda _0,k_0)}g)(s)g(s)w(s)\textrm{d}s, \end{aligned}$$

where \(\Sigma _{(\lambda _0,k_0)}\) is given by (16). Thus \({\widetilde{\Sigma }}_{(\lambda _0,k_0)}= \Sigma _{(\lambda _0,k_0)}\) and the assertion follows. \(\square\)
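
To illustrate how Theorem 5 is used in practice, the following is a minimal sketch of a parametric bootstrap test based on a weighted \(L^2\)-statistic \(T_n=\Vert V_n\Vert ^2\) built on the characterization of Theorem 1. The weight \(w(t)=\textrm{e}^{-t}\), the truncated quadrature grid, all tuning constants and the function names are illustrative choices and need not coincide with the implementation used for the simulations in the paper or in the EWGoF package.

import numpy as np
from scipy import integrate, stats

def weibull_mle(x):
    # maximum likelihood fit of the two-parameter Weibull (location fixed at 0)
    k, _, lam = stats.weibull_min.fit(x, floc=0)
    return lam, k

def v_n(x, t, lam, k):
    # empirical process based on the characterization of Theorem 1,
    # evaluated at the estimated parameters (lam, k)
    xt = x[:, None] * t[None, :]
    a = (k * (x / lam) ** k - k + 1) / x
    s = a[:, None] * (1 - np.exp(-xt)) - t[None, :] * np.exp(-xt)
    return s.sum(axis=0) / np.sqrt(x.size)

def t_stat(x, t_grid=np.linspace(1e-3, 25.0, 500)):
    # T_n = integral of V_n(t)^2 w(t) dt with w(t) = exp(-t), truncated at t = 25
    lam, k = weibull_mle(x)
    v = v_n(x, t_grid, lam, k)
    return integrate.trapezoid(v ** 2 * np.exp(-t_grid), t_grid)

def bootstrap_pvalue(x, B=500, seed=0):
    # parametric bootstrap: resample from the fitted Weibull and recompute T_n
    rng = np.random.default_rng(seed)
    lam, k = weibull_mle(x)
    t0 = t_stat(x)
    tb = np.array([t_stat(lam * rng.weibull(k, size=x.size)) for _ in range(B)])
    return (1 + np.sum(tb >= t0)) / (B + 1)

# example: data generated under the null hypothesis, so a small p-value is unlikely
x = 2.0 * np.random.default_rng(7).weibull(1.5, size=100)
print(bootstrap_pvalue(x))

Since the limit null law of \(T_n\) depends on the unknown parameters, critical values are obtained by recomputing the statistic on samples drawn from the fitted null model; this mirrors the parametric bootstrap procedure whose validity is established in the paper.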

A.3 Proof of Theorem 7

Let \(\mu _n\) and \(\nu _n\) denote the probability measures of \((X_{n,1},\ldots ,X_{n,n})\) under \(H_0\) and in the situation of the assertion, respectively. As in the proof of Theorem 5, we have

$$\begin{aligned} \biggl \Vert V_n - \frac{1}{\sqrt{n}} \sum _{j=1}^n W_{n,j}^* \biggl \Vert ^2= o_{\mu _n}(1), \end{aligned}$$

where

$$\begin{aligned} W_{n,j}^*(t)=&\frac{1}{X_{n,j}}\biggl (\left( \frac{X_{n,j}}{\lambda }\right) ^{k} k-k+1\biggl )\big (1-\textrm{e}^{-tX_{n,j}}\big )-t\textrm{e}^{-tX_{n,j}} \\ {}&-\psi _{1}(X_{n,j},\lambda ,k)\frac{k^2}{\lambda ^{k+1}} \mathbb {E}\left[ X^{k-1}\big (1-\textrm{e}^{-tX}\big )\right] \\&+\psi _{2}(X_{n,j},\lambda ,k)\biggl (\frac{k}{\lambda ^{k}} \mathbb {E}\left[ X^{k-1}\log (X/\lambda )\big (1-\textrm{e}^{-tX}\big )\right] -\mathbb {E}\left[ X^{-1}\big (1-\textrm{e}^{-tX}\big )\right] \\ {}&+\frac{1}{\lambda ^{k}} \mathbb {E}\left[ X^{k-1}\big (1-\textrm{e}^{-tX}\big )\right] \biggl ). \end{aligned}$$

By contiguity, it follows that

$$\begin{aligned} \biggl \Vert V_n - \frac{1}{\sqrt{n}} \sum _{j=1}^n W_{n,j}^* \biggl \Vert ^2= o_{\nu _n}(1). \end{aligned}$$
(28)

Putting

$$\begin{aligned} \delta (g)= \lim _{n \rightarrow \infty } \text{ Cov } \biggl [ \langle W_{n,1}^*,g \rangle ,c(X_{n,1})-\frac{1}{2\sqrt{n}}c(X_{n,1})^2 \biggl ] \end{aligned}$$

for \(g \in \mathscr {L}_w^2\), a combination of Slutsky’s lemma and the multivariate Lindeberg–Feller central limit theorem gives

$$\begin{aligned} \left( \begin{array}{c} \frac{1}{\sqrt{n}} \sum _{j=1}^n \langle W_{n,j}^*,g \rangle \\ \log L_n(X_{n,1},\ldots ,X_{n,n}) \end{array}\right) {\mathop {\longrightarrow }\limits ^{D_{\mu _n}}} N \left( \left( \begin{array}{c} 0 \\ -\frac{\tau ^2}{2} \end{array} \right) , \left( \begin{array}{cc} \sigma ^2(g) &{} \delta (g) \\ \delta (g) &{} \tau ^2 \end{array} \right) \right) \end{aligned}$$

for some \(\sigma ^2(g)>0\). Now Le Cam’s third lemma yields the convergence in distribution of \(\frac{1}{\sqrt{n}} \sum _{j=1}^n \langle W_{n,j}^*,g \rangle\) to the \(N(\delta (g),\sigma ^2(g))\)-law under \(\nu _n\) for every \(g \ne 0\), i.e., the convergence of the finite-dimensional distributions. Tightness under \(\nu _n\) follows by contiguity. Therefore, \(n^{-1/2} \sum _{j=1}^n W_{n,j}^{*} {\mathop {\longrightarrow }\limits ^{D_{\nu _n}}} \mathcal {W} + \zeta\), where \(\zeta\) is defined in the assertion. \(\square\)

About this article

Cite this article

Ebner, B., Fischer, A., Henze, N. et al. Goodness-of-fit tests for the Weibull distribution based on the Laplace transform and Stein’s method. Ann Inst Stat Math 75, 1011–1038 (2023). https://doi.org/10.1007/s10463-023-00873-7
