Abstract
In this paper, we address the problem of deciding whether n consecutive independent failure times share a common failure rate or whether there exists some \(k\in \{1,\ldots ,n\}\) such that the common failure rate of the first k failure times differs from the common failure rate of the last \(n-k\) failure times, based on an exponential lifetime distribution. The statistical test we propose is based on the empirical average ratio under the assumption of exponentiality. It is compared to a test based on the Mann–Whitney statistic, for which no parametric assumption on the underlying distribution is necessary. Under the null hypothesis of homogeneity of the n failure times, the proposed statistics are free of the unknown underlying distribution, which enables the critical values of the tests to be determined by Monte Carlo methods for small sample sizes.
References
Aly E, Kochar S (1997) Change point tests based on U-statistics with applications in reliability. Metrika 45:259-269
Arnold BC, Balakrishnan N, Nagaraja HN (2008) A first course in order statistics. SIAM, Philadelphia
Balakrishnan N, Ng HKT (2006) Precedence-type tests and applications. Wiley, Hoboken
Bhattacharya P, Johnson R (1968) Nonparametric tests for shifts at an unknown time point. Ann Math Statist 39:1731-1734
Bordes L, Mercier S (2012) Extended geometric processes: semiparametric estimation and application to reliability. J Iran Stat Soc 12:1-34
Chernoff H, Zacks S (1964) Estimating the current mean of a normal distribution which is subjected to changes in time. Ann Math Stat 35:999-1018
Csörgö M, Horváth L (1997) Limit theorems in change-point analysis. Wiley, New York
Einmahl J, McKeague I (2003) Empirical likelihood based hypothesis testing. Bernoulli 9:267-290
Gibbons J, Chakraborti S (2003) Nonparametric statistical inference. Marcel Dekker, New York
Johnson NL, Kotz S, Balakrishnan N (1995a) Continuous univariate distributions, vol 1. Wiley, New York
Johnson NL, Kotz S, Balakrishnan N (1995b) Continuous univariate distributions, vol 2. Wiley, New York
Lam Y (2007) The geometric process and its applications. World Scientific, Singapore
Lindsey J (2004) Statistical analysis of stochastic processes in time. Cambridge University Press, Cambridge
Lombard F (1987) Rank tests for change-point problems. Biometrika 74:615-624
Page E (1954) Continuous inspection schemes. Biometrika 41:100-114
Page E (1955) A test for a change in a parameter occurring at an unknown point. Biometrika 42:523-527
Page E (1957) On problems in which a change in parameters occurs at an unknown point. Biometrika 44:248-252
Pettitt A (1979) A non-parametric approach to the change-point problem. Appl Stat 28:126-135
Pettitt A (1980) Some results on estimating a change-point using non-parametric type statistics. J Stat Comput Simul 11:261-272
Proschan F (1963) Theoretical explanation of observed decreasing failure rate. Technometrics 5:375-383
Sen A, Srivastava M (1975) On tests for detecting change in mean. Ann Stat 3:98-108
Zou C, Liu Y, Qin P, Wang Z (2007) Empirical likelihood ratio test for the change-point problem. Stat Prob Lett 77:374-382
Appendix: Proofs
1.1 Proof of Proposition 1
In the case of the exponential distribution, the statistic \(S_{n,k}^{(1)}\) involves ratios of independent Erlang distributed random variables. Recall that a random variable X follows the Erlang distribution with parameters \(r \in \mathbb {N}^*\) and \(\alpha >0\) if its probability density function f is given by
$$\begin{aligned} f(x)=\frac{\alpha ^r x^{r-1}e^{-\alpha x}}{\varGamma (r)}, \quad x>0, \end{aligned}$$
where \(\varGamma (r)=(r-1)!\). We denote it by \(X\sim Erl(r,\alpha )\). In particular, we use the simplified notation \(Erl(r)\equiv Erl(r,1)\).
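A quick numerical sanity check of this definition (a sketch assuming NumPy; `erlang_pdf` is an illustrative helper, not from the paper): the density should integrate to one, and a sum of r independent \(Exp(\alpha )\) variables, being \(Erl(r,\alpha )\) distributed, should have mean \(r/\alpha \) and variance \(r/\alpha ^2\).

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def erlang_pdf(x, r, alpha):
    """Density of Erl(r, alpha): alpha^r x^(r-1) e^(-alpha x) / (r-1)!."""
    return alpha**r * x**(r - 1) * np.exp(-alpha * x) / math.factorial(r - 1)

r, alpha = 4, 2.0

# The density should integrate to 1 (Riemann sum on a grid carrying
# essentially all of the mass for these parameter values).
xs = np.linspace(0.0, 30.0, 300_001)
area = erlang_pdf(xs, r, alpha).sum() * (xs[1] - xs[0])

# A sum of r independent Exp(alpha) variables is Erl(r, alpha):
# here mean r/alpha = 2.0 and variance r/alpha^2 = 1.0.
s = rng.exponential(scale=1 / alpha, size=(100_000, r)).sum(axis=1)
print(round(area, 3), round(s.mean(), 1), round(s.var(), 1))
```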
Calculation of the covariance matrix of \(\mathbf{S}_n\) requires the calculation of \(\mathbb {E}[1/(X(X+Y))]\), where X and Y are two independent Erlang distributed random variables.
Lemma 2
Let \(X\sim Erl(r)\) and \(Y\sim Erl(s)\) be independent. If \(r\geqslant 2\) and \(s\geqslant 1\), then
$$\begin{aligned} \mathbb {E}\left[ \frac{1}{X(X+Y)}\right] =\frac{1}{(r-1)(r+s-2)}. \end{aligned}$$
Proof
Let \(U=\tfrac{X}{X+Y}\) and \(V=X+Y\). It is well known that U and V are independent. Moreover, U is Beta distributed with parameters (r, s) and V is Erlang distributed with parameter \(r+s\). Since \(X(X+Y)=UV^2\), we have
$$\begin{aligned} \mathbb {E}\left[ \frac{1}{X(X+Y)}\right] =\mathbb {E}\left[ \frac{1}{U}\right] \mathbb {E}\left[ \frac{1}{V^2}\right] =\frac{r+s-1}{r-1}\cdot \frac{1}{(r+s-1)(r+s-2)}=\frac{1}{(r-1)(r+s-2)}, \end{aligned}$$
using expressions for the moments of the Beta distribution in Johnson et al. (1995b) and of the gamma distribution in Johnson et al. (1995a). \(\square \)
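The closed form in Lemma 2 can also be checked by simulation (a sketch assuming NumPy; it uses the fact that \(Erl(r)\) is the Gamma distribution with shape r and unit scale):

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo check of Lemma 2: for independent X ~ Erl(r), Y ~ Erl(s)
# with r >= 2 and s >= 1,  E[1/(X(X+Y))] = 1/((r-1)(r+s-2)).
r, s, n = 3, 2, 1_000_000
x = rng.gamma(shape=r, scale=1.0, size=n)    # Erl(r) = Gamma(r, 1)
y = rng.gamma(shape=s, scale=1.0, size=n)

mc = np.mean(1.0 / (x * (x + y)))
exact = 1.0 / ((r - 1) * (r + s - 2))        # = 1/6 for r = 3, s = 2
print(round(mc, 2), round(exact, 2))         # both close to 0.17
```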
Now, we shall present the proof of Proposition 1. We have
where \(R_k=T_n-T_k=\sum _{j=k+1}^n X_j\). Let \((k,k') \in \{3,\dots ,n-3\}^2\); then we find
Assuming that \(3\leqslant k < k'\leqslant n-3\), we write
and \(T_{k'} = U_{k,k'}+T_{k}\). Then, we need to find the expectation
where
These three random variables are clearly independent, and since
we readily find
Then, we obtain
Similarly, we obtain the variance of \(S_{n,k}^{(1)}\) as
Balakrishnan, N., Bordes, L., Paroissin, C. et al. Single change-point detection methods for small lifetime samples. Metrika 79, 531–551 (2016). https://doi.org/10.1007/s00184-015-0566-4