# Optimal design of inspection times for interval censoring

A Publisher Correction to this article was published on 12 September 2019


## Abstract

We treat optimal equidistant and optimal non-equidistant inspection times for interval censoring of exponential distributions. In particular, we provide a new approach for determining the optimal non-equidistant inspection times. The resulting recursive formula is related to a formula for the optimal spacing of quantiles for asymptotically best linear estimates based on order statistics and to a formula for optimal cutpoints in the discretisation of continuous random variables. Moreover, we show that censoring with the optimal non-equidistant inspection times, as well as with the optimal equidistant inspection times, incurs no loss of information as the number of inspections tends to infinity. Since optimal equidistant inspection times are easier to calculate and easier to handle in practice, we study the efficiency of optimal equidistant inspection times with respect to optimal non-equidistant inspection times. Moreover, since the optimal inspection times are only locally optimal, we also provide some results concerning maximin efficient designs.

## Introduction

In some experiments, continuous lifetime random variables cannot be observed continuously but only at some number of predefined points in time (inspection times). For example, our research is motivated by a cooperation with mechanical engineers who were interested in the wear of diamond impregnated drilling tools. In particular, they were interested in the lifetime estimation of single diamonds on the tool based on drilling experiments. At the given inspection times, the drilling was stopped and it was checked which diamonds on the drilling tool had broken out. Since all diamonds were labelled, the broken diamonds could be detected by analyzing the surface of the tool with a microscope. This is time-consuming, so not too many inspection times should be used. Originally, an inspection took place every minute over 50 min of drilling. For more details on these experiments and their results see Kansteiner et al. (2017a, b) and Malevich et al. (2018). It turned out that for some experimental setups only very few diamonds broke out within the 50 min. Therefore, the question arose whether longer inspection intervals are superior and how long the inspection intervals should be.

To model such experiments, let $$T_1,\ldots ,T_N$$ be independent nonnegative random variables (lifetime variables). However, the realizations $$t_1,\ldots ,t_N$$ of $$T_1,\ldots ,T_N$$ are not observed directly. Only realizations $$z_n$$ of $$Z_n$$, $$n=1,\ldots ,N$$, with

\begin{aligned} Z_n=i, \quad \text {if}\quad T_n\in (\tau _{i-1},\tau _i],\quad i=1,\ldots ,I+1, \end{aligned}
(1)

are observed where $$0= \tau _0<\tau _1<\cdots<\tau _I< \tau _{I+1}=\infty$$ are given inspection times.

Such data are called interval-censored data or grouped data. They appear not only in engineering science where failures of objects can only be detected at special inspection times but also in other fields like medicine, e.g. where diseases are reported at specific points in time. The analysis of such data is an old problem and was already treated in the book of Kulldorff (1961). Nevertheless, it is still a very active research area. There are several new books on this topic as those of Sun (2006) and Bogaerts et al. (2018) and many recent papers as those of Attia and Assar (2012), Ismail (2015), Ahn et al. (2018), Wang et al. (2018), and Gao et al. (2018).

The question of how to choose optimal inspection times $$\tau _1<\cdots <\tau _I$$ was already treated in the 1960s. Kulldorff (1961) listed locally optimal inspection times for the exponential distribution for $$I=1,\ldots ,6$$ and Nelson (1977) extended these results to $$I=1,\ldots ,10$$ for both equally spaced and optimally spaced inspections. Wei and Bau (1987) and Wei and Shau (1987) provided tables of locally optimal inspection times for other distributions, and Parmigiani (1998) and Inoue and Parmigiani (2002) studied a Bayesian approach to finding optimal inspection times. Related results can also be found in the context of contingent valuation and optimal cutpoints (Gunduz and Torsney 2006; Nguyen and Torsney 2007; Schmidt and Schwabe 2015). After Aggarwala (2001) introduced progressive Type I interval censoring, several papers such as those of Lin et al. (2009), Tsai and Lin (2010), Wu and Huang (2010), and Attia and Assar (2012) treated optimal inspection times for progressive interval censoring for several types of distributions. There are also other design considerations for interval-censored data, such as the determination of the sample size for comparing several groups (Lui 1993) or of stress levels in accelerated life tests (Yum and Choi 1989; Seo and Yum 1991; Islam and Ahmad 1994; Yang and Tse 2005; Ismail 2015).

Most of these approaches did not provide much theory about the optimal inspection times. They only calculated the locally optimal inspection times numerically and then provided tables with the results.

However, as Saleh (1964) and Park (2006) noted, there is a relationship to the optimal spacing of quantiles for asymptotically best linear estimates (ABLE) based on order statistics. The treatment of optimal spacing of quantiles also started in the 1960s (see Sarhan et al. 1963; Saleh 1966; Kulldorff 1973; Eubank 1982; Ogawa 1998) and concerned several types of distributions. For the exponential distribution, Saleh derived a recursive formula for the optimal spacing in his Ph.D. thesis (Saleh 1964). Later, this formula was published with a misprint in Saleh (1966, Theorem 6.2). Similar results, but in the context of optimal cutpoints, were also obtained in Schmidt and Schwabe (2015).

In this paper, we provide a new approach for deriving the optimal non-equidistant inspection times for the exponential distribution. The resulting recursive formula is related to the formula of Saleh and the formula of Schmidt and Schwabe, but its derivation is much easier. Moreover, we show that a standardized Fisher information based on censoring with the optimal inspection times approaches 1 as the number I of inspections tends to infinity. We prove this convergence not only for optimally spaced inspection times but also for optimally equidistantly spaced inspection times. This implies in particular that already $$I=5$$ inspections provide a high efficiency, similarly to results found by numerical calculations in Shapiro and Gulati (1996) for test procedures for the exponential distribution and in Raab et al. (2004) for the Weibull distribution.

In Sect. 2, the maximum likelihood estimator is presented and the corresponding Fisher information is given. Section 3 provides the optimal inspection times for the case of equidistantly spaced inspection times and Sect. 4 presents the results concerning the optimal non-equidistantly spaced inspection times. Since the optimal inspection times depend on the unknown parameter, i.e. they are only locally optimal, we also discuss maximin efficient designs in both sections. A comparison of locally optimal and maximin efficient equidistant and non-equidistant designs is given in Sect. 5. Finally, Sect. 6 provides a short discussion of the results.

## Maximum likelihood estimator and the Fisher information

Throughout the paper, we consider independent (non-observable) lifetime variables $$T_1,\ldots , T_N$$ and the corresponding interval-censored lifetime variables $$Z_1,\ldots ,Z_N$$ given by (1). Additionally, we assume that $$T_1,\ldots ,T_N$$ have an exponential distribution with unknown parameter $$\lambda >0$$ and corresponding cumulative distribution function $$F_{\lambda }$$. We put $$F_\lambda (\infty ):=1$$ and $$e^{-\infty }:=0$$.

Then the likelihood function for an observed value $$z_n=i$$, $$i=1,\ldots ,I+1$$, is given by

\begin{aligned} l_\lambda (z_n):= & {} P_\lambda \left( Z_n=i\right) = P_\lambda \left( T_n\in (\tau _{i-1},\tau _i]\right) \\= & {} F_\lambda (\tau _i)-F_\lambda (\tau _{i-1}) =e^{-\lambda \tau _{i-1}}-e^{-\lambda \tau _{i}}. \end{aligned}

Because of the independence assumption, the joint likelihood function of $$(z_1,\ldots ,z_N)$$ is given by

\begin{aligned} L_\lambda (z_1,\ldots ,z_N):=\prod _{i=1}^I \left( e^{-\lambda \tau _{i-1}}-e^{-\lambda \tau _{i}}\right) ^{n_i}\;\left( e^{-\lambda \tau _{I}}\right) ^{n_{I+1}} \end{aligned}
(2)

with

\begin{aligned} n_i:=\sum _{n=1}^N 1\!\!1_{\{i\}}(z_n)=\sum _{n=1}^N 1\!\!1_{(\tau _{i-1},\tau _i]}(t_n),\;\;i=1,\ldots ,I+1, \end{aligned}

where $$1\!\!1_A$$ denotes the indicator function for the set A. Then a maximum likelihood estimator for $$\lambda$$ can be easily determined by maximizing (2).
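As an illustration, the maximization of (2) (equivalently, of its logarithm) can be carried out numerically. The following is a minimal sketch assuming NumPy/SciPy; the inspection times and counts are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_likelihood(lam, tau, counts):
    """Negative log-likelihood from (2) for interval-censored exponential data.

    tau:    inspection times tau_1 < ... < tau_I (tau_0 = 0, tau_{I+1} = inf)
    counts: n_1, ..., n_{I+1}, the numbers of lifetimes per interval
    """
    bounds = np.concatenate(([0.0], tau, [np.inf]))
    # P_lam(T in (tau_{i-1}, tau_i]) = exp(-lam tau_{i-1}) - exp(-lam tau_i)
    probs = np.exp(-lam * bounds[:-1]) - np.exp(-lam * bounds[1:])
    return -np.sum(counts * np.log(probs))

# hypothetical data: I = 3 inspections, N = 100 units
tau = np.array([1.0, 2.0, 4.0])
counts = np.array([35, 25, 20, 20])   # n_1, ..., n_4
res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 10.0),
                      args=(tau, counts), method='bounded')
lam_hat = res.x
```

The one-dimensional bounded search suffices here because the log-likelihood of a one-parameter exponential family in $$\lambda$$ is concave on the search interval.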

Further, the Fisher information is given as

\begin{aligned} I_\lambda (\tau _1,\ldots ,\tau _{I}):= & {} E_\lambda \left( \left( \frac{\partial }{\partial \lambda }\ln l_\lambda (Z_n)\right) ^2\right) \nonumber \\= & {} \sum _{i=1}^{I+1} \frac{\left( \tau _i e^{-\lambda \tau _{i}}-\tau _{i-1}e^{-\lambda \tau _{i-1}}\right) ^2}{\left( e^{-\lambda \tau _{i-1}}-e^{-\lambda \tau _{i}}\right) ^2}\,P_\lambda (Z_n=i) \nonumber \\= & {} \frac{1}{\lambda ^2}\left( \sum _{i=1}^{I} \frac{\left( \lambda \tau _i e^{-\lambda \tau _{i}}-\lambda \tau _{i-1}e^{-\lambda \tau _{i-1}}\right) ^2}{e^{-\lambda \tau _{i-1}}-e^{-\lambda \tau _{i}}} + (\lambda \tau _I)^2 e^{-\lambda \tau _I}\right) . \end{aligned}
(3)

Thus, to find optimal inspection times $$\tau _1^*,\ldots ,\tau _{I}^*$$ so that $$I_\lambda (\tau _1,\ldots ,\tau _{I})$$ is maximized, it is sufficient to use the substitution $$x_i:=\lambda \tau _i$$ and to find $$x_1^*,\ldots ,x_I^*$$ which maximize

\begin{aligned} f_I(x_1,\ldots ,x_I):=\sum _{i=1}^{I} \frac{\left( x_i e^{-x_{i}}-x_{i-1}e^{-x_{i-1}}\right) ^2}{e^{-x_{i-1}}-e^{-x_{i}}} + x_I^2 e^{-x_I}, \end{aligned}
(4)

where $$x_0=x_0^*=0$$. Here, $$f_I(x_1,\ldots ,x_I)$$ is a standardized Fisher information and the above substitution is in the spirit of the canonical form of Ford et al. (1992). In particular, for the optimal $$x_1^*,\ldots ,x_I^*$$, the quantity

\begin{aligned} \frac{1}{N\lambda ^2} f_I(x_1^*,\ldots ,x_I^*)^{-1} \end{aligned}

is the asymptotic variance of the asymptotically best linear estimate (ABLE) for $$\frac{1}{\lambda }$$ based on order statistics for the exponential distribution, and $$x_1^*,\ldots ,x_I^*$$ are the quantiles of the so-called optimal spacing of quantiles, see Sarhan et al. (1963) and Saleh (1966). These optimal quantiles have the advantage of being independent of the unknown parameter $$\lambda$$, while the optimal inspection times depend on $$\lambda$$.

The following lemma provides another representation of $$f_I$$ in (4) which can be found in Theorem 6.2 in Saleh (1966) in the context of optimal spacing of quantiles.

### Lemma 1

The function $$f_I(x_1,\ldots ,x_I)$$ in (4) can be simplified as follows

\begin{aligned} f_I(x_1,\ldots ,x_I)=\sum _{i=1}^{I} \frac{\left( x_i-x_{i-1}\right) ^2}{e^{x_{i}}-e^{x_{i-1}}}. \end{aligned}
(5)

## Optimal equidistant inspection times

At first, let us consider the special case of a design with equidistant inspection times $$\tau _1=\varDelta , \tau _2=2\varDelta ,\ldots , \tau _I=I\varDelta$$. Equidistant designs are useful in applications because their implementation and realization are more convenient. In this case, (3) becomes

\begin{aligned} I_{\lambda ,eq}(\varDelta )=\frac{1}{\lambda ^2}\left( \sum _{i=1}^{I} \frac{\left( \lambda i\varDelta e^{-\lambda i \varDelta }-\lambda (i-1)\varDelta e^{-\lambda (i-1)\varDelta }\right) ^2}{e^{-\lambda (i-1)\varDelta }-e^{-\lambda i\varDelta }} + (\lambda I \varDelta )^2 e^{-\lambda I\varDelta }\right) . \end{aligned}

Again, with the substitution $$x:=\lambda \varDelta$$, the maximization of $$I_{\lambda ,eq}(\varDelta )$$ with respect to $$\varDelta$$ is equivalent to the maximization of

\begin{aligned} f_{I,eq}(x):= & {} \sum _{i=1}^{I} \frac{\left( i\, x e^{-i\, x}- (i-1)\, x e^{- (i-1)\, x}\right) ^2}{e^{- (i-1)\, x}-e^{- i\, x}} + ( I\, x)^2 e^{- I\, x}. \end{aligned}
(6)

Hence, if $$f_{I,eq}$$ attains its maximum at $$x_{eq}^*:=x_{eq}^*(I)$$, then the maximizer $$\varDelta ^*(\lambda ):=\varDelta ^*(\lambda ,I)$$ of $$I_{\lambda ,eq}(\varDelta )$$ is given by $$\varDelta ^*(\lambda )=\frac{x_{eq}^*}{\lambda }$$. The optimal equidistantly spaced inspection times are then $$\varDelta ^*(\lambda ), 2\varDelta ^*(\lambda ),\ldots ,I \varDelta ^*(\lambda )$$.

### Lemma 2

The function $$f_{I,eq}(x)$$ in (6) can be simplified as follows

\begin{aligned} f_{I,eq}(x)=\frac{e^{x} x^2 (1-e^{-I x})}{(e^{x}-1)^2}, \text{ in } \text{ particular } f_{1,eq}(x)=\frac{ x^2 }{e^{x}-1}. \end{aligned}
(7)

### Proof

Note that $$f_{I,eq}(x)$$ is a special case of the function $$f_I(x_1,\ldots ,x_I)$$ from (4) with $$x_i=ix$$ for $$i=1,\ldots ,I$$. Lemma 1 yields then the assertion. $$\square$$

The values $$x_{eq}^*$$ can be found numerically. Table 1 contains the first inspection point $$x_{eq}^*$$, the last inspection point $$I x_{eq}^*$$ and the maximum of the function $$f_{I,eq}$$ for some values of I.
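For illustration, $$x_{eq}^*$$ and the maximum of (7) can be found with a one-dimensional optimizer. A minimal sketch assuming NumPy/SciPy (the search interval (0, 10] is an assumption that safely contains the maximizer for the listed values of I):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def f_eq(x, I):
    """Standardized Fisher information (7) for I equidistant inspections."""
    return np.exp(x) * x**2 * (1.0 - np.exp(-I * x)) / (np.exp(x) - 1.0) ** 2

def x_eq_star(I):
    """Numerical maximizer x_eq*(I); the maximum itself is f_eq(x_eq_star(I), I)."""
    res = minimize_scalar(lambda x: -f_eq(x, I),
                          bounds=(1e-8, 10.0), method='bounded')
    return res.x

# first inspection point, last inspection point, maximum of f_eq, as in Table 1
for I in (1, 2, 5, 10):
    x = x_eq_star(I)
    print(I, round(x, 4), round(I * x, 4), round(f_eq(x, I), 4))
```

For $$I=1$$ the maximum is approximately 0.6476, and by Theorem 1(i) every value stays below 1.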

### Theorem 1

For the function $$f_{I,eq}(x)$$ in (6), the following holds:

1. (i)

$$f_{I,eq}(x)\le 1$$      for all      $$I\in \mathbb {N}$$      and      $$x>0$$.

2. (ii)

$$\max \{f_{I,eq}(x);\,\, x>0\} \rightarrow 1$$      as      $$I\rightarrow \infty$$.

3. (iii)

$$f_{I,eq}$$ is a unimodal function for each $$I\in \mathbb {N}$$.

### Proof

(i) Since $$f_{I,eq}(x)$$ is a special case of the function $$f_I(x_1,\ldots ,x_I)$$ from (4) with $$x_i=ix$$ for $$i=1,\ldots ,I$$, the statement (i) follows from Theorem 2 (i) in Section 4. Hence, we show here only (ii) and (iii).

(ii) Since $$1-e^{-I_1 x} < 1-e^{-I_2 x}$$ for $$I_1<I_2$$ and $$x>0$$, we have

\begin{aligned} \max \{f_{I_1,eq}(x);\,\, x>0\} < \max \{f_{I_2,eq}(x);\,\, x>0\} \end{aligned}

so that $$a_I:=\max \{f_{I,eq}(x);\,\, x>0\}$$, $$I\in \mathbb {N}$$, is an increasing sequence. From (i) it follows that $$a_I\le 1$$ for all $$I\in \mathbb {N}$$. This yields

\begin{aligned} \lim _{I\rightarrow \infty } a_I=a_\infty \le 1. \end{aligned}

Consider the function $$f_{I,eq}(x)$$ with $$x=1/\sqrt{I}$$:

\begin{aligned} f_{I,eq}\left( 1/\sqrt{I}\right) =\frac{e^{1/\sqrt{I}} (1-e^{-\sqrt{I}})}{I\,(e^{1/\sqrt{I}}-1)^2}. \end{aligned}

Using the substitution $$y:=1/\sqrt{I}$$ and L’Hospital’s rule, we obtain

\begin{aligned} \lim _{I\rightarrow \infty } f_{I,eq}\left( 1/\sqrt{I}\right) =\lim _{y\rightarrow 0} \frac{e^y y^2 (1-e^{-1/y})}{(e^y-1)^2}=1. \end{aligned}

Since $$f_{I,eq}\left( 1/\sqrt{I}\right) \le a_I$$ by definition, we obtain

\begin{aligned} a_\infty \ge \lim _{I\rightarrow \infty } f_{I,eq}\left( 1/\sqrt{I}\right) =1. \end{aligned}

Therefore, $$a_\infty =1$$.

(iii) For unimodality it is sufficient to show that $$f_{I,eq}$$ has only one extremum and that this extremum is a maximum. The first derivative of $$f_{I,eq}$$ is

\begin{aligned} f^\prime _{I,eq}(x)=\frac{x e^x\left( 2 e^x-2-x-x e^x +e^{-Ix}(2+x+x e^x-2 e^x+Ix(e^x-1))\right) }{(e^x-1)^3}. \end{aligned}

Define $$q(x):=2+x+x e^x-2 e^x$$ for $$x\ge 0$$. Note that $$q(0)=0$$ and that q(x) is strictly increasing: we have $$q^\prime (x)=1+e^x(x-1)$$, which is positive for $$0<x<1$$ since $$e^x<1/(1-x)$$ there, and trivially positive for $$x\ge 1$$. So, $$q^\prime (x)>0$$ for all $$x>0$$ and therefore $$q(x)>0$$ for $$x>0$$. Using this fact, we rewrite $$f^\prime _{I,eq}(x)$$ as follows:

\begin{aligned} f^\prime _{I,eq}(x)=\frac{x e^x\, q(x)\, e^{-Ix} (Ix) }{(e^x-1)^3}\left( \frac{1-e^{Ix}}{Ix}+\frac{1}{\frac{x(e^{x}+1)}{e^{x}-1}-2}\right) . \end{aligned}

Since $$x>0$$, $$f^\prime _{I,eq}(x)=0$$ is equivalent to

\begin{aligned} p(x):=\frac{1-e^{Ix}}{Ix}+\frac{1}{\frac{x(e^{x}+1)}{e^{x}-1}-2} = 0. \end{aligned}

Note that the function $$\frac{1-e^{Ix}}{Ix}$$ is decreasing for $$x>0$$:

\begin{aligned} \frac{d}{dx}\left( \frac{1-e^{Ix}}{Ix}\right) =\frac{e^{Ix}(1-Ix)-1}{Ix^2}<0,\quad \text {since } e^{y}(1-y)<1 \text { for all } y>0. \end{aligned}

Since $$e^x>1+x+x^2/2$$ and, consequently, $$(e^x-x)^2>(1+x^2/2)^2$$ for $$x>0$$, we show that $$\frac{x(e^{x}+1)}{e^{x}-1}$$ is increasing for $$x>0$$:

\begin{aligned} \frac{d}{dx}\left( \frac{x(e^{x}+1)}{e^{x}-1}\right) =\frac{e^{2x}-2xe^x-1}{(e^{x}-1)^2}= \frac{(e^{x}-x)^2-x^2-1}{(e^{x}-1)^2}>\frac{x^4}{4(e^{x}-1)^2}>0. \end{aligned}

This makes the function p(x) decreasing for $$x>0$$. Moreover, it is easy to check that p(x) is continuous with $$\displaystyle \lim _{x\rightarrow 0} p(x)=+\infty$$ and $$\displaystyle \lim _{x\rightarrow +\infty } p(x)=-\infty$$. Hence, there exists exactly one $$x_0>0$$ such that $$p(x_0)=0$$, with $$p(x)<0$$ for $$x>x_0$$ and $$p(x)>0$$ for $$x<x_0$$. $$\square$$

### Remark 1

From Theorem 1 and Table 1, it follows that already with $$I=5$$ equidistant inspections we obtain more than 93% of the maximum information. Note that the maximum information coincides with the information of the maximum likelihood estimator for non-censored lifetimes.

To derive maximin efficient inspection times, the efficiency of a given equidistant partition $$\varDelta , 2\varDelta , \ldots ,I\varDelta$$ with respect to the locally optimal equidistantly spaced inspections $$\varDelta ^*(\lambda ), 2 \varDelta ^*(\lambda )$$, $$\ldots , I \varDelta ^*(\lambda )$$ with $$\varDelta ^*(\lambda )=x^*_{eq}/\lambda$$ is considered. This efficiency is given by

\begin{aligned} \frac{I_{\lambda ,eq}(\varDelta )}{I_{\lambda ,eq}(\varDelta ^*(\lambda ))} = \frac{\frac{1}{\lambda ^2}f_{I,eq}(\lambda \varDelta )}{\frac{1}{\lambda ^2}f_{I,eq}(x^*_{eq})}=\frac{f_{I,eq}(\lambda \varDelta )}{f_{I,eq}(x^*_{eq})} \end{aligned}

and Lemma 2 yields

\begin{aligned} \frac{I_{\lambda ,eq}(\varDelta )}{I_{\lambda ,eq}(\varDelta ^*(\lambda ))} = \frac{e^{\lambda \varDelta } (\lambda \varDelta )^2 (1-e^{-I \lambda \varDelta })}{f_{I,eq}(x^*_{eq}) (e^{\lambda \varDelta }-1)^2}. \end{aligned}

However, it makes no sense to allow all $$\lambda >0$$ for the maximin efficiency. Since $$\varDelta >0$$, it follows

\begin{aligned} \lim _{\lambda \rightarrow \infty } \frac{e^{\lambda \varDelta } (\lambda \varDelta )^2 (1-e^{-I \lambda \varDelta })}{(e^{\lambda \varDelta }-1)^2}= \lim _{\lambda \rightarrow \infty } \frac{ \frac{(\lambda \varDelta )^2}{e^{\lambda \varDelta }} (1-e^{-I \lambda \varDelta })}{(1-\frac{1}{e^{\lambda \varDelta }})^2}=0. \end{aligned}

Using L’Hospital’s rule, we also obtain

\begin{aligned} \lim _{\lambda \rightarrow 0} \frac{e^{\lambda \varDelta } (\lambda \varDelta )^2 (1-e^{-I \lambda \varDelta })}{(e^{\lambda \varDelta }-1)^2}= \lim _{x\rightarrow 0}\frac{e^{x} x^2 (1-e^{-I x})}{(e^{x}-1)^2}=0. \end{aligned}

Hence, we have

\begin{aligned} \lim _{\lambda \rightarrow 0}\frac{I_{\lambda ,eq}(\varDelta )}{I_{\lambda ,eq}(\varDelta ^*(\lambda ))} = 0 =\lim _{\lambda \rightarrow \infty }\frac{I_{\lambda ,eq}(\varDelta )}{I_{\lambda ,eq}(\varDelta ^*(\lambda ))} \end{aligned}

so that $$\lambda$$ must be restricted by a lower bound L and an upper bound U to get maximin efficient inspection times.

Since $$f_{I,eq}$$ is a unimodal function for each $$I\in \mathbb {N}$$ (see Theorem 1), a maximin efficient inspection distance $$\varDelta ^*_{L,U}$$ for $$\lambda \in [L,U]$$ is given by

\begin{aligned} \varDelta ^*_{L,U}:=&\varDelta ^*([L,U]):= \arg \max _{\varDelta> 0}\min _{\lambda \in [L,U]}\frac{I_{\lambda ,eq}(\varDelta )}{I_{\lambda ,eq}(\varDelta ^*(\lambda ))}\\ =&\arg \max _{\varDelta > 0}\min \left\{ \frac{e^{L \varDelta } (L \varDelta )^2 (1-e^{-I L \varDelta })}{ (e^{L \varDelta }-1)^2}, \frac{e^{U \varDelta } (U \varDelta )^2 (1-e^{-I U \varDelta })}{ (e^{U \varDelta }-1)^2}\right\} \frac{1}{f_{I,eq}(x^*_{eq})}. \end{aligned}

This means that the maximin efficient $$\varDelta ^*_{L,U}$$ must satisfy (see e.g. Dette and Biedermann 2003)

\begin{aligned} \frac{e^{L \varDelta ^*_{L,U}} (L \varDelta ^*_{L,U})^2 (1-e^{-I L \varDelta ^*_{L,U}})}{ (e^{L \varDelta ^*_{L,U}}-1)^2} =\frac{e^{U \varDelta ^*_{L,U}} (U \varDelta ^*_{L,U})^2 (1-e^{-I U \varDelta ^*_{L,U}})}{ (e^{U \varDelta ^*_{L,U}}-1)^2}, \end{aligned}

or equivalently

\begin{aligned} f_{I,eq}(L\varDelta ^*_{L,U})=f_{I,eq}(U\varDelta ^*_{L,U}). \end{aligned}

Because of the scale equivariance of the criterion, we have the following lemma.

### Lemma 3

If $$\varDelta ^*_{L,U}$$ is maximin efficient for $$\lambda \in [L,U]$$ then $$\alpha \varDelta ^*_{L,U}$$ is maximin efficient for $$\lambda \in \left[ \frac{L}{\alpha },\frac{U}{\alpha }\right]$$ for any $$\alpha >0$$.
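As an illustration, $$\varDelta ^*_{L,U}$$ can be computed by solving $$f_{I,eq}(L\varDelta )=f_{I,eq}(U\varDelta )$$ with a root finder: by unimodality (Theorem 1(iii)), the locally optimal $$x_{eq}^*$$ yields the bracket $$[x_{eq}^*/U,\; x_{eq}^*/L]$$ on which the difference changes sign. A minimal sketch assuming NumPy/SciPy, with hypothetical bounds L and U:

```python
import numpy as np
from scipy.optimize import brentq, minimize_scalar

def f_eq(x, I):
    """Standardized Fisher information (7) for I equidistant inspections."""
    return np.exp(x) * x**2 * (1.0 - np.exp(-I * x)) / (np.exp(x) - 1.0) ** 2

def maximin_delta(L, U, I):
    """Maximin efficient inspection distance: root of f_eq(L*d) = f_eq(U*d)."""
    # locally optimal x_eq*; by unimodality, f_eq(L*d) - f_eq(U*d) is
    # negative at d = x_eq*/U and positive at d = x_eq*/L
    x_eq = minimize_scalar(lambda x: -f_eq(x, I),
                           bounds=(1e-8, 10.0), method='bounded').x
    return brentq(lambda d: f_eq(L * d, I) - f_eq(U * d, I),
                  x_eq / U, x_eq / L)

# hypothetical parameter range lambda in [0.5, 2.0] with I = 5 inspections
d_star = maximin_delta(0.5, 2.0, 5)
```

The scale equivariance of Lemma 3 can be checked directly: maximin_delta(0.25, 1.0, 5) coincides, up to numerical accuracy, with 2 * d_star.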

## Optimal non-equidistant inspection times

The aim of this section is to determine an optimal choice of the inspection times $$\tau _1,\ldots ,\tau _I$$ for a fixed number I of inspections. We want to find $$\tau _1^*(\lambda ):=\tau _{1,I}^*(\lambda )$$, $$\ldots ,\tau _{I}^*(\lambda ):=\tau _{I,I}^*(\lambda )$$ so that the information $$I_\lambda (\tau _1,\ldots ,\tau _{I})$$ in (3) is maximized. According to Sect. 2, it is sufficient to find $$x_1^*:=x_{1,I}^*$$, $$\ldots$$, $$x_I^*:=x_{I,I}^*$$ which maximize $$f_I(x_1,\ldots ,x_I)$$ given by (4) or (5). Then (3) is maximized by $$\tau _1^*(\lambda )=\frac{x_1^*}{\lambda },\ldots ,\tau _I^*(\lambda )=\frac{x_I^*}{\lambda }$$, where $$x_1^*,\ldots ,x_I^*$$ can be determined numerically. For the optimal spacing of quantiles of asymptotically best linear estimates based on order statistics, this was done already in Sarhan et al. (1963) for $$I=1,\ldots , 15$$. For optimal inspection times, this was done in Kulldorff (1961) for $$I=1,\ldots ,6$$ and in Nelson (1977) for $$I=1,\ldots ,10$$. Table 2 provides some values for I up to 50 which were calculated with Wolfram Mathematica of Wolfram Research, Inc. (2017).

The expression for $$f_I$$ in (5) was maximized in Saleh (1964, Theorem 4.2) and the recursion

\begin{aligned} x_{i+1,I}^*=x_{i,I-1}^* + x_{1,I}^*,\quad i=1,\ldots ,I-1, \end{aligned}
(8)

was proved, where $$x_{1,I-1}^*,\ldots ,x_{I-1,I-1}^*$$ and $$x_{1,I}^*,\ldots ,x_{I,I}^*$$ are the solutions for $$I-1$$ and I, respectively. Moreover, Saleh (1964) showed in Theorem 4.1 that the maximum of $$f_I$$ exists and is attained at exactly one point. Formula (8) was also derived in Schmidt and Schwabe (2015), together with another recursive formula for the optimal cutpoints and the corresponding optimal information.

We obtain similar results using a different approach. We notice that the distances between the optimal inspection times do not change with I. According to Table 2, e.g., the distances between the last point $$x^*_{I}$$ and the second last point $$x^*_{I-1}$$ are the same for all $$I\in \mathbb {N}$$. The same holds for the other distances $$d^*_i:=d^*_{i,I}=x_i^*-x_{i-1}^*$$, $$i=1,\ldots ,I$$ (see Table 3). This property follows directly from Theorem 2(ii). This theorem also provides a recursive formula for calculating the optimal inspection times $$x_1^*,\ldots ,x_I^*$$ and shows that $$f_I(x_1^*,\ldots ,x_I^*)$$ converges to 1 as the number I of inspections tends to infinity.

### Theorem 2

Let $$(x_{1}^*,\ldots ,x_{I}^*):= \arg \max \{f_I(x_1,\ldots ,x_I);\,\, x_1,\ldots ,x_I>0\}$$ with $$f_I$$ given in (4). Then the following holds:

1. (i)

$$f_I(x_1,\ldots ,x_I)\le 1$$       for all      $$I\in \mathbb {N} \qquad \text {and}\qquad x_1,\ldots ,x_I>0.$$

2. (ii)

Consider the following function

\begin{aligned} g(t,c):=\frac{t^2}{e^{t}-1}+\frac{1}{e^{t}}\, c,\qquad \quad t\ge 0,\,\, c\ge 0, \end{aligned}
(9)

and let $$c_0, c_1,\ldots , c_I$$ be defined inductively via

\begin{aligned} c_i:=\max \{g(t,c_{i-1});\,\,t\ge 0\}, \quad i=1,\ldots ,I,\quad c_0:=0. \end{aligned}
(10)

Then $$f_I(x_1^*,\ldots ,x_I^*)=c_I$$ and $$(x_{1}^*,\ldots ,x_{I}^*)$$ can be found as

\begin{aligned} x_{i}^*=\sum _{j=1}^i d_j^*,\qquad \quad d_j^*=\arg \max \{g(t,c_{I-j});\,\, t>0\},\quad i,j=1,\ldots ,I. \end{aligned}
3. (iii)

$$f_I(x_1^*,\ldots ,x_I^*)\rightarrow 1$$       as       $$I\rightarrow \infty$$.

### Proof

(i) For a proof see Saleh (1964, Lemma 4.1).

(ii) Let $$d_i:=x_i-x_{i-1}$$ for $$i=1,\ldots , I$$, where $$x_0:=d_0:=0$$. Then (5) yields

\begin{aligned} f_I(x_1,\ldots ,x_I)=\sum _{i=1}^{I} \frac{d_i^2}{e^{x_{i-1}}(e^{d_{i}}-1)} =\sum _{i=1}^{I} \frac{d_i^2}{e^{d_1+\cdots +d_{i-1}}(e^{d_{i}}-1)} =:\tilde{f}_I(d_1,\ldots ,d_I). \end{aligned}

Notice that $$\tilde{f}_I(d_1,\ldots ,d_I)$$ can be represented as

\begin{aligned} \tilde{f}_I(d_1,\ldots ,d_I)= & {} \frac{d_1^2}{e^{d_{1}}-1}+ \frac{1}{e^{d_1}}\left( \frac{d_2^2}{e^{d_{2}}-1}+ \frac{1}{e^{d_2}}\left( \frac{d_3^2}{e^{d_{3}}-1}+ \frac{1}{e^{d_3}}\left( \phantom {\frac{1}{1}}\ldots \right. \right. \right. \nonumber \\&...\left. \left. \left. \left( \frac{d_{I-1}^2}{e^{d_{I-1}}-1} +\frac{1}{e^{d_{I-1}}}\left( \frac{d_I^2}{e^{d_{I}}-1}\right) \right) \ldots \right) \right) \right) \nonumber \\= & {} g(d_1, g(d_2, g(d_3,g(\ldots g(d_{I-1},g(d_I,0))\ldots )))), \end{aligned}
(11)

where the function g(t, c) is given in (9). Since g(t, c) is an increasing function with respect to c for each $$t\ge 0$$, it holds:

The function $$\tilde{f}_I(d_1,\ldots ,d_I)$$ is maximized at

\begin{aligned}&d_I^*=\arg \max \{g(t,0);\,\, t>0\},\nonumber \\&d_{I-1}^*=\arg \max \{g(t,g(d_I^*,0));\,\, t>0\},\nonumber \\&d_{i}^*=\arg \max \{g(t,g(d_{i+1}^*,g(\ldots g(d_I^*,0)\ldots )));\,\, t>0\},\quad i=1,\ldots ,I-2.\nonumber \\ \end{aligned}
(12)

Formula (12) together with definition (10) implies

\begin{aligned}&g(d_I^*,0) = c_1,\quad g(d_{I-1}^*,g(d_I^*,0))=c_2,\ldots , \\&g(d_i^*,g(\ldots g(d_{I-1}^*,g(d_I^*,0))\ldots ))=c_{I+1-i},\quad i=1,\ldots ,I-2. \end{aligned}

Consequently, $$\tilde{f}_I(d_1^*,\ldots ,d_I^*)= c_I$$ and

\begin{aligned} d_i^*=\arg \max \{g(t,c_{I-i});\,\, t>0\},\quad i=1,\ldots ,I. \end{aligned}

The statement in (ii) follows from the equality $$\tilde{f}_I(d_1,\ldots ,d_I)=f_I(x_1,\ldots ,x_I)$$ with $$d_i=x_i-x_{i-1}$$ for $$i=1,\ldots , I$$.

(iii) We divide the proof into two parts. In the first step we show that $$f_I(x_1^*,\ldots ,x_I^*)\rightarrow c_{\infty }\le 1$$ as $$I\rightarrow \infty$$ and in the second step we prove that $$c_{\infty }=1$$.

Step 1. It follows from (ii) that $$f_I(x_1^*,\ldots ,x_I^*)=c_I$$. Therefore, it is sufficient to show that $$c_I\rightarrow c_{\infty }\le 1$$ as $$I\rightarrow \infty$$. Since the function g(t, c) is increasing with respect to c for each $$t\ge 0$$, we obtain

\begin{aligned} c^\prime< c^{\prime \prime }\quad \Longrightarrow \quad \max \{g(t,c^{\prime });\,\,t\ge 0\}<\max \{g(t,c^{\prime \prime });\,\,t\ge 0\}. \end{aligned}
(13)

Note that (13) and the recursive definition (10) imply by induction that $$(c_I)_{I\ge 0}$$ is an increasing sequence provided we establish the induction basis $$c_0<c_1$$. This base case can be shown numerically (see Tables 1 and 2 for $$I=1$$):

\begin{aligned} c_1=\max \{g(t,0);\,t\ge 0\}\approx 0.6476>0=c_0. \end{aligned}

It follows from (i) and (ii) that $$c_I\le 1$$ for all $$I\in \mathbb {N}_0$$. This means that the sequence $$c_0, c_1, \ldots$$ is an increasing sequence, which is bounded by 1. Hence, $$c_I\rightarrow c_{\infty } \le 1$$ as $$I\rightarrow \infty$$.

Step 2. Define $$h(c):=\max \{g(t,c);\,\, t\ge 0\}$$ for $$c\ge 0$$. We will prove that $$c_{\infty }=1$$ by showing the following: (a) $$c_\infty =h(c_\infty )$$; (b) $$c<h(c)$$ for $$c\in (0,1)$$.

(a) At first let us show that h is a continuous function on $$[0,\infty )$$. By definition, we have to show: for any $$\varepsilon > 0$$, there exists some $$\delta > 0$$ such that for all $$c^\prime , c^{\prime \prime }$$ with $$|c^\prime -c^{\prime \prime }|\le \delta$$, the following holds

\begin{aligned} |h(c^\prime )-h(c^{\prime \prime })|\le \varepsilon . \end{aligned}

By symmetry, we may assume that $$c^\prime \ge c^{\prime \prime }$$. Let $$t^\prime :=\arg \max \{g(t,c^\prime );\,\, t\ge 0\}$$, $$t^{\prime \prime }:=\arg \max \{g(t,c^{\prime \prime });\,\, t\ge 0\}$$ and $$\delta :=\varepsilon$$. Then using the fact that g(t, c) and, consequently, h(c) are non-decreasing in c, we obtain:

\begin{aligned}&|h(c^\prime )-h(c^{\prime \prime })|=h(c^\prime )-h(c^{\prime \prime })= g(t^\prime ,c^\prime )-g(t^{\prime \prime },c^{\prime \prime })\\&\quad \le g(t^\prime ,c^\prime )-g(t^{\prime },c^{\prime \prime })=\frac{1}{e^{t^\prime }} (c^\prime -c^{\prime \prime })\le \varepsilon . \end{aligned}

Thus, h is continuous. In Step 1 we showed that $$c_I\rightarrow c_\infty$$ as $$I\rightarrow \infty$$, where $$c_I=h(c_{I-1})$$ with $$c_0=0$$. The continuity of h yields $$c_\infty =h(c_\infty )$$.

(b) Let us show that $$c<h(c)$$ for $$c\in (0,1)$$. Consider

\begin{aligned} g(t,c)-c = \frac{t^2}{e^{t}-1}+\frac{1}{e^{t}}\, c -c = \frac{t^2}{e^{t}-1}+ \frac{1-e^t}{e^{t}}\, c= \frac{t^2 e^t -c(e^t-1)^2}{e^{t}(e^{t}-1)}. \end{aligned}

The fact that $$e^t\le 1/(1-t)$$ for any $$t\in [0,1)$$ yields $$(e^t-1)^2\le (t/(1-t))^2$$ and

\begin{aligned} g(t,c)-c \ge \frac{t^2 e^t -\frac{c t^2}{(1-t)^2}}{e^{t}(e^{t}-1)}=\frac{t^2 \left( e^t -\frac{c }{(1-t)^2}\right) }{e^{t}(e^{t}-1)} \end{aligned}

for $$t\in (0,1)$$. Let $$t_0=1-\sqrt{c}$$. Note that $$t_0\in (0,1)$$, since $$c\in (0,1)$$. Then

\begin{aligned} g(t_0,c)-c \ge \frac{(1-\sqrt{c})^2 \left( e^{1-\sqrt{c}} -1\right) }{e^{1-\sqrt{c}}(e^{1-\sqrt{c}}-1)}>0 \end{aligned}

and, consequently, $$h(c)\ge g(t_0,c)>c$$ for all $$c\in (0,1)$$.

Suppose that $$c_\infty <1$$. Since $$c_\infty \ge c_1>0$$, it follows that $$c_\infty < h(c_\infty )$$, which contradicts the fact that $$c_\infty =h(c_\infty )$$ shown in (a). So, $$c_\infty =1$$. $$\square$$

### Remark 2

From Theorem 2 and from Table 2, it follows that already with $$I=5$$ inspections we obtain more than 94% of the maximum information.
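The recursion (10) translates directly into a few lines of code. Below is a minimal numerical sketch assuming NumPy/SciPy; the search interval (0, 10] for each inner maximization is an assumption that safely contains the maximizers:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def g(t, c):
    """Function (9): g(t, c) = t^2/(e^t - 1) + c * e^(-t)."""
    return t**2 / np.expm1(t) + c * np.exp(-t)

def optimal_inspections(I):
    """Recursion (10): returns c_I and the optimal points x_1*, ..., x_I*."""
    c = 0.0                     # c_0 = 0
    t_star = []                 # argmax of g(., c_{i-1}) at stage i
    for _ in range(I):
        res = minimize_scalar(lambda t: -g(t, c),
                              bounds=(1e-8, 10.0), method='bounded')
        t_star.append(res.x)
        c = g(res.x, c)
    # d_j* = argmax g(t, c_{I-j}), so the stored argmax values give the
    # optimal spacings in reverse order; the x_i* are their partial sums
    return c, np.cumsum(t_star[::-1])

c5, x5 = optimal_inspections(5)
```

One can check that $$c_1\approx 0.6476$$ and that $$c_5$$ exceeds 0.94, in line with Remark 2; moreover, the last spacing $$x_I^*-x_{I-1}^*$$ always equals the argmax of g(t, 0), illustrating that the optimal spacings do not depend on I.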

To derive maximin efficient inspection times, the efficiency of a given partition $$\tau _1,\ldots ,\tau _I$$ with respect to the locally optimal inspections $$\tau _1^*(\lambda )=\frac{x_1^*}{\lambda },\ldots ,$$$$\tau _{I}^*(\lambda )=\frac{x_I^*}{\lambda }$$ is considered:

\begin{aligned}&\frac{I_\lambda (\tau _1,\ldots ,\tau _{I})}{I_\lambda (\tau _1^*(\lambda ), \ldots ,\tau _{I}^*(\lambda ))}\nonumber \\&\quad = \frac{\frac{1}{\lambda ^2}\sum _{i=1}^{I+1} \frac{\left( \lambda \tau _i e^{-\lambda \tau _{i}}-\lambda \tau _{i-1}e^{-\lambda \tau _{i-1}}\right) ^2}{e^{-\lambda \tau _{i-1}}-e^{-\lambda \tau _{i}}}}{\frac{1}{\lambda ^2}f_I(x_1^*,\ldots ,x_I^*)} = \frac{\sum _{i=1}^{I+1} \frac{\left( \lambda \tau _i e^{-\lambda \tau _{i}}-\lambda \tau _{i-1}e^{-\lambda \tau _{i-1}}\right) ^2}{e^{-\lambda \tau _{i-1}}-e^{-\lambda \tau _{i}}}}{f_I(x_1^*,\ldots ,x_I^*)}. \end{aligned}
(14)

Since $$0<\tau _1<\ldots<\tau _I<\tau _{I+1}$$, L’Hospital’s rule yields for $$i=1,\ldots ,I+1$$

\begin{aligned}&\lim _{\lambda \rightarrow \infty }\frac{\left( \lambda \tau _i e^{-\lambda \tau _{i}}-\lambda \tau _{i-1}e^{-\lambda \tau _{i-1}}\right) ^2}{e^{-\lambda \tau _{i-1}}-e^{-\lambda \tau _{i}}} = \lim _{\lambda \rightarrow \infty }\frac{e^{-\lambda \tau _{i-1}} \left( \lambda \tau _i e^{-\lambda (\tau _{i}-\tau _{i-1})}-\lambda \tau _{i-1}\right) ^2}{1-e^{-\lambda (\tau _{i}-\tau _{i-1})}}\\&\quad = 0 = \lim _{\lambda \rightarrow 0}\frac{e^{-\lambda \tau _{i-1}} \left( \lambda \tau _i e^{-\lambda (\tau _{i}-\tau _{i-1})}-\lambda \tau _{i-1}\right) ^2}{1-e^{-\lambda (\tau _{i}-\tau _{i-1})}}. \end{aligned}

Hence, we have again

\begin{aligned} \lim _{\lambda \rightarrow 0}\frac{I_\lambda (\tau _1,\ldots ,\tau _{I})}{I_\lambda (\tau _1^*(\lambda ),\ldots ,\tau _{I}^*(\lambda ))} = 0 =\lim _{\lambda \rightarrow \infty }\frac{I_\lambda (\tau _1,\ldots ,\tau _{I})}{I_\lambda (\tau _1^*(\lambda ),\ldots ,\tau _{I}^*(\lambda ))} \end{aligned}

so that $$\lambda$$ must be restricted by a lower bound L and an upper bound U to get maximin efficient inspection times $$\varvec{\tau }^*_{L,U}:=(\tau ^*_{1}([L,U]),\ldots ,\tau ^*_{I}([L,U]))$$ defined by

\begin{aligned} \varvec{\tau }^*_{L,U}:=\arg \max _{ (\tau _1,\ldots ,\tau _I)\in (0,\infty )^I}\,\min _{\lambda \in [L,U]}\frac{I_\lambda (\tau _1,\ldots ,\tau _{I})}{I_\lambda (\tau _1^*(\lambda ),\ldots ,\tau _{I}^*(\lambda ))}. \end{aligned}
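This maximin problem can be attacked numerically. The sketch below assumes the representation $$I_\lambda (\tau _1,\ldots ,\tau _I)=\lambda ^{-2}f_I(\lambda \tau _1,\ldots ,\lambda \tau _I)$$ implicit in (14), so that the efficiency reduces to $$f_I(\lambda \tau _1,\ldots ,\lambda \tau _I)/f_I(x_1^*,\ldots ,x_I^*)$$; the inner minimum over $$[L,U]$$ is approximated on a grid, and all function names are our illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

def f_I(x):
    # standardized information criterion; x_0 = 0, x_{I+1} = infinity
    x = np.concatenate(([0.0], np.asarray(x, dtype=float)))
    g, s = x * np.exp(-x), np.exp(-x)
    return np.sum((g[1:] - g[:-1]) ** 2 / (s[:-1] - s[1:])) + g[-1] ** 2 / s[-1]

def locally_optimal(I):
    # maximise f_I over 0 < x_1 < ... < x_I via positive increments d_i
    obj = lambda d: -f_I(np.cumsum(np.abs(d) + 1e-12))
    res = minimize(obj, 0.5 * np.ones(I), method="Nelder-Mead",
                   options={"xatol": 1e-9, "fatol": 1e-12, "maxiter": 50000})
    return np.cumsum(np.abs(res.x) + 1e-12), -res.fun

def maximin_design(I, L, U, n_grid=100):
    # maximise the worst-case efficiency min_lambda f_I(lambda*tau) / f_I(x*)
    lams = np.linspace(L, U, n_grid)
    f_star = locally_optimal(I)[1]
    obj = lambda d: -min(f_I(lam * np.cumsum(np.abs(d) + 1e-12))
                         for lam in lams) / f_star
    res = minimize(obj, 0.5 * np.ones(I), method="Nelder-Mead",
                   options={"maxiter": 50000})
    return np.cumsum(np.abs(res.x) + 1e-12), -res.fun
```

As a sanity check, for $$I=1$$ the criterion is $$f_1(x)=x^2e^{-x}/(1-e^{-x})$$, maximized at $$x^*\approx 1.594$$ with $$f_1(x^*)\approx 0.648$$, which `locally_optimal(1)` reproduces.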

Again, because of the equivariance of the criterion, we have the following lemma.

### Lemma 4

If $$\varvec{\tau }^*_{L,U}$$ is maximin efficient for $$\lambda \in [L,U]$$ then $$\alpha \varvec{\tau }^*_{L,U}$$ is maximin efficient for $$\lambda \in \left[ \frac{L}{\alpha },\frac{U}{\alpha }\right]$$ for any $$\alpha >0$$.

## Comparison of the optimal and optimal equidistantly spaced inspection times

Let us compare the equidistant and the non-equidistant cases. Figure 1 shows how the design points are spread and how fast the maxima of the functions $$f_I$$ and $$f_{I,eq}$$ converge to 1.

Let us calculate the efficiency of the locally optimal equidistantly spaced inspections $$\varDelta ^*(\lambda ), 2 \varDelta ^*(\lambda ),\ldots , I\varDelta ^*(\lambda )$$ with respect to the locally optimal non-equidistant inspections $$\tau _1^*(\lambda ),\ldots ,\tau _I^*(\lambda )$$. Sections 3 and 4 yield

\begin{aligned} \frac{I_{\lambda }(\varDelta ^*(\lambda ),\ldots , I\varDelta ^*(\lambda ))}{I_{\lambda }(\tau _1^*(\lambda ),\ldots ,\tau _I^*(\lambda ))} = \frac{f_{I,eq}(x^*_{eq})}{f_I(x^*_1,\ldots ,x^*_I)}=: g(I), \end{aligned}

i.e. the efficiency does not depend on the parameter $$\lambda$$. Table 4 provides the efficiency of the equidistant design for some values of I. We see that the equidistant design yields nearly the same information as the optimal design, while the optimization of (6) is much easier than the optimization of (4).
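The efficiency $$g(I)$$ can be reproduced numerically. The sketch below again uses the standardized criterion $$f_I$$ from (4) (our function names, not from the paper); the equidistant criterion $$f_{I,eq}$$ is simply $$f_I$$ evaluated at $$(x,2x,\ldots ,Ix)$$.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

def f_I(x):
    # standardized criterion (4); x_0 = 0, x_{I+1} = infinity
    x = np.concatenate(([0.0], np.asarray(x, dtype=float)))
    g, s = x * np.exp(-x), np.exp(-x)
    return np.sum((g[1:] - g[:-1]) ** 2 / (s[:-1] - s[1:])) + g[-1] ** 2 / s[-1]

def g(I):
    # efficiency of the optimal equidistant design: f_{I,eq}(x*_eq) / f_I(x*)
    f_eq = -minimize_scalar(lambda x: -f_I(x * np.arange(1, I + 1)),
                            bounds=(1e-3, 10.0), method="bounded").fun
    obj = lambda d: -f_I(np.cumsum(np.abs(d) + 1e-12))
    f_star = -minimize(obj, 0.5 * np.ones(I), method="Nelder-Mead",
                       options={"maxiter": 50000}).fun
    return f_eq / f_star

print([round(g(I), 4) for I in (2, 3, 5)])
```

The one-dimensional bounded search for the equidistant spacing illustrates why optimizing (6) is so much cheaper than the $$I$$-dimensional optimization of (4), while the resulting efficiencies stay close to 1.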

Moreover, Tables 5 and 6 provide the maximin efficient equidistant and non-equidistant designs, their maximin efficiencies and the relative efficiency of the maximin efficient equidistant designs with respect to the maximin efficient non-equidistant designs for $$I=2$$ and $$I=5$$ for some given lower and upper bounds. Here it becomes apparent that the advantage of a maximin efficient non-equidistant design grows as the interval $$[L,U]$$ gets larger.

## Discussion

We characterized locally optimal and maximin efficient equidistant and non-equidistant inspection times. In particular, we showed that locally optimal equidistant inspection times are almost as efficient as locally optimal non-equidistant inspection times. However, this does not hold for maximin efficient designs when the parameter space is large. This is due to a much larger inspection region in the non-equidistant case (see Table 5). However, large inspection regions can cause problems in practical applications. For instance, in the diamonds example from the introduction, an additional requirement was a time horizon $$\tau$$ for the drilling and therefore, a restricted inspection region $$[0,\tau ]$$. Hence, not only the inspection times but also the number I of inspections must be optimized so that $$\tau _I^*\le \tau$$. The analysis of the dependence of an optimal number I and optimal inspection times on the time horizon $$\tau$$ will be treated in another paper.

## Change history

### 12 September 2019

Unfortunately, due to a technical error, the articles published in issues 60:2 and 60:3 received incorrect pagination. Please find here the corrected Tables of Contents. We apologize to the authors of the articles and the readers.

## References

1. Aggarwala R (2001) Progressive interval censoring: some mathematical results with applications to inference. Commun Stat Theory Methods 30:1921–1935

2. Ahn S, Lim J, Paik MC, Sacco RL, Elkind MS (2018) Cox model with interval-censored covariate in cohort studies. Biom J 60:797–814

3. Attia AF, Assar SM (2012) Optimal progressive group-censoring plans for Weibull distribution in presence of cost constraint. Int J Contemp Math Sci 7:1337–1349

4. Bogaerts K, Komarek A, Lesaffre E (2018) Survival analysis with interval-censored data: a practical approach with examples in R, SAS, and BUGS. Interdisciplinary Statistics Series. Chapman & Hall, Boca Raton

5. Cheng SW (1975) A unified approach to choosing optimum quantiles for the ABLE’s. J Am Stat Assoc 70:155–159

6. Dette H, Biedermann S (2003) Robust and efficient designs for the Michaelis–Menten model. J Am Stat Assoc 98:679–686

7. Eubank RL (1982) A bibliography for the ABLUE. Technical Report, Southern Methodist University Dallas, Texas

8. Ford I, Torsney B, Wu CFJ (1992) The use of a canonical form in the construction of locally optimal designs for nonlinear problems. J R Stat Soc Series B 54:569–583

9. Gao F, Zeng D, Couper D, Lin DY (2018) Semiparametric regression analysis of multiple right- and interval-censored events. J Am Stat Assoc. https://doi.org/10.1080/01621459.2018.1482756

10. Gunduz N, Torsney B (2006) Some advances in optimal designs in contingent valuation studies. J Stat Plan Inference 136:1153–1165

11. Inoue LYT, Parmigiani G (2002) Designing follow-up times. J Am Stat Assoc 97:847–858

12. Islam A, Ahmad N (1994) Optimal design of accelerated life tests for the Weibull distribution under periodic inspection and Type I censoring. Microelectron Reliabil 34:1459–1468

13. Ismail AA (2015) Optimum partially accelerated life test plans with progressively Type I interval-censored data. Seq Anal 34:135–147

14. Kansteiner M, Biermann D, Dagge M, Müller C, Ferreira M, Tillmann W (2017) Statistical evaluation of the wear behaviour of diamond impregnated tools used for the core drilling of concrete. In: Proceedings of Euro PM2017, Europe’s annual powder metallurgy congress and exhibition, Milan, 1–5 October 2017

15. Kansteiner M, Biermann D, Malevich N, Horn M, Müller C, Ferreira M, Tillmann W (2017) Analysis of the wear behaviour of diamond impregnated tools used for the core drilling of concrete with statistical lifetime prediction. In: Proceedings of Euro PM2018, Europe’s annual powder metallurgy congress and exhibition, Bilbao, 14–18 October 2018

16. Kulldorff G (1961) Contributions to the theory of estimation from grouped and partially grouped samples. Wiley, New York

17. Kulldorff G (1973) A note on the optimum spacing of sample quantiles from the six extreme value distributions. Ann Stat 1:562–567

18. Lin C-T, Wu SJS, Balakrishnan N (2009) Planning life tests with progressively Type-I interval censored data from the lognormal distribution. J Stat Plan Inference 139:54–61

19. Lui KJ (1993) Sample size determination for cohort studies under an exponential covariate model with grouped data. Biometrics 49:773–778

20. Malevich N, Müller CH, Kansteiner M, Biermann D, Ferreira M, Tillmann W (2018) Statistical analysis of the lifetime of diamond impregnated tools for core drilling of concrete. Submitted

21. Nelson W (1977) Optimum demonstration tests for grouped inspection data from an exponential distribution. IEEE Trans Reliab 26:226–231

22. Nguyen T, Torsney B (2007) Optimal cutpoint determination: the case of one point design. In: López-Fidalgo J, Rodríguez-Díaz J, Torsney B (eds) mODa 8: Advances in model-oriented design and analysis. Physica, Heidelberg, pp 131–138

23. Ogawa J (1998) Optimal spacing of the selected sample quantiles for the joint estimation of the location and scale parameters of a symmetric distribution. J Stat Plan Inference 70:345–360

24. Park S (2006) Conditional optimal spacing in exponential distribution. Lifetime Data Anal 12:523–530

25. Parmigiani G (1998) Designing observation times for interval censored data. Sankhya A 60:446–458

26. Raab GM, Davies JA, Salter AB (2004) Designing follow-up intervals. Statist Med 23:3125–3137

27. Saleh AKME (1964) On the estimation of the parameters of exponential distribution based on optimum order statistics in censored samples. Ph.D. Dissertation, University of Western Ontario, London, Canada

28. Saleh AKME (1966) Estimation of the parameters of the exponential distribution based on optimum order statistics in censored samples. Ann Math Statist 37:1717–1735

29. Sarhan AE, Greenberg BG, Ogawa J (1963) Simplified estimates for the exponential distribution. Ann Math Statist 34:102–116

30. Schmidt M, Schwabe R (2015) Optimal cutpoints for random observations. Statistics 49:1366–1381

31. Seo SK, Yum BJ (1991) Accelerated life test plans under intermittent inspection and Type I censoring: the case of Weibull failure distribution. Naval Res Logist 38:1–22

32. Shapiro SS, Gulati S (1996) Selecting failure monitoring times for an exponential life distribution. J Qual Technol 28:429–438

33. Sun J (2006) The statistical analysis of interval-censored failure time data. Statistics for Biology and Health. Springer, New York

34. Tsai T-R, Lin C-W (2010) Acceptance sampling plans under progressive interval censoring with likelihood ratio. Stat Papers 51:259–271

35. Wang S, Wang C, Wang P, Sun J (2018) Semiparametric analysis of the additive hazards model with informatively interval-censored failure time data. Comput Stat Data Anal 125:1–9

36. Wei D, Bau JJ (1987) Some optimal designs for grouped data in reliability demonstration tests. IEEE Trans Reliab 36:600–604

37. Wei D, Shau CK (1987) Fitting and optimal grouping on gamma reliability data. IEEE Trans Reliab 36:595–599

38. Wolfram Research, Inc. (2017) Mathematica, Version 11.2, Champaign, IL

39. Wu S-J, Huang S-R (2010) Optimal progressive group-censoring plans for exponential distribution in presence of cost constraint. Stat Papers 51:431–443

40. Yang C, Tse S-K (2005) Planning accelerated life tests under progressive Type I interval censoring with random removals. Commun Stat Simul Comput 34:1001–1025

41. Yum BJ, Choi SC (1989) Optimal design of accelerated life tests under periodic inspection. Naval Res Logist 36:779–795


## Acknowledgements

The authors gratefully acknowledge support from the Collaborative Research Center “Statistical Modelling of Nonlinear Dynamic Processes” (SFB 823, B4) of the German Research Foundation (DFG). Additionally, the authors thank the two unknown referees for their helpful remarks and suggestions.

## Author information


### Corresponding author

Correspondence to Nadja Malevich.

## Additional information

### Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
