1 Introduction

In questionnaire surveys, respondents are often allowed to give an answer in the form of an interval. For example, the respondent can be asked to select from several pre-specified intervals; this question format is known as range card. Another approach, called unfolding brackets, asks the respondent a sequence of yes-no questions that narrow down the range in which the respondent's true value lies. These formats are suitable for questions that are difficult to answer with an exact value (e.g., recall questions) or for sensitive questions (e.g., questions about income), because they allow partial information to be elicited from respondents who are unable or unwilling to provide exact amounts. However, studies have found that the pre-specified intervals in a range-card question are likely to influence the respondents' answers. This bias is known as the bracketing effect (see, e.g., McFadden et al. 2005). Similarly, the unfolding brackets format is prone to the so-called anchoring effect, i.e., answers can be biased toward the starting value in the sequence of yes-no questions (see, e.g., Furnham and Boo 2011; Van Exel et al. 2006).

A format that does not involve any pre-specified values is the respondent-generated intervals approach, suggested by Press and Tanur (2004a, b), where the respondent is asked to provide both a point value (a best guess for the true value) and an interval. They employed Bayesian methods for estimating the parameters of the underlying distribution. A similar format, in which the respondent is free to answer with any interval containing his/her true value, was considered by Belyaev and Kriström (2010), who use the term self-selected interval (SSI). Estimating the underlying distribution from SSI data, however, requires some generally untestable assumptions about how the respondent chooses the interval. To avoid such assumptions, Belyaev and Kriström (2012, 2015) introduced a novel two-stage approach: the respondent is first asked to provide an SSI and then to select, from several sub-intervals of the SSI, the one that most likely contains his/her true value. Data collected in a pilot stage are used for generating the sub-intervals in the second question. Belyaev and Kriström (2012, 2015) proposed a nonparametric maximum likelihood estimator of the underlying distribution for two-stage SSI data. Angelov and Ekström (2017) extended their work by exploring a sampling scheme in which the number of sub-intervals in the second question is limited to two or three, motivated by the fact that a question with a large number of sub-intervals might be difficult to implement in practice, e.g., in a telephone interview.

Data consisting of self-selected intervals are a special case of interval-censored data. Let X be a random variable of interest. An observation on X is interval-censored if, instead of observing X exactly, only an interval \((L,R\,]\) is observed, where \(L < X \le R\) (see, e.g., Zhang and Sun 2010). Interval-censored data arise most commonly when the observed variable is the time to some event (known as survival data, failure time data, lifetime data, duration data, or time-to-event data). The problem of estimating the underlying distribution for interval-censored data has been approached through nonparametric methods by Peto (1973), Turnbull (1976), and Gentleman and Geyer (1994), among others. These estimators rely on the assumption of noninformative censoring, i.e., that the observation process generating the censoring is independent of the variable of interest (see, e.g., Sun 2006, p. 244). In the sampling schemes considered by Belyaev and Kriström (2010, 2012, 2015) and Angelov and Ekström (2017), this is not a reasonable assumption, as it is the respondent who chooses the interval; thus, the standard methods are not appropriate. The existing methods for data with informative interval censoring (see Finkelstein et al. 2002; Shardell et al. 2007) are specific to time-to-event data and are not directly applicable in the context that we are discussing.

In this paper, we focus on parametric estimation of the underlying distribution function, i.e., we assume a particular functional form of the distribution. Compared to nonparametric methods, this approach usually leads to more efficient estimators, provided that the distributional assumption is true (see, e.g., Collett 1994, p. 107). The problem of choosing the right parametric model can be sidestepped by using a wide parametric family like the generalized gamma distribution (see, e.g., Cox et al. 2007), which includes most of the commonly used distributions (exponential, gamma, Weibull, and log-normal) as special cases.

We suggest two modifications of the sampling scheme for SSI data studied in Angelov and Ekström (2017) and propose a parametric maximum likelihood estimator. In Sect. 2, we introduce the sampling schemes. In Sect. 3, the statistical model is defined and the corresponding likelihood function is derived. Asymptotic properties of the maximum likelihood estimator are established in Sect. 4. The results of a simulation study are presented in Sect. 5, and the paper is concluded in Sect. 6. Proofs and auxiliary results are given in the Appendix.

2 Sampling schemes

2.1 Scheme A

The rationale behind this scheme is that we need to have more information than just the self-selected intervals in order to estimate the underlying distribution. Therefore, we ask the respondent to select a sub-interval of the interval that he/she stated. The problem of deciding where to split the stated interval into sub-intervals can be resolved using some previously collected data (in a pilot stage) or based on other knowledge about the quantity of interest.

We consider the following two-stage scheme for collecting data. In the pilot stage, a random sample of \(n_0\) individuals is selected and each individual is requested to give an answer in the form of an interval containing his/her value of the quantity of interest. It is assumed that the endpoints of the intervals are rounded, for example, to the nearest integer or to the nearest multiple of 10. Thus, instead of (50.2, 78.7], a respondent will answer with (50, 79] or (50, 80].

Let \( d_0^{\star }< d_1^{\star }< \cdots< d_{k'-1}^{\star } < d_{k'}^{\star } \) be the endpoints of all observed intervals. The set \( \{ d_j^{\star } \} = \{ d_0^{\star }, \ldots , d_{k'}^{\star } \} \) can be seen as a set of typical endpoints. The data collected in the pilot stage are used only for constructing the set \( \{ d_j^{\star } \} \), which is needed for the main stage. The set \( \{ d_j^{\star } \} \) may also be constructed using data from a previous survey, or it can be determined by the researcher based on prior knowledge about the quantity of interest or other reasonable arguments. For instance, if it is known that the variable of interest ranges between 0 and 200 and that the respondents are rounding their endpoints to a multiple of 10, then a reasonable set of endpoints will be \(\{0,10,20,\ldots ,200\}\).
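To fix ideas, constructing \( \{ d_j^{\star } \} \) from pilot data amounts to collecting the distinct stated endpoints. A minimal R sketch (the object `pilot` and its toy values are our own illustration, not from the paper):

```r
# Sketch: typical endpoints from pilot-stage intervals; 'pilot' is a
# hypothetical two-column matrix of rounded left and right endpoints.
pilot  <- rbind(c(50, 80), c(0, 60), c(50, 100))  # three toy pilot answers
d_star <- sort(unique(as.vector(pilot)))          # the set {d_j*}
d_star
#> [1]   0  50  60  80 100
```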

In the main stage, a new random sample of n individuals is selected and each individual is asked to state an interval containing his/her value of the quantity of interest. We refer to this question as Qu1. The stated interval is then split into two or three sub-intervals, and the respondent is asked to select one of these sub-intervals (the points of split are chosen in some random fashion among the points \(d_j^{\star }\) that are within the stated interval, e.g., equally likely or according to some other pre-specified probabilities). We refer to this question as Qu2. The respondent may refuse to answer Qu2, and this is allowed for. If there are no points \(d_j^{\star }\) within the stated interval, the second question is not asked.
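As an illustration of the splitting step, the following R sketch draws one or two split points equally likely among the eligible points \(d_j^{\star }\); the function name `split_a` and the uniform choice of the number of split points are our own assumptions:

```r
# Hypothetical sketch of a scheme A split of the stated interval (a, b]:
# one or two split points are drawn among the d_star points strictly
# inside the interval, giving two or three sub-intervals for Qu2.
split_a <- function(a, b, d_star) {
  inside <- d_star[d_star > a & d_star < b]
  if (length(inside) == 0) return(NULL)           # Qu2 is not asked
  n_cuts <- min(length(inside), sample(1:2, 1))   # 1 or 2 split points
  cuts <- if (length(inside) == 1) inside         # avoid sample()'s scalar rule
          else sort(sample(inside, n_cuts))
  ends <- c(a, cuts, b)
  cbind(left = ends[-length(ends)], right = ends[-1])  # the sub-intervals
}
split_a(50, 80, d_star = seq(0, 100, by = 10))
```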

Let \( d_0< d_1< \cdots< d_{k-1} < d_k \) be the union of \( \{ d_j^{\star } \} \) and the endpoints of all intervals observed at the main stage. Note that k is unknown but, because of the rounding of endpoints, it cannot be arbitrarily large. Let us define a set of intervals \( {\mathcal {V}} = \{ \mathbf {v}_1, \ldots , \mathbf {v}_k \} \), where \( \mathbf {v}_j = (d_{j-1}, d_{j}], \; j=1, \ldots , k \), and let \( {\mathcal {U}} = \{ \mathbf {u}_1, \ldots , \mathbf {u}_m \} \) be the set of all intervals that can be expressed as a union of intervals from \( {\mathcal {V}} \), i.e., \( {\mathcal {U}} = \{ (d_l, d_r] : \,\, d_l < d_r, \,\, l,r=0,\ldots ,k \} \). For example, if \( {\mathcal {V}} = \{ (0,5], \, (5,10], \, (10,20] \}\), then \( {\mathcal {U}} = \{ (0,5], \, (5,10], \, (10,20], \, (0,10], \, (5,20], \, (0,20] \} \). Let \({\mathcal {J}}_{\scriptstyle h}\) denote the set of indices of the intervals from \({\mathcal {V}}\) contained in \(\mathbf {u}_h\):

$$\begin{aligned} {\mathcal {J}}_{\scriptstyle h} = \{ j: \,\, \mathbf {v}_j \subseteq \mathbf {u}_h \}, \quad h=1, \ldots , m . \end{aligned}$$

In the example with \( {\mathcal {V}} = \{ (0,5], \, (5,10], \, (10,20] \}\), \( \mathbf {u}_5 = (5,20] = \mathbf {v}_2 \cup \mathbf {v}_3 \), hence \( {\mathcal {J}}_5 = \{2,3\} \).
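Constructing \( {\mathcal {U}} \) and the index sets \( {\mathcal {J}}_h \) from the endpoints is mechanical; a short R sketch for the example above (the enumeration of \( {\mathcal {U}} \) is arbitrary):

```r
# Sketch: build U and the index sets J_h from endpoints d_0 < ... < d_k;
# the interval (d_l, d_r] is the union of v_{l+1}, ..., v_r.
d <- c(0, 5, 10, 20)                               # example endpoints
k <- length(d) - 1
U <- subset(expand.grid(l = 0:k, r = 0:k), l < r)  # all intervals (d_l, d_r]
J <- lapply(seq_len(nrow(U)), function(h) (U$l[h] + 1):U$r[h])
J[[which(U$l == 1 & U$r == 3)]]                    # (5, 20]: indices 2 and 3
```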

Remark 1

The main difference between this scheme and the one explored in Angelov and Ekström (2017) is that under scheme A no respondents are excluded, whereas under the scheme of Angelov and Ekström (2017) respondents are excluded if they state an interval with endpoints not belonging to \( \{ d_j^{\star } \} \).

2.2 Scheme B

This scheme is a modification of scheme A with two follow-up questions after Qu1, aiming to extract more refined information from the respondents. The pilot stage is the same as in scheme A. The sets \( \{ d_0, \ldots , d_k \} \), \( {\mathcal {V}} \), \( {\mathcal {U}} \), and \( {\mathcal {J}}_{\scriptstyle h} \) are also defined in the same way. In the main stage, a new random sample of n individuals is selected and each individual is asked to state an interval containing his/her value of the quantity of interest. We refer to this question as Qu1. The stated interval is then split into two sub-intervals, and the respondent is asked to select one of them. The point of split is the \(d_j^{\star }\) that is closest to the middle of the interval; if two points are equally close to the middle, one of them is taken at random. This way of splitting yields two sub-intervals of similar length, which is more natural for the respondent. We refer to this question as Qu2a. The interval selected at Qu2a is thereafter split in the same manner into two sub-intervals, and the respondent is asked to select one of them. We refer to this question as Qu2b. The respondent may refuse to answer the follow-up questions Qu2a and Qu2b. If there are no points \(d_j^{\star }\) within the interval stated at Qu1 or Qu2a, the respective follow-up question is not asked. We assume that if a respondent has answered Qu2a, he/she has chosen the sub-interval containing his/her true value, independently of how the interval stated at Qu1 was split. An analogous assumption is made about the response to Qu2b.
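A sketch of this split rule in R (the function name `split_b` and the tie-breaking details are our own):

```r
# Hypothetical sketch of the scheme B split rule: split (a, b] at the
# d_star point closest to the midpoint, breaking ties at random.
split_b <- function(a, b, d_star) {
  inside <- d_star[d_star > a & d_star < b]
  if (length(inside) == 0) return(NULL)     # follow-up question is not asked
  dist <- abs(inside - (a + b) / 2)
  cand <- inside[dist == min(dist)]         # points closest to the middle
  s <- if (length(cand) > 1) sample(cand, 1) else cand
  list(left = c(a, s), right = c(s, b))     # the two sub-intervals
}
split_b(50, 80, d_star = seq(0, 100, by = 10))  # splits at 60 or 70
```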

If we know the intervals stated at Qu1 and Qu2b, we can deduce the answer to Qu2a. For this reason, if Qu2b is answered, the data from Qu2a can be omitted. Let \(\hbox {Qu2}\varDelta \) denote the last follow-up question that was answered by the respondent. If the respondent answered neither Qu2a nor Qu2b, we say that there is no answer at \(\hbox {Qu2}\varDelta \). We will distinguish three types of answers in the main stage:

Type 1: \( ( \mathbf {u}_h; \text{ NA }) \), when the respondent stated interval \(\mathbf {u}_h\) at Qu1 and did not answer \(\hbox {Qu2}\varDelta \);

Type 2: \( ( \mathbf {u}_h; \mathbf {v}_j ) \), when the respondent stated interval \(\mathbf {u}_h\) at Qu1 and \(\mathbf {v}_j\) at \(\hbox {Qu2}\varDelta \), where \( \mathbf {v}_j \subseteq \mathbf {u}_h \);

Type 3: \( ( \mathbf {u}_h; \mathbf {u}_s ) \), when the respondent stated interval \(\mathbf {u}_h\) at Qu1 and \(\mathbf {u}_s\) at \(\hbox {Qu2}\varDelta \), where \(\mathbf {u}_s\) is a union of at least two intervals from \({\mathcal {V}}\) and \( \mathbf {u}_s \subset \mathbf {u}_h \).

Similar types of answers can be considered for scheme A as well. In what follows, we will use these three types for both schemes (for scheme A, \(\hbox {Qu2}\varDelta \) will denote Qu2).

3 Model and estimation

We consider the unobserved (interval-censored) values \( x_1, \ldots , x_n \) of the quantity of interest to be values of independent and identically distributed (i.i.d.) random variables \( X_1, \ldots , X_n \) with distribution function \( F(x) = \mathrm {P}\,(X_i \le x) \). Our goal is to estimate F(x) through a maximum likelihood approach. Let \(q_j\) be the probability mass placed on the interval \(\mathbf {v}_j = (d_{j-1}, d_j]\):

$$\begin{aligned} q_j = \mathrm {P}\,(X_i \in \mathbf {v}_j) = F(d_j) - F(d_{j-1}), \quad j=1, \ldots , k . \end{aligned}$$

Because only intervals with endpoints from \( \{ d_0, \ldots , d_k \} \) are observed, the likelihood function will depend on F(x) through the probabilities \(q_j\). In order to avoid complicated notation, we assume that \( q_j > 0 \) for all \( j=1,\ldots ,k \). The case when \( q_j=0 \) for some j can be treated similarly (cf. Rao 1973, p. 356).

Let \( H_i, \; i=1,\ldots ,n \), be i.i.d. random variables such that \( H_i = h \) if the i-th respondent has stated interval \(\mathbf {u}_h\) at Qu1. The event \( \{H_i = h\} \) implies \( \{X_i \in \mathbf {u}_h\} \). Let us denote

$$\begin{aligned} w_{h|j} = \mathrm {P}\,( H_i = h \,|\, X_i \in \mathbf {v}_j ) . \end{aligned}$$

If \(\mathbf {u}_h\) does not contain \(\mathbf {v}_j\), then \(w_{h|j} = 0\).

Hereafter we will need the following frequencies:

\(n_{h,\mathrm {NA}}\): the number of respondents who stated \(\mathbf {u}_h\) at Qu1 and NA (no answer) at \(\hbox {Qu2}\varDelta \);

\(n_{hj}\): the number of respondents who stated \(\mathbf {u}_h\) at Qu1 and \(\mathbf {v}_j\) at \(\hbox {Qu2}\varDelta \), where \( \mathbf {v}_j \subseteq \mathbf {u}_h \);

\(n_{h*s}\): the number of respondents who stated \(\mathbf {u}_h\) at Qu1 and \(\mathbf {u}_s\) at \(\hbox {Qu2}\varDelta \), where \(\mathbf {u}_s\) is a union of at least two intervals from \({\mathcal {V}}\) and \( \mathbf {u}_s \subset \mathbf {u}_h \);

\(n_{h \bullet }\): the number of respondents who stated \(\mathbf {u}_h\) at Qu1 and any sub-interval at \(\hbox {Qu2}\varDelta \).

Now we will derive the likelihood for scheme B. If respondent i has given an answer of type 1, i.e., \(\mathbf {u}_h\) at Qu1 and \(\text{ NA }\) at \(\hbox {Qu2}\varDelta \), then the contribution to the likelihood can be expressed using the law of total probability: \( \mathrm {P}\,( H_i = h ) = \sum _{j\in {\mathcal {J}}_{\scriptstyle h}} w_{h|j} \, q_j \). If an answer of type 2 is observed, i.e., \(\mathbf {u}_h\) at Qu1 and \(\mathbf {v}_j\) at \(\hbox {Qu2}\varDelta \), then the contribution to the likelihood is \( w_{h|j} \, q_j \). And if a respondent has given an answer of type 3, i.e., \(\mathbf {u}_h\) at Qu1 and \(\mathbf {u}_s\) at \(\hbox {Qu2}\varDelta \), then the contribution to the likelihood is \( \sum _{j\in {\mathcal {J}}_{\scriptstyle s}} w_{h|j} \, q_j \). Thus, the log-likelihood function corresponding to the main-stage data is

$$\begin{aligned} \log L(\mathbf {q})&= \sum _h n_{h,\mathrm {NA}} \log \Biggl ( \,\sum _{j\in {\mathcal {J}}_{\scriptstyle h}} w_{h|j} \, q_j \Biggr ) + \sum _{h,j} n_{hj} \log ( w_{h|j} \, q_j ) \nonumber \\&\quad + \sum _{h,s} n_{h*s}\log \Biggl ( \,\sum _{j\in {\mathcal {J}}_{\scriptstyle s}} w_{h|j} \, q_j \Biggr ) + c_1 , \end{aligned}$$
(1)

where \(c_1\) does not depend on \( \mathbf {q}= (q_1, \ldots , q_k) \). By similar arguments, it can be shown that the log-likelihood for scheme A has essentially the same form as the log-likelihood (1); it differs only by an additive constant (the pre-specified probabilities of choosing the points of split of the stated interval are incorporated in \(c_1\)).
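To make the structure of (1) concrete, here is a minimal R sketch of the main-stage log-likelihood, assuming the data have already been summarised into the frequency tables defined above (all object names are our own):

```r
# Sketch of the log-likelihood (1); the additive constant c_1 is omitted.
loglik_q <- function(q, w, J, n_na, n_hj, n_hs) {
  # q    : vector (q_1, ..., q_k)
  # w    : m x k matrix, w[h, j] = w_{h|j} (0 if v_j is not inside u_h)
  # J    : list of length m; J[[h]] = indices j with v_j contained in u_h
  # n_na : vector, n_na[h] = n_{h,NA}
  # n_hj : m x k matrix of counts n_{hj}
  # n_hs : m x m matrix of counts n_{h*s}
  ll <- 0
  for (h in seq_along(J)) {
    if (n_na[h] > 0)
      ll <- ll + n_na[h] * log(sum(w[h, J[[h]]] * q[J[[h]]]))      # type 1
    for (j in J[[h]])
      if (n_hj[h, j] > 0)
        ll <- ll + n_hj[h, j] * log(w[h, j] * q[j])                # type 2
    for (s in which(n_hs[h, ] > 0))
      ll <- ll + n_hs[h, s] * log(sum(w[h, J[[s]]] * q[J[[s]]]))   # type 3
  }
  ll
}
```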

If we want to estimate F(x) without making any distributional assumptions, we can maximize the log-likelihood (1) with respect to \(\mathbf {q}\) (for details see Angelov and Ekström 2017). Here we will assume that F(x) belongs to a parametric family, i.e., F(x) is a known function of some unknown parameter \({\varvec{\theta }}= (\theta _1, \ldots , \theta _d)\), and thus the probabilities \(q_j\) are functions of \({\varvec{\theta }}\). Therefore, the log-likelihood will be a function of \({\varvec{\theta }}\), i.e., \(\log L({\varvec{\theta }}) = \log L\bigl ( \mathbf {q}({\varvec{\theta }}) \bigr )\), and in order to estimate F(x) we need to estimate \({\varvec{\theta }}\). For emphasizing that F(x) depends on \({\varvec{\theta }}\), we will sometimes write \(F_{{\varvec{\theta }}}(x)\).

The conditional probabilities \(w_{h|j}\) are nuisance parameters. If \(w_{h|j}\) did not depend on j, the assumption of noninformative censoring would be satisfied. In our case, there are no grounds for making such an assumption about \(w_{h|j}\), and therefore we need the data from \(\hbox {Qu2}\varDelta \) in order to estimate \(w_{h|j}\). For this task we employ the procedure suggested in Angelov and Ekström (2017), which we outline here. The idea is first to estimate the probabilities \( p_{j|h} = \mathrm {P}\,( X_i \in \mathbf {v}_j \,|\, H_i=h ), \;j \in {\mathcal {J}}_{\scriptstyle h} \). For a given h, a strongly consistent estimator \(\widetilde{p}_{j|h}\) of \( p_{j|h}, \;j \in {\mathcal {J}}_{\scriptstyle h} \), is obtained by maximizing the log-likelihood:

$$\begin{aligned} \sum _{j} n_{hj} \log p_{j|h} + \sum _{s} n_{h*s}\log \Biggl (\,\sum _{j\in {\mathcal {J}}_{\scriptstyle s}} p_{j|h} \Biggr ) + c_0 , \end{aligned}$$

where \(c_0\) does not depend on \(p_{j|h}\). Then, an estimator of \(w_{h|j}\) is derived using the Bayes formula:

$$\begin{aligned} \widetilde{w}_{h|j} = \frac{\widetilde{p}_{j|h} \, \widehat{w}_h}{\sum _{s} \widetilde{p}_{j|s} \, \widehat{w}_s} , \end{aligned}$$

where \( \widehat{w}_h = (n_{h \bullet } + n_{h,\mathrm {NA}})/n \) is a strongly consistent estimator of \( w_h = \mathrm {P}\,( H_i = h )\).
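The Bayes step is straightforward in R; a sketch, assuming \(\widetilde{p}_{j|h}\) is stored as a matrix with rows indexed by h and \(\widehat{w}_h\) as a vector (both names are ours):

```r
# Sketch of the Bayes formula: the numerator of w_tilde[h, j] is
# p_tilde[h, j] * w_hat[h]; column j is then normalised by
# sum over s of p_tilde[s, j] * w_hat[s].
bayes_w <- function(p_tilde, w_hat) {
  num <- p_tilde * w_hat            # w_hat is recycled down the rows
  sweep(num, 2, colSums(num), "/")  # w_tilde[h, j]
}
```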

To find the maximum likelihood estimate of the parameter \({\varvec{\theta }}\), we insert the estimates of the probabilities \(w_{h|j}\) into \(\log L({\varvec{\theta }})\) and maximize with respect to \({\varvec{\theta }}\). Alternatively, one may maximize the log-likelihood with respect to both \({\varvec{\theta }}\) and the nuisance parameters \(w_{h|j}\) using standard numerical optimization methods. This is, however, a high-dimensional and computationally time-consuming optimization problem, which we avoid by simply plugging the estimated nuisance parameters \(\widetilde{w}_{h|j}\) into the log-likelihood.
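For instance, under the Weibull model used in Sect. 5, the plug-in maximization can be carried out with `optim`; a sketch reusing `loglik_q` from above (function and argument names are our own):

```r
# Sketch: parametric plug-in MLE; 'd' holds the endpoints d_0 < ... < d_k,
# and w is the matrix of plugged-in estimates w_tilde[h, j].
fit_theta <- function(d, w, J, n_na, n_hj, n_hs,
                      start = c(nu = 1, sigma = 50)) {
  negll <- function(theta) {
    q <- diff(pweibull(d, shape = theta[1], scale = theta[2]))  # q_j(theta)
    -loglik_q(q, w, J, n_na, n_hj, n_hs)
  }
  optim(start, negll)  # theta_tilde is returned in $par
}
```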

Remark 2

The proposed methodology for estimating \(F_{{\varvec{\theta }}}(x)\) assumes that the respondents are selected according to simple random sampling. If this is not the case, extrapolating the results to the target population may be incorrect. For surveys with a complex design, parameter estimates can be obtained, for example, by using the pseudo-likelihood approach, in which the individual contribution to the log-likelihood is weighted by the reciprocal of the corresponding sample inclusion probability (see, e.g., Chambers et al. 2012, p. 60).

4 Asymptotic results

Let us consider \(q_j\) as a function of \({\varvec{\theta }}= (\theta _1, \ldots , \theta _d)\), a multidimensional parameter belonging to a set \(\Theta \subseteq \mathbb {R}^d\), and let the true value \({\varvec{\theta }}^0\) be an interior point of \(\Theta \). In this section we prove the consistency and asymptotic normality of the proposed maximum likelihood estimator of \({\varvec{\theta }}\). We also show the asymptotic validity of a bootstrap procedure which can be used for constructing confidence intervals.

Let \({\varvec{\theta }}^1, {\varvec{\theta }}^2 \in \Theta \), and let \(\Vert {\varvec{\theta }}^1 - {\varvec{\theta }}^2 \Vert \) denote the Euclidean distance between \({\varvec{\theta }}^1\) and \({\varvec{\theta }}^2\). Let \(\,\mathrm {llik}_i({\varvec{\theta }})\) denote the contribution of the i-th respondent to the log-likelihood; its precise definition is given by (13). We will consider the following assumptions:

A1: If \({\varvec{\theta }}^1 \ne {\varvec{\theta }}^2\), then \(\mathbf {q}({\varvec{\theta }}^1) \ne \mathbf {q}({\varvec{\theta }}^2)\).

A2: For every \(\delta >0\), there exists \(\varepsilon >0\) such that

$$\begin{aligned} \inf _{\Vert {\varvec{\theta }}- {\varvec{\theta }}^0 \Vert > \delta } \,\sum _{j} q_j({\varvec{\theta }}^0) \log \frac{q_j({\varvec{\theta }}^0)}{q_j({\varvec{\theta }})} \ge \varepsilon . \end{aligned}$$

A3: The functions \(q_j({\varvec{\theta }})\) are continuous.

A4: The functions \(q_j({\varvec{\theta }})\) have first-order partial derivatives that are continuous.

A5: The set \(\Theta \) is compact, and the functions \(q_j({\varvec{\theta }})\) have first- and second-order partial derivatives that are continuous on \(\Theta \). Furthermore, \(q_j({\varvec{\theta }}) > 0\) on \(\Theta \).

A6: For each \( {\varvec{\theta }}\in \Theta \), the Fisher information matrix \(\mathbf {I}({\varvec{\theta }})\) with elements \( I_{r\ell }({\varvec{\theta }}) = - \mathrm {E}\,_{{\varvec{\theta }}} \Bigl ( \frac{\partial ^2 \,\mathrm {llik}_i({\varvec{\theta }})}{\partial \theta _r \partial \theta _{\ell }} \Bigr ) \), \( r,\ell = 1,\ldots ,d \), is nonsingular.

We say that \(\widetilde{{\varvec{\theta }}}\) is an approximate maximum likelihood estimator (cf. Rao 1973, p. 353) of \({\varvec{\theta }}\) if for some \(c \in (0,1)\),

$$\begin{aligned} L(\widetilde{{\varvec{\theta }}}) \ge c \, \sup _{{\varvec{\theta }}\in \Theta } L({\varvec{\theta }}) . \end{aligned}$$
(2)

Let \(\gamma _t\) be the probability that a respondent gives an answer of type t, for \(t=1,2,3\).

Theorem 1

Let \(\widetilde{{\varvec{\theta }}}\) be an approximate maximum likelihood estimator of \({\varvec{\theta }}\).

(i) If assumption A2 is satisfied, \( \gamma _2>0 \), and the conditional probabilities \(w_{h|j}\) are known, then \( \widetilde{{\varvec{\theta }}} \;\overset{{{\mathrm{a.s.}}}}{\longrightarrow }{\varvec{\theta }}^0 \) as \( n \longrightarrow \infty \).

(ii) If assumption A2 is satisfied, \( \gamma _2>0 \), and a strongly consistent estimator of \(w_{h|j}\) is inserted into the log-likelihood, then \( \widetilde{{\varvec{\theta }}} \;\overset{{{\mathrm{a.s.}}}}{\longrightarrow }{\varvec{\theta }}^0 \) as \( n \longrightarrow \infty \).

Theorem 2

If assumptions A2 and A3 are satisfied, \( \gamma _2>0 \), and the conditional probabilities \(w_{h|j}\) are known (or strongly consistently estimated), then the maximum likelihood estimator of \({\varvec{\theta }}\) exists and is strongly consistent.

Theorem 3

If assumptions A1 and A4 are satisfied and the conditional probabilities \(w_{h|j}\) are known (or strongly consistently estimated), then there exists a root \({\varvec{{\bar{\theta }}}}\) of the system of likelihood equations

$$\begin{aligned} \frac{\partial \log L({\varvec{\theta }})}{\partial \theta _r} = 0, \quad r=1,\ldots ,d, \end{aligned}$$
(3)

such that \( {\varvec{{\bar{\theta }}}} \;\overset{{{\mathrm{a.s.}}}}{\longrightarrow }{\varvec{\theta }}^0 \) as \( n \longrightarrow \infty \).

In what follows, \(\widetilde{{\varvec{\theta }}}\) will denote the maximum likelihood estimator of \({\varvec{\theta }}\), unless we state that it denotes an approximate maximum likelihood estimator.

For obtaining asymptotic distributional results about \( \sqrt{n}(\widetilde{{\varvec{\theta }}}- {\varvec{\theta }}^0) \) we will use the notion of weakly approaching sequences of distributions (Belyaev and Sjöstedt-de Luna 2000), which is a generalization of the well-known concept of weak convergence of distributions but without the need to have a limiting distribution. Two sequences of random variables, \( \{X_n\}_{n \ge 1} \) and \( \{Y_n\}_{n \ge 1} \), are said to have weakly approaching distribution laws, \( \{{\mathcal {L}}(X_n)\}_{n \ge 1} \) and \( \{{\mathcal {L}}(Y_n)\}_{n \ge 1} \), if for every bounded continuous function \( \varphi (\cdot ) \), \( \mathrm {E}\,\varphi (X_n) - \mathrm {E}\,\varphi (Y_n) \longrightarrow 0 \) as \(n \longrightarrow \infty \). Further, we say that the sequence of conditional distribution laws \( \{{\mathcal {L}}(X_n \,|\, Z_n)\}_{n \ge 1} \) weakly approaches \( \{{\mathcal {L}}(Y_n)\}_{n \ge 1} \) in probability (along \(Z_n\)) if for every bounded continuous function \( \varphi (\cdot ) \), \( \mathrm {E}\,(\varphi (X_n) \,|\, Z_n) - \mathrm {E}\,\varphi (Y_n) \longrightarrow 0 \) in probability as \(n \longrightarrow \infty \).

Theorem 4

Let assumptions A2, A4, and A6 be true, \( \gamma _2>0 \), and the conditional probabilities \(w_{h|j}\) be known (or strongly consistently estimated). Then the maximum likelihood estimator \(\widetilde{{\varvec{\theta }}}\) exists and the distribution of \( \sqrt{n}(\widetilde{{\varvec{\theta }}}- {\varvec{\theta }}^0) \) weakly approaches \( {\mathcal {N}}(\mathbf {0}, \mathbf {I}^{-1}({\varvec{\theta }}^0)) \) as \( n \longrightarrow \infty \).

The claim of Theorem 4 implies weak convergence, i.e., the limiting distribution of \( \sqrt{n}(\widetilde{{\varvec{\theta }}}- {\varvec{\theta }}^0) \) is multivariate normal with zero mean vector and covariance matrix \(\mathbf {I}^{-1}({\varvec{\theta }}^0)\).

Let \( \mathbf {y}_1, \ldots , \mathbf {y}_n \) be the observed main-stage data. Each data point \(\mathbf {y}_i\) is a vector of size four, where the first two elements represent the endpoints of the interval stated at Qu1 and the last two elements represent the endpoints of the interval stated at \(\hbox {Qu2}\varDelta \). We consider \( \mathbf {y}_1, \ldots , \mathbf {y}_n \) to be values of i.i.d. random variables \( \mathbf {Y}_1, \ldots , \mathbf {Y}_n \). We denote \( \mathbf {Y}_{1:n}=(\mathbf {Y}_1, \ldots , \mathbf {Y}_n) \). Let \( \mathbf {Y}_1^{\star }, \ldots , \mathbf {Y}_n^{\star } \) be i.i.d. random variables taking on the values \( \mathbf {y}_1, \ldots , \mathbf {y}_n \) with probability \(1/n\), i.e., \( \mathbf {Y}_1^{\star }, \ldots , \mathbf {Y}_n^{\star } \) is a random sample with replacement from the original data set \( \{ \mathbf {y}_1, \ldots , \mathbf {y}_n \} \). We say that \( \mathbf {Y}_1^{\star }, \ldots , \mathbf {Y}_n^{\star } \) is a bootstrap sample. Let \(\widetilde{{\varvec{\theta }}}^{\star }\) be the maximum likelihood estimator of \({\varvec{\theta }}\) computed from the bootstrap sample \( \mathbf {Y}_1^{\star }, \ldots , \mathbf {Y}_n^{\star } \).

Theorem 5

Let assumptions A2, A5, and A6 be true, \( \gamma _2>0 \), and the conditional probabilities \(w_{h|j}\) be known (or strongly consistently estimated). Then the distribution of \( \sqrt{n}(\widetilde{{\varvec{\theta }}}^{\star } - \widetilde{{\varvec{\theta }}}) \,|\, \mathbf {Y}_{1:n} \) weakly approaches the distribution of \( \sqrt{n}(\widetilde{{\varvec{\theta }}}- {\varvec{\theta }}^0) \) in probability as \( n \longrightarrow \infty \).

This result can be applied for constructing confidence intervals for \(\theta _r, \; r=1,\ldots ,d\). Let \( G_{\mathrm {boot}}(x) = \mathrm {P}\,\bigl ( n^{1/2}(\widetilde{\theta }^{\star }_r - \widetilde{\theta }_r) \le x \,|\, \mathbf {Y}_{1:n} \bigr ) \). The interval

$$\begin{aligned} \Bigl [ \, \widetilde{\theta }_r - n^{-1/2}\, G^{-1}_{\mathrm {boot}}(1-\alpha /2) , \quad \widetilde{\theta }_r - n^{-1/2}\, G^{-1}_{\mathrm {boot}}(\alpha /2) \, \Bigr ] \end{aligned}$$
(4)

is an approximate \(1-\alpha \) confidence interval for \(\theta _r\) (hybrid bootstrap confidence interval; see Shao and Tu 1995, p. 140).
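A sketch of (4) in R, assuming `theta_tilde` is the estimate \(\widetilde{\theta }_r\) from the original sample and `theta_star` a vector of bootstrap replicates \(\widetilde{\theta }^{\star }_r\) (names are ours):

```r
# Hybrid bootstrap confidence interval (4) for one coordinate theta_r.
hybrid_ci <- function(theta_tilde, theta_star, n, alpha = 0.05) {
  z <- sqrt(n) * (theta_star - theta_tilde)  # draws from G_boot
  c(lower = theta_tilde - quantile(z, 1 - alpha / 2, names = FALSE) / sqrt(n),
    upper = theta_tilde - quantile(z, alpha / 2, names = FALSE) / sqrt(n))
}
```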

5 Simulation study

We have conducted a simulation study to examine the performance of the proposed methods. The data for the pilot stage and for Qu1 at the main stage are generated in the same way. We describe it for Qu1 to avoid unnecessary notation. In all simulations, the random variables \(X_1, \ldots , X_n\) are independent and have a Weibull distribution:

$$\begin{aligned} F(x) = \mathrm {P}\,(X_i \le x) = 1 - \exp (-(x/\sigma )^\nu ), \quad \text{ for }\; x>0, \end{aligned}$$

where \(\nu =1.5\) and \(\sigma =80\). The Weibull distribution has a flexible shape and is used in various contexts, for example, in contingent valuation studies where people are asked how much they would be willing to pay for a certain nonmarket good (see, e.g., Alberini et al. 2005). Contingent valuation is a natural application area for the sampling schemes considered here because they account for respondent uncertainty.

Let \(U_{1}^{\mathrm {L}}, \ldots , U_{n}^{\mathrm {L}}\) and \(U_{1}^{\mathrm {R}}, \ldots , U_{n}^{\mathrm {R}}\) be sequences of i.i.d. random variables defined below:

$$\begin{aligned}&U_{i}^{\mathrm {L}} = M_i \,U_{i}^{(1)} + (1 - M_i) \,U_{i}^{(2)} , \nonumber \\&U_{i}^{\mathrm {R}} = M_i \,U_{i}^{(2)} + (1 - M_i) \,U_{i}^{(1)} , \end{aligned}$$
(5)

where \( M_i \sim \mathrm {Bernoulli}(1/2), \, U_{i}^{(1)} \sim \mathrm {Uniform}(0,20) \), and \( U_{i}^{(2)} \sim \mathrm {Uniform}(20,50) \). Let \( ( L_{1i}, R_{1i} ] \) be the interval stated by the i-th respondent at Qu1. The left endpoints are generated as \( L_{1i} = (X_i - U_{i}^{\mathrm {L}}) \,\mathbb {1}\{X_i - U_{i}^{\mathrm {L}} > 0\} \) rounded downwards to the nearest multiple of 10. The right endpoints are generated as \( R_{1i} = X_i + U_{i}^{\mathrm {R}} \) rounded upwards to the nearest multiple of 10. The data for the follow-up questions Qu2a and Qu2b are generated according to scheme B. The probability that a respondent gives no answer to \(\hbox {Qu2}\varDelta \) is 1/6. All computations were performed in R (R Core Team 2016). The R code can be obtained from the first author upon request.
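For concreteness, the Qu1 data-generating process just described can be coded as follows (a sketch; variable names and the seed are ours):

```r
# Sketch of the Qu1 data-generating process used in the simulations.
set.seed(1)                                  # arbitrary seed
n  <- 1000
x  <- rweibull(n, shape = 1.5, scale = 80)   # true (unobserved) values X_i
m  <- rbinom(n, 1, 1/2)                      # M_i in (5)
u1 <- runif(n, 0, 20)                        # U_i^(1)
u2 <- runif(n, 20, 50)                       # U_i^(2)
uL <- m * u1 + (1 - m) * u2                  # U_i^L
uR <- m * u2 + (1 - m) * u1                  # U_i^R
L1 <- floor(pmax(x - uL, 0) / 10) * 10       # left endpoint, rounded down
R1 <- ceiling((x + uR) / 10) * 10            # right endpoint, rounded up
```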

It is of interest to investigate to what extent the set of endpoints \( \{ d_j^{\star } \} \) influences the properties of the estimator of \( {\varvec{\theta }}= (\nu , \sigma ) \). For this purpose, we explore three different ways of obtaining the set \( \{ d_j^{\star } \} \), i.e., three variations of scheme B, specified below:

(i) a pilot stage with sample size \(n_0=20\);

(ii) a pilot stage with the same sample size as in the main stage, \(n_0=n\);

(iii) no pilot stage; instead, a predetermined set of endpoints \( \{ d_j^{\star } \} = \{ 0, 10, 20, \ldots , 300, 320, 340, \ldots , 400 \}\) is used, which is a reasonable set given the rounding to a multiple of 10 and the likely values of \(X_i\).

Under the settings of our simulations, the set \( \{ d_j^{\star } \} \) will on average be smallest in scenario (i) and largest in scenario (iii).

First, we compare the suggested estimator of \( {\varvec{\theta }}\) under the three variations of scheme B with the maximum likelihood estimator when \(X_1, \ldots , X_n\) are observed without censoring (uncensored observation scheme). For each scheme, 40000 samples of different sizes are generated. Table 1 presents the relative bias and the root mean square error over the simulations. If \(\widetilde{\nu }\) is an estimator of \(\nu \), the relative bias of \(\widetilde{\nu }\) is defined as \(\text{ rb }(\widetilde{\nu }) = 100\,\text{ bias }(\widetilde{\nu })/\nu \). The root mean square error is of roughly the same magnitude in each of the three scenarios for obtaining the set of endpoints \( \{ d_j^{\star } \} \). However, for \(n=1000\), the bias is smallest when the set of endpoints is largest. This indicates that the set \( \{ d_j^{\star } \} \) should not be too small (ideally, it should contain all endpoints that future respondents will give). As expected, the error under the uncensored scheme is lower; the difference, however, is rather small. The bias is fairly close to zero under all schemes. Analogous simulations for scheme A displayed comparable results with a slightly higher root mean square error. We also conducted similar simulations with the scheme suggested in Angelov and Ekström (2017), which showed a larger bias, e.g., for \(n_0=n=100\), \(\text{ rb }(\widetilde{\nu }) = 6.7\), and for \(n_0=n=1000\), \(\text{ rb }(\widetilde{\nu }) = 1.7\), while with schemes A and B of the current paper, \(\text{ rb }(\widetilde{\nu }) < 1\) in each of the cases studied. This bias can be attributed to the exclusion of respondents in the former scheme. For the sake of brevity, the detailed simulation results for scheme A and the scheme of Angelov and Ekström (2017) are omitted.

In addition, we have performed simulations to examine potential bias due to wrongly assuming that \(w_{h|j}\) does not depend on j. This assumption implies noninformative censoring, and in this case the likelihood will be proportional to

$$\begin{aligned} \prod _{i=1}^n \bigl [ F_{{\varvec{\theta }}}(b_i) - F_{{\varvec{\theta }}}(a_i) \bigr ] , \end{aligned}$$
(6)

where \((a_i, b_i]\) is the last interval stated by respondent i in the sequence of questions Qu1, \(\hbox {Qu2}\varDelta \) (cf. Sun 2006, p. 28). We compare the estimator suggested in this paper with an estimator assuming noninformative censoring, obtained by maximizing the likelihood (6). For generating data, we use the model stated above with \( M_i \sim \mathrm {Bernoulli}(1/100) \) in (5). This model corresponds to a specific behavior of the respondents: at Qu1 they tend to choose an interval in which the true value is located in the right half of the interval. The estimator assuming noninformative censoring has been applied both to the full data (Qu1 and \(\hbox {Qu2}\varDelta \)) and to the data from Qu1 only. Table 2 displays the relative bias and the root mean square error of the estimators based on 40000 simulated samples of sizes \(n=100\) and \(n=1000\), with scheme variation B(ii). For \(n=1000\), when using the full data, the bias of the estimator assuming noninformative censoring is substantially greater than the bias of our estimator; the same holds for the root mean square error. The results for \(n=100\) indicate that when using the full data, the bias of our estimator is slightly greater for \(\nu \) and smaller for \(\sigma \) than the bias of the other estimator. Yet, with our estimator the estimated distribution function more closely resembles the true distribution. If only the data from Qu1 are used, the bias under the assumption of noninformative censoring is considerably larger for both sample sizes.

Table 1 Simulation results for different sampling schemes
Table 2 Comparison of our estimator (I) with an estimator assuming noninformative censoring (II)
Table 3 Confidence intervals: coverage proportion and average length

Finally, we compare the performance of the bootstrap confidence intervals (4) with that of the confidence intervals constructed using the normal approximation (see Theorem 4). Table 3 shows results based on 1500 simulated samples of sizes \(n=100\) and \(n=1000\) using scheme variation B(iii); the confidence level is 0.95. Each bootstrap confidence interval is calculated using 1000 bootstrap samples. For both sample sizes, the bootstrap confidence intervals have coverage and length similar to those of the confidence intervals based on the normal approximation.

6 Conclusion

We considered two schemes (A and B) for collecting self-selected interval data that extend sampling schemes studied previously in the literature. Under general assumptions, we proved the existence, consistency, and asymptotic normality of the proposed parametric maximum likelihood estimator. In comparison with the scheme used in a previous paper (Angelov and Ekström 2017), the new schemes do not involve exclusion of respondents, and this leads to a smaller bias of the estimator, as indicated by our simulation study. Furthermore, the simulations showed a good performance of the estimator compared to the maximum likelihood estimator for uncensored observations. It should be noted that the censoring in this case is imposed by the design of the question. A design allowing uncensored observations might introduce bias in the estimation if respondents are asked a question that is difficult to answer with an exact amount (e.g., number of hours spent on the internet) and they give a rough best guess. We also demonstrated via simulations that ignoring the informative censoring can lead to bias. We presented a bootstrap procedure for constructing confidence intervals that is easier to apply than the confidence intervals based on asymptotic normality, where, e.g., the derivatives of the log-likelihood need to be calculated. According to our simulations, the two approaches yield similar results in terms of coverage and length of the confidence intervals. Finally, it would be of interest in future research to develop a test for assessing the goodness of fit of a parametric model.