1 Introduction

A stratified sample combines probability samples (e.g., simple random samples, SRSs) from each stratum in a population. The total variation in a stratified sample can thus be partitioned into within- and between-stratum variation. In certain settings, stratifying the population may not be straightforward because the stratification variable is subjective (e.g., based on visual inspection) and not available for all population units at once. In such cases, the stratification principle can be applied at the sample level. MacEachern et al. (2004) introduced the judgment post-stratified (JPS) sample to improve the information content of an SRS by post-stratifying it based on a (subjective) ranking variable.

To construct a JPS sample from an infinite population, one first selects an SRS and measures each unit in the sample for the characteristic of interest. For each unit, \(H-1\) additional units are then selected, without measurement, at random from the population to form a comparison set of size H. The relative position (rank) of each unit in the original SRS within its comparison set is determined using a ranking procedure based on an auxiliary variable, such as visual ranking. Under a consistent ranking procedure, regardless of its quality, JPS sampling yields unbiased estimators for the population mean and total (Ozturk 2016a).

The main difference between the stratified and JPS sampling procedures is that the former combines SRSs selected across all strata, while the latter partitions a single SRS from the population into ranking groups post-experimentally, based on the ranking information from additional comparison sets. The stratum sample sizes are fixed in stratified sampling, while the judgment ranking group sample sizes are random in JPS sampling. The JPS sample is similar to a post-stratified (PS) sample, with a clear distinction: a PS sample is stratified based on the population strata membership information, while a JPS sample is segregated into judgment (ranking) groups based on the position (ranking) information in small comparison sets. Hence, a JPS sample can be used for populations for which no strata membership information is available.

In the context of modern digital plant research (Araus and Cairns 2014), JPS sampling presents a range of potential applications in agricultural and environmental research and practice. We provide here a motivating example of the field estimation of early crop establishment of faba bean in an agronomic experiment conducted at the University of Adelaide, Australia (Kasprzak 2021). The aim was to compare the crop performance, including faba bean emergence, under a range of planting densities, controlled by the rate of sowing and row spacing. It is believed that better early crop establishment leads to better yield and profitability. The study measured several variables, including the seedling emergence observed over the first two weeks after sowing. An expert human observer would typically inspect five random transects of 1 m length in an experimental plot. For automated observations, the entire research field was divided into 2640 grid cells (approximately 50 cm long and 25 cm wide) for aerial photograph observations with a drone (DJI’s Phantom 4 Pro v2.0) in addition to the observations by the human observer. The drone had an on-board camera and recorded the image of each cell. These images were later processed using a machine learning algorithm to predict the actual number of seedlings emerged.

For the purposes of this paper, we consider the data collected in a single year of the trial over the entire field: (1) the manual counts by the expert observer of seedling emergence (Y) on each grid cell approximately two weeks after sowing; (2) the automated counts X predicted from the image data. The ranking variable X is highly correlated with Y, \(\rho _{Y,X}= 0.766\). The lack of perfect correspondence is explained by the uncertainty in the manual and automated counts due to the presence of ground cover (stubble) concealing the dusty-green faba bean seedlings.

In the seedling emergence data, the auxiliary variable X can be used as a ranking variable for JPS. Even though the X values would be different from the Y values, they provide reasonably accurate ranks for faba bean emergence on the grid cells selected for comparison sets.

In a JPS sample, rank classes for individual units are assigned after the simple random sample is collected. In certain settings, it is possible to have more than one set of ranks for the same measured unit in the SRS. Ranks can be constructed with different ranking mechanisms. To illustrate this, in our example of the faba bean emergence, the ranks of Y on each measured unit can be obtained K, \(K \ge 1,\) times, with a minimal ranking cost, by constructing K different comparison sets (simply selecting grid cells at random from automated images) for each Y-value. In a different setting, units in each comparison set may be ranked by K rankers using different visual inspection or assessment protocols, or by K machine learning algorithms requiring, say, different amounts of training. In some other cases, for the n units selected and measured for a JPS, \(n(H-1)\) units can be additionally selected in the population, and the same person may visually rank the units in K different comparison sets of size H for each unit in the JPS by permuting the \(n(H-1)\) unmeasured units into n comparison sets K times.

The ranking information plays a significant role in the construction of a JPS sample. In a post-stratified sample (PS), the observations are binned based on their membership in stratum populations. However, knowledge of the strata membership may be too strong an assumption. When the membership information is not available for all population units, position information may be readily available in a small set of units in comparison sets. This position information is used in JPS sampling to create judgment groups of homogeneous observations. Each measured unit in a JPS sample possesses two pieces of information: the value of the characteristic of interest Y, and the relative position of the unit among the other \(H-1\) unmeasured units in its comparison set. The latter presents additional information through the ranks of the measured units. If the ranking quality is good and H is large, the information content of a JPS sample can be substantially higher than the information content of a PS sample of the same size.

The problem of reducing the impact of ranking error on the estimators in the JPS sampling design for an infinite population has generated extensive research. Wang et al. (2006) used concomitants of order statistics to estimate the population mean. Frey (2016) and Frey and Feeman (2012, 2013) constructed estimators for the population mean and variance by conditioning on the judgment group sample sizes. These conditional estimators improve on the unconditional JPS estimators. Judgment ranks induce a stochastic ordering among judgment group means in a JPS sample. Chen et al. (2014), Frey and Ozturk (2011), Wang et al. (2008, 2012), and Stokes et al. (2007) constructed constrained estimators using isotonic regression under the restriction of stochastic ordering among judgment ranking group means. Judgment group sample sizes are random variables in a JPS sample; hence, sample sizes for some judgment groups may be zero. Ozturk (2017) constructed conditional ranks with smaller set sizes by conditioning on the original ranks in a JPS sample and showed that these conditional ranks in smaller comparison sets reduce the impact of empty judgment classes. The main approach in the aforementioned papers is either to average the ranking information in the sample or to smooth it through isotonic regression. Omidvar et al. (2018) estimated the prevalence of osteoporosis using a JPS sample in a finite mixture model. Zamanzade and Wang (2017) constructed several estimators for the population proportion using a JPS sample.

Recently, there has been increased research on JPS sampling in the finite population setting. Ozturk (2016a, 2016b, 2019a) used design-based inference to construct JPS estimators for the population mean and total. The JPS sample can be constructed under sampling with or without replacement. It has been shown that the estimator of the variance of the sample mean requires a finite population correction factor under sampling without replacement. Ozturk (2019b) constructed a JPS sample using probability proportional-to-size sampling to assign higher selection probabilities to important units in the population. Under model-based inference, Ozturk (2018a) and Ozturk and Bayramoglu Kavlak (2018b) constructed predictors for the population mean and total using a super-population model.

Regardless of how the ranks are created, the ranking information from multiple consistent sources can be combined to draw statistical inference. If the ranking cost is negligible, combining sets of ranks improves the information content of a JPS sample without impacting its sampling cost. In a slightly different context, Ozturk (2013) and Ozturk and Demirel (2016) used the tie structures among K rankers to construct a ranked set sample and estimate the population mean and variance. Stokes et al. (2007) used the raking principle in contingency tables to incorporate ranking information into a JPS sample.

This paper adds to the JPS research for finite populations and presents new estimators of the population mean and total which combine the ranking information obtained from different sources in a JPS design. The design-based inference is presented in detail, and the model-based inference is provided in the Supplementary Material. Section 2 details the construction of JPS sampling designs with and without replacement, \(D_1\) and \(D_2\). In design \(D_1\), units in a comparison set are selected without replacement. The units are ranked, and the rank of the measured unit is identified. Once the rank is identified, all units, including the measured one, are returned to the population before constructing the next comparison set, and the process is repeated until all units in the JPS sample are assigned their judgment ranks. In this design, the same unit can appear in the sample more than once, and all observations in the sample are independent. Hence, this sample is equivalent to a JPS sample constructed from an infinite population. Design \(D_2\) is similar to design \(D_1\). The main difference is that all units in the comparison set, both measured and unmeasured, are removed from the population before selecting the next comparison set. The designs \(D_1\) and \(D_2\) induce different correlation structures among the measured observations in a JPS sample: all observations in \(D_1\) are independent, while observations in \(D_2\) are negatively correlated. Section 2 also provides a brief review of the known results for the distributional properties of the sample mean under the JPS sampling designs. Section 3 introduces equally and unequally weighted estimators, which combine the ranking information from K different sets of ranks. The unequal weights depend on the standard errors of the JPS estimators based on individual ranking sources. Section 4 provides the variance estimator for the weighted estimators of the population mean.
Section 5 investigates the empirical properties of the proposed estimators. Section 6 applies the methodology to the faba bean seedling emergence data to estimate the average density of seedlings. Section 7 provides some concluding remarks. The Supplementary Material presents developments for model-based inference and agreement-weight estimators.

2 Sampling Designs

In this paper, we assume the finite population setting unless specified otherwise. The finite population setting under sampling with replacement also covers the JPS sampling designs in the infinite population setting. A finite population of size N will be denoted by \({{\mathcal {P}}}\). Let Y be the variable of interest. The values of Y on population units will be denoted by \(y_1, \ldots , y_N\). The population mean and variance are defined as follows

$$\begin{aligned} \mu = \frac{1}{N} \sum _{i=1}^N y_i, \quad \sigma ^2= \frac{1}{N} \sum _{i=1}^N(y_i - \mu )^2 . \end{aligned}$$

Design \(D_1\): we first select an SRS with replacement and measure each of the n units in the sample, \(Y_1, \ldots , Y_n\). For each unit (\(Y_i\)), we then select \(H-1\) additional units, without replacement, from the population to form a comparison set \(\{Y_i,Y^*_1,\ldots , Y^*_{H-1}\}\), where \(Y^*_h \ne Y_i\), \(h=1, \ldots , H-1\). We rank all units in the comparison set from smallest to largest based on the perceived value of the characteristic Y (or some other available ranking information) and identify the rank of \(Y_i\), \(R_{i,1}\). All units in the comparison set, including the one we measured initially, are returned to the population before the construction of the next comparison set. A population size of at least H units is required to complete the design. This process creates the following JPS sample

$$\begin{aligned} D_1 = \{ Y_i,R_{i,1}\};\ i=1,\ldots , n. \end{aligned}$$
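For concreteness, the \(D_1\) construction can be sketched in a few lines of code. The function below is purely illustrative (the name jps_sample_d1 and the arguments y and x are our own, not from the paper or the RankedSetSampling package); ranking on an auxiliary variable x stands in for visual ranking or any other consistent ranking mechanism.

```python
import numpy as np

def jps_sample_d1(y, x, n, H, seed=None):
    """Sketch of design D_1: an SRS of n units drawn with replacement,
    each ranked within its own comparison set of size H."""
    rng = np.random.default_rng(seed)
    N = len(y)
    Y, R = np.empty(n), np.empty(n, dtype=int)
    for i in range(n):
        j = rng.integers(N)                                  # measured unit
        # H-1 distinct companions (excluding unit j), drawn without replacement;
        # conceptually, every unit is returned before the next comparison set
        others = rng.choice(np.delete(np.arange(N), j), size=H - 1, replace=False)
        Y[i] = y[j]
        R[i] = 1 + np.sum(x[others] < x[j])                  # rank judged on x
    return Y, R
```

Since every comparison set is rebuilt from the full population, the n pairs \((Y_i, R_{i,1})\) are independent, matching the infinite-population behavior of \(D_1\).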

Design \(D_2\): the JPS sample is constructed in a fashion similar to \(D_1\). We first select an SRS of size n without replacement and measure all the items, \(Y_1,\ldots , Y_n\). For each measured item (\(Y_i\)), we construct a comparison set \(\{Y_i, Y^*_1, \ldots , Y^*_{H-1} \}\) with the constraint that \(Y_h^*\ne Y_i\), \(h=1, \ldots , H-1\), and identify the ranks (\(R_{i,2}\)) for \(i=1, \ldots , n\). In this design, all units in a comparison set are removed from the population before selecting a comparison set for another item. Hence, all comparison sets are disjoint, and a population size of at least Hn units is required. The JPS sample created is

$$\begin{aligned} D_2=\{ Y_i,R_{i,2} \}; i=1, \ldots , n. \end{aligned}$$

We note that samples \(D_1\) and \(D_2\) have different distributional properties. If the population size is sufficiently large, the pairs in sample \(D_2\) become approximately independent, and samples \(D_1\) and \(D_2\) are approximately equivalent.
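A matching sketch for design \(D_2\) (again hypothetical code of ours, not the paper's implementation) makes the disjointness requirement \(N \ge nH\) explicit: one permutation of the population supplies both the measured SRS and the disjoint companion sets.

```python
import numpy as np

def jps_sample_d2(y, x, n, H, seed=None):
    """Sketch of design D_2: n disjoint comparison sets of size H,
    all units sampled without replacement (requires N >= n*H)."""
    rng = np.random.default_rng(seed)
    N = len(y)
    if N < n * H:
        raise ValueError("design D_2 requires N >= n * H")
    perm = rng.permutation(N)
    measured, pool = perm[:n], perm[n:]                 # SRS without replacement
    Y, R = np.empty(n), np.empty(n, dtype=int)
    for i, j in enumerate(measured):
        others = pool[i * (H - 1):(i + 1) * (H - 1)]    # disjoint companion sets
        Y[i] = y[j]
        R[i] = 1 + np.sum(x[others] < x[j])
    return Y, R
```

Because no unit is reused, the resulting \((Y_i, R_{i,2})\) pairs carry the negative correlation described above.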

The conditional mean and variance of \(Y_j\) given \(R_{j,r}=h\), and the conditional covariance of \(Y_j,Y_t\) given that \(R_{j,r}=h,R_{t,r}=h'\) will be denoted by

$$\begin{aligned} \mu _{[h],r}= & {} E(Y_{[h],r})= E(Y_j|R_{j,r}=h), \text{ for } r=1,2 \\ \sigma ^2_{[h],r}= & {} \hbox {Var}(Y_{[h],r})= \hbox {Var}(Y_j|R_{j,r}=h) \text{ for } r=1,2 \end{aligned}$$

and

$$\begin{aligned} \sigma _{[h,h'],2}=\hbox {cov}(Y_{[h],2},Y_{[h'],2})= \hbox {cov}(Y_j,Y_t|R_{j,2}=h,R_{t,2}=h') \text{ for } r=2. \end{aligned}$$

For design \(D_1\), \(\sigma _{[h,h'],1}=0\) since the observations are independent.

In the expressions above, the square brackets correspond to judgment class statistics. Under perfect ranking, the brackets can be replaced with round parentheses corresponding to order statistics.

Unbiased estimators of the population mean and total based on samples \(D_1\) and \(D_2\) are given by

$$\begin{aligned} {\bar{Y}}_{\mathrm{{JPS}},r}= \sum _{h=1}^H \frac{I_{h,r} J_{h,r}}{d_{n,r}} \sum _{i=1}^n I(R_{i,r}=h) Y_i, \ r=1,2, \end{aligned}$$

where the subscript r denotes that the JPS sample is generated by design \(D_r\) and

$$\begin{aligned} J_{h,r}= \left\{ \begin{array}{ll} 1/n_{h,r} &{} n_{h,r}>0, \\ 0 &{} n_{h,r}=0, \end{array}\right. \quad n_{h,r} =\sum _{i=1}^n I(R_{i,r}=h), \quad I_{h,r}= I(n_{h,r}>0), \quad d_{n,r}= \sum _{h=1}^H I_{h,r}. \end{aligned}$$

The estimator \({\bar{Y}}_{\mathrm{{JPS}},r}\) can be written in a slightly different form as a weighted average of the judgment class means

$$\begin{aligned} {\bar{Y}}_{\mathrm{{JPS}},r}= \sum _{h=1}^H w_{h,r} {\bar{Y}}_h, \ {\bar{Y}}_h=J_{h,r}\sum _{i=1}^n I(R_{i,r}=h) Y_i, \quad w_{h,r}= I_{h,r}/d_{n,r}, \end{aligned}$$

where the weights \(w_{h,r}\) make the estimator unbiased for any set size H and sample size n. In JPS sampling, ranks have a discrete uniform distribution with support on the integers \(1,\ldots , H\). Hence, \(n_{h,r}\), \(d_{n,r}\), and \(w_{h,r}\) are all random variables since they are functions of the ranks \(R_{i,r}\), \(i=1,\ldots , n.\) The sample size vector of judgment class groups, \({\varvec{n}}_r=(n_{1,r},\ldots , n_{H,r})\), has a multinomial distribution with the number of trials n and the success probability vector \((1/H, \ldots , 1/H)\). The following lemma provides some useful results for these random variables.
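As a sketch (assuming ranks coded \(1,\ldots,H\); the function name is ours), the estimator reduces to averaging the class means over the \(d_n\) non-empty judgment classes, which is exactly the weighting \(w_{h,r}=I_{h,r}/d_{n,r}\):

```python
import numpy as np

def jps_mean(Y, R, H):
    """JPS estimator of the population mean: the unweighted average of the
    judgment-class means over the non-empty classes (weights I_h / d_n)."""
    Y, R = np.asarray(Y, dtype=float), np.asarray(R)
    class_means = [Y[R == h].mean() for h in range(1, H + 1) if np.any(R == h)]
    return float(np.mean(class_means))
```

Empty judgment classes simply drop out of the average, so the estimator is well defined whenever at least one class is occupied.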

Lemma 1

The following equalities hold for the distribution of \(\frac{I_{h,r}}{d_{n,r}}, h=1,\ldots, H, r=1,2\) (Ozturk 2014; Dastbaravarde et al. 2016):

  • \(E(\frac{I_{1,r}}{d_{n,r}})=1/H\)

  • \(E(\frac{I_{1,r}}{d_{n,r}^2})=\frac{1}{H^2}\sum _{k=1}^{H}(\frac{k}{H})^{n-1} \)

  • \(Var(\frac{I_{1,r}}{d_{n,r}})=\frac{1}{H^2}\sum _{k=1}^{H-1}(\frac{k}{H})^{n-1}\)

  • \(\hbox {cov}\left( \frac{I_{1,r}}{d_{n,r}},\frac{I_{2,r}}{d_{n,r}}\right) = -\frac{1}{H-1}\hbox {Var}\left( \frac{I_{1,r}}{d_{n,r}}\right) \)

  • \(E(\frac{I_{1,r}^2J_{1,r}}{d_{n,r}^2})=\frac{1}{H^{n}}\left( \frac{1}{n}+\sum _{k=2}^{H}\sum _{j=1}^{k-1}\sum _{t=1}^{n-k+1}\frac{(-1)^{j-1}}{k^2t}\left( {\begin{array}{c}H-1\\ k-1\end{array}}\right) \left( {\begin{array}{c}k-1\\ j-1\end{array}}\right) \left( {\begin{array}{c}n\\ t\end{array}}\right) (k-j)^{n-t}\right) . \)
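Because the ranks are i.i.d. uniform on \(\{1,\ldots,H\}\), the first three identities in Lemma 1 can be checked by brute-force enumeration of all \(H^n\) equally likely rank vectors. The snippet below is a verification sketch of ours, not part of the original development, and is feasible only for small n and H.

```python
import itertools
import numpy as np

def moments_I_over_d(n, H):
    """Exact E(I_1/d_n), E(I_1/d_n^2) and Var(I_1/d_n) by enumerating all
    H^n equally likely rank vectors (feasible only for small n and H)."""
    v1, v2 = [], []                                  # realizations of I_1/d and I_1/d^2
    for ranks in itertools.product(range(1, H + 1), repeat=n):
        counts = np.bincount(ranks, minlength=H + 1)[1:]
        d = np.count_nonzero(counts)                 # number of non-empty classes
        I1 = 1.0 if counts[0] > 0 else 0.0           # is class h = 1 occupied?
        v1.append(I1 / d)
        v2.append(I1 / d ** 2)
    return np.mean(v1), np.mean(v2), np.var(v1)

n, H = 5, 3
e1, e2, var1 = moments_I_over_d(n, H)
```

With \(n=5\) and \(H=3\) the enumeration covers \(3^5=243\) rank vectors and reproduces the first three closed forms of Lemma 1 to machine precision.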

We note that the expected values, variances and covariances in Lemma 1 do not depend on the values of Y on the population units. They can be computed for any given sample and set sizes. We do not assume that the assigned rank \(R_{i,r}\) is the same as the actual rank of \(Y_{i}\) in its comparison set, but we require consistency of the ranking procedure. The ranking procedure is called consistent if it satisfies the following equality

$$\begin{aligned} \frac{1}{H}\sum _{h=1}^{H} E\left\{ Y_{i}|I(R_{i,r}=h)\right\} =E(Y_{i}) \text{ for } r=1,2. \end{aligned}$$

The consistency of a ranking procedure in JPS sampling holds since \(P(R_{i,r}=h) = 1/H\) for \(h=1,\ldots , H\). For convenience, we will consider ranking in the ascending order throughout the paper.

Ozturk (2016a) showed that the estimator \({\bar{Y}}_{\mathrm{{JPS}},r}\) is unbiased for the population mean under a consistent ranking scheme and provided a closed form expression for its variance under designs \(D_1\) and \(D_2\). The following theorem is stated for easy access in the remainder of the paper.

Theorem 1

Let \((Y_i,R_i); i=1,\ldots , n\), be a JPS sample of set size H constructed under a consistent ranking scheme based on either design \(D_1\) or design \(D_2\) from the finite population \({{\mathcal {P}}}\) of size \(N > nH\). (i) The estimators \({\bar{Y}}_{\mathrm{{JPS}},r}\) are unbiased for \(\mu \). (ii) The variances of the estimators \({\bar{Y}}_{\mathrm{{JPS}},r}\) are given by

$$\begin{aligned} \sigma ^2_{1}= \frac{H}{H-1} \hbox {Var}\left( \frac{I_{1,1}}{d_{n,1}}\right) \sum _{h=1}^H ( \mu _{[h],1}- \mu )^2 + E\left( \frac{I_{1,1}^2J_{1,1}}{d_{n,1}^2}\right) \sum _{h=1}^H \sigma _{[h],1}^2 \end{aligned}$$

for design \(D_1\) and

$$\begin{aligned} \sigma ^2_{2}= & {} C_1(n,H)\left\{ \sum _{h=1}^H\sigma ^2_{[h],2}-\sum _{h=1}^H \sigma _{[h,h],2}\right\} + C_2(n,H,N)\frac{H^2\sigma ^2}{H-1} \end{aligned}$$
(1)

for design \(D_2\), where

$$\begin{aligned} C_1(n,H)= & {} \left\{ \frac{1}{H(H-1)}+E(\frac{I_{1,2}^2J_{1,2}}{d_{n,2}^2})-\frac{H}{H-1}E\left( \frac{I_{1,2}^2}{d_{n,2}^2}\right) \right\} \\ C_2(n,H,N)= & {} \left\{ {\hbox {Var}\left( \frac{I_{1,2}}{d_{n,2}}\right) }-\frac{1}{N-1}\left\{ \frac{1}{H}-E\left( \frac{I_{1,2}^2}{d_{n,2}^2}\right) \right\} \right\} . \end{aligned}$$

We now consider an unbiased estimator for the variance of \({\bar{Y}}_{\mathrm{{JPS}},r}\). Let

$$\begin{aligned} U_{1,r}= & {} \frac{1}{E\left( \frac{I_{1,r}I_{2,r}}{d_{n,r}^2}\right) }\sum _{h=1}^H\sum _{h\ne h'}^H \frac{I_{h,r}I_{h',r}J_{h,r}J_{h',r}}{d_{n,r}^2}\sum _{i=1}^n\sum _{j=1}^n (Y_i-Y_j)^2 I(R_{i,r}=h)I(R_{j,r}=h'), \\ U_{2,r}= & {} \sum _{h=1}^H\frac{HI_{h,r}^*J_{h,r}J^*_{h,r}}{d_{n,r}^*} \sum _{i=1}^n\sum _{j\ne i}^n (Y_i-Y_j)^2 I(R_{i,r}=h)I(R_{j,r}=h), \end{aligned}$$

and

$$\begin{aligned} {\hat{\sigma }}^2_{SRS}=\frac{1}{n-1}\sum _{i=1}^n(Y_i-{\bar{Y}})^2, \end{aligned}$$

where

$$\begin{aligned} J^*_{h,r}= \left\{ \begin{array}{ll} \frac{1}{n_{h,r}-1} &{} n_{h,r} >1 \\ 0 &{} n_{h,r}\le 1, \end{array} \right. \end{aligned}$$

and \(I_{h,r}^*=I(n_{h,r}>1)\), \(d_{n,r}^*=\sum _{h=1}^H I^*_{h,r}\). From Lemma 1, one can easily establish that \(E(I_{1,r}I_{2,r}/d_{n,r}^2)=\left( 1/H-E(I_{1,r}/d_{n,r}^2)\right) /(H-1)\). Hence, \(U_{1,r}\) is a statistic that depends only on the data. The following theorem (adapted to the notation of this paper) is stated from Ozturk (2016a).

Theorem 2

Let \((Y_i,R_{i,r})\), \(i=1,\ldots , n\), be JPS samples of set size H from designs \(D_r\), \(r=1,2\). Assume that at least one of the judgment groups has at least two measured observations, \(d^*_{n,r} > 0\). Unbiased estimators of \(\sigma ^2_{1}\) and \(\sigma ^2_{2}\) are given by, respectively,

$$\begin{aligned} {\hat{\sigma }}^2_{1}= & {} \frac{\hbox {Var}\left( I_{1,1}/d_{n,1}\right) }{2(H-1)}U_{1,1}+ \left\{ E\left( \frac{I_{1,1}^2J_{1,1}}{d_{n,1}^2}\right) -\hbox {Var}\left( \frac{I_{1,1}}{d_{n,1}}\right) \right\} \frac{U_{2,1}}{2}, \end{aligned}$$
(2)
$$\begin{aligned} {\hat{\sigma }}^2_{2}= & {} C_1(n,H)U_{2,2}/2+C_2(n,H,N)\frac{H^2{\hat{\sigma }}^2_{SRS}}{H-1}, \end{aligned}$$
(3)
$$\begin{aligned} {\bar{\sigma }}^2_{2}= & {} C_1(n,H)U_{2,2}/2+C_2(n,H,N)\frac{(N-1)(U_{1,2}+U_{2,2})}{2N(H-1)}. \end{aligned}$$
(4)

Theorem 2 provides two unbiased estimators, \({\hat{\sigma }}^2_{2}\) and \({\bar{\sigma }}^2_{2}\), of \(\sigma ^2_{2}\). Ozturk (2016a) showed that for some values of n, H, and N, the coefficient \(C_2(n,H,N)\) can be negative. This may, in rare cases, lead to a negative value of \( {\hat{\sigma }}^2_{2}\). When this happens, the following truncated variance estimator can be used

$$\begin{aligned} {\tilde{\sigma }}^2_{2}=\left\{ \begin{array}{ll} {\hat{\sigma }}^2_{2} &{} \text{ if } {\hat{\sigma }}^2_{2}>0 \\ C_1(n,H)U_{2,2}/2 &{} \text{ if } {\hat{\sigma }}^2_{2} \le 0. \end{array} \right. \end{aligned}$$
(5)

Further development on this estimator can be found in Ozturk (2016a).

Statistical inference can also be drawn using a model-based approach under a super-population model using a multi-ranker model. Detailed development for this approach is provided in the Supplementary Material.

3 Combining Ranking Information

In this section, we look at JPS sampling from a different perspective. We assume that each measured unit is assigned K conditionally independent ranks given the comparison set. The JPS sample still has a single measurement (\(Y_i\)) for each unit, but K different ranks. The definition of the JPS sample in the previous section is now extended to record the source of the ranking information

$$\begin{aligned} D_{r,k}= \{Y_i,R_{i,k,r}\}, i=1, \ldots , n, \ r=1,2, \ k=1, \ldots , K, \end{aligned}$$

where \(R_{i,k,r}\) is the rank assigned to \(Y_i\) by ranking method k in design \(D_r\). The subscript \(r=1,2\) denotes that the sample is generated from designs \(D_1\) or \(D_2\), respectively. As we indicated in Sect. 1, ranks can be constructed using information from different sources. Our objective here is to combine the information contained in these K sets of ranks to construct a better estimator than a JPS sample estimator based on a single ranking method. We consider two different approaches for combining the ranking information. The first approach uses the standard errors of the estimators. The second approach combines the ranking information using equal weights for each of the K sets of ranks.

Let \({\bar{Y}}_{k,r}\) and \({\hat{\sigma }}^2_{k,r}\) be the JPS estimator and its variance estimate, respectively, based on ranking method k and design \(D_r\). For design \(D_1\), we combine the ranking information from different sources by constructing weighted estimators, using either equal weights (E) or unequal weights proportional to the inverse estimated variances (S).

$$\begin{aligned} {\bar{Y}}_{E,1} = \frac{1}{K}\sum _{k=1}^K {\bar{Y}}_{k,1}, \quad {\bar{Y}}_{S,1} = \frac{\sum _{k=1}^K \frac{{\bar{Y}}_{k,1}}{ {\hat{\sigma }}^2_{k,1}}}{A_{1}}, \quad \end{aligned}$$

where

$$\begin{aligned} A_{1}= \sum _{k=1}^K \frac{1}{{\hat{\sigma }}^2_{k,1}}. \end{aligned}$$

Similar estimators can be constructed for design \(D_2\). There are two unbiased estimators for the variance of \({\bar{Y}}_{\mathrm{{JPS}},2}\) (\(\sigma ^2_{2}\)). Our small-scale simulation study showed that the variance estimator \({\tilde{\sigma }}^2_{k,2}\) performs slightly better than \({\hat{\sigma }}^2_{k,2}\). Therefore, we use \({\tilde{\sigma }}^2_{k,2}\) to construct the estimator that combines the ranking information from the K ranking methods.

$$\begin{aligned} {\bar{Y}}_{E,2} = \frac{1}{K}\sum _{k=1}^K {\bar{Y}}_{k,2} \text{ and } {\bar{Y}}_{S,2} = \frac{\sum _{k=1}^K \frac{{\bar{Y}}_{k,2}}{ {\tilde{\sigma }}^2_{k,2}}}{A_{2}}, \text{ where } A_{2}= \sum _{k=1}^K \frac{1}{{\tilde{\sigma }}^2_{k,2}}. \end{aligned}$$

The combined estimators can be classified into two groups. The first group gives equal weight to each individual ranking estimator. The second group assigns greater weights to the ranking methods producing smaller variances. In the Supplementary Material, we demonstrate another approach to combining the ranking information at each observation level using the agreement scores.
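A minimal sketch of the two combination rules (function and argument names are ours): given the K per-ranker estimates and their estimated variances, the equally weighted estimator averages them, while the S-weighted estimator weights each by the reciprocal of its estimated variance.

```python
import numpy as np

def combine_jps(means, variances, method="S"):
    """Combine K per-ranker JPS estimates of the mean: equal weights ("E")
    or weights proportional to 1 / estimated variance ("S")."""
    means = np.asarray(means, dtype=float)
    if method == "E":
        return float(means.mean())
    w = 1.0 / np.asarray(variances, dtype=float)     # inverse-variance weights
    return float(np.sum(w * means) / np.sum(w))
```

The S rule downweights noisy ranking sources automatically, at the cost of depending on the quality of the variance estimates.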

4 Variance Estimator of the Combined Estimator

In order to assess the uncertainty of the combined estimators, we need to estimate their variances. We first consider the jackknife variance estimate for the combined estimators. This estimate is appealing in settings where it is not clear how to obtain a good estimator of the standard deviation of an estimator, for example, when the delta method may not work properly due to the absence of a clear functional relationship between the data and the estimator. In our study, the JPS sampling generates one set of response measurements and K different sets of ranks for each measured unit in its comparison set, and the ranking methods could vary substantially. For example, the same comparison set in a field research project may be ranked through visual inspection, an auxiliary field variable, and computer-assisted automated ranking methods.

Let \({\bar{Y}}^{(-i)}_{E,r}\) and \({\bar{Y}}^{(-i)}_{S,r}\), for \(r=1,2\), be the combined estimators after removing the i-th observation along with all its ranks from the sample. The jackknife variance estimate for each of the combined estimators is then given by

$$\begin{aligned} {\hat{\sigma }}^2_{E,r}= & {} \sum _{i=1}^n({\bar{Y}}^{(-i)}_{E,r} -{\bar{J}}_{E,r})^2((n-1)/n)^2, \ r=1,2, \\ {\hat{\sigma }}^2_{S,r}= & {} \sum _{i=1}^n({\bar{Y}}^{(-i)}_{S,r} -{{\bar{J}}}_{S,r})^2((n-1)/n)^2, \ r=1,2 , \end{aligned}$$

where \(\bar{{J}}_{E,r}=\sum _{i=1}^n {\bar{Y}} ^{(-i)}_{E,r}/n\) and \({\bar{J}}_{S,r}=\sum _{i=1}^n {\bar{Y}} ^{(-i)}_{S,r}/n\). Note that these jackknife variance estimators use the coefficient \(((n-1)/n)^2\) in place of the usual \((n-1)/n\).
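The delete-one scheme can be sketched generically; here estimator stands for any of the combined estimators (a function of the reduced sample), deleting observation i removes its measurement together with all K of its ranks, and the coefficient \(((n-1)/n)^2\) follows the definition above. The code and its names are our illustration.

```python
import numpy as np

def jackknife_variance(Y, ranks, estimator):
    """Delete-one jackknife variance with coefficient ((n-1)/n)^2.
    `ranks` holds the K rank vectors (shape n x K); row i is removed
    together with measurement Y[i]."""
    Y, ranks = np.asarray(Y, dtype=float), np.asarray(ranks)
    n = len(Y)
    loo = np.array([estimator(np.delete(Y, i), np.delete(ranks, i, axis=0))
                    for i in range(n)])
    return float(((n - 1) / n) ** 2 * np.sum((loo - loo.mean()) ** 2))
```

As a quick check, for the plain sample mean this coefficient yields \(\sum_i (Y_i-\bar{Y})^2/n^2\), slightly smaller than the usual jackknife value of \(s^2/n\).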

The variance estimates allow us to construct an approximate \(100(1-\alpha )\%\) confidence interval for the population mean and total,

$$\begin{aligned}&{\bar{Y}}_{E,r} \pm t_{n-1, 1-\alpha /2} {\hat{\sigma }}_{E,r}, {\bar{Y}}_{S,r} \pm t_{n-1, 1-\alpha /2} {\hat{\sigma }}_{S,r}, \\&{\bar{Y}}_{\mathrm{{JPS}},r}\pm t_{n-1, 1-\alpha /2} {\hat{\sigma }}_{\mathrm{{JPS}},r}, \quad {\bar{Y}}_{\mathrm{{SRS}},r} \pm t_{n-1, 1-\alpha /2} {\hat{\sigma }}_{\mathrm{{SRS}},r} \end{aligned}$$

for \(r=1,2\), where \(t_{n-1,a}\) is the a-th upper quantile of the t-distribution with \(n-1\) degrees of freedom, and \({\bar{Y}}_{\mathrm{{SRS}},r}\) and \({\hat{\sigma }}_{\mathrm{{SRS}},r}\) are the sample mean and the estimated standard error of the mean of a simple random sample under design \(D_r\), \(r=1,2\). The R package RankedSetSampling (Ozturk et al. 2021) has been developed to compute the estimators and construct confidence intervals for designs \(D_1\) and \(D_2\). It is available at https://biometryhub.github.io/RankedSetSampling.

5 Empirical Comparisons of Combined Estimators

In this section, we perform a simulation study to empirically investigate the properties of the estimators. We consider four populations with distribution shapes likely to occur in practice: the uniform (i.e., beta(1,1)), beta(4,4), and beta(4,13) distributions and the standard exponential distribution (exp(1)). These distributions correspond to populations that are short-tailed (uniform, beta(1,1)), symmetric (beta(4,4)), slightly positively skewed (beta(4,13)) and positively skewed (exp(1)). For convenience, the uniform, beta(4,4), beta(4,13), and exp(1) distributions are denoted as Group I, II, III, and IV populations, respectively. The values of the population units in the simulations are generated by the quantiles of these four distributions

$$\begin{aligned} y_i= F^{-1}_{G}(i/(N+1)), i=1, \ldots , N, \ G= {\hbox {I, II, III, IV}}, \end{aligned}$$

where \(N=400\) is the size of the finite population and \(F_G\) is the cumulative distribution function of the Group G population, \(G~=~\)I, II, III, IV. These four populations are also used in McIntyre (1952, 2005) to illustrate the efficiency of ranked set sample designs in agricultural experiments.

The JPS samples are generated using designs \(D_1\) and \(D_2\) with set sizes \(H=3,6\) and sample size \(n=30\). The number of rankers and the simulation size are selected to be \(K=3,12\) and 2000, respectively. The quality of the ranking information is modeled through the Dell and Clutter (1972) model, which creates a ranking variable V in each comparison set. Let \({\varvec{Y}}_i^\top =(Y_i, Y_1,\ldots , Y_{H-1})\) be the comparison set for \(Y_i\) generated from a population with variance \(\sigma ^2\). The Dell and Clutter model generates another random vector, \({\varvec{\epsilon }}_i^\top =(\epsilon _i, \epsilon _1, \ldots , \epsilon _{H-1})\), from a normal distribution with mean zero and variance \(\sigma _{\epsilon }^2\). We then add these two vectors to construct the ranking vector \({\varvec{V}}_i^\top =(V_i,V_1,\ldots , V_{H-1})\). Units in the comparison set are ranked based on the values of V, and the rank of \(V_i\) is taken as the judgment rank of \(Y_i\). The correlation coefficient between \(Y_i\) and \(V_i\) is given by \(\rho =1/\sqrt{1+\sigma _{\epsilon }^2/\sigma ^2}\). The magnitude of \(\sigma _{\epsilon }^2\) controls the quality of the ranking information. If \(\sigma _{\epsilon }^2 =0\), the rank of \(Y_i\) is the same as the rank of \(V_i\); hence, no ranking error occurs. Large values of \(\sigma _\epsilon ^2\) correspond to random ranking. In our simulation study, we considered six different ranking models.
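The Dell and Clutter step can be sketched as follows (our own illustrative code, not the simulation program used for Table 1): given a comparison set whose first entry is the measured unit, noise with variance \(\sigma_\epsilon^2=\sigma^2(1/\rho^2-1)\) is added and the set is ranked on the noisy values.

```python
import numpy as np

def dell_clutter_rank(Y_set, rho, sigma2, rng):
    """Judgment rank of the first (measured) unit of Y_set under the
    Dell-Clutter model: rank on V = Y + eps, with Var(eps) chosen so that
    corr(Y, V) = rho, i.e. rho = 1 / sqrt(1 + Var(eps) / sigma2)."""
    sigma_eps2 = sigma2 * (1.0 / rho ** 2 - 1.0)
    V = np.asarray(Y_set, dtype=float) + rng.normal(0.0, np.sqrt(sigma_eps2),
                                                    len(Y_set))
    return 1 + int(np.sum(V[1:] < V[0]))
```

Setting \(\rho=1\) gives zero noise and perfect ranking; smaller \(\rho\) degrades the ranks toward random ranking.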

Table 1 Efficiency of multi-ranker combined estimators relative to the estimator \({{\bar{Y}}}_{\mathrm{{SRS}},r}\), \(\hbox {RE}_{E,r}=\hbox {MSE}({{\bar{Y}}}_{\mathrm{{SRS}},r})/\hbox {MSE}({{\bar{Y}}}_{E,r})\), \(r=1,2\)
\(M_1\): All K ranking methods use \(\rho =0.1\).

\(M_2\): All K ranking methods use \(\rho =0.5\).

\(M_3\): All K ranking methods use \(\rho =0.8\).

\(M_4\): Two-thirds of the ranking methods use \(\rho =0.8\) and the remaining one-third use \(\rho =0.3\).

\(M_5\): One-third of the ranking methods use \(\rho =0.8\) and the remaining two-thirds use \(\rho =0.3\).

\(M_6\): The bottom, middle and upper one-thirds of the ranking methods use \(\rho =0.8, 0.5, 0.3\), respectively.

The relative efficiencies are defined as

$$\begin{aligned} \hbox {RE}_{E,r} =\frac{\hbox {MSE}({\bar{Y}}_{\mathrm{{SRS}},r})}{\hbox {MSE}({\bar{Y}}_{E,r})}, \quad \hbox {RE}_{S,r} =\frac{\hbox {MSE}({\bar{Y}}_{\mathrm{{SRS}},r})}{\hbox {MSE}({\bar{Y}}_{S,r})}, \quad \hbox {RE}_{J,r} =\frac{\hbox {MSE}({\bar{Y}}_{\mathrm{{SRS}},r})}{\hbox {MSE}({\bar{Y}}_{\mathrm{{JPS}},r})}, \end{aligned}$$

for \(r=1,2\). We note that the estimators under each design \(D_r\), \(r=1,2\), are compared with the simple random sample mean estimator under the same design. Values of \(\hbox {RE}_{\cdot ,\cdot }\) greater than one indicate that the rank-based estimators are more efficient than the SRS estimator of the same sample size.
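As a concrete illustration, the Monte Carlo version of these ratios can be computed from replicated estimates as below; `mu` is the true population mean, and the function name is ours, for illustration only.

```python
import numpy as np

def relative_efficiency(est_rank, est_srs, mu):
    """Monte Carlo RE: MSE of the SRS estimator divided by the MSE of the
    rank-based estimator, each MSE taken across simulation replicates."""
    mse = lambda e: np.mean((np.asarray(e, dtype=float) - mu) ** 2)
    return mse(est_srs) / mse(est_rank)
```

Values above one favor the rank-based estimator, matching the convention used throughout the tables.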

Table 1 provides the relative efficiencies for the Group I (uniform) and Group II (beta(4,4)) populations when \(K=3\) and \(K=12\). It is clear that the JPS estimators perform better than the SRS estimators for models \(M_2\), \(M_3\), \(M_4\), \(M_5\) and \(M_6\). In ranking model \(M_1\), there is only a weak correlation, \(\rho =0.1\), between the response and ranking variables. Hence, for this model, the JPS estimators have RE values less than one and are slightly less efficient than the SRS estimator. These low efficiency values are tied to the quality of ranking information. Since the judgment group sample sizes are random variables in a JPS sample, they bring additional variation to the variance of the estimator. When there is no ranking information to separate the data into homogeneous ranking groups, the JPS sample estimators may therefore be slightly less efficient. On the other hand, the multi-ranker combined JPS estimators with \(K=12\) and \(H=6\) are almost as efficient as the SRS estimator even for ranking model \(M_1\). This shows that the multi-ranker JPS estimators perform as well as the SRS estimator even when the ranking quality is poor. When the ranking information is of reasonable quality, the multi-ranker JPS estimators always outperform both the SRS estimator and a JPS estimator with \(K=1\).

The impact of ranking quality can be clearly seen by comparing the efficiencies achieved for models \(M_1\), \(M_2\) and \(M_3\), where all the ranking methods have the same correlation, \(\rho =0.1, 0.5\) and 0.8, respectively. The efficiency values are ordered across models \(M_1\), \(M_2\) and \(M_3\) in all columns of Table 1. The efficiencies of all the estimators are increasing functions of \(\rho \) for a given design \(D_r\) and number of ranking methods \(K\). The efficiencies for model \(M_1\) (\(\rho =0.1\)) are smaller than 1 when \(K=3\), but nearly equal to one when \(K=12\). When \(\rho \) increases to 0.5 (\(M_2\)) and 0.8 (\(M_3\)), the efficiencies also increase for both designs (\(D_1\), \(D_2\)) and both numbers of ranking methods (\(K=3\) and \(K=12\)).

If the ranking correlation \(\rho \) is the same for all ranking methods (ranking models \(M_1\), \(M_2\) and \(M_3\)), the estimators \({\bar{Y}}_{S,r}\) and \({\bar{Y}}_{E,r}\) have similar efficiencies. On the other hand, if the ranking methods have different \(\rho \) values (ranking models \(M_4\), \(M_5\) and \(M_6\)), the estimator weighted with the inverse standard error outperforms the equal-weight estimator. For example, the values of \(\hbox {RE}_{S,r}\) and \(\hbox {RE}_{E,r}\) are essentially equal for models \(M_1\), \(M_2\) and \(M_3\), while the values of \(\hbox {RE}_{S,r}\) are larger than those of \(\hbox {RE}_{E,r}\) for models \(M_4\), \(M_5\) and \(M_6\).

It is also clear that the efficiencies are an increasing function of the set size when the ranking information is good enough to rank the units in the comparison sets. For example, the efficiencies of all the estimators for set size \(H=6\) are higher than the efficiencies of the estimators with \(H=3\) for ranking models \(M_i\), \(i=2,\ldots , 6\). For model \(M_1\), however, the efficiencies of the estimators of set size \(H=6\) (row 7, Table 1) are lower than the efficiencies of the estimators of set size \(H=3\) (row 1). The reason is that when \(H=6\) and \(n=30\), the judgment ranking group sample sizes are more uneven than when \(H=3\) and \(n=30\). This leads to some ranking groups being empty, which inflates the variance of the estimator; empty ranking groups are much less likely when \(H=3\).
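The empty-group effect is easy to quantify. Under a consistent ranking procedure, each judgment rank is uniform on \(\{1,\ldots ,H\}\), so the group sizes are multinomial. The following back-of-the-envelope simulation (our own illustration, not from the paper) estimates the chance that at least one ranking group is empty.

```python
import numpy as np

def prob_empty_group(n, H, reps=20000, seed=0):
    """Estimate P(at least one of the H judgment groups is empty)
    when n judgment ranks are i.i.d. uniform on {1, ..., H}."""
    rng = np.random.default_rng(seed)
    # each row is one simulated vector of the H group sample sizes
    counts = rng.multinomial(n, [1.0 / H] * H, size=reps)
    return float((counts == 0).any(axis=1).mean())
```

For \(n=30\), this gives roughly a 2.5% chance of an empty group when \(H=6\), versus an essentially negligible chance when \(H=3\), consistent with the pattern observed for model \(M_1\) in Table 1.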

Table 2 Efficiency of multi-ranker combined estimators relative to the estimator \({{\bar{Y}}}_{\mathrm{{SRS}},r}\), \(\hbox {RE}_{E,r}=\hbox {MSE}({{\bar{Y}}}_{\mathrm{{SRS}},r})/\hbox {MSE}({{\bar{Y}}}_{E,r})\), \(r=1,2\)
Table 3 Coverage probabilities of jackknife confidence intervals of population mean

Both the Group I and Group II populations are symmetric, but the Group I (uniform) distribution has short, heavy tails (its density stays flat up to the boundary) in comparison to that of Group II. The Group III population is slightly right-skewed, while the Group IV distribution is strongly skewed to the right. The efficiencies of the estimators for Groups III and IV are given in Table 2. Even though the shapes of these populations are different, the efficiency patterns are similar. The main difference is that the improvement in efficiency decreases with the strength of skewness. The uniform distribution yields the maximum improvement in efficiency, and the least improvement occurs for the exponential distribution. This result is consistent with the efficiency results for ranked set samples in McIntyre (1952, 2005), who reported that the efficiencies are highest for the uniform distribution and decrease with skewness. A similar result is reported in Takahasi and Wakimoto (1968). Under perfect ranking and infinite population settings, they proved that the upper bound on the relative efficiency of a ranked set sample mean with respect to the simple random sample mean is \((H+1)/2\), and that this upper bound is achieved for the uniform distribution.

We also computed the simulation standard errors of the multi-ranker estimators of population mean. The maximum value of the simulation standard error for simulation size 2000 was 0.002. Due to space considerations, no tables of the estimated simulation standard errors are reported here.

Let \(C_{E,r}\), \(C_{S,r}\) and \(C_{J,r}\) be the coverage probabilities of the confidence intervals for the population mean based on the estimators \({\bar{Y}}_{E,r}\), \({\bar{Y}}_{S,r}\) and \({\bar{Y}}_{J,r}\), respectively. Tables 3 and 4 present these coverage probabilities for sample size \(n=30\) and the population distributions of Groups I, II, III and IV. It is clear that all the coverage probabilities are reasonably close to the nominal coverage probability of 0.95 for both symmetric and skewed distributions. We also performed another simulation study with the smaller sample size \(n=15\). In that case, the coverage probabilities are reasonably close to 0.95 for Groups I, II and III, while the coverage probabilities for Group IV (exponential distribution) were about 0.90, smaller than the nominal value of 0.95. The results are presented in the Supplementary Material. This additional simulation study shows that, to achieve a reasonable coverage probability for skewed distributions, the multi-ranker JPS samples require larger sample sizes.
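For readers who wish to reproduce the interval construction, a minimal delete-one jackknife sketch is given below. It is our own illustration applied to a plain sample mean; the paper applies the jackknife to the combined JPS estimators, which we do not reproduce here.

```python
import numpy as np
from math import sqrt

def jackknife_se(y):
    """Delete-one jackknife standard error; here the estimator is the
    sample mean, for which the jackknife SE reduces to s / sqrt(n)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    loo = (y.sum() - y) / (n - 1)  # leave-one-out means
    return sqrt((n - 1) / n * ((loo - loo.mean()) ** 2).sum())

def jackknife_ci(y, z=1.96):
    """Approximate 95% confidence interval: estimate +/- z * jackknife SE."""
    m, se = float(np.mean(y)), jackknife_se(y)
    return m - z * se, m + z * se
```

For more complex estimators the same delete-one recipe applies, with the leave-one-out means replaced by the estimator recomputed on each reduced sample.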

6 Faba Bean Crop Establishment Example

In this section, we performed another simulation study using the faba bean seedling emergence data introduced in Sect. 1. The population means and variances of the actual seedling counts (Y) and automated seedling counts (X) for the entire field of \(N= 2640\) transects are \(\mu _Y=1.051\) seedling/50 cm, \(\sigma ^2_Y=0.926\) (seedling/50 cm)\(^2\), and \(\mu _X=0.769\) seedling/50 cm, \(\sigma ^2_X=0.649\) (seedling/50 cm)\(^2\), respectively. The correlation coefficient between X and Y is relatively high, \(\rho _{Y,X}=\hbox {cor}(X,Y)= 0.766\).

In this simulation, the JPS samples were generated with set sizes \(H=3,4,6\) and sample sizes \(n=15, 30\). For each JPS sample in the simulation, we constructed \(K=3, 12\) different comparison sets and determined the rank of Y using the auxiliary variable X. Hence, each Y value in a sample had either \(K=3\) or \(K=12\) judgment ranks, \(\{ Y_i, R_{i,k}, k=1,\ldots , K; i=1, \ldots , n\}\), where \(R_{i,k}\) is the rank of \(Y_i\) in comparison set \(k\). The simulation size was taken to be 2000. We note that the actual values of seedling emergence Y and their predicted values are counts over a given area. Since the X-variable is a discrete random variable, ties among X-values in the comparison sets may occur; when ties happen, we break them at random. Even though the correlation coefficient between Y and X is relatively high, ties among the X-values in the comparison sets may reduce the quality of the ranking information.

In this population, the values of the X variable are available for all \(N= 2640\) population units. This makes a ratio estimator available as an alternative way to estimate the population mean or total. For the ratio estimator, we select an SRS of size n with (\(D_1\)) and without (\(D_2\)) replacement and measure the pairs \((Y_{i,r},X_{i,r})\), \(i=1, \ldots ,n\), for design \(D_r\), \(r=1,2\). The ratio estimator of the population mean is then given by

$$\begin{aligned} {\bar{Y}}_{R,r}=\frac{{\bar{Y}}_{r}}{{\bar{X}}_{r}}\, {\bar{x}}, \quad r=1,2, \end{aligned}$$

where

$$\begin{aligned} {\bar{Y}}_{r}=\frac{1}{n} \sum _{i=1}^n Y_{i,r}; \quad {\bar{X}}_{r}=\frac{1}{n} \sum _{i=1}^n X_{i,r}; \quad {\bar{x}}=\frac{1}{N} \sum _{i=1}^N x_{i}. \end{aligned}$$

The efficiency of the ratio estimator with respect to the SRS estimator is defined by

$$\begin{aligned} \hbox {RE}_{R,r}= \frac{\hbox {MSE}({\bar{Y}}_{\mathrm{{SRS}},r})}{\hbox {MSE}({\bar{Y}}_{R,r})}, \ r=1,2. \end{aligned}$$
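A direct transcription of the ratio estimator above is given below; this is an illustrative sketch, and the function name is our own.

```python
import numpy as np

def ratio_estimator(y, x, x_bar_pop):
    """Classical ratio estimator of the population mean of Y:
    (ybar / xbar) * x_bar_pop, where x_bar_pop is the known
    population mean of the auxiliary variable X."""
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    return float(y.mean() / x.mean() * x_bar_pop)
```

The estimator exploits the correlation between Y and X: when the sample happens to under-represent X relative to its known population mean, the sample mean of Y is scaled up accordingly, and vice versa.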

Table 5 presents the efficiencies of the estimators with respect to the estimator \({\bar{Y}}_{\mathrm{{SRS}},r}\). It is clear that all estimators, including the ratio estimators, provide an improvement over the SRS within each sampling design \(D_r\). Among these estimators, the least efficient estimators (\(\hbox {RE}_{J,r}\)) are the JPS estimators with a single ranking method. The JPS estimators are in general less efficient than the ratio estimators; see the efficiency values (\(\hbox {RE}_{J,1}, \hbox {RE}_{R,1}\)) and (\(\hbox {RE}_{J,2}, \hbox {RE}_{R,2}\)).

The efficiency improvements of the multi-ranker JPS estimators, even when \(K=3\), are substantial. Both the equal-weight and the unequal-weight JPS estimators perform better than the JPS estimators with \(K=1\) and the ratio estimators in both designs \(D_1\) and \(D_2\). For example, the standard error-weighted estimators are 1.252 (1.906/1.523) and 1.206 (1.907/1.581) times more efficient than the ratio estimators when \(n=30\), \(K=12\) and \(H=6\) for designs \(D_1\) and \(D_2\), respectively. For the JPS estimator with \(K=1\), the corresponding efficiency ratios are 1.536 (1.906/1.241) and 1.537 (1.907/1.241).

Table 5 demonstrates that the efficiency values of the equally weighted and the standard error-weighted estimators are very close to each other. In this simulation, all K ranking methods use the same correlation coefficient, \(\rho _{X,Y}=0.766\). Hence, the standard error weights for all ranking methods are, on average, equal, which yields efficiency values similar to those of the equally weighted estimator.

Table 4 Coverage probabilities of jackknife confidence intervals of population mean
Table 5 Efficiency of the multi-ranker combined estimators relative to the SRS estimator \({{\bar{Y}}}_{\mathrm{{SRS}},r}\), \(\hbox {RE}_{E,r}=\hbox {MSE}({{\bar{Y}}}_{\mathrm{{SRS}},r})/\hbox {MSE}({{\bar{Y}}}_{E,r})\), \(r=1,2\), for the faba bean seedling emergence data; the population size is \(N=2640\)

Table 5 shows very little difference between the efficiency values of the same estimators under designs \(D_1\) and \(D_2\). For example, the efficiency values in columns 4 and 8, 5 and 9, and 6 and 10 are very close. This is to be expected since the finite population correction factor, \(1-30/2640=0.989\), is very close to 1, so the impact of sampling without replacement is not visible.

We have also computed the empirical coverage probabilities of the confidence intervals for the population mean. All confidence intervals provided coverage probabilities reasonably close to the nominal coverage probability of 0.95. These results are not reported here.

7 Concluding Remarks

Many survey and field sampling studies collect, concurrently with the main variable or in some other way, additional auxiliary variables correlated with the variable of interest. This situation is typical of modern field phenotyping in plant research. This additional information can induce a structure among the sample units, creating multiple sets of ranks for each measured observation. These ranks can be used to form judgment post-stratified samples with multiple ranks for a given data set. In this paper, we introduce novel weighted estimators that combine the ranking information from different sources in a JPS sample. We constructed two different weight functions. The first weight function uses the standard error of each single-ranker JPS estimator. The second weight function (presented in the Supplementary Material) utilizes the agreement scores of all ranking methods. The variances of the estimators are computed using the jackknife variance estimate, and approximate \((1-\alpha )100\%\) confidence intervals are constructed.
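The first weight function can be sketched as follows; the helper name is ours, and we assume weights proportional to the inverse of each single-ranker standard error, normalized to sum to one, as described in the text.

```python
import numpy as np

def combine_inverse_se(estimates, ses):
    """Combine K single-ranker JPS estimates with weights
    proportional to the inverse of their standard errors."""
    w = 1.0 / np.asarray(ses, dtype=float)
    w = w / w.sum()  # normalize so the weights sum to one
    return float(np.dot(w, np.asarray(estimates, dtype=float)))
```

Rankers whose JPS estimates are more precise (smaller standard error) thus receive larger weight, which is why this estimator outperforms equal weighting when the rankers differ in quality (models \(M_4\)–\(M_6\)).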

The empirical studies show that the proposed estimators always perform better than the SRS estimator and the JPS estimator based on a single ranking method. The efficiency gain is substantial when the comparison set size is large (\(H \ge 3\)). Combining ranking information from multiple sources can also be applied to more complex designs, such as stratified JPS and two-stage JPS designs. The implementation of such novel estimators requires new computing tools to handle the computational complexity. An R-package has been developed to compute the estimators.