Background

To evaluate reliability when two raters classify objects as either positive (+) or negative (−), Cohen’s κ [1] and the intra-class version of κ, which is identical to Scott’s π [2], have often been used. Let pa be the agreement probability, and let p1 and p2 be the probabilities of a (+) classification by raters 1 and 2, respectively. Cohen’s κ (κCohen) and Scott’s π (κScott) are then defined as follows:

$$ {\kappa}_{Cohen}=\frac{p_a-{p}_{e(c)}}{1-{p}_{e(c)}},\kern0.5em {\kappa}_{Scott}=\frac{p_a-{p}_{e(s)}}{1-{p}_{e(s)}}, $$

where \( {p}_{e(c)}={p}_1{p}_2+\left(1-{p}_1\right)\left(1-{p}_2\right) \), \( {p}_{e(s)}={p}_{+}^2+{\left(1-{p}_{+}\right)}^2 \) and p+ = (p1 + p2)/2. The pe(c) and pe(s) are the probabilities of agreement expected by chance for Cohen’s κ and Scott’s π, respectively. The pe(c) allows the probabilities of positive classification to differ between the two raters, while pe(s) assumes that these two probabilities are the same. Landis and Koch provided benchmarks for the strength of agreement as follows: values ≤0 as poor, 0.00 to 0.20 as slight, 0.21 to 0.40 as fair, 0.41 to 0.60 as moderate, 0.61 to 0.80 as substantial and 0.81 to 1.00 as almost perfect agreement [3]. Although they acknowledged the arbitrary nature of these benchmarks, they recommended the scale as a useful guideline for practitioners.
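
For concreteness, the following R sketch computes both coefficients from a single hypothetical 2 × 2 agreement table (the counts are illustrative only):

```r
# A minimal sketch, assuming a hypothetical 2x2 agreement table, of
# Cohen's kappa and Scott's pi for two raters with binary outcomes.
tab <- matrix(c(40,  5,
                10, 45),
              nrow = 2, byrow = TRUE,
              dimnames = list(rater1 = c("+", "-"), rater2 = c("+", "-")))
n   <- sum(tab)
p_a <- sum(diag(tab)) / n              # observed agreement probability
p1  <- sum(tab[1, ]) / n               # proportion rated (+) by rater 1
p2  <- sum(tab[, 1]) / n               # proportion rated (+) by rater 2
p_plus <- (p1 + p2) / 2
p_ec <- p1 * p2 + (1 - p1) * (1 - p2)  # chance agreement, Cohen
p_es <- p_plus^2 + (1 - p_plus)^2      # chance agreement, Scott
kappa_cohen <- (p_a - p_ec) / (1 - p_ec)
kappa_scott <- (p_a - p_es) / (1 - p_es)
```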

Many extensions of Cohen’s κ have been proposed, including those for ordinal data [4], multiple raters [5,6,7,8,9], comparisons of correlated κ’s [10,11,12,13] and stratified data [14, 15]. However, as Feinstein and Cicchetti showed, Cohen’s κ depends strongly on the marginal distributions and can therefore behave paradoxically [16]. This behavior is explained by the bias effect and the prevalence effect, which have been discussed extensively [16,17,18]. A number of alternative measures of agreement have also been proposed, such as Holley and Guilford’s G [19], Aickin’s α [20], Andres and Marzo’s delta [21], Marasini’s s* [22, 23] and Gwet’s AC1 [24] and AC2 [25].

Gwet showed that AC1 has better statistical properties (bias and variance) than Cohen’s κ, Scott’s π and the G-index under a limited set of simulations for two raters with binary outcomes [24]. Shankar and Bangdiwala compared Cohen’s κ, Scott’s π, the prevalence-adjusted bias-adjusted kappa (PABAK) [26], AC1 and the B-statistic [27], which is not a kappa-type chance-corrected measure, in the case of two raters with binary outcomes, and showed that AC1 has better properties than the other kappa-type measures [28]. In addition, AC1 has been utilized in medical research over the past decade [29,30,31,32,33,34,35]. Therefore, in this study we limit our discussion to AC1 in the case of two raters with binary outcomes.

First, a brief review of the concept of Gwet’s AC1 is provided. Consider the situation in which two raters independently classify a randomly selected subject as positive (+) or negative (−). Gwet defined two events: G = {the two raters agree} and R = {at least one rater performs random rating}. The probability of agreement expected by chance is then pe = P(G ⋂ R) = P(G| R)P(R). A random rating classifies an individual into each category with the same probability \( \frac{1}{2} \), so that \( P\left(G|R\right)=2\times \left(\frac{1}{2}\right)\times \left(\frac{1}{2}\right)=\frac{1}{2} \). The probability P(R), however, cannot be estimated from the data. Gwet therefore proposed approximating it with a normalized measure of randomness Ψ, defined as follows:

$$ P(R)\simeq \Psi =\frac{\pi_{+}\left(1-{\pi}_{+}\right)}{\frac{1}{2}\left(1-\frac{1}{2}\right)}=4{\pi}_{+}\left(1-{\pi}_{+}\right), $$
(1)

where π+ is the probability that a randomly chosen rater classifies a randomly chosen subject into the + category. Thus, the approximated probability of chance agreement is represented by

$$ {p}_e^{\ast }=P\left(G|R\right)\Psi =2{\pi}_{+}\left(1-{\pi}_{+}\right). $$
(2)

AC1 is thus defined as follows:

$$ \gamma =\frac{p_a-{p}_e^{\ast }}{1-{p}_e^{\ast }}, $$
(3)

where pa is the probability of agreement. Although pe is approximated by \( {p}_e^{\ast } \), Gwet showed that the bias of γ, that is, the difference between γ and the true inter-rater reliability, is equal to or smaller than that of Cohen’s κ, Scott’s π and the G-index under certain assumptions in the case of two raters with binary outcomes. Gwet also provided an estimator \( {\hat{\gamma}}^{\ast } \) of γ and its variance for multiple raters and multiple categories based on the randomization approach, which requires subjects to be selected randomly in such a way that all possible subject samples have exactly the same chance of being selected. However, a model-based approach is advantageous when, for example, the effect of subject covariates on agreement is of interest. Therefore, in the case of two raters with binary outcomes, Ohyama [36] assumed an underlying probability that a subject is rated as (+), together with marginal homogeneity between the two raters, and constructed the likelihood. The resulting maximum likelihood estimator of γ was shown to be identical to the estimator given by Gwet. Likelihood-based confidence intervals for AC1, inclusion of subject covariates, hypothesis testing and sample size determination were also discussed [36].
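
A minimal R sketch of the sample AC1, applying Eqs. (2) and (3) to the hypothetical 2 × 2 table from the sketch above:

```r
# A sketch of the sample AC1 (Eqs. (2)-(3)), continuing the hypothetical
# 2x2 table 'tab' from the sketch in the Background section.
n        <- sum(tab)
p_a      <- sum(diag(tab)) / n
pi_plus  <- (sum(tab[1, ]) + sum(tab[, 1])) / (2 * n)  # estimate of pi_+
p_e_star <- 2 * pi_plus * (1 - pi_plus)                # Eq. (2)
ac1      <- (p_a - p_e_star) / (1 - p_e_star)          # Eq. (3)
```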

In this article, we discuss stratified analyses as another approach to adjusting for the effect of subject covariates on agreement. For example, a clinical assessment of whether a patient has a particular disease symptom may be influenced by the overall severity of the disease; in such a case, we consider stratification by disease severity. Another example is a multicenter inter-rater agreement study, in which subjects are classified independently in each center. These situations yield several independent agreement statistics, and the main purpose of the analysis is then to test whether the degree of inter-rater agreement can be regarded as homogeneous across strata, such as centers or severities of the disease.

For κ, Fleiss first proposed a χ2-based test of homogeneity using large-sample variances [37], and subsequent studies by Donner, Eliasziw and Klar [14], Nam [15, 38] and Wilding, Consiglio and Shan [39] developed homogeneity tests of κ across covariate levels. However, there are no reports on homogeneity tests for AC1 or on an estimator of a common AC1. Therefore, in this article, we derive a homogeneity score test for AC1 across K independent strata and investigate its performance. Estimation of the common AC1 across strata and its confidence intervals are also discussed. Finally, an example application of our approach to clinical trial data is provided.

Methods

Homogeneity tests

Score test

Consider K independent strata involving nk subjects for k = 1, …, K. In each stratum, two raters independently classify subjects as either positive (+) or negative (−). Let Xkij = 1 if subject i (= 1, …, nk) in the k-th stratum is classified as “+” by rater j (= 1, 2) and Xkij = 0 otherwise. Suppose that P(Xkij = 1| i) = uki, \( E\left({u}_{ki}\right)={\pi}_k \) and \( Var\left({u}_{ki}\right)={\sigma}_k^2 \). The γ of the k-th stratum is then expressed as follows [36]:

$$ {\gamma}_k=\frac{1+2\left[{\sigma_k}^2-2{\pi}_k\left(1-{\pi}_k\right)\right]}{1-2{\pi}_k\left(1-{\pi}_k\right)}. $$
(4)

Let the numbers of observed pairs in the three categories of the k-th stratum (both raters (+), discordant, and both raters (−)) be x1k, x2k and x3k, with corresponding probabilities P1k(γk), P2k(γk) and P3k(γk). The data for the k-th stratum are then given as shown in Table 1.

Table 1 Data layout

The log-likelihood function is given by

$$ l\left(\boldsymbol{\gamma}, \boldsymbol{\pi} \right)=\sum \limits_{k=1}^K{l}_k\left({\gamma}_k,{\pi}_k\right), $$
(5)

where γ = (γ1, …, γK)', π = (π1, …, πK)', lk(γk, πk) = x1k log P1k(γk) + x2k log P2k(γk) + x3k log P3k(γk),

$$ {P}_{1k}\left({\gamma}_k\right)={\pi}_k\left(2-{\pi}_k\right)-\frac{1}{2}+\frac{\gamma_k}{2}{A}_k, $$
(6)
$$ {P}_{2k}\left({\gamma}_k\right)={A}_k\left(1-{\gamma}_k\right), $$
(7)
$$ {P}_{3k}\left({\gamma}_k\right)=\left(1-{\pi}_k\right)\left(1+{\pi}_k\right)-\frac{1}{2}+\frac{\gamma_k}{2}{A}_k, $$
(8)

and Ak = 1 − 2πk(1 − πk).
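
Since the computations that follow all rest on these cell probabilities, the sketch below encodes Eqs. (6)–(8) as an R function (vectorized over strata); it is reused in later sketches:

```r
# A sketch of the cell probabilities (6)-(8) as an R function, vectorized
# over strata; it is reused in the sketches that follow.
probs <- function(gamma, pi) {
  A  <- 1 - 2 * pi * (1 - pi)
  p1 <- pi * (2 - pi) - 1 / 2 + gamma * A / 2        # Eq. (6): both (+)
  p2 <- A * (1 - gamma)                              # Eq. (7): discordant
  p3 <- (1 - pi) * (1 + pi) - 1 / 2 + gamma * A / 2  # Eq. (8): both (-)
  cbind(p1, p2, p3)
}
```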

The maximum likelihood estimators of γk and πk are then given by

$$ {\hat{\gamma}}_k=1-\frac{2{n}_k{x}_{2k}}{n_k^2+{\left({x}_{1k}-{x}_{3k}\right)}^2} $$
(9)

and

$$ {\hat{\pi}}_k=\frac{2{x}_{1k}+{x}_{2k}}{2{n}_k}, $$
(10)

respectively.
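
A short R sketch of the estimators (9) and (10) for a single stratum with hypothetical counts; the result for gamma agrees with the marginal-based AC1 formula sketched in the Background section:

```r
# A sketch of the per-stratum estimators (9) and (10); x1, x2, x3 are
# hypothetical counts of both-(+), discordant and both-(-) pairs.
x1 <- 35; x2 <- 10; x3 <- 55
n  <- x1 + x2 + x3
gamma_hat <- 1 - 2 * n * x2 / (n^2 + (x1 - x3)^2)  # Eq. (9)
pi_hat    <- (2 * x1 + x2) / (2 * n)               # Eq. (10)
```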

The first and second derivatives of the log-likelihood function and the Fisher information matrix are given in the Appendix. The aim of this study is to test the homogeneity of the agreement coefficients among the K strata; thus, the null hypothesis is H0 : γk = γ0 (k = 1, 2, ..., K). The score test statistic for this null hypothesis is derived as follows (see Appendix):

$$ T\left({\overset{\sim }{\gamma}}_0,\overset{\sim }{\boldsymbol{\pi}}\right)=\sum \limits_{k=1}^K\frac{{{\overset{\sim }{R}}_k}^2{\overset{\sim }{D}}_k}{n_k\left({\overset{\sim }{B}}_k{\overset{\sim }{D}}_k-{{\overset{\sim }{C}}_k}^2\right)}, $$
(11)

where \( {\overset{\sim }{B}}_k,{\overset{\sim }{C}}_k,{\overset{\sim }{D}}_k\ \mathrm{and}\ {\overset{\sim }{R}}_k \) are obtained by substituting the maximum likelihood estimators \( {\overset{\sim }{\gamma}}_0 \) and \( {\overset{\sim }{\pi}}_k \) under the null hypothesis into

$$ {B}_k=\frac{1}{P_{1k}}+\frac{4}{P_{2k}}+\frac{1}{P_{3k}}, $$
$$ {C}_k=\frac{1}{P_{1k}}-\frac{1}{P_{3k}}+\left(1-{\gamma}_k\right)\left(1-2{\pi}_k\right){B}_k, $$
$$ {D}_k=\frac{1}{P_{1k}}+\frac{1}{P_{3k}}+\left(1-{\gamma}_k\right)\left(1-2{\pi}_k\right)\left(\frac{1}{P_{1k}}-\frac{1}{P_{3k}}+{C}_k\right), $$
$$ {R}_k=\frac{x_{1k}}{P_{1k}}-\frac{2{x}_{2k}}{P_{2k}}+\frac{x_{3k}}{P_{3k}}. $$

\( T\left({\overset{\sim }{\gamma}}_0,\overset{\sim }{\boldsymbol{\pi}}\right) \) is asymptotically distributed as χ2 with K − 1 degrees of freedom. The homogeneity hypothesis is rejected at level α when \( T\left({\overset{\sim }{\gamma}}_0,\overset{\sim }{\boldsymbol{\pi}}\right)\ge {\chi}_{\left(1-\alpha \right),K-1}^2 \), where \( {\chi}_{\left(1-\alpha \right),K-1}^2 \) is the 100 × (1 − α) percentile point of the χ2 distribution with K − 1 degrees of freedom.

Note that, since 0 ≤ P1k(γk), P2k(γk), P3k(γk) ≤ 1 and P1k(γk) + P2k(γk) + P3k(γk) = 1, substituting (6), (7) and (8) into these constraints yields the admissible range of γk with respect to πk [36]:

$$ \frac{2-\left(1-|1-2{\pi}_k|\right)\left(3+|1-2{\pi}_k|\right)}{2-\left(1-|1-2{\pi}_k|\right)\left(1+|1-2{\pi}_k|\right)}\le {\gamma}_k\le 1. $$
(12)

When obtaining the maximum likelihood estimators \( {\overset{\sim }{\gamma}}_0 \) and \( {\overset{\sim }{\pi}}_k \) under the null hypothesis by numerical calculation, initial values need to be set to satisfy this condition.
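
As a concrete illustration, the following R sketch obtains the restricted maximum likelihood estimates numerically for hypothetical counts from K = 2 strata and evaluates the statistic (11); returning a large penalty for inadmissible parameter values keeps the optimizer inside the range (12), in line with the note above. probs() is the function sketched after Eq. (8).

```r
# A sketch of the restricted MLEs under H0 and the score statistic (11)
# for hypothetical counts from K = 2 strata.
x <- rbind(c(35, 10, 55),   # stratum 1: both (+), discordant, both (-)
           c(20,  8, 22))   # stratum 2
K <- nrow(x); n <- rowSums(x)

negloglik <- function(par) {               # par = (gamma0, pi_1, ..., pi_K)
  P <- probs(par[1], par[-1])
  if (any(P <= 0)) return(1e10)            # outside the admissible range (12)
  -sum(x * log(P))
}
init <- c(0.5, (2 * x[, 1] + x[, 2]) / (2 * n))  # must satisfy (12)
fit  <- optim(init, negloglik)
g0   <- fit$par[1]                         # common gamma_0 under H0
piK  <- fit$par[-1]                        # restricted pi_k

P <- probs(g0, piK)
B <- 1 / P[, 1] + 4 / P[, 2] + 1 / P[, 3]
C <- 1 / P[, 1] - 1 / P[, 3] + (1 - g0) * (1 - 2 * piK) * B
D <- 1 / P[, 1] + 1 / P[, 3] + (1 - g0) * (1 - 2 * piK) *
     (1 / P[, 1] - 1 / P[, 3] + C)
R <- x[, 1] / P[, 1] - 2 * x[, 2] / P[, 2] + x[, 3] / P[, 3]
T_score <- sum(R^2 * D / (n * (B * D - C^2)))      # Eq. (11)
p_value <- pchisq(T_score, df = K - 1, lower.tail = FALSE)
```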

Goodness-of-fit test

Donner, Eliasziw and Klar proposed a goodness-of-fit approach for testing the homogeneity of kappa statistics in the case of two raters with binary outcomes [40]. This procedure can also be applied to AC1 statistics. Given that the frequencies x1k, x2k, x3k, k = 1, …, K in Table 1 follow a multinomial distribution conditional on nk, the estimated probabilities under H0 are given by \( {\hat{P}}_{hk}\left({\overset{\sim }{\gamma}}_0\right) \), obtained by replacing πk with \( {\hat{\pi}}_k \) and γk with \( {\overset{\sim }{\gamma}}_0 \) in Phk(γk); h = 1, 2, 3; k = 1, …, K. The goodness-of-fit statistic is then derived as follows:

$$ {\chi}_G^2=\sum \limits_{k=1}^K\sum \limits_{h=1}^3\frac{{\left({x}_{hk}-{n}_k{\hat{P}}_{hk}\left({\tilde{\gamma}}_0\right)\right)}^2}{n_k{\hat{P}}_{hk}\left({\tilde{\gamma}}_0\right)}. $$
(13)

Under H0, \( {\chi}_G^2 \) follows an approximate χ2 distribution with K − 1 degrees of freedom. The homogeneity hypothesis is rejected at level α when \( {\chi}_G^2\ge {\chi}_{\left(1-\alpha \right),K-1}^2 \), where \( {\chi}_{\left(1-\alpha \right),K-1}^2 \) is the 100 × (1 − α) percentile point of the χ2 distribution with K − 1 degrees of freedom.
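
A short R sketch of the statistic (13), continuing the score-test sketch above (x, n, K and g0 are assumed to be in the workspace):

```r
# A sketch of the goodness-of-fit statistic (13); pi_k is replaced by its
# unrestricted estimator (10), gamma_k by the restricted common estimate g0.
pi_hat <- (2 * x[, 1] + x[, 2]) / (2 * n)       # Eq. (10), per stratum
P_hat  <- probs(g0, pi_hat)
chi_G  <- sum((x - n * P_hat)^2 / (n * P_hat))  # Eq. (13)
p_gof  <- pchisq(chi_G, df = K - 1, lower.tail = FALSE)
```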

Estimation of common AC1

If the assumption of homogeneity is reasonable, the estimate of γ0 can be used as a summary measure of reliability. The maximum likelihood estimators \( {\overset{\sim }{\gamma}}_0 \) and \( {\overset{\sim }{\pi}}_k \) are obtained by maximizing the log-likelihood function \( {l}_0\left({\gamma}_0,\boldsymbol{\pi} \right)=\sum \limits_{k=1}^K{l}_k\left({\gamma}_0,{\pi}_k\right) \). Since no closed-form solution exists, numerical iterative calculation is used. The variance \( Var\left({\overset{\sim }{\gamma}}_0\right) \) of \( {\overset{\sim }{\gamma}}_0 \) can be expressed as follows (see Appendix):

$$ Var\left({\tilde{\gamma}}_0\right)=4{\left[\sum \limits_{k=1}^K{n}_k{A_k}^2\left({B}_k^{(0)}-\frac{{C_k^{(0)}}^2}{D_k^{(0)}}\right)\right]}^{-1}={\left[\sum \limits_{k=1}^K\frac{1}{Va{r}_k\left({\tilde{\gamma}}_0\right)}\right]}^{-1}, $$
(14)

where \( {B}_k^{(0)},{C}_k^{(0)},{D}_k^{(0)} \) are values using γk = γ0 in Bk, Ck, Dk respectively, and

$$ Va{r}_k\left({\tilde{\gamma}}_0\right)=\frac{1}{n_k{A}_k^2}\left[{A}_k\left(1-{\gamma}_0\right)-\left({A}_k^2-4{A}_k+2\right){\left(1-{\gamma}_0\right)}^2-{A}_k\left(2{A}_k-1\right){\left(1-{\gamma}_0\right)}^3\right]. $$
(15)

A simple 100 × (1 − α) % confidence interval using the asymptotic normality of \( {\overset{\sim }{\gamma}}_0 \) can be expressed as follows:

$$ {\tilde{\gamma}}_0\pm {Z}_{\alpha /2}\sqrt{\hat{Var}\left({\tilde{\gamma}}_0\right)}, $$
(16)

where Zα/2 is the upper α/2 quantile of the standard normal distribution and \( \hat{Var}\left({\overset{\sim }{\gamma}}_0\right) \) is obtained by substituting \( {\overset{\sim }{\gamma}}_0 \) and \( {\overset{\sim }{\pi}}_k \) into (14). Hereafter, this method is referred to as the simple asymptotic (SA) method. Since Eq. (14) depends on γ0, the SA method may not achieve the correct coverage rate; the normality of the sampling distribution of \( {\overset{\sim }{\gamma}}_0 \) may be improved using Fisher’s Z transformation. This method is referred to below as the Fisher’s Z transformation (FZ) method (see Appendix).
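
A short R sketch of the variance (14)–(15) evaluated at the estimates and the SA interval (16), again continuing the score-test sketch (g0, piK and n are assumed in the workspace):

```r
# A sketch of the variance (14)-(15) at the estimates and the 95% SA
# interval (16).
A      <- 1 - 2 * piK * (1 - piK)
var_k  <- (A * (1 - g0) - (A^2 - 4 * A + 2) * (1 - g0)^2 -
           A * (2 * A - 1) * (1 - g0)^3) / (n * A^2)    # Eq. (15)
var_g0 <- 1 / sum(1 / var_k)                            # Eq. (14)
ci_sa  <- g0 + c(-1, 1) * qnorm(0.975) * sqrt(var_g0)   # Eq. (16)
```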

As an alternative method, we employ the profile variance approach, which has been shown to perform well in the case of the intra-class κ for binary outcome data [41,42,43]. This approach also performs well for AC1 in the case of two raters with binary outcomes [36]. The confidence interval based on the profile variance can be obtained by solving the following inequality for γ0:

$$ \frac{{\left({\tilde{\gamma}}_0-{\gamma}_0\right)}^2}{\overset{\sim }{Var}\left({\gamma}_0\right)}\le {Z}_{\alpha /2}^2, $$
(17)

where \( \overset{\sim }{Var}\left({\gamma}_0\right) \) is given by substituting \( {\overset{\sim }{\pi}}_k \) for πk in (15), leaving γ0 as the free parameter. Hereafter, this method is referred to as the profile variance (PV) method (see Appendix).
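
A sketch of the PV interval, continuing the sketches above; the endpoints of (17) are located numerically with uniroot(), bracketed between the admissible lower bound (12) and 1 (the brackets may need adjustment in extreme cases):

```r
# A sketch of the PV interval (17); pv_fun(g) <= 0 for g inside the interval.
pv_fun <- function(g) {
  A  <- 1 - 2 * piK * (1 - piK)
  vk <- (A * (1 - g) - (A^2 - 4 * A + 2) * (1 - g)^2 -
         A * (2 * A - 1) * (1 - g)^3) / (n * A^2)   # Eq. (15) at candidate g
  (g0 - g)^2 - qnorm(0.975)^2 / sum(1 / vk)
}
b  <- abs(1 - 2 * piK)
lb <- max((2 - (1 - b) * (3 + b)) / (2 - (1 - b) * (1 + b)))  # bound (12)
ci_pv <- c(uniroot(pv_fun, c(lb + 1e-8, g0))$root,
           uniroot(pv_fun, c(g0, 1))$root)
```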

Numerical evaluations

We conducted Monte Carlo simulations to investigate the performance of the proposed homogeneity tests and to evaluate the estimate of the common AC1 and its confidence intervals under the following conditions: the number of strata is K = 2 or 3, and random observations are generated from trinomial distributions with the probabilities (6), (7) and (8) for given values of γk and πk. Both balanced and unbalanced cases were considered for πk and nk. The values of γk and πk were set within the admissible range of Eq. (12) derived above. Ten thousand iterations were carried out for each parameter combination.

When πk is close to 0 or 1 and nk is small, the generated data may include zero cells, in which case Bk, Ck, Dk and Rk cannot be estimated. When zero cells were generated, we therefore added 0.5 to the frequency of each of the four rater-pair combinations (+,+), (+,−), (−,+) and (−,−). This simple method was discussed by Agresti [44] and adopted in a previous study [39].
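
As an illustration, the following R sketch generates one replicate for a single stratum under this design, including the zero-cell correction; the parameter values are hypothetical and probs() is the function sketched after Eq. (8). Note that adding 0.5 to each of the four rater-pair cells adds 0.5 to x1k and x3k and 1 to the pooled discordant count x2k.

```r
# A minimal sketch of one simulation replicate for a single stratum,
# assuming hypothetical parameter values.
simulate_stratum <- function(n_k, gamma_k, pi_k) {
  p <- as.vector(probs(gamma_k, pi_k))
  x <- as.vector(rmultinom(1, n_k, p))
  if (any(x == 0)) x <- x + c(0.5, 1, 0.5)   # zero-cell correction
  x
}
set.seed(1)
simulate_stratum(20, 0.7, 0.2)
```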

Results

Empirical type I error rate for the homogeneity test

The type I error rates of the homogeneity tests with a significance level of 0.05 were examined. The sample size was set at nk = n = 20, 50, 80 for balanced settings and (n1, n2, n3) = (20, 50, 80) for unbalanced settings. The error rate obtained by the score test is expressed as SCORE and the error rate obtained by the goodness-of-fit test is expressed as GOF. Table 2 summarizes the results for K = 2.

Table 2 Empirical type I error rates of homogeneity tests for γ1 = γ2 = γ0 based on 10,000 simulations (K = 2 balanced sample size)

Overall, the proposed score test did not show any serious type I error rate inflation, but it was very conservative when the sample size was small and γ0 was close to 1.

In the case of n = 20 with γ0 = 0.1, 0.3 or 0.5, the type I error rates of SCORE were maintained at the nominal level of 0.05 regardless of whether πk was balanced or unbalanced; with γ0 = 0.7 or 0.9, the type I error rates were slightly conservative. When γ0 = 0.9 in particular, the rate was markedly conservative, falling below 0.01. In the case of n = 50, the type I error rates were maintained at the nominal level of 0.05 except when γ0 = 0.9. Finally, in the case of n = 80, the type I error rates were maintained at close to the nominal level. In contrast, the type I error rate of GOF tended to be larger than that of SCORE, and in many cases it was not maintained at the nominal level.

The results obtained for K = 3 are shown in Table S1 and Table S2 in Additional file 1.

Additional file 2 provides R code for the type I error rate simulations.

Empirical power of the homogeneity test

The empirical power of the score test was investigated only for the case of K = 2, setting γ1 = 0.1, 0.3, 0.5 and γ2 − γ1 = 0.3, 0.4. The values of πk and nk were set as in the type I error simulations. The results are shown in Table 3. The power tended to increase as γ1 increased under fixed values of π and γ2 − γ1.

Table 3 Empirical power of homogeneity tests based on 10,000 simulations (K = 2 balanced sample size)

The empirical power of the GOF test was also examined under the same simulation conditions as the score test, and the results are likewise shown in Table 3. Note, however, that GOF showed substantial type I error rate inflation (Table 2) and is therefore not valid as a test.

Additional file 2 also provides R code for the power simulations.

Bias and mean square error for common AC1

We evaluated the bias and mean square error (MSE) of the maximum likelihood estimator of the common AC1, \( {\overset{\sim }{\gamma}}_0 \), under balanced and unbalanced conditions for πk with balanced nk. The results are shown in Table 4. The bias of \( {\overset{\sim }{\gamma}}_0 \) tended to decrease as γ0 increased, and \( {\overset{\sim }{\gamma}}_0 \) was almost unbiased overall. As expected, both the bias and the MSE decreased as the sample size increased.

Table 4 Bias and mean square error of the maximum likelihood estimator for the common AC1 based on 10,000 simulations (K = 2 balanced sample size)

Additional file 3 provides R code for the bias and MSE simulations.

Confidence intervals for common AC1

We conducted a simulation study to evaluate the performance of the three confidence intervals presented in the previous section, examining the coverage rates of the 95% confidence interval under balanced and unbalanced conditions for πk with balanced nk. The results are shown in Table 5. The coverage rate of the SA method was lower than 0.95 under many conditions, except that it was close to 0.99 when n1 = n2 = 20 and γ0 = 0.9. The FZ and PV methods brought the coverage rates much closer to the nominal level, with the PV method closer to the nominal level than the FZ method in most of the conditions examined. Coverage rates were also evaluated for K = 3 and for unbalanced nk; both the FZ and PV methods achieved coverage rates near 0.95 (results not shown).

Table 5 Coverage rates of common γ 95% confidence intervals of the three proposed methods based on 10,000 simulations

Additional file 4 provides R code for the confidence interval simulations.

An example

As an example, we used data from a randomized clinical trial called the Silicone Study, which was conducted to investigate the effectiveness of silicone fluid versus gas in the management of proliferative vitreoretinopathy (PVR) by vitrectomy [45]. The PVR classification, determined at the baseline visit, grades the severity of the disease along a continuum of increasing pathology as C3, D1, D2 or D3. The presence or absence of retinal breaks in the superior nasal quadrant was evaluated clinically by the operating ophthalmic surgeon and photographically by an independent fundus photograph reading center [46].

The data and results are summarized in Table 6. For reference, the results of the homogeneity score test for the intra-class κ proposed by Nam are also provided [15]. The probabilities of agreement ranged from 0.800 to 0.880 and differed little across strata. The values of κ, however, ranged from 0.117 to 0.520 and differed greatly; this might be due to the prevalence effect caused by the small values of π. In contrast, the values of γ ranged from 0.723 to 0.861 and did not differ greatly among strata.

Table 6 Agreement between ophthalmologist and reading center classifying superior nasal retinal breaks stratified by PVR grade

The proposed homogeneity score statistic \( T\left({\overset{\sim }{\gamma}}_0,\overset{\sim }{\boldsymbol{\pi}}\right) \) was 2.060 (p-value = 0.560), so the homogeneity hypothesis was not rejected. The estimate of the common AC1 was 0.808, with 95% confidence intervals of 0.743–0.873 (SA method), 0.732–0.864 (FZ method) and 0.730–0.862 (PV method). The score statistic for testing the homogeneity of the κ’s [15] was 2.700 (p-value = 0.440), and the common κ was 0.352.

Additional file 5 provides R code for the clinical data example.

To investigate the sensitivity of the two indices to πk, we hypothetically considered more balanced and less balanced πk under fixed pa and nk in each stratum. The generated data sets and analysis results are summarized in Table S3 in Additional file 1. κ was sensitive to changes in the value of π, whereas AC1 was much less so. Likewise, the common AC1 was affected far less than the common κ when the balance of π was lost.

Discussion

It is well known that Cohen’s κ depends strongly on the marginal distributions, and Gwet proposed alternative, more stable measures of agreement: AC1 for nominal data and its extension AC2 for ordinal data [24, 25]. A number of other alternative measures have also been proposed, such as Holley and Guilford’s G [19], Aickin’s α [20], Andres and Marzo’s delta [21] and Marasini’s s* [22, 23]. Gwet [24] and Shankar and Bangdiwala [28] compared several of these measures and showed that AC1 has better properties than the other kappa-type measures. In addition, AC1 has been utilized in medical research over the past decade [29,30,31,32,33,34,35]. However, statistical inference procedures for AC1 have not been discussed sufficiently. Ohyama therefore expressed AC1 in terms of population parameters to develop a likelihood-based inference procedure and constructed confidence intervals for AC1 based on profile variances and likelihood ratios; inclusion of subject covariates, hypothesis testing and sample size estimation were also presented [36]. The present study extends Ohyama [36] to stratified data for two raters with binary outcomes: we derived tests of the homogeneity of AC1 across K independent strata and discussed inference for the common AC1.

In the numerical evaluation of type I error, both tests were conservative when the sample size was small and γ0 was 0.9, but this conservativeness was relaxed when the sample size was as large as 80. In the other simulation settings, the score test performed well, whereas GOF often failed to maintain the nominal level. We therefore recommend the score test for testing the homogeneity of AC1 among K strata. Note that the homogeneity score test statistic cannot be calculated when zero cells are observed. In such cases in our simulation study, we simply added 0.5 to the data, which had no serious effect on the performance of the proposed score test in our simulation settings.

If the homogeneity assumption is reasonable, it may be desirable to provide an estimate of the common AC1 as a summary measure of reliability. In the present study, we proposed an estimator of the common AC1, constructed its confidence intervals by the SA, FZ and PV methods, and evaluated each numerically. The bias and MSE decreased as the sample size increased and were nearly 0 when n = 80. The PV method provides coverage close to the nominal level in most situations, while the SA method tends to undercover and the FZ method tends to overcover in some situations. We therefore recommend the PV method for calculating confidence intervals.

As the PVR example shows, AC1 within each stratum is less affected by the prevalence and the marginal probabilities than κ is. This suggests that the proposed homogeneity test and the common AC1 estimator are also essentially more stable than their κ counterparts.

This study has some limitations. First, as described above, the proposed score test was very conservative when γ0 = 0.9 and the sample size was small. An exact approach might be an alternative method in such cases.

Second, this study was limited to two raters with binary outcomes in each stratum. In the evaluation of medical data, however, it is often the case that multiple raters classify subjects into nominal or ordered categories. Our proposed method may be extended to multiple raters with binary outcomes using the likelihood function for multiple raters. For two raters with nominal outcomes, Agresti [47] proposed a quasi-symmetry model with kappa as a parameter, and this technique may be extended to AC1 in the case of stratified data.

Finally, continuous covariates need to be categorized appropriately to apply the proposed approach. The regression model proposed by Ohyama [36] can be used to assess the effect of continuous covariates on AC1, but it is limited to the case of two raters with binary data. Nelson and Edwards [48] and Nelson, Mitani and Edwards [49] proposed methods for constructing a measure of agreement using generalized linear mixed-effects models, introducing continuous latent variables that represent the subject’s true disease status and flexibly incorporating rater and subject covariates. These approaches might be applicable to AC1 and AC2.

Conclusion

The method proposed in this study is useful for summarizing evaluations of agreement in multicenter or stratified inter-rater agreement studies. It can be applied not only in medical and epidemiological research but also to assessments of consistency in other fields, such as biometrics, psychological measurement and the behavioral sciences.