Abstract
In this paper we consider the problem of constructing confidence intervals and confidence lower bounds for the intraclass correlation coefficient in an interrater reliability study where the raters are randomly selected from a population of raters. The likelihood function of the interrater reliability is derived and simplified, so that the profile likelihood approach is readily available for computing confidence intervals for the interrater reliability. Unfortunately, the confidence intervals computed from the profile likelihood function are in general too narrow to attain the desired coverage probabilities. From a practical point of view, a conservative approach, provided it is at least as precise as existing methods, is preferable, since it gives correct results with a probability higher than claimed. Under this rationale, we propose the so-called modified profile likelihood approach in this paper. A simulation study shows that the proposed method in general performs better than currently used methods.
Acknowledgments
The authors would like to thank the co-editor, the associate editor and the two reviewers for their valuable suggestions and editorial comments, which led to a significant improvement of the original manuscript.
Appendix: derivation of likelihood function
In this section we perform a series of algebraic operations to simplify the expression of the likelihood function. First we introduce some notation. For an integer \(n\), let \(\varvec{1}_{n}\) denote the \(n\)-dimensional vector whose components are all one, \(\varvec{I}_n\) the \(n\times n\) identity matrix, and \(\varvec{J}_n\) the \(n\times n\) matrix whose entries are all one. For simplicity, the subscript \(n\) is suppressed when no confusion arises. Let \(y_j\) be the data on the \(j\)th subject:
and \(y\) is the \(RS\)-dimensional vector of all data
It follows that the covariance matrix of \(y\) is the \(RS \times RS\) matrix \(\sigma ^2 \varvec{V}\), where
Thus the log-likelihood function is given by
and
Setting to zero the partial derivative of \(-2l\) with respect to \(\mu \)
gives the maximum likelihood estimator (MLE) \(\hat{\mu }\) of \(\mu \)
since the vector \(\varvec{1}\) is an eigenvector of the matrix \(\varvec{V}^{-1}\) (and of \(\varvec{V}\)). Similarly, setting to zero the partial derivative of \(-2l\) with respect to \(\sigma ^2\)
yields the MLE \(\hat{\sigma }^2\) of \(\sigma ^2\)
Replacing \(\mu \) by its maximum likelihood estimate \(\bar{y}_{\cdot \cdot }\), we obtain
Let
then
and
The nuisance parameter \(\mu \) is not involved in (26), (27), (28) and (29).
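The fact that \(\hat{\mu }=\bar{y}_{\cdot \cdot }\) can be checked numerically. The sketch below assumes the standard two-way random-effects covariance structure \(\varvec{V} = (1-\rho _s-\rho _r)\varvec{I}_{RS} + \rho _s(\varvec{I}_S \otimes \varvec{J}_R) + \rho _r(\varvec{J}_S \otimes \varvec{I}_R)\) (with \(y\) stacked subject by subject); the specific values of \(R\), \(S\), \(\rho _s\) and \(\rho _r\) are arbitrary choices for illustration.

```python
import numpy as np

# Illustration under the assumed compound two-way structure:
#   V = (1 - rho_s - rho_r) I_{RS} + rho_s (I_S kron J_R) + rho_r (J_S kron I_R).
# Then 1 is an eigenvector of V, and the GLS estimator of mu is the grand mean.
R, S = 4, 6
rho_s, rho_r = 0.5, 0.2

I_R, J_R = np.eye(R), np.ones((R, R))
I_S, J_S = np.eye(S), np.ones((S, S))
V = ((1 - rho_s - rho_r) * np.eye(R * S)
     + rho_s * np.kron(I_S, J_R)
     + rho_r * np.kron(J_S, I_R))

one = np.ones(R * S)
# 1 is an eigenvector: V @ 1 = (1 - rho_s - rho_r + R*rho_s + S*rho_r) * 1
eigval = (1 - rho_s - rho_r) + R * rho_s + S * rho_r
assert np.allclose(V @ one, eigval * one)

# The GLS estimator (1' V^{-1} y) / (1' V^{-1} 1) reduces to y-bar..
rng = np.random.default_rng(0)
y = rng.normal(size=R * S)
Vinv = np.linalg.inv(V)
mu_gls = (one @ Vinv @ y) / (one @ Vinv @ one)
assert np.allclose(mu_gls, y.mean())
```

Because \(\varvec{V}^{-1}\varvec{1} = \varvec{1}/\lambda \) for the corresponding eigenvalue \(\lambda \), the weights in the generalized least squares estimator are all equal, which is why it collapses to the ordinary grand mean.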
To evaluate the determinant \(|\varvec{V}|\) of the matrix \(\varvec{V}\) and simplify the quadratic form \(\varDelta \) in (26), we need to find the eigenvalues and eigenvectors of the matrix \(\varvec{V}\).
For an integer \(n\), let \(h_1^{(n)},\ h_2^{(n)}, \ldots ,\ h_n^{(n)}\) be the \(n\times 1\) vectors given by
where
The superscript \( ^{(n)}\) is suppressed if no confusion arises from this omission. Then the vectors \(\{h_i\}_{i=1}^n\) are eigenvectors of the matrix \(\varvec{J}_n\). Indeed,
The matrix
is an \(n\times n\) orthogonal matrix, known as the Helmert matrix. Now for \(i = 1,\ 2,\ \ldots , R\), let
then the matrix \(\varvec{Q}\) given by
is orthogonal. Furthermore,
by (32), so the vectors \(q_{ij}\) are eigenvectors of \(\varvec{V}\) and
are the eigenvalues of the matrix \(\varvec{V}\).
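The construction of the Helmert matrix and its eigenvector property can be illustrated with a short numerical check (the dimension \(n = 5\) is an arbitrary choice):

```python
import numpy as np

def helmert(n: int) -> np.ndarray:
    """Return the n x n Helmert orthogonal matrix with columns h_1, ..., h_n."""
    H = np.zeros((n, n))
    H[:, 0] = 1.0 / np.sqrt(n)                 # h_1 = (1/sqrt(n)) * 1
    for i in range(2, n + 1):                  # h_i for i = 2, ..., n
        c = 1.0 / np.sqrt(i * (i - 1))
        H[: i - 1, i - 1] = c                  # first i-1 entries equal c
        H[i - 1, i - 1] = -(i - 1) * c         # i-th entry; rest stay zero
    return H

n = 5
H = helmert(n)
J = np.ones((n, n))

# Orthogonality: H^T H = I
assert np.allclose(H.T @ H, np.eye(n))
# Eigenvector property for J_n: J h_1 = n h_1, and J h_i = 0 for i >= 2
assert np.allclose(J @ H[:, 0], n * H[:, 0])
assert np.allclose(J @ H[:, 1:], 0.0)
```

The columns beyond the first each sum to zero, so \(\varvec{J}_n\) annihilates them, while the constant column \(h_1\) carries the single nonzero eigenvalue \(n\) of \(\varvec{J}_n\).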
Define two diagonal \(S\times S\) matrices \(\varvec{\Lambda }_1\) and \(\varvec{\Lambda }_2\) by
and denote by \(\varvec{\Lambda }\) the following \(RS \times RS\) diagonal matrix
It follows that
Since \(\varvec{Q}\) is orthogonal, the determinant of the matrix \(\varvec{V}\) is
It follows from (43) that
Define \( \varvec{z}\stackrel{def}{=} \varvec{Q}^T (\varvec{y}-\bar{y}_{\cdot \cdot }\varvec{1})\), then
where \(z_{(k)}\) denotes the \(k\)th component of \(\varvec{z}\) and
To simplify the expressions of \(a\), \(b\), \(c\) and \(d\), let \(x_j = y_j - \bar{y}_{\cdot \cdot }\varvec{1}_R\) for all \(j=1,\ 2,\ \ldots , S\), \(x = y - \bar{y}_{\cdot \cdot }\varvec{1} \equiv (x_1^T, x_2^T, \ldots , x_S^T)^T \), and
where \(h^T x \) is understood as
It follows that
which implies that
By (47), (51), (52) and the fact that \(H_S\) is an orthogonal matrix,
By (47), (51), (52) and the fact that \(H_R\) is an orthogonal matrix,
By the proof of (54) and (55),
Using the fact that \(H_S\) is orthogonal again yields that
Thus, by (47), (51), (52), (56), (57) and (58),
Hence,
and
It follows that the log-likelihood function is
which implies that \(\text{ SSBS}\), \(\text{ SSBR}\) and \(\text{ SSE}\) are mutually independent. Furthermore, for true parameter values, the distributions of
are Chi-square distributions with degrees of freedom \(S-1\), \(R-1\) and \((R-1)(S-1)\), respectively. It is easy to verify that
Replacing \(\sigma ^2\) in (62) by its MLE in (61) yields the following log-likelihood function of \({\rho _{s}}\) and \({\rho _{r}}\):
where
The constant \(c_0\) is free of parameters, and \(D=D(\lambda _1, \lambda _2, \lambda _3, \lambda _4)\) is the determinant of the matrix \(\varvec{V}\).
A notable fact is that the log-likelihood function in (65) depends only on two parameters, \({\rho _{s}}\) and \({\rho _{r}}\).
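The sum-of-squares decomposition underlying \(\text{ SSBS}\), \(\text{ SSBR}\) and \(\text{ SSE}\) can be verified numerically. The sketch below assumes the usual two-way ANOVA definitions (raters \(i = 1, \ldots , R\) in rows, subjects \(j = 1, \ldots , S\) in columns); the data are simulated only for illustration.

```python
import numpy as np

# Assumed two-way ANOVA sums of squares:
#   SSBS = R * sum_j (ybar_.j - ybar_..)^2                (between subjects)
#   SSBR = S * sum_i (ybar_i. - ybar_..)^2                (between raters)
#   SSE  = sum_ij (y_ij - ybar_i. - ybar_.j + ybar_..)^2  (residual)
rng = np.random.default_rng(1)
R, S = 3, 10
y = rng.normal(size=(R, S))

grand = y.mean()
row_means = y.mean(axis=1)   # ybar_i.  (per rater)
col_means = y.mean(axis=0)   # ybar_.j  (per subject)

SSBS = R * np.sum((col_means - grand) ** 2)
SSBR = S * np.sum((row_means - grand) ** 2)
SSE = np.sum((y - row_means[:, None] - col_means[None, :] + grand) ** 2)

# The three components decompose the total corrected sum of squares exactly,
# with degrees of freedom S-1, R-1 and (R-1)(S-1), respectively.
total = np.sum((y - grand) ** 2)
assert np.allclose(SSBS + SSBR + SSE, total)
```

The exact decomposition of the total corrected sum of squares mirrors the orthogonality of the Helmert transformation used in the derivation: each block of transformed coordinates collects one of the three quadratic forms.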
Xiao, Y., Liu, H. Modified profile likelihood approach for certain intraclass correlation coefficients. Comput Stat 28, 2241–2265 (2013). https://doi.org/10.1007/s00180-013-0405-x