
Modified profile likelihood approach for certain intraclass correlation coefficients


Abstract

In this paper we consider the problem of constructing confidence intervals and lower confidence bounds for the intraclass correlation coefficient in an interrater reliability study in which the raters are randomly selected from a population of raters. The likelihood function of the interrater reliability is derived and simplified, so that the profile likelihood approach is readily available for computing confidence intervals. Unfortunately, the confidence intervals computed from the profile likelihood function are in general too narrow to attain the desired coverage probabilities. From a practical point of view, a conservative approach, provided it is at least as precise as existing methods, is preferable, since it gives correct results with a probability higher than claimed. Under this rationale, we propose a modified profile likelihood approach. A simulation study shows that the proposed method in general performs better than currently used methods.



Acknowledgments

The authors would like to thank the co-editor, the associate editor and the two reviewers for their valuable suggestions and editorial comments, which led to a significant improvement of the original manuscript.

Author information

Corresponding author

Correspondence to Yuanhui Xiao.

Appendix: derivation of likelihood function

In this section we perform a series of algebraic operations to simplify the expression of the likelihood function. First we introduce some notation. For an integer \(n\), let \(\varvec{1}_{n}\) denote the \(n\)-dimensional vector of ones

$$\begin{aligned} \varvec{1}_n = \left[ \begin{array}{l} 1 \\ 1 \\ \vdots \\ 1 \end{array}\right] , \end{aligned}$$
(14)

\(\varvec{I}_n\) the \(n\times n\) identity matrix, and \(\varvec{J}_n\) the \(n\times n\) one-matrix whose entries are all one

$$\begin{aligned} \varvec{J}_n = \varvec{1}_n \varvec{1}_n^T = \left[ \begin{array}{cccc} 1 &{} 1 &{} \cdots &{} 1 \\ 1 &{} 1 &{} \cdots &{} 1 \\ \vdots &{} \vdots &{} \ddots &{} \vdots \\ 1 &{} 1 &{} \cdots &{} 1 \end{array}\right] . \end{aligned}$$
(15)

For simplicity, the subscript \(n\) is suppressed in case of no confusion. Let \(y_j\) be the data on the \(j\)th subject:

$$\begin{aligned} y_j = \left[ \begin{array}{l} y_{1j} \\ y_{2j} \\ \vdots \\ y_{Rj} \end{array}\right] , \end{aligned}$$
(16)

and let \(\varvec{y}\) denote the \(RS\)-dimensional vector of all the data

$$\begin{aligned} \varvec{y}= \left[ \begin{array}{l} y_1 \\ y_2 \\ \vdots \\ y_S \end{array}\right] . \end{aligned}$$
(17)

It follows that the covariance matrix of \(\varvec{y}\) is the \(RS \times RS\) matrix \(\sigma ^2 \varvec{V}\), where

$$\begin{aligned} \varvec{V}= (1-{\rho _{s}}-{\rho _{r}})\, \varvec{I}_{RS} + {\rho _{s}}\, (\varvec{I}_S \otimes \varvec{J}_R) + {\rho _{r}}\, (\varvec{J}_S \otimes \varvec{I}_R). \end{aligned}$$
(18)
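To make the structure of \(\varvec{V}\) concrete, here is a minimal NumPy sketch, not code from the paper, that assembles the correlation matrix from the two Kronecker products and confirms it is a valid covariance matrix; the values of \(R\), \(S\), \({\rho _{s}}\) and \({\rho _{r}}\) are hypothetical.

```python
# A minimal sketch (illustrative only) of the matrix V in (18).
import numpy as np

R, S = 4, 6                    # raters, subjects (hypothetical)
rho_s, rho_r = 0.5, 0.2        # subject and rater intraclass correlations

V = ((1 - rho_s - rho_r) * np.eye(R * S)
     + rho_s * np.kron(np.eye(S), np.ones((R, R)))   # same subject, any raters
     + rho_r * np.kron(np.ones((S, S)), np.eye(R)))  # same rater, any subjects

assert np.allclose(V, V.T)                 # symmetric
assert np.allclose(np.diag(V), 1.0)        # unit variances
print(np.linalg.eigvalsh(V).min() > 0)     # positive definite here: True
```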

Thus the log-likelihood function is given by

$$\begin{aligned} l&= -\frac{1}{2} \left\{ RS \ln (2\pi ) + \ln | \sigma ^2 \varvec{V}| + (\varvec{y}-\mu \varvec{1})^T (\sigma ^2 \varvec{V})^{-1} (\varvec{y}-\mu \varvec{1})\right\} \nonumber \\&= -\frac{1}{2} \left\{ RS \ln (2\pi ) + RS \ln \sigma ^2 + \ln |\varvec{V}| + \frac{(\varvec{y}-\mu \varvec{1})^T \varvec{V}^{-1} (\varvec{y}-\mu \varvec{1})}{\sigma ^2} \right\} \end{aligned}$$
(19)

and

$$\begin{aligned} -2l = RS \ln (2\pi ) + RS \ln \sigma ^2 + \ln |\varvec{V}| + \frac{(\varvec{y}-\mu \varvec{1})^T \varvec{V}^{-1} (\varvec{y}-\mu \varvec{1})}{\sigma ^2}. \end{aligned}$$
(20)
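As a sanity check on (19) and (20), the following hedged sketch evaluates the closed-form log-likelihood and compares it with a direct multivariate normal density evaluation from SciPy; all parameter values are hypothetical.

```python
# Numerical check of the log-likelihood expression (19)-(20).
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
R, S = 3, 5
rho_s, rho_r, mu, sigma2 = 0.4, 0.1, 2.0, 1.5   # hypothetical values

V = ((1 - rho_s - rho_r) * np.eye(R * S)
     + rho_s * np.kron(np.eye(S), np.ones((R, R)))
     + rho_r * np.kron(np.ones((S, S)), np.eye(R)))

y = rng.multivariate_normal(mu * np.ones(R * S), sigma2 * V)

# log-likelihood as written in (19)
resid = y - mu
l = -0.5 * (R * S * np.log(2 * np.pi) + R * S * np.log(sigma2)
            + np.linalg.slogdet(V)[1]
            + resid @ np.linalg.solve(V, resid) / sigma2)

direct = multivariate_normal(mu * np.ones(R * S), sigma2 * V).logpdf(y)
print(np.isclose(l, direct))               # True
```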

Setting to zero the partial derivative of \(-2l\) with respect to \(\mu \)

$$\begin{aligned} \frac{\partial (-2l)}{\partial \mu } = -\frac{2}{\sigma ^2} ( \varvec{1}^T \varvec{V}^{-1} y - \mu \varvec{1}^T \varvec{V}^{-1} \varvec{1}). \end{aligned}$$
(21)

gives the maximum likelihood estimator (MLE) \(\hat{\mu }\) of \(\mu \)

$$\begin{aligned} \hat{\mu } = \frac{ \varvec{1}^T \varvec{V}^{-1} \varvec{y}}{ \varvec{1}^T \varvec{V}^{-1} \varvec{1}} = \frac{\varvec{1}^T \varvec{y}}{\varvec{1}^T \varvec{1}} = \bar{y}_{\cdot \cdot }. \end{aligned}$$
(22)

since the vector \(\varvec{1}\) is an eigenvector of the matrix \(\varvec{V}^{-1}\) (and of \(\varvec{V}\)). Similarly, setting to zero the partial derivative of \(-2l\) with respect to \(\sigma ^2\)

$$\begin{aligned} \frac{\partial (-2l)}{\partial \sigma ^2} = \frac{RS}{\sigma ^2} - \frac{(\varvec{y}-\mu \varvec{1})^T \varvec{V}^{-1} (\varvec{y}-\mu \varvec{1})}{(\sigma ^2)^2} \end{aligned}$$
(23)

yields the MLE \(\hat{\sigma ^2}\) of \(\sigma ^2\)

$$\begin{aligned} \hat{\sigma ^2} = \frac{(\varvec{y}-\mu \varvec{1})^T \varvec{V}^{-1} (\varvec{y}-\mu \varvec{1})}{RS}. \end{aligned}$$
(24)

Replacing \(\mu \) by its maximum likelihood estimate \(\bar{y}_{\cdot \cdot }\), we obtain

$$\begin{aligned} \hat{\sigma ^2} = \frac{(\varvec{y}-\bar{y}_{\cdot \cdot } \varvec{1})^T \varvec{V}^{-1} (\varvec{y}-\bar{y}_{\cdot \cdot } \varvec{1})}{RS}. \end{aligned}$$
(25)
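The collapse of the GLS estimator (22) to the grand mean, and the evaluation of (25), can be verified numerically; the sketch below uses illustrative values only.

```python
# Sketch confirming that the GLS estimator (22) equals the grand mean
# because 1 is an eigenvector of V, then evaluating (25).
import numpy as np

rng = np.random.default_rng(1)
R, S = 3, 5
rho_s, rho_r = 0.4, 0.1        # hypothetical values

V = ((1 - rho_s - rho_r) * np.eye(R * S)
     + rho_s * np.kron(np.eye(S), np.ones((R, R)))
     + rho_r * np.kron(np.ones((S, S)), np.eye(R)))

y = rng.multivariate_normal(2.0 * np.ones(R * S), V)
one = np.ones(R * S)

mu_hat = (one @ np.linalg.solve(V, y)) / (one @ np.linalg.solve(V, one))
print(np.isclose(mu_hat, y.mean()))        # True: GLS reduces to the grand mean

resid = y - mu_hat
sigma2_hat = resid @ np.linalg.solve(V, resid) / (R * S)   # equation (25)
print(sigma2_hat > 0)                      # True
```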

Let

$$\begin{aligned} \varDelta \stackrel{def}{=} (\varvec{y}-\bar{y}_{\cdot \cdot } \varvec{1})^T \varvec{V}^{-1} (\varvec{y}-\bar{y}_{\cdot \cdot } \varvec{1}), \end{aligned}$$
(26)

then

$$\begin{aligned} \hat{\sigma ^2} = \frac{\varDelta }{RS}, \end{aligned}$$
(27)

and

$$\begin{aligned} l&= -\frac{1}{2} \left\{ RS \ln (2\pi ) + RS \ln \sigma ^2 + \ln |\varvec{V}| + \frac{\varDelta }{\sigma ^2} \right\} , \end{aligned}$$
(28)
$$\begin{aligned} -2l&= RS \ln (2\pi ) + RS \ln \sigma ^2 + \ln |\varvec{V}| + \frac{\varDelta }{\sigma ^2}. \end{aligned}$$
(29)

The nuisance parameter \(\mu \) is not involved in (26)–(29).

To evaluate the determinant \(|\varvec{V}|\) of the matrix \(\varvec{V}\) and simplify the quadratic form \(\varDelta \) in (26), we need to find the eigenvalues and eigenvectors of the matrix \(\varvec{V}\).

For an integer \(n\), let \(h_1^{(n)},\ h_2^{(n)}, \ldots ,\ h_n^{(n)}\) be the \(n\times 1\) vectors given by

$$\begin{aligned} h_1^{(n)} = d^{(n)}_1 \left[ \begin{array}{l} 1 \\ 1 \\ \vdots \\ 1 \end{array}\right] ,\quad h_2^{(n)} = d_2 \left[ \begin{array}{l} 1 \\ -1 \\ 0 \\ \vdots \\ 0 \end{array}\right] ,\quad h_3^{(n)} = d_3 \left[ \begin{array}{l} 1 \\ 1 \\ -2 \\ 0 \\ \vdots \\ 0 \end{array}\right] ,\ \ldots ,\quad h_n^{(n)} = d_n \left[ \begin{array}{l} 1 \\ 1 \\ \vdots \\ 1 \\ -(n-1) \end{array}\right] , \end{aligned}$$
(30)

where

$$\begin{aligned} d^{(n)}_1 = \frac{1}{\sqrt{n}},\ \ \ d_i = \frac{1}{\sqrt{(i-1)i}} \quad \text{ for} \quad i =2,\ \ldots , n. \end{aligned}$$
(31)

The superscript \( ^{(n)}\) is suppressed if no confusion arises from this omission. Then the vectors \(\{h_i\}_{i=1}^n\) are eigenvectors of the matrix \(\varvec{J}_n\). Indeed,

$$\begin{aligned} \varvec{J}_n h_1 = n h_1,\ \ \ \varvec{J}_n h_i = 0 \quad \text{ for} \text{ all} \quad i=2,\ \ldots , n. \end{aligned}$$
(32)
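As a concrete illustration, the following sketch in Python with NumPy, not code from the paper, builds the vectors of (30)–(31) and checks both their orthonormality and the eigenrelations (32).

```python
# Construct the Helmert columns h_1, ..., h_n of (30)-(31) and verify (32).
import numpy as np

def helmert(n):
    """Return the n x n matrix whose columns are h_1, ..., h_n."""
    H = np.zeros((n, n))
    H[:, 0] = 1.0 / np.sqrt(n)                    # h_1 = 1 / sqrt(n)
    for i in range(2, n + 1):                     # h_i, i = 2, ..., n
        d = 1.0 / np.sqrt((i - 1) * i)
        H[: i - 1, i - 1] = d                     # i-1 leading entries d
        H[i - 1, i - 1] = -(i - 1) * d            # then -(i-1) d
    return H

n = 5
H = helmert(n)
J = np.ones((n, n))
print(np.allclose(H.T @ H, np.eye(n)))            # orthogonal: True
print(np.allclose(J @ H[:, 0], n * H[:, 0]))      # J h_1 = n h_1: True
print(np.allclose(J @ H[:, 1:], 0))               # J h_i = 0, i >= 2: True
```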

The matrix

$$\begin{aligned} \varvec{H}_n = \left[ \begin{array}{llll} h_1&h_2&\dots&h_n \end{array}\right] \end{aligned}$$
(33)

is the \(n\times n\) orthogonal matrix due to Helmert. Now for \(i = 1,\ 2,\ \ldots , R\), let

$$\begin{aligned} q_{i1} = d^{(S)}_1 \left[ \begin{array}{l} h_i^{(R)} \\ h_i^{(R)} \\ \vdots \\ h_i^{(R)} \end{array}\right] ,\quad q_{i2} = d_2 \left[ \begin{array}{l} h_i^{(R)} \\ -h_i^{(R)} \\ 0 \\ \vdots \\ 0 \end{array}\right] ,\quad q_{i3} = d_3 \left[ \begin{array}{l} h_i^{(R)} \\ h_i^{(R)} \\ -2h_i^{(R)} \\ 0 \\ \vdots \\ 0 \end{array}\right] ,\ \ldots ,\quad q_{iS} = d_S \left[ \begin{array}{l} h_i^{(R)} \\ h_i^{(R)} \\ \vdots \\ h_i^{(R)} \\ -(S-1)h_i^{(R)} \end{array}\right] , \end{aligned}$$
(34)

where \(0\) denotes the \(R\)-dimensional zero vector; then the matrix \(\varvec{Q}\) given by

$$\begin{aligned} \varvec{Q}= \left[ \begin{array}{llllllllll} q_{11}&\ \dots&q_{1S};&q_{21}&\dots&q_{2S};&\dots \ \ ;&q_{R1}&\dots&q_{RS} \end{array}\right] \end{aligned}$$
(35)

is orthogonal. Furthermore,

$$\begin{aligned} \varvec{V}q_{11}&= [1-{\rho _{s}}-{\rho _{r}}+R{\rho _{s}}+S{\rho _{r}}] q_{11} \nonumber \\ \varvec{V}q_{1j}&= [1-{\rho _{s}}-{\rho _{r}}+R{\rho _{s}}] q_{1j}\ \ \ \text{ for} j =2,\ 3,\ \ldots , S \nonumber \\ \varvec{V}q_{i1}&= [1-{\rho _{s}}-{\rho _{r}}+S{\rho _{r}}] q_{i1}\ \ \ \text{ for} i =2,\ 3,\ \ldots , R \nonumber \\ \varvec{V}q_{ij}&= [1-{\rho _{s}}-{\rho _{r}}] q_{ij}\ \ \ \text{ for} i =2,\ 3,\ \ldots , R \text{ and} j =2,\ 3,\ \ldots ,\ S, \end{aligned}$$
(36)

by (32), so \(q_{ij}\)’s are the eigenvectors of \(\varvec{V}\) and

$$\begin{aligned}&\lambda _1 \stackrel{def}{=} 1-{\rho _{s}}-{\rho _{r}}+R{\rho _{s}}+S{\rho _{r}}, \end{aligned}$$
(37)
$$\begin{aligned}&\lambda _2 \stackrel{def}{=} 1-{\rho _{s}}-{\rho _{r}}+R{\rho _{s}},\ \end{aligned}$$
(38)
$$\begin{aligned}&\lambda _3 \stackrel{def}{=} 1-{\rho _{s}}-{\rho _{r}}+S{\rho _{r}}, \end{aligned}$$
(39)
$$\begin{aligned}&\lambda _4 \stackrel{def}{=} 1-{\rho _{s}}-{\rho _{r}}, \end{aligned}$$
(40)

are the eigenvalues of the matrix \(\varvec{V}\).
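This eigenstructure is easy to check numerically. The sketch below, which assumes the Kronecker form of \(\varvec{V}\) in (18) and the Helmert columns of (30), confirms that each \(q_{ij} = h_j^{(S)} \otimes h_i^{(R)}\) is an eigenvector with eigenvalue \(\lambda _1\), \(\lambda _2\), \(\lambda _3\) or \(\lambda _4\), and that \(\ln |\varvec{V}| = \ln \lambda _1 + (S-1)\ln \lambda _2 + (R-1)\ln \lambda _3 + (R-1)(S-1)\ln \lambda _4\), anticipating (44).

```python
# Verify the eigenvectors/eigenvalues (36)-(40) and the determinant (44).
import numpy as np

def helmert(n):
    H = np.zeros((n, n))
    H[:, 0] = 1.0 / np.sqrt(n)
    for i in range(2, n + 1):
        d = 1.0 / np.sqrt((i - 1) * i)
        H[: i - 1, i - 1] = d
        H[i - 1, i - 1] = -(i - 1) * d
    return H

R, S = 4, 6
rho_s, rho_r = 0.5, 0.2        # hypothetical values
V = ((1 - rho_s - rho_r) * np.eye(R * S)
     + rho_s * np.kron(np.eye(S), np.ones((R, R)))
     + rho_r * np.kron(np.ones((S, S)), np.eye(R)))

HR, HS = helmert(R), helmert(S)
lam = {1: 1 - rho_s - rho_r + R * rho_s + S * rho_r,   # lambda_1
       2: 1 - rho_s - rho_r + R * rho_s,               # lambda_2
       3: 1 - rho_s - rho_r + S * rho_r,               # lambda_3
       4: 1 - rho_s - rho_r}                           # lambda_4

for i in range(R):
    for j in range(S):
        q = np.kron(HS[:, j], HR[:, i])       # q_{i+1, j+1} of (34)
        k = 1 if (i == 0 and j == 0) else 2 if i == 0 else 3 if j == 0 else 4
        assert np.allclose(V @ q, lam[k] * q)

logdet = (np.log(lam[1]) + (S - 1) * np.log(lam[2])
          + (R - 1) * np.log(lam[3]) + (R - 1) * (S - 1) * np.log(lam[4]))
print(np.isclose(np.linalg.slogdet(V)[1], logdet))     # True: identity (44)
```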

Define two diagonal \(S\times S\) matrices \(\varvec{\Lambda }_1\) and \(\varvec{\Lambda }_2\) by

$$\begin{aligned} \varvec{\Lambda }_1 = \text{ diag}(\lambda _1,\ \lambda _2,\ \ldots ,\ \lambda _2),\ \ \ \varvec{\Lambda }_2 = \text{ diag}(\lambda _3,\ \lambda _4,\ \ldots ,\ \lambda _4), \end{aligned}$$
(41)

and denote by \(\varvec{\Lambda }\) the \(RS \times RS\) block diagonal matrix with one copy of \(\varvec{\Lambda }_1\) followed by \(R-1\) copies of \(\varvec{\Lambda }_2\)

$$\begin{aligned} \varvec{\Lambda }= \text{ diag}(\varvec{\Lambda }_1,\ \varvec{\Lambda }_2,\ \ldots ,\ \varvec{\Lambda }_2). \end{aligned}$$
(42)

It follows that

$$\begin{aligned} \varvec{Q}^T \varvec{V}\varvec{Q}= \varvec{\Lambda } \text{ and} \varvec{V}^{-1} = \varvec{Q}\varvec{\Lambda }^{-1} \varvec{Q}^T. \end{aligned}$$
(43)

Since \(\varvec{Q}\) is orthogonal, the determinant of the matrix \(\varvec{V}\) is

$$\begin{aligned} |\varvec{V}| = |\varvec{\Lambda }| = | \varvec{\Lambda }_1 | \times |\varvec{\Lambda }_2|^{R-1} = \lambda _1 \times \lambda _2^{S-1} \times \lambda _3^{R-1} \times \lambda _4^{(R-1)(S-1)}. \end{aligned}$$
(44)

It follows from (43) that

$$\begin{aligned} \varDelta = (\varvec{y}-\bar{y}_{\cdot \cdot }\varvec{1})^T \varvec{Q}\varvec{\Lambda }^{-1} \varvec{Q}^T (\varvec{y}-\bar{y}_{\cdot \cdot }\varvec{1}) = [\varvec{Q}^T (\varvec{y}-\bar{y}_{\cdot \cdot }\varvec{1})]^T \varvec{\Lambda }^{-1} [\varvec{Q}^T (\varvec{y}-\bar{y}_{\cdot \cdot }\varvec{1})].\nonumber \\ \end{aligned}$$
(45)

Define \( \varvec{z}\stackrel{def}{=} \varvec{Q}^T (\varvec{y}-\bar{y}_{\cdot \cdot }\varvec{1})\), then

$$\begin{aligned} \varDelta&= \frac{z_{(1)}^2}{\lambda _1} + \frac{\sum _{j=2}^S z_{(j)}^2}{\lambda _2} + \frac{\sum _{i=2}^R z_{(S(i-1)+1)}^2}{\lambda _3} + \frac{\sum _{i=2}^R \sum _{j=2}^S z_{(S(i-1)+j)}^2}{\lambda _4} \nonumber \\&= \frac{a}{\lambda _1} + \frac{b}{\lambda _2} + \frac{c}{\lambda _3} + \frac{d}{\lambda _4}, \end{aligned}$$
(46)

where \(z_{(k)}\) denotes the \(k\)th component of \(\varvec{z}\) and

$$\begin{aligned} a = z_{(1)}^2,\ \ \ b = \sum _{j=2}^S z_{(j)}^2,\ \ \ c = \sum _{i=2}^R z_{(S(i-1)+1)}^2,\ \ \ d = \sum _{i=2}^R \sum _{j=2}^S z_{(S(i-1)+j)}^2. \end{aligned}$$
(47)

To simplify the expressions of \(a\), \(b\), \(c\) and \(d\), let \(x_j = y_j - \bar{y}_{\cdot \cdot }\varvec{1}_R\) for all \(j=1,\ 2,\ \ldots , S\), \(x = \varvec{y}- \bar{y}_{\cdot \cdot } \varvec{1}_{RS} \equiv (x_1^T, x_2^T, \ldots , x_S^T)^T \), and

$$\begin{aligned} w_1&= x_1 + x_2 + \cdots + x_S \equiv \frac{1}{d_1^{(S)}} [h_1^{(S)}]^T x, \nonumber \\ w_2&= x_1 - x_2 \equiv \frac{1}{d_2^{(S)}} [h_2^{(S)}]^T x, \nonumber \\ w_3&= x_1 + x_2 - 2 x_3 \equiv \frac{1}{d_3^{(S)}} [h_3^{(S)}]^T x, \nonumber \\&\cdots \cdots \cdots \nonumber \\ w_S&= x_1 + \cdots + x_{S-1} - (S-1) x_S \equiv \frac{1}{d_S^{(S)}} [h_S^{(S)}]^T x, \end{aligned}$$
(48)

where \(h^T x \) is understood as

$$\begin{aligned} h^T x = \sum _{j=1}^S h_{(j)} x_j. \end{aligned}$$
(49)

By (34) and (48),

$$\begin{aligned} w_1&= \left[ \begin{array}{l} {x}_{1 \cdot } \\ {x}_{2 \cdot } \\ \vdots \\ {x}_{R \cdot } \end{array}\right] ,\end{aligned}$$
(50)
$$\begin{aligned} \text{ and} z_{(S(i-1)+j)}&= q_{ij}^T x = d_j^{(S)} h_i^T w_j. \end{aligned}$$
(51)

It follows that

$$\begin{aligned} h_1^T w_1 = \frac{1}{\sqrt{R}} \times \varvec{1}^T w_1 = \frac{1}{\sqrt{R}} \times {x}_{\cdot \cdot } = 0, \end{aligned}$$
(52)

which implies that

$$\begin{aligned} a = z_{(1)}^2 = ( d_1^{(S)} h_1^T w_1 )^2 = 0. \end{aligned}$$
(53)

By (47), (51), (52) and the fact that \(H_S\) is an orthogonal matrix,

$$\begin{aligned} b&= \sum _{j=1}^S (d_j^{(S)} h_1^T w_j)^2 = \sum _{j=1}^S (h_1^Tx_j)^2 \nonumber \\&= \sum _{j=1}^S \left( \frac{{x}_{\cdot j}}{\sqrt{R}} \right) ^2 = R \sum _{j=1}^S \bar{x}_{\cdot j}^2 \nonumber \\&= R \sum _{j=1}^S (\bar{y}_{\cdot j}-\bar{y}_{\cdot \cdot })^2 = \text{ SSBS}. \end{aligned}$$
(54)

By (47), (51), (52) and the fact that \(H_R\) is an orthogonal matrix,

$$\begin{aligned} c&= \sum _{i=1}^R (d_1^{(S)} h_i^Tw_1)^2 = [d_1^{(S)}]^2 \sum _{i=1}^R (h_i^Tw_1)^2 \nonumber \\&= \frac{1}{S} \times (H_Rw_1)^TH_Rw_1 = \frac{1}{S} \times w_1^T w_1 \nonumber \\&= \frac{1}{S} \sum _{i=1}^R {x}_{i \cdot }^2 = S \sum _{i=1}^R \bar{x}_{i \cdot }^2 \nonumber \\&= S \sum _{i=1}^R (\bar{y}_{i \cdot }-\bar{y}_{\cdot \cdot })^2 = \text{ SSBR}. \end{aligned}$$
(55)

By the proof of (54) and (55),

$$\begin{aligned} \text{ SSBS}&= \sum _{j=1}^S (d_j^{(S)} h_1^T w_j)^2, \end{aligned}$$
(56)
$$\begin{aligned} \text{ SSBR}&= \sum _{i=1}^R (d_1^{(S)}h_i^T w_1)^2. \end{aligned}$$
(57)

Using the fact that \(H_S\) is orthogonal again yields that

$$\begin{aligned} \sum _{j=1}^S (d_j^{(S)} h_i^Tw_j)^2 = \sum _{j=1}^S (h_i^Tx_j)^2\ \ \ \text{ for} \text{ all} i=1, 2, \ldots , R. \end{aligned}$$
(58)

Thus, by (47), (51), (52), (56), (57) and (58),

$$\begin{aligned} d&= \sum _{i=2}^R\sum _{j=2}^S (d_j^{(S)} h_i^Tw_j)^2 \nonumber \\&= \sum _{i=1}^R \sum _{j=1}^S (d_j^{(S)} h_i^Tw_j)^2 - \sum _{j=1}^S (d_j^{(S)} h_1^T w_j)^2 - \sum _{i=1}^R (d_1^{(S)}h_i^T w_1)^2 \nonumber \\&= \sum _{i=1}^R \sum _{j=1}^S (h_i^Tx_j)^2 - \text{ SSBS}- \text{ SSBR}\nonumber \\&= \sum _{j=1}^S (H_R x_j)^T(H_R x_j) - \text{ SSBS}- \text{ SSBR}\nonumber \\&= \sum _{j=1}^S x_j^T x_j - \text{ SSBS}- \text{ SSBR}\nonumber \\&= \text{ TOT}- \text{ SSBS}- \text{ SSBR}= \text{ SSE}. \end{aligned}$$
(59)

Hence,

$$\begin{aligned} \varDelta = \frac{\text{ SSBS}}{\lambda _2} + \frac{\text{ SSBR}}{\lambda _3} + \frac{\text{ SSE}}{\lambda _4}, \end{aligned}$$
(60)

and

$$\begin{aligned} \hat{\sigma ^2} = \frac{\varDelta }{RS} = \frac{1}{RS} \left( \frac{\text{ SSBS}}{\lambda _2} + \frac{\text{ SSBR}}{\lambda _3} + \frac{\text{ SSE}}{\lambda _4}\right) . \end{aligned}$$
(61)
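The decomposition (60)–(61) can be confirmed numerically. The following sketch, illustrative only, computes the usual ANOVA sums of squares directly from a data matrix and checks that the quadratic form \(\varDelta \) matches \(\text{ SSBS}/\lambda _2 + \text{ SSBR}/\lambda _3 + \text{ SSE}/\lambda _4\).

```python
# Numerical check of the decomposition (60) of the quadratic form Delta.
import numpy as np

rng = np.random.default_rng(2)
R, S = 4, 6
rho_s, rho_r = 0.5, 0.2        # hypothetical values
lam2 = 1 - rho_s - rho_r + R * rho_s
lam3 = 1 - rho_s - rho_r + S * rho_r
lam4 = 1 - rho_s - rho_r

V = ((1 - rho_s - rho_r) * np.eye(R * S)
     + rho_s * np.kron(np.eye(S), np.ones((R, R)))
     + rho_r * np.kron(np.ones((S, S)), np.eye(R)))

Y = rng.standard_normal((R, S))          # rows: raters, columns: subjects
ybar = Y.mean()
SSBS = R * ((Y.mean(axis=0) - ybar) ** 2).sum()   # between subjects
SSBR = S * ((Y.mean(axis=1) - ybar) ** 2).sum()   # between raters
SSE = ((Y - ybar) ** 2).sum() - SSBS - SSBR       # residual, as in (59)

resid = Y.T.reshape(-1) - ybar           # y stacked subject by subject
Delta = resid @ np.linalg.solve(V, resid)
print(np.isclose(Delta, SSBS / lam2 + SSBR / lam3 + SSE / lam4))   # True
```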

It follows that the log-likelihood function is

$$\begin{aligned} l&= -\frac{1}{2} \left[ RS \ln (2\pi ) + RS \ln \sigma ^2\ +\right. \nonumber \\&\ln \lambda _1 + (S-1) \ln \lambda _2 + (R-1) \ln \lambda _3 + (R-1)(S-1) \ln \lambda _4 \nonumber \\&\left. +\frac{1}{\sigma ^2} \left( \frac{\text{ SSBS}}{\lambda _2} + \frac{\text{ SSBR}}{\lambda _3} + \frac{\text{ SSE}}{\lambda _4}\right) \right] , \end{aligned}$$
(62)

which implies that \(\text{ SSBS}\), \(\text{ SSBR}\) and \(\text{ SSE}\) are mutually independent. Furthermore, at the true parameter values, the distributions of

$$\begin{aligned} \frac{\text{ SSBS}}{\lambda _2 \sigma ^2},\ \ \ \frac{\text{ SSBR}}{\lambda _3 \sigma ^2}\ \ \text{ and} \ \ \frac{\text{ SSE}}{\lambda _4 \sigma ^2} \end{aligned}$$
(63)

are chi-square with \(S-1\), \(R-1\) and \((R-1)(S-1)\) degrees of freedom, respectively. It is easy to verify that

$$\begin{aligned} \theta _S = \lambda _2 \sigma ^2,\ \ \ \theta _R = \lambda _3 \sigma ^2, \ \ \ \sigma _{e}^2 = \lambda _4 \sigma ^2. \end{aligned}$$
(64)
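For completeness, here is the short verification behind (64). It assumes, consistent with the notation of the main text (not reproduced here), that \(\sigma _s^2 = {\rho _{s}}\sigma ^2\) and \(\sigma _r^2 = {\rho _{r}}\sigma ^2\) are the subject and rater variance components, \(\sigma _e^2 = (1-{\rho _{s}}-{\rho _{r}})\sigma ^2\) is the error variance, and \(\theta _S = \sigma _e^2 + R\sigma _s^2\), \(\theta _R = \sigma _e^2 + S\sigma _r^2\) are the expected between-subject and between-rater mean squares:

$$\begin{aligned} \lambda _2 \sigma ^2&= (1-{\rho _{s}}-{\rho _{r}})\sigma ^2 + R {\rho _{s}}\sigma ^2 = \sigma _e^2 + R\sigma _s^2 = \theta _S, \\ \lambda _3 \sigma ^2&= (1-{\rho _{s}}-{\rho _{r}})\sigma ^2 + S {\rho _{r}}\sigma ^2 = \sigma _e^2 + S\sigma _r^2 = \theta _R, \\ \lambda _4 \sigma ^2&= (1-{\rho _{s}}-{\rho _{r}})\sigma ^2 = \sigma _e^2. \end{aligned}$$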

Replacing \(\sigma ^2\) in (62) by its MLE in (61) yields the following log-likelihood function of \({\rho _{s}}\) and \({\rho _{r}}\):

$$\begin{aligned} l&= -\frac{1}{2} \left[ c_0 + D(\lambda _1, \lambda _2, \lambda _3, \lambda _4) + RS \ln \varDelta \right] \nonumber \\&= -\frac{1}{2} \left[ c_0 + D(\lambda _1, \lambda _2, \lambda _3, \lambda _4) + RS \ln \left( \frac{\text{ SSBS}}{\lambda _2} + \frac{\text{ SSBR}}{\lambda _3} + \frac{\text{ SSE}}{\lambda _4}\right) \right] , \end{aligned}$$
(65)

where

$$\begin{aligned} c_0&= (1 + \ln 2\pi - \ln RS)RS ,\nonumber \\ D(\lambda _1, \lambda _2, \lambda _3, \lambda _4)&= \ln \lambda _1 + (S-1) \ln \lambda _2 + (R-1) \ln \lambda _3 + (R-1)(S-1) \ln \lambda _4. \nonumber \\ \end{aligned}$$
(66)

The constant \(c_0\) is free of parameters, and \(D=D(\lambda _1, \lambda _2, \lambda _3, \lambda _4)\) is the logarithm of the determinant of the matrix \(\varvec{V}\); see (44).

A notable fact is that the log-likelihood function in (65) depends only on the two parameters \({\rho _{s}}\) and \({\rho _{r}}\).
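To illustrate how (65) can be used, the sketch below evaluates the log-likelihood on a grid and inverts the ordinary profile-likelihood ratio for \({\rho _{s}}\). This is the plain profile construction that, as noted in the abstract, tends to undercover; the paper's modified procedure adjusts it, and the summary statistics below are hypothetical.

```python
# Generic profile-likelihood interval for rho_s based on (65); a sketch
# of the *plain* construction, not the authors' modified procedure.
import numpy as np
from scipy.stats import chi2

def loglik(rho_s, rho_r, SSBS, SSBR, SSE, R, S):
    """Log-likelihood (65) up to the parameter-free constant c0."""
    lam1 = 1 - rho_s - rho_r + R * rho_s + S * rho_r
    lam2 = 1 - rho_s - rho_r + R * rho_s
    lam3 = 1 - rho_s - rho_r + S * rho_r
    lam4 = 1 - rho_s - rho_r
    D = (np.log(lam1) + (S - 1) * np.log(lam2)
         + (R - 1) * np.log(lam3) + (R - 1) * (S - 1) * np.log(lam4))
    Delta = SSBS / lam2 + SSBR / lam3 + SSE / lam4
    return -0.5 * (D + R * S * np.log(Delta))

# Hypothetical summary statistics, for illustration only.
R, S = 4, 30
SSBS, SSBR, SSE = 260.0, 15.0, 90.0

grid = np.linspace(1e-4, 1 - 1e-4, 400)
# Profile out rho_r by a grid maximization (crude but adequate here).
prof = np.array([max((loglik(rs, rr, SSBS, SSBR, SSE, R, S)
                      for rr in grid if rs + rr < 1 - 1e-6),
                     default=-np.inf) for rs in grid])

cut = prof.max() - 0.5 * chi2.ppf(0.95, df=1)
inside = grid[prof >= cut]
print("approx. 95% profile-likelihood CI for rho_s:",
      (inside.min(), inside.max()))
```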


About this article

Cite this article

Xiao, Y., Liu, H. Modified profile likelihood approach for certain intraclass correlation coefficients. Comput Stat 28, 2241–2265 (2013). https://doi.org/10.1007/s00180-013-0405-x
