Appendix
Proofs
This section contains the proofs for Propositions 1 through 7. We let \(j = 1,2\) denote the members of an arbitrary dyad and assume that \(\theta _1 \le \theta _2\) by choice of notation. Subscripts for items are omitted. Several proofs require derivatives of monotonic functions, which the reader will recall are defined almost everywhere on their domain.
Proof of Proposition 1
Let \(f, g :\mathbb {R} \rightarrow [0,1]\) be monotone non-decreasing functions, and let \(a, b \in [0,1]\) be fixed constants. The function
$$\begin{aligned} h(x, y)&= a\,f(x)[1-g(y)] + b\,[1-f(x)]g(y) + f(x)g(y) \nonumber \\&= a\,f(x) + b\;g(y) + (1 - a - b)\,f(x)g(y) \end{aligned}$$
(15)
is seen to be non-decreasing in x for fixed y by considering its partial derivative in x and noting that \(df / dx = f'(x) \ge 0\):
$$\begin{aligned} \frac{\partial }{\partial x} h(x, y)&= a\,f'(x) + (1 - a - b)\,f'(x)\,g(y) \nonumber \\&= a\,f'(x) \,[1 - g(y)] + (1 - b)\,f'(x)\,g(y) \ge 0. \end{aligned}$$
(16)
A similar argument shows that Eq. (15) is also non-decreasing in y, and Proposition 1 follows directly.
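As a quick numerical illustration of Proposition 1 (not part of the proof), the following R sketch evaluates Eq. (15) on a grid for one arbitrary choice of f, g, a, and b; the particular functions and constants are assumptions made only for this example.

```r
# Check that h(x, y) in Eq. (15) is non-decreasing in each argument for an
# illustrative choice of f, g, a, and b (any choices satisfying the
# assumptions of Proposition 1 would do).
f <- function(x) plogis(x)            # monotone non-decreasing, values in [0, 1]
g <- function(y) plogis(2 * y - 1)    # another monotone non-decreasing function
a <- 0.3
b <- 0.6

h <- function(x, y) a * f(x) * (1 - g(y)) + b * (1 - f(x)) * g(y) + f(x) * g(y)

x <- seq(-4, 4, length.out = 200)
y <- seq(-4, 4, length.out = 200)
H <- outer(x, y, h)                   # rows index x, columns index y

all(diff(H) >= 0)             # TRUE: non-decreasing in x for each fixed y
all(apply(H, 1, diff) >= 0)   # TRUE: non-decreasing in y for each fixed x
```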
Proof of Proposition 2
Let \(f(x, y) = x(1-y)\) with \(0< x \le y < 1\). We show f is strictly concave with global maximum \(f(1/2, 1/2) = 1/4\).
A sufficient condition for f to be strictly concave is that \(\varvec{ u}' H \, \varvec{ u} < 0\), where \(H = \left( {\begin{matrix} 0 & -1 \\ -1 & 0 \end{matrix}} \right) \) is the Hessian of f and \(\varvec{u} = (u_1, u_2)\) is in the domain of f. The quadratic form reduces to \(q = -2\, u_1 u_2\), and the \(u_i\) are strictly positive, so \(q < 0\).
The global maximum can be found by applying the Karush–Kuhn–Tucker (KKT) conditions for constrained optimization as follows (see, e.g., Boyd & Vandenberghe, 2004). The only inequality constraint that is active at the proposed solution is \( g(x, y) = x - y \le 0\), so the Lagrangian and its gradient may be written, respectively, as
$$\begin{aligned} L(x, y, \mu )&= f(x, y) - \mu \, g(x, y) = x (1-y) - \mu (x-y),\\ \nabla L(x, y, \mu )&= \left[ \begin{array}{c} 1 - y - \mu \\ -x + \mu \end{array} \right] . \end{aligned}$$
The KKT conditions state that any local maximum \((x^*, y^*)\) of f must satisfy \(\nabla L(x^*, y^*, \mu ) = \varvec{ 0}\) together with complementary slackness, \(\mu \, g(x^*, y^*) = 0\). Taking \(\mu \ne 0\), so that the constraint is active, these equations give \(\mu = x^* = 1 - y^*\) and \(x^* = y^*\), hence \(y^* = x^* = 1/2\).
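For readers who prefer a numerical check, the following R sketch maximizes \(f(x, y) = x(1-y)\) subject to \(0< x \le y < 1\) using constrOptim(); the starting values are arbitrary, and the result agrees with the KKT solution above.

```r
# Numerical check of Proposition 2: maximize f(x, y) = x * (1 - y) subject to
# 0 < x <= y < 1 by minimizing -f with the adaptive barrier method.
obj <- function(u) u[1] * (1 - u[2])

# Constraints in the form ui %*% u - ci >= 0:  x >= 0,  y <= 1,  y - x >= 0
ui <- rbind(c(1, 0), c(0, -1), c(-1, 1))
ci <- c(0, -1, 0)

fit <- constrOptim(theta = c(0.2, 0.8), f = function(u) -obj(u), grad = NULL,
                   ui = ui, ci = ci)
fit$par      # approximately (0.5, 0.5)
-fit$value   # approximately 0.25
```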
Proof of Proposition 3
Part 1 of the proposition follows directly from the definition of \(\theta _{0}\) and the global maximum of \(\Delta (P_{1}, P_{2})\) derived in Proposition 2.
Part 2 additionally uses the result (from Proposition 2) that \(\Delta (P_{1}, P_2)\) is strictly concave, and the assumption (from Proposition 3) that \(P(\theta )\) is strictly increasing on \({\mathcal {N}}\), which together imply that \(\Delta (u_{12})\) is strictly decreasing in each coordinate of \(u_{12} = (\theta _0 - \theta _1, \theta _2 - \theta _0)\), for \(\theta _{1}, \theta _2 \in {\mathcal {N}}\). The result then follows from writing \(\delta = (\theta _2 - \theta _0) + (\theta _0 - \theta _1)\).
Proof of Proposition 4
Let \(P(z_j) = [1 + \exp \{-z_j\}]^{-1}\) with \(z_j = \alpha (\theta _j - \beta )\) and \(Q(z_j) = 1- P(z_j)\). We show that
$$\begin{aligned} \underset{\beta }{\mathrm{arg \, max}} \; \{P(z_1)\,Q(z_2)\} = (\theta _1 + \theta _2)/2. \end{aligned}$$
First note that
$$\begin{aligned} \frac{\partial }{\partial \beta } P(z_1)\,Q(z_2) = \alpha \, P(z_1)\,Q(z_2)\; [P(z_2)- Q(z_1)]. \end{aligned}$$
Setting this to zero gives
$$\begin{aligned} Q(z_1) = P(z_2) \quad \Leftrightarrow \quad P(-z_1) = P(z_2) \quad \Leftrightarrow \quad \;\; -z_1 = z_2, \end{aligned}$$
(17)
hence there is a single critical point at \(\beta ^* = (\theta _1 + \theta _2)/2\). To show that this is a local maximum, we first find the second derivative,
$$\begin{aligned} \frac{\partial ^2}{\partial \beta ^2} P(z_1)\,Q(z_2) = \alpha ^2 \,P(z_1)\,Q(z_2)\left( [P(z_2) - Q(z_1)]^2 - P(z_1)\,Q(z_1) - P(z_2)\,Q(z_2)\right) , \end{aligned}$$
then use Expression (17) to write
$$\begin{aligned} \left. \frac{\partial ^2}{\partial \beta ^2} P(z_1)\,Q(z_2) \right| _{\beta ^*}&= \alpha ^2 \,P(z_1)^2\left( [Q(z_1)- Q(z_1)]^2 - 2 P(z_1)\,Q(z_1)\right) \\&= -2 \, \alpha ^2 \,P(z_1)^3 \, Q(z_1) < 0. \end{aligned}$$
Since there is only a single critical point and it is a local maximum, it follows that \(\beta ^*\) must also be the global maximum of \(P(z_1)\,Q(z_2)\).
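A brief numerical check of this result is given below; the values of \(\alpha\), \(\theta_1\), and \(\theta_2\) are arbitrary choices made only for illustration.

```r
# Numerical check of Proposition 4: the difficulty beta that maximizes
# P(z1) * Q(z2) is the midpoint of the two abilities.
alpha <- 1.5; theta1 <- -0.7; theta2 <- 1.1    # illustrative values

pq <- function(beta) {
  plogis(alpha * (theta1 - beta)) * (1 - plogis(alpha * (theta2 - beta)))
}

optimize(pq, interval = c(-5, 5), maximum = TRUE)$maximum   # approximately 0.2
(theta1 + theta2) / 2                                       # 0.2
```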
Proof of Proposition 5
Using the same notation as above, let \(z^*_j = \alpha (\theta _j - \beta ^*)\). We show that \(P(z_1^*)\, Q(z_2^*)\) is monotone non-increasing in \(\alpha \) as follows:
$$\begin{aligned} \frac{\partial }{\partial \alpha } P(z_1^*)\, Q(z_2^*) = \frac{\partial }{\partial \alpha } [P(z_1^*)]^2 = 2\,(\theta _1 - \beta ^*) \, [P(z_1^*)]^2\, Q(z_1^*) \le 0. \end{aligned}$$
The first equality uses Expression (17), and the inequality follows since \(\theta _1 \le \beta ^*\) by choice of subscripts \(j = 1, 2\).
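The monotonicity in \(\alpha\) can likewise be checked numerically; the ability values in the following R sketch are illustrative.

```r
# Numerical check of Proposition 5: at beta* = (theta1 + theta2) / 2,
# P(z1*) * Q(z2*) does not increase as alpha grows.
theta1 <- -0.7; theta2 <- 1.1
beta_star <- (theta1 + theta2) / 2

pq_star <- function(alpha) {
  plogis(alpha * (theta1 - beta_star)) *
    (1 - plogis(alpha * (theta2 - beta_star)))
}

alpha <- seq(0.1, 5, by = 0.1)
all(diff(sapply(alpha, pq_star)) <= 0)   # TRUE: non-increasing in alpha
```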
Proof of Proposition 6
Using the same notation as above, the result
$$\begin{aligned} \alpha = \frac{2}{\delta }\, \ln \frac{1 - \sqrt{D}}{\sqrt{D}}. \end{aligned}$$
follows from solving the following equalities for \(\alpha \):
$$\begin{aligned} D = \Delta ^*(\theta _{12}) = P(z_1^*)\, Q(z_2^*) = [P(z_1^*)]^2. \end{aligned}$$
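Since \(z_1^* = \alpha (\theta_1 - \beta^*) = -\alpha \delta / 2\), the algebra can be verified numerically: starting from arbitrary values of \(\alpha\) and \(\delta\), compute \(D = [P(z_1^*)]^2\) and confirm that the displayed formula returns the original \(\alpha\).

```r
# Numerical check of Proposition 6: the formula recovers alpha from D and delta.
alpha <- 1.5; delta <- 1.8                   # illustrative values
D <- plogis(-alpha * delta / 2)^2            # D = [P(z1*)]^2 at beta*

(2 / delta) * log((1 - sqrt(D)) / sqrt(D))   # returns 1.5 = alpha
```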
Proof of Proposition 7
Part 1 of the proposition requires computing the Fisher information of a, which is obtained by writing the Bernoulli density of \(Y_i \in \{0, 1\}\) as \(f(Y_i \mid \zeta ) = R_i^{y_i}\, (1-R_i)^ {(1-y_i)}\) with \(R_i\) defined as in Eq. (9):
$$\begin{aligned} R_{i} = P_{i1}\,P_{i2} + a\, P_{i1}\,Q_{i2} + b\, Q_{i1}\,P_{i2}. \end{aligned}$$
Part 2 uses the result (from Proposition 3) that \(\Delta _i = P_{i1}Q_{i2} = 1/4\) if and only if \(\theta _1 = \theta _2\). Then \(P_{i1} = P_{i2}\) and \(R_{i} = P_{i1}^2 + (a + b) P_{i1}\,Q_{i1}\), which shows that the value of \(R_{i}\) is not affected by exchanging the values of a and b.
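The symmetry in a and b noted in Part 2 can also be verified directly; the common response probability and the weights in the following R sketch are arbitrary illustrative values.

```r
# When theta1 = theta2, so that P_i1 = P_i2 = P, Eq. (9) gives
# R_i = P^2 + (a + b) * P * (1 - P), which is unchanged when a and b are swapped.
R <- function(P1, P2, a, b) P1 * P2 + a * P1 * (1 - P2) + b * (1 - P1) * P2

P <- 0.65                       # common response probability for the dyad
R(P, P, a = 0.2, b = 0.7)       # 0.62725
R(P, P, a = 0.7, b = 0.2)       # identical value
```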
Estimating Equations
This section provides equations for ML and MAP estimation of the one-parameter RSC model. Referring to Sects. 2.1 and 2.2, let \(\theta _r\) and \(\varvec{X}_{r}\) denote the latent trait and response pattern, respectively, for respondent r. The group response vector is denoted as \(\varvec{Y}\), and \(v = \text {logit}(w)\) is the logit of the weight from the one-parameter RSC model in Eq. (14). We let \(P_{ir} = P_{i}(\theta _r)\) denote the IRF for item i on an individual assessment, and \(R_{j} = R_j(\varvec{u})\) denote the group IRF for item j on a group assessment, for \(\varvec{u} = (\theta _1, \theta _2, v)\). Estimation using the equations outlined in this section is implemented in the R package scirt available at www.github.com/peterhalpin/scirt.
Using the local independence assumptions for individual and group assessments, the log-likelihood of interest is
$$\begin{aligned} \ell (\varvec{u} \mid \varvec{X}_{1}, \varvec{X}_{2}, \varvec{Y}) = \sum _i \ell (\theta _1 \mid X_{i1}) + \sum _i \ell (\theta _2 \mid X_{i2}) + \sum _j \ell (\varvec{u} \mid {Y}_{j}) \end{aligned}$$
(18)
where
$$\begin{aligned} \ell (\theta _r \mid X_{ir}) = x_{ir}\, \ln (P_{ir}) + (1-x_{ir})\, \ln (1 - P_{ir}) \end{aligned}$$
and
$$\begin{aligned} \ell (\varvec{u} \mid {Y}_{j}) = y_{j}\, \ln (R_{j}) + (1-y_{j})\, \ln (1 - R_{j}). \end{aligned}$$
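To make these expressions concrete, the following R sketch evaluates Eq. (18) for a single dyad. It assumes two-parameter logistic IRFs and the one-parameter RSC group IRF with \(a = b = w = \mathrm{logistic}(v)\); the item parameters, response patterns, and function names (irf, group_irf, loglik_u) are hypothetical and are not the scirt implementation.

```r
# Log-likelihood in Eq. (18) for one dyad, under hypothetical 2PL items and the
# one-parameter RSC group IRF R_j = P_j1 P_j2 + w (P_j1 Q_j2 + Q_j1 P_j2).
irf <- function(theta, alpha, beta) plogis(alpha * (theta - beta))

group_irf <- function(theta1, theta2, v, alpha, beta) {
  P1 <- irf(theta1, alpha, beta)
  P2 <- irf(theta2, alpha, beta)
  w  <- plogis(v)
  P1 * P2 + w * (P1 * (1 - P2) + (1 - P1) * P2)
}

loglik_u <- function(u, x1, x2, y, alpha_ind, beta_ind, alpha_grp, beta_grp) {
  P1 <- irf(u[1], alpha_ind, beta_ind)      # u = (theta1, theta2, v)
  P2 <- irf(u[2], alpha_ind, beta_ind)
  R  <- group_irf(u[1], u[2], u[3], alpha_grp, beta_grp)
  sum(dbinom(x1, 1, P1, log = TRUE)) +      # sum_i l(theta1 | x_i1)
    sum(dbinom(x2, 1, P2, log = TRUE)) +    # sum_i l(theta2 | x_i2)
    sum(dbinom(y,  1, R,  log = TRUE))      # sum_j l(u | y_j)
}

# Hypothetical item parameters and simulated responses for one dyad
alpha_ind <- rep(1.2, 10); beta_ind <- seq(-2, 2, length.out = 10)
alpha_grp <- rep(1.2, 10); beta_grp <- seq(-2, 2, length.out = 10)
set.seed(1)
x1 <- rbinom(10, 1, irf(0.5, alpha_ind, beta_ind))
x2 <- rbinom(10, 1, irf(-0.2, alpha_ind, beta_ind))
y  <- rbinom(10, 1, group_irf(0.5, -0.2, 1, alpha_grp, beta_grp))

loglik_u(c(0.5, -0.2, 1), x1, x2, y, alpha_ind, beta_ind, alpha_grp, beta_grp)
```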
Methods for estimating \(\theta _r\) via \(\ell (\theta _r \mid X_{ir})\) are well known (e.g., Baker & Kim, 2004), so we focus on estimation of v via \(\ell = \sum _j \ell (\varvec{u} \mid {Y}_{j})\). Its gradient is
$$\begin{aligned} \nabla \ell = \frac{\partial }{\partial \varvec{u}} \ell&= \sum _j m_{j} \left[ \frac{\partial }{\partial \theta _1} R_{j} \quad \frac{\partial }{\partial \theta _2} R_{j} \quad \frac{\partial }{\partial v} R_{j} \right] ^T \end{aligned}$$
(19)
where
$$\begin{aligned} m_j = \frac{y_{j}}{R_j} - \frac{1 - y_{j}}{1 - R_j}. \end{aligned}$$
Letting \(P'_{jr} = \frac{\partial }{\partial \theta _r} P_{jr}\) and \(w' = \frac{\partial }{\partial v} w\) for \(w = \mathrm{logistic}(v)\), the derivatives of the group IRFs in Eq. (9) can be written as
$$\begin{aligned} \frac{\partial }{\partial \theta _r} R_{j} = (w + (1-2w)\,P_{js})\; P'_{jr} \end{aligned}$$
and
$$\begin{aligned} \frac{\partial }{\partial v} R_{j} = (P_{jr}Q_{js} + Q_{jr}P_{js}) \; w'. \end{aligned}$$
Let \(H(\ell ) = \{h_{rs}\}\) denote the Hessian of \(\ell \), with elements given by
$$\begin{aligned} h_{rs} = \frac{\partial ^2}{\partial u_r \partial u_s} \ell = \sum _j \left[ m_{j} \; \frac{\partial ^2}{\partial u_r \partial u_s} R_{j} - n_{j} \; \frac{\partial }{\partial u_r} R_{j} \; \frac{\partial }{\partial u_s} R_{j} \right] \quad \quad r, s = 1, 2, 3 \end{aligned}$$
(20)
with
$$\begin{aligned} n_{j} = \frac{y_{j}}{R_{j}^2} + \frac{1- y_{j}}{(1 - R_{j})^2}. \end{aligned}$$
Also let \(P''_{jr} = \frac{\partial }{\partial \theta _r} P'_{jr}\) and \(w'' = \frac{\partial }{\partial v} w'\). Then the necessary second derivatives are
$$\begin{aligned} \frac{\partial ^2}{\partial \theta _r^2} R_{j}&= (w + (1-2w)\,P_{js})\; P''_{jr}\\ \frac{\partial ^2}{\partial \theta _r \partial \theta _s} R_{j}&= (1-2w) \, P'_{jr}\,P'_{js}\\ \frac{\partial ^2}{\partial \theta _r \partial v} R_{j}&= (1 - 2P_{js}) P'_{jr}\, w' \\ \frac{\partial ^2}{\partial v^2} R_{j}&= (P_{jr}Q_{js} + Q_{jr}P_{js}) \; w''. \end{aligned}$$
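These derivatives translate directly into code. The following sketch assumes 2PL IRFs (so that \(P'_{jr} = \alpha_j P_{jr} Q_{jr}\) and \(P''_{jr} = \alpha_j^2 P_{jr} Q_{jr} (Q_{jr} - P_{jr})\)) and the one-parameter model; the function name is illustrative rather than scirt's, and the closing example call reuses the hypothetical group items and responses from the previous sketch.

```r
# Gradient (Eq. (19)) and Hessian (Eq. (20)) contributions of the group
# responses, coded directly from the first and second derivatives listed above.
group_grad_hess <- function(u, y, alpha, beta) {
  theta1 <- u[1]; theta2 <- u[2]; v <- u[3]
  P1 <- plogis(alpha * (theta1 - beta)); Q1 <- 1 - P1
  P2 <- plogis(alpha * (theta2 - beta)); Q2 <- 1 - P2
  w  <- plogis(v)
  R  <- P1 * P2 + w * (P1 * Q2 + Q1 * P2)
  m  <- y / R - (1 - y) / (1 - R)
  n  <- y / R^2 + (1 - y) / (1 - R)^2

  dP1 <- alpha * P1 * Q1; d2P1 <- alpha^2 * P1 * Q1 * (Q1 - P1)   # 2PL P', P''
  dP2 <- alpha * P2 * Q2; d2P2 <- alpha^2 * P2 * Q2 * (Q2 - P2)
  dw  <- w * (1 - w);     d2w  <- w * (1 - w) * (1 - 2 * w)       # w', w''

  # First derivatives of R_j with respect to (theta1, theta2, v)
  dR <- cbind((w + (1 - 2 * w) * P2) * dP1,
              (w + (1 - 2 * w) * P1) * dP2,
              (P1 * Q2 + Q1 * P2) * dw)
  grad <- colSums(m * dR)

  # Second derivatives of R_j, ordered (1,1), (2,2), (1,2), (1,3), (2,3), (3,3)
  d2R <- list((w + (1 - 2 * w) * P2) * d2P1,
              (w + (1 - 2 * w) * P1) * d2P2,
              (1 - 2 * w) * dP1 * dP2,
              (1 - 2 * P2) * dP1 * dw,
              (1 - 2 * P1) * dP2 * dw,
              (P1 * Q2 + Q1 * P2) * d2w)
  idx <- rbind(c(1, 1), c(2, 2), c(1, 2), c(1, 3), c(2, 3), c(3, 3))

  H <- matrix(0, 3, 3)
  for (k in seq_len(nrow(idx))) {
    r <- idx[k, 1]; s <- idx[k, 2]
    H[r, s] <- H[s, r] <- sum(m * d2R[[k]] - n * dR[, r] * dR[, s])
  }
  list(gradient = grad, hessian = H)
}

# Example call using the hypothetical group items and responses defined above
group_grad_hess(c(0.5, -0.2, 1), y, alpha_grp, beta_grp)
```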
ML estimation of v can proceed using Eqs. (18) through (20) and the provided derivatives, with standard errors computed by inverting either the observed or expected information matrix (the negative Hessian or its expectation). In the latter case, the terms \(m_{j}\) vanish under expectation, and the standard errors can be obtained using only the first-order derivatives of the individual and group IRFs.
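One way to carry this out numerically, as a sketch rather than the scirt implementation, is to pass the negative of Eq. (18) to optim() and take standard errors from the inverse of the numerically differentiated observed information; the analytic derivatives above (combined with the standard 2PL derivatives for the individual responses) could instead be supplied through the gr argument. The sketch below reuses loglik_u() and the hypothetical data defined after Eq. (18).

```r
# ML estimation of u = (theta1, theta2, v) for one dyad by minimizing the
# negative of Eq. (18); standard errors from the observed information.
neg_loglik <- function(u) {
  -loglik_u(u, x1, x2, y, alpha_ind, beta_ind, alpha_grp, beta_grp)
}
fit <- optim(par = c(0, 0, 0), fn = neg_loglik, method = "BFGS", hessian = TRUE)

u_hat <- fit$par                         # estimates of (theta1, theta2, v)
se    <- sqrt(diag(solve(fit$hessian)))  # SEs: inverse observed information
w_hat <- plogis(u_hat[3])                # estimated weight w
```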
In order to demonstrate the identification of the weight w, we assume \(\theta _1\) and \(\theta _2\) are known and compute the second derivative of a single item's contribution to \(\ell \) with respect to v:
$$\begin{aligned} \frac{\partial ^2}{\partial v^2} \ell (\varvec{u} \mid {Y}_{j}) = m_{j} \, (P_{jr}Q_{js} + Q_{jr}P_{js}) \; w'' -n_{j} \; [(P_{jr}Q_{js} + Q_{jr}P_{js}) \; w']^2. \end{aligned}$$
(21)
Setting \(v = w\), so that \(w' = 1\) and \(w'' = 0\), the first term vanishes. The second term is non-positive and equals zero only if \(P_{jr} = 0\) or \(P_{jr} = 1\) for both \(r = 1\) and \(r = 2\). Demonstrating identification for \(v = \text {logit}(w)\) is less straightforward, but an asymptotic argument shows that \(E(m_{j}) = 0\), in which case the item information again reduces to the second term in Eq. (21).
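As a small illustration of this argument under the identity link (\(v = w\), so \(w' = 1\) and \(w'' = 0\)), the expected per-item information for w is the second term of Eq. (21) with \(n_j\) replaced by its expectation \(1/[R_j(1 - R_j)]\), which follows from \(E(y_j) = R_j\). The probabilities in the following sketch are hypothetical.

```r
# Expected per-item information for w under the identity link (v = w):
# I_j(w) = (P_j1 Q_j2 + Q_j1 P_j2)^2 / (R_j (1 - R_j)).
item_info_w <- function(P1, P2, w) {
  cross <- P1 * (1 - P2) + (1 - P1) * P2
  R <- P1 * P2 + w * cross
  cross^2 / (R * (1 - R))
}
item_info_w(P1 = 0.60, P2 = 0.30, w = 0.5)   # about 1.18: w is identified
item_info_w(P1 = 0.99, P2 = 0.99, w = 0.5)   # about 0.04: little information
```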
When considering MAP rather than ML estimation, the log-likelihood in Eq. (18) is replaced by the log of the posterior distribution of \(\varvec{u}\),
$$\begin{aligned} p(\varvec{u} \mid \varvec{X}_{1}, \varvec{X}_{2}, \varvec{Y}) \propto p(\varvec{X}_{1}, \varvec{X}_{2}, \varvec{Y} \mid \varvec{u}) \times p(\varvec{u}). \end{aligned}$$
(22)
As described in the main paper, we assume that \(p(\varvec{u}) = \prod _k p(u_k)\) with \(\theta _r \thicksim N(0, 1)\) and \(v \thicksim N(0, \sigma _v)\). MAP estimation of w proceeds by using
$$\begin{aligned} \nabla \ell + \frac{\partial }{\partial \varvec{u}} \ln p(\varvec{u}) \quad \mathrm{and}\quad H(\ell ) + \frac{\partial ^2}{\partial \varvec{u} \partial \varvec{u}^T} \ln p(\varvec{u}) \end{aligned}$$
in place of Eqs. (19) and (20).
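Continuing the ML sketch above (and again only as an illustration, not the scirt implementation), MAP estimation amounts to adding the log-prior to the objective; the prior standard deviation sigma_v below is a hypothetical choice.

```r
# MAP estimation per Eq. (22): maximize log-likelihood + log-prior, with
# theta_r ~ N(0, 1) and v ~ N(0, sigma_v). Reuses loglik_u() and the data above.
sigma_v <- 2   # hypothetical prior SD for v

neg_log_post <- function(u) {
  -(loglik_u(u, x1, x2, y, alpha_ind, beta_ind, alpha_grp, beta_grp) +
      dnorm(u[1], 0, 1, log = TRUE) +         # theta1 ~ N(0, 1)
      dnorm(u[2], 0, 1, log = TRUE) +         # theta2 ~ N(0, 1)
      dnorm(u[3], 0, sigma_v, log = TRUE))    # v ~ N(0, sigma_v)
}

map <- optim(c(0, 0, 0), neg_log_post, method = "BFGS", hessian = TRUE)
map$par                          # MAP estimates of (theta1, theta2, v)
sqrt(diag(solve(map$hessian)))   # curvature-based standard errors
```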