Appendix 1: On the equivalence of the formulation inspired by Fehr and Schmidt (1999) and (5)
Expanding the formulation inspired by Fehr and Schmidt (1999) yields
$$\begin{aligned} \displaystyle \sum \limits _{i=1}^{n}\sum \limits _{j=1}^{n}\left( {\tilde{w}}_{i}-{\tilde{w}}_{j}\right) ^{2}&=\displaystyle \sum \limits _{i=1}^{n}\sum \limits _{j=1}^{n}\left( {\tilde{w}}_{i}^{2}-2{\tilde{w}}_{i}{\tilde{w}}_{j}+{\tilde{w}}_{j}^{2}\right) \\ &=\displaystyle n\cdot \sum \limits _{i=1}^{n}{\tilde{w}}_{i}^{2}-2\cdot \sum \limits _{i=1}^{n}\sum \limits _{j=1}^{n}{\tilde{w}}_{i}{\tilde{w}}_{j}+n\cdot \sum \limits _{j=1}^{n}{\tilde{w}}_{j}^{2}\\ &=\displaystyle n\cdot \sum \limits _{i=1}^{n}{\tilde{w}}_{i}^{2}-2n\bar{w}\cdot \sum \limits _{i=1}^{n}{\tilde{w}}_{i}+n\cdot \sum \limits _{i=1}^{n}{\tilde{w}}_{i}^{2}\\ &=\displaystyle 2n\cdot \left( \sum \limits _{i=1}^{n}{\tilde{w}}_{i}^{2}-\bar{w}\cdot \sum \limits _{i=1}^{n}{\tilde{w}}_{i}\right) \\ &=\displaystyle 2n\cdot \left( \sum \limits _{i=1}^{n}{\tilde{w}}_{i}^{2}-n\bar{w}^{2}\right) . \end{aligned}$$
According to the König–Huygens theorem the term in brackets is equal to \(\sum \nolimits _{i=1}^{n}({\tilde{w}}_{i}-\bar{w})^{2}.\) Hence, we arrive at
$$\begin{aligned} \sum \limits _{i=1}^{n}\sum \limits _{j=1}^{n}\left( {\tilde{w}}_{i}-{\tilde{w}}_{j}\right) ^{2}=2n\cdot \sum \limits _{i=1}^{n}\left( {\tilde{w}}_{i}-\bar{w}\right) ^{2}, \end{aligned}$$
implying that our subsequently employed specification of \(s({\tilde{\mathbf {w}}})\) in (5), i.e., the component of the principal’s utility function that reflects her preferences for equal pay, is proportional, and therefore equivalent, to the variant inspired by Fehr and Schmidt (1999).
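As a numerical sanity check (not part of the derivation), the identity can be verified with a few lines of NumPy; the wage vector below is randomly generated and purely illustrative:

```python
import numpy as np

# Randomly generated wage vector; n = 7 is an arbitrary illustrative choice.
rng = np.random.default_rng(0)
w = rng.normal(size=7)
n = w.size

# Left-hand side: sum_i sum_j (w_i - w_j)^2 over all ordered pairs.
lhs = np.sum((w[:, None] - w[None, :]) ** 2)
# Right-hand side: 2n * sum_i (w_i - w_bar)^2.
rhs = 2 * n * np.sum((w - w.mean()) ** 2)

assert np.isclose(lhs, rhs)
```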
Appendix 2: Derivation of the principal’s expected disutility
First, we note that the specification (5) of \(s({\tilde{\mathbf {w}}})\) results from (4) when \(\frac{1}{n}\cdot {\mathbf {J}}\) is substituted for \({\mathbf {Q}}.\) Since \({\mathbf {I}}-\frac{1}{n}\cdot {\mathbf {J}}=:{\varvec{\Omega }}\) is symmetric as well as idempotent, we can write
$$\begin{aligned} s({\tilde{\mathbf {w}}})=\beta \cdot {\tilde{\mathbf {w}}}^{\prime }{\varvec{\Omega }}{\tilde{\mathbf {w}}}. \end{aligned}$$
(24)
We now use the production function (1) as well as the remuneration function (2) to obtain \({\tilde{\mathbf {w}}}={\mathbf {f}}+{\mathbf {V}}({\mathbf {e}}+{{\tilde{\varvec\upvarepsilon }}}).\) Since we assume all agents to be identical, any optimal contract will assign the same fixed wage component f to each agent. Therefore, we can anticipate \({\mathbf {f}}=f\cdot {\mathbf {i}}.\) Simple algebra reveals \({\mathbf {i}}^{\prime }{\varvec{\Omega }}={\mathbf {0}}^{\prime }\) and hence \({\mathbf {f}}^{\prime }{\varvec{\Omega }}=f\cdot {\mathbf {i}}^{\prime }{\varvec{\Omega }}={\mathbf {0}}^{\prime },\) so that all terms involving \({\mathbf {f}}\) drop out. Thus, (24) becomes
$$\begin{aligned} s({\tilde{\mathbf {w}}})&=\beta \cdot ({\mathbf {e}}^{\prime }+{{\tilde{\varvec\upvarepsilon }}}^{\prime }){\mathbf {V}}^{\prime }{\varvec{\Omega }}{\mathbf {V}}({\mathbf {e}}+{{\tilde{\varvec\upvarepsilon }}})\\ &=\beta \cdot ({\mathbf {e}}^{\prime }{\mathbf {V}}^{\prime }{\varvec{\Omega }}{\mathbf {V}}{\mathbf {e}}+2{\mathbf {e}}^{\prime }{\mathbf {V}}^{\prime }{\varvec{\Omega }}{\mathbf {V}}{{\tilde{\varvec\upvarepsilon }}}+{{\tilde{\varvec\upvarepsilon }}}^{\prime }{\mathbf {V}}^{\prime }{\varvec{\Omega }}{\mathbf {V}}{{\tilde{\varvec\upvarepsilon }}}). \end{aligned}$$
(25)
The first term in brackets, \({\mathbf {e}}^{\prime }{\mathbf {V}}^{\prime }{\varvec{\Omega }}{\mathbf {V}}{\mathbf {e}},\) contains \({\mathbf {V}}{\mathbf {e}},\) which is the vector of expected variable compensation components. In the optimum, all of these expected values are the same (again, because we assume that all agents are identical), so that \({\mathbf {V}}{\mathbf {e}}\) is proportional to \({\mathbf {i}}\) and, since \({\varvec{\Omega }}{\mathbf {i}}={\mathbf {0}},\) we obtain \({\mathbf {e}}^{\prime }{\mathbf {V}}^{\prime }{\varvec{\Omega }}{\mathbf {V}}{\mathbf {e}}=0.\) Furthermore, \(\mathrm{E}({\mathbf {e}}^{\prime }{\mathbf {V}}^{\prime }{\varvec{\Omega }}{\mathbf {V}}{{\tilde{\varvec\upvarepsilon }}})=0\) since \(\mathrm{E}({{\tilde{\varvec\upvarepsilon }}})={\mathbf {0}}.\) Therefore, the expectation of \(s({\tilde{\mathbf {w}}})\) in (25) reduces to
$$\begin{aligned} \mathrm{E}[s({\tilde{\mathbf {w}}})]=\beta \cdot \mathrm{E}({{\tilde{\varvec\upvarepsilon }}}^{\prime }{\mathbf {V}}^{\prime }{\varvec{\Omega }}{\mathbf {V}}{{\tilde{\varvec\upvarepsilon }}}). \end{aligned}$$
(26)
Applying the trace operator to the right-hand side of (26) in conjunction with the linearity of the expectation operator and \({\varvec{\Upsigma }}=\mathrm{E}({{\tilde{\varvec\upvarepsilon }}}{{\tilde{\varvec\upvarepsilon }}}^{\prime })-\mathrm{E}({{\tilde{\varvec\upvarepsilon }}})\mathrm{E}({{\tilde{\varvec\upvarepsilon }}})^{\prime }=\mathrm{E}({{\tilde{\varvec\upvarepsilon }}}{{\tilde{\varvec\upvarepsilon }}}^{\prime })\) yields
$$\begin{aligned} \mathrm{E}[s({\tilde{\mathbf {w}}})] &=\beta \cdot \mathrm{E}[\mathrm{tr}({{\tilde{\varvec\upvarepsilon }}}^{\prime }{\mathbf {V}}^{\prime }{\varvec{\Omega }}{\mathbf {V}}{{\tilde{\varvec\upvarepsilon }}})]=\beta \cdot \mathrm{E}[\mathrm{tr}({\mathbf {V}}^{\prime }{\varvec{\Omega }}{\mathbf {V}}{{\tilde{\varvec\upvarepsilon }}}{{\tilde{\varvec\upvarepsilon }}}^{\prime })]\\ &=\beta \cdot \mathrm{tr}[{\mathbf {V}}^{\prime }{\varvec{\Omega }}{\mathbf {V}}\mathrm{E}({{\tilde{\varvec\upvarepsilon }}}{{\tilde{\varvec\upvarepsilon }}}^{\prime })]=\beta \cdot \mathrm{tr}({\mathbf {V}}^{\prime }{\varvec{\Omega }}{\mathbf {V}}{\varvec{\Upsigma }}). \end{aligned}$$
(27)
Finally, we plug \({\mathbf {I}}-\frac{1}{n}\cdot {\mathbf {J}}={\mathbf {I}}-\frac{1}{n}\cdot {\mathbf {i}}{\mathbf {i}}^{\prime }\) for \({\varvec{\Omega }}\) into the last term of (27) and make use of the trace’s invariance under cyclic permutations:
$$\begin{aligned} \mathrm{E}[s({\tilde{\mathbf {w}}})] &=\beta \cdot \left[ \mathrm{tr}({\mathbf {V}}^{\prime }{\mathbf {V}}{\varvec{\Upsigma }})-\frac{1}{n}\cdot \mathrm{tr}({\mathbf {V}}^{\prime }{\mathbf {i}}{\mathbf {i}}^{\prime }{\mathbf {V}}{\varvec{\Upsigma }})\right] \\ &=\beta \cdot \left[ \mathrm{tr}({\mathbf {V}}{\varvec{\Upsigma }}{\mathbf {V}}^{\prime })-\frac{1}{n}\cdot {\mathbf {i}}^{\prime }{\mathbf {V}}{\varvec{\Upsigma }}{\mathbf {V}}^{\prime }{\mathbf {i}}\right] . \end{aligned}$$
(28)
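As a numerical sanity check of this last step, the purely algebraic identity \(\mathrm{tr}({\mathbf {V}}^{\prime }{\varvec{\Omega }}{\mathbf {V}}{\varvec{\Upsigma }})=\mathrm{tr}({\mathbf {V}}{\varvec{\Upsigma }}{\mathbf {V}}^{\prime })-\frac{1}{n}\cdot {\mathbf {i}}^{\prime }{\mathbf {V}}{\varvec{\Upsigma }}{\mathbf {V}}^{\prime }{\mathbf {i}}\) can be verified in NumPy for an arbitrary \({\mathbf {V}}\) and a randomly generated positive definite covariance matrix (illustrative values only):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
V = rng.normal(size=(n, n))                  # arbitrary share matrix
S = rng.normal(size=(n, n))
Sigma = S @ S.T + n * np.eye(n)              # symmetric positive definite covariance
Omega = np.eye(n) - np.ones((n, n)) / n      # Omega = I - (1/n) J
i = np.ones(n)

lhs = np.trace(V.T @ Omega @ V @ Sigma)
rhs = np.trace(V @ Sigma @ V.T) - i @ V @ Sigma @ V.T @ i / n

assert np.isclose(lhs, rhs)
```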
Appendix 3: Derivation of the optimal contract in cases of individual agent behavior
Appendix 3.1: Optimal shares
To solve the program (9), we maximize the Lagrangian function \(\mathcal {L}=\Phi +\varvec{\uplambda }^{\prime }\varvec{\upvarphi },\) where \(\Phi\) is given by (7), \(\varvec{\upvarphi }\) is given by (3), and \(\varvec{\uplambda }\) is a vector of Lagrange multipliers used to incorporate the PC \({\hat{\varvec{\upvarphi }}}={\mathbf {0}}.\) IC is taken into account by substituting \({\hat{\mathbf {e}}}\) for \({\mathbf {e}}{\text {:}}\)
$$\begin{aligned} \mathcal {L}&={\mathbf {i}}^{\prime }({\mathbf {I}}-{\mathbf {V}}){\hat{\mathbf {e}}}-{\mathbf {i}}^{\prime }{\mathbf {f}}-\beta \cdot \left[ \mathrm{tr}({\mathbf {V}}{\varvec{\Upsigma }}{\mathbf {V}}^{\prime })-\tfrac{1}{n}{\mathbf {i}}^{\prime }{\mathbf {V}}{\varvec{\Upsigma }}{\mathbf {V}}^{\prime }{\mathbf {i}}\right] \\ &\quad +\varvec{\uplambda }^{\prime }\left[ {\mathbf {f}}+{\mathbf {V}}{\hat{\mathbf {e}}}-\Delta ({\hat{\mathbf {e}}}{\hat{\mathbf {e}}}^{\prime }){\mathbf {i}}-\tfrac{\alpha }{2}\Delta ({\mathbf {V}}{\varvec{\Upsigma }}{\mathbf {V}}^{\prime }){\mathbf {i}}\right] . \end{aligned}$$
We now maximize this function with respect to \({\mathbf {f}}\) and \({\mathbf {V}}.\) Since the first derivative of \(\mathcal {L}\) with respect to \({\mathbf {f}}\) is \(-{\mathbf {i}}+\varvec{\uplambda },\) the respective first-order condition is \(\varvec{\uplambda }={\mathbf {i}}.\) Substituting this condition into \(\mathcal {L}\) results in the simplified version
$$\begin{aligned} \mathcal {L}&={\mathbf {i}}^{\prime }{\hat{\mathbf {e}}}-\beta \cdot \left[ \mathrm{tr}({\mathbf {V}}{\varvec{\Upsigma }}{\mathbf {V}}^{\prime })-\tfrac{1}{n}\cdot {\mathbf {i}}^{\prime }{\mathbf {V}}{\varvec{\Upsigma }}{\mathbf {V}}^{\prime }{\mathbf {i}}\right] -{\mathbf {i}}^{\prime }\Delta ({\hat{\mathbf {e}}}{\hat{\mathbf {e}}}^{\prime }){\mathbf {i}}-\tfrac{\alpha }{2}\cdot {\mathbf {i}}^{\prime }\Delta ({\mathbf {V}}{\varvec{\Upsigma }}{\mathbf {V}}^{\prime }){\mathbf {i}}\\ &={\mathbf {i}}^{\prime }{\hat{\mathbf {e}}}-{\hat{\mathbf {e}}}^{\prime }{\hat{\mathbf {e}}}-\left( \frac{\alpha }{2}+\beta \right) \cdot \mathrm{tr}({\mathbf {V}}{\varvec{\Upsigma }}{\mathbf {V}}^{\prime })+\tfrac{\beta }{n}\cdot {\mathbf {i}}^{\prime }{\mathbf {V}}{\varvec{\Upsigma }}{\mathbf {V}}^{\prime }{\mathbf {i}}. \end{aligned}$$
(29)
Since \({\hat{\mathbf {e}}}=\tfrac{1}{2}\cdot \Delta ({\mathbf {V}}){\mathbf {i}}\) according to (8), \({\mathbf {i}}^{\prime }{\hat{\mathbf {e}}}\) can also be written as \(\frac{1}{2}\cdot \mathrm{tr}({\mathbf {V}})\) and \({\hat{\mathbf {e}}}^{\prime }{\hat{\mathbf {e}}}\) equals \(\frac{1}{4}\cdot \mathrm{tr}[\Delta ({\mathbf {V}})\Delta ({\mathbf {V}})].\) Plugging these expressions into (29) yields the final formulation of the Lagrangian function that is maximized with respect to \({\mathbf {V}}{\text {:}}\)
$$\begin{aligned} \mathcal {L}=\tfrac{1}{2}\cdot \mathrm{tr}({\mathbf {V}})-\tfrac{1}{4}\cdot \mathrm{tr}(\Delta ({\mathbf {V}})\Delta ({\mathbf {V}}))-\left( \tfrac{\alpha }{2}+\beta \right) \cdot \mathrm{tr}({\mathbf {V}}{\varvec{\Upsigma }}{\mathbf {V}}^{\prime })+\tfrac{\beta }{n}\cdot {\mathbf {i}}^{\prime }{\mathbf {V}}{\varvec{\Upsigma }}{\mathbf {V}}^{\prime }{\mathbf {i}}. \end{aligned}$$
(30)
The respective gradient \(\nabla {\mathcal {L}},\) i.e., the vector of the first derivatives of \(\mathcal {L}\) with respect to the elements of \({\mathbf {V}},\) evaluates to
$$\begin{aligned} \nabla {\mathcal {L}}=\tfrac{1}{2}\cdot \mathrm{vec}({\mathbf {I}})-\tfrac{1}{2}\cdot \mathrm{vec}[\Delta ({\mathbf {V}})]-(\alpha +2\beta )\cdot \mathrm{vec}({\mathbf {V}}{\varvec{\Upsigma }})+\tfrac{2\beta }{n}\cdot \mathrm{vec}({\mathbf {J}}{\mathbf {V}}{\varvec{\Upsigma }}), \end{aligned}$$
where \(\mathrm{vec}\) is an operator that rearranges the elements of a matrix by stacking its columns into a single column vector. Using the Kronecker product \(\varvec{\otimes },\) we can rewrite \(\mathrm{vec}({\mathbf {V}}{\varvec{\Upsigma }})=({\varvec{\Upsigma }}\varvec{\otimes }{\mathbf {I}})\mathrm{vec}({\mathbf {V}})\) and \(\mathrm{vec}({\mathbf {J}}{\mathbf {V}}{\varvec{\Upsigma }})=({\varvec{\Upsigma }}\varvec{\otimes }{\mathbf {J}})\mathrm{vec}({\mathbf {V}}).\) Furthermore, \(\mathrm{vec}[\Delta ({\mathbf {V}})]={\mathbf {B}}\mathrm{vec}({\mathbf {V}}),\) where \({\mathbf {B}}:=\sum \nolimits _{i=1}^{n}({\mathbf {u}}_{i}\,\varvec{\otimes}\,{\mathbf {u}}_{i})({\mathbf {u}}_{i}^{\prime }\,\varvec{\otimes }\,{\mathbf {u}}_{i}^{\prime })\) and \({\mathbf {u}}_{i}\) is the ith \((n\times 1)\) unit vector. Hence,
$$\begin{aligned} \nabla {\mathcal {L}}=\tfrac{1}{2}\cdot \mathrm{vec}({\mathbf {I}})-\left[ \tfrac{1}{2}\cdot {\mathbf {B}}+(\alpha +2\beta )\cdot {\varvec{\Upsigma }}\,\varvec{\otimes }\,{\mathbf {I}}-\tfrac{2\beta }{n}\cdot {\varvec{\Upsigma }}\,\varvec{\otimes }\,{\mathbf {J}}\right] \mathrm{vec}({\mathbf {V}}). \end{aligned}$$
The first-order condition \(\nabla {\mathcal {L}}={\mathbf {0}}\) is thus equivalent to
$$\begin{aligned} \mathrm{vec}({\mathbf {V}})=({\mathbf {A}}+{\mathbf {B}})^{-1}\mathrm{vec}({\mathbf {I}}), \end{aligned}$$
(31)
where \({\mathbf {A}}:={\varvec{\Upsigma }}\,\varvec{\otimes }\,\left[ (2\alpha +4\beta )\cdot {\mathbf {I}}-\tfrac{4\beta }{n}\cdot {\mathbf {J}}\right] .\) The inverse of \({\mathbf {A}}\) is given by \({{\varvec{\Upsigma }}^{-1}}\,\varvec{\otimes }\,{\mathbf {N}}\) with
$$\begin{aligned} {\mathbf {N}}:=\left[ (2\alpha +4\beta )\cdot {\mathbf {I}}-\tfrac{4\beta }{n}\cdot {\mathbf {J}}\right] ^{-1}=\tfrac{1}{2\alpha +4\beta }\cdot \left( {\mathbf {I}}+\tfrac{2\beta }{n\alpha }\cdot {\mathbf {J}}\right) . \end{aligned}$$
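The vec identities, the diagonal-extraction matrix \({\mathbf {B}},\) and the closed form of \({\mathbf {N}}\) can all be checked numerically; the following NumPy sketch uses illustrative values for \(n,\) \(\alpha,\) and \(\beta\):

```python
import numpy as np

rng = np.random.default_rng(2)
n, alpha, beta = 5, 0.8, 0.3                 # illustrative parameter values
I = np.eye(n); J = np.ones((n, n))
V = rng.normal(size=(n, n))
S = rng.normal(size=(n, n))
Sigma = S @ S.T + n * np.eye(n)              # symmetric positive definite

vec = lambda M: M.reshape(-1, order="F")     # stack columns into one vector

# vec(V Sigma) = (Sigma x I) vec(V) and vec(J V Sigma) = (Sigma x J) vec(V)
assert np.allclose(vec(V @ Sigma), np.kron(Sigma, I) @ vec(V))
assert np.allclose(vec(J @ V @ Sigma), np.kron(Sigma, J) @ vec(V))

# B vec(V) = vec(Delta(V)): B extracts the diagonal of V
B = sum(np.outer(np.kron(I[:, k], I[:, k]), np.kron(I[:, k], I[:, k])) for k in range(n))
assert np.allclose(B @ vec(V), vec(np.diag(np.diag(V))))

# N is the inverse of (2*alpha + 4*beta) I - (4*beta/n) J
N = (I + (2 * beta / (n * alpha)) * J) / (2 * alpha + 4 * beta)
assert np.allclose(N @ ((2 * alpha + 4 * beta) * I - (4 * beta / n) * J), I)
```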
We can now compute the inverse of \({\mathbf {A}}+{\mathbf {B}}{\text {:}}\)
$$\begin{aligned} ({\mathbf {A}}+{\mathbf {B}})^{-1} &={{\mathbf {A}}^{-1}}-\textstyle \sum \limits _{i=1}^{n}\textstyle \sum \limits _{j=1}^{n}\left( {\mathbf {u}}_{i}^{\prime }\varvec{\otimes }{\mathbf {u}}_{i}^{\prime }\right) {\mathbf {T}}\left( {\mathbf {u}}_{j}\varvec{\otimes }{\mathbf {u}}_{j}\right) \cdot {{\mathbf {A}}^{-1}}\left( {\mathbf {u}}_{i}{\mathbf {u}}_{j}^{\prime }\varvec{\otimes }{\mathbf {u}}_{i}{\mathbf {u}}_{j}^{\prime }\right) {{\mathbf {A}}^{-1}}\\ &={{\varvec{\Upsigma }}^{-1}}\varvec{\otimes }{\mathbf {N}}-\textstyle \sum \limits _{i=1}^{n}\textstyle \sum \limits _{j=1}^{n}\left( {\mathbf {u}}_{i}^{\prime }\varvec{\otimes }{\mathbf {u}}_{i}^{\prime }\right) {\mathbf {T}}\left( {\mathbf {u}}_{j}\varvec{\otimes }{\mathbf {u}}_{j}\right) \cdot {{\varvec{\Upsigma }}^{-1}}{\mathbf {u}}_{i}{\mathbf {u}}_{j}^{\prime }{{\varvec{\Upsigma }}^{-1}}\varvec{\otimes }{\mathbf {N}}{\mathbf {u}}_{i}{\mathbf {u}}_{j}^{\prime }{\mathbf {N}}, \end{aligned}$$
where \({\mathbf {T}}:=({\mathbf {I}}+{\mathbf {B}}{{\mathbf {A}}^{-1}}{\mathbf {B}})^{-1}.\) Plugging this into (31) results in
$$\begin{aligned} \mathrm{vec}({\mathbf {V}}) &=\left( {{\varvec{\Upsigma }}^{-1}}\varvec{\otimes }{\mathbf {N}}\right) \mathrm{vec}({\mathbf {I}})\\ &\quad -\textstyle \sum \limits _{i=1}^{n}\textstyle \sum \limits _{j=1}^{n}\left( {\mathbf {u}}_{i}^{\prime }\varvec{\otimes }{\mathbf {u}}_{i}^{\prime }\right) {\mathbf {T}}\left( {\mathbf {u}}_{j}\varvec{\otimes }{\mathbf {u}}_{j}\right) \cdot \left( {{\varvec{\Upsigma }}^{-1}}{\mathbf {u}}_{i}{\mathbf {u}}_{j}^{\prime }{{\varvec{\Upsigma }}^{-1}}\varvec{\otimes }{\mathbf {N}}{\mathbf {u}}_{i}{\mathbf {u}}_{j}^{\prime }{\mathbf {N}}\right) \mathrm{vec}({\mathbf {I}})\\ &=\mathrm{vec}\left( {\mathbf {N}}{{\varvec{\Upsigma }}^{-1}}\right) \\ &\quad -\textstyle \sum \limits _{i=1}^{n}\textstyle \sum \limits _{j=1}^{n}\left( {\mathbf {u}}_{i}^{\prime }\varvec{\otimes }{\mathbf {u}}_{i}^{\prime }\right) {\mathbf {T}}\left( {\mathbf {u}}_{j}\varvec{\otimes }{\mathbf {u}}_{j}\right) \cdot \mathrm{vec}\left( {\mathbf {N}}{\mathbf {u}}_{i}{\mathbf {u}}_{j}^{\prime }{\mathbf {N}}{{\varvec{\Upsigma }}^{-1}}{\mathbf {u}}_{j}{\mathbf {u}}_{i}^{\prime }{{\varvec{\Upsigma }}^{-1}}\right) \\ &=\mathrm{vec}\left( {\mathbf {N}}\left[ {\mathbf {I}}-\textstyle \sum \limits _{i=1}^{n}\textstyle \sum \limits _{j=1}^{n}\left( {\mathbf {u}}_{i}^{\prime }\varvec{\otimes }{\mathbf {u}}_{i}^{\prime }\right) {\mathbf {T}}\left( {\mathbf {u}}_{j}\varvec{\otimes }{\mathbf {u}}_{j}\right) \cdot {\mathbf {u}}_{i}{\mathbf {u}}_{j}^{\prime }{\mathbf {N}}{{\varvec{\Upsigma }}^{-1}}{\mathbf {u}}_{j}{\mathbf {u}}_{i}^{\prime }\right] {{\varvec{\Upsigma }}^{-1}}\right) , \end{aligned}$$
which is equivalent to
$$\begin{aligned} {\mathbf {V}}&={\mathbf {N}}\left[ {\mathbf {I}}-\textstyle \sum \limits _{i=1}^{n}\textstyle \sum \limits _{j=1}^{n}\left( {\mathbf {u}}_{i}^{\prime }\varvec{\otimes }{\mathbf {u}}_{i}^{\prime }\right) {\mathbf {T}}\left( {\mathbf {u}}_{j}\varvec{\otimes }{\mathbf {u}}_{j}\right) \cdot {\mathbf {u}}_{i}{\mathbf {u}}_{j}^{\prime }{\mathbf {N}}{{\varvec{\Upsigma }}^{-1}}{\mathbf {u}}_{j}{\mathbf {u}}_{i}^{\prime }\right] {{\varvec{\Upsigma }}^{-1}}\\ &={\mathbf {N}}\left[ {\mathbf {I}}-\Delta \left( {\varvec{\Upxi }}\Delta \left( {\mathbf {N}}{{\varvec{\Upsigma }}^{-1}}\right) {\mathbf {J}}\right) \right] {{\varvec{\Upsigma }}^{-1}}, \end{aligned}$$
(32)
where
$$\begin{aligned} {\varvec{\Upxi }}:=\textstyle \sum \limits _{i=1}^{n}\textstyle \sum \limits _{j=1}^{n}\left( {\mathbf {u}}_{i}^{\prime }\varvec{\otimes }{\mathbf {u}}_{i}^{\prime }\right) {\mathbf {T}}\left( {\mathbf {u}}_{j}\varvec{\otimes }{\mathbf {u}}_{j}\right) \cdot {\mathbf {u}}_{i}{\mathbf {u}}_{j}^{\prime } &={\mathbf {C}}\left[ {\mathbf {I}}+{\mathbf {B}}\left( {{\varvec{\Upsigma }}^{-1}}\varvec{\otimes }{\mathbf {N}}\right) {\mathbf {B}}\right] ^{-1}{\mathbf {C}}^{\prime }\\ &={\mathbf {C}}\left\{ {\mathbf {I}}+{\mathbf {B}}\left[ \left( {\mathbf {N}}\,\varvec{\odot }\,{{\varvec{\Upsigma }}^{-1}}\right) \,\varvec{\otimes }\,{\mathbf {J}}\right] {\mathbf {B}}\right\} ^{-1}{\mathbf {C}}^{\prime } \end{aligned}$$
and \({\mathbf {C}}:=\sum \nolimits _{i=1}^{n}{\mathbf {u}}_{i}({\mathbf {u}}_{i}^{\prime }\varvec{\otimes }{\mathbf {u}}_{i}^{\prime }),\) where \(\varvec{\odot }\) denotes the entry-wise (or Hadamard) product of two matrices. Plugging this as well as \(\Delta ({\mathbf {N}}{{\varvec{\Upsigma }}^{-1}}){\mathbf {J}}=({\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}}){\mathbf {J}}\) into (32) yields
$$\begin{aligned} {\mathbf {V}}^{in}&={\mathbf {N}}\Delta \left( {\mathbf {I}}-{\mathbf {C}}\left\{ {\mathbf {I}}+{\mathbf {B}}\left[ \left( {\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}}\right) \varvec{\otimes }{\mathbf {J}}\right] {\mathbf {B}}\right\} ^{-1}{\mathbf {C}}^{\prime }\left( {\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}}\right) {\mathbf {J}}\right) {{\varvec{\Upsigma }}^{-1}}\\ &={\mathbf {N}}\Delta \left( {\mathbf {J}}-{\mathbf {J}}\left( {\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}}\right) {\mathbf {C}}\left\{ {\mathbf {I}}+{\mathbf {B}}\left[ \left( {\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}}\right) \varvec{\otimes }{\mathbf {J}}\right] {\mathbf {B}}\right\} ^{-1}{\mathbf {C}}^{\prime }\right) {{\varvec{\Upsigma }}^{-1}}\\ &={\mathbf {N}}\Delta \left( {\mathbf {J}}\left[ {\mathbf {I}}-\left( {\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}}\right) {\mathbf {C}}\left\{ {\mathbf {I}}+{\mathbf {B}}\left[ \left( {\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}}\right) \varvec{\otimes }{\mathbf {J}}\right] {\mathbf {B}}\right\} ^{-1}{\mathbf {C}}^{\prime }\right] \right) {{\varvec{\Upsigma }}^{-1}}\\ &={\mathbf {N}}\Delta \left( {\mathbf {J}}\left[ {\mathbf {I}}+{\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}}\right] ^{-1}\right) {{\varvec{\Upsigma }}^{-1}}. \end{aligned}$$
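The chain of manipulations above can be cross-checked numerically: solving the first-order condition (31) directly and evaluating the final closed form must give the same matrix. A NumPy sketch with illustrative parameters and a randomly generated positive definite \({\varvec{\Upsigma }}\):

```python
import numpy as np

rng = np.random.default_rng(3)
n, alpha, beta = 4, 0.9, 0.4                 # illustrative parameter values
I = np.eye(n); J = np.ones((n, n))
S = rng.normal(size=(n, n))
Sigma = S @ S.T + n * np.eye(n)
Sinv = np.linalg.inv(Sigma)

vec = lambda M: M.reshape(-1, order="F")

# Solve the first-order condition (31): vec(V) = (A + B)^{-1} vec(I)
A = np.kron(Sigma, (2 * alpha + 4 * beta) * I - (4 * beta / n) * J)
B = sum(np.outer(np.kron(I[:, k], I[:, k]), np.kron(I[:, k], I[:, k])) for k in range(n))
V_foc = np.linalg.solve(A + B, vec(I)).reshape((n, n), order="F")

# Closed form: V = N Delta(J [I + N (Hadamard) Sigma^{-1}]^{-1}) Sigma^{-1}
N = (I + (2 * beta / (n * alpha)) * J) / (2 * alpha + 4 * beta)
V_closed = N @ np.diag(np.diag(J @ np.linalg.inv(I + N * Sinv))) @ Sinv

assert np.allclose(V_foc, V_closed)
```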
Appendix 3.2: Optimal objective function value
The principal’s optimal objective function value \(\Phi ^{in}\) coincides with the value of the Lagrangian function (30) evaluated at \({\mathbf {V}}={\mathbf {V}}^{in}.\) Since \({\mathbf {V}}^{in}\) is symmetric, we can write
$$\begin{aligned} \Phi ^{in}=\tfrac{1}{2}\mathrm{tr}\left( {\mathbf {V}}^{in}\right) -\tfrac{1}{4}\mathrm{tr}\left( \Delta \left( {\mathbf {V}}^{in}\right) \Delta \left( {\mathbf {V}}^{in}\right) \right) -\left( \tfrac{\alpha }{2}+\beta \right) \cdot \mathrm{tr}\left( {\mathbf {V}}^{in}{\varvec{\Upsigma }}{\mathbf {V}}^{in}\right) +\tfrac{\beta }{n}{\mathbf {i}}^{\prime }{\mathbf {V}}^{in}{\varvec{\Upsigma }}{\mathbf {V}}^{in}{\mathbf {i}}. \end{aligned}$$
Making use of \(\Delta ({\mathbf {V}}^{in})=\Delta ({\mathbf {J}}[{\mathbf {I}}+{\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}}]^{-1}[{\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}}]),\) the trace of \({\mathbf {V}}^{in}\) evaluates to
$$\begin{aligned} \mathrm{tr}\left( {\mathbf {V}}^{in}\right) ={\mathbf {i}}^{\prime }\Delta \left( {\mathbf {V}}^{in}\right) {\mathbf {i}}={\mathbf {i}}^{\prime }\left( {\mathbf {I}}+{\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}}\right) ^{-1}\left( {\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}}\right) {\mathbf {i}}. \end{aligned}$$
(33)
Furthermore, \(\mathrm{tr}(\Delta ({\mathbf {V}}^{in})\Delta ({\mathbf {V}}^{in}))\) is equivalent to
$$\begin{aligned} {\mathbf {i}}^{\prime }\Delta \left( {\mathbf {V}}^{in}\right) \Delta \left( {\mathbf {V}}^{in}\right) {\mathbf {i}}={\mathbf {i}}^{\prime }\left( {\mathbf {I}}+{\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}}\right) ^{-1}\left( {\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}}\right) \Delta \left( {\mathbf {V}}^{in}\right) {\mathbf {i}} \end{aligned}$$
(34)
and \({\mathbf {V}}^{in}{\varvec{\Upsigma }}{\mathbf {V}}^{in}\) equals \({\mathbf {N}}\Delta ({\mathbf {J}}[{\mathbf {I}}+{\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}}]^{-1}){\mathbf {V}}^{in},\) implying
$$\begin{aligned} \mathrm{tr}\left( {\mathbf {V}}^{in}{\varvec{\Upsigma }}{\mathbf {V}}^{in}\right) &=\mathrm{tr}\left\{ \Delta \left( {\mathbf {J}}\left[ {\mathbf {I}}+{\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}}\right] ^{-1}\right) {\mathbf {V}}^{in}{\mathbf {N}}\right\} \\ &={\mathbf {i}}^{\prime }\Delta \left( {\mathbf {J}}\left[ {\mathbf {I}}+{\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}}\right] ^{-1}\right) \Delta \left( {\mathbf {V}}^{in}{\mathbf {N}}\right) {\mathbf {i}}\\ &={\mathbf {i}}^{\prime }\left( {\mathbf {I}}+{\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}}\right) ^{-1}\Delta \left( {\mathbf {V}}^{in}{\mathbf {N}}\right) {\mathbf {i}} \end{aligned}$$
(35)
as well as
$$\begin{aligned} {\mathbf {i}}^{\prime }{\mathbf {V}}^{in}{\varvec{\Upsigma }}{\mathbf {V}}^{in}{\mathbf {i}}&=\mathrm{tr}\left\{ \Delta \left( {\mathbf {J}}\left[ {\mathbf {I}}+{\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}}\right] ^{-1}\right) {\mathbf {V}}^{in}{\mathbf {J}}{\mathbf {N}}\right\} \\ &={\mathbf {i}}^{\prime }\left( {\mathbf {I}}+{\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}}\right) ^{-1}\Delta \left( {\mathbf {V}}^{in}{\mathbf {J}}{\mathbf {N}}\right) {\mathbf {i}}. \end{aligned}$$
(36)
Plugging (33)–(36) into \(\Phi ^{in},\) we arrive at \(\Phi ^{in}=\frac{1}{4}\cdot {\mathbf {i}}^{\prime }({\mathbf {I}}+{\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}})^{-1}{\mathbf {D}}{\mathbf {i}},\) where
$$\begin{aligned} {\mathbf {D}}&:=\left( {\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}}\right) \left[ 2\cdot {\mathbf {I}}-\Delta \left( {\mathbf {V}}^{in}\right) \right] -(2\alpha +4\beta )\cdot \Delta \left( {\mathbf {V}}^{in}{\mathbf {N}}\right) +\frac{4\beta }{n}\cdot \Delta \left( {\mathbf {V}}^{in}{\mathbf {J}}{\mathbf {N}}\right) \\ &=\left( {\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}}\right) \left[ 2\cdot {\mathbf {I}}-\Delta \left( {\mathbf {V}}^{in}\right) \right] -\Delta \left\{ {\mathbf {V}}^{in}\left[ (2\alpha +4\beta )\cdot {\mathbf {I}}-\frac{4\beta }{n}\cdot {\mathbf {J}}\right] {\mathbf {N}}\right\} \\ &=\left( {\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}}\right) \left[ 2\cdot {\mathbf {I}}-\Delta \left( {\mathbf {V}}^{in}\right) \right] -\Delta \left( {\mathbf {V}}^{in}\right) \\ &=2\cdot \left( {\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}}\right) -\left( {\mathbf {I}}+{\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}}\right) \Delta \left( {\mathbf {V}}^{in}\right) . \end{aligned}$$
Hence,
$$\begin{aligned} \Phi ^{in}&=\frac{1}{4}\cdot {\mathbf {i}}^{\prime }\left[ 2\cdot \left( {\mathbf {I}}+{\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}}\right) ^{-1}\left( {\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}}\right) -\Delta \left( {\mathbf {V}}^{in}\right) \right] {\mathbf {i}}\\ &=\frac{1}{4}\cdot \left[ 2\cdot {\mathbf {i}}^{\prime }\left( {\mathbf {I}}+{\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}}\right) ^{-1}\left( {\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}}\right) {\mathbf {i}}-\mathrm{tr}\left( {\mathbf {V}}^{in}\right) \right] \\ &=\frac{1}{4}\cdot \left\{ 2\cdot \mathrm{tr}\left[ \left( {\mathbf {I}}+{\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}}\right) ^{-1}\Delta \left( {\mathbf {N}}{{\varvec{\Upsigma }}^{-1}}\right) {\mathbf {J}}\right] -\mathrm{tr}\left( {\mathbf {V}}^{in}\right) \right\} \\ &=\frac{1}{4}\cdot \left\{ 2\cdot \mathrm{tr}\left[ {\mathbf {N}}\Delta \left( {\mathbf {J}}\left[ {\mathbf {I}}+{\mathbf {N}}\varvec{\odot }{{\varvec{\Upsigma }}^{-1}}\right] ^{-1}\right) {{\varvec{\Upsigma }}^{-1}}\right] -\mathrm{tr}\left( {\mathbf {V}}^{in}\right) \right\} \\ &=\frac{1}{4}\cdot \mathrm{tr}\left( {\mathbf {V}}^{in}\right) . \end{aligned}$$
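Because the Lagrangian (30) is quadratic in \({\mathbf {V}},\) the relation \(\Phi ^{in}=\frac{1}{4}\cdot \mathrm{tr}({\mathbf {V}}^{in})\) can also be confirmed numerically by evaluating (30) at the closed-form solution; again the parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
n, alpha, beta = 5, 1.1, 0.6                 # illustrative parameter values
I = np.eye(n); J = np.ones((n, n)); i = np.ones(n)
S = rng.normal(size=(n, n))
Sigma = S @ S.T + n * np.eye(n)
Sinv = np.linalg.inv(Sigma)

# Closed-form optimal share matrix V^in from Appendix 3.1
N = (I + (2 * beta / (n * alpha)) * J) / (2 * alpha + 4 * beta)
V = N @ np.diag(np.diag(J @ np.linalg.inv(I + N * Sinv))) @ Sinv

# Evaluate the Lagrangian (30) at V^in
Dv = np.diag(np.diag(V))
Phi = (0.5 * np.trace(V) - 0.25 * np.trace(Dv @ Dv)
       - (alpha / 2 + beta) * np.trace(V @ Sigma @ V.T)
       + (beta / n) * i @ V @ Sigma @ V.T @ i)

assert np.isclose(Phi, 0.25 * np.trace(V))
```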
Appendix 4: Derivation of the optimal contract in cases of coordinated agent behavior
Appendix 4.1: Optimal shares
Exactly as in Appendix 3.1, the Lagrangian function (29) has to be maximized with respect to \({\mathbf {V}}.\) However, \({\hat{\mathbf {e}}}\) is now given by \(\tfrac{1}{2}\cdot {\mathbf {V}}^{\prime }{\mathbf {i}}\) according to (18). Therefore, \({\mathbf {i}}^{\prime }{\hat{\mathbf {e}}}\) now evaluates to \(\frac{1}{2}\cdot {\mathbf {i}}^{\prime }{\mathbf {V}}{\mathbf {i}}\) and \({\hat{\mathbf {e}}}^{\prime }{\hat{\mathbf {e}}}\) equals \(\frac{1}{4}\cdot {\mathbf {i}}^{\prime }{\mathbf {V}}{\mathbf {V}}^{\prime }{\mathbf {i}}.\) Plugging these expressions into (29) results in the final formulation of the Lagrangian function:
$$\begin{aligned} \mathcal {L}=\tfrac{1}{2}\cdot {\mathbf {i}}^{\prime }{\mathbf {V}}{\mathbf {i}}-\tfrac{1}{4}\cdot {\mathbf {i}}^{\prime }{\mathbf {V}}{\mathbf {V}}^{\prime }{\mathbf {i}}-\left( \tfrac{\alpha }{2}+\beta \right) \cdot \mathrm{tr}({\mathbf {V}}{\varvec{\Upsigma }}{\mathbf {V}}^{\prime })+\tfrac{\beta }{n}\cdot {\mathbf {i}}^{\prime }{\mathbf {V}}{\varvec{\Upsigma }}{\mathbf {V}}^{\prime }{\mathbf {i}}. \end{aligned}$$
(37)
The respective gradient \(\nabla {\mathcal {L}}\) can be written as
$$\begin{aligned} \nabla {\mathcal {L}}=\tfrac{1}{2}\cdot \mathrm{vec}({\mathbf {J}})-\tfrac{1}{2}\cdot \mathrm{vec}({\mathbf {J}}{\mathbf {V}})-(\alpha +2\beta )\cdot \mathrm{vec}({\mathbf {V}}{\varvec{\Upsigma }})+\tfrac{2\beta }{n}\cdot \mathrm{vec}({\mathbf {J}}{\mathbf {V}}{\varvec{\Upsigma }}). \end{aligned}$$
Thus, the first-order condition \(\nabla {\mathcal {L}}={\mathbf {0}}\) is equivalent to
$$\begin{aligned} {\mathbf {J}}{\mathbf {V}}+(2\alpha +4\beta )\cdot {\mathbf {V}}{\varvec{\Upsigma }}-\tfrac{4\beta }{n}\cdot {\mathbf {J}}{\mathbf {V}}{\varvec{\Upsigma }}={\mathbf {J}}\iff {\mathbf {J}}{\mathbf {V}}{\mathbf {I}}+{\mathbf {N}}^{-1}{\mathbf {V}}{\varvec{\Upsigma }}={\mathbf {J}}, \end{aligned}$$
with \({\mathbf {N}}\) defined as in Appendix 3.1. Solving the latter equation for \({\mathbf {V}},\) we arrive at
$$\begin{aligned} \mathrm{vec}({\mathbf {V}})=({\mathbf {A}}+{\mathbf {B}})^{-1}\mathrm{vec}({\mathbf {J}}), \end{aligned}$$
(38)
where \({\mathbf {A}}:={\varvec{\Upsigma }}\,\varvec{\otimes }\,{\mathbf {N}}^{-1}\) and \({\mathbf {B}}:={\mathbf {I}}\,\varvec{\otimes }\,{\mathbf {J}}.\) As in Appendix 3.1, the inverse of \({\mathbf {A}}\) is given by \({{\varvec{\Upsigma }}^{-1}}\,\varvec{\otimes }\,{\mathbf {N}}.\) We note that \({\mathbf {B}}\) can be represented in the following way:
$$\begin{aligned} {\mathbf {B}}={\mathbf {B}}_{1}{\mathbf {B}}_{2},\quad \text {where}\quad {\mathbf {B}}_{1}:=\textstyle \sum \limits _{i=1}^{n}\left( {\mathbf {u}}_{i}\,\varvec{\otimes }\,{\mathbf {i}}\right) \left( {\mathbf {u}}_{i}^{\prime }\,\varvec{\otimes }\,{\mathbf {u}}_{i}^{\prime }\right) \quad \text {and}\quad {\mathbf {B}}_{2}:=\textstyle \sum \limits _{i=1}^{n}\left( {\mathbf {u}}_{i}\,\varvec{\otimes }\,{\mathbf {u}}_{i}\right) \left( {\mathbf {u}}_{i}^{\prime }\,\varvec{\otimes }\,{\mathbf {i}}^{\prime }\right) . \end{aligned}$$
As in Appendix 3.1, \({\mathbf {u}}_{i}\) denotes the ith \((n\times 1)\) unit vector. We use matrix \({\mathbf {C}}\) as defined in Appendix 3.1 to formulate the auxiliary matrix
$$\begin{aligned} {\mathbf {T}}:=\left( {\mathbf {I}}+{\mathbf {B}}_{2}{{\mathbf {A}}^{-1}}{\mathbf {B}}_{1}\right) ^{-1}=\left( {\mathbf {I}}+{\mathbf {i}}^{\prime }{\mathbf {N}}{\mathbf {i}}\cdot {\mathbf {C}}^{\prime }{{\varvec{\Upsigma }}^{-1}}{\mathbf {C}}\right) ^{-1}. \end{aligned}$$
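Both the factorization \({\mathbf {B}}={\mathbf {B}}_{1}{\mathbf {B}}_{2}\) and the reduction of \({\mathbf {B}}_{2}{{\mathbf {A}}^{-1}}{\mathbf {B}}_{1}\) used in the definition of \({\mathbf {T}}\) lend themselves to a quick numerical check (NumPy, illustrative values):

```python
import numpy as np

rng = np.random.default_rng(6)
n, alpha, beta = 4, 1.0, 0.5                 # illustrative parameter values
I = np.eye(n); J = np.ones((n, n)); one = np.ones(n)
S = rng.normal(size=(n, n))
Sigma = S @ S.T + n * np.eye(n)
Sinv = np.linalg.inv(Sigma)
N = (I + (2 * beta / (n * alpha)) * J) / (2 * alpha + 4 * beta)

# B1, B2, and C built from the unit vectors u_k
B1 = sum(np.outer(np.kron(I[:, k], one), np.kron(I[:, k], I[:, k])) for k in range(n))
B2 = sum(np.outer(np.kron(I[:, k], I[:, k]), np.kron(I[:, k], one)) for k in range(n))
C = sum(np.outer(I[:, k], np.kron(I[:, k], I[:, k])) for k in range(n))

# Factorization B = B1 B2 with B = I (Kronecker) J
assert np.allclose(B1 @ B2, np.kron(I, J))

# B2 A^{-1} B1 = (i' N i) * C' Sigma^{-1} C, with A^{-1} = Sigma^{-1} (Kronecker) N
Ainv = np.kron(Sinv, N)
assert np.allclose(B2 @ Ainv @ B1, (one @ N @ one) * C.T @ Sinv @ C)
```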
We note that \({\mathbf {T}}\) is symmetric. Now we can compute the inverse of \({\mathbf {A}}+{\mathbf {B}}{\text {:}}\)
$$\begin{aligned} ({\mathbf {A}}+{\mathbf {B}})^{-1} &=\left( {\mathbf {A}}+{\mathbf {B}}_{1}{\mathbf {B}}_{2}\right) ^{-1}={{\mathbf {A}}^{-1}}-{{\mathbf {A}}^{-1}}{\mathbf {B}}_{1}{\mathbf {T}}{\mathbf {B}}_{2}{{\mathbf {A}}^{-1}}\\ &={{\mathbf {A}}^{-1}}-\textstyle \sum \limits _{i=1}^{n}\textstyle \sum \limits _{j=1}^{n}\left( {\mathbf {u}}_{i}^{\prime }\,\varvec{\otimes }\,{\mathbf {u}}_{i}^{\prime }\right) {\mathbf {T}}\left( {\mathbf {u}}_{j}\,\varvec{\otimes }\,{\mathbf {u}}_{j}\right) \cdot {{\mathbf {A}}^{-1}}\left( {\mathbf {u}}_{i}{\mathbf {u}}_{j}^{\prime }\,\varvec{\otimes }\,{\mathbf {J}}\right) {{\mathbf {A}}^{-1}}\\ &={{\varvec{\Upsigma }}^{-1}}\,\varvec{\otimes }\,{\mathbf {N}}-\textstyle \sum \limits _{i=1}^{n}\textstyle \sum \limits _{j=1}^{n}\left( {\mathbf {u}}_{i}^{\prime }\,\varvec{\otimes }\,{\mathbf {u}}_{i}^{\prime }\right) {\mathbf {T}}\left( {\mathbf {u}}_{j}\,\varvec{\otimes }\,{\mathbf {u}}_{j}\right) \cdot {{\varvec{\Upsigma }}^{-1}}{\mathbf {u}}_{i}{\mathbf {u}}_{j}^{\prime }{{\varvec{\Upsigma }}^{-1}}\,\varvec{\otimes }\,{\mathbf {N}}{\mathbf {J}}{\mathbf {N}}. \end{aligned}$$
Plugging this into (38) yields
$$\begin{aligned} \mathrm{vec}({\mathbf {V}}) &=\left( {{\varvec{\Upsigma }}^{-1}}\,\varvec{\otimes }\,{\mathbf {N}}\right) \mathrm{vec}({\mathbf {J}})\\ &\quad -\textstyle \sum \limits _{i=1}^{n}\textstyle \sum \limits _{j=1}^{n}\left( {\mathbf {u}}_{i}^{\prime }\,\varvec{\otimes }\,{\mathbf {u}}_{i}^{\prime }\right) {\mathbf {T}}\left( {\mathbf {u}}_{j}\,\varvec{\otimes }\,{\mathbf {u}}_{j}\right) \cdot \left( {{\varvec{\Upsigma }}^{-1}}{\mathbf {u}}_{i}{\mathbf {u}}_{j}^{\prime }{{\varvec{\Upsigma }}^{-1}}\,\varvec{\otimes }\,{\mathbf {N}}{\mathbf {J}}{\mathbf {N}}\right) \mathrm{vec}({\mathbf {J}})\\ &=\mathrm{vec}\left( {\mathbf {N}}{\mathbf {J}}{{\varvec{\Upsigma }}^{-1}}\right) \\ &\quad -\textstyle \sum \limits _{i=1}^{n}\textstyle \sum \limits _{j=1}^{n}\left( {\mathbf {u}}_{i}^{\prime }\,\varvec{\otimes }\,{\mathbf {u}}_{i}^{\prime }\right) {\mathbf {T}}\left( {\mathbf {u}}_{j}\,\varvec{\otimes }\,{\mathbf {u}}_{j}\right) \cdot \mathrm{vec}\left( {\mathbf {N}}{\mathbf {J}}{\mathbf {N}}{\mathbf {J}}{{\varvec{\Upsigma }}^{-1}}{\mathbf {u}}_{j}{\mathbf {u}}_{i}^{\prime }{{\varvec{\Upsigma }}^{-1}}\right) \\ &=\mathrm{vec}\left( {\mathbf {N}}{\mathbf {J}}\left[ {\mathbf {I}}-\textstyle \sum \limits _{i=1}^{n}\textstyle \sum \limits _{j=1}^{n}\left( {\mathbf {u}}_{i}^{\prime }\,\varvec{\otimes }\,{\mathbf {u}}_{i}^{\prime }\right) {\mathbf {T}}\left( {\mathbf {u}}_{j}\,\varvec{\otimes }\,{\mathbf {u}}_{j}\right) \cdot {\mathbf {N}}{\mathbf {J}}{{\varvec{\Upsigma }}^{-1}}{\mathbf {u}}_{j}{\mathbf {u}}_{i}^{\prime }\right] {{\varvec{\Upsigma }}^{-1}}\right) , \end{aligned}$$
which is equivalent to
$$\begin{aligned} {\mathbf {V}}&={\mathbf {N}}{\mathbf {J}}\left[ {\mathbf {I}}-\textstyle \sum \limits _{i=1}^{n}\textstyle \sum \limits _{j=1}^{n}\left( {\mathbf {u}}_{i}^{\prime }\,\varvec{\otimes }\,{\mathbf {u}}_{i}^{\prime }\right) {\mathbf {T}}\left( {\mathbf {u}}_{j}\,\varvec{\otimes }\,{\mathbf {u}}_{j}\right) \cdot {\mathbf {N}}{\mathbf {J}}{{\varvec{\Upsigma }}^{-1}}{\mathbf {u}}_{j}{\mathbf {u}}_{i}^{\prime }\right] {{\varvec{\Upsigma }}^{-1}}\\ &={\mathbf {N}}{\mathbf {J}}({\mathbf {I}}-{\mathbf {i}}^{\prime }{\mathbf {N}}{\mathbf {i}}\cdot {{\varvec{\Upsigma }}^{-1}}{\mathbf {C}}{\mathbf {T}}{\mathbf {C}}^{\prime }){{\varvec{\Upsigma }}^{-1}}\\ &={\mathbf {N}}{\mathbf {J}}{{\varvec{\Upsigma }}^{-1}}\left[ {\mathbf {I}}-{\mathbf {i}}^{\prime }{\mathbf {N}}{\mathbf {i}}\cdot {\mathbf {C}}\left( {\mathbf {I}}+{\mathbf {i}}^{\prime }{\mathbf {N}}{\mathbf {i}}\cdot {\mathbf {C}}^{\prime }{{\varvec{\Upsigma }}^{-1}}{\mathbf {C}}\right) ^{-1}{\mathbf {C}}^{\prime }{{\varvec{\Upsigma }}^{-1}}\right] \\ &={\mathbf {N}}{\mathbf {J}}{{\varvec{\Upsigma }}^{-1}}\left( {\mathbf {I}}+{\mathbf {i}}^{\prime }{\mathbf {N}}{\mathbf {i}}\cdot {{\varvec{\Upsigma }}^{-1}}\right) ^{-1}. \end{aligned}$$
Considering \({\mathbf {N}}{\mathbf {J}}=\left[ \tfrac{1}{2\alpha +4\beta }\cdot \left( {\mathbf {I}}+\tfrac{2\beta }{n\alpha }\cdot {\mathbf {J}}\right) \right] {\mathbf {J}}=\tfrac{1}{2\alpha }\cdot {\mathbf {J}}\) and \({\mathbf {i}}^{\prime }{\mathbf {N}}{\mathbf {i}}={\mathbf {i}}^{\prime }\left[ \tfrac{1}{2\alpha +4\beta }\cdot \left( {\mathbf {I}}+\tfrac{2\beta }{n\alpha }\cdot {\mathbf {J}}\right) \right] {\mathbf {i}}=\tfrac{n}{2\alpha },\) we eventually arrive at
$$\begin{aligned} {\mathbf {V}}^c=\tfrac{1}{2\alpha }\cdot {\mathbf {J}}{{\varvec{\Upsigma }}^{-1}}\left( {\mathbf {I}}+\tfrac{n}{2\alpha }\cdot {{\varvec{\Upsigma }}^{-1}}\right) ^{-1}. \end{aligned}$$
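As a quick numerical sanity check (not part of the original derivation), the two identities \({\mathbf {N}}{\mathbf {J}}=\tfrac{1}{2\alpha }\cdot {\mathbf {J}}\) and \({\mathbf {i}}^{\prime }{\mathbf {N}}{\mathbf {i}}=\tfrac{n}{2\alpha }\) and the resulting closed form for \({\mathbf {V}}^c\) can be verified for arbitrary parameter values; \(n,\) \(\alpha ,\) \(\beta\) and \({\varvec{\Upsigma }}\) below are illustrative choices:

```python
import numpy as np

# Illustrative parameter values (our choice, not from the paper)
n, alpha, beta = 4, 0.7, 0.3
rng = np.random.default_rng(0)

I = np.eye(n)
J = np.ones((n, n))          # matrix of ones
i = np.ones(n)               # vector of ones
N = (1.0 / (2 * alpha + 4 * beta)) * (I + (2 * beta / (n * alpha)) * J)

NJ = N @ J
iNi = i @ N @ i
assert np.allclose(NJ, J / (2 * alpha))      # N J = (1/(2 alpha)) J
assert np.isclose(iNi, n / (2 * alpha))      # i'N i = n/(2 alpha)

# Substituting both identities into V = N J Sigma^{-1} (I + i'Ni * Sigma^{-1})^{-1}
A = rng.standard_normal((n, n))
Sigma = A @ A.T + n * I                      # arbitrary positive definite Sigma
Sinv = np.linalg.inv(Sigma)
V_general = NJ @ Sinv @ np.linalg.inv(I + iNi * Sinv)
V_c = (1 / (2 * alpha)) * J @ Sinv @ np.linalg.inv(I + (n / (2 * alpha)) * Sinv)
assert np.allclose(V_general, V_c)
```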
Appendix 4.2: Optimal objective function value
The principal’s optimal objective function value \(\Phi ^c\) coincides with the value of the Lagrangian function (37) evaluated at \({\mathbf {V}}={\mathbf {V}}^c.\) Since \({\mathbf {V}}^c\) is symmetric, we can write
$$\begin{aligned} \Phi ^c=\tfrac{1}{2}\cdot {\mathbf {i}}^{\prime }{\mathbf {V}}^c{\mathbf {i}}-\tfrac{1}{4}\cdot {\mathbf {i}}^{\prime }{\mathbf {V}}^c{\mathbf {V}}^c{\mathbf {i}}-\left( \tfrac{\alpha }{2}+\beta \right) \cdot \mathrm{tr}\left( {\mathbf {V}}^c{\varvec{\Upsigma }}{\mathbf {V}}^c\right) +\tfrac{\beta }{n}\cdot {\mathbf {i}}^{\prime }{\mathbf {V}}^c{\varvec{\Upsigma }}{\mathbf {V}}^c{\mathbf {i}} \end{aligned}$$
and observe
$$\begin{aligned} {\mathbf {i}}^{\prime }{\mathbf {V}}^c{\mathbf {i}}=\tfrac{n}{2\alpha }\cdot {\mathbf {i}}^{\prime }{{\varvec{\Upsigma }}^{-1}}\left( {\mathbf {I}}+\tfrac{n}{2\alpha }\cdot {{\varvec{\Upsigma }}^{-1}}\right) ^{-1}{\mathbf {i}} \end{aligned}$$
(39)
as well as
$$\begin{aligned} {\mathbf {i}}^{\prime }{\mathbf {V}}^c{\mathbf {V}}^c{\mathbf {i}}=\left( \tfrac{n}{2\alpha }\right) ^{2}\cdot {\mathbf {i}}^{\prime }{{\varvec{\Upsigma }}^{-1}}\left( {\mathbf {I}}+\tfrac{n}{2\alpha }\cdot {{\varvec{\Upsigma }}^{-1}}\right) ^{-1}\left( {\mathbf {I}}+\tfrac{n}{2\alpha }\cdot {{\varvec{\Upsigma }}^{-1}}\right) ^{-1}{{\varvec{\Upsigma }}^{-1}}{\mathbf {i}}. \end{aligned}$$
(40)
The product \({\mathbf {V}}^c{\varvec{\Upsigma }}{\mathbf {V}}^c\) equals \(\left( \tfrac{1}{2\alpha }\right) ^{2}\cdot {\mathbf {J}}{{\varvec{\Upsigma }}^{-1}}\left( {\mathbf {I}}+\tfrac{n}{2\alpha }\cdot {{\varvec{\Upsigma }}^{-1}}\right) ^{-1}{\varvec{\Upsigma }}\left( {\mathbf {I}}+\tfrac{n}{2\alpha }\cdot {{\varvec{\Upsigma }}^{-1}}\right) ^{-1}{{\varvec{\Upsigma }}^{-1}}{\mathbf {J}},\) implying
$$\begin{aligned} \mathrm{tr}\left( {\mathbf {V}}^c{\varvec{\Upsigma }}{\mathbf {V}}^c\right) &=\left( \tfrac{1}{2\alpha }\right) ^{2}{\mathbf {i}}^{\prime }\Delta \left[ {\mathbf {J}}{{\varvec{\Upsigma }}^{-1}}\left( {\mathbf {I}}+\tfrac{n}{2\alpha }{{\varvec{\Upsigma }}^{-1}}\right) ^{-1}{\varvec{\Upsigma }}\left( {\mathbf {I}}+\tfrac{n}{2\alpha }{{\varvec{\Upsigma }}^{-1}}\right) ^{-1}{{\varvec{\Upsigma }}^{-1}}{\mathbf {J}}\right] {\mathbf {i}}\\ &=\tfrac{n}{(2\alpha )^{2}}\cdot {\mathbf {i}}^{\prime }{{\varvec{\Upsigma }}^{-1}}\left( {\mathbf {I}}+\tfrac{n}{2\alpha }\cdot {{\varvec{\Upsigma }}^{-1}}\right) ^{-1}{\varvec{\Upsigma }}\left( {\mathbf {I}}+\tfrac{n}{2\alpha }\cdot {{\varvec{\Upsigma }}^{-1}}\right) ^{-1}{{\varvec{\Upsigma }}^{-1}}{\mathbf {i}} \end{aligned}$$
(41)
as well as
$$\begin{aligned} {\mathbf {i}}^{\prime }{\mathbf {V}}^c{\varvec{\Upsigma }}{\mathbf {V}}^c{\mathbf {i}}&=\left( \tfrac{n}{2\alpha }\right) ^{2}\cdot {\mathbf {i}}^{\prime }{{\varvec{\Upsigma }}^{-1}}\left( {\mathbf {I}}+\tfrac{n}{2\alpha }\cdot {{\varvec{\Upsigma }}^{-1}}\right) ^{-1}{\varvec{\Upsigma }}\left( {\mathbf {I}}+\tfrac{n}{2\alpha }\cdot {{\varvec{\Upsigma }}^{-1}}\right) ^{-1}{{\varvec{\Upsigma }}^{-1}}{\mathbf {i}}\\ &=n\cdot \mathrm{tr}\left( {\mathbf {V}}^c{\varvec{\Upsigma }}{\mathbf {V}}^c\right) . \end{aligned}$$
(42)
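As a hedged numerical sketch (not part of the original derivation), identity (42) can be spot-checked for arbitrary positive definite \({\varvec{\Upsigma }}\) and illustrative values of \(n\) and \(\alpha {\text {:}}\)

```python
import numpy as np

# Illustrative parameter values (our choice, not from the paper)
n, alpha = 5, 0.9
rng = np.random.default_rng(1)
I, J, i = np.eye(n), np.ones((n, n)), np.ones(n)
A = rng.standard_normal((n, n))
Sigma = A @ A.T + n * I                      # arbitrary positive definite Sigma
Sinv = np.linalg.inv(Sigma)

# V^c as derived above
V_c = (1 / (2 * alpha)) * J @ Sinv @ np.linalg.inv(I + (n / (2 * alpha)) * Sinv)
M = V_c @ Sigma @ V_c
# (42): i'V^c Sigma V^c i = n * tr(V^c Sigma V^c)
assert np.isclose(i @ M @ i, n * np.trace(M))
```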
Plugging (39)–(42) into \(\Phi ^c,\) we arrive at \(\Phi ^c=\frac{n}{8\alpha }\cdot {\mathbf {i}}^{\prime }{{\varvec{\Upsigma }}^{-1}}\left( {\mathbf {I}}+\tfrac{n}{2\alpha }\cdot {{\varvec{\Upsigma }}^{-1}}\right) ^{-1}{\mathbf {D}}{\mathbf {i}}=\frac{1}{4}\cdot {\mathbf {i}}^{\prime }{\mathbf {V}}^c{\mathbf {D}}{\mathbf {i}},\) where
$$\begin{aligned} {\mathbf {D}}&:=2\cdot {\mathbf {I}}-\tfrac{n}{2\alpha }\cdot \left( {\mathbf {I}}+\tfrac{n}{2\alpha }\cdot {{\varvec{\Upsigma }}^{-1}}\right) ^{-1}{{\varvec{\Upsigma }}^{-1}}-{\varvec{\Upsigma }}\left( {\mathbf {I}}+\tfrac{n}{2\alpha }\cdot {{\varvec{\Upsigma }}^{-1}}\right) ^{-1}{{\varvec{\Upsigma }}^{-1}}\\ &=2\cdot {\mathbf {I}}-\left( \tfrac{n}{2\alpha }\cdot {\mathbf {I}}+{\varvec{\Upsigma }}\right) \left( {\mathbf {I}}+\tfrac{n}{2\alpha }\cdot {{\varvec{\Upsigma }}^{-1}}\right) ^{-1}{{\varvec{\Upsigma }}^{-1}}\\ &=2\cdot {\mathbf {I}}-{\varvec{\Upsigma }}\left( {\mathbf {I}}+\tfrac{n}{2\alpha }\cdot {{\varvec{\Upsigma }}^{-1}}\right) \left( {\mathbf {I}}+\tfrac{n}{2\alpha }\cdot {{\varvec{\Upsigma }}^{-1}}\right) ^{-1}{{\varvec{\Upsigma }}^{-1}}\\ &=2\cdot {\mathbf {I}}-{\mathbf {I}}={\mathbf {I}}. \end{aligned}$$
Hence, \(\Phi ^c=\tfrac{1}{4}\cdot {\mathbf {i}}^{\prime }{\mathbf {V}}^c{\mathbf {i}}\) immediately follows.
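The collapse of \({\mathbf {D}}\) to the identity matrix admits a quick numerical check (a sketch with illustrative parameter values, not part of the original derivation):

```python
import numpy as np

# Illustrative parameter values (our choice, not from the paper)
n, alpha = 4, 0.6
rng = np.random.default_rng(2)
I = np.eye(n)
A = rng.standard_normal((n, n))
Sigma = A @ A.T + n * I                # arbitrary positive definite Sigma
Sinv = np.linalg.inv(Sigma)
c = n / (2 * alpha)
K = np.linalg.inv(I + c * Sinv)        # (I + (n/(2 alpha)) Sigma^{-1})^{-1}

# D := 2I - (n/(2 alpha)) K Sigma^{-1} - Sigma K Sigma^{-1}
D = 2 * I - c * K @ Sinv - Sigma @ K @ Sinv
assert np.allclose(D, I)               # D collapses to the identity
```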
Appendix 5: Calculations for our robustness checks
We replace \(\beta\) with \(\beta _{P}\) and measure the strength of the agents’ social preferences by \(\beta _{A}.\) The principal’s goal function then is:
$$\begin{aligned} \Phi ={\mathbf {i}}^{\prime }({\mathbf {I}}-{\mathbf {V}}){\mathbf {e}}-{\mathbf {i}}^{\prime }{\mathbf {f}}-\beta _{P}\cdot \left[ \mathrm{tr}({\mathbf {V}}{\varvec{\Upsigma }}{\mathbf {V}}^{\prime })-\tfrac{1}{n}\cdot {\mathbf {i}}^{\prime }{\mathbf {V}}{\varvec{\Upsigma }}{\mathbf {V}}^{\prime }{\mathbf {i}}\right] . \end{aligned}$$
Appendix 5.1: Competitive agents
We define the agents' social preference function as \({\tilde{\mathbf {s}}}=\beta _{A}\cdot \left( \tfrac{1}{n}\cdot {\mathbf {J}}-{\mathbf {I}}\right) {\tilde{\mathbf {w}}},\) which we subtract from \({\tilde{\mathbf {w}}},\) the vector that collects all of the agents' wages. Each component of \({\tilde{\mathbf {s}}}\) is thus proportional to the difference between \(\bar{w}\) and the respective \(w_{i}.\) The vector comprising the agents' goal functions then writes
$$\begin{aligned} \varvec{\upvarphi }&=\mathrm{E}({\tilde{\mathbf {w}}}-{\tilde{\mathbf {s}}})-\Delta ({\mathbf {e}}{\mathbf {e}}^{\prime }){\mathbf {i}}-\tfrac{\alpha }{2}\cdot \Delta (\mathrm{Var}({\tilde{\mathbf {w}}}-{\tilde{\mathbf {s}}})){\mathbf {i}}\\&={\mathbf {f}}+{\mathbf {V}}{\mathbf {e}}-\mathrm{E}({\tilde{\mathbf {s}}})-\Delta ({\mathbf {e}}{\mathbf {e}}^{\prime }){\mathbf {i}}-\tfrac{\alpha }{2}\cdot \Delta (\mathrm{Var}({\tilde{\mathbf {w}}}-{\tilde{\mathbf {s}}})){\mathbf {i}}, \end{aligned}$$
where \(\mathrm{E}({\tilde{\mathbf {s}}})=\beta _{A}\cdot \left( \tfrac{1}{n}\cdot {\mathbf {J}}-{\mathbf {I}}\right) \mathrm{E}({\tilde{\mathbf {w}}}),\) for which \({\mathbf {i}}^{\prime }\mathrm{E}({\tilde{\mathbf {s}}})=0\) holds, and
$$\begin{aligned} \mathrm{Var}({\tilde{\mathbf {w}}}-{\tilde{\mathbf {s}}})&=\mathrm{Var}\left\{ \left[ {\mathbf {I}}-\beta _{A}\cdot \left( \tfrac{1}{n}\cdot {\mathbf {J}}-{\mathbf {I}}\right) \right] {\mathbf {V}}\tilde{\varvec{\upvarepsilon }}\right\} \\&=\left[ {\mathbf {I}}-\beta _{A}\cdot \left( \tfrac{1}{n}\cdot {\mathbf {J}}-{\mathbf {I}}\right) \right] {\mathbf {V}}{\varvec{\Upsigma }}{\mathbf {V}}^{\prime }\left[ {\mathbf {I}}-\beta _{A}\cdot \left( \tfrac{1}{n}\cdot {\mathbf {J}}-{\mathbf {I}}\right) \right] ^{\prime }. \end{aligned}$$
The Lagrangian function (29) then writes
$$\begin{aligned} \mathcal {L}={\mathbf {i}}^{\prime }{\hat{\mathbf {e}}}-{\hat{\mathbf {e}}}^{\prime }{\hat{\mathbf {e}}}-\beta _{P}\cdot \mathrm{tr}({\mathbf {V}}{\varvec{\Upsigma }}{\mathbf {V}}^{\prime })+\tfrac{\beta _{P}}{n}\cdot {\mathbf {i}}^{\prime }{\mathbf {V}}{\varvec{\Upsigma }}{\mathbf {V}}^{\prime }{\mathbf {i}}-\tfrac{\alpha }{2}\cdot \mathrm{tr}[\mathrm{Var}({\tilde{\mathbf {w}}}-{\tilde{\mathbf {s}}})]. \end{aligned}$$
We now have to take a closer look at the expression \(\mathrm{tr}[\mathrm{Var}({\tilde{\mathbf {w}}}-{\tilde{\mathbf {s}}})]{\text {:}}\)
$$\begin{aligned} \mathrm{tr}[\mathrm{Var}({\tilde{\mathbf {w}}}-{\tilde{\mathbf {s}}})]&=\mathrm{tr}\left\{ \left[ {\mathbf {I}}-\beta _{A}\cdot \left( \tfrac{1}{n}\cdot {\mathbf {J}}-{\mathbf {I}}\right) \right] {\mathbf {V}}{\varvec{\Upsigma }}{\mathbf {V}}^{\prime }\left[ {\mathbf {I}}-\beta _{A}\cdot \left( \tfrac{1}{n}\cdot {\mathbf {J}}-{\mathbf {I}}\right) \right] ^{\prime }\right\} \\&=\mathrm{tr}\left\{ \left[ {\mathbf {I}}+\beta _{A}\left( 2+\beta _{A}\right) \left( {\mathbf {I}}-\tfrac{1}{n}\cdot {\mathbf {J}}\right) \right] {\mathbf {V}}{\varvec{\Upsigma }}{\mathbf {V}}^{\prime }\right\} \\&=\mathrm{tr}({\mathbf {V}}{\varvec{\Upsigma }}{\mathbf {V}}^{\prime })+\beta _{A}\left( 2+\beta _{A}\right) \cdot \mathrm{tr}\left[ \left( {\mathbf {I}}-\tfrac{1}{n}\cdot {\mathbf {J}}\right) {\mathbf {V}}{\varvec{\Upsigma }}{\mathbf {V}}^{\prime }\right] \\&=\left[ 1+\beta _{A}\left( 2+\beta _{A}\right) \right] \cdot \mathrm{tr}({\mathbf {V}}{\varvec{\Upsigma }}{\mathbf {V}}^{\prime })-\tfrac{\beta _{A}(2+\beta _{A})}{n}\cdot {\mathbf {i}}^{\prime }{\mathbf {V}}{\varvec{\Upsigma }}{\mathbf {V}}^{\prime }{\mathbf {i}}. \end{aligned}$$
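This trace decomposition can be spot-checked numerically for random \({\mathbf {V}}\) and \({\varvec{\Upsigma }}\) (a hedged sketch with illustrative values, not part of the original derivation), exploiting that \({\mathbf {I}}-\tfrac{1}{n}\cdot {\mathbf {J}}\) is symmetric and idempotent:

```python
import numpy as np

# Illustrative parameter values (our choice, not from the paper)
n, beta_A = 5, 0.4
rng = np.random.default_rng(3)
I, J, i = np.eye(n), np.ones((n, n)), np.ones(n)
V = rng.standard_normal((n, n))
A = rng.standard_normal((n, n))
Sigma = A @ A.T + n * I                 # arbitrary positive definite Sigma

B = I - beta_A * (J / n - I)            # = I + beta_A * (I - J/n)
M = V @ Sigma @ V.T
lhs = np.trace(B @ M @ B.T)             # tr[Var(w - s)]
rhs = (1 + beta_A * (2 + beta_A)) * np.trace(M) \
      - beta_A * (2 + beta_A) / n * (i @ M @ i)
assert np.isclose(lhs, rhs)
```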
The next steps are analogous to the corresponding calculations in Appendix 2 and result in the following representation of the Lagrangian function:
$$\begin{aligned} \mathcal {L}&={\mathbf {i}}^{\prime }{\hat{\mathbf {e}}}-{\hat{\mathbf {e}}}^{\prime }{\hat{\mathbf {e}}}-\left[ \tfrac{\alpha }{2}+\alpha \beta _{A}\left( 1+\tfrac{1}{2}\beta _{A}\right) +\beta _{P}\right] \cdot \mathrm{tr}({\mathbf {V}}{\varvec{\Upsigma }}{\mathbf {V}}^{\prime })+\tfrac{1}{n}\cdot \left[ \alpha \beta _{A}\left( 1+\tfrac{1}{2}\beta _{A}\right) +\beta _{P}\right] \\&\quad\cdot {\mathbf {i}}^{\prime }{\mathbf {V}}{\varvec{\Upsigma }}{\mathbf {V}}^{\prime }{\mathbf {i}}. \end{aligned}$$
Comparing this expression with the one in (29) shows that the new model corresponds to the old one if we define the parameter \(\beta\) in the old model as \(\beta :=\alpha \beta _{A}\left( 1+\tfrac{1}{2}\beta _{A}\right) +\beta _{P}.\) We discuss the implications of this insight in our robustness section in the main text.
Appendix 5.2: Inequity-averse agents
Under the assumptions given in the main text, agent i’s social preference function writes:
$$\begin{aligned} {\tilde{s}}_{A}=\beta _{A}\cdot \left| {\tilde{w}}_{i}-{\tilde{w}}_{j}\right| =\beta _{A}\cdot \left| v_{1}-v_{2}\right| \cdot |\tilde{\varepsilon }|, \end{aligned}$$
with \(\tilde{\varepsilon }:=\tilde{\varepsilon }_{i}-\tilde{\varepsilon }_{j}.\)
The difference \(\tilde{\varepsilon }\) is normally distributed with mean 0 and variance \(2\sigma ^{2}-2\varrho \sigma ^{2}=2(1-\varrho )\sigma ^{2}.\) Concerning \(|\tilde{\varepsilon }|,\) we can calculate \(\mathrm{E}(|\tilde{\varepsilon }|)=2\sqrt{(1-\varrho )/\pi }\cdot \sigma\) as well as
$$\begin{aligned} \mathrm{Var}(|\tilde{\varepsilon }|)&=\mathrm{E}\left( |\tilde{\varepsilon }|^{2}\right) -\mathrm{E}^{2}(|\tilde{\varepsilon }|)=\mathrm{E}\left( \tilde{\varepsilon }^{2}\right) -\mathrm{E}^{2}(|\tilde{\varepsilon }|)\\ &=\mathrm{Var}(\tilde{\varepsilon })+\mathrm{E}^{2}(\tilde{\varepsilon })-\mathrm{E}^{2}(|\tilde{\varepsilon }|)=2(1-2/\pi )(1-\varrho )\sigma ^{2}. \end{aligned}$$
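These two half-normal moments can be verified by simulation (a hedged Monte Carlo sketch with illustrative values of \(\sigma\) and \(\varrho ,\) not part of the original derivation):

```python
import numpy as np

# Illustrative parameter values (our choice, not from the paper)
sigma, rho = 1.3, 0.4
rng = np.random.default_rng(4)
cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
eps_i, eps_j = rng.multivariate_normal([0.0, 0.0], cov, size=1_000_000).T
abs_eps = np.abs(eps_i - eps_j)         # |eps| with eps := eps_i - eps_j

mean_theory = 2 * np.sqrt((1 - rho) / np.pi) * sigma
var_theory = 2 * (1 - 2 / np.pi) * (1 - rho) * sigma**2
assert abs(abs_eps.mean() - mean_theory) < 0.01
assert abs(abs_eps.var() - var_theory) < 0.02
```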
Exploiting \(\hat{e}_{1}=\hat{e}_{2}=\hat{e},\) the familiar goal function of the principal becomes
$$\begin{aligned} \Phi =2\cdot \left( 1-v_{1}-v_{2}\right) \cdot \hat{e}-f_{1}-f_{2}-\beta _{P}\cdot \left( v_{1}-v_{2}\right) ^{2}\cdot (1-\varrho )\sigma ^{2}. \end{aligned}$$
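The inequality penalty in this two-agent goal function follows from the general expression \(\mathrm{tr}({\mathbf {V}}{\varvec{\Upsigma }}{\mathbf {V}}^{\prime })-\tfrac{1}{n}\cdot {\mathbf {i}}^{\prime }{\mathbf {V}}{\varvec{\Upsigma }}{\mathbf {V}}^{\prime }{\mathbf {i}}\) with \(n=2\) and the symmetric contract structure. A hedged numerical sketch with illustrative values (not part of the original derivation):

```python
import numpy as np

# Illustrative parameter values (our choice, not from the paper)
v1, v2, sigma, rho = 0.8, 0.25, 1.1, 0.35
V = np.array([[v1, v2], [v2, v1]])       # symmetric two-agent contract structure
Sigma = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
i = np.ones(2)

M = V @ Sigma @ V.T
# tr(V Sigma V') - (1/2) i'V Sigma V' i reduces to (v1 - v2)^2 (1 - rho) sigma^2
penalty = np.trace(M) - 0.5 * (i @ M @ i)
assert np.isclose(penalty, (v1 - v2)**2 * (1 - rho) * sigma**2)
```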
Similarly, agent i's goal function writes:
$$\begin{aligned} \varphi _{i}=\mathrm{E}\left( {\tilde{w}}_{i}\right) -\mathrm{E}\left( {\tilde{s}}_{A}\right) -\tfrac{1}{2}\alpha \mathrm{Var}\left( {\tilde{w}}_{i}-{\tilde{s}}_{A}\right) -\hat{e}_{i}^{2}. \end{aligned}$$
Exploiting \(\mathrm{E}({\tilde{s}}_{A})=2\beta _{A}\cdot |v_{1}-v_{2}|\cdot \sqrt{(1-\varrho )/\pi }\cdot \sigma\) and
$$\begin{aligned} \mathrm{Var}\left( {\tilde{w}}_{i}-{\tilde{s}}_{A}\right) =\left[ v_{1}^{2}+v_{2}^{2}+2v_{1}v_{2}\varrho +2(1-2/\pi )\beta _{A}^{2}\cdot \left( v_{1}-v_{2}\right) ^{2}\cdot (1-\varrho )\right] \cdot \sigma ^{2}, \end{aligned}$$
we get:
$$\begin{aligned} \varphi _{i}&=f_{i}+v_{1}\hat{e}_{i}+v_{2}\hat{e}_{j}-2\beta _{A}\cdot \left| v_{1}-v_{2}\right| \cdot \sqrt{(1-\varrho )/\pi }\cdot \sigma \\&\quad -\tfrac{1}{2}\alpha \cdot \left[ v_{1}^{2}+v_{2}^{2}+2v_{1}v_{2}\varrho +2(1-2/\pi )\beta _{A}^{2}\left( v_{1}-v_{2}\right) ^{2}\cdot (1-\varrho )\right] \sigma ^{2}-\hat{e}_{i}^{2}. \end{aligned}$$
The Lagrangian function therefore becomes:
$$\begin{aligned} \mathcal {L}&=2\cdot \left( \hat{e}-\hat{e}^{2}\right) -4\beta _{A}\cdot \left| v_{1}-v_{2}\right| \cdot \sqrt{(1-\varrho )/\pi }\cdot \sigma \\&\quad -\left\{ \alpha \cdot \left( v_{1}^{2}+v_{2}^{2}+2v_{1}v_{2}\varrho \right) +\left[ 2(1-2/\pi )\alpha \beta _{A}^{2}+\beta _{P}\right] \cdot \left( v_{1}-v_{2}\right) ^{2}(1-\varrho )\right\} \sigma ^{2}. \end{aligned}$$
In the case of individual agent behavior, the agents' reaction functions are \(\hat{e}_{i}=\frac{v_{1}}{2}.\) Using the abbreviations \(\gamma _{1}:=4\beta _{A}\cdot \sqrt{(1-\varrho )/\pi }\cdot (1+\varrho )\sigma\) and \(\gamma _{2}:=\left[ 2(1-2/\pi )\beta _{A}^{2}+\tfrac{\beta _{P}}{\alpha }\right] \cdot (1-\varrho ),\) we can then write:
$$\begin{aligned} \mathcal {L}=v_{1}-\tfrac{1}{2}v_{1}^{2}-\gamma _{1}/(1+\varrho )\cdot \left| v_{1}-v_{2}\right| -\left[ v_{1}^{2}+v_{2}^{2}+2v_{1}v_{2}\varrho +\gamma _{2}\cdot \left( v_{1}-v_{2}\right) ^{2}\right] \alpha \sigma ^{2}. \end{aligned}$$
If we assume that \(v_{1}\ge v_{2}\) holds, maximizing \(\mathcal {L}\) with respect to \(v_{1}\) and \(v_{2}\) yields
$$\begin{aligned} v_{1}^{in}=\frac{1-\gamma _{1}+\gamma _{2}}{\delta } \quad \mathrm{and}\quad v_{2}^{in}=\frac{-\varrho +\gamma _{1}\cdot \left[ 1+\frac{1}{2\alpha \sigma ^{2}(1+\varrho )}\right] +\gamma _{2}}{\delta }, \end{aligned}$$
where \(\delta :=1+2\alpha \sigma ^{2}\cdot (1-\varrho ^{2})+\gamma _{2}\cdot [1+4\alpha \sigma ^{2}\cdot (1+\varrho )].\) Plugging \(v_{1}^{in}\) and \(v_{2}^{in}\) into the Lagrangian function, we can derive the principal's optimal goal function value:
$$\begin{aligned} \Phi ^{in}=\frac{1-2\gamma _{1}+\frac{\gamma _{1}^{2}}{1+\varrho }\cdot \left[ 2+\frac{1}{2\alpha \sigma ^{2}(1+\varrho )}\right] +\gamma _{2}}{2\delta }. \end{aligned}$$
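As a hedged numerical sketch (illustrative parameter values, not part of the original derivation), one can confirm that the stated \(v_{1}^{in}\) and \(v_{2}^{in}\) satisfy the first-order conditions of the Lagrangian written for \(v_{1}\ge v_{2},\) and that plugging them back reproduces \(\Phi ^{in}{\text {:}}\)

```python
import numpy as np

# Illustrative parameter values (our choice, not from the paper);
# beta_A is kept small so that the interior solution v1 >= v2 applies
alpha, sigma, rho, beta_A, beta_P = 1.2, 0.9, 0.3, 0.05, 0.4

g1 = 4 * beta_A * np.sqrt((1 - rho) / np.pi) * (1 + rho) * sigma   # gamma_1
g2 = (2 * (1 - 2 / np.pi) * beta_A**2 + beta_P / alpha) * (1 - rho)  # gamma_2
a = 2 * alpha * sigma**2
delta = 1 + a * (1 - rho**2) + g2 * (1 + 2 * a * (1 + rho))

v1 = (1 - g1 + g2) / delta
v2 = (-rho + g1 * (1 + 1 / (a * (1 + rho))) + g2) / delta

def L(x1, x2):
    # Lagrangian written for v1 >= v2, so |v1 - v2| = v1 - v2
    return (x1 - 0.5 * x1**2 - g1 / (1 + rho) * (x1 - x2)
            - (x1**2 + x2**2 + 2 * rho * x1 * x2
               + g2 * (x1 - x2)**2) * alpha * sigma**2)

h = 1e-6
assert v1 >= v2                                               # ordering holds
assert abs((L(v1 + h, v2) - L(v1 - h, v2)) / (2 * h)) < 1e-6  # dL/dv1 = 0
assert abs((L(v1, v2 + h) - L(v1, v2 - h)) / (2 * h)) < 1e-6  # dL/dv2 = 0

Phi = (1 - 2 * g1 + g1**2 / (1 + rho) * (2 + 1 / (a * (1 + rho))) + g2) / (2 * delta)
assert np.isclose(Phi, L(v1, v2))                             # closed form matches
```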
We now examine under which condition \(v_{1}^{in}\ge v_{2}^{in}\) indeed holds:
$$\begin{aligned} 1-\gamma _{1}\ge -\varrho +\gamma _{1}\cdot \left[ 1+\frac{1}{2\alpha \sigma ^{2}(1+\varrho )}\right] \iff \gamma _{1}\le \frac{2\alpha \sigma ^{2}(1+\varrho )^{2}}{1+4\alpha \sigma ^{2}(1+\varrho )}, \end{aligned}$$
which, exploiting the definition of \(\gamma _{1},\) we can rewrite as follows:
$$\begin{aligned} \beta _{A}\le \frac{\alpha \sigma (1+\varrho )}{2\cdot [1+4\alpha \sigma ^{2}(1+\varrho )]\sqrt{(1-\varrho )/\pi }}. \end{aligned}$$
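The equivalence of this bound on \(\beta _{A}\) and the preceding bound on \(\gamma _{1}\) can be spot-checked numerically (a hedged sketch with illustrative values, not part of the original derivation):

```python
import numpy as np

# Illustrative parameter values (our choice, not from the paper)
alpha, sigma, rho = 0.8, 1.2, 0.25
asq = alpha * sigma**2

gamma1_bound = 2 * asq * (1 + rho)**2 / (1 + 4 * asq * (1 + rho))
betaA_bound = alpha * sigma * (1 + rho) / (
    2 * (1 + 4 * asq * (1 + rho)) * np.sqrt((1 - rho) / np.pi))

# gamma_1 = 4 beta_A sqrt((1-rho)/pi) (1+rho) sigma, evaluated at beta_A = betaA_bound,
# should exactly attain the bound on gamma_1
gamma1_at_bound = 4 * betaA_bound * np.sqrt((1 - rho) / np.pi) * (1 + rho) * sigma
assert np.isclose(gamma1_at_bound, gamma1_bound)
```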
If we assume that \(v_{1}<v_{2}\) holds, the analogous derivations yield:
$$\begin{aligned} v_{1}=\frac{1+\gamma _{1}+\gamma _{2}}{\delta } \quad \mathrm{and}\quad v_{2}=\frac{-\varrho -\gamma _{1}\cdot \left[ 1+\frac{1}{2\alpha \sigma ^{2}(1+\varrho )}\right] +\gamma _{2}}{\delta }. \end{aligned}$$
A closer look at these expressions shows that \(v_{1}<v_{2}\) would only be possible for sufficiently negative values of the parameter \(\gamma _{1},\) which by definition can only take positive values. Therefore, no interior solution exists when \(\beta _{A}>\alpha \sigma (1+\varrho )/\{2\cdot [1+4\alpha \sigma ^{2}(1+\varrho )]\sqrt{(1-\varrho )/\pi }\}.\)