Appendix 1
The SL copula
For notational simplicity only, we will consider the case that ω is a scalar.
In the SL model, \(\left[ {\begin{array}{*{20}{c}} {u^ \ast } \\ \omega \end{array}} \right]\sim N\left( {0,{\Sigma}} \right)\) where \({\Sigma} = \left[ {\begin{array}{*{20}{c}} {\sigma _u^2} & {{\Sigma}_{\omega u}} \\ {{\Sigma}_{\omega u}} & {\sigma _\omega ^2} \end{array}} \right]\). Then u = |u*|.
The joint density of u* and ω, say g(u*,ω), is the bivariate normal density of N(0,Σ). The joint density of u and ω is then
$$\begin{array}{l}h\left( {u,\omega } \right) = g\left( {u,\omega } \right) + g\left( { - u,\omega } \right)\\ \qquad\quad \, = \frac{1}{{2\pi }}\left| {\Sigma} \right|^{ - 1/2}\exp \left[ { - \frac{1}{2}\left( {u,\omega } \right){\Sigma}^{ - 1}\left( {\begin{array}{*{20}{c}} u \\ \omega \end{array}} \right)} \right] \\\qquad\quad + \frac{1}{{2\pi }}\left| {\Sigma} \right|^{ - 1/2}\exp \left[ { - \frac{1}{2}\left( { - u,\omega } \right){\Sigma}^{ - 1}\left( {\begin{array}{*{20}{c}} { - u} \\ \omega \end{array}} \right)} \right]\end{array}$$
as given in Schmidt and Lovell (1980) (Eq. (A.12)).
Now define \({\Sigma}_ \ast = \left[ {\begin{array}{*{20}{c}} {\sigma _u^2} & { - {\Sigma}_{\omega u}} \\ { - {\Sigma}_{\omega u}} & {\sigma _\omega ^2} \end{array}} \right]\). It is easy to verify that |Σ*| = |Σ| and that \(\left( { - u,\omega } \right){\Sigma}^{ - 1}\left( {\begin{array}{*{20}{c}} { - u} \\ \omega \end{array}} \right) = \left( {u,\omega } \right){\Sigma}_ \ast ^{ - 1}\left( {\begin{array}{*{20}{c}} u \\ \omega \end{array}} \right)\). Therefore
$$\begin{array}{*{20}{c}} {h\left( {u,\omega } \right) = \frac{1}{{2\pi }}\left| {\Sigma} \right|^{ - 1/2}{\mathrm{exp}}\left[ { - \frac{1}{2}\left( {u,\omega } \right){\Sigma}^{ - 1}\left( {\begin{array}{*{20}{c}} u \\ \omega \end{array}} \right)} \right]} & {^{\prime\prime} {\mathrm{term}}\,1^{\prime\prime} } \end{array}$$
$$\begin{array}{*{20}{c}} { + \frac{1}{{2\pi }}\left| {{\Sigma}_ \ast } \right|^{ - 1/2}{\mathrm{exp}}\left[ { - \frac{1}{2}\left( {u,\omega } \right){\Sigma}_ \ast ^{ - 1}\left( {\begin{array}{*{20}{c}} u \\ \omega \end{array}} \right)} \right]} & {^{\prime\prime} {\mathrm{term}}\,2^{\prime\prime} } \end{array}$$
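The two identities used in this step, |Σ*| = |Σ| and the equality of the two quadratic forms, can be checked numerically. A minimal Python sketch, with illustrative (assumed) values for σu, σω, and Σωu:

```python
import numpy as np

# Illustrative (assumed) parameter values
sig_u, sig_w, sig_uw = 1.0, 1.5, 0.6
S = np.array([[sig_u**2, sig_uw], [sig_uw, sig_w**2]])         # Sigma
S_star = np.array([[sig_u**2, -sig_uw], [-sig_uw, sig_w**2]])  # Sigma_*: off-diagonal sign flipped

# |Sigma_*| = |Sigma|
assert np.isclose(np.linalg.det(S_star), np.linalg.det(S))

# (-u, w) Sigma^{-1} (-u, w)' = (u, w) Sigma_*^{-1} (u, w)' at an arbitrary point
u, w = 0.7, -1.2
lhs = np.array([-u, w]) @ np.linalg.inv(S) @ np.array([-u, w])
rhs = np.array([u, w]) @ np.linalg.inv(S_star) @ np.array([u, w])
assert np.isclose(lhs, rhs)
```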
To calculate the copula, we now need to divide h(u,ω) by the product of the marginal densities of u and ω, that is, by
$$\frac{2}{{\sqrt {2\pi } }}\,\frac{1}{{\sigma _u}}\exp \left( { - \frac{1}{{2\sigma _u^2}}u^2} \right) \cdot \frac{1}{{\sqrt {2\pi } }}\frac{1}{{\sigma _\omega }}\exp \left( { - \frac{1}{{2\sigma _\omega ^2}}\omega ^2} \right).$$
Carrying out this division, the first term above (“term 1”) becomes, by standard algebra used in the derivation of the normal copula,
$$\frac{1}{2}\left| R \right|^{ - 1/2}\exp \left[ { - \frac{1}{2}\left( {u,\omega } \right)\left( {R^{ - 1} - I} \right)\left( {\begin{array}{*{20}{c}} u \\ \omega \end{array}} \right)} \right] \quad {\mathrm{where}}\,\,R = \left[ {\begin{array}{*{20}{c}} 1 & \rho \\ \rho & 1 \end{array}} \right]$$
$$= \frac{1}{2}\left( {1 - \rho ^2} \right)^{ - \frac{1}{2}}\exp \left[ { - \frac{1}{2}\left( {1 - \rho ^2} \right)^{ - 1}\left( {\rho ^2u^2 + \rho ^2\omega ^2 - 2\rho u\omega } \right)} \right]$$
which is one-half times the normal copula with parameter ρ. Similarly, the second term above (“term 2”) becomes one-half times the normal copula with parameter −ρ.
Appendix 2
Proofs of Results for the APS copulas
Proof of Result 1 We have a copula of the form c(w1,w2) = 1 + g(w1)h(w2), where \({\int}_0^1 {g\left( s \right)ds} = {\int}_0^1 {h\left( s \right)ds = 0}\), and specifically where \(h\left( {w_2} \right) = 1 - k_q^{ - 1}q\left( {w_2} \right)\) with \(k_q = {\int}_0^1 {q\left( s \right)ds}\). Define \(G\left( {w_1} \right) = {\int}_0^{w_1} {g\left( s \right)ds}\), \(H\left( {w_2} \right) = {\int}_0^{w_2} {h\left( s \right)ds}\), \(Q\left( {w_2} \right) = {\int}_0^{w_2} {q\left( s \right)ds}\), \(G^ \ast = {\int}_0^1 {G\left( {w_1} \right)dw_1}, \,H^ \ast = {\int}_0^1 {H\left( {w_2} \right)dw_2}\), and \(Q^ \ast = {\int}_0^1 {Q\left( {w_2} \right)dw_2}\), and note that H(w2) = w2 − Q(w2) and kq = Q(1). A general result for Sarmanov copulas (Rodriguez-Lallena and Ubeda-Flores 2004) is that cov(w1,w2) = G*H*. The value of G* is θ/6 when g(w1) = θ(1 − 2w1), but this does not feature in the proof, which simply establishes that H* = 0.
To show that H* = 0, we use the symmetry of q(s) around s = 1/2, which implies that Q(s) = Q(1) − Q(1 − s) for s > 1/2. Therefore \(Q^ \ast = {\int}_0^1 {Q\left( {w_2} \right)dw_2} = {\int}_0^{1/2} {Q\left( {w_2} \right)dw_2} + {\int}_{1/2}^1 {\left[ {Q\left( 1 \right) - Q\left( {1 - w_2} \right)} \right]dw_2} = {\int}_0^{1/2} {Q\left( {w_2} \right)dw_2} + \frac{1}{2}Q\left( 1 \right) - {\int}_{1/2}^1 {Q\left( {1 - w_2} \right)dw_2} = \frac{1}{2}Q\left( 1 \right) = \frac{1}{2}k_q\). Then \(H^ \ast = \frac{1}{2} - k_q^{ - 1}Q^ \ast = \frac{1}{2} - k_q^{ - 1}\left( {\frac{1}{2}k_q} \right) = 0\), which implies that cov(w1,w2) = 0. ☐
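The key step Q* = kq/2 (and hence H* = 0) can be verified numerically for any q that is symmetric around 1/2. A Python sketch using the illustrative choice q(s) = (s − 1/2)², for which kq = 1/12:

```python
import numpy as np

q = lambda s: (s - 0.5) ** 2           # symmetric around 1/2; k_q = 1/12 for this choice

s = np.linspace(0.0, 1.0, 200001)
ds = s[1] - s[0]

def trap(y):                           # trapezoidal rule on the uniform grid
    return ds * (y.sum() - 0.5 * (y[0] + y[-1]))

k_q = trap(q(s))                       # k_q = Q(1)
# Q(w2) by cumulative trapezoidal integration, then Q* = int_0^1 Q(w2) dw2
Q = np.concatenate([[0.0], np.cumsum(0.5 * (q(s)[1:] + q(s)[:-1]) * ds)])
Q_star = trap(Q)

assert np.isclose(k_q, 1 / 12)
assert np.isclose(Q_star, 0.5 * k_q)   # Q* = k_q/2, hence H* = 1/2 - Q*/k_q = 0
```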
Proof of Result 2
$$\begin{array}{l}E\left[ {w_1q\left( {w_2} \right)} \right] = {\int} {{\int} {w_1q\left( {w_2} \right)dw_1dw_2} + } \left[ {\theta {\int} {w_1\left( {1 - 2w_1} \right)dw_1} } \right]\\ \cdot \left[ {{\int} {q\left( {w_2} \right)\left( {1 - k_q^{ - 1}q\left( {w_2} \right)} \right)} dw_2} \right].\end{array}$$
Here all integrals are from zero to one.
The first term on the r.h.s. of this equation is \({\int} {w_1dw_1} \cdot {\int} {q\left( {w_2} \right)dw_2} = \frac{1}{2}Q\left( 1 \right) = \frac{1}{2}k_q\).
The first term in brackets following the “+” sign equals \(\theta \left( {\frac{1}{2} - \frac{2}{3}} \right) = - \frac{1}{6}\theta\).
The second term in brackets equals
$$\begin{array}{l}{\int} {q\left( {w_2} \right)dw_2 - } k_q^{ - 1}{\int} {\,} q\left( {w_2} \right)^2dw_2 = k_q - k_q^{ - 1}Eq\left( {w_2} \right)^2\\ = k_q - k_q^{ - 1}\left[ {{\mathrm{var}}\left( {q\left( {w_2} \right)} \right) + k_q^2} \right] = - k_q^{ - 1}{\mathrm{var}}\left( {q\left( {w_2} \right)} \right)\end{array}$$
Combining terms, \(E \left[ {w_1q\left( {w_2} \right)} \right] =\frac{1}{2}k_q + \frac{1}{6}\theta k_q^{ - 1}{\mathrm{var}}\left( {q\left( {w_2} \right)} \right)\) and therefore
$${\mathop{\rm{cov}}} \left( {w_1,q\left( {w_2} \right)} \right) = \frac{1}{6}\theta k_q^{ - 1}{\mathrm{var}}\left( {q\left( {w_2} \right)} \right).\quad \square$$
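Result 2 can also be checked by direct two-dimensional quadrature under the APS-2 copula density. A Python sketch, assuming the illustrative values θ = 0.4 and q(w2) = (w2 − 1/2)² (so kq = 1/12 and var(q(w2)) = 1/180):

```python
import numpy as np

theta, k_q = 0.4, 1 / 12                # assumed illustrative values
q = lambda s: (s - 0.5) ** 2            # APS-2-A kernel

n = 1001
w = np.linspace(0, 1, n)
dw = w[1] - w[0]
W1, W2 = np.meshgrid(w, w, indexing="ij")
c = 1 + theta * (1 - 2 * W1) * (1 - q(W2) / k_q)    # APS-2 copula density

wts = np.ones(n); wts[0] = wts[-1] = 0.5            # trapezoid weights
WT = np.outer(wts, wts) * dw * dw
E = lambda F: (F * WT).sum()                        # expectation under the copula density

cov = E(W1 * q(W2) * c) - E(W1 * c) * E(q(W2) * c)
var_q = 1 / 80 - (1 / 12) ** 2                      # var(q(w2)) = 1/180 here
assert np.isclose(cov, theta / 6 * var_q / k_q, atol=1e-6)   # = theta/90
```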
Proof of Result 3 We have the APS-2 copula \(c( {w_1,w_2} ) = 1 + \theta ( {1 - 2w_1} )( {1 - k_q^{ - 1}q( {w_2} )} )\) where \(w_1 = F_u\left( u \right)\) and \(w_2 = F_\omega \left( \omega \right)\). For notational simplicity only, suppose that \(E\left( u \right) = E\left( \omega \right) = 0\). (Otherwise we just have to do the analysis below in terms of deviations from means.) Then
$${\mathrm{cov}}\left( {u,\omega } \right) = E\left( {u\omega } \right) = {\int}_{\! - \infty }^\infty {{\int}_{\! - \infty }^\infty u\omega {f_u\left( u \right)f_\omega \left( \omega \right)c\left( {F_u\left( u \right),\,F_\omega \left( \omega \right)} \right)du\,d\omega } }$$
$$\begin{array}{*{20}{c}} { = \displaystyle{\int}_{ - \infty }^\infty {{\int}_{ - \infty }^\infty {u\omega f_u\left( u \right)f_\omega \left( \omega \right)du\,d\omega } } } & {\left[ {{\mathrm{term}}\,1} \right]} \end{array}$$
$$\begin{array}{*{20}{c}} { + \theta \displaystyle{\int}_0^\infty {{\int}_0^\infty {u\omega f_u\left( u \right)f_\omega \left( \omega \right)\left( {1 - 2w_1} \right)\left( {1 - k_q^{ - 1}q\left( {w_2} \right)} \right)\,du\,d\omega } } } & {\left[ {{\mathrm{term}}\,{\mathrm{2}}} \right]} \end{array}$$
$$\begin{array}{*{20}{c}} { + \theta \displaystyle{\int}_{ - \infty }^0 {{\int}_{ - \infty }^0 {u\omega f_u\left( u \right)f_\omega \left( \omega \right)\left( {1 - 2w_1} \right)\left( {1 - k_q^{ - 1}q\left( {w_2} \right)} \right)\,du\,d\omega } } } & {\left[ {{\mathrm{term}}\,{\mathrm{3}}} \right]} \end{array}$$
$$\begin{array}{*{20}{c}} { + \theta \displaystyle{\int}_{ - \infty }^0 {{\int}_0^\infty {u\omega f_u\left( u \right)f_\omega \left( \omega \right)\left( {1 - 2w_1} \right)\left( {1 - k_q^{ - 1}q\left( {w_2} \right)} \right)\,du\,d\omega } } } & {\left[ {{\mathrm{term}}\,{\mathrm{4}}} \right]} \end{array}$$
$$\begin{array}{*{20}{c}} { + \theta \displaystyle{\int}_0^\infty {{\int}_{ - \infty }^0 {u\omega f_u\left( u \right)f_\omega \left( \omega \right)\left( {1 - 2w_1} \right)\left( {1 - k_q^{ - 1}q\left( {w_2} \right)} \right)\,du\,d\omega } } } & {\left[ {{\mathrm{term}}\,{\mathrm{5}}} \right]} \end{array}$$
where again, for visual simplicity, w1 = Fu(u) and w2 = Fω(ω).
Term 1 equals zero because E(u) = E(ω) = 0.
Term 2 equals the negative of term 3 (i.e., they sum to zero). Because u and ω are symmetric, fu(u) = fu(−u) and fω(ω) = fω(−ω). Also Fu(u) = 1 − Fu(−u), so that 1 − 2Fu(−u) = −[1 − 2Fu(u)]. Finally, q(s) is symmetric around \(s = \frac{1}{2}\), so that q(s) = q(1 − s) and therefore q(Fω(−ω)) = q(1 − Fω(−ω)) = q(Fω(ω)). This implies that the value of the integrand in term 2 at any pair (u,ω) (e.g., (0.3, 0.4)) is the negative of the value of the integrand in term 3 at the corresponding pair (−u,−ω) (e.g., (−0.3, −0.4)), and thus the two terms sum to zero.
Similarly term 4 and term 5 sum to zero, and then cov(u,ω) = 0. ☐
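The conclusion of Result 3 can be checked numerically when u and ω are (illustratively) taken to be standard normal and q(s) = (s − 1/2)²; by the symmetry argument above, the covariance under the APS-2 copula should vanish. A Python sketch:

```python
import numpy as np
from math import erf, sqrt

theta, k_q = 0.4, 1 / 12                    # assumed illustrative values
q = lambda s: (s - 0.5) ** 2                # symmetric around 1/2
Phi = np.vectorize(lambda x: 0.5 * (1 + erf(x / sqrt(2))))   # standard normal CDF
phi = lambda z: np.exp(-z * z / 2) / np.sqrt(2 * np.pi)      # standard normal density

x = np.linspace(-8, 8, 1201)                # symmetric grid; N(0,1) mass beyond +-8 is negligible
dx = x[1] - x[0]
U, W = np.meshgrid(x, x, indexing="ij")
PU, PW = np.meshgrid(Phi(x), Phi(x), indexing="ij")

c = 1 + theta * (1 - 2 * PU) * (1 - q(PW) / k_q)   # APS-2 copula density at (F_u(u), F_w(w))
integrand = U * W * phi(U) * phi(W) * c

wts = np.ones(x.size); wts[0] = wts[-1] = 0.5
cov = (integrand * np.outer(wts, wts)).sum() * dx * dx
assert abs(cov) < 1e-6                      # cov(u, omega) = 0
```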
Proof of Result 4 Lower tail dependence equals P(w1 ≤ α|w2 ≤ α) = P(w1 ≤ α, w2 ≤ α)/P(w2 ≤ α) = \(\{ {\alpha ^2 + \theta \alpha ( {1 - \alpha } )[ {\alpha - k_q^{ - 1}Q( \alpha )} ]} \}/\alpha = \alpha + \theta ( {1 - \alpha } )[ {\alpha - k_q^{ - 1}Q( \alpha )} ]\) and this goes to zero as α → 0. Upper tail dependence equals P(w1 ≥ α|w2 ≥ α) = P(w1 ≥ α, w2 ≥ α)/P(w2 ≥ α) = \(\{ {( {1 - \alpha } )^2 + \theta \alpha ( {1 - \alpha } )[ {\alpha - k_q^{ - 1}Q( \alpha )} ]} \}/( {1 - \alpha } ) = ( {1 - \alpha } ) + \theta \alpha [ {\alpha - k_q^{ - 1}Q( \alpha )} ]\) and this goes to zero as α → 1. ☐
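The two limits can be illustrated numerically by evaluating the closed-form expressions above along α → 0 and α → 1. A Python sketch using the APS-2-A kernel q(s) = (s − 1/2)², for which Q(α) = [(α − 1/2)³ + 1/8]/3 and kq = 1/12, with an assumed θ = 0.4:

```python
theta, k_q = 0.4, 1 / 12                        # assumed illustrative values
Q = lambda a: ((a - 0.5) ** 3 + 0.125) / 3      # Q(a) for q(s) = (s - 1/2)^2

lower = lambda a: a + theta * (1 - a) * (a - Q(a) / k_q)
upper = lambda a: (1 - a) + theta * a * (a - Q(a) / k_q)

for a in (1e-2, 1e-4, 1e-6):                    # P(w1 <= a | w2 <= a) -> 0 as a -> 0
    assert 0 <= lower(a) < 2 * a
for a in (1 - 1e-2, 1 - 1e-4, 1 - 1e-6):        # P(w1 >= a | w2 >= a) -> 0 as a -> 1
    assert 0 <= upper(a) < 2 * (1 - a)
```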
Proof of Result 5
(i) It is easy to verify that the marginals of c(w1,w2) are uniform, so we just need to verify that c(w1,w2) ≥ 0 for all w1,w2 in the unit square. We have −1 ≤ 1 − 2w1 ≤ 1 and −2 ≤ 1 − 12(w2 − 1/2)² ≤ 1, so −2 ≤ (1 − 2w1)[1 − 12(w2 − 1/2)²] ≤ 2. Therefore c(w1,w2) ≥ 0 if −1/2 ≤ θ ≤ 1/2.
(ii), (iii) Some useful integrals (integrals are from zero to one):
(a) \({\int} {\left( {w_2 - 1/2} \right)^2dw_2 = \frac{1}{{12}}}.\)

(b) \({\int} {\left( {w_2 - 1/2} \right)^4dw_2 = \frac{1}{{80}}}.\)
Therefore \({\mathop{\rm{var}}} \left[ {\left( {w_2 - 1/2} \right)^2} \right] = \frac{1}{{80}} - \left( {\frac{1}{{12}}} \right)^2 = \frac{1}{{180}}\), which is (iii). To establish (ii), use Result 2 to obtain \({\mathop{\rm{cov}}} \left( {w_1,q\left( {w_2} \right)} \right) = \frac{1}{6}\theta \cdot 12 \cdot \frac{1}{{180}} = \frac{1}{{90}}\theta\).
(iv) \({\mathrm{corr}}\,\left( {w_1,q\left( {w_2} \right)} \right) = \frac{{\frac{1}{{90}}\theta }}{{\sqrt {1/12} \sqrt {1/180} }} = \frac{2}{{\sqrt {15} }}\theta\). ☐
Proof of Result 6
(i) Once again it is easy to verify that the marginals of c(w1,w2) are uniform, so we just need to verify that c(w1,w2) ≥ 0 for all w1,w2 in the unit square. We have −1 ≤ 1 − 2w1 ≤ 1 and −1 ≤ 1 − 4|w2 − 1/2| ≤ 1, so −1 ≤ (1 − 2w1)[1 − 4|w2 − 1/2|] ≤ 1. Therefore c(w1,w2) ≥ 0 if −1 ≤ θ ≤ 1.
(ii), (iii) Some useful integrals (integrals are from zero to one):
(a) \({\int} {\left| {w_2 - 1/2} \right|dw_2 = \frac{1}{4}.}\)

(b) \({\int} {\left| {w_2 - 1/2} \right|^2dw_2 = \frac{1}{{12}}}.\)
Therefore \({\mathop{\rm{var}}} \left( {\left| {w_2 - 1/2} \right|} \right) = \frac{1}{{12}} - \left( {\frac{1}{4}} \right)^2 = \frac{1}{{48}}\) which is (iii). Then use Result 2 to obtain \({\mathop{\rm{cov}}} \left( {w_1,q\left( {w_2} \right)} \right) = \frac{1}{6}\theta \cdot 4 \cdot \frac{1}{{48}} = \frac{1}{{72}}\theta\).
(iv) \({\mathrm{corr}}\left( {w_1,q\left( {w_2} \right)} \right) = \frac{{\frac{1}{{72}}\theta }}{{\sqrt {1/12} \sqrt {1/48} }} = \frac{1}{3}\theta\). ☐
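The moment constants in Results 5 and 6 are simple one-dimensional integrals and can be confirmed numerically, along with the implied correlations (which follow from Result 2 together with sd(w1) = √(1/12)). A Python sketch:

```python
import numpy as np

n = 200001                                   # odd, so w = 1/2 is a grid point (kink of |w - 1/2|)
w = np.linspace(0, 1, n); dw = w[1] - w[0]
trap = lambda y: dw * (y.sum() - 0.5 * (y[0] + y[-1]))   # trapezoidal rule

# APS-2-A kernel q(s) = (s - 1/2)^2
qa = (w - 0.5) ** 2
assert np.isclose(trap(qa), 1 / 12)          # integral (a) of Result 5
assert np.isclose(trap(qa ** 2), 1 / 80)     # integral (b) of Result 5
var_a = trap(qa ** 2) - trap(qa) ** 2
assert np.isclose(var_a, 1 / 180)            # (iii) for APS-2-A

# APS-2-B kernel q(s) = |s - 1/2|
qb = np.abs(w - 0.5)
assert np.isclose(trap(qb), 1 / 4)           # integral (a) of Result 6
assert np.isclose(trap(qb ** 2), 1 / 12)     # integral (b) of Result 6
var_b = trap(qb ** 2) - trap(qb) ** 2
assert np.isclose(var_b, 1 / 48)             # (iii) for APS-2-B

# (iv): corr(w1, q(w2)) = [1/(6 k_q)] var(q) / [sd(w1) sd(q)] per unit theta
sd_w1 = np.sqrt(1 / 12)
corr_a = var_a / (6 * (1 / 12) * sd_w1 * np.sqrt(var_a))
corr_b = var_b / (6 * (1 / 4) * sd_w1 * np.sqrt(var_b))
assert np.isclose(corr_a, 2 / np.sqrt(15))
assert np.isclose(corr_b, 1 / 3)
```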
Proof of Result 10 First consider APS-3-A. We have c*(w1,w2,w3) = (c12 − 1) + (c13 − 1) + c23. Here \(c_{12} - 1 = \theta _{12}( {1 - 2w_1} )[ {1 - 12( {w_2 - \frac{1}{2}} )^2} ]\). We have −1 ≤ (1 − 2w1) ≤ 1 and \(- 2 \le 1 - 12\left( {w_2 - \frac{1}{2}} \right)^2 \le 1\), so the smallest that c12 − 1 can be is −2|θ12|. Similarly, the smallest that c13 − 1 can be is −2|θ13|. Finally, the bivariate normal copula \(\left( {1 - \rho ^2} \right)^{ - 1/2}\exp \left[ { - \frac{1}{2}\frac{1}{{\left( {1 - \rho ^2} \right)}}\left( {\rho w_2 - \rho w_3} \right)^2} \right]\) is bounded below by \(\delta = \left( {1 - \rho ^2} \right)^{ - 1/2}\exp \left( { - \frac{1}{2}\frac{{\rho ^2}}{{\left( {1 - \rho ^2} \right)}}} \right)\). So a sufficient condition for c* to be a copula is −2|θ12| − 2|θ13| + δ ≥ 0, or |θ12| + |θ13| ≤ δ/2. To see why this condition is also necessary, note that the lower bound is attainable, for example at w1 = 0, w2 = 1, and w3 = 0. ☐
The proof of the result for APS-3-B follows exactly the same lines so it is omitted.
Appendix 3
Generalization of Result 9 to higher dimensions
Consider the four-dimensional case. We start with two-dimensional copulas c12, c13, c14, c23, c24, and c34. We can construct three-dimensional copulas as in Result 9. We can construct a four-dimensional copula as
$$\begin{array}{l}c^ \ast \left( {w_1,w_2,w_3,w_4} \right) = 1 + \left( {c_{12} - 1} \right) + \left( {c_{13} - 1} \right) + \left( {c_{14} - 1} \right) \\\qquad\qquad\qquad\qquad+ \left( {c_{23} - 1} \right) + \left( {c_{24} - 1} \right) + \left( {c_{34} - 1} \right).\end{array}$$
The implied 3-copulas are as given in Result 9, for example,
$$\begin{array}{l}\displaystyle \int {c^ \ast \left( {w_1,w_2,w_3,w_4} \right)dw_1} = 1 + 0 + 0 + 0 + \left( {c_{23} - 1} \right)\\ \qquad\qquad\qquad\qquad\qquad\,\,+ \left( {c_{24} - 1} \right) + \left( {c_{34} - 1} \right)\\ \qquad\qquad\qquad\qquad\qquad\,\,= c^ \ast \left( {w_2,w_3,w_4} \right).\end{array}$$
The implied 2-copulas are therefore the two-copulas with which we started, for example, c12,c13, etc.
This extends to arbitrary dimensionality d. We can define a d-copula c*(w1,…,wd) using \(\left( {\begin{array}{*{20}{c}} d \\ 2 \end{array}} \right)\) bivariate copulas cij as follows:
$$c^ \ast \left( {w_1, \ldots ,w_d} \right) = 1 + \mathop {\sum}\limits_{1 \le i < j \le d} {\left( {c_{ij} - 1} \right)}.$$
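The marginalization property of this construction (integrating out one variable leaves the lower-dimensional copula of the same form, built from the remaining cij) can be checked numerically in the three-dimensional case. A Python sketch using illustrative FGM-type bivariate densities cij = 1 + θij(1 − 2wi)(1 − 2wj) with assumed θ values:

```python
import numpy as np

c2 = lambda t, a, b: 1 + t * (1 - 2 * a) * (1 - 2 * b)   # FGM-type bivariate copula density

n = 101
w = np.linspace(0, 1, n); dw = w[1] - w[0]
W1, W2, W3 = np.meshgrid(w, w, w, indexing="ij")

# c*(w1,w2,w3) = 1 + sum of (c_ij - 1); illustrative thetas 0.3, -0.2, 0.4
c_star = (1 + (c2(0.3, W1, W2) - 1)
            + (c2(-0.2, W1, W3) - 1)
            + (c2(0.4, W2, W3) - 1))

# Integrate out w1: the result should be the 2-copula c23 we started with
wts = np.ones(n); wts[0] = wts[-1] = 0.5                 # trapezoid weights
marg23 = np.tensordot(wts, c_star, axes=(0, 0)) * dw
assert np.allclose(marg23, c2(0.4, W2[0], W3[0]), atol=1e-10)

# And c* integrates to 1, as a density on the unit cube must
total = np.einsum("i,j,k,ijk->", wts, wts, wts, c_star) * dw ** 3
assert np.isclose(total, 1.0)
```

The trapezoid rule is exact here because each (cij − 1) is linear in the variable being integrated out.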
Returning for purposes of discussion to the four-dimensional case, the function c*(w1,w2,w3,w4) is a copula density if it is non-negative, and there is no issue if we are satisfied with the lower dimensional copulas that it implies. But this may not always be the case. For example, suppose that we have a production frontier system as in Eqs. (1) and (2) but now we have four inputs, so that we have four random errors (u, ω2, ω3, and ω4) instead of three. It might be natural to want (ω2,ω3,ω4) to be trivariate normal, that is, to be marginally normal and to have the trivariate normal copula. But c*(w1,w2,w3,w4) as defined above does not imply a trivariate normal 3-copula, even if c23, c24, and c34 are all bivariate normal copulas.
An alternative construction is as follows. Let co(w2,w3,w4) be a trivariate normal copula, and c12, c13, and c14 be the desired 2-copulas linking w1 to w2, w3, and w4. Then define the 4-copula
$$\begin{array}{l}c^o\left( {w_1,w_2,w_3,w_4} \right) = 1 + \left( {c_{12} - 1} \right) + \left( {c_{13} - 1} \right) + \left( {c_{14} - 1} \right) \\\qquad\qquad\qquad\qquad+ \left( {c^o\left( {w_2,w_3,w_4} \right) - 1} \right).\end{array}$$
If c23,c24, and c34 are all bivariate normal copulas, then the 2-copulas implied by co are the same as those implied by c*, and so are the 3-copulas that involve w1. But the 3-copula for w2, w3, and w4 is different, because co(w1,w2,w3,w4) implies the 3-copula co(w2,w3,w4), whereas c*(w1,w2,w3,w4) implies the 3-copula 1 + (c23 − 1) + (c24 − 1) + (c34 − 1), which is not a trivariate normal copula even if its constituent 2-copulas are all bivariate normal.
Another way to construct a four-copula from lower dimensional copulas is to use a vine copula, as in Joe (1996) and Aas et al. (2009). These require specification of two-dimensional marginal and conditional copulas. In general there are many such vine copulas because they depend on the vine structure and the numbering of the variables. However, in our problem there is arguably a natural structure where the two-dimensional marginals are APS-2 and the conditional copulas are Gaussian. Thus the first variable is the half-normal error, and the remaining three variables are multivariate normal. Because of the multivariate normal assumption, the ordering of the last three variables does not matter. The benefit is that this representation results in a somewhat simpler functional form of the density than other vine representations.