A new family of copulas, with application to estimation of a production frontier system

Abstract

This paper makes two contributions. The first is to propose a new family of copulas for which the copula arguments are uncorrelated but dependent. Specifically, if w1 and w2 are the uniform random variables in the copula, they are uncorrelated, but w1 is correlated with |w2 − 1/2|. We show how this family of copulas can be applied to the error structure in an econometric production frontier model. The second contribution is to give some general results on how to extend a two-dimensional copula to three or more dimensions. This extension is necessary in our production frontier model when there are multiple inputs, but our results apply more generally to the extension of arbitrary two-dimensional copulas. We also report the results of some simulations and we give an empirical example.

References

  • Aas K, Czado C, Frigessi A, Bakken H (2009) Pair-copula constructions of multiple dependence. Insur Math Econ 44:182–198

  • Aigner DJ, Lovell CAK, Schmidt P (1977) Formulation and estimation of stochastic frontier production function models. J Econom 6:21–37

  • Atkinson SE, Cornwell C (1994) Parametric estimation of technical and allocative inefficiency with panel data. Int Econ Rev 35:231–244

  • Christensen LR, Greene WH (1976) Economies of scale in U.S. electric power generation. J Political Econ 84:655–676

  • Ferrier GD, Lovell CAK (1990) Measuring cost efficiency in banking: econometric and linear programming evidence. J Econom 46:229–245

  • Greene WH (1980) On the estimation of a flexible frontier production model. J Econom 13:101–115

  • Greene WH (2003) Simulated likelihood estimation of the normal-gamma stochastic frontier function. J Prod Anal 19:179–190

  • Joe H (1996) Families of m-variate distributions with given margins and m(m−1)/2 bivariate dependence parameters. In: Rüschendorf L, Schweizer B, Taylor MD (eds) Distributions with fixed marginals and related topics. IMS Lecture Notes Monograph Series, Institute of Mathematical Statistics

  • Kumbhakar SC (1987) The specification of technical and allocative inefficiency in stochastic production and profit frontiers. J Econom 34:335–348

  • Kumbhakar SC (1991) The measurement and decomposition of cost efficiency: the translog cost system. Oxf Econ Pap 43:667–683

  • Kumbhakar SC (1997) Modelling allocative inefficiency in a translog cost function and cost share equations: an exact relationship. J Econom 76:351–356

  • Meeusen W, van den Broeck J (1977) Efficiency estimation from Cobb-Douglas production functions with composed error. Int Econ Rev 18:435–444

  • Nelsen RB (2006) An introduction to copulas, 2nd edn. Springer.

  • Rodriguez-Lallena JA, Ubeda-Flores M (2004) A new class of bivariate copulas. Stat Probab Lett 66:315–325

  • Rungsuriyawiboon S, Stefanou SE (2007) Dynamic efficiency estimation: an application to U.S. electric utilities. J Bus Econ Stat 25:226–238

  • Sarmanov OV (1966) Generalized normal correlation and two-dimensional Frechet classes. Dokl Math 168:596–599

  • Schmidt P, Lovell CAK (1979) Estimating technical and allocative inefficiency relative to stochastic production and cost frontiers. J Econom 9:343–366

  • Schmidt P, Lovell CAK (1980) Estimating stochastic production and cost frontiers when technical and allocative inefficiency are correlated. J Econom 13:83–100

  • Tran KC, Tsionas EG (2015) Endogeneity in stochastic frontier models: copula approach without external instruments. Econ Lett 133:85–88

  • Tsionas EG, Tran KC (2016) On the joint estimation of heterogenous technologies, technical and allocative inefficiency. Econom Rev 35:871–893

Acknowledgements

Artem Prokhorov’s research for this paper was supported by a grant from the Russian Science Foundation (Project No. 20-18-00365).

Author information

Corresponding author

Correspondence to Peter Schmidt.

Appendices

Appendix 1

The SL copula

For notational simplicity only, we will consider the case that ω is a scalar.

In the SL model, \(\left[ {\begin{array}{*{20}{c}} {u^ \ast } \\ \omega \end{array}} \right]\sim N\left( {0,{\Sigma}} \right)\) where \({\Sigma} = \left[ {\begin{array}{*{20}{c}} {\sigma _u^2} & {{\Sigma}_{\omega u}} \\ {{\Sigma}_{\omega u}} & {\sigma _\omega ^2} \end{array}} \right]\). Then u = |u*|.

The joint density of u* and ω, say g(u*,ω), is the bivariate normal density of N(0,Σ). The joint density of u and ω is then

$$\begin{array}{l}h\left( {u,\omega } \right) = g\left( {u,\omega } \right) + g\left( { - u,\omega } \right)\\ \qquad\quad \, = \frac{1}{{2\pi }}\left| {\Sigma} \right|^{ - 1/2}\exp \left[ { - \frac{1}{2}\left( {u,\omega } \right){\Sigma}^{ - 1}\left( {\begin{array}{*{20}{c}} u \\ \omega \end{array}} \right)} \right] \\\qquad\quad + \frac{1}{{2\pi }}\left| {\Sigma} \right|^{ - 1/2}\exp \left[ { - \frac{1}{2}\left( { - u,\omega } \right){\Sigma}^{ - 1}\left( {\begin{array}{*{20}{c}} { - u} \\ \omega \end{array}} \right)} \right]\end{array}$$

as given in Schmidt and Lovell (1980) (Eq. (A.12)).

Now define \({\Sigma}_ \ast = \left[ {\begin{array}{*{20}{c}} {\sigma _u^2} & { - {\Sigma}_{\omega u}} \\ { - {\Sigma}_{\omega u}} & {\sigma _\omega ^2} \end{array}} \right]\). It is easy to verify that |Σ*| = |Σ| and that \(\left( { - u,\omega } \right){\Sigma}^{ - 1}\left( {\begin{array}{*{20}{c}} { - u} \\ \omega \end{array}} \right) = \left( {u,\omega } \right){\Sigma}_ \ast ^{ - 1}\left( {\begin{array}{*{20}{c}} u \\ \omega \end{array}} \right)\). Therefore

$$\begin{array}{*{20}{c}} {h\left( {u,\omega } \right) = \frac{1}{{2\pi }}\left| {\Sigma} \right|^{ - 1/2}\exp \left[ { - \frac{1}{2}\left( {u,\omega } \right){\Sigma}^{ - 1}\left( {\begin{array}{*{20}{c}} u \\ \omega \end{array}} \right)} \right]} & {\left[ {{\mathrm{term}}\,1} \right]} \end{array}$$
$$\begin{array}{*{20}{c}} { + \frac{1}{{2\pi }}\left| {{\Sigma}_ \ast } \right|^{ - 1/2}\exp \left[ { - \frac{1}{2}\left( {u,\omega } \right){\Sigma}_ \ast ^{ - 1}\left( {\begin{array}{*{20}{c}} u \\ \omega \end{array}} \right)} \right]} & {\left[ {{\mathrm{term}}\,2} \right]} \end{array}$$

To calculate the copula, we now need to divide h(u,ω) by the product of the marginal densities of u and ω, that is, by

$$\frac{2}{{\sqrt {2\pi } }}\,\frac{1}{{\sigma _u}}\exp \left( { - \frac{1}{{2\sigma _u^2}}u^2} \right) \cdot \frac{1}{{\sqrt {2\pi } }}\frac{1}{{\sigma _\omega }}\exp \left( { - \frac{1}{{2\sigma _\omega ^2}}\omega ^2} \right).$$

Carrying out this division, the first term above (“term 1”) becomes, by the standard algebra used in the derivation of the normal copula (with u and ω now denoting the standardized values u/σu and ω/σω),

$$\frac{1}{2}\left| R \right|^{ - 1/2}\exp \left[ { - \frac{1}{2}\left( {u,\omega } \right)\left( {R^{ - 1} - I} \right)\left( {\begin{array}{*{20}{c}} u \\ \omega \end{array}} \right)} \right] \quad {\mathrm{where}}\,\,R = \left[ {\begin{array}{*{20}{c}} 1 & \rho \\ \rho & 1 \end{array}} \right]$$
$$= \frac{1}{2}\left( {1 - \rho ^2} \right)^{ - \frac{1}{2}}\exp \left[ { - \frac{1}{2}\left( {1 - \rho ^2} \right)^{ - 1}\left( {\rho ^2u^2 + \rho ^2\omega ^2 - 2\rho u\omega } \right)} \right]$$

which is one-half times the normal copula with parameter ρ. Similarly, the second term above (“term 2”) becomes one-half times the normal copula with parameter −ρ.
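
The two algebraic identities used above, |Σ*| = |Σ| and the equality of the quadratic forms, are easy to confirm numerically. A minimal sketch (the parameter values are hypothetical, chosen only for illustration):

```python
# Numerical check of the identities used above: |Σ*| = |Σ| and
# (-u, ω) Σ^{-1} (-u, ω)' = (u, ω) Σ*^{-1} (u, ω)'.
import numpy as np

sigma_u2, sigma_w2, sigma_wu = 1.5, 2.0, 0.7   # hypothetical parameter values
S = np.array([[sigma_u2, sigma_wu], [sigma_wu, sigma_w2]])
S_star = np.array([[sigma_u2, -sigma_wu], [-sigma_wu, sigma_w2]])

# Determinants agree, so the two terms share the same normalizing constant.
assert np.isclose(np.linalg.det(S), np.linalg.det(S_star))

# Quadratic forms agree at arbitrary (u, ω), so "term 2" is a N(0, Σ*) density.
rng = np.random.default_rng(0)
for u, w in rng.normal(size=(100, 2)):
    q1 = np.array([-u, w]) @ np.linalg.inv(S) @ np.array([-u, w])
    q2 = np.array([u, w]) @ np.linalg.inv(S_star) @ np.array([u, w])
    assert np.isclose(q1, q2)
```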

Appendix 2

Proofs of Results for the APS copulas

Proof of Result 1 We have a copula of the form c(w1,w2) = 1 + g(w1)h(w2), where \({\int}_0^1 {g\left( s \right)ds} = {\int}_0^1 {h\left( s \right)ds = 0}\), and specifically where \(h\left( {w_2} \right) = 1 - k_q^{ - 1}q\left( {w_2} \right)\) with \(k_q = {\int}_0^1 {q\left( s \right)ds}\). Define \(G\left( {w_1} \right) = {\int}_0^{w_1} {g\left( s \right)ds}\), \(H\left( {w_2} \right) = {\int}_0^{w_2} {h\left( s \right)ds}\), \(Q\left( {w_2} \right) = {\int}_0^{w_2} {q\left( s \right)ds}\), \(G^ \ast = {\int}_0^1 {G\left( {w_1} \right)dw_1}, \,H^ \ast = {\int}_0^1 {H\left( {w_2} \right)dw_2}\), and \(Q^ \ast = {\int}_0^1 {Q\left( {w_2} \right)dw_2}\), and note that \(H\left( {w_2} \right) = w_2 - k_q^{ - 1}Q\left( {w_2} \right)\) and kq = Q(1). A general result for Sarmanov copulas (Rodriguez-Lallena and Ubeda-Flores 2004) is that cov(w1,w2) = G*H*. The value of G* is θ/6 when g(w1) = θ(1 − 2w1), but this does not feature in the proof, which simply establishes that H* = 0.

To show that H* = 0, we use the symmetry of q(s) around s = 1/2, which implies that Q(s) = Q(1) − Q(1 − s) for s > 1/2. Therefore
$$Q^ \ast = {\int}_0^1 {Q\left( {w_2} \right)dw_2} = {\int}_0^{1/2} {Q\left( {w_2} \right)dw_2} + {\int}_{1/2}^1 {\left[ {Q\left( 1 \right) - Q\left( {1 - w_2} \right)} \right]dw_2} = \frac{1}{2}Q\left( 1 \right) = \frac{1}{2}k_q,$$
since \({\int}_{1/2}^1 {Q\left( {1 - w_2} \right)dw_2} = {\int}_0^{1/2} {Q\left( {w_2} \right)dw_2}\). Then \(H^ \ast = \frac{1}{2} - k_q^{ - 1}Q^ \ast = \frac{1}{2} - k_q^{ - 1}\left( {\frac{1}{2}k_q} \right) = 0\), which implies that cov(w1,w2) = 0. ☐
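
The claim H* = 0 is easy to check numerically for particular symmetric choices of q. A sketch (the two q's shown are the APS-1 and APS-2 choices used below):

```python
# Numeric check that H* = ∫ H(w) dw = 0 when q is symmetric about 1/2,
# using H(w) = w - Q(w)/k_q with Q(w) = ∫_0^w q(s) ds and k_q = Q(1).
import numpy as np

def H_star(q, n=100_000):
    s = np.linspace(0.0, 1.0, n + 1)
    qs = q(s)
    h = 1.0 / n
    dQ = 0.5 * (qs[1:] + qs[:-1]) * h              # trapezoid increments
    Q = np.concatenate(([0.0], np.cumsum(dQ)))     # Q(s) on the grid
    H = s - Q / Q[-1]                              # k_q = Q(1)
    return 0.5 * (H[1:] + H[:-1]).sum() * h        # trapezoid rule for ∫ H

assert abs(H_star(lambda s: (s - 0.5) ** 2)) < 1e-6   # APS-1 choice of q
assert abs(H_star(lambda s: np.abs(s - 0.5))) < 1e-6  # APS-2 choice of q
```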

Proof of Result 2

$$\begin{array}{l}E\left[ {w_1q\left( {w_2} \right)} \right] = {\int} {{\int} {w_1q\left( {w_2} \right)dw_1dw_2} + } \left[ {\theta {\int} {w_1\left( {1 - 2w_1} \right)dw_1} } \right]\\ \cdot \left[ {{\int} {q\left( {w_2} \right)\left( {1 - k_q^{ - 1}q\left( {w_2} \right)} \right)} dw_2} \right].\end{array}$$

Here all integrals are from zero to one.

The first term on the r.h.s. of this equation is \({\int} {w_1dw_1} \cdot {\int} {q\left( {w_2} \right)dw_2} = \frac{1}{2}Q\left( 1 \right) = \frac{1}{2}k_q\).

The first term in brackets following the “+” sign equals \(\theta \left( {\frac{1}{2} - \frac{2}{3}} \right) = - \frac{1}{6}\theta\).

The second term in brackets equals

$$\begin{array}{l}{\int} {q\left( {w_2} \right)dw_2 - } k_q^{ - 1}{\int} {\,} q\left( {w_2} \right)^2dw_2 = k_q - k_q^{ - 1}Eq\left( {w_2} \right)^2\\ = k_q - k_q^{ - 1}\left[ {{\mathrm{var}}\left( {q\left( {w_2} \right)} \right) + k_q^2} \right] = - k_q^{ - 1}{\mathrm{var}}\left( {q\left( {w_2} \right)} \right)\end{array}$$

Combining terms, \(E \left[ {w_1q\left( {w_2} \right)} \right] =\frac{1}{2}k_q + \frac{1}{6}\theta k_q^{ - 1}{\mathrm{var}}\left( {q\left( {w_2} \right)} \right)\) and therefore

$${\mathop{\rm{cov}}} \left( {w_1,q\left( {w_2} \right)} \right) = \frac{1}{6}\theta k_q^{ - 1}{\mathrm{var}}\left( {q\left( {w_2} \right)} \right).\quad \square$$
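
Result 2 can be confirmed by numerical integration over the unit square. A sketch, with a hypothetical θ and the quadratic choice of q:

```python
# Grid-based check of Result 2: cov(w1, q(w2)) = (θ/6) k_q^{-1} var(q(w2))
# under the copula density c(w1, w2) = 1 + θ (1 - 2 w1)(1 - q(w2)/k_q).
# θ and q are illustrative choices, not values from the paper.
import numpy as np

theta = 0.4
q = lambda s: (s - 0.5) ** 2

n = 1000
w = (np.arange(n) + 0.5) / n                     # midpoint grid on (0, 1)
k_q = q(w).mean()                                # ∫ q
var_q = (q(w) ** 2).mean() - k_q ** 2            # var of q(w2), w2 uniform

W1, W2 = np.meshgrid(w, w, indexing="ij")
c = 1 + theta * (1 - 2 * W1) * (1 - q(W2) / k_q)
E_w1q = (W1 * q(W2) * c).mean()                  # E[w1 q(w2)] under the copula
cov = E_w1q - 0.5 * k_q                          # subtract E(w1) E[q(w2)]

assert np.isclose(cov, theta * var_q / (6 * k_q), atol=1e-6)
```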

Proof of Result 3 We have the APS-2 copula \(c( {w_1,w_2} ) = 1 + \theta ( {1 - 2w_1} )( {1 - k_q^{ - 1}q( {w_2} )} )\) where \(w_1 = F_u\left( u \right)\) and \(w_2 = F_\omega \left( \omega \right)\). For notational simplicity only, suppose that \(E\left( u \right) = E\left( \omega \right) = 0\). (Otherwise we just have to do the analysis below in terms of deviations from means.) Then

$${\mathrm{cov}}\left( {u,\omega } \right) = E\left( {u\omega } \right) = {\int}_{\! - \infty }^\infty {{\int}_{\! - \infty }^\infty u\omega {f_u\left( u \right)f_\omega \left( \omega \right)c\left( {F_u\left( u \right),\,F_\omega \left( \omega \right)} \right)du\,d\omega } }$$
$$\begin{array}{l}\begin{array}{*{20}{c}} { = \displaystyle{\int}_{ - \infty }^\infty {{\int}_{ - \infty }^\infty {u\omega f_u\left( u \right)f_\omega \left( \omega \right)du\,d\omega } } } & {\left[ {{\mathrm{term}}\,1} \right]} \end{array}\\ \end{array}$$
$$\begin{array}{*{20}{c}} { + \theta \displaystyle{\int}_0^\infty {{\int}_0^\infty {u\omega f_u\left( u \right)f_\omega \left( \omega \right)\left( {1 - 2w_1} \right)\left( {1 - k_q^{ - 1}q\left( {w_2} \right)} \right)\,du\,d\omega } } } & {\left[ {{\mathrm{term}}\,{\mathrm{2}}} \right]} \end{array}$$
$$\begin{array}{*{20}{c}} { + \theta \displaystyle{\int}_{ - \infty }^0 {{\int}_{ - \infty }^0 {u\omega f_u\left( u \right)f_\omega \left( \omega \right)\left( {1 - 2w_1} \right)\left( {1 - k_q^{ - 1}q\left( {w_2} \right)} \right)\,du\,d\omega } } } & {\left[ {{\mathrm{term}}\,{\mathrm{3}}} \right]} \end{array}$$
$$\begin{array}{*{20}{c}} { + \theta \displaystyle{\int}_{ - \infty }^0 {{\int}_0^\infty {u\omega f_u\left( u \right)f_\omega \left( \omega \right)\left( {1 - 2w_1} \right)\left( {1 - k_q^{ - 1}q\left( {w_2} \right)} \right)\,du\,d\omega } } } & {\left[ {{\mathrm{term}}\,{\mathrm{4}}} \right]} \end{array}$$
$$\begin{array}{*{20}{c}} { + \theta \displaystyle{\int}_0^\infty {{\int}_{ - \infty }^0 {u\omega f_u\left( u \right)f_\omega \left( \omega \right)\left( {1 - 2w_1} \right)\left( {1 - k_q^{ - 1}q\left( {w_2} \right)} \right)\,du\,d\omega } } } & {\left[ {{\mathrm{term}}\,{\mathrm{5}}} \right]} \end{array}$$

where again, for visual simplicity, w1 = Fu(u) and w2 = Fω(ω).

Term 1 equals zero because E(u) = E(ω) = 0.

Term 2 equals the negative of term 3 (i.e., they sum to zero). Because u and ω have symmetric densities, fu(u) = fu(−u) and fω(ω) = fω(−ω). Also Fu(u) = 1 − Fu(−u), so that 1 − 2Fu(−u) = −[1 − 2Fu(u)]. Finally, q(s) is symmetric around \(s = \frac{1}{2}\), so that q(s) = q(1 − s) and therefore q(Fω(−ω)) = q(1 − Fω(−ω)) = q(Fω(ω)). This implies that the value of the integrand in term 2 at any (u,ω) pair (e.g., (0.3, 0.4)) is the negative of the value of the integrand in term 3 at the corresponding pair (e.g., (−0.3, −0.4)), and thus the two terms sum to zero.

Similarly term 4 and term 5 sum to zero, and then cov(u,ω) = 0. ☐
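
Result 3 can be illustrated by simulation: draw (w1, w2) from the APS-2 copula by rejection, map both arguments through symmetric quantile functions (here normal, with hypothetical scales), and check that the resulting (u, ω) are numerically uncorrelated. A sketch:

```python
# Monte Carlo illustration of Result 3 with the APS-2 copula
# c = 1 + θ (1 - 2 w1)(1 - 4 |w2 - 1/2|): symmetric marginals for u and ω,
# yet cov(u, ω) ≈ 0. θ and the scale parameters are illustrative.
import numpy as np
from scipy.special import ndtri            # inverse standard normal CDF

theta, sigma_u, sigma_w = 0.8, 1.0, 1.5
rng = np.random.default_rng(1)

# Rejection sampling from the copula density, which is bounded by 1 + |θ|.
w1, w2, v = rng.random((3, 2_000_000))
c = 1 + theta * (1 - 2 * w1) * (1 - 4 * np.abs(w2 - 0.5))
keep = v * (1 + abs(theta)) < c            # accept with probability c/(1+|θ|)
w1, w2 = w1[keep], w2[keep]

u = sigma_u * ndtri(w1)                    # symmetric marginal for u
omega = sigma_w * ndtri(w2)                # symmetric marginal for ω
cov_uw = np.mean(u * omega) - u.mean() * omega.mean()
assert abs(cov_uw) < 0.01                  # zero up to Monte Carlo error
```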

Proof of Result 4 Lower tail dependence equals P(w1 ≤ α|w2 ≤ α) = P(w1 ≤ α, w2 ≤ α)/P(w2 ≤ α) = \(\{ {\alpha ^2 + \theta \alpha ( {1 - \alpha } )[ {\alpha - k_q^{ - 1}Q( \alpha )} ]} \}/\alpha = \alpha + \theta ( {1 - \alpha } )[ {\alpha - k_q^{ - 1}Q( \alpha )} ]\) and this goes to zero as α → 0. Upper tail dependence equals P(w1 ≥ α|w2 ≥ α) = P(w1 ≥ α, w2 ≥ α)/P(w2 ≥ α) = \(\{ {( {1 - \alpha } )^2 + \theta \alpha ( {1 - \alpha } )[ {\alpha - k_q^{ - 1}Q( \alpha )} ]} \}/( {1 - \alpha } ) = ( {1 - \alpha } ) + \theta \alpha [ {\alpha - k_q^{ - 1}Q( \alpha )} ]\) and this goes to zero as α → 1. ☐
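
For the APS-1 choice q(s) = (s − 1/2)², both conditional probabilities in Result 4 are available in closed form, so their limiting behavior can be traced numerically. A sketch (θ = 0.5 is an illustrative admissible value):

```python
# Evaluate P(w1 ≤ α | w2 ≤ α) and P(w1 ≥ α | w2 ≥ α) from
# C(α, α) = α² + θ α (1 - α)[α - Q(α)/k_q] for q(s) = (s - 1/2)².
theta, k_q = 0.5, 1.0 / 12.0

def Q(a):                        # ∫_0^a (s - 1/2)^2 ds
    return ((a - 0.5) ** 3 + 0.125) / 3.0

def C(a):                        # C(α, α) for the APS-1 copula
    return a * a + theta * a * (1 - a) * (a - Q(a) / k_q)

lower = [C(a) / a for a in (1e-1, 1e-3, 1e-5)]              # α → 0
upper = [(1 - 2 * a + C(a)) / (1 - a) for a in (0.9, 0.999, 0.99999)]  # α → 1

assert lower[0] > lower[1] > lower[2] > 0 and lower[2] < 1e-6
assert upper[0] > upper[1] > upper[2] > 0 and upper[2] < 1e-4
```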

Proof of Result 5

(i) It is easy to verify that the marginals of c(w1,w2) are uniform, so we just need to verify that c(w1,w2) ≥ 0 for all w1,w2 in the unit square. We have −1 ≤ 1 − 2w1 ≤ 1 and −2 ≤ 1 − 12(w2 − 1/2)² ≤ 1, so −2 ≤ (1 − 2w1)[1 − 12(w2 − 1/2)²] ≤ 2. Therefore c(w1,w2) ≥ 0 if −1/2 ≤ θ ≤ 1/2.

(ii), (iii) Some useful integrals (integrals are from zero to one):

(a) \({\int} {\left( {w_2 - 1/2} \right)^2dw_2 = \frac{1}{{12}}}.\)

(b) \({\int} {\left( {w_2 - 1/2} \right)^4dw_2 = \frac{1}{{80}}}.\)

Therefore \({\mathop{\rm{var}}} \left( {\left( {w_2 - 1/2} \right)^2} \right) = \frac{1}{{80}} - \left( {\frac{1}{{12}}} \right)^2 = \frac{1}{{180}}\), which is (iii). To establish (ii), use Result 2 to obtain \({\mathop{\rm{cov}}} \left( {w_1,q\left( {w_2} \right)} \right) = \frac{1}{6}\theta \cdot 12 \cdot \frac{1}{{180}} = \frac{1}{{90}}\theta\).

(iv) \({\mathrm{corr}}\,\left( {w_1,q\left( {w_2} \right)} \right) = \frac{{\frac{1}{{90}}\theta }}{{\sqrt {1/12} \sqrt {1/180} }} = \frac{2}{{\sqrt {15} }}\theta\). ☐
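
The moments in Result 5 can be checked on a grid. A sketch for the APS-1 copula (θ = 0.3 is an illustrative value in [−1/2, 1/2]):

```python
# Grid check of Result 5 for c = 1 + θ (1 - 2 w1)[1 - 12 (w2 - 1/2)^2]:
# the moment integrals, var((w2 - 1/2)^2) = 1/180, cov = θ/90,
# and corr(w1, (w2 - 1/2)^2) = (2/√15) θ.
import numpy as np

n = 1200
w = (np.arange(n) + 0.5) / n                     # midpoint grid on (0, 1)
m2 = ((w - 0.5) ** 2).mean()                     # ∫ (w - 1/2)^2 dw
m4 = ((w - 0.5) ** 4).mean()                     # ∫ (w - 1/2)^4 dw
assert np.isclose(m2, 1 / 12, atol=1e-6)
assert np.isclose(m4, 1 / 80, atol=1e-6)
assert np.isclose(m4 - m2 ** 2, 1 / 180, atol=1e-6)   # part (iii)

theta = 0.3
W1, W2 = np.meshgrid(w, w, indexing="ij")
c = 1 + theta * (1 - 2 * W1) * (1 - 12 * (W2 - 0.5) ** 2)
cov = (W1 * (W2 - 0.5) ** 2 * c).mean() - 0.5 * m2    # part (ii): θ/90
corr = cov / np.sqrt((1 / 12) * (1 / 180))
assert np.isclose(cov, theta / 90, atol=1e-6)
assert np.isclose(corr, 2 * theta / np.sqrt(15), atol=1e-4)   # part (iv)
```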

Proof of Result 6

(i) Once again it is easy to verify that the marginals of c(w1,w2) are uniform, so we just need to verify that c(w1,w2) ≥ 0 for all w1,w2 in the unit square. We have −1 ≤ 1 − 2w1 ≤ 1 and −1 ≤ 1 − 4|w2 − 1/2| ≤ 1, so −1 ≤ (1 − 2w1)[1 − 4|w2 − 1/2|] ≤ 1. Therefore c(w1,w2) ≥ 0 if −1 ≤ θ ≤ 1.

(ii), (iii) Some useful integrals (integrals are from zero to one):

(a) \({\int} {\left| {w_2 - 1/2} \right|dw_2 = \frac{1}{4}.}\)

(b) \({\int} {\left| {w_2 - 1/2} \right|^2dw_2 = \frac{1}{{12}}}.\)

Therefore \({\mathop{\rm{var}}} \left( {\left| {w_2 - 1/2} \right|} \right) = \frac{1}{{12}} - \left( {\frac{1}{4}} \right)^2 = \frac{1}{{48}}\) which is (iii). Then use Result 2 to obtain \({\mathop{\rm{cov}}} \left( {w_1,q\left( {w_2} \right)} \right) = \frac{1}{6}\theta \cdot 4 \cdot \frac{1}{{48}} = \frac{1}{{72}}\theta\).

(iv) \({\mathrm{corr}}\left( {w_1,q\left( {w_2} \right)} \right) = \frac{{\frac{1}{{72}}\theta }}{{\sqrt {1/12} \sqrt {1/48} }} = \frac{1}{3}\theta\). ☐
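
Results 1 and 6 can be seen together in a simulation sketch: draws from the APS-2 copula (θ illustrative) have corr(w1, w2) ≈ 0 but corr(w1, |w2 − 1/2|) ≈ θ/3:

```python
# Rejection sampling from the APS-2 copula density, which is bounded by 1+|θ|,
# followed by checks of corr(w1, w2) = 0 and corr(w1, |w2 - 1/2|) = θ/3.
import numpy as np

theta = 0.9                                  # illustrative value in [-1, 1]
rng = np.random.default_rng(7)
w1, w2, v = rng.random((3, 3_000_000))
c = 1 + theta * (1 - 2 * w1) * (1 - 4 * np.abs(w2 - 0.5))
keep = v * (1 + abs(theta)) < c              # accept with probability c/(1+|θ|)
w1, w2 = w1[keep], w2[keep]

r12 = np.corrcoef(w1, w2)[0, 1]
rq = np.corrcoef(w1, np.abs(w2 - 0.5))[0, 1]
assert abs(r12) < 0.005                      # uncorrelated arguments (Result 1)
assert abs(rq - theta / 3) < 0.005           # Result 6(iv)
```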

Proof of Result 10 First consider APS-3-A. We have c*(w1,w2,w3) = (c12 − 1) + (c13 − 1) + c23, where \(c_{12} - 1 = \theta _{12}( {1 - 2w_1} )[ {1 - 12( {w_2 - \frac{1}{2}} )^2} ]\). We have −1 ≤ (1 − 2w1) ≤ 1 and \(- 2 \le 1 - 12\left( {w_2 - \frac{1}{2}} \right)^2 \le 1\), so the smallest that c12 − 1 can be is −2|θ12|. Similarly, the smallest that c13 − 1 can be is −2|θ13|. Finally, \(\delta = \left( {1 - \rho ^2} \right)^{ - 1/2}\exp \left( { - \frac{1}{2}\frac{{\rho ^2}}{{\left( {1 - \rho ^2} \right)}}} \right)\) is a lower bound for the bivariate normal copula \(c_{23} = \left( {1 - \rho ^2} \right)^{ - 1/2}\exp \left[ { - \frac{1}{2}\frac{1}{{\left( {1 - \rho ^2} \right)}}\left( {\rho w_2 - \rho w_3} \right)^2} \right]\), since (ρw2 − ρw3)² ≤ ρ² for w2, w3 in [0,1]. So a sufficient condition for c* to be a copula is −2|θ12| − 2|θ13| + δ ≥ 0, that is, |θ12| + |θ13| ≤ δ/2. To see why this is also necessary, note that the lower bound is attainable, for example at w1 = 0, w2 = 1, and w3 = 0. ☐

The proof of the result for APS-3-B follows exactly the same lines so it is omitted.

Appendix 3

Generalization of Result 9 to higher dimensions

Consider the four-dimensional case. We start with two-dimensional copulas c12, c13, c14, c23, c24, and c34. We can construct three-dimensional copulas as in Result 9, and we can construct a four-dimensional copula as

$$\begin{array}{l}c^ \ast \left( {w_1,w_2,w_3,w_4} \right) = 1 + \left( {c_{12} - 1} \right) + \left( {c_{13} - 1} \right) + \left( {c_{14} - 1} \right) \\\qquad\qquad\qquad\qquad+ \left( {c_{23} - 1} \right) + \left( {c_{24} - 1} \right) + \left( {c_{34} - 1} \right).\end{array}$$

The implied 3-copulas are as given in Result 9, for example,

$$\begin{array}{l}\displaystyle \int {c^ \ast \left( {w_1,w_2,w_3,w_4} \right)dw_1} = 1 + 0 + 0 + 0 + \left( {c_{23} - 1} \right)\\ \qquad\qquad\qquad\qquad\qquad\,\,+ \left( {c_{24} - 1} \right) + \left( {c_{34} - 1} \right)\\ \qquad\qquad\qquad\qquad\qquad\,\,= c^ \ast \left( {w_2,w_3,w_4} \right).\end{array}$$

The implied 2-copulas are therefore the 2-copulas with which we started: c12, c13, and so on.

This extends to arbitrary dimensionality d. We can define a d-copula c*(w1,…,wd) using \(\left( {\begin{array}{*{20}{c}} d \\ 2 \end{array}} \right)\) bivariate copulas cij as follows:

$$c^ \ast \left( {w_1, \ldots ,w_d} \right) = 1 + \mathop {\sum}\limits_{1 \le i < j \le d} {\left( {c_{ij} - 1} \right)}.$$
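
A numerical sketch of this construction for d = 3 with APS-2 pair copulas (the θij values are hypothetical), confirming that integrating out one argument returns exactly the 2-copulas we started with and that the density has total mass one:

```python
# Build c*(w1, w2, w3) = 1 + Σ_{i<j} (c_ij - 1) on a grid from APS-2 pair
# copulas and verify the implied 2-copula and total mass. θ values are
# illustrative and small enough that c* stays non-negative.
import numpy as np

def aps2(u, v, th):
    return 1 + th * (1 - 2 * u) * (1 - 4 * np.abs(v - 0.5))

theta = {(0, 1): 0.2, (0, 2): -0.1, (1, 2): 0.15}
n = 100
g = (np.arange(n) + 0.5) / n                 # midpoint grid on (0, 1)
W = np.meshgrid(g, g, g, indexing="ij")

c_star = np.ones((n, n, n))
for (i, j), th in theta.items():
    c_star = c_star + (aps2(W[i], W[j], th) - 1)

implied12 = c_star.mean(axis=2)              # ∫ c* dw3 (midpoint rule)
target12 = aps2(W[0][:, :, 0], W[1][:, :, 0], theta[(0, 1)])
assert c_star.min() > 0                      # a valid density for these θ
assert np.allclose(implied12, target12, atol=1e-10)
assert np.isclose(c_star.mean(), 1.0)        # ∫∫∫ c* = 1
```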

Returning for purposes of discussion to the four-dimensional case, c*(w1,w2,w3,w4) is a copula if it is a density (i.e., if it is non-negative), and there is no issue if we are satisfied with the lower-dimensional copulas that it implies. But this may not always be the case. For example, suppose that we have a production frontier system as in Eqs. (1) and (2) but now with four inputs, so that we have four random errors (u, ω2, ω3, and ω4) instead of three. It might be natural to want (ω2,ω3,ω4) to be trivariate normal, that is, to be marginally normal and to have the trivariate normal copula. But c*(w1,w2,w3,w4) as defined above does not imply a trivariate normal 3-copula, even if c23, c24, and c34 are all bivariate normal copulas.

An alternative construction is as follows. Let co(w2,w3,w4) be a trivariate normal copula, and c12, c13, and c14 be the desired 2-copulas linking w1 to w2, w3, and w4. Then define the 4-copula

$$\begin{array}{l}c^o\left( {w_1,w_2,w_3,w_4} \right) = 1 + \left( {c_{12} - 1} \right) + \left( {c_{13} - 1} \right) + \left( {c_{14} - 1} \right) \\\qquad\qquad\qquad\qquad+ \left( {c^o\left( {w_2,w_3,w_4} \right) - 1} \right).\end{array}$$

If c23,c24, and c34 are all bivariate normal copulas, then the 2-copulas implied by co are the same as those implied by c*, and so are the 3-copulas that involve w1. But the 3-copula for w2, w3, and w4 is different, because co(w1,w2,w3,w4) implies the 3-copula co(w2,w3,w4), whereas c*(w1,w2,w3,w4) implies the 3-copula 1 + (c23 − 1) + (c24 − 1) + (c34 − 1), which is not a trivariate normal copula even if its constituent 2-copulas are all bivariate normal.

Another way to construct a four-copula from lower dimensional copulas is to use a vine copula, as in Joe (1996) and Aas et al. (2009). These require specification of two-dimensional marginal and conditional copulas. In general there are many such vine copulas because they depend on the vine structure and the numbering of the variables. However, in our problem there is arguably a natural structure where the two-dimensional marginals are APS-2 and the conditional copulas are Gaussian. Thus the first variable is the half-normal error, and the remaining three variables are multivariate normal. Because of the multivariate normal assumption, the ordering of the last three variables does not matter. The benefit is that this representation results in a somewhat simpler functional form of the density than other vine representations.

Cite this article

Amsler, C., Prokhorov, A. & Schmidt, P. A new family of copulas, with application to estimation of a production frontier system. J Prod Anal 55, 1–14 (2021). https://doi.org/10.1007/s11123-020-00590-w

Keywords

  • Copula
  • Production frontier
  • Technical inefficiency
  • Allocative inefficiency