
Families of linear efficiency programs based on Debreu’s loss function

Journal of Productivity Analysis

Abstract

Gerard Debreu introduced a well known radial efficiency measure which he called a “coefficient of resource utilization.” He derived this scalar from a much less well known “dead loss” function that characterizes the monetary value sacrificed to inefficiency, and which is to be minimized subject to a normalization condition. We use Debreu’s loss function, together with a variety of normalization conditions, to generate several popular families of linear efficiency programs. Our methodology also can be employed to generate entirely new families of linear efficiency programs.


Notes

  1. Diewert (1983) extended Debreu’s loss measure, but in a different context and in a different way than we do. Diewert focused his analysis on measuring the output loss that can be attributed to distortions within the production sector of an open economy. In addition, Diewert did not consider alternative normalization conditions as we do.

  2. The above quoted phrases are from Debreu (1951, pp. 274, 275, 284).

  3. Ten Raa (2008) provides a discussion of Debreu’s economic system, part of which is our production sector.

  4. \( \left( {a,b} \right) > \left( {d,e} \right) \) means that \( a_{i} > d_{i} ,\,\forall i = 1, \ldots ,m \) and \( b_{r} > e_{r} ,\,\forall r = 1, \ldots ,s \). \( \left( {a,b} \right) \ge \left( {d,e} \right) \) means that \( a_{i} \ge d_{i} ,\,\forall i = 1, \ldots ,m \) and \( b_{r} \ge e_{r} ,\,\forall r = 1, \ldots ,s \).

  5. Given a vector \( \left( {\tilde{x},\tilde{y}} \right) \in R_{ + }^{m} \times R_{ + }^{s} \), a vector \( \left( {c,p,\alpha } \right) \in R_{ + }^{m} \times R_{ + }^{s} \times R \) defines a hyperplane given by the equation \( \sum\nolimits_{r = 1}^{s} {p_{r} \tilde{y}_{r} } - \sum\nolimits_{i = 1}^{m} {c_{i} \tilde{x}_{i} } = \alpha \). By definition, a supporting hyperplane of T is a hyperplane that contains at least one point of \( \partial^{W} \left( T \right) \), and \( \sum\nolimits_{r = 1}^{s} {p_{r} y_{r} } - \sum\nolimits_{i = 1}^{m} {c_{i} x_{i} } \le \alpha \), for all \( \left( {x,y} \right) \in T \).

  6. Chambers et al. (1998) prove that there is a dual relationship between the profit function and the directional distance function. In particular, the directional distance function can be recovered from the profit function by means of \( \beta^{*} = \inf_{\left( {c,p} \right) \ge \left( {0_{m} ,0_{s} } \right)} \left\{ {\Pi \left( {c,p} \right) - \left( {\sum\nolimits_{r = 1}^{s} {p_{r} y_{r} } - \sum\nolimits_{i = 1}^{m} {c_{i} x_{i} } } \right):\sum\nolimits_{r = 1}^{s} {p_{r} g_{r}^{ + } } + \sum\nolimits_{i = 1}^{m} {c_{i} g_{i}^{ - } } = 1} \right\} \). This is clearly a particular case of program A2 taking as NC the linear condition LNC3, since \( \Pi \left( {c,p} \right) \) is defined only for prices that support points belonging to \( \partial^{W} \left( T \right) \).

  7. Strictly speaking, the \( \ell_{\infty } \) distance from \( \left( {x_{0} ,y_{0} } \right) \) to \( \partial^{W} \left( T \right) \) is equal to the directional distance function associated with the directional vector \( g = \left( {1_{m} ,1_{s} } \right) \) only if \( \left( {x_{0} ,y_{0} } \right) \in T \); otherwise, the directional distance function is equal to − [the \( \ell_{\infty } \) distance].

  8. Asmild and Pastor (2010) provide a detailed presentation of the RDM and MEA programs.

  9. Based on the solution of the enhanced additive program, Bardhan et al. (1996) defined an efficiency measure called MED (Measure of Efficiency Dominance) which was renamed by Banker and Cooper (1994) as MEP (Measure of Efficiency Proportions), i.e., MEP = MED.

  10. As Pastor and Aparicio (2010) have recently shown, linear programs that are associated with additive distance functions generate inefficiency measures (e.g., directional distance functions) and, as a consequence, have the same linear objective function as the corresponding linear loss function program. On the other hand, linear programs that are associated with multiplicative distance functions generate efficiency measures (e.g., BCC programs), and their objective functions are not the objective function of the corresponding linear loss function programs but are closely related to them.

References

  • Ali AI, Seiford LM (1993) The mathematical programming approach to efficiency analysis, chapter 3. In: Fried HO, Lovell CAK, Schmidt SS (eds) The measurement of productive efficiency. Oxford University Press, New York

  • Asmild M, Pastor JT (2010) Slack free MEA and RDM with comprehensive efficiency measures. OMEGA 38(6):475–483

  • Banker RD, Cooper WW (1994) Validation and generalization of DEA and its uses. TOP 2:249–314

  • Banker RD, Charnes A, Cooper WW (1984) Some models for estimating technical and scale inefficiencies in data envelopment analysis. Manage Sci 30(9):1078–1092

  • Bardhan I, Bowlin WF, Cooper WW, Sueyoshi T (1996) Models and measures for efficiency dominance in DEA, parts I and II. J Oper Res Soc Jpn 39(3):322–344

  • Bogetoft P, Hougaard JL (1999) Efficiency evaluations based on potential (non-proportional) improvements. J Prod Anal 12(3):233–247

  • Briec W (1997) A graph-type extension of Farrell technical efficiency measure. J Prod Anal 8(1):95–110

  • Briec W (1999) Hölder distance function and measurement of technical efficiency. J Prod Anal 11(2):111–131

  • Chambers RG, Chung Y, Färe R (1996) Benefit and distance functions. J Econ Theory 70(2):407–419

  • Chambers RG, Chung Y, Färe R (1998) Profit, directional distance functions, and Nerlovian efficiency. J Optim Theory Appl 98(2):351–364

  • Charnes A, Cooper WW, Rhodes E (1978) Measuring the efficiency of decision-making units. Eur J Oper Res 2(6):429–444

  • Charnes A, Cooper WW, Golany B, Seiford L, Stutz J (1985) Foundations of data envelopment analysis for Pareto-Koopmans efficient empirical production functions. J Econom 30(1–2):91–107

  • Charnes A, Cooper WW, Rousseau J, Semple J (1987) Data envelopment analysis and axiomatic notions of efficiency and reference sets. Research Report CCS558, Center for Cybernetic Studies, University of Texas, Austin TX, USA

  • Cooper WW, Park KS, Pastor JT (1999) RAM: a range adjusted measure of inefficiency for use with additive models and relations to other models and measures in DEA. J Prod Anal 11(1):5–42

  • Cooper WW, Pastor JT, Borras F, Aparicio J, Pastor D (2011) BAM: a bounded adjusted measure of efficiency for use with bounded additive models. J Prod Anal 35:85–94

  • Debreu G (1951) The coefficient of resource utilization. Econometrica 19(3):273–292

  • Diewert WE (1983) The measurement of waste within the production sector of an open economy. Scand J Econ 85(2):159–179

  • Färe R, Lovell CAK (1978) Measuring the technical efficiency of production. J Econ Theory 19(1):150–162

  • Färe R, Primont D (1995) Multi-output production and duality: theory and applications. Kluwer Academic Publishers, Boston

  • Färe R, Grosskopf S, Lovell CAK (1985) The measurement of efficiency of production. Kluwer-Nijhoff Publishing, Boston

  • Farrell MJ (1957) The measurement of productive efficiency. J R Stat Soc Ser A General 120(3):253–282

  • Koopmans TC (1951) Analysis of production as an efficient combination of activities. In: Koopmans TC (ed) Activity analysis of production and allocation. Cowles Commission for Research in Economics Monograph No. 13. Wiley, New York

  • Lovell CAK, Pastor JT (1995) Units invariant and translation invariant DEA models. Oper Res Lett 18(3):147–151

  • Luenberger DG (1992a) Benefit functions and duality. J Math Econ 21(5):461–481

  • Luenberger DG (1992b) New optimality principles for economic efficiency and equilibrium. J Optim Theory Appl 75(2):221–264

  • Pastor JT, Aparicio J (2010) Distance functions and efficiency measurement. Indian Econ Rev (forthcoming)

  • Pastor JT, Ruiz JL, Sirvent I (1999) An enhanced DEA Russell-graph efficiency measure. Eur J Oper Res 115(3):187–198

  • Portela MCAS, Thanassoulis E (2006) Zero weights and non-zero slacks: different solutions to the same problem. Ann Oper Res 145:129–147

  • Ray SC (2007) Shadow profit maximization and a measure of overall inefficiency. J Prod Anal 27(3):231–236

  • Silva Portela MCA, Thanassoulis E, Simpson G (2004) Negative data in DEA: a directional distance approach applied to bank branches. J Oper Res Soc 55(10):1111–1121

  • Ten Raa T (2008) Debreu’s coefficient of resource utilization, the Solow residual, and TFP: the connection by Leontief preferences. J Prod Anal 30(3):191–199

  • Thompson RG, Singleton F, Thrall R, Smith B (1986) Comparative site evaluations for locating a high-energy physics lab in Texas. Interfaces 16:35–49

  • Tone K (2001) A slacks-based measure of efficiency in data envelopment analysis. Eur J Oper Res 130(3):498–509


Acknowledgments

We thank two anonymous referees for constructive comments that helped improve the content and presentation of this paper. We are also grateful to Professor Prasada Rao, Director of the CEPA at the University of Queensland, for his hospitality, and to the Ministerio de Ciencia e Innovación, Spain, for supporting this research with grant MTM2009-10479.

Author information

Correspondence to Jesus T. Pastor.

Appendix

Proof of Proposition 1

Let \( \left( {c^{*} ,p^{*} ,\alpha^{*} } \right) \) be an optimal solution of program A3. Then, since \( \left( {c^{*} ,p^{*} ,\alpha^{*} } \right) \in SH\left( T \right) \), there exists a vector \( \left( {x^{*} ,y^{*} } \right) \in \partial^{W} \left( T \right) \) such that \( \sum\nolimits_{r = 1}^{s} {p_{r}^{*} y_{r}^{*} } - \sum\nolimits_{i = 1}^{m} {c_{i}^{*} x_{i}^{*} } = \alpha^{*} \) and \( \sum\nolimits_{r = 1}^{s} {p_{r}^{*} y_{r}^{*} } - \sum\nolimits_{i = 1}^{m} {c_{i}^{*} x_{i}^{*} } \ge \sum\nolimits_{r = 1}^{s} {p_{r}^{*} v_{r} } - \sum\nolimits_{i = 1}^{m} {c_{i}^{*} u_{i} } \), \( \forall \left( {u,v} \right) \in T \). Hence, by definition, \( \left( {c^{*} ,p^{*} } \right) \in Q\left( {x^{*} ,y^{*} } \right) \). Now, we observe that \( \left( {x^{*} ,y^{*} ;c\left( {x^{*} ,y^{*} } \right),p\left( {x^{*} ,y^{*} } \right)} \right) \), with \( c\left( {x^{*} ,y^{*} } \right) = c^{*} \) and \( p\left( {x^{*} ,y^{*} } \right) = p^{*} \), is a feasible solution of A2. Finally, it is easy to prove that \( \left( {x^{*} ,y^{*} ;c\left( {x^{*} ,y^{*} } \right),p\left( {x^{*} ,y^{*} } \right)} \right) \) is also an optimal solution of A2 and, in fact, program A2 has the same optimal value as program A3. □

Proof of Proposition 2

The result follows from the structure of program A3 together with the fact that if \( \sum\nolimits_{r = 1}^{s} {p_{r} y_{rj} } - \sum\nolimits_{i = 1}^{m} {c_{i} x_{ij} } - \alpha \le 0 \), \( \forall j = 1, \ldots ,n \), then \( \sum\nolimits_{r = 1}^{s} {p_{r} v_{r} } - \sum\nolimits_{i = 1}^{m} {c_{i} u_{i} } - \alpha \le 0 \), \( \forall \left( {u,v} \right) \in T \). Also, it is easy to prove that if \( \left( {c^{*} ,p^{*} ,\alpha^{*} } \right) \) is an optimal solution of program A4, then \( \left( {c^{*} ,p^{*} ,\alpha^{*} } \right) \in SH\left( T \right) \). □

We now prove the claimed equivalences for Programs 1, 5 and 7; the proofs for the remaining programs are trivial.

Program 1. The BCC input-oriented program

Consider the linear loss function program A4 with linear normalization condition LNC1. As a consequence of LNC1 the objective function of Program 1 is equivalent to \( 1 + \min \left\{ { - \sum\nolimits_{r = 1}^{s} {p_{r} y_{r0} } + \alpha } \right\} \) and to \( 1 - \max \left\{ {\sum\nolimits_{r = 1}^{s} {p_{r} y_{r0} } - \alpha } \right\} \), which yields

$$
1 - L\left( x_{0}, y_{0}; \text{LNC}1 \right) = \begin{aligned}[t]
\max_{c,p,\alpha}\quad & \sum_{r=1}^{s} p_{r} y_{r0} - \alpha \\
\text{s.t.}\quad & \sum_{r=1}^{s} p_{r} y_{rj} - \sum_{i=1}^{m} c_{i} x_{ij} - \alpha \le 0, \quad \forall j \\
& c \ge 0_{m}, \; p \ge 0_{s} \\
& \sum_{i=1}^{m} c_{i} x_{i0} = 1 \quad (\text{LNC}1)
\end{aligned}
$$

This program is exactly the multiplier form of the BCC input-oriented program. Because the envelopment and multiplier forms are linear duals, their optimal values coincide. Therefore \( 1 - L\left( {x_{0} ,y_{0} ;{\text{LNC}}1} \right) = \theta^{*} \).
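As an illustration, the multiplier form is an ordinary linear program and can be handed to any off-the-shelf LP solver. The sketch below is our own, not from the paper: the variable stacking \([c, p, \alpha]\), the function name, and the toy single-input, single-output data are all assumptions made for illustration; it uses `scipy.optimize.linprog`.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: one input, one output, three DMUs.
X = np.array([[2.0], [4.0], [4.0]])  # inputs, shape (n, m)
Y = np.array([[2.0], [4.0], [2.0]])  # outputs, shape (n, s)

def bcc_input_multiplier(X, Y, j0):
    """Multiplier form of the BCC input-oriented program (A4 with LNC1).

    Variables are stacked as [c (m), p (s), alpha (1)].
    Returns the optimal value of  max p'y0 - alpha,  i.e. theta*.
    """
    n, m = X.shape
    s = Y.shape[1]
    x0, y0 = X[j0], Y[j0]
    # linprog minimizes, so minimize -(p'y0 - alpha).
    obj = np.concatenate([np.zeros(m), -y0, [1.0]])
    # Supporting-hyperplane constraints: p'y_j - c'x_j - alpha <= 0, all j.
    A_ub = np.hstack([-X, Y, -np.ones((n, 1))])
    b_ub = np.zeros(n)
    # Normalization LNC1: c'x0 = 1.
    A_eq = np.concatenate([x0, np.zeros(s), [0.0]]).reshape(1, -1)
    b_eq = [1.0]
    bounds = [(0, None)] * (m + s) + [(None, None)]  # alpha is free
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return -res.fun  # theta* = p'y0 - alpha at the optimum

theta = bcc_input_multiplier(X, Y, 2)  # evaluate the third DMU
```

For these toy data the third DMU obtains \( \theta^{*} = 0.5 \), which matches the envelopment-form computation by hand (project onto the first DMU, halving the input).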

Program 5. The input-oriented Russell program

This program assumes that \( x_{0} > 0_{m} \). By means of the change of variables \( \theta_{i} = \frac{x_{i0} - s_{i0}^{-}}{x_{i0}} = 1 - \frac{s_{i0}^{-}}{x_{i0}} \), \( i = 1, \ldots ,m \), we get that Program 5 is equivalent to

$$
\begin{aligned}
1 - \max_{\lambda, s_{0}^{-}}\quad & \sum_{i=1}^{m} \frac{s_{i0}^{-}}{m x_{i0}} \\
\text{s.t.}\quad & \sum_{j=1}^{n} \lambda_{j} x_{ij} = x_{i0} - s_{i0}^{-}, \quad \forall i \\
& \sum_{j=1}^{n} \lambda_{j} y_{rj} \ge y_{r0}, \quad \forall r \\
& \sum_{j=1}^{n} \lambda_{j} = 1 \\
& \lambda \ge 0_{n}, \; s_{0}^{-} \ge 0_{m}
\end{aligned}
$$

In words, the input-oriented Russell program is equivalent to 1 minus a weighted additive program with weights \( w_{i}^{-} = 1/(m x_{i0}) \), \( i = 1, \ldots ,m \), and \( w_{r}^{+} = 0 \), \( r = 1, \ldots ,s \). Finally, thanks to Program 4, we have that \( \left\{ c_{i} \ge 1/(m x_{i0}),\ i = 1, \ldots ,m \right\} \) are the normalization conditions for Program 5 and, at the optimum, \( 1 - L\left( {x_{0} ,y_{0} ;{\text{LNC}}5} \right) = \frac{1}{m}\sum\nolimits_{i = 1}^{m} {\theta_{i}^{*} } \).
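The weighted additive reformulation can likewise be solved directly. The following sketch is our own illustration (the two-input toy data, function name, and variable stacking \([\lambda, s^{-}]\) are assumptions): it maximizes the weighted input slacks and returns 1 minus the optimal loss, i.e. the input-oriented Russell measure.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: two inputs, one output, two DMUs.
X = np.array([[2.0, 2.0], [4.0, 2.0]])
Y = np.array([[1.0], [1.0]])

def russell_input(X, Y, j0):
    """Input-oriented Russell measure via the equivalent weighted
    additive program: maximize sum_i s_i^- / (m * x_i0)."""
    n, m = X.shape
    x0, y0 = X[j0], Y[j0]
    # Variables: [lambda (n), s_minus (m)]; linprog minimizes,
    # so negate the slack weights in the objective.
    obj = np.concatenate([np.zeros(n), -1.0 / (m * x0)])
    # sum_j lambda_j x_ij + s_i^- = x_i0 for each input, plus sum lambda = 1.
    A_eq = np.vstack([
        np.hstack([X.T, np.eye(m)]),
        np.concatenate([np.ones(n), np.zeros(m)]),
    ])
    b_eq = np.concatenate([x0, [1.0]])
    # -sum_j lambda_j y_rj <= -y_r0  (outputs at least y0).
    A_ub = np.hstack([-Y.T, np.zeros((Y.shape[1], m))])
    b_ub = -y0
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + m), method="highs")
    return 1.0 + res.fun  # 1 - loss = (1/m) * sum_i theta_i*

rho = russell_input(X, Y, 1)  # evaluate the second DMU
```

Here the second DMU can halve its first input but not its second, so the measure is \( (0.5 + 1)/2 = 0.75 \), whereas a purely radial score would be coarser.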

Program 7. The enhanced Russell graph program

Consider the linear loss function program A4 with linear normalization condition LNC7, which assumes that \( x_{0} > 0_{m} \) and \( y_{0} > 0_{s} \). This program is equivalent (at its optimal solutions) to another program with the same constraints and objective function \( \max \left\{ {1 - \left( { - \sum\nolimits_{r = 1}^{s} {p_{r} y_{r0} } + \sum\nolimits_{i = 1}^{m} {c_{i} x_{i0} } + \alpha } \right)} \right\} \). Performing the change of variables \( \omega = 1 - \left( { - \sum\nolimits_{r = 1}^{s} {p_{r} y_{r0} } + \sum\nolimits_{i = 1}^{m} {c_{i} x_{i0} } + \alpha } \right) \) leads to the following equivalent reformulation (at its optimal solutions).

$$
\begin{aligned}
\max_{c,p,\alpha,\omega}\quad & \omega \\
\text{s.t.}\quad & \omega = 1 - \sum_{i=1}^{m} c_{i} x_{i0} + \sum_{r=1}^{s} p_{r} y_{r0} - \alpha \\
& -\sum_{i=1}^{m} c_{i} x_{ij} + \sum_{r=1}^{s} p_{r} y_{rj} - \alpha \le 0, \quad \forall j \\
& -c_{i} \le -\frac{1}{m x_{i0}}, \quad \forall i \\
& \frac{\omega}{s y_{r0}} - p_{r} \le 0, \quad \forall r \\
& c \ge 0_{m}, \; p \ge 0_{s}
\end{aligned}
$$

The first added restriction is just the definition of ω. The final set of restrictions has been reordered so as to have all the variables on the same side. The linear dual of the reformulated program is

$$
\begin{aligned}
\min_{\beta,\mu,t_{0}^{-},t_{0}^{+}}\quad & \beta - \frac{1}{m}\sum_{i=1}^{m} \frac{t_{i0}^{-}}{x_{i0}} \\
\text{s.t.}\quad & \beta + \frac{1}{s}\sum_{r=1}^{s} \frac{t_{r0}^{+}}{y_{r0}} = 1 \\
& \beta x_{i0} - \sum_{j=1}^{n} \mu_{j} x_{ij} - t_{i0}^{-} \ge 0, \quad \forall i \\
& -\beta y_{r0} + \sum_{j=1}^{n} \mu_{j} y_{rj} - t_{r0}^{+} \ge 0, \quad \forall r \\
& \beta - \sum_{j=1}^{n} \mu_{j} = 0 \\
& \mu \ge 0_{n}, \; t_{0}^{-} \ge 0_{m}, \; t_{0}^{+} \ge 0_{s}
\end{aligned}
$$

Making a second change of variables \( t_{i0}^{ - } = \beta s_{i0}^{ - } \), \( i = 1, \ldots ,m \), \( t_{r0}^{ + } = \beta s_{r0}^{ + } \), \( r = 1, \ldots ,s \), \( \mu_{j} = \beta \lambda_{j} \), \( j = 1, \ldots ,n \), generates

$$
\begin{aligned}
\min_{\beta,\lambda,s_{0}^{-},s_{0}^{+}}\quad & \beta\left(1 - \frac{1}{m}\sum_{i=1}^{m} \frac{s_{i0}^{-}}{x_{i0}}\right) \\
\text{s.t.}\quad & \beta\left(1 + \frac{1}{s}\sum_{r=1}^{s} \frac{s_{r0}^{+}}{y_{r0}}\right) = 1 \\
& \beta\left(x_{i0} - \sum_{j=1}^{n} \lambda_{j} x_{ij} - s_{i0}^{-}\right) \ge 0, \quad \forall i \\
& -\beta\left(y_{r0} - \sum_{j=1}^{n} \lambda_{j} y_{rj} + s_{r0}^{+}\right) \ge 0, \quad \forall r \\
& \beta\left(1 - \sum_{j=1}^{n} \lambda_{j}\right) = 0 \\
& \lambda \ge 0_{n}, \; s_{0}^{-} \ge 0_{m}, \; s_{0}^{+} \ge 0_{s}
\end{aligned}
$$

The first restriction tells us two things. First that \( \beta = \left( {1 + \frac{1}{s}\sum\nolimits_{r = 1}^{s} {{\frac{{s_{r0}^{ + } }}{{y_{r0} }}}} } \right)^{ - 1} > 0 \), which means that the objective function can be rewritten as shown below, and second, as a consequence, that all restrictions but the first can be simplified by deleting β. Therefore this nonlinear program can be rewritten as

$$
\begin{aligned}
\min_{\lambda,s_{0}^{-},s_{0}^{+}}\quad & \left(1 - \frac{1}{m}\sum_{i=1}^{m} \frac{s_{i0}^{-}}{x_{i0}}\right) \bigg/ \left(1 + \frac{1}{s}\sum_{r=1}^{s} \frac{s_{r0}^{+}}{y_{r0}}\right) \\
\text{s.t.}\quad & x_{i0} - \sum_{j=1}^{n} \lambda_{j} x_{ij} - s_{i0}^{-} \ge 0, \quad \forall i \\
& y_{r0} - \sum_{j=1}^{n} \lambda_{j} y_{rj} + s_{r0}^{+} \le 0, \quad \forall r \\
& 1 - \sum_{j=1}^{n} \lambda_{j} = 0 \\
& \lambda \ge 0_{n}, \; s_{0}^{-} \ge 0_{m}, \; s_{0}^{+} \ge 0_{s}
\end{aligned}
$$

The restrictions are exactly the restrictions of the additive program. The first two sets of restrictions can be equivalently written as equalities. Therefore, if we perform a third change of variables \( \theta_{i} = 1 - \frac{s_{i0}^{-}}{x_{i0}},\ i = 1, \ldots ,m \), \( \phi_{r} = 1 + \frac{s_{r0}^{+}}{y_{r0}},\ r = 1, \ldots ,s \), we finally get

$$
\begin{aligned}
\min_{\lambda,\theta,\phi}\quad & \frac{\frac{1}{m}\sum_{i=1}^{m} \theta_{i}}{\frac{1}{s}\sum_{r=1}^{s} \phi_{r}} \\
\text{s.t.}\quad & \sum_{j=1}^{n} \lambda_{j} x_{ij} \le \theta_{i} x_{i0}, \quad \forall i \\
& \sum_{j=1}^{n} \lambda_{j} y_{rj} \ge \phi_{r} y_{r0}, \quad \forall r \\
& \sum_{j=1}^{n} \lambda_{j} = 1 \\
& \lambda \ge 0_{n}, \; \theta \le 1_{m}, \; \phi \ge 1_{s}
\end{aligned}
$$

which is, exactly, the enhanced Russell graph program of Pastor et al. (1999), also known as the SBM (Slacks-Based Measure) (Tone 2001).

Finally, considering all the above steps, we have at optimum

$$
1 - L\left( x_{0}, y_{0}; \text{LNC}7 \right) = \frac{\frac{1}{m}\sum_{i=1}^{m} \theta_{i}^{*}}{\frac{1}{s}\sum_{r=1}^{s} \phi_{r}^{*}}
$$
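The chain of transformations in this proof also suggests a practical route to the SBM score: solve the intermediate linear program in \( (\beta, \mu, t_{0}^{-}, t_{0}^{+}) \) and read off its optimal value. The sketch below is our own illustration of that step (the variable stacking \([\beta, \mu, t^{-}, t^{+}]\), function name, and toy data are assumptions), using `scipy.optimize.linprog`.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: one input, one output, three DMUs.
X = np.array([[2.0], [4.0], [4.0]])
Y = np.array([[2.0], [4.0], [2.0]])

def sbm(X, Y, j0):
    """Enhanced Russell graph (SBM) score via the linearized program:
    variables stacked as [beta (1), mu (n), t_minus (m), t_plus (s)]."""
    n, m = X.shape
    s = Y.shape[1]
    x0, y0 = X[j0], Y[j0]
    # Objective: min beta - (1/m) sum_i t_i^- / x_i0.
    obj = np.concatenate([[1.0], np.zeros(n), -1.0 / (m * x0), np.zeros(s)])
    # Inequalities rewritten as <= 0 rows:
    #   -beta*x_i0 + sum_j mu_j x_ij + t_i^- <= 0
    #    beta*y_r0 - sum_j mu_j y_rj + t_r^+ <= 0
    A_ub = np.vstack([
        np.hstack([-x0.reshape(-1, 1), X.T, np.eye(m), np.zeros((m, s))]),
        np.hstack([y0.reshape(-1, 1), -Y.T, np.zeros((s, m)), np.eye(s)]),
    ])
    b_ub = np.zeros(m + s)
    # Equalities: beta + (1/s) sum_r t_r^+ / y_r0 = 1 ; beta - sum_j mu_j = 0.
    A_eq = np.vstack([
        np.concatenate([[1.0], np.zeros(n + m), 1.0 / (s * y0)]),
        np.concatenate([[1.0], -np.ones(n), np.zeros(m + s)]),
    ])
    b_eq = [1.0, 0.0]
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (1 + n + m + s), method="highs")
    return res.fun  # rho* = 1 - L(x0, y0; LNC7)

rho = sbm(X, Y, 2)  # evaluate the third DMU
```

For these toy data every frontier projection of the third DMU yields the same ratio, so the LP returns \( \rho^{*} = 0.5 \); undoing the change of variables \( t = \beta s \), \( \mu = \beta \lambda \) recovers the optimal slacks of the fractional SBM program.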

Cite this article

Pastor, J.T., Lovell, C.A.K. & Aparicio, J. Families of linear efficiency programs based on Debreu’s loss function. J Prod Anal 38, 109–120 (2012). https://doi.org/10.1007/s11123-011-0216-4
