
An efficiency measure satisfying the Dmitruk–Koshevoy criteria on DEA technologies

Journal of Productivity Analysis

Abstract

The purpose of this paper is to develop an efficiency measurement model by enhancing the CCR (Charnes–Cooper–Rhodes) model and then to prove that the enhanced model satisfies five desirable properties: indication, strict monotonicity, homogeneity, continuity and units invariance. To make our model empirically tractable, we also provide an algorithm for estimating efficiency scores.


Notes

  1. Färe and Lovell (1978) included another axiom. However, Russell (1985) showed that the axiom is not well-defined, and if it is modified then it is implied by the other three axioms.

References

  • Bessent A, Bessent W, Elam J, Clark T (1988) Efficiency frontier determination by constrained facet analysis. Oper Res 36(5):785–796
  • Bol G (1986) On technical efficiency measures. J Econ Theory 38:380–385
  • Chang K-P, Guh Y-Y (1991) Linear production functions and the data envelopment analysis. Eur J Oper Res 52(1):215–223
  • Charnes A, Cooper WW, Rhodes E (1978) Measuring the efficiency of decision-making units. Eur J Oper Res 2(6):429–444
  • Charnes A, Cooper WW, Thrall RM (1991) A structure for classifying and characterizing efficiency and inefficiency in data envelopment analysis. J Prod Anal 2:197–237
  • Cooper WW, Ruiz JL, Sirvent I (2007) Choosing weights from alternative optimal solutions of dual multiplier models in DEA. Eur J Oper Res 180:433–458
  • Cooper WW, Huang Z, Li S, Zhu J (2008) A response to the critiques of DEA by Dmitruk and Koshevoy, and Bol. J Prod Anal 29:15–21
  • Dmitruk V, Koshevoy GA (1991) On the existence of a technical efficiency criterion. J Econ Theory 55:121–144
  • Färe R, Lovell CAK (1978) Measuring the technical efficiency of production. J Econ Theory 19:150–162
  • Farrell MJ (1957) The measurement of productive efficiency. J R Stat Soc Ser A 120:253–261
  • Green RH, Doyle JR, Cook WD (1996) Efficient bounds in data envelopment analysis. Eur J Oper Res 89(3):482–490
  • Khachiyan L, Boros E, Borys K, Elbassioni K, Gurvich V (2008) Generating all vertices of a polyhedron is hard. Discret Comput Geom 39:174–190
  • Koopmans TC (ed) (1951) Activity analysis of production and allocation. Wiley, New York
  • Olesen O, Petersen N (1996) Indicators of ill-conditioned data sets and model misspecification in data envelopment analysis: an extended facet approach. Manage Sci 42:205–219
  • Olesen O, Petersen N (2003) Identification and use of efficient faces and facets in DEA. J Prod Anal 20:323–360
  • Russell RR (1985) Measures of technical efficiency. J Econ Theory 35:109–126
  • Russell RR (1988) On the axiomatic approach to the measurement of technical efficiency. In: Eichhorn W (ed) Measurement in economics: theory and application of economic indices. Physica-Verlag, Heidelberg, pp 207–220
  • Russell RR (1990) Continuity of measures of technical efficiency. J Econ Theory 51:255–267
  • Russell RR, Schworm W (2006) Efficiency measurement on convex polyhedral (DEA) technologies: an axiomatic approach. http://www.economics.adelaide.edu.au/workshops/doc/russell1.pdf
  • Russell RR, Schworm W (2009) Axiomatic foundations of efficiency measurement on data-generated technologies. J Prod Anal 31:77–86
  • Schrijver A (1986) Theory of linear and integer programming. Wiley, Chichester
  • Shephard RW (1970) Theory of cost and production functions. Princeton University Press, Princeton


Acknowledgments

We are grateful to three anonymous reviewers for their valuable comments. This research was partially supported by the Ministry of Education, Culture, Sports, Science and Technology of Japan (Grant numbers 23510165 and 22310092) and by the Japan Society for the Promotion of Science.

Corresponding author

Correspondence to Hirofumi Fukuyama.

Appendix

Lemma 1

\( {\mathbf{x}} \in Eff\left( {T\left( {{\hat{\mathbf{y}}}} \right)} \right) \Leftrightarrow \) there exists a pair (v, w) such that \( {\mathbf{v}} > {\mathbf{0}},\;{\mathbf{w}} \ge {\mathbf{0}},\;{\mathbf{vx}} = {\mathbf{w}\hat{\mathbf{y}}} \) and vx_j ≥ wy_j for all j = 1, 2, …, J.

Proof

Note that \( {\mathbf{x}} \in Eff\left( {T\left( {{\hat{\mathbf{y}}}} \right)} \right) \Leftrightarrow ``{\mathbf{x}}^{\prime } \le {\mathbf{x}}\,{\text{and}}\,{\mathbf{x}}^{\prime } \ne {\mathbf{x}}\quad {\text{imply}}\,{\text{that}}\,{\mathbf{x}}^{\prime } \notin T\left( {{\hat{\mathbf{y}}}} \right)'' \). Hence \( {\mathbf{x}} \in Eff\left( {T\left( {{\hat{\mathbf{y}}}} \right)} \right) \) if and only if the following system of linear inequalities has no solution:

$$ \begin{gathered} \sum\limits_{j = 1}^{J} {{\mathbf{x}}_{j} \lambda_{j} } \le \mu {\mathbf{x}},\quad \sum\limits_{j = 1}^{J} {{\mathbf{x}}_{j} \lambda_{j} } \ne \mu {\mathbf{x}}, \hfill \\ \sum\limits_{j = 1}^{J} {{\mathbf{y}}_{j} \lambda_{j} } \ge \mu {\hat{\mathbf{y}}}, \hfill \\ \lambda_{j} \ge 0,\quad \forall j = 1, \ldots ,J, \hfill \\ \mu > 0. \hfill \\ \end{gathered} $$
(28)

From Slater’s theorem of the alternative, a linear inequalities system

$$ \begin{gathered} - {\mathbf{vx}}_{j} + {\mathbf{wy}}_{j} \le 0,\quad \forall j = 1, \ldots ,J \hfill \\ {\mathbf{vx}} - {\mathbf{wy}} + \alpha = 0, \hfill \\ \end{gathered} $$
(29)

has a solution with either v > 0, w ≥ 0, α = 0 or v ≥ 0, w ≥ 0, α > 0. Since \( {\mathbf{x}} \in T\left( {{\hat{\mathbf{y}}}} \right) \), it follows that \( {\mathbf{vx}} - {\mathbf{w}\hat{\mathbf{y}}} = 0 \) and hence α = 0. Therefore, the above system (29) has a solution with v > 0, w ≥ 0, α = 0, but not one with v ≥ 0, w ≥ 0, α > 0.

Conversely, assume that v > 0, w ≥ 0, α = 0 satisfies the system (29). It follows from Slater’s theorem of the alternative that the system (28) has no solution. Therefore, we see from \( {\mathbf{x}} \in T\left( {{\hat{\mathbf{y}}}} \right) \) that \( {\mathbf{x}} \in Eff\left( {T\left( {{\hat{\mathbf{y}}}} \right)} \right). \) \( \square \)
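Lemma 1 turns membership of Eff(T(ŷ)) into the existence of a price certificate (v, w), which can be searched for by a small LP. The sketch below is illustrative, not the paper’s algorithm: it assumes SciPy, hypothetical data, the helper name `is_efficient`, and the normalisation ∑v + ∑w = 1; it maximises t = min_n v_n over certificates, so x is efficient exactly when the LP is feasible with optimal t > 0.

```python
import numpy as np
from scipy.optimize import linprog

def is_efficient(x, X, Y, y_hat):
    """Certificate LP from Lemma 1 (a numerical sketch): x is in
    Eff(T(y_hat)) iff there exist v > 0, w >= 0 with v.x = w.y_hat and
    v.x_j >= w.y_j for every unit j.  We normalise sum(v) + sum(w) = 1
    and maximise t = min_n v_n; efficiency <=> feasible with t > 0."""
    N, J = X.shape                          # inputs x_j are columns of X
    M = Y.shape[0]                          # outputs y_j are columns of Y
    n_var = N + M + 1                       # variables: v (N), w (M), t
    c = np.zeros(n_var); c[-1] = -1.0       # maximise t
    # inequalities:  -v.x_j + w.y_j <= 0  and  t - v_n <= 0
    A_ub = np.zeros((J + N, n_var))
    A_ub[:J, :N] = -X.T
    A_ub[:J, N:N + M] = Y.T
    A_ub[J:, :N] = -np.eye(N)
    A_ub[J:, -1] = 1.0
    b_ub = np.zeros(J + N)
    # equalities:  v.x - w.y_hat = 0  and  sum(v) + sum(w) = 1
    A_eq = np.zeros((2, n_var))
    A_eq[0, :N] = x
    A_eq[0, N:N + M] = -y_hat
    A_eq[1, :N + M] = 1.0
    b_eq = np.array([0.0, 1.0])
    res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds=[(0, None)] * n_var)
    return res.status == 0 and -res.fun > 1e-9

# Two inputs, one output, three units (hypothetical data):
X = np.array([[1.0, 2.0, 2.0],
              [2.0, 1.0, 2.0]])
Y = np.array([[1.0, 1.0, 1.0]])
y_hat = np.array([1.0])
print(is_efficient(X[:, 0], X, Y, y_hat))  # (1,2) is efficient: True
print(is_efficient(X[:, 2], X, Y, y_hat))  # (2,2) is dominated: False
```

For the dominated unit (2, 2) the certificate LP is in fact infeasible, since v·x = w·ŷ together with vx_j ≥ wy_j forces v = 0, w = 0.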

Lemma 2

\( \left( {{\mathbf{x}},{\mathbf{y}}} \right) \in Eff\left( T \right) \Leftrightarrow \) there exists a pair (v, w) such that v > 0, w > 0, vx = wy and vx_j ≥ wy_j for all j = 1, 2, …, J.

Proof

The proof is analogous to that of Lemma 1. \( \square \)

Lemma 3

For any face F of T, the set \( \left\{ {{\mathbf{x}}\left| {\left( {{\mathbf{x}},{\hat{\mathbf{y}}}} \right) \in F,{\mathbf{x}} \in T\left( {{\hat{\mathbf{y}}}} \right)} \right.} \right\} \) is a face of \( T\left( {{\hat{\mathbf{y}}}} \right) \). For any face G of \( T\left( {{\hat{\mathbf{y}}}} \right) \), there exists a face F of T such that \( \left\{ {\left( {{\mathbf{x}},{\hat{\mathbf{y}}}} \right)\left| {{\mathbf{x}} \in G} \right.} \right\} \subseteq F \).

Proof

This follows directly from the two definitions of a face, (6) and (7). \( \square \)

Lemma 8

If 1 ≥ ε1 ≥ ε2 ≥ 0, then we have

$$ {\mathbf{I}}^{N} (\varepsilon_{2} ){\mathbf{s}}_{{}}^{x} \ge {\mathbf{0}}\, \Rightarrow \,{\mathbf{I}}^{N} (\varepsilon_{1} ){\mathbf{s}}^{x} \ge {\mathbf{0}}. $$
(30)

Proof

For an arbitrary positive number ε ≤ 1, I^N(ε) is a square matrix whose maximum eigenvalue is 1 + (N − 1)ε, and e^N, the N-dimensional vector of ones, is a corresponding eigenvector. Denoting the N × N matrix of ones by E, we have

$$ {\mathbf{EI}}^{N} (\varepsilon ) = (1 + (N - 1)\varepsilon ){\mathbf{E}}. $$
(31)

Consider first the case where 0 < ε2 ≤ ε1 = 1. We multiply both sides of (31) by s^x from the right and utilize I^N(ε1) = I^N(1) = E to obtain

$$ {\mathbf{EI}}^{N} (\varepsilon_{2} ){\mathbf{s}}^{x} = (1 + (N - 1)\varepsilon_{2} ){\mathbf{Es}}^{x} = (1 + (N - 1)\varepsilon_{2} ){\mathbf{I}}^{N} (\varepsilon_{1} ){\mathbf{s}}^{x} . $$

If I^N(ε2)s^x ≥ 0, then EI^N(ε2)s^x ≥ 0. This result along with (1 + (N − 1)ε2) > 0 yields I^N(ε1)s^x ≥ 0.

Now consider the case where 1 > ε1 ≥ ε2 ≥ 0. Let I be an N by N identity matrix. For any ε ∈ (0,1), we have

$$ \frac{1}{1 - \varepsilon }\left( {{\mathbf{E}} - {\mathbf{I}}^{N} (\varepsilon )} \right) = {\mathbf{E}} - {\mathbf{I}}. $$
(32)

Applying (32) twice to I^N(ε2) = I + ε2(E − I), we obtain

$$ {\mathbf{I}}^{N} (\varepsilon_{2} ) = E - \frac{1}{{1 - \varepsilon_{1} }}\left( {{\mathbf{E}} - {\mathbf{I}}^{N} \left( {\varepsilon_{1} } \right)} \right) + \frac{{\varepsilon_{2} }}{{1 - \varepsilon_{1} }}\left( {{\mathbf{E}} - {\mathbf{I}}^{N} \left( {\varepsilon_{1} } \right)} \right), $$

which yields

$$ {\mathbf{I}}^{N} (\varepsilon_{2} ) = \frac{{\varepsilon_{2} - \varepsilon_{1} }}{{1 - \varepsilon_{1} }}{\mathbf{E}} + \frac{{1 - \varepsilon_{2} }}{{1 - \varepsilon_{1} }}{\mathbf{I}}^{N} \left( {\varepsilon_{1} } \right). $$

A simple manipulation along with the use of (31) yields

$$ {\mathbf{I}}^{N} (\varepsilon_{2} ) = \frac{1}{{1 - \varepsilon_{1} }}\left( {\left( {1 - \varepsilon_{2} } \right){\mathbf{I}} - \frac{{\varepsilon_{1} - \varepsilon_{2} }}{{1 + \left( {N - 1} \right)\varepsilon_{1} }}{\mathbf{E}}} \right){\mathbf{I}}^{N} (\varepsilon_{1} ) . $$
(33)

Since the maximum eigenvalue of matrix E is N, we have

$$ 1 - \varepsilon_{2} - \frac{{\varepsilon_{1} - \varepsilon_{2} }}{{1 + \left( {N - 1} \right)\varepsilon_{1} }}N \ge 1 - \varepsilon_{2} - \frac{{\varepsilon_{1} - \varepsilon_{2} }}{{\varepsilon_{1} }} = \frac{{\varepsilon_{2} }}{{\varepsilon_{1} }} - \varepsilon_{2} = \varepsilon_{2} \left( {\frac{1}{{\varepsilon_{1} }} - 1} \right) > 0. $$

The maximum eigenvalue of the matrix \( \frac{{\varepsilon_{1} - \varepsilon_{2} }}{{1 + \left( {N - 1} \right)\varepsilon_{1} }}{\mathbf{E}} \) is therefore less than \( 1 - \varepsilon_{2} \). This means that the inverse of the matrix \( \left( {\left( {1 - \varepsilon_{2} } \right){\mathbf{I}} - \frac{{\varepsilon_{1} - \varepsilon_{2} }}{{1 + \left( {N - 1} \right)\varepsilon_{1} }}{\mathbf{E}}} \right) \) in Eq. (33) is a positive matrix. Multiplying both sides of Eq. (33) by s^x from the right and by \( \left( {\left( {1 - \varepsilon_{2} } \right){\mathbf{I}} - \frac{{\varepsilon_{1} - \varepsilon_{2} }}{{1 + \left( {N - 1} \right)\varepsilon_{1} }}{\mathbf{E}}} \right)^{ - 1} \) from the left, we obtain

$$ \left[ {\left( {1 - \varepsilon_{2} } \right){\mathbf{I}} - \frac{{\varepsilon_{1} - \varepsilon_{2} }}{{1 + \left( {N - 1} \right)\varepsilon_{1} }}{\mathbf{E}}} \right]^{ - 1} {\mathbf{I}}^{N} \left( {\varepsilon_{2} } \right){\mathbf{s}}^{x} = \frac{1}{{1 - \varepsilon_{1} }}{\mathbf{I}}^{N} \left( {\varepsilon_{1} } \right){\mathbf{s}}^{x} . $$
(34)

If \( {\mathbf{I}}^{N} (\varepsilon_{2} ){\mathbf{s}}^{x} \ge {\mathbf{0}} \), then \( {\mathbf{I}}^{N} (\varepsilon_{1} ){\mathbf{s}}^{x} \ge {\mathbf{0}} \) due to the fact that \( \left( {\left( {1 - \varepsilon_{2} } \right){\mathbf{I}} - \frac{{\varepsilon_{1} - \varepsilon_{2} }}{{1 + \left( {N - 1} \right)\varepsilon_{1} }}{\mathbf{E}}} \right)^{ - 1} \) is a positive matrix and 1 – ε1 > 0. □
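Identity (31), decomposition (33) and the positivity of the inverse matrix used above can all be checked numerically. A minimal sketch (NumPy assumed; N, ε1, ε2 are arbitrary test values, and I^N(ε) = I + ε(E − I), i.e. ones on the diagonal and ε off it):

```python
import numpy as np

# Numerical sanity check of Lemma 8, assuming I^N(eps) = I + eps*(E - I).
def I_N(N, eps):
    return np.eye(N) + eps * (np.ones((N, N)) - np.eye(N))

N, eps1, eps2 = 5, 0.6, 0.2                  # 1 > eps1 >= eps2 >= 0
I, E = np.eye(N), np.ones((N, N))
# identity (31): E @ I^N(eps) = (1 + (N-1) eps) E
print(np.allclose(E @ I_N(N, eps1), (1 + (N - 1) * eps1) * E))   # True
# decomposition (33)
A = (1 - eps2) * I - (eps1 - eps2) / (1 + (N - 1) * eps1) * E
print(np.allclose(I_N(N, eps2), A @ I_N(N, eps1) / (1 - eps1)))  # True
# the inverse of A is a positive matrix, which drives implication (30)
print((np.linalg.inv(A) > 0).all())                              # True
# implication (30): I^N(eps2) s >= 0  implies  I^N(eps1) s >= 0
rng = np.random.default_rng(0)
for _ in range(1000):
    s = rng.uniform(-1, 1, N)
    if (I_N(N, eps2) @ s >= 0).all():
        assert (I_N(N, eps1) @ s >= 0).all()
```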

Lemma 9

For any ε ∈ [0,1], we have

$$ T\left( {{\hat{\mathbf{y}}}} \right) \subseteq \left\{ {{\mathbf{x}} + \, {\mathbf{s}}^{x} \left| {{\mathbf{x}} \in {\text{Eff}}\left( {T\left( {{\hat{\mathbf{y}}}} \right)} \right) , { }I^{N} \left( \varepsilon \right){\mathbf{s}}^{x} \ge {\mathbf{0}}} \right.} \right\} $$
(35)

where the equality (=) holds if and only if ε = 0.

Proof

For any \( {\mathbf{x}} \in T\left( {{\hat{\mathbf{y}}}} \right) \) there exists \( {\mathbf{x}}^{*} \in {\text{Eff}}\left( {T\left( {{\hat{\mathbf{y}}}} \right)} \right) \) such that x = x* + s x for some s x ≥ 0. Therefore, we have \( T\left( {{\hat{\mathbf{y}}}} \right) = \left\{ {{\mathbf{x}} + \, {\mathbf{s}}^{x} \left| {{\mathbf{x}} \in Eff\left( {T\left( {{\hat{\mathbf{y}}}} \right)} \right) , { }I^{N} \left( 0 \right){\mathbf{s}}^{x} \ge {\mathbf{0}}} \right.} \right\} \). Lemma 8 implies that

$$ \begin{aligned} T\left( {{\hat{\mathbf{y}}}} \right) & = \left\{ {{\mathbf{x}} + \, {\mathbf{s}}^{x} \left| {{\mathbf{x}} \in Eff\left( {T\left( {{\hat{\mathbf{y}}}} \right)} \right) ,\, \, I^{N} \left( 0 \right){\mathbf{s}}^{x} \ge {\mathbf{0}}} \right.} \right\} \\ & \subseteq \left\{ {{\mathbf{x}} + \, {\mathbf{s}}^{x} \left| {{\mathbf{x}} \in Eff\left( {T\left( {{\hat{\mathbf{y}}}} \right)} \right) , { }I^{N} \left( \varepsilon \right){\mathbf{s}}^{x} \ge {\mathbf{0}}} \right.} \right\} \\ \end{aligned} $$

for all ε ∈ [0,1]. \( \square \)

Lemma 10

We have

$$ 0 < \mathop {\min }\limits_{{f = 1, \ldots ,f_{2} }} \,\mathop {\max }\limits_{{\left( {{\mathbf{v}},{\mathbf{w}}} \right) \in VW\left( {\mathbb{F}\left( {{\mathbf{v}}^{f} ,{\mathbf{w}}^{f} } \right)} \right)}} \quad \min \left\{ {v_{n} \left| {n = 1, \ldots ,N} \right.} \right\}. $$

Proof

From the choice of \( \left( {{\mathbf{v}}^{f} ,{\mathbf{w}}^{f} } \right) \in \Re_{ + + }^{N} \times \Re_{ + }^{M} \) for all f = 1, …, f 2 we have \( v_{n}^{f} > 0\quad \forall n = 1, \ldots ,N\;{\text{and}}\quad \forall f = 1, \ldots ,f_{2} \). This means that for all f = 1, …, f 2

$$ 0 < \min \left\{ {v_{1}^{f} , \ldots ,v_{N}^{f} } \right\} \le \mathop {\max }\limits_{{\left( {{\mathbf{v}},{\mathbf{w}}} \right) \in VW\left( {\mathbb{F}\left( {{\mathbf{v}}^{f} ,{\mathbf{w}}^{f} } \right)} \right)}} \,\min \left\{ {v_{n} \left| {n = 1, \ldots ,N} \right.} \right\}. $$

\( \square \)

Lemma 11

Let η* be the optimal value of (19) and let \( \bar{\varepsilon } = 1/\left( {\tfrac{1}{{\eta^{*} }} - N + 1} \right) \), then the set \( Q\left( \varepsilon \right) \equiv \left\{ {{\mathbf{x}} + {\mathbf{s}}^{x} \left| {{\mathbf{x}} \in Eff\left( {T\left( {{\hat{\mathbf{y}}}} \right)} \right) , { }{\mathbf{I}}^{N} \left( \varepsilon \right){\mathbf{s}}^{x} \ge {\mathbf{0}}} \right.} \right\} \) satisfies [Q1]–[Q5] for any \( \varepsilon \in (0,\bar{\varepsilon }] \).

Proof

For any \( \varepsilon \in (0,\bar{\varepsilon }] \) we need to prove the following properties:

$$ \begin{array}{*{20}c} {[Q1]} \hfill & {Q\left( \varepsilon \right)\;{\text{is closed}}.} \hfill \\ {[Q2]} \hfill & {Q\left( \varepsilon \right) + R_{ + }^{N} \subseteq Q\left( \varepsilon \right)} \hfill \\ {[Q3]} \hfill & {Isoq\left( {Q\left( \varepsilon \right)} \right) = Eff\left( {Q\left( \varepsilon \right)} \right)} \hfill \\ {[Q4]} \hfill & {T\left( {{\hat{\mathbf{y}}}} \right) \subseteq \,Q\left( \varepsilon \right)} \hfill \\ {[Q5]} \hfill & {Eff\left( {T\left( {{\hat{\mathbf{y}}}} \right)} \right) \subseteq \,Eff\left( {\,Q\left( \varepsilon \right)} \right)} \hfill \\ \end{array} $$

For any ε ∈ (0,1] Property [Q1] trivially holds. For any ε ∈ (0,1], Properties [Q2] and [Q4] follow directly from Lemma 8.

By Slater’s theorem of the alternative, we obtain the following condition equivalent to Property [Q3]: the optimal value of (20) for \( {\hat{\mathbf{x}}} \in Q\left( \varepsilon \right) \) is 1 if and only if there exists a triple (v, w, u) such that \( {\mathbf{v}} > {\mathbf{0}},\,{\mathbf{w}} \ge {\mathbf{0}},\,{\mathbf{u}} \ge {\mathbf{0}},\,{\mathbf{v}\hat{\mathbf{x}}} = {\mathbf{w}\hat{\mathbf{y}}},\,{\mathbf{v}} = {\mathbf{u}}I^{N} \left( \varepsilon \right) \) and vx_j ≥ wy_j for all j = 1, 2, …, J.

Choose any ε ∈ (0,1] arbitrarily. Assume that \( {\hat{\mathbf{x}}} \in Isoq\left( {Q\left( \varepsilon \right)} \right) \); equivalently, \( {\hat{\mathbf{x}}} \) satisfies \( 1 = \min \left\{ {\theta \left| {\theta {\hat{\mathbf{x}}} \in Q\left( \varepsilon \right)} \right.} \right\} \). The dual problem of \( \min \left\{ {\theta \left| {\theta {\hat{\mathbf{x}}} \in Q\left( \varepsilon \right)} \right.} \right\} \) is

$$ \max \left\{ {{\mathbf{w}\hat{\mathbf{y}}}\left| {{\mathbf{vx}}_{j} - {\mathbf{wy}}_{j} \ge 0\quad \forall j = 1, \ldots ,J,\;{\mathbf{v}\hat{\mathbf{x}}} = 1,\;{\mathbf{v}} = {\mathbf{u}}I^{N} \left( \varepsilon \right),\;{\mathbf{v}},{\mathbf{w}},{\mathbf{u}} \ge {\mathbf{0}}} \right.} \right\}. $$
(36)

By the duality theorem of LP, the dual problem (36) has an optimal solution (v *, w *, u *) and its optimal value is \( 1 = {\mathbf{w}}^{*} {\hat{\mathbf{y}}} \). Suppose that u * = 0, then v * = u * I N(ε) = 0. This contradicts the supposition \( {\mathbf{v}}^{*} {\hat{\mathbf{x}}} = 1 \). We have u * ≥ 0 and u * ≠ 0. It follows from ε ∈ (0,1] that v * = u * I N(ε) > 0.

Conversely, assume that there exists a triple (v, w, u) such that v > 0, w ≥ 0, u ≥ 0, \( {\mathbf{v}\hat{\mathbf{x}}} = {\mathbf{w}\hat{\mathbf{y}}},\,{\mathbf{v}} = {\mathbf{u}}I^{N} \left( \varepsilon \right) \) and vx_j ≥ wy_j for all j = 1, 2, …, J. Then the dual problem (36) has an optimal solution and its optimal value is 1. By the duality theorem of LP, we have \( 1 = \min \left\{ {\theta \left| {\theta {\hat{\mathbf{x}}} \in Q\left( \varepsilon \right)} \right.} \right\} \) and hence \( {\hat{\mathbf{x}}} \in Isoq\left( {Q\left( \varepsilon \right)} \right) \).
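The duality argument can be illustrated by solving (36) directly on a small instance (SciPy assumed; the three-unit data, ε = 0.1 and the choice of x̂ are hypothetical). For this instance x̂ = (1, 2) lies on Eff(T(ŷ)) and remains efficient for Q(ε), so the optimal value w·ŷ of (36) comes out as 1:

```python
import numpy as np
from scipy.optimize import linprog

# A sketch of the dual problem (36) with hypothetical data:
# two inputs, one output, three units, eps = 0.1.
X = np.array([[1.0, 2.0, 2.0],
              [2.0, 1.0, 2.0]])          # inputs x_j as columns
Y = np.array([[1.0, 1.0, 1.0]])          # outputs y_j as columns
y_hat = np.array([1.0])
x_hat = X[:, 0]                          # an efficient input vector
N, J = X.shape
M = Y.shape[0]
eps = 0.1
I_N = np.eye(N) + eps * (np.ones((N, N)) - np.eye(N))

nv = N + M + N                           # variables: v, w, u
c = np.zeros(nv); c[N:N + M] = -y_hat    # maximise w.y_hat
A_ub = np.hstack([-X.T, Y.T, np.zeros((J, N))])   # -v.x_j + w.y_j <= 0
b_ub = np.zeros(J)
A_eq = np.zeros((1 + N, nv))
A_eq[0, :N] = x_hat                      # v.x_hat = 1
A_eq[1:, :N] = np.eye(N)                 # v - u I^N(eps) = 0
A_eq[1:, N + M:] = -I_N.T
b_eq = np.zeros(1 + N); b_eq[0] = 1.0
res = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds=[(0, None)] * nv)
print(round(-res.fun, 6))                # 1.0
```

Changing x̂ to the dominated unit (2, 2) makes the constraint v = u·I^N(ε) with v > 0 unattainable at value 1, in line with the equivalence proved above.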

Let \( \bar{\varepsilon } = 1/\left( {\tfrac{1}{{\eta^{*} }} - N + 1} \right) \) and choose any \( \varepsilon \in (0,\bar{\varepsilon }] \) arbitrarily; then we will show Property [Q5] via the following assertions:

Assertion A

Let E be an N × N matrix whose components are all one; then \( \left( {{\mathbf{I}}^{N} \left( \varepsilon \right)} \right)^{ - 1} = \frac{1}{1 - \varepsilon }\left( {{\mathbf{I}}^{N} \left( 0 \right) - \frac{\varepsilon }{{1 + \left( {N - 1} \right)\varepsilon }}{\mathbf{E}}} \right) \), where \( 0 < \varepsilon \le \bar{\varepsilon } < 1 \).

Proof

For any ε ∈ (0,1), we have

$$ \begin{aligned} \frac{1}{1 - \varepsilon }\left( {{\mathbf{I}}^{N} \left( 0 \right) - \frac{\varepsilon }{{1 + \left( {N - 1} \right)\varepsilon }}{\mathbf{E}}} \right){\mathbf{I}}^{N} \left( \varepsilon \right) & = \frac{1}{1 - \varepsilon }\left( {{\mathbf{I}}^{N} \left( 0 \right) - \frac{\varepsilon }{{1 + \left( {N - 1} \right)\varepsilon }}{\mathbf{E}}} \right)\left( {\left( {1 - \varepsilon } \right){\mathbf{I}}^{N} \left( 0 \right) + \varepsilon {\mathbf{E}}} \right) \\ & = \frac{1}{1 - \varepsilon }\left( {\left( {1 - \varepsilon } \right){\mathbf{I}}^{N} \left( 0 \right) - \frac{{\varepsilon \left( {1 - \varepsilon } \right)}}{{1 + \left( {N - 1} \right)\varepsilon }}{\mathbf{E}} + \varepsilon {\mathbf{E}} - \frac{{N\varepsilon^{2} }}{{1 + \left( {N - 1} \right)\varepsilon }}{\mathbf{E}}} \right) \\ & = \frac{1}{1 - \varepsilon }\left( {\left( {1 - \varepsilon } \right){\mathbf{I}}^{N} \left( 0 \right) - \frac{{\varepsilon \left( {1 + \left( {N - 1} \right)\varepsilon } \right)}}{{1 + \left( {N - 1} \right)\varepsilon }}{\mathbf{E}} + \varepsilon {\mathbf{E}}} \right) \\ & = \frac{1}{1 - \varepsilon }\left( {\left( {1 - \varepsilon } \right){\mathbf{I}}^{N} \left( 0 \right) - \varepsilon {\mathbf{E}} + \varepsilon {\mathbf{E}}} \right) = \frac{1}{1 - \varepsilon }\left( {1 - \varepsilon } \right){\mathbf{I}}^{N} \left( 0 \right) = {\mathbf{I}}^{N} \left( 0 \right). \\ \end{aligned} $$
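Assertion A’s closed form can likewise be checked against a numerical inverse (a sketch with arbitrary test values N and ε, NumPy assumed; recall I^N(0) = I):

```python
import numpy as np

# Check of Assertion A: (I^N(eps))^{-1} = (I - eps/(1+(N-1)eps) E)/(1-eps)
# for the matrix I^N(eps) = I + eps*(E - I) and any 0 < eps < 1.
def I_N(N, eps):
    return np.eye(N) + eps * (np.ones((N, N)) - np.eye(N))

N, eps = 6, 0.45
I, E = np.eye(N), np.ones((N, N))
inv_formula = (I - eps / (1 + (N - 1) * eps) * E) / (1 - eps)
print(np.allclose(inv_formula, np.linalg.inv(I_N(N, eps))))  # True
```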

Assertion B

For every f = 1, …, f_2, each problem (36) for any \( {\hat{\mathbf{x}}} \in \hat{\mathbb{F}}\left( {{\mathbf{v}}^{f} ,{\mathbf{w}}^{f} } \right) \) has an optimal solution attaining the optimal objective function value 1.

Proof

Choose f ∈ {1, …, f_2} arbitrarily and let \( \left( {{\bar{\mathbf{v}}}^{f} ,{\bar{\mathbf{w}}}^{f} } \right) \) be an optimal solution of \( \mathop {\max }\limits_{{\left( {{\mathbf{v}},{\mathbf{w}}} \right) \in VW\left( {\mathbb{F}\left( {{\mathbf{v}}^{f} ,{\mathbf{w}}^{f} } \right)} \right)}} \;\;\min \left\{ {v_{n} \left| {n = 1, \ldots ,N} \right.} \right\} \); then it follows from Lemma 10 that

$$ \bar{v}_{n}^{f} \ge \eta^{*} > 0\quad {\text{for}}\,{\text{all}}\,n = 1, \ldots ,N $$
(37)
$$ \sum\limits_{n = 1}^{N} {\bar{v}_{n}^{f} } = 1 $$
(38)
$$ {\bar{\mathbf{v}}}^{f} {\mathbf{x}}_{j} - {\bar{\mathbf{w}}}^{f} {\mathbf{y}}_{j} \ge 0\quad \forall j = 1, \ldots ,J $$
(39)
$$ {\bar{\mathbf{v}}}^{f} {\hat{\mathbf{x}}} = {\bar{\mathbf{w}}}^{f} {\hat{\mathbf{y}}} $$
(40)
$$ {\bar{\mathbf{w}}}^{f} \ge {\mathbf{0}}. $$
(41)

Let \( \hat{v}_{n}^{f} \equiv \bar{v}_{n}^{f} \big/ \sum\nolimits_{l = 1}^{N} {\bar{v}_{l}^{f} \hat{x}_{l} } \) for all n = 1, …, N and let \( \hat{w}_{m}^{f} \equiv \bar{w}_{m}^{f} \big/ \sum\nolimits_{l = 1}^{N} {\bar{v}_{l}^{f} \hat{x}_{l} } \) for all m = 1, …, M; then

$$ \hat{v}_{n}^{f} \ge 0\quad {\text{for}}\,{\text{all}}\,n = 1, \ldots ,N $$
(42)
$$ \sum\limits_{n = 1}^{N} {\hat{v}_{n}^{f} } \hat{x}_{n} = 1 $$
(43)
$$ {\hat{\mathbf{v}}}^{f} {\mathbf{x}}_{j} - {\hat{\mathbf{w}}}^{f} {\mathbf{y}}_{j} \ge 0\quad \forall j = 1, \ldots ,J $$
(44)
$$ {\hat{\mathbf{v}}}^{f} {\hat{\mathbf{x}}} = 1,\quad {\hat{\mathbf{w}}}^{f} {\hat{\mathbf{y}}} = 1 $$
(45)
$$ {\hat{\mathbf{w}}}^{f} \ge {\mathbf{0}} . $$
(46)

By the definition of \( \bar{\varepsilon } \) we have \( \bar{\varepsilon } > 0 \). If we find a vector \( {\hat{\mathbf{u}}}^{f} \ge {\mathbf{0}} \) such that \( {\hat{\mathbf{v}}}^{f} = {\hat{\mathbf{u}}}^{f} I^{N} \left( \varepsilon \right) \), then the proof of Assertion B is complete. Hereafter, we discuss the existence of such a \( {\hat{\mathbf{u}}}^{f} \).

Firstly, we consider the case of \( \eta^{*} < 1/N \), equivalently \( \bar{\varepsilon } < 1 \); then we have 0 < ε < 1. Let e be the N-dimensional vector whose components are all one; then it follows from Assertion A, \( 1/\varepsilon \ge 1/\bar{\varepsilon } \), \( \eta^{*} = 1/\left( {\tfrac{1}{{\bar{\varepsilon }}} + N - 1} \right) \) and (37) that

$$ \begin{aligned} {\hat{\mathbf{v}}}^{f} \left( {I^{N} \left( \varepsilon \right)} \right)^{ - 1} & = \frac{1}{1 - \varepsilon }{\hat{\mathbf{v}}}^{f} \left( {{\mathbf{I}}^{N} \left( 0 \right) - \frac{\varepsilon }{{1 + \left( {N - 1} \right)\varepsilon }}{\mathbf{E}}} \right) = \frac{1}{{\left( {1 - \varepsilon } \right)\sum\nolimits_{n = 1}^{N} {\bar{v}_{n}^{f} } }}\left( {{\bar{\mathbf{v}}}^{f} - \frac{\varepsilon }{{1 + \left( {N - 1} \right)\varepsilon }}{\mathbf{e}}} \right) \\ & = \frac{1}{{\left( {1 - \varepsilon } \right)\sum\nolimits_{n = 1}^{N} {\bar{v}_{n}^{f} } }}\left( {{\bar{\mathbf{v}}}^{f} - \frac{1}{{{1 \mathord{\left/ {\vphantom {1 \varepsilon }} \right. \kern-\nulldelimiterspace} \varepsilon } + \left( {N - 1} \right)}}{\mathbf{e}}} \right) \\ & \ge \frac{1}{{\left( {1 - \varepsilon } \right)\sum\nolimits_{n = 1}^{N} {\bar{v}_{n}^{f} } }}\left( {{\bar{\mathbf{v}}}^{f} - \frac{1}{{{1 \mathord{\left/ {\vphantom {1 {\bar{\varepsilon }}}} \right. \kern-\nulldelimiterspace} {\bar{\varepsilon }}} + \left( {N - 1} \right)}}{\mathbf{e}}} \right) = \frac{1}{{\left( {1 - \varepsilon } \right)\sum\nolimits_{n = 1}^{N} {\bar{v}_{n}^{f} } }}\left( {{\bar{\mathbf{v}}}^{f} - \eta^{*} {\mathbf{e}}} \right) \ge {\mathbf{0}}. \\ \end{aligned} $$

In the case of \( \eta^{*} < 1/N \) we have \( {\hat{\mathbf{u}}}^{f} = {\hat{\mathbf{v}}}^{f} \left( {I^{N} \left( \varepsilon \right)} \right)^{ - 1} \ge {\mathbf{0}} \) and hence the dual problem (36) has an optimal solution \( \left( {{\hat{\mathbf{v}}}^{f} ,{\hat{\mathbf{w}}}^{f} ,{\hat{\mathbf{u}}}^{f} } \right) \) and its optimal value is 1.

In the case of \( \eta^{*} = 1/N \) we have \( \bar{\varepsilon } = 1 \). When \( \varepsilon < \bar{\varepsilon } \), as stated above, the dual problem (36) has an optimal solution \( \left( {{\hat{\mathbf{v}}}^{f} ,{\hat{\mathbf{w}}}^{f} ,{\hat{\mathbf{u}}}^{f} } \right) \) and the optimal value 1. Otherwise, \( \varepsilon = \bar{\varepsilon } = 1 \), and it follows from (37) and (38) that \( \bar{v}_{n}^{f} = \eta^{*} = 1/N \) for all n = 1, …, N. Hence, we have \( \hat{v}_{n}^{f} = \bar{v}_{n}^{f} \big/ \sum\nolimits_{l = 1}^{N} {\bar{v}_{l}^{f} \hat{x}_{l} } = 1\big/ \sum\nolimits_{l = 1}^{N} {\hat{x}_{l} } \) for all n = 1, …, N. Let \( \hat{u}_{n}^{f} = 1\big/ \left( {N\sum\nolimits_{l = 1}^{N} {\hat{x}_{l} } } \right) \) for all n = 1, …, N; then we have \( {\hat{\mathbf{u}}}^{f} \ge {\mathbf{0}} \) and

$$ {\hat{\mathbf{u}}}^{f} I^{N} \left( {\bar{\varepsilon }} \right) = {\hat{\mathbf{u}}}^{f} I^{N} \left( 1 \right) = {\hat{\mathbf{u}}}^{f} E = \frac{1}{{\sum\nolimits_{n = 1}^{N} {\hat{x}_{n}^{{}} } }}{\mathbf{e}} = {\hat{\mathbf{v}}}^{f} . $$

This means that the dual problem (36) for \( \varepsilon = \bar{\varepsilon } = 1 \) has an optimal solution \( ({\hat{\mathbf{v}}}^{f} ,{\hat{\mathbf{w}}}^{f} ,{\hat{\mathbf{u}}}^{f} ) \) and its optimal value is 1. □

Since \( {\hat{\mathbf{x}}} \in Eff (Q(\varepsilon ) ) \) is equivalent to the existence of an optimal solution of (36) attaining the optimal value 1, it follows from Assertion B that the face \( \hat{\mathbb{F}}({\mathbf{v}}^{f} ,{\mathbf{w}}^{f} ) \) of \( T({\hat{\mathbf{y}}}) \) is included in \( Eff (Q(\varepsilon ) ) \) for all f = 1, …, f_2, and it follows from (11) that Property [Q5] is valid for the fixed \( \varepsilon \in (0,\bar{\varepsilon }] \). □

Theorem 12

Assume \( 1 = \mathop {\min }\limits_{j = 1, \ldots ,J} x_{nj} \;\left( {n = 1, \ldots ,N} \right) \) and choose ε satisfying the assumptions of Lemma 11; then the eCCR model (22) satisfies [I], [M], [H], [C] and [U].

Proof

This assertion follows from Lemma 11 and the input transformation \( D^{x} \). □


Cite this article

Fukuyama, H., Sekitani, K. An efficiency measure satisfying the Dmitruk–Koshevoy criteria on DEA technologies. J Prod Anal 38, 131–143 (2012). https://doi.org/10.1007/s11123-011-0248-9
