# Imposing regional monotonicity on translog stochastic production frontiers with a simple three-step procedure


## Abstract

We show that the monotonicity condition is conceptually important in Stochastic Frontier Analysis (SFA). Despite its importance, most empirical studies do not impose monotonicity—probably because existing approaches are rather complex and laborious. Therefore, we propose a three-step procedure that is much simpler than existing approaches. We demonstrate how monotonicity of a translog function can be imposed regionally at a connected set (region) of input quantities. Our method can be applied not only to impose monotonicity on translog production frontiers but also to impose other restrictions on cost, distance, or profit frontiers.

## Keywords

Stochastic frontier analysis; Theoretical consistency; Monotonicity; Minimum distance estimation; Translog

## JEL Classification

C51; D24

## 1 Introduction

The analysis of technical efficiency is a widely used tool in empirical production studies. It is generally based on a “frontier” production function that represents the maximum output quantities attainable from each set of input quantities (Coelli et al. 2005). This methodology accounts for the fact that not all producers succeed in optimizing their production processes and might not achieve the maximum output level given their input quantities. It is often used to explore and compare the (relative) efficiencies of different producers and to determine factors that influence the producer’s efficiency.

Microeconomic theory implies that production functions should monotonically increase in all inputs. The importance of theoretical consistency in frontier analysis has already been stressed by Sauer et al. (2006). We show that the monotonicity property is particularly important for estimating the (relative) efficiencies of individual firms, because otherwise a reasonable interpretation of the results is impossible (see also O’Donnell and Coelli 2005). The non-parametric, non-stochastic Data Envelopment Analysis (DEA) implicitly imposes monotonicity, while the parametric Stochastic Frontier Analysis (SFA) with flexible functional forms generally disregards this condition. Despite its importance, many empirical applications of SFA present results in which the monotonicity condition is not fulfilled (Sauer et al. 2006). Although procedures for imposing monotonicity of frontier functions have been proposed in the literature, they are rarely used,^{1} probably because these procedures are rather complex and laborious. Therefore, we present a new three-step procedure that is much simpler and can also be used by practitioners. Furthermore, we demonstrate how monotonicity of a translog function can be imposed not only locally at a single data point but regionally at a connected set (region) of data points.

## 2 Theoretical consistency of production frontiers

### 2.1 Monotonicity

As noted above, microeconomic theory requires that production functions monotonically increase in all inputs, i.e. the output quantity must not decrease if any input quantity is increased. The rationale for the monotonicity assumption is as follows: if (in rare cases) there is indeed a negative technical input–output relationship (e.g. too much fertilizer burns the crops), a wise manager would simply leave a part of the input unused (e.g. leave some of the fertilizer in the bag). Therefore, increasing the (unused) quantity of this input would leave the output (at least) unchanged.

Let \(y = f( \user2{x}, \varvec{\beta} )\) denote the production frontier, where *y* is the output quantity, \(\user2{x}\) is a vector of *n* input quantities, and \(\varvec{\beta}\) is a vector of parameters. Monotonicity requires that all marginal products (\(f_i = \partial f / \partial x_i\)) are positive.

If a production frontier is *not* monotonically increasing, the efficiency estimates of the individual firms cannot be reasonably interpreted. We illustrate this problem in Fig. 1. In this example, we have a non-monotone production frontier. Firm A is below the production frontier and is hence considered to be inefficient, while firm B is on the production frontier and is hence considered to be efficient. However, firm B uses much more of the input to produce the same output as firm A, which means that firm B uses its input less efficiently than firm A. Thus, the efficiency measures based on this non-monotone production frontier imply just the opposite of the actual situation; hence, the (relative) efficiency estimates based on a non-monotone production frontier cannot be reasonably interpreted.

The problem of a non-monotone production frontier inhibits not only a reasonable interpretation of the individual (relative) efficiency estimates, but also the analysis of factors that might affect technical (in)efficiency. This is because the non-monotonicity distorts the efficiency estimates, which are the endogenous values in this analysis (e.g. in the “Technical Efficiency Effects Model” proposed by Battese and Coelli 1995).

If an estimated production frontier is not monotonically increasing in all inputs, the question of what to do arises. If the monotonicity condition is violated at many data points, the model is likely misspecified and we suggest changing the model specification. If the monotonicity condition is violated only at a few data points, these are probably random deviations from the “true” monotonically increasing production frontier and we suggest imposing the monotonicity condition in the estimation.

### 2.2 Quasiconcavity

Besides monotonicity, microeconomic theory often assumes that production functions are also quasiconcave in all inputs (Lau 1978), because this implies convex input sets and hence decreasing marginal rates of technical substitution. However, quasiconcave production functions do not guarantee that the input demand functions are “everywhere” differentiable (Dhrymes 1967; Barten et al. 1969).^{2}

If all inputs are perfectly divisible and different production activities can be applied independently, production functions are generally quasiconcave (e.g. Varian 1992). Furthermore, a non-quasiconcave point of the production function cannot reflect profit-maximizing behavior under standard microeconomic assumptions. However, the assumptions of perfectly divisible inputs and independently applicable production activities are not always fulfilled in the real world. Moreover, measuring technical efficiency generally assumes only that producers maximize output given their input quantities, not that producers maximize their profit. Hence, in contrast to the monotonicity assumption, there is not necessarily a technical rationale for production functions to be quasiconcave. Finally, even a non-quasiconcave point of the production function might reflect profit-maximizing behavior if not all prices are exogenously given or if there are restrictions on input use (e.g. fertilizer use in water protection areas).

Hence, we suggest abstaining from imposing quasiconcavity when estimating (frontier) production functions. However, we propose to check for quasiconcavity after the econometric estimation because some standard results of microeconomic theory (e.g. convex input sets) do not hold in case of non-quasiconcavity.

For a production function *f*, quasiconcavity can be checked using its bordered Hessian matrix *B*, which is the Hessian \(( f_{ij} )\) bordered with the gradient \(( f_i )\) and a zero in the upper-left corner, where \(f_{ij} = \partial^2 f / ( \partial x_i \partial x_j )\) denotes the second derivative with respect to the *i*th and *j*th input quantity. Because all input quantities are generally non-negative (\(x_i \geq 0 \; \forall \; i\)), a necessary condition for quasiconcavity is that the leading principal minors of *B* alternate in sign, i.e. \(( -1 )^k \, | B_k | \geq 0\) for \(k = 1, \ldots, n\), where \(B_k\) is the leading principal submatrix of *B* of order \(k + 1\) (Chiang 1984). In the case of a single input (*n* = 1), monotonicity implies quasiconcavity (Takayama 1994, p. 62). In the case of two or more inputs (*n* > 1), monotonicity does not (necessarily) imply quasiconcavity.
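This check is easy to automate. The following sketch (Python with NumPy; the function and example names are ours, not from the paper) evaluates the alternating-sign condition on the leading principal minors of the bordered Hessian at a single point:

```python
import numpy as np

def satisfies_quasiconcavity(grad, hess, tol=1e-9):
    """Necessary condition for quasiconcavity at one point: the leading
    principal minors |B_k| of the bordered Hessian B = [[0, grad'],
    [grad, hess]] must satisfy (-1)^k |B_k| >= 0 for k = 1, ..., n."""
    n = len(grad)
    B = np.zeros((n + 1, n + 1))
    B[0, 1:] = B[1:, 0] = grad
    B[1:, 1:] = hess
    return all((-1) ** k * np.linalg.det(B[:k + 1, :k + 1]) >= -tol
               for k in range(1, n + 1))

# Cobb-Douglas f = x1^0.4 * x2^0.5 at x = (2, 3) is quasiconcave
x = np.array([2.0, 3.0])
a = np.array([0.4, 0.5])
y = float(np.prod(x ** a))
grad = a * y / x                                    # marginal products f_i
hess = np.outer(a, a) * y / np.outer(x, x)          # cross derivatives f_ij
hess[np.diag_indices(2)] = a * (a - 1) * y / x ** 2  # own derivatives f_ii
print(satisfies_quasiconcavity(grad, hess))  # True for this point
```

A non-quasiconcave point (e.g. a positive-definite Hessian) fails the check at the second minor.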

## 3 Restricted estimation of frontier functions

### 3.1 Approaches proposed in the literature

Despite the importance of monotonicity, our search of the literature found only very few applications that impose this condition in SFA. One approach is a restricted maximum likelihood (ML) estimation, i.e. the likelihood function is maximized subject to the restriction that the theoretically derived properties of the frontier function are fulfilled. For instance, Bokusheva and Hockmann (2006) estimate a translog production frontier under monotonicity and quasiconcavity restrictions. However, they impose these restrictions only locally at the sample mean, which is not sufficient for obtaining reasonable efficiency estimates (see above). Furthermore, the maximization of the likelihood function under constraints is rather complex, and the algorithms used for the optimization frequently have convergence problems or converge to local maxima.

As another solution, O’Donnell and Coelli (2005) use the Bayesian MCMC method to estimate a stochastic frontier distance function with all desirable theoretical conditions imposed at all data points. This is probably the most suitable and most sophisticated approach, but it is rather complex and laborious.

The constrained ML and the MCMC approaches have probably been used so rarely because these methods are not available in standard econometrics software packages. Hence, their application requires advanced skills in econometrics and computer programming, and many (applied) researchers and practitioners do not have the knowledge or the time to apply these methods.

### 3.2 Three-step procedure

As a solution, we propose a much simpler three-step procedure that is based on the two-step method suggested by Koebel et al. (2003).

In the first step, we estimate the unrestricted stochastic production frontier, where *u* ≥ 0 captures technical inefficiency, *v* captures statistical noise, \(\user2{z}\) is a vector of variables explaining technical inefficiency, and \(\varvec{\delta}\) is a vector of parameters to be estimated. This estimation can be done by a standard software package for SFA. We extract the unrestricted parameters of the production frontier \(\hat{\varvec{\beta}}\) and their covariance matrix \(\hat{\varvec{\Upsigma}}_\beta\) from the estimation results.

In the second step, we obtain restricted parameters by minimizing the distance between the restricted and the unrestricted parameters:^{3}

\(\hat{\varvec{\beta}}^0 = \arg \min_{\varvec{\beta}^0} \left( \varvec{\beta}^0 - \hat{\varvec{\beta}} \right)^{\prime} \hat{\varvec{\Upsigma}}_\beta^{-1} \left( \varvec{\beta}^0 - \hat{\varvec{\beta}} \right)\) subject to the monotonicity restrictions.^{4}

The restricted parameters (\(\hat{\varvec{\beta}}^0\)) are asymptotically equivalent to a (successful) restricted one-step ML estimation (Koebel et al. 2003). However, it might be problematic to obtain a (consistent) covariance matrix of the restricted parameters \(\hat{\varvec{\Upsigma}}_\beta^0\), because standard bootstrapping leads to an inconsistent covariance matrix if the restricted parameters are at the boundary of the feasible parameter space (Andrews 2000; Dhrymes 2006).^{5} Andrews (2000) suggests alternative methods, e.g. rescaled bootstrapping, that lead to a consistent covariance matrix even in the case of binding inequality constraints. However, these alternative methods are only valid under specific conditions that need to be checked for our specific case. Thus, we leave this interesting topic for future research.^{6}

In the third step, we re-estimate the stochastic frontier model, using the frontier output quantities predicted with the restricted parameters, \(\tilde{y} = f( \user2{x}, \hat{\varvec{\beta}}^0 )\), as the only explanatory variable:

\(\ln y = \alpha_0 + \alpha_1 \ln \tilde{y} + v - u .\)

The parameters α_{0} and α_{1} allow an adjustment of the restricted production frontier. As long as α_{1} is positive, this adjustment is a strictly monotonically increasing transformation. Hence, it does not affect the monotonicity and quasiconcavity (Arrow and Enthoven 1961, p. 781) conditions of \(f( \user2{x}, \hat{\varvec{\beta}}^0 )\). However, if desired, an adjustment can be prevented by restricting α_{0} to zero and α_{1} to one.^{7} Since the estimation of Eq. 12 includes a generated regressor (\(\tilde{y}\)), the standard errors obtained in the third step might be biased (see Pagan 1984).

The monotonicity restrictions can be checked by statistical tests. In a first step, the inequality restrictions in (11) that are binding in the distance minimization (10) are determined. In a second step, standard statistical tests such as the Wald test or the likelihood ratio test are applied by treating the binding inequality restrictions as equality restrictions.
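This two-step testing logic can be sketched as follows (Python with SciPy; all names and the tiny two-parameter example are hypothetical, chosen only to make the mechanics concrete): the rows of the restriction matrix that are (approximately) binding at the restricted estimate are treated as equalities and tested with a Wald statistic.

```python
import numpy as np
from scipy import stats

def wald_test_binding(beta_hat, cov_beta, R, beta_restr, tol=1e-6):
    """Treat the inequality restrictions R beta >= 0 that are binding at
    the restricted estimate (R beta^0 ~ 0) as equalities and compute the
    Wald statistic W = q' (Rb Cov Rb')^{-1} q with q = Rb beta_hat."""
    binding = np.abs(R @ beta_restr) < tol
    Rb = np.atleast_2d(R[binding])
    if Rb.shape[0] == 0:
        return None                     # no binding restriction to test
    q = Rb @ beta_hat
    W = float(q @ np.linalg.solve(Rb @ cov_beta @ Rb.T, q))
    df = Rb.shape[0]
    return W, df, float(stats.chi2.sf(W, df))

# hypothetical example: only the first of two restrictions is binding
W, df, p = wald_test_binding(beta_hat=np.array([-0.2, 0.5]),
                             cov_beta=0.04 * np.eye(2),
                             R=np.eye(2),
                             beta_restr=np.array([0.0, 0.5]))
```

Here W = (−0.2)²/0.04 = 1 with one degree of freedom, so the single binding restriction would not be rejected.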

### 3.3 Translog production function

We apply our procedure to the translog production function

\(\ln y = \beta_0 + \sum_{i=1}^{n} \beta_i \ln x_i + \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \beta_{ij} \ln x_i \ln x_j\)

with β_{ ij } = β_{ ji }. Its marginal products are

\(f_i = \frac{\partial y}{\partial x_i} = \frac{y}{x_i} \, E_i \quad \text{with} \quad E_i = \beta_i + \sum_{j=1}^{n} \beta_{ij} \ln x_j ,\)

and its second derivatives are

\(f_{ij} = \frac{y}{x_i x_j} \left( \beta_{ij} + E_i E_j - \Delta_{ij} E_i \right) ,\)

where Δ_{ ij } is the Kronecker delta with Δ_{ ij } = 1 if *i* = *j* and Δ_{ ij } = 0 otherwise.
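For concreteness, these quantities can be evaluated numerically. The sketch below (Python with NumPy; the coefficients are illustrative, not the paper's estimates) computes the output elasticities \(E_i\) and the marginal products of a two-input translog function:

```python
import numpy as np

def translog_y(lx, b0, b, B):
    """Translog output: ln y = b0 + b' lx + 0.5 * lx' B lx, lx = ln x."""
    return np.exp(b0 + b @ lx + 0.5 * lx @ B @ lx)

def translog_E(lx, b, B):
    """Output elasticities E_i = b_i + sum_j B_ij lx_j; since the marginal
    products are f_i = y E_i / x_i and y, x > 0, monotonicity <=> E_i >= 0."""
    return b + B @ lx

# illustrative coefficients (not the paper's estimates)
b0, b = 0.0, np.array([0.3, 0.6])
B = np.array([[-0.10, 0.05],
              [0.05, -0.10]])          # symmetric: B_ij = B_ji
lx = np.log(np.array([2.0, 3.0]))

E = translog_E(lx, b, B)                        # elasticities
mp = translog_y(lx, b0, b, B) * E / np.exp(lx)  # marginal products f_i
```

Checking monotonicity at a data point thus reduces to checking the signs of the (linear-in-parameters) elasticities `E`.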

Because the output quantity *y* and all input quantities \(x_i\) are positive, the monotonicity conditions \(f_i \geq 0\) simplify to \(E_i = \beta_i + \sum_{j} \beta_{ij} \ln x_j \geq 0\) for all *i*. These restrictions are linear in the parameters and can be written in matrix notation as \(R \, \varvec{\beta} \geq 0\), where *R* is a matrix of dimension *n* × (1 + *n* (*n* + 3)/2) that contains zeros, ones, and logarithmic input quantities. For *T* > 1 data points, the matrix *R* in Eq. 20 can be created for each data point and then all of these (sub)matrices can be stacked to a new *R* matrix with *T* · *n* rows.

The distance minimization subject to these linear restrictions is a standard quadratic programming problem with \(\user2{s} = \varvec{\beta}^0 - \hat{\varvec{\beta}}\) as the choice variables, \(\hat{\varvec{\Upsigma}}_\beta^{-1}\) as the weighting matrix, and the inequality constraints \(A \user2{s} \geq \user2{b}\) with *A* = *R* and \(\user2{b} = - R \hat{\varvec{\beta}}\). After solving this quadratic programming problem, the restricted β coefficients can be obtained by \(\hat{\varvec{\beta}}^0 = \user2{s}^* + \hat{\varvec{\beta}}\). Hence, this distance minimization can be done easily by any quadratic programming software.
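A minimal sketch of this second step (Python; we use SciPy's SLSQP solver as a generic stand-in for dedicated quadratic programming software such as the quadprog package, and the numbers are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

def monotonicity_R(lnX):
    """Stack, for every data point t and input i, the linear restriction
    E_i = b_i + sum_j b_ij ln x_tj >= 0 as a row of R, where the parameter
    vector is theta = (b_0, b_1..b_n, b_11, b_12, .., b_nn), b_ij = b_ji."""
    T, n = lnX.shape
    idx, pos = {}, 1 + n
    for i in range(n):
        for j in range(i, n):
            idx[(i, j)] = pos          # position of b_ij (i <= j) in theta
            pos += 1
    R = np.zeros((T * n, pos))
    for t in range(T):
        for i in range(n):
            R[t * n + i, 1 + i] = 1.0                      # coefficient of b_i
            for j in range(n):                             # sum_j b_ij ln x_tj
                R[t * n + i, idx[min(i, j), max(i, j)]] += lnX[t, j]
    return R

def min_distance(beta_hat, cov, R):
    """Minimize (b - beta_hat)' cov^{-1} (b - beta_hat) s.t. R b >= 0,
    i.e. the QP in s = b - beta_hat with A = R and b = -R beta_hat."""
    P = np.linalg.inv(cov)
    res = minimize(lambda s: s @ P @ s, np.zeros_like(beta_hat),
                   jac=lambda s: 2.0 * P @ s, method='SLSQP',
                   constraints={'type': 'ineq',
                                'fun': lambda s: R @ (beta_hat + s)})
    return beta_hat + res.x

# illustrative data and coefficients: beta_hat violates E_1 >= 0
lnX = np.log(np.array([[1.0, 1.0], [2.0, 3.0], [4.0, 2.0]]))
beta_hat = np.array([0.0, -0.1, 0.5, 0.02, 0.01, 0.03])
beta0 = min_distance(beta_hat, np.eye(6), monotonicity_R(lnX))
```

After the minimization, `monotonicity_R(lnX) @ beta0` is non-negative (up to solver tolerance), so the restricted frontier is monotone at all three data points.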

As stated in Theorem 1 below, the translog functional form has the advantage that monotonicity can be easily imposed regionally, i.e. in a closed connected set of the input quantities.

### **Theorem 1 **(Regional monotonicity of translog functions)

*A translog function* \(f( \user2{x}, \varvec{\beta} )\) *is monotonic in* \(\user2{x}\) *on a closed connected set that consists of all* \(\ln \user2{x}\) *in the convex polyhedron with vertices* \(\ln \user2{x}_1, \ldots, \ln \user2{x}_p,\) *if and only if each of its partial derivatives retains the sign over all vertices:*

\(\mathrm{sign} \left( E_i ( \ln \user2{x}_k ) \right) = \mathrm{sign} \left( E_i ( \ln \user2{x}_l ) \right) \quad \forall \; i = 1, \ldots, n; \; k, l = 1, \ldots, p,\)

*where* \(E_i ( \ln \user2{x} ) = \beta_i + \sum_{j=1}^{n} \beta_{ij} \ln x_j\). *The proof is given in* Appendix 2.

Thus, imposing monotonicity at the vertices of a convex polyhedron in the *n*-dimensional space of (logarithmic) input quantities ensures that monotonicity is fulfilled in the entire polyhedron.^{8} Hence, if monotonicity is imposed at all sample points, the monotonicity condition is also fulfilled at all points on the straight lines between each two sample points (given that input quantities are measured in logarithmic terms), and the problem of non-monotone intervals between sample points (as demonstrated in Fig. 2) is ruled out. If the input quantities are measured in natural (non-logarithmic) terms, monotonicity is imposed in a closed connected set of the input quantities, but this set is not a convex polyhedron because its edges are not straight but curved. This is illustrated in Fig. 3.
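Because the elasticities \(E_i\) of a translog function are affine in \(\ln \user2{x}\), this vertex property can be verified numerically. A small sketch with illustrative coefficients (Python; not the paper's estimates):

```python
import numpy as np

# illustrative translog first- and second-order coefficients
b = np.array([0.3, 0.6])
B = np.array([[-0.10, 0.05],
              [0.05, -0.10]])

E = lambda lx: b + B @ lx          # elasticities: affine in ln x

# vertices of a polyhedron in log-input space (here a rectangle)
V = np.log(np.array([[1.0, 1.0], [8.0, 1.0], [1.0, 9.0], [8.0, 9.0]]))
assert all((E(v) >= 0).all() for v in V)   # check only at the vertices

# monotonicity then holds at every convex combination of the vertices,
# because an affine function of a convex combination is the same convex
# combination of the (non-negative) vertex values
rng = np.random.default_rng(0)
for _ in range(1000):
    w = rng.dirichlet(np.ones(len(V)))     # random convex weights
    assert (E(w @ V) >= -1e-12).all()      # holds in the interior too
```

The asserts pass: checking the \(T \cdot n\) (or here \(p \cdot n\)) vertex restrictions is sufficient for the whole region.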

Another option is to impose monotonicity at the vertices of a box (*n*-dimensional cuboid) that includes the region at which monotonicity should be imposed (e.g. all data points). The lower left vertex of this box should be (at most) at the position (min *x*_{1}, min *x*_{2}, …, min *x*_{ n }) and the upper right vertex of this box should be (at least) at the position (max *x*_{1}, max *x*_{2}, …, max *x*_{ n }), where the edges of this box are parallel to the axes of the *n*-dimensional space of logarithmic input quantities. This ensures that the region at which monotonicity is imposed is also a box in the space of natural (non-logarithmic) input quantities. This is illustrated in Fig. 4.
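Constructing these 2^*n* box vertices from the sample minima and maxima is straightforward; a sketch with hypothetical data (Python):

```python
import numpy as np
from itertools import product

def box_vertices(X):
    """Return the 2^n vertices, in log space, of the axis-parallel box
    spanning [min x_i, max x_i] for every input i (X: one row per point)."""
    lo, hi = np.log(X.min(axis=0)), np.log(X.max(axis=0))
    bounds = np.stack([lo, hi])                       # shape (2, n)
    return np.array([bounds[list(bits), range(X.shape[1])]
                     for bits in product((0, 1), repeat=X.shape[1])])

# hypothetical sample with n = 3 inputs
X = np.array([[1.2, 30.0, 5.0],
              [0.8, 55.0, 9.0],
              [2.1, 40.0, 3.0]])
V = box_vertices(X)          # 8 vertices; impose E_i >= 0 at these points
```

The monotonicity restrictions would then be built from these 8 vertices instead of all sample points, which keeps the number of constraints independent of the sample size.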

## 4 An empirical example

We demonstrate this method using panel data collected from 43 smallholder rice producers in the Tarlac region of the Philippines from 1990 to 1997. This data set is published as a supplement to Coelli et al. (2005).^{9} The data include one output (tons of freshly threshed rice) and three inputs: area planted (in hectares), labour used (in man-days of family and hired labour), and fertilizer used (in kg of active ingredients). We explain technical inefficiency according to the education of the household head (in years) and the percentage of area classified as “bantog” (upland) fields.

All estimations and calculations have been done within the “R software environment for statistical computing and graphics” (R Development Core Team 2009) using the “R” packages “frontier” (Coelli and Henningsen 2009), “micEcon” (Henningsen 2008), and “quadprog” (Turlach and Weingessel 2007). The commands that have been used for this analysis are available in Appendix 3.

The results of the unrestricted (first-step) estimation are presented in the table below.^{10} The β and δ coefficients are defined as before, σ² is the total error variance (\(\sigma^2_u + \sigma^2_v\)), and γ is the proportion of the variance of technical inefficiency in the total error variance (\(\sigma^2_u / \sigma^2\)). The monotonicity condition is violated at 39 out of 344 observations and quasiconcavity is not fulfilled at four observations. While the education of the household head has no significant influence on technical efficiency, the proportion of “bantog” (upland) fields significantly (at the 10% level) increases the farm’s efficiency.

Unrestricted stochastic frontier estimation

| | Estimate | Std. error | *t* value | Pr(>\|*t*\|) |
|---|---|---|---|---|
| β_{0} | −7.5546 | 1.6898 | −4.4708 | 0.0000 |
| β_{1} | −2.0886 | 0.7812 | −2.6735 | 0.0079 |
| β_{2} | 3.0734 | 0.7954 | 3.8641 | 0.0001 |
| β_{3} | 0.7890 | 0.5472 | 1.4420 | 0.1502 |
| β_{11} | −0.3972 | 0.2139 | −1.8568 | 0.0642 |
| β_{12} | 0.5829 | 0.1778 | 3.2776 | 0.0012 |
| β_{13} | 0.0428 | 0.1415 | 0.3025 | 0.7625 |
| β_{22} | −0.5647 | 0.2755 | −2.0496 | 0.0412 |
| β_{23} | −0.1276 | 0.1410 | −0.9051 | 0.3661 |
| β_{33} | −0.0030 | 0.0924 | −0.0321 | 0.9744 |
| δ_{1} | −0.0103 | 0.0489 | −0.2097 | 0.8341 |
| δ_{2} | −1.0724 | 0.5914 | −1.8134 | 0.0707 |
| σ² | 0.4089 | 0.1720 | 2.3771 | 0.0180 |
| γ | 0.9168 | 0.0386 | 23.7612 | 0.0000 |

The table below presents the restricted coefficients (coef), their differences from the unrestricted coefficients (diff), these differences relative to the standard errors of the unrestricted coefficients (diff/std.err), and the restricted coefficients adjusted by α_{0} and α_{1} estimated in the final step (adj.coef). Of course, the monotonicity condition is now fulfilled at all observations. Moreover, the quasiconcavity condition is also fulfilled at all observations. Interestingly, we obtained the same result, i.e. imposing monotonicity implies quasiconcavity, also in other empirical applications (e.g. Wiebusch 2005; Henning and Mumm 2009; Henning and Han 2009). Barnett (2002) argues that imposing curvature but not monotonicity increases the incidence of monotonicity violations. Hence, imposing monotonicity first and checking for curvature thereafter, as in our approach, seems more effective than imposing curvature alone. However, monotonicity has a closer relationship to quasiconcavity than to concavity (see above), so it is questionable whether imposing monotonicity generally implies concavity in empirical applications.

Minimum distance estimation

| | coef | diff | diff/std.err | adj.coef |
|---|---|---|---|---|
| \(\beta_0^0\) | −4.8927 | 2.6619 | 1.5753 | −4.8918 |
| \(\beta_1^0\) | −0.9999 | 1.0887 | 1.3935 | −0.9998 |
| \(\beta_2^0\) | 1.8159 | −1.2575 | −1.5811 | 1.8157 |
| \(\beta_3^0\) | 0.6851 | −0.1040 | −0.1900 | 0.6850 |
| \(\beta_{11}^0\) | −0.1918 | 0.2055 | 0.9603 | −0.1918 |
| \(\beta_{12}^0\) | 0.3323 | −0.2506 | −1.4091 | 0.3323 |
| \(\beta_{13}^0\) | 0.0168 | −0.0260 | −0.1838 | 0.0168 |
| \(\beta_{22}^0\) | −0.2431 | 0.3216 | 1.1674 | −0.2430 |
| \(\beta_{23}^0\) | −0.1275 | 0.0002 | 0.0013 | −0.1275 |
| \(\beta_{33}^0\) | 0.0217 | 0.0246 | 0.2667 | 0.0217 |

Imposing the monotonicity restrictions increases the estimate of the total error variance (σ²) from around 0.41 to 0.46. In contrast, the proportion of the variance of technical inefficiency in the total error variance (γ) does not change much.

Final stochastic frontier estimation

| | Estimate | Std. error | *t* value | Pr(>\|*t*\|) |
|---|---|---|---|---|
| α_{0} | 0.0005 | 0.0469 | 0.0110 | 0.9912 |
| α_{1} | 0.9999 | 0.0190 | 52.5687 | 0.0000 |
| \(\delta_1^0\) | −0.0231 | 0.0571 | −0.4045 | 0.6861 |
| \(\delta_2^0\) | −1.1885 | 0.6733 | −1.7653 | 0.0784 |
| σ² | 0.4620 | 0.2039 | 2.2656 | 0.0241 |
| γ | 0.9277 | 0.0333 | 27.8679 | 0.0000 |

We test the monotonicity restrictions by a Wald test and a likelihood ratio test. Neither test rejects the monotonicity restrictions, with *P*-values of 0.39 and 0.42, respectively.

## 5 Conclusions

We have shown that efficiency estimates based on non-monotone frontier functions cannot be reasonably interpreted. Given the importance of monotonicity, we suggest that non-monotone production frontiers should no longer be used in empirical production analysis, particularly since we have proposed a three-step procedure that is much simpler than existing approaches. We show that imposing monotonicity at one point is not sufficient to obtain reasonable efficiency estimates, and we demonstrate how monotonicity of a flexible translog function can be imposed on a closed set (region) of input quantities. Our three-step method can be used to impose theoretical consistency not only on translog production frontiers but also on other functional forms and other frontier functions such as distance, cost, or profit frontiers. Although the theoretical restrictions for these functions are more complex than the monotonicity restrictions of a translog production frontier, our proposed three-step procedure is still probably less complex than a restricted ML or a Bayesian Markov chain Monte Carlo (MCMC) estimation.

## Footnotes

- 1.
This is in contrast to empirical estimations of standard (non-frontier) microeconomic models, which have frequently been estimated under restrictions derived from microeconomic theory for three decades (see e.g. Lau 1978).

- 2.
We thank an anonymous reviewer for pointing this out to us.

- 3.
The inclusion of the \(\varvec{\delta}\) parameters in the distance minimization is discussed in Appendix 1.

- 4.
The speed and probability of convergence of the non-linear distance minimization can be increased by providing analytical gradients: \(\left. \partial \left( \hat{\varvec{\beta}}^0 - \hat{\varvec{\beta}} \right) \hat{\varvec{\Upsigma}}_\beta^{-1} \left( \hat{\varvec{\beta}}^0 - \hat{\varvec{\beta}} \right) \right/ \partial \hat{\varvec{\beta}}^0= 2 \hat{\varvec{\Upsigma}}_\beta^{-1} ( \hat{\varvec{\beta}}^0 - \hat{\varvec{\beta}} )\).

- 5.
We thank two anonymous reviewers for pointing this out.

- 6.
Bayesian MCMC estimations deliver a consistent covariance matrix, but their estimation results are often sensitive to assumptions about prior distributions and starting values. In this regard, the estimates of our three-step procedure can still be useful for specifying prior distributions and starting values for Bayesian MCMC approaches. We thank Christian Aßmann for this comment.

- 7.
While α_{1} can easily be restricted to one by using \(( \ln y - \ln \tilde{y} )\) as the output variable and using no input variable in Eq. 10, not all software packages allow the restriction of α_{0} to zero. However, in our empirical applications, α_{0} and α_{1} were always very close to zero and one, respectively, which means that there was virtually no adjustment.

- 8.
Terrell (1996) imposes regional monotonicity on a (non-frontier) translog cost function by imposing this condition at each point of a fine grid that spans the desired region. Given our finding, it is unnecessary to use the interior of the grid because imposing monotonicity only at its vertices is sufficient for guaranteeing monotonicity in the entire region.

- 9.
It can be downloaded from http://www.uq.edu.au/economics/cepa/software/CROB2005.zip.

- 10.
The last column shows the (asymptotic) marginal significance level assuming that the *t*-values have a standard normal distribution.

## Notes

### Acknowledgements

The authors thank Christian Aßmann, Uwe Jensen, Subal Kumbhakar, two anonymous referees, the participants of the 2nd Halle Workshop on Efficiency and Productivity Analysis (Halle, Germany, May 26–27, 2008) and the participants of the Fifth North American Productivity Workshop (New York City, USA, June 24–27, 2008) for their very helpful comments and suggestions. Of course, all remaining errors are the sole responsibility of the authors. The first author is grateful to the H. Wilhelm Schaumann Stiftung and the German Research Foundation (Deutsche Forschungsgemeinschaft, DFG) for financially supporting this research.

### Open Access

This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.

## References

- Andrews DWK (2000) Inconsistency of the bootstrap when a parameter is on the boundary of the parameter space. Econometrica 68:399–405
- Arrow KJ, Enthoven AC (1961) Quasi-concave programming. Econometrica 29(4):779–800, http://www.jstor.org/stable/1911819
- Barnett WA (2002) Tastes and technology: curvature is not sufficient for regularity. J Econom 108:199–202
- Barten AP, Kloek T, Lempers FB (1969) A note on a class of utility and production functions yielding everywhere differentiable demand functions. Rev Econ Stud 36(1):109–111
- Battese GE, Coelli TJ (1995) A model for technical inefficiency effects in a stochastic frontier production function for panel data. Empir Econ 20:325–332
- Bokusheva RA, Hockmann H (2006) Production risk and technical inefficiency in Russian agriculture. Eur Rev Agric Econ 22(1):93–118
- Chiang AC (1984) Fundamental methods of mathematical economics, 3rd edn. McGraw-Hill
- Coelli T, Henningsen A (2009) frontier: stochastic frontier analysis. R package version 0.991, http://CRAN.R-project.org
- Coelli TJ, Rao DSP, O’Donnell CJ, Battese GE (2005) An introduction to efficiency and productivity analysis, 2nd edn. Springer, New York
- Dhrymes PJ (1967) On a class of utility and production functions yielding everywhere differentiable demand functions. Rev Econ Stud 34(4):399–408
- Dhrymes PJ (2006) Constrained estimation, http://www.columbia.edu/pjd1/mypapers/mycurrentpapers/constraintestimation.pdf, Department of Economics, Columbia University, New York
- Diewert WE (1974) Functional forms for revenue and factor requirements functions. Int Econ Rev 15(1):119–130
- Harville DA (1997) Matrix algebra from a statistician’s perspective. Springer, New York
- Henning CHCA, Han J (2009) Firm-government relations and economic performance Chinese style: estimating the impact of firm-government relations on technical efficiency in Chinese regional agribusiness industry. Department of Agricultural Economics, University of Kiel
- Henning CHCA, Mumm J (2009) Coopetition in business networks and economic performance: estimating interfirm networks and technical efficiency in the German dairy industry. Department of Agricultural Economics, University of Kiel
- Henningsen A (2008) micEcon: tools for microeconomic analysis and microeconomic modeling. R package version 0.5, http://CRAN.R-project.org
- Koebel B, Falk M, Laisney F (2003) Imposing and testing curvature conditions on a Box-Cox cost function. J Bus Econ Stat 21(2):319–335
- Lau LJ (1978) Testing and imposing monotonicity, convexity and quasi-convexity constraints. In: Fuss M, McFadden D (eds) Production economics: a dual approach to theory and applications, vol 1. North-Holland, Amsterdam, pp 409–453
- O’Donnell CJ, Coelli TJ (2005) A Bayesian approach to imposing curvature on distance functions. J Econom 126(2):493–523
- Pagan A (1984) Econometric issues in the analysis of regressions with generated regressors. Int Econ Rev 25(1):221–247
- R Development Core Team (2009) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria, http://www.R-project.org, ISBN 3-900051-07-0
- Sauer J, Frohberg K, Hockmann H (2006) Stochastic efficiency measurement: the curse of theoretical consistency. J Appl Econ 9(1):139–165
- Takayama A (1994) Analytical methods in economics. Harvester Wheatsheaf
- Tangian AS (2002) A unified model for cardinally and ordinally constructing quadratic objective functions. In: Tangian AS, Gruber J (eds) Constructing and applying objective functions, no. 510 in Lecture notes in economics and mathematical systems. Springer, Berlin, pp 117–169
- Terrell D (1996) Incorporating monotonicity and concavity conditions in flexible functional forms. J Appl Econom 11:179–194
- Turlach BA, Weingessel A (2007) quadprog: functions to solve quadratic programming problems. R package version 1.4-11
- Varian HR (1992) Microeconomic analysis, 3rd edn. W.W. Norton & Company, New York
- Wiebusch A (2005) Ländliche Kreditmärkte in Transformationsländern: Marktversagen und die Rolle formaler und informeller Institutionen in Polen und der Slowakei. PhD thesis, Department of Agricultural Economics, University of Kiel, http://eldiss.uni-kiel.de/macau/receive/dissertation_diss_00001481