Constraints in models of production and cost via slack-based measures

In this paper, we propose the use of stochastic frontier models to impose theoretical regularity constraints (such as monotonicity and concavity) on flexible functional forms. These constraints take the form of inequalities involving both the data and the parameters of the model. We address a major concern that arises when statistically endogenous variables are present in these inequalities, and we present results with and without endogeneity in the inequality constraints. In the system case (e.g., cost-share equations), and more generally when a production function is estimated jointly with its first-order conditions, we detect an econometric problem which we solve successfully. We provide an empirical application to US electric power generation plants during 1986-1997, a dataset previously used by several authors.


Introduction
In many areas of applied economics and operations research, equations or systems of equations are estimated that must satisfy certain theoretical constraints either globally or locally (that is, at a specific point of approximation). At other times, the equations must satisfy monotonicity or other constraints at each observation. Although globally flexible functional forms exist that satisfy the constraints globally using only parametric restrictions, practitioners often use flexible forms for which the restrictions cannot be imposed through parametric restrictions alone. Instead, the constraints also involve the data. Suppose, for example, we have a translog production function of the form:

$$y = \beta_1 + \beta_2 x_1 + \beta_3 x_2 + \tfrac{1}{2}\beta_4 x_1^2 + \tfrac{1}{2}\beta_5 x_2^2 + \beta_6 x_1 x_2,$$

where $y, x_1, x_2$ denote the logs of output, capital, and labor. The input elasticities must be positive, so we have the constraints:

$$\frac{\partial y}{\partial x_1} = \beta_2 + \beta_4 x_1 + \beta_6 x_2 \ge 0, \qquad \frac{\partial y}{\partial x_2} = \beta_3 + \beta_5 x_2 + \beta_6 x_1 \ge 0.$$

For problems related to the use of the translog see, for example, O'Donnell (2018, p. 286, footnote 11). We expand on this point below.
Imposing constraints has given rise to a significant literature, including O'Donnell et al. (2001) and O'Donnell and Coelli (2005). McCausland (2008) uses orthogonal polynomials, while other authors have proposed the use of neural networks (Vouldis et al. 2010). Diewert and Wales (1987) spell out the conditions that must be satisfied, while Gallant and Golub (1984) and Lau (1978) represent earlier attempts. Ivaldi et al. (1996) compare different functional forms in a concise way.
The dominant approach seems to be the one adopted by O'Donnell et al. (2001) and O'Donnell and Coelli (2005), who impose the constraints by assuming a different technology for each firm. Terrell (1996) is more in line with Geweke (1986), while Wolff et al. (2010) present new "local" approaches. In this paper, we retain the original problem: Given an equation or a system of equations of the traditional form, is it possible to use standard Markov chain Monte Carlo (MCMC) methods to perform Bayesian inference subject to many data- and parameter-specific inequality constraints? This problem is fundamentally different from Geweke (1991) since, more often than not, we have a number of constraints exceeding the number of equations, and the inequality constraints must be imposed exactly, without regard to their posterior probability. Of course, when the constraints depend on the data, they must not be imposed at all data points: a translog cost or production function on which the constraints hold everywhere would reduce to a Cobb-Douglas, which is not second-order flexible.
It turns out that there are two approaches to solving the problem. In the first approach, which we call "naive," all inequality constraints are converted to equalities using a surplus formulation. The surpluses are modeled within the stochastic frontier approach. This is, essentially, the approach in Huang and Huang (2019), which was proposed independently of this paper. In the second approach, we take up a major problem with this model, viz. the fact that endogenous variables may appear in the inequality constraints. To the best of our knowledge, this problem (and a potential solution) has not been recognized before. As we mentioned, the first approach has been proposed independently by Huang and Huang (2019), where the surpluses are assumed to follow independent half-normal distributions. The problems with this approach are the following. First, the surpluses cannot be independent, as violations of some constraints (say, monotonicity) are known to have effects on other constraints (like curvature). Second, endogeneity is not taken into account, although endogenous variables appear in the frontier equations that impose the constraints; in turn, specialized methods must be used. Third, it is well known that the imposition of theoretical constraints (which is necessary in any functional form to account for the information provided by neoclassical production theory) affects estimates of technical inefficiency, so the surpluses should be correlated with the one-sided error term in the production or cost function.
Moreover, we apply the new techniques to the translog, as it is widely used. If researchers want to use globally flexible functional forms that satisfy monotonicity and curvature via parametric restrictions alone (e.g., Koop et al. 1997 and the generalized McFadden functional form), they can certainly do so, and the methods in this paper are not necessary. However, as the translog is quite popular, we use it here as our benchmark case. Another case of flexible functional forms where the constraints also depend on the data has been analyzed in Tsionas (2016).

General
Let us consider the simplest case of an equation which is linear in the parameters:

$$y_i = g(z_i)'\beta + v_i - u_i, \quad i = 1,\ldots,n,$$

where $z_i$ is an $m \times 1$ vector of basic covariates, $\beta$ is a $k \times 1$ vector of parameters, $g : \mathbb{R}^m \to \mathbb{R}^k$ is a vector function, $v_i$ is a two-sided error term, and $u_i$ is a nonnegative error component representing technical inefficiency. The translog or polynomials in certain variables $z_i$ are leading examples. Evidently, we can write:

$$y_i = x_i'\beta + v_i - u_i,$$

where $x_i = g(z_i)$ is $k \times 1$. Suppose we require the function $g(z)'\beta$ to be monotonic and, without loss of generality, assume that all first-order partial derivatives must be nonnegative, that is, $Dg(z)'\beta \ge 0_{(m\times 1)}$. Since no parameters are involved in $g$, the (transposed) Jacobian $Dg(z) \equiv X_0$, which is $m \times k$, is a simple function of $z$ and, therefore, a simple function of $x$.
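As an illustration of the mapping $g$ and its Jacobian, consider the two-input translog above. The following is a minimal sketch in Python/NumPy; the coefficient values in `beta` are hypothetical and chosen only for illustration:

```python
import numpy as np

def g(z):
    """Translog basis g : R^m -> R^k for m = 2 log inputs, k = 6 regressors."""
    x1, x2 = z
    return np.array([1.0, x1, x2, 0.5 * x1**2, 0.5 * x2**2, x1 * x2])

def Dg(z):
    """Transposed Jacobian X0 = Dg(z), an m x k matrix of known functions of z;
    Dg(z) @ beta stacks the input elasticities that must be nonnegative."""
    x1, x2 = z
    return np.array([
        [0.0, 1.0, 0.0, x1, 0.0, x2],   # d g(z)'beta / d x1
        [0.0, 0.0, 1.0, 0.0, x2, x1],   # d g(z)'beta / d x2
    ])

beta = np.array([1.0, 0.3, 0.6, -0.05, -0.05, 0.02])  # hypothetical values
z = np.array([1.2, 0.8])
elasticities = Dg(z) @ beta   # monotonicity at z requires both entries >= 0
```

Note that the constraint rows depend on the data point $z$, which is exactly why the restrictions cannot be reduced to purely parametric ones.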
Suppose that for the $i$th observation we have the constraints $X_{i0}\beta \ge 0_{(m\times 1)}$, where $X_{i0} = Dg(z_i)$. Bayesian inference in the linear model subject to a few inequality constraints has been considered by Geweke using both importance sampling (Geweke 1986) and Gibbs sampling (Geweke 1996). Here, we follow a different approach, proposed independently by Huang and Huang (2019).
Suppose we write the constraints as follows:

$$X_{i0}\beta = v_{i0} + u_{i0}, \quad u_{i0} \ge 0_{(m\times 1)}, \quad (5)$$

where $v_{i0}$ is an $m \times 1$ two-sided error term and $u_{i0}$ is an $m \times 1$ nonnegative error component. Here, $v_{i0}$ represents noise in the inequality constraints, and the inequality constraints themselves are represented by $u_{i0}$. So, (5) imposes the constraints $X_{i0}\beta \ge 0_{m\times 1}$ up to measurement errors ($v_{i0}$). Moreover, $u_{i0}$ represents the slacks in the constraints. If we write the equations together, we have:

$$y_i = x_i'\beta + v_i - u_i, \qquad X_{i0}\beta = v_{i0} + u_{i0}. \quad (6)$$

We are now ready to specify our distributional assumptions on the error components:

$$\begin{pmatrix} v_i \\ v_{i0} \end{pmatrix} \sim \mathcal{N}_{m+1}(0,\Sigma), \qquad \begin{pmatrix} u_i \\ u_{i0} \end{pmatrix} \sim \mathcal{N}^{+}_{m+1}(0,\Phi).$$

In this specification, the two-sided and one-sided error components are correlated across equations. This specification, unlike the one in Huang and Huang (2019), has certain advantages. First, the error terms $v_i$ and $v_{i0}$ are allowed to be correlated, as the imposition of theoretical constraints affects parameter estimates in the first equation of (6). Second, the one-sided error terms $u_i$ and $u_{i0}$ are allowed to be correlated, since the extent of violation of certain constraints is very likely to affect the degree of violation of other constraints.

Simplified example
For ease of presentation and to establish the techniques, we assume

$$\Sigma = \begin{bmatrix} \sigma^2 & 0 \\ 0 & \omega^2 I_m \end{bmatrix}, \qquad \Phi = \begin{bmatrix} 0 & 0 \\ 0 & \varphi^2 I_m \end{bmatrix}.$$

In this case, there is no technical inefficiency (viz. $u_i = 0$), and all surpluses $u_{i0}$ share the same scale parameter ($\varphi$). Clearly, the scales could have been different (Huang and Huang 2019); most importantly, $\Phi$ should allow for correlations between the violations of different constraints. Here, we focus on the simplest possible case to provide the background of the new approach. For a fixed value of $\omega$, which controls the degree of satisfaction of the constraints, the posterior distribution of this model is:

$$p(\beta,\sigma,\varphi,u \,|\, y, X) \propto \sigma^{-n}\exp\Big\{-\tfrac{1}{2\sigma^2}\sum_{i=1}^n (y_i - x_i'\beta)^2\Big\}\,(\omega\varphi)^{-nm}\exp\Big\{-\tfrac{1}{2\omega^2}\sum_{i=1}^n \|X_{i0}\beta - u_{i0}\|^2 - \tfrac{1}{2\varphi^2}\sum_{i=1}^n \|u_{i0}\|^2\Big\}\,p(\beta,\sigma,\varphi),$$

subject to $u_{i0} \ge 0$ for all $i$, where $p(\beta,\sigma,\varphi)$ is the prior. The only parameters of interest are the elements of $\beta$, and possibly $\sigma$, but not $\varphi$, which, like $u$, is an artificial parameter introduced to facilitate Bayesian inference. Alternatively, $u$ represents prior parameters with a prior given by (5), in which $\varphi$ and $\omega$ are parameters. The user may have high relative prior precision with respect to the degree of satisfaction of the constraints (so the parameter $\omega$ can be set in advance), but in other respects the user does not particularly care how far inside the acceptable region the parameters are. Of course, if this is not the case, it can always be controlled via the choice of an informative prior for $\beta$.
Suppose for simplicity that $p(\beta,\sigma,\varphi) \propto \sigma^{-1}\varphi^{-1}$. Then, we can use the Gibbs sampler based on the following standard full posterior conditional distributions:

$$\beta \,|\, \sigma,\varphi,u,y \sim \mathcal{N}\big(\bar V(\sigma^{-2}X'y + \omega^{-2}X_0'u),\ \bar V\big), \qquad \bar V = (\sigma^{-2}X'X + \omega^{-2}X_0'X_0)^{-1},$$

$$\sigma^2 \,|\, \beta,\varphi,u,y \sim \frac{\sum_{i=1}^n (y_i - x_i'\beta)^2}{\chi^2_n}, \qquad \varphi^2 \,|\, \beta,\sigma,u,y \sim \frac{u'u}{\chi^2_{nm}},$$

and finally:

$$u_{i0} \,|\, \beta,\sigma,\varphi,y \sim \mathcal{N}^{+}\Big(\tfrac{\varphi^2}{\omega^2+\varphi^2}X_{i0}\beta,\ \tfrac{\omega^2\varphi^2}{\omega^2+\varphi^2}I_m\Big), \quad i = 1,\ldots,n, \quad (14)$$

where $X_0$ and $u$ stack the $X_{i0}$ and $u_{i0}$, respectively. For details on the derivations, see Tsionas (2000). Generating random draws from these full posterior conditional distributions is straightforward; the last distribution is quite standard in Bayesian analysis of the normal-half-normal stochastic frontier model. So far, we have assumed that $\omega$ can be set in advance. This is, of course, a possibility. If the user does not feel comfortable with this choice, one can use the following prior:

$$p(\omega^2) \propto \omega^{-(\bar n + 2)}\exp\big\{-\bar q/(2\omega^2)\big\}, \quad (15)$$

where $\bar n, \bar q \ge 0$ are prior parameters. In this case, the posterior conditional of $\omega^2$ is:

$$\omega^2 \,|\, \beta,\sigma,\varphi,u,y \sim \frac{\bar q + \sum_{i=1}^n \|X_{i0}\beta - u_{i0}\|^2}{\chi^2_{\bar n + nm}}.$$

The interpretation of (15) is that, in a fictitious sample of size $\bar n$, the average of $\omega^2$ would be close to $\bar q/\bar n$. There is nothing wrong with setting these parameters so that the prior is extremely informative if that is necessary; for example, $\bar n = n$ and $\bar q = 0.001$. The interpretation, in this case, is that we need the errors $v_0$ to be quite small, so that the inequality constraints are satisfied "exactly." Of course, this is related to Theil's mixed estimator.

An artificial example
Following Parmeter et al. (2009), we have the following model:

$$y_i = 1 + 3x_i + x_i^2 - 3x_i^3 + x_i^4 + v_i,$$

where the $x$'s are generated as uniform in the interval $[0, 2.5]$, and $v \sim \mathcal{N}(0, 0.1^2)$. We have $n = 100$ observations, and the $x$'s are sorted. The constraint is that we need this function to be non-decreasing, that is, $3 + 2x - 9x^2 + 4x^3 \ge 0$ at all observed data points.
In this case, $x_i = (1, x_i, x_i^2, x_i^3, x_i^4)'$ and $x_{i0} = (0, 1, 2x_i, 3x_i^2, 4x_i^3)'$. So the model is $y_i = x_i'\beta + v_i$ subject to the constraints $x_{i0}'\beta \ge 0$. For this example, we set $\bar n = 50$ and $\bar q = 0.001$. Gibbs sampling is implemented using 15,000 passes, the first 5,000 of which are discarded to mitigate possible start-up effects. The least squares (LS) fit has 22 violations of the constraints, while the Bayes fit has none. The Bayes fit is computed as follows. For each draw $\beta^{(s)}$, we compute $f_i^{(s)} = x_i'\beta^{(s)}$. After the burn-in period, our estimate of the fit is $\bar f_i = S^{-1}\sum_{s=1}^S x_i'\beta^{(s)}$, which is equivalent to $\bar f_i = x_i'\bar\beta$, where $\bar\beta = E(\beta|y,X)$ is the posterior expectation of $\beta$; this can be approximated arbitrarily well (since it is simulation-consistent) by $\bar\beta = S^{-1}\sum_{s=1}^S \beta^{(s)}$. The same is true for the derivative. These computations involve only a standard Gibbs sampling scheme and the trivial computation of the posterior mean of $\beta$. In Fig. 1, we present the original data points, the LS fit, the Bayes fit, and the constrained least squares (LS) fit, as it is more appropriate to compare restricted LS with the Bayesian estimates.
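For concreteness, the data-generating process and the count of least-squares violations can be reproduced as follows. This is a sketch: the exact count depends on the random seed, so the 22 violations reported above are specific to the authors' draw.

```python
import numpy as np

rng = np.random.default_rng(123)
n = 100
x = np.sort(rng.uniform(0.0, 2.5, n))
y = 1 + 3 * x + x**2 - 3 * x**3 + x**4 + 0.1 * rng.standard_normal(n)

# Regressors and their derivatives with respect to x
X = np.column_stack([np.ones(n), x, x**2, x**3, x**4])
X0 = np.column_stack([np.zeros(n), np.ones(n), 2 * x, 3 * x**2, 4 * x**3])

b_ls = np.linalg.lstsq(X, y, rcond=None)[0]    # unconstrained LS fit
violations = int(np.sum(X0 @ b_ls < 0))        # monotonicity violations
```

The derivative $3 + 2x - 9x^2 + 4x^3$ is negative on part of $[0, 2.5]$, so the unconstrained LS fit necessarily violates monotonicity at a sizable fraction of the observations.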
Even in this case, one may argue that the selection of parameters results in satisfaction of the constraints but does not guarantee the best fit. This criticism is not totally unfounded. For example, it would be possible to select these parameters so as to pass a line with positive slope through the points, which would, evidently, satisfy all the constraints. Therefore, it may be necessary to devise a mechanism by which $\omega$ is truly adjusted to the data so as to guarantee the best possible fit while also satisfying the constraints. To this end, we search directly for the minimum value of $\omega$ that guarantees satisfaction of all constraints. It turns out that this value is 1.076 (when we use 15,000 Gibbs passes and discard the first 5,000). This problem requires a fine grid of values (of the order $10^{-4}$) in the relevant range, which is determined empirically by trial and error.
Despite the effort, the results do not appear to be any better than those of a standard Bayes analysis in which $\omega$ is assigned a prior. The results are presented in panel (b) of Fig. 1. By "full Bayes fit," we mean the fit when $\omega$ is drawn from its full conditional distribution. By "conditional Bayes fit," we mean the fit when a detailed search is made over $\omega$ to determine the optimal value $\omega^* = 1.076$, the value for which all constraints are satisfied, along with fully Bayesian solutions for $\beta$, $\sigma$, $\varphi$, and $u$.

Posterior
In the general case of technical inefficiency and correlated error components, we can write the posterior of the model in (6) as follows:

$$p(\beta,\Sigma,\Phi,\{U_i\}\,|\,D) \propto |\Sigma|^{-n/2}\exp\Big\{-\tfrac12\sum_{i=1}^n(\psi_i - Z_i\beta + JU_i)'\Sigma^{-1}(\psi_i - Z_i\beta + JU_i)\Big\}\,|\Phi|^{-n/2}\exp\Big\{-\tfrac12\sum_{i=1}^n U_i'\Phi^{-1}U_i\Big\}\,p(\beta,\Sigma,\Phi),$$

where $p(\beta,\Sigma,\Phi)$ denotes the prior, $\psi_i = [y_i,\ 0_{(1\times m)}]'$, $Z_i = [x_i,\ -X_{i0}']'$ stacks the regressors of the two blocks of (6), $J = \mathrm{diag}(1, -I_m)$, and $U_i = [u_i,\ u_{i0}']' \ge 0$, for all $i = 1,\ldots,n$. Our prior is a reference flat prior:

$$p(\beta,\Sigma,\Phi) \propto |\Sigma|^{-(m+2)/2}\,|\Phi|^{-(m+2)/2}.$$

Therefore, the posterior becomes the kernel above with the exponents of $|\Sigma|$ and $|\Phi|$ replaced by $-(n+m+2)/2$. In this formulation, $\Phi$ is a general (positive-definite) covariance matrix, which allows for the fact that violations of different constraints may be related in an unknown way. The posterior can be analyzed easily using MCMC, as shown in part 5.1 of the "Technical Appendix."

Systems of equations
Many important systems of equations, like the translog, can be written in the form:

$$y_{tm} = x_{tm}'\beta_{J(m)} + v_{tm}, \quad m = 1,\ldots,M,\ t = 1,\ldots,T,$$

where $\beta_{J(m)}$ represents a particular selection of elements of the vector $\beta$ with indices in $J(m)$, $m = 1,\ldots,M$. We agree that $\beta_{J(1)} = \beta$, so that $J(1) = \{1, 2, \ldots, d\}$. Suppose, for example, that we have a translog cost function with $K$ inputs and $N$ outputs:

$$\ln C_t = \beta_0 + \sum_{k=1}^{K}\beta_k \ln p_{tk} + \sum_{n=1}^{N}\gamma_n \ln y_{tn} + \tfrac12\sum_{k=1}^{K}\sum_{k'=1}^{K}\beta_{kk'}\ln p_{tk}\ln p_{tk'} + \tfrac12\sum_{n=1}^{N}\sum_{n'=1}^{N}\gamma_{nn'}\ln y_{tn}\ln y_{tn'} + \sum_{k=1}^{K}\sum_{n=1}^{N}\delta_{kn}\ln p_{tk}\ln y_{tn} + v_{t1},$$

where $C$ is cost. We assume linear homogeneity with respect to prices, which can be imposed directly by dividing $C$ and all prices by $p_K$. The cost-share equations are:

$$s_{tk} = \beta_k + \sum_{k'=1}^{K}\beta_{kk'}\ln p_{tk'} + \sum_{n=1}^{N}\delta_{kn}\ln y_{tn} + v_{tk}, \quad k = 2,\ldots,K.$$

Clearly, $x_{t0}$ consists of the regressors in the cost function, whose dimensionality is $d$, and each equation involves the selection $\beta_{J(m)} = A_m\beta$, the $A_m$ being selection matrices consisting of ones and zeros, for all $m = 2,\ldots,M$, with $A_1 = I_{(d\times d)}$. Defining $x_{t0}A_m = x_{tm}$ for $m = 1,\ldots,M$, we can write the full system in the form:

$$y_{tm} = x_{t0}A_m\beta + v_{tm}, \quad m = 1,\ldots,M.$$

The output cost elasticities are:

$$e_{tn} = \frac{\partial \ln C_t}{\partial \ln y_{tn}} = \gamma_n + \sum_{n'=1}^{N}\gamma_{nn'}\ln y_{tn'} + \sum_{k=1}^{K}\delta_{kn}\ln p_{tk},$$

which can be written as $e_{tn} = z_{tn}\beta$, where $z_{tn} = x_{t0}D_n$, $D_n$ is an $L \times d$ selection matrix, and $I(n)$ represents the proper set of indices. Monotonicity with respect to prices and outputs implies the following restrictions:

$$s_{tk} \ge 0,\ k = 2,\ldots,K, \qquad e_{tn} \ge 0,\ n = 1,\ldots,N,$$

for all $t = 1,\ldots,T$. In total, we have $r = T(K + N - 1)$ monotonicity restrictions, $W\beta \ge 0_{(r\times1)}$, which we can represent in the form:

$$W\beta = \xi + u, \quad u \ge 0_{(r\times1)},$$

where $W$ is $r \times d$, and the rows of $W$ corresponding to observation $t$ collect $x_{tm}$ ($m = 2,\ldots,K$) and $z_{tn}$ ($n = 1,\ldots,N$). We assume $\xi \sim \mathcal{N}_r(0, \omega^2 I)$ and $u \sim \mathcal{N}^{+}_r(0, \sigma_u^2 I)$. Further, we assume $v \sim \mathcal{N}(0, \Sigma \otimes I_T)$. Therefore, the complete system along with the monotonicity constraints is:

$$Y = X\beta + v, \qquad W\beta = \xi + u.$$

Moreover, we assume $\Sigma = \sigma^2 I$. The conditional posterior distributions required to implement Gibbs sampling are presented in part 5.2 of the "Technical Appendix."

Concavity
Diewert and Wales (1987) showed that concavity of the translog cost function, requiring negative semidefiniteness of $\nabla_{pp}C(p,y)$, amounts to negative semidefiniteness of the matrix $B + s_t s_t' - \hat s_t$, where $B = [\beta_{kk'}]$ is the matrix of second-order price coefficients, $s_t$ is the vector of cost shares, and $\hat s_t = \mathrm{diag}(s_t)$. Let $M_t = -(B + s_t s_t' - \hat s_t)$, so that concavity requires the eigenvalues of $M_t$ to be nonnegative. The concavity restrictions can be expressed in the form:

$$\lambda_j(M_t) = \xi_{tj} + u_{tj}, \quad \xi_{tj} \sim \mathcal{N}(0, \Omega^2),\ u_{tj} \sim \mathcal{N}^{+}(0, \sigma_w^2),$$

where $\Omega^2$ and $\sigma_w^2$ are parameters. If we set $\Omega^2$ to a small number, the meaning of this expression is that all eigenvalues of the matrix $M_t$ are nonnegative. In practice, we can treat $\Omega$ as a parameter in order to examine systematically the extent of violation of the constraint(s). Moreover, it is straightforward to have different $\Omega$ parameters for different constraints, or to treat $\Omega$ as a general covariance matrix.
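The matrix $M_t$ and its eigenvalues are straightforward to compute at any data point. A small sketch, with hypothetical second-order price coefficients $B$ and cost shares $s$:

```python
import numpy as np

def concavity_eigenvalues(B, s):
    """Eigenvalues of M = -(B + s s' - diag(s)); Diewert-Wales concavity of
    the translog cost function at shares s requires all of them to be
    nonnegative."""
    M = -(B + np.outer(s, s) - np.diag(s))
    return np.linalg.eigvalsh(M)

B = np.array([[-0.10, 0.05],
              [0.05, -0.10]])      # hypothetical [beta_kk'] block
s = np.array([0.6, 0.4])           # hypothetical cost shares
lam = concavity_eigenvalues(B, s)  # concavity holds iff lam.min() >= 0
```

In the slack formulation, each such eigenvalue would be equated to a two-sided noise term plus a nonnegative surplus, as in the displayed restriction.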

Endogeneity issues
In the case of the cost function, where input prices and outputs are taken as predetermined, or in the case of (4), where the covariates are weakly exogenous, the Bayesian techniques we have described can be applied easily. However, there are many instances in which the covariates or explanatory variables are endogenously determined. An extremely important case arises when (4) represents a production function. Under the assumption of cost minimization, inputs are decision variables (and, therefore, endogenous), while output (the left-hand-side variable) is predetermined. Moreover, economic exogeneity and econometric exogeneity are different things: if econometric exogeneity is rejected, this does not mean that the economic assumptions are wrong; measurement errors would be a typical example. Lai and Kumbhakar (2019) consider the case of a Cobb-Douglas production function along with the first-order conditions for cost minimization. To summarize their approach, we have the following Cobb-Douglas production function:

$$y_{it} = \beta_0 + \sum_{k=1}^{K}\beta_k x_{kit} + v_{it} - u_{it}, \quad (32)$$

where $y_{it}$ is log output for firm $i$ at date $t$, $x_{kit}$ is the log of the $k$th input for firm $i$ at date $t$, $v_{it}$ is a two-sided error term, $u_{it}$ is a nonnegative error component representing technical inefficiency in production, and $\beta_k > 0$, $k = 1,\ldots,K$. Suppose also that input prices are $w_{kit}$. From the first-order conditions of cost minimization (where inputs are endogenous choice variables and output is predetermined), we obtain equality between marginal rates of technical substitution and relative input prices. These conditions can be expressed as follows:

$$x_{1it} - x_{kit} = \ln\frac{w_{kit}}{w_{1it}} - \ln\frac{\beta_k}{\beta_1} + \zeta_{kit}, \quad k = 2,\ldots,K,$$

where the $\zeta_{kit}$ are optimization errors. The constraints are only on the parameters $\beta_k$ ($k = 1,\ldots,K$), so this is not a very interesting example. However, if we generalize (32) to the translog case, we have:

$$y_{it} = \beta_0 + \sum_{k=1}^{K}\beta_k x_{kit} + \tfrac12\sum_{k=1}^{K}\sum_{k'=1}^{K}\beta_{kk'}x_{kit}x_{k'it} + v_{it} - u_{it}. \quad (33)$$

Suppose all parameters are collected into the vector $\beta$, whose dimension is $d = 1 + K + \tfrac{K(K+1)}{2}$ after imposing symmetry, viz. $\beta_{kk'} = \beta_{k'k}$, $k, k' = 1,\ldots,K$.
The first-order conditions for cost minimization are as follows:

$$\ln e_{kit} - \ln e_{1it} + x_{1it} - x_{kit} = \ln\frac{w_{kit}}{w_{1it}} + \zeta_{kit}, \quad k = 2,\ldots,K, \quad (35)$$

where $e_{kit} = \beta_k + \sum_{k'=1}^{K}\beta_{kk'}x_{k'it}$ is the $k$th input elasticity and the $\zeta_{kit}$ are optimization errors. Moreover, it is convenient to rewrite (35) with the log input ratios on the left-hand side:

$$x_{kit} - x_{1it} = \psi_k(x_{it}, w_{it}; \beta) - \zeta_{kit}, \quad (37)$$

where the functions $\psi_k(\cdot)$ collect the nonlinear terms of the translog functional form. From (35) and (37), we have a system of $K$ equations in $K$ endogenous variables. The economic restrictions are as follows. First, we have monotonicity:

$$e_{kit} = \beta_k + \sum_{k'=1}^{K}\beta_{kk'}x_{k'it} \ge 0, \quad k = 1,\ldots,K.$$

This imposes the following set of restrictions:

$$e_{kit} = \tilde v_{kit} + \tilde u_{kit}, \quad \tilde u_{kit} \ge 0, \quad k = 1,\ldots,K. \quad (40)$$

Following Diewert and Wales (1987), given the monotonicity restrictions, we need the matrix $B = [\beta_{kk'},\ k,k' = 1,\ldots,K]$ to be negative semidefinite. Therefore, it is necessary and sufficient that the eigenvalues of $B$, say $\Lambda(\beta) = [\lambda_1(\beta),\ldots,\lambda_K(\beta)]'$, are all non-positive. This imposes the following set of nonlinear restrictions:

$$-\lambda_k(\beta) = \check v_{kit} + \check u_{kit}, \quad \check u_{kit} \ge 0, \quad k = 1,\ldots,K. \quad (41)$$

From (40) and (41), we have $2K$ additional equations, so in total we have $K$ endogenous variables but $3K$ stochastic equations. From the econometric point of view, this is, clearly, a major problem, as we lack $2K$ endogenous variables to complete the system in (35), (37), (40), and (41). Let us write the entire system compactly as:

$$F_{it}(x_{it};\beta) = V_{it} + U_{it}, \quad (42)$$

where $V_{it}$ stacks the two-sided errors and $U_{it} \equiv [\tilde u_{it}', \check u_{it}']'$ stacks the one-sided components. Although we have $3K$ equations, there are only $K$ endogenous variables. To provide $2K$ additional equations, it seems that the only possibility is to assume that the elements of $U_{it}$ are, in fact, endogenous variables. This provides, indeed, the missing $2K$ additional equations. To accomplish this, we depart from the assumption that $U_{it}$ is a (vector) random variable and, instead, make use of the following device, originally proposed by Paul and Shankar (2018) and further developed by Tsionas and Mamatzakis (2019): the elements of $U_{it}$ are modeled as $U_{it} = -\ln\Phi(\cdot;\gamma)$, elementwise, where $\Phi(\cdot)$ is any distribution function (for example, the standard normal) and $\gamma = [\gamma_1,\ldots,\gamma_{2K}]' \in \mathbb{R}^{2K}$ is a vector of parameters. The idea of Paul and Shankar (2018) is that efficiency, $r = e^{-u}$, lies in the interval $(0, 1]$ and, therefore, $r$ can be modeled using any distribution function. In turn, we have $U_{it} = [\tilde u_{it,1},\ldots,\tilde u_{it,K}, \check u_{it,1},\ldots,\check u_{it,K}]'$. MCMC for this model is detailed in the "Technical Appendix" (Section 5.4).
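The device just described is easy to state in code: each one-sided component becomes a deterministic function of parameters rather than a random variable. A minimal sketch, assuming a standard normal distribution function and a hypothetical scalar index (the paper leaves the argument of the distribution function general):

```python
import math
from statistics import NormalDist

def inefficiency(index):
    """Paul-Shankar device: efficiency r = Phi(index) lies in (0, 1), so
    u = -log r = -log Phi(index) is a nonnegative, deterministic function
    of the (hypothetical) index, which depends on the parameters gamma."""
    r = NormalDist().cdf(index)   # efficiency in (0, 1)
    return -math.log(r)           # implied inefficiency u >= 0

u0 = inefficiency(0.0)   # index 0 -> efficiency 0.5 -> u = log 2
```

A larger index maps to higher efficiency and hence lower inefficiency, which is what allows the $2K$ one-sided components to close the system.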

Empirical application
We use the same data as in Lai and Kumbhakar (2019), which have been used before by Rungsuriyawiboon and Stefanou (2008). We have panel data on n = 82 US electric power generation plants during 1986-1997 (T = 12). The three inputs are labor and maintenance, fuel, and capital. We use a production frontier approach. Output is net steam electric power generation in megawatt-hours. Input prices are available, and a time trend is included in both the production function and the predetermined variables. MCMC is implemented using 150,000 draws, discarding the first 50,000 to mitigate possible start-up effects. Since we have 984 observations, we impose the constraints in (40) and (41) at randomly chosen points. The reason is that imposing the constraints in (40) and (41) at all points compromises the flexibility of the translog and reduces it to the Cobb-Douglas production function, which is, clearly, very restrictive. We select the points using the following methodology. Suppose we impose the constraints at the means of the data and at $P$ other points, where $P < nT$. The points are randomly chosen, and we set $\bar P = 500$, which is, roughly, half the number of available observations. $P$ itself is randomly chosen, uniformly distributed in $\{1, 2, \ldots, \bar P\}$. We repeat the process 10,000 times, and we compute the marginal likelihood of the model, defined as:

$$M(D) = \int p(D\,|\,\theta)\,p(\theta)\,d\theta,$$

where $\theta$ collects all parameters of the model. The integral is not available analytically but can be computed numerically using the methodology of Perrakis et al. (2015). In turn, we select the value of $P$, as well as the particular points at which the constraints are imposed, by maximizing the value of $M(D)$. For each $P$, we average across all datasets with this number of points, and we present the normalized log marginal likelihood in Fig. 6. Marginal posterior densities of input elasticities are reported in Fig. 1.
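The random search over $P$ and the constraint points can be organized as a simple loop. The sketch below is generic: `score` stands in for the log marginal likelihood computation (e.g., by the method of Perrakis et al. (2015)), which we do not reproduce here, and the toy score used in the example is purely illustrative.

```python
import numpy as np

def select_constraint_points(n_obs, P_max, n_trials, score, seed=0):
    """Draw P ~ Uniform{1, ..., P_max} and a random subset of P observation
    indices n_trials times; keep the subset with the highest score (a
    stand-in for the log marginal likelihood of the model with constraints
    imposed at those points)."""
    rng = np.random.default_rng(seed)
    best_idx, best_val = None, -np.inf
    for _ in range(n_trials):
        P = int(rng.integers(1, P_max + 1))
        idx = rng.choice(n_obs, size=P, replace=False)
        val = score(idx)
        if val > best_val:
            best_idx, best_val = idx, val
    return best_idx, best_val

# Toy score that peaks near 50 points, mimicking the shape in Fig. 6
toy_score = lambda idx: -abs(len(idx) - 50)
idx, val = select_constraint_points(n_obs=984, P_max=500, n_trials=400,
                                    score=toy_score, seed=1)
```

In the application, each call to `score` would involve a full MCMC run, which is why the search is the expensive part of the procedure.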
Without imposing the theoretical constraints, we have a number of violations in the fuel and capital elasticities, and even in labor (where there is a distinct mode around zero). After imposing the constraints, the marginal posteriors are much more concentrated around their means or medians, showing that the imposition of theoretical constraints improves the accuracy of statistical inference for these elasticities.
Marginal posterior densities of aspects of the model are reported in Fig. 2. Technical efficiency is defined as $r_{it} = e^{-u_{it}}$, where $u_{it} \ge 0$ represents technical inefficiency. Technical change (TC) is defined as the derivative of the log production function with respect to time, viz.

$$TC_{it} = \frac{\partial y_{it}}{\partial t}, \qquad PG_{it} = TC_{it} + EC_{it} + SCE_{it},$$

where $EC_{it}$ denotes efficiency change and $SCE_{it}$ is the scale effect (Kumbhakar et al. 2015, equation 11.8). Under monotonicity and/or concavity, technical efficiency averages 85% and ranges from 78 to 93%. Without imposition of the theoretical constraints, technical efficiency is considerably lower, averaging 78% and ranging from 74 to 84%. Therefore, imposing the constraints is quite informative for efficiency and delivers results that are different from those of an unrestricted translog production function. Technical change averages 1% and ranges from -3 to 5% per annum. Efficiency change is much more pronounced when monotonicity and/or concavity restrictions are imposed: without the restrictions, it averages 1% and ranges from -1 to 3.5%; with the restrictions in place, it averages 3.2% and ranges from -1 to 6%. In turn, productivity growth (the sum of technical change, efficiency change, and the scale effect) averages 4.2%, relative to only 2% in the translog model without the constraints.
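The decomposition behind these numbers is just the sum $PG = TC + EC + SCE$. As an arithmetic check on the reported averages (a sketch; the per-observation values in the paper come from the posterior, and the near-zero average scale effect is implied by the reported figures, not stated directly):

```python
def productivity_growth(tc, ec, sce):
    """Productivity growth as the sum of technical change, efficiency
    change, and the scale effect (Kumbhakar et al. 2015, eq. 11.8)."""
    return tc + ec + sce

# Average annual rates reported above: TC = 1%, EC = 3.2%, SCE ~ 0
pg = productivity_growth(0.010, 0.032, 0.0)   # approximately 0.042, i.e., 4.2%
```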
In relation to (13), let us partition

$$\Sigma = \begin{bmatrix} \sigma_{11} & \sigma_1' \\ \sigma_1 & \Sigma_* \end{bmatrix},$$

where $\sigma_{11}$ is the variance of $v_{it,1}$, $\sigma_1$ represents the vector of covariances between $v_{it,1}$ and $v_{it}$, and $\Sigma_*$ is the covariance matrix of $v_{it}$. To examine whether the artificial error terms $v_{it}$ are of quantitative importance, we can use the measure $|\Sigma_*|/\sigma_{11}$. This measure provides the (generalized) variability of $v_{it}$ in terms of the variance of $v_{it,1}$, viz. the stochastic error in the production function.
Aspects of the posterior distribution of the model are reported in Figs. 2 and 3 (reported are posterior means, with posterior standard deviations in parentheses). The marginal posterior density of the measure $|\Sigma_*|/\sigma_{11}$ is reported in the upper left panel of Fig. 3. The (generalized) variance of the two-sided error terms in the constraints is only 3.5% relative to the variance of the production function error term, implying that the one-sided error terms account for most of the variability of the equations corresponding to the restrictions. In the upper right panel of Fig. 4, we report the marginal posterior density of $\lambda = \sigma_u^2/\sigma_{11}$, which represents the signal-to-noise ratio in frontier models. This ratio averages 1.3 (ranging from 0.5 to 2.7) without the constraints and 2.5 (ranging from 1.5 to 3.7) when the constraints are imposed. This suggests that the imposition of theoretical constraints allows for more precise inferences in the stochastic frontier model. To allow for the presence of the constraints, it is more appropriate to define the signal-to-noise ratio as $\lambda^* = \sigma_u^2/|\Sigma|$. The marginal posterior density of $\lambda^*$ is reported in the bottom panel of Fig. 3. Evidently, the new measure is lower compared to $\lambda$, but it is still considerably larger than the ratio without imposition of the theoretical constraints.
Posterior moments are presented in Table 1. Another important issue is whether posterior predictive densities of efficiency estimates are more informative relative to unconstrained estimates. Unconstrained maximum likelihood estimates and Bayes estimates that impose concavity sometimes yield higher efficiency estimates and sometimes lower ones. Standard errors with concavity imposed are, more often than not, lower than standard errors without it, but there are some exceptions. On the other hand, the results of O'Donnell and Coelli (2005) suggest that imposing monotonicity and curvature yields more precise estimates (their Table 3 and Figures 2-9).
In Fig. 4, we present posterior predictive densities of efficiency for nine randomly selected observations.
In line with O'Donnell and Coelli (2005) or O'Donnell et al. (1999), we find that, more often than not, the posterior predictive efficiency densities are more concentrated around their modal values. The posterior predictive efficiency densities of other plants behave in the same way, and results are available on request. A related issue is whether imposition of monotonicity and curvature results in stochastic dominance over the model without these restrictions. From the evidence in Fig. 5, where we report normalized cumulative distribution functions (cdfs), we have stochastic dominance of the model with monotonicity and curvature only for plants 6 and 9. Therefore, as a rule, imposition of the restrictions does not necessarily imply stochastic dominance, mostly because the average posterior predictive efficiency estimates change as well.

Concluding remarks
An issue of great practical importance is the imposition of theoretical inequality constraints on cost or production functions. These constraints can be handled efficiently using a novel formulation that converts inequality constraints to equalities using surpluses, which are treated in the context of stochastic frontier analysis. The idea has been developed independently by Huang and Huang (2019). However, those authors did not deal with the case of cost-share systems (in which more problems arise and need to be addressed), nor did they allow for correlation between violations of monotonicity and curvature. There are two problems that are successfully resolved in this paper. First, the constraints are not independent, as it is known, for example, that imposing monotonicity leads to fewer violations of concavity. Second, when explanatory variables are endogenous, special endogeneity problems arise which cannot be solved easily. In turn, special techniques are proposed to address this issue, and they are shown to perform well in an empirical application.

MCMC associated with (19)
Conditional on all other parameters, the posterior density of $u_i$ is a truncated multivariate normal. We remind the reader that in the main text we assumed $\Sigma = \mathrm{diag}(\sigma^2, \omega^2 I_m)$ and $\Phi = \mathrm{diag}(0, \varphi^2 I_m)$. Here, we allow for general covariance matrices $\Sigma$ and $\Phi$ with inverted-Wishart priors. We adopt general covariance matrices for the following reasons. It is always possible to fix $\Sigma$ in advance in the form $\Sigma = \mathrm{diag}(\sigma^2, \omega^2 I_m)$, where $\sigma^2$ remains an unknown parameter but $\omega$ is fixed to a small value (say 0.001). In this case, the posterior conditional distribution of $\sigma^2$ is:

$$\sigma^2 \,|\, \beta, u, y \sim \frac{\sum_{i=1}^n (y_i - x_i'\beta + u_i)^2}{\chi^2_n}.$$

We do not recommend this practice for three reasons. First, assuming independent two-sided errors is too restrictive, as the imposition of one constraint is not independent of the imposition of other constraints. Second, assuming independent one-sided errors is too restrictive as well. Third, a common value of $\varphi$ is too restrictive, as different constraints may require different surpluses. With regard to the third point, however, it is not difficult to replace $\varphi^2 I_m$ with an $m \times m$ diagonal matrix whose diagonal elements are $\varphi_1^2, \ldots, \varphi_m^2$. Moreover, it is possible to treat the one-sided covariance as a general matrix (denoted $\Phi$), as we do here.
The posterior conditional distributions are in the well-known inverted-Wishart family:

$$\Sigma \,|\, \cdot \sim \mathcal{IW}\Big(\bar A + \sum_{i=1}^n \varepsilon_i\varepsilon_i',\ \bar a + n\Big), \qquad \Phi \,|\, \cdot \sim \mathcal{IW}\Big(\bar B + \sum_{i=1}^n U_iU_i',\ \bar b + n\Big),$$

where $\varepsilon_i = \psi_i - Z_i\beta + JU_i$ stacks the two-sided residuals ($\psi_i = [y_i,\ 0_{(1\times m)}]'$, $Z_i = [x_i,\ -X_{i0}']'$, $J = \mathrm{diag}(1, -I_m)$, $U_i = [u_i,\ u_{i0}']'$), and $(\bar A, \bar a, \bar B, \bar b)$ are prior parameters. Finally, the posterior conditional distribution of $\beta$ is multivariate normal:

$$\beta \,|\, \cdot \sim \mathcal{N}(\hat b, \hat V), \qquad \hat V = \Big(\sum_{i=1}^n Z_i'\Sigma^{-1}Z_i\Big)^{-1}, \qquad \hat b = \hat V \sum_{i=1}^n Z_i'\Sigma^{-1}(\psi_i + JU_i).$$

All conditionals are in standard forms, which facilitates random number generation for the implementation of MCMC.
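The inverted-Wishart updates are directly available in SciPy. A sketch of one covariance draw; the prior degrees of freedom and scale matrix are hypothetical, and the residual matrix stands in for the stacked $\varepsilon_i$:

```python
import numpy as np
from scipy.stats import invwishart

def draw_covariance(resid, prior_df, prior_scale, seed=None):
    """One inverted-Wishart draw for an error covariance matrix given an
    (n x p) matrix of residuals: the standard conjugate update with
    posterior df = prior_df + n and scale = prior_scale + resid'resid."""
    n = resid.shape[0]
    S = prior_scale + resid.T @ resid
    return invwishart.rvs(df=prior_df + n, scale=S, random_state=seed)

rng = np.random.default_rng(7)
resid = rng.standard_normal((200, 3))            # stand-in residuals
Sigma_draw = draw_covariance(resid, prior_df=5, prior_scale=np.eye(3), seed=7)
```

The same function serves for both $\Sigma$ and $\Phi$, with the one-sided components $U_i$ in place of the two-sided residuals.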

MCMC associated with (29)
We remind the reader that we assume $\Sigma = \sigma^2 I$, $\xi \sim \mathcal{N}(0, \omega^2 I)$, and $u \sim \mathcal{N}^{+}_r(0, \sigma_u^2 I)$. In this case, there is no technical inefficiency (viz. $u_i = 0$), and all surpluses share the same scale parameter.
The conditional posterior distribution of $\sigma^2$ is of the standard inverted-gamma form.

[Figure: autocorrelation functions of the MCMC draws, comparing our approach with Terrell (1996) and OC (2005); panel (c) refers to the scale parameters, and in panels (a), (c), and (d) the median acf across all parameters is reported.]

We use MCMC to obtain draws from $p(\beta, \gamma, \Sigma, \sigma_u, u_1 \,|\, D)$. It is possible to integrate $\Sigma$ out analytically from (16), using properties of the inverted-Wishart distribution (Zellner 1971, p. 229, formula (8.24), and p. 243, formula (8.86)). First, we draw $\sigma_u$ from its posterior conditional distribution. To update $\beta$ and $\gamma$, we use the algorithm of Girolami and Calderhead (2011), which uses first- and second-order derivative information from the log of (17).
From Fig. 6, we see that the log marginal likelihood is largest when we use, approximately, 50 points to impose the monotonicity and curvature restrictions.
The autocorrelation functions for technical inefficiency (panel (b)) and the scale parameters (panel (c)) are similar. Reported in panel (c) is the GCD for the latent variables in our constraints. From the GCD, we can see that MCMC has converged (Fig. 11).