Stochastic frontier estimation through parametric modelling of quantile regression coefficients

Stochastic frontiers are a very popular tool used to compare production units in terms of efficiency. The parameters of this class of models are usually estimated through the classic maximum likelihood method, even if, in recent years, some authors have suggested conceiving and estimating the production frontier within the quantile regression framework. The main advantages of the quantile approach lie in the weaker assumptions about the data distribution and in the greater robustness to the presence of outliers with respect to the maximum likelihood approach. However, empirical evidence and theoretical contributions have highlighted that quantile regression applied to the tails of the conditional distribution, namely the frontiers, suffers from instability in the estimates and needs specific tools and approaches. To overcome this limitation, we propose to model the parameters of the stochastic frontier as a function of the quantile, in order to smooth their trend and, consequently, reduce their instability. The approach is illustrated using real data and simulated experiments, confirming the good robustness and efficiency properties of the proposed method.


Introduction
In the last decades, several methods have been proposed in the literature to estimate production or cost frontiers; this research stream (without claiming to be exhaustive) concerned the generalization of the baseline parametric stochastic frontier model (SFA, Kumbhakar and Lovell 2004; Kumbhakar et al. 2020) to panel data (see e.g. Battese and Coelli 1995; Greene 2005), to heterogeneity and spatial dependence (see e.g. Billé et al. 2018; Fusco and Vidoli 2013; Tsionas and Michaelides 2016; Kutlu et al. 2020), to more flexible functional forms (semiparametric, Fan et al. 1996; semi-nonparametric, Kuosmanen 2012) and to the generalization of error term distributions (Greene 2003; Papadopoulos 2021). A multitude of methods, therefore, have provided access to a plurality of explanatory possibilities concerning the heterogeneous behaviours of production units in terms of efficiency.
All these methods, however, focused more on the correct methodological specification and its properties than on the interpretative capabilities the method could offer, or rather on the concrete subsequent applicability in private or public policies; in other terms, the focus has been more on the long-run benchmark estimate (coincident with the estimated frontier) than on the partial benchmark references that can be useful in the short and medium term.
Quantile regression (QR, Koenker and Bassett 1978; Koenker 2005), conversely, can represent a steady approach towards the long-term benchmark, given that it can design gradual paths of return from inefficiency. In other terms, by minimizing an asymmetrically weighted sum of absolute errors, quantile regression models allow one to go "beyond models for the conditional mean" (Koenker and Hallock 2001), making it possible to derive a different partial benchmark reference for each quantile of the dependent variable analysed.
But it is exactly this "plurality of benchmark references" that is paradoxically the greatest weakness in the concrete use of these methods in the field of estimating production efficiency: how to identify the feasible production frontier? How to identify the stochastic part of the random noise? Jradi and Ruggiero (2019) solve this crucial issue by suggesting a heuristic algorithm to estimate the specific quantile of the conditional output distribution corresponding to the true stochastic frontier, paving the way for the use of quantile models in the field of efficiency estimation. Our proposed approach lives in this novel research stream, combining a more general method, the Frumento and Bottai (2016) quantile regression coefficients modelling (QRCM) approach, with the Jradi and Ruggiero (2019) quantile selection method. This approach produces improvements both from an economic point of view, since it makes it possible to design gradual and consistent paths to the recovery of inefficiency, and from a statistical point of view: in particular, the absence of assumptions on the error/inefficiency term and the robustness to outliers (Furno and Vistocco 2018) represent two crucial aspects in the practical application of frontier models.
The ultimate aim of our paper, therefore, is essentially methodological: to offer to the scientific and applied debate an original and flexible estimation method that bypasses the limitations (i) of SFA in relation to assumptions about the error distribution 1 and to robustness in the presence of outliers and (ii) of QR models in relation to the lack of monotonicity in the trend of the estimated coefficients as the quantile increases, or rather, in economic terms, to identify partial and feasible benchmark references that can be used to gradually reduce inefficiency. In other terms, the proposed approach aims to be a method with clear methodological properties, but also rich in terms of application, since it provides not only an estimate of inefficiency, but also, and above all, references that allow partial benchmarks to be identified in order to overcome such inefficiency.
The remainder of the paper is organized as follows. In Sect. 2, methods and theoretical approaches relating to stochastic frontiers and quantile regressions are outlined, clarifying the methodological contribution of this paper; the properties of the proposed method and its applicative features are then highlighted on both some case studies (Sect. 3) and simulated data (Sect. 4). Section 5 is devoted to concluding remarks.

Frontier QRCM model
Standard quantile regression (Koenker and Bassett 1978; Koenker and Hallock 2001) is a regression technique which aims to estimate the conditional τth quantile of a response variable y given covariates x = (x_1, ..., x_q) and, assuming a linear relationship between y and x, it can be formulated as follows:

Q_y(τ | x) = β_0(τ) + β_1(τ) x_1 + ... + β_q(τ) x_q,

where τ ∈ (0, 1) is the quantile and the coefficient vectors β(τ) are non-smooth functions of τ. The parameter β(τ) plays a key role in QR models, but it can be highly variable in a random form for each quantile, especially in the distribution tails (broken straight line in Fig. 1), leading to fitted quantile functions that are not monotonically increasing in τ (as shown in the first plot in Fig. 2). It should be noted that this non-monotonicity represents a crucial flaw of the standard model in economic terms, because it does not allow for the design of coherent and consistent efficiency recovery policies.
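The mechanics of this estimator can be illustrated with a short Python sketch (the toy data-generating process and all names are our own assumptions, not the paper's): β(τ) is fitted independently at each quantile by minimizing the asymmetrically weighted sum of absolute errors, which is exactly why the tail estimates are free to jump around from one quantile to the next.

```python
import numpy as np
from scipy.optimize import minimize

def pinball_loss(beta, X, y, tau):
    """Koenker-Bassett check loss: residuals weighted tau above, (1 - tau) below."""
    r = y - X @ beta
    return np.sum(np.where(r >= 0, tau * r, (tau - 1) * r))

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(1, 4, n)
X = np.column_stack([np.ones(n), x])       # intercept + one covariate
y = 1.0 + 2.0 * x + rng.normal(0, 1, n)    # true slope is 2 at every quantile

# each quantile is estimated separately: no smoothness across tau is imposed,
# which is the source of the instability in the distribution tails
for tau in (0.5, 0.9, 0.99):
    res = minimize(pinball_loss, np.zeros(2), args=(X, y, tau),
                   method="Nelder-Mead", options={"maxiter": 2000})
    print(tau, res.x.round(2))
```

Running the loop shows the τ = 0.99 estimates, based on very few effective observations, drifting away from the stable τ = 0.5 fit.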
By modelling the relationships between variables beyond the mean, QR is particularly useful when outcomes are non-normally distributed and have nonlinear relationships with the predictor variables, relaxing the common regression assumptions and making no assumptions about the distribution of the residuals. Given these premises, QR is less sensitive to extreme values than standard regression models, proving its distributional robustness in the sense of "insensitivity to small deviations from the assumptions the model imposes on the data" (Huber 1981).
Continuous and monotone β_j(τ|θ) functions allow two issues to be addressed: (i) they bypass the instability of the estimates on the extremal quantiles highlighted by many researchers (see e.g. Chernozhukov 2005), "due to data sparsity" (Li and Wang 2019) and "heavy-tailed distribution" (Huang and Nguyen 2017), and (ii) they identify estimated curves that increase monotonically as the quantile rises, avoiding quantile crossings between multiple estimated frontiers (as in the Wang et al. (2014) proposal for nonparametric quantile regression), as shown in the second plot in Fig. 2; starting from these partial benchmark reference curves, it is therefore possible to define intermediate benchmarks useful in the short and medium term. Sottile et al. (2019) suggested a penalized method that can address the selection of covariates in the QRCM modelling framework "directly on the parameters of the conditional quantile function [and] using information on all quantiles".
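A minimal version of this parametrization can be sketched in Python (a deliberately tiny two-term basis, β_j(τ|θ) = θ_j0 + θ_j1 Φ⁻¹(τ); the basis choice, all names and the toy data are our assumptions, not the Frumento and Bottai (2016) implementation): the key point is that a single integrated check loss is minimized over a whole grid of quantiles at once, so each τ borrows strength from all the others.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def beta_of_tau(theta, tau):
    """beta_j(tau | theta) = theta_j0 + theta_j1 * qnorm(tau): one smooth
    function per coefficient, shared across all quantiles."""
    basis = np.column_stack([np.ones_like(tau), norm.ppf(tau)])  # (T, 2)
    return basis @ theta.reshape(2, -1)                          # (T, p)

def integrated_loss(theta, X, y, taus):
    """Sum of check losses over a quantile grid: all taus fitted jointly."""
    B = beta_of_tau(theta, taus)       # (T, p)
    R = y[:, None] - X @ B.T           # (n, T) residuals, one column per tau
    return np.sum(np.where(R >= 0, taus * R, (taus - 1) * R))

rng = np.random.default_rng(1)
n = 300
x = rng.uniform(1, 4, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(0, 1, n)   # true: beta0(tau) = 1 + qnorm(tau), beta1 = 2

taus = np.linspace(0.05, 0.95, 19)
res = minimize(integrated_loss, np.zeros(4), args=(X, y, taus),
               method="Nelder-Mead", options={"maxiter": 5000})
theta = res.x
print(beta_of_tau(theta, np.array([0.5, 0.9])).round(2))
```

Because β(τ|θ) is linear in θ and the check loss is convex, the integrated objective stays convex; the fitted quantile functions do not cross as long as the coefficient on Φ⁻¹(τ) remains positive at the observed x values.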
In recent years, QR has been used to estimate efficiency (Bernini et al. 2004; Liu et al. 2008; Roth and Rajagopal 2018), pointing out two improvements over models such as SFA that use maximum likelihood estimation: its robustness to the presence of outliers/abnormal points and its independence from the distributional choice, providing a useful comparison for applied researchers.
Despite these methodological and empirical advantages, the main critical point has always been the discretionary choice of the "right" quantile corresponding to the production frontier. Some authors, e.g. Knox et al. (2007), Liu et al. (2008) or Behr (2010), starting from the well-known finding that if no inefficiency is present in the sample the SFA frontier corresponds to an OLS estimation, and hypothesizing that this is also true for quantile regression (Horrace and Parmeter 2018), suggested (in a very subjective way, in our opinion) choosing the quantile for production frontier estimation above τ = 0.5 (the median), and preferably from 0.8 to 0.975, in order to be as close as possible to a frontier while, however, ignoring distributional assumptions. Jradi and Ruggiero (2019) and Jradi et al. (2021) finally solve this limitation by proposing a heuristic method to choose the "right" quantile, demonstrating that, if the quantile is identified by considering the conditional distribution of the output given the regressors under a specific distributional setting of the residuals, it is the one consistent with the location of the stochastic frontier. Tsionas (2020), Tsionas et al. (2020) and Zhang et al. (2021) represent the latest methodological and empirical updates of this growing literature.
In the SFA production setting, residuals are represented as a compound error ε = v − u, where v is the random term, i.e. v ∼ N(0, σ_v²), and u is the inefficiency term with a positively skewed distribution such as the "Half-Normal", i.e. u ∼ N⁺(0, σ_u²), or the "Exponential" (Jradi et al. 2021), i.e. u ∼ Exp(1/σ_u); the compound error ε therefore follows a negatively skewed distribution, respectively "Normal Half-Normal" or "Normal Exponential". Given these premises, we refer to "wrong skewness" when the empirical distribution of the residuals presents a positive skewness instead.
Given these assumptions, and following the Jradi and Ruggiero (2019) proposal, the optimal quantile corresponding to the true location of the production frontier, in the case of the "Normal Half-Normal" distribution, can be expressed as:

τ* = 1/2 + (1/π) arctan(σ_u/σ_v),

which gives information about the quantity of inefficiency in the sample; following Fan et al. (1996), Jradi and Ruggiero (2019) also derived the λ = σ_u/σ_v parameter, which gives an immediate suggestion of the amount of inefficiency with respect to the noise; this parameter can be expressed as 3:

λ = tan(π(τ* − 1/2)).

Given these premises, the empirical algorithm for estimating the "right" quantile involves iterating over different quantiles (e.g. τ = 0.50, 0.51, ..., 0.99), comparing the related likelihoods and choosing the quantile with the highest likelihood value, in order to minimize |τ* − τ|.
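The mapping between λ and τ* and its inverse can be written down directly; the short Python transcription below (our own, standard library only) makes the relationship easy to check numerically.

```python
import math

def tau_star(lam):
    """Frontier quantile for the Normal-Half-Normal case: 1/2 + arctan(lambda)/pi."""
    return 0.5 + math.atan(lam) / math.pi

def lam_of_tau(tau):
    """Inverse mapping: lambda = tan(pi * (tau - 1/2))."""
    return math.tan(math.pi * (tau - 0.5))

for lam in (0.5, 1.0, 3.0):
    print(lam, round(tau_star(lam), 3))
# lambda = 0 (no inefficiency) gives tau* = 0.5: the frontier collapses
# onto the median, consistent with the OLS argument recalled above
```

Note that τ*(1) = 0.75 exactly, since arctan(1) = π/4; larger λ (inefficiency dominating noise) pushes the frontier quantile towards 1.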
In this paper, the two methods outlined above are combined in order to join the flexibility and independence from functional assumptions of the QRCM method with the objectivity in the choice of the optimal production frontier of the Jradi and Ruggiero (2019) approach. In mathematical terms, τ* is the "right" QRCM quantile obtained by estimating the QRCM, also in this case, over different quantiles (e.g. τ = 0.50, 0.51, ..., 0.99) and chosen by minimizing the difference |τ* − τ|. Therefore, from a technical point of view, by imposing a parametrization and some degree of smoothness on the coefficients β, the fitted values, and consequently the residuals, are estimated by using information on all quantiles simultaneously; following this approach, it is possible to inherit the other advantages of parametric modelling, such as parsimony, ease of interpretation (Frumento and Bottai 2016) and applicability to cases (latent variables, missing or partially observed data, causal inference) where "parameters are harder to estimate in closed form" (Waldmann 2018) and where applying standard QR proves to be difficult and computationally inefficient. Please note that a useful criterion for finding the best smoothing function is a goodness-of-fit test; in this paper, following Frumento and Bottai (2016), a Kolmogorov-Smirnov test has been considered 4 (more detailed information can be found in Sect. 3).

3 This result is derived from the equation, well known in the SFA literature, σ = σ_v √(1 + λ²).
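Purely as an illustration of the selection step, the sketch below replaces the likelihood comparison used by the authors with a cruder COLS-style moment estimate of σ_u and σ_v (a stand-in of our own, not the paper's algorithm), and then picks the grid quantile closest to the implied τ*:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
sigma_v, sigma_u = 1.0, 1.5
eps = rng.normal(0, sigma_v, n) - np.abs(rng.normal(0, sigma_u, n))  # eps = v - u

# Normal-Half-Normal moment relations: the third central moment of eps is
# sqrt(2/pi) * (1 - 4/pi) * sigma_u^3 (negative under "correct" skewness)
e = eps - eps.mean()
m2, m3 = np.mean(e**2), np.mean(e**3)
sigma_u_hat = (m3 / (np.sqrt(2 / np.pi) * (1 - 4 / np.pi))) ** (1 / 3)
sigma_v_hat = np.sqrt(m2 - (1 - 2 / np.pi) * sigma_u_hat**2)
lam_hat = sigma_u_hat / sigma_v_hat

tau_implied = 0.5 + np.arctan(lam_hat) / np.pi
grid = np.round(np.arange(0.50, 1.00, 0.01), 2)
tau_chosen = grid[np.argmin(np.abs(grid - tau_implied))]  # minimize |tau* - tau|
print(round(float(tau_implied), 3), tau_chosen)  # true tau* here is about 0.813
```

With σ_u/σ_v = 1.5, the analytical target is τ* = 0.5 + arctan(1.5)/π ≈ 0.813; the moment estimate lands close to it, and the grid search then selects the nearest candidate quantile.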
Moreover, the choice of a functional form whose coefficients are highly correlated on the frontier with those of SFA allows one not only to include in the efficiency estimation all the properties of the ML approach, but also to transfer them to lower quantiles.
On the other hand, from an economic point of view, the availability of monotonically increasing functions at the observed covariate values across quantiles makes it possible to estimate different partial benchmark references in the short, medium and long term.

Properties of the proposed method: some case studies
In this Section, two empirical applications based on two datasets well known in the literature are proposed. In the first one (Sect. 3.1) the focus is on the comparison of the QR/QRCM methods with respect to SFA; this is the most favourable scenario for SFA, since there are no outliers in the data and the assumptions about the error distribution are met. In the second one (Sect. 3.2), instead, the SFA estimation brings out wrong skewness in the inefficiency term, showing very clearly the advantage of using QRCM-type estimation methods in this context.

Philippine rice farming dataset
In this Subsection, as previously stated, QRCM is compared to the standard QR approach in order to bring out two findings: (i) the QRCM capability to estimate β parameters that are more stable across quantiles and (ii) the closer approximation, in terms of estimation, to the SFA taken as a reference model, given an optimal smooth function for the β's and the estimation of an optimal quantile.
The Philippine rice farming dataset is widely used in the literature to compare frontier methods (see for example Coelli et al. 2005 or Rho and Schmidt 2015). The dataset contains annual data collected from 43 smallholder rice producers in the Tarlac region of the Philippines between 1990 and 1997 5. In this dataset, the output variable (y) is tonnes of freshly threshed rice and the main input variables are the area (area) of planted rice (hectares), the total labour (labour) used (man-days of family and hired labour) and the fertilizer (npk) used (kilograms); the related translog production frontier specification is defined, with x = (area, labour, npk), as:

ln(y_i) = β_0 + Σ_j β_j ln(x_ji) + (1/2) Σ_j Σ_k β_jk ln(x_ji) ln(x_ki) + ε_i.        (7)

The frontier specification reported in equation (7) has been estimated by the three methods; specifically, the QRCM approach 6 requires identifying the best smooth function for the quantile coefficients: this choice is clearly related to the empirical framework under consideration, either by choosing the smooth function on the basis of its theoretical properties or by using adjustment criteria. In this application, following Frumento and Bottai (2016), a Kolmogorov-Smirnov goodness-of-fit test has been used for this purpose: they suggest testing the null hypothesis H_0 : τ_1, ..., τ_n ∼ U(0, 1), since, by definition, at the true model τ_1, ..., τ_n are independently and identically distributed draws from a standard uniform distribution. Moreover, with the aim of better approximating the functional form on the frontier, a further criterion to select among the functional forms shown in Table 6 has been added in this paper, namely a high correlation of the obtained β's with those of the SFA. The mix of the two criteria has led to the choice of the function I(qnorm(τ^3)) + I(log(τ)) 7.
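The Kolmogorov-Smirnov criterion can be made concrete with a stylized Python check (the linear toy model is ours; in the application the τ_i come from the fitted QRCM): under a correctly specified quantile function, the fitted levels τ_i behave like uniform draws, and the KS distance from U(0, 1) measures how far the chosen smooth function is from that ideal.

```python
import numpy as np
from scipy.stats import kstest, norm

rng = np.random.default_rng(7)
n = 400
x = rng.uniform(1, 4, n)
y = 1.0 + 2.0 * x + rng.normal(0, 1, n)

# tau_i solves y_i = Q(tau | x_i); at the true model Q(tau | x) = 1 + 2x + qnorm(tau),
# so tau_i = pnorm(y_i - 1 - 2*x_i) and the tau_i must look like U(0, 1) draws
tau_hat = norm.cdf(y - 1.0 - 2.0 * x)
stat, pval = kstest(tau_hat, "uniform")
print(stat < 0.15)  # small KS distance: no evidence against this b(tau)
```

A badly chosen smooth function distorts the τ_i away from uniformity and inflates the KS statistic, which is how the test discriminates among the candidate forms in Table 6.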
In Fig. 3, the QR and QRCM β coefficient smooth functions for each quantile from 0.6 to 1 are plotted, showing that the QR coefficients are too volatile, especially for quantile values greater than 0.8 (those most relevant in terms of the production frontier), while the QRCM smooth function is able to approximate all the translog terms well.
In Table 1, a comparison of the production translog frontier β coefficients and of the efficiency-specific parameters, namely λ, the total variance σ² and the mean Fan et al. (1996) efficiency values (standard deviations in brackets), estimated with corrected ordinary least squares (COLS, Winsten 1957), SFA, QR and QRCM, is reported. Moreover, the optimal quantiles τ* obtained for QR and QRCM are shown.
It can be noted that the QRCM method is able to approximate the functional form of SFA better, in terms of linear coefficients, than QR, suggesting an economic interpretation closer to the SFA one. More in particular, and similarly to Jradi and Ruggiero (2019), the optimal τ quantiles estimated with QR and QRCM: (i) are very similar to those computed a posteriori, merely for comparison purposes, for the SFA and COLS frontiers (0.908 and 0.889, respectively); (ii) are close to the upper decile. The obtained average level of efficiency is about 0.467 for COLS and goes up to 0.729 for SFA, 0.744 for QR and 0.732 for QRCM. Moreover, the sum of the linear terms of the translog production frontier is close to one for all methods, indicating slightly decreasing returns to scale for the Philippine rice farms. Finally, the Spearman correlation index on the efficiency values, in Table 8 in the "Appendix", shows how the QR and, even more, the QRCM rankings differ more from the COLS ones than those of SFA do (respectively, 0.954, 0.931 and 0.989). Such a result, in this case, overcomes the criticism highlighted in Ondrich and Ruggiero (2001).

NBER manufacturing dataset
In this Subsection, the NBER manufacturing productivity dataset 8 (Bartelsman and Gray 1996) has been considered to highlight the properties of the proposed QRCM method with respect to SFA in the presence of the "wrong skewness" problem 9 (Green and Mayes 1991). Wrong skewness, in fact, may be ascribed to incorrect or outlier data, to an incorrect or incomplete specification of the production model 10, or to both, making the choice of the form of the inefficiency term "sometimes a matter of computational convenience" (Bonanno and Domma 2017). This dataset, already used in the literature to study the above-mentioned problem, has been employed primarily to propose new skewed densities for the compound error (see, among others, Li 1996; Carree 2002; Tsionas 2007; Almanidis and Sickles 2012; Almanidis et al. 2014; Bonanno and Domma 2017; Hafner et al. 2018) or adjustments of the estimator for finite samples (see, among others, Simar and Wilson 2009; Cai et al. 2021). In our case, however, no error distribution has to be assumed a priori to obtain model convergence.
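In practice, the "wrong skewness" diagnosis amounts to checking the sign of the skewness of the OLS residuals; a toy Python check under a correctly behaved data-generating process (names and numbers are our own assumptions):

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(5)
n = 5000
x = rng.uniform(1, 4, n)
# eps = v - u: two-sided noise minus half-normal inefficiency
y = 1 + 2 * x + rng.normal(0, 1, n) - np.abs(rng.normal(0, 1.5, n))

X = np.column_stack([np.ones(n), x])
resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
print(skew(resid) < 0)  # negative skew: the "correct" sign for a production frontier
```

Flipping the sign of the u term, or contaminating the sample with outliers, makes the printed value False, which mimics the situation in which SFA fails to converge while QR/QRCM remain usable.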
The NBER dataset contains information on 473 US manufacturing industries over 54 years (from 1958 to 2011) and, following Bonanno and Domma (2017) and Hafner et al. (2018), 54 sub-sectors from the textile industry over the years 1958-2011 are analysed. Also in this case, following the Hafner et al. (2018) approach, a cross-sectional estimation for each year has been carried out, with the aim of comparing the QRCM model with SFA and QR in terms of "wrong skewness".
In this dataset, the output variable (y) is the total value added and, as input variables, total employment (labour), cost of materials (materials), energy cost (energy) and capital stock (capital) are used; the Cobb-Douglas production frontier specification is defined as:

ln(y_i) = β_0 + β_1 ln(labour_i) + β_2 ln(materials_i) + β_3 ln(energy_i) + β_4 ln(capital_i) + ε_i.

For each year, the production frontier has been estimated by using OLS, SFA (Normal-Half-Normal, Normal-Exponential and Normal-t-Normal specifications), QR and QRCM models.
More specifically, as resulted from the best approximation among the functions in Table 6, the smooth function I(qnorm(τ)) + I(log(τ)) has been chosen for the QRCM for most of the years.
Results are reported in Fig. 4; it can be seen that as long as the skewness is "correct" (values in the bottom plot below 0, years from 1958 to 1998) all methods work in a similar way, but in the presence of "wrong" skewness (values in the bottom plot above 0, the last few years) the a posteriori τ* parameter for SFA collapses to the median, because SFA fails to estimate the inefficiency (no convergence of the maximum likelihood optimizer) regardless of the specification of the residuals (dark green straight line in Fig. 4). Finally, in Table 9 in the "Appendix", the estimated efficiency results, by method, are reported. In particular, it is noteworthy that in the years where the residuals present a "wrong" skewness the estimated efficiency is close to 1 for SFA, as it fails to detect inefficiency, unlike QR and QRCM, which are able to estimate it.

9 It occurs when the sign of the skewness of the empirical OLS residuals is positive instead of negative when, on the contrary, as pointed out in Sect. 2, in production efficiency ε = v − u, and so ε follows a negatively skewed distribution.

10 Although this is not always true; Hafner et al. (2018) in particular claim that "when observing the "wrong" skewness, most researchers are tempted to believe that the model is wrong, and we know that even a correct SFM allowing inefficient firms may produce the wrong sign for the skewness. This happens more often with small sample sizes or when the ratio Var(V)/Var(U) increases".

Simulations
The aim of this section is to assess, in a more systematic way, the properties of the QRCM model both in terms of estimating the frontier and in terms of estimating the inefficiency of the individual units. SFA and the Jradi et al. (2019) QR have been chosen as contrasting methods, as they represent the natural comparison on the side of stochastic efficiency and on that of the quantile approach, respectively. The production simulation setting mimics the Banker and Natarajan (2008) proposal, also followed by Johnson and Kuosmanen (2011), generating sample data by a cubic polynomial in x:

φ(x) = α_0 + α_1 x + α_2 x² + α_3 x³,

choosing α_0 = −37, α_1 = 48, α_2 = −12 and α_3 = 1 in order to ensure monotonicity and concavity in the range x = [1, 4]. Finally, in the efficiency setting, two key parameters must be defined: the error term v, set, as usual, from a two-sided Normal distribution N(μ_v, σ_v) with μ_v = 0 and σ_v = 1, and the inefficiency term u, which will be varied, in the following simulations, in absolute terms and in distributional form. After drawing the random variable x as a uniform [1, 4] for 200 units, the logarithm of the output y has been set as:

ln(y_i) = φ(x_i) + v_i − u_i.

Finally, two measures have been used to evaluate the performance of the proposed model against both the simulated frontier and the SFA and QR methods:
• the mean squared error (MSE), that is, the average squared difference between the simulated and the estimated values, in order to verify the accuracy of the frontier estimate: MSE = (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)²;
• the average of the absolute differences (Mean abs diff.) between the estimated and the true efficiencies, with the aim of evaluating the models on the efficiency estimation side: Mean_diff = (1/n) Σ_{i=1}^{n} |eff_i − êff_i|.
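The design just described can be sketched as follows (our reading of the setting; in particular the composition ln y = φ(x) + v − u and all the variable names are assumptions, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(3)
a0, a1, a2, a3 = -37.0, 48.0, -12.0, 1.0
phi = lambda x: a0 + a1 * x + a2 * x**2 + a3 * x**3   # cubic frontier

x = rng.uniform(1, 4, 200)                 # 200 units on [1, 4]
v = rng.normal(0, 1.0, 200)                # two-sided noise, sigma_v = 1
u = np.abs(rng.normal(0, 1.2, 200))        # half-normal inefficiency, sigma_u = 1.2
log_y = phi(x) + v - u                     # assumed composition of simulated output

# monotonicity and concavity of phi on [1, 4]
grid = np.linspace(1, 4, 100)
d1 = 3 * a3 * grid**2 + 2 * a2 * grid + a1   # phi'(x) = 3(x - 4)^2 >= 0
d2 = 6 * a3 * grid + 2 * a2                  # phi''(x) = 6x - 24 <= 0 on [1, 4]
print(d1.min() >= 0, d2.max() <= 0)
```

The derivative checks confirm why these α values were chosen: φ'(x) = 3(x − 4)² ≥ 0 and φ''(x) = 6x − 24 ≤ 0 on [1, 4], so the simulated frontier is monotone and concave over the sampled range.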

First simulation: half-normal inefficiency
Once the general framework of the simulation had been set up, some settings have been varied in order to assess the stability and flexibility of the models. In this first simulation, the inefficiency has been generated from a Half-Normal distribution with parameters μ_u = 0 and σ_u ∈ [0.6, 1.2, 1.8, 2.4, 3]; as a result, keeping in mind that σ_v is equal to 1, in the following simulations λ = σ_u/σ_v is equal, respectively, to [0.6, 1.2, 1.8, 2.4, 3]. Moreover, the choice of β(τ|θ) has been made among all the functions proposed in Table 6; this simulation setting is, therefore, the most favourable for SFA, since the inefficiency follows a standard Half-Normal distribution and no outlier/out-of-scale data are present. Figure 5 shows how the three models (QRCM, SFA and QR) all provide a substantially good fit to the frontier and how this result is quite stable as the inefficiency varies.
From a preliminary analysis (Fig. 6), the QR estimator seems to be substantially less accurate than the corresponding QRCM estimator (as is the quantile chosen by the Jradi and Ruggiero (2019) algorithm); these initial impressions will be verified hereafter.
The setting proposed has been the starting point for checking the performance of the three chosen methods in terms of frontier fitting and efficiency estimation. Table 2 reports mean and standard deviation for MSE and mean difference in absolute value of efficiencies over 1000 iterations varying λ.
In terms of MSE, it can be seen that the difference between the quantile-based methods and SFA tends to decrease as the inefficiency in the data increases, while QRCM always outperforms QR. This result is confirmed, even more clearly, by the average absolute difference between the inefficiency estimates and the true values, confirming a substantial equivalence of the methods under analysis in the case most favourable to SFA, i.e. the one in which the form of the inefficiency is Half-Normal and no outliers are included. But what if the form of the inefficiency is no longer standard, an issue that often occurs in real-world data? Section 4.2 will try to answer this question by varying the inefficiency distribution and including outliers in the simulated data.

Second simulation: varying inefficiency distribution
In this second simulation, therefore, always starting from the baseline setting proposed in Sect. 4, the finite sample performance of the proposed estimator has been examined by means of a Monte Carlo simulation (1000 replications), varying the distributional form of the inefficiency and considering three percentage levels of outliers over the total number of cases (1%, 3%, 5%); more specifically, outliers have been generated according to equation (9), in which the term α_0 has been set equal to −32. Six different distributions (see Table 3 and Fig. 7) for the u term have been chosen: (1) Half-Normal, with the aim of verifying the impact of outliers; (2) Skew-Normal (Azzalini and Valle 1996) with high positive marginal skewness; (3) Skew-Normal with low positive marginal skewness; (4) Skew-Normal with low negative marginal skewness; (5) Skew-Normal with high negative marginal skewness; (6) Gamma.
Not all distributions can be expressed in terms of mean and variance like the Half-Normal; therefore, in order to make the simulations comparable, the parameters of each distribution (for the analytical specification of the parameters, please see Table 10) have been set in such a way as to obtain similar means and variances; Table 3 verifies this result (results are reported for σ_u = 3; similar results for σ_u = 1 are available from the authors), also highlighting another key parameter, namely the skewness, which, as highlighted in Sect. 2, must be positive in SFA models in order to obtain convergence. Table 4 and Table 5 show, respectively, the average values of the MSE and of the absolute mean difference for the efficiencies by method, distribution of the inefficiency and percentage of outliers included in the simulated data. Some implications arise:
• The SFA approach, in the case of the Half-Normal distribution, proves to be very sensitive to the presence of outliers; this result is most evident when the inefficiency in the data is strongest.
• In the case of "wrong skewness" (negative Skew-Normal distributions) SFA does not converge, as already highlighted in Sect. 3.2, in all iterations and, therefore, performs worse than the quantile models; this effect increases as the skewness of the u term decreases, reflecting the fact that, as soon as the inefficiency data depart from the standard assumptions, the SFA model tends to estimate the production frontier inaccurately.
• QRCM performs better than QR both in terms of MSE and in terms of absolute difference, for all inefficiency distributions and all percentage levels of outliers.
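The moment matching used to make the six inefficiency distributions comparable can be sketched as follows for the Gamma case (parameter names are ours; SciPy's shape/scale parametrization is assumed):

```python
import numpy as np
from scipy.stats import halfnorm, gamma

rng = np.random.default_rng(4)
# reference moments: Half-Normal with sigma_u = 3
target_mean, target_var = halfnorm.stats(scale=3, moments="mv")

# Gamma with the same mean and variance: mean = k * theta, var = k * theta^2
theta = target_var / target_mean
k = target_mean / theta
g = gamma.rvs(k, scale=theta, size=100_000, random_state=rng)
print(round(float(g.mean()), 1), round(float(g.var()), 1))  # matches the Half-Normal moments
```

The Skew-Normal cases can be matched the same way via scipy.stats.skewnorm.stats; only the skewness, the parameter that drives the SFA convergence failures, is left free to differ across the six distributions.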

Final remarks
In this paper, the effects on efficiency estimates of the presence of outliers in the observed data and of the failure of distributional assumptions have been analysed. After a brief review of some recent developments in the robust estimation of stochastic frontiers based on quantile regression approaches, the use of a variant of these methods, based on modelling the β parameters as functions of the quantile τ, has been proposed. We have focused on this approach because such a model, very flexible and smooth, is fast and stable to introduce and offers a very practical route to a solid estimate of the efficiencies, insensitive to the presence of anomalous data and to distributions even very far from the classical Half-Normal and Exponential assumptions. The approach has then been illustrated using real data, already used in the literature, and simulated experiments. The results confirm that the proposed method offers good robustness properties and, in many cases, may be more efficient than the two main alternative estimation approaches, both robust, like quantile regression, and non-robust, like maximum likelihood. As has already been verified in previous studies (Song et al. 2017; Wheat et al. 2019; Zulkarnain and Indahwati 2021), the latter is extremely compromised by anomalous data and often, if the efficiency distribution is different from the one specified, the algorithms used for its optimization do not converge and fail in the search for a maximum (Meesters 2014). On the contrary, quantile regression does not seem to suffer from similar problems, but its estimation capabilities are seriously compromised by a known and evident instability of the parameters relating to the higher quantiles, which unfortunately are exactly those needed by stochastic frontier models. Our suggested method appears to be successful in simultaneously solving the drawbacks of its competitors.
At the acceptable price of a slight loss of estimation efficiency when the data are not contaminated by outliers and there is no doubt about the theoretical distribution of the efficiencies, it has provided reliable estimates in every real and simulated case. The advantage of not necessarily requiring preliminary tests to verify distributional hypotheses and regression diagnostics, nor the application of complex procedures for the automatic identification of outliers, should also be emphasized. Finally, the recovery of the parametrization within the quantile regression approach gives greater flexibility to its practical use, making it simple to impose constant parameters or frontiers that do not cross as the adopted quantile increases. This flexibility could allow new and simpler developments also in the methodological field, for example by introducing in these models some dependence parameters for time, space or network data. But this is left for future research, along with the possible extension of the proposed model to panel data, as it is currently defined only for cross-sectional data.
Funding Open access funding provided by Università degli Studi di Urbino Carlo Bo within the CRUI-CARE Agreement. The authors received no specific funding for this work.

Appendix C. Efficiency results-NBER manufacturing
See Table 9.