Bayesian Analysis of Nonnegative Data Using Dependency-Extended Two-Part Models

This article is motivated by the challenge of analysing an agricultural field experiment with observations that are positive on a continuous scale or zero. Such data can be analysed using two-part models, where the distribution is a mixture of a positive distribution and a Bernoulli distribution. However, traditional two-part models do not include any dependencies between the two parts of the model. Since the probability of zero is anticipated to be high when the expected value of the positive part is low, and vice versa, this article introduces dependency-extended two-part models. In addition, these extensions allow for modelling the median instead of the mean, which has advantages when distributions are skewed. The motivating example is an incomplete block trial comparing ten treatments against weed. Gamma and lognormal distributions were used for the positive response, although any density with support on the positive real numbers can be accommodated. In a cross-validation study, the proposed new models were compared with each other and with a baseline model without dependencies. Model performance and sensitivity to the choice of priors were investigated through simulation. A dependency-extended two-part model for the median of the lognormal distribution performed best with regard to mean square error in prediction. Supplementary materials accompanying this paper appear online.


INTRODUCTION
In many areas of applied statistics, distributions are positively skewed and the variance increases with the mean. These phenomena violate the assumptions of normality and homoscedasticity needed for statistical inference in traditional analysis of variance and regression. In addition, the data may include an excess of zeros. For example, in a study of garden bird abundance, average count, which is effectively continuous, spikes at zero for many of the species (Swallow et al. 2016); in a study of precipitation, there are some months without rain (Harvey and Van der Merwe 2012; Fuentes et al. 2008; Sun and Stein 2015); and in a study of bycatch data, the endangered hammerhead shark is often missing (Cantoni et al. 2017). Our application is an agricultural plant protection experiment, where the weed of interest does not grow in every plot. In plant protection experiments, treatments effectively controlling the weed either eliminate the weed, giving rise to zeros in the dataset, or result in small positive levels of weed biomass. Non-effective treatments, on the other hand, do not eliminate the weed but typically show large levels of weed biomass. Thus, there is a negative relationship between the probability of zero and the level of biomass conditioned on the weed being present. We wanted to model the probability of zero explicitly as a function of the conditional level of weed biomass, which was not possible using previously proposed methods.
As is well known, the median is more robust against outliers than the mean. Yet, models for means are more common than models for medians. We shall propose dependency-extended two-part models for both the mean and the median.
Using two-part models, the probability of zero and the level of the positive observations are modelled separately. Thus, the distribution is assumed to be a mixture of a Bernoulli distribution and a positive distribution, which cannot take negative values. Duan et al. (1983) used a probit link for modelling the Bernoulli event and fitted a log-linear model for the positive part, while Zhou and Tu (2000) and Hautsch et al. (2013) used a logit link for fitting the Bernoulli part. Chen and Qin (2003) and Yang et al. (2016) used an empirical likelihood, thus avoiding exact assumptions on the distribution of the positive part of the data. In its original form, the two parts of the two-part model can be fitted separately, using different or common explanatory variables. This is possible since the likelihood can be written as a product of two factors corresponding to the two parts of the model. However, if the two parts share the same parameters (Moulton et al. 2002), or if the parameters of the two parts are constrained to be related, such factorization is not possible (Mills 2013). In analysis of count data, two-part models are known as hurdle models (Rose et al. 2006). Zero-inflated Poisson and zero-inflated negative binomial models are other popular options for count data. Tang et al. (2018) used such models and assumed a dependency between the probability of zero and the Poisson mean, but only as a result of common predictors.
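The factorization of the two-part likelihood can be illustrated with a small sketch. The following Python code (the paper itself uses R with OpenBUGS) simulates hypothetical zero-augmented lognormal data and shows that the Bernoulli part and the positive part can be estimated separately; all parameter values are invented for illustration only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulate a simple two-part sample: zeros with probability p_true,
# lognormal positives otherwise (hypothetical values).
p_true, meanlog, sdlog, n = 0.4, 2.0, 0.5, 500
is_zero = rng.random(n) < p_true
y = np.where(is_zero, 0.0, rng.lognormal(meanlog, sdlog, n))

# The log-likelihood splits into a Bernoulli factor and a positive factor,
# so each part can be maximized on its own.
n_zero = int((y == 0).sum())
p_hat = n_zero / n                  # MLE of the Bernoulli part
pos = y[y > 0]
meanlog_hat = np.log(pos).mean()    # MLE of the lognormal part
sdlog_hat = np.log(pos).std()

# The joint log-likelihood is the sum of the two parts' log-likelihoods.
ll_bern = n_zero * np.log(p_hat) + (n - n_zero) * np.log(1 - p_hat)
ll_pos = stats.lognorm.logpdf(pos, s=sdlog_hat, scale=np.exp(meanlog_hat)).sum()
ll_joint = ll_bern + ll_pos
```

This separability is exactly what is lost once the two parts share parameters, as in the dependency-extended models below.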
In this article, the traditional two-part model is extended to accommodate explicit functional dependencies between the level of the positive part and the probability of zero. Four models are explored: a baseline model without correlated parts (Model 0), a model with correlated random effects (Model 1) and two dependency-extended models (Models 2 and 3), which include functional dependencies between the two parts. Let t, where t = 1, . . . , T, be the treatment index and s, where s = 1, . . . , S, be the block index. The models are:

Models 0 and 1: log(μ_f,st) = α_t + b_s and logit(p_st) = β_t + v_s,
Model 2: log(μ_f,st) = γ_1 + γ_2 logit(p_st) + b_s and logit(p_st) = α_t,
Model 3: log(μ_f,st) = α_t + b_s and logit(p_st) = γ_1 + γ_2 log(μ_f,st).

In Model 0, the level of the positive part, μ_f,st, and the probability of zero, p_st, are modelled by fixed treatment effects, α_t and β_t, and random block effects, b_s and v_s, which are uncorrelated. Model 1 is the same as Model 0, but with correlated block effects. In Model 2, the level of the positive part, μ_f,st, is dependent, through regression coefficients γ_1 and γ_2, on the probability of zero, p_st, and a random block effect, b_s, whereas the probability of zero, p_st, is modelled by fixed treatment effects, α_t. In Model 3, the logarithm of the level of the positive part, log(μ_f,st), is a sum of a fixed treatment effect, α_t, and a random block effect, b_s, whereas the probability of zero, p_st, is dependent on the level of the positive part, μ_f,st, through regression coefficients γ_1 and γ_2.
Models similar to Model 1 have been proposed for count data (Min and Agresti 2005;Neelon et al. 2010;Cantoni et al. 2017) and for lognormally distributed repeated measures data (Tooze et al. 2002). Our dependency-extended two-part Models 2 and 3 will be compared with Model 1. It will be shown in Sect. 5.1 that Model 1 does not work well when only zeros are observed for some levels of the random effects, which happens frequently in practice. Note that Models 2 and 3 comprise fewer parameters than Model 1, which can make them more robust and easy to fit.
In Models 0, 1 and 3, log(μ_f,st) = α_t + b_s. However, the logarithm of the conditional level in Model 2 can be written similarly, as with this model, log(μ_f,st) = α′_t + b_s, where α′_t = γ_1 + γ_2 α_t. In terms of the probability of zero, p_st, all models use logistic regression. Models 2 and 3 have considerably fewer parameters than Model 1 and are therefore apt to work well for small datasets. Specifically, Model 2 assumes no random effects on the probability of zero. Model 3 uses only two parameters for the logit part, which can be beneficial when the number of zeros is either small or large. However, the great advantage of Models 2 and 3 is that they provide a mathematical description of how the probability of zero and the positive part are related. Similar to a model proposed by Neuhaus et al. (2018) for outcome-dependent visit processes, the parameter γ_2 governs the strength and direction of the association between the conditional mean or median and whether or not data are observed.
For the positive part, we used the lognormal distribution and the gamma distribution. Using the lognormal distribution, attention characteristically shifts from the mean to the median (Goldberger 1968), which is invariant under distributional transformation (Rao and D'Cunha 2016) and less sensitive to outliers. The lognormal distribution is often a better assumption than the normal, especially in biology, where the lognormal distribution arises asymptotically as a consequence of biological mechanisms (Koch 1969). In several disciplines, e.g. geology, food technology, social sciences and economics, the lognormal distribution is used when effects are multiplicative rather than additive (Limpert et al. 2001). The gamma distribution is another useful distribution when data are positively skewed and heteroscedastic, although rarely used with excess of zeros (Musal and Ekin 2017). Specifically, when the standard deviation is proportional to the mean, this distribution motivates a model with constant coefficient of variation (Lee et al. 2006, p. 71).
Our research was motivated by difficulties encountered in weed science when analysing agricultural field experiments using traditional statistical methods. In order to assume normality and homoscedasticity, data are often log-transformed before analysis. However, when some observations are zero, i.e. when no weed is observed, some small positive constant must first be added to the observed values. This procedure destroys the multiplicative scale and introduces an undesirable arbitrariness, since the choice of the constant may affect the conclusions. Modelling heteroscedasticity is another option (Damesa et al. 2018), but also such modelling, for example, using the power-of-the-mean model (Carroll and Ruppert 1988), would be problematic when only zeros are observed for some treatments, since the variance would be estimated to zero for those treatments.
Two-part models with dependency between the two parts are rare in the literature, perhaps due to the cumbersome maximization of the likelihood (Feuerverger 1979). We suggest instead using Bayesian methodology. Bayesian analysis of two-part models has previously been proposed by Rodrigues-Motta et al. (2015), who assumed the positive distribution to be a member of the biparametric exponential family, and Harvey and Van der Merwe (2012), who recommended Bayesian methods for inference on means and variances in a two-part model with a lognormal distribution. However, these authors did not assume any dependency between the two parts. Bayesian models have also been proposed for zero-inflated longitudinal nonnegative data (Swallow et al. 2016; Biswas and Das 2020) and zero-inflated count data (Neelon et al. 2010; King and Song 2019; Bertoli et al. 2020). Specifically, Biswas and Das (2020) and Neelon et al. (2010) proposed models with correlated random effects, which in this regard are similar to our Model 1. Tiao and Draper (1968) and Besag and Higdon (1999) pioneered Bayesian analysis of experiments with incomplete blocks. Several authors, with various focus, have explored Bayesian methods for agricultural field experiments, for example, Donald et al. (2011), who modelled spatial correlation, Forkman and Piepho (2013), who studied prediction of random effects of treatments, Singh et al. (2015), who reported a Bayesian analysis of a crop variety trial with an incomplete block design, and Theobald et al. (2002), who investigated Bayesian analysis of multi-environment crop variety trials. However, none of them considered two-part mixed-effects models.
In analysis of designed experiments, random effects are common, since effects of units involved in randomization should be modelled as random (Piepho et al. 2003). As a means of decreasing residual error variance, agricultural field trials frequently include incomplete blocks, i.e. blocks that do not include all treatments. Effects of incomplete blocks are usually modelled as random, since this allows recovery of inter-block information about treatment differences (Piepho and Edmondson 2018).
Dependence between the zero and the nonzero components of a hierarchical model has also been touched upon in the spatio-temporal literature. In this field, Fuentes et al. (2008) considered a zero-inflated log-Gaussian process model and modelled the probability of no rain in terms of the amount of rainfall, inducing dependency between those parts. In a similar study, Sun and Stein (2015) modelled precipitation by means of a space-time Gaussian random field and, separately, the logit of the probability of precipitation as a function of space and time. However, no relationship was imposed between the parts.

This article has two aims. The first is to present new methodology for analysis of zero-augmented positive observations by introducing dependencies between the probability of zero and the level of the positive part, beyond correlation of random effects. Because of these dependencies, zeros convey information about the mean or the median of the positive part, and vice versa. This assumption makes sense in weed experiments, and presumably also in many other areas of research. The second aim is to propose methods for modelling medians, not just means, since medians are less affected by skewness and extreme values.
Section 2 presents, as a motivating example, the creeping thistle weed dataset. Section 3 details the models, and Sect. 4 describes the Bayesian approach for fitting them. Section 5 evaluates the models and the choice of priors through cross-validation and simulation studies and identifies the best model for the creeping thistle dataset. Section 6 uses this final model for the statistical analysis. Section 7 concludes with a discussion.

AN AGRICULTURAL WEED EXPERIMENT
Our motivating example is an agricultural weed trial with ten experimental treatments. The experiment aimed at comparing nine different mixtures of plant protection products (treatments 2-10) with regard to their efficacy on the weed creeping thistle (Cirsium arvense) in a field with spring barley (Hordeum vulgare). In addition to the active treatments, the experiment included a control treatment with no application of plant protection (treatment 1). The design comprised four replicates with ten plots each, which all received different treatments. Within each replicate, the plots were grouped into two incomplete blocks of five contiguous plots each, following an alpha design (Patterson and Williams 1976). Replicate 1 comprised blocks 1 and 2, replicate 2 comprised blocks 3 and 4, and so on.
In agricultural experiments, it is common to partition the replicates into incomplete blocks of adjacent plots. In this way, homogeneous intra-block variance is achieved. This strategy is successful if combined with an efficient design that ensures that as many pairs of treatments as possible occur together in blocks. The alpha design is such a design (Verdooren 2020). Table 1 provides an overview of the dataset. Means, medians and standard deviations were computed by treatment and block. These computations were made both with zeros excluded, i.e. when Y ∈ (0, ∞), and with zeros included, i.e. when Y ∈ [0, ∞). For treatments 4, 8 and 9, no creeping thistle was observed, i.e. only zeros were recorded. Similarly, no creeping thistle was observed in block 7. Standard deviations vary between treatments, as a result of block effects and residual error heterogeneity. The huge differences between blocks in standard deviation are mainly due to differences between treatments. Figure A1 in Web Appendix A of the supplementary materials shows all observations by treatment. This figure indicates positive skewness. Table A1 of Web Appendix A includes the dataset. The small size of this dataset (40 observations), combined with a high proportion of zeros (47.5%), is challenging.

Table 1. Percentages of zeros out of four replicates per treatment and five plots per block. Biomass means, medians and standard deviations (g m⁻²) were computed with zeros excluded, i.e. when Y ∈ (0, ∞), and with zeros included, i.e. when Y ∈ [0, ∞). *There is only one observation larger than zero in block 2.

MODEL DEVELOPMENT
Let y_st ∈ [0, ∞) be the observation in treatment t = 1, . . . , T in block s = 1, . . . , S. The blocks are incomplete, since they include subsets of treatments. Given λ and p_st, the density function of the response is given by the mixture distribution

g(y_st | λ, p_st) = p_st I(y_st = 0) + (1 − p_st) f(y_st | λ) I(y_st > 0),    (1)

where p_st ∈ [0, 1] is the probability of observing a zero, I(·) is the indicator function, and λ ∈ Λ ⊂ R^p. This construction was presented by Rodrigues-Motta et al. (2015) for p = 2 with f(y_st | λ) in (1) being a member of the biparametric exponential family (Bar-Lev and Reiser 1982; Bose and Boukai 1993) and parametrized by λ = (μ_f,st, φ), where μ_f,st represents the conditional mean or conditional median and φ is a dispersion or precision parameter, depending on the choice of f(· | λ). When μ_f,st is the conditional mean, the marginal mean is

μ_g,st = (1 − p_st) μ_f,st.    (2)
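The marginal-mean relation in (2) can be checked by simulation. A minimal Python sketch with invented values (p = 0.3, μ_f = 50, gamma-distributed positives):

```python
import numpy as np

rng = np.random.default_rng(7)

# Mixture: zero with probability p, Gamma(mean mu_f, shape phi) otherwise.
p, mu_f, phi, n = 0.3, 50.0, 2.0, 200_000
zeros = rng.random(n) < p
# numpy's gamma takes (shape, scale); mean mu_f with shape phi gives scale mu_f/phi.
y = np.where(zeros, 0.0, rng.gamma(phi, mu_f / phi, n))

marginal_mean_mc = y.mean()                # Monte Carlo estimate
marginal_mean_formula = (1 - p) * mu_f     # mu_g = (1 - p) * mu_f, as in (2)
```

With these values the formula gives 0.7 × 50 = 35, and the Monte Carlo mean agrees to sampling error.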

MODELS
Model 0: The level of the positive part and the probability of zero are modelled as

log(μ_f,st) = α_t + b_s and logit(p_st) = β_t + v_s,

where α_t and β_t are fixed effects of treatment t. Furthermore, b_s and v_s are random effects of block s, so that b_s ∼ N(0, σ_b²) and v_s ∼ N(0, σ_v²), independently. Model 0 will serve as a baseline model for comparisons in the cross-validation and simulation studies.
Model 1: This model is the same as Model 0, but with correlated random effects:

(b_s, v_s)ᵀ ∼ N(0, Σ), where Σ has diagonal elements σ_b² and σ_v² and off-diagonal element σ_b,v.

The covariance between log(μ_f,st) and logit(p_st) is induced by σ_b,v, which is the covariance between b_s and v_s.

Model 2:
The logarithm of μ_f,st is modelled as a linear function of logit(p_st), that is,

log(μ_f,st) = γ_1 + γ_2 logit(p_st) + b_s,    (8)

where the block effect, b_s, is N(0, σ_b²) distributed. The probability of observing a zero, p_st, is the same across blocks, given by

logit(p_st) = α_t,    (9)

where α_t is a fixed effect of treatment t. Thus, we can write p_st = p_t. The covariance between log(μ_f,st) and logit(p_st) is the variance of logit(p_st) multiplied by γ_2, so that γ_2 determines the sign and the strength of the covariance.
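Model 2 is straightforward to simulate from. A hypothetical Python sketch (parameter values invented for illustration, not the paper's posterior estimates) generating one dataset with S = 8 blocks and T = 10 treatments, here with every treatment in every block for simplicity:

```python
import numpy as np

rng = np.random.default_rng(42)

def logit(p):
    return np.log(p / (1 - p))

# Hypothetical values in the spirit of Model 2.
T, S = 10, 8
alpha = rng.normal(0.5, 1.0, T)      # fixed treatment effects on logit(p_t)
gamma1, gamma2, sigma_b, phi = 3.5, -0.5, 0.9, 0.4
b = rng.normal(0.0, sigma_b, S)      # random block effects

p_t = 1 / (1 + np.exp(-alpha))       # p_st = p_t: no block effect on the zeros
# Conditional log-median of the lognormal part depends on logit(p_t):
log_mu = gamma1 + gamma2 * logit(p_t)[None, :] + b[:, None]   # S x T array

zeros = rng.random((S, T)) < p_t[None, :]
y = np.where(zeros, 0.0, rng.lognormal(log_mu, np.sqrt(phi)))
```

Because gamma2 is negative, treatments with a high probability of zero tend to have a low conditional median, mirroring the weed-trial mechanism.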

Model 3:
The logit of the probability of zero, logit(p_st), is modelled as a linear function of log(μ_f,st), which is dependent on a fixed treatment effect, α_t, and a random block effect, b_s. Specifically,

log(μ_f,st) = α_t + b_s and logit(p_st) = γ_1 + γ_2 log(μ_f,st),

where b_s is N(0, σ_b²) distributed. The covariance between log(μ_f,st) and logit(p_st) is the variance of log(μ_f,st) multiplied by γ_2. Thus, also with this model, γ_2 determines the sign and the strength of the covariance.
In Models 2 and 3, the location parameter, μ f,st , and the probability of zero, p st , are functionally related, which they are not in Models 0 and 1. Consequently, Models 2 and 3 require fewer parameters than Models 0 and 1, which can be advantageous for small datasets. According to Model 2, the probabilities of zero depend only on the treatments, whereas according to Model 3, they depend also on the blocks.

THE DISTRIBUTION OF Y ∈ (0, ∞)
In Models 0–3, f(y_st | λ) is either a gamma or a lognormal distribution. Parametrized by the mean, μ_f,st, and the shape or dispersion parameter, φ, the gamma distribution is

f(y_st | μ_f,st, φ) = (φ/μ_f,st)^φ y_st^(φ−1) exp(−φ y_st/μ_f,st) / Γ(φ), y_st > 0.    (12)

Parametrized by the median, μ_f,st, and a dispersion parameter, φ, the lognormal distribution is

f(y_st | μ_f,st, φ) = (y_st √(2πφ))⁻¹ exp(−(log y_st − log μ_f,st)²/(2φ)), y_st > 0.    (13)

Note that in (12), μ_f,st is the mean of the distribution, whereas in (13), μ_f,st is the median of the distribution. Under distribution (13), the median is μ_f,st; under distribution (12), however, the median has no simple closed form. The lognormal distribution enables modelling the median instead of the mean.
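The median parametrization of the lognormal distribution can be demonstrated numerically: with log Y ∼ N(log μ_f, φ), the sample median is close to μ_f while the sample mean exceeds it, reflecting the positive skewness. A short Python check with invented values:

```python
import numpy as np

rng = np.random.default_rng(3)

# Lognormal parametrized by its median mu_f: log Y ~ N(log mu_f, phi).
mu_f, phi, n = 40.0, 0.5, 100_000
y = rng.lognormal(np.log(mu_f), np.sqrt(phi), n)

sample_median = np.median(y)   # close to mu_f
sample_mean = y.mean()         # exp(log mu_f + phi/2) > mu_f, due to skewness
```

The same invariance is what makes the median convenient: exp() of the linear predictor is directly the conditional median, with no correction term involving φ.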
To overcome estimation drawbacks due to a small number of observations per block and treatment, the median can be modelled instead of the mean, as the former is more robust. The median is a natural, always finite and appropriate quantity of centrality for a skewed distribution, which is the case in our application. To assess effects of treatments on the median, rather than the mean, f (y st |λ) should be chosen to be a lognormal distribution, as proposed by Goldberger (1968).
The two proposed distributions of Y > 0 are both members of the biparametric exponential family. Their density functions are similar in shape, but the lognormal distribution is more skewed than the gamma distribution. The lognormal distribution arises in biological applications when effects are multiplicative rather than additive (Koch 1969;Limpert et al. 2001). The gamma distribution arises, for example, as a sum of time intervals between events in a Poisson process. However, in this study, the main difference is that the gamma distribution was used for modelling the mean, while the lognormal distribution was used for modelling the median. Therefore, differences in results between using the gamma distribution and using the lognormal distribution are also related to differences between modelling the mean and modelling the median.

HIERARCHICAL STRUCTURE SPECIFICATION
Let u_s = (b_s, v_s) or u_s = b_s, depending on the imposed model, and let Θ = (λ, p, Σ) be the vector of parameters, where p is the vector of mixture probabilities p_st and Σ is the covariance matrix of the random effects. Based on the data (y, u), the complete likelihood is

L(Θ; y, u) = [ ∏_(s,t) g(y_st | λ, p_st) ] π(u | Σ),

where g is the mixture density (1), the product runs over the observed block–treatment combinations, and π(u | Σ) is the density of the random effects. The augmented posterior distribution of (Θ, u) is recognizable only up to a proportionality constant, and the marginal posterior distributions cannot be obtained analytically. Therefore, the relevant MCMC steps (a combination of Gibbs and Metropolis-within-Gibbs sampling) were implemented using the BRugs package (Thomas et al. 2006), which connects R with the OpenBUGS software. Convergence was monitored via MCMC chains, autocorrelation, density plots and the Brooks–Gelman–Rubin potential scale reduction factor R̂, all available in the R coda library (Cowles and Carlin 1996). The R code for the Bayesian analysis is provided as supplementary materials. The full conditional posterior distributions of treatment and block effects and dispersion components are specified in Web Appendix B.
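The paper runs the sampler in OpenBUGS via BRugs. Purely as an illustration of the kind of building block involved, the following Python sketch performs a random-walk Metropolis update for a single regression coefficient under a vague normal prior, on invented toy data; embedded in a cycle over all parameters, such updates give Metropolis-within-Gibbs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = gamma2 * x + noise, purely illustrative.
x = rng.normal(0, 1, 50)
y = -0.5 * x + rng.normal(0, 1, 50)

def log_post(g):
    # N(0, 10^2) prior on the coefficient plus a Gaussian log-likelihood.
    return -g**2 / (2 * 10**2) - 0.5 * ((y - g * x) ** 2).sum()

# Random-walk Metropolis: propose, then accept with the usual ratio.
chain, g, accepts = [], 0.0, 0
for _ in range(5000):
    prop = g + rng.normal(0, 0.3)
    if np.log(rng.random()) < log_post(prop) - log_post(g):
        g, accepts = prop, accepts + 1
    chain.append(g)

posterior_mean = np.mean(chain[1000:])   # discard burn-in
```

In practice one would monitor such chains exactly as described above (trace plots, autocorrelation, R̂) before trusting posterior summaries.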

POSTERIOR STATISTICS
Posterior statistics were averaged across blocks as follows. Let S_t denote the set of blocks that include treatment t, and let |S_t| be the number of such blocks. The mixture probability was computed as p̄_t = Σ_(s∈S_t) p_st / |S_t|, where p_st is defined in (1). For the gamma distribution, the conditional and marginal means were computed as μ̄_f,t = Σ_(s∈S_t) μ_f,st / |S_t| and μ̄_g,t = Σ_(s∈S_t) μ_g,st / |S_t|, respectively, where μ_f,st is defined in (12) and μ_g,st is defined in (2). For the lognormal distribution, the conditional and marginal medians were computed as μ̄_f,t = Σ_(s∈S_t) μ_f,st / |S_t| and ỹ_g,t = Σ_(s∈S_t) ỹ_g,st / |S_t|, respectively, where ỹ_g,st is defined in Sect. 3 and μ_f,st is defined in (13). Figure 1 presents, for Models 1–3 and the gamma distribution, 95% credible intervals (CIs) for conditional means (displayed on the first row of subfigures), marginal means (on the second row) and probabilities of zero (on the third row). The limits of the 95% CIs are the 2.5th and 97.5th percentiles of the posterior distributions. Similarly, Fig. 2 presents, for Models 1–3 with the lognormal distribution, 95% CIs for conditional medians, marginal medians and probabilities of zero. In these figures, CIs for conditional means and medians are much larger using Models 0 and 1 than using Models 2 and 3. Note that the scale of the horizontal axis differs between the models. Using Models 0 and 1, very wide CIs were observed for treatments 4, 8 and 9, for which only zeros were obtained. Table 2 presents posterior means and 95% CIs for model parameters. The correlation ρ between the random effects b_s and v_s in Model 1 is positive, implying a positive correlation between mean or median and probability of zero. On the other hand, the estimated correlation between mean or median and probability of zero using Models 2 and 3 is negative, as the estimate of γ_2 is negative for both these models. A negative correlation seems more reasonable than a positive one, as we would expect the probability of zero to decrease with an increasing amount of weed.

CROSS-VALIDATION OF MODELS FOR THE WEED EXPERIMENT
A leave-one-out cross-validation was performed, using replicates as observations. In this evaluation, Models 1-3 were compared with Model 0 and with each other. Model 0 can be regarded as a baseline model, since the two parts of this model are independent.

Cross-validation method
The means of the posterior distributions of ỹ_g,st were used as predictions. With four models and two distributions, altogether eight methods for prediction were compared. One replicate (10 observations) at a time was left out, and the remaining data were used for the analysis. Let μ̂_g^(−r) denote the vector of predictions, sorted by treatment, based on an analysis of the dataset with replicate r removed. Furthermore, let y denote the vector of observations, sorted by replicate and treatment, and μ̂_g the concatenated vector (μ̂_g^(−1), μ̂_g^(−2), μ̂_g^(−3), μ̂_g^(−4)). Predictive performance was evaluated using the root-mean-square error (RMSE) criterion, here defined as the square root of (μ̂_g − y)ᵀ(μ̂_g − y)/N, where N = 40 is the number of observations.
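The leave-one-replicate-out scheme can be sketched as follows. This Python illustration replaces the Bayesian fit with a placeholder (the per-treatment training mean) purely to show how the predictions are assembled and the RMSE is formed; the data are simulated, not the weed trial:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical stand-in for the trial: R replicates x T treatments.
R, T = 4, 10
y = rng.lognormal(2.0, 0.8, (R, T))

def fit_and_predict(train):
    # Placeholder for the Bayesian fit: predict each treatment by its
    # training mean (the paper uses posterior means of y~_g,st instead).
    return train.mean(axis=0)

# Leave one replicate out at a time; collect predictions for held-out plots.
preds = np.empty_like(y)
for r in range(R):
    train = np.delete(y, r, axis=0)   # drop replicate r
    preds[r] = fit_and_predict(train)

# RMSE = sqrt((mu_hat - y)'(mu_hat - y) / N) over all N = R * T plots.
rmse = np.sqrt(((preds - y) ** 2).mean())
```

Swapping in posterior predictive means for `fit_and_predict` reproduces the comparison of the eight model/distribution combinations.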

Cross-validation results
Using the gamma distribution, thus modelling the mean, the RMSE was 89.7 for Model 0. The more advanced models performed better. The RMSE was 89.5, 87.3 and 87.3, for Models 1, 2 and 3, respectively. The dependency-extended models, i.e. Models 2 and 3, performed better than Model 1.
Using the lognormal distribution, thus modelling the median, the RMSE was 79.2 for Model 0. Again, the more advanced models performed better. The RMSE was 77.5, 76.4 and 78.9, for Models 1, 2 and 3, respectively. Thus, Model 2 performed best.
The cross-validation clearly showed that for this dataset, it was better to model the median using the lognormal distribution than to model the mean using the gamma distribution.

SIMULATION STUDIES
To investigate the performance of the models using the median, simulation studies were conducted based on the example of Sect. 2. The objectives of these studies were (1) to compare the models with regard to ability to recover parameter information and (2) to check the sensitivity of the results to the choice of prior distributions. The lognormal distribution was used, since this outperformed the gamma distribution in the cross-validation study.
Data were generated according to the models, where the true parameter values were set to the posterior mean estimates. For Model 2, these were p̂ = (0.11, 0.30, 0.39, 0.83, 0.68, 0.27, 0.42, 0.83, 0.83, 0.14), γ̂_1 = 3.69, γ̂_2 = −0.44, σ̂_b² = 0.86², φ̂ = 0.41; for Model 3, α̂ = (4.10, 3.56, 3.37, −2.30, 2.27, 3.54, 3.41, −2.16, −2.43, 3.93), γ̂_1 = 2.92, γ̂_2 = −1.13, σ̂_b² = 1.20², φ̂ = 0.42. The same experimental design was used as in the example. Thus, datasets were generated with T = 10 treatments and S = 8 incomplete blocks with 5 treatments each. The allocation of treatments to blocks was the same as in the original dataset. For each dataset, l = 1, . . . , 100, a total of 40 observations were generated. This was done for each model. Let ỹ_0t denote the averaged marginal median ỹ_g,t of treatment t, defined in Sect. 5.1, computed using the original data of the example. Let ỹ_lt denote the averaged marginal median ỹ_g,t of treatment t, computed in the same way but using the lth simulated dataset. Quality was assessed using the root-mean-square error (RMSE), here defined as

RMSE = √( Σ_(l=1..L) Σ_(t=1..T) (ỹ_lt − ỹ_0t)² / (LT) ),    (15)

where L = 100 is the number of generated datasets and T = 10 is the number of treatments. The smaller the RMSE, the better the performance of the model.
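The RMSE criterion in (15) can be sketched in a few lines. In this Python illustration the "refitted" medians are random placeholders standing in for posterior summaries from refitted models; only the aggregation is of interest:

```python
import numpy as np

rng = np.random.default_rng(11)

# Reference medians y~_0t from the original analysis and L = 100 simulated
# re-estimates y~_lt (multiplicative-noise placeholders, not real refits).
L, T = 100, 10
y0 = rng.lognormal(3.0, 0.5, T)                        # y~_0t
y_sim = y0[None, :] * rng.lognormal(0.0, 0.2, (L, T))  # y~_lt

# RMSE = sqrt( sum_l sum_t (y~_lt - y~_0t)^2 / (L * T) ), as in (15)
rmse = np.sqrt(((y_sim - y0[None, :]) ** 2).sum() / (L * T))
```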

Impact of the choice of model
To assess simulation objective (1), for each generated dataset l, the models were fitted independently of the model used for sampling. In other words, Model k was fitted to each of the 100 datasets generated using Model r, for k = 0, 1, 2, 3, and r = 0, 1, 2, 3. Table 3 presents the results. The rows of this table represent four different scenarios. For each scenario, the ranking of the four models is the same. Regardless of how the data were generated, Model 2 was consistently the best model for analysing the data. The assumption of correlation between the block effects, which distinguishes Model 1 from Model 0, did not improve the performance, since the RMSE was always larger for Model 1 than for Model 0. The dependency-extended Model 3, with treatment-by-block-specific probabilities of zero, was less successful than the other models. When the data were generated from Model 2 and this model was also used for the analysis, the smallest RMSE was achieved: RMSE = 33.0. The results of this simulation study indicate that the dependency-extended Model 2 is the best option for analysing experiments of this type.

Impact of the choice of prior distributions
Having identified Model 2 as the best model for the analysis, a study was performed to assess simulation objective (2), i.e. to investigate the sensitivity of the results to the choice of the prior distributions. For this study, the 100 datasets generated using Model 2 were used. Alternative priors to those specified in Sect. 4.1 were considered. Among the alternative priors, continuous uniform distributions, U(0, 50) and U(0, 100), were investigated, as recommended by Gelman (2006). Performance was measured using the RMSE specified in (15). Results are given in Table 4. The first row of this table includes the prior distributions that were used in this article, yielding RMSE = 33.0, as previously shown in Table 3. Using other diffuse prior distributions does not change the results much. In just two of the investigated cases, the RMSE was smaller than 33.0. When a normal distribution with variance 1 was used for γ_1 and γ_2, the RMSE was 32.8. Similarly, when a U-shaped prior for p was used instead of a continuous uniform distribution, the RMSE was 32.8. In both these cases, more informative priors were substituted for diffuse priors. When there is no preknowledge, diffuse priors are preferred. The results of this study indicate that Model 2 is robust to the choice of diffuse prior distributions.
For supplementary information, Web Appendix C includes tables with posterior means computed using various sets of prior distributions, for all four models and for both distributions.

FINAL ANALYSIS OF THE WEED EXPERIMENT
Based on an overall assessment of the results from the cross-validation and the simulation studies of Sect. 5, modelling the median using Model 2 was considered the best option for the weed experiment. Using this model, the treatments rank with regard to effectiveness as follows: 4, 8, 9, 5, 7, 3, 2, 6, 10 and 1, with posterior means for the probability of zero equal to 0.83, 0.83, 0.83, 0.68, 0.42, 0.39, 0.30, 0.27, 0.14 and 0.11, respectively. Thus, treatments 4, 8 and 9 are the best treatments for eliminating creeping thistle weed, while treatment 1 is the worst, which was expected as treatment 1 is the control. Unlike previously published two-part models (Tang et al. 2018; Cantoni et al. 2017; Rose et al. 2006; Tooze et al. 2002), Models 2 and 3 include an explicit functional relationship between the two parts of the model. Under Model 2, conditional on the random effects, the median is μ_f,st = exp(γ_1 + γ_2 α_t + b_s). Its expected value with respect to the random block effect is given by

E(μ_f,st) = exp(γ_1 + γ_2 α_t + σ_b²/2),    (16)

because a N(0, σ_b²) variate b_s has moment generating function E(exp(t b_s)) = exp(t² σ_b²/2). Figure 3 displays the relationship between the probability of zero and the expected value of the conditional median using the posterior means γ̂_1 = 3.69, γ̂_2 = −0.44 and σ̂_b = 0.86. Since γ̂_2 is negative, the probability of zero decreases with the conditional median, as illustrated by the curve. As examples, the dashed and dotted lines indicate the probabilities of zero and conditional medians for treatments 1 and 2, respectively. Although the expected value (16) of the conditional median depends on the block variance, the ratio between the expected values of any two treatments is independent of this variance. For example, the ratio between the expected values of the conditional medians for treatments 2 and 1 is, by (16),

E(μ_f,s2) / E(μ_f,s1) = exp(γ_2 (α_2 − α_1)),    (17)

since α_t = log(p_t/(1 − p_t)) by (9). The posterior mean of the slope γ_2 in (8) was −0.44, and the posterior means of the probabilities of zero for treatments 1 and 2 were 0.11 and 0.30, respectively.
Using these posterior means for computation of (17), the expected conditional median of treatment 2 is 58% of the expected conditional median of treatment 1.
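This 58% figure can be reproduced directly from the reported posterior means (γ̂_2 = −0.44, p̂_1 = 0.11, p̂_2 = 0.30); a short Python check of the ratio in (17):

```python
import math

# Posterior means reported for the final Model 2 analysis.
gamma2 = -0.44
p1, p2 = 0.11, 0.30   # probabilities of zero, treatments 1 and 2

def logit(p):
    return math.log(p / (1 - p))

# Ratio of expected conditional medians, treatment 2 vs treatment 1:
# exp(gamma2 * (logit(p2) - logit(p1))); the block variance cancels.
ratio = math.exp(gamma2 * (logit(p2) - logit(p1)))
print(round(ratio, 2))   # 0.58
```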
Model 2 provides scientific insights into the performance of the experimental treatments. Specifically, it is not correlation between block effects that causes dependency between the two parts of the model, as would have been assumed using Model 1. According to Model 2, the experimental treatments have varying probabilities of zero and varying conditional means, following the relationship shown in Fig. 3.

DISCUSSION
This article introduced dependency-extended two-part models for modelling nonnegative continuous observations in the case of zero inflation, heteroscedasticity and skewness. Such data are encountered in many different scientific disciplines; here, we studied specifically an agricultural field experiment with herbicidal treatments. New features were the modelling of medians instead of means and the modelling of functional dependencies between the parts (Models 2 and 3). The approach of parts correlated through random effects (Cantoni et al. 2017), which was used in Model 1, does not work well when some factor levels contain only zeros. When this happens, 95% credible intervals for conditional means or medians become very large. The same phenomenon occurs using a traditional two-part model with uncorrelated parts (Model 0). Note that in weed control experiments, it is very common that only zeros, i.e. no weed, are observed for highly effective herbicides. Furthermore, if the seeds of the weed are windblown, the weed can appear in patches on the field and be missing in some incomplete blocks. Using Model 1, the level of the positive observations, i.e. the mean or the median, and the probability of zero are modelled as functions of fixed and random effects, where the random effects are allowed to be correlated. This is the model that is difficult to fit when only zeros are observed for some levels of the fixed effects or some levels of the random effects. In such cases, using the proposed Bayesian method, the vague prior information completely dominates the likelihood. Using Model 2, the level of the positive observations is modelled as a function of the probability of zero and a random effect, whereas using Model 3, the probability of zero is modelled as a function of the level of the positive observations, which is dependent on a random effect.
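To make the construction of Model 2 concrete, the following sketch simulates data from a dependency-extended two-part model of this type. The design (10 treatments in 5 blocks), the per-treatment logits and the variance parameters are hypothetical illustration choices; only the rough scales of the reported posterior means for $\gamma_1$ and $\gamma_2$ are reused:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical design and parameters (for illustration only).
n_trt, n_blk = 10, 5
alpha = rng.normal(0.0, 1.0, n_trt)        # logit of P(zero) per treatment
gamma1, gamma2 = 3.7, -0.44                # roughly the scale reported in the text
sigma_b, sigma = 0.9, 0.9                  # block and residual SDs (hypothetical)
b = rng.normal(0.0, sigma_b, n_blk)        # random block effects

records = []
for t in range(n_trt):
    for s in range(n_blk):
        p_zero = 1 / (1 + np.exp(-alpha[t]))            # Bernoulli part
        if rng.random() < p_zero:
            y = 0.0                                     # weed eliminated
        else:
            # Model 2: the log-median of the positive part depends on the
            # logit of P(zero) and on the block effect.
            log_median = gamma1 + gamma2 * alpha[t] + b[s]
            y = rng.lognormal(mean=log_median, sigma=sigma)
        records.append((t + 1, s + 1, y))

print(sum(y == 0.0 for *_, y in records), "zeros out of", len(records))
```

Because $\gamma_2 < 0$, treatments with a high probability of zero tend to show small positive biomass when the weed does appear, mirroring the relationship in Fig. 3.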
Models 2 and 3 have the advantage, as compared to Models 0 and 1, that all information is considered in the construction of the joint posterior distribution. These models managed the problem of zero excess better than Model 1. For our crop protection experiment, the cross-validation and the simulation studies indicated that Model 2 performed best when modelling the median using the lognormal distribution.
Dependency-extended two-part models are most useful when there are several predictors. In this event, which includes our case of modelling effects of categorical variables, an assumption of a functional relationship between the two parts enables a more parsimonious model than a model without such an assumption.
All two-part models have the advantage that the positive part and the probability of zero are separated. Thus, two-part models enable assessment of the probability of zero and the conditional means or medians, not just marginal means or medians. In addition, the proposed dependency-extended two-part models make it possible to show graphically how the probability of zero decreases with the level of the positive observations. In weed research, this means that if much weed were observed in the experiment, then the probability is low that the herbicide will kill the weeds completely. This is presumably not because of correlation between block effects (Model 1), but rather because there is some functional relationship between biomass and probability of zero (Models 2 and 3).
Instead of using Bayesian methodology, maximum likelihood estimation can be used. However, if only zeros have been observed for some of the treatments, then the variances in the estimates of the effects of those treatments cannot be computed. For example, the nlmixed procedure of the SAS System gives the error message that the Hessian matrix is not positive definite and therefore the estimated covariance matrix may be unreliable.
The underlying problem is the following. The maximum likelihood estimator of a probability $p$ is $\hat{p} = y/n$, where $y$ is $\mathrm{Bin}(n, p)$. When $y = 0$ is observed, then $\hat{p} = 0$, which is unrealistic and undesirable. The variance of $\hat{p}$ is usually obtained from the Hessian evaluated at the parameter estimate. This variance is $\hat{p}(1 - \hat{p})/n$, which is also 0 when $\hat{p} = 0$, providing poor information about the precision of $\hat{p}$. The maximum likelihood estimator does not perform well on the boundary of the parameter space.
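The degeneracy described above can be seen directly in a minimal numeric sketch:

```python
def mle_prob_and_var(y, n):
    """ML estimate of a binomial probability and its plug-in (Hessian-based) variance."""
    p_hat = y / n
    var_hat = p_hat * (1 - p_hat) / n
    return p_hat, var_hat

# With y = 0 successes, both the estimate and its estimated variance collapse
# to 0, wrongly suggesting perfect precision at the boundary of the space.
print(mle_prob_and_var(0, 20))   # (0.0, 0.0)
print(mle_prob_and_var(5, 20))   # (0.25, 0.009375)
```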
Through the use of Bayesian statistics, this problem is avoided. By multiplying the likelihood by a prior distribution for p, a posterior distribution is obtained which does not have all density concentrated at 0. Although the area under the posterior distribution is largest in a vicinity of 0, the posterior distribution does not exclude that p is greater than 0.
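As a sketch of this point, consider the conjugate Beta-binomial update with a uniform Beta(1, 1) prior (an illustrative prior choice, not the one used in the paper):

```python
# Conjugate Beta-binomial update: prior Beta(a, b), data y successes in n trials.
def beta_posterior(a, b, y, n):
    return a + y, b + n - y

# With a uniform Beta(1, 1) prior and y = 0 out of n = 20, the posterior is
# Beta(1, 21): most mass lies near 0, but p > 0 is not excluded.
a_post, b_post = beta_posterior(1.0, 1.0, 0, 20)
post_mean = a_post / (a_post + b_post)     # 1/22, strictly positive
print(a_post, b_post, round(post_mean, 3))  # 1.0 21.0 0.045
```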
Bayesian analysis has the major advantage that statistical inference can be made for any function of the parameters, through sampling from the posterior distribution. In experiments without prior knowledge, care should be taken to select vague priors. However, the value of the Bayesian analysis is even larger if prior information is taken into account.
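Inference for an arbitrary function of a parameter via posterior sampling can be sketched as follows, assuming for illustration a Beta(1, 21) posterior for a probability $p$ (such as results from a uniform prior with no successes in 20 trials):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative posterior for p: Beta(1, 21).
p_draws = rng.beta(1.0, 21.0, size=100_000)

# Inference for any function of the parameter, e.g. the odds p / (1 - p):
odds_draws = p_draws / (1 - p_draws)
lo, hi = np.quantile(odds_draws, [0.025, 0.975])  # 95% credible interval
print(round(lo, 4), round(hi, 4))
```

No delta-method approximation is needed: the credible interval for the odds is read off directly from the transformed draws.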
Our models can be extended further by allowing positive distributions that are not members of the exponential family. Other link functions than the log and the logit can be used, e.g. the identity instead of the log or the probit instead of the logit. In larger agricultural field experiments, it would be possible to add effects of complete replicates. For other applications than analysis of incomplete block experiments, the models must be adjusted by including other explanatory variables. In zero-augmented spatial-temporal two-part models, our dependency-extended idea could be used to relate the components of the auto-covariance function, thereby reducing the number of parameters.
In summary, this article introduced two-part models with dependencies between the probability of zero and the level of the positive part and showed how these can be analysed using Bayesian methodology. Particularly Model 2, expressing the median of the lognormal distribution as a function of the probability of zero and a random effect, performed well. We believe the basic construction of this model can be used successfully also in other areas of research.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Funding Open access funding provided by the Swedish University of Agricultural Sciences. Rodrigues-Motta thanks FAPESP, Brazil, for partial support through grant #2014/02211-1. [Received September 2020. Revised June 2021. Accepted July 2021. Published Online August 2021.]