A New Bayesian Two-Sample t Test and Solution to the Behrens–Fisher Problem Based on Gaussian Mixture Modelling with Known Allocations

Testing differences between a treatment and control group is common practice in biomedical research like randomized controlled trials (RCT). The standard two-sample t test relies on null hypothesis significance testing (NHST) via p values, which has several drawbacks. Bayesian alternatives were recently introduced using the Bayes factor, which has its own limitations. This paper introduces an alternative to current Bayesian two-sample t tests by interpreting the underlying model as a two-component Gaussian mixture in which the effect size δ is the quantity of interest, as this is most relevant in clinical research. Unlike p values or the Bayes factor, the proposed method focusses on estimation under uncertainty instead of explicit hypothesis testing. Therefore, via a Gibbs sampler, the posterior of the effect size δ is produced, which is used subsequently for either estimation under uncertainty or explicit hypothesis testing based on the region of practical equivalence (ROPE). An illustrative example, theoretical results and a simulation study show the usefulness of the proposed method, and the test is made available in the R package bayest. In sum, the new Bayesian two-sample t test provides a solution to the Behrens–Fisher problem based on Gaussian mixture modelling.


Introduction
In medical research, the t test is one of the most popular statistical procedures conducted. In randomized controlled trials (RCT), the goal often is to test the efficacy of new treatments or drugs and to find out the size of an effect. Usually, a treatment and a control group are used, and differences in a response variable like blood pressure or cholesterol level between both groups are observed. The gold standard for deciding if the new treatment or drug is more effective than the status quo treatment or drug is the p value, which is the probability, under the null hypothesis H₀, of obtaining a difference equal to or more extreme than what was actually observed. The dominance of p values when comparing two groups in medical (and other) research is overwhelming [19,20,28].
The original two-sample t test belongs to the class of frequentist solutions. These are based on sampling statistics, which allow one to reject the null hypothesis via the use of p values. The misuse and drawbacks of p values in medical research have been detailed in a variety of papers, including an official ASA statement in 2016 [38]. On the other hand, Bayesian versions of the two-sample t test have recently become more popular. Examples include the proposals in [12,32,37,40] and [14]. All of these focus on the Bayes factor (BF) for testing a null hypothesis H₀: δ = 0 of no effect against a one- or two-sided alternative H₁: δ > 0, H₁: δ < 0 or H₁: δ ≠ 0. Bayes factors themselves are also not without problems: (1) Bayes factors are sensitive to prior modelling [18].
(2) Bayes factors require the researcher to calculate marginal likelihoods, the calculations of which can be complex except when conjugate distributions exist. (3) In the setting of the two-sample t test, Bayes factors weigh the evidence for H₀: δ = 0 against the evidence for H₁: δ ≠ 0 (or H₁: δ < 0 or H₁: δ > 0) given the data x. When BF₁₀ = 20, H₁ is 20 times more likely after observing the data than H₀. The natural question following in such cases is: How large is δ? A Bayes factor cannot answer this question and was not designed to answer such questions, but often this is of most relevance in applied biomedical research. Lastly, in most applied research, estimation of the effect size is more desirable than a mere rejection or acceptance of a point or composite hypothesis [19,20,24].
To be fair, Bayes factors can be computed alongside posterior estimates, so testing and estimation do not mutually exclude each other. However, as the Bayes factor is often proposed as a replacement for the p value, it is questionable whether practitioners will really combine testing with estimation of effect sizes, especially since scales translating the size of a Bayes factor into evidence (similar to p < .05, p < .01) are regularly provided by now, see [35] and [2]. p values and the Bayes factor are useful tools if explicit hypothesis testing is necessary. However, exactly this necessity may be questioned in a wide range of applied biomedical research. Therefore, this paper proposes an alternative Bayesian two-sample t test by formulating the statistical model as a two-component Gaussian mixture with known allocations and using the region of practical equivalence (ROPE). Instead of focussing on rejection or confirmation of hypotheses, the proposed method's focus lies on estimation of the effect size under uncertainty. Still, if desired, the method also provides a Bayesian solution to the Behrens-Fisher problem by enabling a test of the equality μ₁ = μ₂ of two groups of normally distributed data N(μ₁, σ₁²) and N(μ₂, σ₂²) without assuming σ₁² = σ₂².

Modelling the Bayesian t Test as a Mixture Model with Known Allocations
In this section, the two-sample t test is modelled as a two-component Gaussian mixture with known allocations.
"Consider a population made up of K subgroups, mixed at random in proportion to the relative group sizes η₁, …, η_K. Assume interest lies in some random feature Y which is heterogeneous across and homogeneous within the subgroups. Due to heterogeneity, Y has a different probability distribution in each group, usually assumed to arise from the same parametric family p(y|θ), however, with the parameter θ differing across the groups. The groups may be labeled through a discrete indicator variable S taking values in the set {1, …, K}. When sampling randomly from such a population, we may record not only Y, but also the group indicator S. The probability of sampling from the group labeled S is equal to η_S, whereas conditional on knowing S, Y is a random variable following the distribution p(y|θ_S) with θ_S being the parameter in group S. (...) The marginal density p(y) is obviously given by the following mixture density" [9, p. 1], namely p(y) = η₁ p(y|θ₁) + … + η_K p(y|θ_K). Clearly, this resembles the situation of the two-sample t test, in which the allocations S are known. While traditionally mixtures are treated with missing allocations, in the setting of the two-sample t test, these are known, leading to a "degenerate" mixture. This assumption does not only remove computational difficulties like label switching, it also makes sense from a semantic perspective: the inherent assumption of a researcher is that the population is indeed made up of K = 2 subgroups, which differ in a random feature Y which is heterogeneous across groups and homogeneous within each group. The group indicator S of course is recorded. When conducting a randomized controlled trial (RCT), the clinician will choose the patients according to a sampling plan, which could be set to achieve equally sized groups, that is, η₁ = η₂. Therefore, when sampling the population with the goal of equally sized groups, the researcher randomizes the patients with equal probability from the population into treatment and control group.
After the RCT is conducted, the resulting histogram of observed Y values will take the form of the mixture density p(y) above and express bimodality due to the mixture model of the data-generating process. After fixing the mixture weights, the family of distributions for the single groups needs to be chosen. The above considerations lead to finite mixtures of normal distributions, as these "occur frequently in many areas of applied statistics such as [...] medicine" [9, p. 169]. The components p(y|θ_k) become f_N(y; μ_k, σ_k²) for k = 1, 2 in this case, where f_N(y; μ_k, σ_k²) is the density of the univariate normal distribution. Parameter estimation in finite mixtures of normal distributions consists of estimation of the component parameters (μ_k, σ_k²), the allocations S_i, i = 1, …, n, and the weight distribution (η₁, η₂) based on the available data y_i, i = 1, …, n. In the case of the two-sample Bayesian t test, the allocations S_i (where S_i = 1 if y_i belongs to the first component and S_i = 2 else) are known for all observations y_i, i = 1, …, n. Also, the weights η₁, η₂ are known. Therefore, inference is concerned with the component parameters μ_k, σ_k² for k = 1, 2 given the complete data (S, y).
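The data-generating process just described can be sketched in a few lines. This is a minimal simulation with illustrative parameter values that are assumptions for the sake of the example, not values from the paper:

```python
import numpy as np

# Sketch of the data-generating process: a two-component Gaussian mixture
# with KNOWN allocations S_i, as in a two-arm RCT.  All numbers below are
# illustrative assumptions.
rng = np.random.default_rng(42)

n1, n2 = 50, 50                          # group sizes, so eta_1 = eta_2 = 0.5
mu = np.array([0.0, 1.0])                # component means mu_1, mu_2
sigma = np.array([1.0, 1.5])             # component sds, sigma_1^2 != sigma_2^2

S = np.repeat([1, 2], [n1, n2])          # known group indicators S_i
y = rng.normal(mu[S - 1], sigma[S - 1])  # y_i | S_i ~ N(mu_{S_i}, sigma_{S_i}^2)
```

Pooling the two samples and plotting a histogram of y then yields a draw from the (possibly bimodal) mixture density p(y) described above.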

Inference via Gibbs Sampling
From the above line of thought, it is clear that due to the representation via a mixture model with known allocations, no prior is placed directly on the effect size δ := (x̄₁ − x̄₂)/s itself, where x̄₁, x̄₂ are the group means and s is the pooled standard deviation computed from the empirical variances s₁² and s₂² of the two groups, see also Cohen [5]. Placing a prior directly on δ is the common approach in existing Bayesian t tests [14]. Instead, in the proposed mixture model, priors are assigned to the parameters of the Gaussian mixture components μ₁, μ₂ and σ₁², σ₂². This has several benefits: Incorporation of available prior knowledge is easier achieved with the mixture component parameters than for the effect size, which is an aggregate of these component parameters. Consider a drug where, from biochemical properties, it can safely be assumed that the mean in the treatment group will become larger, but the variance will increase, too. Incorporating such knowledge on μ_k and σ_k is much easier than incorporating it in the prior of the effect size δ. This holds in particular when the group sizes n₁, n₂ are not balanced. These practical gains of translating prior knowledge into prior parameters come at a cost: In contrast to existing solutions [14], the model implies that no closed-form expression for the posterior of δ is available. Therefore, sampling methods are used here to first construct the joint posterior p(μ₁, μ₂, σ₁², σ₂²|S, y) and subsequently use a sample of size m from it to produce a sample (δ^(1), …, δ^(m)) of the effect size posterior. In summary, via Gibbs sampling, the posterior of δ can be approximated reasonably well. In order to apply Gibbs sampling, the conditional distributions need to be derived.

Derivation of the Full Conditionals Using the Independence Prior
To derive the full conditionals, it first has to be decided which priors should be used on the mixture component parameters. There are multiple priors available, the most prominent among them the conditionally conjugate prior and the independence prior [7,9]. While the conditionally conjugate prior has the advantage of leading to a closed-form posterior p(μ, σ²|S, y), its main difficulty in the setting of the Bayesian two-sample t test is that while a priori the component parameters θ_k = (μ_k, σ_k²) are pairwise independent across both groups, inside each group the mean μ_k and variance σ_k² are dependent. This is in contrast to the assumptions in the setting of the Bayesian two-sample t test, and therefore, the independence prior is chosen, which is used in [7] and [30]. The independence prior assumes that the mean μ_k and the variance σ_k² are a priori independent, that is, p(μ_k, σ_k²) = p(μ_k)p(σ_k²) with μ_k ∼ N(b₀, B₀) and σ_k² ∼ IG(c₀, C₀), where IG denotes the inverse Gamma distribution. The normal prior on the means μ_k seems reasonable, as the parameters b₀ and B₀ can be chosen to keep the influence of the prior only weakly informative. The inverse Gamma prior is chosen because for a two-component Gaussian mixture to show any signs of bimodality (in which case one would assume differences between two subgroups in the whole sample) the variance should not be huge, because otherwise the modes (or the bell shape) of the two normal components of the mixture will flatten out more and more, until unimodality is reached. Thus, the inverse Gamma prior connects to this model aspect by giving more probability mass to smaller values of σ_k², while extremely large values get much less prior probability mass. The hyperparameters c₀ and C₀ then offer control over this kind of shrinkage of σ_k² towards zero. In the simulation study below, the prior sensitivity will also be studied briefly.
The independence prior is, therefore, used and leads to the following full conditionals:

Theorem 1 For the Bayesian two-sample t-test model, the full conditional distributions under the independence prior μ_k ∼ N(b₀, B₀), σ_k² ∼ IG(c₀, C₀) (where IG is the inverse Gamma distribution) are given as

μ_k | σ_k², S, y ∼ N(b_k(S), B_k(S)) and σ_k² | μ_k, S, y ∼ IG(c_k(S), C_k(S)) for k = 1, 2,

with B₁(S), b₁(S), B₂(S), b₂(S) as defined in Equations (20) and (21), and c₁(S), c₂(S), C₁(S) and C₂(S) as defined in Equations (22) and (23) in Appendix A.2.
A proof is given in Appendix A.2, which builds on the derivations in Appendix A.1. Note that when η₁ ≠ η₂, N₁(S) and N₂(S) in the Appendices just need to be changed accordingly. For example, if the first group consists of 30 observations and the second group of 70, setting N₁(S) = 30 and N₂(S) = 70 implies η₁ = 0.3 and η₂ = 0.7, handling the case of unequal group sizes easily.

Derivation of the Single-Block Gibbs Sampler
Based on the full conditionals derived in the last section, this section now derives a single-block Gibbs sampler to obtain the joint posterior distribution given the complete data (S, y). The resulting Gibbs sampler is given as follows: starting from initial values σ₁²^(0), σ₂²^(0), iterate for t = 1, …, m: (1) sample μ_k^(t) from N(b_k(S), B_k(S)) for k = 1, 2; (2) sample σ_k²^(t) from IG(c_k(S), C_k(S)) for k = 1, 2; where B_k(S), b_k(S) and c_k(S), C_k(S) are given by Equations (20), (21), (22) and (23) in the Appendix.
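As a concrete illustration, such a sampler can be sketched in a few lines. This is a minimal sketch under the independence prior μ_k ∼ N(b₀, B₀), σ_k² ∼ IG(c₀, C₀): the conditional moments below are the standard conjugate updates and are assumed here to coincide with b_k(S), B_k(S), c_k(S), C_k(S) of Equations (20)–(23); the hyperparameter defaults and the effect-size convention δ = (μ₁ − μ₂)/√((σ₁² + σ₂²)/2) are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

def gibbs_t_test(y, S, m=5000, burn=1000, b0=0.0, B0=10.0, c0=2.0, C0=1.0, seed=0):
    """Minimal single-block Gibbs sampler sketch under the independence prior
    N(b0, B0) on mu_k and IG(c0, C0) on sigma_k^2 (hyperparameters illustrative)."""
    rng = np.random.default_rng(seed)
    draws = np.empty((m, 5))  # columns: mu1, mu2, sigma1^2, sigma2^2, delta
    mu = np.array([y[S == 1].mean(), y[S == 2].mean()])  # start at group means
    s2 = np.array([y[S == 1].var(), y[S == 2].var()])    # start at group variances
    for t in range(m + burn):
        for k in (1, 2):
            yk = y[S == k]
            Nk = yk.size
            # Conjugate normal update for mu_k given sigma_k^2:
            Bk = 1.0 / (1.0 / B0 + Nk / s2[k - 1])                 # posterior variance
            bk = Bk * (b0 / B0 + Nk * yk.mean() / s2[k - 1])       # posterior mean
            mu[k - 1] = rng.normal(bk, np.sqrt(Bk))
            # Conjugate inverse-Gamma update for sigma_k^2 given mu_k:
            ck = c0 + Nk / 2.0
            Ck = C0 + 0.5 * np.sum((yk - mu[k - 1]) ** 2)
            s2[k - 1] = 1.0 / rng.gamma(ck, 1.0 / Ck)              # IG via 1/Gamma
        if t >= burn:
            # Assumed effect-size convention for the Behrens-Fisher setting:
            delta = (mu[0] - mu[1]) / np.sqrt((s2[0] + s2[1]) / 2.0)
            draws[t - burn] = [mu[0], mu[1], s2[0], s2[1], delta]
    return draws
```

The returned columns contain the joint posterior sample; the last column is the induced sample (δ^(1), …, δ^(m)) of the effect size posterior.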
A proof is given in Appendix A.3.

The Shift from Hypothesis Testing to Estimation Under Uncertainty and the ROPE
As already mentioned, instead of explicit hypothesis testing via p values and Bayes factors, we follow the proposed shift from hypothesis testing to estimation under uncertainty. Cumming [6] originally proposed such a shift from frequentist hypothesis testing to estimation, called the "New Statistics", a process observable in a broad range of scientific fields, see Wasserstein and Lazar [39]. Note that one of the six principles for properly interpreting p values in the 2016 ASA statement stressed that a p value "does not measure the size of an effect or the importance of a result" [38, p. 132]. Cumming [6], therefore, included in his proposal a focus on "estimation based on effect sizes" [6, p. 7]. To promote this shift, Kruschke and Liddell [24] offered two conceptual distinctions, which are replicated in Table 1. While Cumming [6] originally proposed a shift from frequentist hypothesis testing to frequentist estimation under uncertainty, in this paper, it is proposed that this shift can be achieved more easily by Bayesian methods. The main reason is that confidence intervals as quantities for estimation are still highly sensitive to the stopping intention of the researcher.

The Proposal of a Region of Practical Equivalence
To facilitate the shift to an estimation-oriented perspective, Kruschke and Liddell [24] advertised the region of practical equivalence (ROPE). As they note: 'ROPEs go by different names in the literature, including "interval of clinical equivalence", "range of equivalence", "equivalence interval", "indifference zone", "smallest effect size of interest", and "good-enough belt"' [24, p. 185], where these terms come from a wide spectrum of scientific domains, see Carlin and Louis [3], Freedman, Lowe and Macaskill [8], Hobbs and Carlin [16], Lakens [25] and Schuirmann [34]. The uniting idea is to establish a region of practical equivalence around the null value of the hypothesis, which expresses "the range of parameter values that are equivalent to the null value for current practical purposes" [24, p. 185]. With a caution not to slip back into dichotomous thinking, the following decision rule was proposed by Kruschke and Liddell [24]: Reject the null value if the 95% highest posterior density (HPD) interval falls entirely outside the ROPE. Accept the null value if the 95% HPD falls entirely inside the ROPE. In the first case, with more than 95% probability, the parameter value is not inside the ROPE, and therefore not practically equivalent to the null value. A rejection of the null value then seems legitimate. In the second case, the parameter value is inside the ROPE with at least 95% posterior probability, and therefore practically equivalent to the null value. It seems legitimate to accept the null value. Of course, it would also be possible to accept the null value iff the whole posterior is located inside the ROPE, leading to an even stricter decision rule. While the idea of a ROPE is intriguing, a limitation should be noted: it can only apply in situations where scientific standards of practically equivalent parameter values exist and are widely accepted by researchers.
Luckily, this is the case for effect sizes, which have a long tradition of being categorized in biomedical and psychological research, see Cohen [5].
Definition 2 (ROPE) The region of practical equivalence (ROPE) R for (or around) a hypothesis H ⊂ Θ is a subset of the parameter space Θ with H ⊂ R.
A statistical hypothesis H is now described via a region of practical equivalence R, e.g. H: θ = θ₀ can be described as R := [θ₀ − ε, θ₀ + ε] for ε > 0. By definition, any set R ⊂ Θ with H ⊂ R is allowed to describe H and should be selected depending on how precise the measuring process of the experiment or study is assumed to be. Next, correct and incorrect ROPEs are distinguished: let R be a ROPE for a hypothesis H, where H makes a statement about the unknown model parameter θ. If the true parameter value θ₀ lies in R, that is θ₀ ∈ R, then R is called correct, otherwise incorrect.
A correct ROPE, therefore, contains the true parameter value θ₀, while an incorrect one does not.
Note that the ROPE is equivalent to what is also known as an interval hypothesis H₀: θ ∈ [θ₀ − ε₀, θ₀ + ε₀] for some ε₀ ∈ ℝ [21]. The test of an interval hypothesis against its alternative, which structures the parameter space into values which are materially different from θ₀ in the scientific context at hand and values which are not, goes back to Hodges and Lehmann [17], see also Kelter [21,22]. The ROPE is, thus, primarily a synonym for an interval hypothesis.
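The HPD-versus-ROPE decision rule of Kruschke and Liddell [24] quoted above translates directly into code. The following is one possible sketch, where the HPD interval is approximated as the shortest interval covering a fraction α of the sorted posterior draws:

```python
import numpy as np

def hpd_interval(sample, alpha=0.95):
    """Approximate the alpha-HPD interval of a unimodal posterior sample as
    the shortest interval containing a fraction alpha of the sorted draws."""
    s = np.sort(sample)
    k = int(np.ceil(alpha * s.size))          # number of draws the interval must cover
    widths = s[k - 1:] - s[: s.size - k + 1]  # widths of all candidate intervals
    j = int(np.argmin(widths))                # index of the shortest one
    return s[j], s[j + k - 1]

def rope_decision(sample, rope=(-0.2, 0.2), alpha=0.95):
    """ROPE rule: accept the null value if the alpha-HPD lies entirely inside
    the ROPE, reject it if the HPD lies entirely outside, otherwise withhold."""
    lo, hi = hpd_interval(sample, alpha)
    if rope[0] <= lo and hi <= rope[1]:
        return "accept"
    if hi < rope[0] or lo > rope[1]:
        return "reject"
    return "withhold"
```

The "withhold" branch makes the rule's third, undecided case explicit, which is exactly the situation discussed in the next subsection.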

The Proposal for a Shift Towards Estimation Under Uncertainty
The two major drawbacks of the proposal of Kruschke [23] and Kruschke and Liddell [24] are that the ROPE still facilitates hypothesis testing, enforcing a binary decision of rejection or acceptance, while it is also unclear what to do when the 95%-HPD lies partly inside and partly outside the ROPE. Therefore, a different proposal is made in this paper, which is estimation of the mean probable effect size (MPE) instead of hypothesis testing. This procedure will be used in the proposed t test afterwards. First, the acceptance or rejection of a hypothesis H can be formalized as follows: Definition 4 (α-accepted / α-rejected) Let θ be the unknown parameter (or vector of unknown parameters) in an experiment E := {X, Θ, {f_θ}}, where the random variable X, taking values in ℝ and having density f_θ for some θ ∈ Θ, is observed. Let f(θ|x) be the posterior distribution of θ (under any prior π(θ)), and let C_α be the corresponding α% highest posterior density interval of f(θ|x). Let R ⊂ Θ be a ROPE around the hypothesis H of interest, which makes a statement about θ. Then, if C_α ⊂ R, the hypothesis H is called α-accepted, else α-rejected. If α = 1, then H is simply called accepted, else rejected.
Thus, if C_α lies completely inside the ROPE R and if α = 1, the entire posterior probability mass indicates that θ is practically equivalent to the values described by the ROPE R. Thus, H can be accepted. If α < 1, the strength of this statement becomes weaker with decreasing α, of course. For example, if H is 0.75-accepted for a given ROPE R, 25% of the posterior indicates that θ may take values different from the ones included in the ROPE R. It is clear that little is gained if the value of α is small or close to zero when speaking of α-acceptance. Therefore, instead of forcing an acceptance or rejection (which only makes sense for substantial values of α), a perspective focussing on continuous estimation is preferred: that is, the percentage of the posterior distribution's probability mass inside the ROPE R_j around δ_MPE.
For simplicity of notation, the subscript R_j is omitted whenever it is clear which ROPEs R_j are used for partitioning the support of f(θ|x). Now, in contrast to strict α-acceptance or α-rejection rules based on the ROPE R_j, we propose to use δ_MPE and PMP(δ_MPE) together to estimate the effect size under uncertainty, and to quantify this uncertainty via PMP(δ_MPE). If δ_MPE is non-zero, the t test found a difference between both groups. The size of this difference is quantified by δ_MPE itself. The uncertainty in this statement is quantified by PMP(δ_MPE). For the developed two-sample t test, we propose the following procedure:
1. For a fixed credible level α, the effect size range (ESR) should be reported, that is, which effect sizes are assigned positive probability mass by the α% HPD interval, 0 ≤ α ≤ 1. The ESR is a first estimate of credible effect sizes a posteriori.
2. The support of the posterior distribution f(δ|S, y) in the Bayesian t test model is partitioned into the standardized ROPEs of the effect size of [5], leading to a partition P of the support as given in the definition of PMP(δ_MPE).
3. The mean posterior effect size δ_MPE is calculated as an estimate of the true effect size δ. The surrounding ROPE R_j of the partition P with δ_MPE ∈ R_j is selected, and the exact percentage of posterior mass inside R_j is reported as the posterior mass percentage PMP(δ_MPE).
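The three steps above can be sketched as follows. This is a minimal version: the ROPE boundaries follow Cohen's standard categories, a central credible interval is used as a simple stand-in for the HPD interval in step 1, and the posterior sample stands in for the Gibbs sampler output:

```python
import numpy as np

# Partition of the effect-size support into Cohen-style ROPEs [5];
# (-0.2, 0.2) is the "no effect" region.
ROPES = [(-np.inf, -0.8), (-0.8, -0.5), (-0.5, -0.2),
         (-0.2, 0.2),
         (0.2, 0.5), (0.5, 0.8), (0.8, np.inf)]

def esr(delta_sample, alpha=0.95):
    """Step 1: effect size range, here via a central credible interval
    as a simple stand-in for the alpha% HPD interval."""
    return tuple(np.quantile(delta_sample, [(1 - alpha) / 2, (1 + alpha) / 2]))

def mpe_pmp(delta_sample):
    """Steps 2 and 3: mean posterior effect size and the posterior mass
    percentage of the ROPE containing it."""
    mpe = float(delta_sample.mean())                      # delta_MPE
    lo, hi = next(r for r in ROPES if r[0] <= mpe < r[1]) # surrounding ROPE R_j
    pmp = float(np.mean((delta_sample >= lo) & (delta_sample < hi)))
    return mpe, pmp
```

A posterior sample tightly concentrated inside one ROPE yields a PMP close to one, signalling a trustworthy estimate; mass spread over several ROPEs yields a lower PMP.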
The above procedure leads to an estimation of the effect size under uncertainty instead of a hypothesis testing perspective. In addition to the posterior mean δ_MPE, the posterior mass percentage PMP(δ_MPE) gives a continuous measure of the trustworthiness of the estimate, ranging from 0% to 100% (actually from zero to one, but for better interpretability of how much of the posterior's mass is allocated in the ROPE R_j, the values zero to 100 per cent will be used in what follows). δ_MPE estimates with PMP(δ_MPE) > 0.5 (or 50%) could be interpreted as decisive, but need not be. PMP(δ_MPE) can be treated as a continuous measure of support for the effect size estimated by δ_MPE. There are multiple advantages of utilizing a ROPE and combining it with δ_MPE and PMP(δ_MPE), the most important of which may be the following convergence result: If R_j is correct, then PMP(δ_MPE) → 1 for n → ∞ almost surely, and if R_j is incorrect, then PMP(δ_MPE) → 0 for n → ∞ almost surely, except possibly on a set of π-measure zero for any prior π on Θ.
A proof is given in Appendix A.4. Using δ_MPE together with PMP(δ_MPE), therefore, will eventually lead to the correct estimation of δ in the sense that when a correct ROPE is chosen, the posterior mass percentage will converge to one, and if an incorrect ROPE is chosen, the posterior mass percentage will converge to zero. Thus, the procedure indicates whether δ is practically equivalent to the values given by the ROPE or not. If necessary, explicit hypothesis testing can be performed via α-rejection. The advantages compared to p values and Bayes factors, which favour an explicit hypothesis testing perspective, are:
1. As Greenland et al. [13, p. 338] stress with regard to the dichotomy induced by hypothesis testing, "estimation of the size of effects and the uncertainty surrounding our estimates will be far more important for scientific inference and sound judgement than any such classification".
2. In contrast to the Bayes factor (BF), the ROPE and δ_MPE have an important advantage: they do not encourage the same automatic calculation routines as Bayes factors. For example, Gigerenzer and Marewski [11] warned explicitly against Bayes factors becoming the new p values due to the same automatic calculation routines, and the approach via the ROPE fosters estimation and judging the evidence based on the continuous support for δ_MPE provided by PMP(δ_MPE), instead of using thresholds.
3. The ROPE is supported by the following argument, which questions the use of testing point hypotheses like H₀: δ = 0 (no matter if rejecting or confirming is the goal): In practice, measuring is always done with finite precision (like blood pressure, or the heart rate), and therefore, the goal rarely is to show (or reject) that the effect size δ is exactly equal to zero, but much more that δ is negligibly small, denying the existence of any (clinically) relevant effect.
Therefore, invariances like δ = 0 can be interpreted as not existing, at least not exactly, and the search for approximate invariances, as described by a ROPE R = (−.2, .2) around δ = 0, is intellectually (more) compelling. A clinician will be satisfied by the statement that the true effect size is not exactly zero, but with 95% probability negligibly small. This general problem of precise hypothesis testing was first noted by Hodges & Lehmann [17] as mentioned earlier, compare also Rao & Lovric [29] and Kelter [21].

Illustrative Example
To clarify the above line of thought, the following example combines the developed Gibbs sampler for the Bayesian t test with the ROPE, δ_MPE and PMP(δ_MPE). It uses data from Wagenmakers et al. [36], who conducted a randomized controlled trial in which participants had to fill out a personality questionnaire while rolling a kitchen roll clockwise or counterclockwise. The mean score of both groups was compared afterwards.

Frequentist Analysis via Welch's Two-Sample t Test
A traditional two-sided two-sample Welch's t test indicates that there is no significant difference between both groups, yielding a p value of 0.4542. (The data and code, including a full replication script for all simulations and figures presented in the paper, are available at https://osf.io/cvwr5/.) What is missing is the effect size, which is of much more interest. Note that computing the effect size from the raw study data does not quantify the uncertainty in the data, which is undesirable. From the p value, a clinician can only judge that the results are unlikely to be observed under the null hypothesis. However, whether the effect is clinically relevant or negligible remains unknown (or at best only point and interval estimates are reported). In this case, as the p value is quite large, neither can the null hypothesis of no effect be rejected, nor can it be said with certainty that there is indeed no effect in the sense of confirming the null hypothesis, leaving the clinician with no way to proceed but to collect more data.

Fig. 1 Posterior distribution of the effect size δ and analysis for the kitchen roll RCT of [36] via δ_MPE and PMP(δ_MPE)

Figure 1, in contrast, shows an analysis of the posterior of δ produced by the Gibbs sampler for the Bayesian t test. In it, the ROPE, δ_MPE and PMP(δ_MPE) are used. The posterior distribution of δ is given in the upper plot and shows that the posterior mean is 0.149 and the posterior mode 0.156, that is, the mean posterior effect size given the data is 0.149, no effect discernible from zero. The 95% highest density interval ranges from 0.114 to 0.182, showing that with 95% probability, there is no effect discernible from δ = 0, given the data. Even when taking the 100% highest posterior density interval (HPD), this situation does not change, as indicated by the upper plot. The coloured horizontal lines and vertical dotted lines represent the boundaries of the different ROPEs according to Cohen [5].
The lower plot shows the results of partitioning the posterior mass of δ into the ROPEs for different effect sizes, which are standardized as small if δ ∈ [0.2, 0.5), medium if δ ∈ [0.5, 0.8) and large if δ ∈ [0.8, ∞) [5]. 100% of the posterior probability mass lies inside the ROPE (−0.2, 0.2) of no effect. So δ_MPE = 0.149 indicates that no effect discernible from zero is apparent, and the posterior mass percentage PMP(δ_MPE) = 1 (or 100%) shows that the estimate δ_MPE is trustworthy, as the entire posterior probability mass is located inside the ROPE (−0.2, 0.2) of no effect (also indicated by the upper plot). Based on this analysis, one can conclude that given the data, it is highly probable that there exists no effect. The method provides more insight than the information a p value gives: In the example, the p value can neither reject nor confirm the null hypothesis H₀: δ = 0. Even if the p value had been significant, this would only mean that the result is unlikely to be observed under the null hypothesis. The non-significant p value of 0.4542 in this case does not enable researchers to accept the null hypothesis of no effect. The proposed procedure, in contrast, does.

Bayes Factor Analysis
A Bayes factor would have to be combined with estimation to yield the same information; a Bayes factor alone would of course not have provided this information. In the example, the Bayes factor BF₀₁ of H₀: δ = 0 against H₁: δ ≠ 0 is BF₀₁ = 5.015 when using the recommended wide Cauchy C(0, 1) prior of Rouder et al. [32], which indicates only moderate evidence for the null hypothesis H₀: δ = 0 according to Van Doorn et al. [35]. This is in sharp contrast to the PMP(δ_MPE) value of 100%, which strongly suggests that the null hypothesis H₀: δ = 0 is confirmed. The posterior in Fig. 1 is obtained by the Gibbs sampler given in Corollary 1.

Simulation Study
Primary interest now lies in the ability to correctly estimate different sizes of effects via the combination of the derived t test, δ_MPE and PMP(δ_MPE). The effect size ROPEs are oriented at the standard effect sizes of Cohen [5], where an effect is categorized as small if δ ∈ [0.2, 0.5) or δ ∈ (−0.5, −0.2], medium if δ ∈ [0.5, 0.8) or δ ∈ (−0.8, −0.5], and large if δ ≥ 0.8 or δ ≤ −0.8. Secondary interest lies in analysing whether the Gibbs sampler achieves better performance regarding the type I and II errors compared with Welch's t test, the standard NHST solution. The plan of the study is as follows: If there is indeed an effect, the Gibbs sampler should lead to a posterior distribution of δ which lies outside the ROPE (−0.2, 0.2), which is equivalent to the rejection of the null hypothesis H₀: δ = 0. The precise estimation of the size of an effect is a second task, one more demanding than the sole rejection of H₀: δ = 0. If the sampler correctly rejects the null hypothesis because the posteriors concentrate in the set (−∞, −0.2] ∪ [0.2, ∞), this indicates that it makes no type II error and subsequently achieves a power of nearly 100%. Of course, this will depend on the sample sizes in both groups. If, additionally, the 95%-credible intervals of the posteriors concentrate inside the ROPE of the true effect size, the size of the effect is also estimated correctly. Note that in each setting σ₁² ≠ σ₂², which is the premise of the Behrens-Fisher problem [33]. In each of the three effect size scenarios, 100 datasets of the corresponding two-component mixture were simulated for different sample sizes, and the Gibbs sampler was run for each of the 100 datasets for 10000 iterations, using a burn-in of 5000. δ_MPE, the ESR and the ROPE criterion together with α-acceptance are applied, that is, the hypothesis H stating a small, medium or large effect size is α-accepted if the 95%-HPD lies completely inside the corresponding ROPE; the same setup is used in the prior sensitivity analysis. In total, the Gibbs sampler should stabilize around the true effect size δ.
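The data-generating step of this simulation plan can be sketched as follows. The variance values and the convention δ₀ = (μ₁ − μ₂)/√((σ₁² + σ₂²)/2) for fixing the mean difference are illustrative assumptions, not necessarily the exact values used in the study:

```python
import numpy as np

def simulate_dataset(n, delta0, sigma1=1.0, sigma2=1.5, seed=0):
    """Draw one dataset with unequal variances (the Behrens-Fisher premise)
    whose true effect size is delta0, under the assumed convention
    delta0 = (mu_1 - mu_2) / sqrt((sigma_1^2 + sigma_2^2) / 2)."""
    rng = np.random.default_rng(seed)
    mu2 = 0.0                                                  # control mean fixed
    mu1 = mu2 + delta0 * np.sqrt((sigma1**2 + sigma2**2) / 2)  # treatment mean
    y1 = rng.normal(mu1, sigma1, n)                            # treatment group
    y2 = rng.normal(mu2, sigma2, n)                            # control group
    return y1, y2
```

Repeating this 100 times per effect-size scenario and running the Gibbs sampler on each dataset reproduces the structure of the study.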

Results
The upper row of Fig. 2 shows the results for small effect sizes. The two left plots show the results for n = 100 and n = 200 observations in each group. It is clear that the 95%-HPDs in both cases fluctuate strongly, indicating that anything from no effect to a medium effect is possible. The two right plots of the upper row show the results when increasing to n = 300 and n = 700 observations per group. The 95%-HPDs get narrower and stabilize inside the ROPE. While for n = 300 there are still some outliers, for n = 700 all HPDs have concentrated inside the ROPE of a small effect (that is, PMP(δ_MPE) = 1, or 100%, for all iterations) and the estimates δ_MPE (blue points) have already converged closely to the true effect size indicated by the solid black line. The necessary sample size for this precision is not small, but a small effect requires a large sample size to be detected. The middle row of Fig. 2 shows the results for medium effect sizes. The two left plots in it show the results for n = 100 and n = 200 observations in each group for a medium effect size. Increasing the sample size to n = 400 and n = 600 leads to the results shown in the two right plots. These figures show that even for sample sizes of n = 100 in both groups, no 95% HPD lies completely inside the ROPE (−0.2, 0.2) around δ₀ = 0 of no effect, indicating that while the size of the effect may still not be estimated accurately, a null hypothesis of no effect δ₀ = 0 could always be rejected when using sample sizes of at least n = 100 in each group and the underlying effect has medium size. When it comes to precisely estimating the size of the effect, larger sample sizes, similar to those needed to detect small effect sizes, are necessary, as shown by the right plots of the second row.
The lower row of Fig. 2 shows the results for large effect sizes. About n = 50 observations in each group suffice to produce δ_MPE and PMP(δ_MPE) which estimate small to large effects, and thereby reject a null hypothesis of no effect, while about n = 150 to n = 200 seem reasonable to precisely estimate a large effect size. When using δ_MPE (blue points) as an estimator for δ, sample sizes of n = 200 produce an estimate close to the true effect size, which in this case was δ₀ = 1.030723.

Controlling the Type I Error Rate
In frequentist NHST, the Neyman–Pearson theory aims at controlling the type I error rate α, which is the probability of rejecting the null hypothesis H₀ when it is true. In the setting of the two-sample Bayesian t test, this equals the rejection of H₀: δ = 0 although the true effect size is δ₀ = 0. Following Cohen [5], an effect is considered small only if |δ| ≥ 0.2, so effect sizes in the interval (−0.2, 0.2) can be considered as noise, or practically equivalent to zero. Therefore, a ROPE of (−0.2, 0.2) is set around the null value δ₀ = 0 to compare the type I error rate of the proposed method against the standard frequentist NHST solution, Welch's t test. Again, 100 datasets of different sample sizes are simulated, where the true effect size δ₀ is set to zero. The Gibbs sampler should produce a posterior distribution of δ which concentrates inside the ROPE, so that the null hypothesis H₀: δ = 0 is ε-accepted for ε = 0.95, see Definition 4. If the 95%-HPD interval lies entirely outside the ROPE, this equals ε-rejection for ε = 0.95, or in frequentist terms the rejection of the null hypothesis H₀: δ₀ = 0 of no effect, and therefore the commitment of a type I error. The following two definitions formalize the type I and II errors building on the concept of ε-rejection:

Definition 6 (ε type I error) An ε type I error happens if the true parameter value θ₀ ∈ H, with H ⊆ R for a ROPE R ⊂ Θ, but H is ε-rejected for some ε.
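The ROPE-based decision rule described above can be sketched as follows. This is an assumed minimal implementation: `hpd_interval` is a naive shortest-interval estimator that is adequate for unimodal posteriors, and the function names are illustrative, not taken from the bayest package.

```python
def hpd_interval(draws, mass=0.95):
    """Shortest interval containing `mass` of the posterior draws (unimodal case)."""
    xs = sorted(draws)
    n = len(xs)
    k = max(1, int(mass * n))
    # Slide a window of k ordered draws and keep the narrowest one.
    best = min(range(n - k + 1), key=lambda i: xs[i + k - 1] - xs[i])
    return xs[best], xs[best + k - 1]

def rope_decision(draws, rope, mass=0.95):
    """ROPE rule: accept H if the HPD lies inside the ROPE, reject if outside."""
    lo, hi = hpd_interval(draws, mass)
    lo_r, hi_r = rope
    if lo >= lo_r and hi <= hi_r:
        return "accept"        # epsilon-accept H for epsilon = mass
    if hi < lo_r or lo > hi_r:
        return "reject"        # epsilon-reject H for epsilon = mass
    return "no decision"       # HPD overlaps a ROPE boundary
```

A posterior concentrated near zero is accepted against the ROPE (−0.2, 0.2), while one concentrated near a large value is rejected; partial overlap yields no decision, which is exactly the dichotomy the continuous measure PMP(δ_MPE) is designed to resolve.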

Definition 7 (ε type II error) An ε type II error happens if the true parameter value θ₀ ∉ H, with H ⊆ R for a ROPE R ⊂ Θ, but H is ε-accepted for some ε.
The left plot in Fig. 3 shows the results of 100 datasets of size n = 50 in each group. The first group was simulated as N(148.3, 1.34), and the second group as N(148.3, 2.03). The true effect size is δ₀ = 0. The blue points represent δ_MPE and the blue-dotted lines the 95%-HPDs of the posterior of δ. While the estimates fluctuate strongly for n = 50, increasing the sample size in each group successively to n = 200, as shown by the progression of the plots from left to right, shows that false-positive results, that is, type I errors with ε = .95, get completely eliminated for sufficiently large sample sizes. The right plot with sample size n = 300 shows that no type I error with ε = .95 occurs anymore. Also, δ_MPE stabilizes around the true value δ₀ = 0 of no effect, indicating its convergence to the true effect size δ.
The simulations show that the type I error rate converges to zero when the sample size is increased. The number of credible intervals which lie partly inside and partly outside the ROPE decreases to zero. In contrast, p values are uniformly distributed under the null hypothesis, so that no matter what size the samples in both groups are, in the long run one will still obtain α% (most often 5%) type I errors. Conducting Welch's t tests will thus inevitably lead to a type I error rate of 5% if the test level is set to α = .05. If the sample size is at least n = 200 in each group, the proposed Bayesian t test together with δ_MPE and PMP(δ_MPE) performs better with respect to controlling the type I error rate.
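The long-run behaviour of Welch's test under the null can be checked with a small Monte Carlo sketch. This is an assumed setup for illustration: the Welch statistic is compared against the normal critical value 1.96 rather than the exact t quantile, which is accurate for the large per-group sample sizes used here.

```python
import random
from statistics import fmean, variance
from math import sqrt

def welch_t(g1, g2):
    """Welch's t statistic for two groups with unequal variances."""
    m1, m2 = fmean(g1), fmean(g2)
    se = sqrt(variance(g1) / len(g1) + variance(g2) / len(g2))
    return (m1 - m2) / se

# Under H0 (equal means, unequal variances) the rejection rate stays
# near 5% regardless of n, unlike the proposed Bayesian procedure.
rng = random.Random(42)
trials, rejections = 2000, 0
for _ in range(trials):
    g1 = [rng.gauss(148.3, 1.34) for _ in range(300)]
    g2 = [rng.gauss(148.3, 2.03) for _ in range(300)]
    if abs(welch_t(g1, g2)) > 1.96:
        rejections += 1
rate = rejections / trials
```

Even with n = 300 per group, the empirical rejection rate hovers around the nominal 5%, illustrating why NHST cannot eliminate type I errors by increasing the sample size.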
From a theoretical perspective, it is of course of interest for which values of ε this fact holds, and indeed, using the two generalized types of type I and II errors, it can be shown that the number of ε type I (ε type II) errors converges to zero for any ε ≠ 0 when a correct (incorrect) ROPE is chosen:

Theorem 3 For the Bayesian two-sample t test model, the probability of making an ε type I error for any ε ≠ 0 converges to zero for any correct ROPE R around the hypothesis H which makes a statement about the unknown parameter θ. Also, the probability of making an ε type II error for any ε ≠ 0 converges to zero for any incorrect ROPE R.
A proof is given in Appendix A.5. The implications of Theorem 3 are that if a correct ROPE R is chosen, then eventually the probability of making an ε type I error will become zero. Thus, when the ROPE R includes the true parameter θ₀, eventually the hypothesis H will be ε-accepted for ε = 1, that is, accepted. If, on the other hand, an incorrect ROPE is selected which does not include the true parameter θ₀, then eventually the probability of making an ε type II error (that is, accepting H although θ₀ ∉ H) will become zero for any ε ≠ 0 if the sample size is large enough.

Prior Sensitivity Analysis
Section 2.3 detailed the independence prior used in the model, and of specific interest is of course the influence of this prior on the results produced by the procedure. Therefore, three different hyperparameter settings were selected to resemble a wide, medium and narrow prior, where the shrinkage effect on the σ²_k, k = 1, 2, caused by the inverse Gamma prior G⁻¹(c₀, C₀) on σ²_k increases as the prior gets narrower (that is, σ²_k is shrunken towards zero). The same applies for the normal prior N(b₀, B₀) on the means μ_k, k = 1, 2. The following hyperparameters were chosen for the three different settings: For the wide prior, b₀ := x̄ and B₀ := 10 · s²(x), where x̄ and s²(x) are the complete sample mean and variance. c₀ and C₀ were both selected as 0.01 for the wide prior, implying fatter tails of the inverse Gamma prior than in the medium or narrow prior. For the medium prior, B₀ was decreased to 5 · s²(x), and c₀ and C₀ were both increased to 0.1. For the narrow prior, finally, B₀ := s²(x) and c₀ = C₀ = 1, which is the most informative of all three priors.
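The three hyperparameter settings can be transcribed directly from the description above. This is a sketch; the helper name `prior_settings` is illustrative and not part of the bayest package.

```python
from statistics import fmean, variance

def prior_settings(x):
    """Wide, medium and narrow prior hyperparameters computed from the pooled sample x."""
    xbar, s2 = fmean(x), variance(x)
    return {
        "wide":   {"b0": xbar, "B0": 10 * s2, "c0": 0.01, "C0": 0.01},
        "medium": {"b0": xbar, "B0": 5 * s2,  "c0": 0.1,  "C0": 0.1},
        "narrow": {"b0": xbar, "B0": s2,      "c0": 1.0,  "C0": 1.0},
    }
```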
Subsequently, 100 datasets with n = 100 observations in each group were simulated, where the first group was generated as N(0, 1) and the second as N(1, 1). The Gibbs sampler was run for 10000 iterations with a burn-in of 5000 once for each prior on each dataset. Fig. 4 shows the results of the simulations. Here, the resulting posterior densities of δ are overlaid, and it becomes clear that the wide and medium prior result in barely differing posteriors. When using the narrow prior, the shrinkage moves the posterior slightly towards smaller values of δ.
The lower plot in Fig. 4 additionally shows the posterior distributions of differences between means obtained from the three priors: The left-hand plot shows the posterior distribution of differences between δ_MPE obtained via a wide and a medium prior. The middle plot shows the posterior distribution of differences between δ_MPE obtained via a wide and a narrow prior, and the right-hand plot the posterior distribution of differences between δ_MPE obtained via a medium and a narrow prior. The results show that in all cases the differences are of tiny magnitude, indicating that the proposed t test is quite robust to the selected prior hyperparameters. Of course, it can happen that δ_MPE will be drawn towards smaller values when switching from the wide to the narrow prior, but the posterior mass percentage supporting a large effect will not vary much, as shown by the nearly identical resulting posterior densities in Fig. 4, which is a strength of the continuous quantification through PMP(δ_MPE). Based on the sensitivity analysis, all three hyperparameter settings differ only slightly, and therefore the wide prior seems suitable for most applications, as it is the least informative one and can be interpreted as most objective because it introduces the smallest amount of subjectivity into the analysis.

Discussion
In this paper, a new Bayesian two-sample t test based on Gaussian mixture modelling with known allocations was introduced, which provides a Bayesian solution to the Behrens–Fisher problem while simultaneously focussing on effect size estimation. Following the proposal of a shift from hypothesis testing to estimation under uncertainty, a Bayesian two-sample t test was derived for inference on the effect size δ, which is the quantity of interest in most biomedical research. Also, the dichotomy of the ROPE decision rule of Kruschke and Liddell [24] was resolved by introducing the mean probable effect size δ_MPE as an estimator of δ, combined with the posterior mass percentage PMP(δ_MPE), a continuous measure which quantifies the support for the evidence suggested by δ_MPE.
Theoretical results showed that the use of the proposed method leads to a consistent estimation procedure which shifts from hypothesis testing to estimation under uncertainty. Also, theoretical results showed that under any correct ROPE R, the number of introduced ε type I errors converges to zero a.s. under the law determined by the prior on θ, while under any incorrect ROPE (which means the ROPE picked by the researcher does not cover the true parameter value θ₀), the number of introduced ε type II errors converges to zero a.s. under the law determined by the prior on θ. Together, these properties and the introduced concept of ε-rejection make the proposed method an attractive alternative to existing solutions via p values or the Bayes factor.
One important limitation of the approach is that the results depend on the chosen priors for μ_k and σ²_k. Despite the method being quite robust, especially the choice of the inverse Gamma prior for the variances may be questioned. While it is beyond the scope of this paper to perform a sensitivity analysis using different priors for the variances, one remedy which allows changing the priors would be to switch to different sampling techniques, for example Hamiltonian Monte Carlo in Stan [4]. Also, the speed of convergence to the entire elimination of type I errors may be slow, although the simulation results are promising.
The proposed method, therefore, may be helpful in improving the reproducibility of biomedical research, especially by reducing the number of false-positive results, which is one of the biggest problems of the medical sciences, see McElreath and Smaldino [27]. Finally, it should be noted that both Bayes factors and p values are reasonable tools when hypothesis testing is the goal [19]. The proposed new Bayesian two-sample t test is not intended as a replacement of these tools, but rather as a complementary tool for cases in which the test of a precise point null hypothesis H₀: δ = δ₀ for some δ₀ ∈ ℝ is not desired and effect size estimation under uncertainty is more important. Such situations are, in particular, important in the medical sciences [17,21,26]. Therefore, the main advantage may be seen in the shift towards effect size estimation. Future work could extend the model to more than two groups, leading to an equivalent of the ANOVA, use more robust component distributions like t-distributions, or derive the posterior under different priors on the mixture component parameters.

Bayesian Parameter Estimation for Known Allocations
In this section, for the setting when the allocations S are known, the posterior distributions of μ_k, σ²_k given the complete data (S, y) are derived, see also [9]. As already mentioned above, the weights (η₁, η₂) are known in this case because for every observation y_i ∈ (y₁, …, y_N) it is known to which group y_i belongs, that is, the quantities S_i = k, k ∈ {1, 2}, are available for all i ∈ {1, …, N}.
In the setting of a t test between two groups with equal sample sizes n₁ = n₂ and n₁ + n₂ = N, the membership of an observation y_i in its group normally is known for all observations i ∈ {1, …, N}. The underlying data-generating process, therefore, can be assumed to consist of a mixture of K = 2 components with weights η₁ = η₂ = 0.5. This makes inference in the mixture model much easier compared to the case when both the weights (η₁, η₂) as well as the component parameters (μ₁, μ₂) and (σ₁², σ₂²) are unknown. To conduct inference about the unknown parameters, the necessary group-specific quantities are the number N_k(S) of observations in group k for k = 1, 2, the group mean ȳ_k(S) and the within-group variance s²_{y,k}(S):

N_k(S) = |{i : S_i = k}|,   ȳ_k(S) = (1/N_k(S)) Σ_{i: S_i = k} y_i,   s²_{y,k}(S) = (1/N_k(S)) Σ_{i: S_i = k} (y_i − ȳ_k(S))²,

where |·| denotes the cardinality of a set. These quantities depend on S, so the classification of the observation y_i to the component S_i = k needs to be available. When S_i = k holds for an observation y_i, then the observational model for y_i is N(μ_k, σ²_k), and y_i contributes to the complete-data likelihood p(y|μ, σ², S) a factor of f_N(y_i; μ_k, σ²_k). Taking into account all observations y₁, …, y_N, the complete-data likelihood function can be written as

p(y|μ, σ², S) = Π_{k=1}^{2} Π_{i: S_i = k} f_N(y_i; μ_k, σ²_k).

The complete-data likelihood is a product of K = 2 factors, each of which summarizes the information about the kth group, k ∈ {1, 2}. These K = 2 factors are then combined in a Bayesian analysis with a prior. While the ultimate interest lies in the posterior of both μ_k, σ²_k, we consider first two different cases, which will eventually lead to the solution of the joint posterior for μ_k and σ²_k.
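The group-specific quantities N_k(S), ȳ_k(S) and s²_{y,k}(S) are straightforward to compute; a minimal sketch (the function name is illustrative, and the group is assumed to be non-empty):

```python
def group_stats(y, S, k):
    """Sufficient statistics for group k: size, mean, within-group variance.

    Uses the 1/N_k normalization for the within-group variance, matching the
    definition above. Assumes at least one observation has S_i = k.
    """
    yk = [yi for yi, si in zip(y, S) if si == k]
    N_k = len(yk)
    ybar_k = sum(yk) / N_k
    s2_k = sum((yi - ybar_k) ** 2 for yi in yk) / N_k
    return N_k, ybar_k, s2_k
```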
In the first case, when the variance σ²_k is fixed, the complete-data likelihood function as a function of μ_k is the kernel of a univariate normal distribution. Choosing a N(b₀, B₀)-distribution as a conjugate prior, the posterior density of μ_k given σ²_k and the N_k(S) observations in group k for k = 1, 2 can be derived as

p(μ_k|σ²_k, S, y) ∝ p(y|μ_k, σ²_k, S) p(μ_k),   (1)

where in (1) the fact that σ²_k is assumed to be given and the allocations S are known constants, too, was used.
In general, for a sample of size n from a N(μ, σ²) distribution with known variance σ², a standard Bayesian analysis, see e.g. [15, p. 181], yields that the likelihood, when combined with a prior μ ∼ N(ν, τ²), leads to the posterior

μ|y ∼ N( (ν/τ² + nȳ/σ²) / (1/τ² + n/σ²), (1/τ² + n/σ²)⁻¹ ).

Substituting ν = b₀ and τ² = B₀ in the prior of μ_k, as well as μ_k for μ and σ²_k for σ² in the likelihood, the posterior p(μ_k|σ²_k, S, y) in Eq. (1) becomes

μ_k|σ²_k, S, y ∼ N(b_k(S), B_k(S)).   (6)

By Eq. (6), the posterior parameters can be written as

B_k(S) = (1/B₀ + N_k(S)/σ²_k)⁻¹,   (7)
b_k(S) = B_k(S) (b₀/B₀ + N_k(S)ȳ_k(S)/σ²_k),   (8)

where for an empty group k for k = 1, 2 the term N_k(S)ȳ_k(S) is defined as zero.
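This conjugate update can be sketched as a small helper implementing the standard normal-normal result, with parameter names following the text. This is a sketch for illustration, not the bayest implementation.

```python
def normal_posterior(b0, B0, sigma2_k, N_k, ybar_k):
    """Posterior N(b_k, B_k) of mu_k given sigma2_k under the N(b0, B0) prior.

    Standard conjugate normal update: precision of the posterior is the sum of
    the prior precision 1/B0 and the data precision N_k/sigma2_k.
    """
    B_k = 1.0 / (1.0 / B0 + N_k / sigma2_k)
    b_k = B_k * (b0 / B0 + N_k * ybar_k / sigma2_k)
    return b_k, B_k
```

With a very diffuse prior the posterior mean collapses to the group mean, and with no observations it returns the prior, as expected.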
On the other hand, if the mean μ_k is regarded as fixed, the complete-data likelihood as a function of σ²_k is the kernel of an inverse Gamma density. Choosing the conjugate inverse Gamma prior σ²_k ∼ G⁻¹(c₀, C₀), a standard Bayesian analysis (for details, see for example Held and Sabanés-Bové [15, p. 181]) yields the posterior of σ²_k|μ_k, S, y as

σ²_k|μ_k, S, y ∼ G⁻¹(c_k(S), C_k(S)),   (9)

with

c_k(S) = c₀ + N_k(S)/2,   (10)
C_k(S) = C₀ + (1/2) Σ_{i: S_i = k} (y_i − μ_k)².   (11)

The case of interest here is when both μ_k and σ²_k are unknown, and in this case a closed-form solution for the joint posterior p(μ_k, σ²_k|S, y) exists only under specific conditions. That is, the prior variance B_{0,k} of the mean μ_k of group k must depend on σ²_k through the relation B_{0,k} = σ²_k/N₀, where N₀ is a newly introduced hyperparameter in the prior of μ_k; that is, the prior μ_k ∼ N(b₀, B₀) then becomes μ_k ∼ N(b₀, σ²_k/N₀). The joint posterior now can be rewritten as

p(μ, σ²|S, y) = Π_{k=1}^{2} p(μ_k, σ²_k|S, y),   (12)

which follows from the fact that the group parameters μ_k, σ²_k are assumed to be independent across groups, and from factorizing each joint posterior as

p(μ_k, σ²_k|S, y) = p(μ_k|σ²_k, S, y) · p(σ²_k|S, y),

where the first factor is denoted (A) and the second (B). As the factor (A) was already derived in Eq. (6) with corresponding parameters in Eqs. (7) and (8) for k = 1, 2, the factor (A) of the posterior in Eq. (12) is normally distributed N(b_k(S), B_k(S)), where substituting the relation B_{0,k} = σ²_k/N₀ into Eqs. (7) and (8) yields the parameters

B_k(S) = σ²_k/(N₀ + N_k(S)),   (13)
b_k(S) = (N₀b₀ + N_k(S)ȳ_k(S))/(N₀ + N_k(S)),

where N₀ is the newly introduced hyperparameter.
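The two full conditionals yield a simple Gibbs step per group. Below is a minimal sketch under the independence prior μ_k ∼ N(b₀, B₀), σ²_k ∼ G⁻¹(c₀, C₀); the use of `gammavariate` to obtain an inverse Gamma draw (if X ∼ Gamma(c, scale 1/C), then 1/X ∼ G⁻¹(c, C)) is a standard trick, and the function name is illustrative.

```python
import random

def gibbs_step(yk, mu_k, sigma2_k, b0, B0, c0, C0, rng):
    """One Gibbs sweep for group k: draw sigma2_k, then mu_k."""
    N_k = len(yk)
    # sigma2_k | mu_k, y ~ G^{-1}(c_k, C_k), Eqs. (9)-(11)
    c_k = c0 + N_k / 2.0
    C_k = C0 + 0.5 * sum((yi - mu_k) ** 2 for yi in yk)
    sigma2_k = 1.0 / rng.gammavariate(c_k, 1.0 / C_k)
    # mu_k | sigma2_k, y ~ N(b_k, B_k), Eqs. (6)-(8)
    ybar_k = sum(yk) / N_k
    B_k = 1.0 / (1.0 / B0 + N_k / sigma2_k)
    b_k = B_k * (b0 / B0 + N_k * ybar_k / sigma2_k)
    mu_k = rng.gauss(b_k, B_k ** 0.5)
    return mu_k, sigma2_k

# Short demonstration chain on synthetic N(5, 2^2) data.
rng = random.Random(1)
data = [rng.gauss(5.0, 2.0) for _ in range(400)]
mu, s2 = 0.0, 1.0
post_mu, post_s2 = [], []
for it in range(3000):
    mu, s2 = gibbs_step(data, mu, s2, b0=0.0, B0=100.0, c0=0.01, C0=0.01, rng=rng)
    if it >= 1000:   # discard burn-in
        post_mu.append(mu)
        post_s2.append(s2)
```

The chain stabilizes quickly around the true mean and variance, mirroring the behaviour of the sampler described in the paper.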
The remaining factor (B) of Eq. (12) is the marginal posterior of σ²_k, that is, p(σ²_k|S, y), and by integrating out μ_k, a standard Bayesian analysis shows that the marginal posterior of σ²_k is distributed as inverse Gamma G⁻¹(c_k(S), C_k(S)), where c_k(S) is already given in Eq. (10), and the parameter C_k(S) in Eq. (11) changes to

C_k(S) = C₀ + (1/2) Σ_{i: S_i = k} (y_i − ȳ_k(S))² + (1/2) · (N₀N_k(S)/(N₀ + N_k(S))) (ȳ_k(S) − b₀)².

This is because combining an inverse Gamma prior with the normal likelihood with known mean yields an inverse Gamma posterior, as shown above, and integrating μ_k out of the resulting joint posterior yields exactly another inverse Gamma distribution with different parameters. Details can be found in [15].

Application to the Two-Sample t Test: Derivation of the Marginal and Joint Posterior Distributions
Summarizing, for two groups, the two-component Gaussian mixture can be interpreted as a data-generating process which consists of K = 2 components. The weights η₁ and η₂ are both equal to 1/2 for equally sized groups, that is, N = n₁ + n₂ with n₁ being the sample size of group one, n₂ the sample size of group two, and n₁ = n₂. Taking into account all observations y₁, …, y_N, the complete-data likelihood function can be written as

p(y|μ, σ², S) = Π_{k=1}^{2} Π_{i: S_i = k} f_N(y_i; μ_k, σ²_k),

where f_N(y_i; μ_k, σ²_k) denotes the normal density with mean μ_k and variance σ²_k, and again, after selecting a conjugate inverse Gamma prior σ²_k ∼ G⁻¹(c₀, C₀) for k = 1, 2, these posteriors are also completely determined.
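Putting the pieces together, an end-to-end sketch of the sampler can be written in a few lines: alternate the two conjugate updates per group and record the effect size at each iteration. This is an assumed illustration, not the bayest implementation; in particular, the standardization δ = (μ₁ − μ₂)/√((σ₁² + σ₂²)/2) is a Cohen-type choice made for this sketch, and the wide prior from the sensitivity analysis is used.

```python
import random

def effect_size_posterior(g1, g2, iters=4000, burn_in=2000, seed=7):
    """Gibbs sampler for the two-component mixture with known allocations.

    Returns posterior draws of the effect size delta.
    """
    rng = random.Random(seed)
    y = g1 + g2
    b0 = sum(y) / len(y)                                       # wide prior: b0 = pooled mean
    B0 = 10 * sum((yi - b0) ** 2 for yi in y) / (len(y) - 1)   # B0 = 10 * s^2(x)
    c0 = C0 = 0.01
    mu = [sum(g) / len(g) for g in (g1, g2)]                   # initialize at group means
    s2 = [1.0, 1.0]
    draws = []
    for it in range(iters):
        for k, g in enumerate((g1, g2)):
            n = len(g)
            # sigma2_k | mu_k, y ~ G^{-1}(c_k, C_k)
            c_k = c0 + n / 2.0
            C_k = C0 + 0.5 * sum((yi - mu[k]) ** 2 for yi in g)
            s2[k] = 1.0 / rng.gammavariate(c_k, 1.0 / C_k)
            # mu_k | sigma2_k, y ~ N(b_k, B_k)
            B_k = 1.0 / (1.0 / B0 + n / s2[k])
            b_k = B_k * (b0 / B0 + sum(g) / s2[k])
            mu[k] = rng.gauss(b_k, B_k ** 0.5)
        if it >= burn_in:
            draws.append((mu[0] - mu[1]) / (((s2[0] + s2[1]) / 2) ** 0.5))
    return draws
```

Applied to two groups with a true standardized difference of one, the posterior draws of δ concentrate around the true effect size, which is the behaviour exploited throughout the simulation study.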