Power priors for replication studies

The ongoing replication crisis in science has increased interest in the methodology of replication studies. We propose a novel Bayesian analysis approach using power priors: The likelihood of the original study's data is raised to the power of α, and then used as the prior distribution in the analysis of the replication data. Posterior distribution and Bayes factor hypothesis tests related to the power parameter α quantify the degree of compatibility between the original and replication study. Inferences for other parameters, such as effect sizes, dynamically borrow information from the original study. The degree of borrowing depends on the conflict between the two studies. The practical value of the approach is illustrated on data from three replication studies, and the connection to hierarchical modeling approaches explored. We generalize the known connection between normal power priors and normal hierarchical models for fixed parameters and show that normal power prior inferences with a beta prior on the power parameter α align with normal hierarchical model inferences using a generalized beta prior on the relative heterogeneity variance I 2 . The connection illustrates that power prior modeling is unnatural from the perspective of hierarchical modeling since it corresponds to specifying priors on a relative rather than an absolute heterogeneity scale.


Introduction
Power priors form a class of informative prior distributions that allow data analysts to incorporate historical data into a Bayesian analysis (Ibrahim et al., 2015). The most basic version of the power prior is obtained by updating an initial prior distribution with the likelihood of the historical data raised to the power of α, where α is usually restricted to the range from zero (i.e., complete discounting) to one (i.e., complete pooling). As such, the power parameter α specifies the degree to which historical data are discounted, thereby providing a quantitative compromise between the extreme positions of completely ignoring and fully trusting the historical data.
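As a schematic illustration (our notation; a sketch rather than a quoted formula), the basic power prior for a parameter θ based on historical data Do with likelihood L(θ | Do) and initial prior f0(θ) is

f(θ | Do, α) ∝ L(θ | Do)^α f0(θ), 0 ≤ α ≤ 1,

so that α = 0 recovers the initial prior and α = 1 corresponds to a standard Bayesian update with the full historical data.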
One domain where historical data are by definition available is the analysis of replication studies.
One pertinent question in this domain is the extent to which a replication study has successfully replicated the result of an original study (National Academies of Sciences, Engineering, and Medicine, 2019). Many methods have been proposed to address this question (Bayarri and Mayoral, 2002b; Verhagen and Wagenmakers, 2014; Johnson et al., 2016; Etz and Vandekerckhove, 2016; van Aert and van Assen, 2017; Ly et al., 2018; Hedges and Schauer, 2019; Mathur and VanderWeele, 2020; Held, 2020; Pawel and Held, 2020, 2022; Held et al., 2022, among others). Here we propose a new and conceptually straightforward approach, namely to construct a power prior from the data of the original study, and to use that prior to draw inferences from the data of the replication study. The power prior approach can accommodate two common notions of replication success: First, the notion that the replication study should provide evidence for a genuine effect. This can be quantified by estimating and testing an effect size θ, typically by assessing whether there is evidence that θ is different from zero. Second, the notion that the data from the original and replication studies should be compatible. This can be quantified by estimating and testing the power parameter α. Values close to α = 1 indicate compatibility, as the two data sets are completely pooled, and values close to α = 0 indicate incompatibility, as the original data are completely discounted.
Below we first show how power priors can be constructed from the data of an original study under a meta-analytic framework (Section 2). We then show how the power prior can be used for parameter estimation (Section 2.1) and Bayes factor hypothesis testing (Section 2.2). Throughout, the methodology is illustrated by application to data from three replication studies which were part of a large-scale replication project (Protzko et al., 2020). In Section 3, we explore the connection to the alternative hierarchical modeling approach for incorporating the original data (Bayarri and Mayoral, 2002b,a; Pawel and Held, 2020), which has previously been used for evidence synthesis and compatibility assessment in replication settings. In doing so, we identify explicit conditions under which posterior distributions and tests can be reverse-engineered from one framework to the other. Essentially, power prior inferences using the commonly assigned beta prior on the power parameter α align with normal hierarchical model inferences if either a generalized F prior is assigned to the between-study heterogeneity variance τ 2 which scales with the variance of the original data, or if a generalized beta prior is assigned to the relative heterogeneity I 2 . This perspective also explains the observed difficulty of making conclusive inferences about the power parameter α: it is difficult to make inferences about a variance from two observations alone, and the commonly assigned beta prior on α is entangled with the variance of the data.

Power prior modeling of replication studies
Let θ denote an unknown effect size and θi an estimate thereof obtained from study i ∈ {o, r}, where the subscript indicates "original" and "replication", respectively. Assume that the likelihood of the effect estimates can be approximated by a normal distribution with σ i the (assumed to be known) standard error of the effect estimate θi . The effect size may be adjusted for confounding variables, and depending on the outcome variable, a transformation may be required for the normal approximation to be accurate (e.g., a log-transformation for an odds ratio effect size). This is the same framework that is typically used in meta-analysis, and it is applicable to many types of data and effect sizes (Spiegelhalter et al., 2004, chapter 2.4). There are, of course, situations where the approximation is inadequate and modified distributional assumptions are required (e.g., for data from studies with small sample sizes and/or extreme effect sizes).
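In symbols, this assumption amounts to (a sketch in the notation above, not a quoted equation)

θi | θ ∼ N(θ, σ 2 i ) for i ∈ {o, r},

with the standard errors σ i treated as fixed and known.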
The goal is now to construct a power prior for θ based on the data from the original study. Updating an (improper) flat initial prior f (θ) ∝ 1 by the likelihood of the original data raised to a (fixed) power parameter α leads to the normalized power prior as first proposed by Duan et al. (2005), see also Neuenschwander et al. (2009). There are different ways to specify α. The simplest approach fixes α to an a priori reasonable value, possibly informed by background knowledge about the similarity of the two studies. Another option is to use the empirical Bayes estimate (Gravestock and Held, 2017), that is, the value of α that maximizes the likelihood of the replication data marginalized over the power prior. Finally, it is also possible to specify a prior distribution for α, the most common choice being a beta distribution α | x, y ∼ Be(x, y) for a normalized power prior conditional on α as in (1). This approach leads to a joint prior for the effect size θ and power parameter α with density (2), where N(• | m, v) is the normal density function with mean m and variance v, and Be(• | x, y) is the beta density with parameters x and y. The uniform distribution (x = 1, y = 1) is often recommended as the default choice (Ibrahim et al., 2015). We note that α does not have to be restricted to the unit interval but could also be treated as a relative precision parameter (Held and Sauter, 2017). We will, however, not consider such an approach since power parameters α > 1 lead to priors with more information than what was actually supplied by the original study.
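For concreteness, a plausible reconstruction of the normalized power prior and the joint prior just described (under the normal approximation and the flat initial prior) is

θ | θo, α ∼ N(θo, σ 2 o /α),

f (θ, α) = N(θ | θo, σ 2 o /α) Be(α | x, y),

so that larger values of α make the prior more concentrated around the original effect estimate θo.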

Parameter estimation
Updating the prior (2) with the likelihood of the replication data leads to the posterior distribution (3). The normalizing constant is generally not available in closed form but requires numerical integration with respect to α. If inference concerns only one parameter, a marginal posterior distribution for either α or θ can be obtained by integrating out the corresponding nuisance parameter from (3). In the case of the power parameter α, the marginalization over θ is available in closed form, whereas for the effect size θ, the resulting density involves gamma and confluent hypergeometric functions (Abramowitz and Stegun, 1965, chapters 6 and 13).
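To make the computation concrete, the following sketch (our own illustrative code, not the authors' implementation; values taken from the original study and first replication of the "Labels" example) evaluates the joint posterior on a grid and obtains the marginal posteriors of α and θ by numerical integration.

```python
import numpy as np
from scipy import stats

# Illustrative sketch: numerical evaluation of the joint posterior of
# (theta, alpha) under a normalized power prior with a Be(x, y) prior on
# alpha and a flat initial prior on theta.
to, so = 0.21, 0.05   # original estimate and standard error
tr, sr = 0.09, 0.05   # replication estimate and standard error
x, y = 1, 1           # parameters of the beta prior on alpha

# grids for theta and alpha (alpha > 0 so the power prior is proper)
theta = np.linspace(-0.2, 0.6, 400)
alpha = np.linspace(1e-4, 1, 400)
T, A = np.meshgrid(theta, alpha)

# unnormalized joint posterior:
# N(tr | theta, sr^2) * N(theta | to, so^2 / alpha) * Be(alpha | x, y)
post = (stats.norm.pdf(tr, loc=T, scale=sr)
        * stats.norm.pdf(T, loc=to, scale=so / np.sqrt(A))
        * stats.beta.pdf(A, x, y))

# normalize numerically and obtain marginals by summing over the grid
dt, da = theta[1] - theta[0], alpha[1] - alpha[0]
post /= post.sum() * dt * da
marg_alpha = post.sum(axis=1) * dt   # marginal posterior of alpha
marg_theta = post.sum(axis=0) * da   # marginal posterior of theta

print("posterior mean of alpha:", np.sum(alpha * marg_alpha) * da)
print("posterior mean of theta:", np.sum(theta * marg_theta) * dt)
```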

Example "Labels"
We now illustrate the methodology on data from the large-scale replication project by Protzko et al. (2020). The project featured an experiment called "Labels" for which the original study reported the following conclusion: "When a researcher uses a label to describe people who hold a certain opinion, he or she is interpreted as disagreeing with those attributes when a negative label is used and agreeing with those attributes when a positive label is used" (Protzko et al., 2020, p. 17). This conclusion was based on a standardized mean difference effect estimate θo = 0.21 and standard error σ o = 0.05 obtained from 1577 participants. Subsequently, four replication studies were conducted, three of them by a different laboratory than the original one, and all employing large sample sizes. Since the same original study was replicated by three independent laboratories, this is an instance of a "multisite" replication design (Mathur and VanderWeele, 2020). While in principle it would be possible to analyze all of these studies jointly, we will show separate analyses for each pair of original and replication study as it reflects the typical situation of only one replication study being conducted per original study. Section 4 discusses possible extensions of the power prior approach for joint analyses in multisite designs.
Figure 1 shows joint and marginal posterior distributions for effect size θ and power parameter α based on the results of the three external replication studies and a power prior for the effect size θ constructed from the original effect estimate θo = 0.21 (with standard error σ o = 0.05) and an initial flat prior f (θ) ∝ 1.The power parameter α is assigned a uniform Be(x = 1, y = 1) prior distribution.
The first replication found an effect estimate which was smaller than the original one ( θr1 = 0.09 with σ r1 = 0.05), whereas the other two replications found effect estimates that were either identical ( θr2 = 0.21 with σ r2 = 0.06) or larger ( θr3 = 0.44 with σ r3 = 0.04) than that reported in the original study. This is reflected in the marginal posterior distributions of the power parameter α, shown in the bottom right panel of Figure 1. That is, the marginal distribution of the first replication (yellow) is slightly peaked around α = 0.2, suggesting some incompatibility with the original study. In contrast, the second replication shows a marginal distribution (green) which is monotonically increasing so that the value α = 1 receives the highest support, thereby indicating compatibility of the two studies.
Finally, the marginal distribution of the third replication (blue) is sharply peaked around α = 0.05 with a 95% credible interval from 0 to 0.62, indicating strong conflict between this replication and the original study.

Figure 1: Joint (top) and marginal (bottom) posterior distributions of effect size θ and power parameter α based on data from the "Labels" experiment (Protzko et al., 2020). The dashed lines depict the posterior density for the effect size θ when the replication data are analyzed in isolation without incorporation of the original data. The horizontal error bars represent the corresponding 95% highest posterior density credible intervals. The dotted line represents the limiting posterior density of the power parameter α for perfectly agreeing original and replication studies.

The sharply peaked posterior is in stark contrast to the relatively diffuse posteriors of the first and second replications, which hardly changed from the uniform prior. This is consistent with the asymptotic behavior of normalized power priors identified in Pawel et al. (2023a): in case of data incompatibility, normalized power priors with a beta prior assigned to α permit arbitrarily peaked posteriors for small values of α. In contrast, for perfectly agreeing original and replication studies ( θo = θr ) there is a limiting posterior for α that gives only slightly more probability to values near one. The limiting posterior is in this case a Be(3/2, 1) distribution, whose density is indicated by the dotted line. One can see that the (green) posterior from the second replication is relatively close to the limiting posterior, despite its finite sample size. Similarly, the corresponding (green) 95% credible interval from 0.12 to 1 suggests that a wide range of very low to very high α values remain credible despite the excellent agreement of original and replication study.
The bottom left panel of Figure 1 shows the marginal posterior distribution of the effect size θ.
Also shown is the posterior distribution of θ when the replication data are analyzed in isolation (dashed line), which reveals the information gain from incorporating the original data via a power prior. The degree of compatibility with the replication study influences how much information is borrowed from the original study. For instance, the (green) marginal posterior density based on the most compatible replication ( θr2 = 0.21) is the most concentrated among the three replications, despite the standard error being the largest (σ r2 = 0.06). Consequently, the 95% credible interval of θ is substantially narrower compared to the credible interval from the analysis of the replication data in isolation (dashed green). In contrast, the (blue) marginal posterior of the most conflicting estimate ( θr3 = 0.44) borrows less information and consequently yields the least peaked posterior, despite the standard error being the smallest (σ r3 = 0.04). In this case, the conflict with the original study even inflates the variance of the posterior compared to the isolated replication posterior given by the dashed blue line. This is, for example, apparent through its 95% credible interval (0.31 to 0.5) being even wider than the credible interval (0.35 to 0.52) based on the analysis of the replication data in isolation.

Hypothesis testing
In addition to estimating θ and α, we may also be interested in testing hypotheses about these parameters. Let H 0 and H 1 denote two competing hypotheses, each of them with an associated prior f (θ, α | H i ) and a resulting marginal likelihood (6), obtained from integrating the likelihood of the replication data with respect to the prior for i ∈ {0, 1}. A principled Bayesian hypothesis testing approach is to compute the Bayes factor, since it corresponds to the updating factor of the prior odds to the posterior odds of the hypotheses based on the data θr , or, equivalently, because it represents the relative accuracy with which the hypotheses predict the data θr (Jeffreys, 1939; Good, 1958; Kass and Raftery, 1995).
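Written out (a standard identity, stated here for completeness rather than quoted from the paper's displayed equation), the Bayes factor is

BF 01 ( θr ) = {Pr(H 0 | θr )/Pr(H 1 | θr )} / {Pr(H 0 )/Pr(H 1 )} = f ( θr | H 0 )/f ( θr | H 1 ),

with f ( θr | H i ) the marginal likelihood of the replication estimate under hypothesis H i .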
A Bayes factor BF 01 ( θr ) > 1 provides evidence for H 0 , whereas a Bayes factor BF 01 ( θr ) < 1 provides evidence for H 1 . The more the Bayes factor deviates from one, the larger the evidence. In the following we will examine the Bayes factors related to various hypotheses about θ and α.

Hypotheses about the effect size θ
Researchers may be interested in testing the null hypothesis that there is no effect (H 0 : θ = 0) against the alternative that there is an effect (H 1 : θ ≠ 0). We note that while the point null hypothesis H 0 is often unrealistic, it is usually a good approximation to more realistic interval null hypotheses that assign a distribution tightly concentrated around zero (Berger and Delampady, 1987; Ly and Wagenmakers, 2022). Under H 0 there are no free parameters, but under the alternative H 1 the specification of a prior distribution for θ and α is required. A natural choice is to use the normalized power prior based on the original data along with a beta prior for the power parameter as in (2). The associated Bayes factor is then given in (7). An intuitively reasonable choice for the prior of α under H 1 is a uniform α ∼ Be(x = 1, y = 1) distribution. However, it is worth noting that assigning a point mass at α = 1 leads to (8), which is the replication Bayes factor under normality (Verhagen and Wagenmakers, 2014; Ly et al., 2018; Pawel and Held, 2022), that is, the Bayes factor contrasting a point null hypothesis to the posterior distribution of the effect size based on the original data (and in this case a uniform initial prior). A fixed α = 1 can also be seen as the limiting case of a beta prior with y > 0 and x → ∞. The power prior version of the replication Bayes factor is thus a generalization of the standard replication Bayes factor, one that allows the original data to be discounted to some degree.
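As an illustration, the following sketch (our own code under the normal approximation, not the authors' implementation) computes the replication Bayes factor with α = 1 under H 1 in closed form and the generalized version with α ∼ Be(x, y) by one-dimensional numerical integration.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

def bf01_alpha_one(tr, sr, to, so):
    """Replication Bayes factor under normality (alpha = 1 under H1):
    H0: theta = 0 versus H1: theta ~ N(to, so^2), i.e. the posterior of the
    original study under a flat initial prior."""
    m0 = stats.norm.pdf(tr, loc=0.0, scale=sr)                     # f(tr | H0)
    m1 = stats.norm.pdf(tr, loc=to, scale=np.sqrt(so**2 + sr**2))  # f(tr | H1)
    return m0 / m1

def bf01_alpha_beta(tr, sr, to, so, x=1.0, y=1.0):
    """Generalized version with alpha ~ Be(x, y) under H1; the marginal
    likelihood under H1 integrates N(tr | to, so^2/alpha + sr^2) over alpha."""
    m0 = stats.norm.pdf(tr, loc=0.0, scale=sr)
    integrand = lambda a: (stats.norm.pdf(tr, loc=to,
                                          scale=np.sqrt(so**2 / a + sr**2))
                           * stats.beta.pdf(a, x, y))
    m1, _ = quad(integrand, 0.0, 1.0)
    return m0 / m1

# "Labels" example: original study and first replication
print(bf01_alpha_one(0.09, 0.05, 0.21, 0.05))
print(bf01_alpha_beta(0.09, 0.05, 0.21, 0.05))
```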

Hypotheses about the power parameter α
To quantify the compatibility between the original and replication study, researchers may also be interested in testing hypotheses regarding the power parameter α. For example, we may want to test the hypothesis that the data sets are "compatible" and should be completely pooled (H c : α = 1) against the hypothesis that they are incompatible or "different" and the original data should be discounted to some extent (H d : α < 1).
One approach is to assign a point prior H d : α = 0 which represents the extreme position that the original data should be completely discounted. This leads to the issue that for a flat initial prior f (θ) ∝ 1, the power prior with α = 0 is not proper and so the resulting Bayes factor is only defined up to an arbitrary constant. Instead of the flat prior, we may thus assign an uninformative but proper initial prior to θ, for instance, a unit-information prior θ ∼ N(0, κ 2 ) with κ 2 the variance from one (effective) observation (Kass and Wasserman, 1995), as it encodes minimal prior information about the direction or magnitude of the effect size (Best et al., 2021). Updating the unit-information prior by the likelihood of the original data raised to the power of α then leads to a proper power prior, and the resulting Bayes factor is given in (9). An alternative approach that avoids the specification of a proper initial prior for θ is to assign a prior to α under H d . A suitable class of priors is given by H d : α ∼ Be(1, y) with y > 1. The Be(1, y) prior has its highest density at α = 0 and is monotonically decreasing, thus representing the more nuanced position that the original data should only be partially discounted. The parameter y determines the extent of partial discounting and the simple hypothesis H d : α = 0 can be seen as a limiting case when y → ∞. The resulting Bayes factor is given in (10).
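The following sketch (again our own illustrative code, using the unit-information variance κ 2 = 2 introduced later in the example as an assumption) computes the Bayes factor contrasting complete discounting H d : α = 0 against complete pooling H c : α = 1 under a N(0, κ 2 ) initial prior for θ.

```python
import numpy as np
from scipy import stats

def bf_dc_point(tr, sr, to, so, kappa2=2.0):
    """Bayes factor BF_dc contrasting H_d: alpha = 0 (original data discarded,
    theta ~ N(0, kappa2)) against H_c: alpha = 1 (unit-information prior
    updated with the full original data)."""
    # marginal likelihood under H_d: tr ~ N(0, kappa2 + sr^2)
    m_d = stats.norm.pdf(tr, loc=0.0, scale=np.sqrt(kappa2 + sr**2))
    # posterior of theta under H_c after updating N(0, kappa2) with the
    # original data: precision-weighted combination
    prec = 1.0 / kappa2 + 1.0 / so**2
    post_mean = (to / so**2) / prec
    post_var = 1.0 / prec
    # marginal likelihood under H_c: tr ~ N(post_mean, post_var + sr^2)
    m_c = stats.norm.pdf(tr, loc=post_mean, scale=np.sqrt(post_var + sr**2))
    return m_d / m_c

# "Labels" example: original study and first replication
print(bf_dc_point(0.09, 0.05, 0.21, 0.05))
```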

Example "Labels" (continued)
Table 1 displays the results of the proposed hypothesis tests applied to the three replications of the "Labels" experiment. The Bayes factors contrasting H 0 : θ = 0 to H 1 : θ ≠ 0 with a normalized power prior and a uniform prior for the power parameter α under the alternative indicate neither evidence for absence nor presence of an effect in the first replication, but decisive evidence for the presence of an effect in the second and third replication. In all three cases, the Bayes factors are close to the standard replication Bayes factors with α = 1 under the alternative.

Table 1: Hypothesis tests for the replication studies of the "Labels" experiment with original standardized mean difference effect estimate θo = 0.21 and standard error σ o = 0.05. The columns indicate replication effect estimates θr , their standard errors σ r , Bayes factors contrasting the absence of an effect H 0 : θ = 0 to the presence of an effect H 1 : θ ≠ 0 with either a uniform prior α ∼ Be(x = 1, y = 1) or point prior α = 1 under H 1 , and Bayes factors contrasting study incompatibility H d : α < 1 to study compatibility H c : α = 1 with either a complete discounting prior α = 0 or a partial discounting prior α ∼ Be(1, y = 2) under H d .

In order to compute the Bayes factor for testing H d : α = 0 versus H c : α = 1 we need to specify a unit variance for the unit-information prior. A crude approximation for the variance of a standardized mean difference effect estimate is given by Var( θi ) = 4/n i with n i the total sample size of the study, assuming equal sample sizes in both groups (Hedges and Schauer, 2021, p. 5). We may thus set the variance of the unit-information prior to κ 2 = 2 since a total sample size of n i = 2 (at least one observation from each group) is required to estimate a standardized mean difference. Based on this choice, the Bayes factors BF dc ( θr | H d : α = 0) in Table 1 indicate that the data provide substantial and strong evidence for the compatibility hypothesis H c in the first and second replication study, respectively, whereas the data indicate strong evidence for complete incompatibility H d in the third replication study. The Bayes factor BF dc { θr | H d : α ∼ Be(1, y = 2)} in the right-most column, with the partial discounting prior assigned under hypothesis H d , indicates absence of evidence for either hypothesis in the first replication.

The previous analysis is based on a beta prior with y = 2 corresponding to a linearly decreasing density in α; Figure 2 shows the Bayes factor for other values of y. We see that in the realistic range of y = 1 (uniform prior) to y = 100 (almost all mass at α = 0) the results for the first and third replication hardly change, while for the second replication the Bayes factor shifts from anecdotal evidence to stronger evidence for compatibility. To conclude, our analysis suggests that only the second replication was fully successful in the sense that it provides evidence for the presence of an effect while also being compatible with the original study. For the other two replications the conclusions are more nuanced: In the first replication, there is neither evidence for the absence nor the presence of an effect, but substantial evidence for compatibility when a complete discounting prior is used, and no evidence for (in)compatibility when a partial discounting prior is used. Finally, in the third replication there is decisive evidence for an effect, but also strong evidence of incompatibility with the original study.

Bayes factor asymptotics
Some of the Bayes factors in the previous example provided only modest evidence for the test-relevant hypotheses despite the large sample sizes in original and replication study. It is therefore of interest to understand the asymptotic behavior of the proposed Bayes factors. For instance, we may wish to understand what happens when the standard error of the replication study σ r becomes arbitrarily small (through an increase in sample size). Assume that θr is a consistent estimator of its true underlying effect size θ r , so that as the standard error σ r goes to zero, the estimate will converge in probability to the true effect size θ r . The true replication effect size θ r may be different from the true original effect size θ o , for example, because the participant populations from both studies systematically differ.
The limiting Bayes factors for testing the effect size θ from (7) and (8) are then given by expressions involving the Dirac delta function δ(•). Both Bayes factors are hence consistent (Bayarri et al., 2012) in the sense that they indicate overwhelming evidence for the correct hypothesis (i.e., the Bayes factors go to infinity/zero if the true effect size θ r is zero/non-zero). In contrast, the Bayes factors for testing the power parameter α from (9) and (10) converge to positive constants, given in (11) and (12). The amount of evidence one can find for either hypothesis thus depends on the original effect estimate θo , the standard error σ o , and the true effect size θ r . For instance, in the "Labels" experiment we have an original effect estimate θo = 0.21, a standard error σ o = 0.05, and a unit variance κ 2 = 2. The bound (11) is minimized for a true effect size equal to the original effect estimate θ r = θo = 0.21, so the most extreme level we can obtain is lim σr↓0 BF dc (θ r | H d : α = 0) = 1/28. Similarly, the bound (12) is minimized for θ r = θo = 0.21, since then the confluent hypergeometric function term becomes one. Even in a perfectly precise replication study we cannot find more evidence, and hence the posterior probability of H c : α = 1 cannot converge to one.
While the Bayes factors (9) and (10) are inconsistent if the replication data become arbitrarily informative, the situation is different when the original data also become arbitrarily informative (reflected by the standard error σ o also going to zero and the original effect estimate θo converging to its true effect size θ o ). The Bayes factor with H d : α = 0 from (9) is then consistent, as the limit (11) correctly goes to infinity/zero if the true effect size of the replication study θ r is different from/equal to the true effect size of the original study θ o . In contrast, the Bayes factor with H d : α ∼ Be(1, y) from (10) is still inconsistent since it only shows the correct asymptotic behavior when the true effect sizes are unequal (i.e., the Bayes factor goes to infinity) but not when the effect sizes are equal, in which case it is still bounded by B(3/2, y)/B(1, y).

Bayes factor design of replication studies
Now assume that the replication study has not yet been conducted and we wish to plan for a suitable sample size. The design of replication studies should be aligned with the planned analysis (Anderson and Maxwell, 2017) and, if multiple analyses are performed, a sample size may be calculated that guarantees a sufficiently conclusive analysis in each case (Pawel et al., 2023b). In the power prior framework, sample size calculations may be based on either hypothesis testing or estimation of the effect size θ or the power parameter α. Estimation-based approaches have been developed by Shen et al. (2023). Here, we focus on sample size calculations based on Bayes factor hypothesis testing, as this methodology is still lacking.
In the case of testing the effect size θ, Pawel and Held (2022) studied Bayesian design of replication studies based on the Bayes factor (8) with α = 1 under H 1 , i.e., the replication Bayes factor under normality. They obtained closed-form expressions for the probability of replication success under H 0 and H 1 based on which standard Bayesian design can be performed (Weiss, 1997; Gelfand and Wang, 2002; De Santis, 2004; Schönbrodt and Wagenmakers, 2017). For the Bayes factor (7) with α ∼ Be(x, y) under H 1 , closed-form expressions are not available anymore and simulation or numerical integration have to be used for sample size calculations.
For tests related to the power parameter α, there are also closed-form expressions for the probability of replication success based on the Bayes factor (9) with α = 0 under H d . We will now show how these can be derived and used for determining the replication sample size. With some algebra, one can show the identity (13). Denote by m i and v i the mean and variance of θr under hypothesis i ∈ {d, c}. The left-hand side of (13) then follows a scaled non-central chi-squared distribution under both hypotheses. Hence the probability of replication success is given by (14) with an associated non-centrality parameter. To determine the replication sample size, we can now use (14) to compute the probability of replication success at a desired level γ over a grid of replication standard errors σ r , and under either hypothesis H d or H c . The appropriate standard error σ r is then chosen so that the probability for finding correct evidence is sufficiently high under the respective hypothesis, and sufficiently low under the wrong hypothesis. Subsequently, the standard error σ r needs to be translated into a sample size, e.g., for standardized mean differences via the aforementioned approximation n r ≈ 4/σ 2 r .
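As an alternative to the closed-form expressions referred to above (which involve the non-central chi-squared distribution), the design calculation can also be sketched by simulation. The code below is our own illustration: it redefines the bf_dc_point function from the previous sketch so the snippet is self-contained and assumes a strong-evidence threshold of 10, then estimates the probability of finding strong evidence for compatibility under H c and H d for a grid of replication standard errors.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def bf_dc_point(tr, sr, to, so, kappa2=2.0):
    # Bayes factor H_d: alpha = 0 versus H_c: alpha = 1 (unit-information
    # prior), same sketch as above
    m_d = stats.norm.pdf(tr, loc=0.0, scale=np.sqrt(kappa2 + sr**2))
    prec = 1.0 / kappa2 + 1.0 / so**2
    post_mean, post_var = (to / so**2) / prec, 1.0 / prec
    m_c = stats.norm.pdf(tr, loc=post_mean, scale=np.sqrt(post_var + sr**2))
    return m_d / m_c

def success_prob(sr, to, so, under="Hc", level=10.0, nsim=20000, kappa2=2.0):
    """Monte Carlo estimate of the probability of obtaining strong evidence
    for compatibility (BF_dc < 1/level), assuming that the replication
    estimate is generated under H_c or under H_d."""
    if under == "Hc":
        # H_c: alpha = 1, predictive distribution of tr given the original data
        prec = 1.0 / kappa2 + 1.0 / so**2
        mean, var = (to / so**2) / prec, 1.0 / prec + sr**2
    else:
        # H_d: alpha = 0, predictive distribution under the unit-information prior
        mean, var = 0.0, kappa2 + sr**2
    tr = rng.normal(mean, np.sqrt(var), size=nsim)
    bf_dc = bf_dc_point(tr, sr, to, so, kappa2)
    return np.mean(bf_dc < 1.0 / level)

# original study from the "Labels" example, grid of replication standard errors
to, so = 0.21, 0.05
for sr in [0.1, 0.05, 0.025]:
    print(sr, success_prob(sr, to, so, "Hc"), success_prob(sr, to, so, "Hd"))
```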
Figure 3: Probability of replication success as a function of relative variance for the three replications of experiment "Labels" regarded as original study. The arrows point to the relative variance associated with an 80% probability under the respective hypotheses.

Example "Labels" (continued)
Figure 3 illustrates Bayesian design based on the Bayes factor BF dc ( θr | H d : α = 0) testing the power parameter α from (9). The three replication studies from the experiment "Labels" are now regarded as original studies, and each column of the figure shows the corresponding design of future replications.
In each plot, the probability for finding strong evidence for one of the two hypotheses is shown as a function of the relative sample size. In both cases, the probability is computed assuming that either H c (blue) or H d (yellow) is true.
The curves look more or less similar for all three studies. We see from the lower panels that the probability for finding strong evidence for H d is not much affected by the sample size of the replication study; it stays at almost zero under H c , while under H d it increases from about 75% to about 90%. In contrast, the top panels show that the probability for finding strong evidence for H c rapidly increases under H c and seems to level off at an asymptote. Under H d the probability stays below 5% across the whole range.
The arrows in the plots display the required relative sample size to obtain strong evidence with a probability of 80% under the correct hypothesis. We see that original studies with smaller standard errors require smaller relative sample sizes in the replication to achieve the same probability of replication success. Under H c the required relative sample sizes are larger than under H d . However, while the probability of misleading evidence under H c seems to be well controlled under the determined sample size, under H d it stays at roughly 5% for all three studies, even for very large replication sample sizes. Choosing the sample size based on finding strong evidence for H c assuming H c is true thus also guarantees appropriate error probabilities for finding strong evidence for H d in all three studies. At the same time, it seems that the probability for finding misleading evidence for H c cannot be reduced below around 5%, which might be undesirably high for certain applications.

Connection to hierarchical modeling of replication studies
Hierarchical modeling is another approach that allows for the incorporation of historical data in Bayesian analyses; moreover, hierarchical models have previously been used in the replication setting (Bayarri and Mayoral, 2002b,a; Pawel and Held, 2020). We will now investigate how the hierarchical modeling approach is related to the power prior approach in the analysis of replication studies, both in parameter estimation and hypothesis testing.

Connection to parameter estimation in hierarchical models
Assume a hierarchical model where, for study i ∈ {o, r}, the effect estimate θi is normally distributed around a study-specific effect size θ i which itself is normally distributed around an overall effect size θ * . The heterogeneity variance τ 2 determines the similarity of the study-specific effect sizes θ i . The overall effect size θ * is assigned an (improper) flat prior f (θ * ) ∝ k, for some k > 0, which is a common approach in hierarchical modeling of effect estimates (Röver et al., 2021).
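In symbols, this model (a sketch of the description above; the displayed equation (15) of the paper is not reproduced here) reads

θi | θ i ∼ N(θ i , σ 2 i ), θ i | θ * ∼ N(θ * , τ 2 ) for i ∈ {o, r}, f (θ * ) ∝ k,

where θi denotes the effect estimate and θ i the study-specific effect size of study i.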
We show in Appendix A that under the hierarchical model (15) the marginal posterior distribution of the replication-specific effect size θ r is given by (16), that is, a normal distribution whose mean is a weighted average of the replication effect estimate θr and the original effect estimate θo . The amount of shrinkage of the replication towards the original effect estimate depends on how large the replication standard error σ r is relative to the heterogeneity variance τ 2 and the original standard error σ o . There exists a correspondence between the posterior for the replication effect size θ r from the hierarchical model (16) and the posterior for the effect size θ under the power prior approach. Specifically, note that under the power prior and for a fixed power parameter α, the posterior of the effect size θ is given by (17). The hierarchical posterior (16) and the power prior posterior (17) thus match if and only if α and τ 2 satisfy a one-to-one relationship, which was first shown by Chen and Ibrahim (2006). For instance, a power prior model with α = 1 corresponds to a hierarchical model with τ 2 = 0, and a hierarchical model with τ 2 → ∞ corresponds to a power prior model with α ↓ 0. In between these two extremes, however, α has to be interpreted as a relative measure of heterogeneity since the transformation to τ 2 involves a scaling by the variance σ 2 o of the original effect estimate. For this reason, there is a direct correspondence between α and the popular relative heterogeneity measure I 2 (Higgins and Thompson, 2002) computed from τ 2 and the variance of the original estimate σ 2 o , with an inverse of the same functional form. Figure 4 shows α and the corresponding τ 2 and I 2 values which lead to matching posteriors.
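For reference, the matching condition can be derived directly under the model above (our derivation sketch, consistent with the scaling described in the text): with a flat prior on θ * , integrating out θ o and θ * gives θ r | θo ∼ N( θo , σ 2 o + 2τ 2 ), whereas the power prior with fixed α implies θ | θo , α ∼ N( θo , σ 2 o /α). The two posteriors therefore match exactly when

σ 2 o /α = σ 2 o + 2τ 2 , that is, α = σ 2 o /(σ 2 o + 2τ 2 ) = (1 − I 2 )/(1 + I 2 ) with I 2 = τ 2 /(σ 2 o + τ 2 ),

and the inverse I 2 = (1 − α)/(1 + α) indeed has the same functional form.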
It has remained unclear whether or not a similar correspondence exists in cases where α and τ 2 are random and assigned prior distributions. Here we confirm that there is indeed such a correspondence.
Specifically, the marginal posterior of the replication effect size θ r from the hierarchical model matches with the marginal posterior of the effect size θ from the power prior model if the prior density functions satisfy condition (20) for every τ 2 ≥ 0, see Appendix B for details. Importantly, the correspondence condition (20) involves a scaling by the variance of the original effect estimate σ 2 o , meaning that also in this case α acts similar to a relative heterogeneity parameter. This can also be seen from the correspondence condition between α and I 2 = τ 2 /(σ 2 o + τ 2 ), which can be derived in exactly the same way as the correspondence between α and τ 2 . That is, the marginal posteriors of θ and θ r match if the prior density functions satisfy condition (21).
Interestingly, conditions (21) and (20) imply that a beta prior on the power parameter α ∼ Be(x, y) corresponds to a generalized F prior on the heterogeneity τ 2 ∼ GF(y, x, 2/σ 2 o ) and a generalized beta prior on the relative heterogeneity I 2 ∼ GBe(y, x, 2), see Appendix C for details on both distributions.
This connection provides a convenient analytical link between hierarchical modeling and the power prior framework, as beta priors for α are almost universally used in applications of power priors.
The result also illustrates that the power prior framework seems unnatural from the perspective of hierarchical modeling since it corresponds to specifying priors on the I 2 scale rather than on the τ 2 scale. The same prior on I 2 will imply different degrees of informativeness on the τ 2 scale for original effect estimates θo with different variances σ 2 o since I 2 is entangled with the variance of the original effect estimate.
Figure 5 provides three examples of matching priors using the variance of the original effect estimate from the "Labels" experiment for the transformation to the heterogeneity scale τ 2 . The top row of Figure 5 shows that the uniform prior on α corresponds to a GF(1, 1, 2/σ 2 o ) prior on τ 2 , which resembles the uniform shrinkage prior of Daniels (1999). This prior has the highest density at τ 2 = 0 but still gives some mass to larger values of τ 2 . Similarly, on the scale of I 2 the prior slightly favors smaller values. The middle row of Figure 5 shows that the α ∼ Be(2, 1) prior (indicating more compatibility between original and replication than the uniform prior) gives even more mass to small values of τ 2 and I 2 , and also has the highest density at τ 2 = 0 and I 2 = 0. In contrast, the bottom row of Figure 5 shows that the α ∼ Be(1, 2) prior (indicating less compatibility between original and replication than the uniform prior) gives less mass to small τ 2 and I 2 , and has zero density at τ 2 = 0 and I 2 = 0.

Figure 5: Matching priors for the heterogeneity τ 2 ∼ GF(y, x, 2/σ 2 o ) (left), the relative heterogeneity I 2 = τ 2 /(σ 2 o + τ 2 ) ∼ GBe(y, x, 2) (middle), and the power parameter α ∼ Be(x, y) (right) that lead to matching marginal posteriors for effect sizes θ and θ r . The variance of the original effect estimate σ 2 o = 0.05 2 from the "Labels" experiment is used for the transformation to the heterogeneity scale τ 2 .

Connection to hypothesis testing in hierarchical models
Two types of hypothesis tests can be distinguished in the hierarchical model: tests for the overall effect size θ * and tests for the heterogeneity variance τ 2 . In all cases, computations of marginal likelihoods of the form (22) with i ∈ {j, k} are required for obtaining Bayes factors BF jk ( θr ) = f ( θr | H j )/f ( θr | H k ) which quantify the evidence that the replication data θr provide for a hypothesis H j over a competing hypothesis H k .
Under each hypothesis a joint prior for τ 2 and θ * needs to be assigned.
As with parameter estimation, it is of interest to investigate whether there is a correspondence with the hypothesis tests from the power prior framework in Section 2.2. For two tests to match, one needs to assign priors to τ 2 and θ * and, respectively, to α and θ so that the marginal likelihood (22) equals the marginal likelihood from the power prior model (6) under both test-relevant hypotheses.
Concerning the generalized replication Bayes factor from (7) testing H 0 : θ = 0 versus H 1 : θ ≠ 0, one can show that it matches with the Bayes factor contrasting H 0 : θ * = 0 versus H 1 : θ * ≠ 0 with suitably chosen priors for the replication data in the hierarchical framework. The Bayes factor thus compares the likelihood of the replication data under the hypothesis H 0 postulating that the global effect size θ * is zero and that there is no effect size heterogeneity, relative to the likelihood of the data under the hypothesis H 1 postulating that θ * follows the posterior based on the original data and an initial flat prior for θ * along with a generalized F prior on the heterogeneity τ 2 . Setting the heterogeneity to τ 2 = 0 under H 1 instead produces the replication Bayes factor under normality from (8).
The Bayes factor (9) that tests complete discounting H d : α = 0 versus complete compatibility H c : α = 1 can be obtained in the hierarchical framework by contrasting two hypotheses that both assume no heterogeneity (so that the hierarchical model collapses to a fixed effects model). Hence, the Bayes factor compares the likelihood of the replication data under the initial unit-information prior relative to the likelihood of the replication data under the unit-information prior updated by the original data. Although this particular test relates to the power parameter α in the power prior model, it is surprisingly unrelated to testing the heterogeneity variance τ 2 in the hierarchical model.
The Bayes factor (10) testing H d : α < 1 versus H c : α = 1 using the partial discounting prior H d : α ∼ Be(1, y) corresponds to testing H d : τ 2 > 0 versus H c : τ 2 = 0 with suitably matched priors. The test for compatibility via the power parameter α is thus equivalent to a test for compatibility via the heterogeneity τ 2 (to which a generalized F prior is assigned) after updating a flat prior for θ * with the data from the original study.

Bayes factor asymptotics in the hierarchical model
Like the original test of H c : α = 1 versus H d : α ∼ Be(1, y), the corresponding test of τ 2 is inconsistent in the sense that when the standard errors from both studies go to zero (σ o ↓ 0 and σ r ↓ 0) and their true effect sizes are equal (θ o = θ r ), the Bayes factor BF dc does not go to zero (which would indicate overwhelming evidence for H c : τ 2 = 0) but converges to a positive constant. It is, however, possible to construct a consistent test for H c : τ 2 = 0 when we assign a different prior to τ 2 under H d : τ 2 > 0.
For instance, when we assign an inverse gamma prior H d : τ 2 ∼ IG(q, r) with shape q and scale r, the Bayes factor can be expressed in terms of IG(• | q, r), the density function of the inverse gamma distribution. The limiting Bayes factor as σ o , σ r ↓ 0 then correctly goes to zero/infinity when the effect sizes θ r and θ o are equal/different. To understand why the test with H d : τ 2 ∼ IG(q, r) is consistent, but the original test with H d : α ∼ Be(1, y) is not, one can transform the consistent test on τ 2 to the corresponding test on α. The inverse gamma prior for τ 2 implies a prior for α with density (23). The Bayes factor contrasting H c : α = 1 versus H d : α < 1 with prior (23) assigned to α under H d will thus produce a consistent test. The prior is shown in Figure 6 for different parameters q and r and original standard errors σ o . We see that the prior depends on the standard error of the original effect estimate σ o : the smaller σ o , the more the prior is shifted towards zero. For example, the standard error σ o = 0.05 from the "Labels" experiment leads to priors that are almost indistinguishable from a point mass at α = 0. The prior thus "unscales" α from the original standard error σ o , thereby leading to a consistent test for study compatibility and resolving the inconsistency property of the beta prior.

Discussion
We showed how the power prior framework can be used for design and analysis of replication studies.
The approach supplies analysts with a suite of methods for assessing effect sizes and study compatibility. Both aspects can be tackled from an estimation or a hypothesis testing perspective, and the choice between the two is primarily philosophical. We believe that both perspectives provide valuable inferences that complement each other. Visualizations of joint and marginal posterior distributions are highly informative in terms of the available uncertainty. However, the power parameter α is an abstract quantity disconnected from actual scientific phenomena. Testing hypotheses of complete discounting versus complete pooling may therefore be more intuitive for researchers. Both approaches also suffer from similar problems: If the original and replication data are in perfect agreement, the posterior distribution of α hardly changes from the prior. For example, for the commonly used uniform prior α ∼ Be(x = 1, y = 1), we can at best obtain an α | θr ∼ Be(x + 1/2 = 3/2, y = 1) posterior (Pawel et al., 2023a). This means that for a "compatibility threshold" of, say, 0.8, we can never have a posterior probability higher than Pr(α > 0.8 | θr ) = 0.28, and for a threshold of 0.9 it is even lower, Pr(α > 0.9 | θr ) = 0.15. The fact that the Bayes factors for testing hypotheses about α remain bounded even for perfectly agreeing studies illustrates the same problem from a different perspective.
We also showed how the power prior approach is connected to hierarchical modeling, and gave conditions under which posterior distributions and hypothesis tests correspond between normal power prior models and normal hierarchical models. This connection provides an intuition for why, even with highly precise and compatible original and replication studies, one can hardly draw conclusive inferences about the power parameter α; the power parameter α has a direct correspondence to the relative heterogeneity I 2 , and an indirect correspondence to the heterogeneity variance τ 2 in a hierarchical model. Making inferences about a heterogeneity variance from two studies alone seems like a virtually impossible task since the "unit of information" is the number of studies and not the number of samples within a study. Moreover, Bayes factor hypothesis tests related to α have the undesirable asymptotic property of inconsistency if a beta prior is assigned to α. This is because the prior scales with the variance of the original data, just as a beta prior for I 2 would in a hierarchical model. The identified link may also have computational advantages, e.g., it may be possible to estimate power prior models using hierarchical model estimation procedures, or vice versa, but more research is needed on the connection in more complex situations that depart from normality assumptions.
Which of the two approaches should data analysts use in practice? We believe that the choice should be primarily guided by whether the hierarchical or the power prior model is scientifically more suitable for the studies at hand. If data analysts deem it scientifically plausible that the studies' underlying effect sizes are connected via an overarching distribution, then the hierarchical model may be more suitable, particularly because the approach naturally generalizes to more than two studies. On the other hand, if data analysts simply want to downweight the original study's contribution depending on the observed conflict, the power prior approach might be more suitable. The identified limitations for inferences related to the power parameter α should, however, be kept in mind when beta priors are assigned to the power parameter.
There are also situations where the hierarchical and power prior frameworks can be combined, for example, when multiple replications of a single original study are conducted (multisite replications).
In that case, one may model the replication effect estimates in a hierarchical fashion but link their overall effect size to the original study via a power prior. Multisite replications are thus the opposite of the usual situation in clinical trials, where several historical "original" studies are available but only one current "replication" study (Gravestock and Held, 2019).
Another commonly used Bayesian approach for incorporating historical data is the robust mixture prior, i.e., a prior which is a mixture of the posterior based on the historical data and an uninformative prior distribution (Schmidli et al., 2014). We conjecture that inferences based on robust mixture priors can be reverse-engineered within the framework of power priors through Bayesian model averaging over two hypotheses about the power parameter; however, more research is needed to explore the relationship between the two approaches.
The proposed methods are based on the standard meta-analytic assumption of approximate normality of effect estimates with known variances. This makes our methodology applicable to a wide range of effect sizes that may arise from different data models. However, in some situations this assumption may be inadequate, for example, when studies have small sample sizes. In this case, the methods could be modified to use the exact likelihood of the data (e.g., binomial or t), as in Bayarri and Mayoral (2002b), who used a t likelihood. The methodology would then need to be adapted for each effect size type, and using the exact likelihood typically requires numerical methods to evaluate integrals that are available analytically under normality. Future work may therefore examine specific data models in more detail to obtain more precise inferences.
We primarily focused on the evaluation of (objective) Bayesian properties of the proposed methods.
Further work is needed to evaluate their frequentist properties, for example, with a carefully planned simulation study (Morris et al., 2019). As in other recent studies (Muradchanian et al., 2021; Freuli et al., 2022), it would be interesting to simulate the realistic scenario of questionable research practices and publication bias affecting the original study to see how the adaptive downweighting of power priors can account for the inflated original results.

Figure 2: Sensitivity of the Bayes factor BF dc { θr | H d : α ∼ Be(1, y)} with respect to the parameter y of the partial discounting prior under H d .

Figure 4: The heterogeneity τ 2 and relative heterogeneity I 2 = τ 2 /(τ 2 + σ 2 o ) of a hierarchical model versus the power parameter α from a power prior model which lead to matching posteriors for the effect sizes θ and θ r . The variance of the original effect estimate σ 2 o = 0.05 2 from the "Labels" experiment is used for the transformation to the heterogeneity scale τ 2 .

Figure 6: Prior for the power parameter α implied by an inverse gamma prior H d : τ 2 ∼ IG(q, r) in a hierarchical model with consistent test for H c : τ 2 = 0 versus H d : τ 2 > 0.