Optimal scaling of random walk Metropolis algorithms using Bayesian large-sample asymptotics

High-dimensional limit theorems have proved useful for deriving tuning rules that find the optimal scaling in random walk Metropolis algorithms. The assumptions under which weak convergence results are proved are, however, restrictive: the target density is typically assumed to be of a product form. Users may thus doubt the validity of such tuning rules in practical applications. In this paper, we shed light on optimal scaling problems from a different perspective, namely a large-sample one. This allows us to prove weak convergence results under realistic assumptions and to propose novel parameter-dimension-dependent tuning guidelines. The proposed guidelines are consistent with the previous ones when the target density is close to having a product form; when it is not, our results highlight that the correlation structure has to be accounted for to avoid a performance deterioration, and they justify the use of a natural (asymptotically exact) approximation to the correlation matrix that can be employed for the very first algorithm run.


Random walk Metropolis algorithms
Consider a Bayesian statistical framework where one wants to sample from an intractable posterior distribution π to perform inference. This posterior distribution, also called the target distribution in a sampling context, is considered here to be that of model parameters θ ∈ Θ = ℝ^d, given a data sample of size n. We assume that π has a probability density function (PDF) with respect to the Lebesgue measure; to simplify, we will also use π to denote this density function. Tools called random walk Metropolis (RWM) algorithms (Metropolis et al. 1953), which are Markov chain Monte Carlo (MCMC) methods, can be employed to sample from π. An iteration of such an algorithm can be outlined as follows: given a current value of the chain θ, a proposal for the next one is made using θ′ := θ + Sε, ε ∼ ϕ(·; 0, 1), where S is a scaling matrix and ϕ(·; 0, 1) denotes the standard normal distribution; this proposal is accepted with probability α(θ, θ′) := min{1, π(θ′)/π(θ)}; if the proposal is rejected, the chain remains at the same state.
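For concreteness, the iteration just described can be sketched as follows. This is a minimal illustration in Python rather than the authors' R code; the function names and the log-scale acceptance test are ours, not the paper's.

```python
import numpy as np

def rwm(log_target, theta0, S, n_iter, rng):
    """Random walk Metropolis: propose theta' = theta + S @ eps, eps ~ N(0, I),
    and accept with probability min{1, pi(theta') / pi(theta)}."""
    d = theta0.shape[0]
    chain = np.empty((n_iter + 1, d))
    chain[0] = theta0
    log_pi = log_target(theta0)
    accepted = 0
    for k in range(n_iter):
        prop = chain[k] + S @ rng.standard_normal(d)
        log_pi_prop = log_target(prop)
        # accept/reject on the log scale for numerical stability
        if np.log(rng.uniform()) < log_pi_prop - log_pi:
            chain[k + 1], log_pi, accepted = prop, log_pi_prop, accepted + 1
        else:
            chain[k + 1] = chain[k]
    return chain, accepted / n_iter
```

For example, targeting a standard bivariate Gaussian with S = (2.38/√2)·1 yields an acceptance rate in the range discussed later in the paper.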

Optimal scaling problems
Often, S = λ1, where λ is a positive constant to be determined and 1 denotes the identity matrix. In this case, λ is the only free parameter. Yet, this parameter has to be tuned carefully because small values lead to tiny movements of the Markov chain simulated by RWM, while large values induce high rejection rates, both being undesirable. Finding the optimal value is thus a nontrivial problem. The last 20 years have witnessed significant progress in the line of research studying such problems, called optimal scaling problems, whether for RWM (Roberts et al. 1997; Bédard 2007; Sherlock and Roberts 2009; Durmus et al. 2017; Yang et al. 2020) or for other algorithms including a scaling parameter (Roberts and Rosenthal 1998; Bédard et al. 2012; Beskos et al. 2013). In all these articles, the authors derive tuning rules based on analyses in the high-dimensional regime d → ∞.
In the seminal work of Roberts et al. (1997) on RWM, the tuning rule for λ follows from the analysis of a Langevin diffusion which is the limiting process of a re-scaled continuous-time version of RWM. The rule is remarkably simple: set λ = ℓ/√d and tune ℓ so that the acceptance rate is 0.234. The resulting optimal value is universal, in the sense that it minimizes the stationary integrated autocorrelation time of any function of the limiting process. The tuning rule is, however, derived under the assumption that π(θ) = ∏_{i=1}^d f(θ_i), where θ := (θ_1, …, θ_d) and f satisfies some regularity conditions. Assuming independent and identically distributed (IID) parameters considerably reduces the scope of applicability. One may be tempted to search for transformations/standardizations yielding IID parameters to expand the scope, but they exist only in specific situations (e.g. Gaussian target distributions). It will be seen that one of the main contributions of this paper is to provide formal and realistic conditions under which RWM algorithms targeting π behave similarly to RWM targeting a Gaussian distribution with specific mean and covariance in an asymptotic regime. Our results thus allow us to demonstrate that standardizing the parameters to expand the scope of applicability of the results of Roberts et al. (1997) is valid under regularity conditions, but only asymptotically.
The scope has been expanded otherwise in the past. For example, Bédard (2007) and Durmus et al. (2017) proved that the result is robust to a departure from the identically distributed part of the assumption. Yang et al. (2020) proved that the result is valid under assumptions that are more general but difficult to verify. Empirical results in realistic scenarios where the IID assumption is not satisfied show that an acceptance rate of 0.234 is close to being optimal in these scenarios (e.g. Shang et al. 2015; Zhang et al. 2016; Gagnon et al. 2021), which can be seen as another demonstration of the robustness of the original results.

Contributions
In this paper, we provide an alternative explanation of these empirical results in realistic scenarios, based on Bayesian large-sample theory. To achieve this, we revisit optimal scaling problems in RWM by exploiting important results underpinning that theory. In particular, we prove a weak convergence result as n → ∞, with d being fixed, and derive tuning rules from it. While this asymptotic regime is ubiquitous in statistics, it is only recently that it was found useful in the analysis of MCMC algorithms (Deligiannidis et al. 2018;Gagnon 2021;Schmon et al. 2021a). Intuitively, if n is large enough and π is a posterior distribution resulting from a sufficiently regular Bayesian model, then π is close to a concentrating Gaussian, implying that RWM algorithms targeting π behave like those targeting a Gaussian. This idea is formalized in Sect. 2.
The proximity between π and a concentrating Gaussian can be established by virtue of Bernstein-von Mises theorems (see, e.g. Theorem 10.1 in Van der Vaart 2000, and Kleijn and Van der Vaart 2012). Verifying that a Bayesian model is sufficiently regular is thus closely related to verifying that the assumptions of such theorems are satisfied and has a priori nothing to do with whether the parameters are IID or not. Instead, such theorems rely on local asymptotic normality, meaning that a certain function of the log-likelihood allows for a quadratic expansion (usually) around some "true" parameter value θ0. If the posterior concentrates around θ0, the quadratic expansion of the log-likelihood implies an asymptotically Gaussian posterior; this happens under weak conditions, such as IID data points with regularity conditions on the distribution and positive prior mass around θ0. The results in Roberts et al. (1997) actually rely on a similar quadratic expansion, but one that requires imposing an IID constraint on the parameters instead. We discuss in more detail the resemblance between both expansions in Sect. 3, allowing us to establish a connection between our guidelines and theirs.
An advantage of the approach adopted in this paper to analyse MCMC algorithms is that a lot is known about which models are sufficiently regular (e.g. LeCam 1953; Bickel and Yahav 1969; Johnson 1970; Ghosal et al. 1995; Van der Vaart 2000; Kleijn and Van der Vaart 2012). Many models based on the exponential family are, for instance, regular enough. A notable example of such a model, namely Bayesian logistic regression, is studied in Sect. 4. We finish this section by outlining our main contributions: (i) presentation of a large-sample asymptotic framework and realistic assumptions under which a weak convergence of RWM is proved (Sect. 2); (ii) an extensive analysis of the limiting RWM algorithm (Sect. 3) that allows us to: (a) provide dimension-dependent optimal tuning guidelines, (b) show that the "0.234" rule of thumb is asymptotically valid, from the point of view adopted in this paper, in certain situations, and that this rule is in fact quite robust to a departure from the IID assumption when S = λ1, without providing any guarantee regarding the algorithm performance; the latter deteriorates when there is a significant departure from the IID assumption and S = λ1 because this scaling matrix does not account for the correlation between the parameters (Sect. 3); (iii) justification of the use of natural asymptotically exact approximations to the covariance matrix, such as the inverse Fisher information or its observed version, that can be employed for the very first algorithm run to avoid a deterioration of performance.
Our analysis is mainly based on an efficiency measure called the expected squared jumping distance (ESJD), defined as the average squared distance between two consecutive states (or a function of them). Optimizing this measure does not yield a universally optimal scaling because it is optimal for one function and thus not necessarily for all functions. Typically, the ESJD is optimized for the identity function; this strategy has been shown on many occasions in the literature to lead to reliable conclusions (see, e.g., Yang et al. (2020)). This choice also allows us to establish a formal connection between our results and those of Roberts et al. (1997) in Sect. 3.
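As an illustration, the ESJD of a simulated chain can be estimated by averaging squared jumps. This is a sketch in Python; the function name is ours, and the optional matrix W mirrors the weighted norms used later in the paper.

```python
import numpy as np

def esjd(chain, W=None):
    """Empirical expected squared jumping distance of a chain of shape
    (n_steps + 1, d): the average of ||theta_{k+1} - theta_k||^2.

    W: optional positive definite matrix defining the weighted norm
    ||v||_W^2 = v^T W v (identity if omitted)."""
    diffs = np.diff(chain, axis=0)
    if W is None:
        return np.mean(np.sum(diffs ** 2, axis=1))
    return np.mean(np.einsum('ki,ij,kj->k', diffs, W, diffs))
```

Rejected proposals contribute zero jumps, so the estimate automatically balances jump size against acceptance.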

Notation and framework
We first note that, within our framework, the Bayesian posterior π depends on n; therefore, from now on, the target will be denoted by π_n. The target, being a posterior distribution, in fact depends on a set of observations that will be denoted by y_{1:n} := (y_1, …, y_n) ∈ ∏_{i=1}^n Y_i. We make this dependence implicit to simplify. We assume y_{1:n} to be the first n components of a realization of some unknown data generating process P_Y on ∏_{i=1}^∞ Y_i. Through its dependence on the data points, the distribution π_n is a random measure on ℝ^d. Consequently, everything derived from it (or in fact directly from the data points) is random, such as integrals with respect to π_n and the distributions of Markov chains produced by RWM targeting π_n. In the following, we make statements about the convergence of such mathematical objects in P_Y-probability. We now briefly describe what we mean by this and refer to Schmon (2020) and Schmon et al. (2021b, Section S1) for more details on random measures and such convergences in an MCMC context. We say, for instance, that an integral with respect to π_n, denoted by I_n, converges to I in P_Y-probability when P_Y(|I_n − I| > ε) → 0 for all ε > 0. A Markov chain produced by RWM targeting π_n is seen to weakly converge in P_Y-probability towards another Markov chain when the finite-dimensional distributions converge in P_Y-probability, where the latter can be seen as random integrals involving π_n and random transition kernels.
The matrix S will also depend on n and will thus be written S_n. We use ϕ(θ; μ, Σ) to denote a Gaussian density with argument θ, mean μ, and covariance matrix Σ, and use Φ to denote the cumulative distribution function of a standard normal; I(θ) and θ̂_n denote the Fisher information evaluated at θ and a parameter estimator, respectively. Finally, the norm of a vector μ with respect to a matrix Σ is denoted by ‖μ‖²_Σ := μ^T Σ μ. We simply write ‖μ‖² when Σ = 1.

Large-sample asymptotics of RWM
We first present three conditions under which a weak convergence of RWM can be established, and next, our result. The first condition is that a Bernstein-von Mises theorem holds, i.e. the concentration of the PDF π n around the true model parameter value θ 0 , as n increases, with a shape that resembles that of a Gaussian. For simplicity, we only consider the case where the Bayesian model is well specified, but our result remains valid under model misspecification; however, in this case, θ 0 is some fixed parameter value and the covariance matrix of the Gaussian is different (see Kleijn and Van der Vaart 2012).
Assumption 1 (Bernstein-von Mises theorem) As n → ∞, we have the following convergences in P_Y-probability:

∫ |π_n(θ) − ϕ(θ; θ̂_n, n^{-1} I(θ0)^{-1})| dθ → 0   and   θ̂_n → θ0.

If the posterior concentrates at a rate of 1/√n, the scaling of the random walk needs to decrease at the same rate. Note that this is a requirement analogous to that in Roberts et al. (1997); in that paper, the scaling diminishes with d like 1/√d. In both cases, it accommodates the reality that, as n or d increases, the acceptance rate rapidly deteriorates if the scaling is not suitably reduced.
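The concentration described by Assumption 1 can be illustrated numerically on a toy conjugate model. The sketch below is our construction, not from the paper: a Bernoulli model with a uniform prior, an idealized success count s = nθ0, and a grid approximation of the total variation distance between the Beta posterior and its Gaussian approximation, which should shrink as n grows.

```python
import math

def beta_logpdf(x, a, b):
    """Log density of Beta(a, b) at x in (0, 1)."""
    return ((a - 1) * math.log(x) + (b - 1) * math.log(1 - x)
            + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))

def normal_pdf(x, m, v):
    return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

def tv_posterior_vs_gaussian(n, theta0=0.3, grid=20000):
    """Approximate TV distance between the Beta posterior after n idealized
    Bernoulli(theta0) observations (uniform prior) and the Gaussian
    N(theta_hat, theta_hat (1 - theta_hat) / n) suggested by Assumption 1."""
    s = theta0 * n                       # idealized success count
    a, b = 1 + s, 1 + n - s              # Beta(1, 1) prior -> Beta posterior
    theta_hat = s / n
    v = theta_hat * (1 - theta_hat) / n  # inverse Fisher information over n
    h = 1.0 / grid
    tv = 0.0
    for i in range(1, grid):
        x = i * h
        tv += abs(math.exp(beta_logpdf(x, a, b)) - normal_pdf(x, theta_hat, v)) * h
    return 0.5 * tv
```

Comparing, say, n = 50 to n = 2000 shows the total variation distance decreasing toward zero, in line with the posterior becoming asymptotically Gaussian.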
The scaling matrix is more precisely considered here to be of the following form: S n = (λ/ √ n)M n , with M n a matrix that is allowed to depend on n (and the data, but this dependence is made implicit to simplify the notation). The second assumption is now presented.
Assumption 2 (Proposal scaling) The proposal is scaled as follows: S_n = (λ/√n) M_n, and there exists a matrix M such that M_n M_n^T → M M^T in P_Y-probability, where we say that a matrix converges in probability whenever all entries converge in probability.
A choice of matrix M_n that satisfies Assumption 2 is the identity matrix 1. In the following, it will be seen that choosing M_n to be the result of a Cholesky decomposition of I(θ̂_n)^{-1}, so that M_n M_n^T = I(θ̂_n)^{-1}, may be preferable, depending on the strength of the correlation between the parameters. When the correlation is significant, the desirable property is that M_n M_n^T → M M^T = I(θ0)^{-1} in P_Y-probability, which is often the case for regular models when M_n M_n^T = I(θ̂_n)^{-1}. Note that other choices of matrices M_n may have this property. For instance, it may be valid to choose M_n to be the result of a Cholesky decomposition of the inverse observed information matrix instead.
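In code, such a preconditioned proposal matrix can be sketched as follows (Python rather than the authors' R; names are ours, and `inv_info` stands for whichever inverse-information estimate is used):

```python
import numpy as np

def proposal_matrix(inv_info, lam, n):
    """S_n = (lam / sqrt(n)) * M_n, with M_n the lower Cholesky factor of an
    estimated inverse information matrix, so that M_n M_n^T equals that matrix."""
    M_n = np.linalg.cholesky(inv_info)   # M_n @ M_n.T == inv_info
    return (lam / np.sqrt(n)) * M_n

def propose(theta, S_n, rng):
    """One RWM proposal theta' = theta + S_n @ eps, eps ~ N(0, I)."""
    return theta + S_n @ rng.standard_normal(theta.shape[0])
```

The Cholesky factor is computed once; each iteration then only costs one extra matrix-vector product.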
Given that the target distribution concentrates and the proposal scaling decreases, we need to standardize the Markov chains simulated by RWM to obtain a non-trivial limit. For each time step, we consider the transformation z_n := n^{1/2}(θ_n − θ̂_n). The proposals after the transformation are thus z′_n = z_n + λ M_n ε, and the resulting Markov chains have a stationary PDF π_n^Z which is such that π_n^Z(z_n) = π_n(θ̂_n + n^{-1/2} z_n)/n^{d/2}. This implies that the proposals are sampled from a Gaussian with a non-vanishing scaling, and the stationary distribution behaves like a Gaussian with mean 0 and covariance I(θ0)^{-1} as n → ∞. Let Ξ_n := (Z_{k,n})_{k≥0} be such a standardized Markov chain, with Z_{k,n} being the state of the chain after k iterations.
An asymptotic result that we prove is the convergence of Ξ_n towards Ξ := (Z_k)_{k≥0}, a Markov chain simulated by a RWM algorithm targeting a Gaussian with mean 0 and covariance I(θ0)^{-1} using proposals given by z′ = z + λMε.
To obtain the result, we also need the following assumption.

Assumption 3 (Stationarity) The chains start in stationarity.

If this is not the case, the result generally still holds (at least approximately), but for subchains formed of states with iteration indices larger than a certain threshold. Indeed, the chains produced by RWM are irreducible and typically aperiodic (they are if there are positive probabilities of rejecting proposals); therefore, they are typically ergodic (Tierney 1994). This implies that the chains typically reach stationarity (at least approximately) after a large enough number of iterations.
We are now ready to present the main theoretical results of this paper.
Theorem 1 Under Assumptions 1, 2 and 3, we have the following convergences in P_Y-probability: (i) Ξ_n converges weakly to Ξ; (ii) the expected acceptance probability converges to that of the limiting chain Ξ; (iii) the ESJD of Ξ_n converges to that of Ξ, denoted by ESJD(λ, M).

The proof of Theorem 1 and the proofs of all the following theoretical results are deferred to Appendix A. Note that, as shown in the proof, Result (iii) holds under a more general, but more technical, assumption.

Tuning guidelines and analysis of the limiting RWM
We first present in Sect. 3.1 special cases of the limiting ESJD resulting from specific choices for M; these special cases will be seen to suggest tuning guidelines. Subsequently, we turn to an extensive analysis of the limiting RWM in Sect. 3.2 showing the relevance of these guidelines, but also the robustness of the 0.234 rule when M = 1. An interesting feature of the proposed guidelines is that they are consistent with this rule. An asymptotic connection with the results of Roberts et al. (1997) is established in Sect. 3.3.

Tuning guidelines
In the same spirit as Roberts et al. (1997), who optimize the speed measure of their limiting diffusion as a proxy, we propose here to optimize ESJD(λ, M) with respect to the tuning parameter λ, for a given M. There exists a simple expression for ESJD(λ, M) for the typical choice M = 1 or when M results from a Cholesky decomposition of I(θ0)^{-1}, i.e. when MM^T = I(θ0)^{-1}. The expressions are provided in Corollary 1, along with the expected acceptance probabilities associated with these special cases of M.
Corollary 1 Let ε ∼ ϕ(·; 0, 1). When M = 1,

ESJD(λ, 1) = E[2λ² ‖ε‖²_{I(θ0)} Φ(−(λ/2)‖ε‖_{I(θ0)})]   (1)

and the expected acceptance probability is E[2Φ(−(λ/2)‖ε‖_{I(θ0)})]. When M is such that MM^T = I(θ0)^{-1},

ESJD(λ, M) = E[2λ² ‖ε‖² Φ(−(λ/2)‖ε‖)]   (2)

and the expected acceptance probability is E[2Φ(−(λ/2)‖ε‖)].

In general, expressions (1) and (2) in Corollary 1 cannot be optimized analytically, but can be approximated efficiently using independent Monte Carlo sampling, and thus numerically optimized using the resulting approximations. We note that (1) and (2) coincide when I(θ0) = 1 and that, in general, (1) depends on I(θ0), while (2) does not. This reveals that the value of λ maximizing (1) is similar to that maximizing (2) when the model parameters are close to being IID, but is expected to be different otherwise. More precisely, it is expected that the value of λ maximizing (1) is small when the parameters are strongly correlated, yielding inefficient RWM algorithms; this is confirmed in Sect. 3.2. Corollary 1 also reveals that, when M is such that MM^T = I(θ0)^{-1}, the optimal value for λ is invariant to the covariance structure. In other words, Corollary 1 suggests the following practical guideline: set M_n such that M_n M_n^T = I(θ̂_n)^{-1}, for instance through a Cholesky decomposition. Aiming to match the proposal covariance to the target covariance has a long history in MCMC (see, e.g., Haario et al. (2001) in a context of adaptive algorithms). To exactly match the target covariance, S_n is typically set to S_n = (λ/√n)1 and trial runs are performed to estimate the covariance. This may turn out to be ineffective when RWM with this choice of scaling matrix performs poorly. The guideline proposed here provides an alternative: while the matrix used to build S_n does not correspond to the target covariance, it is asymptotically equivalent to it (under the assumptions mentioned in Sect. 2); the advantage is that this alternative can be implemented for the very first algorithm run.
In Table 1, we present the results of a numerical optimization of ESJD(λ, M) when λ = ℓ/√d and M is such that MM^T = I(θ0)^{-1}, based on Monte Carlo samples of size 10,000,000 and a grid search, for several values of d. The optimization is thus with respect to ℓ, and the optimal value is denoted by ℓ̂. Note that we have observed empirically that optimizing the effective sample size (ESS) yields similar results. Note also that the code to produce all numerical results is available online. In Table 1, we also report the associated optimal acceptance rates. The parametrization λ = ℓ/√d allows us to establish a connection with the results of Roberts et al. (1997) in Sect. 3.3. The existence of such a connection is highlighted by the values of the optimal acceptance rates for large values of d. In Sect. 3.3, we establish that ESJD converges as d → ∞ to the same expression which is optimized in Roberts et al. (1997) and which leads within their framework to an optimal acceptance rate of 23.38%. From this result, we prove that the asymptotically optimal acceptance rate derived within our framework is 23.38% as well. What is remarkable is that we not only retrieve within our framework the same value as Roberts et al. (1997) when the parameters are IID, i.e. when I(θ0)^{-1} = 1, but the limiting optimal acceptance rate is also 23.38% when I(θ0) ≠ 1, as long as MM^T = I(θ0)^{-1}; this is a consequence of the invariance of (2), a quality that the acceptance rate also has.
From Table 1, we observe that when M is such that MM^T = I(θ0)^{-1}, the optimal acceptance rate is approximately 44% for d = 1 and 35% for d = 2, and decreases towards 23.38% as d increases, regardless of the covariance structure. A theoretical result allows us to support our numerical findings: Proposition 1 states that, for fixed ℓ, the expected acceptance probability decreases monotonically as d increases, which confirms, for instance, that from d = 1 to d = 2 with ℓ = ℓ̂ = 2.42 fixed, the expected acceptance probability decreases.
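A smaller-scale version of this grid search can be sketched in Python. The sketch assumes that, for MM^T = I(θ0)^{-1}, the limiting ESJD takes the standard Gaussian-target form E[2λ²‖ε‖²Φ(−λ‖ε‖/2)] with ε ∼ N(0, 1_d) and λ = ℓ/√d; the function name and Monte Carlo size are ours.

```python
import math
import numpy as np

def optimize_esjd(d, ell_grid, n_mc=50_000, seed=1):
    """Grid search over ell in lambda = ell / sqrt(d), estimating by Monte Carlo
    ESJD(ell) = E[2 lambda^2 ||eps||^2 Phi(-lambda ||eps|| / 2)], eps ~ N(0, 1_d)
    (the Gaussian-target form assumed in the lead-in).
    Returns (ell_hat, estimated ESJD at ell_hat, acceptance rate at ell_hat)."""
    rng = np.random.default_rng(seed)
    norms2 = np.sum(rng.standard_normal((n_mc, d)) ** 2, axis=1)  # ||eps||^2 draws
    Phi = np.vectorize(lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0))))
    best = None
    for ell in ell_grid:
        lam = ell / math.sqrt(d)
        a = Phi(-lam * np.sqrt(norms2) / 2.0)   # acceptance probability given eps
        esjd = float(np.mean(2.0 * lam ** 2 * norms2 * a))
        acc = float(np.mean(2.0 * a))
        if best is None or esjd > best[1]:
            best = (ell, esjd, acc)
    return best
```

Reusing the same Monte Carlo draws across the grid makes the estimated ESJD curve smooth in ℓ, so a coarse grid suffices.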
We finish this section by noting that for d = 1, the ESJD and expected acceptance probability of a RWM targeting a Gaussian distribution have closed-form expressions (see Sherlock and Roberts 2009) and can thus be optimized using these expressions.

Analysis of the limiting RWM
We now present the practical implications of the guidelines proposed in Sect. 3.1 (in the asymptotic regime n → ∞) through an analysis of the impact of different target covariances on the performance and acceptance rate of the optimal limiting RWM. More precisely, we analyse the behaviour of the limiting RWM with M = 1 and with M such that MM^T = I(θ0)^{-1} under different target covariances; for each of these covariances, the algorithms are made optimal, in the sense that λ (or ℓ) is tuned according to the expressions in Corollary 1 (or Table 1). The algorithm with M such that MM^T = I(θ0)^{-1} has a higher complexity because an additional matrix multiplication is required at every iteration. However, in standard modern statistical computing frameworks, we found both algorithms to take roughly the same amount of time to complete; this is the case, for instance, for the numerical experiments presented in this paper, which were performed in R (R Core Team 2020) on a computer with an i9 CPU.
For the analysis, we focus on showing what happens when the correlation between the model parameters increases under a specific covariance structure: the (i, j)th entry of I(θ 0 ) −1 is given by ρ |i− j| , where −1 ≤ ρ ≤ 1 is a varying parameter. This covariance structure is often called autoregressive of order 1 and represents a situation where the parameters are standardized, in the sense that their marginal variances are all equal to 1, and the correlations between them decline exponentially with distance, at a speed that depends on ρ. In this setting, the target covariance matrix is parametrized with only one parameter, ρ. The case where 0 ≤ ρ ≤ 1 is more interesting for the current purpose; a value close to 0 leads to weak correlations between the parameters, whereas a value close to 1 makes the correlation persist with distance, yielding strong correlations between the parameters. Note that the situation where parameters are standardized and M = 1 is equivalent to that where the parameters are non-standardized but M is a diagonal matrix with diagonal entries equal to the marginal standard deviations. The empirical results are presented in Fig. 1.
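The covariance just described is straightforward to construct; here is a small Python helper (names ours):

```python
import numpy as np

def ar1_cov(d, rho):
    """Autoregressive-of-order-1 covariance matrix: entry (i, j) is rho^|i - j|,
    so marginal variances are 1 and correlations decay exponentially with |i - j|."""
    idx = np.arange(d)
    return rho ** np.abs(idx[:, None] - idx[None, :])
```

The matrix is positive definite for |rho| < 1, so it admits the Cholesky factorization used by the preconditioned sampler.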
In Fig. 1, the algorithm performances are evaluated using the minimum of the marginal ESSs, reported per iteration. ESJD cannot be used to evaluate performance across different values of ρ because using a norm with respect to I(θ0) in ESJD standardizes this measure. We show the results for 0 ≤ ρ ≤ 0.9 as, beyond 0.9, RWM with M = 1 becomes unreliable. As suggested by the expressions in Corollary 1, the performance of RWM with M such that MM^T = I(θ0)^{-1} does not vary with ρ, while it does for RWM with M = 1; the latter in fact deteriorates as ρ increases, due to an optimal value for ℓ that decreases. As for the acceptance rate, it is invariant as well for RWM with the Cholesky decomposition matrix, and it increases slightly with ρ for RWM with the identity matrix. The optimal acceptance rate becomes closer to 0.234 as d increases when ρ = 0, which is not surprising given that the target in this case satisfies the assumptions of Roberts et al. (1997). It is, however, remarkable that, for M = 1, the optimal acceptance rate only slightly increases as ρ gets closer to 1.
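For reference, the minimum marginal ESS per iteration can be estimated from a chain with a simple autocorrelation-based estimator. This is a sketch; the truncation rule used here (stop summing at the first negative autocorrelation) is one common choice, not necessarily the one used for Fig. 1.

```python
import numpy as np

def ess(x, max_lag=None):
    """Effective sample size of a 1-d chain: n / (1 + 2 * sum of autocorrelations),
    truncating the sum at the first negative estimated autocorrelation."""
    x = np.asarray(x, dtype=float)
    n = x.size
    xc = x - x.mean()
    var = xc @ xc / n
    if max_lag is None:
        max_lag = n // 2
    s = 0.0
    for lag in range(1, max_lag):
        rho = (xc[:-lag] @ xc[lag:]) / (n * var)
        if rho < 0:
            break
        s += rho
    return n / (1 + 2 * s)

def min_marginal_ess_per_iter(chain):
    """Minimum over coordinates of ESS(chain[:, j]), divided by the chain length."""
    n = chain.shape[0]
    return min(ess(chain[:, j]) for j in range(chain.shape[1])) / n
```

For IID draws this returns roughly n, while for a strongly autocorrelated chain it is much smaller, matching the per-iteration scale reported in Fig. 1.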

Connection to scaling limits
The aim of this section is to establish a formal connection between our guidelines and those of Roberts et al. (1997) through an asymptotic analysis of features of the limiting chain Ξ := (Z_k)_{k≥0} as d increases. In particular, it will be pointed out using a theoretical argument that our guidelines are consistent, in that we find asymptotically optimal values for ℓ and the acceptance rate equivalent to those of these authors. The stationary distribution of Ξ, which is a Gaussian with mean 0 and covariance I(θ0)^{-1}, can be seen as a special case of the product target studied by Roberts et al. (1997) when I(θ0)^{-1} = 1. As mentioned in the previous sections, it is thus not surprising, but reassuring, to find the same asymptotically optimal values within our framework for this special case.
To find the optimal values for RWM in the high-dimensional limit, we analyse the expected acceptance probability and ESJD(λ, M) by considering them as sequences indexed by d, and let d → ∞. We provide a result establishing that ESJD(λ, M) converges towards a function that is equivalent to that optimized in Roberts et al. (1997), when λ = ℓ/√d and the proposal covariance is set to MM^T = I(θ0)^{-1}. The ESJD is optimized by an equivalent value for ℓ, and the expected acceptance probability converges to the same limiting acceptance rate as in Roberts et al. (1997), which is seen to imply that the asymptotically optimal acceptance rate is the same. The asymptotically optimal values are 2.38 and 0.234 for ℓ and the acceptance rate, respectively. Within our framework, these values are optimal for any target covariance I(θ0)^{-1}, given that the limiting acceptance rate and ESJD do not depend on I(θ0)^{-1}.
Before presenting the formal results, we provide an informal argument explaining why the connection exists and, more precisely, why ESJD(λ, M) converges towards a function that is equivalent to that in Roberts et al. (1997). Central to the reason why the efficiency measures are asymptotically the same are the convergences of the acceptance rates in both contexts to a constant as d → ∞. To provide the informal argument, we thus present the acceptance rates and show how Taylor expansions explain their asymptotic behaviour. We start with that in Roberts et al. (1997); we thus consider a sequence of target densities {π_d} with π_d(θ) = ∏_{i=1}^d f(θ_i) and proposals θ′ = θ + (ℓ/√d)ε, f satisfying some regularity conditions. Under these assumptions, it can be proved that, for d large,

E[min{1, π_d(θ′)/π_d(θ)}] = E[min{1, exp(∑_{i=1}^d {log f(θ′_i) − log f(θ_i)})}] ≈ E[2Φ(−(ℓ/2){∑_{i=1}^d ((log f)′(θ_i))²/d}^{1/2})],   (3)

where "≈" is to be understood as a relationship asserting that the expressions are asymptotically equivalent; for the equality in (3), we used that the term in the exponential has a conditional normal distribution given θ (because θ′_i − θ_i = (ℓ/√d)ε_i) and the closed form of E[min{1, e^X}] when X is normal. We establish a limit using that, by the law of large numbers, ∑_{i=1}^d ((log f)′(θ_i))²/d → L := E_f[((log f)′(θ_1))²], so that the acceptance rate converges to 2Φ(−ℓ√L/2). In their context, ℓ̂ = 2.38/√L and 2Φ(−ℓ̂√L/2) = 0.234.
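These limiting values are easy to verify numerically: maximizing the speed function h(ℓ) = 2ℓ²Φ(−ℓ√L/2) of Roberts et al. (1997) with L = 1 recovers ℓ̂ ≈ 2.38 and an acceptance rate 2Φ(−ℓ̂/2) ≈ 0.234. The quick Python check below is ours, not code from the paper.

```python
import math

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def speed(ell, L=1.0):
    """Limiting efficiency h(ell) = 2 ell^2 Phi(-ell sqrt(L)/2)."""
    return 2.0 * ell ** 2 * Phi(-ell * math.sqrt(L) / 2.0)

# grid search for the maximizer when L = 1
ells = [i / 1000 for i in range(1, 6001)]
ell_hat = max(ells, key=speed)
acc_hat = 2.0 * Phi(-ell_hat / 2.0)
```

The grid search lands at ℓ̂ ≈ 2.38 with acc_hat ≈ 0.234, matching the universal constants quoted above.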
In theory, one can obtain a more general limiting expression for ESJD(λ, M) when M is not specified to be such that MM^T = I(θ0)^{-1}. However, one would need to know how I(θ0)^{-1} behaves when d grows because ESJD(λ, M) depends, in general, on I(θ0)^{-1}. For example, from (1), it can be observed that ESJD(ℓ/√d, 1) → 2ℓ²L Φ(−ℓ√L/2) whenever ‖ε‖²_{I(θ0)}/d → L ∈ ℝ as d → ∞ in probability, that is, whenever the correlation in I(θ0) allows for a law of large numbers for the squared norm ‖ε‖²_{I(θ0)}, as long as uniform integrability conditions hold. In the previous section, for example, the autoregressive covariance matrix allows for a law of large numbers, and uniform integrability conditions hold. This is a consequence of the form of I(θ0), which is a tridiagonal matrix, turning ‖ε‖²_{I(θ0)} into a sum of correlated random variables where the correlation exists only for random variables that are close to each other; more precisely, each random variable in the sum is correlated with those with indices that differ by 1. The aforementioned conditions may fail to hold when the matrix I(θ0) yields a sum of correlated random variables where each of them is correlated with a number of random variables that grows with d.
The limiting behaviour of ESJD for the case M = 1 recently received detailed attention in Yang et al. (2020). These authors perform analyses under the traditional asymptotic framework d → ∞; however, in contrast to earlier work, their approach does not require the restrictive assumption of IID model parameters. Instead, the authors perform analyses under an assumption of partially connected graphical models. A key mathematical object there, which measures the "roughness" of the log target density, is

I_d(θ) := (1/d) ∑_{i=1}^d (∂/∂θ_i log π_d(θ))².

It appears, for instance, in an expectation that is asymptotically equivalent to their expected acceptance probability:

E[2Φ(−(ℓ/2) I_d(θ)^{1/2})],   (4)

where the expectation is with respect to π_d. It also appears in an expectation analogous to (1) that is asymptotically equivalent to their ESJD. There exists an interesting connection between their optimization problem and that of optimizing (1) that can be established by identifying the counterpart to I_d(θ) in (1) and the expected acceptance probability. The optimal acceptance rates derived under their framework are often close to 0.234 for large enough d, which is what we observed under our framework as well, for instance, in Sect. 3.2. We finish this section with a brief analysis which highlights the existence of that connection by focusing on similarities between the acceptance rates. We identify the counterpart to I_d(θ) to be ‖ε‖²_{I(θ0)}/d, recalling that the expected acceptance probability under our framework is E[2Φ(−(ℓ/2){‖ε‖²_{I(θ0)}/d}^{1/2})]. Note that, under regularity conditions, the normalized version of (∂/∂θ_i log π_d(θ))², when seen as the square of the derivative of the sum of the log prior and log densities, converges in distribution to I(θ)_{ii} times a chi-square random variable with 1 degree of freedom as n → ∞. For weak interactions between model parameters, represented by sparse graphs, ‖ε‖²_{I(θ0)}/d thus encodes similar information to I_d(θ). This highlights that the expected acceptance probability under our framework and theirs, given by (4), are similar in essence.
In general, Jensen's inequality allows us to observe that

E[2Φ(−(ℓ/2) X^{1/2})] ≥ 2Φ(−(ℓ/2) E[X]^{1/2})

for a non-negative random variable X, given that x ↦ Φ(−a√x) is convex for x ≥ 0 with a > 0. Acceptance rates derived within our framework are thus expected to be larger than those derived within the framework of Yang et al. (2020) when π_d concentrates around θ0. They have, for instance, been observed to be larger than 0.234 in Sect. 3.2, while in Yang et al. (2020) they are shown to be smaller than or equal to 0.234.
We do not investigate the problem of convergence of ESJD(λ, M) in full generality. In addition to Yang et al. (2020), we refer the reader to Ghosal (2000), Belloni and Chernozhukov (2009) and Belloni and Chernozhukov (2014) who conducted analyses of posterior distributions in asymptotic regimes where d is allowed to grow with n.

Logistic regression with real data
In this section, we demonstrate that the RWM algorithm targeting π_n behaves similarly to its asymptotic counterpart, targeting a Gaussian distribution, in some practical cases. To achieve this, we consider a specific practical case and compare the asymptotically optimal value for ℓ when MM^T = I(θ0)^{-1} based on ESJD (which does not depend on the unknown I(θ0)^{-1}) to that obtained from tuning the non-limiting ESJD with M_n M_n^T set to be the inverse of the observed information matrix. We also compare the optimal acceptance rates and present results for the RWM algorithm using M_n = 1. The practical case that we study is one where the posterior distribution results from a Bayesian logistic regression model and a patent data set from Fahrmeir et al. (2007). We will see that, for this example with a sample size of n = 4,866 and d = 9 parameters, both the optimal values for ℓ and the acceptance rates agree closely, showing that the limiting RWM represents a good approximation of that targeting π_n in situations where the Bayesian models are regular and the sample sizes are realistically large. This example also allows us to show that the guidelines derived from the limiting RWM and the performance analysis conducted in Sect. 3.2 are relevant in such situations.
We denote the binary response variable and covariate vector data points by r_1, . . . , r_n and x_1, . . . , x_n, respectively, with the first component of each x_i being equal to 1. In logistic regression, the parameters θ are regression coefficients. To fit within the theoretical framework presented in Sect. 2, we assume that Y_1, . . . , Y_n = (R_1, X_1), . . . , (R_n, X_n) are IID random variables and that the model is well specified. Formally speaking, the latter assumption is certainly not true, but the fact that the empirical results are close to the theoretical (and asymptotic) ones suggests that the model approximates the true data-generating process well. We now show that Theorem 1 can be applied by verifying the assumptions stated in Sect. 2. The logistic regression model is, as mentioned in Sect. 1.3, regular enough; Assumption 1 is thus satisfied. We set M_n M_n^T to be the inverse of a standardized version of the observed information matrix evaluated at the maximum a posteriori estimate θ̂_n, i.e. the inverse of the matrix given in (5).
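For illustration, the (unstandardized) observed information matrix of a logistic regression can be computed as X^T W X with W = diag(p_i(1 − p_i)); the sketch below uses synthetic covariates in place of the patent data set, and `observed_information` is a hypothetical helper rather than code from the paper.

```python
import numpy as np

def observed_information(theta, X):
    """Observed information of a logistic regression at theta.

    For the model P(R_i = 1 | x_i) = sigmoid(x_i^T theta), the negative
    Hessian of the log-likelihood is X^T W X with W = diag(p_i (1 - p_i)).
    """
    p = 1.0 / (1.0 + np.exp(-X @ theta))
    w = p * (1.0 - p)                  # diagonal of W
    return (X * w[:, None]).T @ X      # X^T W X without forming diag(W)

# Synthetic stand-in for the real covariates (intercept in the first column):
rng = np.random.default_rng(1)
n, d = 1000, 3
X = np.hstack([np.ones((n, 1)), rng.normal(size=(n, d - 1))])
theta = np.zeros(d)
info = observed_information(theta, X)
# M_n M_n^T would then be taken as an inverse: np.linalg.inv(info)
```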
Under weak regularity conditions, M_n M_n^T converges, and we set S_n = (λ/√n) M_n, implying that Assumption 2 is satisfied whenever these conditions hold. Theorem 1 therefore applies provided that the chains start in stationarity (Assumption 3) and these weak regularity conditions are verified.
When d = 9, the asymptotically optimal value for λ when MM^T = I(θ_0)^{-1} is 2.39, and the acceptance rate of the limiting RWM using this value is 26.26%. The optimal values for the RWM algorithm with M_n set as the inverse of (5) are essentially the same: 2.37 and 26.68% for λ and the acceptance rate, respectively. The value of λ that maximizes the ESS per iteration is 2.40; the maximum ESS per iteration is 0.034, which is significantly higher than the maximum of 0.006 attained by the algorithm with M_n = 1. As explained and shown in Sect. 3, the poor performance of the latter sampler is due to strong correlation between the parameters. For this sampler, a value of 6.89 is optimal for λ based on the ESS, whereas a value of 6.51 is optimal when the ESJD is instead considered. The acceptance rate of the algorithm using M_n = 1 and the latter value is 27.69%. Note that we tried smaller models with fewer covariates and larger ones with interaction terms, and the optimal values when M_n is set as the inverse of (5) are consistent with the guidelines presented in Table 1. The results in this numerical experiment follow from a numerical optimization of the ESJD and the ESS based on Markov chain samples of size 10,000,000 and a grid search.
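The grid-search tuning of the ESJD can be sketched as follows, with a standard normal target standing in for π_n, a far shorter run than the 10,000,000 iterations used in the experiment, and a λ/√d parametrization of the proposal scale (an assumption for this sketch; it need not match the paper's exact convention).

```python
import numpy as np

def rwm_esjd(lam, d, n_iter, rng):
    """Estimate the ESJD of an RWM chain with proposal scale lam / sqrt(d),
    targeting a d-dimensional standard normal (a stand-in for pi_n)."""
    theta = np.zeros(d)
    jumps = 0.0
    for _ in range(n_iter):
        prop = theta + (lam / np.sqrt(d)) * rng.normal(size=d)
        log_alpha = 0.5 * (theta @ theta - prop @ prop)  # log target ratio
        if np.log(rng.uniform()) < log_alpha:
            jumps += np.sum((prop - theta) ** 2)         # rejected moves add 0
            theta = prop
    return jumps / n_iter

rng = np.random.default_rng(2)
lambdas = np.linspace(0.5, 5.0, 10)
esjd = [rwm_esjd(lam, d=9, n_iter=20_000, rng=rng) for lam in lambdas]
best = lambdas[int(np.argmax(esjd))]
```

For a correlated target one would instead propose θ′ = θ + λMε with MM^T an approximation of the target covariance; the spherical Gaussian here only illustrates the grid-search mechanics.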

Discussion
In this paper, we have analysed the behaviour of random walk Metropolis (RWM) algorithms when used to sample from Bayesian posterior distributions, under the asymptotic regime n → ∞, in contrast with previous asymptotic analyses where d → ∞. Our analysis led to novel parameter-dimension-dependent tuning guidelines which are consistent with the well-known 0.234 rule. A formal argument allowed us to show that this rule can in fact be derived from the angle adopted in this paper as well. We believe that analyses similar to those performed in this paper can be conducted to develop practical tuning guidelines for more sophisticated algorithms like the Metropolis-adjusted Langevin algorithm (Roberts and Tweedie 1996) and Hamiltonian Monte Carlo (Duane et al. 1987), and to establish other interesting connections with the optimal scaling literature (e.g. Roberts and Rosenthal 1998; Beskos et al. 2013).
The guidelines developed in this paper for RWM algorithms are valid under weak assumptions; we essentially only require a Bernstein–von Mises theorem to hold for the target distribution. This is in stark contrast to scaling limit approaches. To our knowledge, there is one contribution, Yang et al. (2020), that provides guidelines for a realistic model based on a scaling limit argument, and it requires the posterior distribution to concentrate, which is in line with the argument of this paper. The guidelines proposed in our paper are in theory valid in the limit n → ∞; we have demonstrated that they are nevertheless applicable in realistic scenarios with typical data sizes using a logistic-regression analysis of real data. This example, together with our analysis of the limiting RWM, also allows us to support the findings about the robustness of the 0.234 rule to model parameters that are not independent and identically distributed (IID) when the scaling matrix is diagonal.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Proof (Theorem 1) Result (i).
To prove this result, we use Theorem 2 of Schmon et al. (2021a). We thus have to verify three conditions.
1. As n → ∞, the following convergence holds in P_Y-probability: Z_{0,n} converges weakly to Z_0.
2. Use P_n and P to denote the transition kernels of Ξ_n and Ξ, respectively. These are such that P_n h converges to Ph in P_Y-probability as n → ∞, for all h ∈ BL, the set of bounded Lipschitz functions.
3. The transition kernel P is such that Ph(·) is continuous for any h ∈ C_b, the set of continuous bounded functions.
We start with Condition 1. It suffices to verify that, in P_Y-probability, for any measurable set A. We have that this holds in P_Y-probability by Assumption 1, using Jensen's inequality, that A ⊆ R^d, and a change of variable θ = z/n^{1/2} + θ̂_n.

We turn to Condition 2. We have that, where α_n(z, z′) := min{1, π_{Z_n}(z′)/π_{Z_n}(z)}, ρ_n(z) is the corresponding rejection probability, α(z, z′) := min{1, ϕ(z′; 0, I(θ_0)^{-1})/ϕ(z; 0, I(θ_0)^{-1})}, and ρ(z) is the corresponding rejection probability. Therefore, using the triangle inequality, we obtain a bound involving two integrals. We prove that the first integral on the right-hand side (RHS) converges to 0 in P_Y-probability; the other integral is seen to converge using similar arguments. Using Jensen's inequality, that there exists a positive constant K such that |h| ≤ K, and the triangle inequality, the first integral is in turn bounded by two further integrals. We now prove that each of these last two integrals converges to 0. We begin with the first one: it converges in P_Y-probability by Assumption 2, using that 0 ≤ α_n ≤ 1 and Devroye et al. (2018, Proposition 2.1), where tr(·) and det(·) are the trace and determinant operators, respectively. Note that by Assumption 2 we have that M_n M_n^T → MM^T in probability, meaning that all components converge, which implies that the trace and the log of the determinant terms both vanish. Next, we use the triangle inequality. The second resulting integral is seen to converge to 0 in P_Y-probability by Assumption 1, using that 0 ≤ α ≤ 1 and a change of variable θ = z/n^{1/2} + θ̂_n. For the first one, we write, using that |min{a, b} − min{c, d}| ≤ |a − c| + |b − d| for any real numbers a, b, c and d, a bound by two integrals. Both are seen to vanish as above (recall (7)) after noticing that ϕ(dz′; z, λ²MM^T) dz = ϕ(dz; z′, λ²MM^T) dz′, which is used in the second integral.

There remains to verify Condition 3: the continuity of Ph.
Without loss of generality, consider a non-random sequence of vectors (e_n)_{n≥1} with monotonically shrinking components (in absolute value) such that sup_n e_n^T (MM^T)^{-1} e_n < ∞. We now prove that Ph(z + e_n) → Ph(z) as n → ∞.
We prove that the first term on the RHS converges to the desired limit; the convergence of the second term follows using similar arguments.
We write the term as an expectation with respect to ϕ(·; z, λ²MM^T); we highlight the dependence on z using the notation E_z. We have that exp(−e_n^T (λ²MM^T)^{-1} e_n / 2) → 1, almost surely, given the continuity of α and the exponential function.
To prove that the expectation converges to the desired limit, we thus only need to prove that the sequence of integrands is uniformly integrable, which can be verified directly. This concludes the proof of Result (i).
Result (ii). We want to prove the stated convergence in P_Y-probability as n → ∞. Using the triangle and Jensen's inequalities and the fact that 0 ≤ α ≤ 1, we obtain a bound by the two integrals studied above. We have shown in the proof of Result (i) that both integrals converge to 0 (recall (6) and (7)), which concludes the proof of Result (ii).
We now prove that each of the integrals on the RHS vanishes. We start with the first one, λ² ∫∫ |π_{Z_n}(z + λM_n ε) − ϕ(z + λM_n ε; 0, I(θ_0)^{-1})| ϕ(dε; 0, 1) dz, using the change of variable z′ = z + λM_n ε. As we have seen before, the last integral vanishes (recall (8)). The third integral on the RHS in (9) vanishes for similar reasons. For the second one, we use that M_n → M in P_Y-probability. This is true because M_n M_n^T → MM^T in P_Y-probability and the Cholesky decomposition yields a continuous map. Now, using Devroye et al. (2018, Proposition 2.1) and the Cauchy–Schwarz inequality, the first integral on the RHS is bounded. We write the second one as an expectation: Σ_j λ_{j,n} = tr(A_n) → 0, using an eigendecomposition of A_n and that ξ_n := (ξ_{1,n}, . . . , ξ_{d,n})^T := Q_n^T ε is a random vector with independent standard normal components, where A_n := (M^{-1}M_n − 1)^T (M^{-1}M_n − 1), Q_n is an orthogonal matrix whose columns are the eigenvectors of A_n, and Λ_n is a diagonal matrix whose entries λ_{1,n}, . . . , λ_{d,n} are the eigenvalues of A_n. This concludes the proof.
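The final step rests on the identity E[ξ^T A ξ] = tr(A) for a standard normal vector ξ; a minimal numerical check (with an arbitrary small matrix standing in for M^{-1}M_n − 1) is the following sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4
B = 0.05 * rng.normal(size=(d, d))   # stand-in for M^{-1} M_n - 1 (small when M_n is close to M)
A = B.T @ B                          # the matrix A_n in the proof; symmetric positive semi-definite

# For xi ~ N(0, I_d), E[xi^T A xi] = tr(A): diagonalize A = Q Lambda Q^T and use
# the rotational invariance of the standard normal, so the expectation is the
# sum of the eigenvalues.
xi = rng.normal(size=(200_000, d))
mc = np.einsum('ij,jk,ik->i', xi, A, xi).mean()   # Monte Carlo estimate of E[xi^T A xi]
print(abs(mc - np.trace(A)))                      # small Monte Carlo error
```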
The formulae for the ESJD with M such that MM^T = I(θ_0)^{-1} and for the expected acceptance probabilities are derived analogously.