# Bayesian computation: a summary of the current state, and samples backwards and forwards


## Abstract

Recent decades have seen enormous improvements in computational inference for statistical models; there have been competitive continual enhancements in a wide range of computational tools. In Bayesian inference, first and foremost, MCMC techniques have continued to evolve, moving from random walk proposals to Langevin drift, to Hamiltonian Monte Carlo, and so on, with both theoretical and algorithmic innovations opening new opportunities to practitioners. However, this impressive evolution in capacity is confronted by an even steeper increase in the complexity of the datasets to be addressed. The difficulties of modelling and then handling ever more complex datasets most likely call for a new type of tool for computational inference that dramatically reduces the dimension and size of the raw data while capturing its essential aspects. Approximate models and algorithms may thus be at the core of the next computational revolution.

## Keywords

Bayesian analysis · MCMC algorithms · ABC techniques · Optimisation

## 1 Introduction

One may reasonably balk at the terms “computational statistics” and “Bayesian computation” since, from its very start, statistics has always involved some computational step to extract information, something manageable like an estimator or a prediction, from raw data. This necessarily incomplete and unavoidably biased review of the recent past, current state, and immediate future of algorithms for Bayesian inference thus first requires us to explain what we mean by computation in a statistical context, before turning to what we perceive as medium-term solutions and possible dead ends.

Computations are an issue in statistics whenever processing a dataset becomes a difficulty, a liability, or even an impossibility. Obviously, the computational challenge varies according to the time when it is faced: what was an issue in the nineteenth century is most likely not so any longer (take for instance the derivation of the moment estimates of a mixture of two normal distributions, so painstakingly derived by Pearson 1894 for estimating the ratio of “forehead” breadth to body length on a dataset of 1000 crabs, or the intense algebraic derivations found in the analysis of variance of the 1950s and 1960s, Searle et al. 1992).

The introduction of simulation tools in the 1940s followed hard on the heels of the invention of the computer and certainly contributed an impetus towards faster and better computers, at least in the first decade of this revolution. This shows that these tools were both needed, and unavailable without electronic calculators. The introduction of Markov chain Monte Carlo is harder to pin down as some partial versions can be traced all the way back to 1944–1945 and the Manhattan project at Los Alamos (Metropolis 1987). It is surprisingly much later, i.e., only by the early 1990s, that such methods became part of the Bayesian toolbox, that is, some time after the devising of other computer-dependent tools like the bootstrap or the EM algorithm, and despite the availability of personal computers that considerably eased programming and experimenting (Robert and Casella 2011). It is presumably pointless to try to attribute this delay to a definite cause but a certain lack of probabilistic culture within the statistics community is probably partly to blame.

What makes this time-lag in MCMC methods becoming assimilated into the statistics community even more surprising is the fact that a significant role for Bayesian inference in statistical practice was really on hold pending the discovery of flexible computational tools that (implicitly or explicitly) delivered values for the medium-to-high-dimensional integrals underpinning the calculation of posterior distributions, in all but toy problems where conjugacy provided explicit answers. In fact, until Bayesians discovered MCMC, the only computational methodology that seemed to offer much chance of making Bayesian statistics practical was the portfolio of quadrature methods developed under Adrian Smith’s leadership at Nottingham (Naylor and Smith 1982; Smith et al. 1985, 1987).

The very first article in the first issue of *Statistics and Computing*, whose quarter-century we celebrate in this special issue, was (to the journal’s credit!) on Bayesian analysis, and was precisely in this direction of using clever quadrature methods to approach moderately high-dimensional posterior analysis (Dellaportas and Wright 1991). By the next (second) issue, sampling-based methods had started to appear, with three papers out of five in the issue on or related to Gibbs sampling (Verdinelli and Wasserman 1991; Carlin and Gelfand 1991; Wakefield et al. 1991).

Now, reflecting upon the evolution of MCMC methods over the 25 or so years they have been at the forefront of Bayesian inference, their focus has travelled a long way: from the hierarchical models extending the linear, mixed and generalised linear models (Albert 1988; Carlin et al. 1992; Bennett et al. 1996) that were the initial domain of application, and the graphical models stemming from image analysis (Geman and Geman 1984) and artificial intelligence, to dynamical models driven by ODEs (Wilkinson 2011b) and diffusions (Roberts and Stramer 2001; Dellaportas et al. 2004; Beskos et al. 2006), hidden trees (Larget and Simon 1999; Huelsenbeck and Ronquist 2001; Chipman et al. 2008; Aldous et al. 2008) and graphs, along with decision making in highly complex graphical models. While research on MCMC theory and methodology is still active and continually branching (Papaspiliopoulos et al. 2007; Andrieu and Roberts 2009; Łatuszyński et al. 2011; Douc and Robert 2011), progressively incorporating the capacities of parallel processors and GPUs (Lee et al. 2009; Jacob et al. 2011; Strid 2010; Suchard et al. 2010; Scott et al. 2013; Calderhead 2014), we wonder whether we are not currently facing a new era where those methods are no longer appropriate for the analysis of new models, and of new formulations in which models are no longer completely defined. We indeed believe that imprecise models, incomplete information and summarised data will become, if they are not already, a central aspect of statistical analysis, due to the massive influx of data and the need to provide non-statisticians with efficient tools. This is why we also cover in this survey the notion of approximate Bayesian computation (ABC) and comment on the use of optimisation tools.

The plan of the paper is that in Sects. 2 and 3 we discuss recent progress and current issues in Markov chain Monte Carlo and ABC, respectively. In Sect. 4, we highlight some areas of modern optimisation that, through lack of familiarity, are making less impact in the mainstream of Bayesian computation than we think justified. Our Discussion in Sect. 5 raises issues about data science and relevance to applications, and looks to the future.

## 2 MCMC, targeting the posterior

When MCMC techniques were introduced to the mainstream statistical (Bayesian) community in 1990, they were received with skepticism as to whether they could one day become the central tool of Bayesian inference. For instance, despite the assurance provided by the ergodic theorem, many researchers thought at first that the convergence of those algorithms was a mere theoretical anticipation rather than a practical reality, in contrast to traditional Monte Carlo methods, and hence that they could not be trusted to provide “exact” answers. This perspective is obviously obsolete by now, when MCMC output is considered as “exact” as regular Monte Carlo, if possibly less efficient in some settings. Nowadays, MCMC is again attracting more attention (than in the past decade, say, where developments were more about alternatives, some of which are described in the following sections), both because of methodological developments linked to better theoretical tools, for instance in the handling of stochastic processes, and because of new advances in accelerated computing via parallel and cloud computing.

### 2.1 Basics of MCMC

The first observation about the Metropolis–Hastings algorithm is that the flexibility in choosing *q* is a blessing, but also a curse, since this choice determines the performance of the algorithm. Hence a large part of the research on MCMC over the past 30 years (if we arbitrarily set the starting date at Geman and Geman 1984) has been on the choice of the proposal *q* to improve the efficiency of the algorithm, and on characterising its convergence properties. This typically requires gathering or computing additional information about \(\pi \) and we discuss some of the fundamental strategies in subsequent sections. Algorithm 1, and its variants in which variables are updated singly or in blocks according to some schedule, remains a keystone in standard use of MCMC methodology, even though the newer Hamiltonian Monte Carlo (HMC) approach (see Sect. 2.3) may sooner or later come to replace it. While there is nothing intrinsically unique to the nature of this algorithm, or optimal in its convergence properties (other than the result of Peskun 1973 on the optimality of the acceptance ratio), attempts to bypass Metropolis–Hastings are few and limited. For instance, the birth-and-death process developed by Stephens (2000) used a continuous-time jump process to explore a set of models, only to be later shown (Cappé et al. 2002) to be equivalent to the (Metropolis–Hastings) reversible jump approach of Green (1995).
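Since Algorithm 1 itself is not reproduced in this extract, a minimal sketch of random walk Metropolis–Hastings may help fix ideas (the function names, tuning values and standard normal target below are purely illustrative, not taken from the original): only pointwise evaluation of an unnormalised \(\pi \) is needed, and for a symmetric proposal *q* cancels from the acceptance ratio.

```python
import numpy as np

def metropolis_hastings(log_pi, theta0, n_iter=10_000, scale=1.0, rng=None):
    """Random walk Metropolis-Hastings targeting exp(log_pi), known up to a constant."""
    rng = np.random.default_rng(rng)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    chain = np.empty((n_iter, theta.size))
    log_p = log_pi(theta)
    for n in range(n_iter):
        # symmetric Gaussian proposal: q(.|theta) = N(theta, scale^2 I)
        prop = theta + scale * rng.standard_normal(theta.size)
        log_p_prop = log_pi(prop)
        # accept with probability min(1, pi(prop)/pi(theta)); q cancels by symmetry
        if np.log(rng.uniform()) < log_p_prop - log_p:
            theta, log_p = prop, log_p_prop
        chain[n] = theta
    return chain

# illustrative target: unnormalised standard normal density, started in the tail
chain = metropolis_hastings(lambda th: -0.5 * np.sum(th**2), theta0=[3.0], scale=2.4, rng=0)
```

After discarding an initial burn-in, the empirical moments of `chain` approximate those of the target; the choice of `scale` is exactly the tuning problem discussed in Sect. 2.4.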

Another aspect of the generic Metropolis–Hastings algorithm that has become central more recently is that, while the accept–reject step does overcome the need to know the normalising constant, it still requires pointwise evaluation of \(\pi ,\) if only in unnormalised form, and this may be too expensive to compute or even intractable for complicated models and large datasets. Much recent research effort has been devoted to the design and understanding of appropriate modifications that use estimators or approximations of \(\pi \) instead, and we will take the opportunity to summarise some of the progress in this direction.

### 2.2 MALA and manifold MALA

The Metropolis adjusted Langevin algorithm is built on the Langevin diffusion

\(\mathrm{d}\theta _{t} = \tfrac{1}{2}\nabla \log \pi (\theta _{t})\,\mathrm{d}t + \mathrm{d}B_{t},\)  (1)

which has \(\pi \) as its stationary distribution, and on the Euler discretisation of (1) with step size *h*,

\(\theta ^{(n)} = \theta ^{(n-1)} + {h \over 2}\nabla \log \pi (\theta ^{(n-1)}) + h^{1/2}\epsilon ^{(n)}, \quad \epsilon ^{(n)} \sim N(0,\,I_{d\times d}).\)  (2)

There is a delicate tradeoff between accuracy of the approximation, improving as \(h \rightarrow 0\), and sampling efficiency [as measured, e.g., by the effective sample size (ESS)], improving as *h* increases. This unadjusted solution was soon followed by its Metropolised version (Rossky et al. 1978) that uses the Euler approximation (2) to produce a proposal in the Metropolis–Hastings Algorithm 1, by letting \(q (\cdot \,|\,\theta ^{(n-1)}) := N\big (\theta ^{(n-1)} + {h \over 2 } \nabla \log \pi (\theta ^{(n-1)}),\; h\, I_{d\times d}\big ).\) While in the probability community Langevin diffusions and their equilibrium distributions had also been around for some time (Kent 1978), it was the Roberts and Tweedie (1996a) paper (motivated by Besag’s 1994 comment on Grenander and Miller 1994) that brought the approach to the centre of interest of the computational statistics community and sparked systematic study, development and applications of Metropolis adjusted Langevin algorithms (hence MALA) and their cousins.

There is a large body of empirical evidence that, at the extra price of computing the gradient, MALA algorithms typically provide a substantial speed-up in convergence on certain types of problems. However, for very light-tailed distributions the drift term may grow to infinity and cause additional instability. More precisely, for distributions with sufficiently smooth contours, MALA is geometrically ergodic (cf. Roberts and Rosenthal 2004) if the tails of \(\pi \) decay as \(\exp \{-|\theta |^{\beta }\}\) with \(\beta \in [1,\,2],\) while the random walk Metropolis algorithm is geometrically ergodic for all \(\beta \ge 1\) (Roberts and Tweedie 1996a; Mengersen and Tweedie 1996). The lack of geometric ergodicity outside this range has been precisely quantified by Bou-Rabee and Hairer (2012).
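Under the same illustrative conventions (hypothetical function names, a toy standard normal target), a MALA transition can be sketched as follows; note that, unlike the random walk case, the Gaussian proposal is asymmetric, so its density must enter the Metropolis–Hastings acceptance ratio.

```python
import numpy as np

def mala(log_pi, grad_log_pi, theta0, n_iter=10_000, h=0.5, rng=None):
    """Metropolis adjusted Langevin algorithm with step size h."""
    rng = np.random.default_rng(rng)
    theta = np.atleast_1d(np.asarray(theta0, dtype=float))
    d = theta.size
    chain = np.empty((n_iter, d))

    def log_q(to, frm):
        # log density (up to a constant) of the Euler proposal N(frm + (h/2) grad, h I)
        mean = frm + 0.5 * h * grad_log_pi(frm)
        return -np.sum((to - mean) ** 2) / (2.0 * h)

    for n in range(n_iter):
        prop = theta + 0.5 * h * grad_log_pi(theta) + np.sqrt(h) * rng.standard_normal(d)
        # asymmetric proposal: the ratio q(theta|prop)/q(prop|theta) does not cancel
        log_alpha = (log_pi(prop) - log_pi(theta)
                     + log_q(theta, prop) - log_q(prop, theta))
        if np.log(rng.uniform()) < log_alpha:
            theta = prop
        chain[n] = theta
    return chain

# standard normal target, for which grad log pi(theta) = -theta
chain = mala(lambda th: -0.5 * np.sum(th**2), lambda th: -th, theta0=[2.0], rng=1)
```

The drift term pushes proposals towards high-density regions, which is the source of the empirical speed-up discussed above.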

Active research on MALA includes optimal scaling and adaptive choice of the step size *h* (both discussed in Sect. 2.4), combinations with proximal operators (Pereyra 2015; Schreck et al. 2013), and applications and algorithm development for the infinite-dimensional context (Pillai et al. 2012; Cotter et al. 2013). One particular direction of active research considers a more general version of Eq. (1) with state-dependent drift and diffusion coefficient.

A suitable choice of this state-dependent diffusion coefficient, combined with the step size *h*, results in an algorithm that is as robust as random walk Metropolis, in the sense that it is geometrically ergodic for targets with tail decay of \(\exp \{- |\theta |^{\beta }\}\) for \(\beta >1\) (see Taylor 2014). In the manifold MALA (MMALA) version, the diffusion coefficient is a position-dependent metric, typically built from the Hessian of \(-\log \pi .\) A robustified version of such a metric has been introduced in Betancourt (2013) and termed SoftAbs. Here one approximates the absolute value of the eigenspectrum of the Hessian of \(\pi \) with the smooth, strictly positive function \(\lambda _{i} \mapsto \lambda _{i} \frac{\exp {\{\alpha \lambda _{i}\}} + \exp {\{-\alpha \lambda _{i}\}}}{\exp {\{\alpha \lambda _{i}\}} - \exp {\{-\alpha \lambda _{i}\}}} = \lambda _{i} \coth (\alpha \lambda _{i}),\) where \(\alpha \) is a smoothing parameter. The metric stabilises the behaviour of both MMALA and HMC algorithms (discussed in the sequel) in the neighbourhoods where the signature of the Hessian changes.
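As a small numerical illustration of the formula above (nothing beyond the stated map is assumed), the SoftAbs transformation \(\lambda \mapsto \lambda \coth (\alpha \lambda )\) approximates \(|\lambda |\) for large \(|\alpha \lambda |\) while staying bounded below by \(1/\alpha ,\) which is precisely what keeps the metric nondegenerate where Hessian eigenvalues cross zero.

```python
import numpy as np

def softabs(lam, alpha=1.0):
    """SoftAbs map lam -> lam * coth(alpha * lam): a smooth, strictly positive
    surrogate for |lam|, equal to 1/alpha in the limit lam -> 0."""
    lam = np.asarray(lam, dtype=float)
    small = np.abs(lam) < 1e-8
    lam_safe = np.where(small, 1.0, lam)   # avoid 0/0 in the ratio below
    return np.where(small, 1.0 / alpha, lam_safe / np.tanh(alpha * lam_safe))
```

For instance, `softabs(5.0)` is close to 5, `softabs(-5.0)` is also close to 5, and `softabs(0.0, alpha=2.0)` equals the floor value 0.5.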

### 2.3 Hamiltonian Monte Carlo

As with many improvements in the literature, starting with the very notion of MCMC, Hamiltonian (or hybrid) Monte Carlo (HMC) stems from physics (Duane et al. 1987). After a slow emergence into the statistical community (Neal 1999), it is now central in statistical software like Stan (Stan Development Team 2014). For a complete account of this important flavour of MCMC, the reader is referred to Neal (2013), which inspired the description below; see also Betancourt et al. (2014) for a highly mathematical differential-geometric approach to HMC.

In this augmentation scheme, the target \(\pi (\theta )\) is completed with an auxiliary variable *p* into a joint distribution \(\propto \exp \{-H(\theta ,\,p)\},\) where the Hamiltonian \(H(\theta ,\,p) = -\log \pi (\theta ) + p^{\text {T}} M^{-1} p/2\) borrows its terminology from physics: \(\theta \) is the position of a particle, *p* its speed or momentum, and *M* a mass matrix. In more statistical language, HMC creates an auxiliary variable *p* such that moving according to Hamilton’s equations

\(\frac{\mathrm{d}\theta }{\mathrm{d}t} = M^{-1} p, \qquad \frac{\mathrm{d}p}{\mathrm{d}t} = \nabla \log \pi (\theta ),\)

keeps \(H(\theta ,\,p)\) constant, so that the joint density is preserved along the trajectory. In practice, these equations cannot be solved exactly and are approximated by a *k*-th order symplectic integrator (Hairer et al. 2006), most commonly the second-order *leapfrog approximation*, which relies on a small step size \(\epsilon \) and updates *p* and \(\theta \) via a modified Euler method (the leapfrog) that is reversible and, being symplectic, preserves volume as well. This discretised update can be repeated for an arbitrary number of steps.

When considering the implementation via a Metropolis algorithm, a new value of the momentum *p* is drawn from the pseudo-prior \(\propto \exp \{ -p^\text {T} M^{-1} p / 2 \}\) and is followed by a Metropolis step, whose proposal is driven by the leapfrog approximation to the Hamiltonian dynamics on \((\theta ,\,p)\) and whose acceptance is governed by the usual Metropolis acceptance probability. What gives this augmentation (or disintegration) scheme its potential strength is that the value of \(H(\theta ,\,p)\) hardly changes during the Metropolis move, which means that the move is most likely to be accepted and that it may produce a very different value of \(\pi (\theta )\) without modifying the overall acceptance probability. In other words, moving along level sets is almost energy-free, but if the move proceeds for “long enough”, the chain can reach far-away regions of the parameter space, thus avoiding the myopia of standard MCMC algorithms. As explained in Neal (2013), this means that HMC avoids the inefficient random walk behaviour of most Metropolis–Hastings algorithms. What drives the exploration of the different values of \(H(\theta ,\,p)\) is therefore the simulation of the momentum, which makes its calibration both quite influential and delicate (Betancourt et al. 2014) as it depends on the unknown normalising constant of the target. (By calibration, we mean primarily the choice of the time discretisation step \(\epsilon \) in the leapfrog approximation and of the number *L* of leapfrog steps, but also the choice of the precision matrix *M*.)
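A single HMC transition can be sketched as follows (a minimal illustration with an identity mass matrix \(M = I\) and toy tuning values of our choosing; \(\epsilon \) and *L* are exactly the calibration parameters discussed above):

```python
import numpy as np

def hmc_step(log_pi, grad_log_pi, theta, eps=0.1, L=20, rng=None):
    """One HMC transition: momentum refresh, L leapfrog steps, accept-reject."""
    rng = np.random.default_rng(rng)
    p0 = rng.standard_normal(theta.size)                       # momentum ~ N(0, I)
    th, p = theta.copy(), p0 + 0.5 * eps * grad_log_pi(theta)  # initial half step
    for i in range(L):
        th = th + eps * p                                      # full position step
        if i < L - 1:
            p = p + eps * grad_log_pi(th)                      # full momentum step
    p = p + 0.5 * eps * grad_log_pi(th)                        # final half step
    # H(theta, p) = -log pi(theta) + |p|^2/2 is nearly conserved by the leapfrog,
    # so the acceptance probability min(1, exp(H0 - H1)) is close to one
    h0 = -log_pi(theta) + 0.5 * p0 @ p0
    h1 = -log_pi(th) + 0.5 * p @ p
    return th if np.log(rng.uniform()) < h0 - h1 else theta

# illustrative run on a standard normal target, started far in the tail
rng = np.random.default_rng(0)
theta = np.array([3.0])
draws = []
for _ in range(2000):
    theta = hmc_step(lambda th: -0.5 * np.sum(th**2), lambda th: -th, theta, rng=rng)
    draws.append(theta[0])
```

Because the trajectory covers a distance of order \(\epsilon L\) in one accepted move, successive draws decorrelate much faster than under a random walk of comparable acceptance rate.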

### 2.4 Optimal scaling and adaptive MCMC

The convergence of the Metropolis–Hastings Algorithm 1 depends crucially on the choice of the proposal distribution *q*, as does the performance of more complex MCMC and SMC algorithms, which are often hybrids using Metropolis–Hastings as a simulation substep.

Optimising over all implementable *q* appears to be a “disaster problem” due to its infinite-dimensional character, the lack of clarity about what is implementable and what is not, and the fact that the optimal *q* must depend in a complex way on the target \(\pi ,\) to which we have only limited access. MALA, for instance, provides a specific approach to constructing \(\pi \)-tailored proposals, and HMC can be viewed as a combination of Gibbs and special Metropolis moves for an extended target.

In this optimisation context, it is thus reasonable to restrict ourselves to some parametric family of proposals \(q_{\xi },\) or more generally of Markov transition kernels \(P_{\xi },\) where \(\xi \in \varXi \) is a tuning parameter, possibly high-dimensional.

The aim of adaptive Markov chain Monte Carlo is conceptually very simple. One expects that there is a set \(\varXi _{\pi } \subset \varXi \) of good parameters \(\xi \) for which the kernel \(P_{\xi }\) converges quickly to \(\pi ,\) and one allows the algorithm to search for \(\varXi _{\pi }\) “on the fly” and redesign the transition kernel during the simulation as more and more information about \(\pi \) becomes available. Thus an adaptive MCMC algorithm would apply the kernel \(P_{\xi ^{(n)}}\) to sample \(\theta ^{(n)}\) given \(\theta ^{(n-1)},\) where the tuning parameter \(\xi ^{(n)}\) is itself a random variable which may depend on the whole history \(\theta ^{(0)},\ldots ,\theta ^{(n-1)}\) and on \(\xi ^{(n-1)}.\) Adaptive MCMC rests on the hope that the adaptive parameter \(\xi ^{(n)}\) will find \(\varXi _{\pi },\) stay there essentially forever and inherit good convergence properties.

There are at least two fundamental difficulties in executing this strategy in practice. First, standard measures of efficiency of Markovian kernels, like the total variation convergence rate (c.f. Meyn and Tweedie 2009; Roberts and Rosenthal 2004), \(L^{2}(\pi )\) spectral gap (Diaconis and Stroock 1991; Roberts 1996; Saloff-Coste 1997; Levin et al. 2009) or asymptotic variance (Peskun 1973; Geyer 1992; Tierney 1998) in the Markov chain central limit theorem will not be available explicitly, and their estimation from a Markov chain trajectory is often an even more challenging task than the underlying MCMC estimation problem itself.

Secondly, when executing an adaptive strategy and trying to improve the transition kernel on the fly, the Markov property of the process is violated, therefore standard theoretical tools do not apply, and establishing validity of the approach becomes significantly more difficult. While the approach has been successfully applied in some very challenging practical problems (Solonen et al. 2012; Richardson et al. 2010; Griffin et al. 2014), there are examples of seemingly reasonable adaptive algorithms that fail to converge to the intended target distribution (Bai et al. 2011; Łatuszyński et al. 2013), indicating that compared to standard MCMC even more care must be taken to ensure validity of inferential conclusions.

In the seminal optimal scaling result, Roberts et al. (1997) consider the random walk Metropolis algorithm for a target with iid components of marginal density *f*, using a Gaussian proposal with variance \(\sigma ^{2}_{d} = l^{2}/d.\) As the dimension *d* goes to infinity, each coordinate of the suitably rescaled chain converges to the stationary solution *Z* of a Langevin SDE whose speed is a function *h*(*l*) of the scaling parameter, and maximising the speed *h*(*l*) is equivalent to maximising the efficiency of the algorithm as the dimension goes to infinity. Surprisingly, there is a one-to-one correspondence between the value \(l_{opt} = \mathrm{argmax}\, h(l)\) and the mean acceptance probability of 0.234.

The magic number 0.234 does not depend on *f* and gives a universal tuning recipe to be used for example in adaptive algorithms: choose the scale of the increment so that approximately 23 % of the proposals are accepted.

The result, established under restrictive assumptions, has been empirically verified to hold much more generally, for non-iid targets and also in medium- and even low-dimensional examples with *d* as small as 5. It has also been combined with the relative efficiency loss due to mismatch between the proposal and target covariance matrices (see Roberts and Rosenthal 2001).

The simplicity of the result and easy access to the average acceptance rate makes optimal scaling the main theoretical driver in development of adaptive MCMC algorithms, and adaptive MCMC is the main application and motivation for researching optimal scaling.
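As a concrete illustration of this recipe (a sketch under the usual diminishing-adaptation convention, with illustrative names and tuning choices of our own, not a production algorithm), the proposal scale can be adapted on the fly by a Robbins–Monro-type update that raises the log-scale when the acceptance probability exceeds 0.234 and lowers it otherwise:

```python
import numpy as np

def adaptive_scale_rwm(log_pi, theta0, n_iter=20_000, target_acc=0.234, rng=None):
    """Random walk Metropolis whose log proposal scale is adapted on the fly
    towards a target acceptance rate, with diminishing step sizes."""
    rng = np.random.default_rng(rng)
    theta = np.asarray(theta0, dtype=float)
    d = theta.size
    log_scale = 0.0
    chain = np.empty((n_iter, d))
    log_p = log_pi(theta)
    for n in range(n_iter):
        prop = theta + np.exp(log_scale) * rng.standard_normal(d)
        log_p_prop = log_pi(prop)
        acc = np.exp(min(0.0, log_p_prop - log_p))     # acceptance probability
        if rng.uniform() < acc:
            theta, log_p = prop, log_p_prop
        # Robbins-Monro update; the n^{-1/2} step size enforces Diminishing Adaptation
        log_scale += (acc - target_acc) / np.sqrt(n + 1)
        chain[n] = theta
    return chain, np.exp(log_scale)

# 10-dimensional standard normal target: the adapted scale should settle
# near the theoretical optimum 2.38 / sqrt(d) for this regime
chain, scale = adaptive_scale_rwm(lambda th: -0.5 * np.sum(th**2), np.zeros(10), rng=1)
```

The diminishing \(n^{-1/2}\) adaptation steps anticipate the Diminishing Adaptation condition formalised below.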

A large body of theoretical work extends optimal scaling formally to different and more general scenarios. For example, Metropolis for smooth non-iid targets has been addressed, e.g., by Bédard (2007), and in infinite-dimensional settings by Beskos et al. (2009). Discrete and other discontinuous targets have been considered in Roberts (1998) and Neal et al. (2012). For MALA algorithms an optimal acceptance rate of 0.574 has been established in Roberts and Rosenthal (1998) and confirmed in infinite-dimensional settings in Pillai et al. (2012), along with the stepsize \(\sigma ^{2}_{d} = l^{2} d^{-1/3}.\) Hybrid Monte Carlo (see Sect. 2.3) has been analysed in a similar spirit by Beskos et al. (2013) and Betancourt et al. (2014), concluding that any acceptance rate in \([0.6,\,0.9]\) will be close to optimal and that the leapfrog step size should be taken as \(h = l \times d^{-1/4}.\) These results not only inform about optimal tuning, but also provide an efficiency ordering on the algorithms in *d* dimensions: Metropolis algorithms need \(\mathcal {O}(d)\) steps to explore the state space, while MALA and HMC need respectively \(\mathcal {O}(d^{1/3})\) and \(\mathcal {O}(d^{1/4}).\)

Further extensions include studying the transient phase before reaching stationarity (Christensen et al. 2005; Jourdain et al. 2012, 2014), the scaling of multiple-try MCMC (Bédard et al. 2012) and delayed rejection MCMC (Bédard et al. 2014), and the temperature scale of parallel tempering type algorithms (Atchadé et al. 2011b; Roberts and Rosenthal 2014). Interestingly, the optimal scaling of the pseudo-marginal algorithms discussed in Sect. 2.5, as obtained in Sherlock et al. (2014) and extended to more general settings in Doucet et al. (2012) and Sherlock (2014), suggests an acceptance rate of just 0.07.

While each of these numerous optimal scaling results gives rise, at least in principle, to an adaptive MCMC design, the pioneering and most successful algorithm is the Adaptive Metropolis of Haario et al. (2001), whose increasing popularity in applications has fuelled the development of the field.

Versions and refinements of the adaptive Metropolis algorithm (Roberts and Rosenthal 2009; Andrieu and Thoms 2008) have served well in applications and motivated much of the theoretical development. These include, among many other contributions, adaptive Metropolis, delayed rejection adaptive Metropolis (Haario et al. 2006), regional adaptation and parallel chains (Craiu et al. 2009), and the robust version of Vihola (2012) estimating the shape of the distribution rather than its covariance matrix and hence suitable for heavy tailed targets.

Analogous development of adaptive MALA algorithms in Atchadé (2006) and Marshall and Roberts (2012) and of adaptive HMC and Riemannian Manifold Monte Carlo in Wang et al. (2013) building on the adaptive scaling theory, resulted in a similar drastic mixing improvement as the original Adaptive Metropolis.

Another substantial and still unexplored area where adaptive algorithms are applied for very high dimensional and multimodal problems is model and variable selection (Nott and Kohn 2005; Richardson et al. 2010; Lamnisos et al. 2013; Ji and Schmidler 2013; Griffin et al. 2014). These algorithms can incorporate reversible jump moves (Green 1995) and are guided by scaling limits for discrete distributions as well as temperature spacing of parallel tempering to address multimodality. Successful implementations allow for fully Bayesian variable selection in models with over 20,000 variables for which otherwise only ad hoc heuristic approaches have been used in the literature.

To address the second difficulty with adaptive algorithms, several approaches have been developed to establish their theoretical underpinning. While for standard MCMC convergence in total variation and laws of large numbers are obtained almost trivially, and the effort concentrates on stronger results, like CLTs, geometric convergence, nonasymptotic analysis, and, maybe most importantly, comparison and ordering of algorithms, adaptive samplers are intrinsically difficult to analyse. The most elegant and theoretically valid strategy is to change the underlying Markovian kernel at regeneration times only (Gilks et al. 1998). Unfortunately, this is not very appealing for practitioners since regenerations are difficult to identify in more complex settings and are essentially impractically rare in high dimensions. The original Adaptive Metropolis of Haario et al. (2001) has been validated (under some restrictive additional conditions) by controlling the dependencies introduced by the adaptation and using convergence results for mixingales. The approach has been further developed in Atchadé and Rosenthal (2005) and Atchadé (2006) to verify ergodicity under weaker assumptions and to apply the mixingale approach to adaptive MALA. Another successful approach (Andrieu and Moulines 2006, refined in Saksman and Vihola 2010) rests on martingale difference approximations and martingale limit theorems to obtain, under suitable technical assumptions, versions of LLNs and CLTs. There are close links between analysing adaptive MCMC and stochastic approximation algorithms, and in particular the adaptation step can often be written as a mean field of the stochastic approximation procedure; Andrieu and Robert (2001), Atchadé et al. (2011a) and Andrieu et al. (2015) contribute to this direction of analysis. Fort et al. (2011) develop an approach where both adaptive and interacting MCMC algorithms can be treated in the same framework.
This allows addressing “external adaptation” algorithms such as the interacting tempering algorithm (a simplified version of the celebrated equi-energy sampler of Kou et al. 2006) or adaptive parallel tempering in Miasojedow et al. (2013).

Roberts and Rosenthal (2007) establish that two conditions, *Diminishing Adaptation* and *Containment*, are sufficient to guarantee that an adaptive MCMC algorithm will converge asymptotically to the correct target distribution. To this end, recall the total variation distance between two measures, defined as \(\Vert \nu (\cdot ) - \mu (\cdot )\Vert := \sup _{A\in \mathcal {F}}|\nu (A) - \mu (A)|,\) and for every Markov transition kernel \(P_{\xi },\,\xi \in \varXi \) and every starting point \(\theta \in \varTheta \) define the \(\varepsilon \) convergence function \(M_{\varepsilon }{\text {:}}\,\varTheta \times \varXi \rightarrow \mathbb {N}\) as the number of iterations needed by the kernel \(P_{\xi }\) started at \(\theta \) to come within \(\varepsilon \) of the target in total variation, i.e., \(M_{\varepsilon }(\theta ,\,\xi ) := \inf \{n \ge 1{\text {:}}\, \Vert P^{n}_{\xi }(\theta ,\,\cdot ) - \pi (\cdot )\Vert \le \varepsilon \}.\)

### **Definition 1**

(*Diminishing Adaptation*) The adaptive algorithm with starting values \(\theta ^{(0)}=\theta \) and \(\xi ^{(0)} = \xi \) satisfies Diminishing Adaptation if, in probability, \(\lim _{n \rightarrow \infty } \sup _{\theta \in \varTheta } \Vert P_{\xi ^{(n+1)}}(\theta ,\,\cdot ) - P_{\xi ^{(n)}}(\theta ,\,\cdot )\Vert = 0.\)

### **Definition 2**

(*Containment*) The adaptive algorithm with starting values \(\theta ^{(0)}=\theta \) and \(\xi ^{(0)} = \xi \) satisfies Containment, if for all \(\varepsilon > 0\) the sequence \(\{M_{\varepsilon }(\theta ^{(n)},\,\xi ^{(n)})\}_{n=0}^{\infty }\) is bounded in probability.

While Diminishing Adaptation is a standard requirement, Containment is subject to some discussion. On the one hand, it may seem difficult to verify in practice; on the other, it may appear restrictive in the context of ergodicity results under some weaker conditions (cf. Fort et al. 2011). However, it turns out (Łatuszyński and Rosenthal 2014) that if Containment is not satisfied, then the algorithm may still converge, but with positive probability it will be asymptotically less efficient than any nonadaptive ergodic MCMC scheme. Hence algorithms that do not satisfy Containment are termed AdapFail and are best avoided. Containment has been further studied in Bai et al. (2011) and is in particular implied by simultaneous geometric or polynomial drift conditions of the adaptive kernels.

Given that adaptive algorithms may be incorporated in essentially any sampling scheme, their introduction seems to be one of the most important innovations of the last two decades. However, despite substantial effort and many ingenious contributions, the theory of adaptive MCMC lags behind practice even more than may be the case in other computational areas. While theory always matters, the numerous unexpected and counterintuitive examples of transient adaptive algorithms suggest that in this area theory matters even more for healthy development.

For adaptive MCMC to become a routine tool, a clear-cut result is needed saying that under some easily verifiable conditions these algorithms are valid and perform not much worse than their nonadaptive counterpart with fixed parameters. Such a result is yet to be established and may require deeper understanding of how to construct stable adaptive MCMC, rather than aiming heavy technical artillery at algorithms currently in use without modifying them.

### 2.5 Estimated likelihoods and pseudo-marginals

When the likelihood \(\ell (\theta |y)\) involves a latent variable *z*, a natural solution is to treat *z* as an auxiliary variable and to simulate the joint distribution \(\pi (\theta ,\,z|y)\) on \(\varTheta \times \mathcal {Z}\) by a standard method, recovering simulation from the marginal density \(\pi (\cdot |y)\) as a by-product. However, when the dimension of the auxiliary variable *z* grows with the sample size, this technique may run into difficulties, as the induced MCMC algorithms are more and more likely to have convergence issues. An illustration of this case is provided by hidden Markov models, which eventually have to resort to particle filters as Markov chain algorithms become ineffective (Chopin 2007; Fearnhead and Clifford 2003). Another situation where the target density \(\pi (\cdot |y)\) cannot be directly computed is the case of the “doubly intractable” likelihood (Murray et al. 2006a), when the likelihood function \(\ell (\theta |y)\propto g(y|\theta )\) itself contains a term that is intractable, in that it makes the normalising constant \(Z(\theta ) = \int _{\mathcal {Y}} g(y|\theta )\,\mathrm{d}y\) impossible to compute.

Examples of this kind abound in Markov random fields models, as for instance for the Ising model (Murray et al. 2006b; Møller et al. 2006).

Both the approaches of Murray et al. (2006a) and Møller et al. (2006) require sampling data from the likelihood \(\ell (\theta |y),\) which limits their applicability. The latter additionally relies on an importance sampling function and may suffer from poor acceptance rates.

At iteration *n*, given the pair \((\theta ^{(n-1)},\,S_{\theta ^{(n-1)}}^{(n-1)})\) of parameter value and density estimate at this value, the proposal \((\theta ^{\prime },\,S_{\theta ^{\prime }})\) is obtained by sampling \(\theta ^{\prime } \sim q(\cdot |\theta ^{(n-1)})\) and computing \(S_{\theta ^{\prime }},\) a nonnegative unbiased estimate of \(\pi (\theta ^{\prime }|y).\) Analogously to the standard Metropolis–Hastings step, the pair \((\theta ^{\prime },\,S_{\theta ^{\prime }})\) is accepted as \((\theta ^{(n)},\,S_{\theta ^{(n)}}^{(n)})\) with probability \(1 \wedge \frac{S_{\theta ^{\prime }}\, q (\theta ^{(n-1)}|\theta ^{\prime })}{S_{\theta ^{(n-1)}}^{(n-1)}\, q (\theta ^{\prime }|\theta ^{(n-1)})}.\)

It is not difficult to verify that the bivariate chain on extended state space \(\varTheta \times \mathbb {R}_{+}\) enjoys the correct \(\pi (\theta |y)\) marginal on \(\varTheta \) and the approach is valid, see Andrieu and Roberts (2009) for details (and also Andrieu and Vihola 2015 for an abstracted account).
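The whole scheme fits in a few lines of code. In the sketch below, the unbiased estimate \(S_{\theta }\) is produced by importance sampling over the latent variable *z*; the toy model (a normal-mean model \(y = \theta + z + \text {noise}\) with latent \(z \sim N(0,1)\), for which the exact posterior is available as a check) and all names are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
y = 3.0          # single observation from the toy model y = theta + z + noise

def log_prior(theta):
    return -0.5 * theta**2                       # theta ~ N(0, 1), unnormalised

def estimate_lik(theta, m=30):
    """Unbiased Monte Carlo estimate of the 'intractable' marginal likelihood
    p(y|theta) = E_z[ N(y; theta + z, 1) ], with latent z ~ N(0, 1)."""
    z = rng.standard_normal(m)
    return np.mean(np.exp(-0.5 * (y - theta - z)**2) / np.sqrt(2 * np.pi))

n_iter = 20_000
theta, s = 0.0, estimate_lik(0.0)
chain = np.empty(n_iter)
for n in range(n_iter):
    prop = theta + rng.standard_normal()         # symmetric random walk proposal
    s_prop = estimate_lik(prop)                  # fresh estimate at the proposed value
    # pseudo-marginal ratio: the *retained* estimate s stands in for pi(theta|y)
    log_alpha = (log_prior(prop) + np.log(s_prop)) - (log_prior(theta) + np.log(s))
    if np.log(rng.uniform()) < log_alpha:
        theta, s = prop, s_prop                  # recycle S_theta' as S^{(n)}
    chain[n] = theta
```

The crucial point is that `s` is *not* refreshed when a proposal is rejected; this recycling is exactly what makes the extended chain target the correct marginal, here the conjugate posterior \(N(y/3,\,2/3)\).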

One specific instance of constructing unbiased estimators of the posterior is presented in Girolami et al. (2013) and based on random truncations of infinite series expansions. The paper also offers an excellent overview of inference methods for intractable likelihoods.

The performance of the pseudo-marginal approach depends on the quality of the estimators \(S_{\theta },\) and hence stabilising them, as well as understanding this relationship, is an active area of current development. Often \(S_{\theta }\) is constructed by importance sampling based on an auxiliary sample *z*. In particular, the improvement from using multiple samples of *z* to estimate \(\pi \) is of interest and can be assessed from Andrieu and Vihola (2015), where the efficiency of the algorithm is studied in terms of its spectral gap and CLT asymptotic variance. Sherlock et al. (2014), Doucet et al. (2012) and Sherlock (2014), on the other hand, investigate the efficiency as a function of the acceptance rate and of the variance of the noise, deriving the optimal scaling discussed in Sect. 2.4.

As an alternative to the above procedure of using estimates of the intractable likelihood to design a new Markov chain, on an extended state space, with the correct marginal, one could naively use these estimates to approximate the Metropolis–Hastings accept–reject ratio and let the Markov chain evolve in the original state space. This amounts to dropping the current realisation of \(S_{\theta }\) and obtaining a new one in each accept–reject attempt. Such a procedure is termed Monte Carlo within Metropolis (Andrieu and Roberts 2009). Unfortunately this approach does not preserve the stationary distribution, and the resulting Markov chain may not even be ergodic (Medina-Aguayo et al. 2015). If it is ergodic, the discrepancy between its stationary distribution and the target, resulting from the noisy acceptance step, must be quantified, which is a highly nontrivial task, and the bounds will rarely be tight (see also Alquier et al. 2014; Pillai and Smith 2014; Rudolf and Schweizer 2015 for related methodology and theory). The approach is nevertheless an interesting avenue since, at the price of being biased, it overcomes the mixing difficulties of the exact pseudo-marginal version.
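For contrast, a sketch of Monte Carlo within Metropolis on an assumed toy latent-variable model (\(y|\theta ,z \sim \mathcal {N}(\theta +z,\,1),\) \(z \sim \mathcal {N}(0,\,1),\) prior \(\mathcal {N}(0,\,1);\) all choices are illustrative): the only difference from the pseudo-marginal scheme is that the estimate at the current state is discarded and redrawn at every attempt, which perturbs the invariant distribution away from the exact posterior.

```python
import math, random

random.seed(2)

y, M = 1.5, 10   # toy model: y|theta,z ~ N(theta+z,1), z ~ N(0,1), prior N(0,1)

def normal_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def lik_estimate(theta):
    return sum(normal_pdf(y, theta + random.gauss(0, 1), 1.0) for _ in range(M)) / M

def prior(theta):
    return normal_pdf(theta, 0.0, 1.0)

def mc_within_metropolis(n_iter=20000, step=1.0):
    theta, chain = 0.0, []
    for _ in range(n_iter):
        prop = theta + random.gauss(0, step)
        # Both estimates are redrawn at every attempt: the current state's
        # estimate is NOT carried over, so the chain no longer targets the
        # exact posterior (the bias shrinks as M grows).
        ratio = (lik_estimate(prop) * prior(prop)) / (lik_estimate(theta) * prior(theta))
        if random.random() < ratio:
            theta = prop
        chain.append(theta)
    return chain

chain = mc_within_metropolis()
```

The code differs from a pseudo-marginal sampler by a single line, yet the theoretical guarantees discussed above no longer apply.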

Design and understanding of pseudo-marginal algorithms is a direction of dynamic methodological development that in the coming years will be further fuelled not only by complex models with intractable likelihoods, but also by the need for MCMC algorithms suited to Big Data. In this context the likelihood function cannot be evaluated for the whole dataset even in the *iid* case, simply because computing the long product of individual likelihoods is infeasible. Several Big Data MCMC approaches have already been considered in Welling and Teh (2011), Korattikara et al. (2013), Teh et al. (2014), Bardenet et al. (2014), Maclaurin and Adams (2014), Minsker et al. (2014), Quiroz et al. (2014) and Strathmann et al. (2015).

### 2.6 Particle MCMC

While we refrain from covering particle filters here, since others in this volume (Beskos et al. 2015) focus on that technique, a recent advance at the interface between MCMC, pseudo-marginal methods, and particle filtering is the notion of particle MCMC (or *pMCMC*), developed by Andrieu et al. (2011). This innovation is indeed rather similar to the pseudo-marginal approach, taking advantage of the state-space models and auxiliary variables used by particle filters. It differs from standard particle filters in that it targets (mostly) the marginal posterior distribution of the parameters.

At iteration *t*, a new value \(\theta ^{\prime }\) of \(\theta \) is proposed from an arbitrary transition kernel \(\mathfrak {h}(\cdot |\theta ^{(t)})\) and then a new value of the latent series \(x_{0:T}^{\prime }\) is generated from a particle filter approximation of \(p(x_{0:T}|\theta ^{\prime },\,y_{1:T}).\) Since the particle filter returns as a by-product (Del Moral et al. 2006) an unbiased estimator \(\hat{q}(y_{1:T}|\theta ^{\prime })\) of the marginal posterior of \(y_{1:T},\) this estimator can be used as such in the Metropolis–Hastings ratio
$$\begin{aligned} \frac{\hat{q}\left( y_{1:T}|\theta ^{\prime }\right) \pi \left( \theta ^{\prime }\right) \mathfrak {h}\left( \theta ^{(t)}|\theta ^{\prime }\right) }{\hat{q}\left( y_{1:T}|\theta ^{(t)}\right) \pi \left( \theta ^{(t)}\right) \mathfrak {h}\left( \theta ^{\prime }|\theta ^{(t)}\right) }, \end{aligned}$$
where \(\pi (\cdot )\) denotes the prior density.
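A minimal particle marginal Metropolis–Hastings sketch, under an assumed linear-Gaussian state-space model \(x_{t} = \phi x_{t-1} + \mathcal {N}(0,\,1),\) \(y_{t} = x_{t} + \mathcal {N}(0,\,1),\) so that the Kalman filter gives the exact marginal likelihood against which the bootstrap particle filter estimate can be checked. Model, prior and tuning constants are all illustrative assumptions.

```python
import math, random

random.seed(3)

def simulate(phi, T):
    # Simulate y_{1:T} from the linear-Gaussian state-space model.
    x, ys = 0.0, []
    for _ in range(T):
        x = phi * x + random.gauss(0, 1)
        ys.append(x + random.gauss(0, 1))
    return ys

def kalman_loglik(phi, ys):
    # Exact marginal log-likelihood, used only to check the particle estimate.
    m, P, ll = 0.0, 0.0, 0.0
    for y in ys:
        mp, Pp = phi * m, phi * phi * P + 1.0      # predict
        S = Pp + 1.0                               # innovation variance
        ll += -0.5 * (math.log(2 * math.pi * S) + (y - mp) ** 2 / S)
        K = Pp / S                                 # update
        m, P = mp + K * (y - mp), (1.0 - K) * Pp
    return ll

def pf_loglik(phi, ys, N=200):
    # Bootstrap particle filter; exponentiating the returned value gives an
    # unbiased estimator of the marginal likelihood (Del Moral et al. 2006).
    parts = [0.0] * N
    ll = 0.0
    for y in ys:
        parts = [phi * x + random.gauss(0, 1) for x in parts]   # propagate
        ws = [math.exp(-0.5 * (y - x) ** 2) for x in parts]     # weight
        ll += math.log(sum(ws) / N) - 0.5 * math.log(2 * math.pi)
        parts = random.choices(parts, weights=ws, k=N)          # resample
    return ll

def pmmh(ys, n_iter=400, step=0.15):
    phi, ll = 0.5, pf_loglik(0.5, ys)
    chain = []
    for _ in range(n_iter):
        prop = phi + random.gauss(0, step)   # theta' ~ h(.|theta)
        if abs(prop) < 1.0:                  # flat prior on (-1, 1)
            ll_prop = pf_loglik(prop, ys)    # fresh unbiased estimate
            if math.log(random.random()) < ll_prop - ll:
                phi, ll = prop, ll_prop      # otherwise the estimate is recycled
        chain.append(phi)
    return chain

ys = simulate(0.7, 50)
chain = pmmh(ys)
phi_mean = sum(chain) / len(chain)
```

The recycling of the log-likelihood estimate `ll` at the current state is exactly the pseudo-marginal mechanism described above.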

This approach is being used increasingly in complex dynamic models like those found in signal processing (Whiteley et al. 2010), dynamical systems like the PDEs in biochemical kinetics (Wilkinson 2011b) and probabilistic graphical models (Lindsten et al. 2014). An extension to approximating the sequential filtering distribution is found in Chopin et al. (2013).

### 2.7 Parallel MCMC

Since MCMC relies on local updating based on the current value of a Markov chain, opportunities for exploiting parallel resources, either CPU or GPU, would seem quite limited. In fact, the possibilities reach far beyond the basic notion of running independent or coupled MCMC chains on several processors. For instance, Craiu and Meng (2005) construct parallel antithetic coupling to create negatively correlated MCMC chains (see also Frigessi et al. 2000), while Craiu et al. (2009) use parallel exploration of the sample space to tune an adaptive MCMC algorithm. Jacob et al. (2011) exploit GPU facilities to improve, by Rao-Blackwellisation, the Monte Carlo approximations produced by a Markov chain, even though the parallelisation does not improve the convergence of the chain. See also Lee et al. (2009) and Suchard et al. (2010) for more detailed contributions on the appeal of using GPUs towards massive parallelisation, and Wilkinson (2005) for a general survey on the topic.

Another recently-explored direction is “prefetching”. Based on Brockwell (2006), this approach computes in parallel the \(2^{2},\,2^{3},\ldots , 2^{k}\) values of the posterior that will be needed \(2,\,3,\ldots ,k\) sweeps ahead, by simulating the possible “futures” of the Markov chain according to whether the next *k* proposals are accepted or not. Running a regular Metropolis–Hastings algorithm then means building a decision tree back to the current iteration and drawing \(2,\,3,\ldots ,k\) uniform variates to go down the tree to the appropriate branch. As noted by Brockwell (2006), “in the case where one can guess whether or not acceptance probabilities will be ‘high’ or ‘low’, the tree could be made deeper down ‘high’ probability paths and shallower in the ‘low’ probability paths.” This idea is exploited by Angelino et al. (2014) in creating “speculative moves” that consider the reject branch of the prefetching tree more often than not, based on some preliminary or dynamic evaluation of the acceptance rate. Using a fast but close-enough approximation to the true target (and a fixed sequence of uniforms) may also produce a “single most likely path” along which prefetched simulations can be run. The basic idea is thus to run simulations and costly likelihood computations on many parallel processors along a path that has been prefetched for its high approximate likelihood. There are obviously instances where this speculative simulation is not helpful, because the actual chain with the genuine target ends up following another path. Angelino et al. (2014) actually go further by constructing sequences of approximations for the precomputations, proposing to subsample the original data and to use a normal approximation to the difference of the log (sub-)likelihoods. See Strid (2010) for related ideas.
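The basic prefetching mechanism can be sketched as follows on a toy target (all choices illustrative): expand the full binary tree of accept/reject futures, evaluating the posterior at every node (the step that would run in parallel), then walk down the tree with the uniform variates. The walk reproduces the sequential chain exactly, which the check at the end confirms.

```python
import math, random

random.seed(4)

def log_post(theta):
    # toy target: standard normal log-density (up to a constant)
    return -0.5 * theta ** 2

def plain_mh(x0, innovations, uniforms, step=1.0):
    # Ordinary sequential Metropolis-Hastings driven by fixed random numbers.
    x = x0
    for e, u in zip(innovations, uniforms):
        prop = x + step * e
        if math.log(u) < log_post(prop) - log_post(x):
            x = prop
    return x

def prefetch_mh(x0, innovations, uniforms, step=1.0):
    # Breadth-first expansion of all 2^k accept/reject futures; in an actual
    # implementation, the log_post evaluations of each level run in parallel.
    levels = [[x0]]
    for e in innovations:
        nxt = []
        for x in levels[-1]:
            nxt.append(x)              # reject branch: state unchanged
            nxt.append(x + step * e)   # accept branch: move to the proposal
        levels.append(nxt)
    # Walk down the precomputed tree with the uniform variates.
    idx, x = 0, x0
    for d, (e, u) in enumerate(zip(innovations, uniforms)):
        prop = x + step * e
        accept = math.log(u) < log_post(prop) - log_post(x)
        idx = 2 * idx + (1 if accept else 0)
        x = levels[d + 1][idx]
    return x

k = 6
innovations = [random.gauss(0, 1) for _ in range(k)]
uniforms = [random.random() for _ in range(k)]
sequential = plain_mh(0.0, innovations, uniforms)
prefetched = prefetch_mh(0.0, innovations, uniforms)
```

The speculative variants discussed above amount to expanding only the branches deemed likely, rather than the whole tree.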

A different use of parallel capabilities is found in Calderhead (2014). At each iteration of Calderhead’s algorithm, *N* replicas are generated, rather than one in traditional Metropolis–Hastings. The Markov chain actually consists of *N* components, from which one component is selected at random as a seed for the next proposal. This approach can be seen as a special type of data augmentation (Tanner and Wong 1987), where the index of the selected component is an auxiliary variable. The neat trick in the proposal (and the reason for its efficiency gain) is that the stationary distribution of the auxiliary variable can be determined and hence used *N* times in updating the vector of *N* components. An interesting feature of this approach appears when the original Metropolis–Hastings algorithm is expressed as a finite state space Markov chain on the set of indices \(\{1,\ldots ,N\}.\) Conditional on the values of the *N*-dimensional vector, the stationary distribution of that sub-chain is no longer uniform. Hence, picking *N* indices from this stationary distribution helps in selecting the most appropriate values, which explains why the rejection rate decreases. The paper indeed evaluates the impact of increasing the number of proposals in terms of effective sample size, acceptance rate, and mean squared jump distance. Since this proposal is an almost free bonus resulting from using *N* processors, it seems worth investigating and comparing with more complex parallel schemes.

*m* subsets and taking a power 1/*m* of the prior in each case. (This may induce issues about impropriety.) However, the subdivision is arbitrary and can thus be implemented in cases other than the fairly restrictive iid setting. Because each (subsample) nonparametric estimate involves *T* terms, the resulting overall estimate contains *Tm* terms, and the authors suggest using an independent Metropolis sampler to handle this complexity. This is in fact necessary for producing a final sample from the (approximate) true posterior distribution.

Four possible shortcomings of this approach have been pointed out:

- (1) Curse of dimensionality in the number of parameters *d*;
- (2) Curse of dimensionality in the number of subsets *m*;
- (3) Tail degeneration;
- (4) Support inconsistency and mode misspecification.

While handling *Tm* terms, corresponding to a product of *m* sums of *T* terms, sounds self-defeating, Neiswanger et al. (2013) use a clever device to avoid the combinatorial explosion, namely operating on one component at a time. Having non-manageable targets is not such an issue in the post-MCMC era. Point 3 is formally correct, in that the kernel tail behaviour induces the kernel estimate tail behaviour, most likely disconnected from the true target tail behaviour, but this feature is true for any non-parametric estimate, even for the Weierstrass transform defined below, and hence perhaps not so relevant in practice. In fact, by lifting the tails up, the simulation from the subposteriors should help in visiting the tails of the true target. Finally, point 4 does not seem to be life-threatening. Assuming that the true target can be computed up to a normalising constant, the value of the target for every simulated parameter could be computed, eliminating those outside the support of the product and highlighting modal regions.
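The sub-posterior factorisation can be illustrated in the fully Gaussian case, where the product of subposteriors is available in closed form (precisions add and means are precision-weighted) and recovers the full-data posterior exactly. The model and shard layout below are illustrative assumptions, with each shard's sub-posterior using the prior raised to the power 1/*m*.

```python
import random

random.seed(5)

# Full model: y_i ~ N(theta, 1) with prior theta ~ N(0, 1), split into m shards.
n, m = 900, 3
data = [2.0 + random.gauss(0, 1) for _ in range(n)]
shards = [data[j::m] for j in range(m)]

def subposterior(shard):
    # Conjugate sub-posterior using the prior raised to the power 1/m,
    # i.e. N(0, m): precision = 1/m + |shard|, mean = sum(shard)/precision.
    prec = 1.0 / m + len(shard)
    return sum(shard) / prec, prec

# Product of the m Gaussian subposteriors: precisions add and the combined
# mean is the precision-weighted average of the subposterior means.
means, precs = zip(*(subposterior(s) for s in shards))
prec_comb = sum(precs)
mean_comb = sum(mu * p for mu, p in zip(means, precs)) / prec_comb

# Full-data posterior, for comparison: the recovery is exact in this case.
prec_full = 1.0 + n
mean_full = sum(data) / prec_full
```

Nonparametric combinations such as the kernel product of Neiswanger et al. (2013) can be read as the general-target analogue of this Gaussian recombination.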

The Weierstrass transform of a density *f* is the convolution of *f* with an arbitrary kernel *K*. Wang and Dunson (2013) propose to simulate from the product of the Weierstrass transforms, using a multi-tiered Gibbs sampler. Hence, the parameter is only simulated once, and from a controlled kernel, while the random effects from the convolution are associated with each subposterior. While the method requires coordination between the parallel threads, the components of the target are separately computed on single threads. The clearest perspective on the Weierstrass transform may possibly be the rejection sampling version, where simulations from the subpriors are merged together into a normal proposal on \(\theta ,\) to be accepted with a probability depending on the subprior simulations.

VanDerwerken and Schmidler (2013) keep with the spirit of parallel MCMC papers like consensus Bayes (Scott et al. 2013), embarrassingly parallel MCMC (Neiswanger et al. 2013) and Weierstrass MCMC (Wang and Dunson 2013), namely that the computation of the likelihood can be broken into batches and MCMC run over those batches independently. The idea of the authors is to replace an exploration of the whole space operated via a single Markov chain (or by parallel chains acting independently, which all have to “converge”) with parallel and independent explorations of parts of the space by separate Markov chains. The motivation is that “small is beautiful”: it takes less time to explore each set of the partition, hence to converge, and, more importantly, each chain can work in parallel with the others. More specifically, given a partition of the space into sets \(A_{i}\) with posterior weights \(w_{i},\) parallel chains are associated with targets equal to the original target restricted to those \(A_{i}\)s. This is therefore an MCMC version of partitioned sampling. With regard to the shortcomings of the single- and parallel-chain approaches mentioned above, the authors consider that there does not need to be a bijection between the partition sets and the chains, in that a chain can move across partitions and thus contribute to several integral evaluations simultaneously. It is somewhat unclear (a) whether or not this impacts ergodicity (it all depends on the way the chain is constructed, i.e., against which target), as it could lead to an over-representation of some boundary regions, and (b) whether or not it improves the overall convergence properties of the chain(s). A more delicate issue with the partitioned MCMC approach lies with the partitioning itself. Indeed, in a complex and high-dimensional model, the construction of an appropriate partition is a challenge in itself, as we often have no prior idea where the modal areas are.
Waiting for a correct exploration of the modes is indeed faster than waiting for crossings between modes, provided all modes are represented and the chain for each partition set \(A_{i}\) has enough energy to explore this set. It actually sounds unlikely that a target with huge gaps between modes will see a considerable improvement from the partitioned version when the partition sets \(A_{i}\) are selected on the fly, because some of the boundaries between the partition sets may be hard to reach with an off-the-shelf proposal. A last comment about this innovative paper is that the adaptive construction of the partition has much in common with Wang–Landau schemes (Wang and Landau 2001; Lee et al. 2005; Atchadé and Liu 2010; Jacob and Ryder 2014).

## 3 ABC and others, exactly delivering an approximation

Motivated by highly complex models for which MCMC algorithms and other Monte Carlo methods were far too inefficient, approximate methods have emerged where the output cannot be considered as simulations from the genuine posterior, even under idealised situations of infinite computing power. These methods include the ABC techniques described in more detail below, but also variational Bayes (Jaakkola and Jordan 2000), empirical likelihood (Owen 2001), integrated nested Laplace approximation (INLA) (Rue et al. 2009) and other solutions that rely on pseudo-models, on summarised versions of the data, or on both. It is quite important to signal this evolution, as we think that it may be a central feature of computational Bayesian statistics in the coming years. From a statistical perspective, it also induces a somewhat paradoxical situation in which loss of information is balanced by improvement in precision, for a given computational budget. This perspective is not only interesting at the computational level but forces us (as statisticians) to re-evaluate in depth the nature of a statistical model, and could produce a paradigm shift in the near future by giving a brand new meaning to George Box’s motto that “all models are wrong”.

### 3.1 ABC per se

It seems important to discuss ABC in this partial tour of Bayesian computational techniques because (a) for some Bayesians, ABC provides the only manageable approach to their model; (b) ABC delivers samples in the parameter space that are exact simulations from a posterior of some kind (Wilkinson 2013), \(\pi ^{\mathrm{ABC}}(\theta |\mathbf {y}_{0}),\) if not from the original posterior \(\pi (\theta |\mathbf {y}_{0}),\) where \(\mathbf {y}_{0}\) denotes the data in this section; (c) ABC may be more intuitive to some researchers outside statistics, as it entails simulating from the inferred model, i.e., going forward from parameter to data, rather than backward, from data to parameter, as in traditional Bayesian inference; (d) ABC can be merged with MCMC algorithms; and (e) ABC allows drawing inference directly from summaries of the data rather than the data itself.

ABC techniques play the same role in the 2000s that MCMC methods played in the 1990s, in that they handle new models for which earlier (e.g., MCMC) algorithms were at a loss, just as the latter were able to handle models beyond the reach of regular Monte Carlo approaches, such as latent variable models (Tanner and Wong 1987; Diebolt and Robert 1994; Richardson and Green 1997). New models for which ABC unlocked the gate include Markov random fields, Kingman’s coalescent for phylogeographical data, likelihood models with an intractable normalising constant, and models defined by their quantile function or their characteristic function. While the ABC approach first appeared to be a “quick-and-dirty” solution, to be considered only until more elaborate representations could be found, those algorithms have progressively been incorporated into the statistician’s toolbox as a novel form of generic nonparametric inference handling partly-defined statistical models. They are therefore attractive as much for this reason as for being handy computational solutions when everything else fails.

A statistically intriguing feature of those methods is that they customarily require, for greater efficiency, replacing the data with (much) smaller-dimension summaries, or summary statistics, because of the complexity of the former. In almost every case calling for ABC, those summaries are not sufficient statistics and the method thus implies from the start a loss of statistical information, at least at a formal level, since relying on the raw data is out of the question and the additional information it provides is therefore moot. This imposed reduction of the statistical information raises many relevant questions, from the choice of summary statistics (Blum et al. 2013) to the consistency of the ensuing inference (Robert et al. 2011).

Although it has now diffused into a wide range of applications, the technique of ABC was first introduced by and for population genetics (Tavaré et al. 1997; Pritchard et al. 1999) to handle ancestry models driven by Kingman’s coalescent and with strictly intractable likelihoods (Beaumont 2010). The likelihood function of such genetic models is indeed “intractable” in the sense that, while derived from a fully defined and parameterised probability model, this function cannot be computed (at all, or within a manageable time) for a single value of the parameter and for the given data. Bypassing the original example to avoid getting mired in the details of population genetics, examples of intractable likelihoods include densities with intractable normalising constants, i.e., \(f (\mathbf {y}|\theta ) = g(\mathbf {y}|\theta )/Z(\theta )\) such as in Potts (1952) and auto-exponential (Besag 1972) models, and pseudo-likelihood models (Cucala et al. 2009).

### *Example 1*

A very simple illustration of an intractable likelihood is provided by Bayesian inference based on the median and median absolute deviation statistics of a sample from an arbitrary location-scale family, \(y_{1},\ldots ,y_{n}\mathop {\sim }\limits ^{\mathrm{iid}} \sigma ^{-1} g(\sigma ^{-1}\{y-\mu \}),\) as the joint distribution of these statistics is not available in closed form.

ABC relies on the founding intuition of *inverse probability* (Rubin 1984): data \(\mathbf {y}\) simulated conditional on values of the parameter close to the “true” value should look more similar to the actual data \(\mathbf {y}_{0}\) than data simulated conditional on values of the parameter far from the “true” value. ABC actually involves an acceptance/rejection step, in that parameters simulated from the prior are accepted only when the distance between summaries of the simulated and observed data falls below a tolerance, i.e., when \(\rho (S(\mathbf {y}),\,S(\mathbf {y}_{0}))\le \epsilon .\)

When estimates *S*(*y*) of the parameter \(\theta \) are available, they are good candidates for these summaries. In the ABC literature, they are called *summary statistics*, a term that does not impose any constraint on their form and hence leaves open the question of performance, as discussed in Marin et al. (2012) and Blum et al. (2013). A more practical version of the ABC algorithm is shown in Algorithm 3 below, with a different output for each choice of the summary statistic. We stress in this version of the algorithm the construction of the tolerance \(\epsilon \) as a quantile of the simulated distances \(\rho (S(\mathbf {y}^{0}),\,S(\mathbf {y}^{(t)})),\) rather than an additional parameter of the method.
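A minimal sketch of this quantile-tolerance ABC rejection scheme, applied to the location-scale setting of Example 1 with the median and MAD as summary statistics. The normal model, flat priors and tuning constants are illustrative assumptions.

```python
import math, random, statistics

random.seed(6)

# Observed sample from the location-scale model with mu = 3, sigma = 2.
n = 100
y0 = [3.0 + 2.0 * random.gauss(0, 1) for _ in range(n)]

def summaries(y):
    med = statistics.median(y)
    mad = statistics.median([abs(v - med) for v in y])  # median absolute deviation
    return med, mad

s0 = summaries(y0)

def distance(s, t):
    return math.hypot(s[0] - t[0], s[1] - t[1])

# ABC rejection: simulate (theta, y) from prior x model and keep the draws
# whose summaries fall within the `keep`-quantile of the simulated distances.
N, keep = 20000, 0.01
sims = []
for _ in range(N):
    mu = random.uniform(-10, 10)      # flat priors, purely illustrative
    sigma = random.uniform(0.1, 10)
    y = [mu + sigma * random.gauss(0, 1) for _ in range(n)]
    sims.append((distance(summaries(y), s0), mu, sigma))

sims.sort()                            # epsilon = the keep-quantile of distances
accepted = sims[: int(keep * N)]
post_mu = sum(s[1] for s in accepted) / len(accepted)
post_sigma = sum(s[2] for s in accepted) / len(accepted)
```

Note that the tolerance is never set directly: it is the empirical quantile of the simulated distances, as stressed above.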

An immediate question about this approximate algorithm is how closely it remains connected with the original posterior distribution and, when it does not, where it draws its legitimacy. A first remark in this connection is that it constitutes at best a convergent approximation to the posterior distribution \(\pi (\theta |S(y_{0})).\) It can easily be seen that ABC generates outcomes from a genuine posterior distribution when the data is randomised with scale \(\epsilon \) (Wilkinson 2013; Fearnhead and Prangle 2012). This interpretation indicates a decrease in the precision of the inference but does not provide a universal validation of the method. A second perspective on the ABC output is that it is based on a nonparametric approximation of the sampling distribution (Blum 2010; Blum and François 2010), connected with both indirect inference (Drovandi et al. 2011) and *k*-nearest neighbour estimation (Biau et al. 2014). While a purely Bayesian nonparametric analysis of this aspect has not yet emerged, this brings additional, if cautious, support for the method.

### *Example 2*

The discrepancy can however be completely eliminated by post-processing: Fig. 2 reproduces Fig. 1 by comparing the histograms of an ABC sample with the version corrected by Beaumont et al.’s (2002) local regression, as the latter is essentially equivalent to a regular Gibbs output.

Barber et al. (2015) study the rate of convergence for ABC algorithms through the mean square error when approximating a posterior moment. They show the convergence rate is of order \(O(N^{-4/(q+4)}),\) where *q* is the dimension of the ABC summary statistic, associated with an optimal tolerance of order \(O(N^{-1/(q+4)}).\) Those rates are connected with the nonparametric nature of ABC, as already suggested in the earlier literature: for instance, Blum (2010) links ABC with standard kernel density non-parametric estimation and finds a tolerance (re-expressed as a bandwidth) of order \(O(N^{-1/(q+4)})\) and an rmse of order \(O(N^{-2/(q+4)}),\) while Fearnhead and Prangle (2012) obtain similar rates for noisy ABC. See also Calvet and Czellar (2014). Similarly, Biau et al. (2014) obtain precise convergence rates for ABC interpreted as a *k*-nearest-neighbour estimator.

- (1) the standard ABC–MCMC (with *N* replicates of the simulated pseudo-data for each simulated parameter value),
- (2) versions involving simulations of the replicates repeated at the subsequent step,
- (3) use of a stopping rule in the generation of the pseudo-data, and
- (4) a “gold-standard algorithm” based on the (unavailable) measure of an \(\epsilon \) ball around the data.

Simulating a *random* number of auxiliary variables turns out to be sufficient to produce geometric ergodicity. We note that this result does not contradict the parallel result of Bornn et al. (2014), who establish that there is no efficiency gain in simulating \(N>1\) replicates of the pseudo-data, since no randomness is involved in that approach. However, the latter result only applies to functions with finite variances.

Indeed, Robert et al. (2011) pointed out the potential irrelevance of ABC-based posterior probabilities, due to the possible ancillarity (for model choice) of summary statistics, as also explained in Didelot et al. (2011). Marin et al. (2014) consider for instance the comparison of normal and Laplace fits on both normal and Laplace samples and show that using the sample mean and sample variance as summary statistics produces Bayes factors converging to values near 1, instead of the consistent 0 and \(+\infty .\)

Marin et al. (2014) analyse this phenomenon with the aim of producing a necessary and sufficient consistency condition on summary statistics. Quite naturally, the summaries that are acceptable must display different behaviour under both models, in the guise of ranges of means \(\mathbb {E}_{\theta }[S(\mathbf {y}_{0})]\) that do not intersect for the two models. (In the counter-example of the normal–Laplace test, the expectations of the sample mean and variance can be recovered under both models.) This characterisation then leads to a practical asymptotic test validating summary statistics, and to the realisation that a larger number of summaries helps in achieving this goal (while degrading the estimated tolerance). More importantly, it shows that the reduction of information represented by an ABC approach may prevent discriminating between models, at least when trying to recover the Bayes factor. In the end, this is a natural consequence of simplifying the description of both the data and the model, and can be found in most limited information settings.

### 3.2 More fish in the alphabet soup

Besides ABC, approximation techniques have spread wide and far towards analysing more complex or less completely defined models. Rather than a source of confusion, this multiplicity of available approximations is beneficial both to the understanding of the underlying model and to the calibration of those different methods.

Given the current approximation at iteration *t*, the *t*th step in the EP algorithm goes as follows:

- (1) Select \(1\le j\le k\) at random;
- (2) Define the marginal
$$\begin{aligned} \nu _{-j}\left( \theta |\lambda _{t}\right) \propto \frac{\nu (\theta |\lambda _{t})}{\nu _{j}(\theta |\lambda _{t})}; \end{aligned}$$
- (3) Update the hyperparameter \(\lambda _{t}\) by solving
$$\begin{aligned} \lambda _{t+1} = \mathop {\hbox {argmin}}\limits _{\lambda } \mathrm{KL}\left\{ \pi _{j}(\theta )\nu _{-j}\left( \theta |\lambda _{t}\right) ,\,\nu (\theta |\lambda )\right\} ; \end{aligned}$$
- (4) Update \(\nu _{j}(\theta |\lambda _{t})\) as
$$\begin{aligned} \nu _{j}\left( \theta |\lambda _{t+1}\right) \propto \frac{ \nu (\theta |\lambda _{t+1}) }{ \nu _{-j}(\theta |\lambda _{t}) }. \end{aligned}$$
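When the approximating family is Gaussian, the KL minimisation in step (3) reduces to matching the first two moments of the tilted distribution \(\pi _{j}(\theta )\nu _{-j}(\theta ).\) A minimal sketch with probit-style factors \(\pi _{j}(\theta ) = \varPhi (y_{j}\theta )\) (an illustrative assumption), computing the tilted moments by brute-force numerical integration on a grid so that EP can be compared against the exact posterior moments:

```python
import math

def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

ys = [1, 1, -1, 1]   # probit-style factors pi_j(theta) = Phi(y_j * theta)
grid = [i * 0.01 - 6.0 for i in range(1201)]

def moments(logw):
    # Mean and variance of an unnormalised log-density tabulated on the grid.
    top = max(logw)
    w = [math.exp(v - top) for v in logw]
    Z = sum(w)
    mean = sum(t * v for t, v in zip(grid, w)) / Z
    var = sum((t - mean) ** 2 * v for t, v in zip(grid, w)) / Z
    return mean, var

# Gaussian sites in natural parameters (b_j = prec*mean, r_j = prec),
# with the N(0,1) prior folded into the global approximation (B, R).
b = [0.0] * len(ys)
r = [0.0] * len(ys)
B, R = 0.0, 1.0

for _ in range(10):                     # a few EP sweeps
    for j, yj in enumerate(ys):
        B_c, R_c = B - b[j], R - r[j]   # cavity nu_{-j}
        # KL minimisation over the Gaussian family = moment matching of the
        # tilted distribution pi_j(theta) * nu_{-j}(theta), done on the grid.
        logw = [math.log(Phi(yj * t)) + B_c * t - 0.5 * R_c * t * t for t in grid]
        mean, var = moments(logw)
        R_new, B_new = 1.0 / var, mean / var
        b[j], r[j] = B_new - B_c, R_new - R_c   # updated site nu_j
        B, R = B_new, R_new

ep_mean, ep_var = B / R, 1.0 / R

# Exact posterior moments on the same grid, for comparison.
logw = [sum(math.log(Phi(yj * t)) for yj in ys) - 0.5 * t * t for t in grid]
exact_mean, exact_var = moments(logw)
```

In practice the tilted moments are computed analytically or by quadrature per site; the grid here simply keeps the sketch self-contained.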

While different approximations keep being developed and tested, with arguments ranging from efficient programming, to avoiding simulations, to the ability to deal with more complex structures, their common drawback is the overall incapacity to assess the amount of approximation involved. Bootstrap evaluations can be attempted in the simplest cases but cannot be extended to more realistic situations.

## 4 Optimisation in modern Bayesian computation

Optimisation methodology for high-dimensional maximum-a-posteriori (MAP) estimation is another area of Bayesian computation that has received considerable attention in recent years, particularly for problems related to machine learning, signal processing and computer vision. One reason for this is that for many Bayesian models optimisation is significantly more computationally tractable than integration. This has generated a lot of interest in MAP estimators, especially for applications involving very high-dimensional parameter spaces or tight computing time constraints, for which calculating other summaries of the posterior distribution is not feasible. Here we review some of the major breakthroughs in this topic, which originated mainly outside the statistics community. We focus on developments related to high-dimensional convex optimisation, though many of the techniques discussed below are also useful for non-convex optimisation. In particular, in Sect. 4.1 we concentrate on *proximal optimisation algorithms*, a powerful class of iterative methods that exploit tools from convex analysis, monotone operator theory and the theory of non-expansive mappings to construct carefully designed fixed-point schemes. We refer the reader to the excellent book by Bauschke and Combettes (2011) for the mathematics underpinning proximal optimisation algorithms, and to the recent tutorial papers by Combettes and Pesquet (2011), Cevher et al. (2014) and Parikh and Boyd (2014) for an overview of the field and applications to signal processing and machine learning.

However, we do think it is vital to insist that, at the same time as asserting that modern optimisation methodology represents a much-underused opportunity in Bayesian inference, in its raw form it inevitably fails to deliver essential elements of the Bayesian paradigm. The vision is not to deliver a point estimate of an unknown structure, but the full richness of Bayesian inference in its coherence, its proper treatment of uncertainty, its intrinsic treatment of model uncertainty, and so on. Bayesian statistics does not boil down to optimisation with penalisation (Lange et al. 2014). We need to express the uncertainty associated with decisions and estimation, stemming from the stochastic nature of the data, and our lack of knowledge about relevant mechanisms.

The challenge is to use the awesome capacity of fast optimisation in a high-dimensional parameter space to focus on local regions of that space where a combination of analytic and numerical investigation can deliver at least approximations to full posterior distributions and derived quantities. The community has barely risen to this challenge, with only isolated examples such as the discussion in Green (2015) of a problem in unlabelled shape analysis. However, the growing community of INLA (Rue et al. 2009) users may bring a heightened awareness of such possibilities, along with its efficient code (Schrödle and Held 2011; Muff et al. 2013). Another promising research area is to use mathematical and algorithmic tools from convex optimisation to design more efficient high-dimensional MCMC algorithms (Pereyra 2015).

### 4.1 Proximal algorithms

Similarly to many other computational methodologies that are widely used nowadays, proximal algorithms were first proposed several decades ago by Moreau (1962), Martinet (1970) and Rockafellar (1976), and regained attention recently in the context of large-scale inverse problems and “big data”.

Consider computing the MAP estimator \(\hat{{\varvec{\theta }}}_{MAP} = \mathop {\hbox {argmin}}\limits _{{\varvec{\theta }}\in \mathbb {R}^{n}} g({\varvec{\theta }}),\) where *g* belongs to the class \(\varGamma _{0}(\mathbb {R}^{n})\) of lower semicontinuous convex functions from \(\mathbb {R}^{n}\) to \((-\infty ,\,+\infty ].\) Notice that *g* may be non-differentiable and take the value \(g({\varvec{\theta }}) = +\infty ,\) reflecting constraints on the parameter space. In order to introduce proximal algorithms we first recall the following standard definitions and results from convex analysis: we say that \({\varvec{\varphi }}\in \mathbb {R}^{n}\) is a subgradient of *g* at \({\varvec{\theta }}\in \mathbb {R}^{n}\) if it satisfies \((\mathbf {u}-{\varvec{\theta }})^{T}{\varvec{\varphi }}+ g({\varvec{\theta }}) \le g(\mathbf {u}),\,\forall \mathbf {u}\in \mathbb {R}^{n}.\) The set of all such subgradients defines the subdifferential set \(\partial g({\varvec{\theta }}),\) and \(\hat{{\varvec{\theta }}}_{MAP}\) is a minimiser of *g* if and only if \(\varvec{0} \in \partial g(\hat{{\varvec{\theta }}}_{MAP}).\) The (convex) conjugate of \(g \in \varGamma _{0}(\mathbb {R}^{n})\) is the function \(g^{*} \in \varGamma _{0}(\mathbb {R}^{n})\) defined as \(g^{*}({\varvec{\varphi }}) = \sup _{\mathbf {u}\in \mathbb {R}^{n}} \mathbf {u}^{T}{\varvec{\varphi }}- g(\mathbf {u}).\) The subgradients of *g* and \(g^{*}\) satisfy the property \({\varvec{\varphi }}\in \partial g({\varvec{\theta }}) \Leftrightarrow {\varvec{\theta }}\in \partial g^{*}({\varvec{\varphi }}).\)

### **Property 1**

The proximity mapping of *g* is related to its subdifferential by the inclusion \(\{{\varvec{\theta }}-{\text {prox}}^{\lambda }_{g}({\varvec{\theta }})\}/\lambda \in \partial g\{{\text {prox}}^{\lambda }_{g}({\varvec{\theta }})\},\) which collapses to \(\nabla g\{{\text {prox}}^{\lambda }_{g}({\varvec{\theta }})\}\) when \(g \in \mathcal {C}^{1}.\) As a result, for any \(\lambda > 0,\) the minimiser of *g* verifies the fixed-point equation \({\varvec{\theta }}= {\text {prox}}^{\lambda }_{g}({\varvec{\theta }}).\)

### **Property 2**

Proximity mappings are firmly non-expansive; that is, \(\Vert {\text {prox}}^{\lambda }_{g}({\varvec{\theta }})-{\text {prox}}^{\lambda }_{g}(\mathbf {u})\Vert ^{2} \le ({\varvec{\theta }}-\mathbf {u})^{T}\{{\text {prox}}^{\lambda }_{g}({\varvec{\theta }})-{\text {prox}}^{\lambda }_{g}(\mathbf {u})\},\,\forall {\varvec{\theta }},\,\mathbf {u}\in \mathbb {R}^{n}.\)

### **Property 3**

The proximity mappings of *g* and its conjugate \(g^{*}\) are related by Moreau’s decomposition formula: \({\varvec{\theta }}= {\text {prox}}_{g}^{\lambda }({\varvec{\theta }}) + \lambda {\text {prox}}^{1/\lambda }_{g^{*}}({\varvec{\theta }}/\lambda ).\)
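Properties 1 and 3 can be checked numerically in a simple case: for the \(\ell _{1}\) norm, the proximity mapping is componentwise soft-thresholding and the conjugate is the indicator of the unit \(\ell _{\infty }\) ball, whose proximity mapping is the projection onto that ball. A minimal sketch (the test points are arbitrary):

```python
import math

def soft(t, lam):
    # proximity mapping of the l1 norm: componentwise soft-thresholding
    return max(t - lam, 0.0) if t >= 0 else min(t + lam, 0.0)

def project(t):
    # proximity mapping of the conjugate g* (indicator of the unit
    # l-infinity ball) is the projection onto that ball, for any parameter
    return max(-1.0, min(1.0, t))

lam = 0.7
thetas = [-2.0, -0.3, 0.0, 0.5, 3.1]

# Property 3 (Moreau decomposition): theta = prox_g(theta) + lam*prox_{g*}(theta/lam).
moreau_ok = all(
    abs(t - (soft(t, lam) + lam * project(t / lam))) < 1e-12 for t in thetas
)

# Property 1: the residual (theta - prox(theta))/lam is a subgradient of |.|
# at prox(theta): it lies in [-1, 1], and equals sign(prox) when prox != 0.
def residual_in_subdiff(t):
    p = soft(t, lam)
    s = (t - p) / lam
    return abs(s) <= 1 + 1e-12 and (p == 0.0 or abs(s - math.copysign(1.0, p)) < 1e-12)

subdiff_ok = all(residual_in_subdiff(t) for t in thetas)
```

The fixed-point statement in Property 1 is also visible here: the minimiser \(\theta = 0\) of the \(\ell _{1}\) norm satisfies \(\mathrm {soft}(0,\,\lambda ) = 0\) for every \(\lambda > 0.\)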

Many proximal optimisation methods derive from the *proximal point algorithm* given by the iteration \({\varvec{\theta }}^{k+1} = {\text {prox}}^{\lambda }_{g}({\varvec{\theta }}^{k}),\) which follows from the fixed-point equation of Property 1. This iteration can be interpreted as an implicit (backward) subgradient steepest descent to minimise *g*, i.e., \({\varvec{\theta }}^{k+1} = {\varvec{\theta }}^{k} - \lambda {\varvec{\varphi }},\) with \({\varvec{\varphi }}\in \partial g({\varvec{\theta }}^{k+1}).\) Alternatively, proximal point algorithms can also be interpreted as explicit (forward) gradient steepest descent to minimise the *Moreau envelope* of \(g,\, e_{\lambda } ({\varvec{\theta }}) = \inf _{\mathbf {u}\in \mathbb {R}^{n}}\, g(\mathbf {u}) + \Vert \mathbf {u}- {\varvec{\theta }}\Vert ^{2}/2\lambda ,\) a convex lower bound on *g* that by construction is continuously differentiable and has the same minimiser as *g*.
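As a quick sanity check of the proximal point iteration and its Moreau-envelope interpretation (a Python/NumPy sketch of ours, using \(g = \Vert \cdot \Vert _{1}\) whose proximity mapping is soft-thresholding): iterating the map drives the iterates to the minimiser \(\varvec{0},\) and one proximal step agrees exactly with an explicit gradient step on \(e_{\lambda },\) whose gradient is \(\{{\varvec{\theta }}- {\text {prox}}^{\lambda }_{g}({\varvec{\theta }})\}/\lambda\):

```python
import numpy as np

def prox_l1(theta, lam):
    """Soft-thresholding: proximity mapping of g = ||.||_1."""
    return np.sign(theta) * np.maximum(np.abs(theta) - lam, 0.0)

lam = 0.25
theta = np.array([2.0, -1.3, 0.1])
for _ in range(20):                       # proximal point iterations
    theta = prox_l1(theta, lam)
assert np.allclose(theta, 0.0)            # converged to the minimiser of ||.||_1

# Equivalence with an explicit gradient step on the Moreau envelope:
# grad e_lam(t) = (t - prox(t)) / lam, hence t - lam * grad = prox(t).
t = np.array([2.0, -1.3, 0.1])
grad = (t - prox_l1(t, lam)) / lam
assert np.allclose(t - lam * grad, prox_l1(t, lam))
```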

Proximal point algorithms may appear of little relevance because evaluating \({\text {prox}}^{\lambda }_{g}\) can be as difficult as solving (8) in the first place (notice that (9) is a convex minimisation problem similar to (8)). Surprisingly, many advanced proximal optimisation methods can in fact be shown to be either applications of this simple algorithm, or closely related to it.

Many advanced methods instead exploit a decomposition of *g*, e.g., \(g = g_{1} + g_{2},\) where \(g_{1} \in \varGamma _{0}(\mathbb {R}^{n})\) is Lipschitz \(\mathcal {C}^{1}\)-differentiable^{2} and \(g_{2} \in \varGamma _{0}(\mathbb {R}^{n}),\) possibly non-differentiable, has a proximity mapping that can be computed efficiently with a specialised algorithm. This decomposition is useful for instance in linear inverse problems, where \(g_{1}\) is often related to a Gaussian observation model involving linear operators and \(g_{2}\) to a log-prior promoting a parsimonious representation (e.g., sparsity on some appropriate dictionary, low-rankness) or enforcing convex constraints (e.g., positivity, positive definiteness). For models that admit this decomposition, it is possible to compute \(\hat{{\varvec{\theta }}}_{MAP}\) efficiently with a *forward*–*backward* algorithm, also known as the proximal gradient algorithm, \({\varvec{\theta }}^{k+1} = {\text {prox}}^{\lambda _{n}}_{g_{2}}\{{\varvec{\theta }}^{k} - \lambda _{n} \nabla g_{1}({\varvec{\theta }}^{k})\},\) which combines an explicit (forward) gradient step on \(g_{1}\) with an implicit (backward) proximal step on \(g_{2}\) and converges with rate *O*(1/*k*). If the value of the Lipschitz constant \(\beta \) is unknown, \(\lambda _{n}\) can be found by line-search.
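A minimal forward–backward sketch (Python/NumPy; the synthetic lasso-type problem, all variable names, and the fixed step \(\lambda _{n} = 1/\beta \) are our own illustrative assumptions) with \(g_{1}({\varvec{\theta }}) = \Vert \mathbf {y}- A{\varvec{\theta }}\Vert _{2}^{2}/2\) and \(g_{2}({\varvec{\theta }}) = \alpha \Vert {\varvec{\theta }}\Vert _{1}\):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 40, 20
A = rng.normal(size=(n, p))
theta_true = np.zeros(p)
theta_true[:3] = [2.0, -1.5, 1.0]                 # sparse ground truth
y = A @ theta_true + 0.01 * rng.normal(size=n)

alpha = 0.5
beta = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of grad g1
lam = 1.0 / beta                                  # fixed step size

def prox_l1(t, thr):
    return np.sign(t) * np.maximum(np.abs(t) - thr, 0.0)

theta = np.zeros(p)
for _ in range(2000):                             # forward-backward iterations
    grad = A.T @ (A @ theta - y)                  # forward (gradient) step on g1
    theta = prox_l1(theta - lam * grad, lam * alpha)  # backward (prox) step on g2

# Optimality (subgradient) condition: ||A^T (y - A theta)||_inf <= alpha
assert np.max(np.abs(A.T @ (y - A @ theta))) <= alpha + 1e-6
```

The final assertion is the standard lasso optimality condition, \(A^{T}(\mathbf {y}- A\hat{{\varvec{\theta }}}) \in \alpha \partial \Vert \hat{{\varvec{\theta }}}\Vert _{1},\) which provides a convenient convergence diagnostic.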

A related strategy can be used when *g* admits a decomposition \(g({\varvec{\theta }}) = g_{1}({\varvec{\theta }}) + g_{2}(L{\varvec{\theta }})\) for some linear operator \(L \in \mathbb {R}^{n\times p},\,g_{1} \in \varGamma _{0}(\mathbb {R}^{n})\) strongly convex and \(g_{2} \in \varGamma _{0}(\mathbb {R}^{p})\) with efficient proximity mapping. In this case, the Fenchel–Rockafellar theorem states that \(\hat{{\varvec{\theta }}}_{MAP}\) can be computed by solving the dual problem (Bauschke and Combettes 2011, Chap. 19). This *p*-dimensional problem can be solved iteratively with a forward–backward algorithm \({\varvec{\psi }}^{k+1} ={\text {prox}}^{\lambda _{n}}_{g^{*}_{2}}({\varvec{\psi }}^{k} - \lambda _{n} \nabla g^{*}_{1}(-L^{T}{\varvec{\psi }}^{k}))\) that can also be accelerated to converge with rate \(O(1/k^{2}),\) and where we note that the proximity mapping of \(g_{2}^{*}\) is typically evaluated by using Property 3, and that the strong convexity of \(g_{1}\) implies Lipschitz differentiability of \(g_{1}^{*}.\) Computing \(\hat{{\varvec{\theta }}}_{MAP}\) via (14) can lead to important computational savings, in particular if \(p \ll n\) or if \(g_{2}\) is separable and has a proximity mapping that can be computed in parallel for each element of \({\varvec{\theta }}\) (this is generally not possible for \(g_{2} \circ L\)). We refer the reader to Komodakis and Pesquet (2014) for an overview of recent dual and primal–dual algorithms and guidelines for parallel implementations.

Another important proximal method is the *alternating direction method of multipliers* (ADMM), which operates by formulating (11) as a constrained optimisation problem.

Finally, suppose that *g* admits the decomposition \(g({\varvec{\theta }}) = \sum \nolimits _{m = 1}^{M}g_{m}(L_{m} {\varvec{\theta }})\) with \(g_{m} \in \varGamma (\mathbb {R}^{p_{m}})\) and \(L_{m} \in \mathbb {R}^{n \times {p_{m}}}\) such that the mappings of \(g_{m}\) are easy to compute and \(Q = \sum \nolimits _{m = 1}^{M} L_{m}^{T} L_{m}\) is invertible. Then, in a manner akin to (16), we express (8) as a constrained problem that can be solved with an ADMM-type algorithm whose proximity mappings are computed in parallel over *M* at a coarse level (e.g., on a multi-processor system). Further parallelisation may be possible at a finer scale (e.g., on a vectorial processor such as a GPU or FPGA) by taking advantage of the structure of \({\text {prox}}^{\lambda }_{g_{m}}\) or by using specialised algorithms. This algorithm, known as the *simultaneous direction method of multipliers*, is also closely related to the ADMM, Douglas–Rachford and proximal point algorithms. Notice that splitting *g* not only allows the exploitation of parallel computer architectures, but may also significantly simplify the computation of proximity mappings; often \({\text {prox}}^{\lambda }_{g_{m}}\) has a closed-form expression. Lastly, it is worth mentioning that there are other modern proximal optimisation algorithms that can be massively parallelised, for example the *generalised forward–backward* algorithm (Raguet et al. 2013), the *parallel proximal* algorithms (Combettes and Pesquet 2008; Pesquet and Pustelnik 2012), and the parallel primal–dual algorithm (Combettes and Pesquet 2012).
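The splitting idea can be sketched in its simplest two-term form: an ADMM for a lasso-type problem that alternates a quadratic (ridge-like) \({\varvec{\theta }}\)-update, a soft-thresholding \(\mathbf {u}\)-update, and a multiplier update. This is a toy Python/NumPy sketch of ours (problem data, penalty \(\rho \), and variable names are illustrative assumptions), not an implementation from the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 40, 20
A = rng.normal(size=(n, p))
y = A @ np.concatenate([[2.0, -1.5, 1.0], np.zeros(p - 3)]) + 0.01 * rng.normal(size=n)
alpha, rho = 0.5, 1.0

def prox_l1(t, thr):
    return np.sign(t) * np.maximum(np.abs(t) - thr, 0.0)

# Cache the theta-update: a ridge-like solve (A^T A + rho I)^{-1}
M = np.linalg.inv(A.T @ A + rho * np.eye(p))
u = np.zeros(p)
w = np.zeros(p)                                   # scaled dual variable
for _ in range(500):
    theta = M @ (A.T @ y + rho * (u - w))         # quadratic update for g1
    u = prox_l1(theta + w, alpha / rho)           # prox update for g2 = alpha*||.||_1
    w = w + theta - u                             # multiplier (dual) update

assert np.allclose(theta, u, atol=1e-4)           # splitting constraint theta = u met
assert np.max(np.abs(A.T @ (y - A @ u))) <= alpha + 1e-3  # lasso optimality
```

For \(M > 2\) terms, the \(\mathbf {u}\)-type updates decouple across \(m\) and are exactly the coarse-level parallelism described above.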

Finally, the main current topics of research in proximal optimisation include theory and methodology for: (1) randomised and stochastic algorithms that operate with estimators of gradients and proximity mappings to reduce computational complexity and allow for errors in the update rules, (2) adaptive and variable metric algorithms (e.g., Riemannian and Newton-type) that exploit the model’s geometry to improve convergence speed, and (3) proximal methods for non-convex problems. We anticipate that in the future new and stronger connections will emerge between proximal optimisation and stochastic simulation, in particular through developments in stochastic optimisation and high-dimensional MCMC sampling. For example, one connection is through the integration of modern stochastic convex optimisation and Markovian stochastic approximation (Combettes and Pesquet 2014; Andrieu et al. 2015), and of proximal optimisation and high-dimensional MCMC sampling (Pereyra 2015).

### 4.2 Convex relaxations

Modern proximal optimisation was greatly motivated by important theoretical results on the recovery of partially-observed sparse vectors and low-rank matrices through convex minimisation (Candès et al. 2006; Candès and Tao 2009) and on *compressive sensing* (Candès and Wakin 2008). A key idea underlying these works is that of approximating a combinatorial optimisation problem, whose solution is NP-hard, with a “relaxed” convex problem that is computationally tractable, and whose solution is in some sense close to the solution of the original problem. Reciprocally, the development of modern convex optimisation has in turn generated much interest in log-concave models, convex regularisers, and “convexifications” (i.e., convex relaxations for intractable or poorly tractable models) for statistical inference problems involving high-dimensionality, large datasets and computing time constraints (Chandrasekaran et al. 2012; Chandrasekaran and Jordan 2013).

### 4.3 Illustrative example

As an illustration, consider the enhancement of a blurred and noisy image \(\mathbf {y}= H{\varvec{\theta }}_{0} + \mathbf {w},\) where *H* is a linear blur operator and \(\mathbf {w}\) is Gaussian noise of variance \(\sigma ^{2},\) under a *total-variation* Markov random field prior \(p({\varvec{\theta }}) \propto \exp (-\alpha \Vert \nabla _{d}{\varvec{\theta }}\Vert _{1{-}2}),\) where \(\Vert \cdot \Vert _{1{-}2}\) denotes the composite \(\ell _{1}{-}\ell _{2}\) norm and \(\nabla _{d}\) is the discrete gradient operator that computes the vertical and horizontal differences between neighbour image pixels. This prior is log-concave and models the fact that differences between neighbouring image pixels are usually very small but occasionally take large values; it is arguably the most widely used prior in modern statistical image processing. The values of *H* and \(\sigma ^{2}\) are typically determined during the system’s calibration process and are here assumed known.

The unknown regularisation parameter \(\alpha \) is handled with a *majorisation*–*minimisation* strategy. To be precise, starting from some initial condition \({\varvec{\theta }}^{(0)},\) e.g., \({\varvec{\theta }}^{(0)} = \mathbf {y},\) we iteratively minimise a sequence of strictly convex majorants (Oliveira et al. 2009). Each majorant is minimised with the ADMM algorithm *SALSA* (Afonso et al. 2011) implemented with \(g_{1} ({\varvec{\theta }}) = \Vert \mathbf {y}- H{\varvec{\theta }}\Vert ^{2}_{2}/2\sigma ^{2},\,g_{2}(\mathbf {u}) = \alpha _{eff}^{(t)}\Vert \nabla _{d} \mathbf {u}\Vert _{1{-}2},\) and the constraint \({\varvec{\theta }}= \mathbf {u}\) (though we could have also used other modern algorithms, e.g., Pesquet and Pustelnik 2012; Combettes and Pesquet 2012; Raguet et al. 2013). To compute the proximity mapping of \(g_{1}\) we use the fact that *H* is block-circulant, so that matrix products and pseudo-inverses can be computed with the FFT algorithm. We compute the proximity mapping of \(g_{2}\) with a highly parallelised implementation of the specialised algorithm of Chambolle (2004).

Figure 3 presents a blurred and noisy observation \(\mathbf {y}\) of the popular “boats” image^{3} of size \(512 \times 512\) pixels, generated with a uniform \(9\times 9\) blur and a noise power of \(\sigma ^{2} = 0.5^{2}\) (blurred-signal-to-noise ratio \(BSNR = 10\log _{10}\{\Vert H{{\varvec{\theta }}}_{0}\Vert ^{2}_{2}/\sigma ^{2}\} = 40\) dB). Figure 4 below shows the MAP estimate \(\hat{{\varvec{\theta }}}_{MAP}\) obtained by solving (21) using four iterations of (22) and a total of 51 ADMM iterations. We observe that this resolution enhancement process has produced a remarkably sharp image with very noticeable fine detail. Moreover, Fig. 5 shows the magnitude of the marginal \(90\,\%\) credibility regions for each pixel, as measured by the distance between the 5 and \(95\,\%\) quantile estimates. These estimates were computed using the proximal Metropolis-adjusted Langevin algorithm (Pereyra 2015), which is appropriate for high-dimensional densities that are not continuously differentiable. We observe in Fig. 5 that the uncertainty is mainly concentrated at the contours and object boundaries, revealing that the model is able to accurately detect the presence of sharp edges in the image but with some uncertainty about their exact location. Finally, Fig. 6 shows the convergence of the estimates \({\varvec{\theta }}^{(t,k)}\) produced by each ADMM iteration to \(\hat{{\varvec{\theta }}}_{MAP}\) (as measured by the mean squared error \(\Vert {\varvec{\theta }}^{(t,k)} - \hat{{\varvec{\theta }}}_{MAP} \Vert _{2}^{2}\)). Notice that computing \(\hat{{\varvec{\theta }}}_{MAP}\) required only 10 s (experiment conducted on an Apple Macbook Pro running Matlab 2013; a C++ implementation would likely be faster still). This is remarkably fast given the high dimensionality of the problem (\(n = 262{,}144\)). The computation of the credibility regions by MCMC sampling (20,000 samples with a thinning factor of 1000 to reduce the algorithm’s memory footprint) required 75 h.

## 5 Discussion

### 5.1 Bayesian computation in the era of data science

As with other areas of statistical science, the Bayesian computation community has to decide whether data science is an opportunity or a threat. Inevitably if we do not treat it as an opportunity, it will become a threat. Thanks to the ubiquity of “big data” (as an over-hyped phrase mostly useful for attracting research funding, but also to at least some extent in reality), a new potentially multi-disciplinary field of data science is rapidly opening up. This field is attracting huge material resources, and will absorb much human talent. Statistical science has to be a part of this, for its own survival, but also for the sake of society. As Tim Harford has cogently argued (Harford 2014):

Recall big data’s four articles of faith. Uncanny accuracy is easy to overrate if we simply ignore false positives [...]. The claim that causation has been “knocked off its pedestal” is fine if we are making predictions in a stable environment but not if the world is changing [...] or if we ourselves hope to change it. The promise that “N = All”, and therefore that sampling bias does not matter, is simply not true in most cases that count. As for the idea that “with enough data, the numbers speak for themselves” – that seems hopelessly naïve in data sets where spurious patterns vastly outnumber genuine discoveries.

“Big data” has arrived, but big insights have not. The challenge now is to solve new problems and gain new answers – without making the same old statistical mistakes on a grander scale than ever.

It is a mistake to think that Bayes has no part to play in these developments, but more of us need to get more involved, and learn new tools, as in the way the Consensus Monte Carlo algorithm (Scott et al. 2013) exploits the Hadoop environment (White 2012) and the MapReduce programming model (Dean and Ghemawat 2008). Another direction that can prevent a potential schism between Bayesian modelling and highly complex models is to aim for modularity and local learning, that is, to abandon the goal of modelling big universes in favour of analysing a series of small worlds, in spite of the loss of coherence, and hence compromise to the Bayesian paradigm, that this entails. The curious case of the cut models presented in Plummer (2015) is an illustration of the potential for developing partial-information Bayesian inference tools where “small is beautiful” because this is the only viable solution.

### 5.2 Do we care enough about applications?

Bayesian computation began in order to answer rather practical problems—how can we perform a Bayesian analysis of these data using this model?—or the corresponding meta-problems—how can Bayesian analysis be performed generally and reliably for this class of models? The focus was applied methodology (although since the methods were new, they tended to be published in premier theory/methodology journals). Because the research community wanted to understand (the advantages, performance and limitations of) the methods they were advocating, more theoretical work started to be conducted, and, for example, many probabilists were attracted to study the Markov chains that MCMC methodologists created. The centre of mass of research activity drifted away from the original motivations, just as has happened in other areas of mathematically-rigorous computation.

At the same time, those working with data became more ambitious with regard to the scale of data, the complexity of modelling and the sophistication of analysis, all factors that have in principle (and often in fact) stimulated new developments in Bayesian computation. But to a large extent this is a rich, self-stimulating and self-supporting area of research; new applications may or may not need new computational techniques, but new techniques don’t seem to need applications to justify themselves. It is apposite to ask to what extent cutting-edge computational methodology research is really delivering answers to the questions that application domains are posing, and to what extent it is successfully answering real questions.

We may not be unanimous about answers to these questions, except we can probably all agree they are “not entirely”. We will also disagree about how much this matters, but again there may be something to agree about, that we have failed if methodological innovations disconnect completely from applications. Legitimate differences in research goals partially explain the trend in this direction, but it is fair to say that there is a big communication problem between the computational statistics community and many of the communities where Bayesian computational methods are applied. Unfortunately people in these communities do not always keep up with the state of the art in computational statistics. At the same time, statisticians are often not aware of important developments arising in other fields. (ABC is a good illustration: it took more than 5 years of development within the population genetics community before statisticians became aware that the technique existed, and a few more years before they realised this was proper Bayesian inference applied to approximate models.) We can perhaps blame the fact that there are not enough people working at the interface of the different communities, but life at the interface is not easy because multidisciplinary and interdisciplinary research is often seen as “marginal” by both communities and is thus difficult to publish, communicate, etc. Then there are of course problems in dissemination, related to the different writing styles, journals, computing languages, software, etc. of each community.

We strongly encourage those developing new techniques always to find a way to disseminate them in such a way that at least *somebody* else could use them, preferably someone without the ability to have invented the technique for themselves!—and advocate, of course, that successful dissemination be properly rewarded in our career structures.

In a somewhat parallel path, we have seen over the past decades the emergence of new languages and meta-languages intended to handle complexity both of problems and of solutions towards a wider audience of users. BUGS (Lunn et al. 2010) is the archetypal example of such languages, and it has been successful to the extent that a large proportion of its users have a fairly limited statistical background and often even less of a computational background. However, the population of BUGS users and sympathisers is tiny compared to that of SAS or other corporate statistical systems. In this respect, we have failed to disseminate concepts like Bayesian analysis and wonderful tools like MCMC algorithms, because most people are unable to turn them into code by themselves. (Perusing one of the numerous statistics and machine-learning on-line forums like Cross Validated quickly exposes the methodological gap between academics and the masses!) It is unclear how novel programming developments like STAN (Stan Development Team 2014) are going to modify this picture, in that they still assume a decent understanding of both modelling and simulation issues. Network-based approaches such as those covered by BUGS sound more promising for “modelling locally to learn globally”. Similarly, ABC software is either too specific, like DIYABC (Cornuet et al. 2008) which addresses only population genetic questions, or too dependent on the ability of the modeller to program simulated outcomes from the model under study.

### 5.3 Anticipating the future

In which of the areas we discuss do we expect a particular emphasis of effort, or significant progress, or do we see particular needs for new efforts or new directions?

One expectation is that in the future computational methodologies will be more flexible and malleable. Over the past 25 years Bayesian modelling and inference techniques have been applied successfully to thousands of problems across a wide range of application domains. Each application brings its own constraints in terms of model dimensionality and complexity, data, inferences, accuracy and computing times. These constraints also vary significantly within specific applications. For example, in hyperspectral remote sensing, when a new Bayesian model is introduced it is often first explored and validated by MCMC sampling, then approximated with a variational Bayes method, and then approximated again so that it can be applied to gigabyte-large datasets by using optimisation techniques. Similarly, an interesting result revealed by a fast inference technique can then be analysed more deeply with more reliable and accurate methods. Therefore we expect that in the future the different main computational methodologies will become more adaptable and that the boundaries between them will be less well defined, with many algorithms developed that combine simulation, variational approximations and optimisation. These will be able to handle a wide spectrum of models, degrees of accuracy and computing times, as well as models that have some parts that are simple but high-dimensional and others that are more complex but that only involve low-dimensional components. This can be achieved by using approximations and optimisation to improve stochastic sampling, by using simulation within deterministic algorithms to handle specific parts of the model that are difficult to compute analytically, or in completely new and original ways.

We also anticipate that computational methodologies will continue to be challenged by larger and larger datasets. There is of course a threat that the whole field turns into a library of machine-learning techniques, with limited validation on reference learning sets and a quick turnover of methods, which would both impoverish the field and fail to reach a general audience of practitioners. We must retain a sense of the stochastic elements in data collection, data analysis, and inference, recognising uncertainty in data and models, to preserve the inductive strength of data science—seeing beyond the data we have to what it might have been, what it might be next time, and where it came from.

## Footnotes

- 1.
Maybe due to their initial introduction in population genetics, the oxymoron ‘summary statistics’ is now prevalent in descriptions of ABC algorithms, including in the statistical literature, where the (linguistically sufficient) term ‘statistic’ would suffice.

- 2.
\(g_{1} \in \mathcal {C}^{1}\) has \(\beta \)-Lipschitz continuous gradient if \(\Vert \nabla g_{1}({\varvec{\theta }}) -\nabla g_{1}(\mathbf {u})\Vert \le \beta \Vert {\varvec{\theta }}-\mathbf {u}\Vert , \,\forall ({\varvec{\theta }},\,\mathbf {u}) \in \mathbb {R}^{N} \times \mathbb {R}^{N}\)

- 3.
The boat image is available for download from the SIPI image database at http://sipi.usc.edu/database/database.php?volume=misc&image=38#top

## Notes

### Acknowledgments

Supported in part by “SuSTaIn”, EPSRC Grant EP/D063485/1, at the University of Bristol, and “i-like”, EPSRC Grant EP/K014463/1, at the University of Warwick. Krzysztof Łatuszyński holds a Royal Society University Research Fellowship, and Marcelo Pereyra a Marie Curie Intra-European Fellowship for Career Development. Peter Green also holds a Distinguished Professorship at UTS, Sydney, and Christian Robert an Institut Universitaire de France chair at CEREMADE, Université Paris-Dauphine.

## References

- Afonso, M., Bioucas-Dias, J., Figueiredo, M.: An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems. IEEE. Trans. Image Process.
**20**, 681–695 (2011)MathSciNetGoogle Scholar - Albert, J.: Computational methods using a Bayesian hierarchical generalized linear model. J. Am. Stat. Assoc.
**83**, 1037–1044 (1988)Google Scholar - Aldous, D., Krikun, M., Popovic, L.: Stochastic models for phylogenetic trees on higher-order taxa. J. Math. Biol.
**56**, 525–557 (2008)zbMATHMathSciNetGoogle Scholar - Alquier, P., Friel, N., Everitt, R., Boland, A.: Noisy Monte Carlo: convergence of Markov chains with approximate transition kernels. Stat. Comput. 1–19 (2014)Google Scholar
- Andrieu, C., Doucet, A., Holenstein, R.: Particle Markov chain Monte Carlo (with discussion). J. R. Stat. Soc. B
**72**(2), 269–342 (2011)MathSciNetGoogle Scholar - Andrieu, C., Moulines, É: On the ergodicity properties of some adaptive MCMC algorithms. Ann. Appl. Probab.
**16**, 1462–1505 (2006)Google Scholar - Andrieu, C., Robert, C.: Controlled MCMC for optimal sampling. Technical report, Cahiers du Ceremade (2001)Google Scholar
- Andrieu, C., Roberts, G.: The pseudo-marginal approach for efficient Monte Carlo computations. Ann. Stat.
**37**, 697–725 (2009)zbMATHMathSciNetGoogle Scholar - Andrieu, C., Tadić, V.B., Vihola, M.: On the stability of some controlled Markov chains and its applications to stochastic approximation with Markovian dynamic. Ann. Appl. Probab.
**25**, 1–45 (2015)zbMATHMathSciNetGoogle Scholar - Andrieu, C., Thoms, J.: A tutorial on adaptive MCMC. Stat. Comput.
**18**, 343–373 (2008)MathSciNetGoogle Scholar - Andrieu, C., Vihola, M.: Convergence properties of pseudo-marginal Markov chain Monte Carlo algorithms. Ann. Appl. Probab.
**25**, 1030–1077 (2015)MathSciNetGoogle Scholar - Angelino, E., Kohler, E., Waterland, A., Seltzer, M., Adams, R.: Accelerating MCMC via parallel predictive prefetching. arXiv preprint arXiv:1403.7265 (2014)
- Atchadé, Y.: An adaptive version for the Metropolis adjusted Langevin algorithm with a truncated drift. Methodol. Comput. Appl. Probab.
**8**, 235–254 (2006)zbMATHMathSciNetGoogle Scholar - Atchadé, Y., Rosenthal, J.: On adaptive Markov chain Monte Carlo algorithms. Bernoulli
**11**, 815–828 (2005)zbMATHMathSciNetGoogle Scholar - Atchadé, Y.F., Liu, J.S.: The Wang–Landau algorithm in general state spaces: applications and convergence analysis. Stat. Sin.
**20**, 209–233 (2010)zbMATHGoogle Scholar - Atchadé, Y., Fort, G., Moulines, E., Priouret, P.: Adaptive Markov chain Monte Carlo: theory and methods. In: Barber, D., Cemgil, A.T., Chiappa, S. (eds.) Bayesian Time Series Models, chap. 2, pp. 32–51. Cambridge University Press, Cambridge (2011a)Google Scholar
- Atchadé, Y.F., Roberts, G.O., Rosenthal, J.S.: Towards optimal scaling of Metropolis-coupled Markov chain Monte Carlo. Stat. Comput.
**21**, 555–568 (2011b)zbMATHMathSciNetGoogle Scholar - Bai, Y., Roberts, G., Rosenthal, J.: On the containment condition for adaptive Markov chain Monte Carlo algorithms. Adv. Appl. Stat.
**21**, 1–54 (2011)zbMATHMathSciNetGoogle Scholar - Barber, S., Voss, J., Webster, M.: The rate of convergence for Approximate Bayesian computation. Electron. J. Stat.
**9**, 80–105 (2015)zbMATHMathSciNetGoogle Scholar - Bardenet, R., Doucet, A., Holmes, C.: Towards scaling up Markov chain Monte Carlo: an adaptive subsampling approach. In: Jebara, T., Xing, E.P. (eds.) Proceedings of the 31st International Conference on Machine Learning (ICML-14). JMLR Workshop and Conference Proceedings, pp 405–413 (2014)Google Scholar
- Barthelmé, S., Chopin, N.: Expectation propagation for likelihood-free inference. J. Am. Stat. Assoc.
**109**, 315–333 (2014)Google Scholar - Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, New York (2011)zbMATHGoogle Scholar
- Beaumont, M.: Estimation of population growth or decline in genetically monitored populations. Genetics
**164**, 1139–1160 (2003)Google Scholar - Beaumont, M.: Approximate Bayesian computation in evolution and ecology. Annu. Rev. Ecol. Evol. Syst.
**41**, 379–406 (2010)Google Scholar - Beaumont, M., Nielsen, R., Robert, C., Hey, J., Gaggiotti, O., Knowles, L., Estoup, A., Mahesh, P., Coranders, J., Hickerson, M., Sisson, S., Fagundes, N., Chikhi, L., Beerli, P., Vitalis, R., Cornuet, J.-M., Huelsenbeck, J., Foll, M., Yang, Z., Rousset, F., Balding, D., Excoffier, L.: In defense of model-based inference in phylogeography. Mol. Ecol.
**19**(3), 436–446 (2010)Google Scholar - Beaumont, M., Zhang, W., Balding, D.: Approximate Bayesian computation in population genetics. Genetics
**162**, 2025–2035 (2002)Google Scholar - Bédard, M.: Weak convergence of Metropolis algorithms for non-i.i.d. target distributions. Ann. Appl. Probab.
**17**, 1222–1244 (2007)zbMATHMathSciNetGoogle Scholar - Bédard, M., Douc, R., Moulines, E.: Scaling analysis of multiple-try MCMC methods. Stoch. Process. Appl.
**122**, 758–786 (2012)zbMATHGoogle Scholar - Bédard, M., Douc, R., Moulines, E.: Scaling analysis of delayed rejection MCMC methods. Methodol. Comput. Appl. Probab.
**16**, 811–838 (2014)zbMATHMathSciNetGoogle Scholar - Belle, E., Benazzo, A., Ghirotto, S., Colonna, V., Barbujani, G.: Comparing models on the genealogical relationships among Neandertal, Cro-Magnoid and modern Europeans by serial coalescent simulations. Heredity
**102**, 218–225 (2008)Google Scholar - Bennett, J., Racine-Poon, A., Wakefield, J.: MCMC for nonlinear hierarchical models. In: Gilks, W., Richardson, S., Spiegelhalter, D. (eds.) Markov Chain Monte Carlo in Practice, pp. 339–358. Chapman and Hall, New York (1996)Google Scholar
- Berger, J., Fienberg, S., Raftery, A., Robert, C.: Incoherent phylogeographic inference. Proc. Natl. Acad. Sci. USA
**107**, E57 (2010)Google Scholar - Besag, J.: Comments on “Representations of knowledge in complex systems” by U. Grenander and M.I. Miller. J. R. Stat. Soc. B
**56**, 591–592 (1994)Google Scholar - Besag, J., Green, P.: Spatial statistics and Bayesian computation (with discussion). J. R. Stat. Soc. B
**55**, 25–38 (1993)zbMATHMathSciNetGoogle Scholar - Besag, J., Green, P.J., Higdon, D., Mengersen, K.: Bayesian computation and stochastic systems (with discussion). Stat. Sci.
**10**, 3–66 (1995)zbMATHMathSciNetGoogle Scholar - Besag, J.E.: Nearest-neighbour systems and the auto-logistic model for binary data. J. R. Stat. Soc. B
**34**, 75–83 (1972)zbMATHMathSciNetGoogle Scholar - Beskos, A., Jasra, A., Muzaffer, E.A., Stuart, A.M.: Sequential Monte Carlo methods for Bayesian elliptic inverse problems. Stat. Comput.
**25**(2015, in press)Google Scholar - Beskos, A., Papaspiliopoulos, O., Roberts, G., Fearnhead, P.: Exact and computationally efficient likelihood-based estimation for discretely observed diffusion processes (with discussion). J. R. Stat. Soc. B
**68**, 333–382 (2006)zbMATHMathSciNetGoogle Scholar - Beskos, A., Pillai, N., Roberts, G., Sanz-Serna, J.-M., Stuart, A.: Optimal tuning of the hybrid Monte Carlo algorithm. Bernoulli
**19**, 1501–1534 (2013)zbMATHMathSciNetGoogle Scholar - Beskos, A., Roberts, G., Stuart, A.: Optimal scalings for local Metropolis–Hastings chains on nonproduct targets in high dimensions. Ann. Appl. Probab.
**19**, 863–898 (2009)zbMATHMathSciNetGoogle Scholar - Betancourt, M.: A general metric for Riemannian manifold Hamiltonian Monte Carlo. In: Nielsen, F., Barbaresco, F. (eds) National Conference on the Geometric Science of Information. Lecture Notes in Computer Science, vol 8085, pp 327–334. Springer (2013)Google Scholar
- Betancourt, M., Byrne, S., Girolami, M.: Optimizing the integrator step size for Hamiltonian Monte Carlo. arXiv preprint arXiv:1411.6669 (2014)
- Betancourt, M.J., Byrne, S., Livingstone, S., Girolami, M.: The geometric foundations of Hamiltonian Monte Carlo. ArXiv e-prints. 1410.5110 (2014)Google Scholar
- Biau, G., Cérou, F., Guyader, A.: New insights into Approximate Bayesian computation. Ann. l’IHP (Probab. Stat.)
**51**, 376–403 (2014)Google Scholar - Blum, M.: Approximate Bayesian computation: a non-parametric perspective. J. Am. Stat. Assoc.
**105**, 1178–1187 (2010)
- Blum, M., François, O.: Non-linear regression models for approximate Bayesian computation. Stat. Comput. **20**, 63–73 (2010)
- Blum, M., Nunes, M., Prangle, D., Sisson, S.: A comparative review of dimension reduction methods in Approximate Bayesian computation. Stat. Sci. **28**, 189–208 (2013)
- Bornn, L., Pillai, N., Smith, A., Woodard, D.: A pseudo-marginal perspective on the ABC algorithm. arXiv preprint arXiv:1404.6298 (2014)
- Bou-Rabee, N., Hairer, M.: Nonasymptotic mixing of the MALA algorithm. IMA J. Numer. Anal. **33**(1), 80–110 (2012)
- Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. **3**, 1–122 (2011)
- Brockwell, A.: Parallel Markov chain Monte Carlo simulation by pre-fetching. J. Comput. Graph. Stat. **15**, 246–261 (2006)
- Calderhead, B.: A general construction for parallelizing Metropolis–Hastings algorithms. Proc. Natl. Acad. Sci. USA **111**, 17408–17413 (2014)
- Calvet, C., Czellar, V.: Accurate methods for Approximate Bayesian computation filtering. J. Financ. Econom. (2014, to appear)
- Candès, E.J., Romberg, J.K., Tao, T.: Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. **59**, 1207–1223 (2006)
- Candès, E.J., Tao, T.: The power of convex relaxation: near-optimal matrix completion. IEEE Trans. Inf. Theory **56**, 2053–2080 (2009)
- Candès, E.J., Wakin, M.B.: An introduction to compressive sampling. IEEE Signal Process. Mag. **25**, 21–30 (2008)
- Cappé, O., Robert, C., Rydén, T.: Reversible jump MCMC converging to birth-and-death MCMC, and more general continuous time samplers. J. R. Stat. Soc. B **65**, 679–700 (2002)
- Carlin, B., Gelfand, A.: An iterative Monte Carlo method for nonconjugate Bayesian analysis. Stat. Comput. **1**, 119–128 (1991)
- Carlin, B., Gelfand, A., Smith, A.: Hierarchical Bayesian analysis of change point problems. Appl. Stat. (C) **41**, 389–405 (1992)
- Cevher, V., Becker, S., Schmidt, M.: Convex optimization for big data: scalable, randomized, and parallel algorithms for big data analytics. IEEE Signal Process. Mag. **31**, 32–43 (2014)
- Chambolle, A.: An algorithm for total variation minimization and applications. J. Math. Imaging Vis. **20**, 89–97 (2004)
- Chandrasekaran, V., Jordan, M.I.: Computational and statistical tradeoffs via convex relaxation. Proc. Natl. Acad. Sci. USA **110**, 1181–1190 (2013)
- Chandrasekaran, V., Recht, B., Parrilo, P., Willsky, A.: The convex geometry of linear inverse problems. Found. Comput. Math. **12**, 805–849 (2012)
- Chipman, H., George, E., McCulloch, R.: BART: Bayesian additive regression trees. Technical report, Acadia University. arXiv:0806.3286v1 (2008)
- Chopin, N.: Inference and model choice for time-ordered hidden Markov models. J. R. Stat. Soc. B **69**(2), 269–284 (2007)
- Chopin, N., Jacob, P.E., Papaspiliopoulos, O.: SMC2: an efficient algorithm for sequential analysis of state space models. J. R. Stat. Soc. B **75**, 397–426 (2013)
- Christensen, O., Roberts, G., Rosenthal, J.: Scaling limits for the transient phase of local Metropolis–Hastings algorithms. J. R. Stat. Soc. B **67**, 253–268 (2005)
- Combettes, P.L., Pesquet, J.-C.: A proximal decomposition method for solving convex variational inverse problems. Inverse Probl. **24**, 065014 (2008)
- Combettes, P.L., Pesquet, J.-C.: Proximal splitting methods in signal processing. In: Bauschke, H.H., Burachik, R.S., Combettes, P.L., Elser, V., Luke, D.R., Wolkowicz, H. (eds.) Fixed-Point Algorithms for Inverse Problems in Science and Engineering, pp. 185–212. Springer, New York (2011)
- Combettes, P.L., Pesquet, J.-C.: Primal–dual splitting algorithm for solving inclusions with mixtures of composite, Lipschitzian, and parallel-sum type monotone operators. Set-Valued Anal. **20**, 307–330 (2012)
- Combettes, P.L., Pesquet, J.-C.: Stochastic Quasi-Fejér block-coordinate fixed point iterations with random sweeping. arXiv preprint arXiv:1404.7536 (2014)
- Cornuet, J.-M., Ravigné, V., Estoup, A.: Inference on population history and model checking using DNA sequence and microsatellite data with the software DIYABC (v1.0). BMC Bioinform. **11**, 401 (2010)
- Cornuet, J.-M., Santos, F., Beaumont, M., Robert, C., Marin, J.-M., Balding, D., Guillemaud, T., Estoup, A.: Inferring population history with DIYABC: a user-friendly approach to Approximate Bayesian computation. Bioinformatics **24**, 2713–2719 (2008)
- Cotter, S., Roberts, G., Stuart, A., White, D., et al.: MCMC methods for functions: modifying old algorithms to make them faster. Stat. Sci. **28**, 424–446 (2013)
- Craiu, R., Rosenthal, J., Yang, C.: Learn from thy neighbour: parallel-chain and regional adaptive MCMC. J. Am. Stat. Assoc. **104**, 1454–1466 (2009)
- Craiu, R.V., Meng, X.-L.: Multiprocess parallel antithetic coupling for backward and forward Markov chain Monte Carlo. Ann. Stat. **33**, 661–697 (2005)
- Cucala, L., Marin, J.-M., Robert, C., Titterington, D.: Bayesian inference in \(k\)-nearest-neighbour classification models. J. Am. Stat. Assoc. **104**(485), 263–273 (2009)
- Dean, J., Ghemawat, S.: MapReduce: simplified data processing on large clusters. Commun. ACM **51**, 107–113 (2008)
- Del Moral, P., Doucet, A., Jasra, A.: Sequential Monte Carlo samplers. J. R. Stat. Soc. B **68**, 411–436 (2006)
- Dellaportas, P., Papaspiliopoulos, O., Roberts, G.: Bayesian inference for non-Gaussian Ornstein–Uhlenbeck stochastic volatility processes. J. R. Stat. Soc. B **66**, 369–393 (2004)
- Dellaportas, P., Wright, D.: Positive embedded integration in Bayesian analysis. Stat. Comput. **1**, 1–12 (1991)
- Diaconis, P., Stroock, D.: Geometric bounds for eigenvalues of Markov chains. Ann. Appl. Probab. **1**, 36–61 (1991)
- Didelot, X., Everitt, R., Johansen, A., Lawson, D.: Likelihood-free estimation of model evidence. Bayesian Anal. **6**, 48–76 (2011)
- Diebolt, J., Robert, C.: Estimation of finite mixture distributions by Bayesian sampling. J. R. Stat. Soc. B **56**, 363–375 (1994)
- Doll, J., Dion, D.: Generalized Langevin equation approach for atom/solid-surface scattering: numerical techniques for Gaussian generalized Langevin dynamics. J. Chem. Phys. **65**, 3762–3766 (1976)
- Douc, R., Robert, C.: A vanilla Rao-Blackwellization of Metropolis–Hastings algorithms. Ann. Stat. **39**, 261–277 (2011)
- Doucet, A., Pitt, M., Deligiannidis, G., Kohn, R.: Efficient implementation of Markov chain Monte Carlo when using an unbiased likelihood estimator. Biometrika (2015, to appear). arXiv preprint arXiv:1210.1871 (2012)
- Drovandi, C., Pettitt, A., Faddy, M.: Approximate Bayesian computation using indirect inference. J. R. Stat. Soc. A **60**, 503–524 (2011)
- Duane, S., Kennedy, A.D., Pendleton, B.J., Roweth, D.: Hybrid Monte Carlo. Phys. Lett. B **195**, 216–222 (1987)
- Ermak, D.: A computer simulation of charged particles in solution. I. Technique and equilibrium properties. J. Chem. Phys. **62**, 4189–4196 (1975)
- Excoffier, L., Leuenberger, C., Wegmann, D.: Bayesian computation and model selection in population genetics. arXiv preprint arXiv:0901.2231 (2009)
- Fearnhead, P., Clifford, P.: On-line inference for hidden Markov models via particle filters. J. R. Stat. Soc. B **65**, 887–899 (2003)
- Fearnhead, P., Prangle, D.: Constructing summary statistics for Approximate Bayesian computation: semi-automatic Approximate Bayesian computation (with discussion). J. R. Stat. Soc. B **74**, 419–474 (2012)
- Fort, G., Moulines, E., Priouret, P.: Convergence of adaptive and interacting Markov chain Monte Carlo algorithms. Ann. Stat. **39**, 3262–3289 (2011)
- Frigessi, A., Gasemyr, J., Rue, H.: Antithetic coupling of two Gibbs sampler chains. Ann. Stat. **28**, 1128–1149 (2000)
- Geman, S., Geman, D.: Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Mach. Intell. **6**, 721–741 (1984)
- Geyer, C.J.: Practical Markov chain Monte Carlo. Stat. Sci. **7**, 473–483 (1992)
- Ghirotto, S., Mona, S., Benazzo, A., Paparazzo, F., Caramelli, D., Barbujani, G.: Inferring genealogical processes from patterns of Bronze-Age and modern DNA variation in Sardinia. Mol. Biol. Evol. **27**, 875–886 (2010)
- Gilks, W., Roberts, G., George, E.: Adaptive direction sampling. J. R. Stat. Soc. D **43**, 179–189 (1994)
- Gilks, W., Roberts, G., Sahu, S.: Adaptive Markov chain Monte Carlo through regeneration. J. Am. Stat. Assoc. **93**, 1045–1054 (1998)
- Girolami, M., Calderhead, B.: Riemann manifold Langevin and Hamiltonian Monte Carlo methods. J. R. Stat. Soc. B **73**, 123–214 (2011)
- Girolami, M., Lyne, A.-M., Strathmann, H., Simpson, D., Atchadé, Y.: Playing Russian roulette with intractable likelihoods. arXiv preprint arXiv:1306.4032 (2013)
- Green, P.: Reversible jump MCMC computation and Bayesian model determination. Biometrika **82**, 711–732 (1995)
- Green, P.J.: MAD-Bayes matching and alignment for labelled and unlabelled configurations. In: Dryden, I.L., Kent, J.T. (eds.) Geometry Driven Statistics, chap. 19, pp. 365–375. Wiley, Chichester (2015)
- Grelaud, A., Marin, J.-M., Robert, C., Rodolphe, F., Tally, F.: Likelihood-free methods for model choice in Gibbs random fields. Bayesian Anal. **3**(2), 427–442 (2009)
- Grenander, U., Miller, M.I.: Representations of knowledge in complex systems (with discussion and a reply by the authors). J. R. Stat. Soc. B **56**, 549–603 (1994)
- Griffin, J., Łatuszyński, K., Steel, M.: Individual adaptation: an adaptive MCMC scheme for variable selection problems. arXiv preprint arXiv:1412.6760 (2014)
- Guillemaud, T., Beaumont, M., Ciosi, M., Cornuet, J.-M., Estoup, A.: Inferring introduction routes of invasive species using approximate Bayesian computation on microsatellite data. Heredity **104**, 88–99 (2009)
- Haario, H., Laine, M., Mira, A., Saksman, E.: DRAM: efficient adaptive MCMC. Stat. Comput. **16**, 339–354 (2006)
- Haario, H., Saksman, E., Tamminen, J.: An adaptive Metropolis algorithm. Bernoulli **7**, 223–242 (2001)
- Hairer, E., Lubich, C., Wanner, G.: Geometric numerical integration: structure-preserving algorithms for ordinary differential equations. In: Springer Series in Computational Mathematics, 2nd edn, vol. 31. Springer, Berlin (2006)
- Harford, T.: Big data: are we making a big mistake? Significance
**11**, 14–19 (2014)
- Hastings, W.: Monte Carlo sampling methods using Markov chains and their application. Biometrika **57**, 97–109 (1970)
- Huelsenbeck, J.P., Ronquist, F.: MRBAYES: Bayesian inference of phylogenetic trees. Bioinformatics **17**, 754–755 (2001)
- Jaakkola, T., Jordan, M.: Bayesian parameter estimation via variational methods. Stat. Comput. **10**, 25–37 (2000)
- Jacob, P., Robert, C., Smith, M.: Using parallel computation to improve independent Metropolis–Hastings based estimation. J. Comput. Graph. Stat. **20**, 616–635 (2011)
- Jacob, P.E., Ryder, R.J.: The Wang–Landau algorithm reaches the flat histogram criterion in finite time. Ann. Appl. Probab. **24**, 34–53 (2014)
- Ji, C., Schmidler, S.C.: Adaptive Markov chain Monte Carlo for Bayesian variable selection. J. Comput. Graph. Stat. **22**, 708–728 (2013)
- Jordan, M.I., Ghahramani, Z., Jaakkola, T.S., Saul, L.K.: An introduction to variational methods for graphical models. Mach. Learn. **37**, 183–233 (1999)
- Jourdain, B., Lelièvre, T., Miasojedow, B.: Optimal scaling for the transient phase of the random walk Metropolis algorithm: the mean-field limit. Ann. Appl. Probab. (to appear). arXiv preprint arXiv:1210.7639 (2012)
- Jourdain, B., Lelièvre, T., Miasojedow, B.: Optimal scaling for the transient phase of Metropolis–Hastings algorithms: the longtime behavior. Bernoulli **20**, 1930–1978 (2014)
- Kent, J.: Time-reversible diffusions. Adv. Appl. Probab. **10**(4), 819–835 (1978)
- Kloeden, P.E., Platen, E.: Numerical solution of stochastic differential equations. In: Applications of Mathematics (New York), vol. 23. Springer, Berlin (1992)
- Komodakis, N., Pesquet, J.-C.: Playing with duality: an overview of recent primal–dual approaches for solving large-scale optimization problems. arXiv preprint arXiv:1406.5429 (2014)
- Korattikara, A., Chen, Y., Welling, M.: Austerity in MCMC land: cutting the Metropolis–Hastings budget. arXiv preprint arXiv:1304.5299 (2013)
- Kou, S.C., Zhou, Q., Wong, W.H.: Equi-energy sampler with applications in statistical inference and statistical mechanics (with discussions and a rejoinder by the authors). Ann. Stat. **34**, 1581–1652 (2006)
- Lamnisos, D., Griffin, J.E., Steel, M.F.: Adaptive Monte Carlo for Bayesian variable selection in regression models. J. Comput. Graph. Stat. **22**, 729–748 (2013)
- Lange, K., Chi, E.C., Zhou, H.: A brief survey of modern optimization for statisticians. Int. Stat. Rev. **82**, 46–70 (2014)
- Larget, B., Simon, D.L.: Markov chain Monte Carlo algorithms for the Bayesian analysis of phylogenetic trees. Mol. Biol. Evol. **16**, 750–759 (1999)
- Łatuszyński, K., Kosmidis, I., Papaspiliopoulos, O., Roberts, G.O.: Simulating events of unknown probabilities via reverse time martingales. Random Struct. Algorithms **38**, 441–452 (2011)
- Łatuszyński, K., Roberts, G., Rosenthal, J.: Adaptive Gibbs samplers and related MCMC methods. Ann. Appl. Probab. **23**(1), 66–98 (2013)
- Łatuszyński, K., Rosenthal, J.S.: The containment condition and AdapFail algorithms. J. Appl. Probab. **51**, 1189–1195 (2014)
- Lee, A., Łatuszyński, K.: Variance bounding and geometric ergodicity of Markov chain Monte Carlo kernels for Approximate Bayesian computation. Biometrika **101**, 655–671 (2014)
- Lee, A., Yau, C., Giles, M., Doucet, A., Holmes, C.: On the utility of graphics cards to perform massively parallel simulation with advanced Monte Carlo methods. arXiv preprint arXiv:0905.2441 (2009)
- Lee, H.K., Okabe, Y., Landau, D.P.: Convergence and refinement of the Wang–Landau algorithm. Technical report (2005)
- Leuenberger, C., Wegmann, D.: Bayesian computation and model selection without likelihoods. Genetics **184**, 243–252 (2010)
- Levin, D.A., Peres, Y., Wilmer, E.L.: Markov Chains and Mixing Times. American Mathematical Society, Providence (2009)
- Lindsten, F., Jordan, M.I., Schön, T.B.: Particle Gibbs with ancestor sampling. arXiv preprint arXiv:1401.0604 (2014)
- Lunn, D., Thomas, A., Best, N., Spiegelhalter, D.: The BUGS Book: A Practical Introduction to Bayesian Analysis. Chapman and Hall/CRC Press, Boca Raton (2010)
- MacKay, D.J.C.: Information Theory, Inference and Learning Algorithms. Cambridge University Press, Cambridge (2002)
- Maclaurin, D., Adams, R.P.: Firefly Monte Carlo: exact MCMC with subsets of data. arXiv preprint arXiv:1403.5693 (2014)
- Marin, J., Pillai, N., Robert, C., Rousseau, J.: Relevant statistics for Bayesian model choice. J. R. Stat. Soc. B
**76**(5), 833–859 (2014)
- Marin, J., Pudlo, P., Robert, C., Ryder, R.: Approximate Bayesian computational methods. Stat. Comput. **21**(2), 1–14 (2012)
- Marshall, T., Roberts, G.: An adaptive approach to Langevin MCMC. Stat. Comput. **22**, 1041–1057 (2012)
- Martinet, B.: Régularisation d'inéquations variationnelles par approximations successives. Rev. Fr. Autom. Informat. Rech. Opér. **4**, 154–159 (1970)
- Medina-Aguayo, F.J., Lee, A., Roberts, G.O.: Stability of noisy Metropolis–Hastings. arXiv preprint arXiv:1503.07066 (2015)
- Mengersen, K., Tweedie, R.: Rates of convergence of the Hastings and Metropolis algorithms. Ann. Stat. **24**, 101–121 (1996)
- Metropolis, N.: The beginning of the Monte Carlo method. Los Alamos Sci. **15**, 125–130 (1987)
- Metropolis, N., Rosenbluth, A.W., Rosenbluth, M.N., Teller, A.H., Teller, E.: Equations of state calculations by fast computing machines. J. Chem. Phys. **21**, 1087–1092 (1953)
- Meyn, S., Tweedie, R.: Markov Chains and Stochastic Stability. Cambridge University Press, Cambridge (2009)
- Miasojedow, B., Moulines, E., Vihola, M.: An adaptive parallel tempering algorithm. J. Comput. Graph. Stat. **22**, 649–664 (2013)
- Minka, T.: Expectation propagation for approximate Bayesian inference. In: Breese, J.S., Koller, D. (eds.) UAI '01: Proceedings of the 17th Conference in Uncertainty in Artificial Intelligence, pp. 362–369. University of Washington, Seattle (2001)
- Minsker, S., Srivastava, S., Lin, L., Dunson, D.B.: Robust and scalable Bayes via a median of subset posterior measures. arXiv preprint arXiv:1403.2660 (2014)
- Møller, J., Pettitt, A.N., Reeves, R., Berthelsen, K.K.: An efficient Markov chain Monte Carlo method for distributions with intractable normalising constants. Biometrika **93**, 451–458 (2006)
- Moreau, J.-J.: Fonctions convexes duales et points proximaux dans un espace Hilbertien. C. R. Acad. Sci. Paris Sér. A Math. **255**, 2897–2899 (1962)
- Muff, S., Riebler, A., Rue, H., Saner, P., Held, L.: Bayesian analysis of measurement error models using INLA. arXiv preprint arXiv:1302.3065 (2013)
- Murray, I., Ghahramani, Z., MacKay, D.: MCMC for doubly-intractable distributions. In: Uncertainty in Artificial Intelligence, UAI-2006 (2006a)
- Murray, I., MacKay, D.J., Ghahramani, Z., Skilling, J.: Nested sampling for Potts models. In: Weiss, Y., Schölkopf, B., Platt, J. (eds.) Advances in Neural Information Processing Systems, vol. 18, pp. 947–954. MIT Press, Cambridge (2006b)
- Naylor, J., Smith, A.: Application of a method for the efficient computation of posterior distributions. Appl. Stat. **31**, 214–225 (1982)
- Neal, P., Roberts, G., Yuen, W.K.: Optimal scaling of random walk Metropolis algorithms with discontinuous target densities. Ann. Appl. Probab. **22**, 1880–1927 (2012)
- Neal, R.: Bayesian Learning for Neural Networks. Lecture Notes, vol. 118. Springer, New York (1999)
- Neal, R.: MCMC using Hamiltonian dynamics. In: Brooks, S., Gelman, A., Jones, G., Meng, X.-L. (eds.) Handbook of Markov Chain Monte Carlo, pp. 113–162. Chapman and Hall/CRC Press, Boca Raton (2013)
- Neiswanger, W., Wang, C., Xing, E.: Asymptotically exact, embarrassingly parallel MCMC. arXiv preprint arXiv:1311.4780 (2013)
- Nesterov, Y.: Introductory lectures on convex optimization: a basic course. In: Applied Optimization, vol. 87. Kluwer Academic Publishers, Boston (2004)
- Nott, D., Kohn, R.: Adaptive sampling for Bayesian variable selection. Biometrika **92**, 747–763 (2005)
- Oliveira, J., Bioucas-Dias, J., Figueiredo, M.: Adaptive total variation image deblurring: a majorization–minimization approach. Signal Process. **89**, 1683–1693 (2009)
- Owen, A.B.: Empirical Likelihood. Chapman and Hall, Boca Raton (2001)
- Paisley, J., Blei, D.M., Jordan, M.I.: Variational Bayesian inference with stochastic search. In: Proceedings of the 29th International Conference on Machine Learning (ICML-12), pp. 1367–1374 (2012)
- Papaspiliopoulos, O., Roberts, G.O., Sköld, M.: A general framework for the parametrization of hierarchical models. Stat. Sci. **22**, 59–73 (2007)
- Parikh, N., Boyd, S.: Proximal algorithms. Found. Trends Optim. **1**, 123–231 (2014)
- Patin, E., Laval, G., Barreiro, L., Salas, A., Semino, O., Santachiara-Benerecetti, S., Kidd, K., Kidd, J., Van Der Veen, L., Hombert, J., et al.: Inferring the demographic history of African farmers and Pygmy hunter-gatherers using a multilocus resequencing data set. PLoS Genet. **5**, e1000448 (2009)
- Pearson, K.: Contribution to the mathematical theory of evolution. Philos. Trans. R. Soc. A **185**, 71–110 (1894)
- Pereyra, M.: Proximal Markov chain Monte Carlo algorithms. Stat. Comput. (2015, to appear)
- Peskun, P.: Optimum Monte Carlo sampling using Markov chains. Biometrika **60**, 607–612 (1973)
- Pesquet, J.-C., Pustelnik, N.: A parallel inertial proximal optimization method. Pac. J. Optim. **8**, 273–305 (2012)
- Pillai, N.S., Smith, A.: Ergodicity of approximate MCMC chains with applications to large data sets. arXiv preprint arXiv:1405.0182 (2014)
- Pillai, N.S., Stuart, A.M., Thiéry, A.H.: Optimal scaling and diffusion limits for the Langevin algorithm in high dimensions. Ann. Appl. Probab. **22**, 2320–2356 (2012)
- Plummer, M.: Cuts in Bayesian graphical models. Stat. Comput. **25**(1), 37–43 (2015)
- Potts, R.B.: Some generalized order–disorder transitions. Proc. Camb. Philos. Soc. **48**, 106–109 (1952)
- Pritchard, J., Seielstad, M., Perez-Lezaun, A., Feldman, M.: Population growth of human Y chromosomes: a study of Y chromosome microsatellites. Mol. Biol. Evol. **16**, 1791–1798 (1999)
- Pudlo, P., Marin, J.-M., Estoup, A., Cornuet, J.-M., Gautier, M., Robert, C.P.: ABC model choice via random forests. arXiv preprint arXiv:1406.6288 (2014)
- Quiroz, M., Villani, M., Kohn, R.: Speeding up MCMC by efficient data subsampling. arXiv preprint arXiv:1404.4178 (2014)
- Raguet, H., Fadili, J., Peyré, G.: A generalized forward–backward splitting. SIAM J. Imaging Sci. **6**, 1199–1226 (2013)
- Ramakrishnan, U., Hadly, E.: Using phylochronology to reveal cryptic population histories: review and synthesis of 29 ancient DNA studies. Mol. Ecol. **18**, 1310–1330 (2009)
- Richardson, S., Bottolo, L., Rosenthal, J.: Bayesian models for sparse regression analysis of high dimensional data. Bayesian Stat. **9**, 539–569 (2010)
- Richardson, S., Green, P.: On Bayesian analysis of mixtures with an unknown number of components (with discussion). J. R. Stat. Soc. B **59**, 731–792 (1997)
- Robert, C., Casella, G.: A short history of Markov chain Monte Carlo: subjective recollections from incomplete data. Stat. Sci. **26**, 102–115 (2011)
- Robert, C., Cornuet, J.-M., Marin, J.-M., Pillai, N.: Lack of confidence in ABC model choice. Proc. Natl. Acad. Sci. USA **108**(37), 15112–15117 (2011)
- Roberts, G., Gelman, A., Gilks, W.: Weak convergence and optimal scaling of random walk Metropolis algorithms. Ann. Appl. Probab. **7**, 110–120 (1997)
- Roberts, G., Rosenthal, J.: Optimal scaling of discrete approximations to Langevin diffusions. J. R. Stat. Soc. B **60**, 255–268 (1998)
- Roberts, G., Rosenthal, J.: Optimal scaling for various Metropolis–Hastings algorithms. Stat. Sci. **16**, 351–367 (2001)
- Roberts, G., Rosenthal, J.: General state space Markov chains and MCMC algorithms. Probab. Surv. **1**, 20–71 (2004)
- Roberts, G., Rosenthal, J.: Coupling and ergodicity of adaptive Markov chain Monte Carlo algorithms. J. Appl. Probab. **44**, 458 (2007)
- Roberts, G., Rosenthal, J.: Examples of adaptive MCMC. J. Comput. Graph. Stat. **18**, 349–367 (2009)
- Roberts, G., Stramer, O.: Langevin diffusions and Metropolis–Hastings algorithms. Methodol. Comput. Appl. Probab. **4**, 337–357 (2002)
- Roberts, G., Tweedie, R.: Exponential convergence of Langevin distributions and their discrete approximations. Bernoulli **2**, 341–363 (1996a)
- Roberts, G., Tweedie, R.: Geometric convergence and central limit theorems for multidimensional Hastings and Metropolis algorithms. Biometrika **83**, 95–110 (1996b)
- Roberts, G.O.: Markov chain concepts related to sampling algorithms. In: Markov Chain Monte Carlo in Practice, p. 57. Chapman and Hall, London (1996)
- Roberts, G.O.: Optimal Metropolis algorithms for product measures on the vertices of a hypercube. Stoch. Stoch. Rep. **62**, 275–283 (1998)
- Roberts, G.O., Rosenthal, J.S.: Minimising MCMC variance via diffusion limits, with an application to simulated tempering. Ann. Appl. Probab. **24**, 131–149 (2014)
- Roberts, G.O., Stramer, O.: On inference for partially observed nonlinear diffusion models using the Metropolis–Hastings algorithm. Biometrika **88**, 603–621 (2001)
- Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. **14**, 877–898 (1976)
- Rossky, P., Doll, J., Friedman, H.: Brownian dynamics as smart Monte Carlo simulation. J. Chem. Phys. **69**, 4628–4633 (1978)
- Rubin, D.: Bayesianly justifiable and relevant frequency calculations for the applied statistician. Ann. Stat. **12**, 1151–1172 (1984)
- Rubinstein, R.Y.: Simulation and the Monte Carlo Method. Wiley, New York (1981)
- Rudolf, D., Schweizer, N.: Perturbation theory for Markov chains via Wasserstein distance. arXiv preprint arXiv:1503.04123 (2015)
- Rue, H., Martino, S., Chopin, N.: Approximate Bayesian inference for latent Gaussian models using integrated nested Laplace approximations (with discussion). J. R. Stat. Soc. B **71**, 319–392 (2009)
- Saksman, E., Vihola, M.: On the ergodicity of the adaptive Metropolis algorithm on unbounded domains. Ann. Appl. Probab. **20**, 2178–2203 (2010)
- Salimans, T., Knowles, D.A.: Fixed-form variational posterior approximation through stochastic linear regression. Bayesian Anal. **8**, 837–882 (2013)
- Saloff-Coste, L.: Lectures on finite Markov chains. In: Lectures on Probability Theory and Statistics (Saint-Flour, 1996). Lecture Notes in Mathematics, vol. 1665, pp. 301–413. Springer, Berlin (1997)
- Schreck, A., Fort, G., Corff, S.L., Moulines, E.: A shrinkage-thresholding Metropolis adjusted Langevin algorithm for Bayesian variable selection. arXiv preprint arXiv:1312.5658 (2013)
- Schrödle, B., Held, L.: A primer on disease mapping and ecological regression using INLA. Comput. Stat. **26**, 241–258 (2011)
- Scott, S., Blocker, A., Bonassi, F., Chipman, H., George, E., McCulloch, R.: Bayes and big data: the consensus Monte Carlo algorithm. In: EFaBBayes 250 Conference, vol. 16 (2013)
- Searle, S., Casella, G., McCulloch, C.: Variance Components. Wiley, New York (1992)
- Sherlock, C.: Optimal scaling for the pseudo-marginal random walk Metropolis: insensitivity to the noise generating mechanism. arXiv preprint arXiv:1408.4344 (2014)
- Sherlock, C., Thiery, A.H., Roberts, G.O., Rosenthal, J.S.: On the efficiency of pseudo-marginal random walk Metropolis algorithms. Ann. Stat. **43**, 238–275 (2014)
- Smith, A., Skene, A., Shaw, J., Naylor, J., Dransfield, M.: The implementation of the Bayesian paradigm. Commun. Stat. Theory Methods **14**, 1079–1102 (1985)
- Smith, A., Skene, A.M., Shaw, J.E.H., Naylor, J.C.: Progress with numerical and graphical methods for practical Bayesian statistics. J. R. Stat. Soc. D **36**, 75–82 (1987)
- Solonen, A., Ollinaho, P., Laine, M., Haario, H., Tamminen, J., Järvinen, H.: Efficient MCMC for climate model parameter estimation: parallel adaptive chains and early rejection. Bayesian Anal. **7**, 715–736 (2012)
- Stan Development Team: STAN: A C++ Library for Probability and Sampling, Version 2.5.0. http://mc-stan.org/ (2014)
- Stephens, M.: Bayesian analysis of mixture models with an unknown number of components—an alternative to reversible jump methods. Ann. Stat. **28**, 40–74 (2000)
- Stramer, O., Tweedie, R.: Langevin-type models I: diffusions with given stationary distributions and their discretizations. Methodol. Comput. Appl. Probab.
**1**, 283–306 (1999a)
- Stramer, O., Tweedie, R.: Langevin-type models II: self-targeting candidates for MCMC algorithms. Methodol. Comput. Appl. Probab. **1**, 307–328 (1999b)
- Strathmann, H., Sejdinovic, D., Girolami, M.: Unbiased Bayes for big data: paths of partial posteriors. arXiv preprint arXiv:1501.03326 (2015)
- Strid, I.: Efficient parallelisation of Metropolis–Hastings algorithms using a prefetching approach. Comput. Stat. Data Anal. **54**, 2814–2835 (2010)
- Suchard, M., Wang, Q., Chan, C., Frelinger, J., Cron, A., West, M.: Understanding GPU programming for statistical computation: studies in massively parallel massive mixtures. J. Comput. Graph. Stat. **19**, 418–438 (2010)
- Tanner, M., Wong, W.: The calculation of posterior distributions by data augmentation. J. Am. Stat. Assoc. **82**, 528–550 (1987)
- Tavaré, S., Balding, D., Griffith, R., Donnelly, P.: Inferring coalescence times from DNA sequence data. Genetics **145**, 505–518 (1997)
- Taylor, K.: PhD thesis, University of Warwick (2014)
- Teh, Y.W., Thiéry, A., Vollmer, S.: Consistency and fluctuations for stochastic gradient Langevin dynamics. arXiv preprint arXiv:1409.0578 (2014)
- Templeton, A.: Statistical hypothesis testing in intraspecific phylogeography: nested clade phylogeographical analysis vs. Approximate Bayesian computation. Mol. Ecol. **18**(2), 319–331 (2008)
- Templeton, A.: Coherent and incoherent inference in phylogeography and human evolution. Proc. Natl. Acad. Sci. USA **107**(14), 6376–6381 (2010)
- Tierney, L.: A note on Metropolis–Hastings kernels for general state spaces. Ann. Appl. Probab. **8**, 1–9 (1998)
- Toni, T., Welch, D., Strelkowa, N., Ipsen, A., Stumpf, M.: Approximate Bayesian computation scheme for parameter inference and model selection in dynamical systems. J. R. Soc. Interface **6**, 187–202 (2009)
- VanDerwerken, D.N., Schmidler, S.C.: Parallel Markov chain Monte Carlo. arXiv preprint arXiv:1312.7479 (2013)
- Verdinelli, I., Wasserman, L.: Bayesian analysis of outlier problems using the Gibbs sampler. Stat. Comput. **1**, 105–117 (1991)
- Verdu, P., Austerlitz, F., Estoup, A., Vitalis, R., Georges, M., Théry, S., Froment, A., Le Bomin, S., Gessain, A., Hombert, J.-M., Van der Veen, L., Quintana-Murci, L., Bahuchet, S., Heyer, E.: Origins and genetic diversity of pygmy hunter-gatherers from Western Central Africa. Curr. Biol. **19**, 312–318 (2009)
- Vihola, M.: Robust adaptive Metropolis algorithm with coerced acceptance rate. Stat. Comput. **22**, 997–1008 (2012)
- Wakefield, J., Gelfand, A., Smith, A.: Efficient generation of random variates via the ratio-of-uniforms method. Stat. Comput. **1**, 129–133 (1991)
- Wang, F., Landau, D.P.: Efficient, multiple-range random walk algorithm to calculate the density of states. Phys. Rev. Lett. **86**, 2050–2053 (2001)
- Wang, X., Dunson, D.: Parallelizing MCMC via Weierstrass sampler. arXiv preprint arXiv:1312.4605 (2013)
- Wang, Z., Mohamed, S., de Freitas, N.: Adaptive Hamiltonian and Riemann manifold Monte Carlo. In: Proceedings of the 30th International Conference on Machine Learning, pp. 1462–1470 (2013)
- Wegmann, D., Excoffier, L.: Bayesian inference of the demographic history of chimpanzees. Mol. Biol. Evol. **27**, 1425–1435 (2010)
- Welling, M., Teh, Y.W.: Bayesian learning via stochastic gradient Langevin dynamics. In: Proceedings of the 28th International Conference on Machine Learning (ICML-11), pp. 681–688 (2011)
- White, T.: Hadoop: The Definitive Guide. O’Reilly Media, Sebastopol (2012)
- Whiteley, N., Andrieu, C., Doucet, A.: Efficient Bayesian inference for switching state-space models using discrete particle Markov chain Monte Carlo methods. arXiv preprint arXiv:1011.2437 (2010)
- Wilkinson, D.: Parallel Bayesian computation. In: Kontoghiorghes, E.J. (ed.) Handbook of Parallel Computing and Statistics, pp. 481–512. Marcel Dekker/CRC Press, New York (2005)
- Wilkinson, D.: The particle marginal Metropolis–Hastings (PMMH) particle MCMC algorithm. Darren Wilkinson’s research blog. https://darrenjw.wordpress.com/2011/05/17/the-particle-marginal-metropolis-hastings-pmmh-particle-mcmc-algorithm/ (2011a)
- Wilkinson, D.J.: Stochastic Modelling for Systems Biology, 2nd edn. CRC Press, New York (2011)
- Wilkinson, R.: Approximate Bayesian computation (ABC) gives exact results under the assumption of model error. Stat. Appl. Genet. Mol. Biol. **12**, 129–141 (2013)
- Xifara, T., Sherlock, C., Livingstone, S., Byrne, S., Girolami, M.: Langevin diffusions and the Metropolis-adjusted Langevin algorithm. Stat. Probab. Lett. **91**, 14–19 (2014)

## Copyright information

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.