Computing marginal likelihoods via the Fourier integral theorem and pointwise estimation of posterior densities

Abstract

In this paper, we present a novel approach to the estimation of a density function at a specific chosen point. With this approach, we can estimate a normalizing constant, or equivalently compute a marginal likelihood, by focusing on estimating a posterior density function at a point. Relying on the Fourier integral theorem, the proposed method is capable of producing quick and accurate estimates of the marginal likelihood, regardless of how samples are obtained from the posterior; that is, it uses the posterior output generated by a Markov chain Monte Carlo sampler to estimate the marginal likelihood directly, with no modification to the form of the estimator on the basis of the type of sampler used. Thus, even for models with complicated specifications, such as those involving challenging hierarchical structures, or for Markov chains obtained from a black-box MCMC algorithm, the method provides a straightforward means of quickly and accurately estimating the marginal likelihood. In addition to developing theory to support the favorable behavior of the estimator, we also present a number of illustrative examples.

References

  • Abramowitz, M., Stegun, I.A.: Sine and cosine integrals. In: Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, pp. 231–233. Dover (1972)

  • Abrams, D.I., Goldman, A.I., Launer, C., Korvick, J.A., Neaton, J.D., Crane, L.R., Grodesky, M., Wakefield, S., Muth, K., Kornegay, S., et al.: A comparative trial of didanosine or zalcitabine after treatment with zidovudine in patients with human immunodeficiency virus infection. N. Engl. J. Med. 330(10), 657–662 (1994)

  • Botev, Z., L’Ecuyer, P., Tuffin, B.: Markov chain importance sampling with applications to rare event probability estimation. Stat. Comput. 23, 271–285 (2013)

  • Carlin, B.P., Chib, S.: Bayesian model choice via Markov chain Monte Carlo methods. J. R. Stat. Soc. Ser. B (Methodol.) 57(3), 473–484 (1995)

  • Chan, J., Eisenstat, E.: Marginal likelihood estimation with the cross-entropy method. Econ. Rev. 34, 256–285 (2015)

  • Chen, M.H.: Computing marginal likelihoods from a single MCMC output. Stat. Neerlandica 59, 256–285 (2005)

  • Chen, M.H., Shao, Q.M.: On Monte Carlo methods for estimating ratios of normalizing constants. Ann. Stat. 25(4), 1563–1594 (1997)

  • Chib, S.: Marginal likelihood from the Gibbs output. J. Am. Stat. Assoc. 90(432), 1313–1321 (1995)

  • Chib, S., Carlin, B.P.: On MCMC sampling in hierarchical longitudinal models. Stat. Comput. 9(1), 17–26 (1999)

  • Chib, S., Greenberg, E.: Analysis of multivariate probit models. Biometrika 85(2), 347–361 (1998)

  • Chib, S., Jeliazkov, I.: Marginal likelihood from the Metropolis-Hastings output. J. Am. Stat. Assoc. 96(453), 270–281 (2001)

  • Clyde, M., George, E.I.: Model uncertainty. Stat. Sci. pp. 81–94 (2004)

  • Dellaportas, P., Forster, J.J., Ntzoufras, I.: On Bayesian model and variable selection using MCMC. Stat. Comput. 12(1), 27–36 (2002)

  • Folland, G.B.: Fourier analysis and its applications, vol. 4. Am. Math. Soc. (2009)

  • Friel, N., Pettitt, A.N.: Marginal likelihood estimation via power posteriors. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 70(3), 589–607 (2008)

  • Friel, N., Wyse, J.: Estimating the evidence: a review. Stat. Neerl. 66, 288–308 (2012)

  • Frühwirth-Schnatter, S.: Estimating marginal likelihoods for mixture and Markov switching models using bridge sampling techniques. Economet. J. 7(1), 143–167 (2004)

  • Gelman, A., Meng, X.L.: Simulating normalizing constants: from importance sampling to bridge sampling to path sampling. Stat. Sci. pp. 163–185 (1998)

  • Green, P.J.: Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika 82(4), 711–732 (1995)

  • Greene, W.H.: Econometric Analysis. Prentice Hall, Hoboken (1997)

  • Gronau, Q., Singmann, H., Wagenmakers, E.J., et al.: bridgesampling: An R package for estimating normalizing constants. J. Stat. Softw. 92(10) (2020)

  • Han, C., Carlin, B.P.: Markov chain Monte Carlo methods for computing Bayes factors: a comparative review. J. Am. Stat. Assoc. 96(455), 1122–1132 (2001)

  • Ho, N., Walker, S.G.: Multivariate smoothing via the Fourier integral theorem and Fourier kernel. arXiv preprint arXiv:2012.14482 (2020)

  • Knuth, K.H., Habeck, M., Malakar, N.K., Mubeen, A.M., Placek, B.: Bayesian evidence and model selection. Digit. Signal Process. 47, 50–67 (2015)

  • Lenk, P.: Simulation pseudo-bias correction to the harmonic mean estimator of integrated likelihoods. J. Comput. Graph. Stat. 18(4), 941–960 (2009)

  • Lewis, S.M., Raftery, A.E.: Estimating Bayes factors via posterior simulation with the Laplace-metropolis estimator. J. Am. Stat. Assoc. 92, 648–655 (1997)

  • Llorente, F., Martino, L., Lopez-Santiago, J.: Marginal likelihood computation for model selection and hypothesis testing: an extensive review (2021)

  • Meng, X.L., Wong, W.H.: Simulating ratios of normalizing constants via a simple identity: a theoretical exploration. Stat. Sin. pp. 831–860 (1996)

  • Mira, A., Nicholls, G.: Bridge estimation of the probability density at a point. Stat. Sin. pp. 603–612 (2004)

  • Neal, R.M.: Annealed importance sampling. Stat. Comput. 11, 125–139 (2001)

  • Newton, M.A., Raftery, A.E.: Approximate Bayesian inference with the weighted likelihood bootstrap. J. R. Stat. Soc. B 56, 3–48 (1994)

  • Pajor, A.: Estimating the marginal likelihood using the arithmetic mean identity. Bayesian Anal. 12(1), 261–287 (2017)

  • Perrakis, K., Ntzoufras, I., Tsionas, E.G.: On the use of marginal posteriors in marginal likelihood estimation via importance sampling. Comput. Stat. Data Anal. 77, 54–69 (2014)

  • Priestley, H.A.: Introduction to complex analysis. Oxford (1985)

  • Raftery, A.: Hypothesis testing and model selection. In: Gilks, W.R., Richardson, S., Spiegelhalter, D.J. (eds.) Markov Chain Monte Carlo in Practice, Chapter 10. Chapman & Hall/CRC, Boca Raton (1996)

  • Raftery, A.E., Newton, M.A., Satagopan, J.M., Krivitsky, P.N.: Estimating the integrated likelihood via posterior simulation using the harmonic mean identity. In: Bernardo, J.M., Bayarri, M.J., Berger, J.O., Dawid, A.P., Heckerman, D., Smith, A.F.M., West, M. (eds.) Bayesian Statistics 8, pp. 1–45. Oxford University Press, Oxford (2007)

  • Ritter, C., Tanner, M.A.: Facilitating the Gibbs sampler: the Gibbs stopper and the Griddy-Gibbs sampler. J. Am. Stat. Assoc. 87(419), 861–868 (1992)

  • Robert, C.P., Wraith, D.: Computational methods for Bayesian model choice. In: Aip Conference Proceedings, vol. 1193, pp. 251–262. American Institute of Physics (2009)

  • Silverman, B.W.: Algorithm AS 176: Kernel density estimation using the fast Fourier transform. J. R. Stat. Soc. Ser. C (Appl. Stat.) 31(1), 93–99 (1982)

  • Skilling, J.: Nested sampling for general Bayesian computation. Bayesian Anal. 1, 833–860 (2006)

  • Wand, M.P., Jones, M.C.: Kernel Smoothing. CRC Press, Boca Raton (1994)

  • Weinberg, M.D.: Computing the Bayes factor from a Markov chain Monte Carlo simulation of the posterior distribution. Bayesian Anal. 7, 737–770 (2012)

  • Williams, E.J.: Regression Analysis, vol. 14. Wiley, Hoboken (1959)

Acknowledgements

The authors are grateful for the comments and suggestions of three reviewers which have allowed us to significantly improve the paper.

Author information

Corresponding author

Correspondence to Stephen G. Walker.

Ethics declarations

Conflicts of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file 1 (pdf 385 KB)

Appendix

1.1 Proof of Theorem 1:

We first show that

$$\begin{aligned} I(R)=\int _{-\infty }^\infty \cos (Rx)\,\phi (x)\,\hbox {d}x=e^{-\frac{1}{2}R^2} \end{aligned}$$
(4)

for all \(R\ge 0\). Now,

$$\begin{aligned} I'(R)=-\int _{-\infty }^\infty \sin (Rx)\,x\,\phi (x)\,\hbox {d}x, \end{aligned}$$

and using integration by parts, with \(x\,\phi (x)=-\phi '(x)\), we have \(I'(R)=-R\,I(R)\) and hence (4) holds since \(I(0)=1\).

Now consider

$$\begin{aligned} \begin{aligned} I(R)&=\int _{-\infty }^\infty \cos (Rx)\,\phi (x-\mu )\,\hbox {d}x \\&=\int _{-\infty }^\infty \cos (R(x+\mu ))\,\phi (x)\,\hbox {d}x \end{aligned} \end{aligned}$$

and recall

$$\begin{aligned}\cos (R(x+\mu ))=\cos (Rx)\cos (R\mu )-\sin (Rx)\sin (R\mu ),\end{aligned}$$

and so,

$$\begin{aligned} I(R)=\cos (R\mu )\,e^{-{\frac{1}{2}} R^2} \end{aligned}$$

since \(\sin (Rx)\) is an odd function. Further, it is straightforward to show that

$$\begin{aligned} \begin{aligned} \int _{-\infty }^\infty \cos (R(y-x))\,&\phi ((x-\mu )/\sigma )/\sigma \,\hbox {d}x \\&=\cos (R(y-\mu ))\,e^{-{\frac{1}{2}}\sigma ^2R^2}, \end{aligned} \end{aligned}$$
(5)

using suitable transforms.

If

$$\begin{aligned} J(R)=\int _{-\infty }^\infty \frac{\sin (Rx)}{x}\,\phi (x)\,\hbox {d}x, \end{aligned}$$

then \(J'(R)\) is given by (4), so

$$\begin{aligned} J(R)=\int _{0}^{R}\,e^{-{\frac{1}{2}} s^2}\,\hbox {d}s \end{aligned}$$

since \(J(0)=0\). Hence,

$$\begin{aligned} { \begin{aligned} J(y;\mu ,\sigma ,R)&=\int _{-\infty }^\infty \frac{\sin (R(y-x))}{y-x}\,\phi ((x-\mu )/\sigma )/\sigma \,\hbox {d}x \\&=\int _0^R e^{-{\frac{1}{2}}\sigma ^2s^2}\,\cos (s(y-\mu ))\,\hbox {d}s. \end{aligned} } \end{aligned}$$

We want to look at

$$\begin{aligned} \mathrm{E}{\widehat{f}}(y)-f(y)=\frac{1}{\pi }J(y;\mu ,\sigma ,R)-\phi ((y-\mu )/\sigma )/\sigma , \end{aligned}$$

and from (4), we have that

$$\begin{aligned} \int _0^\infty e^{-{\frac{1}{2}}\sigma ^2 s^2}\,\cos (s(y-\mu ))\,\hbox {d}s=\pi \,\phi ((y-\mu )/\sigma )/\sigma . \end{aligned}$$

Therefore,

$$\begin{aligned}\begin{aligned} \pi |\mathrm{E}{\widehat{f}}(y)-f(y)|&=\left| \int _R^\infty e^{-{\frac{1}{2}}\sigma ^2 s^2}\,\cos (s(y-\mu ))\,\hbox {d}s\right| \\&\le \int _R^\infty e^{-{\frac{1}{2}}\sigma ^2 s^2}\,\hbox {d}s <\frac{1}{\sigma ^2R}e^{-\frac{1}{2}\sigma ^2R^2}. \end{aligned}\end{aligned}$$

The final inequality follows since \(e^{-\frac{1}{2}\sigma ^2s^2}\le (s/R)\,e^{-\frac{1}{2}\sigma ^2s^2}\) for \(s\ge R\), and the right-hand side integrates in closed form to \(\frac{1}{\sigma ^2R}e^{-\frac{1}{2}\sigma ^2R^2}\). This completes the proof.
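
As a quick sanity check of identity (4), the integral can be compared numerically against its closed form. The following sketch is purely illustrative and not part of the proof; it assumes NumPy and SciPy are available.

```python
# Numerical sanity check of identity (4): I(R) = exp(-R^2 / 2).
# Illustrative only; assumes NumPy and SciPy, not part of the authors' proof.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def I(R):
    # I(R) = integral over the real line of cos(Rx) * phi(x) dx
    val, _ = quad(lambda x: np.cos(R * x) * norm.pdf(x), -np.inf, np.inf)
    return val

for R in (0.0, 0.5, 1.0, 2.0):
    print(f"R={R}: numeric={I(R):.6f}, closed form={np.exp(-0.5 * R**2):.6f}")
```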

1.2 General theory:

If \(f(x)\) is integrable, piecewise smooth, and piecewise continuous on \({\mathbb {R}}\), defined at its points of discontinuity so as to satisfy \(f(x)=\frac{1}{2}\left[ f(x-)+f(x+)\right] \) for all \(x\), then, as a consequence of the Fourier inversion theorem (see Folland (2009) for details), we have that

$$\begin{aligned} f(x)=\lim _{R\rightarrow \infty }\int _{-\infty }^\infty \frac{\sin (R(x-y))}{\pi (x-y)}f(y)dy. \end{aligned}$$

Now, assume that observations \(X_1,\dots ,X_n\) are i.i.d. from \(f\), a smooth density function on \({\mathbb {R}}\). Then, consider the (Monte Carlo) estimator

$$\begin{aligned} {\widehat{f}}_n(x)=\frac{1}{n}\sum _{i=1}^n\frac{\sin (R(x-x_i))}{\pi (x-x_i)}. \end{aligned}$$

For the mean, we see that, for \(R\rightarrow \infty \),

$$\begin{aligned} \begin{aligned} E\left[ {\widehat{f}}_n\left( x\right) \right]&=\frac{1}{n}\sum _{i=1}^nE\left[ \frac{\sin \left( R\left( x-x_i\right) \right) }{\pi (x-x_i)}\right] \\&=\frac{1}{\pi }\int _{\mathbb {R}}\frac{\sin \left( R\left( x-y\right) \right) }{x-y}\,f(y)\,dy \\&=\frac{1}{\pi }\int _{\mathbb {R}}\frac{\sin (u)}{u}f\left( x-u/R\right) du\\&=\frac{1}{\pi }\int _{\mathbb {R}}\frac{\sin (u)}{u}f\left( x\right) du+O\left( {1}/{R}\right) \\&=f\left( x\right) +O\left( {1}/{R}\right) . \end{aligned} \end{aligned}$$

For the variance, we see that, for \(R\rightarrow \infty \),

$$\begin{aligned} \text {Var}\left[ {\widehat{f}}_n\left( x\right) \right]&=\frac{1}{n^2}\sum _{i=1}^n\text {Var}\left[ \frac{\sin \left( R\left( x-x_i\right) \right) }{\pi (x-x_i)}\right] \\&\le \frac{1}{\pi ^2n}E\left[ \frac{\sin ^2\left( R\left( x-x_1\right) \right) }{\left( x-x_1\right) ^2}\right] \\&=\frac{1}{\pi ^2n}\int _{\mathbb {R}}\frac{\sin ^2\left( R\left( x-y\right) \right) }{\left( x-y\right) ^2}f(y)dy \\&=\frac{R}{\pi ^2n}\int _{\mathbb {R}}\frac{\sin ^2(u)}{u^2}f\left( x-u/R\right) du\\&=\frac{R}{\pi ^2n}\int _{\mathbb {R}}\frac{\sin ^2(u)}{u^2}\left[ f\left( x\right) +O\left( 1/R\right) \right] du \\&=O(R/n). \end{aligned}$$
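
To make the estimator concrete, here is a minimal sketch of the univariate sinc estimator above applied to simulated draws; the sample size, the value \(R=20\), and the standard-normal test data are illustrative assumptions, not settings used in the paper.

```python
# Minimal sketch of the univariate Fourier-integral (sinc-kernel) estimator.
# The test data and R = 20 are illustrative assumptions only.
import numpy as np

def fourier_density_estimate(x, samples, R):
    """Estimate f(x) by (1/n) * sum_i sin(R (x - x_i)) / (pi (x - x_i))."""
    u = x - np.asarray(samples)
    # np.sinc(t) = sin(pi t) / (pi t), so sin(R u)/(pi u) = (R / pi) * sinc(R u / pi)
    return np.mean((R / np.pi) * np.sinc(R * u / np.pi))

rng = np.random.default_rng(0)
samples = rng.standard_normal(10_000)                   # stand-in for MCMC output
print(fourier_density_estimate(0.0, samples, R=20.0))   # ~ 1/sqrt(2*pi) = 0.3989
```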

Extending to higher dimensions, we find, as a consequence of the Fourier inversion theorem, that

$$\begin{aligned} \begin{aligned} f(x)&=\lim _{R_1\rightarrow \infty }\dots \lim _{R_d\rightarrow \infty }\int _{{\mathbb {R}}^d}\\&\quad \times \left( \prod _{j=1}^d\frac{\sin (R_j(x_j-y_j))}{\pi (x_j-y_j)}\right) f(y)dy. \end{aligned} \end{aligned}$$

In particular, assuming that \(R_j=R\) for \(j=1,\dots ,d\), we see that the estimator takes the form

$$\begin{aligned} {\widehat{f}}_n\left( x\right) =\frac{1}{n}\sum _{i=1}^n\prod _{j=1}^d\frac{\sin \left( R\left( x_j-x_{ji}\right) \right) }{\pi \left( x_j-x_{ji}\right) }. \end{aligned}$$

Then, as an extension of the result for univariate variance, assuming mutual independence of all components of \(X\), we have that

$$\begin{aligned} \begin{aligned} \text {Var}\left[ {\widehat{f}}_n\left( x\right) \right]&=\frac{1}{n^2}\sum _{i=1}^n\text {Var}\left[ \prod _{j=1}^d\frac{\sin \left( R\left( x_j-x_{ji}\right) \right) }{\pi \left( x_j-x_{ji}\right) }\right] \\&\le \frac{1}{\left( \pi ^{d}\right) ^2n}E\left[ \prod _{j=1}^d\frac{\sin ^2\left( R\left( x_j-x_{j1}\right) \right) }{\left( x_j-x_{j1}\right) ^2}\right] \\&\le \frac{1}{\left( \pi ^{d}\right) ^2n}\prod _{j=1}^dE\left[ \frac{\sin ^2\left( R\left( x_j-x_{j1}\right) \right) }{\left( x_j-x_{j1}\right) ^2}\right] \\&=O\left( R^d/n\right) . \end{aligned} \end{aligned}$$
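
The multivariate estimator is a direct product-form extension. The sketch below evaluates it at a point from an (n, d) array of MCMC draws; the commented final lines indicate how a pointwise posterior ordinate could then be turned into a marginal likelihood via the identity \(m(x)=f(x\mid \theta ^*)\,\pi (\theta ^*)/\pi (\theta ^*\mid x)\). The names log_likelihood and log_prior are placeholders, not the authors' code.

```python
# Minimal sketch of the d-dimensional product-sinc estimator applied to posterior draws.
import numpy as np

def fourier_density_estimate_nd(theta_star, draws, R):
    """draws: (n, d) array of MCMC output; theta_star: length-d evaluation point."""
    u = theta_star - np.asarray(draws)               # (n, d) coordinate-wise differences
    kern = (R / np.pi) * np.sinc(R * u / np.pi)      # sin(R u) / (pi u), per coordinate
    return np.mean(np.prod(kern, axis=1))            # average of the row products

# Illustrative use with placeholder model functions (not the authors' code):
# posterior_ordinate = fourier_density_estimate_nd(theta_star, draws, R=20.0)
# log_marginal = log_likelihood(x, theta_star) + log_prior(theta_star) - np.log(posterior_ordinate)
```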

Details for Chib and Jeliazkov (2001)

In moving from Gibbs output to Metropolis–Hastings output, we can consider the method of bridge sampling; see, for example, Gronau et al. (2020). With this method, the estimate of the marginal likelihood, which can also be seen as the normalizing constant for a posterior density function, is given by

$$\begin{aligned} {\widehat{m}}(x)=\frac{n_1^{-1}\sum _{j=1}^{n_1} h(\widetilde{\theta }_j)p(x\mid {\widetilde{\theta }}_j)\pi ({\widetilde{\theta }}_j)}{n_2^{-1}\sum _{j=1}^{n_2} h(\theta _j^*)\,g(\theta _j^*)}, \end{aligned}$$

where the \(({\widetilde{\theta }}_j)_{j=1:n_1}\) are i.i.d. from the importance density g, and the \((\theta _j^*)_{j=1:n_2}\) are i.i.d. from the posterior density. The choices to be made include the bridge function h and the importance density g. As previously mentioned, Mira and Nicholls (2004) have identified the estimator of Chib and Jeliazkov (2001) as a bridge sampling estimator.
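
For comparison, here is a minimal sketch of the bridge-sampling estimator displayed above, using the geometric bridge \(h(\theta )\propto 1/\sqrt{q(\theta )g(\theta )}\), where \(q\) denotes the unnormalized posterior (likelihood times prior). The interface and the bridge choice are illustrative assumptions; in practice the iteratively defined optimal bridge of Meng and Wong (1996) is typically used, as in Gronau et al. (2020).

```python
# Minimal sketch of the bridge-sampling estimator of m(x) with a geometric bridge.
# log_q, log_g, sample_g are placeholders for the user's model and importance density,
# assumed vectorized over rows of theta; this is not the paper's own method.
import numpy as np

def bridge_estimate(log_q, log_g, sample_g, posterior_draws, n1=5_000):
    theta_g = sample_g(n1)                                  # tilde-theta_j ~ g
    theta_p = np.asarray(posterior_draws)                   # theta*_j ~ posterior (MCMC output)
    log_h = lambda th: -0.5 * (log_q(th) + log_g(th))       # geometric bridge, up to a constant
    num = np.mean(np.exp(log_h(theta_g) + log_q(theta_g)))  # n1^{-1} sum_j h(theta~_j) q(theta~_j)
    den = np.mean(np.exp(log_h(theta_p) + log_g(theta_p)))  # n2^{-1} sum_j h(theta*_j) g(theta*_j)
    return num / den                                        # estimate of m(x)
```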

In the context of MCMC chains produced by the Metropolis–Hastings algorithm, Chib and Jeliazkov (2001) introduce a more complicated method for marginal likelihood estimation. The Metropolis–Hastings algorithm is more flexible than the Gibbs algorithm insofar as not all the normalizing constants of the full conditional densities need to be known to run the Metropolis–Hastings algorithm. For the Metropolis–Hastings algorithm, the estimation of the posterior ordinate \(\pi (\theta ^*|x)\) given the posterior sample \(\{\theta ^{(1)},\dots ,\theta ^{(M)}\}\) requires the specification of a proposal density \(q(\theta ,\theta '|x)\) for the transition from \(\theta \) to \(\theta '\).

With this approach, Chib and Jeliazkov (2001) provide the following estimate of the posterior ordinate, from which the marginal likelihood is obtained: \({\widehat{\pi }}\left( \theta ^*\mid x\right) =\)

$$\begin{aligned} {\begin{aligned} \prod _{r=1}^p\frac{M^{-1}\sum _{s=1}^M\alpha \left( \theta _r^{(s)},\theta _{r}^*\mid x,\psi _{r-1}^*,\psi ^{r+1,(s)}\right) q\left( \theta _{r}^{(s)},\theta _{r}^*\mid x,\psi _{r-1}^*,\psi ^{r+1,(s)}\right) }{N^{-1}\sum _{t=1}^N\alpha \left( \theta _r^{*},\theta _{r}^{(t)}\mid x,\psi _{r-1}^*,\psi ^{r+1,(t)}\right) }, \end{aligned}} \end{aligned}$$

where \(\alpha \left( \theta _r,\theta _r'|\psi _{r-1},\psi ^{r+1}\right) =\)

$$\begin{aligned} { \begin{aligned}&\min \left\{ 1,\frac{f\left( x\mid \theta _r',\psi _{r-1},\psi ^{r+1}\right) \pi \left( \theta _r',\theta _{-r}\right) }{f\left( x\mid \theta _r,\psi _{r-1},\psi ^{r+1}\right) \pi \left( \theta _r,\theta _{-r}\right) }\right. \\&\quad \left. \times \frac{q\left( \theta '_r,\theta _r\mid x,\psi _{r-1},\psi ^{r+1}\right) }{q\left( \theta _r,\theta '_r\mid x,\psi _{r-1},\psi ^{r+1}\right) }\right\} , \end{aligned} }\end{aligned}$$

with \(\psi _{r-1}\) denoting the parameters (or blocks of parameters) up to and including block \(r-1\), and \(\psi ^{r+1}\) denoting those from block \(r+1\) onward. In particular, the conditional ordinate \(\pi (\theta _r^*\mid x,\theta _1^*,\dots ,\theta _{r-1}^*)\) is given by \(\pi (\theta _r^*\mid x,\theta _1^*,\dots ,\theta _{r-1}^*)=\)

$$\begin{aligned} { \begin{aligned} \frac{\hbox {E}_1\{\alpha (\theta _r,\theta _{r}^*\mid x,\psi _{r-1}^*,\psi ^{r+1})q(\theta _r,\theta _r^*\mid x,\psi ^*_{r-1},\psi ^{r+1})\}}{\hbox {E}_2\{\alpha (\theta _r^*,\theta _r\mid x,\psi _{r-1}^*,\psi ^{r+1})\}} \end{aligned} }\end{aligned}$$

where \(\hbox {E}_1\) denotes expectation with respect to the conditional posterior \(\pi (\theta _r,\psi ^{r+1}\mid x,\psi ^*_{r-1})\) and \(\hbox {E}_2\) denotes expectation with respect to the conditional product measure

$$\begin{aligned} \pi (\psi ^{r+1}\mid x,\psi _r^*)\,\,q(\theta _r^*,\theta _r\mid x,\psi _{r-1}^*,\psi ^{r+1}). \end{aligned}$$

Here, we have used exactly the notation in Chib and Jeliazkov (2001).

In the case of Metropolis–Hastings output, as with Gibbs output, multiple runs of the sampling algorithm are in general needed to compute the numerator terms, except when there is only a single block. Moreover, for Metropolis–Hastings output, the denominator of the marginal likelihood estimate involves a second expectation, which requires separate treatment and can prove time-consuming for certain likelihood evaluations, as shown in Sect. 4.7.

The algorithm must also be modified so that the extra information needed to compute each conditional ordinate is properly stored. Although any subsequent runs simulate from a smaller set of distributions, the added sampling not only takes time but also increases the chance of user error during implementation, especially as different expectations are taken across the parameters (or blocks) in the numerator and the denominator.
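
To make the single-block structure of this estimator concrete, the following sketch computes the posterior ordinate from Metropolis–Hastings output and one additional set of proposal draws; the multi-block display above additionally requires one reduced run per block. The names log_post_unnorm, proposal_logpdf, and proposal_sample are placeholders for the user's model, not the authors' code.

```python
# Minimal single-block sketch of the Chib and Jeliazkov (2001) ordinate estimator.
# proposal_logpdf(a, b) = log q(a, b | x): log density of proposing b from state a.
import numpy as np

def cj_ordinate(theta_star, chain, log_post_unnorm, proposal_logpdf, proposal_sample, N=5_000):
    """Estimate the posterior ordinate pi(theta* | x) from Metropolis-Hastings output."""
    def log_alpha(frm, to):                     # log of the MH acceptance probability
        return min(0.0, log_post_unnorm(to) - log_post_unnorm(frm)
                        + proposal_logpdf(to, frm) - proposal_logpdf(frm, to))
    # numerator: average of alpha(theta^(s), theta*) q(theta^(s), theta*) over the chain
    num = np.mean([np.exp(log_alpha(th, theta_star) + proposal_logpdf(th, theta_star))
                   for th in chain])
    # denominator: average of alpha(theta*, theta^(t)) over draws theta^(t) ~ q(theta*, .)
    den = np.mean([np.exp(log_alpha(theta_star, proposal_sample(theta_star)))
                   for _ in range(N)])
    return num / den
```

The marginal likelihood then follows on the log scale as \(\log {\widehat{m}}(x)=\log f(x\mid \theta ^*)+\log \pi (\theta ^*)-\log {\widehat{\pi }}(\theta ^*\mid x)\).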

Supplementary Material

We describe the contents of the Supplementary Material document. Section 1 highlights the ability of the Fourier integral theorem, without any tuning or estimation of a covariance matrix, to estimate the value of an irregularly shaped density function at a point. Section 2 compares the Fourier approach with the Warp-III algorithm. Section 3 shows how the Fourier integral theorem can also be used to estimate the likelihood function when the likelihood is deemed intractable. Section 4 presents a wide-ranging comparison of methods for a non-nested linear model, and Section 5 does the same for a logistic regression model. Section 6 considers a mixture model, Section 7 a bimodal model, and Section 8 a dynamic linear model. Section 9 examines the possibility, in some cases, of reducing the dimension of the problem by integrating out a number of parameters. Finally, Section 10 offers some remarks on the tables presented in the main paper.

Rights and permissions

Springer Nature or its licensor holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Rotiroti, F., Walker, S.G. Computing marginal likelihoods via the Fourier integral theorem and pointwise estimation of posterior densities. Stat Comput 32, 67 (2022). https://doi.org/10.1007/s11222-022-10131-0
