
Automatic Bayes Factors for Testing Equality- and Inequality-Constrained Hypotheses on Variances

Published in Psychometrika.

Abstract

In comparing characteristics of independent populations, researchers frequently expect a certain structure of the population variances. These expectations can be formulated as hypotheses with equality and/or inequality constraints on the variances. In this article, we consider the Bayes factor for testing such (in)equality-constrained hypotheses on variances. Application of Bayes factors requires specification of a prior under every hypothesis to be tested. However, specifying subjective priors for variances based on prior information is a difficult task. We therefore consider so-called automatic or default Bayes factors. These methods avoid the need for the user to specify priors by using information from the sample data. We present three automatic Bayes factors for testing variances. The first is a Bayes factor with equal priors on all variances, where the priors are specified automatically using a small share of the information in the sample data. The second is the fractional Bayes factor, where a fraction of the likelihood is used for automatic prior specification. The third is an adjustment of the fractional Bayes factor such that the parsimony of inequality-constrained hypotheses is properly taken into account. The Bayes factors are evaluated by investigating different properties such as information consistency and large sample consistency. Based on this evaluation, it is concluded that the adjusted fractional Bayes factor is generally recommendable for testing equality- and inequality-constrained hypotheses on variances.


Fig. 1
Fig. 2
Fig. 3


References

  • Akaike, H. (1973). Information theory and an extension of the maximum likelihood principle. In B. N. Petrov & F. Csáki (Eds.), 2nd international symposium on information theory (pp. 267–281). Budapest: Akadémiai Kiadó.

  • Arden, R., & Plomin, R. (2006). Sex differences in variance of intelligence across childhood. Personality and Individual Differences, 41(1), 39–48.

  • Aunola, K., Leskinen, E., Lerkkanen, M.-K., & Nurmi, J.-E. (2004). Developmental dynamics of math performance from preschool to grade 2. Journal of Educational Psychology, 96(4), 699–713.

  • Bartlett, M. S. (1957). A comment on D. V. Lindley’s statistical paradox. Biometrika, 44(3–4), 533–534.

  • Berger, J. O. (2006). The case for objective Bayesian analysis. Bayesian Analysis, 1(3), 385–402.

  • Berger, J. O., & Mortera, J. (1999). Default Bayes factors for nonnested hypothesis testing. Journal of the American Statistical Association, 94(446), 542–554.

  • Berger, J. O., & Pericchi, L. R. (1996). The intrinsic Bayes factor for model selection and prediction. Journal of the American Statistical Association, 91(433), 109–122.

  • Berger, J. O., & Pericchi, L. R. (2001). Objective Bayesian methods for model selection: Introduction and comparison. In P. Lahiri (Ed.), Model selection (pp. 135–207). Beachwood, OH: Institute of Mathematical Statistics.

  • Berger, J. O., & Sellke, T. (1987). Testing a point null hypothesis: The irreconcilability of \(P\) values and evidence. Journal of the American Statistical Association, 82(397), 112–122.

  • Böing-Messing, F., & Mulder, J. (2016). Automatic Bayes factors for testing variances of two independent normal distributions. Journal of Mathematical Psychology, 72, 158–170.

  • Böing-Messing, F., van Assen, M. A. L. M., Hofman, A. D., Hoijtink, H., & Mulder, J. (2017). Bayesian evaluation of constrained hypotheses on variances of multiple independent groups. Psychological Methods, 22(2), 262–287.

  • Carroll, R. J. (2003). Variances are not always nuisance parameters. Biometrics, 59(2), 211–220.

  • De Santis, F., & Spezzaferri, F. (2001). Consistent fractional Bayes factor for nested normal linear models. Journal of Statistical Planning and Inference, 97(2), 305–321.

  • Fox, J.-P., Mulder, J., & Sinharay, S. (2017). Bayes factor covariance testing in item response models. Psychometrika, 82(4), 979–1006.

  • Gelman, A., Carlin, J. B., Stern, H. S., & Rubin, D. B. (2004). Bayesian data analysis (2nd ed.). Boca Raton, FL: Chapman & Hall/CRC.

  • Gilks, W. R. (1995). Discussion of O’Hagan. Journal of the Royal Statistical Society. Series B (Methodological), 57(1), 118–120.

  • Grissom, R. J. (2000). Heterogeneity of variance in clinical data. Journal of Consulting and Clinical Psychology, 68(1), 155–165.

  • Hoijtink, H. (2011). Informative hypotheses: Theory and practice for behavioral and social scientists. Boca Raton, FL: Chapman & Hall/CRC.

  • Jefferys, W. H., & Berger, J. O. (1992). Ockham’s razor and Bayesian analysis. American Scientist, 80(1), 64–72.

  • Jeffreys, H. (1961). Theory of probability (3rd ed.). Oxford: Oxford University Press.

  • Kass, R. E., & Raftery, A. E. (1995). Bayes factors. Journal of the American Statistical Association, 90(430), 773–795.

  • Klugkist, I., Laudy, O., & Hoijtink, H. (2005). Inequality constrained analysis of variance: A Bayesian approach. Psychological Methods, 10(4), 477–493.

  • Kofler, M. J., Rapport, M. D., Sarver, D. E., Raiker, J. S., Orban, S. A., Friedman, L. M., et al. (2013). Reaction time variability in ADHD: A meta-analytic review of 319 studies. Clinical Psychology Review, 33(6), 795–811.

  • Lehre, A.-C., Lehre, K. P., Laake, P., & Danbolt, N. C. (2009). Greater intrasex phenotype variability in males than in females is a fundamental aspect of the gender differences in humans. Developmental Psychobiology, 51(2), 198–206.

  • Liang, F., Paulo, R., Molina, G., Clyde, M. A., & Berger, J. O. (2008). Mixtures of \(g\) priors for Bayesian variable selection. Journal of the American Statistical Association, 103(481), 410–423.

  • Lindley, D. V. (1957). A statistical paradox. Biometrika, 44(1–2), 187–192.

  • Lucas, J. W. (2003). Status processes and the institutionalization of women as leaders. American Sociological Review, 68(3), 464–480.

  • Mulder, J. (2014a). Bayes factors for testing inequality constrained hypotheses: Issues with prior specification. British Journal of Mathematical and Statistical Psychology, 67(1), 153–171.

  • Mulder, J. (2014b). Prior adjusted default Bayes factors for testing (in)equality constrained hypotheses. Computational Statistics & Data Analysis, 71, 448–463.

  • Mulder, J. (2016). Bayes factors for testing order-constrained hypotheses on correlations. Journal of Mathematical Psychology, 72, 104–115.

  • Mulder, J., & Fox, J.-P. (2013). Bayesian tests on components of the compound symmetry covariance matrix. Statistics and Computing, 23(1), 109–122.

  • Mulder, J., Hoijtink, H., & de Leeuw, C. (2012). BIEMS: A Fortran 90 program for calculating Bayes factors for inequality and equality constrained models. Journal of Statistical Software, 46(2), 1–39.

  • Mulder, J., Hoijtink, H., & Klugkist, I. (2010). Equality and inequality constrained multivariate linear models: Objective model selection using constrained posterior priors. Journal of Statistical Planning and Inference, 140(4), 887–906.

  • Mulder, J., Klugkist, I., van de Schoot, R., Meeus, W. H., Selfhout, M., & Hoijtink, H. (2009). Bayesian model selection of informative hypotheses for repeated measurements. Journal of Mathematical Psychology, 53(6), 530–546.

  • Mulder, J., & Wagenmakers, E.-J. (2016). Editors’ introduction to the special issue "Bayes factors for testing hypotheses in psychological research: Practical relevance and new developments". Journal of Mathematical Psychology, 72, 1–5.

  • O’Hagan, A. (1995). Fractional Bayes factors for model comparison. Journal of the Royal Statistical Society. Series B (Methodological), 57(1), 99–138.

  • O’Hagan, A. (1997). Properties of intrinsic and fractional Bayes factors. Test, 6(1), 101–118.

  • Ruscio, J., & Roche, B. (2012). Variance heterogeneity in published psychological research: A review and a new index. Methodology, 8(1), 1–11.

  • Russell, V. A., Oades, R. D., Tannock, R., Killeen, P. R., Auerbach, J. G., Johansen, E. B., et al. (2006). Response variability in attention-deficit/hyperactivity disorder: A neuronal and glial energetics hypothesis. Behavioral and Brain Functions, 2(1), 1–25.

  • Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, 6(2), 461–464.

  • Silverstein, S. M., Como, P. G., Palumbo, D. R., West, L. L., & Osborn, L. M. (1995). Multiple sources of attentional dysfunction in adults with Tourette’s syndrome: Comparison with attention deficit-hyperactivity disorder. Neuropsychology, 9(2), 157–164.

  • Snijders, T. A. B., & Bosker, R. J. (2012). Multilevel analysis: An introduction to basic and advanced multilevel modeling (2nd ed.). London: Sage.

  • Spiegelhalter, D. J., & Smith, A. F. M. (1982). Bayes factors for linear and log-linear models with vague prior information. Journal of the Royal Statistical Society. Series B (Methodological), 44(3), 377–387.

  • Verhagen, A. J., & Fox, J.-P. (2013). Bayesian tests of measurement invariance. British Journal of Mathematical and Statistical Psychology, 66(3), 383–401.

  • Wagenmakers, E.-J. (2007). A practical solution to the pervasive problems of \(p\) values. Psychonomic Bulletin & Review, 14(5), 779–804.

  • Weerahandi, S. (1995). ANOVA under unequal error variances. Biometrics, 51(2), 589–599.

  • Zellner, A. (1986). On assessing prior distributions and Bayesian regression analysis with \(g\)-prior distributions. In P. K. Goel & A. Zellner (Eds.), Bayesian inference and decision techniques: Essays in honor of Bruno de Finetti (pp. 233–243). Amsterdam, The Netherlands: Elsevier.


Author information

Corresponding author

Correspondence to Florian Böing-Messing.

Additional information

This research was partly supported by a Rubicon grant which was awarded to Joris Mulder by The Netherlands Organisation for Scientific Research (NWO).

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (R 15 KB)

Appendices

Appendix A: Computation of \(m_t^\mathrm {B}(\varvec{x}, \varvec{b})\)

The final expression for the marginal likelihood under an (in)equality-constrained hypothesis \(H_t\) in the balanced Bayes factor can be derived as follows:

$$\begin{aligned} m_t^\mathrm {B}(\varvec{x}, \varvec{b})= & {} \int _{\Omega _t} \int _{\mathbb {R}^J} f_t\left( \varvec{x} | \varvec{\mu }, \varvec{\sigma }_t^2\right) \pi _t^\mathrm {B}\left( \varvec{\mu }, \varvec{\sigma }_t^2 \big | \varvec{x}^{\varvec{b}}\right) \text {d}\varvec{\mu } \, \text {d}\varvec{\sigma }_t^2\nonumber \\= & {} \int _{\Omega _t} \int _{\mathbb {R}^J} \left( \prod _{k=1}^{K_t} \prod _{j=1}^{J_k} f\left( \varvec{x}_{k_j} | \mu _{k_j}, \sigma _k^2\right) \right) \nonumber \\&\quad C \frac{1}{P^\mathrm {B}\left( \varvec{\sigma }_t^2 \in \Omega _t | \varvec{x}^{\varvec{b}}\right) } \prod _{k=1}^{K_t} \text {Inv-}\chi ^2\left( \sigma _k^2 | \nu , \tau ^2\right) \mathbf {1}_{\Omega _t}\left( \varvec{\sigma }_t^2\right) \text {d}\varvec{\mu } \, \text {d}\varvec{\sigma }_t^2\nonumber \\= & {} C \frac{1}{P^\mathrm {B}\left( \varvec{\sigma }_t^2 \in \Omega _t | \varvec{x}^{\varvec{b}}\right) } \int _{\Omega _t} \prod _{k=1}^{K_t} \left( \frac{\nu \tau ^2}{2}\right) ^{\frac{\nu }{2}} \Gamma \left( \frac{\nu }{2}\right) ^{-1} \left( \sigma _k^2\right) ^{-\left( \frac{\nu }{2}+1\right) } \exp \left( -\frac{\nu \tau ^2}{2 \sigma _k^2}\right) \nonumber \\&\quad \prod _{j=1}^{J_k} \int _\mathbb {R} \left( \sigma _k^2 2 \pi \right) ^{-\frac{n_{k_j}}{2}} \exp \left( -\frac{1}{2 \sigma _k^2} \left( \left( n_{k_j}-1\right) s_{k_j}^2 + n_{k_j}\left( \bar{x}_{k_j}-\mu _{k_j}\right) ^2\right) \right) \text {d}\mu _{k_j} \, \text {d}\varvec{\sigma }_t^2\nonumber \\= & {} C \frac{1}{P^\mathrm {B}\left( \varvec{\sigma }_t^2 \in \Omega _t | \varvec{x}^{\varvec{b}}\right) } \left( \frac{\nu \tau ^2}{2}\right) ^{\frac{\nu K_t}{2}} \Gamma \left( \frac{\nu }{2}\right) ^{-K_t} \int _{\Omega _t} \prod _{k=1}^{K_t} \left( \sigma _k^2\right) ^{-\left( \frac{\nu }{2}+1\right) } \exp \left( -\frac{\nu \tau ^2}{2 \sigma _k^2}\right) \nonumber \\&\quad \prod _{j=1}^{J_k} n_{k_j}^{-\frac{1}{2}} \left( \sigma _k^2 2 \pi \right) ^{-\frac{n_{k_j}-1}{2}} \exp \left( 
-\frac{\left( n_{k_j}-1\right) s_{k_j}^2}{2 \sigma _k^2}\right) \text {d}\varvec{\sigma }_t^2\nonumber \\= & {} C \frac{1}{P^\mathrm {B}\left( \varvec{\sigma }_t^2 \in \Omega _t | \varvec{x}^{\varvec{b}}\right) } \left( \frac{\nu \tau ^2}{2}\right) ^{\frac{\nu K_t}{2}} \Gamma \left( \frac{\nu }{2}\right) ^{-K_t} (2\pi )^{-\frac{\sum _{k=1}^{K_t} \left( \left( \sum _{j=1}^{J_k} n_{k_j}\right) - J_k\right) }{2}} \left( \prod _{k=1}^{K_t} \prod _{j=1}^{J_k} n_{k_j}^{-\frac{1}{2}}\right) \nonumber \\&\quad \int _{\Omega _t} \prod _{k=1}^{K_t} \left( \sigma _k^2\right) ^{-\left( \frac{\nu + \left( \sum _{j=1}^{J_k} n_{k_j}\right) - J_k}{2} + 1\right) } \exp \left( -\frac{\nu \tau ^2 + \sum _{j=1}^{J_k} \left( n_{k_j}-1\right) s_{k_j}^2}{2 \sigma _k^2}\right) \text {d}\varvec{\sigma }_t^2\nonumber \\= & {} C \frac{1}{P^\mathrm {B}\left( \varvec{\sigma }_t^2 \in \Omega _t | \varvec{x}^{\varvec{b}}\right) } \left( \nu \tau ^2\right) ^{\frac{\nu K_t}{2}} \Gamma \left( \frac{\nu }{2}\right) ^{-K_t} \pi ^{-\frac{\sum _{k=1}^{K_t} \left( \left( \sum _{j=1}^{J_k} n_{k_j}\right) - J_k\right) }{2}} \left( \prod _{k=1}^{K_t} \prod _{j=1}^{J_k} n_{k_j}^{-\frac{1}{2}}\right) \nonumber \\&\quad \left( \prod _{k=1}^{K_t} \Gamma \left( \frac{\nu + \left( \sum _{j=1}^{J_k} n_{k_j}\right) - J_k}{2}\right) \left( \nu \tau ^2 + \sum _{j=1}^{J_k} \left( n_{k_j}-1\right) s_{k_j}^2\right) ^{-\frac{\nu + \left( \sum _{j=1}^{J_k} n_{k_j}\right) - J_k}{2}}\right) \nonumber \\&\quad \int _{\Omega _t} \prod _{k=1}^{K_t} \text {Inv-}\chi ^2\left( \sigma _k^2 \, \Bigg | \, \nu + \left( \sum _{j=1}^{J_k} n_{k_j}\right) - J_k, \frac{\nu \tau ^2 + \sum _{j=1}^{J_k} \left( n_{k_j}-1\right) s_{k_j}^2}{\nu + \left( \sum _{j=1}^{J_k} n_{k_j}\right) - J_k}\right) \text {d}\varvec{\sigma }_t^2\nonumber \\= & {} C \frac{P^\mathrm {B}\left( \varvec{\sigma }_t^2 \in \Omega _t | \varvec{x}\right) }{P^\mathrm {B}\left( \varvec{\sigma }_t^2 \in \Omega _t | \varvec{x}^{\varvec{b}}\right) } \left( \nu \tau ^2\right) 
^{\frac{\nu K_t}{2}} \Gamma \left( \frac{\nu }{2}\right) ^{-K_t} \pi ^{-\frac{\sum _{k=1}^{K_t} \left( \left( \sum _{j=1}^{J_k} n_{k_j}\right) - J_k\right) }{2}} \left( \prod _{k=1}^{K_t} \prod _{j=1}^{J_k} n_{k_j}^{-\frac{1}{2}}\right) \nonumber \\&\quad \prod _{k=1}^{K_t} \Gamma \left( \frac{\nu + \left( \sum _{j=1}^{J_k} n_{k_j}\right) - J_k}{2}\right) \left( \nu \tau ^2 + \sum _{j=1}^{J_k} \left( n_{k_j}-1\right) s_{k_j}^2\right) ^{-\frac{\nu + \left( \sum _{j=1}^{J_k} n_{k_j}\right) - J_k}{2}}, \end{aligned}$$
(30)

where in the third line we may drop the indicator function because the integration region for the variances is already restricted to \(\Omega _t\), and the integrand in the fifth line is a product of kernels of scaled inverse-\(\chi ^2\) distributions with degrees of freedom parameters \(\nu _k = \nu + \left( \sum _{j=1}^{J_k} n_{k_j}\right) - J_k\) and scale parameters \(\tau _k^2 = \frac{\nu \tau ^2 + \sum _{j=1}^{J_k} \left( n_{k_j}-1\right) s_{k_j}^2}{\nu + \left( \sum _{j=1}^{J_k} n_{k_j}\right) - J_k}\).
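The mapping from the prior parameters and the sample data to the posterior scaled inverse-\(\chi ^2\) parameters \(\nu _k\) and \(\tau _k^2\) in the last step above can be sketched in a few lines of code. This is an illustrative sketch only; the function name and interface are ours, not the article's:

```python
def posterior_inv_chi2_params(nu, tau2, n, s2):
    """Posterior scaled inverse-chi^2 parameters (nu_k, tau2_k) for one
    unique variance sigma_k^2 shared by J_k populations with sample sizes
    n and sample variances s2 (sequences of length J_k), under an
    Inv-chi^2(nu, tau^2) prior, following the derivation of Eq. (30).
    """
    J_k = len(n)
    # nu_k = nu + (sum_j n_{k_j}) - J_k
    nu_k = nu + sum(n) - J_k
    # tau2_k = (nu * tau^2 + sum_j (n_{k_j} - 1) s_{k_j}^2) / nu_k
    tau2_k = (nu * tau2 + sum((n_j - 1) * s2_j
                              for n_j, s2_j in zip(n, s2))) / nu_k
    return nu_k, tau2_k
```

For instance, with \(\nu = 2\), \(\tau ^2 = 1\), and two populations of size 10 with unit sample variances, this yields \(\nu _k = 20\) and \(\tau _k^2 = 1\).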

Appendix B: Computation of \(m_t^\mathrm{GF}(\varvec{x}, \varvec{b})\)

In the generalized fractional Bayes factor, the marginal likelihood under an (in)equality-constrained hypothesis \(H_t\) is defined as

$$\begin{aligned} m_t^\mathrm{GF}(\varvec{x}, \varvec{b}) = \frac{\int _{\Omega _t} \int _{\mathbb {R}^J} f_t\left( \varvec{x} | \varvec{\mu }, \varvec{\sigma }_t^2\right) \pi _t^N\left( \varvec{\mu }, \varvec{\sigma }_t^2\right) \text {d}\varvec{\mu } \, \text {d}\varvec{\sigma }_t^2}{\int _{\Omega _t} \int _{\mathbb {R}^J} f_t\left( \varvec{x} | \varvec{\mu }, \varvec{\sigma }_t^2\right) ^{\varvec{b}} \pi _t^N\left( \varvec{\mu }, \varvec{\sigma }_t^2\right) \text {d}\varvec{\mu } \, \text {d}\varvec{\sigma }_t^2} = \frac{m_t^N(\varvec{x})}{m_t^N(\varvec{x}^{\varvec{b}})}. \end{aligned}$$
(31)

We first derive the denominator:

$$\begin{aligned} m_t^N(\varvec{x}^{\varvec{b}})= & {} \int _{\Omega _t} \int _{\mathbb {R}^J} \left( \prod _{k=1}^{K_t} \prod _{j=1}^{J_k} f\left( \varvec{x}_{k_j} | \mu _{k_j}, \sigma _k^2\right) ^{b_{k_j}}\right) C_t \prod _{k=1}^{K_t} \sigma _k^{-2} \, \mathbf {1}_{\Omega _t}\left( \varvec{\sigma }_t^2\right) \, \text {d}\varvec{\mu } \, \text {d}\varvec{\sigma }_t^2\nonumber \\= & {} C_t \int _{\Omega _t} \prod _{k=1}^{K_t} \sigma _k^{-2} \prod _{j=1}^{J_k} \int _\mathbb {R} \left( \sigma _k^2 2 \pi \right) ^{-\frac{b_{k_j}n_{k_j}}{2}} \nonumber \\&\quad \exp \left( -\frac{b_{k_j}}{2 \sigma _k^2} \left( \left( n_{k_j}-1\right) s_{k_j}^2 + n_{k_j}\left( \bar{x}_{k_j}-\mu _{k_j}\right) ^2\right) \right) \text {d}\mu _{k_j} \, \text {d}\varvec{\sigma }_t^2\nonumber \\= & {} C_t \int _{\Omega _t} \prod _{k=1}^{K_t} \sigma _k^{-2} \prod _{j=1}^{J_k} \left( b_{k_j}n_{k_j}\right) ^{-\frac{1}{2}} \left( \sigma _k^2 2 \pi \right) ^{-\frac{b_{k_j}n_{k_j}-1}{2}} \exp \left( -\frac{b_{k_j}\left( n_{k_j}-1\right) s_{k_j}^2}{2 \sigma _k^2}\right) \text {d}\varvec{\sigma }_t^2\nonumber \\= & {} C_t \, (2\pi )^{-\frac{\sum _{k=1}^{K_t} \left( \left( \sum _{j=1}^{J_k} b_{k_j}n_{k_j}\right) - J_k\right) }{2}} \left( \prod _{k=1}^{K_t} \prod _{j=1}^{J_k} \left( b_{k_j}n_{k_j}\right) ^{-\frac{1}{2}}\right) \nonumber \\&\quad \int _{\Omega _t} \prod _{k=1}^{K_t} \left( \sigma _k^2\right) ^{-\left( \frac{\left( \sum _{j=1}^{J_k} b_{k_j}n_{k_j}\right) - J_k}{2}+1\right) } \exp \left( -\frac{\sum _{j=1}^{J_k} b_{k_j}\left( n_{k_j}-1\right) s_{k_j}^2}{2 \sigma _k^2}\right) \text {d}\varvec{\sigma }_t^2\nonumber \\= & {} C_t \, \pi ^{-\frac{\sum _{k=1}^{K_t} \left( \left( \sum _{j=1}^{J_k} b_{k_j}n_{k_j}\right) - J_k\right) }{2}} \left( \prod _{k=1}^{K_t} \prod _{j=1}^{J_k} \left( b_{k_j}n_{k_j}\right) ^{-\frac{1}{2}}\right) \nonumber \\&\quad \left( \prod _{k=1}^{K_t} \Gamma \left( \frac{\left( \sum _{j=1}^{J_k} b_{k_j}n_{k_j}\right) - J_k}{2}\right) \left( \sum _{j=1}^{J_k} 
b_{k_j}\left( n_{k_j}-1\right) s_{k_j}^2\right) ^{-\frac{\left( \sum _{j=1}^{J_k} b_{k_j}n_{k_j}\right) - J_k}{2}}\right) \nonumber \\&\quad \int _{\Omega _t} \prod _{k=1}^{K_t} \text {Inv-}\chi ^2\left( \sigma _k^2 \, \Bigg | \left( \sum _{j=1}^{J_k} b_{k_j}n_{k_j}\right) - J_k, \frac{\sum _{j=1}^{J_k} b_{k_j}\left( n_{k_j}-1\right) s_{k_j}^2}{\left( \sum _{j=1}^{J_k} b_{k_j}n_{k_j}\right) - J_k}\right) \text {d}\varvec{\sigma }_t^2\nonumber \\= & {} C_t \, \pi ^{-\frac{\sum _{k=1}^{K_t} \left( \left( \sum _{j=1}^{J_k} b_{k_j}n_{k_j}\right) - J_k\right) }{2}} \left( \prod _{k=1}^{K_t} \prod _{j=1}^{J_k} \left( b_{k_j}n_{k_j}\right) ^{-\frac{1}{2}}\right) \nonumber \\&\quad \left( \prod _{k=1}^{K_t} \Gamma \left( \frac{\left( \sum _{j=1}^{J_k} b_{k_j}n_{k_j}\right) - J_k}{2}\right) \left( \sum _{j=1}^{J_k} b_{k_j}\left( n_{k_j}-1\right) s_{k_j}^2\right) ^{-\frac{\left( \sum _{j=1}^{J_k} b_{k_j}n_{k_j}\right) - J_k}{2}}\right) \nonumber \\&\quad P^\mathrm{GF}\left( \varvec{\sigma }_t^2 \in \Omega _t \big | \varvec{x}^{\varvec{b}}\right) . \end{aligned}$$
(32)

The expression for the numerator in Eq. (31) is identical to the final expression in Eq. (32) with all b’s equal to 1, that is,

$$\begin{aligned} \begin{aligned} m_t^N(\varvec{x})&= C_t \, \pi ^{-\frac{\sum _{k=1}^{K_t} \left( \left( \sum _{j=1}^{J_k} n_{k_j}\right) - J_k\right) }{2}} \left( \prod _{k=1}^{K_t} \prod _{j=1}^{J_k} n_{k_j}^{-\frac{1}{2}}\right) \\&\quad \left( \prod _{k=1}^{K_t} \Gamma \left( \frac{\left( \sum _{j=1}^{J_k} n_{k_j}\right) - J_k}{2}\right) \left( \sum _{j=1}^{J_k} \left( n_{k_j}-1\right) s_{k_j}^2\right) ^{-\frac{\left( \sum _{j=1}^{J_k} n_{k_j}\right) - J_k}{2}}\right) \\&\quad P^\mathrm{GF}\left( \varvec{\sigma }_t^2 \in \Omega _t | \varvec{x}\right) , \end{aligned} \end{aligned}$$
(33)

where

$$\begin{aligned} P^\mathrm{GF}\left( \varvec{\sigma }_t^2 \in \Omega _t | \varvec{x}\right) = \int _{\Omega _t} \prod _{k=1}^{K_t} \text {Inv-}\chi ^2\left( \sigma _k^2 \, \Bigg | \left( \sum _{j=1}^{J_k} n_{k_j}\right) - J_k, \frac{\sum _{j=1}^{J_k} \left( n_{k_j}-1\right) s_{k_j}^2}{\left( \sum _{j=1}^{J_k} n_{k_j}\right) - J_k}\right) \text {d}\varvec{\sigma }_t^2. \end{aligned}$$
(34)

The final expression for the marginal likelihood in Eq. (31) is then given by

$$\begin{aligned} \begin{aligned} m_t^\mathrm{GF}(\varvec{x}, \varvec{b})&= \frac{m_t^N(\varvec{x})}{m_t^N(\varvec{x}^{\varvec{b}})}\\&= \frac{P^\mathrm{GF}\left( \varvec{\sigma }_t^2 \in \Omega _t | \varvec{x}\right) }{P^\mathrm{GF}\left( \varvec{\sigma }_t^2 \in \Omega _t | \varvec{x}^{\varvec{b}}\right) } \, \pi ^{-\frac{\sum _{k=1}^{K_t} \sum _{j=1}^{J_k} \left( 1-b_{k_j}\right) n_{k_j}}{2}} \left( \prod _{k=1}^{K_t} \prod _{j=1}^{J_k} b_{k_j}^{\frac{1}{2}}\right) \\&\quad \prod _{k=1}^{K_t} \Gamma \left( \frac{\left( \sum _{j=1}^{J_k} n_{k_j}\right) - J_k}{2}\right) \Gamma \left( \frac{\left( \sum _{j=1}^{J_k} b_{k_j}n_{k_j}\right) - J_k}{2}\right) ^{-1} \\&\quad \left( \sum _{j=1}^{J_k} \left( n_{k_j}-1\right) s_{k_j}^2\right) ^{-\frac{\left( \sum _{j=1}^{J_k} n_{k_j}\right) - J_k}{2}} \left( \sum _{j=1}^{J_k} b_{k_j}\left( n_{k_j}-1\right) s_{k_j}^2\right) ^{\frac{\left( \sum _{j=1}^{J_k} b_{k_j}n_{k_j}\right) - J_k}{2}}. \end{aligned} \end{aligned}$$
(35)
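The closed-form part of Eq. (35), that is, everything except the ratio of probabilities (which is obtained by Monte Carlo as described in Appendix C), is best evaluated on the log scale for numerical stability. The sketch below handles a single unique variance shared by \(J\) populations; the function name and interface are our own and are not part of the article:

```python
from math import lgamma, log, pi

def log_m_gf_factor(n, s2, b):
    """Log of the closed-form factor in Eq. (35) for one unique variance
    shared by J populations with sample sizes n, sample variances s2,
    and fractions b (the probability ratio is handled separately)."""
    J = len(n)
    N = sum(n)                                        # sum_j n_j
    Nb = sum(b_j * n_j for b_j, n_j in zip(b, n))     # sum_j b_j n_j
    ss = sum((n_j - 1) * s2_j for n_j, s2_j in zip(n, s2))
    ssb = sum(b_j * (n_j - 1) * s2_j
              for b_j, n_j, s2_j in zip(b, n, s2))
    out = -0.5 * (N - Nb) * log(pi)                   # pi^{-sum (1-b)n/2}
    out += 0.5 * sum(log(b_j) for b_j in b)           # prod b^{1/2}
    out += lgamma((N - J) / 2) - lgamma((Nb - J) / 2)
    out += -0.5 * (N - J) * log(ss) + 0.5 * (Nb - J) * log(ssb)
    return out
```

As a sanity check, setting all fractions \(b_{k_j} = 1\) makes every term cancel, so the factor reduces to 1 (log value 0), consistent with \(m_t^\mathrm{GF}(\varvec{x}, \varvec{1}) = m_t^N(\varvec{x}) / m_t^N(\varvec{x})\).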

Appendix C: Computing the Probability That \(\varvec{\sigma }_t^2 \in \Omega _t\)

The integrals in Eqs. (15), (20), (21), and (25) can be approximated numerically using the following Monte Carlo approach. For the BBF and the GFBF, we first sample \(\sigma _k^{2(s)} \sim \text {Inv-}\chi ^2\left( \nu _k, \tau _k^2\right) \), for \(k = 1,\cdots ,K_t\), where \(\sigma _k^{2(s)}\) is the sth draw from \(\text {Inv-}\chi ^2\left( \nu _k, \tau _k^2\right) \), for \(s = 1,\cdots ,S\), and \(\nu _k\) and \(\tau _k^2\) are as in Eqs. (15), (20), and (21), respectively. An approximation of the probability that the inequality constraints on the unique variances hold is then given by the proportion of draws that fall in \(\Omega _t\), that is,

$$\begin{aligned} P\left( \varvec{\sigma }_t^2 \in \Omega _t\right) \approx \frac{1}{S} \sum _{s=1}^S \mathbf {1}_{\Omega _t}\left( \varvec{\sigma }_t^{2(s)}\right) , \end{aligned}$$
(36)

where \(\varvec{\sigma }_t^{2(s)} = \begin{bmatrix} \sigma _1^{2(s)}&\cdots&\sigma _{K_t}^{2(s)} \end{bmatrix}^T\), and \(\mathbf {1}_{\Omega _t}\left( \varvec{\sigma }_t^{2(s)}\right) \) is the indicator function which is 1 if \(\varvec{\sigma }_t^{2(s)} \in \Omega _t\) and 0 otherwise.
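The estimator in Eq. (36) can be implemented by brute-force sampling. In the sketch below (illustrative; function names are ours), \(\Omega _t\) is taken to be the increasing-order region \(\sigma _1^2< \cdots < \sigma _{K_t}^2\), and a scaled inverse-\(\chi ^2(\nu , \tau ^2)\) draw is obtained as \(\nu \tau ^2 / X\) with \(X \sim \chi _\nu ^2\):

```python
import random

def sample_scaled_inv_chi2(rng, nu, tau2):
    # scaled inverse-chi^2(nu, tau^2) draw: nu * tau^2 / chi^2_nu,
    # where chi^2_nu is generated as Gamma(shape=nu/2, scale=2)
    return nu * tau2 / rng.gammavariate(nu / 2.0, 2.0)

def prob_increasing_order(nus, tau2s, S=100_000, seed=123):
    """Monte Carlo estimate (Eq. 36) of P(sigma_1^2 < ... < sigma_K^2)
    under independent scaled inverse-chi^2(nu_k, tau_k^2) marginals."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(S):
        draw = [sample_scaled_inv_chi2(rng, nu, t2)
                for nu, t2 in zip(nus, tau2s)]
        # indicator that the draw falls in the order-constrained region
        if all(a < b for a, b in zip(draw, draw[1:])):
            hits += 1
    return hits / S
```

As a sanity check, with identical marginals all \(K_t!\) orderings are equally likely, so for \(K_t = 3\) the estimate should be close to \(1/6\).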

For the AFBF, let \(\phi _k = a_k\sigma _k^2\). We then proceed analogously to the BBF and the GFBF: First, we sample \(\phi _k^{(s)} \sim \text {Inv-}\chi ^2\left( \nu _k, \tau _k^2\right) \), for \(k = 1,\cdots ,K_t\) and \(s = 1,\cdots ,S\), where \(\nu _k\) and \(\tau _k^2\) are as in the second row of Eq. (25). Then

$$\begin{aligned} P^\mathrm{AF}\left( \varvec{\sigma }_t^2 \in \Omega _t^a \big | \varvec{x}^{\varvec{b}}\right) = P^\mathrm{AF}\left( \varvec{\phi }_t \in \Omega _t \big | \varvec{x}^{\varvec{b}}\right) \approx \frac{1}{S} \sum _{s=1}^S \mathbf {1}_{\Omega _t}\left( \varvec{\phi }_t^{(s)}\right) , \end{aligned}$$
(37)

where \(\varvec{\phi }_t = \begin{bmatrix} \phi _1&\cdots&\phi _{K_t} \end{bmatrix}^T\) and \(\varvec{\phi }_t^{(s)} = \begin{bmatrix} \phi _1^{(s)}&\cdots&\phi _{K_t}^{(s)} \end{bmatrix}^T\).

Appendix D: Simulation Results for \(J = 6\) Populations

Table 4 Overview of the population variances used in the simulation study with 6 populations.
Fig. 4

Results of a simulation study comparing the performance of the three automatic Bayes factors in testing variances of 6 populations. We examined five different patterns of the population variances: a \(\sigma _1^2 = \cdots = \sigma _6^2\), b \(\sigma _1^2< \cdots < \sigma _6^2\), c \(\sigma _1^2 = \sigma _2^2 = \sigma _3^2 < \sigma _4^2 = \sigma _5^2 = \sigma _6^2\), d \(\sigma _2^2< \sigma _1^2< \sigma _3^2< \cdots < \sigma _6^2\), and e \(\sigma _6^2< \cdots < \sigma _1^2\). In patterns b to e we considered three different sizes of the order effect: small, medium, and large. For each combination of pattern and effect size, we drew 1000 samples of size \(n_1 = \cdots = n_6 = n\). In each sample we then tested four hypotheses: \(H_0:\sigma _1^2 = \cdots = \sigma _6^2\), \(H_1:\sigma _1^2< \cdots < \sigma _6^2\), \(H_2:\sigma _1^2 = \sigma _2^2 = \sigma _3^2 < \sigma _4^2 = \sigma _5^2 = \sigma _6^2\), and \(H_3:\lnot \, (H_0 \vee H_1 \vee H_2)\). Eventually, we computed the expected posterior probability of the true hypothesis \(\bar{P}(H_t | \varvec{x})\) across the 1000 samples. The plots show \(\bar{P}(H_t | \varvec{x})\) as a function of the common sample size n for the BBF (red lines), GFBF (green lines), and AFBF (blue lines) under a small effect (dotted lines), medium effect (dashed lines), and large effect (solid lines) (Color figure online).

In the simulation with 6 populations, we considered the same factors as in the simulation with 4 populations (cf. Sect. 6.5.1). First, we used the same patterns of the population variances: null (\(\sigma _1^2 = \cdots = \sigma _6^2\)), order (\(\sigma _1^2< \cdots < \sigma _6^2\)), mixed (\(\sigma _1^2 = \sigma _2^2 = \sigma _3^2 < \sigma _4^2 = \sigma _5^2 = \sigma _6^2\)), near order (\(\sigma _2^2< \sigma _1^2< \sigma _3^2< \cdots < \sigma _6^2\)), and reverse order (\(\sigma _6^2< \cdots < \sigma _1^2\)). Second, we again used the approach of Böing-Messing et al. (2017) to determine the population variances for a small, medium, and large effect. The resulting values of the variances are shown in Table 4. Note that the values for the variances in the mixed pattern are the same as in the simulation with 4 populations (cf. Table 1) because in both cases there are only two unique variances. Third, we used common sample sizes \(n \in \lbrace 5, 10, 20, 50, 100, 200, 500, 1000, 2000, 5000, 10{,}000 \rbrace \). The hypotheses we tested in each condition were analogous to those in the simulation with 4 populations (cf. Eq. (29)): \(H_0:\sigma _1^2 = \cdots = \sigma _6^2\), \(H_1:\sigma _1^2< \cdots < \sigma _6^2\), \(H_2:\sigma _1^2 = \sigma _2^2 = \sigma _3^2 < \sigma _4^2 = \sigma _5^2 = \sigma _6^2\), and \(H_3:\lnot \, (H_0 \vee H_1 \vee H_2)\). The results of the simulation with 6 populations are shown in Fig. 4. A notable difference between the results of the simulations with 4 and 6 populations is that under the near-order pattern with 6 populations even larger samples are needed to detect that the complement \(H_3\) is true (cf. Figs. 2d and 4d). This is because the ratio of adjacent variances is smaller in the case of 6 populations (cf. Tables 1 and 4), which makes it more difficult for the Bayes factors to detect that the order of the first two population variances is reversed. Note that in Figs. 
4a and e the lines for the GFBF and AFBF overlap to a large extent.


About this article

Cite this article

Böing-Messing, F., Mulder, J. Automatic Bayes Factors for Testing Equality- and Inequality-Constrained Hypotheses on Variances. Psychometrika 83, 586–617 (2018). https://doi.org/10.1007/s11336-018-9615-z

