
Meta-analysis I

Computational Methods

In: Methods of Clinical Epidemiology

Abstract

Meta-analysis is now used in a wide range of disciplines, in particular epidemiology and evidence-based medicine, where the results of some meta-analyses have led to major changes in clinical practice and health care policy. It applies to collections of research that produce quantitative results, examine the same constructs and relationships, and report findings that can be expressed in a common statistical form called an effect size (e.g. correlation coefficients, odds ratios, or proportions), so that the studies are comparable given the question at hand. The results of several studies addressing a set of related research hypotheses are then combined quantitatively using statistical methods. This chapter provides an in-depth discussion of the various statistical methods currently available, with a focus on bias adjustment in meta-analysis.


Bibliography

  • Al Khalaf MM, Thalib L, Doi SA (2011) Combining heterogenous studies using the random-effects model is a mistake and leads to inconclusive meta-analyses. J Clin Epidemiol 64:119–123

  • Bailey KR (1987) Inter-study differences: how should they influence the interpretation and analysis of results? Stat Med 6:351–360

  • Balk EM, Bonis PA, Moskowitz H, Schmid CH, Ioannidis JP, Wang C, Lau J (2002) Correlation of quality measures with estimates of treatment effect in meta-analyses of randomized controlled trials. JAMA 287:2973–2982

  • Batham A, Gupta MA, Rastogi P, Garg S, Sreenivas V, Puliyel JM (2009) Calculating prevalence of hepatitis B in India: using population weights to look for publication bias in conventional meta-analysis. Indian J Pediatr 76:1247–1257

  • Berard A, Bravo G (1998) Combining studies using effect sizes and quality scores: application to bone loss in postmenopausal women. J Clin Epidemiol 51:801–807

  • Bravata DM, Olkin I (2001) Simple pooling versus combining in meta-analysis. Eval Health Prof 24:218–230

  • Burton A, Altman DG, Royston P, Holder RL (2006) The design of simulation studies in medical statistics. Stat Med 25:4279–4292

  • Concato J (2004) Observational versus experimental studies: what’s the evidence for a hierarchy? NeuroRx 1:341–347

  • Conn VS, Rantz MJ (2003) Research methods: managing primary study quality in meta-analyses. Res Nurs Health 26:322–333

  • Deeks JJ, Dinnes J, D’Amico R, Sowden AJ, Sakarovitch C, Song F, Petticrew M, Altman DG (2003) Evaluating non-randomised intervention studies. Health Technol Assess 7:1–173, iii-x

  • DerSimonian R, Laird N (1986) Meta-analysis in clinical trials. Control Clin Trials 7:177–188

  • Doi SA, Thalib L (2008) A quality-effects model for meta-analysis. Epidemiology 19:94–100

  • Doi SA, Thalib L (2009) An alternative quality adjustor for the quality effects model for meta-analysis. Epidemiology 20:314

  • Doi SA, Barendregt JJ, Mozurkewich EL (2011) Meta-analysis of heterogenous clinical trials: an empirical example. Contemp Clin Trials 32:288–298

  • Doi SA, Barendregt JJ, Onitilo AA (2012) Methods for the bias adjustment of meta-analyses of published observational studies. J Eval Clin Pract. doi:10.1111/j.1365-2753.2012.01890.x [Epub ahead of print]

  • Downs SH, Black N (1998) The feasibility of creating a checklist for the assessment of the methodological quality both of randomised and non-randomised studies of health care interventions. J Epidemiol Community Health 52:377–384

  • Egger M, Juni P, Bartlett C, Holenstein F, Sterne J (2003) How important are comprehensive literature searches and the assessment of trial quality in systematic reviews? Empirical study. Health Technol Assess 7:1–76

  • Eisenhart C (1947) The assumptions underlying the analysis of variance. Biometrics 3:1–21

  • Greenland S (1994) Invited commentary: a critical look at some popular meta-analytic methods. Am J Epidemiol 140:290–296

  • Herbison P, Hay-Smith J, Gillespie WJ (2006) Adjustment of meta-analyses on the basis of quality scores should be abandoned. J Clin Epidemiol 59:1249–1256

  • Higgins JP, Thompson SG (2002) Quantifying heterogeneity in a meta-analysis. Stat Med 21:1539–1558

  • Juni P, Witschi A, Bloch R, Egger M (1999) The hazards of scoring the quality of clinical trials for meta-analysis. JAMA 282:1054–1060

  • Kjaergard LL, Villumsen J, Gluud C (2001) Reported methodologic quality and discrepancies between large and small randomized trials in meta-analyses. Ann Intern Med 135:982–989

  • Leeflang M, Reitsma J, Scholten R, Rutjes A, Di Nisio M, Deeks J, Bossuyt P (2007) Impact of adjustment for quality on results of metaanalyses of diagnostic accuracy. Clin Chem 53:164–172

  • Lindsey JK (1999) On the use of corrections for overdispersion. Appl Stat 48:553–561

  • McCullagh P, Nelder JA (1983) Generalized linear models. Chapman and Hall, London

  • Moher D, Pham B, Jones A, Cook DJ, Jadad AR, Moher M, Tugwell P, Klassen TP (1998) Does quality of reports of randomised trials affect estimates of intervention efficacy reported in meta-analyses? Lancet 352:609–613

  • Moja LP, Telaro E, D’Amico R, Moschetti I, Coe L, Liberati A (2005) Assessment of methodological quality of primary studies by systematic reviews: results of the metaquality cross sectional study. BMJ 330:1053

  • Overton RC (1998) A comparison of fixed-effects and mixed (random-effects) models for meta-analysis tests of moderator variable effects. Psychol Methods 3:354–379

  • Poole C, Greenland S (1999) Random-effects meta-analyses are not always conservative. Am J Epidemiol 150:469–475

  • Realini JP, Goldzieher JW (1985) Oral contraceptives and cardiovascular disease: a critique of the epidemiologic studies. Am J Obstet Gynecol 152:729–798

  • Schulz KF, Chalmers I, Hayes RJ, Altman DG (1995) Empirical evidence of bias. Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA 273:408–412

  • Senn S (2007) Trying to be precise about vagueness. Stat Med 26:1417–1430

  • Shuster JJ (2010) Empirical vs natural weighting in random effects meta-analysis. Stat Med 29:1259–1265

  • Slim K, Nini E, Forestier D, Kwiatkowski F, Panis Y, Chipponi J (2003) Methodological index for non-randomized studies (minors): development and validation of a new instrument. ANZ J Surg 73:712–716

  • Spiegelhalter DJ, Best NG (2003) Bayesian approaches to multiple sources of evidence and uncertainty in complex cost-effectiveness modelling. Stat Med 22:3687–3709

  • Tjur T (1998) Nonlinear regression, quasi likelihood, and overdispersion in generalized linear models. Am Stat 52:222–227

  • Tritchler D (1999) Modelling study quality in meta-analysis. Stat Med 18:2135–2145

  • Turner RM, Spiegelhalter DJ, Smith GC, Thompson SG (2009) Bias modelling in evidence synthesis. J R Stat Soc Ser A Stat Soc 172:21–47

  • Verhagen AP, de Vet HC, de Bie RA, Kessels AG, Boers M, Bouter LM, Knipschild PG (1998) The Delphi list: a criteria list for quality assessment of randomized clinical trials for conducting systematic reviews developed by Delphi consensus. J Clin Epidemiol 51:1235–1241

  • Verhagen AP, de Vet HC, de Bie RA, Boers M, van den Brandt PA (2001) The art of quality assessment of RCTs included in systematic reviews. J Clin Epidemiol 54:651–654

  • Wells G, Shea B, O’Connell D, Peterson J, Welch V, Losos M, Tugwell P (2000) The Newcastle-Ottawa Scale (NOS) for assessing the quality of nonrandomised studies in meta-analyses. http://www.ohri.ca/programs/clinical_epidemiology/oxford.htm. Accessed 15 June 2007

  • Whiting P, Harbord R, Kleijnen J (2005) No role for quality scores in systematic reviews of diagnostic accuracy studies. BMC Med Res Methodol 5:19

  • Woolf B (1955) On estimating the relation between blood group and disease. Ann Hum Genet 19:251–253

Author information

Correspondence to Suhail A. R. Doi.
Appendices

Appendix 1: Need for an Overdispersion Correction

In a study with overdispersed data, the mean or expectation structure (θ) is adequate but the variance structure [σ²(θ)] is not. Individuals in the study can have the outcome with some degree of dependence on study-specific parameters unrelated to the intervention. If such data are analysed as if the outcomes were independent, the sampling variances tend to be too small, giving a false sense of precision. One approach is to think of the true variance structure as following the form [ϕ(θ)σ²(θ)]; however, such a form is complex to fit. A simpler approach is to suppose ϕ(θ) = c, so that the true variance structure [cσ²(θ)] is a constant multiple of the theoretical variance structure. A common method of estimating c, suggested by Lindsey (1999) and Tjur (1998), is to divide the observed chi-squared goodness-of-fit statistic for the pooled studies by its degrees of freedom:

$$ c={\chi^2}/\mathrm{df} $$

If there is no overdispersion or lack of fit, c = 1 (because the expected value of the chi-squared statistic equals its degrees of freedom); if there is, then c > 1. In a meta-analysis, this goodness-of-fit chi-squared divided by its df is equal to H² as defined by Higgins and Thompson (2002).
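As a sketch (not the authors' code; the function name and interface are assumptions), c can be computed from the study effect sizes and variances via Cochran's Q statistic:

```python
import numpy as np

def overdispersion_constant(effects, variances):
    """Estimate c = chi^2 / df, where chi^2 is Cochran's Q goodness-of-fit
    statistic for the fixed-effect pooled estimate and df = k - 1.
    In a meta-analysis this ratio equals H^2 (Higgins and Thompson 2002)."""
    y = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)  # inverse-variance weights
    pooled = np.sum(w * y) / np.sum(w)            # fixed-effect pooled estimate
    q = np.sum(w * (y - pooled) ** 2)             # Cochran's Q (chi-squared fit statistic)
    return q / (len(y) - 1)                       # expected to be ~1 under no overdispersion
```

For homogeneous studies Q fluctuates around its degrees of freedom, so c stays near 1; values well above 1 signal overdispersion.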

Using the overdispersion parameter as a constant multiplier of the variances of each study in the meta-analysis presupposes that a constant increase in this parameter produces a constant increase in variance. The impact of the parameter is then uncapped, and a point is eventually reached where the variances are over-inflated for a given level of overdispersion, resulting in overcorrection and confidence intervals that are too wide. To reduce the impact of large values of H², we can transform H² to its reciprocal and use this to proportionally inflate the variances. Higgins and Thompson (2002) also defined an I² parameter, an index of dispersion restricted between 0 (no dispersion) and 1. If we reverse the I² scale (by subtracting it from 1) so that no dispersion (only sampling error) corresponds to 1 rather than 0, then (1 − I²) is indeed the reciprocal of H². We thus used (1 − I²) as an exponent to proportionally inflate study variances < 1; for variances > 1, we used 2 minus this overdispersion parameter (which reduces to [I² + 1]) as the inflation exponent. Additional rescaling was done by taking various roots of (1 − I²) and using the simulation described above to examine the impact on coverage of the confidence interval. The fourth root was found to give an acceptable simulated coverage of the confidence interval, around 95 %. We thus used [(1 − I²)^(1/4)], which is equivalent to (1/H²)^(1/4), as the final overdispersion correction factor. This correction was then used to inflate the variances of the individual studies, resulting in a more conservative pooled variance for the meta-analysis.
Even if the accuracy of this approximation is questionable, common sense suggests it is better to perform this correction, implicitly making the (more or less incorrect) assumption that the distribution of c is approximated well enough by a χ² distribution with k − 1 degrees of freedom, than to perform no correction at all, implicitly making the (certainly incorrect) assumption that there is no overdispersion in the data (Tjur 1998). This adjustment in the QE model corrects for overdispersion within studies, which affects the precision of the pooled estimate, not for heterogeneity between studies, which affects the estimate itself.
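One plausible reading of the correction just described can be sketched as follows; this is an illustration under stated assumptions, not the published QE implementation, and the function name is hypothetical. It uses f = (1 − I²)^(1/4) as the inflation exponent for variances below 1 and the complement 2 − f for variances above 1:

```python
import numpy as np

def qe_variance_inflation(variances, i_squared):
    """Sketch of the Appendix 1 overdispersion correction (one plausible
    reading, not the authors' code): inflate study variances using the
    exponent f = (1 - I^2)**(1/4). For v < 1, v**f >= v because f <= 1;
    for v > 1, the complementary exponent 2 - f gives v**(2 - f) >= v."""
    f = (1.0 - i_squared) ** 0.25
    v = np.asarray(variances, dtype=float)
    return np.where(v < 1.0, v ** f, v ** (2.0 - f))
```

With I² = 0 the exponent is 1 and the variances are unchanged; as I² grows, each variance is inflated, yielding a more conservative pooled variance.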

Appendix 2: Quality Scores and Population Impact Scores

A QE-type meta-analysis requires a reproducible and effective scheme of quality assessment. However, any quality score can be used with the method, so we are not constrained to one instrument. There are many different quality assessment instruments, and most have parameters that allow the likelihood of bias to be assessed. Although the importance of such quality assessment for experimental studies is well established, quality assessment of other study designs in systematic reviews is far less well developed. The feasibility of creating a single quality checklist applicable to various study designs has been explored by Downs and Black (1998), and research has gone into developing instruments to measure the methodological quality of observational studies in meta-analyses (see Chap. 13). Nevertheless, although many quality assessment schemes exist, there is as yet no consensus on how to synthesize information about quality from a range of study designs within a systematic review. Concato (2004) suggests that a more balanced view of observational and experimental evidence is necessary. The way Q_i is computed from the score for each study, and the additional use of population weights (for burden-of-disease or type C studies), is depicted in Table 14.1. The population weights are applied as a method of standardization of the group pooled estimates where there is a single estimate per group. The population-weighted analysis does not use inverse-variance weighting and, if a rate is being pooled, gives a result equivalent to the direct standardization used in epidemiology. Rates have a problematic variance, but it can be based on a normal approximation to the Poisson distribution:

$$ \mathrm{Var}_{\mathrm{rate}} = O \times {\left( \frac{K}{P} \right)}^{2} $$

where O is the number of observed events, P is the person-time of observation and K is a constant multiplier. In the computation, zero rates can be imputed to have variances based on a single observed event as a continuity correction.
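This variance can be sketched directly from the formula (the helper name is hypothetical; K is assumed to be the rate multiplier, e.g. 100 000 for rates per 100 000 person-years):

```python
def rate_variance(observed, person_time, k=1.0):
    """Normal approximation to the Poisson variance of a rate K * O / P:
    Var = O * (K / P)**2, where O is the observed event count and P the
    person-time. A zero count is imputed as a single event (continuity
    correction), as described in the text."""
    o = observed if observed > 0 else 1  # continuity correction for zero rates
    return o * (k / person_time) ** 2
```

For example, 4 events over 100 person-years with K = 1000 gives a variance of 4 × (1000/100)² = 400 on the per-1000 scale.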

Table 14.1 Hypothetical calculation of Q_i for use in QE meta-analyses


Copyright information

© 2013 Springer-Verlag Berlin Heidelberg

Cite this chapter

Doi, S.A.R., Barendregt, J.J. (2013). Meta-analysis I. In: Doi, S., Williams, G. (eds) Methods of Clinical Epidemiology. Springer Series on Epidemiology and Public Health. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-37131-8_14

  • DOI: https://doi.org/10.1007/978-3-642-37131-8_14

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-37130-1

  • Online ISBN: 978-3-642-37131-8

  • eBook Packages: Medicine, Medicine (R0)
