International Journal of Public Health, Volume 62, Issue 8, pp 845–847

Population health intervention research: myths and misconceptions


While great strides have been made in Population Health Intervention Research (PHIR), uptake by public health researchers has been suboptimal (Hawe and Potvin 2009; Di Ruggiero et al. 2017). Common PHIR myths account for some of this lag in uptake, and misconceptions lead researchers to forgo PHIR, since the challenges seem insurmountable. As a result, the most common type of population health research remains the non-interventional observational study.

Given that a “simple myth is more cognitively attractive than an overcomplicated correction” (Cook and Lewandowsky 2011), we will attempt to address a subset of myths with brevity, at the risk of oversimplification. Of course, in reality, the issues are more nuanced.

Myth 1: Only randomized trials will suffice

One of the greatest barriers to PHIR is the fear that only randomized trials will suffice as ‘proof’ that a proposed public health intervention works (Di Ruggiero et al. 2017). It is generally true that ‘intervention research’ should aim for randomized controlled trials (RCTs) as the gold standard for evidence generation when randomization represents the best study design for the research question at hand, and when randomization is feasible (either at the individual level, or at the cluster level). However, for interventions where randomization is impossible or unethical to operationalize, the next best available study design from generally agreed schema of evidence hierarchies should be employed, and will be respected as the ‘best possible evidence’ for the question at hand (Haynes et al. 2012).
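Randomization at the cluster level, as mentioned above, can be operationally simple: communities (schools, clinics, regions) rather than individuals are randomly allocated to arms. The sketch below is a minimal, hypothetical illustration; the cluster names and the two-arm equal split are invented for this example, and a real trial would typically also stratify or match clusters.

```python
# Minimal sketch of cluster-level (community-level) random assignment.
# Cluster names are hypothetical; real trials usually stratify or match.
import random

def randomize_clusters(clusters, seed=0):
    """Shuffle the clusters reproducibly and split them into two equal arms."""
    rng = random.Random(seed)          # fixed seed -> reproducible allocation
    shuffled = clusters[:]             # copy so the input list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"intervention": shuffled[:half], "control": shuffled[half:]}

arms = randomize_clusters(["school_A", "school_B", "school_C", "school_D"])
print(arms)
```

Every cluster ends up in exactly one arm, and the fixed seed makes the allocation auditable, which matters when the assignment itself must be defensible to decision-makers.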

Myth 2: Randomized trials are too expensive

Admittedly, RCTs can be expensive, though this is not automatically true. Many informative RCTs have been performed on small budgets, and with greater ‘bang for buck’ than non-randomized studies. In fact, RCTs may represent the most cost-effective option of all study designs, due to the greater knowledge gained per research dollar spent (Haynes et al. 2012). As public health researchers, we need to break free of the limiting preconception that RCTs are impossible and prohibitively expensive, as they often represent the best instrument for providing proof of net benefit. Since every policy implementation is by nature an (uncontrolled) experiment, the act of embedding random assignment in initiatives that are being implemented anyway, using administrative data that are already routinely collected, adds a small marginal cost that is justified by the knowledge gained. A growing number of funders and decision-makers are specifically seeking and supporting randomized evidence to inform important questions in public health (Haynes et al. 2012).

Myth 3: Randomized vs. observational: an irreparable validity-relevance tradeoff

It is commonly argued that only observational studies provide external validity (i.e., more closely reflect expected results in real-world settings), while randomized trials provide improved internal validity (i.e., more closely estimate the ‘truth’ with respect to measured differences between intervention and comparator) at the cost of external validity and applicability. In reality, both designs play an essential role when each is respected for its strengths and limitations in informing different aspects of the same research question. Perhaps most importantly, pragmatic randomized trials represent an innovative hybrid study design that overcomes this validity-relevance tradeoff. Large simple pragmatic trials randomize ‘all-comers’ to test the effectiveness of an intervention versus a comparator under everyday, realistic conditions, without attempting to control the ‘messiness’ of the real-world setting. Pragmatic trials have been shown to be efficient means of finding solutions in local and global settings (Haynes et al. 2012). Furthermore, in public health research, there is increasing reliance on quasi-experimental designs (pre/post intervention) when randomization is not feasible (Brownson et al. 2010).
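The pre/post quasi-experimental logic mentioned above is often operationalized as a difference-in-differences comparison: the before-after change in the intervention group is compared with the change in a comparison group, which stands in for the secular trend. The outcome values below are entirely hypothetical, invented only to show the arithmetic.

```python
# Toy difference-in-differences sketch for a pre/post quasi-experimental
# design. All outcome means are hypothetical, not from any real study.

def diff_in_diff(pre_treat, post_treat, pre_ctrl, post_ctrl):
    """Change in the intervention group minus change in the comparison
    group; the comparison group's change proxies the background trend."""
    return (post_treat - pre_treat) - (post_ctrl - pre_ctrl)

# Hypothetical smoking-prevalence means (%) before/after a local policy:
effect = diff_in_diff(pre_treat=24.0, post_treat=20.0,   # policy region
                      pre_ctrl=23.0,  post_ctrl=22.0)    # comparison region

print(f"estimated intervention effect: {effect:+.1f} percentage points")
```

Here the raw pre/post drop in the policy region (4 points) would overstate the effect; subtracting the comparison region's 1-point drop attributes only 3 points to the intervention.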

Myth 4: Positive results are preferred over negative

The objective of PHIR is to seek the ‘truth’ about whether an intervention works, in whom, and under what circumstances. People often ‘try’ for a positive result when they undertake interventional research. However, the point of research is to proceed with equipoise: neither preferring one result over another, nor having a preconceived notion of what the conclusion should be. There is just as much value in learning what does not work as what does work, to inform public health policy about what not to implement, thereby preventing wasted resources.

Myth 5: If it works, it is worth it

A common misconception underlying evidence-based decision-making is that any intervention which has been shown to ‘work’ should be translated to practice. However, just because it ‘works’ does not necessarily mean that it is ‘worth it’ or cost-effective (Haynes 1999). The first point of contention in this argument is the definition of ‘works’. All interventions bring an array of possible effects (both positive and negative), which will vary in their importance to individuals and to the population as a whole. To support the conclusion that an intervention works, PHIR needs to provide evidence that the magnitude and types of benefits outweigh the risks (i.e., that the net benefit is sufficiently large for the most relevant outcomes and health inequities, rather than for surrogate outcomes). The second point of contention is that, even for interventions whose proven benefits outweigh the risks, if the magnitude of benefit is incommensurate with the resources required to achieve it, then the intervention may not be worth it, even though it ‘works’. Societies are not willing to pay exceedingly greater resources for small margins of benefit when there are other, better uses of those resources that would provide a better return on social investment.
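The distinction between ‘works’ and ‘worth it’ can be made concrete with an incremental cost-effectiveness calculation. All figures below are purely hypothetical, chosen for illustration, as is the willingness-to-pay threshold.

```python
# Hypothetical worked example: an intervention can 'work' (a real net
# benefit) yet fail a cost-effectiveness test. All figures are invented.

def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit
    of health effect (e.g., per quality-adjusted life year gained)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# A hypothetical programme vs. current practice, per person:
ratio = icer(cost_new=1_200.0, cost_old=200.0,    # costs $1,000 more...
             effect_new=10.01, effect_old=10.00)  # ...for 0.01 extra QALYs

threshold = 50_000.0  # an illustrative willingness-to-pay per QALY

print(f"ICER = ${ratio:,.0f} per QALY")
print("worth it" if ratio <= threshold else "works, but not worth it")
```

The programme genuinely improves health, yet at $100,000 per QALY gained it exceeds the illustrative threshold, so the resources may do more good elsewhere.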

An orienting framework for evidence generation

Perhaps a sequenced approach to PHIR evidence generation can be suggested, at least as an initial orientation to an ‘idealized scenario’ (see Fig. 1): randomized or other controlled studies are performed first (if feasible) to answer ‘can it work?’ (in the ideal trial setting), followed by quasi-experimental or observational studies to answer ‘does it work?’ (in the messy real-world setting), each followed by contextual analysis and deliberation over whether it would work, and be suitable, ‘here’ (in my specific setting) (Haynes 1999). Finally, economic evaluation and return-on-investment analysis are performed to determine whether it is worth it (Brownson et al. 2010; Haynes 1999). Alternatively, quasi-experimental studies or large-scale pragmatic randomized trials with piggy-backed economic evaluation could be adopted to address a number of these issues simultaneously. The requirement for sequenced evidence will depend on the nature of the research questions being addressed and the rigor of evidence required, given the health equity at stake.
Fig. 1 Progressive roles for differing study designs

Addressing these myths and misconceptions, and providing a simplifying framework, may surmount some barriers to improving the quantity and quality of PHIR, and accelerate progression toward the ultimate goal of evidence-informed policymaking, reduced inequity, and improved return on investment in public health.


Compliance with ethical standards

Conflict of interest

The authors declare no competing interests.


  1. Brownson RC, Baker EA, Leet TL, Gillespie KN, True WR (2010) Evidence-based public health, 2nd edn. Oxford University Press, Oxford
  2. Cook J, Lewandowsky S (2011) The debunking handbook. University of Queensland, St Lucia, Australia. ISBN 978-0-646-56812-6
  3. Di Ruggiero E, Potvin L, Allegrante JP, Dawson A, De Leeuw E et al (2017) Ottawa statement from the sparking solutions summit on population health intervention research. Can J Public Health 107(6):e492–e496
  4. Hawe P, Potvin L (2009) What is population health intervention research? Can J Public Health 100(1):8–14
  5. Haynes B (1999) Can it work? Does it work? Is it worth it? BMJ 319:652
  6. Haynes L, Service O, Goldacre B, Torgerson D (2012) Test, learn, adapt: developing public policy with randomised controlled trials. Technical report, Cabinet Office Behavioural Insights Team, UK. Accessed 18 April 2017

Copyright information

© Swiss School of Public Health (SSPH+) 2017

Authors and Affiliations

  1. Department of Epidemiology and Biostatistics, Schulich School of Medicine and Dentistry, Western University, London, Canada
  2. Department of Anesthesia and Perioperative Medicine, Schulich School of Medicine and Dentistry, Western University, London, Canada
