Journal of Public Health Policy, Volume 38, Issue 2, pp 203–215

Participatory simulation modelling to inform public health policy and practice: Rethinking the evidence hierarchies

  • Eloise O’Donnell
  • Jo-An Atkinson
  • Louise Freebairn
  • Lucie Rychetnik
Original Article


Drawing on the long tradition of evidence-based medicine that aims to improve the efficiency and effectiveness of clinical practice, the field of public health has sought to apply ‘hierarchies of evidence’ to appraise and synthesise public health research. Various critiques of this approach led to the development of synthesis methods that include broader evidence typologies and more ‘fit for purpose’ privileging of methodological designs. While such adaptations offer great utility for evidence-informed public health policy and practice, this paper offers an alternative perspective on the synthesis of evidence that necessitates a yet more egalitarian approach. Dynamic simulation modelling is increasingly recognised as a valuable evidence synthesis tool to inform public health policy and programme planning for complex problems. The development of simulation models draws on and privileges a wide range of evidence typologies, thus challenging the traditional use of ‘hierarchies of evidence’ to support decisions on complex dynamic problems.


Keywords: dynamic simulation modelling, participatory modelling, evidence hierarchy, evidence synthesis, systems science, health policy


There is consensus among public health researchers, policy makers and practitioners on the importance of scientific knowledge informing policy and practice responses to complex public health problems.1 Drawing on the long tradition of evidence-based medicine that aims to improve the efficiency and effectiveness of clinical practice, public health has sought to apply ‘hierarchies of evidence’ to the appraisal and synthesis of public health research.2

Hierarchies of evidence provide standard rankings of evidence rigour based on study design, and are used to inform judgements about the strength of the evidence base for policy and practice decisions.3 A typical example is illustrated in Textbox 1.
Textbox 1

Typical hierarchy of evidence

1. Systematic reviews and meta-analyses

2. RCTs with definitive results

3. RCTs with non-definitive results

4. Cohort studies

5. Case control studies

6. Cross-sectional surveys

7. Case reports4

While depictions of the hierarchy vary, systematic reviews, meta-analyses and RCTs are typically ranked at the top of the hierarchy and case reports at the bottom.

However, there has been much debate about the appropriateness of applying judgements of evidence quality designed for assessing the effectiveness of clinical interventions to the synthesis of public health research.2 Criticisms of applying this approach in public health have included:
  • The complexity of public health problems and the multi-faceted policy responses often required to address them are less amenable to experimental methods of evaluation. A focus on evidence derived from randomised controlled trials (RCTs) may shift policy attention away from the broader system factors that underlie public health problems;2

  • Analytic studies privilege internal consistency over the contextual factors that influence how and why an intervention works.5–7 This makes the generalizability of individual studies difficult to assess, a problem that is further compounded in meta-analyses of such studies;8,9

  • The high value placed on internal validity, generally equated with strength of evidence,10 has given rise to synthesis methods that assign less value to expert opinion, ‘practice-based’ knowledge and qualitative enquiry that provide information of great relevance and utility to programme decision makers; and that

  • Traditional evidence hierarchies do not account for the synergistic nature of mixed methods (such as RCTs and qualitative research), making it challenging when trying to answer end-user questions that move beyond intervention effectiveness.4

Responses to such criticisms led to important modifications to how public health evidence is appraised and weighed—including broader evidence typologies, privileging different methodological designs depending on the question being asked, and reflecting on contextual appropriateness, implementation, public acceptability and cost-effectiveness of interventions.4,10–13 While these adaptations have greatly improved the utility of hierarchies in guiding the synthesis of evidence to support public health policy and practice, this paper offers an alternative perspective necessitated by recent advances in decision support capability now available to policy makers through the use of dynamic simulation modelling. Through the example of alcohol-related chronic disease and acute harms prevention, we describe the benefits of dynamic simulation modelling and its unique approach to evidence synthesis.

Valuing Diverse Evidence in Chronic Disease Prevention

Chronic diseases are among the most common, costly and preventable of all health problems, accounting for 90 per cent of all deaths in Australia in 2011.14 Empirical research over the past two decades has delivered profound insights into the multi-level influences that contribute to the risk of developing these chronic diseases—including genetic, individual, social, economic and environmental factors.15–17 What is less clear is how these factors interact to give rise to the trends in chronic disease we are seeing, and which risk factors are more important to target than others in particular contexts. The increasing scope and complexity of options for intervening have also generated notable uncertainty and debate about how to respond, and how to ensure that policy responses are sustainable, cost-effective and translatable.18

Information relating to particular contexts and an understanding of the challenges faced by those who deliver and receive public health interventions are recognised to be important for evidence-informed decision making.11 More specifically, Ammerman et al (2014) have identified the value of local policy and practice experience as an important source of information when designing policies and programmes that are practical and work in the real world.11,19,20 Local experience is invaluable when addressing questions such as:
  • How is our context similar to or different from the published evidence?

  • What element of a policy is or isn’t likely to work in a local setting, and why?

  • If there are differential impacts on population groups or settings, why might this be happening?

Some of the answers to such questions may not be found in the published literature, and the importance of using both research and practice-based knowledge is well recognised.10,11

Innovative Methods for Synthesis of Diverse Evidence Sources

New methods have been developed to support systematic and robust syntheses of both published evidence and other forms of knowledge through active stakeholder engagement in the evidence synthesis process. We provide an overview of the distinctive elements of two international exemplars of this work compared to the traditional approach of systematic reviews of evidence, and highlight the additional value that dynamic simulation modelling offers (Table 1).
Table 1

Characteristics of methods for synthesis of diverse evidence sources

The table compares four synthesis methods—traditional systematic review, Rapid Realist Review, the ‘Method for synthesising knowledge about public policies’ (NCCHPP approach), and simulation modelling (using a participatory approach)—against six characteristics: diverse evidence sources sought and integrated; scope of question; synthesis/evidence aggregation method (implicit or explicit—explicit for quantitative systematic reviews); engagement of diverse stakeholders in the synthesis process; whether consensus building is actively sought; and output or product. The output is a narrative summary for the traditional systematic review (including quantitative risk estimates), the Rapid Realist Review and the NCCHPP approach, and an analytic tool for simulation modelling, providing quantitative comparative impact and cost estimates of a variety of policy scenarios. (The per-cell entries of the original table did not survive extraction.)

The Rapid Realist Review (RRR),21 first promoted by Pawson,22 includes a diverse range of both quantitative and qualitative evidence sources aimed at providing policy makers with an understanding of ‘what works, for whom, in what contexts, to what extent, and most importantly how and why?’22 This method provides policy makers with a product that is useful for informing time-sensitive decisions, or decisions relating to emerging issues where time and resources are limited.21

The ‘Method for Synthesizing Knowledge about Public Policies’ (MSKPP)20 was developed by the National Collaborating Centre for Healthy Public Policy (NCCHPP), Canada, as an approach to knowledge synthesis that supports public health policy by considering a range of aspects beyond programme effectiveness.20 Issues such as effects on equity, unintended effects and contextual issues related to implementation, such as cost, feasibility and acceptability, are also considered. The method draws on a range of both scientific and non-scientific sources. By looking beyond measures of policy effectiveness, and by taking into account issues related to implementation, this method is able to paint a more accurate picture of policies that are most likely to succeed in the specific context in which their implementation is being considered.11,20

The RRR21 and MSKPP20 methods both draw on a much broader range of evidence than that typically found in traditional systematic reviews. They extract and synthesise both quantitative and qualitative information, apply qualitative analysis of the causal mechanisms underpinning the problem or interventions being assessed, and generate a narrative review that addresses the agreed review topic. They also include processes for engaging stakeholders (such as policy makers, researchers and practice experts) in the evidence synthesis process to increase the contextual relevance and translatability of the outputs.

While both of these methods offer advantages over standard evidence syntheses in that they deliver contextually enriched policy recommendations, there remain a number of important limitations:
  • They are not able to test quantitatively their hypotheses or the assumptions made in the development of theory or the mapping of causal mechanisms that underpin the synthesis process.

  • They are limited in their ability to help decision makers navigate important uncertainties and challenges presented by the dynamics of a problem over time. Such dynamics arise from the interaction of risk factors, population heterogeneity, changing human behaviour and non-additive effects of combining interventions.

  • The output of these methods is a static document that, when provided to decision makers, enters the milieu of competing voices and broader considerations with limited potential for ongoing and active engagement in weighing the alternative policy options.

Dynamic simulation modelling is a systems science methodology that can be used as an alternative approach to synthesise best available evidence and inform policy decisions. Dynamic simulation models are computer models that represent the real world. They provide a platform for integrating diverse evidence sources (research, administrative and other data, expert and local knowledge) through a process of mapping the causes and outcomes of complex problems, and potential points of intervention, followed by quantification, testing and validation of that map. The resulting dynamic model provides decision makers with a ‘what if’ tool, to examine in a robust, risk-free and low cost way, the likely impact on the outcomes of interest of different intervention or policy options (applied individually or in combination). The model’s outputs also include the comparative costs and health system implications of the intervention options over the short and longer term.23–25

The simulation models take into account the nature of complex dynamic systems, i.e. the multiple influences, feedback loops and the unpredictable emergent properties of interactions between variables that occur in the real world, and thus better inform solutions to complex public health problems.26
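To make the ‘what if’ capability described above concrete, the following is a minimal stock-and-flow sketch of scenario comparison. It is purely illustrative and is not the NSW model: the two-stock structure, the incidence and recovery rates, and the 20 per cent intervention effect are all invented for demonstration.

```python
# Minimal system dynamics sketch: a single 'at risk -> diseased' stock-and-flow
# structure used to compare hypothetical policy scenarios. All population sizes
# and rates are illustrative assumptions, not estimates from the NSW model.

def simulate(years, incidence_rate, recovery_rate,
             at_risk=900_000, diseased=100_000):
    """Step the two stocks forward one year at a time and return the
    diseased-population trajectory (including the starting value)."""
    trajectory = [diseased]
    for _ in range(years):
        new_cases = incidence_rate * at_risk    # flow: at risk -> diseased
        recoveries = recovery_rate * diseased   # flow: diseased -> at risk
        at_risk += recoveries - new_cases
        diseased += new_cases - recoveries
        trajectory.append(diseased)
    return trajectory

# 'What if' comparison: baseline versus a scenario in which an intervention
# (e.g. reduced risky drinking) lowers the incidence rate by 20 per cent.
baseline = simulate(years=10, incidence_rate=0.05, recovery_rate=0.02)
scenario = simulate(years=10, incidence_rate=0.04, recovery_rate=0.02)

print(f"Diseased after 10 years, baseline: {baseline[-1]:,.0f}")
print(f"Diseased after 10 years, scenario: {scenario[-1]:,.0f}")
```

Even this toy structure shows the characteristic behaviour of such models: the gap between scenarios compounds over time as the flows interact, which is exactly the kind of dynamic a static narrative review cannot quantify.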

Recent advances in modelling software capability and more user-friendly interfaces have meant that simulation modelling is now more accessible, enabling a transparent and participatory approach for complex multi-scale model development. This has enabled stakeholder engagement and consensus building to become embedded in the development of dynamic simulation models, adding value to policy design for complex problems such as chronic disease prevention.24

A Participatory Modelling Approach

The benefits of simulation modelling methods are enhanced when using a participatory modelling approach.25,27–29 Engaging a multidisciplinary group of stakeholders (such as domain experts, policy makers, programme planners, clinicians, health economists, consumers, community representatives and multi-method simulation modellers) enhances stakeholder communication and the transparency and translation of model outcomes to achieve broader support for consensus and collaborative action.25,28,30 Thus a participatory approach to simulation modelling also provides a platform for strengthening relationships for research translation.

In participatory modelling for public health policy, the group collaboratively maps the risk factors and causal pathways relating to a chronic disease problem, and the mechanisms by which selected interventions have their effects. This map is then quantified, tested and validated as a simulation model to forecast the likely impact of interventions over time. The completed model can then be used to inform strategy discussions with a wider group of stakeholders to enable them to consider the impacts and trade-offs between alternative policy options, and to assist policy agencies in building consensus for action. This process, as applied to the prevention of alcohol-related harm, is described in more detail by Atkinson et al.31

It is imperative that the model is not only grounded within local empirical research and data, but that the model building group includes experts who are privy to local contextual factors and have a solid understanding of the policy issues at hand.30 Furthermore, policy makers and other stakeholders need to be embedded in the process from conception so that the scope and boundaries of the model are appropriate to the problem and policy makers understand how to translate model results into effective actions.28

How Dynamic Simulation Modelling Values Different Types of Evidence

The dynamic simulation modelling approach facilitates collaborative dialogue within the multidisciplinary model building group, often comprising local, national and international members.

Model assumptions and parameter values are derived from considering, critiquing and weighing many sources of evidence, including those traditionally considered both at the top and the bottom of any evidence hierarchy. For example, systematic reviews/meta-analyses are mostly insufficient for mapping the causal web of genetic, social, cultural, economic and environmental factors that influence drinking behaviour, and the mechanism by which this behaviour leads to alcohol-related chronic disease and acute harms.32,33 Rather, integration and triangulation of evidence from systematic reviews, local analytic studies, conceptual models, qualitative studies and expert and local knowledge is required to map and quantify this complex problem. The example below refers to the application of dynamic simulation modelling to inform strategic planning for the prevention of alcohol-related harms in NSW, Australia (the NSW alcohol harm reduction model) developed through partnership between The Australian Prevention Partnership Centre and NSW Ministry of Health.31

The NSW alcohol harm reduction model has a virtual population of individuals with key characteristics of the NSW population. A set of rules has been applied to govern individual decision making about when to drink, where to drink, how much to drink and whom to drink with, each of which depends on factors such as peer influence, access to liquor outlets, opening hours, alcohol pricing, drinking habits, socio-cultural norms, health literacy, etc. This set of rules constitutes the causal hypothesis that underlies the alcohol behaviours of individuals. The hypothesis and associated rules to quantify the model were derived by a participatory stakeholder group from a synthesis of the following evidence:
  a. A well-known conceptual model of behaviour in context (COM-B)34 used as an organising framework;

  b. Systematic reviews of evidence relating to factors contributing to risky alcohol consumption;

  c. Expert knowledge regarding the conditional interactions of factors that lead to heterogeneous risk behaviours across the population of interest;

  d. Estimates from surveys (e.g. the National Drug Strategy Household Survey), the collection of real-time behavioural data (such as big data and accompanying data science analytic techniques)35 and/or expert consensus to quantify the behavioural components of the model;

  e. Historical administrative and/or survey data for key indicators against which the model hypothesis and its outputs could be tested to validate whether it is a plausible representation of the problem (indicators include the proportion drinking at lifetime risk, the proportion participating in single occasion risk drinking, and alcohol-related emergency department presentations, hospitalisations and mortality).
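The rule-based behavioural structure described above can be sketched, for illustration only, as an agent-based fragment. The factor names, weights and threshold below are hypothetical stand-ins and do not reproduce the evidence-derived rules of the actual NSW model:

```python
import random

# Illustrative agent-based fragment: each virtual individual decides whether
# to drink riskily on a given occasion from a weighted combination of factors.
# All factor names, weights and the 0.5 threshold are hypothetical assumptions.

class Agent:
    def __init__(self, rng):
        self.norms = rng.random()          # exposure to permissive socio-cultural norms
        self.outlet_access = rng.random()  # access to liquor outlets
        self.habit = rng.random()          # established drinking habit

    def drinks_riskily(self, peer_pressure, price_factor):
        # Peers, norms, access and habit raise the propensity; price suppresses it.
        propensity = (0.3 * self.norms + 0.2 * self.outlet_access
                      + 0.3 * self.habit + 0.3 * peer_pressure
                      - 0.4 * price_factor)
        return propensity > 0.5

def risky_fraction(n_agents, peer_pressure, price_factor, seed=1):
    """Proportion of a synthetic population drinking riskily under the
    given environmental settings (same seed -> same virtual population)."""
    rng = random.Random(seed)
    agents = [Agent(rng) for _ in range(n_agents)]
    risky = sum(a.drinks_riskily(peer_pressure, price_factor) for a in agents)
    return risky / n_agents

# 'What if': does a hypothetical pricing policy (higher price_factor)
# reduce the fraction of the population drinking riskily?
before = risky_fraction(10_000, peer_pressure=0.5, price_factor=0.2)
after = risky_fraction(10_000, peer_pressure=0.5, price_factor=0.6)
print(f"risky fraction before pricing policy: {before:.3f}")
print(f"risky fraction after pricing policy:  {after:.3f}")
```

Because the same seeded population is evaluated under both settings, any difference between the two fractions is attributable to the policy lever alone, mirroring how scenario comparisons are isolated in the full model.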


The development of the complex behavioural component of the model would not have been feasible without each of these sources of evidence. The modelling group considered the relevance and value of evidence sources, case-by-case, and privileged no single source of evidence. Key experts guided how sources were combined in the model to represent the entire system of alcohol-related harms, and the most likely potential impact of policy and programme implementation, including unanticipated effects.31,36

The participatory approach allowed for explicit and iterative considerations that took into account the literature, opinions of experts and practitioners, and a process of quantitative testing and validation of the model’s hypothesised causal mechanisms (through comparison with real world data from multiple indicators). This helped build confidence in the model’s predictions.25
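One simple form such quantitative testing can take is comparing the model's simulated trajectory for a key indicator against historical observations. The sketch below uses mean absolute percentage error (MAPE) as the comparison metric; the indicator values and the 5 per cent tolerance are invented for demonstration, not actual NSW data or the model's published validation procedure.

```python
# Illustrative validation check: compare a simulated trajectory for a key
# indicator against historical observations using mean absolute percentage
# error (MAPE). All values below are invented for demonstration.

def mape(observed, simulated):
    """Mean absolute percentage error between two equal-length series."""
    assert len(observed) == len(simulated)
    return sum(abs(o - s) / o for o, s in zip(observed, simulated)) / len(observed)

# Hypothetical yearly indicator (e.g. alcohol-related emergency department
# presentations per 100,000) versus model output for the same years.
observed = [410, 425, 440, 452, 460]
simulated = [400, 430, 445, 448, 465]

error = mape(observed, simulated)
print(f"MAPE: {error:.3%}")

# A modelling group might treat the causal hypothesis as plausible if the
# error falls below an agreed tolerance, and revisit the map otherwise.
accepted = error < 0.05
```

Repeating such checks across multiple independent indicators, as item (e) above describes, is what builds confidence that the hypothesised causal mechanisms, and not merely a curve fit, reproduce the observed system behaviour.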

As such, the dynamic evidence synthesis process is inherently translational in nature. It reflects real-world complexity, and draws on both the science of probability and the more qualitative wisdom of experience.


Conclusion

Participatory simulation modelling is a systematic, rigorous and explicit method of evidence synthesis that takes account of the complexity of the real world, and makes better and more pragmatic use of all the available evidence to inform public health policy and practice. This necessitates a more egalitarian approach to the use of different evidence typologies, and relies on explicit integration of multiple evidence sources. It is worth noting, however, that the policy and practice-relevant ‘what if’ discussions inherent in the modelling process would normally occur anyway, but only after the completion of a traditional systematic review, and often with the many assumptions that underpin these deliberations remaining implicit. Dynamic simulation modelling offers an interactive tool that allows policy makers to explore more explicitly which responses to complex public health problems are the most efficient, effective, cost-effective, equitable and feasible. It values and integrates expert and practice-based knowledge and provides a platform for diverse stakeholders to engage in a constructive and formative way in the development of evidence-informed policy.



This work was funded by the National Health and Medical Research Council of Australia (NHMRC) through its partnership centre grant scheme (Grant ID: GNT9100001). NSW Health, ACT Health, The Commonwealth Department of Health, The Hospitals Contribution Fund of Australia and HCF Research Foundation contributed funds to support this work as part of the NHMRC partnership centre grant scheme. The contents of this paper are solely the responsibility of the individual authors and do not reflect the views of the NHMRC or funding partners. The authors thank Geoff McDonnell for his review and valuable comments on the penultimate draft of this paper, and Sally Redman for her contributions to discussions during the conceptualisation of this work.


References

  1. McGuire, W.L. (2005) Beyond EBM: New directions for evidence-based public health. Perspectives in Biology and Medicine 48(4): 557–569.
  2. Abeysinghe, S. and Parkhurst, J. (2013) ‘Good’ evidence for improved policy making: From hierarchies to appropriateness. London: London School of Hygiene and Tropical Medicine.
  3. Nutley, S.M. and Powell, A.E. (2013) What counts as good evidence? London: Alliance for Useful Evidence.
  4. Petticrew, M. and Roberts, H. (2003) Evidence, hierarchies, and typologies: Horses for courses. Journal of Epidemiology and Community Health 57(7): 527–529.
  5. Milat, A.J., Bauman, A.E., Redman, S. and Curac, N. (2011) Public health research outputs from efficacy to dissemination: A bibliometric analysis. BMC Public Health 11: 934.
  6. Cunningham, J.A. and Van Mierlo, T. (2009) Methodological issues in the evaluation of internet-based interventions for problem drinking. Drug and Alcohol Review 28(1): 12–17.
  7. Ogilvie, D., Foster, C.E., Rothnie, H., Cavill, N., Hamilton, V., Fitzsimons, C.F. et al (2007) Interventions to promote walking: Systematic review. BMJ 334(7605): 1204.
  8. Laws, R.A., St George, A.B., Rychetnik, L. and Bauman, A.E. (2012) Diabetes prevention research: A systematic review of external validity in lifestyle interventions. American Journal of Preventive Medicine 43(2): 205–214.
  9. Glasgow, R.E., Lichtenstein, E. and Marcus, A.C. (2003) Why don’t we see more translation of health promotion research to practice? Rethinking the efficacy-to-effectiveness transition. American Journal of Public Health 93(8): 1261–1267.
  10. Green, L.W. (2008) Making research relevant: If it is an evidence-based practice, where’s the practice-based evidence? Family Practice 25(1): i20–i24.
  11. Ammerman, A., Smith, T.W. and Calancie, L. (2014) Practice-based evidence in public health: Improving reach, relevance, and results. Annual Review of Public Health 35: 47–63.
  12. Hansen, H.F. (2014) Organisation of evidence-based knowledge production: Evidence hierarchies and evidence typologies. Scandinavian Journal of Public Health 42(13): 11–17.
  13. Green, L.W. and Glasgow, R.E. (2006) Evaluating the relevance, generalization, and applicability of research: Issues in external validation and translation methodology. Evaluation and the Health Professions 29: 126–153.
  14. Australian Health Survey: First results, 2011–2012 [database on the Internet] (2012).
  15. Brockway, I. (2012) Risk factors contributing to chronic disease. Canberra, Australia: Australian Institute of Health and Welfare. Contract No.: Cat. no. PHE 157.
  16. Wilcox, S. (2014) Chronic diseases in Australia: The case for changing course. Melbourne, Australia: Australian Health Policy Collaboration. Report No.: 2014-02.
  17. Cadilhac, D.A. (2009) The Health and Economic Benefits of Reducing Disease Risk Factors: Research Report. Melbourne, Australia.
  18. Rychetnik, L., Bauman, A., Laws, R., King, L., Rissel, C., Nutbeam, D. et al (2012) Translating research for evidence-based public health: Key concepts and future directions. Journal of Epidemiology and Community Health 66(12): 1187–1192.
  19. Leeman, J. and Sandelowski, M. (2012) Practice-based evidence and qualitative inquiry. Journal of Nursing Scholarship 44(2): 171–179.
  20. Morestin, F., Gauvin, F., Hogue, M. and Benoit, F. Method for synthesizing knowledge about public policies. Canada: National Collaborating Centre for Healthy Public Policy.
  21. Saul, J.E., Willis, C.D., Bitz, J. and Best, A. (2013) A time-responsive tool for informing policy making: Rapid realist review. Implementation Science 8: 103.
  22. Pawson, R. (2002) Evidence-based policy: The promise of ‘realist synthesis’. Evaluation 8(3): 340–358.
  23. Homer, J.B. and Hirsch, G.B. (2006) System dynamics modeling for public health: Background and opportunities. American Journal of Public Health 96(3): 452–458.
  24. Li, Y., Lawley, M.A., Siscovick, D.S., Zhang, D. and Pagán, J.A. (2016) Agent-based modeling of chronic diseases: A narrative review and future research directions. Preventing Chronic Disease 13.
  25. Atkinson, J.A., Page, A., Wells, R., Milat, A. and Wilson, A. (2015) A modelling tool for policy analysis to support the design of efficient and effective policy responses for complex public health problems. Implementation Science 10: 26.
  26. Ip, E.H., Rahmandad, H., Shoham, D.A., Hammond, R., Huang, T.T., Wang, Y. et al (2013) Reconciling statistical and systems science approaches to public health. Health Education & Behavior 40(1 Suppl): 123S–131S.
  27. Voinov, A. and Bousquet, F. (2010) Modelling with stakeholders. Environmental Modelling and Software 25(11): 1268–1281.
  28. Rouwette, E.A.J.A., Korzilius, H., Vennix, J.A.M. and Jacobs, E. (2011) Modeling as persuasion: The impact of group model building on attitudes and behavior. System Dynamics Review 27(1): 1–21.
  29. Hovmand, P. (2014) Community Based System Dynamics. New York: Springer.
  30. Loyo, H.K., Batcher, C., Wile, K., Huang, P., Orenstein, D. and Milstein, B. (2013) From model to action: Using a system dynamics model of chronic disease risks to align community action. Health Promotion Practice 14(1): 53–61.
  31. Atkinson, J., O’Donnell, E.M., Wiggers, J., McDonnell, G., Mitchell, J., Freebairn, L. et al (2017) A participatory dynamic simulation modelling approach to developing policy responses to reduce alcohol-related harms: Rationale and procedure. Public Health Research and Practice 27(1): 2711707.
  32. Sugihara, G., May, R., Ye, H., Hsieh, C.H., Deyle, E., Fogarty, M. et al (2012) Detecting causality in complex ecosystems. Science 338(6106): 496–500.
  33. Shpitser, I. and Pearl, J. (2008) Complete identification methods for the causal hierarchy. Journal of Machine Learning Research 9: 1941–1979.
  34. Michie, S., van Stralen, M.M. and West, R. (2011) The behaviour change wheel: A new method for characterising and designing behaviour change interventions. Implementation Science 6: 42–49.
  35. Marshall, D.A., Burgos-Liz, L., Pasupathy, K.S., Padula, W.V., Ijzerman, M.J., Wong, P.K. et al (2016) Transforming healthcare delivery: Integrating dynamic simulation modelling and big data in health economics and outcomes research. PharmacoEconomics 34(2): 115–126.
  36. Gittelsohn, J., Mui, Y., Adam, A., Lin, S., Kharmats, A., Igusa, T. et al (2015) Incorporating systems science principles into the development of obesity prevention interventions: Principles, benefits, and challenges. Current Obesity Reports 4(2): 174–181.

Copyright information

© Macmillan Publishers Ltd 2017

Authors and Affiliations

  • Eloise O’Donnell (1)
  • Jo-An Atkinson (2)
  • Louise Freebairn (4)
  • Lucie Rychetnik (3)

  1. The Australian Prevention Partnership Centre, Sax Institute, Sydney, Australia
  2. The Australian Prevention Partnership Centre, School of Medicine, The University of Sydney, Sydney, Australia
  3. The Australian Prevention Partnership Centre, School of Medicine, The University of Notre Dame, Notre Dame, USA
  4. The Australian Prevention Partnership Centre, Knowledge Translation and Health Outcomes, Epidemiology Section, ACT Health, Canberra City, Australia
