Barriers to the evaluation of evidence-based public health policy

  • Megan Freund
  • Alison Zucca
  • Robert Sanson-Fisher
  • Andrew Milat
  • Lisa Mackenzie
  • Heidi Turon


Public health policy has the potential to produce great benefits for individuals and communities. There is growing demand that such efforts be rigorously evaluated to ensure that the expected benefits are, in fact, realised. Public health policy is commonly evaluated in terms of consumer acceptability, reach, or changes in knowledge and attitudes, often using research designs that lack rigour. These approaches, however, do not answer three critical questions: Has a change in the desired outcome occurred? Was it a consequence of the policy rather than some extraneous factor? Was the size of the change meaningful, and was the policy cost-effective? We, a team of government and academic scholars working in research and evaluation, examine some of the more common impediments to robust evaluation: political pressures, a lack of investment in evaluation capacity within the bureaucracy, and the failure of academic researchers to appreciate the need for evaluation of public health policy.


Keywords: Public health policy · Evaluation · Complex intervention · Population health · Policymaker · Implementation science · Dissemination science



This work was supported by funding from The Australian Prevention Partnership Centre and by infrastructure funding from the Hunter Medical Research Institute. Megan Freund is supported by a National Health and Medical Research Council Translating Research Into Practice Fellowship. Lisa Mackenzie is supported by a Postdoctoral Fellowship grant [PF-16-011] from the Australian National Breast Cancer Foundation.



Copyright information

© Springer Nature Limited 2018

Authors and Affiliations

  • Megan Freund 1, 2, 3
  • Alison Zucca 1, 2, 3
  • Robert Sanson-Fisher 1, 2, 3
  • Andrew Milat 4
  • Lisa Mackenzie 1, 2, 3
  • Heidi Turon 1, 2, 3

  1. Health Behaviour Research Collaborative, School of Medicine and Public Health, Faculty of Health and Medicine, University of Newcastle, Callaghan, Australia
  2. Priority Research Centre for Health Behaviour, Faculty of Health and Medicine, The University of Newcastle, Callaghan, Australia
  3. Hunter Medical Research Institute, New Lambton, Australia
  4. Sydney Medical School, University of Sydney, Camperdown, Australia
