
Canadian Journal of Public Health, Volume 100, Issue 1, pp I8–I14

What Is Population Health Intervention Research?

  • Penelope Hawe
  • Louise Potvin

Abstract

Population-level health interventions are policies or programs that shift the distribution of health risk by addressing the underlying social, economic and environmental conditions. These interventions may be designed and developed within the health sector, but they are more likely to originate in other sectors, such as education, housing or employment. Population health intervention research attempts to capture the value and differential effects of these interventions, the processes by which they bring about change and the contexts within which they work best. In health research, unhelpful distinctions maintained in the past between research and evaluation have retarded the development of knowledge and led to patchy evidence about policies and programs. Myths about what can and cannot be achieved within community-level intervention research have similarly held the field back. The pathway forward integrates systematic inquiry approaches from a variety of disciplines.

Key words

Evaluation; population health intervention research; evidence-based practice; intervention research; population health

Résumé

Population health interventions encompass the full range of actions intended to shift the distribution of health risks by targeting the social, economic and environmental conditions that shape that distribution. Taking the form of programs and policies, these interventions may originate in the health sector, but they are just as often led by other sectors, such as education, housing or employment. Population health intervention research aims to document the value and effects of these interventions, the processes through which change occurs and the conditions that favour those effects. Within health research, unhelpful distinctions between research and evaluation have delayed the development of knowledge about population health interventions and led to poor integration of research evidence into practice and into decisions about population health programs and policies. This article therefore debunks several pernicious myths about intervention research, notably concerning its costs, its aims and the belief that the communities concerned necessarily play only a marginal role in developing effective interventions. It also rejects as arbitrary and unjustified the traditional distinction between intervention research and evaluative research. Indeed, it shows that intervention research has everything to gain from a closer alignment with evaluative research and from integrating applied research methods drawn from a variety of disciplines.

Mots clés

Evaluation; population health intervention; evidence-based practice; intervention research; population health


Copyright information

© The Canadian Public Health Association 2009

Authors and Affiliations

  1. Population Health Intervention Research Centre, University of Calgary, Calgary, Canada
  2. Léa-Roback Research Centre on Social Health Inequality, University of Montreal, Canada
