International Journal of Public Health, Volume 63, Issue 5, pp 555–556

Making theory from knowledge syntheses useful for public health

  • Geoff Wong

For some researchers, theory is a dirty word, akin to nothing but a guess. For others, there is not enough theory. So, how do we move forward from this situation? To start, we need to define what a theory is. There are many definitions of theory; a simple one is that a “… theory is an attempt to organize the facts—some ‘proven’, some more conjectural—within a domain of inquiry into a structurally coherent system” (Klee 1997). Theories are more than just guesses, because they have to be at least partially supported by facts or data. The part of this definition that may worry some is the word conjectural, which may suggest that a theory is just a guess. But almost all of our theories are partial, and so to make advances we often need to conjecture. Theory forms an important part of all scientific endeavors, including the science of knowledge synthesis, translation, and exchange (KSTE).

What about the other accusation leveled at theory—that there is not enough? I would argue that there is more to this issue than mere quantity. What we need is more of the right kind of theory, and for it to be useful. We use theories to help us understand, explain and/or predict (to some extent) the phenomena that occur in any domain of inquiry. For example, why do many cigarette smokers continue to smoke despite knowledge of the harm cigarettes will cause them? To answer such questions, we need theory that is ‘testable’. In other words, a theory is of little value if it is not expressed in a way that permits us to collect data to confirm, refute or refine it. Theories with this property are middle-range in abstraction (Merton 1967). Going back to why smokers might continue to smoke, we could specify a theory that explains this continued behavior in the middle-range of abstraction. For example, we could conjecture that smokers continue to smoke because they tend to focus on the benefits they get from smoking today rather than thinking about the long-term harms of smoking. We are conjecturing that present bias is causing them to continue, and we have specified the theory in such a way that we could collect data to test it.

For theories to be useful, they should also be transferable. Stating the obvious, the most useful theories in public health research are those that provide explanations, understanding and/or ‘predictions’ that are transferable from one situation to another. Otherwise, we have theories that are merely local—true only of the situations from which the data have been gathered. But to claim that any theory is more than just local, we need an explicit rationale—i.e., explicit scientific reasons to underpin any such claims for the transferability of theories. To do so requires us to delve into an aspect of the philosophy of science—namely ontology. Put very simply, ontology is about the nature of reality, for example, describing and explaining how the world must work for science to be possible (Bhaskar 1978). More on this later.

Are there any research approaches that are able to address the issues about theory I have outlined above? Two approaches are worth considering—namely realist evaluation (Pawson and Tilley 1997) and realist review (Pawson 2006).

Realist review is a theory-driven approach to evidence synthesis. Data from documents (e.g., studies, other reviews, policy documents, etc.) are used to develop and test theory. Realist evaluation is a theory-driven approach to evaluation that uses data collected by the evaluators for theory development and testing. Both approaches are underpinned by an explicit philosophy of science—realism—that provides a scientific basis for claims about the transferability of any theories developed and tested (Pawson 2006, 2013; Pawson and Tilley 1997). Pawson and Tilley assert that outcomes happen because of hidden causal forces, called mechanisms (Astbury and Leeuw 2010), that are activated under certain contexts—summarized as context + mechanism = outcome (or C + M = O). Hidden causal forces are all around us—for example, gravity. Objects fall to the ground on earth because of the hidden force of gravity. If we change our context to outer space, when we let go of an object, it would not fall. Mechanisms also exist in the social world; an example would be present bias in smokers who continue to smoke.

Mechanisms enable us to make claims about the transferability of theories—we can use our theories of gravity on Earth and Pluto because the same mechanisms are in operation on both planets. The same applies to any mechanisms we might put forward for explaining why smokers continue to smoke—i.e., present bias is something that is common to most smokers. Finally, in realist reviews and evaluations, theories are deliberately expressed in the middle-range, with the preferred form being C + M = O. Put another way: in this context, this mechanism is triggered to cause this outcome. Specifying a theory in the middle-range enables us to check whether the theories we put forward are supported by data.

Realist reviews and evaluations are best suited for making sense of complex interventions where context is thought to influence outcomes, and for answering questions that ask some or all of: what works, for whom, in what contexts, to what extent, how, and why? Reporting and quality standards, and training materials, exist for realist reviews (Wong et al. 2014) and realist evaluations (Wong et al. 2017). Additional resources and a link to an active email listserv may be found on the RAMESES Projects website (RAMESES Projects 2018).

In summary, our knowledge of the world is based on theory, but we need the right kinds of theories for them to be useful. These theories should be expressed in the middle-range and have a clear scientific basis for why they are transferable from situation to situation. Realist review and realist evaluation are two approaches that can produce theories that meet these needs.



My thanks to Erica Di Ruggiero for her help and advice on earlier drafts of this editorial.


  1. Astbury B, Leeuw F (2010) Unpacking black boxes: mechanisms and theory building in evaluation. Am J Eval 31(3):363–381
  2. Bhaskar R (1978) A realist theory of science. Verso, London
  3. Klee R (1997) Introduction to the philosophy of science. Cutting nature at its seams. Oxford University Press, New York
  4. Merton R (1967) On theoretical sociology. Five essays, old and new. The Free Press, New York
  5. Pawson R (2006) Evidence-based policy: a realist perspective. Sage, London
  6. Pawson R (2013) The science of evaluation: a realist manifesto. Sage, London
  7. Pawson R, Tilley N (1997) Realistic evaluation. Sage, London
  8. RAMESES Projects (2018) Accessed 25 Feb 2018
  9. Wong G, Greenhalgh T, Westhorp G, Pawson R (2014) Development of methodological guidance, publication standards and training materials for realist and meta-narrative reviews: the RAMESES (Realist And Meta-narrative Evidence Syntheses—Evolving Standards) project. Health Serv Deliv Res 2(30)
  10. Wong G, Westhorp G, Greenhalgh J, Manzano A, Jagosh J, Greenhalgh T (2017) Quality and reporting standards, resources, training materials and information for realist evaluation: the RAMESES II project. Health Serv Deliv Res 5(28)

Copyright information

© Swiss School of Public Health (SSPH+) 2018

Authors and Affiliations

  1. Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK
