The “Dependent Variable Problem”: How Do We Know What Caused Desired Change?
Governments typically introduce new programs because they want to achieve some sort of positive change. While policy makers may have a clear idea of what they want to achieve, there is often less certainty about the most effective way of achieving desired policy goals or even whether achieving a policy goal is possible given current financial constraints and/or institutional arrangements. Even when desired policy goals are achieved, it may be difficult to isolate which factor (or combination of factors) had the most significant impact. This difficulty is known as the dependent variable problem.
After introducing and defining the dependent variable problem, the chapter identifies the sorts of research and policy questions that are explicitly concerned with causality and the different methods or techniques that policy makers can use to increase their understanding of which factor (or combination of factors) is facilitating, or impeding, desired change. The chapter discusses the strengths and limitations of randomized controlled trials, in which outcomes are compared between a group that received an intervention and a group that did not, as well as quasi-experimental approaches and cross-case comparisons of sequential events. The chapter makes the point that, in the real world, causality is often complex, nonlinear, and/or influenced by macro-level factors beyond the control of the policy makers responsible for a particular policy or program. For this reason, policy makers need to think carefully about the particular combination of techniques capable of generating the sort of information that will help them understand causal linkages.
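The logic of the randomized controlled trial described above can be sketched in a few lines of code. The sketch below is a hypothetical illustration, not taken from the chapter: it simulates a population, randomly assigns each unit to an intervention or control group, and estimates the intervention's effect as the difference in mean outcomes. The population size, baseline distribution, and the assumed "true effect" of 2.0 are all invented for the example.

```python
import random
import statistics

random.seed(42)

# Hypothetical setup: each unit has a baseline outcome drawn from a
# normal distribution; the intervention adds a true effect of 2.0.
TRUE_EFFECT = 2.0
population = [random.gauss(10.0, 3.0) for _ in range(10_000)]

# Random assignment: each unit has an equal chance of receiving the
# intervention, so the two groups are alike on average in every respect
# except exposure to the intervention.
treatment, control = [], []
for baseline in population:
    if random.random() < 0.5:
        treatment.append(baseline + TRUE_EFFECT)  # outcome with intervention
    else:
        control.append(baseline)                  # outcome without intervention

# Because assignment is random, the difference in mean outcomes is an
# unbiased estimate of the intervention's causal effect.
estimated_effect = statistics.mean(treatment) - statistics.mean(control)
print(round(estimated_effect, 2))
```

In this idealized setting the estimate lands close to the true effect because randomization removes confounding. The chapter's caution applies to everything the sketch leaves out: in real programs, effects are often nonlinear, mediated by institutions, or confounded by macro-level factors that no assignment mechanism can hold constant.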
Keywords: Causal inference · Randomized controlled trials · Cross-case comparison · Sequential change · Regulatory interventions