Do Means of Program Delivery and Distributional Consequences Affect Policy Support? Experimental Evidence About the Sources of Citizens’ Policy Opinions

  • Original Paper
  • Published in: Political Behavior

Abstract

Recent scholarship argues that citizens’ support for specific government programs in the United States is affected by the means through which benefits are delivered as well as the distributional consequences of these policies. In this paper, we extend this literature in two ways through a series of novel survey experiments, deployed on a nationally representative sample. First, we directly examine differences in public support for prospective government spending when manipulating the mode of delivery. Second, we examine whether information about the distributional consequences of two existing government programs affects their popularity. We find that citizens have a preference for indirect spending that is independent of the distributional consequences of a given policy and identify mechanisms that may explain this view. Furthermore, we find little evidence that highlighting the regressive effects of current government programs significantly reduces the demand for their policy benefits. Our findings have implications for understanding the political calculus of policy design and the potential for public persuasion.

Fig. 1
Fig. 2

Notes

  1. See, Hacker and Pierson (2010), Mettler (2011), Campbell (2003), Faricy and Ellis (2013), Haselswerdt and Bartels (2015), among others.

  2. Still another possibility, not addressed in this paper, is that indirectly delivered (regressive) programs could become more popular among program beneficiaries if they were delivered directly because individuals would better understand that they benefited personally. Any change in popularity due to direct program delivery could indeed be smaller or larger than the potentially countervailing effect of the overall distribution of program costs and benefits becoming more transparent.

  3. See, Hetherington (2005), Hetherington and Rudolph (2008), Hacker and Pierson (2017), McCarty et al. (2016), among others.

  4. It is important to note that indirect delivery of government benefits can also be accomplished by using private sector intermediaries. For example, in the case of Medicare, the national health insurance system for seniors, the federal government contracts services and the monitoring of those services to private health care providers. In an extensive examination of Medicare, Morgan and Campbell (2011) argue that policymakers “delegate” governance due to citizens’ divergent preferences for both small government and greater social provision.

  5. See, Applebaum (2001), Gilens (1999), Henry et al. (2004), among others.

  6. See, e.g., Arnold (1992), Gilens (2009), Jacoby (1994), Howard (2007), Morgan and Campbell (2011), and Ellis and Stimson (2012).

  7. For a general discussion of the potential biases in within-subject designs, see, Charness et al. (2012).

  8. For a general discussion of the challenges interpreting the effects of bundled treatments, see, Dunning (2012, pp. 300–302).

  9. For information regarding Lucid’s specific recruitment method, see: https://luc.id/wp-content/uploads/2017/07/IRB-Methodology_.Lucid_.pdf.

  10. This household-income targeted subsample is constructed due to the requirements of an unrelated study with questions that are fielded on this survey.

  11. We use a raking procedure to match the marginal distributions of our sample to the ACS margins of age, education, household income, gender, and minority identification. See, Battaglia et al. (2009) for an overview of raking as well as practical considerations for implementing the procedure.
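The raking procedure described in this note can be sketched as iterative proportional fitting over the sample margins. In the sketch below, the margin names, categories, and target shares are illustrative placeholders, not the actual ACS targets used in the paper:

```python
import numpy as np

def rake(weights, groups, targets, n_iter=50):
    """Iteratively scale unit weights so each margin matches its target share.

    groups:  margin name -> array of category labels, one per respondent
    targets: margin name -> {category: target population share}
    """
    w = weights.astype(float).copy()
    for _ in range(n_iter):
        for name, labels in groups.items():
            total = w.sum()
            for cat, share in targets[name].items():
                cur = w[labels == cat].sum()
                if cur > 0:
                    # scale this category so its weight share hits the target
                    w[labels == cat] *= (share * total) / cur
    return w / w.mean()  # normalize weights to mean 1

# Toy sample of six respondents with two hypothetical margins.
age = np.array(["young", "young", "old", "old", "old", "old"])
edu = np.array(["hs", "ba", "hs", "ba", "ba", "ba"])
w = rake(np.ones(6),
         {"age": age, "edu": edu},
         {"age": {"young": 0.5, "old": 0.5},
          "edu": {"hs": 0.4, "ba": 0.6}})
```

After convergence, the weighted share of each category matches its target (here, 50% young and 40% high-school), which is the sense in which the weighted sample reproduces the population margins.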

  12. For a discussion on estimating and interpreting survey experimental treatment effects with survey weights, see Miratrix et al. (2018) and Franco et al. (2017).
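As a minimal illustration of the kind of weighted estimator discussed in this note, a survey-weighted difference in means can be computed as below; the outcomes, assignments, and weights are made up for the example and are not the paper's data or exact estimator:

```python
import numpy as np

def weighted_sate(outcome, treated, weights):
    """Weighted mean outcome among treated units minus among control units."""
    t = treated == 1
    mu_t = np.average(outcome[t], weights=weights[t])
    mu_c = np.average(outcome[~t], weights=weights[~t])
    return mu_t - mu_c

# Hypothetical responses (1 = supports the policy), assignments, and weights.
y = np.array([1.0, 0.0, 1.0, 1.0, 0.0, 0.0])
d = np.array([1, 1, 1, 0, 0, 0])
w = np.array([1.0, 2.0, 1.0, 1.0, 1.0, 2.0])
est = weighted_sate(y, d, w)  # 0.5 - 0.25 = 0.25
```

With uniform weights this reduces to the ordinary difference in means, which is why weighted and unweighted estimates (as in notes 21–22) can diverge when the weights are far from uniform.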

  13. Analyses of the Lucid sample and SSI sample (discussed in Experiment 2) reflect decisions made prior to data collection and that were documented in two pre-analysis plans registered with Evidence in Governance and Politics (EGAP). The pre-analysis plans are filed jointly under EGAP ID 20170725AA. Materials to replicate the analyses presented in this article are available here: https://doi.org/10.7910/DVN/XYYAQC.

  14. WIC is a federally funded program that targets low-income pregnant and postpartum women, infants, and children up to 5 years of age. Depending on the State agency administering the benefit, participants receive WIC benefits through checks, vouchers, or as electronic benefit transfers (see, https://www.fns.usda.gov/wic/). A typical example of an existing US Department of Labor job training policy is the Trade Adjustment Assistance Program (TAA). The TAA targets workers who have lost their jobs or whose jobs are threatened due to trade-related circumstances. Eligible individuals receive some combination of classroom training, on-the-job training, income support, and reimbursement for job-search-related expenses (see, https://www.doleta.gov/tradeact/docs/program_brochure2014.pdf).

  15. Note that this method of transfer is similar to the Electronic Benefit Transfer (EBT) cards currently used by many states.

  16. Note, however, that certain benefits under TAA are transferred to eligible individuals as a reimbursement for a qualified expense.

  17. Given that exposing respondents to both direct and indirect framings of the policy could generate biases in subsequent responses, we present this question last, and only in regard to the final policy that is shown. For example, a respondent assigned to view the nutrition policy followed by the job training policy is only asked this question about the job training policy at the end of the corresponding question-block.

  18. For the job training policy, respondents are asked about “Citizen/business misuse and abuse” so as to include any funds diverted by educational institutions.

  19. HMID benefit figures are based on authors’ calculations using the following data: Estimates of Federal Tax Expenditures for Fiscal Years 2015–2019 (Joint Committee on Taxation), Table 1.2. All Returns: Adjusted Gross Income, Exemptions, Deductions, and Tax Items, by Size of Adjusted Gross Income and by Marital Status, Tax Year 2014 (IRS). Both figures are conditional on take-up, but higher income households are much more likely to take up the policy.

  20. UI benefit figures are based on authors’ calculations using US Department of Labor data: Significant Provisions of State Unemployment Insurance Laws, Effective January 2014.

  21. We find that the treatment effect on preferences to modify the HMID is statistically significant (\(\alpha = 0.05\)) when we estimate the SATE without using weights. See, rows 3–4 of Online Appendix Figure B.1c.

  22. We do find, however, that the unweighted effect of the regressive frame is to increase support for lower-income individuals paying a lower UI premium (\({\widehat{SATE}} = 0.08\), \(p < 0.05\)). See, row 5 of Online Appendix Figure B.1d.

  23. For a comparison between MTurk, student convenience, and ANES samples, see Berinsky et al. (2012).

  24. The complete wordings of the distributional treatments employed in the MTurk sample are shown in Online Appendix Figure B.8.

  25. The complete wordings of the distributional treatments employed in the SSI sample are shown in Online Appendix Table B.5.

  26. The experiment conducted on MTurk was intended as a pilot study, thus we did not pre-register a plan for analysis.

  27. The differences in results between samples could be the result of a number of factors. We call attention to two. First, the type of respondent in each sample: MTurk respondents complete the survey as “Human Intelligence Tasks” (HITs) in an online labor market, whereas the average SSI respondent is usually compensated to take part in marketing research. Second, as the distributional treatments used in the MTurk sample are quite different from those used in the SSI sample, the information about HMID beneficiaries presented to MTurk respondents could interact with the direct/indirect treatment in ways that differ from what we observe with SSI respondents.

  28. We note that even when restricting our attention to the regressive framing of the HMID, respondents in both the direct and indirect treatment conditions favor, on average, increasing spending on the program.

  29. For discussions on the effects of partisanship on different apolitical activities, see, Margolis and Sances (2016), Gerber and Huber (2009), Oliver et al. (2015), among others.

  30. We do find, however, that the unweighted effect of the regressive frame is to increase support among Republicans for lower-income individuals paying a lower UI premium (\(\widehat{SATE_{Rep}} = 0.12\), \(p < 0.05\)). See, row 5 of Online Appendix Figure A.4d.

  31. A potential real-world example can be found in recent changes to the HMID that lower the limit on qualifying mortgages from $1,000,000 to $750,000. See: http://wapo.st/2zMo5QV?tid=ss_mail&utm_term=.e57003cdcdc7.

References

  • Applebaum, L. D. (2001). The influence of perceived deservingness on policy decisions regarding aid to the poor. Political Psychology, 22(3), 419–442.

  • Arnold, R. D. (1992). The logic of congressional action. New Haven: Yale University Press.

  • Battaglia, M. P., Hoaglin, D. C., & Frankel, M. R. (2009). Practical considerations in raking survey data. Survey Practice, 2(5), 1–10.

  • Benjamini, Y., & Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 57(1), 289–300.

  • Berinsky, A. J., Huber, G. A., & Lenz, G. S. (2012). Evaluating online labor markets for experimental research: Amazon.com’s Mechanical Turk. Political Analysis, 20(3), 351–368.

  • Campbell, A. L. (2003). How policies make citizens: Senior political activism and the American welfare state. Princeton studies in American politics: Historical, international and comparative perspectives. Princeton: Princeton University Press.

  • Charness, G., Gneezy, U., & Kuhn, M. A. (2012). Experimental methods: Between-subject and within-subject design. Journal of Economic Behavior and Organization, 81(1), 1–8.

  • Dunning, T. (2012). Natural experiments in the social sciences: A design-based approach. Strategies for social inquiry. Cambridge: Cambridge University Press.

  • Ellis, C., & Stimson, J. A. (2012). Ideology in America. Cambridge: Cambridge University Press.

  • Faricy, C., & Ellis, C. (2013). Public attitudes toward social spending in the United States: The differences between direct spending and tax expenditures. Political Behavior, 36(1), 53–76.

  • Franco, A., Malhotra, N., Simonovits, G., & Zigerell, L. J. (2017). Developing standards for post-hoc weighting in population-based survey experiments. Journal of Experimental Political Science, 4(2), 161–172.

  • Gerber, A. S., & Huber, G. A. (2009). Partisanship and economic behavior: Do partisan differences in economic forecasts predict real economic behavior? American Political Science Review, 103(3), 407–426.

  • Gilens, M. (1999). Why Americans hate welfare. Race, media, and the politics of antipoverty policy. Chicago: University of Chicago Press.

  • Gilens, M. (2009). Preference gaps and inequality in representation. Political Science & Politics, 42(2), 335–341.

  • Hacker, J. S., & Pierson, P. (2010). Winner-take-all politics. How Washington made the rich richer-and turned its back on the middle class. New York: Simon and Schuster.

  • Hacker, J. S., & Pierson, P. (2017). American amnesia. How the war on government led us to forget what made America prosper. New York: Simon and Schuster.

  • Haselswerdt, J., & Bartels, B. L. (2015). Public opinion, policy tools, and the status quo: Evidence from a survey experiment. Political Research Quarterly, 68(3), 607–621.

  • Henry, P. J., Reyna, C., & Weiner, B. (2004). Hate welfare but help the poor: How the attributional content of stereotypes explains the paradox of reactions to the destitute in America. Journal of Applied Social Psychology, 34(1), 34–58.

  • Hetherington, M. J. (2005). Why trust matters. Declining political trust and the demise of American liberalism. Princeton: Princeton University Press.

  • Hetherington, M. J., & Rudolph, T. J. (2008). Priming, performance, and the dynamics of political trust. The Journal of Politics, 70(2), 498–512.

  • Howard, C. (2007). The welfare state nobody knows: Debunking myths about U.S. social policy. Princeton: Princeton University Press.

  • Jacoby, W. G. (1994). Public attitudes toward government spending. American Journal of Political Science, 38(2), 336–361.

  • Kuziemko, I., Norton, M. I., Saez, E., & Stantcheva, S. (2015). How elastic are preferences for redistribution? Evidence from randomized survey experiments. The American Economic Review, 105(4), 1478–1508.

  • Margolis, M. F., & Sances, M. W. (2016). Partisan differences in nonpartisan activity: The case of charitable giving. Political Behavior, 39(4), 839–864.

  • McCarty, N., Poole, K. T., & Rosenthal, H. (2016). Polarized America. The dance of ideology and unequal riches. Cambridge: MIT Press.

  • Mettler, S. (2011). The submerged state: How invisible government policies undermine American democracy. Chicago: University of Chicago Press.

  • Miratrix, L. W., Sekhon, J. S., Theodoridis, A. G., & Campos, L. F. (2018). Worth weighting? How to think about and use weights in survey experiments. Political Analysis, 26(3), 275–291.

  • Morgan, K. J., & Campbell, A. L. (2011). The delegated welfare state: Medicare, markets, and the governance of social policy. Studies in postwar American political development. Oxford: Oxford University Press.

  • Oliver, J. E., Wood, T., & Bass, A. (2015). Liberellas versus konservatives: Social status, ideology, and birth names in the United States. Political Behavior, 38(1), 1–27.

Acknowledgements

We thank Sarah Anzia, Christopher Berry, Natália S. Bueno, Alan Gerber, Jacob Hacker, John Henderson, Reuben Kline, Nolan McCarty, Patrick Tucker, Ebonya Washington, the three anonymous reviewers of this article and seminar participants at Yale University, the 2018 Emory University Conference on Institutions and Law Making, the 2018 New York University CESS Experimental Political Science Conference, and the 2017 American Political Science Association annual meeting for helpful comments and advice.

Author information

Corresponding author

Correspondence to Gregory A. Huber.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Electronic supplementary material 1 (PDF 416 kb)

About this article

Cite this article

Ashok, V.L., Huber, G.A. Do Means of Program Delivery and Distributional Consequences Affect Policy Support? Experimental Evidence About the Sources of Citizens’ Policy Opinions. Polit Behav 42, 1097–1118 (2020). https://doi.org/10.1007/s11109-019-09534-z

