Campaign Principal-Agent Problems: Volunteers as Faithful and Representative Agents

Original Paper · Political Behavior

Abstract

Volunteer-based voter contact presents multiple potential principal-agent problems for political campaigns. Conflicting potential solutions to these principal-agent problems generate two opposing expectations about campaigns’ preferences for ideological types of volunteers. Concerns about volunteers substituting their own ideological messages for the moderate and noncommittal ones campaigns prefer should make moderate volunteers more desirable; concerns about maximizing volunteer work-hours should lead to preferences for volunteers whose ideology matches the candidate’s. Using interviews with campaign operatives, a conjoint experiment, and a correspondence experiment, we show campaigns prefer volunteers whose views align with the candidate – interpreted by campaign operatives as a signal of likely enthusiasm and dedication – rather than moderate volunteers. However, even without resource constraints, these preferences are weak and fade in the presence of stronger indicators of commitment. They are absent in real-world volunteer recruitment. Overall, campaigns are more concerned with volunteers shirking responsibilities than they are with volunteers going off-message.


Data Availability

Replication data for this project is available at https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/VNUJM8.

Notes

  1. Previous work examining the 2012 Obama campaign has even suggested campaigns “use recruitment to…offset the principal-agent problem,” as there were only minor ideological differences (neither substantively nor statistically significant) between Obama volunteers who were “recruited by the campaign” and volunteers who had “asked to volunteer” (Enos and Hersh 2015, 264–66). That work nonetheless concluded the Obama campaign’s recruitment of moderate volunteers was ineffective (264), and it does not provide clear reasons why there is no substantive difference between recruited and non-recruited volunteers: were campaigns unconcerned about ideological misrepresentation and instead prioritized solving other principal-agent problems, leading them to recruit more ideologically extreme volunteers, or did they prefer moderate volunteers but face constraints in their recruitment efforts?

  2. In addition, if the main use of volunteers is for turnout, moderate volunteers might be less effective at turning out supporters whose views are more ideologically extreme. Although the prioritization of mobilization over persuasion (or vice versa) could be a result of the type of volunteers that campaigns have available to them, our evidence suggests that campaigns are skeptical about the effectiveness of door-to-door persuasion (see interviews below for more details). In line with this view, research shows that even the most effective canvassing has little to no effect on persuasion (Kalla and Broockman 2018) but has substantively large effects on turnout (Green and Gerber 2019). However, we find no evidence that campaigns consider volunteer ideology with the explicit purpose of being more effective at turning out supporters.

  3. It is important to emphasize that we do not dispute the potential for principal-agent problems to arise on ideological dimensions and affect the effectiveness of messaging to persuadable voters. Campaigns, however, do not appear to prioritize these concerns.

  4. The research was approved by IRBs at Florida State University and Brigham Young University-Idaho.

  5. Indeed, voters use information about a candidate’s supporters as a heuristic for the candidate’s positions and likely behavior in elected office (Popkin 1994, Chapter 3).

  6. However, if campaigns aim to use volunteers more for turnout than for persuasion, volunteers with more extreme ideologies might be more effective at turning out supporters. As we note further on, campaign practitioners view direct voter contact as inefficient for persuasion because they perceive persuasion as requiring repetition, which is hard to achieve through direct voter contact. Previous work has also shown substantive effects of volunteer voter turnout efforts (Green and Gerber 2019) and limited persuasion effects from direct voter contact (Kalla and Broockman 2018).

  7. The only examination of a campaign’s perspective we are aware of is Maisel (1982), who tangentially discusses the principal-agent problems of shirking and misaligned goals that he dealt with in his own 1978 congressional primary campaign in Maine.

  8. When asked specifically whether there were differences in the volunteer recruitment or deployment processes between presidential and congressional campaigns, one GOP operative with experience in presidential campaigns responded, “No. Categorically no. There isn’t a difference in who presidential campaigns and congressional campaigns want to recruit to volunteer or how they think about deploying their volunteers. There’s just more of them.” A Democratic operative said nearly the same thing, indicating that preferences for volunteers were the same “whether it’s a school board race or the Biden campaign.”

  9. One campaign manager on a long-shot campaign even indicated he “would have loved to have been misrepresented because it would (provide a) chance to go on the media and clarify.”

  10. Although message consistency was an aspect of volunteer monitoring, much of this monitoring was done to ensure, as one operative put it, that “people are doing the work that they say they’re doing.”

  11. In contrast, television advertising is repetitive and largely used for persuasion (Sides et al. 2022).

  12. Three campaigns responded twice.

  13. Limited survey roll-off meant 192 individuals assessed 1,150 volunteer profiles in the conjoint experiment.

  14. We estimate models with a forced-choice operationalization: a binary variable indicating whether the respondent selected a volunteer with that attribute/demographic profile. Because the outcome is rescaled to a [0,1] interval, we interpret estimates as the probability of taking the action in question (see the illustrative sketch following these notes).

  15. As noted previously, the online appendix contains three additional models estimated on a more limited sample of viable primary candidates only, with viability measured by 1) the presence of a campaign Twitter account, 2) a minimum of $50,000 in fundraising, and 3) primary success. Those models produce similar results.

  16. Roughly 75% of Enos and Hersh’s (2015) sample of volunteers come from Democratic volunteers for presidential campaigns. We assume, as they do, that the trends they show apply to congressional campaign volunteers. We do not have similar data on Republican volunteers. As such, we do not know what type of volunteer might be under-represented relative to Republicans’ target voter population and thus, theoretically, in higher demand.

  17. Through their comments at the end of the survey, some respondents appeared to guess (incorrectly) that we were primarily interested in race and gender. Indeed, our results from the correspondence experiment show Democrats actually are more responsive to potential volunteers who are men. While conjoint experiments are supposed to allow respondents to reveal biases (Hainmueller, Hopkins, and Yamamoto 2014), the small number of characteristics may have inadvertently led to demand effects. However, other research has suggested this should not have any effect on outcomes (Mummolo and Peterson 2019).

  18. We also tested the effects of the respondent’s ideology and found similar results, which are available in the online appendix. The Pearson correlation coefficient between the respondent’s and the candidate’s ideology, when both are scaled for ideological strength, is .70 (a toy illustration of this scaling appears after these notes).

  19. Some of the 1,976 campaigns for which we had email addresses were excluded because their contact information duplicated that of another campaign (often because a shared consultant acted as campaign manager or because the candidate ran in two separate races using the same email contact information), which caused us to accidentally send two emails.

  20. For the results presented here, we estimate ideal points using the tweetscores package, which infers partisan ideal points based on which elite accounts a user follows (Barberá 2015). Analyses using an alternate method based on users’ full follow profiles are included in the online appendix and produce substantively similar results (a toy sketch of the follow-based intuition appears after these notes).

  21. We use official campaign Twitter accounts. When the candidate did not have an official campaign account, we used their personal Twitter account if available.

  22. In addition to candidate extremity, we also considered the potentially moderating effects of candidate party affiliation, the office the candidate is seeking, incumbency status, and prospective volunteer gender. None were important for predicting conditional average treatment effects (an illustrative moderation specification appears after these notes).
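
As a concrete illustration of the operationalization in note 14, here is a minimal sketch that estimates attribute effects from forced-choice conjoint data with a linear probability model and respondent-clustered standard errors, in the spirit of the Hainmueller, Hopkins, and Yamamoto (2014) approach cited above. The file name, column names, and attribute list are hypothetical placeholders rather than the study’s actual variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format conjoint data: one row per volunteer profile shown,
# with 'chosen' = 1 if the respondent picked that profile in the forced choice.
df = pd.read_csv("conjoint_profiles.csv")  # placeholder file name

# Linear probability model of the binary forced choice on profile attributes;
# with randomized attributes, coefficients approximate average marginal
# component effects and read as changes in the probability of being selected.
model = smf.ols("chosen ~ C(ideology) + C(gender) + C(experience)", data=df)

# Cluster standard errors by respondent, since each respondent evaluates
# several profiles.
results = model.fit(cov_type="cluster", cov_kwds={"groups": df["respondent_id"]})
print(results.summary())
```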
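
For note 18, "scaled for ideological strength" can be read as folding each ideology score around the scale midpoint so that extremity rather than direction is compared. The toy sketch below shows one such calculation; the 7-point coding, the folding rule, and the example values are assumptions made only for illustration.

```python
import numpy as np
from scipy.stats import pearsonr

# Invented 7-point ideology scores (1 = very liberal, 4 = moderate,
# 7 = very conservative) for respondents and their candidates.
respondent_ideology = np.array([1, 2, 6, 7, 4, 3, 5, 2, 6, 1])
candidate_ideology = np.array([2, 1, 7, 6, 4, 2, 6, 3, 5, 2])

# "Ideological strength": distance from the scale midpoint, so very liberal
# and very conservative scores both count as ideologically strong.
respondent_strength = np.abs(respondent_ideology - 4)
candidate_strength = np.abs(candidate_ideology - 4)

r, p = pearsonr(respondent_strength, candidate_strength)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```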
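
The tweetscores package referenced in note 20 is an R implementation of Barberá's (2015) Bayesian model. The sketch below only illustrates the underlying intuition, namely that users who follow similar sets of elite accounts receive similar scores, using a simple correspondence-analysis-style scaling of a toy user-by-elite follow matrix; it is not the package's estimator, and the data are invented.

```python
import numpy as np

# Toy follow matrix: rows are prospective volunteers, columns are political
# elite accounts, 1 = the user follows that account. Values are invented.
F = np.array([
    [1, 1, 0, 0, 1, 0],
    [1, 0, 1, 0, 0, 0],
    [0, 0, 1, 1, 0, 1],
    [0, 1, 0, 1, 1, 1],
    [1, 1, 1, 0, 0, 0],
], dtype=float)

# Correspondence-analysis-style scaling: standardize deviations from
# independence, then take the leading dimension as a one-dimensional score.
P = F / F.sum()
row_mass = P.sum(axis=1, keepdims=True)
col_mass = P.sum(axis=0, keepdims=True)
S = (P - row_mass @ col_mass) / np.sqrt(row_mass @ col_mass)

U, d, Vt = np.linalg.svd(S, full_matrices=False)

# Row (user) coordinates on the first dimension serve as rough ideal points;
# sign and scale are arbitrary until anchored to known elite accounts.
user_scores = (U[:, 0] * d[0]) / np.sqrt(row_mass.ravel())
print(np.round(user_scores, 2))
```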
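
The moderator checks described in note 22 could be implemented in several ways. One common approach, sketched below under assumed variable names, interacts the treatment indicator with a candidate-level characteristic and asks whether the interaction term is distinguishable from zero; this is an illustrative specification, not necessarily the estimator used in the paper.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical correspondence-experiment data: one row per emailed campaign,
# 'replied' = 1 if the campaign responded, 'treat_moderate' = 1 if the email
# came from the ideologically moderate volunteer condition.
df = pd.read_csv("correspondence_outcomes.csv")  # placeholder file name

# Interact the treatment with a candidate-level moderator (here, incumbency);
# the interaction coefficient shows whether the treatment effect differs
# across moderator levels.
model = smf.ols("replied ~ treat_moderate * incumbent", data=df)
results = model.fit(cov_type="HC2")  # heteroskedasticity-robust SEs
print(results.summary())
```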

References

  • Bansak, K., Hainmueller, J., Hopkins, D. J., & Yamamoto, T. (2020). Using conjoint experiments to analyze elections: The essential role of the average marginal component effect (AMCE). Stanford University Working Paper.

  • Barberá, P. (2015). Birds of the same feather tweet together: Bayesian ideal point estimation using Twitter data. Political Analysis, 23(1), 76–91.

  • Bawn, K., Cohen, M., Karol, D., Masket, S. E., Noel, H., & Zaller, J. (2012). A theory of political parties: Groups, policy demands and nominations in American politics. Perspectives on Politics, 10(3), 571–597.

  • Garcia Bedolla, L., & Michelson, M. R. (2012). Mobilizing Inclusion: Transforming the Electorate through Get-Out-the-Vote Campaigns. Yale University Press.

  • Berinsky, A. J. (1999). The two faces of public opinion. American Journal of Political Science, 43(4), 1209–1230.

  • Bleich, E., & Pekkanen, R. (2013). How to report interview data. In L. Mosley (Ed.), Interview Research in Political Science (pp. 84–105). Cornell University Press.

  • Butler, D. M., & Crabtree, C. (2021). Audit studies in political science. In D. P. Green & J. Druckman (Eds.), Advances in Experimental Political Science (pp. 42–55). Cambridge University Press.

  • Butler, D. M., Karpowitz, C. F., & Pope, J. C. (2012). A field experiment on legislators’ home styles: Service versus policy. Journal of Politics, 74(2), 474–486.

  • Collier, D., Brady, H. E., & Seawright, J. (2010). A sea change in political methodology. In H. E. Brady & D. Collier (Eds.), Rethinking Social Inquiry (pp. 1–11). Rowman & Littlefield.

  • Costa, M. (2017). How responsive are political elites? A meta-analysis of experiments on public officials. Journal of Experimental Political Science, 4(3), 241–254.

  • Druckman, J. N., Kifer, M. J., & Parkin, M. (2009). Campaign communications in U.S. congressional elections. American Political Science Review, 103(3), 343–366.

  • Enos, R. D., & Hersh, E. D. (2015). Party activists as campaign advertisers: The ground campaign as a principal-agent problem. American Political Science Review, 109(2), 252–278.

  • Green, D. P., & Gerber, A. S. (2019). Get Out the Vote: How to Increase Voter Turnout (4th ed.). Brookings Institution Press.

  • Grossmann, M. (2012). What (or who) makes campaigns negative? American Review of Politics, 33, 1–22.

  • Hainmueller, J., Hopkins, D. J., & Yamamoto, T. (2014). Causal inference in conjoint analysis: Understanding multidimensional choices via stated preference experiments. Political Analysis, 22, 1–30.

  • Hassell, H. J. G. (2020). It’s who’s on the inside that counts: Campaign practitioner personality and campaign electoral integrity. Political Behavior, 42(4), 1119.

  • Hassell, H. J. G., Holbein, J. B., & Miles, M. R. (2020). There is no liberal media bias in the news political journalists choose to cover. Science Advances. https://doi.org/10.1126/sciadv.aay9344

  • Holbein, J. B., & Carnes, N. (2019). Do public officials exhibit social class biases when they handle casework? Evidence from multiple correspondence experiments. PLoS ONE, 14, e0214244.

  • Issenberg, S. (2012). The Victory Lab: The Secret Science of Winning Campaigns. Crown.

  • Kalla, J. L., & Broockman, D. E. (2018). The minimal persuasive effects of campaign contact in general elections: Evidence from 49 field experiments. American Political Science Review, 112(1), 148–166.

  • Klein, E. (2021). David Shor is telling Democrats what they don’t want to hear. New York Times, October 8. https://www.nytimes.com/2021/10/08/opinion/democrats-david-shor-education-polarization.html

  • LaPiere, R. T. (1934). Attitudes vs. actions. Social Forces, 13(2), 230–237.

  • Leeper, T. J., Hobolt, S. B., & Tilley, J. (2020). Measuring subgroup preferences in conjoint experiments. Political Analysis, 28(2), 207–221.

  • Lynch, J. F. (2013). Aligning sampling strategies with analytical goals. In L. Mosley (Ed.), Interview Research in Political Science (pp. 31–44). Cornell University Press.

  • Maisel, L. S. (1982). From Obscurity to Oblivion: Running in the Congressional Primary. University of Tennessee Press.

  • Martin, G. J., & Peskowitz, Z. (2018). Agency problems in political campaigns: Media buying and consulting. American Political Science Review, 112(2), 231–248.

  • McGuire, B. (2019). Scaling the Field Program in Modern Political Campaigns. Harvard University Kennedy School of Government.

  • Miao, H. (2021). Democrats’ historic Georgia Senate wins were years in the making thanks to local grassroots. CNBC, January 9.

  • Mosley, L. (2013). Just talk to people? Interviews in contemporary political science. In L. Mosley (Ed.), Interview Research in Political Science (pp. 1–28). Cornell University Press.

  • Mummolo, J., & Peterson, E. (2019). Demand effects in survey experiments: An empirical assessment. American Political Science Review, 113(2), 517–529.

  • Panagopoulos, C. (2016). All about that base: Changing campaign strategies in U.S. presidential elections. Party Politics, 22(2), 179–190.

  • Popkin, S. L. (1994). The Reasoning Voter. University of Chicago Press.

  • Sides, J. (2006). The origins of campaign agendas. British Journal of Political Science, 36(3), 407–436.

  • Sides, J., Vavreck, L., & Warshaw, C. (2022). The effects of television advertising in United States elections. American Political Science Review, 116(2), 702–718.

  • Ward, I. (2021). The Democrats’ privileged college-kid problem. Politico, October 9. https://www.politico.com/news/magazine/2021/10/09/david-shor-democrats-privileged-college-kid-problem-514992

  • Weller, N., & Barnes, J. (2014). Finding Pathways: Mixed-Method Research for Studying Causal Mechanisms. Cambridge University Press.

  • Wilson, J. Q. (1974). Political Organizations. Basic Books.

Author information

Corresponding author

Correspondence to Hans J. G. Hassell.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Acknowledgements

We are grateful to the many campaign practitioners who provided their insights into how campaigns use volunteers and the types of volunteers they want to recruit. A previous version of this paper was presented at the 2021 APSA Conference. We are also grateful to Ryan Enos, Eitan Hersh, and Kevin Reuning for their comments and suggestions, and to Quintin Beazer, Daniel Butler, and Adam Dynes for the conversations at SPSA in Puerto Rico in 2020 that helped shape this project. All errors, of course, remain our own. Replication data for this project is available at https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/VNUJM8.

Supplementary Information

Below is the link to the electronic supplementary material.

Supplementary file1 (DOCX 3938 KB)

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Chewning, T.K., Green, J., Hassell, H.J.G. et al. Campaign Principal-Agent Problems: Volunteers as Faithful and Representative Agents. Polit Behav 46, 405–426 (2024). https://doi.org/10.1007/s11109-022-09836-9
