Finding the Way Home: The Dynamics of Partisan Support in Presidential Campaigns

  • Original Paper

Political Behavior

Abstract

The tendency for lukewarm partisans to “come home” is generally regarded as the chief dynamic of presidential campaigns, but little is known about what draws these voters closer to their party’s candidate. The pattern is often taken as prima facie evidence that campaigns activate partisanship, but there is little direct evidence that party identification (PID) exerts any greater influence on candidate preference late in the campaign than it had earlier. This study uses panel surveys from two elections to uncover the mechanisms that lead partisans home. It demonstrates that past research focused on the fall campaign has missed evidence for activation of PID, which occurs as the primary phase closes. It also demonstrates that under certain conditions activation of ideology plays just as important a role in bringing partisans home as activation of PID. These findings indicate that the process whereby partisans “come home” is multi-faceted and may have nearly as much to do with ideology as with party loyalty.

Notes

  1. I use the term predisposition simply as shorthand for an opinion or demographic trait that tends to incline a voter to favor a particular candidate. The particular predispositions I focus on below—party identification and ideology—are often considered to be “fundamental” to vote choice in the literature on campaigns (e.g., Finkel 1993; Gelman and King 1993; Johnston et al. 2004; Lenz 2012; Sides and Vavreck 2013; Vavreck 2009).

  2. An alternative set of explanations focuses on changes to the ingredients of candidate support rather than on changes to the weights assigned to these ingredients. These explanations were explored here as well. While there is evidence that changes in partisanship or ideology do predict subsequent changes in candidate support, these changes are too small to account for the overall growth in partisan support. See appendix B for more details.

  3. Hillygus and Shields (2008) look at a longer timeframe that includes the late spring but find no evidence of partisan activation. However, they test for partisan activation by comparing strong and weak partisans. Activation may work by increasing the influence of the direction of partisanship without increasing the influence of the strength of partisanship.

  4. Author’s calculations based on 2008 Wisconsin Advertising Project data.

  5. Knowledge Networks administered the survey to a nationally representative, probability-based panel of individuals. To promote completion of multiple waves, response incentives were provided to certain individuals deemed “rare respondents”: those under 30 years of age, those who did not complete high school, and those who are non-white. Beginning in the fifth wave of the survey, incentives were also provided to “late respondents”, i.e., those who failed to complete each of the first four waves within seven days of its initial fielding. Individuals in this group received an incentive if they completed subsequent waves before specified dates. The amount of the incentive per wave ranged from $5 to $10, depending on the wave and the combination of rare- and late-respondent characteristics. More information about the APYN Panel is available at http://www.knowledgenetworks.com/GANP/election2008/index.html.

  6. Attrition is a concern with any panel study. While a majority of respondents to this survey completed at least ten interviews, anywhere from 15 to 40 % of the baseline respondents are absent in a given re-interview wave. Many of these return in subsequent waves. It would be especially problematic if predispositions were less correlated with candidate support among those who drop out than among those who remain in the study. The reduction in the sample over the course of the study would then make it look as if the salience of these predispositions increases even when no such change actually occurs. The use of fixed effects partly alleviates this concern: the fixed effects ensure that inference is based on within-individual change, so the evidence for activation presented below reflects actual change in salience rather than a change in the sample. However, attrition remains a concern if the people who exit the study differ from those who stay in a way that would prevent similar activation among them. If so, then the results may not be generalizable to the population even though they are internally valid for respondents who remain in the panel. I use a discrete-time duration model to identify risk factors associated with attrition (Box-Steffensmeier and Jones 2004); a sketch of this type of model appears after these notes. Of particular concern here is whether variables in the activation models predict attrition. Ideology does not predict dropout. Except for the fact that independents are more likely to exit the sample, party identification does not predict dropout. Two demographics, gender and education, predict attrition. The percentage of females who complete the final wave (59.3 %) is very close to the percentage of males who do so (62.1 %). There is a larger difference across education: those with a college degree are more likely to complete the final wave (67.6 %) than those without one (57.6 %). In each case a majority still complete the final interview, and retention rates in the final wave for these groups remain close to the rates for others. Results of the duration model appear in appendix table A1.

  7. Another concern with panel data is that participation in the panel makes respondents different from individuals who are not part of the study. For example, panelists may learn from their experience in the panel, or perhaps pay more attention to the campaign as a result of participation. Such learning could artificially activate predispositions. The APYN Panel includes additional fresh cross-sectional samples in the third, sixth, and ninth interview waves. These cross-sectional samples can be compared to the panel sample to identify any difference in opinions or in the correlations between those opinions and candidate evaluations (without respondent fixed effects, because the fresh cross-sectional samples do not include multiple observations of the same individuals). The results from these tests appear in appendix table A2. Members of the fresh sample are less favorable toward McCain relative to Obama than the panelists in the third and ninth waves, but not in the sixth. Otherwise there are no statistically significant differences between the groups in party identification, opinions of President Bush, or ideology. Additionally, there are no statistically significant differences in the correlations between these opinions and evaluations of the candidates when comparing panelists and the fresh samples.

  8. Arguably, vote intention (i.e., whom the respondent plans to vote for in the November election) would provide a more direct measure of candidate preference, but there are three problems with using vote intention. First, binary or categorical outcomes require generalized linear models that do not permit the use of respondent fixed effects (Wooldridge 2002), which are important in this analysis for soaking up time-constant heterogeneity across individuals as a source of omitted variable bias. Second, the APYN Panel did not include head-to-head vote intention questions until the fourth wave of interviews. This is especially important for the present study: dropping the first three waves of the APYN Panel would prevent detection of any campaign activation that occurs in conjunction with the move from the primary to the general election phase. Third, the categorical outcome is subject to ceiling effects; that is, it hides any further increase in support for a candidate once a voter decides to vote for that candidate. Ceiling effects stack the deck against finding evidence for activation. There is good reason to take candidate evaluations as a reasonable proxy for vote intention: candidate evaluations are highly predictive of vote intention throughout the APYN Panel. Indeed, goodness-of-fit statistics improve over the course of the campaign when comparing a series of models predicting vote intention from candidate evaluations in each wave (a sketch of this wave-by-wave check appears after these notes). This suggests the evidence in this paper may be a conservative estimate of activation’s influence on votes.

  9. Excluding independents makes for a more conservative test of ideological activation. If independents are included, then any evidence of ideological activation could be interpreted as arising from independents only, with partisans relying instead on their party identification. By excluding independents, any evidence for ideological activation can only be attributed to partisans. Results are statistically and substantively similar when excluding leaners.

  10. Although multi-wave surveys were administered in other years as well, those studies are either locally representative (such as the Columbia School studies), fielded only during the late summer and fall (the 2000 and 2004 National Annenberg Election Surveys), or not yet publicly available (the 2012 Cooperative Campaign Analysis Project). The 2008 presidential election year had three additional nationally representative panel surveys with multiple waves administered throughout the election year (the American National Election Study, the National Annenberg Election Survey, and the Cooperative Campaign Analysis Project). The APYN Panel is used here because it has the largest number of waves.

  11. The full model results appear in column one of appendix table A3.

  12. It is worth remembering that these baseline estimates may themselves be contaminated by any reverse causation that occurs before the initial interview. In the present context, the magnitude of the change in Fig. 2 (which is not a function of reverse causation) relative to this initial association is of more interest. Estimates from the model with time-varying predispositions appear in column three of appendix table A3.

  13. The fixed effects control for all time-constant variables, observed or unobserved, but not for any effects produced by changes in the salience of those variables. Therefore, the activation of party identification in Fig. 2 could actually be the activation of any time-constant excluded variable positively correlated with party identification. To protect against such a misinterpretation, I estimate an activation model that also includes time interactions with a variety of observed time-constant variables including race, ethnicity, gender, education, and income (a sketch of the basic activation specification appears after these notes). Although these models cannot control for the activation of unobserved variables, the fact that the results in Fig. 2 do not change with the inclusion of these additional interactions is reassuring.

  14. The analysis in this and the following paragraph is confirmed by a series of difference-in-difference models testing the influence of partisanship (and primary candidate support) across pairs of adjacent waves; a sketch of one such model appears after these notes. Results from these models appear in appendix table A5. The models include indicators for time of interview in the second wave of each adjacent pair and interactions between time and the key variable of interest (partisanship or supporting the eventual nominee). The Republican versus Democrat models include a binary indicator of party (1 for the candidate’s party, 0 for the opposite party). The primary support models include a binary indicator for favoring the eventual nominee. The models are similar to the model used for the APYN Panel in Fig. 2, but the comparison point is the immediately preceding wave rather than the initial wave of the panel. Because each model includes only two time points, individual fixed effects, which behave poorly when too few observations per individual are available, are excluded. Instead, all models control for ideology, assessments of the incumbent, income, race, ethnicity, gender, education, and age.

  15. One potential concern is that the models testing activation should include ideological proximity to the candidates rather than ideological position. Because the APYN Panel includes questions about where respondents think the candidates stand on the ideology scale in two waves only (neither of which is the initial wave), the proximity measure cannot be appropriately constructed for the models appearing in Fig. 4. However, the APYN Panel includes a battery of questions in the seventh (early September), ninth (late October), and final (late November) waves asking respondents about their own positions and their perceptions of the candidates’ positions. A truncated activation model using these late interviews yields the same results whether issue positions or issue proximity is included.

  16. Differences in the degree of activation across these predispositions could be due in part to differences in the amount of measurement error. Estimates from Wiley–Wiley models (Wiley and Wiley 1971) reveal a high reliability for the party identification measure in the first wave of the APYN Panel at 0.986 (a sketch of this reliability calculation appears after these notes). The reliabilities for the first-wave measures of ideology and favorability toward President Bush are lower but still relatively high at 0.824 and 0.856 respectively. There is very little improvement in the reliability of any of these measures over the course of the study, consistent with what Feldman (1989) found in a panel from the 1970s. Because ideology is measured with more error than party identification, it may be harder to uncover evidence for its activation. There is some evidence for this interpretation. I also estimate the activation model using a scale of items to measure each predisposition. Combining multiple measures reduces the amount of error in the measurement of a latent concept (Ansolabehere et al. 2008). I measure party identification by combining the standard seven-point scale with favorability toward the Democrats and favorability toward the Republicans. I measure ideology with a scale that combines the five-point measure with respondent positions on ten issues (abortion, funding stem cell research, immigration, the Iraq War, government efforts to reduce income inequality, gun control, private accounts for Social Security, oil drilling, health care, and taxes). I measure assessments of the incumbent by combining Bush favorability with an approval item and general mood about how the country is doing. The results of this model appear in column two of appendix table A3. There is evidence for activation of all three predispositions. The evidence for activation of party identification is similar across the two models, but the evidence for activation of ideology is stronger and the evidence for activation of attitudes toward the incumbent is slightly weaker. This indicates that ideological activation may in fact be even stronger than Fig. 3 suggests; Fig. 3 should therefore be taken as a conservative test for ideological activation. The fact that such evidence appears even in the presence of measurement error indicates the strength of this finding. It should also be noted, however, that the use of scale measures does not necessarily bolster the evidence for ideological activation: using similar scales to measure predispositions in the 1980 ANES Panel yields no evidence of ideological activation.

  17. Using residence in a battleground state as a measure of exposure to campaign information yields statistically and substantively similar results.
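
A sketch of the kind of discrete-time duration model described in note 6. It stacks one row per respondent per wave in which that respondent is still at risk of dropping out and fits a logit of dropout on wave dummies (the baseline hazard) plus covariates. This is an illustration only; the file and variable names are hypothetical, not the article's replication materials.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical person-wave file: one row per respondent per wave while still
# in the panel; dropout = 1 in the wave the respondent exits, 0 otherwise.
person_wave = pd.read_csv("apyn_person_wave.csv")

# Discrete-time hazard of attrition: wave dummies give the baseline hazard,
# and the remaining terms test whether predispositions and demographics
# predict dropout.
hazard = smf.logit(
    "dropout ~ C(wave) + pid7 + ideology + female + college + age",
    data=person_wave,
).fit()
print(hazard.summary())
```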
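
A sketch of the wave-by-wave check mentioned in note 8: within each wave that asked the head-to-head question, predict vote intention from the difference in candidate favorability and compare model fit across waves. The variable names are assumptions, not the APYN codebook.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("apyn_long.csv")  # hypothetical long-format panel file
df["eval_diff"] = df["mccain_favorability"] - df["obama_favorability"]

# Improving pseudo-R^2 across waves indicates that candidate evaluations
# track vote intention increasingly closely as the campaign proceeds.
usable = df.dropna(subset=["vote_mccain", "eval_diff"])
for wave, grp in usable.groupby("wave"):
    fit = smf.logit("vote_mccain ~ eval_diff", data=grp).fit(disp=0)
    print(f"wave {wave}: pseudo R2 = {fit.prsquared:.3f}")
```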
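
Notes 11–13 refer to the activation model behind Fig. 2: the relative candidate evaluation regressed on baseline party identification interacted with wave indicators, with respondent fixed effects absorbing every time-constant trait. The following is a minimal sketch of that kind of specification using the within transformation; it assumes a complete long-format file with hypothetical variable names and is not the article's replication code.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("apyn_long.csv")  # hypothetical long-format panel file

# Wave dummies (first wave as reference) and their interactions with the
# baseline measure of party identification.
waves = pd.get_dummies(df["wave"], prefix="wave", drop_first=True).astype(float)
interactions = waves.mul(df["pid7_baseline"], axis=0).add_prefix("pid_x_")
X = pd.concat([waves, interactions], axis=1)

# Respondent fixed effects via within-respondent demeaning: every time-constant
# characteristic drops out, so only changes in the weight attached to PID
# (the interaction terms) remain identified. The degrees-of-freedom correction
# for the absorbed effects is ignored in this sketch.
demean = lambda s: s - s.mean()
y_within = df.groupby("respondent_id")["relative_eval"].transform(demean)
X_within = X.groupby(df["respondent_id"]).transform(demean)

fit = sm.OLS(y_within, X_within).fit(
    cov_type="cluster", cov_kwds={"groups": df["respondent_id"]}
)
# Growth in these coefficients across waves is the evidence of PID activation.
print(fit.params.filter(like="pid_x_"))
```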
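
Note 14 describes difference-in-difference models estimated on pairs of adjacent waves without individual fixed effects. One such model might look like the sketch below; the choice of waves 3 and 4 is arbitrary, the variable names are hypothetical, and complete cases are assumed so that the cluster groups stay aligned.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("apyn_long.csv")          # hypothetical long-format panel file
pair = df[df["wave"].isin([3, 4])].copy()  # one adjacent pair of waves
pair["later_wave"] = (pair["wave"] == 4).astype(int)

# own_party: 1 for identifiers with the candidate's party, 0 for the opposite
# party. The interaction coefficient is the difference-in-difference estimate
# of the change in partisanship's weight between the two waves.
model = smf.ols(
    "relative_eval ~ later_wave * own_party + ideology + bush_favorability"
    " + race + ethnicity + female + education + age + income",
    data=pair,
).fit(cov_type="cluster", cov_kwds={"groups": pair["respondent_id"]})
print(model.params["later_wave:own_party"])
```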
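
Note 16 reports reliabilities from Wiley–Wiley (1971) models. Under the standard three-wave quasi-simplex assumptions (uncorrelated measurement errors with a common error variance across waves), the error variance and each wave's reliability can be computed directly from the observed variances and covariances. The sketch below uses hypothetical column names for three repeated measures of party identification.

```python
import pandas as pd

df = pd.read_csv("apyn_wide.csv")  # hypothetical wide file with repeated PID items
x1, x2, x3 = df["pid_w1"], df["pid_w2"], df["pid_w3"]

# Identified error variance: Var(x2) - Cov(x1,x2)*Cov(x2,x3)/Cov(x1,x3).
error_var = x2.var() - (x1.cov(x2) * x2.cov(x3)) / x1.cov(x3)

# Reliability of each wave's measure = 1 - error variance / observed variance.
for label, x in (("wave 1", x1), ("wave 2", x2), ("wave 3", x3)):
    print(f"{label} reliability: {1 - error_var / x.var():.3f}")
```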

References

  • Abramson, P. R., Aldrich, J. H., & Rohde, D. W. (1983). Change and continuity in the 1980 elections (rev. ed.). Washington, DC: CQ Press.

  • Allison, P. D. (2009). Fixed effects regression models. Los Angeles: Sage.

  • Alvarez, R. M. (1998). Information and elections. Ann Arbor: University of Michigan Press.

  • Angrist, J. D., & Pischke, J. (2009). Mostly harmless econometrics: An empiricist’s companion. Princeton: Princeton University Press.

  • Ansolabehere, S., Rodden, J., & Snyder, J. M. (2008). The strength of issues: Using multiple measures to gauge preference stability, ideological constraint, and issue voting. American Political Science Review, 102(2), 215–232.

  • Bartels, L. M. (1988). Presidential primaries and the dynamics of public choice. Princeton: Princeton University Press.

  • Bartels, L. M. (2006). Priming and persuasion in presidential campaigns. In H. E. Brady & R. Johnston (Eds.), Capturing campaign effects (pp. 78–112). Ann Arbor: University of Michigan Press.

  • Berelson, B. R., Lazarsfeld, P. F., & McPhee, W. N. (1954). Voting: A study of opinion formation in a presidential campaign. Chicago: University of Chicago Press.

  • Box-Steffensmeier, J. M., & Jones, B. S. (2004). Event history modeling: A guide for social scientists. New York: Cambridge University Press.

  • Campbell, J. (2008). The American campaign. College Station: Texas A&M University Press.

  • Campbell, A., Converse, P. E., Miller, W. E., & Stokes, D. E. (1960). The American voter. New York: Wiley.

  • Erikson, R. S., & Wlezien, C. (2012). The timeline of presidential elections: How campaigns do (and do not) matter. Chicago: University of Chicago Press.

  • Feldman, S. (1989). Measuring issue preferences: The problem of response instability. Political Analysis, 1(1), 25–60.

  • Finkel, S. E. (1993). Reexamining the ‘minimal effects’ model in recent presidential campaigns. Journal of Politics, 55(1), 1–21.

  • Fiorina, M. P. (2005). Culture war? The myth of a polarized America. New York: Pearson.

  • Gelman, A., & King, G. (1993). Why are American presidential election campaign polls so variable when votes are so predictable? British Journal of Political Science, 23(1), 409–451.

  • Henderson, M., Hillygus, D. S., & Tompson, T. (2010). ‘Sour grapes’ or rational voting? Voter decision making among thwarted primary voters in 2008. Public Opinion Quarterly, 74, 499–529.

  • Hetherington, M. J. (2011). Resurgent mass partisanship: The role of elite polarization (updated). In R. G. Niemi, H. F. Weisberg, & D. C. Kimball (Eds.), Controversies in voting behavior (5th ed., pp. 242–265). Washington, DC: CQ Press.

  • Hillygus, D. S., & Henderson, M. (2010). Policy issues and the dynamics of vote choice in the 2008 presidential election. Journal of Elections, Public Opinion, and Parties, 20(2), 241–269.

  • Hillygus, D. S., & Jackman, S. (2003). Voter decision making in election 2000: Campaign effects, partisan activation, and the Clinton legacy. American Journal of Political Science, 47(4), 583–596.

  • Hillygus, D. S., & Shields, T. G. (2008). The persuadable voter: Wedge issues in presidential campaigns. Princeton: Princeton University Press.

  • Holbrook, T. M. (1996). Do campaigns matter? Thousand Oaks: Sage.

  • Holbrook, T. M., & McClurg, S. D. (2005). The mobilization of core supporters: Campaigns, turnout, and electoral composition in United States presidential elections. American Journal of Political Science, 49(4), 689–703.

  • Johnston, R., Blais, A., Brady, H. E., & Crete, J. (1992). Letting the people decide: Dynamics of a Canadian election. Stanford: Stanford University Press.

  • Johnston, R., Hagen, M. G., & Jamieson, K. H. (2004). The 2000 presidential election and the foundations of party politics. Cambridge: Cambridge University Press.

  • Johnston, R., Thorson, E., & Gooch, A. (2010). The economy and the dynamics of the 2008 presidential campaign: Evidence from the National Annenberg Election Study. Journal of Elections, Public Opinion, and Parties, 20(2), 271–289.

  • Just, M. R., Crigler, A. N., Alger, D. E., Cook, T. E., Kern, M., & West, D. M. (1996). Crosstalk: Citizens, candidates, and the media. Chicago: University of Chicago Press.

  • Kaplan, N., Park, D. K., & Gelman, A. (2012). Understanding persuasion and activation in presidential campaigns: The random walk and mean-reversion models. Presidential Studies Quarterly, 42(4), 843–866.

  • Kenski, K., Hardy, B. W., & Jamieson, K. H. (2010). The Obama victory: How media, money, and message shaped the 2008 election. Oxford: Oxford University Press.

  • Lazarsfeld, P. F., Berelson, B. R., & Gaudet, H. (1944). The people’s choice: How the voter makes up his mind in a presidential campaign. New York: Columbia University Press.

  • Lenz, G. S. (2009). Learning and opinion change, not priming: Reconsidering the evidence for the priming hypothesis. American Journal of Political Science, 53(4), 821–837.

  • Lenz, G. S. (2012). Follow the leader: How voters respond to politicians’ policies and performance. Chicago: University of Chicago Press.

  • McClurg, S. D., & Holbrook, T. M. (2009). Living in a battleground: Presidential campaigns and fundamental predictors of vote choice. Political Research Quarterly, 62(3), 495–506.

  • Peterson, D. A. M. (2009). Campaign learning and vote determinants. American Journal of Political Science, 53(2), 821–837.

  • Piston, S. (2010). How explicit racial prejudice hurt Obama in the 2008 election. Political Behavior, 32, 431–451.

  • Sides, J., & Vavreck, L. (2013). The gamble: Choice and chance in the 2012 presidential election. Princeton: Princeton University Press.

  • Tesler, M., & Sears, D. (2009). Obama’s race: The 2008 election and the dream of a post-racial America. Chicago: University of Chicago Press.

  • Vavreck, L. (2009). The message matters: The economy and presidential campaigns. Princeton: Princeton University Press.

  • Wiley, D. E., & Wiley, J. A. (1971). The estimation of measurement error in panel data. In H. M. Blalock (Ed.), Causal models in the social sciences (pp. 364–373). Chicago: Aldine-Atherton.

  • Wooldridge, J. M. (2002). Econometric analysis of cross section and panel data. Cambridge: MIT Press.

  • Wooldridge, J. M. (2009). Introductory econometrics: A modern approach. Mason: South-Western Cengage Learning.

Acknowledgments

I would like to thank D. Sunshine Hillygus, Steven Ansolabehere, participants in the American Politics Research Workshop at Harvard University, participants in the Political Behavior and Identities Research Workshop at Duke University, and the members of the political science department at the University of Mississippi.

Author information

Correspondence to Michael Henderson.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (DOCX 34 kb)

About this article

Cite this article

Henderson, M. Finding the Way Home: The Dynamics of Partisan Support in Presidential Campaigns. Polit Behav 37, 889–910 (2015). https://doi.org/10.1007/s11109-014-9296-y
