Political Behavior

Volume 30, Issue 1, pp 97–113

Mobilizing the Seldom Voter: Campaign Contact and Effects in High-Profile Elections

Authors

  • Janine A. Parry
    • Department of Political Science, University of Arkansas
  • Jay Barth
    • Department of Politics, Hendrix College
  • Martha Kropf
    • Department of Political Science, University of North Carolina—Charlotte
  • E. Terrence Jones
    • Department of Political Science and Public Policy Administration, University of Missouri—St. Louis
Original Paper

DOI: 10.1007/s11109-007-9042-9

Cite this article as:
Parry, J., Barth, J., Kropf, M. et al. Polit Behav (2008) 30: 97. doi:10.1007/s11109-007-9042-9

Abstract

Decades of research suggest that campaign contact, together with an advantageous socioeconomic profile, increases the likelihood of casting a ballot. Measurement and modeling handicaps, however, permit a lingering uncertainty about campaign communication as a source of political mobilization. Using data from a uniquely detailed telephone survey conducted in a pair of highly competitive 2002 U.S. Senate races, we further investigate who gets contacted, in what form, and with what effect. We conclude that even in high-profile, high-dollar races the most important determinant of voter turnout is vote history, but that holding this variable constant reveals a positive effect for campaign communication among “seldom” voters, registered but rarely active participants who—ironically—are less likely than regular or intermittent voters to receive such communication.

Keywords

Voter mobilization · Voter turnout · Voting behavior · Campaign effects · Campaign contact

Scores of scholarly studies examine the whether and why of the turnout decision. A handful of the earliest efforts (e.g., Gosnell, 1927; Eldersveld, 1956; Cutright and Rossi, 1958; Wolfinger, 1963) examined—and found support for—the positive effect of party organizing and grassroots campaigning on voter mobilization. Attention shifted in the 1960s, however, to the predictive power of the demographic characteristics and emotional predispositions of individuals, qualities political scientists could suddenly measure on a grand scale thanks to advances in survey research methodology. In the estimation of Patterson and Caldeira (1983), in fact, the discipline became fixated for a time on a prospective voter’s sense of effectiveness, dutiful partisanship or “some other psychological involvement” and on “the skills and resources cultivated in substantial formal education” or other situational advantage (p. 677).1

A few scholars, however, retained an interest in party competition and campaigning—in canvassing, rallies, bumper stickers, and television advertisements—as potential predictors of the voting act, though the inherent endogeneity of the contact–turnout relationship, among other problems, has placed this work in some doubt. A unique dataset tracking the type and quantity of campaign communication allows us to join a new generation of such thinkers—including Gerber and Green (2000), Wielhouwer (2000), Niven (2002, 2004), and others—in further assessing the influence of politics on participation. We conclude that vote history is the most important determinant of voting even in high-dollar, high-profile races, but that holding this variable constant reveals a positive effect for campaign communication among “seldom” voters, registered but rarely active participants who—ironically—are less likely than regular or intermittent voters to receive such communication.

Voter Turnout: An Overview

Most turnout prediction models of the past four decades rely upon what Shaw et al. (2000) call a “resource-based model.” The likelihood of casting a ballot—and, in advance of that, registering to vote—is treated as a function of a person’s social status, especially educational attainment and annual income (e.g., Verba and Nie 1972; Wolfinger and Rosenstone 1980). Scholars toiling in this tradition have found substantial support for the significance of such variables in overcoming the time and information costs of political participation (Downs, 1957). Contemporary studies—further inclusive of marital status, residential stability, sex, region, civic skills, Internet access, and a host of legal-institutional factors—continue to provide convincing evidence that individuals are activated (or not) by their particular circumstances (Teixeira 1987, 1992; Nagler 1991; Leighley and Nagler, 1992; Rosenstone and Hansen 1993; Jackson 1996b; Verba et al. 1995; Knack 1995; Highton and Wolfinger 1998; Tolbert and McNeal 2003).

The political, or “mobilization-based,” model has been applied to the turnout question somewhat less frequently. Still, scholars have demonstrated the importance of campaign context, including a competitive partisan environment, ideologically polarized political parties, liberal party elites, the presence of a close race, robust campaign expenditures, and the stimulus of an up-ticket contest or interesting initiative question on the same ballot (Patterson and Caldeira 1983; Copeland 1983; Caldeira et al. 1985; Cox and Munger 1989; Hill and Leighley 1993; Rosenstone and Hansen 1993; Jackson 1996a; Jackson et al. 1998; Brown et al. 1999; Tolbert and Smith 2005). Canvassing by candidates, parties, and groups, too, has been demonstrated to spur voter interest (Blydenburgh 1971; Wielhouwer and Lockerbie 1994; Gerber and Green 2000; Wielhouwer 2000; Shaw et al. 2000; Niven 2002, 2004; Green et al. 2003; Hillygus 2005). Taken together, then, political science has demonstrated that politics do matter. Still, our knowledge about the conditions of this relationship remains incomplete due to a trio of measurement and modeling handicaps inherent in existing tests of campaign effects.

Campaign Contact and Voter Mobilization

One of the more vexing obstacles to assessing the role of campaigns in voter mobilization lies in the available data. Other than the innovative but tiny-N studies of Eldersveld (1956) and a few others, efforts to measure the impact of actual politicking on potential voters are rare. Most scholars have settled for aggregate measures of campaign activity and/or voter turnout. Patterson and Caldeira (1983), for example, regress gubernatorial participation rates on state-level measures of party competition and campaign spending among other variables. Jackson’s (1996b) treatment of the turnout puzzle combines similar state- (or congressional-district)-level measures of campaign environment with the standard lineup of demographic characteristics measured at the individual level. Each is a resourceful approach that yields promising results for a campaign effects model of voter turnout. Yet, without individual-level measures of both a resource- and a mobilization-based approach (especially the type and quantity of campaign contact), we cannot be certain of the actual contribution of either.

A handful of studies have overcome these measurement problems by relying upon large-scale post-election surveys or returning to the experimental approach of Gosnell (1927) and others. Wielhouwer and Lockerbie (1994) exercise the former strategy in their assessment of party contact and political participation. Using 40 years of the University of Michigan’s National Election Study data, they conclude that party-initiated voter contact has a stimulating effect that surpasses that of the traditional predictors of participation (see also Huckfeldt and Sprague 1992; Wielhouwer 2000; Shaw et al. 2000; Goldstein and Ridout 2002). Gerber and Green (2000) are rightly credited with revitalizing the experimental approach, and they do so on a grand scale. In a randomized experiment of 30,000 potential voters they find evidence of a substantial mobilizing effect for personal canvassing (and a smaller effect for direct mail). A multi-city follow-up study further boosted their conclusions about the potential of face-to-face appeals (Green et al. 2003).

Innovative as they are, these studies present problems that pertain mainly to generalizability. Both Huckfeldt and Sprague (1992) and Gerber and Green (2000), for example, rely upon data collected in a single city (South Bend, IN and New Haven, CT, respectively). Niven (2002, 2004) likewise uses a single primary race for the state legislature and a March 2001 mayoral election in Boynton Beach, Florida. Extending such work to municipal elections in six additional cities, as Green et al. (2003) do, is reassuring but again falls short as a definitive test of mobilization effects because even the most competitive local election is likely to be both nonpartisan and of little interest to potential voters (leading most experimental studies to rely solely upon a handful of nonpartisan mobilization appeals).2 While Wielhouwer and Lockerbie (1994), Wielhouwer (2000), Shaw et al. (2000), and others avoid this pitfall by using national-level data, their findings again are limited both by the narrow scope of reported campaign contact (e.g., “Did … the political parties … talk to you about the campaign this year?”) and by the fact that a nationwide sample masks the fundamentally regional nature of election campaigns (see Wielhouwer and Lockerbie 1994, p. 218). Consequently, both (1) the effect of campaign contact of varying form and volume in spurring participation and (2) the effect of such contact in the kind of regional, competitive partisan contests on which the bulk of America’s electioneering budget is spent would be well served by further testing. Our data are particularly suitable for both tasks.

A third challenge to assessing mobilization effects lies in the difficulty of modeling a contact–turnout relationship independent of past participation. If candidates and their allies direct materials mainly toward those with consistent voting histories, “those most likely to vote will also be most likely to receive contact, and the apparent link between contact and turnout may be spurious” (Gerber and Green 2000, p. 653). Huckfeldt and Sprague (1992) encountered a parallel problem when they discovered that primary election nonvoters reported party contact only about half as frequently as regular participants. The ironic result, they conclude, is that party communication appears to have no discernible mobilization effect on either group because “politically marginal citizens appear to lie beyond the reach of partisan organization. Conversely, politically engaged citizens are overwhelmingly likely to vote in the general election whether they are contacted or not” (p. 81). While we cannot surmount wholly the endogenous nature of these phenomena, the data used here supply an unusually complete account of the contact–turnout relationship. Specifically, drawing upon Niven (2004), we split the sample into three groups to reveal a mobilizing effect for campaign communication among “seldom” voters.

Data, Descriptives, Variables, and Analysis

To further probe the conditions under which campaigns will influence voter turnout, we draw upon a detailed three-wave panel survey conducted in a pair of highly competitive 2002 U.S. Senate races. The data were collected with the financial assistance of the Pew Charitable Trusts to supplement the on-going qualitative work of David Magleby and his on-the-ground collaborative research partners in election districts all over the country (see, for example, Magleby and Monson 2003). The races we tracked were for U.S. Senate seats, two of the most competitive (and highest dollar) contests in the country in 2002: the Arkansas matchup between incumbent Republican Tim Hutchinson and Democrat Mark Pryor, and the Missouri contest between incumbent Democrat Jean Carnahan and Republican Jim Talent.3

The first wave of interviews was conducted by telephone with 2,000 registered voters (1,000 in each state) during the last week of August and the first 2 weeks of September. Twelve hundred of these respondents were reinterviewed between October 14th and October 27th. The final wave of interviews took place between the night of November 5th (election day) and November 7th; a total of 1,000 respondents were contacted in this final stage. In all, 4,200 interviews were conducted in these two states over a 9-week period (Magleby and Monson 2003). For this project, we draw only upon the approximately 1,000 respondents (500 in Arkansas and 501 in Missouri) who participated in all three waves of the panel study because we are interested in the total volume of campaign contact experienced by potential voters (and because the last wave included two key variables: whether or not the respondent voted4 and reported telephone contacts).5

The central advantage of such a dataset is that—unlike past approaches—it allows for the application of a mobilization model to high-profile elections in which citizens were bombarded by political messages of varied forms from multiple (i.e., party and nonparty) sources. The survey’s rich inventory of respondents’ political attitudes and behavior likewise positions us to tackle the related modeling problems noted above. Of course, in avoiding the obstacles inherent in past studies, we are left with some of our own. First, though competitive, up-ticket races in two American states better represent the conditions under which the bulk of election communication takes place than do the state legislative, single-city, or municipal elections used in experimental treatments, our data may still suffer problems of generalizability. Specifically, Missouri and Arkansas retain distinctive political characteristics (including a peculiar “apartisanship”) that may make comparison of even their highest-profile races to those in other states suspect. In addition, experimental scholars such as Ansolabehere and Iyengar (1995) and Gerber and Green (2000) are suspicious of campaign communication studies that rely upon self-reported contact. Under such conditions, the researcher “has no control over and often little knowledge of (the) political contact” experienced by potential voters (Gerber and Green 2000, p. 654) and respondent recall has been demonstrated to be less than perfect (on this point, see also Ansolabehere et al. 1999). While this is indeed a troubling limitation of survey-based studies, we believe our data are uniquely suited to surmount, or at least diminish, such concerns. Primarily, we are encouraged by both a robust sample size and the very specific nature of the interview instrument, which included responses for five separate kinds of communication.6 In addition, rather than asking respondents to estimate an average day’s campaign contact some time after ostensibly receiving it or, worse, demanding that they quantify all the information received over the course of the electioneering period, our design inquires about the “average day this past week” at three different points during the final stretch of the campaign.

With the potential limitations of our approach duly noted, we proceed now to testing three specific hypotheses:

H1

Regular voters will receive a greater volume of targeted campaign materials than less regular voters.

H2

The greater the volume of reported contact, especially of the targeted variety (i.e., not blanket broadcast messages), the greater the likelihood of voter turnout.

H3

Campaign effects will be stronger for intermittent voters than for regular or seldom voters.

Descriptive Findings

Relying upon the approach described above, we are able to provide a glimpse of a high-profile, high-dollar campaign as experienced by potential voters (see Table 1). Though nearly all respondents report receipt of some kind of campaign contact, the type and frequency vary widely. Television spots—unsurprisingly—dominate respondents’ exposure to election messages; the average person reported viewing a total of 28 advertisements via this medium over the three average days captured by the panel study, and fewer than 7% of the sample reported no contact of this type. Radio ads, while the second most voluminous source of reported political communication, failed to reach more than one-fourth of the sample (a likely consequence of the kind of narrow-casting still possible with the purchase of radio time). Mail pieces also were a common form of ammunition used by candidates and by non-candidate communicators (i.e., parties and interest groups) as evidenced by the six pieces—an average of two per day—received by the typical respondent and by the fact that only one in seven reported no contact of this sort. Telephone calls and in-person communication round out our inventory of reported campaign contact, though substantial portions of our respondents (37% and 49%, respectively) reported no contact of this kind. Overall, potential voters clearly felt the effects of 2002’s record-setting campaign budgets in these states; the average respondent was targeted with nearly 50 messages (by any means) during just the three average days under study.7
Table 1

Type and frequency of campaign contact, full sampleᵃ

Contact type         Mean    Standard deviation    Proportion reporting no contact (%)ᵇ
Mail pieces           5.8           6.2                          14.4
Television ads       28.0          22.0                           6.5
Radio ads            10.3          14.8                          27.8
In-person contact     1.8           2.7                          48.5
Telephone calls       2.3           3.5                          36.5
Total (all types)    48.2          31.6                           1.7

Source: Center for the Study of Elections and Democracy, 2002 panel survey of Arkansas and Missouri races for U.S. Senate

ᵃ Missing observations across the five types of reported campaign contact were imputed using the Amelia program made available by Honaker et al. (2001; see also King et al. 2001). Respondents’ estimates were calculated based on all other individual characteristics used in our model of turnout likelihood

ᵇ Reflects the proportion of all respondents reporting zero contacts of each type, inclusive of cases for which Amelia imputed negative values

But who is contacted and by what means? On the first score, Goldstein and Ridout (2002) and Gershtenson (2003) provide convincing evidence that campaign strategists target particular kinds of people, and that they have honed these skills in recent years. Political parties (the contact source they investigate) direct their appeals to individuals who are wealthier, older, and more educated; they also target those with strong partisan attachments and histories of election participation. In short, party mobilization efforts are “being targeted at those who are already likely to vote in the first place” (Goldstein and Ridout, 2002, p. 22). Less is known, however, about the type and volume of this communication. Are today’s mobilization messages as tightly calibrated as their recipients? Specifically, we propose that regular voters are more likely than others to receive targeted campaign materials—including mail, telephone calls, and in-person contacts—while their less dependable counterparts will receive mostly “blanketed communication” such as television and radio ads.

Table 2, which breaks the sample into consistent (i.e., “every election”), intermittent (i.e., “almost every election” or “every two years”) and seldom voters (i.e., “presidential elections” or “only a few elections” or “only elections I’m interested in”), provides support for such a pattern. We present both the mean number of reported contacts (by type and total) and the percentage reporting no such contact for all three groups. While the differences in reported campaign contact are not large, they are in the expected direction with few exceptions.8 Regular voters—who appear on voter rolls and party mailing lists—receive more mail, more in-person contact, and more telephone calls on average than those with less consistent turnout habits. They also recall viewing more television commercials than the least active voters, though all groups are amply blanketed in this regard. Conversely, higher proportions of intermittent and seldom voters report no contact of the targeted varieties; fully 41%, for example, of the least active voters received not a single phone call as compared to just 34% of the regulars. Unsurprisingly, consistent voters also receive both more total contacts (mean = 50.0) and more targeted contacts (mean = 10.8) than members of the other groups (means = 48.1/45.1 and 10.1/8.3, respectively).
Table 2

Type and frequency of campaign contact, consistent, intermittent, and seldom votersᵃ

                                                Seldom                 Intermittent            Consistent
Contact type                               Mean   No contact (%)ᵇ   Mean   No contact (%)ᵇ   Mean   No contact (%)ᵇ
Mail pieces                                 4.7        18.2          6.0        13.4          6.2        12.8
Television ads                             25.7         5.5         28.8         6.0         28.4         7.8
Radio ads                                  11.1        26.0          9.5        29.9         10.9        26.6
In-person contact                           1.6        52.5          1.7        52.5          2.2        39.4
Telephone calls                             2.0        41.4          2.4        35.9          2.4        33.8
Total (all types)                          45.1         0.6         48.1         1.6         50.0         2.5
Targeted contact (mail, in-person, phone)   8.3        13.3         10.1         6.8         10.8         7.5

Source: Center for the Study of Elections and Democracy, 2002 panel survey of Arkansas and Missouri races for U.S. Senate

ᵃ Consistent voters (32.5% of sample) answered “always” on the 7-point vote history scale, while intermittents (49.1%) and seldoms (18.3%) include “every two years” and “almost every election,” “only a few,” “only elections I’m interested in,” and “only presidential elections,” respectively. Missing observations across the five types of reported campaign contact were imputed using the Amelia program made available by Honaker et al. (2001; see also King et al. 2001). Respondents’ estimates were calculated based on all other individual characteristics used in our model of turnout likelihood

ᵇ Reflects the proportion of all respondents reporting zero contacts of each type, inclusive of cases for which Amelia imputed negative values

Multivariate Models and Hypotheses

The dependent variable in our general statistical model is reported turnout in Wave 3 measured by a dummy variable in which a 1 indicates that the individual voted, and a 0 otherwise. The main explanatory variables are the level and type of campaign contact. Because past literature tells us little about the impact of volume on turnout (though see Niven 2004) but finds strong support for the mobilizing effect of campaign contacts that are personalized, we estimate effects for total reported contact, for each of the five specific forms separately, and for targeted communication. If “old-fashioned” canvassing—by in-person outreach and targeted mail pieces in particular—affects voting behavior, we would expect high volumes of reported contact, especially of the targeted variety (i.e., not blanket broadcast messages), to help explain our respondents’ decision to vote.

Individual-level data allow us to include a host of key resource-based predictors as well. As discussed earlier, higher incomes and levels of educational attainment are the star performers of most political participation studies. Past research also suggests that being female, married, white, and older will boost a person’s ballot-casting odds, as will strong partisan loyalties, campaign interest, and feelings of political efficacy. Finally, vote history has been perhaps the most robust contributor to vote likelihood, though this measure has been used somewhat less frequently in the literature (see Plutzer, 2002). We control for all of these.9 All independent variables have been coded such that a positive coefficient will be produced if the relationship is in the expected direction. A fuller description of the variables and their operationalizations is provided in the Appendix.
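In compact notation, one way to write the general specification (a sketch in standard logit form, not the authors' own equation) is:

$$
\Pr(\text{Vote}_i = 1 \mid \text{Contact}_i, \mathbf{X}_i) \;=\; \Lambda\!\left(\beta_0 + \beta_1\,\text{Contact}_i + \boldsymbol{\gamma}'\mathbf{X}_i\right), \qquad \Lambda(z) = \frac{1}{1 + e^{-z}},
$$

where Contact_i is, depending on the specification, the total, itemized, or targeted volume of reported contact, and X_i collects the resource and attitudinal controls just described (education, income, age and its square, race, sex, marital status, vote history, interest, partisanship, and efficacy).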

Initial Turnout Models—Full Sample

Table 3 presents logistic regression estimates for three separate models of voter turnout, each tapping a different conceptualization of campaign contact; estimates are computed for the full sample. Column A features a blunt “total contact” variable (i.e., the number of reported contacts of all types), column B itemizes each of the five forms of contact and treats each as a stand-alone predictor, and column C combines mail, telephone, and in-person contacts into a “targeted contact” variable.10 The most striking aspect of each model is that while nearly every resource, attitudinal, and mobilization variable is related to the likelihood of turnout in the expected direction, only age and vote history achieve significance. Neither campaign contact (measured as total volume, by individual type, and of targeted varieties only) nor respondents’ other individual characteristics make significant contributions to the variation in likely turnout.11 Before we reject hypothesis two and conclude that our respondents are moved chiefly by habit, however, we return to the modeling problem addressed earlier.
Table 3

Turnout models, full sample

Independent variables    3A total model     3B itemized model    3C targeted model
Married                  .084 (.234)        .108 (.235)          .087 (.234)
Education                .093 (.073)        .094 (.074)          .091 (.073)
Female                   .071 (.214)        .034 (.217)          .050 (.213)
Income                   .012 (.045)        .014 (.045)          .015 (.044)
Age                      .095** (.039)      .088** (.039)        .095** (.039)
Age²                     −.001** (.000)     −.001** (.000)       −.001** (.000)
White                    .202 (.415)        .239 (.422)          .265 (.415)
Vote history             .214*** (.073)     .204*** (.073)       .210*** (.072)
Interest                 .189 (.163)        .185 (.164)          .183 (.163)
Partisan                 .000 (.111)        .026 (.113)          −.001 (.111)
Efficacy                 .047 (.034)        .049 (.035)          .044 (.034)
Mail pieces              na                 .032 (.021)          na
Television spots         na                 .003 (.005)          na
Radio spots              na                 .002 (.008)          na
In-person contact        na                 −.064 (.039)         na
Telephone calls          na                 .068 (.046)          na
Total contacts           .004 (.004)        na                   na
Targeted contacts        na                 na                   .021 (.014)
Constant                 −3.639             −3.568               −3.545
N                        956                956                  956
−2 log likelihood        655.189            648.986              654.193
Cases predicted          88.2%              88.2%                88.3%

Note: All columns are (unstandardized) logit estimates because of the binary nature of turnout as the dependent variable. Standard error values are in parentheses

**<.05, ***<.01, two-tailed
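For readers who want to reproduce this setup on comparable survey data, the following is a minimal sketch (not the authors' code) of how the three Table 3 specifications could be estimated; the data frame and every variable name are hypothetical stand-ins for the survey items described in the Appendix, and the synthetic data exist only to make the sketch self-contained.

```python
# Minimal sketch (not the authors' code) of the three Table 3 specifications.
# All column names are hypothetical stand-ins for the survey variables.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
# Synthetic stand-in data; in practice the frame would hold the ~1,000 three-wave respondents.
df = pd.DataFrame({
    "voted":        rng.integers(0, 2, n),
    "married":      rng.integers(0, 2, n),
    "education":    rng.integers(1, 8, n),
    "female":       rng.integers(0, 2, n),
    "income":       rng.integers(1, 10, n),
    "age":          rng.integers(18, 90, n),
    "white":        rng.integers(0, 2, n),
    "vote_history": rng.integers(1, 8, n),
    "interest":     rng.integers(1, 5, n),
    "partisan":     rng.integers(0, 4, n),
    "efficacy":     rng.integers(0, 11, n),
    "mail":         rng.poisson(6, n),
    "tv":           rng.poisson(28, n),
    "radio":        rng.poisson(10, n),
    "in_person":    rng.poisson(2, n),
    "phone":        rng.poisson(2, n),
})
df["total_contacts"] = df[["mail", "tv", "radio", "in_person", "phone"]].sum(axis=1)
df["targeted_contacts"] = df[["mail", "in_person", "phone"]].sum(axis=1)

controls = ("married + education + female + income + age + I(age**2) + "
            "white + vote_history + interest + partisan + efficacy")
specs = {
    "A (total)":    f"voted ~ {controls} + total_contacts",
    "B (itemized)": f"voted ~ {controls} + mail + tv + radio + in_person + phone",
    "C (targeted)": f"voted ~ {controls} + targeted_contacts",
}
for label, formula in specs.items():
    fit = smf.logit(formula, data=df).fit(disp=False)  # unstandardized logit coefficients
    print(label)
    print(fit.params.round(3))
```

Coefficients from such a fit are unstandardized logit estimates, the same form reported in Tables 3 through 6.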

Refined Turnout Models—Split-Sample

If election activists—including candidates, parties, and interest groups—direct their materials toward those already possessing the propensity to vote (and, conversely, away from the “unmobilizable”), it is unrealistic to expect such appeals to have an effect on turnout. In short, there is a strong likelihood of an interactive relationship between vote history and campaign contact. Tables 4–6 explore this connection by presenting logistic regression coefficients for our three categories of respondents: consistents, intermittents, and seldoms, respectively. Readers will recall that consistent voters include those who reported voting in “every” election; intermittents in “almost every” election or “every two years”; and seldoms in “presidential elections only” or less. Though our categories are not an exact match of those used in previous efforts, the hypothesis is comparable to Niven (2004): regular voters, together with seldom voters, mask the effect of campaign contact. We hypothesize that once our model is applied to voter subsets, campaign effects will surface, especially within the intermittent group.
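As footnote 12 describes, an equivalent check interacts targeted contact with an indicator for seldom voters in the full sample; a sketch of that specification (our notation, under the same logit setup as above) is:

$$
\Pr(\text{Vote}_i = 1) \;=\; \Lambda\!\left(\beta_0 + \beta_1\,\text{Contact}_i + \beta_2\,\text{Seldom}_i + \beta_3\,(\text{Contact}_i \times \text{Seldom}_i) + \boldsymbol{\gamma}'\mathbf{X}_i\right),
$$

where Seldom_i equals 1 for the least active group. The split-sample models reported in Tables 4–6 relax this further by allowing every coefficient, not just the contact term, to differ across vote-history groups.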
Table 4

Turnout models, consistent voters

Independent variables    4A total model     4B itemized model    4C targeted model
Married                  .598 (.460)        .683 (.470)          .564 (.453)
Education                .226 (.148)        .237 (.148)          .210 (.147)
Female                   .025 (.433)        .015 (.449)          .002 (.431)
Income                   .029 (.086)        .031 (.086)          .033 (.086)
Age                      .134* (.073)       .131* (.074)         .127* (.073)
Age²                     −.001 (.001)       −.001 (.001)         −.001 (.001)
White                    .316 (.722)        .534 (.754)          .472 (.724)
Interest                 −.417 (.386)       −.404 (.389)         −.360 (.377)
Partisan                 .186 (.195)        .274 (.205)          .200 (.195)
Efficacy                 .038 (.060)        .043 (.062)          .036 (.060)
Mail pieces              na                 .053 (.037)          na
Television spots         na                 .009 (.011)          na
Radio spots              na                 .015 (.016)          na
In-person contact        na                 −.124* (.075)        na
Telephone calls          na                 .029 (.077)          na
Total contacts           .011 (.007)        na                   na
Targeted contacts        na                 na                   .019 (.025)
Constant                 −3.174             −3.776               −2.841
N                        309                309                  309
−2 log likelihood        177.756            173.916              179.627
Cases predicted          90.9%              90.9%                90.6%

Note: All columns are (unstandardized) logit estimates because of the binary nature of turnout as the dependent variable. Standard error values are in parentheses

*<.1, two-tailed

In all three tables, the model in column A again includes the “total contact” variable as our test of campaign effects, column B presents each of the five forms of contact as an itemized predictor, and column C combines mail, telephone, and in-person contacts into a “targeted contact” variable. Broadly speaking, the results are supportive of our hypothesis. Holding vote history constant reveals that while campaign effects are not evident in the full sample of contact-saturated potential voters (Table 3), nor in our sub-sample of habitual participants (Table 4), they do matter for less-than-perfect voters (Tables 5 and 6).
Table 5

Turnout models, intermittent voters

Independent variables    5A total model     5B itemized model    5C targeted model
Married                  −.061 (.367)       −.003 (.373)         −.061 (.367)
Education                −.070 (.108)       −.079 (.111)         −.070 (.108)
Female                   .083 (.328)        .064 (.337)          .078 (.327)
Income                   .077 (.070)        .079 (.073)          .078 (.070)
Age                      −.007 (.078)       −.010 (.079)         −.009 (.078)
Age²                     .000 (.001)        .000 (.001)          .000 (.001)
White                    .550 (.615)        .801 (.633)          .556 (.611)
Interest                 .264 (.262)        .308 (.267)          .260 (.263)
Partisan                 −.032 (.175)       .007 (.181)          −.032 (.175)
Efficacy                 .085 (.057)        .090 (.058)          .084 (.057)
Mail pieces              na                 .001 (.029)          na
Television spots         na                 −.006 (.007)         na
Radio spots              na                 .015 (.014)          na
In-person contact        na                 −.134** (.058)       na
Telephone calls          na                 .160* (.082)         na
Total contacts           .001 (.005)        na                   na
Targeted contacts        na                 na                   .004 (.017)
Constant                 −.099              −.538                −.046
N                        472                472                  472
−2 log likelihood        289.161            279.169              289.133
Cases predicted          90.5%              90.5%                90.5%

Note: All columns are (unstandardized) logit estimates because of the binary nature of turnout as the dependent variable. Standard error values are in parentheses

*<.1, **<.05, two-tailed

Table 6

Turnout models, seldom voters

Independent variables    6A total model     6B itemized model    6C targeted model
Married                  −.151 (.444)       −.140 (.460)         −.163 (.450)
Education                .214 (.176)        .213 (.165)          .209 (.160)
Female                   .285 (.423)        .025 (.445)          .171 (.162)
Income                   −.077 (.084)       −.080 (.085)         −.090 (.083)
Age                      .171** (.067)      .194** (.072)        .179** (.068)
Age²                     −.002** (.001)     −.002** (.001)       −.002** (.001)
White                    −1.096 (1.094)     −1.039 (1.117)       −.951 (1.099)
Interest                 .562* (.299)       .546 (.341)          .559 (.319)
Partisan                 −.154 (.225)       −.215 (.233)         −.201 (.227)
Efficacy                 −.002 (.066)       −.054 (.071)         −.027 (.067)
Mail pieces              na                 .081 (.056)          na
Television spots         na                 .011 (.012)          na
Radio spots              na                 −.025* (.014)        na
In-person contact        na                 .137 (.106)          na
Telephone calls          na                 .040 (.098)          na
Total contacts           .005 (.008)        na                   na
Targeted contacts        na                 na                   .090** (.036)ᵃ
Constant                 −3.446             −3.024               −3.353
N                        175                175                  175
−2 log likelihood        165.635            154.938              158.725
Cases predicted          79.4%              81.1%                79.4%

Note: All columns are (unstandardized) logit estimates because of the binary nature of turnout as the dependent variable. Standard error values are in parentheses

*<.1, **<.05, two-tailed

ᵃ We also ran our full model with an interaction term between seldom voters and targeted contact. We found a statistically significant effect for such contact across voter groups

Specifically, few patterns emerge for “consistents.” Age remains marginally significant and in the expected direction. And in-person contact emerges as a faintly useful—but negative—predictor among this group (as well as among intermittents). We are puzzled by the latter finding, particularly in light of recent experimental work, but propose two potential explanations, both of which speak to the difference between the election scenarios examined in other studies and those explored here. First, it is possible that while generic (i.e., nonpartisan, noncandidate) GOTV messages are effective mobilizers under certain conditions, candidates, parties, and/or their allies may “overdo” their targeted outreach efforts in super-heated circumstances such as those examined here, so much so that they actually turn off some active voters. Alternatively, perhaps our seldom voters—in close contests at least—actually receive more in-person campaign contact than they report. In that case, the negative relationship would not be a consequence of contact reducing turnout, but of less-than-likely turnout having attracted the contact in the first place. Further testing—under varied election conditions and with mechanisms for contact-validation—would no doubt prove illuminating.

Our results diverge from past findings in an even more important way in Tables 5 and 6. True, consistent with Niven (2004), intermittent voters are positively mobilized by telephone calls while consistent voters are not. But it is seldom voters who prove to be the most interesting subset in our sample. Not only do education and interest emerge, and age resurface, as relatively robust predictors of turnout for this group (Table 6C), but targeted contact also exerts a positive, significant influence on vote likelihood.12 The positive effects for mail pieces and television spots, moreover, approach significance for this group and this group alone. (When tested independently of other contact types, in fact, both provide a convincing boost to voter turnout, though the results are not shown in the interest of space.) To facilitate interpretation of these effects, Table 7 presents the expected probability of voting among “seldoms” who received varying amounts of targeted contacts. The substantive impact is striking. Receiving just the mean number of eight mail, in-person, and/or telephone contacts results in a 0.13 increase in the probability of voting over receiving none. Receiving an above-average amount provides a similar boost. In short, receipt of targeted campaign materials improves the probability that otherwise unlikely participants will cast ballots in up-ticket, high-dollar contests.
Table 7

Expected probability of voting among seldom voters, varying targeted contact

Targeted contacts                                                Probability of voting
Minimum # of contacts (0)                                        0.668 (0.093)
1 Standard deviation less than the mean # of contacts (.0436)    0.676 (0.090)
Mean # of contacts (8.459)                                       0.809 (0.055)
1 Standard deviation more than the mean # of contacts (16.481)   0.891 (0.048)
Maximum # of contacts (45)                                       0.979 (0.038)

Note: To simulate the impact of differing levels of campaign communication on seldom voters, targeted contact was set at five different values while sex was set at female, marital status at married, and race at white; all other variables were set at their means. Estimations were produced using Clarify: Software for Interpreting and Presenting Statistical Results by Michael Tomz, Jason Wittenberg, and Gary King. Standard errors appear in parentheses
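The Clarify estimates in Table 7 are simulation-based; a minimal sketch of the same idea (ours, not the Clarify code itself) draws coefficient vectors from their estimated sampling distribution and averages the implied vote probabilities at a chosen level of targeted contact, with the other covariates held fixed:

```python
# Minimal sketch (not the Clarify code) of simulation-based predicted
# probabilities from a fitted logit model of turnout among seldom voters.
import numpy as np

def simulated_vote_prob(beta_hat, vcov, x_profile, n_sims=1000, seed=0):
    """Average predicted probability of voting and its simulation standard error.

    beta_hat  : estimated logit coefficients (1-D array, constant first)
    vcov      : their estimated covariance matrix
    x_profile : covariate vector in the same order, with targeted contact set
                to the value of interest and other variables at chosen levels
    """
    rng = np.random.default_rng(seed)
    draws = rng.multivariate_normal(beta_hat, vcov, size=n_sims)  # coefficient uncertainty
    probs = 1.0 / (1.0 + np.exp(-draws @ np.asarray(x_profile)))  # inverse logit link
    return probs.mean(), probs.std()

# Hypothetical usage: vary only the targeted-contact entry of x_profile
# (0, the mean, mean + 1 SD, the maximum, ...) to build a Table 7-style comparison.
```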

Discussion and Conclusion

The campaign effects question is an important one for both political scientists and political practitioners. It is also one that suffered during the early ascendance of survey methodology. With the arrival of a renewed interest in the relationship between politics and participation, however, Caldeira et al. (1985) jubilantly declared that “… electorates need not merely ‘emerge,’ the products of faceless social, economic, and psychological forces.” Instead, a voting public “can be brought into vigorous being where there is an active political life—where political parties have leaders and organizations breathing hard to contest for public office, to make politics competitive, and to engage in political activities aimed at turning voters out to the polls” (p. 507). For decades since, scholars have gathered new datasets and employed increasingly sophisticated models to test—and support—the role of campaign effects in mobilizing voters.

But such effects are not uniform. Both the type of contact and the characteristics of the person who receives it influence its impact. Our approach—especially the use of a rich inventory of the type and quantity of campaign communication—supplies an opportunity to test such differences in the context of a high-dollar, high-profile election and for different groups of people. Like Hillygus (2005) and Niven (2002, 2004)—yet using a dataset more reflective of America’s regional and highest-dollar elections—we find that while most campaign efforts are of little consequence to habitual voters, they can have a mobilizing effect on those with less dependable voting histories. In fact, we find that those with the least active political pasts are the most likely to feel the positive effects of campaign communication, of the targeted variety at least. Validated vote (as well as validated contact) studies—with more contact detail and of higher-profile races than those conducted so far—as well as the application of our approach to a greater number of election contests would add to our confidence in this conclusion.

There is irony, of course, in the fact that the people most likely to be mobilized by campaign communication such as direct mail, in-person canvassing, and telephone calls are also the least likely to receive it. As Goldstein and Ridout (2002) and Gershtenson (2003) demonstrate, parties target wealthy, older, educated partisans with histories of election participation, i.e., regular voters. We found similar patterns in reported contact—though we capture both party and non-party sources of campaign communication—in our 2002 sample. But parties and their allies are concerned with voters’ preferences more than with mobilization per se. Only in close contests, then, should we expect to see campaign contact targeted at seldom voters, an unfortunate truth for those concerned with (re)broadening citizen engagement in the U.S.

Footnotes
1

The arrival of mass quantities of public opinion data, together with the computer technology to analyze them, undoubtedly enhanced the allure of this approach. Patterson and Caldeira (1983) suggest as much when they explicitly reference the “seminal electoral research of the Survey Research Center at the University of Michigan” in their review of the literature (676).

 
2

In fact, though the authors made an effort to include several “competitive” races in their research design, the average turnout rate across the six elections was just 25.6%.

 
3

Also on the Arkansas ballot in 2002 was a gubernatorial contest between the Republican incumbent, Mike Huckabee, and long-time Democratic state treasurer, Jimmie Lou Fisher. While it is likely some “bleed over” occurred in campaign communication for the two races, total spending (candidate plus noncandidate) in the Pryor–Hutchinson contest exceeded the other by at least threefold (15 million dollars as compared with five).

 
4

Eighty-eight percent of the total sample reported voting, comporting with the “overreporting” pattern of previous studies. As Katosh and Traugott (1981) and Sigelman (1982) argue, however, most inferences based on reported vote remain true when tested against validated vote (though, see also Hill and Hurley 1984). In addition, recent work suggests that survey participation itself does not boost vote likelihood; see Mann (2005).

 
5

The question regarding telephone contact was included only in the third wave because such efforts at voter stimulus tend to occur later in campaigns.

 
6

The specific items were: (1) “On an average day this past week, how many pieces of mail about the U.S. Senate race did you receive?” (2) “On an average day this past week, how many television ads about the U.S. Senate race did you see?” (3) “On an average day this past week, how many radio ads about the U.S. Senate race did you hear?” (4) “On an average day this past week, how many times were you contacted in person by someone with information about the U.S. Senate race?” and (5) “On an average day this past week, how many telephone calls about the U.S. Senate race did you receive?” While a similar question was asked about e-mail communication, the number of missing responses was too great to yield significant results or to impute—comfortably—values for the missing data.

 
7

While the respective means for total contact were similar for Missouri and Arkansas examined independently, there was some variation in the way the campaigns were conducted. Missourians, for example, reported receiving significantly more mail and radio than Arkansans.

 
8

While only in-person contact presented a statistically significant difference in means between intermittent and regular voters, most of the differences between seldoms and consistents achieved significance of at least .05. Differences are significant in most “% no contact” observations as well, particularly within targeted types.

 
9

We also ran each model with a dummy variable to control for state, on the possibility that AR and MO have different enough political histories and institutional arrangements to impact voter mobilization; it was (grossly) insignificant each time.

 
10

With Gerber and Green (2000) and Green et al. (2003) in mind, we ran three additional models that included each of the “targeted contact” variables—in-person contact, mail contact, and telephone contact—alone. Only mail approached significance as a predictor of turnout. Vote history retained its dominant position while mail contact produced a coefficient of .032 and a significance value of .097 (standard error = .020).

 
11

Multicollinearity among the separate forms of contact is not to blame for such poor performance. No single form is significantly correlated (Pearson, 2-tailed, at .01) with another at a level higher than .294.

 
12

We present separate models for the three types of voters for ease of interpretation. We note however that the effects are statistically significant across different types of voters. Including a variable in the full model interacting canvassed contacts with a dummy for seldom voters (versus all others), we find that the interactive term is positive and significant. For the constituent terms, seldom voters are significantly less likely to vote when canvassed contact is zero. Canvassed contact has no significant effect for voters who are intermittent or regular voters.

 

Acknowledgements

The authors wish to thank David Magleby and Quin Monson of the Center for the Study of Elections and Democracy at Brigham Young University for providing us these data. We also wish to thank Gary King for his assistance with the Amelia imputation program, and John Szmer for his helpful advice. Any errors in interpretation are the responsibility of the authors.

Copyright information

© Springer Science+Business Media, LLC 2007