Over the past decade, hundreds of millions of dollars have been spent on Facebook advertisements during U.S. elections (e.g., Evers-Hillstrom, 2018). During the 2018 election cycle, more candidates advertised on Facebook than on television, running ads to woo supporters, raise funds, and mobilize voters (Fowler et al., 2020). Facebook has come under intense scrutiny due to widespread use of its ads and the platform's lack of regulatory oversight; this attention is based in part on an implied assumption that Facebook ads must have some sort of impact on voter behavior. Yet there is limited empirical evidence in the academic literature that Facebook ads have any measurable effect: previous experimental work has not detected an effect in terms of turnout, Democratic vote share, candidate name identification, or favorability (Broockman & Green, 2014; Collins et al., 2014; Coppock et al., 2020a; Coppock et al., 2020b; Kalla, 2017; Shaw et al., 2018), though there may be a persuasive impact in European multi-party systems (Hager, 2019). This lack of evidence has not deterred political advertisers from using the platform, nor has it stopped the media and candidates alike from crediting Facebook ads with shifting the outcomes of elections (Baldwin-Philippi, 2020; Beckett, 2017).

This paper reports the results of a well-powered, pre-registered field experiment designed to determine whether longitudinal exposure to issue-oriented Facebook advertisements from a political organization has the ability to influence voting behavior, specifically by mobilizing individuals unlikely to vote. In collaboration with Progress Texas, a 501(c)3 organization, I targeted lower-propensity voters with seven weeks of issue-oriented advertisements on Facebook, with subjects randomly assigned to one of four message streams or a control group. Ads were microtargeted using voter file data uploaded to Facebook via the Custom Audiences tool, allowing specific voters to be assigned to treatment conditions.

Results show that despite the substantial sample size (N = 871,479), there is no main effect of assignment to Facebook ads on turnout that can be distinguished from a Type I error. However, within competitive congressional districts (CDs) there is a significant 1.66 percentage point (pp) effect of ads supporting abortion rights relative to the control group; the effect is concentrated among voters coded as female in the voter file. Evidence is also suggestive of an effect of the abortion rights ads within GOP stronghold counties, again solely among female voters. The other three message streams—focused on healthcare, immigration reform, and gun control—had no effect; thus, in the aggregate, the 2,084,335 Facebook ad impressions delivered in this experiment had no impact on turnout. Complicating matters is the relatively low treatment rate of subjects (~35%), which is likely due to Facebook's algorithmic preference for exposing individuals deemed receptive to the ads. Results suggest that the effects of the ads are conditional on the alignment of message, audience, and electoral context. Importantly, Facebook's microtargeting system appears to find and expose the specific individuals who meet these conditions.

Effects of Advertising on Voting Behavior

Despite the media attention that Facebook ads receive, academic research on their ability to change voting outcomes in the United States largely fails to find effects distinguishable from zero. Ads deployed entirely or partially on Facebook have not been demonstrated to impact individual voter turnout (Collins et al., 2014; Kalla, 2017), Democratic vote share (Coppock et al., 2020a, b), or candidate name identification or favorability (Broockman & Green, 2014; Shaw et al., 2018). One study conducted in partnership with a Republican gubernatorial candidate actually finds a negative impact of Facebook ads on turnout in a primary among fans of the candidate (Shaw et al., 2018).

Looking abroad, a study conducted with a German political party finds a positive effect on party vote share from a combination of Facebook and Google Ads, amounting to a 0.7pp (p = 0.155) increase for the sponsoring party and a 1.4pp (p = 0.094) decrease in vote share for competing parties (Hager, 2019). The ads had no effect on turnout and appear to have been more effective in areas with stronger bases of support. The author remains "skeptical whether online ads have a decisive influence on elections" (Hager, 2019, p. 389).

Research measuring the impact of other forms of Internet advertising finds small effects in terms of turnout and vote choice. Internet display ads are effective at increasing turnout in a Republican primary by approximately 0.25pp, though pre-roll video has no effect (Shaw et al., 2018). A study conducted on Millennial voters in a municipal election finds an increase in turnout of 0.52pp, but only among voters in competitive districts (Haenschen & Jennings, 2019). Internet ads deployed in a Republican primary generated a weakly positive but non-significant effect on candidate choice (Turitto et al., 2014).

This lack of sizeable or detectable effects from digital ads is consistent with the broader political science literature on paid campaign promotions. Campaigns' largest expenditure is advertising, particularly on television, suggesting that campaign consultants believe it is effective (Jacobson, 2015). However, there are few experiments that randomly assign real-world exposure to mass media advertisements and measure its effect on voting. One meta-analytic estimate suggests that TV ads can raise turnout by 0.5pp and radio ads by 1pp, but the effect is not significant (Green & Gerber, 2019). Other studies attempt to use a combination of survey experiments, ad tracking, geographic turnout data, and self-reports to estimate the effects on turnout. If such effects exist, they are in the range of 1pp (Ashworth & Clinton, 2007; Vavreck, 2007), though other work finds no effect (Krasno & Green, 2008). Another strain of work explores negative advertisements in particular; a meta-analysis finds no effect on turnout (Lau et al., 2007). Other recent field and survey experiments and meta-analyses find that advertising is also largely unable to persuade voters in partisan general election contests (Coppock et al., 2020a; Kalla & Broockman, 2018).

Notably, Facebook has been able to impact turnout through tactics that leverage interpersonal networks. A social "I Voted" widget deployed in 2010 and 2012 (Bond et al., 2012; Jones et al., 2017) boosted turnout by 0.39pp and 0.24pp, respectively. However, the ability to deploy this tool is not available to the wider public. Other studies have found compelling evidence of friend-to-friend mobilization within Facebook networks (Haenschen, 2016; Teresi & Michelson, 2015), though these tactics are difficult to scale up. Facebook use generally or for politics is not associated with higher voter participation (Boulianne, 2015), and there is no consistent evidence that candidates' use of social media impacts their likelihood of winning (Kim et al., 2019). Thus if Facebook is to be leveraged as a tool for influencing voter behavior, the most likely and accessible pathway for political entities is through advertising.

An entirely separate body of research into Facebook ad effectiveness exists: the so-called "grey literature" conducted by political organizations that do not consistently release results to the public (see Issenberg, 2012). The handful of public practitioner and platform case studies available demonstrate the successful use of Facebook ads for outcomes other than voting. For instance, ads have proven effective for fundraising, particularly by amplifying email solicitations (Trilogy Interactive, n.d.). Facebook itself touts the use of its platform to reach targeted voters and improve advertising ROI (Facebook, n.d.), and sends staff to work directly with campaigns (Kreiss & McGregor, 2018). The platform allegedly considered releasing a case study to demonstrate its effectiveness for the 2016 Trump campaign, but held off due to the potential for backlash (Beckett, 2017).

Together, this literature raises the question of whether Facebook ads specifically, and political advertising generally, have an effect on voter behaviors and attitudes, despite the billions of dollars spent each cycle.

Methodological Challenges to Facebook Advertising Experiments

Despite insufficient evidence of effectiveness, advertising on Facebook remains seductive due to the affordances of microtargeting, or the ability to show ads to precise segments of the voting public deemed most likely to be influenced by them (Kim et al., 2018). Facebook enables advertisers to upload lists of specific individuals and target ads to any user on that list who can be matched to a Facebook profile. Individuals not on the list do not see the ads, offering a theoretical improvement in return on investment (ROI) through spending funds to reach only the most desired eyeballs; this practice has been common in consumer marketing for over a decade (e.g. Agan, 2007). Prior to the introduction of advertising archives for digital political ads in the wake of the 2016 election (Constine, 2018), these ads were essentially invisible to those not targeted by them (Kim et al., 2018).

Microtargeting may offer a methodological and theoretical explanation for null results in previous online ad experiments. Several prior studies have used geographic-based cluster targeting rather than individual microtargeting (e.g. Coppock et al., 2020a; Broockman & Green, 2014; Turitto et al., 2014, though see Hager, 2019 and Kalla, 2017). More precise targeting may be needed to estimate effects of ads on those assigned to exposure (Haenschen & Jennings, 2019; though see Collins et al., 2014).

The matter is further complicated by recent work demonstrating that Facebook's internal algorithm prioritizes showing political advertisements to people most likely to agree with them (Ali et al., 2019b). This selective algorithmic exposure is thought to be greatest when advertising budgets are low. Facebook's algorithm categorizes users as interested in politics based on their digital trace data (Thorson et al., 2019); these individuals would be more likely to be exposed than others in the same uploaded custom audience, even in a randomized controlled trial. Thus not all microtargeted users will necessarily be exposed, and exposure may be correlated with susceptibility to the ads themselves.

Furthermore, while randomization within this experiment ensures that voters targeted to receive other political advertisements during the study are evenly distributed across groups, the likelihood that they would be so targeted will vary by district or county depending on underlying electoral conditions. Facebook ads are displayed based on bid amounts: individuals see ads from advertisers willing to pay more to show them. If a campaign is willing to bid more than the partner organization to reach the same voter, our ad might not be shown. Since campaign spending tends to reflect electoral salience, individuals in more competitive areas might be less likely to see the ads in this study. Facebook does not report which individuals see a campaign's ads, only how many members of a target audience are exposed. All of this is to say that one can attempt to treat a precise list of individuals on Facebook and estimate the average rate of treatment of that list, but cannot know whether confounds predict which individuals are actually exposed.

Testing Microtargeted Issue-Based Ads

This study explores whether Facebook ads have an effect on voting behavior via a well-powered (N = 871,479) experiment testing random assignment to microtargeted issue-oriented ads.Footnote 1 The study was conducted in Texas during the 2018 U.S. Midterm elections in partnership with a progressive organization. The partner was responsible for the advertising content, and chose to test four separate message streams offering progressive content on the topics of abortion rights, healthcare, immigration, and gun control. The subject pool, referred to by the partner as the “emerging Texas electorate,” was not expected to consist of likely Midterm voters.

Despite media representations to the contrary (e.g., Beckett, 2017), based on the lack of significant findings in prior academic studies of Facebook ads' impact on turnout, one cannot assume that the ads will have an effect. Thus, two research questions are posed:Footnote 2


RQ1: Did assignment to a message stream impact turnout?


RQ2: Did assignment to any treatment condition impact turnout?

If the advertisements do work, prior mobilization research suggests their effects should be moderated by electoral context—specifically, whether the voter has a competitive race on their ballot (Arceneaux & Nickerson, 2009; Haenschen & Jennings, 2019; Malhotra et al., 2011). During the 2018 election cycle, Texas was home to a number of competitive congressional and county-level races. Theory anticipates that the ads should have been more effective in these areas owing to their heightened salience.


RQ3: Did congressional-level electoral salience moderate the effect of treatment?


RQ4: Did county-level electoral salience moderate the effect of treatment?

Separately, a voter's individual propensity to vote moderates whether they are susceptible to mobilization (Arceneaux & Nickerson, 2009; Malhotra et al., 2011).


RQ5: Did voter propensity moderate the effect of treatment?

Additional exploratory analyses not included in the pre-registration are presented as well.

Method

This experiment was conducted in partnership with Progress Texas during the 2018 U.S. Midterm Election to measure the effects of longitudinal exposure to issue-oriented advertising content on Facebook. Progress Texas is a 501(c)3 "non-profit media organization promoting progressive messages and actions" (Progress Texas, 2020). The partner targeted individuals they deemed unlikely to vote without intervention, seeking to mobilize them through exposure to content about issues they had already been focused on in the years leading up to the experiment: abortion rights, healthcare, immigration, and gun control.

An a priori power analysis was performed to determine the minimum detectable effect given budgetary constraints, which capped each of the four treatment groups at 40,000 voters. Turnout in the control group was expected to be low given the sampling frame. The sample of 871,479 subjects—160,000 divided evenly into four treatment groups of 40,000, with the remainder in the control group—was adequately powered at the 0.05 α level to detect a 0.55pp increase in turnout if baseline turnout in the control group is 30%, a 0.52pp increase if it is 25%, and a 0.48pp increase if it is 20%.
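A calculation of this kind can be approximated with a standard two-proportion z-test. The sketch below is illustrative only: the paper does not state the formula, software, or power level used (a two-sided test and 80% power are assumed here), so its output will not exactly reproduce the figures above.

```python
from math import sqrt
from statistics import NormalDist

def mde_two_proportions(p0, n_treat, n_control, alpha=0.05, power=0.80):
    """Approximate minimum detectable effect (in proportion units) for a
    two-sample comparison of proportions, using the normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    se = sqrt(p0 * (1 - p0) * (1 / n_treat + 1 / n_control))
    return (z_alpha + z_power) * se

# One 40,000-voter treatment group vs. the ~711,480 control subjects.
mde = mde_two_proportions(p0=0.25, n_treat=40_000, n_control=711_480)
print(f"Approximate MDE at 25% baseline turnout: {mde * 100:.2f}pp")
```

As in the paper's figures, the detectable effect shrinks as baseline turnout falls, because the variance of a proportion is smaller further from 50%.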

After randomization, treatment groups were uploaded to Facebook using the Custom Audience feature, which matches the source data (voter registration records) to the Facebook user database. Due to privacy concerns, Facebook does not report either how many or which individuals matched. Ads were run from the Progress Texas page, and contained the necessary disclaimers for paid political advertising.Footnote 3

Procedure

The campaign ran from September 18 to Election Day, November 6, 2018, with new ads starting approximately every four days and running for a week. Ads were bid to maximize reach—the number of people in the target audience who were shown the ad—and capped at three impressions per user, to prevent Facebook from showing the ads dozens of times to a smaller pool of subjects. Each ad had a budget of $500, which would have been sufficient to expose the 40,000 subjects in each treatment group. Actual exposure rates are reported in the results section. Ads were only shown in Facebook newsfeeds on desktop and mobile devices.

While the issue varied between groups, the media format of each week's ads was constant. For instance, on October 15, all four ads were animated graphics: individuals in the abortion rights group were shown a graphic about abortion rights, those in the gun control group a graphic about gun control, and so on. The partner was responsible for generating the content and staging the ads. Sample ads are available in the supplement. Ads were primarily focused on policy issues, though the final three promoted early voting and Election Day rather than policy.

Measures

The dependent variable for the study, voter turnout, was measured using voter file data obtained from Catalist.

Several covariates were developed for this study, detailed in Table 1. CD competitiveness for each subject's district was based on ratings from the Cook Political Report. Districts coded as competitive consisted of the 7th, 22nd, 23rd, and 32nd Districts.Footnote 4 County competitiveness and lean were based on two elections on the 2018 ballot. First, the margins of victory in the 2018 races for Governor and County Judge were averaged.Footnote 5 Counties with an average absolute margin of victory within five points were considered competitive; all others were not. To determine partisan lean, counties with an average margin of victory within five points were coded as tossups (Fort Bend, Harris, Hays); counties with a Republican margin greater than five points were coded as GOP strongholds (Collin, Denton, Tarrant, Williamson); and counties with a Democratic margin greater than five points were coded as Democratic strongholds (Bexar, Dallas, Travis).
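The county coding rule can be expressed as a small function. The five-point threshold and category labels follow the text above; the sign convention for the margin (positive favoring Republicans) is an assumption for illustration.

```python
def county_lean(avg_margin_pp):
    """Classify a county by its average margin of victory across the 2018
    Governor and County Judge races. Positive margins are assumed to favor
    Republicans, negative margins Democrats (sign convention illustrative)."""
    if abs(avg_margin_pp) <= 5:
        return "tossup"  # within five points; also coded as competitive
    return "GOP stronghold" if avg_margin_pp > 5 else "Democratic stronghold"

print(county_lean(3.2))    # prints "tossup"
print(county_lean(9.8))    # prints "GOP stronghold"
print(county_lean(-12.4))  # prints "Democratic stronghold"
```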

Table 1 Participant demographics

Modeled voter scores were provided by Catalist. The voter propensity score predicts the likelihood of an individual voter casting a ballot in the 2018 Midterm election without any mobilization. The Democratic support score predicts the likelihood that a voter will choose Democratic candidates if they vote.Footnote 6

Sample

The sampling frame was developed by the partner organization out of a desire to target what they refer to as the “emerging Texas electorate”—relatively young voters in metropolitan areas who are unlikely to vote consistently but likely to be progressive in ideology and to support the organization's stated policy priorities. Given the partner's 501(c)3 status, they are not permitted to target based on partisanship. The partner chose to target voters whose only prior participation was in the 2016 general election or who had registered for the first time since that contest, who were under age 40, and who were registered in one of ten urban or suburban counties selected by the partner comprising the Houston, Dallas, Austin, and San Antonio metro areas.Footnote 7 This resulted in 871,479 registered voters; descriptive statistics are reported in Table 1. Subjects were then randomly assigned to one of four treatment groups of 40,000 voters or the control group. Tests of joint orthogonality following McKenzie (2015) verified random assignment.Footnote 8

After the election, an updated voter file was obtained from Catalist. Of the original 871,479 subjects, 12,334 (1.42%) were no longer registered to vote and another 4,181 (0.48%) could not be located in the voter file database. A further 1,206 (0.14%) voters were registered in multiple states. These 17,721 subjects were removed from analysis; there was no association between being removed and group assignment, χ2(4, N = 871,479) = 6.86, p = 0.14.

Per the pre-registration, voters who moved during the experiment would be removed before final analysis. A total of 53,507 (6.27%) voters were no longer registered in their original county and were removed; again there was no association between moving and group assignment, χ2(4, N = 853,758) = 2.51, p = 0.64. Subsequent examination of the data found that 10,489 (2.1%) of remaining subjects did not have a birthdate in the Catalist data that matched the birthdate of the original subject; these are assumed to be incorrect matches by the voter file vendor.Footnote 9, Footnote 10 These subjects were removed from analysis as well; there was no relationship between producing a bad match and group assignment, χ2(4, N = 800,251) = 2.28, p = 0.68.Footnote 11 This results in a final sample of 789,762 voters.
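The attrition checks above are Pearson chi-square tests of independence between removal and group assignment. A minimal sketch follows, with fabricated cell counts (the paper does not report per-group removal counts); with df = 4, a statistic below the 9.488 critical value fails to reject independence at α = 0.05.

```python
def chi2_statistic(table):
    """Pearson chi-square statistic for an r x c contingency table
    (rows: removed / retained; columns: control + four treatment groups)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (obs - expected) ** 2 / expected
    return stat

# Illustrative counts only: removal roughly balanced across groups.
removed  = [9_900, 560, 540, 555, 545]
retained = [701_580, 39_440, 39_460, 39_445, 39_455]
stat = chi2_statistic([removed, retained])
# df = (2 - 1) * (5 - 1) = 4; critical value at alpha = 0.05 is 9.488
print(f"chi-square = {stat:.2f}; reject at 0.05? {stat > 9.488}")
```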

Results

First, I estimate the degree to which subjects were exposed to the ads. Next, I conduct statistical analysis using linear regression at the level of assignment to determine if the advertisements had an impact on turnout. This is followed by a pre-registered analysis of heterogeneous effects based on theoretically motivated variables.Footnote 12 Subsequently, I conduct an exploratory analysis to determine whether voter sex or modeled Democratic support predict susceptibility to the ads. Finally, I estimate the complier average causal effect (CACE) based on the above findings.

Estimating Exposure

Due to privacy concerns, Facebook does not report the number of individuals in an uploaded audience who match its user database, so it is not possible to know how many of the 40,000 individuals in each treatment group had the potential to be reached, nor what share of that potential audience was exposed. However, it is possible to estimate treatment rates by looking at the actual reach of the ads. Table 2 reports the average reach of all ads in each group, as well as the ad with the highest reach. Based on these statistics, the average and highest treatment rates are calculated: 34% to 42% of each 40,000-subject treatment group was exposed during the experiment.
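The treatment-rate estimate is simply reported reach divided by the 40,000 subjects assigned to each condition. A sketch, using reach figures back-derived from the abortion rights condition's reported 34.9% average and 38.9% high rates (illustrative, not Facebook's raw reach numbers):

```python
GROUP_SIZE = 40_000  # subjects assigned to each treatment condition

def treatment_rate(reach):
    """Share of an assigned group reached, based on Facebook's reported
    'reach' (unique users shown the ad) for that group's ads."""
    return reach / GROUP_SIZE

# Reach values implied by the abortion rights condition's reported rates.
print(f"average: {treatment_rate(13_960):.1%}")  # prints "average: 34.9%"
print(f"high:    {treatment_rate(15_560):.1%}")  # prints "high:    38.9%"
```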

Table 2 Estimated treatment rates by ad condition

Again, without knowing how many individuals matched, it is not possible to calculate how many matched individuals were exposed. Exposure itself was likely not uniform, since Facebook "preferentially exposes users to political advertising that it believes is relevant to them" (Ali et al., 2019b, p. 1); this skew also occurs by race and gender (Ali et al., 2019a). It is possible that Facebook selectively showed the ads to the 34–40% of subjects it deemed most likely to respond to them. As such, I conduct the analysis at the level of assignment to determine intent-to-treat (ITT) effects, and extrapolate the CACE based on exposure estimates.

Main Effects

Treatment effects are estimated using linear regression, with results reported in Table 3.Footnote 13 Covariate-adjusted predicted turnout percentages for treatment groups reported in the manuscript are calculated with the emmeans R package. This approach accommodates uneven sample sizes (e.g., treatment vs. control) and covariate imbalance (e.g., 55.1% of subjects are female). To reduce the likelihood of a Type I error, pairwise comparisons are calculated with a false discovery rate (FDR) adjustment applied to p values after all models are estimated.Footnote 14
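FDR adjustments of this kind are most commonly implemented as the Benjamini–Hochberg step-up procedure (p.adjust(method = "BH") in R); the paper does not name the exact method, so BH is an assumption here. A pure-Python sketch, using illustrative p values rather than the paper's actual family of comparisons, so the adjusted values will not match those reported:

```python
def fdr_bh(pvals):
    """Benjamini-Hochberg adjusted p-values: sort ascending, multiply each
    p by m/rank, then enforce monotonicity from the largest p downward."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank_from_end, i in enumerate(reversed(order)):
        rank = m - rank_from_end  # 1-based rank of pvals[i] in sorted order
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

# A raw p of 0.061 adjusts upward once it competes with other comparisons.
print(fdr_bh([0.061, 0.52, 0.33, 0.81]))
```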

Table 3 Linear regression, effect of treatment assignment on turnout

Results show that none of the ads had a significant main effect on voting. While the abortion rights ads generated a 0.49pp increase in turnout relative to the control group, this result was not significant when controlling for FDR (p = 0.061, p adj. = 0.092). No significant differences were found between treatment groups after the FDR adjustment. If a main effect on turnout exists, it is not large enough to be distinguished from a Type I error, even with a control group of 644,684 voters. RQ1 is answered in the negative: none of the message streams had an overall impact on turnout.

The pooled effect of assignment to any ad vs. the control group is also non-significant and slightly negative (− 0.04pp, p = 0.744); results are reported in Table A2 of the supplement. I answer RQ2 in the negative as well.

Moderating Effects of Congressional District, County, and Voter Propensity

Turning to theoretically motivated heterogeneity, analysis finds a significant moderating effect of CD competitiveness and a marginally significant effect of county-level competitiveness. Results are reported in Table 3.

Within competitive CDs, abortion rights ads generated a 1.66pp (p adj. = 0.016) increase in voters' predicted turnout percentage relative to the control group, as well as increases relative to the healthcare (2.04pp, p adj. = 0.031) and immigration (1.98pp, p adj. = 0.038) conditions. There are no such effects in uncompetitive districts. The findings are noteworthy given that only 135,146 subjects (17.1%) were registered in competitive districts; given the statistical power available in uncompetitive districts (n = 654,616), the lack of significant results there is reasonably strong evidence that the ads are not effective in such circumstances. RQ3 is thus answered partially in the affirmative: congressional-level salience moderated the effect of treatment for the abortion rights ads only. Figure 1 depicts covariate-adjusted predicted turnout percentages by treatment group and CD competitiveness with 95% confidence intervals.

Fig. 1
figure 1

Covariate-adjusted predicted turnout percentage based on group assignment and congressional district competitiveness

A subsequent analysis of the moderating effect of county competitiveness shows that within GOP strongholds, the abortion rights ads were marginally effective relative to the control group, generating a 0.98pp increase in turnout (p adj. = 0.070); the ads were also effective relative to the healthcare (1.39pp, p adj. = 0.062) and gun control (1.62pp, p adj. = 0.027) conditions. There were no effects on voters in tossup or Democratic stronghold counties. Results are depicted in Fig. 2 and reported in Table 3. RQ4 can also be answered partially in the affirmative: county competitiveness moderated the effect of the abortion rights ads only.Footnote 15

Fig. 2
figure 2

Covariate-adjusted predicted turnout percentage based on group assignment and county partisanship

A final test for an interaction between voter propensity score and treatment was non-significant; results are reported in Table A3 of the supplement. RQ5 is answered in the negative.

Exploratory Analyses

Based on these findings, I conduct a series of exploratory analyses that were not part of the pre-registered analysis plan. Results are presented in the supplement. Since the only effective ad featured the issue of abortion rights, I examine the potential moderating effects of sex and Democratic support on treatment to determine whether either identifies receptive subjects in competitive CDs and GOP stronghold counties.

In both areas, the effect of the advertisements was concentrated among voters coded as female in the voter file. In competitive CDs, female voters assigned to abortion rights ads demonstrated an increase in predicted turnout percentage of 1.86pp relative to the control group (p adj. = 0.0499); the same comparison was not significant for voters coded as male or unknown, though it was positive (1.42pp, p adj. = 0.181).Footnote 16 Abortion rights ads were also more effective than healthcare ads (3.26pp, p adj. = 0.010) on women in competitive CDs. Results are depicted in Fig. 3 and presented in Table A5. In GOP stronghold counties, the increase among female voters assigned to abortion rights ads was 1.63pp (p adj. = 0.024) relative to the control group; among female voters, the ads also outperformed the healthcare (2.19pp, p adj. = 0.027), immigration (2.47pp, p adj. = 0.011), and gun control (3.03pp, p adj. = 0.002) conditions. There was no effect on male voters in GOP strongholds assigned to the abortion rights ads; they exhibited a non-significant 0.20pp (p adj. = 0.835) increase in turnout relative to the control group. Results are reported in Table A7 in the supplemental materials.

Fig. 3
figure 3

Covariate-adjusted predicted turnout percentage based on group assignment and sex in competitive congressional districts

Given that the messaging takes a progressive stance on the issues, I also considered whether subjects' modeled Democratic support score moderated the effect of treatment; effects were non-significant in both competitive CDs and GOP stronghold counties (Tables A6, A8). This lack of effect may be an artifact of the sample itself, which was already highly likely to vote for Democrats according to their support scores.

Estimating Complier Average Causal Effects

Based on this analysis and the exposure rates calculated in Table 2, it is possible to estimate the treatment effect of abortion rights ads on exposed subjects, or CACE, by dividing the ITT effects by the estimated treatment rate. Table 4 reports the number of subjects assigned to the abortion rights condition overall, within competitive CDs, and within GOP counties. Point estimates consist of the increase in predicted turnout percentage relative to the control group calculated using emmeans. Average (34.9%) and high (38.9%) treatment rates were calculated in Table 2 for the abortion rights ads.
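The CACE calculation described here is a simple rescaling of the ITT estimate. A minimal sketch using the reported 1.66pp ITT effect in competitive CDs and the two treatment rates from Table 2 (point estimates only; the 95% interval reported in the next paragraph comes from scaling the ITT confidence bounds in the same way):

```python
def cace(itt_pp, treatment_rate):
    """Complier average causal effect: the intent-to-treat estimate divided
    by the share of the assigned group actually exposed to the ads."""
    return itt_pp / treatment_rate

# Abortion rights ads in competitive CDs, ITT = 1.66pp, scaled by the
# average (34.9%) and high (38.9%) treatment rates.
print(f"CACE at average treatment rate: {cace(1.66, 0.349):.2f}pp")  # 4.76pp
print(f"CACE at high treatment rate:    {cace(1.66, 0.389):.2f}pp")  # 4.27pp
```

Note the inverse relationship: the higher the assumed treatment rate, the smaller the implied per-complier effect.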

Table 4 Estimates of CACE from abortion rights ads

In competitive CDs, where the abortion rights ads made a significant impact at the level of assignment, the estimated CACE is 95% likely to fall between 3.78 and 5.29pp.Footnote 17 This estimate would put seven weeks of a maximum of six Facebook ad exposures per week—42 total ad impressions—on par with door-to-door canvassing targeting a similar pool of voters and greater than the 1.4–2.9pp effects derived from a single piece of social pressure direct mail (Green & Gerber, 2019).

On the surface, this seems promising: repeated exposure to issue-based Facebook ads offers a way to increase turnout, especially among less-likely voters such as those enrolled in this experiment. However, research tells us that Facebook shows political advertisements to people likely to agree with them (Ali et al., 2019a, b; Thorson et al., 2019). Thus we should view the CACE with caution: it is likely the uppermost limit of treatment effects, derived from Facebook exposing the subjects deemed most receptive to the ads. Showing the ads to more people may well result in attenuation of this effect.

Discussion

Microtargeted, political issue-oriented Facebook advertisements can have an impact on voter turnout, though that effect is heavily conditional on an alignment of message, audience, and electoral context. Results show no main effect of assignment to any of the four ad conditions; while there was a 0.49pp increase in predicted turnout percentage in the abortion rights condition, one cannot rule out the possibility of a Type I error (p = 0.061, p adj. = 0.092). Within competitive CDs, there is a sizeable 1.66pp increase in turnout in the abortion rights condition relative to the control group, upholding prior work showing that electoral salience moderates treatment effectiveness (Arceneaux & Nickerson, 2009; Haenschen & Jennings, 2019; Malhotra et al., 2011). The abortion rights ads also appear to have had a marginally significant effect in GOP counties. In both instances, the effect is concentrated among voters coded as female in the voter file. These findings speak to the power of microtargeting: Facebook's platform enables advertisers to target a specific list of individual voters, and its algorithm seemingly exacerbates that effect by exposing individuals it predicts to be most likely to respond. In this instance, abortion rights ads were an effective way to mobilize women in competitive CDs in the 2018 U.S. Midterm election.

However, in the aggregate—with no consideration of message or context—Facebook ads have no impact on turnout. Given the sample size, this experiment is able to offer very precise treatment estimates with narrow confidence intervals. The estimated impact of assignment to any advertising stream relative to the control group is − 0.04pp (SE = 0.001), which is statistically indistinguishable from zero. This finding echoes prior work showing no effect of Facebook ads (e.g., Collins et al., 2014; Coppock et al., 2020a; Kalla, 2017). Thus, despite the tremendous media attention received by Facebook ads based on their theorized potential to influence the outcome of elections, empirical evidence simply does not match this enthusiasm. One cannot blanket an electorate with cheap Facebook ads and expect any sort of widespread, measurable impact on turnout. Campaigns are likely spending millions of dollars on advertisements that have almost no impact on whether or not their targets vote.

These results also contribute to a greater understanding of the (in)effectiveness of much paid campaign communication. Recent empirical work has re-evaluated television ads and persuasion campaigning generally (Coppock et al., 2020b; Kalla & Broockman, 2018), finding limited effects outside of primary campaigns, ballot measures, and specific pools of voters. A growing theoretical perspective suggests that campaigns are more effective at mobilizing their base voters, and that campaign activity is generally effective only in close elections (Nickerson & Rogers, 2020; Panagopoulos, 2016). This study finds that Facebook ads about abortion rights are effective among low-propensity female voters in competitive CDs, generating a 1.86pp increase in predicted turnout. Assuming the ads mobilized only voters who support candidates favoring abortion rights, they could theoretically generate enough additional votes to affect outcomes in elections that are already very close.

When Do Digital Advertisements Work?

Most studies of digital ads find no effect on turnout, persuasion, favorability, or name recognition (Broockman & Green, 2014; Collins et al., 2014; Coppock et al., 2020a; Kalla, 2017; Shaw et al., 2018). The findings presented here, alone and in concert with other experiments on digital ads that have generated measurable impacts (Haenschen & Jennings, 2019; Hager, 2019; Shaw et al., 2018), begin to define the contours of when online advertisements, particularly those delivered on Facebook, are effective at changing voting behavior: namely, when the message is relevant and the election is competitive and high-salience.

Message content matters, and must be addressed to a receptive audience. In this study, the abortion rights message was effective; it may have been particularly relevant to Texas voters given the state's recent political history. The ads were only effective on voters coded as female, who may have been compelled by a message stating that their bodily autonomy was at risk; there was no significant impact on male voters. Prior work on digital ads likewise shows that only some messages are effective (social norms vs. information in Haenschen & Jennings, 2019; emotional vs. informational appeals in Hager, 2019; plan-making vs. social pressure in Kalla, 2017). The challenge for political advertisers is determining that message ahead of time.

Mobilization tactics are known to be contingent on electoral salience (Arceneaux & Nickerson, 2009; Malhotra et al., 2011), in part because level of interest determines which voters are receptive to such missives. In this study, effects were concentrated in competitive CDs, where the intensity of underlying campaign activity likely activated the low-propensity subject pool. This mirrors findings in other work on mobilization in which digital ads were only effective in competitive districts (Haenschen & Jennings, 2019).

However, ads were also effective on female voters in GOP counties, where there may have been limited on-the-ground campaign activity seeking to mobilize lower-propensity, Democratic-leaning voters. Subjects may not have received much campaign contact outside of these ads. Longitudinal treatment with Facebook ads may thus offer a way to target and mobilize voters whom campaigns cannot otherwise contact due to a lack of resources or logistical barriers. This merits further investigation, particularly as a way to reach low-propensity rural and urban voters whose physical addresses are inaccessible and whose cell phone numbers may not be available in the voter file.

Moving forward, this area of experimentation must also grapple with evidence that exposure to microtargeted Facebook ads is not uniform or random across an intended audience (Ali et al., 2019a, b), making it more difficult to determine if and to what degree these ads work. The best evidence of Facebook ads' effectiveness would need to come from the platform itself. Political advertisers need to call on the platform to open the black box of its A/B testing tools, report group-level results based on actual exposure to ads, and clarify whom its algorithm is actually treating. Otherwise, researchers' inability to know who was exposed to the ads hampers the detection of effects. In the case of the main effects, it is possible that the abortion rights advertisements were effective among the 35% of targets who actually saw them, but the lack of treatment data makes it impossible to distinguish the signal from the noise.


One of the biggest questions in the study of online political advertising is one of exposure: how many ads per day, for what duration, are needed to generate a detectable effect? This study was constrained by its $25,000 budget, which limited how many ad impressions could be purchased and for how many subjects. The need to treat a large subject pool to ensure adequate statistical power is in direct conflict with the desire to maximize subjects' exposure to the ad content itself. In this study, subjects received at most six impressions per week, three each from two different ads. This may not be enough exposure to move the needle on turnout for some voters, especially given estimates that Internet users are exposed to over 2,600 ads per week (Elliott, 2017). Ideally, a placebo-controlled design is needed, in which the control group is assigned a nonpolitical ad, both to isolate the impact of being assigned a political ad and to ensure that the treatments are not simply displacing other political ads subjects would otherwise have seen. Such designs remain cost-prohibitive given the large sample sizes needed to detect effects, and obtaining research funding to funnel into Facebook's billion-dollar advertising behemoth to buy treatment ads, let alone placebo replacements, remains challenging.
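The tension between sample size and exposure can be made concrete with a standard two-proportion power calculation. The sketch below is illustrative only: the 30% baseline turnout rate and the 0.5pp target effect are assumed values chosen for the example, not parameters from this study.

```python
def n_per_arm(p_base, delta, z_alpha=1.96, z_beta=0.84):
    """Approximate subjects needed per arm to detect an absolute
    turnout lift of `delta` at 5% significance with 80% power,
    using the standard normal-approximation formula."""
    p_bar = p_base + delta / 2           # average rate across the two arms
    variance = 2 * p_bar * (1 - p_bar)   # pooled two-sample variance term
    return ((z_alpha + z_beta) ** 2 * variance) / delta ** 2

# With an assumed 30% baseline turnout and a 0.5pp target effect,
# each arm needs a six-figure subject pool before buying a single impression.
print(round(n_per_arm(0.30, 0.005)))
```

Detecting sub-percentage-point turnout effects therefore requires on the order of a hundred thousand subjects per condition, which is why a fixed ad budget forces a trade-off between pool size and per-subject exposure.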

One other limitation in studies of this nature comes from the treatments themselves, which were developed by the partner organization. A post-hoc content analysis explored the tone of the different ad conditions; all four treatment groups were found to have a negative tone, on average. The abortion rights ads were among the most negative, and were viewed as marginally more negative than the immigration ads. However, estimated ad tone for each treatment condition was not predictive of turnout. Future work should consider varying tone systematically within online advertising treatments to see if it has an effect.


Longitudinal exposure to microtargeted, issue-oriented Facebook ads has an impact on voter turnout; however, effects are conditional on the alignment of message, audience, and electoral salience. Importantly, Facebook's own microtargeting tool appears to find such receptive individuals and expose them to the ads. Microtargeting is thus happening on two levels: the advertiser uploads a selected list of voters, and the platform decides which of those individuals to expose. These findings, along with other experimental work on the mobilizing effects of digital ads, offer some insight into the highly conditional nature of ad effectiveness, suggesting that the precise affordances of microtargeting, and its ability to selectively advertise to specific voters, are key to the effectiveness of such campaigns, as well as to our ability to experimentally evaluate them.