Appendix 1: Testing Assumptions/Robustness Tests
Due to the limitations of the Catalist database and the ACS, I do not have reliable voter turnout data for the years prior to 2006, which makes it difficult to test the assumptions of the difference-in-differences setup. However, in this section I present several tests of those assumptions based on the available data. First, I verify that pre-treatment trends in turnout do not predict treatment; second, I run a placebo test to demonstrate that my approach does not find treatment effects where none should exist; and third, I use synthetic matching to address the concern that control units may not be similar enough to treated units.
Checking Pre-treatment Trends
First, we might worry that places that already had steeper growth in Latino turnout were, for some reason, also more likely to receive the SC treatment, such that the effect I observe is not actually driven by immigration enforcement. To test for this possibility, I use the best available data from 2002 and 2006 to check whether pre-treatment turnout trends predict treatment. I construct the 2002 voter turnout data slightly differently from the 2006 and 2010 data: because the ACS did not produce estimates of Latino CVAP prior to 2006, I use CVAP estimates from the 2000 Census and interpolate between the 2000 Census and 2006 ACS figures to produce 2002 estimates.Footnote 24 Further, Catalist began collecting voter files to construct their database in 2006, so their turnout data for prior years may be incomplete: some people may have voted and then been removed from the voter rolls before 2006. Both the numerator and the denominator are thus mismeasured by unknown amounts, so it is not clear in which direction the turnout estimates will be biased.
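The interpolation itself is a simple linear step between the two endpoints. A minimal sketch follows, with hypothetical jurisdiction and column names standing in for the actual Census and ACS extracts:

```python
# Minimal sketch of the 2002 CVAP interpolation. Column names are
# hypothetical; the real inputs are 2000 Census and 2006 ACS Latino
# CVAP counts for each jurisdiction.
import pandas as pd

clusters = pd.DataFrame({
    "jurisdiction": ["A", "B"],
    "cvap_2000": [50_000, 120_000],   # 2000 Census Latino CVAP
    "cvap_2006": [62_000, 150_000],   # 2006 ACS Latino CVAP
})

# Linear interpolation: 2002 is 2/6 of the way from 2000 to 2006.
frac = (2002 - 2000) / (2006 - 2000)
clusters["cvap_2002"] = (
    clusters["cvap_2000"] + frac * (clusters["cvap_2006"] - clusters["cvap_2000"])
)
# 2002 turnout is then the Catalist vote count divided by cvap_2002.
```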
Table 7 presents the results of a regression of the treatment variable on the 2002–2006 change in Latino turnout in each state cluster.Footnote 25 There is no evidence that pre-2006 time trends, at least over the limited period for which data are available, predict treatment.
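A minimal sketch of this check, with hypothetical file and column names, would look as follows:

```python
# Sketch of the Table 7 check: regress the treatment indicator on each
# cluster's 2002-2006 change in Latino turnout. File and column names
# are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cluster_trends.csv")  # hypothetical
df["pre_trend"] = df["turnout_2006"] - df["turnout_2002"]

fit = smf.ols("treated ~ pre_trend", data=df).fit()
print(fit.summary())  # a near-zero, insignificant coefficient supports the design
```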
Table 7 Predicting treatment with prior Latino turnout trends (including all jurisdictions)
Next, I use another dataset to verify the parallel-trends assumption. I use Latino citizen voter turnout rates from the Current Population Survey for elections from 1996 to 2006, and check whether these turnout rates predict treatment (enrollment in the Secure Communities Program). This analysis is shown in Table 8.
I calculate Latino citizen turnout rates for each cluster as follows. I restrict the CPS sample to the jurisdictions included in the dataset used for the analyses above (dropping places in each state that voluntarily enrolled in SC). Then, for each “cluster” (roughly a state, but with self-selected counties dropped), I calculate the percentage of Latino citizens of voting age who report having turned out in the most recent election, using the survey weights provided with the survey. The November CPS supplement asks about the general election that has just taken place, so in some years this is the midterm congressional election and in others the presidential election.
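A minimal sketch of this calculation, assuming a hypothetical CPS extract with columns cluster, voted, and weight, is below:

```python
# Sketch of the cluster-level CPS turnout calculation, assuming a
# hypothetical extract with columns `cluster`, `voted` (1 if the Latino
# citizen respondent reports voting, else 0), and `weight` (the November
# supplement survey weight).
import pandas as pd

cps = pd.read_csv("cps_latino_citizens.csv")  # hypothetical filename

# Weighted share of respondents in each cluster who report voting.
rates = cps.groupby("cluster").apply(
    lambda g: (g["voted"] * g["weight"]).sum() / g["weight"].sum()
)

# Column (1) of Table 8 drops clusters with fewer than 30 respondents.
counts = cps.groupby("cluster").size()
rates_restricted = rates[counts >= 30]
```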
Some clusters contained very few respondents, so the turnout estimates were quite noisy. In Column (1) of Table 8, I have dropped all clusters with fewer than 30 respondents; Column (2) contains all clusters. In both cases, there is no evidence that previous years’ turnout rates predicted treatment, which supports the parallel trends assumption. Figure 4 plots the Latino turnout trends of states with and without treated units.
Table 8 Predicting treatment with prior turnout from CPS
Placebo Test: 2002–2006
Having constructed 2002 Latino turnout estimates for some of the jurisdictions in the main dataset, I can also run a placebo test to check whether there is evidence of a “treatment effect” before the treatment actually took place. Table 9 replicates the main analysis in the paper (the models from columns 4 and 5 of Table 4) for the turnout change from 2002 to 2006 instead of 2006 to 2010. As discussed above, these data cover a limited number of places and likely undercount voters, but they are the best available. I do not find a treatment effect in this placebo period comparable to the one estimated for 2006–2010.
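A minimal sketch of the placebo regression, with hypothetical file and column names and with the covariates from columns 4 and 5 of Table 4 omitted, is below:

```python
# Sketch of the placebo regression: the 2002-2006 turnout change
# regressed on the SC treatment indicator, with errors clustered as in
# the main analysis. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("jurisdictions.csv")  # hypothetical
df["turnout_change"] = df["turnout_2006"] - df["turnout_2002"]

placebo = smf.ols("turnout_change ~ treated", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["cluster"]}
)
print(placebo.summary())  # no significant "effect" expected pre-treatment
```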
Table 9 Placebo test: main analysis replicated on the 2002–2006 turnout change
Synthetic Control
Next, I address concerns about the comparability of treatment and control units, and the possibility of extreme counterfactuals, by using synthetic matching (Abadie et al. 2010). I use this approach to construct a “synthetic control” for each of the treated clusters that is a weighted average of other clusters in the dataset.Footnote 26 I use the available pre-treatment data—the change in Latino voter turnout in each cluster from 2002 to 2006—to create matches that should have similar time trends in voter turnout. This process would be improved by the inclusion of more historical turnout data, but even with limited data it serves as a check on the difference-in-differences results.
I draw from the untreated clusters (that is, states without full pre-election SC enrollment, with any voluntarily-enrolled jurisdictions dropped) to construct matches for each of the treated clusters. For each cluster, I then compare the change in Latino turnout from 2006 to 2010 between the treated and synthetic control unit. The difference between these changes is taken as the treatment effect of Secure Communities enrollment. I take the mean of all treated clusters’ estimates to find an overall estimate of 1.4 percentage points. This is slightly lower than the 2–3 percentage points estimated in the main analysis in Table 4, but is in the same direction and is of comparable magnitude. As shown in Table 10, a mean weighted by the 2006 Latino population of each cluster yields a point estimate of 2.9 percentage points, somewhat larger than the main estimate.Footnote 27
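A minimal sketch of the weight construction for a single treated cluster, assuming the one pre-treatment predictor used here (the 2002–2006 turnout change) and hypothetical variable names, is below:

```python
# Sketch of synthetic-control weights for one treated cluster, following
# Abadie et al. (2010): non-negative donor weights summing to one that
# best reproduce the treated unit's pre-treatment predictor(s).
import numpy as np
from scipy.optimize import minimize

def synth_weights(x_treated: np.ndarray, X_donors: np.ndarray) -> np.ndarray:
    """Donor weights minimizing the pre-treatment prediction error."""
    n = X_donors.shape[0]
    loss = lambda w: np.sum((x_treated - X_donors.T @ w) ** 2)
    res = minimize(
        loss,
        np.full(n, 1.0 / n),               # start from equal weights
        method="SLSQP",
        bounds=[(0.0, 1.0)] * n,           # each weight in [0, 1]
        constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0},
    )
    return res.x

# Hypothetical example with one pre-treatment predictor per cluster:
x_t = np.array([0.03])                     # treated cluster's 2002-2006 change
X_d = np.array([[0.01], [0.05], [0.02]])   # donor clusters' changes
w = synth_weights(x_t, X_d)
# effect = dy_treated - w @ dy_donors      # dy = 2006-2010 turnout change
```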
Table 10 Difference-in-difference estimates, compared to synthetic versions of each cluster
The resulting weights for each synthetic match are available on request, and will be included in the online supplemental information. I have not attempted to quantify the uncertainty around the estimate produced via synthetic matching, as it is not immediately clear how to do so with multiple treated units. The results are fairly similar to the OLS estimates presented in the "Results" section, and so I rely on the better-understood OLS standard errors, as do other papers using this approach as a check (Hall 2013).
Appendix 2: Analysis of Record Submissions
One mechanism discussed above was direct experience with deportation: citizens might observe people they know being deported, and change their political behavior in response. This is unlikely to explain my results, as I focus on places that enrolled in the program only a few months before the 2010 election. However, I use available ICE data to ensure that program implementation in those few months does not explain the turnout results presented here.
Relatively few people would have been deported due to the Secure Communities program at the time of the 2010 election, but there is some variation in the number of people whose fingerprints were submitted to ICE to check their immigration status. In this section, I explore whether places with different numbers of fingerprint submissions had different political responses.
To examine whether program implementation affected changes in turnout, I split the treated units into those with high (above-median) and low (below-median) numbers of fingerprint submissions to ICE, and estimate the SC treatment effect in each subset. ICE provided data on submissions from the time of program activation until August 2012, so I adjusted these counts to reflect the amount of time the program had actually been in effect by the 2010 election: assuming that submissions were uniform across the reporting period, I multiplied the total number of submissions by the fraction of activated time that fell before the 2010 election.Footnote 28 I then divided the treated portion of the sample into units that had sent more than 74 records (the sample median) to ICE prior to the 2010 election and those that had submitted fewer. These record submissions represent an upper bound on the number of people who might have faced deportation due to the Secure Communities program in a jurisdiction: not everyone whose fingerprints were submitted would actually have been deported, and very few people were likely deported before the 2010 election.
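A minimal sketch of this proration and median split, with hypothetical file and column names, is below:

```python
# Sketch of the submission proration and median split. File and column
# names are hypothetical; `submissions_total` covers activation through
# August 2012, per the ICE data described above.
import pandas as pd

df = pd.read_csv("ice_submissions.csv", parse_dates=["activation_date"])
election = pd.Timestamp("2010-11-02")
reporting_end = pd.Timestamp("2012-08-31")

# Assume submissions were uniform over the reporting window, and keep
# the fraction of activated time that fell before the 2010 election.
frac = (election - df["activation_date"]) / (reporting_end - df["activation_date"])
df["submissions_pre2010"] = df["submissions_total"] * frac

median = df["submissions_pre2010"].median()  # 74 in the actual sample
df["high_submission"] = df["submissions_pre2010"] > median
```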
Table 11 Treatment effects by number of fingerprint submissions (robust clustered SEs)
Table 11 shows the results of this analysis. They support the assertion that personal experiences with deportation do not drive the turnout effects reported in the main paper. If individuals were turning out to vote because someone they knew personally was in danger of deportation, we would expect more record submissions to be associated with more votes, and thus a larger turnout effect. This is decidedly not the case: as seen in Table 11, higher-submission communities do not show a larger treatment effect than lower-submission communities.
It should be noted that this is an observational analysis: places with many submissions may differ from places with few submissions in many other ways that could affect both turnout and the way the SC program was implemented and perceived. One such concern is population, but the same pattern of results appears when the analysis is performed with population-adjusted counts of record submissions (submissions per 1,000 residents, or per 1,000 Latino citizens).
Appendix 3: Additional CCES Analysis
See Tables 12 and 13.
Table 12 Respondent-reported campaign/activist contact, 2006 (Latinos)
Table 13 Respondent-reported campaign/activist contact, 2010 (non-Latinos)