
Measuring Exposure to Political Advertising in Surveys

  • Original Paper
  • Published in Political Behavior

Abstract

Research on the influence of negative political advertising in America is characterized by fundamentally conflicting findings. In recent years, however, survey research using estimates of exposure based on a combination of self-reported television viewing habits and Campaign Media Analysis Group data (a database of all advertisements broadcast on national and cable television in the top 75 media markets) has argued that exposure to negative political advertising boosts interest in the campaign and turnout. This paper examines the measurement properties of self-reports of television viewing. I argue that the errors from common survey formats may be both nonrandom and larger than previously acknowledged. The nonrandom error is due to the tendency of politically knowledgeable individuals to be more sensitive to question format. Thus the inferences drawn about the relationship between political knowledge, exposure to negative ads, and political behavior are also sensitive to the measures used to estimate exposure. I demonstrate, however, that one commonly used measure of exposure—the log of estimated exposure—is not only more theoretically defensible but also alleviates some of the more serious problems due to measurement error.

Notes

  1. More recent CMAG data cover the top 100 media markets.

  2. This calculation is based on self-reported viewing habits at particular times of day, or “dayparts.” As discussed below, estimates of exposure using CMAG data have also based television viewing habits on how often individuals claim to watch particular shows. All these methods, however, are based on the same principle of multiplying self-reported viewing by ads aired (see Ridout et al., 2004); a schematic sketch of this calculation appears after these notes.

  3. Of course, she may still be indirectly affected by advertising: through media coverage, or if the candidates’ ads become an issue of discussion in the campaign. However, that is beyond what can be explored here and beyond the range of most work using CMAG data.

  4. In fact, the calculation is slightly more complicated because the ads on news programs are the total across the three networks. The estimate is therefore divided by three.

  5. The shows were “Jeopardy,” “Wheel of Fortune,” “morning news programs such as ‘Today,’ ‘Good Morning America,’ or ‘The Early Show,’” “daytime television talk shows such as ‘Oprah Winfrey,’ ‘Rosie O’Donnell,’ or ‘Jerry Springer,’” “network news programs in the late afternoon or early evening such as ‘World News Tonight’ on ABC, ‘NBC Nightly News,’ ‘The CBS Evening News,’ or some other network news,” and “local TV news shows in the late afternoon or early evening, such as ‘Eyewitness News’ or ‘Action News.’”

  6. The ads within shows method is similar to Ridout et al.’s (2004) “five program measure.”

  7. The daypart questions were phrased identically to the ANES 1998 pilot (see Appendix). In 2000 the ANES asked questions about specific programs. In 2002 and 2004 the ANES asked only about news programs. For the questions about programs, I used the phrasing of the 2000 ANES.

  8. This echoes the diary study (i.e., the shows method has the lowest correlation), but I cannot calculate the correlation with the daypart questions because the daypart and shows questions were asked of different halves of the sample.

  9. I excluded 12 respondents who said, in answer to the typical weekday, weekday evening, or weekend day questions, that they watched more than 10 h a day, because they were all coded as ‘11’ in the ANES survey rather than by the exact number of hours. Because the hours they watch may exceed 11, the discrepancy with the daypart questions could be exaggerated. This is not a conventional case of censoring for which tobit estimation would be appropriate: the censoring affects a component of the dependent variable (the discrepancy), preventing us from knowing whether the two methods of self-report give very similar answers for these 12 respondents, rather than censoring the dependent variable itself at its upper or lower bound.

  10. Indeed, replacing political knowledge with level of education in Table 2 shows the same robust, positive relationship. With the inclusion of both political knowledge and education in the same model, however, the coefficients for each are reduced and political knowledge becomes statistically insignificant; they share variance because educated individuals tend to be more politically informed. Each indicates that political sophistication is associated with sensitivity to question wording. In the remainder of the paper I continue to focus on political knowledge because it is the more common indicator of political sophistication in this literature (e.g., Freedman et al., 2004; Kahn & Kenney, 1999).

  11. The CMAG data for 1998 do not include information about gubernatorial advertising. However, Stevens (2005) argues that because the gubernatorial and Senate elections in California, Georgia, and Illinois shared similar characteristics, such as competitiveness, and because candidates tend to air ads at the same time, it is reasonable to assume that exposure to advertising in the gubernatorial race was highly correlated with exposure to advertising in the Senate race. In Tables 3 and 4 I include one dependent variable that is specific to the gubernatorial races in these states, the number of issues that respondents recognize the candidates have talked about: if exposure to negative advertising increases awareness of issues, and individuals who saw a lot of Senate ads also saw a lot of gubernatorial ads, we would expect exposure to negative advertising to have a positive relationship with recognition of issues.

  12. Total negative advertising in a television market is arguably a better measure of campaign intensity than total advertising because we tend to see more advertising, and more negative advertising, in competitive races. I also estimated all the models in Tables 3 and 4 with total advertising as a proxy for campaign intensity. It made no difference to the results.

  13. The relatively small sample sizes in Table 3, for an ANES survey, arise because, first, the daypart questions were asked of a half sample and, second, CMAG data cover only the top 75 television markets, containing about three-quarters of the U.S. population, meaning there is no information about advertising where many of the respondents lived (which is why there are roughly one-third fewer respondents in Table 3 than in Table 2).

  14. On-line models of attitude formation and updating also imply that the capacity of new information to alter impressions diminishes.

  15. Using the log of their estimates is likely the reason why Ridout et al. (2004) find high correlations between their three estimates of exposure using CMAG data. It is not, as they imply, because daypart and show methods provide essentially the same information about television viewing habits but because the correlations are between logged estimates of exposure, meaning the variation due to discrepancies has been reduced.

  16. The conditional effects of exposure for high sophisticates, the combination of main effect and interaction, are statistically insignificant.

  17. See www.studentmonitor.com

  18. There would not have been concerns about future classes with me because I was in the throes of leaving the university.
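
To make concrete the arithmetic described in Notes 2, 4, and 15, the following is a minimal Python sketch. The variable names and numbers are hypothetical illustrations, not the paper’s data or code, and the exact weighting schemes used in the literature differ in detail.

    import math

    # Hypothetical toy inputs (not the paper's data): ads aired in a respondent's
    # media market during each daypart, and the respondent's self-reported
    # propensity to watch television during that daypart (rescaled 0-1).
    ads_aired = {"morning": 120, "daytime": 300, "network_news": 450, "prime_time": 200}
    self_report = {"morning": 0.25, "daytime": 0.0, "network_news": 0.75, "prime_time": 0.5}

    # Note 4: ads aired during network news are totals across three networks,
    # so that component is divided by three before being multiplied.
    ads_aired["network_news"] /= 3

    # Note 2: estimated exposure multiplies self-reported viewing by ads aired,
    # summed over dayparts (or over shows in the alternative format).
    exposure = sum(ads_aired[d] * self_report[d] for d in ads_aired)

    # Note 15 and the paper's argument: logging the estimate compresses the large
    # discrepancies that different self-report formats produce at the high end.
    logged_exposure = math.log(exposure + 1)  # +1 avoids log(0) for unexposed respondents

    print(round(exposure, 1), round(logged_exposure, 2))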

References

  • Allen, C. (1965). Photographing the TV audience. Journal of Advertising Research, 5, 2–8.

  • Ansolabehere, S., Iyengar, S., & Simon, A. (1999). Replicating experiments using aggregate and survey data: The case of negative advertising and turnout. American Political Science Review, 93, 901–909.

  • Bartels, L. (1996). Entertainment television items on 1995 pilot study. Report to the National Election Studies Board of Overseers.

  • Berry, W., & Feldman, S. (1985). Multiple regression in practice. Newbury Park: Sage.

  • Brooks, D. (2006). The resilient voter: Moving toward closure in the debate over negative campaigning and turnout. Journal of Politics, 68, 684–697.

  • Cacioppo, J., & Petty, R. (1989). Effects of message repetition and position on argument processing, recall, and persuasion. Journal of Personality and Social Psychology, 107, 3–12.

  • Chang, L., & Krosnick, J. (2003). Measuring the frequency of regular behaviors: Comparing the ‘typical week’ to the ‘past week.’ Sociological Methodology, 33, 55–80.

  • Clinton, J., & Lapinski, J. (2004). ‘Targeted’ advertising and voter turnout: An experimental study of the 2000 presidential election. Journal of Politics, 66, 69–96.

  • Finkel, S., & Geer, J. (1998). A spot check: Casting doubt on the demobilizing effect of attack advertising. American Journal of Political Science, 42, 573–595.

  • Freedman, P., Franz, M., & Goldstein, K. (2004). Campaign advertising and democratic citizenship. American Journal of Political Science, 48, 723–741.

  • Freedman, P., & Goldstein, K. (1999). Measuring media exposure and the effects of negative ads. American Journal of Political Science, 43, 1189–1208.

  • Freedman, P., Goldstein, K., & Granato, J. (2000). Learning, expectations, and the effect of political advertising. Paper presented at the annual meeting of the Midwest Political Science Association, Chicago.

  • Geer, J. (2006). In defense of negativity: Attack ads in presidential campaigns. Chicago: University of Chicago Press.

  • Goldstein, K., & Freedman, P. (2002a). Campaign advertising and voter turnout: New evidence for a stimulation effect. Journal of Politics, 64, 721–740.

  • Goldstein, K., & Freedman, P. (2002b). Lessons learned: Campaign advertising in the 2000 elections. Political Communication, 19, 5–28.

  • Holbrook, A., Krosnick, J., Visser, P., Gardner, W., & Cacioppo, J. (2001). Attitudes toward presidential candidates and political parties: Initial optimism, inertial first impressions, and a focus on flaws. American Journal of Political Science, 45, 930–950.

  • Kahn, K. F., & Kenney, P. (1999). Do negative campaigns mobilize or suppress turnout? Clarifying the relationship between negativity and participation. American Political Science Review, 93, 877–890.

  • Kahn, K. F., & Kenney, P. (2004). No holds barred: Negativity in U.S. Senate campaigns. Upper Saddle River: Prentice Hall.

  • Kan, M. Y., & Gershuny, J. (2006). Infusing time diary evidence into panel data: An exercise in calibrating time-use estimates for the BHPS. ISER Working Paper 2006-19. Colchester: University of Essex.

  • Lau, R., & Pomper, G. (2001). Effects of negative campaigning on turnout in U.S. Senate elections, 1988–1998. Journal of Politics, 63, 804–819.

  • Lau, R., Sigelman, L., Heldman, C., & Babbitt, P. (1999). The effects of negative political advertisements: A meta-analytic assessment. American Political Science Review, 93, 851–875.

  • Martin, P. (2004). Inside the black box of negative campaign effects: Three reasons why negative campaigns mobilize. Political Psychology, 25, 545–562.

  • Patterson, T., & McClure, R. (1976). Political advertising: Voter reaction to televised political commercials. Princeton: Citizen’s Research Foundation.

  • Price, V., & Zaller, J. (1993). Who gets the news? Alternative measures of news reception and their implications for research. Public Opinion Quarterly, 57, 133–164.

  • Ridout, T., Shah, D., Goldstein, K., & Franz, M. (2004). Evaluating measures of campaign advertising exposure on political learning. Political Behavior, 26, 201–225.

  • Robinson, J., & Godbey, G. (1997). Time for life: The surprising ways Americans use their time. University Park: Pennsylvania State University Press.

  • Stevens, D. (2005). Separate and unequal effects: Information, political sophistication and negative advertising in American elections. Political Research Quarterly, 58, 413–426.

  • Tourangeau, R., Rips, L. J., & Rasinski, K. (2000). The psychology of survey response. Cambridge: Cambridge University Press.

  • Wattenberg, M., & Brians, C. (1999). Negative campaign advertising: Demobilizer or mobilizer? American Political Science Review, 93, 891–899.

  • West, D. (1994). Political advertising and news coverage in the 1992 California U.S. Senate campaigns. Journal of Politics, 56, 1053–1075.


Acknowledgements

Thanks to Barbara Allen, Andrew Seligsohn, and the editors for helpful comments and suggestions.

Author information

Corresponding author

Correspondence to Daniel Stevens.

Appendix

Coding of Variables

Daypart Questions. Question Wording: Thinking about this past week, about how many hours did you personally watch television on a typical weekday morning/afternoon, from [6:00 to 10:00 AM/10:00 AM to 4:00 PM/4:00 PM to 8:00 PM/8:00 PM to 11:00 PM/11:00 PM to 1:00 AM]? Thinking about this past weekend, about how many hours did you personally watch television from 6:00 AM to 7:00 PM? Coding: The total number of weekday hours (multiplied by 5) was combined with the total number of weekend hours to estimate the total number of hours of TV watched per week.

Typical Week Questions (from ANES 1998 Pilot). Question Wording: On a typical weekday, about how many hours of television do you watch during the morning and afternoon? About how many hours of television do you watch on a typical weekday evening? On a typical weekend day, about how many hours of television do you watch during the morning and afternoon? Coding: The total number of weekday hours (multiplied by 5) was combined with the total number of weekend day hours (multiplied by 2).
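
As an illustration of the two weekly-hours totals above, here is a minimal Python sketch; the function names and example hours are hypothetical, not from the ANES data.

    def weekly_hours_daypart(weekday_daypart_hours, weekend_hours):
        # Daypart format: sum the weekday dayparts, multiply by 5 weekdays,
        # then add the hours reported for the past weekend.
        return sum(weekday_daypart_hours) * 5 + weekend_hours

    def weekly_hours_typical_week(weekday_daytime, weekday_evening, weekend_day):
        # Typical-week format: (weekday daytime + weekday evening) hours x 5 weekdays,
        # plus weekend-day hours x 2 weekend days.
        return (weekday_daytime + weekday_evening) * 5 + weekend_day * 2

    # Hypothetical respondent
    print(weekly_hours_daypart([1, 0, 2, 2, 1], weekend_hours=4))                          # 34
    print(weekly_hours_typical_week(weekday_daytime=2, weekday_evening=3, weekend_day=5))  # 35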

Show Questions (ANES 1998 Pilot). Question Wording: How many days/times in the past week have you watched [The Today Show/The Rosie O’Donnell Show/daytime soap operas like General Hospital or Days of Our Lives/Jeopardy or Wheel of Fortune/a sports event/local news]? Coding: The sum of all six genres (each genre was rescaled from zero to one) divided by six.
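
A minimal sketch of this index, assuming seven days as the maximum for each genre; the function name and values are hypothetical.

    def shows_index(days_watched_per_genre, max_days=7):
        # Rescale each of the six genres to the 0-1 range, then average.
        rescaled = [d / max_days for d in days_watched_per_genre]
        return sum(rescaled) / len(rescaled)

    # Hypothetical respondent: Today Show 3 days, Rosie 0, soaps 0,
    # Jeopardy/Wheel 2, sports 1, local news 5
    print(round(shows_index([3, 0, 0, 2, 1, 5]), 3))  # 0.262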

Show Questions (Experiment). Question Wording: How many times in a typical week do you watch [Jeopardy/Wheel of Fortune/morning news programs such as Today, Good Morning America, or The Early Show/daytime television shows such as Oprah Winfrey or Jerry Springer/national network news/local TV news shows, either in the late afternoon or early evening]?

Efficacy. Question Wording: Please tell me how much you agree or disagree with these statements ... agree strongly, agree somewhat, neither agree nor disagree, disagree somewhat, disagree strongly, don’t know? Public officials don’t care what people like me think; Sometimes politics seems so complicated that a person like me can’t really understand what’s going on; People like me don’t have any say about what the government does. Coding: The average response on the 1 to 5 scale.

Number of Days in the Past Week Talked About Politics. Question Wording: How many days in the past week did you talk about politics with family or friends?

Number of Issues Recognized that Candidates Have Talked About. Question Wording: For each issue we would like to know if you think either one of the candidates, both, or neither is talking about these issues (private school vouchers, abortion, gun-related crimes, campaign contributions from PACs, protecting the quality of the air and water, improving discipline in schools). Coding: Total number of issues each candidate is talking about.

Intention to Vote. Question Wording: (Half sample 1) So far as you know, do you expect to vote in the elections this coming November? Would you say that you are definitely going to vote, probably going to vote, or are you just leaning towards voting? (Half sample 2) Please rate the probability you will vote in the elections this coming November (on a 0 to 100 scale). Coding (Half sample 1): Not going to vote = 0, leaning = 1, probably = 2, definitely = 3. Coding (Half sample 2): 0–19 = 0, 20–50 = 1, 51–80 = 2, 81–100 = 3.
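
A minimal sketch of how the two half-sample responses map onto the common 0–3 scale; the function names and response labels are hypothetical.

    def intention_half1(response):
        # Half sample 1: verbal response mapped to the 0-3 scale.
        return {"not going to vote": 0, "leaning": 1, "probably": 2, "definitely": 3}[response]

    def intention_half2(probability):
        # Half sample 2: collapse a 0-100 probability onto the same 0-3 scale.
        if probability <= 19:
            return 0
        if probability <= 50:
            return 1
        if probability <= 80:
            return 2
        return 3

    print(intention_half1("probably"), intention_half2(85))  # 2 3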

Contacted by a Party/Candidate. Question Wording: Thus far in the campaign, have you received any mail from a candidate or political party about the election? How about door-to-door campaigning? Thus far in the campaign, have any candidates or party workers made any phone calls to you about the election? Coding: 1 for each contact for a range of 0 to 3 (mean = .8).

Party Identification. Question Wording: Generally speaking, do you consider yourself to be a Republican, a Democrat, an Independent, or what? [If Republican or Democrat] Would you call yourself a strong [Republican or Democrat] or a not very strong [Republican or Democrat]? [If Independent] Do you think of yourself as closer to the Republican or Democratic party? Coding: Strong identifiers with either party were coded as 3, those saying they considered themselves a not very strong Republican or Democrat as 2, those claiming to be Independent but closer to one of the parties as 1, and those Independent and closer to neither party, or Other as 0.
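
A minimal sketch of folding the branching party identification items into the 0–3 strength scale; the input labels are hypothetical stand-ins for the survey responses.

    def partisan_strength(initial, follow_up=None):
        # Partisans: "strong" = 3, "not very strong" = 2.
        if initial in ("Republican", "Democrat"):
            return 3 if follow_up == "strong" else 2
        # Independents leaning toward a party = 1; pure Independents and Others = 0.
        if initial == "Independent" and follow_up in ("closer to Republican", "closer to Democrat"):
            return 1
        return 0

    print(partisan_strength("Democrat", "strong"),          # 3
          partisan_strength("Independent", "neither"))      # 0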

Political Knowledge. Question Wording: Who has the final responsibility to decide if a law is constitutional or not... is it the President, Congress, or the Supreme Court? Whose responsibility is it to nominate judges to the Federal Courts... the President, Congress, or the Supreme Court? Do you happen to know which party has the most members in the House of Representatives in Washington? Do you happen to know which party has the most members in the U.S. Senate? Coding: Each correct answer was coded 1, and answers to the four questions were combined to create a 0–4 scale.

Education. Question Wording: What is the highest grade of school or year of college you have completed? Did you get a high school diploma or pass a high school equivalency test (GED)? What is the highest degree that you have earned? Coding: 0 for 12 years or less and no high school diploma, 1 for 12 years or less with high school diploma or GED, 2 for 13 or more years.

The Validity of the Diary Study

A student sample

A frequent objection to student samples is that college students are not “real” people. Indeed, Chang and Krosnick’s (2003) research suggests that, as relatively educated individuals, students might be more sensitive to question wording about television viewing habits. However, there is no reason to believe that differences in recall across question formats should differ between student and adult samples. Moreover, sampling educated students who had been keeping diaries for four weeks and were therefore atypically attentive to their viewing habits should, if anything, lessen the discrepancies between the diaries and surveys.

Student subjects may alter their television viewing habits to impress an instructor, or simply lie about them to indicate watching less television or more serious programs

The initial instructions students were given strove to limit false reporting by stressing that they should not change their habits, that they would only be noting the times they watched television, not the programs they watched (with the exception of news in the second study), and that the instructor would form no judgments on the basis of how much or when they watched television. Empirically, the results do not suggest social desirability biases in student diary entries. According to Student Monitor, for example, college students watch an average of 11 h of television a week (see Note 17). The average amount of television subjects watched per week over the four weeks, according to their diaries, was 10.4 h, ranging from 9.6 h in Week 3 to 11.0 h in Week 4. The average number of times their diaries said they watched national and/or local news was .8 times a week each (i.e., less than once a week), which would not impress many instructors. Finally, I asked members of the Spring 2005 class, after they had received credit for maintaining the diaries and after their course credit had been awarded, to let me know whether or not they had kept the diaries accurately (see Note 18). Roughly 50 percent of the class responded and, without exception, said that their entries had been accurate; some subjects even went to some length to describe the methods by which they had ensured accuracy. I compared the discrepancies between diaries and surveys for this subsample of avowedly accurate diary keepers to the rest of the class. One might think that this subsample would show smaller discrepancies, but there was no statistically significant difference in the size of the discrepancies; in fact, if anything they were larger for those subjects who testified to the accuracy of their diaries.

In a four week period subjects may have grown increasingly weary of keeping the diary, implying growing rather than constant inaccuracy

Again, the consistent reminders subjects received were intended to guard against this, but it is a possibility that can also be tested empirically. If students were increasingly inaccurate in their diary entries, the correlation between the typical viewing habits they reported in the surveys and their diary entries should be stronger for the earlier weeks than for the later weeks. However, the correlations were very consistent: .57, .59, .57, and .60 in Weeks 1 through 4, respectively.


Cite this article

Stevens, D. Measuring Exposure to Political Advertising in Surveys. Polit Behav 30, 47–72 (2008). https://doi.org/10.1007/s11109-007-9035-8
