
Issue Publics, Campaigns, and Political Knowledge

Original Paper · Political Behavior

Abstract

Building on the growing body of research on campaign learning, this paper considers the way that learning about policy issues depends on the nature of the issue and its relevance for the individual citizen. Specifically, the analysis finds that seniors learned much more than non-seniors about candidate positions on an emerging Social Security issue that was heavily emphasized in the 2000 campaign, but not when the same issue was more familiar and largely ignored by the candidates and press in the 2004 campaign. Yet, even without additional learning or campaign emphasis, seniors still knew more than non-seniors in the later contest. These results suggest that once party positions become familiar to them, issue publics will hold their information advantage across future elections without dependence on further campaign emphasis.

Notes

  1. Beyond the two expectations I test here, it may be possible to derive additional expectations. One is that the knowledge gap between issue publics and the mass public will begin to shrink at some point during a topic’s tenure on the political agenda if information remains available. Unfortunately, beyond the suggestion that such a result could require sustained information beyond what is typically observed, the theory offers little guidance about exactly how much information is needed or for how long (Genova and Greenberg 1979).

  2. Issue publics may also be more likely to know information at time t not only because they were more likely to have acquired it at time t − 1, but also because they are more likely to recall the information they learned. Iyengar (1990, p. 162) finds that those who are selectively attentive to information about specific issues are “more able to remember news accounts of these issues about which they are already relatively well informed”.

  3. Television campaign ad data are from Goldstein et al. (2002) and Goldstein and Rivlin (2007).

  4. Both Bush and Gore mentioned prescription drug coverage for seniors repeatedly throughout the 2000 campaign in their speeches and ads. However, neither candidate mentioned the issue of re-importation.

  5. For further details about the NAES see Johnston et al. (2004) and Brady and Johnston (2006).

  6. The set of respondents randomly assigned to an interview day and the sample of interviews on that day are not synonymous. The survey design randomly assigns individuals to a ‘replicate’, the day of the first attempted contact, but not all respondents answer the survey on that day. About 60 % of respondents assigned to an interview date answer the survey within 2 days of this date; about 90 % do so within a week. Because late responders may be systematically different from initial responders, it would be problematic to compare only respondents from the same replicate who answer the survey on different days. However, that is less of an issue here. Because late responders were randomly assigned a date of initial contact, their actual interview dates are effectively randomly distributed throughout the period of analysis (Brady and Johnston 2006).

  7. Results are statistically and substantively similar if all interviews that contain the knowledge questions are included (i.e. 19 May through 27 November 2000, and 19 April through 16 November 2004).

  8. Most of the analyses of learning about Social Security positions in 2000 I present here rely on two separate questions, one asking about Bush’s position and one asking about Gore’s position. This battery was asked over a longer period of time than the alternative item (posed to a separate subset of respondents) asking respondents to simultaneously identify both candidates’ positions within a single question. I also use this approach for analysis of other policy items from the 2000 survey that were asked in this way. The exception is the model combining 2000 and 2004 responses to Social Security questions, which uses the single item from 2000 because it more closely matches the 2004 question.

  9. This question was asked from 19 April to 7 July and again from 11 August through 3 November. Because the first series falls almost entirely outside the timeframe of this analysis, I use only the second series. Results from an analysis of the first series are similar to the results presented below for the second series. The 2004 NAES included two additional questions about candidate positions on the Medicare prescription drug benefit. From 5 October through 16 November, respondents were asked which candidate “favors allowing the federal government to negotiate with drug companies for lower prescription drug prices for senior citizens.” From 19 April through 9 August, respondents were asked which candidate “favors the Medicare prescription drug law that was recently enacted.” Because of the timing of these interviews, I exclude both items. However, neither item revealed greater learning among the issue public.

  10. This is just one among many possible definitions of campaign learning. For example, learning could also describe reduction in the uncertainty surrounding voters’ perceptions even as accuracy remains unchanged (Alvarez 1998; Peterson 2009). Because my substantive concern follows from the literature about whether voters have correct policy knowledge to evaluate candidates and government, I focus on accuracy.

  11. A senior indicator may conflate the effect of age with membership in an issue public. I replicated the statistical models reported below with the addition of age (as a linear term and as a quadratic term) alongside the senior indicator. Age is positively associated with knowledge of candidate positions on these issues, but the relationship between senior status and knowledge persists—indicating that the senior indicator taps something beyond a simple age effect. But even if there is an age-based issue public effect on knowledge about Social Security or Medicare over and above the generic impact of age, another complication is identifying the appropriate age cutoff for measuring the issue public. For example, an analysis using a threshold of 50 rather than 65 (the age of eligibility for receiving benefits through these programs) also reveals a positive association between the demographic group and knowledge of Social Security positions—albeit reduced in magnitude by about a third.

  12. The responses likely contain measurement error as some respondents randomly guess correctly rather than reporting that they do not know. The fact that some respondents randomly guess does not threaten the conclusions about changes in knowledge if the share of respondents who randomly guess remains constant over time. Furthermore, if the share of respondents who randomly guess declines over time (as the literature on uncertainty suggests) then the test I use here would be biased against finding the patterns of learning I demonstrate. This is because a greater share of responses coded as accurate early in the campaign would be due to random guessing, but no pattern of learning would be detectable for those who moved from accidental accuracy to an informed response later in the campaign. Therefore, even without correcting for random guessing, I provide a conservative test for learning as long as the share of random guessers does not increase over the course of the campaign.
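
     The logic of this conservative test can be put in a minimal accounting sketch (an illustration that assumes respondents either know the answer, guess uniformly at random across the response options, or report that they do not know). Writing k for the share who know the position, g for the share of the remaining 1 − k who guess, and c for the number of response options,

     \Pr(\text{correct}) \;=\; k \;+\; (1 - k)\, g \, \frac{1}{c} .

     If g declines over the campaign while k rises, the guessing term inflates measured accuracy early but not late, so the observed increase in correct responses understates the true growth in k.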

  13. Political knowledge is measured with a five-point scale indicating the interviewer’s subjective evaluation of the respondent. These subjective measures have been shown to perform as well as scales constructed from direct knowledge tests (Zaller 1992, p. 338). This is the only measure of general political knowledge included in both NAES studies throughout the entire period analyzed here. For short intervals during 2004 the NAES also included a standard five-question quiz of facts about government. The subjective and objective measures are relatively well associated, with a Kendall’s τ-b coefficient of 0.57.

  14. To allow for possible ceiling effects as the share of accurate perceptions approaches the full sample, the models in Table 1 were re-specified using the log of interview day. Substantive results are the same. I also estimated separate regression models on weekly samples, thus allowing the relationship between senior status and correct perceptions to vary over time. The overall timing of changes in information shown in the unconditioned differences in Fig. 2 is evident in these models as well. Seniors extend their advantage over the summer months, but differences stabilize over the fall.

  15. Fixed effects are included to ensure that the comparison between seniors and non-seniors does not simply proxy for a comparison between battleground and non-battleground residents. If seniors are more likely to live in areas that received substantial campaigning on the Social Security issue, then differences between them and non-seniors would at least partly reflect differences in their information environments. Of course, a geographic-based difference does not necessarily contradict the issue public hypothesis. It could also mean that seniors are targeted with Social Security and Medicare campaigning, supporting the theory of latent issue publics. The NAES data are insufficient for resolving this question of active versus latent issue publics, but they do speak to the question about whether the difference between seniors and non-seniors is simply a product of geography. The media market fixed effects essentially compare trends in the difference between seniors and non-seniors within markets. Results are robust to both specifications (as well as to exclusion of any geographic fixed effects).
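
     A rough sketch of the kind of specification this note and note 14 describe (the exact covariate set and functional form of the Table 1 models are not reproduced here, so the terms below are illustrative rather than definitive) is

     \Pr(\text{correct}_i) = \operatorname{logit}^{-1}\!\big( \beta_1\, \text{senior}_i + \beta_2\, \text{day}_i + \beta_3\, (\text{senior}_i \times \text{day}_i) + X_i'\gamma + \alpha_{m(i)} \big),

     where \alpha_{m(i)} is a fixed effect for respondent i's media market. The interaction term is what lets the gap between seniors and non-seniors evolve over the campaign, while the fixed effects absorb differences across markets in the information environment.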

  16. One concern is that members of an issue public may be more likely to perceive differences between candidates on their issue (Krosnick 1990). If this is the case, then the accuracy of seniors’ perceptions could have nothing to do with actual knowledge. To check for this possibility I examine seniors’ propensity to see differences between the candidates on another Social Security issue: Which candidate favors the biggest funding increase? If seniors are more likely to perceive polarization on Social Security issues (regardless of actual candidate positions), then there would be a difference between the two groups on this issue as well. For this test I regress an indicator for perceiving a difference between the candidates on the same set of covariates used in the private accounts models. Seniors are actually less likely to perceive differences between the candidates on funding (and become even less likely to do so by the end of the campaign). This means that the results for private accounts are not simply an artifact of a propensity to see differences between the candidates on issues related to Social Security.

  17. As with the 2000 data, I also estimated a separate logit model for each weekly sample from early July through Election Day. The magnitude of the estimated differences between seniors and non-seniors remains stable throughout the period. In most weeks, the probability of accurate perceptions among seniors is about 0.20–0.25 greater than for non-seniors. The only exceptions occur in late August and early September when the weekly differences decline to about 0.15, but even those estimates remain statistically indistinguishable from the estimates from all other weeks.

  18. The 2000 NAES includes a four wave panel as well, but those data include even fewer cases and no waves were administered prior to the convention period.

  19. These figures are based on analysis of the 2004 Campaign Communications Study data on direct mail conducted by the Center for the Study of Elections and Democracy at Brigham Young University. See Hillygus and Shields (2008) for further details.

  20. This result is not a function of union members paying greater attention to politics and therefore learning more about all issues. Union members did not outlearn others on prescription drug re-importation, private accounts for Social Security, tax cuts, the assault weapons ban, or eliminating overseas tax breaks in order to cut taxes for companies that create jobs in the US. Other than union organizing, the only issue on which union members learned more than the rest of the public was providing government health insurance for all children and workers.

  21. GRPs provide a measure of audience reach. A value of 100 represents a television ad buy that would be seen once, on average, by everyone in the media market. In 2000, values of total GRPs for presidential advertising range from 0 to 76,692 (in Madison, WI). A value of zero indicates that no purchases were made in that market, and, therefore, that no one there saw any television campaign ads. The highest value means that, on average, residents in the Madison media market saw over 760 ads over the course of the campaign. These GRP data are from Shaw (2006).
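
     As a check on the arithmetic behind that figure:

     76{,}692 \text{ GRPs} \,/\, 100 \;\approx\; 767 \text{ average exposures per resident,}

     which is the basis for the statement that the typical Madison-market resident saw over 760 presidential ads across the campaign.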

  22. Battleground classifications are from Shaw (2006). Battleground states in 2000 are Florida, Arkansas, Iowa, Maine, Michigan, Missouri, New Hampshire, New Mexico, Oregon, Pennsylvania, Tennessee, Washington, and Wisconsin. Leaning states are Arizona, Kentucky, Louisiana, Nevada, Ohio, West Virginia, Minnesota, Illinois, and California.

  23. These estimated probabilities are based upon regression analyses structurally similar to the one presented in the first column of Table 1 but conducted separately within levels of general political knowledge. Because only 4 % of the sample was graded in the lowest category of general political knowledge, the bottom two categories (D and F) are combined.

References

  • Althaus, S. (2003). Collective preferences in democratic politics: Opinion surveys and the will of the people. New York: Cambridge University Press.

  • Alvarez, R. M. (1998). Information and elections. Ann Arbor, MI: University of Michigan Press.

  • Arnold, D. R., Graetz, M. J., & Munnell, A. H. (2000). Framing the social security debate: Values, politics, and economics. Washington, DC: National Academy of Social Insurance.

  • Brady, H. E., & Johnston, R. (2006). The rolling cross-section and causal attribution. In H. E. Brady & R. Johnston (Eds.), Capturing campaign effects. Ann Arbor, MI: University of Michigan Press.

  • Campbell, A. (2003). How policies make citizens: Senior political activism and the American welfare state. Princeton, NJ: Princeton University Press.

  • Campbell, J. E. (2008). The American campaign: U.S. presidential campaigns and the national vote (2nd ed.). College Station, TX: Texas A&M Press.

  • Converse, P. E. (1964). The nature of belief systems in mass publics. In D. E. Apter (Ed.), Ideology and discontent. New York: Free Press.

  • Converse, P. E. (1990). Popular representation and the distribution of information. In J. A. Ferejohn & J. H. Kuklinski (Eds.), Information and democratic processes. Urbana, IL: University of Illinois Press.

  • Dahl, R. A. (1961). Who governs? Democracy and power in an American city. New Haven, CT: Yale University Press.

  • Dahl, R. A. (1989). Democracy and its critics. New Haven, CT: Yale University Press.

  • Dalton, R. J. (2002). Citizen politics: Public opinion and political parties in advanced industrial democracies (3rd ed.). New York: Chatham House Publishers.

  • Dao, J. (2004, April 16). N.R.A. opens all-out drive for Bush and its views. The New York Times.

  • Delli Carpini, M. X., & Keeter, S. (1996). What Americans know about politics and why it matters. New Haven, CT: Yale University Press.

  • Gallagher, J. (2004, September 27). Election stakes high for unions, managers. Detroit Free Press.

  • Genova, B. K. L., & Greenberg, B. S. (1979). Interests in news and the knowledge gap. Public Opinion Quarterly, 43(1), 79–91.

  • Gerber, A. S., Gimpel, J. G., Green, D. P., & Shaw, D. R. (2011). How large and long-lasting are the persuasive effects of televised campaign ads? Results from a randomized field experiment. American Political Science Review, 105(1), 135–150.

  • Gershkoff, A. (2006). How issue interest can rescue the American public. Unpublished Dissertation, Princeton University.

  • Goldstein, K. M., Franz, M. M., & Ridout, T. N. (2002). Political advertising in 2000. Combined File [dataset]. Final release. Madison, WI: The Department of Political Science at the University of Wisconsin-Madison and the Brennan Center for Justice at New York University.

  • Goldstein, K. M., & Rivlin, J. (2007). Presidential advertising, 2003–2004. Combined File [dataset]. Final release. Madison, WI: The University of Wisconsin Advertising Project, The Department of Political Science at the University of Wisconsin-Madison.

  • Hershey, M. R. (2001). The campaign and the media. In G. M. Pomper (Ed.), The election of 2000. New York: Seven Bridges Press.

  • Hillygus, D. S., & Shields, T. G. (2008). The persuadable voter: Wedge issues in presidential campaigns. Princeton, NJ: Princeton University Press.

  • Holbrook, T. M. (2002). Presidential campaigns and the knowledge gap. Political Communication, 19(4), 437–454.

  • Hutchings, V. L. (2001). Political context, issue salience, and selective attentiveness: Constituent knowledge of the Clarence Thomas confirmation vote. Journal of Politics, 63(3), 846–868.

  • Hutchings, V. L. (2003). Public opinion and democratic accountability: How citizens learn about politics. Princeton, NJ: Princeton University Press.

  • Iyengar, S. (1990). Shortcuts to political knowledge: The role of selective attention and accessibility. In J. A. Ferejohn & J. H. Kuklinski (Eds.), Information and democratic processes. Urbana, IL: University of Illinois Press.

  • Jerit, J., Barabas, J., & Bolsen, T. (2006). Citizens, knowledge, and the information environment. American Journal of Political Science, 50(2), 266–282.

  • Johnston, R., Hagen, M. G., & Jamieson, K. H. (2004). The 2000 presidential election and the foundations of party politics. Cambridge, MA: Cambridge University Press.

  • Krosnick, J. A. (1990). Government policy and citizen passion: A study of issue publics in contemporary America. Political Behavior, 12(1), 59–92.

  • Lenz, G. (2009). Learning and opinion change, not priming: Reconsidering the priming hypothesis. American Journal of Political Science, 53(4), 821–837.

  • Luskin, R. (1990). Explaining political sophistication. Political Behavior, 12(4), 331–361.

  • McCracken, J., & Butters, J. (2004, February 7). Democratic candidates back pro-union proposal. Detroit Free Press.

  • Mitchell, D. (2012). It’s about time: The lifespan of information effects in a multiweek campaign. American Journal of Political Science, 56(2), 298–311.

  • Moe, T. (2001). Schools, vouchers, and the American public. Washington, DC: Brookings Institution Press.

  • Moore, D. W. (1987). Political campaigns and the knowledge gap hypothesis. Public Opinion Quarterly, 51(2), 186–200.

  • Page, B. I., & Shapiro, R. Y. (1992). The rational public: Fifty years of trends in Americans’ policy preferences. Chicago: University of Chicago Press.

  • Peterson, D. A. M. (2009). Campaign learning and vote determinants. American Journal of Political Science, 53(2), 821–837.

  • Petrocik, J. R. (1996). Issue ownership in presidential elections, with a 1980 case study. American Journal of Political Science, 40(3), 825–850.

  • Popkin, S. L. (1991). The reasoning voter: Communication and persuasion in presidential campaigns. Chicago: University of Chicago Press.

  • Rosenstone, S. J., & Hansen, J. M. (2003). Mobilization, participation, and democracy in America. New York: Longman Classics.

  • Shaw, D. (2006). The race to 270: The electoral college and the campaign strategies of 2000 and 2004. Chicago: University of Chicago Press.

  • Tichenor, P. J., Donohue, G. A., & Olien, C. N. (1975). Mass media and the knowledge gap. Communication Research, 2(2), 3–23.

  • U.S. National Commission on Retirement Policy. (1999). The 21st century retirement security plan: Final report of the National Commission on Retirement Policy. Washington, DC: Center for Strategic and International Studies.

  • Verba, S., Schlozman, K. L., & Brady, H. E. (1995). Voice and equality: Civic voluntarism in American politics. Cambridge, MA: Harvard University Press.

  • Zaller, J. R. (1992). The nature and origins of mass opinion. Cambridge, MA: Cambridge University Press.

  • Zaller, J. R. (1996). The myth of massive media impact revisited: New support for a discredited idea. In D. Mutz, P. M. Sniderman, & R. A. Brody (Eds.), Political persuasion and attitude change. Ann Arbor, MI: University of Michigan Press.

Acknowledgments

I would like to thank D. Sunshine Hillygus for many helpful comments and for access to direct mail data for the 2004 campaign, Daron Shaw for providing access to media market data and for comments offered at a presentation of an earlier draft of this paper at the 2009 Midwest Political Science Association Annual Meeting, as well as Claudine Gay, Paul Peterson, and Stephen Ansolabehere for helpful comments. This research was supported in part by the National Science Foundation’s IGERT program, Multidisciplinary Program in Inequality and Social Policy at Harvard University (Grant No. 0333403).

Author information

Correspondence to Michael Henderson.

About this article

Cite this article

Henderson, M. Issue Publics, Campaigns, and Political Knowledge. Polit Behav 36, 631–657 (2014). https://doi.org/10.1007/s11109-013-9243-3
