Abstract
Gender-based differences in political knowledge are pervasive in the United States and abroad. Previous research on the source of these differences has focused on resource differentials or instrumentation, with scholars arguing either that the gender gap is real and intractable, or that it is an artifact of the way the concept is measured. Our study differs from past work by showing that (1) male–female differences in political knowledge persist even when knowledge is measured with recommended practices, but that (2) knowledge gaps can be ameliorated. Across laboratory, survey, and natural experiments, we document how exposure to information diminishes gender-based differences in political knowledge. The provision of facts reduces—and often eliminates—the gender gap in political knowledge on questions covering a range of topics.
Notes
Additionally, women exhibit higher levels of knowledge when the survey setting allows them to draw upon procedural, rather than declarative, memory (Prior and Lupia 2008).
McGlone et al. (2006) report that the gender gap in knowledge disappears when the survey interviewer is a woman.
Hansen and Pedersen (2014) adopt a similar approach in a study of Danish respondents (i.e., they examine levels of knowledge among men and women before and after an election).
The stimulus was devoid of interpretation or journalistic devices that might cause men and women to interpret the information differently. Research has shown that contemporary news coverage can be “marginalizing” (Bauer et al. 2016), and conveys the message that politics is a “man’s game” (e.g., Kahn and Goldenberg 1991; Kahn 1992, 1994).
The study was approved by the Human Subjects Committee at Florida State University (HSC No. 2013.10185). Student subjects were recruited to participate in exchange for extra credit and instructed to sign up through a web-based appointment system. The MTurk respondents received $1.00 for completing the survey. See Druckman and Kam (2011) or Berinsky et al. (2012) for more on the merits of convenience samples.
We selected these particular options because they have been used as placebo topics in past experimental studies (e.g., Gerber and Green 2005) and we expected them to be non-reactive.
The information in the response options was truthful (not deceptive). Because this item was a question, we recorded which option the respondent selected; those data are analyzed later.
Prior to the re-randomization of treatments, and for the Warren experiment only, subjects were first blocked on gender to explore possible heterogeneous treatment effects. Treatment probability was the same within blocks, the blocks were similarly large (n > 300), and our main interest lies in effects within each gender block, all of which makes the empirical analysis straightforward (Gerber and Green 2012, Chaps. 3–4).
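The blocking procedure described above can be sketched as follows; the function and variable names are illustrative placeholders, not the study's actual code. Assigning treatment separately within each gender block holds treatment probability constant within blocks:

```python
import random

def block_randomize(subjects, p_treat=0.5, seed=42):
    """Randomly assign treatment within each gender block (a sketch).

    subjects: list of (subject_id, gender) tuples.
    Returns a dict mapping subject_id -> "treatment" or "control".
    """
    rng = random.Random(seed)
    # Group subject ids into blocks by gender.
    blocks = {}
    for sid, gender in subjects:
        blocks.setdefault(gender, []).append(sid)
    # Within each block, treat a fixed share of shuffled subjects.
    assignment = {}
    for ids in blocks.values():
        rng.shuffle(ids)
        n_treat = round(len(ids) * p_treat)
        for i, sid in enumerate(ids):
            assignment[sid] = "treatment" if i < n_treat else "control"
    return assignment
```

Because assignment happens inside each block, comparisons of treated and control subjects within a gender block are unconfounded by gender.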
The knowledge items did not include a "don't know" option, but subjects could refuse to answer by skipping to the next item. Few people skipped the knowledge questions, ranging from a low of .7% (same-sex marriage, unemployment) to a high of 5.9% (Elizabeth Warren). Analyses of refusals as a separate outcome category indicate the treatments reduced incorrect answers (all p < .10, with p-values averaging .04) but had no significant effect on refusals (all p > .05, with p-values averaging .74), with the exception of the Warren item, where both incorrect answers and refusals were reduced significantly (p < .01).
Differences in knowledge between student subjects and MTurk respondents were minimal (p-values averaged .34 across all cases and .53 for controls only).
Men and women were indistinguishable in their knowledge of Elizabeth Warren (p = .42).
We employ the symbols “♀” for women and “♂” for men to aid in the presentation of results.
We can augment the size of the control group by using respondents who were treated in other conditions (recall that treated respondents were shown one of the stories but asked all four knowledge questions). Using an enlarged control group (with more statistical power), the results are similar in terms of direction and substantive magnitude. Three patterns, in particular, are worth noting. First, in Panel C, the gender gap in the control group is 8 points and significant (p < .05; double digit gaps remain in the other panels and are significant at p < .01). Second, female learning effects average 17.5 points across the panels in Fig. 2 (p < .05 or better). Third, even with more cases, all DIDs in Fig. 2 are insignificant.
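For readers unfamiliar with the difference-in-differences (DID) quantity reported throughout, a minimal sketch with hypothetical proportions (not the study's estimates):

```python
# Hypothetical shares answering a knowledge item correctly;
# these numbers are illustrative, not the study's data.
correct = {
    "control":   {"men": 0.55, "women": 0.45},
    "treatment": {"men": 0.70, "women": 0.68},
}

def gender_gap(group):
    """Male minus female share of correct answers within a group."""
    return correct[group]["men"] - correct[group]["women"]

# DID: how much the treatment narrows the gender gap
# (positive values mean the gap shrinks under treatment).
did = gender_gap("control") - gender_gap("treatment")
```

With these illustrative numbers, a 10-point control-group gap falls to 2 points under treatment, for a DID of 8 points.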
We analyzed the subset of respondents who selected the treatment topic in response to the news interest question. The patterns were similar to Figs. 2 and 3. Among respondents selecting the treatment topic, knowledge differences between men and women in the treatment group vanish to the point of insignificance (unemployment was the only exception). Later we report on gender differences in which of the four news items a respondent selected.
YouGov uses a matching algorithm with respect to gender, age, race, and education to produce an internet sample that approximates the demographic makeup of known marginals for the general population of the United States from the U.S. Census Bureau’s American Community Survey (see Ansolabehere and Rivers 2013). The completion rate for the study was 30%, and it was approved by the Human Subjects Committee at Stony Brook University (#2014-2858-R1).
The formatting of items in Study 2 was identical to Study 1 (e.g., there were images corresponding to response options).
The outcome occurred one or two questions later, depending on its placement relative to a related opinion question (not analyzed here). The other knowledge item has the same structure.
As with Study 1, the questions were based upon a national survey conducted by the Pew Research Center in March–April of 2015. In the Pew survey, 33% knew the correct answer (three women among the nine justices), while 52% could say that Republicans controlled 54 of the 100 seats in the U.S. Senate. At the time of this writing, information regarding gender differences on these two items was not available from Pew.
Figure 4 presents the results using unweighted data; however, the patterns are identical with weighted data (e.g., the DIDs for Panels A and B are .07 and .02, respectively; both n.s.).
On unemployment the double-digit difference was sizeable (p = .12); on Senate control the difference was significant (p < .01). There were no significant differences between men and women in the likelihood of selecting the other treatment topics (p's range from .39 to .98).
Every spring the trustees overseeing Medicare and Social Security release a report outlining the financial trajectory of the programs. In 2007 news coverage of Medicare peaked because the program’s finances (the “trust fund”) had fallen below a specific threshold, triggering a call for legislative action. In the original study, the authors estimated the causal effect of information on knowledge and attitudes (i.e., there were no analyses by gender).
The answer choices provided in random order were “Medicare,” “Social Security,” or “Both programs will exhaust their funds within the same year.”
Media exposure was measured with a question that asked, “How have you been getting most of your information about current events?” If respondents replied television, they were asked which channel from a list of network and cable sources. If they replied newspapers, they were asked to indicate which one. The survey data were then paired with media content analyses, such that “exposed” individuals represent people who report using a source that actually provided the exhaustion date information (see Barabas and Jerit 2010 for details).
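The pairing of survey reports with the media content analyses can be sketched as follows; the outlet names and record fields are hypothetical placeholders (see Barabas and Jerit 2010 for the actual coding):

```python
# Outlets whose content analysis showed coverage of the exhaustion-date
# information (hypothetical names, not the study's coded outlets).
covered_sources = {"Network A Evening News", "Daily Paper B"}

# Self-reported main news source for each (hypothetical) respondent.
respondents = [
    {"id": 1, "source": "Network A Evening News"},
    {"id": 2, "source": "Cable Channel C"},
]

# A respondent is coded "exposed" if their reported source
# actually carried the relevant fact.
for r in respondents:
    r["exposed"] = r["source"] in covered_sources
```

The design choice is that exposure is inferred from the intersection of self-reported source use and content-coded coverage, rather than from self-reported exposure alone.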
Both changes are insignificant (p = .41 and p = .21, respectively), which is likely due to the small size of the subgroups (n = 27 and 46, respectively).
The difference between the two DIDs is 20 points, but it is insignificant (p = .26).
On October 1, 2008, the USCIS introduced a new version of the test immigrants take to become U.S. citizens. The old test had been widely criticized for being too easy.
The difference of these difference-in-differences (DIDID) is 24 points (p < .07). Although the CCAP study is based on a panel design, YouGov/Polimetrix added new cases in later waves. This is why, in Fig. 5, the n for Time 2 is larger than the n at Time 1.
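The triple difference (DIDID) compares the change in the gender gap among exposed respondents with the same change among the unexposed. A minimal sketch with hypothetical shares (not the CCAP estimates):

```python
# Hypothetical correct-answer shares by exposure, wave, and gender;
# Time 1 (t1) is the pre-event wave, Time 2 (t2) the post-event wave.
shares = {
    "exposed":   {"t1": {"men": 0.50, "women": 0.35},
                  "t2": {"men": 0.60, "women": 0.58}},
    "unexposed": {"t1": {"men": 0.48, "women": 0.36},
                  "t2": {"men": 0.52, "women": 0.41}},
}

def did(exposure):
    """Shrinkage of the male-female gap from Time 1 to Time 2."""
    gap = lambda t: shares[exposure][t]["men"] - shares[exposure][t]["women"]
    return gap("t1") - gap("t2")

# DIDID: does the gap close more among the exposed than the unexposed?
didid = did("exposed") - did("unexposed")
```

With these illustrative numbers, the gap shrinks 13 points among the exposed but only 1 point among the unexposed, for a DIDID of 12 points.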
Our results are consistent with Ondercin et al. (2011), but the addition of evidence from controlled experiments as well as a natural experiment in which we isolate exposed and unexposed people extends their findings. We also take a different approach to analyzing differential effects and employ an explicit DID test of men versus women rather than a comparison of marginal effects versus the null (see Table 3 in Ondercin et al. 2011).
References
Abrajano, M. (2015). Reexamining the ‘racial gap’ in political knowledge. Journal of Politics, 77(1), 44–54.
Ansolabehere, S., & Rivers, D. (2013). Cooperative survey research. Annual Review of Political Science, 16, 307–329.
Atkeson, L. R., & Rapoport, R. B. (2003). The more things change the more they stay the same: Examining gender differences in political attitude expression, 1952–2000. Public Opinion Quarterly, 67(4), 495–521.
Barabas, J., & Jerit, J. (2009). Estimating the causal effects of media coverage on policy-specific knowledge. American Journal of Political Science, 53(January), 79–89.
Barabas, J., & Jerit, J. (2010). Are survey experiments externally valid? American Political Science Review, 104(2), 226–242.
Barabas, J., Jerit, J., Pollock, W., & Rainey, C. (2014). The question(s) of political knowledge. American Political Science Review, 108(4), 840–855.
Bauer, N., Krupnikov, Y., & Yeganeh, S. (2016). Casualties in the ‘war on women’: Can campaign rhetoric about discrimination demobilize women? Paper presented at annual meeting of the Midwest Political Science Association, Chicago, IL.
Bennett, L. L. M., & Bennett, S. E. (1989). Enduring gender differences in political interest: The impact of socialization and political dispositions. American Politics Research, 17(1), 105–122.
Berinsky, A. J., Huber, G. A., & Lenz, G. S. (2012). Using mechanical turk as a subject recruitment tool for experimental research. Political Analysis, 20(3), 351–368.
Boydstun, A. (2013). Making the news: Politics, the media, and agenda setting. Chicago: University of Chicago Press.
Burns, N., Schlozman, K., & Verba, S. (2001). The private roots of public action: Gender, equality, and political participation. Cambridge, MA: Harvard University Press.
Delli Carpini, M. X., & Keeter, S. (1996). What Americans know about politics and why it matters. New Haven, CT: Yale University Press.
Delli Carpini, M. X., Keeter, S., & Kennamer, J. D. (1994). Effects of the news media environment on citizen knowledge of state politics and government. Journalism Quarterly, 71, 443–456.
Dolan, K. (2011). Do women and men know different things? Measuring gender differences in political knowledge. Journal of Politics, 73(1), 97–107.
Dow, J. (2009). Gender differences in political knowledge: Distinguishing characteristics-based and returns-based differences. Political Behavior, 31(1), 117–136.
Druckman, J. N., & Kam, C. D. (2011). Students as experimental participants: A defense of the ‘narrow data base’. In J. N. Druckman, D. P. Green, J. Kuklinski, & A. Lupia (Eds.), Cambridge handbook of experimental political science (pp. 41–57). New York: Cambridge University Press.
Ferrin, M., & Fraile, M. (2014). Measuring political knowledge in Spain: Problems and consequences of the gender gap in knowledge. Revista Española de Investigaciones Sociológicas, 147, 53–72.
Fortin-Rittberger, J. (2016). Cross-national gender gaps in political knowledge: How much is due to context? Political Research Quarterly, 69(3), 391–402.
Fraile, M. (2014a). Do women know less about politics than men? The gender gap in political knowledge in Europe. Social Politics, 21(2), 261–289.
Fraile, M. (2014b). Does deliberation contribute to decreasing the gender gap in knowledge? European Union Politics, 15(3), 372–388.
Fridkin, K. L., & Kenney, P. J. (2014). How the gender of U.S. senators influences people’s understanding and engagement in politics. Journal of Politics, 76(4), 1017–1031.
Gerber, A. S., & Green, D. P. (2005). Do phone calls increase voter turnout? An update. Annals of the American Academy of Political and Social Science, 601(Sept), 142–154.
Gerber, A. S., & Green, D. P. (2012). Field experiments: Design, analysis, and interpretation. New York: W.W. Norton.
Gilens, M. (2001). Political ignorance and collective policy preferences. American Political Science Review, 95(June), 379–396.
Graber, D. A. (1988). Processing the news: How people tame the information tide (2nd ed.). White Plains, NY: Longman.
Hansen, K. M., & Pedersen, R. T. (2014). Campaigns matter: How voters become knowledgeable and efficacious during election campaigns. Political Communication, 31, 303–324.
Hochschild, J. L., & Einstein, K. L. (2015). Do facts matter? Information and misinformation in American politics. Norman, OK: University of Oklahoma Press.
Hooghe, M., Quintelier, E., & Reeskens, T. (2006). How political is the personal? Gender differences in the level and the structure of political knowledge. Journal of Women, Politics & Policy, 28(2), 115–125.
Jerit, J., Barabas, J., & Bolsen, T. (2006). Citizens, knowledge, and the information environment. American Journal of Political Science, 50(April), 266–282.
Kahn, K. F. (1992). “Does being male help?” An investigation of the effects of candidate gender and campaign coverage on evaluations of U.S. senate candidates. Journal of Politics, 54(2), 497–517.
Kahn, K. F. (1994). Does gender make a difference? An experimental examination of sex stereotypes and press patterns in statewide campaigns. American Journal of Political Science, 38(1), 162–195.
Kahn, K. F., & Goldenberg, E. N. (1991). Women candidates in the news: An examination of gender differences in U.S. Senate campaign coverage. Public Opinion Quarterly, 55(2), 480–499.
Kanthak, K., & Woon, J. (2015). Women don’t run? Election aversion and candidate entry. American Journal of Political Science, 59(3), 595–612.
Kuklinski, J. H., Quirk, P. J., Jerit, J., & Rich, R. (2001). The political environment and citizen competence. American Journal of Political Science, 45, 410–424.
Kuklinski, J. H., Quirk, P. J., Jerit, J., Schwieder, D., & Rich, R. (2000). Misinformation and the currency of democratic citizenship. Journal of Politics, 62, 790–816.
Lawless, J. L., & Fox, R. L. (2010). It still takes a candidate: Why women don’t run for office. Cambridge, MA: Cambridge University Press.
Lizotte, M.-K., & Sidman, A. (2009). Explaining the gender gap in political knowledge. Politics & Gender, 5(2), 127–152.
Lupia, A. (2016). Uninformed: Why people know so little about politics and what we can do about it. New York: Oxford University Press.
McGlone, M., Aronson, J., & Kobrynowicz, D. (2006). Stereotype threat and the gender gap in political knowledge. Psychology of Women Quarterly, 30, 392–398.
Miller, M. K., & Orr, S. K. (2008). Experimenting with a ‘third way’ in political knowledge estimation. Public Opinion Quarterly, 72(4), 768–780.
Mondak, J., & Anderson, M. (2004). The knowledge gap: A reexamination of gender-based differences in political knowledge. Journal of Politics, 66(2), 492–512.
Morton, R. B., & Williams, K. C. (2010). From nature to the lab: Experimental political science and the study of causality. New York, NY: Cambridge University Press.
Murphy, M. C., Steele, C., & Gross, J. J. (2007). Signaling threat: How situational cues affect women in math, science, and engineering settings. Psychological Science, 18(10), 879–885.
Mutz, D. C. (2011). Population-based survey experiments. Princeton, NJ: Princeton University Press.
Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303–330.
Ondercin, H. J., Garand, J. C., & Crapanzano, L. E. (2011). Political learning during the 2000 U.S. presidential election: The impact of the campaign on the gender gap in political knowledge. Electoral Studies, 30, 727–737.
Pereira, M. F., Fraile, M., & Rubal, M. (2014). Young and gapped? Political knowledge of girls and boys in Europe. Political Research Quarterly, 68(1), 1–14.
Perez, E. O. (2015). Mind the gap: Why large group deficits in political knowledge emerge—and what to do about them. Political Behavior, 37, 933–954.
Pietryka, M. T., & MacIntosh, R. C. (2013). An analysis of ANES items and their use in the construction of political knowledge scales. Political Analysis, 21(4), 407–429.
Preece, J. R. (2016). Mind the gender gap: The influence of self-efficacy on political interest. Politics & Gender, forthcoming.
Prior, M. (2014). Visual political knowledge: A different road to competence? Journal of Politics, 76(1), 41–57.
Prior, M., & Lupia, A. (2008). Money, time, and political knowledge: Distinguishing quick recall and political learning skills. American Journal of Political Science, 52(1), 168–182.
Sanbonmatsu, K. (2003). Gender-related political knowledge and the descriptive representation of women. Political Behavior, 25(4), 367–388.
Shaker, L. (2012). Local political knowledge and assessments of citizen competence. Public Opinion Quarterly, 76(3), 525–537.
Stolle, D., & Gidengil, E. (2010). What do women really know? A gendered analysis of varieties of political knowledge. Perspectives on Politics, 8(1), 93–109.
Tichenor, P. J., Donohue, G. A., & Olien, C. N. (1970). Mass media flow and differential growth in knowledge. Public Opinion Quarterly, 34(Summer), 159–170.
Verba, S., Burns, N., & Schlozman, K. L. (1997). Knowing and caring about politics: Gender and political engagement. Journal of Politics, 59(4), 1051–1072.
Verba, S., Schlozman, K. L., & Brady, H. E. (1995). Voice and equality: Civic voluntarism in American politics. Cambridge: Harvard University Press.
Wolak, J., & McDevitt, M. (2011). The roots of the gender gap in political knowledge in adolescence. Political Behavior, 33, 505–533.
Acknowledgments
The authors thank Nichole Bauer, Emily Farris, and the anonymous reviewers for helpful comments and suggestions. They also thank Scott Clifford for help programming Study 1. Previous versions of this paper were presented at the Political Science Department at Aarhus University and the annual meeting of the International Society of Political Psychology. Replication files are available at Political Behavior’s Dataverse page.
Cite this article
Jerit, J., Barabas, J. Revisiting the Gender Gap in Political Knowledge. Polit Behav 39, 817–838 (2017). https://doi.org/10.1007/s11109-016-9380-6