
Revisiting the Gender Gap in Political Knowledge

Political Behavior

Abstract

Gender-based differences in political knowledge are pervasive in the United States and abroad. Previous research on the source of these differences has focused on resource differentials or instrumentation, with scholars arguing either that the gender gap is real and intractable, or that it is an artifact of the way the concept is measured. Our study differs from past work by showing that (1) male–female differences in political knowledge persist even when knowledge is measured with recommended practices, but that (2) knowledge gaps can be ameliorated. Across laboratory, survey, and natural experiments, we document how exposure to information diminishes gender-based differences in political knowledge. The provision of facts reduces—and often eliminates—the gender gap in political knowledge on questions covering a range of topics.


Notes

  1. Additionally, women exhibit higher levels of knowledge when the survey setting allows them to draw upon procedural, rather than declarative, memory (Prior and Lupia 2008).

  2. In technical terms, there is “differential item functioning” of traditional knowledge questions across gender groups (Pietryka and MacIntosh 2013; see Abrajano 2015 or Perez 2015 for similar findings with respect to different racial and ethnic groups).

  3. McGlone et al. (2006) report that the gender gap in knowledge disappears when the survey interviewer is a woman.

  4. Hansen and Pedersen (2014) adopt a similar approach in a study of Danish respondents (i.e., they examine levels of knowledge among men and women before and after an election).

  5. The stimulus was devoid of interpretation or journalistic devices that might cause men and women to interpret the information differently. Research has shown that contemporary news coverage can be “marginalizing” (Bauer et al. 2016), and conveys the message that politics is a “man’s game” (e.g., Kahn and Goldenberg 1991; Kahn 1992, 1994).

  6. The study was approved by the Human Subjects Committee at Florida State University (HSC No. 2013.10185). Student subjects were recruited to participate in exchange for extra credit and instructed to sign up through a web-based appointment system. The MTurk respondents received $1.00 for completing the survey. See Druckman and Kam (2011) or Berinsky et al. (2012) for more on the merits of convenience samples.

  7. We selected these particular options because they have been used as placebo topics in past experimental studies (e.g., Gerber and Green 2005) and we expected them to be non-reactive.

  8. The information in the response options was truthful (not deceptive). This item was a question, so we recorded which option the respondent selected and analyze those data later in the article.

  9. Prior to the re-randomization of treatments, and for the Warren experiment only, subjects were first blocked on gender to explore possible heterogeneous treatment effects. Treatment probability was the same within blocks, the blocks were of similar, large size (n > 300), and our main interest lies in effects within each gender block, all of which makes the empirical analysis straightforward (Gerber and Green 2012, Chaps. 3–4).

  10. The knowledge items did not include a “don’t know” option, but subjects could refuse to answer by skipping to the next item. Few people skipped the knowledge questions, ranging from a low of .7% (same-sex marriage, unemployment) to a high of 5.9% (Elizabeth Warren). Analyses of refusals as a separate outcome category indicate that the treatments reduced incorrect answers (all p < .10, with p-values averaging .04) but had no significant effect on refusals (all p > .05, with p-values averaging .74), with the exception of the Warren item, where both incorrect answers and refusals were reduced significantly (p < .01).

  11. Differences in knowledge between student subjects and MTurk respondents were minimal (p-values averaged .34 across all cases and .53 among controls only).

  12. Men and women were indistinguishable in their knowledge of Elizabeth Warren (p = .42).

  13. We employ the symbols “♀” for women and “♂” for men to aid in the presentation of results.

  14. We can augment the size of the control group by using respondents who were treated in other conditions (recall that treated respondents were shown one of the stories but asked all four knowledge questions). Using an enlarged control group (with more statistical power), the results are similar in terms of direction and substantive magnitude. Three patterns, in particular, are worth noting. First, in Panel C, the gender gap in the control group is 8 points and significant (p < .05; double-digit gaps remain in the other panels and are significant at p < .01). Second, female learning effects average 17.5 points across the panels in Fig. 2 (p < .05 or better). Third, even with more cases, all DIDs in Fig. 2 are insignificant (a schematic definition of the difference-in-differences, or DID, appears after these notes).

  15. We analyzed the subset of respondents who selected the treatment topic in response to the news interest question. The patterns were similar to those in Figs. 2 and 3. Among respondents selecting the treatment topic, knowledge differences between men and women in the treatment group vanish to the point of insignificance (unemployment was the only exception). Later we report on gender differences in which of the four news items a respondent selected.

  16. YouGov uses a matching algorithm with respect to gender, age, race, and education to produce an internet sample that approximates known demographic marginals for the general population of the United States, drawn from the U.S. Census Bureau’s American Community Survey (see Ansolabehere and Rivers 2013). The completion rate for the study was 30%, and it was approved by the Human Subjects Committee at Stony Brook University (#2014-2858-R1).

  17. The formatting of items in Study 2 was identical to Study 1 (e.g., there were images corresponding to response options).

  18. The outcome measure appeared one or two questions later, depending on its placement relative to a related opinion question (not analyzed here). The other knowledge item had the same structure.

  19. As with Study 1, the questions were based upon a national survey conducted by the Pew Research Center in March–April of 2015. In the Pew survey, 33% correctly answered that three of the nine justices are women, while 52% knew that Republicans controlled 54 of the 100 seats in the U.S. Senate. At the time of this writing, information regarding gender differences on these two items was not available from Pew.

  20. Figure 4 presents the results using unweighted data; however, the patterns are identical with weighted data (e.g., the DIDs for Panels A and B are .07 and .02, respectively; both n.s.).

  21. On unemployment the double-digit difference was sizeable (p = .12); on Senate control the difference was significant at p < .01. There were no significant differences between men and women in the likelihood of selecting the other treatment topics (p’s range from .39 to .98).

  22. Every spring the trustees overseeing Medicare and Social Security release a report outlining the financial trajectory of the programs. In 2007 news coverage of Medicare peaked because the program’s finances (the “trust fund”) had fallen below a specific threshold, triggering a call for legislative action. In the original study, the authors estimated the causal effect of information on knowledge and attitudes (i.e., there were no analyses by gender).

  23. The answer choices provided in random order were “Medicare,” “Social Security,” or “Both programs will exhaust their funds within the same year.”

  24. Media exposure was measured with a question that asked, “How have you been getting most of your information about current events?” If respondents replied television, they were asked which channel from a list of network and cable sources. If they replied newspapers, they were asked to indicate which one. The survey data were then paired with media content analyses, such that “exposed” individuals represent people who report using a source that actually provided the exhaustion date information (see Barabas and Jerit 2010 for details).

  25. Both changes are insignificant (p < .41 and p < .21, respectively), which is likely due to the small size of the subgroups (n = 27 and 46, respectively).

  26. The difference between the two DIDs is 20 points but it is insignificant (p < .26).

  27. On October 1, 2008, the USCIS introduced a new version of the test immigrants take to become U.S. citizens. The old test had been widely criticized for being too easy.

  28. The difference of these difference-in-differences (DIDID; see the schematic definition after these notes) is 24 points (p < .07). Although the CCAP study is based on a panel design, YouGov/Polimetrix added new cases in later waves. This is why, in Fig. 5, the n for Time 2 is larger than the n at Time 1.

  29. Our results are consistent with Ondercin et al. (2011), but the addition of evidence from controlled experiments as well as a natural experiment in which we isolate exposed and unexposed people extends their findings. We also take a different approach to analyzing differential effects and employ an explicit DID test of men versus women rather than a comparison of marginal effects versus the null (see Table 3 in Ondercin et al. 2011).
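A note on notation, for readers less familiar with these estimators: the difference-in-differences (DID) referenced in the notes above compares the learning effect of the information treatment for women with the corresponding effect for men. A schematic version, in our notation rather than necessarily the authors’ exact specification, is

DID = (W_T − W_C) − (M_T − M_C),

where W_T and W_C denote the proportion of correct answers among women in the treatment and control groups, and M_T and M_C denote the corresponding proportions for men. In this schematic, a positive DID means the treatment raised correct answering more among women than among men.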
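The DIDID reported for the natural experiments extends this logic by differencing two such gender-based DIDs, for example across two survey waves or across exposed and unexposed groups. Schematically, and again in our notation rather than necessarily the authors’ exact specification,

DIDID = DID_1 − DID_2,

where DID_1 and DID_2 are the gender-based differences-in-differences computed for the two groups or periods being contrasted. A nonzero DIDID indicates that the gender difference in learning itself differs across those groups or periods.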

References

  • Abrajano, M. (2015). Reexamining the ‘racial gap’ in political knowledge. Journal of Politics, 77(1), 44–54.

  • Ansolabehere, S., & Rivers, D. (2013). Cooperative survey research. Annual Review of Political Science, 16, 307–329.

  • Atkeson, L. R., & Rapoport, R. B. (2003). The more things change the more they stay the same: Examining gender differences in political attitude expression, 1952–2000. Public Opinion Quarterly, 67(4), 495–521.

  • Barabas, J., & Jerit, J. (2009). Estimating the causal effects of media coverage on policy-specific knowledge. American Journal of Political Science, 53(January), 79–89.

  • Barabas, J., & Jerit, J. (2010). Are survey experiments externally valid? American Political Science Review, 104(2), 226–242.

  • Barabas, J., Jerit, J., Pollock, W., & Rainey, C. (2014). The question(s) of political knowledge. American Political Science Review, 108(4), 840–855.

  • Bauer, N., Krupnikov, Y., & Yeganeh, S. (2016). Casualties in the ‘war on women’: Can campaign rhetoric about discrimination demobilize women? Paper presented at the annual meeting of the Midwest Political Science Association, Chicago, IL.

  • Bennett, L. L. M., & Bennett, S. E. (1989). Enduring gender differences in political interest: The impact of socialization and political dispositions. American Politics Research, 17(1), 105–122.

  • Berinsky, A. J., Huber, G. A., & Lenz, G. S. (2012). Using Mechanical Turk as a subject recruitment tool for experimental research. Political Analysis, 20(3), 351–368.

  • Boydstun, A. (2013). Making the news: Politics, the media, and agenda setting. Chicago: University of Chicago Press.

  • Burns, N., Schlozman, K., & Verba, S. (2001). The private roots of public action: Gender, equality, and political participation. Cambridge, MA: Harvard University Press.

  • Delli Carpini, M. X., & Keeter, S. (1996). What Americans know about politics and why it matters. New Haven, CT: Yale University Press.

  • Delli Carpini, M. X., Keeter, S., & Kennamer, J. D. (1994). Effects of the news media environment on citizen knowledge of state politics and government. Journalism Quarterly, 71, 443–456.

  • Dolan, K. (2011). Do women and men know different things? Measuring gender differences in political knowledge. Journal of Politics, 73(1), 97–107.

  • Dow, J. (2009). Gender differences in political knowledge: Distinguishing characteristics-based and returns-based differences. Political Behavior, 31(1), 117–136.

  • Druckman, J. N., & Kam, C. D. (2011). Students as experimental participants: A defense of the ‘narrow data base’. In J. N. Druckman, D. P. Green, J. Kuklinski, & A. Lupia (Eds.), Cambridge handbook of experimental political science (pp. 41–57). New York: Cambridge University Press.

  • Ferrin, M., & Fraile, M. (2014). Measuring political knowledge in Spain: Problems and consequences of the gender gap in knowledge. Revista Española de Investigaciones Sociológicas, 147, 53–72.

  • Fortin-Rittberger, J. (2016). Cross-national gender gaps in political knowledge: How much is due to context? Political Research Quarterly, 69(3), 391–402.

  • Fraile, M. (2014a). Do women know less about politics than men? The gender gap in political knowledge in Europe. Social Politics, 21(2), 261–289.

  • Fraile, M. (2014b). Does deliberation contribute to decreasing the gender gap in knowledge? European Union Politics, 15(3), 372–388.

  • Fridkin, K. L., & Kenney, P. J. (2014). How the gender of U.S. senators influences people’s understanding and engagement in politics. Journal of Politics, 76(4), 1017–1031.

  • Gerber, A. S., & Green, D. P. (2005). Do phone calls increase voter turnout? An update. Annals of the American Academy of Political and Social Science, 601(Sept), 142–154.

  • Gerber, A. S., & Green, D. P. (2012). Field experiments: Design, analysis, and interpretation. New York: W.W. Norton.

  • Gilens, M. (2001). Political ignorance and collective policy preferences. American Political Science Review, 95(June), 379–396.

  • Graber, D. A. (1988). Processing the news: How people tame the information tide (2nd ed.). White Plains, NY: Longman.

  • Hansen, K. M., & Pedersen, R. T. (2014). Campaigns matter: How voters become knowledgeable and efficacious during election campaigns. Political Communication, 31, 303–324.

  • Hochschild, J. L., & Einstein, K. L. (2015). Do facts matter? Information and misinformation in American politics. Norman, OK: University of Oklahoma Press.

  • Hooghe, M., Quintelier, E., & Reeskens, T. (2006). How political is the personal? Gender differences in the level and the structure of political knowledge. Journal of Women, Politics & Policy, 28(2), 115–125.

  • Jerit, J., Barabas, J., & Bolsen, T. (2006). Citizens, knowledge, and the information environment. American Journal of Political Science, 50(April), 266–282.

  • Kahn, K. F. (1992). “Does being male help?” An investigation of the effects of candidate gender and campaign coverage on evaluations of U.S. Senate candidates. Journal of Politics, 54(2), 497–517.

  • Kahn, K. F. (1994). Does gender make a difference? An experimental examination of sex stereotypes and press patterns in statewide campaigns. American Journal of Political Science, 38(1), 162–195.

  • Kahn, K. F., & Goldenberg, E. N. (1991). Women candidates in the news: An examination of gender differences in U.S. Senate campaign coverage. Public Opinion Quarterly, 55(2), 480–499.

  • Kanthak, K., & Woon, J. (2015). Women don’t run? Election aversion and candidate entry. American Journal of Political Science, 59(3), 595–612.

  • Kuklinski, J. H., Quirk, P. J., Jerit, J., & Rich, R. (2001). The political environment and citizen competence. American Journal of Political Science, 45, 410–424.

  • Kuklinski, J. H., Quirk, P. J., Jerit, J., Schwieder, D., & Rich, R. (2000). Misinformation and the currency of citizenship. Journal of Politics, 62, 790–816.

  • Lawless, J. L., & Fox, R. L. (2010). It still takes a candidate: Why women don’t run for office. Cambridge, MA: Cambridge University Press.

  • Lizotte, M.-K., & Sidman, A. (2009). Explaining the gender gap in political knowledge. Politics & Gender, 5(2), 127–152.

  • Lupia, A. (2016). Uninformed: Why people know so little about politics and what we can do about it. New York: Oxford University Press.

  • McGlone, M., Aronson, J., & Kobrynowicz, D. (2006). Stereotype threat and the gender gap in political knowledge. Psychology of Women Quarterly, 30, 392–398.

  • Miller, M. K., & Orr, S. K. (2008). Experimenting with a ‘third way’ in political knowledge estimation. Public Opinion Quarterly, 72(4), 768–780.

  • Mondak, J., & Anderson, M. (2004). The knowledge gap: A reexamination of gender-based differences in political knowledge. Journal of Politics, 66(2), 492–512.

  • Morton, R. B., & Williams, K. C. (2010). From nature to the lab: Experimental political science and the study of causality. New York, NY: Cambridge University Press.

  • Murphy, M. C., Steele, C., & Gross, J. J. (2007). Signaling threat: How situational cues affect women in math, science, and engineering settings. Psychological Science, 18(10), 879–885.

  • Mutz, D. C. (2011). Population-based survey experiments. Princeton, NJ: Princeton University Press.

  • Nyhan, B., & Reifler, J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303–330.

  • Ondercin, H. J., Garand, J. C., & Crapanzano, L. E. (2011). Political learning during the 2000 U.S. presidential election: The impact of the campaign on the gender gap in political knowledge. Electoral Studies, 30, 727–737.

  • Pereira, M. F., Fraile, M., & Rubal, M. (2014). Young and gapped? Political knowledge of girls and boys in Europe. Political Research Quarterly, 68(1), 1–14.

  • Perez, E. O. (2015). Mind the gap: Why large group deficits in political knowledge emerge—and what to do about them. Political Behavior, 37, 933–954.

  • Pietryka, M. T., & MacIntosh, R. C. (2013). An analysis of ANES items and their use in the construction of political knowledge scales. Political Analysis, 21(4), 407–429.

  • Preece, J. R. (2016). Mind the gender gap: The influence of self-efficacy on political interest. Politics & Gender, forthcoming.

  • Prior, M. (2014). Visual political knowledge: A different road to competence? Journal of Politics, 76(1), 41–57.

  • Prior, M., & Lupia, A. (2008). Money, time, and political knowledge: Distinguishing quick recall and political learning skills. American Journal of Political Science, 52(1), 168–182.

  • Sanbonmatsu, K. (2003). Gender-related political knowledge and the descriptive representation of women. Political Behavior, 25(4), 367–388.

  • Shaker, L. (2012). Local political knowledge and assessments of citizen competence. Public Opinion Quarterly, 76(3), 525–537.

  • Stolle, D., & Gidengil, E. (2010). What do women really know? A gendered analysis of varieties of political knowledge. Perspectives on Politics, 8(1), 93–109.

  • Tichenor, P. J., Donohue, G. A., & Olien, C. N. (1970). Mass media flow and differential growth in knowledge. Public Opinion Quarterly, 34(Summer), 159–170.

  • Verba, S., Burns, N., & Schlozman, K. L. (1997). Knowing and caring about politics: Gender and political engagement. Journal of Politics, 59(4), 1051–1072.

  • Verba, S., Schlozman, K. L., & Brady, H. E. (1995). Voice and equality: Civic voluntarism in American politics. Cambridge: Harvard University Press.

  • Wolak, J., & McDevitt, M. (2013). The roots of the gender gap in political knowledge in adolescence. Political Behavior, 33, 505–533.

Acknowledgments

The authors thank Nichole Bauer, Emily Farris, and the anonymous reviewers for helpful comments and suggestions. They also thank Scott Clifford for help programming Study 1. Previous versions of this paper were presented at the Political Science Department at Aarhus University and the annual meeting of the International Society of Political Psychology. Replication files are available at Political Behavior’s Dataverse page.

Author information

Corresponding author

Correspondence to Jennifer Jerit.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 320 kb)


Cite this article

Jerit, J., Barabas, J. Revisiting the Gender Gap in Political Knowledge. Polit Behav 39, 817–838 (2017). https://doi.org/10.1007/s11109-016-9380-6
