The Consequences of Explicit and Implicit Gender Attitudes and Candidate Quality in the Calculations of Voters

Abstract

How much does a voter’s attitude towards female versus male leadership manifest itself at the ballot box, and when does information regarding candidate qualifications, or the lack thereof, matter in this relationship? I conduct an in-depth survey, which includes a vote choice experiment randomizing the sex of the more qualified candidate, a novel gender and leadership Implicit Association Test, and a measure of explicit gender attitudes, to explore this question. I find that the propensity to pick a female candidate increases as explicit and implicit attitudes against female leadership decrease, suggesting that traditional explicit measures underestimate the effects of gender attitudes and miss a key dimension of people’s preferences. Gender attitudes in the electoral process remain consequential but have grown subtler, which is missed when only people’s self-reported explicit attitudes are assessed. Fortunately, the effects of voters’ gender attitudes can be attenuated by candidate qualification information; however, such information does not eliminate the effects of gender on vote choice uniformly. People who explicitly state a preference for male leaders do not respond to individuating information, even if the female candidate is clearly more qualified than her male counterpart. However, people who implicitly prefer male leaders, but explicitly state being gender-equitable, respond to individuating information and tend to select the more qualified candidate regardless of the candidate’s sex. The study points to the significance of the dual-process account of reasoning—acknowledging that individuals operate on two levels, System 1 (automatic and implicit) and System 2 (effortful and explicit)—in understanding voting behavior.

Fig. 1
Fig. 2

Change history

  • 25 October 2017

    The original version of this article contains a labelling mistake. Table 3 reports the quality score by ABA rating. The order of the labels under “Weak ranking” in Table 3 is incorrect. The correction strengthens the article’s claim that study participants were more likely to assign a higher likability score to candidates with objectively better ABA ratings. The corrected Table 3 is given below.

Notes

  1.

    See http://www.cbc.ca/news/interactives/map-world-womenpolitics/.

  2.

    The focus of this research project is on voter-level gender biases. Certainly, biases may exist against women in the selection of candidates for viable races at the party level. According to Burrell (2008) and Sanbonmatsu (2006), political party recruitment efforts reflect gender biases; fewer women are encouraged to run in the first place.

  3.

    See http://www.gallup.com/poll/155285/atheists-muslims-bias-presidential-candidates.aspx for more information on the Gallup poll.

  4.

    Sample Czar is an online sampling firm (http://www.sampleczar.com) that has an extensive panel of over five million respondents in the US and another five million globally.

  5.

    When identifying panelists for a study, Sample Czar (1) enforces rigid limits on survey frequency, so as not to bombard panel members with multiple invitations per day; and (2) unsubscribes speed-throughs and other bad respondents, as well as panelists who have been “inactive,” defined as those who have not accepted a survey invitation in six months. Respondents were told the following at the start of the survey: “You are invited to participate in a research study on elections. The goal of this project is to better understand people’s attitudes toward elections and political candidates. In order to do this, we will be asking you questions about your political preferences and beliefs.”

  6.

    There were 438 total observations; however, observations with data duplication issues (31 subjects) were excluded from the analysis. Data duplication problems occurred due to subjects restarting the survey multiple times. Excluding subjects with duplication problems, there were 407 respondents (93 %) who successfully completed the experiment portion of the study; 393 individuals (90 %) who successfully completed the experiment, survey, and IAT components; and 390 individuals (89 %) who completed the entire study. Attrition did not vary substantially by treatment condition. Each of the 12 treatment conditions had up to seven individuals drop out of the study.

  7.

    Data are from the Center for American Women and Politics, Eagleton Institute of Politics, Rutgers University (http://www.cawp.rutgers.edu).

  8.

    88 % of the sample identified as white; the average respondent had at least some college education and scored 3.86 on the political ideology scale, where 1 denotes strong Democrat and 7 denotes strong Republican.

  9.

    Yeager et al. (2009) note that surveys of non-probability samples of people who volunteer to do surveys for money or prizes are less accurate than probability sample surveys, and hence there should be caution in presuming that non-probability samples yield data that are externally valid.

  10.

    The two exceptions I am aware of are McDermott (1998), who used a California state poll, and Anderson et al. (2011), who surveyed prospective jurors in six counties in four US states.

  11.

    For the 407 participants who completed the experiment component, the maximum and minimum conditions had 43 and 25 subjects, respectively, and hence, the ratio of maximum to minimum conditions is 1.72. Maximum (minimum) condition denotes the treatment condition with the largest (smallest) number of subjects. For the 390 participants who completed the entire survey, the maximum and minimum conditions had 41 and 24 subjects, respectively, resulting in a 1.71 maximum to minimum condition ratio. Based on 10,000 Monte Carlo simulations, I find that 1.71 and 1.72 correspond to the 42nd and 44th percentile, respectively.
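
The Monte Carlo check described above can be sketched as follows; this is a minimal illustration under stated assumptions, not the author’s code, and the function names are my own. It repeatedly assigns 407 subjects to 12 equally likely treatment conditions and asks where the observed maximum-to-minimum cell ratio of 1.72 falls in the simulated distribution.

```python
import random

def max_min_ratio(n_subjects, n_conditions, rng):
    """Assign subjects to conditions uniformly at random; return max/min cell ratio."""
    counts = [0] * n_conditions
    for _ in range(n_subjects):
        counts[rng.randrange(n_conditions)] += 1
    return max(counts) / min(counts)

def ratio_percentile(observed, n_subjects=407, n_conditions=12,
                     sims=10_000, seed=0):
    """Percentile of the observed ratio within the simulated distribution."""
    rng = random.Random(seed)
    ratios = [max_min_ratio(n_subjects, n_conditions, rng) for _ in range(sims)]
    return 100.0 * sum(r <= observed for r in ratios) / sims
```

A percentile near the middle of the simulated distribution, as the note reports (42nd and 44th percentiles), indicates that the observed imbalance is unremarkable under random assignment.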

  12.

    Respondents were also asked to consider four additional races—two legislative races and two gubernatorial races—that are not included in this study, as there are no objective candidate quality measures that are divorced from party identification or policy positions.

  13.

    See http://www.abanet.org/scfedjud/ratings.html for an overview of judicial nominee ratings.

  14.

    Information about when candidates were initially nominated is presented to voters in the Official Voter Information Guide, and voters can thus infer the partisan affiliation of the candidates by referencing when the candidates were initially nominated. However, acquiring knowledge regarding partisan affiliations requires that voters take proactive steps that are not necessary in executive and legislative races, which makes judicial races a lower-information setting. See http://www.ajs.org/selection/sel_state-select-map.asp#COLORADO for information regarding judicial selection methods by state.

  15.

    Anderson et al. (2011) is one exception.

  16.

    The non-gendered (male-male) contest was included within the study to enable analyses of whether respondents were able to detect the quality differences holding gender constant. I did not include a female-female contest, as the purpose of including the non-gendered contest was to conduct a manipulation check.

  17.

    Candidate pictures were only of white individuals to ensure that race would not be a consideration. Popular “non-ethnic” names (e.g., Mary Brooks and David Smith) in the United States were selected for the same reason. Names were selected based upon the list of popular baby names, according to the US Social Security Administration (see http://www.ssa.gov/OACT/babynames/).

  18.

    Pictures were included, as typical voter guides include pictures of candidates. However, the literature on “thin slices” demonstrates that pictures can affect voters’ calculations through attractiveness, age assessments, health assessments, etc. (e.g., Biddle and Hamermesh 1998; Mobius and Rosenblat 2006). As such, I include controls for the particular pictures used (see the Empirical Strategy section for details on the adjustment).

  19.

    In other words, individuals could have received a “both strong” race, a “strong female” race, a “strong male” race, or a “mixed strength” race first. The ordering of the seven election contests did not affect results; these results are available upon request.

  20.

    To the extent that some respondents are unable to assess the candidate quality differentials, the effect of candidate quality on the relationship between prejudice and vote choice will likely be underestimated. If individuating information does not clearly tip the scales in favor of one candidate on the dimension of candidate quality, then relative candidate quality would be less of a consideration, making it more difficult to test the validity of Hypothesis 2.

  21.

    While descriptive statistics for raw measures are reported, for all analyses I recoded all survey measures to lie between 0 and 1.

  22.

    Questions were based upon recommendations in Sanbonmatsu and Dolan (2007) and Sanbonmatsu (2002).

  23.

    To maximize reliability, all bipolar questions had a 7-point rating scale and unipolar questions had a 5-point rating scale (Birkett 1986; Kalton et al. 1980; Lissetz and Green 1975).

  24.

    See http://www.implicit.harvard.edu/implicit.

  25.

    The code for creating this IAT is available upon request.

  26.

    IAT D Effect\(=-[(1/2)(Mean_{stage 6}-Mean_{stage 3})/\sigma _{6,3}+(1/2)(Mean_{stage 7}-Mean_{stage 4})/\sigma _{7,4}]\) (Greenwald et al. 2003). The IAT measure involves computing two mean differences and dividing each difference score by its associated “inclusive” standard deviation. The D effect is then an equal-weight average of the two resulting ratios. Stages 6 and 7 are trials in which pictures of women are paired with “leadership” words and pictures of men are paired with “follower” words. Stages 3 and 4 are trials in which men are paired with “leadership” words and women are paired with “follower” words. Hence, a positive score would indicate that an individual took longer to associate pictures of women with “leadership” words (Mean stage 6) than pictures of men with “leadership” words (Mean stage 3), and longer to associate pictures of men with “follower” words (Mean stage 7) than pictures of women with “follower” words (Mean stage 4). The part of the IAT D effect that accommodates general processing speed—the fact that, irrespective of their attitudes, some individuals respond faster than others on a wide range of cognitive tasks—is the “inclusive” standard deviation. Respondents are obliged to correct errors before proceeding, and latencies are measured to the occurrence of the correct response.
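
As a concrete illustration, the scoring described in this note can be computed as below. This is a sketch following the note’s formula (including its sign convention), with hypothetical variable names; it assumes each stage’s latencies are supplied as a list of error-corrected response times.

```python
from statistics import mean, stdev

def iat_d_effect(stage3, stage4, stage6, stage7):
    """IAT D effect per Greenwald et al. (2003), as summarized in the note.

    Stages 3/4: men paired with "leadership" words, women with "follower" words.
    Stages 6/7: women paired with "leadership" words, men with "follower" words.
    Each argument is a list of response latencies for that stage.
    """
    # The "inclusive" SDs pool the two stages being compared, so that overall
    # processing speed does not drive the score.
    sd_63 = stdev(stage6 + stage3)
    sd_74 = stdev(stage7 + stage4)
    d1 = (mean(stage6) - mean(stage3)) / sd_63
    d2 = (mean(stage7) - mean(stage4)) / sd_74
    return -(0.5 * d1 + 0.5 * d2)  # leading minus follows the note's formula
```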

  27.

    Critics of the IAT note that the IAT is taken in full consciousness on the part of the subjects. According to Burdein et al. (2006), “respondents are aware of the stimulus but not of how their responses have been affected” (361). As such, savvy individuals can control their responses, e.g., by slowing down their responses to all trials to minimize the gap in response times between the blocks with congruent categories and the blocks with incongruent categories (e.g., Fiedler and Bluemke 2005; Steffens 2004). To the extent that respondents were aware of the IAT and the mechanics of how the IAT measure works when completing the survey, this would be a valid criticism. However, the details of the IAT are not well known outside of psychology circles, making this an unlikely problem. The two cases in which respondents were performing too slowly, possibly due to being distracted or an intention to game the measurement (i.e., respondents with extreme outlier latencies), were removed. Additionally, if respondents “faked” the IAT, then the IAT would have minimal predictive power, inducing a downward bias on predictive validity.

  28.

    See http://www.implicit.harvard.edu/implicit/demo/background/raceinfo.html.

  29.

    See Nosek and Hansen (2008) for evidence against this interpretation.

  30.

    The IAT was utilized due to its popularity as an implicit measure; however, there are a number of criticisms of the IAT. One potential concern is the reliability of the IAT measure; however, research on implicit measures finds that the IAT has greater reliability than other implicit measures (Nosek and Hansen 2008; Perez 2013). A second concern, according to some scholars, is that IAT scores could be subject to priming effects (e.g., Mendelberg 2008). A third issue concerns the relative nature of the IAT. The relative merits of various implicit measures are outside the scope of this study. However, given the debate on the merits and weaknesses of various implicit measures, further research utilizing alternative implicit measures is warranted.

  31.

    To measure general political knowledge, participants were quizzed on ten political knowledge items proposed by the Pew Research Center (PRC) in February of 2008 (Delli Carpini and Keeter 1996; Keeter and Suls 2008). Questions were drawn from http://www.pewresearch.org/newsiq/. For each question, the share of individuals who answered correctly, and the mean total score (out of a possible 10), are displayed in Table D.9 in the Online Appendix. Each result is based on a multiple-choice question. A description of each of the knowledge measures is provided in the Online Appendix.

  32.

    A series of ten questions was asked regarding political interest. Eight of these questions concerned the intensity with which respondents take interest in politics and are involved in politics (see Table E.10 in the Online Appendix). A description of each of the interest measures is provided in the Online Appendix. To create a single political interest index, each of the eight non-political-affiliation questions is normalized and summed together. The additive measure is then normalized to lie between 0 and 1 (Cronbach’s α = 0.80).
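
The index construction and reliability check described in this note can be sketched as follows; this is an illustration with hypothetical data and function names, not the author’s code, and `cronbach_alpha` uses the standard formula based on item and total-score variances.

```python
from statistics import pvariance

def rescale01(xs):
    """Min-max normalize a list of scores to lie between 0 and 1."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-response vectors (one list per question)."""
    k = len(items)
    totals = [sum(t) for t in zip(*items)]  # each respondent's total score
    return (k / (k - 1)) * (1 - sum(pvariance(it) for it in items) / pvariance(totals))

def additive_index(items):
    """Normalize each item, sum across items per respondent, rescale to [0, 1]."""
    normed = [rescale01(it) for it in items]
    return rescale01([sum(t) for t in zip(*normed)])
```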

  33.

    Codes for control variable are as follows: (1) sex (1 = female); (2) whether the respondent identifies him/herself as white; (3) age; (4) education level (1 = elementary, 2 = junior high, 3 = some high school education, 4 = high school graduate, 5 = some college, 6 = 2-year college degree, 7 = bachelor’s degree, 8 = some graduate school education, 9 = postgraduate degree); (5) religiosity (0 = atheist, 1 = agnostic, 2 = slightly religious, 3 = moderately religious, 5 = strongly religious); (6) community type (urban, suburban, or rural); (7) household income (0 = below $20,000, 1 = $20,000–40,000, 2 = $40,000–60,000, 3 = $60,000–80,000, 4 = $80,000–100,000, 5 = $100,000–120,000, 6 = $120,000–140,000, 7 = $140,000–160,000, 8 = $160,000–180,000, 9 = $180,000–200,000, 10 = above $200,000); (8) Party Identification (1 = strong Democrat, 2 = not-strong Democrat, 3 = lean Democrat, 4 = neither, 5 = lean Republican, 6 = not-strong Republican, 7 = strong Republican); (9) political knowledge score (number correct out of a possible 10); (10) political interest index (average of eight measures, with each question measured on a 5-point scale (1 = not at all important, 2 = slightly important, 3 = somewhat important, 4 = very important, 5 = extremely important); (11) WORDSUM test score (number correct out of a possible 10). All variables were coded to lie between 0 and 1.

  34.

    WORDSUM is a 10-item verbal ability test employed in the General Social Survey (GSS) of the National Opinion Research Center. The 10 GSS vocabulary items were developed by Thorndike in response to the need for a very brief test of intelligence in a social survey (Thorndike and Gallup 1944). For each of the 10 WORDSUM items, GSS respondents are asked to choose the one word of five possible matches that comes closest in meaning to the following words: lift, concern, broaden, blunt, accustom, chirrup, edible, cloistered, tactility, and sedulous. Words were selected from each level of the vocabulary section of the Institute for Education Research Intelligence Scale: Completion, Arithmetic Problems, Vocabulary, and Directions (Thorndike 1942).

  35.

    To verify that results are not sensitive to empirical strategy, pooled logistic, probit, and linear probability analyses were all conducted. Regression results from the probit model and linear probability model are available upon request. There are no changes in the direction or significance of coefficients across the three dichotomous choice models.

  36.

    While estimates from both empirical strategies are reported in the tables, only results from the random-effects models are reported in the main text.

  37.

    All reported predictive probabilities are mean predictions, estimating the likelihood that a respondent would select the female candidate over an equally qualified male candidate across levels of the attitude measure, adjusting for the overall mean of each of the other predictors (e.g., demographic characteristics).

  38.

    According to Baron and Kenny (1986), for mediation to occur, an independent variable should influence the dependent variable as well as the proposed mediator. Moreover, when the effects of the independent variable and mediator on the dependent variable are simultaneously controlled for, the effect of the mediated variable should be reduced.
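
The Baron and Kenny (1986) steps described above can be illustrated with a small least-squares sketch; this is not the author’s estimation code, and the variable names are hypothetical.

```python
import numpy as np

def ols_coefs(y, *predictors):
    """OLS coefficients (intercept first) via least squares."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, dtype=float), rcond=None)
    return beta

def baron_kenny(x, m, y):
    """Return (total effect of x on y, effect of x on m, direct effect of x given m).

    Mediation is suggested when x predicts both y and the mediator m, and the
    direct effect of x shrinks once m is controlled for.
    """
    total = ols_coefs(y, x)[1]      # step 1: x -> y
    a = ols_coefs(m, x)[1]          # step 2: x -> m
    direct = ols_coefs(y, x, m)[1]  # step 3: x -> y, controlling for m
    return total, a, direct
```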

  39.

    Note that the main effects and the interaction effects are not sensitive to the choice of explicit measure.

  40.

    Note that the word “bias” should be used carefully. One could argue that gender preferences that arise from a belief regarding ideology as opposed to stereotypes regarding gender roles and/or leadership capabilities are not really “biased” or prejudiced; their gender preferences arise from a rational assessment of preferred policy rather than gender attitudes. However, my claim that bias may be a factor arises from the fact that whether non-liberal voters who believe that female candidates are more liberal are included, excluded or considered separately, implicit attitudes matter and are attenuated by individuating information regarding candidate quality. I find that candidate quality can attenuate a predisposition to choose a candidate of a particular sex among non-liberal voters who believe women leaders are more liberal, as well as non-liberal voters without the belief that women are more liberal and liberal voters. Results are available upon request.

  41.

    Investigation of the interaction between gender attitudes and on-line versus memory-based processing of candidate information (Kim and Garrett 2012) is a related stream of research that will also help build a more systematic understanding of individuals’ political information processing and its implications for vote choice.

References

  1. Alexander, D., & Anderson, K. (1993). Gender as a factor in the attribution of leadership traits. Political Research Quarterly, 46(3), 527–545.

  2. Anderson, M. R., Lewis, C. J., & Baird, C. L. (2011). Punishment or reward? An experiment on the effects of sex and gender issues on candidate choice. Journal of Women, Politics and Policy, 32(2), 136–157.

  3. Anzia, S. F., & Berry, C. R. (2011). The Jackie (and Jill) Robinson effect: Why do Congresswomen outperform Congressmen? American Journal of Political Science, 55(3), 478–493.

  4. Arcuri, L., Castelli, L., Galdi, S., Zogmaister, C., & Amadori, A. (2008). Predicting the vote: Implicit attitudes as predictors of the future behavior of decided and undecided voters. Political Psychology, 29(3), 369–387.

  5. Arkes, H. R., & Tetlock, P. E. (2004). Attributions of implicit prejudice, or “Would Jesse Jackson ‘fail’ the Implicit Association Test?”. Psychological Inquiry, 15, 257–278.

  6. Banaji, M. R., & Greenwald, A. G. (1995). Implicit gender stereotyping in judgments of fame. Journal of Personality and Social Psychology, 68, 181–198.

  7. Bargh, J. A. (1999). The cognitive monster: The case against the controllability of automatic stereotype effects. In S. Chaiken & Y. Trope (Eds.), Dual-Process Theories in Social Psychology (pp. 361–382). New York, NY: The Guilford Press.

  8. Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality and Social Psychology, 51, 1173–1182.

  9. Berger, J., Fisek, M. H., Norman, R. Z., & Zelditch, J. M. (1977). Status characteristics and social interactions: An expectations states approach. New York: Elsevier Science.

  10. Biddle, J. E., & Hamermesh, D. S. (1998). Beauty, productivity, and discrimination: Lawyers’ looks and lucre. Journal of Labor Economics, 16(1), 172–201.

  11. Birkett, N. J. (1986). Selecting the number of response categories for a Likert-type scale. Presented at the Annual Meeting of the American Statistical Association.

  12. Braver, S. L., & Braver, W. (1988). The statistical treatment of the Solomon four-group design: A meta-analytic approach. Psychological Bulletin, 104, 150–154.

  13. Burdein, I., Lodge, M., & Taber, C. (2006). Experiments on the automaticity of political beliefs and attitudes. Political Psychology, 27(3), 359–371.

  14. Burrell, B. (1994). A woman’s place is in the house: Campaigning for Congress in the feminist era. Ann Arbor, MI: University of Michigan Press.

  15. Burrell, B. (2008). Political parties, fund-raising, and sex. In B. Reingold (Ed.), Legislative women: Getting elected, getting ahead. Boulder, CO: Lynne Rienner Publishers.

  16. Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Lawrence Erlbaum.

  17. Darcy, R., & Schramm, S. S. (1977). When women run against men. Public Opinion Quarterly, 41, 1–12.

  18. Deber, R. B. (1982). “The fault, dear Brutus”: Women as congressional candidates in Pennsylvania. Journal of Politics, 44, 473–479.

  19. Delli Carpini, M. X., & Keeter, S. (1996). What Americans know about politics and why it matters. New Haven, CT: Yale University Press.

  20. Devine, P. G. (1989). Stereotypes and prejudice: Their automatic and controlled components. Journal of Personality and Social Psychology, 56(1), 5–18.

  21. Devine, P. G., Plant, E. A., Amodio, D. M., Harmon-Jones, E., & Vance, S. L. (2002). The regulation of explicit and implicit race bias: The role of motivations to respond without prejudice. Journal of Personality and Social Psychology, 82, 835–848.

  22. Dolan, K. A. (1998). Voting for women in the ‘Year of the Woman’. American Journal of Political Science, 42(1), 272–293.

  23. Dolan, K. A. (2004). Voting for women: How the public evaluates women candidates. Boulder, CO: Westview Press.

  24. Dolan, K. A. (2010). The impact of gender stereotyped evaluations on support for women candidates. Political Behavior, 32, 69–88.

  25. Dovidio, J. F., Kawakami, K., & Gaertner, S. L. L. (2002). Implicit and explicit prejudice and interracial interactions. Journal of Personality and Social Psychology, 82, 62–68.

  26. Dubois, P. (1984). Voting cues in nonpartisan trial court elections: A multivariate assessment. Law and Society Review, 18(3), 395–436.

  27. Dunton, B. C., & Fazio, R. H. (1997). An individual difference measure of motivation to control prejudiced reactions. Personality and Social Psychology Bulletin, 23(3), 316–326.

  28. Eagly, A. H. (1987). Sex differences in social behavior: A social role interpretation. Hillsdale, NJ: Lawrence Erlbaum.

  29. Erskine, H. (1971). The polls: Women’s role. Public Opinion Quarterly, 35(2), 275–290.

  30. Evans, J. S. B. T. (2003). In two minds: Dual-process accounts of reasoning. Trends in Cognitive Sciences, 7(10), 454–459.

  31. Fazio, R. H., Jackson, J. R., Dunton, B. C., & Williams, C. J. (1995). Variability in automatic activation as an unobtrusive measure of racial attitudes: A bonafide pipeline. Journal of Personality and Social Psychology, 69, 1013–1027.

  32. Fazio, R. H., & Olson, M. A. (2003). Implicit measures in social cognition research: Their meaning and uses. Annual Review of Psychology, 54, 297–327.

  33. Fiedler, K., & Bluemke, M. (2005). Faking the IAT: Aided and unaided response control on the Implicit Association Tests. Basic and Applied Social Psychology, 27, 307–316.

  34. Finn, C., & Glaser, J. (2010). Voter affect and the 2008 US presidential election: Hope and race mattered. Analyses of Social Issues and Public Policy, 10(1), 262–275.

  35. Forsythe, D. R., Heiney, M. M., & Wright, S. S. (1997). Biases in appraisals of women leaders. Group Dynamics, 1, 98–101.

  36. Fox, R. L., & Smith, E. R. A. N. (1998). The role of candidate sex in voter decision-making. Political Psychology, 19(2), 405–419.

  37. Fridkin, K. L., Kenney, P. J., & Woodall, G. S. (2009). Bad for men, better for women: The impact of stereotypes during negative campaigns. Political Behavior, 31, 53–77.

  38. Gaertner, S. L., & Dovidio, J. F. (1986). The aversive form of racism. In J. F. Dovidio & S. L. Gaertner (Eds.), Prejudice, discrimination and racism: Theory and research (pp. 61–89). Orlando, FL: Academic Press.

  39. Galdi, S., Arcuri, L., & Gawronski, B. (2008). Automatic mental associations predict future choices of undecided decision-makers. Science, 321(5892), 1100–1102.

  40. Galdi, S., Gawronski, B., Arcuri, L., & Friese, M. (2012). Selective exposure in decided and undecided individuals: Differential relations to automatic associations and conscious beliefs. Personality and Social Psychology Bulletin, 38(5), 559–569.

  41. Gawronski, B., & Bodenhausen, G. V. (2006). Associative and propositional processes in evaluation: An integrative review of implicit and explicit attitude change. Psychological Bulletin, 132(5), 692–731.

  42. Greenwald, A. G., McGhee, D. E., & Schwartz, J. L. K. (1998). Measuring individual differences in implicit cognition: The Implicit Association Test. Journal of Personality and Social Psychology, 74, 1464–1480.

  43. Greenwald, A. G., Nosek, B. A., & Banaji, M. R. (2003). Understanding and using the Implicit Association Test: An improved scoring algorithm. Journal of Personality and Social Psychology, 85, 197–216.

  44. Greenwald, A. G., Poehlman, T. A., Uhlmann, E., & Banaji, M. R. R. (2009a). Understanding and using the Implicit Association Test: Meta-analysis of predictive validity. Journal of Personality and Social Psychology, 97, 17–41.

  45. Greenwald, A. G., Smith, C. T., Sriram, N., Bar-Anan, Y., & Nosek, B. A. (2009b). Implicit race attitude predicted vote in the 2008 US presidential elections. Analyses of Social Issues and Public Policy, 9, 241–253.

  46. Hall, M. (2001). State supreme courts in American democratic accountability: Probing the myths of judicial reform. American Political Science Review, 95(2), 315–330.

  47. Hoffman, W., Gschwender, T., Castelli, L., & Schmitt, M. (2005). Implicit and explicit attitudes and interracial interaction: The moderating role of situationally available control resources. Group Processes and Intergroup Relations, 11(1), 69–87.

  48. Huddy, L., & Terkildsen, N. (1993). The consequences of gender stereotypes for women candidates at different levels and types of office. Political Research Quarterly, 46(3), 503–525.

  49. Iyengar, S., Valentino, N. A., Ansolabehere, S., & Simon, A. F. (1997). Running as women: Gender stereotyping in political campaigns. In P. Norris (Ed.), Women, media, and politics (pp. 77–98). New York, NY: Oxford University Press.

  50. Johnson, C. A., Schaefer, R., & McKnight, R. N. (1978). The salience of judicial candidates and elections. Social Science Quarterly, 59, 371–378.

  51. Kahn, K. F. (1996). The political consequences of being a woman: How stereotypes influence the conduct and consequences of political campaigns. New York: Columbia University Press.

  52. Kahneman, D. (2003). Maps of bounded rationality: Psychology for behavioral economics. American Economic Review, 93(5), 1449–1475.

  53. Kalton, G., Roberts, J., & Holt, D. (1980). The effects of offering a middle response. The Statistician, 29, 65–79.

  54. Kam, C. D. (2007). Implicit attitudes, explicit choices: When subliminal priming predicts candidate preference. Political Behavior, 29, 343–367.

  55. Keeter, S., & Suls, R. (2008). Awareness of Iraq war fatalities plummets. Retrieved June 26, 2012, from http://pewresearch.org/pubs/762/political-knowledge-update.

  56. Kim, Y. M., & Garrett, K. (2012). On-line and memory-based: Revisiting the relationship between candidate evaluation processing models. Political Behavior, 34, 345–368.

  57. Krupnikov, Y., & Bauer, N. M. (2014). The relationship between campaign negativity, gender and campaign context. Political Behavior, 36, 167–188.

  58. Lau, R. R., & Redlawsk, D. P. (2006). How voters decide: Information processing during election campaigns. New York, NY: Cambridge University Press.

  59. Lawless, J. L. (2004). Women, war, and winning elections: Gender stereotyping in the post-September 11th era. Political Research Quarterly, 57, 479–490.

  60. Lawless, J. L., & Fox, R. L. (2005). It takes a candidate: Why women don’t run for office. Cambridge: Cambridge University Press.

  61. Leeper, M. S. (1991). The impact of prejudice on female candidates: An experimental look at voter inference. American Politics Quarterly, 19, 248–261.

  62. Lissetz, R. W., & Green, S. B. (1975). Effect of the number of scale points on reliability: A Monte Carlo approach. Journal of Applied Psychology, 60, 10–13.

  63. Lodge, M., & Taber, C. (2000). Three steps toward a theory of motivated political reasoning. In A. Lupia, M. D. McCubbins, & S. Popkin (Eds.), Elements of reason (pp. 183–213). Cambridge, UK: Cambridge University Press.

  64. Lodge, M., & Taber, C. (2005). The automaticity of affect for political leaders, groups, and issues: An experimental test of the hot cognition hypothesis. Political Psychology, 26, 455–482.

  65. Malhotra, N., Margalit, Y., & Mo, C. H. (2012). Economic explanations for opposition to immigration: Distinguishing between prevalence and conditional impact. American Journal of Political Science, 57(2), 391–410.

  66. Matland, R. E. (1994). Putting Scandinavian equality to the test: An experimental evaluation of gender stereotyping of political candidates in a sample of Norwegian voters. British Journal of Political Science, 24, 273–292.

  67. McDermott, M. (1997). Voting cues in low-information elections: Candidate gender as a social information variable in contemporary United States elections. American Journal of Political Science, 41(1), 270–283.

  68. McDermott, M. (1998). Race and gender cues in low-information elections. Political Research Quarterly, 51(4), 895–918.

  69. Mendelberg, T. (2008). Racial priming revived. Perspectives on Politics, 6(1), 109–123.

  70. Mobius, M. M., & Rosenblat, T. S. (2006). Why beauty matters. American Economic Review, 96(1), 222–235.

  71. Mondak, J. J. (1993). Public opinion and heuristic processing of source cues. Political Behavior, 15(2), 167–192.

  72. Nosek, B. A. (2005). Moderators of the relationship between implicit and explicit evaluation. Journal of Experimental Psychology: General, 134, 565–584.

  73. Nosek, B. A., & Banaji, M. R. (2001). The go/no-go association test. Social Cognition, 19, 625–664.

  74. Nosek, B. A., Greenwald, A. G., & Banaji, M. R. (2007). The Implicit Association Test at age 7: A methodological and conceptual review. In J. A. Bargh (Ed.), Automatic processes in social thinking and behavior (pp. 265–292). New York, NY: Psychology Press.

  75. Nosek, B. A., & Hansen, J. (2008). The associations in our heads belong to us: Searching for attitudes and knowledge in implicit evaluation. Cognition and Emotion, 22, 553–594.

  76. Payne, B. K., Krosnick, J. A., Pasek, J., Lelkes, Y., Akhtar, O., & Tompson, T. (2010). Implicit and explicit prejudice in the 2008 American presidential election. Journal of Experimental Social Psychology, 46, 367–374.

  77. Perez, E. O. (2010). Explicit evidence on the import of implicit attitudes: The IAT and immigration policy judgments. Political Behavior, 32, 517–545.

  78. Perez, E. O. (2013). Implicit attitudes: Meaning, measurement, and synergy with political science. Politics, Groups, and Identities, 1(2), 275–297.

  79. Popkin, S. (1991). The reasoning voter: Communication and persuasion in presidential campaigns. Chicago: University of Chicago Press.

  80. Sanbonmatsu, K. (2002). Gender stereotypes and vote choice. American Journal of Political Science, 46(1), 20–34.

  81. Sanbonmatsu, K. (2006). Where women run: Gender and party in the American states. Ann Arbor, MI: University of Michigan Press.

  82. Sanbonmatsu, K., & Dolan, K. A. (2007). Gender stereotypes and gender preferences on the 2006 ANES pilot study. A report to the ANES Board of Overseers.

  83. Sapiro, V. (1981). If US Senator Baker were a woman: An experimental study of candidate images. Political Psychology, 3(1/2), 61–83.

  84. Sears, D. O. (1986). College sophomores in the laboratory: Influences of a narrow data base on social psychology’s views of human nature. Journal of Personality and Social Psychology, 51(3), 515–530.

  85. Seltzer, R. A., Newman, J., & Leighton, M. V. (1997). Sex as a political variable. Boulder, CO: Lynne Rienner.

  86. Smith, E. R. A., & Fox, R. L. (2001). A research note: The electoral fortunes of women candidates for Congress. Political Research Quarterly, 54(1), 205–221.

  87. Solomon, R. L. (1949). An extension of control group design. Psychological Bulletin, 46, 137–150.

  88. Squire, P., & Smith, E. R. A. N. (1988). The effect of partisan information on voters in nonpartisan elections. Journal of Politics, 50, 169–179.

  89. Stanovich, K. E. (1999). Who is rational? Studies of individual differences in reasoning. Mahwah, NJ: Lawrence Erlbaum.

  90. Stanovich, K. E. (2002). Individual differences in reasoning. In T. Gilovich, D. Griffin, & D. Kahneman (Eds.), Heuristics and biases. New York, NY: Cambridge University Press.

  91. Stanovich, K. E., & West, R. F. (2000). Individual differences in reasoning: Implications for the rationality debate. Behavioral and Brain Sciences, 23(5), 645–665.

  92. Steffens, M. C. (2004). Is the Implicit Association Test immune to faking? Experimental Psychology, 51, 165–179.

  93. Steinem, G. (2008). Women are never front-runners. New York Times. Retrieved September 4, 2012, from http://www.nytimes.com/2008/01/08/opinion/08steinem.html.

  94. Strack, F., & Deutsch, R. (2004). Reflective and impulsive determinants of social behavior. Personality and Social Psychology Review, 8(3), 220–247.

  95. Thorndike, R. L. (1942). Two screening tests of verbal intelligence. Journal of Applied Psychology, 26, 128–135.

  96. Thorndike, R. L., & Gallup, G. H. (1944). Verbal intelligence of the American adult. Journal of General Psychology, 30, 75–85.

  97. Vianello, M., & Siemienska, R. (1990). Gender inequality: A comparative study of discrimination and participation. Newbury Park, CA: Sage.

  98. Wilson, T. D., Lindsey, S., & Schooler, T. Y. (2000). A model of dual attitudes. Psychological Review, 107(1), 101–126.

  99. Winter, N. J. G. (2010). Masculine Republicans and feminine Democrats: Gender and Americans’ explicit and implicit images of the political parties. Political Behavior, 32, 587–618.

  100. Yeager, D. S., Krosnick, J. A., Chiang, L., Javitz, H. S., Levendusky, M. S., Simpser, A., & Wang, R. (2009). Comparing the accuracy of RDD telephone surveys and internet surveys conducted with probability and non-probability samples. Retrieved June 26, 2012, from www.comm.stanford.edu/faculty/krosnick/Mode04.pdf.

Acknowledgments

This research was funded by a generous grant from the Stanford Interdisciplinary Behavioral Research Fund. A debt of gratitude goes to Jonathan Bendor, Jim Fishkin, Danielle Harlan, Shanto Iyengar, Jon Krosnick, Jennifer Lawless, Neil Malhotra, Josh Pasek, Keith Payne, Efren Perez, Baba Shiv, Zakary Tormala, Michael Weiksner, Christian Wheeler, and Sam Wineburg, as well as participants at the annual meetings of both the Midwest Political Science Association and the American Political Science Association, Stanford’s American Politics Seminar, Stanford’s Political Psychology Research Group Seminar, the Stanford Graduate Writing Workshop, and the Graduate School of Business Political Economy Seminar at Stanford University for helpful comments and advice. David Sleeth-Keppler at the Graduate School of Business Behavioral Lab was invaluable to my efforts as well. All errors and opinions are my own.

Author information

Correspondence to Cecilia Hyunjung Mo.

Additional information

A correction to this article is available online at https://doi.org/10.1007/s11109-017-9436-2.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 533 KB)

About this article

Cite this article

Mo, C.H. The Consequences of Explicit and Implicit Gender Attitudes and Candidate Quality in the Calculations of Voters. Polit Behav 37, 357–395 (2015). https://doi.org/10.1007/s11109-014-9274-4

Keywords

  • Gender
  • Implicit attitudes
  • Vote choice
  • Implicit Association Test (IAT)