
Accuracy and Bias in Perceptions of Political Knowledge

  • Original Paper
  • Published in Political Behavior

Abstract

Learning through social communication is promoted when citizens are able to identify which of their associates is likely to possess the necessary political information. This paper examines the factors that influence individuals’ evaluations of political expertise. Actual political expertise plays a large role in perceived expertise, but mistakes are made. These are largely the result of assuming that those engaged in politics must also be knowledgeable about politics. This paper uses the 1996 Indianapolis-St. Louis Study and the 2000 National Election Study to identify factors that bias levels of perceived political knowledge. The paper concludes by demonstrating that perceived expertise plays a larger role than actual expertise in the social influence process.

Fig. 1


Notes

  1. A direct answer to this question is beyond the scope of this paper. Obviously, perceived expertise serves as a better proxy for actual expertise if there is less bias in people’s perceptions.

  2. For an explanation of why two different questions were asked, see Huckfeldt et al. (1998). An analysis of three data sets, including the ISL, shows that both questions result in networks with similar characteristics (Klofstad et al. 2009). I assume, therefore, that all dyads discuss politics regardless of the particular name generator used to elicit the discussant’s name.

  3. Alternative measures of agreement could have been used (e.g., the average distance between main respondent and discussant attitudes on a series of issues). However, whether individuals are on the same sides of an issue better predicts whether the individuals believe they agree than the actual distance between the pair on a policy scale (Kenny and Jenner 2008).

  4. Besides the measures of objective knowledge, the two differences in ISL and NES measures are education and income. In the NES, education is measured using the respondent’s highest degree attained. Income is measured using six ordered categories as in the ISL, but the cut points between the categories are different.

  5. The questions that were used to construct the ISL and NES objective knowledge scales are listed in the Appendix.

  6. The post-election interviewer rating is preferable for three reasons. First, the pre-election rating is skewed, with more interviewers saying respondents are well informed. Second, the post-election interviewer administered the general knowledge battery and, therefore, should have a better idea of the respondent’s actual level of knowledge. Third, 80% of respondents had the same interviewer for the pre- and post-election surveys, so these interviewers should have learned more about the respondent over the course of both interviews. As a result, the post-election measure should provide a more stringent test of the factors biasing perceptions of knowledge.

  7. None of the models in Table 1 perfectly replicates the models in Huckfeldt (2001, p. 429). The first model in Table 5 in the appendix replicates Huckfeldt’s model; the second adds the variables of interest for this paper. This analysis supports the conclusions from Table 1: both a likelihood-ratio test and the AIC favor the new model even though the AIC penalizes less parsimonious models. A likelihood-ratio test cannot be performed unless both models use the same cases. As a result, the n in the Huckfeldt replication is lower than in the original because of missing data on the additional variables.

  8. Predicted values are calculated by varying the value of the variable of interest while holding all other variables at their means. Predicted values are based on coefficient estimates from the first model in Table 1.
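A generic sketch of this procedure for an ordered logit (the actual specification from Table 1 is not reproduced here; the coefficients, cutpoints, and variable ordering below are invented for illustration):

```python
import numpy as np

def ordered_logit_probs(x, beta, cutpoints):
    """Predicted category probabilities for one observation under an
    ordered logit: P(y <= k) = logistic(cutpoint_k - x'beta)."""
    xb = float(np.dot(x, beta))
    cum = 1.0 / (1.0 + np.exp(-(np.asarray(cutpoints, dtype=float) - xb)))
    cum = np.append(cum, 1.0)                # P(y <= top category) = 1
    return np.diff(np.insert(cum, 0, 0.0))   # per-category probabilities

# Hypothetical values for illustration only.
means = np.array([2.5, 0.6, 14.0])   # all other variables held at their means
beta = np.array([0.8, 0.5, 0.1])
cutpoints = [2.0, 5.0]               # three perceived-knowledge categories

for value in (1.0, 4.0):             # vary only the variable of interest
    x = means.copy()
    x[0] = value
    print(value, np.round(ordered_logit_probs(x, beta, cutpoints), 3))
```

With a positive coefficient, raising the variable of interest shifts predicted probability mass toward the higher perceived-knowledge categories while everything else stays fixed at its mean.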

  9. About 13% of the dyads are married.

  10. Discussants at the ninetieth percentile for engagement have their mean political discussion variable set at 3.667 out of 4, are very interested in the campaign, and have participated in three activities. Discussants at the ninetieth percentile for actual knowledge answered all three objective knowledge questions correctly and spent 18 years in school. Discussants at the tenth percentile have their mean political discussion variable set at 1.5; they are not at all interested in the campaign, but did participate in one activity. They answered one objective knowledge question correctly and had 12 years of schooling.

  11. The proper analytic solution to this problem would be to model the selection process first and then use results from that model as part of the perceived knowledge estimation (Achen 1986). I cannot do this, however, because I do not have data on those individuals who were not named as discussants. Since expertise (Ahn et al. 2010) and sex (Huckfeldt and Sprague 1995; Klofstad et al. 2006; Mendez and Osborn 2005) are important factors in the selection process, their effects may be underestimated. The other factors that bias perceptions of political knowledge may also impact the probability of selection and those effects may also be underestimated, but there is no way of knowing with the available data.

  12. One of the major difficulties in measuring objective political knowledge is that some groups are more likely to guess than others. If a respondent says, “I don’t know” in response to a knowledge question, this individual is typically coded as having given an incorrect answer. If a respondent guesses, there is some probability that the person will guess correctly. Thus, groups more likely to guess will register higher levels of political knowledge than they should. This should be less of a problem in the 2000 NES because interviewers discouraged “don’t know” responses. To ensure the results were not driven by the remaining “don’t knows,” I reran the models using two different measures of political knowledge. First, I measured political knowledge as the number of correct answers over the number of questions answered; the denominator varies by person based on how often the respondent said “don’t know.” Second, I randomly assigned answers to those questions to which the respondent answered “don’t know” (Mondak and Anderson 2004). In both cases, the coefficients for the engagement and demographic variables were larger than those reported in Table 2. This suggests that the model in Table 2 may underestimate the amount of bias in the interviewer assessments of the respondent’s political knowledge. I present these results, however, because they provide the most conservative estimates of the amount of bias.
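The two alternative scorings can be sketched as follows (a minimal illustration; the response codes and the assumed number of response options are mine, not the NES codebook’s):

```python
import random

def proportion_correct(answers):
    """Score as correct answers over questions actually answered,
    excluding "don't know" ('dk') responses from the denominator."""
    attempted = [a for a in answers if a != "dk"]
    if not attempted:
        return 0.0
    return sum(a == "correct" for a in attempted) / len(attempted)

def impute_dont_knows(answers, n_options=4, rng=random):
    """Randomly assign an answer to each "don't know", in the spirit of
    Mondak and Anderson (2004): a guess is scored correct with
    probability 1/n_options (n_options is an assumption)."""
    return [("correct" if rng.random() < 1.0 / n_options else "incorrect")
            if a == "dk" else a
            for a in answers]

answers = ["correct", "dk", "incorrect", "correct"]
print(proportion_correct(answers))   # denominator is 3, not 4
```

Under the first measure, respondents who decline to guess are no longer penalized in the denominator; under the second, every respondent has an answer recorded for every item.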

  13. The 2000 NES was conducted by the Center for Political Studies of the Institute for Social Research. Interviewers were trained at regional conferences prior to the study’s implementation, and their work was reviewed by supervisors throughout the interview period to ensure its quality. Interviewers are not given specific instructions as to how to judge a respondent’s expertise. They are, however, aware that they will have to make such a judgment and have experience doing so. More information on the design and implementation of the NES is available at http://www.electionstudies.org/overview/overview.htm.

  14. This is, of course, an imperfect measure of influence because it is unclear if the discussant influenced the main respondent or if the main respondent influenced the discussant. It is, also, possible that an outside factor influenced both individuals. This is, however, a standard measure of discussion partner influence in cross-sectional studies (e.g., Huckfeldt and Sprague 1991).

References

  • Achen, C. H. (1986). The statistical analysis of quasi-experiments. Berkeley: University of California Press.

  • Ahn, T. K., Huckfeldt, R., Mayer, A. K., & Ryan, J. B. (2008). Political experts and the collective enhancement of civic capacity. Paper presented at the annual meeting of the Midwest Political Science Association, Chicago.

  • Ahn, T. K., Huckfeldt, R., & Ryan, J. B. (2010). Communication, influence, and informational asymmetries among voters. Political Psychology. doi:10.1111/j.1467-9221.2010.00783.x.

  • Bartels, L. M. (2002). Beyond the running tally: Partisan bias in political perceptions. Political Behavior, 24(2), 117–150.

  • Chaffee, S., & Frank, S. (1996). How Americans get political information: Print versus broadcast news. Annals of the American Academy of Political and Social Science, 546, 48–58.

  • Delli Carpini, M. X., & Keeter, S. (1996). What Americans know about politics and why it matters. New Haven, CT: Yale University Press.

  • Downs, A. (1957). An economic theory of democracy. New York: Harper & Row.

  • Fiorina, M. P. (1990). Information and rationality in elections. In J. A. Ferejohn & J. H. Kuklinski (Eds.), Information and democratic processes (pp. 329–342). Urbana, IL: University of Illinois Press.

  • Fiske, S. T., & Pavelchak, M. A. (1986). Category-based versus piecemeal-based affective responses: Developments in schema-triggered affect. In R. M. Sorrentino & E. T. Higgins (Eds.), Handbook of motivation and cognition: Foundations of social behavior (pp. 167–203). New York: Guilford Press.

  • Huckfeldt, R. (2001). The social communication of political expertise. American Journal of Political Science, 45(2), 425–438.

  • Huckfeldt, R., Levine, J., Morgan, W., & Sprague, J. (1998). Election campaigns, social communication, and the accessibility of perceived discussant preference. Political Behavior, 20(4), 263–294.

  • Huckfeldt, R., & Sprague, J. (1991). Discussant effects on vote choice: Intimacy, structure and interdependence. Journal of Politics, 53(1), 122–158.

  • Huckfeldt, R., & Sprague, J. (1995). Citizens, politics, and social communication: Information and influence in an election campaign. New York: Cambridge University Press.

  • Kenny, C., & Jenner, E. (2008). Direction versus proximity in the social influence process. Political Behavior, 30(1), 73–95.

  • Klofstad, C., McClurg, S. D., & Rolfe, M. (2006). Family members, friends and neighbors: Differences in personal political networks. Paper presented at the annual meeting of the Midwest Political Science Association, Chicago.

  • Klofstad, C., McClurg, S. D., & Rolfe, M. (2009). Measurement of political discussion networks: A comparison of two ‘name generator’ procedures. Public Opinion Quarterly, 73(3), 462–483.

  • Kuklinski, J. H., & Quirk, P. J. (2000). Reconsidering the rational public: Cognition, heuristics, and mass opinion. In A. Lupia, M. D. McCubbins, & S. L. Popkin (Eds.), Elements of reason: Cognition, choice, and the bounds of rationality. New York: Cambridge University Press.

  • Kuklinski, J. H., Quirk, P. J., Jerit, J., Schwieder, D., & Rich, R. F. (2000). Misinformation and the currency of democratic citizenship. Journal of Politics, 62(3), 790–816.

  • Lau, R. R., Andersen, D. J., & Redlawsk, D. P. (2008). An exploration of correct voting in recent U.S. presidential elections. American Journal of Political Science, 52(2), 395–411.

  • Lau, R. R., & Redlawsk, D. P. (2001). Advantages and disadvantages of cognitive heuristics in political decision making. American Journal of Political Science, 45(4), 951–971.

  • Lau, R. R., & Redlawsk, D. P. (2006). How voters decide: Information processing during election campaigns. New York: Cambridge University Press.

  • Lazarsfeld, P. F., Berelson, B., & Gaudet, H. (1948). The people’s choice: How the voter makes up his mind in a presidential campaign. New York: Columbia University Press.

  • Lippmann, W. (1922). Public opinion. New York: Free Press.

  • Lodge, M., & Taber, C. (2000). Three steps toward a theory of motivated political reasoning. In A. Lupia, M. D. McCubbins, & S. L. Popkin (Eds.), Elements of reason: Cognition, choice, and the bounds of rationality. New York: Cambridge University Press.

  • Lupia, A. (2006). How elitism undermines the study of voter competence. Critical Review, 18, 217–232.

  • Lupia, A., & McCubbins, M. D. (1998). The democratic dilemma: Can citizens learn what they need to know? New York: Cambridge University Press.

  • Luskin, R. C. (1990). Explaining political sophistication. Political Behavior, 12(4), 331–361.

  • McClurg, S. D., & Wade, M. (2006). He said, she said: The interpersonal foundations of the gender gap. Paper presented at the annual meeting of the Midwest Political Science Association, Chicago.

  • Mendez, J., & Osborn, T. (2005). Gender crossfire? The political discussion of women and men. Paper presented at the annual meeting of the Midwest Political Science Association, Chicago.

  • Mendez, J. M., & Osborn, T. (2010). Gender and the perception of knowledge in political discussion. Political Research Quarterly, 63(2), 269–279.

  • Mondak, J. J. (1995). Media exposure and political discussion in U.S. elections. Journal of Politics, 57(1), 62–85.

  • Mondak, J. J., & Anderson, M. R. (2004). The knowledge gap: A reexamination of gender-based differences in political knowledge. Journal of Politics, 66(2), 492–512.

  • Murphy, N. A., Hall, J. A., & Colvin, C. R. (2003). Accurate intelligence assessments in social interactions: Mediators and gender effects. Journal of Personality, 71(3), 465–493.

  • Paulhus, D. L., & Morgan, K. L. (1997). Perceptions of intelligence in leaderless groups: The dynamic effects of shyness and acquaintance. Journal of Personality and Social Psychology, 72(3), 581–591.

  • Price, V., & Zaller, J. (1993). Who gets the news? Alternative measures of news reception and their implications for research. Public Opinion Quarterly, 57(2), 133–164.

  • Richey, S. (2008). The autoregressive influence of social network political knowledge on voting behaviour. British Journal of Political Science, 38(3), 527–542.

  • Ryan, J. B. (2010). The effects of network expertise and biases on vote choice. Political Communication, 27(1), 44–58.

  • Sokhey, A. E., & McClurg, S. D. (2008). Social networks and correct voting. Paper presented at the annual meeting of the Midwest Political Science Association, Chicago.

  • Zebrowitz, L. A., Hall, J. A., Murphy, N. A., & Rhodes, G. (2002). Looking smart and looking good: Facial cues to intelligence and their origins. Personality and Social Psychology Bulletin, 28(2), 238–249.


Author information

Correspondence to John Barry Ryan.

Appendix


1996 Indianapolis-St. Louis Study

The Objective Knowledge Scale for the Indianapolis-St. Louis study is constructed using discussant answers to three questions. Those three questions were:

  1. Whose responsibility is it to determine if a law is constitutional or not? Is it the President, the Congress, or the Supreme Court?

  2. What are the first 10 amendments in the Constitution called?

  3. How much of a majority is required for the U.S. Senate and House to override a presidential veto?

Discussants received a point for each correct answer to these questions. “Don’t know” answers were counted as incorrect. The ISL Objective Knowledge Scale ranges from 0 to 3 with a mean of 2.2 and a standard deviation of 0.94. The correlation between perceived knowledge and objective knowledge is 0.24. The table below compares perceptions of knowledge with actual levels of objective knowledge.
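The construction of the 0–3 scale can be sketched as follows (only the rule comes from the text: one point per correct answer, with "don't know" scored as incorrect; the item keys and answer strings are hypothetical shorthand):

```python
# Correct answers to the three ISL items (abbreviated keys are hypothetical).
ISL_KEY = {
    "constitutionality": "supreme court",
    "first_ten_amendments": "bill of rights",
    "veto_override": "two-thirds",
}

def isl_knowledge_score(responses):
    """0-3 scale: one point per correct answer; missing or
    "don't know" responses count as incorrect."""
    score = 0
    for item, correct in ISL_KEY.items():
        answer = responses.get(item, "don't know")
        if answer.strip().lower() == correct:
            score += 1
    return score

print(isl_knowledge_score({"constitutionality": "Supreme Court",
                           "first_ten_amendments": "don't know",
                           "veto_override": "Two-thirds"}))  # 2
```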

 

  Perceived knowledge        Objective knowledge
                             0        1        2        3        Total
  Knows not much at all      15.5%    7.3%     6.2%     2.8%     5.3%
  Knows an average amount    67.0%    72.7%    57.3%    50.3%    56.6%
  Knows a great deal         17.5%    20.0%    36.5%    46.9%    38.1%
  N                          103      205      417      745      1,470

  χ2 = 92.2; 6 df; p = 0.00
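The reported χ² can be recovered, approximately, by converting the column percentages in the table back to counts (a sketch; the rounded counts are my reconstruction from the percentages and column Ns, not the original data):

```python
import numpy as np

# Counts reconstructed from the column percentages and column Ns above,
# rounded to whole discussants (an approximation of the raw data).
observed = np.array([
    [16,  15,  26,  21],   # knows not much at all
    [69, 149, 239, 375],   # knows an average amount
    [18,  41, 152, 349],   # knows a great deal
])

row_totals = observed.sum(axis=1, keepdims=True)
col_totals = observed.sum(axis=0, keepdims=True)
expected = row_totals * col_totals / observed.sum()

chi2 = ((observed - expected) ** 2 / expected).sum()
df = (observed.shape[0] - 1) * (observed.shape[1] - 1)
print(chi2, df)  # statistic lands close to the reported 92.2 with 6 df
```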

2000 National Election Study

The Objective Knowledge Scale for the 2000 National Election Study was constructed using answers to two types of questions: factual questions and relative placements of candidates. The 14 factual questions asked respondents to identify which party controls the House of Representatives, which party controls the United States Senate, the jobs of Trent Lott, William Rehnquist, Tony Blair and Janet Reno, as well as the home state and the religion of the major party presidential and vice-presidential candidates. There are two correct answers for Richard Cheney’s home state because he lived in Texas at the time, but was registered as living in Wyoming. In each case, a correct answer is coded as one and all other responses are coded as zero.

The seven relative placement items asked respondents to place George W. Bush and Al Gore on ideology and six issues: abortion, environmental policy, providing government services versus decreasing spending, government guarantee of jobs and a decent standard of living, government efforts to help blacks, and defense spending. In all cases, a respondent receives credit for a correct answer only if the respondent places both candidates and says Bush has a more conservative position than Gore.
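The placement-scoring rule can be sketched as follows (assuming higher scale values mean more conservative; the function and variable names are illustrative, not the NES codebook's):

```python
def placement_credit(bush_placement, gore_placement):
    """One point only if the respondent places both candidates and
    rates Bush as more conservative (higher) than Gore."""
    if bush_placement is None or gore_placement is None:
        return 0   # failing to place either candidate earns no credit
    return 1 if bush_placement > gore_placement else 0

# Examples on a hypothetical 1-7 ideology scale:
print(placement_credit(6, 2))     # 1: both placed, Bush to Gore's right
print(placement_credit(None, 2))  # 0: Bush not placed
print(placement_credit(3, 5))     # 0: Gore rated more conservative
```

Note that this rule is stricter than a simple factual item: a respondent who knows one candidate's position but cannot place the other earns no credit.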

The NES objective knowledge scale ranges from 0 to 20—no respondents answered all of the questions correctly—with a mean of 8.32 and a standard deviation of 4.74. The correlation between the interviewer ratings and objective knowledge is 0.65.

The table compares the interviewer ratings with the actual levels of objective knowledge (see also Table 5).

  

  Interviewer rating           Objective knowledge
                               0–3       4–6       7–9       10–12     13–20     Total
  Low information level        72.56%    37.27%    19.49%    9.00%     0.29%     26.6%
  Average information level    23.83%    47.88%    47.92%    39.45%    18.48%    35.6%
  High information level       3.61%     14.85%    32.59%    51.56%    81.23%    37.9%
  N                            277       330       313       289       341       1,550

  χ2 = 748.2; 8 df; p = 0.00
Table 5 Comparison with Huckfeldt (2001)

Cite this article

Ryan, J.B. Accuracy and Bias in Perceptions of Political Knowledge. Polit Behav 33, 335–356 (2011). https://doi.org/10.1007/s11109-010-9130-0
