Encyclopedia of Personality and Individual Differences

Living Edition
| Editors: Virgil Zeigler-Hill, Todd K. Shackelford


  • Adam Carlitz
  • Kimberly Rios
Living reference work entry
DOI: https://doi.org/10.1007/978-3-319-28099-8_1785-1



Definition

Subjective judgments of what is true or false.


Drawing from traditional philosophical conceptions of belief, psychologists define “beliefs” as fundamental units of thought (i.e., subjective judgments of what is true or false) upon which attitudes are often built. Attitudes, by contrast, are global evaluations – liking or disliking – toward an object (e.g., person, thing, event, idea). For instance, one might hold a belief that Heaven does [does not] exist. One might also hold a positive [negative] attitude – that is, be in favor of [opposed to] – the idea that good people spend eternity in Heaven after death. Beliefs and attitudes, though distinct, are also independent: it is possible to like an idea without believing it, and vice versa. For example, most people enjoy the fantastical idea of having superpowers but do not believe superpowers are possible. Baumeister and Bushman (2011) offer a simple explanation for the distinction between attitudes and beliefs: “Attitudes are for choosing, whereas beliefs are for explaining” (p. 200). In other words, beliefs help us predict and make sense of phenomena such as attitudes, choices, behaviors, and cause-and-effect relationships, whereas attitudes influence our choices and behaviors.

Before new beliefs are formed or existing beliefs are modified, three important cognitive operations take place: sensation (i.e., detecting stimuli via the eyes, ears, nose, skin, and tongue), perception (i.e., decoding and interpreting sensory information), and evaluation (i.e., determining the veracity of the information). However, what we perceive is not necessarily reality; beliefs and knowledge are thus distinct. For example, you may believe that the word “disinterested” means “not interested,” when it actually means “unbiased.” Rather than being objectively true or false (as knowledge is), beliefs exist along a continuum such that degrees of belief are often expressed in terms of certainty or confidence. That is, we can be anywhere from absolutely uncertain to absolutely certain that something is true.

How Are Beliefs Formed?

The process by which we form beliefs has been explained primarily via the Cartesian and Spinozan schools of thought. According to the Cartesian view, potential beliefs are comprehended first and then either accepted or rejected. For example, Connors and Halligan (2015) put forth a five-stage account of belief formation: (1) perception of sensory information, (2) generation of potential beliefs to explain this information, (3) evaluation of the veracity of potential beliefs and their relation to existing beliefs, (4) acceptance of one or more potential beliefs, and (5) consequences of the accepted belief(s) for memory, cognitive processing, interpretation of existing beliefs, and evaluation of subsequent potential beliefs. In contrast, according to the Spinozan view, comprehension and acceptance take place simultaneously and automatically, and only later are beliefs evaluated to determine whether they will be maintained or rejected. For instance, the two-factor framework for explaining delusional beliefs (Coltheart et al. 2011) suggests that, when presented with perceptual input (e.g., a rabbit in the sky), we first automatically accept a potential belief (e.g., a rabbit is flying in the sky) and only later evaluate it against alternative potential beliefs (e.g., all rabbits can fly vs. a rabbit is falling from a tall structure vs. this “rabbit” is actually a bird that merely resembles a rabbit). We evaluate the accepted belief against the alternatives according to which best explains the perceived information and is most consistent with our existing beliefs. Because our existing beliefs likely contain the proposition “rabbits cannot fly,” one of the alternatives is more likely to be accepted and stored in memory. Also supporting the Spinozan view, Wyer and Radvansky’s (1999) research suggests that individuals perceive and comprehend information based on its similarity to existing information stored in memory. If the similarity exceeds a certain point, the new information is not only comprehended but also regarded as true. Regardless of which model is most accurate, individuals are susceptible to many cognitive processing biases, which are very much capable of influencing belief formation and its consequences.

How Are Beliefs Influenced and Modified?

Belief formation mostly occurs implicitly (i.e., outside conscious awareness) and thus employs fast and efficient, rather than slow and deliberative, thinking. Cognitive processing of this sort frequently relies on judgmental heuristics and increases susceptibility to bias (Kahneman 2013). Consider the availability heuristic (i.e., ease of retrieval): a mental shortcut whereby the beliefs most easily retrieved from memory exert the greatest influence on the evaluation of incoming information (Tversky and Kahneman 1973). Beliefs may be more accessible depending on their novelty, frequency (i.e., how often they are conjured, reinforced, and/or applied), and context. For example, people watching the news are more likely to hear about accidental deaths than about deaths caused by diabetes. Accordingly, participants in one study judged accidental deaths to be over 300 times more likely than deaths from diabetes – even though death from diabetes is roughly four times more likely (Kahneman 2013). Though easily available information can exert an undue influence, so too can our existing beliefs. Confirmation bias, for instance, is the tendency to favor information that adheres to our existing beliefs (Kahneman 2013). Similarly, the backfire effect refers to people’s tendency to reinforce their existing beliefs when presented with opposing evidence. Both biases are motivated by an overall desire to avoid meaning threats and maintain cognitive consistency. As such, confirmation bias and the backfire effect are most likely to affect strongly held beliefs, making existing beliefs very difficult to modify even in the face of robust disconfirmatory evidence.

Our strongest and oldest beliefs are sometimes referred to as foundational or basic beliefs and are central to our belief systems (i.e., interconnected sets of mutually supportive beliefs). Foundational beliefs link to other beliefs, which themselves link to others, and so on – thereby forming associative belief networks. If foundational beliefs are somehow undermined, other beliefs within the belief network will be jeopardized as well. Consequently, foundational beliefs are especially important to our understanding of the world and very difficult to modify.

However, there are circumstances in which people are motivated to modify even their foundational beliefs. One such circumstance is cognitive dissonance, an aversive mental state that occurs when holding conflicting thoughts, beliefs, and/or attitudes. To illustrate, someone might strongly believe in both creationism and evolution. If these beliefs are perceived as inconsistent, he or she will likely experience internal conflict and be driven to modify one or both beliefs.

What Are the Purposes of Beliefs?

Humans have an innate desire to make sense of the world. Beliefs have two principal functions toward this end: (1) to understand and explain phenomena and (2) to make predictions about the future. However, when people perceive their beliefs are unable to accomplish these functions (e.g., when their beliefs are inconsistent or conflict with one another as with cognitive dissonance), they experience threat – an anxious mental and physiological response characterized by a fear of potential harm. To alleviate the negative consequences of such threats, people employ meaning-making strategies such as generalization and representativeness.

Because belief systems are incomplete representations (i.e., they do not contain all possible information about their referents) and this incompleteness can be threatening, people often rely on generalization to fill gaps within their representations. Generalization is a largely implicit process that allows for quick and efficient predictions about specific objects and broader categories. Inductive generalization occurs when people apply their beliefs about a set of exemplars to the general category. Consider the category birds containing the following exemplars: robin, hawk, and penguin. One might reason that because robins, hawks, and penguins have beaks, all birds must have beaks. By contrast, deductive generalization occurs when one applies information about a general category to its exemplars. For example, from the belief that birds can fly, one might conclude that robins, hawks, and penguins can fly. Though generalizations usually yield accurate predictions, they are not infallible. For instance, you may have noticed that the above example of deductive generalization produced the erroneous conclusion that penguins can fly. Indeed, there are circumstances under which making strong generalizations can be problematic.
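The two directions of generalization described above can be sketched as a toy computation. The exemplar properties below are illustrative assumptions, not empirical data; the sketch simply shows why projecting a category-level belief onto exemplars can fail where the exemplar-to-category direction happens to hold.

```python
# Toy illustration of the two directions of generalization.
# The exemplars and their properties are assumed for illustration only.

birds = {
    "robin":   {"has_beak": True, "can_fly": True},
    "hawk":    {"has_beak": True, "can_fly": True},
    "penguin": {"has_beak": True, "can_fly": False},
}

def exemplars_to_category(exemplars, prop):
    """Generalize from exemplars to the category: conclude the category
    has `prop` only if every observed exemplar has it."""
    return all(features[prop] for features in exemplars.values())

def category_to_exemplars(category_belief, exemplars):
    """Apply a category-level belief uniformly to each exemplar."""
    return {name: category_belief for name in exemplars}

# "All birds have beaks" holds for every exemplar, so the generalization is safe here.
print(exemplars_to_category(birds, "has_beak"))   # True

# Applying "birds can fly" to the exemplars wrongly predicts that penguins fly.
predicted = category_to_exemplars(True, birds)
actual = {name: f["can_fly"] for name, f in birds.items()}
print(predicted["penguin"], actual["penguin"])    # True False
```

The failure is not in the mechanics but in the premise: the category-level belief was itself an overgeneralization, so every exemplar inherits the error.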

When making generalizations, people assign disproportionate weight to the beliefs most representative (i.e., most typically true) of the object in question. To illustrate this representativeness heuristic (Tversky and Kahneman 1973), consider seeing a woman reading The New York Times on a New York subway. In this particular situation, our most prototypical belief about highly educated women might be that they enjoy acquiring information more than do those less educated. As a result, people tend to believe the woman on the subway is more likely to have a PhD than no college degree (Kahneman 2013). In making such predictions, people seem to ignore several important factors, such as how many more people without a college degree there are relative to people with PhDs. Using such stereotypical beliefs to make predictions can lead not only to forecasting errors but also to negative consequences for those being stereotyped.
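The role of the neglected base rate can be made concrete with a back-of-the-envelope calculation. All probabilities below are hypothetical values chosen for illustration, not figures from the cited work; the point is only that a large enough base rate outweighs a stereotype-consistent likelihood.

```python
# Hypothetical base-rate sketch for the subway example above.
# All probabilities are made-up illustrative values, not empirical data.

base_phd = 0.02           # assumed share of riders with a PhD
base_no_degree = 0.60     # assumed share of riders with no college degree

# Assumed likelihood of reading The New York Times, given education level.
reads_given_phd = 0.40
reads_given_no_degree = 0.05

# Joint probabilities: P(group AND reads the Times).
joint_phd = base_phd * reads_given_phd                     # 0.02 * 0.40 = 0.008
joint_no_degree = base_no_degree * reads_given_no_degree   # 0.60 * 0.05 = 0.030

# Despite the stereotype-consistent likelihood favoring PhDs,
# the much larger base rate makes "no college degree" the better bet.
print(joint_no_degree > joint_phd)  # True
```

Judging only by how representative "Times reader" is of "PhD" amounts to comparing the likelihoods (0.40 vs. 0.05) while ignoring the base rates, which is exactly the error the heuristic produces.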

What Are Some Special Cases of Beliefs?

Stereotypes are commonly held, stable beliefs about members of a social category (e.g., race, gender, ethnicity, nationality, age). They are a specific form of generalization and an application of the representativeness heuristic described above. Though there are exceptions, stereotypes are often oversimplified and incorrect, as they are especially likely to develop when people have little information about a group or limited time to process information. For example, people are more prone to apply their existing stereotypes about Asian-Americans (e.g., shy, polite, short) when they are cognitively busy (Gilbert and Hixon 1991), and individuals are more susceptible to stereotyping outgroup members as “all the same” when they have not had much contact with the outgroup in question (Islam and Hewstone 1993). Negative stereotypes can have pernicious consequences for their targets (e.g., women who are made aware of the stereotype that women are bad drivers perform more poorly on a driving simulation task than do those not made aware of the stereotype, regardless of whether they believe the stereotype is true; Yeung and von Hippel 2008). However, positive stereotypes can also be problematic for members of targeted groups who fear they cannot conform to the stereotype. Consider the stereotype that Asian-Americans are good at math. This stereotype can make it difficult for members of the target group to meet society’s high expectations of them and can produce feelings of inferiority among those who feel they are not good at math.

Another special case of beliefs involves religion. Religious beliefs are themselves powerful tools for dealing with psychological threat. Indeed, religious beliefs prescribe appropriate behavior – especially helpful during uncertainty – and offer individuals a sense of meaning. Many scholars theorize that religious beliefs developed from our desire to understand the world. For instance, rather than endure uncertainty about the origins and purpose of human life, most people rely on their belief in God (or other such supernatural beings; Park 2005). Notably, though, religious beliefs differ from the other beliefs discussed thus far in that the former require faith (i.e., conviction in one’s beliefs despite a lack of confirmatory evidence). Because faith-based (religious) beliefs do not require empirical evidence, they can be neither proven true nor falsified. As a result, and in addition to their perceived explanatory power, faith-based (religious) beliefs have endured for millennia and continue to protect against psychological threat.


In sum, beliefs are distinct from both attitudes and knowledge. Though there is some disagreement about how beliefs are initially formed, the conditions under which beliefs are most influential as well as the impact of biases and heuristics are well documented. Additionally, beliefs can take particular forms, such as stereotypes or faith-based (religious) beliefs. Given people’s desires to simplify and generate meaning in a complex world, it is little wonder that beliefs are such powerful determinants of attitudes and behavior.



References

  1. Baumeister, R. F., & Bushman, B. J. (2011). Social psychology and human nature (2nd ed.). Boston: Wadsworth.
  2. Coltheart, M., Langdon, R., & McKay, R. (2011). Delusional belief. Annual Review of Psychology, 62, 271–298.
  3. Connors, M. H., & Halligan, P. W. (2015). A cognitive account of belief: A tentative road map. Frontiers in Psychology, 5, 1588.
  4. Gilbert, D. T., & Hixon, J. G. (1991). The trouble of thinking: Activation and application of stereotypic beliefs. Journal of Personality and Social Psychology, 60(4), 509–517.
  5. Islam, M. R., & Hewstone, M. (1993). Dimensions of contact as predictors of intergroup anxiety, perceived out-group variability, and out-group attitude: An integrative model. Personality and Social Psychology Bulletin, 19(6), 700–710.
  6. Kahneman, D. (2013). Thinking, fast and slow. New York: Farrar, Straus and Giroux.
  7. Park, C. L. (2005). Religion as a meaning-making framework in coping with life stress. Journal of Social Issues, 61(4), 707–729.
  8. Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5(2), 207–232.
  9. Wyer, R. S., Jr., & Radvansky, G. A. (1999). The comprehension and validation of social information. Psychological Review, 106(1), 89–118.
  10. Yeung, N. C. J., & von Hippel, C. (2008). Stereotype threat increases the likelihood that female drivers in a simulator run over jaywalkers. Accident Analysis & Prevention, 40(2), 667–674.

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Ohio University, Athens, USA

Section editors and affiliations

  • Catherine Cottrell
  1. Division of Social Sciences, New College of Florida, Sarasota, USA