Encyclopedia of Animal Cognition and Behavior

Living Edition
| Editors: Jennifer Vonk, Todd Shackelford

Cognitive Bias

  • Fernando Blanco
Living reference work entry
DOI: https://doi.org/10.1007/978-3-319-47829-6_1244-1




Cognitive bias refers to a systematic (that is, nonrandom and, thus, predictable) deviation from rationality in judgment or decision-making.


Most traditional views of human cognition propose that people tend toward optimality when making choices and judgments. According to this view, which has been pervasive in the cognitive sciences (particularly Psychology and Economics), people behave like rational, close-to-optimal agents, capable of solving simple as well as complex cognitive problems and of maximizing the rewards they obtain from their interactions with the environment. Generally, a rational agent weighs the potential costs and benefits of each action, eventually choosing the option that is more favorable overall. This involves taking into consideration all the information that is relevant to solving the problem, while leaving out any irrelevant information that could contaminate the decision (Stanovich 1999). Whole research areas in the social sciences have been built upon this assumption of rationality (see, e.g., the Homo Economicus theory in Economics).
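The cost-benefit weighing that defines a rational agent amounts to an expected-value calculation. The brief sketch below illustrates the idea; the options and payoffs are invented for illustration and stand in for any decision problem.

```python
# Minimal sketch of a "rational agent": weigh each option's possible
# outcomes by their probabilities and pick the highest expected value.
# The options and payoffs are invented for illustration.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs for one option."""
    return sum(p * payoff for p, payoff in outcomes)

options = {
    "safe_job":  [(1.0, 50)],               # a certain payoff of 50
    "risky_bet": [(0.5, 120), (0.5, -40)],  # expected value 0.5*120 - 0.5*40 = 40
}

best = max(options, key=lambda name: expected_value(options[name]))
print(best)  # "safe_job": 50 > 40
```

Cognitive biases, as described below, are precisely the systematic ways in which real judgments depart from this kind of calculation.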

However, this traditional view has been challenged in recent decades, in light of evidence from Experimental Psychology and related areas. A growing body of experimental evidence suggests that people’s judgments and decisions are often far from rational: they are affected by seemingly irrelevant factors or fail to take into account important information. Moreover, these departures from the rational norm are usually systematic: people fail consistently on the same type of problem, making the same mistake. That is, people seem to be irrational in a predictable way (Ariely 2008). Therefore, a theory that aims to model human judgment and decision-making must, in principle, be able to explain these instances of consistent irrationality, or cognitive biases.

We can begin to examine the concept of cognitive bias by describing a similar, easier-to-depict type of phenomenon: visual illusions. Figure 1 represents a famous object configuration, the “Müller-Lyer illusion” (Howe and Purves 2005). The reader should observe the picture and decide which of the two horizontal segments, a or b, is longer.
Fig. 1

The Müller-Lyer illusion (Adapted from Howe and Purves (2005) by the author)

In Western societies, most people would agree that segment b looks slightly longer than segment a. In fact, despite this common impression, segments a and b have exactly the same length. The segments differ only in the orientation of the adornments that round off their ends (i.e., “arrows” pointing inward or outward), which are responsible for creating the illusion that they have different lengths (Howe and Purves 2005). This very simple visual illusion illustrates how people’s inferences can be tricked by irrelevant information (the adornments), leading to systematic errors in perception. Similar situations occur across a variety of domains and tasks, revealing the presence of cognitive biases not only in perception but also in judgment, decision-making, memory, etc.

Typically, the consequence of a cognitive bias is a form of irrational behavior that is predictable (because it is systematic). Cognitive biases have been proposed to underlie many beliefs and behaviors that are dangerous or problematic for individuals: superstitions, pseudoscience, prejudice, poor consumer choices, etc. In addition, they become especially dangerous at the group level: because these mistakes are systematic rather than random (i.e., one individual’s mistake is not cancelled out by another’s), they can lead to disastrous collective decisions, such as those observed during the recent economic crisis (Ariely 2009).

Potential Causes

Traditionally, research on cognitive biases has tended to elaborate taxonomies, or lists of experimentally documented biases, rather than focusing on providing explanatory frameworks (Hilbert 2012). Consequently, this is how many textbooks and introductory manuals approach the topic: e.g., Baron (2008) lists a total of 53 biases in his introductory book Thinking and Deciding.

Nonetheless, some authors have attempted to offer coherent conceptual frameworks to explain what all these biases have in common and how they originate. Here, we list some of the approaches that have tried to account for cognitive biases: limited cognitive resources, the influence of motivation and emotion, social influence, and heuristics. They will be examined in turn.

Limited Cognitive Resources

First, an obvious explanation for many reported cognitive biases is the limited processing capacity of the human mind. For example, since people’s memory does not possess infinite capacity, we cannot consider an arbitrarily large amount of information when we make an inference or decision, even if all of that information is relevant to the problem. Rather, we are forced to focus on a subset of the available information, which we cannot process in full detail either. Therefore, in most complex problems, the optimal, truly rational solution is out of reach, and we can only aim at “bounded rationality” (Kahneman 2003), that is, making the best decision after taking into consideration a limited amount of information. This explanation works well for certain instances of cognitive bias, such as the problems associated with thinking in probabilities. In particular, research has documented that people tend to neglect base rate information when making Bayesian inferences (i.e., a cognitive bias). However, the same problem is solved much more easily when it is formulated in terms of frequencies rather than probabilities (Gigerenzer and Hoffrage 1995). The calculations become simpler in the latter case simply because of the more natural presentation format, and the bias seems sensitive to this manipulation.
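The base-rate problem can be made concrete with a classic medical-screening example. Both computations below yield the same posterior probability, but the frequency version maps onto simple counts of people, which is the format people handle more easily (Gigerenzer and Hoffrage 1995). The disease prevalence and test characteristics are hypothetical numbers chosen for illustration.

```python
# Hypothetical screening test (numbers chosen for illustration):
# 0.1% of people have the disease; the test detects 99% of true cases
# but also returns 5% false positives among the healthy.
base_rate = 0.001   # P(disease)
sensitivity = 0.99  # P(positive | disease)
false_pos = 0.05    # P(positive | no disease)

# Probability format: Bayes' theorem.
p_positive = base_rate * sensitivity + (1 - base_rate) * false_pos
posterior = base_rate * sensitivity / p_positive

# Frequency format: the same inference as counts out of 100,000 people.
n = 100_000
sick = n * base_rate                  # 100 people have the disease
sick_pos = sick * sensitivity         # 99 of them test positive
healthy_pos = (n - sick) * false_pos  # 4,995 healthy people also test positive
posterior_freq = sick_pos / (sick_pos + healthy_pos)

print(round(posterior, 4), round(posterior_freq, 4))  # both ≈ 0.0194
```

Neglecting the base rate means intuitively answering something close to the 99% sensitivity, when the correct posterior is under 2%; in the frequency format, comparing 99 true positives with 4,995 false positives makes the low posterior almost self-evident.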

Emotion and Motivation

Another potential cause of at least some cognitive biases is emotion, or affect. Traditionally, research on decision-making has understood rationality as “formal consistency,” that is, conforming to the laws of probability and utility theory. Emotions are thus left out of the rational decision-making process, as they can only contaminate the results. However, subsequent research shows that emotions play a substantial role in decision-making (Bechara and Damasio 2005) and suggests that without emotional evaluations, decisions would never reach optimality. After all, emotions are biologically relevant because they affect behavior: e.g., we fear what can harm us and consequently decide to avoid it. Several cognitive biases could be explained by the influence of emotions. For instance, the loss-aversion bias (Kahneman and Tversky 1984) consists of a preference for avoiding losses over acquiring gains of equivalent value, and it could be driven by the asymmetry in the affective value of the two types of outcomes. Other examples concern moral judgment. Many typical studies on moral judgment present participants with fictitious situations such as the famous “trolley dilemmas” (Bleske-Rechek et al. 2010). In one simple variant of the problem, participants are told that a trolley is out of control, barreling down the tracks. Close ahead, five people are tied up on the track. The participant could pull a lever to divert the trolley to a side track, on which one person is tied up and unable to escape. Would the participant pull the lever? From a rational viewpoint, the utility calculus seems straightforward: it is preferable to save five people (and kill one) than the opposite. Most people behave according to this utilitarian, rational rule.
However, we know that participants’ decisions in these dilemmas are sensitive to affective manipulations: for example, if the person lying on the side track is a close relative of the participant (or their romantic partner), this affects the decision (Bleske-Rechek et al. 2010). Therefore, emotions can drive some systematic deviations from the rational norm.
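The asymmetry underlying loss aversion can be sketched with a prospect-theory-style value function, in which losses are weighted more heavily than gains of the same size. The parameter values below (curvature 0.88, loss weight 2.25) are the commonly cited estimates from Tversky and Kahneman's work; the qualitative effect holds for any loss weight greater than 1.

```python
# Sketch of a prospect-theory-style value function (loss aversion).
# alpha curves the function; lam > 1 makes losses loom larger than gains.
# alpha = 0.88 and lam = 2.25 are commonly cited estimates; any lam > 1
# reproduces the qualitative asymmetry.

def value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain (x >= 0) or a loss (x < 0)."""
    return x**alpha if x >= 0 else -lam * (-x)**alpha

# A 50/50 gamble between +100 and -100 has a monetary expected value of 0,
# yet its subjective value is negative, so people tend to reject it.
gamble = 0.5 * value(100) + 0.5 * value(-100)
print(round(gamble, 1))  # negative (about -36)
```

Because the loss of 100 feels more than twice as bad as the gain of 100 feels good, a fair gamble ends up subjectively unattractive, which is exactly the preference pattern the loss-aversion bias describes.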

A related potential source of bias is motivation. Research has shown that people’s inferences can be biased by their prior beliefs and attitudes. That is, people can engage in motivated reasoning (Kunda 1990): when solving a task, they choose the beliefs and strategies that are most likely to yield the conclusions they want to reach. For example, participants who had to judge the effectiveness of gun-control measures in preventing crime processed the contingency information in a biased way that eventually aligned with their initial attitudes toward gun control (i.e., “motivated numeracy”; Kahan et al. 2012). This could explain many other instances of bias in reasoning and behavior.

Social Influence

Certain cognitive biases could be produced, or at least modulated, by social cues. For instance, Yechiam et al. (2008) examined the risk-aversion bias in a gambling task: typically, the bias consists of people preferring a sure outcome over a gamble of equivalent or higher value. Crucially, the researchers found that the bias was reduced when participants were observed by their peers. Other cognitive biases have a more fundamental link to social cues. This is the case of the bandwagon bias, which describes people’s tendency to conform to opinions expressed earlier by others, and which strongly influences collective behaviors such as voting in elections (Obermaier et al. 2015). So far, we do not know whether the contribution of social cues to this and related biases is due to people’s preference to conform to their peers or to people using others’ opinions as a source of information when forming their judgments.

Heuristics and Mental Shortcuts

Perhaps the most successful attempt to provide a coherent framework for understanding cognitive biases is Kahneman and Tversky’s research program on heuristics (Kahneman et al. 1982). The rationale of this approach is as follows. First, making rational choices is not always feasible, or even desirable, for several reasons: (a) it takes time to effortfully collect and weigh all the evidence needed to solve a problem; (b) it requires investing substantial cognitive resources that could be used for other purposes; and (c) quite often a rough approximation to the best solution of a problem is “good enough,” whereas continuing to work toward the optimal solution is so expensive that it does not pay off. Therefore, the mind uses heuristics, or mental shortcuts, to arrive at conclusions in a fast-and-frugal way. A heuristic is a simple rule that does not aim to capture the problem in all its complexity or to arrive at the optimal solution, but that produces a “good enough” solution quickly and with minimal effort. At this point, we can return to the example of the Müller-Lyer illusion described above and see that it can be explained as a heuristic-based inference. In real life, the visual system must handle three-dimensional information to make inferences about distances and depth. This is highly demanding in terms of resources. However, the task can be simplified by relying on simple rules. Our cognitive system seems to interpret visual input by making use of certain invariant features of the environment. In a three-dimensional world, two segments that converge at a point (as in Fig. 1) usually indicate a vertex (e.g., the edges formed by adjacent walls and ceilings typically display this configuration). The orientation of the edges can be used to predict whether the vertex is incoming (panel a) or outgoing (panel b) (first invariant). Furthermore, an edge far away from the observer will look smaller than one that is near (second invariant).
In the real world that we navigate most of the time, applying these two simple rules serves to correctly infer depth and distance without implementing a costly information-processing procedure. However, Fig. 1 is not a three-dimensional picture, and this is why applying the simple rules (heuristics) leads to an error, or illusion: incorrectly perceiving depth in a flat, two-dimensional arrangement of lines and, thus, incorrectly judging the sizes of the segments. Nonetheless, Fig. 1 can be thought of as an exception, whereas most visual configurations we see every day actually conform to the simple rules, which explains why the heuristic is useful. In conclusion, as illustrated by the visual illusion example, judgment can be biased by the operation of heuristics that exploit regularities, or invariants, in the world. These heuristics are simple rules that can be expressed in an intuitive manner (such as “distant objects look smaller”) and need little time and effort to reach a conclusion (i.e., they are economical). While they lead to conclusions that approximate a good enough solution most of the time, they can also lead to systematic mistakes.

A great deal of experimental research has documented several heuristics that could underlie many cognitive biases. Perhaps the most famous ones are the representativeness heuristic, the availability heuristic, and the anchoring-and-adjustment heuristic (Gilovich et al. 2002).


Representativeness

This heuristic is based on similarity or belonging and can be intuitively formulated as “if A is similar to B (or belongs to group B), then A will work in the same way as B does.” That is, when an exemplar is perceived as representative of a group, all the features that are typical of the group are attributed to the exemplar. An example is deducing that a given person is smart just because he studies at a university or wears glasses. This heuristic can explain why people commit certain errors when solving Bayesian reasoning problems, such as base-rate neglect (Bar-Hillel 1980).


Availability

The availability heuristic is based on the ease with which a representation comes to mind. If a certain idea is easy to evoke or imagine, it is incorrectly judged as likely to happen. A classic example is the reported overestimation of the likelihood of an airplane crash after watching a movie about airplane accidents. This heuristic can account for many common biases, such as the recency bias: the piece of information presented last is also the most easily remembered, and therefore it is usually weighted more heavily than other pieces of information, which can lead to serious errors in many domains (e.g., judicial). Another instance of the availability heuristic at work is the evaluation of near-win events: the runner who finishes fourth in a race often feels worse than the one who finishes tenth. This happens because the fourth-place runner can vividly imagine having finished third.

Anchoring and Adjustment

This is sometimes considered a special case of the availability heuristic. Knowing a tentative answer to a question biases further attempts to answer it (subsequent answers move closer to the anchor). For instance, imagine that you are asked: “In which year did Albert Einstein visit the USA for the first time?” Let us assume that you do not know the answer, so you must guess. Most people would pick a number like “1950.” Now, imagine that you are given a tentative range of years within which to answer. Research shows that people who are given a range of 1200 to 2000 actually answer with lower numbers than those who are given a range of 1900 to 2000. Respondents seem to use the range as a tentative response (anchor) and adjust their judgment away from it, typically insufficiently. The anchoring effect has been extensively studied in consumer behavior. For example, arbitrary numbers (e.g., the last digits of participants’ social security numbers) can act as anchors that affect the amount of money participants are willing to pay for a series of items (Ariely et al. 2003).
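A toy model of anchoring-and-adjustment treats the final estimate as the result of adjusting only part of the way from the anchor toward one's own belief. In the sketch below, the anchor is taken to be the midpoint of the offered range, and the adjustment weight is an arbitrary illustrative value, not an empirical estimate.

```python
# Toy anchoring-and-adjustment model: the final estimate is pulled toward
# the anchor because adjustment away from it stops too early.
# The adjustment weight (0.6) is an arbitrary illustrative value.

def anchored_estimate(anchor, own_belief, adjustment=0.6):
    """Move only part of the way from the anchor toward one's own belief."""
    return anchor + adjustment * (own_belief - anchor)

belief = 1950  # a respondent's unaided guess, as in the Einstein example
low_anchor = anchored_estimate((1200 + 2000) / 2, belief)   # anchor = 1600
high_anchor = anchored_estimate((1900 + 2000) / 2, belief)  # anchor = 1950
print(low_anchor < high_anchor)  # True: the lower range drags estimates down
```

Any adjustment weight below 1 reproduces the experimental pattern described above: respondents given the 1200–2000 range end up with systematically lower answers than those given the 1900–2000 range, even when their underlying belief is the same.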

A further elaboration of the heuristics approach is the dual-system theory of human cognition (Kahneman 2013). According to this theory, the mind has two working modes: System I is fast, intuitive, heuristics-based, automatic, and frugal, whereas System II is slow, rational, optimality oriented, and resource-greedy. People perform many everyday tasks under System I: when the task is easy, when we need a quick solution, or when an approximate solution (not optimal) is good enough. However, certain task demands can activate System II. For instance, manipulations that reduce cognitive fluency, such as presenting the problem in a nonnative language, can lead to more thoughtful, rational, and bias-free solutions (Costa et al. 2014).

An Evolutionary Perspective

A complementary line of research has focused on understanding why cognitive biases appeared in the first place over the course of evolution, by analyzing their associated benefits. Error management theory (Haselton and Nettle 2006) proposes that cognitive biases (whether produced by heuristics or by any other mechanism) were selected by evolution because they actually offer a survival advantage.

In ancestral environments, there was pressure to make important, life-or-death decisions quickly (e.g., it is better to run away upon sighting a potential predator than to wait until it is clearly visible, but perhaps too close to escape). These conditions foster the development of decision mechanisms that (a) work fast and (b) produce the so-called least costly mistake. In this example, it is better to mistakenly conclude that a predator is in the surroundings than to mistakenly conclude that there is none. We know this because the two errors have very different consequences (an unnecessary waste of time in the former case, death in the latter) (Blanco 2016). Sometimes the least costly mistake is also the least likely mistake, as with the Müller-Lyer illusion above: most of the time that we interpret converging lines as three-dimensional vertices, and as a cue for depth perception, we are right. In general, many cognitive biases seem to systematically favor the conclusion that aligns with the least costly mistake, formulated in either of these ways.
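The least-costly-mistake logic can be written as an expected-cost comparison between the two possible errors. The costs and probabilities below are invented for illustration; the point is only that when one error is vastly more expensive than the other, even weak evidence justifies the "biased" response.

```python
# Error management as an expected-cost comparison (illustrative numbers).
# Fleeing always wastes a little time; staying costs nothing unless a
# predator is actually present, in which case it is catastrophic.
COST_FALSE_ALARM = 1  # cost of fleeing when there is no predator
COST_MISS = 1000      # cost of staying when there is a predator

def should_flee(p_predator):
    """Flee whenever the expected cost of staying exceeds that of fleeing."""
    return p_predator * COST_MISS > COST_FALSE_ALARM

# Even a faint cue (5% chance of a predator) makes fleeing the cheaper bet,
# so a mechanism biased toward false alarms minimizes expected cost.
print(should_flee(0.05))    # True
print(should_flee(0.0005))  # False: below the 1/1000 break-even probability
```

With these numbers, the break-even probability is COST_FALSE_ALARM / COST_MISS = 0.001, so an agent tuned by such asymmetric costs will "err on the side of caution" almost all the time, which is exactly the systematic bias error management theory predicts.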

Additionally, exhibiting certain cognitive biases can even produce other types of benefits, particularly in emotional terms. For example, one well-known cognitive bias is the illusion of control (Langer 1975), which consists of the mistaken belief that one can exert control over outcomes that are actually uncontrollable. The emotional consequences of this bias are noteworthy: a person who thinks that there is nothing one can do to affect a relevant outcome might feel despair, or even become depressed, for it is a sad realization that one has no control over one’s life. In contrast, those who develop the illusion of control (incorrectly) attribute to themselves any positive outcome that happens, so they feel confident and safe. Furthermore, they will even feel motivated to keep trying and producing actions to affect the environment (i.e., the illusion of control produces behavioral persistence). In fact, evidence suggests that mildly depressed people are less likely to show the illusion of control (i.e., the depressive realism effect; Alloy and Abramson 1979), which highlights the connection between cognitive biases and positive emotions.

In sum, at least some cognitive biases (like the illusion of control) seem to be associated with positive outcomes, or with the long-run minimization of costly mistakes, which could have represented an evolutionary advantage for our ancestors. Therefore, the traits that underlie these biases would have been selected for throughout our evolutionary history.

Nonetheless, error management theory has been met with some skepticism. Admittedly, typical cognitive biases do not always align with the least costly mistake. For instance, the same bias that facilitates the quick detection of a hidden predator (e.g., the clustering illusion), thus protecting us from a serious threat, could also produce dangerous decisions in a different context, such as believing that a bogus health treatment, such as quackery, works (Blanco 2016).


Cognitive biases have been defined as a general feature of cognition. As such, they are pervasive and can be observed in a vast variety of domains and tasks. Much has been learned about the impact of these biases on several key aspects of life. For example, cognitive biases could underlie pressing societal issues, such as prejudice and racial hate (Hamilton and Gifford 1976), paranormal belief, or pseudomedicine usage (Blanco 2016), and, more generally, the prevalence of poor decisions in many contexts, such as consumer behavior (Ariely 2008).

Thus, it is not surprising that researchers have tried to find ways to overcome cognitive biases, a practice commonly known as “debiasing” (Larrick 2004; Lewandowsky et al. 2012). Different strategies have been used to develop debiasing techniques. Some are based on increasing the motivation to perform well (under the assumption that people can use normative strategies when solving tasks). Others focus on providing normative strategies to participants, so that these can replace their intuitive (and imperfect) approaches to a problem. Finally, other debiasing interventions take the form of workshops to improve critical thinking and reasoning skills.

One common obstacle that debiasing efforts have encountered is the “bias blind spot” (Pronin et al. 2002): while people can readily identify biases in others’ arguments, they find it difficult to detect similar biases in their own judgment. This is why transmitting scientific knowledge about how cognitive biases work (and which factors affect them) can be a useful complement to debiasing strategies. In sum, advancing our understanding of cognitive biases is in the interest of all of society (Lilienfeld et al. 2009).



References

  1. Alloy, L. B., & Abramson, L. Y. (1979). Judgment of contingency in depressed and nondepressed students: Sadder but wiser? Journal of Experimental Psychology: General, 108(4), 441–485. doi:10.1037/0096-3445.108.4.441.
  2. Ariely, D. (2008). Predictably irrational: The hidden forces that shape our decisions. New York: Harper Collins.
  3. Ariely, D. (2009, August). The end of rational economics. Harvard Business Review, 87(7), 78–84.
  4. Ariely, D., Loewenstein, G., & Prelec, D. (2003). “Coherent arbitrariness”: Stable demand curves without stable preferences. The Quarterly Journal of Economics, 118(1), 73–106. doi:10.1162/00335530360535153.
  5. Bar-Hillel, M. (1980). The base-rate fallacy in probability judgments. Acta Psychologica, 44(3), 211–233. doi:10.1016/0001-6918(80)90046-3.
  6. Baron, J. (2008). Thinking and deciding. New York: Cambridge University Press.
  7. Bechara, A., & Damasio, A. R. (2005). The somatic marker hypothesis: A neural theory of economic decision. Games and Economic Behavior, 52(2), 336–372. doi:10.1016/j.geb.2004.06.010.
  8. Blanco, F. (2016). Positive and negative implications of the causal illusion. Consciousness and Cognition. doi:10.1016/j.concog.2016.08.012.
  9. Bleske-Rechek, A., Nelson, L. A., Baker, J. P., Remiker, M. W., & Brandt, S. J. (2010). Evolution and the trolley problem: People save five over one unless the one is young, genetically related, or a romantic partner. Journal of Social, Evolutionary, and Cultural Psychology, 4(3), 115–127.
  10. Costa, A., Foucart, A., Arnon, I., Aparici, M., & Apesteguia, J. (2014). “Piensa” twice: On the foreign language effect in decision making. Cognition, 130(2), 236–254. doi:10.1016/j.cognition.2013.11.010.
  11. Gigerenzer, G., & Hoffrage, U. (1995). How to improve Bayesian reasoning without instruction: Frequency formats. Psychological Review, 102, 684–704.
  12. Gilovich, T., Griffin, D., & Kahneman, D. (2002). Heuristics and biases: The psychology of intuitive judgment. New York: Cambridge University Press.
  13. Hamilton, D. L., & Gifford, R. K. (1976). Illusory correlation in interpersonal perception: A cognitive basis of stereotypic judgments. Journal of Experimental Social Psychology, 12, 392–407.
  14. Haselton, M. G., & Nettle, D. (2006). The paranoid optimist: An integrative evolutionary model of cognitive biases. Personality and Social Psychology Review, 10(1), 47–66. doi:10.1207/s15327957pspr1001_3.
  15. Hilbert, M. (2012). Toward a synthesis of cognitive biases: How noisy information processing can bias human decision making. Psychological Bulletin, 138(2), 211–237. doi:10.1037/a0025940.
  16. Howe, C. Q., & Purves, D. (2005). The Müller-Lyer illusion explained by the statistics of image-source relationships. Proceedings of the National Academy of Sciences of the United States of America, 102(4), 1234–1239. doi:10.1073/pnas.0409314102.
  17. Kahan, D. M., Peters, E., Dawson, E. C., & Slovic, P. (2012). Motivated numeracy and enlightened self-government (Working Paper No. 307). New Haven: Yale Law School.
  18. Kahneman, D. (2003). A perspective on judgment and choice: Mapping bounded rationality. American Psychologist, 58, 697–720. doi:10.1037/0003-066X.58.9.697.
  19. Kahneman, D. (2013). Thinking, fast and slow. New York: Penguin Books.
  20. Kahneman, D., & Tversky, A. (1984). Choices, values, and frames. American Psychologist, 39(4), 341–350. doi:10.1037/0003-066x.39.4.341.
  21. Kahneman, D., Slovic, P., & Tversky, A. (1982). Judgment under uncertainty: Heuristics and biases. London: Cambridge University Press.
  22. Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498.
  23. Langer, E. J. (1975). The illusion of control. Journal of Personality and Social Psychology, 32(2), 311–328. doi:10.1037/0022-3514.32.2.311.
  24. Larrick, R. P. (2004). Debiasing. In D. J. Koehler & N. Harvey (Eds.), Blackwell handbook of judgment and decision making (pp. 316–337). Oxford: Blackwell.
  25. Lewandowsky, S., Ecker, U. K. H., Seifert, C. M., Schwarz, N., & Cook, J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106–131. doi:10.1177/1529100612451018.
  26. Lilienfeld, S. O., Ammirati, R., & Landfield, K. (2009). Giving debiasing away: Can psychological research on correcting cognitive errors promote human welfare? Perspectives on Psychological Science, 4(4), 390–398.
  27. Obermaier, M., Koch, T., & Baden, C. (2015). Everybody follows the crowd? Effects of opinion polls and past election results on electoral preferences. Journal of Media Psychology, 1–12. doi:10.1027/1864-1105/a000160.
  28. Pronin, E., Lin, D. Y., & Ross, L. (2002). The bias blind spot: Perceptions of bias in self versus others. Personality and Social Psychology Bulletin, 28, 369–381.
  29. Stanovich, K. E. (1999). Who is rational? Studies of individual differences in reasoning. Mahwah: Erlbaum.
  30. Yechiam, E., Druyan, M., & Ert, E. (2008). Observing others’ behavior and risk taking in decisions from experience. Judgment and Decision Making, 3(7), 493–500.

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Universidad de Deusto, Bilbao, Spain

Section editors and affiliations

  • Oskar Pineno
  1. Hofstra University, Long Island, USA