Any model of cognition, rationality, reasoning, or decision-making implicitly features an underlying theory of and assumptions about perception (Kahneman, 2003a; Simon, 1956). That is, any model of rationality makes assumptions about what options are seen or not, how (or whether) these options are represented and compared, and which ones are chosen and why. The very idea of rationality implies that someone—the agents themselves, the system as a whole or the scientist modeling the behavior—perceives and knows the optimal or best option and thus can define whether, and how, rationality is achieved. Rationality, then, is defined as correctly perceiving different options and choosing those that are objectively the best.
In emphasizing rationality, cognitive and social scientists are incorporating—most often implicitly—certain theories and assumptions about perception, about the abilities and ways in which organisms or agents perceive, see, and represent their environments, or compute and process information, compare options, behave, and make choices. Assumptions about perception and vision, as we will discuss, are at the very heart of these models and the focus of our paper.
Neoclassical economics has historically featured some of the most extreme assumptions about the nature of perception and rationality. This has taken the form of assuming some variant of a perfectly rational or omniscient actor and an associated “efficient market” (Fama, 1970; cf. Buchanan, 1959; Hayek, 1945).Footnote 1 This work—in its most extreme form—assumes that agents have perfect information and thus there are no unique, agent-specific opportunities to be perceived or had: the environment is objectively captured and exhausted of any possibilities for creating value. Markets are said to be efficient as they, automatically and instantaneously, anticipate future contingencies and possibilities (Arrow & Debreu, 1954).
Much of this work assumes that there is, in effect, an “ideal observer” (cf. Geisler, 2011; Kersten et al., 2004)—either represented by the omniscience of all agents or the system as a whole—and thus an equilibrium (Arrow & Debreu, 1954). As noted by Buchanan, economists “have generally assumed omniscience in the observer, although the assumption is rarely made explicit” (1959: 126). The omniscient agent of economics has of course been criticized both from within and outside economics, as it does not allow for any subjectivity or individual-level heterogeneity. For example, as Kirman argues, this approach “is fatally flawed because it attempts to impose order to the economy through the concept of an omniscient individual” (1992: 132). Thomas Sargent further argues that “The fact is that you simply cannot talk about differences within the typical rational expectations model. There is a communism of models. All agents inside the model, the econometrician, and God share the same model” (Evans & Honkapohja, 2005: 566).Footnote 2 Although the death of the omniscient agent of economics has been predicted for many years, it continues to influence large parts of the field.
It is precisely this literature in economics, which assumes different forms of global or perfect rationality, that led to the emergence of the behavioral and cognitive revolution in the social sciences, to challenge the idea of agent omniscience.Footnote 3 Herbert Simon was the most influential early challenger of the traditional economic model of rationality. He sought to offer “an alternative to classical omniscient rationality” (1979: 357), and he anchored this alternative on the concept of “bounded rationality,” a concept specifically focused on the nature of vision and perception (Simon, 1956). Simon’s work was carried forward by Daniel Kahneman, who also sought to develop “a coherent alternative to the rational agent model” (2003a: 1449) by focusing on visual metaphors, illusions, and perception. We revisit both Simon and Kahneman’s work next.
To foreshadow our conclusion, we argue that both Simon and Kahneman, as well as later psychologists and behavioral economists, have unwittingly replaced the assumption of economic omniscience with perceptual omniscience, or an all-seeing view of perception. Neither Simon’s nor Kahneman’s model has overcome the paradigmatic assumption of omniscience, even though (or because) they have critiqued it. Instead, these models have merely introduced a different form of omniscience. We find it particularly important to revisit this work because it shows how the behavioral revolution was, and continues to be, deeply rooted in arguments about perception and vision. Though this work has sought to develop a psychologically more realistic and scientific approach to understanding rationality, we argue that this work can be challenged on both counts.
Bounded rationality and perception
As noted above, Herbert Simon challenged the assumption of agent omniscience (particularly pervasive in economics) with the idea of bounded rationality. The specific goal of his research program was, to quote Simon again, “to replace the global rationality of economic man with a kind of rational behavior that is compatible with the access to information and the computational capacities that are actually possessed by organisms, including man, in the kind of environments in which such organisms exist” (1955: 99, emphasis added). Rather than presume the omniscience of organisms or agents, Simon hoped to interject psychological realism into the social sciences by modelling the “actual mechanisms involved in human and other organismic choice” (1956: 129). Bounded rationality became an important meta-concept and an influential alternative to models of the fully rational economic agent—a trans-disciplinary idea that has influenced a host of the social sciences, including psychology, political science, law, cognitive science, sociology, and economics (e.g., Camerer, 1998, 1999; Conlisk, 1996; Evans, 2002; Jolls et al., 1998; Jones, 1999; Korobkin, 2015; Luan et al., 2014; Payne et al., 1992; Puranam et al., 2015; Simon, 1978, 1980; Todd & Gigerenzer, 2003; Williamson, 1985). These notions of rationality continue to influence different disciplines in various ways, including recent work on universal models of reasoning, computation, and “search” (Gershman et al., 2015; Hills et al., 2015).
To unpack the specific problems associated with bounded rationality, as it relates to vision and perception, we revisit some of the original models and examples provided by Simon. We then discuss how these arguments have extended and evolved in the cognitive and social sciences more broadly (Kahneman, 2003a), including the domain of behavioral psychology and economics.
In most of his examples, Simon asks us to imagine an animal or organism searching for food in its environment (e.g., 1955, 1956, 1964, 1969; Newell & Simon, 1976; cf. Luan et al., 2014).Footnote 4 This search happens on a predefined space (or what he also calls “surface”) where the organism can visually scan for food (choice options) and “locomote” and move toward and consume the best options (Simon, 1956). Initially the organism explores the space randomly. But it learns over time. Thus vision is seen as a tool for capturing information about and representing one’s environment.
What is central to the concept of bounded rationality, and most relevant to our arguments, is the specification of boundedness itself. Simon emphasizes the organism’s “perceptual apparatus” (1956: 130). The visual scanning and capturing of the environment for options is given primacy: “the organism’s vision permits it to see, at any moment, a circular portion of the surface about the point in which it is standing” (Simon, 1956: 130, emphasis added). Rather than omnisciently seeing (and considering) the full landscape of possibilities or environment (say, options for food)—as models of global rationality would have it—Simon instead argues that perception (the relevant, more bounded consideration set of possibilities) is delimited by the organism’s “length and range of vision” (1956: 130-132). Similar arguments have recently been advanced in the cognitive sciences in universal models that emphasize perception and search (e.g., Fawcett et al., 2014; Gray, 2007; Luan et al., 2014; Todd et al., 2012).
One of Simon’s key contributions was to acknowledge that organisms (whether animals or humans) are not aware of, nor do they perceive or have time to compute, all alternatives in their environments (cf. Gibson 1979). Rather than globally seeing and optimizing, the organism instead “satisfices” based on the more delimited set of choices it perceives in its immediate, perceptual surroundings. Additional search, whether visually or through movement, is costly. Thus organisms search, scan, and perceive their environments locally and the tradeoffs between the costs of additional search and the payoff of choosing particular, immediate options drive behavior. In all, organisms only consider a small subset of possibilities in their environment—that which they perceive immediately around them—and then choose options that work best among the subset, rather than somehow optimizing based on all possible choices, which Simon argues would require god-like computational powers and omniscience.
These ideas certainly seem reasonable; but they are nonetheless rooted in a problematic conception of vision and perception. We foreshadow some central problems here, problems that we will address more carefully later in the paper when we discuss Kahneman’s (2003a,b) work and revisit some of the common visual tasks and perceptual examples of bounded rationality and bias.
First, note that a central background assumption behind bounded rationality is that there is an all-seeing eye present which can determine whether an organism in fact behaved boundedly or rationally, or not. As Simon put it, “rationality is bounded when it falls short of omniscience” (1978: 356). For this shortfall in omniscience to be specified and captured, it requires an outside view, an all-seeing eye—in this case, specified by the scientist—that somehow perceives, specifies, computes, or (exhaustively) sees the other options in the first place, then identifies the best or rational one, which in turn allows one to point out the shortfall, boundedness or bias.
From the perspective of vision research, Simon’s “falling short of omniscience” specification of bounded rationality can be linked directly to the “ideal observer theory” of perception (e.g., Geisler 1989, 2011; Kersten et al., 2004). Similar to the standard of omniscience, the “ideal observer is a hypothetical device that performs optimally in a perceptual task given the available information” (Geisler, 2011: 771, emphasis added).Footnote 5 Naïve (or bounded) subjects can be contrasted with a form of camera-like ideal observer who objectively records the environment. The comparison of objective environments with subjective assessments of these environments (or objects within them) has been utilized in the lab as well as in natural environments (Geisler, 2008; also see Foster, 2011; McKenzie, 2003). These approaches build on a veridical model of perception and objective reality, a sort of “Bayesian natural selection” (Geisler & Diehl, 2002) where “(perceptual) estimates that are nearer the truth have greater utility than those that are wide of the mark” (Geisler & Diehl, 2003). The environment is seen as objective, and subjects’ accurate or inaccurate responses are used as information about perception and judgment. This approach can be useful if we demand that subjects see something highly specific (whether they miss or accurately account for some stimulus specified by the scientist), though even the most basic of stimuli—as we will discuss—are hard to conclusively nail down in this fashion.
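The logic of the ideal-observer benchmark can be made concrete with a small simulation. The following is our own illustrative sketch, not drawn from Geisler's models; the function name, parameter values, and the naive observer's criterion are all assumptions chosen for illustration:

```python
import random

def ideal_observer_sketch(n_trials=20000, mu_a=0.0, mu_b=1.0, sigma=1.0,
                          naive_criterion=1.5, seed=42):
    """Toy two-alternative detection task: on each trial, a stimulus is drawn
    from one of two equally likely Gaussian sources (A or B), and an observer
    must judge which source produced it.

    The ideal observer applies the optimal decision rule given the generative
    model (with equal priors and equal variances, a criterion at the midpoint
    of the two means); the naive observer uses an arbitrary, suboptimal
    criterion. The ideal observer's accuracy is the ceiling against which
    shortfalls ("boundedness," "bias") are then measured.
    """
    rng = random.Random(seed)
    ideal_criterion = (mu_a + mu_b) / 2.0  # optimal given the true model
    ideal_correct = naive_correct = 0
    for _ in range(n_trials):
        from_b = rng.random() < 0.5
        x = rng.gauss(mu_b if from_b else mu_a, sigma)
        ideal_correct += (x > ideal_criterion) == from_b
        naive_correct += (x > naive_criterion) == from_b
    return ideal_correct / n_trials, naive_correct / n_trials
```

Note that the "ideal" here is only definable because the simulation's author, like the scientist in the ideal-observer paradigm, has full access to the generative model, which is precisely the third-person omniscience at issue.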
Extant work raises fundamental questions about whether perception indeed tracks truth (or “veridicality”) in the ways of an ideal observer (e.g., Hoffman et al., 2015). For example, evolutionary fitness maps more closely onto practical usefulness rather than any idea of truth or objectivity. Bayesian models of perception can be built on evolutionary usefulness rather than truth and accuracy (e.g., Hoffman & Singh, 2012; Koenderink, 2016). Supernormal stimuli highlight how illusory, seemingly objective, facts can be in the world (Tinbergen, 1951). We discuss these issues more fully later.
The problem is that the very specification of an objective landscape, space, or environment assumes that the scientist him or herself, in effect, is omniscient and has a god-like, true view of all (or at least a larger set of) options available to the organism under study—a type of third-person omniscience. The scientist sees all (or more) and can, ex ante and post hoc, specify what is the best course of action and whether the organism in fact perceived correctly, acted boundedly, or behaved rationally. But, in most cases, simply labelling something as biased or bounded does not amount to a theoretical explanation. Indeed, it serves as a temporary holding place that requires further investigation as to the reasons why something was perceived or judged in a certain way. Perhaps the organism simply did not have enough time to identify the optimal solution or the organism couldn’t see certain possibilities. The fact that perception and rationality consistently fall short of standards set forth by scientists raises questions not only about the standards themselves but also about why this is the case.
The second problem is that perception as seen by Simon is a camera-like activity where organisms capture veridical images of and possibilities in their environments and store or compare this information (cf. Simon, 1980). Granted, the camera used by organisms—perception and vision—is specified as bounded in that it captures only a small, delimited portion of the surrounding environment in which it is situated—that which can be immediately perceived (for example, “a circular portion” around an organism: Simon, 1956: 130)—rather than assuming omniscient awareness of the full environment. Whether only some or all of the environment is captured within the choice set of an organism, the approach assumes that perception generates objective representations or copies of the environment. Perception is equivalent to “veridical” or true representation, and only the bounds of what is perceived are narrowed, compared to the more omniscient models featured in economics and elsewhere. Simon et al.’s “CaMeRa” model of representation illustrates the point, specifically where “mental images resemble visual stimuli closely” (Tabachneck-Schijf et al., 1997: 309)—an assumption we will return to when discussing Kahneman’s more recent work. Perception as representation, and the efforts to map true environments to true conceptions of those environments, is the sine qua non of much of the cognitive sciences. Frequent appeals to learning, bias, boundedness, and limitations only make sense by arguing that there is a true, actual nature to environments (which can be learned over time).
The standard paradigm uses a world-to-mind, rather than a mind-to-world, model of perception that is, quite simply, not true to the nature of perception. Perception is not (just) representation (e.g., Purves, 2014) or world-to-mind mapping (Koenderink et al., 2014). The emphasis on representation places undue emphasis on the environment itself—and objects within it—rather than the organism-specific factors that in fact might originate and direct perception. Simon’s view of perception, then, falls squarely into the domain of psychophysics and inverse optics (cf. Marr, 1982): the attempts to map objective environments onto the mind. It implies a form of pure vision or veridical optics where the world can properly be captured and represented, if only there were enough eyes on it, or enough computational or perceptual power to do so (cf. Simon, 1955, 1956). Environmental percepts are treated as relatively deterministic and passive data and inputs to be represented in the mind.
The third and perhaps most central concern is the way that perception is implicitly seen as independent of the perceiver. Simon argues that the nature of the organism does not meaningfully affect the argument, as highlighted by his interchangeable use of universal mechanisms applied to organisms in general, animals and humans alike. For example, he argues that “human beings [or ants], viewed as a behaving system, are quite simple. The apparent complexity of his behavior over time is largely a reflection of the complexity of the environment in which he finds himself” (1969: 64-65). No attention is paid to the organism-specific factors associated with perception; the focus is on computation of perceived alternatives and the representation of an objective environment.Footnote 6 Simon’s work was undoubtedly influenced in some form by behaviorism and its focus on the environment instead of the organism. He heralded the coming of a universal cognitive science (Simon, 1980, Cognitive Science), where a set of common concerns across “psychology, computer science, linguistics, economics, epistemology and social sciences generally” converged on one idea: the organism as an “information processing system.” Perception, information gathering, and information processing provided the underlying, unifying model for this approach.Footnote 7
The universality and generality of the arguments was also evident in Simon’s interest in linking human and artificial intelligence or rationality. In an article titled “the invariants of human behavior,” Simon argues that “since Homo Sapiens shares some important psychological invariants with certain nonbiological systems—the computers—I shall make frequent reference to them also” (1990: 3, emphasis added). He then goes on to delineate how human and computer cognition and rationality share similarities and are a function of such factors as sensory processing, memory, computational feasibility, bounded rationality, search, and pattern recognition. This approach represents a highly behavioral, externalist, and automaton-like conception of human perception and behavior (cf. Ariely, 2008; Bargh & Chartrand, 1997; Moors & De Houwer, 2006).
The concern with these arguments is that they do not recognize that perception is specific to an organism or a species—they instead assume a universality that has little empirical support. To suggest and assume that there is some kind of objective environment which the organism searches is not true to nature. Instead of generic or objective environments, organisms operate in their own “Umwelt” and surroundings (Uexkull 2010), where what they perceive is conditioned by the nature of what they are (Koenderink 2014). The work of Tinbergen and Lorenz in ethology makes valuable contributions by showing how organism-specific factors are central to perception and behavior. Yet, the standard paradigm bypasses the hard problem of perception—its specificity and comparative nature—by jumping directly to environmental analysis and by assuming that perception is universal and equivalent to inverse optics (the mapping of objective stimuli to the mind). Although we may seek to identify general factors related to objects, or environmental salience or objectivity across species, this simply is not possible as what is perceived is determined by the nature of the organism itself.
Simon’s notion of objective environments, which then can be compared to subjective representations of that environment, is also readily evident in a large range of theories across the domain of psychology and cognition. For example, in his influential Architecture of Cognition, Anderson (2013; also see Anderson & Lebiere, 2003, 2014) builds on precisely the same premise of universal cognition, in seeking to develop a “unitary theory of mind” focused on external representation and the mind as a “production system” (input-outputs and if-then statements driving organism interaction with the environment). This research builds on the longstanding “Newell’s dream” (Allen Newell, Herbert Simon’s frequent co-author) of building a computational and unified theory of cognition.
Kahneman on perception
A timely example of how problematic models of perception and vision continue to plague the rationality and decision-making literature is provided by Kahneman’s Nobel Prize speech and subsequent American Economic Review publication (2003a) titled “Maps of Bounded Rationality.” A version of this article was also co-published in the American Psychologist (2003b). The article explicitly links the current conversations in cognitive psychology and behavioral economics with Simon’s work and our discussion in the previous section.
However, Kahneman’s work focuses even more directly on perception and vision. He argues that his approach is distinguished by the fact that “the behavior of agents is not guided by what they are able to compute”—à la Simon—“but by what they happen to see at a given moment” (Kahneman, 2003a: 1469, emphasis added). Sight thus takes center-stage as a metaphor for arguments about rationality. What is illustrative of Kahneman’s focus on perception and sight is that he “[relies] extensively on visual analogies” (2003a: 1450). The focal article in fact features many different visual tasks, pictures, and illusions, which are used as evidence and examples to make his points about the nature and limits of perception and rationality. We will revisit, and carefully reinterpret, some of these visual examples.
Kahneman’s emphasis on vision and perception is not all that surprising as his early work and scientific training—in the 1960s—was concerned with psychophysics, perception, and inverse optics: the study and measurement of physical and environmental stimuli. This early work focused on perception as a function of such factors as environmental exposure and contrast (Kahneman, 1965; Kahneman & Norman, 1964), visual masking (Kahneman, 1968), time intensity (Kahneman, 1966), and thresholds (Kahneman, 1967b). In other words, the study of perception is seen as the study of how (and whether) humans capture objects and environments based on the actual characteristics of objects and environments. These assumptions from Kahneman’s early work, and the broader domain of psychophysics, have carried over into the subsequent work on the nature of rationality. This view of perception is also center-stage in, for example, Bayesian models of rationality (e.g., Oaksford & Chater, 2010). The background assumption in all of this research is that “responding to [the actual attributes of reality] according to the frequency of occurrence of local patterns reveal[s] reality or bring[s] subjective values ‘closer’ to objective ones” (Purves et al., 2015: 4753).
In the target article Kahneman (2003a) conceptualizes individuals—similar to Simon—as “perceptual systems” that take in stimuli from the environment. As put by Kahneman, “the impressions that become accessible in any particular situation are mainly determined, of course, by the actual properties of the object of judgment” (2003a: 1453, emphasis added). This notion of perception explicitly accepts vision and perception as veridical or “true” representation (e.g., Marr, 1982; Palmer, 1999). Similar to Simon, the approach here is to build a world-to-mind mapping where “physical salience [of objects and environments] determines accessibility” (Kahneman, 2003a: 1453, emphasis added). Perception is the process of attending to, seeing, or recording physical stimuli in the environment, in camera-like fashion, based on the actual characteristics of objects and environments themselves—as suggested by Kahneman’s language of “impressions” and “accessibility” throughout the article.
The emphasis placed on the environment is evident in what Kahneman calls “natural assessments” (cf. Tversky & Kahneman, 1983). Natural assessments are environmental stimuli, characterized by the “actual,” “physical” features of objects that are recorded or “automatically perceived” or attended to by humans and organisms (Kahneman, 2003a: 1452). These physical features or stimuli include: “size, distance, and loudness, [and] the list includes more abstract properties such as similarity, causal propensity, surprisingness, affective valence, and mood” (Kahneman, 2003a: 1453). This work closely links with psychophysics: efforts to understand perception as a function of such factors as threshold stimuli or exposure (e.g., Kahneman, 1965).
Important to our arguments is that Kahneman equates perception—on a one-to-one basis—with rationality, intuition, and thinking itself, thus implying a specific environment-to-mind mapping of mind. This is evident in the claim that “rules that govern intuition are generally similar to the rules that govern perception,” or, more succinctly: “intuition resembles perception” (Kahneman, 2003a: 1450). Kahneman draws both analogical and direct links between perception and his conceptions of rationality, decision making, and behavior. Visual illusions, for example, are seen as instances and examples of the link between perception and rationality. The discrepancy between what is seen (and reported) and what in fact is there provides the basis for ascribing bias or irrationality to subjects. Visual illusions have thus become the example of choice for highlighting bias and the limits of perception.
The assumed camera-like link between perception and cognition emerges across a wide range of literatures in the domain of rationality, reasoning, and cognition. For example, Chater et al. argue that the “problem of perception is that of inferring the structure of the world from sensory input” (2010: 813). Most Bayesian models of cognition, rationality, and decision making feature similar assumptions (cf. Jones & Love, 2011). The precise nature of these inferences, from a Bayesian perspective, is based on encounters with an objective environment, the nature of which can be learned with time and repeated exposure (cf. Duncan & Humphreys, 1989). The social sciences, then, are building on a broader psychological and scientific literature that treats “object perception as Bayesian inference” (Kersten et al., 2004; also see Chater et al., 2010). Bayesian perception compares observation and optimality (Ma, 2012; cf. Verghese, 2001), where the effort is to “accurately and efficiently” perceive in the form of “belief state representations” and to match these with some true state of the world (Lee, Ortega, & Stocker, 2014). Oaksford and Chater (2010) discuss this Bayesian “probabilistic turn in psychology” and the associated “probabilistic view of perception” in the social sciences, where repeated observations help agents learn about the true, objective nature of their environments. Bayesianism is now widely accepted; as Kahneman argues, “we know…that the human perceptual system is more reliably Bayesian” (2009: 523).Footnote 8
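The probabilistic view of perception described above can be sketched in a few lines. The following is an illustrative toy model of our own, assuming Gaussian noise and a conjugate Gaussian prior; it is not taken from any of the cited models, and all names and parameter values are our assumptions:

```python
import random

def bayesian_perceiver(true_state=3.0, noise_sd=1.0, prior_mean=0.0,
                       prior_var=100.0, n_obs=200, seed=7):
    """Conjugate Gaussian belief updating, in the spirit of the
    "probabilistic view of perception": the agent receives noisy samples of
    a fixed, objective environmental state and updates a Gaussian belief
    after each observation. With repeated exposure the posterior mean
    converges on the true state and the posterior variance shrinks toward
    zero -- the sense in which observation is said to reveal the objective
    environment. Returns (posterior_mean, posterior_variance)."""
    rng = random.Random(seed)
    mean, var = prior_mean, prior_var
    for _ in range(n_obs):
        x = rng.gauss(true_state, noise_sd)
        # standard Gaussian-Gaussian posterior update
        new_var = 1.0 / (1.0 / var + 1.0 / noise_sd ** 2)
        mean = new_var * (mean / var + x / noise_sd ** 2)
        var = new_var
    return mean, var
```

The sketch makes the background assumption visible: a `true_state` must be posited outside the agent for "convergence toward truth" to be well defined at all, which is exactly the veridicality assumption our argument questions.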
Revisiting and reinterpreting Kahneman’s examples
In the focal articles, Kahneman (2003a,b) provides five different visual illustrations and pictures to make his point about the nature and boundedness of perception and rationality. Scholars in the cognitive and social sciences have indeed heavily focused on visual tasks and illusions to illustrate the limitations, fallibility, and biases of human perception (e.g., Ariely, 2001; Gilovich & Griffin, 2010; Vilares & Kording, 2011). These visual examples are used to illustrate the (seeming) misperceptions associated with objectively judging such factors as size, color and contrast, context and comparison, and perspective. These examples are also used to point out perceptual salience and accessibility, the role of expectations and priming, and the more general problem of perceiving “veridically,” as an example of boundedness and bias (Kahneman, 2003a).
However, visual illusions are commonly misinterpreted (Rogers, 2014). First, they rarely provide a good example of bias in perception, but instead can be interpreted as illustrations of how the visual system works. Second, illusions and perceptual biases are simply an artefact of the problem of singularly and exhaustively representing objective reality in the first place. Thus we next point to some of Kahneman’s (2003a) examples and argue that these examples are wrongly interpreted, on both counts.
In one illustration, Kahneman (2003a: 1460) highlights the problem of accurately judging or comparing the size of objects by using a two-dimensional picture that seeks to represent a three-dimensional environment. Similar to the classic Ponzo illusion (see Fig. 1, copied from Gregory, 2005: 1243; cf. Ponzo, 1912), in the picture the focal objects (in the above case, the white lines) that are farther away (or higher, in the two-dimensional image) are seen as larger by human subjects, even though the objects are the same size on the two-dimensional surface. Kahneman calls this “attribute substitution” and argues that the “illusion is caused by the differential accessibility of competing interpretations of the image” – and further that the “impression of three-dimensional size is the only impression of size that comes to mind for naïve observers—painters and experienced photographers are able to do better” (Kahneman, 2003a: 1461–1462). The perceptual naiveté of subjects, compared to experts, is indeed a popular theme in the rationality literature.
The problem lies in how the visual task—which purports to illustrate perceptual illusion and bias—is set up and explained. The concern here is that the image features conflicting stimuli, namely, a conflict between the image and what it seeks to represent in the world. The reason that the top, white line in Fig. 1 (at first glance) appears to be longer is that the image features both two- and three-dimensional stimuli. Since the white line at the bottom (Fig. 1) is shorter than the railway ties it overlaps with—and the railway ties are presumed to be of equal length—it is natural to make the “mistake” of judging that the top line in fact is longer than the bottom line. The catch, or seeming illusion, is that the two white lines are of equal length in two-dimensional space. The issue is that the vertical lines disappearing into the horizon—the railroad tracks themselves—suggest a three-dimensional image, though the focal visual task relates to a two-dimensional comparison of the lengths of the two horizontal, white lines.
To illustrate the problem of labelling this an illusion, we might ask subjects whether the vertical lines (the railroad tracks) are merging and getting closer together (as they go into the horizon), or whether they remain equidistant. On a two-dimensional surface it would be correct to report that the lines are getting closer together and merging. This is how things appear in the image. But if the picture is interpreted as a representation of reality (of space, perspective, and horizon), then we might also correctly say that the lines are not getting closer together or merging. Furthermore, if the top, horizontal white line were in fact part of the three-dimensional scene that the picture represents, it would be correct to say that the top line indeed is longer. Experimental studies of visual space, using Blumenfeld alley experiments, provide strong evidence for the point that there is nothing straightforward about representing space on a two-dimensional surface or plane (e.g., Erkelens, 2015).
Furthermore, consider what would happen if subjects were asked to engage in the same task in a natural environment—rather than looking at a picture—standing in front of railroad tracks that go off into the horizon. What visual illusions could be pointed to in this setting? The subjects might, for example, report that the tracks themselves appear to remain equidistant and that the railroad ties appear to remain the same size. If the subjects slowly lifted up a 1-meter-long stick, held horizontally in front of them, at some point the stick would indeed be of seemingly equal (two-dimensional) length to one of the horizontal railroad ties visible farther off toward the horizon.
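The geometric point, that a two-dimensional image length underdetermines three-dimensional size and distance, can be shown with a minimal pinhole-projection sketch. This is our own illustration: the 17 mm "focal length," the roughly 2.6-meter tie, and the function names are assumed values, not figures from Kahneman or Gregory:

```python
def image_length(physical_length_m, distance_m, focal_m=0.017):
    """Pinhole-camera projection: an object's image length is proportional
    to its physical length divided by its distance. The 17 mm focal length
    loosely mimics the human eye's nodal distance (an assumed,
    illustrative value)."""
    return focal_m * physical_length_m / distance_m

def matching_distance(target_length_m, reference_length_m, reference_distance_m):
    """Distance at which an object of target_length_m projects to the same
    image length as a reference object -- i.e., where the hand-held stick
    and a more distant railroad tie would "look" equally long."""
    return reference_distance_m * target_length_m / reference_length_m
```

For instance, a 1-meter stick held 0.5 m away and a 2.6-meter tie 1.3 m away project to exactly the same image length: any (length, distance) pair with the same ratio is indistinguishable in the image. The single two-dimensional projection therefore cannot, by itself, separate a short, near object from a long, far one, and this underdetermination is what perspective-based "illusions" trade on.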
We might briefly note another interpretation of these types of perspective-based illusions: they not only play with two and three dimensions but also capture motion (e.g., Changizi et al., 2008). That is, human perception is conjectural and forward-looking, for example anticipating oncoming stimuli when in motion. Thus the converging or ancillary lines in the background of an image—commonly used in visual illusions (e.g., the Ponzo, Hering, Orbison, and Müller-Lyer illusions)—can be interpreted as suggesting motion and thus as appropriately “perceiving the present” and anticipating the relative size of objects.
Visual illusions are artificially induced by exploiting the problem of representing a three-dimensional world in two dimensions. The discrepancies between two and three dimensions—the so-called data or evidence of visual illusions and bias—are not errors but simply (a) examples of how the visual system in fact works and (b) artefacts of the fact that a two-dimensional representation is never true to any three-dimensional reality (we will touch on both issues below). Perspective-based visual illusions are merely a convenient device for alleging fallibility, misperception, or bias; any such bias is only the result of having subjects artificially toggle between representation and reality (or, more accurately, between two forms or expressions of reality). To say that scientists have thereby accurately captured some sort of bias is simply not warranted (Rogers, 2014). Visual illusions based on perspective inappropriately exploit and interpret a more general problem, namely that two-dimensional images cannot fully represent three-dimensional reality. Moreover, as we will discuss, the very notion of appealing to some kind of singular, verifiable reality as a benchmark for arbitrating between what is illusion or bias and what is not is fraught with problems from the perspective of vision science (Koenderink, 2015; Rogers, 2014; see also Frith, 2007).
Some scholars in the area of cognition and decision-making have recently observed that visual illusions are incorrectly used to argue that perception and cognition are biased. For example, Rieskamp et al. write: “Just as vision researchers construct situations in which the functioning of the visual system leads to incorrect inferences about the world (e.g., about line lengths in the Müller-Lyer illusion), researchers in the heuristics-and-biases program select problems in which reasoning by cognitive heuristics leads to violations of probability theory” (Rieskamp, Hertwig, & Todd, 2015: 222).
We agree with this assessment, but our point of departure is more fundamental and pertains to the nature of perception itself. Specifically, extant critiques of bias (and the associated interpretations of visual illusions) propose that humans eventually learn the true nature of the environment, and thus turn to alternatives such as a Bayesian probabilistic view of perception. The problem is that “probability theory is firmly rooted in the belief in [an] all seeing eye” (Koenderink, 2016: 252). In other words, the idea of Bayesian “ecological rationality” (Goldstein & Gigerenzer, 2002; Todd & Gigerenzer, 2012) builds on a model of ecological optics (cf. Gibson, 1977), where perception is again seen in camera-like fashion: humans learn the true nature of environments over time. On this view, illusions are mere temporary discrepancies between representations and the real world. We propose a fundamentally different view, one on which disentangling illusion, perception, and reality is far from easy, if possible at all. Thus, while we agree with the critique, our argument anchors on a very different view of perception, which we outline in the next section.
To illustrate further concerns with how perception is treated in this literature, we focus on another visual example provided by Kahneman (see Fig. 2, from Kahneman, 2003a: 1455), which he uses to show the “reference-dependence of vision and perception” (2003a: 1455). He specifically points to reference-dependence by discussing how the perception of brightness or luminance can be manipulated by varying the surrounding context within which the focal image is embedded. In other words, the inset squares in Fig. 2 appear to differ in brightness, owing to the varied luminance of the surrounding context, when in fact the two inset squares have the same luminance. Kahneman thus argues that the “brightness of an area is not a single-parameter function of the light energy that reaches the eye from that area” (2003a: 1455). While he stops short of calling this an illusion, the implication is that the reference-dependence of vision says something about our inability to judge things objectively and veridically, even though actual luminance can in fact be objectively measured.Footnote 9 A wide variety of brightness- and color-related illusions have of course been extensively studied by others as well (Adelson, 1993, 2000; Gilchrist, 2007).
The concern with this example is that color or luminance tasks artificially exploit the fact that no objective measurement of color or luminance is even possible (Koenderink, 2010).Footnote 10 Using shadows, or changing the surrounding context or luminance of a focal image, a common approach to pointing out illusions, is not evidence that perception itself is biased or illusory. Kahneman is correct when he says that color or luminance is “reference-dependent.” But the underlying assumption remains that there is also a true, objective way to measure luminance itself—by the scientist—and to highlight how human judgment deviates from this objective measurement. Unfortunately, no such measurement is possible for color (Koenderink, 2010; cf. Maund, 2006).
As discussed by Purves et al., any “discrepancies between lightness and luminance…are not illusions” (2015: 4753). We may infer that the “true” state of luminance is not observed by a subject (Adelson, 1993), but any observation, measurement, or perception is always conflated with a number of factors that cannot be fully separated (Koenderink, 2010). We may care only about the focal retinal stimulus itself, but perception and observation are also a function of illumination, reflectance, and transmittance (Purves et al., 2015). These factors are inextricably conflated in a way that makes true measurement impossible to extract (Koenderink, 2010). Just as with perspective-based visual illusions (where the illusion is artificially created by exploiting the gap between two-dimensional representation and three-dimensional reality), with luminance-based tasks scientists are only tricking themselves into pointing out observational discrepancies between perception and reality, rather than meaningfully pointing out bias. Color and luminance are always confounded by context (which includes a host of factors), and no objective measurement is possible (cf. Gilchrist et al., 1999; Gilchrist, 2006). Kahneman would seem to agree with this when he notes the context-dependence of perceptions. But his underlying “veridical” approach to perception and vision is in direct conflict with this argument (Kahneman, 2003a: 1460).Footnote 11
Most importantly, the nature of the perceiver matters. As discussed by Rogers, “there can be no such thing as ‘color information’ that is independent of the perceptual system which is extracting that information” (2014: 843). The way color or luminance is perceived depends on who or what is doing the perceiving, and in what context. The human visual system is highly specific: it sees or registers only a select portion of the light spectrum, responding to wavelengths between 390 nm and 700 nm. We would not point to illusion or bias if someone were unable to see spectra outside this range (for example, ultraviolet light), even though such light can be measured. As Newton discovered, we see some aspects of light or color but not others; chromatic aberrations highlight how white light includes a spectrum of colors. Indeed, the very idea of “light” could itself be cast as an illusion, as alternative realities (e.g., the color spectrum) can be measured and proven. Of course, any discussion of color needs to wrestle with and separate colorimetry from the phenomenology of light and color (Koenderink, 2010).
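The contrast between what instruments can measure and what the human visual system registers can be made concrete with a minimal check. The 390–700 nm band comes from the text; the example wavelengths are illustrative assumptions.

```python
# Approximate band of wavelengths the human visual system registers (nm).
VISIBLE_BAND_NM = (390.0, 700.0)

def humanly_visible(wavelength_nm: float) -> bool:
    """True if a wavelength falls within the human visible band."""
    low, high = VISIBLE_BAND_NM
    return low <= wavelength_nm <= high

# Ultraviolet light (~350 nm) is measurable by instruments but invisible
# to humans; failing to see it is specificity, not perceptual "bias".
assert not humanly_visible(350.0)  # ultraviolet
assert humanly_visible(550.0)      # mid-spectrum green
```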
Note also that the way any particular, seemingly objective color is represented or subjectively sensed varies across species. A bat sees the world very differently than humans do (cf. Nagel, 1974). Luminance or color has no “true” or objective nature (Koenderink, 2010); it is mental paint. Different species not only see the same colors differently, or do not see them at all, but also form different interpretations of the very same inputs, stimuli, and data. Furthermore, the human visual system's built-in mechanism for maintaining color constancy should not be regarded as an illusion (cf. Foster, 2011), though it is often used as such (cf. Albertazzi, 2013). For example, in the real world we assume color constancy in the presence of shadows, even though this information can wrongly be used as evidence of illusion or bias when judging luminance or color in pictures (Adelson, 2000; cf. Gilchrist, 2006; Purves, 2014; Rogers, 2014).
In all, although we can measure (and thus “objectively” show the existence of) a large range of possible frequencies across the electromagnetic spectrum with various instruments, the human visual system admits only certain types of input. This is true not only for luminance but also for many other visual and perceptual factors. This very argument casts doubt on any one way of measuring perception and reality in the first place—an argument we will turn to next.