Abstract
Over the past decades psychological theories have made significant headway into economics, culminating in the 2002 (partially) and 2017 Nobel prizes awarded for work in the field of Behavioral Economics. Many of the insights imported from psychology into economics share a common trait: the presumption that decision makers use shortcuts that lead to deviations from rational behaviour (the Heuristics-and-Biases program). Many economists seem unaware that this viewpoint has long been contested in cognitive psychology. Proponents of an alternative program (the Ecological-Rationality program) argue that heuristics need not be irrational, particularly when judged relative to characteristics of the environment. We sketch out the historical context of the antagonism between these two research programs and then review more recent work in the Ecological-Rationality tradition. While the heuristics-and-biases program is now well-established in (mainstream neo-classical) economics via Behavioral Economics, we show there is considerable scope for the Ecological-Rationality program to interact with economics. In fact, we argue that there are many existing, yet overlooked, bridges between the two, based on independently derived research in economics that can be construed as being aligned with the tradition of the Ecological-Rationality program. We close the paper with a discussion of the open challenges and difficulties of integrating the Ecological Rationality program with economics.
1 Introduction
Heuristics are all around us, both in the real world and in the literature. There are many examples of them and too many definitions of them. In computer science, heuristics are typically algorithms used to find an approximately optimal solution within reasonable bounds. These algorithms may still be computationally complex in absolute terms, but less complex than computing the optimal solution; note that the optimal solution is often unknown or, if known, computationally intractable. By contrast, in cognitive psychology, the term heuristic is predicated on the complexity of the decision processes—whether this approximates the optimal solution (or even surpasses it) is a matter of contention between two different schools of thought, which we discuss below.
We focus on the history of fast-and-frugal heuristics, as a prominent school of thought in the ecological-rationality tradition that has been sketched out comprehensively in Gigerenzer et al. (1999) and scores of follow-up books (e.g., Gigerenzer et al., 2011; Hertwig et al., 2013; Todd et al., 2012) and articles—for the state of the art, see the reviews in Hertwig et al. (2019) and Hertwig et al. (2022). The major theme of the fast-and-frugal heuristics school of thought is that individuals can be smart and procedurally rational even if they seem to deviate from some norm of optimality. For example, consider the feasible set of combinations of effort and accuracy as being constrained by the decision maker’s cognitive processes and features of the decision environment—this implicit effort-accuracy tradeoff is the essence of prominent theories of bounded rationality. In this view, decision errors can be rationalized by arguing that regardless of the specific combination of effort and accuracy chosen, it cannot be classified as irrational as long as it is on the efficient frontier. Errors are thus decoupled from the notion of rationality, in contrast to neoclassical economics where errors are synonymous with irrational behavior. Proponents of fast-and-frugal heuristics go a step further and contend that there exist decision environments, highly likely in the real world, that can be exploited by appropriately adapted heuristics that transcend the effort-accuracy tradeoff. Under such circumstances, normative models such as Expected Utility may be dominated by simple heuristics in both the accuracy and effort dimensions. In other words, less may be more.
Below we contextualize the emergence of the “Ecological-Rationality” (ER from here on) program as an explicit counterpoint to the “Heuristics-and-Biases” (H&B from here on) program initiated by Kahneman and Tversky (e.g., Kahneman & Tversky, 1979; Kahneman, 2003a, 2003b, 2011; Tversky & Kahneman, 1974) that informed and inspired scores of early behavioral economists. Simple heuristics in the ER program are understood to be fast-and-frugal rules of thumb because they ignore information that is available and hence can shorten decision making time. They also ought to reflect cognitive processes, and hence be able to predict behavior, instead of only explaining behavior ex post through as-if modelling exercises.
Our comparative analysis seems warranted by the fact that the H&B program has invaded (neo-classical) economics, and other social sciences, to the extent that it is now considered mainstream (e.g., Camerer et al., 2004a, 2004b; Heukelom, 2015; Thaler, 2015, 2016). Increasingly critical questions have been asked about the H&B program (e.g., Ortmann, 2015a, 2015b, 2021 and references therein); however, its continued predominance has overshadowed the ER program, in our view to the detriment of both the ER program and economics.
The H&B program suggested that bounds on human rationality induce hasty decisions leading to systematic biases, or cognitive illusions. The biased heuristics that people were said to use—such as representativeness, availability, and anchoring and adjustment—were motivated by appeal to the principles also underlying optical illusions. An implicit—and increasingly explicit—claim (e.g., Thaler, 1980, p. 40) was that cognitive illusions were as robust as optical illusions (see also Kahneman, 2003a, 2003b). Heuristics were thus considered problematic, casting decision makers as fallible, even gullible, and in dire need of help from third parties for better decision making. As Cochrane (2015) has noted, this view represents a considerable moral-hazard problem that H&B proponents have often given in to.
It is worth noting that the assessment of people’s performance as being severely wanting was quite a departure from the prevailing view in the 1950s, 1960s and early 1970s (e.g., Edwards, 1956; Peterson & Beach, 1967; see also Ortmann, 2015a, 2015b). Even Tversky and Kahneman (1974), in the article that started it all, did not make the sweeping claims that followed in the decades thereafter. Lejarraga and Hertwig (2021) argue convincingly that the shift toward the pessimistic H&B assessment of rationality was guided by changes in experimental protocols across these decades, with a radical shift occurring from early experiments based on learning from experience (see Edwards’ body of work) to those based solely on descriptions (pioneered by Kahneman and Tversky).
Drawing on arguments by Herb Simon (1947, 1955, 1956) that rationality cannot be defined through cognitive and emotional processes alone, Gigerenzer and the ABC Research group argued that the design and implementation of many H&B experiments were highly problematic (e.g., prominently Gigerenzer, 1991). Furthermore, heuristics could have surprising performance properties, particularly in increasingly uncertain environments (e.g., Gigerenzer & Gaissmaier, 2011 for a good survey). The fact that heuristics can outperform substantively rational optimization models prompted Gigerenzer and Sturm (2012) to argue that ecologically rational heuristics transcend the common categorization of models into either descriptive or prescriptive models of behavior, arguing that they can be both.
We first review in more detail how this battle of programs unfolded and then lay out what we regard to be the considerable accomplishments of the ER program. We then proceed by pointing out the overlooked connections between the ER program and economics and, finally, we enumerate what we regard to be open questions and challenges.
2 How the battle between the H&B program (H&BP) and the ER program (ERP) unfolded
2.1 First, the Heuristics and Biases program (H&BP)
Starting with the “old behavioral economists” Richard Cyert, James March, and Herbert Simon, “who focused on bounded rationality, satisficing, and simulations” (p. 740), economics historian Esther-Mirjam Sent (2004) explained the transition from Old to New Behavioral Economics (pp. 742–747): “(t)he roots of new behavioral economics may be traced to the 1970s and the work of especially Amos Tversky and Daniel Kahneman, …” (p. 742). She identifies the “Behavioral Foundations of Economic Theory” conference held at the University of Chicago in October 1985 as a key event. In the preface to their book that drew on the conference, Hogarth and Reder (1987) argued that there was “a growing body of evidence—mainly of an experimental nature—that has documented systematic departures from the dictates of rational economic behaviour.” In his review of the book, Smith (1991) dismissed that claim: “(experimental economics) documents a growing body of evidence that is consistent with the implications of rational models” (p. 878).
Acknowledging that Simon’s work on bounded rationality had influenced them, too, Kahneman (2003a, 2003b) identified three separate lines of research in which he worked.
“The first explored the heuristics that people use and the biases to which they are prone in various tasks of judgment under uncertainty, including predictions and evaluations of evidence (…). The second was concerned with prospect theory, a model of choice under risk (…) and with loss aversion in riskless choice (…). The third line of research dealt with framing effects and with their implications for rational-agent models (…).” (p. 1449) and,
“Our research attempted to obtain a map of bounded rationality, by exploring the systematic biases that separate the beliefs that people have and the choices they make from the optimal beliefs and choices assumed in rational-agent models. The rational-agent model was our starting point and the main source of our null hypotheses, but Tversky and I viewed our research primarily as a contribution to psychology, with a possible contribution to economics as a secondary benefit. We were drawn into the interdisciplinary conversation by economists who hoped that psychology could be a useful source of assumptions for economic theorizing, and indirectly a source of hypotheses for economic research (Richard H. Thaler, 1980, …).” (p. 1449)
Kahneman & Tversky’s H&BP was based on the idea that thinking was typically fast and rarely slow, and fundamentally about accessibility or intuition. Consequently, fast thinking had to rely on rules of thumb (heuristics) leading to systematic divergences (biases) from normative behavior as described by standard economic theories (Kahneman & Tversky, 1996; Kahneman, 2003a, 2003b; Tversky & Kahneman, 1974). People were increasingly conceptualized as incompetent decision-makers, a theme embraced by the movement that evolved into Behavioral Economics. Thaler (1980), for example, proclaimed that “Research on judgment and decision making under uncertainty, especially by Daniel Kahneman and Amos Tversky () has shown that (…) mental illusions should be considered the rule rather than the exception. Systematic, predictable differences between normative models of behavior and actual behavior occur (…).” (p. 40). It is striking that the optical-illusion analogy was not taken to its logical conclusion, namely that the documented illusions either never occur in the environment or, in the few instances when they do, rarely impose any real cost on the organism. How many times have you encountered, and been fooled by, the Müller-Lyer illusion, a Necker cube or an Ames room in the real environment?—see http://www.michaelbach.de/ot/ for a discussion of these and other optical illusions. In fact, the properties of the visual system that produce these illusions are argued to be optimal given the characteristics of real-world visual environments and the constraints of the visual system, i.e., a finite number of neurons with a bounded response range. We have argued elsewhere (Spiliopoulos & Ortmann, 2014) that specific diagnostic tasks, i.e., specific parameterizations of tasks where competing models make starkly different predictions, should be used with caution to infer the rationality of agents.
Rationality preferably should be assessed on a wide range of parameterizations including those found in the real environment and not just special cases rarely found beyond the laboratory (see on this, Erev et al., 2017; Hogarth & Karelaia, 2005, 2006, 2007).
The obvious problems with the H&B approach culminated in the highly visible dispute between Kahneman and Tversky (1996) and Gigerenzer (1996) about the reality of cognitive illusions. In the view of the critics, and of the present authors, the H&BP was characterized by a lack of process models (key concepts such as representativeness, anchoring and adjustment, and availability being hardly more than labels), too much story-telling, un-incentivized studies, polysemy, often deception, and experimenter demand effects, to name a few. There was, in Nobel Prize laureate Vernon L. Smith’s sarcastic and brilliant observation, too much fishing in the tail ends of human behavior (Smith, 2003, fn. 8). No surprise then that many anomalies were found that were taken as proof of people’s limited rationality and underwhelming performance. This interpretation has been contested not only by the ERP, but also by many others working in the neo-classical (e.g., Smith, 1991) and the bounded-rationality tradition (e.g., see Lieder et al., 2018 for a rational account of anchoring).
2.2 Second, the Ecological-Rationality program (ERP)
The ABC research program (see also Lopes, 1992) was constructed in contrast to the H&BP. Gigerenzer (1991), for example, successfully deconstructed some key findings of Kahneman and Tversky, who responded to Gigerenzer’s critique (Gigerenzer, 1996; Kahneman & Tversky, 1996). ABC also developed a fundamentally different view of heuristics by formulating cognitive process models that could be rigorously tested. Many of the process models were based on a frequentist view of the world, with researchers broadly adopting an evolutionary-psychology perspective that conceptualized humans as intuitive statisticians with an almost innate ability to navigate familiar environs. The way statistical information is presented was a crucial moderator of performance (Sedlmeier & Gigerenzer, 2001; see Hertwig & Ortmann, 2005 for a summary), e.g., in terms of frequencies versus probabilities.
As the H&BP was swallowed hook, line, and sinker by the initial waves of Behavioral Economics/Finance, the ERP remained an outsider of sorts, although its influence has grown as researchers have started applying the insights of ecological rationality to fields outside its domain of origin—more on this below. Part of the problem is that ERP researchers rarely engaged with mainstream modern (neo-classical) economics and when they did, they tended to do so antagonistically, focusing their critiques on normative economic models of deductive reasoning. While these critiques are on point and well-substantiated, if a research program does not simultaneously seek to build bridges to other programs, important opportunities can be lost. We will argue below that important work assuming inductive reasoning in economics can serve as a bridge to the ERP, although important differences remain, and considerable opportunities have yet to be realized. Note that inductive reasoning does not preclude the possibility that a learning process may converge to the deductive solution (e.g., prominently Friedman, 1991), but this will depend on the characteristics of the environment and the learning rule. In fact, learning dynamics may even serve as an equilibrium selection mechanism when deductive reasoning leads to a multiplicity of equilibrium solutions (e.g., Haruvy & Stahl, 2004).
3 Recent accomplishments of the Ecological-Rationality program
The ERP is characterized by a reliance on cognitive process models (which require some serious theorizing), empirical and experimental testing of these models, and an important methodological innovation: the preferred mode of testing relies on “out-of-sample” prediction, or “cross-validation” (Gigerenzer & Gaissmaier, 2011). That is, performance is measured not by the best fit on an existing dataset but by model performance on unseen datasets. There is no data-fitting after the fact. Cross-validation addresses the important bias-variance tradeoff (Gigerenzer & Brighton, 2009). Simple models exhibit higher bias but typically less variance than complex models—the relative magnitude of these two error components determines which type of model predicts better. However, a key finding of the ERP is that in many environments heuristics often exhibit little or no bias relative to more complex models, so the variance effect tends to dominate—we return to this below.
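The logic of cross-validation and the bias-variance tradeoff can be illustrated with a stylized simulation (the data-generating process and the polynomial degrees below are our own illustrative assumptions, not taken from any ERP study). A frugal model and a flexible model are fit to the same small, noisy training sample; the flexible model necessarily fits the training data at least as well, but its higher variance typically makes it predict a fresh sample worse.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical environment: a noisy linear relationship.
def sample(n):
    x = rng.uniform(-1, 1, n)
    return x, 2 * x + rng.normal(0, 0.5, n)

x_train, y_train = sample(20)    # small fitting sample
x_test, y_test = sample(5000)    # fresh "out-of-sample" data

def mse(coeffs, x, y):
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

frugal = np.polyfit(x_train, y_train, deg=1)    # high bias, low variance
flexible = np.polyfit(x_train, y_train, deg=9)  # low bias, high variance

# In-sample, the flexible model can only do better (it nests the frugal one)...
assert mse(flexible, x_train, y_train) <= mse(frugal, x_train, y_train)
# ...but out of sample the variance penalty typically dominates.
print(mse(frugal, x_test, y_test), mse(flexible, x_test, y_test))
```

Ranking models by training fit would pick the flexible model every time; ranking them by out-of-sample error, as the ERP advocates, reverses the verdict here.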
Among the ERP’s key successful demonstrations is that, when cross-validation is used, simple heuristics such as the recognition heuristic or the “take-the-best” heuristic outperformed complicated, computationally slow and greedy models such as multiple regression favored by economists (e.g., Gigerenzer & Brighton, 2009; Gigerenzer & Gaissmaier, 2011; Gigerenzer et al., 1999; Katsikopoulos et al., 2010; Todd et al., 2012). More recent work also favorably compares the performance of heuristics in the wild to increasingly popular machine learning algorithms, including cases where “Big Data” is available (Katsikopoulos et al., 2021a, 2021b).
The reason is that multiple-regression and machine-learning models tend to overfit, failing to account for the inherent noise in datasets. An important implication is that one often does not need to worry about the widely accepted effort-accuracy trade-off (e.g., Payne et al., 1993). Those using simple heuristics can have their cake and eat it, too.
The reason behind the success of heuristics such as recognition and take-the-best is now much better understood (Baucells et al., 2008; Drechsler et al., 2013; Katsikopoulos, 2013; Luan et al., 2011, 2014). There are three environmental characteristics that are sufficient, but not necessary, to induce these striking results: non-compensatoriness of cues, dominance, and cumulative dominance (Footnote 1). If at least one of the three holds, then a lexicographic heuristic exhibits no bias vis-à-vis a linear rule, while being computationally less demanding. These theoretical results would not matter if these conditions were rarely found in real environments. Şimşek (2013) found that they are very common in 51 real-world datasets: in the median dataset, at least one of the conditions held in approximately 90% of cases, so a lexicographic heuristic performed as well as multiple linear regression. Recent work has analyzed fast-and-frugal trees and successfully connected them to signal-detection theory (Luan et al., 2011, 2014), other heuristics such as the fluency and priority heuristics have been proposed (Brandstätter et al., 2006, 2008; Drechsler et al., 2013; Hertwig et al., 2008; Katsikopoulos & Gigerenzer, 2008; but see also Johnson et al., 2008 on the priority heuristic), and a persuasive rationalization has been provided for the tendency of many economists and psychologists to overlook the benefits of simplicity (Brighton & Gigerenzer, 2015).
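The non-compensatoriness condition can be verified directly in a few lines. In the sketch below (the weights are our own illustrative choice), each binary cue's weight exceeds the sum of all subsequent weights, so no combination of lower-ranked cues can overturn a higher-ranked one; a lexicographic rule that decides on the first discriminating cue then reproduces the choices of the full weighted-linear rule exactly.

```python
from itertools import product

# Non-compensatory weights: each weight exceeds the sum of all later ones
# (0.5 > 0.25 + 0.125 + 0.0625, and so on down the sequence).
weights = [0.5, 0.25, 0.125, 0.0625]

def linear_score(cues):
    """Full weighted-linear rule over binary cue values."""
    return sum(w * c for w, c in zip(weights, cues))

def lexicographic(a, b):
    """Take-the-best-style rule: decide on the first cue that discriminates."""
    for ca, cb in zip(a, b):
        if ca != cb:
            return 'a' if ca > cb else 'b'
    return 'tie'

# Check agreement over all 256 pairs of binary cue profiles.
for a, b in product(product([0, 1], repeat=4), repeat=2):
    sa, sb = linear_score(a), linear_score(b)
    lin = 'a' if sa > sb else ('b' if sb > sa else 'tie')
    assert lexicographic(a, b) == lin
```

The heuristic inspects cues only until one discriminates, yet under non-compensatoriness it is exactly as accurate as the linear rule; the frugality costs nothing.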
A new wave of ER research—much of it emerging from the Center for Adaptive Rationality at the Max Planck Institute for Human Development (with which one of us is affiliated)—is deepening our understanding of the interaction between heuristics and environments by turning the question on its head. The ERP initially wanted to answer the question of which heuristics performed well in specific (given) environmental niches and to uncover how important characteristics of the environment were. That is, research was directed towards a theory of heuristics; attention has recently turned to furthering our understanding of environments, moving from merely describing their characteristics to explaining how they arise and are shaped. This requires a theory of environments, which is a first step toward the goal of understanding the co-evolution of heuristics and environments, rather than assuming a uni-directional causal arrow where heuristics are a response to a specific and fixed environment. Pleskac and Hertwig (2014) posited that in the face of uncertainty about the likelihood of events occurring, decision-makers use the risk-reward relationship to infer the probabilities of events from their magnitude. They argue that in many man-made choice environments an inverse relationship between risk and reward should be expected. For example, in horse-track odds, lottery tickets, bargaining, and manuscript submissions to academic journals, the inverse relationship arises primarily through market pressure on prices to converge to those of a fair bet. Leuker et al. (2018, 2019) found that laboratory participants’ choice behavior was influenced by the risk-reward relationship even when probability information was available. Pleskac et al. (2021) presented their competitive risk–reward ecology theory, formally deriving how this relationship arises from the ideal free distribution principle applicable to competitive environments.
Another important contribution of the ERP is the distinction between “decisions-from-description” (DfD) and “decisions-from-experience” (DfE) and the empirical validation of a gap between the two, i.e., choice behavior derived from otherwise identical tasks differs across these two paradigms (e.g., Barron & Erev, 2003; Hertwig & Erev, 2009). In DfD, all relevant information (including lack or uncertainty thereof) about the choice task is described in words and numbers to decision-makers. For example, the canonical choice task investigated in this literature involves a choice between prospects whose monetary outcomes and associated likelihoods are directly provided. In DfE, these properties are learned through experience with the environment, for example, by sampling from these prospects and observing individual draws. Consequently, decision-makers must infer the true probabilities in a manner similar to learning in the real world; of course, due to sample properties, such inference carries a degree of uncertainty which, although it may be ameliorated by more sampling, can never be completely eliminated. Intuitively, risk maps into DfD and uncertainty maps into DfE. Furthermore, these mappings are analogous to Savage’s (1954) distinction between small (DfD) and large (DfE) world decision making, where the states, consequences and probabilities of the world may be unknown (see Gigerenzer & Gaissmaier, 2011 for a discussion). We note parenthetically that in strategic environments DfD and DfE also map into eductive and evolutive (deductive and inductive) game theory (Binmore, 1990; Friedman, 1991). We hope that researchers will continue recent attempts at theory integration (e.g., Luan et al., 2011, 2014; Schooler & Hertwig, 2005) and related attempts to break down disciplinary boundaries (e.g., Hutchinson & Gigerenzer, 2005; Spiliopoulos & Hertwig, 2020).
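The inference problem at the heart of DfE can be made concrete with a small simulation (the probability and sample sizes are illustrative numbers of our own choosing). A rare outcome whose probability p = 0.1 would simply be stated in DfD must, in DfE, be estimated from experienced draws; small samples often underrepresent or entirely miss the rare event, one mechanism behind the description-experience gap.

```python
import random

random.seed(2)

p = 0.1  # true probability of the rare outcome (stated directly in DfD)

def dfe_estimate(n):
    """Estimate p from n experienced draws, as a DfE decision maker must."""
    return sum(random.random() < p for _ in range(n)) / n

# With samples of 10 draws, the rare event is missed entirely in roughly a
# third of samples (0.9**10 ≈ 0.35), so its probability is estimated as zero.
estimates_small = [dfe_estimate(10) for _ in range(1000)]
miss_rate = sum(e == 0 for e in estimates_small) / 1000

# A large sample pins p down closely, but never exactly.
print(miss_rate, dfe_estimate(100_000))
```

Sampling more reduces, but never eliminates, the estimation uncertainty; the experienced frequencies, not the true probabilities, are what the decision maker has to work with.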
An increasing number of economists and researchers from management and organization (no surprise here, given where it all started: Simon, 1955, 1956) have been attracted by the ER paradigm. For example, Åstebro and Elhedli (2006) have empirically demonstrated the usefulness of simple heuristics in forecasting commercial success for early-stage ventures. Another example is the hiatus heuristic that predicts whether a customer is active or not, i.e., will make future purchases. Wübben and Wangenheim (2013) not only find evidence of its use by executives, but also show using real-world data that simple heuristics can out-predict more complex models. Luan et al. (2019) have demonstrated empirically, using data from 236 applicants at an airline company and through simulations, that a particular inference heuristic outperforms logistic regressions. Eisenhardt and some of her colleagues (see for a self-centered primer, Bingham & Eisenhardt, 2014) have argued that successful repeated product innovation is best implemented through “simple rules”, or “semi-structures” which define a path between too much and too little structure. Maitland and Sammartino (2015) have demonstrated the use of simple decision rules for location choice by multinational companies when environments are politically hazardous. The work of Petrou et al. (2020) suggests that too much procedural rationality might be a bad thing.
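To see how radically frugal the hiatus heuristic is, note that it ignores everything about a customer except the recency of their last purchase. The sketch below is a minimal illustration; the nine-month cutoff and the customer data are hypothetical, not taken from Wübben and Wangenheim's datasets.

```python
# Hypothetical sketch of the hiatus heuristic: a customer is classified as
# active if and only if their last purchase falls within the hiatus window.
# The 9-month cutoff below is illustrative, not from any particular dataset.
def is_active(months_since_last_purchase, hiatus=9):
    return months_since_last_purchase <= hiatus

# Toy customer base: name -> months since last purchase.
customers = {'anna': 2, 'ben': 14, 'chloe': 9}
active = {name for name, m in customers.items() if is_active(m)}
print(active)  # {'anna', 'chloe'}
```

Purchase frequency, order value, demographics: all are discarded, yet on real data such one-variable rules can out-predict stochastic customer-base models fit to far richer information.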
Many real-world decisions, especially organizational ones, involve a dynamic component. In extreme cases, the theoretically optimal solution is based on complex dynamic programming (often involving backward recursion), which certainly would not qualify as a potential descriptive model of behavior. Heuristics have not typically been applied to such dynamic problems; however, Rapoport et al. (2022) showed that an easily implementable heuristic exhibits excellent performance across a wide range of variants of optimal-stopping problems (often referred to as the secretary problem), including a more realistic competitive version where employers compete for hires.
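As a sketch of the kind of simple rule at issue (the classical cutoff rule for the secretary problem, not necessarily the specific heuristic Rapoport et al. study), one observes roughly the first n/e candidates without committing and then accepts the first candidate better than all of them. A short simulation recovers the well-known success rate of roughly 1/e.

```python
import math
import random

random.seed(1)

def cutoff_rule(values, cutoff):
    """Reject the first `cutoff` candidates, then accept the first one
    better than everything seen so far (else take the last candidate)."""
    best_seen = max(values[:cutoff])
    for v in values[cutoff:]:
        if v > best_seen:
            return v
    return values[-1]

n, trials = 50, 20_000
cutoff = round(n / math.e)  # classical ~n/e observation phase

hits = 0
for _ in range(trials):
    values = random.sample(range(1000), n)  # candidates in random order
    if cutoff_rule(values, cutoff) == max(values):
        hits += 1

print(hits / trials)  # close to the theoretical ~1/e ≈ 0.37
```

The rule requires no backward recursion and only a single running maximum, yet it selects the very best candidate about 37% of the time, which no rule can beat asymptotically.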
Ultimately, ERP proponents would like to see their behavioral insights taken up by practitioners, not just academics, and applied to important real-world situations. Key tenets of the ERP have inspired the upper echelons of the financial system; see the speech by Haldane and Madouros (2012) of the Bank of England or Kay and King (2020). They argue that simplicity, rather than complexity, would benefit the financial system by inducing greater robustness to its inherent Knightian uncertainty. The principle of less-is-more could arguably help in designing the regulatory system, assessing key financial indicators such as the capital ratio, predicting crises and bank failures, and, ultimately, guiding policy intervention—see Aikman et al. (2021) for a more detailed discussion. Unfortunately, combatting complexity in (semi-)governmental agencies is a difficult venture, as the vicious cycle of Byzantine regulation—supporting a complex and bloated bureaucratic structure, leading to even more complexity and fractured regulation (e.g., Michelacci et al., 2021)—is self-sustaining.
Artinger et al. (2015) have provided a useful primer of heuristics as adaptive decision strategies in management and organizations, but it seems clear that—notwithstanding the almost 300 cites their primer has attracted so far—the use of heuristics in management and organization is under-studied and remains a fruitful area of research (e.g., Zellweger & Zenger, 2021 [in press] or Saxena et al., 2021). To see how understudied the topic is academically relative to the revealed interest of practitioners, google “rules of thumbs to determine when projects pay off” to find almost 3 million hits and scores of lists of simple decision rules for everything from cash flow to financial investments. While there can be no doubt that progress towards a science of heuristics has been tremendous and that the ERP’s influence is increasing, there remain in our view important blind spots.
4 The Ecological-Rationality program and economics: a missed opportunity (so far)
The incompatibility of the Ecological-Rationality program with economics has been emphasized by a number of ERP researchers, who present it as an antithesis both to the H&BP and to the neoclassical-economics program, including Behavioral Economics, which is viewed as a disguised extension of the neoclassical program (Berg & Gigerenzer, 2010).
The first argument is that Behavioral Economics often simply patches up neoclassical models to explain ex post any empirical deviations by incorporating additional parameters into the utility function (e.g., the Fehr & Schmidt model of fairness).
The second argument is that models in Behavioral Economics are still as-if models of behavior, lacking the cognitive foundations that would be laid by explicitly specifying the underlying decision processes. We are sympathetic to these claims as far as they pertain to much of the research that is often dubbed Behavioral Economics (Thaler, 2016). Exceptions exist; for example, see the handbook by Altman (2017) for a collection of studies across many topics primarily conducted by economists. We have argued elsewhere in favor of process models over as-if models (Spiliopoulos & Ortmann, 2018). In some cases, the term heuristic is used in the literature to refer to simple behavioral models that, however, do not necessarily have a strong underlying procedural foundation. A case in point is the Intertemporal Choice Heuristic (ITCH), which captures choices between earlier or later payments by identifying the weight that individuals put on absolute and relative money value and time difference. Interestingly, the authors do not need to specify utility functions, making debates about functional forms, decision weights, risk attitudes, and their elicitation seemingly superfluous (e.g., Andersen et al., 2008). While an interesting as-if model of intertemporal decision making, it is not a heuristic in the fast-and-frugal sense because it does not ignore available information (Gigerenzer & Gaissmaier, 2011; Gigerenzer et al., 1999) and requires the calculation of both absolute and relative differences in variables (time and payoffs), thereby introducing redundancy rather than parsimony.
Generally, we fear that a purely antagonistic approach of emphasizing the divide between the ERP and neo-classical economic rationality has the unfortunate consequence of deepening the schism rather than fostering an exchange between these programs. The differences in opinion are well known; we will attempt below to highlight (perhaps surprising) similarities between these research programs, including the parallel, yet independent, emergence of similar ideas. This suggests that there is significant scope for future exchange of ideas and productive collaboration between researchers from the two fields—see, for example, Spiliopoulos and Hertwig (2020), to which we will return below.
4.1 Heuristics in economics
Early research in marketing science by Roberts and Lattin (1991) developed in parallel to ERP research in psychology. Representative examples include the work by economists like Manzini & Mariotti and collaborators, and an exploding literature on choice, consideration sets, attribute filters, and various forms of “rational (in)attention” as exemplified by the work of Masatlioglu et al. (2012), Kimya (2018), and Mackowiak et al. (2023). Independent parallel work can be scientifically counterproductive in the sense that closer collaboration could have afforded increasing returns to research and the avoidance of duplication [e.g., see Arkes and Ayton (1999) on the Concorde fallacy and related work in economics on sunk-cost effects such as Friedman et al. (2007) and McAfee et al. (2010)]. Broadly inspired by the work of Gigerenzer and associates, the well-cited Manzini and Mariotti (2007) formalized and axiomatized a type of sequential eliminative heuristic, demonstrating that boundedly rational choice procedures can be tested with the observable choice (“revealed preference”) data favored by more traditional economists. Manzini and Mariotti (2012a, 2012b, 2014) built on this earlier two-stage deterministic model of choice by providing models of stochastic choice when consideration sets are present (i.e., agents do not consider all feasible alternatives). This is a popular, but typically less formalized, approach in management and marketing science that is related to the Random Utility Models that have been around for decades in economics. Mandler et al. (2012) provided procedural foundations for utility maximization, with the checklists in the title of their paper being the equivalent of the—preferably non-compensatory—cues central to the fast-and-frugal heuristics extensively analyzed by the ERP. The authors show that under specific conditions, procedural utility maximization matches substantive utility maximization.
In Manzini and Mariotti (2012a), the authors extended and formalized a choice procedure introduced by Tversky (1969) that has recently been prominently featured in Luan et al. (2011, 2014). Masatlioglu et al. (2012) also study preference maximization with attention filters, as does Kimya (2018), who points out that in many cases alternatives have observable attributes (such as price and average review ratings) that allow for consideration sets. He also reviews other related literature that seems relatively untouched by insights from the ERP, as seems to be the case for the exploding literature on rational inattention (e.g., Gossner et al., 2021; Mackowiak et al., 2023).
4.2 Cognitive bounds and behavior
The premise that less is more with respect to the amount of information that decision makers use can be linked to bounds on cognition such as capacity limitations in working memory (Cowan, 2000) or the long-term memory retrieval system (Schooler & Anderson, 1997). Economists have similarly been concerned with simple strategies that do not use all available historical information, starting at least with the Axelrod (1984) tournament. Tit-for-tat and win-stay/lose-shift are examples of relatively simple heuristics that perform well in repeated games and are robust to the exact composition of types in the population and to noise or uncertainty about payoffs. The win-stay/lose-shift heuristic is also used by players in games of pure conflict, and fingerprints of its use can also be found in the response times of those decisions (Spiliopoulos, 2018a)—see also Spiliopoulos and Ortmann (2018) for a discussion of what experimental economics can learn from psychologists’ use of response times. Relatedly, explicit modelling of forgetting has been common in economic studies of learning in repeated games since Roth and Erev (1995) and Cheung and Friedman (1997). Finite-state automata are another methodological tool explicitly aimed at examining the effects of limiting the prior information (in a temporal sense) that a player conditions his/her strategies on (e.g., Rubinstein, 1986). Furthermore, it is well known in game theory that more information or strategic sophistication does not necessarily lead to better outcomes. The desirability of sophistication, for example, in terms of Level-k strategies, depends on the structure of the game (Camerer & Fehr, 2006). Higher sophistication is beneficial (detrimental) in games of strategic substitutes (complements).
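To make the frugality of these strategies concrete, here is a minimal sketch (our own illustration; the payoff values and the aspiration level of 2 are assumptions, not taken from the cited papers) of tit-for-tat and win-stay/lose-shift in a repeated prisoner's dilemma. Each strategy conditions on only a single piece of recent history.

```python
# Payoffs for the row player in a prisoner's dilemma: (own action, other's action).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(own_history, other_history):
    """Cooperate first; thereafter copy the opponent's previous action."""
    return "C" if not other_history else other_history[-1]

def win_stay_lose_shift(own_history, other_history, aspiration=2):
    """Repeat the last action if its payoff met the aspiration level, else switch."""
    if not own_history:
        return "C"
    if PAYOFF[(own_history[-1], other_history[-1])] >= aspiration:
        return own_history[-1]                        # "win" -> stay
    return "D" if own_history[-1] == "C" else "C"     # "lose" -> shift

def play(strategy_a, strategy_b, rounds=10):
    """Run a repeated game between two strategies; return both action histories."""
    hist_a, hist_b = [], []
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        hist_a.append(a)
        hist_b.append(b)
    return hist_a, hist_b
```

Two tit-for-tat players (or two win-stay/lose-shift players with these payoffs) lock into mutual cooperation from the first round onward, despite neither using more than one round of memory.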
4.3 The interaction between simple decision rules and the environment
The ERP is based on the premise that rationality should be assessed in the context of the environment, i.e., Simon’s ‘scissors’ metaphor. In strategic settings, the definition of the environment must be extended to include institutions, market characteristics, and the interactions between agents. Perhaps surprisingly—to ERP researchers—an early example of such interactions was given by Becker (1962), who analyzed a model of markets in which participants behaved irrationally or randomly. He found that seemingly rational behavior at the macro level (not only in the consequence space, but also in the choice space) could arise even from random behavior at the micro level. In this spirit, more recent developments in economics include the zero-intelligence program initiated by Gode and Sunder (1993), who examined the effects of the structure of continuous double-auctions on market outcomes. They found that markets populated by simple agents, who made random bids subject only to the constraint of never making offers that would lead to a loss, converged and achieved near-perfect allocative efficiency. The lesson to be learned from this research is that rationality cannot be ascribed to individual decision makers without explicit consideration of the environment. In the case of zero-intelligence traders, convergence was driven by the position of the marginal units.
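The zero-intelligence result can be conveyed with a deliberately simplified toy simulation in the spirit of Gode and Sunder, though not their actual continuous double-auction protocol (the trader values, the price ceiling of 200, and the random-matching setup are all our assumptions). Traders quote random prices subject only to the no-loss constraint, and efficiency is realized surplus over the maximum attainable surplus.

```python
import random

# Toy zero-intelligence-constrained market: buyers bid uniformly below their
# valuation, sellers ask uniformly above their cost, and a trade occurs
# whenever a randomly drawn bid-ask pair crosses.

def zi_market(valuations, costs, steps=20000, seed=0):
    rng = random.Random(seed)
    buyers = sorted(valuations, reverse=True)    # remaining buyer valuations
    sellers = sorted(costs)                      # remaining seller costs
    surplus = 0.0
    for _ in range(steps):
        if not buyers or not sellers:
            break
        b, s = rng.randrange(len(buyers)), rng.randrange(len(sellers))
        bid = rng.uniform(0, buyers[b])          # never bid above valuation
        ask = rng.uniform(sellers[s], 200)       # never ask below cost
        if bid >= ask:                           # crossing quotes -> trade
            surplus += buyers.pop(b) - sellers.pop(s)
    return surplus

valuations = [100, 90, 80, 70, 60, 50, 40, 30]
costs = [20, 30, 40, 50, 60, 70, 80, 90]
# Maximum surplus: match the highest valuations with the lowest costs for as
# long as gains from trade remain.
max_surplus = sum(v - c for v, c in zip(sorted(valuations, reverse=True),
                                        sorted(costs)) if v > c)
efficiency = zi_market(valuations, costs) / max_surplus  # typically close to 1
```

Despite the traders' randomness, realized surplus is typically a large fraction of the maximum: the no-loss constraint combined with the market institution, not individual rationality, drives the result.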
Economists tend to ignore the ecological rationality of boundedly rational strategies (such as Level-k and cognitive hierarchy theory) although in the lab many subjects’ behavior is explained particularly well by the L1 heuristic. This heuristic assumes that an opponent chooses each of his/her available actions with equal probability. Consequently, the L1 heuristic simplifies a strategic problem to a non-strategic one (Spiliopoulos et al., 2018). Are these heuristics ecologically rational? Does their use lead to systematically lower payoffs in the outcome space or not? Earlier work found that given the behavior of other subjects in experiments, the use of heuristics is not particularly detrimental (e.g., Camerer et al., 2004a, 2004b; Stahl & Wilson, 1995). In fact, using a Nash equilibrium strategy is not rational conditional on other players not adopting the Nash equilibrium. While these findings establish that heuristics may be appropriate given other players adopting non-Nash strategies, it is not a complete answer as it does not explain why other players were not playing Nash in the first place. To do so requires a map of bounded rationality specifying which heuristics are effective in different classes of games, something akin to the systematic exploration of the environment and decision rules in Hogarth and Karelaia (2006), but applied to games.
This is exactly what Spiliopoulos and Hertwig (2020) undertook by looking at the performance of heuristics found in the experimental economics literature for a wide range of different environments comprised of one-shot normal-form games with differences in: (a) the size of the action space, (b) the degree of payoff uncertainty in terms of missing information about the game payoffs, and (c) the degree of (mis)alignment of players’ interests in the games. They find that the simple Level-1 heuristic performs extremely well over a wide range of environments for various measures of performance and robustness. This may appear to be a surprising result at first, as L1 completely ignores an opponent’s payoffs. However, this imbues L1 with an immunity to noise or uncertainty in an opponent’s payoffs. Combined with the diminished effect of noise in own payoffs due to the averaging calculations of L1, these properties render L1 more robust to payoff uncertainty than other strategies. Note that games with highly correlated payoffs across players exhibit significant redundancy in the payoff cues; knowing one’s own payoffs is highly predictive of the opponent’s possible payoffs. Despite its simplicity, L1 also indirectly conforms to an important prescriptive principle related to strategic uncertainty: never play a dominated action. The normative Nash equilibrium solution is found to perform on par with the best heuristic only when payoffs are positively correlated, i.e., in games that have a stronger incentive for cooperation than competition, but at a significantly greater computational cost.
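The L1 heuristic itself is easy to state in code. The sketch below (our own illustration, with a made-up payoff matrix) best-responds to a uniform belief over the opponent's actions; the opponent's payoffs never enter the computation, which is precisely the source of L1's robustness to payoff uncertainty.

```python
# Level-1 heuristic: assume the opponent mixes uniformly over their actions
# and choose the row with the highest expected own payoff under that belief.

def level_1(own_payoffs):
    """own_payoffs[i][j]: my payoff when I play row i and the opponent column j."""
    expected = [sum(row) / len(row) for row in own_payoffs]  # uniform-opponent belief
    return max(range(len(expected)), key=expected.__getitem__)

# Row player's payoffs in a 3x3 game; the opponent's payoffs are irrelevant to L1.
own = [[4, 0, 2],
       [3, 3, 3],
       [0, 5, 1]]
choice = level_1(own)  # expected payoffs 2.0, 3.0, 2.0 -> row 1
```

A strictly dominated row always has a lower average payoff than the row dominating it, so L1 never selects a strictly dominated action, consistent with the prescriptive principle mentioned above.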
Another example of work by economists comparing the performance of simple heuristics to other more sophisticated models in repeated games is Duersch et al. (2014). They characterized the set of symmetric two-player games where tit-for-tat (and a wider array of imitation strategies) cannot be beaten by any other strategy of unbounded sophistication. Generally, learning and evolutionary dynamics do not necessarily eliminate simpler strategies competing against more complex strategies in the long run (e.g., Germano, 2007; Mohlin, 2012; Robalino & Robson, 2016; Stahl, 1993). This means that in the steady state, the surviving simple heuristics achieve payoffs identical to those of complex strategies.
These discoveries seem to warrant directing more attention toward elucidating the complex relationship between the computational sophistication and information requirements of strategies, including heuristics, and the strategic and informational properties of environments.
4.4 How to choose heuristics from the adaptive toolbox?
Initial criticisms that the ERP had not adequately specified the heuristic selection method of the adaptive toolbox have prompted work directed at strategy selection. We will touch upon three qualitatively different approaches to this problem. The earliest response to this critique was to postulate a reinforcement learning mechanism over heuristics (Rieskamp & Otto, 2006)—see also the RELACS model by Erev and Barron (2005). This is essentially the same solution independently proposed for strategic decision making by economists. For example, Aumann (1997, pp. 7–8) writes “Ordinary people do not behave in a consciously rational way in their day-to-day activities. Rather, they evolve ‘rules of thumb’ that work in general, by an evolutionary process like that discussed above (Section 1a), or a learning process with similar properties.” In the El-Farol bar problem (Arthur, 1994), agents hold a heterogeneous set of simple predictive models and learn to use the more effective rules (given their individual experience) over time; interestingly, such a learning process converges to the Nash equilibrium solution. Empirical work in repeated games by Stahl (1996, 1999, 2000) and Haruvy and Stahl (2012) finds evidence that subjects learn to use relatively simple rules based on their prior performance—they refer to their model as rule-learning. These concepts are strikingly similar to those proposed by the ERP; however, the ERP studies were in the domain of individual decision making, whereas the economic studies are in strategic decision making.
A second approach to the strategy selection problem can be based on evolutionary pressure, whose dynamics (similarly to reinforcement learning) depend directly on the consequence space, rendering the approach relatively computationally inexpensive. Spiliopoulos (2021) extends the paradigm in Spiliopoulos and Hertwig (2020) to a dynamic environment where strategic heuristics are subjected to replicator dynamics while playing randomly drawn one-shot normal form games. The findings point to an ecological map of the environments where simple heuristics survive in the long-run steady-state versus more sophisticated strategies, including the Nash equilibrium. Which heuristics survive depends crucially on the characteristics of the games found in an environment and the cost of computational complexity. In many cases, an ecology with mixtures of heuristics of varying sophistication and complexity emerges in the steady-state, providing a theoretical foundation for the co-existence of a heterogeneous set of strategies, as is often found in experiments.
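As a stylized illustration of how such dynamics can sustain a mixture of heuristics (our own toy example, not the model in Spiliopoulos, 2021), consider discrete-time replicator dynamics over two heuristics whose hypothetical payoffs make each earn more against the other than against itself:

```python
# Discrete-time replicator dynamics: each heuristic's population share grows
# in proportion to its fitness relative to the population average.

def replicator_step(shares, payoffs):
    """payoffs[i][j]: payoff to heuristic i when matched with heuristic j."""
    n = len(shares)
    fitness = [sum(payoffs[i][j] * shares[j] for j in range(n)) for i in range(n)]
    mean_fit = sum(shares[i] * fitness[i] for i in range(n))
    return [shares[i] * fitness[i] / mean_fit for i in range(n)]

# Hypothetical payoffs: each heuristic does better when facing the other than
# when facing itself (e.g., a sophisticated rule that only pays off when
# exploiting a simple one).
payoffs = [[1, 3],
           [2, 1]]
shares = [0.5, 0.5]
for _ in range(500):
    shares = replicator_step(shares, payoffs)
# The dynamics settle at an interior mixture (2/3, 1/3): both heuristics co-exist.
```

The interior steady state illustrates the point in the text: under suitable payoff structures, evolutionary pressure need not drive out either the simple or the sophisticated heuristic, producing a heterogeneous ecology.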
A third approach to the problem of heuristic selection is resource-rational analysis (Griffiths et al., 2015; Lieder et al., 2019), which admits ERP-like principles at the level of decision rules or heuristics that are simple and constrained by cognitive bounds, but assumes optimization at the meta-reasoning level of heuristic selection based on cost–benefit analysis.
These approaches fall under the general category of what Schurz and Thorn (2016) referred to as meta-inductive strategies. These authors consider the theoretical limits of the performance of various meta-strategies, ranging from choosing the recently best-performing heuristic to strategies that weight the predictions of many heuristics in the toolbox. Not surprisingly, they show that in dynamic and uncertain environments, the performance of meta-strategies depends crucially on the properties of the environmental dynamics, a direct analogy to the ecological mapping between specific heuristics and environments.
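The simplest such meta-inductive strategy can be sketched as follows (the predictors, data, and function names are our own hypothetical illustration): at each step the meta-strategy adopts the prediction of whichever heuristic has the lowest cumulative error so far.

```python
# "Imitate the best" meta-induction: delegate each prediction to the heuristic
# with the best track record up to that point.

def imitate_the_best(predictions, outcomes):
    """predictions[h][t]: heuristic h's prediction at time t."""
    cum_error = [0.0] * len(predictions)
    meta = []
    for t, outcome in enumerate(outcomes):
        best = min(range(len(predictions)), key=cum_error.__getitem__)
        meta.append(predictions[best][t])               # delegate to current best
        for h in range(len(predictions)):
            cum_error[h] += abs(predictions[h][t] - outcome)
    return meta

# Heuristic 0 is accurate early, heuristic 1 late; the meta-strategy switches
# once heuristic 1's track record overtakes heuristic 0's.
preds = [[1, 1, 9, 9, 9, 9, 9],
         [5, 5, 5, 5, 5, 5, 5]]
outcomes = [1, 1, 5, 5, 5, 5, 5]
meta = imitate_the_best(preds, outcomes)
```

Note the switching lag in the middle of the sequence: how quickly the meta-strategy adapts depends on the environmental dynamics, which is exactly Schurz and Thorn's point.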
The aforementioned approaches may converge under specific environmental characteristics, as reinforcement learning and evolutionary pressure over heuristics may lead to similar results as rational meta-reasoning. The conditions under which this may occur need to be formally investigated and the different approaches pitted against each other in competitive model comparisons to determine their empirical content. It need not be the case that only one of the approaches is valid; they may also be part of an adaptive toolbox, to be employed in environments where the relevant information required is present and informative. Clearly there is potential here for both disciplines to interact and advance our knowledge of the strategy selection problem.
4.5 Procedural modeling
An important characteristic of most ERP studies is the insistence that models should be procedural (or process-based), in contrast to the majority of models in economics that are as-if models. The advantage of procedural models is that they make more specific predictions (for choices and processes) than as-if models and are consequently more falsifiable in the Popperian sense. For example, see Johnson et al. (2008), who argue that the process data collected in experiments are incompatible with those implied by the Priority heuristic; this, of course, would not have been possible for an as-if model. It is perhaps here that cognitive psychologists have already exerted a uni-directional influence on economists. Early work in psychology employing process-tracing techniques such as Mouselab (Johnson et al., 1989) and eye-tracking has spilled over to economics—see Crawford (2008) for an excellent overview. Providing process-level foundations to existing as-if models in economics, and highlighting the value-added of doing so, is another way of engaging economists with the ERP. For example, while not originally envisaged as process-models per se, Cognitive Hierarchy and Level-k theories have been subsequently grounded in models of information search and integration using process-tracing (e.g., Devetag et al., 2016; Polonio et al., 2015). Fischbacher et al. (2013) modified economic theories of social preferences by imposing a decision tree structure on the order in which the relevant variables are examined. Similarly, Spiliopoulos (2013) transformed a process-free model of pattern recognition in games (Spiliopoulos, 2012) into a process-model encompassing both exemplar- and prototype-based categorization grounded in the ACT-R cognitive architecture. It is a hopeful sign that economists such as Spiegler (2019) are calling for more theorizing in Behavioral Economics for it to truly realize its potential as a transformative force in economics.
He argues convincingly that simply relaxing parametric functional forms, while empirically practical as it allows simple hypothesis tests of deviations from orthodox economic theory, primarily serves to keep Behavioral Economics mired in the methodological embrace of classical economics. Concrete theorizing, which includes procedural modeling but also fully utilizes the arsenal of tools in economists’ toolboxes, particularly in the interaction and aggregation of individual behavior to whole markets, will lead to falsifiable behavioral models. This would contrast with more flexible renditions of heuristics described qualitatively, often found in the H&BP program and in the (now essentially orthodox) Behavioral Economics literature.
4.6 Reasoning by similarity and cases
Reasoning by similarity can be a useful tool when confronted with the uncertainty of a new situation that an agent has not experienced. Important theoretical contributions have been made by economists to case-based and analogy-based reasoning; see, for example, early work by Rubinstein (1988) and Leland (1994) on decisions under risk and the extensive work of Gilboa and Schmeidler (1995, 2001). Other work by economists exploiting similarity in inductive inference involves the question of how agents play a new game (that they have not seen before); specifically, how prior experience of other games may spill over to new (unseen) games on the basis of similarity between games (e.g., Grimm & Mengel, 2012; Knez & Camerer, 2000; Mengel, 2012; Mengel & Sciubba, 2014; Spiliopoulos, 2015). Also, Spiliopoulos (2013) showed that subjects learn from the similarity, not between games, but between patterns in the history of play (sequential actions chosen by both players in previous rounds) during a single repeated game. This work is complementary to cognitive psychologists’ work on pattern recognition in individual decision making, specifically in decision making under uncertainty (or decisions from experience), where sampling plays a prominent role. Models in which decision makers assess the similarity of the most recent sequence of samples (of a given depth) to contingent sequences encountered in prior sampling imply a wavy-recency effect of rare outcomes (Plonsky & Erev, 2017; Plonsky et al., 2015) and predict choice behavior more accurately than models without such pattern detection.
Pattern recognition, the learning of contingent events and their likelihood in the environment, must be a core competency of any species worth its salt; adaptation and survival depend crucially on unearthing the regularities in the environment. Also, reasoning by similarity is essentially the application of this competency to uncertain and new environments, where exact matching is not possible. Consequently, we believe that these competencies are pervasive through all facets of judgment and decision making and constitute a particularly fruitful area of collaboration between economists and psychologists, potentially serving as one of the bridges between ecological and economic rationality.
4.7 What is the appropriate performance metric for model comparisons?
The ERP has rightly promoted the use of cross-validation to compare the performance of heuristics to more complex models, hence shifting the focus from explanation to prediction. This is a consequence of the bias-variance dilemma: more complex models will tend to fit in-sample data better than simpler models (such as heuristics), but may perform worse on out-of-sample predictions. Friedman (1953) was an early proponent of the notion that theories should be evaluated on the basis of their predictive power; of course, ERP researchers would take issue with his contention that the processes (and underlying assumptions) are irrelevant—see, for example, the billiard player example in Friedman and Savage (1948). Studies published in prominent economics journals as far back as Camerer and Ho (1999), and including more recent work such as Wilcox (2011) and Spiliopoulos (2012, 2013), have also argued for, and used, cross-validation. Yet, it is here where economists really could learn from intriguing work that is currently being done in psychology. Erev et al. (2017, and literature therein), for example, make a persuasive case, of immediate relevance for economists, for the use of choice prediction competitions between models. Yarkoni and Westfall (2017) have summarized lessons from machine learning to make the case for prediction over explanation; see relatedly also Bhatia and Le (2021). Plonsky and Erev (2022) have made the case for prediction-oriented behavioral research, summarizing succinctly some of the relevant recent literature. He et al. (2022) have extended the prediction paradigm (and the cross-validation approach) by showing the predictive power of “model crowds”, i.e., (relatively small) sets of models that get weighted in different ways. Model crowds’ superior predictive performance essentially further reduces the variance component of prediction errors. There are many fascinating developments in this space and economists ought to take note.
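A toy demonstration (ours, not drawn from the cited studies) makes the bias-variance point concrete: a maximally flexible model (1-nearest-neighbour prediction) achieves zero in-sample error, yet an information-ignoring "heuristic" that always predicts the training mean generalizes better when the cue is pure noise.

```python
import random

# The cue x carries no information about y: y is the same noisy constant
# regardless of x. A flexible model overfits; the mean "heuristic" does not.

random.seed(1)
true_mean = 5.0
train = [(random.uniform(0, 1), true_mean + random.gauss(0, 1)) for _ in range(50)]
test_set = [(random.uniform(0, 1), true_mean + random.gauss(0, 1)) for _ in range(500)]

def predict_mean(x):
    return sum(y for _, y in train) / len(train)        # ignores the cue entirely

def predict_1nn(x):
    return min(train, key=lambda p: abs(p[0] - x))[1]   # copies nearest training y

def mse(model, data):
    return sum((model(x) - y) ** 2 for x, y in data) / len(data)

in_sample_1nn = mse(predict_1nn, train)        # exactly 0: each point matches itself
out_mean, out_1nn = mse(predict_mean, test_set), mse(predict_1nn, test_set)
```

In-sample fit would rank the flexible model first; cross-validation on held-out data reverses the ranking, which is exactly why the ERP insists on out-of-sample comparison.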
4.8 What is the appropriate space for the calculation of deviations from rationality?
A further issue concerns how we measure deviations from rationality, if they exist at all. The ERP focuses on deviations in the consequence space, i.e., comparing the actual loss in terms of the consequences of a behavior. Consequences can be actual payoffs, if they are well defined for a problem, or a metric based on the percentage of correct/wrong responses often used in binary tasks. Using deviations in the consequence space instead of the choice space is important, as seemingly large differences in the choice space may not translate into large deviations in the consequence space, particularly when computational costs are included. In the early history of Behavioral Economics, deviations from rationality were typically measured in the choice space, and this still occurs to a considerable extent. However, experimental economists have taken issue with experiments that have a flat payoff function around the normative solution, culminating in the payoff-dominance critique (Harrison, 1989) that prompted a large debate in the field (see the comments and replies to this paper in the American Economic Review Vol. 82, No. 5, 1992).
This debate has influenced subsequent work in the field; see, for example, the extension of the laboratory literature on mixed-strategy equilibrium behavior (where incentives and the curvature of the payoff function may indeed be weak) to high-incentive field settings with professional sports players (e.g., Spiliopoulos, 2018b; Walker & Wooders, 2001). While originally intended as a critique of the design of many experiments in economics, implicit in the payoff-dominance critique is the notion that sub-optimal behavior can only be identified when it is accompanied by large costs in the consequence space. A large deviation in the choice, but not consequence, space can be thought of as near-optimal behavior with a low opportunity cost.
5 Open questions and challenges
While the success of the Ecological Rationality program cannot be disputed, there remain many open questions that are in need of answers. We enumerate and discuss them next.
First, what is the complete set of heuristics out there? This question may be unanswerable because researchers have incentives to differentiate their product (e.g., Bingham & Eisenhardt, 2011, or the already mentioned Ericson et al., 2015, who do not reference Gigerenzer et al., 1999). Agreement on what is in the adaptive toolbox of heuristics seems elusive and may be even more challenging for newer work in strategic decision settings; see Vuori and Vuori (2014) for an excellent primer and Spiliopoulos et al. (2018) for some experimental results.
Constraining the infinite number of available heuristics to those that are part of the adaptive toolbox can be accomplished by various means. One approach (e.g., Schooler & Hertwig, 2005) is to constrain heuristics by using well-known cognitive constraints such as the number of items that can be held in working memory and the relationship between memory retrieval and the frequency/timing of events. Another approach is to first constrain heuristics (e.g., by modelling them as fixed-memory finite-automata), expose the remaining heuristics to evolutionary or competitive pressure, and assume that a small subset of the fittest heuristics makes it into the adaptive toolbox. An alternative approach pioneered by Gigerenzer and Selten (2002) is to categorize the building blocks of heuristics into search rules, stopping rules and decision rules.
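To illustrate how these building blocks compose, here is a sketch of a take-the-best-style heuristic (the cues, their validity ordering, and the two objects are hypothetical): a search rule orders cues by validity, a stopping rule halts at the first discriminating cue, and a decision rule chooses the favored alternative.

```python
# Take-the-best assembled from the three building blocks named above.

def take_the_best(cues_a, cues_b, validity_order):
    """cues_x: dict mapping cue name -> 0/1; validity_order: best cue first."""
    for cue in validity_order:            # search rule: examine cues by validity
        if cues_a[cue] != cues_b[cue]:    # stopping rule: first discriminating cue
            return "A" if cues_a[cue] > cues_b[cue] else "B"  # decision rule
    return None                           # no cue discriminates: guess

# Which of two (hypothetical) cities is larger, judged from binary cues?
city_a = {"capital": 1, "airport": 1, "team": 0}
city_b = {"capital": 0, "airport": 1, "team": 1}
choice = take_the_best(city_a, city_b, ["capital", "airport", "team"])  # "A"
```

The heuristic is non-compensatory by construction: once the most valid cue discriminates, no combination of lower-ranked cues can overturn the decision.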
Second, how to choose the appropriate tool remains a prominent question, although considerable progress is being made; e.g., see Marewski and Schooler (2011) and Marewski and Link (2013) for reviews. The argument that tool selection may be driven by evolutionary pressure or a reinforcement learning mechanism over heuristics is credible, but may not capture the whole picture. Our skepticism harks back to old debates about how sensitive decision-makers are to structural changes in the environment. We know that heuristic use changes with environmental conditions (e.g., the work of Hogarth and Karelaia, see also Rieskamp & Otto, 2006; Spiliopoulos et al., 2018; Spiliopoulos & Ortmann, 2018), but we are far from a satisfactory understanding of the issue of matching.
Learning environment-specific heuristics requires the environment to remain relatively stable. If not, then heuristics that are mapped to a specific environment will, by construction, overstay their welcome. However, the issue of non-stationary environments can be addressed without necessarily changing heuristics by forgetting, i.e., a simple adaptation is directly built into the heuristic without having to resort to meta-cognitive strategy selection. The optimal degree of forgetting depends on two opposing effects. Stronger forgetting leads to more weight being placed on recent events, increasing the probability of detecting and adapting to a change in the environment. However, if the environment is stable then forgetting leads to a waste of information from the distant past (which is still relevant) and may lead to overfitting to the noise in the most recent observations. Ultimately, the complexity of the environment will determine and shape the tools in the box.
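The trade-off can be illustrated with an exponentially weighted average (a minimal sketch with made-up numbers): stronger forgetting tracks a structural break quickly, while weak forgetting remains anchored to the obsolete regime.

```python
# Forgetting as an exponentially weighted moving average: higher alpha puts
# more weight on recent observations.

def ewma(observations, alpha):
    """Exponentially weighted moving average; alpha in (0, 1]."""
    est = observations[0]
    for obs in observations[1:]:
        est = (1 - alpha) * est + alpha * obs
    return est

# Structural break: the environment's mean jumps from 0 to 10 after round 50.
stream = [0.0] * 50 + [10.0] * 10
fast_forgetter = ewma(stream, alpha=0.3)    # tracks the new regime quickly
slow_forgetter = ewma(stream, alpha=0.01)   # still anchored to the old regime
```

In a stable but noisy environment the comparison reverses: the strong forgetter chases recent noise while the weak forgetter averages it away, which is exactly the trade-off described above.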
Third, as important as understanding how environments affect heuristic choice is the converse question: the extent to which the use of heuristics can shape the environment raises important issues of causality (e.g., Hertwig et al., 2002 on parental investment) that strike us as under-researched. One collaborative avenue with economists is the mechanism design literature, which is concerned with the design of institutions, such as auctions and markets, so as to achieve the designer’s goals whilst considering the interaction between the behavior of participants and the institutional structure. Early work presumed that participants were substantively rational; however, newer work is typically based on H&B principles, such as exploring the effects of loss aversion and reference-dependence (e.g., Benkert, 2022; Eisenhuth, 2019). Therefore, there still exists significant potential in exploring how agents employing fast-and-frugal heuristics would affect the design of incentive-compatible mechanisms.
Fourth, ERP researchers argue that the two programs of rationality have not only very different assessments of human rationality but also very different policy implications, identified as nudging (firmly rooted in the H&BP) and boosting (Gruene-Yanoff & Hertwig, 2016; Katsikopoulos, 2014; Viale, 2022). These issues also strike us as understudied. While nudging might have some undesirable intertemporal consequences (e.g., Carroll et al., 2009 and the literature that followed it), boosting is often an unavailable option or one whose long-term horizon (e.g., boosts based on education) is unattractive to policy-makers seeking short-term—though perhaps fleeting—results. Despite the aforementioned difficulties, the ERP program has the potential to influence policy and to challenge the incumbent nudging paradigm, especially in terms of longer-lasting and broader behavioral impact.
Fifth, the ERP has only recently started to have a practical impact on management and organization science. This is somewhat surprising given the intellectual origin of the Ecological-Rationality agenda and the concept of bounded rationality (Simon, anyone?). The first wave of publications in prominent management/organization science journals focused primarily on the theoretical properties of heuristics (e.g., Hogarth & Karelaia, 2005; Katsikopoulos, 2013), while the second wave focused on applied/empirical research (as we presented earlier). We anticipate that the discovery of (simple) heuristics by management and organization sciences (e.g., Loock & Hinner, 2015), will further encourage a shift from individual decision making to games introducing new complexities arising from strategic interactions amongst agents, but also important opportunities to extend both theoretical and empirical work (e.g., Rapoport et al., 2022; Spiliopoulos & Hertwig, 2020).
Sixth, engaging psychologists and economists in research collaborations is a promising avenue for new breakthroughs (e.g., Fischbacher et al., 2013). Another example is Stevens et al. (2011), who examined the effects of forgetting on the emergence of cooperative strategies in repeated interactions. Further investment in theory integration and bridging the different concepts of bounded rationality that psychologists and economists employ would be worthwhile. There are, of course, important differences across disciplines that we cannot fully discuss here—Katsikopoulos (2014) and Grüne-Yanoff et al. (2014) are excellent primers. We should not presume that the task is impossible, as successful examples of theory integration include linking CPT and heuristics for risky choice (Pachur et al., 2017) and identifying attention as mediating the relationship between CPT and drift diffusion models (Zilker & Pachur, 2021).
Seventh, the topic of learning has not been broached by the ERP program. A starting point is Selten’s Learning Direction Theory (LDT), which is ultimately a simple story of ex post rather than ex ante rationality using minimal information—note again that this is an inductive model of reasoning. LDT requires information only about the direction of change that would have led to an improvement in the outcome; reinforcement learning would also require its magnitude, and regret-based learning would require information about counterfactual outcomes. As an aside, we draw the reader’s attention to the edited volume by Gigerenzer and Selten (2002). An excellent example of work along these lines is Bonawitz et al. (2014), who show that a simple heuristic (win-stay, lose-sample) approximates computationally demanding Bayesian inference in non-strategic settings.
Strategic interactions entail additional uncertainty—how often is the assumption of perfect information fulfilled in the real world? Do we know what the action space is, what the payoffs are, and the type/motives of our opponent? With so much uncertainty, is strategic ignorance or bounded sophistication necessarily irrational? ERP researchers should note that economists have not ignored these important questions, as the literature is replete with extensions and concepts specifically addressing them. On the other hand, ERP researchers can and should critique the characteristics of the solutions proposed by economists. For example, in many cases the extensions or refinements to equilibrium solution concepts dealing with uncertainty may be orders of magnitude more complicated than those under perfect information. Let us emphasize again that these solutions belong to the deductive strand of game theory, not the inductive strand; the latter should be far more palatable to psychologists. An example of inductive learning under uncertainty, where the payoffs of a game are unknown, is Oechssler and Schipper (2003); despite finding that subjects did not efficiently learn the true game, they often converged to the Nash equilibrium.
Eighth, and relatedly, some celebrated heuristics can easily be exploited (e.g., default settings if the choice architect has vested interests: credit card companies, etc.). In general, it is necessary to determine to what extent the interests of the default-setter and the target of nudges coincide, as assuming that they are always aligned is naive.
Ninth, Goldberg (2005; see also Goldberg & Podell, 1999) has argued that studying lotteries does not capture decision making in the real world. The real issue is what to do with problems that cannot be represented by lotteries with two or three outcomes. The important difference between DfD (decisions from description) and DfE (decisions from experience) is all but lost on economists. The economics discipline has become enamored with models of DfD, in particular Prospect Theory, whose speed of penetration and impact in behavioral finance has been surprising, displacing the Mean–Variance framework. More balanced approaches that consider that heuristics may be rational contingent on environmental characteristics, rather than assuming unconditionally that they are irrational, are rare; however, see the edited volumes by Altman (2017) and Viale et al. (2018). Finance is a large-world environment, where returns and volatility are not learned by description but rather from experience. This is a crucial distinction, as Lejarraga et al. (2016) found experimental evidence that learning about stock-market fluctuations from experience has a differential and lasting impact on investors’ risk-taking behavior compared to an identical description of the fluctuations.
The core Cumulative Prospect Theory (CPT) stylized fact—an inverted S-shaped probability weighting function that overweights rare events—becomes questionable in decisions from experience. The underweighting of rare events found in the DfE literature seems particularly relevant to the miscalculation of the likelihood of Black Swan events. Pursuant to this, Spiliopoulos and Hertwig (2015) find an inverse S-shaped probability weighting function only in decisions from description between two simple prospects (one prospect with a sure outcome and another prospect with only two outcomes). In DfE using more complex prospects with two to three outcomes, a moment-based preference model (extending the mean–variance framework to include skewness) predicted out-of-sample behavior more accurately than CPT. Further work on the interaction of prospect complexity (of up to four outcomes each) and DfD/DfE uncovers significant within-subject evidence that decision-makers adapt to complexity, and verifies the conclusions drawn regarding the importance of skewness preferences outside of niches involving described prospects of up to two outcomes only (Spiliopoulos & Hertwig, 2022). While there is significant evidence of context dependence in decision making under risk, a theory is required to link contexts to decision processes, lest we simply end up with a series of disjoint models. Viewing adaptation through the lens of ecological rationality provides such a bridge—environmental characteristics, such as complexity, may drive the choice of the appropriate decision processes or heuristics. Further engagement with behavioral finance is an important direction for the ERP.
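A moment-based preference model of the kind described above can be sketched as a weighted sum of a prospect's mean, variance, and skewness. The weights and prospects below are illustrative placeholders, not the estimates reported by Spiliopoulos and Hertwig (2015).

```python
# Minimal sketch of a moment-based preference model: value a prospect by
# a linear combination of its mean, variance, and skewness. All numeric
# weights here are hypothetical, chosen only for illustration.

def moments(prospect):
    """prospect: list of (outcome, probability) pairs with probs summing to 1."""
    mean = sum(p * x for x, p in prospect)
    var = sum(p * (x - mean) ** 2 for x, p in prospect)
    sd = var ** 0.5
    # Standardized third central moment; zero for symmetric prospects.
    skew = sum(p * ((x - mean) / sd) ** 3 for x, p in prospect) if sd > 0 else 0.0
    return mean, var, skew

def value(prospect, a=1.0, b=-0.01, c=0.5):
    m, v, s = moments(prospect)
    return a * m + b * v + c * s  # risk-averse (b < 0), skewness-seeking (c > 0)

# Two prospects with identical mean and variance but opposite skewness:
pos = [(0, 0.9), (100, 0.1)]    # rare large gain: positively skewed
neg = [(20, 0.9), (-80, 0.1)]   # rare large loss: negatively skewed

assert abs(moments(pos)[0] - moments(neg)[0]) < 1e-9  # same mean
assert abs(moments(pos)[1] - moments(neg)[1]) < 1e-9  # same variance
assert value(pos) > value(neg)  # skewness alone drives the preference
```

Because mean and variance are identical here, any mean–variance model is indifferent between the two prospects; only the skewness term discriminates, which is the sense in which the extended model captures behavior the mean–variance framework cannot.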
Tenth, the fast-and-frugal heuristics literature, while permitting behavioral heterogeneity through differences in the set of heuristics in each individual’s toolbox and heuristic selection therein, has cautioned against parameter heterogeneity. In the rare case when a parameter is allowed, e.g., the minimum gain threshold in the stopping rule of the Priority heuristic (Brandstätter et al., 2006), it is often assumed to be fixed. Parameter calibration is understandably eschewed, as this is an additional avenue through which heuristics attain simplicity beyond selective use of information. However, behavioral heterogeneity is well-documented across every facet of judgment and decision making, and these handcuffs may be too tight in some cases. Loosening them should be done with extreme caution, but should be considered. For example, a simple decision model could have a single parameter (e.g., a threshold) that may be adapted from experience via a basic learning mechanism, such as Selten’s LDT mentioned above. Another, more conventional, way forward is to link the heuristics in each individual’s toolbox to cognitive differences between individuals, or perhaps within individuals across their life span. For example, if aging leads to cognitive decline with respect to the performance of the memory system, then individuals may resort to heuristics that depend less on accurate memory (e.g., Mata et al., 2012). Similarly, between-individual differences in fluid intelligence could predict differential use of heuristics. More work in explaining how the set of heuristics and their selection mechanism in the adaptive toolbox may vary across individuals, and how heuristics themselves may incorporate some limited flexibility, seems warranted.
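A single adaptable threshold of this kind can be sketched in the spirit of Selten's learning direction theory: after each round, the parameter is nudged in the direction that would have improved the previous outcome. The task (accept or reject offers against a known outside option) and all numbers are hypothetical illustrations, not a calibrated model.

```python
import random

# Hypothetical sketch in the spirit of learning direction theory: an
# acceptance threshold is nudged toward whatever would have improved the
# last round's outcome. All parameter values are illustrative only.

def adapt_threshold(rounds=2000, step=1.0, outside_option=50.0, seed=7):
    rng = random.Random(seed)
    threshold = 90.0  # deliberately poor starting point
    for _ in range(rounds):
        offer = rng.uniform(0, 100)
        if offer >= threshold:
            # Accepted; if the offer was worse than the outside option,
            # a higher threshold would have done better.
            if offer < outside_option:
                threshold += step
        else:
            # Rejected; if the offer beat the outside option,
            # a lower threshold would have done better.
            if offer > outside_option:
                threshold -= step
    return threshold

final = adapt_threshold()
# The threshold drifts toward the outside option (50), the point at
# which neither directional regret occurs systematically.
assert abs(final - 50.0) < 10.0
```

The heuristic itself stays maximally simple (one comparison per decision); all flexibility is confined to the slow, directional adjustment of its single parameter.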
6 Concluding discussion
We set out to contrast the two major programs in the heuristics space, the Heuristics-and-Biases program and the Ecological-Rationality program, but more importantly to uncover the open questions about simple heuristics and their unrealized potential. We sought out areas of similar thinking within the economics discipline and the ERP that could serve as a bridge for future collaborative work.
We endeavored to draw attention to work in economics that seems closely related to the Ecological Rationality program (ERP) and to highlight where common ground exists for the two disciplines to initiate a dialogue and collaborate despite their apparent differences, which we believe are not insurmountable. The reader will notice that the majority of the research we have cited in economics is firmly grounded in inductive models (learning from experience) rather than deductive (normative) models derived axiomatically. We believe that much of the criticism of economics by ERP researchers has been directed at such normative deductive solutions. This, however, is a straw man of sorts that does not acknowledge the richness of contemporary economics, which, although often not mainstream, has found its way even into highly ranked journals such as the American Economic Review, the Quarterly Journal of Economics, Econometrica, and Games and Economic Behavior, from which we have cited. Therefore, we believe that sufficient interest exists for work that can be related to the ERP, and for the ERP to make significant headway into the economics discipline. This attempt will be most successful by connecting new research to prior work in economics while simultaneously pointing out the similarities and differences.
A starting point, which would allay initial concerns from both sides, lies in questioning how the normative solutions derived by economists may be approximated effectively and implemented procedurally, using heuristics rather than complex and possibly intractable decision models. Ultimately, however, we should also question said normative solutions and understand under what circumstances they may be attainable or even desirable in Savage’s large worlds. This is predicated on the rigorous application of procedural modeling, allowing for clearly falsifiable psychological models of behavior, which is where the ERP has a firm lead over the H&B program. It is surprising to us that economists embraced the more fluid and vague concepts associated with the H&B program (e.g., representativeness, availability), rather than the more precise mathematical modeling promoted by the ERP. The latter should be far more palatable to most economists. This is not to say that concepts such as representativeness are wrong, for they can be couched in terms of precise mathematical models, e.g., similarity-based memory retrieval. Indeed, many of the concepts of the H&B program as they are applied to economics may survive such precise mathematical and procedural modeling; we argue that they will be all the better for it. Some will fail tests of falsifiability, but the ERP perspective can be used to redefine them by putting them on more solid procedural foundations, and in other cases by replacing them with new alternative theories that explicitly consider the nexus between environments and behavior.
To be sure, there is a place for both the H&B and the ER perspective; however, the boundaries in their scope of application and relevance, particularly in terms of external validity, should be re-evaluated. The former still seems more relevant in decisions from description, but we believe it seriously lacks external validity in decision tasks where Knightian uncertainty and learning from experience reign. That is, heuristics in the H&B spirit should not be applied indiscriminately to any type of task, as is often the case, without due diligence. Economists would be well advised to seek out common ground with psychologists beyond the (now) orthodox Heuristics-and-Biases program and consider investigating experiential decision tasks, whilst reviewing some of the existing behavioral work in economics with a critical eye and challenging the robustness of past findings with new experiments.
Notes
Non-compensatoriness of cues is satisfied if the weight of a higher ranked cue is greater than the sum of the weights of all lower ranked cues. Consequently, lower ranked cues can be ignored because, regardless of their values, they cannot reverse a decision made using the higher ranked cue. Dominance is satisfied if the cue values of one object are all greater than those of the other object. Cumulative dominance is satisfied if the cue values of one object cumulatively dominate those of the other object. Further discussion and mathematical definitions of these concepts can be found in Şimşek (2013).
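The non-compensatoriness condition can be checked exhaustively in a few lines. With the hypothetical weights below (each weight exceeding the sum of all lower ranked weights), a lexicographic rule that decides on the first discriminating cue always agrees with the full weighted linear rule over binary cues.

```python
from itertools import product

# Illustrative check with hypothetical non-compensatory weights:
# each weight exceeds the sum of all lower-ranked weights.
WEIGHTS = [8, 4, 2, 1]

def lexicographic(a, b):
    """Decide on the first cue on which the objects differ."""
    for x, y in zip(a, b):
        if x != y:
            return 1 if x > y else -1
    return 0  # identical cue profiles

def weighted(a, b):
    """Decide by comparing full weighted sums of the cue values."""
    sa = sum(w * x for w, x in zip(WEIGHTS, a))
    sb = sum(w * x for w, x in zip(WEIGHTS, b))
    return (sa > sb) - (sa < sb)

# Exhaustively compare all pairs of binary cue profiles: the lower-ranked
# cues can never overturn the first discriminating cue's verdict.
for a in product([0, 1], repeat=4):
    for b in product([0, 1], repeat=4):
        assert lexicographic(a, b) == weighted(a, b)
```

The exhaustive loop passes precisely because the weights are non-compensatory; replacing them with, say, [4, 3, 2, 1] would produce pairs on which the two rules disagree.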
References
Aikman, D., Galesic, M., Gigerenzer, G., Kapadia, S., Katsikopoulos, K., Kothiyal, A., Murphy, E., & Neumann, T. (2021). Taking uncertainty seriously: Simplicity versus complexity in financial regulation. Industrial and Corporate Change, 30(2), 317–345. https://doi.org/10.1093/icc/dtaa024
Altman, M. (Ed.). (2017). Handbook of behavioural economics and smart decision-making. Edward Elgar.
Andersen, S., Harrison, G. W., Lau, M. I., & Rutström, E. E. (2008). Eliciting risk and time preferences. Econometrica, 76(3), 583–618. https://doi.org/10.1111/j.1468-0262.2008.00848.x
Arkes, H. R., & Ayton, P. (1999). The sunk cost and concorde effects: Are humans less rational than lower animals? Psychological Bulletin, 125(5), 591–600.
Arthur, W. B. (1994). Inductive reasoning and bounded rationality. American Economic Review, 84(2), 406–411.
Artinger, F., Petersen, M., Gigerenzer, G., & Weibler, J. (2015). Heuristics as adaptive decision strategies in management. Journal of Organizational Behavior, 36(S1), S33–S52.
Åstebro, T., & Elhedhli, S. (2006). The effectiveness of simple decision heuristics: forecasting commercial success for early-stage ventures. Management Science, 52(3), 395–409.
Aumann, R. J. (1997). Rationality and bounded rationality. Games and Economic Behavior, 21, 2–14.
Axelrod, R. (1984). The evolution of cooperation. Basic Books.
Barron, G., & Erev, I. (2003). Small feedback-based decisions and their limited correspondence to description-based decisions. Journal of Behavioral Decision Making, 16(3), 215–233.
Baucells, M., Carrasco, J. A., & Hogarth, R. M. (2008). Cumulative dominance and heuristic performance in binary multiattribute choice. Operations Research, 56(5), 1289–1304.
Becker, G. S. (1962). Irrational behavior and economic theory. The Journal of Political Economy, 70(1), 1–13.
Benkert, J.-M. (2022). Bilateral trade with loss-averse agents. University of Zurich, Department of Economics, Working Paper #188. https://doi.org/10.2139/ssrn.2579661.
Berg, N., & Gigerenzer, G. (2010). As-if behavioral economics: Neoclassical economics in disguise? History of Economic Ideas, 18(1), 133–165.
Bhatia, S., & He, L. (2021). Machine-generated theories of human decision-making. Science, 372(6547), 1150–1151.
Bingham, C. B., & Eisenhardt, K. M. (2011). Rational heuristics: The ‘simple rules that strategists learn from process experience.’ Strategic Management Journal, 32(13), 1437–1464.
Bingham, C. B., & Eisenhardt, K. M. (2014). Response to Vuori and Vuori’s commentary on ‘heuristics in the strategy context.’ Strategic Management Journal, 35(11), 1698–1702.
Binmore, K. (1990). Essays on the foundations of game theory. Blackwell.
Bonawitz, E., Denison, S., Gopnik, A., & Griffiths, T. L. (2014). Win-stay, lose-sample: A Simple sequential algorithm for approximating Bayesian inference. Cognitive Psychology, 74, 35–65.
Brandstätter, E., Gigerenzer, G., & Hertwig, R. (2006). The priority heuristic: Making choices without trade-offs. Psychological Review, 113(2), 409–432.
Brandstätter, E., Gigerenzer, G., & Hertwig, R. (2008). Risky choice with heuristics: Reply to Birnbaum (2008), Johnson, Schulte-Mecklenbeck, and Willemsen (2008), and Rieger and Wang (2008). Psychological Review, 115(1), 281–290.
Brighton, H., & Gigerenzer, G. (2015). The bias bias. Journal of Business Research, 68(8), 1772–1784.
Camerer, C. F., & Fehr, E. (2006). When does “economic man” dominate social behavior? Science, 311(5757), 47–52. https://doi.org/10.1126/science.1110600
Camerer, C. F., & Ho, T.-H. (1999). Experience-weighted attraction learning in normal form games. Econometrica, 67(4), 827–874.
Camerer, C. F., Ho, T.-H., & Chong, J.-K. (2004b). A cognitive hierarchy model of games. The Quarterly Journal of Economics, 119(3), 861–898. https://doi.org/10.1162/0033553041502225
Camerer, C. F., Loewenstein, G., & Rabin, M. (Eds.). (2004a). Advances in behavioral economics. Princeton University Press.
Carroll, G. D., Choi, J. J., Laibson, D., Madrian, B. C., & Metrick, A. (2009). Optimal defaults and active decisions. Quarterly Journal of Economics, 124(4), 1639–1674.
Cheung, Y.-W., & Friedman, D. (1997). Individual learning in normal form games: Some laboratory results. Games and Economic Behavior, 19(1), 46–76.
Cochrane, J. (2015). Homo economicus or homo paleas? The Grumpy Economist (John Cochrane’s blog). Retrieved 22 May 2015, from http://johnhcochrane.blogspot.com.au/2015/05/homo-economicus-or-homo-paleas.html
Cowan, N. (2000). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 24(1), 87–114.
Crawford, V. (2008). Look-ups as the windows of the strategic soul. In A. Caplin & A. Schotter (Eds.), The foundations of positive and normative economics. Oxford University Press.
Devetag, G., Di Guida, S., & Polonio, L. (2016). An eye-tracking study of feature-based choice in one-shot games. Experimental Economics, 19(1), 177–201.
Drechsler, M., Katsikopoulos, K., & Gigerenzer, G. (2013). Axiomatizing bounded rationality: The priority heuristic. Theory and Decision, 77(2), 183–196.
Duersch, P., Oechssler, J., & Schipper, B. C. (2014). When is tit-for-tat unbeatable? International Journal of Game Theory, 43(1), 25–36. https://doi.org/10.1007/s00182-013-0370-1
Edwards, W. (1956). Reward probability, amount, and information as determiners of sequential two-alternative decisions. Journal of Experimental Psychology, 52(3), 177–188.
Eisenhuth, R. (2019). Reference-dependent mechanism design. Economic Theory Bulletin, 7, 77–103. https://doi.org/10.1007/s40505-018-0144-9
Erev, I., & Barron, G. (2005). On adaptation, maximization, and reinforcement learning among cognitive strategies. Psychological Review, 112(4), 912–931.
Erev, I., Ert, E., Plonsky, O., Cohen, D., & Cohen, O. (2017). From anomalies to forecasts: Toward a descriptive model of decisions under risk, under ambiguity, and from experience. Psychological Review, 124(4), 369–409.
Ericson, K., Marzilli, M., White, J. M., Laibson, D., & Cohen, J. D. (2015). Money earlier or later? Simple heuristics explain intertemporal choices better than delay discounting does. Psychological Science, 26(6), 1–8.
Fischbacher, U., Hertwig, R., & Bruhin, A. (2013). How to model heterogeneity in costly punishment: Insights from responders’ response times. Journal of Behavioral Decision Making, 26(5), 462–476.
Friedman, D. (1991). Evolutionary games in economics. Econometrica, 59(3), 637–666.
Friedman, D., Pommerenke, K., Lukose, R., Milam, G., & Huberman, B. A. (2007). Searching for the Sunk Cost Fallacy. Experimental Economics, 10(1), 79–104.
Friedman, M. (1953). Essays in positive economics. University of Chicago Press.
Friedman, M., & Savage, L. J. (1948). The utility analysis of choices involving risk. The Journal of Political Economy, 56(4), 279–304.
Germano, F. (2007). Stochastic evolution of rules for playing finite normal form games. Theory and Decision, 62(4), 311–333. https://doi.org/10.1007/s11238-007-9032-8
Gigerenzer, G. (1991). How to make cognitive illusions disappear: Beyond ‘heuristics and biases’. European Review of Social Psychology, 2(1), 83–115.
Gigerenzer, G. (1996). On narrow norms and vague heuristics: A reply to Kahneman and Tversky (1996). Psychological Review, 103, 592–596.
Gigerenzer, G., & Brighton, H. (2009). Homo heuristicus: Why biased minds make better inferences. Topics in Cognitive Science, 1(1), 107–143.
Gigerenzer, G., & Gaissmaier, W. (2011). Heuristic decision making. Annual Review of Psychology, 62(1), 451–482.
Gigerenzer, G., Hertwig, R., & Pachur, T. (2011). Heuristics: The foundations of adaptive behavior. Oxford University Press.
Gigerenzer, G., & Selten, R. (2002). Bounded rationality: The adaptive toolbox. MIT Press.
Gigerenzer, G., & Sturm, T. (2012). How (far) can rationality be naturalized? Synthese, 187(1), 243–268. https://doi.org/10.1007/s11229-011-0030-6
Gigerenzer, G., Todd, P. M., & the ABC Research Group. (1999). Simple heuristics that make us smart. Oxford University Press.
Gilboa, I., & Schmeidler, D. (1995). Case-based decision theory. The Quarterly Journal of Economics, 110(3), 605–639.
Gilboa, I., & Schmeidler, D. (2001). A theory of case-based decisions. Cambridge University Press.
Gode, D. K., & Sunder, S. (1993). Allocative efficiency of markets with zero-intelligence traders—Market as a partial substitute for individual rationality. The Journal of Political Economy, 101(1), 119–137.
Goldberg, E., & Podell, K. (1999). Adaptive versus veridical decision making and the frontal lobes. Consciousness and Cognition, 8(3), 364–377.
Goldberg, E. (2005). The wisdom paradox: How your mind can grow stronger as your brain grows older. Penguin.
Gossner, O., Steiner, J., & Stewart, C. (2021). Attention please! Econometrica, 89(4), 1717–1751.
Griffiths, T. L., Lieder, F., & Goodman, N. D. (2015). Rational use of cognitive resources: Levels of analysis between the computational and the algorithmic. Topics in Cognitive Science, 7(2), 217–229. https://doi.org/10.1111/tops.12142
Grimm, V., & Mengel, F. (2012). An experiment on learning in a multiple games environment. Journal of Economic Theory, 147(6), 2220–2259.
Grüne-Yanoff, T., & Hertwig, R. (2016). Nudge versus boost: How coherent are policy and theory? Minds and Machines, 26(1), 149–183.
Grüne-Yanoff, T., Marchionni, C., & Moscati, I. (2014). Introduction: Methodologies of bounded rationality. Journal of Economic Methodology, 21(4), 325–342.
Haldane, A., & Madouros, V. (2012). The dog and the frisbee. Speech given at the Federal Reserve Bank of Kansas City’s 36th economic policy symposium, “The Changing Policy Landscape.” https://www.bankofengland.co.uk/paper/2012/the-dog-and-the-frisbee
Harrison, G. W. (1989). Theory and misbehavior of first-price auctions. American Economic Review, 79(4), 749–762. https://www.jstor.org/stable/1827930
Haruvy, E., & Stahl, D. O. (2004). Deductive versus inductive equilibrium selection: Experimental results. Journal of Economic Behavior & Organization, 53(3), 319–331.
Haruvy, E., & Stahl, D. O. (2012). Between-game rule learning in dissimilar symmetric normal-form games. Games and Economic Behavior, 74(1), 208–221.
He, L., Analytis, P. P., & Bhatia, S. (2022). The wisdom of model crowds. Management Science, 68(5), 3635–3659.
Hertwig, R., Davis, J. N., & Sulloway, F. J. (2002). Parental investment: How an equity motive can produce inequality. Psychological Bulletin, 128(5), 728–745.
Hertwig, R., & Erev, I. (2009). The description-experience gap in risky choice. Trends in Cognitive Sciences, 13(12), 517–523.
Hertwig, R., Herzog, S. M., Schooler, L. J., & Reimer, T. (2008). Fluency heuristic: A model of how the mind exploits a by-product of information retrieval. Journal of Experimental Psychology: Learning, Memory, and Cognition, 34(5), 1191–1206.
Hertwig, R., & Hoffrage, U. (2013). Simple heuristics in a social world. Oxford University Press.
Hertwig, R., Leuker, C., Pachur, T., Spiliopoulos, L., & Pleskac, T. J. (2022). Studies in ecological rationality. Topics in Cognitive Science, 14(3), 467–491. https://doi.org/10.1111/tops.12567
Hertwig, R., & Ortmann, A. (2001). Experimental practices in economics: A methodological challenge for psychologists? Behavioral and Brain Sciences, 24, 383–451.
Hertwig, R., & Ortmann, A. (2005). The cognitive illusions controversy: A methodological debate in disguise that matters to economists. In R. Zwick & A. Rapoport (Eds.), Experimental business research III (pp. 113–130). Kluwer.
Hertwig, R., Pleskac, T. J., Pachur, T., & the Center for Adaptive Rationality. (2019). Taming uncertainty. MIT Press.
Heukelom, F. (2015). Behavioral economics. Cambridge University Press.
Hogarth, R. M., & Karelaia, N. (2005). Simple models for multiattribute choice with many alternatives: When it does and does not pay to face trade-offs with binary attributes. Management Science, 51(12), 1860–1872.
Hogarth, R. M., & Karelaia, N. (2006). Regions of rationality: Maps for bounded agents. Decision Analysis, 3(3), 124–144.
Hogarth, R. M., & Karelaia, N. (2007). Heuristic and linear models of judgment: Matching rules and environments. Psychological Review, 114(3), 733–758.
Hogarth, R. M., & Reder, M. W. (Eds.). (1987). Rational choice. University of Chicago Press.
Hutchinson, J. M. C., & Gigerenzer, G. (2005). Simple heuristics and rules of thumb: Where psychologists and behavioural biologists might meet. Behavioural Processes, 69(2), 97–124.
Johnson, E. J., Payne, J. W., Schkade, D. A., & Bettman, J. R. (1989). Monitoring information processing and decisions: The Mouselab system. Fuqua School of Business. Durham: Duke University.
Johnson, E. J., Schulte-Mecklenbeck, M., & Willemsen, M. C. (2008). Process models deserve process data: Comment on Brandstätter, Gigerenzer, and Hertwig (2006). Psychological Review, 115(1), 263–272.
Kahneman, D. (2003a). Maps of bounded rationality: Psychology for behavioral economics. American Economic Review, 93(5), 1449–1475.
Kahneman, D. (2003b). A perspective on judgment and choice: Mapping bounded rationality. American Psychologist, 58(9), 697–720.
Kahneman, D. (2011). Thinking, fast and slow. Penguin.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–291.
Kahneman, D., & Tversky, A. (1996). On the reality of cognitive illusions. Psychological Review, 103(3), 582–591.
Katsikopoulos, K. V. (2013). Why do simple heuristics perform well in choices with binary attributes? Decision Analysis, 10(4), 327–340.
Katsikopoulos, K. V. (2014). Bounded rationality: The two cultures. Journal of Economic Methodology, 21(4), 361–374.
Katsikopoulos, K. V., & Gigerenzer, G. (2008). One-reason decision-making: Modeling violations of expected utility theory. Journal of Risk and Uncertainty, 37(1), 35–56.
Katsikopoulos, K. V., Schooler, L. J., & Hertwig, R. (2010). The robust beauty of ordinary information. Psychological Review, 117(4), 1259–1266.
Katsikopoulos, K. V., Şimşek, Ö., Buckmann, M., & Gigerenzer, G. (2021a). Classification in the wild: The science and art of transparent decision making. MIT Press.
Katsikopoulos, K. V., Şimşek, Ö., Buckmann, M., & Gigerenzer, G. (2021b). Transparent modeling of influenza incidence: Big data or a single data point from psychological theory? International Journal of Forecasting, 38(2), 613–619. https://doi.org/10.1016/j.ijforecast.2020.12.006
Kay, J., & King, M. (2020). Radical uncertainty. The Bridge Street Press.
Kimya, M. (2018). Choice, consideration sets, and attribute filters. American Economic Journal: Microeconomics, 10(4), 223–247.
Knez, M., & Camerer, C. F. (2000). Increasing cooperation in prisoner’s dilemmas by establishing a precedent of efficiency in coordination games. Organizational Behavior and Human Decision Processes, 82(2), 194–216.
Lejarraga, T., & Hertwig, R. (2021). How experimental methods shaped views on human competence and rationality. Psychological Bulletin, 147(6), 535–564. https://doi.org/10.1037/bul0000324
Lejarraga, T., Woike, K. J., & Hertwig, R. (2016). Description and experience: How experimental investors learn about booms and busts affects their financial risk taking. Cognition, 157, 365–383.
Leland, J. W. (1994). Generalized similarity judgments—An alternative explanation for choice anomalies. Journal of Risk and Uncertainty, 9(2), 151–172.
Leuker, C., Pachur, T., Hertwig, R., & Pleskac, T. J. (2018). Exploiting risk–reward structures in decision making under uncertainty. Cognition, 175, 186–200. https://doi.org/10.1016/j.cognition.2018.02.019
Leuker, C., Pachur, T., Hertwig, R., & Pleskac, T. J. (2019). Do people exploit risk–reward structures to simplify information processing in risky choice? Journal of the Economic Science Association, 5(1), 76–94. https://doi.org/10.1007/s40881-019-00068-y
Lieder, F., & Griffiths, T. L. (2019). Resource-rational analysis: Understanding human cognition as the optimal use of limited computational resources. Behavioral and Brain Sciences, 43, 1–85. https://doi.org/10.1017/s0140525x1900061x
Lieder, F., Griffiths, T. L., Huys, Q. J. M., & Goodman, N. D. (2018). The anchoring bias reflects rational use of cognitive resources. Psychonomic Bulletin & Review, 25(1), 322–349.
Loock, M., & Hinnen, G. (2015). Heuristics in organizations: A review and a research agenda. Journal of Business Research, 68(9), 2027–2036.
Lopes, L. L. (1992). The rhetoric of irrationality. Theory and Psychology, 1(1), 65–82.
Luan, S., Reb, J., & Gigerenzer, G. (2019). Ecological rationality: Fast-and-frugal heuristics for managerial decision making under uncertainty. Academy of Management Journal, 62(6), 1735–1759.
Luan, S., Schooler, L. J., & Gigerenzer, G. (2011). A signal-detection analysis of fast-and-frugal trees. Psychological Review, 118(2), 316–338.
Luan, S., Schooler, L. J., & Gigerenzer, G. (2014). From perception to preference and on to inference: An approach-avoidance analysis of thresholds. Psychological Review, 121(3), 501–525.
Mackowiak, B., Matejka, F., & Wiederholt, M. (2023). Rational inattention: A review. Journal of Economic Literature, 61(1), 226–273.
Maitland, E., & Sammartino, A. (2015). Decision making and uncertainty: The role of heuristics and experience in assessing a politically hazardous environment. Strategic Management Journal, 36(10), 1554–1578.
Mandler, M., Manzini, P., & Mariotti, M. (2012). A million answers to twenty questions: Choosing by checklist. Journal of Economic Theory, 147(1), 71–92.
Manzini, P., & Mariotti, M. (2007). Sequentially rationalizable choice. American Economic Review, 97(5), 1824–1839.
Manzini, P., & Mariotti, M. (2012a). Categorize then choose: Boundedly rational choice and welfare. Journal of the European Economic Association, 10(5), 1141–1165.
Manzini, P., & Mariotti, M. (2012b). Choice by lexicographic semiorders. Theoretical Economics, 7(1), 1–23.
Manzini, P., & Mariotti, M. (2014). Stochastic choice and consideration sets. Econometrica, 82(3), 1153–1176.
Marewski, J. N., & Link, D. (2013). Strategy selection: An introduction to the modeling challenge. Wiley Interdisciplinary Reviews: Cognitive Science, 5(1), 39–59.
Marewski, J. N., & Schooler, L. J. (2011). Cognitive niches: An ecological model of strategy selection. Psychological Review, 118(3), 393–437.
Masatlioglu, Y., Nakajima, D., & Ozbay, E. Y. (2012). Revealed attention. American Economic Review, 102(4), 2183–2205.
Mata, R., von Helversen, B., Karlsson, L., & Cüpper, L. (2012). Adult age differences in categorization and multiple-cue judgment. Developmental Psychology, 48(4), 1188–1201.
McAfee, R. P., Mialon, H. M., & Mialon, S. H. (2010). Do sunk costs matter? Economic Inquiry, 48(2), 323–336.
Mengel, F. (2012). Learning across games. Games and Economic Behavior, 74(2), 601–619.
Mengel, F., & Sciubba, E. (2014). Extrapolation and structural similarity in games. Economics Letters, 125(3), 381–385.
Michelacci, C., Morelli, M., Gratton, G., & Guiso, L. (2021). From Weber to Kafka: Political instability and the overproduction of laws. American Economic Review, 111(9), 2964–3003.
Mohlin, E. (2012). Evolution of theories of mind. Games and Economic Behavior, 75(1), 299–318. https://doi.org/10.1016/j.geb.2011.11.009
Oechssler, J., & Schipper, B. (2003). Can you guess the game you are playing? Games and Economic Behavior, 43(1), 137–152. https://doi.org/10.1016/s0899-8256(02)00549-3
Ortmann, A. (2015a). Review of Floris Heukelom (2014) behavioral economics, A history. Œconomia, 5–2, 259–267.
Ortmann, A. (2015b). Review of World Development Report 2015. Journal of Economic Psychology, 48(June), 111–120.
Ortmann, A. (2021). On the foundations of behavioral and experimental economics. In H. Kincaid & D. Ross (Eds.), a modern guide to philosophy of economics (chapter 10) (pp. 157–181). Edward Elgar Publishing.
Pachur, T., Suter, R. S., & Hertwig, R. (2017). How the Twain can meet: Prospect theory and models of heuristics in risky choice. Cognitive Psychology, 93, 44–73. https://doi.org/10.1016/j.cogpsych.2017.01.001
Payne, J. W., Bettman, J. R., & Johnson, E. J. (1993). The adaptive decision maker. Cambridge University Press.
Peterson, C. R., & Beach, L. R. (1967). Man as an intuitive statistician. Psychological Bulletin, 68(1), 29–46.
Petrou, A. P., Hadjielias, E., Thanos, I. C., & Dimitratos, P. (2020). Strategic decision-making processes, international environmental munificence and the accelerated internationalization of SMEs. International Business Review, 29, 101735.
Pleskac, T. J., Conradt, L., Leuker, C., & Hertwig, R. (2021). The ecology of competition: A theory of risk-reward environments in adaptive decision making. Psychological Review, 128(2), 315–335.
Pleskac, T. J., & Hertwig, R. (2014). Ecologically rational choice and the structure of the environment. Journal of Experimental Psychology: General, 143(5), 2000–2019.
Plonsky, O., & Erev, I. (2017). Learning in settings with partial feedback and the wavy recency effect of rare events. Cognitive Psychology, 93, 18–43. https://doi.org/10.1016/j.cogpsych.2017.01.002
Plonsky, O., & Erev, I. (2022). Prediction oriented behavioral research and its relationship to classical decision research. Retrieved from https://psyarxiv.com/7uha4.
Plonsky, O., Teodorescu, K., & Erev, I. (2015). Reliance on small samples, the wavy recency effect, and similarity-based learning. Psychological Review, 122(4), 621–647. https://doi.org/10.1037/a0039413
Polonio, L., Di Guida, S., & Coricelli, G. (2015). Strategic sophistication and attention in games: An eye-tracking study. Games and Economic Behavior, 94, 80–96.
Rapoport, A., Seale, D. A., & Spiliopoulos, L. (2022). Progressive stopping heuristics that excel in individual and competitive sequential search. Theory and Decision, 1–31. https://doi.org/10.1007/s11238-022-09881-0
Rieskamp, J., & Otto, P. E. (2006). SSL: A theory of how people learn to select strategies. Journal of Experimental Psychology: General, 135(2), 207–236.
Robalino, N., & Robson, A. (2016). The evolution of strategic sophistication. American Economic Review, 106(4), 1046–1072. https://doi.org/10.1257/aer.20140105
Roberts, J. H., & Lattin, J. M. (1991). Development and testing of a model of consideration set composition. Journal of Marketing Research, 28(4), 429–440.
Roth, A. E., & Erev, I. (1995). Learning in extensive-form games: experimental data and simple dynamic models in the intermediate term. Games and Economic Behavior, 8(1), 164–212.
Rubinstein, A. (1986). Finite automata play the repeated prisoner’s dilemma. Journal of Economic Theory, 39(1), 83–96.
Rubinstein, A. (1988). Similarity and decision-making under risk (is there a utility-theory resolution to the allais paradox). Journal of Economic Theory, 46(1), 145–153.
Savage, L. J. (1954). The foundation of statistics. Wiley.
Saxena, D., Badillo-Urquiola, K., Wisniewski, P. J., & Guha, S. (2021). A framework of high-stakes algorithmic decision-making for the public sector developed through a case study of child-welfare. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2).
Schooler, L. J., & Anderson, J. R. (1997). The role of process in the rational analysis of memory. Cognitive Psychology, 32(3), 219–250.
Schooler, L. J., & Hertwig, R. (2005). How forgetting aids heuristic inference. Psychological Review, 112(3), 610–628.
Schurz, G., & Thorn, P. D. (2016). The revenge of ecological rationality: Strategy-selection by meta-induction within changing environments. Minds and Machines, 26(1–2), 31–59. https://doi.org/10.1007/s11023-015-9369-7
Sedlmeier, P., & Gigerenzer, G. (2001). Teaching Bayesian reasoning in less than two hours. Journal of Experimental Psychology: General, 130(3), 380–400.
Sent, E.-M. (2004). Behavioral economics: How psychology made its (limited) way back into economics. History of Political Economy, 36(4), 735–760.
Simon, H. A. (1947). Administrative behavior. Macmillan.
Simon, H. A. (1955). A behavioral model of rational choice. The Quarterly Journal of Economics, 69(1), 99–118.
Simon, H. A. (1956). Rational choice and the structure of the environment. Psychological Review, 63, 129–138.
Şimşek, Ö. (2013). Linear decision rule as aspiration for simple decision heuristics. In Advances in Neural Information Processing Systems (pp. 2904–2912).
Smith, V. L. (1991). Rational choice: The contrast between economics and psychology. Journal of Political Economy, 99(4), 877–897.
Smith, V. L. (2003). Constructivist and ecological rationality in economics. American Economic Review, 93(3), 465–508.
Spiegler, R. (2019). Behavioral economics and the atheoretical style. American Economic Journal: Microeconomics, 11(2), 173–194. https://doi.org/10.1257/mic.20170007
Spiliopoulos, L. (2012). Pattern recognition and subjective belief learning in a repeated constant-sum game. Games and Economic Behavior, 75(2), 921–935.
Spiliopoulos, L. (2013). Beyond fictitious play beliefs: Incorporating pattern recognition and similarity matching. Games and Economic Behavior, 81, 69–85.
Spiliopoulos, L. (2015). Transfer of conflict and cooperation from experienced games to new games: A connectionist model of learning. Frontiers in Neuroscience, 9(139), 1–18.
Spiliopoulos, L. (2018a). The determinants of response time in a repeated constant-sum game: A robust Bayesian hierarchical dual-process model. Cognition, 172, 107–123.
Spiliopoulos, L. (2018b). Randomization and serial dependence in professional tennis matches: Do strategic considerations, player rankings and match characteristics matter? Judgment and Decision Making, 13(5), 413–427.
Spiliopoulos, L. (2021). On the evolution of heuristics and bounded rational behavior in random games. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3935872
Spiliopoulos, L., & Hertwig, R. (2015). Nonlinear decision weights or skewness preference? A model competition involving decisions from description and experience. Cognition, 183, 99–123.
Spiliopoulos, L., & Hertwig, R. (2020). A map of ecologically rational heuristics for uncertain strategic worlds. Psychological Review, 127(2), 245–280. https://doi.org/10.1037/rev0000171
Spiliopoulos, L., & Hertwig, R. (2022). Variance, skewness and multiple outcomes in described and experienced prospects: Can one descriptive model capture it all? Journal of Experimental Psychology: General. https://doi.org/10.1037/xge0001323
Spiliopoulos, L., & Ortmann, A. (2014). Model comparisons using tournaments: Likes, ‘dislikes’, and challenges. Psychological Methods, 19(2), 230–250.
Spiliopoulos, L., & Ortmann, A. (2018). The BCD of response time analysis in experimental economics. Experimental Economics, 21(2), 383–433. https://doi.org/10.1007/s10683-017-9528-1
Spiliopoulos, L., Ortmann, A., & Zhang, L. (2018). Complexity, attention and choice in games under time constraints: A process analysis. Journal of Experimental Psychology: Learning, Memory, and Cognition, 44(10), 1609–1640.
Stahl, D. O. (1993). Evolution of Smartₙ players. Games and Economic Behavior, 5(4), 604–617. https://doi.org/10.1006/game.1993.1033
Stahl, D. O. (1996). Boundedly rational rule learning in a guessing game. Games and Economic Behavior, 16(2), 303–330.
Stahl, D. O. (1999). Evidence based rules and learning in symmetric normal-form games. International Journal of Game Theory, 28(1), 111–130.
Stahl, D. O. (2000). Rule learning in symmetric normal-form games: Theory and evidence. Games and Economic Behavior, 32(1), 105–138.
Stahl, D. O., & Wilson, P. (1995). On players’ models of other players: Theory and experimental evidence. Games and Economic Behavior, 10(1), 218–254. https://doi.org/10.1006/game.1995.1031
Stevens, J. R., Volstorf, J., Schooler, L. J., & Rieskamp, J. (2011). Forgetting constrains the emergence of cooperative decision strategies. Frontiers in Psychology, 1, 1–12.
Thaler, R. (1980). Toward a positive theory of consumer choice. Journal of Economic Behavior & Organization, 1(1), 39–60.
Thaler, R. (2015). Misbehaving: The making of behavioral economics. W.W. Norton & Company.
Thaler, R. (2016). Behavioral economics: Past, present, and future. American Economic Review, 106(7), 1577–1600.
Todd, P. M., & Gigerenzer, G. (2012). Ecological rationality: Intelligence in the world. Oxford University Press.
Tversky, A. (1969). Intransitivity of preferences. Psychological Review, 76(1), 31–48.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185(4157), 1124–1131.
Viale, R. (2022). Nudging. MIT Press.
Viale, R., Mousavi, S., Alemanni, B., & Filotto, U. (2018). The behavioural finance revolution: A new approach to financial policies and regulations. Edward Elgar Publishing. https://doi.org/10.4337/9781788973069
Vuori, N., & Vuori, T. (2014). Comment on ‘Heuristics in the Strategy Context’ by Bingham and Eisenhardt (2011). Strategic Management Journal, 35(11), 1689–1697.
Walker, M., & Wooders, J. (2001). Minimax play at Wimbledon. American Economic Review, 91(5), 1521–1538.
Wilcox, N. T. (2011). Stochastically more risk averse: A contextual theory of stochastic discrete choice under risk. Journal of Econometrics, 162(1), 89–104.
Wübben, M., & von Wangenheim, F. (2013). Instant customer base analysis: Managerial heuristics often ‘get it right.’ Journal of Marketing, 72(3), 82–93.
Yarkoni, T., & Westfall, J. (2017). Choosing prediction over explanation in psychology: Lessons from machine learning. Perspectives on Psychological Science, 12(6), 1100–1112.
Zellweger, T. M., & Zenger, T. R. (2021). Entrepreneurs as scientists: A pragmatist approach to producing value out of uncertainty. Academy of Management Review. https://doi.org/10.5465/amr.2020.0503
Zilker, V., & Pachur, T. (2021). Nonlinear probability weighting can reflect attentional biases in sequential sampling. Psychological Review. https://doi.org/10.1037/rev0000304
Acknowledgements
The authors are grateful for critical and helpful commentary from Morris Altman, Nathan Berg, Gerd Gigerenzer, Ralph Hertwig, Konstantinos Katsikopoulos, Elizabeth Maitland, Ben Newell, and two referees for this journal.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Ethics declarations
Conflict of interest
The authors have no competing interests to declare that are relevant to the content of this article. This article is a significantly revised and updated version of a book chapter titled “The Beauty of Simplicity? (Simple) heuristics and the opportunities yet to be realized” in the Handbook of Behavioral Economics and Smart Decision-Making: Rational Decision-Making within the Bounds of Reason (2017), edited by Morris Altman.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Ortmann, A., Spiliopoulos, L. Ecological rationality and economics: where the Twain shall meet. Synthese 201, 135 (2023). https://doi.org/10.1007/s11229-023-04136-z