Mind & Society

Volume 5, Issue 2, pp 173–197

Symposium on ‘‘Cognition and Rationality: Part I’’ Relationships between rational decisions, human motives, and emotions

Authors

  • Cristiano Castelfranchi
    • Consiglio Nazionale delle Ricerche (CNR), Istituto di Scienze e Tecnologie della Cognizione
    • Dipartimento di Scienze della Comunicazione, Università di Siena
  • Francesca Giardini
    • Consiglio Nazionale delle Ricerche (CNR), Istituto di Scienze e Tecnologie della Cognizione
  • Francesca Marzo
    • Consiglio Nazionale delle Ricerche (CNR), Istituto di Scienze e Tecnologie della Cognizione
Original Article

DOI: 10.1007/s11299-006-0015-1

Cite this article as:
Castelfranchi, C., Giardini, F. & Marzo, F. Mind & Society (2006) 5: 173. doi:10.1007/s11299-006-0015-1

Abstract

In the decision-making and rationality research field, rational decision theory (RDT) has always been the main framework, thanks to the elegance and power of its mathematical tools. Unfortunately, the formal refinement of the theory is not matched by satisfying predictive accuracy, so there is a wide gap between what the theory predicts and the behaviour of real subjects. Here we propose a new foundation for RDT, based on a cognitive architecture for reason-based agents that act on the basis of their beliefs in order to achieve their goals. The decision process is a cognitive evaluation of conflicting goals, grounded in different beliefs and values, but also in emotions and desires. We draw on a cognitive analysis of emotions and integrate them into this more general RDT.

Keywords

Rational decision theory · Decision-making · Rationality · Emotions · Cognition

1 Introduction

In the decision-making and rationality research field, rational decision theory (RDT) has always been the main paradigm, thanks to its formal refinement and to the possibility of easily modelling complex phenomena with mathematical tools. From our standpoint, RDT is not simply a formal background: together with the model comes a prescribed set of “rational motives,” such as pecuniary incentives, sometimes supplemented with additional motives, such as avoiding emotions. The aim of these additions is to fill the gap between what the theory predicts and what people really do. We believe that adding further motives is a good move but is not enough, and that a new “micro-foundation” is required. This has to be based on a cognitive architecture for reason-based and possibly rational agents, where goals and beliefs are the primary constituents. The decision process is a cognitive evaluation of conflicting goals founded on different beliefs, and RDT is used as an empty shell for means-end reasoning, based on value attribution and level of certainty.

In the first part of this work we discuss the limits of RDT, we analyse what “rational” and “cognitive” mean, and we propose a broader framework for rationality based on a cognitive architecture made up of goals and beliefs. In dealing with goals and beliefs as the primary constituents of rational behaviour, we define cognitive agents, describing their main features and the way a model of rationality can be built with cognitive instruments. Finally, we compare cognition and RDT.

In the second part of the paper we raise the complex issue of emotions entering the decision-making process, and we consider how RDT deals with this problem, once again stressing the main limits of this framework. We go on to propose our view of cognitive agents, and then to describe decision-making processes as based on goals and beliefs, and to show what happens when an emotion enters this process.

Finally, we argue for the necessity of a new “micro-foundation” (agent/actor’s mind model) for the social sciences, different from RDT. For such a new micro-foundation, that is, for changing the model of the actor’s mind, postulating additional values is not enough: no motive per se can subvert the very model of utilitarian economic man. A new micro-foundation necessarily requires a different mechanism or a variety of mechanisms governing decision and action.

2 Rationality and cognition

2.1 The limits of rational decision theory

In his work on rational analysis, Herbert Simon claimed that “the foundation stone of contemporary neo-classic economics is the hypothesis that economic actors are rational (i.e. perfectly adaptive), and that their rationality takes the form of maximizing expected subjective utility [...]” (Simon 1991).

Taking Simon’s assumption that “utility” is the actor’s own ordering of preferences among outcomes, and that it is consistent but arbitrary, we can conclude something about an actor’s behaviour only if we know what he is trying to accomplish.

It is important to note that Simon treated the assumption that utility is measured by profit as merely an additional one, introduced to predict the quantity that will be produced: the one that maximizes the difference between total revenue and total costs. According to Simon, the only requirement for predicting behaviour is “a description of the environment (the demand and cost schedules) and an innocent assumption about motives.”

Since, correctly interpreted, classical RDT says nothing about the goals, motives, and desires of agents, it should be considered just an empty shell: a merely formal or methodological device for deciding the best, or a satisfying, move, given a set of motives and their importance or ordering. Thus, being “rational” says nothing about being altruistic or not, about being interested in capital (resources, money) or in art or in approval and reputation, and so forth.

The “instrumentalist,” merely formal approach to rationality should not be conflated with the “substantive” view of rationality: instrumentalist rationality says nothing about the specific motives or preferences of the agents. Utility is just an abstraction relative to the “mechanism” for choosing among the real motives or goals of the agent (Bacharach and Gambetta 1997; Castelfranchi 2003). Therefore, it should not be conceived per se as a motive, a goal of the generic agent. Although this instrumental use of the concept of utility is generally accepted as obvious and well known, when a rational framework is adopted the two aspects are often mixed up and a narrow theory of the agent’s motivation is imported.

Officially, economics considers RDT an empty shell, and motives are taken as “exogenous,” external to the theory and model. It is just a way of choosing among them, merely a mechanism, a procedure, as it should be. Nevertheless, in practice, in many cases a set of “rational” motives, i.e. pecuniary incentives, comes with the model. A clear example is the so-called “sunk cost bias,” which is a bias only from a strictly economic point of view, but not when we take into account the actual range of the manager’s motives. Another example can be found in the interpretation of voters’ behaviour as irrational, since their costs in voting are greater than their marginal contribution to the result of their party (which is assumed to be the voters’ only concern or goal). Frequently, RDT is used in economics both as a formal model of choice and as a set of “economic” or presumed “rational” motives.

Taking the substantive rationality approach, and starting from the “non-innocent” assumption that utility as profit is what moves human action, economic theory has often proved erroneous in its interpretations of human behaviour. Several recent interdisciplinary studies have come closer to a better interpretation of some of the problems addressed by economics. However, the classical economic approach is still used not only to address issues usually studied by economics, but also to interpret phenomena that are the subject matter of other disciplines.

For this reason it is important to stress the idea of subjective rationality as distinct from economic rationality, and to make clear that economic theory as such is not entitled to specify or prescribe human motives. In fact, even adopting a rational decision framework, we can postulate in our agents any kind of motive/goal we want or need: benevolence, group concern, and so on. This does not make them less rational, since rationality is defined subjectively. It might make them less efficient, less adaptive, less competitive, less “economically” rational, but not less subjectively rational. This distinction, always claimed to be obvious and always ignored, is orthogonal to the other distinction between Olympic or normative rationality and Simon’s limited, bounded rationality; it is not the same distinction.

The fact that subjects’ behaviours do not match the predictions of RDT does not prove that they are not following such a rule/mechanism, nor that they are irrational. This conclusion is too strong and, as we will explain later, mistaken: other proofs and explanations of human rationality are needed. Indeed, as Simon stressed, RDT cannot predict anything at all without assuming some specific motivations in the subjects (goals, preferences, subjective rewards).

Thus, why not assume instead that the actual motives and rewards are being ignored, and that trying to understand them is the only viable way to study human behaviour and choices? True, this is a risky strategy, since it can be ad hoc and impossible to falsify. Whatever behaviour deviating from rational choice we observe in humans, we can always post hoc and ad hoc postulate that there must be some rewarding motive in the subjects’ minds such that acting so is subjectively convenient and rational for them. Any hidden “intervening variable” in scientific explanation is exposed to this danger. The correct countermove is not to avoid postulating hidden variables (in this case subjective motives), but to search for independent evidence for them (Bacharach 1988).

2.2 Models of rationality

Starting from the considerations above, we propose cognitive models as more complete theoretical representations of the human mind, through which human decision-making can be studied less arbitrarily. As we have argued, only by modelling decision-makers’ minds as complex causal systems of beliefs and goals is it possible to study decision processes and to advance our understanding of their choices.

In order to introduce cognitive models, let us first present the concept of cognition and its relation to rationality. To be concise: rationality presupposes cognition, but cognition does not imply rationality.

Mind is not necessarily rational, and “cognition” is not a synonym for rationality. Rationality is a special way of working of the cognitive apparatus. It is practiced when beliefs are well grounded on sufficient and convincing evidence, when inferences are not biased by distortions, wishful thinking, illusions, or delusions, and when decisions are based on those grounded beliefs and on a correct and sufficient consideration of expected risks and advantages with their values. This is a very normative (but not “prescriptive”) and ideal model: neither believing nor choosing conforms to such an ideal, just as only 10% of human eyes conform to the “normal” eye presented in an ophthalmology handbook.

Another frequent mistake is to conflate rationality (and intelligence) with effective and adaptive behaviour, particularly in the new “naturalistic” approach to mind and behaviour. On this view, any entity (e.g. a swarm) able to deal with complex problems and to find some good adaptive “solution” can be seen as intelligent, and its behaviour can be called “rational,” even if it is only accidentally successful and efficient. The mistake lies in losing the specific meanings of notions like intelligence and rationality, which refer to mental solutions, achieved by working on mental representations. Since we already have notions like adaptive, efficient, effective, advantageous, and solution, why should we abandon them in favour of an extended, metaphoric, unclear notion?

Rationality and intelligence are neither the only nor the best form and path of behavioural adaptation. They represent just one adaptive device that emerged from natural evolution, superior or more efficient in some circumstances but not in others. Working on and with anticipatory mental representations (beliefs, aims, and plans) in an optimal way (sound and grounded beliefs, realistic goals, coherent plans, optimal choices) is adaptive and successful, but it is not the only or the best adaptive behavioural device in every circumstance. A mix of reasoning and of “irrational” appraisal and reaction is likely to be better for human beings.

Our claim is that to understand human intelligence we need to study both reasons and appraisal in terms of mental representations. On one side, we need to integrate the model of mind with non-instrumental cognitive rationality and evaluative rationality. That is to say, we must enrich and articulate human cognition, as in Boudon’s proposal (Boudon 2003), in terms of different kinds of beliefs: more or less explicit, motivated and supported, or irrationally believed. On the other side, in order not to fall into the fallacy of making “cognition” coincide with “knowledge,” we must work on goals, the motivational mental representations that guide all actions. In fact, beliefs of any kind are not enough to arrive at an action or a decision, because decisions are between goals. Although all behaviour involves beliefs, beliefs are not sufficient for behaviour, and no kind of belief can directly motivate and drive action. Since purposive, intentional action is belief-based but goal-driven, utility value cannot reduce or substitute for the specific and multiple desires and goals that motivate and reward the agent (Bratman 1987).

The “cognitive movement” within those social sciences that make use of decision theory and game theory still privileges just the epistemic aspects (beliefs and knowledge), while neglecting the other basic components of mind.

For example, Bacharach’s mental “frames” (2003) and Brandenburger’s “epistemic program” (2003), which aim at making the players’ beliefs about the game explicit in game theory as part of the game, are very relevant but not sufficient. Likewise, Boudon’s Cognitive Theory of Action (2003) and Sugden’s theory of the motivating power of expectations (2000) work only on beliefs and their role in decisions and in games; all of them refuse to model the motivational part of the agent explicitly. People eventually resort to “emotions” as the motivational component of mind, again to avoid an explicit model of motives, desires, needs, objectives, and intentions.

There seems to be a generalized phobia of goals and of their explicit treatment in almost all formal approaches to decision-making. Moreover, when used, the concept of “goal” is often relegated to a restrictive definition (in part related to the connotation of the English word), where goals would only be the prospect, the external target, or the pursued short-term result of a specific action. But there are long-term, rather abstract or universal goals and values, or generic motivations that must be implemented in specific objectives and plans depending on context and experience, and even “impossible” goals that will never produce an intention to achieve them, remaining mere aspirations, utopias, and distant hopes. Let us make clear that when we talk about a “goal,” we use the term as the general family term for all motivational representations: from desires to intentions, from objectives to motives, from needs to ambitions.

That said, it should be clear that in our opinion we need a broader “cognitive program” aimed at making explicit the players’ goals and motives (and the perceived motives of their partners) as part of the game they are playing. Goal-directed action, motives, and objectives must be taken into account as the only ground of utility, interpreted, as we said, as a quantitative measure of these qualitative aspects of mind. The inputs of any decision process are multiple conflicting goals and the related beliefs about conflict, priority, value, means, conditions, plans, risks, etc. This “quality” (the explicit account of the agent’s multiple and specific goals, from which alone agents’ competition or cooperation follows, and of the agent’s beliefs about them) must be reintroduced into the very model of the social and economic actor’s mind.

2.3 Goals and beliefs as the primary constituents of rational behaviour

Having defined “rationality” and “cognition,” we can turn our attention to a more analytical model of mind in terms of explicit beliefs and, crucially, goals. Our model aims to overcome the traditional economic models of mind, where goals are hidden by the concept of “utility,” beliefs are reduced to probability measures, and both rely on some assumption of perfect knowledge.

First, we introduce goals and beliefs as the two basic sets of mental representations (the motivational and the epistemic ones), comparing our approach with that of other researchers who did not recognize the true importance of these conceptual tools for modelling rationality. Then, we describe cognitive agents acting on the basis of their beliefs and with the aim of realizing their goals, the so-called “goal-governed” agents (Conte and Castelfranchi 1995). Finally, we address the concept of utility, claiming that it is just a pseudo-goal (it looks like a goal but is not), and eventually placing it within the more abstract category of meta-goals, which are not real motives that motivate actions.

2.3.1 Goals and beliefs: some general issues

The first step on the path from rational choice theory to a more cognitively oriented theory is to take into account the idea that agents act in order to reach their goals, given the beliefs they have. As we said, we use “goal” as the general family term for all motivational representations (desires, intentions, objectives, motives, needs, and ambitions), whereas beliefs are representations or assumptions about the (past, current, future, possible) state of the world. Goals can have different values, intrinsic ones or specific ones due to contingency, which allow them to be organized hierarchically, so that the agent can choose among the several goals active at the same moment thanks to their different values. It is worth noting that no kind of belief can directly motivate and drive action, because intentional action is belief-based but also goal-driven. This means that beliefs (and expectations, if reduced to simple beliefs) can provide reasons and meanings for an action only relative to some goal: they are not enough to “orient” and “motivate” the agent.

To better explain the importance of goals and beliefs, we first consider some hasty and unjustified claims about the irrationality of real subjects when compared with the predictions of RDT. Sugden (2000) describes something very similar to goals, but he does not make this explicit and does not take them into account in his model. For example, while explaining his main point about how the moral principle works, he says: “it is a property of human psychology that we feel a sense of resentment against people who frustrate our reasonable expectations, and also that we feel aversion towards performing actions which are likely to arouse other people’s resentment.” Here “resentment” is “a sensation [...] which compounds disappointment at the frustration of one’s expectations with anger and hostility directed at the person who is frustrating them.” Something is missing here. First of all, how can “expectations,” if they are simple beliefs about the future (involving no wishes, desires, or concerns), be “frustrated”? As simple predictions they can only be “invalidated”; they are “frustrated” because they are about something the subject would like to see realized. Vice versa, if our expectation is a “fear,” we will be happy and relieved if somebody makes it “false.”

There is also a gap between feeling and acting. The fact that “we feel aversion” is relevant because it produces the choice and the action of avoiding frustration, i.e. the goal of not frustrating. This is the “motivating” effect and power of expectations and of normative beliefs: generating goals. Analogously, what do “anger” and “hostility” mean without their conative component, their inducement to act, their activation of a goal of harming the other? Feelings and emotions undoubtedly have motivating power in humans, thanks to their ability to induce desires and intentions, but only the goal, with its value, can change behaviour.

This probably means that Binmore (1994) is right when he says that the “game” has always been changed by additional outcomes relative to additional hidden goals. However, even Binmore does not provide an explicit and adequate theory of goals as a reform of RDT and game theory. It is true that when players seem to play irrationally they are in fact playing another game, with hidden, non-official payoffs; but they do so not only on the basis of what they believe, but on the basis of what they like and want, i.e. of the goals they have. The rewards that players aspire to, whether public or hidden (Bacharach and Gambetta 1997), derive directly from the goals at stake. Thus, our main claim is that without goals it is not possible to consider and evaluate rewards and values, and, at the same time, the lack of rewards makes preferences and utility useless tools for understanding human behaviour (Miceli and Castelfranchi 2002b).

2.3.2 Cognitive agents and their main features

We now introduce “cognitive agents,” which are “belief-based, goal-governed systems.” This means that cognitive agents pursue a certain objective, for a certain aim, following a certain plan, because they rely on specific assumptions: about what has been achieved and what has not, about what is achievable, about what is in conflict with what, about which agents are able and in a position to do what, about what is better and preferable relative to their aims, and so on. Cognitive agents have representations of the world, of the effects of their actions, of themselves, and of other agents. Beliefs (the agent’s explicit knowledge or assumptions), theories (coherent and explanatory sets of beliefs), expectations, desires, plans, and intentions are relevant examples of these representations, which can be internally generated and manipulated, and can also be subject to inference and reasoning. Agents act on the basis of their representations, which play a crucial causal role, since they both cause the action and guide it. The behaviour of cognitive agents is a teleonomic phenomenon, directed towards a given result that is pre-represented, anticipated in the agent’s mind. A cognitive agent is an agent that bases its goals, choices, intentions, and actions on what it believes; that is, it exhibits “representation-driven behaviour.” In this framework, the success (or failure) of its actions depends on the adequacy of its limited knowledge and on its decisions, but also on objective conditions, relations, and resources, and on unpredicted events.
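The idea of a belief-based, goal-governed agent can be illustrated with a minimal sketch. This is our own illustrative rendering, not part of any published formalism: every class, attribute, and example value below is an assumption made for the sake of the example. The point it shows is simply that beliefs gate which goals are live candidates for action.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Goal:
    name: str
    value: float                              # subjective importance of the goal
    preconditions: frozenset = frozenset()    # beliefs required to consider it achievable

@dataclass
class CognitiveAgent:
    """A belief-based, goal-governed agent: it considers only those goals
    whose preconditions are supported by its current beliefs."""
    beliefs: set = field(default_factory=set)
    goals: list = field(default_factory=list)

    def achievable(self):
        # Representation-driven behaviour: beliefs determine which goals are live.
        return [g for g in self.goals if g.preconditions <= self.beliefs]

agent = CognitiveAgent(
    beliefs={"have_manuscript", "publisher_accepts"},
    goals=[Goal("publish_book", 0.9, frozenset({"have_manuscript", "publisher_accepts"})),
           Goal("get_married", 0.8, frozenset({"found_partner"}))],
)
print([g.name for g in agent.achievable()])  # ['publish_book']
```

Revising a belief (e.g. adding "found_partner") changes which goals the agent can act on, without changing the goals themselves: this is the causal role of beliefs described above.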

2.3.3 Founding utility in cognitive agents

Our suggestion for the reform of RDT starts from the assumption that neither “economic incentives” nor “utility” is enough to motivate and explain human behaviour, and that other architectures and mechanisms governing action are possible and real. Utility and its maximization are used as the motive governing rational behaviour, so that each choice can be described as aimed at maximizing expected subjective utility. But this commonsensical view creates many problems and misunderstandings in the relationship between RDT and the theory of human motives, and also in designing a suitable architecture of the human mind. Let us deal with this hard issue explicitly. Utility maximization is not a true motive; it is just an algorithm, a procedure for dealing with multiple goals. We can define it as a meta-goal: a goal about dealing with goals. But before we discuss meta-goals we have to clarify the issue of goals that only seem to be there: pseudo-goals (Conte and Castelfranchi 1995).

There are some behavioural mechanisms (for example, reactive ones such as reflexes and releasers) that do not imply a true internally represented goal, even though the behaviour is finalized (but not intended), functional for the production of some given results. We call these “as if” goals or “pseudo-goals,” because it is “as if” the system were regulated by an explicitly represented goal, even though in fact there is neither an internal anticipatory representation of the result, nor decision, nor reasoning, nor even planning for achieving it. Therefore, a pseudo-goal is an adaptive function external to the mind of the system, towards which behaviour is directed, without the goal being represented directly in the mind. The behaviour is finalistic and not random, although it is not deliberate and intentional. Pseudo-goals must not be confused with unconscious goals. In our opinion, in fact, a rigorous use of the term “unconscious” means that what is unconscious nevertheless resides in the mind, and is represented directly, in some format or other, in some memory. Pseudo-goals instead belong to the category of what is not in the mind. Of course, it is difficult, and sometimes empirically impossible, to determine whether a goal is unconscious or merely a pseudo-goal. This kind of goal does not refer solely to the “low” levels of behaviour (reflexes, instincts, and so on); it can also be related to a level of regulation that is structurally higher than explicit goals and plans, i.e. the level of “meta-goals.” Meta-goals are the constitutive and regulating principles of the system, like the goal of avoiding and solving contradictions and maintaining coherent and integrated knowledge, or the goal of believing the most believable and sufficiently supported alternative.
It is by no means necessary (besides being very difficult) to formulate these teleonomic effects as goals explicitly represented and processed in the mind; it is sufficient to assert that these functional principles are only pseudo-goals or adaptive procedures for dealing with mental representations. We believe that the goal of ensuring the best allocation of resources, of attaining the greatest possible number of goals with the highest overall value at the lowest cost, is likewise not a true goal explicitly represented in the system and governing normal everyday choices. Individuals are not “economic agents”: they act with concrete and specific goals in mind (to be loved, to eat, to publish a book, to get married), i.e. a variety of heterarchic motives, and they do not pursue a single totalizing goal such as Profit or Pleasure. Of course, to the extent allowed by their limited knowledge, agents normally choose the most appropriate goal (action) among the active ones that cannot be pursued simultaneously. However, our argument is that this result is not necessarily guaranteed by an explicit goal of maximizing profit: it is enough to have a mechanism or procedure for choosing among active goals on the basis of their value coefficients. Cognitive agents do not have the real “goal” of choosing the most appropriate goal; rather, they (their selective apparatus) are “constructed” in such a way as to guarantee this result.
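The distinction between an explicit maximizing goal and a mere selection procedure can be made concrete with a toy sketch (the goal names and value coefficients are illustrative assumptions of ours). Nothing in the agent's represented content mentions "maximizing utility"; the maximization is only a property of how the mechanism is constructed.

```python
def choose(active_goals):
    """A choice *mechanism*, not a represented goal: the procedure simply
    selects the active goal with the highest value coefficient. The agent
    has no explicit goal 'maximize utility' among its representations."""
    return max(active_goals, key=lambda g: g[1])

# hypothetical active goals, each paired with a value coefficient
active = [("eat", 0.6), ("publish_book", 0.9), ("call_friend", 0.4)]
print(choose(active))  # ('publish_book', 0.9)
```

In this sense "utility maximization" is a meta-goal in the text's terms: a regularity guaranteed by the selective apparatus, external to the set of goals the agent actually entertains.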

The same misuse of the rational/economic model, with its implicit, automatic baggage of “economic incentives,” causes the confusion between self-interested or self-motivated agents and “selfish” ones. Self-interested or self-motivated agents are simply autonomous agents with their own interests; they are endowed with, and guided by, their own internal goals and motives. They can choose among those active goals and motives thanks to some rational principle of optimizing expected utility on the basis of their beliefs. The fact that they are unavoidably driven by their internal ends, given the value of their own motives, does not make them selfish agents. This is a matter of motives, not of mechanisms. They may have any kind of motive: pecuniary or economic in general, or moral, aesthetic, pro-social and altruistic, in favour of the group and self-sacrificial, etc. Whether they are selfish or not depends only on the specific motives they have and prefer, not on their being obviously driven by internal and endogenous motives.

From the architectural point of view altruism is possible, and perhaps it is also real (Bagozzi et al. 1998). As Seneca explained long ago, it is also possible for a virtuous man to act in a beneficial or altruistic way while expecting (and even enjoying) internal or external approval, or while preventing feelings of guilt and regret: what matters, from our perspective, is that such an expectation is not the aim, the motive, of his action, i.e. that his action is not “for,” or “in view of,” such a reward. We just need a cognitive-motivational architecture capable of this crucial discrimination between expectations, rewards, and the motives driving action and motivating choice. Even a rational architecture is compatible with such sophistication (provided it does not covertly entail selfish motives).

2.4 Cognition and rational decision theory

We can now summarize what we have claimed so far, proposing a more cognitively oriented framework for decision-making. At this point, the comparison between the formal assumptions of the decision-theoretic model and the more realistic accounts provided by cognitive modelling does not lead to a single answer (accept or reject RDT). There are more articulated responses, which lead to a complex picture of human rationality.

When RDT is confronted with agents’ heterogeneous behaviours, its proponents appeal to irrationality or, at best, invoke different mechanisms. As we already said, the fact that subjects’ behaviours do not match the predictions of RDT does not prove that they are not following such a rule/mechanism, nor, indirectly, that the subjects are irrational. This is an unjustified conclusion, a diagnostic mistake; other explanations are possible. In fact, as Simon noticed (1991), RDT cannot predict anything at all without assuming some specific motivation in the subjects (goals and subjective rewards). Thus, why do we not assume instead that the actual motives and rewards are being ignored? As we said, utility is just an abstraction relative to the “mechanism” for choosing among the agent’s real motives or goals. Agents playing a game and not behaving in the prescribed manner are not irrational; they are simply considering other aspects of the game or, in Binmore’s (1994) words, they are playing another game. Therefore, before searching for different mechanisms, we need to consider that subjects have different goals, and that they process them in order to reach the most important one or, at least, to sacrifice only the one (or ones) with the lowest value. In their “image theory,” for example, Beach et al. (1992) consider the decision process as a chain of successive steps in which, at each point, the agent compares his previous values, goals, and plans with the results he is about to achieve, so that he can always check which goals he is going to pursue and which not. The most effective way of describing how different goals lead people to different decisions is, as often, found in Simon’s work (1986), where he says that “People have reasons for what they do.”

However, a variety of non-instrumental goals is not enough to explain human behaviour; there are also different mechanisms for driving behaviour, and even different decision mechanisms. Psychological research shows that humans use different strategies, apply several rules, and consider differing aspects of the same problem, depending on the circumstances and the context, but also on the way the problem is described (Payne et al. 1992, 1993). Since Kahneman and Tversky (1979) developed their prospect theory, countless studies have demonstrated that it is at best useless to think of human beings as behaving the way formal models predict, and that it is necessary to take several mechanisms into account. The impressive literature on heuristics and biases in decision-making shows that it is impossible to look for a single decision mechanism suitable for every kind of decision process and situation. The way the decision is presented, the way the alternatives are described, and even the order in which the different solutions appear lead to different choices made through different strategies; this is a main point to consider when dealing with human rationality.

For example, Slovic et al. (2002) show that when the decisions to be taken are quite complex while mental resources are limited, people resort to more economical strategies mainly based on intuitive affective appraisals of situations and scenarios (the affect heuristic).2 These evoked affective connotations actively enter the decision process together with the analytical evaluations (see Miceli and Castelfranchi 2002a, for a discussion). This insertion and integration is more convincing to us than the “dual-process theories” (Kahneman and Frederick 2002; Sloman 1996), which are sometimes presented not as the confluence and integration of two information sources in one process, or as the use of different decision heuristics and strategies, but as two independent (not clearly architecturally and functionally integrated) “systems” of reasoning.

It should be noted not only that the deliberation process does not follow a single model, but that deliberation is often bypassed altogether. Usually a decision leads to an action, but quite often something happens such that deliberation based on a means-end evaluation is overridden. We can have simple reaction-based behaviour, as in classical conditioning, where a known stimulus elicits a given response or action; as long as these mechanisms work, they are reinforced (only a serious failure could invalidate them). The behaviour can also be rule-based, also described as a habit or script-based behaviour, where there is a lower level of automatism but still no true deliberation. This means that the agent knows how to solve the problem, i.e. the decision is not the product of thinking and comparing options, but is rather a matter of recognizing the appropriate response. Deliberation, even at its lowest level, can also be completely bypassed, as happens when physiological needs (such as hunger, thirst, pain, or even addiction) or emotions trigger an action. Loewenstein (Loewenstein 1996; Loewenstein and Schkade 1999) calls them “gut feelings,” and describes how they can avoid cognitive processing and directly trigger a behavioural response, even when this irrational behaviour can be harmful to the subject. This happens when these gut feelings have a high intensity; in any case, once they are set running, the linear causal process linking beliefs and desires to goals and then to action is discarded, and the feeling or emotion immediately drives the subject to the action needed to face that emotion.

3 Emotions and decision-making

3.1 Emotion and the utility framework

Lately, several scholars in the social sciences (and in economics in particular) have been working to make models of the “rational mind” more realistic and human-like by simply adding emotion to the traditional RDT model.

The classical way in which they account for emotions is to say that feeling and experiencing an emotion is something good or bad, i.e. that it provides some positive or negative reward.

In such a way, emotion (the feeling of it) enters the subject’s decision-making in a natural way, without changing the utility framework, just by stretching it a bit. The subject anticipates possible emotions: “if I do A (and the other responds with C) I will feel E”; “if I do B (and the other responds with D) I will not feel E (or I will feel E).” This changes the value of move A or B, and so on.

This view characterizes, for instance, the literature on the role of anticipated regret in decision-making; but the same solution can be applied to shame, pride, envy, anxiety, fear, or whatever (for a criticism of this approach see also Loewenstein 1996). The solution proposed within so-called “psychological game theory” (Geanakoplos et al. 1989; Rabin 1998; Ruffle 1999), where players’ payoffs are endogenized, does not look very different: on the basis of beliefs and expectations, emotions enter the individual’s utility function by adding a “psychological” component to the “physical” component of the agent’s overall payoff.

For this reason, in order to present an alternative approach to the issue of connecting emotions and decision-making, aimed at a better understanding of the various ways in which emotion can affect the decision process and behaviour, we propose to take as reference a more articulated basic framework of the cognitive process, based on the variety of goals we argued for in previous sections. But first, far from giving an exhaustive explanation of such a complex concept, we will give a brief introduction to emotions, according to recent developments in cognitive theory.

3.2 A brief sketch of emotions

In the last decade the importance given to emotions has grown, and a topic traditionally omitted from many human and social sciences is now at the centre of attention. Even more interesting is the fact that from Plato through Descartes emotions were considered the main obstacle to human rationality, whereas now they are described as necessary and helpful features of human beings, and even as a contribution to, or a form of, rationality. Neuroscience (Damasio 1994; Rolls 1999; LeDoux 1996), philosophy (De Sousa 2003), and psychology (Oatley 1992; Frijda 1986; Lazarus 1991) are trying to account for a wide-ranging and multifaceted phenomenon, mainly characterized by the fact that, despite many attempts at defining and classifying emotions, something is always missing. Describing the complexity of emotions is also a hard problem: they have both mental and physical qualia; there are simple reactive emotions (such as fright) but also socially and cognitively very rich emotions (such as envy, shame, and guilt); sometimes emotions are quite similar and thus difficult to discriminate; and different inputs can trigger the same emotion, while the same input can give rise to different emotions. The list is far from complete, but it should suffice to show how intricate the problem is and how difficult it is to cope with. There are therefore several theories of emotions, each concerned with a specific side of the problem, and none able to manage the problem as a whole; but, as we have just seen, this is due to the depth of the problem, not to the poverty of the theories.

Given this due premise, we can briefly define an emotion as a response to certain classes of events of concern to a subject, which triggers bodily changes and is typically related to characteristic behaviour (Rolls 1999). Bodily activation, usually called “arousal,” is one of the main features of an emotion, together with “valence” (the positive or negative quality of the emotion itself) and “appraisal.” As Scherer (1984) pointed out, in appraisal the evaluation dimension results from “the intrinsic or inherent pleasantness or unpleasantness of a stimulus, the activity dimension from a mismatch between goal/plan-related expectations and the actual state (requiring action), and the potency dimension from the organism’s estimate of how well it would be able to cope with the particular stimulus event and its consequences.”

Dolan (2002) defines emotions as “complex psychological and physiological states that, to a greater or lesser degree, index occurrences of value. It follows that the range of emotions to which an organism is susceptible will, to a high degree, reflect on the complexity of its adaptive niche.”

In our view, a complex human emotion is a two-sided object, constituted by a set of goals and beliefs (Castelfranchi 1988) plus a set of bodily feelings, which give it its distinctive quality. An “emotional goal” or an “emotional belief” is thus really different from a “normal” one, because the subject does not merely consider it and its value: she also feels it, with varying degrees of intensity, and the overall decision among such goals is not so easy. Our general idea is that agents usually calculate the values of their “normal” goals in order to decide which is the best goal to achieve; but when an agent feels an emotion, the bodily activation linked to the goals activated by the emotion can compel her, with varying degrees of intensity, to do something even beyond, or in contrast to, some of her pre-existing goals, and the final result is not so easily foreseeable. On one side, as we will see, the usual process and mechanism can be bypassed; on the other, it can be altered by the modification of its inputs and values. In the next paragraphs we will try to deal with this difficult issue.

The value of the goal, moreover, is no longer based on means-end relations and expected results but just on bodily activation. Usually the value of a given instrumental goal (like leaving the house) is “calculated” on the basis of expected results, and more precisely derived from the higher-level goals possibly achieved (e.g. not missing the bus) or harmed (e.g. continuing to chat with Mary) and from their values; the value of non-instrumental goals (motives), by contrast, is simply assumed as given for a given agent with a given personality, gender, culture, age, experience, etc. With emotions a completely different principle operates: the higher the intensity of arousal and of emotion-related sensations, the higher the value, i.e. the importance, of the emotionally activated goal (“impulse”) (Brehm and Self 1989). An example is the emergency of leaving a house: the more intense the fear, the stronger the goal of leaving.
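This contrast between reason-derived and arousal-derived goal values can be sketched in a few lines of code. This is a toy illustration only; the functions, names, and numbers below are our own assumptions, not a model from the literature.

```python
def instrumental_value(served_goals):
    """Value derived from the higher-level goals the action serves:
    achieved goals add their value, harmed goals subtract theirs."""
    return (sum(v for _, v in served_goals["achieved"])
            - sum(v for _, v in served_goals["harmed"]))

def emotional_value(arousal, gain=10.0):
    """Value assigned directly by bodily activation: the higher the
    arousal (in [0, 1]), the more important the activated impulse."""
    return gain * arousal

# Leaving the house, weighed by means-end reasoning:
leave_calm = instrumental_value({
    "achieved": [("not_missing_the_bus", 5.0)],
    "harmed": [("chatting_with_mary", 2.0)],
})  # a modest, reason-derived value (3.0)

# The same goal activated by intense fear gets a much higher value:
leave_afraid = emotional_value(arousal=0.9)
```

The point of the sketch is that the two values come from entirely different computations: one from the network of higher-level goals, the other from arousal intensity alone.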

In brief, we can say that feelings subvert the cognitive, reason-based (rationally oriented) way of believing and intending, because feeling can provide:

a different (non-argument-based) way of assigning value to goals, and an anomalous kind of evidence for believing.

Feeling something, for example fright, fear, or anxiety, is taken as evidence for the belief (a reason for believing) that “there is some danger”; not, as usual, the other way around, where a belief about a possible danger elicits fear or anxiety (Castelfranchi 2000).

3.3 A multiple solution in a more complex cognitive architecture

It must be clear by now that, in our opinion, the real challenge in taking the role of emotions in decision-making into account consists in modifying the basic model of the mind.

This is an important research trend both in cognitive science and in the new agent-based artificial intelligence, with its studies on agent architectures, e.g. the TouringMachines architecture and the important area of BDI (beliefs, desires, intentions) models.
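As a minimal illustration of the BDI idea (our own sketch, not any specific published architecture), beliefs can be modelled as a set of facts that filters which desires are applicable, with the most valuable applicable desire becoming the intention:

```python
def deliberate(beliefs, desires):
    """beliefs: a set of facts held true by the agent.
    desires: a list of (goal, value, preconditions) triples.
    Returns the adopted intention, or None if no desire is applicable."""
    applicable = [(g, v) for g, v, pre in desires if pre <= beliefs]
    if not applicable:
        return None  # nothing to intend: no true decision takes place
    # the most valuable applicable desire becomes the intention
    return max(applicable, key=lambda gv: gv[1])[0]

beliefs = {"bus_leaves_soon", "mary_is_here"}
desires = [
    ("catch_bus", 5.0, {"bus_leaves_soon"}),
    ("chat_with_mary", 2.0, {"mary_is_here"}),
    ("go_swimming", 8.0, {"pool_is_open"}),  # precondition not believed
]
intention = deliberate(beliefs, desires)  # "catch_bus"
```

Note how the highest-value desire overall ("go_swimming") is never adopted, because the agent lacks the supporting belief: goals and beliefs jointly determine the decision, as argued above.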

Let us consider, for example, one of the typical cognitive-science architectures as used by an economist: McFadden’s (1999) schema of the decision process, shown in Fig. 1.
https://static-content.springer.com/image/art%3A10.1007%2Fs11299-006-0015-1/MediaObjects/11299_2006_15_Fig1_HTML.gif
Fig. 1

The multiple connections between single elements determining a choice

As we can see, in this model “affect” affects the decision process directly (by altering its correct procedure), and indirectly, by modifying perception and beliefs or by modifying motives (goals and their values and urgency).

The model is nice but still rather vague: the conceptual and processing relationships between “attitudes,” “motives,” and “preferences” are not so clear. The model is also incomplete, since it suggests that action is always the result of a true decision based on beliefs and preferences. The possibility of actions without decisions, of impulsive or merely reactive or automatic behaviours, is set aside.

In fact, it is very important to recognize that humans make rational decisions only sometimes, and obviously within a limited and bounded rationality. That is to say, sometimes agents are involved in true decisions, based on explicit reasoning, although defective and biased by all the well-known human biases in reasoning, estimating, risk perception, probability, and framing. In these cases agents can use different approaches and strategies (for example non-compensatory mechanisms), they can evaluate alternatives (pros and cons, costs and benefits, etc.), and their choice can also be affected by emotions that alter the cognitive processing.

But at other times there is no true decision at all. In some cases the action is not the result of a true deliberative process based on prediction, evaluation, balance, etc. In some cases there is no real choice, or the choice is at a merely procedural level: the behaviour can be the result of following routines, procedures, prescriptions, habits, or scripts, or of conformity, imitation, and recipes for already solved problems. Since the subject who makes a choice in this kind of situation does not face a “problem,” we can claim that she does not take a decision.

In addition, the behaviour can be just the result of a simple reactive device: some “impulse” elicited by an intense emotion, or some production rule (of the condition-action type) that has been contextually activated and executed.
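The three routes to action just described (simple reaction, procedural rule-following, and true deliberation) can be sketched as a layered dispatcher. This is a toy example of ours; the particular rules and scripts are invented.

```python
REACTIONS = {"fire": "escape"}              # condition-action impulses
SCRIPTS = {"morning": ["dress", "coffee"]}  # habits / stored routines

def act(situation, options=None):
    """Select behaviour through three layers, cheapest first."""
    if situation in REACTIONS:       # reactive: no decision at all
        return REACTIONS[situation]
    if situation in SCRIPTS:         # procedural: recognized, not decided
        return SCRIPTS[situation]
    # deliberative: a true decision, comparing the value of each option
    return max(options, key=options.get) if options else None

act("fire")                                     # reaction, no deliberation
act("morning")                                  # the stored routine
act("job_offer", {"accept": 3, "decline": 1})   # deliberation over values
```

Only the last call involves anything like a decision in the RDT sense; the first two bypass deliberation entirely, as the text argues.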

Loewenstein (1996) addresses this problem in a well-argued paper about decision and emotions, where he criticizes the anticipatory role of emotions within the decision process and proposes a model in which emotions directly lead to behaviour. Although Loewenstein’s idea captures some real mechanisms of emotional interference, the theory he proposes should not be seen as competing with McFadden’s. In our opinion they are complementary theories.

By adopting a richer cognitive architecture, it is possible to coherently assemble all these emotional effects on decision-making in a single model. Such a model will be able to capture the differences between behaviours due to different processes in the decision-maker’s mind, and can explain how all those layers and mechanisms compete and coordinate with each other.

3.4 How emotion changes the decision process

The relationship between emotions and decision processes is quite complex, because both are multifaceted and it is very difficult to reduce them to smaller, more manageable components. We can attempt to synthesize the most relevant ways in which emotion enters the decision process, taking care to avoid any kind of reductionism or excessive simplification. We will analyse cases where emotions change just the way a decision is taken; then we will turn to the core components of decisions, i.e. the agent’s goals; and, finally, we will consider the case where the decision process is bypassed altogether by emotions.

Sometimes emotions alter decision resources, processes, and heuristics. We can say that only the means are different, whereas the core of the decision is not affected by the emotion. For example, the time required by the process can change: an emotion can shorten the deliberation time or, on the contrary, lengthen it. In the latter case the emotion acts as a signal that something important is happening, so that the subject wants to spend more time reasoning to get the best solution.

Very often, an emotion modifies knowledge accessibility: some items are more available while others are not, as experimental results have demonstrated. Bower and Cohen (1982) showed that the emotional state works as a filter for the subject’s attentional focus, and that it also modifies the accessibility of items stored in long-term memory, making it easier to retrieve past items whose “emotional features” are similar to the current emotion. This kind of change results in an overall modification of the process, making it either smoother or harder, depending on the changes in knowledge accessibility.

All researchers in the decision-making domain agree that emotion introduces several biases into reasoning and judging processes, taking the decision process very far from the RDT model. The list of heuristics and biases affecting human reasoning is too long to be covered here (see Bower and Cohen 1982, and Kahneman et al. 1982 for a complete introduction to the problem), but we want to mention two of the main ones usually linked to emotions: the “framing effect,” which leads people to focus on negative or on positive consequences, thus losing a wider perspective; and the effect of emotional processes on risk perception, which shifts the threshold of acceptable risk towards one or the other extreme of the continuum.

Feeling an emotion can also alter the agent’s goals, thus modifying the decision set. As we said, we can describe the effect of emotions on decision-making in terms of the goals taken into account in the decision balance. This means considering additional incentives, since in the final balance a goal achievement counts as a gain whereas a frustration represents a loss. An emotion can modify the goals under consideration in three ways. First, emotions can be goals per se or, better, feeling or not feeling an emotion can be a goal (for example feeling excited, or avoiding regret), i.e. it is part of the subject’s utility function. Second, emotions can simply activate new important or urgent goals and put them into the “decision box”: goals like “escaping” or “biting the other” can be activated as emotional “impulses” and enter the decision process alongside previous intentions or desires. In this way an emotion gives rise to new priorities and preferences, perhaps (but not necessarily) overcoming pre-existing goals or replacing them in the decision-making process.

Finally, an emotion can directly elicit a behaviour without any real decision and balance among different goals and different values: the simple activation of a fixed behavioural sequence (like in response to terror), or of a high priority goal is not subject to any deliberation but is just executed (like: fire → alarm → panic → escaping!).
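The three emotional impacts just listed can be summarized in a toy sketch. All names, weights, and the intensity threshold below are our own illustrative assumptions, not parameters proposed by any model.

```python
def decide(goals, emotion=None):
    """goals: dict mapping goal name to value. emotion (optional): dict
    with 'name', 'intensity', 'impulse', and 'activated_goals'."""
    goals = dict(goals)
    if emotion:
        if emotion["intensity"] > 0.8:   # (3) impulse: deliberation bypassed
            return emotion["impulse"]
        goals.update(emotion["activated_goals"])   # (2) new goals enter the box
        goals["avoid_" + emotion["name"]] = 2.0    # (1) the emotion itself is a goal
    return max(goals, key=goals.get)

calm = decide({"finish_report": 3.0, "go_home": 1.0})   # ordinary deliberation
mild_fear = {"name": "fear", "intensity": 0.5,
             "impulse": "escape", "activated_goals": {"check_exit": 4.0}}
anxious = decide({"finish_report": 3.0}, mild_fear)     # new goal wins the balance
terror = decide({"finish_report": 3.0},
                {"name": "fear", "intensity": 0.95,
                 "impulse": "escape", "activated_goals": {}})  # no balance at all
```

At low intensity the emotion only reshapes the decision set; above the threshold it short-circuits deliberation entirely, mirroring the fire → alarm → panic → escaping sequence.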

One might represent these different emotional impacts as shown in Fig. 2. This seems to be the minimal articulation of a model accounting for the main impacts of emotions, given that the problem is a very wide-ranging one and that it is not possible to arrive at a single, simple solution.
https://static-content.springer.com/image/art%3A10.1007%2Fs11299-006-0015-1/MediaObjects/11299_2006_15_Fig2_HTML.gif
Fig. 2

The proposed model of emotions and decision-making

3.5 Goals and emotions

For a better understanding of the emotional impact on decision-making, we also have to take time factors into account, and more precisely the temporal relation of the emotional response to the decision process. We have to distinguish between current and anticipated emotions, each with a distinctive impact on the goals and beliefs supporting the decision-making process.

Looking at the figure shown in the previous paragraph, we can see that, according to our framework, emotions can play a role in decision-making both by affecting the reasoned decision process and by changing behaviour without being processed within the decision-making at all. The former idea follows, among others, the idea of “somatic markers” (Damasio 1994), which, in our view, can be considered a kind of emotional weight in the decision process; the latter, as we argued above, is a mechanism which must be taken into account as a complement to the former.

In every decision in which an emotive state is involved, it is possible that, alongside a reasoning process on beliefs and goals driven by the intervention of emotions, a more direct emotional influence can modify the decision. As shown in the figure, by “more direct” we mean an influence that changes the action that will be taken without an explicit, although fast, modification of beliefs and goals. In this way, the output of the decision-making process (the action) can differ from the one that would have been chosen if no emotional state were involved, but this is not necessarily due to belief revision or goal adoption. Separating this mechanism from the one concerning a reasoned process is not only theoretically well founded, thanks to psychological studies both on non-reasoned emotional reactions (Gigerenzer et al. 1999) and on responses to fear (LeDoux 1996), but also methodologically convenient, since it makes it possible to focus on different aspects of the decision process.

Since we have already argued that including different goals in decision-making processes is the right way to explain human choice and behaviour, here we want to analyse in more detail part of the reasoned and processed influence of emotions on decisions. Namely, we are going to propose an analysis of how emotions and goals relate to each other.

3.5.1 Current emotions

We call “current emotions” those emotions that the subject feels while he is taking the decision; each is a “here and now” emotion, which has all the classical emotional features. This means that a current emotion, apart from its specific qualities, modifies goals’ values (iv), changes the beliefs supporting the decision (ii) and the heuristics (iii) and, as shown above, can also alter the time required for the decision (i).

This current emotion can be external to decision-making and unrelated to it, or it can be a consequence of the decision process, i.e. a “by-product” of the decision itself that feeds back into it. In the former case, the agent is affectively activated by something that does not belong to the decision process: for example, he can be angry at a traffic jam and, once at work, take a decision shaped by his anger, even though the emotion is completely unrelated to the decision he has to take. In the latter case, the decision itself or its sub-processes (like a prediction or a conflict) generate an emotion, maybe a positive one like happiness (due, for example, to the awareness of taking the right decision), or a negative one, such as anxiety over a hard conflict, or guilt or shame if the subject is not proud of his decision but goes on anyway.

3.5.2 Anticipated emotions

Another category of emotions useful in understanding their connection with goals is that of “anticipated emotions.” The fact that these emotions can be forecast is what they have in common with expected emotions. But it is important not to confound the two, because they are basically distinct and their effects on decision-making can be substantially different.

By “anticipated emotions,” in fact, we mean those emotions that an agent can foresee feeling if she tries to achieve some particular goals of hers instead of others. The ability to forecast an emotion in most cases derives from previous experience (learning), and this is what makes anticipation a very interesting mechanism in the decision process.

In fact, knowing what she will feel if she chooses a goal can make her feel the same emotions at the very moment of decision. So she can feel, while she is deciding, what she forecasts she would feel in the future, when the decision is made, just as in the act of “desiring” one imagines sensations of the future event. What happens in this way is that the decision is not directly affected by calculating the value of the forecasted emotion, but only indirectly. The activation of some goals, or the alteration of values in the agent’s mind, is driven by emotions felt at the moment as an effect of a sort of empathic identification with herself in the future, a self-empathic experience. A similar process can also be driven, in a social context, by identification with the other in the future, which makes the agent feel an emotion while she is making a decision (Gleicher et al. 1995). The two empathic identifications can be simultaneously in act and can both contribute to making the emotion arise. Although considering this kind of emotion brings with it the difficulty of defining what empathy actually is and how it works in the performance of pro-social behaviour, it can be very useful in analysing how goals are valued and chosen. It has been shown, in fact, that understanding others’ emotions implies experiencing such emotions to some extent (Bargh 1994). In our view, this experience often affects the decision process, and it does so in ways quite different from emotions that are merely expected. Anticipated emotions can be seen as lying between expectations and current emotional states, and must be studied as such.

Anticipated emotions, as we have defined them, should not be mixed up with anticipatory or future-based emotions: emotions that are felt now but are about future events, like hope or fear. One feels hope now not because she anticipates and imagines the hope that she will feel later, but because she is considering a possible pleasant event. Analogously, one feels fear because she forecasts a possible harm or loss that will produce pain, sadness, or grief.

3.5.3 Motives and emotions in interaction

Let us give an example of how these different kinds of emotions can affect the decision process. It will be useful not only to show how the categorization we have just proposed can give direction to the study of human decisions in particular contexts that the traditional simplified approach to mind structure and decision process cannot explain, but also to avoid the fallacy of treating some characteristics of human preferences as common errors of cognition, even when they can lead to better solutions of negotiation games and to more successful interactions (Brothers 1990).

The decision-making process we are going to use as an example is the one agents perform every time they find themselves involved in a particular interactive decision (Roth 1995). Namely, we are referring to what game theory has called the “ultimatum game.” The basic version of the game is a two-player interaction in which the first player can propose a division of a given amount of a certain good to the other player. The second player can then accept, in which case the good is divided as proposed by the first player, or reject, in which case both players get nothing. The perfect-equilibrium prediction for such a game is that the first player will ask for and get all (or almost all) of the good.

One of the most important problems studied by the joint work of behavioural studies on negotiation and game theory is the common deviation of players’ choices in the ultimatum game from the classical game-theoretical prediction (Bazerman et al. 2002).

Numerous experiments, including minor variants of this game, have found that individuals do not behave in the way that economic and game-theoretical models predict (there is a vast literature on this issue; for a survey see Binmore et al. 1995).

Let us first see how our view of complex and numerous motivations can fit within an interpretation of this evidence. Indeed, these results make it easy to claim that negotiating may be an activity that systematically gives negotiators motivations distinct from simple self-outcome maximization, and that players engage in trade-offs between some underlying interpersonal preferences and strategic considerations.

More precisely, if subjects believed that rewards were distributed merely at random, so that somebody would receive 1 dollar while somebody else would receive 9 or 99 dollars, etc., they would accept 1 dollar. But in a regular ultimatum game, the probability of a refusal increases with perceived unfairness: the larger the difference between what you take for yourself and what you offer to me, the greater the probability of refusal (the total amount being constant). The probability that I refuse 5 dollars is greater if you take 95 dollars than if you take 45, which in turn is greater than if you take 15 dollars.
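This pattern can be sketched numerically. The response function and its parameter below are our own toy assumptions, not an empirical fit to any experimental data.

```python
def reject_probability(offer, total, fairness_weight=1.0):
    """Toy responder model: rejection probability grows with the gap
    between the proposer's share and the responder's share."""
    unfairness = max(0.0, (total - 2 * offer) / total)  # 0 for an even split
    return min(1.0, fairness_weight * unfairness ** 2)

# Refusing a fixed 5-dollar offer when the proposer keeps 95, 45, or 15:
p95 = reject_probability(5, 100)  # proposer keeps 95
p45 = reject_probability(5, 50)   # proposer keeps 45
p15 = reject_probability(5, 20)   # proposer keeps 15
```

Under this sketch p95 > p45 > p15, reproducing the ordering described in the text: the same 5-dollar offer is refused more often the larger the share the proposer keeps.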

It is easy to hypothesize that goals other than the maximization of one’s own monetary reward are actually involved in this decision: in order to predict the behaviour of the agents (both the one who makes the proposal and the one who answers it), it is important to take into account goals such as “punishing the other,” “not being offended,” “fairness,” and “justice.”

Since these other goals carry their own value, the decision of an agent who makes a proposal particularly generous to the other, or of an agent who rejects any offer merely above zero, may be perfectly rational. In fact, as can be inferred from what we said in previous sections, subjective motives cannot be “irrational” in principle: they may be unfortunate, noxious, unfit, unsuccessful, or maladaptive for survival or reproduction, but not subjectively “irrational.” Since those motivations are more important to the agents than money, they are perfectly rational in refusing the dollars (the amount of refused dollars would be a sort of measure of the importance, the worth, of those goals for the agents3). The refused monetary amount is even a measure of the importance/value of the other, prevailing, non-monetary motive.

First, such behaviour cannot be subjectively “irrational” because the goal itself is not irrational (rationality is not a moral theory about human ends; it is a theory of effective means-end reasoning). Second, we need to stress that these goals can be ends, and not just means.4

It is to explain this latter point that we can reason about emotions and their roles in negotiations of the kind we are examining.

In particular, let us focus on expected and anticipated emotions. Although it is important to consider what we called current emotions as well, they are so deeply related to and dependent on the context that it is not easy (if possible at all) to generalize their influence to ultimatum game interactions in general. Instead, what can play a fundamental role in determining the actions of both the player who proposes an offer and the one who answers, presumably in the same way in most contexts, are those emotions that are expected or anticipated.

When the agent has to propose an offer, in fact, she can look forward and imagine herself in the future, when the decision has been made. In this way, she can find herself choosing an offer driven by one or more goals generated by an expected emotion. A good example here is regret. If she thinks that in the future she could feel regret for the decision taken, should the proposal not be accepted, one of her current active goals will be exactly that of avoiding it. The decision she takes can thus be driven by goals such as “making the other satisfied,” so as to put him in a condition to accept and herself in a condition to achieve her goal of “having the proposal accepted.” Note that in this case the adoption of the other’s goals (such as “fairness”) can easily be implied, and this makes things even more complex, since knowledge of the other’s goals (and of his concept of fairness, for instance) can be implied as well.5

On the other hand, if we assume that one or more of the goals of the agent making the proposal are fairness, justice, or even the other’s satisfaction, it is possible to hypothesize that she will anticipate the emotions she could feel if these goals are or are not achieved. This anticipation, as we said when defining anticipated emotions, will make her feel now what she could feel in the future, and will thus make the decision affected by that particular empathically anticipated emotion. It is interesting to highlight that in an interaction the anticipation can concern both the agent’s own emotions and the other’s, giving, in this case too, a crucial role to the process of goal adoption.

To conclude, these kinds of emotions often interact with one another, and the possibility that they affect decisions simultaneously makes the importance of a complex model of decision-making, such as the one we have proposed, even clearer.

4 Concluding remarks

In this paper we have argued in favour of a richer, more complete, and more explicit representation of the mind of a social actor or agent, in order to account for the economic, strategic, and organizational behaviour of social actors. We supported a “cognitive program”—going well beyond the “epistemic program”—aimed at making explicit the players’ goals and motives (and the perceived partner’s motives) as part of the game they are playing. The true ground of “utility” is goal-directed action, i.e. action driven by motives and objectives. More psychological and qualitative aspects—an explicit account of the agent’s multiple and specific goals (from which alone agents’ competition or cooperation follows)—must be reintroduced into the very model of the economic or social actor’s mind. The inputs of any decision process are multiple conflicting goals (and beliefs about conflict, priority, value, means, conditions, plans, risks, etc.).

However, taking into account other, more realistic human motives—although fundamental—is not enough. A better theory of human individual and social behaviour does not depend only on a broader spectrum of human incentives.

Focusing on motivation theory and on various mechanisms governing behaviour does not coincide with the very trendy issue of dealing with emotion in rational models. We have in fact argued against a simplistic view that tries to add emotion to the reductive model of “rational mind” to make it more human-like and realistic. Our claim is that a more articulated model of cognitive process is also needed for understanding the various ways in which emotion affects the decision process and behaviour. The simpler the model of the decision process, the less articulated the ways in which emotion can affect it.

In sum, a new complex but abstract cognitive “architecture” is needed both for a new micro-foundation of human social behaviour and for dealing with emotion.

More specifically, RDT should be developed in at least four directions:
  • Introducing an explicit and articulated theory of goals: motives and instrumental sub-goals, of different functional kinds (desires, duties, needs, ideals, intentions, etc.) and of different natures (social approval and sense of justice, aesthetic needs and desire for success and power, sexual drive and hate, etc.). Thanks to such a theory, many apparently “irrational” behaviours can be explained even within the RDT framework.

  • Relaxing overly strong conditions on rational decision by considering more reasonable, evidence-grounded beliefs and reason-based choices, and by making more realistic assumptions about agents’ knowledge and cognitive capacities.

  • Postulating different mechanisms (not only “rational” decision) for governing human behaviour, and also different decision mechanisms or heuristics.

  • Introducing emotions in a structural and articulated way (not as a substitute for a good explicit theory of motives), including how emotions affect the cognitive decision process and how they can simply bypass it.

Footnotes
1

We avoid the term “preferences,” frequently used as a synonym of “motives,” “goals,” etc. This use of “preferences” is seriously misleading. Preferences are not motives; they presuppose a set of motives (goals, desires, needs, ...) and simply represent the fact that, for the subject, one motive is more important, has more value, than another.

 
2

Something close to Damasio’s “somatic markers” but less “embodied.”

 
3

Of course, it is also reasonable to predict that the greater the absolute amount/value (5 or 20 dollars), the less probable the refusal will be. A truly interesting issue to study is precisely the interaction between these two factors, in order to develop formal and quantitative models of it.
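One toy form such a quantitative model could take is sketched below. This is purely our illustrative assumption, not an established model: refusal tendency rises with the unfairness of the offered share but falls as the absolute amount at stake grows, since refusing then costs the responder more.

```python
# Toy sketch of the share/stake interaction: a logistic refusal
# tendency driven up by unfairness (distance below an even split)
# and down by the absolute amount the responder would forgo.
import math

def refusal_probability(offer, total, unfairness_weight=6.0, stake_weight=0.3):
    unfairness = max(0.0, 0.5 - offer / total)  # how far below an even split
    drive = unfairness_weight * unfairness - stake_weight * offer
    return 1.0 / (1.0 + math.exp(-(drive - 1.0)))

# Same 20% share, different absolute stakes:
print(refusal_probability(1, 5))     # 1 out of 5: refusal quite likely
print(refusal_probability(20, 100))  # 20 out of 100: refusal unlikely
```

Under these assumed weights, the same relative share yields very different refusal probabilities at different stake sizes, which is exactly the interaction the footnote suggests studying.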

 
4

On the contrary, an instrumental goal (a means) can be irrational when it is based on some irrational belief (ill-grounded, unjustified, without evidence, implausible, or contradicting stronger beliefs) or derives from some wrong planning or reasoning process.

 
5

Although we will not discuss this in depth here, in our view a theory of goal adoption is fundamental to understanding this kind of interaction.

 

Acknowledgments

The last part of this work has been developed within the European Project Mind RACES—EC’s sixth Framework Programme, Unit: Cognitive Systems—No. 511931 (the section on anticipation and emotions) and the European Project HUMAINE (the section on Emotions and Cognitive Architecture).

Copyright information

© Fondazione Rosselli 2006