Introduction

Depending on the context, humans perceive the very same outcome of a decision as favorable or unfavorable (Brandts & Schwieren, 2007; Carpenter, 2018; Kahneman & Tversky, 1979, 1984). The intriguing observation that rephrasing decision-relevant information affects decision-making itself is referred to as framing effects. The concept of decision framing was coined by Tversky and Kahneman (1981), who used the term to point out that the decision process is determined not only by the norms, habits, and personal characteristics of the decision-maker, but also by the formulation of the decision-relevant scenario. Framing effects have been shown in numerous studies and across different domains, such as insurance (Akaichi et al., 2020), finance and investment (Barberis et al., 2006; Kumar & Seongyeon Lim, 2008), moral judgments (Capraro & Vanzo, 2019), charity and fundraising (Chang & Lee, 2010; Chou & Murnighan, 2013; Das et al., 2008), public goods (Dufwenberg et al., 2011), health issues (Latimer et al., 2007), and social settings (Andreoni, 1995; Brandts & Schwieren, 2007; Eriksson et al., 2017; Gu et al., 2019; Story et al., 2015); for an overview see, e.g., Carpenter (2018). However, despite having been investigated since the early 1980s, the cognitive mechanisms underlying framing effects are still poorly understood.

A frequently studied method to induce framing effects is the manipulation of valence, i.e., presenting a given outcome either positively (as a gain) or negatively (as a loss) (Levin et al., 1998). Valence framing has been shown to bias decision-making in a variety of different contexts, such as political attitude (Bizer et al., 2011), prosociality and social preferences (Capraro & Vanzo, 2019; Chowdhury et al., 2017; Grossman & Eckel, 2015; List, 2007), altruism (Andreoni, 1995), or reward and punishment (Windmann et al., 2006). Evidently, such valence framing effects are also relevant in applied settings, including fundraising (Chang & Lee, 2010; Chou & Murnighan, 2013; Das et al., 2008), investment banking (Barberis et al., 2006; Kumar & Seongyeon Lim, 2008), and insurance choice (Akaichi et al., 2020).

With regard to prosocial decision making, Andreoni (1995) examined how framing effects influence financial contributions by comparing a standard public good game (positive frame condition: giving to the public good) with a negative frame condition (taking from the public good). According to the results, cooperation, i.e., tokens contributed to the public good or not taken from the common pool, was higher in the positive frame condition than in the negative frame condition. This finding was interpreted as evidence that positive feelings when giving to the public good (“warm glow”) have a stronger effect than negative feelings when taking from the public good (“cold prickle”; Andreoni, 1995). Inconsistent with these results, other studies reported no average differences between take and give frames in unidirectional decision-making paradigms, such as the dictator game, and concluded that social framing has little or no effect on participants’ behavior (Dreber et al., 2013; Goerg et al., 2019).

Furthering this research, studies have investigated factors that may influence the strength of framing effects (Cassotti et al., 2012; Chowdhury et al., 2017). For example, investigating the effect of emotional context, Cassotti and colleagues (2012) showed an increased tendency for risky financial decisions in a loss frame compared to a gain frame. However, these risky decisions were reduced in the loss domain by previously induced positive emotions: the presentation of gain or loss no longer influenced subjects’ decision-making after they had been exposed to emotionally pleasing images. Investigating gender differences, Chowdhury and colleagues (2017) found that males allocated more money to others in a give frame (i.e., if they were asked how much they wanted to allocate to the other), while females allocated more money in a take frame (i.e., if they were asked how much they wanted to take for themselves, leaving the rest to the other). Further studies have shown that framing effects disappear if points are allocated to charitable organizations instead of individuals (Eckel & Grossman, 1996; Grossman & Eckel, 2015).

The respective results are based on analyses of average response times (Cassotti et al., 2012), response frequencies (Cassotti et al., 2012; Dreber et al., 2013; Goerg et al., 2019), or the average amount of allocated goods (Andreoni, 1995; Capraro & Vanzo, 2019; Chowdhury et al., 2017; Grossman & Eckel, 2015), and thus provide initial insights into framing effects and into factors that may alter the effects of valence framing on overt decisions. However, reaction times and response frequencies represent an accumulation of various cognitive processes, including the efficiency of stimulus processing, response strategies (e.g., how cautiously participants respond), and how much participants bias their response towards one decision option (Stafford et al., 2020; White et al., 2009; Zhao et al., 2019). Give and take frames could affect all of these processes, or only some of them. In the latter case, framing effects may be overlooked in analyses of reaction times and response frequencies, because framing-related changes in one decision component may be “buried” under the effects of other, unaffected components. Supporting this point, there are studies showing biases in individual decision components that were not detected in average reaction times or response frequencies (Zajkowski et al., 2022; Zhao et al., 2019). Moreover, participants may choose the prosocial option because it is more socially acceptable, even if valence framing induces a bias towards egoistic decisions at the processing level (a bias that may reveal itself when social desirability concerns are low).

In our study, we aimed to investigate how give and take frames affect overt behavior and individual components of the decision process. To do so, we used linear models to analyze average reaction times and response frequency, and applied Drift-Diffusion-Models (DDMs; Ratcliff, 1978; Ratcliff & McKoon, 2008; Voss et al., 2004; Voss et al., 2015) to analyze different components of the decision process separately.

Originally, DDMs were mainly used to model memory retrieval and have since been applied to many basic perceptual and memory tasks (Ratcliff, 1978; Ratcliff et al., 2016), thereby validating the interpretation of the parameters. So far, only relatively few studies have used DDMs to investigate the different components of social decision processes (Chen & Krajbich, 2018; Ratcliff et al., 2016; Teoh et al., 2020).

In detail, DDMs assume that noisy information is accumulated over time until one of two decision options is selected, and they decompose this process into three components captured by the parameters z, a, and v (Forstmann et al., 2016; Ratcliff et al., 2016). The v parameter, called drift rate, captures the speed of noisy information accumulation in favor of one of the two choice options and reflects the efficiency of evidence accumulation. The boundaries of the decision process range from zero (lower boundary) to the parameter a (upper boundary); thus, the parameter a reflects the total amount of evidence that is required to distinguish between the two options. This parameter is interpreted as a measure of cautiousness: the larger the a value, the more time is needed to reach one of the two decision boundaries, provided identical task difficulty (Voss et al., 2004). The third parameter (z), called the starting point, captures the individual’s response bias before selecting a decision option (Chen & Krajbich, 2018; Mulder et al., 2012; White et al., 2018). If a person has an a priori preference for a specific decision, the relative starting point of the decision process is closer to the boundary of that favored option, and therefore less evidence needs to be accumulated to reach that boundary; consequently, more evidence is needed to reach the opposing boundary. When no prior decision bias exists (neutral position, z = .50), the starting point is equidistant from the two boundaries (Voss et al., 2004). Mulder et al. (2012) showed that perceptual decisions in a random dot task are biased by reward, captured by significant changes in the starting point of the decision process. In the domain of prosocial decision making, it has been shown that an individual bias towards prosocial decisions is mainly associated with an increase in the z parameter, reflecting a shift in the starting point of the prosocial decision process (Chen & Krajbich, 2018). These results suggest that cognitive biases can change the starting point of the decision process, i.e., the z parameter. It seems plausible, but has not yet been empirically investigated, that give and take frames induce a cognitive bias that alters the starting point of the decision process in an equivalent manner.
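
To make the role of the three parameters concrete, the following minimal simulation of a single diffusion trial illustrates how v, a, and z jointly shape choices and response times. This is an illustrative sketch, not the model code used in this study; the parameter values and all function and variable names are hypothetical.

```python
import numpy as np

def simulate_ddm_trial(v, a, z, t0, dt=0.001, noise_sd=1.0, rng=None):
    """Simulate a single drift-diffusion trial.

    v  : drift rate (speed of evidence accumulation towards the upper boundary)
    a  : boundary separation (lower boundary at 0, upper boundary at a)
    z  : relative starting point in [0, 1]; z = 0.5 is unbiased
    t0 : non-decision time in seconds (stimulus encoding and motor response)
    """
    rng = rng or np.random.default_rng()
    x = z * a                      # absolute starting point of the accumulator
    t = 0.0
    while 0.0 < x < a:             # accumulate noisy evidence until a boundary is hit
        x += v * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        t += dt
    choice = "upper" if x >= a else "lower"
    return choice, t + t0

# A starting point below 0.5 (here z = 0.42) makes lower-boundary responses more
# frequent and faster than with an unbiased starting point (z = 0.50).
rng = np.random.default_rng(1)
choices = [simulate_ddm_trial(v=1.5, a=1.8, z=0.42, t0=0.5, rng=rng)[0] for _ in range(1000)]
print("proportion of upper-boundary responses:", choices.count("upper") / 1000)
```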

Furthermore, the DDM accounts for interindividual differences in response style (White et al., 2009). Analyses based on averaged statistics, in contrast, cannot reveal whether give and take frames affect a priori beliefs (cognitive biases), the amount of evidence collected, or the speed of processing.

Another important component that is taken into account by the DDM is the speed-accuracy trade-off. Speed and accuracy are fundamental performance measures in any decision process; they are influenced not only by participants’ ability to respond quickly and accurately, but also by participants’ strategic decision to trade speed against accuracy (Stafford et al., 2020). It is important to keep in mind that task- and group-related as well as individual differences exist in participants’ positioning between speed and accuracy (Stafford et al., 2020). Considering these differences and separating the task- and group-specific components from the interindividual differences increases the sensitivity of the measurement method. The application of decision models such as the DDM thus allows the detection of effects that cannot be captured by classical measures such as accuracy or mean response times (White et al., 2009).

We hypothesized that we might find significant framing effects both in overt behavior and at the processing level. In this case, first, the regression models should reveal a significant effect of framing (give/take) on choice frequencies and reaction times. Specifying this effect, post hoc tests might show either a lower frequency and slower reaction times for prosocial decisions in the take frame compared to the give frame (Andreoni, 1995), or more prosocial decisions and faster reaction times in the take frame compared to the give frame, as reported for females by Chowdhury and colleagues (2017). Second, the DDM analyses should reveal differences in individual decision components, most likely the starting point of the decision process (z parameter), which has been shown to capture cognitive biases (Chen & Krajbich, 2018; Mulder et al., 2012; White et al., 2018). In more detail, a decrease of prosocial choices in the take frame (Andreoni, 1995) should be accompanied by a shift of the starting point towards the egoistic decision boundary, whereas an increase of prosocial choices in the take frame (Chowdhury et al., 2017) should be paralleled by a shift of the starting point towards the prosocial decision boundary.

Alternatively, given previous studies that observed changes in DDM parameters but not in overt behavior (Zajkowski et al., 2022; Zhao et al., 2019), it is also possible that the DDM analyses reveal a significant framing effect in the starting point of the decision process (z parameter), whereas the regression models reveal no significant differences in choice frequencies and reaction times between the two framing conditions. This would indicate that valence framing induces a bias at the processing level that is not captured by measures of overt behavior in our paradigms, probably due to the unspecific nature of the outcome measures (Stafford et al., 2020; White et al., 2009; Zhao et al., 2019) and/or social desirability effects.

Finally, it is possible that there is no significant framing effect on average choice frequencies, reaction times, or DDM parameters, indicating that the give and take frame manipulation in the current paradigm has no observable effect on decision processing or outcome.

To test these hypotheses, we used hierarchical drift-diffusion modeling (HDDM; Wiecki et al., 2013) in combination with a well-established binary prosocial decision task (Hein et al., 2016; Saulin et al., 2022). This task was presented in a give and a take frame using minimal, and therefore highly controlled, differences in the instructions. Participants were randomly assigned to one of two groups and made binary choices by allocating points (later converted into money) between themselves and another person. One allocation option favored the outcome of the other person (prosocial option) and the other allocation option favored the participants’ own outcome (selfish option; Fig. 1). Before each decision, one group of participants was asked how many points they would like to give to the other person (give frame group; “How much do you want to give?”). The other group of participants was asked how many points they would like to take for themselves (take frame group; “How much do you want to take?”). Thus, apart from one word (“give” in the give frame and “take” in the take frame), the task and the instructions were identical in both groups.

Fig. 1

Example trial of the resource allocation task

Note: After participants were asked “How much do you want to take?” (take frame, shown in this example; in German: “Wie viel möchten Sie nehmen?”) or “How much do you want to give?” (give frame; in German: “Wie viel möchten Sie geben?”), they chose between a prosocial option that favored points for the partner and a selfish option that maximized points for themselves. In this example trial, the participant chose the prosocial option, which favored the partner’s outcome at a cost to the participant (green box)

Exploratory study (laboratory)

Method

Participants

Previous evidence has shown that allocation decisions are influenced by the gender of the allocating person (e.g., Eckel & Grossman, 1998), the gender of the recipient (e.g., Saad & Gill, 2001), as well as by both simultaneously (e.g., Croson & Gneezy, 2009; Voit et al., 2021). Moreover, there is evidence for gender differences in framing effects on allocation tasks (Chowdhury et al., 2017). Considering these results, we controlled for gender effects by recruiting only females who interacted with another unknown female.

Sixty-two healthy women participated in the study. Due to technical problems, age information was only recorded for 49 participants (Mage = 22.90 years, s.e. = 0.83). Participants were recruited via flyers distributed at a German university between November 2018 and July 2019. The confederates were two female students trained to play their role in alternation. Participants received monetary compensation (a show-up fee plus a payout between EUR 3.00 and EUR 7.00 from two randomly chosen trials of the allocation task; see below). To exclude confounding factors associated with gender (Chowdhury et al., 2017), female deciders were paired with female recipients, and the anonymity of participants’ decisions was highlighted.

Ethical review

We obtained approval from the Ethics committee of the Department of Psychology, Goethe-University, Frankfurt am Main and obtained written informed consent from our participants.

Measures

Allocation task

The allocation task was identical in both groups. Participants were asked to repeatedly choose between two different distributions of points that each represented different amounts of monetary payoffs for themselves and the partner (Hein et al., 2016; Saulin et al., 2022).

Each decision trial (Fig. 1) started with a fixation cross (1000 ms) followed by the question (2000 ms) “How much do you want to take?” (take frame group) or “How much do you want to give?” (give frame group). Subsequently, the participants were presented with two possible distributions of points in different colors, indicating the participant’s potential gain and the potential gain for the partner (Hein et al., 2016; Saulin et al., 2022). The colors were counterbalanced across participants and groups. Participants were asked to choose one of the two distributions within 4000 ms by pressing the left or the right arrow key. The position of the two distributions was randomized across trials to minimize response biases due to motor habituation. A green box then appeared for 2000 ms around the distribution selected by the participant. If participants did not answer within 4000 ms, the trial was excluded from the analysis; this happened in 58 of 3720 trials (1.56%). One trial was excluded due to an extremely fast response time (70 ms). The allocation task was programmed with OpenSesame version 2.8 (Mathot et al., 2012).
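
For orientation, the trial sequence described above can be summarized schematically as follows; the timings are taken from the text, but the snippet is only a structural outline, not the OpenSesame implementation that was actually used.

```python
# Schematic event sequence of one trial of the allocation task (durations in ms).
TRIAL_SEQUENCE = [
    ("fixation cross", 1000),
    ("framing question (give or take frame)", 2000),
    ("choice screen with two point distributions", 4000),   # response window
    ("green box around the chosen distribution", 2000),
]

for event, duration_ms in TRIAL_SEQUENCE:
    print(f"{event:<45} {duration_ms:>5} ms")
```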

Procedure

The experiment was conducted at the Psychology Department of the Goethe-University, Frankfurt am Main, Germany. Upon arrival at the laboratory, participants were welcomed by the experimenter and then introduced to another participant (a female confederate) who was already waiting in the room. After the participant had signed the consent form, the experimenter explained that the following task involved the role of a decision-maker and the role of a receiver and that the roles would be randomly drawn before starting. Next, the participant and the confederate played a manipulated lottery (drawing matches) that ostensibly determined the role for both persons in the following task. The drawing of the matches was manipulated in such a way that the participant always drew the short match and thus was assigned the role of decision-maker, while the confederate was assigned the role of receiver. Furthermore, it was explained that the receiver would work on different tasks in a separate room without being aware of the decision-maker’s decisions. The experimenter emphasized that the decision-maker and the receiver would not meet again after the experiment in order to minimize potential reputation effects. Before starting the allocation task, participants read the rules on a three-page instruction screen and completed four practice trials that were not included in the analysis. Each participant then performed 60 decision trials. At the end of the experiment, one of the distributions chosen by the participant was randomly selected and paid out in addition to the show-up fee.

Data Analysis

Behavioral data were analyzed with RStudio Version 1.1.463 (RStudio Team, 2020), R Version 3.6.0 (R Core Team, 2019), and Python (HDDM 0.8.0; Python Version 3.7.6; Jupyter Notebook server 6.0.3; Van Rossum, 2007; Wiecki et al., 2013).

Comparing age between the take frame and the give frame groups revealed no significant difference (Mage = 22.90 years, s.e. = 0.83, B = -0.49, s.e. = 0.28, p = .09).

Regression analyses. Linear regressions were performed for all of the following tests using the R package “stats” (R Core Team, 2019). For the study comparison, we ran a linear mixed model using the “lme4” package (Bates et al., 2015). We used the “car” package to estimate the fixed effects of the linear mixed models (Fox & Weisberg, 2019). To estimate the effect sizes of the results obtained from linear models, we used the R function “summary” (R Core Team, 2019). For the linear mixed model, the marginal R² (R²m), an estimate of the proportion of variance explained by the fixed factors, was calculated using the R package MuMIn (Bartoń, 2019). Results were visualized with the “tidyverse” package (Wickham et al., 2019) and the “ggpubr” package (Kassambara, 2020). All continuous variables in our regressions were z-scored. The frequency of prosocial and selfish decisions and the reaction times were included as dependent variables. Group was entered as a categorical predictor with two levels: give frame and take frame.
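
As a rough illustration of these group comparisons, the sketch below fits the same kind of model in Python with statsmodels (the analyses reported here were run in R); the data frame, column names, and values are entirely hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical subject-level data: one row per participant, not the original data set.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": ["give"] * 31 + ["take"] * 31,
    "mean_rt": rng.normal(1500, 200, 62),        # mean reaction time in ms
    "n_prosocial": rng.integers(30, 61, 62),     # number of prosocial choices (of 60)
})

# Continuous variables are z-scored before entering the regression
df["rt_z"] = (df["mean_rt"] - df["mean_rt"].mean()) / df["mean_rt"].std()

# Group (give vs. take frame) as a categorical predictor of reaction time;
# decision frequencies are analyzed with the same model structure.
rt_model = smf.ols("rt_z ~ C(group)", data=df).fit()
print(rt_model.summary())                        # beta weights, s.e., p-values, R²
```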

Drift-Diffusion Modeling. We chose the DDM because of its small but tractable number of crucial parameters (Bogacz et al., 2006). We used hierarchical drift-diffusion modeling (HDDM; Vandekerckhove et al., 2011; Wiecki et al., 2013), a version of the classical drift-diffusion model that exploits between-subject and within-subject variability using Bayesian parameter estimation methods and is therefore well suited for relatively small sample sizes. The analyses were conducted using the Python implementation of HDDM (Wiecki et al., 2013). To test our a priori assumption that a potential cognitive bias should be represented by changes in the starting point (z parameter), and given that we had no a priori hypotheses regarding the other parameters, our main analyses were based on a model that allowed the z parameter to vary between the groups (i.e., the give and take frame) and estimated the other parameters (i.e., the v parameter, the a parameter, and the non-decision time (t0)) across the two groups. When the starting point is far from the boundary of the prosocial option, the whole distribution of prosocial responses is shifted to longer RTs than when the starting point is equidistant between the two boundaries, with the slowest responses (e.g., the 0.9 quantile) slowing much more than the fastest responses (the 0.1 quantile) (Ratcliff & McKoon, 2008). As a consequence, a prosocial decision becomes less likely than a selfish decision. The probability of “mistakenly” giving a selfish response also increases, and more information must be accumulated to choose the prosocial option than to choose the selfish option. Since the HDDM is a hierarchical Bayesian parameter estimation method, effect sizes are not specified as they would be in the frequentist framework (e.g., R²). Instead, we directly specify the probability that the parameter in one condition is higher than in the other condition (Makowski et al., 2019). This procedure is also recommended by the authors of the HDDM (Wiecki et al., 2013). Additionally, we ran a full model that allowed all three parameters (z, v, and a) to vary between the groups (i.e., the give and take frame).
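
The main model described above can be specified in HDDM roughly as follows. This is a sketch under assumptions: the file name and the “frame” column are placeholders for the trial-level data (HDDM expects columns named “rt” and “response”), and the sampler settings are illustrative rather than the ones actually used.

```python
import hddm

# Placeholder trial-level data: 'rt' in seconds, 'response' (1 = prosocial, 0 = selfish),
# 'subj_idx' identifying participants, and 'frame' coding the group (give vs. take).
data = hddm.load_csv("allocation_task_trials.csv")

# Main model: the starting point z varies between frames; v, a, and the
# non-decision time t are estimated jointly across the two groups.
model = hddm.HDDM(data, include=["z"], depends_on={"z": "frame"})
model.find_starting_values()      # find a good starting point for the MCMC chains
model.sample(5000, burn=1000)     # illustrative sampler settings

# Compare the group posteriors directly (Wiecki et al., 2013)
z_give, z_take = model.nodes_db.node[["z(give)", "z(take)"]]
print("P(z_take < z_give) =", (z_take.trace() < z_give.trace()).mean())

# Full model: z, v, and a are all allowed to differ between frames
full_model = hddm.HDDM(data, include=["z"],
                       depends_on={"z": "frame", "v": "frame", "a": "frame"})
```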

To evaluate the model fit, we conducted posterior predictive checks by comparing the observed data with 500 data sets simulated by our model, a method that has been particularly recommended for HDDMs (see Table S1 for quantile comparison and 95% credibility; Wiecki et al., 2013). Moreover, model convergence was checked by visual inspection of the posterior estimation chains, as well as by computing the Gelman-Rubin convergence statistic (all values < 1.01; Gelman & Rubin, 1992). For the parameter comparison, the posteriors were analyzed directly, as recommended by Wiecki et al. (2013).
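
A sketch of the corresponding checks with HDDM’s built-in tools, assuming the “model” and “data” objects from the fitting sketch above; the number of chains and samples are again illustrative.

```python
import hddm
from hddm.utils import post_pred_gen, post_pred_stats

# Posterior predictive check: simulate data sets from the fitted model and compare
# their summary statistics (e.g., RT quantiles) with the observed data.
ppc_data = post_pred_gen(model, samples=500)
print(post_pred_stats(data, ppc_data))

# Convergence check: fit several independent chains of the same model and compute
# the Gelman-Rubin statistic for each parameter (values close to 1 indicate convergence).
models = []
for _ in range(3):
    m = hddm.HDDM(data, include=["z"], depends_on={"z": "frame"})
    m.find_starting_values()
    m.sample(5000, burn=1000)
    models.append(m)
print(hddm.analyze.gelman_rubin(models))
```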

Results

Comparing the reaction times and frequencies of prosocial and selfish decisions between the take frame and the give frame group revealed no significant difference (for results see Tables 1 and 2).

Table 1 Mean (M) and standard errors (s.e.) of reaction times (in ms) separately for all decisions, for the prosocial decisions, and for the selfish decisions in both the take and give frames in the laboratory study. β-weights, s.e., p-values and R² for the comparisons between groups are shown
Table 2 Mean (M) and standard errors (s.e.) of decision frequency (absolute values) separately for all decisions, for the prosocial decisions, and for the selfish decisions in both the take and give frames in the laboratory study. β-weights, s.e., p-values and R² for the comparisons between groups are shown

To test for framing effects with the HDDM, we compared the starting point (z parameter) between the give frame and the take frame group. The comparison of the posteriors (Wiecki et al., 2013) revealed a high probability of a lower z parameter in the take frame group (Fig. 2, blue) compared to the give frame group (Fig. 2, orange), ztake_frame (M = 0.42, s.e. = 0.006), zgive_frame (M = 0.50, s.e. = 0.006), P(z take frame < z give frame) > 0.99. Across groups, the mean value of the v parameter was M = 1.67 (s.e. = 0.13), and the mean value of the a parameter was M = 1.83 (s.e. = 0.08). We also estimated the non-decision time (t0 = 0.56, s.e. = 0.02; see Table S3 for the HDDM parameters of all participants).

Taken together, these results showed a decreased starting point in the take frame group compared to the give frame group, which showed a neutral starting point (z = 0.50). These results may indicate that the instruction to take money induced a cognitive bias toward the selfish option, which was not present when participants were asked to give money to the other.

Fig. 2

Distribution of the participants’ starting points in the HDDM (z parameter) from the laboratory and from the online study

Note: Bar plots show the z parameter from the HDDM analysis in each group. Error bars represent standard errors and dots represent participants’ individual starting points. The dashed line indicates the neutral (unbiased) position of the z parameter (z = 0.50). The results show a lower starting point (z parameter) in the take frame group (blue) compared to the give frame group (orange) and thus a cognitive bias in the take frame group

Confirmatory study (online)

To test the reliability of the Study 1 results, we conducted the same study online with an independent sample.

Participants

To control for gender effects and keep the online setting as comparable as possible to Study 1, we recruited only female participants. We collected a sample of N = 110 German female participants (n = 55 in each group) via the crowdsourcing platform clickworker.de. One participant (give frame group) had to be excluded due to incomplete data, resulting in 109 data sets for analysis (Mage = 30.48 years, s.e. = 0.68). Participants received monetary compensation (a fee of EUR 2.00 plus a randomly determined payout of EUR 3.00 or EUR 5.00). In contrast to Study 1, comparing age between the take frame and the give frame groups revealed a significant difference (Mtake_frame = 28.21 years, s.e. = 0.88, Mgive_frame = 32.86 years, s.e. = 0.95, B = -0.65, s.e. = 0.18, p < .001, R² = 0.11). Therefore, regression analyses were computed both without age and with age included as a control variable.

Ethical review

Again, we obtained approval from the Ethics committee of the Psychology Department of the Goethe-University, Frankfurt am Main, Germany and written informed consent from our participants.

Measures

Allocation task

The allocation task was an online version of the task used in Study 1 (Hein et al., 2016; Saulin et al., 2022). To run the study online, the task was programmed with PsychoPy version 1.73 (Peirce et al., 2019).

Procedure

At the beginning of the online study, participants were informed that they would interact with another, randomly assigned female student and that the roles of decision-maker and receiver would be randomly assigned as well. For this purpose, participants were shown a screen with an alleged search for a suitable partner. As soon as the partner was ostensibly found, it was announced that the roles would now be assigned at random, whereby the participant was always assigned the role of decision-maker. To minimize potential reputation effects, it was stated that the assigned partner could not observe the participant’s decisions during the task; that is, the green rectangle confirming the participant’s chosen distribution was only presented to the participant herself.

Data analysis

Analyses of reaction times, frequencies, and DDM parameters were identical to Study 1 (see Table S2 for quantile comparison and 95% credibility). As hierarchical models violate the independence assumption (Wiecki et al., 2013), we compared the results of the laboratory and the online study with an additional mixed model analysis with group (give vs. take frame), context (laboratory vs. online), and their interaction as categorical predictors and the z parameter as dependent variable (De Kock et al., 2021; Mandali et al., 2021). Study and group were additionally added as random effects.

Results

As in Study 1, comparing the reaction times and frequencies of prosocial and selfish decisions between the take frame and the give frame group revealed no significant difference (for results see Tables 3 and 4). Results from the models including age did not differ from the results without age; there was no effect of age on the outcomes (all p-values > 0.19).

Table 3 Mean (M) and standard errors (s.e.) of reaction times (in ms) separately for all decisions, for the prosocial decisions, and for the selfish decisions in both the take and give frames in the online study. β-weights, s.e., p-values and R² for the comparisons between groups are shown
Table 4 Mean (M) and standard errors (s.e.) of decision frequency (absolute values) separately for all decisions, for the prosocial decisions, and for the selfish decisions in both the take and give frames in the online study. β-weights, s.e., p-values and R² for the comparisons between groups are shown

To compare response frequencies and reaction times between Studies 1 and 2, we conducted a regression analysis with group (give vs. take frame), context (laboratory vs. online), and their interaction as categorical predictors and response frequencies or reaction times as dependent variables. The results revealed no significant effects (all p-values ≥ 0.28), indicating that reaction times and response frequencies did not differ between the two studies.

Using the identical HDDM approach as in Study 1, we estimated the starting point (z parameter) in the give frame and the take frame groups of Study 2. The comparison of the posteriors (Wiecki et al., 2013) again revealed a high probability of a lower z parameter in the take frame group compared to the give frame group, ztake_frame (M = 0.45, s.e. = 0.0006), zgive_frame (M = 0.49, s.e. = 0.0005), P(z take frame < z give frame) = 0.97 (Fig. 2). Across groups, the mean value of the v parameter was M = 1.15 (s.e. = 0.25) and the mean value of the a parameter was M = 2.36 (s.e. = 0.06). The non-decision time was estimated at t0 = 0.51 (s.e. = 0.01; see Table S4 for the HDDM parameters of all participants).

Additionally, a study comparison was conducted using a linear mixed model. The results revealed a significant effect of group (lmm χ²(1) = 8.26, p < .01, B = -0.09, s.e. = 0.03) that was comparable in both studies, as indicated by non-significant effects of study (lmm χ²(1) = 0.25, p = .62, B = -0.02, s.e. = 0.03) and of the study × group interaction (lmm χ²(1) = 1.75, p = .19, B = 0.06, s.e. = 0.04; R²m = 0.51).

These findings indicate that the take frame leads to a cognitive bias toward the selfish option compared to the give frame, and thus confirm the results of the laboratory Study 1. In both studies a cognitive bias was induced by valence framing.

Our results show a framing-dependent bias in the HDDM analysis, but no significant effect on overt behavior. If a group difference exists at the starting point of the decision process (z parameter) but cannot be measured reliably at the endpoint of the process (RTs and response frequencies), it is possible that changes in one component of the decision process are compensated by other components. To explore this possibility, we conducted an additional HDDM that estimated all three main parameters (z, v, a) separately for each group and study (see Supplement). The results replicated the decrease in starting point in the take frame group compared to the give frame group (laboratory study: P(z take frame < z give frame) > 0.99; online study: P(z take frame < z give frame) = 0.96), similar to the results from the original z parameter analyses in the laboratory study (P(z take frame < z give frame) > 0.99) and the online study (P(z take frame < z give frame) = 0.97). In the laboratory study, there was a tendency for an increased v parameter in the take frame (M = 2.13, s.e. = 0.37) compared to the give frame (M = 1.58, s.e. = 0.38), P(v take frame > v give frame) > 0.86. In the online study, there was no such effect (take frame: M = 1.13, s.e. = 0.37; give frame: M = 1.20, s.e. = 0.37; P(v take frame > v give frame) = 0.45). Considering the a parameter, in both studies there was a tendency for an increase in the take frame compared to the give frame (laboratory study: P(a take frame > a give frame) > 0.87; online study: P(a take frame > a give frame) > 0.83), indicating a more cautious response style in the take frame.

Discussion

We investigated the effect of a give and a take frame on prosocial and selfish decisions in both a laboratory and an online study. Our aim was to examine whether the change of a single word in the instructions (“take points” in the take frame vs. “give points” in the give frame) would lead to differences in cognitive processing. We hypothesized that this cognitive bias might be reflected by a change of the starting point in the decision process and used hierarchical drift-diffusion modelling (HDDM) to test this assumption. The results of our HDDM analysis on data collected in the laboratory revealed a shift in starting points (lower z parameter) in the take frame group compared to the give frame group. The observed shift in starting points was tested in an independent online study, which fully replicated the framing effect on the z parameter, despite smaller average values of the individual parameter estimates.

Previous studies on valence framing have inferred the existence (or non-existence) of framing effects from behavioral data, e.g., from the amount of money donated (Andreoni, 1995; Capraro & Vanzo, 2019; Chowdhury et al., 2017; Grossman & Eckel, 2015). To the best of our knowledge, our study is the first to investigate valence framing effects by focusing on the components of the decision process instead of relying exclusively on its output, i.e., RTs and response frequency. Our findings show that manipulating the valence of a frame indeed induces a cognitive bias, and thus provide empirical evidence for a theoretical claim (Gilovich et al., 2002; Gu et al., 2019; Perez et al., 2018; Tabesh et al., 2019). In more detail, the observed bias reflects an a priori shift of the starting point of the decision process (z parameter), indicating that the take frame lowers individuals’ initial tendency to behave prosocially. While the take frame group in both studies showed a decrease in the z parameter, the same parameter was almost completely neutral in both give frame groups (laboratory study: zgive frame = 0.50; online study: zgive frame = 0.49). Thus, when participants were asked to take money, the starting point of their decision shifted towards the selfish option. This means that they needed to accumulate less information to decide selfishly than to decide prosocially; the selfish decision became easier and faster. However, the probability of an incorrect selfish decision also increased at the processing level, while it decreased for the prosocial decision. In the give frame, this was not the case; both decision thresholds were approximately equidistant from the starting point.

Previous DDM research has shown that the estimation of DDM parameters is robust even if participants achieve near-ceiling accuracy (over 90% correct answers) (Ratcliff & McKoon, 2008). In light of this evidence, it is unlikely that the decrease in the z parameter reflects a ceiling effect, which otherwise might have been a concern given the relatively high percentages of prosocial decisions (84% in the laboratory study and 70% in the online study). The shift in starting point (z parameter) observed in the current studies is in line with previous studies that applied DDMs to investigate social decision making (Chen & Krajbich, 2018; Mulder et al., 2012; Saulin et al., 2022; White et al., 2018). Extending these previous results, our findings show that the starting point is shifted by valence frames induced by a minimal experimental manipulation (changing one word in the instruction).

By contrast, the response frequency (number of prosocial versus selfish decisions) and response times were comparable in the take and the give frame groups. The lack of group differences in these traditional behavioral measures is in line with previous studies that investigated valence framing under highly controlled conditions (Dreber et al., 2013; Goerg et al., 2019). Nevertheless, our results raise the question of why the strong bias in starting point that we consistently found did not result in significant changes in response frequencies and reaction times. Our findings of significant differences in DDM parameters, but not in overt behavior such as reaction times and response frequencies, are in line with other previous studies (White et al., 2009; Zajkowski et al., 2022; Zhao et al., 2019). There are several reasons why changes in individual components of the decision process (i.e., individual DDM parameters) do not necessarily change overt behavior (such as reaction times and response frequencies). First, it is possible that noise at the output stage overlays changes in individual decision components (Stafford et al., 2020; White et al., 2009). Second, it is possible that changes in one decision component are compensated by changes in other components of the decision process, e.g., the total amount of evidence (a parameter) or the speed of information accumulation (v parameter). Bolstering the latter assumption, additional analyses showed a tendency for an increase in the a parameter in both studies. This indicates a more cautious response style that may have compensated for the low starting point and thus masked differences in response frequency and reaction times. Compared to the observed strong shift in the starting point, these differences are moderate, but they may nevertheless explain why the cognitive bias represented by the starting point did not alter the classical average behavioral outcomes. Supporting this notion, a recent study by Zhao et al. (2019) revealed that participants differ in the amount and type of biases they show in different DDM components in the same paradigm. The authors argue that these different biases can cancel each other out, resulting in null effects in reaction times and response frequency, similar to our study. In the domain of (pro-)social decision making, i.e., the type of task that was used in our study, a third factor may play a role: participants may choose the prosocial option because this choice is socially more acceptable than showing overt egoistic behavior. Thus, the shift of the starting point towards the egoistic decision boundary in the take frame does not transfer to overt behavior, because at the outcome stage, participants deliberately choose the prosocial option. If this is the case, the starting point bias in the take frame should reveal itself in overt behavior when social desirability concerns are low (for example, if the person makes decisions alone outside the lab) – an interpretation that should be tested in future studies.

While the comparable reaction times and response frequencies in both framing conditions found in our studies are in line with previous research (Dreber et al., 2013; Goerg et al., 2019), they are at odds with the findings of Chowdhury et al. (2017), who found a higher frequency of prosocial decisions in the take frame in a female sample, which was interpreted as a cognitive bias toward the prosocial option. In our sample of female participants, we found no differences between valence frames in the number of prosocial decisions. Instead, the DDM analysis revealed the opposite, namely a cognitive bias toward the selfish option in the take frame group.

It is important to note that Chowdhury and colleagues did not observe differences in overt behavior when averaging across females and males, in line with other previous studies that tested mixed-gender samples (Dreber et al., 2013; Goerg et al., 2019) and with our results in females. Previous studies that tested the effects of give and take frames in the dictator game induced the different framing conditions by manipulating the source of the endowment (Chowdhury et al., 2017; Dreber et al., 2013; Goerg et al., 2019). In the give frame, the endowment (or additional endowment; Chowdhury et al., 2017) was given to the dictator, who could transfer a share to the receiver. In the take frame, the endowment (or additional endowment; Chowdhury et al., 2017) was given to the receiver, and the dictator decided on the amount that was transferred away from the receiver. With this setup, a prosocial choice in the take frame refers to the amount that the dictator leaves for the receiver. In contrast, in our study, the participants (dictators) were presented with the same allocation options in both framing conditions, and a prosocial choice always meant foregoing money in favor of the other. It is possible that women find it easier to leave additional money for the other (the prosocial choice in Chowdhury et al., 2017) than to allocate money to the other at a cost to themselves (the prosocial choice in our study).

Another possible explanation for the divergent findings could be that in the study by Chowdhury et al. (2017) both genders were involved as recipients. Thus, the observed effects may result from gender mixing, i.e., reflect a prosocial bias when females allocate resources to males and the opposite bias when females allocate resources to females, as in our study. A rigorous test of gender effects on framing in prosocial decision tasks would require a complex design including all possible combinations of same-gender and mixed-gender pairings of allocators and recipients. Implementing such a design in both the laboratory and the online study was beyond the scope of the current study. Therefore, we decided to recruit only females who interacted with another female. While we are aware that this approach limits the generality of our findings, it allowed us to control for unspecific gender effects that may have complicated the interpretation of our findings. Moreover, the current research may inspire future studies to investigate framing effects on prosocial decision-making across genders and in same-gender and mixed-gender pairings.

Conclusion

In conclusion, our results showed a cognitive bias when participants were asked to take money (take frame) but not when they were asked to give money (give frame). This cognitive bias was identified with DDM analyses revealing a shift in starting point of the decision process towards the selfish decision boundary. Importantly, this facilitation of selfish decisions in the take frame was replicated in an independent study using a larger and more diverse online sample.