Abstract
Many charities rely on donations to support their work addressing some of the world’s most pressing problems. We conducted a meta-review to determine what interventions work to increase charitable donations. We found 21 systematic reviews incorporating 1339 primary studies and over 2,139,938 participants. Our meta-meta-analysis estimated the average effect of an intervention on charitable donation size and incidence: r = 0.08 (95% CI [0.03, 0.12]). Due to limitations in the included systematic reviews, we are not certain this estimate reflects the true overall effect size. The most robust evidence suggests charities could increase donations by (1) emphasising individual beneficiaries, (2) increasing the visibility of donations, (3) describing the impact of the donation, and (4) enacting or promoting tax-deductibility of the charity. We make recommendations for improving primary research and reviews about charitable donations, and for applying the meta-review findings to increase charitable donations.
Introduction
Charities address some of the world’s most important and neglected problems (MacAskill, 2015; Singer, 2019). Some of the highest-impact (e.g., Against Malaria Foundation; GiveWell, 2021) and most famous (e.g., American Red Cross; Charity Navigator, 2022) charities rely on asking people to give money for no tangible reward (Bendapudi et al., 1996). As a result, effective fundraising is both critical and challenging for nonprofits. We conduct a meta-review of systematic reviews to identify ‘what works’ to promote charitable donations. Our aim is to provide practitioners and researchers with a resource for identifying which interventions have been investigated, which ones work, and which do not. By charitable donations, we mean the altruistic transfer of money from a person to an organisation that helps people in need (after Bekkers & Wiepking, 2011b). We catalogue systematic reviews because they: (a) search for and assess the evidence about which interventions work (Hulland & Houston, 2020; Stanley et al., 2018), (b) describe the effectiveness of interventions in a way that can be systematically compared, and (c) help practitioners and researchers understand which interventions have good external validity and generalisability (Higgins et al., 2019; Stanley et al., 2018). By synthesising systematic reviews, we can provide stronger recommendations for evidence-informed decision-making than by reviewing individual studies alone (HM Treasury, 2020).
This Meta-Review Investigates Which Hypothesised Drivers of Charitable Giving Have Robust Support
There are several existing reviews of evidence-based charitable promotion (e.g., Bekkers & Wiepking, 2011a, 2011b; Bendapudi et al., 1996; Oppenheimer & Olivola, 2010; Wiepking & Bekkers, 2012). We build on these reviews by conducting a meta-review, also known as an umbrella review or overview of reviews. Meta-reviews are similar to systematic reviews because they systematically search for and appraise existing research to answer a focused research question. A systematic review aggregates primary studies, but a meta-review aggregates systematic reviews. This allows meta-reviews to cover a wider scope than traditional systematic reviews (Becker & Oxman, 2011). Systematic reviews employ a comprehensive, reproducible search strategy to identify primary research into the effects of an intervention (e.g., providing information about recipients) on a specific outcome (e.g., size of donation) across contexts, while also assessing which situational factors influence those effects. Meta-analyses may form part of a systematic review and use statistics to estimate the average strength of those effects (Higgins et al., 2019). Research standards and practices differ across disciplines, and even within a discipline (e.g., psychology), findings about ‘what works' to increase charitable donations can conflict with each other due to inconsistent pre-registration, participant demographics, and publication bias (Open Science Collaboration, 2015). Charitable donation as a behaviour is therefore a good fit for a meta-review because useful research on the topic is fragmented across many disciplines including marketing, economics, psychology, and others (Bekkers & Wiepking, 2011b; Bendapudi et al., 1996; Mazodier et al., 2020; Pham & Septianto, 2019; Rothschild, 1979; Septianto et al., 2020; Wallace et al., 2017). Our meta-review aggregates systematic reviews on charitable giving. Where included systematic reviews are accompanied by a meta-analysis, we aggregate those meta-analyses into a meta-meta-analysis to quantify and compare the strength of interventions to promote charitable giving.
We organise the presentation of results from our meta-review using an established and highly-cited model of drivers for charitable donations (Bekkers & Wiepking, 2011b). This narrative review proposed a model where donors are more likely to give when they are prompted to donate (solicitation) to a cause they know about (awareness of need), if the cost is low enough (costs and benefits) for the effect it has on society (altruism). According to this model, people also donate if they think doing so will make them look good in the eyes of others (reputation), make them feel good (psychological benefits), align with what is important to them (values), and make a meaningful difference (efficacy). Bekkers and Wiepking classified different interventions found in primary research into one or more of these drivers, for example, by discussing how tax deductibility decreases the costs of donation. However, unlike a systematic review, their narrative review approach did not account for publication bias or pre-register inclusion and exclusion criteria; it also did not estimate the relative effectiveness of each driver for influencing charitable donations. In our meta-review, we seek to comprehensively identify all interventions that increase charitable behaviour and that have been the focus of an existing systematic review. Because systematic reviews often include a meta-analysis, which summarises the quantitative effect size or ‘strength’ of an intervention on charitable donation behaviour, our meta-review will also assess the effectiveness of each driver (e.g., awareness, costs and benefits) in increasing charitable donation behaviour. In this review, we use the Bekkers and Wiepking (2011b) classification to identify which drivers have been the most studied, which have not, and which drivers appear to most influence charitable donation behaviour.
Aim
In this meta-review, we aim to:
1. synthesise the systematic reviews on interventions designed to promote charitable donations across disciplines
2. combine the quantitative effect size estimates from meta-analyses included in the systematic reviews and use meta-meta-analysis to estimate the effectiveness of interventions to promote charitable donations
3. interpret the findings by classifying each intervention according to a widely-used model (Bekkers & Wiepking, 2011b) and best-practice guidelines for evidence-informed decision-making (Guyatt et al., 2011; Higgins et al., 2019)
Method
We conducted a meta-review of systematic reviews using established recommendations (Becker & Oxman, 2011; Grant & Booth, 2009; Khangura et al., 2012; Pollock et al., 2017; World Health Organisation, 2017) to synthesise the literature on how to increase charitable donations. We conducted a meta-meta-analysis on any meta-analyses reported in the included systematic reviews. A meta-meta-analytic approach was necessary because it permitted the use of all available information from the original meta-analyses to calculate a pooled effect while accounting for variability at both the study and meta-analysis level. Our meta-review was prospectively registered on the Open Science Framework (https://osf.io/465ej/). Details of our search strategy including search strings, screening and selection of studies, data extraction and quality assessment, quantitative synthesis, and certainty assessment are presented in Supplementary File 1 and summarised below.
We searched Scopus, PsycINFO (Ovid), Web of Science, and the Database of Abstracts of Reviews of Effects due to their broad but non-overlapping corpora, and their coverage of topic areas relevant to our research question. We conducted searches on July 17th, 2019 and March 4th, 2021. We developed terms for identifying systematic reviews informed by a comprehensive typology of review methods (Grant & Booth, 2009). Terms for charitable donations as outcomes included: altruis*, charit*, philanthro*, donat*, pledge*, or non-profit. Titles and abstracts were screened in duplicate; full-text articles were screened in duplicate; and included papers were extracted in duplicate. Disputes were resolved by discussion between reviewers, consulting a senior member of the team if necessary.
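To illustrate how these terms could be combined, a hypothetical Scopus-style query is sketched below. The review-type terms shown here are illustrative only; the exact search strings we used are reported in Supplementary File 1.

```
TITLE-ABS-KEY(
  (altruis* OR charit* OR philanthro* OR donat* OR pledge* OR "non-profit")
  AND ("systematic review" OR "meta-analysis" OR "scoping review")
)
```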
Our inclusion criteria were (1) systematic reviews, scoping reviews, or similar reproducible reviews (i.e., those with a reproducible method section describing a searching and screening procedure); (2) reviews describing monetary charitable donations; (3) reviews assessing any population of participants in any context; (4) written in English (due to logistical constraints); and (5) peer-reviewed (although no papers were ultimately excluded on the basis of this criterion). Exclusion criteria were (1) primary research reporting new data (e.g., randomised experiments); (2) non-systematic reviews, theory papers, or narrative reviews; (3) reviews on cause-related marketing; and (4) reviews of other kinds of prosocial behaviour (e.g., honesty, non-financial donations). We also conducted forward and backward citation searching (Hinde & Spackman, 2015) via Scopus with no subject or publication requirements. We developed a data extraction template to capture information from each included review and assessed the quality of the included reviews using an abbreviated list of quality criteria drawn from AMSTAR 2 (Shea et al., 2017). We used the GRADE approach to assess the quality of the evidence across all reviews for each combination of intervention and outcome (Guyatt et al., 2011; Higgins et al., 2019; Hultcrantz et al., 2017). More information about and results of these quality assessments are available in Supplementary File 1.
Many, but not all, systematic reviews also conducted meta-analyses to quantify the size of effects on donations. So that we could compare the relative size of effects between these different meta-analyses, we conducted a meta-meta-analysis, or second-order meta-analysis (Hennessy et al., 2019; Schmidt & Oh, 2013). These models are the best practice for synthesising effects across different meta-analyses because they can compare effect sizes on a common metric while accounting for variability both within and between reviews (Hennessy et al., 2019; Schmidt & Oh, 2013). Our primary outcome was the overall pooled effect size of intervention on donation size. A secondary outcome was donation incidence—whether a donation of any size was provided—because many reviews reported on this dichotomous outcome. We extracted quantitative estimates from reviews that included meta-analyses and converted them to the most commonly used metric (r) using the compute.es package (Del Re, 2020) in R (R Core Team, 2020). We conducted a meta-meta-analysis using the metaSEM (Cheung, 2014) and msemtools (Conigrave, 2019) packages. We used random-effects meta-analyses to calculate pooled effects for each mechanism and each outcome, then conducted moderation analyses to assess whether interventions were homogeneous within mechanism and outcome. Raw data and code for reproducing the analyses are available at https://osf.io/465ej/.
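As a minimal sketch of this workflow (not the code used for the published analyses, which is available at the OSF page above), the snippet below converts hypothetical extracted standardised mean differences to r with compute.es and fits a three-level random-effects model in metaSEM, with effects nested within reviews and the outcome (size vs. incidence) as a moderator. All data values, review labels, and column names are illustrative.

```r
## Minimal sketch of the quantitative synthesis (illustrative data only).
library(compute.es)  # effect size conversions (Del Re, 2020)
library(metaSEM)     # three-level (meta-meta-analytic) models (Cheung, 2014)

# Hypothetical extraction table: one row per meta-analytic effect
dat <- data.frame(
  review  = c("ReviewA", "ReviewA", "ReviewB", "ReviewB", "ReviewC", "ReviewC"),
  outcome = c("incidence", "size", "size", "size", "incidence", "size"),
  d       = c(0.55, 0.60, 0.15, 0.25, 0.30, 0.10),   # illustrative Cohen's d values
  n1      = c(400, 400, 900, 1200, 300, 500),
  n2      = c(400, 400, 900, 1200, 300, 500)
)

# Convert each d to r and its sampling variance (column names follow compute.es output)
conv      <- with(dat, des(d = d, n.1 = n1, n.2 = n2, verbose = FALSE))
dat$r     <- conv$r
dat$var_r <- conv$var.r

# Random-effects model with effects (level 2) nested within reviews (level 3)
fit <- meta3(y = r, v = var_r, cluster = review, data = dat)
summary(fit)  # pooled r plus heterogeneity at both levels

# Moderation by outcome (donation size vs. incidence)
fit_mod <- meta3(y = r, v = var_r, cluster = review,
                 x = as.numeric(dat$outcome == "incidence"), data = dat)
anova(fit_mod, fit)  # likelihood-ratio test of the moderator
```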
Results
We organise the results as follows. First, we describe the reviews identified and included through the systematic search (Table 1 and Fig. 1). Second, we present a meta-meta-analysis for the pooled effect of interventions on donation size and donation incidence (Fig. 2). Third, we organise the included interventions using the model of drivers for charitable donations from Bekkers and Wiepking (2011b) to present a meta-meta-analysis of interventions for each driver (Fig. 3) and describe each intervention in detail.
Fig. 2 Pooled effect sizes and 95% confidence intervals from meta-analyses of interventions, grouped by outcome (donation size vs. incidence). Note: All effect sizes were converted to r, allowing for more meaningful comparisons between reviews. Light rows were interventions hypothesised to reduce donations. For these interventions, the sign of effects was reversed during analyses to calculate meaningful meta-meta-analytic pooled effect sizes
Fig. 3 Pooled effect of donation size (with 95% confidence intervals) from meta-analyses of interventions, grouped by hypothesised mechanism. Note: All effect sizes were converted to r to allow for meaningful comparisons between reviews. Light rows were interventions hypothesised to reduce donations. For these interventions, the sign of effects was reversed during analyses to calculate meaningful pooled effect sizes
Records Identified Through Systematic Search
As outlined in Fig. 1, we screened 2294 unique titles and abstracts. The team subsequently screened 60 full texts for eligibility, 21 of which were included. Of the included systematic reviews, 15 included meta-analyses of either donation size or donation incidence. Characteristics and summaries of each included review are presented in Table 1. Most full texts were excluded for being reviews that were not systematic (Weyant, 1996). Ten focused on prosocial behaviour but did not report charitable donations distinctly, so unique effects on that outcome could not be discerned (Nagel & Waldmann, 2016). Six were on organisational behaviour that did not include charitable donations (e.g., the effects of nonprofits becoming more commercial; Hung, 2020), and five were primary research (e.g., randomised experiments; Kinnunen & Windmann, 2013). Three reviews did not report prosocial outcomes (e.g., effects of advertising on sales; Assmus et al., 1984). Quality appraisal and certainty assessment of the included reviews were conducted consistent with our pre-registered protocol. Due to limitations of space, we report the results of these assessments in detail in Supplementary File 1, including a table describing the quality assessment (Table S1) and certainty assessment (Table S2).
Meta-Meta-Analysis of Interventions on Donation Size and Donation Incidence
As shown in Fig. 2, the meta-meta-analytic pooled effect on donation size and donation incidence was small (r = 0.08, 95% CI [0.03, 0.12], K = 23). The pooled effect was calculated using meta-analyses reported in the included systematic reviews. These effects were heterogeneous between reviews (I² = 0.85), meaning that the different interventions (e.g., pique, identifying recipient) had very different effects on outcomes (e.g., donation size). The effects were not moderated by the specific outcome (p = 0.12). As seen in Fig. 2, this means pooled effects were similar for donation incidence (r = 0.15, 95% CI [0.05, 0.25], K = 4) and donation size (r = 0.06, 95% CI [0.02, 0.11], K = 19). Raw effect sizes extracted from meta-analyses in the included reviews are available on the Open Science Framework (https://osf.io/465ej/).
Interventions to Increase Charitable Donations Organised Using Bekkers and Wiepking’s (2011a, 2011b) Model
We used a mixed-methods approach to synthesise quantitative effect size estimates with a qualitative analysis of findings according to Bekkers and Wiepking (2011b), with descriptions of each included review. We conducted a further meta-meta-analysis (Fig. 3) with interventions grouped by the mechanism ascribed by Bekkers and Wiepking (2011b). As shown in Fig. 3, the pooled effects of each hypothesised mechanism were significant; however, there was large heterogeneity in the effects of each mechanism (all total I² > 0.70). Moderation analyses for interventions within each mechanism were all significant (each p < 0.018), suggesting that the specific design, channel, or context in which the intervention was delivered influenced the effective use of the hypothesised mechanism. In the following sections, we describe each identified behaviour change intervention organised by mechanism.
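Continuing the illustrative sketch from the Method section (again with hypothetical data and groupings, not the published analysis), mechanism-level pooled effects and the within-mechanism moderation tests could be run by subsetting to one mechanism and adding intervention dummies as moderators. The columns `mechanism` and `intervention` are assumed additions to the hypothetical data frame `dat` sketched earlier.

```r
## Illustrative only: mechanism-level pooling and within-mechanism moderation.
library(metaSEM)

pool_mechanism <- function(dat, which_mechanism) {
  d <- subset(dat, mechanism == which_mechanism)

  # Pooled effect for this mechanism (effects nested within reviews)
  fit0 <- meta3(y = r, v = var_r, cluster = review, data = d)

  # Are interventions homogeneous within the mechanism?
  # Add intervention dummies as moderators and compare via likelihood-ratio test.
  x    <- model.matrix(~ intervention, d)[, -1, drop = FALSE]
  fit1 <- meta3(y = r, v = var_r, cluster = review, x = x, data = d)

  list(pooled = summary(fit0), moderation = anova(fit1, fit0))
}

# Example call (mechanism labels follow Bekkers & Wiepking, 2011b)
# pool_mechanism(dat, "awareness")
```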
Interventions to Increase Awareness
On average, strategies designed to increase awareness (see Note 1) had small to moderate effects on donations (r = 0.16, 95% CI [0.09, 0.23], K = 3). In general, charities can increase awareness and therefore donations by piquing donor interest, demonstrating the need, or identifying a victim.
There Were Large Effects from the Pique Technique
Piquing interest increased both compliance (r = 0.27, 95% CI [0.19, 0.35], k = 16; Lee & Feeley, 2017) and donation size (r = 0.29, 95% CI [0.25, 0.33], k = 16; Lee & Feeley, 2017), leading to much larger total revenue (r = 0.49, 95% CI [0.45, 0.53], k = 16; Lee & Feeley, 2017). The pique technique involved asking donors for unusual amounts of money (17c instead of 10c), and was designed to break the ‘refusal script’: would-be donors were more likely to stop and ask for a rationale when an odd amount of money was requested of them (Lee & Feeley, 2017). The largest experiment involved a $3 request, so the technique may have questionable ecological validity. It is unclear whether it would also work for requesting $1017 instead of $1000.
Describing a Needy Recipient Increased Donations
This was evaluated in three meta-analyses. Engel found that needy recipients received larger donations in dictator games (r = 0.13, 95% CI [0.10, 0.17], k = 69; Engel, 2011; see Note 2). Neediness was also a mechanism that explained legitimizing paltry contributions (described below). When someone said “even a penny would help”, many donors saw the recipient as more needy, which had indirect effects on donation compliance (Bolkan & Rains, 2017). Finally, when a problem was described as severe, certain, and calamitous (e.g., natural disaster), rather than as having a modest negative impact (e.g., no school books), donation size increased regardless of whether the victim was identifiable or not (Butts et al., 2019).
Describing an Identifiable Victim Increased Donation Size
Under most circumstances, donation size was larger when donors were presented with a single ‘identifiable victim’ (r = 0.13, 95% CI [0.08, 0.17], k = 47; Butts et al., 2019) than when they were presented with statistics or multiple recipients. This effect is also known as ‘compassion fade’, whereby a larger number of victims leads to lower perceived impact and lower expected positive affect from donating (Jenni & Loewenstein, 1997). Mediation analyses supported these hypothesised paths (Butts et al., 2019). Empathy had a smaller mediating role: while people showed slightly less empathy for a larger group of people, this lower empathy had only a small effect on donations.
Interventions to Reduce Cost or Increase Benefits
On average, strategies targeting costs and benefits weakly increased donations (r = 0.08, 95% CI [0.02, 0.13], K = 5). The most influential intervention appeared to be making a charity tax deductible, whereas nudges and framing strategies had little effect.
Tax Deductibility Increased Donations
One large meta-analysis of 69 studies (n = 1,418,212) examined the impact of tax deductibility on charitable donations (Peloza & Steel, 2005). Effects were reported as price elasticities, which could not be converted to effect sizes. They found substantial elasticity: a tax deduction of $1 resulted in an additional $1.44 being donated to charity (confidence interval not reported). The authors found that tax deductions particularly increased the likelihood of bequests. High-income donors were no more concerned with tax deductions than lower-income donors.
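For readers unfamiliar with this metric, the sketch below states the standard definition of the price elasticity of giving assumed in this literature (our framing, not a quotation from Peloza and Steel, 2005): the ‘price’ of donating a dollar is its after-deduction cost to the donor, and an absolute elasticity above one means each dollar of forgone tax revenue induces more than a dollar of additional giving, consistent with the $1 to $1.44 figure above.

```latex
% Price of giving under a deduction, for a donor with marginal tax rate \tau:
P = 1 - \tau
% Price elasticity of giving, where G denotes total donations:
\varepsilon = \frac{\partial \ln G}{\partial \ln P}
% |\varepsilon| > 1 implies "treasury efficiency": the induced giving exceeds
% the tax revenue forgone by offering the deduction.
```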
Matching (or Supplementing) Donor Contributions Did Not Affect Donations
In contrived experiments, when donors were told their funds would be matched (fully or partially), there was a small but non-significant increase in donations (r = 0.08, 95% CI [− 0.02, 0.17], k = 18; Engel, 2011).
‘Door-in-the-Face’ Does Not Reliably Increase Donations
Door in the face is designed to reduce the perceived cost of donating by initially presenting a high anchor (e.g., “will you donate $1000?”), then asking for something more achievable (e.g., “how about $10?”; also known as the ‘request then retreat strategy’). Evidence for this strategy appears weak: donors may be marginally more likely to say they will donate (r = 0.08, 95% CI [0.03, 0.12], k = 7; Feeley et al., 2012) but this does not translate into actual compliance (r = 0.06, 95% CI [− 0.01, 0.12], k = 15; Feeley et al., 2012).
Repeated Opportunities/Requirements to Donate Decreased Donations
In contrived experiments with repeated rounds, donors gave less when they were aware there would be multiple opportunities to donate (r = − 0.06, 95% CI [− 0.08, − 0.04], k = 64; Engel, 2011).
Gain-Framed Messaging Did Not Affect Donations
Prospect theory proposes that small losses loom larger than small gains, but framing appeals for charitable donations as ‘losses averted’ did not increase the likelihood of donations (r = − 0.01, 95% CI [− 0.10, 0.09], k = 25; Xu & Huang, 2020).
Higher Stakes Inconsistently Decreased Donations
Some studies have asked whether those donating from larger pools of money (usually in contrived experiments) are more generous or more frugal. The larger of the two meta-analyses found no relationship between stake size and donations (r = − 0.01, 95% CI [− 0.09, 0.07], k = 603; Engel, 2011). However, a more focused follow-up meta-analysis found that those endowed with more money tended to be less generous, in relative terms, than those endowed with less (r = − 0.07, 95% CI [− 0.12, − 0.03], k = 18; Larney et al., 2019). That is, when people had more, they may have donated more in absolute terms, but usually donated a lower percentage of the money they held.
Legitimizing Paltry Contributions Has Negligible Total Benefit
Three systematic reviews investigated the effect of ‘legitimizing paltry contributions’ on charitable donations (usually words like “even a penny will help”; Andrews et al., 2008; Bolkan & Rains, 2017; Lee et al., 2016). The largest of these reviews found a moderate increase in compliance (r = 0.22, 95% CI [0.17, 0.26], k = 34; Bolkan & Rains, 2017) which was offset by a decrease in the size of the average donation (r = − 0.23, 95% CI [− 0.34, − 0.12], k = 11; Bolkan & Rains, 2017). The net effect of these competing forces was a non-significant increase in total revenue (r = 0.03, 95% CI [− 0.01, 0.07], k = 18; Bolkan & Rains, 2017). Mediation analyses suggest that the technique increases the perceived neediness of the cause, but that it also vindicates those likely to donate a small amount to avoid judgement (i.e., donors high on ‘impression management’).
Interventions to Increase Efficacy
On average, strategies targeting efficacy increased donations (r = 0.11, 95% CI [0.01, 0.22], K = 3). Direct modelling of the desired behaviour—seeing others donate money—appears to increase donations, but general prosocial media does not.
Prosocial Modelling Moderately Increased Donations, Regardless of Media
When people saw others acting prosocially, they were more likely to imitate that behaviour, including making charitable donations (r = 0.22, 95% CI [0.21, 0.24], k = 40; Jung et al., 2020). Effects were consistent across media (e.g., direct observation vs. watching on TV), age, gender, and culture.
Generic Prosocial Media Has Uncertain Effects
Jung and colleagues (2020) looked at studies where the model performed the same behaviour being measured (i.e., the model donated money and the dependent variable was also donation), and moderated by how the model was viewed (real observation vs. via media). Coyne and colleagues (2018) instead looked at media only (TV, movies, video games, music or music videos) with explicitly prosocial content (but not necessarily donating money). Participants in one study were more likely to donate money while listening to “Love Generation” (by Bob Sinclar) rather than “Rock This Party” (also Bob Sinclar; Greitemeyer, 2009). This trend, however, was not reliable: pooled effects on financial donations were small, with a confidence interval that included zero (r = 0.09, 95% CI [− 0.04, 0.21], k = 9; Coyne et al., 2018). The media seldom demonstrated the exact behaviour being measured (i.e., ‘Love Generation’ does not mention donations); imitation and efficacy may increase the likelihood of performing the specific behaviour observed, but the effects do not appear to spill over to related prosocial behaviours.
Certainty of Donation Benefit Has Little Influence on Donation Compliance
As mentioned previously, correlational studies show that certain calamities appear to attract donations, regardless of interventions like ‘identifiable victims’ (Butts et al., 2019). Among dictator games, when uncertainty was added to the benefit (e.g., donating lottery tickets) there was no significant reduction in donations (r = − 0.04, 95% CI [− 0.17, 0.10], k = 7; Engel, 2011). This may not necessarily translate to different types of uncertainty, however, such as uncertainty that a charity will have an impact.
Interventions to Increase Reputation
On average, strategies targeting reputation slightly increased donations (r = 0.06, 95% CI [0.00, 0.12], K = 6). In general, people are somewhat more likely to donate when there is some reputational benefit to doing so (e.g., being observed or having the donation amount visible).
Being Observed by Others Increases Donations
After synthesising a large number of studies and participants (N > 500,000), Bradley and colleagues found that being observed significantly increased donations (r = 0.15, 95% CI [0.11, 0.20], k = 101; Bradley et al., 2018). Consistent with reputation hypotheses, effects were larger for repeat interactions, interactions with personal consequences, group social dilemmas (vs. 1:1 bargaining games), and where observation was more intense. In contrast, Engel moderated his findings by whether or not donations were concealed. He found donations decreased when concealing the donor (r = − 0.09, 95% CI [− 0.16, − 0.02], k = 52; Engel, 2011) or the amount donated (r = − 0.06, 95% CI [− 0.12, 0.00], k = 19; Engel, 2011), but effects were small.
Artificial Cues of Being Observed Do Not Reliably Increase Donations
Three systematic reviews have explored the effect of artificial surveillance cues on donor generosity (Sparks & Barclay, 2013; Nettle et al., 2013; Northover et al., 2017). Studies have typically analysed the effect of displaying images of ‘watching eyes’ on donation decisions made within economic games, but many included field experiments (e.g., eyes above ‘honesty boxes’). The largest of these reviews found negligible increases in compliance (r = 0.04, 95% CI [0.03, 0.06], k = 27; Northover et al., 2017) and donation size (r = 0.01, 95% CI [− 0.02, 0.05], k = 26; Northover et al., 2017). Effects seem to work only in the short term, with few studies finding any long-term benefits (Sparks & Barclay, 2013). Overall, artificial surveillance may increase the chance of people donating something in the short term, but the best quality evidence suggests effects are small.
Decision Made by a Group
Making a decision as a group may increase the reputational stakes of signalling altruism but also may diffuse the reputational benefit of donating. Group decisions had no significant total influence on donations (r = − 0.05, 95% CI [− 0.28, 0.18], k = 4; Engel, 2011) but the confidence intervals are wide due to the small number of studies.
Interventions to Affect Altruism
Few reviews explored the altruism mechanism proposed by Bekkers and Wiepking (2011b). Two reviews explored the crowding-out hypothesis—that donors motivated by a desire to have an impact would avoid causes already supported by governments because of diminishing marginal returns (Lu, 2016, k = 60; de Wit & Bekkers, 2017, k = 54). Neither review found decisive evidence for crowding out. A subset of the studies in the reviews had higher internal validity—they either controlled for confounding statistically or via experimental designs. These studies were more likely to suggest that government funding reduces private donations (de Wit & Bekkers, 2017; Lu, 2016), but given the small, heterogeneous effect sizes, the evidence for a relationship is weak.
Other Influences That Have Been Explored
We did not find reviews of interventions that could be easily classified as ‘solicitation’, ‘psychological benefits’, or ‘values’. We could not easily classify two review findings on the basis of Bekkers and Wiepking’s (2011a, 2011b) mechanisms. One review tested a range of interventions designed to promote intuitive thinking (e.g., high cognitive load), but these interventions did not influence donations (r = − 0.01, 95% CI [− 0.02, 0.01], k = 60; Fromell et al., 2020). The authors argue that ‘fast’ and ‘slow’ thinking are often aligned on issues of charitable donations. Engel (2011) found that reducing the number of options available to the donor decreased the amount they donated.
Discussion
Charities conduct activities that seek to address a wide range of social problems (MacAskill, 2015; Singer, 2019). Our meta-review identified interventions (e.g., piquing donor interest, prosocial modelling, increased neediness, identifiable victims, tax-deductibility) that robustly increase charitable donation size or incidence. The effect size of most interventions on charitable donations was relatively small in terms of increasing the success of individual opportunities to donate (|r| < 0.1), but would likely “add up” over time (Funder & Ozer, 2019). It is important to note that our certainty for this estimate is low, due to limitations in many of the included systematic reviews (e.g., many did not assess the quality of included studies; many neglected publication bias). Most effect sizes were far smaller than the average effect size of interventions published in marketing science (r = 0.24; Eisend, 2015; Eisend & Tarrahi, 2016) or social psychology (r = 0.21; Richard et al., 2003). We identified support for some of the mechanisms described in a widely-used model of charitable donations (increasing awareness, efficacy, benefits, and reputation; Bekkers & Wiepking, 2011b) and some gaps in the review-level literature (e.g., systematic reviews assessing psychological benefits or values). Most reviews included primary studies that assessed interventions in contrived experiments, but some found consistent results in field and laboratory experiments. The findings suggest that several types of interventions can help to increase charitable donations, but the overall poor quality of the evidence suggests that expert judgement and contextual factors will be critical for good decisions in charity promotion.
Practitioners May Draw From a Range of Robust Interventions to Increase Charitable Donations
Taking the findings together, and notwithstanding the limitations of the included reviews, we recommend practitioners consider the following interventions for promoting charitable donations. Examples of the source, recipient, context, channel, and content of each intervention (Lasswell, 1948; Slattery et al., 2020) are presented in Supplementary Table S3.
Help Donors Feel Confident
When interventions increased donor confidence, they tended to solicit higher donations. Effective strategies included seeing other people who donated money (Jung et al., 2020), not merely seeing people performing ‘prosocial behaviours’ (Coyne et al., 2018). Theory and preliminary findings would suggest that effects are stronger when viewing those who share our group identity (Chapman et al., 2018, 2020). Uncertainty about the benefit of a charitable donation may cause prospective donors to reduce their donation size; donation matching campaigns may cause prospective donors to slightly increase their contribution (Engel, 2011). Identifiable victims work because donors feel more confident that they could make a meaningful difference (Butts et al., 2019). Overall, the key mechanism is that if prospective donors think they can make a meaningful difference, they are more likely to donate (Butts et al., 2019).
Provide Donors with Meaningful Rationales for Why Donations are Needed
Donors are persuaded by needy recipients (Engel, 2011). Campaigns that say things like ‘even a penny will help’ can increase the likelihood of an initial donation when they signal the ‘desperate need’ of the cause (Bolkan & Rains, 2017). Similarly, highlighting a single beneficiary (“identifiable victim”) does not change the likelihood of donating if the charitable cause is obviously severe and widespread (Butts et al., 2019). Piquing a donor’s interest via odd requests (e.g., 17c) appears to work by prompting a conversation around why the donation is needed (Lee & Feeley, 2017).
Help Donors to Look Good in Front of Others, but Beware Side-Effects
Donations are more likely when donors are observed (Bradley et al., 2018), and when both they and their donation size are identified to recipients (Engel, 2011). Charities should be careful to avoid using this in a way that creates guilt or social pressure (Bennett, 1998) or in a way that is contrived/artificial (Northover et al., 2017; Sparks & Barclay, 2013). Instead, charities can use transparency as a way of facilitating pride and self-efficacy (Crocker et al., 2017), to minimise the taboo around discussing charitable donations publicly, and to help establish a social norm toward giving (Singer, 2019).
Seek and Advertise Tax Deductibility
Given the large and significant price elasticity from tax-deductibility (i.e., tax-deductibility increased donations; Peloza & Steel, 2005), directing effort toward becoming tax deductible will likely pay dividends. While few studies explicitly assessed the impact of advertising deductibility, we assume that doing so may confer some benefits for donations.
Some ‘Nudges’ and Compliance Techniques Work, but Expectations Should Be Modest
Nudges usually assume that people will be more likely to donate if charities activate their ‘fast’, intuitive thinking system, but this is not the case (Fromell et al., 2020). It appears that intuitive and deliberate thinking around donations are usually aligned. Nudges and framing strategies such as artificial cues (Northover et al., 2017; Sparks & Barclay, 2013), legitimizing paltry contributions (Bolkan & Rains, 2017), and ‘door-in-the-face’ (Feeley et al., 2012) are not consistently effective.
To Aid Evidence-Informed Decision-Making, Reviews and Research Must Improve
Although the included reviews mostly synthesised randomised trials, which provide the best causal evidence for the effects of interventions (see Fig. 1), we judged the certainty of all effects to be low. This was because the reviews here, and their included studies, often failed many well-established criteria for internal and external validity (Guyatt et al., 2011; Higgins et al., 2019; Hultcrantz et al., 2017). Many interventions were only tested in laboratories or in experiments with relatively trivial amounts of money (< $10). In contrast, many methods of persuasion commonly used in charitable contexts, such as emotional appeals and rational arguments (Bennett, 2019; Caviola et al., 2020; Stannard-Stockton, 2009), were seldom examined directly by the reviews we found. In addition to reviewing more authentic interventions, review authors could increase the reliability and transparency of their methods via AMSTAR 2 and PRISMA. Registration and standardised reporting checklists like PRISMA (Moher et al., 2010; Page et al., 2021) improve the internal validity of systematic reviews through common expectations of methodology and reporting.
Results Supported Many Mechanisms Proposed by Bekkers and Wiepking (2011a, 2011b)
Many interventions designed to increase awareness, efficacy, and reputation usually appeared to increase donations, as hypothesised by Bekkers and Wiepking (2011b). We did not find systematic reviews that assessed other hypothesised mechanisms as classified by Bekkers and Wiepking (2011b), such as solicitation, psychological benefits, and values. However, these findings may not necessarily reflect strengths and weaknesses in the proposed mechanisms but could reflect the way interventions are categorised. As described above, we followed the categorisation of Bekkers and Wiepking (e.g., identifiable victim as ‘awareness’) even if we had reason to think that interventions may be better classified elsewhere (i.e., identifiable victim as ‘efficacy’ or ‘psychological benefits’; Butts et al., 2019). Primary studies and non-systematic reviews have found support for some other mechanisms (psychological benefits, Crocker et al., 2017; value alignment, Goenka & van Osselaer, 2019), but for formal model building, researchers should explicitly test whether interventions are operating by the hypothesised mechanism.
Limitations of Our Meta-Review
By focusing on review-level evidence we necessarily excluded primary studies that would have been useful for charity and non-profit researchers and practitioners. While a review of 1339 included primary studies would have been intractable, reviews of primary studies have sufficient granularity to look at mediators and moderators that might be useful across studies. Instead, we were beholden to the methods of the included reviews. Similarly, we were limited to the interventions selected by previous reviewers, so necessarily omitted interventions not included in any systematic reviews, even though they may inform research and practice (e.g., opt-in vs. opt-out donations; Everett et al., 2015). There may, for example, be a wealth of knowledge on interventions using the internet to drive donations, but since there have been few systematic reviews on that topic, those interventions would have been excluded from our meta-review (Bennett, 2016, 2019; Liang et al., 2014). In a similar vein, focusing on systematic reviews means we may have excluded some more recent, ‘cutting-edge’ interventions. It often takes a number of years between an intervention gaining traction and it becoming the subject of a systematic review. For example, recent research has shown that donors may actually prefer cost-effectiveness indicators (i.e., cost per life saved) to overhead ratios (i.e., percent directed to administrative expenses) but that the latter is usually the focus of decision-making because of the ‘evaluability bias’: people weigh an attribute based on how easy it is to evaluate (Caviola et al., 2014). However, few studies have examined the effect of publishing cost-effectiveness indicators so it is not yet possible to meta-analyse these interventions. As a result, while the interventions presented in our review have been thoroughly assessed, and many have been shown to be robustly beneficial, there may be other interventions with larger effect sizes not listed here.
Our meta-review prospectively excluded grey literature and reviews in other languages. This may affect generalisability, but doing so seldom affects conclusions from meta-reviews (unlike reviews of primary studies; Ganann et al., 2010), and we excluded no reviews on the basis of this criterion (see Fig. 1). This is likely because unpublished reviews of charitable donations are less likely to use systematic search and synthesis methods. Nevertheless, there may be other reviews that contribute to this discussion that were missed by our searches and inclusion criteria.
Our review used well-validated assessments of certainty (i.e., GRADE) and review quality (i.e., an abbreviated AMSTAR 2 checklist; see Supplementary File 1). These assessments allow interested readers to know the quality of the included reviews and certainty of the included findings. However, in a meta-review, these tools are again beholden to the methods of the included systematic reviews. For example, GRADE reduces the certainty of the findings if there are few randomised experiments, or if the included randomised experiments may have been subject to common experimental biases (e.g., if they were unblinded). These biases reduce the internal validity of the findings, but few included reviews formally assessed these biases. As a result, we could not conduct sophisticated assessments of the internal validity of the included findings without examining the methods of the 1339 primary studies. We hope future systematic reviews of primary studies more frequently assess these biases using a validated tool, like RoB 2 (Sterne et al., 2019). Similarly, GRADE accounts for the external validity of the included studies—such as whether or not findings are likely to generalise to the populations or situations most practitioners are interested in. This can be a complex question requiring judgement. For example, in some cases, Mechanical Turk workers may be representative samples, but external validity also depends on the design of the study (e.g., viewing a real advertisement vs. playing an economic game). Our ability to assess external validity was subject to the quality of the reporting in the included systematic reviews (unless we wanted to review all 1339 methods). We hope future reviews discuss the external validity of their included studies, and could consider integrating those judgements into their own certainty assessment (e.g., via GRADE). Another approach would be to assess the facilitators and barriers to successfully delivering a pilot-tested intervention to new populations and in new contexts (e.g., scale-up; Saeri et al., 2021).
One additional limitation concerns the intervention of tax deductibility (Peloza & Steel, 2005). This systematic review and meta-analysis investigated the impact of tax deductibility on charitable donations primarily in the United States, with a minority of included primary studies describing the effect in similar countries such as Canada and the United Kingdom. Given that formal tax structures and cultural values of taxation and charitable giving differ significantly between countries, and tax policy can vary over time within a given country, the substantial effect size observed in Peloza and Steel’s (2005) meta-analysis may not hold in other settings.
Conclusion
Increasing charitable donations could benefit society in a multitude of ways: helping to address global poverty, health, animal suffering, climate change, human rights, and the long-term future of humanity. As a result, identifying robust strategies for promoting charitable causes can have widespread social benefits. Providing good review-level evidence is a key way that charity science can contribute to evidence-informed decision-making in this important area.
In this meta-review, we synthesised multidisciplinary literature on how to promote charitable donations. We identified a range of strategies that may increase donations and some mechanisms that may help explain their effects. These findings suggest that organisations can solicit more money by focusing on individual victims, increasing the publicity of donations, discussing the impact of the donation, and both ensuring and promoting the tax-deductibility of their charity.
Future reviews into other interventions—particularly those conducted outside of contrived experimental settings—would allow researchers and practitioners to assess the ecological validity of those interventions. Readers could have more faith in those reviews if they more consistently followed best-practice approaches to systematic reviews. Our meta-review reveals patterns and gaps within the current research, but it also identifies an array of well-researched mechanisms for promoting charitable donations. Using the findings of these reviews may increase the funds directed to some of the most important and neglected problems facing humanity.
Notes
Dictator games are designed as contrived analogies for donation situations: one participant is given some money and the chance to donate it to another, with no consequences or tangible benefits for the donor. Since actual money changes hands to a relative stranger, we deemed it sufficiently analogous to real charitable donations for this review.
References
Andrews, K. R., Carpenter, C. J., Shaw, A. S., & Boster, F. J. (2008). The legitimization of paltry favors effect: A review and meta-analysis. Communication Reports, 21(2), 59–69. https://doi.org/10.1080/08934210802305028
Assmus, G., Farley, J. U., & Lehmann, D. R. (1984). How advertising affects sales: Meta-analysis of econometric results. Journal of Marketing Research, 21(1), 65–74. https://doi.org/10.2307/3151793
Becker, L. A., & Oxman, A. D. (2011). Overviews of reviews. In J. P. T. Higgins & S. Green (Eds.), Cochrane Handbook for Systematic Reviews of Interventions (Version 5.1.0). The Cochrane Collaboration. https://handbook-5-1.cochrane.org/chapter_22/22_overviews_of_reviews.htm
Bekkers, R., & Wiepking, P. (2011a). Who gives? A literature review of predictors of charitable giving part one: Religion, education, age and socialisation. Voluntary Sector Review, 2(3), 337–365. https://doi.org/10.1332/204080511X6087712
Bekkers, R., & Wiepking, P. (2011b). A literature review of empirical studies of philanthropy: Eight mechanisms that drive charitable giving. Nonprofit and Voluntary Sector Quarterly, 40(5), 924–973. https://doi.org/10.1177/0899764010380927
Bendapudi, N., Singh, S. N., & Bendapudi, V. (1996). Enhancing helping behavior: An integrative framework for promotion planning. Journal of Marketing, 60(3), 33–49. https://doi.org/10.1177/002224299606000303
Bennett, R. (1998). Shame, guilt & responses to non-profit & public sector ads. International Journal of Advertising, 17(4), 483–499. https://doi.org/10.1080/02650487.1998.11104734
Bennett, R. (2016). Preventing charity website browsers from quitting the “donate now” page: A case study with recommendations. Social Business, 6(3), 291–306. https://doi.org/10.1362/204440816x14811339560974
Bennett, R. (2019). Nonprofit marketing and fundraising: A research overview. Routledge. https://content.taylorfrancis.com/books/download?dac=C2017-0-67448-0&isbn=9781351055093&format=googlePreviewPdf
Blinded for review. (2019). Philanthropy and charitable giving: A review of reviews. https://osf.io/465ej/?view_only=8f0ed79442cc4bc59feeb8d0880c6698
Bolkan, S., & Rains, S. A. (2017). The legitimization of paltry contributions as a compliance-gaining technique: A meta-analysis testing three explanations. Communication Research, 44(7), 976–996. https://doi.org/10.1177/0093650215602308
Bradley, A., Lawrence, C., & Ferguson, E. (2018). Does observability affect prosociality? Proceedings of the Royal Society B: Biological Sciences, 285(1875). https://doi.org/10.1098/rspb.2018.0116
Butts, M. M., Lunt, D. C., Freling, T. L., & Gabriel, A. S. (2019). Helping one or helping many? A theoretical integration and meta-analytic review of the compassion fade literature. Organizational Behavior and Human Decision Processes, 151, 16–33. https://doi.org/10.1016/j.obhdp.2018.12.006
Caviola, L., Faulmüller, N., Everett, J. A. C., Savulescu, J., & Kahane, G. (2014). The evaluability bias in charitable giving: Saving administration costs or saving lives? Judgment and Decision Making, 9(4), 303–316. https://www.ncbi.nlm.nih.gov/pubmed/25279024
Caviola, L., Schubert, S., Teperman, E., Moss, D., Greenberg, S., & Faber, N. S. (2020). Donors vastly underestimate differences in charities’ effectiveness. Judgment and Decision Making, 15(4), 509–516. http://journal.sjdm.org/20/200504/jdm200504.pdf
Chapman, C. M., Louis, W. R., & Masser, B. M. (2018). Identifying (our) donors: Toward a social psychological understanding of charity selection in Australia. Psychology and Marketing, 35(12), 980–989. https://doi.org/10.1002/mar.21150
Chapman, C. M., Masser, B. M., & Louis, W. R. (2020). Identity motives in charitable giving: Explanations for charity preferences from a global donor survey. Psychology & Marketing, 37(9), 1277–1291. https://doi.org/10.1002/mar.21362
Charity Navigator. (2022). 10 Most followed charities. Charity Navigator. https://www.charitynavigator.org/index.cfm?bay=topten.detail&listid=148
Cheung, M.W.-L. (2014). metaSEM: An R package for meta-analysis using structural equation modeling. Frontiers in Psychology, 5, 1521. https://doi.org/10.3389/fpsyg.2014.01521
Conigrave, J. (2019). msemtools: Routines, tables, and figures for metaSEM analyses (Version 0.9.8) [Computer software]. Github. https://github.com/JConigrave/msemtools
Coyne, S. M., Padilla-Walker, L. M., Holmgren, H. G., Davis, E. J., Collier, K. M., Memmott-Elison, M. K., & Hawkins, A. J. (2018). A meta-analysis of prosocial media on prosocial behavior, aggression, and empathic concern: A multidimensional approach. Developmental Psychology, 54(2), 331–347. https://doi.org/10.1037/dev0000412
Crocker, J., Canevello, A., & Brown, A. A. (2017). Social motivation: Costs and benefits of selfishness and otherishness. Annual Review of Psychology, 68, 299–325. https://doi.org/10.1146/annurev-psych-010416-044145
de Wit, A., & Bekkers, R. (2017). Government support and charitable donations: A meta-analysis of the crowding-out hypothesis. Journal of Public Administration Research and Theory, 27(2), 301–319. https://doi.org/10.1093/jopart/muw044
Del Re, A. C. (2020). Package “compute.es”: Compute Effect Sizes (Version 0.2–5) [Computer software]. https://cran.r-project.org/web/packages/compute.es/compute.es.pdf
Eisend, M. (2015). Have we progressed marketing knowledge? A meta-meta-analysis of effect sizes in marketing research. Journal of Marketing, 79(3), 23–40. https://doi.org/10.1509/jm.14.0288
Eisend, M., & Tarrahi, F. (2016). The effectiveness of advertising: A meta-meta-analysis of advertising inputs and outcomes. Journal of Advertising, 45(4), 519–531. https://doi.org/10.1080/00913367.2016.1185981
Engel, C. (2011). Dictator games: A meta study. Experimental Economics, 14(4), 583–610. https://doi.org/10.1007/s10683-011-9283-7
Everett, J. A. C., Caviola, L., Kahane, G., Savulescu, J., & Faber, N. S. (2015). Doing good by doing nothing? The role of social norms in explaining default effects in altruistic contexts. European Journal of Social Psychology, 45(2), 230–241. https://doi.org/10.1002/ejsp.2080
Feeley, T. H., Anker, A. E., & Aloe, A. M. (2012). The door-in-the-face persuasive message strategy: A meta-analysis of the first 35 years. Communication Monographs, 79(3), 316–343. https://doi.org/10.1080/03637751.2012.697631
Fromell, H., Nosenzo, D., & Owens, T. (2020). Altruism, fast and slow? Evidence from a meta-analysis and a new experiment. Experimental Economics, 23(4), 979–1001. https://doi.org/10.1007/s10683-020-09645-z
Funder, D. C., & Ozer, D. J. (2019). Evaluating effect size in psychological research: Sense and nonsense. Advances in Methods and Practices in Psychological Science, 2(2), 156–168. https://doi.org/10.1177/2515245919847202
Ganann, R., Ciliska, D., & Thomas, H. (2010). Expediting systematic reviews: Methods and implications of rapid reviews. Implementation Science, 5, 56. https://doi.org/10.1186/1748-5908-5-56
GiveWell. (2021). Our Top Charities. GiveWell. https://www.givewell.org/charities/top-charities
Goenka, S., & van Osselaer, S. M. J. (2019). Charities can increase the effectiveness of donation appeals by using a morally congruent positive emotion. The Journal of Consumer Research, 46(4), 774–790. https://doi.org/10.1093/jcr/ucz012
Grant, M. J., & Booth, A. (2009). A typology of reviews: An analysis of 14 review types and associated methodologies. Health Information and Libraries Journal, 26(2), 91–108. https://doi.org/10.1111/j.1471-1842.2009.00848.x
Greitemeyer, T. (2009). Effects of songs with prosocial lyrics on prosocial thoughts, affect, and behavior. Journal of Experimental Social Psychology, 45(1), 186–190. https://doi.org/10.1016/j.jesp.2008.08.003
Guyatt, G. H., Oxman, A. D., Schünemann, H. J., Tugwell, P., & Knottnerus, A. (2011). GRADE guidelines: A new series of articles in the journal of clinical epidemiology. Journal of Clinical Epidemiology, 64(4), 380–382. https://doi.org/10.1016/j.jclinepi.2010.09.011
Hennessy, E. A., Johnson, B. T., & Keenan, C. (2019). Best practice guidelines and essential methodological steps to conduct rigorous and systematic meta-reviews. Applied Psychology. Health and Well-Being, 11(3), 353–381. https://doi.org/10.1111/aphw.12169
Higgins, J. P. T., Altman, D. G., Sterne, J. A. C., & on behalf of the Cochrane Statistical Methods Group and the Cochrane Bias Methods Group. (2011). Assessing risk of bias in included studies. In J. P. T. Higgins & S. Green (Eds.), Cochrane Handbook for Systematic Reviews of Interventions (Vol. 5.1.1). The Cochrane Collaboration.
Higgins, J. P. T., Thomas, J., Chandler, J., Cumpston, M., Li, T., Page, M. J., & Welch, V. A. (2019). Cochrane handbook for systematic reviews of interventions. John Wiley & Sons. https://training.cochrane.org/cochrane-handbook-systematic-reviews-interventions
Hinde, S., & Spackman, E. (2015). Bidirectional citation searching to completion: An exploration of literature searching methods. PharmacoEconomics, 33(1), 5–11. https://doi.org/10.1007/s40273-014-0205-3
HM Treasury. (2020). Magenta book. HM Treasury. https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/879438/HMT_Magenta_Book.pdf
Hulland, J., & Houston, M. B. (2020). Why systematic review papers and meta-analyses matter: An introduction to the special issue on generalizations in marketing. Journal of the Academy of Marketing Science, 48(3), 351–359. https://doi.org/10.1007/s11747-020-00721-7
Hultcrantz, M., Rind, D., Akl, E. A., Treweek, S., Mustafa, R. A., Iorio, A., Alper, B. S., Meerpohl, J. J., Murad, M. H., Ansari, M. T., Katikireddi, S. V., Östlund, P., Tranæus, S., Christensen, R., Gartlehner, G., Brozek, J., Izcovich, A., Schünemann, H., & Guyatt, G. (2017). The GRADE Working Group clarifies the construct of certainty of evidence. Journal of Clinical Epidemiology, 87, 4–13. https://doi.org/10.1016/j.jclinepi.2017.05.006
Hung, C. (2020). Commercialization and nonprofit donations: A meta-analytic assessment and extension. Nonprofit Management and Leadership, 31(2), 287–309. https://doi.org/10.1002/nml.21435
Jenni, K., & Loewenstein, G. (1997). Explaining the identifiable victim effect. Journal of Risk and Uncertainty, 14(3), 235–257. https://doi.org/10.1023/A:1007740225484
Jung, H., Seo, E., Han, E., Henderson, M. D., & Patall, E. A. (2020). Prosocial modeling: A meta-analytic review and synthesis. Psychological Bulletin, 146(8), 635–663. https://doi.org/10.1037/bul0000235
Khangura, S., Konnyu, K., Cushman, R., Grimshaw, J., & Moher, D. (2012). Evidence summaries: The evolution of a rapid review approach. Systematic Reviews, 1, 10. https://doi.org/10.1186/2046-4053-1-10
Kinnunen, S. P., & Windmann, S. (2013). Dual-processing altruism. Frontiers in Psychology, 4(193), 193. https://doi.org/10.3389/fpsyg.2013.00193
Larney, A., Rotella, A., & Barclay, P. (2019). Stake size effects in ultimatum game and dictator game offers: A meta-analysis. Organizational Behavior and Human Decision Processes, 151, 61–72. https://doi.org/10.1016/j.obhdp.2019.01.002
Lasswell, H. D. (1948). The structure and function of communication in society. The Communication of Ideas, 37(1), 136–139. http://www.irfanerdogan.com/dergiweb2008/24/12.pdf
Lee, S., & Feeley, T. H. (2017). A meta-analysis of the pique technique of compliance. Social Influence, 12(1), 15–28. https://doi.org/10.1080/15534510.2017.1305986
Lee, S., Moon, S.-I., & Feeley, T. H. (2016). A meta-analytic review of the legitimization of paltry favors compliance strategy. Psychological Reports, 118(3), 748–771. https://doi.org/10.1177/0033294116647690
Liang, J., Chen, Z., & Lei, J. (2014). Inspire me to donate: The use of mixed emotions in public service announcements. ACR North American Advances, http://www.acrwebsite.org/volumes/v42/acr_v42_17619.pdf
Lu, J. (2016). The philanthropic consequence of government grants to nonprofit organizations: A meta-analysis. Nonprofit Management and Leadership, 26(4), 381–400. https://doi.org/10.1002/nml.21203
MacAskill, W. (2015). Doing good better: How effective altruism can help you make a difference. Avery. https://www.amazon.com/Doing-Good-Better-Effective-Difference/dp/1592409105
Mazodier, M., Carrillat, F. A., Sherman, C., & Plewa, C. (2020). Can donations be too little or too much? European Journal of Marketing. https://doi.org/10.1108/EJM-03-2019-0278
Moher, D., Liberati, A., Tetzlaff, J., Altman, D. G., & PRISMA Group. (2010). Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. International Journal of Surgery, 8(5), 336–341. https://doi.org/10.1016/j.ijsu.2010.02.007
Nagel, J., & Waldmann, M. R. (2016). On having very long arms: How the availability of technological means affects moral cognition. Thinking and Reasoning, 22(2), 184–208. https://doi.org/10.1080/13546783.2015.1114023
Nettle, D., Harper, Z., Kidson, A., Stone, R., Penton-Voak, I. S., & Bateson, M. (2013). The watching eyes effect in the Dictator Game: It's not how much you give, it's being seen to give something. Evolution and Human Behavior, 34(1), 35–40.
Northover, S. B., Pedersen, W. C., Cohen, A. B., & Andrews, P. W. (2017). Artificial surveillance cues do not increase generosity: Two meta-analyses. Evolution and Human Behavior: Official Journal of the Human Behavior and Evolution Society, 38(1), 144–153. https://doi.org/10.1016/j.evolhumbehav.2016.07.001
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. https://doi.org/10.1126/science.aac4716
Oppenheimer, D. M., & Olivola, C. Y. (2010). The science of giving: Experimental approaches to the study of charity. Taylor & Francis.
Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., & Moher, D. (2021). Updating guidance for reporting systematic reviews: Development of the PRISMA 2020 statement. Journal of Clinical Epidemiology, 134, 103–112. https://doi.org/10.1016/j.jclinepi.2021.02.003
Peloza, J., & Steel, P. (2005). The price elasticities of charitable contributions: A meta-analysis. Journal of Public Policy and Marketing, 24(2), 260–272. https://doi.org/10.1509/jppm.2005.24.2.260
Pham, C., & Septianto, F. (2019). A smile – the key to everybody’s heart?: The interactive effects of image and message in increasing charitable behavior. European Journal of Marketing, 54(2), 261–281. https://doi.org/10.1108/EJM-01-2019-0019
Pollock, A., Campbell, P., Brunton, G., Hunt, H., & Estcourt, L. (2017). Selecting and implementing overview methods: Implications from five exemplar overviews. Systematic Reviews, 6(1), 145. https://doi.org/10.1186/s13643-017-0534-3
R Core Team. (2020). R: A language and environment for statistical computing (Version 3.6.3) [Computer software]. R Foundation for Statistical Computing. https://www.R-project.org/
Rand, D. G., Brescoll, V. L., Everett, J. A., Capraro, V., & Barcelo, H. (2016). Social heuristics and social roles: Intuition favors altruism for women but not for men. Journal of Experimental Psychology: General, 145(4), 389. https://doi.org/10.1037/xge0000154
Richard, F. D., Bond, C. F., & Stokes-Zoota, J. J. (2003). One hundred years of social psychology quantitatively described. Review of General Psychology, 7(4), 331–363. https://doi.org/10.1037/1089-2680.7.4.331
Rothschild, M. L. (1979). Marketing communications in nonbusiness situations or why it’s so hard to sell brotherhood like soap. Journal of Marketing, 43, 11–20. https://www.ncbi.nlm.nih.gov/pubmed/12267408
Saeri, A. K., Slattery, P., Tear, M. J., Varazzani, C., Epstein, D., Knott, C., Kusmanoff, A., Bagshaw, H., Phillips, K., Liao, J., Orjuela, S., & Smith, A. L. (2021). Scale up of behaviour change interventions: A rapid review of evidence and practice. https://doi.org/10.31219/osf.io/scd3k
Salido-Andres, N., Rey-Garcia, M., Alvarez-Gonzalez, L. I., & Vazquez-Casielles, R. (2021). Mapping the field of donation-based crowdfunding for charitable causes: systematic review and conceptual framework. VOLUNTAS: International Journal of Voluntary and Nonprofit Organizations, 32(2), 288–302. https://doi.org/10.1007/s11266-020-00213-w
Schmidt, F. L., & Oh, I.-S. (2013). Methods for second order meta-analysis and illustrative applications. Organizational Behavior and Human Decision Processes, 121(2), 204–218. https://doi.org/10.1016/j.obhdp.2013.03.002
Septianto, F., Tjiptono, F., Paramita, W., & Chiew, T. M. (2020). The interactive effects of religiosity and recognition in increasing donation. European Journal of Marketing. https://doi.org/10.1108/EJM-04-2019-0326
Shea, B. J., Reeves, B. C., Wells, G., Thuku, M., Hamel, C., Moran, J., Moher, D., Tugwell, P., Welch, V., Kristjansson, E., & Henry, D. A. (2017). AMSTAR 2: A critical appraisal tool for systematic reviews that include randomised or non-randomised studies of healthcare interventions, or both. BMJ, 358, j4008. https://doi.org/10.1136/bmj.j4008
Singer, P. (2019). The life you can save: 10th anniversary edition. The Life You Can Save. https://www.thelifeyoucansave.org
Slattery, P., Vidgen, R., & Finnegan, P. (2020). Persuasion: An analysis and common frame of reference for IS research. Communications of the Association for Information Systems, 46(1), 3. https://doi.org/10.17705/1CAIS.04603
Sparks, A., & Barclay, P. (2013). Eye images increase generosity, but not for long: The limited effect of a false cue. Evolution and Human Behavior: Official Journal of the Human Behavior and Evolution Society, 34(5), 317–322. https://doi.org/10.1016/j.evolhumbehav.2013.05.001
Stanley, T. D., Carter, E. C., & Doucouliagos, H. (2018). What meta-analyses reveal about the replicability of psychological research. Psychological Bulletin, 144(12), 1325–1346. https://doi.org/10.1037/bul0000169
Stannard-Stockton, S. (2009). Appealing to donors' hearts and heads. The Chronicle of Philanthropy. https://www.philanthropy.com/article/Appealing-to-Donors-Hearts/173425
Sterne, J. A. C., Savović, J., Page, M. J., Elbers, R. G., Blencowe, N. S., Boutron, I., Cates, C. J., Cheng, H.-Y., Corbett, M. S., Eldridge, S. M., Emberson, J. R., Hernán, M. A., Hopewell, S., Hróbjartsson, A., Junqueira, D. R., Jüni, P., Kirkham, J. J., Lasserson, T., Li, T., & Higgins, J. P. T. (2019). RoB 2: A revised tool for assessing risk of bias in randomised trials. BMJ, 366, l4898. https://doi.org/10.1136/bmj.l4898
Wallace, E., Buil, I., & de Chernatony, L. (2017). When does “liking” a charity lead to donation behaviour?: Exploring conspicuous donation behaviour on social media platforms. European Journal of Marketing, 51(11–12), 2002–2029. https://doi.org/10.1108/EJM-03-2017-0210
Weyant, J. M. (1996). Application of compliance techniques to direct-mail requests for charitable donations. Psychology and Marketing, 13(2), 157–170. https://doi.org/10.1002/(SICI)1520-6793(199602)13:2%3c157::AID-MAR3%3e3.0.CO;2-E
Wiepking, P., & Bekkers, R. (2012). Who gives? A literature review of predictors of charitable giving part two: Gender, family composition and income. Voluntary Sector Review. https://www.ingentaconnect.com/content/tpp/vsr/2012/00000003/00000002/art00005
World Health Organization. (2017). Rapid reviews to strengthen health policy and systems: A practical guide. World Health Organization. https://www.who.int/alliance-hpsr/resources/publications/rapid-review-guide/en/
Xu, J., & Huang, G. (2020). The relative effectiveness of gain‐framed and loss‐framed messages in charity advertising: Meta‐analytic evidence and implications. International Journal of Nonprofit and Voluntary Sector Marketing, 25(4), e1675. https://doi.org/10.1002/nvsm.1675
Acknowledgements
We gratefully acknowledge the assistance of Lucius Caviola, David Reinstein, and several anonymous reviewers for their efforts in maximising the rigour and usefulness of this manuscript.
Funding
Open Access funding enabled and organized by CAUL and its Member Institutions. The authors received no funds, grants, or other support from any organisation for conducting this study or for the preparation and submission of this manuscript.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Saeri, A.K., Slattery, P., Lee, J. et al. What Works to Increase Charitable Donations? A Meta-Review with Meta-Meta-Analysis. Voluntas (2022). https://doi.org/10.1007/s11266-022-00499-y
Keywords
- Philanthropy
- Charity
- Behaviour change
- Prosocial behaviour
- Overview of reviews
- Meta-review
- Meta-meta-analysis