Persuasive Backfiring: When Behavior Change Interventions Trigger Unintended Negative Outcomes
- Cite this paper as:
- Stibe A., Cugelman B. (2016) Persuasive Backfiring: When Behavior Change Interventions Trigger Unintended Negative Outcomes. In: Meschtscherjakov A., De Ruyter B., Fuchsberger V., Murer M., Tscheligi M. (eds) Persuasive Technology. PERSUASIVE 2016. Lecture Notes in Computer Science, vol 9638. Springer, Cham
Numerous scholars study how to design evidence-based interventions that can improve the lives of individuals in ways that also bring social benefits. However, within the behavioral sciences in general, and the persuasive technology field specifically, scholars rarely focus on or report the negative outcomes of behavior change interventions, and possibly fewer report a special type of negative outcome, a backfire. This paper has been authored to start a wider discussion within the scientific community on intervention backfiring. Within this paper, we provide tools to aid academics in the study of persuasive backfiring, present a taxonomy of backfiring causes, and provide an analytical framework containing the intention-outcome and likelihood-severity matrices. To increase knowledge on how to mitigate the negative impact of intervention backfiring, we discuss research and practitioner implications.
Keywords: Backfire · Taxonomy · Behavior change · Intention-outcome matrix · Likelihood-severity matrix · Persuasive technology · Intervention design
Scholars have focused on the ways in which technology can produce positive outcomes, such as increasing users’ physical activity, reducing binge drinking, supporting smoking cessation, or managing mood and anxiety disorders. There is considerable research on this topic, with several systematic reviews and meta-analyses that focus on a wide variety of positive outcomes [9, 38].
However, few papers report negative outcomes, and possibly fewer report a special class of negative outcomes, called a backfire, which we define as an intervention that triggers audiences to adopt the opposite of the target behavior, rendering the intervention partially responsible for causing the behavior it was designed to reduce or eliminate.
Examples of backfiring interventions include drug use reduction programs that trigger drug use by accidentally creating a social norm that leads some youth to feel like everyone else is trying drugs except for them; traffic safety campaigns that use shame, which accidentally triggers denial and resentment that lead to more dangerous driving; binge drinking screeners that prompt some youth who drink less than average to feel pressured to catch up to their peers; or a tobacco industry sponsored anti-smoking campaign that encouraged parents to lecture their children on not smoking, which triggered more youth to smoke.
In this paper, we aim to start a wider scientific discussion on intervention backfiring, to provide analytical frameworks to help structure this discussion, to support academic research on the topic, and to raise practitioners’ awareness of the potential risks of intervention backfires so they can better manage them.
Numerous fields provide recommendations on how to design behavior change interventions, including social marketing [1, 25], evidence-based behavioral medicine [12, 13], several health behavior change approaches [2, 7, 33], socially influencing systems [36, 37], persuasive technology, and classic persuasion literature.
However, the application of evidence-based intervention design frameworks does not guarantee that intervention designers will achieve positive outcomes. In practice, scientific models are more likely to inspire interventions, rather than to dictate exactly how they are implemented.
Quite often, interventions that start out with a solid theoretical underpinning lose their theoretical roots after adapting to real-world necessities, implementation complexities, budget limits, stakeholder feedback, market testing, political tampering, and so on. Then, after moving through the process of translating abstract behavioral science principles into concrete intervention materials, such as translating the health belief model into a health app, it can be difficult to make clear links between the foundational theory and applied interventions.
No matter how promising interventions may have appeared on paper, in practice they always have the potential to exert unforeseen, and possibly negative, outcomes. This is one of the reasons why new behavior change interventions typically undergo stringent monitoring and evaluation before they are widely disseminated.
Scholars and practitioners who report that they have disseminated a backfiring technology can easily feel embarrassed, or worse, find themselves not just stigmatized, but potentially unfunded. Without doubt, there are many practical incentives and disincentives that may motivate people not to publish any information detailing how their digital interventions backfired, causing even a small degree of unintended harm, even in marginal populations.
We believe that this stigma has created a climate where the existing body of scientific literature may possess a significant degree of publication bias, resulting from too many published studies that only report positive outcomes, and too few studies reporting negative outcomes. This stigma has the potential to create a climate where scientists and practitioners are at greater risk of disseminating harmful interventions.
3.1 Axes of the Intention-Outcome Matrix
Intended Outcome. An outcome that was intended by the intervention designer. For example, getting university students to reduce their binge-drinking.
Unintended Outcome. Any outcome that the intervention designer did not intend, whether it is a positive or negative outcome. For example, when a binge-drinking screener accidentally motivates some students to adopt other health outcomes, or triggers some students to drink more alcohol after they use it in a competition to see who can achieve the highest binge-drinking score.
Positive Outcome. An outcome that serves the interest of both the intervention designer and the target audience. This is a win-win situation, where both parties benefit. For example, a binge-drinking screener where the target audience achieves reduced alcohol consumption.
Negative Outcome. An outcome that does not serve the interest of the target audience. For example, a binge-drinking screener that causes small segments to drink more alcohol, or exposes them to greater risk.
3.2 Quadrants of the Intention-Outcome Matrix
Target Behavior. The primary intended positive behavioral outcome being sought, and typically reported.
Unexpected Benefits. A positive behavioral outcome that was not intentionally sought, but which was positive nonetheless, and may be reported as a complementary benefit of the intervention.
Backfiring. This category includes a number of negative outcomes: when an intervention causes the opposite of the intended outcome (e.g., more binge drinking instead of less). It also includes “side effects”, when the primary behavior is achieved, but the intervention also triggers unintended negative outcomes (e.g., using peer pressure to influence behavior, which lowers audiences’ self-esteem). This quadrant is further subdivided into a risk management matrix that contrasts the likelihood of backfiring (low to high) with the potential severity (minor to major).
Dark Patterns. When an intervention is used for the benefit of the developer, at the expense of the target audience. This is in the realm of unethical applications, such as coercion, deception, and fraud. For instance, scholars draw attention to dark game design patterns that developers use to provide negative experiences to players, not in their best interests, and without their consent [24, 27].
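The two axes above combine into a simple classification rule. As a minimal illustrative sketch (the function name and quadrant labels are our own rendering of the matrix, not part of its formal definition):

```python
def classify_outcome(intended: bool, positive: bool) -> str:
    """Map an outcome onto the quadrants of the intention-outcome matrix."""
    if intended and positive:
        return "Target Behavior"     # sought and beneficial to both parties
    if not intended and positive:
        return "Unexpected Benefit"  # unsought, but welcome nonetheless
    if not intended and not positive:
        return "Backfire"            # unintended harm to the target audience
    return "Dark Pattern"            # intended, but at the audience's expense

# Example: a binge-drinking screener that accidentally prompts some
# students to drink more falls into the backfire quadrant.
print(classify_outcome(intended=False, positive=False))  # → Backfire
```

The "positive" axis follows the paper's definition: an outcome is positive only when it serves the interests of both the designer and the target audience.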
The taxonomy of persuasive backfires presented in this paper was derived through a grounded theory methodology [6, 17], based on a corpus of academic, applied, and personal experiences with backfiring behavior change interventions.
We began the process by defining and limiting our selection criteria to interventions that backfired, that is, interventions that caused the opposite behavior, or unanticipated negative consequences contrary to the intentions of the program. We excluded flawed interventions that did not contribute towards any outcomes, positive or negative, as these programs constituted poor implementations, not backfires. The first type of flawed intervention includes programs that audiences did not find motivating, as these programs lacked the capacity to achieve significant outcomes. Similarly, we excluded interventions that faced implementation barriers, or created barriers among their target audiences, as it was not possible to assess the outcomes of these interventions.
To gather qualifying sources, we ran a call for references and examples across several academic, professional, and personal networks. The types of references we collected included journal papers, articles, program evaluations, and personal experiences. In total, we collected 47 responses.
We systematically reviewed all sources, and only included submissions that qualified as having demonstrated an unintended negative outcome, or which were presented as having contributed to an unintended negative outcome. We also included behavior change interventions that were not implemented within technology per se, but were implemented in a context that could reasonably be applied to online behavior change campaigns or digital products. We also received submissions of backfiring legislation, which is often used in conjunction with communication campaigns to elicit social change. In total, 30 responses were included in our qualitative analysis.
We carried out a qualitative assessment of the corpus, with a view to developing a taxonomy of triggers for backfiring interventions.
Taxonomy of persuasive backfires
A: LOW likelihood and MINOR severity
The superficial application of theory, such as copying surface tactics without understanding the underlying strategies, principles, or reasoning.
Research on social media, particularly on Foursquare, looks at driving user motivation through gamification mechanics that compete with the intended target behaviors of information sharing. More generally, gamification elements that drive extrinsic motivation may cease to be persuasive over the long term, and may potentially deplete intrinsic motivation.
B: HIGH likelihood and MINOR severity
Motivating people to take action for one strongly emphasized benefit, while omitting (or hiding) harmful factors in the fine print.
Stressing “low fat” while still including several unhealthy factors, such as high sugar or trans fats.
Asking people to take one pill a day to lower their risk of contracting HIV, which causes some groups to engage in riskier sex (and stop using condoms), contributing to an increase in other sexually transmitted diseases.
Resistance to messages that are incompatible with a person’s self-identity, which can induce unpleasant cognitive dissonance, leading the audience to reject or oppose the message.
Guilt and shame messages in anti-drinking ads for drunk drivers are ignored and, in some cases, backfire when the message is incompatible with how individuals view themselves.
Persuasive messaging involving an authority might provoke opposition from groups and individuals who are sensitive or resistant to authorities.
One anti-smoking campaign failed by advocating the message that “teens shouldn’t smoke… because they’re teens.” Youths in the 10th to 12th grade were 12 percent more likely to smoke for each parent-targeted ad they had seen in the last 30 days. According to developmental psychologists, teens 15 to 17 years old tend to reject authoritative messages because they believe they are independent, which rendered Philip Morris’ ad campaign largely useless.
The presence of peer information decreased the savings of nonparticipants who were ineligible for automatic enrollment in a savings plan, while higher observed peer savings rates also decreased savings. Discouragement from upward social comparison seems to drive this reaction.
When someone does something good in one area, they sometimes feel like they have a license to misbehave in another.
After donating to charity, people may feel licensed to behave less morally in subsequent decisions. For instance, donating to charity may have a dark side, as it negatively affects subsequent, seemingly unrelated moral behavior, such as the intention to be environmentally friendly.
C: LOW likelihood and MAJOR severity
When the source disseminates discrediting information, causing a misalignment of source and message credibility.
People report less favorable thoughts and attitudes towards a source after reading weak arguments presented by a high, rather than low, expertise source.
Too much fear mongering may discredit a campaign to the point of disbelief or humor.
Third party actors re-contextualize the source’s message, bringing a new meaning, which in many cases undermines the intervention by turning it into a public joke.
Creative works designed to cause fear can become trendy memes with a different meaning, such as humorous cigarette ads of smoking children, reefer madness, fashionable heroin chic, or the TV PSA “This is your brain. This is your brain on drugs,” which triggered numerous parodies.
England’s Beat Bullying Campaign triggered bullying and violence. The campaign was so popular at its launch that supplies of the “Beat Bullying” wristbands quickly sold out. Because of the scarcity of the bracelets, and the theme of the campaign, some kids were bullied for wearing the bracelet.
D: HIGH likelihood and MAJOR severity
When a tailored messaging system provides information that produces negative outcomes in some users, that could have been avoided with an appropriate message.
A drinking screener showed both low- and high-drinking students how much they consume in comparison to average consumption. Those above the norm felt encouraged to drink less, while those below received an implied message to drink more.
A similar boomerang effect occurs when a descriptive social norm is not accompanied by an injunctive social norm, in the way described above.
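One mitigation implied by the boomerang research above is to tailor normative feedback so that below-norm users never receive an implicit invitation to “catch up”, pairing the descriptive norm with an injunctive (approval) message instead. A minimal sketch under those assumptions (the message strings and thresholds are hypothetical, not taken from any cited screener):

```python
def norm_feedback(user_drinks: float, peer_average: float) -> str:
    """Return tailored normative feedback that avoids the boomerang effect."""
    if user_drinks > peer_average:
        # Descriptive norm is safe here: the comparison encourages drinking less.
        return (f"You report {user_drinks:.0f} drinks per week; the average "
                f"student reports {peer_average:.0f}. Most students drink less than you.")
    # Below the norm: omit the descriptive comparison and send an
    # injunctive approval message, so the implied message is never
    # "drink more to catch up with your peers".
    return "You drink less than most students - keep it up!"

print(norm_feedback(20, 12))  # descriptive norm for above-norm drinkers
print(norm_feedback(4, 12))   # injunctive approval for below-norm drinkers
```

The design choice mirrors the screener example: the same raw comparison produces opposite implied messages on either side of the norm, so the system must branch on the user’s position rather than show one message to everyone.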
When a message that was intended for one audience segment is misinterpreted by another group of people.
A one-size-fits-all approach to persuasive messaging can deter healthy eating behavior change, leading to a negative change in attitudes towards healthy eating over time.
The PlayPump program was meant for children to pump water while playing, but resulted in adults using the playground pump, leading to back injuries and other health problems.
When a behavior change intervention does not properly diagnose user behavior or psychological processes.
Gaze tracking software, designed to provide proactive help as patients read medical documents, used fixation time as a cue to identify when users were struggling with the material, and might need help. During the trial, the participants with low health literacy had a slower reading rate, causing the system to inappropriately offer help continually, which just annoyed the users, leading to lower comprehension compared to the control condition.
Changes in policies or directives that lead to unanticipated shifts in beliefs, attitudes or behaviors.
In the Netherlands, the drinking age was changed from 16 to 18, which news stations linked to an increase in drug use among citizens in this age group.
Nebraska’s Safe Haven law, dubbed the “Give Us Your Troubled Child” law, backfired: it enabled parents to drop off children of any age for the state to take in, resulting in some parents dropping off nearly grown children [Gra2008].
Demonstrating negative behavior, which exposes people to memory triggers of bad behaviors or temptations, potentially in moments of the greatest susceptibility.
Being exposed to others experiencing the stress of quitting smoking, for example, triggers people to want to smoke. Anti-bad-behavior interventions can remind people of the bad behavior, thus potentially sparking their motivation to engage in it.
A cookie company that introduced 100-calorie snack packs triggered people to eat far more of the small packs.
Interventions that use examples of popular bad behaviors can establish the bad behavior as a social norm.
An anti-littering program used campaign posters to stress how widespread littering was, accidentally contributing to a social norm for littering by demonstrating how many people litter.
Two separate studies indicated that the D.A.R.E. program was ineffective and, in some cases, pushed kids toward drug use and lowered self-esteem. Researchers suspected that the intervention’s message made some kids want to try drugs as a way of fitting in, and that the program’s message could be misinterpreted by youth as conveying that “peer pressure is around every corner, because everyone is doing drugs but you!”.
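The quadrants A through D above can be read as a small risk register for intervention designers. The sketch below shows one hypothetical way of bucketing identified backfire risks into those quadrants; the function, the register entries, and their likelihood/severity ratings are illustrative assumptions, not part of the taxonomy itself:

```python
def risk_quadrant(likelihood: str, severity: str) -> str:
    """Map a backfire risk onto quadrants A-D of the likelihood-severity matrix."""
    if likelihood == "low" and severity == "minor":
        return "A"  # e.g., superficial application of theory
    if likelihood == "high" and severity == "minor":
        return "B"  # e.g., message resistance, moral licensing
    if likelihood == "low" and severity == "major":
        return "C"  # e.g., source discrediting, re-contextualized messages
    return "D"      # high likelihood, major severity: e.g., normalizing bad behavior

# A hypothetical register for a planned intervention:
risks = {
    "gamification depletes intrinsic motivation": ("low", "minor"),
    "descriptive norm invites below-norm users to catch up": ("high", "major"),
}
for name, (likelihood, severity) in risks.items():
    print(risk_quadrant(likelihood, severity), "-", name)
```

Quadrant D risks would be addressed first, while quadrant A risks might simply be monitored, in line with the risk management framing of the taxonomy.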
6 Scientific Considerations
We have authored this paper to initiate a scientific discussion on intervention backfiring, and to encourage scholars to examine this phenomenon in greater depth. We hope this will contribute to more transparency within the academic community, leading to more well-rounded research on persuasive design and its application.
The scientific contribution of this paper includes the persuasive backfiring framework, its two matrices (intention-outcome and likelihood-severity), and taxonomy. These frameworks can be used to define, discuss, and further research behavior change interventions that trigger unintended negative outcomes. Although we have not discussed ethics in this paper, it provides a system to further define ethical and unethical uses of persuasive technology.
Perhaps the most important contribution of this paper is raising awareness of the elephant in the room: the well-known but rarely discussed fact that applied interventions targeting good outcomes occasionally produce negative outcomes. It also raises questions about the social stigma attached to the people and organizations that deploy backfiring interventions, and how this may contribute to underreporting negative outcomes, or omitting them altogether.
More concerning, this investigation has identified a potentially large source of publication bias, as we believe that there are many incentives and disincentives, including stigma and embarrassment, that motivate researchers to avoid publishing research on negative outcomes.
We believe that scholars will need to develop innovative new research methods to overcome the barriers that surround this subject. These barriers include the stigma associated with reporting on interventions that backfire, and the limited ability of scientific studies to identify backfires, which are more likely to become apparent in evaluations of real-world interventions, and which are often informally known by intervention staff but not publicly reported.
For instance, given the ability of technology to employ tailoring techniques, where content can be personalized, persuasive technology scholars are well equipped to undertake research on backfiring psychology, and to use this knowledge to advise intervention designers on when to omit influence principles that may be counterproductive for particular segments.
7 Practitioner Considerations
To assist practitioners, the taxonomy and tools presented in this paper provide a list of risks that can be addressed if identified before they occur. Moreover, these risks can be used to build more effective interventions, by teaching practitioners how to avoid, mitigate, or manage backfires. The examples in this study suggest that backfires may originate from political tampering during the intervention design phase, evaluations that do not distinguish between groups, overusing a principle to the point of triggering mistrust in a message (“your brain on drugs”), or misdiagnosis that leads to prescribing the wrong intervention, producing garbage-in-garbage-out interventions.
However, one ethically questionable practice we discovered was the potential intentional use of backfiring as a dark pattern: deploying an intervention that superficially appears to promote a healthy behavior, but which actually promotes unhealthy behavior. When corporations are obliged, or volunteer, to carry out public health interventions to warn the public against their product, these corporations can easily benefit from the intentional use of backfiring interventions.
For instance, the “Talk: They’ll Listen” campaign is frequently cited as an example of a clever antismoking ad campaign that on the surface appeared legitimate, but which in practice caused an increase in youth smoking. Consequently, policy makers and regulators who empower tobacco, alcohol, and pharmaceutical companies to run their own interventions need to be fully informed about the potential intentional use and abuse of backfire-based campaigns, which superficially look effective, but at a deeper level have been engineered to encourage the opposite effect.
Finally, we believe that backfires are normally present to some degree in all interventions, and that they can never be fully eliminated. It may be more practical to focus on strategies to reduce the riskiest backfires, to manage those that cannot be fully eliminated, and to continually monitor and improve interventions over time.
The stigma associated with reporting behavior change interventions that trigger negative outcomes has relegated the topic of intervention backfiring to an informal observation that is widely known, but rarely discussed or reported. This has created a climate where scholars routinely overemphasize positive outcomes, while failing to report that the same principles can also lead to unforeseen negative outcomes.
In this paper, we discussed multiple ways in which behavior change interventions can backfire. We provided a framework to help facilitate the discussion of this topic, presented tools to aid academics in the study of this realm, and offered advice to practitioners about potential risks. We encourage researchers to build on this work, and to take a more systematic look at approaches to the design of behavior change interventions.
In the future, researchers will need to innovate new ways to study this subject, and extend our scientific and practical knowledge of what pitfalls need to be avoided when designing technology-supported behavior change interventions. We advocate that researchers and practitioners adopt an honest and open attitude towards identifying and removing backfires as soon as possible, and to disseminating strategies to reduce their occurrence, before they cause more harm than good.