
Trust as a Test for Unethical Persuasive Design

  • Research Article
  • Published in Philosophy & Technology

Abstract

Persuasive design (PD) draws on our basic psychological makeup to build products that make our engagement with them habitual. It uses variable rewards, creates Fear of Missing Out (FOMO), and leverages social approval to incrementally increase and maintain user engagement. Social media and networking platforms, video games, and slot machines are all examples of persuasive technologies. Recent attention has focused on the dangers of PD: It can deceptively prod users into forming habits that help the company’s bottom line but not the user’s wellbeing. But PD is not inherently immoral. We can take advantage of our psychological biases to make beneficial changes in ways that enhance our agency rather than limit it. Knowing that a tool is persuasively designed is a consideration in favor of using it when we are trying to break bad habits, such as smoking. How are we to conceptually distinguish between ethical and unethical uses of PD? In this paper, I argue that unethical uses of PD betray or erode our trust. Annette Baier offers a moral test for trust: If gaining knowledge about what other parties do with our trust in them would lead us to stop trusting, then that trusting relationship is immoral. I apply this test to the case of PD. Using trust as a litmus test for ferreting out unethical PD has several advantages, one of which is that it reveals how the harm of unethical PD extends beyond the individual to her wider social network. I close the paper by investigating these cascading effects.


Notes

  1. By “habit,” I mean that part of our behavioral process becomes subconscious, bypassing self-reflection and with it the decision to disengage. I want to thank an anonymous reviewer for clarifying this point and suggesting the language.

  2. This is what much of applied ethics, in particular the burgeoning field of data ethics, has tried to do. For starters, see Davis (2012), O’Keefe and Brien (2018), European Data Protection Supervisor (EDPS) (2015), Richterich (2018), and Robinson (2015).

  3. For starters, see Berdichevsky and Neuenschwander (1999); Verbeek (2009); Burr et al. (2018); Lanzing (2019); and Owens and Cribb (2019). Even Nir Eyal, a leading name in PD, warns of the danger it presents to autonomy with his ‘manipulation matrix’ (Eyal 2014, 167).

  4. See Nussbaum and Sen’s (1993) capability approach to wellbeing, or Deci and Ryan’s (1985) self-determination approach to motivation, as salient examples of the importance of autonomy to wellbeing.

  5. I say “reasonably” here to reflect the fact that what will betray someone’s trust is, to a certain extent, subjective. We each have varying thresholds for what counts as a betrayal, in part due to histories of experience that make us more sensitive to some transgressions than to others. While these individual idiosyncrasies cannot be predicted in advance, we can make assumptions about the kinds of actions that are likely to betray any reasonable person’s trust.

  6. I want to thank two anonymous reviewers for alerting me to the problems of conceiving of the trust relations involved in PD as residing simply between the designer and the user.

  7. Trust has the power to be therapeutic, even proleptic. See McGeer (2008).

  8. For an account of how we integrate our agency with technology and our surrounding environment, allowing us to trust them, where trust is characterized as an unquestioning attitude, see Nguyen (Forthcoming).

  9. Many thanks to an anonymous reviewer for suggesting that I look at trust between user and PT from this angle.

  10. Again, I do not mean to suggest by this formulation that there is a clear and distinct betrayer in the case of PD.

  11. This is why gaslighting is so wrong, Zagzebski and others suggest. The victim of gaslighting feels like she is going crazy because she is consistently given reasons to think that she cannot trust her own memory, even her own perception. While these reasons are eminently defeasible, with her self-trust broken she cannot see them as defeasible.

  12. Of course, jealousy may be misplaced or entirely inappropriate. But it is still real, and it still has the potential to harm friendships and trust. “The logic of jealousy is not an objective logic of the situation but of the subject’s construal of the situation, a subjective logic.” (Roberts 2003, 47) This construal, accurate or not, has consequences for behavior and patterns of relations that threaten to snowball.

  13. Thanks are due to an anonymous reviewer for nudging me to clarify this.


Acknowledgments

I am deeply grateful to Nick Smyth and Laura Specker Sullivan for our insightful conversations that shaped early drafts of this paper, to Nathan Ballantyne for introducing me to the topic of persuasive design, and to two anonymous reviewers for their critical and encouraging comments.

Author information

Corresponding author: Johnny Brennan.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article


Cite this article

Brennan, J. Trust as a Test for Unethical Persuasive Design. Philos. Technol. 34, 767–783 (2021). https://doi.org/10.1007/s13347-020-00431-6

