
Part of the book series: Law, Governance and Technology Series ((LGTS,volume 50))

Abstract

This chapter discusses new modes of interaction with consumers through personalisation. Marketers use algorithmic systems to generate, test, and distribute commercial content, products, and website layouts so as to optimise business metrics. While these developments may make the customer experience more pleasant and choices more straightforward, they fundamentally change business-to-consumer relations: by adopting a radical behaviourist approach, they subject consumers’ decision-making to an opaque, subtle, and potentially exploitative modulation of the digital choice architecture.


Notes

  1.

    For example, Giddens (1984), p. 183, argues that surveillance should be neutrally defined as “the coding of information relevant to the administration of subject populations, plus their direct supervision by officials and administrators of all sorts.”

  2.

    Google (2020), https://policies.google.com/.

  3.

    Instagram (2020), https://help.instagram.com.

  4.

    Microsoft (2020), https://privacy.microsoft.com/en-GB/.

  5.

    European Commission (2018). For example, more than two-thirds (71%) of respondents in the consumer survey reported that in their experience, nearly all or most websites use online targeted advertising.

  6.

    O’Brien (2020).

  7.

    Strycharz et al. (2019).

  8.

    Arora et al. (2008). The one-to-one idea was introduced back in 1993 by Don Peppers and Martha Rogers after observing the growing role of databases in marketing. They showed that marketing was shifting from a focus on the market to a focus on individual consumers and heading into an era in which mass marketing would no longer be an effective way to compete. In this setting, they predicted that “business will be able to communicate directly with consumers, individually, rather than shouting at them, in groups.” See, in general, Peppers and Rogers (1993).

  9.

    The inherent contradiction between personalisation and privacy is frequently referred to as the “privacy-personalisation paradox” (Aguirre et al. 2016): while personalisation requires extensive surveillance and may heighten privacy concerns among consumers, it also enhances consumer engagement with the firm.

  10.

    For a dive into how machine learning can be used to personalise all these aspects of a transaction, see Wirth and Sweet (2019). The book was written by the CEO and director of content marketing at Evergage. The company was acquired in 2020 by Salesforce, today a leading player in marketing products powered by AI. The book contains a valuable account of personalisation practices and the logic of data-driven business.

  11.

    Evergage Inc. (2019) reports survey results showing that marketers are using personalisation in email, homepages, landing pages, interior pages, online ads, product-detail pages, search results, pricing, and blog posts.

  12.

    By leveraging AI technologies and machine learning correctly, marketers “can now engage with every customer on a 1:1 basis, in-the-moment, with relevant offers across channels, at scale” (Galkin 2018).

  13.

    “Customer journey” is an expression popularly used by marketers to characterise and map the interaction between consumers and a brand. The basic idea is that marketing should no longer look at consumers as potential customers for products, but at individuals with whom to establish meaningful social relationships that foster interaction with them at many moments of the customer experience. For an overview of the concept, see Lemon and Verhoef (2016).

  14.

    Statista (2019).

  15.

    IAB Europe (2016), European Online Advertising surpasses TV to record annual spend of EUR 36.2 bn, https://iabeurope.eu/all-news/press-release-european-online-advertising-surpasses-tv-to-record-annual-spend-of-e36-2bn/.

  16.

    For an overview and historical evolution on the AdTech sector, see McStay (2016).

  17.

    For an informative overview of all such actors, see Sartor et al. (2021).

  18.

    White and Samuel (2019).

  19.

    On programmatic advertising and current development in the field, see Lee and Cho (2020).

  20.

    Chen et al. (2019).

  21.

    Targeted and personalised advertising are indeed often conflated. While targeted advertising tailors a campaign to a specific audience, personalised advertising further adapts that campaign to a group singled out within that audience. See Babet (2020).

  22.

    Ghose et al. (2013).

  23.

    van Noort et al. (2020).

  24.

    Google (2015), EA Sports Madden GIFERATOR, https://www.thinkwithgoogle.com/consumer-insights/consumer-trends/ea-sports-madden-giferator/.

  25.

    Adobe, Adobe Advances Programmatic Advertising with New Dynamic Creative Technology, 2015, available at https://s23.q4cdn.com/979560357/files/doc_events/2015/04/1/042915AdobeAdvancesProgrammaticAdvertising.pdf.

  26.

    Bakpayev et al. (2020).

  27.

    Another cutting-edge development in programmatic creatives is automatic copywriting. As explained in Altstiel et al. (2018), automatic copywriting tools take consumers’ data as input, ranging from the customer’s name to specific information about their personality. By means of natural-language-generation techniques, the tool outputs different possible keywords, design templates, and pictures to be used in the advertisements. For example, Lacy (2018) shows how Alibaba offers such a service to its business partners, who can use an AI copywriter that puts out 20,000 lines of copy per second.

  28.

    Ricci et al. (2011).

  29.

    In particular, Tuzhilin (2009), pp. 3–4.

  30.

    On the theory of long-tail business, see Anderson (2006). Specifically, in the field of recommendation systems, see Park and Tuzhilin (2008).

  31.

    In practice, pull and push methods are often used in combination. For example, Amazon’s recommendation system also pushes recommended items when a user actively searches for products through pull methods. Similarly, YouTube’s search results both match user queries and display recommended items based on inferred preferences.
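
    As an illustration, the combination of pull and push can be sketched in a few lines of Python. The catalogue, preference scores, and weighting below are hypothetical and purely illustrative, not a description of any actual platform’s system:

    ```python
    # Hypothetical catalogue: item -> (title keywords, category)
    CATALOGUE = {
        "tent-01":  ({"camping", "tent"}, "outdoor"),
        "stove-02": ({"camping", "stove"}, "outdoor"),
        "novel-03": ({"mystery", "novel"}, "books"),
    }

    # Inferred user preferences per category (e.g., from past behaviour)
    USER_PREFS = {"outdoor": 0.9, "books": 0.2}

    def search(query_terms, prefs):
        """Pull step: match the query; push step: re-rank by inferred taste."""
        results = []
        for item, (keywords, category) in CATALOGUE.items():
            match = len(query_terms & keywords)       # pull: query relevance
            if match == 0:
                continue
            score = match + prefs.get(category, 0.0)  # push: preference boost
            results.append((score, item))
        return [item for score, item in sorted(results, reverse=True)]

    print(search({"camping"}, USER_PREFS))  # → ['tent-01', 'stove-02']
    ```

    The pull step alone would return every query match in arbitrary order; the push step re-orders those matches by what the system believes the user likes.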

  32.

    Tuzhilin (2009).

  33.

    Jannach and Jugovac (2019).

  34.

    The Server Side (2017).

  35.

    The annual RecSys conference, dedicated to state-of-the-art research and applications in recommender systems, routinely hosts workshops on applying deep-learning methods to recommender systems. For an updated review of recommender-system applications in e-commerce based on deep learning, see Zhang et al. (2019).

  36.

    Haruna et al. (2017).

  37.

    CXL (2019).

  38.

    Greenberg and Witten (1985).

  39.

    Findlater and Gajos (2009).

  40.

    Wirth and Sweet (2019), pp. 94–117.

  41.

    Wirth and Sweet (2019), p. 94.

  42.

    Criteo (2017).

  43.

    LiftIgniter (2019).

  44.

    Logg et al. (2019); Gal and Elkin-Koren (2017) (“the most basic effect is a reduction in cost and/or an increase in quality, depending on the preferences set by the consumer, in the products purchased”); Brynjolfsson et al. (2003).

  45.

    Sunstein (2014).

  46.

    Goldman (2006) makes precisely this point, describing the benefits of digital market research, which can uncover and then fulfil “latent preferences” that consumers would otherwise be incapable of articulating themselves.

  47.

    Strycharz et al. (2019).

  48.

    For example, Wollan et al. (2017) show that personalisation is based on meaningfulness and relevance. Zealley et al. (2018) emphasise immediacy, since to do relevant marketing means to “serve a customer’s most relevant needs in the moment.” Similarly, Zoratti and Gallagher (2012) suggest that personalised marketing is relevant when a business reaches customers with the right message or offer in the right channel at the right time.

  49.

    Merchant (2019).

  50.

    Sweet (2018).

  51.

    Lytics (2018).

  52.

    Kantar (2014).

  53.

    Boudet et al. (2018).

  54.

    Bleier et al. (2018).

  55.

    LaptrinhX (2020).

  56.

    From the YouTube blog: “If viewers are watching more YouTube, it signals to us that they’re happier with the content they’ve found.” See YouTube, “Why We Focus on Watch Time,” 2012, https://youtube-creators.googleblog.com/2012/08/youtube-now-why-we-focus-on-watch-time.html.

  57.

    Lury and Day (2019) speak of “zooming of the self”.

  58.

    Lupton (2016), p. 338. The essay draws on the concept of “algorithmic assemblage” and expands the concept by arguing that thanks to algorithms, data subjects co-evolve with others in their cohort: “The vitality of digital data has significant implications for people’s data practices. People are confronted with attempting to gain some purchase on information about themselves which is not only continually generated but is also used by other actors and agencies in ways of which they may not be fully aware”.

  59.

    A lucid attempt to lay bare the biased nature of personalisation can be found in Greene and Shmueli (2019), p. 4. Speaking of the person as a feature vector, they deny that machine-learning personalisation can be reconciled with the humanistic view of the person. Rather, “personalized scores are generated by combining information about your observed behavior with certain assumptions about you and your goals, beliefs, and preferences to predict a very constrained set of future actions”.

  60.

    Kohavi and Thomke (2017) describe how Microsoft and several other leading companies—including Amazon, Booking.com, Facebook, and Google—each conduct more than 10,000 online controlled experiments annually, with many tests involving millions of users.

  61.

    See, e.g., Vertical Leap (2018).

  62.

    A popular approach in algorithmic marketing is contextual multi-armed bandit testing based on reinforcement learning.
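
    A minimal sketch of the idea, with hypothetical arms, contexts, and conversion rates (not any vendor’s actual implementation): an epsilon-greedy contextual bandit keeps a running reward estimate per (context, arm) pair, mostly exploiting the best-known variant while occasionally exploring alternatives:

    ```python
    import random

    class EpsilonGreedyBandit:
        """Epsilon-greedy contextual bandit with incremental mean updates."""

        def __init__(self, arms, epsilon=0.1, seed=0):
            self.arms = list(arms)
            self.epsilon = epsilon
            self.rng = random.Random(seed)
            self.counts = {}   # (context, arm) -> number of pulls
            self.values = {}   # (context, arm) -> running mean reward

        def choose(self, context):
            if self.rng.random() < self.epsilon:
                return self.rng.choice(self.arms)        # explore
            # exploit: arm with the highest estimated reward in this context
            return max(self.arms,
                       key=lambda a: self.values.get((context, a), 0.0))

        def update(self, context, arm, reward):
            key = (context, arm)
            n = self.counts.get(key, 0) + 1
            self.counts[key] = n
            old = self.values.get(key, 0.0)
            self.values[key] = old + (reward - old) / n  # incremental mean

    # Hypothetical simulation: variant "B" converts better for mobile users.
    bandit = EpsilonGreedyBandit(["A", "B"], epsilon=0.1, seed=42)
    true_rate = {("mobile", "A"): 0.02, ("mobile", "B"): 0.08}
    for _ in range(5000):
        arm = bandit.choose("mobile")
        converted = bandit.rng.random() < true_rate[("mobile", arm)]
        bandit.update("mobile", arm, 1.0 if converted else 0.0)
    ```

    Unlike a classic A/B test, which splits traffic evenly until the experiment ends, the bandit progressively shifts traffic towards the better-performing variant while the experiment is still running.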

  63.

    Kohavi and Thomke (2017): “some executives mistakenly believe that causality isn’t important. In their minds all they need to do is establish correlation, and causality can be inferred. Wrong!”

  64.

    Siroker and Koomen (2013), p. 92, argue that “A/B testing neutralises the ideological and replaces it with the empirical. Instead of, ‘I feel we should do X because that’s what I feel,’ a culture of A/B testing encourages both curiosity and humility, where people say, ‘I hypothesise that X is better than what we have right now. I don’t have the data today, but let’s run an experiment and test it.’ After that process is complete, A/B testing then offers the much stronger claim: ‘I know we should do X because of the evidence that it’s the right thing to do.’”

  65.

    As Lowrie (2017) argues, algorithms can only be evaluated in their functioning as components of extended computational assemblages; on their own, they are inert. Consequently, the epistemological coding proper to this evaluation does not turn on truth and falsehood but rather on efficiency.

  66.

    Susser (2019). The understanding of technology as a mediating interface between people and the world is founded on the post-phenomenological approach to technology embraced by Don Ihde and Peter-Paul Verbeek, according to which technology bears meaning only in context. In particular, the theory of technological mediation analyses the influence that technology has on human behaviour and relations. For an introduction, see Verbeek (2015).

  67.

    The experiment is explained in Luguri and Strahilevitz (2021).

  68.

    Among the many legal scholars who have written on the regulatory nature of code, see Lessig (2000); Grimmelmann (2004); Zittrain (2009).

  69.

    Lessig (2000), p. 84.

  70.

    Thaler and Sunstein (2009).

  71.

    Kahneman (2011). As is well known, Kahneman’s great contribution to behavioural science (and economics) lies in a general theory of cognition that divides the human mind into a dual system consisting of two types of cognitive activity, which Kahneman calls System 1 and System 2. System 1 makes judgments quickly, in ways that are experienced as effortless and mostly automatic. In contrast, System 2 is slower, experienced as more effortful, and quite limited as to how much incoming information it can process at any given time. System 2 handles the most complex and mentally taxing operations, but System 1 serves as the pervasive default mode for making decisions. Behavioural economists largely focus on how elements of System 1 processing lead to decision-making outcomes that diverge from the optimal outcomes predicted by a rational-choice perspective. Thus, a choice architecture can be created to leverage System 2, generally leading to rational and self-reflective decisions, but it can also exploit System 1, providing a context for impulsive, biased, and incorrect decisions.

  72.

    Thaler and Sunstein (2009), pp. 81–102.

  73.

    Thaler and Sunstein (2009), p. 428.

  74.

    As has been argued in behavioural economics, the proper name for these practices is not “nudge” but “sludge” (Thaler 2018).

  75.

    Nadler and McGuigan (2018) show that marketers often represent themselves as “choice architects.” For example, Ogilvy.com, a unit specialising in behavioural economics for marketing, describes itself as a team of “choice architects” who “apply principles from cognitive psychology, social psychology and behavioural science to create measurable behaviour change in the real world.” This unit has a global presence with ten offices worldwide and works with marquee firms such as American Express, Nestlé, British Airways, and Starbucks. Another advertising giant, Foote, Cone & Belding (FCB), has established an Institute of Decision-Making that partners with businesses to deliver insights about clients by working from behavioural psychology.

  76.

    Pasquale (2006).

  77.

    Cheney-Lippold (2011).

  78.

    Yeung (2017).

  79.

    Yeung (2017), p. 121.

  80.

    See Darmody and Zwick (2020), describing hyper-personalisation as “un-marketing,” a vision where marketing cannot fail to be “on target,” which at one time was what arguably empowered consumers in the business-to-consumer game.

  81.

    See, e.g., Banker and Khetani (2019). Through several behavioural experiments, the authors show that recommendation systems make users over-dependent on algorithmic recommendations and put them at greater risk of choosing inferior products and services, with a consequent loss in welfare. The authors show that this is especially the case with highly specialised medical and financial products. The authors draw the attention of policymakers to the fact that “without full information on product attributes and consumers’ true utility functions, both watchdogs and consumers themselves cannot accurately assess whether adopting the recommendations they are presented with may lead to a welfare loss.”

  82.

    This possibility has indeed been proven in studies on targeted advertising, such as Lienemann et al. (2019).

  83.

    Frankfurt (1971). On the prominent role that second-order preferences play in many other accounts of autonomy, see Bruckner (2011).

  84.

    André et al. (2018) endorse the view that while algorithmic systems may relieve choice overload, their “psychological reductionism” may deprive consumers of the ability to exercise self-reflection and caution in making attentive judgments between autonomous and hedonistic choices. See also Köcher and Holzmüller (2017).

  85.

    Machines have only a limited ability to build on explicit or discursive feedback from humans and must instead rely on proxies. This is a significant limit of AI that Cristianini (2021) argues to be responsible for a major cultural transformation in which “unobservable quantities” are replaced with “cheaper proxies.”

  86.

    Cohen (2019), p. 77.

  87.

    It could even be argued that from an ad that is displayed but not clicked on, the system will infer that we are not interested in that product even if we simply did not notice the ad, and so regardless of whether the choice was deliberate or was not a choice at all. From an algorithmic perspective, then, even an action not taken is data which can be stored and analysed.

  88.

    Greene and Shmueli (2020); Thomas and Uminsky (2020).

  89.

    De Vries (2010).

  90.

    On the problem of compulsive buying in personalised advertising, see Mikolajczak-Degrauwe and Brengman (2014).

  91.

    Willis (2020).

  92.

    Adomavicius et al. (2013, 2019).

  93.

    See Grafanaki (2017), who explains that such self-reinforcing feedback loops no longer affect only the sphere of consumption but any activity in areas of life (news, media, healthcare) that are mediated by personalisation technologies.

  94.

    Schwartz (2000) describes one of the traits of this trap as the “reduced sense of the possible.” A discussion of this concept can also be found in Zarsky (2003).

  95.

    On the concept of filter bubble see, more extensively, Pariser (2011), passim.

  96.

    For a different view, see Sunstein (2018), pp. 54–55, who, assuming that algorithms actually reflect users’ choices, interprets algorithmic personalisation as “consumer sovereignty in action.” This is a type of sovereignty that to a certain degree admits and welcomes exposure to fixed and repetitive consumption choices, and Sunstein distinguishes it from political sovereignty. The latter rests on the fundamental values of ideological pluralism and free access to informed debate, which presupposes a certain diversity of viewpoints. It follows that algorithmic personalisation may be justified when used to influence consumers’ choices, but not when used to shape the political identity of citizens.

  97.

    Novemsky et al. (2007); Nguyen et al. (2014).

  98.

    See, e.g., Zwebner and Schrift (2020), who, on a sample of two hundred individuals, find that most consumers generally show aversion to being observed by online platforms in the process of forming preferences before making a purchase.

  99.

    Schneider et al. (2018).

  100.

    A growing literature is devoted to describing and categorising dark practices. Taxonomies can be found in Gray et al. (2018) and Mathur et al. (2019).

  101.

    Thaler and Tucker (2013). On “personalised nudges,” see specifically Mills (2020), who defines them as “the process of adding, adapting or augmenting a nudge using personal data to reduce the behavioural externalities produced by the nudge, while retaining the intervention’s status as a nudge.”

  102.

    Schneider et al. (2018) provide a framework for designing personalised nudges based on repeating cycles that consist in defining a goal, understanding the user, designing the nudge, and testing its effectiveness with experimentation techniques such as A/B (or split) testing.
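
    The testing step of such a cycle often reduces to comparing conversion rates between two variants. A minimal sketch, using a pooled two-proportion z-test on hypothetical figures (the counts below are invented for illustration):

    ```python
    import math

    def two_proportion_z(conv_a, n_a, conv_b, n_b):
        """Two-proportion z-test for an A/B experiment.

        Returns the z statistic; |z| > 1.96 indicates a difference
        significant at the 5% level (two-sided).
        """
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p = (conv_a + conv_b) / (n_a + n_b)               # pooled rate
        se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # standard error
        return (p_b - p_a) / se

    # Hypothetical experiment: nudge variant B converts 120/1000 vs 90/1000.
    z = two_proportion_z(90, 1000, 120, 1000)
    print(round(z, 2))  # → 2.19, above the 1.96 threshold
    ```

    A bandit-style test would instead reallocate traffic adaptively; the fixed-split z-test shown here is the simplest form the “experimentation” step can take.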

  103.

    Experimenting with nudges is the service that the company Nudgify provides to its business clients: its tool A/B-tests different nudges and sees how they perform on different consumer groups. Likewise, Adobe offers AI-powered products that make it possible to A/B-test the whole experience of nudging consumers. The AI platform “helps ensure that you zero in on the experience that nudges customers to buy, read, download, or take whatever other action you want them to take to meet business goals.”

  104.

    Kaptein (2015).

  105.

    According to Cialdini (2007), persuasive communication is based on six cognitive strategies built on social norms that make it possible to modify individuals’ attitudes. These strategies are authority (people are inclined to follow suggestions originating from authority), consensus (individuals are more prone to follow the behaviour of others), commitment (people strive to keep their behaviour consistent with courses of action they have previously committed to), scarcity (the scarcity, or presumed scarcity, of commodities and opportunities tends to increase the probability that people will pursue them), liking (people are more easily influenced by those who express liking for them or with whom they can better identify), and reciprocity (individuals are more likely to adopt certain behaviours when they are in a situation of reciprocity with those trying to persuade them, as when they owe a favour).

  106.

    Kaptein (2015).

  107.

    Zoovü, Persuasion Profiling – How You Sell Matters More Than What You Sell, 2018, https://zoovu.com/blog/persuasion-profiling/.

  108.

    Johnathan Keane, Crobox raises $1.25 million for its persuasion-as-a-service technology, TechEU, 2016, https://tech.eu/brief/crobox-1-25-million/.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter


Cite this chapter

Galli, F. (2022). Predictive Personalisation. In: Algorithmic Marketing and EU Law on Unfair Commercial Practices. Law, Governance and Technology Series, vol 50. Springer, Cham. https://doi.org/10.1007/978-3-031-13603-0_4

  • Print ISBN: 978-3-031-13602-3

  • Online ISBN: 978-3-031-13603-0