
1 Introduction

Interaction with any technology that possesses a user interface (UI) is usually influenced by how that interface is designed. In principle, a designer should produce an interface that guides users and helps them complete their desired tasks and goals. Different guidelines exist that designers should follow to make the interaction experience smooth, seamless, and easy. For instance, there are usability and user experience (UX) recommendations originating from usability heuristics [48], as well as standards such as ISO 9241 “Ergonomics of human-system interaction,” and others detailing the rules for the design of interactive systems [32] (see also the chapter “Achieving Usable Security and Privacy Through Human-Centered Design”). Through education, designers learn about UI structure. They are usually taught that even standard elements, such as navigation menus, footers, and the like, should be presented in a specific way so that users do not struggle during interactions. Considering this, one could say that no design is entirely neutral and that most UI designers “nudge” users to interact with designs in specific and, most likely, predictable ways.

The construct of nudging and its use in the context of privacy is discussed in detail in the chapter “Privacy Nudges and Informed Consent? Challenges for Privacy Nudge Design”. The applicability of nudging is, at times, perceived as controversial, and nudges might be regarded as designs that negatively affect an individual’s autonomy. For more on this, we refer the reader to the above-mentioned chapter and recommend reading the discussion by Hansen and Jespersen [30]. In this chapter, we focus on what can be described as the nudge’s sibling, which, in contrast to nudges, exploits human nature and deceives users—the phenomenon called dark patterns. In principle, dark patterns are deceptive designs that trick users into choices that they did not initially desire. What makes them related to nudges is that the mechanisms used in dark patterns are similar to the mechanisms underlying the design of nudges. Dark patterns exploit human psychology, particularly how people make judgments and decisions and the heuristics and biases that predispose those decisions.

2 Dark Patterns

UI design relates to different attributes through which people can interact with technologies physically (e.g., pressing a button may result in haptic feedback), perceptually (e.g., elements displayed on the screen, sounds), and conceptually (e.g., people try to work out what the device’s purpose is and find information about it in the device) [8]. The elements that affect users most are the visual elements of the UI (e.g., layout, color, font size, buttons) and its content (e.g., images, text). These UI components influence users’ behavior and, in particular, may steer users’ choices. It might be impossible to design an entirely neutral choice architecture. However, the design’s effects are not always intended or directed toward “any well-defined or consistent end” [30, p.9]—meaning that UI designers or choice architects do not always have all possible user goals in mind while designing. Still, it is common practice in the digital market for companies to implement designs intended to direct users toward specific and predictable choices, i.e., to nudge them. However, such designs are often unlikely to treat users’ interests as the ultimate goal. Instead, they may concentrate on maximizing the company’s benefits, primarily financial ones (e.g., collecting more information about user behavior to sell them additional products or target them with personalized advertising).

Such designs are referred to in the literature as dark patterns. The term dark patterns was first used by Harry Brignull, who has more recently amended the terminology to deceptive design. He defined it as follows: “Deceptive design patterns (also known as ‘dark patterns’) are tricks used in websites and apps that make you do things that you didn’t mean to, like buying or signing up for something” [10]. Therefore, dark patterns (or deceptive designs) are designs that purposefully confuse users or guide them toward specific choices. Users exposed to dark patterns can no longer follow their own desires or preferences or might become subject to manipulation [43].

When considering the topic of manipulation, it is essential to note that not all dark patterns are manipulative. Susser et al. [59] discussed this and defined manipulation as “an attempt to change the way someone would behave absent the manipulator’s interventions,” concluding that manipulation may lean toward persuasion and therefore be more acceptable. However, it may also be deceptive or coercive, becoming more dangerous and harmful to an individual. All four constructs, persuasion, manipulation, deception, and coercion, have been applied in the research on dark patterns and will be referred to in the subsequent parts of the chapter.

2.1 Why Do Dark Patterns Work?

Not without reason, the term dark patterns (deceptive designs) was introduced by Brignull—a UX practitioner and cognitive researcher. The premises that dark patterns rely on are similar to the fundamentals of nudging, stemming from cognitive science and psychology. To comprehend what dark patterns are and why and when they can be successful, it is crucial to understand the underlying psychological processes and how people make decisions, both in the physical world and in digital environments (for a discussion on psychological behavior models, please refer to the chapter “From the Privacy Calculus to Crossing the Rubicon: An Introduction to Theoretical Models of User Privacy Behavior”).

Traditionally, economic theories are the most common approach to explaining how people make decisions. These assume that people tend to be “rational” and make decisions aiming to maximize their benefit—a construct often referred to as homo economicus. According to economic-based theories, such utility maximization, or benefit maximization, is always the most preferred outcome [50]. Historically, traditional economic theories have been subject to change due to their predictive inaccuracy. For instance, Kahneman and Tversky proposed prospect theory to explain decision-making processes under risk and uncertainty [39]. Prospect theory treats the value function differently from other economic-based theories, implying that the value function is concave for gains but convex for losses, and that the slope for losses is steeper than the slope for gains [63].

Additionally, prospect theory suggests that the probability scale is nonlinear—people tend to underweight high probabilities and overweight low probabilities. Taking this further, Tversky and Kahneman proposed cumulative prospect theory, which enables modeling decisions that accommodate different phenomena of choice, including framing effects, nonlinear preferences, source dependence, risk seeking, and loss aversion [63]. Nevertheless, the theory could not explain decision-making processes with full accuracy, and other developments were proposed, such as the contrast-weighting theory, the stochastic difference model, or regret theory [50].
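For reference, cumulative prospect theory is commonly operationalized with a value function and a probability-weighting function of the following parametric forms, as reported by Tversky and Kahneman [63] (the parameter values below are the median estimates from that work and serve only as an illustration):

$$
v(x) = \begin{cases} x^{\alpha} & \text{if } x \geq 0 \\ -\lambda(-x)^{\beta} & \text{if } x < 0 \end{cases}
\qquad
w(p) = \frac{p^{\gamma}}{\left(p^{\gamma} + (1-p)^{\gamma}\right)^{1/\gamma}}
$$

with α = β ≈ 0.88, λ ≈ 2.25, and γ ≈ 0.61 for gains (≈ 0.69 for losses). The coefficient λ > 1 expresses loss aversion (losses loom larger than equivalent gains), and the inverse-S shape of w(p) captures the overweighting of small probabilities and the underweighting of large ones.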

Still, the concept of homo economicus—a decision-maker who is rational in the economic sense—did not prevail, and many anomalies in the decision-making process were identified in the research. One theoretical approach tries to explain why these anomalies exist through the class of theories referred to as dual-process theories. Dual-process theories imply that two kinds of thinking are involved in decision-making, in the literature often referred to as System 1 and System 2, or Type 1 and Type 2 as suggested by Evans and Stanovich [24] (to emphasize that these are not the only processes that occur during decision-making). Fast, intuitive, and automatic processes characterize Type 1 thinking, while slow, computational, and analytical processes characterize Type 2 [24, 35]. The intuitiveness of Type 1 thinking implies that it is autonomous. Therefore, it does not need to engage working memory (the memory used to plan and carry out behavior, including short-term memory and some other processes that help make use of short-term memory [18]). The situation is reversed in Type 2 thinking, which is reflective and engages working memory. Additionally, Type 2 relies on cognitive decoupling (i.e., when testing hypotheses, people can prevent confusing representations of the actual world with imaginary situations) and mental simulation [58].

A common and not entirely correct view is that decisions based on Type 1 thinking must be “bad”—i.e., less optimal, “irrational.” Conversely, the same view implies that Type 2 thinking must lead to “better” and more optimal decisional outcomes. Such presumptions might be a remnant of the economic approach to decision-making, where people are “rational” and desire to maximize benefits. The research suggests that these common beliefs are incorrect, indicating that the goodness or badness of decisional outcomes is interchangeable between the two modes of thinking [24]. More importantly, there is an implication that the optimality of Type 1 thinking depends on the environment, particularly its hostility [24]. The distinction between hostile and benign environments is crucial for understanding issues related to UI design. For Type 1 processing, a hostile environment is characterized by the absence of useful or known cues. Even more critical for the purposes of the present chapter, a hostile environment contains simple cues that are injected into it by other agents to trigger Type 1 processing of information (e.g., a space deliberately designed to ensure that the company’s profit is maximized) [24].

In a hostile environment, decisions are made based on attribute substitution, relying on heuristics and biases. Kahneman and Frederick explain that such reliance exists when “an individual assesses a specific target attribute of a judgment object by substituting another property of that object—the heuristic attribute—which comes more readily to mind” [37]. For example, when people are asked, separately, about the level of happiness in their lives and the number of dates they had last month, the correlation between the answers to these two questions is minimal. However, research shows that when these questions are asked together, people tend to automatically associate their level of happiness with dating, and the correlation increases. As implied by Kahneman and Frederick, such a correlation results from heuristic-based or biased thinking [37].

2.1.1 Heuristics and Biases

According to Gigerenzer, heuristics can be defined as strategies that people use to make decisions faster, more frugally, and, at times, more accurately than if they were to use more complex decision-making methods [26]. Often this is achieved by ignoring certain information and relying on what could colloquially be called “rules of thumb.” Cognitive biases stem from heuristics and Type 1 reasoning and can be defined as systematic patterns of deviation from norms or rationality in judgment and decision-making. Individuals perceive an input based on their “subjective” rationality, and such perception dictates how they behave.

Research from different disciplines, e.g., psychology, economics, and neuroscience, demonstrates that intuitive Type 1 thinking, particularly heuristics and biases, affects decision-making [20, 36, 53]. Hence, assumptions about how heuristics and biases are triggered have been applied in nudging, and nudging has been applied successfully in real-life contexts; for instance, companies utilize status quo bias (a preference for no change) to ensure that employees sign up for pension plans [34, 60, 61]. Other examples are governments using a default effect (a pre-set course of action) for organ donation, or cafeterias displaying healthy products first to increase their consumption [34, 60, 61]. The applicability of Type 1 thinking also extends to the digital environment, particularly to UI and choice architecture design, often in the form of dark patterns.

While making decisions based on mental shortcuts might be beneficial in some contexts, as argued by Gigerenzer and Gaissmaier [26], e.g., in sport, medicine, and law, the effect that such thinking has on users of the digital environment is predominantly adverse. These negative effects are particularly pronounced when people make privacy-related decisions, because the mode of information processing (whether Type 1 or Type 2) is crucial for users, who often automatically overshare their personal, and sometimes sensitive, information [12].

There are many mental shortcuts and biases that dark patterns can exploit. For the present chapter, Table 1 illustrates some of the biases most prominent in privacy-related decision-making in the digital context.

Table 1 Definitions of heuristics and biases likely to be exploited in the design of privacy dark patterns. Note: This list is not exhaustive, and other psychological effects might be used to design privacy dark patterns

2.2 Privacy Decision-Making

As hinted above, psychological underpinnings, including biases and heuristics, have been shown to also govern privacy-related decision-making. For instance, many privacy researchers applied normative, neoclassical economic theories to explain how people decide about their privacy in the digital context. Predominantly, such research focused on information disclosure, considering the transactional dimensions of behavior—supposedly, people disclose information that has some value in order to gain a desired benefit. Sometimes, information protection has been given a monetary value, showing that information might be worth less online than offline [14]. Some studies investigated the privacy calculus, following neoclassical economics, where people calculate the trade-off between the risks and benefits of information disclosure [21, 29]. Often, privacy calculus models were used to improve the understanding of privacy concerns, as these were presumed to be central to explaining privacy-related decisions [21, 22]. Building on economic approaches to decision-making and psychological theories such as the theory of planned behavior or the theory of reasoned action, researchers proposed the APCO (Antecedents—Privacy Concerns—Outcomes) framework, which attempts to explain relationships between the different factors that may affect privacy behaviors [56]. Here, the antecedents of privacy affect privacy concerns, which in turn lead to specific behavioral reactions (e.g., information disclosure). Privacy calculus, privacy information display, and trust are also considered influencers of either behavior or privacy concerns. Still, both the initial framework’s authors and empirical evidence showed that economic approaches are insufficient to explain privacy decisions [2, 3]. As suggested in the redefined APCO framework, privacy decisions can also be affected by other factors, such as the level of cognitive effort required to make a decision, which includes affect, motivation, time constraints, and similar [23]. For a more detailed discussion of behavioral frameworks, please refer to the chapter “From the Privacy Calculus to Crossing the Rubicon: An Introduction to Theoretical Models of User Privacy Behavior”. Furthermore, biases, heuristics, peripheral cues, and misattribution effects influence privacy behaviors. The effects of some of these additional factors on privacy decisions are apparent in the research on privacy nudges, as discussed in the chapter “Privacy Nudges and Informed Consent? Challenges for Privacy Nudge Design”.
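Schematically, and as a simplification rather than a formula taken from [21, 29], the privacy calculus view can be written as a simple inequality:

$$
\text{disclose} \iff B_{\text{perceived}} - R_{\text{perceived}} > 0
$$

where B denotes the perceived benefits of disclosure (e.g., convenience or personalization) and R the perceived risks (e.g., unwanted secondary use of data). Dark patterns, as discussed below, can distort the inputs to this comparison: under Type 1 processing, biases and heuristics skew how benefits and risks are perceived before any such trade-off is made.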

2.3 Categorization of Dark Patterns

Researchers have attempted to categorize dark patterns in various ways to make the phenomenon easier to understand and counteract. For instance, as early as 2010, Sobiesk and Conti proposed a categorization of malicious interfaces—often utilized to increase revenue [16]. Their paper presents a taxonomy consisting of eleven categories containing different subcategories: (1) Coercion—threatening or mandating users’ compliance (required form fields; user-threatening messages); (2) Confusion—questions or information that users cannot comprehend; (3) Distraction—driving users’ attention away by exploiting pre-attentive processes (different media formats: video/animation/blinking, etc.; color); (4) Exploiting errors—taking advantage of errors to facilitate designers’ goals (typing errors); (5) Forced work—increasing users’ workload (delay of work effort; difficult de-installation); (6) Interruption—interfering with the task flow (forced viewing; “hot” interface elements); (7) Manipulating navigation—guiding users toward the designers’ goals (dead-end trails; important information hidden deep in navigation); (8) Obfuscation—hiding desired information (low-contrast colors; masked warning messages); (9) Restricted functionality—only some of the controls needed by the user are present (omitted controls; hidden desired UI elements); (10) Shock—presentation of disturbing content (controversial content); (11) Trick—misleading or deceiving users (silent/invisible behavior; lies; spoofed content).

Gray et al. [27], based on the dark patterns identified by Brignull, provided an overview of user experience dark patterns. They categorized patterns into five groups: (1) Nagging (redirection of expected functionality); (2) Obstruction (adding difficulties to the interaction process); (3) Sneaking (hiding information that might be useful, or delaying it so it is disregarded at decision time); (4) Interface interference (UI design manipulation that emphasizes some aspects over others); (5) Forced action (the user must perform some action in order to access some functionality). Similarly, Cara [13] focused their review on dark patterns in the context of user experience.

Luguri and Strahilevitz [43] summarized the existing taxonomies of dark patterns, describing different categories and variants within those categories. Finally, Mathur et al. [44] defined a taxonomy of dark patterns based on empirical research of real e-commerce websites, identifying seven categories: (1) Sneaking (e.g., hiding information that could have affected users’ choice); (2) Urgency (e.g., patterns that place deadlines on the decision); (3) Misdirection (e.g., use of design properties to steer users toward a specific choice); (4) Social proof (e.g., the specific choice is driven by the behavior of others); (5) Scarcity (limited availability of something is signaled, and therefore its perceived value increases); (6) Obstruction (some choices are made more challenging than others); (7) Forced action (additional action is required to complete a task).

3 Privacy Dark Patterns

Similarly, there have been attempts to classify dark patterns in the context of privacy; however, the research in this context is scarcer. Bösch et al. [12] reviewed dark privacy strategies and dark patterns in their research, also describing contrasting privacy patterns (designs encouraging privacy-protective interactions). Their article suggests eight dark strategies that could be used in design. (1) Maximize—design that enhances the collection, storage, and processing of as much data as possible, instead of focusing on the data required to ensure the system’s functionality. (2) Publish—personal information might be visible to the public; no mechanism protects access to such data. (3) Centralize—personal information is stored/processed in a central entity, enabling linkability that might create a clearer picture of an individual(s). (4) Preserve—data are kept in their original state; they do not undergo processing that could affect the interrelationships between data. (5) Obscure—users cannot assess what happens to their data, e.g., due to the complex terminology used in privacy policies. (6) Deny—users cannot control their data, e.g., the service provider might deny account deletion. (7) Violate—a service provider might have a privacy policy; however, it is not upheld by the provider. (8) Fake—a service provider claims robust data protection techniques and practices; however, none is actually implemented in the given service.

Beyond academic research, the Norwegian Consumer Council (Forbrukerrådet) published a report focusing on dark patterns that deceive consumers [25], in particular, patterns that prevent people from exercising their privacy rights. The report, based on research on the products of three tech giants—Facebook, Google, and Microsoft (Windows 10)—presents an analysis of how these companies implement malicious designs and nudge users toward privacy-intrusive actions. The report’s aim was not to categorize dark patterns but to present a real-life analysis of how some tech companies violate privacy by utilizing deceptive designs in their pop-up windows. As a result, six such design techniques were identified in the analyzed software: (1) Privacy-intrusive default settings; (2) Unequal ease (the number of clicks) for privacy-friendly options; (3) Visual design (color and symbols) that leads toward the intrusive privacy option; (4) Language that leads toward the intrusive privacy option; (5) The privacy-unfriendly option presented without “warnings”; (6) Users cannot postpone the decision while accessing the service in the meantime.

Another non-academic categorization comes from CNIL (the French National Commission on Informatics and Liberty), in a report focusing on deceptive design and its effects on the privacy of individuals, based on the GDPR principles. Although the report is not exclusively dedicated to dark patterns, it proposes a typology of deceptive design practices. There are four suggested categories, each containing different techniques on which the designs are based—enjoy, seduce, lure, complicate, and ban. (1) The first category contains designs that push the individual to accept sharing more than what is strictly necessary. (2) The second contains designs that influence consent. (3) The third contains designs that create friction in data protection actions. (4) And lastly, designs that divert individuals.

Another attempt to classify dark patterns comes from the legal field. Jarovsky [33] proposed a taxonomy of dark patterns and suggested a new definition for them. Focusing on the legal aspects of design and the need to improve the privacy protection of data subjects (users), Jarovsky argues that a privacy dark pattern “consists of user interface design choices that manipulate the data subject’s decision-making process in a way detrimental to his or her privacy and beneficial to the service provider” [33, p.8]. Moreover, she proposes a dark pattern taxonomy that addresses the legal challenges of such deceptive designs. There are four categories in the taxonomy. (1) The first one is pressure, meaning designs that pressure users to share more data to continue using a service/product. (2) The second one is hinder, meaning designs that delay, hide, or make it cumbersome for users to take action and protect their privacy. (3) The third is mislead, meaning designs that use language or different UI elements to mislead users during privacy-protective interactions. (4) The last, fourth category is misrepresent—designs that misrepresent facts to drive users toward sharing more (or more in-depth) personal data than needed.

3.1 Examples of Privacy Dark Patterns

A considerable share of the above categorizations of dark patterns, particularly categorizations of privacy dark patterns, overlaps to some extent, making it hard to systematize dark patterns. In this section, we attempt to group privacy dark patterns based on the similarity of the mechanisms that these patterns employ. Moreover, we point out dark patterns that are defined in the literature as distinct yet seem to differ mainly in nomenclature, since their descriptions imply the same phenomena (we refer to them as subgroups). In Table 2, we present the overall grouping, and detailed descriptions follow in the subsequent sections. The list aims to draw connections between previously identified dark patterns and relate them to the psychological biases they might be exploiting. Note that the list is not exhaustive.

Table 2 Groups of privacy dark patterns based on the similarities between the premises they built on

3.1.1 Invisible to the Human Eye

This group contains privacy dark patterns that are impossible for a user to spot. Their mechanisms are hidden, working at the “back-end” of a given technology:

  1.

    Address book leeching. This pattern uses contact lists that are uploaded to the service from a user’s device (predominantly a mobile phone). However, users are unaware that their contacts are being stored and processed by the service provider. Importing contacts may expose information to different third parties and place users’ privacy at risk [6].

    Related dark patterns or other interchangeable phenomena:

    • Shadow user profiles—information about users not registered for a given service, for instance, on social networks, might be collected [12]. A service provider might manage and process such information without users’ knowledge.

    Associated categories/design types: maximize, preserve, centralize [12].

    Associated psychological effects: N/A due to lack of perceptual interaction.

  2.

    Camouflaged advertising. Also known as disguised ads [10], these are dark patterns that disguise adverts as other UI elements [52]. Users might interact with such hidden advertisements and, as a result, be exposed to unwanted ads and induced to distribute their data by signing up for new services or buying products.

    Associated categories/design types: diverting the individual/lure [52]; interface interference [27].

    Associated psychological effects: Because the ads are not visible as such, this dark pattern does not directly exploit any psychological effects. However, it relies on an automatic processing mode, assuming that users make quick decisions and will click on the ad.

3.1.2 UI Design Tricks

The dark patterns in this group rely on UI design mechanisms that aim to distract and mislead the user, sometimes misusing commonly recognizable UI elements. For instance, an icon with a specific meaning is used for purposes other than what the original meaning might imply. Figure 1 presents an example of a dark pattern (attention diversion) from this group:

  1.

    Attention diversion. This dark pattern exploits visual design properties to draw users’ attention to something other than the privacy-related parts of the UI [52]. For instance, when signing up for an e-commerce application, within the privacy settings, changes to the settings could be presented in a smaller and less contrasting font, while the button for special discounts could be more prominent. The user will likely focus more on the desire to obtain a product at a discounted rate than on managing their privacy.

    Related dark patterns or other interchangeable phenomena:

    • Bad visibility—low contrasting, light colors, and small fonts make privacy-protective options less visible [33].

    • False hierarchy—some of the options appear more prominent than others [6].

    Associated categories/design types: influence consent/enjoy [52]; mislead [33]; interface interference [27].

    Associated psychological effects: anchoring, framing.

  2.

    Chameleon strategy. This dark pattern occurs when a third-party service uses the style and visual appearance of the website being browsed to make it look like a natural continuation [52]. For instance, a hotel booking suddenly becomes part of booking a train ticket. If the user follows through with such a booking, their personal information might be automatically transferred to the rail service provider without informing them about the privacy implications of such a transfer or asking for explicit consent.

    Associated categories/design types: diverting the individual/lure [52]; interface interference [27].

    Associated psychological effects: Because the pattern is not visible, it does not directly exploit any psychological effects. However, it relies on the Type 1 decision-making mode, which assumes that users will automatically select additional services.

  3.

    Wrong signal. This dark pattern misuses commonly recognizable patterns, symbols, and similar, to create confusion related to the choice that a user makes [52]. For example, a service might be using a padlock icon in the UI design, yet the service lacks privacy and security protections.

    Related dark patterns or other interchangeable phenomena:

    • Bait and change—users’ choices produce unexpected consequences. For instance, “giving acceptance value to a button with a cross, which in users’ minds is synonymous with ‘close and move on’” [52].

    • Twist—colors and symbols used in a way that misguides users [33].

    Associated categories/design types: mislead [33]; influence consent/lure, diverting the individual/lure [52]; interface interference [27].

    Associated psychological effects: anchoring, framing, affect heuristic, social norms (indirectly).

Fig. 1

Example of attention diversion privacy dark pattern, where users’ attention is likely to be driven toward the discounts advertisement

3.1.3 Constrained Actionability

This group of dark patterns includes designs that affect users’ actions. Users are prevented from taking specific actions, either because they have no possibility to act or because actions are made challenging to carry out. Figure 2 presents an example of a dark pattern (immortal account/difficult to delete) from this group:

  1.

    Comparison obfuscation. When this pattern is applied, users struggle to compare different service providers or specific settings or rules within a service [52]. For example, changes to a service’s privacy policy may be implemented in a way that prevents users from comparing them with the original content of the policy.

    Associated categories/design types: influence consent/complicate [52]; forced action and timing [25]; obstruction [27].

    Associated psychological effects: Exploiting the information asymmetry between the service providers and users, this pattern may trigger anchoring or optimism biases.

  2.

    Forced action. This pattern forces users to make choices on the spot [6]. For instance, it may nudge users to agree to all the T&Cs when purchasing a product or service. As a result, they may blindly accept all the terms and be unaware of potential risks to privacy.

    Related dark patterns or other interchangeable phenomena:

    • False continuity—the user is asked to provide personal information, such as an email address, to read an article, yet they are not warned that this might result in a subscription to a newsletter [52].

    • Forced registration—forces users to register in order to use a service/product [12]. As a result, a company might gain access to personal information about a user, tracking their behavior.

    • Impenetrable wall—access to a service is blocked by a cookie wall or account creation, even though neither is needed for the service to function (also known as take-it-or-leave-it) [52].

    • Pressure to receive marketing—users must check the box “receive marketing offers per email” to complete a purchase or sign up for a service [33].

    Associated categories/design types: maximize [12]; creating friction on data protection actions/ban, pushing the individual to accept sharing more than what is strictly necessary/lure [52]; pressure [33]; sneaking [27]; forced action [27, 43].

    Associated psychological effects: instant gratification, framing, status quo. These dark patterns rely on automatic, Type 1 information processing, in which decisions can be constrained by external factors, such as situational context or time pressure.

  3.

    Immortal accounts. This dark pattern appears when users have already created an account with a given service provider and want to delete their account and any associated data [12]. However, the service provider makes the deletion process cumbersome by not providing a straightforward deletion option. Instead, the user is confronted with a long process which, even if the user manages to complete it, may still trick the user into leaving some personal information with the service.

    Related dark patterns or other interchangeable phenomena:

    • Difficult deletion—making it hard or inconvenient to delete an account (e.g., requiring a call to customer service and thereby a switch of media) [33].

    Associated categories/design types: deny, obscure [12]; hinder [33]; obstruction [27].

    Associated psychological effects: Difficulty associated with deletion, the prolonged process, etc., contribute to preventing the use of Type 2 thinking. Instead, the heuristic-based mode is activated, in which people tend to use mental shortcuts and come to conclusions quickly.

Fig. 2

Example of immortal account/difficult to delete privacy dark pattern (from [6]). A user has to call customer service within a specific time frame to cancel the subscription

3.1.4 Emotion-Related

In this group, privacy dark patterns exploit the emotional aspects of human nature. They target emotions, often connecting them with different social aspects of life. Figure 3 presents an example of a dark pattern (confirmshaming) from this group:

  1.

    Confirmshaming. This pattern makes the user feel guilty about not opting into something or opting out of something. It uses the power of language to steer users into making a specific and undesired choice [6]. For instance, when users do not want to get tracked, companies might use language such as “No, I do not want to save money and receive discount codes” to shame users’ choices.

    Related dark patterns or other interchangeable phenomena:

    • Blaming the individual—makes users feel guilty about their choices [52].

    • Toying with emotions—language, style, color, and other UI design elements can be used to evoke a particular emotional state. These design elements are explicitly applied to persuade the user into specific actions [6].

    • Framing—privacy-invasive features might be described in a positive way, preventing users from reflecting on these features’ negative effects [33].

    Associated categories/design types: creating friction on data protection actions/enjoy [52]; interface interference [27]; misdirection [44]; mislead [33].

    Associated psychological effects: affect heuristic, optimism bias, contrast effect, default effect, framing, anchoring.

Fig. 3

Example of confirmshaming privacy dark pattern where service provider wants to further process users’ information by shaming users for not wanting to receive discounts

3.1.5 Affecting Comprehension

This group of dark patterns affects how information is understood. These patterns either use language that is difficult to comprehend, inject false information that users cannot verify, or similar. Figure 4 presents an example of a dark pattern (trick questions) belonging to this group:

  1.

    Hidden legalese stipulations. This pattern is often used in legally required documents, such as privacy policies and terms and conditions. Often, such texts are written in legal jargon that is difficult for an average user to understand, who therefore intentionally skips reading the long texts [12]. At the same time, these legally binding texts may include stipulations that target users’ privacy, for example, a statement that the policy may change without further notice.

    Associated categories/design types: obscure [12].

    Associated psychological effects: The likelihood of users missing the opportunity to comprehend all the details increases the probability of Type 1 information processing.

  2.

    False necessity—Falsely informing users that certain types of data are legally necessary or required for the system to function [33]. For instance, a social network mobile application may ask for access to contacts’ email addresses to ensure the application’s full functionality, while such information is not truly needed for the application to function.

  3.

    Just between you and us. This dark pattern makes false promises. For instance, a service provider might request additional information, promising that such information will remain “invisible,” that users will have full control, and that it will allow a better service [52].

    Related dark patterns or other interchangeable phenomena:

    • Improving the user experience—encouraging users to share more personal information to improve services and their experiences [52].

    Associated categories/design types: pushing the individual to accept sharing more than what is strictly necessary/seduce [52]; deny [12].

    Associated psychological effects: instant gratification, optimism bias, framing.

  4.

    Trick questions. This pattern is usually formed as a question that appears to be one thing while meaning something else. This dark pattern may rely on confusing wording, double negatives, or other similar tricks that could confuse users [6].

    Related dark patterns or other interchangeable phenomena:

    • Ambiguity—confusing language, e.g., “do not share my data with third parties” and options to choose from “yes” and “no” [33, p.31].

    • Double negative—using double negatives in sentences makes it harder to grasp the meaning [33].

    Associated categories/design types: mislead [33]; influence consent/lure [52]; interface interference [27, 44]; misdirection [44].

    Associated psychological effects: default effect, framing, anchoring.

Fig. 4

Example of trick questions privacy dark pattern where the service provider applies sentences that purposefully confuse users (from [6])

3.1.6 Time-Related

In this group are the dark patterns that, to different extents, rely on temporal aspects of decision-making. They often exploit the fact that decisions are made in a hurry, on the spot, which prevents users from engaging in a more analytical decision-making process. Figure 5 presents an example of a privacy dark pattern (last-minute consent) from this group:

  1.

    Last-minute consent. This dark pattern is time- and context-dependent. It seeks consent for data collection at a specific moment when users are in a hurry or close to finishing a given task [52]. For instance, a service provider might add a new opt-in for information transfer to a third party at the end of the purchasing process. Users pursuing the goal of completing a transaction might provide their consent since they have already invested much time and effort into the purchase.

    Related dark patterns or other interchangeable phenomena:

    • Repetitive incentive—different incentives on data sharing can be inserted, repetitively, to interfere with users’ tasks [52].

    • Users cannot postpone decision—users are urged to act without the possibility of postponing their decision. For example, pop-ups force users to select privacy preferences when they install software, which might prevent them from reflecting on their choices [25].

    Associated categories/design types: influence consent/enjoy, creating friction on data protection actions/complicate [52]; forced action [25, 27]; nagging, obstruction [27]; urgency [43].

    Associated psychological effects: loss aversion, status quo.

  2.

    Safety blackmail. This dark pattern occurs during the login process as a request for additional information [52]. At such times, users act under time pressure and want to complete the task and move on, accepting anything. For instance, a user might be tricked into providing their phone number, thinking it will be used for two-factor authentication, while it is only used for telemarketing.

    Associated categories/design types: pushing the individual to accept sharing more than what is strictly necessary/enjoy [52]; nagging [27]; misrepresent [33].

    Associated psychological effects: functional fixedness, loss aversion, restraint bias, instant gratification.

Fig. 5

Example of a last-minute consent privacy dark pattern where the service provider asks for additional consent when users are at the end of the task after dedicating much effort to achieving it (adapted from [6])

3.1.7 Affecting Privacy Options

This group of dark patterns consists of designs that provide users with privacy options. However, these options are often presented in a way that purposefully confuses users, makes it hard to choose, makes it challenging to change options, or similar. Figure 6 presents an example of a privacy dark pattern (bad default) from this group:

  1.

    Bad defaults. This dark pattern exists when the options (particularly related to a user account) are predefined (e.g., selected checkboxes) in a way that encourages over-sharing personal information [12]. As a result, users might share information they have not intended to at the start of an interaction.

    Related dark patterns or other interchangeable phenomena:

    • Default sharing—pre-checked options for information sharing [52].

    • Pressure to share—users are obliged to share specific personal data with other users in order to use a service; they are given no alternative option [33].

    • Privacy-invasive defaults—applications that share certain data by default (e.g., a social network app that, per default, shares users’ videos publicly) [33].

    Associated categories/design types: obscure [12]; pushing the individual to accept sharing more than what is strictly necessary/enjoy [52]; pressure, hinder [33]; covert, asymmetric, misdirection [44].

    Associated psychological effects: default effect; status quo; loss aversion; instant gratification.

  2.

    Privacy Zuckering. This dark pattern exists when the service provider allows changing privacy-related settings [12]. Still, these settings are purposefully designed to be unnecessarily complex and difficult for the user to understand. For example, a layered design is used to present users with an application’s privacy settings—users have to open many sub-menus and, following different links, open new pages to reach the setting they desire to change.

    Related dark patterns or other interchangeable phenomena:

    • Difficult settings—privacy settings are complex and multilayered; they contain links (including third-party links) and sub-menus, all of which make it less likely for users to read or use them [33].

    • Hidden settings—privacy settings are placed in the least expected and least intuitive part of a UI [33].

    • Making it fastidious to adjust confidentiality settings—consenting is quick, but the process of taking data-protective actions is long and complex. For example, a continue button is provided to accept all opt-ins, while the alternative choice requires many different interactions with “find out more” and similar options [52].

    • Obfuscation—injecting privacy-unrelated settings to the section of UI where privacy settings are presented [33].

    • Obfuscating settings—to reach the desired privacy settings, users have to go through a long and cumbersome process, reducing the chance that the user will change the settings and increasing the chance that they will give up before achieving their task [52].

    • Unequal ease—the number of clicks to reach the privacy-protective options is higher, requiring more effort from the user [25].

    Associated categories/design types: creating friction on data protection actions/complicate [52]; mislead, hinder [33]; obscure [12]; forced action [27, 43]; obstruction [27].

    Associated psychological effects: choice overload; status quo; framing.

Fig. 6

Example of a bad default privacy dark pattern [6]. A default option (e.g., about data processing) is hidden, and users must perform additional action (click on “More info”) to discover it

3.2 Tackling (Privacy) Dark Patterns

Compared to the number of studies that identify and categorize dark patterns, less research has been dedicated to answering the question of how to prevent dark patterns. One such attempt is given by Mathur et al. [45], who categorize existing research on dark patterns into two types of choice architecture manipulation: (1) modifying the decision space (dark pattern attributes: asymmetric, restrictive, disparate treatment, and covert) and (2) manipulating the information flow (dark pattern attributes: deceptive and information hiding). Based on this new categorization, and providing arguments about the normative principles explaining why dark patterns should be of concern (collective and individual welfare, regulatory objectives, autonomy), their article expands on what could be done in the field of human–computer interaction to tackle dark patterns. In particular, to assess the “darkness” of deceptive designs, they propose to analyze dark patterns through the above-mentioned normative lenses, specifying how such an analysis should look. For instance, who might be affected by a specific dark pattern—the general public or a particular group of users—should be considered when designing studies assessing dark patterns.

Nonetheless, the approach proposed by Mathur et al. [45] applies mainly to research on dark patterns and aims to help identify the phenomena. A different aspect of dark patterns research that has not received much attention, even though it could help to gain a better understanding of the phenomena and to develop solutions preventing them, relates to the harms that deceptive designs cause to users [6, 28, 45]. Kitkowska et al. [41] attempted to investigate this in an expert-interview-based study. Their research asked which dark patterns are more or less harmful and how to prevent the detrimental effects of these malicious designs. The experts identified two broad categories of dark patterns when assessing their harmfulness. The first one, called first generation, contains “traditional” dark patterns that are easier to identify and often lead to clear economic loss. The second one, called second generation, consists of designs that are complex, hard to identify, and related to extensive data collection, leading to economic as well as less tangible loss. The latter has been classified as more harmful and requiring greater attention from regulators. In their research, privacy matters emerged because of the increasing insights about individuals gathered by companies and the potential use of such insights to target vulnerable users (here, everyone can be vulnerable, as vulnerability might be a temporary state) and personalize dark patterns. Although their research did not focus on privacy dark patterns, it is essential to note that they attempted to identify ways to tackle dark patterns. Using the behavioral change framework (COM-B) [46], they identified policy categories (regulation, legislation, guidelines, service provision) and intervention functions (education, coercion, modeling, training) that could be applied to prevent companies from applying deceptive designs and to bring balance to the digital market.

Another way to tackle dark patterns is through legislation and regulation, as recommended in research [6, 9, 41]. In various geographical regions, lawmakers are introducing regulations that could help prevent dark patterns. The General Data Protection Regulation (GDPR) could be adapted, as argued by Jarovsky [33], to identify dark patterns, focusing on the principle of fairness in data protection as well as the lawful basis for obtaining consent. Some regulations directly targeting dark patterns already exist. For instance, European Directive 2005/29/EC on unfair commercial practices (UCPD) [17] adds dark patterns to the category of manipulative practices. It states: “If dark patterns are applied in the context of business-to-consumer commercial relationships, then the Directive can be used to challenge the fairness of such practices, in addition to other instruments in the EU legal framework, such as the GDPR.” In the USA, the California Consumer Privacy Act (CCPA) [5, 49] defines a dark pattern as a design that “subvert user autonomy, decisionmaking, or choice” and considers consent obtained through the use of a dark pattern invalid. Still, these new regulations have their limitations. For instance, the UCPD lists only some designs that could be dark patterns, and the sanctions are still left to be defined by the EU member states independently and might be too small.

Another issue regarding legal regulations is problematic enforcement, for instance, how the digital market should be surveyed or how dark pattern designs should be identified and correctly recognized. Some researchers propose automated tools that could help identify dark patterns, for instance, the techniques used by Mathur et al. [44] to scrape web pages for dark patterns. However, the authors recognize the shortcomings of their methods, e.g., that images are not considered, even though they might be used as part of a dark pattern design. Curley et al. [19] proposed a framework for automated dark pattern detection, concluding that some deceptive designs are easier to detect, while others are impossible to identify through automated means.

Similarly, Soe et al. [57] attempted to use machine learning to identify dark patterns and, achieving relatively good accuracy, concluded that more research has to be done to improve it. In particular, it might be challenging to create a good-enough data set that could yield more promising results. Also, recognizing the difficulties around the variety of dark patterns and the ways they work, the researchers suggest decomposing dark patterns into elements that could be automated (or quickly processed) and focusing on one specific domain. Both of these recommendations have the potential to improve automated detection tools.
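To make the idea of decomposing dark patterns into automatable elements more concrete, the sketch below shows one minimal, hypothetical rule-based check; it is not the method used in [44], [19], or [57]. It parses HTML with Python's standard library and flags pre-checked checkboxes whose names hint at consent or marketing, i.e., one simple signal of the bad defaults pattern from Sect. 3.1.7. The keyword list and the sample form are invented for illustration, and a real tool would need rendering, image analysis, and text classification on top of such rules.

from html.parser import HTMLParser

# Hypothetical keyword list hinting that a checkbox concerns consent or marketing.
CONSENT_HINTS = ("consent", "marketing", "newsletter", "offers", "share", "third")


class PreCheckedConsentFinder(HTMLParser):
    """Flags <input type="checkbox" checked> elements whose name or id suggests
    a consent or marketing choice, i.e., a possible 'bad default'."""

    def __init__(self):
        super().__init__()
        self.findings = []

    def handle_starttag(self, tag, attrs):
        if tag != "input":
            return
        attributes = dict(attrs)  # boolean attributes such as "checked" map to None
        if attributes.get("type") != "checkbox" or "checked" not in attributes:
            return
        hint = " ".join(filter(None, (attributes.get("name"), attributes.get("id")))).lower()
        if any(keyword in hint for keyword in CONSENT_HINTS):
            self.findings.append(hint)


# Invented example form: the first checkbox is a pre-checked marketing opt-in.
SAMPLE_FORM = """
<form>
  <input type="checkbox" name="marketing_offers_optin" checked> Send me special offers
  <input type="checkbox" name="essential_cookies_only"> Essential cookies only
</form>
"""

finder = PreCheckedConsentFinder()
finder.feed(SAMPLE_FORM)
print(finder.findings)  # expected output: ['marketing_offers_optin']

Such rules are cheap to run at scale but, as noted above, they miss patterns expressed through images, rendered layouts, or wording, which is where machine-learning approaches such as [57] come in.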

Still, policymaking, regulation, and enforcement might be insufficient to entirely prevent the use of dark patterns in the digital market. Another approach that surfaced in the findings from [6] is the need for guidance and education for service providers. Such an approach, however practical, might also be inadequate, since the profits that companies gain through the utilization of dark patterns might be too significant. Nevertheless, if appropriate regulations are in place, together with high fines for non-compliance (e.g., similar to the fines that the GDPR places on non-compliant companies), guidelines and education can reduce the number of dark patterns applied in the digital market.

Additionally, some researchers have proposed that user education might be another way of reducing the adverse effects of dark patterns. As much as such an approach might be helpful to some extent, early empirical findings suggest otherwise. For instance, Bongard-Blanchy et al. [9] examined how awareness of the heuristics and biases applied in deceptive designs affects users. The study showed that a number of participants, although conscious of the psychological tricks used to deceive them and admitting their awareness of deceptive designs, still fall for and are influenced by dark patterns.

3.3 Dark Patterns and Implications on Businesses

Most of the research discusses dark patterns and their potential implications for users. However, only a few investigations seem to tackle the effects that the utilization of dark patterns might have on the businesses that employ them. Moreover, none of the research papers considered in the present chapter examines the effects of privacy dark patterns on companies.

In their meta-analysis, Hummel et al. [31] compared the effects of different nudging practices in digital and physical contexts. They found that digital nudging does not differ significantly from nudging in other contexts. However, the analysis showed that approximately one-third of the effect sizes reported in the existing nudging studies were insignificant. Notably, default nudges (corresponding to the bad default dark patterns) seem to be the most effective. Brown and Jones [11], in their longitudinal study, investigated the effectiveness of dark patterns and other changes in UI design. Based on data collected from e-commerce websites that implemented A/B testing (comparative testing of different versions of a product to identify which one consumers prefer) of various designs between 2014 and 2017, they showed that only specific design changes carry the potential to increase revenue per visitor. Unfortunately, some categories of dark patterns—scarcity, social proof, urgency, abandonment (persuading users not to leave the site when they show abandonment behavior), and product recommendation—seem to increase revenue. Although the revenue increase identified in this research was relatively small, ranging from +0.4% to +2.9%, from a business point of view the implementation of deceptive designs might still be worth it, as any increase in revenue adds to business growth.

Considering how little research has been conducted on the effects of dark patterns on businesses, it seems necessary to identify which dark patterns cause the most damage and to regulate their applicability in the digital environment, similar to how the GDPR regulates data protection. Perhaps this benefit-based approach could guide regulators to ban specific designs entirely or to place high financial fines on the use of such deceptive designs. On the other hand, if the results of such research were to show minimal business benefits, the findings could be used to convince companies to stop the manipulative practices, as they do not significantly impact their revenue. Instead, such research could show that users, becoming more aware of dark patterns, might be less loyal to a given business, perceive it as less trustworthy, and abandon its services. Hence, this chapter proposes that more research on the effects of dark patterns on businesses and consumers could help regulators and policymakers.

4 Concluding Remarks

This chapter aimed to explain, based on existing research, the mechanisms through which dark patterns work. It presented a list of psychological biases and heuristics that privacy dark patterns exploit. Moreover, it provided a non-exhaustive list of privacy dark patterns, grouping related patterns together. Although numerous attempts at dark pattern categorization exist, many researchers seem to describe very similar phenomena yet name them differently (e.g., obfuscation vs. obfuscating settings, immortal accounts vs. difficult deletion). In this chapter, such similarities were pointed out to improve the understanding of how specific designs exploit psychological vulnerabilities. Nevertheless, such a plethora of categories and different dark patterns might make knowledge about privacy dark patterns challenging to digest and utilize, e.g., in automated dark pattern recognition tools.

Further, the existing research on dark patterns, particularly in the context of privacy, is somewhat limited. It is not entirely clear what harms deceptive designs cause and how (if at all) such harms could be classified. Such a classification could prove helpful in assessing the severity of harm and the potential need to develop measures against specific privacy dark patterns whose effects might be more detrimental than others. To summarize, the scarce and not entirely systematic research on privacy dark patterns calls for future work, mainly empirical, to feed policymaking, help companies avoid deceptive designs, and produce automated tools that help recognize digital manipulation.