
1 Challenge and Vision

Since the rise of digitization and the widespread use of digital services in everyday life, user interactions with technology have become increasingly intricate. The simple design of services used only by one client and one server at a time is long outdated. Modern digital services are typically divided into sub-services that perform very specific tasks: for example, online purchases are made via a central marketplace, i.e., an online platform that allows multiple (independent) asset providers to offer assets, such as goods or services; payments are processed via dedicated payment providers; and physical assets are delivered by an external (transport) service provider. Furthermore, platforms may include additional third-party services like RSS feeds or location services to enhance the user experience. The result is a digital ecosystem where every player benefits from participating. Koch et al. [25] provide deeper insights into the interplay between aspects and actors of a digital ecosystem. We define a digital ecosystem as a socio-technical system that brings together various independent providers and consumers of digital goods. A digital ecosystem service could be a website, a web service, or even locally installed software provided by a third party.

In almost every digital ecosystem service, personal data of the users are processed. In the European Union, personal data is protected by the General Data Protection Regulation (GDPR; [15]) and may only be processed under conditions such as those stipulated in Article 6. A common legal basis for the processing of personal data in digital services is the consent of the data subject to the processing of their personal data. Consent is defined as “any freely given, specific, informed and unambiguous indication of the data subject’s wishes” (Article 4 (11) GDPR). Because of the requirement that the consent be specific, it is pivotal from a legal perspective that users consent to each individual instance of data processing.

One example that shows the consequences of constantly repeated requests for consent is cookie banners used on websites. As obtaining consent concerning cookies is addressed in Art. 5 (3) ePrivacy Directive of the European Union and not in the GDPR, this only serves as an example of how data subjects usually act when they are overwhelmed by the number of requests asking for their consent to the processing of their data. Unlike the GDPR, the ePrivacy Directive (just as the upcoming ePrivacy Regulation) applies to the processing of any electronic communication data arising from the provision or use of electronic communication services, as well as information related to the end-users’ terminal equipment (Art. 2 (1) ePrivacy Regulation (draft)). As cookies do not always relate to the processing of personal data, the consent concerning cookies is regulated in the ePrivacy Directive (or soon in the ePrivacy Regulation). In order to get rid of cookie banners, users often just accept everything instead of making an informed decision. This phenomenon is known as cookie fatigue [21]. The consents that are collected on websites in a digital ecosystem, for example to complete an ordering process, may lead to a problem comparable to cookie fatigue. In a digital ecosystem, there are usually many different actors involved in one process, so more than one consent may be necessary to complete it. It is therefore conceivable that data subjects will act the same way as they do when too many cookie banners show up: they will just accept everything in order to proceed as quickly as possible. The way such requests are usually made (i.e., via cookie banners) cannot be considered informed consent as demanded by Article 4 (11) GDPR [26, p. 407]. Besides cookies, websites or digital ecosystem services might also have other features for which explicit consent must be obtained.
For example, weather forecast services frequently make use of the user’s current location obtained from their device. In this case, no legal basis other than consent is possible.

The more specific consents are, the more elaborate the requests for consent become, causing greater informational and cognitive load on the user, which negatively affects usability [34]. In practice, this causes the number of consents to quickly become unmanageable for users. Research in this field has produced a number of solutions for requesting and receiving consent in a consolidated and simplified way, yet these approaches have failed to fulfill their purpose as they were never widely adopted and integrated into services. We argue that digital ecosystems, as self-contained systems, need only a smaller and more manageable number of services to implement a possible solution for communicating consent. Therefore, we propose (Sect. 2) and legally assess (Sect. 3) generic consents as a possible solution for requesting and handling user consents in the scope of digital ecosystems. Focusing on the consumers’ needs, we also present our idea of a trial period (Sect. 4) that allows users to test and gain trust in a digital ecosystem service before giving further consent. Next, we discuss implementation options that allow us to demonstrate the general technical feasibility of our proposed solutions (Sect. 5), and we conclude with a discussion on the practicality and advantages of our solutions compared to previous work (Sect. 6).

2 Generic Consents

Consents form one of the bases for processing referred to in Article 6 GDPR, besides contracts, legal obligations, vital interests, public interests, and legitimate interest. In the following, we will primarily address two of the required characteristics of consents, namely, being specific and unambiguous. In a digital ecosystem, a consent relates to the interplay between an asset provider, a digital ecosystem service, a data category, a processing type, and a purpose.

For a consent granted by a user to be considered an explicit consent, four conditions have to be met: (1) it is granted to one service or one provider, (2) it is one specific processing permission, (3) it pertains to one concrete data item, and (4) it applies to one specific purpose. To counteract the problem of users being overloaded by the plethora of explicit consents needed to cover the variability of these aspects, we propose the use of generic consents that can apply to several or selected groups of providers, services, data categories, processing types, or purposes (cf. Fig. 1). Note: The term explicit consent does not impose any restrictions on the way consent may be given. Both generic and explicit consents may be declared explicitly (e.g., in a written declaration of intent) or implicitly (e.g., through conduct implying an intent).

Fig. 1

Concept of generic consents and allowlists

In general, the use of generic consents cannot be sustained because a consent that is too broad is not lawful. In the context of digital ecosystems, however, we argue that when properly implemented, generic consents can be used to express the users’ data protection demands in an abstract way while still being specific and unambiguous enough to be GDPR-compliant (cf. Sect. 3) and still being manageable. It is also conceivable that a whole set of consents is proposed to the user, for example, by neutral bodies or by the platform itself. These are essentially allowlists, which of course must always be compiled in the interests of the user. Therefore, this approach is not suitable for all digital ecosystems, but only for those where the platform provider is particularly trustworthy (e.g., data trustee) or where trustworthy interest groups exist to take care of this.

Determining whether a consent exists for a specific processing purpose is not trivial when using generic consents. In particular, one must always consider the digital ecosystem’s state, which we call context. To determine whether consent does or does not exist, the consent must be interpreted with respect to two temporally distinct circumstances: the context when it was given—“consent context”—and the context of the current activity—“usage context” (cf. Fig. 2). The fundamental question is then whether the usage context is covered by the consent context. For each aspect, we will argue in the following how the characteristics of digital ecosystems can be used to achieve compliant generic consents.
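To make this coverage question concrete, the check can be sketched as a simple predicate: a usage context is permitted only if every one of its five aspects falls within the corresponding aspect of the stored consent context. The Python sketch below is purely illustrative; all class and attribute names are our own assumptions, and a real implementation would first resolve concrete data items and providers to their categories via the taxonomies that the ecosystem operator defines.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ConsentContext:
    # Each aspect is a set of accepted category names (the "generic" part).
    data_categories: frozenset
    asset_providers: frozenset
    services: frozenset
    processing_types: frozenset
    purposes: frozenset

@dataclass(frozen=True)
class UsageContext:
    # A concrete processing activity names exactly one value per aspect.
    data_category: str
    asset_provider: str
    service: str
    processing_type: str
    purpose: str

def covers(consent: ConsentContext, usage: UsageContext) -> bool:
    """The usage context is covered only if all five aspects match."""
    return (usage.data_category in consent.data_categories
            and usage.asset_provider in consent.asset_providers
            and usage.service in consent.services
            and usage.processing_type in consent.processing_types
            and usage.purpose in consent.purposes)
```

Modeling each aspect of the consent context as a set naturally accommodates generic consents: an explicit consent is simply the special case where every set contains exactly one element.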

Fig. 2

Example: consent context vs. usage context

Data

First, we must answer the question of whether the data to be processed (usage context) is covered by the given consent (consent context).

Design Challenges

Different asset providers may use different terms for the same data category (e.g., geo-data vs. location data). In a digital ecosystem, this could be resolved through central standardization, which would also boost comprehensibility. Another problem is that categories are often related to each other. In the simplest case, this results in a hierarchy. For example, consent for the super-category “location data” should also apply to the subcategories “GPS-based location data” and “network-based location data.” The resulting taxonomy defines the consent context for the data.

However, it might not be possible to establish a clean, overlap-free hierarchy in all digital ecosystems. In practice, one and the same data item can be assigned to several categories. Accordingly, this might give rise to the problem that generic consents and objections may contradict each other. If a data item is in categories A and B and there is only consent for A, the consent can be regarded as given. However, if there is an objection for B, the objection takes precedence over the consent given in A.
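Assuming the data categories form such a taxonomy, the precedence rule described above (objections override consents, and consent to a super-category propagates to its subcategories) can be sketched as follows. The category names and the parent-pointer representation of the taxonomy are illustrative assumptions.

```python
# Illustrative sketch: a data-category taxonomy as child -> parent edges.
TAXONOMY = {
    "gps_location": "location_data",
    "network_location": "location_data",
    "location_data": None,  # root category
}

def ancestors(category):
    """Yield the category itself and all of its super-categories."""
    while category is not None:
        yield category
        category = TAXONOMY.get(category)

def is_permitted(item_categories, consents, objections):
    """Decide whether a data item may be processed.

    Objections take precedence: if any category of the data item (or one
    of its super-categories) has been objected to, processing is denied;
    otherwise it is permitted if at least one category is consented to.
    """
    cats = {a for c in item_categories for a in ancestors(c)}
    if cats & objections:
        return False
    return bool(cats & consents)
```

For example, consent for the super-category "location_data" covers a data item categorized as "gps_location", but an additional objection to "gps_location" would block it despite that consent.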

Asset Provider

The next aspect to be assessed is whether the provider that wants to process a data item (usage context) is covered by the given consent (consent context). Asset providers within the same digital ecosystem can be categorized easily in most cases. For example, consent could apply to all “payment providers” or all “shipping service providers”. The categorization of providers could be performed by the operator of the digital ecosystem as part of the on-boarding process, which should make this a relatively simple exercise compared to the data categories. The resulting taxonomy thus defines the consent context in a structured manner.

Design Challenges

This notion relies on the assumption that asset providers in a given category are so similar that a user would always treat them in the same way when consenting. However, this decision could go beyond a simple categorization. For example, users could base their consent on the provider’s reputation (e.g., “consent only for companies rated 4.5 stars or higher”). The set of providers that fulfill this criterion is not fixed and must thus be reflected by the usage context. Since there are also various non-rational and non-measurable criteria that we cannot formally cover, there should be the possibility to explicitly exclude a provider from the generic consent (“all except provider x”). Another concern is whether the consent also applies to providers that joined the ecosystem after the generic consent was given. One could argue that these new players are not covered because they were not part of the consent context. On the other hand, it is likely in the interest of the users, and indeed the basic idea of our approach, that they do not have to reconsider their consent every time a new asset provider joins the digital ecosystem. Requiring this would also contradict the flexibility and openness that are central to digital ecosystems. Naturally, when in doubt, users could also be given a choice of whether they want new asset providers to be automatically included in their generic consent. Another viable compromise would be to ask the users to reconfirm a previously given consent. If a user decides to give consent anew, the consent context would then be updated.
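The provider-related conditions discussed here (category membership, a reputation threshold, explicit exclusions, and a policy for providers that join later) can be combined into a single coverage check. The sketch below is a minimal illustration; the field names and the boolean opt-in flag for newly joined providers are our own assumptions.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    category: str      # e.g. "shipping service providers", assigned at onboarding
    rating: float      # reputation score, e.g. an average star rating
    joined_at: int     # time the provider joined the ecosystem

@dataclass
class ProviderConsent:
    categories: set    # consented provider categories
    min_rating: float  # e.g. 4.5 for "only well-rated companies"
    excluded: set      # explicit exclusions: "all except provider x"
    granted_at: int    # time the generic consent was given
    include_new: bool  # cover providers that join after granted_at?

def provider_covered(consent: ProviderConsent, p: Provider) -> bool:
    """Check whether a concrete provider falls under a generic consent."""
    if p.name in consent.excluded:
        return False
    if not consent.include_new and p.joined_at > consent.granted_at:
        return False
    return p.category in consent.categories and p.rating >= consent.min_rating
```

Note that the rating comparison must be evaluated against the usage context, i.e., the provider's rating at the time of processing, since reputation scores change over time.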

Digital Ecosystem Service

Digital ecosystem services can strongly vary between digital ecosystems, but they can usually be classified by their characteristics. The resulting taxonomy thus defines the consent context for the services.

Design Challenges

Some aspects of a service offering can be rather dynamic, which will need to be taken into account in the consent. For example, consent could be related to the service level offered to specific users (e.g., “24/7 phone support”) or to temporary offers (e.g., “free returns only this weekend”). The digital ecosystem services that fulfill these criteria are not fixed and must thus be reflected by the usage context.

Processing Type

Both generic and explicit consents may specify the allowed processing type(s). According to Article 4 (2) GDPR, processing includes “collection, recording, organisation, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction”.

Design Challenges

Even though these processing types could be used, it must be checked whether the users understand them and whether the users’ mental models (i.e., their own individual understanding, see also the chapter “Achieving Usable Security and Privacy Through Human-Centered Design”) fit the actual meaning of the processing type. Thus, a clear, understandable, and ecosystem-wide definition of each processing type is essential.

Purpose

Finally, the core of any consent is the intended purpose of the processing. Limiting data processing to only those purposes that are defined in advance is a consequence of the so-called purpose limitation principle. As demanded in Art. 5 (1) (b) GDPR, personal data shall only be collected for specified, explicit, and legitimate purposes. Changes of the primary purpose are only lawful if they comply with the prerequisites of Art. 6 (4) GDPR. The purpose typically relates to specific business processes, such as ordering, payment, or advertising.

Design Challenges

Purposes can vary between asset providers. However, the number of purposes in a digital ecosystem aimed at the users is actually quite limited. The operator of the digital ecosystem should therefore define the list of purposes for which generic consent can be obtained. The resulting taxonomy thus defines the consent context for purposes. Here, we do not assume a usage context. If the definition of a purpose changes, either the consent becomes invalid or the old definition (i.e., the consent context) has to be applied.

To summarize, it is primarily the task of the operator of the digital ecosystem to define precisely how the “context” of a consent is defined in their digital ecosystem. This task might sound like a lot of work, but we suggest not over-engineering this distinction. Because users will not be giving their generic consent for special cases, it should be sufficient to cover the most common cases in the categories. However, these cases should be clearly defined in order to avoid (unintentional or intentional) assignment of consents to categories for which they were not intended, which would legally invalidate these consents.

3 Legal Assessment

As described above, the GDPR provides a well-defined set of requirements for lawful requests for consent: the basic requirements of Art. 4 (11) GDPR, as well as their modifications described in Arts. 6 (1) (a), 7, and 9 (2) (a) GDPR.

Among other things, consent may only be given by a data subject when they have knowledge of the full facts and circumstances. This results from the demand for the consent to be informed, specific, and unambiguous. A data subject’s consent must be given in respect to a specific data processing. In particular, it may not be derived from another expression of intent, not even one of comparable subject matter [32, para. 38]. Furthermore, informed consent can only be given if the controller provides the data subject with the information demanded by the GDPR (described in more detail in Sect. 2) in clear and plain language and in an easily accessible form [32, para. 40]. A data processing controller is the natural or legal person, public authority, agency, or other body that, alone or jointly with others, determines the purposes and means of the processing of personal data; cf. Art. 4 (7) GDPR. Data subjects must be able to foresee the precise consequences of their consent. However, these strict requirements lead to the cookie fatigue problem described in Sects. 1 and 4.1: to fulfill the demands for consent to be informed, specific, and unambiguous, a multitude of consents is obtained, which causes users to experience overload and take on a dismissive stance rather than develop a more detailed understanding of the matter (which Sect. 4.1 explores in greater depth). Because digital ecosystems encompass a variety of different services, they are particularly susceptible to this effect.

To counteract this phenomenon, the legal literature has, in recent years, proposed and discussed different approaches to improve data protection and consent management. One of the suggested Privacy Enhancing Technologies (PETs, see also the chapter “Acceptance Factors of Privacy-Enhancing Technologies on the Basis of Tor and JonDonym”) is a Personal Information Management System (PIMS) [6, p. 946]. The goal of PIMS is to enable users to manage their personal data in one place [17, p. 2241]. As a consent management system, a PIMS could be utilized to request and obtain the generic consents described in Sect. 2 in a lawful way. The basic idea of providing users with a central place to manage their data protection preferences is not new: as early as 2002, the W3C recommended adopting the Platform for Privacy Preferences Project (P3P) [46]. The goal of P3P was to allow websites to present their data collection practices in a standardized, machine-readable, and easy-to-locate manner, thereby enabling web users to understand what kind of data will be collected, how it will be used, and what they can change about that [46]. Thus, P3P was a mechanism to support the protection of data and privacy [18, p. 157], just as PIMS are, and required users to indicate their data protection preferences beforehand [18, p. 159], [46]. Unfortunately, P3P was rarely adopted in practice. Perhaps PIMS will be able to achieve the original goals of P3P. One factor that may allow PIMS to be adopted more widely than P3P is the introduction of new legislation recommending their use in practice. The enactment of the Telekommunikations-Telemedien-Datenschutz-Gesetz, TTDSG (Telecommunications-Telemedia Data Protection Act) in Germany in December 2021 marked the first introduction of a regulation regarding PIMS (§ 26 TTDSG).
The TTDSG is partly based on Directive 2002/58/EC of the European Parliament and of the Council concerning the processing of personal data and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications, or “ePrivacy Directive”). Art. 10 ff. of the Data Governance Act of the European Union further provides regulations concerning Data Sharing Services, which serve similar purposes as PIMS. Even though this demonstrates that regulations regarding PIMS are being drafted, their implementation in digital ecosystems is still a long way off. The regulations of the TTDSG regarding consent management systems (§ 26 TTDSG) correspond only to consent in terms of § 25 TTDSG. The prerequisite determining when this regulation applies is not governed by the processing of personal data, but by the storage of data on a user’s personal devices [43, Ettig, § 25 TTDSG para. 3]. Further restrictions by the German legislator regarding consent were not possible: regulations that deviate from the GDPR would be unlawful because of the precedence that laws of the European Union have over national legislation [23, Ambrock, Teil A, II. Rechtliche Grundlagen, para. 52 ff.]. Moreover, the Data Governance Act is not yet applicable in the Member States of the European Union. Thus, although these regulations may also be of relevance, the implementation of a PIMS to obtain generic consents in a digital ecosystem should be examined particularly in terms of its compliance with the demands of the GDPR. Because there has been a lively discussion about PIMS in Germany since the TTDSG was debated in the German Parliament, our particular focus will be on the German jurisdiction and literature on this topic.

3.1 Personal Information Management Systems in Digital Ecosystems

A PIMS could be used as a central consent management system in digital ecosystems. A possible implementation for the central management of consents could take the form of a dashboard or cockpit: such a system would encompass both easy access to overviews of consents already given and management (e.g., reviewing, tracking, and withdrawal) of the consents themselves [6, p. 947]. To be precise, users should be enabled to define privacy settings only once and in an abstract way, so that these can be a basis on which future requests for consent can be answered automatically [6, p. 947]. Our idea assumes the use of a privacy cockpit combined with a central platform acting as an intermediation service between data subject and controller. The privacy cockpit would be responsible for obtaining the user’s consents and forwarding them to the individual ecosystem services, which would then technically enforce them in their respective contexts. Should a consent cover the specific context only partially, the user should be notified of missing required consents in a non-intrusive way and supported in making adjustments. This is in line with the abstract model of a PIMS used as a consent management system.
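The described flow (the cockpit obtains consents centrally and forwards them to the ecosystem services, which enforce them locally) resembles a publish-subscribe pattern. The following sketch illustrates this under the simplifying assumptions that consents are opaque objects and that services merely record them; all class names are our own, and real services would additionally enforce the received consents in their processing logic.

```python
class PrivacyCockpit:
    """Central consent store that pushes granted consents to the
    registered ecosystem services, which enforce them locally."""

    def __init__(self):
        self._services = []
        self._consents = []

    def register_service(self, service):
        self._services.append(service)
        # Services joining later receive all previously granted consents.
        for consent in self._consents:
            service.apply_consent(consent)

    def grant(self, consent):
        # Record the consent and forward it to every registered service.
        self._consents.append(consent)
        for service in self._services:
            service.apply_consent(consent)

class EcosystemService:
    """Placeholder for a digital ecosystem service; it only records
    the consents it would have to enforce in its own context."""

    def __init__(self, name):
        self.name = name
        self.active_consents = []

    def apply_consent(self, consent):
        self.active_consents.append(consent)
```

Pushing consents to the services, rather than having each service query the cockpit on every request, keeps enforcement local while the cockpit remains the single place where users grant and withdraw consent.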

3.2 Obtaining Consent via a PIMS

The legal community is in parts quite skeptical of PIMS because when consents are given via such a system, they are given in advance and without detailed knowledge of the specific data processing involved. Giving consents in an automated way based on settings unrelated to individual cases fundamentally contradicts the aforementioned principle that consent for personal data processing has to be specific. It harbors the danger of producing unlawful blanket consents. If a consent is insufficiently specified regarding its content, purpose, or consequences, it will be legally void for being too unspecific and ambiguous [43, Arning/Rothkegel; cf. Art. 4 GDPR para. 329]. Yet, as it is the goal of PIMS to decrease the effort and the sheer endless number of consents to be given by a user, one could question whether it is even possible to obtain consents via a PIMS in a lawful way and resolve problems like overload and fatigue.

3.3 Using Allowlists in Digital Ecosystems

To ensure that a consent is specific, unambiguous, and informed enough on the one hand and to eliminate the cookie fatigue problem on the other hand, a compromise might be needed. The question of how lawful such abstract ex ante consents are was central in a research assessment of § 26 TTDSG, in which the authors proposed the implementation of so-called whitelistsFootnote 1 as a solution to this problem [42, p. 42 ff.]. Allowlists are intended to improve usability: users should be enabled to give consents for fine-grained processing purposes based only on knowledge about groups (i.e., categories of controllers) [42, p. 6 para. 5]. Using these allowlists, one could considerably reduce the number of consents to be given if a whole group of controllers could be accepted with one click. At the same time, such a list can be highly informative, specific, and unambiguous and eliminate the need to repeatedly obtain individual consents, by providing the user with an overview of all controllers and their corresponding processing purposes to which they grant consent.

The concept of allowlisting is certainly not new in the legal community. In the domain of competition and copyright law, various courts have already addressed the issue of ad blockers implementing allowlists [8, 29]. They considered the question of whether, for websites financed through advertisements, buying a placement on an allowlist that allows them to display advertisements even while an ad blocker is active complies with competition law [29]. The notion of using allowlists to simplify data processing consents has not been discussed in detail yet. Note that it is not our goal to establish a similar practice of granting a position on an allowlist in return for a fee because that might lead to allowlists becoming available only to services with sufficient financial backing and without the assurance that the controllers comply with data protection regulations. Thus, a different approach to creating such allowlists is needed, which can take either of two forms:

3.3.1 Solution 1: Organizational Allowlists

The first possibility is to create allowlists as suggested in the aforementioned research assessment concerning § 26 TTDSG [42]. The authors propose different approaches to allowlisting by organizations, such as using a neutral third party (the research assessment proposed NGOs) to curate a listing of trustworthy controllers [42, p. 5, para. 4], or by the operator of a PIMS. They propose the following procedure: first, the controllers register in the PIMS. Next, the PIMS operator creates allowlists containing all applicable controllers and their processing purposes, which can be tailored to the users. The allowlists would be adapted to a user’s specified preferences, but the user could still choose to accept or reject them [42, p. 43].

3.3.2 Solution 2: User-Defined Allowlists

The other possibility is to have the users generate their own allowlists in the PIMS. After registering, they could be prompted to set up their preferences and then create their own allowlist by specifying the controllers they deem trustworthy and to whom they would like to grant consent for the specified processing purposes. To implement this solution, the PIMS operator would have to invest some additional effort. They would have to determine how to classify services (e.g., payment services or shipping services) and processing purposes in the system so that the users can define their choices for these classes. The advantage of this solution is that it offers increased flexibility and empowers the users to define an allowlist that is completely attuned to their own preferences.
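A user-defined allowlist of this kind can be represented as a simple mapping from service classes (as defined by the PIMS operator) to the controllers and processing purposes chosen by the user. The sketch below is illustrative only; all concrete names (e.g., "PayFast") are hypothetical.

```python
# Illustrative user-defined allowlist: for each service class, the
# controllers the user trusts and the purposes they consent to.
user_allowlist = {
    "payment services": {
        "controllers": {"PayFast", "SafePay"},      # hypothetical names
        "purposes": {"payment processing"},
    },
    "shipping services": {
        "controllers": {"QuickShip"},               # hypothetical name
        "purposes": {"delivery", "shipment tracking"},
    },
}

def consent_exists(allowlist, service_class, controller, purpose):
    """Check whether the allowlist grants consent for a concrete request."""
    entry = allowlist.get(service_class)
    return (entry is not None
            and controller in entry["controllers"]
            and purpose in entry["purposes"])
```

A request such as a shipping provider asking to process the user's address for delivery would then be answered automatically by looking up the provider and purpose in the corresponding service class.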

3.4 Legal Conclusion

We can conclude that, from a legal point of view, a dashboard or cockpit as a PIMS specialized in managing consents is an appropriate solution for requesting and managing consents in a centralized and lawful way within the context of digital ecosystems. Skeptics might argue that giving consent in advance without knowing about the intended data processing in detail fundamentally contradicts the regulations of the GDPR. Though our idea of using generic consents entails exactly that, just as systems like PIMS do in general, our approach should not be rejected right away when seeking a solution for the serious challenges that exist with consent handling. The requirements for consents to be informed, specific, and unambiguous were intended to enable data subjects to completely grasp the consequences and implications of their granted consents. It is indisputable that detailed information for the users is necessary to fulfill these demands, but whether the level of detail that the GDPR demands is truly needed is up for debate. For instance, if a user consents to a delivery service provider processing their address for the purpose of delivering a shipment, we can assume that the user has a clear understanding of the consequences of their consent. To them, it makes little difference whether this processing is done by delivery service provider A or delivery service provider B—as long as both are trustworthy. Granting consent to a group of controllers thus does not contradict the spirit and purpose of the GDPR. The reason why the GDPR requires consent to be given in an informed way is that it aims to protect the data subjects. They should only give their consent if they are completely aware of and agree with the consequences of this action [43, Taeger, Art. 6 GDPR para. 37]. This intended protection is best achieved when data subjects take note of and seriously consider the provided information.
Overloading the user with information or repeatedly asking for consent does not fulfill this aim [43, Taeger, Art. 6 GDPR para. 40], as also demonstrated by the cookie fatigue problem. It is more likely that data subjects will examine the information provided in a PIMS if they have to specify their privacy settings and preferences regarding data processing by specific data processors only once [5, p. 10]. When applying a teleological interpretation (an interpretation in the spirit and purpose of the law) of the regulations of the GDPR, one will come to the conclusion that requesting generic consents is indeed lawful as long as the information demanded by the GDPR is at least provided for categories or groups (see also [42, p. 6, para. 5]).

4 User-Oriented Redesign of Consent Handling

In Sect. 3, we learned about the legal basis and the logic behind obtaining consents. However, we also established that from a legal perspective, it can be challenging to obtain these consents in a practical way that fits the purpose and is free from dispute. But what if we approach this problem from another perspective: that of the human actors involved in the transaction of data processors requesting consent and data subjects giving consent? What challenges do they face with the current implementation of the legal requirements, and how can they benefit from the concept of generic consents? In this section, we will take a psychological perspective on the consent handling process.

When data protection regulations started to mandate consents, little guidance was given on how this should be implemented. Processors of personal data had to quickly find ways to obtain consent and make sure to do so in a legally effective way, but without agreeing on a technical privacy standard. This resulted in a proliferation of consent handling tools [12, 27]. Dentists and other doctors had their patients fill out hastily created consent forms, and associations scrambled to obtain explicit consent from their members—typically by email—to store their data and opt into their newsletter.Footnote 2 However, there is one type of consent that people continue to be confronted with most frequently: website cookies, which we briefly mentioned in Sect. 1. Because nearly all B2C digital ecosystems use a website as their primary front-end, this is an important topic for operators of digital ecosystems. As a result, cookie banners are a very compelling and tangible example of a current problem with consents, which allows us to explore the challenges and possible solutions in more depth. We understand that this is just one example of a broader problem area, and we believe that the solutions we propose can be transferred to other challenges related to consents.

In Sect. 4.1, we will review what problems exist with cookies and how they affect end-users from a psychological perspective. In Sect. 4.2, we will propose solutions for these problems. In essence, we will propose a solution that promotes privacy-driven human-centered design (see also the chapter “Achieving Usable Security and Privacy Through Human-Centered Design”) and emphasizes a positive user experience through an approach in which a website—which may be the front-end for a digital ecosystem—aligns with its users over time to find the optimal level of consent for them. Drawing from psychological concepts of how we build relationships as humans, this approach assumes that users must first be given an opportunity to build up trust [3, 9, 24, 39]. During this time, they do not have to concern themselves with consents. In this context, we specifically consider online trust, which Shankar et al. [39] define as “a reliance on a firm by its stakeholders with regard to its business activities in the electronic medium, and in particular, its Web site”. Empowering users to build up trust can foster their perception of self-determination, boost their loyalty, and ultimately provide them with a better user experience [2]. Although our focus lies on websites (i.e., front-ends of digital ecosystems), these solutions could also be transferred to digital ecosystems where services take the place of websites.

4.1 Psychological Effects of Cookie Banners

Whether or not they are aware of it, a user and the provider of a digital ecosystem enter into a mutual relationship; not only a contractual relationship but also an actual relationship that involves dependencies and feelings [16]. The users take the role of asset consumers, who depend on the digital ecosystem to be the asset broker that helps them find and obtain these assets, while the operator of the digital ecosystem is also the data processor, who depends on the users to be willing data subjects [25]. This relationship is obviously most harmonious if both parties feel comfortable. Particularly the fact that a user has trust in the operator of the website has been found to have a strong impact on that user’s willingness to provide personal information [50] and thus on the likelihood that they will give consent. But as we will see, in current practice, this relationship is often distorted or even unhealthy. This is unfortunate because this relationship is not just defined over the user’s personal data; the operator of a digital ecosystem does not merely want to store cookies, but they also want to be liked by the users in hopes that they will become loyal customers.Footnote 3 At the same time, they must respect that users do not want to feel too exposed, yet also dislike feeling limited in their use of the website due to their privacy settings [9, 24]. In this section, we will explore six closely related problems concerning cognitive aspects.

4.1.1 Problem 1: Upfront Consents

Obtaining consent from persons like patients or association members is markedly different from asking website visitors to consent to cookies. In the former case, there is always a basic level of trust that has been able to grow over some period of time. Before a patient seeks medical attention, they surmise that the medical professional will be able to treat them, often based on another doctor’s referral or recommendations from friends or online reviewers. By the time the patient is asked to fill out a consent form in the waiting room, they have already gone through these preparations, and during their visit they can determine whether the facilities are inviting and the staff is friendly. Similarly, one typically only joins an association that suits oneself. With websites, this is entirely different. Before one can go on to explore, the first thing one has to do is to provide consent. This is strange because in most cases, the visitor has no relation to whoever is behind that website and has not had the opportunity to gain (online) trust [3, 24]. Even if the website is the portal to a well-respected bank or news outlet, the visitor who has not had dealings with them before will not be able to decide for themselves whether or not they should entrust them with their personal data at this point. Thus, especially when a user has never visited that particular website before, it is too early to reasonably expect them to make a well-informed decision. They might be unsure as to whether the website is even suited for them or whether they can trust the website’s operator, especially if the latter is shrouded in anonymity. Consequently, in many cases, consent is provided indiscriminately, making this action—except in the case of the rare user who reads the Terms & Conditions and the Privacy Policy—more a case of blind consent rather than actual informed consent [11, 45].

4.1.2 Problem 2: Coerced Consents

So far, we have assumed the normal case, where a user still has the freedom to not share their personal data, even though this procedure is still somewhat intrusive. It might be compared to a real-life situation where one asks a random person on the street for their phone number as soon as they make eye contact. Unfortunately, the reality can be less idyllic. Sometimes users are downright coerced into consenting to (all) cookies before they can continue to use the website; this practice is employed, for example, by various prominent newspapers [40]. The lawfulness of this can be disputed [12, 38] because the user is no longer free to make this choice. They are rather pushed toward a decision because the website uses the psychological concept of fear of missing out (FOMO) [36] and because they know that without granting consent, they will not be able to access everything [49]. As a practical example, visitors of Healthline.com seeking medical advice can only choose between allowing and disallowing all purposes at the same time. In the latter case, they are redirected to the ad-free and tracking-free portion of the website,Footnote 4 which offers nearly no functionality. Users can thus either decide to give in and allow all purposes so they can access the content or consult other websites instead.

Particularly the example of newspaper websites (further explored by Soe et al. [40]) perfectly illustrates just how bad things have become with cookies, which in real life would be the equivalent of medical practices employing bouncers forcing patients to sign a release form before they can enter or associations requiring potential members to sign a non-disclosure agreement before they can even get in touch. Needless to say, that is not a great start for a mutual and trustful relationship to develop because it disrespects the autonomy that data subjects are entitled to have. Other websites use the strategy of pleading or confirmshaming [28] in a follow-up dialog. Such emotional appeals might not only push uncertain users in the direction intended by the designer but can also spark a feeling of discomfort, aversion, and mistrust both among users who reluctantly accept this as well as users who double down on their dismissal. Here, too, the basis for the relationship with the user is unhealthy and unbalanced.

4.1.3 Problem 3: Poor User Experience

So far, we have analyzed how users fundamentally and subconsciously approach their relationship to operators of a digital ecosystem. These aspects can be difficult to measure, but they play a central role in driving the user’s actions and perceptions. However, an aspect that is far more obvious to the user is how they consciously perceive their interaction with the website, i.e., their user experience [22]. Being confronted with a cookie pop-up (and possibly other pop-ups for subscribing to a newsletter, activating notifications, and accepting advertisements) before they can access a website’s contents can be a nuisance.Footnote 5 Earlier, we discussed that the mandate for obtaining consents was not complemented with guidance on how to implement this. Moreover, as explained in Sect. 5, the industry has so far been unable to agree on a shared standard. This has obvious consequences for cookie banners. Instead of being based on a reference architecture or system to meet the legal requirements in a structural manner, each website operator has essentially been left to their own devices in terms of finding and implementing their own solution [12, 27]. This is why cookie banners come in all shapes and sizes, resulting in a “Frankenstein implementation” that has become a user experience nightmare.

4.1.4 Problem 4: Unclear Utility

A user browsing the Internet typically accesses a website for a reason, like accessing resources (e.g., texts, multimedia) or online functions (e.g., web shops, instant messaging). This experience can be typified as instant gratification [41, 51], meaning the user clicks on a link in order to immediately have access to content that entertains, provides information, or in another way helps them achieve their goal. In this cognitive process, a cookie banner and other pop-ups are an undesirable hurdle that stands in the user’s way of receiving their gratification and adds to the time and effort needed for them to achieve their goal. This is why they experience it as a nuisance that negatively impacts the user experience. It is further exacerbated by the fact that it is not obvious to them how these additional—seemingly unnecessary—actions will benefit them. For example, on most websites, a user will not be able to directly see the effect of accepting or declining functional cookies. The consequences of the additional actions are often neither desired nor do they appear to be beneficial, so they have a low perceived value to the user. This, in turn, lowers the user’s motivation to spend their cognitive resources on deliberating about them (cf. [40, 45]). The user’s new sub-goal thus becomes overcoming the annoyance (a) with as little cognitive load as possible and (b) as quickly or efficiently as possible. Put more plainly, the user wants to think as little as possible about this activity and ideally just wants to click somewhere to be done with it [19]. This is what leads to the phenomenon known as cookie fatigue [21], which we discussed in Sect. 1.

4.1.5 Problem 5: Dark Patterns

Unfortunately, designers of cookie banners are aware of the mental offloading that takes place in users and in some cases have chosen to exploit it. This explains why so-called dark patterns [28, 40] (see also the chapter “The Hows and Whys of Dark Patterns: Categorizations and Privacy”) are applied so successfully: they guide an inattentive user to the result desired by the data processor, which is not necessarily the result in the user’s best interest. Through a misdirection pattern, providing consent becomes as easy as pressing the highlighted button, while objecting will, in the best scenario, involve the cognitive effort of reading what is on the highlighted and non-highlighted buttons and pressing the latter. Typically, this also involves a tedious and demanding process of finding a link in a paragraph of text or pressing an unlikely button (e.g., “more information”) to find the settings. In the worst-case scenario, it forces users to individually revoke their consent for legitimate interest processing purposes, after which they must still be careful not to click on the highlighted “Accept all” button [40]. This is not congruent with the careful deliberation a user is supposed to enter into when deciding what cookies to accept but instead is more about jumping through the right hoops; by this time, the cognitive load and the time that must be invested are so high that even privacy-aware users may already have given up and just consent to all [19]. Moreover, users rarely revisit their cookie settings on their own initiative—assuming they are even able to find the location where they can update their settings, provided this option is offered at all [12]. This actually makes it quite rewarding to lure users into giving their consent through dark patterns.
Some websites even repeatedly ask users who have not consented to all cookies for their consent anew, apparently judging that the ability to process more personal data outweighs the risk of those users growing more annoyed and suspicious. Note that from a legal perspective, many red flags can be raised in this section about the legality of the processing purposes provided or how users are enticed into giving consent and prevented from updating their preferences (cf. [38]), but we will leave that topic to governing bodies to sort out.

4.1.6 Problem 6: Repeated Consents

So far, our discussion has been limited to a single interaction between a user and one particular website in a specific setting. The problems show that in each individual instance, users deal with cookie banners that they consider of low value to them; users are often unable to estimate whether they want to use functions for which particular cookies are necessary; design patterns favor the wishes of the data processor; and the activity of giving consent involves a high cognitive load that inhibits careful deliberation. The final problem is that the same user has to make their choices anew when accessing the same website from a different browser on the same device or from a different end device. Considered rationally, it is unlikely that the user will really make a different decision when using different browsers or devices. Nevertheless, data processors seem to hold the persistent belief that forcing users to set individual preferences per website will make them more liberal in giving consent. This is unlikely and might only hold true if the user is able to become more affectionate and trustful toward a website [9]. Just as with interpersonal relationships, self-disclosure only increases after that relationship has had time to evolve [13].

4.2 Solutions for Improved User Experience

The six problems with cookie banners presented in Sect. 4.1 essentially revolve around aspects of user experience and the time given to users to build trust. When juxtaposing the needs of data subjects and data processors (see also the needs analysis methodology described in Sect. 4 of the chapter “Achieving Usable Security and Privacy Through Human-Centered Design”), we can derive possible design solutions that take both the psychological processes of the users and the intentions of the website’s operator into account. Here we propose four initial solution ideas, which naturally presume that the operator of the digital ecosystem is benevolent toward the users and has already made the decision to avoid questionable activities such as the use of dark patterns. A common thread shared by all four solutions is that they support a fair interplay (i.e., a healthy relationship) between the user and the website, in which the user is given a feeling of empowerment because the decisions are made together with the PIMS, which guides them through a pleasant process. After the user has been allowed to explore the website at their own pace, it equips them to make a truly informed decision, one that, if the user’s impression is positive, is ultimately more likely to be in the website’s favor.

4.2.1 Solution 1: Make Cookies Something of Later Concern

A first solution could be to reverse the current approach. Just as there are taboo topics during a first date, cookies are not necessarily the best opening line for a website. Instead, users are likely to respond positively to a fairly non-intrusive message that tells them the website operator will start by not collecting cookies. They should be offered the possibility to already change their settings to provide consent to unlock certain features, for example, through in-line cookie options [19], but would be free to continue using the website without investing mental effort. The website has just one chance to make a good first impression, and by enabling the user to explore some more, the user is likely to build up more trust toward the website. The option to consent to more cookies could be postponed until after a kind of trial period, during which the user can get an impression of the value the website can give to them. Now, they will be able to make a far more informed decision, and because of greater trust and a more positive attitude overall, they might give more lenient consents than they would have given otherwise. This might especially be effective if it is demonstrated clearly how granting consent makes certain kinds of processing and thus certain features possible.

4.2.2 Solution 2: Reject Until Further Notice

Many mailboxes in the Netherlands famously have a sticker through which residents opt out of unsolicited advertising material and papers (NO/NO) or either one of them (NO/YES or YES/NO) getting delivered. This reflects their generic consent to or rejection of receiving printed materials.Footnote 6 In some cases, an extra sticker provides an exception that the residents do want to receive separately delivered advertisements, for example, from their local supermarket. Similarly, users could have predefined YES/NO settings across their devices that reflect the basic attitude of a user toward privacy (see, e.g., the user group profiles discussed in Sect. 5 of the chapter “Achieving Usable Security and Privacy Through Human-Centered Design”). These general preferences would then be adopted as an individual website’s cookie preferences, and the user would only need to make adjustments for a particular website to add exceptions. This strategy could take the form of allowlists in a PIMS, as described in Sect. 3.3. However, this case would exceed the limits of a digital ecosystem, as it would essentially be a more ubiquitous implementation of cookie banner blockers like Consent-O-Matic [30] (see Sect. 5).

4.2.3 Solution 3: Provide Differentiated Decision Support

Another factor that could positively influence the users’ perceptions is the decision support provided by the interface. By using short and to-the-point descriptions, it should be easier for the user to understand what the (positive or negative) consequences of giving consent for a particular purpose are. This goes further than the often-seen and quite meaningless statement “We respect your privacy” and should also demonstrate how cookies have been used for their intended purpose. One might wonder what some websites that have been collecting data “for website optimization” for years actually did with all that information. Ultimately, these measures help to achieve a “collaborative mixing & matching” of cookies suitable for the user instead of the often-seen trickery (e.g., dark patterns, see also the chapter “The Hows and Whys of Dark Patterns: Categorizations and Privacy”, or nudging, see also the chapter “Privacy Nudges and Informed Consent? Challenges for Privacy Nudge Design”) employed to seduce users into giving their consent. Useful inspiration may be drawn from research on explainable artificial intelligence (AI) [20] and prompt a form of explainable security and privacy [7, 14]. In the design of ethical AI systems, the quality of explainability ensures transparency about why and how an algorithm caused a particular system behavior or recommendation, so that a user can make a more informed decision based on the AI’s output. Similarly, explainable persuasion can inform a user of influencing techniques used in persuasive interfaces (e.g., on gambling websites) [10]. The principles and concepts from explainability could be used in a similar manner to guide informed consent regarding privacy.

4.2.4 Solution 4: Encourage Decision Review

A final factor could be to stimulate users to revise their decisions; not by forcing them through another cookie banner, but by suggesting to them in a non-intrusive way that they should review their cookie settings, akin to the way some digital ecosystems suggest performing an occasional privacy check-up.Footnote 7 If trust is built, here, too, the user might be more motivated to grant more consents upon review. This is also fairer to users who, in retrospect, realize that a dark pattern led them to granting consents they did not mean to grant. Provided that the user has given consent for their activities to be tracked, a PIMS could employ usage mining to establish a user profile and make personal recommendations on what settings and generic consents would befit the user specifically, thereby helping them arrive at the configuration that is optimal for them personally.

5 Feasibility of Technical Implementation

We have seen by now what legal requirements exist and which psychological improvements can be made. Now we would like to give a short introduction to the technical aspects. To investigate the feasibility of implementing our proposed consent handling solution, in this section we explore existing work and its relation to our proposed ideas.

Because our aim is to reduce the users’ mental load associated with granting consent in digital ecosystems, we propose that all consents—but not necessarily the personal data of the users—be managed by one central platform that stores consents and provides an interface through which these consents can be managed (e.g., granting, objecting to, revoking, or otherwise altering a consent). The central platform then communicates to the digital ecosystem’s services the types of data processing to which the user has consented. This is somewhat similar to concepts such as browser plug-ins, e.g., Consent-O-Matic [30], which allow users to preset their preferences regarding their consents in cookie banners and then automatically fill out these banners. However, our approach describes a more general way of managing consents, which is not limited to cookie banners but can be applied to different services in digital ecosystems. Although some of these approaches were initially tailored to websites, we consider our solution to be suitable for the services of a digital ecosystem, which may be websites, but could also be other services.

5.1 Consent Representation Formats

A first step toward a general framework for consent management—regardless of its form—is to ensure that consents are represented in a way that allows the managing platform and the digital ecosystem services to unambiguously communicate the types of processing and to determine whether consent to them was granted or denied by the user. To this end, various approaches and ontologies have already been defined and put into practice. An early approach was P3P [46], as mentioned in Sect. 3. On the one hand, its intention was to enable websites to inform their users of their specific data collection intentions; on the other hand, it was to enable users to set preferences for automatically accepting or denying these requests without having to read all of the policies. Unfortunately, this recommendation was only implemented by a small number of websites and became obsolete in 2018. Another proposal suggested combining requests for consent with classical methods of access control, for which Appenzeller et al. [4] suggested using the eXtensible Access Control Markup Language (XACML) [31]. A wide range of ontologies for requesting consent and representing processing exists without any set standard; Rantos et al. [37] consolidated them automatically using machine learning algorithms.
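To make the idea of an unambiguous consent representation concrete, the following sketch models a consent record over the five aspects introduced in Sect. 2 as a simple serializable structure. It follows none of the ontologies mentioned above; all field names and the provider name are our own illustrative assumptions.

```python
from dataclasses import dataclass, asdict
import json

# Illustrative sketch only: the field names are our own and follow no
# standardized ontology (P3P, XACML, etc. define their own vocabularies).
# The five aspects correspond to those named in Sect. 2.
@dataclass(frozen=True)
class ConsentRecord:
    data_category: str     # e.g., "location"
    asset_provider: str    # e.g., a hypothetical "ExampleMaps Inc."
    service: str           # the digital ecosystem service
    processing_type: str   # e.g., "storage"
    purpose: str           # e.g., "navigation"
    granted: bool

record = ConsentRecord("location", "ExampleMaps Inc.",
                       "navigation-app", "storage", "navigation", True)

# Serialize for unambiguous exchange between the platform and a service,
# then restore the record on the receiving side.
serialized = json.dumps(asdict(record))
restored = ConsentRecord(**json.loads(serialized))
assert restored == record
```

Any machine-readable format with an agreed vocabulary would do; the essential point is that both sides interpret each aspect identically.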

A wholly different approach was proposed in the form of the Tracking Preference Expression [47], better known as Do Not Track (DNT). As an extension to the HTTP protocol, which is mainly used for web communication, an addition to the communicated data (more specifically, a flag in the HTTP header) allowed users to express their preferences regarding tracking and servers to disclose their tracking behavior. But just like P3P, this concept was not widely adopted and thus failed to reach its aim. Recently, a similar concept called Global Privacy Control (GPC) [48] has emerged, which follows the same principles. It remains to be seen whether it will be more successful than its predecessor. These two concepts, DNT and GPC, already implement the first of two possible ways of handling consents: by forwarding consents to services or by forwarding data to services. These will be discussed in the following two sections.
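On the server side, honoring DNT or GPC amounts to inspecting a request header before any tracking is initiated. The following minimal sketch illustrates this; the header names (`DNT` and `Sec-GPC`, each with the value `1` signaling an opt-out) follow the respective specifications, while the function itself is our own illustration.

```python
def tracking_allowed(headers: dict) -> bool:
    """Server-side check of the client's tracking preference signals.

    DNT is expressed via the "DNT" request header ("1" = do not track);
    GPC uses the "Sec-GPC" header ("1" = opt out). HTTP header names are
    case-insensitive, so we normalize them before checking.
    """
    normalized = {name.lower(): value.strip() for name, value in headers.items()}
    if normalized.get("dnt") == "1" or normalized.get("sec-gpc") == "1":
        return False
    return True

assert tracking_allowed({"User-Agent": "demo-browser"}) is True
assert tracking_allowed({"DNT": "1"}) is False
assert tracking_allowed({"Sec-GPC": "1"}) is False
```

The simplicity of this check is precisely why adoption, not technology, was the limiting factor for DNT.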

5.2 Consent Forwarding

Assuming our initial setup of a centralized platform for managing consents that gets reflected in multiple digital ecosystem services, the obvious approach here is for users to give their consents in one central place, from where a digital ecosystem service retrieves the given consents when users interact with that service. This centralizes all requests for consents in one place but changes little with respect to the current practice of websites and services. It is up to the digital ecosystem service itself to act in accordance with the consents it is presented with.
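The following sketch illustrates this consent forwarding setup: a central platform stores consents, and a service queries them on each interaction, denying by default. All class and method names are our own illustrative assumptions, not an existing API.

```python
# Minimal sketch of consent forwarding; names are our own assumptions.
class ConsentPlatform:
    """Central platform that stores all consents in one place."""

    def __init__(self):
        self._consents = {}  # (user_id, service, purpose) -> bool

    def set_consent(self, user_id, service, purpose, granted):
        self._consents[(user_id, service, purpose)] = granted

    def get_consent(self, user_id, service, purpose):
        # Deny by default: no stored record means no consent was given.
        return self._consents.get((user_id, service, purpose), False)


class AnalyticsService:
    """A digital ecosystem service that retrieves consents on interaction."""

    def __init__(self, platform):
        self.platform = platform

    def track_click(self, user_id, element):
        # It remains the service's own responsibility to honor the consent.
        if not self.platform.get_consent(user_id, "analytics",
                                         "usage-statistics"):
            return None
        return {"user": user_id, "clicked": element}


platform = ConsentPlatform()
service = AnalyticsService(platform)
assert service.track_click("alice", "buy-button") is None  # no consent yet

platform.set_consent("alice", "analytics", "usage-statistics", True)
assert service.track_click("alice", "buy-button") == {
    "user": "alice", "clicked": "buy-button"}
```

Note that the platform only answers queries; as the section states, nothing in this design proves that the service actually complies.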

Especially for websites and their analytics data (e.g., where did the user click, which videos did they play, and how long did they remain on the website), this is the only reasonable solution because the digital platform that manages a digital ecosystem usually does not collect these kinds of personal data for every associated service or website. The costs and efforts involved in implementing such a system would simply be too high. Thus, such consents can only be forwarded to a specific digital ecosystem service according to an ontology or through a standard like DNT or GPC. Based on the consents, the digital ecosystem service in turn collects the analytics data itself, providing no credible proof of adhering to these consents. Pathmabandu et al. [33] sought to mitigate this disadvantage by scanning the data transmissions between users and the digital ecosystem service and trying to recognize the consented data patterns, thereby verifying whether these patterns match the data and processing categories to which the users consented. Since they applied their framework to smart buildings, it remains an open question whether their proposed idea can be transferred to website analytics data.
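As a drastically simplified illustration of this verification idea (Pathmabandu et al. [33] work with smart buildings, not websites, and their actual framework is far more involved), the following sketch checks whether the fields observed in an outgoing transmission fall within the data categories the user consented to:

```python
# Our own simplified illustration: the fields observed in a transmission
# are checked against the categories the user actually consented to.
# Category names are hypothetical examples.
consented_categories = {"clicks", "page-views"}

def transmission_conforms(payload: dict) -> bool:
    # Every transmitted field must fall into a consented category.
    return set(payload) <= consented_categories

assert transmission_conforms({"clicks": 12, "page-views": 5}) is True
assert transmission_conforms({"clicks": 12, "mouse-path": [(3, 4)]}) is False
```

Real website analytics payloads are rarely this cleanly labeled, which is exactly why transferring the approach remains an open question.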

This concept in its basic form is currently the way data processing and consents are typically handled in contexts where several (digital ecosystem) services come into play. Users are asked for their consent when they first start using the service, but they have no way of checking whether the service truly complies with this processing. It can be assumed that most services do adhere to these given consents—especially because legal statutes require them to—but some insecurity remains for the user as to whether their personal data is processed lawfully and without malicious intent and whether any data other than what they consented to is being processed.

5.3 Data Forwarding

One possible solution for eliminating the insecurity among users about what happens with their personal data is for the centralized platform to only forward the data to the (digital ecosystem) service for whose use the user has granted consent. This could prevent the services from being able to collect data for which no consent was given. However, it cannot be ensured that this data will subsequently be processed only for the purposes to which the user has consented.

A rather technical solution to this challenge was put forth by Agrawal et al., who suggested Hippocratic databases [1]. These databases are meant to include an access control mechanism that allows systems to apply the users’ data sharing preferences at the database level. Additional tables in the database encapsulate the data and only grant access when data is demanded (a) by the specified recipients and (b) with the associated processing purpose. Such policies representing the users’ preferences do not necessarily have to be integrated into the database; for example, Appenzeller et al. [4] used XACML policies at a higher abstraction level to represent the users’ consent and regulate the data that is forwarded to the services.
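The following sketch illustrates the underlying principle of such purpose-based access control: data is released only if a policy covers both the recipient and the stated purpose. The table and policy layout are our own simplification, not the actual Hippocratic database design.

```python
# Our own simplification of purpose-based access control at the data layer.
# All record values, recipients, and purposes are hypothetical examples.
records = {"alice": {"address": "Main St 1", "age": 34}}
policies = {  # user -> list of (recipient, purpose, allowed columns)
    "alice": [("delivery-service", "shipping", {"address"})],
}

def query(user, recipient, purpose, columns):
    # Collect the columns covered by a policy matching recipient and purpose.
    allowed = set()
    for rec, purp, cols in policies.get(user, []):
        if rec == recipient and purp == purpose:
            allowed |= cols
    # Only columns covered by a matching policy are returned.
    return {c: records[user][c] for c in columns if c in allowed}

# The delivery service gets the address for shipping, but not the age.
assert query("alice", "delivery-service", "shipping",
             ["address", "age"]) == {"address": "Main St 1"}
# A non-consented recipient/purpose combination receives nothing.
assert query("alice", "ad-network", "marketing", ["address"]) == {}
```

Hippocratic databases enforce this inside the database engine itself; the XACML variant by Appenzeller et al. [4] lifts the same decision to a policy layer above it.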

Sticky Policies [35] are another approach aimed at ensuring that only the data the users have consented to gets forwarded. This concept ensures the encryption of the users’ data and sticks a policy to the encrypted data that describes under what conditions and by whom it may be used. A (digital ecosystem) service that intends to use that data must prove its compliance with the policy to a trusted authority before receiving the key for decrypting the data. Ulbricht et al. [44] extended this idea with a knowledge graph for federated data sources whose available data a service might not yet know. The knowledge graph consists of short descriptions of what is contained in the encrypted data (e.g., address, gender, age, or more general classes like demographic data), based on which the service can determine whether the data is of interest.
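The following sketch models the sticky policy protocol: the policy travels with the encrypted data, and a trusted authority releases the decryption key only after checking the requesting service's claimed purpose against that policy. The toy XOR cipher merely stands in for real encryption (a deployment would use an authenticated scheme such as AES-GCM), and all names and values are our own illustrative assumptions.

```python
import hashlib

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # Toy stand-in for real encryption: XOR with a key stream derived from
    # the key. XOR is its own inverse, so this function also decrypts.
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

class TrustedAuthority:
    """Releases a decryption key only after a compliance check."""

    def __init__(self):
        self._registry = {}  # package id -> (key, policy)

    def register(self, package_id, key, policy):
        self._registry[package_id] = (key, policy)

    def request_key(self, package_id, claimed_purpose):
        key, policy = self._registry[package_id]
        if claimed_purpose != policy["purpose"]:
            raise PermissionError("purpose not covered by sticky policy")
        return key

# The policy "sticks" to the encrypted data as it is forwarded to services.
authority = TrustedAuthority()
key = b"secret-key"
package = {"ciphertext": toy_encrypt(b"lat=49.44,lon=7.75", key),
           "policy": {"purpose": "navigation"}}
authority.register("pkg-1", key, package["policy"])

granted = authority.request_key("pkg-1", "navigation")  # compliant request
assert toy_encrypt(package["ciphertext"], granted) == b"lat=49.44,lon=7.75"

try:
    authority.request_key("pkg-1", "marketing")  # non-compliant request
except PermissionError:
    pass  # key withheld
```

In the full scheme, "proving compliance" involves more than stating a purpose (e.g., attestations of the service's capabilities), but the control point stays the same: without the authority's key, the forwarded data is useless.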

Just as with P3P, the success of any of the approaches described in this section depends on their implementation by the services. But while P3P and DNT were meant to be used in the World Wide Web with its millions of very diverse services and websites, a digital ecosystem provides a small, finite set of services that is more manageable and needs a much smaller number of implementations for successful application. Thus, we believe that digital ecosystems are an environment that is well suited for the successful implementation of these approaches.

6 Discussion

In this work, we have suggested generic consents as a user-friendly way of giving tailored informed consent to data processing with reduced mental load, greater trust, and better-informed decisions. From a legal perspective, we assert that our proposed approach of a PIMS implemented as a consent management system in a digital ecosystem can increase usability and privacy. Combined with a trial period (presented in the context of websites, but also applicable to ecosystem services)—a time in which users can gain trust in a service and better inform themselves—we claim that generic consents greatly foster self-determined and better deliberated decisions by users to consent to sharing their data. We also discussed existing ontologies and standards for representing (requests for) consent through which our vision can be realized. However, there are still some open questions regarding our idea, which we would like to discuss in this section.

These generic consents can be considered an extension to the consents demanded in Article 6 GDPR, which should, among other things, be specific and unambiguous. Generic consents are inherently not as specific as explicit consents; they are, in fact, intentionally unspecific to a degree. In this regard, they do not strictly comply with the regulations of the GDPR. But we argue that fine-grained specific consents are unmanageable for users. Having to handle as many consents as we have seen with website cookies, for example, makes it impossible for data subjects to make truly informed decisions [11, 45]. This does not lead to informed consents. Consequently, we believe that an allowlist containing generic consents introduces a necessary abstraction level to help users contain the amount of data processing they are asked to consent to. Given that there is currently no case law on this topic (see Sect. 3), the lawfulness of our concept is yet to be determined.

6.1 Allowlists Created by NGOs (Solution 1)

Although both suggested alternatives for the practical implementation of generic consents and (predefined) allowlists are theoretically feasible, Solution 1, in particular, has some drawbacks: if an allowlist is provided by NGOs (as proposed by Stiemerling et al. [42]), keeping it up to date in implemented applications may be a challenge. The allowlists would have to be made available in a machine-readable way so that applications can automatically query them and detect any changes. Another challenge is the workload involved in creating comprehensive allowlists of services and websites around the world. Ensuring that each entry receives a justified and fair evaluation is far beyond the capabilities of any organization—let alone maintaining these lists, given that asset providers might change the privacy practices in their service at any time.Footnote 8 Even if it were possible to create and maintain such allowlists, from a privacy-concerned point of view, only for few services on an allowlist could it be guaranteed that they will not misuse the data or protect it insufficiently. To make such allowlists usable, the bar for approved services would need to be lowered considerably—which defeats the initial goal of increasing privacy. Another possibility would be to create different allowlists, each according to its own privacy level. This raises another open question: how does one rank services for such an allowlist? By how much data they collect or by the kind of data they collect? By the guarantees they provide for securing the data processing? Or by the reputability and trustworthiness of the asset provider? How to best measure and weight these aspects in order to provide a meaningful indicator of the “privacy” they ensure is yet to be determined. Thus, for Solution 1, many hurdles have yet to be overcome.
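As a small illustration of the machine-readability requirement, an application could periodically re-fetch the published allowlist and detect changes via a content fingerprint, re-evaluating the affected consents whenever the fingerprint differs. The JSON format, example entries, and hashing scheme here are our own assumptions, not an existing publication format.

```python
import hashlib
import json

# Sketch: an NGO publishes its allowlist in a machine-readable (here: JSON)
# format; an application re-fetches it and detects changes via a content
# fingerprint. Entry names are hypothetical examples.
def fingerprint(allowlist: list) -> str:
    canonical = json.dumps(sorted(allowlist), separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

cached = ["maps.example", "news.example"]                   # local copy
fetched = ["maps.example", "news.example", "shop.example"]  # fresh download

if fingerprint(fetched) != fingerprint(cached):
    cached = fetched  # adopt the update and re-evaluate affected consents

assert "shop.example" in cached
```

Detecting a change is the easy part; as the section argues, producing and fairly evaluating the list contents is where the real effort lies.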

6.2 Allowlists Created by the User (Solution 2)

Solution 2 is not subject to the same problems as Solution 1 because it encompasses allowlists that users have tailored to their own privacy preferences. However, it is not clear with regard to which aspect(s) users should best generalize consents while ensuring that these remain reasonably informed and specific. Generalization is theoretically possible for all five aspects related to consent: Data, Asset Provider, Digital Ecosystem Service, Processing Type, and Purpose (see Sect. 2). From a functional point of view, one might keep the asset provider and perhaps also the digital ecosystem service generalized while specifying concrete data categories, processing types, and purposes (e.g., “I always want navigation services to be able to access my location for the purpose of navigation.”). Based on trust, one could instead specify the asset provider while generalizing all other aspects (e.g., “I allow my navigation service to perform any processing type it requests.”). It is also possible to specify the data category while generalizing all other aspects (e.g., “Any service may process my current location for any purpose.”). Each generalization has its advantages and drawbacks: some subsume a large number of requests for consent while others cover only a few. To maximize the benefit to the user experience, it is most advantageous to cover a large number of requests, while from a legal point of view, a smaller number is better. Which generalization works best in practice remains to be seen.
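The five consent aspects and their generalization can be sketched as a simple data structure in which a wildcard marks a generalized aspect. The names used here (`Consent`, `ANY`, the example provider `MapCo`) are hypothetical illustrations of the idea, not part of any existing system.

```python
from dataclasses import dataclass, astuple

ANY = "*"  # marks a generalized aspect

@dataclass(frozen=True)
class Consent:
    """The five aspects of a consent (see Sect. 2)."""
    data: str            # data category, e.g. "location"
    asset_provider: str  # e.g. a hypothetical "MapCo"
    service: str         # digital ecosystem service, e.g. "navigation"
    processing: str      # processing type, e.g. "read"
    purpose: str         # e.g. "navigation"

def covers(generic: Consent, request: Consent) -> bool:
    """A generic consent covers a concrete request if every aspect
    either matches exactly or is generalized."""
    return all(g == ANY or g == r
               for g, r in zip(astuple(generic), astuple(request)))

# "I always want navigation services to be able to access my location
# for the purpose of navigation." (asset provider generalized)
generic = Consent("location", ANY, "navigation", "read", "navigation")
request = Consent("location", "MapCo", "navigation", "read", "navigation")
```

In this model, each additional `ANY` widens the set of concrete requests a generic consent subsumes, which makes the trade-off between user-experience benefit and legal specificity directly visible.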

6.3 Blocklists

As a complement to generic consents in an allowlist, a blocklist might be a suitable counterpart through which users could add exceptions to their preferences (e.g., “I do not allow asset provider ShadyProvider to process any of my data.”). Blocklists and the resulting exceptions to generic consents help foster the users’ self-determination. However, they also increase system complexity. For example, when a consent (e.g., allowing navigation services to access location) conflicts with an exception (e.g., not allowing ShadyProvider to do any data processing), the system must determine which of the two takes precedence. Should it base this decision on a heuristic in which the blocklist always prevails over the allowlist? Or does the most recent consent or exception take precedence? Regardless of the heuristic, both the system and the user will have greater difficulty managing and understanding how the configuration of consents plays out. A possible simplification is to introduce a blocklist that sets the user’s bottom line: configurations to which an exception should always be made in all subsequent requests for consent. The exceptions from this list would then automatically be inserted into all generic consents to be created. In this way, a user only needs to specify their exceptions once and merely has to confirm their choice without making any manual adjustments (e.g., “I always want navigation services, except those from ShadyProvider, to be able to access my location for the purpose of navigation,” with the highlighted part automatically created from the blocklist).
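One way to resolve such conflicts is a blocklist-prevails heuristic. The sketch below, with hypothetical rule dictionaries and the illustrative providers `ShadyProvider` and `MapCo`, shows how a concrete request could be checked against both lists, with unmatched requests falling back to an explicit consent prompt.

```python
ANY = "*"  # marks a generalized aspect

def covers(rule, request):
    """A rule covers a request if every aspect matches exactly
    or is generalized in the rule."""
    return all(rule[k] in (ANY, request[k]) for k in request)

def decide(request, allowlist, blocklist):
    """Blocklist-prevails heuristic: any matching blocklist entry denies
    the request regardless of the allowlist; requests matched by neither
    list fall back to an explicit request for consent."""
    if any(covers(b, request) for b in blocklist):
        return "deny"
    if any(covers(a, request) for a in allowlist):
        return "allow"
    return "ask"

# "I always want navigation services to access my location for navigation ..."
allow = [{"data": "location", "provider": ANY, "service": "navigation",
          "processing": "read", "purpose": "navigation"}]
# "... except those from ShadyProvider." (bottom-line exception)
block = [{"data": ANY, "provider": "ShadyProvider", "service": ANY,
          "processing": ANY, "purpose": ANY}]
```

The “most recent entry wins” alternative mentioned above would instead require timestamping each rule and comparing the matching entries’ creation times.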

6.4 Usability

A final aspect to discuss concerns when and how the allowlists and blocklists should be created. Although it would be beneficial if this happened right after a user started using the digital ecosystem, this might not be the best time, considering Sect. 4.2. However, a prompt asking users to specify their consents at the central managing platform could be provided early on (e.g., during the registration process or upon the first log-on) for users who genuinely want to grant their consent. The system could then check whether a user has already configured their allowlists and blocklists and ask them to perform a privacy review. Importantly, the user should be able to adjust their preferences at any time, especially because it is impossible to thoroughly consider all digital ecosystem services they will ever encounter when they initially create these lists. Rather, it is more likely that the user will eventually come across a digital ecosystem service for which some or all of the necessary consents still need to be configured. In that case, the user would receive a request for consent for which they can create a specific consent, but they would also be given the option to specify it more broadly as a generic consent. This exposes one of the main weaknesses of our proposed idea: users would still receive requests for any missing consents that have not been explicitly denied in a blocklist whenever these are needed for the interaction with a particular digital ecosystem service. When this occurs during the trial period suggested in Sect. 4, where consent is not yet provided, a user will receive more requests for consent than with the current practice of obtaining consent to all processing right at the beginning. Consequently, a control mechanism should ensure that users are not overwhelmed by requests and do not end up making even less well-informed decisions, as that would be the exact opposite of what we aim to achieve.
Hence, the system should adequately assist the user in creating generic consents that fit their personal preferences in order to decrease the number of requests they receive during their normal interactions with the digital ecosystem services.

7 Conclusion

In this chapter, we gave a short introduction to the use of generic consents in digital ecosystems. The challenges we highlighted show that careful and user-oriented design is crucial to a successful solution, and several open questions still need to be answered. When designed properly, our proposed concept of generic consents in combination with a trial period can foster users’ self-determined and informed decision-making regarding consent to the processing of their personal data. Further research is needed on how to help users create suitable generic consents, and case law must develop in which the judiciary explores to what degree generic consents still sufficiently comply with data protection regulations such as the GDPR. The concept proposed in this chapter is a step toward ensuring that users can make truly informed and self-determined decisions when faced with the vast amount of data processing in our time.