1 Introduction

Algorithmic rankings and recommendations constitute an essential element of the architecture of digital platforms (see Jannach and Adomavicius 2016). Recommender systems facilitate shopping online (Amazon), booking holiday rentals (Airbnb), discovering new movies (Netflix) or even dating (Tinder). By determining which information and options are prominently presented on a platform and which content remains hidden, automatic recommendations and rankings affect the choice architectures for consumers (see Hildebrandt 2022). Although recommender systems assist consumers in filtering information and may help to improve overall decision quality (Häubl and Trifts 2000), overdependence on algorithmic recommendations and rankings can reduce competition and harm consumers (see Banker and Khetani 2019). Moreover, recommender systems are a key source of platform power and a tool for private ordering by platform operators (see e.g., Leerssen 2020a; Cobbe and Singh 2019). In order to mitigate risks for competition and consumers, legislators at EU level and member state level have started to introduce new regulatory requirements for algorithmic rankings and recommendations on digital platforms.

Against this background, this chapter will scrutinize the emerging regulatory framework for algorithmic recommender systems in the European Union. Much of the academic and political debate has focused primarily on recommender systems used by social media platforms such as Facebook, Twitter, and TikTok and their societal effects (see e.g., Leerssen 2020a; Helberger et al. 2018; Milano et al. 2020). In contrast, this chapter will seek to fill a research gap by focusing mainly on product recommendations on online retail platforms. In doing so, the chapter makes three contributions to the literature on platform regulation and recommender systems: First, it surveys the new rules for rankings and recommender systems in consumer contract law, unfair commercial practices law, and platform regulation. Second, it identifies gaps and inconsistencies and highlights the need to ensure coherence between the different regulatory regimes. Third, it argues that the European legislator should go beyond the current regulatory model based on algorithmic transparency and embrace new regulatory tools which enable users to control the functioning of rankings and recommendations and to choose between different competing recommender systems.

The rest of the chapter is organized as follows: Part II sets the scene by giving a brief overview of transparency requirements regarding ranking criteria as well as paid search results and paid rankings under the Unfair Commercial Practices Directive (UCPD),Footnote 1 the Consumer Rights Directive (CRD)Footnote 2 and the Platform-to-Business (P2B) Regulation.Footnote 3 In addition, this Part offers some terminological clarifications and explains what EU law means when it speaks about rankings and recommender systems. Building on this overview, Part III provides a more detailed comparative analysis of the relevant provisions and scrutinizes emerging differences and commonalities in the field of EU recommender governance. Part IV then turns the focus to the Digital Services Act (DSA),Footnote 4 the latest addition to the emerging framework for recommender governance in the EU. This Part explains that the DSA (albeit hesitantly and incompletely) introduces a new regulatory paradigm that shifts the focus from algorithmic transparency to algorithmic choice. Part V looks beyond the DSA and argues that a choice-based approach to recommender governance and a market for “RecommenderTech” could also be facilitated through new interoperability requirements introduced by the Digital Markets Act (DMA).Footnote 5 Finally, Part VI offers some conclusions.

2 Recommender Governance in the EU Platform Economy

Until very recently, there were no specific rules for algorithmic rankings and recommendations at EU level. The main legal requirements regarding the transparency of recommender systems stemmed essentially from general rules of unfair commercial practices law, in particular the prohibition of misleading practices under Art. 6 and 7 UCPD. Within the timespan of only a few years, the situation has fundamentally changed. Instead of too few, there may now be too many rules that are not sufficiently coordinated. Since 2019, the European legislator has enacted several new regulations aimed at increasing the transparency of rankings and algorithmic recommendations on digital platforms. As a result, the regulatory framework currently presents itself as a complex and fragmented landscape of partially overlapping transparency rules. This Part will briefly map out the relevant rules and provide some terminological clarifications before the next Part will analyze in more detail the differences and similarities between the different rules applicable to algorithmic rankings and recommendations.

2.1 Mapping the Regulatory Landscape

For reasons of clarity, the following overview will focus on the three legal instruments that are most relevant for algorithmic transparency regarding online retail platforms: the P2B Regulation, the UCPD, and the CRD. The DSA, which will introduce a shift towards a new regulatory model combining algorithmic transparency and algorithmic choice, will be addressed separately in Part IV. It should be noted that in some cases there may also be overlaps with transparency obligations stemming from the field of media law, such as Sect. 93 of the German Interstate Media Treaty (Medienstaatsvertrag),Footnote 6 which imposes algorithmic transparency requirements for “media intermediaries”. This category includes search engines and social media platforms.Footnote 7 With the growing convergence of social media and e-commerce (“social commerce”), the dividing line between e-commerce law and media law is becoming more and more blurred and we will most likely also see growing overlap between media law and e-commerce regulation (see e.g., Svirskis 2020). This will further increase the complexity of EU recommender governance. The regulatory landscape may become even more complex with the forthcoming AI Act,Footnote 8 which will add further transparency requirements for algorithmic systems.Footnote 9

This being said, there are currently mainly three legal instruments that define the regulatory framework for rankings and recommendations on online retail platforms: the P2B Regulation, the CRD, and the UCPD. Art. 5(1) P2B Regulation requires providers of “online intermediation services” (e.g., online retail marketplaces) to “set out in their terms and conditions the main parameters determining ranking and the reasons for the relative importance of those main parameters as opposed to other parameters”. Art. 5(2) P2B Regulation stipulates a similar transparency rule for online search engines. Art. 5(3) to (7) P2B Regulation further spells out the details of the transparency duty. Of particular interest here is Art. 5(3) P2B Regulation, which lays down specific disclosure duties for cases where the position in the ranking can be influenced by direct or indirect payments, such as “ranking boosters” or preferred partner programs.

While the P2B Regulation aims at promoting fairness and transparency between platforms and business users of intermediation services offered by the platforms, the two other transparency requirements for rankings have been introduced in the context of the recent reform of EU consumer law. As part of the “New Deal for Consumers”, the Modernisation Directive 2019/2161/EU added two new information requirements regarding online rankings to the CRD and the UCPD in 2019. According to Art. 6a(1) CRD, operators of online marketplaces have to provide consumers with general information on the “main parameters” determining the ranking of offers presented to the consumer as a result of a search query and the “relative importance of those parameters as opposed to other parameters”. A similar information requirement is set out in Art. 7(4a) UCPD, which also stipulates a duty to inform about the main parameters of the ranking and their relative importance. This provision is complemented by the new No. 11a of Annex I to the UCPD.Footnote 10 According to the new provision, it is under all circumstances prohibited to provide “search results in response to a consumer’s online search query without clearly disclosing any paid advertisement or payment specifically for achieving higher ranking of products within the search results”.Footnote 11

2.2 Layers of Terminology in EU Law: “Rankings” and “Recommender Systems”

Before taking a closer look at the emerging European regulatory framework for recommender systems and rankings on digital platforms, some terminological clarifications may be necessary. From a technical perspective, the term “recommender systems” refers to software tools “that provide suggestions for items that are most likely of interest to a particular user” (Ricci et al. 2022). For this purpose, recommender systems may use collaborative filtering techniques, content-based filters, knowledge-based filtering mechanisms, or hybrids between these models (see e.g., Aggarwal 2016; Ricci et al. 2022). Simply put, the task of a recommender system is to help users “find good items or predict an item’s relevance to a user” (Jannach and Adomavicius 2016).

While there is abundant technical literature on recommender systems (see, ex multis Ricci et al. 2022; Aggarwal 2016), the terms “recommender system” and “ranking” have only recently entered the vocabulary of the European legislator. One of the earliest EU references to rankings in the context of electronic commerce is not to be found in a Directive or Regulation, but in a somewhat apocryphal text, the “Key principles for comparison tools” of May 2016, which have been elaborated by a multi-stakeholder group (including consumer and business associations, providers of online comparison tools and national authorities) under the auspices of the European Commission (European Commission 2016a, b). These principles, which have been drafted to facilitate the application of the UCPD to online comparison tools such as Verivox or Yelp, do not define the term “ranking”, but stipulate that “criteria used for the rankings should be clearly and prominently indicated, as well as, where relevant to ensure that consumers are not misled, general information about any specific methodology used” (European Commission 2016a, b). This non-binding requirement is explicitly referred to in the (equally non-binding) Commission Guidance on the implementation and application of the UCPD, which was also published in May 2016.Footnote 12 This brief look at the early traces of “recommender governance” in EU law underlines that the topic is rooted in unfair commercial practices law and goes back to the early days of EU platform regulation.

One of the first attempts to explicitly regulate recommender systems in the context of platform regulation was the P2B Regulation. Interestingly, neither the proposalFootnote 13 for the Regulation, which was published in April 2018, nor the final text of June 2019 uses the term “recommender system”. Instead, the term “ranking mechanism” is used.Footnote 14 According to the definition in Art. 2(1)(m) P2B Regulation, “ranking” means

the relative prominence given to the goods or services offered through online intermediation services, or the relevance given to search results by online search engines, as presented, organised or communicated by the providers of online intermediation services or by providers of online search engines, respectively, irrespective of the technological means used for such presentation, organisation or communication.Footnote 15

This definition was copied almost verbatim in Art. 2(m) UCPD, which was added to the UCPD in November 2019 by the Modernisation Directive.Footnote 16 The definition is not only relevant for the new transparency requirements for rankings under Art. 7(4a) UCPD. The UCPD definition is also referred to explicitly in the transparency rule for rankings in Art. 6a(1) CRD.Footnote 17 It is interesting to note that neither the P2B Regulation nor the UCPD uses the term “recommender system”. The alternative term “ranking” shifts the focus away from the technological innards of the recommender system to the output of the system. How this output is produced is irrelevant to the applicability of the transparency rules.

The definitions in Art. 2(1)(m) P2B Regulation and Art. 2(m) UCPD are also technology-neutral in another respect. Both definitions cover rankings of products presented, organized or communicated to platform users “irrespective of the technological means used for such presentation, organisation or communication.” Recital 19 of Directive (EU) 2019/2161 further explains that rankings can result “from the use of algorithmic sequencing, rating or review mechanisms, visual highlights, or other saliency tools, or combinations thereof”. This underlines that a ranking is not necessarily a list of items, but can also take the form of a display of varying prominence on a map or in a word cloud. The common feature in each case is that the individual items differ in their “relative prominence”, as it is called in the two definitions.

The term “ranking” is also used in the draft Digital Markets Act (DMA). The definition used in the DMA is recognizably modeled on Art. 2(1)(m) P2B Regulation.Footnote 18 Recital 52 DMA underlines that the concept of ranking is meant to “cover all forms of relative prominence, including display, rating, linking or voice results”. In this context, the DMA adds an interesting clarification by stating that a ranking can “include instances where a core platform service presents or communicates only one result to the end user”. At first glance, this might seem paradoxical. However, this is probably due to the fact that the “core platform services” covered by the DMA also include virtual assistants.Footnote 19 When consumers use virtual assistants for online shopping, voice-controlled devices such as Amazon’s Alexa do not read out long lists of ranked items. Rather, they offer only a single recommendation (“Amazon’s choice”). In a way, this takes the idea of a ranking to the extreme.

Interestingly, the Digital Services Act (DSA) uses a different terminology, employing the technical term “recommender system”. Art. 2(s) DSA defines “recommender systems” as a

fully or partially automated system used by an online platform to suggest in its online interface specific information to recipients of the service or prioritise that information, including as a result of a search initiated by the recipient of the service or otherwise determining the relative order or prominence of information displayed.Footnote 20

Despite the different terminology, the substance is more or less the same as in the other legal texts that use the term “ranking”. As in the other legal instruments, the DSA provision essentially aims at systems that determine a “relative order or prominence” and thus establish a choice architecture for the platform users. In summary, it can be said that despite the differences in terminology, the different legal instruments share one important characteristic: They do not distinguish between different types of recommender systems such as content-based systems, knowledge-based systems, collaborative systems, or hybrid systems. They are agnostic with regard to the technology used in the filtering process and focus on the output of the systems, i.e., the “relative order” or “prominence” they produce.

3 Five Axes of Algorithmic Transparency: A Comparative Analysis

This section will zoom in on the transparency rules for algorithmic rankings set out in Sect. 2.1. In doing so, we will analyze differences and similarities between the rules along five axes: (1) purposes of transparency, (2) audiences of transparency, (3) addressees of the duty, (4) content of the disclosure, and (5) modalities of disclosure.

3.1 Purpose of Transparency

The criteria and the “hidden logics” used to create a ranking, as well as their weighting, usually remain in the dark. From the point of view of both consumers and professional platform users, ranking algorithms are black box systems (see Pasquale 2015). However, the transparency problem presents itself somewhat differently from the consumer and the trader perspective.

From the consumer’s point of view, the primary concern is to prevent unfair influence through a biased ranking. Empirical research suggests that “consumers are more likely to select options near the top of a list of results, simply by virtue of their position and independent of relevance, price or quality of the options” (UK Competition and Markets Authority 2021; see also Ursu 2018; De los Santos and Koulayev 2017). This position bias (or “ranking effect”) may induce platform providers to exploit consumers by giving a higher position to products that are more profitable for the platform, but which are not necessarily the best choice for the consumer. Personalized rankings could have even more harmful effects and lead to similar results as personalized pricing if more expensive products are presented specifically to consumers with a higher willingness-to-pay (“price steering”) (UK Competition and Markets Authority 2021). Reducing such risks for consumers through meaningful transparency of ranking criteria is the purpose of the transparency requirements stipulated by Art. 7(4a) UCPD and Art. 6a(1)(a) CRD.

From the perspective of traders who distribute their goods or services via an online marketplace, the transparency problem is different. Professional platform users are interested in learning more details about the functioning of the ranking mechanism in order to improve the visibility of their products on the platform (“ranking optimization”). If, for example, a hotel booking site informs their users how they can achieve a better position in the ranking if they display high-quality photos in their profile, a hotel owner can make a business decision on whether to invest in better photos. Similarly, detailed and truthful information about “ranking boosters” or “preferred partner programs” offered by the platform which promise to improve the position in the ranking will enable businesses to take an informed decision on how much they spend on such offers (see, e.g. Bundeskartellamt 2019). Addressing these concerns is the main purpose of the transparency requirements under Art. 5 P2B Regulation.

The transparency rule set out by Art. 27(1) Draft DSA has a hybrid function. On the one hand, it protects the autonomy of platform users by providing them with information on how information is prioritized for them. At the same time, the purpose of the provision transcends the platform-user relationship. This is underlined by Recital 70 DSA, which explicitly mentions that recommender systems “play an important role in the amplification of certain messages, the viral dissemination of information and the stimulation of online behaviour.” The Recital concludes: “Consequently, online platforms should consistently ensure that recipients of their service are appropriately informed about how recommender systems impact the way information is displayed, and can influence how information is presented to them.” In this sense, the DSA adds a broader, societal dimension to recommender governance.

3.2 Audiences of Disclosure

Closely linked to the purpose of the transparency requirement is the question to whom transparency shall be offered. In this sense, transparency is a relational concept that is defined by the audiences the disclosure duties serve (Leerssen 2020a). For transparency rules regarding social media recommender systems, a tripartite distinction has been suggested: (1) disclosures for users, (2) disclosures for public authorities, and (3) disclosures for academia and civil society (Leerssen 2020a). While this tiered approach could also be applied to transparency rules for rankings on online retail platforms, a slightly different approach seems preferable. Leaving the forthcoming DSA rules aside, the current EU regulatory framework for rankings and recommendations in the field of online retail clearly focuses on disclosures towards platform users. With regard to ranking transparency, the two other audiences are not addressed in the P2B Regulation, the UCPD, or the CRD. However, the distinction between the two relevant sub-groups of “users” is of key importance for the effectiveness of ranking transparency in the e-commerce sector. In order to be effective, the information provided on ranking parameters has to be adjusted to the relevant target group.

For the transparency requirements under Art. 7(4a) UCPD the “average consumer test” applies.Footnote 21 Therefore, as a general rule, the information provided about the main parameters of the ranking and their relative importance must be intelligible for an average consumer who is “reasonably well-informed and reasonably observant and circumspect”.Footnote 22 However, this standard has to be adjusted if the ranking and its explanation are specifically aimed at a particular group of consumers or if the trader can foresee that the ranking will materially distort the economic behavior of an identifiable group of vulnerable consumers (e.g. children).Footnote 23 It is difficult to see how a platform operator will be able to meet this high standard given the technical complexity of many ranking mechanisms, and whether such information will in effect be digestible for the consumer.

In contrast, the disclosures required by the P2B Regulation are directed at “business users” who offer their goods and services via online platforms (Art. 5(1) P2B Regulation) or “corporate website users” whose websites are ranked by search engines (Art. 5(2) P2B Regulation). In both cases, the audience of the transparency requirements consists of professionals who are interested in improving their online visibility for potential customers. As explained in the Commission’s Guidelines on ranking transparency, in order to be meaningful for the professional audience, explanations about the main ranking parameters “should take account of the nature, technical ability and needs of ‘average’ users of a given service, which may vary considerably between different types of services”.Footnote 24 In other words, in the P2B Regulation an “average business test” replaces the “average consumer test”.Footnote 25

3.3 Addressees of the Duty to Disclose

The transparency requirements differ not only in terms of their audiences, but also with regard to those who are required to provide information about rankings. Among the legal instruments under comparison, Art. 6a(1) CRD has the narrowest scope. The provision only applies to “online marketplaces”, i.e., websites and applications which allow consumers to conclude distance contracts with other traders or consumers.Footnote 26 This includes online retail marketplaces (e.g. Amazon.com) as well as hotel booking platforms (e.g. Booking.com), but excludes online search engines (e.g. Google.com) and price comparison tools (e.g. Shopping.com) which redirect consumers to the trader’s website in order to conclude a contract. However, in the face of rapidly changing business models, the contours of the term “online marketplace” are not entirely clear. In particular, it is an open question under which conditions social media platforms that offer shopping features (e.g. Shoppable Posts on Instagram) fall under Art. 6a(1) CRD. While the answer seems to be negative if the contract is concluded entirely outside the social media app, it might be positive if the shopping website is displayed within the application.

Art. 7(4a) UCPD has a broader scope and applies regardless of where the contract is eventually concluded. Hence, the information requirement under Art. 7(4a) UCPD applies not only to online marketplaces, but also to price comparison sites and similar online tools.Footnote 27 In contrast, search engines (as defined in Art. 2(6) P2B Regulation) are explicitly excluded from the scope of Art. 7(4a) UCPD. Similarly, the UCPD provision does not apply where traders provide consumers with a possibility to search only among their own offers of different products.Footnote 28 Furthermore, the information requirement under Art. 7(4a) UCPD only applies where the ranking is displayed to the consumers on the basis of a search query (e.g. in the form of a keyword, phrase or other input). Therefore, it does not apply to the default display on the online interface that is shown to the consumer and that is not the result of a specific search query.Footnote 29

The business-facing transparency rule under Art. 5 P2B Regulation has an even wider scope. Art. 5(1) P2B Regulation applies to “online intermediation services”. This term refers to online services that allow business users to offer their products to consumers “with a view to facilitating the initiating of direct transactions between those business users and consumers, irrespective of where those transactions are ultimately concluded”.Footnote 30 However, it is required that the intermediation service is provided on the basis of a contractual relationship between the platform operator and the business user. Similar to Art. 7(4a) UCPD, the broad definition of “online intermediation services” does not only cover online marketplaces (where contracts are concluded), but also other websites or online interfaces (e.g. apps, virtual assistants) that “facilitate the initiating of direct transactions”. Here, social media platforms that offer shopping features clearly seem to be covered.

The broadest scope among the four legal instruments that stipulate transparency requirements for rankings and recommendations is found in Art. 27 DSA. The provision applies to all “providers of online platforms” that use recommender systems. According to Art. 2(i) DSA, the term “online platform” means “a hosting service that, at the request of a recipient of the service, stores and disseminates information to the public, unless that activity is a minor and purely ancillary feature of another service or a minor functionality of the principal service and, for objective and technical reasons, cannot be used without that other service, and the integration of the feature or functionality into the other service is not a means to circumvent the applicability of this Regulation”. In other words, not only marketplaces but also communication platforms are covered.

3.4 Content of the Disclosure

While the scope of application of the two transparency rules in the CRD and the UCPD is different, the content of the information duties is more or less the same. Both provisions require businesses to provide consumers only with information on the “main parameters” determining the ranking of offers. In other words, there is no obligation to disclose all ranking parameters. Requiring the platform operator to indicate all factors that influence the ranking would most likely lead to an information overload as most ranking mechanisms take into account a rather large number of factors that are weighted according to a complex formula (Alexander 2019). Therefore, limiting the transparency requirement to “general information” about the “main parameters” seems reasonable.

For example, the short-term rental platform Airbnb states that their “search algorithm considers more than 100 signals to decide how to order listings in search results”.Footnote 31 Among the factors that are taken into account by the ranking algorithm are guest reviews, competitive pricing, availability for instant booking, host response time, and superhost status (Airbnb UK 2022). These factors seem to be linked to the overall attractiveness of a listing. By considering these factors the ranking of listings is likely to match the preferences of users who are looking for an attractive offer. However, the design of Airbnb’s ranking algorithm is not solely based on the preferences of guests. It also reflects the interests of prospective hosts and the platform provider itself. In this sense, Airbnb states that to “help hosts get started, the algorithm is designed to make sure new listings show up well in search results” (Airbnb Ireland 2022). Giving priority to listings simply because they are new is not necessarily in the interest of platform users who are looking for high-quality offers. However, it is probably in Airbnb’s own interest to give a boost to new listings in order to keep new hosts satisfied with the platform.

The Modernisation Directive, which introduces the new transparency requirements into the CRD and UCPD, does not provide very detailed and actionable guidance on how to determine the “main parameters” of the ranking mechanism. As explained in the Recitals of the Directive, the term “parameters” refers to “any general criteria, processes, specific signals incorporated into algorithms or other adjustment or demotion mechanisms used in connection with the ranking”.Footnote 32 This list of synonyms is not very helpful in elucidating the meaning of the term (Peifer 2021). Among the input variables which impact the ranking, “main parameters” are those “which individually or collectively are most significant in determining ranking”.Footnote 33 It remains an open question how exactly the “most significant” factors shall be determined. Are these the factors that individually or collectively account for 50% of determining the ranking position? What if different combinations of criteria change the weighting of the individual criteria? What about dynamic systems that apply temporary changes (e.g. for Black Friday or Christmas shopping) or use A/B testing in order to optimize the ranking mechanism? Arguably, it would be disproportionate to require real-time adjustment of information in cases where businesses use dynamic ranking systems (Peifer 2021).

Once the main parameters have been determined, the CRD and the UCPD require only “general information” about their influence on the ranking. Traders are not required to disclose the detailed functioning of their ranking mechanism or even the underlying algorithm.Footnote 34 It is also not necessary to present the information about the ranking in a customized manner for each individual search query.Footnote 35 In this sense, transparency is limited to a rather general “model explanation” and does not require individualized “outcome explanation”.Footnote 36 This is particularly important for personalized rankings which are based on the customer’s purchasing history or other elements of an individual customer profile (see, generally, Kant 2020; Cohn 2019). In such a case the trader is not obliged to provide personalized information about the personalized ranking but can limit herself to a general description of the parameters used for personalization (Alexander 2019).

Both the CRD and UCPD also require traders to indicate the “relative importance” of the main parameters as opposed to other parameters. However, the two Directives give no details as to how the relative importance of the main parameters should be indicated. One option could be to indicate the weighting of the main parameters in percentage points, as in the following example: “The top five parameters are weighted with 70% while the remaining fifteen parameters are only weighted with 30%.” An alternative could be to use standardized information about ranking factors similar to the nutrition fact labels which indicate the percentage of different ingredients in food products (see Stoyanovich et al. 2018). Such a standardized “Nutrition Label for Rankings” could in particular facilitate the comparison of different ranking mechanisms. In order to increase the comprehensibility of the label, graphic elements and pictograms could also be used.Footnote 37

On the business side, Art. 5 P2B Regulation also requires information about the “main parameters” which determine the ranking. With regard to further details about the functionality of the ranking mechanism, the P2B distinguishes between “online intermediation services” and “online search engines”. Providers of search engines have to provide information about the “relative importance” of the main parameters.Footnote 38 In contrast, providers of online intermediation services shall explain “the reasons for the relative importance” of the main parameters.Footnote 39 While the former follows the model used in the CRD and the UCPD, the latter deviates from this model. One may wonder whether this terminological discrepancy is an expression of an underlying substantive difference between the two transparency regimes. From this perspective, Art. 5(2) P2B Regulation could be understood in the sense that online search engines only have to indicate the weighting of the ranking parameters (as a percentage), but are not obliged to give a reason for the chosen weighting. Ultimately, however, the terminological differences appear to be an editorial inaccuracy that has no deeper impact on the scope of the disclosure obligation. This reading is also supported by the (non-binding) Commission Guidelines on ranking transparency. There, Art. 5(1) and (2) P2B Regulation are mentioned in the same breath, without addressing a possible differentiation of the information obligations: “The descriptions given by providers in accordance with Article 5 should provide real added-value to the users concerned. 
Articles 5(1) and (2) require that providers give information not only of the main parameters but also the reasons for the relative importance of those main parameters as opposed to other parameters.”Footnote 40 Therefore, providers of online intermediation services and online search engines have to “go beyond a simple enumeration of the main parameters”Footnote 41 and provide a “second layer”Footnote 42 of explanatory information that explains what objective the ranking mechanism has been optimized for.

Given the diversity of ranking algorithms, the content of the explanations required under Art. 5(1) and (2) P2B Regulation may vary significantly depending on the design of the ranking mechanism. Art. 5(3) and (5) P2B Regulation, therefore, specify the requirements for the content of the explanations. This establishes a mandatory minimum content of the disclosures. On the one hand, Art. 5(3) P2B Regulation states that providers must disclose any possibility for users to influence the ranking through direct or indirect fees, e.g. through temporary “ranking boosters” or a preferred partner program. On the other hand, Art. 5(5) P2B Regulation defines a list of mandatory disclosures. Among other things, it must be indicated whether and to what extent the ranking mechanism takes into account the characteristics of the products offered (lit. a) and the relevance of these characteristics for consumers (lit. b). Providers of online search engines must also indicate the extent to which design characteristics of the website (e.g. page speed, optimization for mobile devices) influence the ranking of the website (lit. c). These mandatory disclosures aim to achieve a certain standardization and thus improve the comparability of the ranking practices of different providers.

Finally, Art. 5(6) P2B Regulation defines the limits of ranking transparency. According to this provision, platform operators are not required to disclose algorithms or any information that would enable manipulation of the ranking mechanism. However, Art. 5(6) P2B Regulation does not allow platform operators to limit transparency on the blanket grounds that this is necessary to prevent “gaming of the system”. Instead, they have to provide evidence that further disclosure would “with reasonable certainty” open the door for manipulation of the search results and thus create harm for consumers.

3.5 Modalities of Disclosure

In addition to the content of the information duties, the modalities of disclosure are relevant in order to assess the effectiveness of the transparency requirements. In this respect, too, there are differences between the regulations under comparison. Basically, two different models can be distinguished:

The UCPD and the CRD require that the information about the main ranking parameters and their relative importance be “made available in a specific section of the online interface that is directly accessible from the page” where the search query results or the offers are presented.Footnote 43 In other words, the information has to be provided at the place where the ranking is displayed to the consumer. Apparently, the EU legislator is guided by the idea that the consumer consults the information at the moment of the purchasing decision. Whether this is a realistic assumption seems rather doubtful in view of the complexity of the information.

In contrast, Art. 5(1) P2B Regulation stipulates that the explanation of the main parameters determining the ranking and the reasons for the relative importance of those parameters as opposed to other parameters be set out in the terms and conditions of the provider of online intermediation services. As can be seen from Art. 3(1)(b) P2B Regulation, the terms and conditions – and thus also the explanation of the rankings – must be made available to business users before they create an account on the platform. This shows that the information shall enable business users to make an informed decision about whether to use a given platform as a distribution channel. Furthermore, businesses shall be enabled to improve their visibility through “ranking optimization”. For online search engines, Art. 5(2) P2B Regulation applies a slightly different model. As “corporate website users” do not have to conclude a contract with the provider of the online search engine for their website to be ranked, the information does not have to be included in the terms and conditions of the search engine. Instead, Art. 5(2) P2B Regulation requires that the provider of the search engine provides an “easily and publicly available description” of the main ranking parameters. This shall enable the corporate website users to engage in meaningful (and legally acceptable) forms of “search engine optimization”.

4 The Digital Services Act: From Algorithmic Transparency to Algorithmic Choice?

The most recent layer of EU recommender regulation has been added by the Digital Services Act (DSA) which was formally adopted in October 2022 and will apply from 17 February 2024. In a sense, the DSA marks the transition towards a new regulatory model that goes beyond algorithmic transparency and makes a first tentative step towards algorithmic choice.

4.1 Extension of Transparency Rules

The DSA not only introduces a definition of “recommender systems” to the EU regulatory framework but also extends the existing rules on algorithmic transparency. Notwithstanding the minor differences in their respective scope of application, the transparency rules for recommender systems stipulated by the P2B Regulation, the CRD, and the UCPD primarily focus on digital platforms that facilitate the initiation of transactions between platform users.

The DSA goes one step further and introduces transparency requirements that apply to all online platforms that use recommender systems regardless of whether the recommendations are meant to facilitate any transactions between platform users. Therefore, the new transparency rules also apply to platforms such as Twitter, Spotify, or Tinder. The Commission’s original proposal for the DSA provided that the transparency rules would only apply to very large platforms (VLOPs) with more than 45 million monthly active users.Footnote 44 However, during the trilogue negotiations, this obligation was extended to all online platforms, regardless of the number of users.Footnote 45 The final version of Art. 27(1) DSA now requires all online platforms that use recommender systems to “set out in their terms and conditions, in plain and intelligible language, the main parameters used in their recommender systems, as well as any options for the recipients of the service to modify or influence those main parameters”.

If one compares this provision with the consumer-facing transparency rules of Art. 7(4a) UCPD and Art. 6(1)(a) CRD, it comes as something of a surprise that the DSA allows platform providers to hide the information about the main parameters in the small print of the platform’s terms and conditions.Footnote 46 It seems that even the drafters of the provisions did not really believe that platform users would be inclined to read such detailed information. At first glance, it is therefore surprising that Art. 27(2)(b) DSA additionally requires that the “reasons for the relative importance of those parameters” must also be stated. In this respect, the DSA even goes beyond the transparency requirements of the UCPD and CRD, which only require an indication of “relative importance”, but not an indication of any “reasons” for the weighting chosen by the platform provider. Perhaps the true character of Art. 27 DSA becomes clear if one assumes that the information about the functioning of the recommender system is not intended to be read and evaluated by the individual platform users. The information in the terms and conditions may rather serve for documentation purposes and as a starting point for investigations by the competent authorities or further research by civil society organizations. Such a reading of Art. 27 DSA would be in line with the additional transparency requirements in the DSA directed at public authorities. For example, according to Art. 40(3) DSA the competent national Digital Services Coordinator or the Commission may ask the platform provider to “explain the design, the logic, the functioning and the testing of their algorithmic systems, including their recommender systems”. Such explanations offered by the platform providers could then be compared with the information on “main parameters” provided under Art. 27(1) DSA.

One question that still needs further clarification is how Art. 27 DSA relates to the other transparency requirements from the UCPD, the CRD, and the P2B Regulation. On the surface, Art. 2(4)(e) and (f) DSA seems to provide a simple answer to this question by stating that the DSA is “without prejudice” to the P2B Regulation and Union law on consumer protection. What is less clear, however, is what the formula “without prejudice” means in this context. Does it mean that UCPD, CRD and P2B take precedence over the horizontal DSA as vertical leges speciales? But does this also apply where certain topics – such as algorithmic choice – are not being addressed in the leges speciales? In other words, since the UCPD and the CRD do not contain separate rules on “recommender switchboards”, would it be conceivable that Art. 27(3) DSA also applies to online marketplaces covered by the UCPD and the CRD? Such a “combined” application of the different rules would mean that the content of the information on “main parameters” would be governed by UCPD and CRD, but the user interface design of the “recommender switchboard” would be governed by DSA. These considerations show that there is still a considerable need for coordination in the increasingly complex regulatory landscape for recommender systems.

4.2 User Control Over Ranking Criteria

The DSA does not limit itself to extending the transparency requirements for recommender systems but makes a first tentative step towards a new regulatory model that seeks to enable algorithmic choice for platform users. In this sense, the Commission Proposal for the DSA stipulated that providers of VLOPs should inform their users about “any options for the recipients to modify or influence those parameters that they may have made available, including at least one option which is not based on profiling”Footnote 47 within the meaning of Art. 4(4) GDPR. However, in order to provide users with effective control over the functioning of recommender systems, information about available options is not sufficient. It must also be possible for users to change the parameters easily. Whether the available options are used in practice very much depends on the user interface design. Therefore, the Commission’s proposal for the DSA required providers of VLOPs which offer several options to modify or influence ranking parameters to “provide an easily accessible functionality on their online interface allowing the recipient of the service to select and to modify at any time their preferred option for each of the recommender systems that determines the relative order of information presented to them”.Footnote 48 From a practical perspective, this meant that providers of VLOPs would have to offer a sort of “recommender switchboard” (or control panel) that allows users to modify the functioning of the recommender system. Many platforms already today offer a number of options for adjusting the ranking criteria on a voluntary basis.

During the trilogue negotiations, the scope of the provision on algorithmic choice was partially extended beyond VLOPs. In particular, the duty to inform about any options to modify or influence the parameters of the recommender system has been extended to all online platforms.Footnote 49 Similarly, the duty to provide an easily accessible functionality to change the parameters (if such an option is provided) has also been extended to all online platforms.Footnote 50 However, the duty to provide a “profiling-free” ranking as an option still applies only to VLOPs.Footnote 51 Therefore, the transition from a transparency-based model to a choice-based model remains rather limited.
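As a rough sketch of what such a “recommender switchboard” might look like under the hood, consider the following illustration. The option names, ranking rules, and item fields are invented for the example and are not drawn from the DSA or from any real platform:

```python
# Hypothetical "recommender switchboard": the user selects one of
# several ranking options, at least one of which does not rely on
# profiling. All options and item fields are invented examples.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    price: float
    published: int          # e.g. a timestamp
    profile_score: float    # relevance predicted from the user's profile

RECOMMENDER_OPTIONS = {
    # profiling-based default
    "personalized": lambda items: sorted(items, key=lambda i: i.profile_score, reverse=True),
    # profiling-free alternatives (cf. the option VLOPs must offer)
    "newest_first": lambda items: sorted(items, key=lambda i: i.published, reverse=True),
    "cheapest_first": lambda items: sorted(items, key=lambda i: i.price),
}

def recommend(items, option: str = "personalized"):
    """Rank the items according to the user's currently selected option."""
    return RECOMMENDER_OPTIONS[option](items)

catalog = [
    Item("A", price=9.99, published=3, profile_score=0.2),
    Item("B", price=4.99, published=1, profile_score=0.9),
    Item("C", price=19.99, published=2, profile_score=0.5),
]

# The user switches from the profiling-based default to a profiling-free option:
print([i.title for i in recommend(catalog, "cheapest_first")])  # ['B', 'A', 'C']
```

The sketch makes the regulatory point visible: the set of keys in the switchboard is fixed by the provider, so the user chooses only among provider-defined options, which is exactly the limitation criticized below.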

It is doubtful whether such a limited approach to algorithmic choice is sufficient to ensure autonomy and informed choice.Footnote 52 While the DSA gives users a certain degree of control over the functioning of recommender systems, the rules still leave it in the hands of the platform provider to decide which modifications of the recommender system are made available. The only real choice that must be made available by VLOPs to their users is a “profiling-free” ranking. Other user preferences, such as a more prominent display based on environmental or social ranking criteria, do not need to be offered. Moreover, as mentioned before, the effectiveness of the “recommender switchboard” very much depends on the user interface design. It must be ensured that platform operators do not attempt to influence and impair the selection of ranking parameters in an unfair manner by using manipulative design choices or “dark patterns”. In this sense, the EU Council rightly suggested during the trilogue negotiations to add a provision that explicitly stipulates that providers of VLOPs “shall not seek to subvert or impair the autonomy, decision-making, or choice of the recipient of the service through the design, structure, function or manner of operating of their online interface” when presenting options regarding the functioning of the recommender system.Footnote 53 However, this proposal did not make it into the final text of the DSA, which only mentions “dark patterns” in a more general context in Recital 67. An important contribution to the development of a uniform and user-friendly “recommender switchboard” could be provided by voluntary standards, which are to be developed by the relevant European and international standardization bodies (CEN, ISO) at the suggestion of the Commission.Footnote 54

5 Third Party Recommender Systems: Towards a Market for “RecommenderTech”

While the DSA only takes rather hesitant steps towards algorithmic choice, a more radical solution could have been possible. Instead of leaving the range of available options in the hands of the platforms, the DSA could have allowed platform users to choose between different competing third-party recommender systems. Such a solution would give users more control over what information they see on digital platforms. A practical proposal to this effect was recently put forward by a group of scholars led by Stanford political scientist Francis Fukuyama (Fukuyama et al. 2020). In essence, they propose that users should be able to select an alternative recommender system that works as an external filtering device on top of a given platform. Such third-party software, which Fukuyama and his co-authors refer to as “middleware”, would interact with the data provided by the platform via an application programming interface (API). With the help of middleware, users of an online retail marketplace could decide to choose a filter that displays only products that are environmentally friendly or from producers that comply with high social standards (Fukuyama 2021).
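A minimal sketch of the “middleware” idea, under the assumption that the platform exposes its search results through an API and the third-party recommender merely re-filters and re-orders what the platform returns. All data structures, field names, and filtering criteria here are invented for illustration:

```python
# Hypothetical "middleware" recommender in the sense of Fukuyama et al.
# (2020): a third-party filter sitting on top of a platform's API that
# re-ranks the results. Fields and criteria are invented for illustration.

def platform_api_search(query: str) -> list[dict]:
    """Stand-in for a platform API returning the platform's own ranking."""
    return [
        {"name": "Product X", "eco_label": False, "fair_trade": False, "rank": 1},
        {"name": "Product Y", "eco_label": True,  "fair_trade": True,  "rank": 2},
        {"name": "Product Z", "eco_label": True,  "fair_trade": False, "rank": 3},
    ]

def eco_middleware(results: list[dict]) -> list[dict]:
    """Third-party re-ranker: keep only eco-labelled products and
    promote fair-trade producers to the top."""
    eco = [r for r in results if r["eco_label"]]
    return sorted(eco, key=lambda r: (not r["fair_trade"], r["rank"]))

results = eco_middleware(platform_api_search("headphones"))
print([r["name"] for r in results])  # ['Product Y', 'Product Z']
```

The essential design choice is that content hosting (the platform's API) and content curation (the middleware filter) are separated, so that different providers could compete on the curation layer alone.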

A similar approach has also been put forward in a proposal for an amendment to the DSA, which was tabled in the IMCO Committee of the European Parliament in July 2021. The proposal suggested adding the following passage to Art. 29 of the Commission’s proposal for the DSA:

In addition to the obligations to all online platforms, very large online platforms shall offer the recipients of the service the choice of using recommender systems from third party providers, where available. Such third parties must be offered access to the same operating system, hardware or software features that are available or used in the provision by the platform of its own recommender system.Footnote 55

This proposal effectively aimed at unbundling content hosting and content curation by introducing an interoperability requirement that would open up online platforms for third-party recommender systems. The underlying idea was to increase consumer choice and competition by creating a market for third-party recommender systems. Similar solutions have been applied successfully in other fields where interoperability has helped to boost innovation with regard to complementary products and services.Footnote 56 One prominent example that could serve as a model also for recommender systems is the unbundling of accounts from financial services (“open banking”). Here mandatory interoperability has created a market for third-party payment service providers such as payment initiation providers (e.g. iDeal, Sofort Überweisung, Trustly) or account information providers (e.g. Zuper, Outbank, Numbrs).

Similar to the blossoming market for “FinTech” businesses, a market for providers of “RecommenderTech” – or “middleware” in the terminology suggested by Francis Fukuyama – could be created. It is unclear, however, whether providers of “middleware” will emerge that are able and willing to offer alternative recommender systems that counterbalance the economic dominance of major online platforms (Ghosh and Srinivasan 2021).

It will be necessary to explore how such an unbundling of platform and recommender systems could be technically feasible. In particular, issues of security and privacy have to be solved. As a general rule, interoperable systems with a higher level of interconnectedness may lead to higher risks regarding reliability and security (Keber and Schweitzer 2017). The crucial question, however, might be whether third-party recommender systems will be economically viable and what possible business models might look like (Keller 2021). Should such a product be funded via advertising or based on a subscription model? How much would consumers be willing to pay for fair and unbiased rankings? More fundamentally, is it right that only consumers who can afford to pay for “RecommenderTech” solutions benefit from fair and independent rankings, while financially vulnerable consumers must continue to use the (potentially biased) bundle of hosting and ranking?

While the IMCO proposal cited above did not make its way into the final version of the DSA, its twin, the Digital Markets Act (DMA), could open the door for providers of “Recommender Tech”. Art. 6(1)(f) of the Commission’s proposal for the DMA stipulated that providers of core platform services who have been designated as gatekeepers by the European Commission shall “allow business users and providers of ancillary services access to and interoperability with the same operating system, hardware or software features that are available or used in the provision by the gatekeeper of any ancillary services”. In this context, the term “ancillary service” means “services provided in the context of or together with the core platform services”.Footnote 57 The Commission proposal explicitly mentions payment services, fulfillment, identification, and advertising services as examples. One could well imagine that third-party recommender services also fall under the term ancillary services and are thus covered by the mandatory interoperability requirement.

Following the provisional agreement on the DMA, reached in March 2022, the wording of the provision has changed and the term “ancillary services” is no longer used (see Gerpott 2022, providing an overview of the DMA based on the provisional trilogue agreement). In substance, however, the interoperability obligation was retained. The final version of the provision now requires gatekeepers to “allow business users and alternative providers of services provided together with, or in support of, core platform services, free of charge, effective interoperability” with the gatekeeper’s operating system, hardware, or software features.Footnote 58 It seems that the DMA provisions on interoperability for alternative providers of ancillary services have opened the door (at least a little bit) for third-party recommender systems. Whether this door leads to an attractive market for providers of “RecommenderTech” and whether it will be possible to develop viable business models in this field that will experience growth comparable to that of the “FinTech” ecosystem remains to be seen.

6 Conclusion

The European regulatory framework for algorithmic rankings and recommendations in the platform economy has developed rapidly in a short period of time. Until very recently, there were no specific rules for recommender systems at the European level. With the entry into force of the P2B Regulation and the DSA and the recent reform of UCPD and CRD, this situation has changed. Instead of too few, there are now perhaps too many regulations, which are not sufficiently coordinated. On a more fundamental level, one may ask whether transparency requirements alone are sufficient to ensure unbiased recommendations and consumer autonomy. Against this background, it is to be welcomed that the DSA takes a first step (albeit hesitantly and incompletely) from a regulatory model based on algorithmic transparency towards a new regulatory model based on algorithmic choice. In the medium term, the DMA could even have a greater impact if it succeeds in creating a market for “RecommenderTech”. In such a scenario, third-party recommender systems could offer consumers a real alternative to the rankings and recommendations currently provided by large platforms. While the economic viability and technical feasibility of such a decentralized regulatory model are not yet entirely clear, a choice-based approach to recommender governance may indeed be the wave of the future.