When a person recommends a restaurant, movie or book, he or she is usually thanked for this recommendation. The person receiving the information will then evaluate, based on his or her knowledge about the situation, whether to follow the recommendation. With the rise of AI-powered recommender systems, however, restaurants, movies, books, and other items relevant for many aspects of life are generally recommended by an algorithm rather than a person. This volume aims to shed light on the implications of this transnational development from both legal and ethical perspectives and to spark further interdisciplinary thinking about algorithmic recommender systems.

In recent years, scientific contributions have analyzed the challenges arising from the introduction of recommender systems as tools to support our decisions in many aspects of everyday life, from business and education to leisure and dating (Ricci et al. 2011; Milano et al. 2020). From an ethical perspective, Milano, Taddeo and Floridi (Milano et al. 2020) identified at least six areas of concern related to the use of recommender systems, namely the spread of inappropriate content, privacy violations, threats to individual autonomy and personal identity, system opacity, fairness, and possible negative social effects.

Looking closely at the functioning of a recommender system, it is possible to illustrate many of these concerns. Let’s consider an application familiar to many travelers: a system that finds, filters and recommends possible accommodations for a holiday or business trip on a community-based online platform. This kind of system not only helps travelers find a place to stay, but also helps hosts find new clients and, of course, is essential for the online platform itself to work. This means that, as already highlighted by Milano, Taddeo and Floridi (Milano et al. 2021), the study of a recommender system requires a multi-stakeholder analysis to understand the interests and needs of all the actors involved in its functioning and to highlight possible ways in which the system may pose a risk to one or more stakeholder groups, as well as to society at large.

Starting with fairness concerns, if the system performs differently across demographic groups based on users’ personal attributes such as gender, spoken language, or nationality, there is a concrete risk of discrimination both for consumers looking for a place to stay and for hosts providing accommodation (Burke 2017; Solans et al. 2021) – for example, certain users might not receive recommendations for certain accommodations because of wrong or biased automated predictions. Beyond the distribution of the system’s output, other fairness concerns may arise as well, regarding, among other things, the inclusiveness of the user experience, the application environment of the system (Grgić-Hlača et al. 2018), and the working conditions of the people involved in the training, development, and supply processes of the system (Fuchs and Fisher 2015; Gray and Suri 2019).
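To make the output-related concern more tangible, the following minimal sketch shows one way such a disparity could be quantified: how much position-weighted exposure listings belonging to different host groups receive across users’ top-k recommendation lists. The group labels, listing names and the logarithmic discount are illustrative assumptions for this sketch, not part of any system or study cited here.

```python
# Minimal sketch (illustrative assumptions only): measuring how exposure in
# top-k recommendation lists is distributed across hypothetical host groups.
import math
from collections import defaultdict

def exposure_by_group(ranked_lists, item_group, k=10):
    """Sum position-discounted exposure per group over many users' top-k lists."""
    exposure = defaultdict(float)
    for ranking in ranked_lists:
        for pos, item in enumerate(ranking[:k], start=1):
            # DCG-style discount: lower positions contribute less exposure.
            exposure[item_group[item]] += 1.0 / math.log2(pos + 1)
    total = sum(exposure.values()) or 1.0
    return {group: share / total for group, share in exposure.items()}

# Hypothetical example: two listings from host group "A", one from group "B".
item_group = {"listing1": "A", "listing2": "A", "listing3": "B"}
rankings = [["listing1", "listing3", "listing2"],
            ["listing2", "listing1", "listing3"]]
print(exposure_by_group(rankings, item_group, k=3))
```

A large, persistent gap between groups in such a measure would be one symptom of the output-related fairness problem described above, although it would not by itself establish discrimination in a legal sense.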

Another much discussed ethical concern is the extent of the influence on – if not interference with – individual decision processes. In our example, the recommender system filters just a few options out of many based on their predicted relevance for the user. Other options are not hidden from the user, but are less visible – for example, they are not shown among the first results. So, even though users can access information about all the available accommodation, in practice they are more likely to consider just the first options. The tendency of users to interact with items at the top of a list with higher probability than with items lower down, regardless of the items’ actual relevance, is called “position bias” and affects users of recommender systems (Collins et al. 2018). This is problematic for at least two reasons. On the one hand, if automated predictions are inaccurate or are manipulated to promote other business goals, users are pushed toward options that do not reflect their interests and are therefore distracted from their original intentions. This has led scholars to reflect on the manipulation threats related to digital technologies and AI systems in particular (Jongepier and Klenk 2022; Susser et al. 2019). On the other hand, irrespective of the predictions’ accuracy, while interacting with the system users form their opinion based only on a selection of possibly relevant items and might miss important pieces of information. This has led scholars to ask whether pieces of information predicted not to be relevant should also be included among the first results shown, a practice that has been called “serendipity by design” (Reviglio 2017). Moreover, built-in nudging techniques might influence individual decisions in a morally problematic way, for example by pushing users to book quickly so as not to lose the accommodation, thereby prompting hasty decisions.
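The mechanism behind position bias can be illustrated with a minimal simulation: even when items are equally relevant, click counts fall off sharply with list position simply because lower-ranked items are examined less often. The examination probabilities and relevance values below are assumed purely for illustration and are not taken from the cited study.

```python
# Minimal sketch (assumed numbers): position bias means the chance that a
# user even examines an item drops with its rank, so observed clicks
# conflate relevance with visibility.
import random

def simulate_clicks(relevance, n_users=10_000, seed=0):
    """Click = user examines the position AND finds the item relevant."""
    rng = random.Random(seed)
    # Assumed examination probability decaying with position: 1, 1/2, 1/3, ...
    examine_prob = [1.0 / (pos + 1) for pos in range(len(relevance))]
    clicks = [0] * len(relevance)
    for _ in range(n_users):
        for pos, rel in enumerate(relevance):
            if rng.random() < examine_prob[pos] and rng.random() < rel:
                clicks[pos] += 1
    return clicks

# Five equally relevant accommodations still receive very different click
# counts purely because of their position in the list.
print(simulate_clicks(relevance=[0.5, 0.5, 0.5, 0.5, 0.5]))
```

The same logic explains why click data fed back into the system can reinforce whatever ordering the system already produces, regardless of the items’ actual relevance.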

Privacy issues of recommender systems are closely related to fairness and individual autonomy concerns. Indeed, in order for recommender systems to predict what kind of content is relevant for a certain user, and therefore to influence their decisions in a meaningful way, it is necessary to access, collect and process their data. Personal data is provided by users as part of agreements to use digital services, such as online platforms, and user behavior, such as history and navigation data, is tracked and amalgamated, often without proper consent or even awareness, and beyond the purpose of providing the digital services sought by the user. The use of behavioral data as raw material for “prediction products” anticipating future behavior has aptly been described as surveillance capitalism (Zuboff 2019), causing power to aggregate in the hands of big tech companies (Véliz 2020), which often give consumers no choice but to consent to the collection and processing of personal data if they want to use digital services. Moreover, access to user data by untrusted parties or inappropriate use of this data can represent a serious threat to user privacy (Friedman et al. 2015).

Legal and ethical questions regarding (meaningful) user consent and businesses’ use of dark patterns have been discussed with regard to the EU General Data Protection Regulation (GDPR), which entered into force on 24 May 2016 and has applied since 25 May 2018. With the rise of the platform economy (Acs et al. 2021), privacy concerns beyond subjective data rights moved into the focus of attention, e.g., the impact of technology (including recommender systems) on decisional and intellectual privacy (Richards 2017). Big Data and artificial intelligence pose new challenges to the traditional understanding of privacy and data protection, prompting discussions on predictive privacy (Mühlhoff 2021). The EU Digital Services Act (DSA) of October 2022 addresses structural issues beyond subjective user rights in a number of novel ways (Kaesling 2023b). Inter alia, it contains special rules on recommender systems on online platforms and additional obligations for very large online platforms and very large online search engines in relation to their use of recommender systems (Article 27 DSA, Article 38 DSA). The role of recommender systems in systemic risks flowing from the design, functioning and use of digital services is also addressed (Articles 34 and 35 DSA). The Digital Services Act specifically addresses the impact of recommender systems on the ability of recipients to retrieve and interact with information online, including their role in facilitating the search for relevant information and contributing to an improved user experience, as well as in the amplification of certain messages, the viral dissemination of information, and the stimulation of online behavior (Recital 70 DSA). The interpretation and impact of these new rules have yet to be determined (Janal 2021). This volume contains some of the first studies of these regulations and their relation to other regulatory approaches, while juxtaposing perspectives from legal and ethical studies with the same points of reference.

System opacity, meaning a lack of transparency in the decisional process and poor explainability of decisional outcomes, is a problem that affects many AI-powered systems, including recommender systems. In our example, a possible case of opacity would be the output of recommendations that cannot be explained on the basis of the search parameters. Moreover, requiring users to input information that does not intuitively contribute to refining the search for accommodation, without explaining how the system processes this information, would be an example of a transparency and a data protection issue at the same time. Opacity is an ethical issue because unexplainable decisions cannot be understood, and therefore cannot be objected to, by users and developers. This undermines user control over the system and human agency in general. In contrast, transparent systems and explainable decisions empower users by allowing them to contest decisions they perceive as wrong or unfair. In the field of computer science, it is much debated to what extent automated decisions based, for instance, on machine learning (ML) could and should be made explainable and which analytic methods should be used to explain whole models or single decisions (Molnar 2022). The Digital Services Act demands recommender system transparency from online platforms, specifically with regard to the main parameters and the options for users to modify or influence those main parameters (Article 27 DSA), i.e., linking transparency to user choice, as will be analyzed in this volume.
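As a rough illustration of what explaining a single decision can mean in practice, the following minimal sketch decomposes a recommendation score produced by a simple linear scoring model into per-feature contributions, one basic local-explanation idea discussed in the interpretable-ML literature. The feature names and weights are hypothetical and do not describe any system discussed in this volume.

```python
# Minimal sketch (hypothetical features and weights): explaining a single
# recommendation by listing each feature's contribution to a linear score.
weights = {"price_match": 1.2, "location_match": 0.8, "past_host_rating": 0.5}

def explain_score(features):
    """Return the total score and each feature's contribution to it."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    return sum(contributions.values()), contributions

score, contributions = explain_score(
    {"price_match": 0.9, "location_match": 0.2, "past_host_rating": 1.0}
)
print(f"score = {score:.2f}")
for name, contribution in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {contribution:+.2f}")
```

For complex, non-linear models, post-hoc methods aim to approximate this kind of decomposition locally; how faithful such approximations are, and whether they satisfy legal transparency requirements, remains contested.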

Concerns regarding inappropriate content are common when considering recommender systems, since misclassification of offensive and potentially harmful items has often occurred in the past. An infamous occurrence was the recommendation of disturbing videos portraying grotesque imitations of famous cartoon characters to children on YouTube Kids (Papadamou et al. 2020). In that case, the classifier failed to sort out disturbing videos uploaded by trolls and labeled them as child-friendly content, causing psychological distress to many children. In our example, inappropriate content could take the form of fake accommodation listings posted by scammers, or of offensive content such as explicit text or images disguised as accommodation descriptions or user profiles.

Finally, the potential negative impact of recommender systems on society has attracted much attention in the last decade due to scandals involving the spread of disinformation and threats to democratic processes. This is addressed in the Digital Services Act as part of the systemic risks and their mitigation (Articles 34 and 35 DSA) (Peukert 2021; Kaesling 2023a). Cambridge Analytica, arguably one of the most discussed cases, directly involved the use of recommender systems, since content meant to influence voters was shown as recommended content in their social media news feeds. Possible negative impact on society is not limited to disinformation. Considering our example one last time, we should question the impact that such an application could have on the rental market in a city – for example, whether it will lead to price inflation or to a scarcity of apartments available for long-term lease, and how this will affect the lives of inhabitants who can no longer find affordable places. Concluding this list of concerns on a positive note, investigating the impact of recommender systems on society also includes finding ways to employ them for social good, promoting sustainable development goals, individual flourishing and harm prevention (Hermann 2022; Taddeo and Floridi 2018).

These general issues regarding recommender systems also mirror the ethical concerns expressed by the High-Level Expert Group (HLEG) on AI appointed by the European Commission. Indeed, in formulating the key requirements AI systems should meet in order to be trustworthy, the HLEG pointed out seven main risk areas that AI audits should focus on: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability (HLEG on AI 2019).

Within the discipline of legal studies alone, several legal areas are touched upon in the context of algorithmic recommender systems. These areas include discrimination law, data protection law, unfair competition law, existing sector-specific platform regulation, such as the P2B Regulation (Busch 2019), and contract law and its general principles, such as the private autonomy of contracting parties. In addition to the far-reaching regulation of recommender systems on online platforms in the Digital Services Act, the proposed EU framework for Artificial Intelligence (Proposal for a Regulation laying down harmonized rules on artificial intelligence and amending certain Union legislative acts of 21 April 2021) will have an impact upon its adoption, which is – in part – already anticipated in this volume.

The Digital Services Act contains a workable legal definition of recommender systems. According to Art. 3 lit. s DSA, a ‘recommender system’ is a fully or partially automated system used by an online platform to suggest in its online interface specific information to recipients of the service or prioritize that information, including as a result of a search initiated by the recipient of the service or otherwise determining the relative order or prominence of information displayed. The understanding, implementation and further development of the Digital Services Act’s regulatory approaches in the context of recommender systems depend on interdisciplinary exchanges, which this volume aims to start and foster. Assessing the role of recommender systems for systemic risks within the meaning of Article 34 DSA, for example, presupposes the development of a system definition (Kaesling 2023a), which can be informed by the human-centric approach of the High-Level Expert Group on AI appointed by the European Commission, and specifically by the seven above-mentioned key requirements for trustworthy AI (HLEG on AI 2019). The new legal framework of the Digital Services Act gives ample space to build on interdisciplinary insights such as those found in this volume.

Contributions in this volume offer analyses from different perspectives and aim to enrich both the ethical debate and the discussion on the interpretation of new legal norms and their future developments. Legal and ethical issues of recommender systems will be addressed in three thematic clusters: Fairness and Transparency, Manipulation and Personal Autonomy, and Design and Evaluation of Recommender Systems.

The first section, entitled “Fairness and Transparency”, addresses legal and ethical issues related to discrimination and unfair treatment of individuals as an effect of the development and application of recommender systems, as well as further concerns related to the lack of transparency of the decisional processes behind automated recommendations and its moral and legal implications for users. Susanne Gössl, in her paper “Recommender Systems and Discrimination”, deals with a much-debated topic from a legal point of view. Gössl not only examines data protection law, unfair competition law and general anti-discrimination law, but also exposes lacunae in that regulation and evaluates the potential of emerging regulation to close regulatory gaps, notably the information approach, which is centered around the best possible information about the parameters and the risks of a specific recommender system. This aspect is then continued in Christoph Busch’s contribution “Platform Regulation and Recommender Systems – From Algorithmic Transparency to Algorithmic Choice”, in which he describes a paradigm shift and its consequences for the regulation of recommender systems on online platforms. Gesmann-Nuissl and Meyer analyze the specific lack of transparency on gaming platforms. In their contribution, entitled “Black Hole instead of Black Box? – The Double Opaqueness of Recommender Systems on Gaming Platforms and its Legal Implications”, they find that the mixing of different components, namely shopping, streaming and social media, exacerbates the black-box problem. With a view to the Digital Services Act and the proposed Artificial Intelligence Act, they develop solutions fostering transparency for platform users, platform operators, and software developers as stakeholders. Sergio Genovesi complements these viewpoints by considering the position of digital laborers in the value production and redistribution processes behind recommender systems in his paper “Digital Labor as a Structural Fairness Issue in Recommender Systems”.

The second section, on “Manipulation and Personal Autonomy”, focusses on recommender systems’ influence on the formation of human will and values. Drawing on both legal and philosophical backgrounds, Karina Grisse explores the risks of manipulation by recommender systems and how EU law can mitigate them in her chapter “Recommender Systems, Manipulation and Private Autonomy – How European civil law regulates and should regulate recommender systems for the benefit of private autonomy”. Marius Bartmann then argues, from an ethical point of view, that the identification of the recommendation rationale is vital for preserving autonomous human decision-making in his chapter entitled “Reasoning with Recommender Systems? Practical Reasoning, Digital Nudging, and Autonomy”. Scott Robbins, in this section’s last chapter, entitled “Recommending Ourselves to Death: values in the age of algorithms”, argues that recommendations are likely to be off track due to distorting forces that are inherent to evaluative recommendations. He goes further and argues that these incorrect recommendations will feed back into our own evaluative standards – wresting control over the evaluative from humans. He makes the case that this is a fundamental loss of meaningful human control.

The section “Designing and Evaluating Recommender Systems” focusses on the practical implementation of general legal and ethical principles. In order to better understand and effectively address the risks associated with the use of recommender systems, the lack of transparency and the potential for manipulation, the design and constant (re-)evaluation of recommender systems in their specific context is paramount. With regard to the specific use case of a food recommender app, in their contribution “Ethical and Legal Analysis of Machine Learning Based Systems: A Scenario Analysis of a Food Recommender System”, Olga Levina and Saskia Mattern exemplify how a combined ethical and legal assessment should be performed, highlighting the benefits of integrating such an assessment into the design process. In the chapter “Factors Influencing Trust and Use of Recommendation AI: A Case Study of Diet Improvement AI in Japan”, Arisa Ema and Takashi Suyama present a survey conducted in Japan investigating users’ trust in a recommender system for dietary habit improvement. The survey examines the impact that the use of AI technologies, data management standards and purposes of use have on users’ trust. Lisa Roux and Thierry Nodenot, in the last chapter, entitled “Ethics of E-learning Recommender Systems: Epistemic Positioning and Ideological Orientation”, investigate the ethical and practical implications of recommender systems’ design in e-learning and show how system design can reflect ideological conceptions of science and techniques and specific visions of teaching and learning.

This volume documents some of the ideas developed in the framework of the editors’ project “Recommender Systems: Legal and Ethical Issues”, which was funded by the University of Bonn’s Transdisciplinary Research Area 4 (TRA 4) “Individuals, Institutions and Societies”, set up as part of the University of Bonn’s excellence initiative. The editors would like to thank project assistants Luis Nussbauer and Marie Bente John for their valuable support, as well as all contributors to the hybrid conference on the topic organized in December 2021 as a preliminary step toward the publication, namely Marius Bartmann, Joanna Bryson, Christoph Busch, Vicky Charisi, Dagmar Gesmann-Nuissl, Susanne Gössl, Karina Grisse, Olga Levina, Stefanie Meyer, Silvia Milano, Julia Maria Mönig, Lisa Roux, Shannon Vallor and Aimee van Wynsberghe. The interdisciplinary academic discussions at the conference inspired and shaped many of the contributions in this volume.