Abstract
Choices and preferences of individuals are nowadays increasingly influenced by countless inputs and recommendations provided by artificial intelligence-based systems. The accuracy of recommender systems (RS) has achieved remarkable results in several domains, from infotainment to marketing and lifestyle. However, sensitive use-cases, such as nutrition, call for more complex dynamics and responsibilities beyond conventional RS frameworks. On one hand, virtual coaching systems (VCS) are intended to support and educate the users about food, integrating additional dimensions w.r.t. conventional RS (i.e., leveraging persuasion techniques, argumentation, informative systems, and recommendation paradigms), and early results are promising. On the other hand, as of today, VCS raise unexplored ethical and legal concerns. This paper discusses the need for a clear understanding of the ethical/legal-technological entanglements, formalizing 21 ethical and ten legal challenges and the related mitigation strategies. Moreover, it elaborates on nutrition sustainability as a further dimension of nutrition virtual coaches for a better society.
1 Introduction
Individual choices of people in our society are constantly influenced by online media and by recommendations and suggestions powered by artificial intelligence, impacting all sorts of domains on a daily basis. Consequently, industry and academia are intensifying their efforts to improve the number and quality of the possible alternatives suggested to the user [1]. By doing so, service consumption and user satisfaction could be maximized, but at what cost? Conflicting interests can be identified, e.g., food recommended according to the user’s taste may conflict with dietary recommendations, healthy behavior goals, or sustainability principles.
Recommender systems (RS) [2] have reached remarkable accuracy and efficacy in several domains, including lifestyle, infotainment, and e-commerce [3, 4]. Nevertheless, more sensitive areas (e.g., nutrition) demand more complex dynamics beyond conventional RS’ capabilities. Indeed, today’s nutrition support and education domain demands Virtual Coaching Systems (VCS), which integrate additional dimensions w.r.t. conventional RS and are therefore more suitable for such sensitive scenarios. In particular, VCS leverage persuasion techniques, argumentation, informative systems, and RS (see Fig. 1). Early approaches show promising results, and their efficacy is approaching that of human coaches. However, the cutting-edge techniques and technologies used to push the boundaries of modern VCS’ efficacy and efficiency raise important ethical and legal concerns. In particular, a clear understanding of the ethical/legal-technological entanglements is still outstanding. Prior studies principally focus on food RS, proposing evaluation frameworks [5, 6]. However, to the best of our knowledge, they do not scale to address the more complex nutrition virtual coaches (NVC).
The contribution of this paper focuses on the adaptation and extension of existing frameworks to evaluate ethical, legal, and sustainability concerns w.r.t. NVC. In particular, it
-
identifies and analyzes ethical and legal challenges characterizing all the dimensions (overlapping areas included) of NVC;
-
amends earlier studies (i.e., [5, 6]) applying an analysis from a more comprehensive perspective and discussing mitigation strategies;
-
elaborates on legal boundaries, concerns, and possible solutions w.r.t. NVC;
-
tackles nutrition sustainability as an ethical dimension.
The rest of the paper is organized as follows. Section 2 provides the needed background on the NVC’s components and on ethics in AI, pivoting around the concept of nudges. Section 3 describes the methodology adopted to elicit the ethical and legal challenges. Section 4 presents and elaborates on the ethical challenges and possible mitigation strategies. Section 5 evolves from the ethical concerns toward the analysis of concrete legal boundaries and possible gray areas that demand clarification. Section 6 raises questions about the sustainability of nutrition patterns and articulates possible NVC contributions to creating sustainable habits and lifestyles.
2 Virtual coaching components and the ethics of nudges
Ethics has focused on the study of human behavior since the time of the ancient Greek philosophers; the Romans later referred to it as morality [7]. As human societies evolve, these moral and ethical principles may evolve or be questioned and nuanced. Likewise, the continuous evolution of AI research and its application to recommender systems produces new requirements, understandings, practices, architectures, models, and norms (e.g., see VCS and AI predictive systems in general). Therefore, given the influence of AI-based systems on individual decision-making, the considerations about ethical and moral principles must evolve accordingly and in a timely manner, especially in sensitive scenarios. By evolve, we refer to re-assessments and possible adjustments/extensions of ethical concepts, aspects, and assets.
As a matter of fact, ethical concerns have been entangled with AI since its beginnings. The numerous domains of application (e.g., nutrition, behavioral change, and e-health [5, 6, 8]) and the increasing computation capabilities and communication means (e.g., via natural language processing and empathic communication [9]) exacerbate the implications of such concerns and require new, careful considerations. Hagendorff [10] analyzed 22 major AI ethics guidelines from academic institutions and industry leaders and identified accountability, privacy, and fairness as the ethical aspects present in 80% of the reviewed guidelines.
As a general response to accountability-related concerns, the AI community has undertaken the challenge of reducing the opacity of machine learning and deep learning predictors [11, 12]. Such a challenge entails the capability of explaining their behaviors. Moreover, the need for multi-modal explanations to reduce human biases in interpretation has been highlighted [13].
Concerning privacy, several works and regulations have outlined good practices for treating sensitive data within an AI system. The General Data Protection Regulation (GDPR) is a legal code valid in the European Union (EU) that regulates personal data and privacy treatment and circulation. The GDPR introduces a set of privacy requirements and practices and legal enforcement for those systems that deal with personal data [14, 15]. Moreover, recent research started to develop decentralized and personalized approaches allowing for user-centric, distributed, and automated management of privacy in mobile apps and social network applications [16].
Concerning fairness, the need to reduce/eliminate biases affecting the data collection, processing, and analysis within an AI system is particularly challenging to address. Indeed, such biases can affect user experience, data, and algorithms [17]. Since ML and DL are highly data-dependent, the magnitude of the bias effect can be extremely significant. Several toolkits have been developed to assess and mitigate biases in AI. For instance, the AI Fairness 360 (AIF360) Python toolkit allows detecting, understanding, and mitigating algorithmic bias within industrial critical decision-support systems [18, 19]. Moreover, IBM, Microsoft, FAO, and the Italian Ministry of Innovation have signed an agreement to promote six principles (transparency, inclusion, accountability, impartiality, reliability, and security and privacy) within ethical approaches to AI that must rely on a sense of shared responsibility among international organizations, governments, institutions, and the private sector [20].
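To make the fairness aspect concrete, the parity metrics that toolkits such as AIF360 automate can also be computed by hand. The following minimal sketch (the toy outcomes and group encodings are invented for illustration and do not reflect AIF360’s actual API) measures statistical parity difference and disparate impact for a binary recommendation outcome:

```python
# Minimal sketch (toy data): the group-parity metrics that fairness toolkits
# such as AIF360 automate, computed by hand for a binary outcome
# (1 = user received the favorable recommendation).

def selection_rate(outcomes, group, group_value):
    """Fraction of positive outcomes within one demographic group."""
    members = [o for o, g in zip(outcomes, group) if g == group_value]
    return sum(members) / len(members)

def statistical_parity_difference(outcomes, group):
    """P(positive | unprivileged) - P(positive | privileged); 0 means parity."""
    return selection_rate(outcomes, group, 0) - selection_rate(outcomes, group, 1)

def disparate_impact(outcomes, group):
    """Ratio of the two rates; values below ~0.8 are commonly flagged as biased."""
    return selection_rate(outcomes, group, 0) / selection_rate(outcomes, group, 1)

outcomes = [1, 0, 0, 1, 1, 1, 1, 1]   # model decisions (toy)
group    = [0, 0, 0, 0, 1, 1, 1, 1]   # 0 = unprivileged, 1 = privileged (toy)

spd = statistical_parity_difference(outcomes, group)  # -0.5: skew against group 0
di = disparate_impact(outcomes, group)                # 0.5: below the 0.8 rule of thumb
```

In this toy setting, the unprivileged group receives the favorable recommendation half as often as the privileged one, which a pre-deployment audit should flag before any debiasing step.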
As mentioned in the previous section, NVC go beyond RS’ capabilities (see Fig. 1) and, hence, beyond their responsibilities. In particular, they could involve the users in debates to inform, educate, and persuade them, as well as learn from them via argumentation-based negotiations. In other words, the users and the NVC are expected to interact dynamically over the recommendation, enriching both parties’ knowledge and adherence. To do so, NVC have to leverage domain-specific (e.g., food) recommendation, informative and assistive, persuasive, and argumentation-based systems and techniques. Below follows a brief description of such cornerstones.
2.1 Informative and assistive systems
These systems are often conceived following the principles of multi-agent systems, i.e., intelligent, autonomous, collaborative/competitive virtual entities with bounded rationality and knowledge [21, 22]. Such agents are a virtual embodiment of the user, collecting their data and providing personalized interactions [23, 24]. Moreover, virtual agents can interact among themselves, asking for and providing services/data to each other.
The agents’ intelligence, autonomy (i.e., proactivity), knowledge, and overall behavior raise several ethical concerns when dealing with users’ sensitive data. Sanz [25] analyzed several models for constructing moral or ethics-aware agents (e.g., the Artificial Moral Agent (AMA), which is supported by the cognitivist moral theory). In particular, AMA includes the implementation of intelligence, autonomy (self-governance), self-reflection, and at least one practical identity (e.g., personality) [26]. However, AMA implies limitations, such as a lack of self-awareness. For example, on the one hand, an artificial identity can only conceive precoded values. On the other hand, an artificial identity can rewrite and reconstitute itself, becoming too volatile [25]. After all, virtual entities (even AMAs) perform symbolic and sub-symbolic data processing, which lacks the means to deal precisely with contextual information and cannot develop social awareness as a human would. Thus, ensuring moral behaviors purely from a “thinking machine” perspective is still a hot topic under discussion.
2.2 Recommender systems (RS)
Such tools are decision-support systems intended to deliver suitable suggestions of products or services to a user based on their profile and collected data [27, 28]. RS are increasingly used in domains including e-commerce [29,30,31,32,33], food and nutrition [3, 34,35,36], and e-health [37,38,39]. The nature and impact of recommendations can vary significantly and, therefore, have long-term influence on user choices. Recommendations can be non-personalized (not requiring any prior knowledge about any specific user [40]) or personalized (requiring a remarkable amount of knowledge about the targeted user). Non-personalized recommender systems employ techniques leveraging generic information, including items’ popularity, novelty, price, and distance, to sort the possible items of interest. Basket modeling and analysis are among the most prominent techniques in retail companies to identify complementary items (items that are usually bought together) [41, 42]. Personalized RS leverage users’ preferences, behavior, demographics, location, language, and other characteristic (personal) details to achieve a deep understanding of them [43, 44]. Such data are usually obtained by tracking the users [45, 46]. A popular mechanism to determine users’ preferences is the rating. Such user feedback can qualify an item explicitly or implicitly (if the preference is deduced from the user’s behavior) [47, 48]. For instance, when a user buys an item or puts it on the wish list, this behavior can be interpreted as a favorable inclination toward that item. As summarized in [49], the most adopted techniques include collaborative filtering (CF, leveraging users’ similarities and ratings [50]), content-based filtering (CBF, recommending similar items based on similar profiles’ previously liked items [51]), knowledge-based recommendation (KB, based on the user preferences and constraints [52]), and hybrid recommendation (HR, combining the techniques mentioned above [53]). In particular,
-
CF
exploits data collected from many users to identify tastes and similarities between items and users [28, 54, 55]. Based on such information, the expected ratings of unseen items are calculated [56, 57]. CF algorithms can be classified as memory-based (using neighborhood) [58], model-based (matrix factorization, tensor completion) [59, 60], hybrid CF (combining model-based and memory-based) [61], and deep learning-based [62, 63] approaches.
-
CBF
leverages items’ characteristic features to provide new recommendations, and it is suitable when the user is directly interested in them [64, 65].
-
KB
encodes the knowledge about a given domain, and it is suitable when the variability and personalization options are broad and require both domain and item-specific knowledge [28, 66].
-
HR
includes combinations of two or more approaches to produce more robust recommender systems. For example, the dependency on ratings entails disfavoring unrated items in a CF approach. Nevertheless, CF could be combined with a KB approach (if items contain attributes) [61, 67].
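A minimal sketch of the memory-based CF approach described above, assuming a toy user-item rating matrix (the users, items, and values are invented for illustration): the rating a user would give an unseen item is predicted as a similarity-weighted average of other users’ ratings for that item.

```python
# Minimal sketch (toy ratings): memory-based collaborative filtering.
# 0 marks "not rated"; similarity is the cosine over co-rated items.

from math import sqrt

def cosine(u, v):
    """Cosine similarity computed only over items both users have rated."""
    pairs = [(a, b) for a, b in zip(u, v) if a and b]
    if not pairs:
        return 0.0
    num = sum(a * b for a, b in pairs)
    den = sqrt(sum(a * a for a, _ in pairs)) * sqrt(sum(b * b for _, b in pairs))
    return num / den if den else 0.0

def predict(ratings, user, item):
    """Similarity-weighted average of neighbors' ratings for `item`."""
    num = den = 0.0
    for other, row in enumerate(ratings):
        if other == user or not row[item]:
            continue
        s = cosine(ratings[user], row)
        num += s * row[item]
        den += abs(s)
    return num / den if den else 0.0

# Rows = users, columns = recipes; predict user 0's rating for recipe 3.
ratings = [
    [5, 3, 4, 0],
    [5, 3, 4, 4],
    [1, 5, 2, 1],
]
score = predict(ratings, 0, 3)
```

User 1, whose tastes align perfectly with user 0 on co-rated recipes, pulls the prediction toward their rating of 4, while the dissimilar user 2 contributes less; this is the neighborhood intuition behind memory-based CF.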
2.3 Persuasion techniques and processes
These approaches are intended as “activities that involve one party trying to induce another party to believe something or to do something” [68]. Persuasion techniques have been predominantly used in healthcare, where behavioral change is remarkably beneficial for the individual across all the nuances of the domain. Indeed, healthcare systems have evolved toward patient-centered care over the past decades to improve medical indicators and quality of life in general. As a result, people have progressively become more autonomous in adopting healthy behaviors, mainly through active health education, ensuring appropriate follow-up of care, and monitoring by health professionals. The use-cases adopting these techniques include psychological support [69], elderly care [70], chronic diseases [71], wellness [72], healthy diet [49], smoking cessation [73], telerehabilitation [74], and weight activities [75]. Intelligent systems and Web-based applications are typically at the core of most of the proposed solutions in the literature. Indeed, the concomitant market growth of mobile applications, devices, sensors, and connected watches has fostered the development of online health and wellness applications [76].
The most implemented/associated persuasion theories are the Persuasive System Design (PSD) [77,78,79], Fogg’s behavioral models [79, 80], Social cognitive theory (SCT) [81, 82], and Self-determination theory (SDT) [83]. Persuasive technology should be designed as closely as possible to the needs and context of the users and, when possible, involve key people in a co-creation initiative. Indeed, persuading users to improve their physical activity would be different from persuading them to take medications or stop bad habits.
Unfortunately, this research area and market seem to be still in their early stages. Most scientific contributions present computational persuasion techniques at a conceptual level, with only a few prototypes operating on a large scale and little concrete evidence of the large-scale applicability of these technologies. The most plausible explanations are that medical applications have more stringent requirements (for both procedures and devices) and that compliance with ethical principles is burdensome to prove and enforce [84]. This entails that there is still a long way to go before persuasion technologies can provide a real benefit by changing user behavior and improving user health.
2.4 Argumentation techniques
Such methodologies are reasoning and logic-based approaches aiming to draw conclusions from “conflictual” information [85]. The reasoning can occur in dynamic and uncertain environments (e.g., possibly inconsistent information and time/resource restrictions). Among the most relevant theoretical frameworks, it is worth mentioning non-monotonic logic, which was developed to deal with changing circumstances [86]. In particular, it allows managing inconsistency and conflicting arguments, invalidating previous theorems and conclusions as new evidence requires it [86, 87]. Based on non-monotonic logic theory, several reasoning frameworks were developed [88, 89]. One of those reasoning systems derived from non-monotonic logic is defeasible logic, a reasoning framework that enables updates and retraction of inferences [90, 91]. Defeasible logic is widely used in argumentative systems due to its flexibility and ability to reach conclusions from conflictive information [92]. In addition to defeasible logic, probabilistic reasoning [93], causal reasoning [94], and fuzzy logic [95] are commonly used to infer conclusions from uncertain information and incomplete knowledge. For example, probabilistic reasoning is frequently used to find causal relationships between random variables [96], causal reasoning is useful for explaining complex behaviors and generating arguments, and fuzzy logic is used in situations characterized by various degrees of truth. Another example is Dung’s Abstract Argumentation framework (AA) [97]. Here, the arguments are connected via attack relationships, and the conflicts are resolved by finding an acceptable argument, which is strongly supported by other accepted arguments [97,98,99,100]. Further studies have extended such a framework by introducing preferences to weight the arguments [101,102,103,104,105] and argument rankings [106,107,108], and by generalizing the existing approaches into the Assumption-Based Argumentation framework (ABA) [109].
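As an illustration of Dung-style abstract argumentation, the grounded extension (the most skeptical set of collectively acceptable arguments) can be computed as the least fixed point of the characteristic function; the toy arguments below are invented for illustration.

```python
# Minimal sketch (toy arguments): the grounded extension of an abstract
# argumentation framework, obtained by iterating the characteristic function
# F(S) = {a : every attacker of a is attacked by some member of S}
# from the empty set until it stabilizes.

def grounded_extension(arguments, attacks):
    """`attacks` is a set of (attacker, target) pairs."""
    def defended(a, s):
        attackers = {x for x, y in attacks if y == a}
        return all(any((d, b) in attacks for d in s) for b in attackers)

    s = set()
    while True:
        nxt = {a for a in arguments if defended(a, s)}
        if nxt == s:
            return s
        s = nxt

# "a" attacks "b", and "b" attacks "c": "a" is unattacked, it defeats "b",
# which in turn reinstates "c".
ext = grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")})
```

The result, {"a", "c"}, shows reinstatement: "c" is acceptable only because its sole attacker "b" is itself defeated, which is the kind of conflict resolution an NVC can use when user and coach arguments clash.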
The approaches mentioned above are widely employed by studies dealing with complex and dynamic environments. In such studies, distributed virtual entities (namely, agents) leverage them to solve conflicting situations with other virtual agents or with human users. Multi-agent systems (MAS) are composed of several intelligent and autonomous entities interacting in a shared environment and mimicking social interactions [110]. Each agent has its own knowledge, beliefs, and a set of behaviors to actuate its intentions. This can generate conflicts and divergences, which are solved via argumentation techniques. However, argumentation alone might not be enough to reach consensus and cooperation. Hence, argumentation has been “included” within a negotiation process, where conflicting agents exchange proposals, arguments, and knowledge. This process is known as argument-based negotiation (ABN). It is characterized by a reasoning mechanism, a negotiation protocol, and a strategy [111,112,113,114,115].
2.5 Virtual coaches as nudges
As outlined in Fig. 1, besides the pure domain-specific features, the areas composing a virtual coach present clear overlaps. For example, food recommender systems, informative systems, and persuasive technologies’ features can blend into health educational nudges. According to Sunstein and Thaler [116], a nudge is an intentional modification of “any aspect of the choice architecture that alters people’s behavior in a predictable way without forbidding any options or significantly changing their economic incentives”.
Classic examples of health nudges are placing vegetables at eye level to increase consumption in a canteen or, for example, highlighting important information about a risky product in red to discourage its use. Such interference effectively modifies the environment of the user’s choice without forcing the individual to do anything and without modifying their options. Therefore, why consider NVC as health nudges? First, they promote goals, such as health and the environment, not only because of their intrinsic value but because they are the users’ goals. Indeed, if the user chooses to use an application that is transparently devoted to improving their health and ecological behavior, we can legitimately assume that they regard health and ecological behavior as personal goals. Second, NVC seek to influence the user’s “choice architecture” in a predictable way in favor of these goals without ever constraining them.
However, it is worth noticing at the outset that NVC belong to a specific subset of nudges, namely, educational or informational nudges. If NVC do indeed seek to promote the consumption of healthy and sustainable products, they do not intend to use primarily an unconscious influence (often referred to in studies on e-coaching as the Automatic Decision System) but seek to convince through information and argumentation made possible by a virtual assistant (often referred to as the Deliberative System).
3 Methodology for ethical and legal challenges elicitation
The context and focus of the paper revolve around nutritional virtual coaches. Being a heavily multi-disciplinary subject, we have decomposed the NVC and provided the most important underlying notions of its components. In turn, we tackled the elicitation of ethical challenges and mitigations by employing an ethical framework inspired by the recommendations of the European High-Level Expert Group on AI (AI HLEG, 2019) and the Ethix laboratory (Footnote 1). In particular, it requires systematically addressing four questions:
-
what do we want to produce with our innovation?
-
who are the actors involved, and what are their respective interests?
-
what are the ethical risk areas (i.e., the interests of the different actors) that could be threatened?
-
for which of these risks are we responsible or not responsible?
Elaborating on the initial findings generated by such questions, we further investigated the growing literature on the ethics of recommender systems and, more specifically, of food recommender systems. This is notably due to the ongoing discussion about the societal, legal, and ethical consequences of recommender systems, including polarization, filter bubbles, and echo chambers. In this context, the works of Milano et al. [5], Karpati et al. [6], and Kampik et al. [117] were instrumental in defining the ethical challenges (ECs) related to recommender systems. In particular, we extracted and extended the ECs identified in these studies, scaling them to the full NVC picture w.r.t. food, nutrition, opacity, and user data. Given the scarcity of information on the matter, we had to adopt a different approach to address the challenges concerning informative and assistive systems. We leveraged the AI HLEG questions to assess the literature focusing on care applications for older adults, children, and other sensitive populations, where ethical challenges are most critical [118, 119]. Moreover, to identify ECs (notably, those related to informative and assistive systems also relate to persuasive technologies), we relied on interviews with users (i.e., professional nutritionists and individuals interested in NVC) reporting the determinants of the trustworthiness of the system in question. Furthermore, conforming to the latest EU guidelines about trustworthy and explainable AI, we focused on ECs related to transparency and explainability. Finally, elaborating on argumentative systems proved to be particularly challenging. Although automated argumentation is a well-established research field, notably in the domain of multi-agent argumentation, very few works attempt to identify and discuss ethical concerns or challenges associated with argumentation-based systems.
To cope with this problem, we resorted to guidelines, recommendations, and ethical challenges identified in the domain of human argumentation and scaled them to human–machine–machine–human scenarios (e.g., argumental integrity, fairness, etc.) [120]. Once all the challenges have been elicited, populating the overall vision of the ECs in NVC, a bottom-up approach has been employed to propose the envisioned mitigation strategies.
Concerning the elicitation and formalization of the legal challenges, the conducted analysis is based on in-depth knowledge and precise examination of the technical–functional features of the NVC. These elements constituted the indispensable basis for structuring legal reflections on the implications and effects of using these systems for ordinary users. Starting from a detailed study of the technological state of the art, law experts have tried to identify profiles of possible or overt criticalities, applying their knowledge in terms of ethical principles and legal categories. In particular:
-
ethical and legal concepts were carefully (re)constructed w.r.t. specific human–NVC interaction contexts;
-
possible concerns were identified, either by analogy or contrast with scenarios already addressed in literature, doctrine, and/or case law;
-
possible challenges have been highlighted, starting with problems that ethical theories and legal instruments seem not yet able to address/to address efficiently, with regard to the classes of systems here examined;
-
mitigation strategies have been proposed, trying to anticipate the needs and spheres of fragility to which the individuals involved in the interaction may be exposed and addressing them with the approach and resources proper of both disciplines.
Although the analysis of ethical and legal implications has been organized in two distinct sections (to respect the specificities and the theoretical and applicative potential of each of the two disciplines), the complementarities have not been overlooked. Indeed, Fig. 2 displays a schematization of the envisioned ethical-legal liaisons among the challenges.
4 Ethical challenges and mitigation strategies in NVC
The systemic evolution from RS to NVC entails a distinct set of challenges that demands pressing attention. Thus, to pave the way for ethics-aware personalized food e-coaching systems, we have analyzed and extended previous studies on food RS, such as Milano et al. [5], Kampik et al. [117], the very recent food recommender system handbook [118], and [3], whose focus is on the health aspects of food recommender systems. As a result, two sets of ethical and legal challenges (EC–LC) are organized below per subsystem (NVC components—see Fig. 1). The previous section elaborated on the methodology adopted to elicit such sets of challenges.
4.1 Personalized food recommender system
-
EC1.1
To circumvent inappropriate recommendations: suggestions that could endanger the users’ health or cause moral damage to their fundamental beliefs and values must be avoided. A possible first mitigation strategy could be to cross-check the recommendations with (semi)official sources. For example, in the case of NVC, the Nutri-Score could be used as a reference [121].
-
EC1.2
To ensure privacy: the generation of personalized food recommendations entails access to personal and sensitive user information. Collaborative filtering, one of the most widely used approaches in recommender systems [2], has been shown to be vulnerable to data leakage in the inference phase [122, 123]. Recommender systems involve an inherent trade-off between the accuracy of recommendations and the extent to which users are willing to release information about their preferences. In the literature, this trade-off has been tackled by relying on a layered notion of privacy for corresponding user groups [124]. We envision further investigation in this direction.
-
EC1.3
To safeguard autonomy and personal identity: RS could affect the user’s autonomy and personal identity by (i) intentionally limiting their freedom of choice with biased recommendations and a reduced set of options and (ii) manipulating the user’s community to create a filter bubble and hide/ignore their personal identity. This would lead to echo chambers, filter bubbles, and cyber-balkanization [117]. It is necessary to strike a balance between exploitation (i.e., providing the user with recommendations derived from their personal preferences) and exploration (i.e., providing the user with unforeseen content) [125]. Moreover, techniques inspired by social choice architectures, where a recommendation has to comply with predefined opinion/product distributions [117], can be further extended.
-
EC1.4
To reduce the RS opacity: modern RS engines leverage conventional ML/DL predictors (currently black boxes) and provide no transparency on the recommendation production process. Such a lack induces mistrust and a lack of accountability. A reliable solution would be embedding explainable predictors into conventional RS. Such a path has been recently undertaken in the coaching and recommender system communities [126].
-
EC1.5
To overcome the absence of fairness: skewed data sets, biased stakeholders, and inappropriate recommendations are prone to generate unfair recommendations (see the concerns raised about the fairness of the Yuka app [127] toward Italian producers). The opacity of the system makes it harder to detect such biases and unfair outcomes. Thus, to overcome such a challenge, a key measure would be to adopt techniques for debiasing [17].
-
EC1.6
To deflect social pressure: since the early days of recommender systems [128], polarization and cyber-balkanization have been identified as among the most dangerous side-effects of using these systems [129, 130]. This problem has been accentuated in the recent decade by the widespread use of social networks as a source of news and information, which has led to the formation of filter bubbles [131] and echo chambers [132] and increased already existing (i.e., market, societal, and political) polarization. Proposed solutions include a better understanding of user experiences [130] and devising new algorithms aimed at reinforcing the center of the political spectrum, as well as aiming for technology-facilitated societal consensus [133].
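As a sketch of the cross-checking mitigation proposed for EC1.1, the snippet below filters candidate recommendations against hypothetical Nutri-Score grades (the item names, scores, and threshold policy are illustrative assumptions; a real system would query an official data source):

```python
# Minimal sketch (hypothetical scores and threshold): cross-checking candidate
# recommendations against a (semi)official reference such as the Nutri-Score
# before they reach the user. Items with no verifiable grade are dropped.

NUTRI_ORDER = "ABCDE"  # A = healthiest, E = least healthy

def cross_check(candidates, nutri_scores, worst_allowed="C"):
    """Keep only items whose Nutri-Score grade is at or above the threshold."""
    limit = NUTRI_ORDER.index(worst_allowed)
    return [item for item in candidates
            if item in nutri_scores
            and NUTRI_ORDER.index(nutri_scores[item]) <= limit]

# Hypothetical lookup table standing in for an official source.
scores = {"oatmeal": "A", "soda": "E", "granola_bar": "C"}
safe = cross_check(["oatmeal", "soda", "granola_bar", "mystery_snack"], scores)
```

Note that the unverifiable "mystery_snack" is rejected along with the low-graded "soda": a conservative policy that prefers withholding a recommendation over endangering the user’s health.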
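The exploitation/exploration balance suggested for EC1.3 can be sketched with a simple epsilon-greedy mixer (the item names and the 0.3 exploration rate are illustrative assumptions, not values taken from the cited literature):

```python
# Minimal sketch (hypothetical item pool): an epsilon-greedy balance between
# exploitation (items matching the user's profile) and exploration (unforeseen
# content), a simple guard against filter bubbles.

import random

def recommend(profile_ranked, catalog, k=3, epsilon=0.3, rng=None):
    """With probability epsilon per slot, draw a random unseen item
    instead of the next-best profile match."""
    rng = rng or random.Random()
    picks = []
    pool = [i for i in catalog if i not in profile_ranked]  # unforeseen items
    best = iter(profile_ranked)
    for _ in range(k):
        if pool and rng.random() < epsilon:
            picks.append(pool.pop(rng.randrange(len(pool))))  # explore
        else:
            picks.append(next(best))                          # exploit
    return picks

ranked = ["pasta", "pizza", "risotto", "lasagna"]    # profile-based order (toy)
catalog = ranked + ["poke_bowl", "falafel", "ramen"]  # full item pool (toy)
recs = recommend(ranked, catalog, rng=random.Random(0))
```

Raising epsilon widens the user’s exposure to unforeseen content at the cost of short-term accuracy, which is exactly the trade-off EC1.3 asks designers to make explicit.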
4.2 Argumentative systems
Although automated and multi-agent argumentation are well-established [134] and used in socially implicated domains (e.g., eDemocracy [135]), only a few works address the entailed ethical challenges. Thus, we rely on the guidelines and recommendations defined for human argumentation. In particular, Schreier et al. [136] introduced the concept of argumental integrity, derived from the strong relationship between argumentation and fairness. To achieve an integral argumentation in NVC, the challenges are:
-
EC2.1
To attain formal validity: arguments must satisfy rational criteria that guarantee the transition from premise to conclusion.
To do so, before injecting an argument into the reasoning process, its premises must be proven with a rule of inference [137, p. 6]. If the rule of inference uses conclusions derived from another argument, this process must first be applied to that upper argument.
-
EC2.2
To leverage sole sincerity/truth: the arguing participants must be sincere (i.e., only express opinions and argue in favor of “facts” honestly and transparently considered correct).
The authors in [138] propose adopting the FIPA ACL protocol, which provides full transparency of the rules. Hence, all the participants are able to assess the arguments of the other participants.
-
EC2.3
To ensure content justice: the arguments selected by a party must be both morally and legally just toward other participants. To ensure that, it is possible to define a list of forbidden premises. This list should be based on some moral or legal rules. An argument conclusion cannot be derived with premises from this list.
-
EC2.4
To enact fair and just procedures: the argument generation and exchange procedure must allow equal capabilities/opportunities to all the participants to contribute toward a solution according to their individual (relevant and justifiable) beliefs.
To do so, it is recommended to use a dialogue game protocol enabling individual rationality and respecting fairness (i.e., the rules treat the participants equally). Individual rationality means that agents cannot advance arguments that are counterproductive to them. One example of protocol fitting with this prerequisite (besides FIPA ACL) is the inquiry dialogue [139]. It allows both the individual purpose and equal treatment of the participants by the rules.
-
EC2.5
To ensure compliance-verification convergence: the evaluation of an argument can extend to assessing the source, selection mechanism, etc.
To avoid diverging investigations, we envision (i) employing the concept of source/speaker reputation (a well-established practice in the MAS domain [140]) and (ii) belief checking, verifying the "honesty" of an agent's mistake and enabling its "correction" via feedback from other agents or human users/experts (assuming their unbiased honesty).
-
EC2.6
To simplify or aggregate arguments: in some cases/applications, the time available for an agent to reply is limited—hence, argument generation, exchange, and assessment can be remarkably affected. Operating in limited time may entail simplifying arguments and increasing the risk of unethical outputs.
Fan and Toni [141] propose a method to select an extension of minimal size to provide the most concise explanation. This method can be used to compute the minimal set of arguments allowing the defense/attack of the decision argument, thus focusing on the most important arguments and saving time.
-
EC2.7
To produce multi-modal arguments: depending on the context and the receiver's knowledge, the same argument might need to be communicated over multiple channels (e.g., audio, visual, or textual). The argument production process might differ according to the selected channel, and this may cause inconsistencies, divergences, and non-conformities. The proposed FIPA ACL protocol is at best semi-decidable.
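The premise filtering of EC2.3 and the cautious argument acceptance of EC2.1 can be sketched as a minimal Dung-style argumentation framework. The `Argument` structure, the toy premises, and the example arguments below are illustrative assumptions, not the protocol of any specific NVC:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Argument:
    name: str
    premises: frozenset  # premise labels (hypothetical)
    conclusion: str

def filter_forbidden(arguments, forbidden):
    """EC2.3: discard any argument whose conclusion would rest on a
    morally/legally forbidden premise."""
    return [a for a in arguments if not a.premises & forbidden]

def grounded_extension(arguments, attacks):
    """Grounded semantics: accept arguments whose every attacker is
    already defeated; defeat arguments attacked by an accepted one."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in defeated:
                continue
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= defeated:      # all attackers defeated
                accepted.add(a)
                changed = True
            elif attackers & accepted:     # attacked by an accepted argument
                defeated.add(a)
                changed = True
    return accepted

# Toy framework: B attacks A, C attacks B; C relies on a forbidden premise.
A = Argument("A", frozenset({"user_likes_salty_food"}), "suggest_salty_snack")
B = Argument("B", frozenset({"hypertension_risk"}), "avoid_salty_snack")
C = Argument("C", frozenset({"ethnicity"}), "discount_hypertension_risk")

args = filter_forbidden([A, B, C], forbidden={"ethnicity"})  # C is dropped
attacks = {(x, y) for (x, y) in [(B, A), (C, B)] if x in args and y in args}
accepted = grounded_extension(args, attacks)  # B survives unattacked, A is defeated
```

Here the forbidden-premise list plays the role of the moral/legal filter of EC2.3, while the grounded extension implements a cautious "accept only what is defensible" stance in the spirit of EC2.1.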
4.3 Informative and assistive systems
Intelligent assistive technology (IAT) is the umbrella term for assistive solutions boosted by recent breakthroughs in AI, social robotics, ambient intelligence, and wearables. Such solutions primarily target care for the elderly and individuals with special needs [23, 119]. However, the advancement of IAT has raised several ethical challenges. Notably, applications intended for vulnerable individuals rely heavily on data collection and operate in close human proximity. The ethical challenges involving IAT can be formalized as follows:
-
EC3.1
To facilitate technology access and IAT rightful behaviors: IAT are approaching individuals with special needs and cognitive impairments (e.g., dementia [142, 143]) to drive their decision-making process. Therefore:
-
Enabling dynamic consent management and empowering the user over it is a challenge already tackled in clinical trials [144]. However, assistive systems still lag behind in such aspects. A first necessary step would be to enable the user to revoke their consent and to verify their correct understanding of the system's functionality, scope, and use of their data. Moreover, the terms of service are rarely actually read/understood by the users. It is indeed questionable whether reading and accepting these terms would truly qualify as informed consent [145]. This problem is exacerbated in the case of adults with dementia or Alzheimer's disease. User awareness should be the key to accessing IAT and should, somehow, be measured/assessed.
-
IAT should never substitute medical personnel; rather, they are supposed to work alongside them. Although critical decisions are not left in the IAT's hands, several sensitive tasks can be automated and, especially those relying on AI-based learning mechanisms, operate unsupervised and evolve in unexpected/undesired directions. Hence, after some time, the system or its functionalities might no longer be the same as those for which consent was originally given [119]. A needed intervention is to develop mechanisms ensuring that the user's data are only used as "originally" intended or that the user is informed about possible system shifts. Moreover, it should be ensured that additional sensitive data possibly provided by the user but not required by the IAT would not be processed nor trigger the attention of specialists and system administrators.
-
-
EC3.2
To ensure the system identity: untruthful information is not the sole source of deception. Indeed, sensitive or cognitively impaired users can be deceived by the unclear nature of virtual assistants (agents), being unable to "treat" them as non-humans. This risk is exacerbated in the case of robotics (both humanoid and zoomorphic), since their shapes are inherently deceptive (e.g., the user might tend to treat the robot as a pet [119, 146]). In particular, in the case of individuals with cognitive difficulties or disabilities, such effects cannot be completely removed [9]. Therefore, to mitigate the harmful consequences of such dynamics, it is advisable to ensure that these AI systems are always used as tools and not as alternatives to human care and assistance. In other words, this means, for instance, making sure that AI systems are used under the supervision or in the presence of trained personnel and that only auxiliary roles are delegated to them. Furthermore, crucial aspects of the psycho-physical health of frail people involved in the interaction should not rely on them alone.
-
EC3.3
To ensure medical data confidentiality: IAT might be required to store, process, and selectively share even medical data [120]. Therefore, besides the well-known privacy concern, the confidentiality challenge assumes a broader spectrum [147]. Although many pieces of information acquired by IAT are not considered strictly medical under the regulations in effect (e.g., swiping behavior on mobile devices, wearables, and surveillance systems data), they can be used to infer health status and behaviors. To overcome the ambiguities due to gray areas in the management of user data for assistive and clinical purposes, this possibility should be disclosed to the user when the data are required and collected. Moreover, it should be ensured that the actual use of such sensitive data is traceable, that user health data are not used for profiling for ends other than strictly care, and that they are not shared with third parties.
-
EC3.4
To make the solutions affordable: some IAT come at a cost that not every user can afford (e.g., the cost of social robots is relatively high and prohibitive). Thus, affordability is a key ethical concern, since it can unfairly determine who can access and benefit from a given service [147]. Ensuring the parallel development of web/mobile applications mimicking or somehow replicating the behavior of robot-based systems could mitigate such a phenomenon. Although the benefits of interacting with an anthropomorphic robot might be lacking, such applications can still be service enablers. For example, a user can start their journey of increasing nutrition knowledge by profiting from virtual assistants, which are remarkably more affordable (if not free) than human nutrition counseling. Nevertheless, it is worth highlighting that virtual assistants can help achieve given goals but cannot entirely replace medical professionals. For example, if a user wants to perform better in their daily life and eat more sustainably but cannot afford nutrition counseling, they could already take advantage of several mobile applications. In this case, the app is a good option to increase their nutrition knowledge and could contribute to the early promotion of new healthy habits in their daily life.
-
EC3.5
To ensure safety boundaries: IAT are intended to assist, not replace, medical doctors or care providers. To this end, the investigation of mechanisms that certify or monitor the boundaries, safety, and pertinence of IAT services and behaviors must be a priority.
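The dynamic consent management of EC3.1 and the data-use traceability of EC3.3 can be sketched as a small, purpose-scoped consent registry. The class name, the example purposes, and the audit-trail design are illustrative assumptions, not a prescribed architecture:

```python
from datetime import datetime, timezone

class ConsentRegistry:
    """Minimal sketch: every consent grant is scoped to a single purpose,
    can be revoked at any time, and every processing decision is logged
    so that the actual use of sensitive data remains traceable."""

    def __init__(self):
        self._grants = {}   # (user, purpose) -> grant timestamp, or None if revoked
        self.audit_log = [] # (user, purpose, allowed) decision trace

    def grant(self, user, purpose):
        self._grants[(user, purpose)] = datetime.now(timezone.utc)

    def revoke(self, user, purpose):
        self._grants[(user, purpose)] = None  # tombstone, kept for the record

    def may_process(self, user, purpose):
        """Processing is refused unless an unrevoked grant covers this purpose."""
        allowed = self._grants.get((user, purpose)) is not None
        self.audit_log.append((user, purpose, allowed))
        return allowed

# Illustrative use: consent never transfers across purposes (purpose limitation).
reg = ConsentRegistry()
reg.grant("alice", "meal_recommendation")
ok_recommend = reg.may_process("alice", "meal_recommendation")  # allowed
ok_marketing = reg.may_process("alice", "third_party_sharing")  # never granted
reg.revoke("alice", "meal_recommendation")
ok_after = reg.may_process("alice", "meal_recommendation")      # revoked
```

Revocation keeps a tombstone entry rather than deleting the record, so the audit trail can show both that consent once existed and when it ceased to cover processing.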
4.4 Persuasive technologies and processes
Since persuasive technologies explicitly intend to influence individuals, their adoption demands careful consideration. In particular, which interests are served (the designers' or the users') and the validity, honesty, and production of the actual content must come under the magnifying lens.
-
EC4.1
To provide transparency: similar to other NVC sub-systems, several user studies have confirmed that users do not easily identify persuasion in action, while they correctly recognize its absence [9, 148]. Awareness is key: it puts the user in a completely different mindset when interacting with the system. Unfortunately, to date, many systems adopting such approaches do not disclose to the user the underlying technique, goals, and interests. To overcome such a lack of transparency, mechanisms clearly notifying/warning the user about their exposure to persuasion strategies must be put in place from the very first interactions with the given system.
-
EC4.2
To state the goals clearly: several interviews have reported that users demand to trust the persuader and the intentions of a given system [148, 149]. Nevertheless, too often, the persuasion goals are vague or unclear. A user approaching a persuasive system should be promptly informed of its goals, formulated in a thorough, clear, and understandable manner. As a first approach, current research suggests including an "information leaflet" advising the user about the method and the goals of the persuasive system [150].
-
EC4.3
To prevent unintended behavior change: the designer of the persuasion strategy should take responsibility for unintended, unforeseen, and unpredicted outcomes. However, new persuasive approaches might include autonomous AI-based strategies, actions, and contents. Therefore, a first step would be to investigate explainable and debuggable mechanisms [12], followed by the investigation of new ways to identify the liability of intelligent autonomous systems.
Table 1 summarizes the challenges discussed above.
4.5 Cross-cutting ethical challenges in NVC
Concerning personalized food recommender systems (EC1), as mentioned by the European Commission's High-Level Expert Group on AI (AI HLEG), information ethics is essentially based on four principles: explicability, respect for autonomy, non-harmfulness, and fairness [151]. However, it is essential to highlight that respect for "user autonomy" is a prerequisite challenge and a cross-cutting principle. This is because free user consent relies on an effective explicability strategy (how can one consent to what one only half understands?). Indeed, there is an empirical link between justice and respect for autonomy: the less the people affected by a decision or technology (i.e., RS) consent freely and understand what is at stake, the less their aspirations and needs are generally taken into account. Such a concept clearly applies to food recommendation systems. Indeed, the desire to make recommendation mechanisms less opaque (EC1.4), to ensure the compatibility of recommendations with the user's values (EC1.3), or to protect the user from peer pressure (EC1.6) only makes sense to ensure user control. Similarly, in Western democracies, the notion of privacy (EC1.2) that emerged at the end of the 19th century was not an afterthought but a condition for personal autonomy [151]. Therefore, coping with the aforementioned challenges means promoting user autonomy, understood as informed consent. As mentioned above and summarized in the third column of Table 1, there are already a number of risk mitigation strategies explored in the existing literature for each of these challenges. Although detailing them goes beyond the scope of this article, Sect. 5.6 suggests some improvements of informed consent from the legal and ethical perspectives.
Concerning argumentative systems (EC2), argumental integrity grounds the identified challenges. In particular, for an argument to have integrity, it should be rational and transition reasonably from premise to conclusion (EC2.1). At the same time, it should be sincere, i.e., participants need to express opinions and argue in favor of honestly and transparently held "facts" (EC2.2). Moreover, the arguments satisfying EC2.1 and EC2.2 must undergo EC2.3: guarantee integrity and be morally and ethically just toward the participants. Therefore, a set of new mechanisms and protocols should be developed to ensure these three properties (i.e., rationality—EC2.1, truth—EC2.2, and content justice—EC2.3). Such protocol(s) would verify whether an argument reaches its conclusions reasonably, ensure full transparency of the rules and allow participants to assess the arguments of their peers, as well as forbid the use of unethical or illegal premises. In addition, the same protocol(s) should enforce the fair treatment of participants and provide them with equal opportunities and capabilities (EC2.4)—features that should be verifiable (EC2.5). Finally, to facilitate their understandability and improve their presentability to the users, arguments could be simplified (EC2.6) and communicated using multi-modal channels (EC2.7)—yet ensuring that no bias or error-in-translation is introduced.
Concerning intelligent assistive technology (EC3), the ethical challenges typically relate to its access and rightful behavior (EC3.1), ensuring the system identity (EC3.2), the protection of user data (EC3.3), affordability (EC3.4), and the fact that it does not replace human personnel (EC3.5). In this context, transparency and explainability play a crucial role in helping tackle the other challenges. In particular, explainability allows for inspection and verification of rightful behaviors (EC3.1), conveys a clear definition of the system identity (EC3.2), ensures transparent processing and storage of user data (EC3.3), and helps to define the boundaries of the system clearly (EC3.5). Finally, the affordability of IAT systems (EC3.4) should be dealt with separately, since it is mainly a technological-, market-, and national health-related challenge.
Concerning persuasive technologies (EC4), they explicitly intend to influence individuals. For this reason, transparency is a challenge of chief importance. More specifically, transparency enables users to identify when they are subject to a persuasion strategy and helps them understand its consequences (EC4.1). Moreover, transparency forces the system to state its objectives clearly—thereby avoiding ambiguities—and guarantees that users are notified about its goals and informed about their progress and divergence (EC4.2). Finally, with explainable, debuggable, and transparent mechanism(s), unintended outcomes could be identified and rectified (EC4.3), at least to a partial extent.
Abstracting the analysis across the four characterizing domains, some questions and cross-considerations arise. For example, "is it enough to respect autonomy understood as informed consent?"—Not quite. The reader should note that the overall challenge of autonomy is not limited to the specific challenges linked to informed consent. For example, EC1.3 is not mainly related to informed consent, nor are challenges EC4.1-2-3. The point of keeping the user informed about the running persuasive strategy and its purpose/goal(s) and preventing unintended/inadmissible behavioral changes is not only to meet the minimum standards of autonomy as informed consent, but extends to empowering the user over the habits they want to change. Hence, we can consider that EC1.3 and EC4.1-2-3 require a global and more ambitious approach—i.e., a genuine empowerment strategy that helps users clarify rules from their standpoint and supports their behavioral adherence. In turn, "what could a more ambitious and comprehensive empowerment strategy look like?"—we could invite the user not only to pre-define options but to explicitly state (e.g., via a checklist) what they expect from an FRS or NVC. For example, it might be useful to compel the user to answer a few questions, such as "do you expect to lose weight?", "do you expect to eat more locally?", etc. Above all, it is important to understand the priority of their goals. Certainly, the user's reactions can already give an idea of their priorities. EC1.3 mentions predefining the possible options. However, previous experiences (confirmed by scientific research in behavioral studies [152]) show that "wishing for something" and "acting on what is wished" can be remarkably different. Therefore, it is not enough to recommend to the user what they usually like and take it as a good recommendation. Instead, users must be prompted to question and break down their objectives and, only then, be engaged with an empowerment strategy.
If they are persistently dissatisfied with the recommendations, we must engage them in motivation assessment or objective revision. By doing so, users can see whether their behavior is changing or has changed in an (un)wanted direction (EC4.3) and renew or clarify their acceptance of a given persuasion strategy (EC4.1) and the related goals.
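The checklist-based empowerment strategy outlined above (explicit goals, user-assigned priorities, and a revision trigger on persistent dissatisfaction) can be sketched in code. The `Goal`/`EmpowermentProfile` names, the example questions, and the three-strike threshold are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    question: str   # e.g., "Do you expect to lose weight?"
    wanted: bool    # the user's answer to the checklist item
    priority: int   # 1 = most important (assigned by the user)

@dataclass
class EmpowermentProfile:
    goals: list = field(default_factory=list)
    dissatisfied_streak: int = 0  # consecutive rejected recommendations

    def active_goals(self):
        """Goals the user opted into, most important first."""
        return sorted((g for g in self.goals if g.wanted), key=lambda g: g.priority)

    def record_feedback(self, satisfied):
        self.dissatisfied_streak = 0 if satisfied else self.dissatisfied_streak + 1

    def needs_revision(self, threshold=3):
        """EC4.3-style trigger: persistent dissatisfaction prompts a
        motivation assessment / objective revision, not more of the same."""
        return self.dissatisfied_streak >= threshold

profile = EmpowermentProfile(goals=[
    Goal("Do you expect to lose weight?", True, 2),
    Goal("Do you expect to eat more locally?", True, 1),
    Goal("Do you expect to cut costs?", False, 3),
])
top_goal = profile.active_goals()[0].question  # highest-priority wanted goal
for ok in (False, False, False):               # three rejected recommendations
    profile.record_feedback(ok)
revise = profile.needs_revision()              # time to re-engage the user
```

The design choice here is that dissatisfaction does not silently re-rank recommendations: once the streak crosses the threshold, the system hands control back to the user for goal revision, as the empowerment argument above requires.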
Overall, the challenges presented above are complementary and, to a certain extent, share underlying points. For instance, transparency and explainability are present in EC1, EC2, EC3, and EC4. Moreover, while recommender systems can be seen as the bedrock of the NVC, informative and assistive technologies (IAT) lay out the application background and the case study, while persuasive and argumentation technologies define the methods used (or not to be used) to persuade the user to undergo the desired change. Furthermore, user data management and data protection are also transversal challenges shared by EC1, EC2, EC3, and EC4. In particular, in EC1, recommender systems require access to user data to propose the best recommendations. To produce the best arguments (EC2), the system should also access data related to the moral, ethical, and legal principles of the user and other participants. To deliver personalized care, user data is essential for the IAT (EC3), and finally, persuasion (EC4) cannot be effective without familiarity with the user's preferences and concerns.
5 From ethical to legal concerns in nutrition virtual coaching
To date, the debate on new technologies is primarily formulated in terms of ethical guidelines rather than legal dispositions [153]. The main reason for this choice has been the need to foster innovation, avoiding as much as possible any form of constraint typically embodied—in the common imaginary—by the law. Ethics, on the contrary, is considered a flexible and soft regulatory tool. However, this sharp division between the two fields of analysis is due to a fundamental misconception and a misinterpretation of the profound nature of both disciplines. The result is that, by conceiving this tool as an alternative to regulation, we encourage an ethics-washing that makes "ethics, rights, and technology suffer" [154].
Overall, the abstraction of general principles does not allow for a unique and universal agreement on given situations [155]. Specific investigations, necessary for the maintenance of society, are both time- and culture-sensitive in terms of scope, content, and countermeasures—which are not explicit in the general principles. For example, coordination cannot be defined or regulated by leveraging principles alone, and the application of general principles to very particular cases might require "prudence" to defend, enforce, and ensure human rights and general norms [156].
Therefore, the specification of ethical values and general principles can lead to different perceptions and standpoints even within the same cultures/contexts (e.g., the divided and inconsistent European proposals [157]). Indeed, many of the initiatives that have arisen around the concept of AI systems and their concrete uses share common values. However, all of these—including fairness, transparency, explainability, responsibility, etc.—can be interpreted differently depending on the ideology, culture, and country taken into account [158].
Moreover, we should note that, due to the heterogeneous nature of the actors in the ethical debate around AI (i.e., academic research [159], corporations and organizations [160], civil society [161]), it is not possible to detect homogeneity in the methodological approach, nor an authority ensuring the concrete application of ethics. Indeed, adherence to ethics codes is voluntary, and there are no mechanisms able to enforce compliance among the members of society [162].
Therefore, the fact that many ethical and legal principles are related should not lead to the conclusion that they mirror the same type of rules or that they are interchangeable. Both instruments are necessary for the debate, even if neither is sufficient without the other.
The law is very often misunderstood in its meaning, functioning, and methodology. The most common understanding of legal norms is the one that connects them to prohibitions, rigid dispositions, and limitations [163]. In other words, they very commonly carry a negative connotation. On the contrary, the law is a structured body of positive principles and rules through which society is organized, and it can be considered effective when it proves to be adaptable and to respond efficiently to the specific reality (or component of reality) that it is deemed to regulate [164].
Positive norms are intended to be mandatory and binding for any individual (or agent): being fair, for example, is more than just an option. Nevertheless, sanctions occur only if the second type of norms is violated. Moreover, positive norms constitute progress toward clearer and more uniform standards. Positive norms can be uniform at different levels, for they are the result of an agreement among States—as is the case of international law—or of an agreement among political forces at the end of a legislative process—as is the case of domestic law [165]. Finally, any norm is the sum of the evaluation and balancing of political, social, and economic needs, which are not always—concurrently—considered when applying a merely ethical approach.
Against the idea that AI would highlight gaps in the law, such that the law would be an inappropriate regulatory instrument, the High-Level Expert Group stated in 2019 that "no legal vacuum currently exists, as Europe already has regulation in place that applies to AI" [166]. This is because, generally speaking, no legal vacuum exists in the legal system. Indeed, the legal system does not consist of legal norms only, but also of legal interpretation, legal doctrine, and the decisions of courts of justice at different levels [163].
This does not mean that the law is error-free, that it is always just, or that the norms we have today are the most appropriate for dealing with the challenges that new technologies pose. There are rules which may need to be revised in light of the uniqueness that, to some extent, AI displays. As mentioned above, one example is the data protection regulation (GDPR), which cannot guarantee to always be effective—due to the huge amount of data stored and used by modern applications and the variety of scenarios in which this occurs [167]. Moreover, even the concepts of explicability and explanation should receive particular attention and a precise theorization at the legal level [168], so as to resolve the ambiguities that a concrete principle of explainable AI still poses to jurists. Nevertheless, these considerations do not lead to the idea that a law-based regulation of new technologies should be avoided. On the contrary, they demonstrate the need to focus our attention on the most appropriate and functional way to regulate the matter, not on the advisability of doing so [169].
Despite the differences in methodology and scope highlighted here, ethical and legal analyses go hand in hand and mutually benefit from each other. For this reason, as we shall see, some of the challenges they face may appear, at first, to correspond. In particular, we have identified the following legal challenges (LC) concerning the subsystems that compose a personalized food e-coaching system (i.e., NVC):
5.1 Personalized food recommender system
-
LC1.1
To avoid inappropriate/harmful recommendations: it is often possible to identify a recommendation as inappropriate or harmful only through an ex-post evaluation. However, the law has not only a punitive dimension but, above all, a dimension of damage/risk prevention. Therefore, even applying a purely "restorative" approach, there are difficulties in allocating responsibility. Indeed, the user may sometimes have contributed to the harmful consequence that occurred, which could not be promptly predicted or mitigated by the developers/service provider.
-
LC1.2
To sidestep manipulation and coercion: recommender systems could induce choices that users would not have made otherwise. They might also induce changes in users' perceptions of themselves as individuals or of some aspects of reality (desires, preferences, needs). These dynamics could be considered manipulative—aiming at distorting the relevance and nature of the options available at the time of choice—or coercive—where the aim is to restrict the number of options considered available [170].
-
LC1.3
To avoid steering the market unfairly: recommendations could have large-scale effects, affecting consumer choices so as to direct economic and market balances. An example could be that an NVC recommends foods of a specific brand, as the producer has economic interests in this regard. This could be a clear case of a manipulative dynamic that falls into the remit of unfair competition practices, for it conditions the choices of individuals to favor the producers’ financial interests.
-
LC1.4
To limit over-trust or mistrust: people's biases and lack of technical knowledge can lead to a distorted representation of what the system concretely is and of the expectations it is reasonable to place on it. These issues are often addressed with XAI techniques. However, we should underline that, most of the time, explainability is not closely related to accountability [148].
5.2 Argumentative systems
-
LC2.1
To limit the side-effects of a data-based argument: in the case of automatic argumentation, the skepticism of legal experts may arise from the nature of the data on which they rely. Considering that biases are in the data, not only/directly in the AI itself, and that, at the moment, it is not easy to remove them, the final result could be discriminatory, offensive, fallacious, or misleading.
5.3 Informative and assistive systems
-
LC3.1
To discourage unsupervised use: NVC are specifically designed to be used by commonly defined fragile subjects—due to age or physical/mental health conditions. These categories are subject to specific safeguards by the legal system, particularly in the case of choices that may impact health, autonomy, and the economic sphere. The possibility of exclusively unsupervised use of these systems/devices may violate such rules and expose the parties involved to possible material and psychological damages.
-
LC3.2
To handle deception: even if we cannot consider deception in human–robot/human–computer interaction a prerogative of fragile individuals, the impact of such deceptive dynamics can be more critical. In particular, deception in the context of assistive systems—especially if social and emotional robots are involved—can induce isolation, dehumanization, infantilization, and human dignity infringement [171].
-
LC3.3
To curb social discrimination: at a global level, the population is aging, and there are not enough trained personnel to cope with it. The diffusion of assistive and care systems may appear as an effective and efficient solution to solve the problem, slowly becoming the prevailing one. However, this would also undermine the very concept of “equality of starting points” [172], which underpins the right to health in Western democracies. This would create a form of discrimination, whereby only those who can afford large sums of money would be able to obtain appropriate assistance.
5.4 Persuasive technologies
-
LC4.1
To deal with conceptual ambiguity: the line between the concept of persuasion and that of manipulation is still somewhat blurred. This nourishes a crucial uncertainty for legal argumentation, which makes the clear identification of the object of analysis its foundational basis. As a result, some harmful dynamics might still be considered lawful. An example is the case in which a person is led to a behavioral change induced by the machine without having decided on or realized it during the interaction.
-
LC4.2
To overcome the mere transparency requirement: trying to make a persuasive system transparent might not always be the most appropriate solution and certainly not the most effective one. First, it is possible that implementing systems that increase transparency makes the device less efficient and introduces additional errors (i.e., inaccurate interpretation of AI predictors), leading to failures [173]. Second, human cognition mechanisms are influenced by multiple subjective, biological, and contextual factors, making it, as of today, difficult to prove what is truly transparent for the end-user.
Table 2 summarizes the challenges presented above.
5.5 Informed consent: a transverse challenge
We have so far analyzed the challenges that AI systems could pose to the legal sciences, divided by classes of applications. Behind these, however, there is one challenge common to all: informed consent.
Informed consent is considered one of the main pillars of contract law, consumer protection law, and lawful economic transactions. Such a perspective is built around the figure of the so-called homo oeconomicus: the prototype of a perfectly rational, always wise human being, capable of making, to the best of their knowledge and conscience, the most advantageous decision in all circumstances [174]. Thus, the prerequisite necessary to allow individuals to embody the "rational consumer" would be to provide appropriate information that, once fully understood, will lead them to naturally make the choice that best pursues their own interests [175]. However, the social sciences and legal practice have demonstrated that the idea of a fully aware decision-making process is essentially an illusion [176]. This is due to many concurrent factors.
First, we should consider the structural complexity. End users are usually ordinary individuals with a limited understanding (if any) of either the device's technical characteristics or the legal terms. Despite GDPR provisions, the information they should be made aware of often contains terms that are overly specialized or, on the contrary, too vague [177]. On one hand, this is justified by the need to be accurate; on the other, by the need to match the expertise/knowledge of the majority. Moreover, regarding privacy documents themselves, it has been shown that, given their structure and length, only people with a Ph.D.-level education would be able to analyze them accurately and really understand them [178]. In addition, very often, data collection and storage practices are characterized by a degree of discretion that does not allow users to know what will actually happen to the data entered into the system. This, combined with the language issues, increases the difficulty of weighing the future risks of the current choice to share given information [179].
This leads to the second main challenge posed by the principle of informed consent. The fact that the information provided is, in the end, GDPR compliant cannot per se guarantee a balance of power between the economic actors involved. Indeed, the paradigm of consent in private law should protect the authenticity of individuals' will, relying on the perfect correspondence between what the users have preconceived in their minds and what they have concretely consented to. However, as much neuroscience and behavioral psychology shows, this is very often not the case because of cognitive limitations that are intrinsic to human beings.
For a piece of information to be considered effective and meaningful, many subjective elements should be considered. A non-exhaustive list may include motivation, personal biases, knowledge, level of education, and cultural background. Even the way in which the information is provided may influence the willingness to receive it [145]. Furthermore, we should consider that when consent is required at the very beginning of the interaction, the user’s primary desire is to start or carry on the activity. That can cause a lack of accuracy in understanding the content itself and its real implications [180]. This phenomenon is called “present bias”. It clearly highlights that, even if people rationally consider personal data protection relevant, this is not enough to overcome instinctive reactions triggered subconsciously by their own mind [181].
Consequently, we should admit the profound difficulty of considering the principle of informed consent truly effective in solving the criticalities posed by AI systems as a whole.
5.6 Mitigation strategies and functional requirements
The discussion conducted in this section aims to highlight the challenges posed by new technologies and approaches (in particular those revolving around NVC) from both an ethical and a legal point of view. What emerges is a constantly evolving field of research in which the demands of technical experts are inevitably intertwined with those of the human sciences. Therefore, the most appropriate solution is to create a balance that allows their coexistence in the ultimate interest of the individuals involved. That is why it is not possible, as of today, to provide clear-cut solutions, but rather strategies for risk mitigation. These will be developed and implemented in future works, always starting from multidisciplinary and integrated research.
Clear statement about what the system is not and what it cannot do/replace For instance, it could be useful to explicitly clarify that virtual assistants or chatbots can provide recommendations based on scientific and nutritional research, which cannot replace the consultation of a doctor or a nutritionist. This is even more true in the presence of specific health conditions or subjective body responses to the suggestions provided by the system. Such information should be stressed at the very beginning of the interaction, before the actual use of the system/device (i.e., the NVC). However, changing nutritional behavior is not just an action; it is a process. Therefore, people’s bodies and minds are involved at different levels and in different ways throughout the use. Consequently, such content should be repeated whenever any change is made to the initial settings or as soon as the pre-determined goals/sub-goals are reached.
Explicit mention of categories of individuals for whom the use of the system is not indicated To this end, it is necessary to identify, with the support of experts, the specific diseases for which unsupervised access to dietary advice could be harmful. In doing so, it will be necessary to take into account not only physical profiles but also the psychological dimension. Mental pathologies, eating disorders, and body dysmorphia will have to be taken seriously into account. Regarding the latter categories, it should be considered that those who are directly affected are also the ones who find it most difficult to be consciously aware of them or admit them, even to themselves. Therefore, it would be appropriate to propose examples alongside each of the clinical categories indicated. This will serve a twofold purpose. On the one hand, it will ensure a softer approach, mitigating the emotional impact of being labeled with a pathology. On the other hand, it will raise awareness of the issue and raise doubts in those who might recognize themselves in the dynamics/symptoms mentioned without having realized that they might be dysfunctional.
Clear, user-centered goals The goal should be set by the user only and should be modifiable by the user alone at any stage of the interaction. Any mechanism that can force people, directly or subliminally, to follow the instructions/recommendations should be strictly avoided. To such an end, the persuasive techniques implemented should be evaluated ex-ante by a multidisciplinary team—which may include psychologists and neuroscientists—so as to foresee possible deceptive or coercive dynamics. Conversely, the right to refuse suggestions and stop the interaction should be guaranteed any time the user feels overwhelmed or bothered. Even in this case, the above-mentioned team should evaluate which communicative techniques should be applied/developed to cope with normal reluctance to change without resulting in manipulation of the user’s will.
Timing and design of consent The theme of informed consent is certainly one of the most sensitive, given the structural inconsistencies of this principle and the fallacy of its assumptions. Therefore, we suggest structuring the request for consent with characteristics that take into account the real nature—not fully aware and rational—of the average user. First, consent must be required any time the user modifies what was previously established. This may make the interaction less fluid. However, the constant reminder to agree to new conditions forces individuals to pay attention and objectively realize what is happening. This would help mitigate the problem of giving consent out of inertia or without really dwelling on the implications of subsequent choices. In addition, the provision of consent should not be a mechanical exercise reduced to a single click. It would be useful to make this an interactive moment, at the end of which the device presents the user with quizzes through which to demonstrate a real and not fictitious understanding. If the user fails the test, new and more targeted explanations/information will be provided. The expected functionality will be unlocked only after passing the quiz.
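The quiz-gated consent flow described above can be illustrated with a minimal code fragment. This is a hypothetical sketch for illustration only; the class, method names, and questions are invented and not part of any existing NVC implementation:

```python
# Minimal sketch of a quiz-gated consent flow: a functionality is unlocked
# only after the user both consents and passes a short comprehension quiz.
# All names here are hypothetical illustrations.

class ConsentGate:
    """Unlocks a feature only after consent plus demonstrated understanding."""

    def __init__(self, questions, pass_threshold=1.0):
        self.questions = questions            # list of (prompt, correct_answer)
        self.pass_threshold = pass_threshold  # fraction of answers required
        self.unlocked = False

    def request_consent(self, consent_given, answers):
        # Consent alone is not enough: a failed quiz keeps the feature locked
        # and signals that new, more targeted explanations are needed.
        if not consent_given:
            self.unlocked = False
            return "declined"
        correct = sum(1 for (_, expected), given in zip(self.questions, answers)
                      if given == expected)
        if correct / len(self.questions) >= self.pass_threshold:
            self.unlocked = True
            return "unlocked"
        self.unlocked = False
        return "retry_with_explanations"


gate = ConsentGate([("Who can access your weight data?", "research team"),
                    ("Can you withdraw consent at any time?", "yes")])
print(gate.request_consent(True, ["research team", "yes"]))  # unlocked
print(gate.request_consent(True, ["nobody", "yes"]))         # retry_with_explanations
```

The "retry" outcome is where the system would present the new, more targeted explanations before the quiz is attempted again.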
Access to professional opinions If there are concerns on the part of the user that the device does not understand or cannot resolve, or if the user develops reactions that were not expected or that the device fails to handle, access to a professional opinion should be guaranteed. This can result in the direct intervention of a technician who takes control of the system and can solve the issue (e.g., due to malfunctioning), or in the invitation to contact a personal doctor or a specialist who is part of the team that developed/supports the NVC. In both cases, the system should prevent any subsequent action until one of the two solutions mentioned above has been taken.
Overall, ethics and law are disciplines with different (although somewhat related) methodologies, scopes, and purposes. The instances they both express and advocate are often interconnected, as in the case of NVC, where the two disciplines appear similar or even overlap. Nonetheless, they must be analyzed differently, respecting the specificities and potentials that both ethics and law, each in its sphere of competence, can express. Figure 2 shows the possible relationships identified w.r.t. the elicited challenges.
6 Practical implications of EC and LC on nutrition and food sustainability
NVCs are primarily intended to provide appropriate and tailored nutritional recommendations to users to optimize dietary health outcomes. However, in the last decade, growing concern about the environmental and social impacts of food production and consumption has raised consciousness about shifting our dietary patterns toward more sustainable ones [182]. In this context, recommendations for healthy diets are paired with environment-aware diet recommendations. Therefore, opening NVC to the sustainability perspective demands the assessment of an additional dimension of ethical challenge: the process of informing, educating, and learning about sustainable eating patterns. To address this challenge, an NVC should adopt a precise and transparent definition of sustainable diets, which pairs dietary intake references with environmental consumption thresholds while also considering cultural and social aspects. So far, the most widely accepted definition of sustainable diets is the one provided by FAO (2012) [183], which defines them as “diets with low environmental impacts which contribute to food and nutrition security and to healthy life for present and future generations. Sustainable diets are protective and respectful of biodiversity and ecosystems, culturally acceptable, accessible, economically fair and affordable; nutritionally adequate, safe, and healthy, while optimizing natural and human resources” [184]. This definition is broad and poses significant barriers to its adoption in NVC. Indeed, it does not provide detailed information on the nutritional, environmental, social, and economic criteria that should be considered to formulate the recommendations, the data used, and their reliability, as well as their processing and modeling.
In the last decade, many studies have focused on the environmental boundaries of diets, with special attention given to dietary greenhouse gas emissions (GHGe) and water footprint (WF) (e.g., [185,186,187,188]). The EAT-Lancet commission provided a comprehensive overview of the health and environmental outcomes of diets with the aim of reaching scientific consensus on health and environmental targets for a sustainable food system. The study identified a safe operating space for food production, which allows feeding “healthy diets to about 10 billion people within biophysical limits of the Earth system”, defining the so-called “healthy planetary diet” [182].
In the previous paragraphs, the ethical and legal challenges that can arise in the development and adoption of NVC have been identified and argued. In the following, these challenges are linked to practical implications (PIs) for NVC in the domain of nutrition and sustainability of diets.
6.1 Personalized food recommender system
- PI1.1 Inappropriate/harmful recommendations (linked to EC1.1 and LC1.1): these may include (i) recommendations on the consumption of a specific food/food group that is excluded by the user’s religion or for ethical reasons (e.g., a stance on animal welfare) [189]; (ii) the existence of specific conditions such as food intolerances or allergies of which the user is unaware; (iii) recommending food that can interact with or reduce the effect of specific medicines (active compounds), as in the case of food containing vitamin K, which interacts with vitamin K antagonist anticoagulants [190].
- PI1.2 Disclosure of private information on health status (linked to EC1.2): the privacy of the disclosed weight, date of birth, specific health conditions (e.g., food intolerances, food allergies), and other sensitive information has to be ensured. Furthermore, the user has to be aware of who has access to the data and under what conditions (e.g., research use) [191].
- PI1.3 Disclosure of nutritional and environmental criteria adopted to model the recommendations (linked to EC1.4 and LC1.4): a personalized food recommender produces tailored recommendations according to modeling assumptions, which might not take into account social and ethical beliefs that the user is not required to disclose or for which entry data is missing. This knowledge gap can impact the quality of the recommendations produced. Furthermore, in the case of environmental recommendations, it is necessary to ensure transparent and understandable information on the modeling process, with special reference to the indicators used, their normalization, the functional units, data sources, and impact categories considered. In the case of nutritional recommendations, the following information has to be disclosed: the dietary guidelines adopted for reference intake per age, gender, weight, and physical activity level; the coefficients used for converting physical activity into Kcal expenditure; and the basal metabolism Kcal expenditure per age, gender, and weight [192].
- PI1.4 Data set uniformity and reliability (linked to EC1.5): the data set used for modeling the environmental impact of a diet has to be scientifically recognized and reliable. Transparent information has to be provided on the type of data (primary, secondary, or tertiary), on how data are harmonized in relation to the system boundaries when more than one database is used, and on the type of functional unit used [192].
- PI1.5 Guaranteeing science-based and neutral information on healthy and sustainable diets (linked to EC1.6 and LC1.3): healthy and sustainable diets have been shown to have positive outcomes both for consumers’ health and for the environment [182]. Since multiple dietary patterns are possible in the framework of the described planetary diet, neutral information on food substitutes and dietary supplements should be provided. Special attention has to be given to the reduction of meat consumption (when high), which has a high environmental impact [153, 158]; the information can be complemented with neutral and science-based information on meat substitutes [193] and complementary foods or food supplements to guarantee a proper nutritional intake of vitamin B12, iron, and zinc [194]. PI1.4 and PI1.5 also apply to Argumentative Systems as practical implications of EC2.1, EC2.5, and LC2.1.
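The restriction-aware filtering implied by PI1.1 can be sketched as a simple pre-screening step on candidate recommendations. The fragment below is purely illustrative: the data structures, field names, and example foods are hypothetical assumptions, not part of any published NVC design:

```python
# Illustrative sketch of PI1.1-style pre-screening: candidate recommendations
# are checked against user-declared constraints (religious/ethical exclusions,
# allergens, and known food-drug interactions) before being shown.
# All data structures and names are hypothetical.

def filter_candidates(candidates, user):
    """Drop foods excluded by religious/ethical restrictions, declared
    allergens, or known food-drug interactions (e.g., vitamin K vs.
    vitamin K antagonist anticoagulants)."""
    safe = []
    for food in candidates:
        if food["tags"] & user["excluded_tags"]:   # religious/ethical exclusions
            continue
        if food["tags"] & user["allergens"]:       # declared intolerances/allergies
            continue
        if any(drug in food["interacts_with"] for drug in user["medications"]):
            continue                               # food-drug interactions
        safe.append(food["name"])
    return safe


user = {"excluded_tags": {"pork"}, "allergens": {"peanut"},
        "medications": {"vitamin_k_antagonist"}}
candidates = [
    {"name": "spinach salad", "tags": {"vegetable"},
     "interacts_with": {"vitamin_k_antagonist"}},   # high in vitamin K
    {"name": "grilled chicken", "tags": {"poultry"}, "interacts_with": set()},
]
print(filter_candidates(candidates, user))  # ['grilled chicken']
```

Note that such filtering only covers conditions the user has disclosed or the system can infer; the knowledge gap discussed in PI1.3 remains.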
6.2 Argumentative system
- PI2.1 The trade-off between understandability and accuracy in environmental recommendations (linked to EC2.6): aggregated or simplified information on the environmental impact of food or diets can lead to misunderstanding and confusion, as shown in the case of EcoLabels [195]. Indeed, the environmental assessment of food and diets is the result of articulated modeling systems that adopt multiple indicators and take into account different scenarios (e.g., miles traveled by a food product); extreme simplification can alter the information and lead to misunderstanding.
- PI2.2 Uniformity of infographics such as the dietary pyramid and plate (linked to EC2.7): unharmonized information can be delivered, depending on the communication means used, if the information is not adapted. The food pyramid has been used to show the appropriate balance of different food groups in daily consumption, and many slightly varying versions of the pyramid have been disseminated (e.g., [196, 197]). Further graphical communication approaches have been adopted to share information on healthy diets, such as the USA MyPlate,Footnote 2 or on the nutritional and environmental impact of diets, such as the Double Pyramid (which reports nutritional recommendations as well as the water footprint and greenhouse gas emissions contribution of different food groupsFootnote 3) and the environmental hourglass (which shows nutritional recommendations and the greenhouse gas emissions of diets [186]).
6.3 Informative and assistive systems
- PI3.1 Dietary costs (linked to EC3.4 and LC3.3): IAT cost issues can be extended to the costs of the recommended diets. The recommended sustainable diet should be accessible and affordable. Hence, detailed information on the economic characteristics of users may be relevant for a tailored solution. Food accessibility is defined as access to a diverse range of healthy foods and, therefore, a balanced and appropriate diet [163], and involves the affordability of food and physical access to grocery shops or vendors. NVC can provide information on foods that can substitute each other and can optimize eating habits according to the budget allocated for food, pairing information on food costs with the nutritional quality of food.
- PI3.2 IAT functionalities to be determined according to patients’ needs (linked to EC3.5 and LC3.1): for users affected by mental diseases or mental issues, medical doctors and caregivers should have full command of the IAT, i.e., settings could be adjusted by them according to the patient’s needs and diagnoses (different levels of interaction between the user and the system should be possible). For instance, for patients suffering from eating disorders, recommending food quantities in grams can be detrimental, as can reminding them to weigh themselves every week [198]. PI1.2 also applies to Informative and Assistive Systems as a practical implication of EC3.3.
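The budget-aware substitution mentioned in PI3.1 could, for instance, be approached as in the following sketch. It is purely illustrative: the substitution table, prices, and function name are invented placeholders under the assumption that nutritionally equivalent substitutes have already been identified:

```python
# Hypothetical sketch of PI3.1's budget-aware substitution: among foods
# assumed to be nutritionally equivalent, prefer the most affordable option
# within the user's per-serving budget. Table and prices are invented.

def cheapest_equivalent(food, substitutes, budget_per_serving):
    """Return the most affordable nutritionally equivalent option within
    budget, or the original food if no substitute fits."""
    options = [s for s in substitutes.get(food, []) if s[1] <= budget_per_serving]
    if not options:
        return food
    return min(options, key=lambda s: s[1])[0]


# Illustrative table: food -> [(substitute, cost per serving in EUR)]
substitutes = {"salmon": [("canned sardines", 1.2), ("trout", 2.5)]}
print(cheapest_equivalent("salmon", substitutes, budget_per_serving=2.0))
# canned sardines
```

A real system would also have to weigh nutritional quality and user preferences rather than cost alone, in line with the "pairing costs with nutritional quality" requirement above.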
6.4 Persuasive technologies and processes
- PI4.1 Nudging in persuasive technologies (linked to EC4.1 and LC4.2): smart-nudging techniques have also proven effective in information technologies to nudge healthy nutritional choices [199]. A nudging intervention is a change in the choice architecture environment to gently push behavior. Though effective, this technique is controversial, since the user is unaware of the mechanism. Furthermore, negative outcomes have been shown for nudging interventions whose architects had poor knowledge of behavioral change [200].
- PI4.2 Clear goal statement (linked to EC4.2 and LC4.1): the intended goal of NVC has to be stated clearly; an example statement is “improving the health outcomes of your diet while reducing the environmental impact of your dietary consumption”.
6.5 Mitigation strategies and functional/content requirements
NVC can support the shift toward more sustainable and healthy diets by providing targeted information; nevertheless, this benefit comes with risks that need to be tackled. Previously, the practical implications, in the nutritional and environmental dimensions, of the legal and ethical challenges of NVC have been discussed. In the following, mitigation strategies and functional/content requirements are discussed.
6.6 Nutritional and health claims and nutritional guidelines
The nutritional recommendations provided by NVC have to be based on scientific and reliable data (e.g., national dietary guidelines per age class, dietary recommendations for special needs, ...), whose sources are properly communicated. Furthermore, the NVC should take into account the location of the user to provide geographic-specific recommendations, which can account for food availability, seasonality, and the applicability of the dietary guidelines. Moreover, in the case of short information (such as a pop-up or notification), the NVC may make use of the EU nutritional and health claims,Footnote 4 to ensure proper, legal, and accurate information and avoid misunderstanding.
6.7 Environmental assessment: transparency on data and indicators used
The environmental impact of a specific dietary pattern depends on many variables, such as the set of food items consumed, their origin, the breeding systems adopted, the implementation of good agricultural practices, the transportation, the distance between the production and consumption sites, their manufacturing, handling, and processing (both at the manufacturer’s and the consumer’s stage), and their waste/disposal [201]. Essential information about the best environmental choice should include understandable details on the impact categories taken into account (e.g., global warming potential, human toxicity, and land use), the system boundaries considered, the data and databases used, the degree of uncertainty, and data reliability. Data should also be reported in terms of food nutritional values using, for instance, nutrient density indexes [202]. This allows comparing the environmental impact of food (e.g., kg CO2 eq emitted) both in terms of the weight of product consumed and in terms of its nutritional value as part of a balanced diet. Proper information about food environmental impacts can guarantee freedom of choice and autonomy, avoiding misbeliefs and uninformed decisions. The adoption of aggregated environmental indicators should also be transparent: the methodology and weights used to aggregate the various environmental indicators should be clearly stated. Ruminant meat, pork, and chicken have the highest environmental impact in terms of greenhouse gas emissions, land use, energy use, acidification, and eutrophication potential [182, 203]. Consumption of red and processed meat is mainly discouraged due to its linear relationship with increased mortality risk [182, 204]. However, ruminant meat is nutrient-dense: high-quality protein, iron, zinc, and vitamin B12 (among others) are essential nutrients provided by meat [194, 205].
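Reporting environmental impact both per kilogram of product and per unit of nutritional value, as suggested above, can be illustrated with a short fragment. The figures and the simplistic density index below are placeholders only, not validated LCA or nutrition data:

```python
# Sketch of reporting environmental impact both per kilogram of product and
# per unit of a nutrient density index, as suggested in the text.
# All numeric values and the density scores are illustrative placeholders.

def impact_per_nutrient_density(co2_per_kg, nutrient_density):
    """kg CO2 eq emitted per unit of nutrient density index."""
    return co2_per_kg / nutrient_density


foods = {
    # name: (kg CO2 eq per kg of product, hypothetical nutrient density index)
    "beef":    (27.0, 9.0),
    "lentils": (0.9, 6.0),
}
for name, (co2, density) in foods.items():
    print(f"{name}: {co2} kg CO2e/kg, "
          f"{impact_per_nutrient_density(co2, density):.2f} kg CO2e per density unit")
```

The point of the double reporting is that rankings can shift once nutritional value is accounted for, which is exactly why transparency on the normalization chosen matters.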
6.8 Food substitutes: information on the outcomes and nutritional and environmental comparison
NVC may provide the user with viable alternatives to optimize the dietary pattern. However, some ethical concerns may arise depending on the economic context in which the NVC is used. Due to their nature, NVCs can become viral and be adopted by a wide range of consumers. This could intensively shift the food consumption of specific regions, with relevant changes in demand, raising ethical issues related to the impact that wide adoption of NVC may have on local economies (e.g., impact on the livelihoods of smallholders and rural communities that depend on agro-pastoral activities and animal proteins from livestock [182]). Therefore, comprehensive information on food substitutes and their nutritional and environmental characteristics has to be provided to guarantee freedom of choice and access to meaningful information.
6.9 Ensuring cultural/religious acceptability of dietary patterns
Cultural acceptability of a recommended diet entails religious and ethical beliefs. Indeed, some religions impose specific dietary restrictions (e.g., Judaism forbids the consumption of pork and rabbit meat; Islam only allows Halal food) [206]. The employment of NVC can partly address this problem. For example, asking questions about foods the user does not eat (or is not allowed to eat) from the beginning (i.e., at the profiling stage) can enable the NVC to provide recommendations compliant with the religious standpoint of the user. Nevertheless, collecting information about users’ religious and cultural affiliations, which is sensitive information, and elaborating on it to provide targeted advice may undermine user privacy.
Ethical beliefs that go beyond the choice of not consuming meat/animal-sourced food may also entail the request for information on animal welfare. Major producers and retailers provide information on the animal welfare policies of different brands. However, it is unlikely (and undesired) that an NVC would promote the consumption of one brand over another. Furthermore, some retailers sell meat without detailed information on animal welfare, generating a lack of data that cannot be easily addressed/filled by the NVC. Thus, users’ moral expectations about animal welfare might not be addressed by the NVC, and the cultural acceptability of a recommended dietary pattern may be undermined.
6.10 Further recommendations on food safety
NVC can increase the user’s knowledge of how to detect and analyze food safety characteristics in the retail shop and how to implement best handling practices in the cooking phase. On the other hand, the intrinsic safety of the specific food product purchased cannot be guaranteed and rechecked, and the user may make biased decisions due to misinterpretation of safety arguments. Further information is needed to determine whether the adoption of NVC can reduce (or increase) food waste. NVC can advance household knowledge and practices to reduce food waste. However, a mismatch between NVC recommendations, purchasing, and behavior could impact food waste, raising ethical concerns about the broader adoption of NVC systems.
Table 3 summarizes the practical implications of ethical and legal challenges in the nutrition and food sustainability domain.
7 Conclusions
The paper has discussed the need for a clear understanding of the ethical/legal-technological entanglements that lie behind (nutrition) virtual coaching systems. Recalling that NVC are intended to support and educate the users about food, integrating the dimensions described in Sect. 2 (i.e., leveraging persuasion and argumentation techniques, informative systems, and recommendation paradigms), new capabilities—and hence risks/challenges—must be considered.
In particular, the analysis has elicited the following ethical challenges, including circumventing inappropriate recommendations, ensuring privacy, safeguarding autonomy and personal identity, reducing the RS opacity, overcoming the absence of fairness, deflecting social pressure, attaining formal validity, leveraging sole sincerity/truth, ensuring content justice, enacting fair and just procedures, ensuring compliance-verification convergence, simplifying or aggregating arguments, producing multi-modal arguments, facilitating technology access and IAT rightful behaviors, ensuring the system identity, ensuring medical data confidentiality, making the solutions affordable, ensuring safety boundaries, providing transparency, stating the goals clearly, and preventing unintended behavior change.
Moreover, we have related the ethical challenges to the NVC sphere, elaborating on food sustainability from both the virtual assistant and user perspectives.
From a legal standpoint, the analysis of NVC led to the formalization of challenges such as avoiding inappropriate/harmful recommendations, sidestepping manipulation and coercion, excluding unfair steering of the market, restricting over-trust or mistrust, limiting the side-effects of data-based arguments, discouraging unsupervised use, handling deception, curbing social discrimination, dealing with conceptual ambiguity, and overcoming the mere transparency requirement.
Finally, this work has elicited and elaborated on ethical and legal challenges that, as of today, cannot yet be fulfilled. This is due to the lack of techniques, frameworks, and unambiguous formulations, which hinders sharp legal formalizations. Therefore, as future work, we plan to undertake the design and development of an NVC, providing concrete tools to cope with the highlighted challenges. In turn, the validation of technologies, techniques, and practices from a legal standpoint will be investigated.
Data Availability
Not applicable.
References
Calvaresi, D., Ciatto, G., Najjar, A., Aydoğan, R., Van der Torre, L., Omicini, A., Schumacher, M.: Expectation: personalized explainable artificial intelligence for decentralized agents with heterogeneous knowledge. In: International Workshop on Explainable, Transparent Autonomous Agents and Multi-Agent Systems, pp. 331–343. Springer, Berlin (2021)
Bobadilla, J., Ortega, F., Hernando, A., Gutiérrez, A.: Recommender systems survey. Knowl.-Based Syst. 46, 109–132 (2013)
Ge, M., Ricci, F., Massimo, D.: Health-aware food recommender system. In: Proceedings of the 9th ACM Conference on Recommender Systems, pp. 333–334 (2015)
Jesse, M., Jannach, D.: Digital nudging with recommender systems: survey and future directions. Comput. Hum. Behav. Rep. 3, 100052 (2021)
Milano, S., Taddeo, M., Floridi, L.: Recommender systems and their ethical challenges. AI Soc. 35(4), 957–967 (2020)
Karpati, D., Najjar, A., Ambrossio, D.A.: Ethics of food recommender applications. In: Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, pp. 313–319 (2020)
Droit, R.-P.: L’éthique expliquée à tout le monde. Média Diffusion (2014)
Calvaresi, D., Calbimonte, J.-P., Dubovitskaya, A., Mattioli, V., Piguet, J.-G., Schumacher, M.: The good, the bad, and the ethical implications of bridging blockchain and multi-agent systems. Information 10(12), 363 (2019)
Carli, R., Najjar, A.: Rethinking trust in social robotics. arXiv preprint arXiv:2109.06800 (2021)
Hagendorff, T.: The ethics of ai ethics: an evaluation of guidelines. Mind. Mach. 30(1), 99–120 (2020)
Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Giannotti, F., Pedreschi, D.: A survey of methods for explaining black box models. ACM Comput. Surv. (CSUR) 51(5), 1–42 (2018)
Anjomshoae, S., Najjar, A., Calvaresi, D., Främling, K.: Explainable agents and robots: Results from a systematic literature review. In: 18th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2019), Montreal, Canada, May 13–17, 2019. International Foundation for Autonomous Agents and Multiagent Systems, pp. 1078–1088 (2019)
Das, A., Rad, P.: Opportunities and challenges in explainable artificial intelligence (XAI): a survey (2020). arXiv preprint arXiv:2006.11371
Goddard, M.: The eu general data protection regulation (gdpr): European regulation that has a global impact. Int. J. Mark. Res. 59(6), 703–705 (2017)
Albrecht, J.P.: How the GDPR will change the world. Eur. Data Prot. L. Rev. 2, 287 (2016)
Such, J.M., Rovatsos, M.: Privacy policy negotiation in social media. ACM Trans. Autonom. Adapt. Syst. (TAAS) 11(1), 1–29 (2016)
Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., Galstyan, A.: A survey on bias and fairness in machine learning. ACM Comput. Surv. (CSUR) 54(6), 1–35 (2021)
Bellamy, R.K.E., Dey, K., Hind, M., Hoffman, S.C., Houde, S., Kannan, K., Lohia, P., Martino, J., Mehta, S., Mojsilovic, A., et al.: Ai fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias (2018). arXiv preprint arXiv:1810.01943
Hufthammer, K.T., Aasheim, T.H., Ånneland, S., Brynjulfsen, H., Slavkovik, M.: Bias mitigation with aif360: a comparative study. In: Norsk IKT-konferanse for forskning og utdanning, number 1, (2020)
Ethical call—renaissance foundation. https://www.romecall.org/the-call/. Accessed 25 June 2022
Russell, S., Norvig, P.: Artificial intelligence: a modern approach (2002)
Verschure, P.F.M.J., Althaus, P.: A real-world rational agent: unifying old and new AI. Cogn. Sci. 27(4), 561–590 (2003)
Calvaresi, D., Cesarini, D., Sernani, P., Marinoni, M., Dragoni, A.F., Sturm, A.: Exploring the ambient assisted living domain: a systematic review. J. Ambient. Intell. Hum. Comput. 8(2), 239–257 (2017)
Calvaresi, D., Marinoni, M., Dragoni, A.F., Hilfiker, R., Schumacher, M.: Real-time multi-agent systems for telerehabilitation scenarios. Artif. Intell. Med. 96, 217–231 (2019)
Sanz, R.: Ethica ex machina. Exploring artificial moral agency or the possibility of computable ethics. Z. für Ethik Moralphilos. 3(2), 223–239 (2020)
Cervantes, J.-A., López, S., Rodríguez, L.-F., Cervantes, S., Cervantes, F., Ramos, F.: Artificial moral agents: a survey of the current status. Sci. Eng. Ethics 26(2), 501–532 (2020)
Jie, L., Dianshuang, W., Mao, M., Wang, W., Zhang, G.: Recommender system application developments: a survey. Decis. Support Syst. 74, 12–32 (2015)
Aggarwal, C.C., et al.: Recommender Systems, vol. 1. Springer, Berlin (2016)
Alamdari, P.M., Navimipour, N.J., Hosseinzadeh, M., Safaei, A.A., Darwesh, A.: A systematic study on the recommender systems in the e-commerce. IEEE Access 8, 115694–115716 (2020)
Mbugua, A.W., Omondi, A.O.: An application of association rule learning in recommender systems for e-commerce and its effect on marketing (2017)
Ge, Y., Zhao, S., Zhou, H., Pei, C., Sun, F., Ou, W., Zhang, Y.: Understanding echo chambers in e-commerce recommender systems. In: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 2261–2270 (2020)
Zhou, M., Ding, Z., Tang, J., Yin, D.: Micro behaviors: a new perspective in e-commerce recommender systems. In: Proceedings of the Eleventh ACM International Conference on Web Search and Data Mining, pp. 727–735 (2018)
Wakil, K., Alyari, F., Ghasvari, M., Lesani, Z., Rajabion, L.: A new model for assessing the role of customer behavior history, product classification, and prices on the success of the recommender systems in e-commerce. Kybernetes (2019)
Tran, T.N.T., Atas, M., Felfernig, A., Stettinger, M.: An overview of recommender systems in the healthy food domain. J. Intell. Inf. Syst. 50(3), 501–526 (2018)
Musto, C., Starke, A.D., Trattner, C., Rapp, A., Semeraro, G.: Exploring the effects of natural language justifications in food recommender systems. In: Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization, pp. 147–157 (2021)
Trattner, C., Elsweiler, D.: Food recommender systems: important contributions, challenges and future research directions (2017). arXiv preprint arXiv:1711.02760
Katzman, J.L., Shaham, U., Cloninger, A., Bates, J., Jiang, T., Kluger, Y.: Deepsurv: personalized treatment recommender system using a cox proportional hazards deep neural network. BMC Med. Res. Methodol. 18(1), 1–12 (2018)
Sahoo, A.K., Pradhan, C., Barik, R.K., Dubey, H.: Deepreco: deep learning based health recommender system using collaborative filtering. Computation 7(2), 25 (2019)
Gräßer, F., Malberg, H., Zaunseder, S., Beckert, S., Schmitt, J., Klinik, S.A., et al.: Application of recommender system methods for therapy decision support. In: 2016 IEEE 18th International Conference on e-Health Networking, Applications and Services (Healthcom). IEEE, pp. 1–6 (2016)
Falk, K.: Practical Recommender Systems. Simon and Schuster, New York (2019)
Yıldırım, S., Söyler, Ş.G., Akarsu, Ö.: Building a non-personalized recommender system by learning product and basket representation. In: 2020 IEEE International Conference on Big Data (Big Data). IEEE, pp. 4450–4455 (2020)
Lin, W., Alvarez, S.A., Ruiz, C.: Efficient adaptive-support association rule mining for recommender systems. Data Min. Knowl. Disc. 6(1), 83–105 (2002)
Schäfer, H., Hors-Fraile, S., Karumur, R.P., Calero Valdez, A., Said, A., Torkamaan, H., Ulmer, T., Trattner, C.: Towards health (aware) recommender systems. In: Proceedings of the 2017 International Conference on Digital Health, pp. 157–161 (2017)
Logesh, R., Subramaniyaswamy, V., Vijayakumar, V., Li, X.: Efficient user profiling based intelligent travel recommender system for individual and group of users. Mob. Netw. Appl. 24(3), 1018–1033 (2019)
Wang, S., Cao, L., Wang, Y., Sheng, Q.Z., Orgun, M.A., Lian, D.: A survey on session-based recommender systems. ACM Comput. Surv. (CSUR) 54(7), 1–38 (2021)
Iwata, T., Saito, K., Yamada, T.: Modeling user behavior in recommender systems based on maximum entropy. In: Proceedings of the 16th International Conference on World Wide Web, pp. 1281–1282 (2007)
Guo, G., Zhang, J., Thalmann, D., Basu, A., Yorke-Smith, N.: From ratings to trust: an empirical study of implicit trust in recommender systems. In: Proceedings of the 29th Annual ACM Symposium on Applied Computing, pp. 248–253 (2014)
Oard, D.W., Kim, J., et al.: Implicit feedback for recommender systems. In: Proceedings of the AAAI Workshop on Recommender Systems, vol. 83. Wollongong, pp. 81–83 (1998)
Calvaresi, D., Eggenschwiler, S., Calbimonte, J.-P., Manzo, G., Schumacher, M.: A Personalized Agent-Based Chatbot for Nutritional Coaching. WI-IAT ’21 (2021). Association for Computing Machinery, New York, pp. 682–687
Jannach, D., Zanker, M., Felfernig, A., Friedrich, G.: Recommender systems: an introduction (2010)
Bocanegra, C.S., Laguna, F.S., Sevillano, J.L.: Introduction on health recommender systems. Methods Mol. Biol. (Clifton, NJ) 1246, 131–46 (2015)
Felfernig, A., Burke, R.: Constraint-based recommender systems: technologies and research issues. ACM Int. Conf. Proc. Ser. 3, 01 (2008)
Ricci, F., Rokach, L., Shapira, B., Kantor, P.: Recommender Systems Handbook (2011)
Koren, Y., Bell, R.: Advances in collaborative filtering. In: Recommender Systems Handbook, pp. 77–118 (2015)
Chen, R., Hua, Q., Chang, Y.-S., Wang, B., Zhang, L., Kong, X.: A survey of collaborative filtering-based recommender systems: from traditional methods to hybrid methods based on social networks. IEEE Access 6, 64301–64320 (2018)
Ramlatchan, A., Yang, M., Liu, Q., Li, M., Wang, J., Li, Y.: A survey of matrix completion methods for recommendation systems. Big Data Min. Anal. 1(4), 308–323 (2018)
Mustafa, N., Ibrahim, A.O., Ahmed, A., Abdullah, A.: Collaborative filtering: techniques and applications. In: 2017 International Conference on Communication, Control, Computing and Electronics Engineering (ICCCCEE). IEEE, pp. 1–6 (2017)
Karabadji, N.E.I., Beldjoudi, S., Seridi, H., Aridhi, S., Dhifli, W.: Improving memory-based user collaborative filtering with evolutionary multi-objective optimization. Expert Syst. Appl. 98, 153–165 (2018)
Luo, X., Zhou, M., Xia, Y., Zhu, Q.: An efficient non-negative matrix-factorization-based approach to collaborative filtering for recommender systems. IEEE Trans. Ind. Inf. 10(2), 1273–1284 (2014)
Karlsson, L., Kressner, D., Uschmajew, A.: Parallel algorithms for tensor completion in the cp format. Parall. Comput. 57, 222–234 (2016)
Zhang, S., Yao, L., Xu, X.: Autosvd++ an efficient hybrid collaborative filtering model via contractive auto-encoders. In: Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 957–960 (2017)
Wei, J., He, J., Chen, K., Zhou, Y., Tang, Z.: Collaborative filtering and deep learning based recommendation system for cold start items. Expert Syst. Appl. 69, 29–39 (2017)
Mingsheng, F., Hong, Q., Yi, Z., Li, L., Liu, Y.: A novel deep learning-based collaborative filtering model for recommendation system. IEEE Trans. Cybern. 49(3), 1084–1096 (2018)
Wang, D., Liang, Y., Dong, X., Feng, X., Guan, R.: A content-based recommender system for computer science publications. Knowl.-Based Syst. 157, 1–9 (2018)
Bagher, R.C., Hassanpour, H., Mashayekhi, H.: User trends modeling for a content-based recommender system. Expert Syst. Appl. 87, 209–219 (2017)
Tarus, J.K., Niu, Z., Mustafa, G.: Knowledge-based recommendation: a review of ontology-based recommender systems for e-learning. Artif. Intell. Rev. 50(1), 21–48 (2018)
Dong, X., Yu, L., Wu, Z., Sun, Y., Yuan, L., Zhang, F.: A hybrid collaborative filtering model with deep structure for recommender systems. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 31 (2017)
Hunter, A.: Towards a framework for computational persuasion with applications in behaviour change. Arg. Comput. 9(1), 15–40 (2018)
Carfora, V., Di Massimo, F., Rastelli, R., Catellani, P., Piastra, M.: Dialogue management in conversational agents through psychology of persuasion and machine learning. Multimedia Tools Appl. 79(47), 35949–35971 (2020)
Aldenaini, N., Alqahtani, F., Orji, R., Sampalli, S.: Trends in persuasive technologies for physical activity and sedentary behavior: a systematic review. Front. Artif. Intell. 3, 7 (2020)
Sqalli, M.T., Al-Thani, D.: Ai-supported health coaching model for patients with chronic diseases. In: 2019 16th International Symposium on Wireless Communication Systems (ISWCS). IEEE, pp. 452–456 (2019)
Orji, R., Moffatt, K.: Persuasive technology for health and wellness: state-of-the-art and emerging trends. Health Informat. J. 24(1), 66–91 (2018)
Calvaresi, D., Calbimonte, J.-P., Dubosson, F., Najjar, A., Schumacher, M.: Social network chatbots for smoking cessation: agent and multi-agent frameworks. In: 2019 IEEE/WIC/ACM International Conference on Web Intelligence (WI). IEEE, pp. 286–292 (2019)
Calvaresi, D., Calbimonte, J.-P., Siboni, E., Eggenschwiler, S., Manzo, G., Hilfiker, R., Schumacher, M.: Erebots: privacy-compliant agent-based platform for multi-scenario personalized health-assistant chatbots. Electronics 10(6), 666 (2021)
Zhang, J., YooJung, O., Lange, P., Zhou, Yu., Fukuoka, Y., et al.: Artificial intelligence chatbot behavior change model for designing artificial intelligence chatbots to promote physical activity and a healthy diet. J. Med. Internet Res. 22(9), e22845 (2020)
Calbimonte, J.-P., Calvaresi, D., Schumacher, M.: Towards collaborative creativity in persuasive multi-agent systems. In: International Conference on Practical Applications of Agents and Multi-Agent Systems. Springer, pp. 40–51 (2021)
Bartlett, Y.K., Webb, T.L., Hawley, M.S.: Using persuasive technology to increase physical activity in people with chronic obstructive pulmonary disease by encouraging regular walking: A mixed-methods study exploring opinions and preferences. J. Med. Internet Res. 19(4), e124 (2017)
Purpura, S., Schwanda, V., Williams, K., Stubler, W., Sengers, P.: Fit4life: the design of a persuasive technology promoting healthy behavior and ideal weight. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (2011)
Oyibo, K.: Designing culture-based persuasive technology to promote physical activity among university students. In: Proceedings of the 2016 Conference on User Modeling Adaptation and Personalization. ACM (2016)
Srisawangwong, P., Kasemvilas, S.: Mobile persuasive technology: a review on thai elders health service opportunity. In: 2014 14th International Symposium on Communications and Information Technologies (ISCIT). IEEE (2014)
Yoganathan, D., Sangaralingam, K.: Designing fitness apps using persuasive technology: a text mining approach. In: Pacific Asia Conference on Information Systems PACIS (2015)
Yoganathan, D., Kajanan, S.: Persuasive technology for smartphone fitness apps. In: Pacific Asia Conference on Information Systems PACIS 2013 Proceedings. AIS, p. 185 (2013)
Schnall, R., Bakken, S., Rojas, M., Travers, J., Carballo-Dieguez, A.: mHealth technology as a persuasive tool for treatment, care and management of persons living with HIV. AIDS Behav. 19(S2), 81–89 (2015)
Swindell, J.S., McGuire, A.L., Halpern, S.D.: Beneficent persuasion: techniques and ethical guidelines to improve patients’ decisions. Ann. Fam. Med. 8(3), 260–264 (2010)
Besnard, P., Hunter, A.: Elements of Argumentation, vol. 47. MIT press, Cambridge (2008)
McDermott, D., Doyle, J.: Non-monotonic logic I. Artif. Intell. 13(1–2), 41–72 (1980)
Ribeiro, J.S., Nayak, A., Wassermann, R.: Belief change and non-monotonic reasoning sans compactness. Proc. AAAI Conf. Artif. Intell. 33, 3019–3026 (2019)
Shakerin, F., Gupta, G.: Induction of non-monotonic logic programs to explain boosted tree models using lime. Proc. AAAI Conf. Artif. Intell. 33, 3052–3059 (2019)
Brewka, G., Thimm, M., Ulbricht, M.: Strong inconsistency in nonmonotonic reasoning. In: IJCAI, pp. 901–907 (2017)
Governatori, G., Maher, M.J., Antoniou, G., Billington, D.: Argumentation semantics for defeasible logic. J. Log. Comput. 14(5), 675–702 (2004)
Nute, D.: Defeasible logic. In: International Conference on Applications of Prolog. Springer, pp. 151–169 (2001)
García, A.J., Simari, G.R.: Defeasible logic programming: an argumentative approach. Theory Pract. Logic Program. 4(1–2), 95–138 (2004)
Pearl, J.: Probabilistic Reasoning in Intelligent Systems, vol. 88. Elsevier, Oxford (2014)
Waldmann, M.: The Oxford Handbook of Causal Reasoning. Oxford University Press, Oxford (2017)
Yager, R.R., Zadeh, L.A.: An Introduction to Fuzzy Logic Applications in Intelligent Systems, vol. 165. Springer, Berlin (2012)
Dubucs, J.: Feasibility in logic. Synthese 132(3), 213–237 (2002)
Carrera, Á., Iglesias, C.A.: A systematic review of argumentation techniques for multi-agent systems research. Artif. Intell. Rev. 44(4), 509–535 (2015)
Baumann, R., Brewka, G.: AGM meets abstract argumentation: expansion and revision for dung frameworks. In: Twenty-Fourth International Joint Conference on Artificial Intelligence (2015)
Bondarenko, A., Dung, P.M., Kowalski, R.A., Toni, F.: An abstract, argumentation-theoretic approach to default reasoning. Artif. Intell. 93(1–2), 63–101 (1997)
Dung, P.M.: On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artif. Intell. 77(2), 321–357 (1995)
Amgoud, L., Dimopoulos, Y., Moraitis, P.: Making decisions through preference-based argumentation. KR 8, 963–970 (2008)
Amgoud, L., Vesic, S.: A new approach for preference-based argumentation frameworks. Ann. Math. Artif. Intell. 63(2), 149–183 (2011)
Amgoud, L., Cayrol, C.: Inferring from inconsistency in preference-based argumentation frameworks. J. Autom. Reason. 29(2), 125–169 (2002)
Kaci, S., van der Torre, L.: Preference-based argumentation: arguments supporting multiple values. Int. J. Approx. Reason. 48(3), 730–751 (2008)
Amgoud, L., Cayrol, C.: On the acceptability of arguments in preference-based argumentation (2013). arXiv preprint arXiv:1301.7358
Verheij, B., et al.: Valued-based argumentation for tree-like value graphs. In: Computational Models of Argument: Proceedings of COMMA 2012. Frontiers in Artificial Intelligence and Applications, vol. 245, p. 378. IOS Press (2012)
Alsinet, T., Argelich, J., Béjar, R., Fernández, C., Mateu, C., Planes, J.: Weighted argumentation for analysis of discussions in twitter. Int. J. Approx. Reason. 85, 21–35 (2017)
Dung, P.M., Thang, P.M.: Fundamental properties of attack relations in structured argumentation with priorities. Artif. Intell. 255, 1–42 (2018)
Dung, P.M., Kowalski, R.A., Toni, F.: Assumption-based argumentation. In: Argumentation in Artificial Intelligence. Springer, pp. 199–218 (2009)
Calvaresi, D., DicenteCid, Y., Marinoni, M., Dragoni, A.F., Najjar, A., Schumacher, M.: Real-time multi-agent systems: rationality, formal model, and empirical results. Auton. Agent. Multi-Agent Syst. 35(1), 1–37 (2021)
Rahwan, I., Ramchurn, S.D., Jennings, N.R., Mcburney, P., Parsons, S., Sonenberg, L.: Argumentation-based negotiation. Knowl. Eng. Rev 18(4), 343–375 (2003)
Sierra, C., Jennings, N.R., Noriega, P., Parsons, S.: A framework for argumentation-based negotiation. In: International Workshop on Agent Theories, Architectures, and Languages. Springer, pp. 177–192 (1997)
Amgoud, L., Dimopoulos, Y., Moraitis, P.: A unified and general framework for argumentation-based negotiation. In: Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems, pp. 1–8 (2007)
Jennings, N.R., Parsons, S., Noriega, P., Sierra, C.: On argumentation-based negotiation. In: Proceedings of the International Workshop on Multi-Agent Systems. Citeseer, pp. 1–7 (1998)
El-Sisi, A.B., Mousa, H.M.: Argumentation based negotiation in multiagent system. In: 2012 Seventh International Conference on Computer Engineering & Systems (ICCES). IEEE, pp. 261–266 (2012)
Thaler, R.H., Sunstein, C.R.: Nudge: Improving Decisions About Health, Wealth, and Happiness. Yale University Press, New Haven (2008)
Kampik, T., Nieves, J.C., Lindgren, H.: Coercion and deception in persuasive technologies. In: 20th International Trust Workshop (co-located with AAMAS/IJCAI/ECAI/ICML 2018), Stockholm, Sweden, 14 July, 2018. CEUR-WS, pp. 38–49 (2018)
Elsweiler, D., Hauptmann, H., Trattner, C.: Food recommender systems. In: Recommender Systems Handbook. Springer, Berlin, pp. 871–925 (2022)
Wangmo, T., Lipps, M., Kressig, R.W., Ienca, M.: Ethical concerns with the use of intelligent assistive technology: findings from a qualitative study with professional stakeholders. BMC Med. Ethics 20(1), 1–11 (2019)
Vayena, E., Haeusermann, T., Adjekum, A., Blasimme, A.: Digital health: meeting the ethical and policy challenges. Swiss Med. Wkly. 148, w14571 (2018)
Egnell, M., Talati, Z., Galan, P., Andreeva, V.A., Vandevijvere, S., Gombaud, M., Dréano-Trécant, L., Hercberg, S., Pettigrew, S., Julia, C.: Objective understanding of the nutri-score front-of-pack label by European consumers and its effect on food choices: An online experimental study. Int. J. Behav. Nutr. Phys. Act. 17(1), 1–13 (2020)
Calandrino, J.A., Kilzer, A., Narayanan, A., Felten, E.W., Shmatikov, V.: you might also like: privacy risks of collaborative filtering. In: 2011 IEEE symposium on security and privacy. IEEE, pp. 231–246 (2011)
Ji, Y., Sun, A., Zhang, J., Li, C.: A critical study on data leakage in recommender system offline evaluation (2020). arXiv preprint arXiv:2010.11060
Xin, Y., Jaakkola, T.: Controlling privacy in recommender systems. Adv. Neural Inf. Process. Syst., 27 (2014)
Ten Hagen, S., Van Someren, M., Hollink, V., et al.: Exploration/exploitation in adaptive recommender systems. In: Proceedings of EUNITE 2003 (2003)
Zhang, Y., Chen, X., et al.: Explainable recommendation: a survey and new perspectives. Found. Trends Inf. Retr. 14(1), 1–101 (2020)
Soutjis, B.: The new digital face of the consumerist mediator: the case of the ‘yuka’ mobile app. J. Cult. Econ. 13(1), 114–131 (2020)
Van Alstyne, M., Brynjolfsson, E., et al.: Electronic communities: global village or cyberbalkans. In: Proceedings of the 17th International Conference on Information Systems. Wiley, New York, p. 32 (1996)
Knijnenburg, B.P., Willemsen, M.C., Gantner, Z., Soncu, H., Newell, C.: Explaining the user experience of recommender systems. User Model. User-Adap. Inter. 22(4), 441–504 (2012)
Konstan, J.A., Riedl, J.: Recommender systems: from algorithms to user experience. User Model. User-Adap. Inter. 22(1), 101–123 (2012)
Bozdag, E., Van Den Hoven, J.: Breaking the filter bubble: democracy and design. Ethics Inf. Technol. 17(4), 249–265 (2015)
Levy, G., Razin, R.: Echo chambers and their effects on economic and political outcomes. Annu. Rev. Econ. 11, 303–328 (2019)
Kampik, T., Najjar, A.: Technology-facilitated societal consensus. In: Adjunct Publication of the 27th Conference on User Modeling, Adaptation and Personalization, pp. 3–7 (2019)
Rahwan, I., Simari, G.R.: Argumentation in Artificial Intelligence, vol. 47. Springer, Berlin (2009)
Atkinson, K., Bench-Capon, T.J.M., McBurney, P.: Multi-agent argumentation for edemocracy. In: EUMAS, pp. 35–46 (2005)
Schreier, M., Groeben, N., Christmann, U.: “that’s not fair!’’ argumentational integrity as an ethics of argumentative communication. Argumentation 9(2), 267–289 (1995)
Walton, D.N.: Informal Logic: A Handbook for Critical Argument. Cambridge University Press, Cambridge (1989)
McBurney, P., Parsons, S., Wooldridge, M.: Desiderata for agent argumentation protocols. In: Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems: Part 1, pp. 402–409 (2002)
Black, E., Hunter, A.: An inquiry dialogue system. Auton. Agent. Multi-Agent Syst. 19(2), 173–209 (2009)
Sabater-Mir, J., Vercouter, L.: Trust and reputation in multiagent systems. In: Multiagent Systems, p. 381 (2013)
Fan, X., Toni, F.: On computing explanations in argumentation. In: Twenty-Ninth AAAI Conference on Artificial Intelligence (2015)
Hedman, A., Lindqvist, E., Nygård, L.: How older adults with mild cognitive impairment relate to technology as part of present and future everyday life: a qualitative study. BMC Geriatr. 16(1), 1–12 (2016)
Mahoney, D.F., Purtilo, R.B., Webbe, F.M., Alwan, M., Bharucha, A.J., Adlam, T.D., Jimison, H.B., Turner, B., Becker, S.A., Working Group on Technology of the Alzheimer’s Association: In-home monitoring of persons with dementia: ethical guidelines for technology research and development. Alzheimer’s Dement. 3(3), 217–226 (2007)
Albanese, G., Calbimonte, J.-P., Schumacher, M., Calvaresi, D.: Dynamic consent management for clinical trials via private blockchain technology. J. Ambient. Intell. Humaniz. Comput. 11(11), 4909–4926 (2020)
Obar, J.A., Oeldorf-Hirsch, A.: The biggest lie on the internet: ignoring the privacy policies and terms of service policies of social networking services. Inf. Commun. Soc. 23(1), 128–147 (2020)
Schermer, M.: Nothing but the truth? on truth and deception in dementia care. Bioethics 21(1), 13–22 (2007)
Vayena, E.: Strictly biomedical? Sketching the ethics of the big data ecosystem in biomedicine. In: The Ethics of Biomedical Big Data, pp. 17–39. Springer, Berlin (2016)
Carli, R., Najjar, A., Calvaresi, D.: Risk and exposure of xai in persuasion and argumentation: the case of manipulation. In: International Workshop on Explainable, Transparent Autonomous Agents and Multi-Agent Systems, pp. 204–220. Springer, Berlin (2022)
Milliman, R.E., Fugate, D.L.: Using trust-transference as a persuasion technique: an empirical field investigation. J. Person. Sell. Sales Manag. 8(2), 1–7 (1988)
Compen, N., Ham, J., Spahn, A.: Sustainability coaches: a better environment starts with your coach. In: Sincere Support, p. 105
Holvast, J.: History of privacy. In: The History of Information Security, pp. 737–769. Elsevier, Oxford (2007)
Rajagopalan, S., Thaler, R.H.: Misbehaving: the making of behavioral economics. Rev. Aust. Econ. 30(1), 137–141 (2017)
Floridi, L.: Translating principles into practices of digital ethics: five risks of being unethical. In: Ethics, Governance, and Policies in Artificial Intelligence, pp. 81–90. Springer, Berlin (2021)
Wagner, B.: Ethics as an escape from regulation: from “ethics-washing’’ to ethics-shopping? In: Being Profiled, pp. 84–89. Amsterdam University Press, Amsterdam (2018)
Bostrom, N., Yudkowsky, E.: The ethics of artificial intelligence. Camb. Handb. Artif. Intell. 1, 316–334 (2014)
Donnelly, J.: Universal human rights in theory and practice. In: Universal Human Rights in Theory and Practice. Cornell University Press, Ithaca (2013)
de Kleijn, M., Siebert, M., Huggett, S.: How knowledge is created, transferred and used. Artificial intelligence (2017)
Pichai, S.: Ai at google: our principles. Keyword 7, 1–3 (2018)
Renda, A., et al.: Artificial Intelligence. Ethics, Governance and Policy Challenges. CEPS Centre for European Policy Studies (2019)
West, D.M.: The role of corporations in addressing AI’s ethical dilemmas. Blog Post (2018)
Boddington, P.: Towards a Code of Ethics for Artificial Intelligence. Springer, Berlin (2017)
Russell, S., Hauert, S., Altman, R., Veloso, M.: Ethics of artificial intelligence. Nature 521(7553), 415–416 (2015)
Carli, R.: Social robotics and deception: beyond the ethical approach. In: Proceedings of BNAIC/BeneLearn 2021 (2021)
Walton, D.: Argumentation Methods for Artificial Intelligence in Law. Springer, Berlin (2005)
Scherer, M.U.: Regulating artificial intelligence systems: Risks, challenges, competencies, and strategies. Harv. JL Tech. 29, 353 (2015)
HLEG AI. High-level expert group on artificial intelligence (2019)
Wachter, S., Mittelstadt, B.: A right to reasonable inferences: re-thinking data protection law in the age of big data and AI. Colum. Bus. L. Rev., 494 (2019)
Goodman, B., Flaxman, S.: European union regulations on algorithmic decision-making and a “right to explanation’’. AI Mag. 38(3), 50–57 (2017)
Bertolini, A.: A functional approach to the regulation of AI. In: A Functional Approach to the Regulation of AI., pp. 285–296 (2020)
Coons, C., Weber, M.: Manipulation: Theory and Practice. Oxford University Press, Oxford (2014)
Bertolini, A., Carli, R.: Human–robot interaction and user manipulation. In: International Conference on Persuasive Technology. Springer, pp. 43–57 (2022)
Calamandrei, P.: Discorso sulla Costituzione. Cetra (1955)
Kool, W., Botvinick, M.: Mental labour. Nat. Hum. Behav. 2(12), 899–908 (2018)
Kirchgässner, G.: Homo Economics: The Economic Model of Behaviour and Its Applications in Economics and Other Social Sciences, vol. 6. Springer, Berlin (2008)
Sibony, A.-L.: Can eu consumer law benefit from behavioural insights? An analysis of the unfair practices directive. Eur. Rev. Priv. Law 22(6) (2014)
Helleringer, G., Sibony, A.-L.: European consumer protection through the behavioral lens. Colum. J. Eur. L. 23, 607 (2016)
Reidenberg, J.R., Bhatia, J., Breaux, T.D., Norton, T.B.: Ambiguity in privacy policies and the impact of regulation. J. Leg. Stud. 45(S2), S163–S190 (2016)
Sherman, E.: Privacy policies are great-for phds. CBS News 4 (2008)
Zarsky, T.Z.: Incompatible: the GDPR in the age of big data. Seton Hall L. Rev. 47, 995 (2016)
Degeling, M., Utz, C., Lentzsch, C., Hosseini, H., Schaub, F., Holz, T.: We value your privacy... now take some cookies: measuring the GDPR’s impact on web privacy. arXiv preprint arXiv:1808.05096 (2018)
Acquisti, A., Grossklags, J.: Privacy and rationality in individual decision making. IEEE Secur. Privacy 3(1), 26–33 (2005)
Willett, W., Rockström, J., Loken, B., Springmann, M., Lang, T., Vermeulen, S., Garnett, T., Tilman, D., DeClerck, F., Wood, A., et al.: Food in the anthropocene: the eat-lancet commission on healthy diets from sustainable food systems. Lancet 393(10170), 447–492 (2019)
Gold, K., McBurney, R.P.H.: Sustainable Diets and Biodiversity: Directions and Solutions for Policy, Research and Action, pp. 108–114. Food and Agriculture Organization of the United Nations, Rome (2010)
Burlingame, B., Dernini, S.: Sustainable Diets and Biodiversity Directions and Solutions for Policy, Research and Action. FAO Headquarters, Rome (2012)
Corrado, S., Luzzani, G., Trevisan, M., Lamastra, L.: Contribution of different life cycle stages to the greenhouse gas emissions associated with three balanced dietary patterns. Sci. Total Environ. 660, 622–630 (2019)
Ulaszewska, M.M., Luzzani, G., Pignatelli, S., Capri, E.: Assessment of diet-related GHG emissions using the environmental hourglass approach for the Mediterranean and new nordic diets. Sci. Total Environ. 574, 829–836 (2017)
Harris, F., Moss, C., Joy, E.J.M., Quinn, R., Scheelbeek, P.F.D., Dangour, A.D., Green, R.: The water footprint of diets: a global systematic review and meta-analysis. Adv. Nutr. 11(2), 375–386 (2020)
Zucchinelli, M., Spinelli, R., Corrado, S., Lamastra, L.: Evaluation of the influence on water consumption and water scarcity of different healthy diet scenarios. J. Environ. Manag. 291, 112687 (2021)
Sabate, J.: Religion, diet and research. Br. J. Nutr. 92(2), 199–201 (2004)
Mekaj, Y.H., Mekaj, A.Y., Duci, S.B., Miftari, E.I.: New oral anticoagulants: their advantages and disadvantages compared with vitamin k antagonists in the prevention and treatment of patients with thromboembolic events. Ther. Clin. Risk Manag. 11, 967 (2015)
Fukunaga, W., Satomura, Y.: The environment and conditions for the establishment of internet medicine. Chiba Med. Soc. 81, 47–57 (2005)
Bianchi, M., Strid, A., Winkvist, A., Lindroos, A.-K., Sonesson, U., Hallström, E.: Systematic evaluation of nutrition indicators for use within food lca studies. Sustainability 12(21), 8992 (2020)
Smetana, S., Mathys, A., Knoch, A., Heinz, V.: Meat alternatives: life cycle assessment of most known meat substitutes. Int. J. Life Cycle Assess. 20(9), 1254–1267 (2015)
Craig, W.J.: Nutrition concerns and health effects of vegetarian diets. Nutr. Clin. Pract. 25(6), 613–620 (2010)
Margaret, A., Donato, C., Domenico, M., Angeloantonio, R., Vincenzo, V., et al.: Information or confusion? the role of ecolabels in agrifood sector. Anal. Univ. Oradea Fascic. Ecotoxicol. Zooteh. si Tehnol. de Ind. Aliment., 14(A), 187–195 (2015)
Wang, F., Basso, F.: The peak of health: The vertical representation of healthy food. Appetite 167, 105587 (2021)
Schwartz, J.L., Vernarelli, J.A.: Assessing the public’s comprehension of dietary guidelines: use of mypyramid or myplate is associated with healthier diets among us adults. J. Acad. Nutr. Diet. 119(3), 482–489 (2019)
Samuels, K.L., Maine, M.M., Tantillo, M.: Disordered eating, eating disorders, and body image in midlife and older women. Curr. Psychiatry Rep. 21(8), 1–9 (2019)
Khan, M.A., Muhammad, K., Smyth, B., Coyle, D.: Investigating health-aware smart-nudging with machine learning to help people pursue healthier eating-habits (2021). arXiv preprint arXiv:2110.07045
Damgaard, M.T., Nielsen, H.S.: Nudging in education. Econ. Educ. Rev. 64, 313–342 (2018)
Clark, M.A., Springmann, M., Hill, J., Tilman, D.: Multiple health and environmental impacts of foods. Proc. Natl. Acad. Sci. 116(46), 23357–23362 (2019)
Green, A., Nemecek, T., Smetana, S., Mathys, A.: Reconciling regionally-explicit nutritional needs with environmental protection by means of nutritional life cycle assessment. J. Clean. Prod. 312, 127696 (2021)
González, N., Marquès, M., Nadal, M., Domingo, J.L.: Meat consumption: which are the current global risks? a review of recent (2010–2020) evidences. Food Res. Int. 137, 109341 (2020)
Etemadi, A., Sinha, R., Ward, M.H., Graubard, B.I., Inoue-Choi, M., Dawsey, S.M., Abnet, C.C.: Mortality from different causes associated with meat, heme iron, nitrates, and nitrites in the NIH-AARP diet and health study: population based cohort study. BMJ 357 (2017)
Vatanparast, H., Islam, N., Shafiee, M., Ramdath, D.D.: Increasing plant-based meat alternatives and decreasing red and processed meat in the diet differentially affect the diet quality and nutrient intakes of canadians. Nutrients 12(7), 2034 (2020)
Chouraqui, J.-P., Turck, D., Briend, A., Darmaun, D., Bocquet, A., Feillet, F., Frelut, M.-L., Girardet, J.-P., Guimber, D., Hankard, R., et al.: Religious dietary rules and their potential nutritional and health consequences. Int. J. Epidemiol. 50(1), 12–26 (2021)
Funding
Open access funding provided by University of Applied Sciences and Arts Western Switzerland (HES-SO). This work has been partially supported by the Chist-Era grant CHIST-ERA-19-XAI-005, and by (i) the Swiss National Science Foundation (G.A. 20CH21_195530), (ii) the Italian Ministry for Universities and Research, (iii) the Luxembourg National Research Fund (G.A. INTER/CHIST/19/14589586), (iv) the Scientific and Research Council of Turkey (TÜBİTAK, G.A. 120N680).
Author information
Authors and Affiliations
Contributions
Davide Calvaresi: main writer of the whole paper, topics co-writing coordinator, and revisor. Rachele Carli: first writing of the ethical to legal concerns topic. Jean-Gabriel Piguet: first writing of the virtual coaches as nudges topic. Victor Hugo Contreras: collaborated on the state-of-the-art topics and ethical challenges. Gloria Luzzani: first writing of the sustainability as ethical dimension topic. Amro Najjar: collaborated on the state-of-the-art topics and ethical challenges. Jean-Paul Calbimonte: revision and contribution to ethical challenges. Michael Schumacher: revision and coordination.
Corresponding author
Ethics declarations
Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix 1: Ethical and legal challenges from a professional nutritionist perspective
The practical translation of the ethical and legal challenges formulated above into the nutrition domain follows. In particular, in collaboration with a professional nutritionist, we have constructed practical NVC scenarios that could occur in the real world. The outcome of this analysis has been organized w.r.t. the corresponding ethical and legal concerns.
1.1 Practical nutrition-centered examples of ethical NVC challenges in nutrition
1.1.1 Personalized food recommender system
- EC1.1: Do not recommend a snack with Nutri-Score D or E to users with a history of diabetes; such scores are associated with high sodium and saturated fat content.
- EC1.2: Sensitive information such as allergies could be used by enterprises to sell products targeted at this segment. An NVC could instead recommend food products that are overall low in allergens to people with different types of allergies, without highlighting this information for any particular user.
- EC1.3: Recommendations based on a specific product or brand can be used to change habits. This can be avoided if recommendations are kept sufficiently general, so that users can choose based on their own previous preferences.
- EC1.4: Favor products with an open and transparent production chain; avoid recommending brands or products from companies with unclear procedures or missing information about their production chain.
- EC1.5: Recommend other types of vegetable oils or fats, such as canola oil, in the Mediterranean region to offer users a variety of fair options.
- EC1.6: Debunk or clarify pseudo-nutrition information from internet groups using scientific evidence from international guidelines.
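Constraints of the EC1.x kind can be enforced as hard filters applied before any item reaches the recommendation stage. The sketch below is a minimal illustration under assumed data structures; the item fields, the Nutri-Score scale encoding, and the user profile are our own illustrative assumptions, not an API from the paper.

```python
# Hedged sketch of a pre-recommendation safety filter (EC1.1, EC1.2).
# All field names and profiles here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class FoodItem:
    name: str
    nutri_score: str                      # "A" (best) .. "E" (worst)
    allergens: set = field(default_factory=set)

@dataclass
class UserProfile:
    has_diabetes: bool = False
    allergies: set = field(default_factory=set)

def safe_recommendations(items, user):
    """Drop items that violate hard health constraints for this user."""
    result = []
    for item in items:
        # EC1.1: no Nutri-Score D/E snacks for users with diabetes
        if user.has_diabetes and item.nutri_score in {"D", "E"}:
            continue
        # EC1.2: never surface items containing the user's allergens
        if item.allergens & user.allergies:
            continue
        result.append(item)
    return result

items = [
    FoodItem("apple", "A"),
    FoodItem("candy bar", "E"),
    FoodItem("peanut snack", "B", {"peanut"}),
]
user = UserProfile(has_diabetes=True, allergies={"peanut"})
print([i.name for i in safe_recommendations(items, user)])  # ['apple']
```

Note that the filter runs on the full candidate list for every user, so the sensitive attributes never influence which items are promoted, only which are suppressed, in line with EC1.2.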
1.1.2 Argumentative systems
- EC2.1: Pregnant women need a higher total energy intake in their diet. Recommend a balanced diet with no more than 350 kcal above their previous requirements, respecting their food preferences.
- EC2.2: Nutrition recommendations need to be transparent, clear, and safe for the user. The information given must be based on scientific evidence from international guidelines, e.g., recommend drinking plain water over other drinks at all life stages, as proposed in the US Dietary Guidelines and the European food-based dietary guidelines.
-
EC2.3
nutritionists are not allowed to prescribe drugs or medicaments. The recommendations of drug use should be avoided, and instead, it should be focused on nutritious food and a balanced diet. This is moral and legally correct in the nutritionist practice.
-
EC2.4
the recommendations should include just examples of balanced meals without focusing on a specific income or social status. This information should make people think and take the best option (i.e., give arguments to choose a piece of fruit over a candy bar).
-
EC2.5
favor recommendations from guidelines created by institutions with a strong reputation for developing scientific protocols and evidence over recommendations from pseudo-nutrition groups.
-
EC2.6
the recommendations need to be easy to read and understand. The phrases need to be concise and direct to the point of the recommendation, thus reducing time and confusion of the user (i.e., recommend eating fruit over a candy bar, because the fruits have vitamins, minerals, water, and fiber over an item high in sugar, fat, and sodium.
-
EC2.7
the messages could be shown with phrases and pictures to be coherent with a balanced diet/meal. Recreating this meal should be straightforward to the user with a picture example.
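The quantitative bound in EC2.1 (at most 350 kcal above the previous requirement for pregnant users) can be expressed as a simple capped adjustment. This is a sketch under the stated assumption only; the function name and baseline value are illustrative and not taken from any NVC implementation.

```python
def adjusted_energy_target(baseline_kcal: float, pregnant: bool,
                           max_increase_kcal: float = 350.0) -> float:
    """Return the daily energy target, adding at most `max_increase_kcal`
    on top of the user's previous requirement for pregnant users (EC2.1)."""
    if pregnant:
        return baseline_kcal + max_increase_kcal
    return baseline_kcal

# Example: a previous requirement of 2000 kcal/day.
print(adjusted_energy_target(2000, pregnant=True))   # → 2350.0
print(adjusted_energy_target(2000, pregnant=False))  # → 2000
```

Encoding the bound as an explicit constant, rather than burying it in free-text messages, also supports the transparency demanded by EC2.2.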
1.1.3 Informative and assistive systems
- EC3.1: According to the code of ethics of dieticians/nutritionists, they are obligated to protect patients' information. The computer system needs to be transparent in the terms and conditions of the product. Updates to the application should be transparent to the users and request their approval if their personal data is used in new or different processes. A similar situation in nutritional counseling arises when an obese patient loses weight and then requests the deletion of their previous clinical information from their medical or system records. Such situations are still open for discussion in the field.
- EC3.2: The application needs to give accurate and truthful information. The app should be transparent regarding the handling of the patient's questions and also clear with respect to its limitations as a healthcare tool. For example, the app should avoid recommending treatments and patient management without professional supervision. Similarly, dietitians/nutritionists should avoid giving medical recommendations outside their scope of practice.
- EC3.3: People use their devices for more than one app, and conversations and videos that are unnecessary/irrelevant should not be recorded. For example, people could be having a conversation at the same time as they ask the app for a meal recommendation; the app should record only the pictures and questions from the user.
  In standard nutritional counseling, the patient can talk about their life, but the conversation must remain professional, keeping additional information confidential. The information should not be disclosed to any other (external) system or person unless the patient needs multidisciplinary nutrition management.
- EC3.4: An app is a good option for users to start building nutrition knowledge when nutrition counseling is expensive or they are unaware of its benefits. For example, suppose a user wants better performance in their daily life and wants to eat more sustainably but cannot afford nutrition counseling. In that case, the app is an excellent option to increase their nutrition knowledge and could contribute to the early promotion of new healthy habits in their daily life.
- EC3.5: Users with complex needs or with additional diagnoses require more interaction with a healthcare professional to discuss their doubts. For example, suppose a user has a disease and, in addition, wants to start a sports regime due to their condition. In this case, it is ideal to follow standard nutritional counseling and use the app as a support and/or complementary source of information/knowledge.
1.1.4 Persuasive technologies and processes
- EC4.1: Highlight the changes in the diet proposed by the system in comparison with the user's initial dietary habits.
- EC4.2: The system should be transparent about how it intends to persuade the user. In nutrition recommendations, the program should show the health goals to be achieved when the user interacts with the app (e.g., the goal of losing between 0.5 and 1 kg of weight per week for obese users, together with the strategies to achieve it). The rules need to be clear to the user in the case of nutrition recommendations. Changing habits and promoting a healthy diet need to remain within the health boundary and not affect other areas (e.g., a healthy diet can be shown as a plate divided into all food groups, displaying different items). This makes it easy for the user to choose the items most appropriate to their culture, beliefs, health status, etc.
- EC4.3: If the user is at risk of developing a mental/health disorder, such as an eating disorder, using an AI system could be beneficial. In these cases, the system should perform a continuous follow-up of the user's dietary changes with interactive feedback for earlier detection of unintended outcomes.
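The transparency requirement in EC4.2 implies that persuasive goals should be explicit and bounded rather than hidden in the interaction design. A minimal sketch, assuming the 0.5–1 kg/week range mentioned above for obese users (the function and message format are hypothetical):

```python
SAFE_WEEKLY_LOSS_KG = (0.5, 1.0)  # range for obese users, per EC4.2

def validate_weight_goal(kg_per_week: float) -> tuple:
    """Check a proposed weekly weight-loss goal against the safe range
    and produce the explicit message shown to the user."""
    low, high = SAFE_WEEKLY_LOSS_KG
    ok = low <= kg_per_week <= high
    if ok:
        msg = f"Goal: lose {kg_per_week} kg per week (within the {low}-{high} kg safe range)."
    else:
        msg = f"Goal of {kg_per_week} kg per week is outside the {low}-{high} kg safe range."
    return ok, msg

ok, msg = validate_weight_goal(0.75)
print(ok)  # → True
```

Surfacing the bound and the resulting message to the user, instead of silently nudging, keeps the persuasion strategy inspectable.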
1.2 Practical nutrition-centered examples of legal NVC challenges in nutrition
1.2.1 Personalized food recommender system
- LC1.1: Let us assume that a user with hypertension receives from the NVC a recommendation for a food product containing a high amount of sodium. This can severely harm their health. Thus, the user needs to be self-aware and sufficiently knowledgeable to ignore this specific recommendation and report it to the service provider.
- LC1.2: The messages are only recommendations, leaving the freedom to choose (e.g., recommend eating a piece of fruit). The user remains free to choose other snacks, regardless of the suggestion or the effects on their health.
- LC1.3: The NVC should recommend local food regardless of brands, at prices affordable for the majority of the population, and provide as many options as possible.
- LC1.4: The user needs to be inclined to acquire scientifically proven knowledge and avoid unrealistic expectations (e.g., achieving a body-builder physique in a month). Overall, the user's effort and continuous adherence to the recommendations are crucial for impacting their habits.
1.2.2 Argumentative systems
- LC2.1: The data used to make the recommendations should come from international guidelines that can be applied to all interested populations.
1.2.3 Informative and assistive systems
- LC3.1: Weight reduction by a teenager without supervision could lead to an eating disorder, such as anorexia or bulimia. The recommendations are tailored to different user profiles; in the case of teenagers, children, or people prone to mental illness, the NVC needs to be supervised by an adult (i.e., a tutor or a relative).
- LC3.2: The diet recommendations should be provided on a daily basis for single-user routines. However, eating and cooking should be encouraged as social interactions.
- LC3.3: Including older adults among the possible recipients of dietary recommendations can improve inclusiveness. They can include simple acts in their daily routine that can prevent diseases/degeneration and improve their quality of life.
1.2.4 Persuasive technologies
- LC4.1: The dietary propositions and recommendations are designed to change the habits and behaviors that affect the users' health, with the aim of a healthier lifestyle but without inducing personality changes or psychological harm. For example, the recommendation to eat a balanced breakfast prevents diseases without changing the user's personality.
- LC4.2: Based on the recommendations, the user should have the skills to pick the meal or snack that matches their own preferences, culture, residency, local production, beliefs, and/or financial status.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Calvaresi, D., Carli, R., Piguet, JG. et al. Ethical and legal considerations for nutrition virtual coaches. AI Ethics 3, 1313–1340 (2023). https://doi.org/10.1007/s43681-022-00237-6