1 Introduction

The prospect of individuals receiving automated, tailored support to improve their self-regulation is thought to hold promise not just for empowering a select group of technologically inclined “lifehack enthusiasts” but for making society-wide progress in addressing a number of pressing issues. For example, it has been suggested that scaffolding aspects of people’s self-regulation processes may help meet sustainability targets (e.g., by helping people monitor and change their energy expenditures [4, 34]), curb increasing household debt (e.g., by providing financial insights and support with purchasing decisions [35, 64]), and, perhaps most saliently, ease current and future healthcare burden (e.g., by supporting people in their efforts to adopt and maintain healthy lifestyles (cf. [5, 48, 56])). In light of the ongoing developments in the field of Artificial Intelligence (AI), it is increasingly difficult to ignore the possibility of a future in which individuals can be supported in these various domains by interactive, personalized “e-coaching systems”.

Despite their projected individual and societal benefits, emerging e-coaching systems, like many AI-driven technologies, raise various ethical concerns (cf. [4, 20, 78]), including prominent concerns about the risks that large-scale data collection and (hyper)nudging pose to informational and decisional privacy (e.g., [41, 73, 76]). While these concerns certainly warrant recognition, we want to draw attention in this paper to a different set of concerns, namely those having to do with social justice. We are not, of course, the only authors to emphasize the need for more discussion of issues around digital technologies in relation to what is owed to people as free and equal members of society. In relation to digital health applications, for example, Paldan, Sauer & Wagner [55] have looked at ways in which self-monitoring applications may lead to health inequalities. Likewise, Brall, Schröder-Bäck and Maeckelberghe [8] have identified potential issues of justice stemming from digital transformations in healthcare, and Figueroa et al. [18] have offered a guide with topics and questions for social justice digital health research. In relation to AI more generally, Buccella [10] has recently argued that access to AI (in its many forms) should be considered necessary for social justice.

Our aim is to contribute to the ethics of e-coaching by identifying how societal pressures towards the widespread adoption of automated e-coaching raise concerns in relation to social justice. In so doing, we foreground normative issues that to date have received insufficient attention in the e-coaching literature. In what follows, we will identify and elaborate three sets of social justice concerns raised by e-coaching systems: unequal access to e-coaching technologies, the potential for unequally distributed liberty restrictions, and the potentially disparate impact of the use of e-coaching systems on (self-)stigmatizing perceptions of competence. Before concluding, we will propose a research agenda for studying and addressing these concerns. First, however, we will further specify the kinds of technologies we will be considering.

2 E-coaching systems

The term “e-coaching” by itself does not disambiguate between the process of coaching as performed by a human coach through an online platform and coaching by an automated, digital entity. By extension, the term “e-coaching system” can also be understood differently, depending on whether one takes the perspective of computer-mediated communication or of human–computer interaction. On the former perspective, any technology used as an intermediate communication medium in a digital coaching practice (e.g., monitoring a coachee’s behavior on Facebook or providing feedback via email) can be considered an e-coaching system. On the latter perspective, however, the term “e-coaching system” denotes something more specific, namely technologies that engage in the actual coaching.

Though there are different perspectives on what it means to engage in coaching, coaching is typically characterized as a collaborative enterprise between coach and coachee in which the coach assists the coachee in the identification and pursuit of personal goals. As Ives puts it, “[t]he primary method is assisting the client to identify and form well-crafted goals and develop an effective action plan” [33, p. 102]. Coaching thus differs from various forms of (mere) decision support, at least to the extent that such decision support is understood in the narrow sense of suggesting, for a given decision, which option is preferable given some metric of efficiency (e.g., the minimization of economic costs). Crucial for e-coaching systems is their ability to engage in an ongoing dialogue with users to aid both planning (identifying means to an end) and the follow-through of one’s plans in pursuit of one’s goals (e.g., by offering support in overcoming intention-behavior gaps).

To further clarify what we mean by e-coaching systems, we adopt the following definition from Kamphorst [36]:

E-Coaching System. An e-coaching system is a set of computerized components that constitutes an artificial entity that can observe, reason about, learn from and predict a user’s behaviors, in context and over time, and that engages proactively in an ongoing collaborative conversation with the user in order to aid planning and promote effective goal striving through the use of persuasive techniques.

Viewing e-coaching systems through the lens of this definition has three key implications. First, it suggests that e-coaching systems should be distinguished from more basic self-regulation support tools such as calendar-driven reminder systems or sensor-based notification apps. For where those kinds of systems, barring malfunctioning, essentially do as they are instructed (e.g., sound an alarm at a certain time or event), e-coaching systems are designed to utilize AI to learn from user input and observed behavior, adapt to preferences, and support individuals at the different stages of self-regulation by (proactively) suggesting potential plans for action and offering persuasive mechanisms to stay on track.

Two examples will help illuminate the difference. First, in a health and lifestyle context, consider the difference between, on the one hand, a scheduling app that allows individuals to program daily reminders for themselves to exercise, take supplements, eat healthily, etc., and, on the other hand, a system that unobtrusively monitors behavior and, through data analysis and predictive modeling techniques, estimates the most opportune moments to engage in a supportive dialogue. Whereas the former system simply restates the users’ own input, the latter may offer various kinds of support, for example by prompting users to reflect on their overall goals in moments of weakness, helping to strike a balance between personal values, or training users to craft more effective, viable plans. Likewise, in an employment context, consider the difference between a calendar app that prompts individuals about their upcoming meetings and lists tasks, and a system that engages in a back-and-forth to help organize and prioritize one’s tasks and meetings, suggests ad hoc breaks when concentration is lagging, and helps create a distraction-free environment for certain periods of time. It is these more advanced types of systems that enhance people’s capacities through their continuous engagement and feedback that we consider e-coaching systems.
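
To make this contrast more concrete, the sketch below caricatures the two kinds of systems in a few lines of Python. The fixed reminder merely restates the user’s own input, while the coach-like loop passively observes hourly receptivity, learns a simple running estimate from those observations, and proactively opens a supportive dialogue only at predicted opportune moments. All names (FixedReminder, ECoachSketch) and the toy receptivity “model” are our own illustrative assumptions rather than a description of any existing system; a real e-coaching system would rely on far richer sensing, predictive modeling, and dialogue management.

```python
import random
from collections import defaultdict
from dataclasses import dataclass, field


@dataclass
class FixedReminder:
    """Restates the user's own input: fires at a preset hour, nothing more."""
    hour: int
    message: str

    def tick(self, current_hour: int) -> str | None:
        return self.message if current_hour == self.hour else None


@dataclass
class ECoachSketch:
    """Toy stand-in for an e-coach: observes, learns, predicts, then engages.

    Keeps a per-hour estimate of how receptive the user tends to be (learned
    from passive observations) and only initiates a supportive conversation
    when the predicted receptivity at the current hour exceeds a threshold.
    """
    threshold: float = 0.6
    learning_rate: float = 0.2
    receptivity: dict = field(default_factory=lambda: defaultdict(lambda: 0.5))

    def observe(self, hour: int, user_was_receptive: bool) -> None:
        # Learn from observed behavior: exponential moving average per hour.
        prior = self.receptivity[hour]
        target = 1.0 if user_was_receptive else 0.0
        self.receptivity[hour] = (1 - self.learning_rate) * prior + self.learning_rate * target

    def tick(self, current_hour: int) -> str | None:
        # Engage proactively only at predicted opportune moments.
        if self.receptivity[current_hour] >= self.threshold:
            return "You mentioned wanting to exercise more; is now a good moment for a short walk?"
        return None


if __name__ == "__main__":
    reminder = FixedReminder(hour=18, message="Time to exercise!")
    coach = ECoachSketch()

    # Simulate a week of passive monitoring in which the user happens to be
    # free and receptive mainly in the early evening.
    for day in range(7):
        for hour in range(24):
            receptive = random.random() < (0.8 if 17 <= hour <= 20 else 0.1)
            coach.observe(hour, receptive)
            reminder.tick(hour)  # fires at 18:00 regardless of context

    opportune = sorted(h for h, r in coach.receptivity.items() if r >= coach.threshold)
    print("Hours the sketch now predicts as opportune:", opportune)
    print("Prompt offered at 18:00:", coach.tick(18))
```

The design point, rather than the particular heuristic, is what matters: the reminder’s behavior is fully specified by the user, whereas the coach forms its own representation of the user and decides when and how to intervene on that basis.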

A second implication of conceptualizing e-coaching systems in this narrower way is that people ought to assess the content they receive from e-coaching systems more critically than content presented by less advanced self-regulation support tools. This is because e-coaching systems form their own “perspective” regarding a user—that is, they create representations of a user’s behavior and preferences that are not directly given by or even approved by the user—and from that perspective derive approaches for tailor-made persuasive interactions. Because individuals are not guaranteed interactions that they have endorsed in the past, they have a responsibility to retain a certain level of vigilance and to screen a system’s suggestions, at least superficially, for appropriateness. In this respect, e-coaching systems really do bear a closer resemblance to human coaches than they do to automated reminder systems.

Finally, the adopted definition implies that e-coaching systems work primarily on a psychological level, in dialogue with the coachee. Certainly, it can be imagined that certain systems, in addition to giving advice and feedback, could also control or affect certain bodily functionings more directly, for example using brain implants to directly affect the brain’s dopamine pathways. Such systems would raise different ethical concerns to those we will be taking up here, but as interventions on this level are better likened to doping than to coaching, we take these concerns (and these types of systems) to be outside the scope of this paper.

With the relevant types of systems now in view, let us turn to the subject of social justice and consider how specific aspects of social justice may be affected by the widespread adoption of e-coaching systems.

3 E-coaching systems and social justice

The term social justice is not easily defined [23, 60], but it is typically accepted that the concept concerns normative questions about the fair distribution of wealth, welfare, opportunities, and privileges in society. So understood, social justice is tightly connected to the negative and positive duties that governments, social institutions, and individuals have in light of established principles of human rights (cf. [59]). In addition, although actual instances of social injustice can also be evaluated in terms of violations of individuals’ rights or the illegitimacy of governance, the discourse of “social justice” is centrally concerned with what we owe each other from the perspective of being free and equal members of society.

The widespread adoption of e-coaching systems potentially affects a wide variety of social justice considerations, given that these systems require access to certain (costly) technologies and have the potential to generate a culture of competitive self-management if the coaching they provide gives users a competitive advantage over those who are not coached, or not coached to the same level of excellence. In this section, we examine three sets of social justice concerns; throughout, we tacitly limit our discussion to e-coaching systems that provide a significant benefit to their users.

3.1 Concerns about unequal access to e-coaching technologies

In the literature on e-coaching systems, advocates tend to assume that the introduction of e-coaching systems will make coaching more readily available to all (cf. [74, 79]). Indeed, cheap or even free e-coaching systems could flood the market, offering support to a substantially larger population than is currently the case with human-to-human coaching. However, it is important not to draw the further, faulty conclusion that improved access to coaching will guarantee equal access to all benefits offered by all e-coaching systems. For even if coaching in general becomes more accessible to a larger audience through e-coaching technologies, there will almost always be costs involved (e.g., for supporting hardware such as sensor systems) that those in underprivileged positions may not be able to afford. Moreover, more expensive, advanced models may be placed on the market that offer additional functionalities and associated benefits that will remain reserved for the more affluent.

The gap between entry-level products and services “for the masses” and more expensive high-end products and services—a gap that can already be observed in relation to hardware products (cf. [26, 44])—raises two distinct sorts of concern. First, the familiar risk that the affordable products and services will be inaccurate or unreliable raises special concerns in the case of e-coaching systems, given how intimately they can be connected to a person’s sense of self (cf. [38]). Second, there is a concern that the more expensive systems will also provide significant relative advantages, further exacerbating inequalities by giving the rich a way of further expanding the socioeconomic advantage that they already have. This is especially worrisome because the enhancements provided by e-coaching systems pertain to factors such as capacities for self-control and for complex decision making, which have an enormous influence not only on one’s ability to handle the challenges of life in complex societies but also on the comparative advantage one has in competitive environments. For example, if people with access to high-quality e-coaching systems come to be considered more attractive employees in light of their superior self-regulation capacities, they will be more likely to be hired into high-salary positions (cf. [67, 71]).

The inequalities related to differences in the quality of e-coaching systems are often compounded by differences in the quality of the hardware on which these systems run, insofar as they demand high-end devices with the processing power or battery technology needed to run state-of-the-art machine learning models on the local device (cf. [77]). The same point holds for the costs of auxiliary components such as “smart” lighting, (wearable) sensor systems, or “Internet of Things” (IoT) appliances that allow users to take full advantage of all the e-coaching system’s capabilities. In addition, the costs of regularly upgrading software and hardware tend to further widen the gap in the quality of devices available to the affluent and the poor.

Relatedly, people from lower socioeconomic groups or in lower-income countries of the Global South may not be able to maintain access as well as others and may experience cycles of what Gonzales has called dependable instability [25]. Broken devices, interrupted connectivity, or expired subscriptions to e-coaching content may all affect the continuity of the e-coaching process. Importantly, the costs of upkeep and the experience of access limitations may also affect people’s perceptions of the (usefulness of the) technologies themselves (cf. [13, 26]), which may again deepen existing inequalities.

Finally, access to e-coaching may also be hindered by a user’s limited digital skills. To the extent that installing, configuring, and maintaining e-coaching systems requires technological know-how, people lacking such knowledge will be disadvantaged. As research has already shown that socioeconomic status is linked to differences in digital skills in relation to internet use [15, 28, 72, 80], there is a real risk that existing inequalities with respect to digital skills will also hinder uptake and effective use of e-coaching systems.

Clearly, many open questions remain in relation to these concerns, both empirical and ethical. For example, what would be the magnitude of the competitive advantage one could gain? What would be the projected magnitude of the impact on society at large if e-coaching systems indeed had such an effect? What technological or regulatory mitigation strategies could or should be employed? Questions such as these deserve careful consideration, and in Sect. 4 of this paper we invite scholars to address them. For our purposes here, it is sufficient to have established the outline of this first category of concerns, in which we implicitly assumed that individuals would find e-coaching valuable and worthy of pursuit. But what if the use of e-coaching systems were not a choice but something imposed? It is to the potential unfairness of liberty restrictions that we turn next.

3.2 Concerns about coercion and the unequal distribution of liberty restrictions

In terms of personal experience and the phenomenology of technology, users of e-coaching systems may have concerns about the ways in which these technologies restrict their subjective sense of freedom. In terms of social justice—our focus here—there is a concern with the degree to which the adoption of e-coaching systems is free and voluntary rather than coerced or manipulated. Indeed, as automated e-coaching becomes more effective and beneficial, the pressure to adopt increases. In highlighting these concerns, we can distinguish between mandatory programs and incentive schemes. Each raises important liberty-related concerns of social justice.

In the most straightforward case, the use of e-coaching systems can be mandated, for example, as part of an employment contract or a government benefits program. Such cases wear their compulsory character on their sleeve and thus explicitly call for legitimating endorsement, within the constraints of legal rights. Social justice concerns here relate more generally to potential overreach by employers or governmental agencies. But one concern related specifically to social justice—where concerns about inequality and coercion intersect—has been insightfully analyzed by Virginia Eubanks [16]. She documents a tendency to test and develop behavioral monitoring technologies in “low-rights” environments, in which there is relatively little resistance to the imposition of risky or unethical practices. A similar point holds for low-rights environments in which mandatory e-coaching is proposed but where the appearance of consent is illusory (with the further implication that, once these mandates have been established (illegitimately) in low-rights contexts, it will become easier to push them through elsewhere).

A somewhat more indirect restriction of freedom relates to the use of incentives to motivate the adoption of e-coaching systems, and here a familiar set of concerns arises about the boundary with coercion. One context in which such incentives are regularly employed is that of disease prevention and healthy lifestyle programs. In this context, various incentive programs have already been implemented to encourage specific “desirable” behaviors by offering individuals (monetary) rewards (cf. [75]). Several insurance companies in Switzerland, for example, offer lower insurance rates to individuals who can show that they engage in healthy habits (e.g., taking yoga classes), and at least five of them offer monetary incentives to share health-related data directly with the insurer through a smartphone app [45].

Currently, these “opt-in” incentive programs are limited in their scope, in part because insurance companies at present do not have the means to accurately monitor users’ compliance. With the widespread adoption of e-coaching systems, however, and the associated improvements in measurement instruments, infrastructure, and data analysis techniques, this will likely change in the near future [65]. The technologies for extensively monitoring people’s behavior and their compliance with agreements made with insurance companies (e.g., not to smoke, or to exercise twice a week) are becoming increasingly advanced. The aforementioned insurance companies in Switzerland already let their apps connect to other health apps (e.g., Google Fit) or fitness trackers (e.g., Fitbit or Garmin smartwatches) to obtain all sorts of health-related data. This development makes it attractive for insurance companies to provide ever more fine-grained options for individuals to limit their insurance costs in exchange for information, and to nudge individuals to take up e-coaching in exchange for significant discounts on their insurance premiums.

The social justice concern here begins by pointing out that the voluntariness of participation in incentive schemes is diminished to the extent that the development and adoption of such schemes leads to situations in which (groups of) people will, in practice, be unable to opt out, even if participation is formally considered voluntary (cf. [9, 39]). But not everyone’s voluntariness will be affected to the same degree. As healthcare costs (and insurance premiums) continue to rise, less affluent individuals in countries with insurance-based healthcare systems may find themselves under increased pressure to choose one of these “restrictive-conditions” insurance policies, simply because they cannot afford to do otherwise.

Many corporate employers, especially larger ones, also have incentives to encourage healthy lifestyles among employees in order to reduce medical costs, absenteeism, and health-related productivity losses. For these purposes, many employers already offer corporate “wellness” programs [47, 52], which increasingly involve the use of wearable self-tracking technologies [12, 42, 68]. If this development continues, e-coaching systems may well become part of the “wellness” package that employees, especially those with limited alternative prospects on the job market, have no real way of avoiding. Notice that the point here is not that people cannot, strictly speaking, refuse e-coaching, but that, realistically, certain (groups of) individuals will not be able to afford to do so.

A final set of concerns about freedom relates to the capacities for surveillance built into e-coaching technologies. Regardless of the degree of coercion involved in the adoption of automated e-coaching, the fact that these technologies involve extensive monitoring introduces a distinct kind of tension with liberty. For what recent research on so-called “neo-Republican” conceptions of freedom has brought out is that there is a sense in which individuals are less free when they are at the mercy of others, even if those others choose not to exercise that power [22, 57]. Since those with detailed knowledge of a person’s choices and behavior gain power and opportunities to influence and intervene that they would not otherwise have (cf. [37]), it may be argued that individuals who have no real option but to employ e-coaching technologies are at the mercy of the e-coaching providers and therefore less free. And this may be especially problematic for those groups of people who can only afford cheap or free e-coaching products, where the collection of data for the purpose of resale is the business model financing the products.

As mentioned, our aim has not been to fully analyze or address these various concerns about liberty here, but rather to call attention to them and place them on the agenda for future discussions, together with a third set of concerns to which we now turn.

3.3 Concerns about stigmatization and its disparate impact on perceptions of competence

The third and final set of concerns pertains to associations between reliance on assistive e-coaching technologies and perceptions of competence. The central thought here is that, depending on individual differences, social norms, and environmental factors, the fact that someone uses these technologies may be construed differently, both by the users themselves and by others. In certain circumstances, reliance on an e-coaching system may be viewed in a positive light, as part of being an empowered individual who cleverly enhances his or her abilities. In other circumstances, the same degree of reliance may be viewed negatively, as indicating defects or an inability to perform adequately without support. And whereas the “power-tool-for-empowerment” construal is likely to lead to the attribution of competence to those employing e-coaching technologies, the “crutch-for-coping-with-deficiency” construal could be a source of stigma (e.g., being ridiculed, looked down upon, or discriminated against for relying on technology for successful self-regulation) or self-stigma (internalized feelings of embarrassment and shame; cf. [14]).

To an extent, how the use of e-coaching is construed in a given situation may depend on individual differences between users. For example, one plausible implication of research on “independence centrality” [46, 49] is that, with regard to the self-attribution of competence, individuals who more highly value being functionally independent will be biased towards feeling a diminished sense of accomplishment for e-coach-supported self-regulation. Likewise, it could well be that people who are low in self-esteem are more likely than others to consider their reliance on e-coaching as confirming evidence of self-perceived deficiencies, especially when others are seen as not needing support.

Insofar as e-coaching technologies evoke such experiences of diminished competence, widespread deployment of these technologies raises concerns about direct setbacks to these people’s well-being, as well as about long-term harm to their agency, as self-efficacy—one’s belief in one’s ability to succeed [7]—has been shown to be pivotal for the initiative and persistence that significantly determine a person’s life-chances (see also [3]). But while these prospects would already offer grounds for caution about the extensive reliance on e-coaching technologies, we want to foreground another distinctive and neglected dimension of social injustice in this context.

The key concern about social justice that we would like to highlight in this connection stems from the possibility that stigmatizing construals of the use of e-coaching systems are co-determined by entrenched stereotypes and patterns of prejudice. It is known that structural inequalities and deeply ingrained societal biases often affect how members of marginalized groups are perceived; specifically, members of high-status groups “tend to be stereotyped as competent, while low-status groups tend to be stereotyped as incompetent” [54, p. 1135] (see also [53]). This “status = competence stereotype,” we contend, plausibly operates as a lens that shapes how the use of e-coaching systems is perceived: the use of e-coaching systems by members of high-status groups will tend to be viewed as enhancing or improving oneself, whereas the use of e-coaching systems by people in low-status groups will be viewed as evidence of needing aid in overcoming structural cognitive, affective, or motivational deficiencies. To the extent that this hypothesis is confirmed, individuals from low-status groups could turn out to be systematically more vulnerable to stigmatizing construals of their use of e-coaching systems.

Importantly, stereotypes do not only affect how people are perceived by others, but also how people perceive themselves. The phenomenon of “stereotype threat” [50, p. 368, 66] suggests that anxiety about confirming negative stereotypes about one’s social group can hamper performance and create self-fulfilling prophecies of failure. Given the stereotypes that associate low competence with membership in low-status groups, members of such groups are at a heightened risk of reduced self-efficacy and performance, stemming from the tendency to perceive their own use of e-coaching as a stigmatizing confirmation of these negative stereotypes about their group’s competence. Moreover, beyond undermining self-efficacy, such construals compound existing inequalities to the extent that people in these positions subsequently do not reap the same benefits from e-coaching systems as other, more affluent individuals might.

In short, the worry is that people will interpret inconclusive evidence regarding abilities and accomplishments through the lens of existing societal prejudices, regularly resulting in biased, stigmatizing interpretations that disproportionately disempower members of lower-status groups. The full force of the potential for social injustice here can be seen in conjunction with the concerns raised in Sects. 3.1 and 3.2. Given that privileged individuals will have better access to and more opportunity for the seamless and fluid integration of high-quality e-coaching technologies into their activities, the use of these technologies by social elites is likely to appear more natural or optimizing and hence less like a prominent and stigmatizing “crutch.” Moreover, given the difficulties of opting out of e-coaching programs that are mandated by employers or strongly incentivized by insurance companies (see Sect. 3.2), to the extent that individuals from lower socioeconomic groups are practically unable to avoid the use of e-coaching systems, the concern is that those who need and would benefit the most from e-coaching might get a poor reputation for using such systems. Given that people from low-status groups are already vulnerable to (health-related) stigmatization [29], this additional source of stigma would stand to worsen their position in society even further.

We readily acknowledge that these concerns are based on assumptions that need to be confirmed by empirical research. Our point here is to highlight the need for research in this domain and to sound a note of caution until these potential difficulties can be ruled out. For this reason, it is important to ask hard questions, both at the level of design and at the level of policies regarding widespread adoption, about the unintended side effects that large-scale deployment of e-coaching technologies may have on individual well-being.

4 An agenda for future research

In the preceding section, we identified concerns regarding three aspects of social justice—concerns that arise with the widespread adoption of e-coaching systems. The issues we discussed within each category show that these technologies risk exacerbating existing inequalities or creating new instances of unfairness, as a consequence of what they cost, how they are funded and marketed, and what kinds of competitive advantages or social disadvantages they introduce. If these concerns about social justice are to be adequately addressed, important conceptual, empirical, and regulatory work remains to be done to ensure that the introduction of e-coaching systems into society happens in a responsible and equitable way. In this final section, we identify four areas where efforts are needed to meet the demands of a commitment to social justice: (1) further clarification of the distinguishing characteristics of e-coaching systems, (2) elucidation of their disruptive scope, (3) integration of justice-sensitive principles into the design and implementation of the relevant technologies, and (4) development of approaches to the regulatory and governance challenges arising from the widespread adoption of e-coaching systems in society.

First, there is a need for a better understanding of the distinctive features of e-coaching systems and of related core concepts (cf. [36]). Currently, there are such widely varying (and often imprecise) understandings of the capabilities of e-coaching systems that it is difficult to accurately characterize the ways in which e-coaching systems constitute a “social disruption” [32, 63]. For example, to fully appreciate how deeply automated e-coaching may transform our understanding of accomplishment of action, it is essential to recognize how e-coaching systems—unlike certain other self-regulation support systems and (hyper)nudging technologies—support users in their practical reasoning about what goals to set and how to realize them (see again Sect. 3.3). Likewise, to accurately assess the risk that existing digital divides will exacerbate the (un)equal opportunities for benefiting from automated e-coaching, a realistic grasp is needed of the level of technological skill required to effectively interact with specific e-coaching systems (see Sect. 3.1). Here, we thus see a role for theorists and engineers to work together towards specifying a shared conceptual apparatus and corresponding vocabulary.

Second, more work is needed to detail the projected impact of e-coaching systems on individuals and their cultural, material, and social surroundings [32]. This requires sustained reflection on the extent to which the use of e-coaching technologies challenges “deeply held beliefs, values, social norms, and basic human capacities” [32, p. 6]. Paramount in this regard will be a comprehensive analysis of the interplay between e-coaching systems and human agency. In particular, research is needed into how sustained, 24/7 reliance on e-coaching systems may (i) change how we think about distributed willpower and environment-scaffolded self-regulation efforts (e.g., see [30]), (ii) erode certain self-regulation skills and promote others (cf. [4]), (iii) affect self-understanding and identity (cf. [40]), (iv) undermine or strengthen personal autonomy (cf. [2, 27, 38]), and (v) alter the social norms regarding mutual expectations of self-regulation success ([1], see also again Sect. 3.3). Insight into these areas should help in anticipating more accurately the magnitude, range, and pace of the societal impact of e-coaching systems, particularly regarding social justice. Here, we see a role for philosophers, as well as for economists and sociologists, in carefully mapping which parties and processes may be affected and to which degree, studying the relevant market dynamics that will influence the (un)equal uptake of e-coaching technologies, surveying the various domains that may be disrupted, and establishing an inventory of concepts (such as enhancement) and values (such as liberty) that may be challenged by the widespread adoption of e-coaching systems in society. Mapping, forecasting, and analyzing e-coaching systems’ impact will be critical for making realistic and timely assessments of the risks to social justice and for developing appropriate mitigation strategies (for several promising recommendations in this area, see [61]).

Third, against this background of an improved understanding of the nature and impact of e-coaching systems, practical steps will need to be taken for responsibly developing and implementing e-coaching systems. Alongside guidelines for public policy and social ethics, educational and design strategies must be explored to help mitigate the risks. For example, ethicists and social scientists could work with those involved in marketing these systems to develop revenue models or product placements that avoid compounding the disadvantage and exclusion of vulnerable populations. Likewise, with an eye to increasing inclusivity, it will be important to increase awareness among designers and engineers of the (potential) interplay between their design and implementation choices and the (dis)advantaging effects on society—for example with respect to hardware requirements or the presupposed level of digital skills. In addition, there may be ways of having the e-coaching systems themselves positively contribute to the way in which people experience their technology-supported self-regulation efforts. Recall that e-coaching, properly considered, involves establishing an ongoing, collaborative conversation between coach and coachee. Within this conversation, there are opportunities for tailoring the tone and content of the communication to better relate the coaching to an individual’s intrinsic motivation and identity (what is theorized as “self-concordance,” see [6, 62]). Here, it will be instructive to review and build on the literature on the value sensitive design (VSD) framework, which aims to facilitate the integration of ethical values into the design of new technologies—including those pertaining to artificial intelligence [19, 21, 69, 70].

Fourth, and finally, as the concerns with social justice come more clearly into view, policies and regulations will need to be developed that can guide the responsible introduction of e-coaching technologies into society (cf. [58]). In relation to the broader notion of Artificial Intelligence, several initiatives for regulation have already been put forward, the latest of which is the European Commission’s Artificial Intelligence Act. This legislation, which aims to present a “balanced and proportionate horizontal regulatory approach to AI [in the EU]” [17, p. 3], sets out a number of key regulatory regimes that will be pertinent to automated e-coaching, including prohibitions on manipulative systems (Title II, art. 5(1)) and essential requirements and obligations for providers of “high-risk AI systems” (Title III, art. 9–23), such as the mandated implementation and maintenance of a quality management system, a risk management system, and technical documentation. Whether all e-coaching systems will be regarded as “high risk” remains to be seen, but the provisions in art. 7(1) would suggest at least that e-coaching systems operating in the spaces of health and employment, where they pose a risk “of harm to health and safety, or an adverse impact on fundamental rights,” would be categorized in this way. Part of our aim in the present article is to encourage an understanding of “high risk” that is sufficiently sensitive to social justice concerns. Regulatory and legal efforts should also not be blind to the quasi-coercive character of employers’ incentivization of e-coaching systems for the health promotion of their workforce. Finally, insofar as e-coaching systems have disparate stigmatizing effects—be it for not having access to quality e-coaching systems (Sect. 3.1) or for having one’s self-regulation supported by e-coaching technologies in the first place (Sect. 3.3)—improvements may be needed in the domain of anti-discrimination law to address these social injustices appropriately and effectively.

5 Conclusion

In this article, we have highlighted distinct social justice concerns that can arise with the widespread adoption of personalized, AI-driven support systems that can give users a competitive advantage in a wide variety of domains by aiding planning and promoting effective goal striving through the use of persuasive techniques. The concerns we identified with these e-coaching systems relate to unequal access to the technologies, the potential for unequally distributed liberty restrictions, and the potentially disparate impact of the use of e-coaching technologies on (self-)stigmatizing perceptions of competence. Each of these concerns, we have argued, can create or exacerbate societal inequalities.

As we have acknowledged throughout, our concerns are based, in part, on assumptions that need to be confirmed by empirical research. Beyond the empirical questions we have identified, we have also outlined four additional areas of research that we believe need to be prioritized in order to mitigate the identified social justice concerns. As will be evident from our discussion in the preceding section, much work remains to be done in each of these four areas. As e-coaching systems begin to gain a foothold in society and their technological development accelerates (including recent advances in the area of natural language processing), our central objective here has been to highlight the importance of addressing these wider social justice concerns about inequality, coercion, and stigmatization.