Abstract
AI ethics guidelines have proposed solidarity as an important principle for ethics in Artificial Intelligence (AI). However, they often leave out explanations of how solidarity ought to be understood and put into practice in the context of ethical AI. This paper explores the principle of solidarity in the context of AI. It examines solidarity from an Ethics in Design perspective, that is, how solidarity could be accounted for in the processes of technological design. Since conceptualisations of solidarity differ depending on the discipline in which they are applied, this paper first attempts to disentangle the many conceptual understandings and proposes a more discipline-neutral account that describes solidarity’s core on the basis of five elements: (1) an element of relationality based on (2) a connecting element that builds the grounds for the relationship, (3) a cognitive element of awareness and recognition, (4) a motivational source, and (5) an element of duty. Using this account to explore solidarity in an AI context, it will be shown that approaching solidarity from an Ethics in Design perspective has its challenges. Therefore, it is proposed that solidarity should be conceived of not as an ideal end state but as a perspective or lens that can guide design choices. Ethics in Design methods such as user-centric or participatory design are discussed as potential enablers for adopting such a solidarity perspective. Lastly, the paper addresses some challenges and limitations and argues that an approach to solidarity in Ethics in Design needs to be complemented with collective decision-making at the societal level, which is a political task.
1 Introduction
The societal implications of technologies belonging to the broad field of Artificial Intelligence have received much scholarly attention. AI systems have been found to, inter alia, infringe individuals’ right to privacy, manipulate people’s consumption behaviour and political opinions, or discriminate and thereby deepen pre-existing inequalities. Given these (and other) problems and risks, researchers, policy papers, and ethics guidelines have suggested a range of ethical principles for AI, some of which include the principle of solidarity. However, while solidarity is often part of public narratives (such as during the Corona pandemic or in the discussion surrounding refugees), it is not a well-defined concept. AI ethics guidelines proposing solidarity often leave out how solidarity ought to be understood and put into practice in the context of ethical AI. The explanations found there often remain rather short and vague and make reference to other ethical values such as equality, fairness, justice, or human dignity (EGE, 2018; DEK, 2019; AI-HLEG, 2019). Even though the aim of AI ethics guidelines is to provide guidance on how to design ethically aligned AI systems, the meaning of solidarity in AI is anything but clear.
This paper is part of a broader research endeavour to explore the principle of solidarity in the context of AI. While solidarity may be found in a variety of situations relating to AI, such as people pooling their resources and contributing to an AI technology for a collective goal, this paper examines solidarity from an Ethics in Design perspective, that is, how solidarity as an ethical principle could be accounted for in the processes of technological design. Taking such a perspective is especially important, since AI ethics guidelines ought to guide technologists in designing and deploying AI technologies in an ethically acceptable way. Moreover, the importance of ethical and social considerations in technology design has gained considerable attention in research and practice in recent years (Dignum, 2018, p. 1). Consequently, exploring solidarity from an Ethics in Design perspective contributes to a growing research field.
To demarcate the meaning of solidarity as an ethical principle for technology design, it is necessary to understand solidarity as a concept. The paper, hence, first provides an overview of the conceptual basis of solidarity by reviewing literature from a range of disciplines and fields — inter alia sociology, political philosophy, psychology, bioethics, and economics. Solidarity, it turns out, is a multifaceted concept that can come in multiple forms and operate at different analytical levels — the interpersonal (micro), group (meso), or societal (macro) level (Laermans, 2020). Since conceptualisations of solidarity differ depending on the discipline in which they are applied, this paper first attempts to disentangle the many conceptual understandings and proposes a more discipline-neutral solidarity account describing solidarity’s core on the basis of five elements: (1) an element of relationality based on (2) some connecting element that builds the grounds for the relationship, (3) a cognitive element of awareness and recognition, (4) a motivational source for solidarity, and (5) an element of duty. This account captures, on the one hand, the more sociological dimension of solidarity as a social practice (elements (1) to (3)) and, on the other hand, its normative dimension (elements (4) and (5)). In that sense it is broad enough for interdisciplinary research without losing its concise conceptual core. This account will then be used to explore whether and how the principle of solidarity can be applied to the context of designing ethical AI. As such, the paper intends to explore solidarity in theoretical as well as practical terms.
Ultimately, it will be shown that approaching solidarity with an Ethics in Design perspective has its challenges, especially when understanding it as a social practice in technology design. Consequently, it is proposed that solidarity should be conceived of as a perspective or lens that may shift attention towards solidarity’s normative aspirations and thereby guide decision-making towards more ethical design. Ethics in Design methods such as user-centric or participatory design are discussed as potential enablers for such a solidarity perspective. Lastly, the paper addresses some challenges and limitations and argues that an approach to solidarity in Ethics in Design needs to be complemented with collective decision-making at the societal level, which is a political task.
2 Solidarity in its Various Forms
Solidarity is a contested and multifaceted concept. It has been defined as collective action (Sangiovanni, 2015), mutual recognition (Honneth, 1995), or mutual respect (Laitinen, 2014). Others conceive of it as a contingent and conditional social disposition (Lynch & Kalaitzake, 2020) or an affective or generous disposition motivated by the desire to improve the well-being of others at some cost to one’s own (Arnsperger & Varoufakis, 2003). Barbara Prainsack and Alena Buyx describe it as a fellowship or unity based on bonds of mutual assistance and help, common objectives, or other aspects people share (Prainsack & Buyx, 2012; 2016). Further, solidarity has been found to be linked with cooperation (Kritikos et al., 2007), interdependence (Fredericks, 2007), pro-social behaviour (Lindenberg, 2006), inclusion (and at the same time exclusion) (Burelli, 2016; Laitinen, 2014), and collective well-being and the public or common good (Lev, 2011) — and this list is far from exhaustive. Why are there so many conceptualisations of solidarity?
For one, how solidarity is conceived differs between disciplines and fields. Solidarity can be approached descriptively, referring to actual attitudes or practices that collectives or individuals show towards or among peers, which is most often the subject of inquiry in the social sciences. By contrast, in philosophy and political theory solidarity is often used in a normative sense, where it connotes a value or moral ideal often connected to questions of the good life or justice. As Violeta Moreno-Lax formulates it in relation to Brunckhorst’s notion of a global community: “The focus of analysis is not the empirical ‘origin’ in any particular social reality, but the normative ‘result’ and aspiration of solidarity (…)” (2017, p. 745). These different methodological and disciplinary approaches mean that solidarity is variously presented as a shared practice, a form of collective action, or a moral, social, or political ideal.Footnote 1
Additionally, the solidarity literature includes conceptualisations of solidarity that focus on different levels of analysis (Laermans, 2020; Prainsack & Buyx, 2012). At the smallest level of analysis (micro level), solidarity occurs in groups of limited size with strong interpersonal relationships. In this case, solidarity is often based on a shared identity and feelings of closeness. The meso-level perspective focusses on the broader group level where people do not necessarily share identities or interpersonal relationships, but share some interest, values, or a collective goal. Often, individuals join together and enter into collective action against a common adversary and against some form of injustice — such as in worker union strikes. Many accounts of political solidarity focus on the meso-level (see, e.g. Scholz, 2008). Lastly, macro-level solidarity is concerned with solidarity at the broader societal level. It occurs in larger, pluralist societies whose members are in some way interdependent, but do not necessarily have shared identities or similarities. Since social bonds may be rather weak, solidarity at the macro level is often institutionalised via legal or contractual means (Laitinen, 2014; Prainsack & Buyx, 2012). The interdependency between its members leads societies to build institutions that aim at maximising collective benefits, but at the same time pool risks and share responsibilities such as in European welfare systems.
However, while it is important to define concepts according to the respective field of application, conceptualisations specified for one context may not capture aspects relevant for another. As Mariam Thalos writes: “It is, therefore, worthwhile looking for a conceptual core to all the forms of solidarity that have been identified—or, at any rate, a discipline-neutral formulation of it that seeks to explain the manifold uses and contexts in which it has seemed at home” (2012, p. 60). In order to provide some clarity on what solidarity could mean in a more discipline-neutral way, I propose five elements, drawn from the varied and diverse literature across these fields, that constitute, I argue, solidarity’s core.
3 Solidarity’s Core
1. The Element of Relationality
Solidarity has a strong communal element. It cannot exist within an individual alone but occurs only in relations with others. For solidarity to arise, multiple persons — together in a group, a family, a clan, a society or another form of collective — need to have some sort of relationship. Relationships may be horizontal, meaning those among individuals, but may also be vertical, meaning that an individual has a relation with a community or collective.Footnote 2 The element of relationality refers to the question “Who is solidary with whom?”.
2. Connecting Element or Grounds for Relationship
The connecting element that builds the basis for the relationship can take multiple forms — inter alia a shared identity, history, or experience, common values and norms, shared interests, goals, or needs. Some scholars argue that some form of identification with the cultural or political community or with a shared concern for social issues is paramount for solidarity (e.g. Cornwall, 2007; Taylor, as cited in Smith & Laitinen, 2009; Straehle, 2010). Others like Hegel, Marx, Durkheim, and Honneth contend that the “experience of inter-dependency and cooperation that arises in socio-economic contexts of action is at least as important a source of solidarity as shared identification with a political, cultural or national community” (Smith & Laitinen, 2009, p. 62). Especially in modern, pluralist societies, solidarity is more likely to be based on interdependence and cooperation, since this basis puts less emphasis on a shared culture, history, or other identificatory aspects, which are often less prevalent in multicultural and highly individualised societies. Depending on the context of solidarity, any of these potential grounds for a relationship may build the basis for solidary behaviour.Footnote 3
3. The Element of Awareness and Recognition (Cognitive Dimension)
While there is wide consensus in the literature that individuals require some sort of relation in order to act in solidarity, there is less attention on what I call here the element of awareness and recognition. The element describes a necessary cognitive dimension of solidarity, namely that people are aware of some kind of social bond and then recognise that they share it with others. The mere existence of a connecting element is not necessarily sufficient for solidarity to arise.
The cognitive dimension of awareness describes the necessary condition that people know about the existence of their relationship with others, which they could potentially act upon (for instance, that many people share the common goal of fighting climate change). Without such awareness solidarity could not arise, because those so connected would be ignorant of their relationship and hence would most likely not consider acting on the basis of it. Awareness of the relationship is necessary to be able to recognise the social bond as significant for oneself and hence to adopt a perspective of the collective that factors into one’s actions. Psychological research has found that cognitive processes can influence what information and behavioural options a person thinks of more readily, because they activate different patterns of attitudes, expectations, and behaviour (Lindenberg, 2006, p. 30). The cognitive processes described could, hence, lead to a distinct point of view from which actions that are in the interest of the collective are more likely to be taken.
Miller and Tuomela have explained this with the example of a group that shares a collective goal. To define a collective goal as the common interest of the group, people abstract from their individual interests and formulate a goal from a “we-mode” (Miller & Tuomela, 2001). The intention is “to satisfy the goal as a group member and for the group” (ibid, p. 2). The cognitive dimension of solidarity thus entails that people are not only aware of the existence of the relationship with others but also recognise the significance the group bears in relation to themselves.Footnote 4 When perceiving and recognising these social bonds, the perspective of an individual extends beyond him- or herself and develops into a concern for those he or she shares a relationship with and for the community overall. It is only then that people conceive of the respective community as a “we” and that solidarity may arise.Footnote 5
4. The Motivational Source
Some scholars, like Mariam Thalos for instance, hold that the connecting element, the commonality or other social bond that people share, provides a sufficient motivational force to favour a collective good (Thalos, 2012, p. 63). However, others contend that there are additional motivational sources that form the basis for solidarity.Footnote 6
One of them is the social philosopher Axel Honneth. He argues in The Struggle for Recognition that a state of solidarity can only exist in communities that are based on relations of mutual recognition.Footnote 7 Mutual recognition means that members of a community recognise each other’s rights and grant each other social esteem, thereby recognising each other’s value in their contribution to the collective or the shared project. For Honneth, mutual recognition is the precondition for an individual’s self-realisation. He holds that mutual recognition ensures that each individual can enjoy their personal freedom and autonomy, because others refrain from disrespecting it. Mutuality means that this relationship must go in both directions. It is only when the expectation of recognition is fulfilled that people feel motivated to act in solidarity with others within the collective. Conversely, where mutual recognition is not granted, a state of solidarity cannot exist (Honneth, 1995).
Whereas Honneth’s conception of solidarity has a normative character, more practice-oriented approaches nevertheless support the argument that mutuality and reciprocity are an integral part of solidarity (Smith & Laitinen, 2009; Burelli, 2016; Prainsack & Buyx, 2012). The existence of mutuality or reciprocity encourages members of the community to “stand together” and contribute to the collective endeavour (Bergmark, 2000; Laitinen, 2014). Furthermore, Sophia Dafinger argues that reciprocity puts actors of solidarity on equal footing (2020). With mutuality and reciprocity, power relations within the community are balanced or — where power asymmetries exist — are not abused for exploitation, denigration, or other harm. As James Fredericks notes: “[…] interdependence is the condition within which the dignity of the human person is either honored or abused” (2007, pp. 61–62). Where the interdependency is abused and causes harm, solidary relationships must be absent, because such relationships are characterised by pro-social behaviour. Consequently, it can be argued that some form of mutual recognition does not only provide a motivational source but must be a prerequisite for solidarity.
5. The Element of Duty
In the solidarity literature, it is widely accepted that solidarity puts obligations on the members of a community. For instance, when solidarity is conceived as being based on relations of mutual recognition, as described above, people commit to recognise each other’s rights and to value their contribution to the collective good. Such commitment, Laitinen argues, puts a duty on people not to do harm to the other members of the community and to treat them with respect for their freedom and autonomy (2014, p. 136).
Furthermore, solidary relationships generate a readiness to contribute to a shared goal or common purpose. This readiness puts another duty on the members of the community, namely, to incur some cost or burden (if necessary). As Tava puts it: “[…] sharing goals or ideals incites willingness to also share costs and risks with an eye towards longer-term gain and an increased probability of achieving the shared goals” (2021, p. 122). To put it in other words, solidarity may promote the expectation of a collective benefit, but it requires that risks and burdens are also collectively shouldered. Responsibility for both — the benefits as well as the risks — is hence shifted from the individual towards the collective, whereby each member is responsible for their fair contribution to the common aim as well as their fair share of the risks or costs.Footnote 8 Those in disadvantaged positions may hence shoulder a smaller share of the costs or receive some benefit, while the more advantaged members of the group carry higher costs. Reichlin argues that such shared responsibility includes that people “accept the obligation to put up with the difficulties of other members” and, therefore, defend the conditions of a dignified life for all (2011, p. 366).Footnote 9 At the societal level, Laitinen proposes, solidarity puts an obligation on the members of a society to create social practices and institutional structures that ensure the just distribution of benefits and burdens. Notably, the obligations that arise from solidarity are moral duties that may be valid for individuals independently of solidarity. However, as Kolers argues, “[t]he collective character of solidarity […] may improve our chances of acting on such duties as we do accept” (2012, p. 369). Solidarity can, hence, strengthen the demands of moral duties or extend them to new duties formerly not accepted (ibid).
The five elements described above capture more descriptive as well as normative aspects of solidarity. In that sense, the account provides a conceptualisation that is able to capture different use contexts without losing a concise conceptual core. With this discipline-neutral account of solidarity in hand, the following part of the paper will explore whether and how solidarity can be practically applied in an Ethics in Design approach.
4 Approaching Solidarity and AI from an Ethics in Design Perspective
The ethical implications of the use of AI systems have sparked a debate on how to counter risks for individuals and society. One of the approaches taken to ensure ethical compliance is ‘Ethics in Design’, defined by Virginia Dignum as “the regulatory and engineering methods that support the analysis and evaluation of the ethical implications of AI systems as these integrate or replace traditional social structures” (Dignum, 2018, p. 2). Accordingly, it differs from Ethics by Design, which describes the integration of ethical reasoning capabilities into artificial autonomous systems (ibid). Ethics in Design concentrates on the design process in which computer engineers and other professionals involved in the development and deployment of AI can account for potential ethical problems and values. Could solidarity be applied from such a perspective and if so, how?
The theoretical account proposed in the previous section has shown that solidarity requires an element of relationality based on a connecting element as grounds for the relations, as well as a cognitive dimension of awareness and recognition. Let’s consider these first three elements in turn. In the context of Artificial Intelligence, how might one answer the question of who ought to be solidary with whom and on what basis? And are these individuals aware of their relation? In an Ethics in Design approach, AI systems themselves do not hold any agency; rather, it is assumed that AI technologies are objects or tools that are designed and used by human agents and may embody values that humans have embedded in them.Footnote 10 Exploring the concept of trustworthy AI, Rieder et al. argue that trust in technology can only plausibly exist in a derived sense, meaning that people do not trust the technology itself but trust the “human agents who design, manufacture, manage or operate them and who are capable of accounting for the trustors' values and interests” (Rieder et al., 2020, p. 6). Trust and solidarity are similar concepts in at least one respect: they both describe a state of social relations. Neither trust nor solidarity can exist within an individual alone; both are contingent on the relationship between at least two people who are required to act in a certain way. Therefore, the realisation of solidarity in AI technologies is — similarly to trust — only possible in a derived sense, meaning that it must be the human agents involved in the design, deployment, and usage of AI systems who act in accordance with solidarity, and they can do so only towards other human agents. AI technologies within the sociotechnical system may then serve to facilitate or shape social interactions and can therefore undermine or promote solidarity.
How the human agents involved in the design, deployment, and usage of AI systems may stand in relation with others is a complex issue. Computer engineers, for instance, have a relation with their fellow professionals, who all bear responsibility for the ethical implications of the technology they create. Abbas et al. have proposed a Hippocratic oath for technologists with the aim “to provide humility and a sense of community among experts and technologists who explicitly want to promote technology for human progress” (2019, p. 72). An Ethics in Design approach, however, also incorporates the responsibility professionals have for the impact technological systems may have on society and its members. Therefore, one could argue that those involved in the design of AI need to be solidary with society overall. Referring to society overall has the advantage that it widens the perspective from technologists and the direct users of technologies towards indirect stakeholders — such as those who may only be affected by technology — as well as society’s interests. However, such a broad understanding is at the same time problematic, because “society overall” can refer to many different collectives — a local, national, regional, or global society, for instance. Moreover, AI systems may only be deployed in certain contexts and hence by distinct collectives, such as people within a certain geographical area or those belonging to a certain profession. Hence, while some relations (such as national societies) pre-exist the technologies deployed, some only come into existence with a technology’s introduction into human life. The application context and the human agents surrounding a certain technology solution (who can hence constitute a collective) are, however, very important from an Ethics in Design perspective, as it is the implications of a specific AI system on human agents that need to be accounted for in the design process.
The community in solidarity must, therefore, be defined more distinctly by clearly delineating who shares what kind of relations with whom and on what grounds these rest.
Potentially, Ethics in Design frameworks may help to do so. They often include a process of stakeholder identification in order to determine who will be affected by the technology and how. For instance, Artificial Intelligence used in the algorithmic systems of social media platforms may have an impact on people all over the world, because these algorithmic systems have global reach and have been shown to increase political polarisation (Levy & Razin, 2020) and therefore change political conditions within and across nations. AI systems that are used in more delineated spaces, such as for medical diagnosis in a specific hospital, will have an impact on a much more narrowly defined stakeholder group. Thus, in the case-specific application of Ethics in Design frameworks, the collective may be determined more concretely via stakeholder identification.
One essential problem with this process of stakeholder identification is, however, that stakeholders can only be determined for the intended purpose of the AI system to be designed. Scholars like Anders Albrechtslund (2007) and Shilton (2012) have raised this issue, drawing attention to the shortcomings of Ethics in Design methods. Albrechtslund, for instance, argues that the contexts of use of a technological system may change in the future and generate different ethical problems that cannot be addressed in the design at a later stage (2007, p. 71). If technologists, hence, determine the stakeholders affected by the technology for only one specific application context and only at a certain point in time in the design process, stakeholder groups (and therefore the community in solidarity) may change in future use contexts. From the perspective of the technologist, there is hence a temporal dimension that makes the identification of the community at least uncertain. Those indirectly affected, and hence the collective, can therefore never be determined definitively.
Moreover, putting the responsibility to act in solidarity solely on technologists would assume a one-sided and hence asymmetrical relationship. However, as noted earlier, Axel Honneth and others have shown that solidarity can only take hold when based on mutuality. This raises the question as to what extent users and other affected stakeholders would act in solidarity with technologists, and to what extent they would even conceive of themselves as part of the relevant collective. This brings us to the next issue. Solidarity requires a connecting element that builds the basis for the relationship. From a broader societal but also a more technology-specific perspective, the connecting element could be located in the collective goal to benefit from AI. Ethics in Design approaches aim to construe technologies “as a formidable force which can be used to make the world a better place, especially when we take the trouble of reflecting on its ethical aspects in advance” (van den Hoven, 2007, p. 67). Ideally, the collective goal to benefit from AI may thus bind people — be they direct or indirect stakeholders — and thereby produce a collective will to design and deploy systems in a way that they will be beneficial rather than harmful for society.
While it can be assumed that societies generally aim to develop and use technologies in a beneficial way, and that this goal pre-exists the introduction of any particular technology, it may be debatable in practice what “beneficial” exactly means. What constitutes a collective benefit may be a question that is answered very differently depending on the stakeholder and the specific application context of the AI system at hand. Furthermore, solidarity can only arise where people are aware of and recognise their relationship with others. From the perspective of computer engineers this would entail that they not only consider themselves active members of society (and the case-specific communities) but also acknowledge a responsibility for the impacts that AI systems may have on society and accept their task to design AI to be beneficial for all.
Here one encounters the third element of the proposed solidarity account. Ethics in Design approaches like participatory design may raise awareness of the connection one shares with others in respect to AI technologies. Participatory design entails inviting a variety of stakeholders and involving them in the decision-making process for technology design. Collectively deciding what issues are at stake and how to account for them in the design process may enable an identification with the stakeholder community and its goal to implement ethical design. Shilton (2013) has shown in an ethnographic study of computer engineers that a user’s privacy had not been pertinent in the engineers’ considerations when designing a system until the value of privacy was experienced personally. Shilton describes how computer engineers were assigned to test their products with their own data in order to understand that the results could generate insights into sensitive areas of their lives. After experiencing the implications themselves, the computer engineers understood how sensitive the data they had planned to use was and decided that privacy needed to be included in the value set to be considered in the following design process. The case shows that awareness and recognition are highly connected with personal experience. Being aware of and recognising one’s connection to a community may require a similar personal encounter, which participatory design may provide.
It should be noted, nonetheless, that computer engineers are often subject to economic constraints while designing AI systems. For instance, Timko et al. have shown that the influence of the purchaser of technological systems and constraints on budget constitute a hindrance to the implementation of ethical values in the design process (2022). Ethics in Design methods by which such awareness could be raised are often time-consuming and may, hence, run counter to the economic constraints of technological design. Furthermore, in a market-based society economic processes follow a capitalist logic that focuses on competition rather than unity. Such an environment may hinder technologists from being aware of and recognising their relation with other stakeholders and may hence stand in contrast to the sort of civic-mindedness that would generate the required concern for society and the collective good.
The considerations made so far demonstrate that the first three elements of the solidarity account described in the first part of the paper are rather difficult to apply from an Ethics in Design perspective. One of the reasons is that solidarity is a concept describing a state of social relations that entails attitudes and behaviour a technology cannot simulate, because it lacks the cognitive and social capacities of a human being. Defining a community of solidarity also poses challenges when conceiving of the ‘community’ as a collective of human agents as part of the broader sociotechnical system, because it is difficult to clearly delineate the collective in solidarity, and it is questionable whether the cognitive dimension of solidarity will, or could, be satisfied. Further, while some inroads were considered, it was nevertheless also shown that Ethics in Design methods may be rather limited in their ability to help satisfy these conditions.
The limitations on the ability to clearly define a community suggest that a conception of solidarity and its realisation as a social practice in this setting may be erroneous. However, scholars have argued that solidarity can also be analysed from a moral perspective, focusing on the normative result and aspiration of solidarity (Moreno-Lax, 2017, p. 745). Adopting such a normative approach turns solidarity into more of a perspective or lens that can guide actions in an ethical design process. The last two elements of the solidarity account presented in the previous section refer to this normative dimension, more precisely, to what solidarity requires individuals to do. The next section will consider these last two elements from an Ethics in Design perspective.
5 The Normative Aspirations of Solidarity
According to Axel Honneth and the fourth element of the solidarity account provided above, the prerequisite for solidarity is relations of mutual recognition. Members of a solidary community need to recognise each other by respecting each other’s rights and granting each other social esteem, which ensures individuals’ opportunities for self-realisation. While this is on the one hand a precondition for people to actually feel motivated to act in solidarity, it also generates a duty to constantly uphold relations of mutual respect. Translating Honneth’s theoretical ideas into the practice of design processes, AI systems can only be in line with a solidarity principle if they comply with two conditions: first, AI systems ought not to violate the fundamental and human rights of those affected by the technologies; and second, they need to ensure opportunities for self-realisation. Let us consider each of these in turn.
Research has shown that AI systems pose challenges to human and fundamental rights (FRA, 2020; Yeung, 2018). For instance, AI can be used in facial recognition systems that enable mass surveillance and disproportionately infringe on the right to privacy. The usage of AI systems can also lead to a violation of the right to equality when systems used in hiring procedures preclude women from being offered high-paying jobs and thereby illegitimately discriminate against them. Violations of human and fundamental rights are incompatible with solidarity, for solidarity requires members of the community to respect each other’s rights, as Honneth suggests. By building and deploying systems that violate human and fundamental rights, technologies have the potential to undermine solidarity among members of the community. Conversely, AI systems designed to be aligned with solidarity are required to comply with human and fundamental rights.
The second form of recognition that Axel Honneth describes is social esteem. In his view, social esteem is the recognition of each other’s value in contributing to the common project, which is necessary for people to be able to self-realise. Honneth describes self-realisation as a process of articulating and realising individual life-goals without coercion (Honneth, 1995, p. 174). The usage of AI systems has been found to limit opportunities for the realisation of individuals’ life-goals, for instance, when AI systems deny certain people the opportunity to be granted a loan. When AI systems classify people into categories that exclude them from certain services, products, or activities, they may restrict people’s opportunities to self-realise. According to Honneth, such constraints on a person’s life are incompatible with solidarity. Hence, technologies aligned with solidarity must be designed to guarantee people’s opportunities for self-realisation.
To satisfy these two conditions, an Ethics in Design approach needs to incorporate an assessment of which human and fundamental rights may be affected; human rights impact assessments may provide a suitable tool. Furthermore, it needs to identify stakeholders’ interests and needs in order to ensure that they are not constrained in their opportunities to self-realise. In the broader field of Ethics in Design, there are several methods to account for the interests and needs of other stakeholders. For instance, user-centric design aims to identify users’ interests and needs and account for them in the design process. Participatory design goes a step further and invites users to discuss design choices and thereby participate in the decision-making process. The latter is especially interesting from a solidarity perspective, because it involves collective decision-making, which can enable solidarity within the group. However, the exclusive focus of these methods on users fails to acknowledge indirect stakeholders such as those unintentionally affected. Scholars have pointed to the shortcomings of such methods by proposing the concepts of universal, systemic, or human-centred design, broadening the perspective from users to all humans (Sevaldson, 2018). Yet while a mere user-centric approach would not suffice, an incorporation of all humans may be too broad for practical implementation. A solidarity perspective would necessitate going beyond users and focusing on all affected stakeholders within the sociotechnical system.
As a last element, solidarity generates duties for its members to ensure a just distribution of benefits and burdens. While many Ethics in Design frameworks include an evaluation of the impacts technologies may have on society, a solidarity perspective would entail not only an assessment of harms and benefits overall, but also of how they are distributed across society and stakeholder groups: do some people primarily enjoy the benefits while others primarily carry the risks? Following the identification of stakeholders as well as their interests and needs, a solidarity perspective would also require an assessment of how different stakeholders may be affected. The design would then need to ensure that a just distribution of harms and benefits is accounted for. For instance, an AI system that collects and analyses sensitive data about individuals’ health conditions may entail a potential harm for users overall due to privacy infringements. However, if this data is sold to private insurance companies who use the insights to calculate health risks, resulting in higher insurance costs, people with genetic predispositions would face higher burdens through the use of such AI systems than people in good health. A solidarity perspective would therefore have to preclude technologists, including those who deploy the systems, from using the data for purposes that lead to such an unjust distribution of harms and benefits. It would thus be paramount from a solidarity perspective to identify the more vulnerable individuals within the collective and make sure they are not adversely affected. The duties that arise from solidarity, hence, entail that the better-situated members of society forgo some potential benefit if more vulnerable individuals would otherwise have to carry a higher burden.
Translating the normative aspirations of the solidarity account into practical recommendations for an Ethics in Design approach would hence mean that computer engineers need to account for three aspects in the design of AI systems: (i) they need to make sure that an AI system does not lead to violations of human and fundamental rights; (ii) they need to make sure that opportunities for self-realisation are not constrained and the interests and needs of individuals and society overall are accounted for; and (iii) they need to make sure that harms and benefits are distributed in a just way.
6 Conclusion
Some ethics guidelines seeking to guide the design and deployment of AI technologies have proposed solidarity as an important principle. However, how the principle should be applied in the Ethics of AI has so far been understudied. Consequently, this paper has explored solidarity in the context of Artificial Intelligence. In order to be able to study solidarity from an interdisciplinary perspective, it has first provided a discipline-neutral account of solidarity that describes solidarity’s five core elements. The account was then used to examine whether and how solidarity could be applied to the context of AI using an Ethics in Design approach, hence, focusing on how computer engineers could account for solidarity in technology design.
It has been found that approaching solidarity in Ethics in Design has its challenges, especially in satisfying the first three conditions of solidarity (the element of relationality, the connecting element, and the element of awareness and recognition). Solidarity requires some form of human agency, social interaction, or at least some form of attitude towards others, all practiced within the confines of a fixed community. However, defining that community is not an easy task for technologists, because AI technologies can involve and affect a variety of different stakeholders depending on the case at hand. At the same time, AI technologies may be used and deployed in different contexts later on, broadening the stakeholder group and thereby blurring the limits of the community. Due to the limited ability to define and recognise ‘the community’, it seems problematic to view solidarity and its realisation as a social practice. Rather, it has been suggested that solidarity needs to be conceived of as a perspective that can guide ethical decision-making in technology design.
By understanding solidarity as a perspective, the demands of the first three elements of the solidarity account are weakened. The limited ability to define a fixed collective to be in solidarity, as well as the potential lack of technologists’ awareness and recognition of their relationship, still apply, but they can be viewed as practical obstacles to a normative ideal. This is because the solidarity perspective shifts attention towards solidarity’s normative aspirations and therefore towards what solidarity requires individuals to do. In practical terms, solidarity’s normative aspirations would require computer engineers to account for three aspects in the design of AI systems: (i) they need to make sure that an AI system does not lead to violations of human and fundamental rights; (ii) they need to make sure that opportunities for self-realisation are not constrained and that the interests and needs of individuals and society overall are accounted for; and (iii) they need to make sure that harms and benefits are distributed in a just way.
Nevertheless, these normative aspirations are still deeply intertwined with the collective that is supposed to be in solidarity. Therefore, while the demands of the first three elements may be weakened, their exploration remains important. For one, the solidarity perspective draws attention to the designer’s role as a member of broader society and therefore to the responsibility they bear for the impact technologies may have on society. This includes the consideration of collective rather than only individual interests and goals. Second, it draws attention to an ideal collective goal of designing and deploying AI systems that benefit society overall. And third, it draws attention to direct as well as indirect stakeholders and their values, interests, and needs. By using Ethics in Design methods such as stakeholder identification and participatory design, technologists could at least explore and approximate the potential ‘community’ in solidarity and account for its interests and needs.
A solidarity perspective in Ethics in Design could be valuable because it strengthens the demands of moral duties towards others (Kolers, 2012). Due to its focus on the shared bonds among members of the community, the obligation of a just distribution of benefits and burdens also implies that people are willing to forgo some potential benefit or carry some cost themselves in order to assist those members of the community who are in more disadvantaged positions. In line with Houtepen and ter Meulen (2000), solidarity thus provides the social infrastructure, or motivational means, for justice. In that sense, a solidarity perspective has the potential to change processes in technology design from merely profit-oriented towards more socially oriented practices.
It should be noted, however, that Ethics in Design approaches may not always suffice to ensure that ethical issues do not arise. First, technical features such as the complexity and adaptability of AI may render the evaluation of an AI system’s implications difficult. For instance, when an AI system has self-learning capabilities, its behaviour and implications may change later on, after the design process has already been completed (Bratteteig & Verne, 2018). Long-term oversight mechanisms may be needed to ensure that the requirements of solidarity are fulfilled even as AI systems learn and adapt over time.
Moreover, solidarity is a challenging principle for Ethics in Design because it does not prescribe an ideal end state that computer engineers can optimise for. Unlike other values such as privacy or transparency, solidarity is a means to reach other societal values, interests, or goals. The determination of these values, interests, and goals must be subject to collective decision-making and cannot be left to a small group behind the closed doors of a tech company. Without methods of participatory design, computer engineers are left alone with decisions they are neither educated for nor legitimised to take (Rieder et al., 2020). Consequently, Ethics in Design methods need to be complemented with collective decision-making at the societal level, which then finds its way into the design process. Some of these decisions do not need to be newly taken, as they are engrained in societal norms, constitutions, or political and judicial decisions. However, others may still be open, for instance, how and in what contexts certain systems should be deployed, or whether they should be deployed at all. Taking a solidarity perspective in Ethics in Design frameworks is, hence, only one approach to ethical AI that needs to be complemented with others. Political solidarity for collective decision-making as well as a solidarity-based governance framework for AI may also be needed to deal with the implications of AI. As such, further research should explore the concept of solidarity in the AI context from a political perspective. Moreover, steps need to be taken to transform the considerations raised here into concrete practical recommendations that guide technologists in design decisions made in accordance with a solidarity perspective.
Notes
It should be noted that the varying conceptualisations are situated in different traditions of thought and are critically discussed within the respective field. A critical reception of these conceptualisations is, however, beyond the scope of this paper.
See e.g. Nicola Curtin (2011) on ally-activism as a way to practice political solidarity with people from disadvantaged groups.
The element of relationality and the grounds for the relationship, may together constitute the basis for a ‘community’. For reasons of practicability, the term ‘community’ is used more broadly in this paper and should not be conflated with the actual concept of community, which goes beyond the mere collection of individuals with some social bond.
Krishnamurthy (2013) argues that solidarity entails an affective and a cognitive dimension. In line with other authors, it is contended in this paper that affective attitudes are not a necessary condition and are especially unlikely to arise at the meso and macro level. See for a discussion of solidarity conceived as feelings, e.g., Kolers, 2012; Thalos, 2012.
Notably, the element of awareness and recognition might not play a role for already internalised or established practices of solidarity, nor for institutionalised solidarity at the macro level, because in these instances solidarity has already become a norm or rule that people (sometimes unconsciously) commonly comply with.
According to Bergmark, values can influence or guide actions to reach “preferable ‘end states’” (2000, p. 399) and, hence, can be another motivational source.
The notion of mutual recognition was inter alia proposed by Georg Wilhelm Friedrich Hegel. Axel Honneth has used Hegel’s early writings to develop his social theory of The Struggle for Recognition, in which he developed a psychological, social, and moral understanding of recognition.
Responsibility is here understood as an obligation that requires people to do certain things as opposed to a conception of responsibility as moral or legal accountability.
Where solidarity leads to the sharing of responsibilities, it is assumed that any member of the group could be in a disadvantaged or advantaged position at any time. An example is unemployment benefits, which may be used by different people at different points in time.
Within the field of Ethics of AI there is, in contrast to this view, also a discussion on the moral agency of artificially intelligent systems.
I refer to human and fundamental rights as they are universally accepted rights that are independent from national jurisdictions and cultural differences. It should however be noted that other rights may be equally important.
References
Abbas, A. E., Senges, M., & Howard, R. A. (2019). A hippocratic oath for technologists. In A. E. Abbas (Ed.), Next-Generation Ethics: Engineering a Better Society (pp. 71–80). Cambridge University Press.
(AI-HLEG) European Commission. (2019). High-Level Expert Group on AI. Ethics guidelines for trustworthy AI. Brussels: European Commission. https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.
Albrechtslund, A. (2007). Ethics and technology design. Ethics and Information Technology, 9(1), 63–72.
Arnsperger, C., & Varoufakis, Y. (2003). Toward a theory of solidarity. Erkenntnis, 59(2), 157–188.
Bergmark, Å. (2000). Solidarity in Swedish welfare–Standing the test of time? Health Care Analysis, 8(4), 395–411.
Bratteteig, T., & Verne, G. (2018). Does AI make PD obsolete? Exploring challenges from artificial intelligence to participatory design. In Proceedings of the 15th Participatory Design Conference: Short Papers, Situated Actions, Workshops and Tutorial-Volume 2, 1–5.
Burelli, C. (2016). Realistic solidarity for the real EU (No. 11). Working Paper.
Cornwall, A. (2007). Myths to live by? Female solidarity and female autonomy reconsidered. Development and Change, 38(1), 149–168.
Curtin, N. (2011). The roles of experiences of discrimination, collective identification, and structural awareness in own-group and ally activism [Doctoral dissertation, University of Michigan].
Dafinger, S. (2020). Solidarity among equals? On the hierarchies of solidarity practices. International Conference, Solidarity at the crossroads - Concepts, Practices, and Prospects from an Interdisciplinary Perspective, online. Available at: https://solidarityatthecrossroads.org/
(DEK) Data Ethics Commission of the Federal Government of Germany (DEK), Federal Ministry of the Interior, Building and Community, Federal Ministry of Justice and Consumer Protection. (2019). Opinion of the Data Ethics Commission. Berlin: DEK. http://datenethikkommission.de/wp-content/uploads/DEK_Gutachten_engl_bf_200121.pdf.
Dignum, V. (2018). Ethics in artificial intelligence: Introduction to the special issue. Ethics and Information Technology, 20(1), 1–3.
(EGE) European Commission (2018) Directorate-General for Research and Innovation, European Group on Ethics in Science and New Technologies. Statement on artificial intelligence, robotics and ‘autonomous’ systems. Brussels: European Commission. https://ec.europa.eu/info/research-and-innovation_en
FRA (European Union Agency for Fundamental Rights) (2020). Getting the future right – Artificial intelligence and fundamental rights.
Fredericks, J. (2007). Dialogue and solidarity in a time of globalization. Buddhist-Christian Studies, 27, 51–66.
Honneth, A. (1995). The struggle for recognition: The moral grammar of social conflicts. MIT Press.
Houtepen, R., & Ter Meulen, R. (2000). New types of solidarity in the European welfare state. Health Care Analysis, 8(4), 329–340.
Kolers, A. H. (2012). Dynamics of solidarity. Journal of Political Philosophy, 20(4), 365–383.
Krishnamurthy, M. (2013). Political solidarity, justice and public health. Public Health Ethics, 6(2), 129–141.
Kritikos, A. S., Bolle, F., & Tan, J. H. (2007). The economics of solidarity: A conceptual framework. The Journal of Socio-Economics, 36(1), 73–89.
Laermans, R. (2020). Enacting solidarity. In I. van Hoyweghen, V. Pulignano, & G. Meyers (Eds.), Shifting Solidarities (pp. 193–199). Palgrave Macmillan.
Laitinen, A. (2014). From recognition to solidarity: Universal respect, mutual support, and social unity. In A. Laitinen & A. B. Pessi (Eds.), Solidarity: Theory and Practice (pp. 126–154). Lexington Books.
Lev, O. (2011). Will biomedical enhancements undermine solidarity, responsibility, equality and autonomy? Bioethics, 25(4), 177–184.
Levy, G., & Razin, R. (2020). Social media and political polarisation. LSE Public Policy Review, 1(1), 1–7.
Lindenberg, S. (2006). Prosocial behavior, solidarity, and framing processes. In D. Fetchenhauer, A. Flache, B. Buunk, & S. Lindenberg (Eds.), Solidarity and prosocial behavior (pp. 23–44). Springer.
Lynch, K., & Kalaitzake, M. (2020). Affective and calculative solidarity: The impact of individualism and neoliberal capitalism. European Journal of Social Theory, 23(2), 238–257.
Miller, K., & Tuomela, R. (2001). What are collective goals? Explanatory Connections.
Moreno-Lax, V. (2017). Solidarity’s reach: Meaning, dimensions and implications for EU (external) asylum policy. Maastricht Journal of European and Comparative Law, 24(5), 740–762.
Prainsack, B., & Buyx, A. (2012). Solidarity in contemporary bioethics–towards a new approach. Bioethics, 26(7), 343–350.
Prainsack, B., & Buyx, A. (2016). Thinking ethical and regulatory frameworks in medicine from the perspective of solidarity on both sides of the Atlantic. Theoretical Medicine and Bioethics, 37(6), 489–501.
Reichlin, M. (2011). The role of solidarity in social responsibility for health. Medicine, Health Care and Philosophy, 14(4), 365–370.
Rieder, G., Simon, J., & Wong, P. H. (2020). Mapping the stony road toward trustworthy AI: Expectations, problems, conundrums. Machines We Trust: Perspectives on Dependable AI. Cambridge, MA: MIT Press, Forthcoming.
Sangiovanni, A. (2015). Solidarity as joint action. Journal of Applied Philosophy, 32(4), 340–359.
Scholz, S. J. (2008). Political solidarity. Penn State Press.
Sevaldson, B. (2018). Beyond user centric design. In Proceedings of RSD7, Relating systems thinking and design 7, 23–26 Oct 2018, Turin, Italy. Available at http://openresearch.ocadu.ca/id/eprint/2755/
Shilton, K. (2013). Values levers: Building ethics into design. Science, Technology, & Human Values, 38(3), 374–397.
Smith, N. H., & Laitinen, A. (2009). Taylor on Solidarity. Thesis Eleven, 99(1), 48–70.
Straehle, C. (2010). National and cosmopolitan solidarity. Contemporary Political Theory, 9(1), 110–120.
Tava, F. (2021). Solidarity and data access: Challenges and potentialities. Phenomenology and Mind, 20–2021, 118–126.
Thalos, M. (2012). Solidarity: A motivational conception. Philosophical Papers, 41(1), 57–95.
Timko, C., Schmidt, N., Niederstadt, M., & Roos, M. (2022). Softwareentwickler über Softwareentwicklung. In T. Hoeren & S. Pinelli (Eds.), Künstliche Intelligenz – Ethik und Recht (pp. 363–387). C.H. Beck.
van den Hoven, J. (2007). ICT and value sensitive design. In The information society: Innovation, legitimacy, ethics and democracy in honor of Professor Jacques Berleur SJ (pp. 67-72). Springer, Boston, MA.
Yeung, K. (2018). A study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework. Committee of experts on human rights dimensions of automated data processing and different forms of artificial intelligence. MSI-AUT(2018)05.
Acknowledgements
The author would like to thank Prof. Dr. Ingrid Schneider for her encouragement, feedback, and advice and her constant support as a supervisor. She also thanks the editors Prof. Dr. Judith Simon, Dr. Gernot Rieder and Dr. Jason Branford for their support as well as the anonymous reviewers for the valuable feedback given. A thanks also goes to Dr. Pak-Hang Wong who commented on an earlier draft of this paper.
Funding
This research was conducted as part of the project “GOAL – Governance of and by Algorithms” funded by the German Federal Ministry of Education and Research (BMBF) under reference number 01IS19020D. The author bears full responsibility for the content of this research. Open Access funding enabled and organized by Projekt DEAL.
Contributions
The corresponding author is the sole author of this article. Literature review, ideas, conceptualisations, writing and editing were performed by Catharina Rudschies. Prof. Dr. Ingrid Schneider is the supervisor of the author’s PhD thesis and has read and commented on the work.
Ethics declarations
Competing interests
The author declares no competing interests.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Rudschies, C. Exploring the Concept of Solidarity in the Context of AI: An Ethics in Design Approach. DISO 2, 1 (2023). https://doi.org/10.1007/s44206-022-00027-x