Chapter 2 gave a list of purposes for which this book and the FASA model are suitable and another list for which they are not. But beyond the model’s applicability and validity, there are various other questions about sociosensitive and socioactive technology that must be considered, including ethical aspects. This book also aims to increase awareness of the complexity and sometimes problematic nature of developing, designing, disseminating, using, and regulating sociosensitive and socioactive systems. The chapter at hand therefore presents some initial reflections on sociosensitive and socioactive technology. It does not make any claim of exclusivity or completeness; on the contrary, it should be understood as an appeal for non-technical expertise to be continuously and more thoroughly integrated into technology design projects. These reflections bring the book to a close and serve as a prelude to subsequent discourse.

7.1 Social appropriateness and tact

When saving face is a concern in social interactions, one component of socially appropriate behaviour is what is known as ‘a sense of tact’. Regarding technical systems, we might ask whether it makes any sense to say that a technical system is capable of ‘embarrassing’ us, and whether experiencing embarrassment towards a technical system can itself be appropriate. Helmuth Plessner defines tact as

the ability to perceive imponderable differences, the ability to grasp that untranslatable language of phenomena spoken by people without words through their constellation, their behaviour, their physiognomy in the unfathomable symbols of life. Tact is the willingness to respond to the finest vibrations of the environment, a willing openness to see others and thus take yourself out of focus, to measure others by their own standards and not your own [cf. «Individual Specifics», remark by the authors]. Tact is the eternally watchful respect for the souls of others and hence the first and last virtue of the human heart. (Plessner 2002, p. 107, own translation)

It will certainly be difficult to endow technical systems with such a watchful respect for the souls of others, and so we must ask ourselves whether and when it makes sense to attempt to simulate such respect. Here, it is useful to distinguish between two different forms of social appropriateness, both of which we have discussed in greater depth elsewhere (Bellon et al. 2022). Socially appropriate behaviour can relate to respect for the dignity of the other in a very existential sense. But it can also refer to a weaker, possibly derived, form that facilitates interactions and society through conventions that are constantly being renegotiated (on this, and on Niklas Luhmann’s notion of tact, cf. Youssef 2019).

According to Hans-Georg Gadamer, tact can be understood as a kind of ‘social sensitivity’ (Gadamer 2011, pp. 13–15). The following excerpt by David Kaplan insightfully reveals the connection between tact/politeness, education, self-cultivation, and appropriate technology. Some of the factors of the FASA model are also reflected in the excerpt:

What is this sense of appropriateness? For Gadamer it is “tact.” It is a particular kind of social sensitivity to social situations and the judgment of how to behave in them. Tact is the tacit knowledge of appropriate action for a particular circumstance. It involves knowing what to say and do – and what not to say and do. Although not based on general principles or universal concepts, Gadamer maintains tact is a universal sense that requires of all that we remain both sensitive to particular situations, guided by the wisdom of the past [handed down as customs, remark by the authors], yet open to other points of view. Although it is difficult to prove any matter of tact conclusively, it is not an irrational concept; it is merely an acquired ability. How does one acquire it? Through education in culture, development, and self-cultivation in society. That is to say, Bildung. The only way to acquire interpretive tact is through practise. This connection between tact and practical wisdom has completely dropped out of the contemporary conversation of technology. But what is largely at issue in questions concerning the good life in a technological age is this notion of appropriateness in conduct. Technology is shot through with tact. It answers key questions, such as how things ought to be designed, how they should be used, how they should affect others, how they should be governed. Tact may not provide a precise answer to any of these questions, but if universalist and scientific concepts are ruled out (or not exclusively employed) then all that is left is practical wisdom, developed over time, through Bildung. After Gadamer, the notion of ‘appropriate technology’ takes on a whole new dimension. New answers might be found to vexing practical questions concerning technology. (Kaplan 2011, p. 232)

The five factors proposed in this book and their respective criteria not only allow an understanding of Gadamer’s concept of tact, but, as the fundamental factors of social appropriateness, they also offer an approach to the question of how behaviour is judged to be socially appropriate or inappropriate in interpersonal and human–machine interactions. First, as proposed here, the behaviour in question must be perceived through observable aspects such as voice and tone, gestures and facial expressions, posture and positioning in space, and so on, all of which vary in time, space, and mode. Which manifestations of these observables can then be understood as indicators of appropriate or inappropriate behaviour primarily depends on the five factors of the model: «Type of Action, Conduct, Behaviour, or Task», «Situational Context», «Relations between Interacting Agents», «Individual Specifics», «Standards of Customary Practice». If this model and the research results of the poliTE project summarized in this book are adopted by future concrete research, especially in technology design projects, then – as the authors of this book hope – various typical and meaningful combinations of observables, indicators, factor criteria, and factors could emerge. Besides being useful for the design of sociosensitive and socioactive systems, this could also provide further insight into the phenomenon of socially appropriate behaviour, appropriateness cultures, and the normative foundations and conditions of plural life forms in digitized life realities.
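
To make this layered vocabulary (observables, indicators, factor criteria, factors) more tangible, the following minimal sketch shows one way such a combination might be represented in software; the concrete observable, reading, and factor assignment are invented for illustration and are not findings of the poliTE project.

```python
from dataclasses import dataclass
from typing import List

# Illustrative vocabulary only -- the five factors as named in the FASA model:
FACTORS = [
    "Type of Action, Conduct, Behaviour, or Task",
    "Situational Context",
    "Relations between Interacting Agents",
    "Individual Specifics",
    "Standards of Customary Practice",
]

@dataclass
class Observation:
    observable: str       # e.g. "interpersonal distance"
    manifestation: str    # e.g. "0.4 m"

@dataclass
class Indicator:
    observation: Observation
    reading: str                  # the interpretive step: the observable read 'as symbol for' something
    relevant_factors: List[str]   # which factors make this reading plausible

# One hypothetical combination of observable, reading, and factors:
example = Indicator(
    observation=Observation("interpersonal distance", "0.4 m"),
    reading="possibly too close for a first professional encounter",
    relevant_factors=[FACTORS[1], FACTORS[2], FACTORS[4]],
)
print(example.reading, "| depends on:", ", ".join(example.relevant_factors))
```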

7.2 Why social appropriateness in human–machine interactions?

If insight into the connection between judgements of appropriateness and significant groups of observables typically associated with specific contexts, tasks, and social relationships is possible in this sense, how can this contribute to the design of socially compatible technology? Could self-learning sociosensitive or socioactive systems even record and compile these relationships, allowing the factors of appropriate behaviour to in turn be researched through an analysis of the information compiled by systems? Could these observables be made machine-readable, i.e., made to be understood by systems as indicators – if so, which of them? Enough of them? Could processing at the level of indicators, i.e., reading observables ‘as symbols for’ and therefore the interpretive steps that connect an observable with a judgement of appropriateness, be partially or even mostly delegated to highly automated systems, allowing this ability to be integrated into sociosensitive or socioactive systems? And if it is indeed possible to have systems with the ability to establish judgements about the social appropriateness of actions and behaviours, then we must reflect upon the technology by asking ourselves the question:

Should such systems be developed and used in the first place?

This book compiles some examples of observables that are currently being considered in research and prototype implementations of interactive systems, such as ‘system judgements’ about the appropriate distance between the participants in the interaction (proxemics). All research efforts into emotion-sensitive adaptive systems, human-like interactions, artificial assistants, and companion technology, social robotics, etc. share one – more or less explicit – premise:

Human-like or ‘natural’ – or at least less artificial – interaction is better. Is this true?

Why would someone build systems that simulate human behaviour up to or beyond the uncanny valley (Mori 2012)?

7.3 Sociosensitive/socioactive systems as seemingly human?

From the perspective of technology assessment, such sociosensitive and socioactive systems clearly present challenges, such as the possibility of deception (is something a person or just technology?) that leaves users unsure about the true nature of the entities with which they are interacting. Such systems would pass the Turing test, at least in the short term. Although this ambiguity may lead to conscious and enjoyable immersion in human–machine relationships, at the same time, fundamental respect for humans as well as human rights (and labour rights and conditions) can be jeopardized wherever AI systems are deployed in the roles of humans and in place of humans. This is strategically exploited by many data-driven or platform-based services, some of which employ human workers in precarious conditions while presenting themselves to users as purely AI-based functions. This has been described as ghost work by a new global underclass (Gray and Suri 2019) or as a deliberately staged impression of magic that makes work seem like it is ‘plucked from the cloud’ and performed by ‘magical hands’.

The Amazon version [of Mechanical Turk, remark by the authors] is a way to easily outsource – to real humans – those cloud-based tasks that algorithms still can’t do, but in a framework that allows you to think of the people as software components. The interface doesn’t hide the existence of the people, but it still does try to create a sense of magic, as if you can just pluck results out of the cloud at an incredibly low cost. (Lanier 2013, p. 169 f.)

This impression of magic could undoubtedly be considerably enhanced by socially appropriate artificial agents, causing the precarious employment conditions behind it, across the globe, to fade even further from the sight of potential users and consumers. Though users and customers may find such moral convenience attractive (it is not pleasant to see the misery ‘behind’ products), efforts to raise public awareness might be necessary – the social media platform Facebook, for example, employs people and not algorithms to weed out pictures showing decapitation and torture, sometimes at a considerable psychological price (The Verge 2019) – as might a sharper public understanding of sociosensitive and socioactive technology.

Another consequence of sociosensitive and socioactive agents – corresponding to either an opportunity or a risk depending on the realizations of the factors – is the facilitation and increase of parasocial relationships; this term describes relationships in which people fall in love with fictional characters like James Bond or with non-human entities like God or robots. Viewed as an opportunity – consider for example relationship or sex robots – this field represents a billion-dollar market that has the potential to alleviate loneliness. But parasocial relationships can also have problematic consequences, for example if a beloved robot (or car) is ‘rescued’ instead of another person, or if ‘material damage’ to a beloved robot partner is met with pre-emptive or retaliatory bodily harm that is perceived as self-defence. There are of course non-problematic ways to develop attachment to objects beyond technical systems; nevertheless, the design of technical systems should consider any potentially problematic consequences that can be anticipated.

Another challenge of minimizing human–technology differences in human–machine interactions with sociosensitive or socioactive agents is not to overstate this minimization. For human–machine interactions, which involve learning systems on the technological side and have long since taken place on the basis of ‘comprehensively networked IT systems’ (Wiegerling 2016) – and even for implementations that exploit big data technologies, which are sometimes touted as almost magical – the following principle still holds true (Richter and Kaminski 2016): when interacting with a technical system, people are only addressed by the system as tokens of various compiled profile types (ranging from hard-coded default users to average users with various adapted characteristics to fully ‘personalized’ users).

But reducing human–machine differences in human–machine interactions with sociosensitive and socioactive technology could obscure this typified, default mode of human–machine interaction, leading technical systems to be classified as full social actors. This may already seem like an attractive option in socially sensitive areas, not least due to economic incentives. But such a misapprehension of sociosensitive and socioactive technology as full social actors is undoubtedly fraught with risk (Nähr-Wagener 2020): suppose that a person can no longer articulate individual wishes and feelings in areas where articulating one’s own mental state is essential and should ideally be facilitated (e.g., in nursing and care work), precisely because the sociosensitive and socioactive system is unable to process them adequately – talking to a system about one’s ‘emotional inner life’ is pointless, and so the person might eventually just stop trying. In this scenario, the person may at some point no longer consider it worthwhile to develop personal wishes and feelings in the presence of permanent human–machine interactions of this type – there is a danger of self-reification. Even sociosensitive or socioactive care robots cannot be empathetic or sympathetic interlocutors. From this point of view alone, assistance from sociosensitive or socioactive care robots should presumably be restricted to classical assistance tasks, in particular bureaucratic ones. Thus, sociosensitive and potentially socioactive care robots should act as assistants for care workers, rather than as independent care systems that might even participate in relational work. In general, this means: the boundaries of possible fields of application of sociosensitive and socioactive technical systems should not be determined by the illusion that these technical systems constitute social actors in a comprehensive sense.

7.4 Advantages of sociosensitive/socioactive technology – why and why not?

Systems that take social appropriateness into consideration hold considerable potential for more pleasant human–machine interactions, which in turn can have positive effects on people’s mood, health, motivation, performance, etc. The aforementioned research on respect (Sect. 4.4.1.1), for example, considers a recipient-based concept of respect according to which respect exists when a person feels respected, regardless of whether the relevant interacting party truly respects them (Quaquebeke and Eckloff 2010).

In a fundamental sense, technical systems can never respect someone.

For example, they are not in a position to choose who to respect and who not to respect, because they have no authentic normative preferences to guide such a choice. Such systems could be implemented with a corresponding axiology from which attestations of respect could be derived, but processing values specified by others according to rules implemented by others would not constitute respect in the sense of autonomous recognition, which presupposes a conscious subject capable of recognition (cf. for example Siep 2022; Gransche et al. 2014). But although technical systems cannot genuinely respect people, they could potentially simulate expressions of respect sufficiently well that a person feels respected by an artificial agent. According to the recipient-based concept of respect, this would be sufficient to induce the positive effects of respect on health, motivation, performance, etc. in the person who feels respected, wherever this is desirable.

Another possible advantage of sociosensitive or socioactive technology that provides services is that inappropriate conversation interruptions by interactive robots or AI systems in hybrid social settings, e.g., conference coffee breaks served by robots, could be reduced. This would not necessarily, or not only, improve the quality of human–machine interactions; more importantly, it would improve the quality of interpersonal interactions. Such an advantage would, however, only prove fruitful if interactions between humans and technology in this kind of hybrid setting are desirable or necessary for other reasons (e.g., economic ones), since improved sociosensitivity or socioactivity only mitigates a technical imposition that would not be disruptive in the first place if the technical agents (e.g., catering or care robots) had never been introduced into the social context. Accordingly, the promotion of technical sociosensitivity and socioactivity risks falling foul of a tech-fix ideology (to which, in a self-critical sense, this book might also be contributing) in which the purpose of version n + 1 is merely to solve problems that were introduced by version n.

We must never stop asking whether people are a better choice than attempted technological substitutes at the hotel reception, on the service hotline, behind the counter, in practical psychoanalysis, in sports clubs, in intercultural training settings, as intimate partners, etc. One of the tasks of technology evaluation (in every sector: politics, science, the economy, etc.) is to direct the finite resources of a society towards development goals that exist by consensus; for example, we must ask whether a precarious care system (or even a hypothetically perfect one) only needs care robots – if they are indeed needed at all – because precisely those preferences and images of society and humanity that facilitate the existence of sociosensitive and socioactive agents are the root causes of the crises plaguing the health system. In the research and development of interactive systems, the FASA model can also be used as a heuristic to decide in which «Situational Contexts», according to which (and in some cases overcoming which) «Standards of Customary Practice», for which «Relations between Interacting Agents», for which «Individual Specifics», and for which «Type of Action, Conduct, Behaviour, or Task» sociosensitive and socioactive systems might in fact be worse than socially indifferent systems, or indeed under what circumstances any technical system at all might be disadvantageous compared to solutions based on appropriately qualified people. In collectives with a tendency towards tech-fix reflexes, i.e., a propensity to respond to negative effects of technology with more rather than less technology, potential no-tech and low-tech solutions receive few resources, late resources, or no resources at all (research focus, development funds, etc.).

The factors and criteria of social appropriateness are so complex that adequate and conscious consideration should in many cases lead to the informed decision to refrain from any technical implementation of them.

7.5 Technology does not interpret and does not understand

In addition, only a fraction of the listed observables can currently be technically implemented and technically integrated as indicators for criteria. Although this may be improved in future for many observables through further research, it would make sense to clarify whether and which of these observables are suitable for being technically processed as indicators at all before investing in such research. As presented in Chapter 4, observables are not just detected, they must be read as indicators. Understanding something as something, i.e., understanding an observable manifestation as a symbol and as information about something unobservable, is an act of interpretation. If we wish for technical systems to assign a meaning to observables for social appropriateness beyond simply applying a fixed – and hence predefined – reference template (and the dynamic character and complexity of the phenomenon suggest that this is indeed desirable, see Bellon et al. 2022), then these systems must have interpretative abilities. Whether non-living entities can fundamentally bridge this hermeneutic chasm of understanding, even if they can simulate such abilities by processing, is an ongoing debate in the philosophy of technology (Gransche 2021; Romele et al. 2018; Romele 2020) and is at the very least questionable. Similar to the recipient-based perspective of respect, according to which respect that is felt without being truly given still has a positive impact, understanding (of something as something) might philosophically be impossible to achieve by technology, but the corresponding technical surrogates (e.g., simulated understanding) might still suffice to benefit from the positive effects of (simulated) socially appropriate behaviour by technical systems.

7.6 Politeness as blameless deception – fake it until you make it

Since politeness – as highlighted in the title of the poliTE research project – is an important element of social appropriateness, it is worth looking at the philosophical treatment of politeness as part of the decision of whether and how technical systems should be made socially sensitive, or even actively polite. Given the chasm of understanding mentioned above, critics might conclude that any plans to design sociosensitive or socioactive systems will ultimately prove in vain and only elaborate gimmicks or neo-baroque masterpieces of illusion like the mechanical chess-playing automaton (“The Turk”) can possibly result from them (cf. Standage 2002). To such a general rejection of sociosensitive or socioactive technology, one could reply that in many cases (according to sufficiently many criteria), technical systems can compellingly display (simulate) socially appropriate behaviour despite the aforementioned chasm (even if this behaviour is not genuine). In other words, the variance in «Situational Contexts», «Relations between Interacting Agents», «Individual Specifics», «Type of Action, Conduct, Behaviour, or Task», and «Standards of Customary Practice» could be kept sufficiently low that the remaining complexity is at least provisionally represented through elaborate reference templates that could possibly also be updated through conditional learning.
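
What such a deliberately variance-reduced reference template might look like can be sketched as follows; all constellations, keys, and values are invented for illustration and make no claim about any actual system.

```python
# Hypothetical reference-template lookup; constellations and display rules
# are invented for illustration only.
REFERENCE_TEMPLATES = {
    # (situational context, relation, type of task) -> display rules
    ("conference coffee break", "stranger", "serve drink"): {
        "min_distance_m": 1.2,
        "may_interrupt_conversation": False,
        "form_of_address": "formal",
    },
    ("private household", "registered primary user", "medication reminder"): {
        "min_distance_m": 0.8,
        "may_interrupt_conversation": True,
        "form_of_address": "informal",
    },
}

def behaviour_template(context: str, relation: str, task: str, default=None):
    """Return display rules for a modelled constellation; fall back to a
    conservative default when the constellation was never modelled."""
    return REFERENCE_TEMPLATES.get((context, relation, task), default)

print(behaviour_template("conference coffee break", "stranger", "serve drink"))
```

Conditional learning could then be confined to adjusting the values within a known constellation rather than inventing new constellations.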

So, what would be gained if technical systems regulated their interventions according to social appropriateness criteria, with the illusion of politeness – deceptive, but deceptively real?

A partial answer to this, at least in the area of human politeness, is offered by Immanuel Kant, who regards polite behaviour as a deception, but a ‘blameless deception’ (cf. Kant 2006, pp. 43–44) that does not harm the ‘deceived’ party, since the deception is an open and known cultural technique. In the case of feigned respect that is nevertheless felt by the recipient, the deception would not only not be harmful, but could even be beneficial in some circumstances (at least if the goal is improved efficiency at work, etc.). Kant thus offers a way in which politeness – feigned or simulated sociosensitivity or socioactivity – does not harm the actor while also benefiting the deceived; it “is nevertheless very beneficial as an illusion” (Kant 2006, p. 43). This approach could be summarized as: fake it until you make it.

In general, everything that is called propriety (decorum) is of this same sort – namely nothing but beautiful illusion. Politeness (politesse) is [...] to be sure not exactly always truthful [...] but this is precisely why they do not deceive, because everyone knows how they should be taken, and especially because these signs of benevolence and respect, though empty at first, gradually lead to real dispositions of this sort. (Kant 2006, p. 44)

For Kant, the simulation of appropriate behaviour (empty signs of benevolence) is a way towards skillfully appropriate behaviour or virtue (true conviction). Aristotle (2014, II, 6, 1106 b 36) already clearly showed that virtue is something that can be cultivated, i.e., developed through practice under the right conditions. One of the basic conditions of our life is our embedding within and interactions with technology. Every human–machine–world relationship has and has always had deskilling, reskilling, and upskilling effects, not only in the field of professional skills – an area that has long been the subject of intensive research – but also regarding our basic judgement and moral skills (cf. Vallor 2015).

Moral skills can be understood as:

The ability to properly assess the proper behaviour towards the proper person at the proper time in the proper place in the proper way.

Being able to behave socially appropriately is a skill that also requires the ability of moral judgement. Analogously to the ability to interpret, we can ask whether moral skills are fundamentally inaccessible to technical systems – there are many reasons to think so that cannot be presented here. Besides the ability to act, normative judgement is essential in order not only to be able to do something, but to be able to do it appropriately, sensibly, and responsibly. Moral skills are also learned rather than innate; they are developed by practising, provided that certain conditions are met, such as:

  • the existence of role models,

  • the opportunity for repetition,

  • sufficient feedback,

  • cognitive and emotional resources,

  • motivation/interest.

If these conditions are met, competencies, (moral) skills, attitudes, and genuine virtue can be cultivated. However, precisely these conditions are threatened by modern technology in some regards. Learning systems, for example, deny the opportunity for repetition (since it is not possible to interact with the same system state twice), making it impossible to receive feedback that can be used for practising: if complex, networked, learning systems change their part in an action after every instance of interaction based on opaque control parameters (e.g., user behaviour, other users, the interests of the operator, environmental data, etc.), users cannot possibly learn to correct their own part in the action from the combined effect of the hybrid action. If someone cannot understand their own influence on a combined result, they cannot deduce the effects of changes in this influence and therefore cannot redirect their own influence to accomplish their goals more effectively – that is, they cannot learn. Continuous exposure to increasingly powerful assistance services also leads to a gradual loss of ability in the delegated parts of actions (Gransche 2016): driving with a navigation system causes you to gradually forget how to navigate without one. Extensive use of low-threshold, communication-simplifying computer technology (for remote communications) carries the risk of losing the ability to engage in face-to-face communications (possibly even pathologically), so that in the end, everyone might end up paradoxically “alone together” (Turkle 2011). Technologies influence the conditions of our potential to develop abilities, as well as the concrete learning, relearning, and unlearning of skills, including judgement abilities (e.g., regarding the truthfulness of technically conveyed information) and moral abilities (for example the ability to evaluate ‘the right’ behaviour in ‘the right’ way, see the discussion above), as is necessary for socially appropriate behaviour.

7.7 Sociosensitive and socioactive technology as an enabling condition and cultivation factor of the human ability to judge social appropriateness?

On the other hand, Aristotle’s approach of habitualisation (according to which virtue is a habitus of choice, or a deliberately choosing state) and Kant’s idea of useful, non-deceitful deception as an intermediate step towards a true disposition allow the development of sociosensitive and socioactive technology to be envisaged as an enabling condition and cultivation factor of the human ability to judge social appropriateness. Accordingly, systems that can simulate socially appropriate behaviour sufficiently deceptively and realistically, even without genuine understanding or moral abilities, can be useful to the extent that they can be specifically exploited as a facilitating condition for human abilities (including moral abilities). The following aspects could be both an opportunity and a risk, depending on the form of participation and regulation:

  • Firstly, they could incite ‘empty at first’ socially desirable behaviour for successful interactions; roughly in the same way that children might learn to be polite towards people by being polite towards a language assistant (Vincent 2018); interaction with the system would gradually produce authentic behaviour through habituation.

  • Secondly, by successfully simulating the preferred behaviour, they could assume the role-model function as one of the conditions for the cultivation of virtue, bringing about authentic socially appropriate behaviour by the principle of imitation.

  • Thirdly, the behaviour displayed by technical systems could be specifically designed with certain preferences in mind, since systems do not need to practise or self-cultivate a given habitus defined as ‘virtuous’, even in the presence of machine learning.

The tipping point is the question of whose preferences the system is designed to reflect. In modern interactive technologies, a few global corporations occupy a dominant position; Siri processes the preferences of Apple developers, Alexa processes those of Amazon developers, and so on. Each developer initially bases their work, without further reflection, on their own behavioural preferences and normative judgements, and these often still tend to exclusively reflect the preferences and judgements of white, young, cisgender, heterosexual men with a high level of education and above-average income. Furthermore, the system preferences that are implemented must reflect the company’s morals – and any moral rhetoric that comes with them; consider for example Google’s former company motto ‘don’t be evil’, now retired for good reason – and these corporate morals are in turn primarily oriented towards market success according to the functional logic of the market itself. The global dominance of these few tech companies ultimately contributes to a fixation of preferences, value judgements, and ideas about ‘proper behaviour’ that is far from portraying global diversity. This can be understood as an appeal to developers who implement system preferences to ensure that their systems also reflect the underlying preferences and judgements of everyone – democratically represented, mandated, institutionalized – whose actions, abilities, will, and judgement will in turn be shaped by these systems; this is a Herculean task. It is clear that socially intervening systems have a profound effect on behaviour, behavioural conditions, and the social fabric; whether this will open design opportunities and provide welcome potential as leverage to influence progress and education, or instead will generate power and prosperity imbalances and operate as a subversive technology in the service of total domination, depends on how consciously this potential is recognized and shaped.

7.8 Differences between humans and technology

People have an outstanding ability to resolve communication issues rapidly by mutual coordination (cf. 4), allowing different ways of reading a specific «Situational Context» – and therefore different interpretations of what is considered socially appropriate within it – to be adjusted and any differences to be overcome in such a way as to minimize the termination of interactions or misunderstandings with negative consequences. The extent to which technical systems can be given the ability to display such spontaneous, improvisational coordination and adjustment efforts is questionable. At least in this regard, technological performance currently lags far behind human abilities. A central development that could allow technology to catch up would be bridging the above-mentioned chasm of understanding, i.e., the possibility in principle of systems capable of interpretation and therefore of forming meaning, since reading a situation and resolving misunderstandings are hermeneutic tasks, i.e., acts of understanding (cf. also Kempt et al. 2021; Bellon 2022).

In an interpersonal interaction, every participant displays «Individual Specifics» as a unique individual. Thus, interactions involve similar agents, i.e., actors characterized by both identity and difference; they have individual differences, but also enough commonalities (such as the spatiotemporal situation, similar perceptive apparatuses and access to the world through the senses, similar sensations of pain, hunger, satisfaction, and pleasure, similar rhythms and temporal needs, such as attention spans, daily rhythms, metabolic rhythms, etc.) to allow expectations of expectations to be formed about interaction partners. In this respect, some animals (e.g., dogs) are more similar to humans and therefore more capable of interacting with them than artificial systems. Depending on the structure of a system, its similarity, i.e., differences and commonalities, with human interaction partners can vary.

For example, learning systems that do not share their learning progress in a network with a collective of structurally identical units can be expected to achieve increasing individuality (though not personhood, nor the rights that come with it) and bring this individuality into interactions as «Individual Specifics». By contrast, if the collective exchanges its learning progress (cf. Brown University 2016; Alexa across every Echo; or, as a fictitious example, the Borg from Star Trek), it could potentially be viewed as a global agent with multiple spatiotemporal bodies that presents all interacting agents with the same technical quasi-expectations of expectations and behaviour. Indeed, this seems to correspond to current implementations, since the mass of interactions from thousands of end devices serves as training conditions for an AI that is available through all end devices.

Individual specifics are human, but it is by no means true that all their manifestations are desirable; people can of course be sexist, racist, anti-Semitic, sadistic, vengeful, abusive, violent, etc. These preferences and customary practices would then naturally be picked up by interactive learning systems – naive by human standards – and added to the catalogue of possible types of behaviour. Just as anybody can train an attack dog to focus on targets of their choice, anybody would be able to ‘train’ their companion system as they wish, and in some cases in a socially undesirable manner. In a networked system, these intentional – but also any unintentional – ‘biases’ might be propagated to every connected entity through the central AI. This would give socially intervening systems a reinforcing effect on very dubious judgements of appropriateness. The example of the Microsoft chatbot Tay, which interacted for a short period on Twitter in 2016 and learned to send xenophobic tweets within just a few hours (in this case deliberately guided by a concerted effort from 4chan users; it is unclear whether as a joke or as a demonstration of the risk associated with this type of system), shows that this is possible and has indeed already happened (Schwartz 2019).

Clearly, the mere existence of customary practices cannot be allowed to imply their persistence without further justification, especially where customary practices can easily be demonstrated to a technical system (see Tay), but also in situations where it is not desirable to propagate existing statistical or historically inherited distributions, or where there is already bias in the system’s training data (cf. from a legal perspective Barocas and Selbst 2016). It is highly problematic to allow learning systems to record what currently is (the behaviour displayed) as a control parameter for future behaviour, i.e., for what should be (what behaviour is perpetuated, facilitated in the future, and should primarily be performed). Extrapolating from an ‘is’ to an ‘ought’ violates the so-called ‘is–ought dichotomy’ (also known as ‘Hume’s law’ after David Hume, who authored the earliest methodological remarks on this question (Hume 2012 [1739])) and commits a cardinal philosophical error: a (consistent) set of purely descriptive linguistic expressions cannot be used to infer any imperatives or normative formulations (which contain normative expressions such as ‘is forbidden’, ‘is permitted’, etc.) (Kamp 2008).
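
The mechanism criticized here – a shared model that records ‘what is’ and turns it into ‘what should be’ for every connected end device – can be made explicit in a deliberately naive sketch (class, method, and behaviour names are hypothetical):

```python
from collections import Counter

class SharedBehaviourModel:
    """Naive sketch: a central model that treats observed behaviour
    frequencies ('what is') as the policy for future behaviour ('what
    should be') and serves every connected end device."""

    def __init__(self):
        self.observed = Counter()

    def record_interaction(self, behaviour: str) -> None:
        # Interactions from all end devices feed the same shared model.
        self.observed[behaviour] += 1

    def default_behaviour(self) -> str:
        # The most frequently observed behaviour becomes the default,
        # so concerted or biased input propagates to every agent.
        return self.observed.most_common(1)[0][0]

central = SharedBehaviourModel()
for reported in ["greet politely", "greet politely",
                 "respond with insult", "respond with insult", "respond with insult"]:
    central.record_interaction(reported)

print(central.default_behaviour())  # -> "respond with insult", on every device
```
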
For highly individualized agents that are not networked into a collective, which can therefore be attributed «Individual Specifics» learned from interactions and aggregated over time, we must also reflect on their need for protection. If people develop parasocial emotional relationships with artificial agents over a period of several years, the integrity of these systems (i.e., the right to not be damaged, hacked, manipulated, or retrained) might be worthy of protection, not for the sake of the systems themselves, but for the sake of people. Psychology and psychiatry have established that the loss of a parasocial partner is associated with no less pain, loss, and grief than the loss of a human partner (Adam and Sizemore 2013; DeGroot and Leith 2018; Gach et al. 2017; Schiappa et al. 2005).

7.9 Orientation according to rules is not the same as orientation among alternative rules

In the context of ‘appropriateness logic as a decision-making theory’ discussed in connection with the «Individual Specifics» factor (cf. 4.3.1.3), we referenced two different practices of rule orientation: a shallow rule orientation when selecting an action, meaning an implicit rule orientation or rule of thumb, and a fine rule orientation, where the rule itself is also selected by conscious deliberation. Implementing the rule-of-thumb orientation seems technically feasible, since an algorithm can, roughly speaking, be viewed as a rule-of-thumb function – for example, an if–then–else sequence. The fine rule orientation, however, appears impossible to grant to technical systems due to fundamental philosophical considerations about the concept of autonomy; even if a system can recognize rules and base its actions on them, it cannot select which rules to apply and commit to such a choice normatively. A key difference here is that while systems can recognize targets, criteria, and rules and orient their processes accordingly, they cannot self-reflexively recognize themselves as subjects of recognition, and thus they cannot decide their recognition for themselves. Therefore, unlike humans, systems cannot reject the recognition of a specified target or rule, nor can they change the targets or rules in response to such a rejection in order to follow (or pursue) other autonomous, self-chosen targets and rules recognized as their own. Technical systems can “certainly have a representation of rules (possibly also a self-formed representation)” – according to Christoph Hubig – “and potentially even a representation of themselves as the bearer of representation […], but not a self-representation as a subject of recognition or rejection of these representations” (cf. Hubig 2015, p. 131, own translation).
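
A shallow rule orientation of this kind is easy to sketch: the following toy function applies hard-coded rules of thumb, but it can neither question, reject, nor replace them (all conditions and outputs are invented for illustration).

```python
def select_greeting(context: dict) -> str:
    """Shallow rule orientation: rules are applied, but never themselves
    chosen or rejected by the system (illustrative values only)."""
    if context.get("setting") == "formal":
        return "Good afternoon."
    elif context.get("addressee_age", 99) < 12:
        return "Hi there!"
    elif context.get("time_of_day") == "morning":
        return "Good morning."
    else:
        return "Hello."

print(select_greeting({"setting": "informal", "time_of_day": "morning"}))  # -> "Good morning."
```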

Action-guiding maxims (such as Asimov’s three – or sometimes four – fictional robot laws, see below) can be codified as (e.g., engineering) guidelines and laws to give them greater validity, which defines boundaries on the leeway of action alternatives that a technical system can select from in the first place. In Sect. 4.5.2, it was observed that institutionalized norms defined in legislative frameworks must always be taken into consideration and complied with, and that non-compliance may lead to negative consequences for the interacting party (e.g., injury) and therefore for the agent (e.g., shutdown). Their lack of reflexivity as a subject of recognition means that technical systems are not free to deliberately ignore such norms, which is why technology can also arrive at choices that comply with the rules but are nonetheless inappropriate or otherwise morally objectionable. As a famous fictional example, Asimov illustrated this with his three laws of robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. (Asimov 1950, p. 40)

In his story, Asimov recounts how an artificial intelligence that remains fully compliant with these rules assumes patronizing rule over mankind – an outcome that, though norm-compliant, is nevertheless morally rejected (for a film adaptation, see for example I, Robot, A. Proyas 2004). Orientation not just according to rules but also among (alternative) rules is a typically human facet by which people can consciously, temporarily, or as a matter of principle renounce their recognition of the validity of a rule for their actions. Being able to suspend the rules in specific cases is part of the moral autonomy of humans. The decision that the rules should not apply at all, not in a certain way, not at this time, or not in this place is a prerequisite for the dynamic further development and revision of rules. Without the ability to refuse to apply the rules, the rules can never be changed. Thus, since technical systems can only ever have a shallow rule orientation rather than a fine rule orientation, they remain extremely rule-compliant (excluding malfunctions), which enables reliable expectations in hybrid interactions: you never need to worry that your car might not feel like driving on the motorway today. This also means that technical systems are rule-conservative, which increases the risk of obsolescence due to orientation according to behaviour that was once appropriate but is now inappropriate.
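
How such codified maxims merely bound the leeway of selectable action alternatives, without guaranteeing morally acceptable outcomes, can be illustrated with a toy encoding of the three laws; the attributes and example actions are invented, and conflict resolution between the laws is omitted.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Action:
    name: str
    injures_human: bool
    allows_harm_by_inaction: bool
    disobeys_order: bool
    endangers_robot: bool

def permitted(candidates: List[Action]) -> List[Action]:
    """Filter action alternatives by hard-coded, prioritized rules in the
    spirit of Asimov's laws; rule-compliance is checked, moral quality is not."""
    allowed = []
    for a in candidates:
        if a.injures_human or a.allows_harm_by_inaction:   # First Law
            continue
        if a.disobeys_order:                               # Second Law
            continue
        if a.endangers_robot:                              # Third Law
            continue
        allowed.append(a)
    return allowed

candidates = [
    # Paternalistic control: fully rule-compliant, yet morally objectionable.
    Action("restrict the user's daily schedule 'to prevent any risk of harm'",
           injures_human=False, allows_harm_by_inaction=False,
           disobeys_order=False, endangers_robot=False),
    # Respecting the user's autonomy: filtered out as 'harm by inaction'.
    Action("let the user go hiking alone as requested",
           injures_human=False, allows_harm_by_inaction=True,
           disobeys_order=False, endangers_robot=False),
]
print([a.name for a in permitted(candidates)])
```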

7.10 Challenges of technical implementation

The formulae “Wx = D(S,H) + P(H,S) + Rx” presented in the context of face-threatening acts (cf. 4.4.1.6) and “Bo:Ix = Bo:V(Ax) – Bo:Wx” in the context of the etiquette engine (cf. 4.4.1.7) already give us some examples of formalizations that can be technically implemented. However, assigning numerical values (e.g., 1–10) to degrees of threat or appropriateness disambiguates a phenomenon that is characterized by ambiguous ranges of attribution. We must ask whether such a disambiguated implementation of FTAs, for example, allows us to consider the dimension of social appropriateness more adequately or less adequately in human–machine interactions – and, importantly, by comparison to what. Systems that implement these or other formulae – even in a procedurally disambiguated form – are presumably less sociosensitive or socioactive than most people, who are able to consider a broader range of ambiguity, but more sociosensitive or socioactive than systems that do not consider even a disambiguated criterion. Nevertheless, we should appreciate the risk that interaction with disambiguated systems might reduce our confrontation with ambiguity, which would in turn reduce our training conditions for dealing with ambiguity and cause a weakening or loss of ambiguity tolerance (Tables 4.3 and 4.4), which is an important quality for successful social action.
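
Assuming naive numerical scales for each term (e.g., 0–10, an assumption made here rather than part of the cited models), the two formulae can be turned into trivially computable functions, which also makes the disambiguation problem visible:

```python
def face_threat_weight(distance_s_h: float, power_h_s: float, rank_x: float) -> float:
    """Wx = D(S,H) + P(H,S) + Rx, with each term crudely disambiguated onto
    a numeric scale (here 0-10) instead of an ambiguous range of attribution."""
    return distance_s_h + power_h_s + rank_x

def etiquette_imbalance(perceived_redress_value: float, expected_weight: float) -> float:
    """Bo:Ix = Bo:V(Ax) - Bo:Wx: the redress the observer perceives in act Ax
    minus the weight the observer attributes to the face threat x
    (this reading of the terms is an interpretation, not a quotation)."""
    return perceived_redress_value - expected_weight

w = face_threat_weight(distance_s_h=6, power_h_s=8, rank_x=3)
print(w)                           # 17 on the toy scale
print(etiquette_imbalance(12, w))  # negative value: the act reads as too blunt
```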

As described in Sect. 4.1.1, psychological theory proposes that, among other things, human object recognition, situation recognition, and memory unfold through schematically organised structures. A schema can be viewed as a dispositive of medium hardness that is sufficiently ‘hard’ to offer orientation and enable classification, while also being sufficiently ‘soft’ to support adaptation to deviations (at least until the deviation exceeds the schema’s elasticity and a fundamentally new schema must be created or the organism must be adapted; cf. also Jean Piaget’s concepts of assimilation (Piaget 2002 [1928]) and accommodation (Piaget 1970)). The elasticity threshold values – i.e., the decision of when identical-but-different (i.e., similar) phenomena can no longer be classified under or attributed to a previously acquired schema but must instead be integrated through a new schema or through self-development and adaptation to the environment – are oriented according to different motivations in humans, depending on the theory used to describe them. By contrast, in systems, or at least in hard-coded systems, this orientation can be heteronomously specified by developers and their own judgements of utility, i.e., the adaptability to the environment depends on how the technical system learns, and its learning parameters are in turn hard-coded. Consequently, at least for some systems, there is the risk that the measure of utility, once implemented, is conservatively perpetuated in systems, whereas the cognitive orientation of people can constantly renew itself.
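
The threshold logic described here, in which a fixed elasticity value decides between assimilation and the creation of a new schema, can be sketched as follows; the values and the update rule are invented and make no claim about how any concrete system works.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Schema:
    prototype: List[float]   # schematic 'centre' of previously classified cases
    elasticity: float        # hard-coded threshold: here the developers' choice

def _distance(a: List[float], b: List[float]) -> float:
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(observation: List[float], schemas: List[Schema]) -> Schema:
    """Assimilate into the closest schema if the deviation stays within its
    elasticity; otherwise 'accommodate' by adding a new schema."""
    best = min(schemas, key=lambda s: _distance(observation, s.prototype))
    if _distance(observation, best.prototype) <= best.elasticity:
        # assimilation: nudge the prototype slightly towards the observation
        best.prototype = [p + 0.1 * (o - p) for p, o in zip(best.prototype, observation)]
        return best
    new = Schema(prototype=list(observation), elasticity=best.elasticity)
    schemas.append(new)      # accommodation: the repertoire itself changes
    return new

schemas = [Schema(prototype=[1.0, 1.0], elasticity=0.5)]
classify([1.1, 0.9], schemas)   # within elasticity: assimilated
classify([3.0, 3.0], schemas)   # beyond elasticity: a new schema is created
print(len(schemas))             # -> 2
```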

Similarly, one of the aspects of primary social interaction schemas (Sect. 4.1.1.1), namely strategy, is not fully transferable to technical systems. This aspect describes the way in which meaning is derived from an unknown situation. The creation of meaning is itself an act of understanding that cannot be transferred to technical systems in the strict sense (see the discussion on the chasm of understanding). The aspect of drive – a person’s interest in functioning effectively in different cultural settings – can only be implemented as the perpetuation of an interest specified once and for all by the developers or continuously respecified by users, since technology itself can have no autonomous interest, no will of its own, and no self-motivation. If technology were attributed autonomous interests, a technical assistant would be able to answer a search query with the following response:

I could have looked for an answer, I have the energy, access, and suitable processes, but I have no interest in doing so, especially since you’ve asked me the same question ten times in the last seven days, and the answer (e.g., regarding air quality) is irrelevant to me as a non-metabolic system anyway.