It is notable that ‘social’, ‘smart’ or ‘autonomous’ machines are regularly positioned within social worlds by being presented in ways that are borrowed from, or are variations on, human or animal behaviour. This is the case when machines are given anthropomorphic or zoomorphic bodies, when they reveal sociomorphic behavioural traits such as keeping a ‘polite distance’, and when they demonstrate system-related technical conditions such as ‘understand’/‘don’t understand’, ‘pleasure’/‘frustration’ and ‘present’/‘asleep’. A sociological perspective on communication views these kinds of displays as possessing a projective character:Footnote 1 as the symbolic anticipation of potential or imminent encounters between people and machines, they illustrate the expected quality of these encounters and the foreseeable roles of the entities involved.

An approach based on ontological taxonomies would be compelled to view these displays as surfaces that dissimulate the mechanical essence of such devices. A practitioner of the sociology of communication, however, would acknowledge that these displays are developed and used as strategies for processing the ambiguities and uncertainties that face a human counterpart when confronted with the roles, functions and behavioural traits of “non-trivial machines” (von Foerster 2019, p. 247). Although describing anthropomorphic designs in particular as dissimulations or “fabrications” (Goffman 1974, p. 83) is by no means out of the question in the sociology of communication, the emphasis does shift to an investigative analysis of the communicative makeup and social functioning of these displays, i.e. to examining their practical significance for the interaction between people and machines. It is in this latter sense that this article presents the term ‘social display’ as a theoretical concept intended to enable an analytically differentiated description of presentational strategies for forming relationships between people and machines. In other words, the concept focuses on how the presentation of machines is designed as a mode of steering interaction in social communication.

1 The issue: accountability with respect to social worlds

The technically and socio-politically ambitious attempt to use ‘autonomous’ vehicles, ‘social’ robots, ‘smart’ agents, and comparable machines in everyday life leads to a distinctive problem that concerns precisely their presentation.Footnote 2 Essentially, if these machines are to be integrated into the routine processes upon which everyday pragmatism depends, they need to do more than merely function at a technical level: they also have to be understood by their human counterpart.Footnote 3 If anybody (not only technicians or trained personnel) has to be able to cope with these machines at any time (professional activities aside)—in other words, if passers-by have to cooperate with autonomous vehicles, hotel guests with service robots, all kinds of family members with smart household and kitchen devices, and car drivers with their assistance systems—then a pragmatic understanding of the institutional accountability of these machines is required: Which tasks does a particular machine tackle in a particular context? And how will it fulfil them?

The structure of this problem of understanding is inherent in the notion of ‘autonomous’, ‘intelligent’ or ‘smart’ machines. Unlike conventional devices and computers, these machines supposedly do not need people to operate and control them continuously. Instead, they can react autonomously to the changing conditions around them and make necessary adjustments for sustained periods of time. Although attributes such as ‘intelligent’ go too far in ontological terms (see Lindemann 2016, p. 74), people do encounter these machines when the latter are in a mode of non-trivial automatization, i.e. one featuring only limited predictability (see Fig. 1a, b).Footnote 4 This mode requires a human counterpart to develop a pragmatic understanding of the role that particular machines play in specific contexts as well as an understanding of the typology of their behaviours (see Schulz-Schaeffer 2016). At the same time, interfaces are being designed for these machines that enable, facilitate or structure this kind of understanding by displaying the institutional accountability of such machines in one way or another—the sight of shiny ‘circuit boards and mechanisms’ would certainly not be very instructive in this respect. The ‘old’ problem of interface design as ‘translation’, rendering incomprehensible technology comprehensible for everybody (Krippendorff 2013; Häußling 2010), reasserts itself here in modified form. The key issue is no longer to identify the usability of machines that one wants to employ and thus has to be able to operate, but rather to identify the accountability of machines (Garfinkel 1984) that one encounters in public, private or virtual spaces. These machines are put into operation by someone or other and controlled by some kind of technical function or institutional supervisor, meaning that they are indeterminable or only determinable to a limited degree (see Fig. 1c). The new issue for interface design is thus not to make usability identifiable, but to accentuate the accountability of complex, i.e. indeterminable and non-trivial, machines—their socially accountable functional roles and behavioural typologies.

Fig. 1

a Trivial machines, b non-trivial machines, and c indeterminable non-trivial machines. Heinz von Foerster (2019) designates machines as non-trivial if their specific behaviours are not completely predictable, even when we essentially know how these machines work. This lack of predictability may have many different causes: machine learning, networking with data infrastructures, or, I would add, partial control by people. At the same time, rather than non-triviality being an attribute of a specific technology or specific control technologies, it refers to the relative uncertainty or ambiguity about the machine’s behaviour from the perspective of an observer: although non-trivial machines “are deterministic systems, some of them are inherently and others for practical reasons […] unpredictable: an output observed after a certain input will in all likelihood not be observed following the same input at a later time” (von Foerster 2019, p. 359) (see b). According to von Foerster, these differences in behaviour are brought about by changes to the “internal condition” of such machines: “A trivial machine [see a] will always only have one single internal condition involved in its operation, so it is precisely the transition from one internal condition to another that makes a non-trivial machine so difficult to grasp” (p. 358). This contingency problem facing someone who is observing the behaviour of non-trivial machines is exacerbated when these machines’ sensors, external data streams, or remote control cause them not only to react (in an uncertain manner) to the counterpart’s intended inputs but also to process other sensory data or other data streams into (uncertain) outputs (see c). From the observer’s perspective, these machines adapt to their surroundings even without the observer instigating anything or operating the machine. This special form of non-trivial machine based on appropriate sensors or suitable data streams will be referred to here and below as indeterminable non-trivial machines.
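For readers who prefer a formal illustration, von Foerster’s distinction can be sketched in a few lines of code. This is a hypothetical toy model intended purely as an aid to intuition; the names, state variables and transition rules are my own illustrative assumptions, not drawn from von Foerster or from any actual robot software:

```python
import random

def trivial_machine(x):
    """Trivial machine: a single, fixed internal condition.
    The same input always produces the same output."""
    return x.upper()

class NonTrivialMachine:
    """Non-trivial machine: each input also changes the internal
    condition, so repeating an input need not repeat the output."""
    def __init__(self):
        self.condition = 0

    def step(self, x):
        out = f"{x}/{self.condition}"
        self.condition = (self.condition + len(x)) % 7  # internal transition
        return out

class IndeterminableNonTrivialMachine(NonTrivialMachine):
    """Indeterminable non-trivial machine: additionally processes a
    sensor or data stream the observer cannot see, so its condition
    can change without any input from the observer at all."""
    def step(self, x=""):
        unseen = random.randint(0, 6)  # stands in for sensor/data input
        self.condition = (self.condition + unseen) % 7
        return f"{x}/{self.condition}"

m = NonTrivialMachine()
print(trivial_machine("hello"), m.step("hello"), m.step("hello"))
# -> HELLO hello/0 hello/5 : the same input yields different outputs
```

For an observer who only sees inputs and outputs, the second and third calls behave ‘inconsistently’: this is exactly the contingency problem the figure describes.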

Sociologically speaking, this interface problem—one that the developers of these complex machines see themselves confronted by—is a special manifestation of the general phenomenon of social display as described by Erving Goffman, who defines displays as all those aspects of an animal or person A that are suitable for communicating an idea of A’s future behaviour to a counterpart B. “Displays”, according to Goffman, “provide evidence of the actor’s alignment in a gathering, the position he seems prepared to take up in what is about to happen in the social situation. Alignments tentatively establish the terms of contact, the mode or style or formula for the dealings that are to ensue among the individuals in the situation” (Goffman 1979, p. 1). Goffman uses an ethological concept of display that originally referred to animal behaviour (see Müller 2022) in order to analyse human impression management too, thereby avoiding the introduction of any kind of idealism via implicit conceptual presuppositions:Footnote 5 rather than such displays illustrating something that equates to a ‘self’ or a ‘person’, they initially only show the way in which an actor refers to situational circumstances and, consequently, how further interaction is expected to progress. At the same time, what distinguishes a human display from an animal display (while also enabling social displays to be developed for machines) is the structural self-reflexivity, and thus the malleability and artificiality, of human practices of presenting oneself or others: with respect to displays created by people, Goffman emphasises (1979, p. 8), the assumption of an immediate “natural expression” loses its validity. Human beings can adjust their appearance and behavioural style as fungible instruments for controlling social expectations and influencing social situations. This availability of the display to its maker is something that machine displays have in common with human displays, despite all their other differences: a machine display is also made by humans, equipping the developers of these machines with extensive opportunities for manipulating the impressions we receive in social communication. Crucially, these include attempts to transfer human modes of expression and self-presentation to machines, or the notion of displaying machines in a human-like way.

However spectacular and popular humanoid designs may be, the spectrum of attempted solutions to this interface problem that can be found empirically is far greater. It embraces anthropomorphic, zoomorphic, purely fictional and technically purist designs, and of course a wide range of hybrid forms. Technically speaking, these designs are superimposed on the machines’ “mode of operation” (Rammert 2011, p. 7). From a sociological perspective, we are talking about displays that mediate between machines and people and play a key role in integrating these machines and their functions into the routine processes of everyday pragmatism. They are likewise the instruments that help to model the expectations placed on these machines as well as the situations that occur when people and machines meet each other.

It is empirically striking that, depending on the case—i.e. the prototype or type of device—a wide range of display elements is borrowed from the spheres of device design, vehicle design, interior design, anatomy, body language, people’s dress habits, behavioural etiquette and, last but not least, science fiction, and then arranged into more or less coherent figures. Essentially, though, the diversity—and occasionally the experimental nature—of these designs and design approaches should not surprise us: while human self-presentational behaviour is extremely conventionalised by society, relying on a common ground of relatively fixed “idioms” (Goffman 1981, p. 118; Müller 2019, p. 364), such idioms are only gradually developing for the design of complex machines. This may be largely connected to the fact that the kinds of institutional tasks that complex machines can and should fulfil—i.e. the functional types and role figures that they are assigned—are in practice defined only tentatively at best (Hepp 2020). Along with Lucy Suchman (2007) and Martin Meister and Ingo Schulz-Schaeffer (2016), we might suspect that the corresponding typifications, role figures and taxonomies will only develop over time in contemporary designs and attempts to use these machines.

I will use this liminal phase in the development of complex, i.e. indeterminable and non-trivial, machines to focus on their social display and to elaborate significant aspects of its communicative structure and functionality. The ultimate aim has to be to identify the taxonomies, i.e. the functional types and role figures, that society assigns to these complex machines on account of their design, and which are further differentiated as new designs are developed. This, however, is predicated upon a fundamental understanding, from the perspective of the sociology of communication, of how these displays are structured and how they work. My article and the analytical concept of social display pursue this latter objective.Footnote 6

Methodologically speaking, I will take the concept of social display derived from Goffman in the sense of forming an analogy (Lenz 2008) and apply it to a specific case study (see Sect. 2 below), in order to use the insights thereby gained to define the terms of the concept more precisely (see Sect. 3 below). What follows is an instance of what Max Weber called the process of refining terminology (Weber 1988, p. 5), i.e. continuing to develop a concept on the basis of insights gained from analysing individual cases.Footnote 7 The aim is to arrive at terms that permit a differentiated observation and description of other cases, too. The case study looks at a “healthcare robot” and conversational robot called Alice, which was developed at the University of Amsterdam from 2011 to 2015 and again from 2017 to 2019 (van Kemenade et al. 2015) (see Fig. 2a, b), as well as the documentary film showing the prototype being tried out as part of the care for elderly people.Footnote 8 Three factors make this complex machine a suitable example for investigating the structural composition of machine social displays: (1) the relative diversity of aesthetics used in a single case (anthropomorphic, humanoid, technicistic and fictional elements of design), (2) the diversity of communicative modalities used (appearance, behavioural style, language), and (3) the pragmatic research-related fact—by no means irrelevant under the current pandemic-driven conditions—that the modalities of the project are relatively well documented in terms of data.

Fig. 2

a Alice I and b Alice II

This case study certainly reflects the current experimental status of robotics—after all, it is the film data that document machine Alice being tested out in real-life everyday situations. In this respect, the designs and design variants of the machine’s external appearance as well as of its behaviour and conversational style discussed below are to be understood as practical attempts—ethnomethods, one might call them—at producing machines that are accountable in the social world, i.e. predictable and comprehensible. As mentioned above, however, the analytical description of this design as social display aims to move beyond individual case studies in later research by developing a more precise concept of social display and laying the groundwork for elaborating a typology of various displays.Footnote 9

2 Case study: the socio-communicative structure of Alice’s display

As the analysis presented below shows, three aspects of the design of the robot Alice are significant with respect to the accountability of this machine to a potential human counterpart: the machine’s appearance, i.e. its pictorial display, its behavioural style, i.e. its behavioural display, and its linguistic display.Footnote 10 Viewed from the perspective of interaction theory, these displays and their interplay ‘answer’ three typical fundamental problem areas of the interaction between people and machines:

a) Concerning the categorical assessability—in this case, of a complex machine by a potential human counterpart.

b) Concerning the problem of interpreting and structuring the situation in which person and machine find themselves.

c) Concerning the problem of the meta-communicative handling of disruptions to the interactive process (crises).

In other words, the appearance of the machine, its operative behaviour and its spoken and mimetic communication are arranged and configured such that a human interaction partner will ideally be able to visualise (a) the kind of counterpart that can probably be expected and (b) the kind of situation that will probably arise from encountering this machine. In addition, (c) the machine’s linguistic and mimetic display makes it possible to deal with crises of cooperation at a meta-communicative level.

2.1 On (a): categorical assessability

The initial spoken utterance that is triggered upon the machine’s first contact with a person is:

A.: Hallo, ich bin Alice, ich bin ein Gesundheitsroboter.    [Hello, my name is Alice, I am a healthcare robot.]

This speech act is assigned a personal origo (“ich [I]”) and introduces an explicit act of self-presentation (“ich bin [I am]”). The self-presentation introduced in this manner is made more specific in two ways: firstly, the machine mentions an individual name that it possesses (“ich bin Alice [my name is Alice]”), and secondly, it mentions its general function (“ich bin ein Gesundheitsroboter [I am a healthcare robot]”). Through its linguistic display, the machine is thus presented as a specific, unique individual that, in terms of its function, nevertheless equates to an interchangeable counterpart. In this configuration, the linguistic display encourages people (in the sense of an affordance of the display) to evaluate the machine (1) as a counterpart that refers to itself as a centre of activity, thereby displaying an essential personality trait, and (2) as a functionally defined device (a healthcare robot).

At the linguistic level, this display corresponds to the machine’s external appearance, which comprises (1) a doll-like head segment designed to look like an android and (2) several torso segments that, by comparison, look much more like machines. What Alice’s head segment shares with dolls (see Fig. 3a) is that such bodies—in accordance with everyday gender stereotypes—are socially classified with attributes such as ‘male/female’ and with individual names (such as Alice, Annabelle, Lisa, etc.), which makes it possible for them to be personalised to some extent. What the machine’s torso segments share with technical product design (see Fig. 3b) is a plastic surface decorated in signal colours and featuring sound outlets, ventilation slits and switches: surface designs of this kind are typically found on household appliances and small devices, and play a significant role in defining their practical, functional character. While the head segment visually presents the machine as a quasi-personal being, the torso segments accentuate its technical character.

Fig. 3

a, b The machine design of the robot has significant similarities to and differences from the design of both archetypal commercial dolls and devices for everyday use. These similarities and differences constitute “visual metaphors” (Krippendorff 2013, p. 130) or affordances that accentuate specific options for perceiving and interpreting the machine, i.e. that make them structurally possible in the first place. On the methodological principle of image comparison used here, see Goffman (1979) and Müller (2012, 2019, 2022)

Specific characteristics and functions of the machine are thus presented in the interplay of linguistic display and pictorial display. In other words, they illustrate the way in which this machine can or should be treated when encountered in a specific situation: in accordance with its display and the “social framework” (Goffman 1974, p. 22) thereby defined, it is to be addressed as a personal being while at the same time remaining demonstratively recognisable as a technical device. Combining design elements from various backgrounds thus accentuates characteristics such as personal identifiability and addressability, while at the same time negating the notion that the machine possesses an all-embracing similarity to humans or a human-like personality. This kind of ‘chimera’ is not categorically unequivocal in the sense of existing taxonomies, but its display does invoke categories that make the specific skills, functions and characteristics of this machine foreseeable in language and material design, and that cohere into a specific visual figure—albeit one that, overall, takes a bit of getting used to.

What is systematically remarkable is that, even at an advanced stage of interaction, the issue of the machine’s assessability keeps being revisited and revised, in particular by the machine’s behavioural and linguistic display. When, for example, the machine triggers the seemingly trivial speech act

A.: Telefon.    [Telephone.]

while on the sofa of an older woman waiting for a phone call to be returned, this is not merely a communicative reference to the indicated event (the ringing of the phone). The speech act is simultaneously the linguistic display of a fundamentally cooperative orientation on the part of the machine: at a meta-communicative level it demonstrates the practical understanding that the machine musters for the situation of its human counterpart (in this case, ‘waiting for a return call’), as well as its capacity to turn the counterpart’s goals into its own (in this case, ‘not missing the call’). Michael Tomasello (2010) identifies such gestural or spoken referential actions as a specific feature of human cooperation. In Alice’s display, this specific feature becomes the characteristic of a machine that, in accordance with the behavioural and linguistic parts of its display, is to be understood as cooperative (and, within the framework of the behaviour demonstrated here, can also be understood as such), i.e. as a counterpart that cooperates or is in principle capable of cooperation.

2.2 On (b): situational design

A second dimension of problems in the interaction between people and machines that is structurally answered by the machine’s display is the structuring and interpretation of the specific situation that person and machine find themselves in. In this problem sphere, the issue extends beyond the question ‘Who or what is the machine counterpart?’ to include the question “What is it that is going on here?” (Goffman 1974, p. 8). In the case of machine Alice, this problem sphere comprises three sub-problems:

1. Creating a situation in which both are present (co-presence)

2. Focusing and structuring the situative relationship between human and machine

3. Ensuring performative consistency across sequences of situations.

(1) Presence:

Machine Alice embodies her approachability and her general readiness to interact through micromovements of her head segment as well as occasional ‘blinking’. Both movement sequences make visible to the human counterpart the otherwise invisible fact that the machine is processing visual and acoustic signals, i.e. is ready for communicative operation. While corresponding micromovements in humans or animals are natural, i.e. physical, signs of attention to what is going on around them, in machine Alice they serve as “visual metaphors” (Krippendorff 2013, p. 130) displaying her readiness to interact. This kind of behavioural embodiment of presence is not necessary for the machine to process information, but it does seem to be necessary for the interaction between person and machine: hence the enactment of appropriate micromovements in order to create a situation of “copresence” (Goffman 2009, p. 33).
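The design logic can be made concrete in a short sketch. The following is a hypothetical toy implementation, not code from the Alice project: the class names, timing values and actuator methods (blink, nudge_head) are illustrative assumptions only. It shows an idle loop that produces micromovements purely as a display of co-presence, independently of any information processing:

```python
import random
import time

class ConsoleRobot:
    """Stand-in actuator that merely prints what a robot would do."""
    def blink(self):
        print("blink")
    def nudge_head(self, dx, dy):
        print(f"head micromovement: ({dx:+.1f}, {dy:+.1f}) degrees")

class PresenceDisplay:
    """Idle behaviour displaying co-presence: small periodic movements
    signal 'I am attending', although the machine's sensing would work
    just as well without them."""
    def __init__(self, robot):
        self.robot = robot

    def run(self, duration_s=3.0):
        end = time.time() + duration_s
        while time.time() < end:
            if random.random() < 0.3:      # blink occasionally
                self.robot.blink()
            # small 'listening' movements of the head segment
            self.robot.nudge_head(random.uniform(-2, 2), random.uniform(-1, 1))
            time.sleep(random.uniform(0.3, 0.8))

PresenceDisplay(ConsoleRobot()).run()
```

The point of the sketch is that the loop is communicatively rather than technically motivated: removing it would change nothing about the machine’s sensing, only about its legibility as co-present.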

(2) Focusing:

While the embodiment of presence is primarily realised through the behavioural display, the linguistic display comes increasingly into play when it comes to interpreting and configuring the situation in which person and machine now find themselves. Machine Alice, for example, announces:

A.: Ich werde Ihnen Fragen über Ihr Leben stellen. Haben Sie etwas Zeit dafür?    [I’m going to ask you some questions about your life. Do you have time for that?]

Verbal displays of this kind help to establish the “basic positions” of the intended “network of relationships” (Elias 2000, p. 133): the machine does not refer to itself as a device (by activating an on/off light, for example), but as a personal being using the first person singular (‘I’). It addresses the elderly lady sitting opposite it in polite, formal terms, grants her the freedom to make her own decisions (“Haben Sie etwas Zeit dafür? [Do you have time for that?]”), and in general treats her as an autonomous counterpart; the impending interaction itself is framed as a biographical interview (“Fragen über Ihr Leben [questions about your life]”). Thus, the use of the pronouns “ich [I]” and “Sie [you]” and the biographical interview as a communicative genre do not function merely as means by which software can gain information, but rather as elements of a specific version of the relationship. They are meta-communicative performances that serve to shape the incipient interactive situation and fundamentally organise the structure of how it proceeds.

Recourse to the “series of pronouns” (Elias 2000, p. 132) I, you [Sie], he/she, as well as to commonplace communicative genres and ritual idioms of interpersonal communication, similarly occurs in the designs for ending interactions or when the machine adjusts to changing interaction formats (for example, when the two-way arrangement person–machine is transformed into a three-way arrangement person–person–machine). In structural terms, it is noteworthy that machine Alice, when confronted with these kinds of situations or formats, invariably operates with a reduced series of pronouns—I, you [Sie], he/she—leaving out the informal personal pronouns for ‘you’, namely du and ihr. This reduced series of pronouns, characteristic of the style of linguistic machine displays in general, marks a gap between people and machines that linguistic technology could easily close but that is not in fact closed in the configuration of machine Alice. Here too, then, the machine’s speech functions not solely as an instrument for conveying information but as a malleable social display and a performative modulation, planned by the developers, of the relationship between person and machine.

(3) Consistency:

A specific feature of everyday dealings with complex (indeterminable and non-trivial) machines is that they can be switched on or off by the developer or user. The display of machine Alice is programmed such that the machine confirms it is starting to operate by making searching motions with its head segment and camera eyes, and by the speech act:

A.: Wo sind wir jetzt?    [Where are we now?]

The spoken and behavioural elements of this display show that the machine’s start-up is not a processual zero point, but rather one of several stages in a continuum of successive conditions: they mark the current situation as one of re-entering the social world (“wir [we]”), preceded by a state of inactivity or absence that could be interpreted as sleep. This display turns a device that can be switched on or off at will according to functional considerations into a machine counterpart that transcends such heteronomous human interventions—one that is endowed with continuity and that uses this performed ‘reawakening’ to lay claim to an identity over time. In this case, the behavioural and spoken parts of the display ensure the consistency of the machine’s social display by reinterpreting (in the sense of a Goffmanesque frame modulation) the switching on and off not as heteronomy but as a ‘transition between asleep and awake’, thereby reconciling it with the previous, personalising parts of the display.

2.3 On (c): crisis management

Goffman leaves us in no doubt that displays can be traced back to the capacity, readiness or intention of persons presenting themselves (or of designers) to “portray a version of themselves” (or a specific image of their machines) “at a strategic moment” (or for those moments) (Goffman 1979, p. 7). As such, when social displays are perceived by others, they can always turn out to be fragile or misleading, leading to contradictions or confusion. To take an example, machine Alice—who is presented to her human counterpart as a device exhibiting various quasi-personal traits and characteristics—is not able to provide an adequate answer when asked about her surname (see line 7 below). The situation immediately triggers the compromise question about her nutritional habits (lines 9–10) and culminates in an interaction crisis:

1 Alice: Wer sind Sie?    [Who are you?]
2 I2: Wim.
3 Alice: Hallo Wim.    [Hello Wim.]
4 I2: Hallo, Alice.    [Hello, Alice.]
5       Du hast einen schönen Namen. Wie lautet Dein Nachname?    [You have a lovely name. What is your surname?]
6 I1: Stell nicht so schwierige Fragen. Das ist nicht gut.    [Don’t ask such difficult questions. That’s not good.]
7 Alice: Das weiß ich nicht.    [I don’t know.]
8 I1: Sie weiß es nicht.    [She doesn’t know it.]
9 I2: Was tust du am liebsten?    [What do you like doing most?]
10 I1: Essen? Nein. Trinken? (Schüttelt verneinend den Kopf.)    [Eating? No. Drinking? (shakes head in negation).]
11 Alice: Ein nettes Gespräch.    [A nice conversation.]

But pictorial display can equally lead to misunderstandings or ambiguities. This is the case, for instance, when a counterpart of machine Alice offers her—in a logical interpretation of the android head segment—a piece of cake, while commenting on the gesture with the words:


1 I: Das kannst du auch nicht essen, oder?    [You can’t eat that either, can you?]
2 Alice: [remains silent, moves the head segment to the right (away from the counterpart) with open eye cameras pointing into the distance, and then slowly back again]

In both examples, the interaction based on the machine’s social display culminates in a crisis that threatens to derail its impression management, namely its social “passing” (Garfinkel 1984, p. 118)—which in turn triggers crisis-management reactions by the machine. In the first case (not knowing one’s own surname), the machine shifts to the level of meta-communication: it talks about the current situation (“ein […] Gespräch [a (…) conversation]”) and reinterprets it—in the sense of a frame modulation (Goffman 1974)—as “ein nettes Gespräch [a nice conversation]”, in an attempt to counter the looming end of the conversation and keep open the option of moving on to a different topic. In the second case (being unable to eat cake), the machine performs a gesture of awkwardness (head movement and blinking behaviour) of the kind typically suited to compensating for a ‘loss of face’ (here, the situative unmasking of an all-too-humanlike display).

What becomes structurally visible in this kind of crisis-management routine is that, in sociological terms, the same applies to the designed presentation of such a complex machine as to human self-presentation: the parts of the display that have already been realised define the characteristics against which the presented entity has to be measured, thereby marking out the framework of potential interaction crises (Willems 1997, p. 52–61). Whether and in what sense irony, misunderstandings, unpredictable occurrences or technical failures represent a crisis for the course of the interaction, and whether and how such crises can then be dealt with, depends largely on how the machine has been sketched out: different communicative and cooperative skills are expected of a machine with a personified display than of one displayed in a more zoomorphic or more technical manner. Moreover, both examples make clear that suitable behavioural or linguistic displays can, if needed, be applied to deal with potential interaction crises. This can occur via frame modulations (such as those described above), via apologetic presentations of awkwardness or shame, and in other cases via idioms of cuteness or childlikeness. The latter reduce expectations preventatively, as it were, or relativise potential mistakes before they even happen.
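The stabilising logic described here can be paraphrased in code. The following sketch is hypothetical: the confidence threshold, the function name and the fallback mechanism are invented for illustration and do not reproduce Alice’s actual software; only the reframing phrase is taken from the transcript above. It shows a dialogue step that, instead of exposing a failure, shifts to meta-communication about the encounter itself:

```python
def respond(confidence, answer):
    """Stabilising display function: if the system's confidence in its
    answer is low, fall back to meta-communication that reframes the
    encounter rather than letting the interaction break down."""
    if answer is not None and confidence >= 0.5:
        return answer
    # Frame modulation: talk about the conversation itself and cast it
    # positively, keeping the option of a new topic open.
    return "Ein nettes Gespräch. [A nice conversation.]"

print(respond(0.9, "Hallo Wim."))  # an adequate answer passes through
print(respond(0.1, None))          # an inadequate one triggers the reframe
```

The design point is that the fallback addresses the situation rather than the failed query, which is exactly what distinguishes frame modulation from a mere error message.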

3 Discussion: creating accountability in the social world

If we shift from the perspective of a single case study to a more general one focused on the analysis of other complex (indeterminable and non-trivial) machines, the Alice case study yields a conceptually differentiated heuristic. In other words, the example clarifies which communicative and functional aspects and structures can be analytically expected with respect to the presentational positioning of these machines. An illustrative model—valid, in line with the logic of research, until other individual case studies prove this “analytical generalisation” (Przyborski and Wohlrab-Sahr 2010, p. 320) unsuitable or inadequate—can be summarised thus: in the field of robotics, ideal-typical social displays are composed of many different practical presentations (a) that make these machines accountable (b) with respect to the characteristics or functions being presented, i.e. they realise more or less consistent accounts (c) of relevant behavioural characteristics and functional roles.

(a) Firstly, there is a wide range of appropriate presentations with respect to the communicative modalities potentially employed:

  • Design aspects such as size, form, materials etc. can be identified as a pictorial display or sections of pictorial displays that shape the appearance of a machine.

  • The behavioural display comprises (in the sense of a modal differentiation) forms of behavioural stylisation—such as the style in which movements are carried out—which can also be ritualised, i.e. become behaviours whose meaning is conventionalised.

  • Linguistic parts of the display are based on the use of abstract semiotic systems, comprising not only spoken language but in particular also the use of deictic and iconic gestures.Footnote 11

  • Digital displays ultimately comprise all the ways of using screens communicatively. These displays can supplement or even partially assume the communicative functions of other forms of display. This is the case, for example, when mimetic expressive values are realised by means of pictographic projections following the conventional model of comics or emoticons (cf. Fig. 2b above).

The presentational operations and functions performed by the displays of complex machines also vary considerably. Analytically, we can in any case assume that these displays are not only composed of various display sections (see above), but that, as techniques for structuring potential interactions between people and machines, they also refer in a more or less complex and dynamic manner to the very situations they are supposed to structure (a schematic sketch follows the list below):

  • Displays have a projective function when, ideal-typically, their presentations illustrate the everyday role or institutional function of a particular machine, thereby making a certain type of situation expectable in the interaction between person and machine.

  • Displays have an adaptive function when they contribute to realising and structuring the situations that they themselves may first have projected. The realisation of situative presence is one such display function; the thematic centring of interaction processes is another.

  • Finally, displays have a stabilising function that aims to avert or process rifts and crises in the interaction process. Sections of displays fulfil this kind of function when they are realised through frame modulations that serve as techniques of impression management, or through idioms of cuteness/childlikeness.
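Purely as an analytical aid, the two classifications above can be rendered as a small data structure. This is a hypothetical sketch; the type names are mine, not the article’s terminology, and the example entries merely restate observations from the Alice case:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Modality(Enum):
    PICTORIAL = auto()    # size, form, materials: the machine's appearance
    BEHAVIOURAL = auto()  # stylised, possibly ritualised movement
    LINGUISTIC = auto()   # speech, deictic and iconic gestures
    DIGITAL = auto()      # communicative use of screens

class Function(Enum):
    PROJECTIVE = auto()   # illustrates role/function, projects a situation type
    ADAPTIVE = auto()     # realises and structures the projected situation
    STABILISING = auto()  # averts or processes interaction crises

@dataclass
class DisplaySection:
    """One section of a machine's social display."""
    modality: Modality
    functions: list[Function]
    description: str

# Example: Alice's start-up routine ("Wo sind wir jetzt?") combines a
# behavioural and a linguistic section, both with a stabilising function.
startup_display = [
    DisplaySection(Modality.BEHAVIOURAL, [Function.STABILISING],
                   "searching head and camera movements on wake-up"),
    DisplaySection(Modality.LINGUISTIC, [Function.STABILISING],
                   "speech act reframing switch-on as reawakening"),
]
```

Such a model makes explicit that one display section can serve several functions at once, as the consistency example in Sect. 2.2 showed.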

(b) Complex (indeterminable and non-trivial) machines become accountable through their displays in the sense that these displays stipulate specific characteristics and functions. Initially, these are only presentational claims: whether the machines indeed exhibit these characteristics or fulfil these functions has to be proven in the course of the interaction, otherwise a problem of authenticity arises. As risky and fallible as the presentational creation of accountability may be, it is a functional way of making such machines usable in everyday activities and situations. Because displays presentationally pre-empt only some of the many possible options for perceiving and interpreting these machines, they reduce complexity and make it easier, or possible at all, for potential observers or counterparts to develop an understanding of these machines. In other words, displays are solutions, or attempted solutions, to the everyday problem of adequately interpreting real-world encounters, movements or processes.

(c) The sum total of all the display sections on a machine and the behavioural characteristics and functional roles they illustrate accordingly forms what Garfinkel (1984, p. 3) would call an “account”—a ‘presentation’ or ‘report’ in the literal sense, an “informational object” (Thielmann 2012, p. 87) or an ideography in the figurative sense (see vom Lehn 2016), or, translated more loosely: a figure that is geared towards social processes of observation and embodies specific characteristics and functions.Footnote 12 In accordance with the wide variety of display sections and presentational aspects, the overall arrangement of a specific account is by no means necessarily homogeneous and may well comprise heterogeneous or contradictory elements. The combination of android design elements with the aesthetics of small devices to produce a kind of ‘technical chimera’ in the case of the robot Alice I, for example, sheds light not only on the complexity and composition of these accounts, but also on the presentational searching movements and symbolic dissonances (cf. Soeffner 2010) that are inevitable when developing these machines and creating an adequate—or seemingly adequate—accountability.Footnote 13 The problem of creating accountability is by no means solved simply by resorting to different display sections and presentational idioms. Rather, the visual designs that are drafted and presented have to stand in a fitting proportion both to the actual functionality and capability of the machine and to the degree of understanding possessed by human observers and counterparts.

Because it can analytically be assumed that the displays of complex machines are more than styling for a technology that could function just as well without it—in other words, because we need to take into consideration that these displays function as practical illustrations and explanations of the institutional functional roles and situative behavioural characteristics of such machines—I propose that they be called ‘displays for accountability in the social world’, or ‘social displays’ for short.Footnote 14 Viewed sociologically, they are design-based attempts to solve the (structurally social) problem of comprehension that complex machines present to their human counterparts: the difficulty of integrating the variable, i.e. non-trivially automated, behaviours of these machines into one’s own blueprint for action.

However, unlike in the case of a fellow human being, this problem of comprehension is rooted not in the creativity and self-referentiality of an embodied consciousness but in the case-by-case interplay of complex programming, external data infrastructures and, where applicable, additional human control. Rather than a phenomenon of contingency resembling the fundamental openness of the human relationship to self and world (see Esposito 2017, p. 257), it concerns the complexity of non-trivial technical processes for obtaining and processing information and for controlling behaviour. This kind of complexity is apt to entail contingency as a problem of social understanding and orientation; it is a complexity or contingency relating to the external world that can be reduced for potential observers in social communication with the assistance of social displays. Yet practical attempts to cope with this complexity in interaction by means of anthropomorphic design should not lead to a socio-theoretical anthropomorphism of these machines. Instead, each display should be investigated with respect to the principles and ideas applied in the development of complex machines, because designing these machines also involves a process of social negotiation about the way in which, and the functional roles in which, these machines are to be integrated into relationships in the social world.