Our utopian and dystopian scenarios depicting an autonomous future home have been intentionally created and employed to enable breaching experiments that disturb common sense understandings of domestic life. The disturbance creates reality disjunctures and surfaces taken for granted background expectancies in participants’ efforts to repair them. These background expectancies are ordinarily used by people to understand and order action and interaction in what the sociologist Harold Garfinkel called “an obstinately familiar world”. That is, a world that members are “demonstrably responsive to” in mundane interaction, though they are ordinarily “at a loss” to tell us just what the expectancies that lend commonplace scenes their familiar, life-as-usual character consist of (ibid.).
Our scenarios have provided means, motive and opportunity to (gently) prod and provoke, and for members to thereby articulate and reflect on, expectations at work in domestic life that are ordinarily taken for granted and left unsaid. In doing so, the participants in our study have elaborated distinct challenges for future technological visions. These include the expectations that the behaviours of autonomous systems will be accountable, that their behaviours will be responsive to context and gear in with end-user behaviour, and that people can intervene in their operations. More specifically, we can say that our participants expect that autonomous systems will be computationally accountable and socially accountable, and that coordination and control will be enabled. Below, we consider each of these expectations, and their implications for design, in more detail.
As noted above, computational accountability refers to the legibility or intelligibility of system behaviours. To borrow from our participants, it will not do for the behaviours of autonomous systems to appear “seemingly random”; they must be “accounted for”. The expectation that the behaviours of autonomous systems will be accountable has close parallels with Dourish and Button’s notion of “translucency”, which emphasises the importance of a system’s ability to give accounts of its own behaviour in order to render its actions legible and thus better support human–computer interaction.
For Dourish and Button, computational accountability is about making the inner workings of the “black box” accountable to users. To elaborate the point, they give the example of file copying, replacing a generic progress bar that glosses computational behaviour with an articulation of the underlying data buckets and flow strategies. A number of studies (e.g. [30, 44, 45]) have subsequently established that revealing more of what goes on under the hood of technological systems is often needed to avoid a range of problems, including trust-related issues, that could otherwise negatively impact the user’s overall experience. However, when confronted with hypothetical systems operating more or less autonomously, our participants’ expectations about computational accountability operate at a different level. Our participants were not so much interested in making the opaque inner workings of autonomous systems accountable as they were in making their observable behaviours accountable.
Our participants thus expect transparency to be built into autonomous behaviours, where transparency means the grounds of behaviour are visible and available to account. The grounds of behaviour were spoken about in terms of “motive” and “reason”, where the former articulates what occasions behaviour and the latter articulates what is done by an autonomous system in response to motive. Previous enquiries into autonomous systems, such as the study by Lim et al., have stressed the importance of computer systems being able to articulate the “why” behind autonomous actions. Our results expand on these findings and paint a broader picture. Of particular concern to our participants is the articulation of on whose behalf autonomous behaviour is occasioned, and the commensurate expectation that whatever is being done is being done to serve and meet end-users’ needs. Consequently, our participants expect “simple” accounts of autonomous behaviour, i.e. accounts that articulate what is being done, why and on whose behalf.
Although our findings shift the focus away from accounts articulating the inner workings of autonomous machines, we do find resonance between our participants’ talk and Dourish and Button’s notion of accountability with respect to how accountability should be expressed. As Dourish and Button (ibid.) put it,
“ … what is important … is not the account itself (the explanation of the system’s behaviour) but rather accountability in the way this explanation arises. In particular, the account arises reflexively in the course of action, rather than as a commentary upon it ...” (our emphasis)
Our participants speak of autonomous systems “telling” users the motives and reasons for their behaviour as a preface to and/or in vivo feature of that behaviour rather than something that is bolted on after the fact to provide a post hoc explanation. Thus, and as Dourish and Button put it, computational accountability “becomes part and parcel of ordinary interaction with a computer system”.
The takeaway for design is not simply that autonomous behaviours should be accountable to users. Rather, in expressing taken for granted background expectancies at work in domestic life, our participants have articulated what an account should consist of and look like. Thus, we find that the accountability of autonomous behaviours should not be concerned with the inner workings of autonomous machines but should instead articulate motives and reasons, detailing what is to be done, why and, in particular, on whose behalf. Furthermore, the articulation of motives and reasons should occur as a preface to and/or in vivo feature of autonomous behaviour, rather than as an after the fact explanation. For as our participants succinctly put it, users “would not like that kind of autonomy”.
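By way of illustration, consider what such a “simple” account might look like computationally. The following minimal Python sketch is ours, not our participants’; the Account structure and helper function are hypothetical names introduced purely to fix ideas, not an established API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Account:
    # A "simple" account: what is being done, why, and on whose behalf.
    what: str       # the behaviour, e.g. "opening the living-room window"
    why: str        # the motive occasioning it
    for_whom: str   # whose needs the behaviour serves

def act_accountably(account: Account, act: Callable[[], None]) -> None:
    # The account prefaces the behaviour (rather than arriving as a
    # post hoc explanation), so it arises in the course of action.
    print(f"{account.what}: {account.why}, on behalf of {account.for_whom}.")
    act()

act_accountably(
    Account(what="Opening the living-room window",
            why="indoor air quality has dropped below your comfort setting",
            for_whom="the people currently at home"),
    act=lambda: None,  # stand-in for the actual actuation call
)
```

The point of the sketch is simply that the account is delivered as a preface to and in the course of the behaviour, not bolted on afterwards.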
Social accountability moves beyond the expectation that autonomous systems will make their behaviours accountable to end-users to instead address expectations regarding the appropriateness of autonomous behaviour. Simply put, autonomous behaviour may be intelligible to end-users, but that does not mean they will find it appropriate or acceptable. Acceptability is of longstanding concern in systems design. The technology acceptance model, or TAM, is often cited as a key approach to determining acceptability, being designed to measure a prototype’s perceived “ease of use” and “usefulness”. While such an approach has been said to help maximise the commercial success of novel systems, the TAM framework has also been a target of criticism for its overly generic nature and limited scope.
Alternatively, user experience or UX models focus on “interface quality” and its impact on the “experiential component” of system use [49, 50], which leads more generally to a concern with usability in HCI. However, as Kim points out, usability is only one aspect of user acceptance, a point underscored by Lindley et al., who also note that usability studies rarely look beyond prototypical implementations to consider broader challenges of adoption in everyday life. Indeed, as our study demonstrates, domestic autonomous systems introduce a range of social concerns that extend well beyond usability, interface quality, usefulness and ease of use to the “fit” of machine actions with social expectancies. The need for social accountability introduces a broader lens for considering acceptance, encompassing not just the “product” itself but also the social circumstances within which it will be embedded and the implications of this for design.
At first glance, it may be thought that social accountability is an external prerogative, something that cannot be built into autonomous systems insofar as it turns on user perceptions of what constitutes appropriate behaviour and is therefore a subjective matter. However, this is not the case: (a) because appropriateness is an intersubjective (social) matter, as clearly articulated in the background expectancies our participants share, and (b) because, in articulating those background expectancies, it became evident that there is much for design to do in terms of supporting or enabling social accountability.
Thus, from a design perspective, it is clear that our participants expect autonomous systems to be responsive to the particular social circumstances in which their behaviours are embedded. There is a need, then, for autonomous systems to take what people are doing into account, not opening windows, for example, when doing so would disrupt human activity, and to tailor autonomous behaviours to the social context in which they operate. In other words, in addition to being autonomous, such systems also need to be context aware if their behaviours are to be seen and treated not only as intelligible but as intelligibly appropriate given the specific social conditions in which their actions are situated.
A key expectation in this regard is that autonomous systems effectively exhibit social competence. There is little to be gained but trouble from automatically reordering foodstuffs, for example, if it is done without respect to the social consequences of doing so, not in general but for the particular cohort that inhabits a particular home. Thus, autonomous systems need to act with respect to human agency and entitlement and tailor their behaviour around the differential rights and privileges at work in the home. Autonomous systems will in effect need to become social agents whose actions comply with the mundane expectations governing domestic life if they are to assume a trusted place within it.
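To make the point concrete, the sketch below renders a fragment of such social competence in code. It is deliberately simplistic and entirely our own: the role names and the static privilege table are hypothetical, and, as we note below, rights and privileges in real homes are enacted and fluid rather than fixed in a lookup table:

```python
# Illustrative entitlement check: autonomous reordering is tailored
# around the differential rights and privileges at work in a home.
PRIVILEGES = {
    ("parent", "reorder_groceries"): True,
    ("child",  "reorder_groceries"): False,  # may request, not order
}

def entitled(member_role: str, action: str) -> bool:
    """Is the system entitled to act on this member's behalf?"""
    return PRIVILEGES.get((member_role, action), False)

def reorder(item: str, occasioned_by: str) -> None:
    if not entitled(occasioned_by, "reorder_groceries"):
        # Defer to human agency rather than acting silently.
        print(f"Deferring reorder of {item}: needs a parent's say-so.")
        return
    print(f"Reordering {item} on behalf of {occasioned_by}.")

reorder("biscuits", occasioned_by="child")  # deferred, not silently done
```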
That autonomous systems are trustworthy is a critical expectation, which also turns on their demonstrably acting in end-users’ interests. This is not only a matter of computational accountability and making autonomous behaviours intelligible to people. It is also and effectively a matter of making it visible that actions done are done for you and not, for example, for the benefit of an external party. Thus, in addition to exhibiting social competence, the behaviours of autonomous systems must also be accountable to end-users’ interests if they are to assume trusted status. Of particular note here is the accountability of data—the oil that lubricates the autonomous machine—and the transparency of data flows, coupled with tools that enable end-users to limit them and even close them off if it is deemed that the machine is not acting in their interests.
The takeaway for design is that social accountability is distinct from computational accountability: the latter speaks to the intelligibility of autonomous behaviours, the former to their appropriateness and acceptability. Social accountability brings with it the need to build context awareness and trust into autonomous systems. Context awareness is needed to enable autonomous systems to respond appropriately to the particular social circumstances in which their behaviours are embedded, and trust is an essential condition of their uptake in everyday life. Trust requires that computational agency exhibit social competence and that autonomous behaviour comply with the differential rights and privileges at work in any particular home (i.e. not generalised rights and privileges but situationally specific ones). Trust also requires that autonomous behaviours are accountable to end-users’ interests, and turns on the transparency of data flows and the ability to control them.
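The last of these points lends itself to a simple computational treatment. The following Python sketch, with names of our own invention, illustrates one way data flows might be registered so that they are both visible and controllable by end-users:

```python
from dataclasses import dataclass, field

@dataclass
class DataFlow:
    # One outbound flow of the data that drives autonomous behaviour.
    data: str          # what flows, e.g. "weekly grocery history"
    recipient: str     # where it goes, e.g. "retailer analytics"
    purpose: str       # why it flows
    enabled: bool = True

@dataclass
class FlowRegister:
    flows: list = field(default_factory=list)

    def inspect(self) -> None:
        # Transparency: every flow is visible and available to account.
        for f in self.flows:
            print(f"[{'ON' if f.enabled else 'OFF'}] "
                  f"{f.data} -> {f.recipient} ({f.purpose})")

    def close(self, recipient: str) -> None:
        # Control: end-users can limit flows or close them off entirely.
        for f in self.flows:
            if f.recipient == recipient:
                f.enabled = False

register = FlowRegister([DataFlow("weekly grocery history",
                                  "retailer analytics",
                                  "price comparison")])
register.close("retailer analytics")  # the machine no longer acts for them
register.inspect()
```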
Coordination and control
We treat coordination and control together here as they may be seen to directly complement one another and span a spectrum of expectations to do with the orchestration of autonomous and human behaviours. Our findings also make it visible that context awareness is seen as key to coordination by our participants, ensuring that autonomous systems do not make inappropriate interventions. However, as Bellotti and Edwards point out, to become acceptable, systems cannot rely purely on context awareness to do things automatically on our behalf but should instead involve active input from users at least on some level. Similarly, Whitworth argues that computing systems often have a poor understanding of context, which makes it necessary to give users control in order to preserve their autonomy. This issue is underscored by Yang and Newman, who argue that optimal user experience should be achieved by balancing machine intelligence with direct user control.
Our study reveals a similar scepticism regarding the ability of fully autonomous systems to correctly assess every given situation and consistently act in line with user expectations. A key expectation at work here concerns the timeliness of autonomous behaviours and the need to synchronise them with the user’s situation. Our participants’ expectations regarding timeliness are of particular note, including, but moving beyond, a “here and now” understanding of timeliness (e.g. raising potentially embarrassing matters while I am entertaining visitors). Of equal concern are the temporal horizon of action and the need for autonomous systems to be sensitive and responsive to temporally distributed patterns of human behaviour (e.g. long work schedules).
Our participants therefore expect that human input will be required, which they speak about in terms of “customisation” and user-driven “scheduling” of events. In this respect, it is expected that customisation would allow users to coordinate the behaviours of autonomous systems with occasional and established patterns of human conduct (e.g. long but not permanent work schedules and the recurring rhythms and routines of domestic life). This would allow users to configure appropriate actions and help address the thorny problem of how autonomous systems are to develop an awareness of context, including what it is appropriate to do and when it is appropriate to do it. In effect, the problem of learning context is offloaded onto the user to some extent through the provision of coordination mechanisms that enable users to gear the behaviours of autonomous systems in with everyday life in the particular domestic environments in which they are deployed and used.
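A minimal Python sketch, assuming a hypothetical vacuuming behaviour and user-declared quiet hours of our own invention, suggests how such a coordination mechanism might gear autonomous behaviour in with a household schedule:

```python
from datetime import time

# User-configured quiet hours (e.g. around a long work schedule),
# declared by the household rather than inferred by the machine.
QUIET_HOURS = (time(22, 0), time(7, 0))   # may wrap past midnight

def in_quiet_hours(now: time) -> bool:
    start, end = QUIET_HOURS
    if start <= end:
        return start <= now <= end
    return now >= start or now <= end      # window wraps past midnight

def maybe_vacuum(now: time) -> None:
    # The behaviour is geared in with the user's declared pattern of
    # conduct instead of resting on machine inference alone.
    if in_quiet_hours(now):
        print("Deferring vacuuming until the scheduled quiet hours end.")
    else:
        print("Vacuuming now.")

maybe_vacuum(time(23, 30))   # deferred under the household's schedule
```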
The qualification that context learning is offloaded only “to some extent” is important here, for no matter how much they learn about everyday life, and how smart they become, autonomous systems will never be able to anticipate and respond appropriately to all social circumstances. Thus, it is expected that end-users will be able to exert direct control over autonomous systems. However, the expectation is not that users will simply be able to turn autonomous systems off—that may only be necessary in critical situations—but rather that direct control can be exercised on a “sliding scale”. In effect, it is expected that the intelligence built into autonomous systems would be subject to granular control, with levels of intelligence being increased and decreased according to circumstance.
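Such a sliding scale might, for example, be rendered as per-behaviour autonomy levels. The sketch below is again illustrative only; the level names and task identifiers are our own:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    # A hypothetical rendering of the "sliding scale" of direct control.
    OFF = 0         # behaviour disabled (critical situations only)
    SUGGEST = 1     # system proposes; the user decides and acts
    CONFIRM = 2     # system acts only after explicit confirmation
    AUTONOMOUS = 3  # system acts and accounts for its behaviour

# Levels are set per behaviour, so intelligence can be dialled up or
# down according to circumstance rather than switched off wholesale.
levels = {"grocery_reordering": AutonomyLevel.SUGGEST,
          "heating_control": AutonomyLevel.AUTONOMOUS}

def run(task: str) -> None:
    level = levels.get(task, AutonomyLevel.OFF)
    if level is AutonomyLevel.AUTONOMOUS:
        print(f"{task}: acting autonomously (with an account).")
    elif level in (AutonomyLevel.SUGGEST, AutonomyLevel.CONFIRM):
        print(f"{task}: handing the decision to the user.")
    # OFF: do nothing at all

run("grocery_reordering")   # the user retains the final say
```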
The takeaway for design is that potential end-users do not expect autonomous systems to act independently of user input. End-users expect that autonomous systems will be responsive to context and act in a timely fashion, where “timely” means they are responsive not only to what happens “here and now” but also to what happens over time. It is thus expected that autonomous systems will gear their behaviours in with occasional and established patterns of human conduct, and that mechanisms be provided to enable users to configure autonomous behaviours around human schedules so as to enable effective orchestration and synchronisation. It is expected too that users will be able to exercise varying levels of control over the behaviour of autonomous systems in order to make them responsive to the inevitable contingencies of everyday life in the home.
The background expectancies articulated by our participants elaborate several distinct design challenges for the development of autonomous systems for domestic use. These include the following:
Building accountability into the behaviours of autonomous systems by articulating motives and reasons for autonomous behaviours as a preface to and in vivo feature of those behaviours. What we mean here is different from the literature stressing the need to reveal the inner workings of a system (e.g. ); instead, we are concerned with the overarching motivations and agendas that drive an autonomous system’s decision-making.
Building context awareness into autonomous behaviours to enable autonomous systems to respond appropriately to the particular social circumstances in which their behaviours are embedded. While this can draw on a rich body of existing research in context-aware systems (e.g. ), the unique challenge is the interactional nature of context, articulated most prominently by Dourish.
Building social competence into computational agency to ensure that autonomous behaviour complies with the differential rights and privileges at work in the home in order to engender end-users’ trust. This challenge is distinct from existing work on roles in, for example, multi-agent systems, in that design solutions need to respond to the enacted and fluid ways in which rights and privileges are negotiated on an ongoing basis.
Building transparency into the data flows that drive autonomous behaviours, along with data flow controls, to further engender user trust. This design challenge can build on initial work in human–data interaction, which puts forward new models of personal data aligned with the GDPR.
Building coordination mechanisms into autonomous systems to enable users to configure autonomous behaviours around occasional and established patterns of human conduct in the home. While home automation has made significant progress, multi-occupancy remains a challenge, one that has rendered, for example, a “learning thermostat” virtually unusable for families.
Building control mechanisms into autonomous systems to enable users to exercise varying levels of control over the behaviour of autonomous systems in the home. While occupancy-sensitive home automation has been explored, for example, our work seeks to draw attention to the numerous remaining challenges concerning how best to bring the human back into the loop.
These are not requirements for autonomous systems, in that they do not specify just what should be done or built to address them. Rather, they elaborate problem spaces and topics for design to explore. While we are aware of ongoing research that touches upon the above issues in various ways (e.g. [54, 57, 59, 60]), it is not clear, for example, what it would mean in practice to articulate the motives and reasons for autonomous behaviours in the in vivo course of their performance. Would an account have to be provided every time some behaviour occurred, or only sometimes and only with respect to certain behaviours? It can readily be anticipated that the constant articulation of motives and reasons would become a nuisance, negatively impacting the acceptability of autonomous systems, particularly where relatively trivial behaviours are concerned.
The problem, of course, is that it is nigh impossible to say what constitutes “trivial” (or significant) behaviour in the absence of social context. Thus, a key research challenge here lies not only in building accountability into autonomous behaviours but also in working out how best to support the delivery of accounts to end-users. The same applies to building context awareness, social competence, transparency, coordination and control into autonomous systems, which is to say that what any of these topics might look like and amount to in practice has yet to be determined, and can only be determined through significant research effort.
Nonetheless, there is sufficient generality built into these design challenges for them to be widely applied in the design of autonomous systems. They might, in effect, be turned into a basic set of design guidelines or fundamental questions such that, on any occasion of building an autonomous system, developers can ask themselves whether their designs respond to them. For example, in designing an autonomous grocery system its developers might ask the following:
Does the system give an account of what motivates its behaviour to the user and the reasons for carrying out particular actions [e.g. that it is ordering XYZ grocery items because you are out of stock and they are the best deal available]?
Does the system respond to the social circumstances in which it is situated [e.g. presenting accounts of a shopping order at situationally relevant times and places]?
Does the system display social competence to users in executing its behaviours [e.g. not automatically reordering foodstuffs just because they have run out]?
Does the system make the data it uses transparent and allow users to control its flow [e.g. not “sharing” grocery data with large supermarkets and thereby curtailing the flow of adverts and offers]?
Does the system allow users to coordinate their patterns of behaviour with the system’s behaviour [e.g. to “share” their calendar with the system as a resource for scheduling reordering]?
Does the system provide users with granular choices over levels of intelligence and autonomy [e.g. allowing users to delegate certain aspects of grocery shopping to the system and to retain others for themselves]?
The brackets may of course be removed or, perhaps more to the point, their content replaced with specifics concerning the autonomous system to hand. For as with triviality and significance, the questions as to what constitutes “responding to social circumstances”, “displaying social competence”, “making data use transparent”, “coordinating with patterns of behaviour” and “providing granular choices” are matters that will need to be worked out with reference to the specificities of an autonomous system and the particular social context in which it is to be situated and used. However, that does not mean the questions cannot be asked, nor answers sought and found.
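One way of operationalising the questions is as a lightweight review checklist that travels with a project. The Python sketch below is a hypothetical rendering of our own (wording abridged), not a validated instrument:

```python
# The six questions above, abridged into an illustrative design-review
# checklist that a team building an autonomous system might keep.
CHECKLIST = [
    "Accounts for what it does, why, and on whose behalf?",
    "Responds to the social circumstances in which it is situated?",
    "Displays social competence in executing its behaviours?",
    "Makes data use transparent and lets users control its flow?",
    "Lets users coordinate their patterns of behaviour with its own?",
    "Offers granular choices over levels of intelligence and autonomy?",
]

def open_questions(answers: dict) -> list:
    """Return the challenges a design has not yet responded to."""
    return [q for q in CHECKLIST if not answers.get(q, False)]

# A grocery-system team records what it has addressed and revisits gaps:
for q in open_questions({CHECKLIST[0]: True, CHECKLIST[3]: True}):
    print("Open design question:", q)
```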
It was suggested in discussion of this paper that the novelty of presenting breaching experiments for design is rather limited; others have beaten us to it, as they have with the use of contra-vision. However, the novelty here lies in the intentional configuration of contra-vision scenarios to drive a workshop-based approach to breaching experiments. This contrasts sharply with previous uses of breaching experiments in design to understand “in the wild” deployments of technology [13,14,15,16], though those previous approaches are not themselves homogeneous.
Breaching experiments were, to the best of our knowledge, first introduced into design in 2002 by Steve Mann, who used wearable computing to create a set of “visible and explicit sousveillance” performances “that follow Harold Garfinkel’s ethnomethodological approach to breaching norms”. Mann’s use of breaching experiments was copybook, i.e. he sought to make trouble and thereby “expose hitherto discreet, implicit, and unquestioned acts of organisational surveillance”. The make-trouble approach to breaching experiments surfaced again in design in 2009, when Erika Shehan Poole sought to exploit breaching experiments to investigate “existing technology … related practices in domestic settings”.
Poole asked participants in her field trial to interact with domestic technology “in ways that potentially disrupted the social norms of the home”. More specifically, “each home received weekly “homework” … intentionally designed to breach household technology installation, usage, and maintenance practices. The assignments the first week served as warm-up to acclimate participants to being in the study … For the second week, the participants were instructed to have the less technically oriented adult in the home complete the assignments. This choice was made to disrupt the normal family dynamic …”.
The results of Poole’s intentional efforts at disruption “provided explanations of why problems with technical advice sharing and home technical maintenance persist”. Poole subsequently recommends that breaching experiments be considered “an asset and an indispensable part of a researcher’s toolbox for understanding existing social norms and practices surrounding technology”. We do not disagree, but would advise that caution be exercised when deliberately disrupting the dynamics of any social setting in which the researcher is essentially a guest, and we note an alternative approach to breaching experiments in design.
Following Mann, in 2004 Crabtree introduced the notion of breaching experiments not as things that necessarily make trouble and cause disruption, but as an analytic lens on in the wild deployments of novel technology. This approach focuses on explicating through ethnography “the contingent ways in which novel technology for which no use practice exists is made to work and the interactional practices providing for and organising that work”. In a similar vein, in 2008, Tolmie and Crabtree treated novel technological deployments in the home as breaching experiments that “make tacit and taken for granted expectations visible … enabl[ing] us to see how even a simple arrangement of technology can breach ordinary expectations of where technology resides in the home, who owns it, who maintains it, and how user experience of it is accounted for”.
However, what matters is not that there are at least two different approaches to breaching experiments in design, nor the contrast between them. What matters is that, despite their differences, previous breaching experiments are oriented, as Poole succinctly puts it, to existing technology and related practices, whether that be a novel prototype or technology that has been appropriated at scale and is well established.
We are not focused on existing technology, whether or not it makes trouble and disrupts or provokes practice by virtue of having to be “made to work” in the world. There is no actual system, functionality, connectivity or interactivity in our breaching experiments, only utopian and dystopian envisionments of autonomous systems at work in everyday life. Our breaching experiments are oriented to future and emerging technologies and the acceptability challenges that confront their adoption in everyday life. Rather than explicating, or even explaining, existing practice, they seek to engage potential end-users in reasoning about the place of future and emerging technologies in their everyday lives, and thereby inform us as to key challenges that need to be addressed to make those technologies an acceptable feature of everyday life at an early stage in the design life cycle, before we have built anything. Furthermore, our breaching experiments are done through workshops rather than performances, field trials or ethnography.
It would appear, then, that there is some novelty to our approach: we are not using breaching experiments to make trouble or disrupt, or to provide an analytic lens on existing technology-related practice, and we are not conducting them in previously practiced ways. The novelty in our approach lies in repurposing tried and tested methods to create utopian and dystopian scenarios that are designed to disrupt background expectancies that organise everyday life in familiar settings (in this case, the home); as our findings make perspicuous, those expectancies have little to do with existing technology-related practice either. The disruption lies in what are essentially incongruous visions of the future depicted by contra-vision scenarios, which, in being presented to potential end-users, create reality disjunctures that motivate efforts at resolution and repair. It is in the attempt to “resolve incongruities” and repair the reality disjunctures they occasion that ordinarily tacit and unspoken background expectancies are surfaced; expectancies about which people usually have “little or nothing to say” when asked, but which this methodological innovation makes available early in design.
It is also important to note that it is not the intention, in designing breaching experiments, that the utopian and dystopian futures depicted in contra-vision scenarios should predefine problematic aspects of future or emerging technology. Their job, as outlined above, is to disrupt and create a reality disjuncture whose repair surfaces the taken for granted expectancies that impact future and emerging technologies in everyday life. Does this mean that the contra-visions limit the range of issues brought up by participants, constraining them to the incongruous topics depicted in the contra-vision scenarios? The vignettes presented above would suggest not, insofar as our participants’ talk and reasoning can be seen to range across a great many matters not depicted in the contra-visions. It would be more apposite, then, to see the contra-visions not as predefining design issues or topics but as provocative social objects that elicit multiple background expectancies, which participants themselves come to shape and prioritise in their talk as they go about resolving the incongruities those objects create.
In breaching taken for granted background expectancies that are usually left unspoken, our utopian and dystopian scenarios have elaborated significant challenges for the design of autonomous systems in the home. It might be countered that these are grand claims to make on the basis of 32 people’s say-so. However, while articulated by a relatively small number of people, the background expectancies they have expressed do not belong to them alone. As Garfinkel put it,
“Almost alone among sociological theorists, the late Alfred Schutz, in a series of classical studies of the constitutive phenomenology of the world of everyday life, described many of these seen but unnoticed background expectancies. He called them the ‘attitude of daily life’. He referred to their scenic attributions as the ‘world known in common and taken for granted’.”
The attitude of daily life, the world known in common, is in other words what anyone, i.e. (with Bittner’s caveat) “any normally competent, wide awake adult”, knows about everyday life. There is nothing special about our findings, then. They do not speak of and elaborate rarefied knowledge or insight possessed by a privileged few. Rather, the background expectancies articulated by our participants are known in common, shared, used, recognised and relied upon by a much larger cohort, and it is for this reason that they elaborate significant challenges for the design of autonomous systems for the home.