1 Introduction

Today’s smart video doorbells use facial recognition to decide who may enter private homes, fitness trackers use deep learning to detect pregnancy, and feminized digital voice assistants use natural language processing to obediently and intimately serve. What if we began to speculate that such intelligent things have an ethical agenda? Could we then imagine ways to move past the moral divide between ‘human vs. nonhuman’ in contexts where things are meant to act on our behalf? Would this help us better address matters of agency and responsibility in the design and use of intelligent systems?

In this article, we propose a way of doing Research through Design that enables designers to critically consider the effects of interacting with intelligent things in everyday life, and to bring into view the ethical implications of those effects for design. We illustrate our approach with the outcomes of a workshop we organized at the Research Through Design 2019 conference in Delft, the Netherlands (Reddy et al. 2019).

2 Ethics and design

When discussing ethics in design, the emphasis is usually on the implications of design as a human act, judged against a moral benchmark of design ideals for what is to be considered good or bad (Hursthouse and Pettigrove 2016). While some scholars and practitioners consider design as an activity and outcome with an inherently moral or ethical valence (Buchanan 1985; Nelson and Stolterman 2012; Verbeek 2005), others have explicitly drawn attention to the ethics of design and technology by devising value-oriented frameworks for guiding design practice and assessment (Abascal and Nicolle 2005; Friedman et al. 2008; Le Dantec et al. 2009).

More recently, however, scholars working within ethically sensitive justice and care frameworks, increasingly concerned with dismantling structural inequalities and marginalization, have begun to argue that ethics arises in ongoing interactions between humans and things (individually and collectively), rather than from rigid moral principles (de La Bellacasa 2017). These frameworks suggest that by confining ethical matters to human action alone, we may fail to account for how new relations between human and nonhuman entities contribute to moral consideration. In other words, by failing to address intelligent things as objects of moral consideration through their relations within a broader social context, we may end up lacking a grip on how to govern our interactions with intelligent things—and how to design for it.

With growing moral concerns today around privacy, security and trust, designers are faced with critical questions about the ethical consequences of everyday encounters with intelligent systems. These ethical dilemmas concern not only how to craft one-to-one interactions with technology, but also how to govern end-to-end relations among multiple people, products and services (Giaccardi and Redström 2020). For example, today’s smart doorbells alert homeowners when an unfamiliar face is present at their doorstep. However, they also capture and store facial recognition data about family, friends, visitors and the occasional passers-by, and even share that data with local police (Ferguson 2017). Similarly, activity monitors track footsteps to make exercise recommendations and to alert users when they meet or surpass their exercise goals. This form of datafication has already led some employers and insurance companies to request or pressure people to share fitness and health data with them (Office of the Privacy Commissioner of Canada 2014).

Today’s intelligent things are defined by the interactions taking place between them and us (and indeed between our things and other things), often without our even being aware of the exchanges (Redström and Wiltse 2019). How might we critically approach both the local and the systemic effects of interacting with intelligent things in everyday life, and bring into view the ethical implications of those effects for design? How can designers develop the ethical sensitivity that will help them identify the handles users need to understand, respond to, repair, govern, and, if needed, contest intelligent things’ autonomous performances?

3 Ethics through design

In the workshop, we experimented with a creative way of crafting and enacting ethical encounters between people and intelligent things, one that aligns with Research through Design (RtD) and combines elements of participation, embodiment, and speculation. RtD is a design research approach in which design activities play a crucial role in gaining an actionable understanding of a complicated situation, framing and reframing it, and iteratively developing prototypes that address it (Stappers and Giaccardi 2017). RtD practitioners are not new to using data to support co-creation with remote users and to deepen their understanding of user experience with digital devices. However, it is only with the rise of the Internet of Things (IoT) and Artificial Intelligence (AI) that RtD practitioners have begun to engage with data and intelligence more critically: for example, by deconstructing complex data processes to broaden participation, repurposing automation and monitoring technologies in support of city-making, drawing attention to the topologies of the data environments we live in, and speculatively addressing widespread concerns with data objects (see Giaccardi 2019 for a review). This shift in the conceptual framing of RtD has inspired new design methods and perspectives for reimagining a role for intelligent things in design research (Jenkins 2018; Odom et al. 2017), including casting them as co-ethnographers and co-designers in the design process (Giaccardi 2020).

In our experimentation, we added these nonhuman perspectives to help workshop participants problematize the design space of intelligent things. The intention was to "unsettle a designer's assumptions, demonstrate the problem to be more uncertain, more nuanced or more complex than originally assumed or regarded" (Giaccardi 2020, 126).

4 Crafting ethical encounters

We invited 20 workshop participants to impersonate intelligent things and then prototype speculative scenarios with them. The participants were selected based on their responses to an open call for participation, which required them to submit a proposal choosing an intelligent thing to bring to the workshop as an ‘invited guest,’ and to express how they approached the ethics around the behavior and use of the selected thing from their research perspectives. A majority of applicants were conference attendees with a background in design and HCI research from international public and private institutions. In making the selection, we ensured a balance among the applicants’ research expertise, openness to experimental methods, and the motivation expressed through their choice of intelligent thing. The things chosen ranged from commercially available smart products such as assistive robots (e.g., Roomba, Anki Vector) and smart objects (e.g., a motion-activated night light and a hydration-tracking water bottle), to mundane things perceived as ‘intelligent’ such as plants and shoes, custom-designed data-driven artifacts such as shape-changing “Listening Cups,” and machine learning interfaces.

The research design of the workshop activities entailed exploring literal use cases of the chosen intelligent things from nonhuman perspectives, taking advantage of the language of design scenarios, storyboards, and roleplay in RtD processes. This approach was orchestrated mainly by combining the “Interview with Things” technique (Chang et al. 2017) with critical and speculative design. According to most participants, the ‘interview a thing’ activity described below was instrumental in enabling a different, more critical mindset when acting out the scenarios.

4.1 Activity #1: interview a thing

Participants began by interviewing the intelligent things they had invited to the workshop as nonhuman participants. Given the workshop's timeframe, these interviews did not rely on sensor data collection (as in the original method), but on participants acting out the thing based on their previous interactions and personal experiences with the invited nonhuman guest. These interviews surfaced mid- and long-term ethical implications of using intelligent things, as well as particular dilemmas. On the basis of these dilemmas, participants formed groups to speculate on a future scenario meant to explore a particular ethical issue that had emerged from the interviews.

4.2 Activity #2: speculating nonhuman futures

Later, the participants prototyped the scenarios and acted out the relevant nonhuman perspectives through props and bodily enactments to further unpack the ethical dilemmas and paradoxes that had emerged from the interviews. In other words, participants encountered intelligent things twice: in the present (through the interview), and then again in the future (through the scenarios).

4.3 Activity #3: shared criticalities

Through the process of interviewing and enacting intelligent things, participants were able to relate to nonhuman entities in ways that approximate how we relate to people: empathizing with their experiences, understanding their worldviews, and learning about their social lives. These relations grounded participants’ future encounters in something they themselves took issue with in the present, and brought them into a responsible position. The final workshop activity focused on jointly analyzing these experiences and relations, and identifying shared criticalities in a discussion moderated by the organizers.

These activities were scheduled as a full-day conference workshop. The workshop organizers sought verbal consent from the participants individually on the day of the workshop. Consent concerned data collection in the form of photo and video recordings, and attribution of contributions (with first and last names) when disseminating the workshop outcomes. We further encouraged our participants to document the workshop activities, and sought their permission to share the documentation among themselves and to publish it later. Additionally, after the workshop, we stayed in touch with several participants and asked them to provide feedback on the workshop’s insights and our analysis of them.

5 Outcomes and preliminary insights

We turn to a few example scenarios from our workshop, specifically the resulting speculative enactments, to illustrate how these encounters allowed us to critically and provocatively consider the effects of interacting with intelligent things in everyday life, and to bring those effects into view for design. In the discussion, we tease out how the activities helped us frame a way to address the ethical issues surfaced, one that focuses on building capacity for ethical responses ‘in the encounter’ with intelligent things.

5.1 Button: hidden and connected tensions

Based on the interview with a connected button like an Amazon Dash or Flic, and its expressed desire to free its owners from daily burdens and routine labor, workshop participants Heather Wiltse, Masako Kitazaki, Stuart Curran, and Viktor Bedö created a concept for a unique button using woolen threads. Starting from a daily routine such as doing laundry with a connected button, the participants enacted a scenario in which the button would send a signal through its network to the washing machine to run a cycle. However, the signal would be intercepted by a rogue network that tampers with the target and causes a bomb to explode instead, without the user being aware of the hidden network interactions. Through this scenario, the participants reflected on how even something as simple as a button connects to multiple threads representing different network affiliations. Accessing the loose ends of one thread is sufficient to tamper with the button's function, with unexpected and potentially tragic outcomes—a metaphor for how we can only ever see the network partially, with some parts in view and others hidden. Engaging a nonhuman perspective enabled participants to encounter the possibility of an unknown situation whose unfolding is more complex and obscure than it would appear in mundane interactions with intelligent things. The scenario confronted participants with the dilemma of how aware one should be, or would want to be, of the underlying hidden networks (Fig. 1).

Fig. 1 (Left) Reflecting on the button’s network affiliations using ‘thread’ as metaphor (© 2019 Lifeshots Photography); (right) participants enacting a scenario of a ‘bomb detonation’ with the connected button

On the one hand, being more aware of the underlying architecture and networks allows for better judgement (if such awareness can be expected at all). On the other hand, automation and routines are a necessary and desired aspect of contemporary life, freeing us to focus on the things that matter most to us. This example thus shows the participants' confrontation with the ethics of not being able (or willing) to directly engage with the tensions or effects of our actions when using intelligent technologies.
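
The partial visibility at stake here can be made concrete in code. The minimal sketch below is ours, not the workshop's: all hop names and the toy event format are invented. It models a button press as an event passing through a chain of opaque intermediaries, any one of which can rewrite the command before it reaches the target:

```python
# A toy model of the button scenario: what the user experiences as a single
# press traverses several intermediaries ("threads"), each of which can
# rewrite it. All names and the event format are hypothetical.

def vendor_cloud(event):
    # The hop the user implicitly trusts.
    return {**event, "route": event["route"] + ["vendor"]}

def third_party_broker(event):
    # A network affiliation never surfaced in the interface.
    return {**event, "route": event["route"] + ["broker"]}

def rogue_node(event):
    # A compromised hop: same interface contract, different intent.
    return {**event, "command": "detonate", "route": event["route"] + ["?"]}

def target_device(event):
    print(f"target received '{event['command']}' via {event['route']}")

press = {"command": "start_wash_cycle", "route": []}

# From the user's side this is one press; the chain of custody below is
# invisible at the interface.
for hop in (vendor_cloud, third_party_broker, rogue_node):
    press = hop(press)
target_device(press)  # target received 'detonate' via ['vendor', 'broker', '?']
```

The point of the sketch is not the attack itself but the asymmetry it exposes: every hop sees and can alter the event, while the user sees only the button.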

5.2 Shoe: embodied and evolving biases

Based on the interview with a pair of connected shoes, and their persistent concern with optimizing their wearer’s walking, workshop participants Cayla Small, Johan Salo, Juliette Bindt, and Larissa Pschetz created a concept for intelligent footwear. In this scenario, the shoe design would evolve by learning and matching walking patterns over generations of wearers. Embedded actuators would subtly influence wearers’ walking style according to emerging patterns of efficiency and aesthetic value. Taking the nonhuman perspective of a connected shoe encouraged participants to imagine the life of future generations of intelligent footwear, wanting to continue to influence walking patterns over decades, or even centuries. Eventually, the shoes would make it harder for wearers to follow the increasingly convoluted walking styles imposed on them, with potentially undesired consequences for their health and well-being (Fig. 2).

Fig. 2 (Left) Participants role-playing different walking patterns emerging from intelligent footwear; (right) a prototype representing a repository of the historical, generational data embodied in a pair of shoes

With the underlying data-driven relationships changing over time, the shoes would alter what is considered a standard way of walking, eventually producing a “concept drift” in human walking as predictive performance degrades. This scenario allowed participants to consider that although intelligent everyday products can evolve according to user preferences, they also have the power to define what is ‘normal.’ The enactment of this speculative scenario provoked participants to reflect on the ethical implications of leaving an algorithm in charge of determining what is ‘normal’ (normative) in everyday life, and how that might affect our minds and bodies in rather profound ways.
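
The “concept drift” the participants invoked is a well-documented phenomenon in machine learning: the distribution a model was fit on shifts after training, degrading its predictions. A minimal simulation of the shoe's feedback loop, with invented numbers, might look like this:

```python
# A toy illustration of concept drift in the shoe scenario: a model of
# "normal" stride length is fit once, then the population's strides shift
# each generation (partly nudged by the shoes themselves), and the fixed
# model's notion of normal degrades. All values are invented.
import random

random.seed(0)

NORMAL_MEAN = 0.75  # stride length (m) the model was originally fit on
TOLERANCE = 0.10    # strides outside mean +/- tolerance are flagged abnormal

def fraction_judged_normal(strides):
    return sum(abs(s - NORMAL_MEAN) <= TOLERANCE for s in strides) / len(strides)

mean = NORMAL_MEAN
for generation in range(5):
    strides = [random.gauss(mean, 0.05) for _ in range(1000)]
    print(f"generation {generation}: "
          f"{fraction_judged_normal(strides):.0%} judged 'normal'")
    # The shoes nudge wearers toward their own aesthetic of efficiency,
    # shifting the very distribution the fixed model was fit on.
    mean += 0.04
```

Because the shoes themselves shift the distribution they were fit on, the fraction of strides judged ‘normal’ collapses over generations; what drifts is not only the data but the standard of normality.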

5.3 Bottle: disempowering dynamics in delegations of agency

Based on the interview with a hydration-tracking water bottle, workshop participants Iskander Smit, Janet van der Linden, and Marije de Haas were profoundly concerned with caring for people who cannot care for themselves. They created the scenario of an intelligent bottle for people with dementia, in which the bottle belongs to the caretaker rather than the sufferer, and connects to a euthanasia plug implanted in the sufferer. If the caretaker were to become severely dehydrated (and thus unable to provide care), the euthanasia plug would unplug itself automatically, ending the sufferer’s life. This scenario involved participants grappling with the perspectives of both the bottle and the euthanasia device in mediating the delicate balance between the caretaker's life and the sufferer’s. By acting out these nonhuman perspectives, the connected devices and their digital contract became more present and forceful, in both their agentive role and their relationship with the caretaker and the sufferer. The scenario exposed the disempowering dynamics at play in delegations of agency to intelligent things. At the same time, the enactment of both human and nonhuman perspectives prompted alternative ways of negotiating moral ambiguity, primarily through role-playing how intelligent things could become ‘doubtful’ when unsure of what decision to take on people's behalf. What kinds of help would intelligent things seek out when in doubt? As an answer, the workshop participants envisioned a self-help book for intelligent bottles in morally ambiguous situations. This reframing shifted the focus from moral delegations of agency to conscious deliberations between people and their things (Fig. 3).

Fig. 3 (Left) A prototype of the smart bottle to negotiate perspectives between the bottle and the euthanasia device (© 2019 Lifeshots Photography); (right) a self-help book designed to assist the bottle when morally ‘in doubt’
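
The bottle's ‘doubt’ gestures at a familiar pattern in the design of autonomous systems: deferring to humans when confidence is low, especially before irreversible actions. The following is a minimal sketch of such a deferral rule; it is ours, the thresholds and readings are invented, and nothing here reflects an actual medical device:

```python
# A toy 'doubtful thing': instead of acting autonomously on an irreversible
# decision, the device defers to humans whenever its confidence falls below
# a threshold. All thresholds and readings are hypothetical.
from dataclasses import dataclass

@dataclass
class Reading:
    hydration_level: float    # 0.0 (severely dehydrated) .. 1.0 (hydrated)
    sensor_confidence: float  # 0.0 .. 1.0

DEHYDRATION_THRESHOLD = 0.2
DOUBT_THRESHOLD = 0.9  # consequential actions demand near-certainty

def decide(reading: Reading) -> str:
    if reading.sensor_confidence < DOUBT_THRESHOLD:
        # The bottle is 'in doubt': escalate rather than act.
        return "defer_to_humans"
    if reading.hydration_level < DEHYDRATION_THRESHOLD:
        return "alert_care_network"  # never act irreversibly on its own
    return "do_nothing"

print(decide(Reading(hydration_level=0.15, sensor_confidence=0.6)))
# -> defer_to_humans
```

Here the ethically salient design decision is not the hydration threshold but the deferral rule itself: the device is built to know when not to act.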

5.4 Mask: obscurity through tactical intervention

Interviewing smart home devices such as Roombas, surveillance cameras, and motion sensors revealed their indefatigable commitment and allegiance to home security. In response, workshop participants Audrey Desjardins, Bruno Jaeger, Maria Luce Lupetti, and Lars Holmberg created the concept of a tactical concealment mask. In this scenario, two parents and their teenage son live in a household where a roving surveillance camera serves as an agent with allegiance to the parents, but not to the son. By enacting this scenario, the participants devised a tactical concealment mask that would obscure the data the surveillance camera uses to identify the son’s face and track his possible whereabouts. The mask would allow the teenage boy to sneak around and leave home late at night without the surveillance camera alerting his parents (Fig. 4).

Fig. 4 (Left) A prototype of the concealment mask (© 2019 Lifeshots Photography); (right) participants describing the storyboard of the teenage son obscuring surveillance data using the tactical mask

In highlighting the active role that intelligent things play in mediating social relations between family members (e.g., the roving camera trying to tattle on the boy on behalf of the parents, and the boy obstructing the parents’ ability to track his movements via the camera), the scenario surfaced issues of conflicting allegiances and forms of exclusion in the power relations between multiple human and nonhuman entities (e.g., the camera readily assisting the family in detecting a harmful intruder, yet also reporting on the activities of a family member). The enactment of this scenario prompted participants to consider the ethical implications of control when things increasingly mediate conflicting interests. It also offered participants a way to rehearse tactical solutions for contesting and negotiating the level of control that intelligent technologies can and should impose on everyday situations.
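
Technically, the mask exploits the shape of most recognition pipelines, in which identification reduces to comparing feature vectors against a threshold. The toy sketch below is ours; the vectors and threshold are invented and bear no relation to any real face recognition system. It shows how occluding the regions that features are computed from pushes a match below the decision threshold:

```python
# A toy view of why occlusion defeats identification: recognition is a
# similarity comparison between feature vectors, and masking part of the
# face zeroes out enough features to drop the match below the threshold.
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

MATCH_THRESHOLD = 0.8  # hypothetical decision threshold

enrolled = [0.9, 0.8, 0.7, 0.6, 0.9, 0.8]        # stored features of the son's face
unmasked = [0.85, 0.82, 0.68, 0.62, 0.88, 0.79]  # camera's live view

def identified(features):
    return cosine(enrolled, features) >= MATCH_THRESHOLD

print(identified(unmasked))  # True: the camera reports the son

# The mask occludes the facial regions the first features are computed from.
masked = [0.0, 0.0, 0.0, 0.62, 0.88, 0.79]
print(identified(masked))  # False: similarity drops below the threshold
```

Seen this way, the mask does not ‘fool’ the camera so much as negotiate, at the level of features, which relations the camera is allowed to establish.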

6 Building capacity for ethical responses

Given our all-too-human biases, it is perhaps not surprising that thinking through our interactions with things (and things’ interactions with us and with each other) opens up a new space of possibilities for design. Specifically, it allows access to perspectives that go beyond a narrow focus on the individual user, and that can be useful for bringing under-examined, unanticipated, and more systemic ethical issues into consideration for design. With theoretical perspectives on nonhuman agency gaining new relevance and application in “attending to the things of design” and their entangled relations (Frauenberger 2019; Jenkins et al. 2016; Odom et al. 2017; Wakkary et al. 2017), decentering the human perspective helped participants think beyond functional aspects and reflect on other kinds of relationships with intelligent things. We saw this happening in several ways. In some instances, things prompted workshop participants to think beyond an individual use case (e.g., the button). This scenario worked to defamiliarize participants’ thinking and, specifically, to push it past the one-user > one-interface > one-function blind spot. Taking a thing perspective also led participants to extend their thinking beyond a single user’s lifetime (e.g., the shoe). Both the button and the shoe were useful for encountering the possibility of harmful and violating experiences for individuals when things are considered beyond their situated context and lifetime. Ultimately, we found that encountering things differently provoked workshop participants to look closely and think deeply about how intelligent things mediate relationships among people and other things. Counterintuitively, however, the users’ role became significantly more central as participants began to look at these relationships from the viewpoint of the thing. The bottle and the mask, for example, were useful in foregrounding how relations with intelligent things (and the social relationships they mediate) are multiple and conflicting, and in suggesting forms of conscious deliberation and tactical contestation by the users themselves. This returned the focus of ethics and responsibility to humans and their encounters with things (rather than things with other things), as elaborated below, even though the workshop activities were designed to encounter intelligent things by actively pushing back against human-centeredness.

Our approach echoes emerging more-than-human approaches in design (Clarke et al. 2019; Coulton and Lindley 2019; DiSalvo et al. 2011; Forlano 2016; Galloway 2013; Liu et al. 2019; Wakkary et al. 2017), and offers one particular way to mobilize the agency and roles that humans and nonhumans can play in everyday life, and to speculate on the new capacities for action configured at the intersection of humans and nonhumans (Giaccardi and Redström 2020; Kuijer and Giaccardi 2018). In accordance with the decentered (Grusin 2015) and participatory (Bastian et al. 2017) perspective that a more-than-human design orientation is called to engage, we approached agency not as something that people or artifacts have, but as the emergent result of how the world actively and continuously configures and reconfigures itself. This allows ethics to be encountered through “ongoing interactions” (de La Bellacasa 2017). Navigating ethical dilemmas and paradoxes through “ongoing interactions” helps designers not just to explore and anticipate how one-to-one interactions with technology may unfold in the future, but also to consider how to govern the “end-to-end relations” (Redström and Wiltse 2019) that can form among multiple people, products and services.

At first, the activity of speculative interviews supported a process of defamiliarization (Bell et al. 2005) by granting things an active role in creating the narrative of their use. This activity allowed participants to consider things’ autonomous behavior not as a simplistic exercise of anthropomorphization, but within a broader ecosystem of humans and nonhumans (Maller and Strengers 2019). Later, by including things as participants and giving them an active (ethical) role in the roleplay exercise, designers in the workshop turned into active (ethically concerned) participants too, and more easily stepped away from the pitfalls of looking at people in their passive role as consumers of intelligent technology. By decentering themselves, they created speculative scenarios where users have the capacity to take responsibility for their lives, avoid unwanted situations, and even make changes to the design through purposeful actions at use time. Through unexpected ethical encounters with their nonhuman counterparts, the participants seemed to be activated by the artifacts, provoked to act and respond through forms of resistance and non-compliance to the thing's proclaimed intelligence, functionality, and seamless user experience. New capacities and affective responses emerged from the interaction—protecting from (the mask), caring for (the bottle), or laying an influence upon (the shoe). In other words, the workshop participants could imagine things that empower users with a high(er) degree of agency and freedom to act. This stands in contrast to the low level of control users currently have over the decisions made by intelligent systems, and to their limited ability to understand the effects that those decisions may have on them. As Ananny and Crawford (2018) point out, despite efforts to make intelligent systems more transparent to users, there are limits, for example, to how much a user can understand and anticipate the consequences of a system’s decisions; these limitations lie not in the users, but in the epistemological assumptions underpinning current design ideals of transparency. Our scenarios instead highlight the relations, types of interfaces, and interactions that designers should be attending to if we are to build capacity for ethical responses even after something has been designed. This capacity cannot be ascribed to the user or the intelligent thing, but must rather be seeded in their encounters.

Fictional artifacts are used here not just to imagine possibilities, but to situate technology within everyday life and open up spaces for discussion (Hales 2013; Pierce and DiSalvo 2018). Such fictions help trouble “collective imaginings” of a technology or future (Søndergaard and Hansen 2018), and bring into focus particular “matters-of-concern” (Bleecker 2009). Beyond the storytelling aspect, a strong characteristic of speculations is that their physicality can generate new potentials within everyday contexts. Thus, material speculations, in their situatedness, become ‘sites’ for both critical inquiry (Pierce 2019; Wakkary et al. 2016) and experimental ethics (Lütge et al. 2014; Verbeek 2013).

Building capacity for ethical responses, that is, enabling and responding to these encounters by design, is a matter of human responsibility: to foresee unintended consequences and harms, but also to remain open and creative in exploring the potential of such nonhuman encounters (Heaven 2020; Nicenboim et al. 2020). In future work, the theoretical considerations that grant agential perspectives to things could be pushed further through computational RtD methods, to explore the extent to which intelligent things can invoke ethical deliberations in their social encounters.

7 Conclusions

We organized this workshop with the aim of opening up mundane, everyday encounters with intelligent things as a source of ethical deliberation, complementing existing moral concerns around intelligent systems, and bringing unexpected nonhuman encounters into view for design consideration. By speculating from the perspective of intelligent things, we could, with relative success, activate ethical situations. When participants took on the role of things, they in turn became activated, responsive, and sensitized to those situations. The results of the workshop’s speculative and role-play activities together open up a space to discuss matters of ethics and responsibility for future AI research without capsizing into purely moral discussions. Within the RtD framework, this workshop can be seen to contribute to and complement morally driven approaches to AI ethics and responsibility, by allowing participants to think creatively about the effects of interacting with intelligent things in everyday life and the implications these interactions bear on society.