
The widening rift between aesthetics and ethics in the design of computational things

Abstract

In the face of massively increased technological complexity, it is striking that so many of today’s computational and networked things follow design ideals honed decades ago in a much different context. These strong ideals prescribe a presentation of things as useful tools through design and a withdrawal of aspects of their functionality and complexity. Beginning in the mid-twentieth century, we trace this ‘withdrawal program’ as it has persisted in the face of increasing computational complexity. Currently, design is in a dilemma where computational products can be seen as brilliantly designed and engaging to use yet can also be considered very problematic in how they support hidden agendas and often seem less than trustworthy. In this article, we analyse factors shaping this emergent ethical dilemma and reveal the concept of a widening rift between what computational things actually are and do and the ways in which they are presented as things for use. Against this backdrop, we argue that there is a need for a new orientation in design programs to adequately address this deepening rupture between the aesthetics and ethics in the design of computational things.

Introduction

Design as a practice has been steadily expanding. This is in large part a consequence of technological advances that provide new possibilities and new forms of complexity. These include the reach of computation into new domains and application areas and the sociotechnical complexity that results from this permeation of interactive technologies in everyday life and society. Early challenges of crafting basic interaction mechanisms that could enable people to make use of complex devices and computational machines have given way to the challenges of curating entire interactive environments and providing interactive capabilities that can enable people to effectively calibrate their engagement and integrity in the networked flows of planetary-scale computation (Bratton 2015).

Yet even as the scope, scale, and overall character of the practices and major challenges of design seem to have changed substantially, it is striking that the formal expressions of interactive consumer products often seem to follow aesthetic principles honed decades ago in a quite different context. While traditional design principles are still effective in providing appealing appearances and utility in use, we also find serious concerns emerging with their ongoing use because the challenges and concerns of designing products have changed dramatically: dark patterns of design that nudge and manipulate; things that are part of opaque algorithmic systems that operate with increasing consequence and seemingly decreasing accountability; and connected things that produce data about people’s lives and the world through their use that are optimized for surveillance, prediction, and control rather than actual user-facing utility and experience. Designed interactive things still typically present themselves through their interfaces as ‘just’ simple and effective tools that provide engaging and even pleasurable use and user experiences. At the same time, they are also key elements of larger systems built for multiple kinds of use and users that involve producing data about end users, but not for them, in following the increasingly prevalent economic model of surveillance capitalism (Zuboff 2019) and its elaboration of earlier patterns of exploitation through social quantification (Couldry and Mejias 2019).

It seems that we have arrived in a situation where products can be in some ways brilliantly designed and engaging in use but also completely problematic in other ways (see also Weyenberg 2017). As we will argue in this paper, certain design strategies used in the drive to make increasingly complex technology accessible for use have resulted in a split between design ethics and design aesthetics and an increasing rift between what computational things are and do in terms of their core character and behaviour and the ways in which they are presented to people as things for use. In this context, we find that trust in things is no longer earned by simplicity, and that ease of use often enables users being used. We are now in a situation where everything can be done ‘right’ from a design practice point of view, and yet we can still end up with a designed product that users will (usually for good reasons) not trust or feel comfortable using (Lau et al. 2018; Shklovski et al. 2014). How did we get here? And what mistakes and opportunities for design might have been overlooked along the way?

This article is an exploration of these questions, motivated by concerns about what seems to be an undeveloped capability to handle emerging sociotechnical complexities well through design. In particular, we are intrigued by the fact that, despite being radically different in so many substantive ways, the computational things of today are often designed to look so similar to things from decades ago.

Take for instance the example of the Braun LE1 speaker from 1959 (designed by Dieter Rams) and its recent re-launch as a series of smart speakers housing the Google Assistant (Fig. 1) (see also Griffiths 2019). The former can be used to listen to music or the radio. The latter enables users to interact with Google’s voice-controlled digital AI assistant and thereby use the device to not only play music but also access content on the internet or personal devices through the assistant, and to control connected devices, among other things. Yet they look nearly the same. Similarly, it is widely known that Apple’s product designs as well as other contemporary technology designs are strongly influenced by Rams’ design aesthetic found in the products designed by him or under his supervision at Braun.Footnote 1

Fig. 1

The Braun LE Speakers from 1959 designed by Dieter Rams and the recently updated and relaunched smart speaker from 2019 retaining Rams' design aesthetic [Screenshot by author from Braun Audio (2020)]

Another example along those lines is the nearly identical presentation of Braun speakers from 1960 and the Google Home Max smart speakers in 2017, both shown with a record player (Fig. 2). While the devices look nearly identical in the way they are presented to users, they are very different in terms of what they are and do. They also deal very differently with the ethics of designing, with design aesthetics, and with the relation between the two; and in looking at these differences, it becomes possible to see what we propose can be understood as a widening rift between what things are and do and how they are presented as things for use through (interface) design (Fig. 2). In this article, we aim to better understand this rift between the aesthetics and ethics of design and its underlying dynamics, both analytically and figuratively, by developing an illustration that builds on Fig. 2 and highlights those dynamics (Fig. 7).

Fig. 2

Top: Braun Speakers from 1960 designed by Dieter Rams (image by Das Programm, https://www.dasprogramm.co.uk). Bottom: the Google Home Max smart speakers from 2017 [Screenshot by author from Google Store (2020)]. Both are presented with a similar aesthetic despite being substantially different technologies. Illustrated on top: the widening rift between aesthetics and ethics of design

We take this paradox of things being very similar and different at the same time as a starting point for our investigation, beginning with the origin of this modern aesthetic strategy and its ideals of things as effective tools withdrawing from awareness during use. We then trace how this basic orientation—which we refer to as the ‘withdrawal program’—was maintained in the face of increasing technological complexity, resulting in a widening rift between the interfaces presented for use and what was going on beneath them. While it was originally the highly complex functioning of computational machines and systems that was concealed to enable more effective use, the scope and scale of what is concealed has more recently expanded dramatically to include also a variety of additional actors, interests, and purposes. Withdrawal of one kind (during effective use) has become withdrawal of another kind entirely (technological functioning and interests and agendas of other social actors). The serious issues raised by this situation include ethical ones that cannot be resolved only by a design strategy that continues to mainly focus on effective use of things as (withdrawn) tools.

While the argument presented here engages with design history, it is not intended to be a historical analysis. Rather, what we are interested in is getting to the heart of key challenges for design related to making technological complexity available for effective and humane use. The comparison between older and newer technological things allows us to pull out one particular and, in many ways, key historical trajectory. Thus, the historical material we engage with is intentionally selective. It is focused on tracing a line from a time when ‘good design’ entailed alignment between value for users and value for producers (i.e., people wanted to buy the products), to the present moment when there is significantly less alignment (i.e., producers are incentivized to use their products to produce data and profitable insights about current and future behavior of users). It is meant to provide illumination for understanding where we are now, and where design is going (and where it is not going or cannot go) if it continues along this same trajectory. Through our conceptual illustration we sketch what appears to be a widening rift between how things are presented as things for use through design, and the larger picture of what they actually are and do in their entirety. This is directly tied to the ethics and aesthetics of design, the practice and principles of giving form and presenting a thing for use. The goal of this article is to sketch the described problem space in a new way to pave ways for future explorations in advancing design.

The evolving withdrawal program

Ethics and aesthetics, the ‘what’ and the ‘how’

Design has always wrestled in various ways with the relations between form and function, surface and character, ethics and aesthetics. The particular aspects of these issues that have been in the foreground have changed over time and with technological developments, even as earlier concerns have remained present. In a simplified sense, one can think of design as operating at the intersections between what a thing is, and how it comes to present itself as that particular thing to us. While certainly engaged in what things are, it is often with respect to the ‘to us’ part that design makes its difference by defining how things should appear and present themselves—how they will make themselves present to us. One might be tempted to think that if one only gets the ‘what’ part right, the ‘how’ will easily follow; but in practice, answering the ‘how’ question is far from trivial. Typically, there is an almost infinite array of options, even for the most mundane objects. To exemplify: humans have been making drinking cups since the stone age (and it is safe to say that their function, the ‘what’, is rather well known by now), and yet there is still a seemingly endless stream of new variations being made (‘how’ with respect to forms, materials, production techniques, intended contexts of use, etc.).

Design makes use of several tools to manage this huge space of possibilities, a central one being aesthetics. With respect to the issue of how to design something, we can think of aesthetics as a kind of basic logic or framework, or as a set of foundational values that allow us to focus on certain parts of a huge design space and evaluate what possibilities are most promising. In other words, aesthetics is not just a matter of what things should look like—when designing, it is a foundation and a filter for what to look for. In a simplified sense, a given aesthetic defines what will qualify as a resolution to the ‘how’ issue, helping us determine what constitutes ‘good’—and in some sense perhaps even the ‘best’—design among all the different alternatives we may (or indeed may not) consider. But while there can certainly be a dominant aesthetic that influences much of the design happening in a given domain, such as a company, a region, or a market segment, it will still be one out of many possibilities: it is still an aesthetic, in singular with an ‘a’ and not an ‘A’.

Indeed, even when it comes to the more abstract or foundational issues in design such as aesthetics, design has to work through the particulars it produces: it is no more possible to form such an ‘aesthetic’ in general than it is to make a ‘chair’ in general. The concrete result of such efforts will still be an instantiation, no matter how exemplary, generic or universal we desire it to be. ‘Things’ can always be made differently.

In this article, we argue that there are serious issues involved in how we conceive of the ‘what’ and the ‘how’ when designing networked computational things, and that prevalent design aesthetics in the technological domain not only fail to address what these new things actually are, but that this aesthetic has become instrumental in hiding crucial aspects of what these things do. This inherently involves and affects design ethics—the moral frameworks, sensitivities, and values that guide responsible choices made in the practice of designing things. These choices determine, among other things, how a thing is presented for use through design.

To unpack this position, we need to go back to where our current normative understanding of computational things as efficient tools comes from, how an aesthetic centred on simplicity evolved, and where qualities such as usability and effectiveness became so crucial that serious concerns regarding what things are and do are seemingly accepted. For that, we begin with the period in which aesthetic ideals centred on things as tools that should withdraw from awareness, be ‘simple’, ‘transparent’ and ‘intuitive’ during use, were developed. We then trace how an evolving approach—which we call ‘the withdrawal program’ for short—has played out in the context of increasing technological complexity.

Simplicity and withdrawal

One of the central aesthetic notions in the story told here is ‘simplicity’. As straightforward as it may seem, the idea of ‘simplicity’ in the context of design aesthetics and ethics carries a complex set of historical connotations—simplicity with respect to what? To visual expression? To range of functions? To materials? To better understand its current instantiations, we therefore need to unpack some of its history. Let us start with a canonical example from the early days of home electronics referred to at the beginning of this paper, one that remains deeply influential in technology design to this day (see also arguments made with Fig. 1 and Footnote 1): Dieter Rams’ designs and the underlying principles he expressed in his ten principles of good design (Rams 2017):

Good design makes a product understandable

Products fulfilling a purpose are like tools. They are neither decorative objects nor works of art. Their design should therefore be both neutral and restrained, to leave room for the user’s self-expression.

It clarifies the product’s structure. Better still, it can make the product talk. At best, it is self-explanatory.

Good design is honest

It does not make a product more innovative, powerful or valuable than it really is. It does not attempt to manipulate the consumer with promises that cannot be kept.

Good design is as little design as possible

Less, but better—because it concentrates on the essential aspects, and the products are not burdened with non-essentials.

Back to purity, back to simplicity.

In these three exemplifying principles (out of a total of ten) we find a compact history of some of the key ideas in the formation of industrial design since the late nineteenth century. First, we have the idea that some everyday things are ‘tools’ that, above all, are intended to be used. This is quite different from thinking about everyday things as pieces of art, as decoration or some other form of a more passive presence in relation to human activity. The importance of ‘technology’ (not as material, but as a mode of being) is clearly visible here: (these kinds of) everyday things are not works of art; they are tools for living well. Following Martin Heidegger’s ontological account of technology, tools most significantly present themselves to us through use, as extensions of our own intentions and actions (Heidegger 1927). Indeed, in Heidegger’s account, the only time we attend to a tool as such is when there is a ‘breakdown’. In other words, for a technological tool to be present as the thing that it is and not through the useful purpose it has, is a ‘failure’ of sorts. This means that the functionality of the thing should be as easy to access and understand as possible, but also that the thing, once in use, should fade to the background. For instance, when one uses a phone to make a call one focuses on the conversation and the phone fades into the background. If the call is disconnected, the phone fails (breaks down) and moves into the foreground. As seen from this perspective, simplicity leads the way to other notions, such as not being in the way or being withdrawn, being ‘intuitive’ or ‘transparent’ with respect to use.

Another aspect of simplicity is that things should present themselves as what they actually are, that design should be honest. It should be easy—or at least straightforward—to understand what they are from the way they present themselves. Such ideas about honesty can be traced back to the Arts and Crafts movement and its tenet of ‘truth to materials’—a material’s nature should not be hidden—and to the work of, for instance, John Ruskin, who inspired the movement. In “The Seven Lamps of Architecture” from 1849, Ruskin addressed what he thought ought to be the seven guiding notions of design, one of them being ‘truth’:

“V. The violations of truth, which dishonor poetry and painting, are thus for the most part confined to the treatment of their subjects. But in architecture another and a less subtle, more contemptible, violation of truth is possible; a direct falsity of assertion respecting the nature of material, or the quantity of labor. And this is, in the full sense of the word, wrong; … VI. Architectural Deceits are broadly to be considered under three heads:—1st. The suggestion of a mode of structure or support, other than the true one; as in pendants of late Gothic roofs. 2d. The painting of surfaces to represent some other material than that of which they actually consist (as in the marbling of wood), or the deceptive representation of sculptured ornament upon them. 3d. The use of cast or machine-made ornaments of any kind” (Ruskin 1849, p. 48).

While originally a reaction against the tendency in industrialised production to use lower-cost materials to mimic the otherwise very expensive materials and time-consuming work of craft, this came to be a central idea in Modernist industrial design, still very much present in Rams’ later principles mentioned above. The far-reaching principle of ‘truth to materials’ at the heart of the Arts and Crafts movement influenced the philosophy of the Bauhaus and the principles that were taught there. This in turn had an impact on the Ulm School of Design (HfG Ulm) and its teachings, which affected Rams and his work at Braun as well (e.g., through collaborations). The nature of a material or a product (i.e., what it is) should not be hidden, while principles of simplicity and ease of use are aspired to in a design.

These ideas about honesty and simplicity also extend to the experiential qualities of encountering the object: visual complexity is kept to a minimum and overall expressions are restrained. Part of this may stem from the industrial celebration of efficiency and the associated elimination of everything that is not strictly necessary; but part of the background is also the strong reaction against features of much early industrial production, such as its use of excessive ornamentation, which once was a sign of great human effort but which, with technologies such as cast iron, became almost as easy to produce as plain surfaces. There is, however, also a trace of a more poetic element coming from an interest in what the essence of something is, as in how William Morris—a key figure in the Arts and Crafts Movement—wrote about ‘Hopes and Fears for Art’ in 1882, proclaiming:

“HAVE NOTHING IN YOUR HOUSES WHICH YOU DO NOT KNOW TO BE USEFUL OR BELIEVE TO BE BEAUTIFUL. All art starts from this simplicity; and the higher the art rises, the greater the simplicity. I have been speaking of the fittings of a dwelling-house--a place in which we eat and drink, and pass familiar hours; but when you come to places which people want to make more specially beautiful because of the solemnity or dignity of their uses, they will be simpler still, and have little in them save the bare walls made as beautiful as may be” (Morris 1882, p. 110).

And so, while the three principles by Rams cited above may appear rather straightforward, they all come from a history in design that was far from the norm at the time—on the contrary, many of the ideas were at the time almost a form of critical activism. The design principles also tell us that what we think of as our ‘current’ ideals and ideas about matters such as aesthetics, are indeed historical artefacts: things that have been made during long periods of time, and that typically change very slowly as they reflect more foundational values.

In the context of computational technology design, seeking this kind of simplicity most likely began with taking away all that is not necessary in the sense of reasonably and comprehensibly hiding aspects of how a given technology works. But over time, it has become a matter of hiding functionality, utility and operations that might distract from a chosen user-focused functionality and use. This implied a gradual shift in the interpretation of ‘simplicity’: from initially being about the ‘essence’ of a tool and what is ‘necessary’ about it, to instead being about what a user needs to know and understand to be able to use it in the way intended, and thus about what is necessary for whom.

As design came to face an enormous increase in technological complexity over time, and with it a continuous struggle to make things useful and usable, trying to achieve the ideal of simplicity came to be, in no small part, about minimizing failure and breakdown where the technology itself was forced to the foreground. Indeed, this understanding of simplicity in the context of technology led further towards ideals fostering withdrawal such as invisibility, meaning that ideally technology does not even move between foreground and background, but stays out of the way all the time. This approach has been maintained even as technological things have become vastly more complex (see for example Fig. 3 and initiatives like The Disappearing Computer 2004).

Fig. 3

A still from a Google video on YouTube presenting their hardware design process: The Google Home smart speaker pictured with a crystal and sage smudge stick—things that have been used to access and affect invisible energies. Google’s head of product design says in this moment of the video: “technology will eventually be invisible.” Screenshot by author from (Google 2018)

The widening rift: increasing technological complexity and withdrawal of technology

The technological things that designers in the mid-twentieth century were working with were certainly complex in many ways, but this complexity pales in comparison with what was to come with advances in computation and computational capabilities. Technological advances have marked significant shifts in terms of what technological things could do and how those capabilities were brought to presence through design in ways that could enable effective use. In general, increasing technological complexity means that less and less of the full workings of things is presented to enable effective use. This leads to an expanding rift between what things are and do in a more holistic sense and how they are presented for use, which in turn has implications for ethics and aesthetics.

To further explore this dynamic, we consider here a few substantive technological advances that have significantly changed what technological things are capable of, while presenting new sets of challenges for their design (which is increasingly concerned with the intangible). The aim here is not to give a full description, overview, or history of these developments but rather readings of their implications for the ways in which advanced technological things can be designed to come to presence.Footnote 2

Each of the following subsections contains a brief description of a technological advancement and its broad developments and implications. Selected design examples serve as exemplars to highlight some of the ways design has or has not reacted to the technological advancements.

Computation and programming

  • Physical machinery withdraws behind interfaces

Computers are effectively general-purpose information processing machines that can be configured into more specific ‘machines’ through coded instructions. The development of programming languages made it possible to instruct computers in more abstract, human-readable languages, which are translated (compiled or assembled) into machine code. Object-oriented programming languages in particular foregrounded issues of ontology, as programs are defined on the basis of objects (which often have some relation to a particular aspect of the world that has been rendered as data to be processed for some purpose). This type of code is modular and reusable, and can thus be easily modified, extended, and shared. Packages of code (software) can be separated from the machines on which they are created and run (hardware). Software can be distributed via data storage media.

Programming a computer through human-readable language means that one does not need to have a full grasp of, or engage with, what is actually going on in the machine; and what a computer does is not inherently visible on its user-facing interface. There is always a mediating layer helping us make sense of the machine and the machine’s language. Interfaces must be crafted and coded to enable various functionalities and interactive capabilities, as well as to indicate the status of the system. The design challenge is to render complex computational capabilities usable by humans. The functions of computers and the appearance of their interfaces can be constantly changed. Importantly, this also means that the user’s understanding of what the machine does only has to extend as far as intended use requires, which frees interfaces from the requirement of being technically accurate.
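This layering can be sketched in code. The following minimal illustration is our own (the class and method names are hypothetical, not drawn from any of the products discussed): a single user-facing method exposes only what intended use requires, while the operations behind it remain withdrawn.

```python
class Speaker:
    """A user-facing interface: only 'play' is meant to be seen or used."""

    def play(self, track: str) -> str:
        # All a user needs to know: ask the thing to play something.
        self._decode(track)
        self._amplify()
        return f"Playing {track}"

    # The withdrawn machinery: present in the code, absent at the interface.
    def _decode(self, track: str) -> None:
        pass  # stands in for signal decoding, buffering, codec handling

    def _amplify(self) -> None:
        pass  # stands in for gain control and driver output
```

Nothing in calling `play` reveals, or requires knowledge of, the underscored methods: the interface only has to be ‘accurate’ as far as intended use demands.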

In the late 1950s, after computers were recognized as useful not only for calculations but also for managing, processing and communicating information, mainframe computers were programmed for applications like inventories, accounting, and payrolls. At the time, the creation of interfaces for programming, using notation and symbols instead of machine code, took shape. For example, the ELEA 9003 mainframe computer (Fig. 4) built by Olivetti filled an entire room with all its equipment, while its human-facing interface was designed by Ettore Sottsass “as a kind of electronic landscape” (Sottsass et al. 2017, p. 133). The ELEA’s wires and cables were also cleared from the floor into overhead supply lines, literally moving parts of the computer out of sight and into the background (Mori 2020).

Fig. 4

The ELEA 9003, a mainframe computer made by Olivetti in the late 1950s. Ettore Sottsass designed the mosaic-like workstation interface (photo credits: a, c Elisabetta Mori; b Armin Linke)

While computers were being prototyped for the workplace, smaller and simpler technologies in the form of electric devices like radios, TVs, and kitchen appliances reached into domestic settings, everyday use and mass-market production. The T 1000 Global Receiver, a radio designed by Dieter Rams in 1964 (Fig. 5), shows a tangible interface expressing the functionality of the device in a simplified way, easy to understand and use. The interface can be covered and uncovered with a lid, allowing it to be hidden away when not needed.

Fig. 5

T 1000 Global Receiver, a radio designed by Dieter Rams in 1964 (image by Das Programm)

Advances in computation, programming, and interfaces gave way to technologies withdrawing as physical machines behind their user-facing interfaces. With early computers filling entire rooms with their physical presence, design strategies began to form around obscuring: moving complex parts out of sight and presenting only the essential ones. This resulted in physical machinery effectively withdrawing behind interfaces. The same idea allowed (at first) simple domestic technologies, such as radios or kitchen appliances, to become more widely usable. Essentially, this development opened the way for design to aid in obscuring parts of things in an industrial context. However, the values and principles of design and the primary goal of making things for people to use were still aligned with the ways in which things were presented to consumers. The appearance and true character—the design aesthetics and design ethics—were still close together, yet these shifts paved the way for them to begin to drift apart.

Automation

  • Computational processes withdraw by no longer needing to be always initiated by users

Programming allowed for automation: executing potentially highly complicated sequences of information processing operations with little need for human involvement. Through programming, computers were instructed to carry out sequences of arithmetic or logical operations automatically. Through algorithms computation comes to have more presence and agency in the world (Finn 2017). Among other things, automation made it possible to develop computational technologies that were easier for people to use without advanced skills in computer programming. Computing machines became available that did not require advanced technical skills but rather a more basic competence in installing, configuring, and occasionally updating software. This allowed computers to be utilized in many new areas, including domestic settings.

Besides packaging complex processes into accessible commands, this also reduced the need for users to attend to many processes at all. For instance, maintenance tasks such as software updates used to require both explicit attention and manual labour but turned into something that can be done completely in the background, and therefore very frequently. Updates for earlier personal computing systems were user-initiated, often done by purchasing physical media carrying entire new versions of software to update, for instance, an operating system, and then performing an installation process. Pictured in Fig. 6a–c is a Windows 95 package with 3.5-inch floppy disks inside and installation screenshots from the 1995 system. Software updates have increasingly turned automatic (Fig. 6d, e): they are now delivered via the internet and, in many cases, are no longer user-initiated but are done automatically in the background. In other cases, updates are forced upon users. For instance, WhatsApp notifies users of an available update while the app is in use. This notification comes with a deadline, and if a user does not initiate the update by the given date, the app is no longer usable. Similar tactics are used by Facebook Messenger.
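The trajectory described here, from user-initiated via automatic to forced updates, can be condensed into a small decision sketch. This is our own illustrative code, not taken from any actual product’s update logic; the function name and rules are hypothetical.

```python
from datetime import date

def update_behaviour(update_available: bool, user_initiated: bool,
                     deadline: date, today: date) -> str:
    """Sketch of how update handling has shifted across eras."""
    if not update_available:
        return "nothing to do"
    if user_initiated:
        return "install now"              # the early, user-driven model
    if today < deadline:
        return "install in background"    # automatic: no user attention needed
    return "app blocked until updated"    # forced: unusable past the deadline
```

The last branch captures the deadline tactic described above: past a given date, continued use is conditional on accepting the update.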

Fig. 6
figure6

Updating computational technology has changed over time from being user-initiated using physical CDs towards being automated and even forced

The advances of automation generally allowed, and eventually forced, users to give much less attention to everything a machine does, or even to understand it at all beyond the functionality made usable by the user-facing interface. Less knowledge and skill were required to use computing technologies, and fewer user instructions were provided. By removing many demands on users and making much of what computers do less present, automation enabled more autonomy and agency in technologies and less user control. As a result, use became less a matter of engaging with what a machine does and more a matter of what to use it for. Through automation, computers withdrew as complex machines always in need of instructions and came instead to be present as interfaces providing certain action possibilities. Appearance and true character—the design aesthetics and design ethics—actively drifted apart as consumers lost oversight and control of the things they use.

Networking

  • Resources needed to constitute computational things and enable ongoing functionality withdraw

Computer networks were initially used by the military, and then at research institutions, to send and receive data and communicate through wired networks. Eventually, this advancement shifted towards a global system of interconnected computers and devices enabling networked computation through the internet. With wireless data exchange, connectivity has become ubiquitous in at least urban areas. Applications can now reliably make use of, and even be constituted by, networked resources as well as local ones. As networks reached into domestic settings, connected technologies such as smartphones became access points for controlling other network-enabled devices.

Network connection is now in many cases integrated and even necessary for things to function. Computational things are less and less like traditional stable objects. Instead, they have become more like fluid assemblages—assembled on the fly from a variety of local and networked components and changing continually over time and in response to context (Redström and Wiltse 2018). This means computational things are composed of a variety of components and connections to various platforms and other kinds of infrastructure. They change over time in response to people’s behaviour with and around them. They collect data that they feed back into the networks of which they are a part and to other actors that extract value from all this. Software changes its visual forms and functions dynamically over time and across contexts; information processing abilities change the ways we relate to things and what we expect of them; connectivity changes the ways things relate to each other; and all of this changes our everyday practices in relation to the things in our lives.

Networking machines meant that the actual physical location of resources withdrew—there is no need to know where something is, as long as it can be accessed. Cloud computing services (such as Dropbox, Apple’s iCloud, Google Drive, and Amazon Web Services) enable access to data from anywhere there is a network connection. Data may seem to be located in a virtual space—in ‘the cloud’, which is everywhere and nowhere—but in reality it is saved on hard drives on servers at massive server farms.

Additional computational resources, data, updates, etc. can all be accessed without having to be explicitly provided by the user. It also means that the particular set of resources being used is not present as such during use (although it can become present through its absence if something breaks and does not function properly). This is the experience of cloud computing: one generally does not even know in which part of the world the server holding one’s data is located, only that the data is available when needed.

The advances of networking allowed for the separation of the device carrying the user interface from processing power and data, in a way oddly returning to the initial model of computation based on a mainframe computer with connected terminals. Networked computational devices can thus serve merely as superficial shells or access points, while the main processing activity occurs elsewhere. As a result, the resources needed to constitute computational things and enable ongoing functionality withdrew. What constitutes designed things expanded beyond what is locally in front of the consumers’ eyes and, perhaps, the designers’ eyes as well. This fragmentation and externalization of the components of things further amplified the obscuring tendency of design.

Sensing and actuating

  • Things can monitor and act on the world in ways that do not require human involvement, thus withdrawing into the background

The capacity of technology to sense and act became useful and common, particularly as people were increasingly removed from involvement in what the technologies they use do, and in line with aspirations towards making life easier and more convenient through technology. The automotive sector, for example, used sensing technology early on—first analogue sensing and actuating, then also computational sensing—to indicate low fluid levels, to make wipers start and adjust automatically with rain, or to help park and even drive the car. Sensors are now integrated in countless technologies and routinely used.

While networking made it possible to access data stored in other locations, it also made it possible to send instructions to execute an action at another location. This is the same logic as that of sending a message that will show up on a distant screen, but when combined with actuators it becomes possible to make other kinds of physical actions happen as well. This has become commonplace in smart home systems, where smartphone apps are typically used to control connected devices such as light bulbs. Combined with sensors, much of this can also be automated.
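To make this logic concrete, the following minimal sketch illustrates how a sensor reading can be turned into an instruction destined for a remote actuator. The device names, wake thresholds, and command schema here are entirely hypothetical, and a real system would transmit the payload over a network protocol such as MQTT or HTTP; the sketch only shows the sensing-to-actuation pattern itself.

```python
import json

def make_command(device_id: str, action: str, **params) -> str:
    """Serialize an instruction for a remote actuator (hypothetical schema)."""
    return json.dumps({"device": device_id, "action": action, "params": params})

def automation_rule(lux_reading: float, threshold: float = 50.0) -> str:
    """Sensor-driven rule: switch a (hypothetical) bulb on when ambient light is low."""
    if lux_reading < threshold:
        return make_command("livingroom-bulb-1", "set_state", power="on", brightness=80)
    return make_command("livingroom-bulb-1", "set_state", power="off")

# In a real smart home system this payload would be sent to a hub or cloud
# service, which relays it to the physical device; here we inspect it locally.
print(automation_rule(lux_reading=12.0))
print(automation_rule(lux_reading=200.0))
```

Note that the rule runs without any human in the loop: once the sensor, the rule, and the actuator are connected, the whole chain recedes into the background of use.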

Sensors connected to networked computational devices enable continuous monitoring of the world through data streams, in real time and over time (as a historical record that can be processed). As such, computational things are no longer restricted to what is already provided to them in digital form; they can also actively produce data on their own. Handheld personal devices, such as smartphones and smartwatches, have increasingly sophisticated sensors—GPS, gyroscopes, light sensors, etc.—as well as sensors for monitoring the body by detecting heart rate or movement, not to mention cameras for detecting objects, people’s faces included. Networked computational things increasingly produce data about activities in the world as part of their core functionality.

The world is thus increasingly rendered as data to be sensed, monitored, processed, and acted on. Sensors have become ubiquitous across contexts and domains, deployed under agendas such as the internet of things (IoT) and smart cities. The sensors in general-purpose personal devices are leveraged for data tracking—including voice data, location data, and more—for purposes of prediction and control (by actors able to pay for these results). Data-driven modes of sensing and representation are becoming the dominant ways of finding out about and making sense of the world, and are usually concentrated at the top of (typically profit-driven) platforms (Couldry and Mejias 2019; Zuboff 2019).

The advances of sensing and actuating have turned computational things into entities with a more sophisticated agency in the world—a capacity to sense and act. They can do this in the background without continual instruction, monitoring, or even awareness by humans. The presence and activity of sensors and connected components must be purposefully designed to be brought to attention, and they can also be further obscured through design. While they can be brought to presence, sensing and actuating in general moves these activities into the repertoire of technology rather than humans, and so they withdraw into the background. This withdrawal pushed sensing and actuating activities outside the scope of intended functionality in use and hence enabled design to lose track of them. Design is now used to obscure the details of sensing and actuating activities, especially when they fall outside the scope of user-facing functionality. What data are being captured, produced, and processed, as well as the consequences, are now commonly obscured by design.

Learning and other forms of intelligence

  • Computational representation, ontology, and processes withdraw into a black box that can be observed only in terms of inputs and outputs

With the rise of artificial intelligence (AI) and machine learning in recent years, computational things have become increasingly capable of doing things that require some form of intelligence, often mimicking cognitive functions we commonly associate with our own minds, including problem solving and especially learning. The agenda of artificial intelligence seeks to amplify artificial agency to provide insights beyond what humans are able to achieve alone. Here machines are not restricted to explicit instruction, but can evolve representations through ‘learning’ based on pattern recognition. This means that machines can develop and adapt on their own, on the basis of initial models and parameters in combination with massive amounts of data for training. In the case of deep learning, even the models are developed by the system, and in ways that are only possible to inspect by observing its stimulus–response patterns. In other words, computational systems can now assemble new computational systems, adapting to contexts and evolving over time in response to use through a relatively new and much more extensive kind of machinic agency.
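As an illustration of this input–output opacity, consider a minimal sketch in which behaviour is learned rather than programmed. A toy perceptron (far simpler than deep learning, but structurally analogous) ends up with its behaviour encoded in opaque numeric parameters; the only meaningful way to characterize what it does is to probe inputs and observe outputs.

```python
import random

random.seed(0)

# The desired behaviour (here, logical AND) is never written down as a rule.
# It is induced from examples, ending up distributed across learned weights.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [random.uniform(-1, 1) for _ in range(2)]
b = random.uniform(-1, 1)

def predict(x):
    """Threshold unit: the 'model' is nothing but the numbers w and b."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Simple perceptron training: adjust weights whenever the output is wrong.
for _ in range(100):
    for x, target in data:
        error = target - predict(x)
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        b += 0.1 * error

# Inspecting w and b directly tells us little; in practice such systems are
# characterized only by stimulus–response probing, as done here.
for x, _ in data:
    print(x, "->", predict(x))
```

With millions or billions of parameters instead of three, this interpretive gap between the stored numbers and the induced behaviour is what renders learned systems a black box in practice.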

Today’s market of learning devices with a certain agency for completing tasks in the home is vast. A popular example is the line of Nest devices, which has been acquired by Google. The first device was the ‘Nest Learning Thermostat’, which can be programmed but can also learn on its own. It senses and learns the routines and presence of people in the home and aims to control the heating settings in the most energy-efficient way. Connected to people’s phones, it can tell when people have left their house and shift into energy-saving modes. Interestingly, the original patent schematic of the Nest featured a microphone which at the time supported no apparent functionality and was unknown to buyers and users. It later turned out to be usable for voice commands, or perhaps to ‘listen in’, after a software update was installed. This shows how smart products are being made with functionality that is not even intended to be released until much later via a software update, which effectively changes the thing from what the user originally bought or planned to have in their home (e.g., Fussell 2019). Pierce (2019) calls such products ‘foot-in-the-door’ technologies, after the persuasive compliance technique of the same name.

Other ‘smart’ devices for the domestic environment act similarly. For instance, smart ‘speakers’ such as Google Home are complex voice command devices with an integrated virtual assistant using AI. Built-in microphones are supposedly only listening for ‘hot words’ and commands following them. However, the fact that these devices are continuously monitoring, saving, and learning from data generated by people has raised privacy concerns (Lau et al. 2018). It is not entirely clear what is being recorded by smart speakers’ continuously listening microphones, how the recorded data will be used, or if and how it will be protected. Companies like Amazon or Google say recordings will be used to improve the user experience of their devices, but their disclosures of the purposes for which they collect data are suspiciously vague. Analyses of Amazon Echo data show that 30–38% of “spurious audio recordings were human conversations,” suggesting that they capture audio outside of hot word detection (Ford and Palmer 2019). Important to note is that these technologies are designed to become more invisible through withdrawing in use, as literally stated in Google’s hardware design promotional video (see Fig. 3).
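The hot-word logic can be sketched at the level of already-transcribed utterances. The wake phrase and function names below are hypothetical, and real devices operate on continuous audio rather than text, but the structural point survives the simplification: every utterance must be examined in order to decide whether it contains the hot word; what differs is only where the data is then routed.

```python
HOT_WORD = "ok speaker"   # hypothetical wake phrase

def route_utterance(utterance: str):
    """Decide what a (hypothetical) smart speaker does with one utterance.

    Everything passes through this check; only hot-word matches are
    supposed to be forwarded, while the rest is supposedly dropped.
    """
    text = utterance.lower().strip()
    if text.startswith(HOT_WORD):
        command = text[len(HOT_WORD):].strip()
        return ("command", command)   # forwarded for interpretation
    return ("discarded", None)        # supposed to be dropped locally

print(route_utterance("OK speaker turn on the lights"))
print(route_utterance("so how was your day?"))
```

The privacy question raised above turns precisely on the second branch: whether ‘discarded’ audio is in fact discarded is invisible to the user, and misfires of the matching step are one way conversations end up recorded.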

Another focus of intense development in applications of AI technology is autonomous vehicles, which are viewed as the transportation of the near future. Through sensing and learning mechanisms, self-driving cars perceive their surroundings, interpret this sensory information, and identify navigational paths while following traffic rules and avoiding obstacles. The driverless technology uses sophisticated AI and machine learning, with visual object recognition and deep learning on extensive amounts of data. This technological progression seems to bring about a future in which computers roam the world with a drastically reduced need of us. A related example is delivery drones, which can serve to bring even more efficiency and further reduce dependence on human labour for Amazon, but might also be used to enable the design of more distributed and participatory logistics networks (Davoli et al. 2015).

AI can provide insights not available to human perspectives alone, and it can also reinforce existing biases. The fact that AI systems cannot be prototyped in low fidelity or readily inspected once in operation presents substantial challenges for responsible design and application. In use, these systems can become something like runaway magic: perhaps useful, benign, surprising, or dangerous. They can be weaponized—literally and figuratively—to devastating effect (Eubanks 2017; O’Neil 2016). Data-driven design and the use of data to personalize appearance and performance, in combination with machines that learn and adapt and that are constantly updated, represent a fundamental break with traditional design methodology: previous distinctions between production and consumption, between designing and using, collapse, and the design process becomes constantly ongoing. In turn, computational representation, ontology, and processes withdrew into a black box that cannot be opened, only observed in terms of inputs and outputs. Emerging ontologies and the particularities of this newfound agency in technologies are obscured by design. Facing this complexity is challenging for design: how is a designer supposed to present something that can entirely change and learn to be something new over time? The rift between appearance and true character is being pushed further apart, paving the way for technologies to be two-faced: useful and using.

Data-driven processes and platforms

  • Computational things withdraw in their true character as data-producing, corporate and government surveillance devices

All of the technological advances mentioned in the previous sections involve data in some way. Data are a representation and abstraction of some aspect of the world. They are, famously, “never raw” (Gitelman 2013), but are rather produced through a series of representations that transform local and specific realities within frameworks that can make them comparable and possible to combine into more abstract, general, and portable representations (Latour 1999). Many of the technological platforms that are now effective infrastructures of daily life are designed to both structure and support activities (everything from our social lives to production processes) in ways that produce data about those activities (Alaimo and Kallinikos 2017; Plantin et al. 2018; Srnicek 2017).

The expansion of connected things into every domain of human activity has corresponded to and been driven by expansion of the scope and scale of industrial exploitation of everyday life as its new ‘raw material’, taken without asking for purposes of monitoring, prediction and control for profit (Couldry and Mejias 2019; Zuboff 2019). In this form of production, it is these predictions of human behaviour and means of influencing it that are the product that is eventually sold. ‘Smart’ connected devices are prime channels for production of behavioural data and also delivery of nudges toward certain behaviours, control being the best form of prediction (Zuboff 2019).

In this new economic model based on surveillance through data, much of that useful data is produced through everyday connected consumer products: the smartphones that many people carry with them at all times, but also the expanding range of connected products—anything referred to as “smart,” or falling under an IoT or smart city agenda. Here, the major drivers and intentions of design have thus been almost entirely reconfigured.

Whereas simplicity as a design ideal was once a matter of aiming for a kind of honesty and integrity that could empower people through effective use of things, it has now become a basic requirement for getting data-producing things into people’s lives as effortlessly and unobtrusively as possible, primarily (even if not only) for others’ benefit. What is hidden is now not only technological complexity, but also the presence of other actors monitoring what people do. In other words, connected technologies now serve to mediate relations. These relations typically have the structural dynamics of a one-way mirror, the watched not meant to be aware of the presence, activities, or interventions of the watchers.

The advances of data-driven processes and platforms, and especially the business models that have come to drive their development, have driven perhaps the biggest wedge yet between how things are presented to users and what they actually are and do. While technological complexity was originally concealed to render things usable, what is now concealed is the agenda and mechanisms by which other, hidden actors exploit end users. Of course, the tech giants and other platform companies make much of their commitment to privacy and user integrity—all while their terms of service remain practically impossible to read, data-producing machinery is disguised as interaction mechanisms, and surveillance-based business models remain unchanged.

At this point, design delivers to the users of complex computational things the most extreme simplification of what those things are and do. Agendas and doings deeply integrated in technologies and implicated in their use are obscured by design to a point where interaction is not necessarily about efficiency in use, but about effective acquisition of data and influence on behaviour (Footnote 3).

Summary of analysis

Technological advancements have led to more complexity in everyday technologies, which often serve multiple uses, users, and agendas. While design principles honed decades ago, such as simplicity, have enabled the use of more and more complex technology, they have also led to inherent changes in the choices and responsibilities of design practice. Specifically, what is presented through design and what is obscured through design has changed. The idea that what is obscured is merely those complex aspects of products that users do not need to know about in order to use them effectively has given way to a situation where what is obscured often affects users tremendously without their knowing it (e.g., being tracked, listened to, monitored). Holding on to traditional aesthetics and principles has changed the ethical aspects of, and choices made in, design practice. This rift between what things are as a whole and how they are presented for use has become large, and it is still increasing.

Figure 7 serves as an illustrative overview of our analysis, establishing the concept of the widening rift. It details and summarizes the analysis explored in Sect. 3 on increasing technological complexity, the withdrawal of technology, and its effects on design. It highlights the consequences for the aesthetics and ethics of design.

Fig. 7
figure7

The widening rift between aesthetics and ethics in the design of computational things. A graphical abstract of this article

In what follows, we detail our suggestions for how to deal with this rift in the future through transitions in design and its programs.

Design in transition

We have now established the concept of the widening rift between what computational things are and do and the ways in which they are presented through design as things for use, as well as the underlying concerns and consequences of this rift. In this final section of the article, we take a closer look at how professional design practice and commonly used methods of Human-Centered Design play a key role in perpetuating the developments and dilemmas described. The following section aims to highlight how common design strategies drive both the creation of increasingly complex technology and the means of making it accessible for use, while also supporting a stance of more or less ignoring serious concerns about current technologies. Earlier, we described this as a situation where products are in some ways brilliantly designed and engaging in use, but deeply problematic in other ways. Through critically unpacking and reflecting on this issue, we aim to generate a pathway toward opportunities for a design practice that embraces and grapples with these challenges. Against this backdrop, we propose possibilities to redirect design with the goal of drawing the ethics and aesthetics of design closer together in the current age.

Darkening patterns

Professional design practice came into being in the context of industrialization, in the shift from craft to mass production. Since things were no longer individual custom-made pieces but identical products produced in large quantities for mass consumption, it became important to ensure that what was mass produced would be seen as useful and desirable by the consumers who were meant to buy it. The value of design lay in crafting these prototypes for production through careful attention to material practices and contexts, as well as to how things can fit and accommodate human bodies comfortably during use (ergonomics). Much of design, especially around the 1980s, also came to be about the meaning of things, and especially the lifestyles that they signified. But what might be considered the core of industry-oriented design teaching and practice came in recent decades to coalesce around ideas of human-centred design (HCD): design processes focused on addressing people’s real needs through creating products that fit into and support their activities (and thereby provide a compelling reason to purchase these products). This has driven the development of predominant design methods oriented toward engaging people, or potential future users, through interviews, observations, workshops, testing, and the like.

However, there is growing awareness of significant shortcomings of HCD (e.g., Coulton and Lindley 2019; DiSalvo and Lukens 2011; Forlano 2016, 2017; Frauenberger 2019; Giaccardi 2019; Giaccardi and Redström 2020). As we argue here, the widely adopted designed products we use daily may have been designed in part to produce value for users, yet this goal has been bundled with corporate agendas and desires, data mining operations, choice control, and nudging mechanisms. This is especially the case as technological advancements have increasingly separated function from form, bringing about intangible and immaterial aspects of products. Technology is not what it used to be: technological advancements have given it new roles, competencies, and new forms of materiality. The goals and intentions of humans (or users) have expanded and can perhaps no longer be viewed separately from technology (as is commonly done in HCD). The notions of use and usability take on new meanings given corporate uses of technologies: things designed for humans to use have also come to be designed to use their human users. In other words, things are designed to be in the service of multiple stakeholders—users and others (e.g., corporations driven by their imperative to make a profit).

Providing value for the end user is no longer the primary objective from a business point of view. Rather, the objective is to keep people engaged with devices that can produce data about their behaviour. Use value and experience are in this scenario the means rather than the end. With the bulk of profit being generated by processing behavioural data, commercial design practice is no longer at the heart of value creation for end users. Rather, it is generally in the position of finding new ways to distract, manipulate, control, surveil, and exploit them—all through superficially beautiful, enchanting, simplified, effective things.

While traditional product design can be seen as mediating relations of production and consumption, and as implicated in the capitalist excesses of consumer society in the industrial economy, today’s commercial product and interaction design play a somewhat different role in post-industrial economies. As growing profits through manufacturing goods became increasingly difficult, the digital economy emerged as the new locus of business activity in the information society (Castells 2010; Srnicek 2017). Now it is data—especially behavioural data—that is the raw material fuelling production; only now what is produced are prediction products, with data science as the production machinery (Zuboff 2019).

From a business perspective, the aim is to increase the scope and scale of data production in order to increase the accuracy of the predictions that can be generated—whether of the health outcomes of an insured person, of which ad a person will click, or of whatever else can be of profitable interest. This means that anything that could obstruct that flow of data is potentially problematic—end users very much included. It is in this context that so-called dark patterns of design have been identified: ways of leveraging the most sophisticated knowledge in design to nudge users through activity pathways intended by the business side, often including, for example, accepting the least privacy-preserving data settings.

Given the structural logic of this business model of data-based surveillance for prediction and control, it is data science that produces the real product (predictions) while design ensures the flow of raw material (behavioural data). In other words: the practice of design gives form to (data) mining tools in disguise. It does so by designing the end user-facing sides of products for people to integrate into every part of their lives, perhaps even becoming embedded as a sort of background infrastructure that is taken for granted or even necessary for effective participation in social life. Google’s prediction that “technology will eventually be invisible” (Fig. 3) can thus be read as a description of an essential condition for getting their data-producing products into every corner of everyday life, and in ways that encourage people to not question, inspect, or even be aware of all these things are doing. This is a situation we might describe as an untruth to things (thinking back to the ‘truth to materials’ tenet referenced earlier).

Here we can see in full view the paradox with which we began the paper: that computational things are often so brilliantly designed in some ways, but at the same time so problematic in others. The rift between what things actually are and do and the ways in which they are presented to users has now expanded to monumental proportions; and it seems that human-centred design and design ideals oriented toward simplicity and ease of use are no longer able to address core concerns. As Coulton and Lindley state:

“[D]esigns that interpret HCD’s simplicity axiom to mean that maximizing simplicity (of interface, interaction, and user experience design) is always the best thing, have a contradictory relationship with HCD because delivering simplicity so bluntly often disenfranchises the user” (Coulton and Lindley 2019, p. 4).

Beyond core issues of awareness, consent, and non-exploitation, there is even what might be thought of as a new breed of usability concerns that are no longer possible to resolve through traditional design approaches. For instance, artificial assistants are often designed to be humanoid in order to encourage users to interact with them through natural language, thus facilitating ease of use; but in practice these often end up being more like artificial idiots, unable to live up to expectations of even the most basic human interactional capabilities. Or take GDPR (Footnote 4), one of the most sweeping (if still severely limited and problematic) attempts at regulating the use of personal data: the interaction mechanisms meant to give users the possibility to control their personal data are typically experienced as just bad design, falling short of making present, and allowing people to negotiate, the terms of the relation between them and the company or other entity in question. Design practice has thus in many ways been marginalized: often used to further unsavoury business practices, but no longer central to finding ways to provide value to users as the primary goal (and driver of profit generation).

There is now a critical need for real alternatives—alternative business models certainly, but also alternative visions for what complex and connected computational things could be, and alternative design approaches and programs for working this out. If character and appearance, ethics and aesthetics, have been split apart, what would it mean to try to weave them closer together again—moving to a future where designers can aspire to bring more truth to the things they design in their practice?

Redirecting design: reconciling discrepancies between appearance and character

The widening rift between what things actually are and do and how they are presented to end users ultimately means that the trajectories outlined in this article are not sustainable, practically or ethically. The fact that commercial design practices are not ethical in the deepest sense (notwithstanding the presence of company ethics boards) has resulted in increasing concern with what computational, interactive products actually are and do, and what they do to the people who use them. While it may be possible to develop design further to some degree by extending the dominant and normative ideas, perspectives, and methodologies that have been established, the major issues and matters of concern we have detailed show that current approaches are being exhausted, and that it is time to call for a more radical shift. The key thing to consider critically here is that what can appear—literally—like continuity with earlier best practices of ‘good design’ in fact represents an almost total break with the underlying concerns that originally motivated those aesthetic and ethical qualities. While a main aim of designing in an industry context has always been to sell things, the presentation of designed things and the intention underlying it have changed. Rams’ job at Braun was to design things that sell, but those things were designed to produce value for users, and consumers knew what the things were and what they were doing, because they were presented as such. As we have detailed, this has changed dramatically: how things are presented as things for use is no longer closely tied to what they actually do. The original bond between ethics and aesthetics (what makes something a ‘good’ design in the more general sense) and the values tied to traditional design principles have been lost in the process. As a result, we will need new foundations that radically challenge current best practices and values in design—principles that have been kept while the traditions behind them have been lost.

Bringing ethics and aesthetics closer together again is not trivial. After all, given that design responses to increasing technological complexity have on the whole been reasonable, it is not possible to simply go back and 'undo' what has been done, or to 'close' the rift. But by understanding how this rift between ethics and aesthetics has widened, it becomes possible to move towards real alternatives. What do we need to do next, and how do we get there?

Our purpose here has been to sketch the basic shape of the problem, laying the groundwork for future work that can begin to answer those questions. In this article we have questioned and analysed the status quo of a current dilemma in the design of complex computational things. The improved situation we envision for design practice takes the rift into account and aims to weave ethics and aesthetics back together. In this way, we can recover the intentions and considerations that were the values behind popular design principles for making things usable and simple, while at the same time accounting for the complexity of today's technologies. This requires rethinking what is considered to be the object of design and what constitutes 'good design' in our contemporary context. It is important to return to questions of good design, design competence, and foundations: what constitutes a 'good life' with things over time, and what could be considered 'good' design in the present age?

To develop new understandings, ideals, and, ultimately, guiding principles for design practice, new approaches to design need to be explored. Beyond common contemporary ideals of simplicity, transparency, accessibility, or ease of use, we believe integrity could be a promising angle (see also the bottom of Fig. 7). This means aiming to keep technology accessible while moving towards a higher degree of integrity in the things we design; in other words, what a technology is and does should largely be in line with how it is presented as a thing for use through design.

Conclusion

In this article, we have proposed the concept of a widening rift between what computational things are and do and the ways in which they are presented through design as things for use (the ethics and aesthetics of design), and we have highlighted the consequences of this rift. With this we articulate a real problem in design today. We have discussed how networked computational things are no longer what things once were, and how the approaches to designing them must therefore change too if we are to appropriately care for the values and responsibilities of design practice. While earlier product and interaction design efforts produced relatively stable products for mass production, things are now formed through hybrid materials and ongoing data-driven processes of customization and evolution, serving multiple actors and providing multiple forms of value (Landwehr et al. 2019; Redström and Wiltse 2018).

Perhaps one of the most striking aspects of the issues we have arrived at is how they land in the social world of politics, business, culture, and individual choice. There have indeed been many valid calls to address them through each of these avenues, and effective and substantive remedies are certainly needed along all of these lines. However, we argue that these are also design challenges: challenges that require re-imagining the relations between what (computational) things are and how they come to presence, in order to re-knit the connections between ethics and aesthetics. We need to rethink what 'good design' really is in the present context and what can constitute good lives with things over time. This is a call to bring integrity back to design practice.

Availability of data and material

Not applicable.

Code availability

Not applicable.

Notes

  1. An image search on the web for “Rams’ influence on Apple” shows this well.

  2. For a more thorough analysis of the implications for (interface) design as complexity increases, see, for example, Janlert and Stolterman (2010, 2015, 2017).

  3. An interesting mapping example to point to is Crawford and Joler’s (2018) anatomical map of Amazon Echo data, human labor, and planetary resources, which beautifully shows the vast complexity of the system behind such a technology.

  4. GDPR is the abbreviation for the General Data Protection Regulation 2016/679, a European Union regulation on data protection and privacy.

References

  1. Alaimo C, Kallinikos J (2017) Computing the everyday: social media as data platforms. Inf Soc 33(4):175–191. https://doi.org/10.1080/01972243.2017.1318327

  2. Bratton BH (2015) The stack: on software and sovereignty. MIT Press, Cambridge

  3. Braun Audio (2020) https://www.braun-audio.com/en-GLOBAL. Accessed 16 June 2020

  4. Castells M (2010) The rise of the network society (2nd ed., with a new pref). Wiley-Blackwell, Chichester

  5. Couldry N, Mejias UA (2019) The costs of connection: how data is colonizing human life and appropriating it for capitalism. Stanford University Press, Stanford

  6. Coulton P, Lindley JG (2019) More-than human centred design: considering other things. Des J 22(4):463–481. https://doi.org/10.1080/14606925.2019.1614320

  7. Crawford K, Joler V (2018) Anatomy of an AI System. Anatomy of an AI System website: http://www.anatomyof.ai. Accessed 31 Jan 2020

  8. Davoli L, Wiltse H, Redström J (2015) Trojans & drones: materializing possibilities for transforming industrial infrastructures. In: Presented at the RTD conference 2015.https://doi.org/10.6084/m9.figshare.1328010.v1

  9. DiSalvo C, Lukens J (2011) Nonanthropocentrism and the Nonhuman in design: possibilities for designing new forms of engagement with and through technology. In: Foth M, Forlano L, Satchell C, Gibbs M (eds) From social butterfly to engaged citizen: urban informatics, social media, ubiquitous computing, and mobile technology to support citizen engagement. MIT Press, Cambridge, pp 421–436

  10. Eubanks V (2017) Automating inequality: how high-tech tools profile, police, and punish the poor, 1st edn. St. Martin’s Press, New York

  11. Finn E (2017) What algorithms want: imagination in the age of computing. MIT Press, Cambridge

  12. Ford M, Palmer W (2019) Alexa, are you listening to me? An analysis of Alexa voice service network traffic. Pers Ubiquit Comput 23(1):67–79. https://doi.org/10.1007/s00779-018-1174-x

  13. Forlano L (2016) Decentering the human in the design of collaborative cities. Des Issues 32(3):42–54. https://doi.org/10.1162/DESI_a_00398

  14. Forlano L (2017) Posthumanism and design. She Ji 3(1):16–29. https://doi.org/10.1016/j.sheji.2017.08.001

  15. Frauenberger C (2019) Entanglement HCI The Next Wave? ACM Trans Comput Hum Interact 27(1):2:1-2:27. https://doi.org/10.1145/3364998

  16. Fussell S (2019) The microphones that may be hidden in your home. The Atlantic website: https://www.theatlantic.com/technology/archive/2019/02/googles-home-security-devices-had-hidden-microphones/583387/. Accessed 10 Feb 2021

  17. Giaccardi E (2019) Histories and futures of research through design: from prototypes to connected things. Int J Des 13(3):139–155

  18. Giaccardi E, Redström J (2020) Technology and more-than-human design. Design Issues 36(4). https://pure.tudelft.nl/portal/files/69227577/TechnologyAndMoreThanHumanDesign_Preprint.pdf

  19. Gitelman L (ed) (2013) “Raw data” is an oxymoron. The MIT Press, Cambridge

  20. Google (2018) Ivy Ross + Hardware Design. Made by Google YouTube channel. https://www.youtube.com/watch?v=10ppdFQNl4s

  21. Google Store (2020) Google Store website: https://store.google.com. Accessed 16 June 2020

  22. Griffiths A (2019) Braun Audio LE speakers revive a classic Dieter Rams design. Dezeen website: https://www.dezeen.com/2019/11/13/braun-audio-le-speakers-dieter-rams-design/. Accessed 2 June 2020

  23. Heidegger M (1927) Sein und Zeit, 19th edn (2006). Max Niemeyer Verlag, Tübingen

  24. Janlert L-E, Stolterman E (2010) Complex interaction. ACM Trans Comput Hum Interact 17(2):1–32. https://doi.org/10.1145/1746259.1746262

  25. Janlert L-E, Stolterman E (2015) Faceless interaction—a conceptual examination of the notion of interface: past, present, and future. Hum Comput Interact 30(6):507–539. https://doi.org/10.1080/07370024.2014.944313

  26. Janlert L-E, Stolterman E (2017) Things that keep us busy: the elements of interaction. The MIT Press, Cambridge

  27. Landwehr M, Borning A, Wulf V (2019) The high cost of free services: problems with surveillance capitalism and possible alternatives for IT infrastructure. In: Proceedings of the fifth workshop on computing within limits. ACM, New York, pp 3:1–3:10. https://doi.org/10.1145/3338103.3338106

  28. Latour B (1999) Pandora’s hope: Essays on the reality of science studies. Harvard University Press, Cambridge

  29. Lau J, Zimmerman B, Schaub F (2018) Alexa, are you listening? Privacy perceptions, concerns and privacy-seeking behaviors with smart speakers. In: Proceedings of the ACM on human-computer interaction, 2(CSCW), pp 102:1–102:31. https://doi.org/10.1145/3274371

  30. Mori E (2020) Olivetti ELEA Sign System: Interfaces Before the Advent of HCI. IEEE Ann Hist Comput 42(4):24–38. https://doi.org/10.1109/MAHC.2020.3027581

  31. Morris W (1882) Hopes and fears for art. Ellis & White. https://www.marxists.org/archive/morris/works/1882/hopes/hopes.htm#chap-4

  32. O’Neil C (2016) Weapons of math destruction: how big data increases inequality and threatens democracy, 1st edn. Crown, New York

  33. Pierce J (2019) Smart home security cameras and shifting lines of creepiness: a design-led inquiry. In: Proceedings of the 2019 CHI conference on human factors in computing systems. Association for Computing Machinery, New York, pp 1–14. https://doi.org/10.1145/3290605.3300275

  34. Plantin J-C, Lagoze C, Edwards PN, Sandvig C (2018) Infrastructure studies meet platform studies in the age of Google and Facebook. New Media Soc 20(1):293–310. https://doi.org/10.1177/1461444816661553

  35. Rams D (2017) Ten principles for good design. In: de Jong C (ed) Ten principles for good design: Dieter Rams. Prestel, Munich, pp 92–133

  36. Redström J, Wiltse H (2018) Changing things: the future of objects in a digital world. Bloomsbury Academic, London

  37. Ruskin J (1849) The Seven Lamps of Architecture. Project Gutenberg Ebook released 2011. https://www.gutenberg.org/files/35898/35898-h/35898-h.htm

  38. Shklovski I, Mainwaring SD, Skúladóttir HH, Borgthorsson H (2014) Leakiness and creepiness in app space: Perceptions of privacy and mobile app use. In: Proceedings of the SIGCHI conference on human factors in computing systems. Association for Computing Machinery, Toronto, pp 2347–2356. https://doi.org/10.1145/2556288.2557421

  39. Sottsass E, Thomé P, Picchi F, Sudjic D, King E, Zanot F, Terragni E et al (2017) Sottsass. Phaidon, London

  40. Srnicek N (2017) Platform capitalism. Polity, Cambridge

  41. The Disappearing Computer (2004) Vienna, p 80. https://www.ercim.eu/EU-NSF/DC.pdf

  42. Weyenberg A (2017) The ethics of good design: a principle for the connected age. https://medium.com/swlh/dieter-rams-ten-principles-for-good-design-the-1st-amendment-4e73111a18e4. Accessed 2 June 2020

  43. Zuboff S (2019) The age of surveillance capitalism: the fight for a human future at the new frontier of power, 1st edn. PublicAffairs, New York

Acknowledgements

This work was supported by the Marianne and Marcus Wallenberg Foundation, Stockholm, Sweden [project Grant 2017.0058]. We thank the reviewers for their comments and Maria Göransdotter and William Odom for comments on earlier versions of the article.

Funding

Open access funding provided by Umeå University. This work was supported by the Marianne and Marcus Wallenberg Foundation, Stockholm, Sweden [project Grant 2017.0058].

Author information

Contributions

Not applicable.

Corresponding author

Correspondence to Sabrina Hauser.

Ethics declarations

Conflict of interest

Not applicable. 

Ethics approval

Not applicable.

Consent to participate

Not applicable.

Consent for publication

Not applicable.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Hauser, S., Redström, J. & Wiltse, H. The widening rift between aesthetics and ethics in the design of computational things. AI & Soc (2021). https://doi.org/10.1007/s00146-021-01279-w

Keywords

  • Aesthetics
  • Ethics
  • Industrial design
  • Interaction design
  • Design theory