
Ambient Intelligence and Persuasive Technology: The Blurring Boundaries Between Human and Technology

  • Peter-Paul Verbeek
Open Access
Original Paper


The currently developing fields of Ambient Intelligence and Persuasive Technology bring about a convergence of information technology and cognitive science. Smart environments that are able to respond intelligently to what we do, and that even aim to influence our behaviour, challenge the basic frameworks we commonly use for understanding the relations and role divisions between human beings and technological artifacts. After discussing the promises and threats of these technologies, this article develops alternative conceptions of agency, freedom, and responsibility that make it possible to better understand and assess the social roles of Ambient Intelligence and Persuasive Technology. The central claim of the article is that these new technologies urge us to blur the boundaries between humans and technologies at the level of our conceptual and moral frameworks as well.


Keywords: Agency · Ambient intelligence · Autonomy · Ethics of technology · Human-technology relations · Intentionality · Persuasive technology · Responsibility


Within the field of Converging Technologies, the convergence of information technology and cognitive science plays a special role. Rather than playing out at the nanoscale, technologies like Ambient Intelligence and Persuasive Technology have a more mundane appearance. Yet they constitute a radically new category of technologies and introduce novel relationships between human beings and technological artifacts. Ambient intelligence and persuasive technology blend insights from the behavioural sciences with advanced possibilities from the field of information technology. The miniaturization of electronic equipment and improvements in wireless communication between devices have led to the design of so-called ‘smart environments’ ([7, 19]). These environments register what happens around them and respond intelligently to this information. The technology at work here is often invisible and is, furthermore, painstakingly attuned to human cognitive processes. That is why it has been named ‘ambient intelligence’. When combined with cognitive and behavioural science, such technologies can also be used to deliberately influence the ideas, intentions, and behaviour of human beings.

Examples of ambient intelligence appeal to the imagination. In the field of elderly care, detectors can sound the alarm if someone falls out of bed or tries to leave the house at an unusual time. The walls can, literally, grow ears by responding to certain sounds in rooms, such as a cry for help or a desperate question as to where someone has left his or her keys (cf. [20]). Toilets can test urine and stools automatically in order to spot health problems quickly. And the so-called Life Shirt System is an intelligent jacket that measures all sorts of bodily functions and sends the data collected to health care institutions [30]. There are, however, countless applications conceivable outside the care sector too. By equipping products with Radio Frequency IDentification (RFID) chips (inexpensive chips whose contents can be read wirelessly), refrigerators are able to recognize the foodstuffs they carry, so that they can help people write their shopping lists, give feedback on eating habits, and make suggestions for menus. Cameras can automatically spot deviant behaviour in the event of disturbances so that measures can rapidly be taken to protect public order. A mobile telephone with a Global Positioning System aids parents in tracking down their children if they are lost or do not come home on time. And equipment in houses can respond to the presence or even the moods of people in them by, for instance, adjusting the intensity of the lighting, allowing incoming telephone calls through or not, or making coffee when someone wakes up.
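The refrigerator scenario described above boils down to a simple inventory comparison: the appliance counts the RFID-tagged products it contains and checks them against a desired stock. The following Python sketch illustrates that logic; the tag data and the household stock profile are invented for illustration, not taken from any actual product.

```python
# Hypothetical sketch of an RFID-equipped refrigerator: count tagged
# products, compare against a desired stock, and derive a shopping list.

DESIRED_STOCK = {"milk": 2, "eggs": 6, "butter": 1}  # assumed household profile

def read_rfid_tags(scanned_tags):
    """Count products from a list of (tag_id, product_name) reads."""
    inventory = {}
    for _tag_id, product in scanned_tags:
        inventory[product] = inventory.get(product, 0) + 1
    return inventory

def shopping_list(inventory, desired=DESIRED_STOCK):
    """Suggest items whose stock has fallen below the desired level."""
    return {item: want - inventory.get(item, 0)
            for item, want in desired.items()
            if inventory.get(item, 0) < want}

tags = [("a1", "milk"), ("b2", "eggs"), ("b3", "eggs")]
print(shopping_list(read_rfid_tags(tags)))  # milk, eggs and butter are short
```

The feedback-on-eating-habits and menu-suggestion features mentioned in the text would build further layers on top of this same inventory data.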

The impact of such intelligent environments will become even greater when the interaction with users is explicitly designed on the basis of insights from the behavioural sciences. This is currently taking place under the umbrella term of ‘persuasive technology’ [12]. Let me give a few examples. The Persuasive Mirror was designed to deform someone’s likeness on the basis of data about his or her lifestyle and recent behaviour, as visual feedback on the health risks of the way that person lives. The HygieneGuard for children’s toilets reminds children to wash their hands after using the toilet. Incidentally, there are also a great many examples of persuasive technology that are not related to ambient intelligence, like computer games that aim to interest players in the American army, or the EconoMeter in cars, which gives drivers feedback on fuel consumption in relation to their driving behaviour.

Things have always had an influence on people: from ditches that make areas inaccessible to speed bumps that make motorists slow down where it is safer to do so. But ambient intelligence and persuasive technology enable a much more subtle, far-reaching form of influence. They occupy a radically new position in the realm of human experience. While ‘classical’ technologies are encountered from a configuration of ‘using’ technology, these technologies merge with our environment—thus mirroring technologies at the nanoscale, which typically merge with our interior. Often without us noticing them explicitly, they actively interfere with our lives, in tailor-made ways. Some do so in compelling ways, and others by means of persuasion or seduction; some do so visibly, while others remain largely unnoticed.1 The combination of ambient intelligence and persuasive design is especially relevant here, since such technologies interact with our behaviour in smart, interactive ways with the deliberate aim of influencing it—sometimes even without us being aware of this.

The desirability of such new interactions with technologies does not always go without saying. If people are influenced by technology behind their backs, who, for example, decides what forms of influence are acceptable and what are not? How can we still hold people responsible for their actions if these actions can be partly ascribed to the technology that has influenced them? Is democratic supervision of the development and use of such technology possible? And are people still able to withdraw from this influence? Answering these questions is a complex matter because it requires a shift in our symbolic order. Ambient intelligence and persuasive technology challenge the boundary that we usually perceive between humans and technology. Because of the intricate connections they establish between humans and technologies, as I will argue, they urge us to rethink the concepts of agency—the capacity to act—and responsibility.

Promises and Threats

Ambient Intelligence

Ambient intelligence is not science fiction but an actual reality that will, bit by bit, pervade all aspects of our lives. We are already used to shop doors that slide open automatically as we walk in, and to detection systems that keep an eye out for fire. The step to more comprehensive intelligent environments is only a small one. Typical of such environments is their interactivity, as well as their total or partial invisibility. Intelligent environments consist of a continually communicating network of devices that are permanently in contact with the environment and respond to it actively and on their own initiative. This contact may be realized by microphones, cameras, infrared sensors or scanners that can read RFID chips2 without having to be connected to them by wires.
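The architecture sketched here, a network of devices that respond to their surroundings on their own initiative, can be pictured as a publish/subscribe loop: sensors report events, and handlers subscribed to those events react without being asked. The toy Python sketch below illustrates the pattern; every device, event and response name is invented for illustration.

```python
# Toy event dispatcher for a 'smart environment': sensors publish events,
# subscribed handlers respond on their own initiative.

handlers = {}

def on(event_type):
    """Decorator that subscribes a handler to one type of sensor event."""
    def register(fn):
        handlers.setdefault(event_type, []).append(fn)
        return fn
    return register

@on("fall_detected")
def sound_alarm(event):
    return f"alarm: fall in {event['room']}"

@on("rfid_read")
def log_item(event):
    return f"logged item {event['tag']}"

def dispatch(event):
    """Deliver an event to every handler subscribed to its type."""
    return [fn(event) for fn in handlers.get(event["type"], [])]

print(dispatch({"type": "fall_detected", "room": "bedroom"}))
```

The interactivity the text emphasizes lives in the dispatch loop: no user issues a command; the environment itself decides when a handler runs.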

In its evaluation of ambient intelligence, the Information Society Technologies Advisory Group (ISTAG)—the most important advisory body of the European Commission for policy in the field of ICT—focused on the increased usability that often results from ambient intelligence. It gives users more influence, prevents complicated interaction with technology and enables more efficient services ([17], 1). The way Aarts and Marzano elaborate the concept of ambient intelligence, though—which broadly describes the Philips approach—understands ambient intelligence not so much in technological terms as in social terms ([1, 2]; cf. [24]). For them, it is important that a technological system has a form of social intelligence: it should be capable of intelligent interaction with users.

Aarts and Marzano distinguish five layers in this interaction, each of which builds on the previous ones. Firstly, there is the layer of embedding: ambient intelligence is embedded in the environment, both in the physical sense (hidden in walls, clothing and packaging) and in the social sense: it is possible to communicate with it in a ‘natural’ way, for example by means of movement or speech. Secondly, this technology is aware of its environment: it responds to what happens around it by detecting movement, reading RFID chips, recognizing speech and so on. Shops can, for instance, largely automate their payment and stock systems if all products are equipped with RFID chips that can be read automatically at the checkout. The third layer is that of personalization. Ambient intelligence can draw up or retrieve a person’s profile and set up interaction with the technology that is tailor-made to suit the person in question. An example is a refrigerator that forms a picture of someone’s eating pattern and subsequently makes suggestions for the shopping list, possibly with dietary advice thrown in. Fourthly, this technology has the capacity to adjust: it not only detects its environment but adapts to it in a personalized manner. The intelligent refrigerator in the example might well adjust its menu suggestions to the time of year. The fifth and last layer of ambient intelligence is that of anticipation. This technology can not only respond to its environment but also think ahead, like a car that anticipates the movements of other road users and adapts its speed automatically if they suddenly brake, accelerate or switch lanes.
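To make the layered structure concrete, the five layers can be read as successive stages of one interaction cycle. The Python sketch below models them that way; the embedding layer, being physical and social rather than computational, appears only as a comment. The class, profile data and responses are hypothetical illustrations, not an actual ambient-intelligence API.

```python
# Minimal sketch of the five Aarts/Marzano layers as one interaction cycle.
# Layer 1, embedding, is the (non-computational) fact that the system is
# hidden in the environment and addressed in a 'natural' way.

class AmbientSystem:
    def __init__(self, profile):
        self.profile = profile  # personalization: a stored user profile

    def sense(self, event):
        """Layer 2, awareness: detect what happens in the environment."""
        return {"event": event}

    def personalize(self, context):
        """Layer 3, personalization: tailor the interaction to the user."""
        context["user"] = self.profile["name"]
        return context

    def adapt(self, context, season):
        """Layer 4, adjustment: adapt the response to circumstances."""
        context["season"] = season
        return context

    def anticipate(self, context):
        """Layer 5, anticipation: act before being asked."""
        if context["event"] == "wakes_up":
            return f"start coffee for {context['user']} ({context['season']} blend)"
        return "do nothing"

system = AmbientSystem({"name": "Anna"})
ctx = system.adapt(system.personalize(system.sense("wakes_up")), "winter")
print(system.anticipate(ctx))
```

Each method consumes the output of the previous one, mirroring the claim that each layer builds on those before it.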

Persuasive Technology

Persuasive technology adds yet another step to these possibilities. Here, intelligent systems and environments are explicitly deployed to influence human behaviour: to persuade people to act in a particular way. The art of persuasion has been recognized for millennia. From the rhetoricians and sophists of ancient Greece to the spin doctors and advertisers of today, people have developed techniques for persuading others of particular standpoints, for inducing them to carry out certain actions or, on the contrary, to refrain from them. In the twentieth century, the art of persuasion became the object of behavioural scientific research. Behaviour is now influenced not only through the form of the message but also on the basis of the characteristics of the receiver. By combining an understanding of how behavioural influence works with the specific possibilities provided by information and communication technology, new leeway has arisen for the design and application of technologies that encroach deeply on our everyday activities and choice processes, and even on our ethical decision making (cf. [12]).

The FoodPhone, to give an example, is a specific application of camera-equipped mobile telephones that is supposed to help obese people lose weight. If you take a photograph of everything you eat and send it to a central number, you receive detailed feedback on the number of calories you have eaten, so that you can relate this to your calorie consumption throughout the day. The Baby Think It Over is a doll that can be used in educational programmes to prevent teenage pregnancies. The doll gives a realistic picture of the amount of care and attention a new-born baby needs throughout the day and night, and tries in this way to motivate teenagers, from the inside, not to become pregnant at too young an age. The Persuasive Mirror mentioned earlier gives feedback on the health implications of recent behaviour by extrapolating someone’s mirror image into the future.
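The persuasive mechanism in the FoodPhone, as described, amounts to totalling reported meals and feeding the balance back to the user. The Python sketch below captures only that feedback loop; the daily budget of 2000 kcal is an illustrative assumption, not a figure from the source or from the actual service.

```python
# Hypothetical sketch of the FoodPhone feedback loop: meals reported
# during the day are totalled and compared with an assumed calorie budget.

DAILY_BUDGET_KCAL = 2000  # illustrative value, not from the source

def feedback(meals_kcal, budget=DAILY_BUDGET_KCAL):
    """Return a persuasive feedback message for a list of meal calories."""
    total = sum(meals_kcal)
    remaining = budget - total
    if remaining >= 0:
        return f"{total} kcal eaten, {remaining} kcal left today"
    return f"{total} kcal eaten, {-remaining} kcal over budget"

print(feedback([450, 700, 600]))
```

The persuasion lies not in the arithmetic but in confronting the user with it after every meal, which is exactly the point the article goes on to examine.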

Social Impact

There is no doubt that the social impact of ambient intelligence and persuasive technology will be enormous [9]. The possible futures relating to these technologies are both utopian and dystopian, as is the case with most technologies [10]. On the one hand, the promise is that ambient intelligence will recede into the background so that human beings again become central. Because of its interactive character, moreover, we should finally have a technology at our disposal that adjusts itself to humans, instead of humans having to adjust to the technology. In the meantime, we seem to be surrounded by beneficial applications: cameras that automatically monitor safety in public spaces, the automatic administration of medication in hospitals, technologies that help us find a healthy lifestyle, safety provisions in elderly people’s homes that enable them to live there longer. Provided they are well programmed, these technologies promise us a glorious new world.

On the other hand, there are dangers implicit in these technologies. Because of their interactive character, ambient intelligence technologies collect a lot of information about their users, and consequently form a new type of threat to our privacy. Furthermore, these technologies take over responsibilities that have so far been the domain of people, and this does not necessarily happen safely and reliably. These are, however, aspects that apply to practically every new technology. More fundamental, and more specific to ambient intelligence and persuasive technology, is the question of what happens to human freedom and responsibility here. If our environment starts to respond intelligently to us and to take our decisions for us, will people not gradually lose control over their own lives? Will we still take responsibility, and be held responsible, for our deeds? Are there still ways out of the controlling environments described? And what happens if persuasive technology consciously starts to influence our moral considerations? Do we want technology educating us? And who will then be responsible for the content of this education?

The Boundary Between Humans and Technology

Ambient intelligence and persuasive technology challenge our dominant cultural frameworks concerning the differences and relationships between humans and technologies, or rather: between people and things. On the one hand, technological environments respond to people with a form of intelligence that is usually only ascribed to people; on the other, these technologies have such a profound influence on human actions that the question looms of who or what is ultimately the actor here. Although people are generally seen as active and intentional, and things as inanimate and mute, new technologies seem to urge us to cross this boundary. After all, these technologies take decisions, respond to their environments and interfere intensively with our behaviour.

In order to gain a closer understanding of the relations between humanity and technology, it is useful to use the distinction between the ‘taming’ and ‘breeding’ of human beings made by Peter Sloterdijk in his controversial 1999 lecture Regeln für den Menschenpark (Rules for the Human Zoo). According to Sloterdijk [21], whereas the humanist tradition has repeatedly tried to ‘tame’ the human, that is, to cultivate it by means of texts, the most recent developments in, for example, biotechnology focus on ‘breeding’ the human.

Sloterdijk uses this distinction to show that the humanist project of educating humans with the aid of persuasive texts has been superseded because, these days, humans are shaped in a technological, and thus posthumanist, manner. We already have a large number of means at our disposal to shape our progeny explicitly, and instead of standing aloof from these means, we should acknowledge that we have them and utilize them responsibly. Sloterdijk, however, associates the activity of taming exclusively with a humanism that wants to debestialize the human, and that is overcome in the posthumanist activity of breeding. Yet, ambient intelligence and persuasive technology show that there are also non-humanistic forms of taming. Taming is still alive and well; it merely takes place with the aid of technology, and embodies another form of posthumanism than breeding.3

The taming effect of ambient intelligence and persuasive technology lurks primarily in their interference in human intentionality: they help to shape the intentions of people to act in a specific fashion. Those who adapt their lifestyle because a Persuasive Mirror has repeatedly confronted them with the potential consequences of continuing on the same old lines are not taking a fully autonomous decision but are allowing themselves to be educated by technology. In this case, human intentions become interwoven with those of technology. The influence of this mirror entails more than simply conditioning human behaviour; it helps to shape the interpretations on the basis of which human beings make intentional decisions.

On the face of it, it might seem absurd to connect technology with intentionality. Intentions, after all, require consciousness, and objects simply do not have it. Nevertheless, technologies not only enable people to carry out actions and have experiences that would hardly be possible without them, if at all, but they also shape the way in which people act and experience reality [23]. They are not neutral instruments or intermediaries but active mediators of the relationships between people and reality. This already applies to low-tech artefacts such as speed bumps, which help determine how fast we drive, but it applies all the more, and in a highly specific manner, to high-tech artefacts such as intelligent environments and persuasive information technology. The influence exerted by this latter group of technologies is tailor-made, based on insights from the behavioural sciences; and it is interactive too.

The active role played by technology does not imply that technologies have intentions as people do; after all, they cannot purposefully do anything. Their lack of consciousness does not, however, alter the fact that technologies can have intentions in the original, literal sense of the Latin word intendere, which means ‘to give direction’: technologies give direction to someone’s actions or consciousness. From this point of view, the intentionality of technologies must be sought in their directing or controlling role in people’s actions and experiences. Technological mediation can thus be seen as a specific, material form of intentionality. By mediating the relationship between humans and reality, technologies give direction to people’s actions and experiences.4 Ambient intelligence does so in a specific manner, by interacting with users in an artificially intelligent way.

What does this role of ambient intelligence and persuasive technology in human intentionality signify? Are people becoming a mere extension of technology? The promise of disburdenment and of being freed from troublesome tasks seems to turn into a threat to our freedom and responsibility here. Yet that does not, by definition, have to be the case, and this is where the exciting ethical questions and the points of application for political decision making and policy arise. People are not in fact fully at the mercy of technologies. The ‘material intentionality’ of ambient intelligence and persuasive technology cannot exist without the intentionality of human beings. On the one hand, these technologies can only exert their influence within the context of the practices in which people use them and fit them into their existence. In themselves they are nothing; strictly speaking, it does not even make sense to speak of ‘a technology’ in isolation. On the other hand, these technologies are always designed, and their design always forms a reflection of human intentions.

This combination of people’s intentions and the ‘material intentionality’ of technologies determines the technologically mediated intentionality that ultimately comes about, and that consequently has a hybrid character: partly human and partly nonhuman. The subjects who act and take decisions are never purely human but a complex amalgam of human and technology. Driving more economically due to an EconoMeter and eating differently as a result of using the FoodPhone cannot be seen as purely human actions any more than they can be seen as fully technologically driven behaviour. In effect, they are the actions of hybrids that are part human and part technology, in which the two components shape one another. It appears that moral decision making can be a joint affair concerning both humans and technology.

Ambient intelligence and persuasive technology therefore blur the boundaries between humans and technology. In order to show how that happens, we need to interpret posthumanism not so much as a way to bypass man (Homo sapiens) but as a way to bypass humanism as a specific approach to the human being. In our technological culture, our humanitas is not only shaped by ideas, but also by the arrangements of the material reality in which we live. This has tremendous ethical implications. On the above vision, human morality does not originate solely from a consciousness situated in a physical setting, but also, and mainly, from the practical activities in which people, as physical and conscious beings, are involved and in which technologies play a mediating role. Behaviour influenced by technology, then, is not amoral; it is pre-eminently the place where morality is located in our technological culture.

Freedom and the Place of Morality


Ambient intelligence and persuasive technology have an ambivalent relationship with human freedom. Whereas in many cases they have been designed to create freedom, as they quietly relieve us of all sorts of tasks, they also form a threat to this very freedom, because they influence and control us. A system that automatically administers medicine in a hospital, based on automatic measurements of bodily functions, gives nurses and doctors less leeway than a protocol does. And a bathroom mirror that continually confronts someone with a grey, aged face after he or she has enjoyed a slap-up evening out with friends will in most cases eventually lead to a behavioural change that would otherwise not have taken place.

A lot of cases are conceivable in which this intervention in one’s freedom is not particularly controversial. But in other cases the restriction of one’s freedom might seem less desirable. Automatic speed limitation in cars is a good example of just such a situation. By establishing the location of a vehicle with the aid of GPS technology and subsequently limiting its speed to the maximum allowed at that location, this system forces people to do something that is, in itself, not particularly contentious, namely to keep to the law. However, the way in which this takes place is meeting with opposition on many sides, because people can no longer freely choose to keep to the law, but are forced to do so as slaves to technology. As persuasive technologies acquire increasing influence over us, unnoticed and in the background, the question will certainly arise as to whether we are still able to freely choose what we want to do and how we want to run our lives. It looks as though the Big Brother scenario so far only described in dystopian novels is becoming reality here.
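The control logic described here, locating the vehicle, looking up the limit for that location, and capping the commanded speed, can be sketched in a few lines of Python. The zones, limits, and map-matching stand-in below are hypothetical; a real Intelligent Speed Adaptation system would derive them from GPS fixes and digital road maps.

```python
# Minimal sketch of GPS-based speed limitation: look up the limit for the
# vehicle's current zone and cap the requested speed accordingly.
# All zones and limit values are invented for illustration.

SPEED_LIMITS = {"school_zone": 30, "urban": 50, "motorway": 120}  # km/h

def locate(gps_position):
    """Stand-in for map matching: map a GPS position to a zone name."""
    return gps_position["zone"]

def limited_speed(requested_kmh, gps_position):
    """Return the speed the system actually allows the car to drive."""
    limit = SPEED_LIMITS[locate(gps_position)]
    return min(requested_kmh, limit)

print(limited_speed(70, {"zone": "urban"}))      # capped to the urban limit
print(limited_speed(70, {"zone": "motorway"}))   # left unchanged
```

The single `min` call is the whole intervention, which is precisely what makes it contentious: the driver's request is silently overruled whenever it exceeds the local limit.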


This scenario becomes even more uncomfortable when, on top of everything else, ambient technology concerns itself with our morality. And this, actually, is often the case with persuasive technology. When technology begins to influence our moral choices, the moral character of our actions seems to disappear. A human action carried out under the influence of technology is more likely to be qualified as controlled behaviour than as a moral action. And that arouses opposition.

The resistance to ‘moralizing’ technologies is generally supported by two types of argument. Firstly, there is the fear that they endanger human freedom of choice, as a result of which democracy will deteriorate into technocracy ([3, 4], pp. 28–31). Indeed, if everyone were controlled by technology, society would change into a technocratic complex in which moral problems were solved by behaviour-influencing devices instead of by morally responsible people. The second argument is that of immorality or amorality. Actions not originating from human free will but induced by technology cannot be seen as ‘moral’. On the contrary, behaviour-controlling technology encourages a form of moral laziness that can form a serious threat to the moral level of society.

This wary reaction is understandable. After all, when it comes to moral decisions and the moral quality of actions, persuasive technology provides a kind of instant morality: people delegate moral decisions to technology so that they no longer have to take them themselves. As the American philosopher of technology, Albert Borgmann, puts it, a sort of ‘commodification’ of morality seems to be taking place here. In his opinion, commodification is the principal property of our technological culture: things we initially had to go to a lot of trouble to get are now available at the touch of a button [8]. Persuasive technologies seem to be taking a new step in this process of commodification. Here the capacity for moral reflection, which is not the least of human capabilities, seems to be swapped for a voluntary exposure to influence from technology. In this case, if the mind is willing, but the flesh is weak, people choose not only to have their flesh influenced, but also their minds. A part of our conscience is deliberately placed in the material environment, and that environment forms not only the background of our existence, but educates us too.

The dystopian picture of a society that is dictated to by technology and that makes people slaves to devices looms up here in all its intensity. Morality set in technology has always existed, as the examples of the door lock and the speed bump clearly demonstrate, but this new technology is more refined. For one thing, it is often unavoidable because it is located in the background of our existence; for another, it does not control our actions directly but interferes subtly with our intentions. This technology not only takes over responsibilities from us, as many earlier forms of behaviour-influencing technology have done, but educates us as well. Is that desirable in all cases?

Moral Mediation

The behaviour-influencing effect of ambient intelligence and persuasive technology should be seen as the radicalization of an influence technology has always had. A car is not simply a means to get from A to B, but it also organizes a way of travelling, a certain relationship between home and work, and even how cities are planned. A mobile telephone is not only a useful device that enables people to talk to one another without having to be stuck to a wire, but it also shapes the contact people have with one another and the way in which they communicate. All technology therefore plays a mediating role in human actions and experiences. Yet, in the case of ambient intelligence and persuasive technology, this mediation has a very specific character. This influence is almost unavoidable and subtly and intrinsically bound to the material environment in which one is located.

The freedom one enjoys as the driver of a car equipped with Intelligent Speed Adaptation is considerably curtailed. The driving behaviour of a driver in this position is not only the result of his or her own intentions but also of the controlling role of the speed-limiting system and the detecting role of the environment. With a lot of applications of ambient intelligence it is, moreover, not always clear who precisely the user is. In healthcare, for example, these technologies play a role in the actions of patients, visitors, doctors, nurses, and so on. People are connected to computer networks, and because of this, other sorts of networks arise: networks of relationships between people and their material environment, on the basis of which their actions take shape. Automatic distribution of medication, or self-monitoring devices for heart patients, result in a changed interaction between patients and healthcare professionals, changed perceptions of one’s body and one’s disease, and changed responsibilities for diagnosis and treatment.

This does not mean that these technologies, by definition, take all our freedom away. When choosing whether to change up to a higher gear earlier because the EconoMeter suggests doing so, or whether to look into the room of a nursing home resident because the detection system indicates that someone may have fallen out of bed, human behaviour is not determined by technology: people are still able to reflect on their behaviour and take the decision concerned themselves. However, there is an obvious influence. And, each time, the person involved is inevitably placed in the situation of having to make a choice that would not arise if this technology did not exist. The dilemma of how fast to drive would not exist in this way without the organizing role of technology. In other words, technology cannot be defined out of our daily lives. People do not possess any sovereignty in relation to technology.

This conclusion can be viewed in two ways. The first is that technological mediation and behaviour-influencing technology exclude human freedom, or at best curtail it substantially. There is also a second, more adequate approach, which is more productive both for ethics and for policy practice. On the basis of the work of Foucault [13, 14], it is possible to understand freedom not so much as the total absence of outside influences, but rather as the capability of humans to develop a relation to these influences. And this relationship makes room for ethics, and in turn for policy-making.

From a Foucauldian perspective, freedom is not to be found in the absence of mediation and influence but in the explicit relation to them. It is the existential space people have to realize their existence, in interaction with the world in which it takes place. People relate to their own existence and to the ways in which that existence is partly shaped by the material environment in which it is enacted. This material situatedness of human existence creates specific forms of freedom instead of hindering it. Freedom exists in the possibilities that are opened up for people to find a relationship with the reality to which they are bound.

This redefinition of freedom shows that freedom and technology are not at odds with one another. On the contrary, in certain ways technology contributes to the constitution of freedom by forming the material environment in which human existence is enacted and takes shape. When people develop connections with technology, these connections form the places where freedom must be located. Besides intentionality, as explained in the previous section, freedom is therefore also a hybrid affair, distributed over people and artefacts.

From this point of view, too much resistance to a ‘moralizing’ material environment is not especially productive. Conflict about the question of whether such behaviour-influencing technology is desirable at all is therefore, in fact, a rearguard action. Ethical actions also take place in interaction with the influence exerted by technology and not in isolation from it. It is almost impossible to conceive of a morally relevant situation in which technology does not play a role. And we would be throwing the baby out with the bathwater if we concluded from this that there is no room for morality and moral judgements in situations in which technology plays a role. The actions of drivers are continually mediated even without speed limiting plates. As long as cars can easily exceed the applicable speed limits and roads are so wide and bends so wide that they facilitate fast driving, we will be continually tempted to put our foot down even further.

Based on this interpretation of freedom, it is not the influencing of behaviour by technology that is immoral, but the refusal to deal with this inevitable influence in a responsible manner. The recoil people often intuitively feel at the influence technology can have on them must not be allowed to lead to an impotent endeavour to expel all technological influence; it can, on the contrary, be used to steer this unavoidable influence in the right direction.


Responsibility

Following from the above analysis of how persuasive technology and ambient intelligence compel us to revise our concept of human freedom, our understanding of responsibility automatically becomes problematic as well. Because of the large role these technologies play in our practices and experiences, the question arises whether we can still be held entirely responsible for actions induced by them. Is someone acting responsibly if he or she keeps to the maximum speed because speed limiting plates compel him or her to do so? Is an obese person who uses the FoodPhone for a long time responsible if he or she suddenly develops anorexia nervosa because of a continual obsession with the relationship between what he or she eats and his or her weight? And who is responsible if an automatic face recognition system in a surveillance camera incorrectly identifies someone as a suspicious person—something that, moreover, appears to happen more often with coloured or elderly people because the requisite software is calibrated to light contrasts on white skin ([16])?

To start off with, it is useful to make an elementary distinction between two types of responsibility, namely causal responsibility and moral responsibility. A person is responsible in the causal sense if he or she is the cause of something, and that does not necessarily mean that he or she can be held responsible for it in the moral sense. The event or situation can, for example, have been caused unintentionally or under coercion. Only if someone acts freely and consciously can he or she be held liable, in the common interpretation of morality, for his or her actions. And it is precisely these two requirements that are complicated in the case of ambient intelligence and persuasive technology, as was clear in the previous sections. Human freedom and intentionality have become interwoven with these technologies. Through their influence on human actions, or their contribution to causal responsibility, ambient intelligence and persuasive technologies therefore also interfere in the moral responsibility of people for actions arising in interaction with them.

It will be clear that in these cases technology is not the ultimate cause of what people do and that it cannot be held responsible or accountable in the moral sense. But the same can be said of the people who deal with this technology. As a result of the interrelatedness of their freedom and intentions with this same technology, they are not the ultimate cause of actions either. Actions arise based on this interrelatedness of humans and technology. However, the fact that responsibility is distributed between people and technology due to the use of this technology does not mean that no reference points exist for shaping responsibility properly.

If we are to think adequately about responsibility in relation to these technologies, we have to contemplate the parts played by people and technology separately—without losing sight of the continual interrelatedness of the two, naturally. Two approaches can be taken here: one focuses on the design of technology and the other on its use. Users of technology can take responsibility for their share in the coming about of choices and actions, and the designers of the technology concerned can take responsibility for their share in the behavioural influence ultimately exerted. Exploring these approaches separately based on the question of how the impact of ambient intelligence and persuasive technology can be soundly designed will enable us, to a certain extent, ‘to tame the taming’, to stay with the metaphors of Peter Sloterdijk.

User Responsibility⁵

By recognizing that human actions are always mediated in one way or another, and that technology is one of the sources of this mediation, we create room for linking ethics and technological mediation. The objective of ethics is then not to protect humans from unilateral control by technology, but to assess the technological mediation of life and to experiment with it meticulously. People can relate to the influence of technology and, rather than attempting to do without this influence altogether, can actively help shape its impact on their daily lives.

In this way, in the terms of the French philosopher Foucault, dealing with technology becomes a self-practice: a practice in which the self is shaped by relating to the powers and forces that try to shape it ([19], pp. 2–3). Foucault elaborated the thought of moral self-constitution in the domain of sexuality, studying how sexual passions can become the subject of ethical design. As Dorrestijn [11] has shown, however, this study can readily be translated and applied to technology. If people want to be able to take responsibility for the role technologies play in their lives, they must first of all relate explicitly to the way in which these technologies partly shape their intentions and behaviour. This assumes that users are equipped to see technologies as more than just interesting, novel gadgets.

A general realization that technology interferes in one’s subjectivity is not sufficient, however, to bring about active stylizing. People must also have insight into the specific ways in which particular technologies shape intentions and behaviour. Characteristic of ambient intelligence and persuasive technology, for example, is that the technological control is tailor-made to the individual; the function of conscience is externalized, as it were. People are unremittingly exposed to influences intended to adjust their intentions in accordance with preprogrammed guidelines. Various forms of subjection can be distinguished, though. Firstly, there is the direct coercion entailed in, for example, automatic speed influencing, in which GPS technology is used to make it impossible to drive faster than 120 km/h on motorways, 50 km/h in built-up areas, and so on. Secondly, persuasive technologies use feedback on one’s own behaviour, as embodied in the Persuasive Mirror and the FoodPhone. A third variant consists of seductive technologies, which do not so much coerce people or persuade them at the cognitive level to act in a certain way, but simply make some actions more attractive than others.

If we make explicit how certain technologies shape our lives, we can create the distance we need to be able to relate to these forces. This generates the space to experiment with the use of technology, keeping a sharp eye on the quality of the practices resulting from it, and based on the realization that every practice in which a technology is used also shapes our own subjectivity. An example of a self-practice of this kind in the field of ambient intelligence has been elaborated by Steven Dorrestijn on the basis of an experiment with an automatic speed influencing system in Tilburg ([11], pp. 100–101). This system automatically limits the speed of vehicles to the maximum allowed speed at the spot where the vehicle is located, thus restricting the freedom of motorists considerably. However, contrary to the great resistance that might have been expected, the system ultimately won a lot of praise. Users developed a quieter driving style that they enjoyed. Hectic driving behaviour was simply no longer an option and, in the end, this turned out to be a comfortable situation rather than a hindrance for most of the people involved (Transport Research Centre [5]). In a certain sense, therefore, the users of this automatic speed influencing system gave up some of their supposed autonomy, in terms of the absence of factors trying to control and influence them, but they got a form of freedom in return, by relating to this influence and allowing it to style their subjectivity in a particular way. In this case, freedom is a practice that is partly organized by the technological infrastructure of existence. Within this practice it is indeed possible to take partial responsibility for the specific way in which one’s own existence is shaped in interaction with technology. Even compelling technologies require that we incorporate them somehow in the way we live our lives.

Finally, an approach involving self-practices forces us to reflect on the ideals and goals that lie hidden in our dealings with technology, and on how desirable these are. Do we want to be people who delegate the greater part of the care of the elderly to ambient technologies, so that old people can literally talk to the wall if they need help, and the walls literally have ears to pick up whether people have fallen or are confused? Do we want to be people who take moral decisions in interaction with feedback received from technology? Questions of this kind require a public moral debate on the quality of our lives in relation to the technology we use. Ethics and technology policy should focus far more than is currently the case on the demand for public visions of the good life and the role technology plays in it.

Designer Responsibility

The distributed character of responsibility also has implications for the responsibility of designers. After all, by the way in which they design persuasive technology and ambient intelligence, designers inevitably contribute to the influence these technologies exert on people’s daily lives, whether explicitly or not. And unforeseen and unintended effects can arise too. The FoodPhone, for example, was clearly designed on the basis of good moral intentions. It does not use morally unacceptable persuasive methods, and its intended effect is not morally unacceptable either. However, this device will organize the relationship of people to their food in a very specific manner, as well as the social relations concerning food. If obesity is the result of an eating disorder, the FoodPhone could exacerbate the disorder by stimulating an excessive focus on someone’s eating pattern. Furthermore, his or her social life will not get any simpler if everything he or she eats must first be photographed so that the number of calories can be calculated. The EconoMeter, to mention another example, will undoubtedly lead its users to use less fuel, but may give them the impression that they are acting in an environmentally friendly manner by driving like this, whereas the bicycle or the train would be the better option from an environmental point of view.

Besides users, designers too are therefore responsible for the practices that ultimately come about around persuasive technology and ambient intelligence. Two forms of designer responsibility can be distinguished here. Firstly, designers can anticipate, at the draft stage, the effects and side effects of the technology they design, and possibly adjust the design or even abandon it. Secondly, designers can explicitly and responsibly build behaviour-influencing and persuasive effects into a technology. Both forms are relevant. On the one hand, certain normative effects of technologies remain implicit in the design, and it is good to make these more explicit; on the other, persuasive technology is mainly about the explicit incorporation of behaviour-influencing effects, and it is good if that takes place in a responsible manner (cf. [6, 30]).

In order to anticipate the implicit normative effects of ambient intelligence and persuasive technology, a design must never be seen as purely instrumental, but always as mediatory. The Persuasive Mirror is more than a device that persuades people to drink and smoke less, sleep enough, work less hard, lead a more regular life, and so on. The instrumental vision hides the fact that this technology not only fulfils its function (persuading people of the advisability of behavioural change) but also imposes an implicit normative framework and organizes its environment in a specific way. The lifestyle of Herman Brood or Jim Morrison, for example, would be strongly discouraged by this mirror, although it can hardly be taken as a given that their lifestyles had no value or should be banned, any more than a puritan life is the only worthwhile one. The FoodPhone is another good example: its use in the fight against obesity is sound only as long as this fight is seen as meaningful and the FoodPhone does not develop into the source of a new beauty ideal. Persuasive technology inevitably contains built-in standards, and these standards must be made explicit if the technology is to be responsible. This is only possible if the technology designed is approached explicitly as a mediating object around which new practices and new interpretations will arise. Designers must learn to ‘read’ and ‘rewrite’ their products.

We should note, however, that the impact of a technology cannot always be unambiguously predicted [16]. The introduction of the low-energy light bulb, for example, has had the opposite effect to what was intended. Instead of leading to lower energy consumption, it has led to higher energy consumption. Because this bulb is so cheap to use, it apparently tempts people to leave, for example, the light in the shed on all the time and to illuminate the front of the house or garden [22, 28, 29]. This phenomenon is just as likely to take place with ambient intelligence and persuasive technology. As mentioned above, the FoodPhone can lead to people having such a fixation on their own weight that eating disorders can result. Automatic face and behaviour recognition can lead to unjustified insinuations regarding disturbances of the public order in the case of people who do not fall within the software norms. A system that sounds the alarm when an elderly person falls can, on balance, lead to such people getting less attention from their carers.

It is only a small step from anticipating the impact of persuasive technology and ambient intelligence to explicitly influencing or controlling people (which takes place implicitly with many forms of ambient intelligence, and which is the explicitly intended objective of persuasive technology). Here lies the most important issue regarding responsibility: where precisely does the responsibility for such influences lie? In fact, the influence of some technologies reaches so far that it does not seem desirable to delegate this responsibility solely to designers. Well-intended ‘moralizing’ effects of persuasive technology can turn out to be paternalistic, or can even constitute an undesirable implicit meddling in people’s answers to the question of the ‘good life’. If we discourage smoking or lavish eating through persuasive technology, we are implicitly stating that a longer life, in which the risks of sickness through smoking and obesity are avoided, is more worthwhile than a shorter life with more emphasis on enjoyment.

Without users perhaps being properly aware of it, these technologies install a vision of the good life. This is despite the fact that in our liberal democracy, visions of the good life are normally entrusted to the freedom of our individual private lives. If the government were to force people, by means of legislation, to practise sport regularly and to smoke and drink less, there would be great consternation: people are deemed able to take responsibility for their own lifestyles. Too much implicit interference by technology in our daily lives can therefore form a direct threat to the basic principles of our democratic constitutional state. That is why it is important to design democratic procedures for shaping these kinds of influential technologies, and to equip the citizens of modern, technological societies to understand the mediating roles of the technologies around them and to develop an explicit relation to them.


Conclusion

Ambient intelligence and persuasive technology might raise fears of a Big Brother scenario, but in fact they radicalize an influence of technology on human experiences and practices that has always existed. Our material world has always interfered with us implicitly, and it will increasingly do so explicitly. But precisely because this influence is now becoming so clearly visible, it can also become the explicit subject of discussion. It is very important that this discussion actually comes about, and that it goes beyond the current focus in the ethics of technology on safety, reliability, and privacy. Nothing less than the quality of our lives is at stake here.

This calls for a new role for ethics. Rather than aiming to protect humanity against technology, ethics should aim to accompany the development, use, and implementation of new technologies, analyzing how technologies help to shape the good life, and providing designers, users, and policy makers with adequate vocabularies to perceive and assess these impacts of technology. Ethics has become a matter of both people and things—and it is high time that things are given the place they are entitled to.


Notes

  1. Cf. Verbeek [27] for a further elaboration of the differences between compulsive, persuasive, and seductive technologies.

  2. RFID chips are very cheap electronic labels that do not need their own source of power and that reveal their contents to a scanning device, such as a checkout or a detector at the entrance to a train or underground station. These chips can, for example, be placed on supermarket product packaging, on identity cards or passes, or subcutaneously in pets so that they can be identified if they run away.

  3. This thought has been elaborated on in more detail in Verbeek [24].

  4. In Science and Technology Studies, the influence of technologies on human behavior is often indicated with the concept of ‘script’. This concept can be integrated in the approach of technological mediation; see Verbeek [23].

  5. See also Verbeek [26].



This article is based on my Dutch article ‘Ambient Intelligence en Persuasive Technology: de vervagende grens tussen mens en techniek’, in: Tsj. Swierstra et al. (eds), Leven als Bouwpakket. Kampen: Klement, 2009. The research was made possible by the Netherlands Organization for Scientific Research (NWO), as part of the NWO-VIDI project ‘Technology and the Limits of Humanity’.

Open Access

This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.


References

  1. Aarts E, Marzano S (2003) The new everyday: views on ambient intelligence. 010, Rotterdam
  2. Aarts E et al (2001) Ambient intelligence. In: Denning P (ed) The invisible future: the seamless integration of technology into everyday life. McGraw Hill, New York
  3. Achterhuis H (1995) De moralisering van de apparaten. Socialisme en Democratie 52(1):3–12
  4. Achterhuis H (1998) De erfenis van de utopie. Ambo, Amsterdam
  5. Adviesdienst Verkeer en Vervoer (2001) ISA Tilburg: Intelligente Snelheids Aanpassing in de praktijk getest. Eindrapportage praktijkproef Intelligente Snelheidsaanpassing. Ministry of Transport, Public Works and Water Management, The Hague
  6. Berdichewsky D, Neuenschwander E (1999) Toward an ethics of persuasive technology. Commun ACM 42(5):51–58
  7. Bohn J et al (2004) Living in a world of smart everyday objects: social, economic, and ethical implications. Hum Ecol Risk Assess 10(5):763–786
  8. Borgmann A (1984) Technology and the character of contemporary life. University of Chicago Press, Chicago
  9. Brey P (2006) Freedom and privacy in ambient intelligence. Ethics Inf Technol 7(3):157–166
  10. Casert R (2004) Verslag workshop ‘Ambient intelligence: in the service of man?’. Rathenau Institute, The Hague
  11. Dorrestijn S (2004) Bestaanskunst in de technologische cultuur: over de ethiek van door techniek beïnvloed gedrag. Master’s thesis, University of Twente, Enschede
  12. Fogg BJ (2003) Persuasive technology: using computers to change what we think and do. Kaufmann/Elsevier, San Francisco
  13. Foucault M (1990) The care of the self. The history of sexuality, vol 3. Penguin Books, London [1984]
  14. Foucault M (1992) The use of pleasure. The history of sexuality, vol 2. Penguin Books, London [1984]
  15. Ihde D (1990) Technology and the lifeworld. Indiana University Press, Bloomington
  16. Introna L (2005) Disclosive ethics and information technology: disclosing facial recognition systems. Ethics Inf Technol 7:75–86
  17. ISTAG (2001) Scenarios for ambient intelligence in 2010. European Commission, Brussels
  18. ISTAG (2003) Ambient intelligence: from vision to reality. European Commission, Brussels
  19. O’Leary T (2002) Foucault: the art of ethics. Continuum, London
  20. Schuurman J et al (2007) Ambient intelligence: toekomst van de zorg of zorg van de toekomst? Rathenau Institute, The Hague
  21. Sloterdijk P (1999) Regeln für den Menschenpark. Suhrkamp, Frankfurt
  22. Steg L (1999) Verspilde energie? Wat doen en laten Nederlanders voor het milieu. Social and Cultural Planning Office, SCP Cahier no. 156, The Hague
  23. Verbeek PP (2005) What things do: philosophical reflections on technology, agency, and design. Penn State University Press, University Park
  24. Verbeek PP (2006a) Moraliteit voorbij de mens: over de mogelijkheden van een posthumanistische ethiek. Krisis 1:42–57
  25. Verbeek PP (2006b) Persuasive technology and moral responsibility. Paper for the conference Persuasive Technology 2006. Eindhoven University of Technology, Eindhoven
  26. Verbeek PP (2006c) Ethiek en technologie: moreel actorschap en subjectiviteit in een technologische cultuur. Ethische Perspectieven 16(3):267–289
  27. Verbeek PP (2008) Cultivating humanity: towards a non-humanist ethics of technology. In: Olsen JKB, Selinger E, Riis S (eds) New waves in philosophy of technology. Macmillan, Hampshire, pp 241–266
  28. Verbeek PP, Slob A (eds) (2006) User behavior and technology development: shaping sustainable relations between consumers and technologies. Springer, Dordrecht
  29. Weegink RJ (1996) Basisonderzoek elektriciteitsverbruik kleinverbruikers BEK’95. EnergieNed, Arnhem
  30. Wehrens R (2007) De gebruiker centraal? Een inventarisatie van gebruikersgericht onderzoek op het gebied van Ambient Intelligence en gezondheid. Rathenau Institute, The Hague

Copyright information

© The Author(s) 2009

Authors and Affiliations

University of Twente, Enschede, The Netherlands