1 Onlife Technologies

After a few decades of living with Information and Communication Technologies, we have become so used to their presence in our daily lives that we hardly realize that the societal and cultural revolution they are causing has only just begun. While most social and political discussions regarding ICTs still focus on privacy issues and on the impact of social media on interpersonal relations, a whole new generation of ICTs is currently entering the world, with potentially revolutionary impacts that require careful analysis and evaluation.

Two examples can illustrate this new generation of technologies. First of all, there is the rapid development of ‘embedded’ information technology. ICTs are starting to merge ever more intricately with our physical environment. Walls, beds, doors, cars—many everyday objects are currently being equipped with forms of ‘ubiquitous computing’ or ‘ambient intelligence’, as a large electronics multinational has come to call it (Aarts and Marzano 2003). Objects in our lifeworld, in other words, are becoming intelligent. Hospital beds can detect if patients fall out of bed or step out of it. Doors in geriatric homes can determine who is allowed to go outside and who is not. Cars are increasingly taking over tasks that used to be reserved for humans, like parallel parking, making emergency stops, and refusing to change lanes if it is too dangerous to do so.

This intelligification of our material world will have important implications. Public space will literally become space with a public character—the more it becomes aware of us, the more we need to become aware of that fact. Moreover, intelligent objects are increasingly equipped with explicitly persuasive abilities. Smart mirrors can give us feedback on our lifestyle when we enter a doctor’s waiting room. Smart training equipment in gyms can persuade people to exercise just a bit more. Smart websites attempt to persuade users to buy specific things, or to become members of specific organizations. Our material world is developing into an active and intelligent counterpart, rather than a mute, stable, and functional environment.

At the same time, our own access to the world is rapidly changing. With the advent of technologies like Google Glass, the phenomenon of ‘augmented reality’ is rapidly gaining influence. Google Glass consists of a pair of small, transparent monitors and a camera. The device provides an extra layer of information about the people, objects, and images one sees. It has the potential to recognize the faces of people you meet and provide all available information about them instantaneously—without these people noticing. It makes it possible to send and receive messages that can be composed with eye movements, voice input, or touch. This will enable people to communicate with each other in new ways, again without others noticing it.

If this type of augmentation becomes widespread, it will have enormous implications for virtually all dimensions of society. Educational processes will need to be reinvented when all information is available to anybody all the time. The boundaries between the public and the private will need to be redrawn when a quick glance at somebody’s face reveals all their activities on the Internet. Security policy, privacy legislation, commercial activities—it is hard to imagine a sphere of society that will not be affected by the advent of augmented realities. Our lives are getting increasingly interwoven with online realities—we go ‘onlife’, as the contributors to this book have come to call it.

New information technologies, in sum, potentially put us at the dawn of a new era. While many people are focusing on the biotechnological revolution, and the convergence of nanotechnology, biotechnology, information technology, and cognitive science, companies like Google and Philips are redesigning the world. How to understand these changes? And how to evaluate them?

2 Onlife Relations

Understanding the relations between humans and technologies has been one of the central activities of the philosophy of technology in the past decades. Mediation theory has developed the central idea that we need to blur the boundaries between human and technology in order to understand the social role of technologies. Humans and technologies cannot be located in two separate realms, but need to be understood in their interrelations. At the basis of the theory of technological mediation is the work of the North American philosopher Don Ihde. Ihde analyzes the various shapes that the relations between humans and technologies can take (Ihde 1990). His central thought is that technologies help to shape the relations between humans and world. Whenever a technology is used, it becomes a mediator between its users and their environment, helping to shape the character of the relations between the two.

Ihde distinguishes four forms the relations between humans and technologies can take. First, there is the ‘embodiment relation’, schematically indicated as (human—technology) → world. In this relation, technologies are extensions of the body, as it were. Humans experience the world ‘through’ the technologies here, as when wearing glasses or using hearing aids. A relation with the world is also possible in the ‘hermeneutic relation’, schematically indicated as human → (technology—world). Some technologies give us access to the world by giving a representation of it, which requires human interpretation in order to be meaningful—hence the name ‘hermeneutic’—like a thermometer that gives a number rather than a sensation of temperature, or a sonogram that gives a visual representation of an unborn child on the basis of reflected ultrasonic sound waves. A third relation is the so-called ‘alterity relation’, schematically indicated as human → technology (world). In this relation there is a direct interaction between humans and technologies, as when someone operates a copying machine or repairs a car. The fourth and last relation Ihde distinguishes is the background relation, indicated as human (technology/world). In this relation, technologies have an impact on our relation with the world without being explicitly experienced themselves. An air conditioner that automatically switches on and off, for instance, creates a context for human experience by producing noise or creating a specific room temperature.

In all four of these human-technology relations, technologies move ever further away from the human being, as it were: from an extension of our senses to a context for our experiences. Ihde’s analysis has made possible an entirely new direction in the philosophy of technology. Rather than investigating what ‘Technology’ does to ‘Humanity’ and ‘Society’, Ihde’s approach made it possible to investigate how specific technologies mediate human actions, experiences, and interpretations. Against the gloomy theories of alienation that were fashionable for a long time, it now becomes possible to investigate in more detail how technologies actually help to shape new relations between humans and world. Scientific instruments help scientists to understand reality; medical-diagnostic technologies help to shape interpretations of health and illness; social media reshape social relations and friendships.

New information technologies like Ambient Intelligence and Google Glass, though, urge us to expand this framework (Verbeek 2011). One step ‘further away’ from the human being than the background relation—but ‘closer to us’ in another sense—is made by technologies that create an environment in which we are immersed, like the smart environments with ambient intelligence mentioned above. The relations we have with such environments can be indicated as ‘immersion’. Schematically, these relations look like human ↔ technology/world: the technologies merge with the environment and interact with their users.

Google Glass adds a new type of relation at the other end of the spectrum. Rather than merely being ‘embodied’, it adds a second layer to our world, which is often called an ‘augmented reality’. In addition to the sensory relation with the world ‘through’ the glasses, it also offers a representation of the world. Technologies like this offer not one, but two, parallel relations with the world. We could call this a relation of augmentation. This relation consists of two parallel circuits: (human—technology) → world and human → (technology—world). And this is quite a revolutionary step in the relations between humans and their world. Human intentionality, as phenomenologists call the human directedness at the world around them, is developing a bifurcation. Our attention is increasingly divided between two parallel tracks.

3 Onlife Mediations

New information and communication technologies, in short, create radically new relations between human beings and the world around them. Not only the structure of these relations deserves further inquiry, but also their implications for social relations and human existence. What do all of these new information and communication technologies do to us, through the new and unanticipated relations we develop with them? I will limit myself again to the relations of ‘immersion’ and ‘augmentation’ described above.

In the relation of immersion, the material environment changes from a relatively stable background of our existence into an interactive context that interferes in novel ways with the ways we live our lives. Smart environments with ‘ambient intelligence’ are changing the character of the spaces in which our lives take shape. When public spaces are equipped with smart cameras that can detect suspicious behavior, new norms will be installed. When the doors in geriatric hospitals have RFID chip readers, they can automatically determine who should be allowed to go out and who should not. When toilets have sensors that can detect specific substances in our urine and feces, new norms regarding health and illness, and new regimes for healthcare, will emerge.

Moreover, these ‘intelligent’ technologies can also interact with our decision-making processes. Under the name of ‘persuasive technologies’, products and systems are being developed to persuade people to behave in specific ways. School toilets can detect if children have washed their hands when they leave, and urge them to do so when they forget. Smart mirrors in doctors’ waiting rooms can recognize one’s face and morph it into an image of what one will look like in 10 years if one doesn’t give up smoking, or eating too much, or working too hard. Smart shop windows can determine the direction of one’s gaze and give extra lighting to articles that seem to interest specific people.

In the configuration of augmentation, technologies like Google Glass have the potential to radically change the character of social interactions. A mere look at somebody else can be enough for a face recognition system to look this person up on the Internet. This would result in a drastic reconfiguration of the boundaries between the public and the private. All one’s private activities that are on the Internet will be much more easily accessible. And all resulting information will be available in social interaction in an asymmetrical way, because people cannot see if the person they meet is simultaneously checking up on them on the Internet.

Also, the permanent availability of email, messaging services and Internet information will give us an increasing ‘double presence’ in the world. Our physical, bodily presence in concrete spaces and situations will increasingly be accompanied by a virtual, but still bodily-sensorial, presence at other places, with other people, and in different situations. Our being-in-the-world, as Heidegger called it, is developing into a being-in-multiple-worlds.

This quick exploration of the new configurations of humans and technologies shows that their implications are enormous. New information technologies will install new norms for human behavior, have a political impact on how we interact in public space, help to shape the quality of interpersonal interactions, and so on and so forth. No realm of human existence will remain unaffected. Our lives will be mediated in radically novel ways.

At the same time it is often hard to see these mediations, because information technologies increasingly challenge the frameworks by which we have come to understand ourselves and the world we live in. Ever since the Enlightenment, we have understood ourselves as relatively autonomous subjects in a world of objects that we can investigate, manipulate, and appreciate. But the self-evidence of this metaphysical framework—in which subjects have intentions and freedom, while objects are passive and mute—is rapidly fading away, now that information and communication technologies have started to challenge it seriously.

On the one hand, the advent of ‘social media’ has urged us to acknowledge how deeply intertwined our sociality has become with materiality. When Marshall McLuhan claimed that ‘the medium is the message’ (McLuhan 1994/1964), it was hardly possible to foresee that the mediating power of new media would become so strong that a few decades later people would start to wonder if Google is “making us stupid” (Carr 2008) and if virtual sociality is making us be “alone together” (Turkle 2011). On the other hand, the examples of ‘smart environments’ with ‘ambient intelligence’ have shown that our material environment now has unprecedented social capacities, persuading us to behave in specific ways, or reorganizing the character of public spaces.

Information technologies have made the boundaries between the social subject and the material object more porous than ever before. Social relations appear to be thoroughly mediated by technologies, while new technologies appear to have a profound social dimension. This situation is a serious challenge, not only for our metaphysical frameworks, but also for our self-understanding and for our ethical and political approaches to technology. How are we going to deal with this new situation?

4 Onlife Governance

The blurring of the boundaries between humanity and technology that new ICTs are bringing about has serious implications for our ethical and political reflection. Implicit in many ethical approaches to technology, after all, and especially regarding invasive technologies like ICTs, is the model of a struggle between humans and technologies (see also Verbeek 2013). While some technological developments can be beneficial, this view holds, others pose a threat to humanity, and the role of ethicists is therefore to assess whether technologies are morally acceptable or not.

In ethical and political discussions regarding ICTs, the theme of the ‘Panopticon’ often plays an important role. Inspired by Michel Foucault’s analysis of Jeremy Bentham’s prison design—a dome with a central watchtower from which all prisoners can be observed without their knowing whether they are being watched or not—some people fear that ICTs are creating a panoptic society in which privacy becomes ever more problematic, and in which asymmetrical power relations can flourish (Foucault 1975).

However important it is to develop and maintain a critical attitude toward new information and communication technologies, this model of a ‘struggle’ between technology and society is still based on the dualist metaphysics of subject versus object, which ICTs themselves have rendered obsolete by reconfiguring the boundaries between subjects and objects, as described above. When human beings cannot be understood in isolation from technology, and vice versa, approaching their relation in terms of struggle and threat is like giving a moral evaluation of gravity, or of language. It does not make much sense to be ‘against’ them, because they form the basis of our existence. Technologies have always helped to shape what it means to be human. Rather than opposing them, and putting all our efforts into resistance and protest, we should develop a productive interaction with them.

But how can such an interaction still be critical, when the boundaries between humans and technologies disappear? If human practices and experiences are always technologically mediated, there does not seem to be an ‘outside’ position anymore with respect to technology. And if there is no outside anymore, from where could we criticize technology?

To be sure, a hybrid understanding of humans and technologies does not imply that all roles of technology in human existence are equally desirable, or that human beings should redefine themselves as powerless victims of the power of technology. It does imply, though, that the ‘opposition model’ of humanity and technology might not be the most productive model if one wants to change undesirable configurations of humans and technologies. Ethics should not focus on determining which technologies should be allowed and which should not. Technological development will continue, and human existence will change with it. Tempora mutantur, nos et mutamur in illis: the times are changing, and we change in them. The main focus of ethics should not be on technology assessment but on technology accompaniment. Rather than keeping humanity and technology apart, we should critically accompany their intertwinement.

In order to articulate such an alternative model for ethics, it is helpful to connect to the later work of Foucault (see also Verbeek 2013). In his lecture ‘What is Enlightenment?’ (Foucault 1997), Foucault develops an alternative account of the phenomenon of ‘critique’. Foucault is looking for an answer to what he calls ‘the blackmail of the Enlightenment’. This blackmail consists in the pressure that is exerted upon those who want to criticize the Enlightenment, because all their attempts are typically explained as being ‘against’ the Enlightenment. Anyone who dares to open this discussion immediately raises the suspicion of being against rationality, democracy, and scientific inquiry. Foucault, however, explores whether an alternative understanding of Enlightenment would be possible. And this exploration is of the utmost importance in the context of the ethics of technology as well. Blurring the boundaries between humans and technologies, after all, can easily be explained as giving up on ethics: because there is no clear boundary to be defended anymore, it might seem that ‘anything goes’. Therefore, an alternative model for ethics needs to be developed.

Foucault’s answer, however trivial it may seem, is to reinterpret Enlightenment as an attitude, rather than the beginning of a new era. For Kant, as Foucault explains, Enlightenment was primarily a way out of “immaturity”: using “reason” rather than accepting “someone else’s authority to lead us in areas where the use of reason is called for” (Foucault 1997, p. 305). This requires critique: only critique can tell us under which conditions “the use of reason is legitimate in order to determine what can be known, what must be done, and what may be hoped” (Foucault 1997, p. 308). But for Foucault, critique must not be understood as an attempt to transcend the world—as Kant did—but as an attitude of always looking for the limits of what seems to be given and self-evident.

Foucault, in short, reinterprets critique—the ‘enlightened’ activity par excellence—as a form of practical self-inquiry. Critique means: investigating what has made us the beings that we are, what conditions our existence, and what has shaped our current way of living. And, most importantly, it does not require an ‘outside’ position, but can only happen on the basis of positioning ourselves ‘at the limit’. The human subject, after all, is always situated within the world to which it has a relation, and therefore critique can never come from outside. We can never step out of the networks of relations that help to shape our existence, to phrase it in a Latourian way, but this does not imply that we have to give up on critical reflection and self-reflection.

Foucault’s alternative Enlightenment offers an interesting escape from the specific shape that the blackmail of the Enlightenment has taken on in the ethics of technology. The fundamental intertwinement of human beings and information technologies implies that the frameworks from which we criticize these technologies are always mediated by these technologies themselves. We can never step out of the mediations in which we are involved. The farthest we can get is the limits of the situation we are in. Standing at the borders, recognizing the technologically mediated character of our existence, our interpretations and judgments, our practices and preferences, we can investigate the nature and the quality of these mediations: where do they come from, what do they do, could they be different?

Rather than letting ourselves be blackmailed by the Enlightenment—fearing that the boundary-blurring between technology and society would make it impossible to have a reasonable and normative discussion about technology—there is an alternative possibility for the ethics of technology. The central goal of ethical reflection, then, is not the assessment of technological developments ‘from outside’, but rather their accompaniment ‘from within’, to use a concept from the Belgian philosopher Gilbert Hottois (Hottois 1996) and the recent work of Paul Rabinow (Rabinow 2011). The crucial question in such a form of ‘ethical technology accompaniment’ is not how we could impose ‘limits’ on technological developments, but rather how we can deal in responsible ways with the ongoing intertwinement of humans and technologies. The limit-attitude leads to an ethical approach that is not preoccupied with the question of whether a given technology is morally acceptable or not, but that is directed at improving the quality of our lives, as lived with technology. Standing at the limits of what mediates our existence, we can evaluate the quality of these mediations ‘from within’, and actively engage in reshaping these mediations and our own relations toward them.

It needs to be emphasized that this does not imply that all mediations are equally desirable, or that there can never be grounds to reject technologies. Rather, it implies that ethical reflection needs to engage more deeply with actual technological artifacts and practices. Giving up on an external position does not require us to give up all critical distance; it only prevents us from overestimating the distance we can take. An ethics of ‘technology accompaniment’ rather than ‘technology assessment’ should in fact be seen as a form of ‘governing’ the impact technology can have on one’s existence and on society. It replaces the modernist ambition to ‘steer’ technology and to ‘protect’ humanity against technological invasions with a more modest ambition to ‘govern’ technological developments by engaging actively with their social and existential implications.

5 Onlife Citizenship

This critical accompaniment of ICTs can only take shape in concrete practices of design, use, and implementation, in which human beings can get critically involved in how technologies mediate their existence. A critical use of information technology then becomes an ‘ascetic practice’, in which human beings explicitly anticipate technological mediations, and develop creative appropriations of technologies in order to give a desirable shape to these mediations. At the same time, the design of information technology becomes an inherently moral activity, in which designers do not only develop technological artifacts, but also the social impacts that come with them. And policy-making activities regarding the implementation of new technologies then become ways of governing our technologically mediated world.

Let me return to one of the examples I gave at the beginning of this contribution in order to elaborate how this critical accompaniment of technologies could be a fruitful form of ethical and political reflection on technology. As indicated above, one of the most salient aspects of Google Glass is its impact on interpersonal relations. The ‘doubling’ of the relations between humans and world that it brings about adds a second layer to the communication between people, which remains invisible to the other person. When two people meet, they cannot see which information the other has available about them. Google’s search engine might reveal private information on the basis of face recognition software, or it might confuse the person with somebody else. Because this parallel information is available only to the person wearing the device, an asymmetry comes about that makes open communication impossible and that radically transforms the character of public space and public life.

Dealing with this new technology, then, requires more than asking whether we should allow it to be applied in society, and if so, under which conditions. Rather than aiming for a ‘yes’ or ‘no’, ethical reflection should ask how this technology could get a desirable place in society. And to answer this question, we need to think through the ethical dimension of the design, implementation, and use of this technology.

First of all, in the context of use, people will have to develop ways to appropriate this technology and to integrate it into their daily lives. Typically, people develop codes of use for dealing with technologies that have an impact on social life—just as it has gradually become normal that people do not answer incoming calls on their cell phones when they are in a conversation, for instance. An obvious code that could develop would be that people take off their Google Glass when they are in a conversation, to prevent their conversation partner from searching for information about them on the Internet, or from checking his or her email simultaneously. Still, the meaning of a quick glimpse at each other’s face in public space will change forever, because everybody knows that Google Glass enables people to look right through each other. Dealing with implications like this is not only a challenge for users, but also requires the attention of designers and policy makers.

Designers should be more aware of the mediations that can occur when people use the glasses, in order to make a responsible design. This requires experimentation and creative redesign. When, for instance, one of the main problems appears to be the possibility that somebody secretly checks someone’s face on the Internet, it could be important to introduce a little warning light that gives a signal when the face recognition system is on. This would remove an essential element of the hidden character of what the glasses can do, and therefore restore part of the symmetry that this technology takes away. Another option could be to redesign the software in such a way that it can only activate face recognition when people explicitly look each other in the eyes for more than five seconds. When people engage in this form of contact, they have reached a level of intimacy that is far beyond the regular quick exchange of looks in public spaces. Designs like this can make it possible to remain relatively anonymous in public spaces, while making contact with each other might also become easier when both parties are open to that and allow more substantial eye contact.

The character of the information that is revealed should also be part of the design of the technology. Google could give (or be obliged to give) people an active role in determining the profile that becomes visible when their face is recognized and looked up—just like the profiles people now make of themselves on social media like Facebook or LinkedIn. In this way, people would have more control over the ways in which they are present and visible in public spaces, comparable to the impression people make on others in real life, on the basis of their behavior, the way they dress, and the reputation they have.

Besides this, users should learn to deal with the effects Google Glass will have on their relations with other people—both when they wear the glasses and when they are being watched through them. It will not be very difficult to realize that other people might have all kinds of information available about you when they look at you. But that awareness should also grow regarding all the activities people have on the Internet. Google Glass integrates the public space of the Internet with public spaces ‘in real life’. This implies a rearrangement of the boundaries between the public and the private, and the coming about of a new public space—as has happened before with other media like the newspaper, the radio, and the television. Rather than merely resisting and opposing the negative aspects of this development, we will also need to develop new forms of citizenship and citizenship education. Codes of conduct and etiquette will have to develop, just as they already exist in current public spaces.

This requires, thirdly, new policy-making activities. If the main question remains whether we should or should not allow technologies like Google Glass to be introduced in our society, we lose the possibility to address the quality of their social implications. At the same time, a blind and unregulated introduction of this technology into society would throw away the possibility of critical reflection and governance. The central question for policy-making activities is how Google Glass can be embedded in society in good ways. Governance and regulation should focus on the quality of this embedding, rather than on permission for it. This inevitably requires experimentation that makes it possible to find the right balance between openness to change and the preservation of what we find valuable. We will need to ask questions like: which information should be disclosed and which not? Which aspects of ourselves belong to the private realm and which do not? And who determines that? Should people have the right to adapt the profile that is connected to their visual appearance? How can the design of Google Glass embody the central values of our society? And how can users be equipped optimally to integrate Google Glass into their daily lives in responsible ways?

The real information revolution has yet to begin. The boundaries between human beings and information technologies are blurring ever more rapidly. This requires a normative framework that gives up the idea that we need to control technologies from outside, on the basis of a set of pre-given criteria. Rather, we need to develop ever better ways to understand how information technologies affect us, and to get explicitly involved in that process, by critically designing, embedding, and using information technologies from the perspective of their mediating powers in human existence and our technological society.