1 Onlife After the Computational Turn?

1.1 Computational Turn

In my inaugural lecture I have reiterated the notion of a computational turn, referring to the novel layers of software that have nested themselves between us and reality (Hildebrandt 2013). These layers of decisional algorithmic adaptations increasingly co-constitute our lifeworld, determine what we get to see (search engines; behavioural advertising), how we are treated (insurance, employment, education, medical treatment), what we know (the life sciences, the digital humanities, expert systems in a variety of professions) and how we manage our risks (safety, security, aviation, critical infrastructure, smart grids). So far, this computational turn has been applauded, taken for granted or rejected, but little attention has been paid to the far-reaching implications for our perception and cognition, for the rewoven fabric on which our living-together hinges (though there is a first attempt in Ess and Hagengruber 2011, and more elaboration in Berry 2012). The network effects of ubiquitous digitization have been described extensively (Castells 2011; Van Dijk 2006), though many authors present this as a matter of ‘the social’, neglecting the extent to which the disruptions of networked, mobile, global digital technologies are indeed ‘affordances’ of the socio-technical assemblages of ‘the digital’. Reducing these effects to ‘the social’ does not help, because this leaves the constitutive and regulative workings of these technologies under the radar. Moreover, we need to distinguish between digitization per se and computational techniques such as machine learning that enable adaptive and proactive computing and thereby present us with an entirely novel—smart—environment.

1.2 Smart Environments

I believe that whereas such smart environments have long remained a technological fantasy, they are now with us, around us, even inside us (Hildebrandt and Anrig 2012). They anticipate our future behaviours and adapt their own behaviours to accommodate our inferred preferences—at least insofar as this fits the objectives of whoever is paying for them (commercial enterprise, government). They provide us with a ubiquitous artificial intelligence that uproots the common sense of our Enlightenment heritage that matter is passive and mind active. Matter is developing into mind, becoming context-sensitive, capable of learning on the basis of feedback mechanisms, reconfiguring its own programs to improve its performance, developing ‘a mind of its own’, based on second-order beliefs and preferences. This means nothing less than the emergence of environments that have agent-like characteristics: they are viable, engines of abduction, and adaptable (Bourgine and Varela 1992); they are context-sensitive, responsive, and capable of sustaining their identity by reconfiguring the rules that regulate their behaviours (Floridi and Sanders 2004). We note, of course, that so far ‘they’ are not consciously aware of any of this, let alone self-conscious. Also, let’s acknowledge that we are not talking about what Clark (2003) termed ‘skinbags’: neatly demarcated entities that contain their mind within their outer membranes, surface or skin. The intelligence that emerges from the computational layers is engineered to serve specific purposes, while thriving on the added value created by unexpected function creep; it derives from polymorphous, mobile computing systems, not from stand-alone devices such as those fantasised in the context of humanoid robotics.

1.3 What’s New Here?

In what sense is this a novel situation? Where lies the continuity with preceding information and communication technologies? In his magnificent Les technologies de l’intelligence, Pierre Lévy (1990) discussed the transitions from orality to script, printing press and mass media towards digitisation and the internet. Summing up, Lévy suggests that we are in transition from a linear sense of time to segments and points; from accumulation to instant access; from delay and duration to real-time and immediacy; from universalization to contextualisation; from theory to modelling; from interpretation to simulation; from semantics to syntaxis; from truth to effectiveness; from semantics to pragmatics; from stability to change. Interestingly, his focus is on ubiquitous computing and he highlights the impact of the hyperlink, but he hardly engages with the computational intelligence described above. Core to the more recent ambient intelligence is the fact that human beings are anticipated by complex, invisible computing systems (Stiegler 2013). Their capacity to generate data derivatives (Amoore 2011) and to pre-empt our intentions on the basis of personalised inferences creates what Catherine Dwyer (2009) has called ‘the inference problem’. The thingness of our artificial environment seems to turn into a kind of subjectivity, acquiring a form of agency. In other work I have suggested that social science has long since recognized the productive nature of the inference problem that nourishes relationships between humans (Hildebrandt 2011a). Notably, sociologists Parsons as well as Luhmann spoke of the so-called double contingency that determines the fundamental uncertainty of human interaction (Vanderstraeten 2007). Since I can never be sure how you will read my words or my actions, I try to infer what you will infer from my behaviours; the same goes for you. We are forever guessing each other’s interpretations.
Žižek (1991) has qualified the potentially productive nature of this double and mutual anticipation by suggesting that ‘communication is a successful misunderstanding’. What is new here is that the computational layer that mediates our access to knowledge and information is anticipating us, creating a single contingency: whereas it has access to Big Data to make its inferences (Mayer-Schönberger and Cukier 2013), we have no such access and no way of guessing how we are being ‘read’ by our novel smart environments.

1.4 Which Are the Challenges?

If going Onlife refers to immersing ourselves in the novel environments that depend on and nourish the computational layers discussed above, then going Onlife will require new skills, different capacities and other capabilities. To prevent us from becoming merely the cognitive resource for these environments we must figure out how they are anticipating us. We must develop ways to extend the single contingency to a renewed double contingency. How to read in what ways we are being read? How to guess the manner in which we are being categorized, foreseen and pre-empted? How to keep surprising our environments, how to move from their proaction to our interaction? In other work I have suggested that we need to probe at least two tracks: first, to develop human-machine interfaces that give us unobtrusive intuitive access to how we are being profiled, and, second, a new hermeneutics that allows us to trace at the technical level how the underlying algorithms can be ‘read’ and contested (Hildebrandt 2011b, 2012). For now, the point I would like to make is that the implications of going Onlife cannot be reduced to privacy and data protection. I hope that the previous analysis demonstrates a far more extensive impact that cannot be understood solely in terms of the wish to hide one’s personal data. It requires more than that; indeed it challenges us to engage with our environments as if we are taking ‘the intentional stance’ with them (Dennett 2009).

2 Publics and their Problems in Smart Environments

2.1 Smart Environments and the Public Sphere

Above I have tried to flesh out in what sense smart environments present us with a novel situation. My conclusion was that the computational layers that mediate our perception and cognition of the world are generating an environment that simulates agency. Whereas the International Telecommunication Union spoke of the Internet of Things as ‘the offline world going online’ (ITU 2005), in some sense the plethora of autonomic decision systems are turning our inanimate environment ‘Onlife’. In this section I will investigate what this means for the public sphere, or even for the traditional private/public divide in itself. I will engage with the notion of the public sphere to inquire whether and how smart environments generate a kind of ‘natality’ here (Arendt 1958): a novelty, a beginning, an empty space to experiment—with as yet unknown affordances.

2.2 Public Private Social: Performance, Exposure, Opacity

Much has been written about the shrinking of the private, the blurring of the public/private divide and, for instance, the loss of privacy in public (notably Nissenbaum 1997). Such shrinking, loss and blurring have been attributed to either the lure of self-publication in web 2.0 (Cohen 2012), or to the secretive trading with and spying on our behavioural data in the course of pervasive computing (Cohen 2012; Hildebrandt 2012).

Maybe we should return to Arendt (1958), when she spoke of the private as a sphere of necessity (the household), the public as the space for freedom (political action) and ‘the social’ as the emergence of mass society (bureaucracy, individual self-interest and conformity). Her understanding of ‘the social’ or what she called ‘society’ is not altogether positive, to put it mildly. Is the rise of web 2.0 antithetical to ‘the social’, because it concerns communication of one-2-one, one-2-many as well as many-2-many, rather than many-2-one? Or does the processing of Big Data present us with ‘the social’ come true, where ‘the social’ is constituted by machine-readable bits and pieces that allow for the ultimate version of what Heidegger (1996) called ‘das rechnende Denken’ (calculative thinking)? I am not sure, and I believe the jury is still out. The answer will depend on empirical evidence of how ‘the social’ continues to evolve in smart environments.

I do think that Arendt’s understanding of the private and the public might save us from dichotomous thinking, as well as from the glorification of ‘private life’ as a sphere of uncontroversial freedom. Simultaneously, we must come to terms with the fact that her glorification of the public sphere has little connection with present day politics, which rather fall within the scope of her depiction of ‘the social’. We should also note that her glorification of politics as a ‘theatre of debate’ (other than the realm of household economics) is rooted in an appreciation of privacy as ‘some darker ground which must remain hidden if it is not to lose its depth in a very real, non-subjective sense’ (Arendt 1958, p. 71). To speak and act ‘in public’ one must leave the security of one’s home. But to distinguish oneself and to take the risk of being refuted requires courage, daring and a place to hide. To recuperate from the tyranny of public opinion (Mill 1859) we need a measure of opacity to re-constitute the self, far from the social pressures that could turn us into obedient self-disciplined subjects (Hutton et al. 1988). In fact I would agree with Butler (2005), where she underscores the constitutive opacity of the self, which invites repeated attempts to invent a coherent narrative of who we are, but at the same time escapes all narrative since the emergence of our self is hidden in our own prehistory (the infancy before we acquired language).

My question concerning the public in smart environments would thus be: how to design our ONLIFE in a way that affords a sustainable public performance, an empowering opacity of the self and a range of exposures that incorporates the need for self-expression, identity performance as well as the generosity of forgetfulness, erasure and the chance to reinvent oneself?

2.3 Public Performance in the ONLIFE Everywhere

Maybe ONLIFE has two dimensions, as suggested above. The first concerns self-publication or reputation management. It is a type of social networking (Facebook, Twitter, Foursquare, Instagram, YouTube, Training Intelligence Programs, Enhanced Reality), a pervasive ambience of sharing self-images, brief texts, photos, videos, location, ‘likes’, ‘dislikes’, sports performance, health status or professional reputation. The second dimension of ONLIFE concerns the ubiquitous measurement, calculation and manipulation of the data that leak from everyday behaviours, and the way these behavioural data are used to predict, pre-empt and thus manage future states of mind, choices and decisions, for instance in the case of behavioural advertising, location-based services, fraud detection, actuarial calculations, remote healthcare, neuromarketing or criminal profiling. Both seem to draw individual ‘users’ into Arendt’s ‘the social’. ‘Users’ have become what she calls ‘a society’, an assembly of individuals that manage their reputation, while also being managed as a resource for government and industry. In fact, the computational infrastructure employs behavioural traces as its cognitive resource.

The questions generated by all this focus on what affordances the ONLIFE should develop to enable a shared, agonistically organised public space that allows a plurality of ‘users’ to develop a voice, to partake in democratic decision-making and to hold each other to account, while at the same time providing the ‘users’ with effective means to withdraw, to unplug, to delete and start over. This raises three additional inquiries. First, the question of how to protect ‘users’ against invisible manipulation (because of the hidden complexity), unfair exclusion (because of the lack of transparency that disables contestation), and undesirable exposure (because of the ubiquitous pressure to ‘post’ an update of one’s where/what/who-abouts)? Second, the question of how to empower Onlife inhabitants in a way that enables them to challenge the design of their world? Is this about renegotiating the social contract? Or is it about construction work; how to build an Onlife world that is not a global village, nor a walled garden, but an extended urbanity? Third, the question of how this connects to the dimension of agency that is emerging within the Onlife experience; how can inhabitants or visitors of the ONLIFE learn to guess how they are being anticipated?

2.4 A Plurality of Publics, a Choice of Exposure, a Place to Hide

In 1927 Dewey wrote The Public and its Problems. The book is an extended reply to Lippmann’s (1997) analysis of democratic government in the age of mass media, high tech instrumentation and societal complexity. I find his analysis and the normative position he takes on democratic practice highly relevant for our current enterprise. As Marres (2005) has demonstrated, Dewey agrees with Lippmann’s diagnosis, but not with his cure. Whereas Lippmann believes the only solution is technocratic government, Dewey argues for a new understanding of democracy. For a start, he reminds us that representative democracy (voting) is a matter of delegation, relieving people from the burden of governing themselves. Second, he believes that once people discover that their delegates are not doing a good job with regard to a specific issue, they will seek out their fellows and form a public around this issue. Interestingly, the formation of publics and issues is a matter of co-constitution: no issue, no public [and vice versa]. This leads Dewey to understand democracy as the process of simultaneously constructing publics and issues, whereby people regain a measure of control over issues their delegates forsake. Publics and issues are thus performed, constructed, fabricated–not given. Their articulation and their assemblage require hard work. There is not one—given—Public, but a multiplicity of publics that changes shape in relation to the issues they frame. And also, in relation to each other.

Dewey’s publics differ from Arendt’s public sphere. His publics are more empirical and contingent and they have less continuity. In fact, a successful public will resolve its issue and cease to exist as such. However, both Dewey’s and Arendt’s publics require individuals who take the risk of raising their voice, contesting common sense and—more importantly—initiating the construction of a new common sense around what they present as an issue. Dewey seems less interested in opposing ‘the social’ with ‘the public’. His definition of democracy demonstrates a fundamental trust in the wisdom of crowds (to be distinguished from a naïve wisdom of ‘the Crowd’). Like Mouffe (2000) in political theory and Rip (2003) in constructive Technology Assessment, Dewey trusts the outcome of agonistic decision-making processes. His publics are always under construction—they thrive on, contest and challenge whatever pretends to represent ‘the social’. They ground a natality in the midst of ‘the social’, a possibility for radical reinvention of what is taken for granted.

What interests me here is how we—a public constituted around the issue of ONLIFE—can contribute to the design, the engineering, the construction of an ONLIFE that affords the formation and un-formation of publics, while protecting and cherishing the opacities of the individuals that make these publics. In fact I believe that the 2012 draft Data Protection Regulation holds several gems that may actually provide stepping-stones to such an ONLIFE. In the third part of this contribution I venture into the radical choices it presents and the bridges it builds between legislation, architecture, social norms and market forces (Lessig 2006).

3 Legal Protection by Design: A Novel Social Contract?

3.1 The Nature of the Social Contract

Having explained, in the first section, the challenges of an environment that comes Onlife due to a ubiquitous and pervasive layer of machine learning, I have put forward, in the second section, the question of what this means for the public, the social and the private. My conclusion was that we need to construct an infrastructure that allows for a plurality of publics, a choice of exposure and places to hide. Such an infrastructure cannot be taken for granted; it will not appear by itself, nor will it grow organically or ‘naturally’ from the computational layers we are currently putting in place.

The social contract that combined the idea of limited government with—ultimately—representative, deliberative and participative self-government was the result of a historic bargain (Nonet and Selznick 1978). This bargain sealed the autonomy of the law in relation to politics on the condition of non-interference; the independence of the courts thus combined with the monopoly of the legislator to enact the law. We can summarize this as the legislator writing and enacting the law, while the court speaks and interprets the law. Let’s invoke Montesquieu’s often misunderstood maxim: iudex—non rex—lex loqui. Not the king but the judge speaks the law (Schönfeld 2008). This was an attack on the medieval maxim that attributed all powers to the king: rex lex loqui. The division of tasks that follows from the historic bargain between enacting and speaking the law was based on the socio-technical infrastructure of the printing press; the checks and balances of the Rule of Law depend on the sequential processing of written codes that can be debated, interpreted and contested by those under their rule. The fact that the courts have the final word in case of a conflict guarantees a measure of due process, which ensures that fundamental rights are an effective part of the social contract. This is not to say that the printing press ‘caused’ the Rule of Law, but to suggest that it created a socio-technical infrastructure conducive to a specific division of tasks between the differentiated powers of the state. This division has specific temporal dimensions: the court speaks after the legislator enacts; courts are bound by the law enacted by the legislator, while in turn the legislator is bound by the interpretation of the courts—the circle is virtuous; it constitutes countervailing powers and creates room for both enforcement and contestation. All this is part of modernity. It depends on the internal division of sovereignty.
Ultimately it depends on the institutionalisation of the monopoly of violence which is at the core of the operations of sovereignty; effective protection of fundamental rights is only possible if the state can enforce them even where enforcement is required against the state itself.

3.2 Protecting Modernity’s Assets: Reconstructing the Social Contract

In his Die Aufklärung in the Age of Philosophical Engineering, Stiegler (2013) accepts the challenge introduced by Tim Berners-Lee, who argued that ‘we are not analysing a world, we are building it. We are not experimental philosophers; we are philosophical engineers’ (Halpin 2008). Berners-Lee was not merely describing the activities of the architects of the World Wide Web he invented. He was calling them to account for the impact of their engineering on the constitution of mind and society. He was inviting them to build a new res publica. Stiegler is more careful. He suggests that digital technology is a pharmakon: ‘it can lead either to the destruction of the mind, or to its rebirth’ (ib.). Referring to Wolf (2008) he notes that the transition from the reading mind to the digitally extended mind entails substantive changes to the composition and behaviour of our brains. Though these changes may be cause for celebration, they also threaten the constitution of the self. In the course of his text Stiegler reiterates the crucial question of what we need to preserve as a valuable heritage of the era of the ‘reading brain’ (Wolf 2008). I want to connect this with the need to reconstruct the social contract, recognizing its modern roots and its contingency on the ICT infrastructure of the printing press. A new social contract would have to align with the novel technological landscape, co-opting current ICTs to incorporate checks and balances. In that sense we will need a hybrid social contract that testifies to the agency-characteristics of smart environments.

Though we might wish to declare ‘Game over for modernity’, this may require us to give up on the social contract that protects against immoderate government. Let us remind ourselves that the end of modernity would not necessarily be the end of totalitarian governance. The hidden complexity of computational layers in fact affords refined and invisible manipulations that may be closer to the totalitarian nightmares of Kafka’s Trial (Solove 2004) and Forster’s Machine (Forster 2009) than to the dictatorial schemes of Big Brother watching you. Stiegler (2013) notes that

the spread of traceability seems to be used primarily to increase the heteronomy of individuals through behaviour-profiling rather than their autonomy.

The ‘old-school’ social contract will not necessarily survive when cut loose from the ICT infrastructure of the printing press. The idea of the social as a distinctive sphere is in fact typical of modernity’s reliance on information and communication technologies that sustain a further distantiation and differentiation of societal spheres. Oral societies do not have written constitutions capable of keeping their economic and military leaders in check; they require a continuous calibration that entails a persistent threat of violence to keep the vicious circle of private revenge at bay (Hoebel 1971). Societies of the manuscript (the handwritten script) have no means to contest written laws for the majority that does not read or write; they thrive on the monopoly of the class of scribes that buffers between ruler and subjects, thus also protecting its own monopoly (Glenn 2007). Only the printing press provides the specific affordances conducive to the agonistic framework of representative, deliberative and participative democracies under the Rule of Law (Hildebrandt and Gutwirth 2007). To preserve the preconditions of constitutional democracy we need to acknowledge modernity’s dependence on sequential thinking (Wolf’s era of the reading brain) and its temporal structure that favours reflection over reflexes. This entails an attempt to engage with the benefits of modernity. Though hierarchical and linear models of social life may have lost territory, we may have to reconstruct and reengineer them insofar as they protect us from chaos and contingent power games. Of course this entails keeping hierarchies in bounds in view of the purpose they should serve.

A hierarchy that organizes countervailing powers may save us from the totalitarian rule of transnational computational decision-systems. Nevertheless, we should acknowledge that the dreams of early cyberspace utopists have not come true; the nation state has not lost its bearing and territorial jurisdiction has not become meaningless (Goldsmith and Wu 2008). This requires vigilance in the face of potential attempts to turn cyberspace into a set of Walled Gardens that might reinforce not merely totalitarianism but also tyranny (Mueller 2010). We must investigate how the novel affordances of cyberspace can be engineered in a way that sets us free while constraining those in charge, while fostering a fair distribution of capabilities (Cohen 2012). This urges us to take into account that whereas cyberspace may change the game for modernity’s incentive structure, it still feeds on the system of legal-political checks and balances that was generated by modernity’s socio-technical infrastructure.

3.3 Technology Neutrality and Legal Protection by Design

One way of dealing with the implications of cyberspace as a game changer is to integrate legal protection into its socio-technical backbone: its hardware, software and the numerous protocols and standards that enable and constrain its affordances. I have coined this ‘legal protection by design’, connecting the concept to research communities working on value-sensitive design (Flanagan et al. 2007), constructive technology assessment (Rip et al. 1995), upstream engagement with scientific research (Wynne 1995), privacy impact assessment (Wright and de Hert 2012) and privacy by design and default (Cavoukian 2009; Langheinrich 2001).

Legal protection by design is not about technical enforcement of legal compliance; legal problems cannot be solved by technical solutions. The concept of legal protection by design refers to novel articulations of fundamental legal rights into ICT infrastructures other than the printing press. Both lawyers and policy makers tend to display the Pavlovian reflex of writing and enacting new laws when legal problems occur, whereas cyberspace easily turns written law into a paper dragon. Modern law is articulated by means of the technology of the printing press and in cyberspace its monopoly seems hard to enforce. Moreover, public administration has developed techniques to automatically enforce written administrative rules by translating them into automated decision systems. Social security, taxation and numerous permits are now granted or imposed on the basis of such decisions (Citron 2007). Legal protection by design should, however, not be confused with such techno-regulation or technological enforcement of legal compliance. Law is not administration, politics or policy. Legal protection by design instead implies that written legal rules and their underlying unwritten legal principles develop a new type of technology-neutrality. Contrary to what some authors suggest, technology neutrality requires a keen eye on the normative implications of technological developments (Reed 2007; Hildebrandt 2008; Hildebrandt and Tielemans 2013). Wherever a technology changes the substance or the effectiveness of a right, its articulation must be reconsidered to take into account how we wish to reconceptualise and/or reframe the right within the network of related rights and principles. The socio-technical infrastructure of cyberspace often affects the network and the context of sets of rights; for instance, rights to compensation based on tort or breach of contract, as well as rights to privacy, due process and non-discrimination.
Technology neutrality therefore requires a lively debate amongst lawyers, but should also generate a similar debate amongst the architects of cyberspace on how to reinvent, to reengineer and to redesign democracy and the Rule of Law in the Onlife environment.

3.4 The Proposed Data Protection Regulation

Let’s now be practical. Though some inhabitants of the ONLIFE may claim that data protection is boring and concerns an outdated attempt to revive ‘old-school’ privacy, I would argue that the legal framework of Data Protection is particularly well tuned to the data-driven environment of cyberspace. Whereas the value of privacy may indeed have been an affordance of the era of the printing press (Stalder 2002), we should not sit back and sing its requiem; instead, we need to assess how to re-invent privacy as a dimension of the Onlife habitat. The Fair Information Principles that inform the legal framework of data protection seem particularly apt to cope with the flux of de- and re-contextualization that drives cyberspatial innovation (Kallinikos 2006). So far, however, these principles were articulated as paper dragons, trailing an irritating bureaucracy while at the same time enforcement seemed an illusion due to the lack of penal competence, budget and personnel on the side of data protection supervisors. Compliance has long been a matter of (minor) costs, to be taken into account after new business models were set in place.

The proposed Regulation could be a game changer. It establishes a new incentive structure and is based on a salient understanding of law’s need for effective [not theoretical] technology neutrality. The Regulation presents the combined force of a mandatory data protection impact assessment, data protection by default (data minimisation at the level of applications), data portability (enabling an effective right to withdraw consent without losing the value of self-generated profiles), the right to be forgotten (requiring effective mechanisms to achieve a reasonable measure of erasure of one’s personal data if no legal ground applies for their processing), rights against measures based on profiling (a right to object to being subjected to automated decisions and transparency rights as to the existence of such measures and their envisaged effects) and finally data protection by design (which imposes the duty of adequate mechanisms for compliance on commercial and governmental data controllers). All this would have no effect if the proposal had not ensured efficient mechanisms to incentivize the industry to actually develop data protection by design: the liability regime is inspired by competition law (fines of up to 2 % of global turnover) while the burden of proof by default rests with the data controllers. If the proposed Regulation survives the legislative process, it may finally create the level playing field that challenges companies and governments to develop intuitive and auditable transparency tools. ONLIFE inhabitants will then have the chance to play around with the system, exploring and inventing their identities in the interstices of the hybrid social contexts that shape their capabilities. This should empower them—us—to establish a new hybrid social contract that enables a plurality of publics, a choice of exposure and places to hide.
Writing did not erase speech, but it changed the nature of speech (Ong 1982); computational layers will not erase writing, but they will change the nature of the reading mind. This may be a good thing, but that will depend on how we invent the infrastructure that will invent us.