Keywords

Opening

An algorithm is conventionally defined as ‘a process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer’. 1 In this sense, an algorithm strictly speaking is nothing more than the ordering of steps that a combination of software and hardware might subsequently put into operation. It might seem odd, then, to write a book about the everyday life of a set of instructions. What life might the instructions have led, in what romance or crime might they have become entangled, what disappointments might they have had? These seem unlikely questions to pose. For an ethnographer, they also seem like questions that would be difficult to pursue. Even if the instructions were engaged in a variety of different social interactions, where would these take place and how could I ever get to know them?

A quick perusal of the instructions hanging around in my house reveals a slightly crumpled paper booklet on my home heating system, two sets of colourful Lego manuals setting out how to build a vehicle, and a form with notes on how to apply for a new passport. I have no idea how the latter arrived in my house or for whom it is necessary. But it is clear in its formality and precision. I also know my sons will shortly be home from school and determined in their efforts to build their new Lego. And I am aware of, but slightly annoyed by, the demands set by the heating instructions, which suggest my boiler pressure is too high (above 1.5 bars; after a quick Google, it turns out that a bar is the pressure required to raise water to a height of 10 metres). The pressure needs to be reduced, and I have known this all week and not acted on it. The instructions have annoyed me by instilling a familiar sense of inadequacy in my own (in)ability to manage my domestic affairs—of course, the instructions provide numbers, a written diagram, even some words, but their meanings and my required response remain out of reach.
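The quick Google conversion can itself be checked with the hydrostatic relation h = P/(ρg): the height of a water column supported by a given pressure. A minimal sketch, assuming standard gravity and fresh water (the values here are textbook constants, not anything from the heating manual):

```python
# Sanity check of the '1 bar = roughly 10 metres of water' conversion.
P = 100_000.0   # 1 bar expressed in pascals
rho = 1000.0    # density of fresh water, kg/m^3
g = 9.81        # standard gravitational acceleration, m/s^2

# Height of the water column that a pressure of 1 bar can support.
h = P / (rho * g)
print(round(h, 2))  # ~10.19 metres
```

So the manual's rule of thumb is slightly rounded but essentially right.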

In a sense, then, we are already witnessing the social life in which these instructions participate. The passport form has arrived from somewhere, for someone, and is clear in its formal status. The Lego was a gift and will no doubt become the centre of my children’s attention. And the heating system might break down if I don’t do something reasonably soon. These point to some of the cornerstones for contemporary living. Travel and transport, government and formal bureaucracy, gift giving and learning, domestic arrangements and shelter are all witness-able through the instructions and the life in which they participate. As with other participants in social life, the instructions are demanding, occasionally quite austere and/or useless. Making sense of these everyday entanglements might be quite important if we were interested in the everyday life of instructions, but is this the kind of everyday life in which algorithms participate?

Reading through the ever-expanding recent academic literature on algorithms, the answer would be a qualified no. The everyday, humdrum banalities of life are somewhat sidelined by an algorithmic drama. 2 Here, the focus is on algorithmic power, the agency held by algorithms in making decisions over our futures, decisions over which we have no control. The algorithms are said to be opaque, their content unreadable: a closely guarded and commodified secret whose very value depends upon retaining its opacity. All we get to see are their results: the continuing production of a stream of digital associations that form consequential relations between data sets. We are now data subjects or, worse, data derivatives (Amoore 2011). We are rendered powerless. We cannot know the algorithm or limit the algorithm or challenge its outputs.

A quick read through the news (via a search for ‘algorithm’) reveals numerous further stories of the capacity of algorithms to dramatically transform our lives. Once again, the humdrum banalities of the everyday activities in which the instructions participated are pushed aside in favour of a global narrative of unfolding, large-scale change. In the UK, the Guardian newspaper tells us that large firms are increasingly turning to algorithms to sift through job applications, 3 using personality tests at the point of application as a way to pick out patterns of answers and steer applicants towards rejection or the next phase of the application process. What is at stake is not the effectiveness of the algorithms, as little data is collected on whether or not the algorithms are making the right decisions. Instead, the strength of the algorithms is their efficiency, with employment decisions made at a scale, at a speed and at a low cost that no conventional human resources department could match.

In the USA, we are told of algorithmic policing that sets demands for police officers to continually pursue the same neighbourhoods for potential crime. 4 Predictive policing does not actively anticipate specific crimes, but uses patterns of previous arrests to map out where future arrests should be made. The algorithms create their own effects as police officers are held accountable by the algorithm for the responses they make to the system’s predictions. Once a neighbourhood has acquired a statistical pattern denoting high crime, its inhabitants will be zealously policed and frequently arrested, ensuring it maintains its high crime status.

Meanwhile in Italy, Nutella launches a new marketing campaign in which an algorithm continually produces new labels for its food jars. 5 Seven million distinct labels are produced, each feeding off an algorithmically derived set of colours and patterns that, the algorithm believes, consumers will find attractive. The chance to own a limited edition Nutella design, combined with these newspaper stories and an advertising campaign, drives an algorithmically derived consumer demand. But the story is clear: it is not the labels that are unique in any important way. It is the algorithm that is unique.

And in India, a robot that uses algorithms to detect patterns of activity in order to offer appropriate responses struggles with normativity. 6 The robot finds it hard to discern when it should be quiet or indeed noisier, what counts as a reasonable expectation of politeness, which subtle behavioural cues it should pick up on or to which it should respond. This is one small part of the unfolding development of algorithmic artificial intelligence and the emergence of various kinds of robots that will apparently replace us humans.

These stories are doubtless part of a global algorithmic drama. But in important ways, these stories promote drama at the expense of understanding. As Ziewitz (2016) asks: just what is an algorithm? In these stories, the algorithm seems to be a central character, but what the algorithm consists of, why, and how it participates in producing effects are all left to one side. There are aspects of everyday life that are emphasised within these stories: employment, policing, consumer demand and robotics are each positioned in relation to an aspect of ordinary activity, from job interviews to arrests and court trials, from markets and investments to the future role of robots in shaping conversations. But—and this seems to be the important part—we are not provided with any great insight into the everyday life of the algorithm. Through what means are these algorithms produced in the first place, how are they imagined, brought into being and put to work? Of what do the algorithms consist and to what extent do they change? What role can be accorded to the algorithm rather than the computational infrastructure within which it operates? And how can we capture the varied ways in which algorithms and everyday life participate in the composition of effects?

These are the questions that this book seeks to engage. As I noted in the opening example of the instructions in various locations around my house, ordered sets of step-by-step routines can establish their own specific demands and become entangled in some of the key social relations in which we participate. As I have further suggested in the preceding media stories, such ordered routines in the form of algorithms portray a kind of drama, but one that we need to cut through in order to investigate how the everyday and algorithms intersect. In the next section, I will begin this task by working through some of the recent academic literature on algorithms. I will then pursue the everyday as an important foreground for the subsequent story of algorithms. Finally, I will set out the structure of the rest of this book.

Algorithmic Discontent

One obvious starting point for an enquiry into algorithms is to look at an algorithm. And here, despite the apparent drama of algorithmic opacity, is an algorithm (Fig. 1.1):

Fig. 1.1 Abandoned luggage algorithm

This is taken from a project that sought to develop an algorithmic surveillance system for airport and train station security (and is introduced in more detail along with the airport and train station and their peculiar characteristics in Chapter 2). The algorithm is designed as a set of ordered step-by-step instructions for the detection of abandoned luggage. It is similar in some respects to the instructions for my home heating system or my children’s Lego. It is designed as a way to order the steps necessary for an effect to be brought about by others. However, while my heating system instructions are (nominally and slightly uselessly) oriented towards me as a human actor, the instructions here are for the surveillance system, its software and hardware which must bring about these effects (identifying abandoned luggage) for the system’s human operatives. In this sense, the algorithm is oriented towards human and non-human others. Making sense of the algorithm is not too difficult (although bringing about its effects turned out to be more challenging as we shall see in subsequent chapters). It is structured through four initial conditions (IF questions) that should lead to four subsequent consequences (THEN rules). The conditions required are: IF an object is identified within a set area that is classified as luggage, is separate from a human object, is above a certain distance from a human and for a certain time (with a threshold for distance and time set as required), THEN an ‘abandoned luggage’ alert will be issued. What can the recent academic literature on algorithms tell us about this kind of ordered set of instructions, conditions and consequences?
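The conditions and consequence described above can be sketched in code. The following is a minimal illustration of the IF/THEN logic as just described, not the project's actual implementation; the class, the function name and the threshold values are all assumptions introduced for the sketch:

```python
from dataclasses import dataclass

# Hypothetical thresholds: in the real system these are 'set as required'.
DISTANCE_THRESHOLD_M = 3.0   # metres from the nearest human
TIME_THRESHOLD_S = 60.0      # seconds the object has been separate

@dataclass
class DetectedObject:
    classification: str              # e.g. "luggage" or "human"
    in_monitored_area: bool          # within the set area under surveillance
    distance_to_nearest_human_m: float
    seconds_separate: float          # time the object has been apart from humans

def abandoned_luggage_alert(obj: DetectedObject) -> bool:
    """Return True when all four IF conditions hold, triggering the THEN rule."""
    return (
        obj.in_monitored_area                                        # IF within set area
        and obj.classification == "luggage"                          # IF classified as luggage
        and obj.distance_to_nearest_human_m > DISTANCE_THRESHOLD_M   # IF beyond distance threshold
        and obj.seconds_separate > TIME_THRESHOLD_S                  # IF separate for long enough
    )
```

Written this way, the algorithm's apparent simplicity is visible: the difficulty, as subsequent chapters show, lies not in the conditional logic but in the system reliably producing the classifications and measurements the conditions depend upon.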

Recent years have seen an upsurge in writing on algorithms. This literature points to a number of notable themes that have helped establish the algorithm as a focal point for contemporary concern. Key has been the apparent power of algorithms (Beer 2009; Lash 2007; Slavin 2011; Spring 2011; Stalder and Mayer 2009; Pasquale 2015) that is given effect in various ways. Algorithms are said to provide a truth for modern living, a means to shape our lives, play a central role in financial growth and forms of exchange and participate in forms of governmentality through which we become algorithmic selves. In line with the latter point, it is said we have no option but to make sense of our own lives on the terms of algorithms as we are increasingly made aware of the role, status and influence of algorithms in shaping data held about us, our employment prospects or our intimate relations. At least two notions of power can be discerned, then, in these accounts. There is a traditional sense of power in which algorithms act to influence and shape particular effects. In this sense, algorithms might be said to hold power. A second notion of power is more Foucauldian in its inclination, suggesting that algorithms are caught up within a set of relations through which power is exercised. Becoming an algorithmic self is thus an expression of the exercise of this power, but it is not a power held by any particular party. Instead, it is a power achieved through the plaiting of multiple relations. In either case, the algorithm is presented as a new actor in these forms and relations of power.

What can this tell us about our abandoned luggage algorithm? Written on the page, it does not seem very powerful. I do not anticipate that it is about to jump off the page (or screen) and act. It is not mute, but it also does not appear to be the bearer of any great agency. The notion that this algorithm in itself wields power seems unlikely. Yet its ordered set of instructions does seem to set demands. We might then ask for whom or what these demands are set. In making sense of the everyday life of this algorithm, we would want to pursue these demands. If the academic writing on power is understood as having a concern for effect, then we might also want to make sense of the grounds on which these demands lead to any subsequent action. We would have to follow the everyday life of the algorithm from its demands through to accomplishing a sense of how (to any extent) these demands have been met. This sets a cautionary tone for the traditional notion of power. To argue that the demands lead to effects (and hence support a traditional notion of power, one that is held by the algorithm) would require a short-cutting of all the steps. It would need to ignore the importance of the methods through which the algorithm was designed in the first place, and the software, hardware and human operatives that are each required to play various roles, institute further demands and take action off in a different direction (see Chapters 3 and 5 in particular), before an effect is produced. We would need to ignore all these other actors and actions to maintain the argument that it is the algorithm that holds power. Nonetheless, the Foucauldian sense of power, dispersed through the ongoing plaiting of relations, might still hold some analytic utility here: pursuing the everyday life of the algorithm might provide a means to pursue these relations and the effects in which they participate.

At the same time as algorithms are noted as powerful (in the sense of holding power) or as part of complex webs of relations through which power is exercised, an algorithmic drama (see Ziewitz 2016; Neyland 2016) plays out through their apparent inscrutability. To be powerful and inscrutable seems to sit centrally within a narrative of algorithmic mystery (just how do they work, what do algorithms do and how do they accomplish effect?) that is frequently combined with calls for algorithmic accountability (Diakopoulos 2013). Accountability is presented as distinct from transparency. While the latter might have utility for presenting the content or logic of an algorithm, accountability is said to be necessary for interrogating its outcomes (Felten 2012). Only knowing the content of an algorithm might be insufficient for understanding and deciding upon the relative justice of its effects. But here is where the drama is ratcheted up: the value of many commercial algorithms depends upon guarding their contents (Gillespie 2013). No transparency is a condition for the accumulation of algorithmically derived wealth. No transparency also makes accountability more challenging in judging the justice of an algorithm’s effects: not knowing the content of an algorithm makes pinning down responsibility for its consequences more difficult.

Our abandoned luggage algorithm presents its own contents. In this sense, we have achieved at least a limited sense of transparency. In forthcoming chapters, we will start to gain insights into other algorithms to which the abandoned luggage example is tied. But having the rules on the page does not provide a strong sense of accountability. In the preceding paragraphs, I suggested that insights into the everyday life of the algorithm are crucial to making sense of how it participates in bringing about effects. It is these effects and the complex sets of relations that in various ways underpin their emergence that need to be studied for the ordered steps of the abandoned luggage algorithm to be rendered accountable.

A further theme in recent writing has been to argue that algorithms should not be understood in isolation. Mythologising the status or power of an algorithm, the capability of algorithms to act on their own terms and their capacity to straightforwardly produce effects (Ziewitz 2016) have each been questioned. Here, software studies scholars have suggested we need both to take algorithms and their associated software/code seriously and to situate these studies within a broader set of associations through which algorithms might be said to act (Neyland and Mollers 2016). Up-close, ethnographic engagement with algorithms is presented as one means to achieve this kind of analysis (although as Kitchin [2014] points out, there are various other routes of enquiry also available). Getting close to the algorithm might help address the preceding concerns highlighted in algorithmic writing: opening up the inscrutable algorithm to a kind of academic accountability and deepening our understanding of the power of algorithms to participate in the production of effects. This further emphasises the importance of grasping the everyday life of the algorithm. How do the ordered steps of the abandoned luggage algorithm combine with various humans (security operatives, airport passengers, terminal managers and their equivalents in train stations) and non-humans (luggage, airports, software, trains, tracks, hardware) on a moment-to-moment basis?

Yet algorithmic writing also produces its own warnings. Taken together, writing on algorithms suggests that there is not one single matter of concern to take on and address. Alongside power, inscrutability and accountability, numerous questions are raised regarding the role of algorithms in making choices, political preferences, dating, employment, financial crises, death, war and terrorism (Crawford 2016; Karppi and Crawford 2016; Pasquale 2015; Schuppli 2014), among many other concerns. The suggestion is that algorithms do not operate in a single field or produce effects in a single manner or raise a single question or even a neatly bounded set of questions. Instead, what is required is a means to make sense of algorithms as participants in an array of activities that are all bound up with the production of effects, some of which are unanticipated, some of which seem messy and some of which require careful analysis before they can be made to make sense. It is not the case that making sense of the life of our abandoned luggage algorithm will directly shed light on all these other activities. However, it will provide a basis for algorithmically focused research to move forward. This, I suggest, can take place through a turn to the everyday.

Everyday

Some existing academic work on algorithms engages with ‘algorithmic life’ (Amoore and Piotukh 2015). But this tends to mean the life of humans as seen (or governed) through algorithms. If we want to make sense of algorithms, we need to engage with their everyday life. However, rather than continually repeat the importance of ‘the everyday’ as if it is a concept that can somehow address all concerns with algorithms, or as if it is in itself available as a neat context within which things will make sense, I suggest we need to take seriously what we might mean by the ‘everyday life’ of an algorithm. If we want to grasp a means to engage with the entanglements of a set of ordered instructions like our abandoned luggage algorithm, then we need to do some work to set out our terms of engagement.

The everyday has been a focal point for sociological analysis for several decades. Goffman’s (1959) pioneering work on the dramaturgical staging of everyday life provides serious consideration of the behaviour, sanctions, decorum, controls and failures that characterise an array of situations. De Certeau (1984), by switching focus to the practices of everyday life, brings rules, bricolage, tactics and strategies to the centre of his analysis of the everyday. And Lefebvre (2014) suggests across three volumes that the everyday is both a site of containment and potential change. The everyday of the algorithm will be given more consideration in subsequent chapters, but what seems apparent in these works is that, for our purposes, the technologies or material forms that take part in everyday life are somewhat marginalised. Technologies are props in dramaturgical performances (in Goffman’s analysis of the life of crofters in the Shetland Islands) or a kind of background presence to practices of seeing (in de Certeau’s analysis of a train journey). Lefebvre enters into a slightly more detailed analysis of technology, suggesting for example that ‘computer scientists proclaim the generalization of their theoretical and practical knowledge to society as a whole’ (2014: 808). But Lefebvre’s account is also dismissive of the analytic purpose of focusing on technologies as such, suggesting ‘it is pointless to dwell on equipment and techniques’ (2014: 812). Taken together, as far as that is possible, these authors’ work suggests few grounds for opening up the everyday life of technology. Perhaps the most that could be said is that, based on these works, an analysis of the everyday life of an algorithm would need to attend to the human practices that then shape the algorithm.
Even everyday analyses that devote lengthy excursions to technology, such as Braudel’s (1979) work on everyday capitalism, tend to treat technologies as something to be catalogued as part of a historical inventory. To provide analytical purchase on the algorithm as a participant in everyday life requires a distinct approach.

One starting point for taking the everyday life of objects, materials and technologies seriously can be found in Latour’s search for the missing masses. According to Latour, sociologists:

are constantly looking, somewhat desperately, for social links sturdy enough to tie all of us together… The society they try to recompose with bodies and norms constantly crumbles. Something is missing, something that should be strongly social and highly moral. Where can they find it? … To balance our accounts of society, we simply have to turn our exclusive attention away from humans and look also at nonhumans. Here they are, the hidden and despised social masses who make up our morality. They knock at the door of sociology, requesting a place in the accounts of society as stubbornly as the human masses did in the nineteenth century. What our ancestors, the founders of sociology, did a century ago to house the human masses in the fabric of social theory, we should do now to find a place in a new social theory for the nonhuman masses that beg us for understanding. (1992: 152–153)

Here, the non-humans should not simply be listed as part of an inventory of capitalism. Instead, their role in social, moral, ethical and physical actions demands consideration. But in this approach, ‘social’ is not to be understood on the conventional terms of sociologists as a series of norms that shape conduct or as a context that explains and accounts for action. Instead, efforts must be made to make sense of the means through which associations are made, assembled or composed. Everyday life, then, is an ongoing composition in which humans and non-humans participate. The algorithm might thus require study not as a context within which everyday life happens, but as a participant. Such a move should not be underestimated. Here, Latour tells us, we end the great divide between social and technical, and assumptions that humans ought to hold status over non-humans in our accounts. Instead, we start to open up an array of questions. As Michael suggests, in this approach: ‘everyday life is permeated by technoscientific artefacts, by projections of technoscientific futures and by technoscientific accounts of the present’ (2006: 9).

We can also start to see in this move to grant status to the non-human that questions open up as to precisely how such status might be construed. Assembly work or composition certainly could provide a way to frame a study of the algorithm as a participant in everyday action, but how does the algorithm become (or embody) the everyday? Mol (2006) suggests that the nature of matters—questions of ontology—is accomplished. In this line of thought, it is not that ‘ontology is given before practices, but that different practices enable different versions of the world. This turns ontology from a pre-condition for politics into something that is, itself, always at stake’ (Mol 2006: 2). The analytic move here is not just to treat the algorithm as participant, but to understand that participation provides grounds for establishing the nature of things, a nature that is always at stake. Being at stake is the political condition through which the nature of things is both settled and unsettled. But what does this tell us of the everyday?

Pollner’s (1974) account of mundane reason points us towards a detailed consideration of the interactions through which the everyday is accomplished. Pollner draws on the Latin etymology of the word mundane (mundus) to explore how matters are not just ordinary or pervasive, but become of the world. What is settled and unsettled, what is at stake, is this becoming. For Pollner, the pertinent question in his study of US court decisions on speeding is how putative evidence that a car was driving at a certain speed can become of the world of the court charged with making a decision about a driver’s possible speeding offence. Through what organisational relations, material stuff, responsibilities taken on, and accountabilities discharged, can potential evidence come to be of the world (a taken-for-granted, accepted feature) of the court’s decision-making process? Pollner suggests that in instances of dispute, accountability relations are arranged such that a car and its driver cannot be permitted to drive at both 30 and 60 miles per hour simultaneously—the evidence must be made to act on behalf of one of the accounts (30 or 60), not both. Selections are made in order to demarcate what will and what will not count, what will become part of the world of the court and what will be dismissed, at the same time as responsibilities and accountabilities for action are distributed and their consequences taken on. Making sense of the algorithm, its enactment of data, its responsibilities and accountabilities on Pollner’s terms sets some demanding requirements for our study. How does the abandoned luggage algorithm that we initially encountered insist that data acts on behalf of an account as human or luggage, as relevant or irrelevant, as requiring an alert and a response or not? Although these actions might become the everyday of the algorithm, they might be no trivial matter for the people and things of the airport or train station where the algorithm will participate.
The status of people and things will be made always and already at stake by the very presence of the algorithm.

This further points towards a distinct contribution of the algorithm: not just participant, not just at stake in becoming, but also a means for composing the everyday. To return to Latour’s no-longer-missing masses, he gives consideration to an automated door closer—known as a groom—that gently closes the door behind people once they have entered a room. The groom, Latour suggests, can be treated as a participant in the action in three ways:

first, it has been made by humans; second, it substitutes for the actions of people and is a delegate that permanently occupies the position of a human; and third, it shapes human action by prescribing back what sort of people should pass through the door. (1992: 160)

Prescribing back is the means through which the door closer acts on the human, establishing the proper boundaries for walking into rooms and the parameters for what counts as reasonably human from the groom’s perspective (someone with a certain amount of strength, ability to move and so on). Prescribing acts on everyday life by establishing an engineered morality of what ought to count as reasonable in the human encounters met by the groom. This makes sense as a premise: to understand the abandoned luggage algorithm’s moves in shaping human encounters, we might want to know something of how it was made by humans, how it substitutes for the actions of humans and what it prescribes back onto the human (and these will be given consideration in Chapter 3). But as Woolgar and Neyland (2013) caution, the certainty and stability of such prescribing warrants careful scrutiny. Prescribing might, on the one hand, form an engineer’s aspiration (in which case its accomplishment requires scrutiny) or, on the other hand, might be an ongoing basis for action, with humans, doors and grooms continuously involved in working through the possibilities for action, with the breakdown of the groom throwing open the possibility of further actions. In this second sense, prescribing is never more than contingent (in which case its accomplishment also requires scrutiny!).

Collectively these ideas seem to encourage the adoption of three kinds of analytical sensibility 7 for studying the everyday life of an algorithm. First, how do algorithms participate in the everyday? Second, how do algorithms compose the everyday? Third, how (to what extent, through what means) does the algorithmic become the everyday? These will be pursued in the subsequent chapters, to which I now turn.

The Structure of the Argument

Building on the abandoned luggage algorithm, Chapter 2 will set out the algorithms and their human and non-human associations that will form the focus for this study. The chapter will focus on one particular algorithmic system developed for public transport security and explore the ways in which the system provided a basis for experimenting with what computer scientists termed human-shaped objects. In contrast with much of the social science literature on algorithms, which suggests the algorithm itself is more or less fixed or inscrutable, this chapter will instead set out one basis for ethnographically studying the algorithm up-close and in detail. Placing algorithms under scrutiny opens up the opportunity for studying their instability and the ceaseless experimentation to which they are subjected. One basis for organising this experimentation is what Glynn (2010) terms elegance. Drawing on the recent growth of qualitative social science experimentation (Adkins and Lury 2012; Marres 2013; Corsín Jiménez and Estalella 2016), the chapter will consider how elegance is (and to an extent is not) accomplished. A fundamental proposition of the security system under development was that the algorithmic system could sift through streams of digital video data, recognise and then make decisions regarding humans (or at least human-shaped objects). Rather than depending on the laboratory experiments of natural science and economics analysed by science and technology studies (STS) scholars (focusing on the extension of the laboratory into the world or the world into the laboratory; Muniesa and Callon 2007), working on human-shaped objects required an ongoing tinkering with a sometimes bewildering array of shared, possible and (as it turned out) impossible controls. The chapter will suggest that elegance opens up a distinct way to conceive of the experimental prospects of algorithms under development and their ways of composing humans.

Chapter 3 will then develop the insights of Chapter 2 (on human-shaped objects and elegance) in exploring the possibility of rendering the everyday life of algorithms accountable, and the form such accountability might take. Although algorithmic accountability is currently framed in terms of openness and transparency (James 2013; Diakopoulos 2013), the chapter will draw on ethnographic engagements with the algorithmic system under development to show empirically the difficulties (and indeed pointlessness) of achieving this kind of openness. Rather than presenting an entirely pessimistic view, the chapter will suggest that alternative forms of accountability are possible. In place of transparency, the chapter focuses on STS work that pursues the characteristics, agency, power and effect of technologies as the upshot of the network of relations within which a technology is positioned (Latour 1990; Law 1996). Moving away from the idea that algorithms have fixed, essential characteristics or straightforward power or agency opens up opportunities for developing a distinct basis of accountability in action.

While experimentation provides one means to take the algorithmic literature in a different direction, in the system under development, deletion also opened up an analytic space for rethinking algorithms. Deletion became a key priority of the emerging system: just how could terabytes of data be removed from a system, freeing up space and reducing costs, while not threatening the kinds of security-related activities the system was designed to manage? In Chapter 4, the calculative basis for deletion will be used to draw together studies of algorithms with the long-standing STS interest in the calculative. STS work on calculation raises a number of challenging questions. These include how accuracy is constructed (MacKenzie 1993), the accomplishment of numeric objectivity (Porter 1995), and trading, exchange and notions of equivalence (Espeland and Sauder 2007; MacKenzie 2009). The kinds of concern articulated in these works are not focused on numbers as an isolated output of calculation. Instead, numbers are considered as part of a series of practical actions involved in, for example, solving a problem (Livingston 2006), distributing resources, accountabilities or responsibilities for action (Strathern 2002), governing a country (Mitchell 2002) and ascertaining a value for some matter (Espeland and Sauder 2007; MacKenzie 2009). Attuning these ideas to algorithms provides insight into not just the content of an algorithm, but its everyday composition, effects and associated expectations. However, deletion also poses a particular kind of problem: the creation of nothing (the deleted) needs to be continually proven. The chapter explores the complexities of calculating what ought to be deleted, what form such deletion ought to take and whether or not data has indeed been deleted. These focal points and the difficulties of providing proof are used to address suggestions in contemporary research that algorithms are powerful and agential, easily able to enact and execute orders. Instead, the chapter calls for more detailed analysis (picked up in the next chapter) of what constitutes algorithmic success and failure.

Following on from Chapter 4’s call for more detailed analysis of success and failure, Chapter 5 explores the problems involved in demonstrating an algorithmic system to a variety of audiences. As the project team drew closer to its final deadlines and faced up to the task of putting on demonstrations of the technology under development to various audiences—including the project funders—it became ever more apparent that in a number of ways the technology did not work. That is, promises made to funders, to academics, to potential end users on an advisory board, and to ethical experts brought in to assess the technology might not be met. In project meetings, it rapidly became apparent that a number of ways of constituting a response to different audiences and their imagined demands could be offered. This was not simply a binary divide between providing a single truth or falsehood. Instead, a range of more or less ‘genuine’ demonstrations with greater or lesser integrity were discursively assembled by the project team, and ways to locate and populate, witness and manage the assessment of these demonstrations were brought to the table. Holding in place witnesses, technologies and practices became key to successfully demonstrating the algorithm. In this chapter, the notion of integrity will be used to suggest that ideas of sight, materiality and morality can be reworked and incorporated into the growing literature on algorithms, as a basis for investigating what it means, in everyday life, for an algorithmic system to work.

The final chapter explores how a market can be built for an algorithmic system under development. It draws together studies of algorithms with the growing literature in STS on markets and the composition of financial value (Callon 1998; MacKenzie et al. 2007; Muniesa et al. 2007, 2017). In particular, it focuses on performativity (see, e.g., MacKenzie et al. 2007; MacKenzie 2008; Cochoy 1998; drawing on Austin 1962). Although STS work on markets has on occasion looked into financial trading through algorithms, the move here is to explore market making for algorithms. To accomplish this kind of market work and build a value for selecting relevance and deleting irrelevance, the project co-ordinators had to look beyond accountable outputs of technical certainty (given that, as Chapter 5 will show, the machine had trouble delineating relevance and adequately deleting data). Instead, they looked to build a market for the algorithmic system through other means. Rather than trying to sell technological efficacy, the project co-ordinators sought to build a market of willing customers (interested in a technology that might enable them to comply with emerging regulations) who were then constituted as a means to attract others to (potentially) invest in the system. Building a market here involved different kinds of calculations (such as Compound Annual Growth Rates for the fields in which the system might be sold) to forecast a market share. Such a forecast might accomplish value by attracting interested parties whose investments might, in turn, bring the forecast closer to reality; the calculations had to enter a performative arena. This final chapter will suggest that market work is an important facet of the everyday life of an algorithm, without which algorithmic systems such as the one featured in this book would not endure. The chapter concludes with an analysis of the distinct and only occasionally integrated everyday lives of the algorithm.

Notes

  1.

    See Concise Oxford Dictionary (1999).

  2.

    Ziewitz (2016) and Neyland (2016).

  3.

    https://www.theguardian.com/science/2016/sep/01/how-algorithms-rule-our-working-lives.

  4.

    http://uk.businessinsider.com/harvard-mathematician-reveals-algorithms-make-justice-system-baised-worse-black-people-crime-police-2017-6?r=US&IR=T.

  5.

    https://www.dezeen.com/2017/06/01/algorithm-seven-million-different-jars-nutella-packaging-design/.

  6.

    http://economictimes.indiatimes.com/news/science/new-algorithm-to-teach-robots-human-etiquettes/articleshow/59175965.cms.

  7.

    An analytic sensibility is a way of thinking about and organising a determined scepticism towards the ethnographic material that a researcher is presented with during the course of their fieldwork. It is not as strong as an instruction (there is no ethnographic algorithm here), but neither is it entirely without purpose. The analytic sensibility can set a direction, which can then be called to sceptical account as the study develops.