
Opening

Simply being able to see an algorithm in some ways displaces aspects of the drama that I noted in the Introduction to this book. If one of the major concerns with algorithms is their opacity, then being able to look at our abandoned luggage algorithm would be a step forward. However, as I have also tried to suggest thus far in this book, looking at a set of IF-THEN rules is insufficient on its own to render an algorithm accountable. Algorithms combine with system architectures, hardware components, software/code, people, spaces, experimental protocols, results, tinkering and an array of other entities through which they take shape. Accountability for the algorithm would amount to not just seeing the rules (a limited kind of transparency) but making sense of the everyday life of the algorithm—a form of accountability in action. In this chapter, we will move on with the project on airport and train station security and the development of our algorithm to try to explore a means by which accountability might be accomplished. We will again look at the question of how an algorithm can participate in everyday life, but now with an interest in how that everyday life might be opened to account. We will also look further at how an algorithmic means to make sense of things becomes the everyday and what this means for accountability. And we will explore how algorithms don’t just participate in the everyday but also compose the everyday. The chapter will begin by setting out a possible means to think through algorithmic accountability. In place of focusing on the abandoned luggage algorithm, we will look at how the algorithmic system makes sense of and composes the everyday through its User Interface and Route Reconstruction system. Then, we will consider a different form of accountability through an ethics board. The chapter will conclude with some suggestions on the everyday life of algorithmic accountability.

Accountability

Within the project we are considering, the ethical aims put forward from the original bid onwards were to reduce the amount of visual data made visible within a video surveillance system, to reduce the amount of data that gets stored and to do so without developing new algorithms. These were positioned as a basis on which my ethnographic work could hold the system to account. They were also presented as a potential means to address popular concerns regarding questions of algorithmic openness and transparency, at least theoretically enabling the algorithm, its authorship and consequences to be called to question by those subject to algorithmic decision-making processes (James 2013; Diakopoulos 2013). A more accountable algorithm might address concerns expressed in terms of the ability of algorithms to trap us and control our lives (Spring 2011), produce new ways to undermine our privacy (Stalder and Mayer 2009) and have power, an independent agency to influence everyday activities (Beer 2009; Lash 2007; Slavin 2011). A formal process of accountability might also help overcome the troubling opacity of algorithms, addressing Slavin’s concern that: ‘We’re writing things we can no longer read’ (2011: n.p.).

However, the social science literature provides a variety of warnings on systems and practices of accounting and accountability. For example, Power (1997) suggests in his formative audit society argument that the motifs of audit have become essential conditions for meeting the aims of regulatory programmes that problematically reorient the goals of organisations. This work draws on and feeds into neo-Foucauldian writing on governmentality (Ericson et al. 2003; Miller 1992; Miller and O’Leary 1994; Rose 1996, 1999). Here, the suggestion is made that, for example, government policies (from assessing value for money in public sector spending, through to the ranking of university research outputs) provide rationales to be internalised by those subject to accounts and accountabilities. The extent and adequacy of the take-up of these rationales then forms the basis for increasing scrutiny of the accounts offered by people or organisations in response. This sets in train a programme of responsibilisation and individualisation whereby subjects are expected to deliver on the terms of the rationale, while taking on the costs of doing so, allowing ‘authorities … [to] give effect to government ambitions’ (Rose and Miller 1992: 175). For Foucault (1980), this provides: ‘A superb formula: power exercised continuously and for what turned out to be a minimal cost’ (1980: 158).

In this literature, the endurance of accounts and accountabilities is explained through the structured necessity of repetition. That is, alongside efficiency, accounts and accountabilities become part of an ordered temporality of repeated assessment in, for example, performance measurements where the same organisations and processes are subject to accounts and accountabilities at set intervals in order to render the organisation assessable (Power 1997; Rose 1999; Rose and Miller 1992). For Pentland, this repetition forms audit rituals which have: ‘succeeded in transforming chaos into order’ (1993: 606). In particular, accounts and accountabilities have introduced a ‘ritual which transforms the financial statements of corporate management from an inherently untrustworthy state into a form that the auditors and the public can be comfortable with’ (1993: 605). Efforts to make algorithms accountable might thus need to consider the kinds of rituals these procedures could introduce, the power relations they could institute and the problematic steering of organisational goals that could result.

This literature on the formal processes and repetitive procedures for accounting and accountabilities suggests emerging calls for algorithmic accountability would provide a fertile ground for the continued expansion of accounts and accountabilities into new territories. Procedures for accountability might expand for as long as there are new organisations, technologies or audiences available, presenting new opportunities for carrying out the same processes (see, e.g., Osborne and Rose [1999] on the expansion of governmentality; also see Ferguson and Gupta [2002] on the creation of the individual as auditor of their own ‘firm’). Accounts and distributions of accountability then become an expectation, something that investors, regulators and other external audiences expect to see. Being able to account for the accountability of a firm can then become part of an organisation’s market positioning as transparent and open, as ethical, as taking corporate social responsibility seriously (Drew 2004; Gray 1992, 2002; Neyland 2007; Shaw and Plepinger 2001). Furthermore, as Mennicken (2010) suggests, once accountability becomes an expectation, auditors, for example, can seek to generate markets for their activities. Alternatively, the outcomes of forms of accounts and accountabilities become market-oriented assets in their own right, as is the case with media organisations promoting their league tables as one way to attract custom (such as the Financial Times MBA rankings, see Free et al. 2009). As we will see in Chapter 6, being able to promote the ethical algorithmic system as accountable became key to the project that features in this book, as a way to build a market for the technology.

This may sound somewhat foreboding: accountability becomes a ritual expectation, it steers organisational goals in problematic ways, and it opens up markets for the processes and outputs of accountability. Yet what we can also see in Chapter 2 is that what an algorithm is, what activities it participates in, and how it is entangled in the production of future effects, is subject to ongoing experimentation. Hence, building a ritual for algorithmic accountability seems somewhat distant. It seems too early, and algorithms seem too diverse, to introduce a single and universal, ritualised form of algorithmic accountability. It also seems too early to be able to predict the consequences of algorithmic accountability. A broad range of consequences could ensue from algorithmic accountability. For example, accounts and accountabilities might have unintended consequences (Strathern 2000, 2002), might need to consider the constitution of audience (Neyland and Woolgar 2002), the enabling and constraining of agency (Law 1996), what work gets done (Mouritsen et al. 2001), who and what gets hailed to account (Munro 2001), the timing and spacing of accounts (Munro 2004) and their consequence. But as yet we have no strong grounds for assessing these potential outcomes of accountability in the field of algorithms. And calls for algorithmic accountability have thus far mostly been focused on introducing a means whereby data subjects (those potentially subjected to algorithmic decision-making and their representatives) might be notified and be given a chance to ask questions or challenge the algorithm. An audit would also require that the somewhat messy experimentation of Chapter 2, the different needs and expectations of various partner organisations, and my own struggles to figure out what I was doing as an ethnographer all be frozen in time and ordered into an account. The uncertainties of experimentation would need to be ignored, and my own ongoing questions would need to be side-lined to produce the kind of order in which formal processes of accountability excel. The everyday life of the algorithm would need to be overlooked. So what should an accountable algorithm look like? How could I, as an ethnographer of a developing project, work through a means to render the emerging algorithm accountable that respected these uncertainties, forms of experimentation and ongoing changes in the system but still provided a means for potential data subjects to raise questions?

One starting point for moving from the kinds of formal processes of accountability outlined above to an approach specifically attuned to algorithms is provided by Science and Technology Studies (STS). The recent history of anti-essentialist or post-essentialist research (Rappert 2001) in STS usefully warns us against attributing single, certain and fixed characteristics to things (and people). Furthermore, STS research on technologies, their development and messiness also suggests that we ought to maintain a deep scepticism towards claims regarding the agency or power of technology to operate alone. As I suggested in the Introduction to this book, in STS work, the characteristics, agency, power and effect of technologies are often treated as the upshot of the network of relations within which a technology is positioned (Latour 1990; Law 1996). Rather than seeing agency or power as residing in the algorithm, as suggested by much of the recent algorithm literature, this STS approach would be more attuned to raising questions about the set of relations that enable an algorithm to be brought into being.

If we take accountability to mean opening up algorithms to question by data subjects and their representatives, this STS approach prompts some important challenges. We need to get close to the everyday life in which the algorithm participates in order to make sense of the relations through which it accomplishes its effects. We need to make this everyday life of the algorithm open to question. But then we also need to know something about how the algorithm is itself involved in accounting for everyday life. How can we make accountable the means through which the algorithm renders the world accountable?

The ethical aims to see less and store less data provided one basis for holding the system to account, but developing the precise method for rendering the algorithmic system accountable was to be my responsibility. Traditional approaches to ethical assessment have included consequentialist ethics (whereby the consequences of a technology, e.g., would be assessed) and deontological ethics (whereby a technology would be assessed in relation to a set of ethical principles; for a discussion, see Sandvig et al. 2013). However, these traditional approaches seemed to fit awkwardly with the STS approach and its post-essentialist warnings. To judge the consequences of the algorithm, or the extent to which it matched a set of deontological principles, appeared to require the attribution of fixed characteristics and a fixed path of future development to the algorithm while it was still under experimentation (and, for all I knew, this might be a ceaseless experimentation, without end). As a counter to these approaches, ethnography seemed to offer an alternative. In place of any assumptions at the outset regarding the nature and normativity of algorithms, their system, the space, objects or people with whom they would interact in the project (a deontological ethics), my ethnography might provide a kind of unfolding narrative of the nature of various entities and how these might be made accountable. However, unlike a consequentialist ethics whereby the outcomes of the project could be assessed against a fixed set of principles, I instead suggested that an in-depth understanding of how the algorithms account for the world might provide an important part of accountability.

If putting in place a formal process of accountability and drawing on traditional notions of ethics were too limited for rendering the algorithm accountable, then what next? My suggestion to the project participants was that we needed to understand how the algorithm was at once a participant in everyday life and used that participation to compose accounts of everyday life. Algorithmic accountability must thus move between two registers of accountability. The first sense of accountability, through which the algorithm might be held to account, needed to be combined with a second sense of accountability, through which the algorithm engages in the process of making sense of the world. I suggested we could explore this second sense of accountability through ethnomethodology.

In particular, I looked to the ethnomethodological use of the hyphenated version of the term: account-able (Garfinkel 1967; Eriksen 2002). Garfinkel suggests that “the activities whereby members produce and manage settings of organized everyday affairs are identical with members’ procedures for making those settings ‘account-able’” (1967: 1). For ethnomethodologists, this means that actions are observable-reportable; their character derives from the ability of other competent members to assess and make sense of actions. Importantly, making sense of actions involves the same methods as competently taking part in the action. To be account-able thus has a dual meaning of being demonstrably open to inspection as an account of some matter and being able to demonstrate competence in making sense of some matter (see Lynch 1993; Dourish 2004 for more on this). This might be a starting point for a kind of algorithmic account-ability in action.

Although ethnomethodologists have studied the account-able character of everyday conversations, they have also developed a corpus of workplace studies (Heath and Button 2002). Here, the emphasis is on the account-able character of, for example, keeping records, following instructions, justifying actions in relation to guidelines and informing others what to do and where to go (Lynch 1993: 15). For Button and Sharrock (1998), actions become organisationally account-able when they are done so that they can be seen to have been done on terms recognisable to other members within the setting as competent actions within that organisation. Extending these ideas, studying the algorithm on such terms would involve continuing our study of the work of computer scientists and others involved in the project as we started in Chapter 2, but with an orientation towards making sense of the terms of account-ability within which the algorithm comes to participate in making sense of a particular scene placed under surveillance . This is not to imply that the algorithm operates alone. Instead, I will suggest that an understanding of algorithmic account-ability can be developed by studying how the algorithmic system produces outputs that are designed to be used as part of organisational practices to make sense of a scene placed under surveillance by the algorithmic system. In this way, the human-shaped object and luggage-shaped object of Chapter 2 can be understood as part of this ongoing, account-able production of the sense of a scene in the airport or train station in which the project is based. I will refer to these sense-making practices as the account-able order of the algorithmic system. Importantly, having algorithms participate in account-ability changes the terms of the account-able order (in comparison with the way sense was made of the space prior to the introduction of the algorithmic system).

Making sense of this account-able order may still appear to be some distance from the initial concerns with accountability which I noted in the opening to this chapter, of algorithmic openness and transparency. Indeed, the ethnomethodological approach appears to be characterised by a distinct set of concerns, with ethnomethodologists interested in moment-to-moment sense-making, while calls for algorithmic accountability are attuned to the perceived needs of those potentially subject to actions deriving from algorithms. The account-able order of the algorithm might be attuned to the ways in which algorithms participate in making sense of (and in this process composing) everyday life. By contrast, calls for algorithmic accountability are attuned to formal processes whereby the algorithm and its consequences can be assessed. However, Suchman et al. (2002) suggest that workplace actions, for example, can involve the simultaneous interrelation of efforts to hold each other responsible for the intelligibility of our actions (account-ability) while located within constituted ‘orders of accountability’ (164). In this way, the account-able and the accountable, as different registers of account, might intersect. In the rest of this chapter, I will suggest that demands for an algorithm to be accountable (in the sense of being transparent and open to question by those subject to algorithmic decision-making and their representatives) might benefit from a detailed study of the account-able order of an algorithmic system under development. Being able to elucidate the terms of algorithmic participation in making sense of scenes placed under surveillance—as an account-able order—might assist in opening the algorithmic system to accountable questioning. However, for this to be realised requires practically managing the matter of intersecting different registers of account.

Getting the account-able to intersect with the accountable took some effort even before the project began. I proposed combining my ethnomethodologically inflected ethnography of the algorithm’s account-able order with a particular form of accountability—an ethics board to whom I would report and who could raise questions. The interactions through which the algorithm came to make sense of particular scenes—as an account-able order—could be presented to the ethics board so that they could raise questions on behalf of future subjects of algorithmic decision-making—a form of algorithmic accountability. As the following sections will show, intersecting registers of accounts (account-ability and accountability) did not prove straightforward. The next section of the chapter will detail efforts to engage with the account-able order of the algorithm through the User Interface and Route Reconstruction components of the system. We will then explore the intersection of account registers through the ethics board.

Account-ability Through the User Interface and Route Reconstruction

For the algorithm to prove itself account-able—that is, demonstrably able to participate in the production of accounts of the everyday life of the train station and the airport—required an expansion of the activities we already considered in Chapter 2. Being able to classify human-shaped and luggage-shaped objects through mapping the fixed attributes of the setting, parameterising the edges of objects, classifying objects, identifying their action states and producing bounding boxes or close-cropped images was also crucial to figuring out a means to participate in the production of accounts. These efforts all went into the production of alerts (a key form of algorithmic account) for operatives of the train station and airport surveillance system who could then take part in the production of further accounts. They could choose to ignore alerts they deemed irrelevant or work through an appropriate response, such as calling for security operatives in, for example, the Departure Lounge to deal with an item of luggage. At least, that was how the project team envisaged the future beyond the experimental phase of algorithmic work. Project participants from StateTrack and SkyPort who worked with the existing surveillance system on a daily basis, at this experimental stage, also more or less concurred with this envisaged future.

A future in which alerts were sent to operatives, cutting down on the data that needed to be seen and the data that needed to be stored, seemed a potentially useful way forward for operatives and their managers (who were also interested in cutting down on data storage costs, see Chapters 4 and 6). But this was a cautious optimism. At this experimental stage of the project, operatives wanted to know what the alerts would look like when they received them, how they would issue responses, and how others in the airport would receive these responses and respond further. Their everyday competences were oriented towards seeing as much as possible in as much detail as possible and reading the images for signs of what might be taking place. They mostly operated with what Garfinkel (1967) referred to as a relation of undoubted correspondence between what appeared to be happening in most images and what they took to be the unfolding action. In moments of undoubted correspondence, it was often the case that images could be ignored—they appeared to be what they were because very little was happening. However, it was those images that raised a concern—a relation of doubted correspondence between what appeared to be going on and what might unfold—that the operatives seemed to specialise in. It was these images of concern that they had to read, make sense of, order and respond to, that they would need to see translated into alerts (particularly if all other data was to remain invisible or even be deleted). How could the algorithms for abandoned luggage, moving the wrong way and entry into a forbidden area act as an experimental basis for handling this quite complex array of image competencies?

The computer scientists established an initial scheme for how the three types of relevance detection algorithm would work. The Event Detection system would sift through video frames travelling through the system and use the media proxy we met in Chapter 2 to draw together the streams of video from cameras across the surveillance network. This would use the Real-Time Streaming Protocol for MPEG4, with JSON (JavaScript Object Notation) as a data interchange format for system analysis. Each stream would have a metadata time stamp. The relevance detection algorithms for abandoned luggage, moving the wrong way and entering a forbidden space would then select out putative object types (using a Codebook algorithm for object detection), focusing on their dimensions, direction and speed. As we noted in Chapter 2, the system would then generate bounding boxes for objects, which would in turn produce a stream of metadata related to each bounding box based on its dimensions and timing—how fast it moved and in what direction. This would also require a further development of the map of fixed attributes used for background subtraction in Chapter 2. Areas where entry was forbidden for most people (e.g. secure areas and train tracks) and areas where the direction of movement was sensitive (e.g. exits at peak times in busy commuter train stations) would need to be added to the maps. Producing an alert was no longer limited to identifying a human-shaped object (or luggage-shaped or any other shaped object)—even though that was challenging in its own ways. The system would now have to use these putative classifications to identify those human-shaped objects moving in the wrong direction or into the wrong space, along with those human-shaped objects that became separate from their luggage-shaped objects. Objects’ action states as moving or still, for example, would be central. For the algorithms to be able to do this demonstrably within the airport and train station was crucial to being able to produce alerts and participate in account-ability.
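To make the scheme concrete, the sketch below illustrates one way bounding-box metadata and IF-THEN relevance rules of this kind might be expressed. It is a minimal illustration only: the data structures, thresholds and region formats are assumptions made for this example and are not drawn from the project's actual code.

```python
# A minimal, illustrative sketch of bounding-box metadata and IF-THEN relevance
# rules of the kind described above. All names, thresholds and region formats
# are assumptions for illustration, not the project's code.
import math
from dataclasses import dataclass


@dataclass
class ObjectMetadata:
    object_id: str
    object_type: str      # "human-shaped" or "luggage-shaped"
    x: float              # bounding-box centre in the camera frame
    y: float
    direction: float      # heading of movement, in degrees
    state: str            # action state: "moving" or "still"
    still_seconds: float  # how long the object has been still
    timestamp: float      # metadata time stamp carried with each stream


# Hypothetical additions to the map of fixed attributes: rectangular regions
# marking forbidden areas and zones where one direction of travel is sensitive.
FORBIDDEN_AREAS = [(400.0, 0.0, 640.0, 120.0)]          # (x1, y1, x2, y2)
WRONG_WAY_ZONES = [((0.0, 200.0, 640.0, 300.0), 90.0)]  # (region, permitted heading)


def in_region(obj: ObjectMetadata, region) -> bool:
    x1, y1, x2, y2 = region
    return x1 <= obj.x <= x2 and y1 <= obj.y <= y2


def distance(a: ObjectMetadata, b: ObjectMetadata) -> float:
    return math.hypot(a.x - b.x, a.y - b.y)


def abandoned_luggage(luggage: ObjectMetadata, owner: ObjectMetadata) -> bool:
    # IF a luggage-shaped object has been still long enough AND its associated
    # human-shaped object has moved beyond a separation threshold, THEN alert.
    return (luggage.object_type == "luggage-shaped"
            and luggage.state == "still"
            and luggage.still_seconds > 30.0
            and distance(luggage, owner) > 50.0)


def moving_wrong_way(obj: ObjectMetadata) -> bool:
    # IF a human-shaped object moves against the permitted heading within a
    # sensitive zone, THEN alert.
    def against(permitted: float) -> bool:
        diff = abs((obj.direction - permitted + 180.0) % 360.0 - 180.0)
        return diff > 120.0
    return (obj.object_type == "human-shaped" and obj.state == "moving"
            and any(in_region(obj, region) and against(permitted)
                    for region, permitted in WRONG_WAY_ZONES))


def entered_forbidden_area(obj: ObjectMetadata) -> bool:
    # IF a human-shaped object appears inside an area marked forbidden on the
    # map of fixed attributes, THEN alert.
    return (obj.object_type == "human-shaped"
            and any(in_region(obj, region) for region in FORBIDDEN_AREAS))
```

Each rule of this kind only produces a candidate alert; as the following paragraphs describe, what the operative then sees, and how they respond, remained the open question.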

But this didn’t reduce the surveillance operatives’ concerns about the form in which they would receive these alerts. Participating in account-ability was not just about producing an alert. The alerts had to accomplish what Garfinkel (1963) termed the congruence of relevances. Garfinkel suggested that any interaction involved successive turns to account-ably and demonstrably make sense of the scene in which the interactions were taking place. This required the establishment of an at least in-principle interchangeability of viewpoints—that one participant in the interaction could note what was relevant for the other participants, could make sense of what was relevant for themselves and the other participants and could assume that other participants shared some similar expectations in return. Relevances would thus become shared or congruent through the interaction. Garfinkel (1963) suggested that these were strongly adhered to, forming what he termed constitutive expectancies for the scene of the interaction. In this way, building a shared sense of the interaction, a congruence of relevances, was constitutive of the sense accomplished by the interaction.

The algorithmic system seemed to propose a future that stood in some contrast to the operatives’ current ways of working. Prior to the algorithmic system, surveillance operatives’ everyday competences were oriented towards working with images, other operatives, airport or train station employees, their managers and so on, in making sense of the scene. The rich and detailed interaction successively built a sense of what it was that was going on. Constitutive expectancies seemed to be set in place. The move to limit the amount of data that was seen seemed to reduce the array of image-based cues through which the sense of a scene could be accomplished. Given that an ethical aim of the project was to reduce the scope of data made visible and given that this was central to the funding proposal and its success, the computer scientists needed to find a way to make this work. They tried to work through with the surveillance operatives how little they needed to see for the system still to be considered functional. In this way, the ethical aims of the project began to form part of the account-able order of the algorithmic system that was emerging in this experimental phase of the project. Decision-making was demonstrably organised so that it could be seen to match the ethical aims of the project at the same time as the emerging system could be constituted as a particular material-algorithmic instantiation of the ethical aims. Accomplishing this required the operatives and the computer scientists to resolve just what should be seen and how such visibility should be managed.

This required a series of decisions to be made about the User Interface. The computer scientists suggested that one way to move forward with the ethical aims of the project was to develop a User Interface with no general visual component. This both made sense as a demonstrable, account-able response to the ethical aims (to reduce visibility) and constituted a visually and materially available form for these otherwise somewhat general aims. In place of the standard video surveillance bank of monitors continually displaying images, operatives would be presented only with text alerts (Fig. 3.1) produced via our algorithms’ ‘IF-THEN’ rules. An operative would then be given the opportunity to click on a text alert, and a short video of the several seconds of footage that had generated the alert would appear on the operative’s screen. The operative would then have the option of deciding whether the images did indeed portray an event worthy of further scrutiny or could be ignored. An operative could then tag data as relevant (and it would then be stored) or irrelevant (and it would then be deleted; see Chapter 4). The User Interface could then participate in the accomplishment of the ethical aims to see less and store less. It would also provide a means for our algorithms to become competent participants in the account-able order of interactions. The User Interface would provide the means for the algorithms to display to operatives that they were participating in the constitutive expectancies of making sense of a scene in the airport or train station. They were participating in establishing the shared or congruent relevance of specific images—that an image of a human-shaped object was not just randomly selected by the algorithm, but displayed its relevance to the operative as an alert, as something to which they needed to pay attention and complete a further turn in interaction. The algorithm was displaying its competence in being a participant in everyday life.

Fig. 3.1 Text alerts on the user interface
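As an illustration of the interaction just described, the following sketch shows how a text alert might carry a reference to the short clip behind it, and how an operative's tag could determine whether that clip is stored or deleted. The class names and fields are hypothetical, introduced only to make the flow legible; they are not the project's User Interface code.

```python
# A hypothetical sketch of the alert-and-tagging flow described above: only a
# line of text reaches the operative by default; clicking reveals a short clip;
# tagging it relevant stores the clip, tagging it irrelevant deletes it.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Alert:
    alert_id: str
    camera_id: str
    alert_type: str            # e.g. "abandoned luggage", "moving the wrong way"
    clip_frames: List[bytes]   # the few seconds of video that produced the alert
    tag: str = "untagged"      # becomes "relevant" or "irrelevant"


@dataclass
class AlertQueue:
    pending: List[Alert] = field(default_factory=list)
    stored_clips: Dict[str, List[bytes]] = field(default_factory=dict)

    def push(self, alert: Alert) -> str:
        # The operative's screen shows only this line of text, not the video.
        self.pending.append(alert)
        return f"[{alert.alert_type}] camera {alert.camera_id} (click to view clip)"

    def open_clip(self, alert: Alert) -> List[bytes]:
        # Clicking the text alert reveals the short clip behind it.
        return alert.clip_frames

    def tag(self, alert: Alert, relevant: bool) -> None:
        # Relevant clips are retained; irrelevant clips are deleted (Chapter 4).
        alert.tag = "relevant" if relevant else "irrelevant"
        if relevant:
            self.stored_clips[alert.alert_id] = alert.clip_frames
        else:
            alert.clip_frames = []
        self.pending.remove(alert)
```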

This might seem like a big step forward for our algorithms. It might even mean a step from experimentation towards actual implementation. But the operatives and their managers swiftly complained when they were shown the User Interface: How could they maintain security if all they got to see was, for example, an image of an abandoned item of luggage? As I mentioned in Chapter 2, to secure the airport or train station the operatives suggested that they needed to know who had abandoned the luggage, and when and where they had gone. A neatly cropped image of an item of luggage with a red box around it, or of a human-shaped object within a bounding box that had moved into a forbidden space or been recorded moving the wrong way, was limited in its ability to take part in making sense of the scene. The algorithms’ ability to take part in everyday life by participating in holding everyday life to account was questioned. As such, the emerging account-able order of the algorithmic system and the design decisions which acted as both a response to, and gave form to, the project’s ethical aims were subject to ongoing development, particularly in relation to operatives’ everyday competences.

This led to discussion among project participants, the computer scientists and StateTrack and SkyPort in particular, about how surveillance operatives went about making sense of, for example, abandoned luggage. Everyday competences that might otherwise never be articulated needed to be drawn to the fore here. Operatives talked of the need to know the history around an image, what happened after an item had been left, and with whom people had been associating. Computer scientists thus looked to develop the Route Reconstruction component of the system. This was a later addition to the system architecture as we saw in Chapter 2. The University 1 team of computer scientists presented a digital surveillance Route Reconstruction system they had been working on in a prior project (using a learning algorithm to generate probabilistic routes). Any person or object once tagged relevant, they suggested, could be followed backwards through the stream of video data (e.g. where had a bag come from prior to being abandoned, which human had held the bag) and forwards (e.g. once someone had dropped a bag, where did they go next). This held out the potential for the algorithms and operatives to take part in successively and account-ably building a sense for a scene. From a single image of, say, an abandoned item of luggage, the algorithm would put together histories of movements of human-shaped objects and luggage-shaped objects and future movements that occurred after an item had been left. As operatives clicked on these histories and futures around the image of abandoned luggage, both operatives and algorithms became active participants in successively building shared relevance around the image. Histories and futures could become a part of the constitutive expectancies of relations between algorithms and operatives.

Route Reconstruction would work by using the background maps of fixed attributes in the train station and airport and the ability of the system to classify human-shaped objects and place bounding boxes around them. Recording and studying the movement of human-shaped bounding boxes could be used to establish a database of popular routes human-shaped objects took through a space and the average time it took a human-shaped object to walk from one camera to another. The system would use the bounding boxes to note the dimensions, direction and speed of human-shaped objects. The Route Reconstruction system would then sift through the digital stream of video images to locate, for example, a person who had been subject to an alert, trace the route from which they were most likely to have arrived (using the database of most popular routes), estimate how long it should have taken them to appear on a previous camera (based on their speed) and search for any human-shaped objects that matched their bounding box dimensions. If unsuccessful, the system would continue to search other potential routes and sift through possible matches to send to the operatives, who could then tag those images as also relevant or irrelevant. The idea was to create what the computer scientists termed a small ‘sausage’ of data from among the mass of digital images. The Route Reconstruction system used probabilistic trees (Fig. 3.2), which took an initial image (of, e.g., an abandoned item of luggage and its human-shaped owner) and then presented possible ‘children’ of that image (based on dimensions, speed and most popular routes) until operatives were happy that they had established the route of the person and/or object in question. Probability, background maps, object classification and tracking became a technical means for the algorithms to participate in holding everyday life to account.

Fig. 3.2 A probabilistic tree and children (B0 and F0 are the same images)
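The sketch below illustrates, in simplified form, the kind of search Route Reconstruction is described as performing: starting from a tagged detection, candidate 'children' on neighbouring cameras are proposed from a database of popular routes, expected transit times and bounding-box dimensions, for operatives to confirm or reject. The route table, time window and similarity test are assumptions made for this example rather than the project's actual parameters.

```python
# An illustrative sketch (all names and numbers are assumptions) of proposing
# probabilistic 'children' for a tagged detection: candidate earlier sightings
# are drawn from a database of popular routes, expected transit times and
# matching bounding-box dimensions, ordered by route probability.
from dataclasses import dataclass
from typing import List


@dataclass
class Detection:
    camera_id: str
    timestamp: float    # seconds since some reference point
    width: float        # bounding-box dimensions
    height: float


# Hypothetical database of popular routes:
# (from_camera, to_camera, probability of the hop, average transit time in seconds).
POPULAR_ROUTES = [
    ("cam_07", "cam_04", 0.6, 45.0),
    ("cam_05", "cam_04", 0.3, 70.0),
    ("cam_09", "cam_04", 0.1, 120.0),
]


def similar_size(a: Detection, b: Detection, tolerance: float = 0.2) -> bool:
    # Candidate matches must roughly share the tagged object's dimensions.
    return (abs(a.width - b.width) <= tolerance * a.width
            and abs(a.height - b.height) <= tolerance * a.height)


def propose_children(tagged: Detection, archive: List[Detection],
                     window: float = 20.0) -> List[Detection]:
    """Return candidate earlier sightings of the tagged object, most probable
    route first, for operatives to confirm or reject."""
    children = []
    routes = sorted(POPULAR_ROUTES, key=lambda route: route[2], reverse=True)
    for src, dst, _probability, transit in routes:
        if dst != tagged.camera_id:
            continue
        expected = tagged.timestamp - transit  # when the object should have
        for det in archive:                    # appeared on the previous camera
            if (det.camera_id == src
                    and abs(det.timestamp - expected) <= window
                    and similar_size(tagged, det)):
                children.append(det)
    # Rejected branches would prompt the system to continue along other routes;
    # confirmed ones extend the small 'sausage' of data around the alert.
    return children
```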

As a result of operatives’ articulation of a potential clash between an ethical aim of the project (to reduce visibility) and the everyday competences of surveillance operatives (to secure a train station or airport space through comprehensive visibility), the account-able order of work between computer scientists, end-users, their working practices and the User Interface shifted somewhat to incorporate the new Route Reconstruction component. Route Reconstruction became a basis for account-ably acknowledging the existing competences of operatives in securing a space. The small ‘sausages’ of data and probabilistic ‘children’ became a means of broadening the number of participants in account-ably accomplishing a sense of the everyday life of the train station and airport. Yet having ‘sausages’ of data and new forms of metadata (used to produce ‘children’) might initially appear to move the project away from its stated ethical aim to reduce the amount of surveillance data made visible—this became an issue for questions of accountability asked on behalf of future subjects of algorithmic decision-making, as we will see.

At this point (at least for a time), it seemed that I was in a position to make an ethnographic sense of the account-able order of the algorithmic system that would avoid an overly simplified snapshot. In place of a static audit of the system was an account of an emerging order in which terabytes of visual, video data would be sifted by relevance detection algorithms, using background subtraction models to select out proto-relevant human-shaped and other objects. These would be further classified through specific action states (abandoning luggage, moving the wrong way, moving into a forbidden space) that could be the basis for an alert. Operatives would then have the responsibility to decide on future courses of action as a result of the alerts they were sent (e.g. alerting airport security staff to items of luggage). The alerts were the first means through which the algorithmic system could participate in the account-able order of the scene placed under surveillance . Subsequent operative responses could also shift responsibility for a second form of account-able action back onto the algorithmic system if Route Reconstruction was deemed necessary, with probabilistic trees and children designed to offer images of proto-past and subsequent actions (once again to be deemed relevant by operatives). Through this account-able order, the algorithmic system was involved in making sense of the everyday life of particular spaces, such as an airport or train station, and held out the possibility of contributing to changes in operatives’ everyday competences in securing those spaces. The presence of the algorithmic system proposed notable changes in the operatives’ activities. Instead of engaging with large amounts of video data in order to make decisions, operatives would only be presented with a small amount of data to which their responses were also limited. Conversely, for the algorithmic system to work, far greater amounts of data were required prior to the system operating (e.g. digitally mapping the fixed attributes of a setting such as an airport and fixing in place parameters for objects such as luggage and humans, producing bounding boxes, metadata, tracking movements, constituting a database of popular routes). The introduction of the algorithmic system also seemed to require a much more precise definition of the account-able order of airport and train station surveillance activities. The form that the order took was both oriented to the project’s ethical aims and gave a specific form to those aims. Yet this emerging form was also a concern for questions of accountability being asked on behalf of future data subjects—those who might be held to account by the newly emerging algorithmic system.

The specific material forms that were given to the project’s ethical aims—such as the User Interface and Route Reconstruction system—were beginning to intersect with accountability questions being raised by the ethics board. In particular, how could this mass of new data being produced ever meet the ethical aim to reduce data or the ethical aim to not develop new surveillance algorithms? In the next section, I will explore the challenges involved in this intersection of distinct registers of account by engaging with the work of the ethics board.

Account-ability and Accountability Through the Ethics Board

As I suggested in the opening to this chapter, formal means of accountability are not without their concerns. Unexpected consequences, rituals, the building of new assets are among an array of issues with which accountability can become entangled. In the algorithm project, the key entanglement was between the kinds of account-ability that we have seen developing in this chapter, through which the algorithms began to participate more thoroughly in everyday life, and accountability involving questions asked on behalf of future data subjects—those who might be subject to algorithmic decision-making. This latter approach to accountability derived from a series of expectations, established in the initial project bid among project partners and funders, that somehow and in some way the ethical aims of the project required an organised form of assessment. This expectation derived partly from funding protocols that place a strong emphasis on research ethics, the promises of the original funding proposal to develop an ethical system, and a growing sense among project participants that an ethical, accountable, algorithmic surveillance system might be a key selling point (see Chapter 6). This signalled a broadening in the register of accounts, from the algorithms participating in account-ability to the algorithms being subjected to accountability.

The ethics board became the key means for managing the accountable and the account-able. It was not the case that the project could simply switch from one form of account to another or that one took precedence over the other. Instead, the project—and in particular me, as I was responsible for assessing the ethics of the emerging technology—had to find a way to bring the forms of account together. The ethics board was central to this as it provided a location where I could present the account-able order of the algorithmic surveillance system and provoke accountable questions of the algorithms. The ethics board comprised a Member of the European Parliament (MEP) working on redrafting the EU Data Protection Regulation, two national Data Protection Authorities (DPAs), two privacy academics and two members of privacy-focused civil liberty groups. The ethics board met three times during the course of the project, and during these meetings, I presented my developing study of the account-able order of the algorithmic system. I presented the ways in which the algorithmic system was involved in making sense of spaces like an airport and a train station, how it was expected to work with operatives’ everyday competences for securing those spaces and how the system gave form to the project’s ethical aims. In place of buying into the claims made on behalf of algorithms by other members of the project team or in popular and academic discussions of algorithms, I could present the account-able order as a more or less enduring, but also at times precarious focus for action. In response, members of the ethics board used my presentations along with demonstrations of the technology to open up the algorithmic system to a different form of accountability by raising questions to be included in public project reports and fed back into the ongoing project.

Ethics board members drew on my presentations of the account-able order of the algorithmic system to orient their questions. In the first ethics board meeting (held approximately ten months into the project), one of the privacy-focused academics pointed to the centrality of my presentation for their considerations:

From a social scientist perspective it is not enough to have just an abstract account for ethical consideration. A closer understanding can be brought about by [my presentation’s] further insight into how [the system] will work.

The way the system ‘will work’—its means of making sense of the space of the airport and train station—encouraged a number of questions from the ethics board, enabling the system to be held accountable. For example, the Data Protection Officers involved in the board asked during the first meeting:

Is there a lot of prior data needed for this system? More so than before?

Are people profiled within the system?

How long will the system hold someone’s features as identifiable to them as a tagged suspect?

These questions drew attention to matters of concern that could be taken back to the project team and publicly reported (in the minutes of the ethics board) and subsequently formed the basis for response and further discussion at the second ethics board meeting. The questions could provide a set of terms for making the algorithmic system accountable through being made available (in public reports) for broader consideration. The questions could also be made part of the account-able order of the algorithmic system, with design decisions taken on the basis of questions raised. In this way, the computer scientists could ensure that there was no mechanism for loading prior data into the system (such as a person’s dimensions, which might lead to them being tracked), and that metadata (such as the dimensions of human-shaped objects) were deleted along with video data to stop individual profiles being created or to stop ‘suspects’ from being tagged. Data Protection Officers sought to ‘use the committee meetings to clearly shape the project to these serious considerations.’ The ‘serious considerations’ here were the ethical aims. One of the representatives of the civil liberties groups also sought to utilise the access offered by the ethics board meetings but in a different way, noting that ‘As systems become more invisible it becomes more difficult to find legitimate forms of resistance.’

To ‘shape the project’ and ‘find legitimate forms of resistance’ through the project seemed to confirm the utility of intersecting account-ability and accountability, opening up distinct ways for the system to be questioned and for that questioning to be communicated to further interested audiences. However, as the project progressed, a series of issues emerged that complicated my presentation of the account-able order of the algorithmic system to the ethics board and hence made the intersection of account-ability and accountability more difficult.

For example, I reported to the ethics board a series of issues involved in system development. This included a presentation of the challenges involved in ‘dropping in’ existing algorithms. Although one of the project’s opening ethical aims was that no new algorithms would be developed and that existing algorithms could be ‘dropped into’ existing surveillance networks, these were also termed ‘learning’ algorithms. I presented to the ethics board an acknowledgement from both teams of computer scientists that the algorithms needed to ‘learn’ to operate in the end-user settings; that algorithms for relevancy detection and the Route Reconstruction component had to run through streams of video data; that problems in detecting objects and movements had to be continually reviewed; and that this took ‘10s of hours.’ When problems arose in relation to the lighting in some areas of end-user sites (where, e.g., the glare from shiny airport floors appeared to baffle our abandoned luggage algorithm, which kept constituting the glare as abandoned luggage), the code/software tied to the relevancy detection algorithm had to be developed—this, I suggested to the ethics board, was what constituted ‘learning.’
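To give a flavour of what such 'learning' might amount to in code, the fragment below shows one hypothetical way a glare problem of this kind could be handled: filtering out candidate regions whose brightness suggests floor glare before they are allowed to raise an abandoned-luggage alert. The fields and thresholds are invented for illustration; the project's actual fix is not reproduced here.

```python
# A hypothetical illustration of the kind of adjustment described above: before
# a still, luggage-sized region can raise an alert, regions whose pixels look
# like floor glare (very bright, heavily saturated) are filtered out.
from dataclasses import dataclass


@dataclass
class CandidateRegion:
    mean_brightness: float     # 0-255 average intensity inside the bounding box
    saturated_fraction: float  # share of pixels above a near-white threshold
    state: str                 # action state: "still" or "moving"


def plausible_luggage(region: CandidateRegion,
                      max_brightness: float = 220.0,
                      max_saturated: float = 0.4) -> bool:
    # Glare patches tend to be near-white across most of their area; such
    # regions are discarded rather than passed on as abandoned-luggage alerts.
    if region.mean_brightness > max_brightness:
        return False
    if region.saturated_fraction > max_saturated:
        return False
    return region.state == "still"
```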

These ongoing changes to the system through ‘learning’ emphasised the complexities of making sense of the algorithmic system’s account-able order; the way the system went about making sense changed frequently as it was experimented with, and my reporting to the ethics board needed to manage and incorporate these changes. Alongside the continual development of ‘learning’ algorithms, other issues that emerged as the system developed included an initial phase of experimentation where none of the system components would interact. In this instance, it turned out that one of the project members was using obsolete protocols (based on VAPIX), which other project members could not use or did not want to use. Attempting to resolve this issue took 114 e-mails and four lengthy telephone conference calls in one month of the project. Other issues that emerged included questions of data quality, frame rates, trade union concerns, pixelation and compression of video streams, which each led to changes in the ways in which the system would work. In particularly frenzied periods of project activity, I found it more challenging to maintain a clear notion of what constituted the ‘order’ of the algorithmic system to report to the ethics board, as major features (e.g. which components of the system talked to each other) would be changed in quite fundamental ways. When the Route Reconstruction and Privacy Enhancement components of the system were also brought together with the relevancy detection algorithms, reporting became more difficult again.

The ongoing changes of system development emphasised the value of building an understanding of the system’s developing account-able order. Making sense of the way in which the algorithmic system (its components, design decisions, designers, software, instructions and so on) was involved in making sense of the train station and airport avoided providing a more or less certain account developed from a single or brief timeframe that simply captured and replayed moments of system activity, as if the system had a singular, essential characteristic. Instead, understanding the account-able order held out the promise of making sense of the ordering practices of the system under development, how algorithms went about making sense of and participating in everyday life. In the absence of such an approach to algorithms, the risk would be that multiple assumptions (that might be wrong or only correct for a short time) regarding the nature of algorithms were set in place and formed the basis for accountability.

Tracing system developments and the changing account-able order of the algorithmic system for presentation to the ethics board also became the principal means of intersecting the different registers of account-ability and accountability. In place of presenting a static picture of the algorithmic system, changes in the ordering activities of the system could be demonstrated and discussed in relation to the project’s ethical aims. This was particularly important in ethics board meetings as changes that emerged through system development appeared to change the specific form given to the project’s ethical aims. For example, as the project developed, a question for the ethics board was how far an algorithm could ‘learn’ and be changed before it was considered sufficiently ‘new’ to challenge the project’s ethical aim not to introduce new algorithms. Furthermore, how much new data from bounding boxes, object classification and action states could be produced before it challenged the ethical principle to reduce data? This intersection of account-ability and accountability was not resolved in any particular moment, but became a focal point for my ethics board presentations and ensuing discussions and public reporting.

However, as the project and ethics board meetings progressed, my role in producing accounts became more difficult. I was involved in making available an analysis of the account-able order of the system partly as a means to open the system to questions of accountability , which I would then publicly report and feed back to project members. At the same time, I was not just creating an intersection between account-ability and accountability , I risked being deeply involved in producing versions of the system’s account-able order which might steer ethics board members towards recognising that the system had achieved or failed to achieve its ethical aims and thus defuse or exacerbate accountability concerns. I was the algorithm’s proxy, mediating its ability to grasp everyday life through my ability to grasp the details of its abilities. As one of the Data Protection Officers on the ethics board asked, ‘What is Daniel’s role? How can he ensure he remains impartial?’

Rather than try to resolve this problem in a single ethics board meeting, I sought instead to turn this issue of my own accountability into a productive tension by bringing as much as possible to the ethics board. My own developing account of the account-able order of the algorithmic system, the computer scientists, end-users and the technology as it developed could all be drawn together in ethics board meetings. The result was not a single, agreed upon expert view on the system. In place of a single account, the meetings became moments for different views, evidence, material practices and so on to be worked through. The effect was to intersect account-ability and accountability in a way that enabled questions and attributions of algorithmic responsibility and openness to be brought into the meetings and discussed with ethics board members, computer scientists, the system and my own work and role in the project. Accountability was not accomplished in a single moment, by a single person, but instead was distributed among project members and the ethics board and across ongoing activities, with questions taken back to the project team between meetings and even to be carried forward into future projects after the final ethics board meeting. And the intersection of account-ability and accountability was not simply a bringing together of different registers of account, as if two different forms of account could, for example, sit comfortably together on the same page in a report to the ethics board. The intersecting of account-ability and accountability itself became a productive part of this activity, with questions of accountability (e.g. how much has changed in these algorithms?) challenging the account-able order of the algorithmic system and the more or less orderly sense-making practices of the algorithmic system being used to draw up more precise questions of accountability. The algorithms’ means to participate in the account-ability of everyday life in the airport became the means to make the algorithms available to this different sense of accountability through the ethics board.

Conclusion

In this chapter, we can see that our algorithms are beginning to participate in everyday life in more detailed ways. They are not only classifying putative human-shaped and luggage-shaped objects. They are also taking part in the production of accounts that make sense of the actions in which those objects are also taking part: being abandoned, moving the wrong way, moving into a forbidden space. This participation in the account-able order of everyday life is an achievement based on years of work by the computer scientists and significant efforts in the project to work with operatives to figure out their competences and how a system might be built that respects and augments these competences while also accomplishing the project’s ethical aims. Such aims were also the key grounds for intersecting this increasing participation in the account-ability of everyday life with the sense of accountability pursued by the ethics board. Regular meetings, minutes, publicly available reports, the development of questions into design protocols for the emerging system, creating new bases for experimentation, each formed ways in which accountability could take shape—as a series of questions asked on behalf of future data subjects. In a similar manner to the literature that opened this chapter, this more formal process of accountability came with its own issues. Unanticipated questions arose, the system being subjected to account kept changing, some things didn’t work for a time, and my own role in accountability came under scrutiny. In place of any counter expectation that algorithms could be made accountable in any straightforward, routine manner, came this series of questions and challenges.

What, then, can be said about future considerations of algorithms and questions of accountability? First, it seemed useful in this project to engage in detail with the account-able order of the algorithmic system. This displaced a formal approach to accountability (for example, carrying out an audit of algorithmic activity) with an in-depth account of the sense-making activities of the system. Second, however, this approach to account-ability did nothing on its own to address questions of accountability—what the concerns of future data subjects might be. Intersecting different registers of account through the ethics board was itself a significant project task and required resources, time and effort. Third, the intersection of account-ability and accountability was productive (raising new questions for the project to take on), but also challenging (requiring careful consideration of the means through which different views could be managed). With growing calls for algorithmic systems to be accountable, open to scrutiny and open to challenge, these three areas of activity set out one possible means for future engagement, intersecting the account-able and the accountable and managing the consequences.

But the challenges for our algorithms did not end here. Although we now finish this chapter with a sense that our algorithms are grasping everyday life in more detail, are more fully participating in everyday life through forms of account-ability and are even beginning to shape everyday life by causing the operatives to reconsider their everyday competences, there is still some way to go. The algorithms have only reached an initial experimental stage. Next, they need to be tested in real time, in real places. They need to prove that they can become everyday. The system components need to prove to the world that they can interact. The Privacy Enhancement System needs to show that it can select and delete relevant data. As we will see in the next chapter, deletion is not straightforward. And as we will see subsequently, real-time testing (Chapter 5) is so challenging that the possibility of building a market value for the technology needs to be re-thought (Chapter 6). But then everyday life is never easy.