Opening

In this chapter, our algorithms will continue their journey into the everyday. Beyond the initial expectation held by project participants (see Chapters 2 and 3) that in experimental settings the algorithms might prove their ability to grasp features of everyday life, the algorithms must now, through testing and demonstration, express their ability to become the everyday. However, building on the portents of Chapter 4, wherein the deletion system ran into trouble with orphan frames, what we will see in this chapter is a broader and more calamitous collapse of the relation between the algorithms and the everyday. As the project team drew closer to its final deadlines and faced up to the task of putting on demonstrations of the technology under development for various audiences—including the project funders—it became ever more apparent that the algorithms struggled to grasp the everyday, struggled to compose accounts of the everyday and would struggle to become the everyday of the train station and airport. Promises made to funders, to academics, to potential end-users and to ethical experts brought in to assess the technology might not be met. It became clear to the project team that a number of ways of constituting a response to different audiences and their imagined demands would need to be offered. This did not involve a simple binary divide between the algorithmic system working and not working. Instead, a range of different demonstrations, with what we will describe below as greater or lesser integrity, were discursively assembled by the project team, and ways to locate and populate, witness and manage the assessment of demonstrations were brought to the table. Several of the demonstrations that had already been carried out were now reconceptualised as showing that features of the algorithmic system could perhaps work. Agreements were rapidly made, tasks were distributed and imminent deadlines agreed; the demonstrations (with varying integrity) were only weeks away.

The growing science and technology studies (STS) literature on demonstrations hints at a number of ways in which the integrity of demonstrations might be engaged. For example, Smith (2004) suggests demonstrations may involve elements of misrepresentation or partial fabrication. Coopmans (2010) analyses moves to conceal and reveal in demonstrations of digital mammography which manage the activity of seeing. And Simakova (2010) relates tales from the field of demonstration in technology launches, where the absence of the technology to be launched/demonstrated is carefully managed. These feature as part of a broader set of concerns in the STS demonstration literature including notions of witnessing (Smith 2009), dramaturgical metaphors and staging (Suchman 2011), and questions regarding who is in a position to see what at the moment of visual display (Collins 1988). These studies each have a potential relevance for understanding the demonstrations in this chapter.

The chapter will begin with a discussion of the ways in which recent STS literature has handled future orientations in studies of technology demonstrations, testing, expectations and prototyping. This will provide some analytic tools for considering the work of the algorithmic system in its move into forms of testing and demonstration. I will then suggest that notions of integrity provide a means to turn attention towards the practices of seeing, forms of morality and materiality made at stake in demonstrations of our algorithms. The chapter will conclude with a discussion of the problems now faced by our algorithms as a result of their demonstrable challenges.

Future Orientations of Technology

STS scholars have provided several ways of engaging with potential futures of technology, expectations of science and technology, technology under development and/or technologies that continue to raise concerns regarding their future direction, development, use or consequence. Drawing these studies together, three analytic themes emerge which I will use as a starting point for discussion. First, studies of technology demonstration, testing, displays, launches, experiments, and the management of expectations frequently incorporate considerations regarding what to show and what to hide. Coopmans (2010) terms this the management of ‘revelation and concealment’ (2010: 155). Collins (1988) suggests that what might nominally be presented as a public experiment (in his case in the strength and integrity of flasks designed to carry nuclear waste) is more akin to a display of virtuosity (727). In order to manage and maintain such virtuosity, only partial access is provided to the preparation of the ‘experiment’, rendering invisible: ‘the judgements and glosses, the failed rehearsals, the work of science – that provide the normal levers for criticism of disputed experimental results’ (728). Through this process of revelation (the public display) and concealment (hiding the preparation and practice), ‘the particular is seen as the general’ (728). That is, a flask containing nuclear waste is not presented as simply surviving this particular train wreck, but a display is put on through which we can see that all future trouble that this flask, and other similar flasks, might face will be unproblematic. Through historical studies of scientific display and demonstration (Shapin 1988, cited by Collins 1988; Shapin and Schaffer 1985) we can start to see the long-standing import of this movement between revelation and concealment for the continued production and promotion of fields of scientific endeavour. Through contemporary studies of technology demonstration, sales pitches and product launches (Coopmans 2010; Simakova 2010), we can note the putative market value of concealment and revelation.

Second, technology demonstrations and the management of future technological expectations do not only involve a continual movement between revelation and concealment, but also a continual temporal oscillation. Future times, places and actions are made apparent in the here and now (Brown 2003; Brown and Michael 2003). For example, future concerns of safety, reliability, longevity and even ethics (Lucivero et al. 2011) are made demonstrably present in the current technology. Furthermore, work to prepare a prototype can act as a sociomaterial mediator of different times and work orientations, ‘an exploratory technology designed to effect alignment between the multiple interests and working practices of technology research and development, and sites of technologies-in-use’ (Suchman et al. 2002: 163). For example, the possibilities of future mundane technology support and supplies are made manifest as features of demonstrations and launches (Simakova 2010). Alternatively, the limitations of a technology as it is now can be made clear as part of the work of emphasising the future developmental trajectory of a technology or as a feature of attesting to the professionalism and honesty of the organisation doing the demonstration (that the organisation can and already has noted potential problems-to-be-resolved; Smith 2009).

Third, within these studies there is an emphasis on the importance of audiences as witness. Drawing on Wittgenstein, Pinch (1993) suggests that audiences have to be persuaded of the close similarity between the demonstration and the future reality of a technology; they have to be persuaded to place in abeyance all the things that might make for a possible difference and instead agree to select the demonstrator’s criteria as the basis for judging sameness. However, this is not simply a case of the audience being dupes to the wily demonstrator. Smith (2004) contends the audience, the potential customer, can be knowledgeable of the limits of a technology, seeking to gain in some way from their participation in the demonstration and/or willing to ‘suspend disbelief’ in the artifice of the presentation (see also Coopmans [2010] on knowing audience members and their differential reaction). Through what means might audience members make their conclusions about a demonstration? Suchman (2011), in studying encounters with robots, looks at how persuasion occurs through the staging and witnessing that is characteristic of these scenes. Audiences, Suchman suggests, are captured by the story and its telling. Drawing on Haraway’s modest witness, Suchman outlines how the audience are positioned within, rather than outside, the story; they are a part of the world that will come to be. Pollock and Williams (2010) provide a similar argument by looking at the indexicality of demonstrations which, to have influence, must create the context/world to which they point. Developing this kind of analytical position, Coopmans (2010) argues that audiences are integrated into the complex management of seeing and showing. Audiences are classified and selectively invited to identify with a particular future being shown through the demonstration and attached to the technology being demonstrated. ‘Efforts to position the technological object so as to make it “seeable” in certain ways are mirrored by efforts to configure an audience of witnesses’ (2010: 156).

Smith’s (2004) utilisation of the dramaturgical metaphor for technology demonstrations suggests that these three focal points, of concealment and revelation, temporal oscillation and witnessing, are entangled in particular ways in any moment of demonstration. I will suggest that these three themes are also prevalent in preparing our algorithms for demonstrable success. But, first, I propose a detour through integrity as a basis for foregrounding the subsequent analysis of our algorithms and for developing these ideas on demonstration.

Integrity and the Algorithm

Elements of the technology demonstration literature already appear to lend themselves to an analysis of integrity in, for example, studies of partial fabrication, revelation and concealment. However, it is in the work of Clark and Pinch (1992) on the ‘mock auction con’ that we find a rich and detailed study of a type of demonstration tied to questions of integrity. The central point of interest for us in this study is that those running the mock auction con build a routine and: ‘The various repetitive elements of the routine… provide local-historical precedents for understanding what occurs and for the audience determining what (apparently) is likely to follow’ (Clark and Pinch 1992: 169). In this way, the audience to the mock auction are convinced into bidding for ‘bargain’ priced goods that are not what they appear through allusions to the origin of those goods (that perhaps they were stolen, must be sold quickly and so on). Being able to point to a context which seems available—but is not—and could provide a reasonable account for buyers to make sense of the ‘bargains’ which seem available—but are not—is central to managing the con (getting people to pay over the odds for low-quality goods).

We notice similar themes of things appearing to be what they are not emerging in popular media stories of fakes, whether focused on an individual person or object (such as a fake death, 1 fake stamp, 2 or fake sportsperson, 3 a fake bomb detector 4 or a fake doctor accounting for more 5 or fewer 6 deaths) or a collective fake (where the number of fake documents, 7 fossils, 8 or the amount of money 9 claimed to be fake, takes centre stage). 10 In each case, the success of the fake in not being discovered for a time depends on a demonstrative ability to point towards a context which can successfully account for the claimed attributes of the person or object in focus. This is what Garfinkel (1963, 1967) would term the relation of undoubted correspondence between what something appears to be and what it is made to be through successive turns in interaction. We pay what turns out to be an amount that is over the odds for an item in the mock auction con, but what constitutes an amount that is over the odds is a later revelation. At the time of purchase, we have done no more than follow the ordinary routine of paying money for an item. We have done no more than trust the relation of undoubted correspondence. I would like to suggest that this kind of context work, where we manage to index or point towards a sense of the scene that enables the relation of undoubted correspondence to hold, can be addressed in terms of integrity. A dictionary definition of integrity suggests: ‘1. the quality of having strong moral principles. 2. the state of being whole’. 11 Thus context work in situations of fakes or cons might be understood as directed towards demonstrating the moral and material integrity required for the relation of undoubted correspondence to hold (where what is required is a feature established within the setting where the interactions take place).

We can explore this notion of integrity further in the most developed field of fakery: fake art. Research in this area (see, for example, Alder et al. 2011) explores famous fakers 12 and the shifting attribution of artworks to artists. 13 The integrity of artworks in these situations appears to depend on work to establish that a painting is able to demonstrate material properties that support a claim to be genuine. 14 In order to convincingly account for the integral ‘whole’ of the painting, material properties are articulated in such a way as to indexically 15 point the artwork towards a context (of previous ‘sales’, auction catalogues which definitively describe ‘this’ artwork as attributed to a particular artist, dust which clearly demonstrates its age). We might note that such indexing is crucial to constituting the context. However, the sometimes arduous efforts to accomplish a context must be split and inverted (Latour and Woolgar 1979) in such a way that the artwork appears to effortlessly point towards ‘its’ context in a way that suggests this context has always been tied to this artwork, enabling the artwork to seem to be what it is. The work to actively construct a context must disappear from view in order for an artwork to effortlessly index ‘its’ context and attest to the ‘whole’ material integrity of the artwork; that it has the necessary age and history of value, ownership and exchange to be the artwork that it is. Furthermore, artworks need to be seen for what they are, with, for example, brushstrokes becoming a focal point for audiences of expert witnesses to attest that in the brushstrokes, they can see the style of a particular artist, 16 with such witnesses then held in place as supporters of the see-able integrity of the artwork. 17 Declarations of the nature of an artwork (its material and visual integrity) also appear to be morally oriented such that constituting the nature of a painting as correctly attributed to an artist becomes a means to constitute the moral integrity of: the material properties and practices of seeing that have established the painting as what it is (as genuine or as fake and thereby morally corrupt); and its human supporters as what they are (as, for example, neutral art experts or morally dubious individuals who may be seeking financial gain from a painting’s material and visual integrity). The material, visual, moral question of integrity becomes: can the object to hand demonstrate the properties for which it ought to be able to account, indexically pointing towards a context for establishing the integrity of the material properties of the artwork and the practices through which it has been seen by its supporters? Can it maintain a relation of undoubted correspondence between what it appears to be and what it interactionally becomes?

In a similar manner to technology demonstrations, fakes appear to incorporate a concern for revelation and concealment (revealing a husband’s method of suicide, concealing the fact he is still alive), temporal oscillation (authorities, in buying a fake bomb detector, also buy a future into the present, imagined and indexically created through the technology’s apparent capabilities) and the careful selection and positioning of audience within the narrative structure being deployed (particularly when faking a marriage or other notable social ceremony). However, fakes (particularly fake artworks) alert us to the possibility of also considering questions of visual, material and moral integrity in forms of demonstration. Returning to our algorithms will allow us to explore these questions of integrity in greater detail.

Demonstrating Algorithms

From their initial discussions of system architecture and experimentation with grasping the human-shaped object (see Chapter 2), to the start of system testing, demarcating relevant from irrelevant data and building a deleting machine (see Chapter 4), the project team had retained a confidence in the project’s premise. The aim was to develop an algorithmic surveillance system for use, initially, in a train station and airport that would sift through streams of digital video data and select out relevant images for human operatives. As I suggested in Chapter 2, the idea was to accomplish three ethical aims: to reduce the amount of video data that was seen by operatives, to reduce the amount of data that was stored by deleting irrelevant data, and not to develop any new algorithms in the process. Up until the problems experienced with the deletion system (see Chapter 4), achieving these aims had been a difficult and challenging task, but one in which the project team had mostly succeeded. Yet the project had never been just about the team’s own success: the project, and in particular the algorithmic system, needed to demonstrate its success (and even become a marketable good, see Chapter 6).

From the project proposal onwards, a commitment had always been present to put on three types of demonstration for three distinct kinds of audience. As the person responsible for ethics in the project, I would run a series of demonstrations for ethical experts, policy makers (mostly in the field of data protection) and academics who would be called upon to hold to account the ethical proposals made by the project. End-users from the train station and airport would also be given demonstrations of the technology as an opportunity to assess what they considered to be the potential strengths and weaknesses of the system. Finally, the project funders would be given a demonstration of the technology ‘live’ in the airport at the end of the project, as an explicit opportunity to assess the merits, achievements, failures and future research that might emanate from the project. We will take each of these forms of demonstration in turn and look at the ways in which our algorithms now engage with the everyday and the questions of integrity these engagements provoke.

Demonstrating Ethics

I invited a group of ethical experts (including academics, data protection officers, politicians and civil liberty organisations) to take part in a demonstration of the technology and also ran sponsored sessions at three conferences where academics could be invited along to demonstrations. The nature of these demonstrations at the time seemed partial (Strathern 2004), and in some ways deferred and delegated (Rappert 2001) the responsibility for ethical questions from me to the demonstration audiences. The demonstrations were partial in the sense that I could not use live footage, as these events did not take place in the end-user sites, and could only use footage of project participants acting out suspicious behaviour, due to data protection concerns that would arise if footage were used of non-project participants (e.g. airport passengers) who had not consented to take part in the demonstrations. Using recorded footage at this point seemed more like a compromise than an issue of integrity; footage could be played to audiences of the User Interface and our algorithms selecting out human-shaped objects and action states (such as abandoned luggage), and even footage of the Route Reconstruction system replaying those objects deemed responsible for the events. Audience members were invited to discuss the ethical advantages and disadvantages they perceived in the footage. If it raised questions of integrity to any extent, it was perhaps in the use of recorded footage. But audiences were made aware of the recorded nature of the footage and the project participants’ roles as actors. In place of a display of virtuosity (Collins 1988) or an attempt to manage revelation and concealment (Coopmans 2010), I (somewhat naively it turned out) aimed to put on demonstrations as moments where audiences could raise questions of the technology, free from a dedicated move by any wily demonstrator to manage their experience of seeing.

Along with recorded footage, the audience were shown recordings of system responses; the videos incorporated the technicalities of the Event Detection component of the system architecture, its selection procedures and provision of alerts. I took audiences through the ways in which the system put bounding boxes around relevant human-shaped and other objects deemed responsible for an action and showed a few seconds of footage leading up to and following an alert. At this moment, I thought I was giving audiences a genuine recording of the system at work for them to discuss. However, it later transpired that the recorded footage and system response, and my attestation that these were more or less realistic representations of system capabilities, each spoke of an integrity belied by later demonstrations.

End-User Demonstrations

The limitations of these initial demonstrations became clear during a second form of demonstration, to surveillance operatives in the airport. Several members of the project team had assembled in an office in the airport in order to give operatives an opportunity to see the more developed version of the technology in action. Unlike initial discussions around the system architecture or initial experimentation with grasping the human-shaped object (see Chapter 2), our algorithms were now expected to deliver a full range of competences in real time and real space. 18 These demonstrations also provided an opportunity for operatives to raise issues regarding the system’s latest design (the User Interface, for example, had been changed somewhat), its strengths and limitations, and to ask any questions. This was to be the first ‘live’ demonstration of the technology using a live feed from the airport’s surveillance system. Although Simakova (2010) talks of the careful preparations necessary for launching a new technology into the world and various scholars cite the importance of revelation and concealment to moments of demonstration (Smith 2009; Coopmans 2010; Collins 1988), this attempt at a ‘demonstration’ to end-users came to seem confident, bordering on reckless in its apparent disregard of care and concealment. Furthermore, although there was little opportunity to select the audience for the test (it was made up from operatives who were available and their manager), there was also little done to position the audience, manage their experience of seeing, incorporate them into a compelling narrative or perform any temporal oscillation (between the technology now and how it might be in the future; Suchman 2011; Brown 2003; Brown and Michael 2003; Simakova 2010; Smith 2009). The users remained as unconfigured witnesses (Coopmans 2010; Woolgar 1991).

Prior to the demonstration to end-users, the limited preparatory work of the project team had focused on compiling a set of metrics to be used for comparing the new algorithmic system with the existing conventional video-based surveillance system. An idea shared among the computer scientists in the project was that end-users could raise questions regarding the technology during a demonstration, but also be given the metric results as indicative of its effectiveness in aiding detection of suspicious events. The algorithmic and the conventional surveillance system would operate within the same temporal and spatial location of the airport and the operatives would be offered the demonstrators’ metric criteria as the basis for judging sameness (Pinch 1993). The metrics would show that the new technology, with its move to limit visibility and storage, was still at least as effective as the current system in detecting events, but with added ethics.

This demonstration was designed to work as follows. The operatives of the conventional surveillance system suggested that over a 6-hour period, approximately 6 suspicious items that might turn out to be lost or abandoned luggage would be flagged by the operatives and sent to security operatives on the ground for further scrutiny. On this basis, our abandoned luggage algorithm and its IF-THEN rules (see Introduction and Chapter 2) needed to perform at least to this level for the comparative measure to do its work and demonstrate that the future would be as effective as the present, but with added ethics. The system was set to run for 6 hours prior to the arrival in the office of the surveillance operatives so they could be given the results of the comparative metric. I had also taken an interest in these comparative metrics. I wanted to know how the effectiveness of our algorithms could be made calculable, what kinds of devices this might involve, and how entities like false positives (seeing things that were not there) and false negatives (not seeing things that were there) might be constituted. I wanted to relay these results to the ethical experts who had taken part in the previous demonstrations on the basis that a clear division between technical efficacy and ethical achievement was not possible (see Chapter 3 for more on ethics). Whether or not the system worked on these criteria would provide a further basis for ethical scrutiny.
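To give a sense of the kind of rule at stake, the sketch below sets out a minimal version of an abandoned-luggage IF-THEN rule of the sort described here and in Chapter 2: an alert is raised when a luggage-shaped object and the human-shaped object it arrived with remain separated beyond a distance threshold for longer than a time threshold. This is an illustrative reconstruction rather than the project’s code; the class names and threshold values are placeholders of my own.

```python
from dataclasses import dataclass

# Placeholder thresholds: the project's actual values are not reported here.
SEPARATION_DISTANCE = 3.0   # metres between owner and luggage
SEPARATION_TIME = 30.0      # seconds the separation must persist

@dataclass
class TrackedObject:
    object_id: int
    kind: str        # "human-shaped" or "luggage-shaped"
    x: float         # position estimated from the bounding box
    y: float

def distance(a: TrackedObject, b: TrackedObject) -> float:
    return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5

def abandoned_luggage_alert(luggage: TrackedObject,
                            owner: TrackedObject,
                            seconds_apart: float) -> bool:
    """IF a luggage-shaped object is separated from its human-shaped object
    by more than SEPARATION_DISTANCE for longer than SEPARATION_TIME,
    THEN raise an alert."""
    if luggage.kind != "luggage-shaped" or owner.kind != "human-shaped":
        return False
    return (distance(luggage, owner) > SEPARATION_DISTANCE
            and seconds_apart > SEPARATION_TIME)

# Example: an owner has walked away from a bag and stayed away for a minute.
bag = TrackedObject(1, "luggage-shaped", x=10.0, y=2.0)
person = TrackedObject(2, "human-shaped", x=25.0, y=2.0)
print(abandoned_luggage_alert(bag, person, seconds_apart=60.0))  # True
```

The chapter’s troubles follow from everything this sketch takes for granted: that objects have already been correctly classified as human-shaped or luggage-shaped, and that the ‘owner’ relation between them has been correctly established.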

In the 6 hours that the system ran, a period in which the conventional surveillance system would be expected to detect 6 items of potentially lost or abandoned luggage, the algorithmic system detected 2654 potentially suspicious items. This result went so far off the scale of predicted events that the accuracy of the system could not even be measured. That is, there were just too many alerts for anyone to go through and check the number of false positives. The working assumption of the computer scientists was that there were likely to be around 2648 incorrect classifications of human-shaped and luggage-shaped objects that had for a time stayed together and then separated. In later checking of a random sample of alerts, it turned out the system was detecting as abandoned luggage such things as reflective surfaces, sections of wall, a couple embracing and a person studying a departure board. Some of these were not fixed attributes of the airport and so did not feature in the digital maps that were used for background subtraction. However, object parameterisation should have been able to calculate that these were not luggage-shaped objects, and the flooring and walls should have been considered fixed attributes.
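For a sense of why these numbers could not be turned into a usable accuracy measure, a rough calculation using the figures reported above may help; the bound below is my own arithmetic on those figures, not a metric the project computed.

```python
# Figures from the text: the conventional system flags roughly 6 items in
# 6 hours; the algorithmic system produced 2654 alerts in the same period.
alerts = 2654
expected_true_events = 6

# Even if every genuine event were among the alerts, at least
# alerts - expected_true_events of them must be false positives.
min_false_positives = alerts - expected_true_events   # 2648
max_precision = expected_true_events / alerts         # ~0.0023, i.e. under 0.3%

print(min_false_positives, round(max_precision, 4))

# Counting false negatives (genuine events the system missed) would require
# checking all 2654 alerts against the 6 expected events - the task the team
# had no time to perform before the operatives arrived.
```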

However, in the immediate situation of the demonstration, there was not even time for this random sampling and its hastily developed explanations—these all came later. The airport surveillance operatives turned up just as the 2654 results were gathered together and the project team had to meekly hand these results to the operatives’ manager as evidence of system (in)efficacy.

The results of these tests also highlighted the limitations of my initial ethical demonstrations (described previously). The ‘recorded’ footage of the system in operation that I had (apparently) simply replayed to audiences began to seem distinctly at odds with the results from the live testing. What was the nature of the videos that I had been showing in these demonstrations? On further discussion with the computer scientists in the project, it turned out that system accuracy could be managed to the extent that the parameters of the footage feeding into the system could be controlled. For example, the computer scientists had worked out that a frame rate of 15 frames per second was ideal for providing enough detail without overloading the system with irrelevant footage. This frame rate enabled the system to work elegantly (see Chapter 2), using just enough processing power to produce results in real time. They also suggested that certain types of camera location (particularly those with a reasonably high camera angle, no shiny floors and consistent lighting) led to better results for the system. The conditions of filming were also a pertinent matter; crowds of people, sunshine and too much luggage might confuse the system. As we can see in the following images (Figs. 5.1, 5.2, and 5.3), the system often became ‘confounded’ (to use a term from the computer scientists).

Fig. 5.1 A human-shaped object and luggage-shaped object incorrectly aggregated as luggage

Fig. 5.2 A luggage-shaped object incorrectly classified as separate from its human-shaped object

Fig. 5.3 A human-shaped object’s head that has been incorrectly classified as a human in its own right, measured by the system as small and therefore in the distance and hence in a forbidden area, set up for the demonstration

Collins (1988) and Coopmans (2010) suggest that central to demonstrations are questions of who is in a position to see what. However, the demonstrations considered here suggest that seeing is not straightforwardly a matter of what is revealed and what is concealed. In the development and demonstration of the algorithmic system, the straightforward division between the seeing demonstrator and the audience whose vision is heavily managed is made more complex. As a researcher and demonstrator, I was continually developing my vision of the algorithms and, in different ways, the end-users as audience were presented with stark (in)efficacy data to help shape how they might see the algorithms. The computer scientists also had a developing understanding of algorithmic vision (learning more about the precise ways that the system could not straightforwardly see different floor coverings or lighting conditions or manage different frame rates across different cameras). And some features of how our algorithms grasped everyday life were never resolved in the project. In the following image (Fig. 5.4), the algorithm has selected out a feature of the fixed attributes of the airport (a wall) as a luggage-shaped object, something that ought to be impossible using background subtraction as the wall ought to be part of the background map:

Fig. 5.4 Wall as a luggage-shaped object
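How can a wall end up as a luggage-shaped object? Background subtraction, in its simplest form, compares each incoming frame against a stored background map and treats any pixel that differs by more than a threshold as foreground. The sketch below is a deliberately simplified, generic version of that idea rather than the project’s implementation; the threshold and size band are placeholders. It is enough to show that anything missing from the background map, or anything whose appearance shifts with lighting or reflections, is handed on to object parameterisation as a candidate object.

```python
import numpy as np

DIFF_THRESHOLD = 40             # illustrative grey-level difference
MIN_AREA, MAX_AREA = 200, 5000  # illustrative size band for 'luggage-shaped'

def foreground_mask(frame: np.ndarray, background_map: np.ndarray) -> np.ndarray:
    """Mark as foreground every pixel that differs from the background map by
    more than the threshold. A wall absent from the map, a reflective floor or
    a change in lighting all show up here as candidate 'objects'."""
    return np.abs(frame.astype(int) - background_map.astype(int)) > DIFF_THRESHOLD

def luggage_shaped(mask: np.ndarray) -> bool:
    """Crude parameterisation: treat a foreground region as luggage-shaped
    if its area falls within a plausible size band."""
    area = int(mask.sum())
    return MIN_AREA <= area <= MAX_AREA

# Example: a patch of wall that never made it into the background map.
background = np.zeros((100, 100), dtype=np.uint8)   # the map believes this area is empty
frame = background.copy()
frame[20:60, 30:70] = 120                           # bright wall section in the live frame
mask = foreground_mask(frame, background)
print(luggage_shaped(mask))  # True: the wall is offered up as a luggage-shaped candidate

# A bright, high-contrast object clears DIFF_THRESHOLD far more reliably than
# a dark one, which is the intuition behind the team's later search for 'more
# colourful' luggage in preparing the live demonstration.
```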

Further, the cast of those involved in seeing in these demonstrations needs to be extended to incorporate our algorithms too. In the ethical demonstrations, to reveal to the audience, but not the algorithm, that the data was recorded involved some integrity (those invited to hold the technology to account were at least apparently informed of the nature of the data being used and, if the recorded nature of the data was concealed from the algorithm, then the demonstration could be presented as sufficiently similar to using live data to maintain its integrity). However, following the disappointing results of the user demonstration and further discussions with the computer scientists regarding the recorded data used in the ethical demonstrations, it transpired that the algorithms were not entirely in the dark about the nature of the footage. The computer scientists had a developing awareness that the algorithms could see a space with greater or lesser confidence according to camera angles, lights, the material floor covering, how busy a space happened to be and so on. Using recorded data that only included ‘unproblematic’ footage enabled the algorithms to have the best chance of seeing the space and to be recorded seeing that space successfully. To replay these recordings as the same as live data was to conceal the partially seeing algorithm (the algorithm that sees well in certain controlled conditions). Algorithmic vision (how the algorithm goes about seeing everyday life) and the constitution of the spaces in which the algorithms operate (including how the algorithms compose the nature of people and things) were entangled with questions of material, visual and moral integrity to which we will return below. However, first and most pressing for the project team was the question of what to do about demonstrating the ethical surveillance system to project funders given the disastrous efficacy results.

Demonstration for Project Funders

A meeting was called among project participants following the end-user demonstration. The dominant theme of the discussion was what to do about the rapidly approaching demonstration to project funders given the results of the end-user demonstrations. This discussion was made particularly tense when one of the computer scientists pointed out that in the original project description, a promise had been made to do a demonstration to the project funders not only of the airport, but also of the other end-user location—the train station. Much of the discussion during the meeting concerned the technical challenges, now becoming apparent, of digitally mapping the fixed attributes of a space as complex as an airport in order for the algorithms to classify objects as human-shaped or not. Then there were the further complexities of mapping a train station too: both locations had camera angles not favoured by the algorithms (e.g. being too low), both were subject to changing lighting conditions and frame rates, both had multiple flooring materials and both were busy with people and objects.

The following excerpts have been produced from fieldnotes taken during the meeting. The first option presented during the meeting was to use recorded data:

Computer Scientist 1:

it doesn’t work well enough. We should use recorded data.

[no response]

The silence that followed the computer scientist’s suggestion was typical of what seemed to be multiple awkward pauses during the meeting. One reason for this might have been an ongoing difference among members of the project team as to how responsibility ought to be distributed for the disappointing end-user demonstration results. Another reason might have been a concern that to use recorded data was to effectively undermine the integrity of the final project demonstration. The computer scientist went on to make a further suggestion to the project coordinator:

Computer Scientist 1:

do you want to tell the truth?

[no response]

The pause in the meeting following this second suggestion was slightly shorter than the first and was breached by the project coordinator who began to set out a fairly detailed response to the situation, giving the impression that he had been gathering his thoughts for some time. In his view a live test in the airport, using live video streams was the only possibility for the demonstration to funders. For the train station, his view was different:

Project Coordinator:

We should record an idealised version of the system, using recorded data. We can just tell the reviewers there’s not enough time to switch [configurations from airport to train station]. What we are saying matches the [original project description]. We will say that a huge integration was required to get two installations.

In this excerpt the project coordinator suggests that for the train station, not only will recorded footage be used, but the demonstration will be ‘idealised’. That is, a segment of recorded data will be used that fits computer scientists’ expectations of what the algorithms are most likely to correctly see and correctly respond to (where ‘correct’ in both cases would be in line with the expectations of the project team). Idealising the demonstration is closer to a laboratory experiment than the initial system experimentation we saw in Chapter 2. It involved controlling conditions in such a way as to extend the clean and pure, controlled boundaries of the laboratory into the everyday life of the train station (drawing parallels with Muniesa and Callon’s (2007) approach to the economist’s laboratory) to manage a display of virtuosity (Collins 1988). This is the first way in which questions of integrity were opened: only footage from the best-positioned cameras, featuring people and things on one kind of floor surface, in one lighting condition, at times when the station was not busy, would be used. However, there was also a second question of integrity at stake here: the demonstration would also feature recorded system responses. This meant that the computer scientists could keep recording responses the system made—how our algorithms went about showing they had seen, grasped, classified and responded to everyday life correctly—until they had a series of system responses that matched what they expected the algorithms to see and show. Any ‘errors’ by the algorithms could be removed from the recording.

At this moment, several meeting participants looked as if they wanted to offer a response. However, the project coordinator cut off any further discussion:

Project Coordinator:

I don’t think there’s any need to say anything on any subject that was not what I just said.

The immediate practical outcome of this meeting was to distribute tasks for the idealised, recorded train station demonstration (project members from StateTrack, the train operator, were to start recording video streams and provide computer scientists with more detail on their surveillance camera layouts, the computer scientists were to start figuring out which cameras to use in the recordings, and so on). The distribution of tasks was seemingly swift and efficient, unlike the initial sections of the meeting, which had been characterised by what appeared to be awkward pauses. For the train station demonstration, revelation and concealment (Coopmans 2010) would be carefully managed through the positioning of witnesses (Smith 2009). The ethical future to be brought into being would be staged with a naturalistic certainty—as if the images were just those that one would see on entering the train station, rather than a narrow selection of images from certain cameras, at certain angles, at certain times, of certain people and certain objects.

However, this focus on an idealised, recorded demonstration for the train station left the demonstration for the airport under-specified, aside from needing to be ‘live’. Two follow-up meetings were held in the airport to ascertain how a ‘live’ demonstration of the technology could be given to the project funders. Allowing the algorithms to run on their own and pick out events as they occurred in the airport continued to provide disappointing results. The project coordinator maintained the need for a ‘live’ demonstration and in particular wanted to put on a live demonstration of the system detecting abandoned luggage, describing this as the ‘king’ of Event Detection (on the basis that it was perceived by the computer scientists and funders as the most complex event type to detect). In a second airport meeting, a month before the final demonstration, the project team and particularly the project coordinator became more concerned that the algorithms would not work ‘live’. In response to these problems, the project team began to move towards idealising the ‘live’ demonstration as a means to increase the chance that the algorithms would successfully pick out abandoned luggage. To begin with, the airport operators and computer scientists discussed the times when the airport would be quietest, on the basis that the number of people passing between a camera and an item of abandoned luggage might confuse the algorithm:

Computer Scientist 2:

Do we need to test the busy period, or quiet time like now?

Project Coordinator:

Now I think is good.

Computer Scientist 1:

We need to find the best time to test… it cannot be too busy. We need to avoid the busy period because of crowding.

Once the ideal timing for a demonstration had been established (late morning or early afternoon, avoiding the early morning, lunchtime or early evening busy periods where multiple flights arrived and departed), other areas of activity that could be idealised were quickly drawn into discussion. It had become apparent in testing the technology that an item of abandoned luggage was identified by airport staff using the conventional surveillance system on average once an hour. To ensure that an item of luggage was ‘abandoned’ in the quiet period would require that someone known to the project (e.g. an airport employee in plain clothes) ‘abandoned’ an item of luggage. However, if the luggage was to be abandoned by someone known to the project, this opened up further opportunities for idealising the ‘live’ demonstration:

Project Coordinator:

Is there a set of luggage which will prove better?

Computer Scientist 1:

In general some more colourful will be better.

The computer scientist explained that the background subtraction method for Event Detection might work more effectively if objects were in strong contrast to the background (note Fig. 5.3, where the system seems to have ‘lost’ the body of the human-shaped object as it does not contrast with the background and focused on the head as a human-shaped object in its own right). The system could not detect colour as such (it did not recognise yellow, green, brown, etc.), but the computer scientist reasoned that a very colourful bag would stand in contrast to any airport wall, even in shadow, and so might be more straightforward for the algorithms to classify:

Computer Scientist 2:

We could wrap it [the luggage] in the orange [high visibility vest].

Project Coordinator:

Not for the final review, that will be suspicious.

Computer Scientist 2:

We could use that [pointing at the yellow recycle bin].

Computer Scientist 1:

That is the right size I think, from a distance it will look practically like luggage. We will check detection accuracy with colour … of the luggage. Maybe black is worst, or worse than others, we would like to check with a different colour. We should test the hypothesis.

Computer Scientist 2:

What if we wrap the luggage in this [yellow printed paper].

Computer Scientist 1:

I think yes.

Computer Scientist 2:

Would you like to experiment with both bags?

Computer Scientist 1:

Yes, we can check the hypothesis.

For the next run through of the test, one of the project team members’ luggage was wrapped in paper to test the hypothesis that this would increase the likelihood of the object being detected by the algorithm (Fig. 5.5).

Fig. 5.5 Luggage is idealised

The hypothesis proved to be incorrect as the results for both items of luggage were broadly similar and continued to be disappointing. However, it seemed that the algorithms could always successfully accomplish background subtraction, classify objects as human-shaped and luggage-shaped and create an alert based on their action state as separate for a certain time and over a certain distance in one, very tightly delineated location in the airport. Here the IF-THEN rules of the algorithm seemed to work. The location provided a further basis to idealise the ‘live’ demonstration, except that the person ‘abandoning’ the luggage had to be very precise. In initial tests the computer scientists and the person dropping the luggage had to remain on their phones, precisely coordinating and adjusting where the luggage should be positioned. It seemed likely that a lengthy phone conversation in the middle of a demonstration and continual adjustment of the position of luggage would be noticed by project funders. The project team discussed alternatives to telephone directions:

Project Coordinator:

We should make a list of exact points where it works perfectly, I can go with a marker and mark them.

Computer Scientist 1:

Like Xs, X marks the spot.

Project Coordinator:

But with cleaning, in one month it will be erased. We want a precise place, the reviewers will stay in the room to observe so they won’t see if it’s marked.

Computer Scientist 1:

We can put some tape on the floor, not visible from the camera.

After two days of rehearsal, the project coordinator was satisfied that the airport employee was leaving the luggage in the precisely defined location on a consistent basis, that the luggage selected was appropriate, that it was being left in a natural way (its position was not continually adjusted following telephone instructions) and that the algorithm was successfully classifying the luggage-shaped object and issuing an alert that funders would be able to see in the demonstration.

At this moment it appeared that the demonstration would be ‘live’ and ‘idealised’, but what of its integrity? I was still present to report on the ethics of the technology under development and the project itself. In the final preparation meeting prior to the demonstration for research funders, I suggested that a common motif of contemporary ethics was accountability and transparency (Neyland 2007; Neyland and Simakova 2009; also see Chapter 3) and that this sat awkwardly with the revelation and concealment and positioning of witnesses being proposed. On the whole, the project team supported the idea of making the demonstration more accountable and transparent—this was, after all, a research project. The project team collectively decided that the demonstration would go ahead, but the research funders would be told of the actor’s status as an employee of the airport, that the abandonment itself was staged, and that instructions would be given to the actor in plain sight of the funders. Revelation and concealment were re-balanced and perhaps a degree of integrity was accomplished.

Integrity, Everyday Life and the Algorithm

In this chapter, the complexity of the everyday life of our algorithms appeared to escalate. Moving from initial experimentation, in which the aim was to grasp the human-shaped and other shaped objects, towards testing and demonstrations in which the everyday life of the airport and train station had to be accounted for, proved challenging. Building on the partial failures of the deleting machine in Chapter 4 that pointed towards emerging problems with the system, here demonstrations for end-users of the full system highlighted significant problems. But these were only one of three types of demonstration (to generate ethical discussion, for end-user operatives and for project funders).

The complexities of these demonstrations can be analysed through the three themes we initially marked out in the STS literature on future orientations of technology. Each of the forms of demonstration intersects these themes in distinct ways. For example, the demonstrations for ethical audiences were initially conceived as free from many of the concerns of revelation and concealment, temporal oscillation and carefully scripted witnessing. I had (naively) imagined these demonstrations were occasions in which the technology would be demonstrated in an open manner, inspiring free discussion of its potential ethical implications. Yet the demonstration to end-users and the prior attempt to collect efficacy data to render the algorithmic system comparable with the conventional surveillance system (but with added ethics) revealed the extent of concealment, temporal oscillation and carefully scripted witnessing that had been required to put together the videos of the system for the ethical demonstrations. I could now see these videos as demonstrably accounting for an algorithmic technology with capabilities far beyond those displayed to end-users. We could characterise the ethical demonstration as a kind of idealised display of virtuosity (Collins 1988), but one which no project member had confidence in, following the search for efficacy data for end-users.

Subsequent discussions of the form and content of the demonstrations for project funders suggest that a compromise on integrity was required. The project coordinator looked to carefully manage revelation and concealment (for the train station, only using recorded footage, within conditions that the algorithms could see, only using recorded system responses and only using those responses when the system had responded correctly; or, in the airport, controlling the type of luggage, its location, its careful ‘abandonment’), temporal oscillation (using the footage to conjure an ethical surveillance future to be made available now) and the elaboration of a world into which witnesses could be scripted (with the computer scientists, project manager, algorithms and myself initially in a different position from which to see the world being offered to project funders).

Yet discussion of demonstrations and their integrity should not lead us to conclude that this is simply and only a matter of deception. Attending to the distinct features of integrity through notions of morality, materiality and vision can help us to explore what kind of everyday life our algorithms were now entering into. Firstly, our algorithms have been consistently oriented towards three ethical aims (to see less and address privacy concerns, store less and address surveillance concerns, and only use existing algorithms as a means to address concerns regarding the expansion of algorithmic surveillance). Articulating the aims in ethical demonstrations constituted the grounds for a form of material-moral integrity—that the likelihood of a specific materially mediated future emerging through the algorithmic system could be adjudged through the demonstrations. The demonstrations thus involved bringing a material-moral world into being by clearly indexing the certainty and achievability of that world, creating a relation of undoubted correspondence between the everyday life of the demonstration and the future everyday that it pointed towards. Ethical experts were drawn into this process of bringing the world into being so that they might attest to the strength, veracity and reliability of the demonstrated world to which they had been witness. In demonstrations to funders, the latter were also inscribed into the material-moral world being indexed so that they might attest to the project funding being money well spent. The pressure towards moral-material integrity is evidenced through: the project’s own ethical claims positioning the tasks of the project as achieving a recognisably moral improvement to the world; the project participants’ discussion of the demonstrations, which appears to be an attempt to hold onto some moral integrity; and recognition by project members of the impending demonstration to project funders and attempts to understand and pre-empt funders’ concerns and questions by designing a suitable demonstration with apparent integrity.

For this material and moral integrity to hold together, the demonstration must operate in a similar manner to a fake artwork. Fake artworks must be able to convincingly index and thus constitute a context (e.g. a history of sales, appearances in auction catalogues). And the work of indexing must appear effortless, as if the artwork and its context have always been what they are; that those called upon to witness the indexing can be confident that if they were to go to the context (an auction catalogue), it would definitively point back to the artwork. Our algorithms must similarly index or point to (and thus constitute) a context (the everyday life of a train station and airport, of human-shaped objects and abandoned luggage) in a manner that is sufficiently convincing and seemingly effortless that if those called upon to witness the demonstration—such as project funders and ethical experts—went to the context (the train station or airport), they would be pointed back towards the images displayed through the technology (the footage selected by the algorithm showing events). The alerts shown need to be convincingly of the everyday life of the train station or airport (rather than just a few carefully selected cameras) and any and all events that happen to be occurring (rather than just a few select events, from certain angles, in certain lighting conditions, with carefully resourced and placed luggage). This is required for the system to be able to hold together its material and moral integrity and convince the witnesses they don’t need to go to the train station or airport and assess the extent to which the footage they have been shown is a complete and natural representation of those spaces. In other words, the technology must be able to show that it can alert us to (index) features of everyday life out there (context) and everyday life out there (context) must be prepared in such a way that it convincingly acts as a material whole with integrity from which alerts (index) have been drawn. The relation of undoubted correspondence must operate thus.

The moral-material integrity holds together for as long as movement from index to context and back is not questioned and the ethical premise of the technology is maintained; if there is a failure in the index-context relation—if it becomes a relation of doubtful correspondence—this would not only question the ethical premise of the project, but also the broader motives of project members in putting on the demonstration. The move—albeit late in the project—to make the demonstration at least partially transparent and accountable reflects this pre-empting of possible concerns that research funders might have held. Idealising the messiness of multiple floor coverings, lighting conditions, ill-disciplined passengers and luggage was relatively easily managed in the train station demonstration as it was displayed through recorded footage. However, idealising the ‘live’ airport demonstration and maintaining a natural and effortless relation of undoubted correspondence between index and context was much more challenging. Rehearsals, tests (of luggage and those doing abandonment) and off-screen control of the space (e.g. by marking the space where luggage must be dropped) were each likely to compromise the material-moral integrity of the demonstrations. Revealing their idealised features was perhaps unavoidable.

Secondly, the work of our algorithms provides us with an opportunity to review the complexities of morality and vision in demonstrations in new ways. Previously, Smith (2009) has suggested a complex relationship between originals, fabrications and partial fabrications in viewing the staged drama of a demonstration and Coopmans (2010) has argued that revelation and concealment are central to the practices of seeing in demonstrations. These are important contributions, and through our algorithms we can start to note that what the technology sees (e.g. the algorithms turn out to be able to see certain types of non-reflective flooring better than others), the distribution of vision (who and what sees) and the organisation of vision (who and what is in a position to produce an account of who and what) are important issues in the integrity of demonstrations. The train station demonstration can have more or less integrity according to this distribution and organisation of vision. If recorded footage is used but the algorithms do not know what it is they will see, this is noted by project participants as having more integrity than if recorded decision-making by the algorithms is also used. In the event both types of recording were used. Discussions in project meetings around the demonstration for project funders led to similar questions. The algorithms need to see correctly (in classifying luggage as luggage-shaped objects) and to be seen correctly seeing (in producing system results) by, for example, project funders and ethical experts, in order to accomplish the visual-moral integrity to which the project has made claim: that the algorithms can grasp everyday life.

Conclusion

In this chapter, the focus on demonstrating our algorithms’ ability to grasp everyday life, compose accounts of everyday life and become the everyday of the airport and train station has drawn attention to notions of integrity. Given the project’s ethical aims, work to bring a world into being through demonstration can be considered as concerted activities for bringing about a morally approved or better world. The moral terms of demonstrations can thus go towards establishing a basis from which to judge their integrity. Close scrutiny of demonstration work can then open up for analysis two ways of questioning the integrity of the moral world on show. Through material integrity, questions can be asked of the properties of demonstrations, what they seem to be and how they indexically provide for a means to constitute the moral order to which the demonstration attests. Through visual integrity, questions can be posed of who and what is seeing, the management of seeing, what it means to see correctly, and be seen correctly. Material and visual integrity is managed in such a way as to allow for the demonstrations to produce a relation of undoubted correspondence between index and context, establishing the integrity of the material and visual features of the technology: that it sees and has been seen correctly, and that the acts of seeing and those doing the seeing can be noted as having sufficient moral integrity for those acts of seeing to suffice.

The problems our algorithms have with grasping and composing an account of the everyday life of the airport and train station require that this integrity is compromised. Just taking Fig. 5.3 as an example, the human-shaped object composed by the algorithm does not match the human in the real time and real space of the airport (the system has placed a bounding box only around the human’s head). Algorithmic time and space has produced a mismatch with airport time and space; the everyday life of the algorithm and the airport are at odds and the relation of undoubted correspondence between algorithmic index and airport context does not have integrity; the algorithm’s composition of, rather than grasping of, the human is laid bare. A compromise is required to overcome this mismatch. Years of work to grasp the human-shaped object (Chapter 2), to make that grasping accountable (Chapter 3) and to demarcate relevance from irrelevance (Chapter 4) are all now at stake. In the next chapter, we will see that our algorithms’ problems in seeing and the need to adopt these compromises pose questions for composing and managing the market value of the technology.

Notes

1. See, for example: http://www.theguardian.com/uk/2008/jul/23/canoe.ukcrime and http://news.bbc.co.uk/1/hi/uk/7133059.stm.

2. See: http://www.dailymail.co.uk/news/article-2478877/Man-sends-letters-using-DIY-stamps-Royal-Mail-failed-notice.html and http://metro.co.uk/2013/10/29/royal-fail-anarchist-creates-freepost-system-with-his-own-stamps-4165420/.

3. See: http://www.thefootballsupernova.com/2012/04/ali-dia-greatest-scam-premier-league.html and http://www.espncricinfo.com/county-cricket-2011/content/story/516800.html.

4. See: http://www.theguardian.com/world/2013/aug/20/government-fake-bomb-detectors-bolton.

5. See, for example: http://www.news.com.au/lifestyle/health/fake-nigerian-doctors-arrested-over-womens-deaths/story-fneuz9ev-1226700223105.

6. See, for example: http://www.dailymail.co.uk/news/article-2408159/Fake-plastic-surgeon-did-treatments-kitchen-left-woman-needing-hospital-treatment.html.

7. See, for example: http://www.bbc.co.uk/news/uk-england-tyne-24716257; http://news.bbc.co.uk/1/hi/world/asia-pacific/1310374.stm; http://edition.cnn.com/2003/US/03/14/sprj.irq.documents/; http://www.theguardian.com/uk/2008/may/05/nationalarchives.secondworldwar; http://news.bbc.co.uk/1/hi/education/1039562.stm; http://www.timeshighereducation.co.uk/news/fake-verifiable-degrees-offered-on-internet/167361.article; and http://www.badscience.net/2008/11/hot-foul-air/#more-824.

8. See Stone (2010) and, for example: http://www.science20.com/between_death_and_data/5_greatest_palaeontology_hoaxes_all_time_3_archaeoraptor-79473.

9. With around 2.8% of UK pound coins estimated to be fake. See: http://www.bbc.co.uk/news/business-12578952.

10. We also find Sokal’s fake social science paper sitting alongside a greater number of fake natural science discoveries (e.g. in cloning: http://news.bbc.co.uk/1/hi/world/asia-pacific/4554422.stm and http://www.newscientist.com/article/dn8515-cloning-pioneer-did-fake-results-probe-finds.html#.UoON3YlFBjp) and fakes in other fields such as psychology (see faked data in claims that white people fear black people more in areas that are dirty or that people act in a more selfish manner when hungry: http://news.sciencemag.org/education/2011/09/dutch-university-sacks-social-psychologist-over-faked-data).

11. Concise Oxford Dictionary (1999).

12. Such as van Meegeren, Elmyr de Hory, Tom Keating, John Drewe and John Myatt, and the Greenhalgh family.

13. Alder et al. (2011) cite the example of the portrait of the Doge Pietro Loredano which was and then wasn’t and now is again considered a painting by Tintoretto.

14. I use the term integrity here rather than provenance or an evidential focus on “chain of custody” (Lynch 1998: 848) as a first step towards exploring index-context relations (taken up further in the next section).

15. Indexical is used in the ethnomethodological sense here, see Garfinkel (1967).

16. For more on the complexities of seeing, see, for example: Daston and Galison (1992), Fyfe and Law (1988), Goodwin (1994, 1995), Goodwin and Goodwin (1996), Hindmarsh (2009), Jasanoff (1998), and Suchman (1993).

17. In a similar manner to Latour’s allies of scientific experiments sent out into the world to attest to the strength of a scientific fact or discovery.

18. Real time and real space here refer to the naturally occurring unfolding of events in the train station and airport in contrast to the experimental stage of the project where spaces and the timing of events might be more controlled. Here, our algorithms would have to make their first steps into a less controlled environment, causing anxiety for the computer scientists.