Introduction

‘Big Data’ has arrived as a topic of concern in criminology and socio-legal studies [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29]. This has come in the wake of more general considerations of the rise of this phenomenon, which involves amounts of data so massive – measured in gigabytes, terabytes, petabytes, zettabytes and beyond – as to be incomprehensible to the human mind and manipulable only by machine means [30,31,32,33,34]. There are two dominant contrasting positions in the criminological and socio-legal fields. On the one hand are those positioned in terms of technology, crime and police ‘science’ who are interested in the efficacy of technologically enhanced policing. This view is tempered by the realization that modern police agencies have always been at the forefront of technological innovation and that ‘scientification’ is a lasting motif in policing [35]. When it comes to questions of efficacy, some observe that social and organizational factors in the police operational environment often inhibit technological innovation, while others stress the ability of transformative leaders to effect change. On the other hand are a range of critical positions, all of which point to the absence of and need for adequate legal accountability, regulation and transparency. For critics, the demystification of Big Data Analytics is crucial since only “if we can understand the techniques, we can learn how to use them appropriately” (Chan and Moses, p. 677 [4]). There is a tendency in the criminological and socio-legal literature to agree that the signifier ‘Big Data’ does not mark a significant historical break and to emphasise continuity in police techno-scientific evolution. There is a fair degree of consensus that, apart from the need to involve mathematicians, statisticians, computer programmers and allied technical experts to manipulate the large volumes of data, this phenomenon represents variations on old challenges about how to adapt police organization and legal tools to new technologies.

This paper takes a more radical view. We argue that the move to stochastic governance, defined as the governance of populations and territory by means of statistical representations based on the manipulation of Big Data, is enabled by the moral economy of global neo-liberalism and that a fundamentally new form of policing social order is evident in currently emergent patterns of social controlling. Our conceptualization of moral economy needs to be unpacked at the outset because it is rather different from the one established in E. P. Thompson’s famous essay The Moral Economy of the English Crowd in the Eighteenth Century [36], in which he sought to explain the meaning of pre-modern hunger rioters clashing with emerging market capitalism [37]. In Thompson’s formulation the hunger rioters of the eighteenth century acted in “the belief that they were defending traditional rights” and, in so doing, were “supported by the wider consensus of the community” ( [36], p. 78). For Thompson, the term ‘moral economy’ refers to “passionately held notions of the common weal” (ibid., p. 79) and is used as an antithesis of the ‘rational choice’ imperatives that characterize capitalist political economy (see also [38]). The concept of moral economy has thereby been taken to refer to moral sentiments such as compassion and communal solidarity in the face of overweening power. In contrast, and echoing Simon [39], Wacquant [40] and especially Fassin [41], we observe the emergence of a new governmental rationality that both shapes and is shaped by the moral economy of the neo-liberal public sphere, in which the term moral economy refers to a set of values and emotions that make possible actions that would otherwise be deemed ill-willed, mean-spirited and harsh. Fassin defines moral economy as “the production, distribution, circulation and use of moral sentiments, emotions, and values, norms and obligations in social space” ( [41], p. 263). On this basis, we argue that the moral economy of neo-liberalism is plainly visible in the policing practices of stochastic governance.

The discussion of policing with Big Data that follows will illustrate this point. This paper considers the moral economy that both emerges from and is constitutive of stochastic governance. In what follows we explore policing power in its broad Foucauldian sense as ‘governance’ writ large [42] before going on to look at ‘predictive policing’ and the use of stochastic methods in the orchestration of policing in its more traditional sense [43]. Uncovering ‘policing with Big Data’ is a contribution to the critique of the moral economy of stochastic governance.

Visions of stochastic governance

There is a vein of socio-legal scholarship that considers policing in its widest sense – as the regulatory power to take coercive measures to ensure the safety and welfare of ‘the community’ – and the plurality of institutional auspices under which policing takes place [42, 44]. With this broad conception of policing in mind, in this section we discuss certain manifestations of policing with Big Data. In so doing, we give practical substance to the theoretical notion of stochastic governance.

On November 4th 2011, IBM trademarked the words ‘Smarter City’. That corporation’s narrative about smart cities is not novel. It draws on cybernetic theory and utopianism in equal measure to advance notions about computer-aided governance of cities, masking technocratic reductionism and financial interests behind a façade of bland assertion [45]. As José van Dijck [27] observed:

Over the past decade, datafication has grown to become an accepted new paradigm for understanding sociality and social behaviour. With the advent of Web 2.0 and its proliferating social network sites, many aspects of social life were coded that had never been quantified before – friendships, interests, casual conversations, information searches, expressions of tastes, emotional responses, and so on. As tech companies started to specialize in one or several aspects of online communication, they convinced many people to move parts of their social interaction to web environments. Facebook turned social activities such as ‘friending’ and ‘liking’ into algorithmic relations; Twitter popularized people’s online personas and promoted ideas by creating ‘followers’ and ‘retweet’ functions; LinkedIn translated professional networks of employees and job seekers into digital interfaces; and YouTube quantified the casual exchange of audio-visual content. Quantified social interactions were subsequently made accessible to third parties, be it fellow users, companies, government agencies or other platforms. The digital transformation of sociality spawned an industry that builds its prowess on the value of data and metadata – automated logs showing who communicated with whom, from which location and for how long. Metadata – not too long ago considered worthless by-products of platform-mediated services – have gradually been turned into treasured resources that can ostensibly be mined, enriched and repurposed into precious products (pp. 198–199)

Metadata from new social media is only part of what constitutes Big Data. Insurance, medical, education, tax and financial information – oh yes, and information regarding the flows of criminal justice – are also available for data mining (to programmers with the access codes). Big Brother and Big Business are entwined in the routine exploitation of Big Data. Corporate and governmental agencies mine all manner of warehoused data, seemingly with legitimacy, or at least acquiescence. Stochastic governance, the governance of social order through algorithmic calculation made actionable through policing and regulatory means, is generalized for the rationalization of society in all respects. Stochastic governance synchronizes, and thereby transcends, the apparent divide between private and public forms of social control. Stochastic governance is made possible by Big Data, which allows programmers to monitor and measure the behaviour of people and populations and allows for the manipulation and monetization of that behaviour.

The February 2016 issue of the North American publication Consumer Reports carried a series of articles on fitness trackers, smartphones, and the entertainment, communications and computerized gadgetry that festoon the twenty-first-century automobile (Vol. 81 No. 2). It also included a section titled ‘Who’s Tracking You in Public?’. Part of the report, titled ‘The Ghost in the Camera: how facial recognition technology mines your face for information’, focused on Facebook’s development of facial-recognition-enabled surveillance and noted that “facial recognition technology is coming soon to a mall near you”. According to Consumer Reports, facial recognition technology had even been deployed in churches (to monitor congregants’ attendance), Disney cruise ships (to monitor people having a good time), and streetscapes (to monitor the everyday movements of likely consumers), as well as in other manifestations of the ‘surveillance economy’ (see also [46]). Further:

A company called Herta Security, based in Barcelona, Spain, is one vendor of the technology. Its system is being used in casinos and expensive shops in Europe, and the company is preparing to open offices in Los Angeles and Washington DC. Retailers that use the Herta system receive alerts through a mobile app when a member of a VIP loyalty program enters the store … For now, security is the bigger business, however. Herta’s software was used at the 2014 Golden Globe Awards at the Beverly Hills Hilton to scan for known celebrity stalkers. The company’s technology may soon help bar known criminals in soccer stadiums in Europe and Latin America. Police forces and national security agencies in the US, the United Kingdom, Singapore, South Korea and elsewhere are experimenting with facial recognition to combat violent crime and tighten border security. (p. 42)

Dataveillance, the sine qua non of stochastic governance, is partly enabled through a financial sleight-of-hand. Users provide personal information in exchange for ostensibly free access to on-line platforms and new social media. Evidently, few people can be induced into technologically mediated interaction if there is a monetary cost to ensuring privacy. It is easier, much easier, to pay for online services by giving up access to personal data. At the same time, local, regional, national and international governmental agencies also compile a staggering amount of data – for things as mundane as library usage, school attendance and parking tickets. Dataveillance is the new normal. This is stochastic governance in practice. As Toshimaru Ogura ( [33], p. 272) notes, surveillance is always for “the management of population based on [the needs of] capitalism and the nation state”, echoing Oscar Gandy’s observation a decade earlier that the “panoptic sort is a technology that has been designed and is being continually revised to serve the interests of decision makers within the government and the corporate bureaucracies” ( [47], p. 95).

“The ideas of the ruling class are, in every epoch, the ruling ideas”, explained Karl Marx and Frederick Engels in The German Ideology, that is, “the class which is the ruling material force of society, is at the same time its ruling intellectual force … [and that is] the class which has the means of material production at its disposal” ( [48], p. 64). What is new is that the ‘means of production’ now include the technologies that make ‘Big Data’ possible. Gandy argued that these technologies underlie the ‘panoptic sort’, a discriminatory process that sorts individuals on the basis of their estimated value and worth and, we would argue, also their estimated degree of risk (both to themselves and more generally) and their threat (to the social order). The panoptic sort is configured by stochastic governance into cybernetic social triage which privileges elites while it disciplines and categorizes the rest. The ideological claim coming from those who would rule by these means is that the algorithms are neutral, that they mine data which are facts, and facts are neutral, and that governance through ‘Big Data’ is good for everybody.

Stochastic governance aims to improve the efficiency and sustainability of populations and territory while reducing costs and resource consumption. Efforts at installing the ‘smart city’ are a manifestation of this project. In the ‘smart city’, dataveillance, combined with other forms of surveillance (achieved through strategically placed sensors and cameras), collects data regarding every imaginable facet of living. Data is amassed in digital ‘warehouses’ where it can be aggregated, analysed and sorted by governments and local authorities in order to manage social challenges of all types, from crime and public safety, to traffic and disaster management, energy use and waste disposal, health and well-being, and creativity and innovation [49]. Stochastic governance allows governmental programmers to preside over population management and territorial grooming. Such technology has already been installed in a number of cities, including Amsterdam and Singapore. In them, ‘citizens’ have few alternatives other than ones powered by these technologies, and some of the benefits seem obvious. For example, in the ‘smart city’ sensors detect the intensity of road usage and intelligent traffic lights ease vehicular flows, thus minimizing the time spent waiting at intersections and preventing congestion. Stochastic governance even subjects human bodies to its logic. Numberless people already submit to daily measurement of their bodily functions, using technology to monitor the number of steps they take in a day, the number of hours they sleep at night and the calories they consume in between – and the smartphones that nearly everyone owns constantly monitor where they are and what they are doing. For example, research using Google search data found significant correlations between certain keyword searches and body-mass-index levels, prompting the authors to suggest that their analysis could be “particularly attractive for government health institutions and private businesses such as insurance companies” [50]. Another example is a report for the UK NHS which suggested linking various forms of welfare benefit to claimants’ visits to physical fitness centers and proposed the extension of tax rebates to people who give up smoking, lose weight or drink less alcohol [51].
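To give a concrete sense of what the kind of correlational analysis cited above involves, the following is a minimal, purely illustrative sketch in Python. The figures and variable names are hypothetical (they are not the cited study’s data); the sketch simply computes a Pearson correlation between regional search-term volumes and average body-mass-index figures.

# An illustrative sketch (synthetic data, not the cited study's) of correlating
# regional search-term volumes with average body-mass-index figures.

import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-region data: normalized search volume for a diet-related term,
# and average BMI for the same regions.
search_volume = np.array([42.0, 55.0, 61.0, 48.0, 70.0, 39.0, 66.0, 58.0])
average_bmi   = np.array([26.1, 27.3, 28.0, 26.8, 29.1, 25.7, 28.4, 27.5])

r, p_value = pearsonr(search_volume, average_bmi)
print(f"Pearson r = {r:.2f}, p = {p_value:.3f}")

Nothing more sophisticated than this is required, in principle, to turn routinely harvested search metadata into a population-level indicator of the kind the cited authors recommend to health agencies and insurers.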

Evgeny Morozov [52] imagines what a high level of dataveillance will accomplish once combined with insurance logic. Social welfare benefits and insurance coverage could be linked to stochastic calculations based on warehoused data we ourselves willingly provide. “But”, he asks, “when do we reach a point where not using them [tracking apps] is seen as a deviation – or worse, an act of concealment – that ought to be punished with higher premiums?” And, we might add, denial of governmental services. Morozov offers other examples. Stochastic monitoring of credit card usage can be used to spot potentially fraudulent transactions, and data-matching can be used to compare people’s spending patterns against their declared income so that authorities can spot people (tax cheats, drug dealers, and other participants in illicit markets) who spend more than they earn. Then he says:

Such systems, however, are toothless against the real culprits of tax evasion – the super rich families who profit from various offshoring schemes or simply write outrageous tax exemptions into law. Algorithmic regulation is perfect for enforcing the austerity agenda while leaving those responsible for the fiscal crisis off the hook (ibid.).

Citing Tim O’Reilly, a Silicon Valley venture capitalist, and Brian Chesky, the CEO of Airbnb, Morozov observes that in this new social order the citizen subjects of what we are calling stochastic governance can write code in the morning, drive Uber cars in the afternoon and rent out their kitchens as restaurants in the evening. The ‘sharing economy’ is but a new form of proletarianization and the ‘uberization’ of everyday life is a form of precarious employment where everybody’s performance is constantly the subject of evaluation for customer service, efficiency and worthiness. Echoing Foucault, stochastic governance gets into the capillaries of social order. In such a moral economy someone, somewhere will rate you as a passenger, a house-guest, a student, a patient, a consumer or provider of some service and social worth will be a matter of algorithmic calculation. If one is honest and hardworking the reputational calculations will be positive, but if one is deviant, devious or simply too different, the calculations will not be so good.

This is not how stochastic governance looks to its proponents. As IBM reports on its website Analyzing the future of cities:

Competition among cities to engage and attract new residents, businesses and visitors means constant attention to providing a high quality of life and vibrant economic climate. Forward-thinking leaders recognize that although tight budgets, scarce resources and legacy systems frequently challenge their goals, new and innovative technologies can help turn challenges into opportunities. These leaders see transformative possibilities in using big data and analytics for deeper insights. Cloud for collaboration among disparate agencies. Mobile to gather data and address problems directly at the source. Social technologies for better engagement with citizens. Being smarter can change the way their cities work and help deliver on their potential as never before.

(http://www.ibm.com/smarterplanet/ca/en/smarter_cities/overview/).

Considering policing in its broadest sense, ‘policing with Big Data’ reveals something of the practice of stochastic governance and the moral economy of neo-liberalism. Alongside these considerations, it is also instructive to consider policing in the narrower sense of crime control, crime fighting and law enforcement.

Predictive policing, big data and crime control

According to a report published by the RAND Corporation:

Predictive policing is the use of analytical techniques to identify promising targets for police intervention with the goal of preventing crime, solving past crimes, and identifying potential offenders and victims. These techniques can help departments address crime problems more effectively and efficiently. They are used across the United States and elsewhere, and these experiences offer valuable lessons for other police departments as they consider the available tools to collect data, develop crime-related forecasts and take action in their communities [18].

An enthusiastic report in the New York Times suggested that the economics of crime control was a primary reason for the shift from old-style Compstat policing to policing based on predictive analytics [54]. Compstat, the New York police system that uses crime maps and police-recorded statistical information to manage police resource allocation, relied heavily on human pattern recognition and the subsequent targeting of police patrol. Predictive policing instead relies on computer algorithms to detect patterns and to predict future events from large quantities of data, and it aims to target police presence at the minimum necessary to achieve the desired results. The New York Times quoted a police crime analyst:

We’re facing a situation where we have 30 percent more calls for service but 20 percent less staff than in the year 2000, and that is going to continue to be our reality, so we have to deploy our resources in a more effective way … (ibid.)

This logic is not altogether new. In the late 1980s, the Minneapolis Police Department engaged in a series of experiments to measure the deterrent effects of police patrol. These studies used continuous-time, parametric event-history models to determine how long a police patrol presence was required to create ‘residual deterrence’. They showed that police patrols had to stop and linger at crime hot spots for about 10 minutes in order to “generate significantly longer survival times without disorders”. That is, police patrol vehicles were directed to stop at designated crime hot spots for between 10 and 15 minutes in order to optimize police time in the generation of deterrence ( [55], p. 649). What is novel about predictive policing is the increased power of the police surveillant assemblage, which “is patently undemocratic in its mobilization of categorical suspicion, suspicion by association, discrimination, decreased privacy and exclusion” ( [56], p. 71).
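The statistical logic of these deterrence experiments can be sketched briefly. The following Python fragment is illustrative only – it is not the Minneapolis study’s actual model or data – and fits a simple parametric (exponential) event-history model to synthetic ‘time until next disorder’ observations, comparing the residual deterrence implied by shorter and longer patrol stops.

# A minimal sketch of a parametric (exponential) event-history model, using
# synthetic "time until next disorder" observations for two patrol-stop durations.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observations: minutes from patrol departure until the next disorder,
# for short (~5 min) versus medium (~10-15 min) patrol stops at a hot spot.
time_to_disorder = {
    "short_stop": rng.exponential(scale=20.0, size=200),   # shorter residual deterrence
    "medium_stop": rng.exponential(scale=35.0, size=200),  # longer residual deterrence
}

for stop_type, times in time_to_disorder.items():
    # Exponential MLE: the hazard rate is the reciprocal of the mean survival time.
    rate = 1.0 / times.mean()
    # Probability the spot stays disorder-free for at least 30 minutes: S(t) = exp(-rate * t)
    surv_30 = np.exp(-rate * 30)
    print(f"{stop_type}: mean disorder-free time = {times.mean():.1f} min, "
          f"P(no disorder within 30 min) = {surv_30:.2f}")

The ‘optimal’ dwell time then falls out of comparing how long the estimated disorder-free (survival) period lasts against the patrol minutes spent producing it.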

By the early twenty-first century, after years of continuous enhancement of police information technology, the economic case for computer-aided police deployment had advanced even further. According to a report in The Police Chief, a magazine for professional law enforcement managers in the United States,

The strategic foundation for predictive policing is clear enough. A smaller, more agile force can effectively counter larger numbers by leveraging intelligence, including the element of surprise. A force that uses intelligence to guide information-based operations can penetrate an adversary’s decision cycle and change outcomes, even in the face of a larger opposing force. This strategy underscores the idea that more is not necessarily better, a concept increasingly important today with growing budget pressures and limited resources [57].

An earlier report in the same magazine explained how advanced ‘data mining’ was changing the management of law enforcement [58]. Data mining was once reserved for large agencies with big budgets, but advances in computational technologies made these tools cheaper and hence available to local law enforcement. According to its advocates,

… newer data mining tools do not require huge IT budgets, specialized personnel, or advanced training in statistics. Rather, these products are highly intuitive, relatively easy-to-use, PC-based, and very accessible to the law enforcement community (ibid.).

These developments are reasonably well documented in the academic and policy literature [2, 3, 18, 59]. The predictive policing mantra is closely associated with ‘geographic criminology’ [28] and the new ‘crime scientists’ who challenge ‘mainstream criminologists’ to engage with law enforcement practitioners in understanding the spatial and temporal factors that underlie and shape criminal opportunity structures (cf. [14, 60, 61]).

Several types of police data are typical fodder for predictive forecasting analysis: recorded crimes, calls for service, and police stop-and-search or ‘street check’ records [16]. However, this does not exhaust the list of potential and actual data types that can be stored in a ‘data warehouse’ and subjected to the rigours of police stochastic analysis. Courts and prisons data relating to offender release and bail, police traffic enforcement data, criminal justice DNA database records, residential tenancy changes based on real estate transactions and records of rental agreements, driver’s license and Medicare change-of-address notifications, telecommunications data (e.g. mobile phone ‘pings’ from cell towers) and a whole range of other governmental statistics and other ‘open source’ intelligence (e.g. Twitter, Facebook and other new social media) can also be included [5, 19, 62]. As one California-based police chief remarked, “predictive policing has another level outside the walls of the police department … it takes a holistic approach – how do we integrate health and school and land-use data?” (quoted in Pearsall, p. 19).

There are fantastic claims in this literature, such as the often-stated aim of crime forecasting: to predict crimes before they occur and thereby prevent them. In the extreme, some advocates of predictive policing go so far as to aim for the elimination of crime [5]. There are sceptics, of course, and some journalistic commentators have critically remarked on the marketing of predictive analytics to North American police departments [15, 63, 64]. Nevertheless, the marketization of this new version of ‘Techno-Police’ has had considerable success.

PredPol is a US-based company that has used claims about the utility of predictive analytics in an aggressive marketing campaign, capturing the major share of the American market for this emerging technology. Simplifying for the sake of brevity, the PredPol system provides geospatial and temporal information about likely future crime on city maps overlaid with a grid of boxes corresponding to spaces of 500 × 500 feet. Likely future crimes show up in tiny red boxes on these maps, directing patrol officers to attend to those locations. Police crime science has long attended to statistical patterns, trends and repeat offenders, and has traditionally made use of maps to illustrate patterns. What is new with PredPol is the ‘black box’ of algorithmic statistical computation. The data warehousing of vast quantities of information (not all of which is strictly generated by police agencies themselves) raises significant civil liberties and privacy concerns, but in the main, PredPol turns out to be a technically sophisticated way of ‘rounding up the usual suspects’ [66].
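What sits inside the ‘black box’ can only be guessed at from outside, but the general form of grid-based forecasting is easy to illustrate. The Python sketch below is hypothetical and deliberately simple – it is not PredPol’s proprietary algorithm (which the technical literature describes as a self-exciting point-process model) – it merely scores 500 × 500 feet grid cells by summing exponentially decayed weights of past recorded events and flags the highest-scoring cells as the day’s ‘red boxes’.

# A minimal, hypothetical sketch of grid-based, decay-weighted hot spot scoring.

from collections import defaultdict
import math

CELL_FT = 500.0      # side length of a grid cell, in feet
DECAY_PER_DAY = 0.1  # assumed exponential decay of an event's influence

def cell_of(x_ft, y_ft):
    """Map event coordinates (in feet) to a grid cell index."""
    return (int(x_ft // CELL_FT), int(y_ft // CELL_FT))

def score_cells(events, now_day):
    """Sum exponentially decayed weights of past recorded events per cell."""
    scores = defaultdict(float)
    for x, y, day in events:            # each event: (x_ft, y_ft, day_of_occurrence)
        age = now_day - day
        scores[cell_of(x, y)] += math.exp(-DECAY_PER_DAY * age)
    return scores

# Hypothetical recorded events: (x, y, day)
events = [(120, 480, 1), (140, 450, 5), (900, 2100, 6), (130, 470, 9), (910, 2080, 9)]
scores = score_cells(events, now_day=10)

# Flag the top 2 cells as the day's "red boxes" for directed patrol.
top_cells = sorted(scores, key=scores.get, reverse=True)[:2]
print(top_cells)

Even this toy version makes the central point plain: the ‘predictions’ are a weighted reshuffling of past recorded events, and so they inherit whatever biases those records contain.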

Three aspects of PredPol attract criticism, the first being the paucity of evidence for its empirical claims [15, 63, 64]. According to Miller ( [15], p. 118), “reason tells us that an effective predictive system would need to outperform existing methods of crime prevention according to some valid metric without introducing side-effects that public policy deems excessively harmful”. Examining the successes and failures of a variety of predictive systems used by police, security and intelligence services in the United States, Miller states that “successes have been troublingly hard to locate and quantify” (ibid., p. 118). With regard to PredPol specifically, there is little evidence that its programs are effective (p. 119). Examining two statistical charts from PredPol’s own website, Cushing [64] observes numerous faults, including a graph with no values indicated along its vertical axis; a graph which also seems to indicate that “the more predictions PredPol makes, the less accurate it is”. Looking at a second graph, Cushing remarks that “the $50,000 (and up) system reduced crime by one (1) crime per day (approximately) over the time period” (ibid.). Lawrence Sherman [24] gave passing consideration to PredPol, concluding that “at this writing no evidence is available for the accuracy of the forecasts or the crime reduction benefits of using them” (p. 426).

The second, not unrelated, concern about PredPol is its aggressive marketing. As of 2013, more than 150 police agencies in the United States had adopted its proprietary software and the hardware to go with it [63]. In a manner reminiscent of pyramid marketing schemes, under the terms of at least some of these contracts police agencies obtain the system at a discount on the condition that they take part in future marketing by providing testimonials and referrals to other agencies. Participating agencies agree to host visitors from other institutions to demonstrate the efficacy of the system and to take part in joint press conferences, web-marketing, trade shows, conferences and speaking engagements. The line between ‘empirical case study’ and marketing is blurred. Scores of articles similar to Erica Goode’s New York Times encomium appeared in US news outlets. They restate claims about the neutrality and exactitude of algorithmically directed policing, often recycling quotes and statistics directly from PredPol press releases. There are effectively no independent third-party evaluations. No assessment of these techniques has been done on the basis of randomized controlled trials. Such assessments as exist are based on limited temporal analysis of the ‘before-after’ kind. Sometimes a comparison between results produced by PredPol predictive analytics and ‘old-style’ Compstat-type analysis is used to demonstrate improved efficacy. PredPol marketing material looks like evaluation, but it reveals only minor statistical variances touted as evidence of remarkable claims. Demonstrating the efficacy of predictive analytics in policing would require experimental conditions using jurisdictions matched for relevant socio-demographic and other variables, preferably double-blind and over a significant period of several years. But that is not what PredPol provides; it provides advertising that convinces buyers to spend more money than critical reflection on the basis of sound evidence would normally allow.
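The methodological point can be illustrated with a simple worked example, using synthetic numbers rather than any actual evaluation data: a naive before-after comparison attributes an entire decline in recorded crime to the software, whereas a design that uses a matched comparison jurisdiction (here, a basic difference-in-differences calculation) nets out the decline that would have happened anyway.

# An illustrative sketch (synthetic numbers, not an actual evaluation): naive
# before-after versus a difference-in-differences estimate with a matched city.

# Monthly recorded crimes, before and after the system is adopted.
adopting_city   = {"before": 1000, "after": 920}   # adopted predictive software
comparison_city = {"before": 1010, "after": 945}   # matched city, no adoption

# Naive before-after: attributes the whole decline to the software.
naive_effect = adopting_city["after"] - adopting_city["before"]          # -80

# Difference-in-differences: subtracts the decline the matched city saw anyway
# (secular crime trends, seasonality, reporting changes, etc.).
background_trend = comparison_city["after"] - comparison_city["before"]  # -65
did_effect = naive_effect - background_trend                             # -15

print(f"Naive before-after estimate: {naive_effect} crimes/month")
print(f"Difference-in-differences estimate: {did_effect} crimes/month")

On these hypothetical figures, most of the apparent ‘effect’ is simply the background trend, which is precisely the kind of minor statistical variance that marketing material can tout as evidence of remarkable claims.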

A third criticism concerns the social inequities arising from statistical bias and unacknowledged normative assumptions ( [4, 15], p. 124). Stochastic models of this type are reified as the ‘real deal’ and police patrol officers are wont to take things at face value. Officers and machines interact in a cycle of confirmation bias and self-fulfilling prophecies. As Miller put it:

There is significant evidence that this kind of observation bias is already happening in existing predictive systems: San Francisco Police Department chief information officer Susan Merritt decided to proceed with caution, noting “in LA I heard that many officers were only patrolling the red boxes [displayed by the PredPol system], not other areas. People became too focused on the boxes and they had to come up with a slogan: ‘Think outside the box’” (op. cit., p. 124)
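The feedback dynamic described above can be made explicit with a small, entirely hypothetical simulation: two areas with identical true incident rates, patrols allocated in proportion to recorded crime, and recording that depends on patrol presence. Recorded crime, and therefore the next round of ‘predictions’, drifts toward whichever area happened to be patrolled more heavily at the outset.

# A minimal, hypothetical simulation of a predictive-policing feedback loop.

import numpy as np

rng = np.random.default_rng(1)
TRUE_RATE = 50            # true weekly incidents in each area (identical by construction)
DETECT_BASE = 0.2         # share of incidents recorded regardless of patrol
DETECT_PER_PATROL = 0.6   # extra share recorded when officers are present

patrol_share = np.array([0.55, 0.45])   # small initial imbalance between Area A and Area B
recorded_history = []

for week in range(20):
    true_incidents = rng.poisson(TRUE_RATE, size=2)
    detect_prob = DETECT_BASE + DETECT_PER_PATROL * patrol_share
    recorded = rng.binomial(true_incidents, detect_prob)
    recorded_history.append(recorded)
    # Next week's patrols follow this week's recorded crime - the self-reinforcing step.
    patrol_share = recorded / recorded.sum()

print("Final patrol shares (Area A, Area B):", np.round(patrol_share, 2))
print("Final week recorded crime:", recorded_history[-1], "- true rates were equal throughout")

The point is not that any deployed system works exactly this way, but that wherever recorded crime is both the input to the forecast and a function of where officers are sent, the forecast can manufacture its own confirmation.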

Hot spot analysis of ‘high-crime areas’ and crime profiling based on ‘shared group attributes’ provide the underlying probable-cause logic for police tactics such as stop-and-search and street checks. Such geographical and population attributes acquire a spurious objectivity when stochastically derived from large volumes of data using algorithmic analysis. According to Captain Sean Malinowski of the LAPD, “The computer eliminates the bias that people have” (quoted in [54]). However blind the architects of stochastic prediction modelling profess it to be in matters concerning social values, the ‘social shaping’ of police technologies is evident [67, 30, 68]. The steps of stochastic governance – the performance of algorithmic calculation, the drawing of inferences and the taking of action – are present in the organization of policing, law enforcement and crime control. As one bellicose advocate put it, while “there are not crystal balls in law enforcement and intelligence analysis … data mining and predictive analytics can help characterize suspicious or unusual behaviour so that we can make accurate and reliable predictions regarding future behaviour and actions” ( [69], p. 57). Critics argue that the wizardry of predictive policing “risks projecting the all-too-thinkable racialized and discriminatory policing practices of the present into the future” (McCulloch and Wilson, p. 85 [51]). The full theoretical potential of intelligence-led policing [70] falters on organizational, occupational and cultural barriers [6, 12, 13, 22, 23, 71]. Nevertheless, and in spite of the demonstrable ‘organizational pathologies’ that intelligence-led policing entails, police are able to use technologies to legitimize ‘going after the usual suspects’ [21]. Critical observation of police agencies draws attention to endemic organizational problems that belie the claims to technocratic rationality, but any operational shortcomings are disregarded due to overriding economic reasoning. According to The Police Chief, in times of economic austerity “new tools designed to increase the effective use of police resources could make every agency more efficient, regardless of the availability of resources” [57]. What seems to be happening is that police agencies are substituting capital for labour – the ‘uberization of policing’. Predictive policing promises police managers the ability to do ‘more with less’.

A host of ethnographic accounts substantiate what ‘front-line policing’ in the banlieues and urban ghettos of ‘world-class’ cities feels like (e.g. [41, 72]). Observation of the rugged end of policing confirms that the rank-and-file are always making determinations and distinctions regarding social status. Those defined as ‘police property’ are subject to verminization – labeled ‘shit rats’, ‘assholes’, or ‘bastards’ – on the basis of police suspicions concerning ‘risks’ and ‘threats’. In the moral economy of policing on the front-line, “the tendency toward animosity or even cruelty enjoys greater legitimacy than the disposition towards kindness: insensitivity is the norm here and compassion is deviant” ( [41], p. 204). As Simon [39] and Wacquant [40] illustrate, the moral economy of neo-liberalism, articulated in elite-level political discourse, leaves an excess population in a condition of abandonment in spaces of exclusionary enclosure where they are targeted by police agents. Police law enforcement and crime fighting are central to the implementation of stochastic governance, a form of cybernetic social triage which deploys algorithmic calculation in order to privilege social, economic and political elites while simultaneously disciplining and categorizing (as ‘threatening’ and ‘risky’) the rest.

Discussion and conclusion

Stochastic governance, the management of territory and populations by reference to prognostications based on algorithmically analysable ‘Big Data’, is an expression of a new mode of production that has been quietly installed in the interstices of the old industrial order. This is evident in policing, both in its broad sense à la Foucault [73] and in its narrow crime-fighting sense, where the fashion for data analytics and intelligence-led policing evidences the ‘uberization’ of security control. This is deeply affected by the moral economy of neo-liberalism. In a world where family photos, dating preferences, porn habits, random banal thoughts and everyday occurrences – along with a good deal more – have been turned into data and monetized, social control and social order can be managed at the level of metadata, while controllers can always exercise the option to ‘drill down’ into the data to impact on specific cases. Smart Cities have been constructed according to the template of the moral economy of neo-liberalism: the buildings have automated climate control and computerized access, and the roads, water, waste, communications, heating and electrical power systems are dense with electronic sensors enabling the surveillant assemblage to track and respond to the city’s population and to individual people. Big Data often presents a façade of apparently rigorous, systematized, mathematical and neutral logic and, where that has been challenged, the response has only been to develop legal tools that facilitate the ongoing development of stochastic governance. That is why the critique of stochastic governance requires more than documentation of abuses of state power [74, 75].

Notice the presence of trademarks (for example Smarter Cities and PredPol) representing this new form of policing; these are symptomatic of global neo-liberalism, deeply embedded in its moral economy, and they directly shape its practices. The structural power of the panoptic sort – the underlying logic of stochastic governance – lies in the hands of a relative few, and the algorithms ruthlessly sort, classify and separate the general population for their benefit. Those at the top end of the panoptic sort get tickets to Disneyland [76]; the ones at the bottom get the Soft Cage [76]; but those who own the means of panoptic sorting and have access to the levers of stochastic governance escape its classification metrics and constitute their own class. These are hallmarks of the moral economy of neo-liberalism. Within the corporate world of ‘high-tech industries’ the mantra of stochastic governance is that it is not people who make decisions; it is the data.

The strength of the moral economy of neo-liberalism can be measured in the sanguine legitimacy accorded to it by a population willingly supplying data for the purposes of stochastic governance. The weakness of the moral economy of neo-liberalism can be felt in the paranoid politics of uncertainty that shape the practices of stochastic governance. The tragedy of the moral economy of neo-liberalism can be glimpsed in the cruelty of street policing directed by the algorithmic programs of stochastic governance. Cassandra is inevitably disbelieved in her prophecies, so we refuse to make any. However, it is not prophetic to begin a critique on the basis of the empirical evidence and to argue that the socio-legal and technical structures of stochastic governance have hardened along lines set by the moral economy of neo-liberalism, much to the detriment of human dignity.