Introduction

The modern welfare state emerged out of industrialisation and the dual crises of a global recession followed by the Second World War, which together created the conditions for a consensus around the need to build a society better able to deal with the human costs of a largely unregulated market economy. The subsequent economic downturn of the 1970s, followed by the advent of neoliberalism as a global ideology, has seen the public sector shrink, labour relations shift, and financialisation take hold of the economy, presenting numerous challenges for the welfare state and its continued relevance. Yet recently the welfare state has come into renewed focus. The crisis of the COVID-19 pandemic has swiftly changed the terms of the relationship between economy and state. For some, we are seeing a return of the Leviathan state, a social contract with an absolute sovereign in which the state provides the ultimate insurance against an intolerable condition (Mishra, 2020), while others see it as providing a renewed impetus for demands for universal healthcare, stable employment, and a basic income (Standing, 2020). Certainly, initial responses to the pandemic and ongoing lockdowns across the world have converged around unprecedented state interventions in the economy and a prominent rhetoric of economic planning and social security.

However, as Magalhães and Couldry (2020) note, any renewal of social welfare will be very different to how we knew it before. It will be so, in part, because the coronavirus crisis has elevated not only the role of the state but, importantly, that of Big Tech. A renewal of social welfare, they write, “will be strongly driven by private corporations, and it will use their tools and platforms—whose ultimate goal is generating profit. Crucially, it will be based on opaque and intrusive forms of datafication” (para 1, italics in original). The trend to turn more and more of social life into data points that can be collected and analysed is rapidly transforming the ways in which the provision of public services is organised, with significant implications for how we might think of the welfare state. Whilst the emphasis on data infrastructures in the context of COVID-19 has made this more explicit in several different ways, the conditions for these developments were already well underway. As noted by the UN Special Rapporteur on extreme poverty and human rights, Philip Alston, the “digital welfare state” is already a reality or is emerging in many countries across the globe. In these states, “systems of social protection and assistance are increasingly driven by digital data and technologies that are used to automate, predict, identify, surveil, detect, target and punish” (Alston, 2019).

In this chapter, I elaborate on these conditions and discuss the interplay between technological infrastructures, data-driven systems, and the welfare state, focusing particularly on the UK. The welfare state in the UK follows a different trajectory than many of its European counterparts, evident also in its response to the COVID-19 pandemic, but it serves as an illuminating case for trends that are also emerging in many other contexts. The chapter draws in part on research conducted with colleagues at the Data Justice Lab at Cardiff University that explored the uses of citizen scoring in public services, as well as research carried out as part of the multi-year project DATAJUSTICE that explores the relationship between datafication and social justice. I am particularly focused on engaging with the imperatives of automation and the logics of data-driven systems in the context of the current political economy of digital technologies, and how these relate to the values and visions of a society commonly associated with the welfare state. Using developments in local government and the public sector in the UK as a lens, I advance a two-part argument about the ways in which data infrastructures are transforming state-citizen relations by, on the one hand, advancing an actuarial logic based on personalised risk and the individualisation of social problems (what I refer to as responsibilisation) and, on the other, entrenching a dependency on an economic model that perpetuates the accumulation and circulation of data (what I refer to as rentierism). These mechanisms, I argue, fundamentally shift the “matrix of social power” (Offe, 1984) that made the modern welfare state possible and position questions of data infrastructures as a core component of how we need to understand social change.

Matrix of Social Power and the Foundations of the British Welfare State

The British welfare state emerged, like elsewhere in Europe, out of the dual crises of the Great Depression and the Second World War, but it is worth noting that the foundations for a consensus around the need for the state to protect citizens from the harms of market failure, an emphasis on social solidarity, and a commitment to decommodification have earlier roots. As Thane (2013) has highlighted, demands for the state to take a permanent, as distinct from temporary and residual, responsibility for the social and economic conditions experienced by its citizens began in the 1870s in conjunction with industrial capitalism. Recognition that poverty had structural causes rather than ones that were purely moral, and that responses needed to be collectivist rather than individualist, grew in line with a notable increase in trade union membership and industrial conflicts in the lead-up to the First World War. Yet it was only after the shocks of the Great Depression and Second World War that a government formally acknowledged that the welfare of the mass of its citizens was a major component of its activities and announced the dawning of a “welfare state” (Thane, 2013). The arrangement saw governments, formally or informally, presiding over negotiations between capital and labour that were more or less institutionalised. Importantly, according to Judt (2007), this faith in the state—as planner, coordinator, facilitator, arbiter, provider, caretaker, and guardian—was widespread and crossed almost all political parties. It was from the outset a class compromise that was able to serve many conflicting ends and strategies simultaneously, making it attractive to a broad alliance of heterogeneous forces (Offe, 1984). “The welfare state”, Judt contends, “was avowedly social, but it was far from socialist. In that sense welfare capitalism, as it unfolded in Western Europe, was truly post-ideological” (Judt, 2007: 362).

The welfare state, therefore, is more than the narrow interpretation of it as a provider of social services. Rather, as argued by Offe (1984), it can be understood as a formula that consists of the explicit obligation of the state apparatus to provide assistance and support to those citizens who suffer from specific needs and risks characteristic of the market society and is based on a recognition of the formal role of labour unions in both collective bargaining and the formation of public policy. It is, in this sense, a political solution to social contradictions that emerged out of a specific “matrix of social power”: the nature of the welfare state and the agenda of any political reality is an outcome of the ways in which social classes, collective actors, and other social categories are able to shape the environment of political decision-making (Offe, 1984: 160). In Britain, whilst there was no formal ‘social partnership’ of the kind we see in other European countries, the labour movement was able to seek gains for the working class through social reforms to improve living conditions. Without a viable alternative solution in terms of economic policy, Hobsbawm has argued, “a reformed capitalism which recognized the importance of labour and social-democratic aspirations suited them well enough” (Hobsbawm, 1994: 272). In this sense, the British welfare state is an outcome of a widespread normative shift and a growing labour movement that was simultaneously constrained by political circumstances and an ongoing dependency on the capitalist economy.

This historical backdrop is important for any discussion of the welfare state today as it highlights the particular dynamics that informed the policy agendas being pursued. These dynamics have radically changed since the post-war period. The economic downturns of the 1970s followed by the advent of neoliberalism and globalisation as dominant ideologies across the Western world have been significant for how the welfare state has advanced. Whilst there is no consensus on how these developments intersect and responses have varied across national contexts (Genschel, 2004), the UK has been at the forefront of key transformations, rapidly transitioning to a service economy, highly dependent on global supply chains and precarious labour whilst experiencing a significant decline in trade union membership (Dencik & Wilkin, 2015). In the last decade, since the financial crisis of 2008, this has been accompanied by an austerity agenda that has weakened the public sector and overhauled welfare programmes and social care through the privatisation of services and substantial cuts (Monbiot, 2020). A recent report estimated that local authorities and councils have seen a reduction in funding of up to 60 per cent in the last ten years (Davies et al., 2019), whilst the transfer of assets from the public sector to the private sector since Thatcher in the 1980s has reduced state-owned enterprises from 10 per cent to less than 2 per cent of GDP and from 9 per cent to less than 1.5 per cent of total employment (ons.gov.uk, CPI 2016).

Information and communication technologies (ICTs), in particular, have played a key role in these shifting dynamics. Instrumental in the growth of consumer capitalism, digitalisation has also been seen as a challenge to the welfare state and its ability to deliver on its promises, disrupting labour relations, undermining social security, and changing the parameters of state governance. With growing trends such as mass data collection, automation, and artificial intelligence, these tensions have only intensified, putting the welfare state into further question (Petropoulos et al., 2019). At the same time, developments in technology have also significantly shaped public administration and the way social welfare is organised through the establishment of bureaucracies and different forms of population management. The creation of databases and the monitoring of citizens were from early on key features of the welfare state and played a fundamental part in assessing population needs and determining the allocation of resources (Rule, 1973; Scott, 1994). This includes ways of advancing social engineering and discerning “deserving” and “undeserving” citizens as central features of the modern welfare state (Dencik & Kaun, 2020). In the UK, for example, the ‘modernisation’ of public administration in line with a growing emphasis on new public management strategies is closely linked to early forms of the digitalisation of services as a way to “rationalise” engagement with citizens (White, 2009). In addition, a perceived need to increase information gathering and sharing as a way to better manage risk has led to a growing reliance on databases that overwhelmingly pertain to vulnerable and disadvantaged groups. In what they refer to as the advent of the “database state”, Anderson et al. (2009) map the myriad public sector databases that have been put in place under different government programmes in the UK, arguing that several of these do not abide by human rights and data protection laws.

These previous intersections between technology and the welfare state have paved the way to what Yeung (2018) has described as a paradigm shift in public administration from ‘new public management’ to ‘new public analytics’ organised around algorithmic regulation. In her seminal study of the welfare sector in the US, Eubanks (2018) similarly refers to a new “regime” of data analytics used to determine eligibility and assess needs across areas of housing, healthcare, and child welfare. The non-governmental organisation AlgorithmWatch (2019), meanwhile, has outlined the growing reliance on automated decision-making or decision support systems across the public sector in Europe, understood as procedures in which decisions are delegated to automatically executed decision-making models that perform an action. This might include allocating treatment for patients in the public health system in Italy, sorting the unemployed in Poland, identifying child neglect in Denmark, or detecting benefit fraud in the Netherlands. As I will go on to outline below, the UK has increasingly integrated these technologies into public services in a way that presents a particular set of questions for the nature of the welfare state. These include both a concern with the epistemological and ontological premises of “dataism” (Van Dijck, 2014) and a concern with the implications of making public infrastructure subject to datafication as a “political-economic regime” (Sadowski, 2019).

The Datafication of Welfare in the UK

As part of his investigation into the UK in 2018, the UN Special Rapporteur on extreme poverty and human rights Philip Alston highlighted the important role digital technologies now play in the administration of welfare (Alston, 2018). Of particular significance is the Universal Credit system, the first ‘digital-by-default’ policy implemented by the UK government, designed to reform social welfare into one integrated platform for benefit claimants. A key part of this reform is the emphasis on automation as a policy goal and the processing of claims entirely through digital means. As Alston’s investigation makes clear, this has contributed to entrenched inequality, exclusion, and lack of redress, with significant implications for human and social rights, not least the right to social protection. Digital divides, in terms of both access and literacy, poor design, and a lack of transparency have marked a system designed to embed conditionality within the very infrastructure of welfare provision, pushing people into destitution and poverty (ibid.). This has led to calls for the Universal Credit system to be scrapped and for digital-by-default as a policy to be outlawed (see, e.g. the Labour Party manifesto of 2019).

Yet the Universal Credit system and the turn to digital platforms as intermediaries between public administration and service users are only one part of how digital technologies are intersecting with the British welfare state. Of growing importance is the emphasis on data collection and predictive analytics as a way to inform decisions that impact people’s ability to participate in society. We see this, for example, with the advent of what we describe as ‘citizen scoring’ in a study we carried out at the Data Justice Lab. This refers to “the use of data analytics in government for the purposes of categorization, assessment and prediction at both individual and population level” (Dencik et al., 2019: 3; italics in original). These practices are part of a broader trend towards organisations becoming data-driven as a way to, it is claimed, run more efficiently and, importantly, without human bias and errors. For councils and local authorities that have been facing significant cuts, the promotion of data-driven systems as a way to reduce costs and increase efficiency and effectiveness has been particularly attractive (Beer, 2019). The emphasis on the need to focus resources and advance a more strategic understanding of population needs has been a common justification for the turn to citizen scoring. In many cases this has led to the creation of what are described as ‘data warehouses’ or ‘data lakes’, in which data is collected from a range of sources and databases across different parts of the council and integrated as a way to get a more granular and holistic understanding of individual households and families (Dencik et al., 2019). In some instances, this has been accompanied by predictive analytics, in which these data warehouses underpin further algorithmic processing designed to simulate projections of the future as a way to assess or evaluate risks and needs.
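To make the mechanics of such ‘data warehouses’ more concrete, the sketch below illustrates, in deliberately simplified form, the kind of record linkage involved: extracts from separate service databases are joined on a shared household identifier to produce a single, more granular view of each household. The file names, column names, thresholds, and the flagging rule are all hypothetical assumptions for illustration; they are not drawn from any actual council system.

```python
# A minimal, hypothetical sketch of the record linkage underpinning a council
# 'data warehouse': separate service databases are joined on a shared household
# identifier to build a single, more granular view of each household.
# All file names, column names, and indicators are illustrative assumptions.
import pandas as pd

# Hypothetical extracts from separate council systems.
housing = pd.read_csv("housing_records.csv")            # household_id, rent_arrears
benefits = pd.read_csv("benefit_claims.csv")            # household_id, months_on_benefits
education = pd.read_csv("school_attendance.csv")        # household_id, attendance_rate
social_care = pd.read_csv("social_care_contacts.csv")   # household_id, referrals_12m

# Integrate the sources into one table keyed on the household.
warehouse = (
    housing
    .merge(benefits, on="household_id", how="outer")
    .merge(education, on="household_id", how="outer")
    .merge(social_care, on="household_id", how="outer")
    .fillna(0)
)

# A simple derived indicator of the kind such systems produce: households that
# cross thresholds on several variables at once are flagged for attention.
warehouse["flagged"] = (
    (warehouse["rent_arrears"] > 1000)
    & (warehouse["attendance_rate"] < 0.85)
    & (warehouse["referrals_12m"] >= 1)
)
print(warehouse[warehouse["flagged"]])
```

The point of the sketch is simply that once records are linked in this way, a ‘holistic’ household view and downstream risk indicators follow almost automatically from the joined data, whatever the quality or provenance of the underlying records.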

This kind of practice is increasingly prevalent in policing, where a growing number of British police forces are using predictive analytics to map crime trends in neighbourhoods and to rank offenders from high to low risk of reoffending (Couchman, 2019). Such predictions draw on a range of data sources, including crime and intelligence data, missing persons data, operational data, data held by council agencies, demographic data, and even weather data (Dencik et al., 2018). Avon and Somerset Constabulary, for example, has contracted the software application suite Qlik Sense, which is used to attribute a risk profile to all existing offenders and victims of crime on record based on real-time monitoring of characteristics and behaviours. These profiles, presented as a dashboard, inform the way Avon and Somerset police organise their resources and how they decide to engage with different individuals. Similar tools are being used in child welfare, where policy reforms, such as the Troubled Families programme implemented in 2012, have incentivised increased data collection and sharing on children and families. More recently, a range of tools designed to assess risk and predict potential behaviour have been implemented on the basis of these databases (Redden et al., 2020). Bristol Council, for example, has developed an in-house tool that draws on a range of social issue data-sets and is designed to attribute a risk score to all children and young people living in the city, based on a prediction of the likelihood that a child will fall victim to ‘child exploitation’. This score is generated on the basis of the extent to which the characteristics and behaviour of a family match those of known previous victims of child exploitation. Hackney Council contracted a similar tool, Early Help Profiling, from the company Xantura, which produces intelligence reports once a family passes a risk threshold, as a way to assist decision-making by frontline staff (Dencik et al., 2018).
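A deliberately simplified sketch can help clarify the general logic at work in these risk-scoring tools: a model is fitted to historical, labelled cases and then applied to current families, with an alert raised once a score crosses a threshold. The example below is a generic illustration under assumed feature, file, and column names; it is not a reconstruction of the Qlik Sense, Bristol, or Xantura systems described above.

```python
# A simplified sketch of predictive risk scoring: fit a classifier to historical,
# labelled cases, score current families, and raise an alert above a threshold.
# Feature names, file names, and the threshold are assumptions for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression

FEATURES = ["school_absences", "benefit_claims", "police_callouts", "housing_moves"]

# Historical cases with a known outcome (1 = previously identified as a victim).
historical = pd.read_csv("historical_cases.csv")
model = LogisticRegression()
model.fit(historical[FEATURES], historical["known_victim"])

# Current families are scored by how closely their recorded characteristics
# match those of past cases; the model has no access to underlying causes.
current = pd.read_csv("current_families.csv")
current["risk_score"] = model.predict_proba(current[FEATURES])[:, 1]

# A threshold turns the score into a pre-emptive intervention trigger.
RISK_THRESHOLD = 0.7
for _, row in current[current["risk_score"] >= RISK_THRESHOLD].iterrows():
    print(f"Alert for family {row['family_id']}: score {row['risk_score']:.2f}")
```

What the sketch makes visible is the actuarial move discussed in the sections that follow: the score is generated purely from the degree to which a family’s recorded characteristics resemble those of past cases, moving from group traits to individual prediction.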

The use of these kinds of technologies in the public sector is still only emerging, and there remains an uneven landscape across local and central government with regard to how data about people is collected and used. Whilst there is a general trend towards becoming more data-driven across government, it is not obvious that there is a shared understanding of what it is appropriate to do with data. Such an interpretive vacuum is evident in the difficulty of establishing where and how data-driven systems are used in government, and in the myriad tensions and negotiations that shape the implementation of such technologies within councils and local authorities (Dencik et al., 2019). However, despite the heterogeneous nature of data practices across local government and the prevalent resistance towards algorithmic decision-making from a range of stakeholders, there is a recognisable drive towards automation and predictive analytics within social welfare and the public sector in the UK at large (cf. Booth, 2019). This has only been heightened by the COVID-19 pandemic, with an onus on data collection and technological solutions shaping responses to the health crisis, whether in the form of contact-tracing apps, immunity passports, or other forms of data infrastructure to track, certify, and model the coronavirus. At the same time, the transition of social and economic life to the cloud that was already well underway has been accelerated with social distancing measures (Klein, 2020; Morozov, 2020). The welfare state, therefore, in whatever form it will take following the coronavirus crisis, looks certain to be more datafied. This raises some significant questions in need of interrogation. Below, I discuss two interrelated aspects that concern, firstly, the issue of responsibilisation and, secondly, the issue of rentierism. Both of these present counter-logics to the values commonly associated with the modern welfare state.

Datafication as Responsibilisation

As noted above, the advent of ‘digital-by-default’ policy frameworks and the collection of data in welfare systems build on previous bureaucracies and emerge out of a longer history of risk management in public administration. Alston (2019) also points out that the implementation of new technologies in public services is often seen as politically neutral and devoid of policy implications, which allows gradual datafication to take place without much scrutiny or public debate. Largely it is framed as a matter of efficiency and a predominantly quantitative shift: more information, processed faster. Yet the sheer scale and nature of the data now collected on citizens introduce key questions about the ways in which citizens are rendered increasingly legible to the state, and the use of big data to inform decisions rests on assumptions with significant implications for the idea of the welfare state. In this section I focus particularly on the issue of responsibilisation, understood here as associated with the neoliberal transfer of responsibilities from state to social actors. This is not to suggest that responsibilisation emerged with datafication, but rather that the advent of data-driven systems in the context of social welfare is embedded in this form of governance. The concern here is with how social problems come to be defined and, in turn, are sought to be resolved. By optimising for personalised risk, data-driven systems can construct the burden of social ills as one that belongs to individuals, to be addressed through behaviour and characteristics, without engaging with underlying causes and collective responses. This fundamentally challenges notions of shared social responsibility.

Data sources now stretch across a complex ecology of digital transactions that incorporates both consumer and citizen data about ever more intimate aspects of our lives, as the public sector becomes embedded within a rapidly growing data broker industry. Local authorities in the UK, for example, were found to have contracted with the credit rating agency Experian for over £2 million in 2018 (O’Brien & Williams, 2019). These developments extend a long-standing critique of the welfare state as a surveillance state that tends to target particular parts of the population. Eubanks (2018) argues, for example, that datafication is reconfiguring the traditional poorhouse in the US into “digital poorhouses” in which some parts of the population are subject to hyper-surveillance and “predatory inclusion” (Seamster & Charron-Chénier, 2017) as a condition of welfare. The issue here is not just one of privacy, but also the inherent bias of algorithmically processed data, whether because of historically skewed data-sets (e.g. arrest records), the way certain variables are weighted (e.g. the length of benefit claims), or the type of assessment that is produced (e.g. the labelling of risk), all of which lead to disparate impacts of harm (Barocas & Selbst, 2016). These so-called biases have tended to align with existing social and economic inequalities, often targeting and stigmatising already disadvantaged and marginalised groups (Gandy, 2010). Indeed, the very construction of a data-set emerges out of historically discriminatory practices that have implications for people’s lives and can determine access to basic services and care (Ustek Spilda & Alastalo, 2020). Similarly, the ability to challenge how data about a person is collected and used is not distributed equally. In the words of Eubanks (2017), data processes “do not fall on smooth ground”, and people do not share the same conditions of engagement with data-driven systems.
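The point about variable weighting can be made concrete with a toy example: who ends up labelled ‘high risk’ can hinge entirely on how much weight a scoring formula places on a single variable, such as the length of a benefit claim, independently of any change in behaviour. The people, weights, and threshold below are invented purely for illustration.

```python
# A toy illustration of how the weighting of a single variable (here, the length
# of a benefit claim) changes who gets labelled 'high risk'. All values invented.
people = [
    {"name": "A", "months_on_benefits": 36, "missed_appointments": 0},
    {"name": "B", "months_on_benefits": 2,  "missed_appointments": 3},
]

def risk_score(person, weight_benefits, weight_missed):
    return (weight_benefits * person["months_on_benefits"]
            + weight_missed * person["missed_appointments"])

THRESHOLD = 8
for weights in [(0.1, 3.0), (0.5, 3.0)]:  # low vs high weight on claim length
    labels = {p["name"]: risk_score(p, *weights) >= THRESHOLD for p in people}
    print(f"weights {weights}: high risk -> {labels}")
```

With the low weight on claim length only person B crosses the threshold; raise that weight and the long-term claimant A is labelled ‘high risk’ as well, despite nothing about A having changed.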

These concerns about surveillance, discrimination and bias, and their contingency on existing inequalities are important for discussions of the welfare state as they raise questions about how universal access and social security can be guaranteed. Of course, challenges to such values are not new. The inability of the welfare state to deliver on its promises has long been a point of critique, in part due to its reliance on the very capitalist economy whose excesses it is simultaneously intended to mitigate (Offe, 1984). Often it has been precisely those at the margins who bear the brunt, whether excluded, criminalised, or neglected by the welfare systems intended to protect them. With the datafied welfare state, such critiques continue to resonate and take on further significance as these systems become embedded in “dataism”, what van Dijck (2014) terms the ideological component of datafication. While the need to gather information to assess needs and risk is seen as essential in providing public services, the growing reliance on automated processing as the arbiter of social knowledge introduces some particular, and contested, epistemological and ontological assumptions for making such assessments. The “subtractive methods of understanding reality” in which information flows are reduced into numbers that can be stored and then mined produce very particular forms of informational and computational knowledge (Berry, 2011: 2). As famously noted by boyd and Crawford (2012), big data shapes the reality it measures by staking out new terrains and methods of knowing. This includes the perceived epistemic capabilities of algorithms to anticipate, conjecture, and speculate on future outcomes in a way that McQuillan (2017: 2) compares to a kind of Neo-Platonism: “a belief in a hidden mathematical order that is ontologically superior to the one available to our everyday senses”. The premise is that, given enough data, correlations can predict future outcomes in a way that facilitates pre-emption, a strategy of intervention just before an event might occur (Andrejevic et al., 2020).

With the turn to the datafied welfare state we are, therefore, confronted with some very significant assumptions, not only about the neutral nature of data and technologies but also about the existence of “a self-evident relationship between data and people, subsequently interpreting aggregated data to predict individual behaviour” (Van Dijck, 2014: 199). Of central importance here is the abstraction at work in big data, which reduces social identities, mobilities, and practices to mere data that can be managed and sorted (Monahan, 2008). Furthermore, these “data derivatives” (Amoore, 2013) grant authority to knowledge domains based on new forms of risk calculations rooted in data science. These calculative devices, as Andrejevic (2019) argues, follow an “operative” logic in juxtaposition to one of representation. They are not concerned with why something happens, but simply that it does; it is correlations between variables that determine outcomes, not an engagement with underlying causes. In this sense, Andrejevic (2019: 108–9) contends, they not only collapse the future into the present but also threaten to erase the distinction between prediction and comprehension.

Such logics and assumptions are pertinent for understanding the nature of state-citizen relations in the datafied welfare state. They raise questions about how social ills are problematised and solved and how individuals are positioned in relation to such ills. For example, in advancing a long-standing shift towards risk management in public administration, the advent of big data expands and redefines the way we think about risks. As Poon (2016) has highlighted, big data derives from a cultural conception of personal risk intimately connected to corporate capitalism and with roots in actuarialism. It is not technical accuracy that makes big data worthy of investment or secures profits, she argues, but rather the methods for manipulating and calculating elements and definitions of risk. Importantly, these calculations derive risk from correlations between group traits in order to make predictions about individuals. We see this, for example, in data-driven systems that predict the risk of child abuse by calculating the extent to which a child matches the behaviours and characteristics of previous victims of child abuse (Dencik et al., 2019). Carrying out such risk calculations can be seen as important for targeting resources at those who might need them most. However, they also adopt a personalised understanding of risk that centres on risk factors attributed to an individual’s behaviour and characteristics. This raises concerns about the ways in which responsibility for social problems might shift from the collective onto individuals, undermining values of social solidarity (Keddell, 2015; Morozov, 2015). Responses become focused on interventions targeted at individuals in a way that shifts attention away from structural causes. For example, what comes to matter are measurable categories such as school attendance and number of benefit claims, rather than complex societal issues such as poverty, racism, and precarity (Dencik et al., 2019).

Furthermore, an imperative of pre-emption constructs personalised risk according to a compressed temporality. Risk is an outcome of simulated futures that draw on aggregated historical and real-time data about group traits to make predictions about an individual. In other words, it is what ‘people like you’ have done in the past that underpins predictions about what you might do in the future, which in turn inform interventions made towards you in the present. Insofar as such a temporal collapse informs decision-making, it is a form of decision-making that is intrinsically conservative (Cheney-Lippold, 2017). What is more, taken to its limit in seeking to address all possible risks and opportunities in advance, pre-emption is atemporal, invoking a state of social stasis (Andrejevic et al., 2020). Rather than creating conditions for social mobility and human flourishing, the datafied welfare state threatens to lock individuals into their data futures and dispense with the possibility of social change (Dencik & Kaun, 2020).

In thinking about the welfare state, it therefore becomes imperative to consider how a growing reliance on data-driven systems constructs what counts as social knowledge and how people are rendered legible, in ways that undermine notions of universal access, social solidarity, and human flourishing. Rather than the state being accountable to its citizens, the datafied welfare state is premised on the reverse, making citizens’ lives increasingly transparent to those who are able to collect and analyse data, whilst citizens know increasingly little about how or for what purpose that data is collected. Moreover, rather than social problems being understood as shared, the datafied welfare state advances actuarial logics that attribute risk to individuals without necessarily engaging with preventative measures for such risks. Instead, policy responses become pre-emptive, potentially shifting responsibility away from the collective whilst at the same time entrenching existing inequalities and stifling the conditions needed for social change. We therefore need to consider the turn to data infrastructures in social welfare as a form of policy intervention that is part of shaping the conditions for governance. This positions data beyond questions of bias or whether it is used for good or bad, and instead requires an engagement that attends to the ways in which problems and solutions are constructed through such infrastructures.

Datafication as Rentierism

It is important to note that the actuarial logics prominent in dominant processes of datafication are not an inevitable feature of digital technologies; rather, they direct our attention to the political and economic forces that shape the development of data-driven systems. As the public sector becomes increasingly intertwined with technology companies, welfare systems become embedded in global markets and infrastructures that significantly shift the terms upon which such systems can operate. In this section, I therefore draw attention to questions of political economy in relation to data-driven systems and consider the implications of rentierism as the operating logic of state-capital relations under datafication. Rentierism here refers to the public sector becoming dependent on a mode of capitalism in which revenue is predominantly extracted from rent (money or data) in exchange for services, with significant implications for the functioning of institutions. This relates to processes of privatisation, but the concern here is with the way the dominant business models and drivers of data-driven platforms and tools configure social practices and shape the terms upon which public institutions are able to operate. As I will go on to argue, this not only undermines a principle of decommodification by embedding public institutions in commercial operations but, furthermore, creates a relationship of dependency that threatens to displace public infrastructure with (private) computational infrastructure.

In making sense of the value of data, Zuboff’s (2015, 2019) notion of surveillance capitalism has been widely used to describe the dominant business model that underpins much of today’s digital technologies. This business model, she argues, relies not on a division of labour but on a division of learning: between those who are able to learn and make decisions based on global data flows and those who are (often unknowingly) subject to such analyses and decisions. In this model, capital moves from a concern with incorporating labour into the market, as it did under previous forms of capitalism, to a concern with incorporating private experiences into the market in the form of behavioural data. This is an accumulation logic driven by data that aims to predict and modify human behaviour as a means to produce revenue and market control. Social relations under this logic are extractive rather than reciprocal and based on a formal indifference to information: it is volume rather than quality that sustains it, sourcing data from a range of infrastructures, from sensors to government databases to computer-mediated economic transactions.

Yet in understanding the implications of this business model for the welfare state, it is worth further unpacking datafication as a “political-economic regime” (Sadowski, 2019). Sadowski argues that we need to understand the value of data not as a commodity but as capital that propels new ways of doing business and governance. Data collection is driven by the perpetual cycle of (data) capital accumulation, which in turn drives capital to construct and rely upon a universe in which everything is made of data, including social life. The digital platform is central to this transformation in that social practices are reconfigured in such a way that enables the extraction of data (Couldry & Mejias, 2018). This matters as data in this context serves to sustain an economic process that bypasses the creation of value through production and instead relies on the capturing of value through expanding the capacity for gaining information. For Wark (2019), this presents itself as a markedly different system from capitalism as we have conventionally understood it, with power shifting from the owners of the means of production to the owners of the vectors along which information is gathered and used, what Wark describes as the “vectorialist class”. This class controls the patents, the brands, the trademarks, the copyrights, and most importantly the logistics of the information vector. Through this, Wark argues, whilst a capitalist class owns the means of production, the means of organising labour, a vectorialist class owns the means of organising the means of production. Although Wark posits that such a shift in power relations forces us to place the vectorialist class outside a capitalist framework and as distinct from the landowning class, others have argued that understanding this organisation of power in the context of rent theory may be more fruitful (Sadowski, 2020; Srnicek, 2017).

Rent-seeking strategies are familiar from the wider shift towards financialisation that has marked advanced capitalism, in Anglo-Saxon countries especially, and from the drive to turn everything into a financial asset as a way to latch onto circuits of capital and consumption for the purposes of rent extraction. Whilst this logic is not new for capital, Sadowski (2020) argues that what is new are the complex technologies that have been designed to extend and empower capital’s abilities of assetisation, extraction, and enclosure. As Srnicek (2017) has also outlined, such expansion is driven by the accumulation of data as the primary revenue source for platforms, which also explains the extensive acquisitions relating to big data and the significant investments in the Internet of Things (IoT) and other assets that extend data extraction. Under this analytical framework, platforms are intermediaries in the production, circulation, or consumption process and capture value from all the activities and operations that make up the platform ecosystem, extracting both monetary rent and data rent (Sadowski, 2020). That is, rentiers capture revenue from the use of digital technologies and not only rely on money as value but also treat data as a source of value. As Sadowski goes on to argue, the main strategy of these rentiers is to turn social interactions and economic transactions into ‘services’ that take place on their platform. This “X-as-a-service” rental model is in line with assetisation and the transformation of things and activities into resources which generate income without a sale (Birch, 2015; Sadowski, 2020).

When public sector organisations integrate tools and platforms from providers within this economy to administer the welfare state, they implement not only the systems themselves, but also a regime that propels the further datafication of social life. This matters because, although rentierism can be understood as an outgrowth of capitalism, and the welfare state has always been subject to the contradictions of being dependent upon and simultaneously mitigating the harms of a capitalist economy, it reconfigures this relationship in significant ways. With the advent of neoliberalism and globalisation, the welfare state has long been subject to forms of privatisation, with a growing number of public services outsourced to private companies and large parts of the public sector commoditised and made subject to the market. The UK has been particularly prone to these trends, evident in the care system, for example, which has gone from being 95 per cent publicly provided by local authorities in 1993 to now being almost entirely provided by private companies (Monbiot, 2020), or in higher education, where commodification has grown as funding has become increasingly dependent on external and private sources (Freedman, 2011). Whilst public institutions in other advanced capitalist societies, particularly in Europe, can be said to have been more resilient to these developments, there has nevertheless been a ‘convergence’ in the trajectory of institutional change across national contexts that can be characterised as neoliberal (Baccaro & Howell, 2011). The turn to data-driven systems, often bound up in commercial infrastructures, across the welfare state in this sense continues the trend of privatisation and commodification. However, as I go on to argue below, under a model of rentierism, the datafied welfare state is subject to pressures that arguably move beyond binaries of de/commodification and public/private.

By plugging into a political economy of rentier capitalism, the datafied welfare state not only advances the commodification of information about citizens and the outsourcing of service provision but also becomes locked into a form of social ordering that restructures practices to uphold the logic of this political economy. Understanding ‘welfare-as-a-service’ in the context of datafication is not simply an issue of privatisation, but of establishing a set of relations that ultimately seeks to overturn public institutions as we commonly understand them. That is, by turning to data-driven systems, the welfare state reconfigures social welfare into a problem that necessarily has to be optimised computationally rather than engaged with through human experience and expertise, and embeds social welfare within an ecosystem that endlessly perpetuates this reconfiguration. Gürses et al. (2020) use the term “programmable infrastructures” to refer to this political, economic, and technological vision that advocates for the introduction of computational infrastructure onto our existing infrastructures. This vision, they argue, features the management of human behaviour, the standardisation of values, a dependency on the economic terms of technology companies, a power asymmetry of cloud providers, and an avoidance of democratic governance. As such, the datafied welfare state raises questions not just about the ways in which decisions and practices in public administration are organised, but about their contingency on a particular process that threatens to displace the very public infrastructure upon which the welfare state is built. This speaks to a particular kind of power in relation to data infrastructures that needs to be captured in our engagement with data politics.

Conclusion

At a time of global crisis, the question of how technology intersects with the welfare state has gained new significance. The COVID-19 pandemic and responses to it have shed light on not only the vulnerabilities of the welfare state but also ways in which it might be rebuilt. In many respects, it increasingly looks to do so on the pillars of Silicon Valley. The UK has been at the forefront of this trend in Europe, but the focus on contact-tracing apps, immunity passports, and location tracking has nurtured new partnerships between companies like Apple, Google, Amazon, and Palantir and governments around the world. However, the conditions for the advent of the datafied welfare state have been in the making for quite some time. Data collection and practices of citizen scoring are now prominent features of how public administration and welfare provision are organised. In the UK, austerity measures and an active shrinking of the public sector have been accompanied by a prominent shift towards the implementation of data-driven systems across key areas of the welfare state that is set to dramatically accelerate in the context of the COVID-19 crisis and its aftermath.

In order to make sense of the significance of this shift, it is important to situate the welfare state in historical and national context, understanding it as an outcome of social struggle, a political compromise, and a model of inherent contradictions. There was nothing inevitable about the emergence of the British welfare state and the values it upheld. Equally, there is nothing inevitable about the datafied welfare state we are now confronted with. Rather, it is indicative of the current matrix of social power. The ideology of dataism and the political economy of technology posit values and operational logics that are markedly different from how the welfare state has previously been understood. As I have argued here, the epistemological and ontological pillars of the datafied welfare state advance an agenda of responsibilisation that counters values of universal access, social solidarity, and human flourishing, whilst the operations of capital out of which datafication has developed position the datafied welfare state as a tenant of private cloud and service providers, a position that threatens to undermine democratic governance and displace public infrastructure.

As the welfare state becomes further embedded in the paradigm of datafication, the question then becomes how the matrix of social power might be shifted to facilitate a different vision. This might also entail examining different models of the welfare state and the constitution of public institutions across national contexts. The COVID-19 crisis has opened space for demands about how society should be organised that echo those of post-war Britain at the apogee of the welfare state. This has brought hope of an opportunity to question and challenge long-standing social experiments that do not serve the majority of the population. However, in accelerating the transition to the cloud, we might find ourselves with short-term solutions that have long-term consequences for any future of the welfare state. The interrogation of power in relation to data, therefore, needs to consider not only the values and logics that are advanced through such power but, with that, the conditions of possibility for social change created by the dynamics upon which the circulation of data depends.