The previous section indicated some initial points of similarity and difference in the orientation to death counts associated with Covid-19 and the 2003 Iraq invasion. For both, excess deaths emerged within high-level political debates as a prominent metric for gauging harm. For both, however, accountability in the here and now was side-stepped by British officials, who cited the lack of appropriate data for making assessments (and, in the case of Iraq, subsequently also cited methodological variations). State officials held out the promise that something like a reliable understanding of deaths was at least potentially attainable at some future date, which might then allow state action to be evaluated. In the case of Iraq, though, this potential was ambiguously situated against the stated impossibility of collecting ‘reliable’ data.
In this section, I want to unpack these headline similarities with regard to the status of counts by contrasting how the counting of deaths varied between the two cases. The possibilities for accountability through counting will be examined by considering the assumptions and choices associated with the inter-related matters of (1) the attention to what was being counted; (2) the resources mobilised in counting; and (3) the identified purposes of counting.
What counting counts
While the aforementioned disparity between alternative Iraqi death counts was a frequent topic of note for British officials, attention to differences in what was being counted was much less commonplace. For instance, Lord Triesman’s October 2006 statement noted the contrast between the 2006 Lancet figure of 655,000 deaths and other tallies, yet without clarifying what those other tallies (such as Iraq Body Count) measured. What they measured was only a sub-set of what was captured by the Lancet’s all-encompassing excess death figure (Footnote 7).
The contrasts to the case of Covid-19 in relation to counting are stark. For instance, the circulation of multiple death counts pertaining to potentially overlapping categories was treated by state agencies as a problem from the start of the pandemic, one that needed to be remediated lest public trust, and the ability to track the pandemic, suffer (see PHE, 2020c; ONS, 2020f). The result of this recognition was the progressive differentiation of categories of deaths, and modifications to the type of data analysis given in public reports.
Central categories of deaths were also revised in response to identified statistical limitations. Notably, for example, in the UK the primary sources for death counts were (1) the Department of Health and Social Care (DHSC) daily figures pertaining to those who tested positive for Covid-19, and (2) the Office for National Statistics (ONS) weekly figures derived from death certificates in which Covid-19 was mentioned. In weekly bulletins and one-off reports, the ONS broke down registered deaths by age, sex, region and place of death (e.g., ONS, 2020a, d) in order to assess the differential burden of the virus. What was covered by these sources changed in the spring of 2020 in response to publicly identified deficiencies. A high-profile instance of this was the inclusion from April 29 of deaths outside of hospitals within the DHSC daily figures. For the ONS, revisions included efforts to measure deaths in care homes in England (ONS, 2020b), and to break down the causes of excess deaths beyond those formally linked to Covid-19 in death certificates in England and Wales, even as it was recognised this would be difficult to accomplish (ONS, 2020c).
In relation to Covid-19, then, regard for what totals included and excluded led to reforms and refinements in what information was collected, what categories of deaths were relevant, and what information was made public. The combined result was a complex tapestry of sources offering varied perspectives in which unknowns about deaths were identified and redressed through reforms in the management and analysis of data (see Fig. 2).
By contrast, in the case of Iraqi deaths, the circulation of heterogeneous types of data was not taken by the UK as grounds for attempting to clarify, let alone to reform, British or Iraqi data management practices. Instead, repeatedly, the varied tallies served as ‘objections of deconstruction, figures impossible to verify and locate and therefore incapable of serving any intellectual operation other than that of the impossibility of determining their reality’ (Norris, 1994: 290). While officials cited an individual tally at certain times, this was done while hedging its ‘reliability’ (as in Straw, 2004). As such, data on deaths did not achieve the status of immutable mobiles (facts that could circulate while maintaining their integrity, Latour, 1987). Moreover, the lack of any internal or external efforts by the UK to engage with attempts to assess deaths meant that, for the British state at least, individual data sources did not take on the status of mutable mobiles either. Unlike British citizens who died of Covid-19, Iraqis could not benefit from statistical modifications to the codified traces of their deaths. In the absence of being treated as evidence for knowledge claims, it is questionable whether death tallies achieved the status of ‘data’ within the dealings of UK officials (Leonelli, 2016). Not being recognised as data proper meant there was little need to account for their implications.
In the case of Covid-19, reforms and refinements were enabled by the large-scale mobilisation of government departments and agencies that drew on pre-existing networks, resources and lines of authority as well as developing new ones. For instance, in England, daily figures on confirmed positive tests would come to be derived from multiple sources: (1) hospitals’ reports through a dedicated patient notification system, (2) Health Protection Teams from Public Health England reporting through an electronic system, and (3) comparisons of confirmed positive Covid-19 case lists held centrally in the pre-existing Second Generation Surveillance System (PHE, 2016) against health service patient records. Through combining these sources and subjecting them to semi-automatic cross-checks and quality assurance, the daily figures sought to provide rigorous information on all Covid-19 deaths.
In addition, transparency in the procedures for tallying deaths was sought in order to promote public confidence (PHE, 2020c), not merely accuracy. While geographically dispersed, the combined systems in place sought to function as ‘control zones’ (Lagoze, 2014) that could ensure the provenance and overall integrity of data through drawing on, and thereby reaffirming, established clinical systems.
But if reinforcing existing ‘control’ mechanisms was a feature of mobilisation efforts, so too was the overhaul of previous epistemic conventions. For instance, the Office for National Statistics produced weekly figures derived from death certificates. Against mounting concerns that care home deaths related to Covid-19 were being missed, in April 2020 the ONS sought to gauge them in England (ONS, 2020b). To do so, it collaborated with the Care Quality Commission (CQC), an organisation that had collected data on deaths of care home residents. Prior to Covid-19, the CQC had not published data on deaths notified from care homes through its online reporting systems. One important feature of these notifications is that they ‘may or may not correspond to a medical diagnosis or test result, or be reflected in the death certification’ (ONS, 2020b). Thus, in making use of the CQC data for Covid-19 deaths, the ONS expanded the range of professional and occupational expertise, as well as the documentation, that was treated as credible enough to inform its figures. Stated differently, through drawing on the notifications, the ONS integrated additional uncertainties into how deaths were counted.
In the case of Iraq, Lord Triesman’s call in 2006 for ‘considerable study’ of the methods for deriving casualty figures did not lead to the government commissioning such research or adopting externally derived figures (see Iraq Inquiry, 2016: 208–212). The official inquiry found little indication of efforts by the government either to assess direct deaths to civilians during the operations of British forces, or (as an occupying power) to improve the Iraqi systems responsible for recording deaths and issuing death certificates. A trial that had started in 2004 under the Cabinet Office to improve information on civilian deaths from military operations was halted before it was completed. Indeed, as the Iraq Inquiry (2016: 218) argued, much ‘more Ministerial and senior official time was devoted to the question of which department should have responsibility for the issue of civilian casualties than to efforts to determine the actual number’. Concerns about the quality of the data on deaths could have served to spur further official collection activities or attempts to bring together multiple sources. They did not, though. Instead, the portrayed difficulties with compiling figures were repeatedly taken as grounds against counting deaths at all.
In response to what it identified as limited state efforts, the official independent public inquiry into the invasion of Iraq called on the government ‘to make every reasonable effort to identify and understand the likely and actual effects of its military actions on civilians’ through working with NGOs and academics to establish the direct and indirect costs of war (Iraq Inquiry, 2016: 219). However, at the time of writing, no such efforts have been undertaken. Instead, echoing statements made by officials during the Iraq war against humanitarian concerns, the British state has made limited acknowledgement in its more recent military interventions that its forces could have caused even direct deaths to civilians (Airwars, 2018; Amnesty International, 2018). Efforts by the British government to gauge indirect fatalities and excess deaths associated with conflict remain more remote still.
One justification given for the lack of official studies of civilian deaths in the case of Iraq was their claimed lack of utility. For instance, the Ministry of Defence’s (MoD) Armed Forces Minister at the time of the invasion, Adam Ingram, argued in front of the Iraq Inquiry that establishing the number of Iraqi deaths would not have altered the military reality. This was so because the killing ‘was not being carried out by us on the civilians’ (Iraq Inquiry, 2016: 213). As Ingram further contended, if any death figures were to be calculated by the UK, this was not a job for his ministry.
If a sense of purpose was lacking in this respect for officials, it was lacking in other respects in the wider political and media debate. Just as it was rarely specified which deaths were included in death counts, so too the purpose of the figures was rarely stated. In the case of armed conflict, these purposes can range from memorialising the suffering of innocents (a key aim of the Iraq Body Count) and establishing assistance and reconstruction requirements, to assessing the effectiveness of policies (a central objective of The Lancet studies) and revising use-of-force training and oversight. Yet such purposes were seldom explicitly stated, let alone agreed between stakeholders, and rarer still were discussions of the comparative merits of specific methodologies in relation to agreed purposes.
In contrast, Covid-19 death figures have not stood apart from the unfolding pandemic without any sense of how they might matter. They have served as milestones for marking and memorialising deaths, informed determinations of health care demands, and provided a means of assessing lockdown restrictions. As well, mortality figures deriving from institutionalised practices for determining the causes of death have been used to assess and revise those institutional practices. Such revisions have been intended to feed back into the production of mortality tallies. For instance, early on in the spread of the virus, concerns were raised that those in Black, Asian and Minority Ethnic (BAME) groups might face higher risks than those from White ethnic groups. In response to these concerns, and to wider ones about BAME health outcome disparities, the British government commissioned Public Health England to analyse surveillance data (PHE, 2020a). When this report was released, its lack of recommendations about what actions were needed led to widespread criticism (see, for instance, BMA, 2020). Criticism also followed the initial failure of the government to publish the findings of a consultation conducted with individuals and organisations within BAME communities (e.g., ITV, 2020). This resulted in extensive media speculation about possible hidden motivations for the lack of publication, in which the report’s fate was contextualised within wider ongoing national and international attention to the Black Lives Matter movement. When the consultation report was published ten days later, it called not only for further data collection and research into the biomedical, socio-economic, and structural determinants of health, but also for non-conventional forms of research (PHE, 2020b). In particular, the report stressed the need for community participatory research projects utilising local knowledge to ensure that research into the effects of Covid-19 informed concrete actions.
Through such calls, the need to challenge existing ways of organising research entered into policy discussions in response to acts of counting.
It is important as well, though, to note what purposes counting did not serve. In the case of British lockdown policies in the first half of 2020, figures on deaths did not inform formal government cost-benefit calculations intended to justify the severity and duration of the restrictions placed on free movement. This situation differs from many other nation-state activities wherein the need to justify controversial policies has led to new methods to commodify and value bodies (Porter, 1995; Wernimont, 2018). Indeed, while standard health economic cost-benefit analyses would later come to inform British vaccination priorities, the policies for social restrictions had not been justified through cost-benefit analyses up until spring 2021 (Footnote 8). This has been the case despite calls from within the governing political party for just such a cost-benefit elaboration (Spinney, 2021). Relatedly, efforts at counting deaths have not been marshalled as part of attempts to justify the ‘objectiveness’ of lockdown policies (cf. Porter, 1995).