The expanding use of biometric facial recognition raises a number of pressing ethical concerns for liberal democracies. These concerns relate especially to potential conflicts between security, on the one hand, and individual privacy and autonomy, and democratic accountability, on the other. Security and community safety are fundamental values in liberal democracies, as in other polities, including many authoritarian ones. However, liberal democracies are also committed to individual privacy and autonomy, democracy, and, therefore, democratic accountability. Accordingly, the latter fundamental ethical principles must continue to be valued in liberal democracies such as Australia, the United Kingdom and the United States, notwithstanding the benefits to security and community safety that biometric facial recognition can provide (Miller and Bossomaier 2021). While debates will continue between proponents of security, on the one hand, and defenders of privacy, on the other, there is often a lack of clarity about the values or principles allegedly in conflict.
The notion of privacy has proven difficult to explicate adequately. Nevertheless, a number of general points can be made. First, privacy is a right that people have in relation to other persons, the state and organisations with respect to: (a) the possession of information (including facial images) about themselves by other persons and by organisations, e.g. personal information and images stored in biometric databases; or (b) the observation of themselves—including of their movements, relationships and so on—by other persons, e.g. via surveillance systems, including tracking systems that rely on biometric facial images (Kleinig et al. 2011). Biometric facial recognition is obviously implicated in both informational and observational concerns.
Second, the right to privacy is closely related to the more fundamental moral value of autonomy. Roughly speaking, the notion of privacy delimits an informational and observational ‘space’, i.e. the private sphere. However, the right to autonomy consists of a right to decide what to think and do and, of relevance here, the right to control the private sphere and, therefore, to decide who to exclude and who not to exclude from it (Kleinig et al. 2011). So the right to privacy consists of the right to exclude organisations and other individuals (the right to autonomy) both from personal information and facial images, and from observation and monitoring (the private sphere). Naturally, the right to privacy is not absolute; it can be overridden. Moreover, its precise boundaries are unclear; a person does not have a right not to be observed in a public space but, arguably, has a right not to be photographed in a public space (let alone have an image of their face widely circulated on the internet), albeit this right not to be photographed and have one’s image circulated can be overridden under certain circumstances. For instance, this right might be overridden if the public space in question is under surveillance by CCTV to detect and deter crime, and if the resulting images are only made available to police—and then only for the purpose of identifying persons who have committed a crime in that area. What of persons who are present in the public space in question and recorded on CCTV, but who have committed a serious crime, such as terrorism, elsewhere, or at least are suspected of having committed a serious crime elsewhere and are, therefore, on a watch list? Presumably, it is morally acceptable to utilise CCTV footage to identify these persons as well.
If so, then it seems morally acceptable to utilise biometric facial recognition technology to match images of persons recorded on CCTV with those of persons on a watch list of those who have committed, for instance, terrorist actions, or are suspected of having done so, as the SWP were arguably seeking to do in the Bridges case.
Third, a degree of privacy is necessary simply for people to pursue their personal projects, whatever those projects might be. For one thing, reflection is necessary for planning, and reflection requires a degree of freedom from the distracting intrusions of others, including intrusive surveillance (Kleinig et al. 2011). For another, knowledge of someone else’s plans can lead to those plans being thwarted (e.g. if one’s political rivals can track one’s movements and interactions then they can come to know one’s plans in advance of their implementation) or otherwise compromised (e.g. if who citizens vote for is not protected by a secret ballot, including a prohibition on cameras in private voting booths, then democracy can be compromised).
We have so far considered the rights of a single individual; however, it is important to consider the implications of the infringement, indeed violation, of the privacy and autonomy rights of the whole citizenry by the state (and/or other powerful institutional actors, such as corporations). Such violations on a large scale can lead to a power imbalance between the state and the citizenry and, thereby, undermine liberal democracy itself (Miller and Walsh 2016). The surveillance system imposed on the Uighurs in China, incorporating biometric facial recognition technology, graphically illustrates the risks attached to large-scale violations of privacy and related autonomy rights.
Accordingly, while it is morally acceptable to collect biometric facial images for necessary circumscribed purposes, such as passports for border control purposes and driving licences for road safety purposes, it is not acceptable to collect them to establish vast surveillance states, as China has done, and exploit them to discriminate on the basis of ethnicity. However, images in passports and driving licences are, and arguably ought to be, available for wider law enforcement purposes, e.g. to assist in tracking the movements of persons suspected of serious crimes unrelated to border control or safety on the roads. The issue that now arises is where on this spectrum privacy and security considerations are appropriately balanced.
Privacy can reasonably be overridden by security considerations under some circumstances, such as when lives are at risk. After all, the right to life is, in general, a weightier moral right than the right to privacy (Miller and Walsh 2016). Thus utilising facial recognition technology to investigate a serious crime such as a murder, or to track down a suspected terrorist, if conducted under warrant, is surely ethically justified. On the other hand, intrusive surveillance of a suspected petty thief might not be justified. Moreover, given the importance of, so to speak, the aggregate privacy/autonomy of the citizenry, threats to life on a small scale might not be of sufficient weight to justify substantial infringements of privacy/autonomy, e.g. a low-level terrorist threat might not justify a citizen-wide biometric facial recognition database. Further, regulation and associated accountability mechanisms need to be in place to ensure that, for instance, a database of biometric facial images created for a legitimate purpose, e.g. a repository of passport photos, can be accessed by border security and law enforcement officers to enable them to prevent and detect serious crimes, such as murder, but not used to identify protesters at a political rally.
We have argued that privacy rights, including in respect of biometric facial images, are important, in part because of their close relation to autonomy. Although they can be overridden under some circumstances, notably by law enforcement investigations of serious crimes, there is obviously a point at which infringements of privacy rights are excessive and unwarranted. A national biometric facial recognition database for use in relation to serious crimes, and subject to appropriate accountability mechanisms, may be acceptable, but utilising billions of images from social media accounts (e.g. in the way that Clearview AI’s technology does) to detect and deter minor offences, let alone establishing a surveillance state (e.g. to the extent that has been achieved in China), is clearly unacceptable. Let us now turn directly to security.
Security and public safety
Security can refer to, for example, national security (such as harm to the public from a terrorist attack), community security (such as in the face of disruptions to law and order) and organisational security (such as breaches of confidentiality and other forms of misconduct and criminality). At other times the term is used to refer to personal physical security. Physical security in this sense is security in the face of threats to one’s life, freedom or personal property—the latter being goods to which one has a human right. Violations or breaches of physical security obviously include murder, rape, assault and torture (Miller and Bossomaier 2021). Biometric facial recognition systems could assist in multiple ways to enhance security in each of these senses. Thus a biometric facial recognition system could help to prevent fraud by better establishing identity (e.g. identifying people using falsified driving licences), and facial recognition data would be likely to help in investigating serious crimes against persons, such as murder and assault (e.g. by identifying unknown suspects via CCTV footage).
Arguably, security should be distinguished from safety, although the two concepts are related and the distinction somewhat blurred. We tend to speak of safety in the context of wildfires, floods, pandemics and the like, in which the harm to be avoided is not intended harm. By contrast, the term ‘security’ typically implies that the threatened harm is intended. At any rate, it is useful to at least maintain a distinction between intended and unintended harms and, in relation to unintended harms, between foreseen, unforeseen and unforeseeable harms. For instance, someone who is unknowingly carrying the COVID-19 virus because they are asymptomatic is a danger to others but, nevertheless, might not be culpable (if, for instance, they had taken reasonable measures to avoid being infected, had an intention to test for infection if symptoms were to arise and, if infected, would take all possible measures not to infect others). While biometric facial recognition systems can make an important contribution to security, their utility in relation to safety is less obvious, albeit they could assist in finding missing persons or in ensuring that unauthorised persons do not unintentionally access dangerous sites (Miller and Smith 2021).
A number of potential ethical problems arise from the expanding use of biometric facial recognition for security purposes, especially in the context of interlinkage with non-biometric databases, data analytics and artificial intelligence. First, the security contexts in which its use is to be permitted might become both very wide and continuing, e.g. the counter-terrorism (‘emergency’) security context becomes the ‘war’ (without end) against terrorism; which becomes the ‘war’ (without end) against serious crime; which becomes the ‘war’ (without end) against crime in general (Miller and Gordon 2014).
Second, data, including surveillance data, originally and justifiably gathered for one purpose, e.g. taxation or combating a pandemic, is interlinked with data gathered for another purpose, e.g. crime prevention, without appropriate justification. The expansion of metadata use, from a few agencies initially to widespread use by governments in many Western countries, is an example of function creep and illustrates the potential problems that might arise with the introduction of biometric facial recognition systems (Mann and Smith 2017).
Third, various general principles taken to be constitutive of liberal democracy are gradually undermined, such as the principle that an individual has a right to freedom from criminal investigation or unreasonable monitoring, absent prior evidence of violation of the law by that individual. In a liberal democratic state, it is generally accepted that the state has no right to seek evidence of wrongdoing on the part of a particular citizen, or to engage in selective monitoring of that citizen, if the actions of the citizen in question have not otherwise reasonably raised suspicion of unlawful behaviour and if the citizen has not had a pattern of unlawful past behaviour that justifies monitoring. Moreover, in a liberal democratic state, it is also generally accepted that there is a presumption against the state monitoring the citizenry. This presumption can be overridden for specific purposes, but only if the monitoring in question is not disproportionate, is necessary or otherwise adequately justified and kept to a minimum, and is subject to appropriate accountability mechanisms. Arguably, the use of CCTV cameras in crime hot-spots could meet these criteria if certain conditions were met, e.g. police access to footage was granted only if a crime was committed or if the movements of a person reasonably suspected of a crime needed to be tracked. However, these various principles are potentially undermined by certain kinds of offender profiling, specifically ones in which there is no specific (actual or reasonably suspected) past, imminent or planned crime being investigated. Biometric facial recognition could be used to facilitate, for instance, a process of offender profiling, risk assessment and subsequent monitoring of people who, as a result of fitting these profiles, are considered at risk of committing crimes, notwithstanding that the only ‘offence’ that the individuals in question had committed was to fit these profiles.