Given the argument for bans on FRT, and the privacy and free speech rights enshrined in liberal democratic constitutions and human rights declarations, it is clear that the state must justify the use of FRTs before they can be used to capture terrorists. This is not a technology that simply improves upon a power the state already had; it is an entirely novel power: the power to identify anyone who comes into view of an FRT-equipped camera without a human being watching the video feed.
Here I outline the conditions to which FRT should be subject if it is to operate justifiably in a liberal democracy; I expand on each in the sections below. First, the context in which FRT is used must be one in which the public does not have a reasonable expectation of privacy. Second, the only goal should be to prevent serious crimes, such as terrorism, from taking place. Finally, for FRTs to capture and store an individual's biometric facial data in a database, that individual must be suspected of committing a serious crime.
4.1 Reasonable Expectation of Privacy
In a famous case in the United States, the Supreme Court ruled that Charles Katz had a reasonable expectation of privacy when he closed the phone booth door [4, Chap. 1]. This meant that the evidence the state had collected by listening in on his conversations in that phone booth had to be thrown out. This notion of a ‘reasonable expectation of privacy’ is fundamental to how the value of privacy is interpreted in liberal democracies. It is not just a legal notion but one that grounds how we act. In our bedrooms, we have a reasonable expectation of privacy, so we can change clothes without fear of someone watching. When Charles Katz closed the door to the phone booth he was using, he enjoyed a reasonable expectation of privacy: he believed that no one would be listening to his conversation.
Facial data captured by FRTs should be at least as protected as voice data. CCTVs in the public sphere should not be collecting information on individuals, which is precisely what happens when CCTVs are equipped with FRT. When I walk down my street, I have a reasonable expectation that my comings and goings are not being recorded, whether by a police officer following me around or by a smart CCTV camera recognizing my face. Regular CCTVs do not record individuals' comings and goings; rather, they record what happens at a particular location.
The difference is that a regular CCTV camera does not record a line in a database that includes my identity and the location at which I was ‘seen.’ A CCTV camera equipped with FRT can record such a line, significantly empowering the state to perform searches that reveal much about my comings and goings. Not only should these searches be tied to clear justifications; there should also be clear justifications for collecting such intimate data about individuals' comings and goings in the first place.
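To make the difference concrete, here is a minimal sketch of the kind of record an FRT-enabled camera could write and the kind of search it enables; the schema, field names, and function are hypothetical illustrations, not a description of any deployed system.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Sighting:
    """One hypothetical row an FRT-enabled camera could write per recognized face.

    A regular CCTV camera stores only footage tied to a place; it produces no
    per-person rows like this one."""
    person_id: str       # identity resolved by the face-matching algorithm
    camera_id: str       # which camera produced the match
    location: str        # where that camera is installed
    timestamp: datetime  # when the face was captured

def comings_and_goings(db: list[Sighting], person_id: str) -> list[Sighting]:
    """Return every recorded sighting of one named individual, in time order."""
    return sorted((s for s in db if s.person_id == person_id),
                  key=lambda s: s.timestamp)
```

A single query of this kind, run over a city-wide camera network, reconstructs an individual's movements; a plain CCTV archive offers nothing comparable without a human watching the footage.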
This reasonable expectation can be overridden if I have committed, or plan to commit, a serious crime. This is because my right to privacy would be overridden by the “rights of other individuals…to be protected by the law enforcement agencies from rights violations, including murder, rape, and terrorist attack” [17, 110]. Someone in the process of planning a terrorist attack would not be surprised to find that they were being surveilled; terrorists take active measures to evade the surveillance they expect to occur. This may seem to justify placing smart CCTVs in public spaces to identify terrorists.
CCTV cameras are currently placed in many public spaces. If something happens, the authorities can review the footage to see who was responsible. In this case, the place itself is being surveilled. Data on individuals is not ‘captured’ in any sense: there is no way to search a database of CCTV footage for a particular name; one must look at the footage. However, if the camera is “smart” and captures biometric facial data along with video footage, then each individual captured by the camera is being surveilled. The authorities now know each person who comes into the camera's view and what time they were there. This is so even though the overwhelming majority of people who come into any CCTV camera's view have not committed, and do not plan to commit, a serious crime. Their privacy has been invaded.
This has ethical implications regarding the scope creep and chilling of behavior discussed in Sect. 3. If FRT-enabled CCTV cameras are in operation, it is easy for the state to add new uses for the technology. A simple database search could reveal everyone who goes into an area with many gay bars. A gay man in a country where homosexuality is considered unacceptable but not illegal may chill his behavior, that is, avoid gay bars for fear of those visits being documented. While the FRT-enabled CCTV cameras were initially installed to counter terrorism, the ability to easily search for anyone who has come into their view makes it easy to put them to other, illegitimate uses.
The state could simply declare that it will only use FRTs with a warrant targeted against an individual suspected of a serious crime. For example, the authorities may have good information that a particular person is planning a terrorist attack. It is imperative that they find this person before the attack can be carried out. They obtain a warrant and then use the city's network of FRT-enabled CCTV cameras to ‘look’ for this person. If this person's face is captured by one of these cameras, the authorities are immediately notified.
If we bracket issues of efficacy and disparate impact, this appears to be a useful power for the state, and one subject to restrictions that protect privacy. The issue is not whether to use FRTs, but how they can and should be used. However, these would be merely institutional and perhaps legal barriers that are subject to interpretation. The scope of national security is little understood. Donald Trump used the concept to justify collecting cell-phone location data to track suspected illegal immigrants [14]. The power enabled by FRTs is so great, and the justifications for using them will be so little understood, that it will be near impossible for regular citizens to feel and act as if they have privacy, even if they do, in principle, have it. Your partner may promise never to read your journal unless you are either dead or in a coma; however, the fact that she has a key and knows where it is will probably cause you to self-censor what you write down, just in case. With a journal, and with your general comings and goings, you should enjoy a reasonable expectation of privacy.
However, there are some public spaces where individuals do not enjoy a reasonable expectation of privacy. Airports and border crossings are two such examples. For better or worse, we now expect little privacy in these contexts. Authorities are permitted to question us, search our bags, search our bodies, subject us to millimeter-wave scans, and so on. It would be rather odd to think that our privacy was invaded further by our faces being scanned and checked against a criminal database. On regular public sidewalks, I would be horrified to find out that the state recorded my comings and goings; however, I would be shocked to find out that the state did not record each time I crossed into and out of the country. This points to the idea that there may be places where we should have a reasonable expectation of privacy, whether we in fact do or not.
A recent U.S. Supreme Court case illustrates this nicely. Timothy Carpenter was arrested for the armed robbery of Radio Shack and T-Mobile stores. The police used a court order (which is subject to a lower standard than a warrant) to obtain location data generated by his cell phone and collected by the telecommunications companies MetroPCS and Sprint. In an opinion written by Chief Justice John Roberts, the Supreme Court ruled that Timothy Carpenter should have a reasonable expectation of privacy concerning his constant whereabouts; the government cannot simply, out of curiosity, obtain this data [24]. This tells against the widespread use of smart CCTV cameras in plain sight as a means of undermining our ‘reasonable expectation of privacy.’ The state should not use conspicuous surveillance as a way to claim that no one has a reasonable expectation of privacy wherever these cameras exist. The critical point is that there are public spaces where citizens of a liberal democracy should have a reasonable expectation of privacy.
Therefore, if there are places where citizens should not have a reasonable expectation of privacy, and FRTs are effective (i.e., they do not produce false positives and false negatives that are unequally distributed across different groups), it may be justifiable to use FRTs in those places. People expect the state to protect them from terrorism. If FRTs contribute to keeping citizens safe from terrorists, then there is good reason to use them. However, based on the analysis above, they cannot simply be used anywhere, as there are places where citizens should have a reasonable expectation of privacy.
The above points to the allowable use of regular CCTV cameras in public spaces but rules out FRTs operating in those same public spaces.Footnote 7 The problem now is: how will the public know the difference? This is a serious problem. After all, the right to free expression may be ‘chilled' because people believe that the state is surveilling their actions. I may worry that, because my friend lives above a sex shop, the state's surveillance may lead it to believe I frequent the sex shop rather than visit my friend. I may, therefore, not visit my friend very often. Or I may not join a Black Lives Matter protest because I believe the state is using FRTs to record that I was there. This is the “chilling effect” mentioned in Sect. 3.2. It can occur even if the state is not engaging in such surveillance; all that matters is that I believe it to be occurring.
The ‘chilling effect' puts the burden on the state to assure the public that such unjustified surveillance is not happening and that, where surveillance is justified, appropriate safeguards and oversight are in place to prevent misuse. This requires institutional constraints, laws, and effective messaging. As [20] argue, institutional constraints and laws alone will not assure the public that the state is not practicing unjustified intrusive surveillance. Conversely, effective messaging alone will not ensure that the state is not practicing unjustified intrusive surveillance.
For example, if the state creates laws that prevent the use of FRT on regular city streets, but the cameras used there look the same as the smart CCTV cameras that have FRT in airports, then the public will not be assured that facial recognition is not taking place. This sets up the conditions for the chilling effect to occur. Conversely, if the state uses cameras clearly marked for facial recognition in places like airports and cameras clearly marked ‘no facial recognition' on city streets, but no laws prevent it from using FRT on city streets, then the public has a greater chance of being assured; yet nothing prevents the state from running facial recognition on that footage after the video has been captured. Therefore, it takes both institutional constraints (bound by law) and effective messaging to meet the standards that support liberal democratic values like free expression.
This creates two conditions for the state's use of FRT. First, the state must create institutional constraints that allow FRTs to be used only in places where people do not (and should not) enjoy a reasonable expectation of privacy (e.g., airports, border crossings). Second, cameras equipped with FRT must be clearly marked, to assure the public that they are not being surveilled in places where they should have a reasonable expectation of privacy.
4.2 Cause for the State's Use of FRTs
The state should not use a new technology simply because it exists. There must be a purpose for using the technology that outweighs the harms and privacy infringements it causes. It would be odd to use wiretaps to surveil a serial jaywalker; wiretaps are used in highly restricted situations involving serious criminals. FRTs should be no different. The point is that “justifications matter.” That facial data was collected using FRTs to counter terrorism does not mean the data is now fair game for any other use. Each use must have its own moral justification, and if that justification no longer obtains, then that data should be destroyed [8, 257].
Terrorism is a serious enough risk (in terms of possible harm, not necessarily in terms of likelihood) that it features as a justification employed by those advocating the use of FRTs. In these cases, one does not feel as if the privacy rights of terrorists are so strong that they should not be surveilled. We expect the government to do what it can to find people like this. Their privacy rights are overridden by others' rights not to be injured or killed in a terrorist attack.
The problem is that FRTs must also surveil everyone who comes into view of their cameras. That is, each face is used as an input to an algorithm that attempts to match that face to an identity and/or to check whether that face matches one of the identities of suspected terrorists. In a purely technical sense, the technology could be confined to the legitimate purpose of finding terrorists. However, as argued above, the difficulty of assuring the public that this is the case will have a chilling effect. Furthermore, the real possibility of scope creep makes it dangerous to place these cameras where people should have a reasonable expectation of privacy.
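To illustrate the matching step being described, a hedged sketch is given below: each captured face is reduced to an embedding vector and compared against stored embeddings of suspects, with matches above a similarity threshold flagged. The embedding representation, threshold value, and function names are assumptions for illustration, not any vendor's actual pipeline.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def check_against_watchlist(face_embedding: np.ndarray,
                            watchlist: dict[str, np.ndarray],
                            threshold: float = 0.8) -> str | None:
    """Return the suspect ID whose stored embedding best matches the captured
    face, or None if no similarity exceeds the threshold.

    Every face the camera captures passes through this check, which is the
    sense in which everyone in view is surveilled, not only the suspects."""
    best_id, best_score = None, threshold
    for suspect_id, suspect_embedding in watchlist.items():
        score = cosine_similarity(face_embedding, suspect_embedding)
        if score > best_score:
            best_id, best_score = suspect_id, score
    return best_id
```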
This means that, no matter the cause, FRTs should not be employed in places where innocent people have a reasonable expectation of privacy (as argued above). However, once we restrict their use to places where there is no reasonable expectation of privacy, using FRTs to find serious criminals poses no ethical problem (provided the technology reaches a threshold of effectiveness). The third condition for the use of FRTs, then, is that they should be restricted to finding serious criminals (e.g., terrorists).
4.3 Reliance on Third-Party Technology
The state's reliance on third-party technology companies to facilitate surveillance is perhaps the area where the most violations of liberal democratic values occur. For example, the government cannot simply scrape the entire internet for pictures of people, match the faces to names, and create a detailed record of the things you have done, the places you have gone, and the people you have spent time with, especially without just cause. This amounts to intrusive surveillance of every individual. In liberal democracies, there must be a justification (resulting in a warrant approved by a judge) to engage in such surveillance of an individual, and surveilling a million people should not be considered more acceptable than surveilling one person. However, Clearview A.I. has been scraping images from the web and creating digital identities for years, and many police departments and government agencies are now using this third-party company to aid their use of FRTs [9].
This causes significant ethical concern for three reasons: first, some third-party companies do not follow the constraints already mentioned above; second, sensitive data is being stored and processed by third-party companies that have institutional aims that could incentivize the misuse or abuse of this data; and third, the role that these companies play in surveillance may reduce the public's trust in them.
4.3.1 Contracting out the Bad Stuff
When I first encountered FRT at an airport, I was a bit squeamish. It took me some time to understand why. Indeed, I am not against using such technology to prevent terrorists from entering the country, to detect people who are wanted in connection with a serious crime, or to find children on missing-person lists.Footnote 8 Nor did I feel that I had a reasonable expectation of privacy. I expect to be questioned by a border guard and have my passport checked. I expect that my bag or my body could be searched. And I expect to be captured on camera continuously throughout the airport. So why did I have this immediate adverse reaction to the state's use of FRT?
The answer lies in my knowledge of the contracting out of such work to third-party technology companies. I am expected to trust both the state and the third-party technology company behind the technology. Are they capturing my biometric facial data and storing it on their third-party servers? Are there institutional barriers preventing them from reusing or selling that data for their own benefit? Is the data captured, sent, stored, and processed in line with best security practices? In short, I fear that even if the proper laws and constraints regarding the state's use of FRTs are in place, the third-party technology company is not bound by them or does not respect them.Footnote 9
This is wrong. There are laws in place that prevent the United States, for example, from contracting out intrusive surveillance of its citizens to other countries. So the U.S., not being able to collect data on its citizens, cannot ask the U.K. to collect data on a U.S. citizen. The same should be true for FRTs. Suppose the U.S. cannot gather facial data on the entire U.S. population (practicing bulk surveillance). In that case, the U.S. should also not contract such work out to a third-party company, or use a third-party company that has engaged in this practice. If I contract out the work of killing an enemy to somebody else, that does not absolve me of responsibility for that murder.
It is not, in principle, unacceptable to use tools created by third-party companies. Third-party companies often have the resources and incentives to create far better tools than the government could create. Silicon Valley technology companies attract many creative and motivated thinkers, and pay them salaries that the government could not afford. It would be detrimental to say that the government cannot use tools created by these companies. However, big data and artificial intelligence have made this relationship much more complicated.
Rather than merely purchasing equipment, the government is now purchasing services and data. A.I. algorithms created by third-party companies are driven by the collection of vast amounts of data. If such an algorithm is to be used by the state, the state must ensure that the data driving it was collected according to the laws governing the state's own data-collecting capabilities. Furthermore, the hosting of the data that the government collects is increasingly being contracted out to cloud services like Amazon Web Services, because this data processing is extremely resource-intensive and something that third-party companies are more efficient at. This creates a situation where our biometric facial data may have to be sent to a third-party company for storage and/or processing. The company in question must have no ability to see or use this data, for two reasons. First, these companies have institutional aimsFootnote 10 that have nothing to do with the security of the state. This creates incentives for companies to use this data for their own aims, creating an informational injustice [10]. Furthermore, this blurring of institutional aims (e.g., maximizing profits and countering terrorism) could be detrimental to the company itself. As a result of NSA programs like PRISM, which purportedly allow the state to gain access to the company servers of Google and Facebook [5], rival companies are now advertising that they are outside of U.S. jurisdiction and can therefore be used without fear of surveillance.Footnote 11
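One way the requirement that the hosting company have no ability to see or use this data could be operationalized, at least for storage, is client-side encryption: the state encrypts biometric records under keys that only it holds before anything reaches the third-party service. The sketch below uses the cryptography library's Fernet interface; the workflow and function names are assumptions, not a description of any agency's actual practice, and it addresses storage only (third-party processing of plaintext would require stronger measures or in-house processing).

```python
from cryptography.fernet import Fernet

# Symmetric key generated and held by the state agency alone; it is never
# shared with the third-party storage or processing provider.
state_key = Fernet.generate_key()
cipher = Fernet(state_key)

def prepare_for_third_party_storage(biometric_record: bytes) -> bytes:
    """Encrypt a biometric record before uploading it to a cloud provider,
    so the provider holds only ciphertext it cannot read or reuse."""
    return cipher.encrypt(biometric_record)

def retrieve_from_third_party_storage(ciphertext: bytes) -> bytes:
    """Decrypt a downloaded record; only the holder of state_key can do this."""
    return cipher.decrypt(ciphertext)
```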
Second, this data is now being entrusted to companies that may not have the security standards or oversight expected for the storage and processing of sensitive surveillance data. Recently, Customs and Border Protection contracted out facial recognition to a third-party company that was breached in a cyber-attack, causing the photos of nearly 100,000 people to be stolen. Customs and Border Protection claimed no responsibility, saying it was the third-party company's fault. The state should be responsible for the security of surveillance data [19, 35].
This discussion motivates constraints on how the state uses third-party companies to facilitate surveillance. The fourth condition for the state's use of FRTs is that the state should not use third-party companies that violate the first three conditions in the creation or operation of their services; this means that the state must understand the services it is using. A fifth condition is that the third-party company should not be able to access or read the sensitive data collected by the state. This keeps the state in control of this sensitive surveillance data.