The broadened social and technological contexts described above have shifted the ethical salience of privacy concerns from the individual to interdependent networks and to futures of artificiality, and decision-making frameworks can assist us in asking what or whom matters morally and why. Recognizing this, we turn now to specific emerging ethical challenges that push us to reconceptualize privacy ethics. We anchor this reconception of privacy in a foundation of universal human rights, recognized throughout the world with the establishment of the Universal Declaration of Human Rights and encoded in international law and treaties. These rights are legally enforceable and provide clear consequences for violations. They include specific reference to concepts associated with privacy: respect for human dignity, the freedom of individuals to make decisions for themselves and to be free from intrusion and intervention, respect for justice and due process, a commitment to equality and non-discrimination, and the right of citizens to access and participate in their governing processes and public services.
Following the ethical framework outlined in the previous section, we discuss here specific challenges to privacy presented by twenty-first-century emerging technologies, illustrating the ways in which the contexts for privacy violations have become more complex. We organize these discussions around five ethical principles: autonomy, justice, non-maleficence, beneficence, and explicability.
While we continue to address concepts traditionally associated with privacy, such as anonymity, confidentiality, consent, right to correct, and minimization of scope, we argue here that privacy threats now encompass broader ethical concerns. Specifically, we suggest that ethical concerns in privacy must now shift:
Beyond a focus on data protection of individuals to consider multifaceted and ubiquitous forms of surveillance as intrusions that violate respect for one’s dignity
From consent of individuals to a concern for human agency and autonomy
From a focus on individual due process to a consideration of social fairness, non-discrimination, and justice
From individual risk assessments to also consider safety, robustness, and the protection and inclusion of vulnerable populations as non-maleficent goals
Beyond the individual or singular context of intrusion or data collection to consider collective responsibilities for environmental, social, and cultural well-being aligned with beneficent goals
Beyond limits of scope and purpose to also consider data integrity, provenance, and accountability for explicability in the processes of algorithms, modeling, and data use
17.4.1 Autonomy as Dignity: From Data Protection to Multifaceted Forms of Intrusion
For the past 50 years, starting with the advent of computer systems used to store electronic records about individuals in financial, health, educational, and other sectors, the primary focus of privacy concerns has been the protection of data in order to ensure that individual rights to privacy are not violated. Those concerns remain today, but they are complicated by the multiple forms of data that are now collected (e.g., numeric, text, voice, image, biometric) as well as the many technological means for collecting them. We now live in a world filled with video cameras, facial recognition systems, RFID chips, electronic toll collectors, smartphones with location tracking, and voice-activated networks in our homes and automobiles. This modern context enables large-scale, ubiquitous, multimodal surveillance of users and citizens in public as well as in spaces traditionally considered to be private and free from intrusion: our cars, homes, and bedrooms. These new contexts suggest that ethical concerns in privacy must now shift beyond a focus on data protection of individuals to consider multifaceted and ubiquitous forms of surveillance as intrusions that violate respect for one’s dignity, as an expression of individual autonomy. That includes concerns about the privacy of one’s person and identity as well as one’s information.
For example, facial recognition technologies (FRTs) used in public spaces present unique challenges for privacy. Using biometric data and processes to map facial features from image or video data, facial recognition systems attempt to identify individuals by matching their image against stored data. Biometric identifier data (fingerprints, iris, and face images) raise specific privacy concerns because they are uniquely identifying, highly sensitive, and hard to secure. And if captured and misused, biometric data cannot be changed or uncoupled from an individual’s identity. When used by government or other institutional authorities to identify, track, and surveil citizens or institutional members, FRTs create fundamental imbalances in power and can be used as a means of social control, a form of digital authoritarianism.
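To see why biometric templates are so hard to secure and impossible to revoke, consider how embedding-based face identification typically works: an image is reduced to a numeric vector and matched against a gallery of stored vectors. The following is a minimal, illustrative sketch; the function names, 128-dimensional embeddings, and 0.6 threshold are our assumptions, not any vendor’s API.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (normalized dot product)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe: np.ndarray, gallery: dict[str, np.ndarray],
             threshold: float = 0.6) -> str | None:
    """Return the gallery identity whose stored template best matches the
    probe embedding, or None if no match clears the illustrative threshold."""
    best_id, best_score = None, threshold
    for person_id, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id

# A probe captured by a public camera is matched against stored templates.
# The templates themselves are the privacy risk: unlike a password, a face
# embedding cannot be rotated or revoked once it leaks.
gallery = {"citizen_042": np.random.rand(128), "citizen_107": np.random.rand(128)}
print(identify(np.random.rand(128), gallery))
```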
For example, FRTs in China are an integral part of a social scoring system used to monitor and assess citizen behavior in public spaces and assign consequences when behaviors fall outside acceptable boundaries. Similarly, the use of biometric identification systems in India’s Aadhaar program—a centralized database that collects biometric information from 1.35 billion citizens, including fingerprints, iris scans, photographs, demographic information, and a unique 12-digit identifier—has raised significant concerns about the unprecedented access to and power over citizens given to the government.
Because FRTs often operate continuously, invisibly, ubiquitously, and automatically, concerns about the risks of intrusion increase: large amounts of data are collected, often without the knowledge or consent of the subject, and human determination is removed from the equation. In addition, concerns about the accuracy, reliability, and security of FRTs—including higher rates of false positives and false negatives for women and persons of color—have led some companies and countries to call for moratoriums on the use of FRTs in public spaces. The specific risks of structural violence that result from using technologies to categorize individuals, monitor their movements, and mete out punishments entail clear potential losses of freedom of movement, freedom from intrusion, and liberty.
17.4.2 Autonomy as Agency: From Consent to Access
A second, prominent privacy concern has centered on the expectation of knowledge and consent of an individual when her person or information is accessed. Individuals who give permission to be searched or to have their information collected are presumed to give informed consent—a fundamental assumption that individuals have the right to decide when, what, and how much information about themselves will be shared, or that they have agency in the decisions that are made on their behalf (see also discussions of proxy consent and deferred consent). Consent and agency have formed the core elements of research ethics practice (see the Common Rule) as well as the terms of service used in many industries.
Yet while our early conceptions of consent were based on individual transactions, today’s ubiquitous, invisible, and large-scale data collection practices mean that consent is not only difficult but largely no longer meaningful. For example, when withholding consent equates to being denied access to the services and goods provided through such platforms (e.g., without an Aadhaar ID, one cannot receive social support services), or when terms of service agreements are inauthentic because they are too complex to be understandable or disguise exceptions that allow data sharing, consent as a means to respect and protect the rights of individuals to control their information becomes meaningless.
We argue that respecting autonomy in new privacy eras must shift away from consent and toward access, since self-governance is as much contingent on access (to read) as it is contingent on permission (to be read). This balance between read and write is essential in the context of information systems. We must ask not only What is the role that individuals play in determining how data are used? but also What level of control do humans maintain in automated systems? and How are systems designed to gauge individual tolerance for trusted systems and to adjust if a potential intrusion (or trust-eroding event) is imminent?
The ethical concern here focuses on tensions around autonomy between consent and agency. In addition to having the capability to act on the basis of one’s own decisions and to ensure that individuals are not placed at risk when sharing information, we must also have the agency to intervene when engaging with automated systems or decision-making algorithms that make determinations about us.
One example arises in self-driving vehicles. Because these systems are designed with granular levels of autonomy in decision-making and in their responses to environmental stimuli, they must also be designed to learn and adopt the values of the community in which they are deployed. This is essential not only for trustworthiness but also to ensure the preservation of human determination. Thus, critically important is an iterative design process that continually assesses the ethical consequences of design choices, follows ethically aligned standards, and ensures that individuals are able to determine the values and rules used in the process. Centering humans and their values in the loop is a key part of human-centric computing [49, 50], in which technological devices, algorithms, and systems are designed with consideration of their human impact and human values are centered in the design process (see also value-sensitive design and privacy by design).
17.4.3 Justice: From Material Risk to Fairness and Due Process
In light of growing evidence of and concerns about unfairness in technologies and algorithms, there have been many recent calls to reorient and broaden the ethics discussion about emerging technologies like AI into one defined by justice, including social, racial, economic, and environmental justice [53, 54]. Others have taken up these concerns as information justice (e.g., [55, 56]) or algorithmic justice (see https://www.ajl.org/).
These discussions focus on the technical mechanisms needed to address questions of fairness, bias, and discrimination in algorithmic systems, as well as the consequences suffered by individuals and groups from inaccurate, unfair, or unjust systems. With the deployment of predictive algorithms and machine-learning models as decision-support systems across many sectors—e.g., financial, health, and judicial—these consequences are of great concern.
For example, the work of Buolamwini and Gebru revealed that a widely used facial recognition system was largely inaccurate in identifying darker-skinned females, with error rates close to 35%, compared to 1% for lighter-skinned males, suggesting that automated facial analysis algorithms and datasets can produce both gender and racial biases. Similarly, a widely used predictive algorithm employed by judicial courts in the United States to predict recidivism rates for sentencing decisions was found to be more likely to incorrectly label Black defendants as higher risks compared to White defendants. These cases illustrate the larger societal risks that arise from algorithmic decisions that lead to systematic bias against individuals within groups with protected social identities like race, gender, and sexuality [60, 61].
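Disparities like these are typically detected by disaggregating a model’s error rates by demographic subgroup and comparing them. A minimal sketch of such an audit computation, using made-up records rather than any published benchmark data:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, y_true, y_pred) tuples.
    Returns the per-group misclassification rate."""
    totals, errors = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in records:
        totals[group] += 1
        errors[group] += int(y_true != y_pred)
    return {g: errors[g] / totals[g] for g in totals}

# Illustrative audit records: (subgroup, true label, predicted label).
audit = [("darker_female", 1, 0), ("darker_female", 1, 1), ("darker_female", 1, 0),
         ("lighter_male", 1, 1), ("lighter_male", 1, 1), ("lighter_male", 0, 0)]
print(error_rates_by_group(audit))
# A large gap between subgroups is the signature of the biases described above.
```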
Even for non-marginalized populations, algorithmic bias can lead to decisions that limit opportunities, intentionally or not. When Amazon attempted to address gender gaps in its hiring, it implemented an applicant screening algorithm to predict which applicants were likely to match the qualities of past successful candidates. But when the outcome widened gender gaps, the company realized that the dataset used to train the model consisted primarily of successful male employees, making it less likely that female applicants would match the ideal. In this case, the problem was not inaccuracies in the data or model but rather what was missing: there was insufficient data about women to model a fair representation of their goals.
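The mechanism behind this case is general: when one group is underrepresented in training data, a model fit to that data tends to perform worse for that group even when every individual record is accurate. A synthetic sketch of the effect, with hypothetical features and groups (not Amazon’s system):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, shift):
    """Hypothetical applicants: features drawn around `shift`, with the
    'successful' label following a group-specific decision rule."""
    X = rng.normal(shift, 1.0, size=(n, 5))
    y = (X.sum(axis=1) + rng.normal(0, 1, n) > shift * 5).astype(int)
    return X, y

# Training data: 500 examples from group A, but only 20 from group B.
Xa, ya = sample(500, 0.0)
Xb, yb = sample(20, 1.5)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# On fresh data, accuracy for the underrepresented group lags well behind,
# even though nothing in the stored records is individually wrong.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X, y = sample(1000, shift)
    print(name, round(model.score(X, y), 2))
```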
Algorithmic bias has due process implications as well. For example, automated performance evaluation systems for public school teachers in California, New York, and Texas led to termination decisions without informing the employees that such tools were being used or providing meaningful opportunities for scrutiny and accountability. Such secret black box systems, especially in public agencies, generate a number of ethical concerns [65, 66].
On a societal level, the use of social credit scoring systems (SCS) also carries the potential for large-scale systematic violations of privacy and human rights. In China, a government-mandated SCS was implemented to strengthen social governance and harmony. Every citizen was assigned a “trustworthiness” score, calculated from an algorithmic assessment of data from medical, insurance, bank, and school records; credit card and online transactions; satellite sensor data; mobile phone GPS data; and behavioral data from public cameras. Authorities use these data and the social credit score to evaluate and hold citizens accountable by imposing sanctions that include restrictions on travel, bans on employment in the civil service and public institutions, disqualification of children from private schools, and public disclosure of ratings on national websites. Thus, the stakes of large-scale state surveillance include significant loss of freedoms of movement, employment, education, and reputation.
17.4.4 Non-maleficence and Beneficence: From Individual Risk to Collective Societal Good
Privacy ethics have long included attention to assessing risks to individuals and to adequately considering the safety, robustness, and protection of vulnerable populations. Indeed, much of the legal discourse about privacy protection and rights centers on the harmful consequences suffered when privacy is violated. However, harm remains narrowly defined, allowing many violations to go unpunished. In this section, we argue that broadening the ethical focus to one of non-maleficence—a call to ensure that our research conduct and technological designs also consider potential harms to society at large—provides an opportunity to move beyond individual risk assessments to consider and assess long-term social, intellectual, and political consequences.
At the intersections of humans and technologies, there are significant privacy concerns, in particular for the young (Chap. 14, this volume), the vulnerable (Chap. 15, this volume), and the marginalized, that are exacerbated by contemporary technologies. Of specific concern are the tools of authoritarian regimes, which have clear and dangerous consequences when individuals can more easily be identified and targeted. For example, it has recently come to light that facial recognition and other surveillance technologies are being used to identify, persecute, and imprison members of the Uyghur population in China. Members of this community are considered enemies of the Communist Party and subjected to incarceration and, by some reports, torture, sterilization, and starvation. The determination of whether Uyghurs are imprisoned is built upon a massive system of government surveillance, both in public spaces, using a network of CCTV cameras equipped with facial recognition software, and in private spaces, using spyware installed on smartphones that allows the government to trace location, communication, and media use.
Another example of malicious, harmful technology is deepfakes. Deepfake technology uses machine learning algorithms to combine images and voices from one person with recordings of another to create a realistic impersonation that is difficult to detect as inauthentic. Doctoring images is not new, nor are harmful lies. But as Floridi notes, deepfake technologies can also “undermine our confidence in the original, genuine, authentic nature of what we see and hear” (p. 320).
The sophisticated digital impersonation made possible with modern deepfake technologies is realistic and convincing in a way that carries the potential for significant harms. Typically created without the knowledge or consent of the individual, and often depicting negative or undesirable situations, deepfakes present significant ethical violations and a wide array of harms. These harms include economic harms from extortion under threat to release the videos; physical and emotional harms from simulated violence; dignitary or reputational harms that include relationship loss, job loss, and stigmatization in one’s community; and even societal harms when important political figures are depicted in damaging contexts, election results are manipulated, or trust is eroded in critical institutions. As more of our identities shift into digital spaces, this array of harms is informationalized, or spread beyond the bodily self to the networks of information that extend us digitally. Thus, the potential for harm is significantly amplified in a networked information environment that facilitates wide distribution, viral spread, and infinite persistence of access.
If non-maleficence asks moral agents merely to avoid harms, the principle of beneficence shifts our focus to a positive account of doing good. Beneficence implies a balancing of tensions between individual and collective concerns to consider how we can design and conduct our research with a specific goal to benefit the well-being of society. This requires moving beyond the individual in a singular context of intrusion or data collection to consider collective responsibilities for environmental, social, and cultural well-being aligned with beneficent goals.
In the research context, this means asking not only How do I avoid risks? but also How can I modify how I conduct my work so that it generates social good and contributes to well-being? In the industry context, there have been growing movements to promote the design and deployment of technologies that serve broader social good—ICT4All and AI4Good, for example—particularly technologies that contribute to the social and economic development of underserved populations and countries. Other calls have come from disciplines like human-computer interaction to discuss emerging policy needs for culturally sensitive HCI, accessible interactions, and the environmental impact of HCI.
The principles of non-maleficence and beneficence intersect as privacy practices and policies continue to negotiate value tensions between avoiding harms and managing risks, on the one hand, and actively working to develop or protect privacy, on the other. One example is the set of technologies and applications developed to minimize the risk and spread of infection during the COVID-19 pandemic. In order to manage the highly infectious disease, public health officials around the world raced to create technological and data analysis capabilities, including contact tracing, symptom tracking, surveillance, and enforcement of quarantine orders—typically enabled through mobile phones. These health surveillance systems provide an important capability to mitigate and manage the risks to global public health during the pandemic, but they also raise concerns about potential individual and societal-level privacy violations, both short term and long term. They seek to balance potential privacy harms against the good of public health.
Short-term concerns focus on the sharing of highly sensitive health, location, and behavioral data, complicated by disclosures of infectious health status. Long-term concerns center on the ambiguous end point for data collection and the worry that surveillance, once allowed in order to mitigate a temporary emergency, will become permanent. Unfortunately, these concerns are warranted based on the history of surveillance activities enacted during previous crises: in the United States, more than 30 declared national emergencies have provided emergency powers, including the domestic and international surveillance activities put in place after the September 11 terrorist attacks. Balancing the clear long-term societal benefit of technologies that manage critical infection spread and reduce deaths and health-care costs against the short-term risks of disclosing sensitive personal information and the long-term risks of continuous health surveillance illustrates the ethical tensions of crisis contexts.
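Some of these tensions can be eased by design choices. Decentralized exposure-notification protocols, for instance, broadcast short-lived random tokens over Bluetooth and perform all matching on the user’s own device, so that no central authority learns locations or a contact graph. A simplified sketch of the idea (an illustration of the general approach, not any specific national app’s protocol):

```python
import secrets

def new_token() -> bytes:
    """A random, unlinkable proximity token broadcast over Bluetooth.
    Tokens are rotated frequently (e.g., every 15 minutes) to prevent tracking."""
    return secrets.token_bytes(16)

class Device:
    def __init__(self):
        self.my_tokens: list[bytes] = []       # never leaves the device
        self.heard_tokens: set[bytes] = set()  # tokens observed nearby

    def rotate(self):
        self.my_tokens.append(new_token())

    def current(self) -> bytes:
        return self.my_tokens[-1]

    def observe(self, token: bytes):
        self.heard_tokens.add(token)

    def check_exposure(self, published_infected_tokens: set[bytes]) -> bool:
        """Matching happens locally: the server only ever sees tokens
        voluntarily uploaded by users who test positive."""
        return bool(self.heard_tokens & published_infected_tokens)

alice, bob = Device(), Device()
alice.rotate(); bob.rotate()
bob.observe(alice.current())         # Alice and Bob are in proximity
infected = set(alice.my_tokens)      # Alice tests positive and uploads tokens
print(bob.check_exposure(infected))  # True, computed on Bob's own device
```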
17.4.5 Explicability: From Data Transparency to Process Intelligibility
Ethical values are always tightly coupled to epistemic values, or values about what and how we know. Privacy ethics have long focused on the important epistemic principles of transparency (i.e., providing notice to individuals regarding the collection, use, and dissemination of personally identifiable information) and accountability (i.e., holding actors accountable for compliance with privacy protection requirements). In the modern era, where the workings “inside the box” of complex systems are often invisible or unintelligible to most, these principles must be broadened to include requirements for intelligibility (how does it work?) along with clear provenance of the data and people involved (who is responsible for the way it works?).
Collectively, this principle has been termed explicability, or the ability to obtain a clear and direct explanation of a decision-making process. Explicability is especially salient in the case of algorithms and machine learning procedures, and it ensures individuals the right to know and understand what led to decisions that have significant consequences for their liberty, employment, and economic well-being: freedoms that are fundamental human rights protected by law.
Furthermore, as Floridi and Cowls explain, explicability actually complements (or enables) the other principles: in order for designers and researchers not to constrain human autonomy and to “keep the human in the loop,” we must know how technologies might act or make decisions (instead of us) and when human intervention or oversight is required; to assure justice, we need to be able to identify who will be held accountable and to explain why a negative consequence occurred when outcomes are unjust; and to adhere to the values of beneficence and non-maleficence, we must understand how such technologies will benefit or harm our society and environment (p. 700).
Pasquale’s Black Box Society makes clear that algorithmic decision-making produces morally significant decisions with real-life consequences in employment, housing, credit, commerce, and criminal sentencing, often without offering an explanation for how such decisions were reached. Civil society advocates have warned that “many of these techniques are entirely opaque, leaving individuals unaware whether the decisions were accurate, fair, or even about them.”
For example, algorithms are used in the criminal justice system to predict the probability of recidivism for individuals in parole and sentencing decisions. One such tool, the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), has been used in more than 1 million cases since 1998, yet research indicates that its predictions are no more accurate than those made by people without criminal justice expertise. Moreover, although individuals are more likely to trust the accuracy of computational tools, research indicates the COMPAS tool led to racially biased outcomes: it overestimated the rate at which Black defendants would reoffend and underestimated the rate at which White defendants would [59, 81]. Furthermore, when defendants challenged the decisions, they were unable to receive an explanation of the information used in those decisions because COMPAS’s creators claimed the algorithm was proprietary information. In doing so, they violated the defendants’ right to due process.
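Part of what explicability demands is technically feasible even around a proprietary model: an interpretable surrogate can be fit to the black box’s inputs and outputs to expose which factors drive its scores. A minimal sketch of this surrogate-model approach (the risk scorer and feature names here are hypothetical stand-ins, not COMPAS):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
features = ["age", "prior_arrests", "employment_years"]

def black_box(X):
    """Stand-in for an opaque, proprietary risk model (hypothetical)."""
    return (0.8 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(0, 0.5, len(X)) > 1).astype(int)

# Probe the black box with many inputs and fit a small, human-readable
# surrogate tree to its responses.
X = rng.uniform(0, 10, size=(2000, 3))
surrogate = DecisionTreeClassifier(max_depth=2).fit(X, black_box(X))
print(export_text(surrogate, feature_names=features))
# The printed tree is an explanation a defendant could actually scrutinize.
```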
The factual context of a particular privacy problem is a key element of specifying ethical principles; the epistemic context is always tightly coupled to the ethical. In privacy eras of varying complexities, the explicability of data has an impact not only on the reception of ethics but also on its transmission, especially when artificial agents are included in those contexts. Like the other ethical principles, the epistemic principle of explicability takes on an increasingly complex role. Whether in the context of predictive algorithms, surveillance by autonomous systems, or any other information context, epistemic values no longer focus merely on replicability or accuracy but instead on validity, transparency, and comprehensibility.
Emerging ethical challenges to core ethical principles shift the way principles are specified and balanced, adding complexity to their scope and focus. These challenges have direct implications for research policy and practice.