To investigate whether a technological change, such as the introduction of a new technology, the modification of a technology, or a new application, reduces CI, and to inform a decision about what to do in response, we can use the procedure described in Table 1. Here, the procedure is presented in broad strokes. The idea is that the procedure will eventually be fleshed out with more specifics as we build up a better picture of which types of technologies tend to affect which contexts, and which values and other normative elements tend to appear in which contexts. Once elaborated, the procedure is intended for use by a range of actors, including groups engaged in collective deliberation, group members deciding whether to oppose or support a technological change, and individuals who are observing a group (of which they are not a member) and aiming to evaluate a technological change on the basis of a rich understanding of that group’s socio-technical situation. I will refer to the person implementing the procedure as “the evaluator,” but it is important to note that an evaluator may sometimes be a decision-maker and sometimes an observer.
For convenience of presentation, the procedure is primarily formulated as something to be performed prior to the technological change; however, some version of the procedure can be performed before, during, and after a technological change occurs, and it will frequently be advisable to perform the procedure iteratively, including as one monitors the effects of a technological change that has already been implemented. Additional comments on each step follow after Table 1.
Comments on the Steps in the Procedure
With regard to Step 1 and the question of what constitutes a technological change, we are interested in the creation of new technologies, as well as in changes to the production, maintenance, disposal, use, or application of existing technologies. Any of these changes can be morally or socially significant (see, e.g., Palm & Hansson (2006) on assessment over a technology’s life cycle).
In Step 2, one identifies relevant contexts and entrenched normative elements that are likely to be affected by the technological change. This can be a substantial undertaking, particularly when one is dealing with a new or emerging technology. It is of course difficult to anticipate the possible consequences of a technological change, whether to the normative elements in human life or to other aspects of the world (Hansson, 2011; Palm & Hansson, 2006). Within this step, as well as Steps 3 and 4, given that they all involve anticipation of consequences, one may beneficially incorporate numerous existing tools, such as technomoral scenario-building (Swierstra et al., 2009), risk analysis, technology assessment, scenario planning (Hansson, 2011), forecasting, futures studies (Brey, 2012, 2017), and the generation and consideration of diverse narratives (see the efforts of Dihal et al. (2021) or the Future of Life Institute’s “worldbuilding” competition). One may also want to investigate carefully which values are “embedded” in a given technology, that is, which outcomes a technological design will tend to promote or make more likely within particular contexts (Klenk, 2021). For this, one may wish to draw on “disclosive ethics” (Brey, 2000). Even when a change has already occurred, it is not simple to identify the full range of relevant consequences it has had so far. For the purpose of identifying contexts to consider and for the purpose of predicting and recognizing how elements of normative life are affected by the technological change, an analysis will be more complete if informed by a diverse group, including those affected in different ways, experts and laypeople, the vulnerable, and the historically marginalized (Footnote 14). This is especially true in cases in which a technological change is likely to affect many contexts. Furthermore, obtaining an adequate picture of the likely and actual effects of some technologies may require substantial laboratory research, involving both experimentation and simulation, as well as (incremental) experimentation within society (van de Poel, 2011, 2016). A crucial part of this process of anticipation and monitoring involves attempting to identify (possible, likely, or actual) unintended effects and other secondary effects of the technological change (van Eijndhoven, 1997).
What does it mean to say that elements of normative life—e.g., practices, roles, norms, concepts, values, and value-laden artifacts—will be affected by the technological change? This could mean that the technological change itself will have direct effects on normative elements—for instance, a new dam makes a traditional, sacred practice of fishing downstream impossible, or medical implants violate the sanctity of the human body—or that the technological change may have rather indirect effects on normative elements: it might facilitate threats to practices, norms, or things people value by making alternative practices more attractive, incentivizing norm violations, making a norm less salient, introducing a new value that is in tension with an old value, introducing a new context that lacks norms and thereby endangers shared values, etc.
The phenomena mentioned thus far are all attenuating influences on elements of normative life, but in other cases a technological change may reinforce normative elements, for example, by facilitating compliance with norms, supplying new ways to execute an old practice, or making it easier to promote something one values. In such a case, the technological change may increase CI—but one cannot be sure until one has checked whether the technological change also threatens other normative elements and (in Step 4) whether the entrenched normative elements that are reinforced actually promote shared ends.
One of the ways in which this step requires substantial work is that we may need to draw on moral psychology, sociology, and anthropology, and to perform considerable theorizing, in order to develop an adequate articulation of the pertinent norms that may be affected by a given technological change. Here again, Nissenbaum’s account suggests a valuable strategy. An important part of Nissenbaum’s contribution on the question of privacy is her identification of parameters that can be used to systematically articulate privacy norms. The key parameters that Nissenbaum identifies are actors (information subject, sender, and receiver), information type, and transmission principles. The information type parameter concerns what the information in question is about. In Nissenbaum’s theory, this parameter highlights the point that different norms regulate the flow of different types of information. For instance, in the medical context, different norms regulate information about patient medical conditions, patient attire, insurance provider, and account balance (Nissenbaum, 2009, p. 143). Transmission principles are the conditions under which information may or must be transmitted. Some examples: one may be allowed to transmit information only by sharing it, not by selling it; a transmission may occur only if the recipient pledges to keep the information confidential; or a patient’s medical information must be transferred to the patient upon request. A more fully elaborated account of the generalized CI procedure that I am proposing will develop sets of parameters for use with different domains of norms. For instance, we can develop sets of parameters for articulating norms in the following categories: fairness norms and the distribution of costs and benefits; norms governing punishment; care norms; property norms; honor norms; norms of respect; and norms of liberty, among many others.
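As a purely illustrative aid, the following minimal sketch (in Python) shows one way such parameter sets might eventually be represented so that privacy norms and norms from other domains share a common structure. The field names and the two example norms are hypothetical, introduced here only for illustration; they are not drawn from Nissenbaum (2009) or from any existing implementation.

```python
from dataclasses import dataclass

@dataclass
class ContextualNorm:
    """A hypothetical, generic representation of a parameterized contextual norm."""
    context: str        # the social context the norm belongs to, e.g., "healthcare"
    domain: str         # the norm domain, e.g., "privacy", "fairness", "care"
    actors: dict        # the roles the norm governs, keyed by role name
    parameters: dict    # domain-specific parameters, e.g., information type or resource
    principle: str      # the condition under which the regulated act is permitted or required

# A privacy norm articulated with Nissenbaum's parameters
# (actors, information type, transmission principle):
medical_confidentiality = ContextualNorm(
    context="healthcare",
    domain="privacy",
    actors={"subject": "patient", "sender": "physician", "receiver": "specialist"},
    parameters={"information_type": "medical condition"},
    principle="transmit only with the patient's consent and in confidence",
)

# A fairness norm articulated with a different, hypothetical parameter set:
triage_fairness = ContextualNorm(
    context="healthcare",
    domain="fairness",
    actors={"allocator": "triage nurse", "recipients": "patients"},
    parameters={"resource": "ICU beds", "costs_benefits": "access to scarce care"},
    principle="allocate according to medical need, not ability to pay",
)
```

The point of such a representation would simply be to make the norms at stake in Step 2 easy to articulate, compare, and revisit across domains; nothing in the procedure depends on this particular encoding.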
On the topic of entrenched elements of normative life, I want to emphasize that entrenchment is a matter of degree and that some things that may seem highly entrenched at first glance may in fact not be. We may take something for granted because we inherited it (e.g., the consent model of privacy, or norms that permit disposal of yard waste in landfills), even though it is relatively new in human history and we still have not realized all of the ways in which it affects our full range of ends. Or some people may have become habituated to something (e.g., group instant messaging, or posting consumer reviews on the internet) that is still far from well integrated into our larger normative system. It will often be worth questioning how deeply entrenched the normative elements threatened by the technological change really are. When the entrenched element is relatively new, as a rule of thumb we may expect disruptions to it to be less socially or morally disruptive (cf. Hopster, 2021b), and we should pay extra attention in Steps 3 and 4 to whether the element in fact advances our ends, because it has not gone through the longer historical process of adjustment and screening that older elements have.
In Step 3, one examines how the ends of individuals are likely to be affected. This step draws on what was learned in Step 2 as well as other information, including how background factors that are conducive to individuals’ ends are likely to be affected. Why are the ends of individuals worth considering? They matter for the CI evaluation, and for collective deliberation, inasmuch as they matter for shared ends—e.g., if our society aims to protect autonomy and well-being, then it is relevant to consider the ends of the individuals whose autonomy and well-being we wish to protect. Devoting a distinct step to the investigation of individual ends helps us disentangle the various reasons that group members have to care about the ends of individuals, and it aids our subsequent investigation of which ends are shared with others.
A consideration of background factors is an important component of Steps 3 and 4. To understand how a technological change affects individual and shared ends, we must look not only at how the change affects the entrenched elements of normative life that have been put in place over time for the purpose of advancing those ends, but also at background factors which no human may ever have put in place purposefully, but which nonetheless play a role in facilitating the achievement of our ends. For instance, suppose that someone ordinarily uses a stovetop kettle to prepare her tea each morning. The kettle takes 5 min to boil water, and during that time she has gotten into the habit of watering her plants. To be more environmentally friendly, she decides to replace her old kettle with an electric kettle. Now her water boils in 1 min. For the purpose of advancing her various ends (in this case, keeping her plants alive), it will be advantageous if she notices straightaway that the technological change she made to advance one end has removed a background feature of her environment (a forced 5-min pause in her activities, imposed by the physics of her kettle) that was instrumental in bringing about one of her other ends. She will probably want to make an alternative plan—for instance, cultivating a new habit triggered by some other prompt during some other time interval—that ensures she waters her plants each day. In both Steps 3 and 4, it is important to investigate the effects of a technological change on background factors that matter for ends: anticipating or predicting such effects, monitoring and/or experimenting as the change is implemented, and taking compensatory measures in cases where one’s ends are threatened by changes in background factors.
In Step 4, one ascertains whether a prima facie CI reduction is a genuine CI reduction, and one investigates whether the technological change also threatens to undermine shared ends in other ways. In a case where the change threatens entrenched normative elements, one must now check whether those entrenched normative elements in fact advance shared ends—if yes, one concludes that the prima facie CI reduction is a genuine CI reduction.
During this step, one looks at both shared contextual ends and shared general ends—there is practical value in examining standard sets of values associated with particular contexts and activities, as well as in considering general shared ends such as virtue or welfare. For instance, in the CI approach to privacy, Nissenbaum lists some of the values that have been identified as worth considering when we are looking at systems featuring flows of information: the fairness of power shifts, democracy, unfair discrimination, informational harm, equal treatment, civil liberties, and individual autonomy (2018, p. 842). For the purpose of applying the CI concept beyond privacy, we can draw on existing ethical and social science research to compile comparable lists of shared values to consider when analyzing systems featuring resource distribution, punishment, relationships, care settings, etc. In addition, separating the examination of whether contextual norms and practices advance shared contextual ends from the examination of whether contextual ends advance shared general ends allows one to draw conclusions such as the following: there is CI within some subsystem, but the system as a whole lacks CI.
One may conclude that a prima facie CI reduction is not a genuine CI reduction if one finds that entrenched normative elements are disrupted, but that those elements in fact do not advance shared ends. In this case, there was little CI to begin with. This type of outcome reflects the fact that, as circumstances change and as people’s conceptions of their ends change, entrenched norms that may have advanced the shared ends of the group in the past may no longer serve that function. If traditional norms are no longer (or never were) conducive to shared contextual or general ends, we may conclude that we have reason to adjust those norms. As discussed in Section 2, sometimes even entire contexts are not conducive to general ends; we may conclude that we have reason to revise or eliminate those contexts as well.
In this step, one can also pick out cases where the technological changes will not disrupt normative elements themselves, but will change background factors such that entrenched normative elements will no longer suffice to advance shared contextual ends—this is another form of CI reduction. Suppose a synthetic chemical industry has emerged for the first time, and someone proposes to introduce a new chemical that poses no threat to a society’s entrenched norms or social contexts. This new chemical, however, does threaten the survival of a remote animal species, and the society in question values species diversity. Thus far, the protection of this species has not required any action from that society—no norms or practices have been required. But now that someone is proposing to introduce the new chemical, a background factor has changed in a way that threatens that society’s shared end of species diversity. To protect this shared end, the group may conclude that it needs to implement new norms, such as a norm that the introduction of new chemicals cannot proceed without substantial testing.
In Step 5, the evaluator takes a position on the shared ends in question and reaches an evaluative conclusion on the possible technological change. An evaluator who is a member of the group will reflect on the shared ends of her group, and she will reach a conclusion about whether she still assents to those ends. The evaluator who is merely an observer has thus far been engaged primarily in investigating people’s normative lives and ends descriptively, but now will need to evaluate them. The evaluator will assent or object to shared ends on the basis of the usual mechanisms of moral cognition and established practices of evaluation: consideration of the information obtained thus far, intuition, moral perception, concern for further normative criteria (e.g., coherence), application of abstract moral theories, etc.
During this step, the evaluator may also look beyond CI and shared ends and appeal to additional considerations, such as ethical principles, theories, or values that diverge from those accepted by either the society under study or her interlocutor. If she cannot convince her interlocutor to accept the considerations she is appealing to, her argument will presumably have little impact on that particular interlocutor (Footnote 15), but she will have reached an evaluative conclusion for herself. Such an evaluative conclusion will be richer and better informed as a result of completing Steps 1 to 4 of the CI analysis.
Important Points of Divergence Between the Proposed Procedure and Nissenbaum’s CI Decision Heuristic
The general procedure for evaluating technologies that I have proposed in this article differs substantially from the CI decision heuristic that Nissenbaum proposes (2009, pp. 182–183). In summary, Nissenbaum’s (2009) heuristic has the evaluator (1) identify a practice that has been altered by a technology, (2) identify the prevailing context for the practice and the contexts that are nested within that broader context, (3–5) identify entrenched (privacy) norms that the technology may have influenced, using the parameters that Nissenbaum has identified as building blocks for privacy norms, (6) perform a prima facie assessment of the altered practice: “A breach of informational norms yields a prima facie judgment that contextual integrity has been violated” (p. 182), (7) perform evaluation I: “Consider moral and political factors affected by the practice in question” (p. 182), (8) perform evaluation II: “Ask how the system or practices directly impinge on values, goals, and ends of the context. In addition, consider the meaning or significance of moral and political factors in light of contextual values, ends, purposes, and goals” (p. 182), and (9) draw a conclusion: “On the basis of these findings, contextual integrity recommends in favor of or against systems or practices under study” (pp. 182–183; Footnote 16).
In my proposed procedure, we distinguish prima facie CI reductions (i.e., threats to entrenched elements of normative life), genuine CI reductions (i.e., threats to elements of normative life that advance shared ends), and the subset of genuine CI reductions that the evaluator judges to be problematic. This does not map perfectly onto Nissenbaum’s use of the CI concept. In some places, Nissenbaum’s summary of her view suggests that we can diagnose CI violations just by looking for norm violations, without assessing whether the norms advance shared ends: “Contextual integrity is defined in terms of informational norms: It is preserved when informational norms are respected and violated when informational norms are breached” (2009, p. 140). In Nissenbaum’s (2018) summary of her view, CI (and appropriateness, too) appears to be evaluable simply with reference to whether entrenched norms are violated, without requiring further assessment of the norms themselves. However, it is possible that her way of speaking in these instances rests on an assumption that the norms involved promote shared ends, and that the ends involved are legitimate. In any case, I propose distinguishing prima facie, genuine, and problematic CI reductions in the way Table 1 depicts because I believe it is useful to break the process of evaluation explicitly into multiple steps.
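To make the proposed distinctions concrete, here is a minimal sketch (in Python) of how the classification might be summarized. The boolean inputs stand in for the substantive inquiries of Steps 2 through 5 and cannot be read off mechanically; treating “problematic” as a genuine reduction to which the evaluator, having assented to the shared ends at stake, objects is a simplified gloss of Step 5, offered only for illustration.

```python
from enum import Enum

class CIVerdict(Enum):
    NO_CI_REDUCTION = "no prima facie CI reduction"
    PRIMA_FACIE_ONLY = "prima facie, but not genuine, CI reduction"
    GENUINE = "genuine CI reduction"
    PROBLEMATIC = "genuine CI reduction the evaluator judges problematic"

def classify_ci_reduction(threatens_entrenched_elements: bool,
                          elements_advance_shared_ends: bool,
                          evaluator_assents_to_shared_ends: bool) -> CIVerdict:
    """Summarize the proposed distinctions; each input abbreviates a substantive inquiry."""
    if not threatens_entrenched_elements:
        # Step 2: no entrenched elements of normative life are threatened.
        return CIVerdict.NO_CI_REDUCTION
    if not elements_advance_shared_ends:
        # Step 4: the threatened elements do not in fact advance shared ends,
        # so the prima facie reduction is not a genuine one.
        return CIVerdict.PRIMA_FACIE_ONLY
    if evaluator_assents_to_shared_ends:
        # Step 5 (simplified): the evaluator assents to the shared ends at stake,
        # and so judges the genuine reduction problematic.
        return CIVerdict.PROBLEMATIC
    return CIVerdict.GENUINE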
Another important difference is that Nissenbaum’s heuristic recommends identifying a single prevailing context. By contrast, the general procedure proposed in this article supposes that there will typically be multiple important contexts to consider. This makes the evaluative task more complex, but I believe it is necessary because so many technologies involve, or have implications for, multiple contexts. For instance, consider the question of whether a social media site most resembles a context of friends interacting, a workplace, a networking venue, a news media entity, etc. Social media has potential implications for each of these contexts, and each involves different and potentially conflicting sets of norms, roles, and other normative elements. Thus, when evaluating social media sites, we have reason to consider their relationship to entrenched normative elements in multiple contexts.