1 Introduction

Emerging technologies such as artificial intelligence are engines of social change. Such change can manifest itself directly in a range of domains (healthcare, military, governance, industry, etc.) [1]. For instance, technologies can drive shifts in power relations at the societal level [2, 3], as well as internationally [4,5,6]. Less visibly but no less significant, technological change can also have “soft impacts” [7], by challenging and changing entrenched norms, values, and beliefs [8, 9]. In virtue of such societally disruptive “second-order effects” [10]—which go far beyond the domain-specific changes of “first-order” market disruptions [11]—emerging technologies such as AI have been described as “socially disruptive” [12] or “transformative” [13].Footnote 1

For instance, while there is still considerable uncertainty over AI technology’s future trajectory, AI experts expect continued progress towards increasingly capable systems [14,15,16]. Further capability developments are likely to make AI’s eventual societal impacts considerable, possibly on par with previous radical and irreversible societal transformations such as the industrial revolution [13, 17]. Even under a baseline scenario that (implausibly) assumes no further progress in AI, the mere proliferation of many existing AI techniques to existing actors, and their integration with pre-existing digital infrastructures, will suffice to drive extensive societal impacts [18, pp. 56–82]. Indeed, Dafoe has argued that AI’s transformative implications may be grasped by considering it as the next step in a long line of ‘information technologies’ broadly conceived, spanning back to speech and culture, writing, the printing press, digital services, and communications technologies; or as the next ‘intelligence technology’, following previous mechanisms such as “price mechanisms in a free market, language, bureaucracy, peer review in science, and evolved institutions like the justice system and law” [19]. Accordingly, we take AI to be a paradigmatic example of an emerging Socially Disruptive Technology [12]—i.e., a technology with the potential to affect important pillars of human life and society, in a way that raises perennial ethical and political questions [20].

The rise of AI has provoked increasing public concern with the technology’s potential ethical impacts [21,22,23], which has translated into growing calls for regulation and ethical guidance [24]. The European Commission has begun to draft an “Artificial Intelligence Act” [25]; Chinese government bodies have articulated new regulatory moves for AI governance, setting out requirements for algorithmic transparency and explainability [26]. There have also been notable steps in governance for AI at the global level [27,28,29], such as, among others, the establishment of the ‘Global Partnership on Artificial Intelligence’ (GPAI) [30], or the UNESCO ‘Recommendation on the Ethics of Artificial Intelligence’ [31], the first such global agreement. Such initiatives reflect the growing view that the sociotechnical impacts of transformative AI should not be left to run their own course without supervision [32], but may require intervention and accountability to safeguard core values such as justice, fairness, and democracy [33]. Yet, scholars, policymakers and the public continue to grapple with questions over how AI is concretely impacting societies, what values it impinges upon, and in what ways these societies can and should best respond.

One challenge to formulating adequate responses to the ‘first-order problems’ posed by AI is that such responses can be derailed or suspended by this technology’s underlying second-order disruptions of the foundations and normative categories of both ethics and law (see Table 1 for key concepts). We understand first-order problems as those that can be adequately addressed in terms of pre-existing norms or prescriptions, such as pre-existing ethical norms or legal codes. For instance, the question of how pre-existing standards of jus in bello can be applied to warfare with autonomous weapons systems is a first-order problem. Second-order problems or disruptions, by contrast, call into question the appropriateness or adequacy of existing ethical and regulatory schemas. For instance, it has been argued that autonomous weapons systems create responsibility gaps that make the very idea of jus in bello inapplicable [34], and it is not obvious how this problem should be resolved. Given its major societal impact, it seems very likely that AI will drive second-order disruptions of various kinds, affecting ethical norms and values as well as systems of regulation. How can we rely on ethical and regulatory frameworks to cope with emerging technologies, when these frameworks are themselves being changed by technology?

In this paper, we propose a conceptual approach that helps to mitigate this challenge, by addressing the disruptive implications of emerging technologies for ethics and regulation in tandem. To date, the fields of Technology Ethics (TechEthics) and Technology Law (TechLaw) have developed sophisticated frameworks that explore the co-evolutionary interaction of technology with existing (moral or legal) systems, in order both to analyze these impacts and to normatively prescribe appropriate responses. However, these frameworks have remained isolated from one another, and insufficiently acknowledge that the norms of TechEthics and the regulations of TechLaw themselves co-evolve. We propose to integrate the dyadic models of TechLaw and TechEthics, shifting focus to the triadic relations and mutual shaping of values, technology, and regulation. We claim that a triadic values-technology-regulation model is more descriptively accurate, and serves to highlight a broader portfolio of ethical, technical, or regulatory interventions that can enable effective ethical triage of Socially Disruptive Technologies.

We spell out this claim in the subsequent sections of this paper. In Sect. 2, we further clarify what second-order disruptions amount to and how they challenge TechEthics and TechLaw. In Sect. 3, we present succinct mappings of the dyadic models of TechEthics and TechLaw, and subsequently point out some of their limitations. Specifically, we zoom in on AI technology and explain why second-order disruptions by AI cannot easily be captured by the dyadic models. In Sect. 4, we sketch a triadic model (the “Technology Triad”) that aims to synthesize these two frameworks, showing how it helps to grapple with the second-order societal impacts of AI both analytically and prescriptively. In Sect. 5, we evaluate this model, arguing that it is both more descriptively accurate (as it allows the mapping of second-order impacts on values and norms, through changes in legal systems—or on legal systems, through changes in values and norms) and more instrumentally useful (and normatively valuable) in responding to these changes than either of the dyadic models used in isolation. We accordingly provide a step-by-step operationalization of this framework, through a series of questions that can be posed about historical, ongoing, or anticipated technology-driven societal disruptions, and we illustrate its application with two cases: one historical (how the adoption of the GDPR channeled and redirected the evolution of the ethical value of ‘privacy’ after it had been put under pressure by digital markets), and one anticipatory (looking at the disruptions expected from the ongoing wave of generative AI systems). We conclude that approaching disruptive AI through the lens of the “Technology Triad” can lead to more resilient ethical and regulatory responses.

Table 1: Key concepts

2 Background: technological change and first- and second-order disruptions to ethics or law

The pace of policy responses to disruptive technological changes tends to be relatively slow, for various reasons. One factor is uncertainty over the future course of a technology’s sociotechnical trajectory. With some notable exceptions,Footnote 2 it has usually proven difficult to accurately predict a technology’s future development in advance.Footnote 3 It is often even more difficult to anticipate its subsequent uptake and use in society, let alone the resulting societal impacts [62, 63]. As such, there may often be legitimate disagreement about the costs and benefits of adopting either a permissive or a precautionary approach towards regulation [64].

However, there is another barrier as well, which pertains to the ways in which technological disruption can stress the normative credentials of existing ethical heuristics and the functioning or legitimacy of available regulatory response strategies. Socially Disruptive Technologies can have “deep impacts” in ethics and beyond [12]: they transform basic ethical concepts, norms, and public values, which are, in turn, instrumental to the ethical assessment and guidance of emerging technologies. For example, it has been argued that two core human values—“truth” and “trust”—are being disrupted by emerging information technologies, yielding new norms of veracity and trustworthiness [65]. As a result, ethicists face a challenge: in responding to disruptive technologies, should they rely on prior norms and conceptions of truth and trust, or should they rethink these?

Second-order impacts of technologies receive particular emphasis in the field of ethics that studies emerging technologies (henceforth TechEthics); this literature often frames such shifts in terms of ‘technomoral change’ [7, 66,67,68]. The core premise of the technomoral change lens in TechEthics is that ethics and technology evolve in mutual interaction and shape each other: technological artifacts and applications are frequently designed to reflect and realize social and moral values,Footnote 4 but technologies may end up reshaping and disrupting our norms and values in turn. While technomoral change is not the only approach adopted in the field of TechEthics [73], it is certainly a prominent one, especially for anticipating the implications of emerging technologies.

Analogous discussions of the second-order disruption of established norms occur in the field of law and regulation. Here, the challenge of simply applying existing law to address new (but essentially familiar) first-order problems created by emerging technologies is that the features or uses of these new technologies often do not lend themselves to easy categorization, provoking legal uncertainty [64, 74]. For instance, cryptocurrencies blur the lines between different types of more traditional assets; different regulators have classified them as a currency, a security, or a commodity [64]. Such classificatory challenges are sometimes framed as being driven by the alleged ‘novelty’ of a technology: the ‘exceptionalist’ argument here is that some new artifacts, or some of their uses, are so different from past technologies that existing laws cannot sensibly or reasonably interpret and decide upon the new situation.Footnote 5

Given the potential inflexibility of law, it has frequently been argued that the speed and complexity of emerging technologies create a ‘pacing problem’ for regulatory and governance responses [83, 84], though some have critiqued this concept [85]. However, the more recent approach of ‘TechLaw’ [64] does not grant the premise that the law can never keep up with technology. Instead, TechLaw focuses on “how law and technology foster, restrict, and otherwise shape each other’s evolution” [64, N. 1], [86]. Confronted with technological changes that affect legal rules, the legal system may respond in three ways: (1) by trying to deal with the new technology under existing rules (often through analogy to previously regulated technologies or their afforded behaviors), as occurs, for instance, when autonomous weapon systems are analogized to other weapons and regulated under existing weapons law; (2) by extending or modifying existing rules to fit the new technology, as occurs, for instance, when U.S. copyright law, which restricts unauthorized copying “by any method now known or later developed”, is extended to new technologies; or (3) by creating new rules [64, 87], as exemplified by the new “AI Act” currently developed by the European Parliament. There is no one-size-fits-all answer as to which response fits best; instead, the primary aim of the TechLaw approach is to identify how familiar forms of legal uncertainty appear in new sociolegal contexts [64, 88, 89].

In sum, the existing TechLaw and TechEthics approaches both already recognize and foreground the evolutionary nature of their respective domains: both law and ethics are understood not as static but as evolving systems, which take their shape in interplay with a variety of (first-order and second-order) pressures, technology prominent among them. However, while the evolutionary nature of both law and morality has been recognized in recent scholarship, another shared feature of TechEthics and TechLaw has remained obscured: that (technology) ethics and (technology) law are co-evolutionary systems with mutually dependent trajectories. That is, while scholars of both morality and law have been paying increasing attention to the interrelations of their fields with technology, they remain largely oblivious to the entangled dynamics of their fields with one another.Footnote 6 This may not be a pressing problem when it comes to analyzing and responding to small-scale technological disruptions, but we will argue that second-order disruptions by AI require a more integrative approach.

3 Two dyadic models: TechEthics and TechLaw

Before arguing for the benefits of an integrative triadic model (Sect. 4), let us first outline the background of the existing dyadic models used in TechEthics and TechLaw, starting with the approach of technomoral change. The core premise of this approach is that ethics and technology mutually shape each other. The emergence of contraceptive technologies provides one of several historical case-studies illustrating this mutual shaping: the invention of the female birth-control pill was driven by social activists pursuing a variety of social and moral goals [94], but at the same time, by severing the link between sex and pregnancy, the birth-control pill facilitated unanticipated shifts in the sexual morals of many societies, and fueled emancipation movements far beyond the expectations of the initial reformers. Various other historical examples have been discussed in the literature on technomoral change, such as the influence of ploughing technology on gender norms; the dynamics between new weapon technology and the demise of dueling as an exclusively aristocratic practice; or the role of veterinary medicine and meat replacements in changing attitudes towards the treatment of farm animals [95].

While these historical cases provide a proof of concept, the technomoral change framework is mostly used anticipatorily, to sketch scenarios of possible pathways of future value change. This ‘technomoral scenario approach’ has recently been extended with the approach of ‘axiological futurism’, which proposes a systematic exploration of future axiological trajectories [8, 96]. These anticipatory frameworks are part of a broader array of Ethical Foresight Approaches [97], which ethicists invoke to assess emerging technologies. Frequently, these approaches combine efforts not only to anticipate the future dynamics of change, but also to assess change in prescriptive terms and to intervene to achieve desired outcomes.

A recent criticism of the technomoral change approach is that it faces an explanatory gap [68]: it does not clearly explain why some technomoral changes have a decidedly disruptive character. Sometimes technology and morality shape each other gradually; at other times, changes occur rapidly, unleashing powerful disagreement and confusion. Nickel et al. [68] argue that this explanatory gap can be filled by providing a more comprehensive account of what moral inquiry and moral change amount to, one which emphasizes the role of individual and collective moral uncertainty and confusion about the interpretation, priority, and correct application of public values.

Adding to this, we submit that a more comprehensive account of technomoral change should integrate with TechLaw work on (techno)legal disruption [36, 74], recognizing the legal and regulatory uncertainty that accompanies technological disruption. When legal and regulatory systems are disrupted, it is not obvious which existing legal and regulatory frameworks, if any, apply to a technology; or, if they are still held to apply, it is not obvious how. Such legal uncertainty, in turn, loosens a potential constraint on technomoral change: in the absence of regulatory standards, the dynamics of future technomoral change are more difficult to anticipate. Consider the example of generative AI, which we discuss in further detail in Sect. 4: in the absence of standards for AI regulation, the question of how AI will affect societal norms and values is much more open-ended than when such standards are present. Legal and regulatory gaps and a loss of institutional bearings loosen the bounds of collective moral inquiry, whereas the presence of a regulatory framework imposes a (soft) constraint on it.

A further criticism is that scholarship in TechEthics barely touches on questions of radical moral change at the societal level. The thematic focus of current case-studies in this literature is somewhat narrow and primarily geared to the biomedical sphere, as suggested by the frequently used example of the birth-control pill [98, 99]. Furthermore, research on the co-shaping of technology and society more generally is often geared to interactions between humans and apparently mundane technological artifacts. In both of these respects, we submit, extant scholarship is not perfectly equipped to analyze the deeper impacts of Socially Disruptive Technologies: like AI, these are often not artifacts but sociotechnical systems, and the relevant object of study is typically a more radical form of societal change.

Next, let us consider the field of law, regulation, and technology, where there has similarly been a sustained focus on the mutual shaping of emerging technologies and particular regulatory systems [100]. Such work has frequently focused on the legal impacts of one or another specific (anticipated) new technology—from new reproductive technology to nanotechnology, and from the internet to AI applications—on existing law or doctrines [87]. Often, these debates have turned on the perceived ‘novelty’ of the technology in question, or on its assumed ‘essential characteristics’.Footnote 7 Accordingly, such legal work drew on an exceptionalist approach, asking whether or when a particular new technology possessed sufficiently novel or remarkable ‘essential features’ that it could not be adequately covered by existing legal doctrine.

Recent legal scholarship has taken issue with this exceptionalist framing of technolegal disruption, arguing that disruptive technologies foreground familiar forms of legal uncertainty in new sociolegal contexts [64, 88]. Relatedly, some scholars have called for a departure from technology-centric or application-centric approaches to regulation [105, 106], focusing instead on general types of change in the regulatory ecosystem. What matters in this view are not the assumed artefactual characteristics of a technology, but rather the societal ‘salience’ [104] or sociotechnical changes [88, 106] resulting from its use. Such work has sought to take a more systematic approach to developing general frameworks for understanding the cross-sector ‘legal disruption’ of technology [36].

In sum, both fields—TechEthics and TechLaw—have provided important insights into processes of technomoral change and technolegal disruption, respectively. As dyadic models, they each improve upon older approaches by allowing for an analysis of the mutual shaping of the two phenomena under examination (technology and ethics, and technology and law, respectively). But neither model, in isolation, is ideally suited to anticipate and assess the implications of Socially Disruptive Technologies.

3.1 AI and second-order disruptions through dyadic lenses

Let us outline the strengths and shortcomings of the current dyadic models, with a focus on the case of AI. In recent years, AI has been adopted in diverse practices, from targeted advertising, insurance pricing, and fraud detection, to hiring decisions, predictive policing, and administrative decision-making. Notwithstanding the various benefits of the technology, it has also been associated with concerns about discrimination, privacy infringement, and the spread of misinformation [107], among many others. AI systems have been described as potentially causing harm at many levels—individual, collective, or societal [108]. Without aspiring to be exhaustive, we can represent some of the concerns that AI raises in terms of the dyadic models of both TechEthics and TechLaw, via the mappings of Figs. 1 and 2.

Fig. 1: The dyadic approach to TechEthics

Fig. 2: The dyadic approach to TechLaw

The starting point of the dyadic TechEthics model is to analyze the ethical problems and disruptions to which emerging technologies give rise, and subsequently to identify which ethical response might—or should—be prescribed. As Fig. 1 illustrates, this dyadic approach allows us to examine important pathways by which technologies can lead to new first-order challenges and changes. For instance, TechEthics scholarship can identify and analyze numerous cases where new (AI) technology creates first-order ethical problems, because it violates established and cherished public values, such as privacy, non-discrimination, democracy [109, 110], human dignity [111], or environmental sustainability [112].

Where first-order challenges are concerned, these public values remain stable: while AI challenges compliance with extant ethical norms and practices (of non-discrimination, privacy, etc.), first-order challenges do not involve a more thoroughgoing contestation of norms of privacy or the value of non-discrimination. Yet the dyadic TechEthics approach can also reckon with second-order ethical disruptions: it can analytically identify second-order changes in value systems in interaction with technological changes. On the basis of these analyses, the model allows the prescription of appropriate responses (of the ethical value system) to both first-order problems and second-order disruptions. Hence the model is dyadic: it also includes the reverse, prescriptive question of how ethics can and should shape technology.

While useful in many contexts, a shortcoming of the dyadic TechEthics model is that it makes no explicit reference to regulation. As a result, in analyzing the dynamics of technomoral change, it can easily miss relevant mutual interactions, as well as key underexplored pathways, such as indirect effects of technology on ethics that are mediated through intermediate effects in the domain of law. In terms of recommendation, it risks foregrounding some prescriptive responses over others. In particular, there is an inclination to focus on interventions that alter the technology—either the artifact or its design process, as seen in ‘value-by-design’ approaches [69]. Conversely, regulatory responses are de-emphasized.

Now consider the dyadic model employed in TechLaw scholarship (Fig. 2). This model allows researchers to analytically identify and characterize a range of domains where AI systems put pressure on the existing regulatory equilibrium. This can be because the use of AI creates first-order problems that raise the question of whether or how existing laws apply: whether self-driving cars have a clear status under international road traffic conventions [113]; whether ‘robot lawyers’ should be classified as ‘goods’ or ‘services’ in international trade law [114]; whether autonomous weapons systems violate the norms of International Humanitarian Law [115]; or whether existing criminal law doctrines such as ‘mens rea’ can neatly apply to ‘crimes’ carried out or commissioned by AI systems (such as trading agents convergently discovering fraudulent trading strategies) [116, 117].

In principle, TechLaw scholarship can also explore cases of second-order legal disruption, where new developments create uncertainty over whether existing laws should apply, or whether they should instead be reconfigured in light of the new situation [64]. But note that these normative considerations cannot be settled on intra-legal grounds alone; they require ethical reflection on the proper scope of legal and regulatory intervention, and on the benefits and risks of taking a proactive regulatory stance in the face of Socially Disruptive Technologies. This calls for drawing on ethical principles and values—a component not foregrounded in the dyadic TechLaw model.

In sum, in both dyadic frameworks, we can study and analyze situations where second-order impacts of AI systems cannot be resolved by a straightforward appeal to existing frameworks, but instead raise uncertainty about—or call into question—these frameworks themselves, or important aspects of them.Footnote 8 Some recurrent issues, in both TechEthics and TechLaw, pertain to the questions of how to keep humans ‘in the loop’ with the advent of AI [119], how to think of new hybrid forms of human–machine agency and responsibility, and how to design ‘humane’ AI technology. Such issues require foundational reflection: rather than posing a problem that can be solved within the current ethical/legal ecosystem, they challenge the ecosystem itself, which may need to be amended or reformed in order to cope. In the face of this challenge, the dyadic models of TechEthics and TechLaw encounter limitations. When norms and values themselves are transformed, or when regulatory systems are disrupted by technology at a fundamental, conceptual, or processual level, an adequate analytical framework requires a more holistic overview of the resultant changes in the combined ethical-regulatory ecosystem, both to grasp the relevant dynamics and to respond to them.

4 The triadic model

While the dyadic models of TechEthics and TechLaw have been developed separately and have thus far largely worked in parallel, they share at least part of their analytical domain: they are both concerned with technology (whether particular artifacts or sociotechnical systems),Footnote 9 and aim to explore the dyadic, co-evolutionary relation of technology with a particular social system or conceptual order. One focuses on the dyadic relations of technology to ethics (here broadly understood to include social value systems and norms); the other on the dyadic relations to law (i.e., a particular regulatory system).

This means that we can chart all three systems on a simplified triadic map, which allows us to visualize the overlap and differences between the dyadic paradigms (see Fig. 3). In particular, we can trace which types of analytical topics and paths each of them highlights, and what kinds of prescriptive evaluations or recommendations either field reasons towards or foregrounds.

Fig. 3: Siloed dyadic approaches, mapped on the triadic model

4.1 Analytical and prescriptive pathways on the triadic model

We suggest that integrating both models into a single triadic model offers many more pathways for analyzing societal disruption (see Fig. 4), as well as a wider palette of potential responses available to both fields (see Fig. 5).

Fig. 4: Analytical pathways on a triadic model

Fig. 5: Prescriptive pathways on a triadic model

The triadic model allows an analytically richer exploration of indirect technological impacts: it can explore the impact of AI on values as a result of its disruption of legal systems, or on legal systems as a result of technology-driven changes in underlying public values. As such, the model illustrates the benefits, both descriptive and prescriptive, of the triadic approach over the isolated dyadic approaches.

In terms of descriptive analysis: while, as discussed in Sect. 3, both TechEthics and TechLaw can to some extent explore first-order and second-order challenges, they usually emphasize first-order challenges and face limitations when exploring second-order ones. Normative considerations are certainly part of extant TechLaw approaches, but these considerations are not foregrounded if TechLaw is framed in dyadic terms. A shift of focus specifically benefits the analysis of second-order disruptions that are mediated indirectly—i.e., that result not directly from technological change, but from technological change mediated through the other domain. For instance, there might be use cases of AI (sociotechnical developments) that predominantly or most visibly affect regulatory systems, yet which have important indirect effects on ethics. These could include first-order challenges for ethics (e.g., the accountability implications of increasing automation of legal decision-making), as well as second-order changes in ethics (e.g., how the increasing automation of legal systems might shift the extent to which society values transparency relative to efficiency or speed in governmental decision-making [121]). The triadic model enables easy identification of such second-order effects.

Simultaneously, the triadic model offers more actionable prescriptive analysis, addressing some of the shortfalls of the dyadic models. In particular, it improves upon dyadic approaches by (a) allowing for a more appropriate analysis of prescriptive priorities—i.e., triage among the full spectrum of societal disruptions (to both ethics and law) driven by AI technology (see also Sect. 5). Moreover, the triadic model (b) supports a normatively richer analysis of potential prescriptive responses, as it prompts TechLaw and TechEthics to more fully appreciate the relevance of carrying out responses through one another’s tools.
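Before turning to operationalization, the pathway structure of the triad can be made concrete with a small sketch. The following Python snippet (purely illustrative; the node labels, dictionary, and function are our own hypothetical choices, not part of the framework itself) enumerates the ordered influence pathways among the triad's three nodes and checks which of them are visible to each siloed dyadic model. Running it shows that the values-regulation edge, and all mediated (two-step) pathways, fall outside both dyadic lenses, which is precisely the analytical blind spot the triadic model addresses.

```python
from itertools import permutations

# The three nodes of the "Technology Triad" (labels are our own).
NODES = ("technology", "values", "regulation")

# Each dyadic model only observes interactions within its own pair of nodes.
DYADIC_MODELS = {
    "TechEthics": {"technology", "values"},
    "TechLaw": {"technology", "regulation"},
}

def pathways():
    """Yield ordered influence pathways on the triad: two-node paths are
    direct impacts; three-node paths are impacts mediated by the middle node."""
    for length in (2, 3):
        yield from permutations(NODES, length)

for path in pathways():
    kind = "direct" if len(path) == 2 else f"mediated via {path[1]}"
    seen = [name for name, pair in DYADIC_MODELS.items() if set(path) <= pair]
    print(f"{' -> '.join(path):42} {kind:28} "
          f"visible to: {', '.join(seen) or 'neither dyadic model'}")
```

For instance, the printed pathway `technology -> regulation -> values` (mediated via regulation) corresponds to the GDPR case discussed in Sect. 4.4, where a regulatory response redirected an ongoing process of value change.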

4.2 Operationalizing the triadic model

Having sketched the triadic model in the abstract, let us now indicate how it can be operationalized. We do so by outlining three steps. The first step is identification: it consists of finding a relevant case (historical, ongoing, or anticipated) of technology-driven second-order disruption to ethical and/or legal systems. Questions that can help to guide such identification areFootnote 10:

  a. Which (past, ongoing, or anticipated) technologies meet the criteria for ‘emerging technologies’ [75] and/or ‘socially disruptive technologies’ [12, 122], such that we should expect not just first-order problems but also second-order impacts on ethics and law?

  b. Which (past, ongoing, or anticipated) technological disruptions are studied in both TechEthics and TechLaw scholarship, but primarily with a focus on domain-specific first-order disruptions? Where does either lens focus on (the problems created by) new artifacts, when beneath the surface lie larger sociotechnical systems?

  c. Which (past, ongoing, or anticipated) technological disruptions are currently identified and studied as second-order impacts in either TechEthics or TechLaw scholarship, but remain understudied and underappreciated in the other?

  d. Which (past, ongoing, or anticipated) technological disruptions have received attention from both TechEthics and TechLaw, but generally receive very different treatment, analysis, or evaluation?

  e. Which (past, ongoing, or anticipated) technological disruptions have received attention from both TechEthics and TechLaw, but with the two fields recommending different responses?

The second step consists of reviewing and comparing existing dyadic accounts to analyze second-order disruptions. Adopting the dyadic TechEthics lens: how does TechEthics analyze the technomoral change? Is the ethical shift one of (de)valuation, conceptual reconstitution, or gradual drift in ethical values? And what responses does TechEthics accordingly prescribe? Adopting the dyadic TechLaw lens: how does TechLaw analyze the technolegal disruption? Does the new artifact or enabled behavior (a) create clear gaps that existing law obviously does not cover; (b) lead to incorrectly over-inclusive or under-inclusive application of existing laws; (c) render laws obsolete (e.g., because they are no longer needed, adequate, or enforceable); or (d) shift the relative balance of problems?Footnote 11 And what responses does TechLaw accordingly prescribe? For example, when or where does (or should) the legal system respond to the new technology (a) by dealing with it under existing rules (e.g., through analogy); (b) by extending or modifying existing rules to fit the new technology; or (c) by creating new rules?

The third step is to integrate both dyadic accounts into a triadic model. In terms of analysis (3a), this may allow for the identification of legal disruptions that follow (indirectly) from technomoral change, e.g., (i) because a shift in the view or conceptualization of key values indirectly affects the necessity, legitimacy, or underlying purpose of key existing technology laws, making their (re)application problematic and/or changing their intended purpose; or (ii) because the commonly prescribed ethical responses may create new conflicts or contradictions under existing legal systems. Conversely, technolegal disruptions may give rise (indirectly) to ethical changes, e.g., (i) because the regulatory response that patches legal provisions for the technology itself comes to be considered ethically problematic or contested; or (ii) because the regulatory response affects, redirects, or channels the public process of technomoral change in different directions.

In terms of triadic prescription (3b), the point of the third step is to identify new priorities, strategies, or considerations for societal (ethical and/or legal) responses to an emerging technology. Three types of use of the triadic model can be distinguished here (a schematic sketch follows the list):

  (i) Triaging prescriptive priorities. Taking the broader view of the technology’s societal disruptions to both ethics and law, which of these are the most urgent, critical, or fundamentally disruptive? Are these the direct second-order disruptions in either one domain (ethics or law), or are these the indirect second-order disruptions? How should this shift the priorities or research agenda of either TechEthics or TechLaw scholarship?

  (ii) Tailoring prescriptive responses within a lens. To TechEthics, what does this triadic perspective highlight about the multiple realizability of ethical responses to the technological disruption? To TechLaw, how can the triadic perspective help make regulation more tailored to the actual societal disruption?

  (iii) Tailoring prescriptive responses between lenses. Where could either field draw on tools from the other’s toolset in addressing the societal challenges it faces?
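To illustrate how a completed analysis might be organized and triaged, consider the following minimal sketch (again purely illustrative; all class, field, and enum names are our own hypothetical choices, not part of the framework). It encodes a case as a set of disruptions classified along the distinctions used above (domain, first vs. second order, direct vs. mediated pathway), and applies a crude urgency ordering in the spirit of step 3b(i), ranking second-order and indirectly mediated disruptions first.

```python
from dataclasses import dataclass, field
from enum import Enum

class Domain(Enum):
    ETHICS = "ethics"
    LAW = "law"

class Order(Enum):
    FIRST = "first-order"    # addressable within existing norms or rules
    SECOND = "second-order"  # calls the norms or rules themselves into question

class Pathway(Enum):
    DIRECT = "direct"                   # technology -> domain
    VIA_LAW = "mediated via law"        # technology -> law -> ethics
    VIA_ETHICS = "mediated via ethics"  # technology -> ethics -> law

@dataclass
class Disruption:
    description: str
    domain: Domain
    order: Order
    pathway: Pathway

@dataclass
class CaseAnalysis:
    technology: str
    disruptions: list[Disruption] = field(default_factory=list)

    def triage(self) -> list[Disruption]:
        # Step 3b(i): rank second-order disruptions before first-order ones,
        # and mediated (indirect) disruptions before direct ones.
        def urgency(d: Disruption) -> tuple:
            return (d.order is Order.SECOND, d.pathway is not Pathway.DIRECT)
        return sorted(self.disruptions, key=urgency, reverse=True)

# Usage, with two disruptions drawn from the generative-AI case in Sect. 4.5.
case = CaseAnalysis("generative AI", [
    Disruption("chatbots leak sensitive personal information",
               Domain.LAW, Order.FIRST, Pathway.DIRECT),
    Disruption("shifting valuations of 'creativity' unsettle the scope of IP law",
               Domain.LAW, Order.SECOND, Pathway.VIA_ETHICS),
])

for d in case.triage():
    print(f"[{d.order.value}; {d.pathway.value}] {d.description}")
```

Such a representation is of course no substitute for the substantive ethical and legal analysis itself; its point is merely to show that the triadic framework yields a small, explicit vocabulary in which cases can be recorded and compared.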

4.3 Illustrating the triadic model: two case-studies

We conclude this section by sketching two case-studies on the basis of this three-step approach.Footnote 12 The first case-study is historical: it considers the intersection of digital technology, the societal value of privacy, and (data privacy) regulation over the last two decades. The main upshot of the triadic model here is to impose structure and enhance analytic clarity, by following general steps and questions to describe the dynamics of a second-order disruption. Nuanced reassessment of historical cases can often be highly revealing, both in shaping our views on the genealogy of our current (technology-focused) values and laws—our technomoral and technolegal legacy—and in providing potentially transferable lessons for how to anticipate novel instances of technomoral change.

The second case-study considers the growing use and dissemination of increasingly general-purpose and ‘generative’ AI systems. Here, the triadic model can help both to anticipate the relevant dynamics of this still-unfolding process of techno-moral-legal change, and to make recommendations for intervening in it. In particular, the model foregrounds that the emergence of generative AI not only requires scrutiny of the soundness and applicability of existing regulations, but also calls for ethical reflection on the value of human creativity, authenticity, and inventiveness, which should be reflected in the normative assessment of these AI technologies.

4.4 Case-study 1: digital technology as a threat to privacy

  • Step 1: Identify a case of second-order ethical and/or legal disruption.

    While the last decades have seen extensive work exploring the ways in which digital technologies have challenged or endangered privacy, their impacts have not only consisted in creating first-order ethical (or legal) threats to privacy. Rather, a plausible case can be made that over the last two decades, privacy norms have also significantly shifted under the pressure of digital technologies, such that some conceptions of privacy—such as an understanding of privacy in terms of “secrecy”—have lost some of their appeal and prominence. This can be understood as a form of second-order ethical disruption.

  • Step 2: Review and compare existing dyadic accounts to analyze direct second-order disruptions.

    A dyadic model of TechEthics might serve to anticipate this technology-precipitated value dynamic. A technomoral change lens might, for instance, portray the dynamic as a runaway effect, resulting in a society that steadily and effectively moves beyond privacy. Conversely, a TechLaw lens might focus on how emerging practices of consumer data collection and tracking not only create a need for new regulations to limit such practices of online privacy infringement (first-order legal disruption), but also lead technology lawyers to deeper reassessments of the legitimate and/or appropriate pathways (such as ‘code as law’) through which regulators can serve and protect such values [102].

  • Step 3a: Integrate into a triadic model: analysis and anticipation.

    In terms of analysis, a triadic lens can help us understand how, at least in the European context, this societal dynamic has arguably been stalled and transformed by regulatory intervention. Specifically, in 2016, the EU adopted the General Data Protection Regulation (GDPR). Its original aim was to replace the older 1995 Data Protection Directive, and to establish new guidelines for a much more connected world (technological change creating a direct legal problem). In response, the GDPR sought to adapt data protection regulations in ways that preserve ‘privacy’ as a core value (legal response to legal problem). However, rather than aiming to preserve or restore previous notions of “privacy as secrecy”, what the GDPR has functionally safeguarded is often closer to a conception of privacy in terms of the “appropriate flow” of information [123] (legal response driving indirect value change). Hence, the impact of technology regulation is integral to a retrospective understanding of these recent dynamics of value change: while digital technology is plausibly seen as a key driver of the changing norms and conceptions of privacy, the relevant dynamics are better appreciated by foregrounding the tacit third element of (technology) regulation in channeling and directing the process of technomoral change.

4.5 Case-study 2: the rise of generative AI

  • Step 1: Identify a case of second-order ethical and/or legal disruption.

    The recent and ongoing rise of multimodal and increasingly general ‘foundation models’ [124,125,126], large language models (LLMs, e.g., GPT-4, Claude, Bard), generative AI (e.g., DALL-E 2, Stable Diffusion, Midjourney), and other large generative AI systems [127],Footnote 13 has received tremendous attention, and this development is widely expected to bring significant new challenges to both ethics and regulation [128]. To date, considerable attention has been devoted to the first-order problems of these generative models. This includes work on first-order ethical problems, such as these models’ potential to produce biased or hateful speech, leak sensitive information, produce generally poor-quality information, or aid the production of fake news [129, 130]. Others have expressed concern over the growing capabilities and risks of these AI systems as they are scaled up further, and have called for a temporary pause on such experiments [131]. Simultaneously, in TechLaw there has been emphasis on first-order legal problems, such as Italy’s 2023 ban on ChatGPT in the wake of privacy concerns, accompanied by an intent to evaluate its compliance with the GDPR [132]; copyright lawsuits by artists and coders over the use of open-source materials in training LLMs [133]; or Chinese regulations on generative AI banning ‘subversive content’ [134], among many others. However, generative AI systems also have a clear potential to yield second-order disruptions, which call into question the applicability of existing concepts or norms.

  • Step 2: Review and compare existing dyadic accounts to analyze direct second-order disruptions.

    Generative AI systems put pressure on existing ethical concepts and intuitions. For instance, generative AI creates a new credit-blame asymmetry in assigning responsibility for language model outputs: human users should still be blamed for utilizing bad or low-quality outputs of those systems, yet should not get (as much) credit for utilizing particularly good outputs [135]. Another concern is that the dissemination of generative art models makes it increasingly unclear how to understand and value the notions of creativity and authenticity, as these models allow for the reproduction of individual artistic styles to create new artistic products at scale, or the generation of novel texts that mimic existing writing styles and contents. Moreover, there are cases of direct legal disruption. For instance, since their proliferation in late 2022, the latest generation of generative AI chatbots has rapidly put pressure on the risk-based approach of the proposed EU AI Act [136]: because these general-purpose AI systems have a wide range of possible use cases, they made it very difficult for providers to envisage their downstream risks [137]. This highlighted the shortfalls of regulating AI at the application layer rather than throughout the product cycle [138]. In response, some have suggested applying strict liability [139].

  • Step 3a: Integrate into a triadic model: analysis and anticipation.

    Where will direct technomoral change create indirect legal disruption? Ethically thick concepts such as ‘creativity’, ‘authenticity’, and ‘inventiveness’ have long served as cornerstone concepts in legal discourse on intellectual property and patent law. Many aim to reapply these concepts to generative AI art. Yet one challenge is that the sheer proliferation of AI systems, resulting in widely available artistic capabilities, may begin to call into question old ways of valuing creativity; this may re-open debates over whether, or how, IP law should be applied to protect or safeguard specifically human creativity.

    A related challenge is that, as LLMs begin to change the nature of many workplace tasks, the aforementioned credit-blame asymmetry will begin to express itself in an ‘achievement gap’: many human jobs will involve supervising, prompting, or maintaining LLMs to produce the outputs for which skilled humans previously received credit, while it becomes increasingly hard for human professionals to claim credit for these tasks [135]. This may lead to a reappreciation of the nature and value of meaningful work, which might in turn signal a need for regulatory updating in domains such as labor and employment law.

    Where will direct technolegal disruption create indirect ethical change? Some regulatory initiatives may focus on ensuring the ‘democratization’ of, and access to, new technologies. Yet the ease with which generative AI can be disseminated, but also misused, appears set to create new ethical debate and renegotiation over what it means to have ‘democratized’ (AI) technology [140, 141], and when (or in which form) this is actually a valuable goal for law to preserve.

  • Step 3b: Integrate into a triadic model: recommendation and prescription.

    If generative AI art models lead to a disruption or rearticulation of widely shared notions of the meaning, or the value, of ‘creativity’, then TechLaw regulatory approaches would benefit from engaging in broader (public) participation and/or (expert) debate about the intended purposes of the regulatory response to generative AI. If general-purpose generative AI creates legal challenges for the EU AI Act’s application-stage, risk-focused regulatory framework, then responses would benefit from taking into consideration evolving notions of the balance of responsibility for harms throughout the AI value chain [142, 143].

5 Evaluating the triadic model: strengths and limits

There are at least three reasons to pursue the further development, testing, and application of the triadic model as a framework for synthesizing insights from TechEthics and TechLaw. First, to technology ethicists, the triadic model foregrounds the multiple realizability of ethical interventions. Familiar routes for coping with value change are responsive ethical initiative, or technological design (altering either a technology’s development process or its artifactual features, as proposed by “ethics by design” approaches). The third route, of implementing more flexible regulatory frameworks, is underexplored in current TechEthics.

In response to emerging AI and associated value changes, ethical interventions via each of these routes are called for, and alignment between them is needed. Emphasis on the multiple realizability of ethical interventions facilitates a shift away from a narrowly reactive ‘problem-solving’ orientation, which treats the disruptive symptoms of emerging AI technologies in diverse domains, towards a more general ‘problem-finding’ orientation to these challenges [144]. Such problem-finding approaches to AI include strategies that do not only study how or where AI systems might create problems for existing law or ethics, to be ‘solved’ through law or ethics, but instead take stock of the ways in which AI technologies may also shape or disrupt the processes, instruments, assumptions, and even goals of existing regulatory or ethical orders.

A second benefit of the triadic approach is particularly relevant to technology lawyers: the triadic model makes regulatory approaches more tailored and resilient. Current regulatory approaches to AI tend to be either technology-centric (focused on regulating ‘AI’)Footnote 14; application-centric (e.g., focused on drones, self-driving cars, or facial recognition); or law-centricFootnote 15 (e.g., focused on problems for specific doctrines such as liability or tax law) [105, 106].

While these approaches all have value, and must play a role in societal responses to the technology, they all have shortfalls as well. A technology-centric approach is problematic because AI as a technology is rather amorphous, difficult to define, and encompasses several sub-technologies whose ethical features differ considerably (e.g., machine learning vs. symbolic AI). The application-centric approach, too, is problematic, since AI applications are a moving target, and at times mix algorithmic sub-technologies across different domains. A law-centric approach has shortfalls because it is too siloed and segmented along pre-existing doctrinal lines: such a compartmentalized approach will struggle to carry out regulatory prioritization and triage (by focusing overmuch on legally ‘interesting’ puzzles or edge cases); moreover, it may often result in duplication of effort at best, and “ineffective, counterproductive, or even harmful rules and policy prescriptions” [64, p. 349] at worst, with regulatory fragmentation, incoherence, or conflict as a frequent outcome.

A triadic approach has value here, as it supports a more holistic perspective in technology law, one which helps shift away from debates over technological exceptionalism, in order to examine new technologies (such as AI) in conjunction with the broader dynamics of social change and value change in which they are implicated. This can ground regulatory frameworks that are more resilient and efficacious.

Third, the triadic model enables more effective and meaningful triage among many technological changes, helping to identify where these may be most disruptive, and where ethical and regulatory interventions are most urgently needed. The model is specifically applicable to the second-order disruptions instigated by emerging Socially Disruptive Technologies such as AI, which—in contradistinction to first-order disruptions—frequently reveal regulatory gaps or uncertainties and provoke value changes. Indeed, part of what makes these impacts disruptive is the uncertainties they provoke and the ethical re-orientation they require. As such, the qualification “socially disruptive” can serve as a useful shorthand for sociotechnical impacts which urgently require ethical and regulatory attention, and as a decision-heuristic in situations of ethical triage under uncertainty. To ascertain whether this qualification is warranted, it does not suffice to examine AI, value change and regulation in isolation; instead, the triad should be approached in conjunction.

Of course, this is just an initial sketch. Any model picks and chooses certain elements that it deems relevant to highlight, while neglecting others. In capturing the complex ecosystem of sociotechnical change, there are other interacting domains that could be highlighted and that might, potentially, be added to the model we have sketched. Some additional nuances that the triadic model might need to account for, and which could be the subject of fruitful future work, include: (1) the ways in which doctrinal and legal differences between national jurisdictions affect specific TechLaw (and therefore triadic) analyses, in ways that might not line up with broader cross-society value changes; and (2) potential cases that reveal ambiguities in the distinction between a technology’s (ethical or legal) impacts being first or second order.

At the same time, increasing model complexity also comes with a loss in practicability. Given the shared prescriptive orientation of the moral and legal domain, and their centrality to societal responses to technological disruption, we believe the technology triad we have sketched offers a good starting point to outline the broader ecosystem in which societal responses to Socially Disruptive Technologies can be advanced. Future work will further help to crystallize exactly what level of model complexity proves ideal for a workable and effective response.

6 Conclusion

The social impact of AI and other Socially Disruptive Technologies goes far beyond economic or industry changes, and includes potentially far-reaching changes to prevailing norms, values, and legal systems. This paper has argued that an analytic framework for understanding these changes, and for formulating an appropriate normative response to them, benefits from adopting a triadic model, which captures the interplay between technology, values, and regulation. We have outlined this triadic model and highlighted three particular strengths. First, to technology ethicists, the triadic model foregrounds the multiple realizability of ethical interventions, and facilitates a shift from a narrowly reactive stance in the face of emerging AI to a problem-solving and problem-finding orientation. Second, to technology lawyers, the triadic approach shifts away from debates over technological exceptionalism, and examines AI in conjunction with the broader dynamics of social change and value change in which it is implicated, grounding more tailored and resilient regulatory frameworks. Third, the triadic model enables triage among many technological changes, helping to identify where these may be most disruptive, and where ethical and regulatory interventions are most urgently needed. Applying this model facilitates an integrated and streamlined moral and legal response, which is urgently needed in the face of disruptive AI—and for the many other Socially Disruptive Technologies still waiting in the wings.