Conceptual engineering involves the design, evaluation and implementation of concepts. Some have claimed that the implementation of new concepts is inscrutable and beyond our control, while others have suggested that implementation is practically infeasible.Footnote 1 For example, Herman Cappelen (2018) and Max Deutsch (2020) argue that implementing new concepts is challenging because the external factors that determine meaning are beyond our control and often epistemically inaccessible. Additionally, David Chalmers (2020) highlights the social obstacle of convincing others to adopt a new use of a word. Further complicating matters, Machery (2021) introduces a feasibility objection, positing that some concepts might be inherently difficult or impossible to implement accurately. Some researchers see these difficulties in implementation as advantageous (Kitsik 2022; Queloz and Bieber 2022), while others have challenged the metasemantic and social assumptions that underlie this skepticism about implementation (Koch 2021a, b).

In response to this debate, this paper proposes that even if the critiques presented by Cappelen, Chalmers, Deutsch, Machery and others are valid, there is a domain in which conceptual engineering is both feasible and impactful: the design of new technology. The design and implementation of systems is influenced by various concepts, such as control, freedom, and trust, leading to tangible, real-world effects. By this logic, when we select the appropriate conception of a concept for a given technological context (be it a specific definition of privacy for a social media application or a notion of control for a self-driving car), we enable a requirements engineering process that results in tangible real-world effects and is more than a purely academic exercise.

The central argument this paper advances is that conceptual engineering can have substantial practical impact through the real-world effects of concept implementation in technology. Moreover, such implementation creates an opportunity for empirical testing and continuous improvement, establishing a feedback loop to refine our conceptual work. To illustrate this argument, this paper first explains the implementation challenge and surveys the existing responses to it. It then elaborates on the role of conceptual engineering in requirements engineering, the process of defining, documenting, and maintaining requirements in the engineering design process, using a practical example of a real-world implementation challenge. Finally, the paper proposes an empirical methodology for testing the appropriateness of our conceptual choices. This approach not only demonstrates the practical impact of conceptual engineering, but also presents a mechanism for ongoing concept evaluation and improvement. By illustrating the direct impacts of different conceptions on technology design, this paper underscores the practical applicability and value of conceptual engineering, demonstrating how it serves as a crucial bridge between abstract philosophical analysis and concrete technological innovation.

1 Conceptual Engineering

To illustrate the implementation of concepts in technological artifacts, an initial understanding of my perspective on conceptual engineering is necessary. Despite its perennial presence in philosophy, it is only in recent times that a systematic exploration of this methodology has gained momentum. The field is highly diverse and characterised by disagreement among philosophers on numerous significant issues. These issues include the scope of conceptual engineering (whether it targets only linguistic expressions (Cappelen 2018; Thomasson 2022) or also mental concepts (Eklund 2015; Haslanger 2000; Plunkett 2015)) and whether it focuses on engineering intensions and extensions (Cappelen 2018), use patterns (Jorem 2021), or commitment and entitlement structures (Löhr 2021; Veluwenkamp et al. 2022), among others. These issues also prompt further questions about the nature of concepts and their contents. However, this paper will not engage in these debates.

Instead, I will assume that we are engineering the content of sub-sentential expressions. I will refer to any full-fledged way of rendering a sub-sentential expression’s content precise without altering its topic T as a conception of T. For instance, the topic of “friendship” is friendship, and Aristotle’s (1999) account of friendship is one conception of friendship.Footnote 2 The argument should, however, be largely applicable to other viable assumptions about the content of conceptual engineering.

A second issue, however, is crucial for my argument: which methodology to apply when we engage in conceptual engineering. The importance of this question lies in its role in clarifying the normative element of conceptual engineering (Simion 2018). In this paper, I will explicitly employ a functionalist approach (Jorem 2022; Queloz 2020; Thomasson 2020; Veluwenkamp and Hoven 2023). On my preferred way of understanding this approach, the first step is to determine the normative function of a term. A concept X has the normative function to produce effect E if, in a range of relevant circumstances C, applications of X indeed produce E, and importantly, users of X have normative reasons to deploy X in thought and language because of its effectiveness in producing E in circumstances C (Köhler and Veluwenkamp 2024). This describes the purpose of a concept in facilitating actions or achieving results that carry normative significance, encompassing ethical, moral, or other normatively relevant considerations. Once we have determined this normative function, the next step is to select or engineer a conception that best fulfills this function. By focusing on the normative function, we can ensure that the concept’s use promotes values such as fairness, justice, or other moral or ethical goals. In essence, the normative function guides the direction of conceptual engineering efforts. It provides a clear objective for the design and assessment of concepts.Footnote 3
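To fix ideas, this condition can be displayed compactly as follows. This is a rough schematic in my own notation (“Apply”, “Reason”, and the quantification over circumstances are simplifications), not a formalization the cited authors themselves offer:

```latex
\[
\mathrm{NF}(X, E, C) \iff
\underbrace{\forall c \in C : \operatorname{Apply}(X, c) \leadsto E}_{\text{(i) deploying } X \text{ in } C \text{ reliably produces } E}
\;\wedge\;
\underbrace{\operatorname{Reason}\big(\text{users of } X,\ \text{deploy } X\big)\ \text{in virtue of (i)}}_{\text{(ii) normative grounding}}
\]
```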

With this understanding of the functionalist approach to conceptual engineering in place, we can proceed to investigate why many consider the implementation of a specific concept to be a significant challenge.

2 The Implementation Challenge

As discussed, careful investigation can lead us to revise our current understanding of a concept. However, this exploration is worthwhile only if we can also implement these revised concepts. A key concern here is whether implementing new concepts is feasible. David Chalmers, for example, refers to implementation as “[t]he hardest part of conceptual engineering” (2020, p. 14). In the literature, we can find several reasons that contribute to this difficulty.

First, we seem to lack the right kind of control over the meaning of our concepts. The seminal treatment of the control challenge for conceptual engineering is found in Cappelen (2018). Cappelen argues that if it is indeed the case that the meaning of words is determined by external factors, as semantic externalists claim, we have little or no control over these meaning-determining facts. These external factors can include, among others, causal histories (Kripke 1972; Putnam 1975) or social structures (Burge 1979). Yet, these facts would need to change for us to alter the meanings of our expressions. Cappelen maintains that we lack the ability to guide the revision of concepts due to our lack of control over the reference-fixing facts. Even if we were perfectly coordinated, our actions and intentions would have only an unpredictable effect on our semantic values.

The second reason that implementation is problematic is epistemic in nature. Semantic externalism suggests that the mechanisms determining reference are extremely complex, making it difficult to know exactly what needs to change in order to alter a term’s meaning. So, even if we could control the reference-fixing facts, it would be practically impossible to identify what these facts are (Cappelen 2018, p. 74).

In response to these kinds of issues, some have proposed changing the target of implementation from semantic meaning to speaker meaning (Pinder 2020, 2021).Footnote 4 Even if it is impossible or infeasible to change the semantic meaning of a term, we can aim to change what individual speakers mean (or speaker-mean) with a term. The reason that it is more feasible to change the speaker-meaning of terms is that this meaning is within our intentional control. However, as some have pointed out (Deutsch 2020, 2021), it is not obvious that this would be an interesting enterprise. Although one can choose to alter an expression’s use oneself (or in a small group), the effect of such a local change is relatively minor if it does not change the term’s semantic meaning or the way people generally conceptualize the world (Deutsch 2021, p. 3670).

In addition to these arguments, Machery (2021, p. 19) brings forth the idea of “attractor concepts.” These are concepts that, when communicated within a population, tend to converge towards stable states over time, regardless of their initial form. The notion of attractor concepts suggests that some concepts have a kind of gravitational pull towards a particular interpretation or understanding, regardless of how they were originally engineered or intended.

When applied to the practice of conceptual engineering, the presence of attractor concepts may further complicate the task of implementation. Even if conceptual engineers design a new concept, a concept that lies close to an existing attractor might naturally evolve towards that attractor state in the minds of individuals, rather than retaining the form originally intended by the conceptual engineers.

For instance, if a conceptual engineer tries to redefine the concept of “freedom” in a specific way, the historical, cultural, and linguistic ‘weight’ of this concept might act as an attractor, pulling the interpretations of individuals towards the traditional understanding of freedom, regardless of the intended new definition.

Machery’s (2021) notion of attractor concepts implies that the process of conceptual engineering must not only involve the design and implementation of new concepts, but also an assessment of the conceptual landscape to identify potential attractor concepts that could interfere with the successful implementation of the new concept.

Although I agree with skeptics that modifying semantic meaning poses a significant challenge and I acknowledge Machery’s concerns about the influence of attractor concepts, I propose an alternative. I contend that there is an important context where conceptual engineering can be demonstrably feasible, one that does not rely on changing semantic meaning or circumventing existing attractor concepts within our conceptual landscape.

In the literature on conceptual engineering, ‘implementation’ is typically understood as the process of putting a revised or newly developed conception into practical use. Implementation involves integrating a reengineered conception into everyday life, encouraging acceptance and uptake of this concept within the community. Typically, this understanding emphasizes the linguistic and cognitive aspects. It focuses on the change of meaning, use patterns, and the way people conceptualize the world with the new or revised concept. And while this context is undoubtedly crucial, it presents only part of the potential scope of conceptual engineering.

The reason for this is that there is a different way in which we can put a revised or newly developed concept into practical use. This broader understanding of conceptual engineering extends beyond the linguistic and cognitive into the technological. It adds a critical, often overlooked layer to conceptual engineering: not only is the introduction and acceptance of the concept within a linguistic community significant, but equally important is its embodiment and representation in the design, functionality, and user experience of a technological system.

In this broader view, the implementation of a concept encompasses its realization within the design of a technological tool or system. It includes the way this concept is woven into the very fabric of the system, influencing its operations and capabilities. It is about how the concept is translated into design parameters and how it shapes the functionality of the system.

In the following sections, I will present an alternative approach to the implementation challenge that taps into this potential. More specifically, I will illustrate how technology, by embodying and operationalizing reengineered concepts, can contribute to the practical realization of conceptual engineering. By doing so, it will also become clear that this different context provides us with innovative strategies to avoid the obstacles posed by the limited control over semantic meanings and the presence of attractor concepts. Thus, despite the complexities of the implementation challenge, I believe that a broader understanding of the domain of implementation opens up promising avenues for conceptual engineering that are well worth exploring.

3 Embedding Concepts in Technologies

As we have briefly discussed above, changes in the speaker-meaning of terms are, at least to some degree, within our intentional control. However, the main concern with these changes is their limited scope and impact. In this section, I want to draw attention to a context which is also partially within our intentional control, but where changes can be very impactful: the design of technological artifacts.Footnote 5 The way we design technologies is heavily influenced by the different concepts we use. In this paper, I assert that through the design process, we embed concepts into technologies. To elucidate this point, I'll start by examining a more familiar practice: the incorporation of values into technology. I will explain the idea of value embodiment and its role in technology. Subsequently, I extend this discussion to encompass a broader spectrum of concepts.

Many philosophers have presented ways to facilitate designing technologies with moral values (e.g., Flanagan et al. 2008; Floridi and Sanders 2004; Klenk 2021). Although there are some authors who hold that technologies are inherently value-neutral (e.g., Pitt 2014), most philosophers of technology agree that technologies are in some sense value-laden (Miller 2021).Footnote 6 The main debate in the literature on value embedding concerns the question in what way values are embedded in technology. Winner (1980), Flanagan et al. (2008), van de Poel and Kroes (2014) and van de Poel (2020) stress the role of intentional design for the embodiment of value. This idea can be captured using the following description:

Value Embedding: A technological artefact T embodies a value V if the designed properties of T have the potential to achieve or contribute to V due to the fact that T has been designed for V.Footnote 7Footnote 8

To see how this account works in practice, let us consider a relatively straightforward example: sea dikes. The value that we want to embody in a dike is safety; safety for people living in the hinterland. Centuries of experience in designing sea dikes have taught us that several material properties are important if we want to realize safety. For example, the shape of the dike, the dike slope angle, the inner and outer berm, and embankment materials all influence how much safety is realized. Many of the material properties required for safety are specified in the “14TCN 130–2002” standard, which describes technical guidelines and standards in sea dike design. Let us assume that these guidelines are largely correct. In that case, a sea dike embodies the value “safety” because the designed properties (codified in the “14TCN 130–2002” standard) have the potential to achieve or contribute to safety due to the fact that the sea dike has been designed for safety.

This account, therefore, offers a compelling perspective on how artifacts embody values. The relationship between T and V arises not coincidentally, but because T has been intentionally designed with V in mind. Thus, the embodiment of values in artifacts is a direct consequence of the design process aimed at achieving or supporting that value. This notion highlights the intentional aspect of design in embedding values into artifacts, suggesting that the values artifacts embody are a reflection of the objectives and considerations of their creators.

This framework generalizes beyond the embodiment of values. One of the core elements of the functionalist approach to conceptual engineering described in Sect. 1 is the idea that concepts have a normative function. According to this idea, a concept X is said to have a normative function insofar as it reliably produces a certain outcome E in specific situations C, and insofar as there are good reasons for people to use X because it effectively leads to E in those situations. This perspective underscores that concepts serve not only as tools for categorization or description but are fundamentally involved in molding our interactions with the world around us.

For example, when we want to design a “democratic” social networking application, there are certain features and functionalities that need to be incorporated to embody the concept of democracy. This could include features that promote equal participation among users, mechanisms for transparent decision-making, and tools that enable collective moderation of content. The design of such a platform would be guided by the normative function of the concept of democracy, which aims to produce an environment where all participants have equal opportunity to contribute and influence the platform's direction. In this situation, the designers have a normative reason to embed these democratic features because they lead to the desired outcome of a more equitable and participatory digital space.

By embedding a concept with a normative function, designers can intentionally shape technologies to reflect and realize specific goals. This process goes beyond mere functionality; it involves embedding aspects of chosen concepts into the technology, thus guiding how users interact with it and what experiences it fosters. In the case of the democratic social networking application, the incorporation of democratic principles into the design not only influences the platform's structure and policies but also encourages behaviors and interactions among its users that align with democratic ideals.

In the next section I will work out this idea in more detail, but for now, we can generalize the account offered by van de Poel and Kroes to acknowledge this broader idea of concept embedding as follows:

Concept Embedding: An artifact T embodies a concept C if the designed properties reflect, facilitate, or instantiate aspects of C due to the fact that T has been designed for C.

This principle also makes clear that concept embedding is not a singular phenomenon but has a dual nature: it encompasses both the activity of embedding (which springs from the intentions of designers) and the resultant state of affairs (manifest in the designed properties of the technology). The next section shows how this account works in practice, and how concept embedding, understood in this way, allows conceptual engineering to have large real-world effects.

4 Embedding “Control”

In the previous section, we have seen what it means for a concept to be embedded in a technological artifact. This is relatively unproblematic if it's clear which conceptualization is best in the given context. However, this isn't always the case, particularly when we design socially disruptive technologies: technologies that have deep, significant, ethically salient, and wide-ranging impacts and that challenge our current concepts (Hopster 2021; Veluwenkamp and Hoven 2023). In such scenarios, we may need to engage in conceptual engineering to better align our concepts with the challenges at hand. Let us consider an example where the concept that is embedded in the artifact is in some sense deficient.

4.1 Embedding the Operative Conception of Control

One of the important concepts discussed in the context of self-driving cars is “control.” When self-driving cars are introduced, policymakers in many circumstances require that there is a human being "in the loop" who controls the car’s operation. There are usually two reasons why control is important in the context of autonomous systems. The first reason is that a human agent having control over a self-driving car limits the amount of risk we expose others to. Secondly, control is related to appropriate responsibility attributions. According to many accounts of responsibility, ascribing responsibility to an agent is only apt if that agent had control over the outcome. Let us therefore say that the normative function of “control” in the context of autonomous systems is (1) to decrease the risk we expose others to and (2) to allow for appropriate responsibility attributions.

Operational control is in many contexts the operative conception of control (Haslanger 1995).Footnote 9 This conception is implicitly assumed in discussions regarding responsibility gaps (Matthias 2004; Sparrow 2007) and is popular in engineering and traffic psychology (Michon 1985). Operational control over an outcome entails an ability to causally influence the outcome. Let’s define it as follows:

Agent A is in operational control of outcome O if and only if A is (or has been) able to causally influence O.Footnote 10

To see how operational control has been translated into designed properties, we can look at the work of Stern et al. (2018). In their work on self-driving cars, they have developed technologies to reduce the number of phantom traffic jams. A phantom traffic jam is a slowdown in traffic that occurs without a good reason; that is, a slowdown that is not caused by an accident, speed trap, reckless driving, etc. These traffic jams occur mainly when the road is busy and drivers therefore maintain little distance from each other. If someone then brakes, for example because the driver was distracted for a moment, the car behind must brake extra hard, as must the car behind it. This causes a ripple effect and can eventually lead to cars coming to a complete stop.

On a test track, Stern and his team managed to reproduce this phenomenon by having humans manually drive 21 cars in succession around a large traffic circle. Within just a few minutes, traffic began to condense and start-stop traffic waves began to appear. Their findings showed, however, that substituting one car with a vehicle equipped with adaptive cruise control drastically diminished the start-stop pattern. The human drivers in the remaining 20 cars adapted to the speed of the autonomous vehicle, resulting in a 98% reduction in braking instances and a 40% improvement in fuel efficiency. On our notion of concept embodiment, we can therefore say that the self-driving car embodies the concept “fuel efficiency” because the designed properties (in this case, the adaptive cruise control) reflect, facilitate, or instantiate aspects of fuel efficiency due to the fact that the car has been designed for fuel efficiency.
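To make the mechanism concrete, the following is a minimal simulation sketch. It uses a textbook optimal-velocity car-following model on a ring road, not Stern et al.'s actual controller, and every parameter value is an illustrative assumption; how strongly the wave is damped depends on these choices:

```python
import numpy as np

# Minimal sketch: a textbook optimal-velocity car-following model on a ring
# road. This is NOT Stern et al.'s controller; all parameters are illustrative.

N, L, DT = 21, 260.0, 0.1      # cars, ring length (m), time step (s)
V0, HC = 8.0, 12.0             # free-flow speed (m/s), critical headway (m)

def desired_speed(gap):
    """Preferred speed given the headway to the car ahead."""
    return V0 * (np.tanh((gap - HC) / 4.0) + np.tanh(HC / 4.0)) / (1 + np.tanh(HC / 4.0))

def step(pos, vel, acc_index=None):
    gaps = (np.roll(pos, -1) - pos) % L           # headway to the leader
    accel = 1.2 * (desired_speed(gaps) - vel)     # human-like relaxation to desired speed
    if acc_index is not None:
        # The "adaptive cruise control" car holds a steady speed near the ring's
        # equilibrium instead of chasing its leader's fluctuations, braking only
        # as the gap demands; this is roughly the smoothing idea described above.
        i = acc_index
        accel[i] = 1.2 * (min(4.0, desired_speed(gaps[i])) - vel[i])
    vel = np.clip(vel + accel * DT, 0.0, None)
    return (pos + vel * DT) % L, vel

rng = np.random.default_rng(0)
pos = np.linspace(0.0, L, N, endpoint=False)
vel = 4.0 + 0.5 * rng.standard_normal(N)          # small perturbation seeds the wave
for _ in range(20000):
    pos, vel = step(pos, vel, acc_index=None)     # set acc_index=0 to insert the ACC car
print("speed spread (std, m/s):", round(float(vel.std()), 2))  # large spread = stop-and-go
```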

The problem with a simple setup where manual control is replaced by adaptive cruise control is that it does not realize operational control: there is no human agent who is able to influence the velocity of the car. To remedy this, Stern et al. (2018) made sure the human in the self-driving car was always able to switch the car to manual mode. So, on this setup, the self-driving car embodies the conception “operational control” because the designed properties (in this case, the switch) reflect, facilitate, or instantiate aspects of operational control due to the fact that the car has been designed for operational control.

The car that Stern et al. (2018) developed therefore embodies both operational control and fuel efficiency. This is a seemingly very promising result, because the approach already works when a relatively small number of cars are equipped with adaptive cruise control.

4.2 Evaluating the Operative Conception of Control

Some have argued, however, that operational control is not the best conception of control in the context of autonomous systems (Himmelreich 2019; Mecacci and Santoni de Sio 2020; Santoni de Sio and Hoven 2018). The reason for this is that there are conceptions that fulfill the normative function of “control” better. Remember that the normative function of “control” is to decrease the risk we expose others to and to allow for appropriate responsibility attributions.

Human agents typically remain legally responsible for taking over control from the adaptive cruise control when it fails, and it is desirable for these individuals to also bear moral responsibility. However, there are several documented reasons why people may not be as responsible as expected. Firstly, it is well documented that an operator's skills decline over time when supervising an autonomous system (Bainbridge 1982), posing a challenge when the adaptive cruise control faces complicated tasks, such as difficulty in tracking the leading vehicle (Son et al. 2006). Other identified issues with automation include loss of attention and overreliance, leading to failures in manual intervention when necessary (Parasuraman 1987).

In instances of attention loss and skill degradation, humans theoretically have the ability to influence the vehicle's operation; they therefore have operational control. However, due to human psychological and physiological limitations, they are often unable to exercise this ability. Because these limitations are well known (or at least well documented), it would be unreasonable to hold the people behind the steering wheel responsible for the outcomes. In these situations, therefore, the conception “operational control” fails to facilitate proper attribution of responsibility. Additionally, these limitations increase the risk imposed on others by the self-driving car, thereby failing to mitigate risk, the other aspect of the normative function of “control.” Consequently, when an autonomous system is designed for operational control, it does not satisfactorily fulfill the normative function of “control.”

4.3 Designing a New Conception of Control

For the reasons mentioned above, philosophers have designed a different conception of control in the context of autonomous systems: meaningful human control (MHC) (Cavalcante Siebert et al. 2022; Mecacci and Santoni de Sio 2020; Santoni de Sio and Hoven 2018; Veluwenkamp 2022). These authors take meaningful human control to require two conditions: a tracking and a tracing condition. The tracking condition tells us that a socio-technical system should be able to respond to both the relevant normative reasons of the humans designing and deploying the system and the relevant facts in the environment in which the system operates. The second condition for meaningful human control is called “tracing”. The tracing condition requires that (1) at least one human agent is present in the system's design history or use context, who (2) has the right cognitive and physical capacities to fit their role, and (3) is adequately aware of this controlling role and their own active and passive responsibility. It is worth mentioning that on this conception of meaningful human control, control is not specified solely in relation to the technological artefact. Rather, the conception refers to a broader socio-technical system whose boundaries are context-dependent and which, in the case of the semi-autonomous car, also includes the driver (Mumford 2006). Let us define this conception of control as follows:

Agent A has meaningful human control over system S if and only if (1) A’s reasons are being tracked by S and (2) S is designed such that the tracing condition applies to A.
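As a toy illustration of how these two conditions could inform a design checklist, consider the following sketch. The predicate names are my own schematic labels for the tracking condition and clauses (1) to (3) of the tracing condition, not part of the cited accounts, and a real assessment would of course not reduce to booleans:

```python
from dataclasses import dataclass

@dataclass
class HumanAgent:
    in_design_or_use_context: bool          # tracing (1): present in design history or use context
    capacities_fit_role: bool               # tracing (2): right cognitive and physical capacities
    aware_of_role_and_responsibility: bool  # tracing (3): aware of the controlling role

@dataclass
class SocioTechnicalSystem:
    tracks_relevant_reasons: bool           # tracking: responds to designers'/deployers' reasons
    tracks_relevant_facts: bool             # tracking: responds to facts in the environment

def meaningful_human_control(agent: HumanAgent, system: SocioTechnicalSystem) -> bool:
    """MHC holds only if both the tracking and the tracing condition are met."""
    tracking = system.tracks_relevant_reasons and system.tracks_relevant_facts
    tracing = (agent.in_design_or_use_context
               and agent.capacities_fit_role
               and agent.aware_of_role_and_responsibility)
    return tracking and tracing
```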

So, to embed this conception of control in a system, our design efforts ought to focus on the socio-technical system as a whole, which includes both the technical artefact (the car) and the human driver.

To satisfy the tracing condition of MHC, one must carefully consider the physical and mental capabilities of both the car and the driver. This includes considering that the system does not always function 100% correctly and that the driver's skills as an operator of the vehicle will decline as a result of over-reliance on automated systems. The automation should therefore be designed in such a way that it encourages driver attention and maintains a reasonable level of driving skill.

4.4 Embedding the New Conception of Control

One way to embed MHC in a system is presented in Koerten (2021). In this work, the author developed haptic accelerator and brake pedals in an attempt to design a car that embodies both fuel efficiency and meaningful human control. The idea behind haptic pedals is that the system computes an ideal speed for the vehicle, mirroring the approach that the adaptive cruise control would take. However, instead of entirely automating the vehicle's speed, the haptic pedals apply a force that affects how much pressure is required to speed up or slow down the car. In scenarios where the car's current speed is lower than the ideal speed, the force reduces the pressure necessary to accelerate. Conversely, when the car's speed surpasses the ideal speed, the pedal pushes back, so that more pressure is required to maintain speed and the driver is nudged to decelerate. The result of introducing this technology is that the car is more responsive to the normative reasons of the human driver (Veluwenkamp 2022). This is partly because, in the design of the system, the engineers made sure that the physical and mental capacities required for a successful interaction matched those of the average human driver.
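A schematic rendering of this pedal logic might look as follows. This is my own simplification of the idea described above, not Koerten's controller; the units and gain values are invented:

```python
BASE_RESISTANCE = 20.0   # baseline accelerator resistance (arbitrary units)
GAIN = 4.0               # extra resistance per m/s of speed error

def accelerator_resistance(current_speed: float, ideal_speed: float) -> float:
    """Pedal yields more easily below the ideal speed and pushes back above it."""
    error = current_speed - ideal_speed
    return max(BASE_RESISTANCE + GAIN * error, 0.0)

# Driving 2 m/s over the ideal speed stiffens the pedal ...
print(accelerator_resistance(30.0, 28.0))  # 28.0
# ... while driving 2 m/s under it makes accelerating easier.
print(accelerator_resistance(26.0, 28.0))  # 12.0
```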

The development of these haptic controls is still in its preliminary phase, but these are the first steps toward actually embedding meaningful human control in a partially automated car. So, given these results, we can say that the car now embodies the conception “meaningful human control” because the designed properties (in this case, the haptic brake and accelerator pedals) reflect, facilitate, or instantiate aspects of meaningful human control due to the fact that the car has been designed for meaningful human control.

We have now seen how developers and designers can implement different conceptions of control in a technological artefact. It is a way of implementing concepts that is impactful, valuable and feasible. When technical engineers implement meaningful human control instead of operational control, this has real-world consequences. Moreover, if meaningful human control is indeed a better conception of control and it is properly implemented in the technological artefact, then the artefact has the capacity to perform the normative function of “control” better.

Meaningful human control better fulfills the first normative function of control (risk reduction) by ensuring that the socio-technical system can adapt to the relevant normative reasons and environmental facts. Additionally, it better fulfills the second normative function (attribution of responsibility) because it ensures that human agents have a clear and active role in the control process. By designing systems that require human engagement and are sensitive to their skill levels, meaningful human control allows for appropriate responsibility attributions.

5 Broader Applications of the Approach

Is it possible to generalize this form of conceptual engineering, as exemplified here with the concept “control,” to other concepts? This section aims to answer this question by clarifying the wider applications of the approach developed in this paper and by addressing one possible objection.

The paper’s approach to conceptual engineering can indeed be applied to a wide range of concepts intended to be incorporated into technologies. This includes not only concepts like “control,” but also socially and politically significant ones. Take, for instance, the concept of “woman.” Depending on the conception adopted, e.g., biological, gender-identity, or societal role, the manner in which technologies interact with, represent, and address women would differ considerably. Suppose we decide to embed a conception of womanhood defined by gender identity into software that handles user data. It could ensure that the technology recognizes and respects self-identified gender identities, impacting everything from personalized recommendations to representation in digital spaces.

Similarly, let us revisit the “democratic” social media application from Sect. 3. The specific conception of democracy we adopt will significantly influence the architectural and functional design of the platform. If we opt for a deliberative democracy model, this would lead us to prioritize features that promote informed discussion and consensus-building. Conversely, if we choose a direct democracy conception, our focus would be on mechanisms that allow users to express individual preferences directly. The chosen interpretation of democracy directly influences the design decisions and the platform's functionality.
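To see how such a conceptual choice could surface in requirements engineering, consider this deliberately simple sketch; the feature names are hypothetical and not drawn from any existing platform:

```python
# Mapping conceptions of "democracy" to candidate platform requirements.
FEATURES_BY_CONCEPTION: dict[str, list[str]] = {
    "deliberative": [                  # prioritize informed discussion and consensus
        "structured_debate_threads",
        "consensus_based_decision_tools",
        "source_transparency_labels",
    ],
    "direct": [                        # prioritize direct expression of preferences
        "one_member_one_vote_polls",
        "user_initiated_referenda",
        "instant_preference_tallies",
    ],
}

def requirements_for(conception: str) -> list[str]:
    """The conceptual choice becomes a concrete requirements list."""
    return FEATURES_BY_CONCEPTION[conception]

print(requirements_for("deliberative"))
```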

In the context of a home automation system, we can look at the concept “privacy.” Historically, privacy was understood as the “right to be let alone” (see e.g., Warren and Brandeis 1989). This traditional conception emphasized physical seclusion and the protection of personal spaces and communications from unwarranted intrusion. Embedded in a home automation system, this conception of privacy requires us to ensure that all data collected is anonymized and encrypted. The introduction of social media networks and other information technologies has brought about new ways of collecting and storing personal information. This has not only made the traditional conception of privacy inadequate, but has also introduced complex ethical challenges related to data security and user consent.

As a response to problems created by these technological advancements, new conceptions of privacy have been engineered.Footnote 11 One of these new conceptions of privacy is “contextual integrity,” which refers to the ability to ensure that information flows appropriately according to contextual norms and expectations (Nissenbaum 2009). This idea focuses on making sure that data is shared and accessed in ways that match the social norms of the specific context, rather than completely restricting information flow. This involves using information only for its intended purpose, limiting access based on roles, and being clear about how the data is used.
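As a toy illustration of how contextual integrity could be translated into designed properties, an information flow can be checked against the entrenched norms of its context. The parameter structure loosely follows Nissenbaum's (sender, recipient, information type, transmission principle); the particular norms and names below are invented for a home-automation setting:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    context: str
    sender_role: str
    receiver_role: str
    info_type: str
    transmission_principle: str   # e.g. "with consent", "as required by law"

# Hypothetical entrenched norms for the "home" context.
NORMS = {
    Flow("home", "resident", "energy_provider", "usage_data", "with consent"),
    Flow("home", "resident", "physician", "health_data", "with consent"),
}

def respects_contextual_integrity(flow: Flow) -> bool:
    """A flow is appropriate iff it matches an entrenched norm of its context."""
    return flow in NORMS

# Sharing health data with an advertiser violates the home context's norms.
leak = Flow("home", "resident", "advertiser", "health_data", "for profit")
print(respects_contextual_integrity(leak))  # False
print(respects_contextual_integrity(
    Flow("home", "resident", "physician", "health_data", "with consent")))  # True
```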

Moreover, this approach to conceptual engineering is not limited to individual technological artifacts. It extends to other artificial artifacts, such as collective agents. Collective agents, such as companies, non-profit organizations and governments, can be seen as large-scale socio-technological artifacts. These entities are shaped by societal norms, rules and values. Viewing them as ‘artifacts’ allows us to see new avenues for conceptual engineering. For example, reengineering and implementing the concept “corporate responsibility” within a company can lead to significant shifts in its business practices, with implications for employees, customers, the environment and society at large.

Additionally, the successful implementation of these redefined concepts within both technological and socio-technological artifacts can also indirectly influence semantic and speaker meanings. This is not a side effect, but a crucial part of the potential impact of conceptual engineering. When new conceptions are embedded in influential collective agents or technological artifacts, they can spread into the broader society and subtly change how individuals understand and use certain concepts in their language.

For example, if a company succeeds in embedding a more inclusive concept of "leadership" into its policies and practices, it can gradually change the way employees and other stakeholders see and talk about leadership, indirectly altering semantic and speaker meanings.

Therefore, the methodology of conceptual engineering I propose here is not restricted to the trivial or narrowly technical. Instead, it finds wide-ranging applicability, including socially and politically consequential concepts. This highlights the value and significance of conceptual engineering, showcasing its essential role in designing artifacts ethically.

Let me now address a potential worry. Philosophers involved in value-sensitive design, such as van de Poel and Kroes (2014) and Friedman and Hendry (2019), have emphasized the importance of the conceptualization phase, acknowledging the critical impact of embedding a particular understanding of a value within technology. This raises a question: what new insight is provided by framing this practice as “conceptual engineering”?

One partial answer is that referring to it as “conceptual engineering” explicitly underscores its normative dimension. The process of deciding which conception to adopt is not merely about identifying which understanding is currently employed in a specific context; rather, it involves discerning which conception is best suited for that context. The functionalist approach described in this paper aids in addressing this normative issue by offering a framework to determine the best response to this question.

Moreover, this process does not always involve choosing among pre-existing conceptions. At times, none of the current conceptions sufficiently fulfills the normative function of the concept. This was the case with “control,” where various philosophers found the existing conception, operational control, lacking and thus engineered a new conception. A similar phenomenon occurred with the concept “privacy,” where scholars have engineered a new conception of privacy in response to technological developments. Therefore, describing the methodology as “conceptual engineering” makes explicit how to settle the normative question in choosing concepts and highlights that responsible design may involve creating new concepts, even if the philosophers proposing them did not regard themselves as conceptual engineers.

This brings us to an additional advantage of implementing concepts in technological artefacts: the ability to test and experiment with different conceptions.

6 Testing Conceptions

One of the phases often overlooked in conceptual engineering is the testing phase. While other engineering practices rightly prioritize testing their results, conceptual engineering often neglects verifying whether a new conception actually performs better than the old ones. This phase is crucial to ensure the efficacy of the newly proposed conceptions in achieving their intended goals. Yet even if we set aside the feasibility challenges discussed above, testing poses significant moral concerns when implementation is understood in terms of semantic meaning.

To see why this is problematic, let us consider a conceptual engineer who is interested in testing the conception Sally Haslanger has proposed for “woman” (Haslanger 2000). The motivation behind her proposal is that the new conception serves a specific function, that is, it will promote certain moral and political goals (e.g., highlighting certain structural injustices). If we aim to experiment with this conception, we must ensure it is embraced by a small community and evaluate whether the intended moral and political goals are better met within this group. This would introduce the significant logistical challenge of establishing distinct language communities for each concept under examination, making it practically impossible to test these concepts. Moreover, it means that we would have to experiment with conceptual uptake on a community scale, an experiment whose societal and moral results are uncertain.

The argument is not that large-scale experiments with uncertain social and moral outcomes are always morally impermissible. After all, medical experiments can also have major moral and social consequences. However, there are usually quite stringent rules determining the conditions under which such experiments can be carried out. Securing ethical approval for an experiment that tests different conceptions of “woman” would be a considerable accomplishment. Moreover, this is not just the case for testing a conception of woman. Even when we want to test different conceptions of “control,” such experiments are necessary.

So let us compare this with testing and experimenting with concepts that are implemented in technological artefacts. As we have seen above, we can implement different conceptions of control in different technological artefacts. Moreover, we have identified a normative function of “control” in the context of autonomous systems: i.e., to decrease the risk we expose others to and to allow for appropriate responsibility attributions. Researchers have the capability to create similar technological artefacts that embed either operational control or meaningful human control. Using this capability, they can design experiments to determine which artefact best realizes the normative function of “control”. We could, for example, design experiments in which people are able to experience the implementation of “control” themselves, or participate in other more embodied ways.

The second element of the function of “control” can be tested more directly. In fact, Koerten (2021) performed some initial experiments to determine whether the amount of risk we expose others to is diminished. He attached both implementations of “control” to a simulator and performed two different experiments: one where a car embodied operational control, and one where a car embodied meaningful human control. He drove the different cars around a ring-road scenario with a radius of 42 m. In both experiments he added 20 additional cars that were driven by the same algorithm. What he found was that the drivers of the car embedded with meaningful human control were able to react much more quickly in cases of system failure. In fact, in this setup he reported several accidents with cars that embodied operational control, while the car that embodied meaningful human control was able to avoid any accidents. While further experimentation is required to definitively conclude which conception of control performs best in this specific context, it is clear that embedding concepts in technological artefacts offers innovative approaches for conducting these experiments.

The opportunity to implement different conceptions in technologies offers not only a chance for empirical testing but also entails a significant responsibility. Implementing a defective conception can lead to substantial societal costs. Thus, the theoretical work in conceptual engineering is of critical importance. Consequently, given the profound impact that the implementation of concepts can have, we should be cautious and strive to avoid large-scale rollouts of technologies based on defective conceptions.

7 Conclusion

In this paper, I have argued that the potential of conceptual engineering is not restricted to semantic or speaker meanings, where implementation is taken to be impossible or infeasible. Rather, by broadening our view to include the implementation of concepts in artificial artefacts and large-scale socio-technological entities, we may bypass some of the challenges of implementation.

This broader approach does not require comprehensive knowledge of or control over reference-determining facts, and it is immune to the psychological pull suggested by the attractor challenge. To illustrate this, I've discussed the examples of implementing “operational control” and “meaningful human control” in self-driving cars.

Moreover, this approach provides the unique advantage of real-world testing of the conceptions, which forms a crucial feedback loop, enriching and guiding future conceptual work. Furthermore, I have suggested that this kind of conceptual engineering could extend its impact beyond the immediate implementation, indirectly altering semantic and speaker meanings through its embedding in influential technological and socio-technological systems.

Importantly, successful implementation of a new conception through technology can be achieved with the creation and deployment of even a single artifact. However, the measure of impactfulness hinges on the conception's adoption and integration across the relevant field. If engineers from Tesla decide to embed a newly engineered conception into their designs, the effect of such a move could be profoundly impactful, while on the other hand, a less influential company introducing a new conception in a niche product may not see the same level of impact.

Although the approach offered in this paper cannot solve the implementation problem in all domains, it offers promising new avenues for conceptual engineering. It encourages a move towards more effective, responsive, and ethical design, emphasizing the vast potential of conceptual engineering in shaping our socio-technical world.