Abstract
Conceptual engineering is the design, evaluation and implementation of concepts. Despite its popularity, some have argued that the methodology is not worthwhile, because the implementation of new concepts is both inscrutable and beyond our control. In the recent literature we see different responses to this worry. Some have argued that it is for political reasons just as well that implementation is such a difficult task, while others have challenged the metasemantic and social assumptions that underlie this skepticism about implementation. In this paper, I argue that even if implementation is as difficult as critics maintain, there is at least one context in which conceptual engineering is extremely impactful, and demonstrably so: the design of new technology. Different conceptions of control, freedom, trust, etc. lead to different designs and implementations of systems that are built to embed those concepts. This means that if we want to design for control, freedom, trust, etc., we have to decide which conception we ought to use. When we determine what the appropriate conception of a concept is in a technological context and use this conception to operationalize a norm or value, we generate requirements which have real-world effects. This not only shows that conceptual engineering can be extremely impactful; the fact that it leads to different design requirements also means that we have a way to evaluate our conceptual choices and can use this feedback loop to improve upon our conceptual work. By illustrating the direct impacts of different conceptions on technology design, this paper underscores the practical applicability and value of conceptual engineering, demonstrating how it serves as a crucial bridge between abstract philosophical analysis and concrete technological innovation.
Conceptual engineering involves the design, evaluation and implementation of concepts. Some have claimed that implementation of new concepts is inscrutable and beyond our control, while others have suggested that implementation is practically unfeasible.Footnote 1 For example, Herman Cappelen (2018) and Max Deutsch (2020) argue that the implementation of new concepts is challenging due to the external factors determining meaning that are beyond our control and often epistemically inaccessible. Additionally, David Chalmers (2020) highlights the social obstacle of convincing others to adopt a new use of a word. Further complicating the implementation issue, Machery (2021) introduces a feasibility objection, positing that some concepts might be inherently difficult or impossible to implement accurately. Some researchers see these difficulties in implementation as advantageous (Kitsik 2022; Queloz and Bieber 2022), while others have challenged the metasemantic and social assumptions that underlie this skepticism about implementation (Koch 2021a, b).
In response to this debate, this paper proposes that even if the critiques presented by Cappelen, Chalmers, Deutsch, Machery and others are valid, there is a domain in which conceptual engineering is both feasible and impactful: the design of new technology. The design and implementation of systems is influenced by various concepts, such as control, freedom, and trust, leading to tangible, real-world effects. By this logic, when we select the appropriate conception of a concept for a given technological context (be it a specific definition of privacy for a social media application or a notion of control for a self-driving car), we enable a requirements engineering process that results in tangible real-world effects and is more than a purely academic exercise.
The central argument this paper advances is that conceptual engineering can have substantial practical impact through the real-world effects of concept implementation in technology. Moreover, such implementation also creates an opportunity for empirical testing and continuous improvement, establishing a feedback loop to refine our conceptual work. To illustrate this argument, this paper first explains the implementation challenge and surveys the existing responses to it. It then elaborates on the role of conceptual engineering in requirements engineering, a process of defining, documenting, and maintaining requirements in the engineering design process, using a practical example of a real-world implementation challenge. Finally, the paper proposes an empirical methodology for testing the appropriateness of our conceptual choices. This approach not only demonstrates the practical impact of conceptual engineering, but also presents a mechanism for ongoing concept evaluation and improvement. By illustrating the direct impacts of different conceptions on technology design, this paper underscores the practical applicability and value of conceptual engineering, demonstrating how it serves as a crucial bridge between abstract philosophical analysis and concrete technological innovation.
1 Conceptual Engineering
To illustrate the implementation of concepts in technological artifacts, an initial understanding of my perspective on conceptual engineering is necessary. Despite its perennial presence in philosophy, it is only recently that a systematic exploration of this methodology has gained momentum. The field is highly diverse and characterised by disagreement among philosophers on numerous significant issues. These issues include the scope of conceptual engineering (whether it targets only expressions (Cappelen 2018; Thomasson 2022) or also mental concepts (Eklund 2015; Haslanger 2000; Plunkett 2015)), and its focus on engineering intensions and extensions (Cappelen 2018), use patterns (Jorem 2021), or commitment and entitlement structures (Löhr 2021; Veluwenkamp et al. 2022), among others. These issues also prompt further questions about the nature of concepts and their contents. However, this paper will not engage in these debates.
Instead, I will assume that we are engineering the content of sub-sentential expressions. I will refer to any full-fledged way of rendering a sub-sentential expression’s content precise without altering its topic T as a conception of T. For instance, the topic of “friendship” is friendship, and Aristotle’s (1999) account of friendship is one conception of friendship.Footnote 2 The argument should, however, be largely applicable to other viable assumptions about the content of conceptual engineering.
A second issue, however, is crucial for my argument: which methodology to apply when we engage in conceptual engineering. The importance of this question lies in its role in clarifying the normative element of conceptual engineering (Simion 2018). In this paper, I will explicitly employ a functionalist approach (Jorem 2022; Queloz 2020; Thomasson 2020; Veluwenkamp and Hoven 2023). On my preferred way of understanding this approach, the first step is to determine the normative function of a term. A concept X has the normative function to produce effect E if, in a range of relevant circumstances C, applications of X indeed produce E, and importantly, users of X have normative reasons to deploy X in thought and language because of its effectiveness in producing E in circumstances C (Köhler and Veluwenkamp 2024). This describes the purpose of a concept in facilitating actions or achieving results that carry normative significance, encompassing ethical, moral, or other normatively relevant considerations. Once we have determined this normative function, the next step is to select or engineer a conception that best fulfills this function. By focusing on the normative function, we can ensure that the concept’s use promotes values such as fairness, justice, or other moral or ethical goals. In essence, the normative function guides the direction of conceptual engineering efforts. It provides a clear objective for the design and assessment of concepts.Footnote 3
With this understanding of the functionalist approach to conceptual engineering in place, we can proceed to investigate why many consider the implementation of a specific concept to be a significant challenge.
2 The Implementation Challenge
As discussed, careful investigation can lead us to revise our current understanding of a concept. However, this exploration is worthwhile only if we can also implement these revised concepts. A key concern here is whether implementing new concepts is feasible. David Chalmers, for example, refers to implementation as “[t]he hardest part of conceptual engineering” (2020, p. 14). In the literature we can find different reasons which contribute to this difficulty.
First, we seem to lack the right kind of control over the meaning of our concepts. The seminal treatment of the control challenge for conceptual engineering is found in Cappelen (2018). Cappelen argues that if it is indeed the case that the meaning of words is determined by external factors, as semantic externalists claim, we have little or no control over these meaning-determining facts. The external factors can, among others, include causal histories (Kripke 1972; Putnam 1975) or social structures (Burge 1979). Yet, these facts would need to change for us to alter the meanings of our expressions. Cappelen maintains that we lack the ability to guide the revision of concepts due to our lack of control over the reference-fixing facts. Even if we were perfectly coordinated, our actions and intentions would have only an unpredictable effect on our semantic values.
The second reason that implementation is problematic is epistemic in nature. Semantic externalism suggests that the mechanisms determining reference are extremely complex, making it difficult to know exactly what needs to change in order to alter a term’s meaning. So, even if we could control the reference-fixing facts, it would be practically impossible to identify what these facts are (Cappelen 2018, p. 74).
In response to these kinds of issues, some have proposed to change the target of implementation from changing semantic meaning to changing speaker-meaning (Pinder 2020, 2021).Footnote 4 Even if it is impossible or unfeasible to change the semantic meaning of a term, we can purport to change what individual speakers mean (or speaker-mean) with a term. The reason that it is more feasible to change the speaker-meaning of terms is that this meaning is within our intentional control. However, as some have pointed out (Deutsch 2020, 2021), it is not obvious that this would be an interesting enterprise. Although one can choose to alter an expression’s use oneself (or in a small group), the effect of such a local change is relatively minor if it doesn’t change the term’s semantic meaning or the way people generally conceptualize the world (Deutsch 2021, p. 3670).
In addition to these arguments, Machery (2021, p. 19) brings forth the idea of “attractor concepts.” These are concepts that, when communicated within a population, tend to converge towards stable states over time, regardless of their initial form. The notion of attractor concepts suggests that some concepts have a kind of gravitational pull towards a particular interpretation or understanding, regardless of how they were originally engineered or intended.
When applying this idea to the practice of conceptual engineering, the presence of attractor concepts may further complicate the task of implementation. Even if conceptual engineers design a new concept, if this concept is close to an existing attractor concept, it might naturally evolve towards that attractor state in the minds of the individuals, rather than retaining the form originally intended by the conceptual engineers.
For instance, if a conceptual engineer tries to redefine the concept of “freedom” in a specific way, the historical, cultural, and linguistic ‘weight’ of this concept might act as an attractor, pulling the interpretations of individuals towards the traditional understanding of freedom, regardless of the intended new definition.
Machery’s (2021) notion of attractor concepts implies that the process of conceptual engineering must not only involve the design and implementation of new concepts, but also an assessment of the conceptual landscape to identify potential attractor concepts that could interfere with the successful implementation of the new concept.
Although I agree with skeptics that modifying semantic meaning poses a significant challenge and I acknowledge Machery’s concerns about the influence of attractor concepts, I propose an alternative. I contend that there is an important context where conceptual engineering can be demonstrably feasible, one that does not rely on changing semantic meaning or circumventing existing attractor concepts within our conceptual landscape.
In the literature on conceptual engineering, ‘implementation’ is typically understood as the process of putting a revised or newly developed conception into practical use. Implementation involves integrating a reengineered conception into everyday life, encouraging acceptance and uptake of this concept within the community. Typically, this understanding emphasizes the linguistic and cognitive aspects. It focuses on the change of meaning, use patterns, and the way people conceptualize the world with the new or revised concept. And while this context is undoubtedly crucial, it presents only part of the potential scope of conceptual engineering.
The reason for this is that there is a different way in which we can put a revised or newly developed concept into practical use. This broader understanding of conceptual engineering extends beyond the linguistic and cognitive into the technological. It adds a critical, often overlooked layer to conceptual engineering: not only is the introduction and acceptance of the concept within a linguistic community significant, but equally important is its embodiment and representation in the design, functionality, and user experience of a technological system.
In this broader view, the implementation of a concept encompasses its realization within the design of a technological tool or system. It includes the way this concept is woven into the very fabric of the system, influencing its operations and capabilities. It is about how the concept is translated into design parameters and how it shapes the functionality of the system.
In the following sections, I will present an alternative approach to the implementation challenge that taps into this potential. More specifically, I will illustrate how technology, by embodying and operationalizing reengineered concepts, can contribute to the practical realization of conceptual engineering. By doing so, it will also become clear that this different context provides us with innovative strategies to avoid the obstacles posed by the limited control over semantic meanings and the presence of attractor concepts. Thus, despite the complexities of the implementation challenge, I believe that a broader understanding of the domain of implementation opens up promising avenues for conceptual engineering that are well worth exploring.
3 Embedding Concepts in Technologies
As we have briefly discussed above, changes in the speaker-meaning of terms are, at least to some degree, within our intentional control. However, the main concern with these changes is their limited scope and impact. In this section, I want to draw attention to a context which is also partially within our intentional control, but where changes can be very impactful: the design of technological artifacts.Footnote 5 The way we design technologies is heavily influenced by the different concepts we use. In this paper, I assert that through the design process, we embed concepts into technologies. To elucidate this point, I will start by examining a more familiar practice: the incorporation of values into technology. I will explain the idea of value embodiment and its role in technology. Subsequently, I extend this discussion to encompass a broader spectrum of concepts.
Many philosophers have presented ways to facilitate designing technologies with moral values (e.g., Flanagan et al. 2008; Floridi and Sanders 2004; Klenk 2021). Although there are some authors who hold that technologies are inherently value-neutral (e.g., Pitt 2014), most philosophers of technology agree that technologies are in some sense value-laden (Miller, 2021).Footnote 6 The main debate in the literature on value embedding concerns the question in what way values are embedded in technology. Winner (1980), Flanagan et al. (2008), van de Poel and Kroes (2014) and van de Poel (2020) stress the role of intentional design for the embodiment of value. This idea can be captured using the following description:
Value Embedding: A technological artefact T embodies a value V if the designed properties of T have the potential to achieve or contribute to V due to the fact that T has been designed for V.Footnote 7Footnote 8
To see how this account works in practice, let us consider a relatively straightforward example: sea dikes. The value that we want to embody in a dike is safety; safety for people living in the hinterland. Centuries of experience in designing sea dikes have taught us that several material properties are important if we want to realize safety. For example, the shape of the dike, the dike slope angle, the inner and outer berm, and embankment materials all influence how much safety is realized. Many of the material properties required for safety are specified in the “14TCN 130–2002” standard, which describes technical guidelines and standards in sea dike design. Let us assume that these guidelines are largely correct. In that case, a sea dike embodies the value “safety” because the designed properties (codified in the “14TCN 130–2002” standard) have the potential to achieve or contribute to safety due to the fact that the sea dike has been designed for safety.
This account, therefore, offers a compelling perspective on how artifacts embody values. The relationship between T and V arises not coincidentally, but because T has been intentionally designed with V in mind. Thus, the embodiment of values in artifacts is a direct consequence of the design process aimed at achieving or supporting that value. This notion highlights the intentional aspect of design in embedding values into artifacts, suggesting that the values artifacts embody are a reflection of the objectives and considerations of their creators.
This framework generalizes beyond the embodiment of values. One of the core elements of the functionalist approach to conceptual engineering described in Sect. 1 is the idea that concepts have a normative function. According to this idea, a concept X has a normative function insofar as it reliably produces a certain outcome E in specific situations C, and there are good reasons for people to use X because it effectively leads to E in those situations. This perspective underscores that concepts serve not only as tools for categorization or description but are fundamentally involved in molding our interactions with the world around us.
For example, when we want to design a “democratic” social networking application, there are certain features and functionalities that need to be incorporated to embody the concept of democracy. This could include features that promote equal participation among users, mechanisms for transparent decision-making, and tools that enable collective moderation of content. The design of such a platform would be guided by the normative function of the concept of democracy, which aims to produce an environment where all participants have equal opportunity to contribute and influence the platform’s direction. In this situation, the designers have a normative reason to embed these democratic features because they lead to the desired outcome of a more equitable and participatory digital space.
By embedding a concept with a normative function, designers can intentionally shape technologies to reflect and realize specific goals. This process goes beyond mere functionality, it involves embedding aspects of chosen concepts into the technology, thus guiding how users interact with it and what experiences it fosters. In the case of the democratic social networking application, the incorporation of democratic principles into the design not only influences the platform's structure and policies but also encourages behaviors and interactions among its users that align with democratic ideals.
In the next section I will work out this idea in more detail, but for now, we can generalize the account offered by Van de Poel and Kroes to acknowledge this broader idea of concept embedding as follows:
Concept Embedding: An artifact T embodies a concept C if the designed properties reflect, facilitate, or instantiate aspects of C due to the fact that T has been designed for C.
This principle also makes clear that concept embedding is not a singular phenomenon but has a dual nature: it encompasses both the activity of embedding (which springs from the intentions of designers) and the resultant state of affairs (manifest in the designed properties of the technology). In the next section I will show how this account works in practice, and how concept embedding, understood in this way, allows conceptual engineering to have large real-world effects.
4 Embedding “Control”
In the previous section, we have seen what it means for a concept to be embedded in a technological artifact. This is relatively unproblematic if it is clear which conceptualization is best in the given context. However, this is not always the case, particularly when we design socially disruptive technologies—technologies that have deep, significant, ethically salient, and wide-ranging impacts and challenge our current concepts (Hopster 2021; Veluwenkamp and Hoven 2023). In such scenarios, we may need to engage in conceptual engineering to better align our concepts with the challenges at hand. Let us consider an example where the concept that is embedded in the artifact is in some sense deficient.
4.1 Embedding the Operative Conception of Control
One of the important concepts discussed in the context of self-driving cars is “control.” When self-driving cars are introduced, policymakers in many circumstances require that there is a human being “in the loop” who controls the car’s operation. There are usually two reasons why control is important in the context of autonomous systems. The first is that a human agent having control over a self-driving car limits the amount of risk we expose others to. The second is that control is related to appropriate responsibility attributions: according to many accounts of responsibility, ascribing responsibility to an agent is only apt if that agent had control over the outcome. Let us therefore say that the normative function of “control” in the context of autonomous systems is (1) to decrease the risk we expose others to and (2) to allow for appropriate responsibility attributions.
Operational control is in many contexts the operative conception of control (Haslanger 1995).Footnote 9 This conception is implicitly assumed in discussions regarding responsibility gaps (Matthias 2004; Sparrow 2007) and is popular in engineering and traffic psychology (Michon 1985). Operational control over an outcome entails an ability to causally influence the outcome. Let’s define it as follows:
Agent A is in operational control of outcome O if and only if A is (or has been) able to causally influence O.Footnote 10
To see how operational control has been further translated into designed properties, we can look at the work of Stern et al. (2018), who developed technologies to reduce the number of phantom traffic jams. A phantom traffic jam is a slowdown in traffic that occurs without a good reason, that is, a slowdown that is not caused by an accident, speed trap, reckless driving, etc. These traffic jams occur mainly when the road is busy and drivers therefore maintain little distance from each other. If someone then brakes, for example because the driver was distracted for a moment, the car behind must brake extra hard, as must the car behind it. This causes a ripple effect and can eventually lead to cars coming to a complete stop.
On a test track, Stern and his team managed to reproduce this phenomenon by having humans manually drive 21 cars in succession around a large traffic circle. Within just a few minutes, traffic began to condense and start-stop traffic waves began to appear. Their findings showed, however, that substituting one car with a vehicle equipped with adaptive cruise control drastically diminished the start-stop pattern. The human drivers in the remaining 20 cars adapted to the speed of the autonomous vehicle, resulting in a 98% reduction in braking instances and a 40% improvement in fuel efficiency. On our notion of concept embodiment, we can therefore say that the self-driving car embodies the concept fuel efficiency because the designed properties (in this case, the adaptive cruise control) reflect, facilitate, or instantiate aspects of fuel efficiency due to the fact that the car has been designed for fuel efficiency.
The problem with a simple setup where manual control is replaced by adaptive cruise control is that it does not realize operational control: there is no human agent who is able to influence the velocity of the car. To remedy this, Stern et al. (2018) made sure the human in the self-driving car was always able to switch the car to manual mode. So, on this setup, the self-driving car embodies the conception operational control because the designed properties (in this case, the switch) reflect, facilitate, or instantiate aspects of operational control due to the fact that the car has been designed for operational control.
The car that Stern et al. (2018) developed therefore embodies both operational control and fuel efficiency. This is a seemingly very promising result, because the approach already works when a relatively small number of cars is equipped with adaptive cruise control.
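To make the structure of this design concrete, the setup just described can be sketched in a few lines of code: a speed-smoothing loop standing in for the adaptive cruise control, plus a manual-override switch, the designed property that realizes operational control. The class, the smoothing gain, and the method names are my own illustrative assumptions, not Stern et al.'s actual controller.

```python
class SemiAutonomousCar:
    """Illustrative sketch (hypothetical, not Stern et al.'s controller):
    a smoothing speed loop with a manual-override switch."""

    def __init__(self, alpha=0.5):
        self.alpha = alpha    # smoothing gain (assumed value)
        self.speed = 0.0
        self.manual = False   # the override switch: the designed property
                              # that realizes operational control

    def engage_manual(self):
        # A human can always take over, i.e., causally influence the outcome.
        self.manual = True

    def step(self, target_speed, pedal_speed=None):
        if self.manual and pedal_speed is not None:
            # Driver sets the speed directly.
            self.speed = pedal_speed
        else:
            # The automated mode moves smoothly toward the target speed,
            # damping the hard-braking ripples behind phantom traffic jams.
            self.speed += self.alpha * (target_speed - self.speed)
        return self.speed
```

Because the driver can flip to manual mode at any moment, the artifact satisfies the definition of operational control given above, while the smoothing step is what damps the stop-and-go waves.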
4.2 Evaluating the Operative Conception of Control
Some have argued, however, that operational control is not the best conception of control in the context of autonomous systems (Himmelreich 2019; Mecacci and Santoni de Sio 2020; Santoni de Sio and Hoven 2018). The reason for this is that there are conceptions that fulfil the normative function of “control” better. Remember that the normative function of “control” is to decrease the risk we expose others to and to allow for appropriate responsibility attributions.
Human agents typically remain legally responsible for taking control from the adaptive cruise control when it fails, and it is desirable for these individuals to also bear moral responsibility. However, there are several documented reasons why people may not be as responsible as expected. Firstly, it is well documented that skills as an operator decline over time when supervising an autonomous system (Bainbridge 1982), posing a challenge when the adaptive cruise control faces complicated tasks, such as difficulty in tracking the leading vehicle (Son et al. 2006). Other identified issues with automation include loss of attention and overreliance, leading to failures in manual intervention when necessary (Parasuraman 1987).
In instances of attention loss and skill degradation, humans theoretically retain the ability to influence the vehicle’s operation; they therefore have operational control. However, due to human psychological and physiological limitations, they are often unable to exercise this ability effectively. Because these limitations are well known (or at least well documented), it would be unreasonable to hold the people behind the steering wheel responsible for the outcomes. In these situations, therefore, the conception “operational control” fails to facilitate proper attribution of responsibility. Additionally, these limitations increase the risk imposed on others by the self-driving car, thereby failing to mitigate risk, another aspect of the normative function of “control.” Consequently, when an autonomous system is designed for operational control, it does not satisfactorily fulfill the normative function of “control.”
4.3 Designing a New Conception of Control
For the reasons mentioned above, philosophers have designed a different conception of control in the context of autonomous systems: i.e., meaningful human control (Cavalcante Siebert et al. 2022; Mecacci and Santoni de Sio 2020; Santoni de Sio and Hoven 2018; Veluwenkamp 2022). These authors take meaningful human control to require two conditions: a tracking and a tracing condition. The tracking condition tells us that a socio-technical system should be able to respond to both the relevant normative reasons of the humans designing and deploying the system and the relevant facts in the environment in which the system operates. The second condition for meaningful human control is called “tracing”. The tracing condition requires that (1) at least one human agent is present in the system design history or use context, who (2) has the right cognitive and physical capacities to fit their role; and (3) is adequately aware of such controlling role and their own active and passive responsibility. It is worth mentioning that on this conception of meaningful human control, control is not only specified in relation to the technological artefact. The conception is rather to be understood to refer to a broader, sociotechnical system whose boundaries are context-dependent and, in the case of the semi-autonomous car, also includes the driver (Mumford 2006). Let us define this conception of control as follows:
Agent A has meaningful human control over system S if and only if (1) A’s reasons are being tracked by S and (2) S is designed such that the tracing condition applies to A.
So, to embody a system with this conception of control, our design efforts ought to focus on the socio-technical system as a whole, which includes both the technical artefact (the car) and the human driver.
To satisfy the tracing condition for meaningful human control (MHC), one must carefully consider the physical and mental capabilities of both the car and the driver. This includes recognizing that the system does not always function correctly and that the driver’s skills as an operator of the vehicle will decline as a result of over-reliance on automated systems. The automation should therefore be designed in a way that encourages driver attention and maintains a reasonable level of driving skill.
4.4 Embedding the New Conception of Control
One way to embody a system with MHC is presented in Koerten (2021). In this work, the author developed haptic accelerator and brake pedals in an attempt to design a car that embodies both fuel efficiency and meaningful human control. The idea behind haptic pedals is that the system computes an ideal speed for the vehicle, mirroring the approach that the adaptive cruise control would take. However, instead of entirely automating the vehicle’s speed, the haptic pedals employ force feedback that affects how much pressure is required to speed up or slow down the car. In scenarios where the car’s current speed is lower than the ideal speed, the force reduces the pressure necessary to accelerate. Conversely, when the car’s speed surpasses the ideal speed, more pressure is required to keep accelerating, nudging the driver to decelerate. The result of introducing this technology is that the car is more responsive to the normative reasons of the human driver (Veluwenkamp 2022). This is partly because, in the design of the system, the engineers made sure that the physical and mental capacities required for a successful interaction matched those of the average human driver.
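As a rough illustration of this force-shaping idea (the functional form, parameter names, and gains below are my own assumptions, not Koerten's actual control law), the pedal resistance could be computed as follows:

```python
def pedal_resistance(current_speed, ideal_speed, base=1.0, gain=0.2):
    """Hypothetical sketch of haptic-pedal force shaping.

    Below the ideal speed the accelerator becomes easier to press;
    above it, further acceleration meets more resistance.
    Units and gains are illustrative assumptions."""
    delta = current_speed - ideal_speed
    # Resistance felt on the accelerator: lower when the car is too slow,
    # higher when it is too fast, nudging the driver toward the ideal speed.
    accel_resistance = max(0.1, base + gain * delta)
    # Resistance felt on the brake: the mirror image.
    brake_resistance = max(0.1, base - gain * delta)
    return accel_resistance, brake_resistance
```

The point of such a design, as opposed to full automation, is that the driver stays physically in the loop: the system nudges rather than overrides, which is what keeps the tracking and tracing conditions of meaningful human control satisfiable.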
The development of these haptic controls is still in its preliminary phase, but these are first steps toward actually embodying meaningful human control in a partially automated car. Given these results, we can say that the car now embodies the conception meaningful human control: the designed properties (in this case, the haptic brake and accelerator pedals) reflect, facilitate, or instantiate aspects of meaningful human control precisely because the car has been designed for it.
We have now seen how developers and designers can implement different conceptions of control in a technological artefact. This way of implementing concepts is impactful, valuable and feasible. When technical engineers implement meaningful human control instead of operational control, this has real-world consequences. Moreover, if meaningful human control is indeed a better conception of control and is properly implemented in the technological artefact, then the artefact has the capacity to perform the normative function of “control” better.
Meaningful human control better fulfills the first normative function of control (risk reduction) by ensuring that the socio-technical system can adapt to the relevant normative reasons and environmental facts. Additionally, it better fulfills the second normative function (attribution of responsibility) because it ensures that human agents have a clear and active role in the control process. By designing systems that require human engagement and are sensitive to their skill levels, meaningful human control allows for appropriate responsibility attributions.
5 Broader Applications of the Approach
Is it possible to generalize the form of conceptual engineering exemplified here with the concept “control” to other concepts? This section aims to answer this question by clarifying the wider applications of the approach developed in this paper and by addressing one possible objection.
The paper’s approach to conceptual engineering can indeed be applied to a wide range of concepts intended to be incorporated into technologies. This includes not only concepts like “control,” but also socially and politically significant ones. Take, for instance, the concept of “woman.” Depending on the conception adopted, e.g., biological, gender-identity, or societal role, the manner in which technologies interact with, represent, and address women would differ considerably. Suppose we decide to embed a conception of womanhood defined by gender identity into software that handles user data. It could ensure that the technology recognizes and respects self-identified gender identities, impacting everything from personalized recommendations to representation in digital spaces.
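As a toy illustration of how such a conceptual choice surfaces in software, consider a user-profile model in which self-identified gender is the authoritative field. The field and function names are hypothetical, not drawn from any existing system.

```python
from dataclasses import dataclass

@dataclass
class UserProfile:
    # Under a gender-identity conception, the self-identified fields are
    # authoritative for how the system addresses and represents the user;
    # no gender is inferred from other data.
    name: str
    self_identified_gender: str
    pronouns: str

def display_name(user: UserProfile) -> str:
    # The system defers to self-identification when addressing the user.
    return f"{user.name} ({user.pronouns})"
```

Adopting a different conception of “woman” would change this data model itself, not merely its downstream use: a biological conception, for instance, would tie representation to fields the user does not control.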
Similarly, let us revisit the “democratic” social media application from Sect. 3. The specific conception of democracy we adopt will significantly influence the architectural and functional design of the platform. If we opt for a deliberative democracy model, this would lead us to prioritize features that promote informed discussion and consensus-building. Conversely, if we choose a direct democracy conception, our focus would be on mechanisms that allow users to express individual preferences directly. The chosen interpretation of democracy directly influences the design decisions and the platform's functionality.
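The way a conception generates design requirements can be made explicit in something as simple as a requirements table. The feature lists below are illustrative examples, not an exhaustive design.

```python
# Hypothetical mapping from a conception of "democracy" to platform requirements.
REQUIREMENTS = {
    "deliberative": [
        "structured discussion threads",
        "prompts to cite sources",
        "consensus indicators",
    ],
    "direct": [
        "one-member-one-vote polls",
        "user-submitted proposals",
        "binding referenda",
    ],
}

def requirements_for(conception: str) -> list[str]:
    """Choosing a conception fixes which features become design requirements."""
    return REQUIREMENTS[conception]
```

The table makes vivid that the conceptual choice comes first: the engineering backlog is a function of it.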
In the context of a home automation system, we can look at the concept “privacy.” Historically, privacy was understood as the “right to be left alone” (see e.g., Warren and Brandeis 1989). This traditional conception emphasized physical seclusion and the protection of personal spaces and communications from unwarranted intrusion. This conception of privacy requires us to ensure that all data collected is anonymized and encrypted. The introduction of social media networks and other information technologies has brought about new ways of collecting and storing personal information. This has not only made the traditional conception of privacy inadequate, but it also introduced complex ethical challenges related to data security and user consent.
As a response to problems created by these technological advancements, new conceptions of privacy have been engineered.Footnote 11 One of these new conceptions of privacy is “contextual integrity,” which refers to the ability to ensure that information flows appropriately according to contextual norms and expectations (Nissenbaum 2009). This idea focuses on making sure that data is shared and accessed in ways that match the social norms of the specific context, rather than completely restricting information flow. This involves using information only for its intended purpose, limiting access based on roles, and being clear about how the data is used.
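In software terms, contextual integrity lends itself to a rule-based check on information flows. The sketch below, with an invented norms table, is only meant to show the shape of such a check; Nissenbaum's framework is considerably richer.

```python
# Hypothetical norms: (context, sender, recipient, attribute) -> required
# transmission principle. A real table would be derived from empirical
# study of the norms governing each context.
CONTEXT_NORMS = {
    ("healthcare", "patient", "doctor", "medical_record"): "confidentiality",
    ("healthcare", "doctor", "insurer", "medical_record"): "with_patient_consent",
}

def flow_is_appropriate(context: str, sender: str, recipient: str,
                        attribute: str, principle: str) -> bool:
    """A flow is appropriate iff the context's norms license this
    sender-recipient-attribute combination under exactly the transmission
    principle that actually governs the flow."""
    required = CONTEXT_NORMS.get((context, sender, recipient, attribute))
    return required is not None and required == principle
```

Note the contrast with the traditional conception: nothing here forbids information flow as such; what is checked is whether the flow matches the norms of its context.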
Moreover, this approach to conceptual engineering is not limited to individual technological artifacts. It extends to other artificial artifacts, such as collective agents. Collective agents, such as companies, non-profit organizations and governments, can be seen as large-scale socio-technological artifacts. These entities are shaped by societal norms, rules and values. Viewing them as ‘artifacts’ allows us to see new ways for conceptual engineering. For example, reengineering and implementing the concept “corporate responsibility” within a company can lead to significant shifts in their business practices, with implications for employees, customers, the environment and society at large.
Additionally, the successful implementation of these redefined concepts within both technological and socio-technological artifacts can also indirectly influence semantic and speaker meanings. This is not a side effect, but a crucial part of the potential impact of conceptual engineering. When new conceptions are embedded in influential collective agents or technological artifacts, they can spread into the broader society and subtly change how individuals understand and use certain concepts in their language.
For example, if a company succeeds in embedding a more inclusive concept of "leadership" into its policies and practices, it can gradually change the way employees and other stakeholders see and talk about leadership, indirectly altering semantic and speaker meanings.
Therefore, the methodology of conceptual engineering I propose here is not restricted to the trivial or narrowly technical. Instead, it finds wide-ranging applicability, including socially and politically consequential concepts. This highlights the value and significance of conceptual engineering, showcasing its essential role in designing artifacts ethically.
Let me now address a potential worry. Philosophers involved in value-sensitive design, such as Ibo van de Poel and Kroes (2014) and Friedman and Hendry (2019), have emphasized the importance of the conceptualization phase, acknowledging the critical impact of embedding a particular understanding of a value within technology. This raises a question: what new insight is provided by framing this practice as “conceptual engineering”?
One partial answer is that referring to it as “conceptual engineering” explicitly underscores its normative dimension. The process of deciding which conception to adopt is not merely about identifying which understanding is currently employed in a specific context; rather, it involves discerning which conception is best suited for that context. The functionalist approach described in this paper aids in addressing this normative issue by offering a framework to determine the best response to this question.
Moreover, this process does not always involve choosing among pre-existing conceptions. At times, none of the current conceptions sufficiently fulfill the normative function of the concept. This was the case with “operational control,” where various philosophers found the existing conception of control lacking and thus engineered a new conception. A similar phenomenon occurred with the concept “privacy,” where scholars have engineered a new conception of privacy in response to technological developments. Therefore, describing the methodology as “conceptual engineering” makes explicit how to settle the normative question in choosing concepts and highlights that responsible design may involve creating new concepts, even if the philosophers proposing them didn’t regard themselves as conceptual engineers.
This brings us to an additional advantage of implementing concepts in technological artefacts: i.e., the ability to test and experiment with different conceptions.
6 Testing Conceptions
One of the phases often overlooked in conceptual engineering is the testing phase. While other engineering practices rightly prioritize testing their results, conceptual engineering often neglects to verify whether a new conception actually performs better than the old ones. This phase is crucial to ensure the efficacy of newly proposed conceptions in achieving their intended goals. Even if we understand the implementation of concepts in terms of semantic meaning, and set aside the feasibility challenges discussed above, testing poses significant moral concerns.
To see why this is problematic, let us consider a conceptual engineer who is interested in testing the conception Sally Haslanger has proposed for “woman” (Haslanger 2000). The motivation behind her proposal is that the new conception serves a specific function, that is, it will promote some moral and political goals (e.g., to highlight certain structural injustices). If we aim to experiment with this conception, we must ensure it is embraced by a small community and evaluate whether the intended moral and political goals are better met within this group. This would introduce the significant logistical challenge of establishing distinct language communities for each concept under examination, making it practically impossible to test these concepts. Moreover, it means that we would have to experiment with conceptual take-up on a community scale to perform an experiment in which societal and moral results are uncertain.
The argument is not that large-scale experiments with uncertain social and moral outcomes are always morally impermissible. After all, medical experiments can also have major moral and social consequences. However, there are usually quite stringent rules which determine under which conditions such experiments can be carried out. Securing ethical approval for an experiment that tests different conceptions of “woman” would be a considerable accomplishment. Moreover, this is not just the case for testing a conception of woman: even when we want to test different conceptions of “control,” such experiments are necessary.
So let us compare this with testing and experimenting with concepts that are implemented in technological artefacts. As we have seen above, we can implement different conceptions of control in different technological artefacts. Moreover, we have identified a normative function of “control” in the context of autonomous systems: i.e., to decrease the risk we expose others to and to allow for appropriate responsibility attributions. Researchers have the capability to create similar technological artefacts that embed either operational control or meaningful human control. Using this capability, they can design experiments to determine which artefact best realizes the normative function of “control”. We could, for example, design experiments in which people are able to experience the implementation of “control” themselves, or participate in other more embodied ways.
The first element of the function of “control” (risk reduction) can be tested more directly. In fact, Koerten (2021) performed some initial experiments to determine whether the amount of risk we expose others to is diminished. He attached both implementations of “control” to a simulator and performed two different experiments: one in which the car embodied operational control, and one in which it embodied meaningful human control. He drove the cars around a ring-road scenario with a radius of 42 m, adding in both experiments 20 additional cars driven by the same algorithm. What he found was that drivers of the car that embodied meaningful human control were able to react much more quickly in cases of system failure. In fact, in this setup he reported several accidents with cars that embodied operational control, while the car that embodied meaningful human control avoided any accidents. While further experimentation is required to definitively conclude which conception of control performs best in this specific context, it is clear that embedding concepts in technological artefacts offers innovative approaches for conducting these experiments.
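The logic of such an experiment can be illustrated with a toy simulation. All numbers below (reaction-time means, variances, the recovery window) are invented assumptions, not Koerten's data or setup.

```python
import random

def brake_reaction_time(engaged: bool, rng: random.Random) -> float:
    # Assumption: a driver kept engaged by haptic pedals (the MHC design)
    # reacts faster to a system failure than a driver passively supervising
    # full automation (the operational-control design).
    mean = 0.9 if engaged else 2.2  # seconds; invented values
    return max(0.2, rng.gauss(mean, 0.3))

def failure_recovery_miss_rate(engaged: bool, trials: int = 1000,
                               recovery_window: float = 1.5,
                               seed: int = 0) -> float:
    """Fraction of simulated system failures the driver fails to catch
    within the recovery window."""
    rng = random.Random(seed)
    missed = sum(brake_reaction_time(engaged, rng) > recovery_window
                 for _ in range(trials))
    return missed / trials
```

Under these assumptions the MHC-style design misses far fewer failures, which is the kind of comparative, conception-level result such experiments are designed to deliver.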
The opportunity to implement different conceptions in technologies offers not only a chance for empirical testing but also entails a significant responsibility. Implementing a defective conception can lead to substantial societal costs. Thus, the theoretical work in conceptual engineering is of critical importance. Consequently, given the profound impact that the implementation of concepts can have, we should be cautious and strive to avoid large-scale rollouts of technologies based on defective conceptions.
7 Conclusion
In this paper, I have argued that the potential of conceptual engineering is not restricted to semantic or speaker meanings, where implementation is taken to be impossible or infeasible. Rather, by broadening our view to include the implementation of concepts into artificial artefacts and large-scale socio-technological entities, we may bypass some of the challenges of implementation.
This broader approach does not require comprehensive knowledge of or control over reference-determining facts, and it is immune to the psychological resistance suggested by the attractor challenge. To illustrate this, I've discussed the example of implementing “operational control” and “meaningful human control” in self-driving cars.
Moreover, this approach provides the unique advantage of real-world testing of the conceptions, which forms a crucial feedback loop, enriching and guiding future conceptual work. Furthermore, I have suggested that this kind of conceptual engineering could extend its impact beyond the immediate implementation, indirectly altering semantic and speaker meanings through its embedding in influential technological and socio-technological systems.
Importantly, successful implementation of a new conception through technology can be achieved with the creation and deployment of even a single artifact. The measure of impactfulness, however, hinges on the conception's adoption and integration across the relevant field. If engineers at Tesla decide to embed a newly engineered conception into their designs, such a move could be profoundly impactful, while a less influential company introducing a new conception in a niche product may not see the same level of impact.
Despite the fact that the approach offered in this paper cannot solve the implementation problem in all domains, it offers promising new avenues for conceptual engineering. It encourages a move towards more effective, responsive, and ethical design, emphasizing the vast potential of conceptual engineering in shaping our socio-technical world.
Data Availability
Not applicable.
Notes
While there is an ongoing debate in philosophy around the precise definition of a 'topic,' this paper adopts the assumption that two different sub-sentential expressions can share the same topic. The understanding of ‘topic preservation,’ as identified by Strawson (1963), is seen by many philosophers as one of the primary challenges confronting conceptual engineering (Cappelen 2018; Haslanger 2000; Nado 2019; Pinder 2020; Prinzing 2018). However, the particularities of this debate are not central to the argument presented in this paper.
An anonymous reviewer pointed out that the normative function of a concept cannot play a guiding role in situations where the concept does not produce the effects at all that we would like the concept to produce. The reason that it cannot play this role is that the concept does not have the normative function if it doesn't produce the effects. While this observation is correct, it is important to note two things. First, in most relevant cases, such as the one discussed in this paper, the effects are realized, albeit suboptimally. In those cases, the concept does have the normative function. Secondly, in the cases where the effects are not realized at all, the identification of normative functions is still crucial. Recognizing that a concept does not fulfill its intended normative function highlights the need to engineer new concepts that can achieve these desired effects. Thus, the notion of normative function continues to play a crucial role in guiding the conceptual engineering process, even when our current concepts do not have this function. In such cases, we are not re-engineering a concept but engineering a new one.
The distinction between semantics and pragmatics (and so between meaning and speaker meaning) is controversial. However, for the sake of argument, I will assume that Pinder effectively delineates this boundary.
While our intentions significantly influence design, it is not entirely within our control. The outcomes of design efforts are not always predictable, and unintended side-effects are common. For instance, Charles Goodyear accidentally discovered vulcanized rubber, and Wilson Greatbatch invented the pacemaker while initially developing a heart recording device.
It is important to note that when philosophers of technology discuss the embedding of a value, they often do so metaphorically, not literally. Values are typically taken to be the qualities or characteristics that make something desirable, important, or worthwhile (Schroeder, 2021). When we say that a value is embedded in an artefact, or that technology is value-laden, we do not mean that the quality or characteristic is literally embedded in the artefact.
This formulation is from Klenk (2021).
There is a debate in the literature about whether it is desirable, in an account of value embedding, that values and concepts can only be embedded intentionally (Klenk 2021). If you think this is a problem, other definitions of value embedding are available, such as the one by Michael Klenk, which is formulated in terms of affordances. The argument I present in the paper would have to be reformulated slightly but would also work on this account.
In this paper, I describe "operational control" as a conception of control, while Haslanger (2012) discusses concepts in terms of their different versions. Therefore, she would characterize "operational control" as a version of "control." This leads her to use the term "operative concept" instead of "operative conception." Nonetheless, for the aims of this paper, these differences do not matter.
This conception of control is different from, and in some sense weaker than, what Fischer and Ravizza (1998) refer to as ‘guidance control.’ In the remainder of this section, I will demonstrate the implications of embedding this weaker conception of control and compare it with ‘meaningful human control,’ which is based on Fischer and Ravizza’s notion of ‘guidance control.’
It should be noted that privacy scholars typically do not describe their work as conceptual engineering. The advantages of nevertheless interpreting the work of these scholars as conceptual engineering are discussed in detail in (Veluwenkamp et al. 2022).
References
Aristotle. (1999). Nicomachean ethics (2nd ed). Hackett Pub. Co.
Bainbridge L (1982) Ironies of automation. IFAC Proceedings Volumes 15(6):129–135
Bartoli, I. (2022). Resistance to Change: The Implementation Challenge for Socially and Politically Significant Social Kind Concepts.
Burge T (1979) Individualism and the Mental. Midwest Studies in Philosophy 4:73–121
Cappelen, H. (2018). Fixing language: An essay on conceptual engineering. Oxford University Press.
Cavalcante Siebert, L., Lupetti, M. L., Aizenberg, E., Beckers, N., Zgonnikov, A., Veluwenkamp, H., Abbink, D., Giaccardi, E., Houben, G.-J., & Jonker, C. M. (2022). Meaningful human control: Actionable properties for AI system development. AI and Ethics, 1–15.
Chalmers DJ (2020) What is conceptual engineering and what should it be? Inquiry 1–18
Deutsch M (2020) Speaker’s reference, stipulation, and a dilemma for conceptual engineers. Philos Stud 177(12):3935–3957. https://doi.org/10.1007/s11098-020-01416-z
Deutsch M (2021) Still the same dilemma for conceptual engineers: Reply to Koch. Philos Stud 178(11):3659–3670
Eklund, M. (2015). Intuitions, conceptual engineering, and conceptual fixed points. In The Palgrave handbook of philosophical methods (pp. 363–385). Springer
Fischer JM, Ravizza M (1998) Responsibility and Control: A Theory of Moral Responsibility (Issue 2). Cambridge University Press
Flanagan M, Howe DC, Nissenbaum H (2008) Embodying values in technology: Theory and practice. Inf Technol Moral Philos 322:24
Floridi L, Sanders JW (2004) On the morality of artificial agents. Minds Mach 14(3):349–379
Friedman B, Hendry DG (2019) Value sensitive design: shaping technology with moral imagination. Mit Press
Haslanger S (1995) Ontology and social construction. Philos Topics 23(2):95–125
Haslanger S (2000) Gender and race: (What) are they? (What) do we want them to be? Noûs 34(1):31–55
Haslanger S (2012) Resisting reality: social construction and social critique. Oxford University Press
Himmelreich J (2019) Responsibility for Killer Robots. Ethical Theory Moral Pract 22(3):731–747
Hopster J (2021) What are socially disruptive technologies? Technol Soc 67:101750. https://doi.org/10.1016/j.techsoc.2021.101750
Isaac MG, Koch S, Nefdt R (2022) Conceptual engineering: A road map to practice. Philos Compass 17:e12879
Jorem S (2021) Conceptual engineering and the implementation problem. Inquiry 64(1–2):186–211. https://doi.org/10.1080/0020174X.2020.1809514
Jorem S (2022) The good, the bad and the insignificant—Assessing concept functions for conceptual engineering. Synthese 200(2):106. https://doi.org/10.1007/s11229-022-03548-7
Kitsik E (2022) Epistemic Paternalism via Conceptual Engineering. J Am Philos Asso 9(4):616–635. https://doi.org/10.1017/apa.2022.22
Klenk M (2021) How Do Technological Artefacts Embody Moral Values? Philos Technol 34(3):525–544. https://doi.org/10.1007/s13347-020-00401-y
Koch S (2021a) Engineering what? On concepts in conceptual engineering. Synthese 199(1–2):1955–1975. https://doi.org/10.1007/s11229-020-02868-w
Koch S (2021b) There is no dilemma for conceptual engineering: Reply to Max Deutsch. Philos Stud 178(7):2279–2291. https://doi.org/10.1007/s11098-020-01546-4
Koerten, K. (2021). Dissipating phantom traffic jams with haptic shared control for longitudinal vehicle motion. Master Thesis
Köhler S, Veluwenkamp H (2024) Conceptual Engineering: For What Matters. Mind 133(530):400–427
Kripke, S. A. (1972). Naming and necessity. In Semantics of natural language (pp. 253–355). Springer
Löhr G (2021) Commitment engineering: Conceptual engineering without representations. Synthese 199(5):13035–13052. https://doi.org/10.1007/s11229-021-03365-4
Matthias A (2004) The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics Inf Technol 6(3):175–183
Machery E (2021) A new challenge to conceptual engineering. Inquiry 1–24
Mecacci G, Santoni de Sio F (2020) Meaningful human control as reason-responsiveness: The case of dual-mode vehicles. Ethics Inf Technol 22(2):103–115
Michon JA (1985) Human behavior and traffic safety. Springer-Verlag, US
Miller B (2021) Is technology value-neutral? Sci Technol Hum Val 46(1):53–80
Mumford E (2006) The story of socio-technical design: Reflections on its successes, failures and potential. Inf Syst J 16(4):317–342
Nado J (2019) Conceptual engineering, truth, and efficacy. Synthese 198(Suppl 7):1507–1527
Nefdt RM (2021) Concepts and conceptual engineering: Answering Cappelen’s challenge. Inquiry 67(1):400–428
Nimtz C (2021) Engineering concepts by engineering social norms: Solving the implementation challenge. Inquiry 67(6):1716–1743
Nissenbaum, H. (2009). Privacy in context: Technology, policy, and the integrity of social life. In Privacy in Context. Stanford University Press
Parasuraman R (1987) Human-computer monitoring. Hum Factors 29(6):695–706
Pinder M (2021) Conceptual engineering, metasemantic externalism and speaker-meaning. Mind 130(517):141–163
Pinder, M. (2020). Conceptual engineering, speaker-meaning and philosophy. Inquiry, 1–15.
Pitt, J. C. (2014). “Guns Don’t Kill, People Kill”; Values in and/or Around Technologies. In The moral status of technical artefacts (pp. 89–101). Springer
Plunkett D (2015) Which concepts should we use?: Metalinguistic negotiations and the methodology of philosophy. Inquiry 58(7–8):828–874
Prinzing M (2018) The Revisionist’s Rubric: Conceptual Engineering and the Discontinuity Objection. Inquiry 61(8):854–880
Putnam H (1975) The Meaning of ‘Meaning.’ Mind Language Reality 2:215–271
Queloz M (2020) From Paradigm-Based Explanation to Pragmatic Genealogy. Mind 129(515):683–714
Queloz M, Bieber F (2022) Conceptual Engineering and the Politics of Implementation. Pac Philos Q 103(3):670–691. https://doi.org/10.1111/papq.12394
Santoni de Sio F, Van den Hoven J (2018) Meaningful human control over autonomous systems: A philosophical account. Front Robot AI 5:15
Simion M (2018) The ‘should’ in conceptual engineering. Inquiry 61(8):914–928
Sparrow R (2007) Killer robots. J Appl Philos 24(1):62–77
Son B, Kim T, Shin Y (2006) A solution for the dropout problem in adaptive cruise control range sensors. In: International conference on embedded and ubiquitous computing, Springer, pp 979–987
Stern RE, Cui S, DelleMonache ML, Bhadani R, Bunting M, Churchill M, Hamilton N, Haulcy R, Pohlmann H, Wu F, Piccoli B, Seibold B, Sprinkle J, Work DB (2018) Dissipation of stop-and-go waves via control of autonomous vehicles: Field experiments. Transp Res Part C Emerg 89:205–221. https://doi.org/10.1016/j.trc.2018.02.005
Strawson PF (1963) Carnap’s Views on Conceptual Systems versus Natural Languages in Analytic Philosophy. In: Schilpp PA (ed) The Philosophy of Rudolf Carnap. Open Court, La Salle, pp 503–518
Thomasson AL (2022) How should we think about linguistic function? Inquiry 67(3):840–871
Thomasson, A. L. (2020). Pragmatic Method for Normative Conceptual Work. In A. Burgess, H. Cappelen, & D. Plunkett (Eds.), Conceptual Engineering and Conceptual Ethics (p. 0). Oxford University Press. https://doi.org/10.1093/oso/9780198801856.003.0021
van de Poel I (2020) Embedding Values in Artificial Intelligence (AI) Systems. Minds Mach 30(3):385–409. https://doi.org/10.1007/s11023-020-09537-4
van de Poel, I., & Kroes, P. (2014). Can Technology Embody Values? In P. Kroes & P.-P. Verbeek (Eds.), The Moral Status of Technical Artefacts (Vol. 17, pp. 103–124). Springer Netherlands. https://doi.org/10.1007/978-94-007-7914-3_7
Veluwenkamp H (2022) Reasons for Meaningful Human Control. Ethics Inf Technol 24(4):51
Veluwenkamp H, van den Hoven J (2023) Design for values and conceptual engineering. Ethics Inf Technol 25(1):2. https://doi.org/10.1007/s10676-022-09675-6
Veluwenkamp H, Capasso M, Maas J, Marin L (2022) Technology as Driver for Morally Motivated Conceptual Engineering. Philos Technol 35(3):71. https://doi.org/10.1007/s13347-022-00565-9
Warren S, Brandeis L (1989) The right to privacy. In: Killing the messenger: 100 years of media criticism. Columbia University Press, pp 1–21
Winner L (1980) Do Artifacts Have Politics? Daedalus 109(1):121–136
Acknowledgements
I would like to thank the audiences of the ESDiT 2022 conference, the SPT 2023 in Tokyo and the Groningen Grundlegung Colloquium for their invaluable feedback on an earlier draft of this paper. Special thanks to Sally Haslanger, Steffen Koch, and Guido Löhr. Additionally, I extend my gratitude to two referees from Ethical Theory and Moral Practice for their charitable, detailed, engaged, and constructive comments.
Funding
No funding received.
Author information
Authors and Affiliations
Contributions
Not applicable, this is a single authored paper.
Corresponding author
Ethics declarations
Ethical Approval
Not applicable.
Informed Consent
Not applicable.
Statement Regarding Research Involving Human Participants and/or Animals
Not applicable.
Competing Interests
The author has no relevant financial or non-financial interests to disclose.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Veluwenkamp, H. Impactful Conceptual Engineering: Designing Technological Artefacts Ethically. Ethic Theory Moral Prac (2024). https://doi.org/10.1007/s10677-024-10459-8