In the most general terms, experimental philosophy of technology applies the methods of experimental philosophy to topics in philosophy of technology. Techxphi involves the combination of empirical methods, like the controlled experiments that are characteristic of psychology, neuroscience, and other social sciences, with philosophical and normative analysis.Footnote 5 A flourishing techxphi would mean a truly interdisciplinary effort between empirically minded philosophers of technology and cognitive and social scientists in order to examine deep questions about technology.Footnote 6
Before distinguishing between the negative and positive programs of experimental philosophy of technology, some objections that have been raised against experimental philosophy in the literature need to be discussed.Footnote 7 This is not the place to fully engage with these criticisms; nevertheless, to the extent that some objections to x-phi might extend to techxphi, I will briefly discuss (and dispel) three of what I take to be the most pressing issues.
First, it has been argued against experimental philosophy that intuitions do not play a role in philosophy (e.g., Cappelen, 2012; Deutsch, 2015; Williamson, 2007). It appears to me, however, to be much less fruitful to argue for or against the use of intuitions in philosophy across the board than to go about it in a more fine-grained way. From a pragmatic point of view, it makes better sense to examine specific arguments and theories, and then to examine the presence or absence of intuitions in these arguments and theories—and only then to criticize or defend the role of such intuitions on a case-by-case basis. Given that intuitions in philosophizing about technology have yet to be seriously studied, there is a prima facie reason to at least make the attempt to ascertain and better understand their function in the field.
Second, it has been argued against experimental philosophy that intuitions should not play a role in philosophy (see, e.g., Knobe & Nichols, 2017). One might grant that philosophers at times rely on intuitions, but nonetheless argue that this is unwarranted. I want to point out here that, if one subscribes to the view that intuitions should not play a role in philosophy, then it naturally follows that one would also try to counteract reliance upon such intuitions. However, in order to do so, one must first identify such intuitions to get the argument off the ground. How could one attempt to eliminate reliance on intuitions without first setting out to detect intuitions and the role that they in fact play in philosophical arguments? Yet this project, as should be clear, is not fundamentally opposed to that of experimental philosophy (Knobe & Nichols, 2017)—in fact, it falls squarely within the negative program that I describe in more detail later.
Third, it has been argued against experimental philosophy that it is not properly philosophy (e.g., Sorell, 2018). This is fundamentally a dispute about whether empirical work has a rightful place within the discipline of philosophy. I follow Hartmann et al. (2013) in rejecting the terms of the debate about whether philosophical questions are (or should be) impervious to empirical research, allowing instead that experimental methods can complement—rather than displace—more purely analytical methods. The goal should be to maintain the rigor of philosophical reflection and to make good use of empirical data, while at the same time avoiding empirical naivety (Hämäläinen, 2016). As I will show, research in the philosophy of technology stands to gain much from an explicit engagement with empirical research. The question of how to demarcate the field of philosophy, while interesting, is not of concern to the current project, which seeks to generate new knowledge about philosophical issues surrounding technology. Should a purist about philosophical method object that empirical methods have no place in philosophy, then this should not hamper the techxphi project. At worst, one might concede that experimental philosophy of technology is a hybrid discipline.
Two Programs
Experimental philosophy is commonly divided into different programs according to how these are positioned in relation to the traditional role of intuitions in analytic philosophy. The negative and positive programs are taken to be directly concerned with intuitions—the former with undermining them in a “negative” way, and the latter with making progress in philosophy by examining them in a “positive” way (Knobe & Nichols, 2017). A third program is often identified, which is not so much concerned with the role of intuitions in philosophy traditionally, as with the attempt to “make progress on questions that are directly about people’s thoughts and feelings themselves” (Knobe & Nichols, 2017, 5). Sytsma and Livengood (2016) also differentiate between intuitional and non-intuitional programs, according to whether research is or is not, respectively, about intuitions. With this additional distinction, the positive and negative programs might be called intuitional, and the third non-intuitional.
In the following sections, I will limit myself to specifying two research programs for experimental philosophy of technology: a negative and a positive one. I will go into more detail later, but in the broadest terms, the difference between them is as follows. The negative program uses experimental methods and findings to debunk (or vindicate) intuitions, judgments, and so on in philosophy and ethics of technology. The positive program, on the other hand, uses experimental means more generally to further knowledge and advance debates in philosophy and ethics of technology. The positive program is thus not necessarily tied to the project of debunking or vindication; it is more broadly concerned with making constructive use of experimental data to inform techno-philosophical reflection.
Making distinctions between programs is a metaphilosophical endeavor. For present purposes, I am primarily interested in creating a space for concrete research topics in experimental philosophy of technology. To distinguish between different research programs provides a useful early schema toward this end, even if the research to be conducted in techxphi will likely fall only loosely within different programs and be subject to considerable overlap, as is the case in x-phi more generally (Knobe & Nichols, 2017). I adapt the two programs from x-phi in order to form a cohesive vision of techxphi that also preserves some continuity. The two programs are not, however, directly translated from, nor wholly translatable to, current x-phi programs. Later work might relate the two programs in experimental philosophy of technology back to work in experimental philosophy. As it stands, however, I have chosen to make the respective divisions between negative and positive programs in order to maximize theoretical clarity and, perhaps more importantly, to create as practicable a guide—and as clear a call to action—as possible.
Before moving on, something must be said about intuitions. There is a substantial philosophical literature about the nature, prevalence, and role of intuitions (see, e.g., Pust 2019). I cannot do justice to this rich area here. If the nature and the role of philosophical intuitions can be contested within experimental philosophy, then this can also be done within experimental philosophy of technology. Differently put, the success of an experimental philosophy of technology does not hinge on the adoption of any particular philosophical view of intuitions. For my part, in this paper, where I speak of intuitions, I follow Devitt (2015) in taking a minimally demanding view of intuitions that brings them close to our ordinary ability to recognize intuitions. Intuitions are, in short, what we ordinarily take them to be.
The Negative Program
The negative program in x-phi centers on using experimental means to demonstrate that certain intuitions in philosophy are unreliable. Within the negative program of techxphi, this aim is extended in the attempt to debunk (or demonstrate as unreliable, unstable, biased, etc.) intuitions, judgments, and assumptions in philosophy of technology. The “negative” refers to the role of research in this program, which is to critically examine latent intuitions and assumptions that underlie arguments, concepts, and theories but may themselves escape explicit analysis. The operative metaphor here is that of “negative space”—by framing the space around the substantive arguments and theories in philosophy and ethics of technology, which is present but often taken for granted or hidden from analysis, the hope is that a fuller image will emerge. Of course, when one makes explicit the implicit intuitions and assumptions that are at work in theorizing about technology, and when one sets out to experimentally test these intuitions and assumptions, there are two possible outcomes: The intuitions or assumptions in question may be debunked or vindicated.Footnote 8 As such, the negative program of experimental philosophy of technology includes not only debunking but also vindication efforts. Generally stated, the negative program involves experimental investigation of the intuitions and assumptions that feed into techno-philosophical arguments, in order to question or fortify their value.
I will focus on a specific line of research in ethics of technology in order to illustrate what the negative program of experimental philosophy of technology entails and how it can advance current debates. Consider the influential argument by Andreas Matthias that increasingly autonomous machines threaten to create a so-called responsibility gap, which has initiated a line of responses regarding potential responsibility gaps in technology (e.g., Champagne & Tonkens, 2015; Nyholm, 2018a; Tigard, 2020). The original argument by Matthias (2004) may be summarized as follows:
[1] Traditionally, manufacturers/operators of machines are held morally/legally responsible for their operations.

[2] Highly autonomous machines create a novel situation where manufacturers/operators are in principle unable to predict the machine’s future behavior.

[3] One can only be held morally responsible/legally accountable for things one can control.

[4] Being unable to predict the machine’s future behavior means that manufacturers/operators do not have control over that behavior.

Therefore,

[5] Manufacturers/operators cannot be held morally responsible/liable for the machine’s operations.

Therefore,

[6] A socially undesirable responsibility gap emerges.
In the spirit of the negative program of experimental philosophy of technology, my claim is that there is an intuition at work in this argument—namely, that being unable to attribute responsibility to someone or something is problematic for individuals/society. Much of the argument seems to turn on this. This becomes clearer when we separate the theoretical from a more practical concern. If the possibility of the actions of highly autonomous machines being impervious to responsibility-attribution were purely theoretical, but never actual—that is, if we somehow knew that we would never actually encounter a situation where we would have difficulties attributing moral responsibility—then much of the problematic nature of a would-be responsibility gap appears to fall away. The possibility of encountering a concrete situation where a responsibility gap in fact occurs and faces us is what gives the argument much of its force.
Even if one is not entirely convinced by this interpretation, looking at the argument in this way opens up a line of experimental philosophical research that can help us answer at least some pertinent questions about responsibility gaps. One might, for instance, conduct an experiment in which participants are presented with descriptions of moral dilemmas in which machines cause harm and where the complexity of potential responsibility-attribution is systematically varied (i.e., in some cases, a responsible party will be easily identified; in others, with greater difficulty). How do participants deal with this complexity? Do people, in fact, have trouble attributing responsibility for the outcomes produced by highly autonomous machines?Footnote 9 Another way to approach this would be to study the moral judgments of those who are already working with complex, highly autonomous machines. Using an experimental method, engineers, manufacturers, and operators of highly autonomous machines could be presented with moral dilemmas surrounding machines, harm, and responsibility, in order to examine whether they—that is, people actually working closely with technology of this kind—have difficulties attributing responsibility in certain cases. If this research were to show that people (lay and professional) do not, as a matter of fact, encounter difficulties attributing responsibility even where highly complex systems are concerned, then this would give us reason to think that a responsibility gap is not as threatening a social issue as it has often been portrayed. Conversely, to the extent that the findings showed people to be unable to attribute moral responsibility in such cases, worries about a responsibility gap would turn out to be better grounded.
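The between-subjects design just described can be sketched in code. The following is a minimal illustration only: the condition names, placeholder vignette wording, forced-choice response options, and the toy responses are all hypothetical assumptions, not details from the studies or arguments discussed here.

```python
# Hypothetical sketch of a vignette study probing responsibility
# attribution for harm caused by highly autonomous machines.
import random

AGENTS = ["manufacturer", "operator", "machine", "no one"]
COMPLEXITY = ["low", "high"]  # how traceable the harmful outcome is


def build_vignettes():
    """One placeholder vignette per complexity condition."""
    return {
        c: (f"A highly autonomous machine causes harm; the causal chain "
            f"behind the outcome is of {c} complexity.")
        for c in COMPLEXITY
    }


def assign_condition(participant_id: int) -> str:
    """Between-subjects random assignment, seeded per participant
    so that assignment is reproducible."""
    rng = random.Random(participant_id)
    return rng.choice(COMPLEXITY)


def attribution_rates(responses):
    """responses: list of (complexity, chosen_agent) tuples.
    Returns, per complexity level, the share of participants who
    identified *some* responsible party (i.e., did not answer 'no one')."""
    rates = {}
    for c in COMPLEXITY:
        group = [agent for cond, agent in responses if cond == c]
        if group:
            rates[c] = sum(agent != "no one" for agent in group) / len(group)
    return rates


# Toy illustration with made-up responses (not real data):
demo = [("low", "operator"), ("low", "manufacturer"),
        ("high", "no one"), ("high", "manufacturer")]
print(attribution_rates(demo))  # → {'low': 1.0, 'high': 0.5}
```

A markedly lower attribution rate in the high-complexity condition would be the kind of evidence that speaks to whether a responsibility gap is actually encountered, whereas comparable rates across conditions would cast doubt on its practical urgency.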
The role of intuitions is also evident in a recent offshoot of the responsibility gap argument, namely the potential threat of a retribution gap (Danaher, 2016). Here, the argument is the following:
[1] Human beings are innate retributivists.

[2] As highly autonomous robots become ubiquitous, they will more frequently cause harm.

[3] When autonomous robots cause harm, people will seek targets for retributive blame.

[4] It is unlikely that either the robots or their makers will be eligible for retributive blame.

Therefore,

[5] People will seek retribution but fail to find appropriate targets.

Therefore,

[6] Increased robotization will lead to a retribution gap.
I have argued elsewhere (Kraaijeveld, 2020) that this argument essentially involves people’s retributive intuitions, and I have applied an evolutionary debunking argument to intuitions in these cases to argue that they are unjustified and thus ought not to be heeded. Although this approach at first glance may appear to fall within the negative program of experimental philosophy of technology, it was by means of an analytic argument that I attempted to undermine the relevant intuitions and, thereby, the retribution gap argument. I did not, in any case, make use of experimental findings. To qualify as experimental philosophy of technology, at least one empirical premise must be combined with at least one normative premise.Footnote 10 The argument by Earp et al. (2020) for undermining or vindicating moral judgments in bioethics may be adapted here, in order to offer an approach to debunking/vindicating moral intuitions and judgments in ethics of technology that can be readily and widely applied in the field. The Debunking/Vindication Argument (DVA) for experimental philosophy of technology may be stated as follows:
[1] Moral judgment M or moral intuition I is mainly influenced by factor/process F/P.

[2] F/P is an unreliable (reliable) or morally irrelevant (relevant) factor/process.

[3] So, moral judgment M or moral intuition I is unjustified (vindicated/not defeated).
It must be noted that the normative conclusion [3] is derived from an empirical premise [1] as well as a normative premise [2]. Combining empirical premises of this kind, based on experimental findings, with normative premises of this kind, supplied by philosophical reflection, is how the negative program of experimental philosophy of technology can advance knowledge about moral judgment within the ethics of technology.
To return to the retribution gap, there are at least two ways in which one might take a techxphi approach to the argument. On one hand, one could conduct an empirical study to examine whether people in fact respond with the kind of retributive intuitions and moral judgments of blame that make the retribution gap potentially problematic. It is worth carefully examining (i.e., empirically and systematically) both the nature and the scope of these intuitions and moral judgments in cases of harm caused by highly autonomous robots. If it turns out that people are not as prone to retributive intuitions and/or moral judgments of blame in these cases, then this would give us some reason to think that the social, legal, and moral ramifications of a retribution gap may not be as far-reaching as they have previously been considered.
On the other hand, a more sophisticated way of approaching the argument from the perspective of experimental philosophy of technology is to manipulate the relevant retributive intuitions and moral judgments in an experimental study, in order to test whether they are subject to unreliable and/or morally irrelevant factors or processes. Taking for granted that the intuitions and judgments that go into the retribution gap are as described, perhaps they are ultimately not to be heeded because they can be empirically demonstrated to be unreliable, unstable, and so on. If it turned out that retributive intuitions in cases of robot harm without clear candidates for moral blame were influenced by, say, unreliable factor F or morally irrelevant process P, then one could use the DVA as described above to argue that the resulting moral judgment or moral intuition is not justified. If it turns out, for instance, that when a self-driving car crashes, people’s moral judgments about blame are significantly influenced by some morally irrelevant feature like the country of origin of the car manufacturer, then this would give us reason to question their validity.
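The DVA-style test described in this paragraph can be given a schematic form in code. Everything below is a hedged sketch under assumptions: the manipulated factor (manufacturer country of origin), the 1–7 blame scale, the toy ratings, and the crude effect-size threshold are all illustrative stand-ins; a real study would rely on inferential statistics rather than a raw cutoff.

```python
# Schematic DVA check: does a putatively morally irrelevant factor
# shift blame judgments about an otherwise identical crash vignette?
from statistics import mean


def blame_gap(ratings_a, ratings_b):
    """Mean difference in blame ratings (e.g., 1-7 scale) between two
    conditions that differ only in the morally irrelevant factor."""
    return mean(ratings_a) - mean(ratings_b)


def dva_verdict(gap, threshold=0.5):
    """Crude decision rule in the spirit of the DVA: if judgments shift
    substantially with the irrelevant factor (|gap| > threshold), the
    judgment is a candidate for debunking; otherwise it is, by this
    test at least, not defeated."""
    if abs(gap) > threshold:
        return "debunking candidate"
    return "not defeated by this factor"


# Toy numbers (not real data): same crash vignette, only the
# manufacturer's country of origin varies between conditions.
domestic = [5, 6, 5, 6]
foreign = [6, 7, 6, 7]
gap = blame_gap(foreign, domestic)
print(gap, dva_verdict(gap))  # → 1.0 debunking candidate
```

The point of the sketch is the inferential structure, not the numbers: premise [1] of the DVA is supplied by the measured gap, premise [2] by the philosophical argument that the manipulated factor is morally irrelevant, and the verdict corresponds to conclusion [3].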
In one sense, the move made in the second approach is similar to the one that I made in applying an evolutionary debunking argument (Kraaijeveld, 2020), in that both try to undermine in some way the status of the relevant intuitions/judgments. The important difference, as should be clear, is that applying the DVA inherently involves an empirical premise. This is what sets it apart as experimental and thus what makes it count as experimental philosophy of technology.
I have been able to cover only a few recent debates in ethics of technology, but the negative program generally, and the DVA in particular, can be applied to a host of other arguments, intuitions, and judgments in philosophy and ethics of technology.
The Positive Program
The positive program, which in experimental philosophy centers on making progress directly on all sorts of philosophical issues, can for experimental philosophy of technology be viewed largely as a similar effort to experimentally investigate intuitions, assumptions, thoughts, emotions, concepts, and so on that are relevant to topics in philosophy and ethics of technology.Footnote 11 What are the intuitions that come into play when we think about novel and emerging technologies? How are our moral judgments about technology related to the underlying cognitive and psychological processes that give rise to them? Do certain ethical theories about technology rely on the existence of (previously unexamined) empirical matters of fact? The aim of the positive program is to make progress on these and other questions surrounding technology. The label “positive” indicates the constructive spirit of this program; it may thus be seen as wider in scope than the negative program, which focuses more narrowly on debunking and vindication attempts.
There are many potentially interesting lines of research that could be taken up under the auspices of the positive program of experimental philosophy of technology. I will specify two major tasks, which may be pursued separately but will ideally be integrated in a meaningful way. First, there is the descriptive task of probing people’s intuitions and judgments about arguments, theories, and dilemmas in philosophy and ethics of technology. This includes folk intuitions and judgments as well as those of experts. There are many areas in ethics and philosophy of technology where intuitions appear, at least on the surface, to play an important role. Whether we are considering the introduction and ramifications of novel technologies or products of technological advances like cultured or in vitro meat (Van der Weele & Tramper, 2014), accident-algorithms for unavoidable collisions of self-driving cars (Nyholm & Smids, 2016),Footnote 12 the possibility of robots being good colleagues (Nyholm & Smids, 2019) or being objects for sexual gratification (Danaher et al., 2017) or mutual love (Nyholm & Frank, 2017), or what the role of technology ought to be in a good society (Brey, 2018), intuitions about many of these questions will play an important role. To explicate them in a systematic way, through empirical investigation, will add to our knowledge about these questions and will assist in formulating new ones.
To give a more elaborate example, one area in which experimental philosophy of technology may be especially productive is in discussions surrounding technomoral change. The main claim of the technomoral change approach is that technology co-shapes many if not all aspects of society, including moral norms and values (Swierstra et al., 2009). The notion of technomoral change readily lends itself to the combination of empirical investigation and philosophical analysis that characterizes experimental philosophy of technology. For, in order to know precisely how technology co-shapes moral norms and values, one must know at least some of the empirical matters of fact—about the nature, scope, and direction of changes in people’s moral frameworks and about how technology acts as a driving force. Although such work remains rare, some good empirical-normative research has already been conducted in this area. Olya Kudina and Peter-Paul Verbeek (2018), for instance, have studied online discussions about the “explorer” version of Google Glass in order to make progress on the ethical variant of the Collingridge dilemma. Additionally, the proposed use of alternative technomoral scenarios to inform public deliberation about New and Emerging Science and Technology (NEST) for Technology Assessment (Swierstra et al., 2009) is a promising way of prompting moral intuitions and judgments on a wide range of relevant issues, which appears closely aligned with the goals of a descriptive experimental philosophy of technology program that uses vignettes and moral dilemmas to elicit intuitions. Where experimental philosophy of technology can make its unique contribution, however, is in not taking the relevant intuitions and judgments at face value, but in experimentally manipulating the scenarios so as to learn about the factors that are involved in producing these intuitions and judgments. For instance, are they subject to situational factors, order effects, or framing effects? Which psychological processes underlie different moral judgments in these scenarios? All of the methodological tools of experimental philosophy may be applied here to categorize and scrutinize the relevant intuitions and judgments.
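One of the manipulations mentioned above, testing for order effects, can be sketched concretely. The design below is an illustrative assumption: two technomoral scenarios presented in counterbalanced order, with a simple comparison of mean judgments for the same scenario depending on its position. The scenario labels, the 1–7 scale, and the toy ratings are hypothetical.

```python
# Hypothetical sketch of an order-effect check for technomoral
# scenario vignettes presented in counterbalanced order.
from statistics import mean


def counterbalanced_orders(scenarios):
    """Return both presentation orders for a two-scenario study, so
    that order can be analyzed as a factor rather than confound the
    results."""
    a, b = scenarios
    return [(a, b), (b, a)]


def order_effect(shown_first, shown_second):
    """Difference in mean judgment for the *same* scenario depending
    on whether it was presented first or second; a sizable nonzero
    gap suggests order sensitivity in the elicited judgments."""
    return mean(shown_first) - mean(shown_second)


# Toy judgments (1-7 scale) of one scenario in each position:
first_position = [4, 5, 4]
second_position = [6, 6, 5]
print(counterbalanced_orders(("scenario_A", "scenario_B")))
print(order_effect(first_position, second_position))
```

If judgments about the same scenario shift depending on where it appears in the sequence, that is descriptively interesting in its own right, and it may also feed premise [1] of the DVA, since presentation order is a plausibly morally irrelevant factor.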
As in experimental ethics more widely, the use of indirect experiments may give us insight into “the nature of some capacity or judgment: for example, whether certain types of moral dilemmas engage particular areas of the brain,” whereas direct experiments can “investigate whether a claim held (or denied) by philosophers is corroborated or falsified,” which “might mean investigating an intuition and whether it is as widely shared as philosophers claim, or it might mean investigating the claim that a certain behavior or trait is widespread or that two factors covary” (Alfano et al., 2018). If the purpose is to show that the relevant intuitions and judgments are unreliable or biased in some way, then this research may be tallied under the negative program. However, there is no reason why assessing the scope and sensitivity of moral judgments about technomoral change, say, or about self-driving cars causing certain kinds of harm, has to involve the attempt to debunk the judgments in question. To the extent that we really just want to know about these judgments, or about whether intuitions are in fact as philosophers of technology have taken them to be, then this may be seen as an important descriptive task in its own right.
Second, there is the normative task of using empirical findings to support normative claims about the subject at hand. Some inchoate work has already been done in this area, even if this was not under the name of experimental philosophy of technology.Footnote 13 For example, Kudina (2019) has recently used empirical methods as well as philosophical analysis to investigate the complex interactions between ethics and technology. As previously mentioned, research into technomoral change that uses technomoral scenarios has also been receptive to the value of empirical input, even though one may still press this work to make the role of empirical research more prominent in discussions surrounding the scenarios and the conclusions to which they give rise (cf. Boenink et al., 2010). Research of this kind remains rare. Given that technologies can “rob moral routines of their self-evident invisibility and turn them into topics for discussion, deliberation, modification, and reassertion” (Swierstra & Rip, 2007, 6), experimental philosophy of technology is well-suited to take on the task of elucidating the (changes in) moral routines, norms, and values surrounding technology, in an effort to strengthen normative conclusions about what the role of technologies should be given what they are or what they do within particular practices. Think, for example, of people’s lived experiences in relation to or as affected by technologies. By studying people’s actual experiences (e.g., through qualitative research; see Andow, 2016), one can obtain knowledge that transcends the merely observational or anecdotal; and by documenting the experiences (e.g., of injustice or systematic biases), one can move toward conceptualizing and implementing changes for the better.Footnote 14