How Artefacts Influence Our Actions
Artefacts can influence our actions in several ways. They can be instruments, enabling and facilitating actions, where their presence affects the number and quality of the options for action available to us. They can also influence our actions in a morally more salient way, where their presence changes the likelihood that we will actually perform certain actions. Both kinds of influences are closely related, yet accounts of how they work have been developed largely independently, within different conceptual frameworks and for different purposes. In this paper I account for both kinds of influences within a single framework. Specifically, I develop a descriptive account of how the presence of artefacts affects what we actually do, which is based on a framework commonly used for normative investigations into how the presence of artefacts affects what we can do. This account describes the influence of artefacts on what we actually do in terms of the way facts about those artefacts alter our reasons for action. In developing this account, I will build on Dancy’s (2000a) account of practical reasoning. I will compare my account with two alternatives, those of Latour and Verbeek, and show how my account suggests a specification of their respective key concepts of prescription and invitation. Furthermore, I argue that my account helps us in analysing why the presence of artefacts sometimes fails to influence our actions, contrary to designer expectations or intentions.
Keywords: Practical reasons · Externalism · Philosophy of artefacts · Philosophy of technology
When it comes to affecting human actions, it seems artefacts can play two roles. In their first role they can enable or facilitate human actions. Here, the presence of artefacts changes the number and quality of the options for action available to us.1 For example, their presence makes it possible for us to do things that we would not otherwise be able to do, and thereby adopt new goals, or helps us to do things we would otherwise be able to do, but in more time, with greater effort, etc. (Houkes and Vermaas 2004; Illies and Meijers 2009). In this role, artefacts are instruments, means that can be used to achieve a certain end. In short, in this role the presence of artefacts affects what people can do. What people actually do, which artefacts they actually use, is up to them. Philosophers who investigate this role tend to do so for normative purposes: in analysing how the presence of artefacts affects what people can do, and what people could know about that, they seek to establish rational standards for e.g. functionality (Kroes and Meijers 2006), use know-how (Houkes 2006) and (instrumental) goodness (Franssen 2006).
Artefacts can also play a second role, where their presence increases the likelihood that people will perform, or abstain from performing, certain actions. In this role, the presence of artefacts affects what people actually do. Authors like Akrich (1992) and Latour (1992) have argued that artefacts (themselves) can prescribe actions to us. Following their lead, Verbeek (2005) has argued that artefacts mediate actions in part by inviting or inhibiting us to act in a certain way. For example, speed bumps can be said to prescribe slowing down to the oncoming driver, while SUVs seem to invite reckless driving. Philosophers who investigate this role tend to do so for descriptive purposes, aiming at general yet revealing analyses of the different ways in which the presence of artefacts changes how we relate to the world: for example, how we see the world (Ihde 1990) and act in it (Verbeek 2005), and how the presence of artefacts shapes scientific practice (Latour 1987), politics (Winner 1986), or societies (Akrich 1992).
The two roles are not clearly demarcated and do overlap: in many cases where the presence of artefacts affects what we can do, it also affects what we actually do. By and large, however, philosophers working on the different roles of artefacts have developed their ideas on the relation between artefacts and human actions independently. And while those philosophers have criticised each other's approaches (e.g. Illies and Meijers 2009; Peterson and Spahn 2011; Selinger et al. 2011), there has been little attempt to account for each other's insights. The lack of cross-pollination between the two groups is unfortunate, first, because both fields offer different and interesting insights with which each group could strengthen the other's accounts. Second, both groups aim to advise engineers and designers with regard to good or ethical design (e.g. Houkes et al. 2002; Verbeek 2006), yet it would be difficult for engineers and designers to compare the merits and flaws of this advice, given the different underlying philosophical and methodological assumptions.
In this paper I will take the conceptual framework commonly used for investigating the first role of artefacts (how their presence affects what we can do) and making normative claims about them, and apply it in a descriptive investigation of the second role of artefacts (how their presence affects what we actually do). The main purpose of this paper is to establish that this framework can offer a plausible description of how artefacts play their second role, how their presence influences human actions, thus grounding the normative claims made on the basis of this framework and providing an alternative way of investigating the second role of artefacts. My particular claim will be that artefacts can play their second role because facts about them alter our reasons for action. This will hook up the investigation of the second role of artefacts to theories of practical rationality and (good) reasons for action, thus facilitating the moral and rational evaluation of the particular influences the presence of artefacts has on our actions. To put the account to the test, I will show how it can deal with the phenomena covered by Akrich's and Latour's prescription and Verbeek's invitation. I will also suggest a specification of these concepts, which might otherwise be taken to obscure rather than clarify the various different ways in which the presence of artefacts can influence our actions (e.g. Waelbers 2009).
With respect to giving advice to engineers and designers, the reasons account also allows us to go beyond the actual influence of the presence of artefacts on human actions and look at cases where it fails to influence human actions, contrary to designer intentions or expectations. For example, suppose that people regularly crash on a certain speed bump. A reasons account can then suggest possible explanations of why the presence of the speed bump did not influence driver actions the way it should have done: the relevant facts might not have been perceivable, the offered reason for slowing down might not have been considered a good or relevant one, etc. I will work this out in more detail in the next section.
To show that the influence of the presence of artefacts on human actions can be explained in terms of reasons, in Section 2 I explain what I take reasons for action to be and how they work, using Dancy's (2000a) externalist account of practical reasons. In Sections 3 and 4 I show how reasons for action can be provided or changed by facts about artefacts, and how this allows us to give an account of the various ways in which the presence of artefacts can influence our actions, including prescription (Section 3) and invitation (Section 4). Finally, in Section 5 I address a possible counterargument to my claim that a reasons account can explain how the presence of artefacts influences human actions.
2 Practical Reasons
For Dancy, reasons for action are facts (things that are the case) that favour or disfavour a certain course of action for an agent. For example, if I drive in my car and approach a speed bump, the fact that I am approaching a speed bump would be a reason for me to slow down. Besides being reasons, facts can play two more roles in Dancy's account, which allows us to perform a more fine-grained analysis of the ways in which the presence of artefacts can influence human actions. The first role is that of the enabler/disabler. An enabler is something that enables a fact to be a reason, for example: the facts that I would ruin my car or that I would hurt my back when driving too fast over a speed bump enable the fact that I am approaching a speed bump to be a reason for me to slow down. Similarly, a disabler prevents a fact from counting as a reason. The fact that my car has an excellent suspension can disable the fact that I am approaching a speed bump from being a reason for me to slow down.
The second role that facts can play is that of the intensifier/attenuator, which can make a reason a stronger or weaker reason for action. The fact that I am taking my frail grandmother for a ride does not in itself give me a reason for slowing down if there is no speed bump, but it may intensify my existing reason for slowing down (the fact that I approach the speed bump), because she will suffer as well if I drive too fast over it (Dancy 2000b, 2004).2
As a note on terminology, facts can change: they can be brought about or destroyed, depending on circumstances. The fact that I am approaching a speed bump is brought about (made the case) by me taking a particular road in which a speed bump has been constructed. As soon as I take a side road, avoiding the speed bump, the fact that I approach it has been destroyed: it is no longer the case.
Note that even if some fact about an artefact provides a reason for action, that action need not be the (morally) best, or even a good, option: the fact that there is an obstacle in the road may be a reason for a car driver to take a detour over the bicycle path, but it would not necessarily justify that action. Furthermore, facts about artefacts can seem to be reasons for action while they actually are not, e.g. when the actual facts do not favour that action. It may seem to me that I have a reason to press the button of a pedestrian traffic light if I want to cross the street, while in fact this may not be so, for example, if the button does not work. Finally, Dancy's commitment to the claim that reasons favour or disfavour certain courses of action allows him to say that agents can have reasons to act even if they do not desire to act, and would not so desire even if they were fully informed and had deliberated rationally about what to do.3
The structure of our reasons account becomes practically relevant if we wish to advise engineers and designers, investigate why some technologies succeed and others fail, or evaluate design methodologies like value-sensitive design, where artefacts are designed for values that actual users may or may not hold. Consider for example the Ruton Robot, a universal household appliance developed in the Netherlands in the 1950s that could be used as a vacuum cleaner and as a mixer, among other uses (Lintsen 2005: 264). While nothing was wrong with the technology itself, it never became a success. A reasons account can offer us a number of questions with which to start our investigation of why this technology failed. On our account, those would be: Were the right facts present? Were the relevant agents (the intended customers) aware of those facts? Were those facts enabled to be reasons for action (by their moral features, or by features of the agent or the context)? Were those reasons intensified to be good reasons for action, both instrumentally and morally? Were the good reasons actually motivating? In this specific case, it turned out that potential customers found the idea of cleaning the house and preparing food with the same appliance repulsive. In other words, they judged the value of flexibility to be less important than values of e.g. cleanliness and hygiene. In our account, we could say that facts about the artefact were reasons for using it, and that the potential customers were aware of that. The potential customers, however, generally did not consider those reasons to be good reasons. In addition, the reasons against using the artefact were considered to be much stronger by the potential users than the designers had anticipated.
3 Artefacts as Prescribing Actions
The notion of artefacts prescribing actions originates in Akrich (1992) and was subsequently picked up by Latour (1992). Akrich argues that artefacts contain ‘scripts’ that ‘prescribe’ actions to users. These scripts do not get in the artefact by accident: they are ‘inscribed’ by engineers, based on the engineers’ view of who the users will be, in which context the artefact will be used, et cetera. Unfortunately, scripts may also be inscribed unintentionally due to false expectations or oversight on the part of the engineer, leading to artefacts prescribing undesired behaviour or discriminating against groups of users. Latour (1992, pp. 158–159) gives the example of the door outfitted with a door-closer that requires so much effort to open that it prescribes ‘push to open’ only to able-bodied persons: the very young and very old have to find another way to enter. Likewise, the effects of the artefact’s presence are dependent on its context and working condition, which means that a change in either of those factors can trigger a change in script. For example: a moving obstacle in a road may prescribe cars to pass one at a time. If it breaks down, however, and does not sink back into the road, it could be said to prescribe car drivers to take the only way out and drive past the obstacle over the neighbouring footpath or bicycle path (example taken from Pols 2010).4 Latour (1992) gives a number of examples where artefacts prescribe behaviour. In addition to the moving obstacle, I will consider two well-known examples of Latour here: the speed bump in the road and the seat belt that, when not fastened, activates an alarm and a flashing light.
One characteristic is common to these examples: actions are enforced to some degree by the presence of the artefacts mentioned. In Latour’s words: “I will call (…) the behavior imposed back onto the human by nonhuman delegates prescription”.5 (1992, p. 157, my italics). It seems that the prescription of actions comes in two degrees. The first is soft prescription: this includes the prescription of actions by the speed bump and the seat belt. Here, there is no physical force, but not using the artefact in a certain way will have certain negative consequences, and characteristics of the artefact make the user aware of that.6 Those who race over the speed bump damage their cars and their backs, those who don’t fasten their seat belts have to put up with the alarm. The presence of the speed bump and the seat belt thus provides their users with a reason against not following a specific course of action. The second form of prescription is hard prescription: the moving obstacle in the road prescribes stopping by making it impossible to drive on. Here, there is physical force, so not using the artefact in a certain way will be impossible, or have overwhelmingly negative consequences, and characteristics of the artefact make the user aware of that. In terms of reasons, not only do facts about the moving obstacle provide a reason against driving on, the fact that doing so would wreck one’s car, but as driving on is in fact made impossible, the reasons for driving on are destroyed as well, as one cannot have reasons to do the impossible (Streumer 2007).
Before we examine more closely what is going on with artefacts prescribing actions, it should be noted that not all artefacts in the above examples are used because of their instrumental value. We do not drive over speed bumps because doing so serves a certain purpose, we rather have to deal with them as part of using the road. Some drivers use their seat belts as instruments for a purpose, to increase their own safety, while others may consider them to be like speed bumps: as nuisances that have to be dealt with when driving. Nevertheless, I will call all of these dealings ‘use’, as I consider use to be performing an action or executing a plan involving an artefact, regardless of our reasons for doing so. Of course, if we choose not to use the road or the car, the speed bump, the moving obstacle and the seat belt do not prescribe any behaviour to us.
To illustrate the soft prescription of the seat belt in terms of reasons, consider James, who has to get from A to B quickly:

1. James’s car will get him quickly from A to B.
2. James has to get from A to B quickly (enables 1. to be a reason for James to take his car).
3. So James has a reason to take the car.
4. If James were to drive without fastening his seat belt, the alarm and a flashing light would be activated.
5. James is annoyed by loud alarms and flashing lights (enables 4. to be a reason for James against not fastening his seat belt).
6. So James has a reason against not fastening his seat belt.
Of course, 4. is not the only reason for James against not fastening his seat belt. Not fastening his seat belt increases James’s risk of suffering harm in the case of an accident, and that is a reason for him against not using it even when there is no alarm. Nor does 4. ensure that James does indeed fasten his seat belt: maybe he dislikes wearing it so much that that fact gives him a stronger reason not to fasten it, and he decides to put up with the alarm – or abandons his plan of taking the car altogether.
The hard prescription of the moving obstacle can be analysed in a similar way:

1. James’s car will get him quickly from A to B.
2. James has to get from A to B quickly (enables 1. to be a reason for James to take his car).
3. So James has a reason to take the car.
4. If James were to drive on, his car would crash.
5. If James’s car were to crash, James would not be able to get from A to B quickly anymore (enables 4. to be a reason for James against driving on).
6. So James has a reason against driving on.
7. If James wishes to get from A to B quickly, he has to drive on (is enabled by 2. to be a reason for James to drive on).
8. Driving on is made impossible (destroys the fact that is a reason for driving on in 7.; note that this does not have to affect James’s initial reasons for wishing to get from A to B quickly).
9. So James has one less reason to drive on (furthermore, all other possible reasons for James to drive on will also be destroyed by 8.).
It seems that an artefact prescribes an action if certain actions are enforced (to some extent) on the agent because of its presence. This is most likely to happen when the agent cannot avoid the artefact in the pursuit of certain purposes. This means that even instruments can prescribe actions: a corkscrew doesn’t prescribe opening wine bottles, but if I have to open a wine bottle, the corkscrew does prescribe how I should go about it: it prescribes its proper use. In other words, the corkscrew has a use plan I have to follow if I wish to open the bottle with it (Houkes and Vermaas 2004).
In terms of our reasons account, we can say that an artefact prescribes an action if facts about it provide reasons against not doing that action (as with the seat belt) and the agent is made aware of that; in addition, those facts may destroy or disable facts as reasons for doing any other relevant action (as with the moving obstacle). An artefact may also prescribe an action if facts about it intensify a reason against not doing that action, or attenuate facts as reasons for doing any other relevant action, but these forms of prescription will usually be rather soft.
It might be tempting to say that the moving obstacle prescribes stopping because the fact of its presence provides you with a good reason to stop, rather than with good reasons against driving on. However, I use the second formulation here to distinguish between cases where there is a promise of positive consequences (as is the case with invitation) versus cases where there is a threat of negative consequences (as is the case with prescription). Moreover, it should be noted that this temptation arises because there is only one other relevant action here, driving on. If we were to assume that there are multiple relevant actions available (taking a side road, driving over the grass), it would be clear that the presence of the obstacle does not provide you with a good reason to stop if it does not simultaneously destroy your possibilities or reasons for bypassing the obstacle, for you could simply take one of the alternatives. If there were multiple relevant actions available, the presence of the obstacle would provide you with a good reason against performing a particular action, namely, driving on. But this would not be prescription, as it would not enforce particular behaviour. In this case, the presence of the moving obstacle would merely remove one possible action from a set of many.
Finally, in cases of bad design it may seem that an artefact prescribes an action without the user being aware of it. For example, suppose that a particular moving obstacle is hidden in the road so well that even attentive drivers do not notice it until it rises up under their cars. Here, there are good reasons for stopping, but they are not available to the drivers. This case, however, is not one of prescription. We can certainly not say that the presence of this artefact affects what we do: it just crashes our cars.
4 Artefacts as Inviting Actions
When an artefact prescribes an action, it does so with a certain force. When an artefact invites an action, there is no force, but its characteristics rather make the agent aware that that particular action can be performed, and that there is some reason to do that particular action.7 When a restaurant owner in a Greek tourist resort invites a passing tourist into his establishment, he is not forcing the invitee to come in, but rather suggesting that she could come in, and that there are good reasons for doing so: his wine is excellent, he plays authentic Greek music, etc. Of course, these facts might not be (good) reasons for the tourist: perhaps she detests wine and authentic Greek music. An invitation does not carry force, but neither is it always enticing.
The idea that artefacts can invite actions is suggested by Verbeek (2005), who extends the earlier work of Ihde (1990). Ihde is mainly interested in how technologies affect perception. He claims that technologies mediate our perception of the world: they give shape to our perception, and thereby influence how we experience the world. This mediation can take several forms, for example embodiment, where an artefact used to perceive the world, like a pair of glasses, becomes part of the agent, or representation (the hermeneutic relation), where the agent perceives an artefact that represents the world in some way, like a thermometer.
Verbeek argues that artefacts do not only mediate perception, but action as well: artefacts actively shape our actions as well as our perception.8 Like Ihde, he distinguishes several possible ways in which artefacts can do so. The most important way for our present purposes is translation, where the artefact changes our relation with the world by inviting certain actions and inhibiting others.9 Verbeek (2008) illustrates the translation relation with the example of obstetric ultrasound technology. He claims that on the one hand, this technology can be said to invite abortions, since it can make parents aware of inborn deficiencies or risk factors for hereditary diseases of the fetus. On the other hand, it can also inhibit abortions by confronting the parents with the fetus as a real, live human being, which can strengthen the emotional bond between the parents and the unborn child.
As I mentioned before, when an artefact invites an action, its characteristics make the agent aware that there is an opportunity for action, and that there is some reason to perform that action. Note that this awareness does not have to be conscious: the agent just needs to have access to those facts in some way. With simple artefacts, making an agent aware of an opportunity for action can be easy: humans can often quickly see what they can do with artefacts and other objects, though the actual perceived action opportunities may depend on the need of the observer (Gibson 1979: ch. 8). With more specialized or complex artefacts the observer might need knowledge of a use plan to see them (Houkes and Vermaas 2004).
Inviting is not only about making the agent aware of an opportunity for action, but also about providing a reason for doing that action and making the agent aware of that reason, or making the agent aware of an existing reason for action. Again, this does not necessarily mean that any agent will consider the relevant fact to be a proper reason for action, let alone a good reason. Artefacts are usually designed with typical (groups of) users in mind who are likely to respond to the invitation, that is, who would consider the provided reasons good reasons for action. The artefact would invite other agents as well, but they might just not consider the provided reason to be a reason. And even if they would, they might not consider it to be a good reason, or have other reasons not to respond to the invitation. For example: a comfortable chair can be said to invite sitting, even though the fact that it is comfortable might be a good reason for me to sit in it, an insufficient reason for you (perhaps you have more important reasons to hurry on) and no reason at all for a baby, who might need firm support in order to sit upright at all. Whether a fact is a (good) reason for action depends on both observer and context.
Like prescription, invitation has a soft and a hard variant. In the soft variant, the balance of reasons is not altered by facts about the artefact; its characteristics only make the user aware of opportunities for actions and facts that count as reasons for performing those actions.10 Persuasive technologies, artefacts that are intentionally designed to change the user’s attitude, behaviour or beliefs, often work in this way (Fogg 2003; Spahn 2011). For example: in cars, the presence of a prominent speedometer makes you aware of a reason to drive faster or slower (that you are under or over the legal speed limit). In some cars, however, the most prominent place is now given to the air-fuel meter that shows the efficiency of your engine. This also makes you aware of reasons to drive faster or slower, but here the intended result is increasing engine efficiency rather than adjusting driving speed. In these cars, facts about the speed of the car and the efficiency of the engine are not changed, but different facts are made available to the user that may constitute reasons for different actions.
Returning to Verbeek’s example of obstetric ultrasound technology, it seems that this technology invites agents to act by making them aware of facts that were already reasons for them. The fact that a certain fetus has an inborn deficiency might be a reason for abortion, the fact that it looks human already might be a reason against it, but such facts are not accessible without ultrasound technology.11 Ideally, the characteristics of such technology would make you aware of all the relevant facts for considering what to do. In practice, though, it often leaves the user unaware of certain relevant facts, for example, because of the low resolution of an image. Or worse, distortion of the facts might occur, for example, if the ultrasound machine screen shows the fetus as larger than it actually is. Here, something is presented as a fact that might be a reason for action, while the ‘fact’ is actually not the case.
Incidentally, this example highlights an important difference between Verbeek’s account and my reasons account. Verbeek focuses on technology: it is the ultrasound machine that invites or inhibits abortions. The reasons account focuses on the facts that count as reasons, e.g. facts about inborn deficiencies, that happen to be made available to the agent by a certain technology. On the one hand, this difference in focus does not matter for the purpose of explaining behaviour: the fact that an unborn child has certain inborn deficiencies cannot make a difference to what we do if we cannot know about them. On the other hand, Verbeek’s use of ‘invitation’ suggests that the ultrasound machine is primarily responsible for any change in behaviour, where it seems that we might rather want to say that ‘inborn deficiencies invite abortions’ whenever we come to know about them, through ultrasound machines or otherwise. Furthermore, ‘invitation’ seems to be a comparative term, which raises questions about how it should be used. For example, suppose that a new type of ultrasound machine highlights inborn deficiencies, whereas the old type doesn’t. Should we say that the new type of machine is ‘more inviting’ when it comes to abortions than the old type? The reasons account avoids questions of this kind because it focuses on the facts (about the inborn deficiencies) that are reasons for action, and states that the normative force of those reasons is independent of our uncertainty with regard to, or limited knowledge of, the facts.12
Next to the soft variant of invitation, there is also a hard variant, where the user is not only made aware of opportunities for actions and facts that count as reasons for performing those actions, but also of facts about the artefact that do alter the balance of reasons. This can be done by creating or intensifying a reason to perform an action, or by creating new opportunities for actions that agents might have reasons to perform. An example of the first kind would be the piano staircase where piano notes are played as one walks up and down the steps, adding a reason for using the stairs rather than the escalator next to existing reasons, like the beneficial health effects.13 An example of the second kind would be the bath that looks inviting because it makes an activity possible which the agent may have a reason to perform (e.g. because bathing can be relaxing). In fact, it seems that every artefact that makes an action possible for which the agent can have a reason also invites it, assuming its characteristics make the agent aware of the fact that the artefact makes the action possible.
Finally, the SUV that seems to invite reckless driving can be analysed as destroying a reason against that action:

1. Reckless driving increases James’s risk of suffering bodily harm.
2. Suffering bodily harm is bad for James (enables 1. to be a reason for James not to drive recklessly).
3. So James has a reason not to drive recklessly.
4. In an SUV, reckless driving does not increase James’s risk of suffering bodily harm (destroys the fact in 1. that is a reason for James not to drive recklessly).
5. So James has one less reason not to drive recklessly while driving an SUV.
Generally speaking, we can say that an artefact can invite an action in two ways. The first is when its characteristics make the agent aware of an opportunity for action, and of existing reasons for performing that action. Those reasons can be false reasons if the ‘facts’ provided by the artefact are not the case, but would have been reasons if they had been the case: here, the invitation is also a deception. The second possibility for an artefact to invite an action is when the artefact provides an opportunity for an action which the agent has a reason to do, or when facts about the artefact provide the agent with reasons for that action or intensify existing reasons, and the agent is made aware of that. Alternatively, the artefact may invite an action if facts about it destroy, disable or attenuate facts as reasons against performing that action and the agent is made aware of that, provided that there are also reasons for performing that action.
5 A Counterargument
In this section I will consider a possible counterargument to my account of how artefacts influence human action, on which facts about them alter our reasons for action, and argue that it does not threaten the account.
The counterargument runs as follows: artefacts may influence our actions without facts about them altering our reasons to act. Humans show all kinds of (unconscious) biases in their behaviour that may be irrational. One way in which our behaviour can be changed is by situating the artefact so that our biases may be exploited. For example, TV screens in buses or trams may grab our attention without showing us anything worthwhile, and in the supermarket products sell better when placed at eye level than on the bottom shelf. All things considered, it does not seem that we have any special reason to watch the screen or buy the product at eye level. Yet it seems that we can say that the screen prescribes watching it, and the product at eye level invites us to buy it. These would then be cases where actions are prescribed and invited where our reasons for action are not altered.
I think it is undeniable that our behaviour may be changed by exploiting our psychological biases, whether or not this is done through artefacts. This, however, need not be a problem for my account. We have already seen that facts about artefacts can provide us with reasons to act while leaving the choice of action to us; this is compatible with those artefacts also exerting inescapable causal force. It is not impossible for us to drive over a speed bump at high speed, yet if we choose to do so, we will suffer the negative effects, and that fact is a reason for us to slow down. Here, what generates the reason is a fact about a physical effect that is causally necessitated, but such effects might equally be psychological. The TV screen might prescribe looking at it by impeding other actions, in which case you could look away by exerting your willpower, or it might practically unavoidably force you to look at it, in which case you would need a strategy to avoid looking at it, such as turning your back to it. The reason against looking away is then simply the fact that it requires more effort to look away than to look at the screen. Again, this may not be a very good reason to look at the screen, but it is a reason nonetheless: people may still act on it and cite it to explain or justify their behaviour.
6 Conclusion
Artefacts influence human actions not only by making actions possible and facilitating them; they also influence what we actually do. Both kinds of influence are closely related, yet accounts of them have been developed largely independently, within different conceptual frameworks and for different purposes. In this paper I have developed a descriptive account of how the presence of artefacts affects what we actually do, based on a framework commonly used for normative investigations into how the presence of artefacts affects what we can do. Specifically, I have argued that the presence of artefacts can affect what we actually do because facts about them can alter our reasons for action. Not only does my account suggest a specification of some of the key concepts of alternative accounts, it could also be useful for evaluating the merits and flaws of the assumptions underlying the different conceptual frameworks, at least for the purpose of explaining the influence of artefacts on our actions. Finally, it can help us analyse why the presence of artefacts sometimes fails to influence our actions, contrary to designer expectations or intentions.
While I speak only about ‘presence’ for the sake of readability, various other factors influence our set of options and our actual behaviour, e.g. whether the artefact present is actually available for use, whether it is actually usable by a given user, and whether the user can and indeed does perceive the artefact. More attention will be given to these factors in Sections 3 and 4.
A more elaborate account of enablers/disablers and intensifiers/attenuators can be found in Dancy’s unpublished paper Practical Reasoning and Inference. http://www.kcl.ac.uk/content/1/c6/05/34/35/alpc2009dancypaper.pdf. Accessed 20 June 2012.
This is what makes Dancy an externalist about reasons, as opposed to reason-internalists like Williams (1981) who hold that agents only have a reason for action if they, knowing all the relevant facts and after rational deliberation, would be motivated to do that action. Though the particular account of reasons I use to support my claim is externalist, the validity of my claim does not depend on the truth of reasons externalism. For more on the internalism/externalism discussion, see e.g. Parfit (1997); Hooker and Streumer (2004); Finlay and Schroeder (2008).
I will make no distinction between “artefacts prescribing actions” (e.g. driving) and “artefacts prescribing the agent to act in a certain way” (e.g. driving slowly or carefully). This distinction seems to originate in us having a specific repertoire of action verbs rather than in artefacts influencing our actions in different ways.
In a footnote to this sentence, Latour weakens the meaning of prescription by saying: “We call prescription whatever a scene presupposes from its transcribed actors and authors…” (p. 177n8, my italics). He then gives the example of a painting that is designed to be viewed from a specific angle. This meaning of prescription seems simply to be that artefacts have a specific use plan that users have to follow in order to use the artefact successfully (Houkes and Vermaas 2004). I will come back to this form of prescription at the end of this section.
Strictly speaking, it is perceiving the behaviour of an artefact that can make the user aware of certain facts that are reasons, e.g. perceiving a moving obstacle rising out of the road, or a traffic sign simply being in place at a prominent spot. There are many ways in which artefact users can be made aware of facts that are reasons, from physical cues to signs, notices, commercials, et cetera. Also, while artefacts are often designed to make users aware of certain facts that count as reasons for action, designer intentions are not necessary to induce awareness of such facts, for example, in the case of the moving obstacle that has broken down.
Again, strictly speaking, it is perceiving the behaviour of an artefact that can make the user aware of certain facts that are reasons.
Apart from this, Verbeek’s notion of mediation is meant to do more work: he claims that “mediation consists in a mutual constitution of subject and object” (2005, p. 130). For artefacts, this means (among other things) that “artefacts coshape the use that is made of them” (p. 171). For elaboration on the mediation concept, see Waelbers (2009). For criticism, see Illies and Meijers (2009) and Peterson and Spahn (2011).
While both Latour and Verbeek tie their concepts of prescription and invitation explicitly to technology, there seems to be no reason why natural objects could not also prescribe and invite, e.g. snow on the road can be said to prescribe slowing down, and a lake can be said to invite swimming. The reasons account does not differentiate between artefacts and natural objects in this respect: it holds that facts about both can alter our reasons for action.
Though in a sense, making an agent aware of reasons is always altering the balance of reasons, as an agent can be said to have a reason (though not necessarily a good one) to act on information that is readily available: it is convenient.
Of course, neither of these facts might be a good reason for or against abortion, and dependent on the values of the agent, neither fact may be enabled to be a reason at all.
The ultrasound example is further complicated by the fact that having the ability to make ultrasound scans does not imply that we are actually able to terminate pregnancies. I thank an anonymous referee for making this point.
Example taken from http://www.thefuntheory.com/piano-staircase on 20 June 2012. Indeed, many persuasive technologies try to make us perform actions which we already have reasons to do, by creating new facts that constitute new, more motivating reasons for performing those actions.
This might not be true for all kinds of accidents, e.g. rollover accidents. It could be argued that only the perceived safety of an SUV is greater than that of an ordinary car, not its actual safety. In that case, the fact that is a reason not to drive recklessly would merely seem to be destroyed, while actually it is not. Hence the design recommendation to make cars seem less safe than they actually are (Horswill and Coster 2002).
The author would like to thank Jonathan Dancy, Anthonie Meijers, Philip Nickel, Bart Streumer, Wybo Houkes and two anonymous referees for this journal for their helpful comments on earlier drafts of this paper.
This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
- Akrich M (1992) The de-scription of technical objects. In: Bijker WE, Law J (eds) Shaping technology/building society: studies in sociotechnical change. MIT Press, Cambridge, pp 205–224
- Dancy J (2000a) Practical reality. Oxford University Press, Oxford
- Dancy J (2000b) The particularist’s progress. In: Hooker B, Little M (eds) Moral particularism. Clarendon, Oxford, pp 130–156
- Finlay S, Schroeder M (2008) Reasons for action: internal vs. external. In: Zalta E (ed) The Stanford encyclopedia of philosophy (Fall 2008 edition). http://plato.stanford.edu/archives/fall2008/entries/reasons-internal-external/. Accessed 20 June 2012
- Fogg BJ (2003) Persuasive technology: using computers to change what we think and do. The Morgan Kaufmann series in interactive technologies. Morgan Kaufmann, Amsterdam/Boston
- Gibson JJ (1979) The ecological approach to visual perception. Houghton Mifflin, Boston
- Ihde D (1990) Technology and the lifeworld: from garden to earth. Indiana University Press, Bloomington
- Kroes PA, Meijers AWM (eds) (2006) The dual nature of technical artefacts. Special issue, Stud Hist Philos Sci 37(2)
- Latour B (1987) Science in action: how to follow scientists and engineers through society. Open University Press, Milton Keynes
- Latour B (1992) Where are the missing masses? The sociology of a few mundane artifacts. In: Bijker WE, Law J (eds) Shaping technology/building society: studies in sociotechnical change. MIT Press, Cambridge, pp 225–258
- Lintsen H (2005) Made in Holland: een techniekgeschiedenis van Nederland 1800–2000 [Made in Holland: a history of technology of the Netherlands 1800–2000]. Walburg Press, Zutphen
- Pols AJK (2010) Transferring responsibility through use plans. In: Van de Poel I, Goldberg DE (eds) Philosophy and engineering: an emerging agenda. Springer, Dordrecht, pp 189–203
- Spahn A (2011) And lead us (not) into persuasion…? Persuasive technology and the ethics of communication. Sci Eng Ethics. doi:10.1007/s11948-011-9278-y
- Verbeek P-P (2005) What things do: philosophical reflections on technology, agency, and design. Penn State Press, University Park
- Williams B (1981) Internal and external reasons. In: Moral luck. Cambridge University Press, Cambridge, pp 101–113
- Winner L (1986) Do artifacts have politics? In: The whale and the reactor: a search for limits in an age of high technology. University of Chicago Press, Chicago, pp 19–39