Artificial Intelligence and Law, Volume 18, Issue 1, pp 103–121

Intelligent agents and liability: is it a doctrinal problem or merely a problem of explanation?

Author

E. A. R. Dahiyat, Al-albayt University

DOI: 10.1007/s10506-010-9086-8

Cite this article as:
Dahiyat, E.A.R. Artif Intell Law (2010) 18: 103. doi:10.1007/s10506-010-9086-8

Abstract

The question of liability in the case of using intelligent agents is far from simple, and cannot be answered satisfactorily by deeming the human user automatically responsible for all actions and mistakes of his agent. This paper is therefore specifically concerned with the significant difficulties that might arise in this regard, especially if the technology behind software agents evolves or comes into common use on a larger scale. Furthermore, this paper considers whether it is possible to share responsibility with these agents, and what the main objections are to treating such agents as responsible entities. The paper is not intended to provide final answers to all questions and challenges in this regard, but to identify the main components of the problem and to offer some perspectives on how to deal with it.

Keywords

Intelligent agent · Liability · Decision making · Foreseeability

1 Introduction

Human parties have been contracting electronically for some time. However, electronic contracts concluded through the use of intelligent software agents have unique qualities and attributes that make them sufficiently different from contracts entered into through other electronic or automated means. With this kind of software agent, contracts might be formed without the human parties using such agents having any knowledge of the exact terms of the contracts or of the persons to whom they are addressed. Human parties might not even know that the communications or transactions are taking place. This is particularly evident where two or more intelligent software agents interact, negotiate with each other, and then conclude the contract autonomously, without human supervision or input on either side.

In fact, intelligent software agents are capable of independent action rather than merely following instructions.1 They further exhibit high levels of mobility, intelligence, and autonomy according to which their actions are not always completely anticipated, intended, or known by their users.2 This is why their contracts very often do not seem consensual, and this is also why difficulties arise in deciding who should be responsible for the actions and mistakes such agents make. It can thus be said that the advent of intelligent agents has given rise to a wave of cynicism concerning the agents’ capacity to incur obligations and form binding contracts on behalf of their users. Such agents are also posing new concerns regarding who is to bear the risk associated with their unintended consequences, how liability should be attributed in online environments, and how the law ought to respond in cases where the technology is intelligent enough to act autonomously and not only automatically.

However, before proceeding to the main sections of the paper, it is crucial to note that software agents will not necessarily always display the same degree of sophistication, autonomy, and intelligence. While second-generation intelligent software agents are able to exhibit a high level of autonomous decision-making capability,3 first-generation software agents exhibit only a very limited level of intelligence and lack any significant ability to make autonomous decisions based on their own experiences. Nevertheless, it is expected that artificial intelligence technology will, in the very near future, progress to the next level, allowing software agents to gain more intelligence and autonomy and to become active initiators and decision makers rather than merely assistants or facilitators.

2 Intelligent software agent and liability

Due to the absence of human review and in light of the fact that one or both of the human parties to the contract will often have no conscious role in the electronic contracting process, errors and mistakes can easily arise in computer-mediated environments.4 Let us imagine, for example, that a software agent, in the course of searching and roaming the Internet, infringes the rights of others (such as copyright, or privacy right), performs illegal transactions, or operates without the user’s authorization and sells rather than buys certain shares. Let us further suppose that this agent, while gathering information, corrupts a third party’s database, or causes the server to crash. In these cases, who is to be held liable for the damage caused by that agent? Should the law automatically deem the human user as being responsible for causing such harm?

As a general principle, legal responsibility is assigned when it appears that someone, in some way, was the actual cause of damage. The law is also likely to be invoked when control is not exercised or when no warning of the danger is given by those who could have controlled the outcome of events, forecast the results and warned of the dangerous outcome. Although there are several parties who might be potential perpetrators of a wrongful act (such as the programmer, a third party, or the supplier, etc.), most analyses of the responsibility for the use of such software systems (as well as the consequences of such use) focus on the human users or legal entities on behalf of which these systems are operated, and adopt the legal fiction that anything issuing from these systems is considered to be really issued by the natural or legal persons who use them.5

Such analyses of responsibility, which put the full responsibility on the shoulders of the user, raise little or no difficulty when they are applied to neutral software applications or even to the first generations of software agents that have only very limited autonomy. With such applications, the software agents simply browse the Internet in order to complete an acceptable deal, subject to a set of user-specified constraints, which may be tacit or explicit. Each agent’s goal is simply to accommodate the user’s preferences and patterns and to do what it is told to do. In this case, the strategic directions, final purposes, contractual details, and general conditions of their actions are all pre-determined by their users or programmers.
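
To make the contrast concrete, a minimal sketch of such a first-generation, constraint-bound shopping agent is given below. The names and values are purely illustrative assumptions, not drawn from any actual system; the point is simply that every material term is fixed in advance by the user, so any unwanted outcome traces directly back to the parameters supplied by the user or programmer.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    merchant: str
    item: str
    price: float

def choose_offer(offers, max_price, allowed_merchants):
    """Return the cheapest offer satisfying the user's explicit constraints, or None."""
    acceptable = [o for o in offers
                  if o.price <= max_price and o.merchant in allowed_merchants]
    return min(acceptable, key=lambda o: o.price) if acceptable else None

# The user, not the agent, fixes every material term in advance.
offers = [Offer("shopA", "camera", 310.0), Offer("shopB", "camera", 289.0)]
print(choose_offer(offers, max_price=300.0, allowed_merchants={"shopA", "shopB"}))
# -> Offer(merchant='shopB', item='camera', price=289.0)
```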

It can thus be said that the abilities of such software applications to understand and react will be limited more by the ideas, instructions, aspirations, and efforts expended by their creator or user than by the state of the art.6 This is even clearer if we take into account that these applications do not themselves generate the circumstances that produce an effect; it is rather the person standing behind them who does. This is why the actions of such applications cannot be considered the source of new entitlements and liabilities, and this is also why they should never be made scapegoats in order to transfer blame away from ourselves.7

Under this position, any activity involving such applications would be construed as simple transmission. If things go wrong, the damage can often be easily traced to human mistakes either in programming or parameterisation. It can then be argued that there should be no problem if we apply to such applications the same rules relating to the guardianship of things and objects. This will provide users with a strong incentive to ensure that their programs operate appropriately and are adequately controlled, since they know that they will be entirely liable if something goes awry.

The matter, however, might take a different turn, and the idea of custody might seem inappropriate for the advanced generations of intelligent software agents. Here the harm is a function of many factors, and no one can know the full context of an intelligent program or forecast its behaviour in all possible circumstances, since such programs have some degree of control over their internal states and their actions are, to a certain extent, determined by their own experiences. Even the programmers involved in building such agents will be incapable either of writing instructions to handle all circumstances optimally or of determining the pattern of their behaviour over the medium and long term.

Unlike other programs that merely follow instructions and produce output that is causally related to the input, intelligent agents are endowed with many peculiar characteristics that provide them with the ability to act autonomously without the user’s control or knowledge, but according to self-modified or self-created instructions. It should thus come as no surprise if the output produced by such agents differs radically from the input that is supplied by their users or if the activities of these agents cause damage that is completely unintended, unauthorized, and unanticipated by their users. In such cases, it is quite difficult to decide who is to bear the risk of such consequences or to whom the wrongful act must be attributed.
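By contrast with the first sketch above, the following illustration (again purely hypothetical and deliberately simplified) shows how an agent that adjusts its own decision rule in the light of its experience can end up applying a criterion the user never specified, so the output is no longer a simple, causally transparent function of the instructions originally supplied.

```python
import random

class AdaptiveBuyer:
    """Toy agent whose acceptance threshold drifts with its own market experience."""

    def __init__(self, initial_threshold):
        self.threshold = initial_threshold      # starts from the user's instruction...

    def observe_market(self, observed_prices):
        # ...but is re-weighted toward whatever prices the agent itself has observed.
        avg = sum(observed_prices) / len(observed_prices)
        self.threshold = 0.5 * self.threshold + 0.5 * avg

    def decide(self, offer_price):
        return offer_price <= self.threshold

random.seed(0)
agent = AdaptiveBuyer(initial_threshold=100.0)
for _ in range(20):
    agent.observe_market([random.uniform(80, 400) for _ in range(5)])

# The threshold the agent now applies was never specified by the user.
print(round(agent.threshold, 2), agent.decide(250.0))
```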

Holding someone responsible is unfair so long as that person has done nothing that specifically caused the harm and could neither have prevented nor foreseen it. It is also unreasonable to expect a person who is unable to appreciate the extent of a risk, or to take steps to avoid its occurrence, to control the uncontrollable. If the law attributes liability for communications initiated by an electronic agent to its operator simply because of his use of the agent, the result will surely be unjust, especially where the agent has malfunctioned or was subject to unauthorized intervention or deception by other parties and factors. Holding users responsible in all cases, whether or not they exercise an appropriate level of care, is therefore unnecessarily harsh and might confer upon electronic agents an unlimited power to bind their users regardless of the circumstances of the transaction. Consequently, a user may quickly become the victim of electronic agent technology simply because he is not able to appreciate the extent of its technical capacities.

In many cases, it may be impossible or extremely difficult for a consumer who knows little about the unique nature of the intelligent agent to determine exactly where the fault lies and to identify the source of the negligence responsible for the defect: whether that negligence lay in the design of the system, in its operation, or in the reliance placed on its output. Even where such identification is possible, the undesired outcomes of such systems might not be due to a defect in the code or in the input values and configuration, but might arise from the peculiar nature of intelligent agents, which gives them the ability to operate autonomously, modify their own code, and even generate new instructions. In such a case, it would require a very imaginative approach to treat such agents as mere transmission tools such as telephones or fax machines, or to classify any error that occurs as an error in transmission. This is because the autonomy of such agents enables them to generate a contractual offer without human intervention or knowledge, and consequently to act more like initiators or intermediaries than messengers or instruments.

On the other hand, making the software company liable for all software failures might discourage it from attempting new and innovative projects, and might encourage consumers to behave recklessly if they knew that the company would be solely liable in all cases. If that were the case, it is highly likely that software companies would all go bankrupt.

The position is further complicated by the fact that different persons contribute different skills to the development of a software agent and that the standard of care differs from one contributor to the next. This situation, usually referred to as “the problem of many hands”, results from the fact that a number of different parties are involved in the manufacture, development, design, and use of such advanced agents, and hence there are several parties who are potential perpetrators of a wrongful act.8 This can make it quite difficult to determine precisely to whom the wrongful act should be attributed. The same doubt also arises when an electronic agent delegates its authority to another electronic device, since it would be unclear to what extent the person who appointed the original electronic agent is responsible for the operations of the device to which a task was delegated.

3 Could an intelligent agent be responsible?

While the idea of custody, which automatically deems the human users to be responsible for the actions of their programs, is fairly clear-cut for conventional programs, the matter differs radically in the case of the advanced generations of intelligent software agents, where autonomy is coupled with mobility,9 a capacity for learning, and the power of self-programmability.10 As noted earlier, the question of liability in this regard is far from simple, and cannot be perfectly solved under current legal doctrine. It is difficult, in most circumstances, to trace a chain of causation for damages back to the programmer or owner.

“Causation” analysis of such injuries is particularly difficult since these agents appear to be autonomous and intelligent to a level that complicates assigning responsibility to the user. This inability to pinpoint specific human responsibility for failure suggests that “the intelligent software agent” should be blamed, in some way, for the damage that it causes.11 As the interactions between intelligent software agents and humans become more frequent, and as these agents become more intelligent and autonomous, it will become more urgent to take the cognitive ability of such agents into account and even consider sharing the responsibility with these intelligent computer systems. It will also become imperative to search for realistic answers that strike a balance between the various interests involved in conducting businesses within the virtual environment of the Internet.

The questions that need to be addressed in this regard are these: What attitude ought the law to take toward dealings with automated devices that interact with us and which, to some extent, function independently of those who own them? Can a human user be charged for damage he could not have prevented or foreseen? Could an intelligent agent be responsible? And if so, how could it answer for the damage caused to third parties? Questions such as these cannot sensibly be answered without an analysis of the legal issues associated with software intelligence. In order to be fair-minded, we also need to examine the objections surrounding the assumption of considering such an agent as a responsible entity, and to shine a spotlight on the relevant technical, practical and legal considerations. These issues will therefore be the subject matter of the following sections.

3.1 Intelligent agents and decision making

It makes perfect sense to say that an electronic agent can be considered a responsible, intentional system if its decisions really stem from it and are within its control. But is that the reality? Did a computer (an electronic agent, by analogy) truly make the decision it did? Such a question arouses many debates and concerns that have not yet been resolved. While some authors assert that software agents cannot be held responsible, others think that such agents might be held responsible once they arrive at a reliable degree of autonomy, intelligence, and sophistication.

Those who reject attributing responsibility to a software agent have justified their attitude by arguing that such an agent only does what it is designed to do, without being able to do otherwise.12 They have also argued that it is difficult to imagine any real cognitive ability on the part of software agents, because such agents depend to a great extent on pre-programmed software instructions. Even if such agents can learn from their experience, they have had no choice with regard to what they have learned. This means that their decisions result simply from the way in which they have adapted to their environment. Is it then still possible to understand their decisions as voluntary decisions stemming from their structure? One can consequently conclude that when an electronic agent makes a mistake, it is because the agent is not being effectively monitored by a user, or because data was put into the computer incorrectly, or because the program being used is defective.13

One potential response to the above claim is that it is not clear that humans have any choice of this kind either.14 The question that should be asked, then, is whether we would simply exempt someone from blame because we think that he has had no choice in what he has learned. In fact, we hold humans responsible for the actions they have taken on the basis of their cognitive capabilities and what they have learned, or what we think they should have learned. In this case, we do not inquire further as to whether they chose what they learned.15

The question of whether the agent could have done otherwise should be irrelevant in this regard, since it is impossible, or at least extremely difficult, to create a trusted criterion for performing any meaningful investigation into this question. The answer will surely vary from agent to agent and from one occasion to another. It can be said that this criterion ignores the fact that agents learn and shift their interests incessantly.16 It further ignores the micro-details of each situation, which will never be exactly the same again; and even if we suppose that these micro-details could recur, no one can guarantee what the agent will do. It may do otherwise and it may not. Moreover, no one can exactly determine an agent's psychological or cognitive state in all possible circumstances, since no one has direct access to other minds. In reality, people show little interest in pursuing this question, since the answer in most cases could not conceivably make any noticeable imprint on the way the world went.

On the other hand, it can be said that the view of these objectors is incompatible with the fact that an intelligent agent migrates, communicates, and interacts without any human intervention; it even operates while the user is disconnected, logged out, or away from Web interaction. This clearly shows that an intelligent agent of this kind made the decision it did, and hence that it has those critical qualities of entities that we take into account when we hold them liable. With this in mind, it is inaccurate to say that an intelligent agent is simply an extension of human action. Such agents have, in fact, the capacity to act beyond their instructions and to make their own decisions autonomously.

It is submitted that one need not go to the extreme of treating intelligent agents either as legal persons or as nothing, but should be realistic enough to recognize that it is no longer possible to ignore the peculiar qualities of such agents or to deal with them as passive tools. It is essential at the same time to admit that there are technical and doctrinal difficulties still facing the scenario of attributing full responsibility to software agents. This necessitates looking for a moderate approach that addresses the unique nature of intelligent software agents and strikes a balance between the various interests of the different parties involved in conducting business through such agents. It is also necessary to determine responsibility in the light of the relevant technical and commercial considerations, without becoming absorbed in metaphysical complexities. Once an entity's competence to be blamed is analysed practically, and not in terms of body or soul, complex problems become more manageable and solutions become clearer.

3.2 Intelligent agents and their unreliable nature

According to this objection, electronic agents are by nature unreliable, and they have some unusual characteristics that raise many questions about their capability to be blamed or praised for their actions. For instance, electronic agents do not have an established physical location or domicile in which an unsatisfied creditor could sue them. Moreover, they have no inherent substantiality or persistence of their own, but acquire a degree of permanence only by virtue of the physical medium on which they are stored. Bearing their problematic nature in mind, it should come as no surprise if they die in the middle of a transaction or disappear without any apparent reason. These are not “bugs” in their programs, but part of their inherent nature. It is also well understood that it is statistically impossible to undertake a comprehensive test of these software agents in order to forecast the nature and timing of their pathological behaviour in all situations. Such practical and technical difficulty in checking the reliability of agents makes it hard to feel that these agents really deserve to be considered reliable and responsible entities.

Furthermore, electronic agents can divide, replicate, and multiply themselves into indistinguishable modules that might operate collaboratively across unknown platforms in which elements are continuously added and dropped. By doing so, electronic agents may become unrecognizable, and results may be derived from a large number of concurrently interacting components, so that it becomes too hard, if not impossible, to distinguish between intelligent agents and to determine which agent did not properly perform its task. It will also be too complicated, if not impossible, to determine good or bad faith in this regard, or to ascertain whether electronic agents intentionally or negligently produced the damage. Such difficulties become even more pressing when an electronic agent delegates some parts of its task to other electronic agents. In such cases, it becomes extremely difficult to determine the source of an agent or its code,17 and to identify the actual course of action that caused the damage.

The position will be further complicated if the hardware and software are dispersed over several sites and maintained by different individuals.18 It may then be difficult to determine whether software or hardware is the real cause of the damage. It will furthermore be difficult, if not impossible, to identify the intelligent agent: does it coincide with the hardware or with the software? Moreover, agents’ mutable shapes, variable roles, and changeable character pose various difficulties in distinguishing them from viruses, which are also subject to polymorphism. How, then, are we to identify the liable electronic agent? Does it still make sense to argue that such agents can be legally responsible? These questions repeat themselves when we contemplate the fact that there are occasions on which humans and software agents communicate in the same way, and hence it is likely that others will be unable to differentiate between human- and machine-originated communications.

According to this objection, intelligent software agents are no more than objects, tools or things that may be owned, transferred, and used by others. Having the power to produce rights and duties through their activities does not yet provide them with full legal competence to bear responsibility. It is possible, however, to view these agents as cognitive tools to which users delegate certain cognitive functions, but it remains difficult to imagine them as legally responsible merely because they play a causal role in producing an effect or in preparing an electronic contract.

This objection, however, can be subjected to many criticisms, since intelligent software agents are always part of a larger environment and so subject to the vagaries of that environment. It is important, then, to reject any attempt to deal with these modules separately, without taking their dynamic environment into consideration. At the same time, we need to avoid confusing the problematic nature of the electronic environment with the inherent characteristics of an electronic agent. Following this line of thinking, we can say that what is critical is how the agent relates to its environment, not the issue of its internal state. Hence what the user may be expected to foresee is only that the agent will exercise its functions properly, or at least sufficiently well for the purposes of its use, as long as the use of such an agent complies with the circumstances of the relevant electronic environment.

The high level of connectivity should not, by itself, preclude an agent from bearing the responsibility for its actions. This collaborative spirit is not only essential for an electronic agent to perform its tasks in such a dynamic and complex environment, but it is also one of the key aspects of intelligence since intelligent programs do not operate exclusively by matching formal symbols, but they cooperate with each other, share knowledge and experiences, and take many different forms depending on the nature of that entirely dispersed atmosphere that they inhabit. In principle, this social capacity for collective action does not represent any real problem in the normal course of events. The difficulties only arise in some exceptional cases where the sub-agents engage in transactions that are not sufficiently related to the task as determined by the person who initiated the original agent.

To be fair-minded, these objectors should recognize that the source of the problem lies in the lack of reliable and efficient control mechanisms. We accept the risk that these modules have wings we cannot clip and which carry them we know not where, but we do not accept reliance on simple laymen’s guides to computer science as the basis for firm conclusions about what computers cannot do.19 We should keep in mind that problems relating to identification and authentication are not unique to electronic agents. Such problems are also experienced with corporations, whose constituents and control mechanisms are likewise subject to change over time.20 However, a number of possible technical and legal solutions could be devised in order to provide an appropriate assurance with reasonable effort. For example, some of these problems could be solved through some form of registration accompanied by digital signatures. In this case, the acts attributed to the agent would be those which are marked by its registration number and signed with its digital signature.21
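
A minimal sketch of how such a registration-and-signature scheme might operate is given below. The registry, agent identifier, and key are assumptions introduced purely for illustration, and a keyed MAC from Python's standard library stands in for a true asymmetric digital signature; the point is only that every act emitted by a registered agent carries its registration number and a verifiable tag, so that the act can later be attributed to that specific agent.

```python
import hashlib
import hmac
import json

# Assumed registry: registration number -> signing key held on the agent's behalf.
REGISTRY = {"AGT-0042": b"registered-secret-key"}

def sign_act(agent_id, payload):
    """Tag an agent's act with its registration number and a verifiable signature."""
    body = json.dumps({"agent": agent_id, "payload": payload}, sort_keys=True).encode()
    tag = hmac.new(REGISTRY[agent_id], body, hashlib.sha256).hexdigest()
    return {"agent": agent_id, "payload": payload, "tag": tag}

def verify_act(message):
    """Check that the act really originates from the registered agent it names."""
    body = json.dumps({"agent": message["agent"], "payload": message["payload"]},
                      sort_keys=True).encode()
    expected = hmac.new(REGISTRY[message["agent"]], body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

act = sign_act("AGT-0042", {"action": "purchase", "item": "camera", "price": 289.0})
print(verify_act(act))  # True: the act is attributable to registered agent AGT-0042
```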

On the other hand, the fact that an electronic agent is owned should not, by itself, preclude it from bearing responsibility, since corporations are legally liable to be blamed for their actions even though they too are owned by stockholders. Perhaps, then, the analogy between an electronic agent and its owner, and a corporation and its stockholders, can be extended. This requires us to be pragmatic in the sense that, instead of asking whether an agent is theoretically allowed to do something, we should ask whether an agent can really do it. We thus need to avoid confusing the notion of capacity with that of the absence of personality.

3.3 Intelligent agents and foreseeability

As a general principle, one should not be able to minimise his liability solely because he used electronic or technical aids, such as software agents, instead of human agents. This implies that a user who has consciously chosen to use a particular agent to interact and carry out a set of tasks in his name should bear the consequences of that choice. Just as he accepts the profits that result from the agent’s actions, he also has to be held liable for any harm caused by his agent. By following this line of thinking, we give the user a strong incentive to choose his electronic agent carefully and to make sure that such agent is properly used and monitored.

The fact that intelligent software agents operate autonomously and not only automatically should not be exploited as an excuse to absolve humans from responsibility, especially if there are actual facts establishing that the user, or the programmer who developed and set up the agent, ordered the agent to steal, perform illegal transactions, or cause harm to third parties by destroying their network property. Even if human control or intervention is quite limited in the case of intelligent software agents, it is not, as some might expect, totally nonexistent. Therefore, the position of the parties involved has to be studied carefully, and a human user should be held responsible unless he proves that he exercised an appropriate level of care to ensure that the agent was properly monitored, that the event was truly unavoidable, and that he adopted all measures suitable to avoid the damage.

Even if we accept that holding someone responsible for the behaviour of an agent is unfair so long as he is unable to foresee that behaviour, we cannot consider the unpredictable results of an agent’s operation as an act of god or force majeure, since these results are related to the well-known unreliable nature of the agent and are therefore not wholly extraneous to its use. What we said about the impossibility of completely forecasting the behaviour of an agent does not mean that we cannot, in some way, anticipate it. Even though the agent’s user does not know the full context of a networked intelligent program, he knew or ought to have known the software agent’s potential risks, its unreliable nature, and its purpose and limits as well. The user should not, then, rely on his ignorance of circumstances which he himself ought to have known in order to transfer blame away from himself.

Foreseeability by itself should not be the prime rationale for assigning the liability since it is an ambiguous criterion that varies from judge to judge, and from one culture to another. What should have been expected by a “reasonable” person changes over time, and depends on custom, public policy, and on the perceived technological sophistication. What is considered as “reasonably foreseeable” to some people is not necessarily foreseeable to others.22 What qualifies as remotely foreseeable to me might be closely foreseeable to you. It is more accurate then to think about other standards that conform to the global nature of the Internet.

The above objection, however, can be subjected to many criticisms since foreseeability is considered as a key issue in determining a person’s liability. If a person could not reasonably have foreseen the outcome and could not have taken steps to avoid its occurrence, then there may be no liability on his part. But one who reasonably should have foreseen a consequence, and was in a position to prevent it, may be liable for it.23 It is worth noting that liability will not depend on whether humans were involved in some capacity, but only on whether the court believes that the action should have been reasonably foreseeable.24

Unlike conventional software applications where the forecast is generally based upon the analysis of the pre-determined instructions or the computational mechanisms, the behaviour of an intelligent agent cannot be anticipated by analysing the computational mechanism or the software of which the agent consists since the code of this software, as we previously observed, is too complex to be studied in time, and the user is usually forbidden to decode or modify it.25 In that case, can we say that the user, who has no knowledge of all instructions included in the agent, knew or ought to have known of the software agent’s potential risks? At least for agents which are sufficiently complex, the question of foreseeability is far from simple, and cannot be solved merely by attributing goals to the system, and assuming that it can act in such a way as to achieve its goals. After that, is the user really in the position to be able to control that risk? Can we state that the user is liable even if the sole cause of damage is an extraordinary force of electronic nature?

It is well understood that an electronic agent may produce some pathological decisions, but this does not mean that a user can precisely forecast these decisions since the nature and timing of these pathological decisions cannot be known in advance. It is further important to recognize that a software agent’s dangerousness does not reveal itself clearly, and hence it will be, as Allen and Widdison admit,26 unfair, and even commercially unreasonable, to hold the human trader bound by unexpected communications just because it was theoretically, or remotely, possible that the computer would produce them.

This objection obviously conflicts with the fact that the user in many cases has no control over certain aspects of his agent, and does not really know how the software agent works, since the code of this software is usually inaccessible and, in any case, too complex to be studied in time. Moreover, this objection ignores the fact that agents learn from their experiences, and hence it is extremely difficult to predict their outcomes in detail. The difficulty in forecasting what computing operations the agent’s software will perform, or what data will shape the agent at the time at which it operates, is a necessary consequence of the very reason for using an agent. If the user could forecast the agent’s behaviour in every possible circumstance, there would then be no need to use an intelligent agent.27

This objection, furthermore, clearly ignores the major role the electronic environment plays in producing such unforeseen decisions. As we mentioned before, intelligent agents have the ability to sense, affect, and act in response to their environment. This ability to affect indeterminate environments requires these agents to change their behaviour dynamically over time in order to cope with such environments, and in doing so they raise the concern that the outcomes of such effects are not always expected, since these environments are unreliable and very complex.

The question that should be asked before making any precise judgment about any proposed system is whether we need it at all. Those who claim that a user must bear the consequences of his choice regardless of the circumstances of the electronic transaction should further ask themselves whether there is any alternative to a software agent for fully engaging in the universe of electronic commerce. It should be noted here that using such agents is not merely a voluntary and free choice; on many occasions it is imposed by the requirements of the online context. This is especially true when we contemplate the complexity of the digital world as well as the massive growth of information and e-businesses available on the Internet.

3.4 Other objections

Some commentators think that software agents are merely coded information and that we would commit serious conceptual mistakes if we attributed legal or moral responsibility to these agents, or if we simply assumed that they possess whatever else we take to be present when we hold human beings responsible for their actions.28 This is because, unlike humans, who are sensitive, self-determined and moral, electronic agents lack a number of conditions which should be fulfilled in order for responsibility to be ascribed, such as emotional abilities, common-sense sensitivity to the constraints of the physical world, the possibility of being guided by fear of sanctions or hope of rewards, some knowledge of the results of actions, and the power to change events. Can we then even imagine the possibility of such agents being subjects of responsibility? According to this objection, a software agent is simply a system whose actions only matter in relation to its use by a person. It thus appears very inaccurate to draw any analogy between such agents and other legal entities, since electronic agents are information systems while other legal entities, such as companies, are social systems.

Furthermore, software agents can neither understand that their actions may result in the formation of a contract nor meet the demands of nonmonetary liability. They also do not have the capacity to be punished, or to be sued. Accordingly, we have to attribute to their users whatever they did since these agents are only capable within the parameters of their programming, and once they are activated by their users.29 Even though software agents have a power to produce rights and duties through their activities, one cannot say that those rights and duties will belong to them since the law does not yet recognize them as legal persons capable of contracting for their own sake. They thus have no interest in those transactions that are concluded through them. That being the case, what is the point of making them the subject of a legal duty?

Treating computers as responsible agents may give them opportunities under which we would forgive them and not hold them responsible, and the plaintiff would thus become largely unprotected. For instance, if we ascertain that an electronic agent was subject to internal malfunction so that it could not behave rationally, or if we discover that this agent had been deprived of an appropriate environment in which to learn, then we would no longer hold it responsible.30 After all, is it logical to relieve human users of all responsibility, and assign this responsibility to software agents? On the other hand, holding the software agent liable might hide the real source of the problem and mask the human creator of the harm that may result from using an agent in an environment for which it was clearly not suitable. Ascribing responsibility to software agents might also be used as an excuse for some people to evade their responsibility and behave recklessly.

Another argument against ascribing responsibility to software agents is that fixing liability on software agents will solve no problems since this will not relieve humans of the responsibility for preparing these systems to take responsibility.31 In that case, what is the point of declaring software agents liable if all responsibility at the end would be translated back to the human users who are still liable to prepare their agents and make them responsible? Why, one might then ask, go through all the trouble from the beginning?

Such objections, however, can be subjected to many criticisms. Once again, these objectors insist that only humans or persons are capable of the free action and autonomous determination that allow responsibility to be ascribed, and thus that only they can be responsible in the eyes of the law and society. By following this line of thinking, such objectors not only confuse the concept of responsibility with the concept of humanity, but also ignore the distinctive features of artificial intelligence and allow metaphysical arguments to take precedence over practical, technical, and commercial considerations. Maybe it is time to begin to recognize that the absence of personality is not necessarily incapacity, and to treat practical competence as a separate factor, for it is quite harsh to put full liability on the shoulders of users (whether or not they exercise an appropriate level of care) just because the law does not yet recognize electronic agents as legal persons.

Those who argue that electronic agents should not be treated as responsible entities on the ground that such agents have no real interest in the transactions concluded through them, and that they are not contracting for themselves, should keep in mind the case of corporations, which contract for the benefit of their members and in order to achieve the purposes and interests of their stockholders. In fact, these corporations do not have any real interests of their own, but this does not preclude them from being responsible. Why then do we not adopt the paradigm of corporations in the case of intelligent agents, and provide such agents with some kind of patrimony and personality to meet the demands of responsibility? There is no reason why intelligent agents might not some day be granted at least some elements of legal personality and provided with a minimum level of patrimonial rights. Our experience with conferring legal personality on corporations indicates the possibility of personifying software agents once they arrive at more advanced levels of autonomy, intelligence, and reliability.32

We accept the general principle according to which whoever profits from the actions of his employee is also to be held liable for any harm the employee causes within the scope of the work assigned to him. But this principle should not apply when the harm is caused by the employee outside the course of his work, and similarly the user should not be made liable for harm caused beyond the work that was assigned to the software agent. Just as we are not liable for the consequences of a human agent’s unauthorized actions,33 so too should humans be absolved, to a certain extent, of liability for the unanticipated results of software intelligence’s pathology.

Those who deny the possibility of software agents being subjects of responsibility on the ground that such agents lack the capacity to be punished should note that punishment by itself is not the purpose in the case at hand; the main practical purpose is to compensate the sufferer, since the losses resulting from software agents will be mostly economic (non-physical) in nature. Moreover, there are many kinds of punishment other than physical punishment through which the philosophy of punishment can be realized. It is by no means certain that entities must be natural or moral persons in order to be subject to punishment. Companies, for instance, are subject to punishment and liability despite the fact that they are not human beings. This clearly shows that we should think about law not only as a concept, but also as a process. In other words, we have to go back to what law is (concept) and what it is designed to achieve (process).

As noted earlier, it does not make sense any more to deal with the advanced generations of software agents as mere neutral tools. Given the considerable level of intelligence, autonomy, and mobility such agents exhibit, it seems sensible to begin considering the principles of vicarious liability and dealing with intelligent agents as employees or even as independent contractors once they arrive at more reliable levels of autonomy, intelligence, mobility, and sophistication. According to this kind of liability, the employer might not be liable for the acts of his employee, if these acts are wholly unconnected with the course of his employment.34 This implies that an employee in so many cases might be responsible and sued for acts alleged to have occurred outside the course and scope of his employment. In such cases, he will be asked not only to answer for the damage caused to other third parties, but also to meet the demands of his employer who might claim contribution from him to recover what he had paid as a compensation for the injured party. The fact that an employee might exclusively be held liable to pay damages raises the question of whether it is still possible to apply the principles of vicarious liability in the case of torts caused by intelligent software agents.

Even though both might perform tasks requiring a high degree of skill or expertise, and might even control the manner in which such tasks are to be done, there are still substantial differences between human employees and software agents which might prevent an analogy being drawn between them for the purpose of applying the principles of vicarious liability. A human employee enjoys legal personality and juristic capacity, and is employed under a contract of service according to which he agrees, in consideration of a wage or other remuneration, to be subject to the supervision of his employer and to provide his own work and skill in the performance of some service for that employer.35 A software agent, by contrast, lacks the legal personality or capacity that would enable it to contract on its own or to provide the required consent to any contract of service. Moreover, unlike human employees, who have separate patrimonies distinct from their employers, such agents have no patrimony or personal assets and are thus unable to pay damages or to satisfy any judgment against them. This means that any liability will practically fall back on the users of such agents, whether or not the acts of such agents were authorized or within the course of the users’ businesses. If that is the case, does it still make sense to consider the application of vicarious liability in cases of torts caused by intelligent software agents?

Even if such agents are provided with a patrimony, that patrimony will not change the matter and will not play any real role in setting limits on the liability of the user, since it is the user who will have provided the patrimony and who will be responsible for paying any additional compensation if the patrimony is insufficient to satisfy a judgment. That is to say, if the patrimony of the agent were not enough to compensate the creditors of that agent, such creditors would sue the user and try to obtain compensation from him. This practically means that the user will ultimately bear all risk of loss. That being the case, does the attribution of a patrimony to a software agent make any sense?

Intelligent agent technology is still in its infancy, and we still need some time to capture all the promises and fruits of this technology. At present, it is still too early to issue a final judgment regarding this technology, or to confer legal personality on its outputs. In fact, this technology still exhibits a limited level of autonomy, and it has not reached the point at which we can consider electronic agents as fully autonomous, intelligent, and mobile, so that it becomes desirable to provide them with legal personality and consider them as distinct and separate parties. Until now, the majority of electronic agents on the market have played the role of assistants that help human users in one or more of the various stages of the buying process, but the final decision in this process usually requires human involvement to make the definitive selection of the product or merchant, or to confirm or reject the transaction entered into.36

However, there will come a day in which we can make some progress on intelligent software technology. We will one day improve the reliability of these agents so that they can be identified accurately, and provided with personality safely. But that day has not yet arrived. Therefore, we should, in the meantime, think about other mechanisms and solutions that address the issue realistically, and take all the relevant factors and difficulties into account.

4 Conclusion

Before proceeding to any further discussions, one might ask: why should the software agent become a subject of responsibility? The discussion of this question in legal literature is not new; it was dealt with even when the computer was exhibiting a very limited degree of intelligence and autonomy. One of the answers that was given at that time, and which still seems valid at the present time in which computers are starting to display considerable levels of autonomy and sophistication, is the answer Bechtel gave when he said that: “One reason to examine the conditions under which responsibility might be differently assigned is that it may be that humans are unable to meet the demands of the responsibility that now rests on them. Then it may be important to consider sharing that responsibility with computer systems.”37

Given the significant autonomy, mobility, intelligence and sophistication displayed by the second generation of software agents, it is essential to re-evaluate the legal status of these agents, and determine what laws must be applied to them since it is clearly not convincing any more to deal with such agents as neutral tools of communication. Even though such agents have not yet arrived at the perfect level of reliability and autonomy which is enough to guarantee a safe assignment of responsibility to them, it is obvious that they, as predicted by many specialists and authors,38 will gain self-awareness, self-programmability, and human-like intelligence in the not too distant future. This implies that many questions and issues will soon emerge especially those relating to the liability for mistakes and actions committed by self-programming agents.

We have to recognize that ascribing responsibility to software agents will not be a magical joystick that solves all problems, and should not be a way to absolve humans of all liability, but rather a way to facilitate the attribution and distribution of responsibility and to make it more realistic and fair. This necessitates examining the role of the relevant parties involved, such as administrators of electronic shopping malls, owners of servers, programmers, and users.

We also have to consider the problem not only from the perspective of the owner or developer of the software agent, but also from the perspective of the counterparty. Counterparties will need to find out whom they are dealing with and whether the party (agent) can be trusted or has enough assets to pay. Otherwise, individuals will be less willing to engage in the universe of electronic commerce, and hence e-commerce will not take off and its growth will be stifled.

In order to avoid a divorce between legal theory and technological practice, and in order for our solutions to be translated successfully into law, it is necessary to recognize the unique characteristics of electronic agents and provide for the possibility that an autonomous electronic agent might operate in a manner unknown, unforeseen or unauthorized by the person who initiated its use. This implies that our solutions must be based on a deep understanding of different aspects of such technology, and must also take into account the environment as a part of a problem. Furthermore, they need to clarify the relationship between electronic agents, programmers, users, and even third parties.

The question of whether software agents should be held responsible cannot be answered quickly with a “yes” or a “no”. Before addressing such question, we have first to deal with several issues of relevance such as the issues of identification and reliability, and those issues relating to the limits of an agent’s responsibility (Where does its responsibility begin and where does it end?) and the limits of what we should let software agents do. We have also to decide how far we are willing to accept the idea of sharing responsibility with such agents.

Although the scenario of ascribing liability to software agents might look really exciting, the truth is that we are not yet there, since the law does not yet recognize these agents as legal entities capable of defending themselves or paying damages. There is no reason, however, why our legal system might not in the future attribute a limited legal subjectivity to intelligent software agents, in order to share liability with them and to the extent necessary to enable these agents to bear the legal consequences arising from certain of their acts.

Footnotes
1

For more information, see Russell and Norvig (1995).

 
2

They are also technically capable of representing buyers and sellers, negotiating with human parties or even with each other, and concluding transactions without any human intervention or knowledge during the conclusion of transactions. For example, see Tete-a-Tete (http://ecommerce.media.mit.edu/tete-a-tete/), an online negotiation system where price and other terms of transactions are handled entirely by software agents.

 
3

See, for example, AuctionBot, where the user specifies a number of parameters, and after that it is up to the agent to manage the auction, monitor the price change, interact with other bidding agents, and compete autonomously in the marketplace for the best bids. Unlike popular online auction sites such as eBay’s AuctionWeb, which require consumers to manage their own negotiation strategies over an extended period of time, AuctionBot can perform tasks that require immediate response to events with no delay while its user is away from a Web interaction. For more information, see R. Guttman, et al., ‘Agent-mediated electronic commerce: A survey’, Knowledge Engineering Review, vol. 13 (2), 1998. See also P. R. Wurman, et al., ‘The Michigan Internet AuctionBot: A Configurable Auction Server for Human and Software Agents’, in Proceedings of the Second International Conference on Autonomous Agents (ICAA-98), New York, ACM Press, 1998, pp. 301–308.

 
4

Contemplate, for example, the recent cases in which Argos.com mistakenly offered Sony televisions for £2.99, Amazon.co.uk erroneously listed HP iPAQ pocket PCs at £7.32 instead of £287 each, and Kodak advertised digital cameras on its website at £100 instead of £329 each.

 
5

A good example of such analyses is the “Guide to Enactment” accompanying the UNCITRAL Model Law, which provides that “The Data messages that are generated automatically by computers without human intervention should be regarded as “originating” from the legal entity on behalf of which the computer is operated”.

 
6

Davis (1998), p. 1148.

 
7

Johnson (1985).

 
8

Such as users, designers, distributors, administrators of the platform, trusted third parties, and owners of the servers.

 
9

Would it be fair to assign liability to the human user when such user is so far away from the transactional environment and he is no longer aware of the form and structure of the software agent nor has he any awareness of the agent’s decision making processes?

 
10

Given that some intelligent agents may have mutated from the original version written by the programmer, or may persist as a result of self-programming, placing the human in the causal chain of liability becomes difficult and harsh.

 
11

Karnow (1996), p. 189.

 
12

For more information, see Bechtel (1985).

 
13

Deborah G. Johnson, supra note 7, p. 55.

 
14

W. Bechtel, supra note 12, p. 305.

 
15

Ibid.

 
16

For more information, see Dennett (1984).

 
17

This becomes more acute if we take into account the fact that intelligent agents can modify their code, and even create new instructions.

 
18

Allen and Widdison (1996).

 
19

P. Hayes, et al., ‘Human Reasoning about Artificial Intelligence’, in E. Dietrich (ed.), Thinking Computers and Virtual Persons (San Diego, CA: Academic Press, 1994), p. 333.

 
20

Kerr (1999).

 
21

From this perspective, Karnow proposed a system akin to the registration system for companies according to which a software agent should be submitted to certification procedures for the purpose of guaranteeing coverage for risks arising out of its use. However, such a system has been criticised because it does not completely solve the problem of identification, and has been seen as an unnecessary expense, which would be unwelcome and superfluous to the needs of those engaging in e-commerce. For more information, see Karnow, supra note 11.

 
22

A person owes a duty of care not to injure those who it can be reasonably foreseen would be affected by his acts or omissions. However, there is still a difficulty in determining what is reasonably foreseeable and what is not. The term “reasonably foreseeable” can be constructed and interpreted broadly, and the scope of “duty of care” is still not completely clear. For an excellent discussion of “reasonable foreseeability” and “duty of care”, see Donoghue v. Stevenson [1932] AC 562 which concerns a decomposed snail found in a bottle of ginger beer. This case posed the issue of the situations to which the law of negligence extends, and the extent to which we can consider the action or harm reasonably foreseeable. The practical effect of this case was to confirm that a manufacturer of products owes a duty to the consumer (end-user) to take reasonable care to prevent any damage or injury to the consumer arising from the product. The other point this case made clear is that there is no need for a contract between plaintiff and defendant for liability in tort to arise.

 
23

See, for example, Cooper v. Horn, 448 S.E.2d 403 (Va.1994). If a flood is reasonably foreseeable, then the law imposes liability on the builders of the dam that fails because it was inadequately constructed and was thus unable to withstand heavy rainfall.

 
24

C. Karnow, supra note 11, p. 179.

 
25

For more information, see G. Sartor, ‘Intentional concepts and the legal discipline of software agents’, in J. Pitt (ed.), Open Agent Societies: Normative Specifications in Multi-Agent Systems (Chichester: John Wiley & Sons Inc, 2003).

 
26

Allen and Widdison, supra note 18, p. 46.

 
27

Giovanni Sartor, ‘Agents in Cyberlaw’, Workshop on The Law of Electronic Agents (LEA2002), as available at http://www.cirfid.unibo.it/~agsw/lea02/pp/Sartor.pdf, on 29/03/04.

 
28

Most philosophers and commentators deny the possibility of computers being subjects of responsibility. See, for example, Jordan (1963).

 
29

See, for example, State Farm Mutual Auto. Ins. Co. v. Brockhurst 453 F.2d 533, 10th Cir. (1972). In this case, the court ruled that the insurance company was bound by the contract formed by its computer (an insurance renewal) since this computer only operated as programmed by the company.

 
30

W. Bechtel, supra note 12, p. 305.

 
31

Ibid, p. 297.

 
32

Leon E. Wein, ‘The responsibility of intelligent artefacts: toward an automated jurisprudence’, Harvard Journal of Law and Technology, Vol.6, 1992, p. 116.

 
33

See Lopez v. McDonald's, 238 Cal. Rptr. 436, 445–446 (Cal. Ct. App. 1987). In this case, it was held that McDonald's owed no duty to plaintiffs, and is not liable for deaths of plaintiffs caused by an unforeseeable mass murder assault at its restaurant.

 
34

See Beard v London General Omnibus Co [1900] 2 QB 530 in which the employer of a bus conductor who in the absence of the driver negligently drove the bus himself was held not vicariously liable. See also Twine v Bean's Express Ltd [1946] 1 All ER 202 when a hitchhiker had been given a lift contrary to express instructions and was fatally injured. In this case, it was held that the employer was not vicariously liable since the servant was doing something totally outside the scope of his employment, namely, giving a lift to a person who had no right whatsoever to be there.

 
35

See Ready Mixed Concrete (South East) Ltd v Minister of Pensions and National Insurance [1968] 2 QB 497, in which it was held that three conditions must be fulfilled for a contract of service to exist. First, the servant agrees, in consideration of a wage or other remuneration, to provide his own work and skill in the performance of some service for his master; secondly, he agrees, expressly or impliedly, that in the performance of that service he will be subject to the other's control in a sufficient degree to make that other master; thirdly, the other provisions of the contract are consistent with its being a contract of service.

 
36

Emily M. Weitzenboeck, ‘Electronic Agents and the Formation of Contracts’, International Journal of Law and Information Technology, Vol. 9. Issue 3, 2001, p. 209.

 
37

W. Bechtel, supra note 12, p. 297.

 
38

See, for example, Kurzweil (1999). See also Moravec (1999).

 
