
Law and software agents: Are they “Agents” by the way?

Published in: Artificial Intelligence and Law

Abstract

Using intelligent software agents in the world of e-commerce may give rise to many difficulties, especially with regard to the validity of agent-based contracts and the attribution of liability for the actions of such agents. This paper thus critically examines the main approaches that have been advanced to deal with software agents, and proposes the gradual approach as a way of overcoming the difficulties of such agents by adopting different standards of responsibility depending on whether the action is done autonomously by an unattended software agent or automatically by an attended one. Throughout this paper, it is argued that the introduction of "one size" regulation without sufficient consideration of the nature of software agents, or of the environments in which they communicate, might lead to a divorce between legal theory and technological practice. It is also concluded that it is incorrect to treat software agents as if they were either legal persons or nothing at all, without accounting for the fact that there are various kinds of such agents endowed with different levels of autonomy, mobility, intelligence, and sophistication. This paper is not intended to provide the final answer to all problematic questions posed by the emergence of intelligent software agents; rather, it is designed to provide some kind of temporary relief until such agents reach a more reliable and autonomous level whereby law begins to regard them, rather than their users, as the source of the relevant action.


Notes

  1. The same conclusion was drawn by Allen and Widdison who asked the question, “Is it fair, or even commercially reasonable, to hold the human trader bound by unexpected communications just because it was theoretically possible that the computer would produce them?”. See (Allen and Widdison 1996, p. 46).

  2. In many cases, it may be extremely difficult for a lay user to determine exactly where the fault lay and to identify the source of the negligence that was responsible for the defect, and whether this negligence was in the design of the system, in the operation of the system, or in the reliance on the output of that system.

  3. It is useful in this regard to consider the case of companies, which can be subject to financial penalties and liability despite the fact that they are not human beings and cannot be imprisoned.

  4. See, for example, Andrade et al. (2007), Karnow (1994), Solum (1992), and Allen and Widdison (1996).

  5. For more information, see Sartor (2002).

  6. See paragraph 57 of the European Parliament’s Resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)).

  7. See paragraph 59(e) of the European Parliament’s Resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)).

  8. It is useful here to contemplate the substantial differences between companies and software agents in terms of the structure, function or nature. While a company can be made up of one or many individuals that make up the controlling mind of the entity, a software agent is a set of algorithms designed to guide the agent, make it smart, and provide it with the ability to perform logical inference. For comparison between software agents and companies, see Chen and Burgess (2019).

  9. For more information, see Nosworthy (1998) and Solaiman (2016).

  10. This is not the place to detail these problems, but the reference here does confirm the futility of treating software agents as legal persons that might purchase insurance.

  11. See for example Fischer (1997), Kerr (1999) and Smed (1998).

  12. Restatement (Second) of Agency § 1 (1958).

  13. Commercial Agents (Council Directive) Regulations 1993, SI 1993/3053, reg 3.

  14. Intelligent software agents, and surely future generations of such agents, are not amenable to full control or even open to detailed direction and instructions issued by human users. This is due to their autonomy, the incredible speed with which they carry out their functions, and the programmers' inability, after a while, to understand how they reason or how they will respond to unprecedented situations. For more information, see Lehman-Wilzig (1981).

  15. For more information, see Bechtel (1985, p. 297).

  16. A good example here is BargainFinder, which is regarded as the first shopping agent to have appeared on the Internet. This system assists users interested in music compact discs in finding a desired CD: the user provides it with the name of a specific music CD, and it then returns the price (including shipping costs) being charged at each of a number of online music stores, together with the necessary hyperlinks to order it.
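
     The workflow described in this note can be sketched roughly as follows. This is only an illustration: BargainFinder's actual implementation is not described in the source, so the store names, catalogue data, prices, and URLs below are invented, and a real agent would query live store sites rather than a local dictionary.

     ```python
     from dataclasses import dataclass


     @dataclass
     class Offer:
         store: str
         price: float  # price including shipping costs
         url: str      # hyperlink to order the CD


     # Hypothetical stand-in for querying each online store's catalogue.
     CATALOGUES = {
         "StoreA": {"Abbey Road": 12.99},
         "StoreB": {"Abbey Road": 11.49},
     }


     def find_offers(title: str) -> list[Offer]:
         """Return all stores' offers for `title`, cheapest first."""
         offers = [
             Offer(store, prices[title], f"https://{store.lower()}.example/order")
             for store, prices in CATALOGUES.items()
             if title in prices
         ]
         return sorted(offers, key=lambda o: o.price)


     # The user supplies a CD title; the agent compares prices across stores.
     best = find_offers("Abbey Road")[0]
     print(best.store, best.price)
     ```

     The point of the sketch is the division of labour the note describes: the human user supplies only the title, while the agent performs the comparison and presents ranked results with ordering links.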

  17. For more information, see Russell and Norvig (2016).

  18. This possibility was mentioned by Sartor. For more information, see Sartor (2002).

  19. It should, however, be noted here that Regulation (EU) No 910/2014 of the European Parliament and of the Council on electronic identification and trust services for electronic transactions in the internal market (eIDAS Regulation) clearly provides that only human persons can sign. According to Art. 3(9), "signatory" means a natural person who creates an electronic signature. Therefore, certificates for electronic signatures can no longer be issued to legal persons. Instead, legal persons can use certificates for electronic seals in order to ensure the integrity and origin of data. Electronic seals are similar to electronic signatures, but are available only to legal persons.

  20. In an environment where many parties are unwilling to take full responsibility for the actions of their intelligent software, or are unable to be fully accountable for the unexpected events that happen during the conduct of online business, insurance companies might play a role in making the distribution of liability more realistic and smooth. However, such a role will not be easy at all. On the one hand, insuring the risk posed by the use of intelligent agents still faces difficulties in checking, assessing, or analysing the operations of such agents. On the other hand, agent insurance may not be appropriate for lay users or consumers who rarely, irregularly, or unknowingly use software agents to conduct simple or routine transactions, without knowledge of the particular agents they are using. This gives priority to the solution based on creating online companies that regularly use software intelligence technology to transact business.

  21. Such as the right to cancel the contract under certain conditions.

  22. See, for example, St Albans City and District Council v. International Computers Ltd [1995] F.S.R. 686; [1996] 4 All E.R. 481, CA, relating to a faulty computer program designed by ICL for St Albans which resulted in a loss of nearly £1 million. In this case, the court held that the clause in ICL's standard terms and conditions limiting its liability to £100,000 was unreasonable, considering, among other things, that ICL had the resources to remedy the damage as well as an insurance policy of £50 million. See also Salvage Association v. Cap Financial Services [1995] F.S.R. 654, in which the court held that a clause limiting liability to £25,000 was unreasonable because the supplier, who could obtain insurance far more easily and cheaply than the customer, already had insurance covering up to £5 million in damages.

  23. Just because the terms are somehow available or accessible does not mean that they are properly displayed. See Interfoto Picture Library Ltd v. Stiletto Visual Programmes Ltd [1989] Q.B. 433, [1988] 1 All E.R. 348, relating to unusual charges for keeping pictures. In this case, the court ruled that the vendor had a duty to draw attention to particularly surprising or onerous terms using boldface type or a separately attached note. See also Microstar v. Formgen, 942 F. Supp. 1312 (S.D. Cal. 1996), where the court admonished the merchant for putting the restrictive terms in a separate, non-cross-referenced file that the customer did not necessarily have to review.

  24. This necessitates that software agents be designed in a way that allows them to send an acknowledgement of receipt or to register relevant contractual events, and then forward and report these events back to the user. Such notification can be performed through e-mail, Short Message Service (SMS), or through linking or other web features and data transmission.
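
     The design suggested in this note can be sketched minimally as follows. The class name, event fields, and notification channel are assumptions for illustration only; a real agent would route notifications through an e-mail or SMS gateway rather than a plain callback.

     ```python
     import datetime


     class ContractEventLog:
         """Registers contractual events and reports them back to the user."""

         def __init__(self, notify):
             self.events = []
             self.notify = notify  # e.g. an e-mail or SMS gateway function

         def record(self, kind: str, detail: str) -> dict:
             event = {
                 "kind": kind,    # e.g. "offer", "acceptance", "acknowledgement"
                 "detail": detail,
                 "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
             }
             self.events.append(event)                # register the event
             self.notify(f"{kind}: {detail}")         # report back to the user
             return event


     # Illustrative use: collect notifications in a list standing in for
     # an e-mail or SMS channel.
     messages = []
     log = ContractEventLog(messages.append)
     log.record("acknowledgement", "order #123 received by seller agent")
     ```

     The design choice the note implies is that registration and reporting happen together, so the user always has a contemporaneous record of what the agent has done on their behalf.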

  25. See, for example, Specht v. Netscape Communications Corp., 150 F. Supp. 2d 585 (SDNY2001), aff’d. 306 F.3d 17 (2nd Cir.2002), in which the court ruled that the software license agreement in which reference to terms was too obscure was not binding because a binding contract means that both parties know of the terms and agree to them. Mere downloading without first having to click through a license is not enough by itself to establish a valid contract. This shows how important it is to incorporate terms in the contract and bring certain key terms to the purchaser’s attention before the contract is finalised.

References

  • Allen T, Widdison R (1996) Can computers make contracts? Harv J L Tech 9(25):26–52

  • Andrade F et al (2007) Contracting agents: legal personality and representation. Artif Intell Law 15(4):357–373

  • Bechtel W (1985) Attributing responsibility to computer systems. Metaphilosophy 16(4):296–306

  • Chen J, Burgess P (2019) The boundaries of legal personhood: how spontaneous intelligence can problematise differences between humans, artificial intelligence, companies and animals. Artif Intell Law 27(1):73–92

  • Chissick M, Kelman A (2002) Electronic commerce: law and practice. Sweet & Maxwell, London

  • Dahiyat E (2010) Intelligent agents and liability: is it a doctrinal problem or merely a problem of explanation? Artif Intell Law 18(1):103–121

  • Finocchiaro G (2003) The conclusion of the electronic contract through "software agents": a false legal problem? Brief consideration. CLSR 19:20–24

  • Fischer J (1997) Computers as agents: a proposed approach to revised U.C.C. Article 2. Indiana Law J 72(2):545–570

  • Jordan N (1963) Allocation of functions between man and machines in automated systems. J Appl Psychol 47(3):161–165

  • Karnow C (1994) The encrypted self: fleshing out the rights of electronic personalities. J Marshall J Comput Inf Technol Privacy Law 13:1–16

  • Karnow C (1996) Liability for distributed artificial intelligences. Berkeley Technol Law J 11:147–204

  • Kerr I (1999) Spirits in the material world: intelligent agents as intermediaries in electronic commerce. Dalhous Law J 22(2):189–249

  • Lehman-Wilzig S (1981) Frankenstein unbound: towards a legal definition of artificial intelligence. IPC Bus Press 30:442–457

  • Lloyd I (2017) Information technology law. Oxford University Press, Oxford

  • Millar J, Kerr I (2016) Delegation, relinquishment, and responsibility: the prospect of expert robots. In: Calo R, Froomkin A, Kerr I (eds) Robot law. Edward Elgar Publishing Ltd, Cheltenham, pp 102–127

  • Nosworthy J (1998) The Koko dilemma: a challenge to legal personality. South Cross Univ Law Rev 2:1–23

  • Russell S, Norvig P (2016) Artificial intelligence: a modern approach. Pearson Education Limited, England

  • Sartor G (2002) Agents in cyberlaw. In: Proceedings of the workshop on the law of electronic agents (LEA 2002)

  • Sartor G (2003) Intentional concepts and the legal discipline of software agents. In: Pitt J (ed) Open agent societies: normative specifications in multi-agent systems. Wiley, Chichester

  • Sartor G (2009) Cognitive automata and the law: electronic contracting and the intentionality of software agents. Artif Intell Law 17(4):253–290

  • Smed S (1998) Intelligent software agents and agency law. Santa Clara Comput High Technol Law J 14:503–507

  • Solaiman S (2016) Legal personality of robots, corporations, idols and chimpanzees: a quest for legitimacy. Artif Intell Law 25(2):155–179

  • Solum L (1992) Legal personhood for artificial intelligences. NC Law Rev 70:1231–1287

  • Starke J, Seddon N, Ellinghaus M (1992) Cheshire and Fifoot's law of contract. Butterworths, Sydney

  • Thomas J (1976) Textbook of Roman law. North-Holland Publishing Company, Amsterdam

  • Weitzenboeck E (2001) Electronic agents and the formation of contracts. Int J Law Inf Technol 9(3):204–234


Author information

Corresponding author

Correspondence to Emad Abdel Rahim Dahiyat.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Cite this article

Dahiyat, E.A.R. Law and software agents: Are they “Agents” by the way?. Artif Intell Law 29, 59–86 (2021). https://doi.org/10.1007/s10506-020-09265-1
