Abstract
When Ken Jennings, 74-time winner of the Jeopardy! TV quiz, lost against a room-size IBM computer, he wrote on his video screen: ‘I, for one, welcome our new computer overlords’ (citing a popular ‘Simpsons’ phrase). The New York Times wrote that ‘for IBM’ this was ‘proof that the company has taken a big step toward a world in which intelligent machines will understand and respond to humans, and perhaps inevitably, replace some of them’ (Markoff 2011). Richard Powers anticipated this event in his 1995 novel Galatea 2.2, whose Helen is ‘a box’ that ‘had learned how to read, powered by nothing more than a hidden, firing profusion. Neural cascade, trimmed by self-correction, (…)’ (at 31). Powers describes an experiment in which a neural net is trained to take the Master’s Comprehensive Exam in English literature. The novel traces the relationship that develops between the main character and the computer he is teaching, all the while raising and rephrasing the questions that have haunted AI research. This chapter addresses the potential implications of engaging computing systems as smart competitors or smart companions, raising the question of what it would take to acknowledge their agency by giving them legal personhood.
Interestingly, it is the science part of the narrative, the tale of a machine that learned to live, that proves to be the more moving, the more human one.
Cohen (1995)
Notes
- 1.
Naming it Helen does remind one of Trojan horses, a nice overlap between world literature and computer science.
- 2.
- 3.
A remarkable attempt to link the fundamental uncertainties uncovered by the natural sciences with the humanities was made by Prigogine and Stengers in their well-known discussion of chaos theory. The original French title of their book was La nouvelle alliance. Métamorphose de la science (1979).
- 4.
Turing’s (1950) article is a very sophisticated and unorthodox exploration of what he calls ‘the imitation game’. Many of the objections raised since then were already foreseen and countered by Turing in the article itself; see e.g. Russell and Norvig (2009: 1020–1030). The point is not whether one agrees, but to detect to what extent his predictions have come true. See Floridi and Taddeo (2009) for an evaluation of the 2008 Loebner Contest, a yearly event that imitates the Turing Test and nominates ‘the most human machine’ as well as ‘the most human human’. See Christian (2011), who competed as a human in the 2009 Loebner Contest and came out as ‘the most human human’.
- 5.
See Christian (2011), chapter 5 ‘Getting Out of Book’, on the reliance on registered games.
- 6.
See Christian (2011), chapter 7 ‘Barging In’, on the silliness as well as the rigidity of much chatbot conversation.
- 7.
Futurist Kurzweil (2005) has coined the term singularity for the moment in time when all problems that are intractable now will be resolved. This will be the moment that ‘humans transcend biology’. Only those who believe that all problems that matter are computable will be relieved to hear this. My point is that even if all problems are computable they are usually computable in different ways, with different outcomes. Back to square one?
- 8.
This kind of robust knowledge, however, requires transparency about the translations involved, and thus access to the whole process of knowledge construction. This is not possible as long as such knowledge production is protected by trade secrets and/or intellectual property rights.
- 9.
- 10.
See the IBM White Paper (2011): To achieve the most right answers (in the case of Jeopardy: the most right questions) at a competitive speed, IBM deploys: (1) massive parallelism to consider multiple interpretations and hypotheses; (2) many different experts to integrate, apply and contextually evaluate loosely coupled probabilistic questions with content analysis; (3) confidence estimation on the basis of a range of combined scores; and finally (4) the integration of deep and shallow knowledge, leveraging many loosely formed ontologies.
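The confidence-estimation step (3) can be illustrated, very schematically, as a weighted combination of evidence scores squashed into a probability. The scorers, weights and numbers below are hypothetical stand-ins for illustration, not IBM’s actual model:

```python
import math

# Minimal sketch of confidence estimation over combined evidence scores.
# Each candidate answer is scored by several 'experts'; a logistic
# combination turns the weighted sum into a 0..1 confidence.
def confidence(scores, weights, bias=0.0):
    """Logistic combination of per-scorer evidence into a confidence value."""
    z = bias + sum(w * s for w, s in zip(weights, scores))
    return 1.0 / (1.0 + math.exp(-z))

# Two candidate answers, each scored by three hypothetical scorers
# (e.g. passage match, answer-type match, source popularity):
weights = [2.0, 1.5, 0.5]
cand_a = confidence([0.9, 0.8, 0.3], weights, bias=-2.0)
cand_b = confidence([0.4, 0.2, 0.9], weights, bias=-2.0)
best = max((cand_a, "A"), (cand_b, "B"))  # candidate A wins on confidence
```

The design point is that no single scorer decides: weak evidence from many loosely coupled experts is pooled, and the system only ‘buzzes in’ when the combined confidence clears a threshold.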
- 11.
Data science is ‘the new kid on the block’. It provides a set of tools to infer knowledge from Big Data and is now used across the sciences: the natural sciences, the life sciences, medicine and healthcare, the humanities and the social sciences, as well as marketing and customer relationship management, forensic science and police intelligence. See notably Mitchell (2006), Fayyad et al. (1996), Custers (2004), and Hildebrandt and Gutwirth (2008).
- 12.
In fact, in the case of the game of Jeopardy, Watson has to find precise questions to specific answers.
- 13.
Moore (1965), Intel co-founder, predicted that the computing power of chips would increase exponentially, doubling every 2 years. The prediction became a goal for the industry, one that has so far been met.
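The arithmetic behind the prediction is easy to make concrete. A few lines of Python show how quickly doubling every 2 years compounds; the 1971 starting figure is merely illustrative:

```python
# Illustration of Moore's prediction: counts doubling every 2 years.
def transistors(start_count, start_year, year, doubling_period=2):
    """Project a transistor count under exponential doubling."""
    return start_count * 2 ** ((year - start_year) / doubling_period)

# Starting from an illustrative 2,300-transistor chip in 1971:
projected_2011 = transistors(2_300, 1971, 2011)
print(f"{projected_2011:,.0f}")  # 40 years = 20 doublings, roughly a million-fold
```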
- 14.
- 15.
The addition of 2.2 to Galatea seems to refer to version 2.2 of the program that constitutes Helen.
- 16.
For a more extensive discussion see Cole (2009).
- 17.
Shaw’s 1913 play Pygmalion (Shaw 1994) was the inspiration for the romantic musical My Fair Lady (world premiere 1956 on Broadway). Note that Galatea translates as ‘she who is white as milk’, which seems a ‘fair’ translation of Shaw’s Eliza Doolittle, and recall that Weizenbaum’s therapeutic machine was called Eliza.
- 18.
This is – evidently – not to discredit Shakespeare or The Merchant of Venice. It is to say that we cannot take for granted what is relevant and should not too easily think in terms of a canon.
- 19.
Huff argues that the lack of the legal institution of the corporation ‘caused’ the stagnation of the sciences in the Islamic and Chinese traditions.
- 20.
- 21.
- 22.
See e.g. Wells (2001) at 70: ‘Davis proposes a variation based on social contract theory such that punishment for a strict liability offence is related to the unfair advantage gained by the offender. The principle of just punishment requires us, Davis asserts, to measure punishment in accordance with the seriousness of the harm, but how is seriousness to be measured? One suggested measure could be the unfair advantage the offender gains by doing what the law forbids’. She refers to Davis (1985).
- 23.
GOSS and MASS are my acronyms. Note that GOSS refers to quantitative social science, not to theoretical social science that builds on e.g. Weber or Durkheim. Most simulations of multi-agent systems still depend on methodological individualism, because this simplifies the calculation of emergent behaviours. See e.g. Helbing and Balietti (2011) who suggest that regarding the social sciences ‘investments into experimental research and data mining must be increased to reach the standards in the natural and engineering sciences’; they term this a strategy ‘to quickly increase the objective knowledge about social and economic systems’.
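The methodologically individualist style of simulation described in this note can be sketched in a few lines: each agent follows only a local rule, and a group-level pattern emerges from the interactions. The rule and parameters below are illustrative, not drawn from the cited literature:

```python
import random

# Toy multi-agent simulation in the methodologically individualist style:
# agents on a ring each hold a binary opinion and update only on a local
# majority rule, yet macro-level blocks of agreement emerge.
def step(opinions):
    """Each agent adopts the majority opinion of itself and its two neighbours."""
    n = len(opinions)
    return [1 if opinions[(i - 1) % n] + opinions[i] + opinions[(i + 1) % n] >= 2
            else 0
            for i in range(n)]

random.seed(0)
opinions = [random.randint(0, 1) for _ in range(20)]
for _ in range(20):
    opinions = step(opinions)
# After repeated purely local updates the population settles into stable
# blocks of agreement -- an 'emergent' pattern no individual rule mentions.
```

The simplification the note points to is visible here: because each agent’s behaviour depends only on its own neighbourhood, the emergent outcome can be computed cheaply, but nothing in the model captures institutions or shared meaning.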
- 24.
Wells refers to Dan-Cohen (1986).
- 25.
I am using the machine-metaphor here to draw attention to non-human systems that consist of interacting human and/or non-human agents, though some would claim that individual human beings are also ‘intelligent machines’.
- 26.
Note that for an entity to act as an agent on behalf of a principal, the agent must be a legal subject. Only then can the ‘intelligent machine’ bind the principal to a contract with a third party.
References
Allen, R., and R. Widdison. 1996. Can computers make contracts? Harvard Journal of Law and Technology 9(1): 25–52.
Black, E. 2002. IBM and the Holocaust. The strategic alliance between Nazi Germany and America’s most powerful corporation. New York: Crown.
Bourgine, P., and F.J. Varela. 1992. Towards a practice of autonomous systems. In Toward a practice of autonomous systems: Proceedings of the first European conference on artificial life, ed. F.J. Varela and P. Bourgine, xi–xviii. Cambridge, MA: MIT Press.
Brooks, R. 1991. Intelligence without reason. In: Proceedings of the twelfth international joint conference on artificial intelligence, 569–595, Sydney.
Chopra, S., and L.F. White. 2011. A legal theory for autonomous artificial agents. Ann Arbor: University of Michigan Press.
Christian, B. 2011. The most human human. What talking with computers teaches us about what it means to be alive. New York: Doubleday.
Citron, D.K. 2007. Technological due process. Washington University Law Review 85: 1249–1313.
Cohen, R. 1995. Pygmalion in the computer lab. New York Times, July 23. Available at: http://www.nytimes.com/books/98/06/21/specials/powers-galatea.html. Last accessed 10 Oct 2012.
Cole, D. 2009. The Chinese room argument. In The Stanford encyclopedia of philosophy, E.N. Zalta, Winter 2009 ed. Available at: http://plato.stanford.edu/archives/win2009/entries/chinese-room/. Last accessed 14 Aug 2012.
Cover, R., M. Minow, M. Ryan, and A. Sarat. 1995. Narrative, violence, and the law. The essays of Robert Cover. Ann Arbor: University of Michigan Press.
Custers, B. 2004. The power of knowledge. Ethical, legal, and technological aspects of data mining and group profiling in epidemiology. Nijmegen: Wolf Legal Publishers.
Dahiyat, E.A.R. 2010. Intelligent agents and liability: Is it a doctrinal problem or merely a problem of explanation? Artificial Intelligence and Law 18(1): 103–121.
Damasio, A.R. 2000. The feeling of what happens: Body and emotion in the making of consciousness. New York: Harcourt Inc.
Dan-Cohen, M. 1986. Rights, persons, and organizations: A legal theory for bureaucratic society. London: University of California Press.
Davis, M. 1985. How to make the punishment fit the crime? In Criminal justice, ed. J.R. Pennock and J.M. Chapman. New York/London: New York University Press.
De Doelder, H., and K. Tiedemann (eds.). 1995. Criminal liability of corporations. Dordrecht: Kluwer Law International.
De Mul, J., M. Coolen, and H. Ernste (eds.). (forthcoming). Artificial by nature. Plessner’s philosophical anthropology. Perspectives and prospects. Amsterdam: Amsterdam University Press.
Dewey, J. 1926. The historic background of corporate legal personality. Yale Law Journal 35(6): 655–673.
Dreyfus, H.L. 1979. What computers can’t do: The limits of artificial intelligence. New York: Harper & Row.
Dreyfus, H.L. 1992. What computers still can’t do: A critique of artificial reason. Cambridge, MA/London: MIT Press.
Duff, R.A. 2001. Punishment, communication, and community. Oxford: Oxford University Press.
Eser, A., G. Heine, and B. Huber (eds.). 1999. Criminal responsibility of legal and collective entities. International colloquium Berlin 1998. Freiburg: Edition Iuscrim.
Fayyad, U.M., G. Piatetsky-Shapiro, P. Smyth, and R. Uthurusamy. 1996. Advances in knowledge discovery and data mining. Meno Park: AAAI Press/MIT Press.
Fisse, B., and J. Braithwaite. 1993. Corporations, crime and accountability. Cambridge: Cambridge University Press.
Floridi, L., and M. Taddeo. 2009. Turing’s imitation game: Still an impossible challenge for all machines and some judges – An evaluation of the 2008 Loebner contest. Minds and Machines 19(1): 145–150.
French, P.A. 1979. The corporation as a moral person. American Philosophical Quarterly 16(3): 207–215.
Gaakeer, J. 1998. Hope springs eternal: An introduction to the work of James Boyd White. Amsterdam: Amsterdam University Press.
Helbing, D., and S. Balietti. 2011. From social data mining to forecasting socio-economic crises. The European Physical Journal Special Topics 195(1): 3–68.
Hildebrandt, M. 2011a. Criminal liability and ‘smart’ environments. In Philosophical foundations of criminal law, ed. A. Duff and S. Green. Oxford: Oxford University Press.
Hildebrandt, M. 2011b. Autonomic and autonomous “thinking”: Preconditions for criminal accountability. In Law, human agency and autonomic computing, ed. M. Hildebrandt and A. Rouvoy, 141–160. Abingdon: Routledge.
Hildebrandt, M. (forthcoming). Eccentric positionality as a precondition of the criminal liability of artificial life forms. In Artificial by nature. Plessner’s philosophical anthropology. Perspectives and prospects, ed. J. de Mul, M. Coolen, and H. Ernste. Amsterdam: Amsterdam University Press.
Hildebrandt, M., and S. Gutwirth. 2008. Profiling the European citizen. Cross-disciplinary perspectives. Dordrecht: Springer.
Hildebrandt, M., and A. Rouvroy. 2011. Law, human agency and autonomic computing. The philosophy of law meets the philosophy of technology. Abingdon: Routledge.
Huff, T.E. 2003. The rise of early modern science. Islam, China, and the West. Cambridge: Cambridge University Press.
IBM White Paper. 2011. Watson – A system designed for answers. The future of workload optimization. Available at: http://www.itworld.com/business/242693/watson-system-designed-answers-future-workload-optimized-systems-design. Last accessed 10 Oct 2012.
Ihde, D. 1991. Instrumental realism: The interface between philosophy of science and philosophy of technology. Bloomington: Indiana University Press.
Ihde, D. 2008. Ironic technics. Copenhagen: Automatic Press.
Karnow, C.E.A. 1997. Future codes: Essays in advanced computer technology and the law. Boston: Artech House.
Koops, B.-J., M. Hildebrandt, and D.-O. Jacquet-Chiffelle. 2010. Bridging the accountability gap: Rights for new entities in the information society? Minnesota Journal of Law Science & Technology 11(2): 497–561.
Kranzberg, M. 1986. Technology and history: Kranzberg’s laws. Technology and Culture 27(3): 544–560.
Kurzweil, R. 2005. The singularity is near: When humans transcend biology. New York: Viking.
Le, Q.V., M. Ranzato, R. Monga, M. Devin, K. Chen, G.S. Corrado, J. Dean, and A.Y. Ng. 2012. Building high-level features using large scale unsupervised learning. In Proceedings of the 29th International Conference on Machine Learning (ICML) 2012. Madison: Omnipress. Available at: http://arxiv.org/abs/1112.6209. Last accessed 10 Oct 2012.
Leenes, R.E. 1998. Hercules of Karneades. Hard cases in recht en rechtsinformatica. Enschede: Twente University Press.
Markoff, J. 2011. On ‘Jeopardy!’ Watson win is all but trivial. The New York Times. Available at: http://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html. Accessed 19 Oct 2012.
Markoff, J. 2012. How many computers to identify a cat? 16,000. New York Times, June 25.
Minsky, M. 1988. The society of mind. New York: Simon & Schuster.
Minsky, M. 2006. The emotion machine. New York: Simon & Schuster.
Mitchell, T.M. 2006. The discipline of machine learning (Technical Report CMU-ML-06-108). Pittsburgh: Carnegie Mellon University, School of Computer Science. Available at: http://www-cgi.cs.cmu.edu/~tom/pubs/MachineLearningTR.pdf. Last accessed 10 Oct 2012.
Moore, G.E. 1965. Cramming more components onto integrated circuits. Electronics Magazine 38(8): 114–117.
Pfeifer, R., and J. Bongard. 2007. How the body shapes the way we think. A new view of intelligence. Cambridge, MA: MIT Press.
Picard, R. 1995. Affective computing. Cambridge, MA: MIT Press.
Plessner, H. 1975. Die Stufen des Organischen und der Mensch. Einleitung in die philosophische Anthropologie. Berlin: De Gruyter.
Powers, R. 1995. Galatea 2.2. New York: Picador.
Powers, R. 2011. What is artificial intelligence? The New York Times, February 5.
Rosu, A. 2002. Parody as cultural memory in Richard Powers’s Galatea 2.2. Connotations 12(2/3): 139–154.
Russell, S., and P. Norvig. 2009. Artificial intelligence: A modern approach. Upper Saddle River: Prentice Hall.
Sartor, G. 2002. Agents in cyberlaw. In The law of electronic agents: Selected revised papers. Proceedings of the workshop on the Law of Electronic Agents (LEA 2002), ed. G. Sartor, 3–12. Bologna: CIRSFID Università di Bologna.
Searle, J. 1980. Minds, brains, and programs. The Behavioral and Brain Sciences 3(3): 417–457.
Shannon, C.E. 1948a. A mathematical theory of communication. Bell System Technical Journal 27(3): 379–423.
Shannon, C.E. 1948b. A mathematical theory of communication. Bell System Technical Journal 27(4): 623–656.
Shaw, G.B. 1994. Pygmalion. Mineola, NY: Dover Publications.
Simon, H.A. 1996. The sciences of the artificial. Cambridge, MA: MIT Press.
Solum, L.B. 1992. Legal personhood for artificial intelligences. North Carolina Law Review 70(2): 1231–1287.
Steels, L. 1995. When are robots intelligent autonomous agents? Robotics and Autonomous Systems 15: 3–9.
Teubner, G. 2006. Rights of non-humans? Electronic agents and animals as new actors in politics and law. Journal of Law and Society 33: 497–521.
Turing, A.M. 1950. Computing machinery and intelligence. Mind 59(236): 433–460.
Van der Linden-Smith, T. 2001. Een duidelijk geval: geautomatiseerde afhandeling, NOW/ITeR-serie 41. Den Haag: SDU Uitgevers.
Velasquez, J.D. 1998. Modeling emotion-based decision making. In Proceedings of the 1998 fall symposium emotional and intelligent: The tangled knot of cognition, Technical Report FS-98-03, ed. D. Canamero, 164–169. Menlo Park: AAAI Press. Available at: http://www.global-media.org/neome/docs/PDF's/01%20-%20the%20best%20ones/emotional%20agents.pdf. Last accessed 10 Oct 2012.
Weizenbaum, J. 1976. Computer power and human reason: From judgment to calculation. San Francisco: W.H. Freeman & Co.
Wells, C. 2001. Corporations and criminal responsibility. Oxford: Oxford University Press.
Wettig, S., and E. Zehender. 2004. A legal analysis of human and electronic agents. Artificial Intelligence and Law 12(1–2): 111–135.
White, J.B. 1990. Justice as translation: An essay in cultural and legal criticism. Chicago: University of Chicago Press.
Wiener, N. 1948. Cybernetics: Or control and communication in the animal and the machine. Cambridge, MA: MIT Press.
© 2013 Springer Science+Business Media Dordrecht.
Hildebrandt, M. (2013). From Galatea 2.2 to Watson – And Back?. In: Hildebrandt, M., Gaakeer, J. (eds) Human Law and Computer Law: Comparative Perspectives. Ius Gentium: Comparative Perspectives on Law and Justice, vol 25. Springer, Dordrecht. https://doi.org/10.1007/978-94-007-6314-2_2
Print ISBN: 978-94-007-6313-5
Online ISBN: 978-94-007-6314-2