What is it like to encounter an autonomous artificial agent?
Following up on Thomas Nagel’s paper “What is it like to be a bat?” and Alan Turing’s essay “Computing Machinery and Intelligence,” it is claimed that successful interaction between human beings and autonomous artificial agents depends more on which characteristics human beings ascribe to the agent than on whether the agent really has those characteristics. It is argued that Masahiro Mori’s concept of the “uncanny valley,” together with evidence from several empirical studies, supports this claim. Finally, some tentative conclusions concerning the moral implications of the arguments presented here are drawn.
Keywords: Autonomous artificial agent · Turing test · Uncanny valley · Moral responsibility
- Breazeal C, Brooks R (2004) Robot emotions: a functional perspective. In: Fellous J, Arbib M (eds) Who needs emotions? Oxford University Press, New York, pp 271–310
- Friedman B, Kahn PH Jr (1992) Human agency and responsible computing: implications for computer system design. J Syst Softw 17:7–14
- Kidd CD, Taggart W, Turkle S (2006) A sociable robot to encourage social interaction among the elderly. In: Proceedings of the 2006 IEEE international conference on robotics and automation, pp 3972–3976
- Mori M (1970) The uncanny valley. Energy 7:33–35
- Rickenberg R, Reeves B (2000) The effects of animated characters on anxiety, task performance, and evaluations of user interfaces. In: Proceedings of the SIGCHI conference on human factors in computing systems. ACM, New York, pp 49–56
- Taggart W, Turkle S, Kidd CD (2005) An interactive robot in a nursing home: preliminary remarks. In: Toward social mechanisms of android science. Cognitive Science Society, Stresa, Italy, pp 56–61