Volume 24, Issue 2, pp. 181–189

Virtual moral agency, virtual moral responsibility: on the moral significance of the appearance, perception, and performance of artificial agents


Introduction

Contemporary technology has created a proliferation of non-human artificial entities such as robots and intelligent information systems. Sometimes they are called ‘artificial agents’. But are they agents at all? And if so, should they be considered moral agents and be held morally responsible? They do things to us in various ways, and what happens can be, and has to be, discussed in terms of right and wrong, good and bad. But does that make them agents or moral agents? And who is responsible for the consequences of their actions? The designer? The user? The robot? Standard moral theory has difficulty coping with these questions for several reasons. First, it generally understands agency and responsibility as individual and undistributed. I will not further discuss this issue here. Second, it is tailored to human agency and human responsibility, excluding non-humans. It makes a strong distinction between (humans as) subjects and objects, between humans and animals, between