Artificial Morality: Top-down, Bottom-up, and Hybrid Approaches
A principal goal of the discipline of artificial morality is to design artificial agents to act as if they are moral agents. Intermediate goals of artificial morality are directed at building into AI systems sensitivity to the values, ethics, and legality of activities. The development of an effective foundation for the field of artificial morality involves exploring the technological and philosophical issues involved in making computers into explicit moral reasoners. The goal of this paper is to discuss strategies for implementing artificial morality and the differing criteria for success that are appropriate to different strategies.
Keywords: artificial morality, autonomous agents, ethics, machines, robots, values