
Moral Mechanisms


As highly intelligent autonomous robots are gradually introduced into the home and workplace, ensuring public safety becomes extremely important. Given that such machines will learn from interactions with their environment, standard safety engineering methodologies may not be applicable. Instead, we need to ensure that the machines themselves know right from wrong; we need moral mechanisms. Morality, however, has traditionally been considered a defining characteristic, indeed the sole realm of human beings; that which separates us from animals. But if only humans can be moral, can we build safe robots? If computationalism—roughly the thesis that cognition, including human cognition, is fundamentally computational—is correct, then morality cannot be restricted to human beings (since equivalent cognitive systems can be implemented in any medium). On the other hand, perhaps there is something special about our biological makeup that gives rise to morality, and so computationalism is effectively falsified. This paper examines these issues by looking at the nature of morals and the influence of biology. It concludes that moral behaviour is concerned solely with social well-being, independent of the nature of the individual agents that comprise the group. While our biological makeup is the root of our concept of morals and clearly affects human moral reasoning, there is no basis for believing that it will restrict the development of artificial moral agents. The consequences of such sophisticated artificial mechanisms living alongside natural human ones are also explored.



Notes

  1.

    In this paper, the term robot will generally be used in the common, non-technical sense, to mean an intelligent artificial being with cognitive abilities more or less equivalent to those of humans, and which may even physically resemble human beings.

  2.

    Normally, a product’s designers/manufacturers are held responsible should it harm someone. They thus exercise great care in considering every possible condition under which something might go wrong and try to ensure that none of these actually cause harm should they occur. The sort of highly intelligent machines we are considering here can, in effect, completely rewire themselves as a result of interactions with the world, making it impossible for engineers to consider all possibilities.

  3.

    Computationalism is the view that cognitive agents, in particular human beings, are computational in nature, that is, they automatically instantiate and maintain computational models of their environment. Such models enable agents to simulate possible interactions with the world, allowing them to select the actions most likely to achieve their goals. A computational model is an implementation-independent specification for a causal system, the states of which can be systematically mapped to the states of the system being modelled. A computer is an actual physical implementation of such a model. A universal computer is a physical system that can be quickly and easily configured to have any desired causal dynamics. A computation is the execution of a model (i.e. the causally constrained evolution of its implemented states) from specific initial conditions, the resulting model states effectively predicting the corresponding states of the system being modelled (usually future states of the environment). The states of a model are representational (representations of things in the world) and intensional (meaningful for the agent) exactly because (and to the extent that) they allow the agent to make correct predictions.

    Robotics research, exemplified by Rodney Brooks's work on situated cognition and by the embedded and embodied approaches to AI, offers insights into the basic bodily control mechanisms needed for robots, but appears unable to scale up to the cognitive abilities needed for intelligent moral agents. Despite initial claims that such simple robotic mechanisms are non-representational in nature, there is reason to doubt this. As with the non-representational dynamic systems approaches championed by van Gelder, much depends on how computation and representation are understood. Computationalism, properly understood as above, still seems to be “the only game in town” (Davenport 2012a). Note that the computational approach outlined, for example, by Bickhard and Terveen (1995) and Davenport (2000), also provides a clear account of intentionality and may even give us a handle on the problem of consciousness (see Section 4.2).
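    The definitions in this note can be sketched as a toy program (a minimal illustration only; the world, its dynamics, and all names below are invented for the example, not drawn from the paper). An agent maintains an internal model whose states map systematically onto the states of a simple deterministic world, and a computation, the running of that model forward from the current state, yields predictions that let the agent select the action most likely to achieve its goal:

```python
# Toy sketch of the note's definitions: a "world" with causal dynamics,
# an internal model whose states map systematically onto world states,
# and an agent that runs the model forward (a computation) to predict
# outcomes and pick the action most likely to reach its goal.
# All names here are hypothetical, invented for illustration.

def world_step(position, action):
    """The environment's actual causal dynamics (deterministic)."""
    return position + {"left": -1, "stay": 0, "right": +1}[action]

def model_step(state, action):
    """The agent's model: the same dynamics, implemented independently
    of the world's physical medium."""
    return state + {"left": -1, "stay": 0, "right": +1}[action]

def choose_action(current_state, goal):
    """Simulate each possible interaction and select the action whose
    predicted model state lies closest to the goal."""
    actions = ["left", "stay", "right"]
    return min(actions, key=lambda a: abs(model_step(current_state, a) - goal))

# The model's states count as representational exactly to the extent
# that model_step tracks world_step, i.e. its predictions come out true.
position, goal = 0, 3
for _ in range(3):
    action = choose_action(position, goal)
    position = world_step(position, action)
print(position)  # the agent reaches its goal: 3
```

    The point of keeping world_step and model_step as separate functions is that the model is an implementation-independent specification: any physical system with the same causal dynamics would serve the agent equally well.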

  4.

    Which is not to diminish the highly nuanced arguments of some scholastic theologians.

  5.

    Following recent practice, I will use the words ethics and morals interchangeably.

  6.

    Rather than three separate theories, these may be seen as different aspects of a single idea: roughly, as individual members of a society, we have a duty to follow rules that help us avoid any generally negative/harmful consequences of our actions and, where possible, to perform actions that promote positive/good/virtuous ends.

  7.

    Notable recent work also includes company ethics and information ethics. Company/business ethics is slightly different in the sense that its primary concern seems to be whether the company itself (rather than the individuals comprising it) should be treated as a moral entity. It is, however, clearly anthropocentric in outlook. Floridi’s Information Ethics, in contrast, offers a fundamentally different ontological framework for ethics, one that takes information rather than agency as its basis.

  8.

    An earlier version of this paper was included in the “Machine Question” symposium, part of the Turing Centenary celebrations at the AISB/IACAP 2012 conference.

  9.

    Deterministic, that is, at the level of abstraction at which we (human agents) normally operate. If there were not some degree of determinism, then prediction, and hence intelligent agency and morality, would prove impossible.

  10.

    This is what Chalmers called the “hard problem” of consciousness. Computationalism has no immediate solution to it, but then neither does any other scientific theory.

  11.

    This section, an addendum to the original symposium article, was written after reading David Gunkel’s newly published book entitled The Machine Question, a most excellent work that helped enormously in clarifying and contextualising my ideas on this extremely complex subject.


References

  1. BBC (2012). Congenital analgesia: the agony of feeling no pain. Outlook, BBC World Service. Accessed 28 Nov 2013.

  2. Bekoff, M., & Pierce, J. (2009). Wild justice: the moral lives of animals. Chicago: University of Chicago Press.

  3. Bickhard, M.H., & Terveen, L. (1995). Foundational issues in artificial intelligence and cognitive science: impasse and solution. Amsterdam: Elsevier Scientific.

  4. Churchland, P. (2012). Braintrust: what neuroscience tells us about morality. Princeton: Princeton University Press.

  5. Coeckelbergh, M. (2010). Moral appearances: emotions, robots and human morality. Ethics and Information Technology, 12, 235–241. doi:10.1007/s10676-010-9221-y.

  6. Coeckelbergh, M. (2012). Who cares about robots? A phenomenological approach to the moral status of autonomous intelligent machines. In: This volume.

  7. Davenport, D. (2000). Computationalism: the very idea. Conceptus Studien, 14(14). Fall.

  8. Davenport, D. (2012a). Computationalism: still the only game in town. Minds & Machines. doi:10.1007/s11023-012-9271-5.

  9. Davenport, D. (2012b). The two (computational) faces of A.I. In Müller, V.C. (Ed.) Theory and philosophy of artificial intelligence, SAPERE. Berlin: Springer.

  10. Floridi, L. (2010). The digital revolution as a fourth revolution. Accessed 28 Nov 2013.

  11. Floridi, L., & Sanders, J.W. (2004). On the morality of artificial agents. Minds and Machines, 14(3), 349–379.

  12. Gunkel, D.J. (2012a). The machine question: critical perspectives on AI, robots, and ethics. Cambridge: MIT Press.

  13. Gunkel, D.J. (2012b). A vindication of the rights of machines. In: This volume.

  14. Lihoreau, M., Costa, J., Rivault, C. (2012). The social biology of domiciliary cockroaches: colony structure, kin recognition and collective decisions. Insectes Sociaux, 59(4), 445–452. doi:10.1007/s00040-012-0234-x.

  15. Nagel, T. (2012). Mind and cosmos: why the materialist Neo-Darwinian conception of nature is almost certainly false. Oxford: Oxford University Press.

  16. Neely, E. (2012). Machines and the moral community. In: This volume.

  17. Parthemore, J., & Whitby, B. (2012). Moral agency, moral responsibility, and artefacts. In: This volume.

  18. Ruffo, M. (2012). The robot, a stranger to ethics. In: This volume.

  19. Torrance, S. (2010). Machine ethics and the idea of a more-than-human moral world. In Anderson, M., & Anderson, S. (Eds.) Machine ethics. Cambridge: Cambridge University Press.

  20. Waser, M. (2012). Safety and morality require the recognition of self-improving machines as moral/justice patients & agents. In: This volume.


Author information

Correspondence to David Davenport.


Cite this article

Davenport, D. Moral Mechanisms. Philos. Technol. 27, 47–60 (2014).



Keywords

  • Moral agent
  • Moral patient
  • Computationalism
  • AI
  • Safety