Artificial Consciousness and Artificial Ethics: Between Realism and Social Relationism

Published in Philosophy & Technology (Special Issue)
Abstract

I compare a ‘realist’ with a ‘social–relational’ perspective on our judgments of the moral status of artificial agents (AAs). I develop a realist position according to which the moral status of a being—particularly in relation to moral patiency attribution—is closely bound up with that being’s ability to experience states of conscious satisfaction or suffering (CSS). For a realist, both moral status and experiential capacity are objective properties of agents. A social relationist denies the existence of any such objective properties in the case of either moral status or consciousness, suggesting that the determination of such properties rests solely upon social attribution or consensus. A wide variety of social interactions between us and various kinds of artificial agent will no doubt proliferate in future generations, and the social–relational view may well be right that the appearance of CSS features in such artificial beings will make moral role attribution socially prevalent in human–AA relations. But there is still the question of what actual CSS states a given AA is capable of undergoing, independently of the appearances. This is not just a matter of changes in the structure of social existence that seem inevitable as human–AA interaction becomes more prevalent. The social world is itself enabled and constrained by the physical world, and by the biological features of living social participants. Properties analogous to certain key features in biological CSS are what need to be present for nonbiological CSS. Working out the details of such features will be an objective scientific inquiry.


Notes

  1. Wallach et al. are primarily concerned in their paper with how modelling moral agency requires a proper theoretical treatment of conscious ethical decision-making, whereas the present paper is more broadly concerned with the problem of ethical consideration—that is: what kinds of machines, or artificial agents in general, merit ethical consideration either as agents or as patients. The discussion largely centres around the relation between experiential consciousness and the status of moral patiency. I have discussed the general relation between consciousness and ethics in an AI context in Torrance, 2008, 2011, 2012a,b; Torrance and Roche 2011. While I sympathize strongly with the sentiment expressed in the above quote from Wallach et al., I prefer the terms ‘artificial consciousness’ (AC) and ‘artificial ethics’ (AE) to the ‘machine’ variants. It seems clear that many future agents at the highly bio-engineered end of the spectrum of possible artificial agents—particularly those with near-human levels of cognitive ability—will be strong candidates to be considered both as phenomenally conscious much in the way we are and as moral beings (both as moral agents and as moral patients). Yet it may be thought rather forced to call such artificial creatures ‘machines’, except in the stretched sense in which all natural organisms, us included, may be classed as machines.

  2. In what follows, I will sometimes talk about ‘robots’ and sometimes about AAs. Generally, I will mean, by ‘robots’ physical agents (possibly humanoid in character), which are constructed using something like current robotic technology—that is, whose control mechanisms are computer-based (or based on some future offshoot from present-day computational designs). By ‘AAs’ I will understand a larger class of agents, which includes ‘robots’ but which will also include various kinds of possible future bio-machine hybrids, plus also agents which, while synthetic or fabricated, may be partially or fully organic or metabolic in physical make-up.

  3. A source for the term ‘social relationism’ is the title of a paper by Mark Coeckelbergh (Coeckelbergh 2010a).

  4. I am grateful to an anonymous reviewer for insisting on this point. In the present discussion, I am limiting the kinds of cases under consideration to AAs whose design involves electronic technologies which are relatively easy to imagine, on the basis of the current state of the art and of fairly solid future projections. There are a wide variety of other kinds of artificial creature—including ones with various kinds of artificial organic makeup, plus bio-machine hybrids of different sorts—which expand considerably on this range of cases. We will consider this broader range of cases in later sections.

    Concentrating at the present stage of the discussion on AAs like the robot gardener, and other such relatively conservative cases, has a triple utility. First, it allows us to lay down the foundations for the argument without bringing in too many complexities for now. Second, many people (supporters of strong AI or ‘strong artificial consciousness’) have asserted that such robots could well have genuinely conscious states (and thus qualify for serious ethical consideration) if constructed with the right (no doubt highly complex) functional designs. Third, such cases seem to offer a greater challenge than cases which are closer to biology: it is precisely the non-organic cases, in which one finds detailed similarity to humanity in terms of behaviour and functional organization but marked dissimilarity in terms of physical or somatic structure, that raise the issues particularly sharply.

  5. Some people might agree that Q1 should be construed in a realist way—what could be more real than a person’s vivid experiences of distress, pleasure, etc.?—while being reluctant to treat Q2, and similar moral questions, in a realist or objectivist way. In this paper, I am supporting a realist position for both experiential and moral attributions.

  6. For a defence of the view that there is a close association between questions of consciousness and those of moral status, see, for example, Levy 2009. Versions of this view are defended in Torrance, 2012; Torrance and Roche 2011. The conclusions Levy comes to on the basis of these views are very different from mine, however.

  7. To clarify: the realist’s claim is that ‘Is X conscious?’ is objective in the sense that ‘X is currently conscious’ asserts a fact about X, even though it is a fact about X’s subjective state, unlike, say, ‘X is currently at the summit of Everest’.

  8. The inherent distinguishability between phenomenal and functional consciousness is defended in Torrance, 2012.

  9. See Thompson (2007, chapter 12) for a discussion of the relation between consciousness, affect and valence.

  10. This is not to say that questions concerning consciousness or ethics in relation to such machines are to be thought of as trivial or inconsequential on the SR view: on the contrary, a relationist will take such questions as seriously as the realist, and may claim that they deserve our full intellectual and practical attention.

  11. See also the discussion in Torrance, 2013.

  12. Singer does not discuss the case of moral interests of robots or other artificial agents (or indeed of exoplanetary beings) in his 2011 book.

  13. For an excellent, and fully elaborated, defence of the kind of view of consciousness that I would accept, which is centred around notions of enactivism and autopoiesis, see Thompson (2007)—especially chapters 12 and 13. There is no space here to do more than gesture to this view in the present discussion.

  14. Like Singer, Harris does not consider the ethical status of possible artificial agents.

  15. Sometimes the seals can be leaky. I was once at a conference on consciousness, where an eminent neuropsychologist was giving a seminar on ethical issues in neural research on consciousness. He said things like ‘With my neuroscientist’s cap on, I think… But with my ethicist’s cap on, I think…’ What cap was he wearing when deciding which cap to put on at a given time?

  16. It is worth pointing out that no consensual human view may come to predominate on these issues: there may rather be a fundamental divergence just as there is in current societies between liberals and conservatives, or between theistic and humanistic ways of thinking, or between envirocentric versus technocentric attitudes towards the future of the planet, and so on. In such a case, the relationist could say, social reality will be just as it manifests itself—one in which no settled view on the psychological or moral status of such agents comes to prevail; society will just contain irreconcilable social disagreements on these matters, much as it does today on these other issues.

  17. The present paper originated as a contribution to a workshop at a Convention celebrating the 100th anniversary of Turing’s birth.

  18. We assume—perhaps with wild optimism—that these artificial agents are by then smart enough to debate such matters roughly as cogently as humans can today, if not much more so. To get a possible flavour of the debate, consider Terry Bisson’s ‘They’re made out of meat’ (Bisson 1991).

  19. That is, should we not say that the epistemological status of our question about them is comparable to that of their question about us? The answers to the two questions may of course be very different, as may the relative difficulty of answering them.

  20. See, for example, Gunkel (2012, chapters 1 and 2), who insists on the perennial philosophical problem of ‘other minds’ as a reason for casting doubts on rational schemes of ethical extension which might enlarge the sphere of moral agency or patiency to animals of different types, and beyond that, to machines. It is remarkable how frequently Gunkel returns to rehearsing the theme of solipsistic doubt in his discussion.

  21. Appeal to doubts over other minds is one of the arguments used by Turing (1950) to buttress his early defence of the possibility of thinking, and indeed conscious, machines.

  22. See also the treatment of this issue in Thompson (2007), especially chapter 8—therein called the ‘body–body problem’. Thompson mentions that a quite elaborate range of notions contrasting and combining the motifs of Leib and Körper are found in Husserl’s writings (see Depraz (1997, 2001), cited in Thompson (2007), ch. 8, footnote 6). Thompson also critiques superficial or ‘thin’ conceptions of phenomenology (ibid., ch. 8) but without the ‘thin’/’thick’ terminology used in Torrance, 2007.

  23. A variety of sources from phenomenology and several of the mind sciences, all converging on the view that our understanding of mind is thoroughly intersubjective, in a way that renders solipsistic doubts incoherent, can be found in Thompson (2001, 2007).

  24. And no doubt many others—for instance, I have left out the essential role played by our cognitive capacities, by beliefs, perceptions, intellective skills, etc.!

  25. For the sake of simplicity of discussion, I am here representing the spectrum as if it were a unidimensional space, whereas it is almost certainly more appropriate to see it as multidimensional (cf. Sloman 1984).

  26. Here, we are stressing moral patiency, but a similar problem of false positives and false negatives exists for moral agency, too. Many relatively primitive kinds of AI agents will act in a functionally autonomous fashion so as to affect human well-being in many different ways—so in one sense the question of moral agency is much more pressing, as many authors have pointed out (see, for example, Wallach and Allen 2009). Yet there are important questions of responsibility ascription that need to be determined. If we assign too great a share of responsibility to AAs that act in ways that are detrimental to human interests, this may well mask the degree of responsibility that should be borne by particular humans in such situations (e.g. those who design, commission and use such AAs). Conversely, we may overattribute responsibility to humans in such situations and withhold moral credit from artificial agents when in truth it is due to them, for example, on the grounds that as ‘mere machines’ they cannot be treated as fully responsible moral agents. The vexed issue of moral responsibility in the case of autonomous lethal battlefield robots provides one illustration of this area: see Sparrow 2007; Arkin 2009; Sharkey and Suchman 2013.

  27. It may well be that realism will not be defeated even if no decision procedure is provided. There is the ontological matter of whether questions like “Does A have a moral status as an ethical patient/agent?” have a correct answer (independently of the accidents of social determination). And there is the epistemological or methodological matter of whether it can be determined, in a straightforward way or only with extreme difficulty, what the correct answer to that question is for any particular A.

  28. For example, here I have dealt primarily with the connection between consciousness and artificial moral patiency, or recipiency, as opposed to moral agency, or productivity (but see footnote 26 above). There are arguments that suggest that consciousness may be as crucial to the former as to the latter (Torrance 2008; Torrance and Roche 2011).

References

  • Arkin, R. C. (2009). Governing lethal behavior in autonomous systems. Boca Raton: CRC.

  • Bisson, T. (1991). They’re made out of meat. Omni, April 1991. http://www.eastoftheweb.com/short-stories/UBooks/TheyMade.shtml. Accessed 10 January 2013.

  • Block, N. (1978). Troubles with functionalism. In C. Savage (Ed.), Perception and cognition: issues in the foundations of psychology. Minnesota studies in the philosophy of science (pp. 261–325). Minneapolis: University of Minnesota Press.

  • Chalmers, D. (1995). Facing up to the problem of consciousness. Journal of Consciousness Studies, 2(3), 200–219.

  • Coeckelbergh, M. (2009). Personal robots, appearance, and human good: a methodological reflection on roboethics. International Journal of Social Robotics, 1(3), 217–221.

  • Coeckelbergh, M. (2010a). Robot rights? Towards a social–relational justification of moral consideration. Ethics and Information Technology, 12(3), 209–221.

  • Coeckelbergh, M. (2010b). Moral appearances: emotions, robots, and human morality. Ethics and Information Technology, 12(3), 235–241.

  • Coeckelbergh, M. (2012). Growing moral relations: a critique of moral status ascription. Basingstoke: Macmillan.

  • Coeckelbergh, M. (2013). The moral standing of machines: towards a relational and non-Cartesian moral hermeneutics. Philosophy & Technology. This issue.

  • Depraz, N. (1997). La traduction de Leib, une crux phaenomenologica. Etudes Phénoménologiques, 3.

  • Depraz, N. (2001). Lucidité du corps. De l’empirisme transcendantal en phénoménologie. Dordrecht: Kluwer.

  • Gallagher, S. (2005a). How the body shapes the mind. Oxford: Clarendon.

  • Gallagher, S. (2005b). Phenomenological contributions to a theory of social cognition. Husserl Studies, 21(2), 95–110.

  • Gallagher, S. (2012). You, I, robot. AI and Society. doi:10.1007/s00146-012-0420-4.

  • Gallagher, S., & Zahavi, D. (2008). The phenomenological mind: an introduction to philosophy of mind and cognitive science. London: Taylor & Francis.

  • Gallie, W. B. (1955). Essentially contested concepts. Proceedings of the Aristotelian Society, 56, 167–198.

  • Gunkel, D. (2007). Thinking otherwise: philosophy, communication, technology. West Lafayette: Purdue University Press.

  • Gunkel, D. (2012). The machine question: critical perspectives on AI, robots and ethics. Cambridge: MIT Press.

  • Gunkel, D. (2013). A vindication of the rights of machines. Philosophy & Technology. This issue. doi:10.1007/s13347-013-0121-z.

  • Hanna, R., & Thompson, E. (2003). The mind–body–body problem. Theoria et Historia Scientiarum: International Journal for Interdisciplinary Studies, 7, 24–44.

  • Harris, S. (2010). The moral landscape: how science can determine human values. London: Random House.

  • Holland, O. (2007). A strongly embodied approach to machine consciousness. Journal of Consciousness Studies, 14(7), 97–110.

  • Kurzweil, R. (2005). The singularity is near: when humans transcend biology. New York: Viking.

  • Leopold, A. (1948). A land ethic. In A sand county almanac with essays on conservation from Round River. New York: Oxford University Press.

  • Levine, J. (1983). Materialism and qualia: the explanatory gap. Pacific Philosophical Quarterly, 64, 354–361.

  • Levy, D. (2009). The ethical treatment of artificially conscious robots. International Journal of Social Robotics, 1(3), 209–216.

  • Naess, A. (1973). The shallow and the deep long-range ecology movements. Inquiry, 16, 95–100.

  • O’Regan, J. (2007). How to build consciousness into a robot: the sensorimotor approach. In M. Lungarella et al. (Eds.), 50 years of artificial intelligence (pp. 332–346). Heidelberg: Springer.

  • O’Regan, J. K., & Noë, A. (2001). A sensorimotor account of vision and visual consciousness. Behavioral and Brain Sciences, 24(5), 939–972.

  • Regan, T. (1983). The case for animal rights. Berkeley: University of California Press.

  • Sharkey, N., & Suchman, L. (2013). Wishful mnemonics and autonomous killing machines. AISB Quarterly, 136, 14–22.

  • Shear, J. (Ed.). (1997). Explaining consciousness: the hard problem. Cambridge: MIT Press.

  • Singer, P. (1975). Animal liberation: a new ethics for our treatment of animals. New York: New York Review of Books.

  • Singer, P. (2011). The expanding circle: ethics, evolution and moral progress. Princeton: Princeton University Press.

  • Sloman, A. (1984). The structure of the space of possible minds. In S. Torrance (Ed.), The mind and the machine: philosophical aspects of artificial intelligence (pp. 35–42). Chichester: Ellis Horwood.

  • Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77.

  • Stuart, S. (2007). Machine consciousness: cognitive and kinaesthetic imagination. Journal of Consciousness Studies, 14(7), 141–153.

  • Thompson, E. (Ed.). (2001). Between ourselves: second-person issues in the study of consciousness. Thorverton: Imprint Academic. Also published in Journal of Consciousness Studies, 8(5–7).

  • Thompson, E. (2007). Mind in life: biology, phenomenology, and the sciences of mind. Cambridge: Harvard University Press.

  • Turing, A. (1950). Computing machinery and intelligence. Mind, 59, 433–460.

  • Wallach, W., & Allen, C. (2009). Moral machines: teaching robots right from wrong. New York: Oxford University Press.

  • Wallach, W., Allen, C., & Franklin, S. (2011). Consciousness and ethics: artificially conscious moral agents. International Journal of Machine Consciousness, 3(1), 177–192.

  • Wittgenstein, L. (1953). Philosophical investigations. Oxford: Blackwell.

  • Zahavi, D. (2001). Beyond empathy: phenomenological approaches to intersubjectivity. Journal of Consciousness Studies, 8(5–7).

  • Ziemke, T. (2007). The embodied self: theories, hunches and robot models. Journal of Consciousness Studies, 14(7), 167–179.


Acknowledgments

Work on this paper was assisted by grants from the EUCogII network, in collaboration with Mark Coeckelbergh, to whom I express gratitude. I am also grateful to Joanna Bryson and David Gunkel for inviting me to join with them in co-chairing the Turing Centenary workshop on The Machine Question, where this paper first saw life. Ideas in the present paper have also greatly benefitted from discussions with Mark Bishop, Rob Clowes, Ron Chrisley, Madeline Drake, David Gunkel, Joel Parthemore, Denis Roche, Wendell Wallach and Blay Whitby.

Author information

Correspondence to Steve Torrance.

Cite this article

Torrance, S. Artificial Consciousness and Artificial Ethics: Between Realism and Social Relationism. Philos. Technol. 27, 9–29 (2014). https://doi.org/10.1007/s13347-013-0136-5
