
Modelling Consciousness-Dependent Expertise in Machine Medical Moral Agents


Abstract

It is suggested that some limitations of current designs for medical AI systems (whether autonomous or advisory) stem from those designs' failure to address issues of artificial (or machine) consciousness. Consciousness appears to play a key role in the expertise, particularly the moral expertise, of human medical agents: for example, in the autonomous weighting of options in diagnosis; in planning treatment; in the use of imaginative creativity to generate courses of action; in sensorimotor flexibility and sensitivity; and in empathetic and morally appropriate responsiveness. It is therefore argued that a plausible design constraint for a successful ethical machine medical or care agent is that it at least model, if not reproduce, relevant aspects of consciousness and the associated abilities. To provide theoretical grounding for such an enterprise, we examine some key philosophical issues concerning the machine modelling of consciousness and ethics, and we show how questions arising from the former research goal (machine consciousness) are relevant to medical machine ethics. We believe this will help overcome a blanket skepticism about the relevance of understanding consciousness to the design and construction of artificial ethical agents for medical or care contexts. It would thus be prudent for designers of machine medical ethics (MME) agents to reflect on issues to do with consciousness and medical (moral) expertise; to become more aware of relevant research in the field of machine consciousness; and to incorporate insights gained from these efforts into their designs.


Notes

1. We use the term “Machine” Ethics/Consciousness here, rather than “Artificial”, following many (but not all) in the respective fields. There is a lot of fuzziness about what counts as a “machine” here, but the de facto emphasis in this discussion is on computational systems.

2. For the moment we are eliding the distinction between artificially modeling some target area, on the one hand, and reproducing or replicating that target area, on the other. This distinction is usually marked in AI by the phraseology “Strong AI”/“Weak AI”. A similar division between Strong and Weak MC has been suggested (refs), as has one between Strong and Weak ME. We will discuss the Strong/Weak antithesis later in the paper.

3. The “non-relationally individuated” qualifier is meant to block certain trivializations of the notion of causal indistinguishability. For example, suppose, per impossibile, that b and b′ have the same effect, that of light L coming on, and further suppose that they have no other effects on any other objects. Intuitively, b and b′ are causally indistinguishable. However, b, but not b′, has the (relationally individuated) effect of light L coming on as a result of b. This would render b and b′ (and all other pairs of behaviors) causally distinguishable, making any notion defined in terms of causal indistinguishability (such as functional ethical equivalence) vacuous. Adding the restriction prevents this.
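   To make this explicit, both the definition and the threatened trivialization can be sketched as follows (an illustrative formalization of our own; the note itself states the point informally):

   \[
     b \sim b' \;\iff\; \forall e \in E_{\mathrm{nr}}\,\bigl[\,(b \text{ causes } e) \leftrightarrow (b' \text{ causes } e)\,\bigr]
   \]

   Here E_nr is the set of non-relationally individuated effect types. If relationally individuated effects were admitted, then for any behavior b the effect “light L comes on as a result of b” would be caused by b and by nothing else, so no two distinct behaviors could ever stand in the relation ∼, and any notion defined in terms of causal indistinguishability would be vacuous.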

4. Or perhaps not, strictly speaking: plausibly, only a proper subset of our (and Bloggs’) behavior is ethically evaluable.

5. The terms “moral agent/patient” are used more frequently than “moral producer/consumer” in the literature, but there are pitfalls to such usage. First, when one talks about an “artificial agent”, or indeed a “moral agent”, one is often using the term “agent” in a wider sense than when using “agent” to distinguish moral agency from patiency. For example, there is a very real question of whether an artificial agent, in the wide sense of “agent”, could be a genuine moral patient (moral consumer) (see [46, 48]). Second, as suggested above, the term “moral patient” is particularly awkward in a medical context such as the present discussion. Medical patients may invariably be moral patients (moral consumers) as well, but the class of moral patients (consumers) whose interests may be affected by a particular medical intervention may be larger than the actual patient receiving that intervention (e.g., there may be family members whose interests would be severely affected if the intervention went wrong).

6. A similar point can be made concerning moral consumerhood. Many see corporations as possessing rights, as entities toward which we can have obligations; one need only look at the 2010 US Supreme Court ruling in Citizens United v. Federal Election Commission, which held that the First Amendment applies to corporations. If corporations have rights, then they can be wronged. And being something that can be wronged is, plausibly, sufficient for moral consumerhood; e.g., “An entity has moral status if and only if it or its interests morally matter to some degree for the entity's own sake, such that it can be wronged” [26]. Again, that corporations currently comprise conscious beings does not in itself undermine the general point; further, it is not prima facie absurd to suppose that a corporation could persist, and retain its rights, even if the number of humans constituting it shrank to zero. There is also the much-discussed issue of potentialism. Presumably we retain our full moral status even when we are unconscious; this (and the moral consumerhood of, say, fetuses) is usually explained in terms of a potential for being conscious, rather than consciousness itself. Perhaps, then, some artificial agents might properly be seen as moral consumers, so long as they could be understood in some way to be potentially, even if not actually, conscious.

7. It is also unclear what (non-nefarious) motives one could have for creating “genuine” moral consumerhood in an artificial agent, unless one takes the view that it is a requirement for, e.g., moral producerhood.

8. It could be argued, in a related way, that an artificial agent that was unable to experience pain or affective suffering could not be a subject of moral praise or blame, since the latter requires a capability of experiencing the positive or negative sanctions, whatever they might be, that attach to any such praise or blame.

9. Normally the strong/weak distinction is thought of as primarily applying to intelligence. In fact, one can see the context of Searle's original use of the distinction, the Chinese room argument, as applying most directly to consciousness, and only indirectly to intelligence, via the assumption that genuine intelligence requires understanding accessible from a first-person perspective.

10. Chrisley there talks of Artificial Consciousness (AC) rather than Machine Consciousness.

11. Here we are generalizing Searle's strong/weak distinction in another way: his distinction dealt specifically with the technology of digital computer programs, whereas we are open to considering any computational/robotic technology.

12. Some might be unimpressed by a shift of focus from sufficient to necessary conditions. Necessary conditions, it might be thought, are ubiquitous to the point of being non-explanatory. Compare a similar move in chemistry: “I don't know what the sufficient conditions are for a sample X being water”, a scientist might have said before the structure of water was known, “but I do know a lot of the necessary conditions: X must be a substance, must be located in space-time, must be composed of atoms, must be either an element or a compound, must be capable of being a liquid at room temperature…” and so on. While true, all of these conditions, even taken together, fall short of explaining what water is, unlike the sufficient condition “X is H2O”. Two things can be said to quell this worry. First, the example is rather anachronistic: from our current, H2O-informed perspective it is easy to underestimate the explanatory and heuristic value of the necessary conditions just cited. Second, the necessary conditions can be ordered, from the most widely applicable to the very-narrow-but-still-not-narrow-enough-to-guarantee-waterhood, such as “has, at sea level, a boiling point of 100 °C and a freezing point of 0 °C”; this ordering may be very useful in the determination of sufficient conditions. The suggestion, then, is that synthetic investigation of the “narrow” end of the corresponding hierarchy of necessary conditions for consciousness can play an important role in explaining consciousness.
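   The envisaged ordering can be pictured set-theoretically (an illustrative sketch of our own, with hypothetical labels N_1, …, N_k for the necessary conditions):

   \[
     W \;\subseteq\; N_k \;\subseteq\; \cdots \;\subseteq\; N_2 \;\subseteq\; N_1
   \]

   where N_1 is the weakest condition (“is a substance”), N_k the narrowest (“boils at 100 °C and freezes at 0 °C at sea level”), and W is the class picked out by the sufficient condition “is H2O”. Each step down the chain shrinks the search space, which is why synthetic work at the narrow end of the corresponding hierarchy for consciousness could guide the determination of sufficient conditions.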

13. “Well” is a relative term that we are intentionally leaving unspecified, to allow us to cover a broader range of claims. But as an example, one (strong) gloss on it would be: “at a level sufficient to allow replacement, or at least significant supplementation, of human medical personnel”.

References

  1. Aleksander I (2000) How to build a mind: toward machines with imagination. Weidenfeld & Nicolson, London

  2. Armer P (2000) Attitudes toward intelligent machines. In: Chrisley RL (ed) Artificial intelligence: critical concepts. Routledge, London, pp 325–342. (Originally appeared in: Feigenbaum E, Feldman J (eds) Computers and thought. McGraw-Hill, New York, pp 389–405)

  3. Anderson M, Anderson SL (eds) (2011a) Machine ethics. Cambridge University Press, Cambridge

  4. Anderson SL, Anderson M (2011b) A prima facie duty approach to machine ethics: machine learning of features of ethical dilemmas, prima facie duties, and decision principles through a dialogue with ethicists. In: Anderson M, Anderson SL (eds) 2011a, pp 476–492

  5. Baars BJ (1988) A cognitive theory of consciousness. Cambridge University Press, Cambridge

  6. Bentham J (1789/2005) An introduction to the principles of morals and legislation. Burns JH, Hart HLA (eds). Oxford University Press, Oxford

  7. Block N (1995) On a confusion about a function of consciousness. Behav Brain Sci 18(2):227–247

  8. Boltuc P (2009) The philosophical issue in machine consciousness. Int J Mach Conscious 1(1):155–176

  9. Chalmers DJ (2010) The singularity: a philosophical analysis. J Conscious Stud 17(9–10):7–65

  10. Chrisley RL (2003) Embodied artificial intelligence. Artif Intell 149:131–150

  11. Chrisley RL (2007) Philosophical foundations of artificial consciousness. Artif Intell Med 44:119–137

  12. Clowes RW, Seth AK (2008) Axioms, properties and criteria: roles for synthesis in the science of consciousness. Artif Intell Med 44(2):91–104

  13. Floridi L, Sanders JW (2004) On the morality of artificial agents. Minds Mach 14(3):349–379

  14. Franklin S (2003) IDA: a conscious artefact? J Conscious Stud 10(4–5):47–66

  15. Franklin S, Patterson FGJ (2006) The LIDA architecture: adding new modes of learning to an intelligent, autonomous, software agent. In: IDPT-2006 proceedings, integrated design and process technology. Society for Design and Process Science

  16. Grau C (2011) There is no “I” in “Robot”: robots and utilitarianism. In: Anderson M, Anderson SL (eds) 2011a, pp 451–463

  17. Gunkel D (2012) The machine question: critical perspectives on AI, robots and ethics. MIT Press, Cambridge

  18. Gunkel D (2013) A vindication of the rights of machines. Philos Technol 27(1):113–132. doi:10.1007/s13347-013-0121-z

  19. Guzeldere G (1997) The many faces of consciousness: a field guide. In: Block N, Flanagan O, Guzeldere G (eds) The nature of consciousness: philosophical debates. MIT Press, Cambridge, pp 1–67

  20. Haikonen PO (2005) You only live twice: imagination in conscious machines. In: Chrisley RL, Clowes RW, Torrance SB (eds) Proceedings of the AISB05 symposium on machine consciousness. AISB Press, Hatfield, pp 19–25

  21. Haikonen PO (2012) Consciousness and robot sentience. World Scientific, Singapore

  22. Hesslow G (2002) Conscious thought as simulation of behaviour and perception. Trends Cogn Sci 6:242–247

  23. Hesslow G, Jirenhed DA (2007) The inner world of a simple robot. J Conscious Stud 14:85–96

  24. Holland O, Goodman R (2003) Robots with internal models: a route to machine consciousness? J Conscious Stud 10(4–5):77–109

  25. Husserl E (1952/1989) Ideas pertaining to a pure phenomenology and to a phenomenological philosophy, second book: studies in the phenomenology of constitution (trans: Rojcewicz R, Schuwer A). Kluwer Academic Publishers, Dordrecht

  26. Jaworska A, Tannenbaum J (2013) The grounds of moral status. In: Zalta EN (ed) The Stanford encyclopedia of philosophy (summer 2013 edn). http://plato.stanford.edu/archives/sum2013/entries/grounds-moral-status/

  27. Kurzweil R (2005) The singularity is near: when humans transcend biology. Viking Press, New York

  28. Latour B (2002) Morality and technology: the end of the means. Theory Cult Soc 19(5–6):247–260

  29. Lin P, Abney K, Bekey GA (eds) (2012) Robot ethics: the ethical and social implications of robotics. MIT Press, Cambridge

  30. Merleau-Ponty M (1945/1962) The phenomenology of perception (trans: Smith C). Routledge and Kegan Paul, London

  31. Moor JH (2006) The nature, importance and difficulty of machine ethics. IEEE Intell Syst 21(4):18–21

  32. Powers TM (2011) Prospects for a Kantian machine. In: Anderson M, Anderson SL (eds) 2011a, pp 464–475

  33. Regan T (1983) The case for animal rights. University of California Press, Berkeley

  34. Searle JR (1980) Minds, brains, and programs. Behav Brain Sci 3:417–424. doi:10.1017/S0140525X00005756

  35. Shanahan M (2006) A cognitive architecture that combines internal simulation with a global workspace. Conscious Cogn 15:433–449

  36. Shanahan M (2010) Embodiment and the inner life: cognition and consciousness in the space of possible minds. Oxford University Press, Oxford

  37. Sloman A (1984) The structure of the space of possible minds. In: Torrance SB (ed) The mind and the machine: philosophical aspects of artificial intelligence. Ellis Horwood, Chichester, pp 35–42

  38. Sloman A (2010) Phenomenal and access consciousness and the “Hard” problem: a view from the designer stance. Int J Mach Conscious 2(1):117–169

  39. Sloman A, Chrisley RL (2003) Virtual machines and consciousness. J Conscious Stud 10(4–5):133–172

  40. Sloman A, Chrisley RL (2005) More things than are dreamt of in your biology: information processing in biologically-inspired robots. Cogn Syst Res 6(2):45–74

  41. Thompson E (2001) Empathy and consciousness. J Conscious Stud 8(5–7):1–32

  42. Thompson E (2007) Mind in life: biology, phenomenology, and the sciences of mind. Harvard University Press, Cambridge

  43. Toombs SK (1992) The meaning of illness: a phenomenological account of the different perspectives of physician and patient. Kluwer Academic Publishers, Norwell

  44. Toombs SK (2001) The role of empathy in clinical practice. J Conscious Stud 8(5–7):247–258

  45. Torrance SB (2007) Two conceptions of machine phenomenality. J Conscious Stud 14(7):154–166

  46. Torrance SB (2008) Ethics and consciousness in artificial agents. AI Soc 22(4):495–521

  47. Torrance SB (2012) Super-intelligence and (super-)consciousness. Int J Mach Conscious 4(2):483–501

  48. Torrance SB (2013) Artificial consciousness and artificial ethics: between realism and social-relationism. Philos Technol 27(1):9–29 (special issue on “Machines as moral agents and moral patients”). doi:10.1007/s13347-013-0136-5

  49. Verbeek PP (2011) Moralizing technology: understanding and designing the morality of things. University of Chicago Press, Chicago

  50. Wallach W, Allen C (2009) Moral machines: teaching robots right from wrong. Oxford University Press, Oxford

  51. Wallach W, Allen C, Franklin S (2011) Consciousness and ethics: artificially conscious moral agents. Int J Mach Conscious 3(1):177–192

  52. Whitby B, Yazdani M (1987) Artificial intelligence: building birds out of beer cans. Robotica 5:89–92


Acknowledgements

The authors would like to thank Robert Clowes, Mark Coeckelbergh, Madeline Drake, David Gunkel, Jenny Prince-Chrisley, Wendell Wallach, and Blay Whitby. We would also like to thank audience members at the following institutions and meetings for their helpful comments: University of Sussex (COGS, E-intentionality); Twente (SBT); AISB meetings on Machine Consciousness (Universities of Hertfordshire, Bristol, and York) and on Machine Ethics (Birmingham and Goldsmiths Universities). Thanks also to the EU Cognition network for their support.

Author information

Correspondence to Steve Torrance.


Copyright information

© 2015 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Torrance, S., Chrisley, R. (2015). Modelling Consciousness-Dependent Expertise in Machine Medical Moral Agents. In: van Rysewyk, S., Pontier, M. (eds) Machine Medical Ethics. Intelligent Systems, Control and Automation: Science and Engineering, vol 74. Springer, Cham. https://doi.org/10.1007/978-3-319-08108-3_18


  • DOI: https://doi.org/10.1007/978-3-319-08108-3_18


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-08107-6

  • Online ISBN: 978-3-319-08108-3

  • eBook Packages: Engineering (R0)
