Health Care, Capabilities, and AI Assistive Technologies

Published in: Ethical Theory and Moral Practice

Abstract

Scenarios involving the introduction of artificially intelligent (AI) assistive technologies in health care practices raise several ethical issues. In this paper, I discuss four objections to introducing AI assistive technologies in health care practices as replacements of human care. I analyse them as demands for felt care, good care, private care, and real care. I argue that although these objections cannot stand as good reasons for a general and a priori rejection of AI assistive technologies as such or as replacements of human care, they demand that we clarify what is at stake, develop more comprehensive criteria for good care, and rethink existing practices of care. In response to these challenges, I propose a (modified) capabilities approach to care and emphasize the inherent social dimension of care. I also discuss the demand for real care by introducing the ‘Care Experience Machine’ thought experiment. I conclude that if we set the standards of care too high when evaluating the introduction of AI assistive technologies in health care, we would have to reject many of our existing, low-tech health care practices.


Notes

  1. Since this is the discussion I know best, I will mainly refer to the literature on care and robots. However, the arguments presented in this paper are relevant to other AI assistive technologies as well.

  2. Note that it also hurts to see a person’s health gradually diminish. And it is also not always easy, when dealing with a terminally ill patient, for instance, to take and show the ‘right’ emotional attitude towards that person. What does a particular person at a given moment in time need most? Compassion? Encouragement? Which feelings should I show? Should I talk or should I listen?

  3. The latter principles may also be understood as requirements that flow from the principle of (respect for) autonomy.

  4. Although the principles of beneficence and justice seem to be positive principles, they are commonly used in a negative way.

  5. I am aware that to add ‘enhancement’ to this list is very controversial and there are serious difficulties with defining what enhancement means. Therefore, I leave ‘enhancement’ out of my discussion in this paper.

  6. See, for instance, experiments by Ishiguro and others with the ‘android father’: judging by the child’s eye movements, Ishiguro and others found that the child responds to the android father as if it were her real father, while knowing that it is not the real father. For what the presence of a robot does to children, see for example the experiments by Nishio et al. (2007).

References

  • Anand P (2005) Capabilities and health. J Med Ethics 31:299–303

  • Beauchamp TL, Childress JF (1994) Principles of biomedical ethics, 4th edn. Oxford University Press, New York

  • Coeckelbergh M (2007) Imagination and principles. Palgrave Macmillan, Basingstoke/New York

  • Decker M (2008) Caregiving robots and ethical reflection: the perspective of interdisciplinary technology assessment. AI & Soc 22(3):315–330

  • Kass L (1985) The end of medicine and the pursuit of health. In: Kass L (ed) Toward a more natural science. The Free Press, New York, pp 157–186

  • Nishio S, Ishiguro H, Hagita N (2007) Can a teleoperated android represent personal experience? A case study with children. Psychologia 50(4):330–342

  • Nozick R (1974) Anarchy, state, and utopia. Basic Books, New York

  • Nussbaum MC (2000) Women and human development: the capabilities approach. Cambridge University Press, Cambridge

  • Nussbaum MC (2006) Frontiers of justice: disability, nationality, species membership. The Belknap Press of Harvard University Press, Cambridge, MA/London

  • Nussbaum MC, Sen A (eds) (1993) The quality of life. Clarendon Press, Oxford

  • Ruger JP (2006) Health, capability, and justice: toward a new paradigm of health ethics, policy and law. Cornell J Law Public Policy 15(2):403–482

  • Sparrow R (2002) The march of the robot dogs. Ethics Inf Technol 4:305–318

  • Sparrow R, Sparrow L (2006) In the hands of machines? The future of aged care. Mind Mach 16(2):141–161

Acknowledgments

Thanks to Nicole Vincent, Nicholas Munn, Aimee van Wynsberghe, and other participants of the International Applied Ethics Conference 2008 (Hokkaido University, Sapporo, Japan), the January 2009 research seminar of the Philosophy section at Delft University of Technology, and the Good Life meetings at the Philosophy Department of Twente University for the discussions we had about robots and care. I also wish to thank the anonymous reviewers for their helpful comments, which improved the quality of my arguments.

Author information

Correspondence to Mark Coeckelbergh.


Cite this article

Coeckelbergh, M. Health Care, Capabilities, and AI Assistive Technologies. Ethic Theory Moral Prac 13, 181–190 (2010). https://doi.org/10.1007/s10677-009-9186-2
