Abstract
Scenarios involving the introduction of artificially intelligent (AI) assistive technologies in health care practices raise several ethical issues. In this paper, I discuss four objections to introducing AI assistive technologies in health care practices as replacements of human care. I analyse them as demands for felt care, good care, private care, and real care. I argue that although these objections cannot stand as good reasons for a general and a priori rejection of AI assistive technologies as such or as replacements of human care, they demand that we clarify what is at stake, develop more comprehensive criteria for good care, and rethink existing practices of care. In response to these challenges, I propose a (modified) capabilities approach to care and emphasize the inherent social dimension of care. I also discuss the demand for real care by introducing the ‘Care Experience Machine’ thought experiment. I conclude that if we set the standards of care too high when evaluating the introduction of AI assistive technologies in health care, we have to reject many of our existing, low-tech health care practices.
Notes
Since this is the discussion I know best, I will mainly refer to the literature on care and robots. However, the arguments presented in this paper are relevant to other AI assistive technologies as well.
Note that it also hurts to see a person’s health gradually diminish. And it is also not always easy, when dealing with a terminally ill patient, for instance, to take and show the ‘right’ emotional attitude towards that person. What does a particular person at a given moment in time need most? Compassion? Encouragement? Which feelings should I show? Should I talk or should I listen?
The latter principles may also be understood as requirements that flow from the principle of (respect for) autonomy.
Although the principles of beneficence and justice seem to be positive principles, they are commonly used in a negative way.
I am aware that to add ‘enhancement’ to this list is very controversial and there are serious difficulties with defining what enhancement means. Therefore, I leave ‘enhancement’ out of my discussion in this paper.
See, for instance, experiments by Ishiguro and others with the ‘android father’: judging by the child’s eye movements, they found that the child responds to the android father as if it were her real father, while knowing that it is not the real father. For what the presence of a robot does to children, see for example the experiments by Nishio et al. (2007).
References
Anand P (2005) Capabilities and health. J Med Ethics 31:299–303
Beauchamp T, Childress JF (1994) Principles of biomedical ethics, 4th edn. Oxford University Press, New York
Coeckelbergh M (2007) Imagination and principles. Palgrave Macmillan, Basingstoke/New York
Decker M (2008) Caregiving robots and ethical reflection: the perspective of interdisciplinary technology assessment. AI & Soc 22(3):315–330
Kass L (1985) The end of medicine and the pursuit of health. In: Kass L (ed) Toward a more natural science. The Free Press, New York, pp 157–186
Nishio S, Ishiguro H, Hagita N (2007) Can a teleoperated android represent personal presence? A case study with children. Psychologia 50(4):330–342
Nozick R (1974) Anarchy, state, and utopia. Basic Books, New York
Nussbaum MC (2000) Women and human development: the capabilities approach. Cambridge University Press, Cambridge
Nussbaum MC (2006) Frontiers of justice: disability, nationality, species membership. The Belknap Press of Harvard University Press, Cambridge, MA
Nussbaum MC, Sen A (eds) (1993) The quality of life. Clarendon Press, Oxford
Ruger JP (2006) Health, capability, and justice: toward a new paradigm of health ethics, policy and law. Cornell J Law Public Policy 15(2):403–482
Sparrow R (2002) The march of the robot dogs. Ethics Inf Technol 4:305–318
Sparrow R, Sparrow L (2006) In the hands of machines? The future of aged care. Minds Mach 16(2):141–161
Acknowledgments
Thanks to Nicole Vincent, Nicholas Munn, Aimee van Wynsberghe, and other participants of the International Applied Ethics Conference 2008 (Hokkaido University, Sapporo, Japan), the January 2009 research seminar of the Philosophy section at Delft University of Technology, and the Good Life meetings at the Philosophy Department of Twente University for the discussions we had about robots and care. I also wish to thank the anonymous reviewers for their helpful comments, which improved the quality of my arguments.
Cite this article
Coeckelbergh, M. Health Care, Capabilities, and AI Assistive Technologies. Ethic Theory Moral Prac 13, 181–190 (2010). https://doi.org/10.1007/s10677-009-9186-2