Abstract
In today’s highly dynamic societies, moral norms and values are subject to change. Moral change is partly driven by technological developments. For instance, the introduction of robots in elderly care practices requires caregivers to share moral responsibility with a robot (see van Wynsberghe 2013). Since we do not know what elements of morality will change and how they will change (see van der Burg 2003), moral education should aim at fostering what has been called “moral resilience” (Swierstra 2013). We seek to fill two gaps in the existing literature: (i) research on moral education has not paid enough attention to the development of moral resilience; (ii) the very limited literature on moral resilience does not conceptualise moral resilience in relation to new technological developments. We argue that philosophical accounts of moral education need to do justice to the importance of moral resilience, and that a specific form of moral resilience should be conceptualised as “technomoral resilience” to underline the added value of cultivating moral resilience in relation to technomoral change. We illustrate the role of technomoral resilience in practice by looking at the context of elderly care. To take a first step towards an account of how technomoral resilience can be fostered in moral education, we propose that moral education should focus on a triangle of capacities: (1) moral imagination, (2) a capacity for critical reflection, and (3) a capacity for maintaining one’s moral agency in the face of disturbances.
1 Introduction
In today’s highly dynamic societies, moral norms and values are subject to change. Moral change is partly driven by technological developments. Given the dynamic character of morality, the goals of moral education are not static. What a moral agent needs to know and be able to do is subject to change. For instance, the introduction of robots in elderly care practices disrupts those practices and requires caregivers to delegate certain moral responsibilities to an artefact, to take up new responsibilities or to share moral responsibility with a robot (see van Wynsberghe 2013). Interaction with care robots and the resulting human-robot relationships require a refinement of existing moral skills and sensibilities and perhaps even novel moral skills.
Since we know that some elements of morality will change, partly in response to new technological developments - albeit we do not know which elements these are or how they will change (see van der Burg 2003) - moral education should aim at fostering what Swierstra (2013, 203) calls “moral resilience”.Footnote 1 Moral resilience is an emerging concept that has so far mainly been discussed in the literature on nursing ethics, where no particular attention has been given to its relation to new technological developments, despite the fact that technologies are pervasive in nursing practices (Rushton 2016, 2017, 2018; Defilippis et al. 2019; Monteverde 2014). As we understand the concept, it applies to two different levels: the individual and the systemic level. At the systemic level, it is the capacity to regain stability after destabilisation, which requires a certain degree of flexibility. At the individual level, it is the capacity to cope with moral disturbances without losing one’s identity as a moral agent. In the following, we also refer to the systemic level as the “level of moral practices”.
Moral education should be conceived of as an integral part of education (see Kitcher 2022, 9). Philip Kitcher (2022, 4) identifies three main general aims of education: the development of (i) a capacity for self-maintenance, (ii) a capacity to function as a citizen, and (iii) a capacity to live a fulfilling life. All of these capacities are closely linked to moral upbringing. In the last few decades, research on moral education has come to emphasise “character formation and virtue embodiment” (Engelen et al. 2018, 360). According to Engelen et al. (2018, 355), moral education has two main aims: (1) “enabling people to uphold society’s norms and standards, while (2) developing people’s critical and creative capacities”. In the light of technomoral change, the latter aim is particularly important. The dynamic character of morality requires us to overcome the moral conservatism characteristic of most people, which “frustrates dealing in an open and fruitful manner with technology’s challenges to morality […] [and] keeps the door closed to techno-moral learning” (Swierstra 2013, 215). It constitutes an obstacle to the requirement, identified by van der Burg (2003, 29), of incorporating openness to change into our character.
We have identified two gaps in the existing literature: (i) research on moral education has not paid enough attention to the development of moral resilience; (ii) the very limited literature on moral resilience does not conceptualise moral resilience in relation to new technological developments. This paper seeks to begin filling both gaps. We argue that philosophical accounts of moral education need to do justice to a specific form of moral resilience, which should be conceptualised as “technomoral resilience” to underline the added value of cultivating moral resilience in relation to technomoral change. By looking at a particular context in which technomoral change is happening, the context of elderly care, and demonstrating how the (anticipated) changes require technomoral resilience, we contribute to the further development of the concept of moral resilience, which is a “concept under construction” (Rushton 2016, 112). To take a first step towards an account of how technomoral resilience can be fostered in moral education, we propose that moral education should focus on a triangle of capacities: (1) moral imagination, (2) a capacity for critical reflection, and (3) a capacity for maintaining one’s moral agency in the face of disturbances.
We start by presenting the emerging paradigm of technomoral change and the idea that such change calls for the development of technomoral resilience. Subsequently we apply the technomoral change paradigm to the context of elderly care (Sect. 2). Having described the possible changes and disruptions in practices of elderly care, we then develop the concept of technomoral resilience and apply it in that context (Sect. 3). This leads us to a discussion of why and how technomoral resilience should be cultivated as part of moral education (Sect. 4). Finally, we summarise our main points and suggest some avenues for further research (Sect. 5).
2 Technomoral Change in the Context of Elderly Care
2.1 Technomoral Change
Rather recently, the phenomenon of moral change has become a topic of interest for philosophers. There is a growing body of literature on different forms of moral change, including moral progress and moral revolutions (e.g. Baker 2019; Buchanan & Powell 2018; Eriksen 2020; Hermann 2019; Kitcher 2021). Traditionally, the issue of moral change has received little attention from philosophers, who by and large understood morality as something universal and unchanging and searched for a firm foundation of morality (see van der Burg 2003). Now that the topic of change is being addressed by more and more authors, it is striking how little attention is paid to technology, which arguably plays a crucial role in the processes in which moral norms, attitudes and practices are altered.Footnote 2
The emerging paradigm of “technomoral change”, which is gaining popularity among philosophers of technology, seeks to capture the often underappreciated role of technology in processes of moral change (Boenink, Swierstra and Stemerding 2010; Swierstra 2013). The main idea behind this paradigm is that technology and morality co-evolve (Swierstra 2013, 205). This is a variation of the view, widely embraced by scholars of Science and Technology Studies (STS), that technology and society co-evolve. Technological developments change not only social norms, but also moral norms.
Distinguishing between moral and social norms is notoriously difficult. In their book on the nature of norms, Geoffrey Brennan et al. (2013, 57–89) reject two prominent views of the distinction: the view that the two kinds of norms are distinguished in terms of their content (“Content View”) and the view that they are distinguished by formal properties (“Form View”). The authors argue that what instead distinguishes the two are their grounds (“Grounds View”). Unlike social norms, moral norms are not even partly grounded in social practices. That is, to those who accept the norm, the prohibition on lying does not seem to be justified even in part by the fact that there exists a practice of truth telling. By contrast, the requirement to wear clothes when walking on the street does seem to be justified at least in part by the practice of wearing clothes outside of one’s house instead of running around naked. Like Brennan et al., we are not concerned with moral norms understood as “objectively valid rules”, but with moral norms understood as “social facts”, i.e., “accepted rules or normative principles” (ibid., 57).Footnote 3
We understand morality not as a rigid system of rules, but as a conglomerate of practices, norms, and values directed towards respectful interaction and individual as well as collective flourishing. Therefore, we shall in the following mostly use the term “moral practices” instead of “moral system”. Examples of moral practices are “providing and receiving moral reasons”, “helping someone in need”, “debating about a moral issue”, “morally praising and blaming”, and “distributing something applying a standard of fairness”. There is no sharp boundary between moral and non-moral practices (see Hermann 2015, 32). Care practices, e.g., can be conceived of as moral practices in so far as they serve a moral function or have a moral purpose.
While the ways in which morality can shape technology are studied, for instance, within the growing field of “design for values” (see e.g. Friedman et al. 2006; Jacobs 2020), proponents of the technomoral change paradigm focus on ways in which technological developments contribute to changes in morality, i.e., on “technology-induced moral change” (van den Hoven, Lokhorst and van de Poel 2012, 153). Swierstra (2013, 203) conceptualises technology’s influence on morality in terms of the de- and restabilisation of moral routines.Footnote 4 The processes of de- and restabilisation through technology are elucidated by the concept of technological mediation (see Ihde 1990; Verbeek 2011). Technology mediates human perception and action in various ways. Swierstra (2013, 208–212) distinguishes several mechanisms by which technologies affect morality via mediating how humans perceive and act in the world. For instance, technology can make us aware of stakeholders we were not aware of before (e.g. obstetric ultrasound making the foetus visibleFootnote 5), but it can also hide stakeholders from our view (e.g. military drones enabling killing from a distance where the people killed are not directly confronted). It can also affect perception more fundamentally, disrupting established ways of perceiving the order of the world and our place in the world. Such disruptive changes have recently been conceptualised as instances of “technosocial disruption”, which is the disruption by technology of “social relations, institutions, epistemic paradigms, foundational concepts, values, and the very nature of human cognition and experience” (Hopster 2021, 1). Among the objects of technosocial disruption are fundamental moral concepts such as autonomy, moral agency, or responsibility.Footnote 6
An emphasis on the dynamic character of morality does not imply an outright relativism. As van der Burg (2003, 16) notes, in morality we have a combination of stability and change, and stability is dominant. We hold that allowing for moral change is compatible with the view that some moral values and demands are universal and that it is possible to make valid moral judgements about practices from the past, such as slavery, and about practices from other cultures (see Hermann 2015, Chap. 7). Prohibitions on murder and lying and restrictions on the use of violence are examples of so-called “moral universals” (see ibid., 180; Rachels 1986, 622; Redfield 1973, 140 f.). They can be found in all human societies throughout human history and seem to be necessary for a viable human society. Unless the circumstances of human social life or the human condition itself were to be altered radically, for instance through extreme human enhancement, such rules will not change. What is subject to change, by contrast, includes the concrete manifestations of such general rules, e.g., to whom a prohibition applies or what counts as acting in accordance with it. This is where we can observe cross-cultural variation and change over time. For instance, Ifugao law excludes strangers from the prohibition on killing (Hoebel 1968, 100 ff.).
However, moral change also happens at a more fundamental level, most notably the level of fundamental moral concepts. An example is the concept of dignity, which has shifted - inter alia through social change - from a designation for the nobility of the ruling class to the notion of universal and absolute worth of the person, and has more recently been discussed as a personal attitude of embodied self-respect (see Bauer 2022). Another example is the concept of (good) care. While care is “a universal aspect of human life” in the sense that all human beings have needs that can only be met through the help of others (Tronto 1993, 110), what (good) care means varies between societies and groups and is subject to change.
As Swierstra argues, technomoral change requires “technomoral learning”, which in turn requires training of our capacities for “techno-moral imagination” (2013, 216). A radical destabilisation of morality or of a specific set of moral practices “invites us to reflect, discuss, learn and improve” (ibid., 215). Through technomoral learning, we can develop what Swierstra calls “moral resilience”, an “attitude that is simultaneously robust and flexible” (2013, 203) and allows for a restabilisation after an experience of destabilisation.
2.2 The Introduction of Care Robots in Elderly Care
As Shannon Vallor (2011, 251) points out, “we stand on the threshold of welcoming robots into domains of human activity that will expand their presence in our lives dramatically”. The development and (potential) use of robots in elderly care can be studied as an example of technomoral change. Care robots, or “carebots”, are “robots designed for use in home, hospital, or other settings to assist in, support, or provide care for the sick, disabled, young, elderly or otherwise vulnerable persons” (Vallor 2011, 252). Sharkey and Sharkey (2012, 27) distinguish three main possible (future) uses of robots in elderly care: “(1) to assist the elderly, and/or their carers in daily tasks; (2) to help monitor their behaviour and health; and (3) to provide companionship”.
Today, there are robots for lifting patients (“RIBA”), robots that facilitate communication with family members (“Double”) and multifunctional robots like “Zora”, a small humanoid social robot that can give instructions for daily routines, foster patients’ mobility, or pick up trash in care facilities (see Niemelä and Melkas 2019). The social robot PARO is currently used, inter alia as an artificial therapy animal, in over 80% of Danish care institutions (see Hung et al. 2019), and in 2018 Japan started a large-scale initiative for robotisation in elderly care (Hurst 2018). Up to now, however, care robots have been introduced rather reluctantly in many countries (Hung et al. 2019, 179). Despite the widespread presence of social robots such as PARO in care settings, robots with physical functions like lifting or cleaning are more widely accepted than social robots (ibid., 180). The general scepticism can be explained, inter alia, by the lack of involvement of relevant stakeholders and by insufficient consideration of the social culture and system of care in the design and development process (ibid., 181).
The introduction of care robots is not an inevitable fate. We do not advocate a position of technological determinism. However, economic pressures and the shortage of caregivers increase the likelihood of a widespread introduction of robots in elderly care. Standards of good care should guide decisions concerning whether and which tasks are taken over by robots. At the same time, it should be kept in mind that those standards might themselves change through the introduction of care robots. It should moreover be noted that care robots do not enter care practices that are devoid of technologies, as care practices are already pervaded by technological artefacts, such as mechanical beds or heart monitoring devices (van Wynsberghe 2013, 422).
Reflection on the ethical implications of care robots is often accompanied by the imagination of doom scenarios (Coeckelbergh 2015). Concerns are not only raised about the concrete consequences for the care-receivers, but also about an erosion of standards of care. There are worries that the replacement of human care workers by robots may lead to a destabilisation of principles of reciprocity and mutual recognition, a habituation to a form of “slavery”, or a confusion about moral status (robots become humanised and are treated like persons) (see Nyholm 2020, ch. 8).
For the sake of illustrating how technomoral change unfolds and requires technomoral resilience, we distinguish three (potential) moral changes related to the introduction of care robots in elderly careFootnote 7: (1) a change in the meaning of “good care”; (2) a new distribution of roles and moral responsibilities (between the human caregivers, the care robots and the elderly); (3) a new self-understanding of human caregivers as moral agents. Before we address these three changes in turn, let us reflect for a moment on the notion of good care. According to care ethicist Joan Tronto (1993), good care is the result of a caring attitude in combination with a caring activity. Tronto (1993, 105–137) distinguishes four phases of a care practice and four ethical elements arising from them. In the first phase (“caring about”), the caregiver recognises that someone else is in need and what their needs are (attentiveness). In the second phase (“taking care of”), the caregiver takes responsibility for meeting those needs (responsibility). In the third phase (“care-giving”), the caregiver performs an action to fulfil the needs of an individual (competence). In the fourth phase (“care-receiving”), the caregiver recognises that the care-receiver responds to the care that is being provided. This requires responsiveness on the part of the care-receiver. For the context of elderly care, this means that good care requires interaction between the elderly and their caregivers in which the caregivers are attentive to the needs of the elderly (e.g. assistance with daily activities such as washing and eating, companionship, or someone who listens), take responsibility for meeting these needs, and carry out the required actions competently (e.g. washing and feeding them without hurting them). The elderly need to play an active role in this interaction by responding to the acts of care-giving and signalling to the caregivers whether their needs are being met.
Let us now turn to the change in the meaning of “good care”. It is a widespread conviction that good care must be provided by human beings, not by machines, which are associated with coldness (see Coeckelbergh 2015, 455). Machines cannot substitute for the experience of human touch, which can be regarded as an expression of mutual recognition within a dignified relationship (van Wynsberghe 2016, 416). The implementation of care robots challenges the concept of good care. How can care that is partly provided by care robots be good care? What can and should the role of care robots be in the four phases of a care practice? How, for instance, can responsiveness be realised? We can envision technologies that assist the elderly in responding to the acts of caregiving, helping them to communicate to the caregivers, both human and non-human, how they feel and to what extent their needs have been met.
Answering questions like the above requires the exercise of technomoral imagination. Attempts to imagine future care practices must consider, among other things, the expected digital literacy of the future elderly. The young digital natives of today will be the elderly of tomorrow (Coeckelbergh 2015, 457). While their level of proficiency in using technologies may lead them to reject the introduction of certain tools (as in recent activist movements against facial recognition software), in many cases they may not only be more accustomed to the use of digital technologies, but even demand it (see ibid.).
Regarding the second change, the care robot will “alter the distribution of responsibilities and roles within the [socio-technical] network as well as the manner in which the practice takes place” (van Wynsberghe 2013, 412). The decision-making of nurses and patients becomes “a hybrid affair between the nurse/patient and existing technologies” (ibid.). Van Wynsberghe discusses the example of a human-operated robot for lifting and argues that here responsibility and competence become “shared endeavours between the human and the robot”, and “responsibility for the safety of the practice becomes a hybrid event between the human care-giver and the robot” (ibid., 428). This echoes the approach of Peter-Paul Verbeek, who conceives of moral agency as distributed amongst humans and machines and thus as a “hybrid affair” (2014, 77). Verbeek (2014) pleads for a re-conceptualisation of concepts such as “moral agency”, “intentionality”, “responsibility”, and “freedom”. He suggests viewing these as resulting from an interplay of human agents and technologies. Contrary to the traditional Western understanding, they are not exclusively human. Moral practices are thus “coproductions of technologies and humans” (ibid., 76).
The moral change in our example concerns the distribution of moral responsibility. The moral responsibility for the safety of the patient is distributed over the different actors involved, including the robot. The claim is not that robots can bear moral responsibility on their own, in the traditional sense of the concept, but that responsibility in a new sense can be attributed to a human-robot constellation.Footnote 8 The changes in how roles and responsibilities are distributed are likely to give rise to confusion and uncertainty. For some time, human caregivers will probably be confused about their roles and responsibilities, and the elderly will be uncertain about what they can expect from robots and nurses. Human-robot interaction will suffer from wrong expectations, miscommunications, a lack of orientation, etc., especially in the absence of adequate training and knowledge (Niemelä and Melkas 2019, 191).
Thirdly, as the decision-making of nurses and patients becomes a hybrid affair, the self-understanding of caregivers as moral agents will change. The technologies with which they interact so closely in the provision of care mediate not only their perceptions and actions but also how they see themselves as moral agents in their role as caregivers. For instance, a caregiver might come to understand themselves as being, together with the robots, jointly responsible for the well-being of the elderly, while in the past they had understood themselves as bearing this responsibility all by themselves. This transition can be expected to be accompanied by uncertainty and feelings of distress. We can imagine a nurse undergoing a kind of identity crisis, due to their roles and responsibilities having become unclear.
As stated above, the described disruptions raise concerns about a devaluation of care. This is related to the concern that the introduction of care robots could deepen structural problems of elderly care. For instance, financial resources could be redirected to the robots instead of supporting human caregivers (van Wynsberghe 2021, 6). Moral learning and a cultivation of moral resilience surely cannot eliminate all of these risks, but they can contribute to a differentiated and balanced reaction to the change that happens, thus complementing the necessary changes at the political and institutional level. It cannot be stressed enough that structural changes such as better payment for care workers are urgently called for, and that it is not our intention to place the burden of ensuring the continuous provision of good care entirely on individuals.
3 Technomoral Resilience
How should we understand the capacity of technomoral resilience, which, as we argue, is necessary for dealing with processes of technomoral change? The concept of “resilience” was originally used in the field of engineering to describe the amount of disturbance a material can sustain without breaking. It describes “the capacity of a system to absorb disturbance and reorganize while undergoing change so as to still retain essentially the same function, structure, identity, and feedbacks” (Walker et al. 2004, 4). The term is currently applied to a wide range of systems, from the psychological structures of individuals to ecosystems to socio-economic and political systems (see Southwick and Charney 2018; Holling 1973; Walker and Salt 2006; Chandler 2014).
Our understanding of resilience is procedural: it involves a movement from a state of stability through destabilisation and restabilisation to a new and modified stability. A person who has developed moral resilience is able to regain a new form of stability as a competent moral agent after the experience of destabilisation.Footnote 9 A resilient set of moral practices is able to regain the ability to fulfil the core functions of morality (coordinating interaction, fostering eudaimonia) if confronted with serious disturbance.
Although it makes radical change difficult, the stabilising function of resilience is indispensable, and it does not imply a regression to moral conservatism. Adapting Kristie Dotson’s understanding of the resilience of epistemological systems to moral systems (practices), we hold that in the absence of moral resilience, “we would be unable to [navigate the world as moral agents] and, worse yet, we would be unable to detect noticeable changes or the need for significant changes within our [moral] systems” (Dotson 2014, 122). Moral resilience is therefore a condition of being a moral agent who can cope with change and co-shape it. A “resilience paradox” consists in the fact that too much adaptation or acceptance of disturbances can prevent change (Mahdiani & Ungar 2021, 703) and ultimately lead to a rigidity that makes a system more vulnerable to breakage. Therefore, we want to build on an idea of resilience that combines adaptability as the capacity to control change and reestablish stability with transformability - the capacity to “configure an entirely new stability landscape” and “to create untried beginnings […] when existing […] structures become untenable” (Walker et al. 2004, 5).
Developing technomoral resilience does not simply mean submitting to technological developments, adapting opportunistically, or “going with the flow”. In some cases, opposition to the introduction of a technology might be more desirable than a resilient reaction to it. However, we hold that in many if not most cases, the choice is not between outright rejection and wholehearted embrace of a technology. One should rather critically reflect upon different ways of designing, implementing, and using the technology. The introduction of new technologies will confront individuals with moral changes and challenges even in cases where the technology itself is not morally reprehensible per se and brings advantages for the parties involved (e.g., relieving the workload of caregivers). Therefore, we argue for a good balance between resistance and technomoral resilience.
We want to go beyond the existing discourse on moral resilience by putting stronger emphasis on the interdependence of the individual level and the level of practices. The field of nursing ethics, where most attention is currently paid to this concept, focuses on the capacity of the individual to navigate moral distress. Here, moral resilience is not understood as a reaction to moral change - and even less to technomoral change. It operates on the individual and situational level. Moral resilience in this sense allows an individual caregiver to cope with concrete situational moral dilemmas, like triage cases, without losing hope, meaning in life, and individual integrity (see Rushton 2016 and 2018). Swierstra’s idea of moral resilience is not described as a reaction to a situational dilemma, but as a reaction to a highly complex and long-lasting process of technomoral change that can culminate in dilemmatic situations: “technology destabilizes moral routines, which then provokes ethical reflection and discussion, which then do or do not result in new ethical answers that re-stabilize into new moral routines” (Swierstra 2013, 203). As he puts it metaphorically, the stability of “cold morality” is turned into “hot ethics” by going through a crisis (ibid., 207). Here, moral resilience allows for deep and continuous transformations of moral practices without changes so radical that they transgress the threshold beyond which the practices can no longer fulfil the core functions of morality. However, this more structural account of moral resilience is inseparable from the individual perspective: the confrontation with a profound change of moral frameworks through technological change in a specific field of agency can destabilise the individual moral agent in a fundamental sense by challenging their self-understanding as a competent moral agent. If, for instance, the field of elderly care is deeply transformed and the standards of good care that served as an orientation for the agent are destabilised, this can also lead to a crisis of trust in one’s capacities to make moral judgements and decisions. Individual moral resilience and resilience at the level of practices are mutually conditional. Individuals can both transform and stabilise moral practices. At the same time, the identity of moral agents is constituted within practices, and is both transformed and (re-)stabilised in practice.
Maintaining moral competence requires being actively involved in the ethical discussions that generate new answers to normative challenges, which result in new moral routines in the field of care. It is important to make sure that the individual agents who realise practices of good care remain aware of their capacity to help shape the normative standards and do not feel like passive playthings of technomoral change. Technomoral resilience is not to be reduced to a neoliberal ideal of heroic acceptance and endurance, or of adapting to social expectations and economic pressure to succeed (Evans and Reid 2015; Slaby 2016). “Individualis[ing] societal issues in problematic ways” (Jackson 2020, 137) can lead to an uncritical and passive basic attitude, thus suppressing ethical and political agency. This criticism is highly relevant in the field of elderly care, where working conditions are often problematic and the introduction of new technological tools is connected to economic interests. Fostering technomoral resilience through education must thus aim at strengthening individuals as competent moral agents who can both cope with change and co-shape it.
4 Technomoral Resilience as a Goal of Moral Education
4.1 Technomoral Resilience as a Goal of Professional Moral Education in Elderly Care
How would we describe a nurse who has developed technomoral resilience? Imagine a nurse who reflects critically upon the introduction of an autonomous robot for lifting, such as the RIBA robot (see van Wynsberghe 2013, 426). The nurse does not simply refuse to make use of the robot or quit their job, nor do they uncritically embrace the new technological support. Rather, they try to figure out - in interaction with the robot and together with other nurses as well as patients - how the robot can be used in a way that contributes to the realisation of values such as trust and privacy, how responsibilities need to be redistributed, in what ways their perceptions and actions are mediated by the new technology and whether these mediations are desirable, what features of the technology should be improved, and so on. This process of reflection does not take a purely theoretical form. It also happens through experimentation: trying something out by giving the robot a particular task and modifying that task in the light of how well the robot fulfilled it, including how the fulfilment of the task by the robot affected the patient, the nurse, and the relationship between the two. The described process of experimenting with new forms of human-robot interaction and establishing new human-robot as well as new human-human relationships requires an environment in which the different actors feel safe. The forms of experimentation involved are partly forms of self-experimentation, which make the actors particularly vulnerable. This is all the more true if one proceeds from the relatively simple example of lifting to social robots. For the care-receivers, too, technomoral resilience would help them to actively shape the emerging interactions and relationships. An elderly person who has this skill would, for instance, experiment with different forms of interacting with the robots, share their experiences with the human caregivers, and articulate desires concerning the design features of the robots. They would point out what sorts of tasks they would still want human caregivers to fulfil because otherwise some of their fundamental needs would not be met.
Designers of care robots should anticipate the threat to trust posed by the robot for lifting and - ideally together with caregivers and care-receivers, in a process of co-creation - think about ways of establishing a trusting relationship between human caregiver and care-receiver other than via the practice of lifting (see ibid., 428). The cultivation of technomoral resilience is not merely an individual task but can and should be supported by collective efforts. It requires complex knowledge and skills, which should be promoted in professional training from the beginning. The openness to experiment that is included in technomoral resilience should, for instance, be supported by the integration of experimental settings in education. Case studies about the introduction of different types of care robots show a strong desire both among caregivers and care-receivers for more knowledge and education about such robots (Niemelä and Melkas 2019, 194). We believe that a specific form of moral education that addresses the challenges of technomoral change and fosters technomoral resilience should be implemented in professional education, for example that of care workers, but also in general moral education (e.g. in ethics or philosophy classes at school), starting from an early age.
Existing professional nursing training programmes, such as “Bachelor Nursing 2020” in the Netherlands (Lambregts et al. 2020) or the programmes of the American Association of Colleges of Nursing (2021), aim on the one hand at educating nurses in the implementation of technologies, such as e-health tools, within frameworks of safe care. On the other hand, they aim at fostering capacities for ethical reflection, mainly to guarantee the correct implementation and application of ethics codes and compliance with legal, professional, and regulatory standards. We think that in many fields of professional education, technological and ethical education are not sufficiently connected, and questions of technomoral change or potential disruptions of competent moral agency are not addressed explicitly enough. We suggest that moral education put more emphasis on moral imagination and on training in critical reflection tailored to support individuals in sustaining competent moral agency when confronted with technomoral change.
4.2 Technomoral Resilience as a Goal of General Moral Education
Cultivating technomoral resilience as an individual is demanding. It means integrating openness to change into a self-understanding that nevertheless remains stable, and cultivating one’s capacity to regain stability after destabilisation, thus maintaining one’s moral agency in confrontation with disturbances. The individual must learn to navigate technomoral change both in the sense of adapting to changing practices and in the sense of actively co-shaping the change. We believe that these complex challenges can best be met by individuals who are prepared to deal with technomoral change from early on in their moral education. Further concretisation of how technomoral resilience can be cultivated in moral education can build on theories of moral education that investigate how to deal with discussable or changing elements of morality (Hand 2014 and 2017)Footnote 10 as well as on theories of education towards autonomy (Levinson 1999; Taylor 2017) and civic empowerment (Levinson and Solomon 2021). Without the term being used explicitly, moral resilience plays a role in any approach that aims at strengthening morally competent agents in dealing with particular challenges (e.g. racial segregation and marginalisation, see Levinson 2014). However, direct references to technomoral change are still rare in the current debate on moral education.
To better understand what fostering technomoral resilience as a goal of education involves, we would like to reformulate the overall aims of (moral) education introduced earlier (based on Kitcher 2022 and Engelen et al. 2018) as follows: supporting (i) the creative and (ii) the critical capacities that are necessary to cope with upcoming challenges and to jointly co-shape the change, and (iii) supporting the self-maintenance of competent moral agents who can uphold the functionality of moral practices (enabling good interaction and the possibility of a flourishing lifeFootnote 11) in confrontation with a destabilisation through technological change. We propose that moral education should focus on a triangle of capacities that are fundamental for cultivating technomoral resilience: (1) moral imagination, (2) a capacity for critical reflection, and (3) a capacity for maintaining one’s moral agency in the face of disturbances.
In the case of adults and adolescents, the development of moral imagination (1) can be supported by cultivating curiosity and by teaching techniques of imagining and playing through concrete scenarios of the application of novel or future technologies and their potential impact on morality. This can happen, e.g., in the form of narratives, or by building models (Ostroff 2016, 68), for instance of future cities or technologised nursing homes tailored to the needs of the elderly. We suggest supporting students in developing a rich moral vocabulary to describe and evaluate their experiences and emotional reactions (Rushton 2017, 116). Instead of learning within a real situation, preparation for future developments can be based on learning within a simulated, imagined future scenario, for instance in serious video games (see Farber and Metro-Roland 2021, 14).
We assume that “the capacities that are characteristic of a morally competent agent are inculcated through training”, which goes beyond mere drill by involving “the giving and receiving of reasons” (Hermann 2015, 129) based on a training of critical reflection (2). Students should train their moral competence by way of critically evaluating the imagined scenarios, but also in concrete confrontation with already existing technologies and reflection on their moral impact. Furthermore, they should practise understanding different moral evaluations of the scenarios based on different moral standpoints and individual perspectives. They should also learn to reflect critically about fundamental moral concepts that are involved in these evaluations and about potential changes of these concepts (such as “dignity”, “autonomy” or “good care”). A comprehensive training of giving and receiving reasons combined with fostering moral imagination can help the learners to transfer insights from imagined and actually experienced situations to new cases and challenges (see Annas 2011, 19 & 23; Hills 2009, 102).
These ideas could be incorporated in school curricula. They would, for instance, complement current suggestions for including teaching units on technology in Dutch curricula, which include learning about ethical dilemmas resulting from technological developments and about how technology affects the pupils themselves.Footnote 12 Similarly, our ideas for cultivating technomoral resilience could complement applied ethics units developed in the German context.Footnote 13 Our impression is that with their focus on theoretical analysis and argumentation, those units do not sufficiently encourage an imaginative engagement with technological developments.
The learners’ capacity to maintain their moral agency (3) is automatically promoted by the strengthening of their moral imagination and reflection skills. This does not protect them against feeling moral distress in confrontation with new technologies that challenge their moral outlook. Studies have shown that the moral distress of professionals (in healthcare) can be reduced by presenting ethical theories as “normative playgrounds” that help them to “name the moral stressors” and to “frame […] experiences of moral complexity” (Monteverde 2014, 394; see also Monteverde 2016).Footnote 14 In part, the capacity of the moral agent to cope with moral distress caused by technomoral change can probably only be acquired in an arduous learning process, on the basis of sometimes painful real-life experiences. However, the experimental settings and self-experimentation under careful guidance and in a safe environment that we proposed above can also support this coping capacity, potentially “aided by exercises in pre-rehearsal” inspired by the Stoic tradition (Sherman 2021, 67). In education aiming at technomoral resilience, pre-rehearsal does not only mean learning to manage emotions, but includes self-reflection about one’s future role and (moral) agency in the process of co-shaping the change. This can be supported by strategies of education towards autonomy: fostering self-questioning and a general willingness to reconsider moral beliefs in the light of new insights and changed circumstances (Levinson 2004, 227) and cultivating the intellectual virtue of open-mindedness (see Taylor 2017).
Our understanding of technomoral resilience corresponds to an understanding of moral learning that cannot be described as a linear process of optimisation. Walter Reid (2006) compares the process of gaining resilience to carrying a full cup of water to a stateroom on a boat in rough seas. In a calm harbour, “the solution is a simple optimization problem”: you just have to find the right walking tempo (Reid 2006, x). At sea you need to learn to “absorb disturbance”, to balance your steps well and adjust the position of your body to the waves (ibid.). Technomoral change can take our boat to a rough sea, and learning to move along smoothly on the boat and navigate in unknown waters requires processes of trying out different strategies and practices, openness to experimentation, as well as permanent adjustments and restabilisations.
5 Conclusion
In this paper, we have argued that the moral changes and disruptions brought about by the implementation of new and emerging technologies such as care robots call for the cultivation of technomoral resilience, which should become a goal of professional and general moral education. We developed a procedural concept of technomoral resilience as a capacity for restabilisation after destabilisation. To take a first step towards an account of how technomoral resilience can be cultivated in moral education, we proposed that moral education should focus on a triangle of capacities: (1) moral imagination, (2) a capacity for critical reflection, and (3) a capacity for maintaining one’s moral agency in the face of disturbances.
In future research, we plan to study the changes and disruptions in the care context empirically by interviewing and observing elderly people, caregivers and designers of care robots. From this we might also gain new insights about what skills and traits these actors need. Moreover, we would like to explore the role of technomoral resilience in other contexts and to develop more elaborate suggestions for its cultivation from an early age onwards. It is our hope that scholars from different disciplines will join us in this endeavour.Footnote 15
Notes
This holds both for the education of children and for that of adults, for instance in a professional setting.
For different ways in which technology contributes to moral change see Hopster et al. 2022.
An anonymous reviewer has pointed out that we do not clearly distinguish between descriptive and normative moral change. In some places, it might appear as if the changes that we are discussing were of a merely descriptive nature, i.e., as if only people’s moral attitudes were changing and not what actually is morally right, wrong, etc. We reject the idea of entirely attitude-independent moral truths. Though believing that X is morally right does not make X morally right, the moral rightness or wrongness of X is not independent of the standards of actual moral practices. Thus, we hold that there is no sharp distinction between descriptive and normative moral change.
Moral routines can be conceived of as a subset of moral practices. While many moral practices have the characteristics of routines, this does not hold for the whole range of moral practices. Hotly debating the potential ethical implications of an emerging technology is not a moral routine.
This example is discussed extensively in Verbeek 2008.
The disruption of fundamental moral and ontological concepts is currently studied within the Dutch research programme Ethics of Socially Disruptive Technologies: www.esdit.nl.
This list of changes is not intended to be exhaustive.
Even before the introduction of the robot, responsibility was distributed among humans and technologies other than robots, but it is assumed that the introduction of the robot alters the distribution significantly.
By “competent moral agent” we mean “an agent who knows what morality requires in many concrete cases […,] is able to evaluate situations morally, to grasp moral reasons, and to act in accordance with them” (Hermann 2015, 125). A person who possesses moral competence moreover acts out of moral motives. They grasp moral reasons and are moved by them.
Hand distinguishes between robust moral standards that are rationally justified as being fundamental for social interaction and moral standards open to discussion. For the first, he suggests moral formation and directive moral inquiry (directive means that “a teacher has the aim of persuading children that the moral standards under investigation are, or are not, justified”), for the latter teachers should take a “deliberately noncommittal” stance (2014, 526).
The capacity to live a fulfilling or “flourishing” life can be further differentiated into “the capacities for economic productivity, autonomy, democratic competence, healthy personal relationships, treating others as equals, and personal fulfillment” (Brighouse et al. 2016, 8).
https://curriculum.nu/download/mm/Voorstellen-ontwikkelteam-Mens-en-Maatschappij.pdf, accessed on 18 November 2022.
http://www.bildungsplaene-bw.de/,Lde/LS/BP2016BW/ALLG/GYM/ETH/IK/11-12-LF/04/02, accessed on 18 November 2022; http://www.bildungsplaene-bw.de/,Lde/LS/BP2016BW/ALLG/GYM/ETH/IK/11-12-LF/01/03, accessed on 18 November 2022.
Monteverde’s approach to professional moral education is based on a pragmatist account.
Several people have helped us to improve this paper. We thank the audience of our presentation of an earlier version of this paper at the conference “Changing Values, Changing Technologies” (Delft 2021) and at the Biannual 4.TU Ethics Conference (2021), the participants in the Technology & Human Values colloquium at the University of Twente, the participants in the research colloquium of the Foundations & Synthesis research line of the ESDiT programme, and the participants in a workshop organised by the guest editors of this special issue. Special thanks are due to Gregor Hochstätter, Peter Königs, Björn Lundgren, Alex Madva, Christian Miller, and two anonymous reviewers.
References
American Association of Colleges of Nursing (2021) The essentials: core competencies for professional nursing education. https://www.aacnnursing.org/Education-Resources/AACN-Essentials
Annas J (2011) Intelligent Virtue. Oxford University Press, New York
Baker R (2019) The structure of moral revolution: studies of changes in the morality of abortion, death, and the bioethics revolution. The MIT Press, Cambridge MA
Bauer K (2022) ‘Do not make yourself a worm’: reconsidering dignity as a duty to oneself. In: Sarat A, Pele A, Riley S (eds) Human dignity (Studies in Law, Politics, and Society, vol 88). Emerald Publishing Limited, Bingley, pp 23–40. https://doi.org/10.1108/S1059-433720220000088002
Boenink M, Swierstra T, Stemerding D (2010) Anticipating the interaction between technology and morality: a scenario study of experimenting with humans in bionanotechnology. Stud Ethics Law Technol 4(2):1–38. https://doi.org/10.2202/1941-6008.1098
Brennan G et al (2013) Explaining norms. Oxford University Press, Oxford and New York
Brighouse H et al (2016) Educational goods and values: a framework for decision makers. Theory and Research in Education 14(1):3–25. https://doi.org/10.1177/1477878515620887
Buchanan A, Powell R (2018) The evolution of moral progress. Oxford University Press, Oxford UK
van der Burg W (2003) Dynamic ethics. J Value Inq 37:13–34. https://doi.org/10.1023/A:1024009125065
Chandler DC (2014) Resilience: the governance of complexity. Routledge, Abingdon, Oxon
Coeckelbergh M (2015) Care robots and the future of ICT-mediated elderly care: a response to doom scenarios. AI & Soc 31(4):455–462. https://doi.org/10.1007/s00146-015-0626-3
Defilippis TMLS, Curtis K, Gallagher A (2019) Conceptualising moral resilience for nursing practice. Nurs Inq 26(3):e12291. https://doi.org/10.1111/nin.12291
Dotson K (2014) Conceptualizing epistemic oppression. Soc Epistemol 28(2):115–138. https://doi.org/10.1080/02691728.2013.782585
Engelen B et al (2018) Exemplars and nudges: combining two strategies for moral education. J Moral Educ 47(3):346–365. https://doi.org/10.1080/03057240.2017.1396966
Eriksen C (2020) Moral change: dynamics, structure and normativity. Palgrave Macmillan, New York
Evans B, Reid J (2015) Exhausted by resilience: response to the commentaries. Resilience 3(2):154–159. https://doi.org/10.1080/21693293.2015.1022991
Friedman B, Kahn PH, Borning A (2006) Value sensitive design and information systems. In: Zhang P, Galletta D (eds) Human-computer interaction in management information systems. M.E. Sharp, New York, pp 55–85
Hand M (2014) Towards a theory of moral education. J Philos Educ 48(4):519–532
Hand M (2017) A theory of moral education. Routledge, New York
Hermann J (2015) On moral certainty, justification, and practice: a wittgensteinian perspective. Palgrave Macmillan, Basingstoke
Hermann J (2019) The dynamics of moral progress. Ratio 32:300–331. https://doi.org/10.1111/rati.12232
Hills A (2009) Moral testimony and moral epistemology. Ethics 120(1):94–127. https://doi.org/10.1086/648610
Hoebel EA (1968) The law of primitive man. Harvard University Press, Cambridge MA
Holling CS (1973) Resilience and stability of ecological systems. Annu Rev Ecol Syst 4:1–23
Hopster JKG et al (2022) Pistols, pills, pork and ploughs: the structure of technomoral revolutions. Inquiry. https://doi.org/10.1080/0020174X.2022.2090434
van den Hoven J, Lokhorst G-J, van de Poel I (2012) Engineering and the problem of moral overload. Sci Eng Ethics 18:143–155. https://doi.org/10.1007/s11948-011-9277-z
Hung L, Liu C, Woldum E et al (2019) The benefits of and barriers to using a social Robot PARO in care settings: a scoping review. BMC Geriatr 19:232. https://doi.org/10.1186/s12877-019-1244-6
Hurst D (2018) Japan lays groundwork for boom in robot carers. The Guardian. https://www.theguardian.com/world/2018/feb/06/japan-robots-will-care-for-80-of-elderly-by-2020. Accessed 31 March 2022
Ihde D (1990) Technology and the lifeworld. Indiana University Press, Bloomington
Jacobs N (2020) Capability sensitive design for health and wellbeing technologies. Sci Eng Ethics 26:3363–3391. https://doi.org/10.1007/s11948-020-00275-5
Kitcher P (2021) Moral progress. Oxford University Press, New York
Kitcher P (2022) The main enterprise of the world: rethinking education. Oxford University Press, New York
Lambregts J et al., (2020) Bachelor Nursing 2020. Een toekomstbestendig opleidingsprofiel 4.0. https://www.venvn.nl/media/aadklpzc/opleidingsprofiel-bachelor-of-nursing-2020.pdf
Levinson M (1999) The demands of liberal education. Oxford University Press, New York
Levinson M (2004) Is autonomy imposing education too demanding? A response to Dr. de Ruyter. Stud Philos Educ 23(2–3):223–233. https://doi.org/10.1023/B:SPED.0000024424.69307.07
Levinson M (2014) No citizen left behind (ser. Educational psychology: critical pedagogical perspectives, 13). Harvard University Press, Cambridge MA
Levinson M, Solomon MZ (2021) Can our schools help us preserve democracy? Special challenges at a time of shifting norms. Hastings Cent Rep 51(1):15–22. https://doi.org/10.1002/hast.1224
Mahdiani H, Ungar M (2021) The dark side of resilience. Advers Resil Sci 2:147–155. https://doi.org/10.1007/s42844-021-00031-z
Monteverde S (2014) Undergraduate healthcare ethics education, moral resilience, and the role of ethical theories. Nurs Ethics 21(4):385–401. https://doi.org/10.1177/0969733013505308
Monteverde S (2016) Caring for tomorrow’s workforce: moral resilience and healthcare ethics education. Nurs Ethics 23(1):104–116. https://doi.org/10.1177/0969733014557140
Niemelä M, Melkas H (2019) Robots as social and physical assistants in elderly care. In: Toivonen M, Saari E (eds) Human-centered digitalization and services. Springer, Singapore, pp 177–197
Nyholm S (2020) Humans and robots: ethics, agency, and anthropomorphism. Rowman and Littlefield, Washington DC
Ostroff W (2016) Cultivating curiosity in K-12 classrooms: how to promote and sustain deep learning. ASCD, Alexandria, VA
Rachels J (1986) The elements of moral philosophy. Random House, New York
Redfield R (1973) The universally human and the culturally variable. In: Ladd J (ed) Ethical relativism. Wadsworth Publishing Company, Belmont, CA, pp 129–143
Rushton CH (2016) Moral resilience: a capacity for navigating moral distress in critical care. AACN Adv Crit Care 27(1):111–119. https://doi.org/10.4037/aacnacc2016275
Rushton CH (2017) Cultivating moral resilience. Am J Nurs 117(2 Suppl 1):11–15. https://doi.org/10.1097/01.NAJ.0000512205.93596.00
Rushton CH (2018) Moral resilience: transforming moral suffering in healthcare. Oxford University Press, New York
Sharkey A, Sharkey N (2012) Granny and the robots: ethical issues in robot care for the elderly. Ethics Inf Technol 14:27–40
Sherman N (2021) Stoic wisdom. Ancient lessons for modern resilience. Oxford University Press, New York
Slaby J (2016) Kritik der Resilienz. In: Kurbacher F, Wüschner P (eds) Was ist Haltung? Begriffsbestimmung, Positionen, Anschlüsse. Königshausen & Neumann, Würzburg, pp 273–298
Southwick S, Charney D (2018) Resilience: the science of mastering life’s greatest challenges. Cambridge University Press, Cambridge
Swierstra T (2013) Nanotechnology and technomoral change. Etica & Politica XV:200–219
Taylor R (2017) Education for autonomy and open-mindedness in diverse societies. Educ Philos Theory 49(14):1326–1337. https://doi.org/10.1080/00131857.2017.1278673
Tronto J (1993) Moral boundaries. Routledge, New York
Vallor S (2011) Carebots and caregivers: sustaining the ethical ideal of care in the twenty-first century. Philos Technol 24:251–268. https://doi.org/10.1007/s13347-011-0015-x
Verbeek PP (2008) Obstetric ultrasound and the technological mediation of morality: a postphenomenological analysis. Hum Stud 31:11–26. https://doi.org/10.1007/s10746-007-9079-0
Verbeek PP (2011) Moralizing technology. Understanding and designing the morality of things. University of Chicago Press, Chicago/London
Verbeek PP (2014) Some misunderstandings about the moral significance of technology. In: Kroes P, Verbeek PP (eds) The moral status of technical artefacts. Springer, Dordrecht, pp 75–88
Walker B, Holling CS, Carpenter SR, Kinzig A (2004) Resilience, adaptability and transformability in social–ecological systems. Ecol Soc 9(2):5. http://www.ecologyandsociety.org/vol9/iss2/art5/. Accessed 31 March 2022
Walker B, Salt D (2006) Resilience thinking: sustaining ecosystems and people in a changing world. Island Press, Washington DC
van Wynsberghe A (2013) Designing robots for care: care centered value-sensitive design. Sci Eng Ethics 19(2):407–433. https://doi.org/10.1007/s11948-011-9343-6
van Wynsberghe A (2016) Robots in Healthcare: design, development and implementation. Routledge, New York
van Wynsberghe A (2021) Social robots and the risks to reciprocity. AI & Soc 37:479–485. https://doi.org/10.1007/s00146-021-01207-y
Ethics declarations
The authors have no competing interests to declare that are relevant to the content of this article.
Both authors contributed equally to the paper. Julia Hermann’s work on this paper is part of the research programme Ethics of Socially Disruptive Technologies, which is funded through the Gravitation programme of the Dutch Ministry of Education, Culture, and Science and the Netherlands Organization for Scientific Research (NWO grant number 024.004.031).
Keywords
- Care robots
- Disruption
- Moral education
- Moral imagination
- Moral resilience
- Technomoral change