
A critique of robotics in health care

Abstract

When the social relevance of robotic applications is addressed today, the use of assistive technology in care settings is almost always the first example. So-called care robots are presented as a solution to the nursing crisis, despite doubts about their technological readiness and the lack of concrete usage scenarios in everyday nursing practice. We inquire into this interconnection of social robotics and care. We show how both are made available for each other in three arenas: innovation policy, care organization, and robotic engineering. First, we analyze the discursive “logics” of care robotics within European innovation policy; second, we disclose how care robotics encounters a historically grown conflict within health care organization; and third, we show how care scenarios are being used in robotic engineering. From this diagnosis, we derive a threefold critique of robotics in health care, which calls attention to the politics, historicity, and social situatedness of care robotics in elderly care.

Introduction

When the social relevance of robotic applications is addressed today, the use of assistive technology in nursing settings is almost always the first example (Hergesell et al. 2020). Of all conceivable applications, the image of the humanoid robot that autonomously and unerringly fetches a glass of water for senior citizens seems omnipresent. The assertion that robots will help care for a growing elderly population is never really questioned. However, despite this future vision, there is little evidence that such robots will exist any time soon. First, in technological terms, autonomous, humanoid robots are nowhere near ready for use in care or other real-world settings involving (physical) contact with people. Second, there is currently little demand for genuine care work done by robots. Neither caregivers nor care recipients have expressed explicit interest in robotic applications (Smarr et al. 2012; Krings et al. 2018; Pekkarinen et al. 2020). Scenarios proposing this are usually rejected (Sparrow 2016). Furthermore, there are many doubts about the broader ethical (Sharkey and Sharkey 2012) and legal (Beck 2016) issues around using robots for care.

Even though more projects have recently sought to align their research efforts with actual care needs (Sabanovic 2014; Riek 2017; Jeon 2020), to criticize anthropomorphic robot concepts (Duffy 2003; Sandini and Sciutti 2018; Weber 2005), or to use participatory methods (Hornecker et al. 2020; Björling and Rose 2019; Lee et al. 2017; Lee and Riek 2018), such orientations are still not a mandatory requirement in mainstream robotics research. This discrepancy between the poor suitability and acceptance of care robots and the massive political and scientific investments in this field makes care robotics an especially pertinent site for critical inquiry.

In our paper, we scrutinize this powerful interconnection of social robotics and elderly care within three arenas: innovation policy, care organizations, and robotics engineering. In the first section, we show how robotics and elderly care are rendered available for one another through claims of impending demographic change. This connection is not self-evident but rather enabled by a number of pervasive themes specific to the context of (European) innovation policy. For example, independent living constitutes a major resource for legitimizing the application of robots in older people’s homes. In the second section, we show that when robotics technologies are used in the field of care, they are not a neutral entity intervening in an unoccupied field. Instead, we address how the introduction of robotics interacts with an ongoing conflict in the history of elderly care between economic and care-related interests. In this respect, robotic care is not a solution to the “crisis of care” but rather a vehicle for turning the conflict in one direction. In the third section, we analyze how care is made an object of robotic engineering and show the epistemic reasons for the development of care robotics. From this perspective, we reconstruct how, of all possible applications, the areas of health care and nursing fit the requirements and conditions of robotics and human–robot interaction (HRI).

Our analyses give tentative examples of what a multi-dimensional critique of care robotics can offer. It calls attention to the manifold biases in the current regime of care robotics and shows its politics, historicity, and social situatedness. Each of the sections can be understood as a stand-alone analysis of the phenomenon, yet none explains the entire phenomenon.

Overall, this critique offers a reflexive explanation of why care robotics has become a discursive solution to the nursing crisis and shows that the technology, in its current state of development, is not a solution at any level. Instead, we need to instigate a critical debate about the political, organizational, and epistemic assumptions built into much of the work directed at making robotics a solution in elderly care.

Innovation policy: European innovation policy and the discursive “logics” of care robotics

Care robotics is first and foremost a political reality. The vision of robots solving the “healthcare challenge” (Ford 2015, 145ff) has been especially successful in policymakers’ meeting rooms and documents around the globe. A prominent example is Japan, where the notion that (humanoid) robots are already caring for people due to demographic pressures and cultural affinities is in fact still a heavily subsidized political fiction (Wagner 2013). Similarly, those looking for the “robot revolution” in European care homes and hospitals will still find themselves searching in vain. Nevertheless, care robotics has become an arena of political interest that is not only publicly debated (European Commission 2015a) but also heavily researched and funded (European Commission 2016a, 2017). This is a rather recent phenomenon. A mere 20 years ago, almost nobody talked about this vision, let alone invested public money in it. Neither elderly care nor robotics featured in the EU’s work programs and policy agendas. Since then, it has gradually established itself on the European stage as the contingent product of a range of technological, social, and, especially, political processes (for a more extensive sociological analysis of this, see Lipp 2019).

Hence, a critique of care robotics should not take the link between robots and elderly care for granted but rather investigate how it could become a political reality in the first place. Therefore, we ask: What are the discursive logics of care robotics and how have they shaped our idea of what care robots are and what they are for? To answer this question, we follow Michel Foucault’s genealogical “method” (Foucault 1997a, b). This perspective does not enquire into the rationality of a given phenomenon, e.g., by asking “Is care robotics a rational response to demographic change?” Rather, it looks at how a given discourse constructs a certain phenomenon as rational. Hence, investigating the discursive “logics” of care robotics means asking: Under what conditions and given which kinds of assumptions could the vision of robots in care become plausible? To use Foucault’s language, how has this discourse been able to talk about it as a matter of course?

We will do this by analyzing three prominent themes of recent European innovation policies, which, in our view, have rendered the vision of care robotics a political possibility: the silver economy, active and healthy ageing, and independent living. While these phenomena are not entirely discrete, we will take each of them as an opportunity to enquire into the way care and robotics have been rendered compatible.

The silver economy

In industrial societies, old age has for the most part been framed as a problem of productivity. In this context, elderly people fall “out of the field of capacity” (Foucault 2003, 244), because they are not deemed fit or productive enough for industrial labor. As a result, an ageing population has mostly been seen in deficient and alarmist terms, i.e., as “a rapidly growing population of needy, relatively affluent persons whose collective dependence is straining the economies of Western industrialized nations” (Katz 1992, 203). These ageist stereotypes persist with terms like the “ageing tsunami” (Barusch 2013), setting the tone for how elderly people are perceived within the present socio-economic order.

Yet more recent discourses of European innovation policy provide a stark contrast to those “alarmist” accounts described by Katz. European initiatives subscribe to a “positive vision on ageing” (EIP on AHA 2011). Following the World Health Organization’s concept of “Active Ageing,” demographic change is seen as “one of humanity’s greatest triumphs” (WHO 2002, 6). Furthermore, the European Commission has urged member states to view ageing as an opportunity to grow a “Silver Economy” (EC 2015b, 8) instead of as a hindrance to growth. Here, health managers, engineers, tech businesses, and policymakers imagine that an increasing number of elderly people will create a new “silver” market of consumers, which new industries must cater to. Hence, according to this logic, the elderly population should not be primarily perceived as “unproductive” or “dependent” but rather as a new group of consumers and users of assistive technology. Specifically, the affluent, fit, and “young old” adults are heralded as the new archetype of old age (Neilson 2006). In turn, this excludes old people who do not conform to such bio-political ideals of fitness and spending power. It also reduces ageing to needs and experiences that can be satisfied through consumption and a logic of care provided through market relations (Mol 2008).

In this context, robotic innovation promises “to transform lives and work practices, raise efficiency and safety levels, provide enhanced levels of service and create jobs” (SPARC 2013). This general promise has mainly been derived from the application of robotics in industrial contexts (ibid., 15), where, in recent decades, production has increasingly been automated. What is surprising here is the transfer of this promise to elderly care—an area that is in no way similar to robots’ traditional domains and where robots still have to prove that they are a viable solution at all. This is not to say that they cannot be. However, the ease with which this assumption is accepted is problematic. The interconnection can only work because of a political rationality that frames robotic innovation as a solution to demographic change precisely because it fits into a certain regime of marketization. The project of care robotics is tightly linked to agendas of competitiveness, economic growth, and industrial policy. Robotics is seen as a vital future market, one in which Europe cannot afford to lag behind. Understanding these overlapping policy agendas is crucial to grasping why robotics and care seem to fit so neatly together. As a result, ageing has become a topic for the future to be exploited by cunning entrepreneurs and innovative engineers (Adam and Groves 2007, 57ff). At least within the particular context of innovation policy, old age is not tackled in alarmist terms but rather as an occasion for establishing new links between elderly people’s everyday lives and digital consumerism.

Active and healthy ageing

The theme of “active and healthy ageing” within European framework programs has been crucial in enabling the interconnection of robotics and elderly care. Partly, this is due to how the European Commission re-organized its funding policies. In particular, the fifth framework program (FP5) introduced a “[n]ew integrated problem-solving approach” that replaced the former science-based approach (European Commission 2016b). This means that the funding agenda is no longer organized according to specific disciplines but is instead differentiated by key actions “integrating the entire spectrum of activities and disciplines” needed to solve a given societal problem (ibid.). Research and development agendas in areas like robotics are thus increasingly concerned with and shaped by political expectations of usefulness. In this context, the topic of robotics has become intimately connected to the concern of an ageing society.

In the beginning of the 2010s, “active and healthy ageing” (AHA) emerged as an overarching theme further integrating formerly separate work programs on health, ageing, and digital technology. As pointed out above, these developments have been underpinned by a new understanding of ageing. In particular, active ageing expands what “health” means and broadens the disciplinary scope from a purely (bio)medical point of view to various forms of (social) health sciences. In this context, “active ageing” not only means being physically or mentally well but also being socially included in society. This opens up a new playing field for robotics to assist the elderly in their everyday lives, evidenced by the fact that the EU began to view the challenge of advancing AHA “with … [s]ervice robotics within assisted living environments” (EC 2015c, 29) as increasingly urgent. Hence, the biomedical gaze on elderly care and ageing is now being complemented—if not at times displaced—by technical disciplines such as engineering, computer science, and robotics. Here, the “cross-thematic approach” of active and healthy ageing serves as a new logic linking robotics and elderly care, which can be witnessed in the emergence of further initiatives such as the “Ambient Assisted Living” program.

At the same time, elderly care is an appealing area in which to test the new paradigm of robotics. Here, the focus on robotic assistance marks an epistemic and technological shift within the discipline. While industrial robots usually operate in closed factory cages, assistive robots are explicitly intended for interaction with lay users. This gives rise to a range of new requirements for robots and, incidentally, new ways of doing robotics, which are often subsumed under terms like “new robotics” (Schaal 2007). A setting such as a household is much less controlled than a factory production line and, hence, a more chaotic environment. This makes a difference for robot development, because to interact with humans, roboticists must engineer robots as “independent entities that monitor themselves and improve their own behaviors based on learning outcomes in practice” (Matsuzaki and Lindemann 2016, 501). This demands new kinds of technical capabilities, for which the uncontrolled environment of the (care) home provides an appealing—i.e., challenging—testbed.

Independent living

The demarcation of daily assistance and its interconnection with assistive robotics highlight the value of independent living, which is a prominent feature of care robotics projects. Similar to what we have described in the context of AHA, independent living defines good ageing as “the ability to perform functions related to daily living” (WHO 2002, 13). This not only focuses on the elderly person’s lifeworld and lifestyle (as opposed to their medicalization) but is also embedded within a wider imaginary of how to (re-)organize European healthcare systems. Within the context of EU innovation policy, independent living is positioned as a way of relieving European healthcare systems of their burdens, as such systems are increasingly under pressure due to demographic change, limited public funding, and a lack of skilled personnel. Here, the idea is that robots will help by preventing “avoidable/unnecessary hospitalisation” (EIP on AHA 2011, 4), i.e., by assisting elderly people in their homes and thus allowing them to live independently for longer. Independence is thereby reframed as relative autonomy from, and lower utilization of, institutionalized care.

The theme of independent living renders care and robotics compatible in two ways: it re-defines care as (de-hospitalized, personalized, temporary) assistance and, at the same time, casts robots as a (de-institutionalizing, disburdening) response. The idea of using robots as an alternative to institutional care dates back to the very beginning of service robotics. As early as the 1980s, robotics pioneer Joseph Engelberger imagined robots “aiding the handicapped and the elderly” (Engelberger 1989, 210). Interestingly, in his book on service robotics, he positioned the use of assistive robots in care vis-à-vis institutionalized forms of care, at a time when the actual application of such machines was still considered a far-off vision.

Here, being assisted by robots in a “robotised private abode” (ibid., 217) is described as a more desirable alternative to the—in his eyes—disastrous conditions in nursing facilities. While Engelberger does not elaborate on his account of institutionalized care, he discusses how robots could be useful to the elderly and how they might even be preferable to human caregivers. This relates to particular tasks, such as “food preparation” or “social interaction”, certain robotic capabilities, such as “dialog” or “grasp”, and more general characteristics of robots that render them desirable, such as the fact that robots do not need “personal time” (ibid., 215) or could endure the elderly’s “unrelenting loquacity” (ibid., 216). Apart from the condescending tone, these quotes show that the discursive logic of assistive robotics converges with the political rationale of “de-hospitalizing” European healthcare systems. Robotics functions here as a “private” care technology, which renders publicly organized care obsolete.

This leads us back to the question of how exactly care is understood in this discourse. Assistive robotics defines care indirectly in terms of particular assistive tasks or robotic capabilities specialized to perform such tasks. This is not only true for early examples of service robotics but in fact constitutes a central design philosophy in care robotics. To render itself relevant in elderly care, robotics identifies distinct everyday “problems that older people face” (Robinson et al. 2014, 577), which can then supposedly be met using specialized robotic capabilities. Hence, when the innovation policy discourse talks about care in connection with service robots, it is presupposing a particular logic of care (Mol 2008). This is a more or less fixed set of dissectible tasks, which can be carried out at the point of need without the institutionalized infrastructure of hospitals and care homes.

Care organization: care robots as a solution to a historic conflict in inpatient elderly care

Although the care robot is largely a political phenomenon, we can show that its discursive success has had real consequences for health care organization. When robotics technology enters the field of elderly care, it does not enter a vacuum. We address a historic conflict in elderly care organization and show how the introduction of care robots has changed the constellation in this conflict. We argue here that care robots are successful not because they offer practicable solutions in everyday care practice, but because they promise to resolve a historic conflict between economic and professional–ethical interests that has existed since the emergence of elderly care (Hergesell 2019). In this second section of our critique of care robots, we look at the organizational structures of elderly care as well as the enforcement of partial interests in the context of care robotics. We draw on empirical (Hergesell and Maibaum 2018) and historical (Hergesell 2019) data about care in Germany.

Today, the need for care is observably increasing just as working-age populations are shrinking. The result is a decline in nursing staff, less care by family members, and fewer contributors to social security systems, trends that are all expected to accelerate in the future. This phenomenon—described as the “nursing crisis”—threatens affordable, skilled care provision. Despite various attempts to avert this crisis, such as reforms of care insurance or recruitment campaigns, the most discursively prominent solution is technology. Technology promises to relieve and support caregivers, optimize work organization, and increase the efficiency of care work (Hülsken-Giesler and Krings 2015). In short, promoted by political actors, health care technologies, robotics most prominently among them, claim the capacity to address all the current problems of nursing care and to provide a promising solution for the future of the elderly.

However, as pointed out, neither the demand for care robots nor the current technological possibilities are in line with this notion. The question arises of why robotics is currently perceived as an effective and reasonable solution strategy for the problems of nursing care. We argue that it is not only the novelty of the technology but an underlying historical conflict that is being litigated via the deployment of this new technology.

We will show that, since the emergence of geriatric care, there has been a fundamental conflict between economic and professional–ethical care concepts and that this care conflict shapes the phenomenon of care robots. By considering this conflict, we become more aware of the current effects of care technologies and their potential consequences for the “historically grown” (Weber 2002) structures of elderly care than we would through a purely present-day, often techno-euphoric analysis.

The historical care constellation and the structural conflict

To illustrate this conflict, we take a brief look at the sociogenesis of nursing care. In the late nineteenth century, old age was increasingly perceived as a phenomenon of the welfare state, which had to be regulated based on economic conditions and bureaucratic administration (Kondratowitz 1990). At the same time, a differentiation of the inpatient facilities responsible for the care of elderly people began. The numerous new retirement homes required a new type of staff. The traditional “warders” were gradually replaced by mostly female staff, who were entrusted with activities specific to elderly care, thus starting the professional development of inpatient geriatric care (Irmak 2002). In this period, a structural conflict emerged that still dominates the discourse of the health sector in modern welfare states today: while caregivers increasingly developed their own professional and ethical care concepts, political and administrative actors shaped nursing care according to a rational-efficient administrative logic and an interest in more cost-effective care. This structural conflict underlies today's demand for care technologies.

A shift from care concepts limited to basic care or “keeping safe” towards more positive, solidarity-based guiding principles did not occur until the 1960s (Kondratowitz 1990). “Old age” developed into an economically secure and meaningful phase of life. As a result, there was an up-skilling process for the caregivers, who were increasingly allowed to carry out care work independently and were required to implement the socially desired concepts of old age and care on their own. The focus was on psychosocial care, which included the maintenance of autonomy, supported by professional caregivers (Heumer and Kühn 2010). Thus, the formerly dominant instrumental–economic logics of political actors lost their interpretive authority. The professionalized caregivers were increasingly able to prioritize their concepts of professional care, which focused on care recipients’ quality of life. Political and administrative actors could, therefore, no longer play the dominant role in the structural conflict in elderly care.

From the 2000s onwards, it became more urgent to find a solution to the structural conflict underlying the nursing crisis. As shown in the first section, political actors were under increasing pressure to present solutions to the problems of elderly care. However, conventional attempts to overcome this problem—for example, by reforming nursing legislation or recruiting foreign workers—were largely unsuccessful. Demands for a significant increase in personnel and for individualized nursing concepts were constrained by economic considerations. The result was a stalemate between the conflicting care concepts (Hergesell 2019, 234–239, 297f).

The discursive success of care robots must be understood against the background of this conflict. Today, elderly care finds itself in a situation in which all interests must be considered when developing (technological) solutions. The prominence of robotic care applications is thus not primarily due to their actual use in everyday care but rather relates to the discursive integration of the historically grown conflict between professional care and instrumental economic interests. Care robots promise to enhance the quality of care and ensure professional standards, as well as to enable more efficient care organization and reduced costs.

Care robots as a means of conflict resolution in elderly care

This situation means that robotics-centered projects do not enter a “vacuum” in which only the functional features and performance of the care robots are relevant. The technology is not neutral. This is especially true when care robots—whether strategically intended or not—intervene in the historically grown conflict. Developers must understand that their technology will serve as a tool in the conflict between these logics.

In contrast to the promise of the care robot, we observed that robotic technology has not led to an integration of interests; instead, the involvement of engineers and technology developers and their understanding of care have led to a shift in the historically grown conflict. We illustrate this with an example of the conflicts and misunderstandings that occur during participatory technological development.

At the beginning of the process, a needs analysis is carried out, which aims to match the interests and problems of caregivers with those of administrative actors and technology developers. The needs analysis indicates that, from the caregiver’s perspective, everyday care should be designed to meet the individual needs of the care recipients. The care provided is supposed to exceed mere basic care—it should guarantee quality of life and respect the dignity and autonomy of the care recipients. To fulfil these objectives, highly qualified nurses should be deployed and work should be organized to provide sufficient time.

In contrast, the administrative actors and funders’ perspective focuses on the financial feasibility and legal aspects; these include the clawback of costs or compliance with legal standards and effective work organization. Their focus is less on individual cases but on the handling of economic resources. They want to achieve these goals by increasing efficiency, which means using resources efficiently and reducing costs by avoiding care work deemed unnecessary.

During a needs analysis for care robots, the demands of both groups may seem to match: The caregivers expect technology to free them from noncore care tasks, such as filing or repetitive control chores, and thus allow them to focus on giving care and preserving recipients’ autonomy. The funders, in contrast, interpret the demand for autonomy as minimizing caregivers’ intervention in the lives of the care recipients: autonomy can be guaranteed, especially in outpatient care, by monitoring the care recipients’ physical safety, which substitutes for the physical presence of caregivers. So even if both groups use the same word, “autonomy”, they pursue very different objectives and refer to fundamentally different meanings. Moreover, the two groups are not equal in their capacity to impose their logic on the other. Because funding, development, and implementation are controlled by the technology developers and administrative actors, their economic–instrumental interpretation of care is inscribed into the technology (Hergesell and Maibaum 2018).

Hence, a sociohistorical analysis helps us to understand the development processes for care robots and the complex interests, care concepts, and power relations involved. In this setting, robotics technology cannot be expected to be a neutral tool that develops to meet everyone’s needs. Even if this is not always a conscious intention, the demand for care robots is also an efficient strategy for political and administrative actors to assert their interests.

Robotic engineering: the epistemology of building robots and its implications for care scenarios

We have shown how the idea and goal of care robotics can become a catalyst both on the discursive level for innovation policies and on the organizational level for the field of care. The desirability of care robots is fed by a third strand, the epistemic conditions of robot development. When constructing robots for applications outside of laboratories, their developers are subject to specific socio-technical conditions that form a subsequent context for care robots in both discursive and manifest ways (Bischof and Maibaum 2020). To understand how robotics can care, one must first understand the challenge of building robots for everyday worlds. For the theoretical, methodological, and technical instruments of robotics, everyday worlds—like care scenarios—are on the absolute outer limits of workability (Bischof and Maibaum 2020). The designers’ understandings of the social situation become the core of the epistemic practices of social robotics (see Bischof 2017, 213 ff). Both the attractions and the problems of care delivered by robots thus lie in the fact that robot-delivered care goes beyond the previous limits and possibilities of robotics: If care robotics were to be understood primarily as a technical enterprise, the extent to which its epistemic conditions configured the addressed users and the situation of use would remain invisible (cf. Woolgar 1990; Oudshoorn et al. 2004).

When seeking to describe and explain this epistemic context of robot development for care robotics, the main challenge is to show what roboticists actually do—that is, how they try to fix social phenomena such as care technically and scientifically. For this purpose, we will briefly describe typical patterns of robot development for care scenarios. These two cases show how the addressed users are configured by the development practices (Woolgar 1990; Oudshoorn et al. 2004) and how the social institutionalization of care practices facilitates the development and implementation of robotics scenarios. Finally, we will observe that the actual implementation work in care robotics—“making the robot work”—rarely follows the needs of the users, but rather the epistemic and technical conditions of robot development: The assumed “users” of care robots appear in very narrowly defined roles, as the persons to be treated, who are integrated within rationalized routines.

To understand how the epistemic conditions of healthcare robotics affect research and development practice, we conducted ethnographic fieldwork and interviews in five robotics-in-health projects in Europe and the United States (Bischof 2017). At first glance, care robotics projects look very similar: Robots are used in care environments and the results are measured. The characteristic difference, which we will show in two examples, is how care is operationalized as a project goal—either as a successfully measurable intervention in a field of practice or as a technical challenge. The mode of incorporating empirically existing care situations into the development diverges accordingly: Is the goal to make robots function for a certain existing situation, or rather to abstract from it to create a transferable—universal—solution? Both types imply modes of decontextualization from care practices, as the cases will show.

Constructing as a key epistemic mode

By the paradigm of constructing, we mean practices that aim to make a functioning robot technically feasible. When this epistemic mode is applied in a care context, it leads to an understanding of care as a “task” to be completed by the robot. Subsets of practices, e.g., serving a glass of water, are thus transformed into a cascade of actions for which applicable rules need to be specified. The technical members of a project team need concrete specifications against which they can test and measure the system. Supposedly meaningful tasks can also be made up or staged for this purpose—for example, when no real-life operational situation is accessible. The “home assistance” case, which Bischof (2017, 202 ff.) assessed via participant observation and subsequent expert interviews, illustrates this.

For the first user tests in this EU-funded project, two concrete tasks were operationalized, which are now being carried out with older people as test persons for the first time. This is done in a “living lab”, which is designed like a living room, including a kitchenette with a refrigerator. During the test, the seniors sit on the sofa and are supposed to maneuver a butler-like robot through the assigned tasks with a tablet computer. One task is for the robot to take a water bottle out of the refrigerator. Since this is the first user test, many problems are expected to occur. The test leader, who sits on the sofa next to the test subjects, often has to moderate during breaks caused by malfunctions or explain again what the task is. The biggest problem, however, is that the robot cannot open the refrigerator door itself due to technical difficulties in controlling the gripper. For this reason, the test persons have to get up during the water-fetching task, walk to the refrigerator about three meters away, place the bottle on a kind of tray for the robot, and sit down again so that the bottle can be brought to them. Getting up is arduous for some test persons (some with a walking stick, one even with a walker). At the refrigerator and on the way back, there are repeated coordination problems between the robot and the test persons when they cross the invisible laser scanner’s field and the machine stops for safety reasons.

It would contradict the idea of a test to expect it to work smoothly on the first occasion. Indeed, user tests are carried out to identify problems. The coordination between spatial conditions, different hardware components, operating software on the tablet computer and, last but not least, the test manager, the engineers, and the test subjects is complicated and time-consuming. However, the specific decontextualization of the actual care practices that results from the decomposition of human–robot interaction into tasks becomes evident.

The decomposition of human–robot interaction into tasks that are modeled as technical problems creates a reality of its own. The researchers see themselves as objective observers who want to observe “natural” human–robot interaction (Woolgar 1990, 84 f.). At the same time, a number of coincidences and accidents occur in the test that make their intervention necessary. Instead of the tasks revealing “natural” user behavior, it is evident that many practices of technical and social debugging are necessary to fulfill the intended task. Moreover, the contexts and meaning of assistance in the household must be subordinated to the developmental logic and suspended over many project steps. In this case, this may mean that the tested interaction no longer has a meaningful equivalent in the intended use context. The observed test, however, serves to maintain this well-formed (technical) problem—which is just not solvable at the moment of testing. In such “tasks”, the user becomes a system component that co-determines the execution of the task and must be configured accordingly with the help of test runs, operating instructions, and moderated use. Overall, this example of the constructing type shows a typical time sequence: conditions and knowledge from concrete assistance or care scenarios are obtained at the very beginning of the project and then technically narrowed down. Subsequently, the majority of the available resources are invested in technically resolving the defined problem. Concrete living environments or actual user groups are addressed only at the very beginning or at the end of the project.

Applying robots as a key epistemic mode

Projects with the goal of applying care robots aim to practically implement human–robot interaction in a concrete setting. This type of project is often characterized by the participation of nontechnical experts—for example, from gerontology, nursing science, or medicine. Care robotics of this kind understands itself as pioneering research that prepares and tests the use of robots in care and tends to generalize from the specific application scenario it designs for.

In such development processes, users such as nursing staff or relatives, who are actually secondary and tertiary users, become relevant for the engineers. This is because the specific application and test areas of such projects are located in highly institutionalized settings such as care facilities or hospital wards. The necessary formalization of human–robot interaction therefore often follows the organizational and institutional conditions of care practices. An example is an early case in which the Paro robot was used in a European care facility in 2008 (cf. Bischof 2017, 198 ff.). The decision to use the Paro robot was made on the basis of its readiness for use, especially since it had CE certification and thus met the insurance requirements. On their way into the field, however, the researchers faced some rejection regarding the use of the robot in specific care facilities: either nursing staff members were unwilling to participate in the study or relatives did not give written consent for their parents or grandparents with dementia to participate in the experiment. The research team therefore decided to wait until the Ministry of Health approved the research protocol. Following this step and several presentation workshops to demonstrate the robotics platform, six nursing homes, their staff members, and the relatives of 80 patients participated in the experiment.

The project did not seek to compare different robots or robot characteristics. The motivation was rather to carry out and test the use of a robot in inpatient care. The operationalization of the measure of effectiveness reveals how the project related to its field of application: nursing staff members were to assess via questionnaires whether the use of the robots (positively) influenced the nursing routine. In addition, a medical indication scale of the dementia patients’ wellbeing was applied, which was likewise based on staff assessments.

Much of the researchers’ work thus consisted in establishing field access. The communication with the different authorities, actors, and requirements in this process can be understood as “boundary work” (Gieryn 1983): social negotiation processes about the circumstances under which the use of robots in this field of application is permissible, possible, desired, and feasible. The reasons for rejection ranged from ethical concerns to problems in adapting the duty rosters so that the 4-month test design could be carried out in the already limited time of the morning routine. In addition to the experimental results, the real achievement of the project was therefore to have established an initial fit between the intended field of application and the proposed solution.

This test thus showed how different users were constituted and configured. Such tendencies were already evident in the “top-down” course of stakeholder involvement: from political organizations to economic units, from staff to family members. In contrast, the supposed primary users, the dementia patients, did not appear directly in the scenario development. They were often the object of the researchers’ efforts, for example, as legal subjects regarding consent provided by their legal proxies, as consumers with reference to the CE mark, and in the discursive justification of the goal to increase the quality of life of the elderly. This can be justified on practical and methodological grounds in the case of dementia patients with limited cognitive abilities. For the developmental practice described above, however, it is constitutive that the dementia patients appear as the treated persons, already integrated into rationalized daily routines. The actual implementation work then followed the secondary users, i.e., the nursing staff who ultimately carried out the test.

Care robotics’ decontextualizing epistemics

Care becomes the subject of robotics under certain epistemic conditions. Innovation policy (see 1.) and the historic conflict within care organization (see 2.) are such conditions. By highlighting typical project modes, we showed how “care” is epistemically framed in robotics as a problem to be solved. This leads to two modes of decontextualization of actually existing care practices. The deconstruction of care practice into “tasks” leads to a decontextualization in which the meaning of the practices is mechanized—up to the point where the scenario completely ceases to function. Instead, the focus is on defining a well-formed (technical) problem, whereby many rather implicit aspects of care practices are hidden. But even projects that are motivated by the goal of applying robots in “real-world” care settings exhibit an epistemology of decontextualization. They focus mainly on the conditions of the institutional context and not on the needs of the primary users, who were surveyed separately. The persons to be cared for manifest as “personas” who are integrated in a heavily institutionalized context, as is the case for the entire health care system. They may not, in principle, be defenseless inmates of a total institution (Goffman 1961), although this certainly applies to advanced dementia patients in nursing homes; as subjects, however, they are seen almost exclusively in their largely interchangeable role as patients or persons in need within an institutional setting that sets the rules. The fact that care facilities and many care settings are a highly structured area of everyday life contributes to this decontextualization: they already embody complexity reductions that help roboticists define a technical “scenario”. The “users” appear in a very specific role—as those to be treated—who are integrated in rationalized routines. Although user-centered or even participatory methods are increasingly applied in HRI contexts (Hornecker et al. 2020; Björling and Rose 2019; Lee et al. 2017; Lee and Riek 2018), the use of such methods in itself does not protect against decontextualization of the social situation of use. Both care robotics projects presented here understood themselves as user-centered design. Since even participatory methods configure future users in specific ways (Bischof and Jarke 2021), a critical analysis needs to evaluate the timing of user involvement, the actors involved, and especially the use of the accumulated data in later design decisions to prevent a decontextualization of the social situations of use.

Conclusion and critique

In all three arenas, we have shown that the discursive success of autonomous humanoid care robots cannot be explained by their abilities alone, whether current or projected. Instead, our analysis shows how the interconnection is socially constructed, i.e., part of a particular regime of care robotics that narrows down the range of available challenges and solutions. Acknowledging this opens up the topic for contestation and critique by showing its constitutive contingency.

While we analyzed each arena separately, they are in fact highly interdependent. Their interplay can be illustrated as a feedback loop in which assumptions made in one arena travel to and shape activities in the other arenas (see Fig. 1).

Fig. 1: Interplay of the sections

For instance, funding priorities set in innovation policy shape the epistemic practices of robotics projects. Here, the expectation to “fix” demographic change by building autonomous machines narrows the scope of such projects to merely technical aspects and implies a linear innovation model. This means that user needs and application scenarios are mostly set at the beginning of research projects instead of being iteratively negotiated with care personnel and older users inside the care organization, with consideration for its power structures. From there, engineers tend to deconstruct care practices irrespective of their context, disregarding their intrinsic efficiency and taking an economic–instrumental interpretation of care as the basis for its ‘optimization’. Despite emerging from a shared problem—demographic change—the solution cannot be single-tracked but needs to pay attention to and incorporate the historically grown heterogeneity of the field of care. Otherwise, certain perspectives (e.g., those of care managers or entrepreneurs) that are compatible with the economic–instrumental logic of care are privileged, while the input of other important groups like care personnel and older people is foreclosed. The loop closes when these privileged perspectives form the basis for how policymakers and funders set the priorities of their research programs, which, in turn, shape the setup of research projects, and so on.

Considering the feedback effects that stabilize this regime of care robotics, we derive a threefold critique, which calls attention to the politics, historicity, and social situatedness of care and robotics.

The first critique refers to the political vision of applying robots in care. Here, it is clear that the problem of an ageing society has been profoundly reconfigured in positive and potentially negative terms. Generally, we can observe that care robotics and elderly care have been made to fit the current regime of marketization, rationalization, and de-hospitalization. The current push for robotics in health care is thus heavily implicated in the ongoing project of solving the “health care crisis” by exclusively focusing on the market, technology, and the private sphere. In this sense, robotics is a prime example of a political technology (Winner 1980) that reproduces existing power relations instead of helping to unsettle them. Robotics is disruptive in a problematic sense: it dismantles alternative solutions to the demographic challenge ahead by deflecting attention away from communal, social, and public strategies instead of supporting them. We therefore call for a more responsible politics of robotics in health care, one that is able to reflect on the constraints it sets on governance (Stilgoe et al. 2013) instead of simply heralding a technological fix.

The second critique relates to the fact that the field of care that developers and engineers enter is not an empty space waiting for machines. We have shown the consequences of different logics in a historically grown field. To successfully develop robots for care in the future, it will be important not only to consider different levels of social aggregation but also to bring together as many different research perspectives and subjects as possible in an intensive dialogue to obtain a holistic picture. This calls for an in-depth analysis of the socio-historic setting—generally as well as locally—to provide practice-oriented knowledge, which goes beyond most ethical, legal, and social issues (ELSI) efforts of today. Currently, the lack of such considerations means that the deployment of robotics technology increasingly focuses on standardization and on sorting care measures into reasonable and unreasonable ones, whereby reasonable often means economic and marketable. In consequence, care robots are unintentionally produced to rely less on traditional, care-intrinsic knowledge. In the long term, this could lead to the deprofessionalization of nursing.

And third, we want to heighten awareness of the epistemic framing of care as a problem to be solved in robotics. As we have shown through ethnographic analysis, the epistemic modes of care robotics often lead to a decontextualization and deconstruction of care practice into individual, technically feasible tasks. This begets both mechanized ideas of care and robots that do not function in specific care situations. It is difficult to overcome the epistemic conditions of a field like care robotics, but our analysis points to the rather mundane conditions of care robotics projects that can be improved. Corresponding to the critique of innovation policy, the usual cycle of publicly funded robotics projects that aim to technically solve a social problem should be broken. Instead, project funding and organization should allow for open and iterative project structures that attend to how care practices are situationally and interactively enacted (Hornecker et al. 2020). The essential implication is that the understanding of robots as neutral machines, and of engineering practice as value-free, would have to become obsolete.

This threefold critique is but a preparatory step toward evaluating and possibly shifting the current regime of care robotics in elderly care. It calls for an integrated view of the manifold factors that shape and stabilize this regime, which ultimately privileges economic–instrumental rationales, grants epistemic primacy to technical disciplines, and largely ignores the historicity and organizational situatedness of care practices. We argue that these biases can only be shaken if, first of all, we manage to instigate a critical debate about the very political, organizational, and epistemic fundaments that have enabled this regime in the first place; hopefully breaking its perpetual feedback loop.

References

  1. Adam B, Groves C (2007) Future matters. Action, knowledge, ethics. Brill, Leiden

  2. Barusch AS (2013) The aging tsunami: time for a new metaphor? J Gerontol Soc Work 56(3):181–184. https://doi.org/10.1080/01634372.2013.787348

  3. Bischof A (2017) Soziale Maschinen bauen. Epistemische Praktiken der Sozialrobotik. Transcript, Bielefeld

  4. Bischof A, Jarke J (2021) Configuring the older adult: how age and ageing are re-configured in gerontechnology design. In: Peine A, Marshall BL, Martin W, Neven L (eds) Socio-gerontechnology. Interdisciplinary critical studies of ageing and technology. Routledge, London, pp 197–212

  5. Bischof A, Maibaum A (2020) Robots and the complexity of everyday worlds. In: Göcke BP, Rosenthal-von der Pütten A (eds) Artificial intelligence. Reflections in philosophy, theology, and the social sciences. Mentis, Paderborn, pp 307–320

  6. Björling E, Rose E (2019) Participatory research principles in human-centered design: engaging teens in the co-design of a social robot. MTI 3(1):8. https://doi.org/10.3390/mti3010008

  7. Bundesministerium für Bildung und Forschung (BMBF) (2017) ZukunftsMonitor IV: Wissen schaffen—Denken und Arbeiten in der Welt von morgen. BMBF, Berlin. https://www.bmbf.de/files/zukunftsmonitor_Wissen-schaffen-denken-und-arbeiten-in-der-welt-von-morgen.pdf

  8. Jeon C, Shin H, Kim S, Jeong H (2020) Talking over the robot. A field study of strained collaboration in a dementia-prevention robot class. Interact Stud 21(1):85–110. https://doi.org/10.1075/is.18054.jeo

  9. Duffy B (2003) Anthropomorphism and the social robot. Rob Auton Syst 42:177–190. https://doi.org/10.1016/S0921-8890(02)00374-3

  10. Engelberger JF (1989) Robotics in service, 1st edn. MIT Press, Cambridge

  11. European Commission (EC) (2015a) Autonomous systems. Special Eurobarometer 427. https://ec.europa.eu/commfrontoffice/publicopinion/archives/ebs/ebs_427_en.pdf. Accessed 24 Mar 2020

  12. European Commission (EC) (2015b) Growing the European silver economy. Brussels

  13. European Commission (EC) (2015c) Health, demographic change and wellbeing. Revised. Horizon 2020 Work Programme 2014–2015

  14. European Commission (EC) (ed) (2016a) Europe's digital progress report 2016. https://ec.europa.eu/digital-single-market/en/news/europes-digital-progress-report-2016. Accessed 22 Jan 2019

  15. European Commission (EC) (ed) (2016b) Fifth framework programme. Programme structure and content. https://cordis.europa.eu/fp5/src/struct.htm#N. Accessed 12 Apr 2017

  16. European Commission (EC) (ed) (2017) Europe's digital progress report 2017. Brussels. https://ec.europa.eu/digital-single-market/en/news/europes-digital-progress-report-2017. Accessed 22 Jan 2019

  17. Ford M (2015) Rise of the robots. Technology and the threat of a jobless future. Basic Books, New York. http://gbv.eblib.com/patron/FullRecord.aspx?p=1948747

  18. Foucault M (1997) What is critique? In: Foucault M (ed) The politics of truth. Ed. Lotringer S, Hochroth L. Semiotext(e), New York, pp 23–82

  19. Foucault M (1997) What is Enlightenment? In: Foucault M (ed) The politics of truth. Ed. Lotringer S, Hochroth L. Semiotext(e), New York, pp 101–134

  20. Foucault M (2003) Society must be defended. Lectures at the Collège de France 1975–76. Ed. Bertani M, Fontana A. Allen Lane, London

  21. Gieryn TF (1983) Boundary-work and the demarcation of science from non-science: strains and interests in professional ideologies of scientists. Am Sociol Rev 48(6):781–795

  22. Goffman E (1961) Asylums: essays on the social situation of mental patients and other inmates. Anchor Books, New York

  23. Hergesell J (2019) Technische Assistenzen in der Altenpflege. Eine historisch-soziologische Analyse zu den Ursachen und Folgen von Pflegeinnovationen. Juventa, Weinheim/Basel

  24. Hergesell J, Maibaum A (2018) Interests and side effects in geriatric care. In: Weidner R, Karafilidis A (eds) Developing support technologies—integrating multiple perspectives to create support that people really want. VS-Verlag, Wiesbaden, pp 163–168

  25. Hergesell J, Maibaum A, Meister M (eds) (2020) Genese und Folgen der Pflegerobotik. Die Konstitution eines interdisziplinären Forschungsfeldes. Juventa, Weinheim/Basel

  26. Heumer M, Kühn C (2010) Die Entstehung und Entwicklung der Altenpflegeausbildung: Historische Rekonstruktion des Zeitraums 1950 bis 1994 in Nordrhein-Westfalen. Diplomica Verlag, Hamburg

  27. Hornecker E, Bischof A, Graf P, Franzkowiak L, Krüger N (2020) The interactive enactment of care technologies and its implications for human–robot-interaction in care. In: Proceedings NordiCHI 2020, ACM (accepted)

  28. Hülsken-Giesler M, Krings B (2015) Technik und Pflege in einer Gesellschaft des langen Lebens. Technikfolgenabschätzung Theor Praxis 24(2):4–11

  29. Irmak K (2002) Der Sieche. Alte Menschen und die stationäre Altenhilfe in Deutschland 1924–1961. Klartext Verlag, Essen

  30. Katz S (1992) Alarmist demography. Power, knowledge, and the elderly population. J Aging Stud 6(3):203–225. https://doi.org/10.1016/0890-4065(92)90001-M

  31. Keller R (2014) Wissenssoziologische Diskursforschung und Deutungsmusteranalyse. In: Behnke C, Lengersdorf D, Scholz S (eds) Wissen—Methode—Geschlecht: Erfassen des fraglosen Gegebenen. Springer, Wiesbaden, pp 143–159

  32. Kondratowitz HJ (1990) Geschichte der Altenpflege. In: Wallrafen-Dreisow H (ed) Ich bin Altenpflegerin. Berichte aus der Praxis. Vincentz Verlag, Hannover, pp 63–76

  33. Krings B, Weinberger N (2018) Assistant without masters? Some conceptual implications of assistive robotics in health care. Technologies 6(13):1–11. https://doi.org/10.3390/technologies6010013

  34. Lee HR, Riek LD (2018) Reframing assistive robots to promote successful aging. ACM Trans Hum Robot Interact (THRI) 7(1):1–23. https://doi.org/10.1145/3203303

  35. Lee HR, Šabanović S, Chang W-L, Nagata S, Piatt J, Bennett C, Hakken D (2017) Steps toward participatory design of social robots. In: HRI'17. Proceedings of the ACM/IEEE international conference on human–robot interaction, March 6–9, 2017. IEEE, Piscataway, NJ, pp 244–253

  36. Lipp B (2019) Interfacing robotcare. On the techno-politics of innovation. Dissertation, Technical University of Munich. https://doi.org/10.13140/RG.2.2.33338.75202

  37. Maibaum A, Hergesell J (2020) 2030—Der demografische Wandel als neue soziotechnische Deadline. In: Rothenhäusler A, Dobroc P (eds) Tagungsband 2000 Revisited—Rückblick auf die Zukunft

  38. Matsuzaki H, Lindemann G (2016) The autonomy-safety-paradox of service robotics in Europe and Japan. A comparative analysis. AI Soc 31(4):501–517. https://doi.org/10.1007/s00146-015-0630-7

  39. Mol A (2008) The logic of care. Health and the problem of patient choice. Routledge, London

  40. Neilson B (2006) Anti-ageing cultures, biopolitics and globalisation. Cult Stud Rev 12(2):149–164

  41. Oudshoorn N, Rommes E, Stienstra M (2004) Configuring the user as everybody: gender and design cultures in information and communication technologies. Sci Technol Human Values 29(1):30–63

  42. Partnership for Robotics in Europe (SPARC) (2013) Strategic research agenda for robotics in Europe 2014–2020

  43. European Innovation Partnership on Active and Healthy Ageing (EIP on AHA) (2011) Strategic implementation plan for the European Innovation Partnership on Active and Healthy Ageing. Steering group working document. Brussels

  44. Pekkarinen S, Hennala L, Tuisku O, Gustafsson C, Johansson-Pajala R-M, Thommes K, Hoppe J, Melkas H (2020) Embedding care robots into society and practice: socio-technical considerations. Futures 122:1–15. https://doi.org/10.1016/j.futures.2020.102593

  45. Riek L (2017) Healthcare robotics. Commun ACM 60(11):68–78

  46. Robinson H, MacDonald B, Broadbent E (2014) The role of healthcare robots for older people at home. A review. Int J Soc Robot 6(4):575–591. https://doi.org/10.1007/s12369-014-0242-2

  47. Šabanović S, Bennett CC, Piatt JA et al (2014) Participatory design of socially assistive robots for preventive patient-centered healthcare. IEEE IROS workshop on assistive robotics for individuals with disabilities, Chicago

  48. Sandini G, Sciutti A (2018) Humane robots—from robots with a humanoid body to robots with an anthropomorphic mind. J Hum Robot Interact 7(1):1–4. https://doi.org/10.1145/3208954

  49. Schaal S (2007) The new robotics—towards human-centered machines. HFSP J 1(2):115–126. https://doi.org/10.2976/1.2748612

  50. Sharkey A, Sharkey N (2012) Granny and the robots: ethical issues in robot care for the elderly. Ethics Inf Technol 14:27–40. https://doi.org/10.1007/s10676-010-9234-6

  51. Smarr C-A, Prakash A, Beer JM, Mitzner TL, Kemp CC, Rogers WA (2012) Older adults' preferences for and acceptance of robot assistance for everyday living tasks. Proc Hum Factors Ergon Soc Annu Meet 56(1):153–157. https://doi.org/10.1177/1071181312561009

  52. Sparrow R (2016) Robots in aged care: a dystopian future? AI Soc 31:445–454. https://doi.org/10.1007/s00146-015-0625-4

  53. Stilgoe J, Owen R, Macnaghten P (2013) Developing a framework for responsible innovation. Res Policy 42(9):1568–1580. https://doi.org/10.1016/j.respol.2013.05.008

  54. Wagner C (2013) Robotopia Nipponica. Recherchen zur Akzeptanz von Robotern in Japan. Tectum, Marburg

  55. Weber M (2002) Die „Objektivität“ sozialwissenschaftlicher und sozialpolitischer Erkenntnis. In: Kaesler D (ed) Max Weber. Schriften 1894–1922. Selected by Dirk Kaesler. Kröner Verlag, Stuttgart, pp 77–149

  56. Weber J (2005) Helpless machines and true loving care givers: a feminist critique of recent trends in human–robot interaction. J Inf Commun Ethics Soc 3(4):209–218. https://doi.org/10.1108/14779960580000274

  57. Winner L (1980) Do artifacts have politics? Daedalus 109(1):121–136

  58. Woolgar S (1990) Configuring the user: the case of usability trials. Sociol Rev 38(1):58–99

  59. World Health Organization (2002) Active ageing. A policy framework. World Health Organization, Geneva

Funding

Open Access funding enabled and organized by Projekt DEAL.

Author information

Corresponding author

Correspondence to Arne Maibaum.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article

Cite this article

Maibaum, A., Bischof, A., Hergesell, J. et al. A critique of robotics in health care. AI & Soc (2021). https://doi.org/10.1007/s00146-021-01206-z

Keywords

  • Social robotics
  • Health care
  • Innovation policies
  • Care robots
  • Care crisis