The way elderly care is delivered is changing. Care robots are being proposed to accommodate the increasing number of elderly people and the decline in the number of people available to care for them. This change introduces ethical issues into robotics and healthcare. The two-part study (heuristic evaluation and survey) reported here examines a phenomenon that results from that change. The phenomenon arises from a contradiction. Of the 102 survey respondents, 12 were elderly. All but 2 of those 12 (who were undecided) wanted to be able to change how the presented care robot made decisions, and 7 of the 12 wanted to be able to examine its decision-making process so as to ensure the care provided is personalized. At the same time, however, 34% of the elderly participants said they were willing to trust the care robot inherently, compared with only 16% of the participants who were under fifty. Additionally, 66% of the elderly respondents said they were very likely or likely to accept and use such a care robot in their everyday lives. The contradiction of inherent trust and simultaneous wariness about control gives rise to the phenomenon: elderly in need want control over their care to ensure it is personalized, but many may desperately take any help they can get. The possible causes, and ethical implications, of this phenomenon are the focus of this paper.
- Value sensitive design
- Artificial intelligence
Elderly care robot technology is going through what Moor calls a technological revolution. Such a revolution is the moment at which a technological development, prompted by a technological paradigm evolution or a major device improvement, causes enormous social impact. Care robot technologies are fundamentally changing elderly care. Still in its infancy, the technological revolution of elderly care robots needs nurturing to ensure ethical care is delivered to the increasing number of elderly [2,3,4,5,6,7,8,9,10].
Researchers at Florida State University “surveyed 445 people between the ages of 80 and 93 and found that most of the adults over 80 are using technological gadget[s] daily”. Although ethical considerations in relation to technology-based care of the elderly have been the subject of numerous studies [12,13,14,15,16,17], relatively little attention has been paid to care robots [18, 19] and the integration of complex robots into care for the elderly. One month-long social robot study revealed that the acceptance of care robots depends on factors from the social and psychological usage context, among them trust. That study touched on the phenomenon presented here. In their results, the researchers stated that the “trustworthiness of the robot was a serious issue for most participants, it seemed more important [than] privacy. Yet, most participants did trust the robot and its messages or they did not even think about questioning the integrity of the robot”. The care robot used in that study was constructed and tested, as opposed to the care robot framework presented in this paper. While a small number of care robot studies have been conducted with tested care robots and a focus on ethical considerations, none attempt to comprehensively explore the phenomenon presented here. Moreover, none have found the phenomenon with an untested care robot. What are the ethical implications of the elderly giving their trust to an untested care robot? We hope this paper provides an initial discussion that encourages further debate and study to comprehensively answer that question.
As a result of long-term demographic changes, many elderly people find themselves forced by circumstances to accept whatever care is on offer. Compounding the already apparent problem of impersonal care in many establishments, care robots could open up social care for the elderly to further abuse. By contrast, however, the care robot technological revolution might present an opportunity for the empowerment of the elderly in care, and an improvement upon current provision. This paper outlines a study that revealed these issues.
The paper begins by describing the design approach used to envision the hypothetical elderly care robot used in the study, then the hypothetical framework itself, followed by a brief summary of the study. We then present the results of the study, and a discussion on the background and ethical implications of the phenomenon of inherent trust in care robots.
2 Values in Motion Design and Dynamic Value Trade-Offs in Run-Time
Values in Motion Design (VMD), formerly the design process of the ‘attentive framework,’ is a care robot design approach that intends not only to help overcome the lack of carers, but to provide better care without human error or ill intent. The approach stems from Value Sensitive Design (VSD) [22, 23], but since VSD “lacks a complementary or explicit ethical theory for dealing with value trade-offs”, it rejects the notion that value trade-off decisions should be made exclusively by designers and user sample groups. Further justification for this rejection is linked to care ethics. In care ethics, good care is “determinative in practice”. That is, good care is not standardized ‘elderly care management’ derived from following a normative ethical theory; instead, it is determined during the act of caring by carers within the carer-patient relationship [9, 26,27,28,29]. VMD arises from the rejection of all value trade-offs being made during the design process and from the need for good care ethics in care robot technology; it provides a way to deal with these shortcomings of VSD.
VMD asserts that each care patient has unique values and value priorities, and that some values and priorities are open to constant change. Thus, VMD suggests that care robot technology should be able to identify those values, priorities, and changes, and then adapt to them using dynamic value trade-offs in run-time in order to provide good, customized care. Because of safety issues, VMD does not suggest that all values should be customizable; instead, it distinguishes between intrinsic and extrinsic patient care values. Intrinsic values are the end goal of care; such values include wellbeing and safety. Extrinsic values are those that help to reach intrinsic ones. Extrinsic values, and their priority, are personal; each individual’s values are different. Privacy, independence, and other such extrinsic values are those that care robot technology should adapt to. In distinguishing intrinsic and extrinsic values, one acknowledges that there are value trade-off decisions that are unsafe for users to make and that designers should therefore make, as is the case for intrinsic values. Conversely, there are those value trade-offs (extrinsic) that could and should be made by users to provide good, customized care.
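The intrinsic/extrinsic split described above can be pictured as a small data model. The following is only an illustrative sketch: the class, function, and value names are hypothetical, not part of the VMD specification, and the safety check merely mimics the rule that intrinsic values are never user-customizable.

```python
from dataclasses import dataclass, field

# Intrinsic values (the end goals of care, e.g. safety, wellbeing) are fixed
# by designers at design time. Extrinsic values (e.g. privacy, independence)
# carry a per-patient priority that may be reordered in run-time.
INTRINSIC_VALUES = frozenset({"safety", "wellbeing"})


@dataclass
class PatientValueProfile:
    # Extrinsic values listed from highest to lowest priority for this patient.
    extrinsic_priorities: list = field(
        default_factory=lambda: ["privacy", "independence"]
    )

    def reprioritize(self, new_order):
        # Only extrinsic values may be reordered; attempts to customize an
        # intrinsic value are rejected, as those trade-offs belong to designers.
        if any(v in INTRINSIC_VALUES for v in new_order):
            raise ValueError("intrinsic values cannot be customized")
        self.extrinsic_priorities = list(new_order)


profile = PatientValueProfile()
profile.reprioritize(["independence", "privacy"])  # patient prefers independence
print(profile.extrinsic_priorities)
```

The point of the sketch is simply that the boundary between designer-fixed and user-adjustable values can be enforced structurally, before any run-time trade-off logic is applied.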
To follow the VMD approach, designers need tools. They need a design tool to identify all values and to design intrinsic value elements (AI safety and physical component control systems, as well as the physical robot components themselves). Additionally, they need a computational method for dynamic value trade-offs in run-time (an AI value customizer system) that can customize extrinsic values to users. The VMD approach recommends using care-centered value sensitive design (CCVSD) for the former, and extrinsic value ordering embedded in an AI system for the latter. These recommendations form a care robot design package called the attentive framework.
3 The Attentive Framework
For the study within this paper, the conceptual AI system used to embed extrinsic value ordering was the learning intelligent distribution agent (LIDA) model of computational consciousness [30, 31]. Both CCVSD and the LIDA model have their limitations, but they are nonetheless useful for the purpose of constructing a framework that adheres to the VMD approach. The attentive framework is a theoretical framework of highly sophisticated care robots (attentos) that use a type of computational ‘consciousness’ to relate to the elderly individuals in their care, and to make ethical decisions by design. As a thought experiment the attento proves very useful for gleaning what attitudes to future care robots might be, and what problems might arise. Value sensitive design (VSD), as argued elsewhere, needs to take better account of social power, and to adopt some normative positions concerning which values the framework itself values most highly. The notion of computational ‘consciousness,’ like so many terms in the world of computing (e.g. ‘information’ and ‘memory’; see Checkland’s critique, and arguments made elsewhere), anthropomorphizes what is perhaps better understood as a detailed computational model of agent perception. Such an agent perception model fails to address the ‘hard’ problem of the ‘I’ of consciousness experiencing the decision trees of perception, and could perhaps be improved by better accounting for the ‘enactive approach’, which locates perception between the agent and the perceived rather than merely within the perceiver. Nonetheless, it offers a substantive computational model for the problems inherent in getting robots to make decisions related to their context. Attentos would also support a patient customization function: the ‘conscious’ ordering of extrinsic patient care values (such as autonomy and privacy), making them able to perform dynamic value trade-offs in run-time so as to provide good, customized patient care.
The attentive framework takes the VMD approach in designing an AI value customizer system and physical robot elements. The AI value customizer system is somewhat hypothetical, due to the ‘hard’ problem of consciousness. In the framework, an attento focuses on ensuring that extrinsic patient care values are upheld. It allocates decisions that are safe to make (extrinsic value ordering) to the attento’s interpretation of the patient’s care values. It sets a patient value priority list for each unique patient and attempts to match its care to that patient. An attento performs dynamic value trade-offs in run-time, via extrinsic value ordering: ‘consciously’ interpreting and adjusting the list in practice to provide customized patient care. The value priority list, which an attento holds for each patient, aids the attento in customized action selection during care. The ordering of the list dictates the way the attento looks after the patient; the highest-priority values are considered first in each possible action selection. An attento does this with computational ‘consciousness’ and an ‘affirmation of values’ function. Computational consciousness provides: situation observation and evaluation; reactionary responsiveness; external stimuli perception; internal modelling of situations, patients, and patients’ values and how they express them; and attentiveness. Additionally, the framework posits a ‘subjective experience’ and internal ‘moral’ dialogue taking place within computational ‘consciousness,’ which it presents as key to making the care robot something the patient can have a good care experience with. Finally, the affirmation function affirms what a patient values in their care: by asking them or their guardian(s), by listening to what a patient or their guardian(s) explicitly tells it about the patient’s values, by conferring with other carers, and by performing a self-check of what it has interpreted and now understands to be the patient’s values.
Affirming a patient’s values updates the value priority list of that patient, thus ensuring the actions which the attento chooses are customized.
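As a rough illustration of how a value priority list could steer action selection, and how the affirmation function could update that list, consider the sketch below. It is a minimal sketch under stated assumptions: the function names, the scoring scheme, and the medicine delivery actions are all hypothetical, since the framework leaves the actual mechanism to the AI value customizer system.

```python
def select_action(actions, priority_list):
    """Pick the action that best serves the patient's highest-priority values.

    `actions` maps an action name to the set of extrinsic values it upholds.
    Earlier entries in `priority_list` weigh more, mimicking 'the highest
    priority values are considered first in each possible action selection'.
    """
    def score(upheld_values):
        # Position-based weight: index 0 (top value) contributes the most.
        return sum(len(priority_list) - i
                   for i, v in enumerate(priority_list) if v in upheld_values)
    return max(actions, key=lambda name: score(actions[name]))


def affirm_values(priority_list, affirmed_order):
    """Replace the list with what the patient/guardian affirms, if anything."""
    return list(affirmed_order) if affirmed_order else priority_list


# A hypothetical medicine delivery choice: announce the reminder aloud
# (independence-friendly) versus deliver discreetly (privacy-friendly).
actions = {
    "announce_reminder": {"independence"},
    "discreet_delivery": {"privacy"},
}

priorities = ["privacy", "independence"]
print(select_action(actions, priorities))   # discreet_delivery

# The patient affirms a new ordering, so subsequent selections change.
priorities = affirm_values(priorities, ["independence", "privacy"])
print(select_action(actions, priorities))   # announce_reminder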
The physical robot elements, which include the AI safety and physical component control systems as well as the physical robot components themselves, are CCVSD-inspired. They focus on ensuring that intrinsic patient care values (such as safety) are upheld. They place decisions that are unsafe for patients to make in the hands of designers. Designers are to develop a robot that suits the safety requirements of the patients and the environment. The CCVSD process should be well informed by professional codes of ethics [36,37,38,39] and healthcare regulations, and should involve stakeholders. The ‘AI safety and physical component control systems’ and the ‘AI value customizer system’ are two distinct elements of the AI: the former regards intrinsic values and the latter is concerned with extrinsic values. The distinction recognises that some elements of the complete AI system must include intrinsic value trade-offs made by the designer, such as control mechanism governance like the speed at which a limb is articulated. The speed should be set at the time of design, not changed dynamically during run-time, as with any intrinsic value consideration.
While the attentive framework is the recommended toolkit for VMD, designers are not limited to only using CCVSD and extrinsic value ordering supported by the LIDA model. Any other tools that can provide a way to identify values and design for intrinsic values, as well as an AI system to provide dynamic value trade-offs in run-time for extrinsic values, could be used in the VMD approach.
In 2017, a study was undertaken to test the validity of the attentive framework. The study employed a social constructionist, interpretivist methodology. Data was gathered in two phases: the first was a heuristic, expert evaluation; the second an online survey. Both focused on a specifically designed attento, the ‘elderly care medicine delivery attento’, as well as scenarios involving care robot-patient interactions. The two phases and overall results have been addressed in another paper. Further details can also be found in the thesis titled ‘Dynamic Value Trade-offs in Run-time to Provide Good, Customised Patient Care with Robots’, published here: https://researchoutput.csu.edu.au/en/publications/dynamic-value-trade-offs-in-run-time-to-provide-good-customised-p. In this paper the focus is on the survey results, comparing the responses of elderly respondents to those of younger age groups.
Participants were presented with the medicine delivery attento’s design (functionality, competencies, and components) and the concept of dynamic value trade-offs in run-time. Moreover, participants were shown two scenarios in which the attento encountered a situation where values should be considered, at which point it examined the relevant patient’s value priority list, evaluated the situation, revealed relevant extrinsic values, and made a decision based on the situation variables and value priority list. Participants were asked to put themselves in the shoes of the elderly person in the scenario, and at the end of each scenario they were asked: “Would you trust your care robot’s decision in this scenario?” and “Do you feel the care robot has respected the way you want to be treated and behaved appropriately?” After answering, they were presented with the same scenario setup, but the person’s value priority list had changed so that their highest value was now different; the robot’s actions therefore differed in order to respect that highest value instead. They were then asked the same questions. The highest value changed three times over the two scenarios, with the same questions asked after each change. The online survey used Survey Monkey. Scaled and multiple-choice questions were asked, each followed by open-ended questions. The scaled and multiple-choice questions were included to help participants frame and categorize their open-ended responses, and to keep them engaged. The scaled question queried the acceptability and usability of the attentive framework by asking how likely participants were to accept and use it in their everyday lives. Scenario questions addressed the trustworthiness of the attento in the scenarios, and dynamic value trade-offs in practice. The survey was conducted from August to September 2017, with 102 random international participants. Table 1 shows the complete set of demographic data for survey participants.
Participant sampling was random and international, in the hope of obtaining a cross-cultural and fairer set of data. Participants were recruited randomly via social media websites.
Approval to conduct the study was given by the University Human Research Ethics Committee. The study aimed to gather end-user perspectives on whether a particular care robot designed according to the attentive framework (an elderly care medicine delivery attento capable of delivering quality, customized care, defined as being determinative in practice) is value sensitive, acceptable, and usable.
This paper focuses on the phenomenon that the elderly were willing to inherently trust a care robot even though they wanted more control over its decisions. Thus, only those results relevant to this phenomenon are presented and discussed.
As seen in Fig. 1, 34% of the elderly participants indicated they would inherently trust the attento, whilst only 16% of those under 50 years of age were willing to (as seen in Fig. 2). Regarding the desire for control, all but 2 (who were undecided) of the elderly participants said yes to the following question: “Would you want the ability to change how your care robot makes its decisions?” From this we can see the contradiction of inherent trust in something one wishes to control and change, and thus the resulting phenomenon.
The phenomenon is further evident in the results shown in Fig. 3, which indicates that the elderly participants of the study were more likely to accept and use the attento in their everyday lives than the lower age groups. The elderly distribution is more focused towards the ‘likely’ side whilst the lower age group distribution is further spread. Moreover, the elderly participants dominate the ‘very likely’ category. As Fig. 3 indicates, the majority of all participants were willing to accept and use the attento. Since those over 50 are more likely to use a care robot sooner, this is both positive and negative: positive because it encourages further development of customizable care robots such as the attento, and negative because it indicates uninformed acceptability and usability, echoing the inherent trust the elderly were found to have in the attento.
Twelve of the participants in the study were elderly. The contradiction those participants indicated, of inherent trust in the attento and a simultaneous desire for control over it, suggests that some elderly people may be so desperate for care that they deem a new technology acceptable, usable, and more inherently trustworthy than younger generations would, even before seeing it in action.
Such a result prompts the following question: what is wrong with current care that makes the elderly inherently trust an untested care robot? In the following sections this phenomenon is explored from an international perspective.
6.1 Finnish Policy
A case study demonstrating the desire for personalized care, and the failure to deliver it, is the current state of elderly care in Finland. The Finnish healthcare sector strongly encourages good elderly care, insofar as it even has a national quality framework for care of the elderly; Finland is one of the few OECD countries to do so. Finland’s National Advisory Board on Health Care Ethics (ETENE) published a report on the elderly and the ethics of care. The report aimed to advise social and healthcare workers, as well as policy makers, on the Finnish government’s expectation of care, particularly addressing the “sensitive ethical issues associated with this [elderly] stage of life”. The main position of the report is that elderly care should be personalized:
“Treating elderly persons as individuals forms the basis of ethically sustainable eldercare. Care should be tailored to the needs and wishes of the elderly, respecting their values and taking their opinions into account”.
Despite having only 0.07% of the world’s population and spending a 2.1% share of its GDP on elderly care, which is “significantly above the OECD average of 1.6%”, Finland was ranked 15th on the Global AgeWatch Index (GAWI) in 2013 and 14th in 2015. The GAWI is the “first comparative overview of the wellbeing of older people around the world”. The overview was conducted by HelpAge International, a global elderly wellbeing watchdog. The GAWI measured wellbeing in terms of “income, health, education and employment, and environment”. What brought Finland’s ranking down, other than low educational levels and low pensions for elderly women, was that the elderly have “less healthy years to look forward to than those in better performing countries”. The future is a more disturbing prospect when noting that the national percentage of elderly is expected to jump from 27.2% (in 2015) to 32.4% by 2050.
In the case of Finland, one can see high expectations of personalized elderly care but a failure to provide it. Despite highly funded, universally accessible public elderly care, a small population, and a primary focus on value-based personalized elderly care (thus recognizing the desire for it), Finland still fails to implement good, personalized care due to a lack of resources and staffing.
Although the Finnish government has the right idea and the right funding, it fails to implement policy. As a result, access to the right care is a problem, and a cause of the phenomenon. A lack of resources has existed in Finland since the end of the 1990s; since then “there has been a rapidly growing shortage of medical doctors and dentists in health centers, especially in remote rural areas”. Without carers to implement policy, there is no access to personalized care.
The lofty aims set out by national policy create high public expectation. That expectation, the aging population, and the low availability of clinicians threaten the cost of, and access to, personalized elderly care for the Finnish people. A lack of carers is in itself another cause of the phenomenon.
6.2 Fewer Carers and Ill-Intent in Current Care in Canada and Australia
In western countries the number of elderly is rising and the current system cannot accommodate this [47, 48]. Not only is the number of professional carers entering the field falling, but more current carers are leaving the field [49, 50]. Adding to the numbers crisis, more cases of carer ill-intent are coming to light. In June 2017, Elizabeth Wettlaufer, a Canadian elderly care nurse, was convicted on eight counts of murdering patients in her care. Megan Haines was convicted of murdering two elderly women whilst caring for them in the Ballina, New South Wales (NSW) nursing home where she worked as a nurse in 2016. Newcastle, NSW nursing home team leader Garry Davis was found guilty of murdering two patients in 2016. Roger Dean, an elderly care nurse, killed eleven patients after maliciously setting alight the nursing home where he worked; Dean was sentenced in 2013. There are many more cases, but the point is this: human carers are capable of ill-intent. Given the elderly numbers crisis and possible human ill-intent, it could be concluded that the best solution is care robots capable of providing quality and ‘conscious’ human-level care. The attentive framework described in this paper was designed to do just that.
Perhaps elderly people are coming to the same conclusion: that a care robot could give them a carer if they do not have one, or could replace one that has shown ill-intent. If they come to this conclusion, then that could be a cause of the phenomenon. Possible causes aside, the next section explores the following question: what are the ethical implications of the phenomenon?
6.3 It’s an Opportunity to Capitalize on, with Substandard Care Robots
It is not unusual for unethical technology bureaucrats to take advantage of people. In the following cases, agencies and companies have capitalized, or attempted to capitalize, on other technological revolutions.
In 2017, the Federal Communications Commission (FCC) voted to repeal the net neutrality protections governing internet distribution in the US. The FCC’s attack on net neutrality could lead to internet filtering and usage throttling by service providers, a clear attempt at capitalizing on users who are not equipped to fight back.
Google was found to be abusing “its market dominance as a search engine by promoting its own comparison shopping service in its search results, and demoting those of competitors”. Users searching Google for online products were directed towards Google’s prices while rival prices were unfairly relegated. Google’s abuse of its monopoly on internet searches led to a £2.14bn fine for this unethical practice.
John Deere tractors are proprietary in the sense that only John Deere dealerships and authorized repair shops can work on tractors with embedded software. A license agreement forbids users from nearly all repairs and modifications to farming equipment. It “prevents farmers from suing for ‘crop loss, lost profits, loss of goodwill, loss of use of equipment … arising from the performance or non-performance of any aspect of the software’”. Deere & Company are abusing the application of embedded software to prevent users from making their own repairs to their purchases, instead forcing them to make further payments to Deere & Company through authorized repair avenues.
Apple practiced “planned obsolescence”. Although the practice is a fact, the motive is unclear. The theory is that, to encourage users to upgrade their iPhone, Apple throttled the performance of older phones as a new iPhone was about to be released. Purposefully making a gadget obsolete, regardless of the motive, is an unethical practice and a successful case of capitalizing on a technology revolution.
Digital Rights Management (DRM) technologies, such as those used by digital distribution companies like Valve (through Steam, its online game distribution platform), allow users to buy software or gadgets but do not allow them the freedom to use them as they wish. Companies use DRM technologies to unethically maintain rights over the products they distribute, capitalizing on digital distribution platforms.
Capitalizing on the care robot technological revolution with substandard products is possible. In relation to the phenomenon, the bureaucrats could be care robot designers, ethicists, policy makers, builders, and producers; elderly care institution owners and operators; assistive home care businesses; and so on. The fear is that any of those involved with care robots could abuse the phenomenon for gain with no intention of actually improving care. Such unethical behaviour could even be unintentional; the result, however, is the same: their ‘bandwagon’ products provide poor quality care.
Care robot companies may enter the market with no ethical guidance for care robot design, construction, maintenance, and so on, and/or with purposefully substandard care robots. They may find the same phenomenon as found in the study described, and attempt to capitalize on it. In the current global consumer economy this is an entirely possible scenario. Should it occur, the elderly will be cared for in a way that may be misguided and unethical, much like the users in the cases mentioned.
A particularly disturbing part of the phenomenon, as found in the study, was that the inherent trust was evident even when participants lacked a technological understanding of the attento’s ethical decision making. It is generally accepted that a product, such as a refrigerator or a car, is inherently trustworthy based on the assumption that its producers conducted safety tests and performed lengthy and careful design processes. However, it is strange that a theoretical framework (the attentive framework) inspired that trust. Furthermore, participants of the study were informed that the attento was ‘conscious’, that its model for ethical decision making was extrinsic value ordering, and that both of these features are mostly untested. The LIDA model (the computational consciousness model used for the attentive framework in this study) has been subjected to some testing, in which a ‘conscious’ AI was used by the LIDA model designers, Baars and Franklin, to assign naval tasks to sailors. Those tests were far from the complex and theoretical claims of ‘consciousness’ (subjectivity and moral dialogue) made for the attentos in this study. Yet in a personal communication with the lead author, Wendell Wallach (a consultant, ethicist, scholar, and author of papers on the LIDA model) stated that as “far as consciousness goes, your claims for CC [computational consciousness] are valid”.
Perhaps, with futuristic concepts such as autonomous cars and androids in the news, care robots are no longer such a hi-tech notion to the general public. Even so, the inherent trust in this case is clearly unfounded. Care robots, especially those that fully replace the function of a carer (like the medicine delivery attento in the study), are still in their infancy. One as complex as the attento in this study is theoretical, with unproven capabilities.
Finally, another contribution of this study has been to the area of care ethics. Through the analysis of the phenomenon one can see that it is both an ethical implication of the care robot technological revolution and an ethical issue of current care. As for resolving both, we maintain the assumption that resolving the first, by implementing ethical care robots capable of good, customised care, will also resolve the second. When such care robots will be capable of crossing the threshold of the ‘hard’ problem of consciousness, and of evidencing subjectivity and moral dialogue, remains to be seen.
Moor, J.: Why we need better ethics for emerging technologies. Ethics Inf. Technol. 7(3), 111–119 (2005)
Draper, H., Sorell, T.: Ethical values and social care robots for older people: an international qualitative study. Ethics Inf. Technol. 19(1), 49–68 (2017)
Sharkey, A., Sharkey, N.: Granny and the robots: ethical issues in robot care for the elderly. Ethics Inf. Technol. 14(1), 27–40 (2012)
Sharkey, A., Sharkey, N.: The eldercare factory. Gerontology 58(3), 282–288 (2011)
Sparrow, R., Sparrow, L.: In the hands of machines? The future of aged care. Mind. Mach. 16(2), 141–161 (2006)
Tokunaga, S., et al.: VirtualCareGiver: personalized smart elderly care. Int. J. Softw. Innov. 5(1), 30–43 (2017)
Garner, T.A., Powell, W.A., Carr, V.: Virtual carers for the elderly: a case study review of ethical responsibilities. Digital Health 2, 1–14 (2016)
Landau, R.: Ambient intelligence for the elderly: hope to age respectfully? Aging Health 9(6), 593–600 (2013)
Vallor, S.: Carebots and caregivers: sustaining the ethical ideal of care in the twenty-first century. Philos. Technol. 24(3), 251–268 (2011)
Burmeister, O.K.: The development of assistive dementia technology that accounts for the values of those affected by its use. Ethics Inf. Technol. 18(3), 185–198 (2016)
Bhupinder, C., Rachna, K.: Role of technology in management of health care of elderly. J. Gerontol. Geriatr. Res. 6(1), 1–3 (2017)
Burmeister, O.K., Kreps, D.: Power influences upon technology design for age-related cognitive decline using the VSD framework. Ethics Inf. Technol. (2018). https://doi.org/10.1007/s10676-018-9460-x
Burmeister, O.K., et al.: Enhancing connectedness through peer training for community-dwelling older people: a person centred approach. Issues Mental Health Nurs. 37(6), 406–411 (2016). https://doi.org/10.3109/01612840.2016.1142623
Sayago, S., Blat, J.: Telling the story of older people e-mailing: an ethnographical study. Int. J. Hum Comput Stud. 68(1–2), 105–120 (2010)
Burmeister, O.K.: Websites for seniors: cognitive accessibility. Int. J. Emerg. Technol. Soc. 8(2), 99–113 (2010)
Seals, C.D., et al.: Life long learning: seniors in second life continuum. J. Comput. Sci. 4(12), 1064–1070 (2008)
Moreno, T.: http://www.usabilitysciences.com/respect-your-elders-web-accessibility-for-seniors. Accessed 24 Apr 2011
van Wynsberghe, A.: Healthcare Robots: Ethics, Design and Implementation. Ashgate Publishing Ltd., Farnham (2015)
van Wynsberghe, A.: Designing robots for care: care centered value-sensitive design. Sci. Eng. Ethics 19(2), 407–433 (2013)
de Graaf, M.M.A., Allouch, S.B., Klamer, T.: Sharing a life with Harvey: exploring the acceptance of and relationship-building with a social robot. Comput. Hum. Behav. 43, 1–14 (2015)
Poulsen, A., Burmeister, O.K.: Overcoming carer shortages with carebots: dynamic value trade-offs in run-time. Australas. J. Inf. Syst. 22 (2018)
Friedman, B., Kahn, P.H.J., Borning, A.: Value sensitive design and information systems. In: Zhang, P., Galletta, D. (eds.) Human-Computer Interaction and Management Information Systems: Foundations, pp. 348–372. M. E. Sharpe, New York (2006)
Friedman, B.: Value-sensitive design. Interactions 3(6), 17–23 (1996)
Manders-Huits, N.: What values in design? The challenge of incorporating moral values into design. Sci. Eng. Ethics 17(2), 271–287 (2011)
Beauchamp, T.L.: Does ethical theory have a future in bioethics? J. Law Med. Ethics 32(2), 209–217 (2004)
Upton, H.: Moral theory and theorizing in health care ethics. Ethical Theor. Moral Pract. 14(4), 431 (2011)
Tronto, J.C.: Creating caring institutions: politics, plurality, and purpose. Ethics Soc. Welfare 4(2), 158–171 (2010)
Gámez, G.G.: The nurse-patient relationship as a caring relationship. Nurs. Sci. Q. 22(2), 126–127 (2009)
Vanlaere, L., Gastmans, C.: A personalist approach to care ethics. Nurs. Ethics 18(2), 161–173 (2011)
Baars, B.J., Franklin, S.: An architectural model of conscious and unconscious brain functions: Global workspace theory and IDA. Neural Netw. 20(9), 955–961 (2007)
Baars, B.J., Franklin, S.: Consciousness is computational: the LIDA model of global workspace theory. Int. J. Mach. Conscious. 01(01), 23–32 (2009)
Checkland, P.: Information systems and systems thinking: time to unite? Int. J. Inf. Manage. 8, 239–248 (1988)
Kreps, D.: Matter and memory and deep learning. In: Hirai, Y., Fujita, H., Abiko, S. (eds.) Berukuson Busshitsu to Kioku wo Shindan suru: Jikan Keiken no Tetsugaku, Ishiki no Kagaku, Bigaku, Rinrigaku eno Tenkai (Diagnoses of Bergson’s Matter and Memory: Developments Towards the Philosophy of Temporal Experience, Sciences of Consciousness, Aesthetics, and Ethics), pp. 196–225 (2017)
Chalmers, D.J.: The Conscious Mind. Oxford University Press, Oxford (1996)
Noë, A.: Action in Perception. MIT Press, London (2006)
Burmeister, O.K.: Professional ethics in the information age. J. Inf. Commun. Ethics Soc. 15(2), 348–356 (2017)
Burmeister, O.K.: Achieving the goal of a global computing code of ethics through an international-localisation hybrid. Ethical Space Int. J. Commun. Ethics 10(4), 25–32 (2013)
Bowern, M., et al.: ICT integrity: bringing the ACS code of ethics up to date. Australas. J. Inf. Syst. 13(2), 168–181 (2006)
Burmeister, O.K., Weckert, J.: Applying the new software engineering code of ethics to usability engineering: a study of 4 cases. J. Inf. Commun. Ethics Soc. 3(3), 119–132 (2003)
Organisation for Economic Co-operation and Development (OECD) Homepage. https://www.oecd.org/els/health-systems/Finland-OECD-EC-Good-Time-in-Old-Age.pdf. Accessed 10 Jan 2018
The National Advisory Board on Health Care Ethics (ETENE). Old Age and Ethics of Care (2008). http://etene.fi/en/publications
Statistics Finland Homepage. http://www.stat.fi/til/vamuu/2017/09/vamuu_2017_09_2017-10-24_tie_001_en.html. Accessed 10 Jan 2018
Yle Uutiset Homepage. https://yle.fi/uutiset/osasto/news/finland_lags_in_seniors_wellbeing/6858107. Accessed 11 Jan 2017
Global AgeWatch Index Homepage. http://www.helpage.org/global-agewatch/population-ageing-data/country-ageing-data/?country=Finland. Accessed 11 Jan 2017
Teperi, J. et al.: The Finnish health care system: a value-based perspective. http://www.hbs.edu/faculty/Publication%20Files/Finnish_Health_Care_System_SITRA2009_78584c8b-10c4-4206-9f9a-441bf8be1a2c.pdf
News Now Finland Homepage. http://newsnowfinland.fi/editors-pick/finlands-elderly-care-crisis. Accessed 11 Jan 2018
Koldrack, P., et al.: Cognitive assistance to support social integration in Alzheimer’s disease. Geriatr. Mental Health Care 1, 39–45 (2013)
Australian Bureau of Statistics Homepage. http://www.abs.gov.au/ausstats/abs%40.nsf/Latestproducts/4430.0Main%20Features302015?issue=2015&num=&opendocument=&prodno=4430.0&tabname=Summary&view=. Accessed 12 Jan 2018
Bernoth, M., Dietsch, E., Davies, C.: Forced into exile: the traumatic impact of rural aged care service inaccessibility. Rural Remote Health 12(1294), 1–8 (2012)
Bernoth, M., et al.: The impact of a participatory care model on work satisfaction of care workers and the functionality, connectedness and mental health of community-dwelling older people. Issues Mental Health Nurs. 37(6), 429–435 (2016)
British Broadcasting Corporation Homepage. http://www.bbc.com/news/world-us-canada-40412080. Accessed 12 Jan 2018
Australian Broadcasting Corporation Homepage (a). http://www.abc.net.au/news/2016-12-16/megan-haines-sentenced-for-murder-of-two-elderly-women/8126418. Accessed 12 Jan 2018
Australian Broadcasting Corporation Homepage (b). http://www.abc.net.au/news/2016-09-28/nursing-home-employee-garry-davis-found-guilty-of-murder/7883704. Accessed 12 Jan 2018
Daily Mail. http://www.dailymail.co.uk/news/article-3225332/Convicted-murderer-Roger-Dean-appeals-life-sentences-received-lighting-fire-nursing-home-resulted-death-14-elderly-people.html. Accessed 12 Jan 2018
Bernoth, M., et al.: Information management in aged care: cases of confidentiality and elder abuse. J. Bus. Ethics 122, 453–460 (2014)
Internet Society Homepage. https://www.internetsociety.org/blog/2017/12/net-neutrality-fccs-december-14-vote/. Accessed 29 Jan 2018
Telegraph Homepage. http://www.telegraph.co.uk/technology/2017/06/27/eu-hits-google-record-21bn-fine-abusing-internet-search-monopoly/. Accessed 29 Jan 2018
Vice Homepage. https://motherboard.vice.com/en_us/article/xykkkd/why-american-farmers-are-hacking-their-tractors-with-ukrainian-firmware. Accessed 29 Jan 2018
Business Insider Homepage. http://www.businessinsider.com/apple-battery-throttling-gives-customers-reason-to-distrust-2017-12. Accessed 29 Jan 2018
Electronic Frontier Foundation Homepage. https://www.eff.org/issues/drm. Accessed 29 Jan 2018
Wallach, W.: Personal Communication (2017)
© 2018 IFIP International Federation for Information Processing
Poulsen, A., Burmeister, O.K., Kreps, D. (2018). The Ethics of Inherent Trust in Care Robots for the Elderly. In: Kreps, D., Ess, C., Leenen, L., Kimppa, K. (eds) This Changes Everything – ICT and Climate Change: What Can We Do?. HCC13 2018. IFIP Advances in Information and Communication Technology, vol 537. Springer, Cham. https://doi.org/10.1007/978-3-319-99605-9_24
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-99604-2
Online ISBN: 978-3-319-99605-9