This section discusses several applications of AI and illustrates the ethical challenges each of them raises. While we cover autonomous vehicles and military uses of AI in separate chapters, here we discuss AI for human enhancement, healthcare, education and sex robots.

9.1 Ethical Issues Related to AI Enhancement

It is starting to become possible to merge humans and machines. Robotic components that can replace biological anatomy are under active development. No ethical dilemma ensues when robotic replacement parts are used for restorative purposes. Robotic restoration takes place when a missing physical or cognitive capability is replaced with an equally functional mechanical or electronic capability. The most common example of robotic restoration is the use of robotic prosthetics. Enhancement, on the other hand, occurs when a physical or cognitive capability is replaced with an amplified or improved mechanical or electronic capability.

9.1.1 Restoration Versus Enhancement

It is not always clear when a restorative prosthetic becomes an enhancement. Technology is changing rapidly, and prosthetics are becoming so advanced that they allow for significant increases in some abilities. Moreover, while some functions may only be restored, others may be enhanced. These distinctions do not, by themselves, result in ethical dilemmas. Non-invasive mechanical and electrical devices, such as exoskeletons, are also being developed to make people stronger, faster, or otherwise more capable. Such systems may be purchased purely for the sake of enhancement.

9.1.2 Enhancement for the Purpose of Competition

Ethical issues arise when we consider the use of enhancements for the purpose of job-related competition. Athletics offers an illustrative example. Athletes constantly look for a competitive advantage over their rivals, and invasive or non-invasive augmentations may soon offer a considerable advantage to those with the money to invest. Moreover, prosthetics are quickly becoming so capable that they offer a competitive advantage to some athletes. An ethical dilemma arises when these devices prevent fair competition or endanger the health of the athletes.

Note that the Superhuman Sports Society promotes sports that explicitly invite the use of enhancements, but at least these enhancements are open and transparent; they even form an essential part of the games. Players without specific enhancements would not be able to play these games, or would perform at a very low level. But we may ask: how far will athletes go to be the best? Goldman and colleagues posed the following question to elite athletes: “Would you take a drug that would guarantee overwhelming success in your sport, but also cause you to die after five years?” The authors report that roughly half said yes, though other authors have since disputed the results (Goldman et al. 1987).

We can also consider cognitive versus physical enhancement. Amphetamines (and methamphetamine during World War II) have been used to increase the wakefulness and energy of pilots and soldiers in combat. The United States Air Force used such stimulants until 2012 and still issues Modafinil as a stimulant for combat pilots. Other militaries are suspected of using similar drugs.

Similarly, the number of students using unprescribed Ritalin or Adderall on college campuses has tripled since 2008 (Desmon-Jhu 2016). Experts estimate that the proportion of users may exceed 50% (Wilens et al. 2008). But is it ethical to use these so-called “Smart Drugs” to gain an academic advantage? Is this equivalent to doping in Olympic sports?

More mechanical forms of cognitive enhancement are also becoming available. For example, transcranial magnetic stimulation of deep brain regions has been shown to improve cognitive performance. It may also soon be possible to make point-wise gene changes that produce biological, cognitive, or physical enhancements.

Although these technologies clearly have major benefits to society, we must also be careful to understand their potentially negative consequences. For the technologies described above in particular, it is worth discussing and debating when an enhancement becomes a shortcut to excellence. Moreover, we must question whether the purchasing of cognitive aids cheapens success. At some point, success may simply depend on one’s ability to purchase the necessary shortcuts, leaving less wealthy people at a disadvantage in professional or academic competition.

Another form of enhancement could be based on the brain-computer interfaces that are currently being developed. One of their prime applications is neuroprosthetics, in which the neurons of a patient are connected to a computer that in turn controls a prosthetic device. Thinking about closing one’s hand then results in the prosthetic hand closing.

9.2 Ethical Issues Related to Robots and Healthcare

Healthcare is another application of AI and robotics that raises ethical issues. Robots have been proposed for a wide variety of roles in healthcare, including supporting older adults in assisted living, rehabilitation, surgery, and delivery. Currently, robot-assisted surgery is the predominant application of robots within the healthcare industry. Robots are also being developed to deliver items within hospitals and to disinfect hospital and surgical rooms with ultraviolet light.

9.3 Robots and Telemedicine

Robots have been suggested as an important method for performing telemedicine whereby doctors perform examinations and determine treatments of patients from a distance (see Fig. 9.1).

Fig. 9.1 The da Vinci surgical system (Source: Cmglee)

The use of robots for telemedicine offers both benefits and risks. This technology may provide a means of treating distantly located individuals who would otherwise only be able to see a doctor under extreme circumstances. Telemedicine may thus encourage patients to see the doctor more often, and it may decrease the cost of providing healthcare to rural populations. On the negative side, telemedicine may result in, and even encourage, a substandard level of healthcare if it is relied upon excessively. It might also result in the misdiagnosis of ailments that are not easily evaluated remotely.

9.3.1 Older Adults and Social Isolation

Robots are also being introduced to help older adults combat social isolation. Social isolation occurs for a variety of reasons, such as children growing up and leaving the home, and friends and family ageing and passing away. Older adults who reside in nursing homes may feel increasingly isolated, which can result in depression. The United Kingdom acknowledged the societal-scale problem of loneliness and appointed a dedicated Minister for Loneliness in 2018.

Researchers have developed robots, such as Paro, in an attempt to reduce feelings of loneliness and social isolation in these older adults. Ethical concerns about the use of the Paro robot (see Fig. 9.2) have been raised (Calo et al. 2011; Sharkey and Sharkey 2012).

Fig. 9.2 The Paro robot (Source: National Institute of Advanced Industrial Science and Technology)

The main concerns are that patients with dementia may not realise that the robot is a robot, even if they are told, whatever the consequences of that may be. Moreover, the use of the robot may further increase actual social isolation by reducing the incentive of family members to visit. Yet, for this use case, many would argue that the benefits clearly outweigh the concerns (Abdi et al. 2018).

9.3.2 Nudging

Perhaps more controversial is the use of AI and robotics to provide encouraging nudges that push patients towards a particular behavioural outcome. Robotic weight-loss coaches, for example, have been proposed and developed that ask people about their eating habits and remind them to exercise (Kidd and Breazeal 2007). These robots are meant to help people stick to diets, but ethical concerns arise related to autonomy: people should have the autonomy to choose how they want to live and not be subjected to the influence of an artificially intelligent system. These systems also raise issues related to psychological manipulation if their interactions are structured in a way that is known to be maximally influential. A variety of methods, such as the foot-in-the-door technique, could be used to manipulate a person.

9.3.3 Psychological Care

Recently, artificial systems have been proposed as a means of performing preliminary psychological evaluations. Ideally, these systems could be used to detect depression from online behaviour and gauge whether a treatment intervention is required. The use of such systems offers a clear benefit in that, by identifying individuals at risk, they may be able to prevent suicides or other negative outcomes (Kaste 2018). On the other hand, these systems raise questions of autonomy and the potential development of nanny technologies that prevent humans from working through their own problems unfettered. Along a similar line of reasoning, virtual agents have been developed for interviewing Post Traumatic Stress Disorder (PTSD) sufferers. Research has shown that individuals with PTSD are more likely to open up to a virtual agent than to a human therapist (Gonzalez 2017).
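To make the idea concrete, such screening is often framed as a text-classification problem: a model is trained on posts labelled by clinicians and then flags new posts for human review. The following Python sketch is purely illustrative; the example posts, labels and risk threshold are hypothetical, and a real system would require clinical validation and careful oversight.

```python
# Illustrative sketch only: a toy text classifier that flags posts for
# human review. The example posts, labels and threshold are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_posts = [
    "had a great day hiking with friends",
    "everything feels pointless, I can't get out of bed",
    "excited about the new semester",
    "I feel so alone and hopeless lately",
]
train_labels = [0, 1, 0, 1]  # 1 = clinician-labelled as at-risk (toy labels)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_posts, train_labels)

new_post = "nothing matters any more"
risk = model.predict_proba([new_post])[0][1]
if risk > 0.5:  # hypothetical threshold; a real system needs clinical validation
    print(f"Flag for human review (risk score {risk:.2f})")
```

Note that the output here is only a flag for human review, not a diagnosis, which is one way of addressing the autonomy concerns raised above.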

9.3.4 Exoskeletons

Looking at one final technology, exoskeletons have been developed to assist individuals with lower-limb disabilities. These systems are most often used for rehabilitation and training, and have recently allowed paraplegic patients to stand and take small steps. Ethical issues arise when these systems are incorrectly viewed as a cure rather than as a tool.

9.3.5 Quality of Care

Overall, robotics and AI have the potential to revolutionise healthcare. These technologies may well provide lasting benefits in many facets of care, ranging from surgery to the diagnosis of disease. Certainly, a society must carefully analyse the benefits and costs of these systems. For instance, computer-aided detection of cancer affects decisions in complex ways. Povyakalo et al. (2013) examined the quality of decisions that result when healthcare providers use computer aids to detect cancer in mammograms. They found that the technology helped more novice mammogram readers but hindered more experienced readers, and they note that this differential effect, even if subtle, may be clinically significant. The authors suggest that detection algorithms and protocols be developed that take the experience of the user into account in the type of decision support they provide. In 2019, a study of more than 9,400 women published in the Journal of the National Cancer Institute found that AI was markedly better than human doctors at detecting pre-cancerous cells (Hu et al. 2019).

9.4 Education

In education, AI systems and social robots have been used in a variety of contexts. Online courses are widely used. For example, the University of Phoenix is now (technically) one of the largest universities in the world, since hundreds of thousands of students are enrolled in its online courses. In such a highly digital learning environment, it is much easier to integrate AI that helps students not only with administrative tasks, but also with their actual learning experiences.

9.4.1 AI in Educational Administrative Support

AI systems, such as Amelia from IPSoft, may one day advise students on their course selection and provide general administrative support. This is not fundamentally different from other chatbot platforms such as Amazon’s Lex, Microsoft’s Conversation or Google’s Chatbase. They all provide companies and organisations with tools to create their own chatbots that users can interact with on the organisations’ websites or on dedicated messaging platforms. While these bots may be able to provide some basic support, they have to fall back on a human support agent when encountering questions that go beyond the knowledge stored in their databases.

Another way of supporting education with AI from an organisational perspective is plagiarism checking. In an age when students can easily copy and paste essays from material found online, it is increasingly important to check whether the work submitted is truly the student’s original work or just a slightly edited Wikipedia article. Students are of course aware of their teachers’ ability to google the text in their essays, and therefore know that they need to do better than a plain copy and paste. Good plagiarism-checking software goes far beyond matching identical phrases: it is able to detect similarities and approximations, and can even search for patterns in the white space. Artificial intelligence can detect such similarities in text patterns and empowers teachers to quickly check student work against all major sources on the internet, including previously submitted student contributions that were never openly published.
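At its core, such a checker relies on a text-similarity measure. The sketch below compares character n-gram profiles of a submission and a candidate source using cosine similarity; the example texts and the threshold are hypothetical, and commercial systems add large-scale indexing, paraphrase detection and formatting analysis on top of this basic idea.

```python
# Minimal sketch of similarity-based plagiarism detection using character
# n-grams and cosine similarity. Texts and threshold are hypothetical;
# real checkers index millions of documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def similarity(submission: str, source: str, n: int = 5) -> float:
    """Cosine similarity between character n-gram profiles of two texts."""
    vec = TfidfVectorizer(analyzer="char", ngram_range=(n, n))
    tfidf = vec.fit_transform([submission, source])
    return float(cosine_similarity(tfidf[0], tfidf[1])[0][0])

source = "The mitochondrion is the powerhouse of the cell."
submission = "The mitochondrion is, in essence, the powerhouse of a cell."

score = similarity(submission, source)
if score > 0.8:  # hypothetical threshold
    print(f"Possible plagiarism (similarity {score:.2f})")
else:
    print(f"Similarity {score:.2f}")
```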

9.4.2 Teaching

AI systems have several advantages over human teachers that make them attractive for online learning. First, they are extremely scalable: each student can work with a dedicated AI system that adapts the teaching speed and difficulty to the student’s individual needs. Second, such a system is available at any time and any location, for an unconstrained duration. Moreover, such an agent does not get tired and has an endless supply of patience. Another advantage of AI teaching systems is that students might feel less embarrassed; speaking a foreign language to a robot might be more comfortable for a novice speaker.
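A very simple way to illustrate such adaptation is a rule that tracks the learner’s recent accuracy and raises or lowers the difficulty accordingly. The class below is a toy sketch with hypothetical thresholds and levels; real intelligent tutoring systems use far richer learner models, such as Bayesian knowledge tracing.

```python
# Toy adaptive tutor: raises or lowers difficulty based on the learner's
# recent accuracy. Levels and thresholds are hypothetical.
from collections import deque

class AdaptiveTutor:
    def __init__(self, levels=("easy", "medium", "hard"), window=5):
        self.levels = levels
        self.level = 0                      # start at the easiest level
        self.recent = deque(maxlen=window)  # sliding window of correct/incorrect

    def record_answer(self, correct: bool) -> str:
        self.recent.append(correct)
        accuracy = sum(self.recent) / len(self.recent)
        if accuracy > 0.8 and self.level < len(self.levels) - 1:
            self.level += 1   # learner is doing well: increase difficulty
        elif accuracy < 0.4 and self.level > 0:
            self.level -= 1   # learner is struggling: decrease difficulty
        return self.levels[self.level]

tutor = AdaptiveTutor()
for answer in [True, True, True, True, True, False, False, False]:
    print(tutor.record_answer(answer))
```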

Autonomous teaching agents work best in constrained topics, such as mathematics, in which good answers can easily be identified. Agents will fail at judging the beauty of a poem or appreciating novelty of thought or expression. Section 2.4 discusses the limitations of AI in more detail.

Another teaching context in which robots and AI systems show promising results is teaching children with special needs, more specifically children with Autism Spectrum Disorder. A robot’s limited expressivity, combined with its repetitive behaviour (which many perceive as boring), is in this context actually a key advantage (Diehl et al. 2012). The use of robots for general-purpose childcare, however, is ethically questionable (Sharkey and Sharkey 2010).

9.4.3 Forecasting Students’ Performance

Artificial intelligence has been used to predict students’ dropout rates and grades (Gorr et al. 1994; Moseley and Mead 2008). The goal is typically to provide students at risk with additional support, but also to carefully plan resource allocation. This is particularly important in the context of the United States’ “No Child Left Behind” policy, which strongly encourages schools to minimise dropout rates and ensure the good performance of most students.
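As a minimal sketch of how such a prediction might be made, the example below trains a logistic regression on a few hypothetical features (attendance rate, prior grade average, share of assignments submitted) and estimates a dropout risk for a new student. The features, data and threshold are illustrative assumptions and do not reproduce the methods of the cited studies.

```python
# Illustrative dropout-risk predictor. Features, data and threshold are
# hypothetical and do not reproduce the cited studies.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: attendance rate, prior grade average (0-100), assignments submitted (fraction)
X_train = np.array([
    [0.95, 82, 1.00],
    [0.60, 55, 0.40],
    [0.88, 74, 0.90],
    [0.45, 48, 0.30],
    [0.70, 65, 0.70],
    [0.30, 40, 0.20],
])
y_train = np.array([0, 1, 0, 1, 0, 1])  # 1 = dropped out (toy labels)

model = LogisticRegression().fit(X_train, y_train)

new_student = np.array([[0.55, 58, 0.50]])
risk = model.predict_proba(new_student)[0][1]
print(f"Estimated dropout risk: {risk:.2f}")
if risk > 0.5:  # hypothetical intervention threshold
    print("Offer additional support.")
```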

There are, however, several ethical issues. First, the pressure applied under this policy can incentivise teachers and administrators to manipulate the scores of their students in order to meet the set targets. The Atlanta Public Schools cheating scandal is an example of such misconduct (Fantz 2015).

Second, if the performance of a student is predicted before the student takes a course, both the student and the teacher might adapt to this score (Kolowich 2012). A student might, because of the prediction, not even try to perform well in the course, deeming their own effort ineffective and exhibiting signs of learned helplessness. A teacher might likewise look at the list and decide that the students with the lowest predicted scores are likely to drop out anyway and are not worth the extra effort. These are two possible negative effects of such forecasted performance, but both students and teachers might also decide to counteract the prediction: a student might work extra hard precisely because the course is predicted to be difficult, and a teacher might allocate extra time and resources to the students who are predicted to struggle. In any case, the consequences of using AI to predict student performance should be discussed and monitored to avoid abuse and bias.

9.5 Sex Robots

One of the more controversial applications of AI technology is the design of robots for sexual purposes. Indeed, robots that engage in both sex and violence have been a trope in several recent hit films and TV shows, such as Ex Machina, Humans and Westworld, to name but three.

Those who argue against sex robots claim they will degrade people, especially women, and perpetuate harmful stereotypes of submissive females (Richardson 2016). It is certainly true that the vast majority of sex robots currently being produced are female in form. There are also concerns that giving people who lack social skills access to sex robots will remove any incentive for them to acquire those skills. There are those who argue that sex is something humans should do with each other, not with machines. Some of these arguments are similar to arguments made against pornography and prostitution. There is even a Campaign Against Sex Robots where more detail on these arguments can be found.

Present-day sex robots are little more than silicone dolls. These silicone dolls are, however, highly realistic and offer many customisation options (see Fig. 9.3).

Fig. 9.3 A RealDoll (Source: RealDoll)

Those who dismiss the arguments against sex robots argue that there is no moral difference between a sex robot, a sex doll and a vibrator. In 2018, the city of Houston modified its ordinance to ban the creation of a robotic brothel by changing its definition of adult arcades to include “anthropomorphic devices or objects that are utilized for entertainment with one or more persons” (Ehrenkranz 2018). However, a sex robot with sophisticated AI could have some kind of relationship with a human, which would present the risk of unidirectional emotional bonding discussed in Sect. 7.3.

While most sex robots and sex dolls are female in form, male sex dolls are commercially available, and male sex robots with conversational AI may become available. Some think that sex with a robot is in questionable taste; others may argue that a thing being in poor taste is not necessarily a sufficient reason to ban it. The point is that this topic raises numerous ethical issues. For example, if a spouse has sex with a robot, does that count as infidelity? Would robot partners degrade human relationships (as depicted in Humans)? If the AI in a sex robot becomes particularly advanced, will it get angry and turn on its human creators (as depicted in Ex Machina)? Is it acceptable (or possible) to “rape” and “kill” a sex robot, a central theme of Westworld? Would such a murder be morally equivalent to murder in a video game (Sparrow 2016)? At what point in their development should robots be given rights to protect them from human abuse (Coeckelbergh 2010; Gunkel 2018)? Coeckelbergh (2009) develops a methodological approach to evaluating roboethics questions related to personal relations. At some point these questions become philosophical in nature (Floridi 2008).

Discussion Questions:

  • If a drug would give you athletic triumph for five years and then kill you, would you take it? Explain your reasoning.

  • If predictive analytics suggested you should change your major, would you do so? Develop and discuss the costs and benefits of doing so.

  • How would you feel about being tutored by an AI? Would you pay for this type of tutoring? Explain how much you would pay and why.

Further Reading: