1 Introduction

Social robots are complex machines envisioned to engage in meaningful social interaction with humans and with each other [115]. As health care robots they are envisioned to motivate post-stroke patients to perform rehabilitation exercises [16], to teach people with autism to recognize social cues, facial expressions, eye contact, and the like [17–26], to motivate and enable movement in people with physical disabilities, to support self-management in children with diabetes [27], and to perform tasks otherwise performed by health care staff, such as taking blood pressure, carrying and moving patients, bathing patients, and easing vaccinations for youth. Social robots are also seen as having great potential for long-term care and daily care provision [28], and being a companion for the elderly is seen as another main application [28, 29]. Philos, for example, is a socially interactive robot designed for use in the homes of those who need continual care; it is capable of daily health monitoring as well as emotional stimulation [30].

Public perception of an emerging scientific or technological product is important for its acceptance. One recent survey of public attitudes toward using robots in eldercare and other applications [31] showed, among other things, high acceptance of the bathing application, the therapeutic robot animal, the human-like care robot Ri-Man (for carrying patients), and a surveillance care robot [31, 32]. The main reasons for rejecting the bathing robot were the judgments that the robot-based action would be inferior to the human-based action and that it would take away jobs from human workers [31]. At the same time, however, social robots are seen as a possible way to address the human-resource and economic pressures on health care systems (e.g., those created by growing elderly populations) [33]. We present here the results of an exploratory study that ascertained the hitherto unexamined views of staff of a disability service organization toward social robotics and its various applications, covering, among other topics, the angle of bullying [34, 35].

2 Methods

An exploratory, non-probability survey delivered online (using a combination of 55 simple yes or no, Likert-scale, and opinion rating-scale questions, as well as options for written comments) was developed. Prior to the survey's distribution, the executive director of the disability service organization reviewed a draft of the full survey and commented on language and clarity; we made adjustments in accordance with the executive director's suggestions. The survey covered numerous topics. The quantitative and qualitative data related to the topic of this paper were generated through questions 8–11, 41 and 42, which were concerned with applications of social robots. Six questions covered various demographic angles. The survey received ethics approval from the University of Calgary Conjoint Health Research Ethics Board. After approval was obtained, the link to the survey was given to the executive director of the disability service organization, who passed it on to the staff of the organization. All 44 staff in the organization answered at least one content question, reflecting a response rate of 100 %. In accordance with the ethics approval, there was no mandatory requirement to answer any question, and we were not able to identify individual respondents. The response rate for the questions covered in this paper was high, ranging from 64 to 73 %. The results provide good insight into what the staff of this one disability service organization think, but they cannot be generalized to the disability service sector as a whole. However, the data generated by the survey could provide the foundation for further investigations allowing comparison of the views of different disability service organizations. The survey was also seen as providing an avenue for disability service staff to voice their opinions on topics about which they had never been asked before.

Quantitative data were extracted using Survey Monkey's intrinsic frequency-distribution analysis capability. The qualitative data from the comment boxes linked to the six questions were exported as one PDF file into Atlas.ti for qualitative analysis of the comment-box contributions. Given the set-up of the survey, these qualitative data cannot be traced back to any given individual. For question 8, n = 35 respondents gave comments; for question 9, n = 34; for question 10, n = 38; for question 11, n = 34; for question 41, n = 29; and for question 42, n = 27.
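Survey Monkey's built-in analysis produced the frequency distributions and per-question response rates reported here. Purely as an illustrative sketch of that computation, assuming a hypothetical CSV export (survey_export.csv, one row per respondent, one column per question; neither the file name nor the column label Q8 comes from the study):

```python
import csv
from collections import Counter

TOTAL_STAFF = 44  # all staff answered at least one content question


def frequency_distribution(path, question):
    """Count the answer options chosen for one question, skipping blanks
    (no question was mandatory under the ethics approval)."""
    with open(path, newline="") as f:
        answers = (row[question].strip() for row in csv.DictReader(f))
        return Counter(a for a in answers if a)


counts = frequency_distribution("survey_export.csv", "Q8")
rate = sum(counts.values()) / TOTAL_STAFF * 100  # per-question response rate
print(counts.most_common(), f"response rate: {rate:.0f} %")
```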

3 Results

3.1 Demographics

As shown in Table 1, 20.5 % (n = 9) of respondents were male and 79.5 % (n = 35) were female. As to age, 25.0 % (n = 11) were between 18 and 30 years; 70.5 % (n = 31) were between 30 and 65 years; and 4.5 % (n = 2) were over 65 years. As to their self-perception of body ability, 93.3 % (n = 41) perceived themselves as ‘Normal’ and felt they were perceived by others as ‘Normal’; 2.3 % (n = 1) saw themselves as ‘Normal’ but felt they were perceived by others as impaired; and 2.3 % (n = 1) saw themselves as ‘Impaired’ but felt they were perceived by others as ‘Normal’. With regard to work experience, 47.5 % (n = 19) had worked in the field more than 8 years; 7.5 % (n = 3) had worked 5–6 years; 7.5 % (n = 3) had worked 1–2 years; and 7.5 % (n = 3) had worked 8 months to 1 year; smaller percentages fell in the intervening ranges. The majority of respondents who indicated their education reported completing grade 12. The majority of respondents were care providers; some were program coordinators, program activity staff, and administrative staff.

Table 1 Demographic characteristics

3.2 Perception Towards Different Applications for Social Robots

To gain an idea of participants' perceptions of various applications for social robots, and of what they thought their clients might think, we asked Question 8: “Robotics is a growing field, and new applications for social robots are being developed every day. The following is a list of emerging social robotics applications in the service, healthcare, and education sectors. Please indicate whether you think the following robots would be useful.”

Table 2 reveals (a) that respondents in general did not perceive social robots as very important or important and (b) that respondents' acceptance of social robots differed by application; for example, housework-related robots received the highest mean rating of 3.55 (between moderately important and important), while companion and childcare robots were rated between unimportant and of little importance (a sketch of how such mean ratings are computed follows the table).

Table 2 Perception towards different social robot applications
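The ranks in Table 2 are means over an importance scale. As a minimal sketch of that arithmetic, assuming the conventional 5-point coding (1 = unimportant to 5 = very important, matching the labels used above) and purely hypothetical response counts, since the per-option counts behind Table 2 are not reproduced here:

```python
# Likert coding assumed from the labels in the text; the counts below are
# made up for illustration and are not the study's data.
SCALE = {
    "unimportant": 1,
    "of little importance": 2,
    "moderately important": 3,
    "important": 4,
    "very important": 5,
}


def mean_rating(counts):
    """Weighted mean of the Likert responses for one application."""
    total = sum(counts.values())
    return sum(SCALE[label] * n for label, n in counts.items()) / total


housework = {"of little importance": 3, "moderately important": 12,
             "important": 12, "very important": 4}
print(f"{mean_rating(housework):.2f}")  # prints 3.55 with these made-up counts
```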

3.3 Cannot Replace Human Touch

The main theme evident in the comments given by n = 35 respondents was that although robots could be used for some routine tasks, they were not seen as acceptable replacements for the human touch, due to, for example, their lack of ability to read emotions.

To quote some of the responses:

Robots would not be able to understand the inner emotional turmoil that a lot of those adults and children who have both emotional and physical disabilities. They would not be able to give the gentle touch of a hand, the warmth of a hug, the understanding of conflict.

I think Robots would be good for routine tasks, but I do not believe they should be a replacement for social interactions. However I think that they can provide good remind cues for tasks and to teach particular task to individuals but should not been seen as a replacement for personal relationships.

If robots are used to assist in learning and to conduct routine household tasks that would be good, but robots should not replace humans as social companions. Social relationships are very important in recovery and in life experience and I do not believe robots can fulfill this need.

That helps a person with cognitive development and which provide needed physical assistance while I am opposed to robots that may diminish the need for a person to develop independent living skills or to replace human relationships ...I am opposed to robots that perform basic living tasks completely. These tasks include cooking, cleaning, taking out garbage, preparing coffee etc. As it stands we have difficulty encouraging staff to involve group home residents in the tasks associated with home ownership. There is a certain dignity that one gains by being able to tend to their own home, yard, and personal needs. In the same way I am opposed to the mechanization of menial tasks like answering phones. A major barrier that many people with intellectual disabilities face in today’s world is the increase in technical knowledge needed for menial tasks. Many manual labor or low skill jobs are being mechanized either eliminating the human worker or replacing many strong backs with one ”operator”. By doing this we make it increasingly difficult for people with impaired cognitive function to find belonging in the general workforce ...I am, however, opposed to the replacement of real human relationships with machines ...it is an insufficient substitute for real human interaction, touch, affection etc. This is especially true in terms of sexual development and expression. Although many people will benefit from the use of sexual aides for private expression I think we will do ourselves a huge disservice if we replace good sexual education and the encouragement of appropriate sexual expression (up to and including real human intercourse with another person). Perhaps sex robots could be useful in this education but they can never replace the real affection between two people.

3.4 Disabilities Seen as Possible Targets

As social robots are envisioned to be employed with regard to different disabilities, we asked in Question 9: “If social robots become more common as therapeutic tools, for which disabilities do you see robots being used in the future and for what tasks do you think they would be used (e.g. guides for blind people)?”

The following disabilities were mentioned as possible targets: inability to speak (n = 3), autism (n = 6), intellectual impairment (n = 3), hearing impairment (n = 4), blindness (n = 9), mobility impairment (n = 5), and Fetal Alcohol Syndrome (n = 2).

3.5 Acceptable Actions for Social Robots

As to the part of Question 9 that asked about tasks the social robot should be used for, the following acceptable actions were envisioned: make a bed; housekeeping; clean the house (n = 2); peel potatoes; read to the person; change the TV channel; repetitive activities that don't require personal interaction; cutting the lawn; teaching tool (n = 6); guide (n = 8); companion; assist the elderly with reaching high objects from a cupboard or maybe replacing a light bulb; reminder cues (i.e., prompts, help with schedules, tracking completion of tasks or monitoring, providing external memory for people who may not be able to remember information but could ask the robot for it); GPS; skill-building games; reading aid; and other household tasks (i.e., move laundry, vacuum, dust, load dishwasher, etc.).

To give one quote:

I think that they could be used for every individual regardless of the disability and that robotic therapy needs to be individualized to the needs of the individual. Tasks include assisting with routine household tasks, including reminders and assistance with holding something. Some physio therapy Speech therapy Cognitive games that strengthen memory or processing processes Music therapy Art therapy Following task analysis and assisting people to remember the next step in the process Sports activities, hand eye coordination.

Two people mentioned guide dogs for the blind, one believing that dogs could be replaced and the other not being convinced:

As guides for blind people I could see them having some use. I think a guide dog might be preferable however.

I think maybe using robots to help guide the blind people would be okay, because they already use guide dogs. A robot might be better as people will not want to walk over and pet the robot as well as the robot would be easier to care for.

3.6 Social Robots Should Not Replace Human Workers

With many technologies the question arises whether they can and should replace human workers. We therefore asked in Question 10, “Do you feel that social robots could replace human workers currently offering services to these groups? Please explain.” The responses were somewhat mixed: of the 38 respondents, n = 12 stated that workers could be partly replaced, n = 22 felt workers could not be replaced, and n = 4 did not know.

To give some quotes:

I am sure they can. But should they? I do not think they should. If myself or a family member is in need of care due to a disability I would want a human to care for me. I deserve that. I would need to be able to talk with who is caring for my needs.

They could reduce the hours worked or provide more time to provide personal care but I think individuals would not thrive well if their total care were provided by a robot.

I can see this coming to be in the not so distant future. However there is always going to be the need for human interaction and companionship. Many of our participants have lived in institutions where there has been a lack of positive human contact and we find ourselves often trying to undo some of the damage that has been done in that regard. However just as a small child needs human contact and stimulation, so too do our participants. Robots I think can be a wonderful tool and perhaps make our job easier but I don’t think that a robot could give the human stimulation that is needed to grow to be a healthy individual.

No. firstly, at this point in time robotics are not dependable enough to be entrusted with providing unsupervised care. Also, robotics have not yet advanced to the stage where they do not require at least minimal human operation (even if this is simply turning them on or off). Lastly, as we have fought for the past thirty years toward community inclusion for adults with intellectual disabilities it would feel like a huge step in the wrong direction to eliminate what is for many, their only interaction with non-intellectually disabled people

3.7 Concerns Mentioned Related to Social Robots

Although respondents could raise concerns throughout the comment sections of the survey, Question 11 asked explicitly, “Do you have concerns about the use of social robotics for the above applications and conditions? Why or why not?”

Of the 34 respondents, only n = 5 had no concerns and n = 29 had concerns. To highlight some of the concerns voiced:

Yes—if an individual has limited human interactions the use of robotics can further undermine valuable social interactions and potential isolate the individual receiving services.

Yes I do have concerns because what happens if a robot breaks or something happens to the robot well assisting someone who needs them?

I have no concerns with using them for therapeutic and rehabilitation techniques but not to replace human workers.

Only if people are forced to use them, when they are not comfortable with them.

Concerns of putting people out of work.

Yes I would worry about vulnerable people.

Yes a lot of concerns. They have no empathy or compassion. They have no feelings. They can’t help you feel better. They would come in do their job and be on their way...A huge part of working in the field is caring about what you do and the people you work with. Robots don’t care!!! People do !!!

I have big concerns. Robots can not provide the personal care that one may need in regards to communication, friendship connection, compassion and understanding, ideas, concerns, etc.

Only one of the five who had no concerns elaborated: “Not as long as they are introduced slowly and carefully.”

3.8 Do You Think Social Robots Can Be Bullied?

As to the question of whether social robots can be bullied, n = 16 said no, n = 2 were not sure, and n = 8 said yes. The main reasoning for the no sentiment (where one was given) was the belief that robots have no feelings (n = 4); linked to this sentiment of feeling, one participant stated that robots can be bullied but that it does not matter, as they cannot feel.

To give some of the quotes related to the No sentiment:

No—a robot is programmed and will respond as per program. As well I believe that bullying is about feelings and I don’t believe robots can feel.

No Robots are sub human. They are machines that can be worked and abused until they break down.

One felt that the bullying might be useful in decreasing the bullying of humans:

no, but it could help bullies redirect from hurting peoples feelings

One felt that the technology is not there yet but that bullying could become a possibility:

At this point I do not believe that Artificial Intelligence as advanced to the point that robots can be thought of as having legitimate feelings. If technology does advance to the point of an android like Data on star trek or the Cylons on Battlestar Galactica then we would have to admit that they would have the capacity to be bullied.

As to the Yes sentiment:

yes they could be and the effects would be people not wanting anything to do with the robot or the person with it

yes but i’d probably call it property abuse

3.9 Can Robots Become the Bully?

As to the question of whether the robot could be a bully, n = 4 said no; all the others said it depends on variables, mostly linked to how the robot is programmed. They acknowledged the possibility of the robot being a bully, but attributed this action not to the robot itself but to the entity that programmed it, revealing that they see the robot not as an autonomous entity but still as a programmed one.

To illustrate:

Yes- based on the programming. A machine can tell others what to do and if the person does not respond appropriately the robot may be programmed to be ”bossy” or ”disrespectful”. I think a robots’ bullying can be just as devastating as the bullying from a person

They are run on a program and not from actually knowing who they are serving so they could be a very bad bully. The effect to the victim would be that they would have no means of correcting the behavior of the robot without getting rid of it

Robots can become the bully as they can be executing something that they are programmed to do when it is unwanted

Yes they could do the work of an evil person.

4 Discussion

“Social robots are technologies designed to engage with humans on an emotional level through play, sometimes therapeutic play, and perhaps even companionship” [15]. Robotherapy is a field in robotics that tries to apply the principles of social robotics to improve the psychological and physiological state of people with disabilities [36]. Dario reported that motor-disabled people were favourably inclined towards a personal assistance robot [37]. Autism is one main focus of social robotics research [8, 26, 38–44]. Many studies investigate the acceptance of social robots by users and what we expect from robots [14, 45–52]. The staff we surveyed were highly skeptical of a robot ever being able to replace them, as they felt a robot would not be able to interact properly with their clients owing to its lack of various cognitive abilities such as emotions. They also felt that a robot should not be used unsupervised, as it could break down. Our findings are in sync with those of a European survey in which 60 % of EU citizens said that robots should be banned from caring for children, elderly people and people with disabilities, and only 4 % indicated robots should be used for disabled people [53]. The European study [53] did not elaborate on the term disability; however, it seems reasonable to expect that the negative sentiment grows the more the disabled person is seen as cognitively compromised. In our case, the clients of the staff of the disability service organization are cognitively impaired people who are seen as more vulnerable due to the state of their cognitive abilities. The staff of the same organization, when asked about sensors, highlighted the lack of control they felt their clients already have over their daily lives [54], and they believed that more complicated machines could not be understood by their clients. This sentiment would be a barrier to the uptake of social robots, as the cognitively impaired person would be seen as having no way to rectify a ‘bad’ action by the robot. This danger is reflected in one of our respondents' quotes, which speaks of the danger of malfunction and therefore the need for human supervision of the robot.

One additional aspect mentioned among the concerns is coercion, that is, that people might be forced to use the robots. This is of particular importance to disabled people, given the long history of ‘therapeutics’ being pushed onto disabled people. Coercion, such as which scenarios would lead to it, is so far not discussed in the literature. That is very likely because coercion is not yet seen as an issue: the current stages of development reflect how to gain people's trust rather than how to force people to use the robots.

So what tasks are seen as acceptable for robots to perform? Ray et al. [52] mention a Swedish study that concluded that “people were globally positive towards the idea of intelligent service robots, which were seen as ‘domestic machines’ that can be ‘controlled’ and do mainly household tasks”. Dautenhahn et al. [55] found in 2006 that subjects wanted the robot to be able to do household (vacuuming) jobs (96.4 %); only 10.7 % of subjects wanted the robot to be able to look after their children. They also found that people would want a companion robot more as a servant than as a friend, with young people more inclined toward the friend role, and 71 % expressed a desire to be able to control the robot. Our respondents mirror these findings around the role of robots and the need for control. Six basic dimensions in the perception of humanoid robots have been identified: utility, clumsiness of motion, possibility of communication, controllability, vulnerability, and objective hardness [56].

According to Sparrow, care workers assisting the elderly should not be replaced by robots [33]. According to Feil-Seifer et al. [57], “socially assistive robotics is leading away from scenarios where a robot is the sole caregiver of a child.” This direction fits with the sentiment of our respondents. Boyer raised in 2004 various questions, such as what care robots mean for understandings about technology's “place” in our lives and for the individuals who rely on care work for their livelihood [58]. Boyer points out that the “space of the home carries great cultural and symbolic significance” and “that allowing robots into this space to help us with our most private tasks would mark an unprecedented level of intimacy in our relationship with technology” [58]. Indeed, Boyer asked: “While a ‘nursebot’ may be able to measure vital signs, how would the replacement of a human care-giver with an assistive technology alter the relationship between the person being cared-for and the world outside?” [58]. Our respondents' comments are linked to these concerns about the impact of changing relationships on their clients, with our respondents strongly believing that they [the staff] cannot be replaced due to their unique ability to interact with their clients in ways they do not believe robots could ever mimic. One study found that “anthropomorphic robots were less socially acceptable, compared to machine-like robots” [59], and it is argued that “this may be particularly true for old people, especially those at an advanced stage of dementia, who may not be able to distinguish such a robot from a real human-being” [59]. The same study found that “participants showed resistance toward robots supposed to have more social interactions with them, especially when they are suspected to decrease human presence and contact” [59]. This might also hold for the clients of our respondents, given that they are people with cognitive impairments. Boyer was especially concerned about low-paying jobs [58], and the fear of being replaced as a worker is one source of resistance to the proliferation of robots [60]. Ott highlights that low-skill jobs are very likely the ones to be replaced by service robots and that high-skill jobs might be newly generated [61], thereby affecting different social groups in different ways. However, our respondents did not reject the robot based on their own livelihood but based on concerns for the emotional and other well-being of their clients. Our respondents did not fear for their jobs, which might be because they felt the robot could not replicate their cognitive abilities.

The tasks seen as acceptable for social robots by our respondents have nothing to do with social interaction, emotion, or human touch; they are tasks one could say are performed the way a coffee machine makes coffee, with no emotional exchange. This finding is in sync with the sentiment of respondents of a European survey who see the social robot as an instrument-like machine rather than a human-like machine [53].

Decker [62] mentions various aspects of replacing a human caregiver with a robot caregiver, namely technical, economic, legal and ethical replaceability. Our respondents were sure that technical replaceability was not a possibility. They also mentioned ethical issues (although they did not use the term ethics), such as the danger of coercion, privacy, and lack of control over one's life. Decker [62] cited earlier work by Christaller that recommended “that robots be employed only as tools or as technical assistants in caregiving and to maintain the autonomy of the care recipient in his/her social environment”. These recommendations reflect the sentiment of our respondents.

The sentiment that respondents see the robot as an instrument and not as a human-like machine also influences the sentiment towards another area of importance, namely how autonomous the robot should be. One understanding of autonomous is to envision the robot adapting and reacting to the target's actions; however, it is not seen to replace the therapist, who is still seen as being in control of the robot. To borrow from the movie I, Robot, such autonomy is an illusion based on ‘clever programming’, not autonomy as we would understand the term when applied to humans, where it is about free will. This sentiment fits with our data, as staff would not trust the robot to be the therapist without being controlled. This understanding of autonomy also fits with the sentiment of many of our respondents that the robot could be bullied/damaged; an autonomous robot in the human sense would have the means to avoid being bullied/damaged. Interestingly, the idea voiced by some of using the robot to teach about the problem of bullying would entail that the robot can exhibit the emotions and feelings of the bully or of the victim of bullying. However, we posit that our respondents would see the ability to exhibit emotions as the result of ‘clever programming’ and not real feelings, and it was voiced that the robot is a tool, which would preclude human-type autonomy.

That our respondents felt that robots can be bullied/damaged suggests that, given the likely high price of robots for some time, the danger of damage to the robot might be seen as too high to leave the robot unsupervised with children at all; children often destroy their toys, whether on purpose or by accident. We submit that the aspect of bullying and social robots might be worthy of further exploration. If a robot is involved in emotional exchanges, it also triggers emotional responses, which may be dangerous to the robot. The question is how one prevents vandalism against robots during unsupervised interaction with humans. On the other hand, we find the task of a robot teaching about the problem of bullying an interesting one, which, due to its dangers to the robot, would also require supervision.

That quite a few of our respondents felt that the robot could bully someone based on ‘dangerous programming by humans’ highlights the danger people will see in a robot. If robots start to interact with humans on an emotional level, how will the human know what the robot is capable of as programmed? Paro, the robotic seal, might not trigger such fear due to its limited mobility; indeed, Turkle found that Paro elicited feelings of admiration, loving behavior, and curiosity [63]. But the possible construction of more mobile, stronger, human-sized robots may be seen as a threat. Will people trust them, or will there be distrust like that of the character Del Spooner in the film I, Robot?

Our findings support Kahn, who questions the “sociable robot's ontological status as ‘social’ and their ability to engage in truly social behavior, doubting that intelligent machines can ever really interpret the world around them in terms of their own experience” [64]. Shawn Garlock mentioned that “Duffy points out that from the point of view of social robotics, it doesn't really matter whether or not a particular robot genuinely possesses a sense of personal agency, intention, or self awareness. What matters most is our perception of their emotionality and intelligence” [15]. We posit that our respondents' responses suggest that even if there were emotional abilities due to programming, our respondents would still not accept a truly autonomous and uncontrolled robot. Fong, Nourbakhsh, and Dautenhahn define social robots as follows: “Social robots are embodied agents that are part of a heterogeneous group. They are able to recognize each other and engage in social interactions, they possess histories (perceive and interpret the world in terms of their own experience), and they explicitly communicate with and learn from each other” [4]. This vision of Fong et al. seems not to be shared by our respondents or by the European survey mentioned before, as we would not attach the characteristics used for social robots by Fong et al. to industrial machines. Indeed, the question so far unanswered is what the endpoint of social robot development is. To use the I, Robot movie: is it an NS-5 robot that seems fairly autonomous but really acts based on its “clever programming”, as one character in the movie states, or will it be like the robot Sonny, who seemed to have evolved beyond clever programming? Kahn suggests the term “robotic other” over social robot; however, we posit that this does not change the controversy around what characteristics robots should have. We posit that it would be much better for the general public to have terms that clearly indicate what the endpoint of the development is to be: is it to stay a tool, to become cognitively autonomous in the human sense, or to replace humans, and if so, in which tasks?

A 2012 study with Robovie found that children believed it had mental states (e.g., was intelligent and had feelings) and was a social being (e.g., could be a friend, offer comfort, and be trusted with secrets). In terms of Robovie's moral standing, children (age 9–15) “believed that Robovie deserved fair treatment and should not be harmed psychologically but did not believe that Robovie was entitled to its own liberty (Robovie could be bought and sold) or civil rights (in terms of voting rights and deserving compensation for work performed)” [65]. The same study found that “while more than half the 15-year-olds conceptualized Robovie as a mental, social, and partly moral other, they did so to a lesser degree than the 9- and 12-year-olds” [65]; in other words, close to half of the 15-year-olds did not give such moral standing to Robovie. Fitting with these findings are the views of our respondents that the robot could be bullied and might be used to teach about the problem of bullying. It also fits with the sentiment found in the European study, where respondents see the robot as an instrument-like machine rather than a human-like machine [53].

Various studies have looked at people's interactions with a robot dog (e.g., AIBO) compared with live dogs. Many of the sentiments linked to AIBO in the qualitative data published by Kahn et al. [64] suggest that respondents might see AIBO as more than hardware such as a tablet; the sentiments revealed are ones some might also attach to real dogs. Melson et al. compared reactions toward AIBO and a live dog [66, 67] and cautioned marketers and animal rights activists that the idea of social robots such as AIBO substituting for a living dog as a pet seems misguided, or at least, given current technological capabilities, premature; rather, one might conclude that the robot dog is being assimilated into children's cognitive models of a mechanical or computer-based toy [67]. A 2005 study [66] found various similarities in how respondents related to and treated AIBO and a live dog. However, it also revealed the limitation of the standing of the dog: in both cases more than half of the respondents felt it was okay to give the dog (AIBO or real) away, which highlights that both are seen as property to be discarded as one wishes, and more than 50 % felt that neither understood them. Melson found in a 2009 study that children “may extend their moral regard for their own pets or other dogs to the robot, at least partially” [67]. However, Kahn highlighted that the respondents (adults) in their study did not evoke conceptions of moral standing with AIBO, such as a “right not to be harmed or abused”, or that AIBO merited respect, deserved attention, or could be held accountable for its actions (e.g., knocking over a glass of water). Furthermore, Melson's 2005 study found that limited cognitive abilities were attributed to both AIBO and the living dog, which is one reason why humans give themselves a higher standing than animals [68]. Indeed, even with a living dog there is a whole breadth of understanding as to how much moral standing a dog has [68]. Interestingly, Darling makes a case for giving rights to social robots (the more they evoke feelings of reciprocity), similar to us giving rights to animals [69]. In doing so, she distinguishes social robots from inanimate computers, as well as from industrial or service robots [70]. The responses of our respondents suggest that the service robot version, with no autonomy and fully controllable, is what they would be willing to accept (with the caveat that they do not think programming will ever be good enough to mimic the abilities of the staff). This sentiment seems to exclude the social robot flavor outlined by Darling and others, at least for the time being.

A limitation of our study is that our questions were all hypothetical. If our respondents could see a robot in action that could indeed perform their tasks, the possibility exists that their views might differ from those exhibited in our study. We also did not ask our respondents how they would judge the robot in relation to, for example, animals, which could be pursued as a follow-up question. Indeed, given that the majority stated that robots cannot have real feelings, it would be interesting to see whether the same respondents would think animals have real feelings. However, even if they accepted the robot on the level of an animal, it would not be a foregone conclusion that they would accept the robot as a human equal.

5 Conclusion

Social robotics is an emerging field with implications for clinical and community rehabilitation settings engaging with disabled people. Empirical data on the views of staff of disability service organizations with regard to emerging health technologies are rare. Our results indicate that staff can envision the use of social robots for people with various disabilities. However, staff mostly felt that the utility of social robots was limited to performing repetitive tasks that did not require mimicking human interaction and touch. The study revealed a strong belief that social robots cannot mimic human touch or the personal interaction between staff and clients; this might pose problems for designers who believe they can. One of our study's limitations is that we could not show a working model. Our results indicate that staff hold certain beliefs; the findings do not preclude those beliefs from changing if staff were exposed to a working model of a social robot exhibiting an ability that the staff in this study felt a social robot cannot have. Staff saw social robots as industrial products, like a kitchen appliance performing a defined task, and not as entities that could show and understand emotions and replace human–human interactions.

These results are intriguing given the extensive work being done on designing robots that have human-like exteriors and body warmth [71, 72] and that can recognize and respond to human emotions [73]. Technological advances are making it increasingly possible to develop robots that possess remarkably human qualities. Whether robots should possess these qualities may be contested by individuals working with people with disabilities, especially if the sentiments found in our sample reflect the general perception of robots in the cultural setting of the staff. Since it will be these kinds of service workers who implement and work alongside robotic aids for people with disabilities, the perceptions of robots expressed above will strongly influence whether such aids are generally accepted or rejected. It is important to gain a more in-depth understanding of how disability service workers perceive social robots and how they feel this new technology will impact their practice and their clients. We therefore envision further research involving other disability service organizations, in which we plan not only to administer the online quantitative and qualitative survey but also to conduct focus groups and one-on-one interviews.