
What Makes a Robot Social? A Review of Social Robots from Science Fiction to a Home or Hospital Near You


Purpose of Review

We provide an outlook on the definitions, laboratory research, and applications of social robots, with an aim to understand what makes a robot social—in the eyes of science and the general public.

Recent Findings

Social robots demonstrate their potential when deployed within contexts appropriate to their form and functions. Examples include companions for the elderly and for cognitively impaired individuals, robots within educational settings, and tools to support cognitive and behavioural change interventions.

Summary


Science fiction has inspired us to conceive of a future with autonomous robots helping with every aspect of our daily lives, although the robots we are familiar with through film and literature remain a vision of the distant future. While there are still miles to go before robots become a regular feature within our social spaces, rapid progress in social robotics research, aided by the social sciences, is helping to move us closer to this reality.

Introduction


Since its inception, the scientific field of robotics has been closely entwined with science fiction literature, with the word ‘robot’ first appearing in Karel Čapek’s 1920 play ‘Rossum’s Universal Robots’. In this play, robots that look almost indistinguishable from humans are exploited as factory slaves and later rebel against their human makers, a popular trope in science fiction. The term ‘robotics’ was coined somewhat later by Isaac Asimov in his 1941 short story ‘Liar!’, which features a robot that is compelled to lie so as not to upset its human creators. While these terms are historically quite recent, visions of automata have existed for almost as long as humans have lived together in societies. Spanning back to at least ancient Egypt, Greece, and China, and including the Golem of Jewish mythology, the eighteenth-century ‘Turk’ (a fake chess-playing machine, controlled by a human hiding inside the device), and the friendly Japanese ‘Gakutensoku’, mechatronic puppets and automatons have fuelled the public imagination across cultures and history with what might be possible in terms of human-fabricated autonomous agents that interact with us—almost as equals [1, 2].

Science fiction has further inspired us to conceive of a future where autonomous robots help with every aspect of our daily lives, although the robots we are familiar with through films like Ex Machina or Robot & Frank, whether depicted as helpers and companions or as villains, remain a vision of the distant future [3••]. When we encounter robots ‘in the wild’ (Fig. 1), this discrepancy between the reality of social robots and our expectations of them becomes even more salient. Accordingly, Duffy and Joue coined the term ‘social robot paradox’, which has remained a critical point in social robotics over the years [4]. Speaking of this paradox, Duffy states:

In fact, humanoid robots outside of science fiction, have thus far only been toys or research platforms with nebulous applications. It is intriguing that one of the most powerful paradigms for adaptivity and flexibility, the human, has so far, when modelled in the form of a machine, resulted in little more than a toy. Its usefulness is very limited. (p. 1)

Many social robot developers have designed their creations to incorporate human characteristics, while at the same time being careful not to imitate human appearance or motion too closely, so as to avoid falling into the Uncanny Valley [5]. While a human-like embodiment as a design feature for social robots is a powerful signal to users that the agent affords social interactions, it also makes the robot more prone to failing to deliver on high expectations regarding the nature of the interaction (e.g. [6,7,8]).

Fig. 1

Recent examples of the Pepper robot ‘in the wild’. a The social robot was placed at the customer checkout in a German supermarket and reminded shoppers of new hygiene regulations to ensure public health in April 2020, during the global coronavirus pandemic. b Pepper in a Dutch souvenir shop at Schiphol airport. (Photos taken by Anna Henschel)

This observation still rings true, with new social robots moving away from referencing the human form. Zoomorphic and pet-like robots (e.g. the Paro and MiRo robots, see Fig. 2) have been developed to enter people’s homes and address specific needs of their target populations (e.g. within care settings, with older adults, and with people with cognitive impairment). One way in which social robots and other kinds of artificial agents can provide acceptable solutions to people’s social needs (in certain situations) is by not raising people’s expectations of their capabilities to unrealistic levels. The robot Jibo (Fig. 2) serves as a cautionary tale on this point. Jibo was among the first social robots developed for private consumers and was introduced in 2014 as a family robot designed to take up residence in people’s homes, establish social relationships with its users, and serve as a personal assistant [9, 10]. By 2017, the company had announced layoffs [11]; it sold its intellectual property and assets in 2018 [12], and in 2019 Jibo announced to its users the imminent shutdown of its servers [13].

Fig. 2

Examples of several social robotics platforms that are heavily used in research and/or have enjoyed commercial success, and are discussed in this review. a Paro, the cuddly baby harp seal robot. b MiRo, the puppy/bunny-like robot. c Jibo, the erstwhile personal home assistant robot. d iCub, the humanoid robot testbed for human cognition and AI. e Nao, a humanoid robot. f Darwin, a small humanoid robot (now discontinued)

While Jibo ultimately failed, disembodied and functional personal assistants like Amazon Alexa or Google Assistant, which neither reference the human form nor are designed to establish social relationships with users, have been commercially successful [14,15,16,17]. Following on from Duffy and Joue’s suggestion [4], it could be that attempts to create ever more human-like robots, in terms of form and function, lead to unrealistic expectations of robots’ capabilities in human users, and thus to less effective human-robot interactions. Instead of trying to design social robots in line with science fiction’s unrealistic expectations, it will be important to understand when and why a robot should look or behave in a human-like way, and when this approach is ineffective or problematic. This observation raises questions regarding the value and definitions of what the concept of ‘social’ means within the interdisciplinary field of human-robot interaction.

In the current review, we provide an outlook on the definitions, laboratory research, and applications of social robots. We begin by examining definitions of a social robot through the eyes of both scientists and users. Next, we address the lack of social and behavioural science research in social robotics, what the field can learn from social, behavioural, and neurocognitive research, and how principles from these disciplines are applied in today’s social robots. Finally, we review some areas of application that successfully capitalize on robots’ social design and abilities.

What Is ‘Social’ About Social Robots?

In the social robotics literature, no universally agreed-upon definition of a social robot exists. Furthermore, consensus is lacking regarding what these robots do and what, specifically, makes them social. Within the field of human-robot interaction (HRI), social robots take on a special role and fall under the category of ‘proximate interaction’, in which ‘humans and robots interact as peers or companions’ [18]. Based on citation information extracted from articles, Mejia and Kajikawa [19] identified relevant clusters that represent the social robotics knowledgebase. The largest clusters in social robotics research can be summarized as ‘robots as social partners’ and ‘human factors and ergonomics in human-robot interaction’. Interestingly, the authors point out that research trends emphasize the various fields of application for social robots: robots as companions, robots as educators for children, and robots as assistants for the elderly. This is consistent with a trend identified by Šabanović, who, in interviews with robotics researchers in the USA and Japan, found that social robots ‘often represent technological fixes’, i.e. attempts to use a technological approach to solve a pressing societal problem ([20], p. 349).

Sarrica and colleagues [21] investigated how social robots are understood by analyzing definitions in articles published in the International Journal of Social Robotics between 2009 and 2015. Examining the most often cited definitions makes apparent how heterogeneous the understanding of social robots is. Through this work, Sarrica and colleagues identified a few shared traits: social robots are physically embodied agents that have some (or full) autonomy and engage in social interactions with humans, by communicating, cooperating, and making decisions. These behaviours are then interpreted by human onlookers as ‘social’, according to current norms and conventions.

A study by de Graaf, Allouch, and van Dijk [6] evaluated users’ perspectives on the characteristics of social HRI through a longitudinal home study. They identified eight main characteristics that users described as necessary for a robot to appear social and be accepted as a social entity in their homes. The most prominent factor was (1) the capability of two-way interaction: users expected a robot to respond to a human in a social manner, and when a robot failed to do so, people were disappointed and experienced a sense of dissonance. Beyond needing robots to share the same environment as them (i.e. be physically embodied or embedded), users also expected robots to (2) display thoughts and feelings; (3) be socially aware of their environment; (4) provide social support by being there for them (like their friends); and (5) demonstrate autonomy. Participants also raised the concepts of (6) cosiness, (7) similarity to self, and (8) mutual respect, although these latter three concepts were mentioned less often than the first five. While users’ perceptions of robots’ socialness share many similarities with scholars’ definitions of social robots, some key differences also emerge. Users’ expectations, as described in de Graaf and colleagues’ [6] study, were influenced by their relationships with other social actors (i.e. their friends). Participants repeatedly compared the robot in that study to their friends, dwelling on the fact that the robot’s lack of social capabilities meant that it would be unlikely to become an actual ‘friend’. By contrast, the definitions of a social robot described in Sarrica and colleagues’ [21] review focus on general social and communication capabilities. It is of note, however, that these definitions rarely address the context of the interaction, whose importance is underscored by the findings of de Graaf and colleagues [6].

This discrepancy has been noted in other user studies as well. Dautenhahn and colleagues [22] showed that participants in their studies did not see robots as companions or friends, but rather as useful household servants. Dereshev and colleagues [7] interviewed long-term, expert users of the Pepper robot (SoftBank Robotics; seen in Fig. 1). Their participants had lived and interacted with the robot on timescales ranging from 8 months to more than 3 years. The researchers report that one specific expectation regarding the humanoid Pepper robot was its ability to engage in reciprocal conversation. Participants were disappointed when the robot was not able to go beyond a smart-speaker-like, single-turn structure of conversation. One of the participants also pointed out that people who interacted with Pepper quickly lost interest, a finding which is echoed in a usability study by Aldebaran (later purchased by SoftBank Robotics), in which Pepper was deployed to the homes of users over several weeks [8]. The novelty effect is a common problem in social robotics, and long-term studies have often found reduced engagement with various robotic platforms over time [23, 24].

Finally, Baraka and colleagues [25] recently proposed an ‘extended framework’ for social robotics by illustrating seven relevant dimensions of social robots: a robot’s (1) appearance, (2) social capabilities, (3) autonomy, and (4) intelligence, the (5) proximity and (6) temporal profile of the interaction, and the (7) context of the interaction (such as its purpose or intended application). In their appearance classification system, they distinguish between bio-inspired robots (e.g. human- or animal-inspired), artefact-shaped robots (e.g. those resembling man-made or imaginary objects), and functional robots (e.g. drones). Additional recent efforts to establish frameworks for designing and evaluating social robotics research emphasize that, amid the enthusiasm of researchers from different fields to amplify or focus on the social aspects of social robots, these robots remain, at their core, machines, and HRI research will be well served by keeping robots’ machine- or object-like qualities in mind as well [26].
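The seven-dimension framework above lends itself to a simple structured encoding. The sketch below is purely illustrative: the class and field names, the enum values, and the coarse 0-5 ratings are our own shorthand for the dimensions Baraka and colleagues [25] describe, not an implementation from their paper.

```python
from dataclasses import dataclass
from enum import Enum


class Appearance(Enum):
    """Baraka et al.'s appearance classes (labels are our own)."""
    BIO_INSPIRED = "bio-inspired"        # human- or animal-inspired
    ARTEFACT_SHAPED = "artefact-shaped"  # resembling man-made or imaginary objects
    FUNCTIONAL = "functional"            # e.g. drones


@dataclass
class SocialRobotProfile:
    """One robot located along the seven dimensions of the framework."""
    appearance: Appearance
    social_capabilities: int  # coarse 0-5 rating (our own convention)
    autonomy: int             # coarse 0-5 rating
    intelligence: int         # coarse 0-5 rating
    proximity: str            # e.g. "remote", "co-located", "physical contact"
    temporal_profile: str     # e.g. "single encounter", "long-term"
    context: str              # purpose or intended application


# Example: how a Paro-like companion robot might be classified
paro_like = SocialRobotProfile(
    appearance=Appearance.BIO_INSPIRED,
    social_capabilities=2,
    autonomy=3,
    intelligence=1,
    proximity="physical contact",
    temporal_profile="long-term",
    context="companionship in care settings",
)
```

Such an encoding makes it easy to compare platforms along a single dimension, for instance filtering a catalogue of robots by appearance class.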

Interdisciplinary Tensions

The bibliometric analysis by Mejia and Kajikawa [19] referenced above also highlights that the social robotics literature comprises only a small portion (2.3%) of the larger robotics knowledgebase. When further investigating the extant social robotics literature, Mejia and Kajikawa [19] find that even though concepts of socialness play a central role, the social sciences are hardly represented. The authors write aptly: “Social robotics is social in its intention, but its knowledgebase is concentrated in the engineering and technology domains” (p. 11). This lack of social, behavioural, and cognitive science input into social robot development highlights both a challenge and an opportunity for future roboticists to work towards effective interdisciplinary collaborations with social scientists. Indeed, while the interdisciplinary nature of social robotics is emphasized throughout the literature, this observation by Mejia and Kajikawa reveals an interesting tension that has also been voiced by Broadbent [3••] and Eyssel [27]: the literature could benefit from knowledge about the mechanisms of human social behaviour gained through psychology, cognitive science, and neuroscience. As Fig. 3 illustrates, texts gathered from the proceedings of one of the premier conferences for debuting new empirical and theoretical work in social robotics (ACM-HRI) include some social science mentions, even if these concepts are not among this conference’s (current) core content. Irfan and colleagues [28] argue that, as HRI is positioned between engineering and the social sciences (specifically social and cognitive psychology), HRI researchers should aim to develop novel methodology inspired by these scientific disciplines, while also learning from the mistakes and successes of these fields.
With psychology researchers continuing to grapple with the replication crisis (the concerning lack of reproducibility of published findings), HRI researchers would be well served to keep in mind these new approaches and methods to ensure their own work is as rigorous and valid as possible [29]. And as Irfan and colleagues [28] also argue, HRI researchers should aspire to establish robust and reliable scientific standards for empirical HRI research. The fact that research rigour is receiving increasing attention in the domain of HRI will only benefit the field [30]. Furthermore, in a recent opinion piece, our group has emphasized and provided concrete examples of where empirical HRI and social robotics research can follow open science practices and focus on ensuring high reproducibility of research findings [31•].

Fig. 3

Subject areas in the ACM-HRI conference proceedings, presented as a word cloud with the size of each word representing the number of conference proceedings in that category. While robotics is the most frequent category (947 results), there are some nods to the social sciences: psychology (143 results), user studies (175 results), and empirical studies in interaction design (57 results). (Screenshot taken from

What Can Social Robotics Learn from the Social, Behavioural, and Cognitive Sciences?

In order to most appropriately and convincingly answer what makes a robot social, research will clearly benefit from a broader variety of empirical disciplines providing a complementary outlook. One field that we would argue provides particularly rich opportunities for interdisciplinary collaboration with social robotics is cognitive neuroscience: the study of the biological processes that support cognition [32]. When cognitive neuroscience theory and methods are applied to HRI research, they allow us to probe how the human brain processes and reacts to robots, and these insights, in turn, can help facilitate further development of social robots [33]. Previous research in cognitive neuroscience has used social robots to address questions regarding attention (e.g. [34,35,36,37]), theory of mind (e.g. [38,39,40,41]), mind perception (e.g. [42,43,44,45]), intention attribution (e.g. [40, 46, 47]), and decision making (e.g. [48, 49]).

As an example of this bidirectional loop of cognitive neuroscience research informing robotic design, iCub, the ‘robot child’, is based on theories of developmental psychology and cognitive neuroscience [50, 51] and was developed as a testbed for the theory of embodied cognition, which describes learning and development through physical interaction with the world via a human(oid) body [51]. Like a child exploring its environment, iCub was designed to manipulate its surroundings, imitate its human partners, and communicate with them. iCub has been used in cognitive neuroscience studies to investigate whether humans perceive it as intentional and as an agent with a mind [52, 53•]. Across several studies, it has been shown that the degree to which participants perceive the robot as behaving intentionally is profoundly shaped by participants’ knowledge or beliefs about the robot [47, 54].

In addition to cognitive neuroscience, research from psychology relating to social cognition is also informing social robotics development, and vice versa. Social cognition can be defined as the processing, storing, and application of information about social beings and situations, and this discipline can help establish a role for cognitive processes during social interactions with social robots. Moreover, using social robots as research tools, we can learn more about ourselves as humans through a social-cognitive lens [33, 55]. Social concepts like trust (e.g. [56,57,58,59]), attachment (e.g. [60]), empathy (e.g. [61]), acceptance (e.g. [57, 62]), and disclosure (e.g. [42, 63,64,65,66,67]) with social robots are being studied. In addition, the use of social robots is growing in complex social contexts such as those found in education (e.g. [58, 59, 68]), service (e.g. [69]), and care sectors (e.g. [70,71,72]).

It is worth noting that several commercial robots that are widely used in research are strongly informed by (and continue to inform) the social, behavioural, and cognitive sciences. Some of these robots take on a humanoid form, such as the Pepper and Nao robots by SoftBank Robotics (Figs. 1 and 2). Mubin and colleagues [73] investigated the use of Pepper and Nao in public spaces, and a range of studies have evaluated Pepper’s social acceptability in shopping malls, elderly care homes, remote classrooms, and as a customer service employee in a hotel lobby [24, 74,75,76]. While a humanoid robot may be valuable in these contexts, other developers have taken a different approach with the MiRo robot (Consequential Robotics). MiRo is a biomimetic system whose design (in terms of form and function) does not aim to be humanlike (Fig. 2), but instead takes its cues from (lower) mammalian brain and behavioural systems, such as those of a rabbit or dog [77]. The developers explicitly justify their choice of animal morphology as a strategy to mitigate potential disappointment of users’ expectations towards the social capabilities of the robot. The robot’s design features light patterns under the translucent shell of its back, which serve two goals: the simple communication of affect, and increasing the salience of interacting with an artificial, rather than a real, social agent [77]. The robot, which evokes a pet-like impression, includes characteristics modelled on “puppies, kittens and rabbits” ([77], p. 2). It is described as an ‘edutainment’ product, alluding to its intended purpose as an educational tool for children. However, MiRo has also been explored as a fall alert system, which is especially relevant to the elderly population [78]. In their proof-of-principle study, the authors demonstrated that MiRo could be used as a mobile, smart tool to locate a person on the ground and send a help signal if no movement of the person is detected. These different embodiments highlight that different types of social robots are valuable and appropriate in different contexts.
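The alert logic in that proof-of-principle study can be sketched in a few lines. This is a hypothetical reconstruction, not the authors’ implementation [78]: the function name, the observation format, and the 30-second timeout are our own assumptions.

```python
# Assumed threshold (not from the paper): how long a person may lie still
# on the ground before the robot raises a help signal.
NO_MOVEMENT_TIMEOUT_S = 30.0


def should_send_alert(observations):
    """Decide whether to raise a help signal.

    observations: chronological list of (timestamp_s, person_on_ground,
    movement_detected) tuples, as might come from a person-detection
    pipeline. Returns True if the person has been on the ground with no
    detected movement for at least NO_MOVEMENT_TIMEOUT_S.
    """
    if not observations:
        return False

    # Track the last moment we had evidence the person was okay:
    # either they were not on the ground, or they were moving.
    last_activity = None
    for t, on_ground, moving in observations:
        if (not on_ground) or moving:
            last_activity = t

    if last_activity is None:
        # Person was on the ground and motionless for the whole window.
        last_activity = observations[0][0]

    latest = observations[-1][0]
    return (latest - last_activity) >= NO_MOVEMENT_TIMEOUT_S
```

For example, a person detected on the ground and motionless for 40 seconds would trigger an alert, whereas a person who moved 10 seconds ago would not.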

To summarize, this section highlights how theoretical underpinnings and empirical work spanning the social, behavioural, and cognitive sciences can inform the development and deployment of social robots. While the field of social robotics seems to be in unanimous agreement that greater integration with these fields will accelerate and enhance social robot development, challenges to working across disciplines remain (as discussed in the previous section), and it will be important to overcome them if the social robotics applications surveyed in the following section are to be introduced at a larger scale. Continuing research with different types of social robot morphologies, utilizing the social sciences’ rigour and methodology, will ultimately advance social robotics.

Social Robots Deployed in the Wild

Recalling the cautionary tale of the Jibo robot introduced above: this story, too, may have a happy ending. Earlier this year (March 2020), the assets for Jibo were acquired by the Japanese telecommunications company Nippon Telegraph and Telephone (NTT) [79]. Interestingly, NTT decided to focus Jibo’s future on health care and education. Instead of developing Jibo as a personal assistant robot that people can buy and use straight out of the box, NTT plans to market Jibo to businesses that provide certain services (such as healthcare and education) as a tool for professionals to use [80, 81]. Supporting this decision is NTT’s assessment that Jibo will be more valuable as an enterprise product in these designated domains than as a consumer product. Surveying this area more broadly, the application of social robots within care settings, and as tools to deliver health and well-being interventions, is already an emerging success story highlighting contexts in which social robots are successfully deployed as autonomous assistance tools for human users [82]. While it remains uncontroversial that social robots do not (yet) offer the same opportunities for social interaction as humans do [33], they can nonetheless afford valuable opportunities for social engagement when introduced in specific contexts, and in careful, ethically responsible ways [83, 84]. A growing evidence base documents how social robots might function as autonomous tools to support psychological health interventions [42, 85], physical therapy and physical health [86,87,88], and other means to amplify or support human therapeutic efforts (see [89•, 90]). Moreover, social robots are being equipped with technologies such as sensors, cameras, and processors, which enable the collection of human data (such as where a person is standing, where they are looking, what they are saying, etc.) with high fidelity, as well as support on-line, ongoing analysis of a human interaction partner’s behaviour.

Research into the application of social robots in psychosocial health interventions highlights how social robots that take on different forms of embodiment and design can benefit different interventions. For example, robots like Paro, which take on a zoomorphic pet- or cuddly toy-like embodiment, hold value for interventions when used with appropriate target populations, including older adults in care homes and people with cognitive impairment (e.g. dementia) [91, 92]. A review by Hung and colleagues [93] found that previous studies using Paro provided evidence of this robot reducing negative emotions in patients, improving their social engagement, and generally promoting positive mood, atmosphere, and quality of care experience. Moreover, a recent study documents the psychophysiological benefits of interacting with a companion robot like Paro, demonstrating that stroking Paro reduces pain perception and salivary oxytocin levels [94]. Other research demonstrates how different robot forms can have negligible impact on psychosocial health interventions. A recent study by our group [42] examined how social robot and voice assistant technology might be used to support people’s psychological health through conversation. While participants were aware of many of the obvious differences between speaking to a humanoid social robot compared to a disembodied conversational agent (the Google Nest Mini voice assistant, in this case), their verbal disclosures to both were similar in length and duration. This finding thus suggests that human-like embodiment for this particular kind of conversational intervention did not lead to improved outcomes.

In contrast, health interventions that require more active participation are finding that robots with a more human-like embodiment are more effective. One such study by da Silva and colleagues [95] tested an intervention for students with the humanoid Nao robot, aimed at encouraging their motivation to exercise through motivational interviewing. The results demonstrated that some participants felt the intervention increased their physical activity levels and their motivation to exercise. Interestingly, participants expressed a positive opinion of Nao as it appeared to be non-judgmental. This is a meaningful benefit of using social robots in psychosocial interventions, as these machines can overcome some of the social desirability limitations that arise when similar interventions are delivered exclusively by people. Another study using Nao demonstrated its viability for delivering a behaviour change intervention, applying a motivational intervention for reducing high-calorie snack consumption [96]. This study reported a > 50% reduction in snack episodes between the beginning of the intervention and week 8, and an average weight reduction of 4.4 kg over the first 2 weeks of the treatment. Four weeks from the beginning of the intervention, participants reported an increase in their perceived confidence in controlling their snack intake and their emotional states. These results demonstrate that in certain contexts and settings, social robots have the potential to autonomously deliver behaviour change interventions. While some evidence suggests that an intervention delivered by a social robot can be as effective as one delivered by a human (e.g. [96]), many significant open questions remain regarding the cost, ethics, and long-term efficacy of machine- vs. human-based health interventions.

Social robots with more degrees of freedom in terms of their movement and behavioural repertoire can provide more advanced assistance, for example, by demonstrating complex physical movements to assist with rehabilitation, build physical fitness, and help people cope with injury and illness [88]. A recent study by Feingold-Polak and Levi-Tzedek [97] reported positive outcomes for a long-term upper limb rehabilitation intervention delivered via the humanoid social robot Pepper to post-stroke patients in a rehabilitation facility. Moreover, clinicians and patients in this study found the intervention with Pepper to be engaging, motivating, and, most importantly, to meet the needs of upper limb rehabilitation. Similar work has examined how the smaller, less expensive Nao robot can also deliver physical therapy for upper limb impairment, and shows similar effectiveness of this robot in rehabilitation contexts with adults [86]. Furthermore, Chen and colleagues [87] have shown that an even more compact and simple social robot (Darwin from RobotLab, San Francisco, CA, USA) can be effectively deployed to assist children with and without cerebral palsy in performing reaching actions. This work further underscores the potential value and utility of embodied social robots for building physical capacity in individuals across the lifespan.

To summarize the state of the art on the potential of social robots to contribute to the greater good of society: increasing research effort is being invested in this domain, and early results speaking to how robots might support human psychosocial and physical function are promising. The current public health crisis has thrown into even starker contrast the value of, and need for, not just technological solutions, but embodied technological solutions to help people stave off loneliness, as well as learn and connect with others when in-home learning and social distancing are the new normal [98]. Social robotics can undoubtedly contribute to improving people’s quality of life [99], but the need remains for more methodologically rigorous and ethically sound research into how social robots might interact with humans in a sensitive, timely, and nuanced manner.

Conclusion


In this review, we reflected on the paradox of robots’ limited socialness, and how it can be better defined, studied, and applied. It is apparent from the literature that a substantial gap remains between how social robots are defined by scientists and roboticists and the general public’s expectations of and experience with robots. Social robotics remains a small subdiscipline of robotics that envisions robots as assistants and companions. As this review highlights, it is also a heterogeneous and multidisciplinary field, which can greatly benefit from deeper integration with and feedback from the social, behavioural, and neurocognitive sciences. The research reviewed here shows how, despite real limitations in social robots’ capabilities due to the current state of technology, they nonetheless hold potential to enhance human life, particularly in some education, psychosocial support, and rehabilitation contexts. The research reviewed in the context of these robots further highlights their usefulness as a testbed for human social cognition, in terms of probing its flexibility and dimensions [100]. Despite this, many questions remain regarding the capabilities of robots to take on more social roles, especially if they are to work autonomously alongside human users in complex social settings.


Papers of particular interest, published recently, have been highlighted as: • Of importance •• Of major importance

  1. Frumer Y. The short, strange life of the first friendly robot. IEEE SPECTRUM. 2020. Accessed 21 May 2020.

  2. Schwartz O. Untold history of AI: when Charles Babbage played chess with the original Mechanical Turk. IEEE SPECTRUM. 2019. Accessed 18 Mar 2019.

  3. •• Broadbent E. Interactions with robots: the truths we reveal about ourselves. Annu Rev Psychol. 2017;68:627–52. This seminal work by Broadbent was the first major social robotics piece to capture the attention of psychologists working across a number of subdisciplines, by outlining the value and utility of using robots to examine fundamental features of human behavior, perception, and cognition.

  4. Duffy BR, Joue G. The paradox of social robotics: a discussion. AAAI Fall 2005 Symp Mach ethics. Hyatt Regency; 2005.

  5. Pandey AK, Gelin R. A mass-produced sociable humanoid robot: pepper: the first machine of its kind. IEEE Robot Autom Mag. 2018;25:40–8.

  6. de Graaf MMA, Ben Allouch S, van Dijk JAGM. What makes robots social?: a user’s perspective on characteristics for social human-robot interaction. In: Tapus A, André E, Martin J-C, Ferland F, Ammi M, editors. Soc robot. Cham: Springer International Publishing; 2015. p. 184–93.

  7. Dereshev D, Kirk D, Matsumura K, Maeda T. Long-term value of social robots through the eyes of expert users. Proc 2019 CHI Conf Hum Factors Comput Syst. New York: Association for Computing Machinery; 2019. p. 1–12.

  8. Rivoire C, Lim A. Habit detection within a long-term interaction with a social robot: an exploratory study. Proc Int Work Soc Learn Multimodal Interact Des Artif Agents. New York: Association for Computing Machinery; 2016.

  9. Breazeal C. JIBO, the world’s first social robot for the home [Internet]. Indiegogo. 2014. Accessed 15 Sep 2014.

  10. Hodson H. The first family robot. New Sci. 2014;223:21.

  11. Martin D. Layoffs hit Jibo more than a month after social robot’s launch [Internet]. BostInno. 2017. Accessed 15 Dec 2017.

  12. Ackerman E. Jibo is probably totally dead now [Internet]. IEEE Spectr. 2018. Accessed 3 Dec 2018.

  13. Heater B. The lonely death of Jibo, the social robot [Internet]. TechCrunch. 2019. Accessed 4 Mar 2019.

  14. Kinsella B. Consumer robots are dead; long live Alexa [Internet]. USA Today Tech. 2018. Accessed 13 Dec 2018.

  15. Kinsella B. Jibo shuts down, selling off robot parts [Internet]. 2018. Accessed 3 Dec 2018.

  16. Linus Tech Tips. TERRIBLE $900 party trick – Jibo review [Video file] [Internet]. 2017. Accessed 27 Dec 2017.

  17. Williams A. Virtual assistants evolve, but will they be integrated in robots? [Internet]. Robot. Bus. Rev. 2018. Accessed 8 Oct 2018.

  18. Goodrich MA, Schultz AC. Human-robot interaction: a survey. Found Trends Hum-Comput Interact, vol. 1. Hanover: Now Publishers Inc.; 2007. p. 203–75.

  19. Mejia C, Kajikawa Y. Bibliometric analysis of social robotics research: identifying research trends and knowledgebase. Appl Sci. 2017;7:12.

  20. Šabanović S. Robots in society, society in robots: mutual shaping of society and technology as a framework for social robot design. Int J Soc Robot. 2010;2(4):439–50.

  21. Sarrica M, Brondi S, Fortunati L. How many facets does a “social robot” have? A review of scientific and popular definitions online. Inf Technol People. 2019;33(1):1–21.

  22. Dautenhahn K. Socially intelligent robots: dimensions of human-robot interaction. Philos Trans R Soc Lond B Biol Sci. 2007;362:679–704.

  23. Leite I, Martinho C, Paiva A. Social robots for long-term interaction: a survey. Int J Soc Robot. 2013;5:291–308.

  24. Tanaka F, Isshiki K, Takahashi F, Uekusa M, Sei R, Hayashi K. Pepper learns together with children: development of an educational application. 2015 IEEE-RAS 15th International Conference on Humanoid Robots (Humanoids). 2015:270–5.

  25. Baraka K, Alves-Oliveira P, Ribeiro T. An extended framework for characterizing social robots. In: Jost C, Le Pévédic B, Belpaeme T, Bethel C, Chrysostomou D, Crook N, et al., editors. Human-Robot Interaction: Evaluation Methods and Their Standardization. Springer Series on Bio- and Neurosystems. Cham: Springer International Publishing; 2020. p. 21–64.

  26. Cross ES, Ramsey R. Mind meets machine: towards a cognitive science of human-machine interactions. Trends Cogn Sci. 2021;25:200–12.

  27. Eyssel F. An experimental psychological perspective on social robotics. Robot Auton Syst. 2017;87:363–71.

  28. Irfan B, Kennedy J, Lemaignan S, Papadopoulos F, Senft E, Belpaeme T. Social psychology and human-robot interaction: an uneasy marriage. Companion 2018 ACM/IEEE Int Conf Human-Robot Interact. New York: Association for Computing Machinery; 2018. p. 13–20.

  29. Belpaeme T. Learning from social robots. 2020 Int Symp Community-centric Syst. Hachioji, Tokyo, Japan; 2020. p. 12.

  30. Hoffman G, Zhao X. A primer for conducting experiments in human–robot interaction. ACM Trans Hum-Robot Interact. 2020;10(1):1–31.

  31. • Henschel A, Hortensius R, Cross ES. Social cognition in the age of human–robot interaction. Trends Neurosci. 2020;43:373–84. This recent opinion piece makes a case for using human neuroscience techniques in real-life, embodied interaction settings with robots to advance our knowledge of the human side of human-robot interaction, aspects of which have been surprisingly neglected in HRI research to date.

  32. Gazzaniga MS, Mangun GR, editors. The cognitive neurosciences. 5th ed. Cambridge: MIT Press; 2014.

  33. Cross ES, Hortensius R, Wykowska A. From social brains to social robots: applying neurocognitive insights to human-robot interaction. Philos Trans R Soc B Biol Sci. 2019;374:20180024.

  34. Cao W, Song W, Li X, Zheng S, Zhang G, Wu Y, et al. Interaction with social robots: Improving gaze toward face but not necessarily joint attention in children with autism spectrum disorder. Front Psychol. 2019;10:1503.

  35. Chevalier P, Kompatsiari K, Ciardo F, Wykowska A. Examining joint attention with the use of humanoid robots-a new approach to study fundamental mechanisms of social cognition. Psychon Bull Rev. 2020;27:217–36.

  36. Gordon G. Social behaviour as an emergent property of embodied curiosity: a robotics perspective. Philos Trans R Soc B Biol Sci. 2019;374:20180029.

  37. Kajopoulos J, Cheng G, Kise K, Müller HJ, Wykowska A. Focusing on the face or getting distracted by social signals? The effect of distracting gestures on attentional focus in natural interaction. Psychol Res. 2020;1:3.

  38. Banks J. Theory of mind in social robots: replication of five established human tests. Int J Soc Robot. 2020;12:403–14.

  39. Bianco F, Ognibene D. Transferring adaptive theory of mind to social robots: insights from developmental psychology to robotics. In: Salichs MA, Ge SS, Barakova EI, Cabibihan J-J, Wagner AR, Castro-González Á, et al., editors. Soc Robot. Cham: Springer International Publishing; 2019. p. 77–87.

  40. Bossi F, Willemse C, Cavazza J, Marchesi S, Murino V, Wykowska A. The human brain reveals resting state activity patterns that are predictive of biases in attitudes toward robots. Sci Robot. 2020;5:eabb6652.

  41. Kuniyoshi Y. Fusing autonomy and sociality via embodied emergence and development of behaviour and cognition from fetal period. Philos Trans R Soc B Biol Sci. 2019;374:20180031.

  42. Laban G, George J-N, Morrison V, Cross E. Tell me more! Assessing interactions with social robots from speech. Paladyn J Behav Robot. 2021;12:136–159.

  43. Stafford RQ, MacDonald BA, Jayawardena C, Wegner DM, Broadbent E. Does the robot have a mind? Mind perception and attitudes towards robots predict use of an eldercare robot. Int J Soc Robot. 2014;6:17–32.

  44. Wallkötter S, Stower R, Kappas A, Castellano G. A robot by any other frame: framing and behaviour influence mind perception in virtual but not real-world environments. Proc 2020 ACM/IEEE Int Conf Human-Robot Interact. New York: Association for Computing Machinery; 2020. p. 609–18.

  45. Wang X, Krumhuber EG. Mind perception of robots varies with their economic versus social function. Front Psychol. 2018;9:1230.

  46. Thellman S, Ziemke T. Do you see what i see? Tracking the perceptual beliefs of robots. iScience. 2020;23:101625.

  47. Wiese E, Metta G, Wykowska A. Robots as intentional agents: using neuroscientific methods to make robots appear more social. Front Psychol. 2017;8:1663.

  48. Hsieh T-Y, Chaudhury B, Cross ES. Human-robot cooperation in prisoner dilemma games: people behave more reciprocally than prosocially toward robots. Companion 2020 ACM/IEEE Int Conf Human-Robot Interact. New York: Association for Computing Machinery; 2020. p. 257–9.

  49. Marchesi S, Perez-Osorio J, De Tommaso D, Wykowska A. Don’t overthink: fast decision making combined with behavior variability perceived as more human-like. 2020 29th IEEE Int Conf Robot Hum Interact Commun. Naples: IEEE; 2020. p. 54–9.

  50. Natale L, Bartolozzi C, Pucci D, Wykowska A, Metta G. iCub: the not-yet-finished story of building a robot child. Sci Robot. 2017;2:eaaq1026.

  51. Sandini G, Metta G, Vernon D. RobotCub: an open framework for research in embodied cognition. 4th IEEE/RAS Int Conf Humanoid Robot. 2004;1:13–32.

  52. Ghiglino D, De Tommaso D, Willemse C, Marchesi S, Wykowska A. Can I get your (robot) attention? Human sensitivity to subtle hints of human-likeness in a humanoid robot’s behavior. 2020.

  53. • Pérez-Osorio J, De Tommaso D, Baykara E, Wykowska A. Joint action with iCub: a successful adaptation of a paradigm of cognitive neuroscience in HRI. 2018 27th IEEE Int Symp Robot Hum Interact Commun. 2018:152–7. This work highlights the cutting-edge research performed by Wykowska and colleagues at the IIT to develop cognitive neuroscience paradigms with the iCub robot, which enable research into fundamental aspects of social perception and cognition (such as joint attention) using real-life social scenarios with embodied robots.

  54. Wykowska A, Chaminade T, Cheng G. Embodied artificial agents for understanding human social cognition. Philos Trans R Soc B Biol Sci. 2016;371:20150375.

  55. Wykowska A. Social robots to test flexibility of human social cognition. Int J Soc Robot. 2020:1–9.

  56. Langer A, Feingold-Polak R, Mueller O, Kellmeyer P, Levy-Tzedek S. Trust in socially assistive robots: considerations for use in rehabilitation. Neurosci Biobehav Rev. 2019;104:231–9.

  57. Naneva S, Sarda Gou M, Webb TL, Prescott TJ. A systematic review of attitudes, anxiety, acceptance, and trust towards social robots. Int J Soc Robot. 2020:1–23.

  58. Stower R, Kappas A. “Oh no, my instructions were wrong!” An exploratory pilot towards children’s trust in social robots. 2020 29th IEEE Int Conf Robot Hum Interact Commun. Naples, Italy; 2020. p. 641–6.

  59. Stower R. The Role of trust and social behaviours in children’s learning from social robots. 2019 8th Int Conf Affect Comput Intell Interact Work Demos. Cambridge, United Kingdom, 2019. p. 1–5.

  60. Dziergwa M, Kaczmarek M, Kaczmarek P, Kędzierski J, Wadas-Szydłowska K. Long-term cohabitation with a social robot: a case study of the influence of human attachment patterns. Int J Soc Robot. 2018;10:163–76.

  61. Cross ES, Riddoch KA, Pratts J, Titone S, Chaudhury B, Hortensius R. A neurocognitive investigation of the impact of socializing with a robot on empathy for pain. Philos Trans R Soc B Biol Sci. 2019;374:20180034.

  62. Thunberg S, Thellman S, Ziemke T. Don’t judge a book by its cover: a study of the social acceptance of NAO vs. Pepper. Proc 5th Int Conf Hum Agent Interact. New York: Association for Computing Machinery; 2017. p. 443–6.

  63. Birnbaum GE, Mizrahi M, Hoffman G, Reis HT, Finkel EJ, Sass O. What robots can teach us about intimacy: the reassuring effects of robot responsiveness to human disclosure. Comput Hum Behav. 2016;63:416–23.

  64. Birnbaum GE, Mizrahi M, Hoffman G, Reis HT, Finkel EJ, Sass O. Machines as a source of consolation: robot responsiveness increases human approach behavior and desire for companionship. 2016 11th ACM/IEEE Int Conf Human-Robot Interact. 2016. p. 165–72.

  65. Björling EA, Rose E, Davidson A, Ren R, Wong D. Can we keep him forever? Teen’s engagement and desire for emotional connection with a social robot. Int J Soc Robot. 2019;12:65–77.

  66. Hoffman G, Birnbaum GE, Vanunu K, Sass O, Reis HT. Robot responsiveness to human disclosure affects social impression and appeal. Proc 2014 ACM/IEEE Int Conf Human-Robot Interact. New York: Association for Computing Machinery; 2014. p. 1–8.

  67. Traeger ML, Sebo SS, Jung M, Scassellati B, Christakis NA. Vulnerable robots positively shape human conversational dynamics in a human–robot team. Proc Natl Acad Sci. 2020;117:6370–5.

  68. Belpaeme T, Kennedy J, Ramachandran A, Scassellati B, Tanaka F. Social robots for education: a review. Sci Robot. 2018;3:eaat5954.

  69. Čaić M, Mahr D, Oderkerken-Schröder G. Value of social robots in services: social cognition perspective. J Serv Mark. 2019;33:463–78.

  70. Dawe J, Sutherland C, Barco A, Broadbent E. Can social robots help children in healthcare contexts? A scoping review. BMJ Paediatr Open. 2019;3:e000371.

  71. Johanson DL, Ahn HS, MacDonald BA, Ahn BK, Lim J, Hwang E, et al. The effect of robot attentional behaviors on user perceptions and behaviors in a simulated health care interaction: randomized controlled trial. J Med Internet Res. 2019;21:e13667.

  72. Johanson DL, Ho SA, Sutherland CJ, Brown B, MacDonald BA, Jong YL, et al. Smiling and use of first-name by a healthcare receptionist robot: effects on user perceptions, attitudes, and behaviours. Paladyn J Behav Robot. 2020;11:40–51.

  73. Mubin O, Ahmad MI, Kaur S, Shi W, Khan A. Social robots in public spaces: a meta-review. In: Ge SS, Cabibihan J-J, Salichs MA, Broadbent E, He H, Wagner AR, et al., editors. Soc Robot. Cham: Springer International Publishing; 2018. p. 213–20.

  74. Aaltonen I, Arvola A, Heikkilä P, Lammi H. Hello Pepper, may i tickle you? Children’s and adults’ responses to an entertainment robot at a shopping mall. Proc Companion 2017 ACM/IEEE Int Conf Human-Robot Interact. New York: Association for Computing Machinery; 2017. p. 53–4.

  75. Stock RM, Merkle M. Can humanoid service robots perform better than service employees? A comparison of innovative behavior cues. Proceedings of the 51st Hawaii International Conference on System Sciences. 2018. p. 10.

  76. Yang C, Lu M, Tseng S, Fu L. A companion robot for daily care of elders based on homeostasis. 2017 56th Annu Conf Soc Instrum Control Eng Japan. 2017. p. 1401–6.

  77. Collins EC, Prescott TJ, Mitchinson B, Conran S. MIRO: a versatile biomimetic edutainment robot. Proc 12th Int Conf Adv Comput Entertain Technol. New York: Association for Computing Machinery; 2015.

  78. Georgiou T, Singh K, Baillie L, Broz F. Small robots with big tasks: a proof of concept implementation using a MiRo for fall alert. Companion 2020 ACM/IEEE Int Conf Human-Robot Interact. New York: Association for Computing Machinery; 2020. p. 206–8.

  79. Crowe S. Jibo’s social robot assets acquired by NTT disruption [Internet]. Robot Rep. 2020. Accessed 18 Mar 2020.

  80. Carman A. Jibo, the social robot that was supposed to die, is getting a second life [Internet]. The Verge. 2020. Accessed 23 Jul 2020.

  81. NTT Disruption. Jibo the social robot returns, with its brand new website [Internet]. 2020. Accessed 23 Jul 2020.

  82. Cifuentes CA, Pinto MJ, Céspedes N, Múnera M. Social robots in therapy and care. Curr Robot Rep. 2020;1:59–74.

  83. Villaronga EF, Kieseberg P, Li T. Humans forget, machines remember: artificial intelligence and the right to be forgotten. Comput Law Secur Rev. 2018;34:304–13.

  84. Wullenkord R, Eyssel F. Societal and ethical issues in HRI. Curr Robot Rep. 2020;1:85–96.

  85. Alnajjar F, Khalid S, Vogan AA, Shimoda S, Nouchi R, Kawashima R. Emerging cognitive intervention technologies to meet the needs of an aging population: a systematic review. Front Aging Neurosci. 2019;11:291.

  86. Assad-Uz-Zaman M, Rasedul Islam M, Miah S, Rahman MH. NAO robot for cooperative rehabilitation training. J Rehabil Assist Technol Eng. 2019;6:2055668319862151.

  87. Chen Y, Garcia-Vergara S, Howard AM. Effect of feedback from a socially interactive humanoid robot on reaching kinematics in children with and without cerebral palsy: a pilot study. Dev Neurorehabil. 2018;21:490–6.

  88. Mohebbi A. Human-robot interaction in rehabilitation and assistance: a review. Curr Robot Rep. 2020;1:131–44.

  89. • Robinson NL, Cottier TV, Kavanagh DJ. Psychosocial health interventions by social robots: systematic review of randomized controlled trials. J Med Internet Res. 2019;21:1–20. This systematic review of the major randomized controlled trials conducted with social robots to date highlights how scarce adequately powered and appropriately controlled research using social robots in health interventions currently is. It also provides important insights and recommendations for research using social robots in clinical settings to maximise efficacy.

  90. Scoglio AAJ, Reilly ED, Gorman JA, Drebing CE. Use of social robots in mental health and well-being research: systematic review. J Med Internet Res. 2019;21:e13322.

  91. Góngora Alonso S, Hamrioui S, de la Torre DI, Motta Cruz E, López-Coronado M, Franco M. Social robots for people with aging and dementia: a systematic review of literature. Telemed e-Health. 2018;25:533–40.

  92. Robinson H, MacDonald B, Kerse N, Broadbent E. The psychosocial effects of a companion robot: a randomized controlled trial. J Am Med Dir Assoc. 2013;14:661–7.

  93. Hung L, Liu C, Woldum E, Au-Yeung A, Berndt A, Wallsworth C, et al. The benefits of and barriers to using a social robot PARO in care settings: a scoping review. BMC Geriatr. 2019;19:232.

  94. Geva N, Uzefovsky F, Levy-Tzedek S. Touching the social robot PARO reduces pain perception and salivary oxytocin levels. Sci Rep. 2020;10:9814.

  95. da Silva J, Kavanagh DJ, Belpaeme T, Taylor L, Beeson K, Andrade J. Experiences of a motivational interview delivered by a robot: qualitative study. J Med Internet Res. 2018;20:e116.

  96. Robinson NL, Connolly J, Hides L, Kavanagh DJ. Social robots as treatment agents: pilot randomized controlled trial to deliver a behavior change intervention. Internet Interv. 2020;21:100320.

  97. Feingold-Polak R, Levy-Tzedek S. Social robot for rehabilitation: expert clinicians and post-stroke patients’ evaluation following a long-term intervention. Proc 2020 ACM/IEEE Int Conf Human-Robot Interact. New York: Association for Computing Machinery; 2020. p. 151–60.

  98. Henschel A, Cross ES. The neuroscience of loneliness – and how technology is helping us [Internet]. The Conversation. 2020. Accessed 17 Apr 2020.

  99. Yang G-Z, Nelson BJ, Murphy RR, Choset H, Christensen H, Collins SH, et al. Combating COVID-19—the role of robotics in managing public health and infectious diseases. Sci Robot. 2020;5:eabb5589.

  100. Hortensius R, Cross ES. From automata to animate beings: the scope and limits of attributing socialness to artificial agents. Ann N Y Acad Sci. 2018;1426:93–110.


Acknowledgements

The authors gratefully acknowledge funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement 677270 to E.S.C.), the Leverhulme Trust (PLP-2018-152 to E.S.C.), and the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie actions to ENTWINE, the European Training Network on Informal Care (grant agreement no. 814072 to E.S.C.).

Author information


Corresponding author

Correspondence to Emily S. Cross.

Ethics declarations

Conflict of Interest

The authors declare that they have no conflict of interest.

Human and Animal Rights and Informed Consent

This review article does not contain any primary data from studies with human or animal subjects performed by any of the authors.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the Topical Collection on Cognitive Robotics

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit

About this article

Cite this article

Henschel, A., Laban, G. & Cross, E.S. What Makes a Robot Social? A Review of Social Robots from Science Fiction to a Home or Hospital Near You. Curr Robot Rep 2, 9–19 (2021).


Keywords

  • Social robots
  • Human-robot interaction
  • Socially assistive robots
  • Cognitive neuroscience
  • Social cognition