Purpose of Review
Research has demonstrated the potential for robotic interfaces to leverage human-like social interaction techniques, for example, autonomous social robots as companions, as professional team members, or as social proxies in robot telepresence. We propose that there is an untapped opportunity to extend the benefits of social robotics to more traditional teleoperation, where the robot does not typically communicate with the operator socially. We argue that teleoperated robots can and should leverage social techniques to shape interactions with the operator, even in use cases such as remote exploration or inspection that do not involve using the robot to communicate with other people.
The core benefit of social robotics is to leverage human-like and thus familiar social techniques to communicate effectively or shape people’s mood and behavior. Initial results provide proofs of concept for similar benefits of social techniques applied to more traditional teleoperation; for example, we can design teleoperated robots as social agents to facilitate communication or to shape operator behavior, or teleoperated robots can leverage knowledge of operator psychology to change perceptions, potentially improving operation safety and performance.
This paper provides a proposal and roadmap for leveraging social robotics techniques in more classical teleoperation interfaces.
There is an untapped opportunity to leverage social human-robot interaction techniques in traditional teleoperation interface design as a new way to shape interaction and operator performance. The field of social human-robot interaction has highlighted how people can embrace social interfaces as a natural-feeling and easy-to-understand paradigm, with results indicating a broad range of benefits including increased user comprehension of robot communication [1,2,3], engagement [4•], motivation, and task performance. This follows the well-established computers-as-social-actors paradigm, recently emphasized by the proliferation and acceptance of voice-based digital assistants. In this paper, we highlight how traditional teleoperation interfaces can likewise benefit from social design.
In teleoperation, the use of social interaction techniques has yielded similar benefits for supporting robot-mediated interaction with other people, such as telepresence [8, 9]. However, more traditional teleoperation applications such as inspection or exploration, where the operator does not interact with other people through the robot, have yet to see widespread integration of social robotics methods.
We draw a link to video games, which share similarities with teleoperation: a gamer (the operator) similarly controls an avatar (like a robot) using a computer interface [10•]. In games, social techniques relating to controlling the avatar are widespread as a means to increase engagement, elicit social responses from the user, and encourage behavior patterns; game designers explicitly manage these techniques to shape user experience and action [10•] (Fig. 1). For example, there may be virtual co-pilots, on-board AI, or other techniques to facilitate social communication, shape empathy, and influence behavior. Pragmatically, and relevant for our application, these methods can also be used to support operator awareness, sustained motivation, performance, and more [11, 12]. Given the similarities between controlling an avatar or vehicle in a video game and remotely operating a robot [10•], we note that the success of social techniques in video game avatar control motivates the investigation of their use in teleoperation.
This paper establishes a link between potential benefits of social robotics approaches and application to inherently nonsocial teleoperation tasks. We develop a clear vision for how social robotics techniques can be used pragmatically to support teleoperation and operators, via shaping operator perceptions, emotions, and behavior. We present design avenues for social robotics in teleoperation: positioning the robot itself to be seen by the operator as an agent, including a virtual agent co-pilot, and having the robot monitor and model the operator mental state and mood to inform its social interactions.
Why Social Interfaces for Teleoperation?
Human-robot interaction research has established a range of potential impacts of social interfaces. Of particular relevance to teleoperation are improved communication and the shaping of a person’s mood and behavior.
To successfully teleoperate a robot, an operator needs to monitor and understand a great deal of information while providing complex commands, all in real-time [12,13,14]. One design theme used to mitigate these issues is to create abstractions and easy-to-interpret visualizations and widgets to reduce cognitive effort (e.g., [15,16,17]). Relating to this, a standard approach in social robotics is to leverage the human capacity to quickly and intuitively process social interactions, by designing robots to communicate using human-like social methods [18•] through visual [19, 20], aural [21, 22], or haptic methods [23, 24], ostensibly increasing how much the person can process and understand. While such abstractions may not fully represent the underlying data (e.g., specific error codes), they provide a quick and intuitive communication channel.
Social robotics has been successful in applications where people are collocated with an autonomous robot, for example, where the person can directly read the robot’s gestures, characteristic motions, facial expressions, etc., to read information and robot state [3, 26, 27]. Social signals can further be adapted and personalized to the user to increase trust and improve communication over time [22, 28, 29]. We suggest using similar approaches even when the person is not collocated with the robot and when the robot is not autonomous, as in teleoperation.
Further, people can use their existing social skillsets to naturally give commands to social robots such as by using gestures, voice tone, and pointing [19, 30, 31]. This reduces the need for the person to learn or use an intermediary communication technique, supporting comfort and ease of use [7, 32, 33]. For example, modern digital assistants have helped demonstrate how well-designed voice commands can increase accessibility and ease of use, as well as improve overall experience [34•].
Thus, we argue that teleoperation can and should likewise use this “social bandwidth” alongside more commonly targeted cognitive abilities (such as map reading), to increase the operator’s ability to intuitively understand and control robots. Only a few initial projects have begun to extend this approach to social interfaces for teleoperated robots, providing evidence for our proposed broad approach [2, 35]. However, we advocate for increased focus given the potential for social interfaces to mitigate core teleoperation challenges.
Shaping Mental State and Behavior
Operator workload is a core consideration of teleoperation [11, 12], often measured via self-report measures relating to feelings of work demand (e.g., cognitive and physical) and frustration (e.g., NASA TLX). We note that such measures are intimately linked with a person’s more general mood, enthusiasm, engagement, and motivation. These in turn can have effects on human behavior and performance [37,38,39,40].
People have natural tendencies to engage with robots as social entities, even when the robot is not designed to be anthropomorphic or zoomorphic [1, 18]; such designs are used to accentuate and leverage these tendencies. Thus, a key theme of social robotics research has been to leverage related social interaction techniques [2, 18, 41], social structures, and constructs [42, 43] to influence a person’s thoughts, mood, and even their behavior. Example results include increasing a person’s task engagement (e.g., [44, 45]), performance (e.g., [46, 47]), motivation, comfort, willingness to use robots again, and more [18•]. Further, social techniques can affect how someone perceives a robot, including trust in the robot [28, 50,51,52] and perception of a robot’s abilities [53, 54]. These are all desirable teleoperation qualities [11, 12, 55], and we argue it is worth investigating how to purposefully employ social techniques in teleoperation design to influence the operator’s mental state and behavior, to support teleoperation.
All of this builds on the relationship between operator stress, engagement, and workload and their ability to sustain learning and working with a robot in the long term. Thus, natural user interfaces that are perhaps more comfortable and intuitive to use [7, 32], such as the recent trend of digital personal assistants using voice to improve overall experience [34•], or in-car GPS devices conveying abstract navigational information in social ways, can be expected to relate to operator stress, mental state, and therefore performance.
Challenges and Drawbacks
There are potential drawbacks, dangers, and other challenges with using social techniques for teleoperation. A simple reality is that poor social design can be annoying and distracting: social-focused interfaces in commercial products have sometimes been met with consumer derision, low popularity, and poor user performance [57, 58]. In some cases, social interfaces can increase cognitive load, such as when listening to a social in-car navigation system while driving. Perhaps the key is to employ social interfaces for meaningful performance and experiential improvements, rather than simply as an engagement “gimmick.”
Another aspect of the robots-as-social-actors approach is that these techniques may be used for manipulation [4•]. Prior work has demonstrated that how a robot is introduced to people can impact how acceptable they find it (e.g., for a telepresence robot [60••]), and our own work has applied a similar approach, demonstrating how information about a robot can be curated and presented to engineer operator expectations and beliefs about robot capability, irrespective of actual ability. Other examples include how a robot can be designed to talk in ways that influence how people speak [61•] or can have authoritative influence over people due to perceptions of status or style of movement [62•]. We must consider the potential for social interaction techniques, when applied to teleoperation, to be used in a deceptive or seemingly “underhanded” manner and influence people toward dangerous or questionable behavior.
Social Robotics for Nonsocial Teleoperation
Social robotics provides a range of potential benefits that we argue can be useful for traditional teleoperation tasks. In this section, we establish a clear vision and detail a range of pragmatic examples for how social teleoperation can be used to support and aid operators in teleoperation tasks. We structure this discussion around three methods we propose for social teleoperation: designing and presenting the robot itself as an agent, including a virtual agent “co-pilot,” and monitoring social cues from the operator.
Teleoperated Robots as Social Agents
We can design a teleoperated robot, its interface, and its interactions in ways that encourage the operator to see the robot as a social agent. This contrasts with the typical perception of teleoperated robots as mechanical tools that are considered first and foremost in terms of their form and capabilities. It is important to note that designing the robot to be a social agent does not necessarily imply robot autonomy, but rather that the operator has the impression that they are directing and operating an interactive agent.
Designing teleoperated robots, such as for industrial inspection and repair, to support operators using social techniques opens a broad range of interaction possibilities—that is, if the operator sees the robot as an agent, then the robot can engage in productive social behavior. For example, a robot agent could chat and banter with the operator to stave off monotony and boredom, supporting engagement and potentially improving interaction comfort (in a video game–like fashion). In the face of an important event (e.g., a warning sensor), the agent can change the mood, perhaps with an abrupt stop in banter or an altered voice tone, leveraging social contrast to increase saliency and draw the operator’s attention and focus to pertinent information.
Such a personality could help bolster engagement and focus by congratulating the operator after a difficult task or offering words of encouragement after a mistake. As the task progresses, the agent can relieve or create a sense of time pressure by appearing cautious (indicating that it is okay to slow down) or impatient (indicating a need to hurry), to encourage related operator behavior patterns (e.g., as in [63••]). The agent can further act in ways that garner operator empathy to influence behavior (e.g., as in [35, 64]). For example, upon scraping against a rock, the robot agent could yell “Ouch!” in surprise or use other social signals (e.g., Fig. 2) to express pain, creating an emotional reaction in the operator to encourage more careful operation. With sustained damage, the agent could, for example, use a strained voice to provide an ongoing emotional reminder.
Designing the teleoperated robot as a social agent can further mitigate challenges relating to mixed initiative systems, where the robot autonomously takes control as needed to aid the operator (e.g., [65, 66]) and thus needs to clearly communicate why, how, and when it will take control. Designing the robot as a social agent provides an embodiment, context, and related social communication to provide transparency and awareness. For example, the agent could suddenly shout “Watch out!”, looking scared and backing up, while pointing the camera at a dangerous hole; this clarifies to the operator why the action was taken.
The mixed-initiative autonomy itself could be purposefully designed using social interaction techniques to encourage desired operation behaviors. For example, if an operator unnecessarily takes control from the robot, the robot could try to encourage the operator to rely on it more in the future by acting stubborn or sulky, aiming to ultimately reduce operator workload.
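Such event-driven social responses could be realized, in the simplest case, as a lookup from mixed-initiative events to social behaviors. The sketch below is illustrative only: the event names, utterances, and behavior tuples are hypothetical, not taken from any cited system.

```python
from enum import Enum, auto

class Event(Enum):
    HAZARD_AVOIDED = auto()     # robot autonomously stopped at a hazard
    OPERATOR_OVERRIDE = auto()  # operator took control unnecessarily
    DAMAGE_MINOR = auto()       # robot scraped or bumped an obstacle

# Hypothetical mapping from mixed-initiative events to social behaviors:
# (utterance, facial expression, optional motion cue).
SOCIAL_RESPONSES = {
    Event.HAZARD_AVOIDED: ("Watch out!", "scared", "back_up"),
    Event.OPERATOR_OVERRIDE: ("I had that under control...", "sulky", None),
    Event.DAMAGE_MINOR: ("Ouch!", "pained", None),
}

def respond(event: Event) -> dict:
    """Return the social behavior the agent should exhibit for an event."""
    utterance, expression, motion = SOCIAL_RESPONSES[event]
    return {"say": utterance, "express": expression, "motion": motion}
```

Even a simple table like this makes the robot’s autonomous actions legible as the actions of an agent, rather than as unexplained interface behavior.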
In short, if we can design for operators to view teleoperated robots as agents—and not just tools—then this agent can use a wide range of social interaction techniques to support the operator and shape their teleoperation behaviors.
Virtual Co-pilot Agent
We can add a third-party agent to the interface—a virtual “co-pilot”—for social interaction. The key to this approach is that the agent is completely virtual and does not represent the robot (is conceptually disassociated from it), providing increased range of new complementary interaction possibilities.
As already explained, a social robotic agent can build operator empathy and garner emotional reactions; however, this may also result in undesirable emotional attachment and guilt if the robot gets damaged (as in [67•]). This may lead to a negative mood and anxiety when operating in dangerous situations where mistakes are inevitable. Operators may also become too careful when they should instead be aggressive and take risks, such as in high-demand, time-critical scenarios (e.g., nuclear reactor inspection) or search and rescue, where robots should be risked to save human lives. Using an agent co-pilot instead—disassociated from the robot—can still enable myriad social interaction techniques (via the co-pilot agent) without creating empathy toward the physical robot.
This co-pilot agent could further mitigate operator worry about the robot, for example, by being reassuring and calming after mistakes, reminding them that it is “just a machine,” and encouraging more aggressive behavior (e.g., as in [35••]). This agent could alternatively still react in ways to mitigate risks given over-aggressive operator behavior [35••] (Fig. 3).
A virtual co-pilot can also be useful when an operator is responsible for controlling multiple robots and regularly switches control between them. With each robot designed as an agent, switching likewise changes which agent the operator is interacting with. A virtual co-pilot would not change with this switch, increasing interaction stability. It could even support the transition by providing state summaries of the new robot and environment or draw attention to important information through action such as focusing intently on a map to indicate a point of interest the robot discovered while the operator was controlling another robot.
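One way to realize such a handover briefing is a short templated summary spoken by the co-pilot when control switches. This is a minimal sketch; the robot-state fields (name, battery, points of interest) are assumed for illustration, and a real system would draw them from robot telemetry.

```python
def handover_summary(robot: dict) -> str:
    """Brief the operator when they take control of a different robot.
    The dict fields used here are hypothetical placeholders."""
    lines = [f"You're now controlling {robot['name']}."]
    if robot.get("battery", 1.0) < 0.3:
        lines.append("Heads up: its battery is running low.")
    for poi in robot.get("points_of_interest", []):
        lines.append(f"While you were away, it spotted {poi}.")
    return " ".join(lines)
```

The co-pilot might deliver this summary verbally or highlight the corresponding map locations as it speaks, supporting the operator’s situation awareness across the switch.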
It may not be trivial to convince operators to see their robot as an agent (see section Teleoperated Robots as Social Agents). Giving a robot a social personality may conflict with existing mental models (and perhaps seem silly), particularly if an operator has existing interaction experience or technical knowledge. A disassociated virtual co-pilot may be more acceptable and thus a more appropriate path for integrating social interaction.
Similarly, a real robot has an existing physical form and capabilities that may limit the design of social interface. A virtual co-pilot could be given a face, voice, arms, or any arbitrary shape, even if no such components exist on the actual robot, avoiding conflict or inconsistency in how the robot is presented.
Thus, a virtual co-pilot, that is not associated with the robot itself, provides a range of complementary methods for integrating social interaction into teleoperation.
Reading Operator Social Cues
An integral component of social interaction is that it is a dialog between multiple actors; an agent (whether the robot or a virtual co-pilot) should likewise monitor, interpret, and respond to social cues from the operator. This includes leveraging natural input modalities such as operator voice and gestures, but also more subtle cues such as operator facial expressions (e.g., Fig. 4), voice tone, and other expressions relating to emotion (as well established in affective computing). Thus, the agent should maintain awareness of the operator; developing a model of the operator’s workload and mental state, including engagement, stress, and mood, will enable the agent to act more appropriately when using social techniques to interact with the operator.
An agent that understands if the operator is stressed or has high cognitive load could, for example, adapt by slowing down the robot to reduce workload (e.g., [63, 70]) or simplifying visualizations to reduce the amount of information displayed (e.g., [16, 71]). Similarly, if the agent can detect when the operator starts to show signs of boredom and disengagement, it could more aggressively employ social techniques to increase attention and focus.
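Assuming an external estimator provides a scalar workload value, these adaptations could be as simple as thresholded adjustments to robot speed and display complexity. The thresholds and scale factors below are illustrative assumptions, not validated values.

```python
def adapt_to_workload(workload: float, base_speed: float, max_widgets: int):
    """Adapt robot speed and interface detail to estimated operator workload.
    workload: assumed estimate in [0, 1] from some external model.
    Returns (commanded speed, number of UI widgets to display)."""
    if workload > 0.8:   # high load: slow the robot, strip the display
        return base_speed * 0.5, max(1, max_widgets // 4)
    if workload > 0.5:   # moderate load: mild simplification
        return base_speed * 0.8, max_widgets // 2
    return base_speed, max_widgets  # low load: no adaptation
```

A social agent could accompany such adaptations with an explanation ("Let's slow down for a moment"), keeping the behavior transparent rather than mysterious.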
Knowledge about the operator state can also be used in conjunction with sensor data. For example, if an operator is not reacting despite warning messages or robot collisions, indicating a lack of attention, the agent could step up communication or step in with emergency measures (e.g., contacting a superior).
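This escalation behavior can be sketched as a small state machine that steps through increasingly forceful interventions while a warning remains unacknowledged. The level names and timeout below are hypothetical design choices.

```python
class EscalationMonitor:
    """Escalate social communication while a warning goes unacknowledged."""

    def __init__(self, levels=("verbal_reminder", "urgent_alert", "contact_superior"),
                 timeout_s=5.0):
        self.levels = levels          # interventions, mildest first
        self.timeout_s = timeout_s    # patience before escalating a level
        self._warning_time = None
        self._level = 0

    def warning_raised(self, now: float):
        self._warning_time = now
        self._level = 0

    def operator_acknowledged(self):
        self._warning_time = None
        self._level = 0

    def step(self, now: float):
        """Return the next escalation action to take, or None."""
        if self._warning_time is None:
            return None
        if now - self._warning_time > self.timeout_s and self._level < len(self.levels):
            action = self.levels[self._level]
            self._level += 1
            self._warning_time = now  # restart the clock for the next level
            return action
        return None
```

Each level could be expressed socially (a gentle verbal nudge, then an alarmed tone and facial expression) before resorting to external measures.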
While developing a robot as an agent—or having a virtual co-pilot—is a powerful way to use social interaction to improve teleoperation, ultimately solutions will require a two-way dialog, with the agent both reading and exhibiting social behaviors.
Design Strategies to Create Teleoperation Agents
The prior sections provide a vision of how social robotics techniques can be used to support teleoperation. However, the question remains of how exactly to design and implement the agents themselves. In this section, we briefly discuss techniques for encouraging the operator to see the robot as an agent, for embedding a virtual co-pilot, and for monitoring social cues to model operator state.
Design the Teleoperated Robot as an Agent
The most straightforward way to encourage an operator to interact with a teleoperated robot as an agent is to design the physical robot itself to encourage anthropomorphism. For example, it could be given arms or a face [72•], or a zoomorphic form, which shapes expectations about how it can be interacted with [54, 73]. Even if the robot itself (which may be remote) does not have these features, a picture or graphical representation could depict them (e.g., Fig. 2).
Another way to build a perception of social agency is to introduce or describe the robot to build expectations (e.g., [53••]) of its social abilities to encourage social interaction. For example, one could simply call the robot an agent, talk about it using anthropomorphic language, or inform the operator that the robot has a personality.
The graphical interface itself could be modified, for example, with visual representations of the robot being altered to appear more anthropomorphic or animated (e.g., [64••], Fig. 2). Alternatively, the robot could be given a voice, perhaps disembodied if no on-screen representation is given.
Another strategy, well established in social robotics, is to modify otherwise mechanical motions to encourage anthropomorphism. How a robot moves (e.g., its path on a map) or performs actions (as shown via an on-screen 3D model of the robot) can be modified to convey emotion and personality [74, 75].
Finally, any actual robot autonomy, such as with semi-autonomous [76, 77] or mixed initiative [65, 66] systems, can be expected to promote a sense of robot agency. That is, when the robot acts autonomously, operators will naturally assign agency and related concepts (e.g., intelligence) to those actions. This autonomy could also extend to haptic interaction. For example, haptic feedback used for communication, such as through the joystick [79, 80], a vibrating chair, or other types of equipment [81, 82], could be attributed to the robot’s perceived autonomy (as with haptics in social human-robot interaction [23, 24]), further increasing the sense of agency.
How to Design Virtual Agent Co-pilots
Creating a virtual co-pilot can be as simple as creating an on-screen character avatar, leveraging the large body of work from the Intelligent Virtual Agents community (e.g., [83, 84]). This agent could alternatively be disembodied, for example, speaking as a remote companion over radio. The agent should be designed to emphasize its disassociation from the robot, for example, by referring to the robot in the third person, such as saying “The robot’s tire is punctured.”
A co-pilot agent could instead leverage a physical embodiment, such as a companion robot sitting near the operator. This would allow similar freedom of design (not tied to the form of the operated robot) while leveraging the social interaction possibilities of physical embodiment.
The virtual co-pilot does not need to be a fully interactive agent and can instead leverage one or more social interaction components, for example, using multiple on-screen facial expressions to visualize a high-level summary of operator driving safety [35••] (Fig. 3). Such a multiple-agent or social-display design enables the designer to select the social modality based on what is most effective, instead of forcing a design to match an already existing virtual co-pilot.
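A minimal version of such a social display might map a rolling safety score to one of a few on-screen expressions. This sketch is only loosely inspired by the cited approach; the score range, thresholds, and expression names are assumptions for illustration.

```python
def copilot_expression(safety_score: float) -> str:
    """Map a rolling driving-safety score in [0, 1] to an on-screen face.
    Thresholds are illustrative placeholders, not tuned values."""
    if safety_score > 0.75:
        return "smiling"   # smooth, collision-free operation
    if safety_score > 0.4:
        return "neutral"
    return "worried"       # frequent near-misses or collisions
```

Because the display is a single glanceable face, it can convey an evaluative summary without demanding the operator read numbers or logs.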
How to Read an Operator’s Social Communication and Signals
A wealth of technologies and research exist for sensing and monitoring a person’s state and social communication (e.g., see [86,87,88,89]). Further, this has been broadly explored in HCI; for example, gestures or a person’s skin conductivity [91•] can be used in a classroom to understand learner engagement, preferences, personality, and more. These techniques are still evolving and quite complex; however, even simple approaches, such as comparing response times against known typical values, or sensing haptic inputs on the joystick or leaning on the chair or desk, can be used to gain insight [16, 92]. How social robots can leverage this input in their behaviors, however, is still not clear [87, 93], and this remains an open problem.
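The simple response-time comparison mentioned above might look like the following heuristic, which flags possible disengagement when recent command response times drift well above the operator’s own baseline. This is an assumed heuristic for illustration, not a validated attention measure.

```python
from statistics import mean, stdev

def attention_flag(recent_times, baseline_times, z_threshold=2.0):
    """Flag possible operator disengagement via response times (seconds).
    Compares the mean of recent response times against the operator's
    own baseline using a z-score; all parameters are illustrative."""
    mu, sigma = mean(baseline_times), stdev(baseline_times)
    if sigma == 0:
        return False  # degenerate baseline; cannot judge deviation
    z = (mean(recent_times) - mu) / sigma
    return z > z_threshold
```

A co-pilot agent could treat such a flag as a cue to re-engage the operator socially, rather than as a hard alarm, given the heuristic’s crudeness.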
Given the specific task of interactive social agents for teleoperation, we envision that reading the operator’s signals will focus primarily on measures of workload, stress, and attention. As such, monitoring skeletal pose (and its change over time, e.g., with a Microsoft Kinect), voice tone [88•], facial expressions or eye gaze (and thus attention) through a webcam or eye tracker, and simple biometrics including EEG, heart rate, and galvanic skin response are all feasible given a static seating configuration.
We argue that nonsocial teleoperation tasks (e.g., exploration and inspection) have a largely untapped potential to leverage established techniques from social human-robot interaction. By designing teleoperation to employ social interaction, the interface can better support the operator: it can shape their mental state, perceptions, and overall experience, leading to improved performance. As demonstrated by recent research, such results can be achieved with simple design tools and do not require advanced learning or sensing methods that are still being developed.
In this paper, we motivated this approach and painted a detailed picture—including a wide range of concrete examples—of how social interaction techniques can improve teleoperation. We envision that this exploration will serve as a springboard and call to action for increased exploration of social interaction techniques in teleoperation.
Papers of particular interest, published recently, have been highlighted as:
• Of importance
•• Of major importance
Young JE, Hawkins R, Sharlin E, Igarashi T. Toward acceptable domestic robots: applying insights from social psychology. Int J Soc Robot. 2009;1:95–108.
•• Feldmaier J, Stimpfl M, Diepold K. Development of an emotion-competent SLAM agent. In: Human-Robot Interact: ACM Press; 2017. p. 1–9. An example of mechanical and algorithmic variables of a robot process being conveyed in a social way.
Singh A, Young JE. A dog tail for utility robots: Exploring affective properties of tail movement. Lect Notes Comput Sci 8118 LNCS. 2013:403–19.
• Sanoubari E, Seo SH, Garcha D, Young JE, Loureiro-Rodriguez V. Good robot design or Machiavellian? An in-the-wild robot leveraging minimal knowledge of Passersby’s culture. In: Human-Robot Interact: IEEE; 2019. p. 382–91. Demonstrates how social techniques in robotics can be manipulative.
Nakagawa K, Shiomi M, Shinozawa K, Matsumura R, Ishiguro H, Hagita N. Effect of Robot’s whispering behavior on People’s motivation. Int J Soc Robot. 2013;5:5–16.
Breazeal C, Kidd CD, Thomaz AL, Hoffman G, Berlin M. Effects of nonverbal communication on efficiency and robustness of human-robot teamwork. In: IEEE Int Conf Intell Robot Syst; 2005.
Lee JR, Nass CI. Trust in computers: the computers-are-social-actors (CASA) paradigm and trustworthiness perception in human-computer communication. In: Trust Technol. a Ubiquitous Mod. Environ. Theor. Methodol. Perspect. IGI Global; 2010. p. 1–15.
Kristoffersson A, Coradeschi S, Loutfi A. A review of mobile robotic telepresence. Adv Human-Computer Interact. 2013;2013:1–17.
Tsui KM, Dalphond JM, Brooks DJ, Medvedev MS, McCann E, Allspaw J, et al. Accessible human-robot interaction for telepresence robots: a case study. Paladyn, J Behav Robot. 2015;6:1–29.
•• Rea DJ. Now you’re teleoperating with power: learning from video games to improve teleoperation interfaces: University of Manitoba; 2020. A thesis demonstrating the similarity of video games and teleoperation, including suggesting how social techniques in games could be applicable to telerobotics
Steinfeld A, Fong T, Field M, Lewis M, Scholtz J, Schultz A (2006) Common metrics for human-robot interaction. Human-Robot Interact.
Chen JYC, Haas EC, Barnes MJ. Human performance issues and user interface design for teleoperated robots. IEEE Trans Syst Man Cybern Part C (Appl Rev). 2007;37:1231–45.
Endsley MR. Designing for situation awareness: an approach to user-centered design, Second: CRC Press; 2016.
Endsley MR. Design and evaluation for situation awareness enhancement. Proc Hum Factors Soc Annu Meet. 1988;32:97–101.
Leeper A, Hsiao K, Ciocarlie M, Takayama L, Gossow D (2012) Strategies for human-in-the-loop robotic grasping. Human-Robot Interact (HRI), 2012 7th ACM/IEEE Int Conf 1–8.
•• Rea DJ, Seo SH, Bruce N, Young JE. Movers, shakers, and those who stand still: visual attention-grabbing techniques in robot teleoperation. In: human-robot interact. New York, USA: ACM/IEEE; 2017. p. 398–407. Demonstrates how awareness of how an operator can be distracted can be used to design interfaces to improve task performance and reduce workload.
Seo SH, Young JE, Irani P. Where are the robots? In-feed embedded techniques for visualizing robot team member locations. In: Robot Hum. Interact. Commun; 2017. p. 522–7.
• Bartneck C, Belpaeme T, Eyssel F, Kanda T, Keijsers M, Sabanovic S. Human-robot interaction: an introduction: Cambridge University Press; 2020. A summary of the findings of social human-robot interaction
Gleeson B, Maclean K, Haddadi A, Croft E, Alcazar J. Gestures for industry: intuitive human-robot communication from human observation. In: Human-robot interact. Piscataway: IEEE Press; 2013. p. 349–56.
Admoni H, Scassellati B. Social eye gaze in human-robot interaction: a review. J Human-Robot Interact. 2017;6:25.
Ohshima N, Kimijima K, Yamato J, Mukawa N. A conversational robot with vocal and bodily fillers for recovering from awkward silence at turn-takings. In: Int. Work. Robot Hum. Interact. Commun. IEEE; 2015. p. 325–30.
Seo SH, Griffin K, Young JE, Bunt A, Prentice S, Loureiro-Rodríguez V. Investigating People’s rapport building and hindering behaviors when working with a collaborative robot. Int J Soc Robot. 2018;10:147–61.
Ammi M, Demulier V, Caillou S, Gaffary Y, Tsalamlal Y, Martin J-C, et al. Haptic human-robot affective interaction in a handshaking social protocol. In: Human-Robot Interact. New York: ACM Press; 2015. p. 263–70.
Tsalamlal MY, Martin J-C, Ammi M, Tapus A, Amorim M-A. Affective handshake with a humanoid robot: how do participants perceive and combine its facial and haptic expressions? In: Affect. Comput. Intell. Interact. IEEE; 2015. p. 334–40.
Brooks JA, Freeman JB. Neuroimaging of person perception: a social-visual interface. Neurosci Lett. 2019;693:40–3.
Sharma M, Hildebrandt D, Newman G, Young JE, Eskicioglu R. Communicating affect via flight path exploring use of the laban effort system for designing affective locomotion paths. ACM/IEEE Int Conf Human-Robot Interact. 2013:293–300.
Young JE, Xin M, Sharlin E (2007) Robot expressionism through cartooning. In: Human-Robot Interact. ACM Press, New York, p 309.
• Ciocirlan S-D, Agrigoroaie R, Tapus A (2019) Human-robot team: effects of communication in analyzing trust. In: Robot Hum. Interact. Commun. IEEE, pp 1–7. An example of social techniques increasing trust between users and robots.
Lee M, Forlizzi J, Kiesler S. Personalization in HRI: a longitudinal field experiment. In: Human-Robot Interact; 2012. p. 319–26.
Riek LD, Paul PC, Robinson P. When my robot smiles at me: enabling human-robot rapport via real-time head gesture mimicry. J Multimodal User Interfaces. 2010;3:99–108.
Bainbridge WA, Hart JW, Kim ES, Scassellati B. The benefits of interactions with physically present robots over video-displayed agents. Int J Soc Robot. 2011;3:41–52.
Reeves B, Nass C (1996) The media equation: how people treat computers, television, and new media like real people and places. Cambridge University Press.
Leshed G, Velden T, Rieger O, Kot B, Sengers P (2008) In-car gps navigation: engagement with and disengagement from the environment. Proc SIGCHI Conf hum factors Comput Syst (CHI ‘08) 1675–1684.
• Lopatovska I, Rink K, Knight I, Raines K, Cosenza K, Williams H, et al. Talk to me: exploring user interactions with the Amazon Alexa. J Librariansh Inf Sci. 2019;51:984–97. Exploring how socially-enabled devices are interpreted and integrated into home-life.
•• Rea DJ, Young JE. Backseat teleoperator: affective feedback with on-screen agents to influence teleoperation. In: Human-Robot Interact; 2019. p. 19–28. A proof of concept of how social techniques can act as a robot state summary to impact operator emotions.
Hart SG, Staveland LE. Development of NASA-TLX (task load index): results of empirical and theoretical research. In: Hum. Ment. Workload; 1988. p. 139–83.
Butler EA, Egloff B, Wilhelm FH, Smith NC, Erickson EA, Gross JJ. The social consequences of expressive suppression. Emotion. 2003;3:48–67.
•• Precht L, Keinath A, Krems JF. Effects of driving anger on driver behavior – results from naturalistic driving data. Transp Res Part F Traffic Psychol Behav. 2017;45:75–92. A link between emotion and performance for operating vehicles.
Hart SG. NASA-Task Load Index (NASA-TLX); 20 years later. Hum Factors Ergon Soc Annu Meeting. 2006;50:904–8.
Chatterjee P. Drone pilots are quitting in record numbers: a combination of lower-class status in the military, overwork, and psychological trauma appears to be taking a mental toll on drone pilots. In: Mother Jones; 2015. https://www.motherjones.com/politics/2015/03/drone-pilots-are-quitting-record-numbers/.
Erden MS. Emotional postures for the humanoid-robot Nao. Int J Soc Robot. 2013;5:441–56.
Sakamoto D, Ono T. Sociality of robots: do robots construct or collapse human relations? In: Human-Robot Interact. New York: ACM Press; 2006. p. 355.
Geiskkovitch D, Seo S, Young JE. Autonomy, embodiment, and obedience to robots. In: Human-Robot Interact. Ext. Abstr. ACM; 2015. p. 235–6.
Short E, Hart J, Vu M, Scassellati B. No fair!! An interaction with a cheating robot. In: ACM/IEEE Int Conf Human-Robot Interact; 2010. p. 219–26.
Vázquez M, Steinfeld A, Hudson SE, Forlizzi J. Spatial and other social engagement cues in a child-robot interaction. In: ACM/IEEE Int Conf Human-Robot Interact (HRI '14); 2014. p. 391–8.
Salem M, Eyssel F, Rohlfing K, Kopp S, Joublin F. To err is human(-like): effects of robot gesture on perceived anthropomorphism and likability. Int J Soc Robot. 2013:313–23.
•• Holthaus P, Menon C, Amirabdollahian F. How a robot's social credibility affects safety performance. In: Int. Conf. Soc. Robot; 2019. p. 1–10. An example of a robot's social behavior impacting a human's behavior.
Graether E, Mueller F. Joggobot: a flying robot as jogging companion. In: Conf Hum Factors Comput Syst; 2012. p. 1063–6.
Gockley R, Forlizzi J, Simmons R. Natural person following behavior for social robots. In: Proc ACM/IEEE Int Conf Human-Robot Interact; 2007. p. 17–24.
Kahn PH, Kanda T, Ishiguro H, Gill BT, Shen S, Gary HE, et al. Will people keep the secret of a humanoid robot? In: Human-Robot Interact; 2015. p. 173–80.
Banh A, Rea DJ, Young JE, Sharlin E. Inspector Baxter: the social aspects of integrating a robot as a quality inspector in an assembly line. In: Human-Agent Interact; 2015.
Mota RCR, Rea DJ, Le Tran A, Young JE, Sharlin E, Sousa MC. Playing the 'trust game' with robots: social strategies and experiences. In: Robot Hum. Interact. Commun. IEEE; 2016. p. 519–24.
•• Rea DJ, Young JE. It's all in your head. In: Human-Robot Interact. New York: ACM Press; 2018. p. 32–40. Demonstrates how describing robots in certain ways can impact operator perceptions and operation behaviors.
Schramm LT, Dufault D, Young JE. Warning: this robot is not what it seems! Exploring expectation discrepancy resulting from robot design. In: Companion human-robot interact. New York: ACM; 2020. p. 439–41.
Young JE, Sung J, Voida A, Sharlin E, Igarashi T, Christensen HI, et al. Evaluating human-robot interaction. Int J Soc Robot. 2010;3:53–67.
Swartz L. Why people hate the paperclip: labels, appearance, behavior and social responses to user interface. Stanford: AGENTS; 2003.
Whitworth B. Polite computing. Behav Inf Technol. 2005;24:353–63.
Srinivasan R, Jovanis PP. Effect of selected in-vehicle route guidance systems on driver reaction times. Hum Factors J Hum Factors Ergon Soc. 1997;39:200–15.
•• Rueben M, Bernieri FJ, Grimm CM, Smart WD. Framing effects on privacy concerns about a home telepresence robot. In: Human-Robot Interact. ACM; 2017. p. 435–44. Demonstrates how the description of a teleoperated robot impacts the social perception of that robot's behaviors.
• Brandstetter J, Beckner C, Sandoval EB, Bartneck C. Persistent lexical entrainment in HRI. In: Human-Robot Interact. ACM; 2017. p. 63–72. A demonstration of how robot actions can subconsciously influence human behavior.
• Mizumaru K, Satake S, Kanda T, Ono T. Stop doing it! Approaching strategy for a robot to admonish pedestrians. In: Human-Robot Interact. IEEE; 2019. p. 449–57. An example of a social robot behavior that can influence how people behave towards the robot.
•• Rea DJ, Hanzaki MR, Bruce N, Young JE. Tortoise and the hare robot: slow and steady almost wins the race, but finishes more safely. In: Robot Hum. Interact. Commun. IEEE; 2017. p. 1–6. Demonstrates how a teleoperated robot's capabilities are linked to performance, workload, and user experience.
•• Seo SH, Young JE, Irani P. How are your robot friends doing? A design exploration of graphical techniques supporting awareness of robot team members in teleoperation. Int J Soc Robot. 2020. An approach exploring how virtual social representations of robot state can convey information similarly to traditional interface techniques.
Wang J, Lewis M. Human control for cooperating robot teams. In: Proc ACM/IEEE Conf Human-Robot Interact; 2007. p. 9–16.
Kortenkamp D, Bonasso RP, Ryan D, Schreckenghost D. Traded control with autonomous robots as mixed initiative interaction. AAAI Tech Rep. 1997;04:89–94.
• Seo SH, Geiskkovitch D, Nakane M, King C, Young JE. Poor thing! Would you feel sorry for a simulated robot? A comparison of empathy toward a physical and a simulated robot. In: Human-Robot Interact; 2015. p. 125–32. An example of how understanding how operators process visual information can improve interfaces by lowering workload.
Cambria E. Affective computing and sentiment analysis. IEEE Intell Syst. 2016;31:102–7.
Drury JL, Scholtz J, Yanco HA. Awareness in human-robot interactions. In: IEEE Int Conf Syst Man Cybern; 2003.
Jia Y, Xi N, Liu S, Wang Y, Li X, Bi S. Quality of teleoperator adaptive control for telerobotic operations. Int J Robot Res. 2014;33:1765–81.
Seo SH, Rea DJ, Wiebe J, Young JE. Monocle: interactive detail-in-context using two pan-and-tilt cameras to improve teleoperation effectiveness. In: RO-MAN; 2017.
• Phillips E, Zhao X, Ullman D, Malle BF. What is human-like?: decomposing robots' human-like appearance using the Anthropomorphic roBOT (ABOT) database. In: Human-Robot Interact. New York: ACM; 2018. p. 105–13. A robot survey that finds how certain physical features are linked to anthropomorphism and social agency.
Osawa H, Ohmura R, Imai M. Using attachable humanoid parts for realizing imaginary intention and body image. Int J Soc Robot. 2009;1:109–23.
Sharma M, Hildebrandt D, Newman G, Young JE, Eskicioglu R. Communicating affect via flight path. In: Hum. Robot Interact; 2013. p. 293–300.
Young JE, Sharlin E, Igarashi T. Teaching robots style: designing and evaluating style-by-demonstration for interactive robotic locomotion. Human–Computer Interact. 2013;28:379–416.
Tsui KM, Norton A, Brooks DJ, McCann E, Medvedev MS, Yanco HA. Design and development of two generations of semi-autonomous social telepresence robots. In: IEEE Conf. Technol. Pract. Robot Appl. IEEE; 2013. p. 1–6.
Quigley M, Goodrich MA, Beard RW. Semi-autonomous human-UAV interfaces for fixed-wing mini-UAVs. Intell Robot Syst. 2004;3:2457–62.
Bartneck C, Kanda T, Mubin O, Al Mahmud A. Does the design of a robot influence its animacy and perceived intelligence? Int J Soc Robot. 2009;1:195–204.
Okamura AM. Methods for haptic feedback in teleoperated robot-assisted surgery. Ind Robot An Int J. 2004;31:499–508.
Hacinecipoglu A, Konukseven EI, Koku AB. Evaluation of haptic feedback cues on vehicle teleoperation performance in an obstacle avoidance scenario. In: World Haptics Conf (WHC); 2013.
Marquardt N, Nacenta MA, Young JE, Carpendale S, Greenberg S, Sharlin E. The haptic tabletop puck. In: Interact. Tabletops Surfaces (ITS '09). New York: ACM Press; 2009. p. 85.
Guo C, Sharlin E. Exploring the use of tangible user interfaces for human-robot interaction. In: Hum. Factors Comput. Syst. New York: ACM Press; 2008. p. 121.
• Klopfenstein LC, Delpriori S, Malatini S, Bogliolo A. The rise of bots: a survey of conversational interfaces, patterns, and paradigms. In: Des. Interact. Syst. New York: ACM; 2017. p. 555–65. A survey of social techniques and their use in chatbots.
• Paiva A, Leite I, Boukricha H, Wachsmuth I. Empathy in virtual agents and robots. ACM Trans Interact Intell Syst. 2017;7:1–40. A survey of the importance and effects of empathy as a social skill in virtual and robotic agents.
Klemmer SR, Hartmann B, Takayama L. How bodies matter. In: Proc. 6th ACM Conf. Des. Interact. Syst. - DIS ‘06. New York: ACM Press; 2006. p. 140.
Picard RW, Vyzas E, Healey J. Toward machine emotional intelligence: analysis of affective physiological state. IEEE Trans Pattern Anal Mach Intell. 2001;23:1175–91.
Picard RW, Fedor S, Ayzenberg Y. Multiple arousal theory and daily-life electrodermal activity asymmetry. Emot Rev. 2016;8:62–75.
• Griol D, Molina JM, Callejas Z. Combining speech-based and linguistic classifiers to recognize emotion in user spoken utterances. Neurocomputing. 2019;326–327:132–40. An example of modern biometrics used for affect recognition.
• Li P, Liu H, Si Y, et al. EEG based emotion recognition by combining functional connectivity network and local activations. IEEE Trans Biomed Eng. 2019;66:2869–81. An example of modern biometrics being used for affect recognition.
Vermun K, Senapaty M, Sankhla A, Patnaik P, Routray A. Gesture-based affective and cognitive states recognition using Kinect for effective feedback during e-learning. In: Technol. Educ. IEEE; 2013. p. 107–10.
• Dafoulas G, Tsiakara A, Samuels-Clarke J, Maia CC, Neilson D, Ali AA. Investigating patterns of emotion and expressions using smart learning spaces. In: Inf. Commun. Syst. IEEE; 2019. p. 238–44. An example of how technology can affect social interaction and affect.
Balaguer C, Giménez A, Jardón A, Correal R, Martínez S, Sabatini AM, et al. Proprio and teleoperation of a robotic system for disabled persons’ assistance in domestic environments. Springer Tracts Adv Robot. 2007;31:415–27.
Hutt S, Mills C, White S, Donnelly PJ, D'Mello SK. The eyes have it: gaze-based detection of mind wandering during learning with an intelligent tutoring system. In: Educ Data Min; 2016. p. 86–93.
Kanade T, Cohn JF, Tian Y. Comprehensive database for facial expression analysis. In: Autom. Face Gesture Recognit; 2000.
Lucey P, Cohn JF, Kanade T, Saragih J, Ambadar Z, Matthews I. The extended Cohn-Kanade dataset (CK+): a complete expression dataset for action unit and emotion-specified expression. In: CVPR Work. Hum. Commun. Behav. Anal; 2010.
Conflict of Interest
The authors declare that they have no conflict of interest.
Human and Animal Rights and Informed Consent
This article does not contain any studies with human or animal subjects performed by any of the authors.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This article is part of the Topical Collection on Service and Interactive Robotics
Rea, D.J., Seo, S.H. & Young, J.E. Social Robotics for Nonsocial Teleoperation: Leveraging Social Techniques to Impact Teleoperator Performance and Experience. Curr Robot Rep 1, 287–295 (2020). https://doi.org/10.1007/s43154-020-00020-7
Keywords: Social design · Interface design