
Personal and Ubiquitous Computing, Volume 20, Issue 1, pp 51–63

Robotic experience companionship in music listening and video watching

  • Guy Hoffman
  • Shira Bauman
  • Keinan Vanunu
Original Article

Abstract

We propose the notion of robotic experience companionship (REC): a person’s sense of sharing an experience with a robot. Does a robot’s presence and response to a situation affect a human’s understanding of the situation and of the robot, even without direct human–robot interaction? We present the first experimental assessment of REC, studying people’s experience of entertainment media as they share it with a robot. Both studies use an autonomous custom-designed desktop robot capable of performing gestures synchronized to the media. Study I (\(n=67\)), examining music listening companionship, finds that the robot’s dance-like response to music causes participants to feel that the robot is co-listening with them, and increases their liking of songs. The robot’s response also increases its perceived human character traits. We find REC to be moderated by music listening habits, such that social listeners were more affected by the robot’s response. Study II (\(n=91\)), examining video watching companionship, supports these findings, demonstrating that social video viewers enjoy the experience more with the robot present, while habitually solitary viewers do not. Also in line with Study I, the robot’s response to the video clip causes people to attribute more positive human character traits to the robot. This has implications for robots as companions for digital media consumption, but also suggests design implications based on REC for other shared experiences with personal robots.

Keywords

Human–robot interaction · Social robotics · Music listening · Video watching · Digital companions · Social referencing

1 Introduction

Robots are predicted to soon move from industrial, military, and research environments to lay human spaces [13, 21, 31, 54]. Applications include robots for the home [23, 50], where they could help with chores, hobbies, or entertainment experiences. Other robots are developed for caregiving roles [6, 53, 55], education and childcare [32, 52], and work environments such as offices, stores, workshops, restaurants, and hotels [1, 44]. In fact, the past year has seen an unprecedented number of socially interactive robotics products designed for the home market (e.g., Jibo, Pepper, and Luna) and for human work environments (e.g., Baxter, Sawyer, and SaviOne).

This category of robots is generally referred to as personal robots. Acting in close proximity with humans, personal robots will play both a functional and a social role. Among other tasks, they will communicate with humans, understanding and producing social behaviors [9, 18, 37, 40].

1.1 Responding to the environment

In particular, as part of their operation, personal robots will respond autonomously to their environment, and will do so around people. This raises a key question: How does a robot’s perceived response to an external situation affect a human’s perception of the situation and of the robot?

We know that humans look to the interpretation of others to form their own perception of events and experiences, a phenomenon called “Social Referencing” [19]. Humans are also socially affected by the mere presence of others, through a process called “social facilitation” [15, 57]. In addition, studies find that humans view robots in some cases as social agents [4, 22, 36], a view sometimes called “Robotic Social Presence” [26, 34] or “Robots as Social Actors” [17].

It therefore makes sense to ask whether a robot’s reaction to an experience, or even its mere presence, can affect people’s perception of that experience and their perception of the robot. Can we design robot responses to experiences so as to elicit a particular human response in a desired way? Answering these questions could have important consequences for a host of application areas for robots, from entertainment through health care to work environments.

1.2 Robotic experience companionship (REC)

To formalize these questions, we propose the notion of robotic experience companionship (REC), which we define as the sense and outcomes of sharing an experience with a robot. In this paper, we present the first experimental assessment of this notion.

We specifically study REC in the realm of entertainment media experiences: music listening and video watching. In two studies, we examine how the sense that a robot is co-experiencing media with a person affects that person’s enjoyment of the experience, their liking of the media, and their impression of the robot.

We chose media consumption because it is a behavior that people do both in a solitary and a social manner, and the literature holds some knowledge of how social settings affect people’s media consumption. Moreover, a relatively simple robot can be programmed to respond to digital media events, allowing for precise control of the experimental conditions. The video watching part of this research also ties into the growing literature on the psychology of co-viewing [3, 5, 46].

2 Background

Can robots provide the social presence to support a joint media experience even when it occurs in a solitary setting? This question relates to the theories of social referencing, social facilitation, and social aspects of media experiences, as well as to the notion of robots as social actors.

2.1 Social referencing and social facilitation

In psychology, social referencing refers mainly to the process in which infants take cues from the reactions of others in order to infer their own appropriate reaction to a given situation. While this skill develops as early as six months of age, social referencing occurs not only in infants, but also in children and adults. In a more general sense, social referencing is the process through which one uses another person’s interpretation of the situation in order to create their own understanding of that situation [19, 20]. Social referencing happens most frequently in ambiguous situations [49] and is selective with respect to the referent [58].

The theory of social facilitation posits that people’s performance is affected when an action is performed in front of others [14, 15, 57]. The theory is divided into two paradigms: the audience effect occurs when one is being watched by passive viewers; the co-action effect occurs when the other people present are performing the same action. A leading explanation of social facilitation is that it results from the increased arousal caused by the presence of another person, which could in turn also play a role in robotic experience companionship.

2.2 Social aspects of music listening

One example of social referencing and facilitation is in the phenomenon of social music listening. As music playback technology evolves, so does the way we consume music. The introduction of portable devices has made music listening in the late twentieth century increasingly solitary [35]. This trend has recently reversed, perhaps due to the proliferation of playback opportunities and online music sharing. A recent study found that today only 26 % of music listening happens alone, compared to 69 % in the 1980s [41].

While it is apparent that music listening is often a social phenomenon (e.g., parties and public concerts), the social aspects of music listening have not been widely explored. A recent book on shared consumption of music deals mostly with online sharing of music and not with physically colocated listening [42]. And while North et al. [41] found that people enjoy music less when they are with others, their finding could not be separated from unintended public listening, in which participants did not control the music they heard. They found, in contrast, that participants paid more attention to music when listening with their boyfriend or girlfriend, or even with “others,” than alone.

Fieldwork shows that social effects can sway people’s opinions about music played by a live band: participants who were placed with confederates saying positive things about the band rated the band higher, stayed longer, and were more likely to listen to the band again [27]. Other work found that people move more vigorously to music when listening to it with others [12], illustrating another social aspect of music listening.

2.3 Social aspects of video watching

Viewing of video can also be social: it is an activity that occurs with strangers and with friends, in public and in private. Co-viewing has been studied in the context of the home via people meters, showing that it leads to less channel browsing and increased viewing time [39]. Other co-viewing research has focused on the parent–child relationship and the effect on children’s comprehension and emotions [43, 47]. Some studies have been labeled as “social viewing,” but were in fact concerned with the effects of laugh tracks in comedy [8, 45]. Surprisingly, experimental research on physically colocated co-viewing is scarce and recent, focusing on the context of in-groups and out-groups, for example in relation to race [3, 5] or gender [51].

With the proliferation of online video Web sites, there has been some research on computer-mediated social co-viewing. One study surveyed users of YouTube with respect to reasons for video viewing and video sharing [25]. Another looked at online text chatting while watching online video and found that it had a positive effect on social interaction, even though participants were more distracted from the video; the ability to chat did improve participants’ perception of bad videos [56].

Despite the existing work on social music listening and video watching, we know of no prior work evaluating robotic companions in music listening and video watching.

2.4 Robots as social actors

Computers can provide users with a sense of “being with another” [7], and so can robots. We know that social presence can be a mediating factor in human–robot social responses [36] and that people often treat robots as social actors [17, 22]. In one study, a robot was perceived as more engaging, credible, and informative than an animated character due to its physical embodiment [33]. A robot’s physical presence has also been shown to affect its social presence in relation to personal space, trust, and respect [4]. In an additional study, a robot’s movement to music influenced children’s proclivity to dance to the music [38].

That said, the question of REC, the effect of a robot’s perceived experience on the human’s evaluation of the situation, without direct human–robot interaction, has not been investigated.

3 Overview of current research

In light of the above, we present two experiments, one involving music listening (Study I) and one involving video watching (Study II), examining the effect of a robotic media companion on people’s media experience and their evaluation of the robot. We expect that the degree of REC will be associated with positive evaluations of both the experience and the robot, particularly when the REC is synchronized with people’s subjective experiences. We also expected this effect to be more pronounced among people whose listening and viewing habits are social (versus solitary), because social listeners should be more likely to draw on social referents, be they human or robotic, in their subjective experiences. Both studies manipulated REC as well as the degree to which the robot companionship was responsive to the auditory and visual cues. We also measured participants’ music listening and video watching habits to test whether they serve as moderators.

4 Robotic platform

To study robotic media companionship, we use the robot Travis, a custom-built autonomous robotic loudspeaker and media companion (Fig. 1). We built the robot as a research platform designed to study human–robot interaction as it relates to media consumption, nonverbal behavior, timing, and physical presence. The robot is capable of performing autonomous behaviors that can be synchronized to media played through its speakers and to other external events. In addition, the robot has software enabling it to make eye contact with the user, and it has a number of preprogrammed gestures for social communication.
Fig. 1

Travis, a robotic speaker dock and music listening companion

To study robotic companionship, Travis is sized and shaped so that, when placed on a desk, its head is roughly in line with a seated person’s head in front of it. The robot’s appearance is intended to elicit a sense of companionship with the human user. Its body is thus designed to evoke a pet-like relation, with a size comparable to a small animal, and an organic, but not humanoid form.

4.1 Hardware

Travis is a five-degree-of-freedom robot. Each degree of freedom is controlled via direct-drive using a Robotis Dynamixel MX-28 servo motor. The robot has two speakers on the sides of its head, acting as a stereo pair, and one subwoofer speaker pointing downwards in the base. In addition, the robot contains an Android Accessory Development Kit (ADK) control board, which enables it to be controlled through a smartphone. Finally, the robot’s electronics include a digital amplifier with an audio crossover circuit (Fig. 2).
Fig. 2

Mechanical structure and hardware components

4.2 Software

The robot’s control software runs on the smartphone placed in the robot’s hand [28]. The software is responsible for playing songs through the speakers, generating dance moves based on the song’s beat, responding to voice commands, triggering nonverbal behaviors, and responding through remote communication to events received on the robot’s wireless network. In addition, the robot can maintain eye contact by using the camera of the mobile device.
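
To make the beat synchronization concrete, the following is a minimal sketch of how gestures can be fired against beat timestamps. This is not the robot’s actual implementation (which runs on Android); the function name, the gesture callback, and the assumption of precomputed beat times are all ours:

```python
import time

def play_with_beat_gestures(beat_times_s, do_gesture, clock=time.monotonic):
    """Trigger one dance gesture per beat. `beat_times_s` holds beat
    timestamps in seconds from song start, e.g., produced by a beat
    tracker run on the audio before playback (an assumption)."""
    start = clock()
    for beat in beat_times_s:
        # Sleep until the next beat is due, then fire the gesture.
        delay = beat - (clock() - start)
        if delay > 0:
            time.sleep(delay)
        do_gesture()

# Example: "nod" on every beat of a 120 BPM song (one beat every 0.5 s).
if __name__ == "__main__":
    beats = [i * 0.5 for i in range(8)]
    play_with_beat_gestures(beats, lambda: print("nod"))
```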

The full mechanical and electronic system, as well as the autonomous behavior generation and trajectory control, is beyond the scope of this paper. Details can be found in [10, 11, 28, 30].

5 Study I: music listening

Our first study was a controlled laboratory experiment, in which participants listened to songs through the robot’s speakers, with the robot placed on a table next to them. We manipulated the robot’s response to the music.

5.1 Research questions and hypotheses

Our research pertains to the effect of a robotic companion on music listening and impressions of the robot, as well as these measures’ interaction with people’s listening habits. In particular, we wanted to know whether a robotic companion and people’s habits affect their liking of songs, their enjoyment of listening to the song, and their perception of the robot. To evaluate these questions, we tested the following hypotheses:

Song Liking A robotic companion listening to a song with a person will cause them to like the song better.

Experience Enjoyment A robotic companion listening to a song with a person will cause them to enjoy listening to music more.

Agent Impression A robotic companion listening to a song with a person will cause them to attach more positive character traits to the robot.

Similarity to Self A robotic companion listening to a song with a person will cause them to consider the robot to be more similar to themselves.

Listening Habits Listening habits act as a moderating factor to the above-mentioned effects, such that people who usually listen to music with others will show more positive responses to the robot’s co-listening.

5.2 Method

In this experiment, we activated only a simple beat-tracking behavior in the robot, which was kept constant throughout each song. We did not activate more complex choreographies or any of the robot’s other nonverbal behaviors. In other words, the robot’s only behavior was that, when a song was played, it moved in a constant repetitive motion according to the beat, and stopped moving when the song was stopped.

5.2.1 Independent variables

We manipulated one between-subject variable, the robot’s movement response to the music. Participants were randomly assigned to one of three conditions. In the On Beat condition, the robot performed accurate on-beat movements to the song. In the Static control condition, the robot did not move to the music, although the music still came out of the robot’s speakers. In order to evaluate how much the accuracy of the robot’s response affects people’s responses, we also added an intermediate Off Beat condition. In that condition, the robot’s movements were at the same tempo as the music, but consistently off beat. Some beats were skipped completely, and some were time-shifted by a random amount of ±100 to 500 ms.
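
As an illustration, the Off Beat perturbation could be generated along the following lines. This is a sketch under our own assumptions; in particular, the skip probability is not reported in the text:

```python
import random

def make_off_beat_times(beat_times_s, skip_prob=0.2, seed=0):
    """Perturb ground-truth beat times to build the Off Beat condition:
    drop some beats entirely and time-shift the rest by a random
    +/- 100 to 500 ms, keeping the overall tempo intact."""
    rng = random.Random(seed)
    perturbed = []
    for beat in beat_times_s:
        if rng.random() < skip_prob:
            continue  # skip this beat completely
        shift_s = rng.uniform(0.1, 0.5) * rng.choice([-1, 1])
        perturbed.append(max(0.0, beat + shift_s))
    return perturbed
```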

In addition, we measured one between-subject variable of music listening habits, solitary or social, by asking three questions: “I usually listen to music alone,” “I enjoy music more when I am listening alone,” and “I usually listen to music with friends” (reverse scale). We separated participants into two habit groups by using a median cut and discarding median-neutral participants (\(n=7\)).
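
A minimal sketch of this habit scoring and median cut follows, assuming the third item is reverse-scored on the 7-point scale and that higher composite scores indicate more solitary habits (the scoring direction is our assumption):

```python
import statistics

def listening_habit_groups(responses):
    """Split participants into solitary vs. social listeners.
    `responses` maps participant id -> (alone, enjoy_alone, with_friends),
    each a 1-7 rating; the "with friends" item is reverse-scored."""
    scores = {
        pid: (alone + enjoy_alone + (8 - with_friends)) / 3.0
        for pid, (alone, enjoy_alone, with_friends) in responses.items()
    }
    cut = statistics.median(scores.values())
    return {
        pid: ("solitary" if s > cut else "social")
        for pid, s in scores.items()
        if s != cut  # median-neutral participants are discarded
    }

# Example: "p1" mostly listens alone, "p2" mostly with friends.
groups = listening_habit_groups({"p1": (7, 6, 2), "p2": (2, 3, 6)})
```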

5.2.2 Participants

A total of 67 people participated in the experiment (46 female, 17 male, 4 did not respond). The participants were multi-national, and all communication was done in English. They were recruited from the International School of Communication at a local college, in return for class credit.

5.2.3 Procedure

The experiment was conducted in a small office room with controlled lighting and no outside distractions. The room contained a desk on which a computer monitor and mouse were placed, as well as the robotic speaker device (Fig. 3). Each participant entered the room individually with the experimenter. They were asked to sit in a chair at the end of the desk that was furthest from the computer monitor and the robot, facing the back of the robot. The experimenter explained the experiment guidelines and received informed consent from the participant while they were seated. The participant then moved to the other end of the desk to sit by the monitor. In front of the monitor was the “song liking” questionnaire with a pen, allowing the participant to fill it out while listening to each individual song.
Fig. 3

Experimental setup for music listening study

Each participant was told that they would be listening to three songs through a prototype speaker device. The robot was situated slightly to their left, about 30 centimeters away. At no point was the device referred to as a “robot,” but only as a “speaker device” in order to not prime subjects in the Static group to expect movement from the device.

The participant was shown briefly how to use the computer program to switch between songs. They were told to answer three questions measuring song liking before moving on to the next song. Participants were also instructed that they could skip to the next song at any point and were not required to listen to each song to the end. Finally, they were told that it was supposed to be a fun experience and were encouraged to enjoy themselves and have a good time. The participant was then left alone in the office to complete the experiment, which started by clicking a button on the screen, triggering the first song. To stop the current song, they clicked the same button, filled out the three song liking questions if they had not yet done so, and clicked the button again to start the next song. All participants listened to the same three songs, taken from different genres and periods. To prevent order effects, the order of the songs was randomized for each participant.

Upon completion of the experiment, they clicked another button labeled “Call Experimenter,” and the experimenter reentered the room. The experimenter then shut off the robot in front of the participant and asked them to return to the chair at the end of the desk to fill out the post-experiment questionnaire. This was so that participants would not be influenced by the robot’s movement, or lack thereof, while completing their evaluation of the robot.

5.2.4 Measures

All measures are on a seven-point scale (from “strongly disagree” to “strongly agree”) unless otherwise noted.

Song Liking We estimate the real-time liking of the played songs by taking the mean of the response to three questions answered during or immediately after listening to each song, “I enjoyed this song,” “I believe others would enjoy this song,” and “I would like to listen to this song again in the future.” The total song liking score is the mean of the score for all three songs (Cronbach’s \(\alpha = 0.75\)). This measure estimates participants’ rating of the songs themselves.

Experience Enjoyment We estimate the overall enjoyment of the experience by taking the mean of the response to two questions answered in the post-experiment questionnaire: “My overall experience was enjoyable,” “My overall experience was boring” (reverse scale). Cronbach’s \(\alpha\) for this measure was 0.74.

Impression of Agent Participants rated their impression of the robotic agent as a composite measure of five items: the robot’s perceived friendliness, confidence, warmth, cooperativeness, and sociability. This measure was validated in previous studies [24, 48], and Cronbach’s \(\alpha\) in our data was 0.79.

Human–Robot Similarity Finally, we asked participants to rate on a seven-point scale to what extent the robot was similar to them.
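
The internal consistency values (Cronbach’s \(\alpha\)) reported for these composites can be computed from a participants-by-items score matrix. A minimal sketch with hypothetical ratings for the three song liking items:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_participants x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # per-item variance
    total_var = items.sum(axis=1).var(ddof=1)      # variance of sum scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 7-point responses to the three song liking items:
ratings = np.array([[6, 5, 6], [7, 6, 7], [4, 5, 3], [5, 5, 6]])
liking_per_participant = ratings.mean(axis=1)  # the composite score
print(cronbach_alpha(ratings))
```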

5.3 Results

We denote our experimental conditions On Beat, Off Beat, and Static. No significant differences were found between conditions regarding gender, age, familiarity with artificial intelligence, and past experience working with robots.

5.3.1 Sensitivity to beat precision

Somewhat surprisingly, participants were overall not consciously aware of the robot’s beat precision. When asked whether the device “moved on or off beat,” 22/23 (96 %) participants in the Off Beat condition said that the robot moved on beat. This is on par with the On Beat condition, where 21/22 (95 %) perceived the robot to move on beat.
Fig. 4

Means and SEs of perceived sense of joint listening per condition

5.3.2 Manipulation check

Our conditions were intended to manipulate the sense of joint listening. We confirmed our manipulation by a composite measure of two items, asking whether the device “was listening to the song with” the participants, and whether it “enjoyed the songs.” Cronbach’s \(\alpha\) for this composite was 0.84. One-way ANOVA confirms that the manipulation had a significant effect on the sense of joint listening \([F(2{,}64)= 18.192,\, p <.001,\, \eta ^2 = .36]\). Multiple comparisons, applying the Bonferroni correction, revealed that participants in both the On Beat (\(M= 5.29, SD= 1.32\)) and Off Beat (\(M= 5.78, SD= 1.15\)) conditions showed significantly higher means than the control group (\(M= 3.32, SD= 1.8\)). No significant differences were found between the On Beat and Off Beat conditions (Fig. 4).
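
For reference, this analysis pattern (an omnibus one-way ANOVA followed by Bonferroni-corrected pairwise comparisons) can be sketched as follows; the function is illustrative, not the authors’ analysis script:

```python
from itertools import combinations
from scipy import stats

def anova_with_bonferroni(groups):
    """`groups` maps condition name -> list of per-participant scores.
    Returns the omnibus F and p, plus Bonferroni-corrected pairwise
    p-values from independent-samples t-tests."""
    f, p = stats.f_oneway(*groups.values())
    pairs = list(combinations(groups, 2))
    pairwise = {}
    for a, b in pairs:
        _, p_unc = stats.ttest_ind(groups[a], groups[b])
        pairwise[(a, b)] = min(1.0, p_unc * len(pairs))  # Bonferroni
    return f, p, pairwise
```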

5.3.3 Response to music

Song Liking The composite song liking variable was collected in real time during or after each song and estimated how much participants liked the songs they listened to. Song liking for the On Beat condition is highest (\(M= 5.89,\, SD= .62\)), for Static is lowest (\(M=5.36,\, SD= .87\)), and for Off Beat in-between (\(M= 5.6,\, SD= .92\)) (Fig. 5).

A one-way ANOVA did not result in a significant result \([F(2{,}64) = 2.202,\, p=0.12]\). However, a planned contrast analysis between the On Beat and the Static conditions yielded a significant difference \([t(42)= 2.095,\, p<.05]\).
Fig. 5

Means and SEs of experienced song liking per condition. The difference of means between the Static and On Beat conditions is significant at \(p<0.05\)

Experience Enjoyment The composite experience enjoyment variable was measured at the end of the study and estimated whether the overall experience was enjoyable to participants. A one-way ANOVA revealed no significant differences between the conditions \([F(2{,}64) < 1]\).

5.3.4 Perception of the robot

Impression of Agent The impression of agent composite variable measured positive human traits attributed to the robot. A one-way ANOVA yielded significant results, \([F(2{,}64)= 6.16,\, p < .01,\, \eta ^2 = .19]\). Planned contrasts show that the On Beat (\(M=5.37,\, SD=.92\)) and Off Beat (\(M= 5.36,\, SD= .70\)) conditions significantly differ from the Static control condition \([M= 4.55,\, SD= 1.01,\, t(64)=3.53,\, p<0.001]\), but not from each other (Fig. 6).
Fig. 6

Means and SEs of perceived impression of agent per condition

Human–Robot Similarity A one-way ANOVA on the human–robot similarity scale yielded significant results (\(F(2{,}64)= 3.64,\, p < 0.05,\, \eta ^2 = .11\)). Planned contrasts revealed that participants in the On Beat (\(M= 3.95, SD= 1.62\)) and Off Beat (\(M= 3.83, SD= 1.72\)) conditions felt that the device was more similar to them compared to the control condition (\(M= 2.64, SD= 2.01\)), \(t(64)=2.69,\, p<0.01\); the two responsive conditions did not differ from each other (Fig. 7).
Fig. 7

Means and SEs of perceived human robot similarity per condition

5.3.5 Effects of listening habits

To measure the interaction between individual differences in music listening habits and the above effects, we performed a two-way ANOVA for condition and listening habits (solitary vs. social as described above).

The reported significant main effect for the “Impression of Agent” variable was qualified by a significant disordinal two-way interaction for Condition \(\times\) Listening Habits, \([F(2{,}54)= 5.02, p < .01,\,\eta ^2_p = .13]\). Simple effects analysis revealed that among participants who usually listen to music with others, both the On Beat \(({M}= 5.83, {SD}= .69)\) and Off Beat \(({M}= 5.57, {SD}= .35)\) conditions rated the robot higher compared to the Static condition \(({M}= 4.44, {SD}= .66)\). Participants who usually listen to music alone did not; in fact, the On Beat robot was rated lower than the one in the Static control condition (Fig. 8). A similar analysis of the interaction for the “Experience Enjoyment” variable was not found to be significant \([F(2{,}54) < 1]\).
Fig. 8

Interaction between participants’ listening habits and experimental condition on impression of agent. Error bars indicate standard errors

In sum, we find that the robot’s response to the music affects people’s sense of co-listening, their liking of the songs they are listening to, their impression of the robot, and the sense that the robot is similar to them. These findings are moderated by participants’ music listening habits.

6 Study II: video viewing

Building on Study I, we conducted a second controlled laboratory experiment on the topic of video watching. In this study, participants watched a clip from a comedy show, with the robot situated on a table next to them, also facing the video screen. We manipulated both the robot’s presence and its response to the video clip.

6.1 Research questions and hypotheses

We were interested in how a robot co-viewer affects a person’s enjoyment of watching a video clip and the positive human-like traits attributed to the robot. We also wanted to see whether people’s video watching habits affect their response to the robotic co-viewing. To evaluate these questions, we tested the following hypotheses:

Experience Enjoyment A robotic companion watching a video with a person will cause them to enjoy the video watching experience more.

Agent Impression A robotic companion responding to watching a video with a person will cause them to attach more positive human character traits to the robot.

Viewing Habits Viewing habits act as a moderating factor to the above-mentioned effects, such that people who usually watch videos with others will show more positive responses to the robot co-viewing.

6.2 Method

Our participants watched a clip from a comedy show either alone or with a robotic co-viewing companion. We made use of the robot’s capacity to synchronize gestures with media events and had the robot express one of a few preprogrammed socially communicative gestures every time there was canned laughter on the video’s laugh track. These gestures included moving its head in a laughter-like manner, nodding, leaning forward, looking back-and-forth between the screen and the participant, or a combination of the above.
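
A sketch of this laughter-triggered gesture selection follows, under our own assumptions: the gesture names, the combination probability, and the laugh-track timestamps are all hypothetical placeholders for the study’s preprogrammed repertoire:

```python
import random

# Hypothetical labels for the gesture repertoire described above.
GESTURES = ["laugh_shake", "nod", "lean_forward", "glance_screen_user"]

def gestures_for_laugh_track(laugh_times_s, seed=42):
    """Pick one preprogrammed social gesture (occasionally a pair)
    for every canned-laughter event on the clip's laugh track."""
    rng = random.Random(seed)
    schedule = []
    for t in laugh_times_s:
        gesture = rng.choice(GESTURES)
        if rng.random() < 0.3:  # occasionally combine two gestures
            gesture = gesture + "+" + rng.choice(GESTURES)
        schedule.append((t, gesture))
    return schedule
```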

6.2.1 Independent variables

We manipulated one between-subject variable, the robot’s presence and response to the video, using three levels. Having found in Study I that people were not sensitive to the precise timing of the response, we only used a single responsive condition. But to better separate the robot’s presence from its behavior, we added an additional baseline condition of people watching the video on their own, without a robot present.

Participants were thus randomly assigned to one of three conditions: In the Control condition, Travis was not present in the experimental room at all, and the participant watched the video on their own on the computer monitor. In the Present condition, Travis was present in the experimental room and facing the video screen on which the clip was played, but was not responding to the video. In the Responding condition, Travis was animated and responded to the content of the video, using head movements and gestures related to laughter or enjoyment, at funny points in the video.

In addition, we measured a single between-subject categorical variable, participants’ video watching habits (solitary or social). We estimated this variable by asking a single forced-choice question: “I usually watch video when I am: (a) by myself; (b) with others.”

6.2.2 Participants

A total of 91 undergraduate students (60 female, 31 male) participated in the experiment. Participants were multi-national, and all communication was done in English. They ranged in age from 17 to 31 (\(M= 22.53,\, SD= 2.8\)) and were recruited from the International School of Communication at a local college, in return for class credit.
Fig. 9

Desk layout for video watching study. a The robot looking at the video; b the robot looking at the participant

6.2.3 Procedure

The experiment was conducted in a closed room with controlled lighting and no outside distractions. The room contained a desk on which a computer monitor was placed, and on which Travis was present in the two presence conditions, Present and Responding (Fig. 9). An empty box of approximately the same size was placed in the robot’s stead in the Control condition.

Participants were told that they were participating in a video programming study. They were first taken to a vacant room where they signed written consent forms, and completed a series of questionnaires, recording their demographic information. The forms were immediately inspected for missing information, and the participants were asked to fill out any missing data.

Next, each participant was escorted to the room where the experiment took place. They were asked to sit at a desk with a computer screen on it, and were told they were going to watch a four-minute-long video and fill out a questionnaire after they were done. The experimenter then left the room, leaving participants to watch the video.

Following previous co-viewing research [3], we chose a clip from a comedy show. The video was a 4-min segment from the TV series “Friends,” which a pretest found to be perceived as funny, and which was also validated not to induce ceiling effects for video enjoyment. The video pretest was conducted online: participants in the pretest watched the 4-min-long segment and filled out the experience enjoyment questionnaire we used in our experiment (pretest \(n= 53,\, M= 5.73, \,SD= .84\)).

After they finished watching the video, participants went back to the first vacant room to complete the dependent variable questionnaire. Finally, they were thanked for their participation and debriefed.

6.2.4 Measures

All measures are on a seven-point scale (from “strongly disagree” to “strongly agree”) unless otherwise noted.

Experience Enjoyment We estimated participants’ enjoyment of watching the video using a counter-balanced 14-item experience enjoyment questionnaire. The measure included statements such as, “The video was enjoyable,” “I would like to watch similar videos in the future,” and “I would recommend other people to watch the video.” Cronbach’s \(\alpha\) for this measure was 0.94.

Impression of Agent Participants rated their impression of the robotic agent using a counter-balanced 10-item composite measure, including the questions used in the measure in Study I, in addition to positive character traits indicating the robot’s intelligence (based on [2, 24, 29, 48]). Cronbach’s \(\alpha\) for the measure in this experiment was 0.72.

6.3 Results

We denote our three manipulated experimental conditions Control, Present, and Responding. We analyzed group differences of demographic variables between conditions. No significant differences were found between conditions regarding age, gender, and prior experience with AI or robotics.

6.3.1 Experience enjoyment

This seven-point composite measure estimated people’s enjoyment of watching the video clip. There was no main effect for experimental condition \([F(2{,}89)<1]\). There were, however, interaction effects with the other independent variable, viewing habits, as well as main effects for viewing habits:

Interaction with Viewing Habits A 3 (Condition: Control, Present, and Responding) \(\times\) 2 (Viewing Habits: Solitary and Social) analysis of variance (ANOVA) on experience enjoyment revealed a significant main effect for Viewing Habits \([F(1{,}85)= 5.00, p < .05, \eta ^2_p = .06]\). Solitary viewers (\(M= 5.68, SD= .99\)) enjoyed the video more when compared to Social viewers (\(M= 5.17, SD= 1.13\)).
Fig. 10

Interaction between participants’ viewing habits and experimental condition on experience enjoyment. Error bars indicate standard errors

This effect was qualified by a Condition \(\times\) Viewing Habits interaction \([F(2{,}85)= 7.13, p = .001, \eta ^2_p = .14]\) (Fig. 10). Tests for simple main effects revealed that among participants who prefer watching videos in a social setting, both presence conditions (Responding: \(M= 5.52, SD= 0.9\), and Present: \(M= 5.33, SD= 1.16\)) differed significantly from the control condition \([M= 4.07, SD= 0.88, F(2{,}30)= 4.32, p < .05,\eta ^2_p = .22]\). Condition had no significant effect among participants who prefer watching videos by themselves \([F(2{,}55)= 2.70\), n.s.]. However, the control condition \((M=6.06, SD=0.66)\) was higher than both presence conditions (Responding: \(M= 5.46, SD= 1.19\), and Present: \(M= 5.45, SD= 1.03\)).
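
This 3 \(\times\) 2 factorial analysis can be sketched, e.g., with statsmodels; the column names and example data below are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

def condition_by_habits_anova(df):
    """3 (Condition) x 2 (Viewing Habits) ANOVA on experience enjoyment."""
    model = smf.ols("enjoyment ~ C(condition) * C(habits)", data=df).fit()
    return anova_lm(model, typ=2)  # main effects and interaction F-tests

# Hypothetical data: two participants per Condition x Habits cell.
df = pd.DataFrame({
    "condition": ["Control", "Present", "Responding"] * 4,
    "habits": (["Social"] * 3 + ["Solitary"] * 3) * 2,
    "enjoyment": [4.1, 5.3, 5.5, 6.1, 5.5, 5.4,
                  3.9, 5.4, 5.6, 6.0, 5.3, 5.5],
})
print(condition_by_habits_anova(df))
```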

6.3.2 Impression of the robot

A two-level (Condition: Present and Responding) one-way ANOVA on participants’ impression of the robotic agent yielded a significant main effect of condition \([F(1{,}61)= 16.87, p < .001, \eta ^2 = .28]\), such that participants in the Responding condition (\(M= 4.63, SD= 0.77\)) attributed Travis with more positive characteristics when compared to the Present condition (\(M= 3.87, SD= .70\)) (Fig. 11). We did not find a main effect of viewing habits, \(F(1{,}61) < 1\).
Fig. 11

Means and SEs of impression of robotic agent per condition. Error bars indicate standard errors

Interaction with Viewing Habits A 2 (Condition: Present and Responding) \(\times\) 2 (Habits: Social and Solitary) ANOVA did not show an interaction between experimental condition and viewing habits with respect to the impression of the robot’s perceived positive characteristics.

In sum, we find that the robot’s presence affects people’s enjoyment of the video watching experience, moderated by their video watching habits. The robot’s response to the video also affects people’s perception of the robot.

7 Discussion

In two experiments, we evaluated the effects of sharing a media experience with a robotic companion. We manipulated the robot’s presence and response, and used both music and video as media. In addition, we examined the moderating effect of people’s media consumption habits—solitary or social.

This research is part of a broader investigation of the notion of robotic experience companionship (REC), the effects of sharing an experience with a robot on the experience itself, and on the perception of the robot. To the best of our knowledge this is the first experimental study of REC.

7.1 Music listening

In music co-listening, we find that the robot’s beat-response causes people to feel that the robot is listening to the music with them. Also, our results support our Song Liking hypothesis: A robot that responds in synchrony to music causes participants to rate the same songs significantly higher than a robot that just plays the song without responding to the music. In other words, people’s opinion of a song can be heightened by a robot’s dance-like response to the song.

Interestingly, although participants did not report noticing the robot’s imprecision in the Off Beat condition, we observe a linear trend: The Off Beat song liking measure falls roughly halfway between the two more extreme conditions, indicating a relationship between the synchrony of the robot’s response and people’s opinion of the music.

Furthermore, both our hypotheses regarding participants’ perception of the robot—positive character traits, and the sense that the robot is similar to them—were supported, with significant differences between the robot response conditions and the control condition. People perceived the responding robot as having more positive traits, whether it was on beat or off beat, and were more likely to rate it as being similar to them.

Interaction analysis suggests that individual differences based on people’s listening habits are in play: People who usually listen to music together with others rated the music-responsive robot significantly more positively. Solitary listeners not only did not show this effect, but actually showed a decrease in their rating of the robot when it responded to the music. We found a similar trend (albeit not significant) in the rating of their experience enjoyment, with social listeners enjoying the session more with a responsive robot, and solitary listeners enjoying the session less when the robot responded to the music.

7.2 Video watching

Many of these findings carry over to co-viewing a funny video with a robot. We find support for the robot’s behavior affecting people’s perception of the robot. Participants attached significantly more positive human character traits to a robot that responded to the video it was watching with them, compared to the same robot being present and looking at the video but not responding to it. This provides additional support for the results of Study I and fortifies the potential of shared experiences as a mechanism for a robot to create a positive impression on humans.

Also in line with Study I, our results emphasize the importance of existing habits for robotic experience companionship. Although there was a significant main effect whereby habitually solitary viewers enjoyed watching the funny video more than social viewers, the social viewers enjoyed the video significantly more when watching it with a robot present. This leads us to believe that people who like to watch video with friends carry this preference over to the situation where their viewing companion is a robot, even to the point of reversing their baseline enjoyment of the video. Notably, the reported increase in video watching enjoyment occurred regardless of whether the robot responded to the video or was merely present and looking in the direction of the screen.

7.3 General discussion

The fact that participants liked songs better with the robot responding to the music suggests a sort of “robotic social referencing”—a robot’s perceived enjoyment of an event influencing people’s perception of the same event. While we focused on music listening, this effect can have implications beyond media enjoyment, and suggests a novel role for personal robots as contributors to people’s perception of a situation.

We find repeated indications that people rate a robot significantly higher on positive human character traits, and even as significantly more similar to them when the robot responds to the media they co-experience. This finding is particularly salient since this is a between-subject design, and participants were not primed to think about the device as a robot in the non-responding conditions, and should therefore not have had any expectations of the device’s behavior.

This has design implications for personal robots. Today, much of personal robotics design thinking is focused on appearance. Our studies point to an alternative way to affect people’s perceptions of robots: The robot’s perceived traits could be positively influenced by causing the robot to respond with people (and perhaps similarly to them) to an external event, making the robot seem more “like them.”

We also found repeated evidence that individual differences affect the response to robotic media companions. In particular, we found that habitually social listeners preferred a responsive robot, whereas solitary listeners found it detrimental for the robot to seem to be listening to music with them. Similarly, social video watchers preferred watching video with a robot, while solitary viewers did not. This contributes to the relatively new field of personality-based human–robot interaction (e.g., [16]) and suggests that robotic experience companionship might not be for everyone, but could be more appropriate for people who enjoy experiences socially more than on their own.

7.4 Limitations

Our manipulation of the robot’s co-viewing behavior still merits further exploration. While the appropriate behavior for a robot enjoying music is quite straightforward, and a robot moving to the music is easily understood, it is not clear what the appropriate behavior of a robot would be to indicate that it is enjoying a funny video clip. In fact, it is not even clear how to express laughter in a faceless non-anthropomorphic robot. These questions are also more crucial in video watching compared to music listening, since we can assume that people’s visual attention is focused on the video, and not the robot. We are now experimenting with a variety of laughter responses to further investigate this issue.

Also, following [3], we only tested video of a single genre. REC might have a stronger effect on dramatic scenes, on scenes in which the robot might have a stake [51] (e.g., a clip from a robot-themed movie), or on scary scenes. In the latter, the robot’s presence might reassure participants. The investigation of robotic co-viewing therefore needs to be extended to different genres and content topics.

In the music listening study, we believed that our beat precision manipulation was distinctly noticeable. We were therefore surprised not to find much difference between a precisely moving robot and a robot that skips beats and errs on most other beats relative to the music. Participants did not consciously detect the beat precision, and for all but one measure (Song Liking), both conditions behaved as if they were the same. One explanation could be the robot’s high novelty in both conditions, masking the robot’s actual movement. Another could be that the manipulation was too subtle relative to people’s sensitivity to choreography precision. Also, participants in our Off Beat condition did not have any point of reference to estimate how well the robot was performing, possibly supporting their sense that this was the robot’s “best attempt” at syncing to the music.

Finally, we should note that our sample was made up of students, who—compared to the general population—are young and tech-savvy. In particular when it comes to the evaluation of media technology, this might not be representative of the general population.

8 Conclusion

In this work we presented the notion of robotic experience companionship (REC), a person’s sense that they are sharing an experience with a robot.

We presented a first experimental study of REC in the area of music co-listening and video co-viewing. Both studies used a custom-built autonomous desktop robot capable of responding to music beats and to cues in a video clip. In two between-subject designs, we evaluated the effects of the robot’s response on people’s liking of songs, their enjoyment of listening to music and watching video, and their perception of the robot. We also examined the interaction of people’s prior media consumption habits with these variables.

Our results suggest that a robot responding to music can make people feel that the robot is co-listening and is similar to them, and that a robot that responds to music or video is perceived as having more positive human character traits. We also found that social music listeners and video viewers are more positively affected by the robot’s response.

This points to the use of robotic companions as part of a new kind of digital media playback technology, enhancing some people’s enjoyment of the media and of the media experience. It also has design implications, suggesting REC as a tool for robot designers to achieve affinity toward a robot.

Our study contributes to the existing social referencing literature in that it looks at a sort of social referencing mechanism caused by a robot. We also go beyond most of the existing human–robot interaction literature, in that we do not look at the effects of directly interacting with a robot, but instead at the “side effects” of a peripheral robot responding to an external situation. Finally, our findings fortify the view that users’ individual differences, such as habits, should be taken into account when designing robot behavior.

While we only examined REC in the context of media experiences, robotic experience companions could shape people’s perception of external occurrences in a variety of applications, from health and elder care (making an unpleasant medical procedure more positively perceived), to work environments (encouraging people to be more accepting of a new policy), or even just for making standing at a traffic light less boring, and discouraging crossing on a red light. This, of course, merits further research in these individual application areas.

As a phenomenon, REC can be epiphenomenal or intentional. It is epiphenomenal in the sense that a robot not designed to have a particular effect on people, but behaving in a certain way, could inadvertently affect bystanders, positively or negatively. On the other hand, REC effects could be intentionally designed into a robot’s behavior in order to bring about a certain outcome in the human bystander, and could even be the robot’s main purpose.

In sum, to design robots for human environments, we need to better understand the mechanisms of REC. Robot designers need to be cognizant of its potential effects when developing personal robots. Conversely, people’s reactions to events could be purposefully manipulated by a robot’s apparent reaction to them. This could help build robots that cause people to enjoy positive activities more, or at least “sweeten a bitter pill” when a negative situation is unavoidable.

Acknowledgments

The authors would like to thank Shoshana Krug for assistance in running the study, as well as Avital Mentovich and Oren Zuckerman for valuable comments on an earlier draft of this paper.

References

  1. Acosta L, González E, Rodríguez JN, Hamilton AF et al (2006) Design and implementation of a service robot for a restaurant. Int J Robot Autom 21(4):273
  2. Bailenson JN, Yee N (2005) Digital chameleons: automatic assimilation of nonverbal gestures in immersive virtual environments. Psychol Sci 16(10):814–819
  3. Banjo OO, Appiah O, Wang Z, Brown C, Walther WO (2015) Co-viewing effects of ethnic-oriented programming: an examination of in-group bias and racial comedy exposure. J Mass Commun Q 92(3):662–680
  4. Bainbridge WA, Hart J, Kim ES, Scassellati B (2008) The effect of presence on human–robot interaction. In: RO-MAN 2008—the 17th IEEE international symposium on robot and human interactive communication. IEEE
  5. Banjo OO (2013) For us only? Examining the effect of viewing context on black audiences’ perceived influence of black entertainment. Race Soc Probl 5(4):309–322
  6. Bemelmans R, Gelderblom GJ, Jonker P, de Witte L (2012) Socially assistive robots in elderly care: a systematic review into effects and effectiveness. J Am Med Dir Assoc 13(2):114–120.e1
  7. Biocca F, Harms C, Burgoon JK (2003) Toward a more robust theory and measure of social presence: review and suggested criteria. Presence Teleoperators Virtual Environ 12(5):456–480
  8. Bore I-LK (2011) Laughing together? TV comedy audiences and the laugh track. Velvet Light Trap 68:24–34
  9. Breazeal C (2004) Social interactions in HRI: the robot view. IEEE Trans SMC Part C Spec Issue Hum Robot Interact 34(2):181–186
  10. Bretan M, Hoffman G, Weinberg G (2015) Emotionally expressive dynamic physical behaviors in robots. Int J Hum Comput Stud 78:1–16
  11. Bretan M, Weinberg G (2014) Chronicles of a robotic musical companion. In: Proceedings of the 2014 conference on new interfaces for musical expression. University of London
  12. Bruyn LD, Leman M, Moelants D (2009) Does social interaction activate music listeners? In: Ystad S, Kronland-Martinet R, Jensen K (eds) CMMR 2008. Springer, Berlin
  13. Burke J, Coovert M, Murphy R, Riley J, Rogers E (2006) Human–robot factors: robots in the workplace. In: Proceedings of the human factors and ergonomics society annual meeting, vol 50
  14. Cottrell NB, Rittle RH, Wack DL (1967) The presence of an audience and list type (competitional or noncompetitional) as joint determinants of performance in paired-associates learning. J Pers 35(3):425–434
  15. Cottrell NB, Wack DL, Sekerak GJ, Rittle RH (1968) Social facilitation of dominant responses by the presence of an audience and the mere presence of others. J Pers Soc Psychol 9(3):245
  16. Dang T-H-H, Tapus A (2014) Towards personality-based assistance in human–machine interaction. In: RO-MAN 2014—the 23rd IEEE international symposium on robot and human interactive communication. IEEE
  17. Dautenhahn K (1999) Robots as social actors: Aurora and the case of autism. In: Proceedings of CT99, the third international cognitive technology conference, San Francisco, vol 359
  18. Feil-Seifer D, Mataric M (2011) Socially assistive robotics. IEEE Robot Autom Mag 18(1):24–31
  19. Feinman S (1982) Social referencing in infancy. Merrill Palmer Q 28(4):445–470
  20. Feinman S (1983) How does baby socially refer? Two views of social referencing: a reply to Campos. Merrill Palmer Q 1982:467–471
  21. Fong T, Nourbakhsh I, Dautenhahn K (2003) A survey of socially interactive robots. Robot Auton Syst 42(3–4):143–166
  22. Forlizzi J (2007) How robotic products become social products: an ethnographic study of cleaning in the home. In: Proceedings of the ACM/IEEE international conference on human–robot interaction. ACM
  23. Fukuda T, Jung M-J, Nakashima M, Arai F, Hasegawa Y (2004) Facial expressive robotic head system for human–robot communication and its application in home environment. Proc IEEE 92(11):1851–1865
  24. Guadagno RE, Cialdini RB (2002) Online persuasion: an examination of gender differences in computer-mediated interpersonal influence. Group Dyn Theory Res Pract 6(1):38–51
  25. Haridakis P, Hanson G (2009) Social interaction and co-viewing with YouTube: blending mass communication reception and social connection. J Broadcast Electron Media 53(2):317–335
  26. Heerink M, Kröse B, Evers V, Wielinga B (2008) The influence of social presence on acceptance of a companion robot by older people. J Phys Agents 2(2):33–40
  27. Hocking JE, Margreiter DG, Hylton C (1977) Intra-audience effects: a field test. Hum Commun Res 3(3):243–249
  28. Hoffman G (2012) Dumb robots, smart phones: a case study of music listening companionship. In: RO-MAN 2012—the IEEE international symposium on robot and human interactive communication
  29. Hoffman G (2013) Evaluating fluency in human–robot collaboration. In: Robotics: science and systems (RSS’13) workshop on human–robot collaboration
  30. Hoffman G, Vanunu K (2013) Effects of robotic companionship on music enjoyment and agent perception. In: Proceedings of the 8th ACM/IEEE international conference on human–robot interaction (HRI)
  31. Iwamura Y, Shiomi M, Kanda T, Ishiguro H, Hagita N (2011) Do elderly people prefer a conversational humanoid as a shopping assistant partner in supermarkets? In: Proceedings of the 6th international conference on human–robot interaction—HRI ’11. ACM Press, New York
  32. Kanda T, Hirano T, Eaton D, Ishiguro H (2004) Interactive robots as social partners and peer tutors for children: a field trial. Hum Comput Interact 19:61–84
  33. Kidd C, Breazeal C (2004) Effect of a robot on user perceptions. In: Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (IROS 2004)
  34. Kidd CD (2003) Sociable robots: the role of presence and task in human–robot interaction. PhD thesis, MIT
  35. Larson R, Kubey R (1983) Television and music: contrasting media in adolescent life. Youth Soc 15(1):13–31
  36. Lee KM, Peng W, Jin S-A, Yan C (2006) Can robots manifest personality? An empirical test of personality recognition, social responses, and social presence in human–robot interaction. J Commun 56(4):754–772
  37. Mead R, Atrash A, Mataric MJ (2011) Recognition of spatial dynamics for predicting social interaction. In: Proceedings of the 6th international conference on human–robot interaction—HRI ’11. ACM Press, New York
  38. Michalowski M, Sabanovic S, Kozima H (2007) A dancing robot for rhythmic social interaction. In: HRI ’07: proceedings of the ACM/IEEE international conference on human–robot interaction, Arlington, Virginia
  39. Mora J-D, Ho J, Krider R (2011) Television co-viewing in Mexico: an assessment on people meter data. J Broadcast Electron Media 55(4):448–469
  40. Morales Saiki LY, Satake S, Huq R, Glass D, Kanda T, Hagita N (2012) How do people walk side-by-side? In: Proceedings of the seventh annual ACM/IEEE international conference on human–robot interaction—HRI ’12. ACM Press, New York
  41. North AC, Hargreaves DJ, Hargreaves JJ (2004) Uses of music in everyday life. Music Percept 22(1):41–77
  42. O’Hara K, Brown B (eds) (2006) Consuming music together. Computer supported cooperative work, vol 35. Springer, Berlin
  43. Paavonen EJ, Roine M, Pennonen M, Lahikainen AR (2009) Do parental co-viewing and discussions mitigate TV-induced fears in young children? Child Care Health Dev 35(6):773–780
  44. Pacchierotti E, Christensen HI, Jensfelt P (2006) Design of an office-guide robot for social interaction studies. In: 2006 IEEE/RSJ international conference on intelligent robots and systems. IEEE
  45. Platow MJ, Haslam SA, Both A, Chew I, Cuddon M, Goharpey N, Maurer J, Rosini S, Tsekouras A, Grace DM (2005) “It’s not funny if they’re laughing”: self-categorization, social influence, and responses to canned laughter. J Exp Soc Psychol 41(5):542–550
  46. Rubin AM, Rubin RB (1985) Interface of personal and mediated communication: a research agenda. Crit Stud Media Commun 2(1):36–53
  47. Skouteris H, Kelly L (2006) Repeated-viewing and co-viewing of an animated video: an examination of factors that impact on young children’s comprehension of video content. Aust J Early Child 31(3):22–30
  48. Slater M, Sadagic A, Usoh M, Schroeder R (2000) Small-group behaviour in a virtual and real environment: a comparative study. Presence 9:37–51
  49. Sorce JF, Emde RN, Campos JJ, Klinnert MD (1985) Maternal emotional signaling: its effect on the visual cliff behavior of 1-year-olds. Dev Psychol 21(1):195
  50. Spexard T, Li S, Wrede B, Fritsch J, Sagerer G, Booij O, Zivkovic Z, Terwijn B, Kröse B (2006) BIRON, where are you? Enabling a robot to learn new places in a real home environment by integrating spoken dialog and visual localization. In: 2006 IEEE/RSJ international conference on intelligent robots and systems. IEEE
  51. Tal-Or N, Tsfati Y. Does the co-viewing of sexual material affect rape myth acceptance? The role of the co-viewer’s reactions and gender. Manuscript submitted for publication
  52. Tanaka F, Ghosh M (2011) The implementation of care-receiving robot at an English learning school for children. In: 2011 6th ACM/IEEE international conference on human–robot interaction (HRI). IEEE
  53. Tapus A, Tapus C, Mataric MJ (2009) The use of socially assistive robots in the design of intelligent cognitive therapies for people with dementia. In: IEEE international conference on rehabilitation robotics (ICORR 2009). IEEE
  54. Thrun S (2004) Toward a framework for human–robot interaction. Hum Comput Interact 19:9–24
  55. Wada K, Shibata T, Musha T, Kimura S (2008) Robot therapy for elders affected by dementia. IEEE Eng Med Biol Mag 27(4):53–60
  56. Weisz JD, Kiesler S, Zhang H, Ren Y, Kraut RE, Konstan JA (2007) Watching together: integrating text chat with video. In: Proceedings of the SIGCHI conference on human factors in computing systems. ACM
  57. Zajonc RB et al (1965) Social facilitation. Research Center for Group Dynamics, Institute for Social Research, University of Michigan
  58. Zarbatany L, Lamb ME (1985) Social referencing as a function of information source: mothers versus strangers. Infant Behav Dev 8(1):25–33

Copyright information

© Springer-Verlag London 2016

Authors and Affiliations

  1. Media Innovation Lab, IDC Herzliya, Herzliya, Israel
  2. The MITRE Corporation, McLean, USA
