
Gaze Behavioral Adaptation Towards Group Members for Providing Effective Recommendations

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10652)

Abstract

Adequate robot gaze control is essential for successful and natural human-robot interaction. In multi-party contexts, the effective use of gaze shared among the participants may have a strong impact on keeping the participants’ attention and obtaining a persuasive effect. To gain a deeper understanding of how robot gaze behavior might influence and shape the human perception of the interaction and the decision-making process in small groups, we conducted a within-subjects experimental study using a humanoid robot in a movie recommendation scenario. Our results showed that different gaze behaviors resulted in different group acceptance rates when combined with the personal acceptance of the group members. However, users were not able to differentiate the behaviors in terms of naturalness and persuasiveness. Moreover, results showed that other factors, such as the length of the recommendation, play a significant role in the users’ perception of the naturalness of the interaction.

1 Introduction

Nowadays, many applications envision a social robot interacting in a multi-party setting with groups of human beings. Here, we focus on the role of robots in conveying meaningful information to a small group of users, specifically in cases that primarily involve a single speaker (i.e., the robot). Such domains include a robot acting as a tutor providing information about a topic, as a storyteller, or as a presenter of relevant information (e.g., a robotic tour guide describing museum artworks). In particular, the selected application domain concerns a robot providing recommendations on movies to watch, as in [16].

Embodied social agents used to provide recommendations can make the interaction more meaningful than simple interfaces (which do not display actions or speech), because users’ attitude towards social agents is similar to the one they show towards other people. It has been observed that robots endowed with social behaviors similar to those of humans are more compelling for human-robot interaction. In face-to-face interaction between humans, several modalities are normally used for coordination or to smooth the interaction. For example, body posture, gestures, gaze, vocalization, and facial expressions are commonly used to convey information beyond the primary content. According to [3], the dynamics of social interaction cannot be totally hard-wired, but should emerge from adaptation, once the user we are interacting with has been learned and profiled. Hence, the effective use of such non-verbal cues by a robot can improve the interaction flow, but it has to be adapted to the context. The dynamic adaptation of these cues becomes even more important when robots are engaged in information-providing tasks (e.g., providing recommendations on items), which could affect the choices of the users interacting with them.

Here, we focus on the role of modeling the robot gaze while providing recommendations to a group of users. Adequate gaze control of a virtual agent is essential for successful and natural human-agent interaction. By using gaze cues, people can control the flow of a conversation, and the temporal gaze pattern can be adapted in order to show different behaviors. According to [1], gaze timing can have a strong effect on realism and comfort during an interaction. For example, too short gaze fixations are perceived as an index of avoidance, or disinterest, while too long fixations may cause discomfort. In the context of a robot providing recommendations, the user tracking behavior itself has been shown to influence the decision-making process [14]. Moreover, the modulation of the gaze behavior may help a robot in showing a persuasive behavior [4], not via explicit communication (gesture, speech), but via implicit manipulation of social cues.

In this paper, we analyze the impact of different gaze behaviors on users’ satisfaction and acceptance rate in a group recommendation scenario (i.e., a scenario in which a single recommendation has to be accepted by a group of people). Three gaze models have been developed and tested with a movie recommendation system on small groups. Each of the proposed gaze models has different configuration values for the robot’s head turns and fixation times. Ten test sessions with a total of thirty people were conducted, and their results are analyzed and discussed here.

2 Background and Related Works

Researchers in social robotics and intelligent virtual agents have developed different gaze control models, taking into account gaze duration (i.e., how long a person is fixated) and frequency (i.e., how many times within a considered interval). When dealing with virtual agents, gaze models typically address the communicative functions of the content. Typically, they focus on the structure and content of the utterances and integrate them with social and cultural aspects [12]. However, these models rely on proper annotation of the text and aim to provide general models to act in a conversation involving both the speaker’s and the listener’s gazes. Here, we focus only on the speaker role in conveying information to a group of users with non-annotated text.

Several robotics researchers have explored how robot gaze influences human-robot interaction. For example, in [10], when the robot used gaze mechanisms in a multi-party setting to signal their conversational roles to its partners, the participants conformed to these intended roles. Other works explicitly dealt with the evaluation of gaze frequency and duration. For example, in [1], the authors evaluated the duration and frequency of fixations while fixing the gaze target on a single participant. Their results showed that short but more frequent fixations are better at conveying attention than longer and less frequent ones. Therefore, it is reasonable to suppose that subjects are more influenced by the head-turning event than by the fixation time itself. The role of frequency was also explored in [9], where the authors investigated gaze behavior for a storytelling robot by manipulating the frequency of gaze between the two participants in the experiments. Results showed that the participants were able to recall more about the presented story when the robot looked at them more.

A robot’s gaze and face direction have also been shown to influence users’ choices, together with their decision-making process, in contexts where robots give recommendations to the subjects. Speakers tend to gaze more at the listener to be more persuasive or assertive [7]. Dealing with robots, [14] showed that subjects selected the recommended color name significantly more often when the robot tracked their faces. Robot gaze cues also effectively establish the roles of participants in a conversation and influence turn-taking among the users. According to [11], the addressees of the robot’s attention think their preferences are considered more significant and rate their experience more positively. Additionally, the use of gaze combined with gestures significantly improves the influencing power of a robot in delivering a message [4]. A closely related work is [6], where a robot acting as a museum guide ‘favors’ one of three participants (randomly chosen) by directing its gaze at them more frequently and for longer than at the other two. Results show that this can positively influence the favored participant’s attitude toward the robot.

In conclusion, none of the above approaches deals with the problem of adapting the gaze behavior in small groups by taking into account the preferences and roles of the group members. Here, nonverbal communication could be used to direct attention to the user who is less likely to accept the recommended product. On the basis of the works described above, eliciting his/her interest could increase the group acceptance rate of the suggestions made by the robot.

3 Gaze Models for Movie Recommendations

Here, we consider the case of an entertainment application where a robotic platform is employed to provide a simple and intuitive interface [2] and to give convincing recommendations. Humanoid robots have been proven successful in recommendation systems, increasing the likelihood that the user will accept the given suggestions [16]. However, the suggestion of entertainment activities in homes has to take into account that more than one person may be involved in the proposed activity (e.g., a movie to watch or a song to listen to). In this context, the goal is to recommend a single item to a whole group of users, taking into account all the individual preferences (expressed as an estimated rating \(r_{i,p} \in [1,5]\) of user i on item p), in order to maximize global satisfaction, or at least minimize the dissatisfaction of all the members. The group members have to reach a consensus in order to accept the recommendation. However, when proposing a group activity, it is often the case that not every member of the group is equally satisfied by it.
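As a sketch of how such individual ratings could be aggregated into a single group recommendation, the following Python fragment combines average satisfaction with a least-misery term. Both the weighting and the aggregation strategy are illustrative assumptions, since the paper does not specify the underlying recommender.

```python
# Sketch of group aggregation over estimated ratings r[i][p] in [1, 5].
# The 50/50 mix of average satisfaction and least misery is an assumed
# strategy for illustration, not the paper's actual recommender.

def recommend_for_group(ratings, items):
    """ratings: dict user -> dict item -> estimated rating in [1, 5]."""
    def group_score(p):
        scores = [ratings[u][p] for u in ratings]
        # Average satisfaction, penalized by the least satisfied member.
        return 0.5 * (sum(scores) / len(scores)) + 0.5 * min(scores)
    return max(items, key=group_score)

ratings = {
    "u1": {"movieA": 4, "movieB": 5},
    "u2": {"movieA": 4, "movieB": 2},
    "u3": {"movieA": 3, "movieB": 5},
}
best = recommend_for_group(ratings, ["movieA", "movieB"])
# movieA scores 0.5*3.67 + 0.5*3 > movieB's 0.5*4 + 0.5*2
```

Here "movieB" has the higher average but leaves u2 dissatisfied, so the least-misery term makes "movieA" the group choice; u2 would then be the "weakest component" targeted by the adaptive gaze behaviors below.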

Considering the results of [6] on focusing attention on a favorite participant in a group, and the experiments of [1, 11] showing a correlation between frequent fixations and perceived attention, we hypothesize that a participant who is gazed at more frequently and for longer will perceive his/her role in the group as more significant, in addition to having his/her attention drawn towards the robot. This could positively impact the group decision-making process, helping the person who is less satisfied with the recommended item to decide in favor of the other members of the group.

In the following, to test this hypothesis, we define three different gaze behaviors to be used by the robot while recommending a movie. We describe a heuristic model focusing on the parameters modeling the robot’s gaze behavior toward the participants. We do not address the possibility of directing the gaze toward other parts of the environment, since the provided informative content (e.g., movie plots) does not contain references to objects in the shared visual space.

3.1 Round Allocation

Since the length of the plot p varies depending on the specific movie being suggested, a heuristic algorithm is used to estimate the plot’s read-time \(T_{max}(p)\) (i.e., the time the robot will spend telling the plot of the movie it is recommending), on the basis of its words-per-minute rate (i.e., how fast it speaks). The estimated time can then be divided into \(nr(p,b)\) rounds, where b is a specific gaze behavior.
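The read-time heuristic can be sketched as follows; the words-per-minute value is an assumed placeholder, since the paper does not report the robot’s actual speaking rate.

```python
# Heuristic estimate of the plot read-time T_max(p) from the robot's
# speaking rate. 100 words per minute is an assumption for illustration.

def estimate_read_time(plot: str, words_per_minute: float = 100.0) -> float:
    """Return the estimated time (seconds) to read the plot aloud."""
    n_words = len(plot.split())
    return n_words / words_per_minute * 60.0

plot = "A retired agent is pulled back for one last mission."  # 10 words
t_max = estimate_read_time(plot)  # 10 words at 100 wpm -> 6.0 s
```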

Each round is composed of n sections, each one corresponding to the gaze towards a single member of the group and with a time duration \(T_{round}(b)\) defined as:
$$\begin{aligned} T_{round}(b) = t_{turn} + t_{track} + t_{gaze}(b) \end{aligned}$$
(1)
where \(t_{turn}\) represents the time required by the robot to turn its head towards the user linked to the current section (this time has been empirically fixed at two seconds to prevent fast and unnatural movements of the robot’s head), and \(t_{track}\) represents the time required to identify the user in the robot’s field of view and to gaze at him/her (estimated as an average). The gaze time \(t_{gaze}(b)\), instead, denotes the time spent by the robot gazing at the user (e.g., if the user moves, the robot will track him/her), and its duration depends on the specific interaction behavior b in use.
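Eq. (1) can be sketched directly, assuming the paper’s fixed \(t_{turn}=2\,s\), an illustrative average \(t_{track}=1\,s\) (the paper estimates it empirically but does not report the value), and the gaze times defined in Sect. 3.1.

```python
# Round-section duration per Eq. (1): T_round(b) = t_turn + t_track + t_gaze(b).
# t_turn = 2 s is fixed in the paper; t_track = 1 s is an assumed average.

T_TURN = 2.0           # head-turn time (s), fixed in the paper
T_TRACK = 1.0          # assumed average face-tracking time (s)
T_GAZE = {"S": 3.0,    # standard gaze
          "L": 7.0,    # longer gaze on the weakest member
          "F": 3.0}    # frequent gaze reuses the standard duration

def t_round(b: str) -> float:
    return T_TURN + T_TRACK + T_GAZE[b]
```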

Three different gaze behaviors b have been developed, varying the number of turns of the robot towards each user and the gaze time.

Standard Gaze. In this model (S), each section has a fixed gaze time of three seconds (\(t_{gaze}(S)=3\,s\)), as suggested in [1]. The goal is to distribute the total reading-time of the plot and the head turnings of the robot among the n users. In this case, the number of rounds is:
$$\begin{aligned} nr(p,S) = \lceil \frac{T_{max}(p)}{n * T_{round}(S)} \rceil \end{aligned}$$
(2)
After the completion of \(nr(p,S)\) rounds, if the remaining time is smaller than a single round, the robot will make the last gaze shorter; otherwise, it will distribute the remaining time equally among all the members of the group. In conclusion, this approach does not take into account the ratings given by the users to the recommended movie, and so it treats all the users equally (i.e., the gaze time is equally distributed among the members).
Longer Gaze on the Weakest Component. In this case, the goal is to convince the user who rated the suggested movie with the lowest value by assigning a longer fixation (\(t_{gaze}(L)=7\,s\)) to him/her in each round. This amount of time has been chosen in order to obtain a perceivable difference between this kind of gaze and the standard one. It has been shown that users who receive more attention from the robot think that their preferences are considered more significant [11]. The other members of the group receive the standard gaze time (\(t_{gaze}(S)=3\,s\)). The head turns are equally distributed among the users. In this case, the number of rounds is:
$$\begin{aligned} nr(p,L) = \lceil \frac{T_{max}(p)}{(n-1) * T_{round}(S) + T_{round}(L)} \rceil \end{aligned}$$
(3)
More Frequent Gaze on the Weakest Component. The strategy in this approach is based on experimental results suggesting that frequent fixations are better at conveying attention [1]. In this model, each round is divided into \(n+1\) sections, and two non-adjacent sections are assigned to the user who gave the lowest rating to the recommended movie. As a result of this division, the weakest component of the group is fixated for twice as long and receives twice as many head turns as the other users. The gaze time is the same as in the standard case (\(t_{gaze}(F)=t_{gaze}(S)\)) for all the components of the group. Hence, in this case, the number of rounds is:
$$\begin{aligned} nr(p,F) = \lceil \frac{T_{max}(p)}{(n+1) * T_{round}(F)} \rceil \end{aligned}$$
(4)
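The round counts of Eqs. (2)-(4), together with one possible section ordering for the frequent behavior, can be sketched as follows, assuming \(T_{round}(S)=T_{round}(F)=6\,s\) and \(T_{round}(L)=10\,s\) (i.e., the paper’s \(t_{turn}=2\,s\), an assumed \(t_{track}=1\,s\), and the gaze times of Sect. 3.1).

```python
import math

# Number of rounds nr(p, b) per Eqs. (2)-(4). T_round values assume
# t_turn = 2 s (paper), t_track = 1 s (assumption), t_gaze per Sect. 3.1.
T_ROUND = {"S": 6.0, "L": 10.0, "F": 6.0}

def nr(t_max: float, b: str, n: int) -> int:
    if b == "S":                                     # Eq. (2)
        return math.ceil(t_max / (n * T_ROUND["S"]))
    if b == "L":                                     # Eq. (3)
        return math.ceil(t_max / ((n - 1) * T_ROUND["S"] + T_ROUND["L"]))
    if b == "F":                                     # Eq. (4)
        return math.ceil(t_max / ((n + 1) * T_ROUND["F"]))
    raise ValueError(b)

def frequent_round_order(users, weakest):
    """One round of n+1 sections with the weakest user in two
    non-adjacent sections (one possible ordering)."""
    others = [u for u in users if u != weakest]
    return [weakest, others[0], weakest] + others[1:]

# Example: a 90 s plot for n = 3 users gives
# S: ceil(90/18) = 5, L: ceil(90/22) = 5, F: ceil(90/24) = 4 rounds.
```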

4 Experimental Evaluation

We designed a user study as a within-subjects, repeated-measures experiment, where the independent variable is the robot gaze behavior used for providing the recommendation to a group of users. In this design, each group undergoes every condition. The order in which the conditions are presented is randomized and balanced among the three gaze behaviors.

The Robot. The robot used was a NAO T14 model developed by SoftBank, consisting of a humanoid torso with 14 degrees of freedom (2 for the head and 12 for the arms). The NAO robot does not allow eye movements, so gaze is obtained through head orientation. However, according to [5], head orientation is fundamental for recognizing gaze in human-robot multi-party settings. A face-tracking module has been enabled in order to obtain an accurate gaze towards the actual position of the participants [15]. This feature also gave the robot the capability of following the users’ movements, making the interaction more natural. A blinking effect has been reproduced using NAO’s eye LEDs, making the robot look more human-like. Finally, a set of generic gestures was added automatically. Information regarding the movie, such as the genre, the cast, and the plot, was obtained via The Movie Database API, which allows developers to retrieve movie data in a selected language. The strings obtained for the movies to recommend were then manipulated to create a coherent speech that is finally read to the users by the robot.
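The speech construction from the retrieved metadata can be sketched as follows. The field names (`title`, `overview`, `genres`) follow The Movie Database movie endpoint, while the sample payload and the phrasing are illustrative assumptions standing in for an actual API response.

```python
# Building the recommendation speech from movie metadata. The field names
# follow The Movie Database (TMDb) movie endpoint; the payload below is a
# hard-coded sample, not a live API response.

def build_speech(movie: dict) -> str:
    genres = ", ".join(g["name"] for g in movie.get("genres", []))
    return (f"I recommend {movie['title']}, a {genres} movie. "
            f"{movie['overview']}")

sample = {
    "title": "The Example",
    "genres": [{"name": "Drama"}, {"name": "Thriller"}],
    "overview": "A short plot used to test the speech construction.",
}
speech = build_speech(sample)
```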
Fig. 1. Experimental setup

Procedure and Participants. We considered 10 groups, each composed of 3 members. All the participants were Italian high-school or master’s students. In detail, 25 males and 5 females were involved, with an average age of 20\(\,\pm \,\)6. Finally, all the participants had a moderate familiarity with robotics applications.

The testing procedure’s main steps are:

(a) at the beginning of the interaction, each user of a group rates a list of 15 movies (training phase) and provides personal information (gender, age);

(b) the recommendation system generates three recommendations, selecting movies, if possible, that were rated with 3 or more stars by two group members and with 2 or fewer stars by the remaining one (the weakest component, i.e., the one the robot will fixate longer/more often according to the selected gaze model); the movies are then recommended to the groups through the three different gaze behaviors;

(c) for each of the three recommended movies, the group completes the following tasks:
  (c.1) provide an acceptance/rejection response (for each individual and for the whole group) for the proposed movie (note that when a group decision has to be made, the group has to reach a consensus on the answer);
  (c.2) complete a short satisfaction questionnaire after each interaction with the different conditions;

(d) finally, each user is requested to express a preference for a single interaction mode among the three proposed behaviors.

The questionnaire is organized into three sections: an initial section containing personal and topic-related information (Q1. How familiar are you with robotic applications? (1 to 5); Q2. How familiar are you with the movie domain? (1 to 5)); a second section repeated after each experimental condition (Q3. Did you accept the recommended movie? (personal acceptance); Q4. Did the group accept the recommended movie? (group acceptance); Q5. How persuasive was the robot? (1 to 5); Q6. The robot motions were natural (5) or unnatural (1)); and a final section at the end of the test session (Q7. Which mode of interaction did you prefer? {A, B, C}).

The three members of the group were seated at a table around NAO, respectively in front of it, at its left, and at its right (see Fig. 1). Participants were free to select their position, while an operator provided their IDs and chosen positions to the NAO control program.

4.1 Result Analysis

Since we are dealing with groups, we start by analyzing the group acceptance rate of the proposed movies. Naturally, one of the factors with the biggest impact on group acceptance is personal acceptance (one-way ANOVA with \(F(1,90)=53.083\) and \(p<0.001\), and Spearman \(\rho =0.613\) with \(p<0.001\)): if the individual members of the group like the recommendation, the group decision is straightforward. Considering the effect of the gaze behavior on group acceptance alone, we did not find a statistically significant difference in the acceptance rate (one-way ANOVA with \(F(2,90)=0.547\) and \(p=0.581\)). However, since, in our opinion, both the gaze behavior and the personal acceptance of a recommended movie may have a significant impact on group acceptance, we evaluated both effects with a two-way ANOVA, summarized in Table 1. Results show that both factors have a significant main effect on group acceptance, but there is no interaction effect between the two (i.e., the interaction is additive). The absence of the interaction effect is also shown in Fig. 2(left), where we plot the results obtained by considering both the personal acceptance and the different gaze behaviors with respect to the group acceptance rate.
Table 1. Results of the two-way ANOVA analysis

Source             Sum Sq.   d.f.   Mean Sq.   F        Prob > F
Gaze behavior      0.679     2      0.339      3.1      0.05
Personal accept.   6.434     1      6.434      58.766   0.000
Interaction        0.465     2      0.233      2.124    0.126
Error              9.197     84     0.109
Total              69        90
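For illustration, a one-way ANOVA of group acceptance across the three behaviors can be computed as follows; the acceptance data below are synthetic, since the per-trial experimental data are not reported in the paper.

```python
# A minimal one-way ANOVA, to make the reported F-tests concrete.

def one_way_anova(groups):
    """Return (F, df_between, df_within) for a one-way ANOVA."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2
                    for g in groups for x in g)
    df_b, df_w = k - 1, n_total - k
    return (ss_between / df_b) / (ss_within / df_w), df_b, df_w

# Synthetic group-acceptance samples (0/1) for the three behaviors:
accept = [[1, 1, 0, 1, 0, 1, 0, 1, 0, 1],   # standard
          [1, 1, 1, 0, 1, 1, 0, 1, 1, 0],   # longer gaze
          [1, 1, 1, 1, 0, 1, 1, 1, 0, 1]]   # frequent gaze
f_stat, df_b, df_w = one_way_anova(accept)
```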

Fig. 2. Estimated marginal means for group acceptance rates of movies recommended varying the robot gaze behaviors (left) and the personal acceptance (right)

As shown in Fig. 2(right), there is a significant difference in the estimated marginal means of the group acceptance rates of movies recommended using the three different gaze behaviors. In detail, in the case of the standard gaze behavior, about half of the proposed movies (\(52\%\)) could be accepted by the groups. In the case of long gazes towards the user with the lowest evaluation of the movie, the group could accept the recommendation in \(67\%\) of the cases. Finally, the best results (\(75\%\)) are obtained in the case of frequent gazes towards the user with the lowest evaluation of the movie. These results confirm that more frequent gazes are generally more effective than longer ones. Moreover, as shown in Fig. 2, while the behavior of the groups with personal acceptance equal to 1 (i.e., true) is stable with respect to the gaze behaviors, in the case of personal acceptance equal to zero the impact of the different gaze behaviors is stronger.

Regarding familiarity with robotics applications and with the movie domain, we found that self-assessed familiarity with robotic applications has a weak negative correlation with group acceptance (\(\rho =-0.198\) with \(p=0.062\)), meaning that people with less familiarity with robots may be more inclined to accept a recommendation in a group. No significant differences or correlations were found with respect to familiarity with the movie domain. The results of the questionnaire are summarized in Table 2. The differences are not statistically significant. Moreover, there is no statistically significant correlation between the presented gaze behavior and the reported persuasiveness (\(\rho =-0.040\) with \(p=0.7\)), or between the presented gaze behavior and the perceived naturalness of the interaction (\(\rho =0.038\) with \(p=0.721\)).
Table 2. Results of the questionnaire

Gaze behavior   Q5     Q6     Q7 (%)
Standard        3.83   3.57   40.0
Long            3.77   3.47   33.3
Frequent        3.73   3.67   26.7

Regarding question Q7, results showed that the participants expressed a preference towards the standard gaze behavior. This can also be related to the Q5 and Q6 answers about perceived naturalness and persuasiveness. Such questions may provide an index of the users’ ability to consciously observe the manipulations of the robot’s gaze behavior in the study [18]. Indeed, these questions allow us to identify whether the effects of the experimental manipulation were perceived or not. Results confirmed the typical difficulty in perceiving small differences in social cues. In particular, with respect to gaze patterns, a fundamental role is played by the communication partner, whose personality has an impact on the perception of such non-verbal cues [8]. Moreover, during the experimentation, we noted that intra-group dynamics also play an important role. For example, some testers did not look at the robot at all, focusing instead on other group members. Hence, an efficient gaze mechanism has to rely on the recognition of the users’ own gaze patterns.

Finally, the participants focused more on explicit communicative cues, such as gestures and speech, than on implicit ones. Several participants reported perceived changes in the robot’s voice that were not actually present. During the experimental evaluation, we also noticed different reactions with respect to the length of the plots. To confirm this impression, we measured the length of the proposed plots in terms of number of characters. We then found a weak but significant Spearman correlation between the length of the movie plot and the perceived naturalness of the interaction (\(\rho =0.2\) with \(p=0.05\)), indicating a monotonic relation between these two variables. Whenever information has to be provided to the users, these considerations must be carefully taken into account in the design of the experiments, since the provided information plays a dominant role in the interaction.
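The Spearman rank correlations reported in this section can be reproduced on illustrative data as follows; the per-subject questionnaire responses are not available, so the values below are assumptions, not the experimental data.

```python
from scipy.stats import spearmanr

# Spearman rank correlation, as used for the familiarity/acceptance and
# plot-length/naturalness analyses. The arrays below are illustrative.

familiarity = [1, 2, 2, 3, 3, 4, 4, 5, 5, 5]   # self-assessed (1 to 5)
acceptance  = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]   # group acceptance (0/1)

rho, p_value = spearmanr(familiarity, acceptance)
# A negative rho on real data would reproduce the weak negative trend
# reported in the paper (rho = -0.198, p = 0.062).
```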

5 Conclusions

In this paper, we discussed the problem of adapting the gaze behavior to take into account individual differences among the members of a small group. In particular, with respect to the task of providing group recommendations, our model aims to take into account the preferences of the users. We presented a user study evaluating how nonverbal communication can be used to direct attention to the user who is less likely to accept the recommended product. Results show that the different gaze behaviors resulted in different group acceptance rates of the proposed movies when combined with the personal acceptance of the group members. However, while such differences were significant, users were not able to perceive them in terms of naturalness and persuasiveness, still preferring the standard gaze behavior over the others. Moreover, results showed that other factors, such as the length of the recommendation, play a role in the users’ perception of the naturalness of the human-robot interaction.

In future work, we would like to refine our model to take into account other intra-group characteristics that may have an impact on the interaction as well as on the decision-making process. In the literature, some works investigate the possibility of dynamically recognizing leaders and dominant persons through the analysis of non-verbal behavior [17], and dominance relationships in small groups have an impact on decision-making [13]. Another characteristic to take into account is the hearer’s personality, since it affects both the perception of the naturalness and comfortableness of the speaker’s gaze and the decision-making. All these characteristics could be used, once the target users are identified, to adapt the gaze behavior.


Acknowledgment

This work has been partially supported by MIUR within the PRIN2015 research project “User-centered Profiling and Adaptation for Socially Assistive Robotics - UPA4SAR”.

References

  1. Admoni, H., Hayes, B., Feil-Seifer, D., Ullman, D., Scassellati, B.: Are you looking at me? Perception of robot attention is mediated by gaze type and group size. In: Proceedings of the 8th ACM/IEEE International Conference on HRI, pp. 389–396 (2013)
  2. Cozzolongo, G., De Carolis, B., Pizzutilo, S.: Social robots as mediators between users and smart environments. In: Proceedings of the 12th International Conference on Intelligent User Interfaces, IUI 2007, pp. 353–356. ACM, New York (2007)
  3. Dautenhahn, K.: I could be you: the phenomenological dimension of social understanding. Cybern. Syst. 28(5), 417–453 (1997)
  4. Ham, J., Cuijpers, R.H., Cabibihan, J.J.: Combining robotic persuasive strategies: the persuasive power of a storytelling robot that uses gazing and gestures. Int. J. Soc. Robot. 7(4), 479–487 (2015)
  5. Imai, M., Kanda, T., Ono, T., Ishiguro, H., Mase, K.: Robot mediated round table: analysis of the effect of robot's gaze. In: Proceedings of the 11th IEEE International Workshop on Robot and Human Interactive Communication, pp. 411–416 (2002)
  6. Karreman, D.E., Bradford, G.U.S., van Dijk, E.M., Lohse, M., Evers, V.: Picking favorites: the influence of robot eye-gaze on interactions with multiple users. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 123–128 (2013)
  7. Kleinke, C.L.: Gaze and eye contact: a research review. Psychol. Bull. 100(1), 78–100 (1986)
  8. Koda, T., Ogura, M., Matsui, Y.: Shyness level and sensitivity to gaze from agents - are shy people sensitive to agent's gaze? In: Traum, D., Swartout, W., Khooshabeh, P., Kopp, S., Scherer, S., Leuski, A. (eds.) IVA 2016. LNCS, vol. 10011, pp. 359–363. Springer, Cham (2016). doi:10.1007/978-3-319-47665-0_33
  9. Mutlu, B., Forlizzi, J., Hodgins, J.: A storytelling robot: modeling and evaluation of human-like gaze behavior. In: 6th IEEE-RAS International Conference on Humanoid Robots, pp. 518–523 (2006)
  10. Mutlu, B., Kanda, T., Forlizzi, J., Hodgins, J., Ishiguro, H.: Conversational gaze mechanisms for humanlike robots. ACM Trans. Interact. Intell. Syst. 1(2), 1–33 (2012)
  11. Mutlu, B., Shiwa, T., Kanda, T., Ishiguro, H., Hagita, N.: Footing in human-robot conversations: how robots might shape participant roles using gaze cues. In: 4th ACM/IEEE International Conference on HRI, pp. 61–68. ACM (2009)
  12. Pelachaud, C., Bilvi, M.: Modelling gaze behavior for conversational agents. In: Rist, T., Aylett, R.S., Ballin, D., Rickel, J. (eds.) IVA 2003. LNCS, vol. 2792, pp. 93–100. Springer, Heidelberg (2003). doi:10.1007/978-3-540-39396-2_16
  13. Rossi, S., Cervone, F.: Social utilities and personality traits for group recommendation: a pilot user study. In: Proceedings of the 8th International Conference on Agents and Artificial Intelligence, pp. 38–46 (2016)
  14. Shinozawa, K., Naya, F., Kogure, K., Yamato, J.: Effect of robot's tracking users on human decision making. In: IROS, pp. 1908–1913. IEEE (2004)
  15. Staffa, M., Gregorio, M.D., Giordano, M., Rossi, S.: Can you follow that guy? In: 22nd European Symposium on Artificial Neural Networks, ESANN, pp. 511–516 (2014)
  16. Staffa, M., Rossi, S.: Recommender interfaces: the more human-like, the more humans like. In: Agah, A., Cabibihan, J.-J., Howard, A.M., Salichs, M.A., He, H. (eds.) ICSR 2016. LNCS (LNAI), vol. 9979, pp. 200–210. Springer, Cham (2016). doi:10.1007/978-3-319-47437-3_20
  17. Yoshino, T., Takase, Y., Nakano, Y.I.: Controlling robot's gaze according to participation roles and dominance in multiparty conversations. In: 10th Annual ACM/IEEE International Conference on HRI, Extended Abstracts, pp. 127–128. ACM (2015)
  18. Zheng, M., Moon, A., Croft, E.A., Meng, M.Q.H.: Impacts of robot head gaze on robot-to-human handovers. Int. J. Soc. Robot. 7(5), 783–798 (2015)

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Department of Electrical Engineering and Information Technology, University of Naples Federico II, Naples, Italy
