Towards Robots able to Measure in Real-time the Quality of Interaction in HRI Contexts

Abstract

When humans interact with each other, collaborating on a shared activity or chatting, they are able to tell whether their interaction is going well or not, and if they observe that its quality is deteriorating, they can adapt their behavior or invite their partner to act in order to improve it. A robot endowed with the ability to evaluate the quality of its interaction with its human partners will have the opportunity to perform better, since it will be better informed for its decision-making processes. We propose metrics to be integrated into a cognitive and collaborative robot in order to measure in real time the quality of an interaction (QoI). This continuous evaluation process has been implemented and tested within the high-level controller of an entertainment robot. A first demonstration shows the ability of the scheme to compute the QoI for a direction-giving task and to exhibit significant differences between its performance in interaction with a fully compliant human, a human confused by the course of action, and a non-cooperative one. This paper is an extension and further refinement of work originally reported in Mayima (in: 29th IEEE International conference on robot and human interactive communication (RO-MAN), 2020).

Availability of Data and Material

Not applicable.

Notes

  1. Values are empirically defined based on intuition regarding the importance of a given metric for a given task and on a set of testing experiments.

  2. In the robotics domain, the word “engagement”, rather than “commitment”, is commonly used, unlike in the psychological and philosophical fields.

  3. Obviously, success is context- and task-dependent and should be defined according to the needs of the task at hand.

  4. http://mummer-project.eu/.

References

  1. Anzalone SM, Boucenna S, Ivaldi S, Chetouani M (2015) Evaluating the engagement with social robots. Int J Soc Robot 7(4):465–478. https://doi.org/10.1007/s12369-015-0298-7

  2. Baraglia J, Cakmak M, Nagai Y, Rao RPN, Asada M (2017) Efficient human-robot collaboration: when should a robot take initiative? Int J Robot Res 36(5–7):563–579

  3. Bauer A, Wollherr D, Buss M (2008) Human-robot collaboration: a survey. Int J Humanoid Robot 5(01):47–66. https://doi.org/10.1142/S0219843608001303

  4. Bekele E, Sarkar N (2014) Psychophysiological feedback for adaptive human-robot interaction (hri). In: Fairclough SH, Gilleade K (eds) Advances in physiological computing. Springer, London, pp 141–167. https://doi.org/10.1007/978-1-4471-6392-3_7

  5. Belhassein K, Clodic A, Cochet H, Niemelä M, Heikkilä P, Lammi H, Tammela A (2017) Human-human guidance study. Hal-01719730

  6. Bensch S, Jevtić A, Hellström T (2017) On interaction quality in human-robot interaction. In: Proceedings of the 9th international conference on agents and artificial intelligence (ICAART), pp. 182–189, 10.5220/0006191601820189

  7. Bethel CL, Murphy RR (2010) Review of human studies methods in hri and recommendations. Int J Soc Robot 2(4):347–359. https://doi.org/10.1007/s12369-010-0064-9

  8. Bordini RH, Hübner JF, Wooldridge M (2007) Programming multi-agent systems in agentspeak using Jason (Wiley Series in Agent Technology). John Wiley, Hoboken

  9. Devin S, Alami R (2016) An implemented theory of mind to improve human-robot shared plans execution. In: The Eleventh ACM/IEEE international conference on human robot interaction (HRI), Christchurch, New Zealand, pp. 319–326

  10. Fan J, Bian D, Zheng Z, Beuscher L, Newhouse PA, Mion LC, Sarkar N (2017) A robotic coach architecture for elder care (rocare) based on multi-user engagement models. IEEE Trans Neural Syst Rehabil Eng 25(8):1153–1163

  11. Foster ME, Craenen B, Deshmukh A, Lemon O, Bastianelli E, Dondrup C, et al (2019) Mummer: socially intelligent human-robot interaction in public spaces. In: AAAI 2019 Fall symposium series, Arlington, United States, arxiv:1909.06749

  12. Ghallab M, Knoblock C, Wilkins D, Barrett A, Christianson D, Friedman M, et al (1998) PDDL - The planning domain definition language

  13. Ghallab M, Nau DS, Traverso P (2016) Automated planning and acting. Cambridge University Press, Cambridge

  14. Grosz BJ, Kraus S (1996) Collaborative plans for complex group action. Artif Intell 86(2):269–357. https://doi.org/10.1016/0004-3702(95)00103-4

  15. Hiatt LM, Narber C, Bekele E, Khemlani SS, Trafton JG (2017) Human modeling for human-robot collaboration. Int J Robot Res 36(5–7):580–596. https://doi.org/10.1177/0278364917690592

  16. Hoffman G (2019) Evaluating fluency in human-robot collaboration. IEEE Trans Human-Machine Syst 49(3):209–218

  17. Hoffman G, Breazeal C (2007) Cost-based anticipatory action selection for human-robot fluency. IEEE Trans Robot 23(5):952–961

  18. Ingrand F, Ghallab M (2017) Deliberation for autonomous robots: a survey. Artif Intell 247:10–44. https://doi.org/10.1016/j.artint.2014.11.003

  19. Itoh K, Miwa H, Nukariya Y, Zecca M, Takanobu H, Roccella S, et al (2006) Development of a bioinstrumentation system in the interaction between a human and a robot. In: IEEE/RSJ International conference on intelligent robots and systems (IROS), Beijing, China, pp. 2620–2625, 10.1109/IROS.2006.281941

  20. Khambhaita H, Alami R (2020a) Viewing robot navigation in human environment as a cooperative activity. In: Amato NM, Hager G, Thomas S, Torres-Torriti M (eds) Robotics research. Springer International Publishing, Berlin, pp 285–300

  21. Khambhaita H, Alami R (2020b) Viewing robot navigation in human environment as a cooperative activity. In: Amato N, Hager G, Thomas S, Torres-Torriti M (eds) Robotics research. Springer, Cham, pp 285–300

  22. Kruse T, Pandey AK, Alami R, Kirsch A (2013) Human-aware robot navigation: a survey. Robot Auton Syst 61(12):1726–1743

  23. Kulić D, Croft EA (2003) Estimating intent for human-robot interaction. In: IEEE International conference on advanced robotics, pp. 810–815

  24. Kulic D, Croft EA (2007) Affective state estimation for human-robot interaction. IEEE Trans Robot 23(5):991–1000

  25. Lallement R, De Silva L, Alami R (2014) HATP: An HTN planner for robotics. In: 2nd ICAPS Workshop on planning and robotics, Portsmouth, United States

  26. Lemaignan S, Garcia F, Jacq A, Dillenbourg P (2016) From real-time attention assessment to “with-me-ness” in human-robot interaction. In: 11th ACM/IEEE International conference on human-robot interaction (HRI), pp. 157–164

  27. Lemaignan S, Warnier M, Sisbot EA, Clodic A, Alami R (2017a) Artificial cognition for social human-robot interaction: an implementation. Artif Intell 247:45–69

  28. Lemaignan S, Warnier M, Sisbot EA, Clodic A, Alami R (2017b) Artificial cognition for social human-robot interaction: an implementation. Artif Intell 247:45–69. https://doi.org/10.1016/j.artint.2016.07.002 (special Issue on AI and Robotics)

  29. Mayima A, Clodic A, Alami R (2019) Evaluation of the Quality of Interaction from the robot point of view in human-robot interactions. In: 1st Edition of quality of interaction in socially assistive robots (QISAR) Workshop, The 11th international conference on social robotics (ICSR 2019), Madrid, Spain, https://hal.laas.fr/hal-02403081

  30. Mayima A, Clodic A, Alami R (2020) Toward a robot computing an online estimation of the quality of its interaction with its human partner. In: 2020 29th IEEE International conference on robot and human interactive communication (RO-MAN), pp. 291–298, 10.1109/RO-MAN47096.2020.9223464

  31. Michael J, Salice A (2017) The sense of commitment in human-robot interaction. Int J Soc Robot 9(5):755–763. https://doi.org/10.1007/s12369-016-0376-5

  32. Michael J, Sebanz N, Knoblich G (2016) The sense of commitment: a minimal approach. Front Psychol 6:1968. https://doi.org/10.3389/fpsyg.2015.01968

  33. Milliez G, Warnier M, Clodic A, Alami R (2014) A framework for endowing an interactive robot with reasoning capabilities about perspective-taking and belief management. In: The 23rd IEEE international symposium on robot and human interactive communication (RO-MAN), Edinburgh, United Kingdom, pp. 1103–1109, 10.1109/ROMAN.2014.6926399

  34. Olsen DR, Goodrich MA (2003) Metrics for evaluating human-robot interaction. In: PERMIS, Gaithersburg, United States

  35. Robinson JD (2012) Overall structural organization. In: Sidnell J, Stivers T (eds) The handbook of conversation analysis. John Wiley, Hoboken, pp 257–280. https://doi.org/10.1002/9781118325001.ch13

  36. Sallami Y, Lemaignan S, Clodic A, Alami R (2019) Simulation-based physics reasoning for consistent scene estimation in an hri context. In: IEEE/RSJ International conference on intelligent robots and systems (IROS), pp 1–8

  37. Sanchez-Matilla R, Chatzilygeroudis K, Modas A, Duarte NF, Xompero A, Frossard P, Billard A, Cavallaro A (2020) Benchmark for human-to-robot handovers of unseen containers with unknown filling. IEEE Robot Autom Lett 5(2):1642–1649. https://doi.org/10.1109/LRA.2020.2969200

  38. Sarthou G, Alami R, Clodic A (2019) Semantic spatial representation: a unique representation of an environment based on an ontology for robotic applications. In: SpLU-RoboNLP, pp 50–60

  39. Schegloff EA, Sacks H (1973) Opening up closings. Semiotica 8(4):289–327

  40. Sidner CL, Lee C (2003) Engagement rules for human-robot collaborative interactions. In: IEEE International conference on systems, man and cybernetics (SMC), Washington DC, United States, pp. 3957–3962, 10.1109/ICSMC.2003.1244506

  41. Singamaneni PT, Alami R (2020) HATEB-2: Reactive planning and decision making in human-robot co-navigation. In: International conference on robot & human interactive communication, 2020, Online, Italy, 10.1109/RO-MAN47096.2020.9223463

  42. Steinfeld A, Fong T, Kaber D, Lewis M, Scholtz J, Schultz A, Goodrich M (2006) Common metrics for human-robot interaction. In: Proceedings of the 1st ACM SIGCHI/SIGART conference on human-robot interaction, Salt Lake City, United States, pp. 33–40, 10.1145/1121241.1121249

  43. Tabrez A, Luebbers MB, Hayes B (2020) A survey of mental modeling techniques in human-robot teaming. Current Robotics Reports 10.1007/s43154-020-00019-0

  44. Tanevska A, Rea F, Sandini G, Sciutti A (2017) Towards an affective cognitive architecture for human-robot interaction for the iCub robot. In: 1st Workshop on “behavior, emotion and representation: building blocks of interaction”, Bielefeld, Germany

  45. Thomaz A, Hoffman G, Çakmak M (2016) Computational human-robot interaction. Found Trends Robot 4(2–3):105–223. https://doi.org/10.1561/2300000049

  46. Waldhart J, Clodic A, Alami R (2019) Reasoning on shared visual perspective to improve route directions. In: IEEE International conference on robot and human interactive communication (RO-MAN), IEEE, pp 1–8

  47. Movellan JR, Tanaka F, Fasel IR, Taylor C, Ruvolo P, Eckhardt M (2007) The RUBI project: a progress report. In: 2nd ACM/IEEE International conference on human-robot interaction (HRI), pp. 333–339

Download references

Acknowledgements

Many thanks to Michaël Mayer for his technical expertise and his help on the mathematical formalization.

Funding

This work has been supported by the European Union’s Horizon 2020 research and innovation program under grant agreement No. 688147 (MuMMER project), and by the French National Research Agency (ANR) under grant references ANR-16-CE33-0017 (JointAction4HRI project), and ANR-19-PI3A-0004 (ANITI).

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Amandine Mayima.

Ethics declarations

Conflict of interest

The authors have no conflicts of interest to declare that are relevant to the content of this article.

Ethical Approval

This paper or a similar version is not currently under review by a journal or conference. This paper is void of plagiarism or self-plagiarism as defined by the Committee on Publication Ethics and Springer Guidelines.

Code availability

The code of the Quality of Interaction Evaluator is available at https://github.com/amdia/guiding_task.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

A Appendix: Scaling Functions for the Metrics

As the metrics are aggregated to compute the QoI, their values need to be on the same scale. To do so, we use scaling functions that rescale the metrics into the range \([-1,1]\), i.e., the QoI bounds. Since the metrics do not all have the same properties, they have to be scaled with different functions. The two properties to check in order to choose which function to apply to which metric are the following:

  • Does the metric already have a bounded value?

  • Which values of the metric should make the QoI decrease, increase, or remain the same?

Therefore, we designed three functions to be used with metrics having bounded values and three functions for metrics that do not have an upper bound. Then, within each of these two sets of functions, the one to use is chosen according to the positive, neutral, or negative impact a value should have on the QoI, as the selection sketch below illustrates.
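The following minimal sketch makes this selection explicit. It is only an illustration of the rule just stated: the function name, its boolean arguments, and the returned labels are ours, with n1–n3 and s1–s3 referring to Eqs. (7)–(12) below.

```python
def choose_scaling_function(is_bounded: bool,
                            can_lower_qoi: bool,
                            can_raise_qoi: bool) -> str:
    """Pick one of the six scaling functions of Appendix A.

    Bounded metrics use the min-max functions n1-n3 (Eqs. 7-9);
    unbounded ones use the sigmoid-like functions s1-s3 (Eqs. 10-12).
    Within a family, the choice depends on whether the metric can
    lower the QoI, raise it, or both.
    """
    family = "n" if is_bounded else "s"
    if can_lower_qoi and can_raise_qoi:
        return family + "1"  # output in [-1, 1]
    if can_raise_qoi:
        return family + "2"  # output in [0, 1] (neutral to positive)
    return family + "3"      # output in [-1, 0] (negative to neutral)
```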

A.1 Scaling of Bounded Metrics: Min-Max Normalization

We defined three min-max normalization functions, illustrated in Fig. 11. They were designed to be used for metrics whose values belong to a bounded set, i.e., metrics for which the minimum and maximum values are known. The first function is to be applied when a measure approaching the bound value \(b_1\) has a negative impact on the quality evaluation whereas a measure approaching \(b_2\) has a positive one. It scales a measure \(x\) between -1 and 1:

$$\begin{aligned} n_1(x) = 2 * \dfrac{x-b_1}{b_2-b_1} -1 \end{aligned}$$
(7)

The second function is intended for cases in which a measure approaching the bound value \(b_1\) has a neutral impact on the quality evaluation whereas a measure approaching \(b_2\) has a positive one. It scales a measure \(x\) between 0 and 1:

$$\begin{aligned} n_2(x) = \dfrac{x-b_1}{b_2-b_1} \end{aligned}$$
(8)

Finally, the last function is to be applied when a measure approaching the bound value \(b_1\) has a negative impact on the quality evaluation whereas a measure approaching \(b_2\) has a neutral one. It scales a measure \(x\) between -1 and 0:

$$\begin{aligned} n_3(x) = \dfrac{x-b_2}{b_2-b_1} \end{aligned}$$
(9)
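As a concrete illustration, the three min-max functions of Eqs. (7)–(9) can be implemented in a few lines. This is a sketch only: the helper name, its signature, and the mode convention are ours, not taken from the released code.

```python
def minmax_scale(x: float, b1: float, b2: float, mode: int = 1) -> float:
    """Min-max normalization of a bounded metric value x in [b1, b2].

    mode 1 -> Eq. (7): b1 negative impact, b2 positive -> output in [-1, 1]
    mode 2 -> Eq. (8): b1 neutral impact,  b2 positive -> output in [0, 1]
    mode 3 -> Eq. (9): b1 negative impact, b2 neutral  -> output in [-1, 0]
    """
    if mode == 1:
        return 2 * (x - b1) / (b2 - b1) - 1
    if mode == 2:
        return (x - b1) / (b2 - b1)
    return (x - b2) / (b2 - b1)


# For a metric bounded in [0, 1] whose low values should penalize the QoI
# and whose high values should reward it (Eq. 7):
# minmax_scale(0.75, 0.0, 1.0, mode=1) == 0.5
```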
Fig. 11: a, b and c respectively represent the min-max normalization functions (7), (8) and (9)

Fig. 12: Plots of the sigmoid-like functions \(s_1(x)\) and \(s_2(x)\) with different parameter values

A.2 Scaling of Unbounded Metrics: Sigmoid Normalization

We defined three sigmoid-like functions to scale and squash the values of metrics without an upper bound. As for the min-max normalization, there is one function to scale the metric values between -1 and 1, another one to scale them between 0 and 1, and a last one to scale them between -1 and 0.

The first function scales between -1 and 1 the values of a metric whose values lie between 0 and \(+\infty \) (e.g., a duration whose final value is unknown during the execution). The function is defined as:

$$\begin{aligned} s_1(x) = 1 - 2 \exp {\left( -\ln {(2)}\left( \dfrac{x}{th}\right) ^k\right) }, x > 0 \end{aligned}$$
(10)

with \(s_1(x) \in [-1,1]\), \(th\) the value of the sigmoid’s midpoint (i.e., \(s_1(th)=0\)) and \(k\) setting the shape of the function curve. The values of \(k\) and \(th\) are set off-line by the designer and define the shape of the metric scaling.

The second function is designed for metrics which cannot have a negative impact on the QoI, as it scales the value between 0 and 1 (with \(x \in [0,+\infty)\) as well):

$$\begin{aligned} s_2(x) = 1 - \exp {\left( -\ln {(2)}\left( \dfrac{x}{th}\right) ^k\right) }, x > 0 \end{aligned}$$
(11)

with \(s_2(x) \in [0,1]\), \(th\) the value of the sigmoid’s midpoint (i.e., \(s_2(th)=0.5\)) and \(k\) setting the shape of the function curve.

The third function is designed for metrics which cannot have a positive impact on the QoI, as it scales the value between -1 and 0 (with \(x \in [0,+\infty)\) as well):

$$\begin{aligned} s_3(x) = - 1 + \exp {\left( -\ln {(2)}\left( \dfrac{x}{th}\right) ^k\right) }, x > 0 \end{aligned}$$
(12)

with \(s_3(x) \in [-1,0]\), \(th\) the value of the sigmoid’s midpoint (i.e., \(s_3(th)=-0.5\)) and \(k\) setting the shape of the function curve.

The functions \(s_1(x)\) and \(s_2(x)\) are illustrated in Fig. 12 with four examples.
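In the same spirit, the three sigmoid-like functions of Eqs. (10)–(12) can be sketched as follows; again, the helper name, its signature, and the usage example are ours and only illustrate the formulas above.

```python
import math


def sigmoid_scale(x: float, th: float, k: float, mode: int = 1) -> float:
    """Sigmoid-like squashing of an unbounded metric value x >= 0.

    th is the midpoint (s1(th)=0, s2(th)=0.5, s3(th)=-0.5) and k sets the
    shape of the curve; both are chosen off-line by the designer.

    mode 1 -> Eq. (10): output in [-1, 1]
    mode 2 -> Eq. (11): output in [0, 1]
    mode 3 -> Eq. (12): output in [-1, 0]
    """
    core = math.exp(-math.log(2) * (x / th) ** k)
    if mode == 1:
        return 1 - 2 * core
    if mode == 2:
        return 1 - core
    return -1 + core


# e.g., a waiting time that can only degrade the QoI (Eq. 12), with a
# midpoint of 10 s and k = 2:
# sigmoid_scale(10.0, th=10.0, k=2, mode=3) == -0.5
```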

About this article

Cite this article

Mayima, A., Clodic, A. & Alami, R. Towards Robots able to Measure in Real-time the Quality of Interaction in HRI Contexts. Int J of Soc Robotics 14, 713–731 (2022). https://doi.org/10.1007/s12369-021-00814-5
