
An Attentional Approach to Human–Robot Interactive Manipulation

  • Published in: International Journal of Social Robotics

Abstract

Human–robot collaborative work requires interactive manipulation and object handover. During the execution of such tasks, the robot should monitor manipulation cues to assess the human's intentions and quickly determine the appropriate execution strategies. In this paper, we present a control architecture that combines a supervisory attentional system with a human-aware manipulation planner to support effective and safe collaborative manipulation. After detailing the approach, we present experimental results describing the system at work on different manipulation tasks (give, receive, pick, and place).




References

  1. Abbeel P, Ng AY (2004) Apprenticeship learning via inverse reinforcement learning. In: Proceedings of the twenty-first international conference on Machine learning, p 1. ACM

  2. Alili S, Alami R, Montreuil V (2009) A task planner for an autonomous social robot. In: Distributed autonomous robotic systems. Springer, Berlin, pp 335–344

  3. Arbib MA (1998) Schema theory. In: The handbook of brain theory and neural networks. MIT Press, Cambridge, pp 830–834

  4. Arkin R (1998) Behavior based robotics. MIT Press, Cambridge


  5. Berchtold S, Glavina B (1994) A scalable optimizer for automatically generated manipulator motions. In: IEEE/RSJ Int. Conf. on Intel. Rob. And Sys. IEEE, Munich, Germany

  6. Bounab B, Labed A, Sidobre D (2010) Stochastic optimization-based approach for multifingered grasps synthesis. Robotica 28(07):1021–1032


  7. Breazeal C (2002) Designing sociable robots. MIT Press, Cambridge


  8. Breazeal C, Kidd CD, Thomaz AL, Hoffman G, Berlin M (2005) Effects of nonverbal communication on efficiency and robustness in human-robot teamwork. In: IROS-2005. ACM/IEEE, Edmonton, pp 383–388

  9. Brooks RA (1991) A robust layered control system for a mobile robot. In: Iyengar SS, Elfes A (eds) Autonomous mobile robots: control, planning, and architecture (vol 2). IEEE Computer Society Press, Los Alamitos, pp 152–161

  10. Broquère X, Sidobre D (2010) From motion planning to trajectory control with bounded jerk for service manipulator robots. In: IEEE Int. Conf. Robot. And Autom. IEEE, Anchorage

  11. Broquère X, Sidobre D, Herrera-Aguilar I (2008) Soft motion trajectory planner for service manipulator robot. In: IEEE/RSJ Int. Conf. on Intel. Rob. And Sys. IEEE, Nice, France

  12. Burattini E, Finzi A, Rossi S, Staffa M (2010) Attentive monitoring strategies in a behavior-based robotic system: an evolutionary approach. In: Proceedings of the 2010 international conference on emerging security technologies, EST ’10. IEEE Computer Society, Washington, pp 153–158

  13. Burattini E, Finzi A, Rossi S, Staffa M (2011) Cognitive control in cognitive robotics: attentional executive control. In: Proc. of ICAR-2011. IEEE, Tallinn, Estonia, pp 359–364

  14. Burattini E, Finzi A, Rossi S, Staffa M (2012) Attentional human-robot interaction in simple manipulation tasks. In: Proc. of HRI-2012, Late-Breaking Reports. ACM/IEEE, Boston

  15. Burattini E, Rossi S (2008) Periodic adaptive activation of behaviors in robotic system. IJPRAI 22(5):987–999 (special issue on Brain, Vision and Artificial Intelligence)


  16. Clodic A, Cao H, Alili S, Montreuil V, Alami R, Chatila R (2009) Shary: a supervision system adapted to human-robot interaction. In: Khatib O, Kumar V, Pappas G (eds) Experimental robotics, springer tracts in advanced robotics, vol 54. Springer, Berlin, pp 229–238. doi:10.1007/978-3-642-00196-3_27

  17. Cooper R, Shallice T (2000) Contention scheduling and the control of routine activities. Cogn Neuropsychol 17:297–338


  18. Di Nocera D, Finzi A, Rossi S, Staffa M (2012) Attentional action selection using reinforcement learning. In: Ziemke T, Balkenius C, Hallam J (eds) From animals to animats 12–12th international conference on simulation of adaptive behavior, SAB 2012, Lecture Notes in Computer Science, vol 7426. Springer, Berlin, pp 371–380

  19. Duguleana M, Barbuceanu FG, Mogan G (2011) Evaluating human-robot interaction during a manipulation experiment conducted in immersive virtual reality. In: Proc. of international conference on virtual and mixed reality: new trends, vol I. Springer, Berlin, pp 164–173

  20. Edsinger A, Kemp CC (2007) Human-robot interaction for cooperative manipulation: Handing objects to one another. In: RO-MAN 2007. IEEE, Jeju, Korea, pp 1167–1172

  21. Fitts P (1954) The information capacity of the human motor system in controlling the amplitude of movement. J Exp Psychol 47(6):381–391


  22. Fleury S, Herrb M, Chatila R (1997) GenoM: a tool for the specification and the implementation of operating modules in a distributed robot architecture. In: IEEE/RSJ Int. Conf. on Intel. Rob. and Sys. IEEE, Grenoble, France

  23. Hoffman G, Breazeal C (2007) Cost-based anticipatory action selection for human–robot fluency. IEEE Trans Robot 23(5): 952–961


  24. Iengo S, Origlia A, Staffa M, Finzi A (2012) Attentional and emotional regulation in human-robot interaction. In: RO-MAN, pp 1135–1140

  25. Itti L, Koch C (2001) Computational modeling of visual attention. Nat Rev Neurosci 2(3):194–203


  26. Jaillet L, Cortés J, Siméon T (2010) Sampling-based path planning on configuration-space costmaps. IEEE Trans Robot 26(4): 635–646


  27. Kahneman D (1973) Attention and effort. Prentice-Hall, Englewood


  28. Kaplan F, Hafner VV (2006) The challenges of joint attention. Interact Stud 7(2):135–169. doi:10.1075/is.7.2.04kap


  29. Lang S, Kleinehagenbrock M, Hohenner S, Fritsch J, Fink GA, Sagerer G (2003) Providing the basis for human-robot-interaction: A multi-modal attention system for a mobile robot. In: Proc. int. conf. on multimodal interfaces. ACM, Vancouver, pp 28–35

  30. Mainprice J, Sisbot E, Jaillet L, Cortés J, Siméon T, Alami R (2011) Planning Human-aware motions using a sampling-based costmap planner. In: IEEE int. conf. robot. and autom. IEEE, Shanghai.

  31. Marler R, Rahmatalla S, Shanahan M, Abdel-Malek K (2005) A new discomfort function for optimization-based posture prediction. SAE Technical Paper, Warrendale

  32. Nagai Y, Hosoda K, Morita A, Asada M (2003) A constructive model for the development of joint attention. Connect Sci 15(4):211–229


  33. Norman D, Shallice T (1986) Attention in action: willed and automatic control of behaviour. Conscious Self-Regulation 4:1–18


  34. Pashler H, Johnston J (1998) Attentional limitations in dual-task performance. In: Pashler H (ed) Attention. Psychology Press, East Essex, pp 155–189


  35. Posner M, Snyder C (1975) Attention and cognitive control. In: Information processing and cognition: the Loyola symposium. Erlbaum, Hillsdale

  36. Posner M, Snyder C, Davidson B (1980) Attention and the detection of signals. J Exp Psychol Gen 109:160–174


  37. Rossi S, Leone E, Fiore M, Finzi A, Cutugno F (2013) An extensible architecture for robust multimodal human-robot communication. In: Proc. of IROS 2013. IEEE, Tokyo, Japan

  38. Saut JP, Sidobre D (2012) Efficient models for grasp planning with a multi-fingered hand. Robot Auton Syst 60(3):347–357. doi:10.1016/j.robot.2011.07.019 (special issue on Autonomous Grasping)


  39. Scassellati B (1999) Imitation and mechanisms of joint attention: a developmental structure for building social skills on a humanoid robot. In: Computation for metaphors, analogy and agents, vol 1562. Springer, Berlin, pp 176–195

  40. Senders J (1964) The human operator as a monitor and controller of multidegree of freedom systems. IEEE Trans Hum Factors Electron HFE-5:2–6

  41. Siciliano B (2012) Advanced bimanual manipulation: results from the DEXMART project, vol 80. Springer, Heidelberg. doi:10.1007/978-3-642-29041-1

  42. Sisbot E, Marin-Urias L, Broquère X, Sidobre D, Alami R (2010) Synthesizing robot motions adapted to human presence. Int J Soc Robot 2(3):329–343


  43. Sisbot EA, Alami R (2012) A human-aware manipulation planner. IEEE Trans Robot 28(5):1045–1057


  44. Sisbot EA, Marin-Urias LF, Alami R, Siméon T (2007) Human aware mobile robot motion planner. IEEE Trans Robot 23(5): 874–883


  45. Sisbot EA, Ros R, Alami R (2011) Situation assessment for human-robot interactive object manipulation. In: IEEE RO-MAN. IEEE, Atlanta

  46. Tinbergen N (1951) The study of instinct. Oxford University Press, London


  47. Trafton JG, Cassimatis NL, Bugajska MD, Brock DP, Mintz FE, Schultz AC (2005) Enabling effective human-robot interaction using perspective-taking in robots. IEEE Trans Syst Man Cybern 35:460–470



Acknowledgments

The research leading to these results has been supported by the SAPHARI Large-scale integrating project, which has received funding from the European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreement ICT-287513. The authors are solely responsible for its content. It does not represent the opinion of the European Community and the Community is not responsible for any use that might be made of the information contained therein.


Corresponding author

Correspondence to Daniel Sidobre.

Appendix


The overall control architecture has been implemented within the LAAS architecture, exploiting the GenoM (Generator of Modules) [22] development framework. In the following, we first introduce the main concepts of the GenoM framework, then we illustrate the implemented control architecture, and finally we provide some details about the implementation of the attentional module.

1.1 GenoM

The GenoM framework supports the design of real-time software architectures. It encapsulates the robot functionalities into independent modules, which are responsible for their execution. Each GenoM module can concurrently execute several services and can send information to other modules or share data with them through data structures called posters. The functionalities are dynamically started, interrupted, or parameterized upon asynchronous requests sent to the modules. There are two kinds of requests: execution requests start an actual service, whereas control requests control the execution of the services (see Fig. 12). Each request is associated with a final reply that reports how the service has been executed. Within each module, the algorithms must be split into several parts: initialization, body, termination, interruption, etc. Each of these elementary pieces of code is called a codel. In the current version of GenoM, these codels are C/C++ functions. A running service is called an activity. The different states of an activity are shown in Fig. 12 (right). From any transition, an activity can enter the INTER state; in case of a problem, it can enter the FAIL state, or even go directly into the ZOMBIE (frozen) state. Activities can control a physical device (e.g., sensors and actuators), read data produced by other modules (from their posters), or produce data. The data can be transferred at the end of the execution through the final reply, or at any time by means of posters.

Fig. 12 GenoM module structure and state machine
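The activity life cycle described above can be sketched as a small state machine. This is a simplified reading of the states in Fig. 12, not the actual GenoM implementation: the state and event names follow the paper, but the transition rules are our assumptions.

```cpp
// Hypothetical sketch of a GenoM activity life cycle (states from Fig. 12).
// The transition logic below is an assumption for illustration only.
enum class ActivityState { START, EXEC, INTER, FAIL, ZOMBIE, ETHER };
enum class Event { STEP_OK, DONE, INTERRUPT, ERROR, FREEZE };

// Advance an activity one step: any state may be interrupted (INTER);
// a problem leads to FAIL, or directly to ZOMBIE (frozen); normal
// completion ends in ETHER, where the final reply is sent to the client.
ActivityState step(ActivityState s, Event e) {
    if (s == ActivityState::ZOMBIE || s == ActivityState::ETHER)
        return s;                                            // terminal states
    if (e == Event::FREEZE)    return ActivityState::ZOMBIE; // frozen
    if (e == Event::INTERRUPT) return ActivityState::INTER;  // interruption codel
    if (e == Event::ERROR)     return ActivityState::FAIL;   // problem detected
    switch (s) {
        case ActivityState::START: return ActivityState::EXEC;  // init codel done
        case ActivityState::EXEC:                               // body codel loops
            return e == Event::DONE ? ActivityState::ETHER      // final reply
                                    : ActivityState::EXEC;
        case ActivityState::INTER: return ActivityState::ETHER; // clean stop
        case ActivityState::FAIL:  return ActivityState::ETHER; // termination codel
        default:                   return s;
    }
}
```

A client would observe only the final reply (reached in ETHER), while intermediate data exchange goes through posters.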

1.2 System Architecture

A description of the GenoM modules involved in the attentional control cycle is provided in Fig. 13. Here, we can distinguish the SPARK module, which is responsible for perceptual analysis and costmap generation, the MHP module, which is responsible for the robot motion planning and execution (path/grasp/motion planning and smoothing), and the ATTENTIONAL module, which is responsible for attentional regulation and task switching.

Fig. 13 Architecture of the system

1.3 Attentional System

The attentional system is implemented as a GenoM module with an executive cycle of \(10\) milliseconds. An abstract illustration of the codel associated with the attentional system is provided by Algorithm 1. Here, \(attentionalControlMain()\) is activated at each cycle (i.e., every \(10\) milliseconds) and returns an ACTIVITY_EVENT (i.e., the EXEC state). During the cycle, all the behaviors are checked and updated. For each behavior, the attentional module checks whether the perceptual schema is active. If it is not active, the behavior clock is increased by one tick (\(updateClock()\)). Otherwise, the module acts as follows: it reads the associated input data from the poster generated by the SPARK module (\(readData()\)); it defines the next clock period according to the behavior monitoring function (\(updateClockPeriod()\)); it evaluates the releasing function (\(checkReleaser()\)) to determine whether the motor schema is active; finally, the current sensing data is stored (\(storeLastSensing()\)) and the clock is reset (\(resetClock()\)). Once each behavior has been updated, the executive system selects the current activity to be executed and the associated cost (\(selectActivity()\)).
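The per-cycle update above can be sketched as follows. The function names follow Algorithm 1; the `Behavior` structure, the poster read, and the monitoring/releasing rules are hypothetical stand-ins, since the paper does not expose their internals.

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of one 10 ms attentional cycle (Algorithm 1).
// Only the function names come from the paper; data types and the
// stand-in sensing rules are assumptions for illustration.
struct Sensing { double value = 0.0; };

struct Behavior {
    std::string name;
    bool perceptualActive = false;  // is the perceptual schema sampling now?
    bool motorActive = false;       // set by the releasing function
    int clock = 0;                  // ticks elapsed since the last sensing
    int period = 1;                 // current activation period
    Sensing last;                   // previous sensing data

    Sensing readData() { return Sensing{1.0}; }    // stand-in SPARK poster read
    int updateClockPeriod(const Sensing& s) {      // monitoring function (assumed)
        return s.value > 0.5 ? 1 : 4;              // salient stimulus -> high frequency
    }
    bool checkReleaser(const Sensing& s) { return s.value > 0.5; }
};

// One attentional cycle over all behaviors, mirroring the text:
// inactive perceptual schema -> tick the clock; otherwise read, adapt
// the period, check the releaser, store the sensing, reset the clock.
void attentionalControlMain(std::vector<Behavior>& behaviors) {
    for (auto& b : behaviors) {
        if (!b.perceptualActive) {
            ++b.clock;                              // updateClock()
            continue;
        }
        Sensing s = b.readData();                   // readData()
        b.period = b.updateClockPeriod(s);          // updateClockPeriod()
        b.motorActive = b.checkReleaser(s);         // checkReleaser()
        b.last = s;                                 // storeLastSensing()
        b.clock = 0;                                // resetClock()
    }
    // selectActivity() would then run on the updated attentional state.
}
```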

The executive system is implemented by the \(selectActivity()\) function (see Algorithm 2). It takes the current executive state (IDLE, PICK, GIVE, RECEIVE, PLACE), the attentional state (the active behaviors and their periods), and the associated cost vector (the velocity modulation suggested by each behavior). If at least one behavior is active, the function checks the priorities (which depend on the executive state) and decides whether to keep the current activity or to switch to another one. Once an activity has been selected, a target human, location, or object is set (\(selectTarget()\)). Finally, the velocity modulation is decided (\(setCost()\)) by taking the minimum of the modulation associated with the selected behavior and the one proposed by AVOID (i.e., \(\min (\alpha _{av}(t),\alpha _{task}(t))\)).
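A minimal sketch of this selection step is given below. It reduces the priority scheme to "smallest period (highest activation frequency) wins", whereas the real function also conditions the decision on the executive state; the structure names are hypothetical, and only the final \(\min (\alpha _{av},\alpha _{task})\) rule comes directly from the text.

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical sketch of selectActivity() (Algorithm 2). The priority
// rule is simplified: the behavior with the smallest period (i.e., the
// most frequently activated one) wins. Field and type names are assumed.
struct ActiveBehavior {
    std::string activity;  // PICK, GIVE, RECEIVE, or PLACE
    int period;            // smaller period = more urgent stimulus
    double alphaTask;      // velocity modulation suggested by this behavior
};

struct Selection { std::string activity; double alpha; };

Selection selectActivity(const std::vector<ActiveBehavior>& active,
                         const std::string& current, double alphaAvoid) {
    if (active.empty())
        return {current, alphaAvoid};  // nothing new: keep going, AVOID still applies
    const ActiveBehavior& best = *std::min_element(
        active.begin(), active.end(),
        [](const ActiveBehavior& a, const ActiveBehavior& b) {
            return a.period < b.period;
        });
    // setCost(): the final velocity modulation is min(alpha_av, alpha_task)
    return {best.activity, std::min(alphaAvoid, best.alphaTask)};
}
```

With PLACE (period 4) and GIVE (period 2) both active, GIVE would be selected, and a low AVOID modulation (e.g., a human very close) would cap the resulting arm velocity regardless of the selected task.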


Following the standard specifications of a GenoM module, the attentional module is activated by the start function \(attentionalControlStart()\) (which initializes the module and returns EXEC) and closed by the end function \(attentionalControlEnd()\) (which shuts the module down and returns ETHER).

1.4 Interaction Example

In Fig. 14 we illustrate a sequence diagram that represents a typical pick and give interaction. The diagram shows how the main components of the global framework in Fig. 1 (an abstract version of Fig. 13) interact in the following scenario: the robot picks an object from the table and tries either to place it in another position or to give it to a human. For the sake of clarity, we distinguish between an ATTENTIONAL and an EXECUTIVE timeline even though they belong to the same module. On the ATTENTIONAL timeline we show the names of the behaviors whose motor schemas are active (recall that the perceptual schemas of the behaviors are always periodically active). Moreover, to simplify the presentation, only the relevant messages are shown. In the absence of a human, or when the robot is idling, the robot monitors the scene (search for human). The perceptual schema of the SEARCH behavior receives data from the SPARK module (e.g., no human). Notice that in Fig. 14 the messages labeled with \((*)\) are transmitted periodically. If an object appears on the table (object position), in the absence of other stimuli, the robot tries to pick it up (pick object). As soon as the frequency of (pick object) increases, the EXECUTIVE calls the PLANNER for trajectory generation. Once the planner sends the trajectory to the arm controller, the attentional system modulates the arm velocity (speed modulation) during the execution, taking into account the information provided by all the active behaviors. The execution of the trajectory terminates with the object picked (holding object). While the robot is holding the object, in the absence of humans, it tries to place it at a suitable location (location position). The activation of the PLACE behavior (place object) affects the EXECUTIVE system, which switches to the PLACE mode and invokes the generation of an associated new trajectory (place trajectory).
During this trajectory execution the attentional system can affect the speed modulation. If a human enters the INTERACTION_SPACE (human detected), TRACK will monitor his/her position (human position) and GIVE will be activated (give object). In this particular configuration, both the PLACE and GIVE behaviors are active. The task switcher should choose one or the other, taking into account the frequencies of the two behaviors while monitoring the external processes. If a human is ready to receive the object and the frequency of GIVE becomes dominant, the EXECUTIVE triggers a task switch: it stops the execution of PLACE and asks the planner to launch the GIVE behavior (switch to give). Once again, during the execution the attentional system affects the behavior activations and consequently the arm speed modulation. In the presence of a human, the AVOID behavior can also contribute to the speed modulation, halting the execution in case of danger.

Fig. 14 Sequence diagram of a typical pick and place/give human–robot interactive activity. Messages labeled with \((*)\) are periodically sent


1.5 Interface

In Fig. 15 we show the interface used to visualize the system behavior. This snapshot captures the parallel activation of the PLACE, GIVE, and AVOID behaviors presented above. In the right box we can see that these three behaviors are active and that the selected one is GIVE, since the robot is holding an object and a human in the scene is asking for it.

Fig. 15 Snapshot of the interface of the simulated environment, during a typical pick and place/give human–robot interactive activity



Cite this article

Broquère, X., Finzi, A., Mainprice, J. et al. An Attentional Approach to Human–Robot Interactive Manipulation. Int J of Soc Robotics 6, 533–553 (2014). https://doi.org/10.1007/s12369-014-0236-0

