
Journal on Multimodal User Interfaces, Volume 12, Issue 1, pp 17–30

A natural interface based on intention prediction for semi-autonomous micromanipulation

  • Laura Cohen
  • Mohamed Chetouani
  • Stéphane Régnier
  • Sinan Haliyo
Original Paper

Abstract

Manipulation at the micro and nano scales is a particular case of remote handling. Although novel robotic approaches are emerging, these tools are not yet widely adopted due to their inherent complexity and their lack of user-friendly interfaces. To fill this gap, this work first introduces a novel paradigm dubbed semi-autonomous manipulation. Its aim is to combine full automation and user-driven manipulation by sequencing simple automated elementary tasks according to user instructions. To acquire these instructions in a more natural and intuitive way, we propose a “metaphor-free” user interface implemented in a virtual reality environment. A predictive intention extraction technique is introduced through a computational model inspired by cognitive science and implemented using a Kinect depth sensor. The model is compared, in terms of naturalness and intuitiveness, with a gesture recognition technique for detecting user actions in a semi-autonomous pick-and-place operation. Results show improved user performance in both task duration and success rate, and a user survey indicates a qualitative preference for the proposed approach. The proposed technique may be a worthy alternative to manual operation with a basic keyboard/joystick setup, or a useful complement to a haptic feedback arm.
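The abstract does not detail the computational model used for intention prediction. As a rough, hedged illustration only, the sketch below assumes that predicting the operator's intended target can be reduced to scoring candidate objects by how directly the tracked hand trajectory (e.g. from a Kinect skeleton stream) is heading toward them. The function name `predict_target` and all numerical values are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch only: scores candidate targets by alignment between the
# current hand velocity direction and the direction from the hand to each target.
import numpy as np

def predict_target(hand_positions, candidate_targets):
    """Return (index of most likely target, confidence in [-1, 1]).

    hand_positions    : (T, 3) array of recent hand positions (metres)
    candidate_targets : (N, 3) array of object positions in the same frame
    """
    hand_positions = np.asarray(hand_positions, dtype=float)
    candidate_targets = np.asarray(candidate_targets, dtype=float)

    # Current position and instantaneous direction of motion.
    current = hand_positions[-1]
    velocity = hand_positions[-1] - hand_positions[-2]
    speed = np.linalg.norm(velocity)
    if speed < 1e-6:              # hand is (almost) still: no prediction yet
        return None, 0.0
    heading = velocity / speed

    # Cosine between the heading and the hand-to-target direction
    # (1.0 means the hand is moving straight at that target).
    to_targets = candidate_targets - current
    dists = np.linalg.norm(to_targets, axis=1)
    cosines = (to_targets @ heading) / np.maximum(dists, 1e-9)

    best = int(np.argmax(cosines))
    return best, float(cosines[best])

if __name__ == "__main__":
    # Simulated hand moving toward the second of three micro-objects.
    trajectory = [[0.00, 0.00, 0.5], [0.02, 0.01, 0.5], [0.04, 0.02, 0.5]]
    objects = [[0.5, -0.2, 0.5], [0.4, 0.2, 0.5], [-0.3, 0.1, 0.5]]
    idx, conf = predict_target(trajectory, objects)
    print(f"predicted target: {idx}, confidence: {conf:.2f}")
```

In a semi-autonomous setting of the kind the abstract describes, such a prediction could trigger an automated elementary task (e.g. an approach or pick) once the confidence exceeds a threshold, leaving task sequencing to the user.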

Keywords

HCI · Microrobotics · Intention prediction · Gesture recognition · Kinect

Notes

Funding

Funding was provided by the French government research program “Investissements d’avenir” through SMART Laboratory of Excellence (Grant No. ANR-11-LABX-65) and Robotex Equipment of Excellence (Grant No. ANR-10-EQPX-44).


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Sorbonne Université, CNRS, Institut des Systèmes Intelligents et de Robotique (ISIR), Paris, France
