A natural interface based on intention prediction for semi-autonomous micromanipulation
Manipulation at micro and nano scales is a particular case of remote handling. Although novel robotic approaches are emerging, these tools are not yet widely adopted because of their inherent complexity and their lack of user-friendly interfaces. To fill this gap, this work first introduces a novel paradigm dubbed semi-autonomous manipulation, which aims to combine full automation and user-driven manipulation by sequencing simple automated elementary tasks according to user instructions. To acquire these instructions in a more natural and intuitive way, we propose a “metaphor-free” user interface implemented in a virtual reality environment. A predictive intention extraction technique is introduced through a computational model inspired by cognitive science and implemented using a Kinect depth sensor. In a semi-autonomous pick-and-place operation, this model is compared, in terms of naturalness and intuitiveness, with a gesture recognition technique for detecting user actions. The proposed approach improves user performance in both task duration and success rate, and a user survey shows a qualitative preference for it. The technique may be a worthy alternative to manual operation with a basic keyboard/joystick setup, or even an interesting complement to a haptic feedback arm.
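To make the idea of predictive intention extraction concrete, the sketch below is a minimal illustration, not the authors' published model: it assumes a stream of hand positions (e.g., from a Kinect skeleton) and a set of candidate targets in the workspace, scores each target by how well the current hand motion points toward it, and commits only once one target clearly dominates. All names, thresholds, and the softmax scoring are illustrative assumptions.

```python
# Minimal sketch of predictive intention extraction from tracked hand
# positions. Hypothetical design, not the paper's model: score each
# candidate target by the alignment of the current hand velocity with
# the hand-to-target direction, then commit once one target dominates.

import numpy as np

def predict_target(hand_positions, targets, commit_threshold=0.8):
    """Return the index of the most likely target, or None if undecided.

    hand_positions: (T, 3) array of recent hand positions (metres).
    targets:        (K, 3) array of candidate target positions.
    """
    hand_positions = np.asarray(hand_positions, dtype=float)
    targets = np.asarray(targets, dtype=float)

    # Estimate the current motion direction from the last two samples.
    velocity = hand_positions[-1] - hand_positions[-2]
    speed = np.linalg.norm(velocity)
    if speed < 1e-6:                     # hand is (almost) at rest
        return None
    direction = velocity / speed

    # Unit vectors from the current hand position to each target.
    to_targets = targets - hand_positions[-1]
    to_targets /= np.linalg.norm(to_targets, axis=1, keepdims=True)

    # Cosine similarity between motion direction and each target direction.
    scores = to_targets @ direction

    # Softmax turns similarities into a pseudo-probability per target;
    # the temperature of 5.0 is an arbitrary illustrative choice.
    probs = np.exp(5.0 * scores)
    probs /= probs.sum()

    best = int(np.argmax(probs))
    return best if probs[best] >= commit_threshold else None

# Example: the hand moves along +x, toward the first of two targets.
trajectory = [[0.00, 0, 0], [0.05, 0, 0], [0.10, 0, 0]]
candidates = [[0.5, 0.0, 0.0], [0.0, 0.5, 0.0]]
print(predict_target(trajectory, candidates))   # -> 0
```

In a semi-autonomous loop of this kind, a committed prediction would trigger the corresponding automated elementary task (e.g., approach, grasp, release), while an undecided result simply defers the decision to the next frame.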
Keywords: HCI · Microrobotics · Intention prediction · Gesture recognition · Kinect
Funding was provided by the French government research program “Investissements d’avenir” through SMART Laboratory of Excellence (Grant No. ANR-11-LABX-65) and Robotex Equipment of Excellence (Grant No. ANR-10-EQPX-44).