Learning Human Priors for Task-Constrained Grasping

  • Martin Hjelm
  • Carl Henrik Ek
  • Renaud Detry
  • Danica Kragic
Conference paper

DOI: 10.1007/978-3-319-20904-3_20

Part of the Lecture Notes in Computer Science book series (LNCS, volume 9163)
Cite this paper as:
Hjelm M., Ek C.H., Detry R., Kragic D. (2015) Learning Human Priors for Task-Constrained Grasping. In: Nalpantidis L., Krüger V., Eklundh JO., Gasteratos A. (eds) Computer Vision Systems. ICVS 2015. Lecture Notes in Computer Science, vol 9163. Springer, Cham

Abstract

An autonomous agent using man-made objects must understand how the task conditions the grasp placement. In this paper we formulate task-based robotic grasping as a feature learning problem. Using a human demonstrator to provide examples of grasps associated with a specific task, we learn a representation such that similarity in task is reflected by similarity in feature. The learned representation discards parts of the sensory input that are redundant for the task, allowing the agent to ground and reason about the features relevant to the task. Synthesized grasps for an observed task on previously unseen objects can then be filtered and ordered by matching to learned instances, without the need for an analytically formulated metric. We show on a real robot how our approach is able to utilize the learned representation to synthesize and perform valid task-specific grasps on novel objects.
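The filtering and ordering step described in the abstract can be pictured as nearest-neighbour matching in the learned feature space: candidate grasps are scored by their proximity to demonstrated, task-specific grasp instances. The sketch below is a minimal illustration of that idea only, not the authors' implementation; the function name, the Euclidean distance, and the feature dimensionalities are assumptions made for the example.

```python
import numpy as np

def rank_candidate_grasps(candidates, demonstrations):
    """Order candidate grasp feature vectors by their distance to the
    closest demonstrated (task-specific) grasp in a learned feature space.

    candidates:     (n, d) array of candidate grasp features.
    demonstrations: (m, d) array of features from human-demonstrated grasps.
    Returns candidate indices, most task-consistent first.
    """
    # Pairwise Euclidean distances between candidates and demonstrations.
    diffs = candidates[:, None, :] - demonstrations[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # Score each candidate by its nearest demonstrated instance.
    scores = dists.min(axis=1)
    return np.argsort(scores)

# Hypothetical usage: 5 candidate grasps, 3 demonstrations, 8-D features.
rng = np.random.default_rng(0)
order = rank_candidate_grasps(rng.normal(size=(5, 8)), rng.normal(size=(3, 8)))
print(order)  # e.g. [2 0 4 1 3]
```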

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Martin Hjelm (1)
  • Carl Henrik Ek (1)
  • Renaud Detry (2)
  • Danica Kragic (1)
  1. Autonomous Systems and the Computer Vision and Active Perception Lab, CSC, KTH Royal Institute of Technology, Stockholm, Sweden
  2. University of Liège, Liège, Belgium