Grasping and manipulation of objects are essential motor skills for robots to interact with their environment and perform meaningful, physical tasks. Since the dawn of robotics, grasping and manipulation have formed a core research field with a large number of dedicated publications. The field has reached an important milestone in recent years as various robots can now reliably perform basic grasps on unknown objects. However, these robots are still far from being capable of human-level manipulation skills, including in-hand or bimanual manipulation of objects, interactions with non-rigid objects, and multi-object tasks such as stacking and tool usage. Progress on such advanced manipulation skills is slowed by the need to successfully combine a multitude of different methods and technologies, e.g., robust vision, tactile feedback, grasp stability analysis, modeling of uncertainty, learning, long-term planning, and much more. In order to address these difficult issues, an increasing number of governmental research programs have been launched, such as the European projects DEXMART, GeRT and GRASP, and the American DARPA Autonomous Robotic Manipulation (ARM) project. This increased interest has also become apparent in several international workshops at major robotics conferences, such as the well-attended workshop “Beyond Robot Grasping” at IROS 2012 in Portugal.

Hence, this special issue of the Autonomous Robots journal aims at presenting important recent success stories in the development of advanced robot grasping and manipulation abilities. The issue covers a wide range of papers that are representative of the current state of the art in the field. Papers were solicited with an open call that was circulated in the four months preceding the deadline. As a result, we received 37 submissions to the special issue, which were rigorously reviewed by up to four reviewers as well as by at least one of the guest editors. Altogether, twelve papers were selected for publication in this special issue. We are particularly happy to include four papers that describe the approach and goals of the DARPA ARM project, together with detailed descriptions of the methods developed within it.

We start with the paper “The DARPA Autonomous Robotic Manipulation (ARM) Program - Synopsis” by Hackett et al., which provides a general overview of the recent ARM project. The paper motivates the decisions made during the ARM project and gives insights into its history, structure and current state.

In the paper “An Autonomous Manipulation System based on Force Control and Optimization”, Righetti et al. introduce a manipulation architecture that uses force-torque control, variable compliance and optimization methods to realize robust grasping. The architecture includes components for calibration, perception, motion planning, motion primitive execution, inverse kinematics and force control, and therefore addresses a number of vital sub-tasks of grasping and manipulation. The effectiveness of this approach was demonstrated during the competitions of the DARPA ARM project, where Righetti and colleagues were consistently ranked among the top two teams.

The system of another top team in the DARPA ARM project is described in the paper “Model-Based Autonomous System for Performing Dexterous Human-Level Manipulation Tasks” by Hudson et al. from NASA JPL. The paper discusses the challenges faced during the project and the related competition, and presents a system architecture for autonomous bimanual manipulation. The required perceptual capabilities, including mapping, object detection and object tracking, are described in detail in the first part of the paper. The second part focuses on planning, control and action execution. The paper provides important insights into the design choices that were made during the development of the system, as well as their advantages and drawbacks.

In the last paper on the ARM project, entitled “Learning of Grasp Selection based on Shape-Templates”, Herzog et al. present an algorithm for selecting good grasp poses for unknown objects from point cloud data. The approach uses a local grasp shape descriptor to encode suitable grasp locations on an object. This descriptor is used together with kinesthetic teaching in order to create a library of stable grasps. The approach also includes a learning component that allows feedback on the success or failure of a grasp to be used to adapt the library. The robustness of the approach is demonstrated in an extensive experimental evaluation with a variety of household objects.

The next three papers of the special issue highlight recent advances in the design of robotic actuators for grasping. The paper “Design and Control of a Three-Fingered Tendon-Driven Robotic Hand with Active and Passive Tendons” by Ozawa et al. presents a new three-fingered robotic hand and a set of corresponding controllers. The paper discusses important aspects in the design of tendon-driven robotic fingers and shows how active and passive tendons can be used to realize variable stiffness control. The approach was validated by demonstrating five different types of stable grasps on a variety of objects.

A different type of robot gripper is presented in the paper “A Compliant Self-Adaptive Gripper with Proprioceptive Haptic Feedback” by Belzile et al. The presented gripper features compliant joints, underactuation and a haptic interface. In addition to the technical description of this actuator, Belzile and colleagues also present a theoretical model based on a quasi-static analysis. Finally, they demonstrate the advantageous features of the new gripper in extensive simulation and real-world experiments.

A more bio-inspired approach to gripper design is introduced in the paper “A Variable Compliance Soft Gripper” by Giannaccini et al. The paper presents a novel tentacle-like gripper that has a large degree of variability in its shape. In addition to the shape, the authors also show how the compliance of the gripper can be adapted to the requirements of the task.

The remaining five papers focus on the algorithmic and theoretical modelling of grasping and manipulation. In “An Active Sensing Strategy for Contact Location without Tactile Sensors Using Robot Geometry and Kinematics”, Lee et al. describe new methods for locating contacts without relying on specialized sensors. A geometric estimate of the contact point between the robot actuator and the environment is used in conjunction with a control strategy in order to improve estimation accuracy. Experimental results show that the estimated forces are more accurate than those obtained using force-torque controllers.

The paper “Teaching Robots to Cooperate with Humans in Dynamic Manipulation Tasks Based on Multi-Modal Human-in-the-Loop Approach” by Peternel et al. focuses on compliant robotic manipulation in the presence of a human interaction partner. In this approach, a human demonstrator first tele-operates a robot arm using a motion capture setup in order to provide training data for a subsequent imitation learning step. However, in contrast to previous work on imitation learning, not only is the position of the human’s hand recorded, but also the human’s muscle activation. This information is, in turn, used to modulate the stiffness of the robot. The approach is verified in a cooperative wood-sawing task in which a human and a robot have to collaborate.

In “Autonomously Learning to Visually Detect Where Manipulation Will Succeed”, Nguyen et al. turn towards active learning of visual classifiers. They introduce a methodology for predicting successful locations for manipulation based on visual features. A robot autonomously generates training data by acting in its environment. The resulting data set is then processed via dimensionality reduction and Support Vector Machines. Experiments with a PR2 robot show how this methodology can be used by a robot to autonomously improve its success rates on everyday tasks, e.g., operating a light switch.

The paper “Object Search by Manipulation” by Dogar et al. addresses the question of how to search for an object in a cluttered environment. In such a situation, a robot needs to push occluding objects aside in order to find what it is looking for. Dogar and colleagues show that even a greedy approach to pushing objects away can be optimal under certain conditions. They also present a second algorithm which approaches polynomial time complexity and produces optimal plans in all situations. Both algorithms are evaluated on a real-world mobile robot platform. Additionally, the authors provide a Markov Decision Process formulation of the problem and present a partial proof of optimality.

Finally, the paper “Analyzing Dexterous Hands using a Parallel Robots Framework” by Borras et al. adapts and extends an existing mathematical framework from the parallel robotics literature to analyze underactuated robotic hands. The authors apply the framework to analyze a simple example hand. In this analysis, they show how underactuation design parameters, such as the transmission ratio and the stiffness constants of the finger joints, affect the size of the feasible workspace.

All twelve papers present significant developments in robot grasping and manipulation, and we hope that you will enjoy reading them as much as we did.