
Safely Grasping with Complex Dexterous Hands by Tactile Feedback

  • Jose Sanchez
  • Sven Schneider
  • Paul Plöger
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8992)

Abstract

Robots capable of assisting elderly people in their homes will become indispensable, since the world population is aging at an alarming rate. A crucial requirement for these robotic caregivers will be the ability to safely interact with humans, such as firmly grasping a human arm without applying excessive force. Minding this concern, we developed a reactive grasp that, using tactile sensors, monitors the pressure it exerts during manipulation. Our approach, inspired by human manipulation, employs an architecture based on different grasping phases that represent particular stages in a manipulation task. Within these phases, we implemented and composed simple components to interpret and react to the information obtained by the tactile sensors. Empirical results, using a Care-O-bot 3® with a Schunk Dexterous Hand (SDH-2), show that considering tactile information can reduce the force exerted on the objects significantly.

Keywords

Robot grasping · Domestic robot · Tactile feedback

1 Introduction

The share of the world's population over 65 years of age is projected to increase from eight percent to twelve percent by the year 2030 [1], which will further increase the already high demand for elderly care. To address this issue, global leaders held a UN World Assembly on Aging in 2002, which had, as one of its main topics, the objective of providing enabling and supportive environments for the elderly [2].

As a response to this workforce shortage in elderly care, the governments of nations such as Japan, the US, and Germany have been encouraging the introduction of robots in nursing homes. Recent efforts towards this goal can be seen in the work of Nagai et al., who analyze the challenges of introducing robots into these environments [3]. Pineau et al. introduced a robot assistant that autonomously guided the elderly and also reminded them of their schedules [4]. In Germany, the Fraunhofer Institute for Manufacturing Engineering and Automation (IPA) developed the Care-O-bot 3® with the objective of assisting elderly people in domestic environments [5]. The need for such domestic service robots can also be seen in the RoboCup@Home competition [6], especially the “Emergency situation” scenario, where robots deal with an accident in a home environment.

Despite the introduction of robots in domestic environments, their physical interaction with humans remains limited, a skill that is crucial for robots to eventually become reliable caregivers. The robots have to operate safely in these highly dynamic and uncertain environments. Manipulation under these conditions, while extremely complicated for robots, is performed effortlessly by humans. This proficiency depends highly on their tactile sensing abilities while executing manipulation tasks [7]. Based on this insight, together with the improvement of tactile sensing technology, robotics researchers have produced algorithms inspired by human tactile sensing [8], and have used tactile feedback to reactively adjust grasps [9, 10]. Although most of these approaches allow robots to interact physically in a domestic environment, their main concern is the manipulation of objects.

With the long-term goal of enabling a robot, namely a Care-O-bot 3®, to safely interact with humans (e.g. guiding people with vision impairment in a nursing home), we develop a grasping approach that considers the pressure exerted during manipulation to prevent the application of excessive grasping forces. The pressure information provided by the tactile sensors of the SDH-2 hand is used as a feedback signal to control the fingers’ motion and react to contacts. Aside from the tactile information, force-torque sensors of the manipulator are used to enable the detection of contacts between the robot’s arm and its environment. Furthermore, the high-level control of our implementation is based on the phases observed in human manipulation.

To validate our work, we recorded empirical data of grasps on a set of objects1 with distinct features such as hardness, shape, and size. Our approach effectively reduced the exerted force on the grasped objects by at least half of the original force. In a particular case, the applied force was reduced by a factor of 20, while still successfully executing the grasp. We also analyze the limitations of this approach and compare its performance to an open-loop grasp approach. An early version of this work was demonstrated during the RoboCup@Home German Open 2013 in Magdeburg and the RoboCup@Home World Championship 2013 in Eindhoven.

The remainder of the paper is organized as follows. Section 2 provides a brief description on human grasp and the involved tactile information, as well as current applications of tactile sensors in robotics. Section 3 describes our approach and the hardware it uses. In Sect. 4 the evaluation method is detailed and the results obtained are reported. A summary of the paper is presented in Sect. 5.

2 Related Work

2.1 Human Manipulation

Johansson and Flanagan noted the importance of tactile signals during manipulation by humans [7]. These tactile signals are conveyed by what Johansson denotes as tactile afferents2. They can end at skin level (type I) or, deeper, at the dermis (type II), and they can have fast- or slow-adapting responses. Thus, the tactile afferents used by the hand are: fast-adapting type I (FA-I), slow-adapting type I (SA-I), fast-adapting type II (FA-II), and slow-adapting type II (SA-II). Besides studying these tactile signals, they analyzed the phases involved in a manipulation task. The phases of a simple pick-and-place task, as described by Johansson in [7], are:
  1. Reach: Fingers make contact with the object and FA-I afferents are activated.

  2. Load: Enough force is applied to the object to obtain a firm grip. During this phase the SA-I and SA-II afferents are triggered.

  3. Lift: The object is lifted off the support surface and the FA-II afferents are activated.

  4. Hold: Forces are applied to the object to prevent its slippage. SA-I and SA-II afferents are activated in this phase.

  5. Replace: The object makes contact with the support surface and the FA-II afferents are triggered in this phase.

  6. Unload: The fingers release the object and FA-I afferents are activated.

2.2 Tactile Sensing in Robotics

Robots with tactile sensors have recently been used in object recognition [11], evaluation of grasp stability [12], and grasp adjustment. Our review of related work focuses on the latter application.

Hsiao et al. [9] apply corrective actions, using the tactile information of a PR2 gripper, to improve the location of the contacts. They define corrective actions that open the PR2 gripper when a contact is sensed and move the wrist in the direction of the sensed contact. This approach is able to compensate for position errors and yields better grasp stability. Prats et al. [10] also improved the performance of a robotic control system by adding tactile information as feedback; their previous approach only considered visual and force signals. The tactile feedback drives a controller that moves three degrees of freedom of a robotic arm to open a sliding door. Romano et al. [8] developed an approach, also inspired by human manipulation, that uses tactile sensors to design low-level signals and control loops that mimic the tactile afferents FA-I, SA-I, and FA-II. However, this implementation is specific to the PR2 gripper, a parallel-jaw gripper with only one actuator. We therefore seek to extend their work to control a gripper with more than one degree of freedom, e.g. an SDH-2. Compliant grasps have also been achieved without the use of tactile sensing [13].

3 Approach

3.1 Hardware

The SDH-2 is a servo-electric 3-finger gripping hand with seven degrees of freedom3 (DoF). The three fingers are actuated by two joints each, one is rooted to the hand’s palm and the other is in the middle of the finger. Both of these joints have a range of motion of -90\(^\circ \) to +90\(^\circ \), and enable the extension and flexion of the fingers. The seventh actuator allows two fingers to rotate simultaneously in opposite directions and generates an abduction or adduction movement. The range of motion of this actuator reaches 0\(^\circ \) to +90\(^\circ \). Figure 1 depicts how these motions are executed by both a human finger and an SDH finger. Moreover, each finger has two phalanges: a proximal phalanx which is closer to the palm, and a distal phalanx which is further away from the palm. Each phalanx is equipped with a tactile sensor matrix.
Fig. 1.

Flexion/extension and abduction/adduction motions of (a) the human hand [14] and (b) the SDH-2.

The six tactile sensors from Weiss Robotics [15] provide the contact information. This information is represented as a matrix that contains either \(6 \times 14\) tactile elements (tactels), for the proximal phalanges, or \(6 \times 13\) tactels for the distal phalanges. Each tactel produces an integer value between \(0\), when there is no pressure, and \(4095\), the maximum value, which represents a pressure of 250 kPa. The tactels in the proximal phalanges have identical sizes of \(3.4 \times 3.4\) mm. However, the sizes of the tactels in the distal phalanges vary slightly, because the tactile arrays are curved. For simplicity of the calculations, the size of all tactels is assumed to be the same. Figure 2 shows a diagram of a tactile sensor together with a visualization of a contact sample.
Fig. 2.

Left: A diagram of a tactile sensor, right: the pressure profile of a contact.

The SDH-2 is mounted to a KUKA Lightweight Robot with seven DoF that provides torque signals in each joint [16].
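As a concrete illustration of the sensor specification above, the mapping from a raw tactel reading to physical units can be sketched as follows. The linear mapping, the constants, and the helper name are our assumptions for illustration; this is not the vendor's API.

```python
# Sketch: convert a raw tactel reading to physical units. We assume the
# mapping from the 12-bit value (0..4095) to pressure (0..250 kPa) is
# linear; names and the helper itself are illustrative, not vendor code.
MAX_RAW = 4095                    # maximum tactel value
MAX_PRESSURE_KPA = 250.0          # pressure at the maximum value
TACTEL_AREA_M2 = 3.4e-3 * 3.4e-3  # 3.4 mm x 3.4 mm proximal tactel

def tactel_pressure_kpa(raw):
    """Linearly map a raw tactel value to pressure in kPa."""
    return (raw / MAX_RAW) * MAX_PRESSURE_KPA
```

Under this assumption, a saturated tactel (4095) maps to 250 kPa and a zero reading to 0 kPa.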

3.2 Tactile Signal Processing

Following the idea of considering the information produced by the tactile sensors as grayscale images, as proposed in [12], we process the tactile data online and offline. The online signal processing is used to monitor the pressure applied to a grasped object, and the offline signal processing is used to calculate the exerted force of the grasp. Both online and offline signal processing are detailed next.

Online Processing. The online signal processing uses the following algorithms:
  • detect_contacts: Given a number of tactile arrays, with their respective threshold values, it returns a Boolean array that indicates, for each tactile array, whether the contact threshold is exceeded. A \(0\) is assigned for no contact, and a \(1\) indicates that a tactile array has a contact.

  • detect_thresholds: This inverts the result of detect_contacts, i.e. a \(0\) in the returned array indicates a contact, while a \(1\) represents no contact.

Each element in the Boolean arrays controls the motion of a single phalanx. The output of detect_contacts selects which phalanges to move, so that only those phalanges that are in contact with the object are moved (it is used while the phalanges are not moving). The output of detect_thresholds is used to stop the movement of the phalanges that have reached the desired contact value.
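The two online algorithms can be sketched in a few lines. The list-based representation (one flat array of tactel values per phalanx, one scalar threshold per array) is our simplifying assumption; the actual implementation operates on the SDH-2 tactile matrices.

```python
def detect_contacts(arrays, thresholds):
    """Return one Boolean flag per tactile array: 1 if any tactel
    exceeds the array's contact threshold (contact), else 0."""
    return [1 if max(a) > t else 0 for a, t in zip(arrays, thresholds)]

def detect_thresholds(arrays, thresholds):
    """Inverted result of detect_contacts: 0 indicates a contact,
    1 represents no contact (i.e. a 'keep moving' flag per phalanx)."""
    return [1 - c for c in detect_contacts(arrays, thresholds)]

# Two flat tactile arrays; only the first exceeds its threshold of 100.
arrays = [[0, 150, 30], [0, 40, 20]]
print(detect_contacts(arrays, [100, 100]))    # prints [1, 0]
print(detect_thresholds(arrays, [100, 100]))  # prints [0, 1]
```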

Offline Processing. The offline processing is used after the hand has stopped moving. We applied the steps described by Li et al. [17], namely:
  • Threshold: For each tactile array, and its respective pressure threshold, it sets the tactels below this threshold to zero. This thresholding is optional and removes low contact values, which may be caused by pressure applied to an adjacent tactel: due to the rubber layer covering the sensor, pressure applied to a single tactel also activates its neighbors [18].

  • Label: Using the connected-component labeling algorithm with a 4-connectivity criterion [19], it labels the contact regions in each tactile array. The purpose of this step is to segment areas of contact for further classification (e.g. determining the largest or the strongest contact area).

  • Extract: This step differs from the one described in [17] by extracting the strongest contact region instead of the largest one. The strongest region is defined by its normal force. The normal force of each contact region is calculated with the equation \(F = P*A\) (where \(P\) represents the normalized pressure of a contact region, and \(A\) is the area of the region). The region with the highest normal force is selected as the strongest. The normalized pressure \(P\) is calculated as the ratio of the maximum pressure range (250 kPa) to the maximum displayed value (4095) times the average value of the active tactels (i.e. tactels with a contact value greater than zero). The area \(A\) is calculated by multiplying the individual area of a tactel by the number of active tactels. As noted in Sect. 3.1, the size of the tactels on the distal phalanges is assumed to be the same as on the proximal phalanges.

  • Locate: For each tactile array, this step calculates the centroid of the contact regions. The centroids are calculated, as suggested in [18], using the raw moment formula:
    $$\begin{aligned} M_{pq} = \mathop {\sum }\limits _{x} \mathop {\sum }\limits _{y} x^p y^q I(x, y) \end{aligned}$$
    (1)
    where \(x\) and \(y\) represent the coordinates of a tactel in a tactile array and \(I(x,y)\) is the intensity (i.e. pressure value) of tactel \(x,y\). The order of \(x\) and \(y\) is determined by \(p\) and \(q\), respectively. A centroid can then be calculated using:
    $$\begin{aligned} \begin{bmatrix} x_0 \\ y_0 \end{bmatrix} = \frac{1}{M_{00}} \begin{bmatrix} M_{10} \\ M_{01} \end{bmatrix} \end{aligned}$$
    (2)
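The offline steps above (threshold, label, extract, locate) can be illustrated with a small pure-Python sketch. The flood-fill labeling and all function names are our own choices for illustration, not the implementations from [17] or [19].

```python
# Illustrative sketch of the offline pipeline on a small tactile array.
MAX_RAW, MAX_KPA = 4095, 250.0
TACTEL_AREA = 3.4e-3 * 3.4e-3  # m^2, assumed equal for all tactels

def threshold(img, t):
    """Zero out tactels below the pressure threshold t."""
    return [[v if v >= t else 0 for v in row] for row in img]

def label(img):
    """Connected-component labeling with a 4-connectivity criterion
    (iterative flood fill); returns {label: [(row, col), ...]}."""
    rows, cols = len(img), len(img[0])
    seen = [[False] * cols for _ in range(rows)]
    regions, next_label = {}, 1
    for r in range(rows):
        for c in range(cols):
            if img[r][c] > 0 and not seen[r][c]:
                stack, cells = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    cells.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and img[ny][nx] > 0 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                regions[next_label] = cells
                next_label += 1
    return regions

def region_force(img, cells):
    """Normal force F = P * A of one contact region, in Newtons."""
    vals = [img[y][x] for y, x in cells]
    pressure_pa = (MAX_KPA / MAX_RAW) * (sum(vals) / len(vals)) * 1e3
    return pressure_pa * TACTEL_AREA * len(cells)

def strongest_region(img, regions):
    """Label of the region with the highest normal force."""
    return max(regions, key=lambda k: region_force(img, regions[k]))

def centroid(img):
    """Contact centroid (x0, y0) of a tactile array via raw moments."""
    m00 = m10 = m01 = 0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            m00 += v
            m10 += x * v
            m01 += y * v
    return (m10 / m00, m01 / m00)
```

For example, on a 3x4 array with two contact blobs, `strongest_region` selects the blob whose average pressure times area is larger, matching the Extract step described above.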

3.3 Architecture

Our architecture follows the human manipulation phases, as described by Johansson [7]. Figure 3 illustrates our architecture. Note that this architecture is based on a pick-and-place task. When grasping a human, the lift/hold and place phases will be different.
Fig. 3.

Phase-based architecture, inspired by human manipulation. The colored boxes indicate the phases implemented in this work.

The make_contact phases move the phalanges (first the proximal phalanges, followed by the distal phalanges) from their initial, open configuration, to a desired closed configuration. Each phalanx is controlled by the loop shown in Fig. 4, where \(P\) is the pressure of the highest-valued tactel in the tactile sensor, \(P_{ref}\) is the pressure threshold and \(\dot{\theta }\) is the desired joint velocity. The controller is a simple bang-bang controller that sets \(\dot{\theta } = 0\) when \(P_{err} \le 0\). Once all joint velocities have been set to zero the make_contact phase ends and the load phase is started. During the load phase each joint, except for the one generating abduction/adduction movements, is also actuated using the pressure control loop shown in Fig. 4. The make_contact phase shapes the hand according to the object, while the load phase regulates the pressure to achieve a stable grasp.
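The bang-bang controller described above amounts to a one-line decision per control cycle. This is a minimal sketch of the per-phalanx loop of Fig. 4; the constant closing velocity and the function name are illustrative assumptions.

```python
# Per-phalanx pressure loop of Fig. 4 (sketch): the joint closes at a
# fixed velocity until P_err = P_ref - P <= 0, then stops.
CLOSE_VELOCITY = 0.1  # rad/s, assumed constant closing speed

def pressure_control_step(p, p_ref):
    """One control cycle: return the commanded joint velocity."""
    p_err = p_ref - p
    return CLOSE_VELOCITY if p_err > 0 else 0.0

# The phase ends once every commanded joint velocity is zero.
readings = [120.0, 80.0, 150.0]  # highest tactel pressure per phalanx
cmds = [pressure_control_step(p, 100.0) for p in readings]
phase_done = all(v == 0.0 for v in cmds)  # False: one phalanx still moving
```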

When the load phase has finished, the object is grasped; the lift/hold phase then raises the arm to lift the object off the surface and holds the object during transportation to the placement pose. Next, the place phase moves the arm downward while using a force monitor to detect an abrupt change in the force exerted on the hand, which indicates a contact between the object and a surface. This allows the object to be placed safely on the surface. Finally, the unload phase opens the hand to release the object.
Fig. 4.

Control loop using tactile feedback.
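The phase ordering above can be sketched as a simple sequential pipeline. The phase names follow Fig. 3, but the handler interface (one callable per phase returning success) is our illustrative assumption, not the actual component implementation.

```python
# Sketch of the phase-based architecture as an ordered pipeline. Each
# phase is a callable returning True on success; on failure the grasp
# aborts and reports the failing phase.
PHASES = ["make_contact_proximal", "make_contact_distal",
          "load", "lift_hold", "place", "unload"]

def run_grasp(handlers):
    """Run the phases in order; return (success, failed_phase)."""
    for phase in PHASES:
        if not handlers[phase]():
            return False, phase
    return True, None

ok, failed = run_grasp({name: (lambda: True) for name in PHASES})
# ok is True, failed is None
```

Because each phase is a replaceable callable, a component (e.g. the load-phase controller) can be swapped without changing the pipeline, mirroring the separation of concerns described below.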

Each of these phases is composed of simpler components, which can be replaced without modifying the overall structure of a phase, thus separating concerns as described in [20]. These components implement algorithms that perform computations and communicate their outputs by publishing messages through ROS topics [21].

4 Experimental Evaluation

To evaluate the performance of our reactive grasp we compared it to the current approach, an open-loop grasp, which does not consider the grasp force as feedback. First, we describe the materials involved in the experiments, then the procedure is detailed. Finally, we present the results obtained from the experimentation.

4.1 Materials

The platform used to carry out the experimental evaluation was the Care-O-bot 3 [5], with a KUKA Lightweight Robot (LWR4) [16]. The end-effector located at the end of the LWR4 is an SDH-2. Furthermore, 18 objects were selected to represent the following three features:
  • Hardness: The objects were regarded as deformable (D) when the open-loop grasp would either leave a mark on the object, or change its shape or size. If no mark or modification was observed, the object was labeled as non-deformable (N).

  • Shape: The shape of an object was considered to be one of the following: prismatic (Pr), spherical (Sp) and cylindrical (Cy).

  • Size: An object was classified as small (S), medium (M), or large (L).

A sample of the selected objects can be seen in Fig. 5, and their classification is shown in Fig. 6.

Fig. 5.

A sample of the test objects. Missing objects in the figure are: dictionary, orange, melon and soda can (full).

Fig. 6.

Categorization of the test objects, according to their hardness (D/N), shape (Pr/Sp/Cy) and size (S/M/L).

4.2 Procedure

For each test object, the robot’s arm started in a predefined pose (see Fig. 7a). Both approaches were executed three times4, for each of the six locations shown in Fig. 7b. Spherical and cylindrical objects were centered on the marked locations, while prismatic objects were placed on the marked locations along their edges. These locations were chosen to cover a range of positions within the grasp, e.g., close/away from the wrist and close to the fingers/thumb.
Fig. 7.

Experimental procedure.

4.3 Results

The results obtained by the experiments conducted on 18 different objects, using both the open-loop grasp and the reactive grasp, are summarized next. Table 1 shows the success rate of both approaches. Based on the performance of the grasps, two aspects were further analyzed: the grasp force applied by each grasp, and the cause of each failed grasp. The grasp forces of the objects that had a 100 % success rate with both approaches are displayed in Table 2. The force on each object is the average of all trials (i.e. 18 trials). This average represents the sum of the forces on each tactile sensor.

To conclude this section, the 84 failed grasps along with their causes are presented in Table 3. The majority of the reactive approach failures (i.e. no grasp) were caused by the phalanges pushing the objects out of the grasp (27 failed grasps), and by the phalanges not receiving their required stop commands, either because the specified joint limits or the desired grasp force could not be reached (15 failed grasps). The overall success rate was 78.64 % for the reactive grasp approach and 94.17 % for the open-loop grasp approach.
Table 1.

Success rate of both approaches, the open-loop grasp (OLG) and the reactive grasp (RG).

| Object | Trials | OLG success | OLG rate | RG success | RG rate |
|---|---|---|---|---|---|
| Chocolate milk | 18 | 18 | 100 % | 18 | 100 % |
| Bathroom cleaner | 18 | 18 | 100 % | 15 | 83.3 % |
| Milk carton | 18 | 18 | 100 % | 18 | 100 % |
| Small ball | 18 | 18 | 100 % | 17 | 94.4 % |
| Medium ball | 18 | 18 | 100 % | 16 | 88.9 % |
| Large ball | 18 | 18 | 100 % | 18 | 100 % |
| Soda can (empty) | 18 | 18 | 100 % | 18 | 100 % |
| Chips can | 18 | 18 | 100 % | 17 | 94.4 % |
| Soup | 18 | 18 | 100 % | 15 | 83.3 % |
| Alloy profile\(^a\) | 3 | 3 | 100 % | 0 | 0 % |
| Dictionary | 18 | 18 | 100 % | 18 | 100 % |
| Dried coffee | 18 | 18 | 100 % | 5 | 27.8 % |
| Candle | 18 | 5 | 27.8 % | 5 | 27.8 % |
| Orange | 18 | 13 | 72.2 % | 10 | 55.6 % |
| Melon | 18 | 18 | 100 % | 3 | 16.7 % |
| Soda can (full) | 18 | 18 | 100 % | 16 | 88.9 % |
| Coffee bottle | 18 | 18 | 100 % | 16 | 88.9 % |
| Noodles | 18 | 18 | 100 % | 18 | 100 % |

\({^a}\)The alloy profile was tested only three times, in position 1, for each grasp approach. The other positions were not tested, because in those positions the alloy profile was damaging the tactile sensors.

Table 2.

Grasp forces applied by both approaches, in Newtons.

| Object | OLG | RG | Force reduction |
|---|---|---|---|
| Chocolate milk | 12.3 | 6.8 | 44.9 % |
| Milk carton | 10.5 | 0.5 | 95.4 % |
| Dictionary | 23.5 | 6.6 | 71.7 % |
| Noodles | 32.1 | 8.9 | 72.3 % |
| Soda (empty) | 18.6 | 5.5 | 70.7 % |
| Large ball | 43.4 | 7.1 | 83.6 % |

Table 3.

Categorization of failures.

| Failure | OLG | RG |
|---|---|---|
| No grasp | 0 (0 %) | 42 (63.6 %) |
| No lift | 6 (33.3 %) | 4 (6.1 %) |
| Rotated | 0 (0 %) | 5 (7.6 %) |
| Slip | 12 (66.7 %) | 15 (22.7 %) |
| Total | 18 | 66 |

5 Conclusions and Future Work

This paper presented a software architecture that emulates the human manipulation phases, together with an approach that significantly reduces the grasp force through the use of tactile feedback. The approach was specifically tuned for the SDH-2. The pressure information of all experiments was recorded and made available at https://github.com/jsanch2s/tactile_info. However, the success rate of our reactive grasp approach was not as high as that of the open-loop grasp approach (78.64 % vs 94.17 %), mainly due to the following limitations:
  • The tactile sensors do not completely cover the fingers, causing the reactive grasp to not reach the desired contact values.

  • Low sensitivity of the tactile sensors hinders the ability to detect light contacts (this accounts for 40 % of the failures). Integrating the signals of a force-torque sensor, as demonstrated in [22], could improve contact detection.

  • Insufficient grasp force, caused by low contact threshold values, allowed objects to slip or rotate within the grasp.

Future work will focus on the implementation of the offline signal processing and on the improvement of individual components: detecting contacts that the tactile sensors cannot (e.g. using force-torque sensors), improving the location of contact points using arm motions, as Hsiao et al. demonstrated in [9], and detecting slippage by analyzing temporal readings from the tactile sensors. A video showing the capabilities of our reactive grasp is available at https://www.youtube.com/watch?v=fJoSDVKSdm0. The video shows a slowed-down execution for safety reasons.

Footnotes

  1. Although the motivation of this work is to ultimately grasp humans, due to safety reasons, we evaluated our approach on objects.

  2. A tactile afferent is a conduit that conveys signals to the brain.

  3.

  4. The reactive approach was partially executed. The phases executed were: make_contact (proximal/distal), load, and lift.


Acknowledgments

We gratefully acknowledge the support by the b-it Bonn-Aachen International Center for Information Technology.

References

  1. Dobriansky, P.J., Suzman, R.M., Hodes, R.J.: Why population aging matters: a global perspective. U.S. Department of State, U.S. Department of Health and Human Services, National Institute on Aging, NIH, Washington, DC, p. 132 (2007)
  2. Kinsella, K.G., Phillips, D.R.: Global aging: the challenge of success. Popul. Bull. 60(1), 1–44 (2005)
  3. Nagai, Y., Tanioka, T., Fuji, S., Yasuhara, Y., Sakamaki, S., Taoka, N., Locsin, R.C., Ren, F., Matsumoto, K.: Needs and challenges of care robots in nursing care setting: a literature review. In: Proceedings of IEEE International Conference on Natural Language Processing and Knowledge Engineering (NLP-KE), pp. 1–4 (2010)
  4. Pineau, J., Montemerlo, M., Pollack, M., Roy, N., Thrun, S.: Towards robotic assistants in nursing homes: challenges and results. Robot. Auton. Syst. 42(3–4), 271–281 (2003)
  5. Graf, B., Reiser, U., Hägele, M., Mauz, K., Klein, P.: Robotic home assistant Care-O-bot® 3 - product vision and innovation platform. In: Advanced Robotics and its Social Impacts (ARSO), pp. 139–144 (2009)
  6. Chen, K., Holz, D., Rascon, C., Ruiz del Solar, J., Shantia, A., Sugiura, K., Stückler, J., Wachsmuth, S.: RoboCup@Home 2014: Rules and Regulations (2014). http://www.robocupathome.org/rules/2014_rulebook.pdf
  7. Johansson, R.S., Flanagan, J.R.: Coding and use of tactile signals from the fingertips in object manipulation tasks. Nat. Rev. Neurosci. 10(5), 345–359 (2009)
  8. Romano, J.M., Hsiao, K., Niemeyer, G., Chitta, S., Kuchenbecker, K.J.: Human-inspired robotic grasp control with tactile sensing. IEEE Trans. Robot. 27(6), 1067–1079 (2011)
  9. Hsiao, K., Chitta, S., Ciocarlie, M., Jones, E.G.: Contact-reactive grasping of objects with partial shape information. In: Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1228–1235 (2010)
  10. Prats, M., Sanz, P.J., del Pobil, A.P.: Vision-tactile-force integration and robot physical interaction. In: Proceedings of IEEE International Conference on Robotics and Automation (ICRA), pp. 3975–3980 (2009)
  11. Montano, A., Suarez, R.: Object shape reconstruction based on the object manipulation. In: Proceedings of International Conference on Advanced Robotics (ICAR), pp. 1–6 (2013)
  12. Bekiroglu, Y., Huebner, K., Kragic, D.: Integrating grasp planning with online stability assessment using tactile sensing. In: Proceedings of IEEE International Conference on Robotics and Automation (ICRA), pp. 4750–4755 (2011)
  13. Rosales, C., Suárez, R., Gabiccini, M., Bicchi, A.: On the synthesis of feasible and prehensile robotic grasps. In: Proceedings of IEEE International Conference on Robotics and Automation (ICRA), pp. 550–556 (2012)
  14. Li, K., Chen, I.M., Yeo, S.H., Lim, C.K.: Development of finger-motion capturing device based on optical linear encoder. J. Rehabil. Res. Dev. 48(1), 69–82 (2011)
  15. Weiß, K., Wörn, H.: The working principle of resistive tactile sensor cells. In: Proceedings of IEEE International Conference on Mechatronics and Automation, vol. 1, pp. 471–476 (2005)
  16. Bischoff, R., Kurth, J., Schreiber, G., Koeppe, R., Albu-Schäffer, A., Beyer, A., Eiberger, O.: The KUKA-DLR lightweight robot arm - a new reference platform for robotics research and manufacturing. In: Proceedings of International Symposium on Robotics (ISR), pp. 1–8 (2010)
  17. Li, Q., Elbrechter, C., Haschke, R., Ritter, H.: Integrating vision, haptics and proprioception into a feedback controller for in-hand manipulation of unknown objects. In: Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2466–2471 (2013)
  18. Nagatani, T., Noda, A., Hirai, S.: What can be inferred from a tactile arrayed sensor in autonomous in-hand manipulation? In: Proceedings of IEEE International Conference on Automation Science and Engineering (CASE), pp. 461–468 (2012)
  19. Di Stefano, L., Bulgarelli, A.: A simple and efficient connected components labeling algorithm. In: Proceedings of International Conference on Image Analysis and Processing, pp. 322–327 (1999)
  20. Bruyninckx, H., Klotzbücher, M., Hochgeschwender, N., Kraetzschmar, G., Gherardi, L., Brugali, D.: The BRICS component model: a model-based development paradigm for complex robotics software systems. In: Proceedings of the 28th Annual ACM Symposium on Applied Computing, pp. 1758–1764 (2013)
  21. Quigley, M., Conley, K., Gerkey, B., Faust, J., Foote, T., Leibs, J., Wheeler, R., Ng, A.Y.: ROS: an open-source robot operating system. In: ICRA Workshop on Open Source Software, vol. 3(3.2) (2009)
  22. Pastor, P., Righetti, L., Kalakrishnan, M., Schaal, S.: Online movement adaptation based on previous sensor experiences. In: Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 365–371 (2011)

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Department of Computer Science, Bonn-Rhein-Sieg University of Applied Sciences, Sankt Augustin, Germany
