1 Introduction

Historically, humans have often tried to construct machines resembling living beings, either human-like or closer to certain animals. While these early approaches were rather artistic, the upsurge of industrial robotics later began with low Degree-Of-Freedom (DOF) manipulator arms such as the PUMA 560 by Unimation. The idea behind these early machines was to relieve human operators of certain fabrication tasks, traditionally the dull, dirty and dangerous ones. While the substitution of human workers in such production environments has already been achieved to a great extent, novel fields of robot usage are arising. These fields are no longer easily covered by pre-programming routines into robot arms. Novel robot tasks include dynamic operation and movement in flexible production lines, human assistance in service and medical areas, in rehabilitation as well as in elderly care and rescue (compare Fig. 1). These tasks require mobility as well as manipulation capabilities which former robotic systems could not provide. New approaches in path-planning and control of these high-DOF systems are subject of current research, as are human-robot-interaction concepts and new physical sensors which provide robots with sensing capabilities often more sophisticated than those of human beings. It is no longer possible to draw a clear line between the research areas involved, e.g., path-planning, motion control, manipulator and grasp control.

Fig. 1. Mobile manipulator for rescue purposes, developed and built at Carinthia University of Applied Sciences (CUAS) by the RoboCup Rescue Team CUAS-RRR

In the following, we give an insight into recent developments with respect to hardware construction as well as motion, control and sensing concepts. First, we show construction concepts for mobile manipulator platforms. Then we outline their necessary physical capabilities (moving, manipulating and grasping, sensing, interfacing and interacting). Next, we discuss the systems’ intelligence enabling them to understand humans and the environment, move smartly and do the right thing.

Then we focus on the RoboCup, which was established to publicly promote robotics research by posing forefront challenges in the field that, at the same time, should have significant social impact. As such, it is an optimal setting for active groups to present new ideas and recent developments. Consequently, we include it here with a special emphasis on rescue (RoboCup Rescue) and industrial employment (RoboCup@Work) of mobile manipulators.

2 Physical capabilities (hardware)

The combination of a mobile robot platform with a manipulator (together called a mobile manipulator) quickly results in a DOF equal to or greater than nine, and such a system must therefore be described as kinematically redundant. Modern mobile manipulators are often equipped with collaborative manipulation arms, which are specifically designed to work alongside human workers. With some additional restrictions in terms of speed, force and torque, these robots can be gathered under the term collaborative mobile manipulators (see ISO/TS 15066 and ISO 10218). Figure 5 shows an example of a collaborative mobile manipulator with nine DOF which can be used in an industrial environment. In Fig. 1, a mobile rescue robot with 12 DOF is shown, which is further described in the following as a design example for the RoboCup Rescue League. In [10], an overview of mobile manipulators used in industrial environments is given. Different search and rescue robots are described in [21, 95] as well as in [11] (Part F, Sect. 50).

2.1 Moving

The method used to move a mobile robot platform depends on its main purpose of use; mostly, these mobile robots are classified into wheeled, tracked and legged types. Here, two different driving methods (wheeled and tracked) are described, including their advantages and disadvantages.

2.1.1 Tracked vehicle with additional linkages

When it comes to tasks in difficult, unstructured environments (e.g. damaged buildings or collapsed factories) where various obstacles prevent safe and rapid movement, tracked vehicles have advantages in terms of stable movement and system robustness. Additional linkages with ground adaption (often called flippers or flipper tracks) are used to overcome obstacles and to handle uneven ground. The mobile rescue robot T.R.U.D.I. (Third Robot for Urban Disaster Intervention), developed and built at CUAS [38] (Fig. 1), uses four active flipper tracks to extend its movement capabilities. In addition, a special gearbox was developed which contains all mechanical parts needed to drive the main track, drive the flipper track and rotate the flipper track, using three electric brushless DC (BLDC) motors for each flipper/track combination. The drive train consists of four of these gearboxes, one at each corner of the chassis.

Figure 2 shows a schematic drawing of the mobile base including the main tracks and the flippers (\(f_{1} \dots f_{4}\)). This mobile platform is a non-holonomic system: the velocity in the plane is constrained while the position is not. The system can only move in the body x-direction with velocity \(v_{R}\) and rotate around the body z-axis with angular velocity \(\dot{\phi }_{R}\).

Fig. 2. Schematic drawing of the mobile rescue robot T.R.U.D.I.

Movement in the plane is achieved by the two main tracks and follows the rules of a differential drive mechanism where the forward velocity kinematics for a rotation with \(\dot{\phi _{R}}\) around the body fixed z-axis can be denoted as

$$ \dot{\phi _{R}}=\frac{v_{2}-v_{1}}{b} $$
(1)

where \(v_{i}\) are the circumferential track velocities, calculated as \(v_{1}=\omega _{1} R\) and \(v_{2}=\omega _{2} R\), \(R\) is the sprocket (wheel) radius, \(\omega _{i}\) is the corresponding rotational velocity of the sprocket and \(b\) denotes the track base (wheelbase). The linear speed \(v_{R}\) along the body-fixed x-axis equals the circumferential velocities of both main tracks if they are equal. The inverse kinematics to control the tracks’ movement for the desired body velocities \(v_{R}\) and \(\dot{\phi }_{R}\), where \(v_{R}\) is the desired velocity in the body x-direction, can be described by

$$\begin{aligned} \omega _{1} =&\frac{1}{R}(v_{R}-\frac{\dot{\phi _{R}}b}{2}) \end{aligned}$$
(2)
$$\begin{aligned} \omega _{2} =&\frac{1}{R}(v_{R}+\frac{\dot{\phi _{R}}b}{2}). \end{aligned}$$
(3)
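To illustrate Eqs. (1)-(3), the following sketch implements the forward and inverse track kinematics; the sprocket radius and track base values are placeholder assumptions, not T.R.U.D.I.'s actual parameters, and the averaging of the two track speeds in the forward direction is the usual differential-drive relation.

```python
import numpy as np

# Minimal sketch of the differential-drive kinematics of Eqs. (1)-(3).
R = 0.09   # sprocket radius in m (assumed example value)
b = 0.50   # track base (distance between the two main tracks) in m (assumed)

def forward_kinematics(omega_1, omega_2):
    """Body velocities from the two sprocket speeds, cf. Eq. (1)."""
    v1, v2 = omega_1 * R, omega_2 * R   # circumferential track speeds
    v_R = 0.5 * (v1 + v2)               # linear speed along the body x-axis
    phi_dot_R = (v2 - v1) / b           # yaw rate around the body z-axis
    return v_R, phi_dot_R

def inverse_kinematics(v_R, phi_dot_R):
    """Sprocket speeds for desired body velocities, Eqs. (2)-(3)."""
    omega_1 = (v_R - phi_dot_R * b / 2.0) / R
    omega_2 = (v_R + phi_dot_R * b / 2.0) / R
    return omega_1, omega_2

# e.g. drive at 0.3 m/s while turning at 0.2 rad/s
print(inverse_kinematics(0.3, 0.2))
```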

Due to the mechanical construction, the speed of the flipper tracks on each side (flippers \(f_{1}\), \(f_{3}\) on the left and \(f_{2}\), \(f_{4}\) on the right-hand side) is directly linked to the speed of the corresponding main track. Since the sprockets of the flipper tracks and of the main tracks have the same diameter and both tracks run at the same velocity, each flipper/main track pair can be treated as one long track (marked as a red rectangle in Fig. 2). Figure 3 shows a view of the backside of the mobile manipulator T.R.U.D.I. where this combined track can be seen in detail. For simplicity, the slip problem of tracked vehicles as described in [23] and [57] was neglected.

Fig. 3. Mobile rescue robot T.R.U.D.I. in a test lane at the RoboCup WorldCup 2017

The angle \(\phi _{fi}\) with \(i=1\dots 4\) of each flipper can be controlled individually. Due to these additional linkage structures, the mobile platform extends its positional DOF to five. With a certain combination of flipper rotations, the platform can be moved as shown in the kinematic chain of Fig. 1a.

The joints \(P_{1}\) and \(P_{2}\) can be assigned to the differential drive system, while the joints \(P_{3}\) and \(P_{4}\) swivel the whole platform around the \(x_{R}\) and \(y_{R}\) axes. Moving all four flippers downwards by the same angle elevates the whole system; this translational movement in the \(z_{R}\) direction can be seen as joint \(P_{5}\), as also shown in Fig. 2.

2.1.2 Omnidirectional mecanum wheels

The requirements for the movement capabilities of a mobile robot used in an industrial environment are entirely different from the scenario described before. The floor in factories is commonly rather smooth, obstacle-free and free of steps. Consequently, these environments can be considered structured and quasi well-known. This allows a simpler construction of the drive train of mobile platforms and simplifies autonomous navigation and path planning of mobile manipulators. Taking advantage of the well-known environment, holonomic mobile platforms which can move in all directions in the plane (omnidirectional) are often used.

The usage of four actuated Mecanum wheels provides a linear velocity in the \(x_{R}\) and \(y_{R}\) directions as well as an angular velocity around the \(z_{R}\) axis, as shown in Fig. 4. Such mobile platforms with an omnidirectional drive mechanism (Fig. 5) can therefore be represented using three DOF. In the kinematic chain of the mobile manipulator KAIROS (Fig. 5a), the joints \(P_{1}\), \(P_{2}\) and \(P_{3}\) represent these DOF.

Fig. 4. Schematic drawing of the mobile platform SUMMIT XL-Steel used as base for the mobile manipulator KAIROS

Fig. 5. Mobile manipulator KAIROS for industrial usage, built by the company Robotnik and developed by the working group MEKT at the Campus Technik Lienz in Austria (Leopold Franz University Innsbruck)

Detailed explanations of Mecanum wheels, as well as modeling and motion control algorithms for Mecanum-wheel-driven mobile manipulators, are given by Röhring et al. in [72], by Lin et al. in [51] and by Taheri et al. in [81]. The simplified inverse kinematics on velocity level is given by

$$ \begin{bmatrix} \omega _{1} \\ \omega _{2} \\ \omega _{3} \\ \omega _{4} \end{bmatrix} = \frac{1}{R} \begin{bmatrix} 1& -1& -(l_{y}+l_{x}) \\ 1 & 1& (l_{y}+l_{x}) \\ 1 & 1 &-(l_{y}+l_{x}) \\ 1&-1&(l_{y}+l_{x})\end{bmatrix} \begin{bmatrix} \dot{x}_{R} \\ \dot{y}_{R} \\ \dot{\theta }_{R} \end{bmatrix} $$
(4)

with \(\omega _{i}\) the rotational velocity of the i-th wheel, \(l_{x}\) and \(l_{y}\) half of the robot length and width, respectively, and \(R\) the wheel radius. For autonomous exploration or path following, the velocities in the robot frame can be transformed into the world frame:

$$\begin{aligned} \dot{\mathbf{x}}_{W}=\mathbf{R}(\theta _{R})^{-1}\dot{\mathbf{x}}_{R} \end{aligned}$$
(5)

with \(\dot{\mathbf{x}}_{W}=[\dot{x}_{W},\;\dot{y}_{W},\;\dot{\theta }_{R}]^{T}\), \(\dot{\mathbf{x}}_{R}=[\dot{x}_{R},\;\dot{y}_{R},\;\dot{\theta }_{R}]^{T}\) and

$$ \mathbf{R}(\theta _{R})= \begin{bmatrix} \cos (\theta _{R})&\sin (\theta _{R})& 0 \\ -\sin (\theta _{R})& \cos (\theta _{R})& 0 \\ 0&0&1 \end{bmatrix} . $$
(6)
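Equations (4)-(6) can be turned into a short routine as follows; the wheel radius and the half-lengths \(l_{x}\), \(l_{y}\) are assumed example values rather than the SUMMIT XL-Steel specification, and the inverse of the rotation matrix is taken as its transpose.

```python
import numpy as np

# Sketch of the Mecanum-wheel kinematics of Eqs. (4)-(6); geometry is assumed.
R  = 0.127   # wheel radius in m (assumed)
lx = 0.25    # half of the robot length in m (assumed)
ly = 0.20    # half of the robot width in m (assumed)

def wheel_speeds(x_dot_R, y_dot_R, theta_dot_R):
    """Inverse kinematics on velocity level, Eq. (4)."""
    J = np.array([[1, -1, -(ly + lx)],
                  [1,  1,  (ly + lx)],
                  [1,  1, -(ly + lx)],
                  [1, -1,  (ly + lx)]])
    return (1.0 / R) * J @ np.array([x_dot_R, y_dot_R, theta_dot_R])

def robot_to_world(x_dot_R, y_dot_R, theta_dot_R, theta_R):
    """Transform body velocities into the world frame, Eqs. (5)-(6)."""
    c, s = np.cos(theta_R), np.sin(theta_R)
    R_theta = np.array([[ c, s, 0],
                        [-s, c, 0],
                        [ 0, 0, 1]])
    # the inverse of a rotation matrix equals its transpose
    return R_theta.T @ np.array([x_dot_R, y_dot_R, theta_dot_R])
```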

2.2 Manipulating and grasping

Manipulator arms to be mounted on moving robotic platforms need to exhibit a high payload-to-weight ratio. This has been achieved in some recent manipulators such as the KUKA LWR IIWA [8], the TM series by Techman [82] or the Universal Robots cobot series [71]. Research approaches in this context show unified designs for lightweight robot arms [96] and also more specific work in the fields of medicine, e.g. the da Vinci system (compare [44]), for the employment in space, e.g. the NASA ROBONAUT [6] or JPL arms (compare, e.g., [12]), or anti-terror missions such as the devices developed by the Foster-Miller and iRobot companies (see also, e.g. [58]).

With respect to industry, manipulation and grasping employed in robotic devices exploit a rather low degree of dexterity; most of the systems present in production are still used for monotonous tasks which do not require much grasping and manipulation intelligence (compare also [30]). The more our industrial settings change, i.e. demanding more flexible production lines and highly agile processes, the higher the requirements regarding robot manipulators become. Such problems are addressed by commercially available robotic hands such as the Shadow hand [16] and the DLR hand [9], and in research, where a recent focus is also on soft devices for human-robot interaction [24]. Currently, specialized grippers are developed specifically for each application, e.g. [28]; approaches based on biomimetic, soft devices (e.g., [37]), which naturally conform to a large number of different objects, could lead to a more efficient use of grippers. Modular, highly redundant manipulators which can change their configuration with respect to the assigned task are presented by Pieber et al. [67]. Manipulation in a mobile setting (see Fig. 6) is even more complex, and different methods have to be developed to address flexibility [88].

Fig. 6. Husky platform equipped with two UR5 manipulator arms in a grasping scenario where the moving platform starts, approaches a soup can on a support, grasps it and picks it up using reinforcement learning. Figure adapted from [88]

2.3 Sensing

Perceiving the environment is a crucial capability of manipulators in general and of mobile manipulators in particular. It is important in a variety of fields where robots physically interact with the environment as well as with human beings: in human care and assistance applications [30, 62], in industrial settings as cobots (e.g., with names such as Walt [22]), in service fields [59] and also in novel public mobility applications [36].

In order to perceive their environment properly, mobile robots and especially also mobile manipulators need to be equipped with the corresponding sensors. These sensors may be placed on different parts of the platform depending on the purpose they fulfill:

2.3.1 Elevated sensors

Such a setup resembles a concept known from biology: eyes. In the case of humans, who walk upright, the eyes are elevated sensors at the top of the body in order to get the best possible impression of the surroundings. In mobile robotics, such a concept can enable, e.g., strategic ahead-of-path planning and obstacle avoidance [50]. Important aspects in this context are the choice of proper sensor hardware, the placement of the sensors and intelligent task planning: e.g. while a manipulation task is ongoing, these sensors can be used for planning the next actions after completing the current task. These elevated sensors are often optical systems such as light detection and ranging (LIDAR) [20], stereo cameras and RGB-D cameras [61].

2.3.2 Touch sensors

Due to the limited capabilities of cameras (e.g., with respect to the field of view: occlusion, bad lighting conditions, mounting and integration issues; with respect to the object properties: temperature, texture, material) and their limitations in highly dynamic processes, physical sensors are often preferred over vision-based systems. Additionally, they are preferable in tasks where properties of the manipulated object (surface texture, temperature, fragility, stiffness, etc.) are of major importance. Research on visuo-tactile object perception has been shown recently; however, purely physical approaches show promising results in terms of resolution and bandwidth. Based on electrical sensors (see also [55]), e.g., employing capacitive sensing electrodes [74], such approaches can easily be fabricated using rapid-prototyping technology [26], can enable exteroceptive as well as proprioceptive sensing of robotic devices and have been demonstrated previously for the walking of humanoid robots [27].

2.3.3 Base sensors

For holonomic platforms, such sensors often have to cover the whole proximity region of the base platform. This can be achieved using, e.g., laser rangefinders, LIDAR, Time-of-Flight cameras and radar [5]; ultrasonic-based principles are in use as well. These sensors often need to be mounted at strategic positions on the mobile platform in order to optimally cover a field of view of \(f_{V} = 360^{\circ}\). They are frequently used for map building in an unknown environment (such as is necessary in a dynamic industrial context or in the RoboCup Rescue League), which is commonly known as Simultaneous Localization and Mapping (SLAM). The quality (accuracy of the followed trajectory) of the SLAM depends strongly on the hardware used [29]. Depth estimation (such as enabled by, e.g., LIDAR) can significantly improve the quality of the SLAM [70]; this, however, also comes with an increased computational complexity.
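As a minimal, purely illustrative sketch of how such base sensors feed a grid-based SLAM front end, the following routine projects a planar range scan into the world frame using a given pose estimate; the function name and data layout are assumptions, not part of any specific SLAM implementation.

```python
import numpy as np

def scan_to_world(ranges, angles, pose):
    """Project a 2D range scan into the world frame.
    ranges/angles: 1D arrays describing the scan in the robot frame;
    pose: (x_W, y_W, theta_R) current pose estimate of the platform."""
    x_W, y_W, theta_R = pose
    # beam end points in the robot (sensor) frame
    px = ranges * np.cos(angles)
    py = ranges * np.sin(angles)
    # rotate and translate into the world frame
    c, s = np.cos(theta_R), np.sin(theta_R)
    return np.column_stack((c * px - s * py + x_W,
                            s * px + c * py + y_W))
```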

2.4 Interfacing and interacting

Another aspect is the interfacing of robots with their environment; this is especially important when teaching and controlling the robot system in a collaboration task. Such interfaces can either be established using touchpad technology [25, 86] or with contactless gesture-based approaches [78]. In the latter, it is often mandatory to find a common language which is, on the one hand, intuitive for the human operator and, on the other hand, easily interpretable and manageable in terms of signal processing [85].

The terms human-robot interaction and human-robot collaboration describe all those application scenarios and the various methods used to implement them. In most cases, especially in industry, it is important to provide a safe environment for humans. In this respect, strong research efforts have lately been put into qualifying and quantifying the injury and level of pain caused by an impact with a certain energy at a certain location on the body [33, 34]. Apart from the application of ISO/TS 15066 [56], more effort is necessary in the definition of standard testing methods and standardized test setups and hardware. Currently available developments which enable safe human-robot co-working are airbag systems [90] and robotic skins [15, 53], but also control strategies for intuitive hand guidance [92] based on force/torque sensing.

3 Intelligence (software)

The software implementation of a mobile manipulator constitutes the system’s intelligence: it connects the hardware and the corresponding sensor readings in a sense-plan-act loop and consequently acts as the brain of the whole system. Important tasks here are (a schematic sketch of such a loop follows the list):

  • Interfacing: communicating with the sensor and actuator hardware.

  • Perception: human and object detection algorithms, mapping.

  • Knowledge-base: gather information about the robot, the environment, objects.

  • Planning: path and task planning components.

  • Control: actuate the system in order to follow the planned movement as accurately as possible.
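A schematic sketch of such a sense-plan-act loop is given below; all component interfaces (sensors, actuators, perception, knowledge base, planner, controller) are hypothetical placeholders and do not correspond to a particular middleware.

```python
import time

def sense_plan_act_loop(sensors, actuators, perception, knowledge_base,
                        planner, controller, rate_hz=50.0):
    """Schematic sense-plan-act loop; all arguments are hypothetical components."""
    dt = 1.0 / rate_hz
    while True:
        readings = sensors.read()                 # interfacing: sensor hardware
        world = perception.update(readings)       # perception: humans, objects, map
        knowledge_base.update(world)              # knowledge base: robot and environment state
        plan = planner.plan(knowledge_base)       # planning: path and task planning
        command = controller.track(plan, world)   # control: follow the planned motion
        actuators.write(command)                  # interfacing: actuator hardware
        time.sleep(dt)
```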

3.1 Understanding humans and the environment

The authors of [77] already claimed that common metrics are important in human-robot interaction, and safety standards have been implemented since then. Vision-based distance monitoring has recently been presented in [80], and in [73] a strategy for human-robot collaboration is presented where the signals from 378 sensors of an artificial skin are used in a constraint-based optimization task to control the robot and avoid collisions. Delicate grasp force control for manipulation, based on physical sensors for a high-DOF robot hand (Fig. 7), has been shown in [18]. While in the previous approaches human-robot physical contact is monitored, in [17] the authors establish an interaction scheme based on gaze recognition of the human co-operator. Even the role of robot telepathy is evaluated in [93]. In this field of robot intelligence, common metrics and a standardized approach are, however, still missing. It is very likely that different strategies will have to be pursued depending on the field of robot application (elderly or child care, robot tele-operation, flexible industry with a lot of interaction, pure production lines, etc.).

Fig. 7. A 20 DOF robotic hand including multimodal touch sensors and the grasp force control strategy as suggested by Deng et al. Figure adapted from [18]

3.2 Moving smartly (path and motion planning)

Path planning is generally defined as follows: a collision-free and continuous path in the joint angle space is to be found that brings a robot from a start configuration (position and orientation) to a target configuration without violating the limitations of the system [89]. Redundant robots, as described before, are able to move while the position and orientation of the end effector remain fixed. An advantage of such kinematically redundant systems is the capability of collision avoidance while the end effector follows a specific path [60]. Therefore, such systems can be used for technically difficult dexterity tasks, for example grasping inside a tube or between two walls. The workspace of a stationary industrial robot is also limited by the arm length and the allowed joint torques. The initial placement of the base of a static industrial serial manipulator during installation is often done by trial and error or with the help of human experience. For mobile manipulators controlled by an operator (as is often the case in rescue missions), this is also valid. For autonomously driven mobile manipulators, the positioning of the mobile platform, which corresponds to the base of the mounted serial manipulator, is part of the path planning algorithm, and there are many possible ways to fulfill this planning task.

Mobile manipulator movement can be considered in three ways: movement of the mobile platform, movement of the manipulator, or a combination of both. Depending on the system, the separation of mobile platform and manipulator reduces the difficulty of solving the inverse kinematics problem of such kinematically redundant systems [35]; e.g., Husty et al. [41] show an analytic solution of the inverse kinematics for a general serial link manipulator. The problem of path planning and collision avoidance of redundant configurations in general can be solved using different methods. In [100], the authors describe a solution based on numerical optimization algorithms using the pseudo-inverse of the Jacobian matrix. In [14], neural networks are used to find a solution for a planar manipulator with three joints. Quendler [69] uses a combination of different concepts to solve the inverse kinematics problem for a 6-DOF manipulator used on the mobile rescue robot R.U.D.I. [42], which was later extended with an additional joint [68] and mounted on the rescue robot platform T.R.U.D.I. [38] (Fig. 1a). Here, the Cyclic Coordinate Descent and Broyden-Fletcher-Goldfarb-Shanno Quasi-Newton [87] methods are used. The manipulator is controlled by a human operator with the help of a joystick input in either of the following modes: control of each joint individually (forward kinematics), or control of position and orientation of the end effector relative to the current end effector coordinate system or to the robot’s body-fixed coordinate system (inverse kinematics).
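As a hedged illustration of the numerical-optimization family of solutions mentioned above, the following sketch iterates a resolved-rate update using the Jacobian pseudo-inverse; forward_kinematics() and jacobian() are hypothetical robot-specific callables, and the step size and tolerance are arbitrary example values.

```python
import numpy as np

def pseudo_inverse_ik(q0, x_target, forward_kinematics, jacobian,
                      step=0.5, tol=1e-4, max_iter=200):
    """Iterative inverse kinematics using the Jacobian pseudo-inverse (sketch)."""
    q = np.asarray(q0, dtype=float)
    for _ in range(max_iter):
        error = x_target - forward_kinematics(q)    # task-space error
        if np.linalg.norm(error) < tol:
            break
        J = jacobian(q)                             # m x n manipulator Jacobian
        q = q + step * np.linalg.pinv(J) @ error    # least-squares joint update
    return q
```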

In [7] and [79], an approach is shown where not only the Cartesian workspace of the robot is used for the solution but also the joint angle space. Both [49] and [31] give an overview of many possible solutions for path planning and collision avoidance in the Cartesian workspace or in a low-dimensional joint angle space. A summary of methods for higher-dimensional spaces is given in [66]; in particular, the three methods net-based, sampling-based and trajectory-optimizing path planning are discussed. In addition to collision avoidance while following a desired path with redundant robotic systems, the configuration can also be assessed regarding manipulability from different points of view. For this, manipulator-Jacobian-based methods, introduced as a measure of robot manipulability by Yoshikawa in [98], are often used. The measure is based on the idea of determining the capability of the end effector to move in a certain direction as well as how easily the end effector can generate forces. The detailed calculation of those measures is based on the concept of manipulability and force ellipsoids [52]. Based on this idea, the dynamic manipulability measure was also introduced [97] and reformulated for redundant systems by Chiacchio in [13]. The authors in [75] use a combination of different concepts such as manipulability, reachability [54] and stiffness [3] to define a so-called comfort zone for kinematically redundant mobile manipulators.
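Yoshikawa's manipulability measure referenced above can be computed directly from the manipulator Jacobian, as in the following sketch (again with a hypothetical jacobian() callable):

```python
import numpy as np

def manipulability(q, jacobian):
    """Yoshikawa's measure w = sqrt(det(J J^T)); it vanishes at singular configurations."""
    J = jacobian(q)
    return np.sqrt(max(np.linalg.det(J @ J.T), 0.0))
```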

Additionally, non-traditional methods such as robot learning based on infant development theory have been presented in [94]. In this setting, the mobile manipulator is made to play games of varying difficulty, as shown in Fig. 8. As soon as there is no more progress on one level of difficulty, it automatically steps forward in complexity.

Fig. 8. Game setup with varying complexity for robot learning. Figure adapted from [94]

3.3 Do the right thing (control)

Mobile robots are a complex combination of mechanical systems, sensors and computational processes. A control system architecture and its implementation solve the challenging task of turning these components into a well-performing working robot. A traditional control methodology decomposes the robot’s tasks into functional steps: sense, plan (control) and act. This is needed and often suitable for solving the problem of driving a mobile robot, i.e. for guaranteeing smooth navigation of a mobile robot platform. Classical PD or PID controllers are still used for servo control of the robot’s individual drives. Many effective methods for navigation and trajectory tracking have been proposed [46].

Robot control of mobile manipulator platforms in general deals with the central problem of finding appropriate forces or torques that are generated by the actuators, e.g. DC motors, in order to move the robot arm or hand to a desired position or to track a desired reference trajectory.

Cascade control structures of multiple linear and nonlinear controllers are widely used. The mobile rescue robot T.R.U.D.I. uses four brushless DC (BLDC) motors with a power of \(P_{M} = 200~\mbox{W}\) to drive the main tracks. Each main drive (left and right side of the robot) is powered by two motors to achieve a higher driving torque; the two motors on each side are permanently coupled via a chain connection (marked in red in Fig. 9). Here, a combination of closed-loop and open-loop control together with the BLDC motor controllers was implemented. Figure 9 shows the overall control structure: from the user input (robot velocities \(\dot{x}_{R}\) and \(\dot{\phi }_{R}\)), the inverse kinematics block calculates the desired angular velocities of the chain sprockets for each track (\(\omega _{1}\) and \(\omega _{2}\)). The green blocks represent linear speed controllers (LEAD compensators [39]) which control the first motor of each side in a closed-loop control structure. The control signal (controller output) is also sent to the second BLDC motor controller (\(M_{2L}\)/\(M_{2R}\)) to achieve the same speed.
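A minimal sketch of one side of this drive control structure is given below, assuming a Tustin-discretized lead compensator and hypothetical motor-controller objects; the gains, corner frequencies and the set_command() interface are assumptions for illustration, not the parameters used on T.R.U.D.I.

```python
class LeadCompensator:
    """Discrete lead compensator for C(s) = K*(s + z0)/(s + p0), Tustin-discretized at period dt."""
    def __init__(self, K=2.0, z0=5.0, p0=50.0, dt=0.01):
        a = 2.0 / dt                        # bilinear transform factor s ~ a*(1 - z^-1)/(1 + z^-1)
        self.b0 = K * (a + z0) / (a + p0)
        self.b1 = K * (z0 - a) / (a + p0)
        self.a1 = (p0 - a) / (a + p0)
        self.e_prev = 0.0
        self.u_prev = 0.0

    def step(self, error):
        """Difference equation u_k = b0*e_k + b1*e_{k-1} - a1*u_{k-1}."""
        u = self.b0 * error + self.b1 * self.e_prev - self.a1 * self.u_prev
        self.e_prev, self.u_prev = error, u
        return u

def control_side(ctrl, omega_des, omega_meas, motor_1, motor_2):
    """Close the speed loop on the first motor, feed the same command to the coupled second motor."""
    u = ctrl.step(omega_des - omega_meas)   # speed error -> control signal
    motor_1.set_command(u)                  # first BLDC motor controller (hypothetical interface)
    motor_2.set_command(u)                  # coupled second motor receives the same command
```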

Fig. 9. Schematic of the drive control structure

In known or adaptable environments (e.g. in a factory), it is suitable to employ model-based dynamic control strategies including inverse kinematics approaches. They are based on mathematical descriptions of the manipulators in terms of nonlinear differential equations and their parameters. Computed torque control together with PD or PID control has successfully been employed for control in joint space [43, 45]. Such a combined control strategy was also used to control the 7-DOF manipulation arm on the mobile rescue robot T.R.U.D.I. (Fig. 1a) [68].
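A computed-torque controller of the kind referenced above can be sketched as follows; \(M(q)\), \(C(q,\dot{q})\) and \(g(q)\) stand for a hypothetical dynamic model of the arm, and the PD gains are arbitrary example values, not the actual T.R.U.D.I. controller parameters.

```python
import numpy as np

def computed_torque(q, qd, q_des, qd_des, qdd_des, M, C, g,
                    Kp=np.diag([100.0] * 7), Kd=np.diag([20.0] * 7)):
    """Computed-torque (inverse dynamics) control with an outer PD loop in joint space."""
    e = q_des - q
    ed = qd_des - qd
    v = qdd_des + Kd @ ed + Kp @ e            # PD-corrected reference acceleration
    return M(q) @ v + C(q, qd) @ qd + g(q)    # inverse dynamics -> joint torques
```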

Depending on the drive concept used (e.g. wheeled robots), the present (nonholonomic) constraints and the complexity (highly nonlinear systems), nonlinear control techniques like feedback linearization or Lyapunov-based control turn out to be mathematically intense but suitable methods [19, 83].

In uncertain (highly dynamic) and uncontrolled environments, the control algorithms must be more sophisticated, involving some kind of (artificial) intelligence. One of the methodologies for treating such uncertain systems is adaptive control [40, 91]. Furthermore, the design of robust controllers has been attracting great attention; the sliding mode control methodology is a robust control procedure often used in the control of mobile robots [2].

Recently, ANN-based control methods have attracted much more attention. The authors in [99], for example, address the control problem for a class of mobile robot systems, where multi-layer feed-forward ANNs and deep learning are both employed for the mobile robot system to achieve trajectory tracking and obstacle avoidance.

Especially when mobile rescue robots are used, the mobile manipulator is often teleoperated by a human operator. This can be supported by additional algorithms which may use further sensor readings, be based on inverse kinematics (for example as described in Sect. 4.1) or on semi-autonomous driving methods. Semi-autonomous here means assistance algorithms which help an operator to control a robot with a high number of DOF. For example, the mobile manipulator T.R.U.D.I. described in Sect. 2.1 needs additional operator inputs to move the four flippers in order to support the mobile manipulator while overcoming difficult obstacles like stairs or steps. Additionally, in [32, 63, 64] different possibilities to control additional flipper tracks based on sensor information about the environment or soil condition are described.

A comprehensive review of control strategies and their advantages and disadvantages can be found in [84].

4 RoboCup challenges

The first RoboCup competition was held at IJCAI-97 in Nagoya. The idea was to push the development of soccer-playing robots up to a level that allows participation in a soccer game against human players by the year 2050. At that time, only soccer robots were in the view of the researchers, with real robots as well as robot simulation algorithms being compared in the competition [47]. Step by step, different leagues were established, each pursuing a different goal. With competitions in the Small Size, Soccer Simulation, Middle Size, Standard Platform as well as Humanoid leagues, soccer is still one main focus of the competition, with different aspects of soccer robots taken into account. Furthermore, more practice-related tasks were included in the RoboCup competitions over the years. This results in a diversity of disciplines, the main ones of which are shown in Table 1 (including the date of the first event and the main drive method).

Table 1. Summary of main RoboCup leagues

In the following, we discuss mobile manipulators in view of the RoboCup Rescue and RoboCup@Work competitions. Mobile manipulators in general, with the different suitable drive methods shown in Table 1, are used in RoboCup Rescue, RoboCup@Work, RoboCup Logistics and RoboCup@Home. Mobile robot platforms with an omnidirectional drive system but without classical serial manipulators are used in the Soccer Small Size and Soccer Middle Size leagues. Steinbauer et al. [76] give an overview of the development of the different RoboCup leagues from 2000 to 2016.

4.1 RoboCup rescue

After the Great Hanshin Earthquake (1995) in Kobe, the Japanese government decided to promote research related to urban disasters and rescue robots [4]. The RoboCup Rescue competition simulates the search for victims in a disaster scenario such as after an earthquake or flooding in an urban environment. Until 2015, the main focus of the competition was to find simulated victims in a large area with various obstacles like stairs, pipes, ramps and stones to overcome, as shown in Fig. 10.

Fig. 10. RoboCup Rescue arena used until 2015

Finding those victims scores points for the competition ranking; extra points are gained for mapping the area, marking various objects in the map or various audio and visual detections. The simulated victims can be placed all over the competition area and consist of visual markers such as QR codes, E or C charts, a heat pad to simulate body temperature and an audio player.

They can be located at a height of up to \(h_{o} = 1.6~\mbox{m}\) above the ground, be hidden in holes or be found in pipes and behind closed doors. The search for victims could be done by a swarm of robots, where only one of them might be teleoperated while the others needed to operate in autonomous mode. Each year, the rules for the RoboCup Rescue competitions are extended to support the development of new features for the participating robots and to enhance their existing capabilities. The competition is mainly based on the mobile robot test strategies developed by the American National Institute of Standards and Technology (NIST).

In the year 2016, a new rule set was introduced during the RoboCup World Championship in Leipzig (Germany) with the focus on testing repeatability, functionality and operator proficiency; all capabilities should be combined on a single robot platform.

So-called individual test lanes (an example is shown in Fig. 12) were introduced, each lane having a different testing focus [65]. For each repetition of one lane, the team earns one point, and with the help of the basic sensor and dexterity tests at the beginning of each test round, this number can be multiplied by the number of successfully fulfilled sensor/dexterity tasks. The best teams in these individual tests qualify to participate in the final runs, where the individual test lanes are used to build a large arena with the goal to find as many simulated victims as possible.
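A rough reading of this scoring rule, as a simplified illustration and not the official RoboCup Rescue scoring formula:

```python
def lane_score(repetitions: int, successful_tasks: int) -> int:
    """Simplified reading: one point per lane repetition, multiplied by the number
    of successfully completed sensor/dexterity tasks of that round."""
    return repetitions * successful_tasks
```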

The basic sensor and dexterity tests at the beginning of each test lane ensure the reliability of the system. Especially test lanes from the so-called mobility test suite include driving over terrain of high difficulty, for example sand and gravel (with a \(15^{\circ}\) slope), step fields (a diagonal hill terrain consisting of 20 cm square steps made from posts with flat tops), stairs (\(35^{\circ}\) and \(45^{\circ}\)) and obstacles partly blocked with debris, e.g. angled bars in defined locations. These tests require a high degree of stability and reliability in the mechanical as well as in the electronic and sensor system. The dexterity and sensor tests have to be done every time the mobile manipulator starts on a new test lane, which is approximately five times a day. The schedule during one competition day is very tight; therefore, a problem in the sensor system or with the manipulation arm can lead to a loss of points.

Figure 11 shows the test board used, which consists of two parts: the lower part is used to test the sensors of the mobile manipulator and the upper part is used to test its dexterity. For the sensor test, the robot system has to fulfill six identification tasks:

  • Video Image Resolution: Visual identification of the third-biggest concentric C gap by the competition judge.

  • Motion Detection: Automatically highlight the moving target (1-4) using any camera and integrated video processing. Additionally, track the identified motions at the operator station textually or audibly.

  • Thermal Image Resolution: Visual identification of the concentric Landolt C with a 2 cm gap by the competition judge to evaluate the thermal resolution.

  • Audio Acuity: Two-way communication between mobile manipulator and operator station. Tested by the judge and team members.

  • Color/Pattern Recognition: Automatic identification of two hazard labels out of 12 possibilities. Highlight and track the identified labels at the operator station textually or audibly.

  • Gas: The operator demonstrates an active display of the increase in \(\mathrm{CO}_{2}\) concentration when a team-mate breathes into the robot’s sensor or a \(\mathrm{CO}_{2}\) cartridge is opened near the sensor.

All identification tasks have to be performed at a minimum distance of 40 cm from the sensor board. Furthermore, four dexterity tasks have to be performed:

  • Touch: Touch a 1 cm diameter circular target at the end of a pipe with a pencil attached to the manipulator end effector.

  • Rotate: Grab a 5 cm octagonal pipe cap and rotate it \(180^{\circ}\) with the end effector.

  • Extract: Grab a 5 cm octagonal pipe cap and pull it out of the pipe; after holding it for 1 s, it can be dropped to the floor.

  • Inspect: Visual identification of the number of bars placed on the internal walls of a 5 cm pipe by the competition judge.

Fig. 11. Simulated victim and sensor board to evaluate the sensor capabilities

Fig. 12. Test lane (MAN2 - Align) with two parallel bars (10 cm in width and in height) adjusted to the dimensions of the robot

For the sensor and dexterity tests, a precise movement of the manipulator without oscillations or vibrations as well as a stable positioning of the mobile platform (without tilting) is crucial. To stabilize the mobile manipulator used for rescue purposes, additional flipper tracks as described in Sect. 2 are often used. Moreover, it has been shown that remote-controlled movement of the end effector using an inverse kinematics solution as described in Sect. 3.2 is a big advantage in terms of usability and the time needed to solve the task, compared to simple forward kinematic movement. Especially for the sensor and dexterity tests, the operator uses the inverse kinematics mode, which allows a movement of the end effector relative to the end-effector camera coordinate system. In this right-handed coordinate system, the z-y plane coincides with the camera image plane and the x-axis with the camera’s optical axis. Therefore, a movement parallel to the sensor/dexterity board is very intuitive in comparison to a movement with classical forward kinematics, where every joint of the manipulator has to be moved individually.
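A hypothetical sketch of this camera-frame teleoperation mode: a joystick velocity command given in the end-effector camera frame is rotated into the base frame with the current end-effector orientation, which would be obtained from the forward kinematics (not shown here).

```python
import numpy as np

def camera_to_base_velocity(v_cam, R_be):
    """v_cam: [vx, vy, vz] commanded in the camera frame (x = optical axis);
    R_be: 3x3 rotation of the camera/end-effector frame w.r.t. the robot base."""
    return R_be @ np.asarray(v_cam)

# e.g. move 5 cm/s parallel to the sensor board (within the camera's z-y image plane):
# v_base = camera_to_base_velocity([0.0, 0.05, 0.0], R_be)
```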

4.2 RoboCup@Work

In 2012, the RoboCup Federation introduced the RoboCup@Work league to accelerate the development of mobile manipulators in a more industry-related setting. The mobile robotic systems used in RoboCup@Work are intended to serve as flexible multipurpose systems, also for smaller companies with a changing product range. In the competition, different tasks like acquiring, picking and placing as well as transporting different machine parts and objects have to be performed. The contest procedure is separated into two parts: after some basic tests which must be completed, an extended, more difficult test scenario has to be fulfilled.

The basic tests include testing the navigation, manipulation and transportation capabilities of the participating mobile manipulators with three different test tasks:

  • Basic navigation: The mobile manipulator is sent a string containing a series of triples, each of which specifies a location, an orientation and a pause duration (a hypothetical parsing sketch follows this list). The challenge is to navigate autonomously, robustly and safely in the scenario environment. In addition to specific service areas, arena objects, wall markers and floor markers, random obstacles may be placed in the environment.

  • Basic manipulation: The mobile manipulator is sent task specifications consisting of the objects’ source location, the objects’ destination location, a list of objects to be manipulated from the source to the destination area and the final place for the robot. As objects, different items like aluminum profiles of different size and color, screws, nuts and plastic tubes can be used. An object does not count as placed when it is dropped from a height above 5 cm. The pose of the object at the destination location can be chosen freely by the robot.

  • Basic transportation: Here, the ability of the robots to combine navigation and manipulation tasks as well as their task planning capabilities are tested. The task is to pick objects from the source location and bring them to the destination location. Two lists are sent to the robot: the first contains a list of manipulation object descriptions, the second contains for each destination area a configuration of manipulation objects the robot is supposed to achieve. A robot may carry up to three objects at the same time and is not allowed to place objects anywhere except on the robot itself and the correct destination areas.
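The sketch below parses such a navigation task string under an assumed "location,orientation,pause;..." format; the actual RoboCup@Work message format may differ.

```python
def parse_navigation_task(task_string):
    """Purely illustrative parser for a basic-navigation task string (assumed format)."""
    goals = []
    for triple in task_string.strip().split(";"):
        location, orientation, pause = triple.split(",")
        goals.append({"location": location.strip(),
                      "orientation": orientation.strip(),
                      "pause_s": float(pause)})
    return goals

# parse_navigation_task("WS01,NORTH,3.0;WS04,EAST,2.0")  # hypothetical example
```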

Figure 13a shows a KUKA youBot robot platform during the basic manipulation test [1]. Figure 13b shows the mobile manipulator of evocortex used by the RoboCup@Work team AutonOHM of the Nuremberg Institute of Technology during the precision placement test, where the mobile manipulator has to pick up different objects like screws, nuts or aluminum profiles and place them into prepared cavities on a target desk after a short movement of the robot platform. Figure 14 shows the rotating table which is used in the competition for the so-called rotating table test. The robot has to navigate autonomously to the table and grasp all objects from the moving table. Objects can pass the robot end-effector multiple times until the maximum time is over; the target location of the objects is on the robot itself. The rotating table test assesses the ability to pick and place moving objects which are placed on a rotating turntable with unknown speed. In this context, the vision system as well as the physical movement of the end-effector (i.e., the robot’s hand-eye coordination) have to meet very high requirements in terms of speed and precision [48].
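One possible, purely illustrative approach to this test is to estimate the unknown turntable speed from two timestamped bearing observations of an object and predict when it next passes the grasp pose:

```python
import numpy as np

def predict_grasp_time(theta_1, t_1, theta_2, t_2, theta_grasp):
    """Predict when an object at bearing theta_2 (time t_2) next reaches theta_grasp,
    given an earlier observation (theta_1, t_1). Angles in rad, times in s."""
    omega = (theta_2 - theta_1) / (t_2 - t_1)      # estimated angular speed of the table
    if omega == 0.0:
        return None                                # table not moving, no prediction
    dtheta = (theta_grasp - theta_2) % (2 * np.pi) # remaining angle, counter-clockwise
    if omega < 0.0:                                # clockwise rotation
        dtheta = dtheta - 2 * np.pi
    return t_2 + dtheta / omega
```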

Fig. 13. Two RoboCup@Work examples

Fig. 14. Rotating table used in the competition

The score calculation for each test is defined individually; it includes points for achieving certain sub-tasks and for winning a run, and penalty points can be assigned each time an incident occurs (collisions, losing an object during manipulation or transportation). In addition to these tests, there are further technical challenges (e.g. cluttered pick test, human-robot collaboration, open challenge) to evaluate specific capabilities of the robotic system. These challenges are awarded separately and could be included in the main competition in the future.

5 Conclusion

Mobile manipulation is a vivid research field with a variety of applications in, e.g., industry, rescue, service and health. Each subfield covered in the development of mobile manipulators, such as the construction of the hardware (platform, manipulator, gripper) and the choice of suitable algorithms for path planning, obstacle avoidance, control and perception, is a huge field of research on its own. In this paper, we have tried to give a brief overview of recent developments in all those fields, together with an insight into the development of a tracked rescue robot at CUAS as an example. Additionally, we have discussed mobile manipulation challenges in the context of the RoboCup, especially the RoboCup Rescue and RoboCup@Work leagues. We consider the RoboCup an attractive platform to motivate young researchers and students to take a deeper look into the research challenges ahead in robotics in general and, in this context, especially in mobile manipulation.