Set-point Control of Mobile Robot with Obstacle Detection and Avoidance Using Navigation Function - Experimental Verification

This paper presents the results of an experimental verification of a mobile robot control algorithm including obstacle detection and avoidance. The controller is based on the navigation potential function proposed by Urakubo (Nonlinear Dyn. 81(3), 1475–1487, 2015). The conducted experiments considered the task of reaching and stabilizing the robot at a given point. The navigation potential aggregates information about the robot position and orientation as well as the repelling potentials of obstacles. Obstacle detection is performed solely with a laser scanner. The experiments show that the method can easily handle environments with one or two obstacles, even if they suddenly disappear or appear due to the scanner range limits. The experiments also indicate that the control method has good potential for use in a parallel parking task.


Introduction
Artificial potential functions (APFs) have been widely used in robotics since 1986, when they were proposed by Khatib [5]. The idea of repulsive and attractive interactions used to avoid collisions and move to a desired position had some limitations in its original version. The main one was the possibility of local minima when the APF was not designed carefully.
In the early 1990s Rimon and Koditschek presented the concept of the navigation function. At first a sphere-world version was introduced [12], which assumes that the obstacles are bounded by spheres in the three-dimensional case or by circles in the planar case. The method was then extended to more complex environments [13, 14] and [15]. All these algorithms assumed a point-like robot without constraints.
In 2004 Urakubo [26] extended the method, introducing a navigation function that takes into account the nonholonomic constraints of a mobile robot. In his method the orientation of the robot reaches the desired value just as both position coordinates do. A convergence proof was included in the paper.
In the papers [1, 20] and [19] the navigation function was used to control multiple mobile robots. The authors of these publications address the problem of collision avoidance in multi-agent robotic systems. In the first of the mentioned papers an extension of the navigation function called the multi-robot navigation function (MRNF) was applied. The second and third of these papers propose the use of prioritization to solve conflicts in the case of concurrent goals of the agents. Examples of navigation functions used for collision avoidance in 3D space were presented, among others, in works [21] and [22]. Their authors conducted numerical simulations to show the effectiveness of the method. Work [10] introduces the concept of Lyapunov-like barrier functions, which is another kind of potential function. The authors of this work apply it to a multi-robot control system. In article [27] a nonlinear controller for a nonholonomic mobile robot tracking a target and avoiding collisions is presented. The method takes into account the uncertainty of measurements. The paper includes simulation results illustrating the robustness of the proposed design scheme. In [8] simulation results for a group of nonholonomic mobile robots tracking a trajectory with a leader-follower scheme are presented. Papers [16] and [24] present collision avoidance for multiple robots, where the avoidance procedure is activated only close to obstacles. In [17] and [18] measurement uncertainties were additionally taken into account.
The authors of work [6] use a local potential function to control a formation of mobile robots in an environment with obstacles. Local artificial potential functions are widely investigated and applied with various robots, including UAVs; however, in this approach local minima may occur. In contrast to the navigation function, these methods give no general solution to the local minima problem. For the navigation function algorithm there exists a clear tuning methodology that guarantees a single global minimum.
In this paper experimental results for the algorithm presented in [25] are shown. In comparison with other approaches it takes into account nonholonomic constraints and guarantees global convergence of the position and orientation coordinates. Despite these advantages it had not been previously tested in practice, especially with on-line obstacle detection. In the authors' previous publication [7] the same control algorithm was investigated; however, in those tests a different kind of mobile platform was used. Moreover, in the previous experiments the location and size of the obstacles were known a priori. In the current research the controller utilizes an environment model obtained by reconstructing the obstacle parameters from laser range finder data. In comparison with the previously published results this causes two difficulties: a model of the environment must be built in each algorithm iteration, and these uncertain data must be used to control the motion of the mobile platform. In addition, the OptiTrack motion capture system was used to assure exact set-point control and to collect data for future off-line analysis.
The paper is organized as follows. Section 2 introduces the model of the robot and the environment. Section 3 presents the control algorithm and short explanation of its parameters. Section 4 describes the setup used for the experiments. In particular Section 4.3 presents the obstacle detection algorithm. Section 5 describes the conducted experiments and includes the results with short comments on each of them. The final section includes concluding remarks.

Model of the System
The experiments were conducted on a differentially driven mobile robot with a kinematic model given by the following equation:

\dot{q} = B(q)u, \quad (1)

where the vector q = [x\; y\; \theta]^T denotes the pose, and x, y, θ are the position coordinates and the orientation of the robot with respect to the global coordinate frame. The vector u = [v\; \omega]^T is the control vector, with v denoting the linear velocity and ω the angular velocity of the platform. The input matrix B has the following form:

B(q) = \begin{bmatrix} \cos\theta & 0 \\ \sin\theta & 0 \\ 0 & 1 \end{bmatrix}. \quad (2)

The control algorithm requires the robot task space to be bounded by a conceivable circle which defines the basin of attraction and its potential. The attracting potential function [14] is defined as

\beta_0(r) = \rho_0^2 - \|r - p_0\|^2, \quad (3)

where ρ_0 is the radius of the task space, r = [x\; y]^T is the current position of the robot and p_0 is the center of the task space. The controller design assumes the obstacles to be circular objects of radii ρ_i (i = 1, ..., N, N being the number of obstacles) with centers located at positions p_i. The repelling potential function for the i-th obstacle is defined as

\beta_i(r) = \|r - p_i\|^2 - \rho_i^2. \quad (4)

It is clearly seen that the attracting potential function behaves as a negative obstacle, inside which the robot moves.
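The attracting and repelling potentials above can be sketched directly (a minimal illustration; the function and variable names are ours, not from the paper):

```python
import numpy as np

def beta0(r, p0, rho0):
    """Attracting potential of the circular task-space boundary:
    positive inside the task space, zero on its boundary."""
    d = np.asarray(r, dtype=float) - np.asarray(p0, dtype=float)
    return rho0**2 - d @ d

def beta_i(r, p, rho):
    """Repelling potential of a circular obstacle: positive outside
    the obstacle, zero on its boundary."""
    d = np.asarray(r, dtype=float) - np.asarray(p, dtype=float)
    return d @ d - rho**2
```

Both functions vanish on their respective boundary circles, which is what lets their product signal the proximity of any forbidden region.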

Controller Design
Given an environment with N obstacles and the task of stabilizing the robot at the origin, the total navigation potential V is defined in Eq. 5, where κ is a positive, constant design parameter. The symbol k_w in Eq. 6 denotes a positive, constant design parameter that allows tuning the influence of the orientation term on the navigation function depending on the Euclidean distance to the goal. The aggregation of the repelling potentials happens in the term β, which is defined as

\beta = \prod_{i=0}^{N} \beta_i. \quad (7)

One must note that the iteration starts from zero, which means the inclusion of the task space boundary potential. The control algorithm proposed in work [25] is defined in Eq. 8, where a is a positive, constant design parameter, b in Eq. 9 denotes another positive, constant design parameter and L = [\sin\theta\; -\cos\theta\; 0]. The function h(g) is defined in Eq. 10, where g = \|B^T \nabla V\| and ε is a small positive constant. Finally, ∇V denotes the gradient of the navigation function with respect to the variables x, y and θ. Regardless of the number of obstacles, the gradient can be obtained in analytical form (Eqs. 11–13). As noted in [14], all the undesired local minima of the navigation function (5) disappear as the parameter κ increases. An algorithm for automatically tuning analytic navigation functions for sphere worlds was presented in [3]. The tuning parameter must satisfy a lower bound to ensure convergence to the desired value. In this paper the navigation functions have been tuned manually to assure convergence. For a sufficiently high value of the κ parameter, the navigation function (5) has one critical point, a saddle point, associated with each isolated obstacle; V has no other critical points besides these and the goal. Saddle points are unstable equilibrium points. In [25] a special control procedure for saddle point avoidance is described. It uses a time-varying function to push the robot away from the unstable equilibrium point.
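The aggregation of the boundary and obstacle potentials into the term β can be sketched as follows (a minimal illustration; the helper name and the (center, radius) data layout are our assumptions):

```python
import numpy as np

def beta_total(r, p0, rho0, obstacles):
    """Aggregated obstacle term beta: the product of the task-space
    boundary potential (i = 0) and all obstacle potentials (i = 1..N).
    `obstacles` is a list of (center, radius) pairs."""
    r = np.asarray(r, dtype=float)
    d0 = r - np.asarray(p0, dtype=float)
    b = rho0**2 - d0 @ d0                  # beta_0: task-space boundary
    for p, rho in obstacles:               # beta_i: circular obstacles
        d = r - np.asarray(p, dtype=float)
        b *= d @ d - rho**2
    return b
```

Because every factor vanishes on its boundary circle, β goes to zero as the robot approaches the task-space boundary or any obstacle, regardless of how many obstacles are present.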

Experimental Setup
The preparations for the experiments were conducted using a simulated environment and robot. The simulations provided a set of parameters that resulted in reasonable behavior of the robot. The experiments introduced a few new factors that imposed a need for further retuning. The following section describes the details of the experimental test-bed. The experiments were performed with the use of MTracker, a small two-wheeled mobile robot designed and constructed at the Chair of Control and Systems Engineering (Fig. 1).
The MTracker is equipped with a single-board computer (SBC) with a one-core processor (Intel Atom 1.2 GHz) and a low-level controller based on a digital signal processor. A Hokuyo laser scanner is mounted on top of the robot body. The scanner angular and radial ranges are 240° and 5.6 m, respectively. The average scan rate of the scanner is 10 Hz.
The robot body is cylinder-shaped with a diameter (wheel base) of 14.5 cm. The wheel diameter is approximately 5 cm. The robot is powered by a 3S2P, 3.7 Ah Li-Ion battery. With the battery mounted, the mass of the robot is 1.47 kg. The top of the robot body is covered with 5 infrared-reflecting markers (one on top of the laser scanner) used for external localization. The localization is performed with the use of ten active infrared cameras which are part of the OptiTrack system.
The SBC on the robot communicates with a personal computer (PC) via Wi-Fi. The average communication delay is 8 ms. The main tasks of the PC are to provide the user with a graphical user interface (GUI, Fig. 2), to collect the data obtained from the OptiTrack system, and to pass it over to the robot.

The Software
Both the SBC and the PC run the Ubuntu operating system (versions 12.04 and 14.04, respectively). Furthermore, both systems include ROS (version Hydro Medusa on the SBC and version Indigo Igloo on the PC). The overall system was highly distributed into so-called nodes, the isolated processes of ROS. A simplified graph of the system is presented in Fig. 3.
Some of the nodes depicted in Fig. 3 work in an asynchronous (reactive) manner (e.g. Controls Scaling or MTracker). This means that they perform their task immediately after receiving new input data. Other nodes work in a synchronous manner (e.g. Obstacle Controller, State Estimator). This means that they perform their task cyclically. The rate of these cycles was set to 100 Hz (ROS does not provide real-time operation). In the graph one can notice the State Estimator node. During the experiments this node only copied the OptiTrack data, which was justified by the high precision of the external localization method.
The Controls Scaling node scales down the control signal u when at least one of the wheels exceeds the assumed limitation. The scaled control signal u_s is calculated as

u_s = \frac{u}{s}, \quad s = \max\left(1, \frac{|\omega_l|}{\omega_{max}}, \frac{|\omega_r|}{\omega_{max}}\right),

where ω_l, ω_r denote the left and right wheel angular velocities. The symbol ω_max denotes the predefined maximal allowed angular velocity of each wheel. This scaling algorithm preserves the direction of the control vector u.
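The scaling can be sketched as below (a minimal illustration; the differential-drive wheel-speed formulas and the parameter names are our assumptions, not taken from the paper):

```python
def scale_controls(v, omega, wheel_radius, half_base, omega_max):
    """Uniformly scale u = [v, omega] so that neither wheel exceeds
    omega_max; dividing both components by the same scalar s
    preserves the direction of the control vector."""
    omega_r = (v + half_base * omega) / wheel_radius  # right wheel speed
    omega_l = (v - half_base * omega) / wheel_radius  # left wheel speed
    s = max(1.0, abs(omega_l) / omega_max, abs(omega_r) / omega_max)
    return v / s, omega / s
```

When neither wheel is saturated, s = 1 and the command passes through unchanged; otherwise the whole vector shrinks so the faster wheel sits exactly at its limit.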

The Obstacle Detection
Fig. 3 A graph representation of the system

An important part of the system is the obstacle detecting node. It does not solve the obstacle tracking problem, so the obstacles are recalculated from scan to scan. This introduces a number of phenomena into the potential-based control algorithm. The obstacles are provided in two basic forms: line-shaped and circular-shaped. The control algorithm uses only the latter. For the sake of simplicity, the robot workspace has been virtually restricted, so that only the essential obstacles were taken into consideration by the algorithm. Still, however, the limitation of the scanner angular range caused discontinuous effects while rotating and 'discovering' new obstacles. The obstacle detecting node exploits the data given by the external localization in order to transform the obstacles from the local to the global coordinate frame. Below we present the steps of the obstacle detection algorithm.
1. Range data is copied into a buffer of 2D points. While copying, the artificial range restrictions are taken into consideration.
2. Point data is grouped into sets by evaluating a point-to-point distance criterion (in the criterion, the threshold distance grows with the range).
3. Each point set is recursively turned into a set of segments with the use of the iterative end point fit method [2, 11] (a method similar to the split-and-merge algorithm [9]). The segments are reconstructed from the point data with the use of the total least squares method, where the model of the leading line is represented by the general equation Ax + By + C = 0. The vector and matrix algebra was performed with the use of the Armadillo C++ library [23]. Finally, both the first and the last point of the set are projected onto the leading line. These two projected points form a segment.
4. An iterative search on the total set of segments merges those of them that possibly comprise a single object. The criterion evaluated here is composed of two equally important parts: the distance between the neighboring end points of the two considered segments must be lower than an assumed threshold and, if so, the distances between all of the extreme points of both segments and a new segment reconstructed from the sum of both point sets must also be lower than an assumed threshold (Fig. 4). This ends the segment detection.
5. The detection of circular obstacles is based on the total segment set obtained in the previous step. If a segment length is below an assumed threshold, an equilateral triangle is created such that the segment becomes its base and its height points outwards (away from the robot). A circle circumscribed on this triangle is created and its radius is enlarged by a constant margin. This methodology works more stably than an exact circle approximation [4], because the obstacles are not always circular.
6. An iterative search on the total set of circles merges small circles that are close to each other and that do not exceed the allowable radius after the merge.

Fig. 4 A graphical visualisation of the two-segment merge condition. If the distance d_0 is smaller than the assumed threshold, a new potential segment is created (red line). If the distances d_1–d_4 are also small enough, the new segment replaces the previous two
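Step 5 above can be sketched as follows (an illustrative reconstruction; the function name, argument layout and margin value are our assumptions):

```python
import math

def circle_from_segment(p1, p2, robot, margin=0.05):
    """Build an equilateral triangle on segment p1-p2 with its apex
    pointing away from the robot, and return the circle circumscribed
    on that triangle, with its radius enlarged by a constant margin."""
    (x1, y1), (x2, y2) = p1, p2
    a = math.hypot(x2 - x1, y2 - y1)           # segment (triangle base) length
    mx, my = (x1 + x2) / 2.0, (y1 + y2) / 2.0  # base midpoint
    nx, ny = -(y2 - y1) / a, (x2 - x1) / a     # unit normal of the segment
    # Flip the normal if it points toward the robot.
    if nx * (robot[0] - mx) + ny * (robot[1] - my) > 0.0:
        nx, ny = -nx, -ny
    # The circumcenter lies a/(2*sqrt(3)) from the base midpoint,
    # toward the apex; the circumradius of the triangle is a/sqrt(3).
    d = a / (2.0 * math.sqrt(3.0))
    center = (mx + d * nx, my + d * ny)
    return center, a / math.sqrt(3.0) + margin
```

Pushing the apex away from the robot biases the estimated circle toward the unseen back side of the object, which is where the occluded part of the obstacle actually is.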
In order to show the effectiveness of this method, various experiments were conducted. In the first experiment set the robot was standing still at the origin of the global coordinate frame and a few yellow, cylinder-shaped obstacles of radius 13.5 cm were placed in its workspace. The number of range points constituting a detected obstacle decreases with the inverse square of the distance, and hence the quality of the detected parameters declines. Table 1 gathers statistical data (mean values and variances of position and radius) of ten obstacles randomly placed in the workspace. The values of the detected radii are the true detected values. During the control experiments they were enlarged by a margin of 5 cm.
The second experiment set consisted in detecting two non-moving obstacles while the robot was in motion. The robot was moving toward the obstacles with a predefined forward velocity and negligible angular rate. Table 2 gathers similar statistical data obtained from these experiments. One can note that the variance of the position has grown considerably. This is explained by the fact that the central points of the detected obstacles do not coincide with the real obstacle central points, and during the motion the robot 'sees' them from different angles, hence the points wander.

Experimental Results
The controller parameters were tuned to obtain the expected time of task execution and to avoid disturbances (larger values of a and b produced a larger u that led the wheel velocity controller to switch between the positive and negative velocity limits). The coefficient ε was tuned to reduce chattering in the neighborhood of the desired position. It is worth noting that this coefficient did not cause chattering over a wide range of values during the simulation tests (e.g. the authors of [25] used the value ε = 10^{-6}, which gave no acceptable results in the tests with the real robot). The value of the k_w coefficient allows tuning the ratio of the orientation convergence rate to the position convergence rate. Inadequate values caused the orientation to converge to the desired value too fast or too slowly in comparison with the position coordinates. The κ coefficient was used to shape the velocity profile along the robot's path. By increasing this value, local minima of the potential function are flattened and eventually removed. Usually κ must be increased if narrow passages between obstacles exist in the environment. The radius of the task space was set to 5 m. The angular velocities of the wheels were restricted to the maximum value ω_max = 10 rad/s. This value was selected with reference to the physical properties of the MTracker robot; it ensured that there were no longitudinal slips during task execution. In the subsequent experiments the algorithm was investigated for increasing complexity of the environment.
In the first experiment the robot moved in free space. The initial configuration was q(0) = [−0.71 −1.04 0.015]^T. During the maneuver the robot changed the direction of motion twice (Fig. 5a). The non-trivial path of the robot is the result of the influence of the orientation component of the navigation function. The robot reached the neighborhood of the desired position after about 20 seconds (Fig. 5b). The orientation converged to the desired value at the same time (Fig. 5c). Figures 5d and e show the linear and angular velocities of the platform, respectively. The dashed line represents the original control signal computed with Eq. 8, the solid one the scaled control signal. In Fig. 5g the wheel velocities are presented. Fig. 5f shows the time graph of the navigation function computed along the motion path. The influence of the potential function associated with the boundary of the task space is not significant, as the robot stays far from the boundary during the task execution.
In Fig. 6 the results of the experiment with a single obstacle are presented. The robot's initial configuration was q(0) = [−1.76 −0.30 −0.81]^T. During the task execution the robot avoided the obstacle in a smooth fashion (Fig. 6a), reaching the desired position after about 25 seconds (Fig. 6b and c).

In Fig. 6d and e the linear and angular control signals of the platform are presented. In Fig. 6g the wheel velocity signals are presented. Figure 6f represents the time graph of the potential function. It is worth noting the discontinuity at about the 12th second. It is the result of the obstacle disappearing from view when the robot moved behind it. Although the computed linear velocity exhibits a drastic impulse at about that time, the scaling procedure prevented the robot from such a reaction and thus the motion remained smooth. The influence of the environment on the robot's motion is significant, as the obstacle is located between the initial and final positions of the robot. Initially the robot moves toward the goal, but as it approaches the obstacle the bypass maneuver is executed. Figure 7a shows the robot path for the environment with two obstacles. The robot started the motion at q(0) = [−1.95 0.02 0.02]^T. The desired position and orientation are reached after about 27 seconds (Fig. 7b and c). In Fig. 7d and e the linear and angular velocities of the platform are presented. Grey lines represent the signals without scaling and solid lines the scaled control signals. In Fig. 7g the wheel angular velocities are presented. Figure 7f shows the navigation function graph. In this scenario the two obstacles shaped the resulting robot path. The robot keeps far from the obstacles during the whole task execution. In the initial part of the path the tendency to push the robot into empty areas can be observed.
The results of the last experiment are shown in Fig. 8. In this scenario the robot executes a parallel parking maneuver. In Fig. 8a the path of the robot is shown. In Fig. 8b and c the position and orientation coordinates are presented. The robot reaches the neighbourhood of the desired value after about 14 seconds. In Fig. 8d and e the control signals for the platform are presented. In Fig. 8g the wheel controls are presented. The potential function is shown in Fig. 8f. Due to the limited angular range of the laser range finder, a temporary growth of the potential is observed during the initial motion: when the robot rotates left at the beginning, the left obstacle falls within the scope of detection. In this test the robot operates close to the obstacles, hence their influence on the control signals is substantial. This results in three changes of the motion direction, but finally the desired position between the two obstacles is reached. Note that the desired orientation causes inconvenience, as one obstacle is in front of the robot while the second is at its back during the final part of the task execution.

Conclusion
In this paper the experimental verification of a nonholonomic mobile robot control and collision avoidance algorithm was presented. Both the position errors and the orientation error are quickly reduced to near-zero values. Future research will focus on more complex environments, including star-shaped obstacles and tree-of-stars obstacles. The authors plan to expand the algorithm to adapt it to the control of multiple nonholonomic robots and to dynamic environments. The saddle point avoidance procedure described in [25] will be tested in the near future.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Wojciech Kowalczyk received the M.Sc. degree in robotics from Poznan University of Technology (PUT), Poland, in 1998. In 1999-2000 he worked as an electronics design engineer and software developer (programming databases and communication applications for retail chains). Since 2000 he has worked at Poznan University of Technology, where he received the Ph.D. degree in robotics in 2008. His scientific interests focus on multi-agent systems, mainly on their application in the control of multiple mobile robots, collision avoidance and cooperative behaviours of nonholonomic robots at both the kinematic and dynamic level. He was involved in medical research projects conducted at PUT. He teaches multi-agent systems, smart systems, robotics, object-oriented programming, electrical engineering and electronics.
Mateusz Przybyla received his Eng. degree in robotics in 2008 and M.Sc. degree in robotics and management in 2010 from Poznan University of Technology (PUT) in Poland. His master's thesis concerned 2D SLAM based on laser scanner data. After graduating in 2010 he became a doctoral student at PUT. His main scientific focus is on autonomous mobile robots, robust control systems and perception systems. He was involved in several research projects as a programmer. He teaches fundamentals of autonomous systems and basics of control systems. He also gives courses on ROS for students associated with a local scientific club.