Learning Visual Landmarks for Mobile Robot Topological Navigation
Considerable progress has been made in the field of Robotics in mechanical systems, actuators, control, and planning. This progress has enabled the wide application of industrial robots, where manipulator arms, Cartesian robots, and similar machines far exceed human capacity. However, a robust and reliable autonomous mobile robot, able to operate and accomplish general tasks in unconstrained environments, remains out of reach. This is mainly because autonomous mobile robots suffer the limitations of current perception systems. A robot has to perceive its environment in order to interact with it (move, find and manipulate objects, etc.). Perception allows the robot to build an internal representation (model) of the environment, which it uses to move, avoid collisions, localize itself, find its way to the target, and locate objects in order to manipulate them. Without sufficient perception of the environment, the robot simply cannot perform any safe displacement or interaction, even with extremely efficient motion or planning systems. The more unstructured an environment is, the more the robot depends on its sensory system. The success of industrial robotics relies on rigidly controlled and planned environments, and on total control over the robot's position at every moment; as the degree of structure in the environment decreases, the robot's capabilities become correspondingly limited.
Keywords: Mobile Robot · Training Image · Deformable Model · Symbolic Information · Mobile Robot Navigation