About this book
This monograph introduces novel methods for the control and navigation of mobile robots using multiple 1-D view models obtained from omni-directional cameras. This approach overcomes field-of-view and robustness limitations while enhancing accuracy and simplifying application on real platforms. The authors also address coordinated motion tasks for multiple robots, exploring different system architectures, particularly the use of multiple aerial cameras to drive robot formations on the ground. This, too, offers benefits of simplicity, scalability and flexibility. Coverage includes details of:
- a method for visual robot homing based on a memory of omni-directional images
- a novel vision-based pose stabilization methodology for non-holonomic ground robots based on sinusoidally varying control inputs
- an algorithm to recover a generic motion between two 1-D views without requiring a third view
- a novel multi-robot setup in which multiple camera-carrying unmanned aerial vehicles observe and control a formation of ground mobile robots, and
- three coordinate-free methods for decentralized mobile robot formation stabilization.
The performance of the different methods is evaluated both in simulation and experimentally with real robotic platforms and vision sensors.

Control of Multiple Robots Using Vision Sensors will serve both academic researchers studying visual control of single and multiple robots and robotics engineers seeking to design control systems based on visual sensors.