Learning to Control a Visual Sensing System
This chapter shows how to learn the extraction of visual primitives from an image. The primitives considered are edges, which provide visual information about the environment in which the robot works. They are extracted by four modules working in a chain, so that raw data acquired with a camera is transformed into semantically relevant data carrying information about the robot's environment. In this context, Machine Learning is used to tune image acquisition and edge extraction, so that the Visual Sensing System (VSS) adapts itself to a dynamic environment. An experiment shows how the VSS finds the exact location of a door acquired by the robot camera.
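The chapter's four-module chain is not reproduced here; as a minimal sketch of the edge-extraction step it describes, the code below applies a Sobel gradient to an image and thresholds the gradient magnitude. The Sobel operator and the fixed fractional threshold `alpha` are illustrative stand-ins, not the chapter's actual modules; `alpha` plays the role of a parameter that, in the chapter's setting, Machine Learning would tune to the environment.

```python
import numpy as np

def sobel_edges(img, alpha=0.5):
    """Mark edge pixels: gradient magnitude above alpha * max magnitude.

    alpha is a hypothetical tunable parameter, standing in for the
    acquisition/extraction settings the VSS learns to adapt.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    padded = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    # Dependency-free 3x3 convolution with the two Sobel kernels.
    for i in range(h):
        for j in range(w):
            win = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(win * kx)
            gy[i, j] = np.sum(win * ky)
    mag = np.hypot(gx, gy)
    return mag > alpha * mag.max()

# Synthetic image: dark left half, bright right half -> one vertical edge.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
edges = sobel_edges(img)
```

On this synthetic image the detected edge pixels line up along the brightness boundary, the kind of primitive a later module could match against a model of, say, a door frame.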