Multi-camera and Multi-modal Sensor Fusion, an Architecture Overview
- Cite this paper as:
- Bustamante A.L., Molina J.M., Patricio M.A. (2010) Multi-camera and Multi-modal Sensor Fusion, an Architecture Overview. In: de Leon F. de Carvalho A.P., Rodríguez-González S., De Paz Santana J.F., Rodríguez J.M.C. (eds) Distributed Computing and Artificial Intelligence. Advances in Intelligent and Soft Computing, vol 79. Springer, Berlin, Heidelberg
This paper outlines an architecture for multi-camera and multi-modal sensor fusion. We define a high-level architecture in which image sensors such as standard color, thermal, and time-of-flight cameras can be fused with high-accuracy location systems based on UWB, WiFi, Bluetooth, or RFID technologies. This architecture is especially well-suited for indoor environments, where such heterogeneous sensors usually coexist. The main advantage of such a system is that a combined, nonredundant output is provided for all the detected targets. In its simplest form, the fused output includes the location of each target, plus additional features depending on the sensors involved in the target detection, e.g., location plus thermal information. This way, a surveillance or context-aware system obtains more accurate and complete information than by using one kind of technology alone.
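To make the fusion idea concrete, the following is a minimal sketch of combining per-target detections from heterogeneous sensors into one nonredundant record. All names here (`Detection`, `fuse`, the `variance` and `extras` fields) are illustrative assumptions, not the paper's actual interfaces; positions are merged with a simple inverse-variance weighted average, and modality-specific features (e.g., a thermal reading) are unioned into the fused output.

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    """One sensor's observation of a target (hypothetical structure)."""
    target_id: str
    position: tuple                 # (x, y) estimate in metres
    variance: float                 # positional uncertainty; lower = more accurate
    extras: dict = field(default_factory=dict)  # modality-specific features

def fuse(detections):
    """Combine detections of the same target into one nonredundant record.

    Position: inverse-variance weighted average over all contributing sensors.
    Extras: union of each modality's features (e.g., thermal information).
    """
    by_target = {}
    for det in detections:
        by_target.setdefault(det.target_id, []).append(det)

    fused = {}
    for tid, dets in by_target.items():
        weights = [1.0 / d.variance for d in dets]
        total = sum(weights)
        x = sum(w * d.position[0] for w, d in zip(weights, dets)) / total
        y = sum(w * d.position[1] for w, d in zip(weights, dets)) / total
        extras = {}
        for d in dets:
            extras.update(d.extras)
        fused[tid] = {"position": (x, y), "extras": extras}
    return fused

# Example: a color camera and a thermal camera both detect target "t1";
# the fused record carries one location plus the thermal feature.
result = fuse([
    Detection("t1", (1.0, 2.0), 0.5),
    Detection("t1", (1.2, 2.2), 0.5, {"temp_c": 36.5}),
])
```

With equal variances the fused position is the plain average, (1.1, 2.1), and the thermal reading rides along in `extras`; a real system would replace this averaging step with a proper estimator (e.g., a Kalman filter) and a data-association stage.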