Multi-camera and Multi-modal Sensor Fusion, an Architecture Overview

  • Alvaro Luis Bustamante
  • José M. Molina
  • Miguel A. Patricio
Conference paper

DOI: 10.1007/978-3-642-14883-5_39

Part of the Advances in Intelligent and Soft Computing book series (AINSC, volume 79)
Cite this paper as:
Bustamante A.L., Molina J.M., Patricio M.A. (2010) Multi-camera and Multi-modal Sensor Fusion, an Architecture Overview. In: de Leon F. de Carvalho A.P., Rodríguez-González S., De Paz Santana J.F., Rodríguez J.M.C. (eds) Distributed Computing and Artificial Intelligence. Advances in Intelligent and Soft Computing, vol 79. Springer, Berlin, Heidelberg

Abstract

This paper outlines an architecture for multi-camera and multi-modal sensor fusion. We define a high-level architecture in which image sensors such as standard color, thermal, and time-of-flight cameras can be fused with high-accuracy location systems based on UWB, Wi-Fi, Bluetooth, or RFID technologies. This architecture is especially well suited for indoor environments, where such heterogeneous sensors usually coexist. The main advantage of such a system is that a combined, non-redundant output is provided for all detected targets. In its simplest form, the fused output includes the location of each target, with additional features depending on the sensors involved in the detection, e.g., location plus thermal information. In this way, a surveillance or context-aware system obtains more accurate and complete information than it would using only one kind of technology.
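The abstract's core idea — merging detections from heterogeneous sensors into one non-redundant track per target that carries the union of features (e.g., location plus thermal information) — can be illustrated with a minimal sketch. The paper itself does not specify the association method; the greedy nearest-neighbour gating, sensor names, and distance threshold below are illustrative assumptions only.

```python
# Hypothetical sketch of the fusion idea from the abstract: detections
# from heterogeneous sensors are associated by spatial proximity, and
# each resulting track accumulates the union of sensor features.
# The gating rule and all sensor names are assumptions, not the paper's method.
from dataclasses import dataclass, field
import math

@dataclass
class Detection:
    sensor: str                       # e.g., "color_cam", "thermal_cam", "uwb"
    x: float                          # estimated target position (metres)
    y: float
    features: dict = field(default_factory=dict)

def fuse(detections, gate=1.0):
    """Greedy nearest-neighbour association: detections closer than
    `gate` metres to an existing track are treated as the same target."""
    tracks = []
    for d in detections:
        for t in tracks:
            if math.hypot(t["x"] - d.x, t["y"] - d.y) < gate:
                n = len(t["sensors"])
                # average positions, merge feature sets
                t["x"] = (t["x"] * n + d.x) / (n + 1)
                t["y"] = (t["y"] * n + d.y) / (n + 1)
                t["features"].update(d.features)
                t["sensors"].append(d.sensor)
                break
        else:
            tracks.append({"x": d.x, "y": d.y,
                           "features": dict(d.features),
                           "sensors": [d.sensor]})
    return tracks

dets = [
    Detection("color_cam", 2.0, 3.1),
    Detection("thermal_cam", 2.1, 3.0, {"temp_c": 36.5}),
    Detection("uwb", 8.0, 1.0, {"tag_id": "A17"}),
]
fused = fuse(dets)
print(len(fused))  # 2 targets: the two camera detections merge, the UWB tag stays separate
```

The two camera detections fall within the gate and collapse into a single target whose feature set now includes the thermal reading, while the distant UWB tag remains a separate target — the "combined non-redundant output" the abstract describes.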


Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Alvaro Luis Bustamante (1)
  • José M. Molina (1)
  • Miguel A. Patricio (1)
  1. Applied Artificial Intelligence Group, Universidad Carlos III de Madrid, Madrid, Spain
