
Spatial Fusion of Different Imaging Technologies Using a Virtual Multimodal Camera

  • Sebastian P. Kleinschmidt
  • Bernardo Wagner
Chapter
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 430)

Abstract

To analyze potential relations between different imaging technologies such as RGB, hyperspectral, IR and thermal cameras, spatially corresponding image regions need to be identified. Because images of different cameras cannot be taken from the same pose simultaneously, corresponding pixels in the captured images are spatially displaced or subject to time-variant effects. Furthermore, additional spatial deviations in the images are caused by differing camera parameters such as focal length, principal point and lens distortion. To reestablish the spatial relationship between images of different modalities, additional constraints need to be taken into account. For this reason, a new intermodal sensor fusion technique called Virtual Multimodal Camera (VMC) is presented in this paper. Using the presented approach, spatially corresponding images can be rendered for different camera technologies from the same virtual pose using a common parameter set. As a result, image points of the different modalities can be set into a spatial relationship so that the pixel locations in the images correspond to the same physical location. Additional contributions of this paper are the introduction of a hybrid calibration pattern for intrinsic and extrinsic intermodal camera calibration and a high-performance 2D-to-3D mapping procedure. All steps of the algorithm are performed in parallel on a graphics processing unit (GPU). As a result, large numbers of spatially corresponding images can be generated online for later analysis of intermodal relations.
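For orientation, the core of the approach can be read as a 2D-to-3D-to-2D reprojection: depth pixels are lifted to 3D points, each point is projected into the source modality to sample its value, and the same point is projected into the virtual camera to place that value. The sketch below illustrates this idea in plain NumPy under simplifying assumptions (pinhole model without lens distortion, a metric depth image in a known frame, no occlusion handling); all function and parameter names (backproject, render_virtual_view, K_virt, T_virt) are illustrative and are not taken from the paper, whose implementation executes these per-pixel steps in parallel on the GPU.

import numpy as np

def backproject(depth, K):
    """Lift every pixel of a depth image to a 3D point in the depth camera frame
    (the 2D-to-3D step, vectorised with NumPy instead of GPU kernels)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def project(points, K, T):
    """Project 3D points (depth-camera frame) into a camera given its intrinsics K
    and the extrinsic 4x4 transform T (depth frame -> target camera frame)."""
    p = T[:3, :3] @ points.T + T[:3, 3:4]
    z = p[2]
    u = K[0, 0] * p[0] / z + K[0, 2]
    v = K[1, 1] * p[1] / z + K[1, 2]
    return u, v, z

def render_virtual_view(depth, source_img, K_depth, K_src, T_src, K_virt, T_virt, out_shape):
    """Resample a source-modality image (e.g. thermal) into the virtual camera so that
    its pixels align with any other modality rendered with the same K_virt / T_virt."""
    pts = backproject(depth, K_depth)
    pts = pts[pts[:, 2] > 0]                     # drop pixels without a depth measurement
    # sample the source-modality value for every 3D point
    us, vs, zs = project(pts, K_src, T_src)
    us, vs = np.round(us).astype(int), np.round(vs).astype(int)
    valid = (zs > 0) & (us >= 0) & (us < source_img.shape[1]) & (vs >= 0) & (vs < source_img.shape[0])
    # place the sampled values in the virtual image (naive scatter, no z-buffer)
    uv, vv, zv = project(pts, K_virt, T_virt)
    uv, vv = np.round(uv).astype(int), np.round(vv).astype(int)
    valid &= (zv > 0) & (uv >= 0) & (uv < out_shape[1]) & (vv >= 0) & (vv < out_shape[0])
    out = np.zeros(out_shape, dtype=source_img.dtype)
    out[vv[valid], uv[valid]] = source_img[vs[valid], us[valid]]
    return out

The naive scatter at the end ignores occlusions and holes; a full implementation would resolve overlapping projections and interpolate missing pixels, which is where the parallel GPU formulation described in the abstract pays off.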

Keywords

Intermodal sensor fusion · Virtual multimodal camera · GPU acceleration

Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  1. Real-Time Systems Group, Institute for Systems Engineering, Hannover, Germany
