Photonic Sensors

Volume 8, Issue 2, pp 134–145

Integrative Multi-Spectral Sensor Device for Far-Infrared and Visible Light Fusion

Open Access | Regular

Abstract

Infrared and visible light image fusion has become a major focus of multi-sensor fusion research in recent years. Existing infrared and visible light fusion systems use two separate cameras and therefore require image registration before fusion, yet the practical performance of registration techniques remains limited. Hence, a novel integrative multi-spectral sensor device is proposed for infrared and visible light fusion: a beam splitter prism projects the coaxial light entering through a single lens onto an infrared charge-coupled device (CCD) and a visible light CCD, respectively. In this paper, the imaging mechanism of the proposed sensor device is studied together with the signal acquisition and fusion process. A simulation experiment covering the entire chain of the optical system, signal acquisition, and signal fusion is constructed based on an imaging effect model, and a quality evaluation index is adopted to analyze the simulation results. The experimental results demonstrate that the proposed sensor device is effective and feasible.
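The practical benefit of the coaxial design is that the infrared and visible frames arrive already pixel-aligned, so fusion can be applied directly without a registration step. The following is a minimal sketch (not the authors' code), assuming aligned frames of equal size and using weighted averaging and grey-level entropy as stand-ins for the paper's fusion rule and quality evaluation index, neither of which is specified in this abstract:

```python
# Minimal sketch of the pipeline the abstract describes (assumptions noted above):
# the beam splitter keeps the infrared and visible CCDs on a common optical axis,
# so the two frames share one geometry and can be fused without registration.
import numpy as np

def fuse_coaxial(ir_frame: np.ndarray, vis_frame: np.ndarray, w_ir: float = 0.5) -> np.ndarray:
    """Fuse pixel-aligned infrared and visible frames by weighted averaging."""
    assert ir_frame.shape == vis_frame.shape, "coaxial frames share one geometry"
    return w_ir * ir_frame.astype(np.float64) + (1.0 - w_ir) * vis_frame.astype(np.float64)

def image_entropy(img: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy of the grey-level histogram, a common fusion quality index."""
    hist, _ = np.histogram(img, bins=bins, range=(float(img.min()), float(img.max()) + 1e-9))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

if __name__ == "__main__":
    # Synthetic stand-ins for the infrared and visible CCD outputs.
    rng = np.random.default_rng(0)
    ir = rng.integers(0, 256, size=(240, 320)).astype(np.float64)
    vis = rng.integers(0, 256, size=(240, 320)).astype(np.float64)
    fused = fuse_coaxial(ir, vis, w_ir=0.6)
    print(f"fused-image entropy: {image_entropy(fused):.3f} bits")
```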

Keywords

Integrative multi-spectral sensor device; infrared and visible fusion; beam splitter prism; imaging effect model

Acknowledgment

This study was supported by the National Natural Science Foundation of China (Grant No. 51274150) and the Natural Science Foundation of Shanxi Province, China (Grant No. 201601D011059).


Copyright information

© The Author(s) 2018

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  • Tiezhu Qiao (1)
  • Lulu Chen (1)
  • Yusong Pang (2)
  • Gaowei Yan (3)
  1. Key Lab of Advanced Transducers and Intelligent Control System, Ministry of Education and Shanxi Province, Taiyuan University of Technology, Taiyuan, China
  2. Section of Transport Engineering and Logistics, Faculty of 3mE, Delft University of Technology, Mekelweg 2, Netherlands
  3. College of Information Engineering, Taiyuan University of Technology, Taiyuan, China