Conversion of infrared ocean target images to visible images driven by energy information

  • Special Issue Paper
  • Published in: Multimedia Systems

Abstract

Infrared images carry rich energy information, which intuitively reflects the differences between objects and their surroundings. Visible images carry rich color information, which intuitively reflects the details of objects and scenes. To make full use of infrared energy information when translating infrared images into visible images, this paper proposes an infrared-energy-to-color (IETC) method based on Cycle-Consistent Generative Adversarial Networks (CycleGAN). We compare the proposed method against several state-of-the-art image-translation methods; the results show that it generates visually convincing images of ocean objects and scenes. The proposed method also outperforms the compared methods in quantitative evaluation, and classic object detection algorithms achieve more robust results on its outputs.
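The IETC method described above builds on CycleGAN, whose core training signal is the cycle-consistency loss: translating an infrared image to the visible domain and back should recover the original. A minimal sketch of that loss is below; the toy linear generators `G` (infrared to visible) and `F` (visible to infrared), the weight `lam`, and all names are illustrative assumptions, not the paper's actual networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def G(x):
    # Toy "infrared -> visible" generator (a real model is a deep network)
    return 2.0 * x + 1.0

def F(y):
    # Toy "visible -> infrared" generator, the algebraic inverse of G
    return (y - 1.0) / 2.0

def cycle_consistency_loss(x_ir, y_vis, lam=10.0):
    """L1 cycle loss: F(G(x)) should recover x, and G(F(y)) should recover y."""
    forward = np.mean(np.abs(F(G(x_ir)) - x_ir))     # IR -> visible -> IR
    backward = np.mean(np.abs(G(F(y_vis)) - y_vis))  # visible -> IR -> visible
    return lam * (forward + backward)

x_ir = rng.random((4, 8, 8))   # fake infrared batch
y_vis = rng.random((4, 8, 8))  # fake visible batch
loss = cycle_consistency_loss(x_ir, y_vis)
print(loss)  # essentially zero here, since F inverts G (up to float rounding)
```

In actual CycleGAN training this term is added to the adversarial losses of both generators, so the model learns a mapping that is both realistic in the target domain and information-preserving.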




Acknowledgements

This work was supported by the China Postdoctoral Science Foundation (2017M620884, 2019T120127).

Author information


Corresponding author

Correspondence to Xuewei Chao.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Chen, C., Chao, X. Conversion of infrared ocean target images to visible images driven by energy information. Multimedia Systems 29, 2887–2898 (2023). https://doi.org/10.1007/s00530-021-00879-2

