
Research on fundus image registration and fusion method based on nonsubsampled contourlet and adaptive pulse coupled neural network

  • Jun Wu
  • Xingxing Ren
  • Zhitao Xiao (Email author)
  • Fang Zhang
  • Lei Geng
  • Shihao Zhang

Abstract

We present a registration and fusion method for fluorescein fundus angiography images and color fundus images that combines the Nonsubsampled Contourlet Transform (NSCT) with an adaptive Pulse Coupled Neural Network (PCNN). First, the two images are registered to remove the spatial difference between the source images: Speeded Up Robust Features (SURF) keypoints are detected and matched using the ratio of the nearest-neighbor to the next-nearest-neighbor distance. Second, the Random Sample Consensus (RANSAC) algorithm refines these matches, and the transformation parameters it estimates are applied to the floating image to complete the registration. Finally, the registered images are decomposed by the NSCT into low-frequency and high-frequency sub-bands. The low-frequency sub-bands are fused with a regional-energy rule. The high-frequency sub-bands are fused with a simplified PCNN whose parameters are optimized by the Particle Swarm Optimization algorithm; the link strength of the simplified PCNN is an improved Laplacian energy, and coefficients are selected according to the number of times the corresponding neurons fire. Compared with existing fundus image fusion methods, the proposed method attains higher average gradient (AG) and information entropy (IE) values and a lower relative dimensionless global error in synthesis (ERGAS). The fused image synthesizes the source information accurately, renders details clearly, and preserves good spectral quality, providing an effective reference for the clinical diagnosis of fundus diseases.
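The registration stage outlined above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes OpenCV with the contrib modules (cv2.xfeatures2d) for SURF, and the function name register_floating, the ratio threshold 0.7, the Hessian threshold 400 and the RANSAC reprojection threshold 5.0 are illustrative choices rather than values reported in the paper.

```python
# Sketch of the registration stage: SURF keypoints, nearest/next-nearest
# distance-ratio matching, RANSAC homography estimation, and warping of the
# floating image. Requires opencv-contrib-python for cv2.xfeatures2d.
import cv2
import numpy as np

def register_floating(fixed_gray, floating_gray, ratio=0.7):
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_f, des_f = surf.detectAndCompute(fixed_gray, None)
    kp_m, des_m = surf.detectAndCompute(floating_gray, None)

    # Nearest-neighbor / next-nearest-neighbor distance ratio test
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    candidates = matcher.knnMatch(des_m, des_f, k=2)
    good = [m for m, n in candidates if m.distance < ratio * n.distance]

    # RANSAC rejects remaining mismatches and yields the spatial transform
    src = np.float32([kp_m[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    h_mat, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Spatial transformation of the floating image onto the fixed image grid
    h, w = fixed_gray.shape[:2]
    return cv2.warpPerspective(floating_gray, h_mat, (w, h))
```

The fusion rules can be sketched in the same spirit. The NSCT decomposition itself is not reproduced here: the sub-band arrays low_a/low_b and high_a/high_b are assumed to come from an external NSCT toolbox, the PCNN constants alpha_theta, v_theta and the iteration count stand in for the values the paper tunes with Particle Swarm Optimization, and the per-pixel link strength uses a local Laplacian-energy measure as a stand-in for the paper's improved Laplacian energy. All names are hypothetical.

```python
# Sketch of the fusion rules on already-computed NSCT sub-bands: regional
# energy for the low-frequency sub-band, simplified-PCNN firing counts for
# the high-frequency sub-bands.
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def fuse_low(low_a, low_b, win=3):
    # Regional-energy rule: keep the coefficient with the larger local energy.
    e_a = uniform_filter(low_a ** 2, win)
    e_b = uniform_filter(low_b ** 2, win)
    return np.where(e_a >= e_b, low_a, low_b)

def pcnn_fire_counts(band, beta, alpha_theta=0.2, v_theta=20.0, iters=200):
    # Simplified PCNN: the stimulus is |coefficient|, the linking input is the
    # local firing activity, and we count how often each neuron fires while
    # its dynamic threshold decays and is reset on firing.
    stim = np.abs(band)
    fired = np.zeros_like(stim)
    theta = np.ones_like(stim)
    counts = np.zeros_like(stim)
    for _ in range(iters):
        link = uniform_filter(fired, 3)                 # 3x3 linking field
        internal = stim * (1.0 + beta * link)           # internal activity U
        fired = (internal > theta).astype(stim.dtype)   # pulse output Y
        theta = np.exp(-alpha_theta) * theta + v_theta * fired
        counts += fired
    return counts

def fuse_high(high_a, high_b):
    # Per-pixel link strength from a local Laplacian-energy measure, then pick
    # the coefficient whose neuron fired more often.
    beta_a = uniform_filter(laplace(high_a) ** 2, 3)
    beta_b = uniform_filter(laplace(high_b) ** 2, 3)
    return np.where(pcnn_fire_counts(high_a, beta_a) >=
                    pcnn_fire_counts(high_b, beta_b), high_a, high_b)
```

In the paper the high-frequency rule is applied per scale and per directional sub-band; the sketch above treats a single pair of sub-bands for clarity.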

Keywords

Fundus fusion · Nonsubsampled contourlet · Regional energy · Simplified pulse-coupled neural network · Particle swarm optimization

Notes

Acknowledgements

This work was supported by the National Natural Science Foundation of China under grant No. 61771340, and by Tianjin Science and Technology Major Projects and Engineering under grants No. 17ZXHLSY00040, No. 17ZXSCSY00060 and No. 17ZXSCSY00090.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  • Jun Wu (1, 2)
  • Xingxing Ren (1, 2)
  • Zhitao Xiao (1, 2), Email author
  • Fang Zhang (1, 2)
  • Lei Geng (1, 2)
  • Shihao Zhang (1, 2)
  1. School of Electronics and Information Engineering, Tianjin Polytechnic University, Tianjin, China
  2. Tianjin Key Laboratory of Optoelectronic Detection Technology and System, Tianjin, China
