
Fluorescence Microscopic Image Reconstruction Using Variational Autoencoder and CycleGAN

  • Conference paper
  • In: IoT Based Control Networks and Intelligent Systems

Abstract

Noise frequently corrupts images, and fluorescence microscopy data are particularly affected. Conventional methods for increasing the Signal-to-Noise Ratio (SNR) of corrupted images, such as deconvolution, often fail to achieve a high SNR because only an estimate of the point spread function is available, owing to modelling deficiencies or other complications. Compared with statistical approaches, deep learning methods significantly enhance the SNR of reconstructed images while remaining computationally less demanding. In this work, we reconstruct fluorescence microscopy images using Variational Autoencoders and CycleGAN. The quality of reconstruction is evaluated using Peak Signal-to-Noise Ratio (PSNR) and Mean Squared Error (MSE).
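As a rough illustration of the evaluation metrics named in the abstract, the sketch below computes MSE and PSNR between a ground-truth image and a reconstructed one. The function names and the assumption of 8-bit images (peak value 255) are illustrative choices, not details taken from the paper.

    import numpy as np

    def mse(reference: np.ndarray, reconstructed: np.ndarray) -> float:
        # Mean Squared Error between the ground-truth image and its reconstruction.
        diff = reference.astype(np.float64) - reconstructed.astype(np.float64)
        return float(np.mean(diff ** 2))

    def psnr(reference: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
        # Peak Signal-to-Noise Ratio in decibels; `peak` assumes 8-bit images.
        error = mse(reference, reconstructed)
        if error == 0.0:
            return float("inf")  # identical images
        return 10.0 * float(np.log10(peak ** 2 / error))

    # Hypothetical usage with a clean reference image and a network's output:
    # print("PSNR:", psnr(clean, denoised), "MSE:", mse(clean, denoised))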



Author information

Corresponding author

Correspondence to S. S. Poorna.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Charan, M.G.K.S. et al. (2023). Fluorescence Microscopic Image Reconstruction Using Variational Autoencoder and CycleGAN. In: Joby, P.P., Balas, V.E., Palanisamy, R. (eds) IoT Based Control Networks and Intelligent Systems. Lecture Notes in Networks and Systems, vol 528. Springer, Singapore. https://doi.org/10.1007/978-981-19-5845-8_30


  • DOI: https://doi.org/10.1007/978-981-19-5845-8_30

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-19-5844-1

  • Online ISBN: 978-981-19-5845-8

  • eBook Packages: Engineering, Engineering (R0)
