Learning to transfer microscopy image modalities

  • Special Issue Paper

Machine Vision and Applications

Abstract

Phase Contrast and Differential Interference Contrast (DIC) microscopy are two popular noninvasive techniques for monitoring live cells. Each imaging modality has its own advantages and disadvantages for visualizing specimens, so biologists need both complementary modalities to analyze specimens. In this paper, we propose a novel data-driven learning method capable of transferring microscopy images from one imaging modality to the other, reflecting the characteristics of specimens from different perspectives. For example, given a Phase Contrast microscope, we can transfer its images to the corresponding DIC images without using a DIC microscope, and vice versa. Preliminary experiments demonstrate that the image transfer approach provides biologists with a computational way to switch between microscopy imaging modalities, so they can combine the advantages of different modalities to better visualize and analyze specimens over time, without purchasing every type of imaging system or switching between systems back and forth during time-lapse experiments.
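As a rough illustration of what modality transfer means in practice, and not the authors' algorithm (which is only summarized above), the minimal Python sketch below fits a patch-wise regressor that maps registered Phase Contrast patches to DIC intensities. The patch size, the Ridge regressor, and the helper names extract_patches and center_pixels are all illustrative assumptions.

```python
# Minimal illustrative sketch of patch-based modality transfer on paired,
# registered images. This is NOT the authors' method; it only shows the
# general idea of learning a Phase Contrast -> DIC mapping from data.
import numpy as np
from sklearn.linear_model import Ridge

PATCH = 7  # assumed patch size

def extract_patches(img, patch=PATCH):
    """Flatten every patch x patch window of the image into a feature vector."""
    h, w = img.shape
    feats = [img[y:y + patch, x:x + patch].ravel()
             for y in range(h - patch + 1)
             for x in range(w - patch + 1)]
    return np.asarray(feats)

def center_pixels(img, patch=PATCH):
    """Regression target for each patch: the intensity at the patch center."""
    h, w = img.shape
    r = patch // 2
    return img[r:h - r, r:w - r].ravel()

# Training data: a Phase Contrast image and the registered DIC image of the
# same field of view (random placeholders here).
phase_train = np.random.rand(64, 64)
dic_train = np.random.rand(64, 64)

model = Ridge(alpha=1.0)
model.fit(extract_patches(phase_train), center_pixels(dic_train))

# Inference: synthesize a pseudo-DIC image from a new Phase Contrast image.
phase_test = np.random.rand(64, 64)
side = phase_test.shape[0] - PATCH + 1
pseudo_dic = model.predict(extract_patches(phase_test)).reshape(side, side)
```

Transferring in the opposite direction (DIC to Phase Contrast) under this sketch simply exchanges the roles of the two training images.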

Acknowledgements

This project was supported by the National Science Foundation (NSF) CAREER award IIS-1351049 and the Established Program to Stimulate Competitive Research (NSF EPSCoR) grant IIA-1355406.

Author information

Corresponding author

Correspondence to Zhaozheng Yin.

About this article

Cite this article

Han, L., Yin, Z. Learning to transfer microscopy image modalities. Machine Vision and Applications 29, 1257–1267 (2018). https://doi.org/10.1007/s00138-018-0946-7
