
Application of machine learning method in optical molecular imaging: a review

Abstract

Optical molecular imaging (OMI) is an imaging technology that uses optical signals, such as near-infrared light, to detect biological tissue in living organisms. Because of its specific and sensitive imaging performance, it is used in both preclinical research and clinical surgery. However, it demands heavy data analysis and complex mathematical models for tomographic imaging. In recent years, machine learning (ML)-based artificial intelligence has been adopted in many fields because of its powerful data-processing capability. Its capacity to analyze complex, large-scale data offers a feasible way to meet the demands of OMI. In this paper, we review ML-based methods applied in different OMI modalities.
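
To make the idea concrete, the sketch below (purely illustrative and not taken from the paper) shows, in PyTorch, the general form of a learned tomographic reconstruction: a small network is trained to map surface optical measurements directly to an internal source distribution. The detector and voxel counts, the network architecture, and the random synthetic training pairs are all assumptions for demonstration.

    # Illustrative sketch only: a learned mapping from boundary optical
    # measurements to a discretized internal source distribution.
    # Shapes, architecture, and data are assumptions, not the reviewed methods.
    import torch
    import torch.nn as nn

    N_DETECTORS = 64   # hypothetical number of surface measurement points
    N_VOXELS = 512     # hypothetical number of voxels in the reconstruction volume

    class ReconNet(nn.Module):
        """Fully connected regressor: measurements -> voxel-wise source intensity."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(N_DETECTORS, 256), nn.ReLU(),
                nn.Linear(256, 256), nn.ReLU(),
                nn.Linear(256, N_VOXELS),
            )

        def forward(self, y):
            return self.net(y)

    # Random tensors stand in for simulated measurement/source training pairs.
    measurements = torch.randn(1000, N_DETECTORS)
    sources = torch.randn(1000, N_VOXELS)

    model = ReconNet()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for epoch in range(5):  # short demonstration loop
        optimizer.zero_grad()
        loss = loss_fn(model(measurements), sources)
        loss.backward()
        optimizer.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")

In practice, such a network would be trained on measurement-source pairs generated by photon-propagation simulations or phantom experiments rather than random data.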

Acknowledgements

This work was supported by the Ministry of Science and Technology of China (Grant Nos. 2018YFC0910602, 2017YFA0205200, 2017YFA0700401, 2016YFA0100902, 2016YFC0103702), the National Natural Science Foundation of China (Grant Nos. 61901472, 61671449, 81227901, 81527805), the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant Nos. XDB32030200, XDB01030200), the Chinese Academy of Sciences (Grant Nos. GJJSTD20170004, YJKYYQ20180048, KFJ-STS-ZDTP-059, QYZDJ-SSW-JSC005), the Beijing Municipal Science & Technology Commission (Grant Nos. Z161100002616022, Z171100000117023), and a General Financial Grant from the China Postdoctoral Science Foundation (Grant No. 2017M620952). The authors would like to acknowledge the instrumental and technical support of the multi-modal biomedical imaging experimental platform at the Institute of Automation, Chinese Academy of Sciences.

Author information

Correspondence to Jie Tian.

About this article

Cite this article

An, Y., Meng, H., Gao, Y. et al. Application of machine learning method in optical molecular imaging: a review. Sci. China Inf. Sci. 63, 111101 (2020). https://doi.org/10.1007/s11432-019-2708-1

Keywords

  • optical molecular imaging
  • machine learning
  • artificial intelligence