
Straight to the Point: Reinforcement Learning for User Guidance in Ultrasound

  • Fausto Milletari
  • Vighnesh Birodkar
  • Michal Sofka
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11798)

Abstract

Point of care ultrasound (POCUS) is the use of ultrasound imaging in critical or emergency situations to support clinical decisions by healthcare professionals and first responders. In this setting it is essential to provide means of obtaining diagnostic data to potentially inexperienced users who have not received extensive medical training. Acquiring and interpreting ultrasound images is not trivial: the user first needs to find a suitable sonic window that yields a clear image, and then needs to interpret it correctly to reach a diagnosis. Although many recent approaches focus on developing smart ultrasound devices that add interpretation capabilities to existing systems, our goal in this paper is to present a reinforcement learning (RL) strategy capable of guiding novice users to the correct sonic window and enabling them to obtain clinically relevant images of the anatomy of interest. We apply our approach to cardiac images acquired from the parasternal long axis (PLAx) view of the left ventricle of the heart.
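The abstract does not detail the implementation, but the guidance idea can be illustrated with a minimal, hypothetical sketch: a small DQN-style Q-network that maps a single grayscale ultrasound frame to a discrete probe-adjustment suggestion shown to the user. The action names, network shape, and framework choice (PyTorch) below are assumptions for illustration only, not the authors' architecture.

    # Illustrative sketch (not the authors' implementation): a DQN-style agent that
    # maps one grayscale ultrasound frame to a discrete probe-guidance suggestion.
    # Action set, layer sizes, and input resolution are assumed for illustration.
    import torch
    import torch.nn as nn

    ACTIONS = ["slide_left", "slide_right", "rotate_cw", "rotate_ccw",
               "tilt_up", "tilt_down", "stop"]

    class GuidanceQNet(nn.Module):
        """Small convolutional Q-network: frame -> Q-value per guidance action."""
        def __init__(self, n_actions: int = len(ACTIONS)):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=8, stride=4), nn.ELU(),
                nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ELU(),
                nn.Conv2d(32, 32, kernel_size=3, stride=1), nn.ELU(),
                nn.AdaptiveAvgPool2d(1),  # global pooling keeps the head input size fixed
            )
            self.head = nn.Linear(32, n_actions)

        def forward(self, frame: torch.Tensor) -> torch.Tensor:
            # frame: (batch, 1, H, W) grayscale ultrasound image scaled to [0, 1]
            x = self.features(frame).flatten(1)
            return self.head(x)  # (batch, n_actions) Q-values

    if __name__ == "__main__":
        net = GuidanceQNet()
        dummy_frame = torch.rand(1, 1, 128, 128)  # stand-in for a B-mode frame
        q_values = net(dummy_frame)
        suggestion = ACTIONS[q_values.argmax(dim=1).item()]
        print("suggested probe adjustment:", suggestion)

In a real guidance system of this kind, the reward would reflect how close the current frame is to a clinically acceptable PLAx view; the dummy frame and argmax readout here only illustrate the mapping from image state to a user-facing guidance action.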


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Fausto Milletari (1)
  • Vighnesh Birodkar (1)
  • Michal Sofka (1)

  1. 4Catalyzer Inc., Santa Clara, USA
