EchoFusion: Tracking and Reconstruction of Objects in 4D Freehand Ultrasound Imaging Without External Trackers

  • Bishesh Khanal
  • Alberto Gomez
  • Nicolas Toussaint
  • Steven McDonagh
  • Veronika Zimmer
  • Emily Skelton
  • Jacqueline Matthew
  • Daniel Grzech
  • Robert Wright
  • Chandni Gupta
  • Benjamin Hou
  • Daniel Rueckert
  • Julia A. Schnabel
  • Bernhard Kainz
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11076)

Abstract

Ultrasound (US) is the most widely used fetal imaging technique. However, US images have a limited capture range and suffer from view-dependent artefacts such as acoustic shadows. Compounding overlapping 3D US acquisitions into a single high-resolution volume can extend the field of view and remove image artefacts, which is useful for retrospective analysis, including population-based studies. Such volume reconstruction, however, requires the relative transformations between the probe positions from which the individual volumes were acquired. In prenatal US scans the fetus can move independently of the mother, so external electromagnetic or optical trackers cannot capture the relative motion between the probe and the moving fetus. We present a novel methodology for image-based tracking and volume reconstruction that combines recent advances in deep learning and simultaneous localisation and mapping (SLAM). Tracking semantics are established with a Residual 3D U-Net, whose output is fed to the SLAM algorithm. As a proof of concept, experiments are conducted on US volumes of a whole-body fetal phantom and of the heads of real fetuses. For fetal head segmentation, we also introduce a novel weak annotation approach that minimises the manual effort required for ground-truth annotation. We evaluate our method qualitatively, and quantitatively with respect to tissue discrimination accuracy and tracking robustness.
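
As a rough illustration of the compounding step described above, the sketch below resamples a set of 3D US volumes into a common reference frame and averages them, assuming per-volume rigid poses are already available (e.g., from the SLAM tracking stage). The function and variable names and the simple weighted-averaging scheme are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal compounding sketch: fuse overlapping 3D US volumes into one
# reference grid, given rigid poses from a tracking stage. Illustrative
# assumption only; not the paper's pipeline.
import numpy as np
from scipy.ndimage import affine_transform

def compound_volumes(volumes, poses, out_shape):
    """volumes: list of 3D arrays; poses: list of 4x4 transforms mapping
    each volume's voxel coordinates into the reference frame."""
    acc = np.zeros(out_shape, dtype=np.float32)     # intensity accumulator
    weight = np.zeros(out_shape, dtype=np.float32)  # per-voxel overlap weight

    for vol, pose in zip(volumes, poses):
        # affine_transform maps *output* coordinates to *input* coordinates,
        # so pass the inverse of the volume-to-reference pose.
        inv = np.linalg.inv(pose)
        resampled = affine_transform(vol.astype(np.float32),
                                     matrix=inv[:3, :3], offset=inv[:3, 3],
                                     output_shape=out_shape, order=1, cval=0.0)
        # Resample a volume of ones the same way to track where this
        # acquisition actually contributes (zero outside its field of view).
        mask = affine_transform(np.ones(vol.shape, dtype=np.float32),
                                matrix=inv[:3, :3], offset=inv[:3, 3],
                                output_shape=out_shape, order=1, cval=0.0)
        acc += resampled * mask
        weight += mask

    return acc / np.maximum(weight, 1e-6)  # average where volumes overlap

# Example: fuse two copies of a volume, one shifted 5 voxels along axis 0.
vol = np.random.rand(32, 32, 32)
shifted_pose = np.eye(4)
shifted_pose[0, 3] = 5.0
fused = compound_volumes([vol, vol], [np.eye(4), shifted_pose],
                         out_shape=(40, 32, 32))
```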

Acknowledgements

This work was supported by the Wellcome/EPSRC Centre for Medical Engineering [WT 203148/Z/16/Z], Wellcome Trust IEH Award [102431]. The authors thank Nvidia Corporation for the donation of a Titan Xp GPU.

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Bishesh Khanal (1, 2)
  • Alberto Gomez (1)
  • Nicolas Toussaint (1)
  • Steven McDonagh (2)
  • Veronika Zimmer (1)
  • Emily Skelton (1)
  • Jacqueline Matthew (1)
  • Daniel Grzech (2)
  • Robert Wright (1)
  • Chandni Gupta (1)
  • Benjamin Hou (2)
  • Daniel Rueckert (2)
  • Julia A. Schnabel (1)
  • Bernhard Kainz (2)
  1. School of Biomedical Engineering and Imaging Sciences, King’s College London, London, UK
  2. Department of Computing, Imperial College London, London, UK
