Incorporating Temporal Prior from Motion Flow for Instrument Segmentation in Minimally Invasive Surgery Video

  • Yueming Jin
  • Keyun Cheng
  • Qi Dou
  • Pheng-Ann Heng
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11768)

Abstract

Automatic instrument segmentation in video is a fundamental yet challenging problem for robot-assisted minimally invasive surgery. In this paper, we propose a novel framework that leverages instrument motion information by incorporating a derived temporal prior into an attention pyramid network for accurate segmentation. Our inferred prior provides a reliable indication of the instrument location and shape, and is propagated from the previous frame to the current frame according to inter-frame motion flow. This prior is injected into the middle of an encoder-decoder segmentation network as the initialization of a pyramid of attention modules, explicitly guiding the segmentation output from coarse to fine. In this way, the temporal dynamics and the attention network effectively complement and benefit each other. As an additional benefit, our temporal prior enables semi-supervised learning with periodically unlabeled video frames, simply by running the propagation in reverse. We extensively validate our method on the public 2017 MICCAI EndoVis Robotic Instrument Segmentation Challenge dataset with three different tasks. Our method consistently exceeds state-of-the-art results across all three tasks by a large margin. Our semi-supervised variant also demonstrates promising potential for reducing annotation cost in clinical practice.
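To make the propagation step concrete, the following is a minimal, hypothetical PyTorch sketch (not the authors' implementation) of how a previous-frame instrument mask might be warped to the current frame with an estimated motion flow and then used to initialise one level of an attention pyramid. The names warp_prior and AttentionGate, and the backward-flow convention, are illustrative assumptions.

```python
# Minimal sketch (not the authors' released code): warp the previous frame's
# instrument mask into the current frame with an estimated motion flow, then
# use the warped prior to initialise an attention map that re-weights decoder
# features. warp_prior / AttentionGate and the backward-flow convention are
# illustrative assumptions.
import torch
import torch.nn.functional as F


def warp_prior(prev_mask, flow):
    """Warp prev_mask (B,1,H,W) into the current frame.

    flow (B,2,H,W) is assumed to map current-frame pixel positions to their
    corresponding positions in the previous frame (backward flow), with
    channel 0 = x displacement and channel 1 = y displacement in pixels.
    """
    _, _, h, w = prev_mask.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().to(prev_mask.device)  # (2,H,W)
    pos = base.unsqueeze(0) + flow                                    # (B,2,H,W)
    # Normalise sampling positions to [-1, 1], as grid_sample expects.
    pos_x = 2.0 * pos[:, 0] / (w - 1) - 1.0
    pos_y = 2.0 * pos[:, 1] / (h - 1) - 1.0
    grid = torch.stack((pos_x, pos_y), dim=-1)                        # (B,H,W,2)
    return F.grid_sample(prev_mask, grid, mode="bilinear", align_corners=True)


class AttentionGate(torch.nn.Module):
    """One level of a coarse-to-fine attention pyramid initialised by the prior."""

    def __init__(self, channels):
        super().__init__()
        self.refine = torch.nn.Conv2d(channels + 1, 1, kernel_size=3, padding=1)

    def forward(self, feat, prior):
        # Resize the (warped) prior to this level's feature resolution.
        prior = F.interpolate(prior, size=feat.shape[-2:], mode="bilinear",
                              align_corners=True)
        attn = torch.sigmoid(self.refine(torch.cat([feat, prior], dim=1)))
        # Return gated features plus a refined map for the next, finer level.
        return feat * attn, attn
```

In a full pipeline of this kind, the refined attention map from each level would be passed down the decoder so that successively finer levels sharpen the temporal prior into the final segmentation.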

Notes

Acknowledgments

The work was partially supported by HK RGC TRS project T42-409/18-R, HK RGC project CUHK14225616, and CUHK T Stone Robotics Institute, CUHK. Yueming Jin is funded by the HK Ph.D. Fellowship.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Yueming Jin (1) (corresponding author)
  • Keyun Cheng (1)
  • Qi Dou (2)
  • Pheng-Ann Heng (1, 3)

  1. Department of Computer Science and Engineering, The Chinese University of Hong Kong, Hong Kong, China
  2. Department of Computing, Imperial College London, London, UK
  3. T Stone Robotics Institute, The Chinese University of Hong Kong, Hong Kong, China