An Extremely Fast and Precise Convolutional Neural Network for Recognition and Localization of Cataract Surgical Tools

  • Dongqing Zang
  • Gui-Bin Bian
  • Yunlai Wang
  • Zhen Li
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11768)

Abstract

Recognition and localization of surgical tools is a crucial requirement for safe tool-tissue interaction in many computer-assisted interventions (CAI). Unfortunately, most state-of-the-art approaches focus on improving detection precision with little regard for real-time performance, which limits their usefulness in intraoperative detection tasks. In this paper, we propose an extremely fast and precise network (EF-PNet) for tool detection that performs well in both intraoperative tracking and postoperative skill evaluation. The proposed approach detects tools in a single forward pass of a single network, enabling rapid detection during intraoperative tasks, and integrates a densely connected constraint to maintain the precision required for skill assessment. We demonstrate the advantages of our method on a newly built dataset, cataract surgical tool location (CaSToL). Experimental results, with a mean inference time of 3.7 ms per test frame (i.e., 270 fps) and a mean average precision (mAP) of 93%, demonstrate the effectiveness of the proposed architecture and show that it is far faster than recent region-based tool detection methods while achieving comparable precision.
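The abstract names two ingredients of EF-PNet: detection in a single forward pass of one network, and dense connectivity to preserve precision. The minimal PyTorch sketch below is only an illustration of those two ideas under assumed settings; it is not the authors' architecture, and the layer widths, growth rate, input resolution, and number of tool classes are all hypothetical. It also shows how a measured per-frame latency converts into the frames-per-second figure quoted above (3.7 ms per frame ≈ 1 / 0.0037 s ≈ 270 fps).

```python
# Hypothetical sketch, not the authors' EF-PNet: illustrates (1) dense
# connectivity inside a convolutional block and (2) timing a single forward
# pass and converting per-frame latency into frames per second.
import time
import torch
import torch.nn as nn


class DenseBlock(nn.Module):
    """Each layer receives the concatenation of all preceding feature maps."""

    def __init__(self, in_channels: int, growth_rate: int = 16, num_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(
                nn.Sequential(
                    nn.BatchNorm2d(channels),
                    nn.ReLU(inplace=True),
                    nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1),
                )
            )
            channels += growth_rate

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)


if __name__ == "__main__":
    # Toy single-pass "detector": a dense block followed by a 1x1 head that
    # predicts a per-cell (objectness + 4 box offsets + class scores) tensor.
    num_classes = 10  # hypothetical number of tool classes
    model = nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
        DenseBlock(32),
        nn.Conv2d(32 + 4 * 16, 5 + num_classes, kernel_size=1),
    ).eval()

    frame = torch.randn(1, 3, 416, 416)  # dummy video frame
    with torch.no_grad():
        start = time.perf_counter()
        _ = model(frame)
        latency = time.perf_counter() - start
    # In practice, latency should be averaged over many frames after warm-up.
    print(f"latency: {latency * 1e3:.1f} ms -> {1.0 / latency:.0f} fps")
    # e.g. 3.7 ms per frame corresponds to roughly 1 / 0.0037 ≈ 270 fps.
```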

Acknowledgements

This research was supported by the National Key Research and Development Program of China (Grant 2017YFB1302704), the National Natural Science Foundation of China (Grant U1713220) and Youth Innovation Promotion Association of the Chinese Academy of Sciences (Grant 2018165).

Supplementary material

Supplementary material 1 (MP4, 36,465 KB)


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Dongqing Zang (1)
  • Gui-Bin Bian (1) (corresponding author)
  • Yunlai Wang (1)
  • Zhen Li (1)

  1. Institute of Automation, Chinese Academy of Sciences, Beijing, China
