Using 3D Convolutional Neural Networks to Learn Spatiotemporal Features for Automatic Surgical Gesture Recognition in Video

  • Isabel Funke
  • Sebastian Bodenstedt
  • Florian Oehme
  • Felix von Bechtolsheim
  • Jürgen Weitz
  • Stefanie Speidel
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11768)

Abstract

Automatically recognizing surgical gestures is a crucial step towards a thorough understanding of surgical skill. Possible areas of application include automatic skill assessment, intra-operative monitoring of critical surgical steps, and semi-automation of surgical tasks. Solutions that rely only on the laparoscopic video and do not require additional sensor hardware are especially attractive as they can be implemented at low cost in many scenarios. However, surgical gesture recognition based only on video is a challenging problem that requires effective means to extract both visual and temporal information from the video. Previous approaches mainly rely on frame-wise feature extractors, either handcrafted or learned, which fail to capture the dynamics in surgical video. To address this issue, we propose to use a 3D Convolutional Neural Network (CNN) to learn spatiotemporal features from consecutive video frames. We evaluate our approach on recordings of robot-assisted suturing on a bench-top model, which are taken from the publicly available JIGSAWS dataset. Our approach achieves high frame-wise surgical gesture recognition accuracies of more than 84%, outperforming comparable models that either extract only spatial features or model spatial and low-level temporal information separately. For the first time, these results demonstrate the benefit of spatiotemporal CNNs for video-based surgical gesture recognition.
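To make the abstract's core idea concrete, the following is a minimal, self-contained PyTorch sketch of a 3D CNN that convolves jointly over time and space to classify a short stack of consecutive video frames into a surgical gesture class. It is an illustrative toy model, not the authors' architecture; all layer sizes, the clip length, and the class count are assumptions chosen for clarity.

```python
# Minimal sketch (illustrative assumptions, not the paper's model):
# a 3D CNN mapping a clip of consecutive frames to gesture logits.
import torch
import torch.nn as nn

class Toy3DCNN(nn.Module):
    def __init__(self, num_gestures=10):  # 10 classes is an assumption
        super().__init__()
        self.features = nn.Sequential(
            # 3x3x3 kernels convolve over time AND space, so motion
            # between frames is captured, unlike frame-wise 2D CNNs.
            nn.Conv3d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),  # downsample time and space together
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # global spatiotemporal pooling
        )
        self.classifier = nn.Linear(32, num_gestures)

    def forward(self, clip):
        # clip: (batch, channels, frames, height, width)
        x = self.features(clip)
        return self.classifier(x.flatten(1))

model = Toy3DCNN()
clip = torch.randn(1, 3, 16, 112, 112)  # 16 consecutive RGB frames
logits = model(clip)                    # one gesture prediction per clip
```

One common way to turn such clip-level predictions into the frame-wise recognition described above is to slide a window of consecutive frames over the video and assign each prediction to the window's corresponding frame.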

Keywords

Surgical gesture · Spatiotemporal modeling · Video understanding · Action segmentation · Convolutional Neural Network

Acknowledgements

The authors thank Colin Lea for sharing code and precomputed S-CNN features to reproduce results from [10] as well as the Helmholtz-Zentrum Dresden-Rossendorf (HZDR) for granting access to their GPU cluster.

Supplementary material

Supplementary material 1: 490279_1_En_52_MOESM1_ESM.pdf (PDF, 5.5 MB)

References

  1. Ahmidi, N., Tao, L., Sefati, S., Gao, Y., Lea, C., Haro, B.B., et al.: A dataset and benchmarks for segmentation and recognition of gestures in robotic surgery. IEEE Trans. Biomed. Eng. 64(9), 2025–2041 (2017)
  2. Carreira, J., Zisserman, A.: Quo vadis, action recognition? A new model and the Kinetics dataset. In: CVPR, pp. 4724–4733. IEEE (2017)
  3. DiPietro, R., Lea, C., Malpani, A., Ahmidi, N., Vedula, S.S., Lee, G.I., et al.: Recognizing surgical activities with recurrent neural networks. In: Ourselin, S., Joskowicz, L., Sabuncu, M.R., Unal, G., Wells, W. (eds.) MICCAI 2016. LNCS, vol. 9900, pp. 551–558. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46720-7_64
  4. Hara, K., Kataoka, H., Satoh, Y.: Learning spatio-temporal features with 3D residual networks for action recognition. In: ICCV-W, pp. 3154–3160. IEEE (2017)
  5. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: CVPR, pp. 770–778. IEEE (2016)
  6. Ji, S., Xu, W., Yang, M., Yu, K.: 3D convolutional neural networks for human action recognition. IEEE Trans. Pattern Anal. Mach. Intell. 35(1), 221–231 (2013)
  7. Kingma, D.P., Ba, J.: Adam: a method for stochastic optimization. In: ICLR (2015)
  8. Lea, C., Flynn, M.D., Vidal, R., Reiter, A., Hager, G.D.: Temporal convolutional networks for action segmentation and detection. In: CVPR, pp. 156–165. IEEE (2017)
  9. Lea, C., Reiter, A., Vidal, R., Hager, G.D.: Segmental spatiotemporal CNNs for fine-grained action segmentation. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9907, pp. 36–52. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46487-9_3
  10. Lea, C., Vidal, R., Reiter, A., Hager, G.D.: Temporal convolutional networks: a unified approach to action segmentation. In: Hua, G., Jégou, H. (eds.) ECCV 2016. LNCS, vol. 9915, pp. 47–54. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-49409-8_7
  11. Liu, D., Jiang, T.: Deep reinforcement learning for surgical gesture segmentation and classification. In: Frangi, A.F., Schnabel, J.A., Davatzikos, C., Alberola-López, C., Fichtinger, G. (eds.) MICCAI 2018. LNCS, vol. 11073, pp. 247–255. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-00937-3_29
  12. Tao, L., Elhamifar, E., Khudanpur, S., Hager, G.D., Vidal, R.: Sparse hidden Markov models for surgical gesture classification and skill evaluation. In: Abolmaesumi, P., Joskowicz, L., Navab, N., Jannin, P. (eds.) IPCAI 2012. LNCS, vol. 7330, pp. 167–177. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-30618-1_17
  13. Tao, L., Zappella, L., Hager, G.D., Vidal, R.: Surgical gesture segmentation and recognition. In: Mori, K., Sakuma, I., Sato, Y., Barillot, C., Navab, N. (eds.) MICCAI 2013. LNCS, vol. 8151, pp. 339–346. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-40760-4_43
  14. Wang, L., Xiong, Y., Wang, Z., Qiao, Y., Lin, D., Tang, X., et al.: Temporal segment networks: towards good practices for deep action recognition. In: Leibe, B., Matas, J., Sebe, N., Welling, M. (eds.) ECCV 2016. LNCS, vol. 9912, pp. 20–36. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-46484-8_2

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Division of Translational Surgical Oncology, National Center for Tumor Diseases (NCT), Partner Site Dresden, Dresden, Germany
  2. Department for Visceral, Thoracic and Vascular Surgery, University Hospital Carl Gustav Carus, TU Dresden, Dresden, Germany
  3. Centre for Tactile Internet with Human-in-the-Loop (CeTI), TU Dresden, Dresden, Germany