
Task-Specific Surgical Skill Assessment with Neural Networks

  • Malik Benmansour
  • Wahida Handouzi
  • Abed Malti
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 914)

Abstract

Many studies on surgical skill analysis have reported results on the classification of different skill levels. However, beyond the classification problem, only a few of them have addressed the problem of task evaluation. In this paper, we propose a simple and computationally lightweight neural network that provides evaluation scores for a given surgical task. The proposed network has three hidden layers and a single output node, and its output is trained to fit the average performance scores of a single known surgical task. Three levels of performance are considered: expert, intermediate and novice. We evaluate the proposed approach on three different surgical gestures: knot-tying, needle passing and suturing. Each gesture is associated with one instantiation of the designed network, trained on the corresponding data. We show that this task-specific scheme yields evaluation scores that are more plausible than those of a single network asked to provide scores for all tasks.
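
To make the described architecture concrete, the following is a minimal sketch, not the authors' implementation: a per-task scoring network with three hidden layers and a single output node, trained by regression onto average performance scores, with one independent instantiation per surgical gesture. The framework (PyTorch), layer widths, input dimensionality and optimiser settings are illustrative assumptions only.

# Minimal sketch, assuming PyTorch; the layer widths and the input
# dimensionality (n_features) are placeholders, not values from the paper.
import torch
import torch.nn as nn

class TaskSkillScorer(nn.Module):
    """Three hidden layers, one output node: regresses a task-specific score."""
    def __init__(self, n_features=76, hidden=(64, 32, 16)):
        super().__init__()
        h1, h2, h3 = hidden
        self.net = nn.Sequential(
            nn.Linear(n_features, h1), nn.ReLU(),
            nn.Linear(h1, h2), nn.ReLU(),
            nn.Linear(h2, h3), nn.ReLU(),
            nn.Linear(h3, 1),   # single output: evaluation score for one task
        )

    def forward(self, x):
        return self.net(x)

def train_task_model(model, features, scores, epochs=200, lr=1e-3):
    """Fit one network so its output matches average performance scores
    (expert / intermediate / novice) for a single surgical task."""
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimiser.zero_grad()
        loss = loss_fn(model(features), scores)
        loss.backward()
        optimiser.step()
    return model

# One independent instantiation per surgical gesture, trained on its own data.
models = {task: TaskSkillScorer() for task in
          ("knot_tying", "needle_passing", "suturing")}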

Keywords

Skill assessment · Deep learning

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. AutoMed (LAT), Tlemcen University, Tlemcen, Algeria
