Abstract
We present SurgeonAssist-Net, a lightweight framework that makes action- and workflow-driven virtual assistance for a set of predefined surgical tasks accessible to commercially available optical see-through head-mounted displays (OST-HMDs). On a widely used benchmark dataset for laparoscopic surgical workflow, our implementation competes with state-of-the-art approaches in prediction accuracy for automated task recognition, yet requires \(7.4\times\) fewer parameters and \(10.2\times\) fewer floating-point operations (FLOPs), is \(7.0\times\) faster for inference on a CPU, and is capable of near real-time performance on the Microsoft HoloLens 2 OST-HMD. To achieve this, we use an efficient convolutional neural network (CNN) backbone to extract discriminative features from image data and a low-parameter recurrent neural network (RNN) architecture to learn long-term temporal dependencies. To demonstrate the feasibility of our approach for inference on the HoloLens 2, we created a sample dataset that included video of several surgical tasks recorded from a user-centric point of view. After training, we deployed our model and cataloged its performance in an online simulated surgical scenario for the prediction of the current surgical task. The utility of our approach is explored in the discussion of several relevant clinical use-cases. Our code is publicly available at https://github.com/doughtmw/surgeon-assist-net.
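The CNN-plus-RNN design described above can be sketched as follows. This is a minimal, hedged illustration only: a per-frame convolutional backbone feeds a single-layer GRU that aggregates temporal context before a task classifier. The tiny stand-in backbone, layer sizes, and class count (7 tasks) are all illustrative assumptions, not the paper's actual EfficientNet-Lite-based implementation.

```python
# Sketch of a lightweight CNN backbone + low-parameter GRU head for
# surgical task recognition from short video clips (assumed shapes).
import torch
import torch.nn as nn


class SurgicalTaskNet(nn.Module):
    def __init__(self, num_tasks: int = 7, feat_dim: int = 64, hidden: int = 128):
        super().__init__()
        # Placeholder backbone: in the actual work this would be an
        # efficient pretrained CNN (e.g. an EfficientNet-Lite variant).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Low-parameter recurrent head for long-term temporal context.
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_tasks)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, time, channels, height, width)
        b, t = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1)).view(b, t, -1)
        out, _ = self.gru(feats)
        return self.head(out[:, -1])  # classify task from the last step


model = SurgicalTaskNet()
logits = model(torch.randn(2, 8, 3, 64, 64))  # 2 clips of 8 frames each
print(logits.shape)  # torch.Size([2, 7])
```

For on-device inference, a trained model of this shape could be exported to ONNX (as the paper's toolchain suggests) and run on the HoloLens 2.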
Acknowledgements
This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery program (RGPIN-2019-06367).
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Doughty, M., Singh, K., Ghugre, N.R. (2021). SurgeonAssist-Net: Towards Context-Aware Head-Mounted Display-Based Augmented Reality for Surgical Guidance. In: de Bruijne, M., et al. Medical Image Computing and Computer Assisted Intervention – MICCAI 2021. MICCAI 2021. Lecture Notes in Computer Science(), vol 12904. Springer, Cham. https://doi.org/10.1007/978-3-030-87202-1_64
Print ISBN: 978-3-030-87201-4
Online ISBN: 978-3-030-87202-1