A Neural Network-based Suture-tension Estimation Method Using Spatio-temporal Features of Visual Information and Robot-state Information for Robot-assisted Surgery

  • Regular Papers
  • Robot and Applications

International Journal of Control, Automation and Systems

Abstract

In robot-assisted minimally invasive surgery, there is a risk of skin tissue damage or suture failure at the suture site owing to improper tension. To avoid these problems and improve the accuracy of tension prediction, this study proposes a suture-tension prediction method based on spatio-temporal features that simultaneously uses visual information obtained from surgical suture images and robot-state changes over time. The proposed method can assist minimally invasive robotic surgical techniques by predicting suture tension through a neural network that takes image and robot information as inputs, without additional equipment. The neural network structure of the proposed method was constructed from ShuffleNet V2+ and a spatio-temporal long short-term memory (LSTM) network, which are well suited to tension prediction. To validate the constructed neural network, we performed suturing experiments using biological tissue and created a training database. We trained the proposed model on this database and found that the estimated suture-tension values closely matched the actual tension values. We also found that the estimated tension values were more accurate than those of other neural network models.
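To make the described architecture concrete, the following is a minimal PyTorch-style sketch of a multimodal tension estimator of the kind the abstract outlines: a lightweight CNN backbone extracts per-frame visual features, these are fused with robot-state vectors, and a recurrent layer regresses tension over time. This is an illustrative assumption, not the authors' implementation: the paper uses ShuffleNet V2+ and a spatio-temporal LSTM, whereas the sketch substitutes a small stand-in CNN and a plain LSTM, and all module names, dimensions, and hyperparameters here are hypothetical.

```python
# Hypothetical sketch only: a stand-in CNN replaces ShuffleNet V2+, and a
# plain LSTM replaces the paper's spatio-temporal LSTM. All sizes are
# illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn

class TensionEstimator(nn.Module):
    def __init__(self, robot_state_dim=6, feat_dim=128, hidden_dim=64):
        super().__init__()
        # Per-frame visual feature extractor (stand-in backbone).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Temporal model over fused image + robot-state features.
        self.lstm = nn.LSTM(feat_dim + robot_state_dim, hidden_dim,
                            batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)  # scalar tension estimate

    def forward(self, images, robot_states):
        # images: (B, T, 3, H, W); robot_states: (B, T, robot_state_dim)
        B, T = images.shape[:2]
        feats = self.backbone(images.flatten(0, 1)).view(B, T, -1)
        fused = torch.cat([feats, robot_states], dim=-1)
        out, _ = self.lstm(fused)
        return self.head(out[:, -1])  # tension at the final time step

# Example: 2 sequences of 8 frames (64x64 RGB) with 6-D robot states.
model = TensionEstimator()
tension = model(torch.randn(2, 8, 3, 64, 64), torch.randn(2, 8, 6))
print(tension.shape)  # torch.Size([2, 1])
```

The key design point reflected here is the fusion step: visual features and robot-state signals are concatenated per time step before the recurrent layer, so the temporal model can exploit correlations between what the camera sees and how the robot moves, which is the premise of the paper's sensorless tension estimation.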



Author information

Corresponding author

Correspondence to Soo-Chul Lim.

Ethics declarations

The authors declare that they have no competing or conflicting interests.

Additional information

Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work was supported by the National Research Foundation of Korea (NRF) Grant funded by the Korean Government (NRF-2020R1A2C1008883).

Dong-Han Lee received his B.S., M.S., and Ph.D. degrees in mechanical engineering from Dongguk University, Seoul, Korea, in 2013, 2016, and 2021, respectively. His current research interests include human-computer interaction, human-robot interface, machine learning, computer vision, surgical robot, and haptics.

Kyung-Soo Kwak received his B.S. and M.S. degrees in mechanical, robotics, and energy engineering from Dongguk University, Seoul, Korea, in 2018 and 2020, respectively. His research interests include human-robot interaction, surgical robot, deep learning, and haptics.

Soo-Chul Lim received his B.S., M.S., and Ph.D. degrees in mechanical engineering from the Korea Advanced Institute of Science and Technology, Daejeon, Korea, in 2001, 2003, and 2011, respectively. From 2006 to 2009, he was a full-time Lecturer with the Department of Mechanical Engineering, Korea Military Academy. From 2011 to 2016, he was a Research Staff Member with the Samsung Advanced Institute of Technology. In 2016, he joined the Department of Mechanical, Robotics, and Energy Engineering, Dongguk University, Seoul, Korea, as an Associate Professor. His current research interests include human-robot interaction, deep learning, surgical robots, and haptics.


Cite this article

Lee, DH., Kwak, KS. & Lim, SC. A Neural Network-based Suture-tension Estimation Method Using Spatio-temporal Features of Visual Information and Robot-state Information for Robot-assisted Surgery. Int. J. Control Autom. Syst. 21, 4032–4040 (2023). https://doi.org/10.1007/s12555-022-0469-x
