Abstract
Advances in haptic technology have had a significant impact on robotic-assisted minimally invasive surgery (RAMIS) in recent years. Delivering tactile feedback to the surgeon plays an important role in improving the user experience in RAMIS. This work proposes a modified Inception ResNet network combined with dimensionality reduction to regenerate the variable forces produced during surgical intervention. Datasets were collected from two ex vivo porcine skin samples and one artificial skin sample to validate the results. The proposed framework models both spatial and temporal data collected from the sensors, tissue, manipulators, and surgical tools. Evaluations are based on three distinct datasets with modest variations in tissue properties. The proposed framework improves force prediction accuracy by 10.81% over RNN, 6.02% over RNN + LSTM, and 3.81% over the CNN + LSTM framework, and torque prediction accuracy by 12.41% over RNN, 5.75% over RNN + LSTM, and 3.75% over CNN + LSTM. A sensitivity study shows that features such as torque (96.93%), deformation (94.02%), position (93.98%), vision (92.12%), tool diameter (89.24%), stiffness (87.95%), rotation (65.10%), and orientation (62.51%) have correspondingly strong influences on the predicted force. The quality of the predicted force improved by 2.18% when feature selection and dimensionality reduction were applied to features collected from tool, manipulator, tissue, and vision data and the reduced features were processed simultaneously in all four architectures. The method has potential applications in online surgical tasks and surgeon training.
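The abstract reports a 2.18% accuracy gain from applying dimensionality reduction to the concatenated tool, manipulator, tissue, and vision features before feeding them to the networks. The exact reduction method is not specified in this excerpt; as a minimal illustrative sketch, the following shows a PCA-style projection of a hypothetical multimodal feature matrix (all array names and dimensions are assumptions, not from the paper):

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project a (samples x features) matrix onto its top principal
    components -- one common form of dimensionality reduction."""
    Xc = X - X.mean(axis=0)          # centre each feature column
    # SVD of the centred data; rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T  # reduced representation

rng = np.random.default_rng(0)
# Hypothetical concatenated features: tool + manipulator + tissue + vision
X = rng.normal(size=(200, 32))
Z = pca_reduce(X, 8)
print(Z.shape)  # → (200, 8)
```

The reduced matrix `Z` would then serve as the per-timestep input to the recurrent or convolutional force-prediction networks compared in the study; the component count here is arbitrary.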
Data Availability
The code and data will be made available from the corresponding author upon reasonable request.
Acknowledgements
The authors would like to thank Sri Ramachandra Medical College and Research Institute and the University with Potential for Excellence (UPE) manufacturing facility at Madras Institute of Technology, Anna University, for providing the opportunity and facilities required to carry out this work. The authors also thank the University Grants Commission (UGC) for financial support and express their gratitude to everyone who contributed to the project.
Funding
This study was funded by the University Grants Commission (F./2017–18/NFO-2017–18-OBC-KER-60500), Government of India.
Author information
Authors and Affiliations
Contributions
All authors contributed to the study conception and design. Methodology, data curation, software, formal analysis, investigation, visualization, and writing of the original draft were performed by Sabique P.V. Supervision, review, and editing were performed by Ganesh Pasupathy and Kalaimagal S.
Corresponding author
Ethics declarations
Conflict of Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Ethical Approval
All applicable international, national, and Anna University guidelines for the care and use of animals were followed, and all experiments were conducted in accordance with the policies of the Animal Welfare Board of India. No formal consent is required for this type of study.
Consent to Participate
All individuals expressed their informed consent to participate in the study. No formal consent was necessary for this type of study.
Consent to Publish
All individuals expressed their consent to publish the work.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Sabique, P.V., Pasupathy, G., Kalaimagal, S. et al. A Stereovision-based Approach for Retrieving Variable Force Feedback in Robotic-Assisted Surgery Using Modified Inception ResNet V2 Networks. J Intell Robot Syst 110, 81 (2024). https://doi.org/10.1007/s10846-024-02100-8