ARAMIS: Augmented Reality Assistance for Minimally Invasive Surgery Using a Head-Mounted Display

  • Long Qian (email author)
  • Xiran Zhang
  • Anton Deguet
  • Peter Kazanzides
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11768)

Abstract

We propose ARAMIS, a solution that provides real-time “x-ray see-through vision” of a patient’s internal structure to the surgeon via an optical see-through head-mounted display (OST-HMD) during minimally invasive laparoscopic surgery. ARAMIS takes the stereo images from a binocular endoscope, reconstructs a dense point cloud with a GPU-accelerated semi-global matching algorithm on a per-frame basis, and wirelessly streams the point cloud to an untethered OST-HMD (currently, a Microsoft HoloLens) for visualization. The OST-HMD localizes the endoscope distal tip by fusing fiducial-based tracking with its own self-localization. The point cloud is rendered on the OST-HMD with a custom shader that supports our data-efficient point cloud representation. ARAMIS visualizes the reconstructed point cloud (184k points) at 41.27 Hz with an end-to-end latency of 178.3 ms. A user study with 25 subjects, including 2 experienced users, compared ARAMIS to conventional laparoscopy during a peg transfer task on a deformable phantom. Results showed no significant difference in task completion time, but users generally preferred ARAMIS and reported improved intuitiveness, hand-eye coordination, and depth perception. Inexperienced users showed a stronger preference for ARAMIS and achieved higher task success rates with the system, whereas the two experienced users indicated a slight preference for ARAMIS and succeeded in all tasks with and without assistance.
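
The core of the pipeline described above is per-frame dense stereo reconstruction via semi-global matching, followed by back-projection of disparities into a 3D point cloud. As an illustrative sketch only, the code below uses OpenCV's CPU StereoSGBM implementation of semi-global matching; the matcher parameters, the disparity-to-depth matrix Q, and the reconstruct helper are assumptions for illustration and are not taken from ARAMIS, whose GPU-accelerated implementation and parameter choices are not given here.

    import cv2
    import numpy as np

    # Illustrative semi-global matching setup (Hirschmuller's SGM as exposed
    # by OpenCV). All parameter values are assumptions, not those of ARAMIS.
    block_size = 5
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,          # search range; must be divisible by 16
        blockSize=block_size,
        P1=8 * block_size ** 2,      # penalty for +/-1 disparity changes
        P2=32 * block_size ** 2,     # penalty for larger disparity jumps
        uniquenessRatio=10,
        speckleWindowSize=100,
        speckleRange=2,
    )

    def reconstruct(left_bgr, right_bgr, Q):
        """Back-project one rectified stereo frame into a colored point cloud.

        Q is the 4x4 disparity-to-depth matrix from stereo calibration
        (e.g. cv2.stereoRectify). Returns (N, 3) points in the left-camera
        frame and the matching (N, 3) BGR colors.
        """
        left_gray = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
        right_gray = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)

        # StereoSGBM returns fixed-point disparities scaled by 16.
        disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

        points = cv2.reprojectImageTo3D(disparity, Q)   # H x W x 3, metric units
        valid = disparity > 0                           # drop unmatched pixels
        return points[valid], left_bgr[valid]

In ARAMIS the resulting per-frame point cloud is then streamed wirelessly to the OST-HMD and rendered there with a custom shader; the sketch covers only the reconstruction step.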

Keywords

Augmented Reality · Minimally invasive surgery · Laparoscopic surgery · Head-mounted display · Microsoft HoloLens

Supplementary material

Supplementary material 1 (mp4 40532 KB)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Long Qian (1, email author)
  • Xiran Zhang (1)
  • Anton Deguet (1)
  • Peter Kazanzides (1)

  1. Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, USA
