Abstract
With the increased presence of automated devices such as C-arms and medical robots, and the introduction of a multitude of surgical tools, navigation systems and patient monitoring devices, collision avoidance has become an issue of practical value in interventional environments. In this paper, we present a real-time 3D reconstruction system for interventional environments which aims at predicting collisions by building a 3D representation of all the objects in the room. The 3D reconstruction is used to determine whether other objects are inside the working volume of a device and to alert the medical staff before a collision occurs. In the case of C-arms, this allows faster rotational and angular movement, which could, for instance, be used in 3D angiography to obtain a better reconstruction of contrasted vessels. The system also prevents the staff from unknowingly entering the working volume of a device, which is particularly relevant in complex environments with many devices. The recovered 3D representation also opens the path to many new applications, such as workflow analysis, 3D video generation or interventional room planning. To validate our claims, we performed several experiments with a real C-arm that show the validity of the approach. The system is currently being transferred to an interventional room in our university hospital.
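The abstract describes the collision check only at a high level: the room is reconstructed in 3D, and an alarm is raised when a reconstructed object enters a device's working volume. As a minimal illustrative sketch (not the authors' implementation), assuming the reconstruction is discretized into voxels and the planned device motion is expressed as the set of voxels it will sweep through, the test reduces to a set intersection:

```python
# Hypothetical sketch: `occupied` is the set of voxel indices filled by
# reconstructed objects (staff, tables, ...); `working_volume` is the set of
# voxels the device (e.g., a C-arm) will sweep through during its next motion.
# A warning should be raised as soon as the two sets intersect.

def collision_alert(occupied: set, working_volume: set) -> bool:
    """True if any reconstructed object voxel lies inside the device's working volume."""
    return not occupied.isdisjoint(working_volume)

# Toy example: voxel indices as (x, y, z) tuples on a coarse grid.
sweep = {(x, y, z) for x in range(2, 6) for y in range(2, 6) for z in range(2, 6)}

print(collision_alert({(8, 8, 8)}, sweep))  # object far from the planned sweep
print(collision_alert({(3, 3, 3)}, sweep))  # object inside the working volume
```

In a real system the occupied set would come from a silhouette-based (visual hull) reconstruction updated every frame, and the working volume would be derived from the device's planned trajectory; both names here are assumptions for illustration.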
Copyright information
© 2008 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Ladikos, A., Benhimane, S., Navab, N. (2008). Real-Time 3D Reconstruction for Collision Avoidance in Interventional Environments. In: Metaxas, D., Axel, L., Fichtinger, G., Székely, G. (eds) Medical Image Computing and Computer-Assisted Intervention – MICCAI 2008. MICCAI 2008. Lecture Notes in Computer Science, vol 5242. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-85990-1_63
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-85989-5
Online ISBN: 978-3-540-85990-1
eBook Packages: Computer Science (R0)