Unified Detection and Tracking in Retinal Microsurgery

  • Raphael Sznitman
  • Anasuya Basu
  • Rogerio Richa
  • Jim Handa
  • Peter Gehlbach
  • Russell H. Taylor
  • Bruno Jedynak
  • Gregory D. Hager
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6891)

Abstract

Traditionally, tool tracking involves two subtasks: (i) detecting the tool in the initial image in which it appears, and (ii) predicting and refining the configuration of the detected tool in subsequent images. With retinal microsurgery in mind, we propose a unified tool detection and tracking framework, removing the need for two separate systems. The basis of our approach is to treat both detection and tracking as a sequential entropy minimization problem, where the goal is to determine the parameters describing a surgical tool in each frame. The resulting framework is capable of both detecting and tracking in situations where the tool enters and leaves the field of view regularly. We demonstrate the benefits of this method in the context of retinal tool tracking. Through extensive experimentation on a phantom eye, we show that this method provides efficient and robust tool tracking and detection.
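
The abstract states the approach at a high level only. As a rough, illustrative sketch of sequential entropy minimization (in the spirit of the active testing framework of Geman and Jedynak), the toy Python example below estimates a 1-D tool-tip position on a discretized grid by repeatedly choosing the noisy binary query that minimizes the expected posterior entropy. The grid, the "is the tip left of cell t?" query model, and all function names are assumptions made for exposition, not the authors' actual measurement model.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete distribution (0 log 0 := 0)."""
    nz = p[p > 0]
    return -np.sum(nz * np.log2(nz))

def expected_entropy_after_query(prior, t, eps):
    """Expected posterior entropy if we ask the noisy binary question
    'is the tool tip left of cell t?' (the answer flips with prob. eps)."""
    grid = np.arange(len(prior))
    lik_yes = np.where(grid < t, 1 - eps, eps)   # P(answer = yes | state)
    h = 0.0
    for lik in (lik_yes, 1 - lik_yes):           # both possible answers
        p_ans = np.dot(lik, prior)               # marginal P(answer)
        if p_ans > 0:
            post = lik * prior / p_ans           # Bayes update
            h += p_ans * entropy(post)
    return h

def locate(prior, true_pos, eps=0.1, h_stop=0.5, rng=None, max_steps=50):
    """Sequentially pick the most informative query until the posterior
    entropy over tool positions drops below h_stop."""
    rng = rng or np.random.default_rng(0)
    post = prior.copy()
    for _ in range(max_steps):
        if entropy(post) < h_stop:
            break
        # Greedy step: the query minimizing expected posterior entropy.
        t = min(range(1, len(post)),
                key=lambda t: expected_entropy_after_query(post, t, eps))
        truth = true_pos < t
        answer = truth if rng.random() > eps else not truth  # noisy sensor
        grid = np.arange(len(post))
        lik = np.where(grid < t, 1 - eps, eps)
        lik = lik if answer else 1 - lik
        post = lik * post
        post /= post.sum()
    return post

# A uniform prior plays the role of detection: no prior knowledge of the tool.
n = 64
uniform = np.full(n, 1.0 / n)
post = locate(uniform, true_pos=40)
print("MAP estimate:", np.argmax(post))
```

In this picture, detection and tracking differ only in the prior handed to `locate`: a uniform prior when the tool may have just entered the field of view, and the previous frame's posterior (diffused by a motion model) while tracking, which is how a single sequential estimation loop can subsume both subtasks.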

Keywords

Tool Detection, Surgical Tool, Tracking Loop, Simple Detect, Tool Parameter

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Raphael Sznitman¹
  • Anasuya Basu¹
  • Rogerio Richa¹
  • Jim Handa¹
  • Peter Gehlbach¹
  • Russell H. Taylor¹
  • Bruno Jedynak¹
  • Gregory D. Hager¹

  1. Johns Hopkins University, Baltimore, USA
