Self-Supervised Domain Adaptation for Patient-Specific, Real-Time Tissue Tracking

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12263)

Abstract

Estimating tissue motion is crucial for automatic motion stabilization and guidance during surgery. However, endoscopic images often lack distinctive features, and fine tissue deformation can only be captured with dense tracking methods such as optical flow. To achieve high accuracy at high processing rates, we propose fine-tuning a fast optical flow model on an unlabeled, patient-specific image domain. We adopt multiple strategies to achieve unsupervised fine-tuning. First, we use a teacher-student approach to transfer knowledge from a slow but accurate teacher model to a fast student model. Second, we develop self-supervised tasks in which the model is encouraged to learn from different but related examples. Comparisons with out-of-the-box models show that our method achieves significantly better results. Our experiments also uncover the effects of different task combinations. We demonstrate that unsupervised fine-tuning can improve the performance of CNN-based tissue tracking, opening up a promising direction for future work.
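The abstract combines two ingredients: teacher-student knowledge transfer and auxiliary self-supervised tasks. The following is a minimal PyTorch sketch of how one such fine-tuning step could look, assuming a frozen, accurate teacher flow network and a fast student. The call signature `model(img1, img2)`, the horizontal-flip consistency task, and the loss weighting are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F


def finetune_step(student, teacher, frames, optimizer, ssl_weight=0.1):
    """One unsupervised fine-tuning step on unlabeled patient-specific frames.

    frames: tensor of shape (B, 2, 3, H, W) holding consecutive
    endoscopic image pairs; no ground-truth flow is available.
    """
    img1, img2 = frames[:, 0], frames[:, 1]

    # Teacher-student knowledge transfer: the frozen, slow-but-accurate
    # teacher provides dense pseudo ground truth for the fast student.
    with torch.no_grad():
        pseudo_flow = teacher(img1, img2)          # (B, 2, H, W)
    pred_flow = student(img1, img2)
    loss_kd = torch.norm(pred_flow - pseudo_flow, p=2, dim=1).mean()  # EPE

    # Self-supervised consistency task (an illustrative choice): the flow
    # of horizontally flipped inputs should equal the flipped flow of the
    # original inputs with the x component negated.
    pred_flip = student(torch.flip(img1, dims=[-1]),
                        torch.flip(img2, dims=[-1]))
    target = torch.flip(pred_flow.detach(), dims=[-1])
    target[:, 0] = -target[:, 0]                   # mirror the x component
    loss_ssl = F.l1_loss(pred_flip, target)

    loss = loss_kd + ssl_weight * loss_ssl         # weighting is a guess
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this reading, the teacher's predictions act as dense pseudo-labels on the unlabeled patient-specific frames, so no manual flow annotation is needed, while the flip-consistency term is one possible instance of learning from "different but related examples" as described in the abstract.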

Keywords

Patient-specific models · Motion estimation · Endoscopic surgery

Acknowledgements

This work received funding from the European Union as part of the EFRE OPhonLas project.

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Institut für Mechatronische Systeme, Leibniz Universität Hannover, Hanover, Germany
  2. Institut für Informationsverarbeitung, Leibniz Universität Hannover, Hanover, Germany
