
Weakly-Supervised Cell Tracking via Backward-and-Forward Propagation

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12357)

Abstract

We propose a weakly-supervised cell tracking method that can train a convolutional neural network (CNN) using only cell-detection annotations (i.e., the coordinates of cell positions), without any association information; such positions can be obtained easily by nuclear staining. First, we train a co-detection CNN that detects cells in successive frames using these weak labels. Our key assumption is that the co-detection CNN implicitly learns association in addition to detection. To obtain the association, we propose a backward-and-forward propagation method that analyzes the correspondence of cell positions in the outputs of the co-detection CNN. Experiments demonstrated that the proposed method can associate cells by analyzing the co-detection CNN. Even though it uses only weak supervision, our method performed almost as well as the state-of-the-art supervised method. Code is publicly available at https://github.com/naivete5656/WSCTBFP.
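
To make the association step concrete, below is a minimal sketch of the backward-propagation idea only, not the authors' released implementation. It assumes a trained PyTorch co-detection model (here called net) that takes two consecutive frames stacked as input channels and outputs a pair of detection heatmaps; net, backward_propagate, and the tensor layout are illustrative assumptions.

    import torch

    def backward_propagate(net, frame_t, frame_t1, peak_t1):
        """Trace the detection response at peak_t1 (a (row, col) peak in the
        frame t+1 heatmap) back to frame t via the input gradient.

        Assumes net maps a (1, 2, H, W) pair of frames to (1, 2, H, W)
        detection heatmaps; all names here are illustrative, not the paper's API.
        """
        x = torch.stack([frame_t, frame_t1]).unsqueeze(0).detach().requires_grad_(True)
        heatmaps = net(x)
        r, c = peak_t1
        # Backpropagate only the response of the selected peak in the t+1 heatmap.
        heatmaps[0, 1, r, c].backward()
        # The gradient magnitude on the frame-t input channel highlights the
        # region that produced this detection, i.e., the corresponding cell.
        relevance = x.grad[0, 0].abs()
        idx = int(torch.argmax(relevance))
        return divmod(idx, relevance.shape[1])  # (row, col) of the matched cell in frame t

Under these assumptions, repeating the propagation in the forward direction (from the frame-t position toward the frame t+1 heatmap) and checking that the two positions agree would provide the backward-and-forward consistency used to accept an association, yielding position pairs as tracking links without requiring any association labels.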

Keywords

Cell tracking · Weakly-supervised learning · Multi-object tracking · Cell detection · Tracking · Weakly-supervised tracking

Notes

Acknowledgement

This work was supported by JSPS KAKENHI Grant Number 20H04211.

Supplementary material

Supplementary material 1 (mp4 74426 KB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Kyushu University, Fukuoka, Japan
  2. The Chinese University of Hong Kong, Sha Tin, Hong Kong
