
Joint Multi-frame Detection and Segmentation for Multi-cell Tracking

  • Zibin Zhou
  • Fei Wang
  • Wenjuan Xi
  • Huaying Chen
  • Peng Gao
  • Chengkang He
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11902)

Abstract

Tracking living cells in video sequences is difficult because of cell morphology and the high similarity between cells. Tracking-by-detection methods are widely used for multi-cell tracking. We perform multi-cell tracking based on cell centroid detection, so the performance of the detector has a strong impact on tracking quality. In this paper, a UNet is used to extract inter-frame and intra-frame spatio-temporal information about the cells, and the multi-frame input improves detection of cells in the mitotic phase. Good detection results in turn facilitate multi-cell tracking. A mitosis detection algorithm is proposed to detect cell divisions and build up the cell lineage. A second UNet is used to obtain a primary segmentation; by jointly using the detections and this primary segmentation, cells can be finely segmented even in highly dense populations. Experiments are conducted to evaluate the effectiveness of our method, and the results show state-of-the-art performance.
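For illustration only, the sketch below shows a generic tracking-by-detection linking step on detected cell centroids, of the kind the abstract refers to: centroids in consecutive frames are associated by Hungarian assignment with a distance gate. The function name link_centroids, the distance threshold, and the toy data are assumptions made for this example, not the authors' implementation; their actual association and mitosis-handling logic is described in the full paper.

```python
# A minimal, generic sketch of centroid-based tracking-by-detection linking.
# NOT the authors' method: the threshold and toy data are illustrative.
import numpy as np
from scipy.optimize import linear_sum_assignment


def link_centroids(prev, curr, max_dist=20.0):
    """Match centroids of frame t-1 (prev) to frame t (curr).

    prev: (N, 2) array of (row, col) centroids; curr: (M, 2) array.
    Returns a list of (i, j) index pairs; unmatched current detections
    start new tracks, unmatched previous ones may correspond to cells
    leaving the field of view or dividing.
    """
    if len(prev) == 0 or len(curr) == 0:
        return []
    # Pairwise Euclidean distances between previous and current centroids.
    cost = np.linalg.norm(prev[:, None, :] - curr[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    # Keep only spatially plausible assignments.
    return [(int(i), int(j)) for i, j in zip(rows, cols) if cost[i, j] <= max_dist]


if __name__ == "__main__":
    prev = np.array([[10.0, 10.0], [40.0, 52.0]])
    curr = np.array([[12.0, 11.0], [41.0, 55.0], [80.0, 80.0]])  # one new cell
    print(link_centroids(prev, curr))  # -> [(0, 0), (1, 1)]
```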

Keywords

Multi-frame · Segmentation · Joint · Multi-object · Cell tracking


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Zibin Zhou (1)
  • Fei Wang (1), corresponding author
  • Wenjuan Xi (1)
  • Huaying Chen (2)
  • Peng Gao (1)
  • Chengkang He (1)
  1. School of Electronics and Information Engineering, Harbin Institute of Technology, Shenzhen, China
  2. School of Mechanical Engineering and Automation, Harbin Institute of Technology, Shenzhen, China
