
An Effective Microscopic Detection Method for Automated Silicon-Substrate Ultra-microtome (ASUM)

  • Long Cheng
  • Weizhou Liu

Abstract

Three-dimensional (3D) representation of whole-brain cellular connectomics is a fundamental challenge for brain-inspired intelligence, and the orderly, automatic collection of brain sections on silicon substrates is essential for 3D imaging of cerebral ultrastructure. With the self-designed automated silicon-substrate ultra-microtome (ASUM), serial brain sections can be collected in order on circular silicon substrates. To automate the collection process and further improve its efficiency, a form-invariant Single Shot MultiBox Detector (SSD) is proposed to detect the brain sections and baffles in the microscope's field of view, and a Cycle-Consistent Generative Adversarial Network (CycleGAN) data augmentation method is proposed to alleviate the scarcity of samples in the collected microscopic image dataset. The experimental results suggest that the proposed method can effectively detect the foreground objects in the microscopic images.
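The paper does not include code, but the following minimal sketch illustrates how an SSD-style detector of the kind described in the abstract might be applied to a single microscope frame, assuming a PyTorch/torchvision environment. The class names, score threshold, and image path are illustrative placeholders, and the model here is randomly initialised rather than trained with the authors' form-invariant SSD and CycleGAN-augmented dataset.

```python
# Minimal sketch (not the authors' implementation): run an SSD detector on one
# microscope image to localise brain sections and baffles.
import torch
from PIL import Image
from torchvision.models.detection import ssd300_vgg16
from torchvision.transforms.functional import to_tensor

CLASSES = ["__background__", "brain_section", "baffle"]  # hypothetical labels

# Build an SSD300 detector for the classes above (untrained weights here;
# in practice it would be trained on the augmented microscopic image dataset).
model = ssd300_vgg16(num_classes=len(CLASSES))
model.eval()

image = Image.open("field_of_view.png").convert("RGB")   # placeholder path
inputs = [to_tensor(image)]                              # list of CxHxW tensors in [0, 1]

with torch.no_grad():
    outputs = model(inputs)[0]  # dict with 'boxes', 'labels', 'scores'

# Keep only confident detections and report their bounding boxes.
keep = outputs["scores"] > 0.5
for box, label in zip(outputs["boxes"][keep], outputs["labels"][keep]):
    x1, y1, x2, y2 = box.tolist()
    print(f"{CLASSES[int(label)]}: ({x1:.0f}, {y1:.0f}) - ({x2:.0f}, {y2:.0f})")
```

A CycleGAN augmentation step, as mentioned in the abstract, would translate images between domains (for example, between differently illuminated microscope frames) to enlarge the small training set before fitting the detector; that component is omitted from this sketch.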

Keywords

Microscopic object detection · Deep learning · Data augmentation · Serial sections

Notes

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grants 61873268 and 61633016, in part by the Research Fund for Young Top-Notch Talent of the National Ten Thousand Talent Program, and in part by the Beijing Municipal Natural Science Foundation under Grant 4162066.


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
  2. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
