Improving Cytoarchitectonic Segmentation of Human Brain Areas with Self-supervised Siamese Networks
Cytoarchitectonic parcellations of the human brain serve as anatomical references in multimodal atlas frameworks. They are based on the analysis of cell-body-stained histological sections and the identification of borders between brain areas. The de facto standard involves semi-automatic, reproducible border detection, but it does not scale to high-throughput imaging of large series of sections at microscopic resolution. Automatic parcellation, however, is extremely challenging due to the high variation in the data and the need for a large field of view at microscopic resolution. The performance of a recently proposed Convolutional Neural Network model that addresses this problem suffers in particular from the naturally limited amount of expert annotations available for training. To circumvent this limitation, we propose to pre-train neural networks on a self-supervised auxiliary task: predicting the 3D distance between two patches sampled from the same brain. Compared to a random initialization, fine-tuning from these networks results in significantly better segmentations. We show that the self-supervised model has implicitly learned to distinguish several cortical brain areas – a strong indicator that the proposed auxiliary task is appropriate for cytoarchitectonic mapping.
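The auxiliary task described above amounts to sampling pairs of image patches from a reconstructed 3D brain volume and regressing the physical distance between their locations. The following is a minimal, hypothetical sketch of how such training pairs could be constructed; the function name, the array layout (sections × height × width), and the spacing values are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def sample_patch_pair(volume, spacing_mm, patch_size, rng):
    """Sample two 2D patches from a 3D section stack and return them
    together with the Euclidean distance (in mm) between their centre
    points, which serves as the regression target for the
    self-supervised auxiliary task.

    volume     : (sections, height, width) array of stained sections
    spacing_mm : per-axis physical voxel spacing, e.g. coarse along
                 the sectioning axis, fine in-plane (assumed values)
    """
    patches, centres = [], []
    for _ in range(2):
        z = rng.integers(0, volume.shape[0])
        y = rng.integers(0, volume.shape[1] - patch_size)
        x = rng.integers(0, volume.shape[2] - patch_size)
        patches.append(volume[z, y:y + patch_size, x:x + patch_size])
        # Convert the patch centre from voxel indices to millimetres.
        centre = np.array([z, y + patch_size / 2, x + patch_size / 2])
        centres.append(centre * spacing_mm)
    target = np.linalg.norm(centres[0] - centres[1])
    return patches[0], patches[1], target
```

A Siamese network would then embed both patches with shared weights and regress this distance target, so that no manual annotation is needed to generate arbitrarily many training pairs.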
Keywords: Self-supervised learning · Deep learning · Brain parcellation · Human brain · Histology
This work was partially supported by the Helmholtz Association through the Helmholtz Portfolio Theme “Supercomputing and Modeling for the Human Brain”, and by the European Union’s Horizon 2020 Framework Programme for Research and Innovation under Grant Agreements No. 720270 (Human Brain Project SGA1) and No. 785907 (Human Brain Project SGA2). Computing time was granted by the John von Neumann Institute for Computing (NIC) and provided on the supercomputer JURECA at Jülich Supercomputing Centre (JSC).