A Method for Analyzing the Composition of Petrographic Thin Section Image

  • Lanfang Dong
  • Zhongya Zhang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11901)


The recognition and content analysis of the components in petrographic thin section images is a valuable problem in geology. In this paper, we propose a two-stage method for the segmentation and recognition of petrographic thin section images. In the first stage, we propose an image segmentation algorithm, based on the SLIC algorithm, that adaptively determines the number of superpixels. The algorithm continually corrects the superpixel count during iteration and clusters the pixels of the image into superpixels using both color and spatial features. In the second stage, we design a convolutional neural network, train it with mineral grain images, and use it to classify the superpixels obtained in the first stage. Finally, we count the categories and content of the components in the image based on the segmentation and classification results. We collected a set of images and invited geologists to label them for the experiments. The experimental results demonstrate the following: (1) our proposed segmentation algorithm dynamically generates superpixels according to the number of mineral grains in the image; (2) the CNN model we designed accurately identifies the categories of superpixel regions while remaining small; (3) the two-stage method is effective at identifying the categories of the major components in an image and accurately estimating their content.
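The two-stage pipeline above can be sketched in simplified form. This is an illustrative sketch, not the authors' implementation: the function names `slic_like` and `content_fractions` are hypothetical, a fixed cluster count stands in for the paper's adaptive superpixel-number correction, and the final content estimate is computed directly from cluster labels rather than from a trained CNN's predictions.

```python
import numpy as np

def slic_like(image, n_segments=4, compactness=10.0, n_iter=5):
    """SLIC-style k-means over joint color + spatial features.

    Each pixel gets a 5-D feature vector (R, G, B, weighted y, weighted x);
    `compactness` trades color similarity against spatial proximity, scaled
    by the expected superpixel spacing, as in the original SLIC formulation.
    """
    h, w, _ = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    s = np.sqrt(h * w / n_segments)          # expected superpixel spacing
    feats = np.concatenate(
        [image.reshape(-1, 3).astype(float),
         (compactness / s) * np.stack([ys.ravel(), xs.ravel()], axis=1)],
        axis=1)
    # initialise cluster centers from evenly spread pixels
    idx = np.linspace(0, h * w - 1, n_segments).astype(int)
    centers = feats[idx].copy()
    for _ in range(n_iter):
        # assign every pixel to its nearest center, then recompute centers
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for k in range(n_segments):
            mask = labels == k
            if mask.any():
                centers[k] = feats[mask].mean(axis=0)
    return labels.reshape(h, w)

def content_fractions(label_map):
    """Area fraction of each labeled component (stand-in for the paper's
    per-category content statistics over CNN-classified superpixels)."""
    vals, counts = np.unique(label_map, return_counts=True)
    return {int(v): c / label_map.size for v, c in zip(vals, counts)}
```

In the paper's actual method, each superpixel region would be cropped and classified by the trained CNN before the per-category fractions are accumulated; here the raw cluster labels play that role only to keep the sketch self-contained.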


Keywords: Petrographic thin section image · Superpixels · Image segmentation · Image recognition · Component analysis


References

  1. Tetley, M.G., Daczko, N.R.: Virtual Petrographic Microscope: a multi-platform education and research software tool to analyse rock thin-sections. Aust. J. Earth Sci. 61(4), 631–637 (2014)
  2. Izadi, H., Sadri, J., Mehran, N.A.: A new intelligent method for minerals segmentation in thin sections based on a novel incremental color clustering. Comput. Geosci. 81, 38–52 (2015)
  3. Liu, M.Y., Tuzel, O., Ramalingam, S., Chellappa, R.: Entropy rate superpixel segmentation. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2097–2104. IEEE (2011)
  4. Levinshtein, A., Stere, A., Kutulakos, K.N., et al.: TurboPixels: fast superpixels using geometric flows. IEEE Trans. Pattern Anal. Mach. Intell. 31(12), 2290–2297 (2009)
  5. Li, Z., Chen, J.: Superpixel segmentation using linear spectral clustering. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1356–1363. IEEE (2015)
  6. Van den Bergh, M., Boix, X., Roig, G., et al.: SEEDS: superpixels extracted via energy-driven sampling. In: Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C. (eds.) ECCV 2012. LNCS, vol. 7578, pp. 13–26. Springer, Heidelberg (2012)
  7. Achanta, R., Shaji, A., Smith, K., et al.: SLIC superpixels compared to state-of-the-art superpixel methods. IEEE Trans. Pattern Anal. Mach. Intell. 34(11), 2274–2282 (2012)
  8. Jiang, F., Gu, Q., Hao, H.Z., et al.: Grain segmentation of multi-angle petrographic thin section microscopic images. In: IEEE International Conference on Image Processing (ICIP), pp. 3879–3883. IEEE (2017)
  9. Jiang, F., Gu, Q., Hao, H.Z., et al.: Feature extraction and grain segmentation of sandstone images based on convolutional neural networks. In: 24th International Conference on Pattern Recognition (ICPR), pp. 2636–2641. IEEE (2018)
  10. Tarquini, S., Favalli, M.: A microscopic information system (MIS) for petrographic analysis. Comput. Geosci. 36(5), 665–674 (2010)
  11. Asmussen, P., Conrad, O., Günther, A., et al.: Semi-automatic segmentation of petrographic thin section images using a seeded-region growing algorithm with an application to characterize weathered subarkose sandstone. Comput. Geosci. 83, 89–99 (2015)
  12. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, Lake Tahoe, pp. 1097–1105 (2012)
  13. He, K., Zhang, X., Ren, S., et al.: Deep residual learning for image recognition. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, pp. 770–778 (2016)
  14. Girshick, R., Donahue, J., Darrell, T., et al.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, pp. 580–587 (2014)
  15. He, K., Gkioxari, G., Dollár, P., et al.: Mask R-CNN. arXiv preprint arXiv:1703.06870 (2017)
  16. Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., et al.: MobileNets: efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861 (2017)
  17. He, K., Zhang, X., Ren, S., Sun, J.: Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 37(9), 1904–1916 (2015)
  18. Veksler, O., Boykov, Y., Mehrani, P.: Superpixels and supervoxels in an energy optimization framework. In: Daniilidis, K., Maragos, P., Paragios, N. (eds.) ECCV 2010. LNCS, vol. 6315, pp. 211–224. Springer, Heidelberg (2010)
  19. Vedaldi, A., Soatto, S.: Quick shift and kernel methods for mode seeking. In: Forsyth, D., Torr, P., Zisserman, A. (eds.) ECCV 2008. LNCS, vol. 5305, pp. 705–718. Springer, Heidelberg (2008)
  20. Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. In: 13th International Conference on Artificial Intelligence and Statistics, vol. 9, pp. 249–256 (2010)
  21. Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
  22. Szegedy, C., Liu, W., Jia, Y., et al.: Going deeper with convolutions. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–9. IEEE (2015)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. School of Computer Science and Technology, University of Science and Technology of China, Hefei, China