Microstructure Cluster Analysis with Transfer Learning and Unsupervised Learning

  • Andrew R. Kitahara
  • Elizabeth A. Holm
Technical Article

Abstract

We apply computer vision and machine learning methods to analyze two datasets of microstructural images. A transfer learning pipeline uses the fully connected layer of a pre-trained convolutional neural network as the image representation. An unsupervised learning method then discovers visually distinct clusters of images within each dataset, and a minimally supervised clustering approach classifies micrographs into visually similar groups. This approach successfully classifies images both in a dataset of surface defects in steel, where the image classes are visually distinct, and in a dataset of fracture surfaces that humans have difficulty classifying. We find that the unsupervised, transfer learning method gives results comparable to fully supervised, custom-built approaches.
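The pipeline described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the feature matrix here is a synthetic stand-in for the fully connected layer activations (e.g., a 4096-dimensional vector per micrograph from a pre-trained network such as VGG16), and the choice of 10 principal components and k = 3 clusters is purely illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for CNN representations: in the pipeline described, each row
# would be the fully connected layer activation (e.g., 4096-d for VGG16)
# of one micrograph, computed with a network pre-trained on ImageNet.
features = rng.normal(size=(60, 4096))
# Shift two blocks of rows so the synthetic data contains three
# separable groups for the clustering step to discover.
features[20:40] += 3.0
features[40:] -= 3.0

# Reduce dimensionality before clustering.
pca = PCA(n_components=10, random_state=0)
reduced = pca.fit_transform(features)

# Unsupervised grouping with k-means on the reduced representations.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(reduced)
print(labels.shape)
```

With real micrographs, the `features` matrix would come from a forward pass through the truncated pre-trained network, and the number of clusters could be chosen by inspecting the cluster structure (e.g., in a t-SNE embedding) rather than fixed in advance.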

Keywords

Characterization, Computer vision, Machine learning, Microstructure

Acknowledgements

This work was performed at Carnegie Mellon University and has been supported by the US National Science Foundation award number DMR-1507830. The In-718 dataset was generously provided by the NextManufacturing Center for additive manufacturing research.


Copyright information

© The Minerals, Metals & Materials Society 2018

Authors and Affiliations

  1. Carnegie Mellon University, Pittsburgh, USA
