
Fully convolutional networks for chip-wise defect detection employing photoluminescence images

Efficient quality control in LED manufacturing

Published in: Journal of Intelligent Manufacturing

Abstract

Efficient quality control is essential in the manufacturing of light-emitting diodes (LEDs). Because defective LED chips may be traced back to different causes, a time- and cost-intensive electrical and optical contact measurement is employed. Fast photoluminescence measurements, on the other hand, are commonly used to detect wafer separation damage, but they also hold the potential to enable efficient detection of all kinds of defective LED chips. In a photoluminescence image, every pixel corresponds to an LED chip’s brightness after photoexcitation, revealing performance information. However, owing to unevenly distributed brightness values and varying defect patterns, photoluminescence images are not yet employed for comprehensive defect detection. In this work, we show that fully convolutional networks can be used for chip-wise defect detection when trained on a small data set of photoluminescence images. Pixel-wise labels allow us to classify every chip as defective or not. Because they are measurement-based, labels are easy to procure, and our experiments show that existing discrepancies between training images and labels do not hinder network training. Using weighted loss calculation, we were able to compensate for our highly unbalanced class categories. Thanks to the consistent use of skip connections and residual shortcuts, our network is able to predict a variety of structures, from extensive defect clusters down to single defective LED chips.
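The weighted loss calculation mentioned in the abstract can be illustrated as a weighted pixel-wise cross-entropy, where each chip's contribution is scaled by a per-class weight so that the rare "defective" class is not drowned out by the abundant "good" class. The sketch below is a minimal NumPy illustration under assumed details (two classes, externally supplied weights such as inverse class frequencies), not the authors' exact formulation:

```python
import numpy as np

def weighted_pixel_loss(logits, labels, class_weights):
    """Weighted pixel-wise cross-entropy for a two-class segmentation map
    (class 0: functioning chip, class 1: defective chip).

    logits: float array (H, W, 2), raw network outputs per chip/pixel.
    labels: int array (H, W) holding the class index of each chip.
    class_weights: sequence of 2 floats, e.g. inverse class frequencies
    (hypothetical choice, not taken from the paper).
    """
    # numerically stable softmax over the class axis
    z = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)

    # per-pixel negative log-likelihood of the true class
    h, w = labels.shape
    nll = -np.log(probs[np.arange(h)[:, None], np.arange(w)[None, :], labels])

    # scale each pixel by its class weight, so defective chips contribute
    # as much to the total loss as the far more numerous good chips
    pixel_weights = np.asarray(class_weights, dtype=float)[labels]
    return (pixel_weights * nll).sum() / pixel_weights.sum()
```

With uniform logits and equal weights this reduces to the ordinary cross-entropy of a 50/50 guess, log 2; raising the weight of class 1 shifts the loss toward the network's performance on defective chips.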




Acknowledgements

We would like to acknowledge support of our work from the German Federal Ministry of Education and Research (BMBF), as part of the joint project InteGreat. Moreover, we would like to thank those who share their knowledge in blogs and patiently answer strangers’ programming questions. In particular, we would like to thank the authors of NumPy groupies.

Author information


Corresponding author

Correspondence to Maike Lorena Stern.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The authors would like to thank OSRAM Opto Semiconductors and especially Dr. Hans Lindberg for data provision, ongoing support and a great collaboration.


About this article


Cite this article

Stern, M.L., Schellenberger, M. Fully convolutional networks for chip-wise defect detection employing photoluminescence images. J Intell Manuf 32, 113–126 (2021). https://doi.org/10.1007/s10845-020-01563-4
