
An End-to-End System for Automatic Urinary Particle Recognition with Convolutional Neural Network

Abstract

Urine sediment analysis of particles in microscopic images can assist physicians in evaluating patients with renal and urinary tract diseases. Manual urine sediment examination is labor-intensive, subjective and time-consuming, and traditional automatic algorithms typically rely on hand-crafted features for recognition. Instead of using hand-crafted features, in this paper we propose to exploit a convolutional neural network (CNN) to learn features in an end-to-end manner for urinary particle recognition. We treat urinary particle recognition as an object detection problem and exploit two state-of-the-art CNN-based object detection methods, Faster R-CNN and the single shot multibox detector (SSD), along with their variants, for this task. We further investigate different factors in these CNN-based methods to improve the performance of urinary particle recognition. We comprehensively evaluate these methods on a dataset of 5,376 annotated images covering 7 categories of urinary particle, i.e., erythrocyte, leukocyte, epithelial cell, crystal, cast, mycete and epithelial nuclei, and obtain a best mean average precision (mAP) of 84.1% while taking only 72 ms per image on an NVIDIA Titan X GPU.
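To make the detection-based formulation concrete, the following minimal sketch shows how a generic Faster R-CNN detector could be configured for the seven particle categories plus a background class. It is an illustrative assumption using the torchvision detection API, not the authors' original implementation, and the class list simply mirrors the categories named in the abstract.

```python
# Minimal sketch (not the paper's original code): configuring an off-the-shelf
# Faster R-CNN detector for the 7 urinary particle categories plus background.
# Assumes a recent torchvision; class names follow the paper's categories.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

PARTICLE_CLASSES = [
    "__background__", "erythrocyte", "leukocyte", "epithelial cell",
    "crystal", "cast", "mycete", "epithelial nuclei",
]

def build_detector(num_classes: int = len(PARTICLE_CLASSES)):
    """Return a Faster R-CNN model whose box head is sized for our classes."""
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

if __name__ == "__main__":
    model = build_detector()
    model.eval()
    # A dummy microscopic-field image; real inputs would be RGB tensors in [0, 1].
    image = torch.rand(3, 600, 800)
    with torch.no_grad():
        prediction = model([image])[0]  # dict with 'boxes', 'labels', 'scores'
    print(prediction["boxes"].shape, prediction["scores"].shape)
```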




Acknowledgements

This research was partially supported by the Natural Science Foundation of Hunan Province, China (No. 14JJ2008) and the National Natural Science Foundation of China under Grant No. 61602522, No. 61573380, No. 61672542.

Author information


Corresponding author

Correspondence to Yixiong Liang.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Ethical Approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Additional information

This article is part of the Topical Collection on Image & Signal Processing

Appendix

Fig. 8

Selected detection examples of urinary particles on the urinalysis test set. We show detections with scores higher than 0.7. All examples are divided into 7 groups, where 5 groups are at high-power field (i.e., erythrocyte, leukocyte, crystal, mycete, epithelial nuclei) and the other 2 groups are at low-power field (i.e., epithelial cell, cast). In each group: a shows the original image with ground-truth boxes; b-d are Faster R-CNN detections on the ZF, VGG-16 and ResNet-50 networks, respectively, with anchor scales of {32², 64², 128², 256², 512²}; e shows detection results on PVANet; f shows detection results on the SSD300 model. For the ground truths and detection boxes, different categories are distinguished only by color: eryth (red), leuko (black), epith (green), crystal (magenta), cast (cyan), mycete (yellow). As shown in this figure, the performance of SSD is inferior to Faster R-CNN, and it misses many small objects.
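For reference, the visualization step described in the caption can be sketched as below. This is a hypothetical helper using Pillow, not the authors' plotting code; the detection tuple format is an assumption for illustration, and the colors mirror the figure's scheme.

```python
# Minimal sketch (not the authors' plotting code): keep detections scoring above
# 0.7 and draw each category's boxes in the color used in the figure. The
# detection format (class_name, score, (x1, y1, x2, y2)) is assumed.
from PIL import Image, ImageDraw

CLASS_COLORS = {
    "eryth": "red", "leuko": "black", "epith": "green",
    "crystal": "magenta", "cast": "cyan", "mycete": "yellow",
}

def draw_detections(image: Image.Image, detections, score_thresh: float = 0.7):
    """Overlay boxes for detections above the score threshold, colored per class."""
    canvas = image.copy()
    draw = ImageDraw.Draw(canvas)
    for class_name, score, box in detections:
        if score < score_thresh:
            continue  # suppress low-confidence detections, as in the figure
        draw.rectangle(box, outline=CLASS_COLORS.get(class_name, "white"), width=2)
    return canvas
```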


About this article


Cite this article

Liang, Y., Kang, R., Lian, C. et al. An End-to-End System for Automatic Urinary Particle Recognition with Convolutional Neural Network. J Med Syst 42, 165 (2018). https://doi.org/10.1007/s10916-018-1014-6


Keywords

  • Urinary particle recognition
  • CNN
  • Faster R-CNN
  • SSD