HEp-2 Cell Classification Using K-Support Spatial Pooling in Deep CNNs

  • Xian-Hua Han
  • Jianmei Lei
  • Yen-Wei Chen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10008)


This study addresses the problem of HEp-2 cell recognition in indirect immunofluorescence (IIF) image analysis, which facilitates the diagnosis of many autoimmune diseases by detecting antibodies in patient serum. Recently, many automatic HEp-2 cell classification strategies, both shallow and deep, have been developed, among which deep Convolutional Neural Networks (CNNs) have achieved impressive performance. However, deep CNNs generally require a fixed-size image as input. To overcome this fixed-size limitation, a spatial pyramid pooling (SPP) strategy has been proposed for general object recognition and detection. SPP-net usually exploits a max pooling strategy, aggregating all activations of a specific neuron within a predefined spatial region by taking only the maximum; this achieved superior performance compared with the mean pooling used in traditional state-of-the-art coding methods such as sparse coding and linear locality-constrained coding. However, the max pooling strategy in SPP-net retains only the strongest activated pattern and completely ignores the frequency of activated patterns, an important signature for distinguishing different types of images. Therefore, this study explores a generalized spatial pooling strategy for deep CNNs, called K-support spatial pooling, which integrates not only the maximum activation magnitude but also the response magnitudes of the other strongly activated patterns of a specific neuron. The proposed K-support spatial pooling combines the widely used mean and max pooling methods, avoiding excessive emphasis on the single maximum activation in favor of a group of activations within a supported region. A deep CNN with the proposed K-support spatial pooling is applied to HEp-2 cell classification and achieves promising performance compared with state-of-the-art approaches.
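The pooling strategy described above, interpreted as averaging the K largest responses of each channel within every pyramid cell (so that K = 1 recovers max pooling and K equal to the cell size recovers mean pooling), can be sketched as follows. This is a minimal NumPy illustration only; the function names, the pyramid levels, and the exact top-K formulation are assumptions, since the abstract does not give the precise definition used in the paper.

```python
import numpy as np

def k_support_pool(activations, k):
    """Average of the k largest activations in a region.

    k = 1 reduces to max pooling; k = number of activations
    reduces to mean pooling.
    """
    a = np.asarray(activations, dtype=float).ravel()
    k = min(k, a.size)
    top_k = np.sort(a)[-k:]          # k strongest responses
    return top_k.mean()

def k_support_spp(feature_map, levels=(1, 2), k=3):
    """Fixed-length descriptor from a (C, H, W) feature map.

    Each pyramid level n splits the map into an n x n grid, and
    every channel of every cell is K-support pooled, so the output
    length (C * sum of n^2 over levels) is independent of H and W.
    """
    C, H, W = feature_map.shape
    out = []
    for n in levels:
        hs = np.linspace(0, H, n + 1, dtype=int)  # row boundaries
        ws = np.linspace(0, W, n + 1, dtype=int)  # column boundaries
        for i in range(n):
            for j in range(n):
                cell = feature_map[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
                for c in range(C):
                    out.append(k_support_pool(cell[c], k))
    return np.array(out)
```

Because the grid is defined relative to H and W, feature maps of any spatial size yield the same descriptor length, which is the property SPP-style pooling provides for variable-size cell images.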





This paper is based on results obtained from a project commissioned by the New Energy and Industrial Technology Development Organization (NEDO), and was supported by Grants-in-Aid for Scientific Research from the Japanese Ministry of Education, Culture, Sports, Science and Technology under Grant Nos. 15H01130, 15K00253, 26330212, and 25280044, and by an open foundation project of the State Key Laboratory of Vehicle NVH and Safety Technology (NVHSKL-201414).



Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  1. National Institute of Advanced Industrial Science and Technology, Tokyo, Japan
  2. State Key Laboratory of Vehicle Noise Vibration and Safety Technology, Chongqing, China
  3. Ritsumeikan University, Kusatsu, Japan
