Selective multi-descriptor fusion for face identification

  • Original Article · International Journal of Machine Learning and Cybernetics

Abstract

Over the last two decades, face identification has been an active field of research in computer vision. Fused descriptor-based methods, an important class of image representation methods for face identification, are known to lack sufficient discriminant information, especially when compared with deep learning-based methods. This paper presents a new face representation method, multi-descriptor fusion (MDF), which represents face images through a combination of multiple descriptors, resulting in hyper-high-dimensional fused descriptor features. MDF achieves excellent face identification performance, exceeding the state of the art, but at high memory and computational cost. To address this cost, the paper also presents an optimisation method, discriminant ability-based multi-descriptor selection (DAMS), which selects a subset of descriptors from the set of 65 initial descriptors whilst maximising the discriminant ability. The MDF face representation, after being refined by DAMS, is named selective multi-descriptor fusion (SMDF). Compared with MDF, SMDF has a much smaller feature dimension and is thus usable on an ordinary PC, while retaining similar performance. Experiments on the CAS-PEAL-R1 and LFW datasets demonstrate the performance of the proposed methods.
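
To make the fusion idea concrete, the following minimal Python sketch concatenates the feature vectors of several descriptors extracted from the same face image, in the spirit of MDF. The two descriptors used here (a uniform LBP histogram and HOG, via scikit-image) are illustrative stand-ins only; the paper's actual set of 65 descriptors is not listed in this excerpt.

```python
# A minimal sketch (not the paper's implementation) of descriptor-level fusion
# in the spirit of MDF: several hand-crafted descriptors are extracted from the
# same face image and their feature vectors are concatenated into one
# high-dimensional representation.
import numpy as np
from skimage.feature import hog, local_binary_pattern


def lbp_histogram(image, points=8, radius=1):
    """Uniform-LBP histogram computed over the whole image."""
    codes = local_binary_pattern(image, points, radius, method="uniform")
    hist, _ = np.histogram(codes, bins=points + 2, range=(0, points + 2), density=True)
    return hist


def fuse_descriptors(image, extractors):
    """Concatenate the outputs of all descriptor extractors into one vector."""
    return np.concatenate([extract(image) for extract in extractors])


# Example usage on a random 8-bit "face"; real MDF would use aligned face crops
# and many more descriptors than the two illustrative ones below.
face = (np.random.rand(128, 128) * 255).astype(np.uint8)
extractors = [
    lbp_histogram,
    lambda img: hog(img, pixels_per_cell=(16, 16), cells_per_block=(2, 2)),
]
fused = fuse_descriptors(face, extractors)
print(fused.shape)  # one long fused feature vector
```

The fused vector grows with every added descriptor, which is exactly the memory and computational burden that DAMS and SMDF aim to reduce.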


Notes

  1. For the LFW dataset, “outside data” is defined as data that is not part of LFW [13]. Because outside data can have a significant impact on experimental results, researchers are asked to state whether, and what type of, outside training data was used, to ensure a fair comparison of different methods on LFW [13].

  2. Here, a feature block means a group of features that normally cannot be divided further; the features of an instance may consist of many feature blocks. Searching for an optimal subset of feature blocks means finding, among all feature blocks, the subset that maximises the objective function (an illustrative sketch of such a search is given after these notes).

  3. Here the DCP histogram under a particular variable combination is denoted by \(DCP(BNR, BNC, r_{in}, r_{ex})\). In our method, we extract the following DCP histograms for each face image: DCP(6, 5, 2, 3), DCP(6, 5, 3, 4), DCP(6, 5, 4, 5), DCP(6, 5, 5, 6), DCP(5, 4, 2, 3), DCP(5, 4, 3, 4), DCP(5, 4, 4, 5), DCP(5, 4, 5, 6), DCP(4, 4, 2, 3), DCP(4, 4, 3, 4), DCP(4, 4, 4, 5), DCP(4, 4, 5, 6), DCP(3, 2, 2, 3), DCP(3, 2, 3, 4), DCP(3, 2, 4, 5) and DCP(3, 2, 5, 6), giving 16 DCP histograms in total for each face image (a sketch enumerating these parameter combinations follows the notes). Note that we did not tune these four parameters carefully; in our experience, their setting does not significantly influence performance.
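
As a companion to note 2, the sketch below performs a greedy forward search over feature blocks, adding at each step the block that most improves a class-separability objective. Both the greedy strategy and the Fisher-style score used here are assumptions for illustration; they are not necessarily the criterion or search procedure used by DAMS.

```python
# A minimal sketch of a block-wise feature-subset search, as described in note 2.
# The greedy forward strategy and the Fisher-style objective below are
# illustrative assumptions, not necessarily the criterion used by DAMS.
import numpy as np


def fisher_score(features, labels):
    """Ratio of between-class to within-class scatter (larger is better)."""
    labels = np.asarray(labels)
    overall_mean = features.mean(axis=0)
    between, within = 0.0, 0.0
    for c in np.unique(labels):
        x = features[labels == c]
        between += len(x) * np.sum((x.mean(axis=0) - overall_mean) ** 2)
        within += np.sum((x - x.mean(axis=0)) ** 2)
    return between / (within + 1e-12)


def greedy_block_selection(blocks, labels, max_blocks):
    """Greedily add the feature block that most improves the objective."""
    selected, best = [], -np.inf
    remaining = list(range(len(blocks)))
    while remaining and len(selected) < max_blocks:
        scores = {
            i: fisher_score(np.hstack([blocks[j] for j in selected + [i]]), labels)
            for i in remaining
        }
        i_best = max(scores, key=scores.get)
        if scores[i_best] <= best:
            break  # no remaining block improves the objective
        selected.append(i_best)
        remaining.remove(i_best)
        best = scores[i_best]
    return selected


# Example: three random feature blocks for 20 samples of 4 identities.
rng = np.random.default_rng(0)
blocks = [rng.normal(size=(20, d)) for d in (8, 16, 4)]
labels = np.repeat(np.arange(4), 5)
print(greedy_block_selection(blocks, labels, max_blocks=2))
```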
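
For note 3, the sketch below simply enumerates the 16 parameter combinations listed above and applies a DCP extractor to each. The function `dcp_histogram` is hypothetical and stands in for the dual-cross patterns descriptor of Ding et al. [15]; only the parameter grid itself comes from the note.

```python
# A minimal sketch enumerating the 16 DCP parameter combinations from note 3.
# `dcp_histogram` is a hypothetical extractor standing in for the dual-cross
# patterns descriptor of Ding et al. [15]; only the grid comes from the note.
from itertools import product

BLOCK_GRIDS = [(6, 5), (5, 4), (4, 4), (3, 2)]    # (BNR, BNC): block rows and columns
RADIUS_PAIRS = [(2, 3), (3, 4), (4, 5), (5, 6)]   # (r_in, r_ex): inner and external radii


def extract_dcp_set(image, dcp_histogram):
    """Return the 16 histograms DCP(BNR, BNC, r_in, r_ex) for one face image."""
    return [
        dcp_histogram(image, bnr, bnc, r_in, r_ex)
        for (bnr, bnc), (r_in, r_ex) in product(BLOCK_GRIDS, RADIUS_PAIRS)
    ]
```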

References

  1. Huang GB, Ramesh M, Berg T, Learned-Miller E (2007) Labeled faces in the wild: a database for studying face recognition in unconstrained environments. University of Massachusetts, Amherst, Tech. Rep. 07-49

  2. Sun Y, Liang D, Wang X, Tang X (2015) DeepID3: face recognition with very deep neural networks. arXiv preprint arXiv:1502.00873

  3. Schroff F, Kalenichenko D, Philbin J (2015) FaceNet: a unified embedding for face recognition and clustering. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 815–823

  4. Liu J, Deng Y, Bai T, Wei Z, Huang C (2015) Targeting ultimate accuracy: face recognition via deep embedding. arXiv preprint arXiv:1506.07310

  5. Kumar N, Berg AC, Belhumeur PN, Nayar SK (2009) Attribute and simile classifiers for face verification. In: 2009 IEEE 12th international conference on computer vision, pp 365–372

  6. Learned-Miller E, Huang GB, RoyChowdhury A, Li H, Hua G (2016) Labeled faces in the wild: a survey. In: Kawulok M, Celebi ME, Smolka B (eds) Advances in face detection and facial image analysis. Springer International Publishing, Berlin, pp 189–248. https://doi.org/10.1007/978-3-319-25958-1_8

  7. Xie S, Shan S, Chen X, Chen J (2010) Fusing local patterns of gabor magnitude and phase for face recognition. IEEE Trans Image Process 19(5):1349–1361

  8. Wei X, Wang H, Guo G, Wan H (2015) Multiplex image representation for enhanced recognition. Int J Mach Learn Cybern 9:1–10

  9. Chan CH, Tahir MA, Kittler J, Pietikäinen M (2013) Multiscale local phase quantization for robust component-based face recognition using kernel fusion of multiple descriptors. IEEE Trans Pattern Anal Mach Intell 35(5):1164–1177

  10. Taigman Y, Yang M, Ranzato M, Wolf L (2014) DeepFace: closing the gap to human-level performance in face verification. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 1701–1708

  11. Sun Y, Wang X, Tang X (2014) Deep learning face representation from predicting 10,000 classes. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 1891–1898

  12. Ding C, Tao D (2017) Trunk-Branch ensemble convolutional neural networks for video-based face recognition. IEEE Trans Pattern Anal Mach Intell PP(99):1–1

  13. Huang GB, Learned-Miller E (2014) Labeled faces in the wild: updates and new reporting procedures. Dept. Comput. Sci., Univ. Massachusetts Amherst, Amherst, MA, USA, Tech. Rep. 14-003

  14. Huang KK, Dai DQ, Ren CX, Yu YF, Lai ZR (2017) Fusing landmark-based features at kernel level for face recognition. Pattern Recognit 63:406–415

  15. Ding C, Choi J, Tao D, Davis LS (2016) Multi-directional multi-level dual-cross patterns for robust face recognition. IEEE Trans Pattern Anal Mach Intell 38(3):518–531

  16. Visual descriptor (2017) Wikipedia, Page Version ID: 788982618. [Online]. https://en.wikipedia.org/w/index.php?title=Visual_descriptor&oldid=788982618

  17. Ahonen T, Hadid A, Pietikainen M (2006) Face description with local binary patterns: application to face recognition. IEEE Trans Pattern Anal Mach Intell 28(12):2037–2041

  18. Qi X, Xiao R, Li CG, Qiao Y, Guo J, Tang X (2014) Pairwise rotation invariant co-occurrence local binary pattern. IEEE Trans Pattern Anal Mach Intell 36(11):2199–2213

  19. Kim J, Yu S, Kim D, Toh K-A, Lee S (2017) An adaptive local binary pattern for 3d hand tracking. Pattern Recognit 61(Supplement C):139–152. [Online]. http://www.sciencedirect.com/science/article/pii/S0031320316301972

  20. Karczmarek P, Kiersztyn A, Pedrycz W, Dolecki M (2017) An application of chain code-based local descriptor and its extension to face recognition. Pattern Recognit 65:26–34

  21. Zhen X, Zheng F, Shao L, Cao X, Xu D (2017) Supervised local descriptor learning for human action recognition. IEEE Trans Multimed 19(9):2056–2065

  22. Lan R, Zhou Y, Tang YY (2016) Quaternionic local ranking binary pattern: a local descriptor of color images. IEEE Trans Image Process 25(2):566–579

  23. Yan P, Liang D, Tang J, Zhu M (2016) Local feature descriptor using entropy rate. Neurocomputing 194:157–167

  24. Mangai UG, Samanta S, Das S, Chowdhury PR (2010) A survey of decision fusion and feature fusion strategies for pattern classification. IETE Tech Rev 27(4):293–307. [Online]. http://www.tandfonline.com/doi/abs/10.4103/0256-4602.64604

  25. Nikan S, Ahmadi M (2014) Local gradient-based illumination invariant face recognition using local phase quantisation and multi-resolution local binary pattern fusion. IET Image Process 9(1):12–21

  26. Gao Z, Ding L, Xiong C, Huang B (2014) A robust face recognition method using multiple features fusion and linear regression. Wuhan Univ J Nat Sci 19(4):323–327

  27. Ruggieri S (2002) Efficient C4.5 [classification algorithm]. IEEE Trans Knowl Data Eng 14(2):438–444

  28. Wright J, Yang AY, Ganesh A, Sastry SS, Ma Y (2009) Robust face recognition via sparse representation. IEEE Trans Pattern Anal Mach Intell 31(2):210–227

  29. Weinberger KQ, Saul LK (2009) Distance metric learning for large margin nearest neighbor classification. J Mach Learn Res 10:207–244 [Online]. http://www.jmlr.org/papers/v10/weinberger09a.html

  30. Heisele B, Ho P, Poggio T (2001) Face recognition with support vector machines: global versus component-based approach. In: Proceedings of the eighth IEEE international conference on computer vision (ICCV 2001), vol 2, pp 688–694

  31. Lawrence S, Giles CL, Tsoi AC, Back AD (1997) Face recognition: a convolutional neural-network approach. IEEE Trans Neural Netw 8(1):98–113

  32. Sebe N, Lew MS, Cohen I, Garg A, Huang TS (2002) Emotion recognition using a Cauchy Naive Bayes classifier. In: Object recognition supported by user interaction for service robots, vol 1, pp 17–20

  33. Li XX, Dai DQ, Zhang XF, Ren CX (2013) Structured sparse error coding for face recognition with occlusion. IEEE Trans Image Process 22(5):1889–1900

  34. Yang M, Zhang L, Yang J, Zhang D (2013) Regularized robust coding for face recognition. IEEE Trans Image Process 22(5):1753–1766

  35. Yang M, Zhang L, Shiu SCK, Zhang D (2013) Robust kernel representation with statistical local features for face recognition. IEEE Trans Neural Netw Learn Syst 24(6):900–912

  36. Huang KK, Dai DQ, Ren CX, Lai ZR (2016) Learning kernel extended dictionary for face recognition. IEEE Trans Neural Netw Learn Syst PP(99):1–13

  37. Deng W, Hu J, Guo J (2012) Extended SRC: undersampled face recognition via intraclass variant dictionary. IEEE Trans Pattern Anal Mach Intell 34(9):1864–1870

  38. Deng W, Hu J, Guo J (2013) In defense of sparsity based face recognition. In: The IEEE conference on computer vision and pattern recognition (CVPR)

  39. Gao W, Cao B, Shan S, Chen X, Zhou D, Zhang X, Zhao D (2008) The CAS-PEAL large-scale chinese face database and baseline evaluations. IEEE Trans Syst Man Cybern Part A Syst Hum 38(1):149–161

  40. Xiong X, De la Torre F (2013) Supervised descent method and its applications to face alignment. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 532–539

  41. Gross R, Matthews I, Cohn J, Kanade T, Baker S (2010) Multi-pie. Image Vis Comput 28(5):807–813

  42. Saragih J (2011) Principal regression analysis. In: IEEE conference on computer vision and pattern recognition (CVPR), pp 2881–2888

  43. Bartlett MS, Littlewort G, Frank MG, Lainscsek C, Fasel IR, Movellan JR (2006) Automatic recognition of facial actions in spontaneous expressions. J Multimed 1(6):22–35

  44. Tan H, Yang B, Ma Z (2013) Face recognition based on the fusion of global and local hog features of face images. IET Comput Vis 8(3):224–234

  45. Fierro-Radilla AN, Nakano-Miyatake M, Perez-Meana H, Cedillo-Hernandez M, Garcia-Ugalde F (2013) An efficient color descriptor based on global and local color features for image retrieval. In: IEEE 2013 10th international conference on electrical engineering, computing science and automatic control (CCE), pp 233–238

  46. Shabanzade M, Zahedi M, Aghvami SA (2011) Combination of local descriptors and global features for leaf recognition. Signal Image Process 2(3):23

  47. Swets DL, Weng JJ (1996) Using discriminant eigenfeatures for image retrieval. IEEE Trans Pattern Anal Mach Intell 18(8):831–836

  48. Tan X, Triggs B (2010) Enhanced local texture feature sets for face recognition under difficult lighting conditions. IEEE Trans Image Process 19(6):1635–1650

  49. Ahonen T, Rahtu E, Ojansivu V, Heikkila J (2008) Recognition of blurred faces using local phase quantization. In: 2008 19th international conference on pattern recognition, vol 12, pp 1–4

  50. Vu NS, Caplier A (2012) Enhanced patterns of oriented edge magnitudes for face recognition and image matching. IEEE Trans Image Process 21(3):1352–1365

  51. Ojala T, Pietikainen M, Maenpaa T (2002) Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans Pattern Anal Mach Intell 24(7):971–987

  52. Trefný J, Matas J (2010) Extended set of local binary patterns for rapid object detection. In: Computer vision winter workshop, pp 1–7

  53. Ren CX, Dai DQ, Li XX, Lai ZR (2014) Band-reweighed gabor kernel embedding for face image representation and recognition. IEEE Trans Image Process 23(2):725–740

  54. Best-Rowden L, Han H, Otto C, Klare BF, Jain AK (2014) Unconstrained face recognition: identifying a person of interest from a media collection. IEEE Trans Inf Forensics Secur 9(12):2144–2157

  55. Taigman Y, Yang M, Ranzato M, Wolf L (2015) Web-scale training for face identification. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 2746–2754. [Online]. https://www.cv-foundation.org/openaccess/content_cvpr_2015/html/Taigman_Web-Scale_Training_for_2015_CVPR_paper.html

  56. Sun Y, Wang X, Tang X (2015) Deeply learned face representations are sparse, selective, and robust. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 2892–2900. [Online]. https://www.cv-foundation.org/openaccess/content_cvpr_2015/html/Sun_Deeply_Learned_Face_2015_CVPR_paper.html

Author information

Corresponding author

Correspondence to Xin Wei.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Cite this article

Wei, X., Wang, H., Scotney, B. et al. Selective multi-descriptor fusion for face identification. Int. J. Mach. Learn. & Cyber. 10, 3417–3429 (2019). https://doi.org/10.1007/s13042-019-00929-2
