Unified Face Representation for Individual Recognition in Surveillance Videos

Chapter
Part of the Augmented Vision and Reality book series (Augment Vis Real, volume 6)

Abstract

Recognizing faces in surveillance videos is difficult because of the poor quality of the probe data in terms of resolution, noise, blur, and varying lighting conditions. In addition, probe faces are usually not frontal, unlike the standard frontal format of the gallery data. This discrepancy between the two types of data makes existing recognition algorithms far less accurate on real-world surveillance video captured by a multi-camera network. In this chapter, we propose a multi-camera, video-based face recognition framework built on a novel image representation called the Unified Face Image (UFI), which is synthesized from multiple camera video feeds. Within a temporal window, the probe frames from different cameras are warped towards a template frontal face and then averaged. The resulting UFI is a frontal view of the subject that incorporates information from all cameras; face super-resolution can also be achieved if desired. We use SIFT flow as a high-level alignment tool to warp the faces. Experimental results show that recognition using the unified face image representation outperforms the result obtained from any single camera. The proposed framework can be adapted to any multi-camera, video-based face recognition system using any face feature descriptors and classifiers.
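The following is a minimal sketch of the UFI construction described above: face crops from several cameras within a temporal window are warped toward a frontal template and averaged. It is not the chapter's implementation; the dense SIFT-flow alignment is replaced here by a placeholder identity flow, and the `estimate_dense_flow` call is a hypothetical hook for any dense correspondence estimator that returns per-pixel (dx, dy) displacements.

```python
import cv2
import numpy as np

def warp_with_flow(face, flow):
    """Warp a face crop by a dense flow field of shape (H, W, 2)."""
    h, w = face.shape[:2]
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    map_x = grid_x + flow[..., 0]
    map_y = grid_y + flow[..., 1]
    return cv2.remap(face, map_x, map_y, interpolation=cv2.INTER_LINEAR)

def unified_face_image(face_crops, template):
    """Average multi-camera face crops after warping each toward the frontal template."""
    h, w = template.shape[:2]
    warped = []
    for face in face_crops:
        face = cv2.resize(face, (w, h))           # bring crops to a common resolution
        flow = np.zeros((h, w, 2), np.float32)    # placeholder: identity flow
        # flow = estimate_dense_flow(face, template)  # hypothetical, e.g. SIFT flow
        warped.append(warp_with_flow(face, flow).astype(np.float32))
    return np.mean(warped, axis=0).astype(np.uint8)  # the UFI
```

Any standard face feature descriptor and classifier can then be applied to the returned UFI in place of a single-camera probe frame.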

Keywords

Face recognition · Face registration in video · Multi-camera network · Surveillance videos


Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  1. Center for Research in Intelligent Systems, University of California, Riverside, USA