Inter-modality Face Recognition
Recently, the wide deployment of practical face recognition systems has given rise to the inter-modality face recognition problem, in which the face images in the database and the query images captured on the spot are acquired under quite different conditions or even with different equipment. Conventional approaches either treat the samples from both modalities in a uniform model or introduce an intermediate conversion stage; both strategies suffer severe performance degradation due to the great discrepancies between modalities. In this paper, we propose a novel algorithm called Common Discriminant Feature Extraction, specially tailored to the inter-modality problem. The algorithm simultaneously learns two transforms that map the samples of the two modalities into a common feature space. We formulate the learning objective by incorporating both the empirical discriminative power and the local smoothness of the feature transformation. By explicitly controlling the model complexity through the smoothness constraint, we effectively reduce the risk of overfitting and enhance the generalization capability. Furthermore, to cope with the non-Gaussian distribution and diverse variations in the sample space, we develop two nonlinear extensions of the algorithm: one based on kernelization and the other a multi-mode framework. These extensions substantially improve recognition performance in complex situations. Extensive experiments in two application scenarios, optical image to infrared image recognition and photo to sketch recognition, show that our algorithms achieve excellent performance.
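The core idea described above, learning one transform per modality so that corresponding samples land close together in a common feature space, can be illustrated with a minimal sketch. The code below uses a plain canonical correlation analysis (CCA) solution purely as a stand-in for the coupled-transform idea; it is not the paper's actual CDFE formulation, which additionally incorporates a discriminative term and a local smoothness regularizer. All function names and the toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def coupled_projections(X1, X2, k=2, eps=1e-6):
    """Learn W1, W2 so that X1 @ W1 and X2 @ W2 are maximally correlated.

    A plain CCA solution, used only to illustrate the coupled-transform
    idea: two modality-specific linear maps into one common space.
    """
    X1 = X1 - X1.mean(axis=0)
    X2 = X2 - X2.mean(axis=0)
    n = X1.shape[0]
    # regularized covariance and cross-covariance estimates
    C11 = X1.T @ X1 / n + eps * np.eye(X1.shape[1])
    C22 = X2.T @ X2 / n + eps * np.eye(X2.shape[1])
    C12 = X1.T @ X2 / n

    def inv_sqrt(C):
        # inverse matrix square root via eigendecomposition (C is SPD)
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    S1, S2 = inv_sqrt(C11), inv_sqrt(C22)
    # SVD of the whitened cross-covariance gives the coupled directions
    U, _, Vt = np.linalg.svd(S1 @ C12 @ S2)
    return S1 @ U[:, :k], S2 @ Vt.T[:, :k]

# toy cross-modality data: modality 2 is a noisy linear distortion of modality 1
X1 = rng.standard_normal((200, 8))
X2 = X1 @ rng.standard_normal((8, 6)) + 0.05 * rng.standard_normal((200, 6))

W1, W2 = coupled_projections(X1, X2, k=2)
z1 = (X1 - X1.mean(0)) @ W1
z2 = (X2 - X2.mean(0)) @ W2
corr = np.corrcoef(z1[:, 0], z2[:, 0])[0, 1]  # high: common space aligns the modalities
```

On this toy data the first pair of projected coordinates is almost perfectly correlated, which is the behavior a common feature space is meant to deliver; CDFE additionally shapes that space to separate identities and to keep the transforms smooth.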
Keywords: Face Recognition · Linear Discriminant Analysis · Face Image · Learning Objective · Query Image