Abstract
3D face recognition research has received significant attention in the past two decades because of the rapid development in imaging technology and the ever increasing security demand of modern society. One of its challenges is to cope with non-rigid deformation among faces, which is often caused by changes of appearance and facial expression. Popular solutions to this problem are to detect the deformable parts of the face and exclude them, or to represent a face in terms of sparse signature points, curves or patterns that are invariant to deformation. Such approaches, however, may lead to loss of information which is important for classification. In this paper, we propose a new geodesic-map representation with statistical shape modelling for handling the non-rigid deformation challenge in face recognition. The proposed representation captures the geometrical information of the entire 3D face and provides a compact, expression-free map that preserves intrinsic geometrical information. As a result, the search for dense point correspondence in the face recognition task can be speeded up by using a simple image-based method instead of a time-consuming, recursive closest distance search in 3D space. An experimental investigation was conducted on 3D face scans using publicly available databases and compared with benchmark approaches. The experimental results demonstrate that the proposed scheme provides a highly competitive new solution for 3D face recognition.
1 Introduction
Face recognition is one of the most common biometrics, with unique advantages such as naturalness, non-contact operation and non-intrusiveness. Its related research has for many years been of great interest to the computer vision and pattern recognition communities, and has been exploited for applications such as public security [1], fraud prevention [2] and crime prevention and detection [3]. In the past, a fair amount of effort has been devoted to the development of 2D face recognition systems using intensity images as input data. Despite 2D face recognition systems being able to perform well under constrained conditions, they still face great difficulties, as facial appearance can vary significantly even for the same individual due to differences in pose, lighting conditions and expression [4]. Using the 3D geometry of the face instead of its 2D appearance is expected to alleviate these difficulties, since the human face is a natural 3D entity [5]. According to the type of features used, the relevant work on 3D face recognition can be roughly classified into three major categories: geometrical feature-based, shape descriptor-based and prominent region-based approaches.
Geometrical feature-based methods achieve the face recognition task using structural information extracted from 3D faces, such as landmarks, salient curves and geodesic-like patterns. Landmarks are representative key facial points that are often used to construct a feature space. Shi et al. [6] introduced a method based on so-called 'soft' landmarks, i.e., landmarks that are easily located on actual skin surfaces, such as eye corners, mouth corners and nose edges. It showed that these landmarks vary significantly if different subjects are used to generate them. The use of anthropometric facial fiducial landmarks for face recognition was presented in [7]. Salient curves are discriminative surface curves extracted from 3D faces. The symmetric profile curve obtained from the intersection between the symmetry plane and the 3D facial scan was described in [8]. Three facial curves which intersect the facial scan using horizontal and vertical planes as well as a cylinder were proposed in [9]. A geodesic is a locally length-minimizing curve along the surface and it contains information related to the intrinsic geometry of an object. Mpiperis et al. [10] proposed a geodesic polar representation of the facial surface. With this representation, the intrinsic surface attributes do not change under isometric deformations and it can therefore be used for expression-invariant face recognition. Based on a similar concept, a method using a similarity measurement of local geodesic patches was proposed by Hajati et al. [11].
Shape descriptor-based methods look into attributes of local surfaces and encode the 3D face into specially designed patterns, which are often invariant to the orientation of faces. A representation of free-form surfaces based on the point signature was proposed for 3D face recognition by Chua et al. [12]. The approach uses point signatures extracted from the rigid parts of a face to overcome the challenge of facial expressions. A similar representation, named the local shape map, was proposed in [13]. Tanaka et al. [14] introduced a special shape descriptor based on the extended Gaussian image (EGI) and local surface curvature. The EGI was used as an intermediate feature after curvature-based segmentation, onto which principal directions were mapped as local features. Huang et al. [15] adopted a multi-scale extended local binary pattern (eLBP) as an accurate descriptor of local shape changes for 3D face identification. A hybrid matching approach based on the scale-invariant feature transform (SIFT) was designed to measure similarities between control and test face scans once they were represented by the multi-scale eLBP.
Prominent region-based methods use dense point clouds detected from specific regions to form feature vectors. The similarity between two faces is determined based on their relationship in the feature space. Queirolo et al. [16] proposed a union of segmented facial regions for the purpose of face identification. These regions include the circular nose area, elliptical nose area and upper head. Regions segmented by median and Gaussian curvature were utilised for feature construction in [17]. Gupta et al. [7] manually placed anthropometric points on faces and used a feature vector based on the anthropometric distances between points for face recognition. Xu et al. [18] proposed a method which converts a 3D face into a regular mesh and creates a feature vector encoding the 3D shape of the face based on this regular mesh.
In this paper, we propose a new geodesic-map representation for 3D faces, which is an extension of the original work proposed by Quan et al. [19]. The proposed method preserves the intrinsic geometrical information related to the identity of the face. It can be considered an expression-free representation for faces from the same person and is able to reduce the effect of non-rigid deformation in the face recognition task. The method first creates a geodesic strip for each extracted landmark on a single face, based on the geodesic distance measurement between the surface points and the landmark. It then combines the calculated geodesic strips of all extracted landmarks to form a map. This map is the new representation of the face. In the subsequent stage of statistical shape modelling, the search for dense point correspondence is therefore simplified to an image-based method using the calculated geodesic-map instead of an iterative search in 3D space. This helps to improve the efficiency of the whole face recognition task, including training and testing.
The rest of the paper is organized as follows: Sect. 2 introduces the sparse facial landmark detection. Section 3 describes the processing steps for generating the geodesic-map representation of 3D faces. Section 4 explains the mechanism for dense point correspondence search across all training datasets. Section 5 presents the statistical shape models used in this work. Section 6 illustrates the model matching process. The experimental results using the statistical shape models for face recognition are presented in Sect. 7. Finally, concluding remarks and possible future work are given in Sect. 8.
2 Sparse Facial Landmark Detection
Landmarks are often used to assist the process of data registration in order to determine the coefficients of a transformation function. A minimum of three pairs of corresponding landmarks is needed if the transformation is considered rigid, and more are required when the transformation has more degrees of freedom. In this work, a small number of landmarks are extracted from 3D faces, which are used for generating the geodesic-map representation and the dense point correspondence search at a later stage. Using a combination of the shape index [20] and the intersecting profiles of the facial symmetry plane [21], a set of 12 key landmarks can be extracted: the two upper nose base points, two nose corners, upper and lower lip tips, two inner eye corners, two outside eye corners and two mouth corners.
The general strategy of this landmark detection process is to use the Gaussian curvature and mean curvature to locate a set of candidates for each landmark along the intersecting profiles of the facial symmetry plane, and then select the candidate with the appropriate shape index as the key landmark. The shape index S(p) at point p is calculated as:
$$\begin{aligned} S(p)=\frac{1}{2}-\frac{1}{\pi }\arctan \frac{K_{1}(p)+K_{2}(p)}{K_{1}(p)-K_{2}(p)} \end{aligned}$$
(1)
where \(K_{1}(p)\) and \(K_{2}(p)\) are the maximum and minimum local curvatures at point p, respectively. According to the value of the shape index, between zero and one, each point can be classified into one of several shape types, such as cup, rut, saddle, ridge and cap. Figure 1 demonstrates the locations of all 12 extracted key landmarks. Since the extracted landmarks are sparse and lie in facial areas that are anatomically stable, well defined for all faces and invariant to facial expressions, they are more likely to be robustly detected than landmarks located in other parts of the face.
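As a sketch of how such a shape classification could be implemented, the following computes the shape index from the two principal curvatures and assigns a coarse shape type; the class boundaries used here are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def shape_index(k1, k2):
    """Shape index S(p) from the principal curvatures K1 >= K2, mapped to [0, 1].

    S(p) = 1/2 - (1/pi) * arctan((K1 + K2) / (K1 - K2)).
    Umbilic points (K1 == K2) have an undefined shape index.
    """
    k1, k2 = float(k1), float(k2)
    return 0.5 - (1.0 / np.pi) * np.arctan2(k1 + k2, k1 - k2)

def shape_type(s):
    """Coarse shape class for a shape-index value (thresholds are illustrative)."""
    bins = [(0.0625, "cup"), (0.3125, "rut"), (0.5625, "saddle"),
            (0.8125, "ridge")]
    for upper, name in bins:
        if s < upper:
            return name
    return "cap"
```

A perfect saddle (equal and opposite curvatures) maps to the midpoint of the scale, S = 0.5, which is a quick sanity check for any implementation.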
3 Geodesic-Map Representation
A geodesic is a generalization of the Euclidean distance and is defined as the length of the shortest path between two points along a continuous surface [22]. Bronstein et al. [23] proposed a face recognition method based on a transformation, \(\psi \), mapping an original face \(\mathbb {S}\) with the given geodesic distance \(d_{\mathbb {S}}(\xi _{1},\xi _{2})\) onto another space \(\mathbb {S'}\) with the Euclidean distance \(d_{\mathbb {S'}}(\psi (\xi _{1}),\psi (\xi _{2}))\) in such a way that corresponding distances are preserved:
$$\begin{aligned} d_{\mathbb {S}}(\xi _{1},\xi _{2})=d_{\mathbb {S'}}(\psi (\xi _{1}),\psi (\xi _{2})) \end{aligned}$$
(2)
This means that the surface information represented by the geodesic distance between different points on the surface is preserved. Such a mapping is invariant to rigid transformation as well as to any non-rigid deformation which does not change the distance between points on the surface. Based on the assumption that, for the same subject, facial expressions do not change geodesic distances, the dense point correspondence between two faces of the same subject can be estimated using the geodesic distances between surface points and a number of fixed points on both surfaces. Figure 2 illustrates the search for the point \(\mathbf {P'}\) corresponding to a given point \(\mathbf {P}\) on another surface, where \(\mathbf {L}_{1}\), \(\mathbf {L}_{2}\), \(\mathbf {L}_{3}\) are three landmark points on one surface with the corresponding geodesic distances to \(\mathbf {P}\) denoted by \(g_{1}\), \(g_{2}\), \(g_{3}\); \(\mathbf {L'}_{1}\), \(\mathbf {L'}_{2}\), \(\mathbf {L'}_{3}\) are three landmark points on the other surface with the corresponding geodesic distances to \(\mathbf {P'}\) denoted by \(g'_{1}\), \(g'_{2}\) and \(g'_{3}\). \(\mathbf {P'}\) is said to correspond to \(\mathbf {P}\) if it is found that \(g_{1}=g'_{1}\), \(g_{2}=g'_{2}\) and \(g_{3}=g'_{3}\). For a unique solution, a minimum of three fixed landmark points is needed.
The geodesic distance is used in this paper to assist the dense point correspondence search across 3D faces. The 12 key landmarks extracted using the method described in Sect. 2 are taken as the fixed surface points on the 3D faces. A geodesic-map representation is proposed to simplify the overall correspondence search. The geodesic-map is built in three steps. The first step is to compute the geodesic distances between the 12 key landmarks and all surface points on a 3D face. The second step is to rearrange the geodesic distances related to each key landmark into a geodesic-stripe. The final step is to combine all the geodesic-stripes to form the geodesic-map. An example of generating the geodesic-map representation for a 3D face is illustrated in Fig. 3, where the colour of the surface represents geodesic distances to a specific landmark. In the geodesic-map, the row index corresponds to the order of landmarks and the column index matches the order of the surface points. From the figure, it can be seen that the 3D faces are transformed from the \(\mathbb {R}^{3}\) space to an \(\mathbb {R}^{2}\) image space, which enables an efficient dense point correspondence search. Figure 4 shows examples of 3D faces and their corresponding geodesic-maps.
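The three construction steps above can be sketched as follows. Geodesic distances are approximated here by shortest paths on a k-nearest-neighbour graph of the point cloud, which is an assumption made for this sketch; the paper computes geodesics on the facial surface itself.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra
from scipy.spatial import cKDTree

def geodesic_map(points, landmark_idx, k=8):
    """Approximate geodesic-map: one row (geodesic-stripe) per landmark, one
    column per surface point, each entry the graph-shortest-path distance
    from that landmark to that point.

    points       : (n, 3) array of surface points.
    landmark_idx : indices of the landmark points within `points`.
    k            : number of nearest neighbours used to build the graph.
    """
    tree = cKDTree(points)
    dist, nbr = tree.query(points, k=k + 1)   # nearest k+1 includes the point itself
    n = len(points)
    rows = np.repeat(np.arange(n), k)
    cols = nbr[:, 1:].ravel()                 # drop self-neighbour
    vals = dist[:, 1:].ravel()
    graph = csr_matrix((vals, (rows, cols)), shape=(n, n))
    # Each row of the result is the stripe of one landmark; stacking the rows
    # for all landmarks yields the geodesic-map.
    return dijkstra(graph, directed=False, indices=landmark_idx)
```

Passing the indices of all 12 key landmarks yields a 12-row map whose columns index the surface points, matching the layout described above.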
4 Geodesic-Map Matching
Having created the geodesic-maps, the pairwise dense point correspondences among faces can be estimated using standard image-based matching techniques. For faces from the same person, this can be achieved by using cross-correlation [22], in which a column of the given face's geodesic-map is cross-correlated with the columns of the target face's geodesic-map. The column of the target face's geodesic-map with the highest cross-correlation value is considered to be in correspondence with the point in question from the given face, as shown in Fig. 5.
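A minimal sketch of this column-wise matching, using normalised cross-correlation between geodesic-map columns; the exact normalisation is an assumption of this sketch.

```python
import numpy as np

def match_columns(gmap_a, gmap_b):
    """For every column (surface point) of geodesic-map A, find the column of
    geodesic-map B with the highest normalised cross-correlation.

    gmap_a, gmap_b : (n_landmarks, n_points) arrays.
    Returns c with c[i] = index of B's column matching A's column i.
    """
    def normalise(m):
        # zero-mean, unit-norm columns so the dot product is a correlation
        m = m - m.mean(axis=0, keepdims=True)
        n = np.linalg.norm(m, axis=0, keepdims=True)
        return m / np.where(n == 0, 1.0, n)

    scores = normalise(gmap_a).T @ normalise(gmap_b)  # (cols_A, cols_B)
    return scores.argmax(axis=1)
```

If the two maps describe the same surface with the columns merely permuted, this matching recovers the permutation exactly.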
For faces from different persons, the geodesic-map cannot be directly used for the correspondence search because of the characteristics of the geodesic distance. Computing the point correspondence between faces of different subjects is nevertheless required to construct the statistical shape model for the face recognition task. To tackle this problem, a data warping process is introduced prior to the geodesic-map matching process when it is applied to faces from different subjects. The data warping is based on the Thin-Plate Spline (TPS) warping technique [24] and is applied to the target face. The 12 pairs of key landmarks extracted from both the original and target faces are used as the control points for the calculation of the warping function, which is then used to warp the whole target face to match the original face, so that the standard geodesic-map matching described above can be carried out. This process is able to minimise the non-rigid deformation caused by differences between persons.
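A sketch of the warping step, using SciPy's RBF interpolator with a thin-plate-spline kernel as a stand-in for the TPS formulation of [24]; function and variable names here are illustrative.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_warp(target_landmarks, original_landmarks, target_points):
    """Warp all points of the target face so that its landmarks move onto the
    original face's landmarks, interpolating smoothly elsewhere.

    target_landmarks, original_landmarks : (12, 3) corresponding control points.
    target_points                        : (n, 3) full target-face point cloud.
    """
    warp = RBFInterpolator(target_landmarks, original_landmarks,
                           kernel='thin_plate_spline')
    return warp(target_points)
```

Because the thin-plate-spline model includes an affine term, a purely rigid or translational difference between the two landmark sets is reproduced exactly, while residual non-rigid differences are interpolated smoothly.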
5 Dimensionality Reduction
Statistical models have been successfully used for face analysis and recognition for many years. The core of such models is dimensionality reduction, which often serves the purpose of feature vector extraction. PCA is a popular choice, producing a compact representation based on low-dimensional linear manifolds [25]. However, such models fail to discover the underlying nonlinear structure of facial data, especially for faces containing facial expressions. Another choice is the Locality Preserving Projection (LPP), which is able to handle a wider range of data variability while preserving the local structure linked to the nonlinear structure of facial data. In this work, LPP was used as the statistical model and its performance was evaluated for the task of face recognition. Details of the method can be found in [26].
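For illustration, a minimal LPP sketch is given below; the affinity construction (a k-NN graph with heat-kernel weights) and the regularisation term are assumptions of this sketch rather than the paper's exact settings.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def lpp(X, n_components=2, k=5, t=1.0):
    """Locality Preserving Projection (sketch).

    X : (n_samples, n_features) data matrix.
    Returns a projection matrix W of shape (n_features, n_components) that
    minimises sum_ij ||W^T x_i - W^T x_j||^2 S_ij, solved via the generalized
    eigenproblem X^T L X w = lambda X^T D X w (smallest eigenvalues).
    """
    d2 = cdist(X, X, 'sqeuclidean')
    # symmetric k-NN adjacency with heat-kernel weights
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]
    S = np.zeros_like(d2)
    rows = np.repeat(np.arange(len(X)), k)
    S[rows, idx.ravel()] = np.exp(-d2[rows, idx.ravel()] / t)
    S = np.maximum(S, S.T)
    D = np.diag(S.sum(axis=1))      # degree matrix
    L = D - S                       # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-9 * np.eye(X.shape[1])  # small ridge for stability
    vals, vecs = eigh(A, B)         # eigenvalues in ascending order
    return vecs[:, :n_components]
```

Projecting new samples is then just a matrix product, `X_new @ W`, which is what makes LPP attractive for the fitting stage described later in the paper.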
6 New Dataset Fitting
Given the eigenvectors of the statistical model extracted from the training dataset, the next processing stage is to estimate the feature vectors that synthesise the shape of faces from a new dataset using the constructed statistical model. This is usually achieved by a recursive data registration in which the shape and pose parameters are iteratively estimated in turn. While the pose parameters control the orientation and position of the model, the shape parameters encapsulate the deformation of the model. Instead of applying one of the widely used approaches, the modified Iterative Closest Point (ICP) registration, a hybrid fitting based on the combination of the geodesic-map representation and feature subspace projection is proposed in this work. In order to solve all unknown parameters effectively, the following standard optimization scheme is used:
1. Create the geodesic-map representation for both the model and the new face using the method described in Sect. 3.

2. Estimate the dense point correspondence between the model and the new face using the geodesic-map matching process explained in Sect. 4.

3. Calculate the feature vector, \(\mathbf {\alpha }\), for the new face using back-projection onto the created feature subspace:
$$\begin{aligned} \mathbf {\alpha }=\mathbf {W}_{opt}^{T}\widehat{\mathbf {x}} \end{aligned}$$
(3)
where \(\widehat{\mathbf {x}}\) contains the surface points related to the estimated dense correspondences from the new face and \(\mathbf {W}_{opt}\) is the matrix containing the feature vectors.

4. Generate a new instance of the statistical model, \(\mathbf {Q}\), using the feature vector \(\mathbf {\alpha }\):
$$\begin{aligned} \mathbf {Q}=\mathbf {W}_{opt}\mathbf {\alpha } \end{aligned}$$
(4)
and repeat steps 2 to 4 until the preset convergence condition is reached.
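The optimization scheme above can be sketched as the following loop; the `correspond` callback stands in for the geodesic-map matching of step 2, and the convergence test on \(\mathbf {\alpha }\) is an assumed concrete form of the "preset convergence condition".

```python
import numpy as np

def fit_new_face(W_opt, correspond, n_iter=10, tol=1e-6):
    """Alternate steps 2-4: correspondence, back-projection (Eq. 3) and
    model-instance synthesis (Eq. 4) until alpha stops changing.

    W_opt      : (n_dims, n_modes) matrix whose columns are the feature vectors.
    correspond : callback taking the current model instance Q and returning the
                 corresponded surface points x_hat of the new face (step 2).
    """
    Q = np.zeros(W_opt.shape[0])
    alpha = np.zeros(W_opt.shape[1])
    for _ in range(n_iter):
        x_hat = correspond(Q)          # step 2: dense correspondences
        alpha_new = W_opt.T @ x_hat    # step 3: back-projection, Eq. (3)
        Q = W_opt @ alpha_new          # step 4: new model instance, Eq. (4)
        delta = np.linalg.norm(alpha_new - alpha)
        alpha = alpha_new
        if delta < tol:
            break
    return alpha, Q
```

With an orthonormal \(\mathbf {W}_{opt}\) and fixed correspondences, the loop converges in a single pass to the subspace projection of the corresponded points.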
In this optimization scheme, the geodesic-map representation and map matching serve a similar purpose as in the training stage, estimating the correspondence between the model and the new face. Since the model has learnt non-rigid deformation from faces across different identities in the training set and can be adapted to match the deformation in the new dataset, the TPS warping technique described in Sect. 4 is no longer needed. Furthermore, the use of the proposed method speeds up the whole fitting process for the new dataset, saving up to \(70\,\%\) of the computation time on average compared with the widely used modified ICP registration [19, 27]. A few examples of fitting results generated using the LPP-based method are shown in Fig. 6. From the figure it can be seen that the shapes of the synthesised faces are very close to the new faces.
It is worth noting that the feature vector \(\mathbf {\alpha }\) controls the shape of the model in order to match it to the new face. It therefore contains the geometrical information of the face and is used as the feature vector for the classification of face identity in this work. A variety of classification methods could be applied, including Nearest-Neighbour, Naive Bayes and Support Vector Machines. For the sake of simplicity, and to demonstrate the discriminative nature of the shape parameters \(\mathbf {\alpha }\) as the proposed feature vector, the Nearest-Neighbour classifier is chosen for the face classification in this work.
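A minimal sketch of Nearest-Neighbour classification in the \(\mathbf {\alpha }\) feature space; ranking all gallery identities by distance also yields the match order needed for the CMC evaluation used in the experiments.

```python
import numpy as np

def nearest_neighbour(gallery_alphas, gallery_labels, probe_alpha):
    """Rank gallery identities by Euclidean distance to the probe's alpha
    vector; the first label in the returned list is the rank-1 identity.

    gallery_alphas : (n_gallery, n_modes) array of training feature vectors.
    gallery_labels : list of identity labels, one per gallery sample.
    probe_alpha    : (n_modes,) feature vector of the test face.
    """
    d = np.linalg.norm(gallery_alphas - probe_alpha, axis=1)
    order = np.argsort(d)
    return [gallery_labels[i] for i in order]
```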
7 Experimental Results
To show the effectiveness of the proposed method for face recognition tasks, two publicly available 3D facial databases, BU-3DFE and Gavab, were used for the evaluation in this work. The BU-3DFE database consists of 2,500 3D faces from 100 people, with ages ranging from 18 to 70 years old and a variety of ethnic origins including White, Black, East Asian, Middle East Asian, Indian and Hispanic Latino [28]. Each person has seven basic expressions. The Gavab database contains 549 face scans from 61 different subjects [29]. Each subject was scanned nine times under different poses and expressions, giving six neutral scans and three scans with an expression. The scans with missing data include one scan looking up (\(+35^{\circ }\)), one looking down (\(-35^{\circ }\)), one left profile (\(-90^{\circ }\)), one right profile (\(+90^{\circ }\)), as well as one with a random pose.
7.1 Facial Expression Changes
The robustness to facial expression variation is an important aspect of face recognition. To test the invariance of face recognition with respect to facial articulation, a series of tests was run and the performance of the proposed method was compared with that of state-of-the-art methods, including the Geodesic Polar Representation [10], Patch Geodesic Moments [11] and Canonical Image Representation [23]. In order to make a direct comparison with the results reported in [11], the same experimental protocol used in [11] is adopted here. The performance is measured in terms of the rank-1 recognition rate and the Cumulative Matching Characteristics (CMC) [30]. In the test, all faces with neutral expression from the BU-3DFE database are used to form the statistical models, while the rest of the database is used as the testing faces.
The rank-1 recognition rates of the proposed approach are given in Table 1, together with the results of the Patch Geodesic Moments, Geodesic Polar Representation and Canonical Image Representation reported in [11]. From Table 1, it can be seen that, among the four methods, the LPP-based approach achieved the highest recognition rate, with an average accuracy of \(89\,\%\), outperforming the state-of-the-art 3D expression-invariant techniques by at least \(4\,\%\). It is worth noting that the recognition rates for different expressions range from \(87\,\%\) to \(94\,\%\). This shows that the proposed statistical shape modelling scheme handles facial expression changes well, although facial expressions still introduce some uncertainty into the face recognition task.
The CMC curves of the proposed method together with those of the benchmark methods are shown in Fig. 7. From the figure it can be noted that the recognition rate of the proposed LPP-based method is consistently the highest.
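The two performance measures used in this section can be computed as follows; the input format (one ranked gallery-label list per probe, best match first) is an assumption of this sketch.

```python
import numpy as np

def cmc_curve(rankings, true_labels):
    """Cumulative Matching Characteristic curve.

    rankings    : for each probe, the list of gallery identity labels ordered
                  from best to worst match.
    true_labels : the correct identity of each probe.
    Entry r of the result is the fraction of probes whose correct identity
    appears within the top (r+1) matches; entry 0 is the rank-1 rate.
    """
    n_ranks = max(len(r) for r in rankings)
    hits = np.zeros(n_ranks)
    for ranked, truth in zip(rankings, true_labels):
        hits[ranked.index(truth)] += 1   # position of the true identity
    return np.cumsum(hits) / len(rankings)
```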
7.2 Data Resolution Variation
In many practical applications, the data resolution varies because of the specification of the data acquisition system, data storage requirements or the use of preprocessing. This often requires a face recognition system to cope with low-resolution data. In order to evaluate the capability of the proposed method in handling low-resolution data in the face recognition task, a set of experiments was conducted using data with \(75\,\%\), \(50\,\%\) and \(25\,\%\) of the original resolution as the test dataset. The original resolution is approximately 5,000 surface points per face. In terms of experimental strategy, all 2,500 faces from the BU-3DFE database were used. The faces were divided into ten subsets, with each subset containing all 100 subjects and all seven expressions. One subset was selected for testing while the remaining subsets were used for training; the faces in the training set were not used for testing. The experiments were repeated ten times with a different subset selected for testing each time. Figure 8 reports the CMC of the proposed method. From the figure it can be seen that the method achieves reasonable recognition rates at \(75\,\%\) and \(50\,\%\) of the original resolution, with a noticeable drop at \(25\,\%\).
7.3 Missing Data
In order to evaluate the proposed method on the missing data challenge and compare it with the results achieved by the existing benchmark methods reported in [31], the same experimental protocol introduced in [31] was used here. The benchmark methods include sparse representation [32], 3D ridge images [33], concave and convex regions [34] and elastic radial curves [31]. In the experiment, the frontal scans with neutral expression of each person were taken as the training set. The rest of the scans were used for testing. Since the proposed approach is not designed for facial scans with a large proportion of missing data, the scans for the left and right profiles were not included in the testing. Table 2 illustrates the rank-1 recognition accuracy for the different categories of testing faces. From the table, it can be seen that the proposed approach provides a high recognition accuracy under both expression and pose variations, outperforms the majority of the existing methods, and performs close to the best recognition accuracy, achieved by the elastic radial curves [31].
8 Conclusions
This paper presents an effective representation combined with a statistical shape modelling scheme for 3D face recognition. Given a set of training faces with a variety of facial expressions, the proposed scheme is able to accurately estimate the dense point correspondence among the faces using the geodesic-map, construct the statistical shape models and synthesise appropriate shapes for a new face. The face recognition experiments show that the proposed method handles well the non-rigid deformations caused by changes in facial expression (changes in appearance, e.g. due to weight gain, were not tested here) as well as a certain level of missing data. The experiments also provide evidence that the proposed method can cope with the face recognition task at lower data resolutions. The use of the geodesic-map also helps improve the efficiency of the entire face recognition task. The research will be extended further by taking into consideration other practical factors, such as independent databases, lack of training samples and occlusion.
References
Chellappa, R., Wilson, C., Sirohey, S.: Human and machine recognition of faces: a survey. Proc. IEEE 83(5), 705–740 (1995)
Jafri, R., Arabnia, H.R.: A survey of face recognition techniques. J. Inf. Process. Syst. 5(2), 41–68 (2009)
Kong, S.G., Heo, J., Abidi, B.R., Paik, J., Abidi, M.A.: Recent advances in visual and infrared face recognition – a review. Comput. Vis. Image Underst. 97, 103–135 (2005)
Lu, Y., Zhou, J., Yu, S.: A survey of face detection, extraction and recognition. Comput. Inf. Pattern Recogn. 22(2), 163–195 (2003)
Lu, X., Jain, A.K.: Deformation modeling for robust 3D face matching. IEEE Trans. Pattern Anal. Mach. Intell. 30(8), 1346–1356 (2008)
Shi, J., Samal, A., Marx, D.: How effective are landmarks and their geometry for face recognition? Comput. Vis. Image Underst. 102, 117–133 (2006)
Gupta, S., Aggarwal, J.K., Markey, M.K., Bovik, A.C.: 3D face recognition founded on the structural diversity of human faces. In: IEEE Conference on Computer Vision and Pattern Recognition, Minnesota (2007)
Zhang, L., Razdan, A., Farin, G., Femiani, J., Bae, M., Lockwood, C.: 3D face authentication and recognition based on bilateral symmetry analysis. Vis. Comput. 22(1), 43–55 (2006)
Nagamine, T., Uemura, T., Masuda, I.: 3D facial image analysis for human identification. In: IEEE Conference on Computer Vision and Pattern Recognition, Champaign (1992)
Mpiperis, I., Malassiotis, S., Strintzis, M.G.: 3D face recognition with the geodesic polar representation. IEEE Trans. Inf. Forensics Secur. 2(3), 537–547 (2007)
Hajati, F., Raie, A.A., Gao, Y.: 2.5D face recognition using patch geodesic moments. Pattern Recogn. 45, 969–982 (2012)
Chua, C., Han, F., Ho, Y.: 3D human face recognition using point signature. In: IEEE International Conference on Automatic Face and Gesture Recognition, Washington D.C. (2000)
Wu, Z., Wang, Y., Pan, G.: 3D face recognition using local shape map. In: IEEE International Conference on Image Processing, Singapore (2004)
Tanaka, H.T., Ikeda, M., Chiaki, H.: Curvature-based face surface recognition using spherical correlation – principal directions for curved object recognition. In: IEEE International Conference on Automatic Face and Gesture Recognition, Nara (1998)
Huang, D., Ardabilian, M., Wang, Y., Chen, Y.: 3D face recognition using eLBP-based facial description and local feature hybrid matching. IEEE Trans. Inf. Forensics Secur. 7(5), 1551–1565 (2012)
Queirolo, C.C., Silva, L., Bellon, O.R.P., Segundo, M.P.: 3D face recognition using simulated annealing and the surface interpenetration measure. IEEE Trans. Pattern Anal. Mach. Intell. 32(2), 206–219 (2010)
Moreno, A.B., Sanchez, A., Velez, J.F., Diaz, F.J.: Face recognition using 3D surface-extracted descriptors. In: International Conference on Irish Machine Vision and Image Processing, Coleraine (2003)
Xu, C., Tan, T., Li, S., Wang, Y., Zhong, C.: Learning effective intrinsic features to boost 3D-based face recognition. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3952, pp. 416–427. Springer, Heidelberg (2006)
Quan, W., Matuszewski, B.J., Shark, L.-K.: 3D shape matching for face analysis and recognition. In: International Conference on Pattern Recognition Applications and Methods, Lisbon (2015)
Lu, X., Jain, A.K., Colbry, D.: Matching 2.5D face scans to 3D models. IEEE Trans. Pattern Anal. Mach. Intell. 28(1), 31–43 (2006)
Quan, W., Matuszewski, B.J., Shark, L.-K.: Facial asymmetry analysis based on 3D dynamic scans. In: IEEE International Conference on Systems, Man and Cybernetics, Seoul (2012)
Bouttier, J., Francesco, P.D., Guitter, E.: Geodesic distance in planar graphs. Nucl. Phys. 663(3), 535–567 (2003)
Bronstein, A.M., Bronstein, M.M., Kimmel, R.: Three-dimensional face recognition. Int. J. Comput. Vision 64(1), 5–30 (2005)
Bookstein, F.L.: Principal warps: thin-plate splines and the decomposition of deformations. IEEE Trans. Pattern Anal. Mach. Intell. 11(6), 567–585 (1989)
Belhumeur, P.N., Hespanha, J.P., Kriegman, D.J.: Eigenfaces vs. fisherfaces: recognition using class specific linear projection. IEEE Trans. Pattern Anal. Mach. Intell. 19(7), 711–720 (1997)
He, X., Yan, S., Hu, Y., Niyogi, P., Zhang, H.J.: Face recognition using Laplacianfaces. IEEE Trans. Pattern Anal. Mach. Intell. 27(3), 328–340 (2005)
Quan, W., Matuszewski, B.J., Shark, L.-K.: Facial expression biometrics using statistical shape models. EURASIP J. Adv. Signal Process. 2009(1), 1–17 (2009)
Yin, L., Wei, X., Sun, Y., Wang, J., Rosato, M.J.: A 3D facial expression database for facial behavior research. In: IEEE International Conference on Automatic Face and Gesture Recognition, Dublin (2006)
Moreno, A.B., Sanchez, A.: GavabDB: a 3D face database. In: COST Workshop on Biometrics on the Internet: Fundamentals, Advances and Applications, Nara (2004)
Rizvi, S.A., Phillips, P.J., Moon, H.: The FERET verification testing protocol for face recognition algorithms. In: IEEE International Conference on Automatic Face and Gesture Recognition, Nara (1998)
Drira, H., Amor, B.B., Mohamed, D., Srivastava, A.: Pose and expression-invariant 3D face recognition using elastic radial curves. In: British Machine Vision Conference, Aberystwyth (2010)
Li, X., Jia, T., Zhang, H.: Expression-insensitive 3D face recognition using sparse representation. In: Conference on Computer Vision and Pattern Recognition, Kyoto (2009)
Mahoor, M.H., Abdel-Mottaleb, M.: Face recognition based on 3D ridge images obtained from range data. Pattern Recogn. 42(3), 445–451 (2009)
Berretti, S., Del Bimbo, A., Pala, P.: 3D face recognition by modeling the arrangement of concave and convex regions. In: Marchand-Maillet, S., Bruno, E., Nürnberger, A., Detyniecki, M. (eds.) AMR 2006. LNCS, vol. 4398, pp. 108–118. Springer, Heidelberg (2007)
© 2015 Springer International Publishing Switzerland

Quan, W., Matuszewski, B.J., Shark, L.-K. (2015). 3D Face Recognition Using Geodesic-Map Representation and Statistical Shape Modelling. In: Fred, A., De Marsico, M., Figueiredo, M. (eds) Pattern Recognition: Applications and Methods. ICPRAM 2015. Lecture Notes in Computer Science, vol 9493. Springer, Cham. https://doi.org/10.1007/978-3-319-27677-9_13