Face Recognition Performance Comparison Between Real Faces and Pose Variant Face Images from Image Display Device

  • Mi-Young Cho
  • Young-Sook Jeong
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9357)


Face recognition technology, unlike other biometric methods, is conveniently accessible with only a camera. Consequently, it has attracted enormous interest in a variety of applications, including face identification, access control, security, surveillance, smart cards, law enforcement, and human-computer interaction. However, face recognition systems are still not robust enough, especially in unconstrained environments, and their recognition accuracy is still not acceptable. In this paper, to measure the performance reliability of face recognition systems, we expand the performance comparison between real faces and face images from the recognition perspective and verify the adequacy of performance test methods that use an image display device.


Keywords: Face recognition · Image display device · Performance evaluation

1 Introduction

Face recognition is a widely used biometric technology because it is more direct, user friendly, and convenient than other biometric approaches. The technology is now significantly advanced and has great potential in application systems. However, it is difficult to guarantee its performance because test methods for real environments are insufficient. The best method is direct evaluation with human subjects in a real environment. Unfortunately, it is practically impossible to gather the same group of people repeatedly, in the same way, over a lengthy period of time; that is, such tests cannot guarantee objectivity and reproducibility.

There are many approaches to evaluating face recognition performance at the system level, including methods using an algorithm [1], a mannequin [2], and a high-definition photograph [3]. The first method simply evaluates the performance of an algorithm installed in a face recognition system; however, the performance of an algorithm alone cannot guarantee the performance of the whole system. The second method uses a mannequin instead of a real human face, which raises a number of problems because the mannequin's surface material differs from human skin. Last, the method using a high-definition photograph overcomes some of these problems, but it still has minor difficulties with automatic control in interoperation with a computer and lacks reproducibility in real situations.

In this paper, we expand the performance comparison between real faces and face images from the recognition perspective and verify the adequacy of performance test methods using an image display device. The paper is organized as follows: Sect. 2 explains the limitations of previous works. Section 3 describes how the facial DB was constructed. Section 4 presents and analyzes the experimental results. Section 5 concludes the paper.

2 Previous Works

In previous works, we introduced a performance evaluation method for face recognition that uses face images shown on a high-definition monitor and demonstrated the similarity between real faces and such face images [4, 10]. However, that work reflects real-environment performance only to a limited extent, since it tested frontal-pose images only.

Recognizing faces reliably across changes in pose and illumination has proved to be a much more difficult problem [11]. The proposed test method therefore needs verification with respect to pose as well as illumination. In this paper, we expand the previous works and compare face recognition performance across various poses (Fig. 1).
Fig. 1.

Previous works.

3 Facial DB

Most facial image sets used to evaluate face recognition algorithms, such as FERET [5], PF07 [6], and CMU PIE [7], could in principle be used for the proposed test method. However, most of their images are inadequate because they would appear at low resolution on the image display device. To overcome this challenge, a high-resolution facial DB was required.

To obtain subject images under various pose conditions, seven cameras were used; their locations are shown in Fig. 2. We took ultra-high-definition images with a Sony NEX-7 so that the face area occupied at least two thirds of the whole image. The camera heights were fixed, and the chair height was adjusted to each subject's height.
Fig. 2.

Environment for capturing real face images.

We captured 4200 real face images from 60 subjects, under ten different lighting directions and seven poses per subject. Figure 3 shows sample images for one subject.
Fig. 3.

Sample images for one subject.
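The capture protocol above (60 subjects × 10 lighting directions × 7 poses = 4200 images) can be sketched as simple bookkeeping code. The file-naming scheme and pose labels below are assumptions for illustration; the paper does not specify them.

```python
# Sketch of the capture-session bookkeeping described above.
# Naming scheme and pose labels are hypothetical, not from the paper.
from itertools import product

SUBJECTS = range(1, 61)    # 60 subjects
LIGHTINGS = range(1, 11)   # 10 lighting directions
POSES = ["front", "left15", "left30", "left45",
         "right15", "right30", "right45"]  # 7 camera positions (assumed labels)

def image_id(subject: int, lighting: int, pose: str) -> str:
    """Build a hypothetical file stem for one captured image."""
    return f"s{subject:02d}_l{lighting:02d}_{pose}"

all_images = [image_id(s, l, p)
              for s, l, p in product(SUBJECTS, LIGHTINGS, POSES)]
print(len(all_images))  # 4200, matching 60 x 10 x 7
```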

For the re-capture, we displayed the high-definition images captured with the camera on a 27-inch image display device to provide an output similar to a real face. The display was calibrated and characterized according to ISO 15076-1:2010 [8], which specifies the criteria for color management and standard image reproduction. To ensure proper display output, we used a 2.2 gamma tone reproduction curve and a D65 white-point color temperature, as stated in IEC 61966-2-1:1999 [9], which contains the sRGB and HDTV color space standards. The procedure for constructing the face image DB is presented in Fig. 4.
Fig. 4.

Procedure for building face image DB.
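The 2.2 gamma tone reproduction mentioned above can be illustrated with a minimal sketch. Note this is a simplification: IEC 61966-2-1 actually defines a piecewise sRGB transfer curve, of which a plain power law is only an approximation.

```python
# Minimal sketch of a 2.2 gamma tone reproduction curve (a plain power
# law; the true sRGB curve in IEC 61966-2-1 is piecewise).
def encode_gamma(linear: float, gamma: float = 2.2) -> float:
    """Map a normalized linear-light value in [0, 1] to a display drive value."""
    if not 0.0 <= linear <= 1.0:
        raise ValueError("expected a normalized linear value in [0, 1]")
    return linear ** (1.0 / gamma)

def decode_gamma(encoded: float, gamma: float = 2.2) -> float:
    """Inverse mapping: display drive value back to linear light."""
    return encoded ** gamma

# Round trip: encoding then decoding recovers the original value.
assert abs(decode_gamma(encode_gamma(0.5)) - 0.5) < 1e-9
```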

4 Experiment

This experiment verifies the similarity of real faces and face images from an image display device from the perspective of face recognition performance, focusing in particular on changes in recognition performance according to pose. Each test engine registered ten frontal-pose images per subject, taken under ten lighting conditions, and was then scored on test images that consist of six groups according to pose. Figure 5 illustrates sample face images for registration and testing.
Fig. 5.

Registration and test purpose sample images.
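The registration/test protocol above (frontal gallery per subject, probes scored per pose group) can be sketched as a small evaluation loop. The matcher passed in is a stand-in for illustration, not one of the commercial engines used in the paper.

```python
# Hedged sketch of the evaluation protocol: register a frontal gallery
# per subject, then compute a recognition rate for each pose group.
from collections import defaultdict

def evaluate(engine_match, gallery, probes):
    """Recognition rate (%) per pose group.

    gallery: {subject_id: [registered frontal images]}
    probes:  [(true_subject_id, pose_label, probe_image)]
    engine_match(image, gallery) -> predicted subject_id (stand-in engine)
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for subject, pose, image in probes:
        totals[pose] += 1
        if engine_match(image, gallery) == subject:
            hits[pose] += 1
    return {pose: 100.0 * hits[pose] / totals[pose] for pose in totals}

# Demo with a toy "engine" that looks predictions up in a table.
gallery = {1: ["f1"], 2: ["f2"]}
probes = [(1, "left15", "a"), (2, "left15", "b"), (1, "right15", "c")]
rates = evaluate(lambda img, g: {"a": 1, "b": 2, "c": 2}[img], gallery, probes)
print(rates)  # {'left15': 100.0, 'right15': 0.0}
```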

The performance comparison results for four commercially used face recognition engines are shown in Table 1. To analyze the similarity of real faces and the facial images captured from the image display device, the deviations in recognition rate were analyzed. As a result, the maximum deviation between real faces and face images is 1.56.
Table 1.

Overall results: face recognition rate (%) for real faces and face images.
Figure 6 shows the performance changes according to pose for each engine. Engines A and B produced results for all test images; because of their pose coverage, the other engines produced results only for face images of four poses (top/bottom/left/right 15°). The x-axis represents the recognition rate and the y-axis represents the pose; the numbers are the recognition-rate deviations between real faces and face images. Although each engine exhibited different recognition performance according to pose, the deviations between real faces and face images were all less than 3 %. In other words, there is no significant difference in face recognition performance when face images are used instead of real faces.
Fig. 6.

Performance changes according to the pose for (a) Engine A. (b) Engine B. (c) Engine C. (d) Engine D.
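The deviation analysis above reduces to a per-pose absolute difference between the two recognition-rate series. The rates below are illustrative placeholders, not the paper's measured data; only the < 3 % bound is taken from the text.

```python
# Sketch of the deviation analysis: absolute difference between the
# recognition rates on real faces and on displayed face images, per pose.
# These rates are placeholder values for illustration, not measured data.
real_faces  = {"top15": 92.1, "bottom15": 90.4, "left15": 93.0, "right15": 92.5}
face_images = {"top15": 91.0, "bottom15": 89.9, "left15": 92.2, "right15": 93.8}

deviations = {pose: abs(real_faces[pose] - face_images[pose])
              for pose in real_faces}
max_dev = max(deviations.values())
print(round(max_dev, 2))  # 1.3 for these placeholder rates
assert max_dev < 3.0  # the paper reports all deviations under 3 %
```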

5 Conclusion

In this paper, we expanded the previous works and verified the similarity of real faces and face images from an image display device by comparing changes in face recognition performance according to pose. Based on the comparison results, the proposed method using an image display device can be applied to face recognition performance evaluation at the system level.



This work is partly supported by the R&D program of the Korea Ministry of Trade, Industry and Energy (MOTIE) and the Korea Evaluation Institute of Industrial Technology (KEIT) (project: Technology Development of Service Robot's Performance and Standardization for Movement/Manipulation/HRI/Networking, 10041834).


  1. TTAK.KO-10.0418, Performance Evaluation Method of Face Extraction and Identification Algorithm for Intelligent Robots: Part 1. Performance Evaluation of Recognition Algorithm (2010)
  2. TTAK.KO-10.0419, Performance Evaluation Method of Face Extraction and Identification Algorithm for Intelligent Robots: Part 2. System Level Performance Evaluation using Human Model (Mannequin) of Human Face Recognition (2010)
  3. TTAK.KO-10.0507, Performance Evaluation Method of Face Extraction and Identification Algorithm for Intelligent Robots: Part 3. Performance Evaluation of Face Recognition using Face Photos (2011)
  4. Cho, M.Y., Jeong, Y.S., Chun, B.T.: A study on face recognition performance comparison of real images with images from LED monitor. J. Inst. Electron. Eng. Korea 50(5), 1164–1169 (2013)
  5. Phillips, P.J., Moon, H., Rauss, P.J., Rizvi, S.: The FERET evaluation methodology for face recognition algorithms. IEEE Trans. Pattern Anal. Mach. Intell. 22(10), 1090–1104 (2000)
  6. Lee, H., Park, S., Kang, B., Shin, J., Lee, J., Je, H., Jun, B., Kim, D.: The POSTECH face database (PF07) and performance evaluation. In: Proc. IEEE International Conference on Automatic Face & Gesture Recognition, pp. 1–6 (2008)
  7. Sim, T., Baker, S., Bsat, M.: The CMU pose, illumination, and expression database. IEEE Trans. Pattern Anal. Mach. Intell. 25(12), 1615–1618 (2003)
  8. ISO 15076-1:2010, Image technology colour management – Architecture, profile format and data structure – Part 1: Based on ICC.1:2010 (2010)
  9. IEC 61966-2-1:1999, Multimedia systems and equipment – Colour measurement and management – Part 2-1: Colour management (1999)
  10. Cho, M.-Y., Jeong, Y.-S.: Face recognition performance comparison of fake faces with real faces in relation to lighting. J. Internet Serv. Inf. Secur. (JISIS) 4(4), 82–90 (2014)
  11. Phillips, P.J., et al.: Face recognition vendor test 2002. In: IEEE International Workshop on Analysis and Modeling of Faces and Gestures (AMFG 2003). IEEE (2003)

Copyright information

© IFIP International Federation for Information Processing 2015

Authors and Affiliations

  1. Electronics and Telecommunications Research Institute (ETRI), Daejeon, Korea
