Continuity Labeling Technique of Multiple Face in Multiple Frame

Conference paper
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 301)

Abstract

Research on recognizing and tracking objects has recently been carried out actively, and face recognition and tracking in particular have a large number of application fields. Existing methods for recognizing and tracking objects encounter many difficulties when there are multiple targets of the same type. This study concerns a method for continuously labeling the same face across frames in videos that contain the faces of multiple people. The core of the algorithm is divided into detecting the face regions within a single frame and recognizing faces using the previous frame, applying suitable methods to each stage. The usefulness of the proposed method was demonstrated through experiments, and some achievements were obtained from the test results.
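The abstract only outlines the two stages (per-frame face detection, then labeling against the previous frame), so the following is a minimal sketch of that idea, not the paper's actual method: it assumes OpenCV's Haar-cascade detector in place of the paper's detector and simple nearest-centroid matching in place of its recognition step; the label names, thresholds, and the input file name are illustrative assumptions.

```python
# Sketch of continuity labeling across frames (assumed components, see lead-in).
import itertools
import cv2
import numpy as np

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
_new_ids = itertools.count()  # source of fresh labels for unmatched faces

def detect_faces(frame):
    """Return face bounding boxes (x, y, w, h) detected in one frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

def label_faces(prev_labeled, boxes, max_dist=60.0):
    """Carry labels over from the previous frame by nearest-centroid matching;
    faces with no close match in the previous frame receive a new label."""
    labeled = {}
    for (x, y, w, h) in boxes:
        cx, cy = x + w / 2.0, y + h / 2.0
        best, best_d = None, max_dist
        for label, (px, py) in prev_labeled.items():
            d = np.hypot(cx - px, cy - py)
            if d < best_d and label not in labeled:   # greedy, one face per label
                best, best_d = label, d
        if best is None:
            best = "face_{}".format(next(_new_ids))   # new face enters the scene
        labeled[best] = (cx, cy)
    return labeled

# Usage: iterate over a video, keeping the previous frame's labels.
cap = cv2.VideoCapture("input.mp4")   # hypothetical input video
prev = {}
while True:
    ok, frame = cap.read()
    if not ok:
        break
    prev = label_faces(prev, detect_faces(frame))
cap.release()
```

In this sketch the cross-frame association is purely positional; the paper's recognition stage would replace the centroid distance with an appearance-based comparison against the faces of the previous frame.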

Keywords

FLA · Multiple frame · Multiple face tracking

Copyright information

© Springer Science+Business Media Dordrecht 2014

Authors and Affiliations

  1. Department of Computer Science and Engineering, Konkuk University, Seoul, South Korea
  2. Cyber Hacking Security, Seoul Hoseo Technical College, Seoul, South Korea