Multimedia Tools and Applications, Volume 70, Issue 1, pp 495–523

Robust semi-automatic head pose labeling for real-world face video sequences



Automatic head pose estimation from real-world video sequences is of great interest to the computer vision community, since pose provides prior knowledge for tasks such as face detection and classification. However, developing pose estimation algorithms requires large, labeled, real-world video databases on which computer vision systems can be trained and tested. Manually labeling each frame is tedious, time consuming, and often difficult due to the high uncertainty in the head pose angle estimate, particularly in unconstrained environments that include arbitrary facial expression, occlusion, illumination, etc. To overcome these difficulties, a semi-automatic framework is proposed for labeling temporal head pose in real-world video sequences. The proposed multi-stage labeling framework first detects a subset of frames with distinct head poses over a video sequence, which is then manually labeled by an expert to obtain the ground truth for those frames. The framework provides a continuous head pose label and a corresponding confidence value over the pose angles. Next, an interpolation scheme over the video sequence estimates (i) labels for the frames without manual labels and (ii) corresponding confidence values for the interpolated labels. This confidence value permits an automatic head pose estimation framework to determine the subset of frames to use for further processing, depending on the labeling accuracy required. Experiments performed on an in-house, labeled, large, real-world face video database (which will be made publicly available) show that the proposed framework achieves 96.98 % labeling accuracy when manual labeling is performed on only 30 % of the video frames.
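The keyframe-plus-interpolation idea described above can be sketched in a few lines. Note that this is only an illustrative sketch: the linear interpolation, the exponential confidence decay, and the decay constant are assumptions for demonstration, not the paper's actual interpolation or confidence model.

```python
import numpy as np

def interpolate_pose_labels(n_frames, keyframe_labels, decay=10.0):
    """Estimate per-frame pose labels from a sparse set of manual labels.

    keyframe_labels: dict mapping frame index -> manually labeled pose
    angle (degrees). Returns (labels, confidence) arrays of length
    n_frames. Hypothetical helper; the published method may differ.
    """
    idx = np.array(sorted(keyframe_labels))
    angles = np.array([keyframe_labels[i] for i in idx], dtype=float)
    frames = np.arange(n_frames)
    # Interpolated pose label for every frame between labeled keyframes.
    labels = np.interp(frames, idx, angles)
    # Confidence: 1.0 at manually labeled frames, decaying with
    # temporal distance to the nearest labeled frame (assumed model).
    dist = np.min(np.abs(frames[:, None] - idx[None, :]), axis=1)
    confidence = np.exp(-dist / decay)
    return labels, confidence

labels, conf = interpolate_pose_labels(11, {0: -30.0, 10: 30.0})
```

A downstream pose estimator could then keep only frames with, say, `conf > 0.8`, trading label coverage against labeling accuracy as the abstract describes.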


Semi-automatic labeling · Real-world video sequence · Head pose · Automatic face tracking · Bag-of-words · Manifold



Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

Centre for Intelligent Machines, McGill University, Montréal, Canada
