ChaLearn LAP 2016: First Round Challenge on First Impressions - Dataset and Results

  • Víctor Ponce-López
  • Baiyu Chen
  • Marc Oliu
  • Ciprian Corneanu
  • Albert Clapés
  • Isabelle Guyon
  • Xavier Baró
  • Hugo Jair Escalante
  • Sergio Escalera
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9915)


This paper summarizes the data and the results obtained by the teams in the first round of the ChaLearn Looking at People 2016 First Impressions challenge. The goal of the competition was to automatically evaluate five “apparent” personality traits (the so-called “Big Five”) from videos of subjects speaking in front of a camera, using human judgment as the reference. For this edition of the ChaLearn challenge, a novel data set consisting of 10,000 short clips from YouTube videos has been made publicly available. The ground truth for personality traits was obtained from workers of Amazon Mechanical Turk (AMT). To alleviate calibration problems between workers, we used pairwise comparisons between videos, and cardinal trait values were reconstructed by fitting a Bradley-Terry-Luce model with maximum likelihood. The open-source CodaLab platform was used for submission of predictions and scoring. Over a period of two months, the competition attracted 84 participants grouped into several teams; nine teams entered the final phase. Despite the difficulty of the task, the teams made great advances in this round of the challenge.
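The calibration step described above — recovering cardinal trait values from pairwise comparisons via a Bradley-Terry-Luce model — can be illustrated with a minimal sketch. The snippet below is an illustrative toy implementation (not the authors' actual fitting code): it estimates BTL scores from win/loss pairs by gradient ascent on the log-likelihood, where P(i beats j) = σ(sᵢ − sⱼ).

```python
import math

def fit_btl(comparisons, n_items, lr=0.1, n_iters=2000):
    """Fit Bradley-Terry-Luce scores by maximum likelihood.

    comparisons: list of (winner, loser) index pairs.
    Returns latent scores (higher = stronger preference), centered to
    sum to zero since the model is invariant to a constant shift.
    """
    s = [0.0] * n_items
    for _ in range(n_iters):
        grad = [0.0] * n_items
        for w, l in comparisons:
            # P(w beats l) under the logistic (BTL) model
            p = 1.0 / (1.0 + math.exp(s[l] - s[w]))
            grad[w] += 1.0 - p  # d log-likelihood / d s_w
            grad[l] -= 1.0 - p  # d log-likelihood / d s_l
        s = [si + lr * gi for si, gi in zip(s, grad)]
        mean = sum(s) / n_items  # fix the gauge (shift-invariance)
        s = [si - mean for si in s]
    return s

# Toy example: item 0 usually beats 1, which usually beats 2.
data = ([(0, 1)] * 8 + [(1, 0)] * 2 +
        [(1, 2)] * 8 + [(2, 1)] * 2 +
        [(0, 2)] * 9 + [(2, 0)] * 1)
scores = fit_btl(data, 3)
print(scores)  # expect scores[0] > scores[1] > scores[2]
```

Because the BTL log-likelihood is concave, plain gradient ascent suffices for a toy example; at scale one would use a second-order or minorization-maximization solver instead.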


Keywords: Behavior analysis · Personality traits · First impressions



We are very grateful for the funding provided by Microsoft Research, without which this work would not have been possible, and for the kind support provided by Evelyne Viegas, director of the Microsoft AI Outreach project, since the inception of this project. We also thank the Microsoft CodaLab support team for their responsiveness, and particularly Flavio Zhingri. We sincerely thank all the teams who participated in ChaLearn LAP 2016 for their interest and for having contributed to improving the challenge with their comments and suggestions. Special thanks to Marc Pomar for preparing the annotation interface for Amazon Mechanical Turk (AMT). The researchers who joined the program committee and reviewed for the ChaLearn LAP 2016 workshop are gratefully acknowledged. We are very grateful to our challenge sponsors: Facebook, NVIDIA and INAOE, whose support was critical for awarding prizes and travel grants. This work was also partially supported by Spanish projects TIN2015-66951-C2-2-R, TIN2012-39051, and TIN2013-43478-P, and by the European Commission Horizon 2020 project SEE.4C granted under call H2020-ICT-2015, and received additional support from the Laboratoire d’Informatique Fondamentale (LIF, UMR CNRS 7279) of the University of Aix Marseille, France, via the LabeX Archimede program, the Laboratoire de Recherche en Informatique of Paris Sud University, INRIA-Saclay and the Paris-Saclay Center for Data Science (CDS). We thank our colleagues from the speed interview project for their contribution, and particularly Stephane Ayache, Cecile Capponi, Pascale Gerbail, Sonia Shah, Michele Sebag, Carlos Andujar, Jeffrey Cohn, and Erick Watson.



Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Víctor Ponce-López 1, 2, 6
  • Baiyu Chen 4
  • Marc Oliu 6
  • Ciprian Corneanu 1, 2 (corresponding author)
  • Albert Clapés 2
  • Isabelle Guyon 3, 5
  • Xavier Baró 1, 6
  • Hugo Jair Escalante 3, 7
  • Sergio Escalera 1, 2, 3

  1. Computer Vision Center, Campus UAB, Barcelona, Spain
  2. Department of Mathematics, University of Barcelona, Barcelona, Spain
  3. ChaLearn, Berkeley, USA
  4. UC Berkeley, Berkeley, USA
  5. University of Paris-Saclay, Paris, France
  6. EIMT/IN3, Open University of Catalonia, Barcelona, Spain
  7. INAOE, Puebla, Mexico
