Affective and Personality Corpora

  • Ante Odić
  • Andrej Košir
  • Marko Tkalčič
Chapter
Part of the Human–Computer Interaction Series (HCIS) book series

Abstract

In this chapter we describe publicly available datasets with personality and affective parameters relevant to the research questions covered by this book. For each dataset we briefly describe the available data, the acquisition procedure, and other relevant details. Three of the datasets were acquired through users’ natural interaction with different services: LDOS-CoMoDa, LJ2M, and myPersonality. Two were acquired in controlled laboratory settings: LDOS-PerAff-1 and DEAP. Finally, we mention four stimuli datasets from the Media Core project (ANET, IADS, ANEW, and IAPS) as well as the 1000 songs dataset. We summarise this information as a quick reference for researchers who want to use these datasets or to prepare their own acquisition procedures.
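
To make the affective parameters in the stimuli datasets concrete, the sketch below loads ANEW-style word norms and filters them along the valence and arousal dimensions. This is a minimal illustration under stated assumptions, not the datasets’ actual distribution format: the file name anew_ratings.csv and the column names are hypothetical, and the layout of the real files depends on the release obtained from the Media Core project. The ratings themselves are mean valence, arousal, and dominance scores on the 1–9 Self-Assessment Manikin scale.

import pandas as pd

# Hypothetical CSV export of ANEW-style norms: one row per word with mean
# valence, arousal, and dominance ratings on the 1-9 SAM scale.
# The file name and column names are assumptions, not the official format.
norms = pd.read_csv("anew_ratings.csv")  # assumed columns: word, valence, arousal, dominance

# Example query: pleasant, low-arousal words (high valence, low arousal).
calm_positive = norms[(norms["valence"] >= 7.0) & (norms["arousal"] <= 4.0)]
print(calm_positive[["word", "valence", "arousal", "dominance"]].head())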

Keywords

Recommender System • International Affective Picture System • Music Video • Acquisition Procedure • Dominance Dimension

References

  1. Saucier, G., Goldberg, L.R.: What is beyond the big five? J. Pers. 66, 495–524 (1998)
  2. Odić, A., Tkalčič, M., Tasič, J.F., Košir, A.: Predicting and detecting the relevant contextual information in a movie-recommender system. Interact. Comput. 25(1), 74–90 (2013)
  3. Tkalčič, M., Košir, A., Tasič, J.: The LDOS-PerAff-1 corpus of facial-expression video clips with affective, personality and user-interaction metadata. J. Multimodal User Interfaces 7(1–2), 143–155 (2013)
  4. Liu, J.Y., Liu, S.Y., Yang, Y.H.: LJ2M dataset: toward better understanding of music listening behavior and user mood. In: IEEE International Conference on Multimedia and Expo (ICME), pp. 1–6 (2014)
  5. Koelstra, S., Mühl, C., Soleymani, M., Lee, J.S., Yazdani, A., Ebrahimi, T., Pun, T., Nijholt, A., Patras, I.: DEAP: a database for emotion analysis using physiological signals. IEEE Trans. Affect. Comput. 3(1), 18–31 (2012)
  6. Kosinski, M., Stillwell, D.J., Graepel, T.: Private traits and attributes are predictable from digital records of human behavior. Proc. Natl. Acad. Sci. (PNAS) 110(15), 5802–5805 (2013)
  7. Soleymani, M., Caro, M.N., Schmidt, E.M., Sha, C.Y., Yang, Y.H.: 1000 songs for emotional analysis of music. In: Proceedings of the 2nd ACM International Workshop on Crowdsourcing for Multimedia, pp. 1–6 (2013)
  8. Bradley, M.M., Lang, P.J.: Affective Norms for English Text (ANET): affective ratings of text and instruction manual. Technical Report D-1, University of Florida, Gainesville, FL (2007)
  9. Bradley, M.M., Lang, P.J.: International Affective Digitized Sounds (IADS): stimuli, instruction manual and affective ratings. Technical Report B-2, The Center for Research in Psychophysiology, University of Florida, Gainesville, FL (1999)
  10. Bradley, M.M., Lang, P.J.: Affective Norms for English Words (ANEW): stimuli, instruction manual and affective ratings. Technical Report C-1, The Center for Research in Psychophysiology, University of Florida, Gainesville, FL (1999)
  11. Lang, P.J., Bradley, M.M., Cuthbert, B.N.: International Affective Picture System (IAPS): affective ratings of pictures and instruction manual. Technical Report A-8, University of Florida, Gainesville, FL (2008)
  12. Herlocker, J.L., Konstan, J.A., Terveen, L.G., Riedl, J.T.: Evaluating collaborative filtering recommender systems. ACM Trans. Inf. Syst. 22(1), 5–53 (2004)
  13. Goldberg, L.R., Johnson, J.A., Eber, H.W., Hogan, R., Ashton, M.C., Cloninger, C.R., Gough, H.G.: The international personality item pool and the future of public-domain personality measures. J. Res. Pers. 40(1), 84–96 (2006)
  14. Zheng, Y., Mobasher, B., Burke, R.D.: The role of emotions in context-aware recommendation. In: Decisions@RecSys, pp. 21–28 (2013)
  15. Codina, V., Ricci, F., Ceccaroni, L.: Local context modeling with semantic pre-filtering. In: Proceedings of the 7th ACM Conference on Recommender Systems, pp. 363–366 (2013)
  16. Košir, A., Odić, A., Kunaver, M., Tkalčič, M., Tasič, J.F.: Database for contextual personalization. Elektrotehniški vestnik 78(5), 270–274 (2011)
  17. Posner, J., Russell, J.A., Peterson, B.S.: The circumplex model of affect: an integrative approach to affective neuroscience, cognitive development, and psychopathology. Dev. Psychopathol. 17(3), 715–734 (2005)
  18. Lang, P.J., Bradley, M.M., Cuthbert, B.N.: International Affective Picture System (IAPS): affective ratings of pictures and instruction manual. Technical Report A-8 (2008)
  19. Tkalčič, M., Burnik, U., Košir, A.: Using affective parameters in a content-based recommender system for images. User Model. User-Adap. Inter. 20(4), 279–311 (2010)
  20. Tkalčič, M., Kunaver, M., Košir, A., Tasič, J.: Addressing the new user problem with a personality based user similarity measure. In: First International Workshop on Decision Making and Recommendation Acceptance Issues in Recommender Systems (DEMRA 2011) (2011)
  21. Leshed, G., Kaye, J.J.: Understanding how bloggers feel: recognizing affect in blog posts. In: CHI ’06 Extended Abstracts on Human Factors in Computing Systems, pp. 1019–1024 (2006)
  22. Bertin-Mahieux, T., Ellis, D.P., Whitman, B., Lamere, P.: The million song dataset. In: Proceedings of the 12th International Society for Music Information Retrieval Conference (ISMIR), Miami, Florida, pp. 591–596 (2011)
  23.
  24. Kandemir, M., Vetek, A., Gönen, M., Klami, A., Kaski, S.: Multi-task and multi-view learning of user state. Neurocomputing 139, 97–106 (2014)
  25. Jirayucharoensak, S., Pan-Ngum, S., Israsena, P.: EEG-based emotion recognition using deep learning network with principal component based covariate shift adaptation. Sci. World J. (2014)
  26. Park, G., Schwartz, H.A., Eichstaedt, J.C., Kern, M.L., Kosinski, M., Stillwell, D.J., Seligman, M.E.: Automatic personality assessment through social media language (2014)
  27. Cantador, I., Fernández-Tobías, I., Bellogín, A., Kosinski, M., Stillwell, D.: Relating personality types with user preferences in multiple entertainment domains. In: UMAP Workshops (2013)
  28.
  29. Soleymani, M., Aljanaki, A., Yang, Y.H., Caro, M.N., Eyben, F., Markov, K., Wiering, F.: Emotional analysis of music: a comparison of methods. In: Proceedings of the ACM International Conference on Multimedia, pp. 1161–1164 (2014)
  30. Weninger, F., Eyben, F., Schuller, B.: On-line continuous-time music mood regression with deep recurrent neural networks. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 5412–5416 (2014)
  31.

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  1. Outfit7 (Slovenian subsidiary Ekipa2 d.o.o.), Ljubljana, Slovenia
  2. Faculty of Electrical Engineering, The User-adapted Communications & Ambient Intelligence Lab (LUCAMI), Ljubljana, Slovenia
  3. Department of Computational Perception, Johannes Kepler University in Linz, Linz, Austria
