
Intelligent Tutoring Systems

Volume 7315 of the series Lecture Notes in Computer Science pp 78-83

Categorical vs. Dimensional Representations in Multimodal Affect Detection during Learning

  • Md. Sazzad Hussain — Carnegie Mellon University; National ICT Australia (NICTA), Australian Technology Park; School of Electrical and Information Engineering, University of Sydney
  • Hamed Monkaresi — Carnegie Mellon University; School of Electrical and Information Engineering, University of Sydney
  • Rafael A. Calvo — Carnegie Mellon University; School of Electrical and Information Engineering, University of Sydney


Abstract

Learners experience a variety of emotions during learning sessions with Intelligent Tutoring Systems (ITS). The research community is building systems that are aware of these experiences, generally represented either as a category or as a point in a low-dimensional space. State-of-the-art systems detect these affective states from multimodal data in naturalistic scenarios. This paper provides evidence of how the choice of representation affects the quality of the detection system. We present a user-independent model for detecting learners’ affective states from video and physiological signals using both the categorical and the dimensional representation. Machine learning techniques are used for selecting the best subset of features and classifying the various degrees of emotion under both representations. We provide evidence that the dimensional representation, particularly valence, yields higher detection accuracy.
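The pipeline the abstract describes — select a best-performing feature subset, then classify under each label representation — can be sketched on synthetic data. Everything below is a hypothetical illustration, not the paper's actual method: the feature counts, valence thresholds, crude mean-separation filter, and nearest-centroid classifier are all stand-ins for the real video/physiological features and machine-learning tooling.

```python
# Hypothetical sketch: comparing categorical vs. dimensional (valence) affect
# labels with a simple feature-selection + nearest-centroid pipeline.
# All data is synthetic; thresholds and feature counts are illustrative only.
import random
import statistics

random.seed(0)

def make_sample(valence):
    # 10 synthetic "features": the first 3 weakly track valence, the rest are noise
    signal = [valence + random.gauss(0, 0.5) for _ in range(3)]
    noise = [random.gauss(0, 1) for _ in range(7)]
    return signal + noise

# Each sample carries both label types: a valence value in [-1, 1]
# (dimensional) and a coarse emotion bucket (categorical).
data = []
for _ in range(200):
    v = random.uniform(-1, 1)
    cat = "frustrated" if v < -0.33 else ("neutral" if v < 0.33 else "engaged")
    data.append((make_sample(v), v, cat))

train, test = data[:150], data[150:]

def select_features(samples, label_fn, k=3):
    # Rank features by between-class separation of means (a crude filter method)
    classes = {}
    for x, v, c in samples:
        classes.setdefault(label_fn(v, c), []).append(x)
    scores = []
    for i in range(len(samples[0][0])):
        means = [statistics.mean(x[i] for x in xs) for xs in classes.values()]
        scores.append((max(means) - min(means), i))
    return [i for _, i in sorted(scores, reverse=True)[:k]]

def nearest_centroid_accuracy(train_set, test_set, label_fn, feats):
    # Fit one centroid per class on the selected features, then classify
    groups = {}
    for x, v, c in train_set:
        groups.setdefault(label_fn(v, c), []).append([x[i] for i in feats])
    centroids = {lbl: [statistics.mean(col) for col in zip(*xs)]
                 for lbl, xs in groups.items()}
    correct = 0
    for x, v, c in test_set:
        px = [x[i] for i in feats]
        pred = min(centroids,
                   key=lambda l: sum((a - b) ** 2
                                     for a, b in zip(px, centroids[l])))
        correct += pred == label_fn(v, c)
    return correct / len(test_set)

# Dimensional representation reduced to binary valence (low vs. high)
valence_label = lambda v, c: "high" if v >= 0 else "low"
# Categorical representation: the three emotion buckets
categorical_label = lambda v, c: c

for name, fn in [("valence (dimensional)", valence_label),
                 ("categorical", categorical_label)]:
    feats = select_features(train, fn)
    acc = nearest_centroid_accuracy(train, test, fn, feats)
    print(f"{name}: accuracy {acc:.2f} using features {feats}")
```

The point of the sketch is only that the same feature pool can be re-ranked and re-classified under either labeling scheme, so the two representations can be compared on equal footing, as the paper does.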

Keywords

Affect, multimodality, machine learning, learning interaction