Emotion Determination in eLearning Environments Based on Facial Landmarks

Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 620)

Abstract

Massive Open Online Courses (MOOCs) are a new kind of e-Learning environment that makes it possible to address very large numbers of students. MOOCs allow students all over the world to participate in lectures independent of place and time. The courses, which are in some cases joined by more than 100,000 students, are based on small units of teaching material containing videos or texts.

However, today’s MOOCs are static environments that do not take into account the diversity of the students and their situational context. Current MOOCs can be seen as mass processing rather than individual treatment of individual students. Thus MOOCs need to be personalized in addition to being massive.

In order to personalize an e-Learning environment, it is first of all necessary to collect data, or personal factors, about the student, his or her current environment, and his or her situational context. This data is then processed and used as input for adaptive functions. Many input factors are conceivable, such as cognitive style, prior knowledge, the currently used device, or personal goals. These input factors can be grouped into technical, personal, and situational factors. Situational factors in particular may help to support students in different learning situations.
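
As a minimal illustrative sketch (not part of the paper), the three groups of input factors could be modelled as a simple data structure; the concrete factor names and values below are assumptions chosen only to mirror the examples in the text:

```java
import java.util.List;

// Minimal sketch, assuming a plain key/value model for adaptation inputs.
// Only the three groups come from the text; the concrete factors are examples.
public class InputFactors {

    enum FactorGroup { TECHNICAL, PERSONAL, SITUATIONAL }

    record InputFactor(String name, FactorGroup group, String value) { }

    public static void main(String[] args) {
        List<InputFactor> factors = List.of(
            new InputFactor("currentDevice",  FactorGroup.TECHNICAL,   "tablet"),
            new InputFactor("cognitiveStyle", FactorGroup.PERSONAL,    "visual"),
            new InputFactor("priorKnowledge", FactorGroup.PERSONAL,    "beginner"),
            new InputFactor("currentMood",    FactorGroup.SITUATIONAL, "neutral")
        );
        // An adaptive function would consume these factors as its input.
        factors.forEach(f -> System.out.println(f.group() + ": " + f.name() + " = " + f.value()));
    }
}
```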

This paper describes an approach to detecting the student’s current mood as a situational input factor. The mood of a student in a learning situation may be an interesting feature that can be used as instant feedback on the currently used teaching materials. The proposed approach builds on the widespread availability of built-in cameras in the devices used by students, such as smartphones, tablets, or laptop computers. The frames captured by these devices are processed by a Java-based server component that detects selected facial landmarks. Based on the relative positions of these landmarks, the emotion most likely being shown is determined.
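
The paper determines the emotion from the relative positions of detected landmarks. The following is only a hedged sketch of how such a rule-based step might look on the Java server side; the landmark names, geometric features, and thresholds are assumptions for illustration, not the authors’ actual decision rules:

```java
import java.util.Map;

/**
 * Hedged sketch of the emotion-determination step: given 2D facial landmarks,
 * derive a coarse emotion label from their relative positions.
 * Landmark names, features, and thresholds are illustrative assumptions.
 */
public class EmotionEstimator {

    /** Image coordinates; y grows downwards. */
    public record Point(double x, double y) { }

    public static String estimate(Map<String, Point> lm) {
        // Interocular distance normalizes the features against face size and camera distance.
        double eyeDist = dist(lm.get("leftEye"), lm.get("rightEye"));

        double mouthMidY   = (lm.get("upperLip").y() + lm.get("lowerLip").y()) / 2.0;
        double cornerMeanY = (lm.get("leftMouthCorner").y() + lm.get("rightMouthCorner").y()) / 2.0;
        double eyeMeanY    = (lm.get("leftEye").y() + lm.get("rightEye").y()) / 2.0;
        double browMeanY   = (lm.get("leftBrow").y() + lm.get("rightBrow").y()) / 2.0;

        double mouthOpening = (lm.get("lowerLip").y() - lm.get("upperLip").y()) / eyeDist;
        double cornerLift   = (mouthMidY - cornerMeanY) / eyeDist;  // > 0: mouth corners raised (smile)
        double browRaise    = (eyeMeanY - browMeanY) / eyeDist;     // large: eyebrows raised

        if (mouthOpening > 0.35 && browRaise > 0.35) return "surprise";
        if (cornerLift > 0.08)                       return "happiness";
        if (browRaise < 0.15)                        return "anger";
        return "neutral";
    }

    private static double dist(Point a, Point b) {
        return Math.hypot(a.x() - b.x(), a.y() - b.y());
    }
}
```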

The output of the system may be used to adjust the difficulty level of tests or to determine the preferred media type.
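
How such output might feed an adaptive function is illustrated below; the emotion-to-difficulty mapping and the step sizes are assumptions, not part of the paper:

```java
// Illustrative sketch only: adjust the difficulty of the next test item
// based on the detected emotion. Mapping and step sizes are assumptions.
public class DifficultyAdapter {

    /** Difficulty is assumed to be on a 1..10 scale. */
    public static int nextDifficulty(int current, String detectedEmotion) {
        switch (detectedEmotion) {
            case "happiness": return Math.min(current + 1, 10); // student copes well -> harder items
            case "anger":
            case "sadness":   return Math.max(current - 1, 1);  // signs of frustration -> easier items
            default:          return current;                   // neutral or unclear -> keep the level
        }
    }
}
```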

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

Fernuniversität in Hagen, Hagen, Germany
