
Engaging with the Scenario: Affect and Facial Patterns from a Scenario-Based Intelligent Tutoring System

  • Benjamin D. Nye
  • Shamya Karumbaiah
  • S. Tugba Tokel
  • Mark G. Core
  • Giota Stratou
  • Daniel Auerbach
  • Kallirroi Georgila
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10947)

Abstract

Facial expression trackers output measures for facial action units (AUs) and are increasingly used in learning technologies. In this paper, we compile patterns of AUs reported in related work and use factor analysis to search for categories implicit in our corpus. Although there was some overlap between the factors in our data and those in previous work, we also identified factors seen in the broader literature but not previously reported in the context of learning environments. In a correlational analysis, we found evidence for relationships between factors and self-reported traits such as academic effort, study habits, and interest in the subject. We also observed differences in the average levels of factors between a video-watching activity and a decision-making activity. However, once question difficulty and related factors were considered, we were not able to isolate any facial expression with a significant positive or negative relationship to either learning gain or performance. Given the overall low levels of facial affect in the corpus, further research will explore different populations and learning tasks to test the hypothesis that learners may have been in a pattern of “Over-Flow,” in which they were engaged with the system but not thinking deeply about the content or their errors.
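To make the analysis pipeline concrete, below is a minimal sketch (not the authors' implementation) of the kind of exploratory factor analysis the abstract describes, applied to per-learner mean AU activations. The input file au_means.csv, its column layout, the choice of four factors, and the use of scikit-learn's FactorAnalysis with varimax rotation are all illustrative assumptions, not details taken from the paper.

    # Sketch only: hypothetical AU table with one row per learner and
    # numeric columns (e.g., au01..au45) holding mean AU activations.
    import pandas as pd
    from sklearn.decomposition import FactorAnalysis

    aus = pd.read_csv("au_means.csv")  # hypothetical path and schema

    # Fit a 4-factor model with varimax rotation; the factor count is an
    # illustrative choice, not the number reported in the paper.
    fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
    scores = fa.fit_transform(aus.values)  # per-learner factor scores

    # Loadings show which AUs cluster into each latent factor; these
    # clusters are what get compared against AU patterns from prior work.
    loadings = pd.DataFrame(fa.components_.T, index=aus.columns,
                            columns=[f"factor_{i+1}" for i in range(4)])
    print(loadings.round(2))

Factor scores produced this way could then be correlated with self-reported traits (e.g., academic effort) or compared across activity types, mirroring the correlational analysis described above.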

Acknowledgments

The effort described here is sponsored by the U.S. Army Research Laboratory (ARL) under contract number W911NF-14-D-0005. Any opinion, content or information presented does not necessarily reflect the position or the policy of the United States Government, and no official endorsement should be inferred.

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Benjamin D. Nye¹
  • Shamya Karumbaiah²
  • S. Tugba Tokel³
  • Mark G. Core¹
  • Giota Stratou⁴
  • Daniel Auerbach¹
  • Kallirroi Georgila¹

  1. Institute for Creative Technologies, University of Southern California, Los Angeles, USA
  2. Penn Center for Learning Analytics, University of Pennsylvania, Philadelphia, USA
  3. Department of Computer Education and Instructional Technology, METU, Ankara, Turkey
  4. Keysight Technologies, Atlanta, USA
